| repo | instance_id | problem_statement | merge_commit | base_commit |
|---|---|---|---|---|
python/cpython | python__cpython-119787 | # create an internals documentation folder in the cpython repo
The internals documentation is scattered in markdown files in the codebase, as well as parts which are in the dev guide. We would like to have it in one place, versioned along with the code (unlike the dev guide).
<!-- gh-linked-prs -->
### Linked PRs
* gh-119787
* gh-119815
* gh-120137
* gh-120134
* gh-120445
* gh-121009
* gh-123198
* gh-124450
* gh-124989
* gh-125119
* gh-125282
* gh-125715
* gh-125874
* gh-125888
* gh-126832
* gh-127175
* gh-128329
* gh-128524
* gh-131203
* gh-131213
* gh-131382
* gh-135411
<!-- /gh-linked-prs -->
### Cosmetic PRs
* gh-120077
* gh-121601
* gh-124990
* gh-125455
* gh-125456
* gh-125815
* gh-127485
* gh-127957
* gh-128174
* gh-128275
* gh-128314 | e91fc11fafb657cab88c5e6f13822432a3b9dc64 | 6fb191be15fd49da10506de29b6393ffdf59b894 |
python/cpython | python__cpython-119781 | # Unexpected <class 'TypeError'> exception in test_format
# Bug report
### Bug description:
```
$ ./python -m test test_format
Using random seed: 982853282
0:00:00 load avg: 2.14 Run 1 test sequentially
0:00:00 load avg: 2.14 [1/1] test_format
Unexpected <class 'TypeError'> : '%c requires an integer in range(256) or a single byte, not a bytes object of length 2'
Unexpected <class 'TypeError'> : '%c requires an integer in range(256) or a single byte, not str'
Unexpected <class 'TypeError'> : '%c requires an integer in range(256) or a single byte, not float'
Unexpected <class 'TypeError'> : '%c requires an int or a unicode character, not float'
Unexpected <class 'TypeError'> : '%c requires an int or a unicode character, not a string of length 2'
Unexpected <class 'TypeError'> : '%c requires an int or a unicode character, not bytes'
== Tests result: SUCCESS ==
1 test OK.
Total duration: 171 ms
Total tests: run=18
Total test files: run=1/1
Result: SUCCESS
```
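For reference, a hedged sketch of the rules behind those messages: what `%c` accepts for bytes and str formatting, and which inputs raise `TypeError` (run against a current CPython; the exact messages vary by version):

```python
# Accepted inputs for %c:
assert b"%c" % 65 == b"A"       # bytes: int in range(256)
assert b"%c" % b"A" == b"A"     # bytes: a single byte
assert "%c" % 97 == "a"         # str: int taken as a code point
assert "%c" % "a" == "a"        # str: a single character

# Rejected inputs, matching the error cases listed above:
for fmt, bad in [(b"%c", 3.14), (b"%c", b"ab"), ("%c", 3.14), ("%c", "ab")]:
    try:
        fmt % bad
    except TypeError:
        pass                    # expected
    else:
        raise AssertionError("expected TypeError for %r %% %r" % (fmt, bad))
```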
I would guess this was broken by b313cc68d5.
1. Expected messages should be adjusted.
2. I think that tests must fail in this case, so ``test_exc()`` should be fixed.
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-119781
<!-- /gh-linked-prs -->
| b278c723d79a238b14e99908e83f4b1b6a39ed3d | 010aaa32fb93c5033a698d7213469af02d76fef3 |
python/cpython | python__cpython-119785 | # Fix link in Python 3 porting
# Documentation
https://docs.python.org/3.12/howto/pyporting.html
`"You can find the old guide in the [archive](https://docs.python.org/3.12/howto/pyporting.html)."`
Link points to the page itself, not to the actual archive.
<!-- gh-linked-prs -->
### Linked PRs
* gh-119785
* gh-119788
* gh-119789
<!-- /gh-linked-prs -->
| 6fb191be15fd49da10506de29b6393ffdf59b894 | e50fac96e82d857ecc024b4cd4e012493b077064 |
python/cpython | python__cpython-119776 | # Remove deprecated feature to create immutable types with mutable bases
# Feature or enhancement
Refs https://github.com/python/cpython/pull/95533
Refs https://github.com/python/cpython/issues/95388
Now instead of a deprecation warning, we need to raise a `TypeError`.
I will send a PR.
<!-- gh-linked-prs -->
### Linked PRs
* gh-119776
<!-- /gh-linked-prs -->
| 4aed319a8eb63b205d6007c36713cacdbf1ce8a3 | fd6cd621e0cce6ba2e737103d2a62b5ade90f41f |
python/cpython | python__cpython-120256 | # _Py_c_pow() should adjust errno on error conditions
# Bug report
### Bug description:
Right now we do this after invocation of the function or its optimized alternative (for small integer exponents). That has an advantage, as - IIUC - both algorithms may trigger error conditions. On the other hand, the behaviour of the public C API function ``_Py_c_pow()`` (used in the CPython codebase only for ``complex_pow()``) will differ from the pure-Python pow()...
Other similar functions (``complex_div()`` and ``complex_abs()``) leave setting ``errno`` to the corresponding C-API function.
My proposal: move the ``_Py_ADJUST_ERANGE2()`` call into ``_Py_c_pow()`` and ``c_powi()``. If that makes sense, I'll provide a patch.
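At the Python level, the ``errno`` adjustment is what turns a range error into ``OverflowError``, regardless of where the adjustment lives; a quick check:

```python
# Overflow in complex exponentiation is reported via errno (ERANGE) and
# surfaces as OverflowError; a small integer exponent exercises the
# optimized c_powi() path mentioned above.
overflowed = False
try:
    (1e200 + 0j) ** 2
except OverflowError:
    overflowed = True
assert overflowed
```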
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-120256
<!-- /gh-linked-prs -->
| 8a284e189673582e262744618f293f9901a32e49 | 81480e6edb34774d783d018d1f0e61ab5c3f0a9a |
python/cpython | python__cpython-119745 | # Move more functions from compile.c to flowgraph.c
Following #117494, there is now less coupling between compile.c and flowgraph.c, and there are some functions that can move to, and belong better in, flowgraph.c. These include:
- stack effect functions
- _PyCompile_OptimizeCfg
- _PyCfg_FromInstructionSequence
<!-- gh-linked-prs -->
### Linked PRs
* gh-119745
<!-- /gh-linked-prs -->
| 13a5fdc72f701c053b96abea48cd8f2775e9418e | 9732ed5ca94cd8fe9ca2fc7ba5a42dfa2b7791ea |
python/cpython | python__cpython-119743 | # Remove deprecated delegation of the int built-in to __trunc__.
Python 3.11 deprecated delegation of the `int()` built-in to `__trunc__` (see issue #89140). I haven't seen any evidence of problems arising from that deprecation, and I think it's now safe to remove that delegation for Python 3.14.
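A sketch of the deprecation as observable from Python (version-dependent: a `DeprecationWarning` on 3.11-3.13, a `TypeError` once the delegation is removed):

```python
import sys
import warnings

class Truncatable:
    # defines __trunc__ but neither __int__ nor __index__
    def __trunc__(self):
        return 3

try:
    with warnings.catch_warnings():
        # turn the DeprecationWarning into an exception so it is observable
        warnings.simplefilter("error", DeprecationWarning)
        int(Truncatable())
except (DeprecationWarning, TypeError):
    outcome = "rejected"
else:
    outcome = "accepted"      # pre-3.11 behavior: silent delegation

if sys.version_info >= (3, 11):
    assert outcome == "rejected"
```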
<!-- gh-linked-prs -->
### Linked PRs
* gh-119743
<!-- /gh-linked-prs -->
| f79ffc879b919604ed5de22ece83825006cf9a17 | 4aed319a8eb63b205d6007c36713cacdbf1ce8a3 |
python/cpython | python__cpython-119738 | # `pkg-config` files (`.pc`) conflict between default and free-threaded build
# Bug report
If you install the default and free-threaded build to the same prefix, some files conflict and will get overwritten by the most recent install:
```
modified: bin/idle3.13
modified: bin/pydoc3.13
modified: bin/python3.13
typechange: bin/python3.13-config
modified: lib/pkgconfig/python-3.13-embed.pc
modified: lib/pkgconfig/python-3.13.pc
```
The most problematic of these are the `pkg-config` files:
```diff
diff --git a/lib/pkgconfig/python-3.13-embed.pc b/lib/pkgconfig/python-3.13-embed.pc
index 9bc6ab5..5fc2548 100644
--- a/lib/pkgconfig/python-3.13-embed.pc
+++ b/lib/pkgconfig/python-3.13-embed.pc
@@ -9,5 +9,5 @@ Description: Embed Python into an application
Requires:
Version: 3.13
Libs.private: -ldl -framework CoreFoundation
-Libs: -L${libdir} -lpython3.13
-Cflags: -I${includedir}/python3.13
+Libs: -L${libdir} -lpython3.13t
+Cflags: -I${includedir}/python3.13t
diff --git a/lib/pkgconfig/python-3.13.pc b/lib/pkgconfig/python-3.13.pc
index c206220..519f20f 100644
--- a/lib/pkgconfig/python-3.13.pc
+++ b/lib/pkgconfig/python-3.13.pc
@@ -10,4 +10,4 @@ Requires:
Version: 3.13
Libs.private: -ldl -framework CoreFoundation
Libs: -L${libdir}
-Cflags: -I${includedir}/python3.13
+Cflags: -I${includedir}/python3.13t
```
Maybe the pkg-config files for the free-threaded build should be called `python-3.13t.pc` and `python-3.13t-embed.pc`?
<!-- gh-linked-prs -->
### Linked PRs
* gh-119738
* gh-119797
<!-- /gh-linked-prs -->
| 1c04c63ced5038e8f45a2aac7dc45f0815a4ddc5 | bf098d4157158e1e4b2ea78aba4ac82d72e24cff |
python/cpython | python__cpython-119728 | # Add `--single-process` option to regrtest to always run tests sequentially (ignore `-jN` option)
The Python release process runs tests sequentially. It catches some issues which are silently ignored when running tests in parallel.
I propose adding a `--sequentially` option to regrtest to always run tests sequentially. It can be used to ignore any `-jN` option and also to re-run failed tests sequentially.
<!-- gh-linked-prs -->
### Linked PRs
* gh-119728
* gh-120456
* gh-123010
* gh-130359
<!-- /gh-linked-prs -->
| 4e8aa32245e2d72bf558b711ccdbcee594347615 | 1d4c2e4a877a48cdc8bcc9808d799b91c82b3757 |
python/cpython | python__cpython-119974 | # Syntax errors in `else` blocks are reported at `else`
Since #29513, syntax errors in `else` and `elif` blocks, like:
```python
if 1:
pass
else:
This is invalid syntax (sic)
```
are reported at the `else`:
```
File "/tmp/repro.py", line 3
else:
^^^^^^
SyntaxError: 'else' must match a valid statement here
```
This is quite unhelpful when a small typo is hiding in a large block.
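The report can be reproduced without a file via `compile()`; only the fact that a `SyntaxError` is raised is asserted here, since the reported location is exactly what this issue is about:

```python
src = (
    "if 1:\n"
    "    pass\n"
    "else:\n"
    "    This is invalid syntax (sic)\n"
)
caught = None
try:
    compile(src, "<repro>", "exec")
except SyntaxError as exc:
    caught = exc
assert caught is not None
# caught.lineno points at the `else` (line 3) on affected versions
```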
@lysnikolaou, could you take a look?
<!-- gh-linked-prs -->
### Linked PRs
* gh-119974
* gh-120013
<!-- /gh-linked-prs -->
| 31a4fb3c74a0284436343858803b54471e2dc9c7 | 105f22ea46ac16866e6df18ebae2a8ba422b7f45 |
python/cpython | python__cpython-119722 | # Integrate documentation fixes into `__about__` docstring of heapq module.
676d7aa905864157de630e5360291ccf7e6e997a and d2a296a73a3a49d15fd3d1505c10e98ab8ad1a63 made some corrections to `Doc/library/heapq.rst` that never got reflected in the `__about__` docstring in `Lib/heapq.py`.
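`__about__` is a plain module attribute alongside the module docstring, so the two texts can drift apart independently, which is what happened here:

```python
import heapq

# The short module docstring and the long __about__ essay are separate.
assert isinstance(heapq.__about__, str)
assert heapq.__about__ != heapq.__doc__
assert "heap" in heapq.__about__.lower()
```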
<!-- gh-linked-prs -->
### Linked PRs
* gh-119722
<!-- /gh-linked-prs -->
| 659cb7e6b8e83e1541fc27fd29d4846e940b600e | 78d697b7d5ec2a6fa046b0e1c34e804f49e750b4 |
python/cpython | python__cpython-136913 | # WinError 10022 for create_datagram_endpoint with local_addr=None.
# Bug report
### Bug description:
A problem occurs with the Windows proactor event loop when creating a datagram endpoint with `local_addr=None`. The problem does not occur with the selector event loop (either on Windows or Linux).
```python
import socket
import asyncio
class MyDatagramProto(asyncio.DatagramProtocol):
def error_received(self, exc):
raise exc
asyncio.set_event_loop_policy(asyncio.WindowsSelectorEventLoopPolicy())
loop = asyncio.new_event_loop()
coro = loop.create_datagram_endpoint(MyDatagramProto, local_addr=None, family=socket.AF_INET)
loop.run_until_complete(coro)
print('No problem with selector loop.')
print()
asyncio.set_event_loop_policy(asyncio.WindowsProactorEventLoopPolicy())
loop = asyncio.new_event_loop()
coro = loop.create_datagram_endpoint(MyDatagramProto, local_addr=None, family=socket.AF_INET)
loop.run_until_complete(coro)
print()
print('We got error 10022 with proactor loop.')
```
gives as output
```
No problem with selector loop.
Exception in callback _ProactorDatagramTransport._loop_reading()
handle: <Handle _ProactorDatagramTransport._loop_reading()>
Traceback (most recent call last):
File "C:\Users\Berry\AppData\Local\Programs\Python\Python313\Lib\asyncio\events.py", line 89, in _run
self._context.run(self._callback, *self._args)
~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Berry\AppData\Local\Programs\Python\Python313\Lib\asyncio\proactor_events.py", line 577, in _loop_reading
self._protocol.error_received(exc)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^
File "C:\Users\Berry\Desktop\scratch\bugasynciolocaddrNone.py", line 7, in error_received
raise exc
File "C:\Users\Berry\AppData\Local\Programs\Python\Python313\Lib\asyncio\proactor_events.py", line 574, in _loop_reading
self._read_fut = self._loop._proactor.recvfrom(self._sock,
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^
self.max_size)
^^^^^^^^^^^^^^
File "C:\Users\Berry\AppData\Local\Programs\Python\Python313\Lib\asyncio\windows_events.py", line 513, in recvfrom
ov.WSARecvFrom(conn.fileno(), nbytes, flags)
~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
OSError: [WinError 10022] An invalid argument was supplied
We got error 10022 with proactor loop.
```
The problem happens because the socket `conn` is not bound when `WSARecvFrom` is called. The `bind()` call is skipped because the local address was `None`, see:
https://github.com/python/cpython/blob/86d1a1aa8841ea182eaf434ae6b942b3e93f58db/Lib/asyncio/base_events.py#L1440-L1441
The code above is a stripped down version of the following uvloop test case:
https://github.com/MagicStack/uvloop/blob/6c770dc3fbdd281d15c2ad46588c139696f9269c/tests/test_udp.py#L141-L160
This test passes on Linux both with the uvloop and with the asyncio (selector) loop.
### CPython versions tested on:
3.8, 3.9, 3.10, 3.11, 3.12, 3.13
### Operating systems tested on:
Windows
<!-- gh-linked-prs -->
### Linked PRs
* gh-136913
* gh-137163
* gh-137164
<!-- /gh-linked-prs -->
| 1481384141342479b3ba4b89f653b4e5bef0d272 | 45138d35843297395b2d646f5391be108243957a |
python/cpython | python__cpython-119705 | # ``test_pydoc`` leaks references
# Bug report
### Bug description:
```python
./python.exe -m test -R 3:3 test_pydoc
Using random seed: 928599674
0:00:00 load avg: 6.51 Run 1 test sequentially
0:00:00 load avg: 6.51 [1/1] test_pydoc.test_pydoc
beginning 6 repetitions. Showing number of leaks (. for 0 or less, X for 10 or more)
123:456
XXX XXX
test_pydoc.test_pydoc leaked [609, 609, 609] references, sum=1827
test_pydoc.test_pydoc leaked [607, 607, 607] memory blocks, sum=1821
test_pydoc.test_pydoc failed (reference leak)
== Tests result: FAILURE ==
1 test failed:
test_pydoc.test_pydoc
Total duration: 15.7 sec
Total tests: run=108 skipped=3
Total test files: run=1/1 failed=1
Result: FAILURE
```
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
macOS
<!-- gh-linked-prs -->
### Linked PRs
* gh-119705
* gh-119707
* gh-119708
<!-- /gh-linked-prs -->
| c0faade891e6ccb61137041fe10cc05e5fa8d534 | a8e35e8ebad8c3bb44d14968aa05d1acbc028247 |
python/cpython | python__cpython-119717 | # winapi audit events returning garbage
The CreateFile and CreateNamedPipe audit events from the winapi module appear to be returning garbage instead of the names. There's potential for buffer overreads and/or information leakage.
<!-- gh-linked-prs -->
### Linked PRs
* gh-119717
* gh-119732
* gh-119733
* gh-119734
* gh-119735
* gh-123679
* gh-123680
<!-- /gh-linked-prs -->
| 78d697b7d5ec2a6fa046b0e1c34e804f49e750b4 | 34f9b3e7244615d2372614b20e10250e68cc8e61 |
python/cpython | python__cpython-119691 | # Generate stack effect for pseudo instructions
The opcode metadata has stack effects for real instructions, and then in code we add the pseudo instructions by hand. It would be tidier to generate them for the pseudo instructions as well.
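For context, the generated metadata for real instructions is exposed via `dis.stack_effect()`:

```python
import dis

# POP_TOP always pops exactly one value; NOP leaves the stack unchanged.
assert dis.stack_effect(dis.opmap["POP_TOP"]) == -1
assert dis.stack_effect(dis.opmap["NOP"]) == 0
```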
<!-- gh-linked-prs -->
### Linked PRs
* gh-119691
<!-- /gh-linked-prs -->
| c1e9647107c854439a9864b6ec4f6784aeb94ed5 | 7ca74a760a5d3cdf48159f003d4db7c2778e9261 |
python/cpython | python__cpython-119790 | # Windows installer missing free-threading library
# Bug report
### Bug description:
Tested with Python 3.13b1 release.
I've been trying to help someone test a simple extension module with the free-threading interpreter on Windows, which ultimately failed because it was missing `python313t.lib` so was unable to link.


Installed libraries:

### CPython versions tested on:
3.13
### Operating systems tested on:
Windows
<!-- gh-linked-prs -->
### Linked PRs
* gh-119790
* gh-119983
* gh-120133
* gh-120160
<!-- /gh-linked-prs -->
| fd01271366abefa8f991e53f090387882fbd6bdd | 4765e1fa292007f8ddc59f33454b747312506a7a |
python/cpython | python__cpython-119677 | # LOAD_SUPER_METHOD etc don't need to be pseudo instructions
`LOAD_SUPER_METHOD`, `LOAD_ZERO_SUPER_METHOD`, `LOAD_ZERO_SUPER_ATTR`, `LOAD_METHOD` don't need to be pseudo-instructions. They are temporaries used solely within codegen.
<!-- gh-linked-prs -->
### Linked PRs
* gh-119677
<!-- /gh-linked-prs -->
| ae9140f32a1630838374f1af402291d4649a0be0 | 6b240c2308a044e38623900ccb8fa58c3549d4ae |
python/cpython | python__cpython-120295 | # Uninitialized value usage of localspluskinds in assemble.c's makecode function
# Bug report
### Bug description:
## Recreator
```bash
./python -c "class i:[super for()in d]*[__class__*4for()in d]"
<string>:1: SyntaxWarning: invalid decimal literal
[1] 23793 segmentation fault ./python -c "class i:[super for()in d]*[__class__*4for()in d]"
```
## Details
This issue was found through the oss-fuzz compilation fuzzer. Here is the MSAN stack trace:
```
==691==WARNING: MemorySanitizer: use-of-uninitialized-value
#0 0x5661f67ca290 in get_localsplus_counts cpython3/Objects/codeobject.c:344:13
#1 0x5661f67c95a7 in _PyCode_Validate cpython3/Objects/codeobject.c:433:5
#2 0x5661f6a17be2 in makecode cpython3/Python/assemble.c:614:8
#3 0x5661f6a17be2 in _PyAssemble_MakeCodeObject cpython3/Python/assemble.c:754:14
#4 0x5661f612aa99 in optimize_and_assemble_code_unit cpython3/Python/compile.c:7655:10
...
Uninitialized value was created by a heap allocation
#0 0x5661f5b307b2 in __interceptor_malloc /src/llvm-project/compiler-rt/lib/msan/msan_interceptors.cpp:1007:3
#1 0x5661f675e32c in _PyBytes_FromSize cpython3/Objects/bytesobject.c:96:31
#2 0x5661f675e00a in PyBytes_FromStringAndSize cpython3/Objects/bytesobject.c:129:27
#3 0x5661f6a15d32 in makecode cpython3/Python/assemble.c:580:23
#4 0x5661f6a15d32 in _PyAssemble_MakeCodeObject cpython3/Python/assemble.c:754:14
...
```
---
I haven't done any debugging yet but my hunch is that this code is hitting a path in `compute_localsplus_info` https://github.com/python/cpython/blob/f912e5a2f6d128b17f85229b722422e4a2478e23/Python/assemble.c#L475
that ends up not setting the `localspluskinds` made here https://github.com/python/cpython/blob/f912e5a2f6d128b17f85229b722422e4a2478e23/Python/assemble.c#L580-L587
and when this eventually gets to `_PyCode_Validate` it causes it to read uninitialized memory.
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-120295
* gh-120299
* gh-120300
<!-- /gh-linked-prs -->
| 0ae8579b85f9b0cd3f287082ad6e194bdb025d88 | 34f5ae69fe9ab0f5b23311d5c396d0cbb5902913 |
python/cpython | python__cpython-119712 | # Build failure due to missing _Py_SINGLETON in static-extension-modules build (3.13 only)
# Bug report
### Bug description:
While attempting to upgrade [python-build-standalone](https://github.com/indygreg/python-build-standalone/pull/264) to support Python 3.13 beta releases, we've run into a build failure which may be caused by Argument Clinic failing to ensure that all necessary `#include` directives are emitted into the source files it generates.
The failure can be seen [here](https://github.com/kpfleming/python-build-standalone/actions/runs/9149133735/job/25152446034#step:7:36777). Granted this is a very unusual way to build CPython, but given some of the discussions at the recent Packaging Summit it may become more valuable soon :-)
It was suggested that @vstinner might have the knowledge needed to understand what is happening here, so apologies in advance for the direct ping :-)
I'm happy to include patches into that build tree to see if we can overcome the problem before the next beta release, and I'm also happy to make that branch writable by others who want to try to help troubleshoot.
### CPython versions tested on:
3.13
### Operating systems tested on:
Linux, macOS, Windows
<!-- gh-linked-prs -->
### Linked PRs
* gh-119712
* gh-119716
<!-- /gh-linked-prs -->
| 7ca74a760a5d3cdf48159f003d4db7c2778e9261 | cd11ff12ac55f37d38b5ef08c143c78f07da5717 |
python/cpython | python__cpython-119660 | # Refactor: move `no_rerun` from `test_import` and `datetimetester` to `test.support`
# Feature or enhancement
It is defined here: https://github.com/python/cpython/blob/b407ad38fb93585332c370b8fa56905fb238cdfd/Lib/test/test_import/__init__.py#L123-L139
and here: https://github.com/python/cpython/blob/b407ad38fb93585332c370b8fa56905fb238cdfd/Lib/test/datetimetester.py#L50-L68
Introduced in https://github.com/python/cpython/pull/119373
<!-- gh-linked-prs -->
### Linked PRs
* gh-119660
* gh-119675
* gh-120180
<!-- /gh-linked-prs -->
| 2da0dc094fa855ed4df251aab58b6f8a2b6969a1 | f912e5a2f6d128b17f85229b722422e4a2478e23 |
python/cpython | python__cpython-119713 | # test_datetime leaks references
Example:
```
$ ./python -m test -R 3:3 test_datetime -m test.datetimetester.TestDateTime_Pure.test_compat_unpickle
(...)
test_datetime leaked [8, 8, 8] references, sum=24
test_datetime leaked [8, 8, 8] memory blocks, sum=24
```
Regression: commit 3e8b60905e97a4fe89bb24180063732214368938
```
commit 3e8b60905e97a4fe89bb24180063732214368938
Author: Erlend E. Aasland <erlend@python.org>
Date: Tue May 28 00:02:46 2024 +0200
gh-117398: Add multiphase support to _datetime (gh-119373)
This is minimal support. Subinterpreters are not supported yet. That will be addressed in a later change.
Co-authored-by: Eric Snow <ericsnowcurrently@gmail.com>
Lib/test/datetimetester.py | 21 +++++++++++++++++++++
Modules/_datetimemodule.c | 26 +++++++++++---------------
2 files changed, 32 insertions(+), 15 deletions(-)
```
cc @erlend-aasland @ericsnowcurrently
<!-- gh-linked-prs -->
### Linked PRs
* gh-119713
<!-- /gh-linked-prs -->
| 34f9b3e7244615d2372614b20e10250e68cc8e61 | 1f481fd3275dbc12a88c16129621de19ea20e4ca |
python/cpython | python__cpython-120909 | # Python IDLE stops outputting a string on encountering a null character both for STDOUT and STDERR
# Bug report
### Bug description:
I noticed that if you pass a string containing a null character (`\0`) as an argument to the built-in `print` function, IDLE only outputs that string up to the null character, then jumps to outputting the next argument as if that string had actually ended. This is not consistent with the behavior when using the Python interpreter directly in a terminal, where the null character is usually printed as a space-looking character and does not terminate the string.

If further arguments to the same `print` call also contain null characters, the same thing happens (it skips to the next argument immediately on encountering a `\0`). If the `sep` string contains a null character, it also only prints up to the null character before starting to print the next positional argument, as if the `sep` string had ended there. The same thing happens with the `end` string, and when calling `sys.stdout.write` directly. Writing to STDERR has the same effect of output getting cut off at the null character. Basically anything that results in text being output to IDLE triggers this bug. Maybe tk's Text widget treats null characters as string terminators because tk is written in C?

By the way, the bug behaves the same both in interactive mode and when running a file. Below is an example; run it in IDLE and you should get the same erroneous output:
```python3
>>> print("hello\0world!") # expected output: hello world!
hello
>>> print("wait", "what?", sep="\0, ") # expected output: wait , what?
waitwhat?
>>> print("fo\0o", "\0bar", "ba\0z", "spa\0m", "ha\0m", "e\0ggs", end=" xyz\0zy\n") # expected: fo o bar ba z spa m ha m e ggs xyz zy
fo ba spa ha e xyz
>>> import sys
>>> _ = sys.stdout.write("same\0here\n") # expected output: same here
same
>>> _ = sys.stderr.write("in errors\0too\n") # expected output: in errors too
in errors
>>> raise RuntimeError("something ba\0d happened.")
Traceback (most recent call last):
File "<pyshell#5>", line 1, in <module>
raise RuntimeError("something ba\0d happened.")
RuntimeError: something ba
>>>
```
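Outside IDLE, the NUL is an ordinary character; a quick check against a plain text buffer shows the strings themselves are intact, so the truncation happens in the console, not in Python:

```python
import io

s = "hello\0world!"
assert len(s) == 12              # the NUL does not terminate the string
buf = io.StringIO()
assert buf.write(s) == 12        # all 12 characters are written
assert buf.getvalue().split("\0") == ["hello", "world!"]
```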
Version Info:
I have tested this on the 64 bit "Windows Installer" versions of Python `3.9.7` and `3.12.3`. My OS is `Windows 10 Version 22H2, build 19045.4412`. The `sys.version` value on the 3.9.7 version is `3.9.7 (tags/v3.9.7:1016ef3, Aug 30 2021, 20:19:38) [MSC v.1929 64 bit (AMD64)]`, and the 3.12.3 version is `3.12.3 (tags/v3.12.3:f6650f9, Apr 9 2024, 14:05:25) [MSC v.1938 64 bit (AMD64)]`. Hope this helps.
### CPython versions tested on:
3.9, 3.12
### Operating systems tested on:
Windows
<!-- gh-linked-prs -->
### Linked PRs
* gh-120909
* gh-120938
* gh-120939
<!-- /gh-linked-prs -->
| c38e2f64d012929168dfef7363c9e48bd1a6c731 | fc297b4ba4c61febeb2d8f5d718f2955c6bbea0a |
python/cpython | python__cpython-119619 | # Deprecate Py_IS_NAN/INFINITY/FINITE?
# Feature or enhancement
### Proposal:
isnan(), isinf() and isfinite() are part of C99, which is a requirement for 3.11+. It probably makes sense to deprecate the (undocumented) public macros and switch the codebase to use the C stdlib functions.
JFR: https://github.com/python/cpython/pull/119457#discussion_r1615985599
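The C99 classification functions in question behave like their `math`-module counterparts as exposed in Python:

```python
import math

# isnan()/isinf()/isfinite() semantics:
assert math.isnan(float("nan")) and not math.isnan(1.0)
assert math.isinf(float("inf")) and not math.isinf(1.0)
assert math.isfinite(0.0) and not math.isfinite(float("inf"))
assert not math.isfinite(float("nan"))
```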
### Has this already been discussed elsewhere?
This is a minor feature, which does not need previous discussion elsewhere
### Links to previous discussion of this feature:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-119619
* gh-119701
* gh-120020
<!-- /gh-linked-prs -->
| cd11ff12ac55f37d38b5ef08c143c78f07da5717 | 86d1a1aa8841ea182eaf434ae6b942b3e93f58db |
python/cpython | python__cpython-132055 | # Failed to inspect `__new__` and `__init_subclass__` methods generated by `warnings.deprecated`
# Bug report
### Bug description:
[PEP 702 – Marking deprecations using the type system](https://peps.python.org/pep-0702) introduces a new API [`warnings.deprecated`](https://docs.python.org/3.13/library/warnings.html#warnings.deprecated) for deprecation.
While decorating a class object, it will update the `__new__` method:
https://github.com/python/cpython/blob/2268289a47c6e3c9a220b53697f9480ec390466f/Lib/warnings.py#L589-L603
and the `__init_subclass__` method:
https://github.com/python/cpython/blob/2268289a47c6e3c9a220b53697f9480ec390466f/Lib/warnings.py#L605-L625
For a class (`cls`) that does not implement the `__new__` and `__init_subclass__` methods, the [`warnings.deprecated`](https://docs.python.org/3.13/library/warnings.html#warnings.deprecated) decorator will generate these two methods (based on `object.__new__` and `type.__init_subclass__`) and assign them to `cls.__dict__`.
However, if users want to inspect the methods in `cls.__dict__`, the `inspect` module fails to get the correct definition of `cls.__new__`. It is defined in `warnings.py` rather than `object.__new__`.
```python
# test.py
from warnings import deprecated # or from typing_extensions import deprecated
class Foo:
def __new__(cls):
return super().__new__(cls)
@deprecated("test")
class Bar:
pass
```
```python
In [1]: import inspect
In [2]: from test import Foo, Bar
In [3]: inspect.getsourcelines(Foo.__new__)
Out[3]: ([' def __new__(cls):\n', ' return super().__new__(cls)\n'], 7)
In [4]: inspect.getsourcelines(Bar.__new__)
TypeError: module, class, method, function, traceback, frame, or code object was expected, got builtin_function_or_method
```
Expected to have `inspect.getsourcelines(Bar.__new__)` to be source code in `/.../lib/python3.13/warnings.py`.
### CPython versions tested on:
3.13
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-132055
* gh-133277
<!-- /gh-linked-prs -->
| b8633f9aca9b198e5592106b649389d638cbc620 | c14134020f44575635e11e4552cefcfd8cdbe22f |
python/cpython | python__cpython-119601 | # mock.patch introspects original even when new_callable set
# Bug report
### Bug description:
In order to patch `flask.g`, e.g. as in #84982, that proxy's getattr must not be invoked. For that, mock must not try to read from the original object. In some cases that is unavoidable, e.g. when using `autospec`. However, `patch("flask.g", new_callable=MagicMock)` should be entirely safe.
### CPython versions tested on:
3.11, 3.12
### Operating systems tested on:
Linux, macOS, Windows, Other
<!-- gh-linked-prs -->
### Linked PRs
* gh-119601
* gh-120334
* gh-120335
<!-- /gh-linked-prs -->
| 422c4fc855afd18bcc6415902ea1d85a50cb7ce1 | 6efe3460693c4f39de198a64cebeeee8b1d4e8b6 |
python/cpython | python__cpython-119593 | # Improve pow(fraction.Fraction(), b, modulo) error message
# Bug report
### Bug description:
`Fraction.__pow__` does not accept the third argument which `pow` tries to pass to it:
```
>>> pow(Fraction(1), 1, 1)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: Fraction.__pow__() takes 2 positional arguments but 3 were given
```
Other number types don't do that, they support it when they can and print an error message if they don't want to, e.g.
```
>>> pow(Decimal("1"), 1, 1)
Decimal('0')
>>> pow(Decimal('1'), 1.1, 1)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: pow() 3rd argument not allowed unless all arguments are integers
```
I'm not sure what fraction should do, maybe it should just continue not to support it? Or just accept the argument, and then print a better error message if modulo is not None?
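For comparison, the current behavior (the exact `TypeError` message varies by version, so only the exception type is checked here):

```python
from fractions import Fraction

# Three-argument pow() is rejected for Fraction...
rejected = False
try:
    pow(Fraction(1), 1, 1)
except TypeError:
    rejected = True
assert rejected

# ...while the two-argument form works as expected.
assert pow(Fraction(1, 2), 2) == Fraction(1, 4)
```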
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-119593
<!-- /gh-linked-prs -->
| fcca08ec2f48f4ba5ba1d4690fb39b1efe630944 | bf4ff3ad2e362801e87c85fffd9e140b774cef26 |
python/cpython | python__cpython-119591 | # Implement zipfile.Path.is_symlink
In jaraco/zipp#117, I learned that the current implementation of `is_symlink` might have a security risk if a user is relying on it to ensure that a zipfile has no symlinks before using another tool to extract it.
zipp 3.19.0 adds an implementation for `Path.is_symlink` to alleviate this risk.
CPython should adopt this change as well, possibly as a security fix.
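A minimal sketch of the adopted behavior, guarded because `zipfile.Path.is_symlink` only exists on versions that picked up the zipp change:

```python
import io
import zipfile

# Build a zip with one regular member entirely in memory.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("a.txt", "hello")

p = zipfile.Path(buf, "a.txt")
if hasattr(p, "is_symlink"):
    # a regular member must not report itself as a symlink
    assert not p.is_symlink()
```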
<!-- gh-linked-prs -->
### Linked PRs
* gh-119591
* gh-119985
* gh-119988
* gh-120043
* gh-120046
<!-- /gh-linked-prs -->
| 42a34ddb0b63e638905b01e17a7254623a0de427 | fd01271366abefa8f991e53f090387882fbd6bdd |
python/cpython | python__cpython-119753 | # Crash in freethreading when acquiring/releasing the GIL in a finalizer
# Bug report
### Bug description:
bug.cpp
```c++
#include <thread>
#include <cstdio>
#include <latch>
#include <Python.h>
PyObject* callInDestructor(PyObject *self, PyObject *args) {
auto state = PyGILState_Ensure();
printf("In destructor\n");
PyGILState_Release(state);
Py_RETURN_NONE;
}
PyObject *doStuff(PyObject *self, PyObject *cls) {
PyObject *o;
Py_BEGIN_ALLOW_THREADS // I'm not really sure why this is needed...
{
std::latch l1(1);
std::latch l2(1);
std::latch l3(1);
auto thread1 = std::jthread([&](){
l1.wait();
auto state = PyGILState_Ensure();
o = PyObject_CallNoArgs(cls);
l2.count_down();
// printf("0\n");
l3.wait();
PyGILState_Release(state);
printf("thread1 end\n");
});
auto thread2 = std::jthread([&](){
l1.count_down();
auto state = PyGILState_Ensure();
l2.wait();
Py_XDECREF(o);
l3.count_down();
PyGILState_Release(state);
printf("thread2 end\n");
});
}
Py_END_ALLOW_THREADS
Py_RETURN_NONE;
}
static PyMethodDef methods[] = {
{"doStuff", doStuff, METH_O, "Demonstrate error."},
{"callInDestructor", callInDestructor, METH_NOARGS, "destruct"},
{NULL, NULL, 0, NULL} /* Sentinel */
};
static struct PyModuleDef moduleDef = {
PyModuleDef_HEAD_INIT,
"bug", /* name of module */
NULL,
-1, /* size of per-interpreter state of the module,
or -1 if the module keeps state in global variables. */
methods
};
PyMODINIT_FUNC
PyInit_bug(void)
{
PyObject *m;
m = PyModule_Create(&moduleDef);
if (m == NULL)
return NULL;
return m;
}
```
setupbug.py
```python
from setuptools import Extension, setup
setup(
ext_modules=[
Extension(
name="bug",
sources=["bug.cpp"],
language="C++",
extra_compile_args=['-std=c++20'],
extra_link_args=['-lstdc++'],
),
]
)
```
testbug.py
```python
import bug
class C:
def __del__(self):
bug.callInDestructor()
bug.doStuff(C)
```
Build with `python setupbug.py build_ext --inplace`
Run with `python -Xgil=0 testbug.py`
I'm a little unclear on exactly what's going wrong here, but essentially it's destroying the C object from within `merge_queued_objects(&brc->local_objects_to_merge);`, `callInDestructor()` calls `PyGILState_Ensure` and `PyGILState_Release` (which is unnecessary because it already holds the GIL, but should be fine anyway), and it's around here that it crashes with a segmentation fault.
If I remove the GIL handling from `callInDestructor` then it doesn't crash for me.
This is a cut-down version of something I've seen in Cython https://github.com/cython/cython/pull/6214#issuecomment-2132292912 with a very similar crash but happening in a slightly different way.
### CPython versions tested on:
3.13
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-119753
* gh-119859
* gh-119861
<!-- /gh-linked-prs -->
| bcc1be39cb1d04ad9fc0bd1b9193d3972835a57c | 891c1e36f4e08da107443772a4eb50c72a83836d |
python/cpython | python__cpython-119623 | # ``test_import`` crashes with a ``--forever`` option
# Crash report
### Bug description:
```python
./python.exe -m test -v test_import -m test_check_state_first --forever
== CPython 3.14.0a0 (heads/main:5d04cc50e5, May 26 2024, 21:33:31) [Clang 15.0.0 (clang-1500.3.9.4)]
== macOS-14.5-arm64-arm-64bit-Mach-O little-endian
== Python build: debug
== cwd: /Users/admin/Projects/cpython/build/test_python_worker_94626
== CPU count: 8
== encodings: locale=UTF-8 FS=utf-8
== resources: all test resources are disabled, use -u option to unskip tests
Using random seed: 4127756675
0:00:00 load avg: 4.71 Run tests sequentially
0:00:00 load avg: 4.71 [ 1] test_import
test_check_state_first (test.test_import.SinglephaseInitTests.test_check_state_first) ... ok
----------------------------------------------------------------------
Ran 1 test in 0.001s
OK
0:00:00 load avg: 4.71 [ 2] test_import
test_check_state_first (test.test_import.SinglephaseInitTests.test_check_state_first) ... Assertion failed: (_testsinglephase_with_reinit_check_cache_first.m_base.m_index == 0), function PyInit__testsinglephase_with_reinit_check_cache_first, file _testsinglephase.c, line 714.
Fatal Python error: Aborted
Current thread 0x00000001f61d0c00 (most recent call first):
File "<frozen importlib._bootstrap>", line 488 in _call_with_frames_removed
File "<frozen importlib._bootstrap_external>", line 1316 in create_module
File "<frozen importlib._bootstrap>", line 813 in module_from_spec
File "<frozen importlib._bootstrap>", line 921 in _load_unlocked
File "<frozen importlib._bootstrap>", line 966 in _load
File "/Users/admin/Projects/cpython/Lib/test/test_import/__init__.py", line 2495 in _load_dynamic
File "/Users/admin/Projects/cpython/Lib/test/test_import/__init__.py", line 2894 in test_check_state_first
File "/Users/admin/Projects/cpython/Lib/unittest/case.py", line 606 in _callTestMethod
File "/Users/admin/Projects/cpython/Lib/unittest/case.py", line 651 in run
File "/Users/admin/Projects/cpython/Lib/unittest/case.py", line 707 in __call__
File "/Users/admin/Projects/cpython/Lib/unittest/suite.py", line 122 in run
File "/Users/admin/Projects/cpython/Lib/unittest/suite.py", line 84 in __call__
File "/Users/admin/Projects/cpython/Lib/unittest/suite.py", line 122 in run
File "/Users/admin/Projects/cpython/Lib/unittest/suite.py", line 84 in __call__
File "/Users/admin/Projects/cpython/Lib/unittest/runner.py", line 240 in run
File "/Users/admin/Projects/cpython/Lib/test/libregrtest/single.py", line 57 in _run_suite
File "/Users/admin/Projects/cpython/Lib/test/libregrtest/single.py", line 37 in run_unittest
File "/Users/admin/Projects/cpython/Lib/test/libregrtest/single.py", line 135 in test_func
File "/Users/admin/Projects/cpython/Lib/test/libregrtest/single.py", line 91 in regrtest_runner
File "/Users/admin/Projects/cpython/Lib/test/libregrtest/single.py", line 138 in _load_run_test
File "/Users/admin/Projects/cpython/Lib/test/libregrtest/single.py", line 181 in _runtest_env_changed_exc
File "/Users/admin/Projects/cpython/Lib/test/libregrtest/single.py", line 281 in _runtest
File "/Users/admin/Projects/cpython/Lib/test/libregrtest/single.py", line 310 in run_single_test
File "/Users/admin/Projects/cpython/Lib/test/libregrtest/main.py", line 355 in run_test
File "/Users/admin/Projects/cpython/Lib/test/libregrtest/main.py", line 389 in run_tests_sequentially
File "/Users/admin/Projects/cpython/Lib/test/libregrtest/main.py", line 533 in _run_tests
File "/Users/admin/Projects/cpython/Lib/test/libregrtest/main.py", line 568 in run_tests
File "/Users/admin/Projects/cpython/Lib/test/libregrtest/main.py", line 731 in main
File "/Users/admin/Projects/cpython/Lib/test/libregrtest/main.py", line 739 in main
File "/Users/admin/Projects/cpython/Lib/test/__main__.py", line 2 in <module>
File "/Users/admin/Projects/cpython/Lib/runpy.py", line 88 in _run_code
File "/Users/admin/Projects/cpython/Lib/runpy.py", line 198 in _run_module_as_main
Extension modules: _testinternalcapi, _testmultiphase (total: 2)
zsh: abort ./python.exe -m test -v test_import -m test_check_state_first --forever
```
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
macOS
<!-- gh-linked-prs -->
### Linked PRs
* gh-119623
* gh-119633
<!-- /gh-linked-prs -->
| ae7b17673f29efe17b416cbcfbf43b5b3ff5977c | 0bd0d4072a49df49a88e8b02c3258dbd294170f6 |
python/cpython | python__cpython-119582 | # The `dataclasses` unit tests should record behavior of shadowed init vars
# Tests should verify that dataclasses allow no-default `InitVars` to use shadowed names.
I was surprised, when reviewing a change to MyPy that allows `InitVar` fields to be shadowed by property methods, that this even works since the runtime will drop the original `InitVar` value.
It turns out that `__annotations__` does *not* forget about the original `InitVar` (`__annotations__` preserves annotations from class attributes that came from assignment statements, ignoring any shadowing methods) and as a result this actually does work as long as no default value is provided.
See more discussion and context in https://github.com/python/mypy/pull/17219.
@hauntsaninja mentioned that MyPy has had several reports of this as false positives, so we know people are relying on this behavior (even if it may not have been intended originally) and we probably should not break it.
It probably makes sense to add a unit test, since this behavior is actually a bit surprising and could potentially be broken by a refactor. I'm not sure whether we should also discuss this in the documentation, I would lean toward "no" because I think this is supported more by accident than by design (both Pyright and Pyre reject it, which seems reasonable to me).
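A minimal reproduction of the behavior described above (the class name, the property, and the doubling in `__post_init__` are illustrative choices, not taken from the MyPy thread):

```python
from dataclasses import dataclass, InitVar


@dataclass
class C:
    x: InitVar[int]  # annotation-only, no default; shadowed by the property below

    @property
    def x(self) -> int:
        return self._x

    def __post_init__(self, x: int) -> None:
        # The InitVar value still reaches __post_init__ despite the shadowing.
        self._x = x * 2


c = C(21)
```

`C.__annotations__["x"]` is still the `InitVar` (annotations from the class body survive the shadowing method), while attribute access on instances goes through the property, so `c.x` is `42` here.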
<!-- gh-linked-prs -->
### Linked PRs
* gh-119582
* gh-119672
* gh-119673
<!-- /gh-linked-prs -->
| 6ec371223dff4da7719039e271f35a16a5b861c6 | 0518edc17049a2f474b049b7d7fe3ef4339ceb83 |
python/cpython | python__cpython-120006 | # Some environment variables are missing from `--help-env`
# Documentation
While looking for ways to control the new REPL, I found that `PYTHON_BASIC_REPL` influences it, but it is not present in the output of `--help-env`.
Here's a list of environment variables that are present in [the documentation](https://docs.python.org/3.14/using/cmdline.html#environment-variables) but missing from `--help-env` in `initconfig.c`:
- `PYTHONASYNCIODEBUG`
- `PYTHONDUMPREFS`
- `PYTHONDUMPREFSFILE`
- `PYTHONEXECUTABLE`
- `PYTHONLEGACYWINDOWSFSENCODING`
- `PYTHONLEGACYWINDOWSSTDIO`
- `PYTHONMALLOCSTATS`
- `PYTHONUSERBASE`
- `PYTHON_BASIC_REPL`
- `PYTHON_PERF_JIT_SUPPORT`
The environment variable `PYTHONSTATS` is present in `initconfig.c` but missing from the docs.
Here's the part of `initconfig.c` with the content of `--help-env`:
https://github.com/python/cpython/blob/d25954dff5409c8926d2a4053d3e892462f8b8b5/Python/initconfig.c#L234-L301
I did a search to figure out if these are intentionally missing, but wasn't able to find out.
<!-- gh-linked-prs -->
### Linked PRs
* gh-120006
<!-- /gh-linked-prs -->
| c81a5e6b5b7749862d271e7a67f89976069ad2cd | e9f4d80fa66e0d3331e79d539329747297e54ae5 |
python/cpython | python__cpython-119563 | # Remove AST nodes deprecated since Python 3.8, with warnings since Python 3.12
# Feature or enhancement
### Proposal:
In https://github.com/python/cpython/pull/104199, we added deprecation warnings for AST nodes that had been deprecated in the docs since Python 3.8. They have now had deprecation warnings for two releases; it is time to remove them.
### Has this already been discussed elsewhere?
This is a minor feature, which does not need previous discussion elsewhere
### Links to previous discussion of this feature:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-119563
* gh-119576
<!-- /gh-linked-prs -->
| 008bc04dcb3b1fa6d7c11ed8050467dfad3090a9 | b5b7dc98c94100e992a5409d24bf035d88c7b2cd |
python/cpython | python__cpython-119561 | # Invalid Assert in PyState_FindModule()
# Crash report
### Bug description:
gh-118532 added an assert that erroneously triggers if `PyState_FindModule()` is called before the provided module def has been initialized. The assert should be removed. (Python/import.c, line 460)
See https://github.com/ultrajson/ultrajson/issues/629.
### CPython versions tested on:
3.13, CPython main branch
### Operating systems tested on:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-119561
* gh-119632
<!-- /gh-linked-prs -->
| ae7b17673f29efe17b416cbcfbf43b5b3ff5977c | 0bd0d4072a49df49a88e8b02c3258dbd294170f6 |
python/cpython | python__cpython-119557 | # New repl closes CLI session on SyntaxError inside of the match statement
# Bug report
### Bug description:
```pycon
>>> match 1:
... case {0: _, 0j: _}:
... pass
Traceback (most recent call last):
File "<frozen runpy>", line 198, in _run_module_as_main
File "<frozen runpy>", line 88, in _run_code
File "/home/sk/src/cpython/Lib/_pyrepl/__main__.py", line 47, in <module>
interactive_console()
~~~~~~~~~~~~~~~~~~~^^
File "/home/sk/src/cpython/Lib/_pyrepl/__main__.py", line 44, in interactive_console
return run_interactive(mainmodule)
File "/home/sk/src/cpython/Lib/_pyrepl/simple_interact.py", line 176, in run_multiline_interactive_console
more = console.push(_strip_final_indent(statement), filename=input_name, _symbol="single") # type: ignore[call-arg]
File "/home/sk/src/cpython/Lib/code.py", line 303, in push
more = self.runsource(source, filename, symbol=_symbol)
File "/home/sk/src/cpython/Lib/_pyrepl/simple_interact.py", line 98, in runsource
code = compile(item, filename, the_symbol)
File "<python-input-0>", line 2
SyntaxError: mapping pattern checks duplicate key (0j)
```
At that point I'm in the system shell...
C.f.:
```pycon
>>> def f():
... raise ValueError("boo!")
...
>>> f()
Traceback (most recent call last):
File "<python-input-1>", line 1, in <module>
f()
~^^
File "<python-input-0>", line 2, in f
raise ValueError("boo!")
ValueError: boo!
>>> 3.to_bytes()
File "<unknown>", line 1
3.to_bytes()
^
SyntaxError: invalid decimal literal
```
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
_No response_
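The underlying compile-time error can be reproduced (and caught) directly; the fix direction is essentially to handle `SyntaxError` around the `compile()` call in `runsource` instead of letting it propagate out of the REPL loop. A minimal sketch (variable names are my own):

```python
# The duplicate-key check fires at compile time because 0 == 0j.
src = "match 1:\n    case {0: _, 0j: _}:\n        pass\n"

try:
    compile(src, "<python-input-0>", "single")
    raised = False
except SyntaxError:
    # e.g. "mapping pattern checks duplicate key (0j)" -- a REPL must
    # catch this here rather than let it escape and kill the session.
    raised = True
```
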
<!-- gh-linked-prs -->
### Linked PRs
* gh-119557
* gh-119709
<!-- /gh-linked-prs -->
| 86d1a1aa8841ea182eaf434ae6b942b3e93f58db | c0faade891e6ccb61137041fe10cc05e5fa8d534 |
python/cpython | python__cpython-119801 | # PyREPL: KeyboardInterrupt does not clear completion menu if it's already visible
# Bug report
### Bug description:
Pressing Ctrl-C (and emitting a KeyboardInterrupt) while the completion menu is visible (after having pressed Tab) does not clear it before the next line.
```python
>>> import itertools
itertools.accumulate( itertools.groupby(
itertools.batched( itertools.islice(
itertools.chain( itertools.pairwise(
itertools.combinations( itertools.permutations(
itertools.combinations_with_replacement( itertools.product(
itertools.compress( itertools.repeat(
itertools.count( itertools.starmap(
itertools.cycle( itertools.takewhile(
itertools.dropwhile( itertools.tee(
itertools.filterfalse( itertools.zip_longest(
>>> itertools.
KeyboardInterrupt
itertools.accumulate( itertools.groupby(
itertools.batched( itertools.islice(
itertools.chain( itertools.pairwise(
itertools.combinations( itertools.permutations(
itertools.combinations_with_replacement( itertools.product(
itertools.compress( itertools.repeat(
itertools.count( itertools.starmap(
itertools.cycle( itertools.takewhile(
itertools.dropwhile( itertools.tee(
itertools.filterfalse( itertools.zip_longest(
>>>
```
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
macOS
<!-- gh-linked-prs -->
### Linked PRs
* gh-119801
* gh-120062
* gh-120075
* gh-120076
<!-- /gh-linked-prs -->
| 010ea93b2b888149561becefeee90826bf8a2934 | d419d468ff4aaf6bc673354d0ee41b273d09dd3f |
python/cpython | python__cpython-119549 | # Add a "clear" command to PYREPL
This will do the same as Ctrl+L as requested by @brandtbucher
<!-- gh-linked-prs -->
### Linked PRs
* gh-119549
* gh-119552
<!-- /gh-linked-prs -->
| e3bac04c37f6823cebc74d97feae0e0c25818b31 | a531fd7fdb45d13825cb0c38d97fd38246cf9634 |
python/cpython | python__cpython-119536 | # Support `pythonπ` in Python 3.14 venv's
# Feature or enhancement
### Proposal:
```
λ env/bin/pythonπ
Python 3.14.0a0 (heads/main-dirty:de19694cfb, May 24 2024, 23:13:56) [Clang 15.0.0 (clang-1500.3.9.4)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>>
```
<!-- gh-linked-prs -->
### Linked PRs
* gh-119536
* gh-125035
* gh-134037
<!-- /gh-linked-prs -->
| 54a6875adbc9091909b1473c49595c0cc84dc438 | 7eaa09739059aaac4812395f8d6bb586af8eadcc |
python/cpython | python__cpython-119527 | # Deadlock while updating type cache (free-threading)
# Bug report
Deadlock when running `pool_in_threads.py` (from `test_multiprocessing_pool_circular_import`)
This was observed with the GIL enabled (`PYTHON_GIL=1`) in the free-threaded build. I'm not sure if it could happen when the GIL is disabled.
The problem is due to:
* We are calling `Py_DECREF(old_name)` while holding the type lock. Even though the name is just a unicode object (simple destructor), it may try to acquire other locks due to the biased reference counting inter-thread queue (see thread 2's stack trace).
* The type cache seqlock spins and doesn't ever release the GIL while spinning.
It's probably easier to move the `Py_DECREF()` outside of the lock than to adjust the seqlock.
https://github.com/python/cpython/blob/81d63362302187e5cb838c9a7cd857181142e530/Objects/typeobject.c#L5179
Thread 1 (holds GIL, spinning on type lock):
```
#0 0x00007f4c35093c9b in sched_yield () at ../sysdeps/unix/syscall-template.S:120
#1 0x00005607b1aeed82 in _Py_yield () at Python/lock.c:46
#2 0x00005607b1aef6e3 in _PySeqLock_BeginRead (seqlock=seqlock@entry=0x5607b1e6e5e4 <_PyRuntime+236068>) at Python/lock.c:515
#3 0x00005607b19a2c2f in _PyType_LookupRef (type=type@entry=0x20002782c10, name=name@entry='_handle_workers') at Objects/typeobject.c:5239
#4 0x00005607b19a9f15 in _Py_type_getattro_impl (type=0x20002782c10, name='_handle_workers', suppress_missing_attribute=suppress_missing_attribute@entry=0x0) at Objects/typeobject.c:5440
#5 0x00005607b19aa081 in _Py_type_getattro (type=<optimized out>, name=<optimized out>) at Objects/typeobject.c:5491
```
Thread 2 (holds type lock, waiting on GIL):
```
#5 0x00005607b1ac695a in PyCOND_TIMEDWAIT (us=<optimized out>, mut=0x5607b1e55778 <_PyRuntime+134072>, cond=0x5607b1e55748 <_PyRuntime+134024>) at Python/condvar.h:74
#6 take_gil (tstate=tstate@entry=0x5607b2deb250) at Python/ceval_gil.c:331
#7 0x00005607b1ac6f5b in _PyEval_AcquireLock (tstate=tstate@entry=0x5607b2deb250) at Python/ceval_gil.c:585
#8 0x00005607b1b0d376 in _PyThreadState_Attach (tstate=tstate@entry=0x5607b2deb250) at Python/pystate.c:2074
#9 0x00005607b1ac6fda in PyEval_AcquireThread (tstate=tstate@entry=0x5607b2deb250) at Python/ceval_gil.c:602
#10 0x00005607b1af8085 in _PySemaphore_Wait (sema=sema@entry=0x7f4c2f7fd2f0, timeout=timeout@entry=-1, detach=detach@entry=1) at Python/parking_lot.c:215
#11 0x00005607b1af81ff in _PyParkingLot_Park (addr=addr@entry=0x5607b1e56b70 <_PyRuntime+139184>, expected=expected@entry=0x7f4c2f7fd387, size=size@entry=1, timeout_ns=timeout_ns@entry=-1, park_arg=park_arg@entry=0x7f4c2f7fd390, detach=detach@entry=1) at Python/parking_lot.c:316
#12 0x00005607b1aeefe7 in _PyMutex_LockTimed (m=m@entry=0x5607b1e56b70 <_PyRuntime+139184>, timeout=timeout@entry=-1, flags=flags@entry=_PY_LOCK_DETACH) at Python/lock.c:112
#13 0x00005607b1aef0e8 in _PyMutex_LockSlow (m=m@entry=0x5607b1e56b70 <_PyRuntime+139184>) at Python/lock.c:53
#14 0x00005607b1a64332 in PyMutex_Lock (m=0x5607b1e56b70 <_PyRuntime+139184>) at ./Include/internal/pycore_lock.h:75
#15 _Py_brc_queue_object (ob=ob@entry='__subclasses__') at Python/brc.c:67
#16 0x00005607b1952246 in _Py_DecRefSharedDebug (o=o@entry='__subclasses__', filename=filename@entry=0x5607b1c23d8b "Objects/typeobject.c", lineno=lineno@entry=5179) at Objects/object.c:359
#17 0x00005607b1992747 in Py_DECREF (op='__subclasses__', lineno=5179, filename=0x5607b1c23d8b "Objects/typeobject.c") at ./Include/object.h:894
#18 update_cache (entry=entry@entry=0x5607b1e6e5e0 <_PyRuntime+236064>, name=name@entry='_handle_workers', version_tag=version_tag@entry=131418, value=value@entry=<classmethod at remote 0x20002037540>) at Objects/typeobject.c:5179
#19 0x00005607b1992792 in update_cache_gil_disabled (entry=entry@entry=0x5607b1e6e5e0 <_PyRuntime+236064>, name=name@entry='_handle_workers', version_tag=version_tag@entry=131418, value=value@entry=<classmethod at remote 0x20002037540>) at Objects/typeobject.c:5199
#20 0x00005607b19a2f13 in _PyType_LookupRef (type=type@entry=0x20002782c10, name=name@entry='_handle_workers') at Objects/typeobject.c:5312
#21 0x00005607b19a9f15 in _Py_type_getattro_impl (type=0x20002782c10, name='_handle_workers', suppress_missing_attribute=suppress_missing_attribute@entry=0x0) at Objects/typeobject.c:5440
#22 0x00005607b19aa081 in _Py_type_getattro (type=<optimized out>, name=<optimized out>) at Objects/typeobject.c:5491
#23 0x00005607b1954c1a in PyObject_GetAttr (v=v@entry=<type at remote 0x20002782c10>, name='_handle_workers') at Objects/object.c:1175
```
cc @DinoV
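The safe ordering can be illustrated in pure Python with `threading.Lock` (which, like the type lock, is not reentrant). The cache and the `Tracked` class are illustrative stand-ins for CPython internals, and the example relies on CPython's immediate refcounting: keeping a reference to the old entry and dropping it only after unlocking means its destructor can freely take other locks.

```python
import threading

_lock = threading.Lock()
_cache = {}
destroyed = []


class Tracked:
    def __init__(self, name):
        self.name = name

    def __del__(self):
        # A destructor may take other locks -- analogous to Py_DECREF(old_name)
        # enqueueing work on the biased-refcount queue.
        with _lock:
            destroyed.append(self.name)


def update_cache(key, value):
    with _lock:
        old = _cache.get(key)  # keep the old entry alive while locked
        _cache[key] = value
    del old                    # destructor runs here, after unlocking


update_cache("k", Tracked("a"))
update_cache("k", Tracked("b"))  # frees Tracked("a") outside the lock
```

Had `update_cache` not kept `old` alive, the dict assignment would have run `__del__` inside the `with _lock:` block and self-deadlocked, which is the Python-level analogue of the hang above.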
<!-- gh-linked-prs -->
### Linked PRs
* gh-119527
* gh-119746
<!-- /gh-linked-prs -->
| c22323cd1c200ca1b22c47af95f67c4b2d661fe7 | df93f5d4bf9d70036d485666d4dd4f009d37f8b9 |
python/cpython | python__cpython-119680 | # `IncompleteInputError` is undocumented
#113745 added a new built-in exception `IncompleteInputError`, but it is not mentioned anywhere in the docs (except in the auto-generated list of classes) or in the What's New. It should be documented. cc @pablogsal
<!-- gh-linked-prs -->
### Linked PRs
* gh-119680
* gh-120944
* gh-120955
* gh-120993
* gh-121076
<!-- /gh-linked-prs -->
| ac61d58db0753a3b37de21dbc6e86b38f2a93f1b | 65a12c559cbc13c2c5a4aa65c76310bd8d2051a7 |
python/cpython | python__cpython-123356 | # Stop manually interning strings in pathlib
When parsing and normalizing a path, pathlib calls `sys.intern()` on the string parts:
https://github.com/python/cpython/blob/96b392df303b2cfaea823afcb462c0b455704ce8/Lib/pathlib/_local.py#L273
I've never been able to establish that this is a worthwhile thing to do. The implementation seems incomplete, because the path normalization only occurs when a user manually initialises a path object, and not in paths generated from `path.iterdir()`, `path.walk()`, etc. Drives/roots/anchors aren't interned despite being the most likely parts to be shared.
Previous discussion: https://github.com/python/cpython/pull/112856#issuecomment-1850272228
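For reference, what `sys.intern()` buys in principle: equal strings built at runtime are normally distinct objects, and interning canonicalizes them so repeated path parts could share storage and compare by identity. (Whether that actually pays off for pathlib's parse results is exactly what is in question here.)

```python
import sys

# Path parts produced at runtime, e.g. by str.split(), are equal but
# generally distinct objects:
a = "usr/bin".split("/")[0]
b = "usr/lib".split("/")[0]
assert a == b

# Interning maps equal strings to a single canonical object:
assert sys.intern(a) is sys.intern(b)
```
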
<!-- gh-linked-prs -->
### Linked PRs
* gh-123356
<!-- /gh-linked-prs -->
| 5002f17794a9f403540305c733698d1e01699490 | 77a2fb4bf1a1b160d6ce105508288fc77f636943 |
python/cpython | python__cpython-119514 | # OOM vulnerability in the imaplib module
The IMAP4 protocol supports "literal" syntax, where one side specifies the size of the data and then sends that number of bytes. The `imaplib` client reads such data with a single `read(size)` call, and is therefore affected by the general vulnerability in the `read()` method -- it allocates an amount of memory dictated by an untrusted source. A mere attempt to connect to a malicious IMAP4 server can send your Python program into swap.
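A hardened reader would cap the advertised size and allocate incrementally rather than trusting it in one `read(size)` call. A sketch of the pattern (the function name, the 1 MiB limit, and the chunk size are my own choices, not imaplib's API):

```python
def read_literal(src, size, limit=2**20, chunk=2**16):
    """Read exactly `size` bytes from a file-like `src`, in bounded chunks,
    refusing sizes above `limit` so a malicious peer cannot force one huge
    up-front allocation."""
    if size > limit:
        raise ValueError(f"literal of {size} bytes exceeds limit of {limit}")
    parts = []
    remaining = size
    while remaining:
        data = src.read(min(chunk, remaining))
        if not data:
            raise EOFError("connection closed mid-literal")
        parts.append(data)
        remaining -= len(data)
    return b"".join(parts)
```
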
<!-- gh-linked-prs -->
### Linked PRs
* gh-119514
* gh-129355
* gh-129356
* gh-129357
* gh-129358
* gh-130248
<!-- /gh-linked-prs -->
| 0fef47e5bbd167c21eb4f3cbd885cf61270014e7 | 43e024021392c8c70e5a56cdf7428ced45d73688 |
python/cpython | python__cpython-119507 | # `_io.TextIOWrapper.write`: write during flush causes `pending_bytes` length mismatch
# Crash report
### What happened?
Bisected to #24592.
Simple repro:
```python
import _io
class MyIO(_io.BytesIO):
def __init__(self):
_io.BytesIO.__init__(self)
self.writes = []
def write(self, b):
self.writes.append(b)
tw.write("c")
return len(b)
buf = MyIO()
tw = _io.TextIOWrapper(buf)
CHUNK_SIZE = 8192
tw.write("a" * (CHUNK_SIZE - 1))
tw.write("b" * 2)
tw.flush()
assert b''.join(tw.buffer.writes) == b"a" * (CHUNK_SIZE - 1) + b"b" * 2 + b"c"
```
On debug build it causes C assertion failure:
```
python: ./Modules/_io/textio.c:1582: _textiowrapper_writeflush: Assertion `PyUnicode_GET_LENGTH(pending) == self->pending_bytes_count' failed.
Program received signal SIGABRT, Aborted.
__GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50
50 ../sysdeps/unix/sysv/linux/raise.c: No such file or directory.
(gdb) bt
#0 __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50
#1 0x00007ffff7c84537 in __GI_abort () at abort.c:79
#2 0x00007ffff7c8440f in __assert_fail_base (fmt=0x7ffff7dfb688 "%s%s%s:%u: %s%sAssertion `%s' failed.\n%n",
assertion=0x555555a24d40 "PyUnicode_GET_LENGTH(pending) == self->pending_bytes_count", file=0x555555a25332 "./Modules/_io/textio.c", line=1582,
function=<optimized out>) at assert.c:92
#3 0x00007ffff7c93662 in __GI___assert_fail (assertion=assertion@entry=0x555555a24d40 "PyUnicode_GET_LENGTH(pending) == self->pending_bytes_count",
file=file@entry=0x555555a25332 "./Modules/_io/textio.c", line=line@entry=1582,
function=function@entry=0x555555a256b0 <__PRETTY_FUNCTION__.9> "_textiowrapper_writeflush") at assert.c:101
#4 0x00005555559102b9 in _textiowrapper_writeflush (self=self@entry=0x7ffff77896d0) at ./Modules/_io/textio.c:1582
#5 0x000055555591065d in _io_TextIOWrapper_flush_impl (self=0x7ffff77896d0) at ./Modules/_io/textio.c:3092
#6 0x0000555555910791 in _io_TextIOWrapper_flush (self=<optimized out>, _unused_ignored=<optimized out>) at ./Modules/_io/clinic/textio.c.h:1105
#7 0x0000555555693483 in method_vectorcall_NOARGS (func=0x7ffff7731250, args=0x7ffff7fc1070, nargsf=<optimized out>, kwnames=<optimized out>)
at Objects/descrobject.c:447
#8 0x0000555555680d7c in _PyObject_VectorcallTstate (tstate=0x555555be4678 <_PyRuntime+294136>, callable=0x7ffff7731250, args=0x7ffff7fc1070,
nargsf=9223372036854775809, kwnames=0x0) at ./Include/internal/pycore_call.h:168
#9 0x0000555555680e97 in PyObject_Vectorcall (callable=callable@entry=0x7ffff7731250, args=args@entry=0x7ffff7fc1070, nargsf=<optimized out>,
kwnames=kwnames@entry=0x0) at Objects/call.c:327
#10 0x000055555580876d in _PyEval_EvalFrameDefault (tstate=tstate@entry=0x555555be4678 <_PyRuntime+294136>, frame=0x7ffff7fc1020, throwflag=throwflag@entry=0)
at Python/generated_cases.c.h:813
...
```
If `_io.TextIOWrapper.write()` tries to store more than `self->chunk_size` data in `self->pending_bytes`, it calls `_textiowrapper_writeflush()`:
https://github.com/python/cpython/blob/b48a3dbff4d70e72797e67b46276564fc63ddb89/Modules/_io/textio.c#L1726-L1733
`_textiowrapper_writeflush()` flushes `self->pending_bytes` contents to wrapped buffer through `write()` method call:
https://github.com/python/cpython/blob/b48a3dbff4d70e72797e67b46276564fc63ddb89/Modules/_io/textio.c#L1621-L1628
The problem is that call to `write()` method can cause `_io.TextIOWrapper.write()` call (directly, as in repro, or from other thread), which re-sets `self->pending_bytes` and `self->pending_bytes_count` values.
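A Python-level analogue of the hazard and the defensive pattern (this mirrors the direction of a fix, not the actual C patch): detach the pending buffer before calling the wrapped `write()`, so a reentrant `write()` accumulates into a fresh buffer instead of invalidating the one being flushed. All class and variable names below are illustrative.

```python
class TinyTextWriter:
    """Toy stand-in for TextIOWrapper's pending-bytes buffering."""

    def __init__(self, raw):
        self.raw = raw        # anything with a write(bytes) method
        self.pending = []

    def write(self, text):
        self.pending.append(text.encode())

    def flush(self):
        while self.pending:
            # Detach before calling out: raw.write() may re-enter write().
            pending, self.pending = self.pending, []
            self.raw.write(b"".join(pending))


writes = []


class ReentrantRaw:
    def __init__(self):
        self.reentered = False

    def write(self, b):
        writes.append(b)
        if not self.reentered:
            self.reentered = True
            tw.write("c")     # write during flush, as in the repro above
        return len(b)


tw = TinyTextWriter(ReentrantRaw())
tw.write("ab")
tw.flush()
```

Because `flush()` loops until `pending` is empty, the reentrant `"c"` is picked up on the next iteration instead of corrupting the buffer mid-flush, so `writes` ends up holding `b"ab"` then `b"c"`.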
### CPython versions tested on:
3.10, 3.11, 3.12, CPython main branch
### Operating systems tested on:
Linux
### Output from running 'python -VV' on the command line:
Python 3.14.0a0 (heads/main:e94dbe4ed8, May 24 2024, 00:47:49) [GCC 10.2.1 20210110]
<!-- gh-linked-prs -->
### Linked PRs
* gh-119507
* gh-119964
* gh-119965
* gh-120314
<!-- /gh-linked-prs -->
| 52586f930f62bd80374f0f240a4ecce0c0238174 | 3ea9b92086240b2f38a74c6945e7a723b480cefe |
python/cpython | python__cpython-119470 | # test_pyrepl leaks references
Since commit fe921931a35b461fb81821a474b510f4d67c520b, test_pyrepl started to leak a lot of references.
```
commit fe921931a35b461fb81821a474b510f4d67c520b (HEAD)
Author: Lysandros Nikolaou <lisandrosnik@gmail.com>
Date: Mon May 20 17:57:32 2024 -0400
gh-111201: Add tests for unix console class in pyrepl (#118653)
```
Example:
```
$ ./python -m test test_pyrepl -u all -R 3:3
(...)
test_pyrepl leaked [8943, 8939, 8943] references, sum=26825
test_pyrepl leaked [3138, 3136, 3138] memory blocks, sum=9412
(...)
```
cc @lysnikolaou
<!-- gh-linked-prs -->
### Linked PRs
* gh-119470
* gh-119471
<!-- /gh-linked-prs -->
| 6e012ced6cc07a7502278e1849c5618d1ab54a08 | a192547dfe7c4f184cc8b579c3eff2f61f642483 |
python/cpython | python__cpython-119475 | # Py_buffer.format declaration inconsistent with the docs
# Bug report
### Bug description:
The declaration of the `format` field of the `Py_buffer` structure is `char *` but the documentation states it is `const char *`.
IMHO the declaration should be changed to match the docs.
### CPython versions tested on:
3.8, 3.9, 3.10, 3.11, 3.12, 3.13
### Operating systems tested on:
Linux, macOS, Windows
<!-- gh-linked-prs -->
### Linked PRs
* gh-119475
* gh-119602
* gh-119603
<!-- /gh-linked-prs -->
| 3b26cd8ca0e6c65e4b61effea9aa44d06e926797 | c7a5e1e550a2a0bfa11dbf055ed4b7afb26b5fe9 |
python/cpython | python__cpython-120731 | # Python 3.13b repeatedly setting superclass attribute in subclass leads to crashes
# Crash report
### What happened?
I'm trying to run the [qutebrowser testsuite](https://qutebrowser.org/) with Python 3.13, and am running into an issue where a test reproducibly fails (usually by crashing the interpreter), but only when I run the entire testsuite (not when run in isolation, or even just the tests in the same subfolder).
Given those circumstances, it seems tricky to get to a minimal example. I thought I'd open this issue in the hope of distilling things down further, and arriving at such an example. In the meantime, the best reproduction steps I can come up with are:
```
git clone https://github.com/qutebrowser/qutebrowser
cd qutebrowser
tox -e py313-pyqt66 -- tests/unit -v -s
```
A few tests will fail with a `--with-pydebug` build due to timeouts; those can be ignored. After a while (~13 minutes with `--with-pydebug` under gdb on my system), one of the tests in `tests/unit/mainwindow/test_messageview.py` will fail, usually due to a [failing assertion](https://github.com/python/cpython/blob/v3.13.0b1/Include/cpython/unicodeobject.h#L374) because `PyUnicode_KIND` returned an invalid value.
Failing Python code:
```
value = value.replace('default_family', self.default_family)
```
stacktrace:
<details>
```c
#0 0x00007211a3fda32c in ??? () at /usr/lib/libc.so.6
#1 0x00007211a3f896c8 in raise () at /usr/lib/libc.so.6
#2 0x00007211a3f89770 in <signal handler called> () at /usr/lib/libc.so.6
#3 0x00007211a3fda32c in ??? () at /usr/lib/libc.so.6
#4 0x00007211a3f896c8 in raise () at /usr/lib/libc.so.6
#5 0x00007211a3f714b8 in abort () at /usr/lib/libc.so.6
#6 0x00007211a3f713dc in ??? () at /usr/lib/libc.so.6
#7 0x00007211a3f81d46 in __assert_fail () at /usr/lib/libc.so.6
#8 0x00007211a389feb4 in PyUnicode_MAX_CHAR_VALUE (op=<optimized out>) at ./Include/cpython/unicodeobject.h:374
#9 0x00007211a39ab346 in PyUnicode_MAX_CHAR_VALUE (op=<optimized out>) at ./Include/cpython/unicodeobject.h:371
#10 replace (self=Python Exception <class 'gdb.error'>: There is no member named ready.
, str1=Python Exception <class 'gdb.error'>: There is no member named ready.
, str2=<unknown at remote 0x72110f89c130>, maxcount=<optimized out>)
at Objects/unicodeobject.c:10133
#11 0x00007211a388e6e6 in _PyEval_EvalFrameDefault
(tstate=0x7211a3ecb1f0 <_PyRuntime+294032>, frame=0x7211a414be50, throwflag=6) at Python/generated_cases.c.h:1688
#12 0x00007211a3a1a488 in _PyObject_VectorcallTstate
(kwnames=0x0, nargsf=2, args=0x7ffecd2707a0, callable=<function at remote 0x72118e52c7d0>, tstate=0x7211a3ecb1f0 <_PyRuntime+294032>) at ./Include/internal/pycore_call.h:168
#13 PyObject_Vectorcall (kwnames=0x0, nargsf=2, args=0x7ffecd2707a0, callable=<function at remote 0x72118e52c7d0>)
at Objects/call.c:327
#14 call_attribute (name=Python Exception <class 'gdb.error'>: There is no member named ready.
, attr=<function at remote 0x72118e52c7d0>, self=<optimized out>)
at Objects/typeobject.c:9435
#15 call_attribute (name=Python Exception <class 'gdb.error'>: There is no member named ready.
, attr=<function at remote 0x72118e52c7d0>, self=<optimized out>)
at Objects/typeobject.c:9429
#16 _Py_slot_tp_getattr_hook (self=<optimized out>, name=Python Exception <class 'gdb.error'>: There is no member named ready.
) at Objects/typeobject.c:9485
#17 0x00007211a3a1b123 in PyObject_GetAttr (v=Python Exception <class 'AssertionError'>:
, name=Python Exception <class 'gdb.error'>: There is no member named ready.
) at Objects/object.c:1175
#18 0x00007211a3892454 in _PyEval_EvalFrameDefault
(tstate=0x7211a3ecb1f0 <_PyRuntime+294032>, frame=0x7211a414bc98, throwflag=6) at Python/generated_cases.c.h:1165
#19 0x00007211a3c3d04d in _PyEval_EvalFrame
(throwflag=0, frame=0x7211198c8208, tstate=0x7211a3ecb1f0 <_PyRuntime+294032>)
at ./Include/internal/pycore_ceval.h:119
#20 gen_send_ex2
(gen=0x7211198c81c0, arg=arg@entry=0x0, presult=presult@entry=0x7ffecd270a90, exc=exc@entry=0, closing=closing@entry=0) at Objects/genobject.c:229
#21 0x00007211a3c3d242 in gen_iternext (gen=<optimized out>) at Objects/genobject.c:589
#22 0x00007211a3acbb60 in list_extend_iter_lock_held
(iterable=<generator at remote 0x7211198c81c0>, self=0x72110f8d1770) at Objects/listobject.c:1231
#23 _list_extend (self=self@entry=0x72110f8d1770, iterable=iterable@entry=<generator at remote 0x7211198c81c0>)
at Objects/listobject.c:1404
#24 0x00007211a3acc7b7 in list_extend (iterable=<generator at remote 0x7211198c81c0>, self=0x72110f8d1770)
at Objects/listobject.c:1430
#25 _PyList_Extend (iterable=<generator at remote 0x7211198c81c0>, self=0x72110f8d1770) at Objects/listobject.c:1432
#26 PySequence_List (v=<generator at remote 0x7211198c81c0>) at Objects/abstract.c:2135
#27 0x00007211a3ad6128 in PySequence_Fast (m=0x7211a3cae034 "can only join an iterable", v=<optimized out>)
at Objects/abstract.c:2166
#28 PySequence_Fast (v=<optimized out>, m=0x7211a3cae034 "can only join an iterable") at Objects/abstract.c:2145
#29 0x00007211a3ad81bf in PyUnicode_Join (separator=Python Exception <class 'gdb.error'>: There is no member named ready.
, seq=<optimized out>) at Objects/unicodeobject.c:9557
--Type <RET> for more, q to quit, c to continue without paging--
#30 0x00007211a3892ba1 in _PyEval_EvalFrameDefault (tstate=0x7211a3ecb1f0 <_PyRuntime+294032>, frame=0x7211a414bbe0, throwflag=6) at Python/generated_cases.c.h:1264
#31 0x00007211a3a10d2a in PyObject_Call (kwargs=0x0, args=Python Exception <class 'gdb.error'>: There is no member named ready.
, callable=<function at remote 0x72118e3ed790>) at Objects/call.c:373
#32 bounded_lru_cache_wrapper (self=0x72118e3ed850, args=Python Exception <class 'gdb.error'>: There is no member named ready.
, kwds=0x0) at ./Modules/_functoolsmodule.c:1050
#33 0x00007211a39ec88d in _PyObject_MakeTpCall (tstate=0x7211a3ecb1f0 <_PyRuntime+294032>, callable=Python Exception <class 'gdb.error'>: There is no member named ready.
, args=<optimized out>, nargs=1, keywords=<optimized out>) at Objects/call.c:242
#34 0x00007211a388d023 in _PyEval_EvalFrameDefault (tstate=0x7211a3ecb1f0 <_PyRuntime+294032>, frame=0x7211a414baf0, throwflag=6) at Python/generated_cases.c.h:1837
#35 0x00007211a39ece98 in _PyObject_VectorcallDictTstate (tstate=0x7211a3ecb1f0 <_PyRuntime+294032>, callable=<function at remote 0x72118e3edd90>, args=<optimized out>, nargsf=<optimized out>, kwargs=<optimized out>) at Objects/call.c:146 _hook.py'...
#36 0x00007211a39ed0a6 in _PyObject_Call_Prepend (tstate=tstate@entry=0x7211a3ecb1f0 <_PyRuntime+294032>, callable=callable@entry=<function at remote 0x72118e3edd90>, obj=obj@entry=Python Exception <class 'gdb.error'>: There is no member named ready.
, args=args@entry=(), kwargs=kwargs@entry=Python Exception <class 'gdb.error'>: There is no member named ready.
)
at Objects/call.c:504
#37 0x00007211a3a1a645 in slot_tp_init (self=Python Exception <class 'gdb.error'>: There is no member named ready.
, args=(), kwds=Python Exception <class 'gdb.error'>: There is no member named ready.
) at Objects/typeobject.c:9646
#38 0x00007211a39afafd in type_call (self=<PyQt6.sip.wrappertype at remote 0x5e3a1fe4e720>, args=(), kwds=Python Exception <class 'gdb.error'>: There is no member named ready.
) at Objects/typeobject.c:1909
#39 0x00007211a39ec88d in _PyObject_MakeTpCall (tstate=0x7211a3ecb1f0 <_PyRuntime+294032>, callable=<PyQt6.sip.wrappertype at remote 0x5e3a1fe4e720>, args=<optimized out>, nargs=0, keywords=<optimized out>) at Objects/call.c:242
#40 0x00007211a3884cc8 in _PyEval_EvalFrameDefault (tstate=0x7211a3ecb1f0 <_PyRuntime+294032>, frame=0x7211a414b880, throwflag=6) at Python/generated_cases.c.h:1500
#41 0x00007211a39ece98 in _PyObject_VectorcallDictTstate (tstate=0x7211a3ecb1f0 <_PyRuntime+294032>, callable=<function at remote 0x7211a2b85d90>, args=<optimized out>, nargsf=<optimized out>, kwargs=<optimized out>) at Objects/call.c:146 piling '/home/florian/proj/aur/rixx/python313/pkg/python313/usr/lib/python3.13/test/test_importlib/import_/test_hel
#42 0x00007211a39ed0a6 in _PyObject_Call_Prepend
(tstate=tstate@entry=0x7211a3ecb1f0 <_PyRuntime+294032>, callable=callable@entry=<function at remote 0x7211a2b85d90>, obj=obj@entry=<HookCaller at remote 0x7211a2395d30>, args=args@entry=(), kwargs=kwargs@entry=Python Exception <class 'gdb.error'>: There is no member named ready.
) at Objects/call.c:504
#43 0x00007211a3a17ff5 in slot_tp_call (self=<HookCaller at remote 0x7211a2395d30>, args=(), kwds=Python Exception <class 'gdb.error'>: There is no member named ready.
) at Objects/typeobject.c:9400
#44 0x00007211a39ec88d in _PyObject_MakeTpCall (tstate=0x7211a3ecb1f0 <_PyRuntime+294032>, callable=<HookCaller at remote 0x7211a2395d30>, args=<optimized out>, nargs=0, keywords=<optimized out>) at Objects/call.c:242
#45 0x00007211a3884cc8 in _PyEval_EvalFrameDefault (tstate=0x7211a3ecb1f0 <_PyRuntime+294032>, frame=0x7211a414b370, throwflag=6) at Python/generated_cases.c.h:1500
#46 0x00007211a39ece98 in _PyObject_VectorcallDictTstate (tstate=0x7211a3ecb1f0 <_PyRuntime+294032>, callable=<function at remote 0x7211a2b85d90>, args=<optimized out>, nargsf=<optimized out>, kwargs=<optimized out>) at Objects/call.c:146 es/example2/example2/__init__.py'...
#47 0x00007211a39ed0a6 in _PyObject_Call_Prepend
(tstate=tstate@entry=0x7211a3ecb1f0 <_PyRuntime+294032>, callable=callable@entry=<function at remote 0x7211a2b85d90>, obj=obj@entry=<HookCaller at remote 0x7211a23964b0>, args=args@entry=(), kwargs=kwargs@entry=Python Exception <class 'gdb.error'>: There is no member named ready.
) at Objects/call.c:504
#48 0x00007211a3a17ff5 in slot_tp_call (self=<HookCaller at remote 0x7211a23964b0>, args=(), kwds=Python Exception <class 'gdb.error'>: There is no member named ready.
) at Objects/typeobject.c:9400
#49 0x00007211a3a0e1ee in _PyObject_Call (tstate=0x7211a3ecb1f0 <_PyRuntime+294032>, callable=<HookCaller at remote 0x7211a23964b0>, args=(), kwargs=<optimized out>) at Objects/call.c:361
#50 0x00007211a3881380 in _PyEval_EvalFrameDefault (tstate=0x7211a3ecb1f0 <_PyRuntime+294032>, frame=0x7211a414b008, throwflag=6) at Python/generated_cases.c.h:1353
#51 0x00007211a39ece98 in _PyObject_VectorcallDictTstate (tstate=0x7211a3ecb1f0 <_PyRuntime+294032>, callable=<function at remote 0x7211a2b85d90>, args=<optimized out>, nargsf=<optimized out>, kwargs=<optimized out>) at Objects/call.c:146 a_namespace_pkg/foo/one.py'...
#52 0x00007211a39ed0a6 in _PyObject_Call_Prepend
(tstate=tstate@entry=0x7211a3ecb1f0 <_PyRuntime+294032>, callable=callable@entry=<function at remote 0x7211a2b85d90>, obj=obj@entry=<HookCaller at remote 0x7211a2396b70>, args=args@entry=(), kwargs=kwargs@entry=Python Exception <class 'gdb.error'>: There is no member named ready.
) at Objects/call.c:504
#53 0x00007211a3a17ff5 in slot_tp_call (self=<HookCaller at remote 0x7211a2396b70>, args=(), kwds=Python Exception <class 'gdb.error'>: There is no member named ready.
) at Objects/typeobject.c:9400
#54 0x00007211a39ec88d in _PyObject_MakeTpCall (tstate=0x7211a3ecb1f0 <_PyRuntime+294032>, callable=<HookCaller at remote 0x7211a2396b70>, args=<optimized out>, nargs=0, keywords=<optimized out>) at Objects/call.c:242
#55 0x00007211a3884cc8 in _PyEval_EvalFrameDefault (tstate=0x7211a3ecb1f0 <_PyRuntime+294032>, frame=0x7211a414aa10, throwflag=6) at Python/generated_cases.c.h:1500
#56 0x00007211a39ece98 in _PyObject_VectorcallDictTstate (tstate=0x7211a3ecb1f0 <_PyRuntime+294032>, callable=<function at remote 0x7211a2b85d90>, args=<optimized out>, nargsf=<optimized out>, kwargs=<optimized out>) at Objects/call.c:146 piling '/home/florian/proj/aur/rixx/python313/pkg/python313/usr/lib/python3.13/test/test_importlib/namespace_pkgs/p
#57 0x00007211a39ed0a6 in _PyObject_Call_Prepend
(tstate=tstate@entry=0x7211a3ecb1f0 <_PyRuntime+294032>, callable=callable@entry=<function at remote 0x7211a2b85d90>, obj=obj@entry=<HookCaller at remote 0x7211a2396f90>, args=args@entry=(), kwargs=kwargs@entry=Python Exception <class 'gdb.error'>: There is no member named ready.
) at Objects/call.c:504
#58 0x00007211a3a17ff5 in slot_tp_call (self=<HookCaller at remote 0x7211a2396f90>, args=(), kwds=Python Exception <class 'gdb.error'>: There is no member named ready.
) at Objects/typeobject.c:9400
#59 0x00007211a39ec88d in _PyObject_MakeTpCall (tstate=0x7211a3ecb1f0 <_PyRuntime+294032>, callable=<HookCaller at remote 0x7211a2396f90>, args=<optimized out>, nargs=0, keywords=<optimized out>) at Objects/call.c:242
#60 0x00007211a3884cc8 in _PyEval_EvalFrameDefault (tstate=0x7211a3ecb1f0 <_PyRuntime+294032>, frame=0x7211a414a730, throwflag=6) at Python/generated_cases.c.h:1500
#61 0x00007211a39ece98 in _PyObject_VectorcallDictTstate (tstate=0x7211a3ecb1f0 <_PyRuntime+294032>, callable=<function at remote 0x7211a2b85d90>, args=<optimized out>, nargsf=<optimized out>, kwargs=<optimized out>) at Objects/call.c:146 piling '/home/florian/proj/aur/rixx/python313/pkg/python313/usr/lib/python3.13/test/test_httpservers.py'...
#62 0x00007211a39ed0a6 in _PyObject_Call_Prepend
(tstate=tstate@entry=0x7211a3ecb1f0 <_PyRuntime+294032>, callable=callable@entry=<function at remote 0x7211a2b85d90>, obj=obj@entry=<HookCaller at remote 0x7211a238b2f0>, args=args@entry=(), kwargs=kwargs@entry=Python Exception <class 'gdb.error'>: There is no member named ready.
) at Objects/call.c:504
#63 0x00007211a3a17ff5 in slot_tp_call (self=<HookCaller at remote 0x7211a238b2f0>, args=(), kwds=Python Exception <class 'gdb.error'>: There is no member named ready.
) at Objects/typeobject.c:9400
#64 0x00007211a39ec88d in _PyObject_MakeTpCall (tstate=0x7211a3ecb1f0 <_PyRuntime+294032>, callable=<HookCaller at remote 0x7211a238b2f0>, args=<optimized out>, nargs=0, keywords=<optimized out>) at Objects/call.c:242
#65 0x00007211a3884cc8 in _PyEval_EvalFrameDefault (tstate=0x7211a3ecb1f0 <_PyRuntime+294032>, frame=0x7211a414a2b0, throwflag=6) at Python/generated_cases.c.h:1500
#66 0x00007211a3c3f6d6 in PyEval_EvalCode (co=<code at remote 0x7211a25a96d0>, globals=<optimized out>, locals=Python Exception <class 'gdb.error'>: There is no member named ready.
) at Python/ceval.c:598
#67 0x00007211a3c51950 in builtin_exec_impl (module=<optimized out>, closure=<optimized out>, locals=Python Exception <class 'gdb.error'>: There is no member named ready.
, globals=Python Exception <class 'gdb.error'>: There is no member named ready.
, source=<code at remote 0x7211a25a96d0>) at Python/bltinmodule.c:1145
#68 builtin_exec (module=<optimized out>, args=<optimized out>, nargs=<optimized out>, kwnames=<optimized out>) at Python/clinic/bltinmodule.c.h:556
#69 0x00007211a395a4ea in cfunction_vectorcall_FASTCALL_KEYWORDS (func=<built-in method exec of module object at remote 0x7211a36cc410>, args=0x7211a414a180, nargsf=<optimized out>, kwnames=0x0) at Objects/methodobject.c:441
#70 0x00007211a39fcce9 in _PyObject_VectorcallTstate (kwnames=0x0, nargsf=9223372036854775810, args=0x7211a414a180, callable=<built-in method exec of module object at remote 0x7211a36cc410>, tstate=0x7211a3ecb1f0 <_PyRuntime+294032>)
at ./Include/internal/pycore_call.h:168
#71 PyObject_Vectorcall (callable=<built-in method exec of module object at remote 0x7211a36cc410>, args=0x7211a414a180, nargsf=9223372036854775810, kwnames=0x0) at Objects/call.c:327
#72 0x00007211a3889b59 in _PyEval_EvalFrameDefault (tstate=0x7211a3ecb1f0 <_PyRuntime+294032>, frame=0x7211a414a0d8, throwflag=6) at Python/generated_cases.c.h:813
#73 0x00007211a3c36c04 in PyObject_Call (kwargs=0x0, args=Python Exception <class 'gdb.error'>: There is no member named ready.
, callable=<function at remote 0x7211a2d38290>) at Objects/call.c:373
#74 pymain_run_module (modname=<optimized out>, set_argv0=set_argv0@entry=1) at Modules/main.c:297
#75 0x00007211a3c498d2 in pymain_run_python (exitcode=0x7ffecd272a68) at Modules/main.c:633
#76 Py_RunMain () at Modules/main.c:718
#77 0x00007211a3f72cd0 in ??? () at /usr/lib/libc.so.6
#78 0x00007211a3f72d8a in __libc_start_main () at /usr/lib/libc.so.6
#79 0x00005e3a1e856055 in _start ()
```
</details>
Sometimes I've also seen a `MemoryError` on the line calling `str.replace`, or an ominous:
```pytb
TypeError: replace() argument 2 must be str, not +
```
with the following Python stack (note that jinja is involved, which might or might not be a trigger?):
```pytb
qtbot = <pytestqt.qtbot.QtBot object at 0x7552ae9d6200>, view = <qutebrowser.mainwindow.messageview.MessageView object at 0x7552acd7fb10>
config_stub = <qutebrowser.config.config.Config object at 0x7552acbe6170>
def test_changing_timer_with_messages_shown(qtbot, view, config_stub):
"""When we change messages.timeout, the timer should be restarted."""
config_stub.val.messages.timeout = 900000 # 15s
> view.show_message(message.MessageInfo(usertypes.MessageLevel.info, 'test'))
tests/unit/mainwindow/test_messageview.py:172:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
qutebrowser/mainwindow/messageview.py:148: in show_message
widget = Message.from_info(info)
qutebrowser/mainwindow/messageview.py:71: in from_info
return cls(
qutebrowser/mainwindow/messageview.py:62: in __init__
stylesheet.set_register(self, qss, update=False)
qutebrowser/config/stylesheet.py:32: in set_register
observer.register()
qutebrowser/config/stylesheet.py:97: in register
qss = self._get_stylesheet()
qutebrowser/config/stylesheet.py:86: in _get_stylesheet
return _render_stylesheet(self._stylesheet)
qutebrowser/config/stylesheet.py:41: in _render_stylesheet
return template.render(conf=config.val)
.tox/py313-pyqt66/lib/python3.13/site-packages/jinja2/environment.py:1304: in render
self.environment.handle_exception()
.tox/py313-pyqt66/lib/python3.13/site-packages/jinja2/environment.py:939: in handle_exception
raise rewrite_traceback_stack(source=source)
<template>:7: in top-level template code
???
qutebrowser/utils/jinja.py:117: in getattr
return getattr(obj, attribute)
qutebrowser/config/config.py:633: in __getattr__
return self._config.get(name)
qutebrowser/config/config.py:385: in get
return opt.typ.to_py(obj)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <qutebrowser.config.configtypes.Font completions=None none_ok=False>, value = 'default_size default_family'
def to_py(self, value: _StrUnset) -> _StrUnsetNone:
self._basic_py_validation(value, str)
if isinstance(value, usertypes.Unset):
return value
elif not value:
return None
if not self.font_regex.fullmatch(value): # pragma: no cover
# This should never happen, as the regex always matches everything
# as family.
raise configexc.ValidationError(value, "must be a valid font")
if (value.endswith(' default_family') and
self.default_family is not None):
> value = value.replace('default_family', self.default_family)
E TypeError: replace() argument 2 must be str, not +
qutebrowser/config/configtypes.py:1244: TypeError
```
which leads me to the conclusion that there must be some sort of memory corruption going on there.
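One way to make corruption like this surface closer to its source is to rerun the failing command under the debug memory allocator with faulthandler enabled. A minimal harness along these lines (the target command below is a placeholder, not the actual reproducer):

```python
import os
import subprocess
import sys

def run_under_debug_malloc(args):
    # PYTHONMALLOC=debug adds guard bytes around every allocation, so a
    # stray write aborts near where it happens instead of much later;
    # PYTHONFAULTHANDLER=1 dumps the Python traceback on a fatal error.
    env = dict(os.environ, PYTHONMALLOC="debug", PYTHONFAULTHANDLER="1")
    return subprocess.run([sys.executable, *args], env=env).returncode

if __name__ == "__main__":
    # Placeholder target; the real invocation would be the failing pytest run.
    rc = run_under_debug_malloc(["-c", "print('hello')"])
    print("exit code:", rc)
```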
---
On one run, I've also seen a GC-related crash, which I'm not sure is related:
Python stack:
<details>
```pytb
Fatal Python error: Aborted
Thread 0x00007f31066006c0 (most recent call first):
File "/usr/lib/python3.13/socket.py", line 295 in accept
File "/home/florian/proj/qutebrowser/git/.tox/py313-pyqt66/lib/python3.13/site-packages/pytest_rerunfailures.py", line 433 in run_server
File "/usr/lib/python3.13/threading.py", line 990 in run
File "/usr/lib/python3.13/threading.py", line 1039 in _bootstrap_inner
File "/usr/lib/python3.13/threading.py", line 1010 in _bootstrap
Current thread 0x00007f311ca29740 (most recent call first):
Garbage-collecting
File "/home/florian/proj/qutebrowser/git/qutebrowser/config/configfiles.py", line 255 in __init__
File "/home/florian/proj/qutebrowser/git/tests/helpers/fixtures.py", line 315 in yaml_config_stub
File "/home/florian/proj/qutebrowser/git/.tox/py313-pyqt66/lib/python3.13/site-packages/_pytest/fixtures.py", line 884 in call_fixture_func
File "/home/florian/proj/qutebrowser/git/.tox/py313-pyqt66/lib/python3.13/site-packages/_pytest/fixtures.py", line 1122 in pytest_fixture_setup
File "/home/florian/proj/qutebrowser/git/.tox/py313-pyqt66/lib/python3.13/site-packages/pluggy/_callers.py", line 103 in _multicall
File "/home/florian/proj/qutebrowser/git/.tox/py313-pyqt66/lib/python3.13/site-packages/pluggy/_manager.py", line 120 in _hookexec
File "/home/florian/proj/qutebrowser/git/.tox/py313-pyqt66/lib/python3.13/site-packages/pluggy/_hooks.py", line 513 in __call__
File "/home/florian/proj/qutebrowser/git/.tox/py313-pyqt66/lib/python3.13/site-packages/_pytest/fixtures.py", line 1073 in execute
File "/home/florian/proj/qutebrowser/git/.tox/py313-pyqt66/lib/python3.13/site-packages/_pytest/fixtures.py", line 603 in _get_active_fixturedef
File "/home/florian/proj/qutebrowser/git/.tox/py313-pyqt66/lib/python3.13/site-packages/_pytest/fixtures.py", line 1035 in execute
File "/home/florian/proj/qutebrowser/git/.tox/py313-pyqt66/lib/python3.13/site-packages/_pytest/fixtures.py", line 603 in _get_active_fixturedef
File "/home/florian/proj/qutebrowser/git/.tox/py313-pyqt66/lib/python3.13/site-packages/_pytest/fixtures.py", line 1035 in execute
File "/home/florian/proj/qutebrowser/git/.tox/py313-pyqt66/lib/python3.13/site-packages/_pytest/fixtures.py", line 603 in _get_active_fixturedef
File "/home/florian/proj/qutebrowser/git/.tox/py313-pyqt66/lib/python3.13/site-packages/_pytest/fixtures.py", line 518 in getfixturevalue
File "/home/florian/proj/qutebrowser/git/.tox/py313-pyqt66/lib/python3.13/site-packages/_pytest/fixtures.py", line 683 in _fillfixtures
File "/home/florian/proj/qutebrowser/git/.tox/py313-pyqt66/lib/python3.13/site-packages/_pytest/python.py", line 1630 in setup
File "/home/florian/proj/qutebrowser/git/.tox/py313-pyqt66/lib/python3.13/site-packages/_pytest/runner.py", line 512 in setup
File "/home/florian/proj/qutebrowser/git/.tox/py313-pyqt66/lib/python3.13/site-packages/_pytest/runner.py", line 159 in pytest_runtest_setup
File "/home/florian/proj/qutebrowser/git/.tox/py313-pyqt66/lib/python3.13/site-packages/pluggy/_callers.py", line 103 in _multicall
File "/home/florian/proj/qutebrowser/git/.tox/py313-pyqt66/lib/python3.13/site-packages/pluggy/_manager.py", line 120 in _hookexec
File "/home/florian/proj/qutebrowser/git/.tox/py313-pyqt66/lib/python3.13/site-packages/pluggy/_hooks.py", line 513 in __call__
File "/home/florian/proj/qutebrowser/git/.tox/py313-pyqt66/lib/python3.13/site-packages/_pytest/runner.py", line 241 in <lambda>
File "/home/florian/proj/qutebrowser/git/.tox/py313-pyqt66/lib/python3.13/site-packages/_pytest/runner.py", line 341 in from_call
File "/home/florian/proj/qutebrowser/git/.tox/py313-pyqt66/lib/python3.13/site-packages/_pytest/runner.py", line 240 in call_and_report
File "/home/florian/proj/qutebrowser/git/.tox/py313-pyqt66/lib/python3.13/site-packages/_pytest/runner.py", line 129 in runtestprotocol
File "/home/florian/proj/qutebrowser/git/.tox/py313-pyqt66/lib/python3.13/site-packages/_pytest/runner.py", line 116 in pytest_runtest_protocol
File "/home/florian/proj/qutebrowser/git/.tox/py313-pyqt66/lib/python3.13/site-packages/pluggy/_callers.py", line 103 in _multicall
File "/home/florian/proj/qutebrowser/git/.tox/py313-pyqt66/lib/python3.13/site-packages/pluggy/_manager.py", line 120 in _hookexec
File "/home/florian/proj/qutebrowser/git/.tox/py313-pyqt66/lib/python3.13/site-packages/pluggy/_hooks.py", line 513 in __call__
File "/home/florian/proj/qutebrowser/git/.tox/py313-pyqt66/lib/python3.13/site-packages/_pytest/main.py", line 364 in pytest_runtestloop
File "/home/florian/proj/qutebrowser/git/.tox/py313-pyqt66/lib/python3.13/site-packages/pluggy/_callers.py", line 103 in _multicall
File "/home/florian/proj/qutebrowser/git/.tox/py313-pyqt66/lib/python3.13/site-packages/pluggy/_manager.py", line 120 in _hookexec
File "/home/florian/proj/qutebrowser/git/.tox/py313-pyqt66/lib/python3.13/site-packages/pluggy/_hooks.py", line 513 in __call__
File "/home/florian/proj/qutebrowser/git/.tox/py313-pyqt66/lib/python3.13/site-packages/_pytest/main.py", line 339 in _main
File "/home/florian/proj/qutebrowser/git/.tox/py313-pyqt66/lib/python3.13/site-packages/_pytest/main.py", line 285 in wrap_session
File "/home/florian/proj/qutebrowser/git/.tox/py313-pyqt66/lib/python3.13/site-packages/_pytest/main.py", line 332 in pytest_cmdline_main
File "/home/florian/proj/qutebrowser/git/.tox/py313-pyqt66/lib/python3.13/site-packages/pluggy/_callers.py", line 103 in _multicall
File "/home/florian/proj/qutebrowser/git/.tox/py313-pyqt66/lib/python3.13/site-packages/pluggy/_manager.py", line 120 in _hookexec
File "/home/florian/proj/qutebrowser/git/.tox/py313-pyqt66/lib/python3.13/site-packages/pluggy/_hooks.py", line 513 in __call__
File "/home/florian/proj/qutebrowser/git/.tox/py313-pyqt66/lib/python3.13/site-packages/_pytest/config/__init__.py", line 178 in main
File "/home/florian/proj/qutebrowser/git/.tox/py313-pyqt66/lib/python3.13/site-packages/_pytest/config/__init__.py", line 206 in console_main
File "/home/florian/proj/qutebrowser/git/.tox/py313-pyqt66/lib/python3.13/site-packages/pytest/__main__.py", line 7 in <module>
File "/usr/lib/python3.13/runpy.py", line 88 in _run_code
File "/usr/lib/python3.13/runpy.py", line 198 in _run_module_as_main
Extension modules: hunter._predicates, hunter._tracer, hunter.vendor._cymem.cymem, hunter._event, markupsafe._speedups, PyQt6.QtCore, PyQt6.QtGui, PyQt6.QtWidgets, PyQt6.QtNetwork, PyQt6.QtPrintSupport, PyQt6.QtQml, PyQt6.QtOpenGL, PyQt6.QtWebChannel, PyQt6.QtWebEngineCore, PyQt6.QtSql, PyQt6.QtWebEngineWidgets, PyQt6.QtTest, PyQt6.QtDBus (total: 18)
```
</details>
C stack:
<details>
```c
#0 0x00007f311c0ab32c in ??? () at /usr/lib/libc.so.6
#1 0x00007f311c05a6c8 in raise () at /usr/lib/libc.so.6
#2 0x00007f311c05a770 in <signal handler called> () at /usr/lib/libc.so.6
#3 0x00007f311c0ab32c in ??? () at /usr/lib/libc.so.6
#4 0x00007f311c05a6c8 in raise () at /usr/lib/libc.so.6
#5 0x00007f311c0424b8 in abort () at /usr/lib/libc.so.6
#6 0x00007f311c0423dc in ??? () at /usr/lib/libc.so.6
#7 0x00007f311c052d46 in __assert_fail () at /usr/lib/libc.so.6
#8 0x00007f311c2a2bd1 in validate_old (gcstate=<optimized out>) at Python/gc.c:432
#9 validate_old (gcstate=gcstate@entry=0x7f311c89deb8 <_PyRuntime+108888>) at Python/gc.c:425
#10 0x00007f311c6076cc in gc_collect_increment (stats=0x7ffc5a234f70, tstate=<optimized out>) at Python/gc.c:1453
#11 _PyGC_Collect
(tstate=tstate@entry=0x7f311c8cb1f0 <_PyRuntime+294032>, generation=generation@entry=1, reason=reason@entry=_Py_GC_REASON_HEAP) at Python/gc.c:1811
#12 0x00007f311c607b4b in _Py_RunGC (tstate=0x7f311c8cb1f0 <_PyRuntime+294032>) at Python/gc.c:2010
#13 _Py_RunGC (tstate=0x7f311c8cb1f0 <_PyRuntime+294032>) at Python/gc.c:2007
#14 _Py_HandlePending (tstate=0x7f311c8cb1f0 <_PyRuntime+294032>) at Python/ceval_gil.c:1244
#15 0x00007f311c289f30 in _PyEval_EvalFrameDefault
(tstate=0x7f311c8cb1f0 <_PyRuntime+294032>, frame=0x7f311cb34ed8, throwflag=6) at Python/generated_cases.c.h:846
#16 0x00007f311c3ece2f in _PyObject_VectorcallDictTstate
(tstate=0x7f311c8cb1f0 <_PyRuntime+294032>, callable=<function at remote 0x7f3106d8d010>, args=<optimized out>, nargsf=<optimized out>, kwargs=<optimized out>) at Objects/call.c:135
#17 0x00007f311c3ed0a6 in _PyObject_Call_Prepend
(tstate=tstate@entry=0x7f311c8cb1f0 <_PyRuntime+294032>, callable=callable@entry=<function at remote 0x7f3106d8d010>, obj=obj@entry=Python Exception <class 'gdb.error'>: There is no member named ready.
, args=args@entry=(), kwargs=kwargs@entry=0x0) at Objects/call.c:504
#18 0x00007f311c41a645 in slot_tp_init (self=Python Exception <class 'gdb.error'>: There is no member named ready.
, args=(), kwds=0x0) at Objects/typeobject.c:9646
#19 0x00007f311c3afafd in type_call (self=<PyQt6.sip.wrappertype at remote 0x64c813562e80>, args=(), kwds=0x0)
at Objects/typeobject.c:1909
#20 0x00007f311c3ec88d in _PyObject_MakeTpCall
(tstate=0x7f311c8cb1f0 <_PyRuntime+294032>, callable=<PyQt6.sip.wrappertype at remote 0x64c813562e80>, args=<optimized out>, nargs=0, keywords=<optimized out>) at Objects/call.c:242
#21 0x00007f311c28d023 in _PyEval_EvalFrameDefault
(tstate=0x7f311c8cb1f0 <_PyRuntime+294032>, frame=0x7f311cb34e78, throwflag=6) at Python/generated_cases.c.h:1837
#22 0x00007f311c3ece98 in _PyObject_VectorcallDictTstate
(tstate=0x7f311c8cb1f0 <_PyRuntime+294032>, callable=<function at remote 0x7f311b519d90>, args=<optimized out>, nargsf=<optimized out>, kwargs=<optimized out>) at Objects/call.c:146
#23 0x00007f311c3ed0a6 in _PyObject_Call_Prepend
(tstate=tstate@entry=0x7f311c8cb1f0 <_PyRuntime+294032>, callable=callable@entry=<function at remote 0x7f311b519d90>, obj=obj@entry=<HookCaller at remote 0x7f311ad507d0>, args=args@entry=(), kwargs=kwargs@entry=Python Exception <class 'gdb.error'>: There is no member named ready.
)
at Objects/call.c:504
#24 0x00007f311c417ff5 in slot_tp_call (self=<HookCaller at remote 0x7f311ad507d0>, args=(), kwds=Python Exception <class 'gdb.error'>: There is no member named ready.
)
at Objects/typeobject.c:9400
#25 0x00007f311c3ec88d in _PyObject_MakeTpCall
(tstate=0x7f311c8cb1f0 <_PyRuntime+294032>, callable=<HookCaller at remote 0x7f311ad507d0>, args=<optimized out>, nargs=0, keywords=<optimized out>) at Objects/call.c:242
#26 0x00007f311c284cc8 in _PyEval_EvalFrameDefault
(tstate=0x7f311c8cb1f0 <_PyRuntime+294032>, frame=0x7f311cb349d8, throwflag=6) at Python/generated_cases.c.h:1500
#27 0x00007f311c3ece98 in _PyObject_VectorcallDictTstate
(tstate=0x7f311c8cb1f0 <_PyRuntime+294032>, callable=<function at remote 0x7f311b519d90>, args=<optimized out>, nargsf=<optimized out>, kwargs=<optimized out>) at Objects/call.c:146
#28 0x00007f311c3ed0a6 in _PyObject_Call_Prepend
(tstate=tstate@entry=0x7f311c8cb1f0 <_PyRuntime+294032>, callable=callable@entry=<function at remote 0x7f311b519d90>, obj=obj@entry=<HookCaller at remote 0x7f311ad52cf0>, args=args@entry=(), kwargs=kwargs@entry=Python Exception <class 'gdb.error'>: There is no member named ready.
)
at Objects/call.c:504
#29 0x00007f311c417ff5 in slot_tp_call (self=<HookCaller at remote 0x7f311ad52cf0>, args=(), kwds=Python Exception <class 'gdb.error'>: There is no member named ready.
)
at Objects/typeobject.c:9400
#30 0x00007f311c40e1ee in _PyObject_Call
(tstate=0x7f311c8cb1f0 <_PyRuntime+294032>, callable=<HookCaller at remote 0x7f311ad52cf0>, args=(), kwargs=<optimized out>) at Objects/call.c:361
#31 0x00007f311c281380 in _PyEval_EvalFrameDefault
(tstate=0x7f311c8cb1f0 <_PyRuntime+294032>, frame=0x7f311cb34008, throwflag=6) at Python/generated_cases.c.h:1353
#32 0x00007f311c3ece98 in _PyObject_VectorcallDictTstate
(tstate=0x7f311c8cb1f0 <_PyRuntime+294032>, callable=<function at remote 0x7f311b519d90>, args=<optimized out>, nargsf=<optimized out>, kwargs=<optimized out>) at Objects/call.c:146
#33 0x00007f311c3ed0a6 in _PyObject_Call_Prepend
(tstate=tstate@entry=0x7f311c8cb1f0 <_PyRuntime+294032>, callable=callable@entry=<function at remote 0x7f311b519d90>, obj=obj@entry=<HookCaller at remote 0x7f311ad52bd0>, args=args@entry=(), kwargs=kwargs@entry=Python Exception <class 'gdb.error'>: There is no member named ready.
)
at Objects/call.c:504
#34 0x00007f311c417ff5 in slot_tp_call (self=<HookCaller at remote 0x7f311ad52bd0>, args=(), kwds=Python Exception <class 'gdb.error'>: There is no member named ready.
)
at Objects/typeobject.c:9400
#35 0x00007f311c3ec88d in _PyObject_MakeTpCall
(tstate=0x7f311c8cb1f0 <_PyRuntime+294032>, callable=<HookCaller at remote 0x7f311ad52bd0>, args=<optimized out>, nargs=0, keywords=<optimized out>) at Objects/call.c:242
#36 0x00007f311c284cc8 in _PyEval_EvalFrameDefault
(tstate=0x7f311c8cb1f0 <_PyRuntime+294032>, frame=0x7f311cb33a10, throwflag=6) at Python/generated_cases.c.h:1500
#37 0x00007f311c3ece98 in _PyObject_VectorcallDictTstate
(tstate=0x7f311c8cb1f0 <_PyRuntime+294032>, callable=<function at remote 0x7f311b519d90>, args=<optimized out>, nargsf=<optimized out>, kwargs=<optimized out>) at Objects/call.c:146
#38 0x00007f311c3ed0a6 in _PyObject_Call_Prepend
(tstate=tstate@entry=0x7f311c8cb1f0 <_PyRuntime+294032>, callable=callable@entry=<function at remote 0x7f311b519d90>, obj=obj@entry=<HookCaller at remote 0x7f311ad52ff0>, args=args@entry=(), kwargs=kwargs@entry=Python Exception <class 'gdb.error'>: There is no member named ready.
)
at Objects/call.c:504
#39 0x00007f311c417ff5 in slot_tp_call (self=<HookCaller at remote 0x7f311ad52ff0>, args=(), kwds=Python Exception <class 'gdb.error'>: There is no member named ready.
)
at Objects/typeobject.c:9400
#40 0x00007f311c3ec88d in _PyObject_MakeTpCall
(tstate=0x7f311c8cb1f0 <_PyRuntime+294032>, callable=<HookCaller at remote 0x7f311ad52ff0>, args=<optimized out>, nargs=0, keywords=<optimized out>) at Objects/call.c:242
#41 0x00007f311c284cc8 in _PyEval_EvalFrameDefault
(tstate=0x7f311c8cb1f0 <_PyRuntime+294032>, frame=0x7f311cb33730, throwflag=6) at Python/generated_cases.c.h:1500
#42 0x00007f311c3ece98 in _PyObject_VectorcallDictTstate
(tstate=0x7f311c8cb1f0 <_PyRuntime+294032>, callable=<function at remote 0x7f311b519d90>, args=<optimized out>, nargsf=<optimized out>, kwargs=<optimized out>) at Objects/call.c:146
#43 0x00007f311c3ed0a6 in _PyObject_Call_Prepend
(tstate=tstate@entry=0x7f311c8cb1f0 <_PyRuntime+294032>, callable=callable@entry=<function at remote 0x7f311b519d90>, obj=obj@entry=<HookCaller at remote 0x7f311ad47410>, args=args@entry=(), kwargs=kwargs@entry=Python Exception <class 'gdb.error'>: There is no member named ready.
)
at Objects/call.c:504
#44 0x00007f311c417ff5 in slot_tp_call (self=<HookCaller at remote 0x7f311ad47410>, args=(), kwds=Python Exception <class 'gdb.error'>: There is no member named ready.
)
at Objects/typeobject.c:9400
#45 0x00007f311c3ec88d in _PyObject_MakeTpCall
(tstate=0x7f311c8cb1f0 <_PyRuntime+294032>, callable=<HookCaller at remote 0x7f311ad47410>, args=<optimized out>, nargs=0, keywords=<optimized out>) at Objects/call.c:242
#46 0x00007f311c284cc8 in _PyEval_EvalFrameDefault
(tstate=0x7f311c8cb1f0 <_PyRuntime+294032>, frame=0x7f311cb332b0, throwflag=6) at Python/generated_cases.c.h:1500
#47 0x00007f311c63f6d6 in PyEval_EvalCode (co=<code at remote 0x7f311af616d0>, globals=<optimized out>, locals=Python Exception <class 'gdb.error'>: There is no member named ready.
)
at Python/ceval.c:598
#48 0x00007f311c651950 in builtin_exec_impl (module=<optimized out>, closure=<optimized out>, locals=Python Exception <class 'gdb.error'>: There is no member named ready.
, globals=Python Exception <class 'gdb.error'>: There is no member named ready.
, source=<code at remote 0x7f311af616d0>)
at Python/bltinmodule.c:1145
#49 builtin_exec (module=<optimized out>, args=<optimized out>, nargs=<optimized out>, kwnames=<optimized out>)
at Python/clinic/bltinmodule.c.h:556
#50 0x00007f311c35a4ea in cfunction_vectorcall_FASTCALL_KEYWORDS
(func=<built-in method exec of module object at remote 0x7f311bbb8410>, args=0x7f311cb33180, nargsf=<optimized out>, kwnames=0x0) at Objects/methodobject.c:441
#51 0x00007f311c3fcce9 in _PyObject_VectorcallTstate
(kwnames=0x0, nargsf=9223372036854775810, args=0x7f311cb33180, callable=<built-in method exec of module object at remote 0x7f311bbb8410>, tstate=0x7f311c8cb1f0 <_PyRuntime+294032>) at ./Include/internal/pycore_call.h:168
#52 PyObject_Vectorcall
(callable=<built-in method exec of module object at remote 0x7f311bbb8410>, args=0x7f311cb33180, nargsf=9223372036854775810, kwnames=0x0) at Objects/call.c:327
#53 0x00007f311c289b59 in _PyEval_EvalFrameDefault
(tstate=0x7f311c8cb1f0 <_PyRuntime+294032>, frame=0x7f311cb330d8, throwflag=6) at Python/generated_cases.c.h:813
#54 0x00007f311c636c04 in PyObject_Call (kwargs=0x0, args=Python Exception <class 'gdb.error'>: There is no member named ready.
, callable=<function at remote 0x7f311b664290>)
at Objects/call.c:373
#55 pymain_run_module (modname=<optimized out>, set_argv0=set_argv0@entry=1) at Modules/main.c:297
#56 0x00007f311c6498d2 in pymain_run_python (exitcode=0x7ffc5a236bd8) at Modules/main.c:633
#57 Py_RunMain () at Modules/main.c:718
#58 0x00007f311c043cd0 in ??? () at /usr/lib/libc.so.6
#59 0x00007f311c043d8a in __libc_start_main () at /usr/lib/libc.so.6
#60 0x000064c810b08055 in _start ()
```
</details>
---
I'm lost here on how to best debug this further. My best bet would be to try and at least get an example that I can run more quickly, and then try and bisect CPython in order to find the offending change. If there are any other guesses or approaches to debug what could be going on here, I'd be happy to dig in further.
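For the bisect idea, a sketch of a `git bisect run` driver (build flags, test path, and repeat count are illustrative placeholders, not the actual reproducer):

```shell
# Exit 0 = good commit, 1 = bad commit, 125 = skip (e.g. build failure).
run_check() {
    # Build the interpreter at the current bisect revision; skip commits
    # that fail to build rather than marking them bad.
    ./configure -q >/dev/null 2>&1 && make -s -j"$(nproc)" >/dev/null 2>&1 || return 125
    # Run the flaky test several times so the corruption has a chance to
    # appear; any failure marks the commit bad.
    for i in 1 2 3 4 5; do
        ./python -m pytest tests/unit -x -q || return 1
    done
    return 0
}

# Usage (from a CPython checkout):
#   git bisect start <bad-rev> <good-rev>
#   git bisect run ./bisect-check.sh
```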
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Linux
### Output from running 'python -VV' on the command line:
Python 3.13.0b1 (main, May 23 2024, 09:21:12) [GCC 13.2.1 20240417]
<!-- gh-linked-prs -->
### Linked PRs
* gh-120731
* gh-120748
<!-- /gh-linked-prs -->
| 00257c746c447a2e026b5a2a618f0e033fb90111 | f385d99f57773e48285e0bcdbcd66dcbfdc647b3 |
python/cpython | python__cpython-121178 | # 1506-007 (S) "struct _dtoa_state" is undefined on AIX
# Bug report
### Bug description:
```text
bash-4.2$ ./configure
checking build system type... powerpc-ibm-aix7.2.3.0
checking host system type... powerpc-ibm-aix7.2.3.0
checking for Python interpreter freezing... ./_bootstrap_python
checking for python3.12... no
checking for python3.12... no
checking for python3.11... no
checking for python3.10... no
checking for python3... python3
checking Python for regen version... Python 3.9.18
checking for pkg-config... no
checking for --enable-universalsdk... no
checking for --with-universal-archs... no
checking MACHDEP... "aix"
checking for gcc... no
checking for cc... cc
checking whether the C compiler works... yes
checking for C compiler default output file name... a.out
checking for suffix of executables...
checking whether we are cross compiling... no
checking for suffix of object files... o
checking whether the compiler supports GNU C... no
checking whether cc accepts -g... yes
checking for cc option to enable C11 features... unsupported
checking for cc option to enable C99 features... -qlanglvl=extc1x
checking how to run the C preprocessor... cc -qlanglvl=extc1x -E
checking for grep that handles long lines and -e... /bin/grep
checking for a sed that does not truncate output... /bin/sed
checking for egrep... /bin/grep -E
checking for CC compiler name... xlc
checking for stdio.h... yes
checking for stdlib.h... yes
checking for string.h... yes
checking for inttypes.h... yes
checking for stdint.h... yes
checking for strings.h... yes
checking for sys/stat.h... yes
checking for sys/types.h... yes
checking for unistd.h... yes
checking for wchar.h... yes
checking for minix/config.h... no
checking whether it is safe to define __EXTENSIONS__... yes
checking whether _XOPEN_SOURCE should be defined... no
checking for c++... no
checking for g++... no
checking for gcc... no
checking for CC... no
checking for cxx... no
checking for cc++... no
checking for cl... no
checking for the platform triplet based on compiler characteristics... none
checking for multiarch...
checking for PEP 11 support tier... configure: WARNING: powerpc-ibm-aix7.2.3.0/xlc is not supported
checking for -Wl,--no-as-needed... no
checking for the Android API level... not Android
checking for --with-emscripten-target...
checking for --enable-wasm-dynamic-linking... missing
checking for --enable-wasm-pthreads... missing
checking for --with-suffix...
checking for case-insensitive build directory... no
checking LIBRARY... libpython$(VERSION)$(ABIFLAGS).a
checking LINKCC... $(PURIFY) $(CC)
checking EXPORTSYMS... Modules/python.exp
checking for GNU ld... no
checking for --enable-shared... no
checking for --with-static-libpython... yes
checking for --enable-profiling... no
checking LDLIBRARY... checking HOSTRUNNER...
libpython$(VERSION)$(ABIFLAGS).a
checking for ar... ar
checking for a BSD-compatible install... ./install-sh -c
checking for a race-free mkdir -p... ./install-sh -c -d
checking for --with-pydebug... no
checking for --with-trace-refs... no
checking for --enable-pystats... no
checking for --with-assertions... no
checking for --enable-optimizations... no
checking PROFILE_TASK... -m test --pgo --timeout=$(TESTTIMEOUT)
checking for --with-lto... no
checking for llvm-profdata... no
checking for --enable-bolt... no
checking BOLT_INSTRUMENT_FLAGS...
checking BOLT_APPLY_FLAGS... -update-debug-sections -reorder-blocks=ext-tsp -reorder-functions=hfsort+ -split-functions -icf=1 -inline-all -split-eh -reorder-functions-use-hot-size -peepholes=none -jump-tables=aggressive -inline-ap -indirect-call-promotion=all -dyno-stats -use-gnu-stack -frame-opt=hot
checking if cc -qlanglvl=extc1x supports -fstrict-overflow and -fno-strict-overflow... yes
checking for --with-strict-overflow... no
checking if cc -qlanglvl=extc1x supports -Og optimization level... yes
checking whether pthreads are available without options... no
checking whether cc -qlanglvl=extc1x accepts -Kpthread... no
checking whether cc -qlanglvl=extc1x accepts -Kthread... no
checking whether cc -qlanglvl=extc1x accepts -pthread... no
checking for alloca.h... yes
checking for asm/types.h... no
checking for bluetooth.h... no
checking for conio.h... no
checking for crypt.h... yes
checking for direct.h... no
checking for dlfcn.h... yes
checking for endian.h... no
checking for errno.h... yes
checking for fcntl.h... yes
checking for grp.h... yes
checking for ieeefp.h... no
checking for io.h... no
checking for langinfo.h... yes
checking for libintl.h... no
checking for libutil.h... no
checking for linux/auxvec.h... no
checking for sys/auxv.h... no
checking for linux/fs.h... no
checking for linux/limits.h... no
checking for linux/memfd.h... no
checking for linux/random.h... no
checking for linux/soundcard.h... no
checking for linux/tipc.h... no
checking for linux/wait.h... no
checking for netdb.h... yes
checking for net/ethernet.h... no
checking for netinet/in.h... yes
checking for netpacket/packet.h... no
checking for poll.h... yes
checking for process.h... no
checking for pthread.h... yes
checking for pty.h... no
checking for sched.h... yes
checking for setjmp.h... yes
checking for shadow.h... no
checking for signal.h... yes
checking for spawn.h... yes
checking for stropts.h... yes
checking for sys/audioio.h... no
checking for sys/bsdtty.h... no
checking for sys/devpoll.h... no
checking for sys/endian.h... no
checking for sys/epoll.h... no
checking for sys/event.h... no
checking for sys/eventfd.h... no
checking for sys/file.h... yes
checking for sys/ioctl.h... yes
checking for sys/kern_control.h... no
checking for sys/loadavg.h... no
checking for sys/lock.h... yes
checking for sys/memfd.h... no
checking for sys/mkdev.h... no
checking for sys/mman.h... yes
checking for sys/modem.h... no
checking for sys/param.h... yes
checking for sys/poll.h... yes
checking for sys/random.h... no
checking for sys/resource.h... yes
checking for sys/select.h... yes
checking for sys/sendfile.h... no
checking for sys/socket.h... yes
checking for sys/soundcard.h... no
checking for sys/stat.h... (cached) yes
checking for sys/statvfs.h... yes
checking for sys/sys_domain.h... no
checking for sys/syscall.h... no
checking for sys/sysmacros.h... yes
checking for sys/termio.h... yes
checking for sys/time.h... yes
checking for sys/times.h... yes
checking for sys/types.h... (cached) yes
checking for sys/uio.h... yes
checking for sys/un.h... yes
checking for sys/utsname.h... yes
checking for sys/wait.h... yes
checking for sys/xattr.h... no
checking for sysexits.h... yes
checking for syslog.h... yes
checking for termios.h... yes
checking for util.h... no
checking for utime.h... yes
checking for utmp.h... yes
checking for dirent.h that defines DIR... yes
checking for library containing opendir... none required
checking for sys/mkdev.h... (cached) no
checking for sys/sysmacros.h... (cached) yes
checking for bluetooth/bluetooth.h... no
checking for net/if.h... yes
checking for linux/netlink.h... no
checking for linux/qrtr.h... no
checking for linux/vm_sockets.h... no
checking for linux/can.h... no
checking for linux/can/bcm.h... no
checking for linux/can/j1939.h... no
checking for linux/can/raw.h... no
checking for netcan/can.h... no
checking for clock_t in time.h... yes
checking for makedev... yes
checking for le64toh... no
checking for mode_t... yes
checking for off_t... yes
checking for pid_t... yes
checking for size_t... yes
checking for uid_t in sys/types.h... yes
checking for ssize_t... yes
checking for __uint128_t... no
checking size of int... 4
checking size of long... 4
checking alignment of long... 4
checking size of long long... 8
checking size of void *... 4
checking size of short... 2
checking size of float... 4
checking size of double... 8
checking size of fpos_t... 8
checking size of size_t... 4
checking alignment of size_t... 4
checking size of pid_t... 4
checking size of uintptr_t... 4
checking alignment of max_align_t... 4
checking for long double... yes
checking size of long double... 8
checking size of _Bool... 1
checking size of off_t... 8
checking whether to enable large file support... yes
checking size of time_t... 4
checking for pthread_t... yes
checking size of pthread_t... 4
checking size of pthread_key_t... 4
checking whether pthread_key_t is compatible with int... yes
checking for --enable-framework... no
checking for --with-dsymutil... no
checking for dyld... no
checking for --with-address-sanitizer... no
checking for --with-memory-sanitizer... no
checking for --with-undefined-behavior-sanitizer... no
checking for --with-thread-sanitizer... no
checking the extension of shared libraries... .so
checking LDSHARED... $(LIBPL)/ld_so_aix $(CC) -bI:$(LIBPL)/python.exp
checking BLDSHARED flags... Modules/ld_so_aix $(CC) -bI:Modules/python.exp
checking CCSHARED...
checking LINKFORSHARED... -Wl,-bE:Modules/python.exp -lld
checking CFLAGSFORSHARED...
checking SHLIBS... $(LIBS)
checking perf trampoline... no
checking for sendfile in -lsendfile... no
checking for dlopen in -ldl... yes
checking for shl_load in -ldld... no
checking for uuid.h... yes
checking for uuid_create... yes
checking for uuid_enc_be... no
checking for library containing sem_init... none required
checking for textdomain in -lintl... yes
checking for genuine AIX C++ extensions support... yes
checking for the system builddate... 2113
checking aligned memory access is required... no
checking for --with-hash-algorithm... default
checking for --with-tzpath... "/usr/share/zoneinfo:/usr/lib/zoneinfo:/usr/share/lib/zoneinfo:/etc/zoneinfo"
checking for t_open in -lnsl... no
checking for socket in -lsocket... no
checking for --with-libs... no
checking for --with-system-expat... no
checking for libffi... no
checking for ffi.h... no
checking for --with-system-libmpdec... no
checking for --with-decimal-contextvar... yes
checking for decimal libmpdec machine... ansi32
checking for libnsl... no
checking for library containing yp_match... none required
checking for rpc/rpc.h... yes
checking for sqlite3 >= 3.7.15... no
checking for sqlite3.h... no
checking for --enable-loadable-sqlite-extensions... no
checking for gdbm.h... no
checking for ndbm.h... yes
checking for library containing dbm_open... none required
checking for ndbm presence and linker args... yes ()
checking for gdbm/ndbm.h... no
checking for gdbm-ndbm.h... no
checking for db.h... no
checking for --with-dbmliborder... gdbm:ndbm:bdb
checking for _dbm module CFLAGS and LIBS... -DUSE_NDBM
checking for _POSIX_THREADS in unistd.h... yes
checking for pthread_create in -lpthread... yes
checking for usconfig in -lmpc... no
checking if PTHREAD_SCOPE_SYSTEM is supported... yes
checking for pthread_sigmask... yes
checking for pthread_getcpuclockid... yes
checking if --enable-ipv6 is specified... yes
checking if RFC2553 API is available... yes
checking ipv6 stack type... unknown
checking CAN_RAW_FD_FRAMES... no
checking for CAN_RAW_JOIN_FILTERS... no
checking for --with-doc-strings... yes
checking for --with-pymalloc... yes
checking for --with-freelists... yes
checking for --with-c-locale-coercion... yes
checking for --with-valgrind... no
checking for --with-dtrace... no
checking for dlopen... yes
checking DYNLOADFILE... dynload_shlib.o
checking MACHDEP_OBJS... none
checking for accept4... no
checking for alarm... yes
checking for bind_textdomain_codeset... yes
checking for chmod... yes
checking for chown... yes
checking for clock... yes
checking for close_range... no
checking for confstr... yes
checking for copy_file_range... no
checking for ctermid... yes
checking for dup... yes
checking for dup3... no
checking for execv... yes
checking for explicit_bzero... no
checking for explicit_memset... no
checking for faccessat... yes
checking for fchmod... yes
checking for fchmodat... yes
checking for fchown... yes
checking for fchownat... yes
checking for fdopendir... yes
checking for fdwalk... no
checking for fexecve... yes
checking for fork... yes
checking for fork1... no
checking for fpathconf... yes
checking for fstatat... yes
checking for ftime... yes
checking for ftruncate... yes
checking for futimens... yes
checking for futimes... no
checking for futimesat... no
checking for gai_strerror... yes
checking for getegid... yes
checking for getentropy... no
checking for geteuid... yes
checking for getgid... yes
checking for getgrgid... yes
checking for getgrgid_r... yes
checking for getgrnam_r... yes
checking for getgrouplist... no
checking for getgroups... yes
checking for gethostname... yes
checking for getitimer... yes
checking for getloadavg... no
checking for getlogin... yes
checking for getpeername... yes
checking for getpgid... yes
checking for getpid... yes
checking for getppid... yes
checking for getpriority... yes
checking for _getpty... no
checking for getpwent... yes
checking for getpwnam_r... yes
checking for getpwuid... yes
checking for getpwuid_r... yes
checking for getresgid... no
checking for getresuid... no
checking for getrusage... yes
checking for getsid... yes
checking for getspent... no
checking for getspnam... no
checking for getuid... yes
checking for getwd... yes
checking for if_nameindex... yes
checking for initgroups... yes
checking for kill... yes
checking for killpg... yes
checking for lchown... yes
checking for linkat... yes
checking for lockf... yes
checking for lstat... yes
checking for lutimes... no
checking for madvise... yes
checking for mbrtowc... yes
checking for memrchr... no
checking for mkdirat... yes
checking for mkfifo... yes
checking for mkfifoat... yes
checking for mknod... yes
checking for mknodat... yes
checking for mktime... yes
checking for mmap... yes
checking for mremap... no
checking for nice... yes
checking for openat... yes
checking for opendir... yes
checking for pathconf... yes
checking for pause... yes
checking for pipe... yes
checking for pipe2... no
checking for plock... yes
checking for poll... yes
checking for posix_fadvise... yes
checking for posix_fallocate... yes
checking for posix_spawn... yes
checking for posix_spawnp... yes
checking for pread... yes
checking for preadv... yes
checking for preadv2... no
checking for pthread_condattr_setclock... yes
checking for pthread_init... yes
checking for pthread_kill... yes
checking for pwrite... yes
checking for pwritev... yes
checking for pwritev2... no
checking for readlink... yes
checking for readlinkat... yes
checking for readv... yes
checking for realpath... yes
checking for renameat... yes
checking for rtpSpawn... no
checking for sched_get_priority_max... yes
checking for sched_rr_get_interval... yes
checking for sched_setaffinity... no
checking for sched_setparam... yes
checking for sched_setscheduler... yes
checking for sem_clockwait... no
checking for sem_getvalue... yes
checking for sem_open... yes
checking for sem_timedwait... yes
checking for sem_unlink... yes
checking for sendfile... no
checking for setegid... yes
checking for seteuid... yes
checking for setgid... yes
checking for sethostname... yes
checking for setitimer... yes
checking for setlocale... yes
checking for setpgid... yes
checking for setpgrp... yes
checking for setpriority... yes
checking for setregid... yes
checking for setresgid... no
checking for setresuid... no
checking for setreuid... yes
checking for setsid... yes
checking for setuid... yes
checking for setvbuf... yes
checking for shutdown... yes
checking for sigaction... yes
checking for sigaltstack... yes
checking for sigfillset... yes
checking for siginterrupt... yes
checking for sigpending... yes
checking for sigrelse... yes
checking for sigtimedwait... yes
checking for sigwait... yes
checking for sigwaitinfo... yes
checking for snprintf... yes
checking for splice... yes
checking for strftime... yes
checking for strlcpy... no
checking for strsignal... yes
checking for symlinkat... yes
checking for sync... yes
checking for sysconf... yes
checking for system... yes
checking for tcgetpgrp... yes
checking for tcsetpgrp... yes
checking for tempnam... yes
checking for timegm... no
checking for times... yes
checking for tmpfile... yes
checking for tmpnam... yes
checking for tmpnam_r... no
checking for truncate... yes
checking for ttyname... yes
checking for umask... yes
checking for uname... yes
checking for unlinkat... yes
checking for utimensat... yes
checking for utimes... yes
checking for vfork... yes
checking for wait... yes
checking for wait3... yes
checking for wait4... yes
checking for waitid... yes
checking for waitpid... yes
checking for wcscoll... yes
checking for wcsftime... yes
checking for wcsxfrm... yes
checking for wmemcmp... yes
checking for writev... yes
checking for lchmod... no
checking for cc -qlanglvl=extc1x options needed to detect all undeclared functions... none needed
checking whether dirfd is declared... yes
checking for chroot... yes
checking for link... yes
checking for symlink... yes
checking for fchdir... yes
checking for fsync... yes
checking for fdatasync... yes
checking for epoll_create... no
checking for epoll_create1... no
checking for kqueue... no
checking for prlimit... no
checking for _dyld_shared_cache_contains_path... no
checking for memfd_create... no
checking for eventfd... no
checking for ctermid_r... no
checking for flock declaration... yes
checking for flock... no
checking for flock in -lbsd... yes
checking for getpagesize... yes
checking for broken unsetenv... no
checking for true... true
checking for inet_aton in -lc... yes
checking for chflags... no
checking for lchflags... no
checking for zlib >= 1.2.0... no
checking for zlib.h... no
checking for bzip2... no
checking for bzlib.h... no
checking for liblzma... no
checking for lzma.h... no
checking for hstrerror... no
checking for getservbyname... yes
checking for getservbyport... yes
checking for gethostbyname... yes
checking for gethostbyaddr... yes
checking for getprotobyname... yes
checking for inet_aton... yes
checking for inet_ntoa... yes
checking for inet_pton... yes
checking for getpeername... (cached) yes
checking for getsockname... yes
checking for accept... yes
checking for bind... yes
checking for connect... yes
checking for listen... yes
checking for recvfrom... yes
checking for sendto... yes
checking for setsockopt... yes
checking for socket... yes
checking for setgroups... yes
checking for openpty... no
checking for openpty in -lutil... no
checking for openpty in -lbsd... no
checking for library containing login_tty... no
checking for forkpty... no
checking for forkpty in -lutil... no
checking for forkpty in -lbsd... no
checking for fseek64... no
checking for fseeko... yes
checking for fstatvfs... yes
checking for ftell64... no
checking for ftello... yes
checking for statvfs... yes
checking for dup2... yes
checking for getpgrp... yes
checking for setpgrp... (cached) yes
checking for setns... no
checking for unshare... no
checking for libxcrypt >= 3.1.1... no
checking for library containing crypt_r... none required
checking for crypt or crypt_r... yes
checking for clock_gettime... yes
checking for clock_getres... yes
checking for clock_settime... yes
checking for clock_nanosleep... yes
checking for nanosleep... yes
checking for major, minor, and makedev... yes
checking for getaddrinfo... yes
checking getaddrinfo bug... no
checking for getnameinfo... yes
checking whether struct tm is in sys/time.h or time.h... time.h
checking for struct tm.tm_zone... no
checking whether tzname is declared... yes
checking for tzname... yes
checking for struct stat.st_rdev... yes
checking for struct stat.st_blksize... yes
checking for struct stat.st_flags... no
checking for struct stat.st_gen... yes
checking for struct stat.st_birthtime... no
checking for struct stat.st_blocks... yes
checking for struct passwd.pw_gecos... yes
checking for struct passwd.pw_passwd... yes
checking for siginfo_t.si_band... yes
checking for time.h that defines altzone... no
checking for addrinfo... yes
checking for sockaddr_storage... yes
checking for sockaddr_alg... no
checking for an ANSI C-conforming const... yes
checking for working signed char... yes
checking for prototypes... yes
checking for socketpair... yes
checking if sockaddr has sa_len member... yes
checking for gethostbyname_r... yes
checking gethostbyname_r with 6 args... no
checking gethostbyname_r with 5 args... no
checking gethostbyname_r with 3 args... yes
checking for __fpu_control... no
checking for __fpu_control in -lieee... no
checking for --with-libm=STRING... default LIBM="-lm"
checking for --with-libc=STRING... default LIBC=""
checking for x64 gcc inline assembler... no
checking whether float word ordering is bigendian... yes
checking whether we can use gcc inline assembler to get and set x87 control word... no
checking whether we can use gcc inline assembler to get and set mc68881 fpcr... no
checking for x87-style double rounding... yes
checking for acosh... yes
checking for asinh... yes
checking for atanh... yes
checking for erf... yes
checking for erfc... yes
checking for expm1... yes
checking for log1p... yes
checking for log2... yes
checking whether POSIX semaphores are enabled... yes
checking for broken sem_getvalue... no
checking whether RTLD_LAZY is declared... yes
checking whether RTLD_NOW is declared... yes
checking whether RTLD_GLOBAL is declared... yes
checking whether RTLD_LOCAL is declared... yes
checking whether RTLD_NODELETE is declared... no
checking whether RTLD_NOLOAD is declared... no
checking whether RTLD_DEEPBIND is declared... no
checking whether RTLD_MEMBER is declared... yes
checking digit size for Python's longs... no value specified
checking for wchar.h... (cached) yes
checking size of wchar_t... 2
checking whether wchar_t is signed... no
checking whether wchar_t is usable... yes
checking whether byte ordering is bigendian... yes
checking ABIFLAGS...
checking SOABI... cpython-312
checking LDVERSION... $(VERSION)$(ABIFLAGS)
checking for --with-platlibdir... no
checking for --with-wheel-pkg-dir... no
checking whether right shift extends the sign bit... yes
checking for getc_unlocked() and friends... yes
checking for readline... no
checking for readline/readline.h... no
checking how to link readline... no
checking for broken nice()... no
checking for broken poll()... no
checking for working tzset()... yes
checking for tv_nsec in struct stat... yes
checking for tv_nsec2 in struct stat... no
checking for curses.h... yes
checking for ncurses.h... no
checking curses module flags... no
checking for panel.h... no
checking panel flags... no
checking for term.h... yes
checking whether mvwdelch is an expression... yes
checking whether WINDOW has _flags... yes
checking for curses function is_pad... no
checking for curses function is_term_resized... no
checking for curses function resize_term... no
checking for curses function resizeterm... no
checking for curses function immedok... yes
checking for curses function syncok... yes
checking for curses function wchgat... yes
checking for curses function filter... yes
checking for curses function has_key... no
checking for curses function typeahead... yes
checking for curses function use_env... yes
configure: checking for device files
checking for /dev/ptmx... no
checking for /dev/ptc... yes
checking for socklen_t... yes
checking for broken mbstowcs... no
checking for --with-computed-gotos... no value specified
checking whether cc -qlanglvl=extc1x supports computed gotos... yes
checking for build directories... done
checking for -O2... yes
checking for glibc _FORTIFY_SOURCE/memmove bug... no
checking for stdatomic.h... no
checking for builtin __atomic_load_n and __atomic_store_n functions... no
checking for ensurepip... upgrade
checking if the dirent structure of a d_type field... no
checking for the Linux getrandom() syscall... no
checking for the getrandom() function... no
checking for library containing shm_open... none required
checking for shm_open... yes
checking for shm_unlink... yes
checking for pkg-config... no
checking for include/openssl/ssl.h in /usr/local/ssl... no
checking for include/openssl/ssl.h in /usr/lib/ssl... no
checking for include/openssl/ssl.h in /usr/ssl... no
checking for include/openssl/ssl.h in /usr/pkg... no
checking for include/openssl/ssl.h in /usr/local... no
checking for include/openssl/ssl.h in /usr... yes
checking whether compiling and linking against OpenSSL works... yes
checking for --with-openssl-rpath...
checking whether OpenSSL provides required ssl module APIs... yes
checking whether OpenSSL provides required hashlib module APIs... yes
checking for --with-ssl-default-suites... python
checking for --with-builtin-hashlib-hashes... md5,sha1,sha2,sha3,blake2
checking for libb2... no
checking for --disable-test-modules... yes
checking for stdlib extension module _multiprocessing... yes
checking for stdlib extension module _posixshmem... yes
checking for stdlib extension module fcntl... yes
checking for stdlib extension module mmap... yes
checking for stdlib extension module _socket... yes
checking for stdlib extension module grp... yes
checking for stdlib extension module ossaudiodev... missing
checking for stdlib extension module pwd... yes
checking for stdlib extension module resource... yes
checking for stdlib extension module _scproxy... n/a
checking for stdlib extension module spwd... n/a
checking for stdlib extension module syslog... yes
checking for stdlib extension module termios... yes
checking for stdlib extension module pyexpat... yes
checking for stdlib extension module _elementtree... yes
checking for stdlib extension module _md5... yes
checking for stdlib extension module _sha1... yes
checking for stdlib extension module _sha2... yes
checking for stdlib extension module _sha3... yes
checking for stdlib extension module _blake2... yes
checking for stdlib extension module _crypt... yes
checking for stdlib extension module _ctypes... missing
checking for stdlib extension module _curses... missing
checking for stdlib extension module _curses_panel... missing
checking for stdlib extension module _decimal... yes
checking for stdlib extension module _dbm... yes
checking for stdlib extension module _gdbm... missing
checking for stdlib extension module nis... yes
checking for stdlib extension module readline... missing
checking for stdlib extension module _sqlite3... disabled
checking for stdlib extension module _tkinter... missing
checking for stdlib extension module _uuid... yes
checking for stdlib extension module zlib... missing
checking for stdlib extension module _bz2... missing
checking for stdlib extension module _lzma... missing
checking for stdlib extension module _ssl... yes
checking for stdlib extension module _hashlib... yes
checking for stdlib extension module _testcapi... yes
checking for stdlib extension module _testclinic... yes
checking for stdlib extension module _testinternalcapi... yes
checking for stdlib extension module _testbuffer... yes
checking for stdlib extension module _testimportmultiple... yes
checking for stdlib extension module _testmultiphase... yes
checking for stdlib extension module xxsubtype... yes
checking for stdlib extension module _xxtestfuzz... yes
checking for stdlib extension module _ctypes_test... missing
checking for stdlib extension module xxlimited... yes
checking for stdlib extension module xxlimited_35... yes
configure: creating ./config.status
config.status: creating Makefile.pre
config.status: creating Misc/python.pc
config.status: creating Misc/python-embed.pc
config.status: creating Misc/python-config.sh
config.status: creating Modules/Setup.bootstrap
config.status: creating Modules/Setup.stdlib
config.status: creating Modules/ld_so_aix
config.status: creating pyconfig.h
configure: creating Modules/Setup.local
configure: creating Makefile
configure: WARNING: pkg-config is missing. Some dependencies may not be detected correctly.
configure:
If you want a release build with all stable optimizations active (PGO, etc),
please run ./configure --enable-optimizations
configure: WARNING:
Platform "powerpc-ibm-aix7.2.3.0" with compiler "xlc" is not supported by the
CPython core team, see https://peps.python.org/pep-0011/ for more information.
bash-4.2$
bash-4.2$ gmake
xlc_r -q64 -c -fno-strict-overflow -DNDEBUG -O -I/vob/thirdparty_1/src/python/python312x/inst-ssl/usr/include -I/usr/include -I/opt/freeware/include/ -qalias=noansi -qmaxmem=-1 -I./Include/internal -I. -I./Include -I /vob/peopletools/src/python/python312x/inst-ffi/usr/include/ -DPy_BUILD_CORE -o Programs/python.o ./Programs/python.c
xlc_r -q64 -c -fno-strict-overflow -DNDEBUG -O -I/vob/thirdparty_1/src/python/python312x/inst-ssl/usr/include -I/usr/include -I/opt/freeware/include/ -qalias=noansi -qmaxmem=-1 -I./Include/internal -I. -I./Include -I /vob/peopletools/src/python/python312x/inst-ffi/usr/include/ -DPy_BUILD_CORE -o Parser/token.o Parser/token.c
xlc_r -q64 -c -fno-strict-overflow -DNDEBUG -O -I/vob/thirdparty_1/src/python/python312x/inst-ssl/usr/include -I/usr/include -I/opt/freeware/include/ -qalias=noansi -qmaxmem=-1 -I./Include/internal -I. -I./Include -I /vob/peopletools/src/python/python312x/inst-ffi/usr/include/ -DPy_BUILD_CORE -o Parser/pegen.o Parser/pegen.c
"./Include/internal/pycore_interp.h", line 193.12: 1506-007 (S) "struct _dtoa_state" is undefined.
gmake: *** [Parser/pegen.o] Error 1
bash-4.2$
```
### CPython versions tested on:
3.12
### Operating systems tested on:
Other
<!-- gh-linked-prs -->
### Linked PRs
* gh-121178
* gh-121179
* gh-121180
<!-- /gh-linked-prs -->
| c3677befbecbd7fa94cde8c1fecaa4cc18e6aa2b | 48cd104b0cf05dad8958efa9cb9666c029ef9201 |
python/cpython | python__cpython-119493 | # pyrepl always has `from __future__ import annotations` on
# Bug report
### Bug description:
```
% ./python.exe
Python 3.14.0a0 (heads/main:e12a6780bb, May 22 2024, 13:54:40) [Clang 14.0.0 (clang-1400.0.29.202)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> def f(x: y): pass
...
>>> f.__annotations__
{'x': 'y'}
```
This likely happens because the code is exec()'ed in a file that has the future import; exec() inherits the `__future__` state from the file where it's called (a questionable feature but that's how it is).
For what it's worth, PEP 649 explicitly says the REPL should continue to use "stock" semantics: https://peps.python.org/pep-0649/#interactive-repl-shell.
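The flag-inheritance mechanism can be reproduced directly with `compile()`, which accepts the future flags explicitly (a minimal sketch; the flag value comes from the stdlib `__future__` module):

```python
import __future__ as future_mod

ns = {}
# Compiling with the `annotations` future flag mimics exec() running in a
# module that has `from __future__ import annotations`: annotations stay
# strings, so the undefined name `y` never gets looked up.
code = compile("def f(x: y): pass", "<repl>", "exec",
               flags=future_mod.annotations.compiler_flag)
exec(code, ns)
print(ns["f"].__annotations__)  # {'x': 'y'}
```

Without the flag, pre-PEP 649 interpreters would evaluate the annotation eagerly and raise `NameError` at definition time; under PEP 649 the lookup is deferred until `__annotations__` is accessed.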
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
macOS
```[tasklist]
### Tasks
```
<!-- gh-linked-prs -->
### Linked PRs
* gh-119493
* gh-119697
<!-- /gh-linked-prs -->
| a8e35e8ebad8c3bb44d14968aa05d1acbc028247 | 548a11d5cf1dbb32d86ce0c045130c77f50c1427 |
python/cpython | python__cpython-119435 | # REPL takes extra key presses on long inputs that wrap around lines
# Bug report
### Bug description:
The new REPL has an issue when outputting at the right edge of the console window. It tries to output a "\" at the end and then print the typed character at the beginning of the next line. This works fine on the first line, but on the 2nd line the "\" + character doesn't show up until after you press a 2nd key, on the 3rd line it takes 3 key presses, and so on.
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-119435
* gh-119441
<!-- /gh-linked-prs -->
| e3bf5381fd056d0bbdd775463e3724aab2012e45 | 9b422fc6af87b81812aaf3010c004eb27c4dc280 |
python/cpython | python__cpython-119444 | # ``test_monitoring`` leaks references
# Bug report
### Bug description:
```text
./python -m test -R 3:3 test_monitoring
Using random seed: 1027068683
0:00:00 load avg: 0.28 Run 1 test sequentially
0:00:00 load avg: 0.28 [1/1] test_monitoring
beginning 6 repetitions. Showing number of leaks (. for 0 or less, X for 10 or more)
123:456
XXX XXX
test_monitoring leaked [18, 18, 18] references, sum=54
test_monitoring leaked [16, 16, 16] memory blocks, sum=48
test_monitoring failed (reference leak)
== Tests result: FAILURE ==
1 test failed:
test_monitoring
Total duration: 696 ms
Total tests: run=77
Total test files: run=1/1 failed=1
Result: FAILURE
```
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-119444
<!-- /gh-linked-prs -->
| c85e3526736d1cf8226686fdf4f5117e105a7b13 | e3f5a4455346747fe98f6aeefc74b46048d09bda |
python/cpython | python__cpython-119392 | # The docstrings for PyMapping_Values, PyMapping_Items and PyMapping_Keys are incorrect
# Documentation
The docstring for `PyMapping_Keys`
> /* On success, return a list or tuple of the keys in mapping object 'o'.
does not reflect the documentation (see https://docs.python.org/3/c-api/mapping.html#c.PyMapping_Keys) or the implementation (see [abstract.c#L2519](https://github.com/python/cpython/blob/858b9e85fcdd495947c9e892ce6e3734652c48f2/Objects/abstract.c#L2519)), as `PyMapping_Keys` can only return a list. The same applies to the other two functions.
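The list-only behavior is easy to confirm from Python via `ctypes` (a quick check for illustration, not part of any fix):

```python
import ctypes

# Call the C-API function directly. For a dict it goes through
# PyDict_Keys, and for other mappings the result of the keys() call is
# forced into a list via PySequence_List, so a tuple is never returned.
PyMapping_Keys = ctypes.pythonapi.PyMapping_Keys
PyMapping_Keys.restype = ctypes.py_object
PyMapping_Keys.argtypes = [ctypes.py_object]

print(type(PyMapping_Keys({"a": 1, "b": 2})))  # <class 'list'>
```

This only works on CPython, since `ctypes.pythonapi` exposes the interpreter's own C symbols.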
The docstrings show up in IDEs (such as Visual Studio Code) and can cause confusion. I encountered this when working on making the json module compatible with free-threading.
<!-- gh-linked-prs -->
### Linked PRs
* gh-119392
<!-- /gh-linked-prs -->
| d472b4f9fa4fb6061588d421f33a0388a2005bc6 | 858b9e85fcdd495947c9e892ce6e3734652c48f2 |
python/cpython | python__cpython-119457 | # Invalid corner cases (resulting in nan+nanj) in complex division
# Bug report
### Bug description:
Reproducer:
```pycon
>>> (1+1j)/(cmath.inf+0j) # ok
0j
>>> (1+1j)/cmath.infj # ok
-0j
>>> (1+1j)/(cmath.inf+cmath.infj) # should be 0j, but isn't
(nan+nanj)
```
c.f. MPC:
```pycon
>>> gmpy2.mpc('(1+1j)')/gmpy2.mpc('(inf+infj)')
mpc('0.0+0.0j')
```
It seems there are not so many such cases (i.e. when the numerator has infinite components and the denominator is finite, or when the numerator is finite and the denominator has infinite components). The following patch (mostly taken literally from the example ``_Cdivd()`` implementation in C11 Annex G.5.1, except for the ``denom == 0.0`` case) should fix the issue:
```diff
diff --git a/Objects/complexobject.c b/Objects/complexobject.c
index b3d57b8337..9041cbb8ae 100644
--- a/Objects/complexobject.c
+++ b/Objects/complexobject.c
@@ -122,6 +122,27 @@ _Py_c_quot(Py_complex a, Py_complex b)
/* At least one of b.real or b.imag is a NaN */
r.real = r.imag = Py_NAN;
}
+
+ /* Recover infinities and zeros that computed as nan+nanj */
+ if (isnan(r.real) && isnan(r.imag)) {
+ if ((isinf(a.real) || isinf(a.imag))
+ && isfinite(b.real) && isfinite(b.imag))
+ {
+ const double x = copysign(isinf(a.real) ? 1.0 : 0.0, a.real);
+ const double y = copysign(isinf(a.imag) ? 1.0 : 0.0, a.imag);
+ r.real = Py_INFINITY * (x*b.real + y*b.imag);
+ r.imag = Py_INFINITY * (y*b.real - x*b.imag);
+ }
+ else if ((fmax(abs_breal, abs_bimag) == Py_INFINITY)
+ && isfinite(a.real) && isfinite(a.imag))
+ {
+ const double x = copysign(isinf(b.real) ? 1.0 : 0.0, b.real);
+ const double y = copysign(isinf(b.imag) ? 1.0 : 0.0, b.imag);
+ r.real = 0.0 * (a.real*x + a.imag*y);
+ r.imag = 0.0 * (a.imag*x - a.real*y);
+ }
+ }
+
return r;
}
```
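To make the patch's second branch concrete, here is a rough Python transcription of the zero-recovery rule for a finite numerator divided by an infinite denominator (the function name is mine, not from the patch):

```python
import math

def recover_zero(a: complex, b: complex) -> complex:
    """Sketch of the recovery when a is finite and b has an infinite part:
    the quotient should underflow to a (signed) zero, not nan+nanj."""
    x = math.copysign(1.0 if math.isinf(b.real) else 0.0, b.real)
    y = math.copysign(1.0 if math.isinf(b.imag) else 0.0, b.imag)
    return complex(0.0 * (a.real * x + a.imag * y),
                   0.0 * (a.imag * x - a.real * y))

print(recover_zero(1 + 1j, complex(math.inf, math.inf)))  # 0j
```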
Perhaps we could also clean up the ending remark in the ``_Py_c_quot()`` comment:
https://github.com/python/cpython/blob/d065edfb66470bbf06367b3570661d0346aa6707/Objects/complexobject.c#L91-L92
I'll prepare a PR if this issue is accepted.
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-119457
<!-- /gh-linked-prs -->
| 2cb84b107ad136eafb6e3d69145b7bdaefcca879 | 0a1e8ff9c15675fdc4d07fa6c59f83808bf00798 |
python/cpython | python__cpython-119528 | # Deadlock in `pool_in_threads.py` (`test_multiprocessing_pool_circular_import`) in free-threaded build
# Bug report
Running `pool_in_threads.py` in a loop will occasionally lead to deadlock even with #118745 applied.
Here's a summary of the state I observed:
### Thread 28
Thread 28 holds the GIL and is blocked on the QSBR shared mutex:
```
#6 0x000055989dd7d8a6 in _PySemaphore_PlatformWait (sema=sema@entry=0x7f8ba4ff8c60, timeout=timeout@entry=-1) at Python/parking_lot.c:142
#7 0x000055989dd7d9dc in _PySemaphore_Wait (sema=sema@entry=0x7f8ba4ff8c60, timeout=timeout@entry=-1, detach=detach@entry=1) at Python/parking_lot.c:213
#8 0x000055989dd7db65 in _PyParkingLot_Park (addr=addr@entry=0x55989e0db1c8 <_PyRuntime+136840>, expected=expected@entry=0x7f8ba4ff8cf7, size=size@entry=1, timeout_ns=timeout_ns@entry=-1, park_arg=park_arg@entry=0x7f8ba4ff8d00, detach=detach@entry=1) at Python/parking_lot.c:316
#9 0x000055989dd7494d in _PyMutex_LockTimed (m=m@entry=0x55989e0db1c8 <_PyRuntime+136840>, timeout=timeout@entry=-1, flags=flags@entry=_PY_LOCK_DETACH) at Python/lock.c:112
#10 0x000055989dd74a4e in _PyMutex_LockSlow (m=m@entry=0x55989e0db1c8 <_PyRuntime+136840>) at Python/lock.c:53
#11 0x000055989dd9c8e1 in PyMutex_Lock (m=0x55989e0db1c8 <_PyRuntime+136840>) at ./Include/internal/pycore_lock.h:75
#12 _Py_qsbr_unregister (tstate=tstate@entry=0x7f8bec00bad0) at Python/qsbr.c:239
#13 0x000055989dd92142 in tstate_delete_common (tstate=tstate@entry=0x7f8bec00bad0) at Python/pystate.c:1797
#14 0x000055989dd92831 in _PyThreadState_DeleteCurrent (tstate=tstate@entry=0x7f8bec00bad0) at Python/pystate.c:1845
```
It's blocked trying to lock the QSBR shared mutex:
https://github.com/python/cpython/blob/9fa206aaeccc979a4bd03852ba38c045294a3d6f/Python/qsbr.c#L194
Normally, this would release the GIL while blocking, but we clear the thread state before calling `tstate_delete_common`:
https://github.com/python/cpython/blob/9fa206aaeccc979a4bd03852ba38c045294a3d6f/Python/pystate.c#L1844-L1845
We only detach (and release the GIL) if we both have a thread state and it's currently attached:
https://github.com/python/cpython/blob/9fa206aaeccc979a4bd03852ba38c045294a3d6f/Python/parking_lot.c#L204-L207
Note that the PyThreadState in this case is actually attached, just not visible from `_PyThreadState_GET()`.
### Thread 4
Thread 4 holds the shared mutex and is blocked trying to acquire the GIL:
```
#6 take_gil (tstate=tstate@entry=0x55989eeb6e00) at Python/ceval_gil.c:331
#7 0x000055989dd4c93c in _PyEval_AcquireLock (tstate=tstate@entry=0x55989eeb6e00) at Python/ceval_gil.c:585
#8 0x000055989dd92ca3 in _PyThreadState_Attach (tstate=tstate@entry=0x55989eeb6e00) at Python/pystate.c:2071
#9 0x000055989dd4c9bb in PyEval_AcquireThread (tstate=tstate@entry=0x55989eeb6e00) at Python/ceval_gil.c:602
#10 0x000055989dd7d9eb in _PySemaphore_Wait (sema=sema@entry=0x7f8bf3ffe240, timeout=timeout@entry=-1, detach=detach@entry=1) at Python/parking_lot.c:215
#11 0x000055989dd7db65 in _PyParkingLot_Park (addr=addr@entry=0x55989e0c0108 <_PyRuntime+26056>, expected=expected@entry=0x7f8bf3ffe2c8, size=size@entry=8, timeout_ns=timeout_ns@entry=-1, park_arg=park_arg@entry=0x0, detach=detach@entry=1) at Python/parking_lot.c:316
#12 0x000055989dd747ce in rwmutex_set_parked_and_wait (rwmutex=rwmutex@entry=0x55989e0c0108 <_PyRuntime+26056>, bits=bits@entry=1) at Python/lock.c:386
#13 0x000055989dd74e38 in _PyRWMutex_RLock (rwmutex=0x55989e0c0108 <_PyRuntime+26056>) at Python/lock.c:404
#14 0x000055989dd9357a in stop_the_world (stw=stw@entry=0x55989e0db190 <_PyRuntime+136784>) at Python/pystate.c:2234
#15 0x000055989dd936d3 in _PyEval_StopTheWorld (interp=interp@entry=0x55989e0d8780 <_PyRuntime+126016>) at Python/pystate.c:2331
#16 0x000055989dd9c761 in _Py_qsbr_reserve (interp=interp@entry=0x55989e0d8780 <_PyRuntime+126016>) at Python/qsbr.c:201
#17 0x000055989dd908d4 in new_threadstate (interp=0x55989e0d8780 <_PyRuntime+126016>, whence=whence@entry=2) at Python/pystate.c:1543
#18 0x000055989dd92769 in _PyThreadState_New (interp=<optimized out>, whence=whence@entry=2) at Python/pystate.c:1624
```
<!-- gh-linked-prs -->
### Linked PRs
* gh-119528
* gh-119868
<!-- /gh-linked-prs -->
| 078b8c8cf2bf68f7484cc4d2e3dd74b6fab55664 | 64ff1e217d963b48140326e8b63c62f4b306f4a0 |
python/cpython | python__cpython-119358 | # Increase coverage of test_pyrepl
Running the PyREPL tests gives a code coverage of about 60% on the new `_pyrepl` module.
```
% ./python.exe ../coveragepy report --show-missing
Name Stmts Miss Branch BrPart Cover Missing
------------------------------------------------------------------------------
Lib/_pyrepl/__init__.py 0 0 0 0 100%
Lib/_pyrepl/__main__.py 35 35 16 0 0% 1-44
Lib/_pyrepl/_minimal_curses.py 38 38 12 0 0% 12-68
Lib/_pyrepl/commands.py 294 148 80 3 46% 56, 61-75, 96, 100, 110-126, 131-133, 138, 143-144, 149-157, 162-163, 168-170, 175-177, 182-184, 189-193, 198-211, 216-219, 224-236, 250-252, 286->exit, 309, 326, 331, 336, 341, 346-348, 353-355, 366-367, 372-385, 390-398, 403-418, 423, 428-431, 436-438, 443-444, 449-455
Lib/_pyrepl/completing_reader.py 151 19 52 10 85% 47-48, 66, 79-80, 88->100, 102->116, 174->177, 182-183, 190->192, 197-198, 214, 223, 278-284, 287
Lib/_pyrepl/console.py 33 0 22 0 100%
Lib/_pyrepl/curses.py 11 5 0 0 55% 24-28
Lib/_pyrepl/fancy_termios.py 25 7 0 0 72% 62-64, 67, 70, 73-74
Lib/_pyrepl/historical_reader.py 215 83 50 7 58% 58-62, 67-71, 76-81, 86, 91, 96, 101-124, 129-134, 149-154, 164->exit, 170-175, 180-182, 187-189, 272, 276-282, 289-292, 296-298, 315, 323-324, 327-329, 341->exit
Lib/_pyrepl/input.py 59 8 26 5 85% 52, 56, 60, 76, 83, 88, 94, 103
Lib/_pyrepl/keymap.py 77 9 46 9 85% 126, 131, 138, 143, 159, 165, 172, 181, 210
Lib/_pyrepl/pager.py 118 118 54 0 0% 1-173
Lib/_pyrepl/reader.py 358 65 130 20 78% 265-267, 271-273, 325-333, 345-354, 362-370, 377->379, 391, 411, 417, 430->432, 459, 477->489, 504->exit, 506, 522-524, 528, 538-546, 553-555, 558-559, 579, 591->594, 611, 622, 627, 633, 639-640, 649, 659-660
Lib/_pyrepl/readline.py 306 129 90 10 54% 122-127, 130->148, 133-134, 139-140, 151-157, 174-181, 186, 192-193, 195, 244->253, 247-252, 257, 273->275, 275->exit, 279-282, 285-291, 298-308, 311, 314, 317, 320, 323, 326-327, 330, 333, 336, 343-360, 363-368, 371, 374-378, 381-385, 389-393, 397, 400, 403-404, 407-413, 416, 419, 422, 463-465, 484-502
Lib/_pyrepl/simple_interact.py 89 68 16 0 22% 41-45, 74-75, 78, 84-158
Lib/_pyrepl/trace.py 13 5 6 2 53% 12, 18-21
Lib/_pyrepl/types.py 7 0 0 0 100%
Lib/_pyrepl/unix_console.py 428 161 140 21 61% 47, 112-126, 152, 157, 173, 215, 235, 288-292, 346-347, 355-362, 368-369, 381-397, 403, 412-415, 426-448, 454, 471-476, 482-483, 494-529, 535-539, 545, 551->560, 555-558, 562-565, 569-572, 575, 579, 613-614, 623-626, 636-645, 651, 659, 668->exit, 672-676, 686-687, 690-694, 712-713, 726-735, 752-762
Lib/_pyrepl/unix_eventqueue.py 69 6 20 4 89% 69, 93, 126, 143, 148-149
Lib/_pyrepl/utils.py 11 0 6 0 100%
------------------------------------------------------------------------------
TOTAL 2337 904 766 91 59%
```
It'd be great to increase this as we work to squash bugs in this area.
<!-- gh-linked-prs -->
### Linked PRs
* gh-119358
* gh-119414
<!-- /gh-linked-prs -->
| 73ab83b27f105a4509046ce26e35f20d66625195 | c886bece3b3a49f8a0f188aecfc1d6ff89d281e6 |
python/cpython | python__cpython-122946 | # Add a ctypes.util function to list loaded shared libraries
# Feature or enhancement
### Proposal:
When writing code which loads dynamic libraries, it is often very useful to be able to query which shared libraries are already in use by the current process. There are a few well-known tricks to do this, but they all end up being platform dependent.
For example, on Linux, you can use `dl_iterate_phdr`, and it seems that [quite a bit of Python code](https://github.com/search?q=dl_iterate_phdr+language%3APython&type=code) does. This won’t work on macOS or Windows, which provide other functions for this same functionality.
Julia provides this function in the standard library under `Libdl.dllist`.
A Python re-implementation of the same platform-specific code can be found at [GitHub - WardBrian/dllist: List DLLs loaded by the current process](https://github.com/WardBrian/dllist). This essentially just wraps the platform-specific code in an if-else based on the runtime platform.
```python
import dllist
print(dllist.dllist())
# ['linux-vdso.so.1', '/lib/x86_64-linux-gnu/libpthread.so.0', '/lib/x86_64-linux-gnu/libdl.so.2', ...
```
I would like to take the next step toward adding a similar function to the standard library
### Has this already been discussed elsewhere?
I have already discussed this feature proposal on Discourse
### Links to previous discussion of this feature:
https://discuss.python.org/t/a-ctypes-function-to-list-all-loaded-shared-libraries/36370
<!-- gh-linked-prs -->
### Linked PRs
* gh-122946
<!-- /gh-linked-prs -->
| 421ea1291d9b8ebfe5eaa72ab041338073fb67d0 | 0f128b9435fccb296714f3ea2466c3fdda77d91d |
python/cpython | python__cpython-119353 | # Make `Py_BEGIN_CRITICAL_SECTION()` and `Py_END_CRITICAL_SECTION()` public in the non-limited C API
# Feature or enhancement
The critical section API is useful for making C API extensions thread-safe when the GIL is disabled. Compared to plain mutexes, the API makes it easier to avoid deadlocks, especially when interacting with the Python C API, where calls like `Py_DECREF()` may invoke destructors that themselves acquire locks.
A more detailed description and motivation is in PEP 703: https://peps.python.org/pep-0703/#python-critical-sections
The underlying implementation is hooked into Python's `PyThreadState` management, so it would not be practical to implement this API outside of CPython.
### C-API WG Issue
* https://github.com/capi-workgroup/decisions/issues/26
<!-- gh-linked-prs -->
### Linked PRs
* gh-119353
* gh-120856
<!-- /gh-linked-prs -->
| 8f17d69b7bc906e8407095317842cc0fd52cd84a | 03fa2df92707b543c304a426732214002f81d671 |
python/cpython | python__cpython-119418 | # '_PyLong_NumBits': identifier not found in 3.13
# Bug report
### Bug description:
While fixing pywin32 so that CI builds work on Python 3.13, the build process complained about not being able to find `_PyLong_NumBits`. All other Python versions in the CI built successfully.
```c
BOOL PyCom_VariantFromPyObject(PyObject *obj, VARIANT *var)
{
// ...
if (PyLong_Check(obj)) {
int sign = _PyLong_Sign(obj);
size_t nbits = _PyLong_NumBits(obj);
if (nbits == (size_t)-1 && PyErr_Occurred())
return FALSE;
// ...
}
```
The function still exists in `main`, but has apparently been inaccessible via the public C API since 3.13. What's the recommended replacement?
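For context, the two values the snippet extracts are the sign of the integer and its bit count; at the Python level the same query is `int.bit_length()`. A sketch of the equivalent check (this illustrates the semantics only, not any particular C-level replacement):

```python
def sign_and_nbits(obj: int) -> tuple[int, int]:
    # Python-level equivalent of _PyLong_Sign / _PyLong_NumBits
    sign = (obj > 0) - (obj < 0)
    return sign, abs(obj).bit_length()

print(sign_and_nbits(-1234))  # (-1, 11)
```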
### CPython versions tested on:
3.8, 3.9, 3.10, 3.11, 3.12, 3.13
### Operating systems tested on:
Windows
<!-- gh-linked-prs -->
### Linked PRs
* gh-119418
* gh-119970
<!-- /gh-linked-prs -->
| e50fac96e82d857ecc024b4cd4e012493b077064 | b1374aa1c2e68becf9b6dcdcb8a586b0bd997c0d |
python/cpython | python__cpython-119335 | # Add c-api to set callback function on enter and exit of PyContext
# Feature or enhancement
### Proposal:
In Python, every time a context is switched, two C API functions are called:
```
PyContext_Enter(PyObject *);
PyContext_Exit(PyObject *);
```
I need a way to know when those APIs are called on a context. The purpose is that at Meta we have a system similar to `contextvars` in C++, called `folly::requestcontext`. When the Python context switches, we need to inform C++ to swap its active context. Right now we have been carrying forward patches to `asyncio.Task` to do this in `_step()` since way back in Python 3.6, which is generally the wrong place to do it.
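For orientation, the enter/exit pair brackets `Context.run()` at the Python level; a minimal sketch of the switch such a callback would observe:

```python
import contextvars

var = contextvars.ContextVar("var", default=0)
ctx = contextvars.copy_context()

def task():
    # Runs with ctx active; PyContext_Enter/PyContext_Exit bracket
    # this invocation at the C level.
    var.set(1)
    return var.get()

assert ctx.run(task) == 1
assert var.get() == 0  # the outer context is unaffected
```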
There is prior art in #91054, which provides watchers for the creation/destruction of code objects. The proposed solution would mirror that functionality closely.
### Has this already been discussed elsewhere?
I have already discussed this feature proposal on Discourse
### Links to previous discussion of this feature:
https://discuss.python.org/t/request-can-we-get-a-c-api-hook-into-pycontext-enter-and-pycontext-exit/51730
There was an in person meeting between @gvanrossum, @itamaro and myself where we discussed the plausibility of this feature request.
<!-- gh-linked-prs -->
### Linked PRs
* gh-119335
* gh-124444
* gh-124737
* gh-124741
* gh-124773
* gh-124774
* gh-124776
<!-- /gh-linked-prs -->
| d87482bc4ee9458d6ba16140e7bc008637dbbb16 | ad7c7785461fffba04f5a36cd6d062e92b0fda16 |
python/cpython | python__cpython-119319 | # Resolve deprecation warnings in Docs/tools
# Documentation
The following scripts are throwing deprecation warnings.
- Docs/tools/extensions/pyspecific.py
- Docs/tools/extensions/glossary_search.py
```
PendingDeprecationWarning: nodes.Node.traverse() is obsoleted by Node.findall()
```
<!-- gh-linked-prs -->
### Linked PRs
* gh-119319
* gh-119486
* gh-119487
<!-- /gh-linked-prs -->
| 0867bce45768454ee31bee95ca33fdc2c9d8b0fa | ffa24aab107b5bc3c6ad31a6a245c226bf24b208 |
python/cpython | python__cpython-119464 | # Unexpected name mangling behavior with generics
# Bug report
### Bug description:
Name mangling behaves inconsistently when used with generics. Here’s the code to reproduce the issue:
```python
class __Foo(type):
pass
class Bar[T](metaclass=__Foo):
pass
```
This raises `NameError: name '_Bar__Foo' is not defined`. However, removing the `[T]` makes the error disappear, which shows that name mangling behaves inconsistently when generics are involved. This seems to be a bug: one would expect name mangling to either always occur or never occur, regardless of whether generics are used.
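For background, name mangling rewrites `__name` identifiers that appear inside a class body to `_ClassName__name`; the surprise here is that the implicit scope created by the `[T]` type-parameter syntax applies the same rewrite to the metaclass reference. The normal, expected mangling behavior for reference:

```python
class Demo:
    __secret = 1  # stored as _Demo__secret

    def get(self):
        return self.__secret  # compiled as self._Demo__secret

assert Demo().get() == 1
assert Demo._Demo__secret == 1
assert not hasattr(Demo, "__secret")  # the unmangled name does not exist
```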
### CPython versions tested on:
3.12
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-119464
* gh-119643
* gh-119644
* gh-125771
<!-- /gh-linked-prs -->
| a9a74da4a0ca0645f049e67b6434a95e30592c32 | 3e8b60905e97a4fe89bb24180063732214368938 |
python/cpython | python__cpython-119307 | # Break up _pyrepl tests
I'd like to work on increasing the test coverage of the new _pyrepl module:
```
% ./python.exe ../coveragepy report --show-missing
Name Stmts Miss Branch BrPart Cover Missing
------------------------------------------------------------------------------
Lib/_pyrepl/__init__.py 0 0 0 0 100%
Lib/_pyrepl/__main__.py 35 35 16 0 0% 1-44
Lib/_pyrepl/_minimal_curses.py 38 38 12 0 0% 12-68
Lib/_pyrepl/commands.py 294 148 80 3 46% 56, 61-75, 96, 100, 110-126, 131-133, 138, 143-144, 149-157, 162-163, 168-170, 175-177, 182-184, 189-193, 198-211, 216-219, 224-236, 250-252, 286->exit, 309, 326, 331, 336, 341, 346-348, 353-355, 366-367, 372-385, 390-398, 403-418, 423, 428-431, 436-438, 443-444, 449-455
Lib/_pyrepl/completing_reader.py 151 19 52 10 85% 47-48, 66, 79-80, 88->100, 102->116, 174->177, 182-183, 190->192, 197-198, 214, 223, 278-284, 287
Lib/_pyrepl/console.py 33 0 22 0 100%
Lib/_pyrepl/curses.py 11 5 0 0 55% 24-28
Lib/_pyrepl/fancy_termios.py 25 7 0 0 72% 62-64, 67, 70, 73-74
Lib/_pyrepl/historical_reader.py 215 83 50 7 58% 58-62, 67-71, 76-81, 86, 91, 96, 101-124, 129-134, 149-154, 164->exit, 170-175, 180-182, 187-189, 272, 276-282, 289-292, 296-298, 315, 323-324, 327-329, 341->exit
Lib/_pyrepl/input.py 59 8 26 5 85% 52, 56, 60, 76, 83, 88, 94, 103
Lib/_pyrepl/keymap.py 77 9 46 9 85% 126, 131, 138, 143, 159, 165, 172, 181, 210
Lib/_pyrepl/pager.py 118 118 54 0 0% 1-173
Lib/_pyrepl/reader.py 358 65 130 20 78% 265-267, 271-273, 325-333, 345-354, 362-370, 377->379, 391, 411, 417, 430->432, 459, 477->489, 504->exit, 506, 522-524, 528, 538-546, 553-555, 558-559, 579, 591->594, 611, 622, 627, 633, 639-640, 649, 659-660
Lib/_pyrepl/readline.py 306 129 90 10 54% 122-127, 130->148, 133-134, 139-140, 151-157, 174-181, 186, 192-193, 195, 244->253, 247-252, 257, 273->275, 275->exit, 279-282, 285-291, 298-308, 311, 314, 317, 320, 323, 326-327, 330, 333, 336, 343-360, 363-368, 371, 374-378, 381-385, 389-393, 397, 400, 403-404, 407-413, 416, 419, 422, 463-465, 484-502
Lib/_pyrepl/simple_interact.py 89 68 16 0 22% 41-45, 74-75, 78, 84-158
Lib/_pyrepl/trace.py 13 5 6 2 53% 12, 18-21
Lib/_pyrepl/types.py 7 0 0 0 100%
Lib/_pyrepl/unix_console.py 428 161 140 21 61% 47, 112-126, 152, 157, 173, 215, 235, 288-292, 346-347, 355-362, 368-369, 381-397, 403, 412-415, 426-448, 454, 471-476, 482-483, 494-529, 535-539, 545, 551->560, 555-558, 562-565, 569-572, 575, 579, 613-614, 623-626, 636-645, 651, 659, 668->exit, 672-676, 686-687, 690-694, 712-713, 726-735, 752-762
Lib/_pyrepl/unix_eventqueue.py 69 6 20 4 89% 69, 93, 126, 143, 148-149
Lib/_pyrepl/utils.py 11 0 6 0 100%
------------------------------------------------------------------------------
TOTAL 2337 904 766 91 59%
```
However, with all the tests in a single file, it was starting to get unwieldy. As a first step, I'd like to break this `test_pyrepl.py` file apart.
<!-- gh-linked-prs -->
### Linked PRs
* gh-119307
* gh-119362
<!-- /gh-linked-prs -->
| f49df4f486e531ff2666eb22854117c564b3de3d | e03dde5a24d3953e0b16f7cdefdc8b00aa9d9e11 |
python/cpython | python__cpython-119293 | # Add job to `jit.yml` to build and test with `--disable-gil`
I recently fixed https://github.com/python/cpython/pull/118935, so I figure we probably want a job in the JIT CI that checks that nothing regresses over time.
<!-- gh-linked-prs -->
### Linked PRs
* gh-119293
* gh-119314
<!-- /gh-linked-prs -->
| c4722cd0573c83aaa52b63a27022b9048a949f54 | ab4263a82abe8b684d8ad1edf7c7c6ec286ff756 |
python/cpython | python__cpython-119657 | # `contextlib.suppress` converts instances of a subtype of `ExceptionGroup` to an instance of the `ExceptionGroup` class
# Bug report
### Bug description:
```python
from contextlib import suppress
class FooException(Exception): ...
class FooExceptionGroup(ExceptionGroup[Exception]): ...
try:
with suppress(FooException):
raise FooExceptionGroup("", [Exception()])
except ExceptionGroup as e:
print(type(e))
```
In this code, the `suppress` context manager is expected to have no effect, as a `FooException` is not being raised within it. Instead, the group gets converted to a plain `ExceptionGroup` and re-raised.
expected output:
```
<class '__main__.FooExceptionGroup'>
```
actual output:
```
<class 'ExceptionGroup'>
```
### CPython versions tested on:
3.12
### Operating systems tested on:
Windows
<!-- gh-linked-prs -->
### Linked PRs
* gh-119657
* gh-120105
* gh-120106
<!-- /gh-linked-prs -->
| 5c02ea8bae2287a828840f5734966da23dc573dc | 983efcf15b2503fe0c05d5e03762385967962b33 |
python/cpython | python__cpython-119275 | # test_ioctl is skipped because of setsid()
Python test runner "regrtest" uses process groups to be able to kill child processes of worker processes. Tests are run in subprocesses to run them in parallel and to catch bugs.
Problem: if setsid() is called (after fork), test_ioctl fails to open `/dev/tty`.
I propose not using `setsid()` when running tests that use a TTY, such as test_ioctl.
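A minimal sketch of the failure mode (POSIX-only; uses `os.fork()` directly rather than regrtest): after `setsid()` the child is a new session leader with no controlling terminal, so opening `/dev/tty` fails.

```python
import os

pid = os.fork()
if pid == 0:
    os.setsid()  # new session: the child no longer has a controlling terminal
    try:
        os.close(os.open("/dev/tty", os.O_RDONLY))
        os._exit(0)  # would mean /dev/tty was still reachable
    except OSError:
        os._exit(1)  # expected: no controlling tty to open
_, status = os.waitpid(pid, 0)
print(os.waitstatus_to_exitcode(status))  # 1
```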
<!-- gh-linked-prs -->
### Linked PRs
* gh-119275
<!-- /gh-linked-prs -->
| 1f481fd3275dbc12a88c16129621de19ea20e4ca | 055c739536ad63b55ad7cd0b91ccacc33064fe11 |
python/cpython | python__cpython-119480 | # is_dataclass() returns True for non-dataclass subclass of dataclass
# Bug report
### Bug description:
If a dataclass has a subclass that is not itself a dataclass, `is_dataclass()` returns True on the subclass and its instances:
```python
from dataclasses import dataclass, is_dataclass
@dataclass
class X:
y: int
class Z(X):
pass
print(is_dataclass(Z)) # True
print(is_dataclass(Z())) # True
```
Documentation of is_dataclass() for reference: https://docs.python.org/3.13/library/dataclasses.html#dataclasses.is_dataclass
Intuitively this seems wrong: Z is not itself a dataclass. In pyanalyze I wrote a replacement for `is_dataclass()` because I needed to check whether the exact class was a dataclass:
```python
def is_dataclass_type(cls: type) -> bool:
try:
return "__dataclass_fields__" in cls.__dict__
except Exception:
return False
```
Changing the CPython behavior might be impossible for compatibility reasons, but in that case, we should document and test this edge case.
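Applied to the example above, the stricter check distinguishes the two cases (a sketch adapted from the pyanalyze helper; `is_dataclass_type` is not a stdlib API):

```python
from dataclasses import dataclass, is_dataclass

@dataclass
class X:
    y: int

class Z(X):
    pass

def is_dataclass_type(cls: type) -> bool:
    # True only if the class itself was decorated, not merely a subclass
    return "__dataclass_fields__" in cls.__dict__

assert is_dataclass(X) and is_dataclass(Z)  # current behavior: both True
assert is_dataclass_type(X)                 # X defines the fields itself
assert not is_dataclass_type(Z)             # Z only inherits them
```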
cc @ericvsmith @carljm for dataclasses.
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
macOS
<!-- gh-linked-prs -->
### Linked PRs
* gh-119480
* gh-119760
* gh-119761
<!-- /gh-linked-prs -->
| bf4ff3ad2e362801e87c85fffd9e140b774cef26 | 0751511d24295c39fdf2f5b2255e3fa3d796ce4d |
python/cpython | python__cpython-119259 | # Tier 2 Optimizer Eliminate Type Version Guards
# Feature or Enhancement:
### Proposal:
_Note: I made this issue at the PyCon sprints discussing with @Fidget-Spinner and in collaboration with @dpdani_
The tier 2 optimizer should eliminate type version guards when it is safe to do so.
It is safe if the same type version guard has already been checked at runtime and no escaping operation has happened since then.
Note that to be usable, this will need `Py_DECREF` to be deferred before the 3.13 final release, because otherwise there could be an escape anywhere we decrement the reference count.
## Example
For example, if we have this code:
```python
class A:
attr = 1
def thing(a):
return a.attr + a.attr
for _ in range(1000):
thing(A())
```
then when `thing` is executed it should only run the type guard once.
If we disassemble this code then we see it emits the `LOAD_ATTR_NONDESCRIPTOR_WITH_VALUES` bytecode:
```python
>>> import dis
>>> dis.dis(thing, adaptive=True)
8 RESUME_CHECK 0
9 LOAD_FAST 0 (a)
LOAD_ATTR_NONDESCRIPTOR_WITH_VALUES 0 (attr)
LOAD_FAST 0 (a)
LOAD_ATTR_NONDESCRIPTOR_WITH_VALUES 0 (attr)
BINARY_OP_ADD_INT 0 (+)
RETURN_VALUE
```
If we look at the definition of `LOAD_ATTR_NONDESCRIPTOR_WITH_VALUES` (in `bytecodes.c`), we see it uses `_GUARD_TYPE_VERSION`:
```c
macro(LOAD_ATTR_NONDESCRIPTOR_WITH_VALUES) =
unused/1 +
_GUARD_TYPE_VERSION +
_GUARD_DORV_VALUES_INST_ATTR_FROM_DICT +
_GUARD_KEYS_VERSION +
_LOAD_ATTR_NONDESCRIPTOR_WITH_VALUES;
```
So when these are executed one after the other, without any unknown function calls in the middle, the optimizer should remove the second `_GUARD_TYPE_VERSION` check.
### Has this already been discussed elsewhere?
This is a minor feature, which does not need previous discussion elsewhere
### Links to previous discussion of this feature:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-119259
* gh-119365
* gh-119481
* gh-120699
<!-- /gh-linked-prs -->
| d8faa3654c2887eaa146dcdb553a9f9793bd2e5a | 6247fdbf93a8c72e1be40d396102b7f1a1271c95 |
python/cpython | python__cpython-119254 | # Import `_ios_support` raises RuntimeError on Windows
# Bug report
### Bug description:
See also:
- https://github.com/python/typeshed/actions/runs/9161558906/job/25186691740?pr=11987
- https://github.com/python/mypy/pull/17270
### CPython versions tested on:
3.13
### Operating systems tested on:
Windows
<!-- gh-linked-prs -->
### Linked PRs
* gh-119254
* gh-119265
<!-- /gh-linked-prs -->
| bf17986096491b9ca14c214ed4885340e7857e12 | 8231a24454c854ea22590fd74733d29e4274122d |
python/cpython | python__cpython-119315 | # PySequence_Fast needs new macros to be safe in a nogil world
# Feature or enhancement
### Proposal:
Right now, most uses of `PySequence_Fast` are invalid in a nogil context when it is passed an existing `list`; `PySequence_FAST_ITEMS` returns a reference to the internal array of `PyObject*`s that can be resized at any time if other threads add or delete items, `PySequence_FAST_GET_SIZE` similarly reports a size that is invalid an instant after it's reported. Similarly, if individual items are replaced without changing size, you'd have similar issues.
But when the argument passed is a `tuple` (incref-ed and returned unchanged, but safe due to immutability) or any non-`list` type (converted to a new `list`), no lock is needed. Per a conversation with Dino, I'm going to create macros, to be called after a call to `PySequence_Fast`, that conditionally lock and unlock the *original* `list` when applicable, while avoiding locks in all other cases, before any other `PySequence*` APIs are used.
Preliminary (subject to bike-shedding) macro names are:
`Py_BEGIN_CRITICAL_SECTION_SEQUENCE_FAST`
`Py_END_CRITICAL_SECTION_SEQUENCE_FAST`
both defined in `pycore_critical_section.h`.
### Has this already been discussed elsewhere?
This is a minor feature, which does not need previous discussion elsewhere
### Links to previous discussion of this feature:
Discussion occurred with Dino during CPython core sprints.
<!-- gh-linked-prs -->
### Linked PRs
* gh-119315
* gh-119419
<!-- /gh-linked-prs -->
| baf347d91643a83483bae110092750d39471e0c2 | 2b3fb767bea1f96c9e0523f6cc341b40f0fa1ca1 |
python/cpython | python__cpython-119877 | # C API Extension Support for Freethreading HOWTO
We should add a "HOWTO" guide for C API extension authors covering how to support the free-threaded build.
See also https://peps.python.org/pep-0703/#how-to-teach-this
<!-- gh-linked-prs -->
### Linked PRs
* gh-119877
* gh-120693
* gh-124368
<!-- /gh-linked-prs -->
| 02b272b7026b68e70b4a4d9a0ca080904aed374c | dacc5ac71a8e546f9ef76805827cb50d4d40cabf |
python/cpython | python__cpython-119223 | # Remove legacy TODOs from code.
# Bug report
### Bug description:
I left a few TODOs in the compiler code from its early development. They are no longer relevant. For example, they comment on issues to consider that do not appear to have been a priority in years.
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-119223
<!-- /gh-linked-prs -->
| 19c11f244ebef492a5b20cd85a3b2e213e85388d | 6f7dd0a4260254390d75838c84ccc7285a2264f0 |
python/cpython | python__cpython-119229 | # Warning on autocomplete messes up the terminal
# Bug report
### Bug description:
1. Open `./python.exe`
2. `import sys; code = sys._getframe(1).f_code`
3. Type `code.co_` and hit tab
4. This:
<img width="1332" alt="Screenshot 2024-05-20 at 6 52 31 AM" src="https://github.com/python/cpython/assets/906600/11673a91-ea85-47d2-b80e-a2c9842935df">
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
macOS
<!-- gh-linked-prs -->
### Linked PRs
* gh-119229
* gh-119407
<!-- /gh-linked-prs -->
| 506b1a3ff66a41c72d205c8e4cba574e439d8e76 | a3e4fec8734a304d654e4ae24a4aa2f41a7b0640 |
python/cpython | python__cpython-119236 | # Fraction wrongfully gets casted into float when given as argument to __rpow__
# Bug report
### Bug description:
When using the `**` operator with a `Fraction` as the base and an object that implements `__rpow__` as the exponent, the fraction is wrongly cast to a float before being passed to `__rpow__`:
```python
from fractions import Fraction
foo = Fraction(4, 3)
class One:
def __rpow__(self, other):
return other
bar = foo**One()
print(bar)
print(type(bar))
```
Expected Output
```
4/3
<class 'fractions.Fraction'>
```
Actual Output:
```
1.3333333333333333
<class 'float'>
```
Tested with Python 3.12.3
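For comparison, built-in numeric types hand the base to `__rpow__` unchanged; a minimal sketch of the expected reflected-power protocol using `int`:

```python
class One:
    def __rpow__(self, other):
        # Reflected power: called when the base's __pow__ returns NotImplemented
        return other

base = 4
result = base ** One()
assert result == 4
assert type(result) is int  # the base is passed through without conversion
```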
### CPython versions tested on:
3.12
### Operating systems tested on:
macOS
<!-- gh-linked-prs -->
### Linked PRs
* gh-119236
* gh-119242
* gh-119255
* gh-119256
* gh-119298
* gh-119346
* gh-119347
* gh-119835
* gh-119836
<!-- /gh-linked-prs -->
| fe67af19638d208239549ccac8b4f4fb6480e801 | 034cf0c3167c850c8341deb61e210cb0dbcdb02d |
python/cpython | python__cpython-118881 | # ipython breaks on Python-3.13.0b1 when a 'tempfilepager' is not defined
# Bug report
### Bug description:
On Windows, with a non-English Python 3.13.0b1.
test:
```python
Python 3.13.0b1 (tags/v3.13.0b1:2268289, May 8 2024, 12:20:07) [MSC v.1938 64 bit (AMD64)]
Type 'copyright', 'credits' or 'license' for more information
IPython 8.24.0 -- An enhanced Interactive Python. Type '?' for help.
In [1]: help(len)
```
result:
```python
Unexpected exception formatting exception. Falling back to standard exception
Traceback (most recent call last):
File "......\python-3.13.0b1.amd64\Lib\site-packages\IPython\core\interactiveshell.py", line 3577, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<ipython-input-1-1dda769017ba>", line 1, in <module>
help(len)
~~~~^^^^^
File "<frozen _sitebuiltins>", line 103, in __call__
File "......\python-3.13.0b1.amd64\Lib\pydoc.py", line 1983, in __call__
self.help(request)
~~~~~~~~~^^^^^^^^^
File "......\python-3.13.0b1.amd64\Lib\pydoc.py", line 2044, in help
else: doc(request, 'Help on %s:', output=self._output, is_cli=is_cli)
~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "......\python-3.13.0b1.amd64\Lib\pydoc.py", line 1757, in doc
pager(render_doc(thing, title, forceload), f'Help on {what!s}')
~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "......\python-3.13.0b1.amd64\Lib\pydoc.py", line 1656, in pager
pager(text, title)
~~~~~^^^^^^^^^^^^^
File "......\python-3.13.0b1.amd64\Lib\_pyrepl\pager.py", line 38, in <lambda>
return lambda text, title='': tempfilepager(plain(text), 'more <')
^^^^^^^^^^^^^
NameError: name 'tempfilepager' is not defined. Did you mean: 'tempfile_pager'?
```
### CPython versions tested on:
3.13
### Operating systems tested on:
Windows
<!-- gh-linked-prs -->
### Linked PRs
* gh-118881
* gh-119211
<!-- /gh-linked-prs -->
| 05e1dce76d7669e90ab73e7e682360d83b8a0d02 | bbb49888a752869ae93423c42039a3a8dfab34d4 |
python/cpython | python__cpython-119184 | # [C API] Add an efficient public PyUnicodeWriter API
# Feature or enhancement
Creating a Python string object in an efficient way is complicated. Python has a **private** `_PyUnicodeWriter` API. It's being used by these projects:
Affected projects (5):
* Cython (3.0.9)
* asyncpg (0.29.0)
* catboost (1.2.3)
* frozendict (2.4.0)
* immutables (0.20)
I propose making the API public to promote it and to help C extension maintainers write more efficient code for creating Python string objects.
API:
```c
typedef struct PyUnicodeWriter PyUnicodeWriter;

PyAPI_FUNC(PyUnicodeWriter*) PyUnicodeWriter_Create(void);
PyAPI_FUNC(void) PyUnicodeWriter_Discard(PyUnicodeWriter *writer);
PyAPI_FUNC(PyObject*) PyUnicodeWriter_Finish(PyUnicodeWriter *writer);

PyAPI_FUNC(void) PyUnicodeWriter_SetOverallocate(
    PyUnicodeWriter *writer,
    int overallocate);

PyAPI_FUNC(int) PyUnicodeWriter_WriteChar(
    PyUnicodeWriter *writer,
    Py_UCS4 ch);
PyAPI_FUNC(int) PyUnicodeWriter_WriteUTF8(
    PyUnicodeWriter *writer,
    const char *str,    // decoded from UTF-8
    Py_ssize_t len);    // use strlen() if len < 0
PyAPI_FUNC(int) PyUnicodeWriter_Format(
    PyUnicodeWriter *writer,
    const char *format,
    ...);

// Write str(obj)
PyAPI_FUNC(int) PyUnicodeWriter_WriteStr(
    PyUnicodeWriter *writer,
    PyObject *obj);

// Write repr(obj)
PyAPI_FUNC(int) PyUnicodeWriter_WriteRepr(
    PyUnicodeWriter *writer,
    PyObject *obj);

// Write str[start:end]
PyAPI_FUNC(int) PyUnicodeWriter_WriteSubstring(
    PyUnicodeWriter *writer,
    PyObject *str,
    Py_ssize_t start,
    Py_ssize_t end);
```
The internal writer buffer is **overallocated by default**. `PyUnicodeWriter_Finish()` truncates the buffer to the exact size if the buffer was overallocated.
Overallocation avoids the quadratic cost of repeatedly resizing the buffer when adding short strings in a loop. Use `PyUnicodeWriter_SetOverallocate(writer, 0)` to disable overallocation just before the last write.
The writer takes care of the internal buffer kind: Py_UCS1 (latin1), Py_UCS2 (BMP) or Py_UCS4 (full Unicode Character Set). It also implements an optimization if a single write is made using `PyUnicodeWriter_WriteStr()`: it returns the string unchanged without any copy.
---
Example of usage (simplified code from Python/unionobject.c):
```c
static PyObject *
union_repr(PyObject *self)
{
    unionobject *alias = (unionobject *)self;
    Py_ssize_t len = PyTuple_GET_SIZE(alias->args);

    PyUnicodeWriter *writer = PyUnicodeWriter_Create();
    if (writer == NULL) {
        return NULL;
    }

    for (Py_ssize_t i = 0; i < len; i++) {
        if (i > 0 && PyUnicodeWriter_WriteUTF8(writer, " | ", 3) < 0) {
            goto error;
        }

        PyObject *p = PyTuple_GET_ITEM(alias->args, i);
        if (PyUnicodeWriter_WriteRepr(writer, p) < 0) {
            goto error;
        }
    }
    return PyUnicodeWriter_Finish(writer);

error:
    PyUnicodeWriter_Discard(writer);
    return NULL;
}
```
<!-- gh-linked-prs -->
### Linked PRs
* gh-119184
* gh-119398
* gh-120248
* gh-120307
* gh-120639
* gh-120796
* gh-120797
* gh-120799
* gh-120809
* gh-120845
* gh-120849
* gh-120851
* gh-120870
* gh-127607
* gh-129206
* gh-129207
* gh-129208
* gh-129209
* gh-129243
* gh-129249
<!-- /gh-linked-prs -->
| 5c4235cd8ce00852cfcb2d3a2cb4c66c6c53c4bf | 2c7209a3bdf81a289ccd6b80a77497cfcd5732de |
python/cpython | python__cpython-119209 | # Implement PEP 649 and PEP 749
PEP-649 has been accepted and should be implemented in Python 3.14. Let's use this issue to track the implementation:
- [x] Decide on or clarify the adoption strategy (https://discuss.python.org/t/pep-649-deferred-evaluation-of-annotations-tentatively-accepted/21331/44)
- [x] Add new `__annotate__` attributes #119209
- [x] Implement core interpreter changes (e.g., new symbol table functionality) #119361
- [x] Implement new Python-level APIs, like the new `format` argument to `inspect.get_annotations` #119891
- [x] (PEP-649 specifies that `inspect.AnnotationsFormat` should be a "global enum". Is that desirable? TBD. https://github.com/python/cpython/pull/119361/files#r1614753031)
- [x] Document the new semantics #122235
- [ ] Add higher-level documentation, e.g. a migration guide away from `from __future__ import annotations`; an introduction to annotationlib; an update to Larry's annotations HOWTO
- [ ] The SC would like https://peps.python.org/pep-0749/#appendix incorporated into the docs
- [x] Make staticmethod and classmethod not force evaluation of annotations #119864
- [x] Adapt standard library code that uses annotations to work excellently with PEP 649
- [x] dataclasses #119891
- [x] typing.py: TypedDict, NamedTuple, try to remove dependency on `inspect`. (If it can't be removed, get rid of the awkward dance we do for `typing.Protocol` to import `inspect.getattr_static` lazily.)
- [x] singledispatch #119891
- [x] Also make it only evaluate the relevant parameter, https://github.com/python/cpython/pull/119891#discussion_r1636397226
- [x] functools.update_wrapper #119891, #124342
- [x] Add PEP 649-like functionality to TypeAliasType, TypeVar, etc. (`pep649-typevar` branch in my fork)
- [ ] Make sure the ecosystem is ready by testing third-party libraries like Pydantic
- [x] typing_extensions: needs changes similar to those in typing.py (python/typing_extensions#412)
- [ ] beartype: had various unrelated problems on 3.13 (beartype/beartype#387); a few additional tests fail on the PR branch due to direct manipulation of `__dict__`
- [ ] typeguard: some unrelated 3.13 problems (agronholm/typeguard#460); no test failures related to PEP 649
- [ ] pyanalyze: some unrelated 3.13 issues and a few changes necessary due to direct `__dict__` access (quora/pyanalyze#773)
- [ ] pydantic: failed to build for 3.14 so far
- [ ] Optimize the implementation. Ideas:
- [ ] Avoid creation of function objects and store only code objects instead #124157
- [x] Avoid creation of AST nodes and use of eval() for ForwardRefs that are just names #124337
Things to revisit:
- [x] Should FORWARDREF be the default? https://github.com/beartype/beartype/pull/440#issuecomment-2373086020
- [x] Add VALUE_WITH_FAKE_GLOBALS format
- [x] Name of `__annotate__` parameter
- [x] Should setting `__annotations__` invalidate `__annotate__`?
- [x] Rename SOURCE to STRING?
I am planning to work on the interpreter core first.
cc @larryhastings @carljm @samuelcolvin
<!-- gh-linked-prs -->
### Linked PRs
* gh-119209
* gh-119321
* gh-119361
* gh-119397
* gh-119864
* gh-119891
* gh-120719
* gh-120816
* gh-122074
* gh-122210
* gh-122212
* gh-122235
* gh-122365
* gh-122366
* gh-124326
* gh-124337
* gh-124393
* gh-124415
* gh-124461
* gh-124479
* gh-124561
* gh-124620
* gh-124634
* gh-124730
* gh-131755
* gh-133407
* gh-133552
* gh-133841
* gh-133902
* gh-133903
* gh-134640
* gh-134731
* gh-135644
* gh-135654
* gh-137247
* gh-137263
<!-- /gh-linked-prs -->
| e9875ecb5dd3a9c44a184c71cc562ce1fea6e03b | 73ab83b27f105a4509046ce26e35f20d66625195 |
python/cpython | python__cpython-119175 | # [Windows] High DPI causes tkinter turtledemo windows blurry
# Feature or enhancement
## Comparison
Here is a side-by-side comparison of the tkinter turtle-graphics example windows, with and without the GUI fix. The screenshot may have been scaled down, making the difference less obvious, but before the fix the text was also noticeably blurry. The window sizes differ too: increasing pixel density without changing the number of pixels results in smaller windows.

## Solution
The solution is simple: copy the DPI-awareness block from `https://github.com/python/cpython/blob/main/Lib/idlelib/pyshell.py#L16-L22` into `https://github.com/python/cpython/blob/main/Lib/turtledemo/__main__.py`.
See https://learn.microsoft.com/en-us/windows/win32/api/shellscalingapi/ne-shellscalingapi-process_dpi_awareness
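The block being copied can be sketched like this (a sketch of the idlelib guard; the exact exception tuple in pyshell.py may differ):

```python
import sys

# Make the process DPI-aware on Windows so Tk renders text crisply
# instead of letting Windows bitmap-scale the whole window (sketch of
# the guard used in idlelib/pyshell.py).
if sys.platform == 'win32':
    try:
        from ctypes import OleDLL
        OleDLL('shcore').SetProcessDpiAwareness(1)
    except (ImportError, AttributeError, OSError):
        # shcore.dll or the API is unavailable; fall back silently.
        pass
```

Off Windows (and on Windows versions without shcore.dll) the block is a no-op.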
### Has this already been discussed elsewhere?
This is a minor feature, which does not need previous discussion elsewhere
<!-- gh-linked-prs -->
### Linked PRs
* gh-119175
* gh-119289
* gh-119290
<!-- /gh-linked-prs -->
| 538ed5e4818aa0d0aa759634e8bfa23e317434a1 | 172690227e771c2e8ab137815073e3a172c08dec |
python/cpython | python__cpython-119147 | # Update `paths` in `jit.yml`
Per https://github.com/python/cpython/pull/118983#issuecomment-2115553477
<!-- gh-linked-prs -->
### Linked PRs
* gh-119147
* gh-119226
* gh-120435
* gh-120447
* gh-120448
<!-- /gh-linked-prs -->
| 5307f44fb983f2a17727fb43602f5dfa63e93311 | 697465ff88e49d98443025474e5b534adfba2cb0 |
python/cpython | python__cpython-119134 | # Python -VV should display whether the build is default build or free-threading.
Currently, there is no way to identify whether the build is free-threaded or not through `python -VV`
See the pyperf and pyperformance case
* https://github.com/psf/pyperf/pull/189
* https://github.com/python/pyperformance/pull/336
The following display would be great.
```
Python 3.14.0a0 (heads/main-dirty:31a28cbae0, May 17 2024, 17:30:32, default) [GCC 14.1.1 20240507 (Red Hat 14.1.1-1)]
Python 3.14.0a0 (heads/main-dirty:31a28cbae0, May 17 2024, 17:30:32, free-threading) [GCC 14.1.1 20240507 (Red Hat 14.1.1-1)]
```
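Until `-VV` reports it, one way to check the build flavour programmatically (my assumption, not part of the proposed output) is the `Py_GIL_DISABLED` build variable, which is unset or zero on default builds:

```python
import sysconfig

# True on free-threaded (PEP 703) builds; False on default builds and
# on versions that predate free-threading (the variable is then absent).
free_threaded = bool(sysconfig.get_config_var("Py_GIL_DISABLED"))
print("free-threading" if free_threaded else "default")
```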
<!-- gh-linked-prs -->
### Linked PRs
* gh-119134
* gh-119140
* gh-119143
* gh-119145
* gh-119153
* gh-135550
* gh-135565
* gh-135591
* gh-135594
<!-- /gh-linked-prs -->
| c141d4393750c827cbcb3867f0f42997a3bb3528 | 691429702f1cb657e65f4e5275bb5ed16121d2b7 |
python/cpython | python__cpython-119173 | # `asyncio.staggered` is missing `typing` import
# Bug report
### Bug description:
`asyncio/staggered.py` is using `typing.Optional`:
https://github.com/python/cpython/blob/65de194dd80bbc8cb7098d21cfd6aefd11d0d0ce/Lib/asyncio/staggered.py#L73
However, #114281 removed the `typing` import. This causes test failures in `aiohappyeyeballs`:
```pytb
coro_fns = <generator object start_connection.<locals>.<genexpr> at 0x7feba06d1be0>, delay = 0.3
async def staggered_race(coro_fns, delay, *, loop=None):
"""Run coroutines with staggered start times and take the first to finish.
This method takes an iterable of coroutine functions. The first one is
started immediately. From then on, whenever the immediately preceding one
fails (raises an exception), or when *delay* seconds has passed, the next
coroutine is started. This continues until one of the coroutines complete
successfully, in which case all others are cancelled, or until all
coroutines fail.
The coroutines provided should be well-behaved in the following way:
* They should only ``return`` if completed successfully.
* They should always raise an exception if they did not complete
successfully. In particular, if they handle cancellation, they should
probably reraise, like this::
try:
# do work
except asyncio.CancelledError:
# undo partially completed work
raise
Args:
coro_fns: an iterable of coroutine functions, i.e. callables that
return a coroutine object when called. Use ``functools.partial`` or
lambdas to pass arguments.
delay: amount of time, in seconds, between starting coroutines. If
``None``, the coroutines will run sequentially.
loop: the event loop to use.
Returns:
tuple *(winner_result, winner_index, exceptions)* where
- *winner_result*: the result of the winning coroutine, or ``None``
if no coroutines won.
- *winner_index*: the index of the winning coroutine in
``coro_fns``, or ``None`` if no coroutines won. If the winning
coroutine may return None on success, *winner_index* can be used
to definitively determine whether any coroutine won.
- *exceptions*: list of exceptions returned by the coroutines.
``len(exceptions)`` is equal to the number of coroutines actually
started, and the order is the same as in ``coro_fns``. The winning
coroutine's entry is ``None``.
"""
# TODO: when we have aiter() and anext(), allow async iterables in coro_fns.
loop = loop or events.get_running_loop()
enum_coro_fns = enumerate(coro_fns)
winner_result = None
winner_index = None
exceptions = []
running_tasks = []
async def run_one_coro(
> previous_failed: typing.Optional[locks.Event]) -> None:
E NameError: name 'typing' is not defined. Did you forget to import 'typing'?
coro_fns = <generator object start_connection.<locals>.<genexpr> at 0x7feba06d1be0>
delay = 0.3
enum_coro_fns = <enumerate object at 0x7feba06223e0>
exceptions = []
loop = <_UnixSelectorEventLoop running=False closed=False debug=False>
running_tasks = []
winner_index = None
winner_result = None
/usr/lib/python3.13/asyncio/staggered.py:73: NameError
```
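A minimal sketch of a fix (the merged change may instead simply restore the `typing` import) is to drop the runtime dependency by using a string annotation or PEP 604 union syntax:

```python
from asyncio import locks

# Sketch: a string annotation (or `locks.Event | None`) needs no
# `typing` import when the function object is created.
async def run_one_coro(previous_failed: "locks.Event | None") -> None:
    ...
```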
CC @AA-Turner
### CPython versions tested on:
3.13, CPython main branch
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-119173
* gh-119206
<!-- /gh-linked-prs -->
| 16b46ebd2b0025aa461fdfc95fbf98a4f04b49e6 | 357f5a1f73684d0c126a5e8f79d76ff3641c4d52 |
python/cpython | python__cpython-119615 | # `tokenize.generate_tokens()` performance regression in 3.12
# Bug report
### Bug description:
There seems to be a significant performance regression in `tokenize.generate_tokens()` between 3.11 and 3.12 when tokenizing a (very) large dict on a single line. I searched the existing issues but couldn't find anything about this.
To reproduce, rename the file [largedict.py.txt](https://github.com/python/cpython/files/15348502/largedict.py.txt) to `largedict.py` in the same directory as the script below, then run the script. That file comes from https://github.com/nedbat/coveragepy/issues/1785.
```python
import io, time, sys, tokenize
import largedict
text = largedict.d
readline = io.StringIO(text).readline
glob_start = start = time.time()
print(f"{sys.implementation.name} {sys.platform} {sys.version}")
for i, (ttype, ttext, (sline, scol), (_, ecol), _) in enumerate(tokenize.generate_tokens(readline)):
    if i % 500 == 0:
        print(i, ttype, ttext, sline, scol, time.time() - start)
        start = time.time()
    if i % 5000 == 0:
        print(time.time() - glob_start)
print(f"Time taken: {time.time() - glob_start}")
```
For Python 3.12, this results in:
```
cpython linux 3.12.3 (main, May 17 2024, 07:19:22) [GCC 11.4.0]
0 1 a_large_dict_literal 1 0 0.04641866683959961
0.046633005142211914
500 3 ':tombol_a_(golongan_darah):' 1 2675 9.689745903015137
1000 3 ':flagge_anguilla:' 1 5261 9.767053604125977
1500 3 ':флаг_Армения:' 1 7879 9.258271932601929
[...]
```
For Python 3.11, this results in:
```
cpython linux 3.11.0rc1 (main, Aug 12 2022, 10:02:14) [GCC 11.2.0]
0 1 a_large_dict_literal 1 0 0.013637304306030273
0.013663768768310547
500 3 ':tombol_a_(golongan_darah):' 1 2675 0.002939462661743164
1000 3 ':flagge_anguilla:' 1 5261 0.0028715133666992188
1500 3 ':флаг_Армения:' 1 7879 0.002806425094604492
[...]
352500 3 'pt' 1 2589077 0.003370046615600586
Time taken: 2.1244866847991943
```
That is, each 500 tokens in Python 3.12 is taking over 9 seconds to process, while the 352500 tokens in Python 3.11 is taking a bit over 2 seconds to process.
I can reproduce this on Linux (WSL) and Windows. Also seems to affect 3.13.
### CPython versions tested on:
3.9, 3.10, 3.11, 3.12
### Operating systems tested on:
Linux, Windows
<!-- gh-linked-prs -->
### Linked PRs
* gh-119615
* gh-119682
* gh-119683
<!-- /gh-linked-prs -->
| d87b0151062e36e67f9e42e1595fba5bf23a485c | ae9140f32a1630838374f1af402291d4649a0be0 |
python/cpython | python__cpython-119124 | # pathlib.Path.with_suffix(None) became allowed in Python 3.13b1
# Bug report
### Bug description:
In Python 3.13b1 (compared to Python 3.12 and earlier), the behavior of `pathlib.Path.with_suffix(None)` changed from raising an incidental `TypeError` to removing the suffix. This causes a [certain test](https://github.com/pytest-dev/pytest/blob/fdf3aa3fc35ee6d336eafef5016f308b5e57bddb/testing/test_legacypath.py#L36-L39) in pytest to fail. I am going to fix pytest to not rely on this in any case, but I think pathlib should either:
- Restore the previous behavior if it was unintended, or
- Update [the doc](https://docs.python.org/3.14/library/pathlib.html#pathlib.PurePath.with_suffix) to say that `None` is now accepted. Currently it says "If the suffix is an empty string, the original suffix is removed".
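For reference, the documented behaviour, where the empty string (not `None`) removes the suffix:

```python
from pathlib import PurePosixPath

p = PurePosixPath("archive.tar.gz")
print(p.with_suffix(""))      # removes the last suffix: archive.tar
print(p.with_suffix(".zip"))  # replaces it: archive.tar.zip
```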
### CPython versions tested on:
3.12, 3.13
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-119124
* gh-119183
<!-- /gh-linked-prs -->
| 3c28510b984392b8dac87a17dfc5887366d5c4ab | 4b7667172898d440c1931ae923446c6a5ef1765e |
python/cpython | python__cpython-119131 | # difflib.py Differ.compare is too slow [for degenerate cases]
# Bug report
### Bug description:
```python
import difflib
a = ["0123456789\n"] * 1_000
b = ["01234a56789\n"] * 1_000
list(difflib.Differ().compare(a, b)) # very slow and will probably hit max recursion depth
```
The case is pathological in the sense of many lines with the same exact diff / ratio.
The issue is that in the current implementation `_fancy_replace` will take the first pair of lines (with the same ratio) as a split point and will call itself recursively for all lines starting at 2, then 3, 4, etc. This repeats `1_000` times resulting in a massive recursion depth and `O(N^3)` complexity scaling.
For an average random case it should split anywhere in the range of `1_000` with a complexity scaling of `O(N^2 log N)`.
I personally encountered this in diffing csv files where one of the files has a column added which, apparently, results in all-same ratios for every line in the file.
### Proposal
Fixing this is not so hard by adding some heuristics (WIP) https://github.com/pulkin/cpython/commit/31e1ed03cf05bdbc9b4695c8bb680e65963f9bff
The idea is very straightforward: while doing the `_fancy_replace` magic, if you see many diffs with the same exact ratio, pick the one closest to the middle of chunks rather than the first one (which can be the worst possible choice). This is done by promoting the ratios of those pairs of lines that are closer to the middle of the chunks.
The `_drag_to_center` function turns a line number into a weight added to the ratio (twice: once for a and once for b). The weight is zero at both ends of a chunk and maximal in the middle (a quadratic polynomial was chosen for simplicity). The magnitude of the weight, `_gravity`, is small enough to only affect ratios that are exactly equal: it relies on the assumption that lines have fewer than 500k symbols, so that steps in the `ratio` are greater than 1e-6. If this assumption fails, some diffs may come out different (not necessarily worse).
Performance impact for non-pathological cases is probably minimal.
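The weighting can be sketched roughly as follows (the name and the `gravity` constant are illustrative, loosely following the WIP commit):

```python
def drag_to_center(i, n, gravity=1e-7):
    """Tiny bonus added to a candidate split point's ratio so that,
    among pairs whose ratios are exactly equal, the point closest to
    the middle of the chunk wins (zero at both ends, maximal in the
    middle; quadratic for simplicity)."""
    if n <= 1:
        return 0.0
    x = i / (n - 1)                  # normalised position in [0, 1]
    return gravity * 4.0 * x * (1.0 - x)

# Ties at the chunk ends get no bonus; the middle gets the largest.
weights = [drag_to_center(i, 11) for i in range(11)]
```

With ties broken this way, the recursion splits chunks near their middles, giving the `O(N^2 log N)` scaling mentioned above instead of `O(N^3)`.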
### CPython versions tested on:
3.9, 3.12
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-119131
* gh-119376
* gh-119492
<!-- /gh-linked-prs -->
| 0abf997e75bd3a8b76d920d33cc64d5e6c2d380f | 3c28510b984392b8dac87a17dfc5887366d5c4ab |
python/cpython | python__cpython-119269 | # New pyrepl gives a traceback on exit with "dumb" terminal
# Bug report
### Bug description:
I was experimenting with terminal types:
```
$ export TERM=dumb
$ python3
Python 3.13.0b1 (main, May 12 2024, 23:38:03) [GCC 13.2.1 20240316 (Red Hat 13.2.1-7)] on linux
Type "help", "copyright", "credits" or "license" for more information.
warning: can't use pyrepl: terminal doesn't have the required clear capability
>>> quit
Use quit() or Ctrl-D (i.e. EOF) to exit
>>> quit()
Exception ignored in atexit callback <function register_readline.<locals>.write_history at 0x7f7204f537e0>:
Traceback (most recent call last):
File "<frozen site>", line 530, in write_history
File "/home/duncan/py313inst/lib/python3.13/_pyrepl/readline.py", line 361, in write_history_file
history = self.get_reader().get_trimmed_history(maxlength)
File "/home/duncan/py313inst/lib/python3.13/_pyrepl/readline.py", line 277, in get_reader
console = UnixConsole(self.f_in, self.f_out, encoding=ENCODING)
File "/home/duncan/omni/py313inst/lib/python3.13/_pyrepl/unix_console.py", line 170, in __init__
self._clear = _my_getstr("clear")
File "/home/duncan/py313inst/lib/python3.13/_pyrepl/unix_console.py", line 163, in _my_getstr
raise InvalidTerminal(
_pyrepl.unix_console.InvalidTerminal: terminal doesn't have the required clear capability
```
It notices that the terminal is unsuitable, but then tries to use it on exit.
### CPython versions tested on:
3.13
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-119269
* gh-119308
* gh-119328
* gh-119332
* gh-119359
<!-- /gh-linked-prs -->
| 73f4a58d36b65ec650e8f00b2affc4a4d3195f0c | b36533290608aed757f6eb16869a679650d32e17 |
python/cpython | python__cpython-119129 | # venv tutorial wrong/confusing about python version used
# Documentation
From https://docs.python.org/3/tutorial/venv.html:
> [venv](https://docs.python.org/3/library/venv.html#module-venv) will usually install the most recent version of Python that you have available. If you have multiple versions of Python on your system, you can select a specific Python version by running python3 or whichever version you want.
However, https://docs.python.org/3/library/venv.html#module-venv makes it clear that the venv will use `the Python installation from which the command was run`.
So I believe it should be updated to:
> [venv](https://docs.python.org/3/library/venv.html#module-venv) will install the Python version from which the command was run. If you have multiple versions of Python on your system, you can select a specific Python version by running python3 or whichever version you want.
<!-- gh-linked-prs -->
### Linked PRs
* gh-119129
* gh-119141
* gh-119142
<!-- /gh-linked-prs -->
| 0f5e8bed636c2f29701e5a1965d1b088d33abbf0 | 81c3130c51a2b1504842cb1a93732cc03ddbbd79 |
python/cpython | python__cpython-119426 | # Version numbers not supported for shebang line virtual command /usr/bin/env python on windows
# Documentation
In section 4.8.2 of the docs (Shebang Lines), it is stated that: "Any of the above virtual commands can be suffixed with an explicit version (either just the major version, or the major and minor version)."
This sentence refers to the 4 virtual commands: /usr/bin/env, /usr/bin/python, /usr/local/bin/python, python.
This is correct for the last 3 virtual commands (e.g. /usr/bin/python3.12 works), but not for /usr/bin/env.
Specifically, using either the major version (e.g. "/usr/bin/env python3") or the major and minor version (e.g. "/usr/bin/env python3.12") in the shebang line of a .py file, then calling the .py file with the Windows launcher, results in the following error: "Python was not found; run without arguments to install from the Microsoft Store, or disable this shortcut from Settings > Manage App Execution Aliases."
Simple solution: change the relevant sentence in the docs to exclude /usr/bin/env: "Any of the above virtual commands (apart from ``/usr/bin/env``) can be suffixed with an explicit version (either just the major version, or the major and minor version)."
<!-- gh-linked-prs -->
### Linked PRs
* gh-119426
* gh-119739
* gh-119741
* gh-119747
* gh-119846
* gh-120015
* gh-120016
<!-- /gh-linked-prs -->
| df93f5d4bf9d70036d485666d4dd4f009d37f8b9 | fcca08ec2f48f4ba5ba1d4690fb39b1efe630944 |
python/cpython | python__cpython-119065 | # Test with the path protocol, not with pathlib.Path
A number of tests use `pathlib.Path` to test that the code supports path-like objects. They should use a special object that only implements the path protocol instead. This will avoid unintentional dependency on other `pathlib.Path` methods and attributes and allow to test path-like objects with the bytes path.
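A minimal helper of the intended kind (illustrative; the eventual test-support class may differ) implements only `__fspath__`, so it also supports `bytes` paths, which `pathlib.Path` cannot represent:

```python
import os

class FakePath:
    """Object implementing only the path protocol (os.PathLike)."""

    def __init__(self, path):
        self.path = path

    def __repr__(self):
        return f"<FakePath {self.path!r}>"

    def __fspath__(self):
        return self.path

print(os.fspath(FakePath("spam")))   # spam
print(os.fspath(FakePath(b"spam")))  # b'spam'
```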
<!-- gh-linked-prs -->
### Linked PRs
* gh-119065
* gh-119087
* gh-119088
* gh-119089
<!-- /gh-linked-prs -->
| 0152dc4ff5534fa2948b95262e70ff6b202b9b99 | 0142a2292c3d3bfa56a987d576a9678be0f56931 |
python/cpython | python__cpython-119066 | # Make ZeroDivisionError message more precise when floor-dividing by zero
# Feature or enhancement
### Proposal:
```python
try:
    10 // 0
except ZeroDivisionError as e:
    print(e)  # prints "integer division or modulo by zero"

try:
    10 / 0
except ZeroDivisionError as e:
    print(e)  # prints "division by zero"

try:
    10 % 0
except ZeroDivisionError as e:
    print(e)  # prints "integer modulo by zero"
```
When floor-dividing by zero, the error says "division or modulo", but (non-floor) division by zero and modulo by zero give more specific messages saying just "division" or "modulo". I would recommend making the floor-division message say "floor division" specifically, because it would make clear exactly which operation the error is talking about.
Thank You!
### Has this already been discussed elsewhere?
This is a minor feature, which does not need previous discussion elsewhere
### Links to previous discussion of this feature:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-119066
<!-- /gh-linked-prs -->
| 1d4c2e4a877a48cdc8bcc9808d799b91c82b3757 | 153b118b78588209850cc2a4cbc977f193a3ab6e |
python/cpython | python__cpython-119055 | # Documentation of pathlib.Path methods is disorganised
# Documentation
The docs for [`pathlib.Path` methods](https://docs.python.org/3.14/library/pathlib.html#methods) are disorganised. The section was originally alphabetical, but over the last few years we've begun moving similar methods together. We ought to split it up into sections, to make it easier to navigate and absorb, e.g.:
- Parsing and generating URIs
- Querying file type and status
- Reading and writing files
- Iterating over directories
- Making paths absolute
- Expanding home directories
- Resolving symlinks
- Permissions
- Ownership
- Other methods
If this granularity is too fine, we could stick more stuff in "Other methods".
<!-- gh-linked-prs -->
### Linked PRs
* gh-119055
* gh-119524
* gh-119951
* gh-119952
* gh-119954
* gh-119955
* gh-119956
* gh-120183
* gh-120184
* gh-120186
* gh-120462
* gh-120464
* gh-120465
* gh-120472
* gh-120473
* gh-120505
* gh-120967
* gh-120968
* gh-120970
* gh-121155
* gh-121156
* gh-121158
* gh-121167
* gh-121168
* gh-121169
<!-- /gh-linked-prs -->
| 81d63362302187e5cb838c9a7cd857181142e530 | 045e195c76f33c77c339284b13f81102e4b9abe2 |
python/cpython | python__cpython-119112 | # Implement the fast path for `list.__getitem__`
# Feature or enhancement
### Proposal:
Implement the fast path for `list.__getitem__` [outlined in PEP-703](https://peps.python.org/pep-0703/#optimistically-avoiding-locking).
### Has this already been discussed elsewhere?
This is a minor feature, which does not need previous discussion elsewhere
### Links to previous discussion of this feature:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-119112
* gh-119309
<!-- /gh-linked-prs -->
| ab4263a82abe8b684d8ad1edf7c7c6ec286ff756 | 73f4a58d36b65ec650e8f00b2affc4a4d3195f0c |
python/cpython | python__cpython-119063 | # Defer `import warnings` in pathlib
As of GH-118793, the `warnings` module is used only once in pathlib, by `PurePath.is_reserved()`. We should be able to move the `import warnings` line into that method. But beware! Doing so seems to cause a failure in `test_io`, which needs further investigation.
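The change amounts to moving the import into the one method that needs it, roughly like this (a sketch, not the actual pathlib code):

```python
class PurePath:
    def is_reserved(self):
        # Deferred import: `warnings` is only loaded if this deprecated
        # method is actually called, so `import pathlib` stays cheaper.
        import warnings
        warnings.warn(
            "pathlib.PurePath.is_reserved() is deprecated",
            DeprecationWarning, stacklevel=2)
        return False
```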
<!-- gh-linked-prs -->
### Linked PRs
* gh-119063
* gh-119106
* gh-119111
* gh-119119
<!-- /gh-linked-prs -->
| 100c7ab00ab66a8c0d54582f35e38d8eb691743c | 4702b7b5bdc07d046576b4126cf4e4f5f7145abb |
python/cpython | python__cpython-119248 | # Python 3.13.0b1 REPL does not travel words when pressing Ctrl+← or Ctrl+→
# Bug report
### Bug description:
When I type this to older Python REPLs:
```pycon
>>> text = "I can travel the words."
```
And I press <kbd>Ctrl</kbd>+<kbd>←</kbd>, my cursor moves in front of `w`. I can press <kbd>Ctrl</kbd>+<kbd>←</kbd> again to go in front of the `t` etc. I can then press <kbd>Ctrl</kbd>+<kbd>→</kbd> to travel the words in the other direction. My bash behaves the same. Again, I don't know if this is Fedora's configuration of readline, or the default.
In Python 3.13.0b1 REPL, this no longer works. Pressing <kbd>Ctrl</kbd>+<kbd>←</kbd> or <kbd>Ctrl</kbd>+<kbd>→</kbd> seemingly does nothing.
### CPython versions tested on:
3.13
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-119248
* gh-119323
<!-- /gh-linked-prs -->
| 0398d9339217aa0710c0de45a7e9b587136e7129 | c4722cd0573c83aaa52b63a27022b9048a949f54 |
python/cpython | python__cpython-123607 | # Python 3.13.0b1 REPL changes behavior wrt PgUp, inserts first line of ~/.python_history
# Bug report
### Bug description:
When I use the Python REPL, I am used to <kbd>PgUp</kbd> browsing my prompt history based on the partial command I already typed.
Consider:
```pycon
>>> import sys
>>> import os
>>> ...
>>> im[PgUp]
```
Python 3.12 REPL inserts `import os` with my cursor between `m` and `p`. I can keep pressing <kbd>PgUp</kbd> to get `import sys` and older commands from my history. I don't know if this is Fedora's configuration of readline, or the default. However, Bash and older Python REPLs behave that way, as well as IPython/Jupyter console.
Python 3.13.0b1 REPL changes my prompt to `print("a")` when I press <kbd>PgUp</kbd>. The particular command is my first line of `~/.python_history`. Pressing <kbd>PgUp</kbd> again changes nothing.
### CPython versions tested on:
3.13
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-123607
* gh-123773
<!-- /gh-linked-prs -->
| 8311b11800509c975023e062e2c336f417c5e4c0 | d683f49a7b0635a26150cfbb398a3d93b227a74e |
python/cpython | python__cpython-119130 | # Classes tutorial: confusing explanation of non-instance methods
# Documentation
From https://docs.python.org/3/tutorial/classes.html#instance-objects:
> The other kind of instance attribute reference is a method. A method is a function that “belongs to” an object. (In Python, the term method is not unique to class instances: other object types can have methods as well. For example, list objects have methods called append, insert, remove, sort, and so on. However, in the following discussion, we’ll use the term method exclusively to mean methods of class instance objects, unless explicitly stated otherwise.)
From the user's perspective, a list is a class in Python 3. It may be implemented under the hood using some special construct, but in all the documentation and to the user, it is still a class. So a list object is a class instance for all intents and purposes, even if it's really not under the hood.
I suggest reworking this paragraph to clarify whatever it's trying to communicate, or deleting all of it except the first two sentences.
A potential reworked version is below, but I'm not sure it wouldn't still be confusing to someone new.
> The other kind of instance attribute reference is a method. A method is a function that “belongs to” an object. (Note that technically, there are different types of methods in python - see https://docs.python.org/3/library/stdtypes.html#methods for more details. However, in the following discussion, we’ll use the term method exclusively to mean methods of class instance objects, unless explicitly stated otherwise.)
Part of the confusion might be that the tutorial doesn't really explain bound methods: the following section 9.3.4 mentions a method being "bound" without the terminology ever being explained.
Is the distinction between built-in and instance methods even necessary at this point, especially in higher-level docs like the tutorial?
<!-- gh-linked-prs -->
### Linked PRs
* gh-119130
* gh-119925
* gh-119926
<!-- /gh-linked-prs -->
| c618f7d80e78f83cc24b6bdead33ca38cbd4d27f | 53b1981fb0cda6c656069e992f172fc6aad7c99c |
python/cpython | python__cpython-119296 | # `functools.update_wrapper` does not work with `type`
# Bug report
### Bug description:
```python
from functools import update_wrapper
def my_type(*args): pass
t = update_wrapper(my_type, type)
```
This works on Python ≤ 3.11, and I think this is expected because the docs for `update_wrapper` claim that it may be used with callables other than functions.
However, with 3.12, I get
```
TypeError: __type_params__ must be set to a tuple
```
(Similarly for `functools.wraps`.)
The problem is that `update_wrapper` copies `__type_params__` by default (which is undocumented; see the [related issue](https://github.com/python/cpython/issues/119010)), and `type.__type_params__` is a descriptor rather than a tuple.
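A possible workaround on 3.12 (illustrative, not the eventual fix) is to filter the attribute out of `assigned`:

```python
from functools import WRAPPER_ASSIGNMENTS, update_wrapper

def my_type(*args):
    pass

# Skip __type_params__, which functions only accept as a tuple but
# which `type` exposes as a descriptor object.
assigned = tuple(a for a in WRAPPER_ASSIGNMENTS if a != "__type_params__")
t = update_wrapper(my_type, type, assigned=assigned)
print(t.__name__)  # type
```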
### CPython versions tested on:
3.12
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-119296
* gh-119313
* gh-119678
* gh-119681
<!-- /gh-linked-prs -->
| 6b240c2308a044e38623900ccb8fa58c3549d4ae | ae11d68ab90324a3359699ca13fcf9a229966713 |
python/cpython | python__cpython-119012 | # Missing `__type_params__` in the documentation of `functools.update_wrapper`.
# Documentation
The description of the default `WRAPPER_ASSIGNMENTS` in the [docs](https://docs.python.org/3/library/functools.html#functools.update_wrapper) for `functools.update_wrapper` omits the `__type_params__` attribute, which is included in the implementation since [3fadd7d](https://github.com/python/cpython/commit/3fadd7d5857842fc5c).
<!-- gh-linked-prs -->
### Linked PRs
* gh-119012
* gh-119013
* gh-119014
<!-- /gh-linked-prs -->
| b04c497f187b0b474e431a6d8d282269b40ffe52 | 7d7eec595a47a5cd67ab420164f0059eb8b9aa28 |
python/cpython | python__cpython-119006 | # Add gettext target to documentation's Makefile
# Feature or enhancement
### Proposal:
As some may know already, Sphinx's _gettext_ builder is used to extract strings from the docs and store them in message catalog templates (pot files).
Currently there is no straightforward `make gettext` command to generate pot files, so the language teams need to run commands like:
```shell
sphinx-build -b gettext -D gettext_compact=0 . locales/pot
```
or via `make` command using doc’s Makefile:
```shell
make build BUILDER=gettext SPHINXOPTS='-D gettext_compact=0'
```
(On a side note, the first command doesn’t generate ‘changelog.pot’.)
In my personal opinion, having a _gettext_ target would simplify the process, reducing the complexity of the command construction and making the outputs uniform. It would also be a small step toward simplifying the setup for new teams.
Having a _gettext_ target would allow to simply run `make gettext` and have the pot files in `build/gettext` directory. See output:
```
$ make gettext
mkdir -p build
Building NEWS from Misc/NEWS.d with blurb
/home/rffontenelle/Projects/cpython/Doc/build/NEWS is already up to date
PATH=./venv/bin:$PATH sphinx-build -b gettext -d build/doctrees -j auto -W . build/gettext
Running Sphinx v7.3.7
building [gettext]: targets for 7 template files
reading templates... [100%] /home/rffontenelle/Projects/cpython/Doc/tools/templates/search.html
building [gettext]: targets for 469 source files that are out of date
updating environment: [new config] 469 added, 0 changed, 0 removed
reading sources... [100%] using/unix .. whatsnew/index
looking for now-outdated files... none found
pickling environment... done
checking consistency... done
preparing documents... done
copying assets... done
writing output... [100%] whatsnew/index
writing message catalogs... [100%] whatsnew/index
build succeeded.
The message catalogs are in build/gettext.
$ ls build/gettext/
about.pot c-api copyright.pot extending glossary.pot installing license.pot sphinx.pot using
bugs.pot contents.pot distributing faq howto library reference tutorial whatsnew
$ find build/gettext/ -name '*.pot' | wc -l
470
```
### Has this already been discussed elsewhere?
I have already discussed this feature proposal on Discourse
### Links to previous discussion of this feature:
https://discuss.python.org/t/add-gettext-builder-as-target-for-docs-makefile/53229
<!-- gh-linked-prs -->
### Linked PRs
* gh-119006
* gh-119074
* gh-119075
<!-- /gh-linked-prs -->
| fb0cf7d1408c904e40142a74cd7a53eb52a8e568 | 5b88d95cc542cf02303c6fe0e8719a93544decdb |
python/cpython | python__cpython-121329 | # Crash in OrderedDict similiar to #83771
# Bug report
### Bug description:
```python
import collections
global count
count = 0
class Evil():
def __eq__(self, other):
global count
print(count)
if count == 1:
l.clear()
print("cleared l")
count += 1
return True
def __hash__(self):
return 3
l = collections.OrderedDict({Evil(): 4, 5: 6})
r = collections.OrderedDict({Evil(): 4, 5: 6})
print(l == r)
```
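For contrast, plain `dict` iteration detects concurrent mutation and raises a `RuntimeError` instead of crashing; presumably the `OrderedDict` comparison needs a similar guard:

```python
# Mutating a plain dict while an iterator is live is caught safely:
d = {1: 'a'}
it = iter(d)
d[2] = 'b'          # resize the dict while the iterator exists
try:
    next(it)
except RuntimeError as e:
    print(e)        # dictionary changed size during iteration
```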
### CPython versions tested on:
3.10, 3.13
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-121329
* gh-124507
* gh-124508
<!-- /gh-linked-prs -->
| 38a887dc3ec52c4a7222279bf4b3ca2431b86de9 | e80dd3035fb805716bc49f9e7e9cab5f83614661 |
python/cpython | python__cpython-119935 | # library/stdtypes.html - Mutable Sequence Types - Operations Table - Slice Assignments
# Documentation
The table of operations for [mutable sequence types](https://docs.python.org/3/library/stdtypes.html#mutable-sequence-types) has footnote (1) for the operation `s[i:j:k] = t`, which states that "_t_ must have the same length as the slice it is replacing." When the step size of the slice (k) is 1, the slice object behaves the same as the operation `s[i:j] = t`. When creating slices with similar values, these slices are distinct:
```python
slice1 = slice(1,10)
slice2 = slice(1,10,1)
slice1 == slice2
#False
```
But when these slices are used in mutable sequence assignment, they behave the same:
```python
list1 = list(range(10))
print(list1, len(list1))
#[0, 1, 2, 3, 4, 5, 6, 7, 8, 9] 10
list1[slice1] = [None] * 6 #s[i:j] = t | slice of _s_ from _i_ to _j_ is replaced by the contents of the iterable _t_
print(list1, len(list1))
#[0, None, None, None, None, None, None, 6, 7, 8, 9] 11
#Length of list increases by 1 (10 -> 11)
list2 = list(range(10))
list2[slice2] = [None] * 6 #s[i:j:k] = t | the elements of s[i:j:k] are replaced by those of _t_
print(list2, len(list2))
#[0, None, None, None, None, None, None, 6, 7, 8, 9] 11
#Length of list increases by 1 (10 -> 11)
```
This is only when the step (k) is 1, as shown by the following example:
```python
slice3 = slice(1, 11, 2)
list3 = list(range(20))
print(list3, len(list3))
#[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19] 20
list3[slice3] = [None] * 6 #s[i:j:k] = t | the elements of s[i:j:k] are replaced by those of _t_
#ValueError: attempt to assign sequence of size 6 to extended slice of size 5
#Length of list (can't) increase(s) by 1 (20 -> 21)
```
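For completeness, a compact check that the two assignment forms agree when k = 1, including the length change that footnote (1) currently forbids:

```python
a = list(range(5))
b = list(range(5))
a[1:3] = [None] * 4      # plain slice assignment
b[1:3:1] = [None] * 4    # extended slice with step 1
assert a == b == [0, None, None, None, None, 3, 4]
print(len(a), len(b))    # both grew from 5 to 7
```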
Since the example `s[i:j:k] = t` does not behave the same when k = 1 compared to k > 1, the documentation should be updated. My proposed change is that footnote (1) read as follows:
1. If k = 1, this behaves the same as `s[i:j] = t`; otherwise _t_ must have the same length as the slice it is replacing, and a `ValueError` is raised if it does not.
<!-- gh-linked-prs -->
### Linked PRs
* gh-119935
* gh-120847
* gh-120848
<!-- /gh-linked-prs -->
| 462832041e342f8aaf8c88ec44f7b14c70042575 | 1dadcb5a6a821dd6ab397cf52a2fa9618839d8c0 |
python/cpython | python__cpython-118999 | # Handle errors correctly in `tmtotuple` in `timemodule`
# Bug report
This call is problematic: https://github.com/python/cpython/blob/f526314194f7fd15931025f8a4439c1765666e42/Modules/timemodule.c#L465
It can return `NULL` in theory.
These calls also can return `NULL`:
- https://github.com/python/cpython/blob/f526314194f7fd15931025f8a4439c1765666e42/Modules/timemodule.c#L477-L478
- https://github.com/python/cpython/blob/f526314194f7fd15931025f8a4439c1765666e42/Modules/timemodule.c#L481-L482
This error guard at the end will only show the last error, not the first one: https://github.com/python/cpython/blob/f526314194f7fd15931025f8a4439c1765666e42/Modules/timemodule.c#L486-L489 Also: why `XDECREF` when `v` cannot be `NULL` at this point?
Refs https://github.com/python/cpython/issues/116714
I will send a PR adding our regular macro for the job.
<!-- gh-linked-prs -->
### Linked PRs
* gh-118999
* gh-119018
* gh-119019
<!-- /gh-linked-prs -->
| fc757925944a9486d4244853dbe6e37ab3e560c2 | b04c497f187b0b474e431a6d8d282269b40ffe52 |
python/cpython | python__cpython-126768 | # "Windows fatal Exception: access violation" while testing 3.13.0b1 free-threading build (related to tkinter ?)
# Crash report
### What happened?
Running pytest on my GitHub repository ["sqlite-bro"](https://github.com/stonebig/sqlite_bro), with Python 3.13.0b1 modified as follows:
- "with free-threading binaries download" option checked
- tempfile.py patched with "_os.mkdir(file)" (zooba magic potion)
- python3.13t.exe renamed to python.exe
- pythonw3.13t.exe renamed to pythonw.exe
I get a crash:
<details>
```text
C:\Users\stonebig\Documents\GitHub\sqlite_bro>pytest
====================================================================== test session starts =======================================================================
platform win32 -- Python 3.13.0b1, pytest-8.2.0, pluggy-1.5.0
rootdir: C:\Users\stonebig\Documents\GitHub\sqlite_bro
configfile: pyproject.toml
collecting ... Windows fatal exception: access violation
Current thread 0x00006b84 (most recent call first):
File "<frozen importlib._bootstrap>", line 488 in _call_with_frames_removed
File "<frozen importlib._bootstrap_external>", line 1315 in create_module
File "<frozen importlib._bootstrap>", line 813 in module_from_spec
File "<frozen importlib._bootstrap>", line 921 in _load_unlocked
File "<frozen importlib._bootstrap>", line 1331 in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 1360 in _find_and_load
File "C:...\WPy64-31300b1\python-3.13.0b1.amd64\Lib\tkinter\__init__.py", line 38 in <module>
File "<frozen importlib._bootstrap>", line 488 in _call_with_frames_removed
File "<frozen importlib._bootstrap_external>", line 1021 in exec_module
File "<frozen importlib._bootstrap>", line 935 in _load_unlocked
File "<frozen importlib._bootstrap>", line 1331 in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 1360 in _find_and_load
File "C:\Users\stonebig\Documents\GitHub\sqlite_bro\sqlite_bro\sqlite_bro.py", line 23 in <module>
File "<frozen importlib._bootstrap>", line 488 in _call_with_frames_removed
File "<frozen importlib._bootstrap_external>", line 1021 in exec_module
File "<frozen importlib._bootstrap>", line 935 in _load_unlocked
File "<frozen importlib._bootstrap>", line 1331 in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 1360 in _find_and_load
File "<frozen importlib._bootstrap>", line 488 in _call_with_frames_removed
File "<frozen importlib._bootstrap>", line 1415 in _handle_fromlist
File "C:\Users\stonebig\Documents\GitHub\sqlite_bro\sqlite_bro\tests\test_general.py", line 7 in <module>
File "C:...\WPy64-31300b1\python-3.13.0b1.amd64\Lib\site-packages\_pytest\assertion\rewrite.py", line 178 in exec_module
File "<frozen importlib._bootstrap>", line 935 in _load_unlocked
File "<frozen importlib._bootstrap>", line 1331 in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 1360 in _find_and_load
File "<frozen importlib._bootstrap>", line 1387 in _gcd_import
File "C:...\WPy64-31300b1\python-3.13.0b1.amd64\Lib\importlib\__init__.py", line 88 in import_module
File "C:...\WPy64-31300b1\python-3.13.0b1.amd64\Lib\site-packages\_pytest\pathlib.py", line 591 in import_path
File "C:...\WPy64-31300b1\python-3.13.0b1.amd64\Lib\site-packages\_pytest\python.py", line 487 in importtestmodule
File "C:...\WPy64-31300b1\python-3.13.0b1.amd64\Lib\site-packages\_pytest\python.py", line 540 in _getobj
File "C:...\WPy64-31300b1\python-3.13.0b1.amd64\Lib\site-packages\_pytest\python.py", line 282 in obj
File "C:...\WPy64-31300b1\python-3.13.0b1.amd64\Lib\site-packages\_pytest\python.py", line 556 in _register_setup_module_fixture
File "C:...\WPy64-31300b1\python-3.13.0b1.amd64\Lib\site-packages\_pytest\python.py", line 543 in collect
File "C:...\WPy64-31300b1\python-3.13.0b1.amd64\Lib\site-packages\_pytest\runner.py", line 389 in collect
File "C:...\WPy64-31300b1\python-3.13.0b1.amd64\Lib\site-packages\_pytest\runner.py", line 341 in from_call
File "C:...\WPy64-31300b1\python-3.13.0b1.amd64\Lib\site-packages\_pytest\runner.py", line 391 in pytest_make_collect_report
File "C:...\WPy64-31300b1\python-3.13.0b1.amd64\Lib\site-packages\pluggy\_callers.py", line 103 in _multicall
File "C:...\WPy64-31300b1\python-3.13.0b1.amd64\Lib\site-packages\pluggy\_manager.py", line 120 in _hookexec
File "C:...\WPy64-31300b1\python-3.13.0b1.amd64\Lib\site-packages\pluggy\_hooks.py", line 513 in __call__
File "C:...\WPy64-31300b1\python-3.13.0b1.amd64\Lib\site-packages\_pytest\runner.py", line 565 in collect_one_node
File "C:...\WPy64-31300b1\python-3.13.0b1.amd64\Lib\site-packages\_pytest\main.py", line 837 in _collect_one_node
File "C:...\WPy64-31300b1\python-3.13.0b1.amd64\Lib\site-packages\_pytest\main.py", line 974 in genitems
File "C:...\WPy64-31300b1\python-3.13.0b1.amd64\Lib\site-packages\_pytest\main.py", line 979 in genitems
File "C:...\WPy64-31300b1\python-3.13.0b1.amd64\Lib\site-packages\_pytest\main.py", line 979 in genitems
File "C:...\WPy64-31300b1\python-3.13.0b1.amd64\Lib\site-packages\_pytest\main.py", line 979 in genitems
File "C:...\WPy64-31300b1\python-3.13.0b1.amd64\Lib\site-packages\_pytest\main.py", line 811 in perform_collect
File "C:...\WPy64-31300b1\python-3.13.0b1.amd64\Lib\site-packages\_pytest\main.py", line 349 in pytest_collection
File "C:...\WPy64-31300b1\python-3.13.0b1.amd64\Lib\site-packages\pluggy\_callers.py", line 103 in _multicall
File "C:...\WPy64-31300b1\python-3.13.0b1.amd64\Lib\site-packages\pluggy\_manager.py", line 120 in _hookexec
File "C:...\WPy64-31300b1\python-3.13.0b1.amd64\Lib\site-packages\pluggy\_hooks.py", line 513 in __call__
File "C:...\WPy64-31300b1\python-3.13.0b1.amd64\Lib\site-packages\_pytest\main.py", line 338 in _main
File "C:...\WPy64-31300b1\python-3.13.0b1.amd64\Lib\site-packages\_pytest\main.py", line 285 in wrap_session
File "C:...\WPy64-31300b1\python-3.13.0b1.amd64\Lib\site-packages\_pytest\main.py", line 332 in pytest_cmdline_main
File "C:...\WPy64-31300b1\python-3.13.0b1.amd64\Lib\site-packages\pluggy\_callers.py", line 103 in _multicall
File "C:...\WPy64-31300b1\python-3.13.0b1.amd64\Lib\site-packages\pluggy\_manager.py", line 120 in _hookexec
File "C:...\WPy64-31300b1\python-3.13.0b1.amd64\Lib\site-packages\pluggy\_hooks.py", line 513 in __call__
File "C:...\WPy64-31300b1\python-3.13.0b1.amd64\Lib\site-packages\_pytest\config\__init__.py", line 178 in main
File "C:...\WPy64-31300b1\python-3.13.0b1.amd64\Lib\site-packages\_pytest\config\__init__.py", line 206 in console_main
File "C:...\WPy64-31300b1\python-3.13.0b1.amd64\Scripts\pytest.exe\__main__.py", line 7 in <module>
File "<frozen runpy>", line 88 in _run_code
File "<frozen runpy>", line 198 in _run_module_as_main
```
</details>
On 3.13.0b1 patched the same way but without free-threading, all seems OK:
<details>
```text
================================================= test session starts =================================================
platform win32 -- Python 3.13.0b1, pytest-8.2.0, pluggy-1.5.0
rootdir: C:\Users\stonebig\Documents\GitHub\sqlite_bro
configfile: pyproject.toml
collected 3 items
sqlite_bro\tests\test_general.py ... [100%]
================================================== 3 passed in 0.76s ==================================================
```
</details>
I did that test because IDLE doesn't want to start and doesn't say why.
Package set used:
```
Package Version Summary
_______________ ____________ ______________________________________________________________________
build 1.1.1 A simple, correct Python build frontend
colorama 0.4.6 Cross-platform colored terminal text.
iniconfig 2.0.0 brain-dead simple config-ini parsing
packaging 23.2 Core utilities for Python packages
pip 24.0 The PyPA recommended tool for installing Python packages.
pluggy 1.5.0 plugin and hook calling mechanisms for python
pyproject-hooks 1.0.0 Wrappers to call pyproject.toml-based build backend hooks.
pytest 8.2.0 pytest: simple powerful testing with Python
setuptools 69.2.0 Easily download, build, install, upgrade, and uninstall Python package
sqlite-bro 0.13.1 a graphic SQLite Client in 1 Python file
wheel 0.43.0 A built-package format for Python
winpython 8.0.20240512 WinPython distribution tools, including WPPM
```
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Windows
### Output from running 'python -VV' on the command line:
Python 3.13.0b1 (tags/v3.13.0b1:2268289, May 8 2024, 12:31:50) [MSC v.1938 64 bit (AMD64)]
<!-- gh-linked-prs -->
### Linked PRs
* gh-126768
* gh-126867
<!-- /gh-linked-prs -->
| 9332a6f82506f819f591466eb03213be2c8d1808 | d4c72fed8cba8e15ab7bb6c30a92bc9f2c8f0a2c |
python/cpython | python__cpython-118960 | # `asyncio.sslproto._SSLProtocolTransport` can experience invalid state, leading to silent failures.
# Bug report
### Bug description:
## TL;DR
`_SSLProtocolTransport.is_closing` should match its inner `_SelectorTransport.is_closing`, indicating to the user that the transport is actually closed instead of silently logging an error.
## Description
I've been using the aio-libs library `aiohttp` in production together with its WebSocket client implementation, and found an interesting issue that sometimes occurred on certain devices (specifically M-series MacBooks).
The logs i've been seeing looks something like this:
```
( Indication that we are sending messages over websocket successfully )
...
[2024-05-07 09:34:13,403] WARNING asyncio.write: socket.send() raised exception.
...
( Successfully sent a message over websocket )
...
[2024-05-07 09:34:13,553] WARNING asyncio.write: socket.send() raised exception.
( No more indication that we're sending or receiving messages over websocket )
```
Digging deeper, the issue occurs when the connection has been lost due to an exception when invoking `socket.send`; this normally results in the transport's `is_closing()` function returning `True`.
The issue occurs when using TLS, which uses the `_SSLProtocolTransport` transport, which implements its own `is_closing` logic.
When [`_SelectorSocketTransport.write`](https://github.com/python/cpython/blob/7e894c2f38f64aed9b259c8fd31880f1142a259d/Lib/asyncio/selector_events.py#L1073) gets an OSError such as `Broken Pipe` (which is the issue I've experienced in the wild) it sets its inner transport state as closed, but when a library such as aiohttp checks its transport's [`is_closing`](https://github.com/aio-libs/aiohttp/blob/eb432238ffaea0f435343913dcfb35f70379e3ce/aiohttp/http_websocket.py#L685C27-L685C37) it returns `False`, leading it to silently assume that it is still connected.
I've been able to recreate the flow by raising a different exception (by manually closing the socket), but the error source and flow are the same in both cases as far as I can tell.
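As a toy illustration of the delegation the TL;DR asks for (plain Python classes standing in for the asyncio transports; these are not the actual asyncio internals):

```python
class InnerTransport:
    """Stands in for _SelectorSocketTransport."""
    def __init__(self):
        self._closing = False
    def write(self, data):
        # simulate socket.send() raising -> connection marked lost
        self._closing = True
    def is_closing(self):
        return self._closing

class SSLTransportWrapper:
    """Stands in for _SSLProtocolTransport, with is_closing()
    delegating to the wrapped transport so callers (e.g. aiohttp)
    see the real connection state."""
    def __init__(self, inner):
        self._inner = inner
        self._closed = False
    def is_closing(self):
        return self._closed or self._inner.is_closing()

inner = InnerTransport()
wrapper = SSLTransportWrapper(inner)
inner.write(b"hello")        # simulated broken pipe
print(wrapper.is_closing())  # True once the inner transport is closing
```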
## Full example (out of the box + SSL cert generation)
```python
import asyncio
import contextlib
import logging
import socket
import ssl
import subprocess
import tempfile
@contextlib.contextmanager
def server_ssl_context(host):
with tempfile.NamedTemporaryFile() as keyfile, tempfile.NamedTemporaryFile() as certfile:
subprocess.run([
'openssl', 'req', '-new', '-newkey', 'ec', '-pkeyopt', 'ec_paramgen_curve:prime256v1',
'-keyout', keyfile.name, '-nodes', '-x509', '-days', '365', '-subj', f'/CN={host}',
'-out', certfile.name,
], check=True, shell=False)
context = ssl.create_default_context(ssl.Purpose.CLIENT_AUTH)
context.load_cert_chain(certfile.name, keyfile.name)
yield context
@contextlib.contextmanager
def client_ssl_context():
try:
context = ssl.create_default_context(ssl.Purpose.SERVER_AUTH)
context.check_hostname = False
context.verify_mode = ssl.CERT_NONE
yield context
finally:
pass
async def client_handle(reader, writer):
...
async def main(host, port):
with server_ssl_context(host) as server_context, client_ssl_context() as client_context:
await asyncio.start_server(client_handle, host, port, ssl=server_context)
reader, writer = await asyncio.open_connection(host, port, ssl=client_context)
transport = writer._transport
from asyncio.sslproto import _SSLProtocolTransport
assert isinstance(transport, _SSLProtocolTransport)
inner_transport = transport._ssl_protocol._transport
from asyncio.selector_events import _SelectorSocketTransport
assert isinstance(inner_transport, _SelectorSocketTransport)
sock: socket.socket = inner_transport._sock
assert isinstance(sock, socket.socket)
# Simulate a broken pipe, this invokes the OS error "Bad file descriptor"
# but triggers the same flow as "Broken Pipe"
sock.close()
# Invoke write so socket returns an OSError
print('[Client] Sending x6: %r' % 'Hello, world!')
# Increment _conn_lost to more than 5 to trigger the logging.
# This silently fails, but the logging is triggered.
for i in range(6):
writer.write('Hello, world!'.encode())
await writer.drain()
print(f"{inner_transport._conn_lost=}, {transport.is_closing()=}, {inner_transport.is_closing()=}")
if __name__ == '__main__':
logging.basicConfig(level=logging.DEBUG)
asyncio.run(main('localhost', 8443), debug=True)
```
### CPython versions tested on:
3.12
### Operating systems tested on:
Linux, macOS
<!-- gh-linked-prs -->
### Linked PRs
* gh-118960
* gh-125931
* gh-125932
<!-- /gh-linked-prs -->
| 3f24bde0b6689b8f05872a8118a97908b5a94659 | 41bd9d959ccdb1095b6662b903bb3cbd2a47087b |