repo | instance_id | problem_statement | merge_commit | base_commit |
|---|---|---|---|---|
python/cpython | python__cpython-110168 | # if SOURCE_DATE_EPOCH is set, don't randomize the testsuite
# Feature or enhancement
### Proposal:
cc @vstinner from https://github.com/python/cpython/pull/109570#discussion_r1342002862
cc @doko42 @stefanor from debian -- this would allow debian to remove this patch I believe https://salsa.debian.org/cpython-team/python3/-/blob/master/debian/patches/test-no-random-order.diff
If `SOURCE_DATE_EPOCH` is set, test randomization should be disabled.
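A minimal sketch of the proposed check (the helper name is hypothetical, not the actual regrtest code):

```python
import os

def randomize_by_default() -> bool:
    # Reproducible builds set SOURCE_DATE_EPOCH
    # (https://reproducible-builds.org/docs/source-date-epoch/), so when it
    # is present the test suite should keep a deterministic order.
    return "SOURCE_DATE_EPOCH" not in os.environ
```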
### Has this already been discussed elsewhere?
This is a minor feature, which does not need previous discussion elsewhere
### Links to previous discussion of this feature:
https://github.com/python/cpython/pull/109570#discussion_r1342002862
<!-- gh-linked-prs -->
### Linked PRs
* gh-110168
<!-- /gh-linked-prs -->
| 65c285062ce2769249610348636d3d73153e0144 | adf0f15a06c6e8ddd1a6d59b28efcbb26289f080 |
python/cpython | python__cpython-110170 | # test_userstring: test_find_periodic_pattern() failed on GHA Hypothesis tests on Ubuntu
GHA Hypothesis tests on Ubuntu:
```
FAIL: test_find_periodic_pattern (test.test_userstring.UserStringTest.test_find_periodic_pattern) (p='', text='')
Cover the special path for periodic patterns.
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/runner/work/cpython/cpython-ro-srcdir/Lib/test/string_tests.py", line 341, in test_find_periodic_pattern
self.checkequal(reference_find(p, text),
File "/home/runner/work/cpython/cpython-ro-srcdir/Lib/test/test_userstring.py", line 24, in checkequal
self.assertEqual(
AssertionError: -1 != 0
```
build: https://github.com/python/cpython/actions/runs/6365753186/job/17283009565?pr=109907
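For context, the test compares `str.find` against a naive reference implementation, and the failing case is the empty pattern: a correct reference must return 0 for `p=''`, `text=''`, matching `str.find`. A minimal sketch of such a reference (illustrative, not the exact code in `string_tests.py`):

```python
def reference_find(p, text):
    # Naive quadratic find: first index where p occurs in text, else -1.
    # Note range(len(text) + 1): an empty pattern matches at index 0,
    # which is exactly the (p='', text='') edge case from the failure.
    for i in range(len(text) + 1):
        if text[i:i + len(p)] == p:
            return i
    return -1
```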
<!-- gh-linked-prs -->
### Linked PRs
* gh-110170
* gh-110182
* gh-110183
<!-- /gh-linked-prs -->
| 06faa9a39bd93c5e7999d52b52043ecdd0774dac | 038c3564fb4ac6439ff0484e6746b0790794f41b |
python/cpython | python__cpython-110156 | # regrtest: handle cross compilation and HOSTRUNNER in libregrtest, avoid Tools/scripts/run_tests.py
Previously, Tools/scripts/run_tests.py was used to add Python options to the command line: ``-u -E -W default -bb``. But regrtest is now able to do it when ``--fast-ci`` and ``--slow-ci`` options are used.
I propose moving the remaining code of Tools/scripts/run_tests.py for cross compilation and HOSTRUNNER directly into libregrtest.
<!-- gh-linked-prs -->
### Linked PRs
* gh-110156
<!-- /gh-linked-prs -->
| 53eb9a676f8c59b206dfc536b7590f6563ad65e0 | d3728ddc572fff7ffcc95301bf5265717dbaf476 |
python/cpython | python__cpython-110151 | # Fix base case handling in quantiles()
Fix an inconvenience in `quantiles()` by supporting input lists of length one, much like `min()`, `max()`, `mean()` and `median()` also support datasets of size one.
The principal use case is making statistical summaries of data streams. It is really inconvenient to require a special case for the first data point. Instead, it is much nicer to make updates as new data arrives, starting with the very first datum.
This is what we want:
```
"Running five number summary for a data stream"
# https://en.wikipedia.org/wiki/Five-number_summary
from statistics import quantiles
import random
stream = (random.expovariate() for i in range(20))
data = []
for x in stream:
data.append(x)
print(min(data), quantiles(data), max(data))
```
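Until the fix lands, a caller can handle the base case by hand; with a single datum every quantile collapses to that value. A sketch (the helper name and guard are illustrative, not the patch itself):

```python
from statistics import quantiles

def five_number_summary(data, n=4):
    # Guard the single-datum case manually on versions where quantiles()
    # rejects datasets of size one.
    if len(data) == 1:
        return (data[0],) * 5
    q1, med, q3 = quantiles(data, n=n)
    return (min(data), q1, med, q3, max(data))
```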
<!-- gh-linked-prs -->
### Linked PRs
* gh-110151
<!-- /gh-linked-prs -->
| 62405c7867b03730f0d278ea845855692d262d44 | a46e96076898d126c9f276aef1934195aac34b4e |
python/cpython | python__cpython-110268 | # test_msvcrt: test_getwche() failed with timeout (10 min) on GHA Windows x64
GHA Windows x64:
```
0:16:19 load avg: 5.19 [377/467/1] test_msvcrt process crashed (Exit code 1) -- running (1): test_winconsoleio (10 min)
Timeout (0:10:00)!
Thread 0x000001d4 (most recent call first):
File "D:\a\cpython\cpython\Lib\test\test_msvcrt.py", line 91 in test_getwche
(...)
== Tests result: FAILURE ==
```
build: https://github.com/python/cpython/actions/runs/6363931614/job/17279864816?pr=110146
<!-- gh-linked-prs -->
### Linked PRs
* gh-110268
<!-- /gh-linked-prs -->
| 1f3af03f83fead5cd86d54e1b2f47fc5866e635c | a13620685f68957c965fca89343a0e91f95f1bab |
python/cpython | python__cpython-110142 | # Grammar mistake in description of __main__.py idiomatic usage.
# Documentation
The following sentence in the section about [idiomatic usage](https://docs.python.org/3.11/library/__main__.html#id1) of `__main__.py` seems to be missing one word:
> Instead, those files are kept short, functions to execute from other modules
I suspect the correct meaning is:
> Instead, those files are kept short, *importing* functions to execute from other modules
I also suggest using the uncountable form of "content" in the first sentence:
> The *content* of __main__.py typically isn’t fenced with an `if __name__ == '__main__'` block.
<!-- gh-linked-prs -->
### Linked PRs
* gh-110142
* gh-110188
* gh-110189
<!-- /gh-linked-prs -->
| adf0f15a06c6e8ddd1a6d59b28efcbb26289f080 | 31097df611bb5c8084190202e095ae47e8b81c0f |
python/cpython | python__cpython-110123 | # `--disable-gil` test failures due to ABI flag
# Bug report
On Linux and macOS, `test_cppext` fails due to:
```
AssertionError: would build wheel with unsupported tag ('cp313', 'cp313td', 'linux_x86_64')
```
Example build: https://buildbot.python.org/all/#/builders/1217/builds/286
Resolving this failure will require updating `pip` and our bundled pip version. In the meantime, I will skip this test on `--disable-gil` builds.
On Windows, there are additional build failures (`test_importlib`, `test_peg_generator`):
```
AssertionError: '_d.cp313-win32.pyd' not found in ['_d.cp313t-win_amd64.pyd', '_d.pyd']
```
```
LINK : fatal error LNK1181: cannot open input file 'python313_d.lib'
```
Example build: https://buildbot.python.org/all/#/builders/1241/builds/18
Resolving the Windows test failures requires https://github.com/python/cpython/pull/110049 so that we have a standard way of determining `--disable-gil` builds from Python code.
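A sketch of what such a check might look like once it exists (the `Py_GIL_DISABLED` config variable is an assumption based on the linked PR, not a settled API):

```python
import sysconfig

def is_gil_disabled_build() -> bool:
    # Assumption: a --disable-gil build would expose a Py_GIL_DISABLED
    # config variable; on regular builds get_config_var() returns None or 0.
    return bool(sysconfig.get_config_var("Py_GIL_DISABLED"))
```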
<!-- gh-linked-prs -->
### Linked PRs
* gh-110123
* gh-110422
* gh-116964
<!-- /gh-linked-prs -->
| 2973970af8fb3f117ab2e8ab2d82e8a541fcb1da | 5ae6c6d053311d411a077200f85698d51d5fe8b9 |
python/cpython | python__cpython-110308 | # User-defined virtual path objects
## Proposal
Add a `PathBase` class that can be subclassed by users to implement "virtual paths", e.g. paths within `.zip` files or on FTP servers, and publish it as an external PyPI package.
## Phase 1: Private support in pathlib
- [x] Add private `_PathBase` class
- #106337
- [x] Merge `WalkTests` into `DummyPathTest`
- #110308
- #110655
- [x] Sort out `__hash__()`, `__eq__`, etc - we shouldn't inherit the `PurePath` implementation
- #112012
- #110670
- [x] Simplify/speed up path walking and globbing
- #106703
- [x] Add `from_uri()` classmethod
- #107640
## Phase 2: Experimental support in PyPI package
- [x] Move ABCs into `pathlib._abc`
- #112881
- #112904
- [x] Figure out the default value for `pathmod`, if any
- #113221
- #113219
- [x] Publish PyPI package
## Previous Discussion
https://discuss.python.org/t/make-pathlib-extensible/3428
<!-- gh-linked-prs -->
### Linked PRs
* gh-110308
* gh-110312
* gh-110321
* gh-110412
* gh-110655
* gh-110670
* gh-112012
* gh-112239
* gh-112242
* gh-112881
* gh-112901
* gh-112904
* gh-113219
* gh-113221
* gh-113292
* gh-113376
* gh-113411
* gh-113417
* gh-113419
<!-- /gh-linked-prs -->
| da0a68afc9d1487fad20c50f5133cda731d17a17 | 0d805b998ded854840f029b7f0c9a02eb3efa251 |
python/cpython | python__cpython-110094 | # Replace trivial Py_BuildValue() with direct C API call
`Py_BuildValue()` is a powerful tool which can be used to create complex Python structures. But in some cases it is used to create a Python object from a C value for which a special C API exists. Since `Py_BuildValue()` adds some overhead (parsing the format string, copying the va_list, several levels of recursive calls), it is less efficient than a direct call to the corresponding C API. For example, `Py_BuildValue("i", x)` can be replaced with `PyLong_FromLong((int)x)` or simply `PyLong_FromLong(x)`, and `Py_BuildValue("(OO)", x, y)` can be replaced with `PyTuple_Pack(2, x, y)`.
<!-- gh-linked-prs -->
### Linked PRs
* gh-110094
* gh-111120
<!-- /gh-linked-prs -->
| 59ea0f523e155ac1a471cd292b41a76241fccd36 | ff4e53cb747063e95eaec181fd396f062f885ac2 |
python/cpython | python__cpython-110092 | # test_asyncio.test_windows_events: test_wait_for_handle() failed on GHA Windows x86
The test must not measure the performance of the CI.
GHA Windows x86:
```
FAIL: test_wait_for_handle (test.test_asyncio.test_windows_events.ProactorTests.test_wait_for_handle)
----------------------------------------------------------------------
Traceback (most recent call last):
File "D:\a\cpython\cpython\Lib\test\test_asyncio\test_windows_events.py", line 188, in test_wait_for_handle
self.assertTrue(0 <= elapsed < 0.3, elapsed)
AssertionError: False is not true : 0.3129999999998745
```
build: https://github.com/python/cpython/actions/runs/6348560548/job/17245417314?pr=110080
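The usual remedy for this kind of flakiness (a sketch of the pattern, not the exact patch) is to assert only the lower bound of the elapsed time: CI runners can be arbitrarily slow, but a monotonic clock cannot report less than the requested sleep.

```python
import time

def timed(func):
    # Measure wall time of func() with a monotonic clock.
    t0 = time.monotonic()
    func()
    return time.monotonic() - t0

elapsed = timed(lambda: time.sleep(0.05))
# Fragile on shared CI:  assert 0 <= elapsed < 0.3   (upper bound can fail)
# Robust: check only the lower bound (small margin for clock granularity).
assert elapsed >= 0.04
```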
<!-- gh-linked-prs -->
### Linked PRs
* gh-110092
* gh-110098
* gh-110099
* gh-110157
* gh-110158
* gh-110159
<!-- /gh-linked-prs -->
| db0a258e796703e12befea9d6dec04e349ca2f5b | e27adc68ccee8345e05b7516e6b46f6c7ff53371 |
python/cpython | python__cpython-110080 | # Remove extern "C" { ... } in C files
# Bug report
Until Python 3.11, it was possible to build Python with a C++ compiler: ``./configure --with-cxx-main``. This build mode has been removed in Python 3.12: in gh-93744 by commit 398ed84dc40abc58e16f5014d44c08f20cb4b5f6:
```
commit 398ed84dc40abc58e16f5014d44c08f20cb4b5f6
Author: Victor Stinner <vstinner@python.org>
Date: Fri Aug 5 13:26:58 2022 +0200
gh-93744: Remove configure --with-cxx-main option (#95651)
Remove the "configure --with-cxx-main" build option: it didn't work
for many years. Remove the MAINCC variable from configure and
Makefile.
The MAINCC variable was added by the issue gh-42471: commit
0f48d98b740110a672b62d467af192ec160e56ba. Previously, --with-cxx-main
was named --with-cxx.
Keep CXX and LDCXXSHARED variables, even if they are no longer used
by Python build system.
```
In the C code, there are still a bunch of ``extern "C" { ... }``:
```c
#ifdef __cplusplus
extern "C" {
#endif
...
#ifdef __cplusplus
}
#endif
```
This code path is only taken if the C file is built by a C++ compiler, which is no longer possible. I simply propose removing it.
---
Obviously, in the Python C API, we must keep ``extern "C" { ... }``, since it's required to properly handle C++ name mangling when exposing a C API. The Python C API is now tested by a C++ compiler in test_cppext. This issue is only about C files, not about header (.h) files.
<!-- gh-linked-prs -->
### Linked PRs
* gh-110080
<!-- /gh-linked-prs -->
| 8b626a47bafdb2d1ebb1321e50ffa5d6c721bf3a | bfd94ab9e9f4055ecedaa500b46b0270da9ffe12 |
python/cpython | python__cpython-110069 | # test_threading.test_4_daemon_threads() crash randomly
On Linux, when I stress test test_threading.test_4_daemon_threads(), it does crash randomly:
```
./python -m test test_threading -m test_4_daemon_threads -j50 -F --fail-env-changed
```
gdb traceback:
```
Program terminated with signal SIGSEGV, Segmentation fault.
#0 0x00000000006e02f4 in PyInterpreterState_ThreadHead (interp=0xdddddddddddddddd) at Python/pystate.c:1961
warning: Source file is more recent than executable.
1961 return interp->threads.head;
[Current thread is 1 (Thread 0x7fcbb8ff96c0 (LWP 510515))]
Missing separate debuginfos, use: dnf debuginfo-install bzip2-libs-1.0.8-13.fc38.x86_64 glibc-2.37-5.fc38.x86_64 libffi-3.4.4-2.fc38.x86_64 libgcc-13.2.1-1.fc38.x86_64 openssl-libs-3.0.9-2.fc38.x86_64 xz-libs-5.4.1-1.fc38.x86_64 zlib-1.2.13-3.fc38.x86_64
(gdb) where
#0 0x00000000006e02f4 in PyInterpreterState_ThreadHead (interp=0xdddddddddddddddd) at Python/pystate.c:1961
#1 0x0000000000703090 in _Py_DumpTracebackThreads (fd=2, interp=0xdddddddddddddddd, current_tstate=0x24a34e0) at Python/traceback.c:1331
#2 0x000000000071bcef in faulthandler_dump_traceback (fd=2, all_threads=1, interp=0xa53688 <_PyRuntime+92712>) at ./Modules/faulthandler.c:195
#3 0x000000000071bfed in faulthandler_fatal_error (signum=11) at ./Modules/faulthandler.c:313
#4 <signal handler called>
#5 0x000000000069f808 in take_gil (tstate=0x24a34e0) at Python/ceval_gil.c:360
#6 0x00000000006a03ab in PyEval_RestoreThread (tstate=0x24a34e0) at Python/ceval_gil.c:714
#7 0x000000000074dd93 in portable_lseek (self=0x7fcc186c3590, posobj=0x0, whence=1, suppress_pipe_error=false) at ./Modules/_io/fileio.c:934
#8 0x000000000074de88 in _io_FileIO_tell_impl (self=0x7fcc186c3590) at ./Modules/_io/fileio.c:997
#9 0x000000000074ea4c in _io_FileIO_tell (self=0x7fcc186c3590, _unused_ignored=0x0) at ./Modules/_io/clinic/fileio.c.h:460
(...)
```
gdb debug:
```
(gdb) frame 5
#5 0x000000000069f808 in take_gil (tstate=0x24a34e0) at Python/ceval_gil.c:360
360 struct _gil_runtime_state *gil = ceval->gil;
(gdb) l
355 }
356
357 assert(_PyThreadState_CheckConsistency(tstate));
358 PyInterpreterState *interp = tstate->interp;
359 struct _ceval_state *ceval = &interp->ceval;
360 struct _gil_runtime_state *gil = ceval->gil;
361
362 /* Check that _PyEval_InitThreads() was called to create the lock */
363 assert(gil_created(gil));
364
(gdb) p /x ceval
$1 = 0xdddddddddddddddd
(gdb) p /x tstate->interp
$2 = 0xdddddddddddddddd
(gdb) p /x tstate
$5 = 0x24a34e0
(gdb) p /x _PyRuntime._finalizing._value
$3 = 0xab8fa8
(gdb) p /x _PyRuntime._finalizing_id
$4 = 0x7fcc49fce740
```
I don't understand why the test didn't exit: _PyThreadState_MustExit() should have returned true, no?
I don't understand why ``assert(_PyThreadState_CheckConsistency(tstate));`` didn't fail.
**Maybe** Py_Finalize() was called between the pre-check:
```c
if (_PyThreadState_MustExit(tstate)) {
/* bpo-39877: If Py_Finalize() has been called and tstate is not the
thread which called Py_Finalize(), exit immediately the thread.
This code path can be reached by a daemon thread after Py_Finalize()
completes. In this case, tstate is a dangling pointer: points to
PyThreadState freed memory. */
PyThread_exit_thread();
}
assert(_PyThreadState_CheckConsistency(tstate));
```
and the code:
```c
PyInterpreterState *interp = tstate->interp;
struct _ceval_state *ceval = &interp->ceval;
struct _gil_runtime_state *gil = ceval->gil;
```
<!-- gh-linked-prs -->
### Linked PRs
* gh-110069
* gh-110071
* gh-110072
<!-- /gh-linked-prs -->
| 2e37a38bcbfbe1357436e030538290e7d00b668d | 235aacdeed71afa6572ffad15155e781cc70bad1 |
python/cpython | python__cpython-110051 | # The TypeError message from random.seed call has a seemingly unintentional newline in the middle
# Bug report
### Bug description:
```python
>>> import random
>>> random.seed(random)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "...lib/python3.12/random.py", line 167, in seed
raise TypeError('The only supported seed types are: None,\n'
TypeError: The only supported seed types are: None,
int, float, str, bytes, and bytearray.
```
It appears that the newline in the middle added in https://github.com/python/cpython/pull/25874 isn't intentional.
I'll send a PR.
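The underlying slip is easy to reproduce: with implicit string-literal concatenation, a `\n` written at a line break in the source ends up verbatim in the user-visible message (a simplified illustration, not the exact `random.py` code):

```python
# Buggy pattern: the '\n' survives into the error message.
buggy = ('The only supported seed types are: None,\n'
         'int, float, str, bytes, and bytearray.')

# Fixed pattern: break the source line without embedding a newline.
fixed = ('The only supported seed types are: '
         'None, int, float, str, bytes, and bytearray.')
```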
### CPython versions tested on:
3.12
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-110051
* gh-110625
<!-- /gh-linked-prs -->
| ce43d5f0e1e271289be5510ec80ebb9df77d88e1 | 756062b296df6242ba324e4cdc8f3e38bfc83617 |
python/cpython | python__cpython-110066 | # Symbol table for type parameter (PEP 695) raises AssertionError on get_type()
# Bug report
### Bug description:
Tested with CPython 3.12.0rc3
```python
import symtable
def foo[T]():
return "bar"
t_mod = symtable.symtable("""\
def foo[T]():
return "bar"
""", '?', 'exec')
t_T = t_mod.get_children()[0]
print(t_T.get_type())
```
```
Traceback (most recent call last):
File ".../bug_symtable.py", line 12, in <module>
print(t_T.get_type())
^^^^^^^^^^^^^^
File ".../3.12.0rc3/lib/python3.12/symtable.py", line 74, in get_type
assert self._table.type in (1, 2, 3), \
AssertionError: unexpected type: 6
[Finished in 1 seconds with code 1]
```
The symbol table for type parameters (generics) seems to have a new type, but it appears that the `symtable` implementation was not updated accordingly. Also there is no `_symtable.TYPE_*` constant for type code 6 (see [_symtable](https://github.com/python/cpython/blob/3.12/Modules/symtablemodule.c#L83-L87)).
https://github.com/python/cpython/blob/3.12/Lib/symtable.py#L62-L75
https://github.com/python/cpython/blob/main/Lib/symtable.py#L62-L75
There are new block types introduced by PEP 695 https://github.com/python/cpython/blob/3.12/Include/internal/pycore_symtable.h#L13-L23
```
- 0: FunctionBlock
- 1: ClassBlock
- 2: ModuleBlock
// Used for annotations 'from __future__ import annotations'
- 3: AnnotationBlock
// Used for generics and type aliases. These work mostly like functions
// (see PEP 695 for details). The three different blocks function identically;
// they are different enum entries only so that error messages can be more
// precise.
- 4: TypeVarBoundBlock
- 5: TypeAliasBlock
- 6: TypeParamBlock
```
Hope this can be fixed by the 3.12 GA release.
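Restating the block-type codes above as data makes the gap visible (an illustrative mapping taken from the header excerpt above, not an API):

```python
# Codes from Include/internal/pycore_symtable.h (3.12), per the excerpt above.
BLOCK_TYPES = {
    0: "FunctionBlock",
    1: "ClassBlock",
    2: "ModuleBlock",
    3: "AnnotationBlock",
    4: "TypeVarBoundBlock",
    5: "TypeAliasBlock",
    6: "TypeParamBlock",  # the code that trips the get_type() assertion
}

# PEP 695 introduced codes 4-6, which symtable.py did not know about.
PEP695_CODES = {4, 5, 6}
```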
### CPython versions tested on:
3.12.0rc3
### Operating systems tested on:
macOS
<!-- gh-linked-prs -->
### Linked PRs
* gh-110066
* gh-110070
<!-- /gh-linked-prs -->
| 7dc2c5093ef027aab57bca953ac2d6477a4a440b | 2e37a38bcbfbe1357436e030538290e7d00b668d |
python/cpython | python__cpython-110039 | # `KqueueSelector` miscounts max events
# Bug report
### Bug description:
`KqueueSelector.select()` may not get all events if a fd is registered for _both read and write_.
kqueue requires registering two filters, one for read and one for write. But `KqueueSelector.select()` calls `kqueue.control()` with the number of registered fds, not the number of filters, as the maximum number of events to return. As a result, a single call to select() won't return all the available events.
```python
>>> import socket, selectors
>>> sel = selectors.KqueueSelector()
>>> s1, s2 = socket.socketpair()
>>> sel.register(s1, selectors.EVENT_READ | selectors.EVENT_WRITE)
SelectorKey(fileobj=<socket.socket fd=4, family=1, type=1, proto=0>, fd=4, events=3, data=None)
>>> s2.send(b"foo")
3
>>> sel.select()
[(SelectorKey(fileobj=<socket.socket fd=4, family=1, type=1, proto=0>, fd=4, events=3, data=None), 2)]
>>> sel.select()
[(SelectorKey(fileobj=<socket.socket fd=4, family=1, type=1, proto=0>, fd=4, events=3, data=None), 1)]
```
(Note that the two events are only visible on two subsequent selects)
For users like asyncio, this might mean missing a loop cycle or more (depending on number of registered readers/writers and ready events) before an event is seen. I previously said that this might cause deadlocks but I could not produce an example where a read/write event is never seen.
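The fix direction, sketched as counting logic (not the actual `selectors` patch): size the `kqueue.control()` call by the number of registered filters, where a key watching both directions counts twice.

```python
from selectors import EVENT_READ, EVENT_WRITE

def max_kevents(event_masks):
    # One kevent filter per direction: a key registered for
    # EVENT_READ | EVENT_WRITE contributes two filters, not one.
    total = 0
    for ev in event_masks:
        if ev & EVENT_READ:
            total += 1
        if ev & EVENT_WRITE:
            total += 1
    return total
```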
### CPython versions tested on:
3.11, 3.12, CPython main branch
### Operating systems tested on:
macOS, Other
<!-- gh-linked-prs -->
### Linked PRs
* gh-110039
* gh-110043
* gh-110044
<!-- /gh-linked-prs -->
| b14f0ab51cb4851b25935279617e388456dcf716 | 7e0fbf5175fcf21dae390ba68b7f49706d62aa49 |
python/cpython | python__cpython-110037 | # test_multiprocessing_spawn: Popen.terminate() failed with PermissionError: [WinError 5] Access is denied
subprocess.terminate() catches PermissionError; we can reuse that code. I'm working on a fix.
GHA Windows x86:
```
ERROR: tearDownClass (test.test_multiprocessing_spawn.test_manager.WithManagerTestPool)
----------------------------------------------------------------------
Traceback (most recent call last):
File "D:\a\cpython\cpython\Lib\test\_test_multiprocessing.py", line 2483, in tearDownClass
super().tearDownClass()
File "D:\a\cpython\cpython\Lib\test\_test_multiprocessing.py", line 6131, in tearDownClass
cls.manager.shutdown()
File "D:\a\cpython\cpython\Lib\multiprocessing\util.py", line 220, in __call__
res = self._callback(*self._args, **self._kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\a\cpython\cpython\Lib\multiprocessing\managers.py", line 683, in _finalize_manager
process.terminate()
File "D:\a\cpython\cpython\Lib\multiprocessing\process.py", line 133, in terminate
self._popen.terminate()
File "D:\a\cpython\cpython\Lib\multiprocessing\popen_spawn_win32.py", line 124, in terminate
_winapi.TerminateProcess(int(self._handle), TERMINATE)
PermissionError: [WinError 5] Access is denied
```
build: https://github.com/python/cpython/actions/runs/6340922718/job/17223431499?pr=110026
<!-- gh-linked-prs -->
### Linked PRs
* gh-110037
* gh-110064
* gh-110065
<!-- /gh-linked-prs -->
| bd4518c60c9df356cf5e05b81305e3644ebb5e70 | 4e356ad183eeb567783f4a87fd092573da1e9252 |
python/cpython | python__cpython-110035 | # test_signal: test_interprocess_signal() failed on GHA macOS
The problem is that the first subprocess.Popen object may not be removed **immediately** after the ``with self.subprocess_send_signal(pid, "SIGHUP") as child:`` block; it can survive a little longer. While it is being deleted automatically, sigusr1_handler() triggers and raises a SIGUSR1Exception, which is logged as an "ignored exception": but since it is **ignored**, the assertRaises block never sees it and the test fails.
GHA macOS:
```
======================================================================
FAIL: test_interprocess_signal (test.test_signal.PosixTests.test_interprocess_signal)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/Users/runner/work/cpython/cpython/Lib/test/test_signal.py", line 108, in test_interprocess_signal
assert_python_ok(script)
File "/Users/runner/work/cpython/cpython/Lib/test/support/script_helper.py", line 166, in assert_python_ok
return _assert_python(True, *args, **env_vars)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/runner/work/cpython/cpython/Lib/test/support/script_helper.py", line 151, in _assert_python
res.fail(cmd_line)
File "/Users/runner/work/cpython/cpython/Lib/test/support/script_helper.py", line 76, in fail
raise AssertionError("Process return code is %d\n"
AssertionError: Process return code is 1
command line: ['/Users/runner/work/cpython/cpython/python.exe', '-X', 'faulthandler', '-I', '/Users/runner/work/cpython/cpython/Lib/test/signalinterproctester.py']
stdout:
---
---
stderr:
---
Exception ignored in: <function Popen.__del__ at 0x109080dd0>
Traceback (most recent call last):
File "/Users/runner/work/cpython/cpython/Lib/subprocess.py", line 1120, in __del__
def __del__(self, _maxsize=sys.maxsize, _warn=warnings.warn):
File "/Users/runner/work/cpython/cpython/Lib/test/signalinterproctester.py", line 23, in sigusr1_handler
raise SIGUSR1Exception
SIGUSR1Exception:
F
======================================================================
FAIL: test_interprocess_signal (__main__.InterProcessSignalTests.test_interprocess_signal)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/Users/runner/work/cpython/cpython/Lib/test/signalinterproctester.py", line 62, in test_interprocess_signal
with self.assertRaises(SIGUSR1Exception):
AssertionError: SIGUSR1Exception not raised
----------------------------------------------------------------------
Ran 1 test in 0.252s
FAILED (failures=1)
---
----------------------------------------------------------------------
Ran 46 tests in 132.286s
FAILED (failures=1, skipped=10)
```
build: https://github.com/python/cpython/actions/runs/6340922718/job/17223429951?pr=110026
<!-- gh-linked-prs -->
### Linked PRs
* gh-110035
* gh-110040
* gh-110041
<!-- /gh-linked-prs -->
| 7e0fbf5175fcf21dae390ba68b7f49706d62aa49 | 757cbd4f29c9e89b38b975e0463dc8ed331b2515 |
python/cpython | python__cpython-110100 | # test_threading failed: test_reinit_tls_after_fork() failed with "env changed" on GHA Address Sanitizer: process 19857 is still running after 300.4 seconds
GHA Address Sanitizer:
```
test_reinit_tls_after_fork (test.test_threading.ThreadJoinOnShutdown.test_reinit_tls_after_fork) ... Warning -- Uncaught thread exception: AssertionError
Exception in thread Thread-35 (do_fork_and_wait):
Traceback (most recent call last):
File "/home/runner/work/cpython/cpython/Lib/threading.py", line 1066, in _bootstrap_inner
self.run()
File "/home/runner/work/cpython/cpython/Lib/threading.py", line 1003, in run
self._target(*self._args, **self._kwargs)
File "/home/runner/work/cpython/cpython/Lib/test/test_threading.py", line 1185, in do_fork_and_wait
support.wait_process(pid, exitcode=50)
File "/home/runner/work/cpython/cpython/Lib/test/support/__init__.py", line 2202, in wait_process
raise AssertionError(f"process {pid} is still running "
AssertionError: process 19857 is still running after 300.4 seconds
ok
```
build: https://github.com/python/cpython/actions/runs/6340021936/job/17220524327?pr=110026
<!-- gh-linked-prs -->
### Linked PRs
* gh-110100
* gh-110103
* gh-110104
<!-- /gh-linked-prs -->
| 86e76ab8af9a5018acbcdcbb6285678175b1bd8a | 743e3572ee940a6cf88fd518e5f4a447905ba5eb |
python/cpython | python__cpython-110053 | # Warning: `incompatible-pointer-types` in `pycore_pylifecycle.h`
# Bug report
```
In file included from Python/optimizer.c:3:
./Include/internal/pycore_interp.h:222:42: warning: incompatible pointer types passing 'unsigned long *' to parameter of type 'const uint64_t *' (aka 'const unsigned long long *') [-Wincompatible-pointer-types]
return _Py_atomic_load_ulong_relaxed(&interp->_finalizing_id);
^~~~~~~~~~~~~~~~~~~~~~~
./Include/cpython/pyatomic_gcc.h:342:48: note: passing argument to parameter 'obj' here
_Py_atomic_load_uint64_relaxed(const uint64_t *obj)
^
In file included from Python/optimizer.c:3:
./Include/internal/pycore_interp.h:229:40: warning: incompatible pointer types passing 'unsigned long *' to parameter of type 'uint64_t *' (aka 'unsigned long long *') [-Wincompatible-pointer-types]
_Py_atomic_store_ulong_relaxed(&interp->_finalizing_id, 0);
^~~~~~~~~~~~~~~~~~~~~~~
./Include/cpython/pyatomic_gcc.h:460:43: note: passing argument to parameter 'obj' here
_Py_atomic_store_uint64_relaxed(uint64_t *obj, uint64_t value)
^
In file included from Python/optimizer.c:3:
./Include/internal/pycore_interp.h:234:40: warning: incompatible pointer types passing 'unsigned long *' to parameter of type 'uint64_t *' (aka 'unsigned long long *') [-Wincompatible-pointer-types]
_Py_atomic_store_ulong_relaxed(&interp->_finalizing_id,
^~~~~~~~~~~~~~~~~~~~~~~
./Include/cpython/pyatomic_gcc.h:460:43: note: passing argument to parameter 'obj' here
_Py_atomic_store_uint64_relaxed(uint64_t *obj, uint64_t value)
^
In file included from Python/optimizer.c:8:
In file included from ./Include/internal/pycore_pystate.h:11:
./Include/internal/pycore_runtime.h:310:42: warning: incompatible pointer types passing 'unsigned long *' to parameter of type 'const uint64_t *' (aka 'const unsigned long long *') [-Wincompatible-pointer-types]
return _Py_atomic_load_ulong_relaxed(&runtime->_finalizing_id);
^~~~~~~~~~~~~~~~~~~~~~~~
./Include/cpython/pyatomic_gcc.h:342:48: note: passing argument to parameter 'obj' here
_Py_atomic_load_uint64_relaxed(const uint64_t *obj)
^
In file included from Python/optimizer.c:8:
In file included from ./Include/internal/pycore_pystate.h:11:
./Include/internal/pycore_runtime.h:317:40: warning: incompatible pointer types passing 'unsigned long *' to parameter of type 'uint64_t *' (aka 'unsigned long long *') [-Wincompatible-pointer-types]
_Py_atomic_store_ulong_relaxed(&runtime->_finalizing_id, 0);
^~~~~~~~~~~~~~~~~~~~~~~~
./Include/cpython/pyatomic_gcc.h:460:43: note: passing argument to parameter 'obj' here
_Py_atomic_store_uint64_relaxed(uint64_t *obj, uint64_t value)
^
In file included from Python/optimizer.c:8:
In file included from ./Include/internal/pycore_pystate.h:11:
./Include/internal/pycore_runtime.h:322:40: warning: incompatible pointer types passing 'unsigned long *' to parameter of type 'uint64_t *' (aka 'unsigned long long *') [-Wincompatible-pointer-types]
_Py_atomic_store_ulong_relaxed(&runtime->_finalizing_id,
^~~~~~~~~~~~~~~~~~~~~~~~
./Include/cpython/pyatomic_gcc.h:460:43: note: passing argument to parameter 'obj' here
_Py_atomic_store_uint64_relaxed(uint64_t *obj, uint64_t value)
^
6 warnings generated.
gcc -c -fno-strict-overflow -Wsign-compare -g -Og -Wall -I/Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include -I/Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include -std=c11 -Wextra -Wno-unused-parameter -Wno-missing-field-initializers -Wstrict-prototypes -Werror=implicit-function-declaration -fvisibility=hidden -I./Include/internal -I. -I./Include -I/opt/homebrew/opt/openssl/include -I/opt/homebrew/opt/openssl/include -DPy_BUILD_CORE \
-DGITVERSION="\"`LC_ALL=C git --git-dir ./.git rev-parse --short HEAD`\"" \
-DGITTAG="\"`LC_ALL=C git --git-dir ./.git describe --all --always --dirty`\"" \
-DGITBRANCH="\"`LC_ALL=C git --git-dir ./.git name-rev --name-only HEAD`\"" \
-o Modules/getbuildinfo.o ./Modules/getbuildinfo.c
In file included from ./Modules/getbuildinfo.c:6:
In file included from ./Include/internal/pycore_pylifecycle.h:11:
In file included from ./Include/internal/pycore_runtime.h:17:
./Include/internal/pycore_interp.h:222:42: warning: incompatible pointer types passing 'unsigned long *' to parameter of type 'const uint64_t *' (aka 'const unsigned long long *') [-Wincompatible-pointer-types]
return _Py_atomic_load_ulong_relaxed(&interp->_finalizing_id);
^~~~~~~~~~~~~~~~~~~~~~~
./Include/cpython/pyatomic_gcc.h:342:48: note: passing argument to parameter 'obj' here
_Py_atomic_load_uint64_relaxed(const uint64_t *obj)
^
In file included from ./Modules/getbuildinfo.c:6:
In file included from ./Include/internal/pycore_pylifecycle.h:11:
In file included from ./Include/internal/pycore_runtime.h:17:
./Include/internal/pycore_interp.h:229:40: warning: incompatible pointer types passing 'unsigned long *' to parameter of type 'uint64_t *' (aka 'unsigned long long *') [-Wincompatible-pointer-types]
_Py_atomic_store_ulong_relaxed(&interp->_finalizing_id, 0);
^~~~~~~~~~~~~~~~~~~~~~~
./Include/cpython/pyatomic_gcc.h:460:43: note: passing argument to parameter 'obj' here
_Py_atomic_store_uint64_relaxed(uint64_t *obj, uint64_t value)
^
In file included from ./Modules/getbuildinfo.c:6:
In file included from ./Include/internal/pycore_pylifecycle.h:11:
In file included from ./Include/internal/pycore_runtime.h:17:
./Include/internal/pycore_interp.h:234:40: warning: incompatible pointer types passing 'unsigned long *' to parameter of type 'uint64_t *' (aka 'unsigned long long *') [-Wincompatible-pointer-types]
_Py_atomic_store_ulong_relaxed(&interp->_finalizing_id,
^~~~~~~~~~~~~~~~~~~~~~~
./Include/cpython/pyatomic_gcc.h:460:43: note: passing argument to parameter 'obj' here
_Py_atomic_store_uint64_relaxed(uint64_t *obj, uint64_t value)
^
In file included from ./Modules/getbuildinfo.c:6:
In file included from ./Include/internal/pycore_pylifecycle.h:11:
./Include/internal/pycore_runtime.h:310:42: warning: incompatible pointer types passing 'unsigned long *' to parameter of type 'const uint64_t *' (aka 'const unsigned long long *') [-Wincompatible-pointer-types]
return _Py_atomic_load_ulong_relaxed(&runtime->_finalizing_id);
^~~~~~~~~~~~~~~~~~~~~~~~
./Include/cpython/pyatomic_gcc.h:342:48: note: passing argument to parameter 'obj' here
_Py_atomic_load_uint64_relaxed(const uint64_t *obj)
^
In file included from ./Modules/getbuildinfo.c:6:
In file included from ./Include/internal/pycore_pylifecycle.h:11:
./Include/internal/pycore_runtime.h:317:40: warning: incompatible pointer types passing 'unsigned long *' to parameter of type 'uint64_t *' (aka 'unsigned long long *') [-Wincompatible-pointer-types]
_Py_atomic_store_ulong_relaxed(&runtime->_finalizing_id, 0);
^~~~~~~~~~~~~~~~~~~~~~~~
./Include/cpython/pyatomic_gcc.h:460:43: note: passing argument to parameter 'obj' here
_Py_atomic_store_uint64_relaxed(uint64_t *obj, uint64_t value)
^
In file included from ./Modules/getbuildinfo.c:6:
In file included from ./Include/internal/pycore_pylifecycle.h:11:
./Include/internal/pycore_runtime.h:322:40: warning: incompatible pointer types passing 'unsigned long *' to parameter of type 'uint64_t *' (aka 'unsigned long long *') [-Wincompatible-pointer-types]
_Py_atomic_store_ulong_relaxed(&runtime->_finalizing_id,
^~~~~~~~~~~~~~~~~~~~~~~~
./Include/cpython/pyatomic_gcc.h:460:43: note: passing argument to parameter 'obj' here
_Py_atomic_store_uint64_relaxed(uint64_t *obj, uint64_t value)
^
6 warnings generated.
```
I am working on macOS Sonoma
<!-- gh-linked-prs -->
### Linked PRs
* gh-110053
<!-- /gh-linked-prs -->
| 6364873d2abe0973e21af7c8c7dddbb5f8dc1e85 | 9c73a9acec095c05a178e7dff638f7d9769318f3 |
python/cpython | python__cpython-110023 | # Warning: unused variable ‘owner_cls’ in `generated_cases.c.h`
# Bug report
<img width="651" alt="Screenshot 2023-09-28 at 16 14 05" src="https://github.com/python/cpython/assets/4660275/d6f50141-207e-4554-bfea-5060b7978c9f">
Example: https://github.com/python/cpython/pull/110013/files#diff-4ef46fa654f95502e49a24f7dc8ee31a4cac9b3433fe9cd2b2d4dd78cfbad448
Introduced in https://github.com/python/cpython/pull/109943
I have a PR ready.
<!-- gh-linked-prs -->
### Linked PRs
* gh-110023
<!-- /gh-linked-prs -->
| 3814bc17230df4cd3bc4d8e2ce0ad36470fba269 | 9be283e5e15d5d5685b78a38eb132501f7f3febb |
python/cpython | python__cpython-110398 | # Refactor to reduce code duplication in `Tools/scripts/summarize_stats.py`
# Feature or enhancement
### Proposal:
The `summarize_stats.py` script operates in two modes: single and comparison. It grew these two modes somewhat organically, and there is a lot of code duplication and opportunity for these two modes to diverge. By refactoring to have each of the tables know how to present themselves in both modes, we should reduce the overall amount of code and these kinds of bugs. This refactor should be helpful in advance of the many [new stats we plan to add for the Tier 2 interpreter](https://github.com/python/cpython/issues/109329).
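A minimal sketch of the shape such a refactor could take (the names `Table` and `render` are illustrative, not the script's actual API): each table knows how to present itself in both modes, so the per-mode logic lives in one place instead of being duplicated.

```python
class Table:
    """Toy model: one table object that renders in single or comparison mode."""

    def __init__(self, header, rows):
        self.header = header
        self.rows = rows

    def render(self, base=None):
        if base is None:
            # single mode: header plus the raw rows
            return [self.header] + self.rows
        # comparison mode: append the baseline's value columns to each row
        paired = [row + other[1:] for row, other in zip(self.rows, base.rows)]
        return [self.header + [c + " (base)" for c in base.header[1:]]] + paired


head = Table(["opcode", "count"], [["LOAD_FAST", 10], ["STORE_FAST", 4]])
base = Table(["opcode", "count"], [["LOAD_FAST", 8], ["STORE_FAST", 5]])
print(head.render())
print(head.render(base))
```

Because both modes go through the same `render` method, a new statistic added for the Tier 2 work would only need to be described once.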
Cc: @brandtbucher, @markshannon
### Has this already been discussed elsewhere?
No response given
### Links to previous discussion of this feature:
Related: https://github.com/python/cpython/issues/109329
<!-- gh-linked-prs -->
### Linked PRs
* gh-110398
<!-- /gh-linked-prs -->
| 81eba7645082a192c027e739b8eb99a94b4c0eec | 6b9babf140445ec2a560d1df056b795e898c6bd0 |
python/cpython | python__cpython-112834 | # Random crash of `test_signal` on macOS Sonoma
# Crash report
I've experienced a random crash while fixing https://github.com/python/cpython/issues/109981
Why random? Because I have not been able to reproduce it since.
My command:
```
» ./python.exe -m test -f tests.txt
0:00:00 load avg: 4.78 Run 31 tests sequentially
0:00:00 load avg: 4.78 [ 1/31] test_shelve
0:00:00 load avg: 4.78 [ 2/31] test_shlex
0:00:00 load avg: 4.78 [ 3/31] test_signal
Fatal Python error: Bus error
Current thread 0x000000016c947000 (most recent call first):
File "/Users/sobolev/Desktop/cpython/Lib/test/test_signal.py", line 1338 in set_interrupts
File "/Users/sobolev/Desktop/cpython/Lib/threading.py", line 1003 in run
File "/Users/sobolev/Desktop/cpython/Lib/threading.py", line 1066 in _bootstrap_inner
File "/Users/sobolev/Desktop/cpython/Lib/threading.py", line 1023 in _bootstrap
Thread 0x00000001de555300 (most recent call first):
File "/Users/sobolev/Desktop/cpython/Lib/signal.py", line 29 in _int_to_enum
File "/Users/sobolev/Desktop/cpython/Lib/signal.py", line 57 in signal
File "/Users/sobolev/Desktop/cpython/Lib/test/test_signal.py", line 1346 in cycle_handlers
File "/Users/sobolev/Desktop/cpython/Lib/test/test_signal.py", line 1356 in test_stress_modifying_handlers
File "/Users/sobolev/Desktop/cpython/Lib/unittest/case.py", line 589 in _callTestMethod
File "/Users/sobolev/Desktop/cpython/Lib/unittest/case.py", line 636 in run
File "/Users/sobolev/Desktop/cpython/Lib/unittest/case.py", line 692 in __call__
File "/Users/sobolev/Desktop/cpython/Lib/unittest/suite.py", line 122 in run
File "/Users/sobolev/Desktop/cpython/Lib/unittest/suite.py", line 84 in __call__
File "/Users/sobolev/Desktop/cpython/Lib/unittest/suite.py", line 122 in run
File "/Users/sobolev/Desktop/cpython/Lib/unittest/suite.py", line 84 in __call__
File "/Users/sobolev/Desktop/cpython/Lib/unittest/suite.py", line 122 in run
File "/Users/sobolev/Desktop/cpython/Lib/unittest/suite.py", line 84 in __call__
File "/Users/sobolev/Desktop/cpython/Lib/test/support/testresult.py", line 146 in run
File "/Users/sobolev/Desktop/cpython/Lib/test/support/__init__.py", line 1152 in _run_suite
File "/Users/sobolev/Desktop/cpython/Lib/test/support/__init__.py", line 1279 in run_unittest
File "/Users/sobolev/Desktop/cpython/Lib/test/libregrtest/single.py", line 36 in run_unittest
File "/Users/sobolev/Desktop/cpython/Lib/test/libregrtest/single.py", line 92 in test_func
File "/Users/sobolev/Desktop/cpython/Lib/test/libregrtest/single.py", line 48 in regrtest_runner
File "/Users/sobolev/Desktop/cpython/Lib/test/libregrtest/single.py", line 95 in _load_run_test
File "/Users/sobolev/Desktop/cpython/Lib/test/libregrtest/single.py", line 138 in _runtest_env_changed_exc
File "/Users/sobolev/Desktop/cpython/Lib/test/libregrtest/single.py", line 238 in _runtest
File "/Users/sobolev/Desktop/cpython/Lib/test/libregrtest/single.py", line 266 in run_single_test
File "/Users/sobolev/Desktop/cpython/Lib/test/libregrtest/main.py", line 290 in run_test
File "/Users/sobolev/Desktop/cpython/Lib/test/libregrtest/main.py", line 325 in run_tests_sequentially
File "/Users/sobolev/Desktop/cpython/Lib/test/libregrtest/main.py", line 459 in _run_tests
File "/Users/sobolev/Desktop/cpython/Lib/test/libregrtest/main.py", line 490 in run_tests
File "/Users/sobolev/Desktop/cpython/Lib/test/libregrtest/main.py", line 576 in main
File "/Users/sobolev/Desktop/cpython/Lib/test/libregrtest/main.py", line 584 in main
File "/Users/sobolev/Desktop/cpython/Lib/test/__main__.py", line 2 in <module>
File "/Users/sobolev/Desktop/cpython/Lib/runpy.py", line 88 in _run_code
File "/Users/sobolev/Desktop/cpython/Lib/runpy.py", line 198 in _run_module_as_main
Extension modules: _testcapi (total: 1)
[1] 3657 bus error ./python.exe -m test -f tests.txt
```
Contents of `tests.txt`:
```
test_shelve
test_shlex
test_signal
test_site
test_slice
test_smtplib
test_smtpnet
test_socket
test_socketserver
test_sort
test_source_encoding
test_sqlite3
test_ssl
test_stable_abi_ctypes
test_startfile
test_stat
test_statistics
test_str
test_strftime
test_string
test_string_literals
test_stringprep
test_strptime
test_strtod
test_struct
test_structseq
test_subclassinit
test_subprocess
test_sundry
test_super
test_support
```
But note that only 3 tests were run before the crash: `test_shelve test_shlex test_signal`
I've also tried running `./python.exe -m test -j4 --forever test_shelve test_shlex test_signal`, but no luck so far.
Env:
- 98c0c1de18e9ec02a0dde0a89b9acf9415891de2 (`main`)
- macOS Sonoma, Apple M2
<!-- gh-linked-prs -->
### Linked PRs
* gh-112834
* gh-112851
* gh-112852
* gh-112864
<!-- /gh-linked-prs -->
| bf0beae6a05f3266606a21e22a4d803abbb8d731 | a955fd68d6451bd42199110c978e99b3d2959db2 |
python/cpython | python__cpython-110002 | # Update to OpenSSL 3.0.13 (& 1.1.1w) in our binary release build process.
# Bug report
### Bug description:
We need to upgrade the OpenSSL versions we build & bundle into our binary releases before the next release. More security fixes as usual. In particular https://nvd.nist.gov/vuln/detail/CVE-2023-4807 applies to our 64-bit Windows binaries.
Pick the latest 3.0.x and 1.1.1 releases at the time the work is done. 3.0.11 today, and if we build binaries for older shipping-with-1.1 branches, 1.1.1w. We should update the binary _build_ tooling in older release branches for those to at least reference and pull in 1.1.1w even if we aren't shipping new binary releases on those ourselves.
### CPython versions tested on:
3.8, 3.9, 3.10, 3.11, 3.12
### Operating systems tested on:
macOS, Windows
<!-- gh-linked-prs -->
### Linked PRs
* gh-110002
* gh-110003
* gh-110004
* gh-110005
* gh-110006
* gh-110007
* gh-110008
* gh-110009
* gh-110010
* gh-110054
* gh-110055
* gh-110056
* gh-110059
* gh-110090
* gh-111265
* gh-111266
* gh-115043
* gh-115047
* gh-115048
* gh-115050
* gh-115052
* gh-115053
* gh-115054
* gh-115055
* gh-115057
<!-- /gh-linked-prs -->
| c88037d137a98d7c399c7bd74d5117b5bcae1543 | 526380e28644236bde9e41b949497ca1ee22653f |
python/cpython | python__cpython-113378 | # Test `test_c_locale_coercion` fails on macOS Sonoma
Repro: `./python.exe -m test test_c_locale_coercion -v`
Short output:
```
======================================================================
FAIL: test_external_target_locale_configuration (test.test_c_locale_coercion.LocaleConfigurationTests.test_external_target_locale_configuration) (env_var='LC_CTYPE', configured_locale='UTF-8')
----------------------------------------------------------------------
Traceback (most recent call last):
File "/Users/sobolev/Desktop/cpython/Lib/test/test_c_locale_coercion.py", line 273, in test_external_target_locale_configuration
self._check_child_encoding_details(var_dict,
File "/Users/sobolev/Desktop/cpython/Lib/test/test_c_locale_coercion.py", line 230, in _check_child_encoding_details
self.assertEqual(encoding_details, expected_details)
AssertionError: {'fse[36 chars]f-8:strict', 'stdout_info': 'utf-8:strict', 's[80 chars]: ''} != {'fse[36 chars]f-8:surrogateescape', 'stdout_info': 'utf-8:su[98 chars]: ''}
{'fsencoding': 'utf-8',
'lang': '',
'lc_all': '',
'lc_ctype': 'UTF-8',
'stderr_info': 'utf-8:backslashreplace',
- 'stdin_info': 'utf-8:strict',
? ^^ ^
+ 'stdin_info': 'utf-8:surrogateescape',
? ++++++ ^^^ ^^^
- 'stdout_info': 'utf-8:strict'}
? ^^ ^
+ 'stdout_info': 'utf-8:surrogateescape'}
? ++++++ ^^^ ^^^
----------------------------------------------------------------------
Ran 7 tests in 0.883s
FAILED (failures=51)
test test_c_locale_coercion failed
test_c_locale_coercion failed (51 failures)
== Tests result: FAILURE ==
1 test failed:
test_c_locale_coercion
Total duration: 924 ms
Total tests: run=7 failures=51
Total test files: run=1/1 failed=1
Result: FAILURE
```
Full output is attached below.
[out.txt](https://github.com/python/cpython/files/12743468/out.txt)
<!-- gh-linked-prs -->
### Linked PRs
* gh-113378
* gh-113398
* gh-113399
<!-- /gh-linked-prs -->
| 5f665e99e0b8a52415f83c2416eaf28abaacc3ae | bee627c1e29a070562d1a540a6e513d0daa322f5 |
python/cpython | python__cpython-112797 | # Tests crash on macOS Sonoma
# Crash report
I've experienced a crash on the `main` branch.
I just used `./python.exe -m test`.
This is all the output I have for now.
I cannot reproduce it with a smaller set of tests.
```
0:13:53 load avg: 2.12 [371/463/4] test_super
0:13:53 load avg: 2.12 [372/463/4] test_support
[1] 12160 killed ./python.exe -m test
```
<!-- gh-linked-prs -->
### Linked PRs
* gh-112797
* gh-112824
* gh-112825
* gh-132823
* gh-132824
<!-- /gh-linked-prs -->
| 953ee622b3901d3467e65e3484dcfa75ba6fcddf | 16448cab44e23d350824e9ac75e699f5bcc48a14 |
python/cpython | python__cpython-112905 | # `test_tarfile_vs_tar` of `test_shutil` fails on macOS Sonoma
# Bug report
```
0:10:50 load avg: 1.97 [343/463/3] test_shutil
test test_shutil failed -- Traceback (most recent call last):
File "/Users/sobolev/Desktop/cpython/Lib/test/test_shutil.py", line 1592, in test_tarfile_vs_tar
self.assertEqual(self._tarinfo(tarball), self._tarinfo(tarball2))
AssertionError: Tuples differ: ('dist', 'dist/file1', 'dist/file2', 'dist/[31 chars]ub2') != ('._dist', 'dist', 'dist/._file1', 'dist/._[122 chars]ub2')
First differing element 0:
'dist'
'._dist'
Second tuple contains 6 additional elements.
First extra element 6:
'dist/file1'
- ('dist', 'dist/file1', 'dist/file2', 'dist/sub', 'dist/sub/file3', 'dist/sub2')
+ ('._dist',
+ 'dist',
+ 'dist/._file1',
+ 'dist/._file2',
+ 'dist/._sub',
+ 'dist/._sub2',
+ 'dist/file1',
+ 'dist/file2',
+ 'dist/sub',
+ 'dist/sub/._file3',
+ 'dist/sub/file3',
+ 'dist/sub2')
```
Running locally on macOS 14.0
<!-- gh-linked-prs -->
### Linked PRs
* gh-112905
* gh-112927
* gh-112928
<!-- /gh-linked-prs -->
| dd2ebdf89ff144e89db180bd552c50615f712cb2 | 5bf7580d72259d7d64f5ee8cfc2df677de5310a4 |
python/cpython | python__cpython-110193 | # Change DEOPT_IF macro not to need the opcode name
The `DEOPT_IF` macro currently takes a condition and the name of the unspecialized instruction. This causes some duplication of guards (when the same guard could be reused by different instruction families), and is quite unnecessary, since the code generator can easily fill in the correct instruction.
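As a toy sketch of the idea (this is not the actual cases-generator code; `expand_deopt` is a made-up helper), the generator can rewrite a one-argument `DEOPT_IF(cond)` into today's two-argument form by filling in the name of the instruction it is currently emitting:

```python
import re


def expand_deopt(body: str, instruction: str) -> str:
    # Rewrite DEOPT_IF(cond); -> DEOPT_IF(cond, <instruction>); so the
    # hand-written guard no longer has to name the unspecialized opcode.
    return re.sub(r"DEOPT_IF\((.+?)\);", rf"DEOPT_IF(\1, {instruction});", body)


src = "DEOPT_IF(tp->tp_getattro != PyObject_GenericGetAttr);"
print(expand_deopt(src, "LOAD_ATTR"))
```

With the opcode name supplied mechanically, the same guard text can be shared across instruction families without edits.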
<!-- gh-linked-prs -->
### Linked PRs
* gh-110193
* gh-110301
<!-- /gh-linked-prs -->
| d67edcf0b361c9ee0d29ed719562c58a85304cd0 | d73501602f863a54c872ce103cd3fa119e38bac9 |
python/cpython | python__cpython-110993 | # What's New in Python 3.13 (copyediting)
# Documentation
This is a meta-issue to capture PRs for copyedits to What's New in Python 3.13.
Other meta-issues:
* 3.14: #123299
* 3.12: #109190
<!-- gh-linked-prs -->
### Linked PRs
* gh-110993
* gh-110994
* gh-110997
* gh-114401
* gh-117902
* gh-118439
* gh-118694
* gh-118711
* gh-119958
* gh-119959
* gh-122958
* gh-122971
* gh-122990
* gh-123032
* gh-123063
* gh-123086
* gh-123101
* gh-123132
* gh-123150
* gh-123164
* gh-123292
* gh-123301
* gh-123308
* gh-123393
* gh-123396
* gh-123529
* gh-123552
* gh-123589
* gh-123590
* gh-123776
* gh-123794
* gh-123845
* gh-124135
* gh-124313
* gh-124334
* gh-124336
* gh-124341
* gh-124343
* gh-124348
* gh-124357
* gh-124360
* gh-124362
* gh-124365
* gh-124376
* gh-124827
* gh-124828
* gh-124831
* gh-124830
* gh-124833
* gh-124947
* gh-124966
* gh-125007
* gh-125033
* gh-125034
* gh-127816
* gh-129242
* gh-136046
* gh-137766
* gh-137767
<!-- /gh-linked-prs -->
| c9c4a87f5da8d1e9b7ab19c687b49ba5431f6d36 | 6c23635f2b7067ef091a550954e09f8b7c329e3f |
python/cpython | python__cpython-110057 | # test_threading: test_set_and_clear() failed on Windows x64
Windows x64:
```
FAIL: test_set_and_clear (test.test_threading.EventTests.test_set_and_clear)
----------------------------------------------------------------------
Traceback (most recent call last):
File "D:\a\cpython\cpython\Lib\test\lock_tests.py", line 481, in test_set_and_clear
self.assertEqual(results, [True] * N)
AssertionError: Lists differ: [False, True, True, True, True] != [True, True, True, True, True]
First differing element 0:
False
True
- [False, True, True, True, True]
? -------
+ [True, True, True, True, True]
? ++++++
```
The test passed when re-run:
```
0:19:31 load avg: 9.73 Run 2 tests in parallel using 2 worker processes (timeout: 10 min, worker timeout: 15 min)
0:19:33 load avg: 9.53 [1/2] test_threading passed
Re-running test_threading in verbose mode (matching: test_set_and_clear)
test_set_and_clear (test.test_threading.EventTests.test_set_and_clear) ... ok
----------------------------------------------------------------------
Ran 1 test in 0.766s
OK
```
build: https://github.com/python/cpython/actions/runs/6323487736/job/17171157457?pr=109922
<!-- gh-linked-prs -->
### Linked PRs
* gh-110057
* gh-110063
* gh-110089
* gh-110344
* gh-110346
* gh-110355
<!-- /gh-linked-prs -->
| 4e356ad183eeb567783f4a87fd092573da1e9252 | 5fdcea744024c8a19ddb57057bf5ec2889546c98 |
python/cpython | python__cpython-109977 | # test_gdb failed with 15 min timeout on PPC64LE RHEL8 3.x (LTO + pydebug)
PPC64LE RHEL8 3.x:
```
0:19:19 load avg: 1.37 [463/463/1] test_gdb process crashed (Exit code 1)
Timeout (0:15:00)!
Thread 0x00007fffae664d70 (most recent call first):
File "/home/buildbot/buildarea/3.x.cstratak-RHEL8-ppc64le/build/Lib/selectors.py", line 398 in select
File "/home/buildbot/buildarea/3.x.cstratak-RHEL8-ppc64le/build/Lib/subprocess.py", line 2108 in _communicate
File "/home/buildbot/buildarea/3.x.cstratak-RHEL8-ppc64le/build/Lib/subprocess.py", line 1209 in communicate
File "/home/buildbot/buildarea/3.x.cstratak-RHEL8-ppc64le/build/Lib/test/test_gdb.py", line 107 in run_gdb
File "/home/buildbot/buildarea/3.x.cstratak-RHEL8-ppc64le/build/Lib/test/test_gdb.py", line 224 in get_stack_trace
File "/home/buildbot/buildarea/3.x.cstratak-RHEL8-ppc64le/build/Lib/test/test_gdb.py", line 949 in test_pycfunction
(...)
(...)
0:19:19 load avg: 1.37 Re-running 1 failed tests in verbose mode in subprocesses
0:19:19 load avg: 1.37 Run 1 test in parallel using 1 worker process (timeout: 15 min, worker timeout: 20 min)
0:19:49 load avg: 1.22 running (1): test_gdb (30.0 sec)
0:20:19 load avg: 1.19 running (1): test_gdb (1 min)
0:20:49 load avg: 1.11 running (1): test_gdb (1 min 30 sec)
(...)
0:33:50 load avg: 1.09 running (1): test_gdb (14 min 30 sec)
0:34:20 load avg: 1.14 running (1): test_gdb (15 min)
0:34:20 load avg: 1.14 [1/1/1] test_gdb process crashed (Exit code 1)
Re-running test_gdb in verbose mode
test_NULL_ob_type (test.test_gdb.PrettyPrintTests.test_NULL_ob_type)
Ensure that a PyObject* with NULL ob_type is handled gracefully ... ok
(...)
test_pycfunction (test.test_gdb.PyBtTests.test_pycfunction)
Verify that "py-bt" displays invocations of PyCFunction instances ... Timeout (0:15:00)!
Thread 0x00007fffa14b4d70 (most recent call first):
File "/home/buildbot/buildarea/3.x.cstratak-RHEL8-ppc64le/build/Lib/selectors.py", line 398 in select
File "/home/buildbot/buildarea/3.x.cstratak-RHEL8-ppc64le/build/Lib/subprocess.py", line 2108 in _communicate
File "/home/buildbot/buildarea/3.x.cstratak-RHEL8-ppc64le/build/Lib/subprocess.py", line 1209 in communicate
File "/home/buildbot/buildarea/3.x.cstratak-RHEL8-ppc64le/build/Lib/test/test_gdb.py", line 107 in run_gdb
File "/home/buildbot/buildarea/3.x.cstratak-RHEL8-ppc64le/build/Lib/test/test_gdb.py", line 224 in get_stack_trace
File "/home/buildbot/buildarea/3.x.cstratak-RHEL8-ppc64le/build/Lib/test/test_gdb.py", line 949 in test_pycfunction
(...)
```
build: https://buildbot.python.org/all/#/builders/559/builds/3951
<!-- gh-linked-prs -->
### Linked PRs
* gh-109977
* gh-110026
* gh-110339
* gh-110343
* gh-110351
* gh-110354
<!-- /gh-linked-prs -->
| 8f324b7ecd2df3036fab098c4c8ac185ac07b277 | 98c0c1de18e9ec02a0dde0a89b9acf9415891de2 |
python/cpython | python__cpython-109968 | # Incorrect rendering of `__replace__` method in `copy` docs: consider adding `copy.SupportsReplace`?
# Bug report
Here's how it looks right now:
<img width="792" alt="Screenshot 2023-09-27 at 12 51 08" src="https://github.com/python/cpython/assets/4660275/7847e178-7cf3-483c-a0be-f7f9d80791a7">
Link: https://docs.python.org/dev/library/copy.html
But, there's no such thing as `copy.__replace__`.
There are several options:
1. Do not document this as a method (however, it is)
2. Document it as `object.__replace__`, but `object` does not have this method, which can be very confusing
3. Add some protocol, like `typing.SupportsReplace` and document `SupportsReplace.__replace__`
I personally think that `typing.SupportsReplace` is very useful on its own, because:
- we will use it in `typeshed` for typing `copy.replace`
- it can be used in other places by end users
- it matches exactly what `__replace__` is: a protocol
So, it can be something like:
```python
class SupportsReplace(Protocol):
def __replace__(self, /, **kwargs: Any) -> Self: ...
```
We cannot really type `**kwargs` here. See https://discuss.python.org/t/generalize-replace-function/28511/20. But we can later apply a mypy plugin similar to the one we use for `dataclasses.replace`: https://github.com/python/mypy/blob/4b66fa9de07828621fee1d53abd533f3903e570a/mypy/plugins/dataclasses.py#L390-L402
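A rough runtime sketch of what such a protocol could look like (hypothetical: `SupportsReplace` does not exist in `typing` today, and the `Config` class is only an illustration):

```python
from typing import Any, Protocol, runtime_checkable


@runtime_checkable
class SupportsReplace(Protocol):
    # Hypothetical protocol name; not part of typing today.
    def __replace__(self, /, **kwargs: Any): ...


class Config:
    def __init__(self, host: str, port: int) -> None:
        self.host, self.port = host, port

    def __replace__(self, /, **kwargs: Any) -> "Config":
        # Return a copy with the given fields swapped out.
        state = {"host": self.host, "port": self.port} | kwargs
        return Config(**state)


c = Config("localhost", 8080)
print(isinstance(c, SupportsReplace))  # structural check sees __replace__
print(c.__replace__(port=9090).port)
```

Since the check is structural, any class that defines `__replace__` would satisfy the protocol without inheriting from it.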
I would like to work on docs / implementation changes after we discuss all the things and come to an agreement :)
CC @serhiy-storchaka @AlexWaygood
<!-- gh-linked-prs -->
### Linked PRs
* gh-109968
* gh-110027
* gh-130909
<!-- /gh-linked-prs -->
| 0baf72696e79191241a2d5cfdfd7e6135115f7b2 | 8f324b7ecd2df3036fab098c4c8ac185ac07b277 |
python/cpython | python__cpython-110058 | # test_pty: test_openpty() failed on aarch64 RHEL8 LTO + PGO 3.x
aarch64 RHEL8 LTO + PGO 3.x:
```
0:00:16 load avg: 8.60 [299/463/1] test_pty failed (1 failure)
hi there
test_fork (test.test_pty.PtyTest.test_fork) ... ok
test_master_read (test.test_pty.PtyTest.test_master_read) ... ok
test_openpty (test.test_pty.PtyTest.test_openpty) ... FAIL
Stdout:
tty.tcgetattr(pty.STDIN_FILENO) failed
Calling pty.openpty()
Got master_fd=3, slave_fd=4, slave_name=None
Writing to slave_fd
Writing chunked output
test_spawn_doesnt_hang (test.test_pty.PtyTest.test_spawn_doesnt_hang) ... ok
test__copy_to_each (test.test_pty.SmallPtyTests.test__copy_to_each)
Test the normal data case on both master_fd and stdin. ... ok
test__restore_tty_mode_normal_return (test.test_pty.SmallPtyTests.test__restore_tty_mode_normal_return)
Test that spawn resets the tty mode no when _copy returns normally. ... ok
======================================================================
FAIL: test_openpty (test.test_pty.PtyTest.test_openpty)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/buildbot/buildarea/3.x.cstratak-RHEL8-aarch64.lto-pgo/build/Lib/test/test_pty.py", line 192, in test_openpty
s2 = _readline(master_fd)
^^^^^^^^^^^^^^^^^^^^
File "/home/buildbot/buildarea/3.x.cstratak-RHEL8-aarch64.lto-pgo/build/Lib/test/test_pty.py", line 68, in _readline
return reader.readline()
^^^^^^^^^^^^^^^^^
File "/home/buildbot/buildarea/3.x.cstratak-RHEL8-aarch64.lto-pgo/build/Lib/test/test_pty.py", line 105, in handle_sig
self.fail("isatty hung")
AssertionError: isatty hung
Stdout:
tty.tcgetattr(pty.STDIN_FILENO) failed
Calling pty.openpty()
Got master_fd=3, slave_fd=4, slave_name=None
Writing to slave_fd
Writing chunked output
----------------------------------------------------------------------
Ran 6 tests in 10.023s
FAILED (failures=1)
test test_pty failed
```
build: https://buildbot.python.org/all/#/builders/78/builds/5451
<!-- gh-linked-prs -->
### Linked PRs
* gh-110058
* gh-110060
* gh-110061
<!-- /gh-linked-prs -->
| 5fdcea744024c8a19ddb57057bf5ec2889546c98 | b488c0d761b2018c10bc5a0e5469b8b209e1a681 |
python/cpython | python__cpython-128122 | # test_glob: test_selflink() fails randomly on Linux
AMD64 RHEL8 Refleaks 3.x:
```
0:09:26 load avg: 9.08 [406/463/1] test_glob failed (1 failure) -- running (7): (...)
beginning 6 repetitions
123456
.....test test_glob failed -- Traceback (most recent call last):
File "/home/buildbot/buildarea/3.x.cstratak-RHEL8-x86_64.refleak/build/Lib/test/test_glob.py", line 396, in test_selflink
self.assertIn(path, results)
AssertionError:
'dir/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/' not found in
{'dir/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/',
'dir/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/',
'dir/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/',
'dir/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/',
'dir/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/',
'dir/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/',
'dir/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/',
'dir/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/',
'dir/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/'}
```
The tested path ``dir/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link`` has one less ``link/`` than the first path of the test: 'dir/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/'.
```
# Tested path
>>> len('dir/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/'.split('/'))
33
# First (shorted) expected path
>>> len('dir/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/'.split('/'))
34
```
build: https://buildbot.python.org/all/#/builders/259/builds/892
<!-- gh-linked-prs -->
### Linked PRs
* gh-128122
* gh-128255
* gh-128812
* gh-128821
* gh-128834
* gh-130551
<!-- /gh-linked-prs -->
| 0974d7bb866062ed4aaa40f705d6cc4c294d99f1 | df46c780febab667ee01264ae32c4e866cecd911 |
python/cpython | python__cpython-109957 | # Also test `typing.NamedTuple` with `copy.replace`
# Bug report
Right now only `collections.namedtuple` is tested: https://github.com/python/cpython/blob/b1aebf1e6576680d606068d17e2208259573e061/Lib/test/test_copy.py#L937-L946
@serhiy-storchaka noted in https://github.com/python/cpython/pull/108752#discussion_r1313369490 that only named tuples created by `collections.namedtuple()` are supported. But since `typing.NamedTuple` uses `collections.namedtuple` internally: https://github.com/python/cpython/blob/b1aebf1e6576680d606068d17e2208259573e061/Lib/typing.py#L2695-L2696 it is also supported:
```python
>>> import typing
>>>
>>> class N(typing.NamedTuple):
... x: int
... y: int
...
>>> N.__replace__
<function N._replace at 0x10580a210>
>>> import copy
>>> copy.replace(N(1, 2), x=3)
N(x=3, y=2)
```
I have a PR ready with extra tests.
Refs https://github.com/python/cpython/issues/108751
Refs https://github.com/python/cpython/pull/108752
<!-- gh-linked-prs -->
### Linked PRs
* gh-109957
<!-- /gh-linked-prs -->
| bb2e96f6f4d6c397c4eb5775a09262a207675577 | 8c071373f12f325c54591fe990ec026184e48f8f |
python/cpython | python__cpython-109910 | # Inline comments about Task state in ``asyncio.tasks`` are out of date
# Documentation
Some of the inline comments in the asyncio.tasks module are out of date. They incorrectly describe the state transitions of a task between the sleeping and running states. This is unhelpful to anyone reading the code.
I've created a PR in #109910 where I offer an expanded comment reflecting the current logic.
<!-- gh-linked-prs -->
### Linked PRs
* gh-109910
* gh-109992
* gh-109993
<!-- /gh-linked-prs -->
| 45cf5b0c69bb5c51f33fc681d90c45147e311ddf | 32466c97c06ee5812923d695195394c736eeb707 |
python/cpython | python__cpython-109924 | # Set line number on the POP_TOP that follows a RETURN_GENERATOR
Currently the ``POP_TOP`` of a generator doesn't get a line number.
If we give it the same line number as the ``RETURN_GENERATOR`` and ``RESUME``, that will reduce the size of the line number table.
The missing line number also got in my way while working on #109094.
```
>>> def f(): yield 42
...
>>> dis.dis(f)
1 0 RETURN_GENERATOR
None 2 POP_TOP
1 4 RESUME 0
6 LOAD_CONST 1 (42)
8 YIELD_VALUE 1
10 RESUME 1
12 POP_TOP
14 RETURN_CONST 0 (None)
None >> 16 CALL_INTRINSIC_1 3 (INTRINSIC_STOPITERATION_ERROR)
18 RERAISE 1
ExceptionTable:
4 to 14 -> 16 [0] lasti
```
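The missing location can also be observed programmatically; a small sketch (exact opcodes and offsets vary across CPython versions):

```python
import dis


def f():
    yield 42


# Print each instruction with the line it is attributed to; None marks
# an instruction the compiler emitted with no location, like the
# POP_TOP after RETURN_GENERATOR in the listing above.
for ins in dis.get_instructions(f):
    pos = getattr(ins, "positions", None)  # Positions exists on 3.11+
    line = pos.lineno if pos else ins.starts_line
    print(ins.opname, line)
```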
<!-- gh-linked-prs -->
### Linked PRs
* gh-109924
<!-- /gh-linked-prs -->
| ea285ad8b69c6ed91fe79edb3b0ea4d9cd6e6011 | b89ed9df39851348fbb1552294644f99f6b17d2c |
python/cpython | python__cpython-110911 | # Segfault when printing `MemoryError` raised through `PyErr_NoMemory()` from subinterpreter
# Crash report
### What happened?
Repro:
```python
import _testcapi
_testcapi.run_in_subinterp("[0]*100000000000")
```
Backtrace:
```
BaseException_str (self=<optimized out>) at Objects/exceptions.c:120
120 switch (PyTuple_GET_SIZE(self->args)) {
(gdb) bt
#0 BaseException_str (self=<optimized out>) at Objects/exceptions.c:120
#1 0x00005555556ed00f in PyObject_Str (v=v@entry=0x7ffff75988f0) at Objects/object.c:606
#2 0x0000555555872d45 in print_exception_message (ctx=ctx@entry=0x7fffffffd870, type=type@entry=0x555555b54f20 <_PyExc_MemoryError>, value=0x7ffff75988f0)
at Python/pythonrun.c:1079
#3 0x0000555555873c59 in print_exception (ctx=ctx@entry=0x7fffffffd870, value=value@entry=0x7ffff75988f0) at Python/pythonrun.c:1226
#4 0x000055555587443a in print_exception_recursive (ctx=ctx@entry=0x7fffffffd870, value=value@entry=0x7ffff75988f0) at Python/pythonrun.c:1507
#5 0x0000555555874af2 in _PyErr_Display (file=file@entry=0x7ffff743fe30, unused=unused@entry=0x0, value=value@entry=0x7ffff75988f0, tb=tb@entry=0x7ffff74f39d0)
at Python/pythonrun.c:1557
#6 0x0000555555874c91 in PyErr_Display (unused=unused@entry=0x0, value=0x7ffff75988f0, tb=0x7ffff74f39d0) at Python/pythonrun.c:1585
#7 0x000055555588456f in sys_excepthook_impl (module=module@entry=0x7ffff7631370, exctype=<optimized out>, value=<optimized out>, traceback=<optimized out>)
at ./Python/sysmodule.c:783
#8 0x00005555558845bb in sys_excepthook (module=0x7ffff7631370, args=0x7fffffffd9c0, nargs=<optimized out>) at ./Python/clinic/sysmodule.c.h:101
#9 0x00005555556e691b in cfunction_vectorcall_FASTCALL (func=0x7ffff7631850, args=0x7fffffffd9c0, nargsf=<optimized out>, kwnames=<optimized out>)
at Objects/methodobject.c:425
#10 0x00005555556794b8 in _PyObject_VectorcallTstate (tstate=0x7ffff7598938, callable=0x7ffff7631850, args=0x7fffffffd9c0, nargsf=3, kwnames=0x0)
at ./Include/internal/pycore_call.h:187
#11 0x00005555556795d3 in PyObject_Vectorcall (callable=callable@entry=0x7ffff7631850, args=args@entry=0x7fffffffd9c0, nargsf=nargsf@entry=3, kwnames=kwnames@entry=0x0)
at Objects/call.c:327
#12 0x0000555555874e20 in _PyErr_PrintEx (tstate=0x7ffff7598938, set_sys_last_vars=set_sys_last_vars@entry=1) at Python/pythonrun.c:840
#13 0x0000555555875199 in PyErr_PrintEx (set_sys_last_vars=set_sys_last_vars@entry=1) at Python/pythonrun.c:878
#14 0x00005555558751a9 in PyErr_Print () at Python/pythonrun.c:884
#15 0x0000555555875b77 in PyRun_SimpleStringFlags (command=<optimized out>, flags=flags@entry=0x7fffffffda70) at Python/pythonrun.c:517
#16 0x00007ffff76ad1a4 in run_in_subinterp (self=<optimized out>, args=args@entry=0x7ffff779a210) at ./Modules/_testcapimodule.c:1405
#17 0x00005555556e65c8 in cfunction_call (func=func@entry=0x7ffff7810f50, args=args@entry=0x7ffff779a210, kwargs=kwargs@entry=0x0) at Objects/methodobject.c:551
#18 0x000055555567916b in _PyObject_MakeTpCall (tstate=tstate@entry=0x555555c02338 <_PyRuntime+508728>, callable=callable@entry=0x7ffff7810f50,
args=args@entry=0x7ffff7fc1078, nargs=<optimized out>, keywords=keywords@entry=0x0) at Objects/call.c:242
#19 0x00005555556795ab in _PyObject_VectorcallTstate (tstate=0x555555c02338 <_PyRuntime+508728>, callable=0x7ffff7810f50, args=0x7ffff7fc1078, nargsf=<optimized out>,
kwnames=0x0) at ./Include/internal/pycore_call.h:185
#20 0x00005555556795d3 in PyObject_Vectorcall (callable=callable@entry=0x7ffff7810f50, args=args@entry=0x7ffff7fc1078, nargsf=<optimized out>,
kwnames=kwnames@entry=0x0) at Objects/call.c:327
#21 0x00005555557e3ad6 in _PyEval_EvalFrameDefault (tstate=tstate@entry=0x555555c02338 <_PyRuntime+508728>, frame=0x7ffff7fc1020, throwflag=throwflag@entry=0)
at Python/generated_cases.c.h:3761
#22 0x00005555557ed5e4 in _PyEval_EvalFrame (throwflag=0, frame=<optimized out>, tstate=0x555555c02338 <_PyRuntime+508728>) at ./Include/internal/pycore_ceval.h:107
#23 _PyEval_Vector (tstate=tstate@entry=0x555555c02338 <_PyRuntime+508728>, func=func@entry=0x7ffff777b050, locals=locals@entry=0x7ffff778e690, args=args@entry=0x0,
argcount=argcount@entry=0, kwnames=kwnames@entry=0x0) at Python/ceval.c:1630
#24 0x00005555557ed68b in PyEval_EvalCode (co=co@entry=0x7ffff7767520, globals=globals@entry=0x7ffff778e690, locals=locals@entry=0x7ffff778e690) at Python/ceval.c:582
#25 0x00005555558718cf in run_eval_code_obj (tstate=tstate@entry=0x555555c02338 <_PyRuntime+508728>, co=co@entry=0x7ffff7767520, globals=globals@entry=0x7ffff778e690,
locals=locals@entry=0x7ffff778e690) at Python/pythonrun.c:1720
#26 0x00005555558722b7 in run_mod (mod=mod@entry=0x555555c55608, filename=filename@entry=0x7ffff77b5940, globals=globals@entry=0x7ffff778e690,
locals=locals@entry=0x7ffff778e690, flags=flags@entry=0x7fffffffdf98, arena=arena@entry=0x7ffff77de320) at Python/pythonrun.c:1741
#27 0x00005555558723cf in pyrun_file (fp=fp@entry=0x555555c2f3b0, filename=filename@entry=0x7ffff77b5940, start=start@entry=257, globals=globals@entry=0x7ffff778e690,
locals=locals@entry=0x7ffff778e690, closeit=closeit@entry=1, flags=0x7fffffffdf98) at Python/pythonrun.c:1641
#28 0x00005555558756bb in _PyRun_SimpleFileObject (fp=fp@entry=0x555555c2f3b0, filename=filename@entry=0x7ffff77b5940, closeit=closeit@entry=1,
flags=flags@entry=0x7fffffffdf98) at Python/pythonrun.c:464
#29 0x0000555555875938 in _PyRun_AnyFileObject (fp=fp@entry=0x555555c2f3b0, filename=filename@entry=0x7ffff77b5940, closeit=closeit@entry=1,
flags=flags@entry=0x7fffffffdf98) at Python/pythonrun.c:79
#30 0x000055555589dfda in pymain_run_file_obj (program_name=program_name@entry=0x7ffff79790e0, filename=filename@entry=0x7ffff77b5940, skip_source_first_line=0)
at Modules/main.c:361
#31 0x000055555589e2b7 in pymain_run_file (config=config@entry=0x555555b9cf20 <_PyRuntime+93984>) at Modules/main.c:380
#32 0x000055555589f3fc in pymain_run_python (exitcode=exitcode@entry=0x7fffffffe10c) at Modules/main.c:611
#33 0x000055555589f454 in Py_RunMain () at Modules/main.c:689
#34 0x000055555589f4a8 in pymain_main (args=args@entry=0x7fffffffe150) at Modules/main.c:719
#35 0x000055555589f51d in Py_BytesMain (argc=<optimized out>, argv=<optimized out>) at Modules/main.c:743
#36 0x00005555555cf74e in main (argc=<optimized out>, argv=<optimized out>) at ./Programs/python.c:15
```
I can see two simultaneous reasons for this behaviour:
* unlike the main interpreter, which preallocates memory errors, a subinterpreter leaves `interp.exc_state.memerrors_freelist` equal to `NULL`, which means that `PyErr_NoMemory()` always returns `interp.static_objects.last_resort_memory_error`.
* `interp.static_objects.last_resort_memory_error` has its `args` field equal to `NULL`, so any attempt to print it segfaults.
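The second point can be illustrated from Python (a hedged analogue only: the C singleton's `args` being `NULL` has no Python-level equivalent, since a heap-allocated `MemoryError` always gets an empty tuple):

```python
import traceback

# A MemoryError created normally always carries an args tuple; the
# C-level last_resort_memory_error singleton instead has args == NULL,
# which is what the exception-printing path trips over.
exc = MemoryError()
assert exc.args == ()

# Printing/formatting an exception consults args (via BaseException.__str__):
lines = traceback.format_exception_only(MemoryError, exc)
assert lines == ['MemoryError\n']
```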
Possible fix:
* assign `interp.static_objects.last_resort_memory_error.args` to the empty tuple:
```patch
diff --git a/Include/internal/pycore_runtime_init.h b/Include/internal/pycore_runtime_init.h
index 2deba02a89..e22ffcd1fb 100644
--- a/Include/internal/pycore_runtime_init.h
+++ b/Include/internal/pycore_runtime_init.h
@@ -177,6 +177,7 @@ extern PyTypeObject _PyExc_MemoryError;
}, \
.last_resort_memory_error = { \
_PyObject_HEAD_INIT(&_PyExc_MemoryError) \
+ .args = (PyObject*)&_Py_SINGLETON(tuple_empty), \
}, \
}, \
}, \
```
* initialize memory errors on subinterpreter creation in a similar manner as in the main interpreter, through `_PyExc_InitGlobalObjects()` or whatever
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Linux
### Output from running 'python -VV' on the command line:
Python 3.13.0a0 (heads/main:e81bd3fa16, Sep 26 2023, 12:14:51) [GCC 10.2.1 20210110]
<!-- gh-linked-prs -->
### Linked PRs
* gh-110911
* gh-111238
<!-- /gh-linked-prs -->
| 47d3e2ed930a9f3d228aed4f62133737dae74cf7 | 96cbd1e1db3447a33e5cc5cc2886ce79b61cc6eb |
python/cpython | python__cpython-109899 | # flowgraph optimize_cfg Assertion for if-else-expression with if-else-expression as condition
# Bug report
### Bug description:
```python
a if (1 if b else c) else d
```
output (Python 3.12.0rc3+):
```python
python: Python/flowgraph.c:1598: optimize_cfg: Assertion `no_redundant_nops(g)' failed.
```
tested with the current 3.12 branch (538f505a3744ab1dd29861a859ab81ab65436144)
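A quick way to check a build for this bug is to drive the repro through `compile()`; on an affected build with C assertions enabled the process aborts, while a fixed build just produces a code object (sketch, assuming any 3.x interpreter):

```python
# On affected debug builds this aborted with the no_redundant_nops
# assertion; on fixed builds it compiles cleanly.
code = compile("a if (1 if b else c) else d", "<repro>", "eval")
assert set(code.co_names) <= {"a", "b", "c", "d"}
```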
### CPython versions tested on:
3.12
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-109899
* gh-109987
* gh-110048
<!-- /gh-linked-prs -->
| 9fa094677186b4bb05e488e5bc9d5dfe7ec32812 | c0b194a77082f2db4b5689a27e73f07fa046fa79 |
python/cpython | python__cpython-110421 | # test_os.Win32KillTests: test_CTRL_BREAK_EVENT() failed on GHA Windows x64
Windows x64 failed:
```
ERROR: test_CTRL_BREAK_EVENT (test.test_os.Win32KillTests.test_CTRL_BREAK_EVENT)
----------------------------------------------------------------------
Traceback (most recent call last):
File "D:\a\cpython\cpython\Lib\test\test_os.py", line 2616, in test_CTRL_BREAK_EVENT
self._kill_with_event(signal.CTRL_BREAK_EVENT, "CTRL_BREAK_EVENT")
File "D:\a\cpython\cpython\Lib\test\test_os.py", line 2591, in _kill_with_event
os.kill(proc.pid, signal.SIGINT)
PermissionError: [WinError 5] Access is denied
```
Test passed when re-run:
```
test_CTRL_BREAK_EVENT (test.test_os.Win32KillTests.test_CTRL_BREAK_EVENT) ... ok
```
build: https://github.com/python/cpython/actions/runs/6309792803/job/17130459509?pr=109887
<!-- gh-linked-prs -->
### Linked PRs
* gh-110421
* gh-110442
* gh-110443
<!-- /gh-linked-prs -->
| aaf297c048694cd9652790f8b74e69f7ddadfbde | fb6c4ed2bbb2a867d5f0b9a94656e4714be5d9c2 |
python/cpython | python__cpython-109869 | # Skip deepcopy memo check when memo is empty
# Feature or enhancement
### Proposal:
In the `copy.deepcopy` method we can skip the initial memo check if the memo was just created. We can replace
```python
if memo is None:
    memo = {}
d = id(x)
y = memo.get(d, _nil)
if y is not _nil:
    return y
```
with
```python
d = id(x)
if memo is None:
    memo = {}
else:
    y = memo.get(d, _nil)
    if y is not _nil:
        return y
```
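For background on why the memo lookup exists at all (and why skipping it is only safe when the memo was just created): the memo is what preserves shared and recursive structure across a deepcopy. A freshly created memo cannot contain `id(x)` yet, so the reordering only skips a lookup that is guaranteed to miss:

```python
import copy

# Shared structure survives deepcopy because of memo hits:
inner = [1, 2]
outer = [inner, inner]
dup = copy.deepcopy(outer)
assert dup[0] is dup[1]       # sharing preserved via the memo
assert dup[0] is not inner    # but it is a genuine copy

# Recursive structures also rely on the memo hit:
rec = []
rec.append(rec)
dup2 = copy.deepcopy(rec)
assert dup2[0] is dup2
```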
### Has this already been discussed elsewhere?
This is a minor feature, which does not need previous discussion elsewhere
### Links to previous discussion of this feature:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-109869
<!-- /gh-linked-prs -->
| 05079d93e410fca1e41ed32e67c54d63cbd9b35b | 7dc2c5093ef027aab57bca953ac2d6477a4a440b |
python/cpython | python__cpython-109866 | # PluralFormsTestCase tests in test_gettext are not independent
# Bug report
### Bug description:
```
$ ./python -m test -v test_gettext -m PluralFormsTestCase
...
======================================================================
FAIL: test_plural_context_forms1 (test.test_gettext.PluralFormsTestCase.test_plural_context_forms1)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/serhiy/py/cpython/Lib/test/test_gettext.py", line 330, in test_plural_context_forms1
eq(x, 'Hay %s fichero (context)')
AssertionError: 'There is %s file' != 'Hay %s fichero (context)'
- There is %s file
+ Hay %s fichero (context)
======================================================================
FAIL: test_plural_context_forms2 (test.test_gettext.PluralFormsTestCase.test_plural_context_forms2)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/serhiy/py/cpython/Lib/test/test_gettext.py", line 359, in test_plural_context_forms2
eq(x, 'Hay %s fichero (context)')
AssertionError: 'There is %s file' != 'Hay %s fichero (context)'
- There is %s file
+ Hay %s fichero (context)
======================================================================
FAIL: test_plural_forms1 (test.test_gettext.PluralFormsTestCase.test_plural_forms1)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/serhiy/py/cpython/Lib/test/test_gettext.py", line 320, in test_plural_forms1
eq(x, 'Hay %s fichero')
AssertionError: 'There is %s file' != 'Hay %s fichero'
- There is %s file
+ Hay %s fichero
```
### CPython versions tested on:
3.11, 3.12, CPython main branch
### Operating systems tested on:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-109866
* gh-110502
* gh-110503
<!-- /gh-linked-prs -->
| 1aad4fc5dba993899621de86ae5955883448d6f6 | 92ca90b7629c070ebc3b08e6f03db0bb552634e3 |
python/cpython | python__cpython-113991 | # test_create_subprocess_with_pidfd in test_asyncio.test_subprocess is not independent
# Bug report
### Bug description:
```
$ ./python -m test -v test_asyncio.test_subprocess -m test_create_subprocess_with_pidfd
...
test_create_subprocess_with_pidfd (test.test_asyncio.test_subprocess.GenericWatcherTests.test_create_subprocess_with_pidfd) ... /home/serhiy/py/cpython/Lib/test/test_asyncio/test_subprocess.py:980: DeprecationWarning: There is no current event loop
asyncio.get_event_loop_policy().get_event_loop()
FAIL
======================================================================
FAIL: test_create_subprocess_with_pidfd (test.test_asyncio.test_subprocess.GenericWatcherTests.test_create_subprocess_with_pidfd)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/serhiy/py/cpython/Lib/test/test_asyncio/test_subprocess.py", line 986, in test_create_subprocess_with_pidfd
returncode, stdout = runner.run(main())
^^^^^^^^^^^^^^^^^^
File "/home/serhiy/py/cpython/Lib/asyncio/runners.py", line 118, in run
return self._loop.run_until_complete(task)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/serhiy/py/cpython/Lib/asyncio/base_events.py", line 664, in run_until_complete
return future.result()
^^^^^^^^^^^^^^^
File "/home/serhiy/py/cpython/Lib/test/test_asyncio/test_subprocess.py", line 979, in main
with self.assertRaises(RuntimeError):
AssertionError: RuntimeError not raised
```
Note also a deprecation warning (you need to scroll horizontally to see it):
```
.../Lib/test/test_asyncio/test_subprocess.py:980: DeprecationWarning: There is no current event loop
```
### CPython versions tested on:
3.12, CPython main branch
### Operating systems tested on:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-113991
* gh-114072
<!-- /gh-linked-prs -->
| f8a79109d0c4f408d34d51861cc0a7c447f46d70 | 1709020e8ebaf9bf1bc9ee14d56173c860613931 |
python/cpython | python__cpython-110245 | # PyThreadState Sometimes Used for Different Threads for Subinterpreters
# Bug report
In a few places, we treat `PyInterpreterState.threads.head` (AKA `PyInterpreterState_ThreadHead()`) for subinterpreters as an unused thread state that we can use in a one-off manner:
* `_PyInterpreterState_IDDecref()` (Python/pystate.c)
* `_call_in_interpreter()` (Python/pystate.c) - fixed in 3.13+ (gh-109556)
* `_run_script_in_interpreter()` (Modules/_xxsubinterpreter.c)
* `interp_destroy()` (Modules/_xxsubinterpreter.c)
The problem is that each thread state should correspond to a single OS thread, and for each OS thread there should be at most one thread state per interpreter. Using `PyInterpreterState.threads.head` like that violates those assumptions, which can cause problems. [^1]
[^1]: Currently, this mostly doesn't affect the main interpreter. Ideally, the main interpreter wouldn't actually be treated specially. See gh-109857.
Also, some code assumes `PyInterpreterState.threads.head` is the current thread state, when it might not be:
* `interpreter_clear()` (Python/pystate.c)
* ...
We should fix this by always using the thread state that corresponds to the current thread (and create one if missing). For efficiency, we can use a thread-local (via `PyThread_tss_*`) to store each interpreter's thread state for that thread.
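A Python-level analogue of the proposed fix (illustrative names only, not the actual C API): keep at most one "thread state" per OS thread, lazily created and fetched from a thread-local instead of reusing `threads.head`:

```python
import threading

class PerThreadState:
    # Sketch: one state object per OS thread, created on first use,
    # mirroring the "use a thread-local via PyThread_tss_*" proposal.
    def __init__(self):
        self._local = threading.local()

    def current(self):
        ts = getattr(self._local, "tstate", None)
        if ts is None:                 # create one if missing
            ts = object()
            self._local.tstate = ts
        return ts

states = PerThreadState()
a = states.current()
assert states.current() is a           # same thread -> same state

seen = []
t = threading.Thread(target=lambda: seen.append(states.current()))
t.start(); t.join()
assert seen[0] is not a                # other thread -> distinct state
```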
<!-- gh-linked-prs -->
### Linked PRs
* gh-110245
* gh-110709
<!-- /gh-linked-prs -->
| f5198b09e16bca1886f8245fa88203d07d51ec11 | 4227bfa8b273207a2b882f7d69c8ac49c3d2b57d |
python/cpython | python__cpython-110016 | # Python "zipfile" can't detect "quoted-overlap" zipbomb that can be used as a DoS attack
# Bug report
### Bug description:
Just found this vulnerability in the latest Python 3.11.5 (and previous 3.10.10).
If we craft a zipbomb using the "quoted-overlap" technique (as described at https://www.bamsoftware.com/hacks/zipbomb/), it can't be detected by Python's zipfile module, and the zip will be extracted, thus potentially causing a DoS attack by consuming all the storage.
This issue is related to CVE-2019-9674 but not the same. CVE-2019-9674 is about the "normal" overlap zipbomb, which is a "full" overlap; that can already be detected by newer versions of Python's zipfile. However, when we craft a "quoted-overlap" zip, as described at https://www.bamsoftware.com/hacks/zipbomb/, Python can't detect it and happily starts to extract.
For example, the following is the python to extract a zip file, 116 KB before extraction, goes to as large as 17GB after extraction. The size after extraction can be easily increased to multi TBs or even PBs by adjusting the zip-creation.
```python
import zipfile
import sys
import os

def extract_zip(zip_path):
    """
    Extracts the contents of a ZIP file to the current directory.
    :param zip_path: Path to the ZIP file
    """
    if not os.path.exists(zip_path):
        print(f"Error: {zip_path} does not exist.")
        return
    with zipfile.ZipFile(zip_path, 'r') as zip_ref:
        zip_ref.extractall()
    print(f"Extracted contents of {zip_path} to the current directory.")

if __name__ == "__main__":
    if len(sys.argv) != 2:
        print("Usage: python extract_zip.py <path_to_zip_file>")
        sys.exit(1)
    zip_file_path = sys.argv[1]
    extract_zip(zip_file_path)
```
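One possible shape for a mitigation (a rough heuristic sketch, not the check the zipfile module actually performs) is to scan the central directory for members whose data ranges overlap:

```python
import io
import zipfile

def overlapping_members(f):
    # Heuristic: sort members by local-header offset and flag any member
    # that starts before the previous one's (rough) end. This ignores the
    # local-header length, so it is an illustration only.
    with zipfile.ZipFile(f) as zf:
        spans = sorted(
            (i.header_offset, i.header_offset + i.compress_size, i.filename)
            for i in zf.infolist()
        )
    return [
        (n1, n2)
        for (s1, e1, n1), (s2, e2, n2) in zip(spans, spans[1:])
        if s2 < e1
    ]

# A normally-built archive has no overlapping members:
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("a.txt", "hello")
    zf.writestr("b.txt", "world")
assert overlapping_members(buf) == []
```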
### CPython versions tested on:
3.11
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-110016
* gh-113912
* gh-113913
* gh-113914
* gh-113915
* gh-113916
* gh-113918
<!-- /gh-linked-prs -->
| 66363b9a7b9fe7c99eba3a185b74c5fdbf842eba | 183b97bb9db075197153ad82b8ffdfce8e913250 |
python/cpython | python__cpython-109850 | # test_rot13_func in test_codecs is not independent
# Bug report
### Bug description:
Running separate test `test_rot13_func` in `test_codecs` fails:
```
$ ./python -m test -v test_codecs -m test_rot13_func
...
Traceback (most recent call last):
File "/home/serhiy/py/cpython/Lib/test/test_codecs.py", line 3572, in test_rot13_func
encodings.rot_13.rot13(infile, outfile)
^^^^^^^^^^^^^^^^
AttributeError: module 'encodings' has no attribute 'rot_13'
```
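The failure is an import-side-effect dependency: `encodings.rot_13` only exists as an attribute of the `encodings` package once the submodule has been imported, which other tests in the suite normally trigger first. A minimal illustration (the explicit import is the shape of an independent fix):

```python
import codecs
import encodings

# Importing a submodule binds it on its parent package; when the test
# runs alone, nothing has triggered this import yet.
import encodings.rot_13
assert hasattr(encodings, "rot_13")

# The codec itself is available regardless, via the codecs machinery:
assert codecs.encode("Hello", "rot_13") == "Uryyb"
```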
### CPython versions tested on:
3.11, 3.12, CPython main branch
### Operating systems tested on:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-109850
* gh-110504
* gh-110505
<!-- /gh-linked-prs -->
| b987fdb19b981ef6e7f71b41790b5ed4e2064646 | 1aad4fc5dba993899621de86ae5955883448d6f6 |
python/cpython | python__cpython-109912 | # test_ftplib: test_storlines() fails randomly
s390x RHEL7 Refleaks 3.x:
```
0:00:19 load avg: 3.25 [ 17/463/1] test_ftplib failed (1 failure)
beginning 6 repetitions
123456
..test test_ftplib failed -- Traceback (most recent call last):
File "/home/dje/cpython-buildarea/3.x.edelsohn-rhel-z.refleak/build/Lib/test/test_ftplib.py", line 634, in test_storlines
self.check_data(self.server.handler_instance.last_received_data, RETR_DATA)
File "/home/dje/cpython-buildarea/3.x.edelsohn-rhel-z.refleak/build/Lib/test/test_ftplib.py", line 505, in check_data
self.assertEqual(len(received), len(expected))
AssertionError: 12019 != 12018
```
build: https://buildbot.python.org/all/#/builders/129/builds/890
I can reproduce the failure locally on Linux with command:
```
vstinner@mona$ ./python -m test test_ftplib -m test_storlines -u all -j10 --forever -R 3:3
(...)
0:00:10 load avg: 24.24 [ 11/1] test_ftplib failed (1 failure)
beginning 6 repetitions
123456
..test test_ftplib failed -- Traceback (most recent call last):
File "/home/vstinner/python/main/Lib/test/test_ftplib.py", line 634, in test_storlines
self.check_data(self.server.handler_instance.last_received_data, RETR_DATA)
File "/home/vstinner/python/main/Lib/test/test_ftplib.py", line 505, in check_data
self.assertEqual(len(received), len(expected))
AssertionError: 12019 != 12018
```
<!-- gh-linked-prs -->
### Linked PRs
* gh-109912
* gh-109919
* gh-109920
<!-- /gh-linked-prs -->
| 2ef2fffe3be953b91852585c75188d5475b09474 | b1e4f6e83e8916005caa3f751f25fb58cccbf812 |
python/cpython | python__cpython-110428 | # test_multiprocessing_spawn.test_processes: test_waitfor_timeout() failed on GHA Windows x64
GHA Windows x64:
```
FAIL: test_waitfor_timeout (test.test_multiprocessing_spawn.test_processes.WithProcessesTestCondition.test_waitfor_timeout)
----------------------------------------------------------------------
Traceback (most recent call last):
File "D:\a\cpython\cpython\Lib\test\_test_multiprocessing.py", line 1683, in test_waitfor_timeout
self.assertTrue(success.value)
AssertionError: 0 is not true
```
build: https://github.com/python/cpython/actions/runs/6299237551/job/17100847177?pr=109828
<!-- gh-linked-prs -->
### Linked PRs
* gh-110428
* gh-110430
* gh-110431
<!-- /gh-linked-prs -->
| 5eae8dc2cb832af6ae1ee340fb0194107fe3bd6e | 0db2f1475e6539e1954e1f8bd53e005c3ecd6a26 |
python/cpython | python__cpython-109834 | # test_asyncio.test_waitfor: test_wait_for() failed with "AssertionError: 0.547 not less than 0.5" on GHA Windows x64
GHA Windows x64:
```
FAIL: test_wait_for (test.test_asyncio.test_waitfor.AsyncioWaitForTest.test_wait_for)
----------------------------------------------------------------------
Traceback (most recent call last):
File "D:\a\cpython\cpython\Lib\unittest\async_case.py", line 90, in _callTestMethod
if self._callMaybeAsync(method) is not None:
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\a\cpython\cpython\Lib\unittest\async_case.py", line 112, in _callMaybeAsync
return self._asyncioRunner.run(
^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\a\cpython\cpython\Lib\asyncio\runners.py", line 118, in run
return self._loop.run_until_complete(task)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\a\cpython\cpython\Lib\asyncio\base_events.py", line 664, in run_until_complete
return future.result()
^^^^^^^^^^^^^^^
File "D:\a\cpython\cpython\Lib\test\test_asyncio\test_waitfor.py", line 147, in test_wait_for
self.assertLess(t1 - t0, 0.5)
AssertionError: 0.5470000000000255 not less than 0.5
```
build: https://github.com/python/cpython/actions/runs/6299237551/job/17099620804?pr=109828
<!-- gh-linked-prs -->
### Linked PRs
* gh-109834
* gh-109837
* gh-109838
<!-- /gh-linked-prs -->
| f29bc9c9a0a6794c6b8a9e84a7ba9237b427a10a | bccc1b78002c924e8f4121fea5de7df5eb127548 |
python/cpython | python__cpython-109887 | # test_concurrent_futures.test_deadlock fails with env changed: sys.stderr was modified on s390x RHEL8 Refleaks 3.x
s390x RHEL8 Refleaks 3.x:
```
Warning -- sys.stderr was modified by test.test_concurrent_futures.test_deadlock
Warning -- Before: <_io.TextIOWrapper name='<stderr>' mode='w' encoding='utf-8'>
Warning -- After: <_io.StringIO object at 0x3ff971b6410>
```
build: https://buildbot.python.org/all/#/builders/75/builds/895
<!-- gh-linked-prs -->
### Linked PRs
* gh-109887
* gh-109892
* gh-109893
<!-- /gh-linked-prs -->
| 2897142d2ec0930a8991af964c798b68fb6dcadd | 8ac2085b80eca4d9b2a1093d0a7da020fd12e11a |
python/cpython | python__cpython-109839 | # basicblock_addop Assertion: while-loop with if-expression
# Bug report
### Bug description:
```python
while x:
    0 if 1 else 0  # 1 cannot be 0 to reproduce this bug
```
output (Python 3.12.0rc3+):
```python
python: Python/flowgraph.c:114: basicblock_addop: Assertion `0 <= oparg && oparg < (1 << 30)' failed.
```
I tested this with the current 3.12 branch (6f1d4552b3)
@iritkatriel I think this is again something like #109719 or #109627
### CPython versions tested on:
3.12
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-109839
* gh-109865
<!-- /gh-linked-prs -->
| d73c12b88c2275fd44e27c91c24f3ac85419d2b8 | 88a6137cdb81c80440d9d1ee7dee17ea0b820f11 |
python/cpython | python__cpython-109819 | # `reprlib.recursive_repr` does not copy `__type_params__`
# Bug report
Repro:
```python
>>> from reprlib import recursive_repr
>>>
>>> class My:
...     @recursive_repr()
...     def __repr__[T](self, converter: T | None = None): ...
...
>>> My().__repr__.__type_params__
()
```
This happens because `recursive_repr` does not use `@wraps`, but reinvents it: https://github.com/python/cpython/blob/f2eaa92b0cc5a37a9e6010c7c6f5ad1a230ea49b/Lib/reprlib.py#L26-L33
And `__type_params__` was added in https://github.com/python/cpython/issues/104600
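A hedged sketch of the fix (illustrative helper name, not the actual patch): copy the same attributes reprlib copies by hand, plus `__type_params__`, using a default so it also works on versions without PEP 695:

```python
def copy_wrapper_metadata(wrapper, user_function):
    # Mirrors the hand-written copying in reprlib.recursive_repr, with
    # the missing __type_params__ line added (default () keeps this
    # working on interpreters predating PEP 695).
    wrapper.__module__ = getattr(user_function, '__module__')
    wrapper.__doc__ = getattr(user_function, '__doc__')
    wrapper.__name__ = getattr(user_function, '__name__')
    wrapper.__qualname__ = getattr(user_function, '__qualname__')
    wrapper.__type_params__ = getattr(user_function, '__type_params__', ())
    wrapper.__wrapped__ = user_function
    return wrapper

def original(self):
    "the real repr"

def wrapper(self):
    return original(self)

copy_wrapper_metadata(wrapper, original)
assert wrapper.__doc__ == "the real repr"
assert wrapper.__type_params__ == ()
assert wrapper.__wrapped__ is original
```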
<!-- gh-linked-prs -->
### Linked PRs
* gh-109819
* gh-109999
<!-- /gh-linked-prs -->
| f65f9e80fe741c894582a3e413d4e3318c1ed626 | 5bb6f0fcba663e1006f9063d1027ce8bd9f8effb |
python/cpython | python__cpython-109813 | # Tricky phrasing in the `collections.Counter` docs
# Documentation
The [collections.Counter documentation](https://docs.python.org/3/library/collections.html#collections.Counter) states:
`c.items() # convert to a list of (elem, cnt) pairs`
which seems to have been correct in Python 2 but is no longer "technically" correct
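For context, the Python 3 behavior the report is about (a quick check; in Python 3 `.items()` returns a dict view, so the "convert to a list" in the comment needs an explicit `list()` call):

```python
from collections import Counter

c = Counter('abracadabra')
# .items() is a view, not a list, in Python 3:
assert not isinstance(c.items(), list)
pairs = list(c.items())
assert ('a', 5) in pairs and ('b', 2) in pairs
```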
<!-- gh-linked-prs -->
### Linked PRs
* gh-109813
<!-- /gh-linked-prs -->
| 99fba5f156386cf8f4a71321708690cdb9357ffc | f65f9e80fe741c894582a3e413d4e3318c1ed626 |
python/cpython | python__cpython-127345 | # datetime error message is different between _pydatetime.py and _datetimemodule.c
# Bug report
### Bug description:
```python
import _datetime, _pydatetime
_datetime.date(1, 1, 50)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ValueError: day is out of range for month
_pydatetime.date(1, 1, 50)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/python3.12/_pydatetime.py", line 960, in __new__
year, month, day = _check_date_fields(year, month, day)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.12/_pydatetime.py", line 535, in _check_date_fields
raise ValueError('day must be in 1..%d' % dim, day)
ValueError: ('day must be in 1..31', 50)
```
The error message differs between the two implementations of datetime. This came up when testing PyPy, which uses the pure-python datetime implementation. xref https://github.com/conda-forge/rtoml-feedstock/pull/1#issuecomment-1732509648
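A portable way to observe the rejection without depending on either message text (sketch; code that needs to work across both implementations should match on the exception type, not the wording):

```python
import datetime

# Both implementations reject day 50 for January; only the message differs.
try:
    datetime.date(1, 1, 50)
except ValueError as e:
    message = str(e)
else:
    raise AssertionError("expected ValueError")

# The only safe common ground between the two messages:
assert 'day' in message
```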
### CPython versions tested on:
3.12
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-127345
<!-- /gh-linked-prs -->
| 3e222e3a15959690a41847a1177ac424427815e5 | 57f45ee2d8ee23c2a1d1daba4095a5a044169419 |
python/cpython | python__cpython-109808 | # Starting new thread during finalization leads to call to `PyMem_Free` without holding the GIL
# Crash report
### What happened?
Since #109135, `thread_run` checks whether it is called during finalization, and in that case it frees the thread bootstate in `thread_bootstate_free` and returns. The problem is that `PyMem_Free` shouldn't be called if the GIL is not held.
Repro and error message (just in case):
```python
import os
import random
import signal
import subprocess
import sys
import time
script = """
import threading
while True:
    t = threading.Thread()
    t.start()
    t.join()
"""
while True:
    p = subprocess.Popen([sys.executable, '-c', script], stderr=subprocess.PIPE)
    time.sleep(random.random())
    os.kill(p.pid, signal.SIGINT)
    _, err = p.communicate()
    if p.returncode == -signal.SIGABRT:
        print(err.decode('utf-8'))
        break
```
```
Traceback (most recent call last):
File "<string>", line 5, in <module>
File "/home/radislav/projects/cpython/Lib/threading.py", line 983, in start
self._started.wait()
File "/home/radislav/projects/cpython/Lib/threading.py", line 641, in wait
signaled = self._cond.wait(timeout)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/radislav/projects/cpython/Lib/threading.py", line 341, in wait
waiter.acquire()
KeyboardInterrupt
Fatal Python error: _PyMem_DebugFree: Python memory allocator called without holding the GIL
Python runtime state: finalizing (tstate=0x0000561b61b67338)
Thread 0x00007f8f890b8280 (most recent call first):
<no Python frame>
```
cc @vstinner
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Linux
### Output from running 'python -VV' on the command line:
Python 3.13.0a0 (heads/main:3e8fcb7df7, Sep 23 2023, 01:43:45) [GCC 10.2.1 20210110]
<!-- gh-linked-prs -->
### Linked PRs
* gh-109808
* gh-109852
<!-- /gh-linked-prs -->
| 1b8f2366b38c87b0450d9c15bdfdd4c4a2fc3a01 | f19416534a546460fdf6a0739b70e44d1950a073 |
python/cpython | python__cpython-109794 | # PyThreadState_Swap() During Finalization Causes Immediate Exit (AKA Daemon Threads Are Still the Worst!)
# Bug report
tl;dr Switching between interpreters while finalizing causes the main thread to exit. The fix should be simple.
We use `PyThreadState_Swap()` to switch between interpreters. That function almost immediately calls `_PyEval_AcquireLock()`. During finalization, `_PyEval_AcquireLock()` immediately causes the thread to exit if the current thread state doesn't match the one that was active when `Py_FinalizeEx()` was called.
Thus, if we switch interpreters during finalization then the thread will exit. If we do this in the finalizing (main) thread then the process immediately exits with an exit code of 0.
One notable consequence is that a Python process with an unhandled exception will print the traceback like normal but can end up with an exit code of 0 instead of 1 (and some of the runtime finalization code never gets executed). [^1]
[^1]: This may help explain why, when we re-run some tests in subprocesses, they aren't marked as failures even when they actually fail.
## Reproducer
```shell
$ cat > script.py << EOF
import _xxsubinterpreters as _interpreters
interpid = _interpreters.create()
raise Exception
EOF
$ ./python script.py
Traceback (most recent call last):
File ".../check-swapped-exitcode.py", line 3, in <module>
raise Exception
Exception
$ echo $?
0
```
In this case, "interpid" is a `PyInterpreterIDObject` bound to the `__main__` module (of the main interpreter). It is still bound there when the script ends and the executable starts finalizing the runtime by calling `Py_FinalizeEx()`. [^2]
[^2]: Note that we did not create any extra threads; we stayed exclusively in the main thread. We also didn't even run any code in the subinterpreter.
Here's what happens in `Py_FinalizeEx()`:
1. wait for non-daemon threads to finish [^3]
2. run any remaining pending calls belonging to the main interpreter
3. run at exit hooks
4. mark the runtime as finalizing (storing the pointer to the current tstate, which belongs to the main interpreter)
5. delete all other tstates belonging to the main interpreter (i.e. all daemon threads)
6. remove our custom signal handlers
7. finalize the import state
8. clean up `sys.modules` of the main interpreter (`finalize_modules()` in Python/pylifecycle.c)
[^3]: FYI, IIRC we used to abort right before this point if there were any subinterpreters around still.
At that point, the following happens:
1. the `__main__` module is dealloc'ed
2. "interpid" is dealloc'ed (`PyInterpreterID_Type.tp_dealloc`)
3. `_PyInterpreterState_IDDecref()` is called, which finalizes the corresponding interpreter state
4. before `Py_EndInterpreter()` is called, we call `_PyThreadState_Swap()` to switch to a tstate belonging to the subinterpreter
5. that calls `_PyEval_AcquireLock()`
6. that basically calls `_PyThreadState_MustExit()`, which sees that the current tstate pointer isn't the one we stored as "finalizing"
7. it then calls `PyThread_exit_thread()`, which kills the main thread
8. the process exits with an exitcode of 0
Notably, the rest of `Py_FinalizeEx()` (and `Py_Main()`, etc.) does *not* execute. `main()` never gets a chance to return an exitcode of 1.
## Background
Runtime finalization happens in whichever thread called `Py_FinalizeEx()` and happens relative to whichever `PyThreadState` is active there. This is typically the main thread and the main interpreter.
Other threads may still be running when we start finalization, whether daemon threads or not, and each of those threads has a thread state corresponding to the interpreter that is active in that thread. [^4] One of the first things we do during finalization is to wait for all non-daemon threads to finish running. Daemon threads are a different story. They must die!
[^4]: In any given OS thread, each interpreter has a distinct tstate. Each tstate (mostly) corresponds to exactly one OS thread.
Back in 2011 we identified that daemon threads were interfering with finalization, sometimes causing crashes or making the Python executable hang. [^5] At the time, we applied a best-effort solution where we kill the current thread if it isn't the one where `Py_FinalizeEx()` was called.
[^5]: If a daemon thread keeps running and tries to access any objects or other runtime state then there's a decent chance of a crash.
However, that solution checked the tstate pointer rather than the thread ID, so swapping interpreters in the finalizing thread was broken, and here we are.
History:
* gh-46164 (2011; commit 0d5e52d3469) - exit thread during finalization in `PyEval_RestoreThread()` (also add `_Py_Finalizing`)
* gh-??? (2014; commit 17548dda51d) - do same in `_PyEval_EvalFrameDefault()` (eval loop, right after re-acquiring GIL when handling eval breaker)
* gh-80656 (2019; PR: gh-12667) - do same in `PyEval_AcquireLock()` and `PyEval_AcquireThread()` (also add `exit_thread_if_finalizing()`)
* gh-84058 (2020; PR: gh-18811) - use `_PyRuntime` directly
* gh-84058 (2020; PR: gh-18885) - move all the checks to `take_gil()`
Related: gh-87135 (PRs: gh-105805, gh-28525)
<!-- gh-linked-prs -->
### Linked PRs
* gh-109794
* gh-110705
<!-- /gh-linked-prs -->
| 6364873d2abe0973e21af7c8c7dddbb5f8dc1e85 | 9c73a9acec095c05a178e7dff638f7d9769318f3 |
python/cpython | python__cpython-109788 | # Re-entering pairwise.__next__() leaks references
Re-entering the `__next__()` method of itertools.pairwise leaks references, because *old* holds a borrowed reference when the `__next__()` method of the underlying iterator is called. It may potentially lead to use of freed memory and a crash, but I only have a reproducer for leaks.
Even if the re-entrant call is not very meaningful, it should count references correctly. There are several ways to fix this issue. I chose the simplest one, which matches the current results (only without leaks) in most simple tests, even if there are no other reasons to prefer these results.
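For reference, a pure-Python equivalent (illustrative, not the actual C code) makes the lifetime of the previous item explicit — `a` here plays the role of *old*, and it must stay alive across the `next()` call on the underlying iterator:

```python
def pairwise_ref(iterable):
    # `a` (the previous item) is kept referenced across the next() call;
    # the C implementation held only a borrowed reference at this point.
    it = iter(iterable)
    try:
        a = next(it)
    except StopIteration:
        return
    for b in it:
        yield a, b
        a = b

assert list(pairwise_ref('ABCD')) == [('A', 'B'), ('B', 'C'), ('C', 'D')]
assert list(pairwise_ref('A')) == []
```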
cc @rhettinger
<!-- gh-linked-prs -->
### Linked PRs
* gh-109788
* gh-112699
* gh-112700
<!-- /gh-linked-prs -->
| 6ca9d3e0173c38e2eac50367b187d4c1d43f9892 | c74e9fb18917ceb287c3ed5be5d0c2a16a646a99 |
python/cpython | python__cpython-109790 | # Windows os.path.isdir has different signature
# Bug report
### Bug description:
In this PR #101324 and issue #101196 the optimized methods of `isdir` and `isfile` were added for windows. If not available they will fall back to `genericpath` implementations.
The issue with `isdir` is that this introduced a different signature compared to `genericpath`:
```py
# in genericpath.py
def isdir(s) -> bool: ...
# in optimized nt module
def isdir(path) -> bool: ...
```
I'm not sure if this was intentional to have different signatures depending on the platform, and there's any possibility to fix this now. Ref: https://github.com/python/typeshed/pull/10751
### CPython versions tested on:
3.12
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-109790
* gh-110233
<!-- /gh-linked-prs -->
| 7df8b16d28d2418161cef49814b6aca9fb70788d | 3814bc17230df4cd3bc4d8e2ce0ad36470fba269 |
python/cpython | python__cpython-109872 | # test_venv: test_zippath_from_non_installed_posix() failed on aarch64 RHEL8 Refleaks 3.x
``cp874.cpython-313.pyc.281473310212544`` looks like a temporary PYC filename used by importlib to create PYC files in atomic way.
The problem seems to be that tests are run in parallel and that files can appear or disappear while test_venv is running (since other tests are running at the same time).
I'm not sure why test_zippath_from_non_installed_posix() wants to eagerly copy ``__pycache__/`` directories. Is it important to copy PYC files for a "zipapp"?
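If copying the bytecode caches is indeed unnecessary, one possible mitigation (sketched here with made-up paths) is to exclude them with ``copytree()``'s ``ignore`` hook, which sidesteps the race against concurrent bytecode writes:

```python
import os
import shutil
import tempfile

# Build a small source tree containing a volatile __pycache__ directory.
src = tempfile.mkdtemp()
os.makedirs(os.path.join(src, "__pycache__"))
with open(os.path.join(src, "mod.py"), "w") as f:
    f.write("x = 1\n")

dst = os.path.join(tempfile.mkdtemp(), "copy")
# Skipping __pycache__ avoids racing against importlib's atomic PYC renames.
shutil.copytree(src, dst, ignore=shutil.ignore_patterns("__pycache__"))
print(sorted(os.listdir(dst)))  # ['mod.py']
```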
aarch64 RHEL8 Refleaks 3.x:
```
0:00:14 load avg: 8.78 [ 68/463/1] test_venv failed (1 error)
beginning 6 repetitions
123456
Could not find platform dependent libraries <exec_prefix>
.Could not find platform dependent libraries <exec_prefix>
.test test_venv failed -- Traceback (most recent call last):
File "/home/buildbot/buildarea/3.x.cstratak-RHEL8-aarch64.refleak/build/Lib/test/test_venv.py", line 578, in test_zippath_from_non_installed_posix
shutil.copytree(fn, os.path.join(libdir, name))
File "/home/buildbot/buildarea/3.x.cstratak-RHEL8-aarch64.refleak/build/Lib/shutil.py", line 588, in copytree
return _copytree(entries=entries, src=src, dst=dst, symlinks=symlinks,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/buildbot/buildarea/3.x.cstratak-RHEL8-aarch64.refleak/build/Lib/shutil.py", line 542, in _copytree
raise Error(errors)
shutil.Error: [('/home/buildbot/buildarea/3.x.cstratak-RHEL8-aarch64.refleak/build/Lib/encodings/__pycache__/cp874.cpython-313.pyc.281473310212544', '/tmp/test_python_fw_naiq7/tmpfycz07rf/lib/python3.13/encodings/__pycache__/cp874.cpython-313.pyc.281473310212544', "[Errno 2] No such file or directory: '/home/buildbot/buildarea/3.x.cstratak-RHEL8-aarch64.refleak/build/Lib/encodings/__pycache__/cp874.cpython-313.pyc.281473310212544'")]
```
build: https://buildbot.python.org/all/#/builders/551/builds/865
By the way, ``Could not find platform dependent libraries <exec_prefix>`` message is surprising.
<!-- gh-linked-prs -->
### Linked PRs
* gh-109872
* gh-109873
* gh-109874
* gh-110149
* gh-110153
<!-- /gh-linked-prs -->
| 25bb266fc876b344e31e0b5634a4db94912c1aba | d73c12b88c2275fd44e27c91c24f3ac85419d2b8 |
python/cpython | python__cpython-109859 | # re: undocumented exception is raised
# Bug report
### Bug description:
The `re` module's documentation says it only raises the `re.error` exception, but the regex `"\x00(?<!\x00{2147483648})"` causes a RuntimeError:
```python
Python 3.11.5 (main, Sep 20 2023, 10:46:56) [GCC 12.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import re
>>> re.compile("\x00(?<!\x00{2147483647})")
re.compile('\x00(?<!\x00{2147483647})')
>>>
>>>
>>> re.compile("\x00(?<!\x00{2147483648})")
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python3.11/re/__init__.py", line 227, in compile
return _compile(pattern, flags)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/re/__init__.py", line 294, in _compile
p = _compiler.compile(pattern, flags)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/re/_compiler.py", line 759, in compile
return _sre.compile(
^^^^^^^^^^^^^
RuntimeError: invalid SRE code
```
Other `re` methods, such as `match` or `split`, show the same result.
For brevity:
`"\x00(?<!\x00{2147483648})"` -> RuntimeError
`"\x00(?<!\x00{2147483647})"` -> no errors
I have found this with libFuzzer by testing the `fuzz_sre_compile` binary.
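A hedged sketch that exercises the pattern while tolerating both the pre-fix and post-fix behavior (the exact exception type depends on the interpreter version):

```python
import re

# Pre-fix interpreters raise the undocumented RuntimeError("invalid SRE code");
# fixed interpreters report the oversized lookbehind via re.error instead.
try:
    re.compile("\x00(?<!\x00{2147483648})")
    outcome = "compiled"
except re.error:
    outcome = "re.error"
except (RuntimeError, OverflowError) as exc:
    outcome = type(exc).__name__

print(outcome)
```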
### CPython versions tested on:
3.11
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-109859
* gh-110859
* gh-110860
<!-- /gh-linked-prs -->
| e2b3d831fd2824d8a5713e3ed2a64aad0fb6b62d | ca0f3d858d069231ce7c5b382790a774f385b467 |
python/cpython | python__cpython-109761 | # Crash at finalization after fail to start new thread
# Crash report
### What happened?
Bisected to e11fc032a75d067d2167a21037722a770b9dfb51, but I guess this issue existed earlier, and the assertion that was added to `PyThreadState_Clear` by this commit just made it visible.
```python
import resource
import threading
# this isn't essential, but helps PyThread_start_new_thread() fail
resource.setrlimit(resource.RLIMIT_NPROC, (150, 150))
while True:
t = threading.Thread()
t.start()
t.join()
```
Error message with backtrace:
```c++
Traceback (most recent call last):
File "/home/radislav/projects/cpython/thread_repro.py", line 10, in <module>
t.start()
File "/home/radislav/projects/cpython/Lib/threading.py", line 978, in start
_start_new_thread(self._bootstrap, ())
RuntimeError: can't start new thread
python: Python/pystate.c:1484: PyThreadState_Clear: Assertion `tstate->_status.initialized && !tstate->_status.cleared' failed.
Program received signal SIGABRT, Aborted.
__GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50
50 ../sysdeps/unix/sysv/linux/raise.c: No such file or directory.
(gdb) bt
#0 __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50
#1 0x00007ffff7c87537 in __GI_abort () at abort.c:79
#2 0x00007ffff7c8740f in __assert_fail_base (fmt=0x7ffff7dfe688 "%s%s%s:%u: %s%sAssertion `%s' failed.\n%n",
assertion=0x5555559c7be0 "tstate->_status.initialized && !tstate->_status.cleared", file=0x5555559c6e2f "Python/pystate.c", line=1484, function=<optimized out>)
at assert.c:92
#3 0x00007ffff7c96662 in __GI___assert_fail (assertion=assertion@entry=0x5555559c7be0 "tstate->_status.initialized && !tstate->_status.cleared",
file=file@entry=0x5555559c6e2f "Python/pystate.c", line=line@entry=1484, function=function@entry=0x5555559c83e0 <__PRETTY_FUNCTION__.47> "PyThreadState_Clear")
at assert.c:101
#4 0x000055555587075f in PyThreadState_Clear (tstate=tstate@entry=0x555555c4be00) at Python/pystate.c:1484
#5 0x0000555555871483 in _PyThreadState_DeleteExcept (tstate=tstate@entry=0x555555c03338 <_PyRuntime+508728>) at Python/pystate.c:1680
#6 0x000055555586afa4 in Py_FinalizeEx () at Python/pylifecycle.c:1831
#7 0x000055555589f6fc in Py_RunMain () at Modules/main.c:691
#8 0x000055555589f74b in pymain_main (args=args@entry=0x7fffffffe160) at Modules/main.c:719
#9 0x000055555589f7c0 in Py_BytesMain (argc=<optimized out>, argv=<optimized out>) at Modules/main.c:743
#10 0x00005555555cf74e in main (argc=<optimized out>, argv=<optimized out>) at ./Programs/python.c:15
```
When trying to start a new thread, Python creates a new thread state with a `_PyThreadState_New` call, adding this new state to the list of thread states for the current interpreter:
https://github.com/python/cpython/blob/c32abf1f21c4bd32abcefe4d601611b152568961/Modules/_threadmodule.c#L1191
If consequent call to `PyThread_start_new_thread` fails, this new state gets cleared, but remains in list:
https://github.com/python/cpython/blob/c32abf1f21c4bd32abcefe4d601611b152568961/Modules/_threadmodule.c#L1203-L1209
Then, at Python finalization, call to `_PyThreadState_DeleteExcept` attempts to clear this thread state again, which causes assertion failure:
https://github.com/python/cpython/blob/3e8fcb7df74248530c4280915c77e69811f69c3f/Python/pystate.c#L1674-L1682
cc @ericsnowcurrently
### CPython versions tested on:
3.12, CPython main branch
### Operating systems tested on:
Linux
### Output from running 'python -VV' on the command line:
Python 3.13.0a0 (heads/main:d4cea794a7, Sep 22 2023, 18:42:05) [GCC 10.2.1 20210110]
<!-- gh-linked-prs -->
### Linked PRs
* gh-109761
* gh-127171
* gh-127173
* gh-127299
* gh-127323
* gh-127324
<!-- /gh-linked-prs -->
| ca3ea9ad05c3d876a58463595e5b4228fda06936 | a264637654f9d3ac3c140e66fd56ee32faf22431 |
python/cpython | python__cpython-109922 | # Include "t" in ABI tag for `--disable-gil` builds
# Feature or enhancement
Context:
* [PEP 703 (Build Configuration Changes)](https://peps.python.org/pep-0703/#build-configuration-changes)
* [PEP 3149 – ABI version tagged .so files](https://peps.python.org/pep-3149/#proposal)
The `--disable-gil` builds will have a different *version specific* ABI from the default CPython 3.13 builds due to reference counting differences and other changes. This should be indicated by a "t" (for "threading") in the ABI tag.
On POSIX, this is achieved by setting the `ABIFLAGS` variable, which feeds into the sysconfig `EXT_SUFFIX` and `SOABI` variables. The version specific shared libraries would look like:
* `package.cpython-313t-darwin.so`
On Windows, this is achieved by setting [`PYD_TAGGED_SUFFIX`](https://github.com/python/cpython/blob/c32abf1f21c4bd32abcefe4d601611b152568961/Python/dynload_win.c#L18-L22), which feeds into the sysconfig `EXT_SUFFIX`. The shared libraries using the version specific ABI would look like:
* `_package.cp313t-win_amd64.pyd`
Note that this does not address and is independent of the stable ABI. There's ongoing [discussion](https://discuss.python.org/t/python-abis-and-pep-703/34018) about how best to support the stable ABI in `--disable-gil` builds.
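For reference, the relevant values can be inspected at runtime with `sysconfig`; on a `--disable-gil` build `ABIFLAGS` would be expected to contain `"t"`, which then propagates into `SOABI` and `EXT_SUFFIX` (shown as a sketch, since the flag only appears on such builds):

```python
import sysconfig

# Empty (or "d" for debug builds) on default builds; contains "t" on
# free-threaded (--disable-gil) builds under this proposal.
abiflags = sysconfig.get_config_var("ABIFLAGS") or ""
print("free-threaded build:", "t" in abiflags)

# e.g. ".cpython-313t-darwin.so" on POSIX, ".cp313t-win_amd64.pyd" on Windows
print(sysconfig.get_config_var("EXT_SUFFIX"))
```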
cc @gpshead @brettcannon @vstinner
<!-- gh-linked-prs -->
### Linked PRs
* gh-109922
<!-- /gh-linked-prs -->
| 773614e03aef29c744d0300bd62fc8254e3c06b6 | b35f0843fc15486b17bc945dde08b306b8e4e81f |
python/cpython | python__cpython-109871 | # test_regrtest: test_huntrleaks() fails randomly: leaked [9, 1, 1] references
Failure on GHA Windows x86 CI:
```
FAIL: test_huntrleaks (test.test_regrtest.ArgsTestCase.test_huntrleaks)
----------------------------------------------------------------------
Traceback (most recent call last):
File "D:\a\cpython\cpython\Lib\test\test_regrtest.py", line 1068, in test_huntrleaks
self.check_huntrleaks(run_workers=False)
File "D:\a\cpython\cpython\Lib\test\test_regrtest.py", line 1065, in check_huntrleaks
self.check_leak(code, 'references', run_workers=run_workers)
File "D:\a\cpython\cpython\Lib\test\support\__init__.py", line 2564, in wrapper
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "D:\a\cpython\cpython\Lib\test\test_regrtest.py", line 1047, in check_leak
self.assertIn(line2, output)
AssertionError:
'test_regrtest_huntrleaks leaked [1, 1, 1] references, sum=3\n'
not found in
'0:00:00 Run 1 test sequentially\n
0:00:00 [1/1] test_regrtest_huntrleaks\n
beginning 6 repetitions\n
123456\n
......\n
test_regrtest_huntrleaks leaked [9, 1, 1] references, sum=11\n
test_regrtest_huntrleaks leaked [1, 1, 1] memory blocks, sum=3\n
test_regrtest_huntrleaks failed (reference leak)\n
\n
== Tests result: FAILURE ==\n
\n
1 test failed:\n
test_regrtest_huntrleaks\n
\n
Total duration: 292 ms\n
Total tests: run=1\n
Total test files: run=1/1 failed=1\n
Result: FAILURE\n'
```
build: https://github.com/python/cpython/actions/runs/6274556366/job/17041487300?pr=109727
<!-- gh-linked-prs -->
### Linked PRs
* gh-109871
<!-- /gh-linked-prs -->
| 4091deba88946841044b0a54090492a2fd903d42 | e5186c3de4194de3ea8c80edb182d786f5e20944 |
python/cpython | python__cpython-109727 | # Python fails to build on WASM: _testcapi/vectorcall_limited.c is built with Py_BUILD_CORE_BUILTIN
My PR #109690 broke WASM buildbots.
Example with wasm32-emscripten node (dynamic linking) 3.x: https://buildbot.python.org/all/#/builders/1056/builds/3142
```
/opt/emsdk/upstream/emscripten/emcc -DNDEBUG -g -O3 (...) -DPy_BUILD_CORE_BUILTIN -c ../../Modules/_testcapi/vectorcall_limited.c -o Modules/_testcapi/vectorcall_limited.o
In file included from ../../Modules/_testcapi/vectorcall_limited.c:2:
In file included from ../../Modules/_testcapi/parts.h:7:
In file included from ../../Include/Python.h:44:
../../Include/pyport.h:52:4: error: "Py_LIMITED_API is not compatible with Py_BUILD_CORE"
# error "Py_LIMITED_API is not compatible with Py_BUILD_CORE"
^
1 error generated.
```
I'm not sure how ``-DPy_BUILD_CORE_BUILTIN`` landed in the command building ``Modules/_testcapi/vectorcall_limited.c``.
I don't think it's correct that ``vectorcall_limited.c``, which tests the limited C API on purpose, is built with ``Py_BUILD_CORE``.
On my Linux machine, Makefile contains:
```
Modules/_testcapi/vectorcall_limited.o: $(srcdir)/Modules/_testcapi/vectorcall_limited.c $(MODULE__TESTCAPI_DEPS) $(MODULE_DEPS_SHARED) $(PYTHON_HEADERS); $(CC) $(MODULE__TESTCAPI_CFLAGS) $(PY_STDMODULE_CFLAGS) $(CCSHARED) -c $(srcdir)/Modules/_testcapi/vectorcall_limited.c -o Modules/_testcapi/vectorcall_limited.o
```
So it gets two groups of compiler flags:
* ``MODULE__TESTCAPI_CFLAGS``: not defined (empty)
* ``PY_STDMODULE_CFLAGS``: ``-fno-strict-overflow -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -g -Og -Wall -O0 -std=c11 -Werror=implicit-function-declaration -fvisibility=hidden -I./Include/internal -I. -I./Include``
I don't see ``-DPy_BUILD_CORE_BUILTIN`` here.
I can reproduce the issue locally with this custom ``Modules/Setup.local``:
```
*static*
_testcapi _testcapimodule.c _testcapi/vectorcall.c _testcapi/vectorcall_limited.c _testcapi/heaptype.c _testcapi/abstract.c _testcapi/unicode.c _testcapi/dict.c _testcapi/getargs.c _testcapi/datetime.c _testcapi/docstring.c _testcapi/mem.c _testcapi/watchers.c _testcapi/long.c _testcapi/float.c _testcapi/structmember.c _testcapi/exceptions.c _testcapi/code.c _testcapi/buffer.c _testcapi/pyatomic.c _testcapi/pyos.c _testcapi/immortal.c _testcapi/heaptype_relative.c _testcapi/gc.c
```
make fails with:
```
In file included from ./Include/Python.h:44,
from ./Modules/_testcapi/parts.h:7,
from ./Modules/_testcapi/vectorcall_limited.c:2:
./Include/pyport.h:52:4: error: #error "Py_LIMITED_API is not compatible with Py_BUILD_CORE"
52 | # error "Py_LIMITED_API is not compatible with Py_BUILD_CORE"
| ^~~~~
make: *** [Makefile:2982: Modules/_testcapi/vectorcall_limited.o] Error 1
make: *** Waiting for unfinished jobs....
In file included from ./Include/Python.h:44,
from ./Modules/_testcapi/parts.h:7,
from ./Modules/_testcapi/heaptype_relative.c:2:
./Include/pyport.h:52:4: error: #error "Py_LIMITED_API is not compatible with Py_BUILD_CORE"
52 | # error "Py_LIMITED_API is not compatible with Py_BUILD_CORE"
| ^~~~~
make: *** [Makefile:3001: Modules/_testcapi/heaptype_relative.o] Error 1
```
``Modules/makesetup`` uses ``$(PY_STDMODULE_CFLAGS)`` if ``$doconfig`` is no, but uses ``$(PY_BUILTIN_MODULE_CFLAGS)`` otherwise. In the second case, ``$(PY_BUILTIN_MODULE_CFLAGS)`` adds ``-DPy_BUILD_CORE_BUILTIN`` to the compiler command used to build the ``_testcapi`` C files.
<!-- gh-linked-prs -->
### Linked PRs
* gh-109727
* gh-109842
<!-- /gh-linked-prs -->
| 09a25616a908a028b6373f9ab372d86edf064282 | c32abf1f21c4bd32abcefe4d601611b152568961 |
python/cpython | python__cpython-109722 | # `_testinternalcapi` is imported without guards in some test modules
# Bug report
I found at least three cases where `_testinternalcapi` is imported unconditionally. We cannot do that, because other Python implementations which reuse our test cases will fail on it.
1. https://github.com/python/cpython/blob/34ddcc3fa118168901fa0d3a69b3b5444fc2f943/Lib/test/test_cmd_line.py#L785-L826
2. https://github.com/python/cpython/blob/34ddcc3fa118168901fa0d3a69b3b5444fc2f943/Lib/test/test_import/__init__.py#L25
3. https://github.com/python/cpython/blob/34ddcc3fa118168901fa0d3a69b3b5444fc2f943/Lib/test/test_opcache.py#L8
There can be several options to solve this:
1. Use `import _testinternalcapi` with `except ImportError`, when some places might need it
2. Use `import_helper.import_module` to skip some tests where this module is required
3. Use `@cpython_only` decorator
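A sketch of what the three options could look like in a test module (option 1 is shown runnable; options 2 and 3 depend on the `test.support` helpers and are shown as comments):

```python
# Option 1: guard the import and skip dependent tests at runtime.
try:
    import _testinternalcapi
except ImportError:
    _testinternalcapi = None  # tests needing it can check for None and skip

# Option 2 (inside a test module): let import_helper raise SkipTest:
#   from test.support import import_helper
#   _testinternalcapi = import_helper.import_module("_testinternalcapi")

# Option 3: mark whole test classes/functions as CPython-specific:
#   from test.support import cpython_only
#   @cpython_only
#   def test_something(self): ...

print(_testinternalcapi is not None)
```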
<!-- gh-linked-prs -->
### Linked PRs
* gh-109722
<!-- /gh-linked-prs -->
| 8ded34a1ff2d355e95213ab72493908f2ca25dd9 | 8a82bff12c8e6c6c204c8a48ee4993d908ec4b73 |
python/cpython | python__cpython-109734 | # basicblock_addop Assertion: while-else-loop with try
# Bug report
### Bug description:
```python
while name_5:
try:
break
except:
pass
else:
1 if 1 else 1
```
output (Python 3.12.0rc3+):
```python
python: Python/flowgraph.c:114: basicblock_addop: Assertion `0 <= oparg && oparg < (1 << 30)' failed.
```
@iritkatriel I think this looks similar to https://github.com/python/cpython/issues/109627
I tested this bug with faa8003 and the fix does not solve this issue.
### CPython versions tested on:
3.12
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-109734
* gh-109749
<!-- /gh-linked-prs -->
| 7c553991724d8d537f8444db73f016008753d77a | 73ccfa28c5e6ff68de15fdbb1321d4773a688e61 |
python/cpython | python__cpython-109710 | # test_asyncio.test_subprocess: test_stdin_broken_pipe() failed on GHA Windows x64 CI
```
test_stdin_broken_pipe (test.test_asyncio.test_subprocess.SubprocessProactorTests.test_stdin_broken_pipe) ... FAIL
FAIL: test_stdin_broken_pipe (test.test_asyncio.test_subprocess.SubprocessProactorTests.test_stdin_broken_pipe)
----------------------------------------------------------------------
Traceback (most recent call last):
File "D:\a\cpython\cpython\Lib\test\test_asyncio\test_subprocess.py", line 298, in test_stdin_broken_pipe
self.assertRaises((BrokenPipeError, ConnectionResetError),
AssertionError: (<class 'BrokenPipeError'>, <class 'ConnectionResetError'>) not raised by run_until_complete
```
build: https://github.com/python/cpython/actions/runs/6268408487/job/17024556425?pr=109701
cc @sorcio
<!-- gh-linked-prs -->
### Linked PRs
* gh-109710
* gh-109731
* gh-109735
<!-- /gh-linked-prs -->
| cbbdf2c1440c804adcfc32ea0470865b3b3b8eb2 | 46b63ced2564ad6c3d7b65e0ea1f04fd5c7d2959 |
python/cpython | python__cpython-109707 | # test_multiprocessing_fork: test_nested_startmethod() fails randomly.
test_nested_startmethod() fails randomly. Since there is no synchronization, the order in which items are put in the queue is not guaranteed.
I suggest either accepting [1, 2] and [2, 1] in the test, or adding some kind of synchronization to ensure that events happen in the expected order. Here I don't think that order matters.
The test was added by PR #108568 of issue gh-108520.
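As a sketch of the second suggestion, explicit synchronization makes the ordering deterministic (illustrated here with `threading.Event` rather than the test's actual multiprocessing setup, just to show the idea):

```python
import threading

results = []
done_first = threading.Event()

def worker(value, wait_for=None, signal=None):
    if wait_for is not None:
        wait_for.wait()        # block until the other worker has appended
    results.append(value)
    if signal is not None:
        signal.set()           # let the waiting worker proceed

# t2 appends first and then signals; t1 waits for that signal.
t2 = threading.Thread(target=worker, args=(2, None, done_first))
t1 = threading.Thread(target=worker, args=(1, done_first, None))
t1.start(); t2.start()
t1.join(); t2.join()
print(results)  # deterministically [2, 1]
```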
```
FAIL: test_nested_startmethod (test.test_multiprocessing_fork.test_misc.TestStartMethod.test_nested_startmethod)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/buildbot/buildarea/3.x.cstratak-RHEL7-ppc64le.lto/build/Lib/test/_test_multiprocessing.py", line 5475, in test_nested_startmethod
self.assertEqual(results, [2, 1])
AssertionError: Lists differ: [1, 2] != [2, 1]
First differing element 0:
1
2
- [1, 2]
+ [2, 1]
```
build: https://buildbot.python.org/all/#/builders/503/builds/3941
The failure can be reproduced by stressing the test:
```
$ ./python -m test test_multiprocessing_fork.test_misc -m test_nested_startmethod -v -j50 --forever
(...)
0:00:24 load avg: 20.03 [157/1] test_multiprocessing_fork.test_misc failed (1 failure)
test_nested_startmethod (test.test_multiprocessing_fork.test_misc.TestStartMethod.test_nested_startmethod) ... FAIL
======================================================================
FAIL: test_nested_startmethod (test.test_multiprocessing_fork.test_misc.TestStartMethod.test_nested_startmethod)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/vstinner/python/3.11/Lib/test/_test_multiprocessing.py", line 5399, in test_nested_startmethod
self.assertEqual(results, [2, 1])
AssertionError: Lists differ: [1, 2] != [2, 1]
First differing element 0:
1
2
- [1, 2]
+ [2, 1]
----------------------------------------------------------------------
Ran 1 test in 2.150s
FAILED (failures=1)
test test_multiprocessing_fork.test_misc failed
```
<!-- gh-linked-prs -->
### Linked PRs
* gh-109707
* gh-109762
* gh-109763
<!-- /gh-linked-prs -->
| b03a791497ff4b3c42805e06c73d08ac34087402 | d5611f280403d19befe4a3e505b037d286cf798e |
python/cpython | python__cpython-109703 | # test_concurrent_futures: test_deadlock failed with timeout on ARM Raspbian 3.x
Example:
```
FAIL: test_crash_during_func_exec_on_worker (test.test_concurrent_futures.test_deadlock.ProcessPoolForkserverExecutorDeadlockTest.test_crash_during_func_exec_on_worker)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/test/test_concurrent_futures/test_deadlock.py", line 128, in _check_crash
res.result(timeout=self.TIMEOUT)
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/concurrent/futures/_base.py", line 458, in result
raise TimeoutError()
TimeoutError
```
build: https://buildbot.python.org/all/#/builders/424/builds/4978
* FAIL: test_crash_during_func_exec_on_worker (test.test_concurrent_futures.test_deadlock.ProcessPoolForkserverExecutorDeadlockTest.test_crash_during_func_exec_on_worker)
* FAIL: test_crash_during_result_pickle_on_worker (test.test_concurrent_futures.test_deadlock.ProcessPoolForkserverExecutorDeadlockTest.test_crash_during_result_pickle_on_worker)
* FAIL: test_crash_at_task_unpickle (test.test_concurrent_futures.test_deadlock.ProcessPoolSpawnExecutorDeadlockTest.test_crash_at_task_unpickle)
* FAIL: test_crash_during_func_exec_on_worker (test.test_concurrent_futures.test_deadlock.ProcessPoolSpawnExecutorDeadlockTest.test_crash_during_func_exec_on_worker)
* FAIL: test_crash_during_result_pickle_on_worker (test.test_concurrent_futures.test_deadlock.ProcessPoolSpawnExecutorDeadlockTest.test_crash_during_result_pickle_on_worker)
These tests passed when re-run in verbose mode in a fresh process.
It's just a timeout issue because the machine was too busy.
Full logs:
<details>
```
======================================================================
FAIL: test_crash_during_func_exec_on_worker (test.test_concurrent_futures.test_deadlock.ProcessPoolForkserverExecutorDeadlockTest.test_crash_during_func_exec_on_worker)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/test/test_concurrent_futures/test_deadlock.py", line 128, in _check_crash
res.result(timeout=self.TIMEOUT)
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/concurrent/futures/_base.py", line 458, in result
raise TimeoutError()
TimeoutError
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/test/test_concurrent_futures/test_deadlock.py", line 154, in test_crash_during_func_exec_on_worker
self._check_crash(BrokenProcessPool, _crash)
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/test/test_concurrent_futures/test_deadlock.py", line 132, in _check_crash
self._fail_on_deadlock(executor)
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/test/test_concurrent_futures/test_deadlock.py", line 109, in _fail_on_deadlock
self.fail(f"Executor deadlock:\n\n{tb}")
AssertionError: Executor deadlock:
Thread 0xf5deb440 (most recent call first):
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/threading.py", line 341 in wait
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/multiprocessing/queues.py", line 231 in _feed
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/threading.py", line 996 in run
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/threading.py", line 1059 in _bootstrap_inner
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/threading.py", line 1016 in _bootstrap
Thread 0xf53ff440 (most recent call first):
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/selectors.py", line 398 in select
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/multiprocessing/connection.py", line 1136 in wait
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/concurrent/futures/process.py", line 414 in wait_result_broken_or_wakeup
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/concurrent/futures/process.py", line 341 in run
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/threading.py", line 1059 in _bootstrap_inner
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/threading.py", line 1016 in _bootstrap
Current thread 0xf79af040 (most recent call first):
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/test/test_concurrent_futures/test_deadlock.py", line 100 in _fail_on_deadlock
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/test/test_concurrent_futures/test_deadlock.py", line 132 in _check_crash
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/test/test_concurrent_futures/test_deadlock.py", line 154 in test_crash_during_func_exec_on_worker
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/unittest/case.py", line 589 in _callTestMethod
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/unittest/case.py", line 634 in run
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/unittest/case.py", line 690 in __call__
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/unittest/suite.py", line 122 in run
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/unittest/suite.py", line 84 in __call__
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/unittest/suite.py", line 122 in run
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/unittest/suite.py", line 84 in __call__
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/unittest/suite.py", line 122 in run
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/unittest/suite.py", line 84 in __call__
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/unittest/runner.py", line 240 in run
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/test/support/__init__.py", line 1152 in _run_suite
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/test/support/__init__.py", line 1279 in run_unittest
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/test/libregrtest/single.py", line 36 in run_unittest
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/test/libregrtest/single.py", line 92 in test_func
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/test/libregrtest/single.py", line 48 in regrtest_runner
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/test/libregrtest/single.py", line 95 in _load_run_test
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/test/libregrtest/single.py", line 138 in _runtest_env_changed_exc
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/test/libregrtest/single.py", line 223 in _runtest
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/test/libregrtest/single.py", line 266 in run_single_test
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/test/libregrtest/worker.py", line 82 in worker_process
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/test/libregrtest/worker.py", line 105 in main
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/test/libregrtest/worker.py", line 109 in <module>
File "<frozen runpy>", line 88 in _run_code
File "<frozen runpy>", line 198 in _run_module_as_main
======================================================================
FAIL: test_crash_during_result_pickle_on_worker (test.test_concurrent_futures.test_deadlock.ProcessPoolForkserverExecutorDeadlockTest.test_crash_during_result_pickle_on_worker)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/test/test_concurrent_futures/test_deadlock.py", line 128, in _check_crash
res.result(timeout=self.TIMEOUT)
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/concurrent/futures/_base.py", line 458, in result
raise TimeoutError()
TimeoutError
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/test/test_concurrent_futures/test_deadlock.py", line 167, in test_crash_during_result_pickle_on_worker
self._check_crash(BrokenProcessPool, _return_instance, CrashAtPickle)
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/test/test_concurrent_futures/test_deadlock.py", line 132, in _check_crash
self._fail_on_deadlock(executor)
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/test/test_concurrent_futures/test_deadlock.py", line 109, in _fail_on_deadlock
self.fail(f"Executor deadlock:\n\n{tb}")
AssertionError: Executor deadlock:
Thread 0xf5deb440 (most recent call first):
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/threading.py", line 341 in wait
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/multiprocessing/queues.py", line 231 in _feed
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/threading.py", line 996 in run
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/threading.py", line 1059 in _bootstrap_inner
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/threading.py", line 1016 in _bootstrap
Thread 0xf53ff440 (most recent call first):
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/selectors.py", line 398 in select
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/multiprocessing/connection.py", line 1136 in wait
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/concurrent/futures/process.py", line 414 in wait_result_broken_or_wakeup
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/concurrent/futures/process.py", line 341 in run
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/threading.py", line 1059 in _bootstrap_inner
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/threading.py", line 1016 in _bootstrap
Current thread 0xf79af040 (most recent call first):
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/test/test_concurrent_futures/test_deadlock.py", line 100 in _fail_on_deadlock
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/test/test_concurrent_futures/test_deadlock.py", line 132 in _check_crash
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/test/test_concurrent_futures/test_deadlock.py", line 167 in test_crash_during_result_pickle_on_worker
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/unittest/case.py", line 589 in _callTestMethod
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/unittest/case.py", line 634 in run
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/unittest/case.py", line 690 in __call__
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/unittest/suite.py", line 122 in run
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/unittest/suite.py", line 84 in __call__
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/unittest/suite.py", line 122 in run
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/unittest/suite.py", line 84 in __call__
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/unittest/suite.py", line 122 in run
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/unittest/suite.py", line 84 in __call__
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/unittest/runner.py", line 240 in run
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/test/support/__init__.py", line 1152 in _run_suite
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/test/support/__init__.py", line 1279 in run_unittest
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/test/libregrtest/single.py", line 36 in run_unittest
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/test/libregrtest/single.py", line 92 in test_func
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/test/libregrtest/single.py", line 48 in regrtest_runner
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/test/libregrtest/single.py", line 95 in _load_run_test
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/test/libregrtest/single.py", line 138 in _runtest_env_changed_exc
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/test/libregrtest/single.py", line 223 in _runtest
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/test/libregrtest/single.py", line 266 in run_single_test
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/test/libregrtest/worker.py", line 82 in worker_process
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/test/libregrtest/worker.py", line 105 in main
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/test/libregrtest/worker.py", line 109 in <module>
File "<frozen runpy>", line 88 in _run_code
File "<frozen runpy>", line 198 in _run_module_as_main
======================================================================
FAIL: test_crash_at_task_unpickle (test.test_concurrent_futures.test_deadlock.ProcessPoolSpawnExecutorDeadlockTest.test_crash_at_task_unpickle)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/test/test_concurrent_futures/test_deadlock.py", line 128, in _check_crash
res.result(timeout=self.TIMEOUT)
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/concurrent/futures/_base.py", line 458, in result
raise TimeoutError()
TimeoutError
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/test/test_concurrent_futures/test_deadlock.py", line 150, in test_crash_at_task_unpickle
self._check_crash(BrokenProcessPool, id, CrashAtUnpickle())
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/test/test_concurrent_futures/test_deadlock.py", line 132, in _check_crash
self._fail_on_deadlock(executor)
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/test/test_concurrent_futures/test_deadlock.py", line 109, in _fail_on_deadlock
self.fail(f"Executor deadlock:\n\n{tb}")
AssertionError: Executor deadlock:
Thread 0xf5deb440 (most recent call first):
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/threading.py", line 341 in wait
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/multiprocessing/queues.py", line 231 in _feed
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/threading.py", line 996 in run
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/threading.py", line 1059 in _bootstrap_inner
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/threading.py", line 1016 in _bootstrap
Thread 0xf53ff440 (most recent call first):
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/selectors.py", line 398 in select
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/multiprocessing/connection.py", line 1136 in wait
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/concurrent/futures/process.py", line 414 in wait_result_broken_or_wakeup
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/concurrent/futures/process.py", line 341 in run
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/threading.py", line 1059 in _bootstrap_inner
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/threading.py", line 1016 in _bootstrap
Current thread 0xf79af040 (most recent call first):
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/test/test_concurrent_futures/test_deadlock.py", line 100 in _fail_on_deadlock
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/test/test_concurrent_futures/test_deadlock.py", line 132 in _check_crash
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/test/test_concurrent_futures/test_deadlock.py", line 150 in test_crash_at_task_unpickle
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/unittest/case.py", line 589 in _callTestMethod
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/unittest/case.py", line 634 in run
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/unittest/case.py", line 690 in __call__
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/unittest/suite.py", line 122 in run
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/unittest/suite.py", line 84 in __call__
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/unittest/suite.py", line 122 in run
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/unittest/suite.py", line 84 in __call__
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/unittest/suite.py", line 122 in run
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/unittest/suite.py", line 84 in __call__
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/unittest/runner.py", line 240 in run
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/test/support/__init__.py", line 1152 in _run_suite
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/test/support/__init__.py", line 1279 in run_unittest
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/test/libregrtest/single.py", line 36 in run_unittest
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/test/libregrtest/single.py", line 92 in test_func
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/test/libregrtest/single.py", line 48 in regrtest_runner
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/test/libregrtest/single.py", line 95 in _load_run_test
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/test/libregrtest/single.py", line 138 in _runtest_env_changed_exc
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/test/libregrtest/single.py", line 223 in _runtest
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/test/libregrtest/single.py", line 266 in run_single_test
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/test/libregrtest/worker.py", line 82 in worker_process
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/test/libregrtest/worker.py", line 105 in main
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/test/libregrtest/worker.py", line 109 in <module>
File "<frozen runpy>", line 88 in _run_code
File "<frozen runpy>", line 198 in _run_module_as_main
Stderr:
Traceback (most recent call last):
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/multiprocessing/queues.py", line 250, in _feed
send_bytes(obj)
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/multiprocessing/connection.py", line 185, in send_bytes
self._check_closed()
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/multiprocessing/connection.py", line 138, in _check_closed
raise OSError("handle is closed")
OSError: handle is closed
======================================================================
FAIL: test_crash_during_func_exec_on_worker (test.test_concurrent_futures.test_deadlock.ProcessPoolSpawnExecutorDeadlockTest.test_crash_during_func_exec_on_worker)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/test/test_concurrent_futures/test_deadlock.py", line 128, in _check_crash
res.result(timeout=self.TIMEOUT)
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/concurrent/futures/_base.py", line 458, in result
raise TimeoutError()
TimeoutError
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/test/test_concurrent_futures/test_deadlock.py", line 154, in test_crash_during_func_exec_on_worker
self._check_crash(BrokenProcessPool, _crash)
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/test/test_concurrent_futures/test_deadlock.py", line 132, in _check_crash
self._fail_on_deadlock(executor)
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/test/test_concurrent_futures/test_deadlock.py", line 109, in _fail_on_deadlock
self.fail(f"Executor deadlock:\n\n{tb}")
AssertionError: Executor deadlock:
Thread 0xf5deb440 (most recent call first):
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/threading.py", line 341 in wait
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/multiprocessing/queues.py", line 231 in _feed
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/threading.py", line 996 in run
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/threading.py", line 1059 in _bootstrap_inner
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/threading.py", line 1016 in _bootstrap
Thread 0xf53ff440 (most recent call first):
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/selectors.py", line 398 in select
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/multiprocessing/connection.py", line 1136 in wait
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/concurrent/futures/process.py", line 414 in wait_result_broken_or_wakeup
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/concurrent/futures/process.py", line 341 in run
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/threading.py", line 1059 in _bootstrap_inner
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/threading.py", line 1016 in _bootstrap
Current thread 0xf79af040 (most recent call first):
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/test/test_concurrent_futures/test_deadlock.py", line 100 in _fail_on_deadlock
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/test/test_concurrent_futures/test_deadlock.py", line 132 in _check_crash
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/test/test_concurrent_futures/test_deadlock.py", line 154 in test_crash_during_func_exec_on_worker
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/unittest/case.py", line 589 in _callTestMethod
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/unittest/case.py", line 634 in run
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/unittest/case.py", line 690 in __call__
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/unittest/suite.py", line 122 in run
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/unittest/suite.py", line 84 in __call__
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/unittest/suite.py", line 122 in run
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/unittest/suite.py", line 84 in __call__
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/unittest/suite.py", line 122 in run
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/unittest/suite.py", line 84 in __call__
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/unittest/runner.py", line 240 in run
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/test/support/__init__.py", line 1152 in _run_suite
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/test/support/__init__.py", line 1279 in run_unittest
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/test/libregrtest/single.py", line 36 in run_unittest
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/test/libregrtest/single.py", line 92 in test_func
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/test/libregrtest/single.py", line 48 in regrtest_runner
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/test/libregrtest/single.py", line 95 in _load_run_test
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/test/libregrtest/single.py", line 138 in _runtest_env_changed_exc
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/test/libregrtest/single.py", line 223 in _runtest
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/test/libregrtest/single.py", line 266 in run_single_test
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/test/libregrtest/worker.py", line 82 in worker_process
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/test/libregrtest/worker.py", line 105 in main
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/test/libregrtest/worker.py", line 109 in <module>
File "<frozen runpy>", line 88 in _run_code
File "<frozen runpy>", line 198 in _run_module_as_main
Stderr:
Traceback (most recent call last):
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/multiprocessing/queues.py", line 250, in _feed
send_bytes(obj)
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/multiprocessing/connection.py", line 185, in send_bytes
self._check_closed()
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/multiprocessing/connection.py", line 138, in _check_closed
raise OSError("handle is closed")
OSError: handle is closed
======================================================================
FAIL: test_crash_during_result_pickle_on_worker (test.test_concurrent_futures.test_deadlock.ProcessPoolSpawnExecutorDeadlockTest.test_crash_during_result_pickle_on_worker)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/test/test_concurrent_futures/test_deadlock.py", line 128, in _check_crash
res.result(timeout=self.TIMEOUT)
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/concurrent/futures/_base.py", line 458, in result
raise TimeoutError()
TimeoutError
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/test/test_concurrent_futures/test_deadlock.py", line 167, in test_crash_during_result_pickle_on_worker
self._check_crash(BrokenProcessPool, _return_instance, CrashAtPickle)
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/test/test_concurrent_futures/test_deadlock.py", line 132, in _check_crash
self._fail_on_deadlock(executor)
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/test/test_concurrent_futures/test_deadlock.py", line 109, in _fail_on_deadlock
self.fail(f"Executor deadlock:\n\n{tb}")
AssertionError: Executor deadlock:
Thread 0xf5deb440 (most recent call first):
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/threading.py", line 341 in wait
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/multiprocessing/queues.py", line 231 in _feed
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/threading.py", line 996 in run
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/threading.py", line 1059 in _bootstrap_inner
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/threading.py", line 1016 in _bootstrap
Thread 0xf53ff440 (most recent call first):
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/selectors.py", line 398 in select
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/multiprocessing/connection.py", line 1136 in wait
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/concurrent/futures/process.py", line 414 in wait_result_broken_or_wakeup
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/concurrent/futures/process.py", line 341 in run
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/threading.py", line 1059 in _bootstrap_inner
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/threading.py", line 1016 in _bootstrap
Current thread 0xf79af040 (most recent call first):
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/test/test_concurrent_futures/test_deadlock.py", line 100 in _fail_on_deadlock
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/test/test_concurrent_futures/test_deadlock.py", line 132 in _check_crash
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/test/test_concurrent_futures/test_deadlock.py", line 167 in test_crash_during_result_pickle_on_worker
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/unittest/case.py", line 589 in _callTestMethod
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/unittest/case.py", line 634 in run
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/unittest/case.py", line 690 in __call__
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/unittest/suite.py", line 122 in run
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/unittest/suite.py", line 84 in __call__
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/unittest/suite.py", line 122 in run
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/unittest/suite.py", line 84 in __call__
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/unittest/suite.py", line 122 in run
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/unittest/suite.py", line 84 in __call__
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/unittest/runner.py", line 240 in run
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/test/support/__init__.py", line 1152 in _run_suite
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/test/support/__init__.py", line 1279 in run_unittest
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/test/libregrtest/single.py", line 36 in run_unittest
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/test/libregrtest/single.py", line 92 in test_func
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/test/libregrtest/single.py", line 48 in regrtest_runner
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/test/libregrtest/single.py", line 95 in _load_run_test
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/test/libregrtest/single.py", line 138 in _runtest_env_changed_exc
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/test/libregrtest/single.py", line 223 in _runtest
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/test/libregrtest/single.py", line 266 in run_single_test
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/test/libregrtest/worker.py", line 82 in worker_process
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/test/libregrtest/worker.py", line 105 in main
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/test/libregrtest/worker.py", line 109 in <module>
File "<frozen runpy>", line 88 in _run_code
File "<frozen runpy>", line 198 in _run_module_as_main
----------------------------------------------------------------------
Ran 45 tests in 491.342s
FAILED (failures=5)
```
</details>
<!-- gh-linked-prs -->
### Linked PRs
* gh-109703
* gh-109705
* gh-109708
<!-- /gh-linked-prs -->
| 1eb1b45183c3b8aeefe3d5d27694155741e82bbc | 4230d7ce93cc25e9c5fb564a0b37e93f19ca0e4e |
python/cpython | python__cpython-109694 | # Clean-up pyatomic headers
# Feature or enhancement
Now that https://github.com/python/cpython/issues/108337 is done, we have three atomic headers. We should be able to remove `pycore_atomic.h` and `pycore_atomic_funcs.h` and replace their usages with calls to `pyatomic.h`.
cc @vstinner
<!-- gh-linked-prs -->
### Linked PRs
* gh-109694
* gh-110477
* gh-110480
* gh-110604
* gh-110605
* gh-110836
* gh-110837
* gh-110992
<!-- /gh-linked-prs -->
| 2aceb21ae61b4648b47afd9f8fdba8c106a745d0 | 5b8f0246834f211db0ea83b89277489abc2521ed |
python/cpython | python__cpython-109651 | # Improve import time of various stdlib modules
# Feature or enhancement
### Proposal:
As noted in https://discuss.python.org/t/deferred-computation-evalution-for-toplevels-imports-and-dataclasses/34173, `typing` isn't the slowest stdlib module in terms of import time, but neither is it one of the quickest. We should speed it up, if possible.
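A quick way to gauge a single module's import cost (CPython also offers `python -X importtime` for a full breakdown) is to evict the module from `sys.modules` and time a fresh import. This is only a rough measurement sketch — the `timed_import` helper name is made up for illustration:

```python
import importlib
import sys
import time

def timed_import(name):
    """Measure a 'cold' import of *name* by evicting it (and its
    submodules) from sys.modules first, then importing it again."""
    cached = [m for m in sys.modules if m == name or m.startswith(name + ".")]
    for mod in cached:
        del sys.modules[mod]
    start = time.perf_counter()
    module = importlib.import_module(name)
    return module, time.perf_counter() - start

module, seconds = timed_import("typing")
print(f"import typing took {seconds * 1000:.2f} ms")
```

Note that objects created from the previously imported module keep referencing the old module, so this trick is only suitable for measurement, not for production reloading.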
### Links to previous discussion of this feature:
https://discuss.python.org/t/deferred-computation-evalution-for-toplevels-imports-and-dataclasses/34173
<!-- gh-linked-prs -->
### Linked PRs
* gh-109651
* gh-109789
* gh-109803
* gh-109804
* gh-109822
* gh-109824
* gh-109829
* gh-110221
* gh-110247
* gh-110286
* gh-112995
* gh-114509
* gh-114664
* gh-115160
* gh-118697
<!-- /gh-linked-prs -->
| e8be0c9c5a7c2327b3dd64009f45ee0682322dcb | 62c7015e89cbdedb5218d4fedd45f971885f67a8 |
python/cpython | python__cpython-109635 | # Use :samp: role
The `:samp:` role formats text as literal, but unlike plain literal text formatting (``` ``...`` ```) it allows emphasizing the variable parts. For example, in ``` ``\xhh`` ``` all letters are in the same font, but in ``` :samp:`\x{hh}` ``` the variable part *hh* is emphasized in italic.
It is the same as the `:file:` role, but without the additional file-name semantics.
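A minimal reST sketch of the difference (exact backslash escaping may vary with the surrounding context):

```rst
.. Plain literal text -- every character is rendered in the same font:

``\xhh``

.. With :samp:, the braced part *hh* is rendered in italic as a variable:

:samp:`\\x{hh}`
```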
<!-- gh-linked-prs -->
### Linked PRs
* gh-109635
* gh-109776
* gh-109778
* gh-110073
* gh-110095
<!-- /gh-linked-prs -->
| 92af0cc580051fd1129c7a86af2cbadeb2aa36dc | 5e7ea95d9d5c3b80a67ffbeebd76ce4fc327dd8e |
python/cpython | python__cpython-109867 | # Short repeated regex patterns can skip signal handling
# Bug report
### Bug description:
I mentioned regex but this probably applies to other modules as well. In the code below, if the `print()` line right after the `findall` call stays commented out (as it is right now), the exception is raised past the try-except block, on the `print("Safely reached EOF.")` statement — one statement later than expected. NB: I tested all Python versions on Windows but only 3.11 on Linux.
```python
import _thread, re
from threading import Timer
s = "ab"*20000000
pattern = re.compile("ab+")
Timer(0.2, _thread.interrupt_main).start()
try:
pattern.findall(s)
# print() ###################################################################
except:
print("Exception block")
print("Safely reached EOF.")
```
### CPython versions tested on:
3.9, 3.10, 3.11, 3.12, CPython main branch
### Operating systems tested on:
Linux, Windows
<!-- gh-linked-prs -->
### Linked PRs
* gh-109867
* gh-109885
* gh-109886
<!-- /gh-linked-prs -->
| 8ac2085b80eca4d9b2a1093d0a7da020fd12e11a | 7c61a361fc2e93375e22849fffbc20b60e94dbde |
python/cpython | python__cpython-109630 | # basicblock_addop Assertion
# Bug report
### Bug description:
I found the following error with the latest cpython 3.12 from git (4a0c118d6a4080efc538802f70ee79ce5c046e72), which I build myself with --with-pydebug enabled.
source:
```python
def get_namespace_uri():
while element and something:
try:
return something
except:
pass
```
output (Python 3.12.0rc3+):
```python
python3: Python/flowgraph.c:114: basicblock_addop: Assertion `0 <= oparg && oparg < (1 << 30)' failed.
```
### CPython versions tested on:
3.12
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-109630
* gh-109632
<!-- /gh-linked-prs -->
| 9ccf0545efd5bc5af5aa51774030c471d49a972b | 14cdefa667f211401c9dfab33c4695e80b4e5e95 |
python/cpython | python__cpython-109626 | # Move `test.test_import._ready_to_import` helper to `test.support.import_helper`
# Feature or enhancement
It is used in `test_import` and `test_inspect`; right now `test_inspect` has to import `test_import` to use this function. Since we try to reduce the number of test cross-dependencies, it is better to move it to the designated support module.
Refs https://github.com/python/cpython/pull/109607
<!-- gh-linked-prs -->
### Linked PRs
* gh-109626
* gh-109640
* gh-109718
<!-- /gh-linked-prs -->
| 115c49ad5a5ccfb628fef3ae06a566f7a0197f97 | 712cb173f8e1d02c625a40ae03bba57b0c1c032a |
python/cpython | python__cpython-111258 | # Refresh Screen Provided By `curses.wrapper` Causes Seg Fault (macOS, xcode 15 Apple supplied ncurses 6.0 breakage)
# Crash report
### What happened?
macOS began pushing out updates to Xcode Command Line Tools to install 15.0 recently. Upon updating I began having issues with `curses`. This happens with the Python provided by Apple. I'm not aware of the best way to communicate this issue to Apple; hopefully someone here knows who to ping or is watching.
Save the below as `curses-segfault.py`:
```python
import curses
def main(stdscr):
stdscr.refresh()
curses.wrapper(main)
```
Run the script in `zsh` in MacOS Terminal via:
```zsh
/usr/bin/python3 curses-segfault.py
```
An easy way to see if you got the update is via the terminal by running:
```zsh
softwareupdate --history
```
### CPython versions tested on:
3.9
### Operating systems tested on:
macOS
### Output from running 'python -VV' on the command line:
python3 -VV Python 3.9.6 (default, Aug 11 2023, 19:44:49) [Clang 15.0.0 (clang-1500.0.40.1)]
<!-- gh-linked-prs -->
### Linked PRs
* gh-111258
<!-- /gh-linked-prs -->
| 08d169f14a715ceaae3d563ced2ff1633d009359 | 291cfa454b9c5b677c955aaf53fab91f0186b6fa |
python/cpython | python__cpython-109935 | # test_tools: test_freeze_simple_script() fails on s390x SLES 3.x: [Errno 2] No such file or directory: (...)
s390x SLES 3.x:
```
test_freeze_simple_script (test.test_tools.test_freeze.TestFreeze.test_freeze_simple_script) ... ERROR
Stdout:
creating the script to be frozen at /tmp/test_python_3vf4i07o/tmp7pw_cxg6/app.py
copying the source tree into /tmp/test_python_3vf4i07o/tmp7pw_cxg6/cpython...
(...)
======================================================================
ERROR: test_freeze_simple_script (test.test_tools.test_freeze.TestFreeze.test_freeze_simple_script)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/dje/cpython-buildarea/3.x.edelsohn-sles-z/build/Lib/test/test_tools/test_freeze.py", line 32, in test_freeze_simple_script
outdir, scriptfile, python = helper.prepare(script, outdir)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/dje/cpython-buildarea/3.x.edelsohn-sles-z/build/Tools/freeze/test/freeze.py", line 146, in prepare
copy_source_tree(srcdir, SRCDIR)
File "/home/dje/cpython-buildarea/3.x.edelsohn-sles-z/build/Tools/freeze/test/freeze.py", line 95, in copy_source_tree
shutil.copytree(oldroot, newroot, ignore=ignore_non_src)
File "/home/dje/cpython-buildarea/3.x.edelsohn-sles-z/build/Lib/shutil.py", line 588, in copytree
return _copytree(entries=entries, src=src, dst=dst, symlinks=symlinks,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/dje/cpython-buildarea/3.x.edelsohn-sles-z/build/Lib/shutil.py", line 542, in _copytree
raise Error(errors)
shutil.Error: [('/home/dje/cpython-buildarea/3.x.edelsohn-sles-z/build/build/test_python_29475æ', '/tmp/test_python_3vf4i07o/tmp7pw_cxg6/cpython/build/test_python_29475æ', "[Errno 2] No such file or directory: '/home/dje/cpython-buildarea/3.x.edelsohn-sles-z/build/build/test_python_29475æ'")]
Stdout:
creating the script to be frozen at /tmp/test_python_3vf4i07o/tmp7pw_cxg6/app.py
copying the source tree into /tmp/test_python_3vf4i07o/tmp7pw_cxg6/cpython...
----------------------------------------------------------------------
Ran 36 tests in 7.203s
FAILED (errors=1)
test test_tools failed
```
build: https://buildbot.python.org/all/#/builders/540/builds/6583
<!-- gh-linked-prs -->
### Linked PRs
* gh-109935
* gh-109950
* gh-109951
* gh-109958
* gh-109962
* gh-109970
* gh-109976
* gh-110108
* gh-110110
* gh-110340
<!-- /gh-linked-prs -->
| 1512d6c6ee2a770afb339bbb74c1b990116f7f89 | 0e28d0f7a1bc3776cc07e0f8b91bc43fcdbb4206 |
python/cpython | python__cpython-109618 | # os.stat() and os.DirEntry.stat() don't check for fill_time() exception: _Py_CheckSlotResult: Slot * of type int succeeded with an exception set
# Bug report
```
$ ./python -m test -v test_tools.test_freeze -u all
== CPython 3.12.0rc3+ (heads/3.12-dirty:4a0c118d6a, Sep 20 2023, 16:55:13) [GCC 13.2.1 20230728 (Red Hat 13.2.1-1)]
== Linux-6.4.14-200.fc38.x86_64-x86_64-with-glibc2.37 little-endian
== Python build: debug
== cwd: /home/vstinner/python/3.12/build/test_python_768929æ
== CPU count: 12
== encodings: locale=UTF-8, FS=utf-8
0:00:00 load avg: 1.16 Run tests sequentially
0:00:00 load avg: 1.16 [1/1] test_tools.test_freeze
test_freeze_simple_script (test.test_tools.test_freeze.TestFreeze.test_freeze_simple_script) ... creating the script to be frozen at /tmp/tmp_as044dn/app.py
copying the source tree into /tmp/tmp_as044dn/cpython...
^C
Fatal Python error: _Py_CheckSlotResult: Slot * of type int succeeded with an exception set
Python runtime state: initialized
KeyboardInterrupt
Current thread 0x00007fd397f9e740 (most recent call first):
File "/home/vstinner/python/3.12/Lib/shutil.py", line 376 in copystat
File "/home/vstinner/python/3.12/Lib/shutil.py", line 536 in _copytree
File "/home/vstinner/python/3.12/Lib/shutil.py", line 588 in copytree
File "/home/vstinner/python/3.12/Lib/shutil.py", line 524 in _copytree
File "/home/vstinner/python/3.12/Lib/shutil.py", line 588 in copytree
...
```
I suppose that ``long_mul()`` was called indirectly by ``os_DirEntry_stat_impl()``. Traceback when I put a breakpoint in gdb:
```
(gdb) where
#0 long_mul (a=0x7fffe97e8ac0, b=0x7fffea5d23c0) at Objects/longobject.c:3956
#1 0x00000000004ca219 in binary_op1 (v=1695221791, w=1000000000, op_slot=16, op_name=0x7adfe4 "*") at Objects/abstract.c:882
#2 0x00000000004cab6b in PyNumber_Multiply (v=1695221791, w=1000000000) at Objects/abstract.c:1100
#3 0x000000000070a0ac in fill_time (module=<module at remote 0x7fffea5ce570>, v=<os.stat_result at remote 0x7fffe97fb150>, s_index=7,
f_index=10, ns_index=13, sec=1695221791, nsec=606287902) at ./Modules/posixmodule.c:2400
#4 0x000000000070a398 in _pystat_fromstructstat (module=<module at remote 0x7fffea5ce570>, st=0x7ffffffe8730)
at ./Modules/posixmodule.c:2508
#5 0x0000000000713e1d in DirEntry_fetch_stat (module=<module at remote 0x7fffea5ce570>, self=0x7fffe97d5ae0, follow_symlinks=0)
at ./Modules/posixmodule.c:14639
#6 0x0000000000713e64 in DirEntry_get_lstat (defining_class=0xb6dd60, self=0x7fffe97d5ae0) at ./Modules/posixmodule.c:14650
#7 0x0000000000713f2a in os_DirEntry_stat_impl (self=0x7fffe97d5ae0, defining_class=0xb6dd60, follow_symlinks=1)
at ./Modules/posixmodule.c:14685
#8 0x000000000071dadc in os_DirEntry_stat (self=0x7fffe97d5ae0, defining_class=0xb6dd60, args=0x7ffff7fbafb0, nargs=0, kwnames=0x0)
at ./Modules/clinic/posixmodule.c.h:10788
#9 0x00000000004feba4 in method_vectorcall_FASTCALL_KEYWORDS_METHOD (func=<method_descriptor at remote 0x7fffea5fc6b0>,
args=0x7ffff7fbafa8, nargsf=9223372036854775809, kwnames=0x0) at Objects/descrobject.c:387
#10 0x00000000004ed499 in _PyObject_VectorcallTstate (tstate=0xb5a8e0 <_PyRuntime+475616>,
callable=<method_descriptor at remote 0x7fffea5fc6b0>, args=0x7ffff7fbafa8, nargsf=9223372036854775809, kwnames=0x0)
at ./Include/internal/pycore_call.h:92
#11 0x00000000004ee17d in PyObject_Vectorcall (callable=<method_descriptor at remote 0x7fffea5fc6b0>, args=0x7ffff7fbafa8,
nargsf=9223372036854775809, kwnames=0x0) at Objects/call.c:325
#12 0x0000000000644123 in _PyEval_EvalFrameDefault (tstate=0xb5a8e0 <_PyRuntime+475616>, frame=0x7ffff7fbaf38, throwflag=0)
at Python/bytecodes.c:2711
#13 0x000000000062f7a0 in _PyEval_EvalFrame (tstate=0xb5a8e0 <_PyRuntime+475616>, frame=0x7ffff7fba1c0, throwflag=0)
at ./Include/internal/pycore_ceval.h:88
```
<!-- gh-linked-prs -->
### Linked PRs
* gh-109618
* gh-109641
* gh-109668
<!-- /gh-linked-prs -->
| d4cea794a7b9b745817d2bd982d35412aef04710 | 115c49ad5a5ccfb628fef3ae06a566f7a0197f97 |
python/cpython | python__cpython-109612 | # Add convenient C API for flushing a file
When working on the replacement of `PySys_GetObject()` I noticed a large number of sites (23) that call the flush() method of a file, check the result for error, and decref the returned object. It takes 7 lines of code and requires a temporary variable. There are many variations in writing the same code, so you need to read it carefully to recognize the pattern and ensure that it is correct.
I propose to add a convenience function `_PyFile_Flush()` which calls the method and returns -1 on error and 0 on success. It allows getting rid of the temporary variable and writing the same code in 3 lines, including the closing `}`. It is also more recognizable.
<!-- gh-linked-prs -->
### Linked PRs
* gh-109612
<!-- /gh-linked-prs -->
| b8d1744e7ba87a4057350fdfd788b5621095fc59 | 92af0cc580051fd1129c7a86af2cbadeb2aa36dc |
python/cpython | python__cpython-109600 | # Expose capsule type in types module
# Feature or enhancement
### Proposal:
It would be nice if the type of capsule objects was exposed in the `types` module, just like other interpreter types.
### Has this already been discussed elsewhere?
This is a minor feature, which does not need previous discussion elsewhere
### Links to previous discussion of this feature:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-109600
* gh-131969
<!-- /gh-linked-prs -->
| 88a6137cdb81c80440d9d1ee7dee17ea0b820f11 | bc06743533b5fea2d5ecdad6dd3caa372c67439f |
python/cpython | python__cpython-109647 | # Make PyComplex_RealAsDouble/ImagAsDouble use __complex__
The C-API docs says: "Return the real/imag part of op as a C double." But the real code looks like:
```c
PyComplex_RealAsDouble(PyObject *op)
{
if (PyComplex_Check(op)) {
return ((PyComplexObject *)op)->cval.real;
}
else {
return PyFloat_AsDouble(op);
}
}
```
So, we instead assume that ``op`` is a float-like object (a subtype of ``float``, or something with a ``__float__`` dunder method). Instead, we should look for the ``__complex__`` method in the ``else`` branch. This is an issue like #44670, I think.
Minor issue: these functions aren't tested, only indirectly with ``format()`` (that will not trigger all possible cases).
Edit:
The current ``PyComplex_ImagAsDouble()`` silently returns ``0.0`` for all non-PyComplexObject objects (or subtypes thereof). I think it's a bug and we should return ``-1.0`` instead and set the error indicator.
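The gap is visible from pure Python: an object that implements only ``__complex__`` converts cleanly with ``complex()``, but a fallback built on float conversion (as ``PyFloat_AsDouble`` is) cannot handle it. A small illustration with a hypothetical class:

```python
# Hypothetical class: it implements __complex__ but not __float__.
class ComplexLike:
    def __complex__(self):
        return 3.0 + 4.0j

c = ComplexLike()

# complex() honors the __complex__ dunder:
print(complex(c).real, complex(c).imag)  # 3.0 4.0

# ...but float conversion does not fall back to __complex__,
# which is why the current C-level else branch misses such objects:
try:
    float(c)
except TypeError as exc:
    print(type(exc).__name__)  # TypeError
```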
<!-- gh-linked-prs -->
### Linked PRs
* gh-109647
<!-- /gh-linked-prs -->
| 0f2fa6150baf111a6c69d5d491c95c3c2ee60eaf | ac10947ba79a15bfdaa3ca92c6864214648ab364 |
python/cpython | python__cpython-109606 | # Rule name conflict and token marking mistakes in the 3.12.0rc grammar
# Bug report
### Bug description:
This report was originally launched [here](https://discuss.python.org/t/rule-name-conflict-and-token-reference-mistakes-in-the-3-12-0rc-grammar/34151/3).
[grammars](https://github.com/python/cpython/blob/main/Grammar/python.gram):
3.12.0rc3
3.13.0 alpha 0
Rule name conflict between the following two grammar rules:
```
fstring: star_expressions
```
and
```
fstring:
| FSTRING_START fstring_middle* FSTRING_END
```
The colon sign is not a soft keyword (":"), it is a token (':') in these rules:
```
type_param:
| NAME [type_param_bound]
| '*' NAME ":" expression
| '*' NAME
| '**' NAME ":" expression
| '**' NAME
type_param_bound: ":" expression
```
The equal sign is not a soft keyword ("="), it is a token ('=') in the following rule:
```
fstring_replacement_field:
| '{' (yield_expr | star_expressions) "="? [fstring_conversion] [fstring_full_format_spec] '}'
```
The previous two cases are mostly notation problems.
More generally, it is a problem because comparing two strings is slower than comparing two integers.
### CPython versions tested on:
3.12
### Operating systems tested on:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-109606
* gh-109752
<!-- /gh-linked-prs -->
| b28ffaa193efc66f46ab90d383279174a11a11d7 | 7c553991724d8d537f8444db73f016008753d77a |
python/cpython | python__cpython-109667 | # Support python -Xcpu_count=<n> feature for container environment.
# Feature or enhancement
As discussed in https://github.com/python/cpython/issues/80235, there are requests for isolating the CPU count in k8s or container environments, and **this is a very important feature these days**. (Practically, at my corp, a lot of workloads run under container environments, and controlling the CPU count is very important for resolving noisy-neighbor issues.)
There were a lot of discussions, and following the cgroup spec would add a lot of complexity and performance issues (due to fallback).
JDK 21 chose not to depend on CPU Shares to compute the active processor count; instead it uses `-XX:ActiveProcessorCount=<n>`.
see: https://bugs.openjdk.org/browse/JDK-8281571
I think that this strategy will be worth using from the CPython side too.
So if the user executes Python with the `-Xcpu_count=3` option, `os.cpu_count` will return 3 instead of the actual CPU count of the machine.
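A minimal sketch of how such an override could work, assuming the option is surfaced through `sys._xoptions` (the helper name below is hypothetical, not part of the proposal):

```python
import os
import sys

def effective_cpu_count():
    # Hypothetical helper: honor a -Xcpu_count=<n> override when present,
    # otherwise fall back to the real os.cpu_count().
    override = sys._xoptions.get("cpu_count")
    if override is not None:
        return int(override)
    return os.cpu_count()

# Running as `python -Xcpu_count=3 app.py` would make
# effective_cpu_count() return 3 regardless of the hardware.
```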
cc @vstinner @indygreg
<!-- gh-linked-prs -->
### Linked PRs
* gh-109667
* gh-110639
<!-- /gh-linked-prs -->
| 0362cbf908aff2b87298f8a9422e7b368f890071 | 5aa62a8de15212577a13966710b3aede46e93824 |
python/cpython | python__cpython-110018 | # test_concurrent_futures.test_wait: test_timeout() failed on ARM Raspbian 3.x
The test uses sleep and timeout in seconds (6 and 7 seconds):
```py
@support.requires_resource('walltime')
def test_timeout(self):
future1 = self.executor.submit(mul, 6, 7)
future2 = self.executor.submit(time.sleep, 6) # <==== HERE
finished, pending = futures.wait(
[CANCELLED_AND_NOTIFIED_FUTURE,
EXCEPTION_FUTURE,
SUCCESSFUL_FUTURE,
future1, future2],
timeout=5, # <=== HERE
return_when=futures.ALL_COMPLETED)
self.assertEqual(set([CANCELLED_AND_NOTIFIED_FUTURE,
EXCEPTION_FUTURE,
SUCCESSFUL_FUTURE,
future1]), finished)
self.assertEqual(set([future2]), pending)
```
ARM Raspbian 3.x:
```
test_timeout (test.test_concurrent_futures.test_wait.ProcessPoolForkserverWaitTest.test_timeout) ... FAIL
Stdout:
12.34s
(...)
FAIL: test_timeout (test.test_concurrent_futures.test_wait.ProcessPoolForkserverWaitTest.test_timeout)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/test/test_concurrent_futures/test_wait.py", line 128, in test_timeout
self.assertEqual(set([CANCELLED_AND_NOTIFIED_FUTURE,
AssertionError: Items in the first set but not the second:
<Future at 0xf5fbdfc0 state=running>
```
build: https://buildbot.python.org/all/#/builders/424/builds/4964
<!-- gh-linked-prs -->
### Linked PRs
* gh-110018
* gh-110021
* gh-110022
<!-- /gh-linked-prs -->
| 9be283e5e15d5d5685b78a38eb132501f7f3febb | 0baf72696e79191241a2d5cfdfd7e6135115f7b2 |
python/cpython | python__cpython-110102 | # test_eintr: test_flock() fails with: 0.19043820584192872 not greater than or equal to 0.2
Failure on AMD64 RHEL8 3.x buildbot:
```
FAIL: test_flock (__main__.FNTLEINTRTest.test_flock)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/buildbot/buildarea/3.x.cstratak-RHEL8-x86_64/build/Lib/test/_test_eintr.py", line 526, in test_flock
self._lock(fcntl.flock, "flock")
File "/home/buildbot/buildarea/3.x.cstratak-RHEL8-x86_64/build/Lib/test/_test_eintr.py", line 515, in _lock
self.assertGreaterEqual(dt, self.sleep_time)
AssertionError: 0.19043820584192872 not greater than or equal to 0.2
```
build: https://buildbot.python.org/all/#/builders/185/builds/5045
dt is measured with ``time.monotonic()``. test.pythoninfo says:
```
time.get_clock_info(monotonic): namespace(implementation='clock_gettime(CLOCK_MONOTONIC)', monotonic=True, adjustable=False, resolution=1e-09)
```
``sleep_time = 0.2`` constant is used to call ``time.sleep(0.2)`` in a child process which is created after ``start_time = time.monotonic()``.
<!-- gh-linked-prs -->
### Linked PRs
* gh-110102
* gh-110106
* gh-110107
<!-- /gh-linked-prs -->
| 9c73a9acec095c05a178e7dff638f7d9769318f3 | 86e76ab8af9a5018acbcdcbb6285678175b1bd8a |
python/cpython | python__cpython-109666 | # [proposal] Allow "precompiled" perf-trampolines to largely mitigate the cost of enabling perf-trampolines
# Feature or enhancement
### Proposal:
The perf trampoline feature introduced in #96143 is incredibly useful to gain visibility into the cost of individual Python functions.
Currently, there are a few aspects of the trampoline that make it costly to enable on live server-side applications:
1. Each call to a new function results in a disk IO operation (to write to the `/tmp/perf-<pid>.map` file)
2. For a forked multiprocess model:
a. We disable and re-enable the trampoline on fork, which means the same work must be done in child processes.
b. We use more memory, because we do not take advantage of copy-on-write.
On a fairly large Python server (Instagram), we have observed a 1.5% overall CPU regression, mainly stemming from the repeated file writes.
In order to address these, and make it possible to have perf-trampoline running in an always-enabled fashion, we could allow extension modules to initialize trampolines eagerly, after the application is "warmed up". (The definition of warmed up is specific to the application).
Essentially, this would involve introducing two C-API functions:
```
// Creates a new trampoline by doing essentially what `py_trampoline_evaluator` currently does.
// 1. Call `compile_trampoline()`
// 2. Register it by calling `trampoline_api.write_state()`
// 3. Call `_PyCode_SetExtra`
int PyUnstable_PerfTrampoline_CompileCode(PyCodeObject *co);
```
```
// This flag will be used by _PyPerfTrampoline_AfterFork_Child to
// decide whether to re-initialize trampolines in child processes. If enabled,
// we would copy the the parent process perf-map file, else we would use
// the current behavior.
static Py_ssize_t persist_after_fork = 0;
int PyUnstable_PerfTrampoline_PersistAfterFork (int enable) {
persist_after_fork = enable;
return 0;  /* success */
}
```
### Has this already been discussed elsewhere?
This is a minor feature, which does not need previous discussion elsewhere
### Links to previous discussion of this feature:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-109666
<!-- /gh-linked-prs -->
| 21f068d80c6cc5de75f9df70fdd733d0ce9c70de | 3d2f1f0b830d86f16f42c42b54d3ea4453dac318 |
python/cpython | python__cpython-109605 | # test_asyncio.test_unix_events: test_fork_signal_handling() fails randomly on Linux/macOS
```
FAIL: test_fork_signal_handling (test.test_asyncio.test_unix_events.TestFork.test_fork_signal_handling)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/Users/runner/work/cpython/cpython/Lib/unittest/async_case.py", line 90, in _callTestMethod
if self._callMaybeAsync(method) is not None:
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/runner/work/cpython/cpython/Lib/unittest/async_case.py", line 117, in _callMaybeAsync
return self._asyncioTestContext.run(func, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/runner/work/cpython/cpython/Lib/test/support/hashlib_helper.py", line 49, in wrapper
return func_or_class(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/runner/work/cpython/cpython/Lib/test/test_asyncio/test_unix_events.py", line 1937, in test_fork_signal_handling
self.assertTrue(child_handled.is_set())
AssertionError: False is not true
```
build: https://github.com/python/cpython/actions/runs/6237986497/job/16932828971
Passed when re-run:
```
Re-running test.test_asyncio.test_unix_events in verbose mode (matching: test_fork_signal_handling)
test_fork_signal_handling (test.test_asyncio.test_unix_events.TestFork.test_fork_signal_handling) ... /Users/runner/work/cpython/cpython/Lib/multiprocessing/popen_fork.py:66: DeprecationWarning: This process (pid=14768) is multi-threaded, use of fork() may lead to deadlocks in the child.
self.pid = os.fork()
ok
```
<!-- gh-linked-prs -->
### Linked PRs
* gh-109605
* gh-109695
<!-- /gh-linked-prs -->
| 608c1f3083ea1e06d383ef1a9878a9758903de4b | 2aceb21ae61b4648b47afd9f8fdba8c106a745d0 |
python/cpython | python__cpython-109584 | # test_perf_profiler: test_trampoline_works_with_forks() failed on Address Sanitizer CI
# Bug report
```
test_trampoline_works_with_forks (test.test_perf_profiler.TestPerfTrampoline.test_trampoline_works_with_forks) ... FAIL
FAIL: test_trampoline_works_with_forks (test.test_perf_profiler.TestPerfTrampoline.test_trampoline_works_with_forks)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/runner/work/cpython/cpython/Lib/test/test_perf_profiler.py", line 123, in test_trampoline_works_with_forks
self.assertEqual(process.returncode, 0)
AssertionError: 139 != 0
```
It looks like a SIGSEGV crash (128 + SIGSEGV = 139 on Linux).
build: https://github.com/python/cpython/actions/runs/6237694188/job/16931875912?pr=108965
Passed when re-run:
```
0:09:17 load avg: 4.36 Re-running 1 failed tests in verbose mode in subprocesses
0:09:17 load avg: 4.36 Run 1 test in parallel using 1 worker process (timeout: 20 min, worker timeout: 25 min)
0:09:17 load avg: 4.36 [1/1] test_perf_profiler passed
Re-running test_perf_profiler in verbose mode (matching: test_trampoline_works_with_forks)
test_trampoline_works_with_forks (test.test_perf_profiler.TestPerfTrampoline.test_trampoline_works_with_forks) ... ok
----------------------------------------------------------------------
Ran 1 test in 0.083s
OK
```
cc @pablogsal
<!-- gh-linked-prs -->
### Linked PRs
* gh-109584
* gh-109585
<!-- /gh-linked-prs -->
| 754519a9f8c2bb06d85ff9b3e9fe6f967ac46d5c | 9df6712c122b621dc5a1053ff1eccf0debfb2148 |
python/cpython | python__cpython-109567 | # Unify ways to run the Python test suite
There are at least 15 known ways to run the Python test suite. IMO it's too much; it's hard to keep all these code paths consistent. IMO ``python -m test`` should be enough to fit all use cases.
Maybe we need a ``--ci``, ``--buildbot``, or ``--strict`` option which would run tests in "strict" mode.
---
Portable way to run tests:
```
./python -m test
./python -m test.regrtest
./python -m test.libregrtest
./python -m test.autotest
./python Lib/test/regrtest.py
./python Lib/test/autotest.py
./python -c 'from test import autotest'
```
The main drawback is that running ``./python`` doesn't work when Python is built with ``./configure --enable-shared``, and it doesn't work when Python is cross-compiled (WASM/WASI buildbots). Moreover, on macOS, ``./python.exe`` must be run instead. On Windows, it's just ``python``.
Unix, from Python source code:
```
make buildbottest TESTOPTS="..."
make test
make testall
make testuniversal
make hostrunnertest
./python Tools/scripts/run_tests.py
```
Windows, from Python source code:
```
PCbuild\rt.bat
Tools\buildbot\test.bat
```
---
* Lib/test/autotest.py just runs ``test.libregrtest.main()``.
* Lib/test/regrtest.py runs ``test.libregrtest.main()`` **and** removes ``Lib/test/`` from ``sys.path`` **and** makes ``__file__`` an absolute path.
* ``PCbuild\rt.bat``:
* python: pass ``-u -Wd -E -bb`` options
* regrtest: pass options passed to ``rt.bat`` (except of options specific to rt.bat)
* most options are to get the path to the python.exe program in PCbuild/
* ``Tools\buildbot\test.bat`` runs ``PCbuild\rt.bat``:
* ``rt.bat``: pass ``-q -d`` options.
* regrtest: pass ``-j1 -uall -rwW --slowest --timeout=1200 --fail-env-changed`` options.
* ARM32 SSH pass ``-unetwork -udecimal -usubprocess -uurlfetch -utzdata -rwW --slowest --timeout=1200 --fail-env-changed`` options.
* ``make buildbottest`` runs ``Tools/scripts/run_tests.py``:
* python: pass ``$(TESTPYTHONOPTS)`` options.
* regrtest: pass ``-j 1 -u all -W --slowest --fail-env-changed --timeout=$(TESTTIMEOUT) $(TESTOPTS)`` options.
* ``Tools/scripts/run_tests.py``:
* python: pass ``-u -W default -bb -E`` options -- also cross-compilation options, not detailed here
* regrtest: pass ``-r -w -j0 -u all,-largefile,-audio,-gui`` options, add ``-n`` on Windows.
---
GitHub Action workflow uses, [.github/workflows/build.yml](https://github.com/python/cpython/blob/afa7b0d743d9b7172d58d1e4c28da8324b9be2eb/.github/workflows/build.yml):
* Windows x86: ``.\PCbuild\rt.bat -p Win32 -d -q -uall -u-cpu -rwW --slowest --timeout=1200 -j0``
* Windows x64: ``.\PCbuild\rt.bat -p x64 -d -q -uall -u-cpu -rwW --slowest --timeout=1200 -j0``
* macOS: ``make buildbottest TESTOPTS="-j4 -uall,-cpu"``
* Ubuntu: ``xvfb-run make buildbottest TESTOPTS="-j4 -uall,-cpu"``
* Address Sanitizer: ``xvfb-run make buildbottest TESTOPTS="-j4 -uall,-cpu"``
Buildbot use, [master/custom/factories.py](https://github.com/python/buildmaster-config/blob/0fbab0b010c8d24145e304505463bdf562439d85/master/custom/factories.py):
* UnixBuild and UnixCrossBuild: ``make buildbottest TESTOPTS="..." TESTPYTHONOPTS="..." TESTTIMEOUT="..."`` with code to select TESTOPTS, TESTPYTHONOPTS and TESTTIMEOUT.
* UnixInstalledBuild: ``/path/to/python -m test ...`` with code to select ``-m test`` options.
* BaseWindowsBuild: ``Tools\buildbot\test.bat ...`` with code to select ``....`` options
<!-- gh-linked-prs -->
### Linked PRs
* gh-109567
* gh-109570
* gh-109701
* gh-109909
* gh-109926
* gh-109954
* gh-109969
* gh-110062
* gh-110120
* gh-110121
* gh-110122
<!-- /gh-linked-prs -->
| 67d9363372d9b214d9aa1812163866c9804aa55a | f2636d2c45aae0a04960dcfbc7d9a2a8a36ba3bc |
python/cpython | python__cpython-109560 | # Unicode 15.1 Support
# Feature or enhancement
### Proposal:
[Unicode 15.1 has recently been released](http://blog.unicode.org/2023/09/announcing-unicode-standard-version-151.html), and the stdlib `unicodedata` and associated tooling should be updated accordingly.
As an example of a query against the Unicode Character Database that is now out-of-date: querying the category of U+2EE5D currently returns `"Cn"`, but as of the update this now-allocated codepoint is in category `"Lo"`.
```
$ python3.12 -c "import unicodedata; print(unicodedata.category('\U0002EE5D'))" # should be 'Lo'
Cn
```
### Has this already been discussed elsewhere?
This is a minor feature, which does not need previous discussion elsewhere
### Links to previous discussion of this feature:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-109560
* gh-109597
<!-- /gh-linked-prs -->
| def828995a35a289c9f03500903b5917df93465f | 1293fcc3c6b67b7e8d0081863ec6387e162341eb |
python/cpython | python__cpython-109915 | # Add new states `ATTACHED`, `DETACHED`, `GC` to `PyThreadState` to support PEP 703
# Feature or enhancement
To support pausing threads for garbage collection, PEP 703 requires identifying which threads may be accessing Python object internals (like reference counts). Currently, this is effectively controlled by acquiring the GIL. The PEP proposes three thread-states (https://peps.python.org/pep-0703/#thread-states):
* `ATTACHED`
* `DETACHED`
* `GC`
I propose adding these states and a new field to `PyThreadState`. Like all the other fields in `PyThreadState` (other than `interp`), the field is intended to be private.
```c
#define _Py_THREAD_DETACHED 0
#define _Py_THREAD_ATTACHED 1
#define _Py_THREAD_GC 2
struct _ts {
...
int state;
}
```
Unfortunately, we can't add this to the [`_status`](https://github.com/python/cpython/blob/74f315edd01b4d6c6c99e50c03a90116820d8d47/Include/cpython/pystate.h#L71-L94) bitfield because operations need to be atomic, and atomic operations don't work well with C bitfields.
### Default build (with GIL)
The attached and detached states will correspond precisely to acquiring and releasing the GIL. A thread will set its state to `_Py_THREAD_ATTACHED` immediately after acquiring the GIL lock and set it to `_Py_THREAD_DETACHED` immediately before releasing the GIL lock.
The `_Py_THREAD_GC` state will not be used in the default build.
### `--disable-gil` build
From https://peps.python.org/pep-0703/#thread-states:
> When compiling without the GIL, functions that previously acquired the GIL instead transition the thread state to ATTACHED, and functions that previously released the GIL transition the thread state to DETACHED. Just as threads previously needed to acquire the GIL before accessing or modifying Python objects, they now must be in the ATTACHED state before accessing or modifying Python objects. Since the same public C-API functions “attach” the thread as previously acquired the GIL (e.g., PyEval_RestoreThread), the requirements for thread initialization in extensions remain the same. The substantial difference is that multiple threads can be in the attached state simultaneously, while previously only one thread could acquire the GIL at a time.
cc @ericsnowcurrently
<!-- gh-linked-prs -->
### Linked PRs
* gh-109915
<!-- /gh-linked-prs -->
| 6e97a9647ae028facb392d12fc24973503693bd6 | 9eb2489266c4c1f115b8f72c0728db737cc8a815 |
python/cpython | python__cpython-109548 | # Add more tests for formatting floats and fractions
File "formatfloat_testcases.txt" contains test cases for testing printf-like formatting of floats. The same data can be used for testing new-style formatting of floats (they give the same result) and float-style formatting of Fraction (added in 3.12).
cc @mdickinson
<!-- gh-linked-prs -->
### Linked PRs
* gh-109548
* gh-109557
* gh-109685
<!-- /gh-linked-prs -->
| beb5ec5817b645562ebbdd59f25683a93061c32c | c829975428253568d47ebfc3104fa7386b5e0b58 |
python/cpython | python__cpython-109544 | # typing.TypedDict: unnecessary hasattr check
These lines:
https://github.com/python/cpython/blob/412f5e85d6b9f2e90c57c54539d06c7a025a472a/Lib/typing.py#L2889-2890
```python
if not hasattr(tp_dict, '__total__'):
tp_dict.__total__ = total
```
go out of their way to account for the case where a TypedDict already has a `__total__` attribute. But I don't think there is any way for the TypedDict to have that attribute at this point. For clarity, we should remove the check.
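A quick check of current behavior supports this: by the time a TypedDict class exists, ``__total__`` is already set, whether or not ``total`` was passed explicitly.

```python
from typing import TypedDict

class Movie(TypedDict):
    title: str

class PartialMovie(TypedDict, total=False):
    title: str

# __total__ is always present on a TypedDict class, so the hasattr
# guard before assigning it never fires in practice.
assert hasattr(Movie, "__total__")
assert Movie.__total__ is True
assert PartialMovie.__total__ is False
```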
<!-- gh-linked-prs -->
### Linked PRs
* gh-109544
<!-- /gh-linked-prs -->
| d8ce092fe4e98fec414f4e60cfc958b3ac3ec9a3 | 89d8b2d14bb6f7e5c4565b5c3f56d917e6134c89 |
python/cpython | python__cpython-111983 | # 3.11.5 regression: StreamWriter.__del__ fails if event loop is already closed
# Bug report
### Bug description:
PR #107650 added a `StreamWriter.__del__` that emits a ResourceWarning if a StreamWriter is not closed by the time it is garbage collected, and it has been backported to 3.11.5. However, if the event loop has already been closed by the time this happens, it causes a RuntimeError message to be displayed. It's non-fatal because exceptions raised by `__del__` are ignored, but it causes an error message to be displayed to the user (as opposed to ResourceWarning, which is only shown when one opts in).
Code like the following used to run without any visible error, but now prints a traceback (and does not report the ResourceWarning for the unclosed StreamWriter when run with `-X dev`):
```python
#!/usr/bin/env python3
import asyncio
async def main():
global writer
reader, writer = await asyncio.open_connection("127.0.0.1", 22)
asyncio.run(main())
```
Output in 3.11.5:
```
Exception ignored in: <function StreamWriter.__del__ at 0x7fd54ee11080>
Traceback (most recent call last):
File "/usr/lib/python3.11/asyncio/streams.py", line 396, in __del__
File "/usr/lib/python3.11/asyncio/streams.py", line 344, in close
File "/usr/lib/python3.11/asyncio/selector_events.py", line 860, in close
File "/usr/lib/python3.11/asyncio/base_events.py", line 761, in call_soon
File "/usr/lib/python3.11/asyncio/base_events.py", line 519, in _check_closed
RuntimeError: Event loop is closed
```
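User code can avoid the traceback by closing the writer while the loop is still running. A self-contained sketch (using a local throwaway server in place of the original ``127.0.0.1:22`` connection, so the example is runnable):

```python
import asyncio

async def main():
    # Stand-in for the original snippet: serve and connect locally
    # instead of dialing 127.0.0.1:22.
    async def handler(reader, writer):
        writer.close()
        await writer.wait_closed()

    server = await asyncio.start_server(handler, "127.0.0.1", 0)
    port = server.sockets[0].getsockname()[1]
    reader, writer = await asyncio.open_connection("127.0.0.1", port)

    # Closing the writer while the loop is still running avoids both the
    # ResourceWarning and the "Event loop is closed" traceback in __del__.
    writer.close()
    await writer.wait_closed()

    server.close()
    await server.wait_closed()
    return True

ok = asyncio.run(main())
```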
### CPython versions tested on:
3.11
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-111983
* gh-112141
* gh-112142
<!-- /gh-linked-prs -->
| e0f512797596282bff63260f8102592aad37cdf1 | fe9db901b2446b047e537447ea5bad3d470b0f78 |
python/cpython | python__cpython-109522 | # Problems with PyImport_GetImporter()
# Bug report
1. `PyImport_GetImporter()` can return NULL with the error set or not set. The latter happens only if `sys.path_hooks` or `sys.path_importer_cache` was deleted, or in more obscure cases: when string allocation for the strings "path_hooks" or "path_importer_cache" fails, or the `sys` module was not yet created. These cases are so obscure that user code most likely does not expect them. The only place where `PyImport_GetImporter()` is used in Python itself expects an error to be set if NULL is returned.
2. `PyImport_GetImporter()` can crash (in debug build) or raise SystemError if `sys.path_hooks` is not a list or `sys.path_importer_cache` is not a dict. Note that both are set to None in `_PyImport_FiniExternal()` (which is called in `Py_FinalizeEx()` and `Py_EndInterpreter()`).
A crash is unacceptable, so the asserts should be replaced with runtime checks that raise an exception (RuntimeError looks suitable).
And I think that `PyImport_GetImporter()` should set RuntimeError when it fails to get `sys.path_hooks` or `sys.path_importer_cache`. It is the most common error in similar cases.
<!-- gh-linked-prs -->
### Linked PRs
* gh-109522
* gh-109777
* gh-109781
<!-- /gh-linked-prs -->
| 62c7015e89cbdedb5218d4fedd45f971885f67a8 | b8d1744e7ba87a4057350fdfd788b5621095fc59 |
python/cpython | python__cpython-109516 | # freeze_modules.py on windows hits command line limits when number of frozen modules grows
# Bug report
### Bug description:
The freeze_modules.py machinery invokes deepfreeze.py, passing the modules to be frozen on the command line.
If the number of modules is high enough, the command line is truncated and deepfreeze.py fails.
This problem can be triggered by enabling the <encodings*> line on freeze_modules.py
It can be fixed by writing module paths into a file (see PR)
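A minimal sketch of the response-file approach described above (the helper names are mine, not the actual PR's): the caller writes one path per line to a temporary file, and the tool reads the paths back instead of taking them on argv, sidestepping the OS command-line length limit.

```python
import os
import tempfile

def write_response_file(paths):
    # Hypothetical helper: write one module path per line to a
    # temporary "response file" and return its name.
    fd, name = tempfile.mkstemp(suffix=".txt")
    with os.fdopen(fd, "w") as f:
        f.write("\n".join(paths))
    return name

def read_response_file(name):
    # The consuming tool reads the paths back from the file.
    with open(name) as f:
        return [line.strip() for line in f if line.strip()]

paths = [f"Python/frozen_modules/mod_{i}.h" for i in range(1000)]
name = write_response_file(paths)
assert read_response_file(name) == paths
os.remove(name)
```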
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Linux, Windows
<!-- gh-linked-prs -->
### Linked PRs
* gh-109516
<!-- /gh-linked-prs -->
| 8eaa206feccc913d172027a613d34a50210f4511 | 6dfb8fe0236718e9afc8136ff2b58dcfbc182022 |
python/cpython | python__cpython-109696 | # Doctest - No clear guidance on how to use the "Option Flag", and the explanation of "Which Docstrings Are Examined" is ambiguous.
# Documentation
Which Docstrings Are Examined :- The provided documentation lacks clarity and precision. It doesn’t clearly specify what “is true” means in the context of M.test, and it uses unclear language when describing how strings are treated. Additionally, it doesn’t explain why this information is relevant or how it’s practically used.
Option flag :- The documentation provides detailed information about various option flags in doctest, but it lacks real-world examples or scenarios where these flags might be used. Without practical examples, users may find it challenging to understand when and how to apply these flags effectively in their code or documentation testing processes. Providing concrete examples and use cases would greatly enhance the usability of this documentation.
<!-- gh-linked-prs -->
### Linked PRs
* gh-109696
* gh-111077
* gh-111078
<!-- /gh-linked-prs -->
| bcc941bd4a7fbed3b20f5a5fc68b183fda1506a5 | 63acf78d710461919b285213fadc817108fb754e |
python/cpython | python__cpython-109506 | # `test_asyncio` has several expired `hasattr` checks
# Bug report
The first one: https://github.com/python/cpython/blob/e57ecf6bbc59f999d27b125ea51b042c24a07bd9/Lib/test/test_asyncio/test_events.py#L2338-L2340
From: https://github.com/python/cpython/commit/8dc3e438398046c6bb989e038cd08280aa356982
I cannot think of any reasonable python version / alternative interpreter that does not have `collections.abc.Coroutine`. Since this commit was added >7 years ago, I think it can be removed for good.
Another one is: https://github.com/python/cpython/blob/e57ecf6bbc59f999d27b125ea51b042c24a07bd9/Lib/test/test_asyncio/utils.py#L40
From: https://github.com/python/cpython/commit/f111b3dcb414093a4efb9d74b69925e535ddc470#diff-73a82d560b5e6a3b3abd58b460de2b09eaf53e3d1598a263a7b3a61d41068d34
This one is also from 6 years ago; now `support` always has this member.
<!-- gh-linked-prs -->
### Linked PRs
* gh-109506
<!-- /gh-linked-prs -->
| 0d20fc7477a21328dd353071eaa06384bb818f7b | b10de68c6ceae1076cdc98c890b9802dc81a7f44 |
python/cpython | python__cpython-109539 | # Calling `Py_DECREF` twice does not result in error with debug build
# Bug report
### Bug description:
[Documented behavior](https://github.com/python/cpython/blob/d6892c2b9263b39ea1c7905667942914b6a24b2c/Misc/SpecialBuilds.txt#L30) of Py_REF_DEBUG includes:
> Py_REF_DEBUG also checks after every decref to verify that the refcount hasn't
gone negative, and causes an immediate fatal error if it has.
Calling `Py_DECREF` twice does not result in the expected error:
```c
static PyObject* spam_double_decref(PyObject *self, PyObject *args) {
printf("spam_double_decref ... begin\n");
PyObject *obj = Py_BuildValue("s", "foobar");
Py_DECREF (obj);
Py_DECREF (obj); // Expect error, but does not error when using cpython built with `--with-pydebug`.
printf("spam_double_decref ... end\n");
Py_RETURN_NONE;
}
```
To reproduce:
Build CPython with `--with-pydebug`:
```sh
cd /home/kevin/code/cpython
mkdir debug
cd debug/
../configure --with-pydebug
make -j16
```
Build [this sample extension](https://github.com/kevinAlbs/double_decref_extension) calling `Py_DECREF` twice using the debug build of CPython:
```sh
PYTHON=/home/kevin/code/cpython/debug/python
$PYTHON setup.py build
```
Run a test with this extension:
```sh
# Add path to built extension to `PYTHONPATH`.
export PYTHONPATH=/home/kevin/code/cpython/KEVINALBS/double_decref_extension/build/lib.linux-x86_64-cpython-313-pydebug
PYTHON=/home/kevin/code/cpython/debug/python
$PYTHON -c "import spam; spam.double_decref()"
# Prints:
# spam_double_decref ... begin
# spam_double_decref ... end
```
No error is indicated, but an error is expected.
Extension source is located here: https://github.com/kevinAlbs/double_decref_extension
Tested with cpython main branch on commit: 929cc4e4a0999b777e1aa94f9c007db720e67f43.
If this issue is confirmed, I may be interested to investigate possible solutions.
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-109539
* gh-109545
* gh-109573
* gh-109578
<!-- /gh-linked-prs -->
| 0bb0d88e2d4e300946e399e088e2ff60de2ccf8c | ef659b96169888e52b6ff03ce28fffaaa8f76818 |
python/cpython | python__cpython-109494 | # _pydatetime.datetime has extra unnecessary slots
# Bug report
### Bug description:
Pre patch
```py
from _pydatetime import datetime
import sys
sys.getsizeof(datetime(1970, 1, 1)) # 152
```
Post patch
```py
from _pydatetime import datetime
import sys
sys.getsizeof(datetime(1970, 1, 1)) # 120
```
I had a look round for any other cases of this but couldn't find any just using a simple GitHub search. It might in the future be useful to use https://github.com/ariebovenberg/slotscheck (which I wasn't personally able to figure out how to run on cpython) in CI to prevent things like this in the future.
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-109494
<!-- /gh-linked-prs -->
| 7d57288f6d0e7fffb2002ceb460784d39277584a | 20bc5f7c28a6f8a2e156c4a748ffabb5efc7c761 |
python/cpython | python__cpython-109486 | # Further improve `test_future_stmt`
# Bug report
Based on the review of @vstinner in https://github.com/python/cpython/pull/109368#pullrequestreview-1628376115
I propose to:
1. Replace `badsyntax_future` files with regular code samples, so we can more easily manage them
2. Keep at least one file to be sure that it also works as intended for modules
3. Apply better names
I have a PR ready :)
<!-- gh-linked-prs -->
### Linked PRs
* gh-109486
<!-- /gh-linked-prs -->
| 94c95d42a3b74729ee775f583fb7951f05d16c77 | 59f32a785fbce76a0fa02fe6d671813057714a0a |