| repo | instance_id | problem_statement | merge_commit | base_commit |
|---|---|---|---|---|
python/cpython | python__cpython-114686 | # `_imp.get_frozen_object` possible incorrect use of `PyBUF_READ`
# Bug report
It is documented that `PyBUF_READ` should only be used with `memoryview` objects, but here it is passed to `PyObject_GetBuffer`:
https://github.com/python/cpython/blob/5ecfd750b4f511f270c38f0d748da9cffa279295/Python/import.c#L3547
Other similar places that access `.buf` and `.len` just use `PyBUF_SIMPLE`, which I think we should use here as well.
I will send a PR.
Originally found by @serhiy-storchaka in https://github.com/python/cpython/pull/114669#issuecomment-1913600699
<!-- gh-linked-prs -->
### Linked PRs
* gh-114686
* gh-114700
* gh-114701
* gh-114707
* gh-114802
<!-- /gh-linked-prs -->
| 1ac1b2f9536a581f1656f0ac9330a7382420cda1 | 2124a3ddcc0e274521f74d239f0e94060e17dd7f |
python/cpython | python__cpython-114683 | # Incorrect deprecation warning for 'N' specifier
# Bug report
### Bug description:
In the main branch, the following warning is issued whenever a fill character (possibly a multi-byte UTF-8 character) is 'N' or contains 'N':
```
>>> from _decimal import *
>>> Decimal("0").__format__("N<g")
<stdin>-2:1: DeprecationWarning: Format specifier 'N' is deprecated
Decimal("0").__format__("N<g")
'0'
```
Fix (after applying the patches from #114563):
https://www.bytereef.org/contrib/0003-main-fix-deprecation-warning.patch
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-114683
<!-- /gh-linked-prs -->
| aa3402ad451777d8dd3ec560e14cb16dc8540c0e | 0cd9bacb8ad41fe86f95b326e9199caa749539eb |
python/cpython | python__cpython-114672 | # `_testbuffer.c`'s initialization does not handle errors
# Bug report
I found three problematic places:
https://github.com/python/cpython/blob/a768e12f094a9b14a9a1680fb50330e1050716c4/Modules/_testbuffer.c#L2841-L2844
Here the first `AttributeError` can be swallowed by the second one.
https://github.com/python/cpython/blob/a768e12f094a9b14a9a1680fb50330e1050716c4/Modules/_testbuffer.c#L2850-L2879
Here `PyModule_AddIntMacro` can return an error. It is not checked.
https://github.com/python/cpython/blob/a768e12f094a9b14a9a1680fb50330e1050716c4/Modules/_testbuffer.c#L2829-L2835
`PyModule_AddObject` can also return an error.
I have a PR ready.
<!-- gh-linked-prs -->
### Linked PRs
* gh-114672
* gh-115271
* gh-115272
<!-- /gh-linked-prs -->
| 3a5b38e3b465e00f133ff8074a2d4afb1392dfb5 | 33f56b743285f8419e92cfabe673fa165165a580 |
python/cpython | python__cpython-114681 | # Add IndexError to tutorial datastructures list.pop doc
Documentation
In the [Datastructures](https://github.com/python/cpython/blob/main/Doc/tutorial/datastructures.rst?plain=1#L47-L54) tutorial, the `remove` and `index` methods mention that a `ValueError` may be raised.
The tutorial is not intended to be an extensive reference, but I feel it is better to state the potential exception.
We could add the boldfaced text to the paragraph:
> `list.pop([i])`
> Remove the item at the given position in the list, and return it. If no index is specified, `a.pop()` removes and returns the last item in the list. **It raises an `IndexError` if the list is empty or the index is outside the list range.** (The square brackets around the *i* in the method signature denote that the parameter is optional, not that you should type square brackets at that position. You will see this notation frequently in the Python Library Reference.)
Thoughts?
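Both conditions in the proposed sentence are easy to demonstrate interactively; the exact error messages shown in the comments come from CPython itself:

```python
msgs = []

# Index outside the list range:
try:
    [10, 20, 30].pop(5)
except IndexError as e:
    msgs.append(str(e))  # "pop index out of range"

# Empty list:
try:
    [].pop()
except IndexError as e:
    msgs.append(str(e))  # "pop from empty list"

print(msgs)
```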
<!-- gh-linked-prs -->
### Linked PRs
* gh-114681
* gh-114841
* gh-114842
<!-- /gh-linked-prs -->
| 57c3e775df5a5ca0982adf15010ed80a158b1b80 | 586057e9f80d57f16334c0eee8431931e4aa8cff |
python/cpython | python__cpython-115005 | # Display csv.Error without TypeError context
# Bug report
### Bug description:
Working my way through some documentation nits for the `csv` module, I noticed this method definition:
```
def _validate(self):
try:
_Dialect(self)
except TypeError as e:
# We do this for compatibility with py2.3
raise Error(str(e))
```
Why should we still try and maintain compatibility with Python 2.3 at this late date? Shouldn't we just let `TypeError` bubble up to the user?
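For reference, a quick reproduction of the current chaining behaviour. The class name `Incomplete` is made up for illustration; subclassing `csv.Dialect` without defining the required attributes makes `_Dialect(self)` raise the `TypeError` that `_validate()` converts into `csv.Error` (the original `TypeError` remains reachable as `__context__`):

```python
import csv

class Incomplete(csv.Dialect):
    """Deliberately omits required attributes such as delimiter."""

try:
    Incomplete()
    err = None
except csv.Error as e:
    err = e

print(type(err).__name__, '<-', type(err.__context__).__name__)  # Error <- TypeError
```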
### CPython versions tested on:
3.13
### Operating systems tested on:
macOS
<!-- gh-linked-prs -->
### Linked PRs
* gh-115005
<!-- /gh-linked-prs -->
| e207cc181fbb0ceb30542fd0d68140c916305f57 | 391659b3da570bfa28fed5fbdb6f2d9c26ab3dd0 |
python/cpython | python__cpython-114627 | # Expose unprefixed variants of `_PyCFunctionFast` and `_PyCFunctionFastWithKeywords`
`METH_FASTCALL` is documented as part of the stable API since Python 3.10.
The accompanying function pointer typedefs `_PyCFunctionFast` and `_PyCFunctionFastWithKeywords` have leading underscore names, which I understand to hint at these being private / internal APIs.
I think this is potentially an oversight and these function pointer typedefs should have new public names `PyCFunctionFast` and `PyCFunctionFastWithKeywords`?
<!-- gh-linked-prs -->
### Linked PRs
* gh-114627
* gh-115561
<!-- /gh-linked-prs -->
| 9e3729bbd77fb9dcaea6a06ac760160136d80b79 | 32f8ab1ab65c13ed70f047ffd780ec1fe303ff1e |
python/cpython | python__cpython-120033 | # 3.13 dis: the show_caches argument is deprecated but the documentation is incomplete
# Documentation
show_caches was deprecated in 3.13 for dis.get_instructions.
The documentation misses the following things:
## get_instructions:
The documentation says:
> The show_caches parameter is deprecated and has no effect. The cache_info field of each instruction is populated regardless of its value.
problems:
* What does 'no effect' mean?
* The cache_info field is also not well documented.
My first assumption was that "The cache_info field of each instruction" would mean that each instruction (including Cache instructions) would be part of the returned list, but this is not the case.
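A small experiment makes the point concrete. The function `f` is just a placeholder; `cache_info` only exists on 3.13+, so it is probed defensively here:

```python
import dis

def f(x):
    return x + 1

instructions = list(dis.get_instructions(f))

# CACHE entries are *not* part of the returned list:
has_cache_entries = any(i.opname == "CACHE" for i in instructions)
print(has_cache_entries)  # False

# On 3.13+ the cache information moved into a per-instruction field instead:
print(getattr(instructions[0], "cache_info", "no cache_info on this version"))
```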
## Bytecode:
Bytecode() also has a show_caches parameter that no longer has any effect, but the deprecation is not documented.
# Questions
Is it still possible to get access to the Cache bytecodes? They are only visible with dis.dis(...,show_caches=True) but I found no way to access the `Cache` bytecode objects.
Should this deprecation be part of the whatsnew page for 3.13?
<!-- gh-linked-prs -->
### Linked PRs
* gh-120033
* gh-120079
<!-- /gh-linked-prs -->
| 770f3c1eadd3392c72fd55be47770234dd143a14 | 69b3e8ea569faabccd74036e3d0e5ec7c0c62a20 |
python/cpython | python__cpython-114612 | # `pathlib.PurePath.with_stem('')` promotes file extension to stem
# Bug report
```python
>>> pathlib.PurePath('foo.jpg').with_stem('').stem
'.jpg'
```
The `with_stem()` call should raise `ValueError`, as it's impossible for a filename to have an empty stem and a non-empty suffix.
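Current and proposed behaviour can be captured in one snippet; the `ValueError` branch describes the fix requested here, while the other branch shows the reported bug:

```python
import pathlib

p = pathlib.PurePath('foo.jpg')
try:
    result = p.with_stem('')
    outcome = ('no error', result.stem)  # reported bug: suffix promoted to stem
except ValueError as e:
    outcome = ('ValueError', str(e))     # behaviour requested by this issue
print(outcome)
```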
<!-- gh-linked-prs -->
### Linked PRs
* gh-114612
* gh-114613
<!-- /gh-linked-prs -->
| e3dedeae7abbeda0cb3f1d872ebbb914635d64f2 | 53c5c17e0a97ee06e511c89f1ca6ceb38fd06246 |
python/cpython | python__cpython-116513 | # Rename `pathlib.PurePath.pathmod`
# Feature or enhancement
In #100502 we added a `pathmod` attribute that points to either `posixpath` or `ntpath`, and supplies low-level lexical manipulation functions used by `PurePath`.
The name "pathmod" implies the value is a module, and while that's true in `PurePath` and its subclasses, it's not necessarily true in `_abc.PurePathBase` and its subclasses. In fact, user subclasses of `PurePathBase` and `PathBase` are more likely to implement pathmod using a class, and not a module.
The ABCs are currently private, but they may be made public in future, and so I think it's worth improving the name _now_ to reduce probable future churn/grief.
Downstream issue: https://github.com/barneygale/pathlib-abc/issues/19
<!-- gh-linked-prs -->
### Linked PRs
* gh-116513
<!-- /gh-linked-prs -->
| 752e18389ed03087b51b38eac9769ef8dfd167b7 | bfc57d43d8766120ba0c8f3f6d7b2ac681a81d8a |
python/cpython | python__cpython-114573 | # ssl: Fix locking in cert_store_stats and get_ca_certs
# Bug report
### Bug description:
Filing this to attach a PR to.
cert_store_stats and get_ca_certs query the SSLContext's X509_STORE with X509_STORE_get0_objects, but reading the result requires a lock. See https://github.com/openssl/openssl/pull/23224 for details.
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Linux, macOS
<!-- gh-linked-prs -->
### Linked PRs
* gh-114573
* gh-115547
* gh-115548
* gh-115549
* gh-118109
* gh-118442
<!-- /gh-linked-prs -->
| bce693111bff906ccf9281c22371331aaff766ab | 58cb634632cd4d27e1348320665bcfa010e9cbb2 |
python/cpython | python__cpython-114574 | # Do not allocate non-PyObjects with PyObject_Malloc()
# Feature or enhancement
The free-threaded build requires that Python objects -- and only Python objects -- be allocated through the Python object allocation APIs (like `PyObject_Malloc()` or `PyType_GenericNew()`) [^1].
There are a few places internally [^2] that use `PyObject_Malloc()` for non Python objects. We should switch those call sites to use `PyMem_Malloc()/Free()` instead.
Note that there is not a significant difference between using `PyObject_Malloc()` and `PyMem_Malloc()` in the default build. Both calls use obmalloc under the hood, so switching from one to the other should not matter for the default build.
<details>
<summary>Here are some examples, but this list may not be exhaustive:</summary>
https://github.com/python/cpython/blob/841eacd07646e643f87d7f063106633a25315910/Modules/_sre/sre_lib.h#L1125
https://github.com/python/cpython/blob/841eacd07646e643f87d7f063106633a25315910/Modules/_elementtree.c#L270
https://github.com/python/cpython/blob/841eacd07646e643f87d7f063106633a25315910/Modules/_elementtree.c#L498-L499
https://github.com/python/cpython/blob/main/Modules/mathmodule.c#L2573
https://github.com/python/cpython/blob/841eacd07646e643f87d7f063106633a25315910/Modules/pyexpat.c#L23-L24
https://github.com/python/cpython/blob/841eacd07646e643f87d7f063106633a25315910/Objects/bytearrayobject.c#L135
https://github.com/python/cpython/blob/841eacd07646e643f87d7f063106633a25315910/Parser/lexer/lexer.c#L132
</details>
[^1]: See https://peps.python.org/pep-0703/#backwards-compatibility:~:text=Non%2DPython%20objects%20must%20not%20be%20allocated%20through%20those%20APIs
[^2]: @DinoV fixed the use in `dictobject.c` in https://github.com/python/cpython/pull/114543
<!-- gh-linked-prs -->
### Linked PRs
* gh-114574
* gh-114587
* gh-114690
<!-- /gh-linked-prs -->
| dcd28b5c35dda8e2cb7c5f66450f2aff0948c001 | d0f7f5c41d71758c59f9372a192e927d73cf7c27 |
python/cpython | python__cpython-114879 | # Possible memory leak in _cdecimal.c with 'z' format
# Bug report
### Bug description:
See https://mail.python.org/archives/list/python-announce-list@python.org/thread/DHZROL7YYJZTPWJQ3WME4HI3Z65K2H4F/
This feature was originally implemented in https://github.com/python/cpython/pull/30049
I have not verified the memory leak or looked at Stefan's suggestions.
@mdickinson @belm0
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-114879
* gh-115353
* gh-115384
<!-- /gh-linked-prs -->
| 72340d15cdfdfa4796fdd7c702094c852c2b32d2 | 235cacff81931a68e8c400bb3919ae6e55462fb5 |
python/cpython | python__cpython-114565 | # ``test_winconsoleio`` prints unnecessary information
# Bug report
### Bug description:
```
./python -m test -v test_winconsoleio
Running Debug|x64 interpreter...
== CPython 3.13.0a3+ (heads/main:07ef63fb6a, Jan 25 2024, 18:36:12) [MSC v.1933 64 bit (AMD64)]
== Windows-10-10.0.19043-SP0 little-endian
== Python build: debug
== cwd: C:\Users\KIRILL-1\CLionProjects\cpython\build\test_python_worker_21244æ
== CPU count: 16
== encodings: locale=cp1251 FS=utf-8
== resources: all test resources are disabled, use -u option to unskip tests
Using random seed: 4064093575
0:00:00 Run 1 test sequentially
0:00:00 [1/1] test_winconsoleio
test_abc (test.test_winconsoleio.WindowsConsoleIOTests.test_abc) ... ok
test_conin_conout_names (test.test_winconsoleio.WindowsConsoleIOTests.test_conin_conout_names) ... ok
test_conout_path (test.test_winconsoleio.WindowsConsoleIOTests.test_conout_path) ... ok
test_ctrl_z (test.test_winconsoleio.WindowsConsoleIOTests.test_ctrl_z) ... Ä^Z
ok
test_input (test.test_winconsoleio.WindowsConsoleIOTests.test_input) ... abc123
ϼўТλФЙ
A͏B ﬖ̳AA̝
ok
test_input_nonbmp (test.test_winconsoleio.WindowsConsoleIOTests.test_input_nonbmp) ... skipped 'Handling Non-BMP characters is broken'
test_open_fd (test.test_winconsoleio.WindowsConsoleIOTests.test_open_fd) ... ok
test_open_name (test.test_winconsoleio.WindowsConsoleIOTests.test_open_name) ... ok
test_partial_reads (test.test_winconsoleio.WindowsConsoleIOTests.test_partial_reads) ... ϼўТλФЙ
ϼўТλФЙ
ϼўТλФЙ
ϼўТλФЙ
ϼўТλФЙ
ϼўТλФЙ
ϼўТλФЙ
ϼўТλФЙ
ϼўТλФЙ
ϼўТλФЙ
ϼўТλФЙ
ϼўТλФЙ
ϼўТλФЙ
ϼўТλФЙ
ϼўТλФЙ
ok
test_partial_surrogate_reads (test.test_winconsoleio.WindowsConsoleIOTests.test_partial_surrogate_reads) ... skipped 'Handling Non-BMP characters is broken'
test_subclass_repr (test.test_winconsoleio.WindowsConsoleIOTests.test_subclass_repr) ... ok
test_write_empty_data (test.test_winconsoleio.WindowsConsoleIOTests.test_write_empty_data) ... ok
----------------------------------------------------------------------
Ran 10 tests in 0.036s
OK (skipped=2)
== Tests result: SUCCESS ==
1 test OK.
Total duration: 207 ms
Total tests: run=10 skipped=2
Total test files: run=1/1
Result: SUCCESS
```
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Windows
<!-- gh-linked-prs -->
### Linked PRs
* gh-114565
* gh-114566
* gh-114567
<!-- /gh-linked-prs -->
| 33ae9895d4ac0d88447e529038bc4725ddd8c291 | d96358ff9de646dbf64dfdfed46d510da7ec4803 |
python/cpython | python__cpython-114662 | # Must `object.__dir__` return a sequence of `str`, or is any iterable OK?
According to the documentation [`__dir__` must return a `Sequence`](https://docs.python.org/3/reference/datamodel.html#object.__dir__), but e.g. typeshed's annotations currently imply that any iterable will do:
```py
class object:
def __dir__(self) -> Iterable[str]: ...
```
https://github.com/python/typeshed/blob/26452a7c68c6ad512be00b553f7a03f869f64be9/stdlib/builtins.pyi#L125
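A minimal check of CPython's actual behaviour, which matches the typeshed annotation rather than the documented `Sequence` requirement. The class `Spam` is invented for illustration:

```python
class Spam:
    def __dir__(self):
        # A generator, not a Sequence; CPython currently accepts it.
        return (name for name in ("spam", "eggs"))

names = dir(Spam())
print(names)  # ['eggs', 'spam'] -- dir() sorts whatever __dir__ yields
```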
<!-- gh-linked-prs -->
### Linked PRs
* gh-114662
* gh-115234
* gh-115235
<!-- /gh-linked-prs -->
| e19103a346f0277c44a43dfaebad9a5aa468bf1e | b2d9d134dcb5633deebebf2b0118cd4f7ca598a2 |
python/cpython | python__cpython-117996 | # [doc] subprocess security considerations needs a Windows-specific exception
The documentation at https://docs.python.org/3/library/subprocess.html#security-considerations says that "this implementation will never implicitly call a system shell".
While this is technically true, on Windows the underlying CreateProcess API may create a system shell, which then exposes arguments to shell parsing. This happens when passed a `.bat` or `.cmd` file.
PSRT review of the issue determined that we can't safely detect and handle this situation without causing new issues and making it more complex for users to work around when they want to intentionally launch a batch file without shell processing. For the two cases of untrusted input, an untrusted application/`argv[0]` is already vulnerable, and an untrusted argument/`argv[1:]` is safe provided `argv[0]` is controlled. However, we do need to inform developers of the inconsistency so they can check their own use.
We'll use this issue to ensure we get good wording. First proposal in the next comment.
Thanks to RyotaK for reporting responsibly to the Python Security Response Team.
<!-- gh-linked-prs -->
### Linked PRs
* gh-117996
* gh-118002
* gh-118003
* gh-118004
* gh-118005
* gh-118006
<!-- /gh-linked-prs -->
| a4b44d39cd6941cc03590fee7538776728bdfd0a | 353ea0b273b389e075b2ac9687d3e27467b893cd |
python/cpython | python__cpython-114513 | # test_embed failures after switching configure options due to missing Makefile dependencies
# Bug report
### Bug description:
`test_embed` started failing for me recently on free threaded builds on both M1 Mac and Ubuntu (Intel). `git bisect` narrowed it down to a suspect commit. @colesbury this is probably for you.
Unfortunately, on my Mac I get this commit as the culprit:
```shell
% git bisect bad
5f1997896d9c3ecf92e9863177c452b468a6a2c8 is the first bad commit
commit 5f1997896d9c3ecf92e9863177c452b468a6a2c8
Author: Sam Gross <colesbury@gmail.com>
Date: Tue Jan 23 13:05:15 2024 -0500
gh-112984: Fix link error on free-threaded Windows build (GH-114455)
The test_peg_generator test tried to link the python313_d.lib library,
which failed because the library is now named python313t_d.lib. The
underlying problem is that the "compiler" attribute was not set when
we call get_libraries() from distutils.
Tools/peg_generator/pegen/build.py | 3 +++
1 file changed, 3 insertions(+)
```
while in my Linux environment it says this:
```
% git bisect bad
b331381485c1965d1c88b7aee7ae9604aca05758 is the first bad commit
commit b331381485c1965d1c88b7aee7ae9604aca05758
Author: Sam Gross <colesbury@gmail.com>
Date: Tue Jan 16 16:42:15 2024 -0500
gh-112529: Track if debug allocator is used as underlying allocator (#113747)
* gh-112529: Track if debug allocator is used as underlying allocator
The GC implementation for free-threaded builds will need to accurately
detect if the debug allocator is used because it affects the offset of
the Python object from the beginning of the memory allocation. The
current implementation of `_PyMem_DebugEnabled` only considers if the
debug allocator is the outer-most allocator; it doesn't handle the case
of "hooks" like tracemalloc being used on top of the debug allocator.
This change enables more accurate detection of the debug allocator by
tracking when debug hooks are enabled.
* Simplify _PyMem_DebugEnabled
Include/internal/pycore_pymem.h | 3 +++
Include/internal/pycore_pymem_init.h | 2 ++
Include/internal/pycore_runtime_init.h | 1 +
Objects/obmalloc.c | 21 +++++++++++++++------
4 files changed, 21 insertions(+), 6 deletions(-)
```
(I backed up farther on my Linux box to find a good commit.) 16 January was a long time ago. My inclination is that the later commit is the culprit.
I then tried with the GIL enabled on main. Still an error. Hmmm... I went through the bisect process again on my Mac and landed on this as the putative culprit commit:
```
% git bisect good
441affc9e7f419ef0b68f734505fa2f79fe653c7 is the first bad commit
commit 441affc9e7f419ef0b68f734505fa2f79fe653c7
Author: Sam Gross <colesbury@gmail.com>
Date: Tue Jan 23 13:08:23 2024 -0500
gh-111964: Implement stop-the-world pauses (gh-112471)
The `--disable-gil` builds occasionally need to pause all but one thread. Some
examples include:
* Cyclic garbage collection, where this is often called a "stop the world event"
* Before calling `fork()`, to ensure a consistent state for internal data structures
* During interpreter shutdown, to ensure that daemon threads aren't accessing Python objects
This adds the following functions to implement global and per-interpreter pauses:
* `_PyEval_StopTheWorldAll()` and `_PyEval_StartTheWorldAll()` (for the global runtime)
* `_PyEval_StopTheWorld()` and `_PyEval_StartTheWorld()` (per-interpreter)
(The function names may change.)
These functions are no-ops outside of the `--disable-gil` build.
Include/cpython/pystate.h | 2 +-
Include/internal/pycore_ceval.h | 1 +
Include/internal/pycore_interp.h | 17 +++
Include/internal/pycore_llist.h | 3 +-
Include/internal/pycore_pystate.h | 51 +++++--
Include/internal/pycore_runtime.h | 7 +
Include/internal/pycore_runtime_init.h | 3 +
Include/pymacro.h | 3 +
Python/ceval_gil.c | 9 ++
Python/pystate.c | 269 +++++++++++++++++++++++++++++++--
10 files changed, 336 insertions(+), 29 deletions(-)
```
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Linux, macOS
<!-- gh-linked-prs -->
### Linked PRs
* gh-114513
<!-- /gh-linked-prs -->
| 2afc7182e66635b3ec7efb59d2a6c18a7ad1f215 | 60375a38092b4d4dec9a826818a20adc5d4ff2f7 |
python/cpython | python__cpython-114495 | # New test_termios fails on Alpinelinux
# Bug report
### Bug description:
As reported https://github.com/python/cpython/issues/81002#issuecomment-1904867665 where the test was added
I'm upgrading Python in Alpinelinux to 3.11.7 and testing upgrade to 3.12.1 (both fails the same test)
```
Re-running test_termios in verbose mode (matching: test_tcgetattr)
test_tcgetattr (test.test_termios.TestFunctions.test_tcgetattr) ... FAIL
======================================================================
FAIL: test_tcgetattr (test.test_termios.TestFunctions.test_tcgetattr)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/builds/alpine/aports/main/python3/src/Python-3.11.7/Lib/test/test_termios.py", line 42, in test_tcgetattr
self.assertEqual(termios.tcgetattr(self.stream), attrs)
AssertionError: Lists differ: [1280[227 chars]', b'P', b'\xed', b'\x1a', b'\xf7', b'*', b'z'[18 chars]'(']] != [1280[227 chars]', b'\x01', b'\x00', b'\x00', b'\x00', b' ', b[27 chars]d8']]
First differing element 6:
[b'\x[197 chars]', b'P', b'\xed', b'\x1a', b'\xf7', b'*', b'z'[17 chars]b'(']
[b'\x[197 chars]', b'\x01', b'\x00', b'\x00', b'\x00', b' ', b[26 chars]xd8']
[1280,
5,
191,
35387,
15,
15,
[b'\x03',
b'\x1c',
b'\x7f',
b'\x15',
b'\x04',
b'\x00',
b'\x01',
b'\x00',
b'\x11',
b'\x13',
b'\x1a',
b'\x00',
b'\x12',
b'\x0f',
b'\x17',
b'\x16',
b'\x00',
b'\x00',
b'\x00',
b'|',
b'\xdc',
b'\xc9',
b'\xe7',
- b'P',
- b'\xed',
- b'\x1a',
? -
+ b'\x01',
? +
+ b'\x00',
+ b'\x00',
+ b'\x00',
+ b' ',
+ b'\xd5',
+ b'$',
b'\xf7',
+ b'\xd8']]
- b'*',
- b'z',
- b'd',
- b'\xf7',
- b'(']]
----------------------------------------------------------------------
Ran 1 test in 0.003s
```
### CPython versions tested on:
3.11, 3.12
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-114495
* gh-114502
* gh-114503
<!-- /gh-linked-prs -->
| d22c066b802592932f9eb18434782299e80ca42e | ce01ab536f22a3cf095d621f3b3579c1e3567859 |
python/cpython | python__cpython-114491 | # Add support for Mach-O linkage detection in Lib/platform.py
# Feature or enhancement
### Proposal:
Proposed behaviour on MacOS.
```python
>>> import platform
>>> platform.architecture()
('64bit', 'Mach-O')
```
Current behaviour as tested with Python 3.11.7 on `24.05.20240115.e062961+darwin4.44a6ec1`.
```python
>>> import platform
>>> platform.architecture()
('64bit', '')
```
### Has this already been discussed elsewhere?
This is a minor feature, which does not need previous discussion elsewhere
### Links to previous discussion of this feature:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-114491
<!-- /gh-linked-prs -->
| b5c7c84673b96bfdd7c877521a970f7a4beafece | 07236f5b39a2e534cf190cd4f7c73300d209520b |
python/cpython | python__cpython-114558 | # logging docstrings should use bool instead of int for exc_info
The documentation passes an integer as the `exc_info` parameter:
https://github.com/python/cpython/blob/8edc8029def8040ebe1caf75d815439156dd2124/Lib/logging/__init__.py#L1496
But the annotation doesn't allow that.
https://github.com/python/typeshed/blob/2168ab5ff4e40c37505b3e5048944603b71a857d/stdlib/logging/__init__.pyi#L66-L67
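A sketch of the mismatch: at runtime, both the integer used in the docstring example and the `bool` expected by the annotation behave identically:

```python
import io
import logging

stream = io.StringIO()
logging.basicConfig(stream=stream, level=logging.ERROR, force=True)

try:
    1 / 0
except ZeroDivisionError:
    # The docstring passes an int; type checkers expect a bool. Both work.
    logging.error("int flag", exc_info=1)
    logging.error("bool flag", exc_info=True)

print("ZeroDivisionError" in stream.getvalue())  # True
```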
<!-- gh-linked-prs -->
### Linked PRs
* gh-114558
* gh-114624
* gh-116242
<!-- /gh-linked-prs -->
| 07236f5b39a2e534cf190cd4f7c73300d209520b | df17b5264378f38f49b16343b5016a8882212a8a |
python/cpython | python__cpython-117778 | # Misleading comment "heap.sort() maintains the heap invariant"
# Documentation
The fourth paragraph of the documentation says:
"""
These two make it possible to view the heap as a regular Python list without surprises: heap[0] is the smallest item, and heap.sort() maintains the heap invariant!
"""
This seems to suggest that `heap.sort()` does not change `heap`, although that's clearly not true:
```python
heap = [1, 2, 4, 3] # this is already a valid heap
heapq.heapify(heap) # doesn't change `heap`
heap.sort() # this is now `[1, 2, 3, 4]`!!
```
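The sentence is presumably trying to say that the *result* of `heap.sort()` still satisfies the heap invariant (a sorted list is always a valid heap), not that `sort()` leaves the list untouched; a sketch of that reading:

```python
import heapq

heap = [1, 2, 4, 3]
heapq.heapify(heap)
heap.sort()                      # reorders the list to [1, 2, 3, 4]...
print(heap)

# ...but a sorted list always satisfies the heap invariant:
n = len(heap)
assert all(heap[i] <= heap[c]
           for i in range(n)
           for c in (2 * i + 1, 2 * i + 2) if c < n)

heapq.heappush(heap, 0)          # so heapq operations remain valid afterwards
print(heap[0])                   # 0
```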
<!-- gh-linked-prs -->
### Linked PRs
* gh-117778
* gh-117835
<!-- /gh-linked-prs -->
| 37a4cbd8727fe392dd5c78aea60a7c37fdbad89a | c2a551a30b520e520b084eec251f168549e1a3f0 |
python/cpython | python__cpython-114457 | # Lower the recursion limit under pydebug WASI
# Bug report
Setting it to 150 allows hitting `RecursionError` instead of having a WASI host like wasmtime trigger its stack depth protections.
<!-- gh-linked-prs -->
### Linked PRs
* gh-114457
<!-- /gh-linked-prs -->
| f59f90b5bccb9e7ac522bc779ab1f6bf11bb4aa3 | afe8f376c096d5d6e8b12fbc691ca9b35381470b |
python/cpython | python__cpython-114449 | # [dev convenience] Compared histograms in summarize_stats.py output should not be sorted
# Bug report
### Bug description:
When comparing two pystats results, currently the tables that compare two histograms are sorted by the amount of change in each row. This is confusing, because the histogram buckets get out of order. It would be better to leave these tables sorted by the histogram buckets.
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-114449
<!-- /gh-linked-prs -->
| e45bae7a45e5696c3ebdf477ecc948374cf8ebff | 5cd9c6b1fca549741828288febf9d5c13293847d |
python/cpython | python__cpython-114571 | # Windows: test_os.test_stat_inaccessible_file() fails if run as an administrator
When test_os is run as an administrator, like running tests in a SSH shell for example, the test fails:
```
FAIL: test_stat_inaccessible_file (test.test_os.Win32NtTests.test_stat_inaccessible_file)
----------------------------------------------------------------------
Traceback (most recent call last):
File "C:\victor\python\main\Lib\test\test_os.py", line 3134, in test_stat_inaccessible_file
self.assertEqual(0, stat2.st_dev)
~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^
AssertionError: 0 != 10298252993199238233
```
The test was added recently by commit ed066481c76c6888ff5709f5b9f93b92c232a4a6 of issue gh-111877.
<!-- gh-linked-prs -->
### Linked PRs
* gh-114571
* gh-115851
<!-- /gh-linked-prs -->
| d91ddff5de61447844f1dac575d2e670c8d7e26b | 102569d150b690efe94c13921e93da66081ba1cf |
python/cpython | python__cpython-114424 | # Remove DummyThread from threading._active when thread dies
# Feature or enhancement
### Proposal:
This has bothered me quite a bit already: if some code calls threading.current_thread() in a thread that was not created by the threading module, the resulting dummy thread lingers in threading._active forever (or until another thread with the same thread.ident is created, which isn't that uncommon).
My suggestion would be to remove it when the thread dies.
I'll provide a PR.
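A sketch of how such a dummy thread comes into being: spawning a thread with the low-level `_thread` module and calling `current_thread()` inside it registers a `_DummyThread`. The `worker` name and the polling loop are just illustration scaffolding to avoid a race in the example:

```python
import _thread
import threading
import time

seen = []

def worker():
    # current_thread() in a thread not created by the threading module
    # registers a _DummyThread in threading._active.
    seen.append(threading.current_thread())

_thread.start_new_thread(worker, ())
for _ in range(200):            # small poll loop to avoid a race
    if seen:
        break
    time.sleep(0.01)

print(type(seen[0]).__name__)   # _DummyThread
```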
### Has this already been discussed elsewhere?
I have already discussed this feature proposal on Discourse
### Links to previous discussion of this feature:
https://discuss.python.org/t/remove-dummythread-from-threading-active-when-thread-dies/41322
<!-- gh-linked-prs -->
### Linked PRs
* gh-114424
<!-- /gh-linked-prs -->
| 5a1ecc8cc7d3dfedd14adea1c3cdc3cfeb79f0e1 | 7d21cae964fc47afda400fc1fbbcf7984fcfe819 |
python/cpython | python__cpython-114415 | # `_threadmodule` does not check for `NULL` after `PyType_GetModuleByDef` calls
# Bug report
`PyType_GetModuleByDef` can return `NULL`: https://github.com/python/cpython/blob/fd49e226700e2483a452c3c92da6f15d822ae054/Objects/typeobject.c#L4640-L4645
I think that right now this is a hypothetical issue, because in practice `NULL` will never be returned for these samples. But all other similar places do check for `NULL`, and I guess adding a check won't hurt. And if something changes in the future, we would be safe.
Samples:
https://github.com/python/cpython/blob/fd49e226700e2483a452c3c92da6f15d822ae054/Modules/_threadmodule.c#L903-L904
https://github.com/python/cpython/blob/fd49e226700e2483a452c3c92da6f15d822ae054/Modules/_threadmodule.c#L1044-L1045
https://github.com/python/cpython/blob/fd49e226700e2483a452c3c92da6f15d822ae054/Modules/_threadmodule.c#L1096-L1097
<!-- gh-linked-prs -->
### Linked PRs
* gh-114415
<!-- /gh-linked-prs -->
| d1b031cc58516e1aba823fd613528417a996f50d | 650f9e4c94711ff49ea4e13bf800945a6147b7e0 |
python/cpython | python__cpython-114393 | # Improve test_capi.test_structmembers
`test_capi.test_structmembers` tests writing and reading attributes that represent C struct members (defined via `PyMemberDef`). But for most types it only tests 1 or 2 valid values, and for some types maybe 1 or 2 values that trigger a RuntimeWarning. It does not test integer overflow errors. It does not test integer-like objects which are not instances of int (the range of accepted values is different for such objects due to quirks of the implementation). Some bugs (like #114388, but I'll open more issues for other bugs or quirks) slip through unnoticed.
I'm going to add support for more C types, so I need more comprehensive tests. The proposed PR unifies tests for integer members. It contains some special cases which will be removed when the implementation for integer members is fixed and unified.
<!-- gh-linked-prs -->
### Linked PRs
* gh-114393
* gh-115010
* gh-115030
<!-- /gh-linked-prs -->
| 15f6f048a6ecdf0f6f4fc076d013be3d110f8ed6 | 929d44e15a5667151beadb2d3a2528cd641639d6 |
python/cpython | python__cpython-114391 | # Incorrect warning when assign an unsigned integer member
Assigning a negative value to an unsigned integer member can emit a RuntimeWarning.
```pycon
$ ./python -Wa
>>> import _testcapi
>>> ts = _testcapi._test_structmembersType_NewAPI()
>>> ts.T_UBYTE = -42
<stdin>:1: RuntimeWarning: Truncation of value to unsigned char
>>> ts.T_USHORT = -42
<stdin>:1: RuntimeWarning: Truncation of value to unsigned short
>>> ts.T_UINT = -42
<stdin>:1: RuntimeWarning: Writing negative value into unsigned field
<stdin>:1: RuntimeWarning: Truncation of value to unsigned int
>>> ts.T_ULONG = -42
<stdin>:1: RuntimeWarning: Writing negative value into unsigned field
```
(And for T_ULONGLONG it raises an OverflowError, but this is perhaps another issue).
The first issue is that it emits two warnings for T_UINT. It is also weird that the two warnings differ.
The second issue is that if the value is not an int subclass, it emits a warning also for positive values for T_UINT and T_ULONG:
```pycon
>>> class Index:
... def __init__(self, value):
... self.value = value
... def __index__(self):
... return self.value
...
>>> ts.T_UBYTE = Index(42)
>>> ts.T_USHORT = Index(42)
>>> ts.T_UINT = Index(42)
<stdin>:1: RuntimeWarning: Writing negative value into unsigned field
>>> ts.T_ULONG = Index(42)
<stdin>:1: RuntimeWarning: Writing negative value into unsigned field
```
(No warning is emitted for assigning a negative index to T_ULONGLONG, but this is perhaps another issue).
<!-- gh-linked-prs -->
### Linked PRs
* gh-114391
* gh-115000
* gh-115001
* gh-115002
<!-- /gh-linked-prs -->
| d466052ad48091a00a50c5298f33238aff591028 | 7e42fddf608337e83b30401910d76fd75d5cf20a |
python/cpython | python__cpython-114385 | # `sys.set_asyncgen_hooks` docs are not correct about its signature
# Bug report
Refs https://github.com/python/cpython/pull/103132#discussion_r1448716543
The signature is `set_asyncgen_hooks(firstiter=<unrepresentable>, finalizer=<unrepresentable>)`.
The docs say it is: `set_asyncgen_hooks(firstiter, finalizer)`
The docstring says it is: `set_asyncgen_hooks(* [, firstiter] [, finalizer])`
It should be: `set_asyncgen_hooks([firstiter] [, finalizer])` (I guess)
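A quick experiment, assuming current CPython behaviour, showing that both arguments are optional and, contrary to the docstring's keyword-only notation, accepted positionally as well:

```python
import sys

old = sys.get_asyncgen_hooks()

# Both arguments are optional, and they work positionally too:
sys.set_asyncgen_hooks(None, None)
print(sys.get_asyncgen_hooks())

# Restore the previous hooks using keywords:
sys.set_asyncgen_hooks(firstiter=old.firstiter, finalizer=old.finalizer)
```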
<!-- gh-linked-prs -->
### Linked PRs
* gh-114385
* gh-114386
* gh-114387
<!-- /gh-linked-prs -->
| 38768e4cdd1c4b6e03702da8a94e1c22479d6ed3 | 96c15b1c8d03db5b7b5b719214d9d156b317ba9d |
python/cpython | python__cpython-114374 | # Awkward wording in GH-111835 whatsnew entry
# Documentation
"it requires" is a little unclear
```diff
diff --git a/Doc/whatsnew/3.13.rst b/Doc/whatsnew/3.13.rst
index 40f0cd37fe..eb00e4ad74 100644
--- a/Doc/whatsnew/3.13.rst
+++ b/Doc/whatsnew/3.13.rst
@@ -259,8 +259,8 @@ mmap
----
* The :class:`mmap.mmap` class now has an :meth:`~mmap.mmap.seekable` method
- that can be used where it requires a file-like object with seekable and
- the :meth:`~mmap.mmap.seek` method return the new absolute position.
+ that can be used when a file-like object with seekable is required. The
+ :meth:`~mmap.mmap.seek` method now returns the new absolute position.
(Contributed by Donghee Na and Sylvie Liberman in :gh:`111835`.)
```
I'm not sure if I should make a PR changing Doc/whatsnew/3.13.rst directly or something via Misc/NEWS.d
This refers #111852
<!-- gh-linked-prs -->
### Linked PRs
* gh-114374
<!-- /gh-linked-prs -->
| 5ce193e65a7e6f239337a8c5305895cf8a4d2726 | 57c3e775df5a5ca0982adf15010ed80a158b1b80 |
python/cpython | python__cpython-114334 | # Reference to flag variables broken in re.compile() documentation
# Documentation
The #93000 change set inadvertently caused a sentence in re.compile() documentation to no longer make sense. It says "Values can be any of the following variables..." (https://github.com/python/cpython/blob/681e9e85a2c1f72576ddfbd766506e2d6db34862/Doc/library/re.rst#L883), but there are no following variables documented anymore, since they were moved to a different sub-subsection above. (This is a docs regression in 3.10, 3.11, and 3.12.) This is, of course, quite trivial to fix, and I'll be submitting a PR shortly to do so.
<!-- gh-linked-prs -->
### Linked PRs
* gh-114334
* gh-114339
* gh-114340
<!-- /gh-linked-prs -->
| 567a85e9c15a3f7848330ae7bef3de2f70fc9f97 | 1e610fb05fa4ba61a759b68461f1a9aed07622fc |
python/cpython | python__cpython-117326 | # mimalloc should fail to allocate 784 271 641 GiB: test_decimal killed by the Linux kernel with OOM on a Free Threading build
# Bug report
### Bug description:
Hello. When we were updating Python 3.13 from a2 to a3 in Fedora, I wanted to enable free-threading on s390x and ppc64le, as it was previously disabled due to https://github.com/python/cpython/issues/112535
However, the free-threaded build fails to [build in Fedora Linux 40 on s390x](https://koji.fedoraproject.org/koji/taskinfo?taskID=111993787) with:
```
...
# Next, run the profile task to generate the profile information.
LD_LIBRARY_PATH=/builddir/build/BUILD/Python-3.13.0a3/build/freethreading ./python -m test --pgo --timeout=
Using random seed: 1705622400
0:00:00 load avg: 2.56 Run 44 tests sequentially
0:00:00 load avg: 2.56 [ 1/44] test_array
0:00:01 load avg: 2.56 [ 2/44] test_base64
0:00:02 load avg: 2.56 [ 3/44] test_binascii
0:00:02 load avg: 2.56 [ 4/44] test_binop
0:00:02 load avg: 2.56 [ 5/44] test_bisect
0:00:02 load avg: 2.56 [ 6/44] test_bytes
0:00:09 load avg: 2.63 [ 7/44] test_bz2
0:00:10 load avg: 2.63 [ 8/44] test_cmath
0:00:11 load avg: 2.63 [ 9/44] test_codecs
0:00:13 load avg: 2.63 [10/44] test_collections
0:00:14 load avg: 2.58 [11/44] test_complex
0:00:15 load avg: 2.58 [12/44] test_dataclasses
0:00:15 load avg: 2.58 [13/44] test_datetime
0:00:22 load avg: 2.78 [14/44] test_decimal
make: *** [Makefile:845: profile-run-stamp] Killed
```
Full log: [build.log](https://github.com/python/cpython/files/13994894/build.log.txt)
I'll try to build this on older Fedora Linux versions as well to eliminate other factors, such as GCC 14:
- [Fedora Linux 39](https://koji.fedoraproject.org/koji/taskinfo?taskID=112005303)
- [Fedora Linux 38](https://koji.fedoraproject.org/koji/taskinfo?taskID=112006015)
### CPython versions tested on:
3.13
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-117326
* gh-117327
* gh-117328
* gh-117331
<!-- /gh-linked-prs -->
| 6702d2bf6edcd5b5415e17837383623b9d76a5b8 | c1712ef066321c01bf09cba3f22fc474b5b8dfa7 |
python/cpython | python__cpython-114504 | # Add `PyList_GetItemRef`, a variant of `PyList_GetItem` that returns a strong reference
# Feature or enhancement
The free-threaded builds need a variant of `PyList_GetItem` that returns a strong reference instead of a borrowed reference for thread-safety reasons. PEP 703 proposed [`PyList_FetchItem`](https://peps.python.org/pep-0703/#borrowed-references), but since then `PyDict_GetItemRef` and functions with similar signatures have been added.
This proposes `PyList_GetItemRef` with the following signature:
```
PyObject *PyList_GetItemRef(PyObject *list, Py_ssize_t index)
```
Return a *strong reference* to the object at position index in the list pointed to by list. If `index` is out of bounds (<0 or >=len(list)), return NULL and set an IndexError. If `list` is not a list instance, return NULL and set a TypeError.
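In Python terms, the documented behaviour can be sketched as follows (a hedged illustration of the semantics only, not the C implementation; the helper name is made up):

```python
def list_get_item_ref(lst, index):
    """Pure-Python sketch of the proposed PyList_GetItemRef semantics."""
    if not isinstance(lst, list):
        raise TypeError("expected a list instance")
    if not 0 <= index < len(lst):
        # no negative indexing: index < 0 is out of bounds, like the C API
        raise IndexError("list index out of range")
    # At the C level the returned object carries a new (strong) reference,
    # so the caller is responsible for Py_DECREF when done with it.
    return lst[index]
```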
<!-- gh-linked-prs -->
### Linked PRs
* gh-114504
* gh-117520
<!-- /gh-linked-prs -->
| d0f1307580a69372611d27b04bbf2551dc85a1ef | 0e71a295e9530c939a5efcb45db23cf31e0303b4 |
python/cpython | python__cpython-114335 | # read1 in cbreak mode returns 0x0D (instead of 0x0A) when <enter> is entered.
# Bug report
### Bug description:
A <i>read1</i> of 'stdin' in cbreak mode returns 0x0D (instead of 0x0A) when \<enter\> is entered.
That is for Python version 3.12.1, version 3.11.6 returns 0x0A.
The following code demonstrates this. When run enter \<enter\>.
```python
"""
cbreak <enter> test
"""
import sys
import tty
import select
import termios
from time import sleep
# save stdin attributes
prevStdinAttributes = termios.tcgetattr(sys.stdin)
# set cbreak mode:
# "Enter cbreak mode. In cbreak mode (sometimes called “rare” mode) normal tty line buffering
# is turned off and characters are available to be read one by one. However, unlike raw mode,
# special characters (interrupt, quit, suspend, and flow control) retain their effects on the
# tty driver and calling program. Calling first raw() then cbreak() leaves the terminal in cbreak mode."
tty.setcbreak(sys.stdin, when = termios.TCSANOW)
c = b''
# wait for a byte
while True:
if select.select([sys.stdin], [], [], 0) == ([sys.stdin], [], []):
c = sys.stdin.buffer.read1(1)
break
sleep(.1)
# restore stdin attributes
termios.tcsetattr(sys.stdin, termios.TCSADRAIN, prevStdinAttributes)
# print result byte in hex
print(f"bytes: '\\x{c.hex().upper()}'")
```
### CPython versions tested on:
3.11, 3.12
### Operating systems tested on:
Linux, macOS
<!-- gh-linked-prs -->
### Linked PRs
* gh-114335
* gh-114410
<!-- /gh-linked-prs -->
| fd49e226700e2483a452c3c92da6f15d822ae054 | db1c18eb6220653290a3ba9ebbe1df44394a3f19 |
python/cpython | python__cpython-114322 | # Expose more constants in the fcntl module
# Feature or enhancement
See also:
* https://github.com/python/cpython/issues/90179
* https://github.com/python/cpython/issues/99987
* https://github.com/python/cpython/issues/113092
* https://github.com/python/cpython/issues/114061
<!-- gh-linked-prs -->
### Linked PRs
* gh-114322
<!-- /gh-linked-prs -->
| 1b719b39b9cd125258739699ac5168d139901b48 | 6d30cbee013b4182937ffa11a7c87d2a7b6b7b41 |
python/cpython | python__cpython-114479 | # TypeError for threading.Lock | None
# Bug report
### Bug description:
I tried to search for a related ticket by various queries but did not find any, hopefully I haven't overlooked it.
While rewriting a codebase from `Optional[foo]` to the newer `foo | None` style, I found that `threading.Lock` does not support the latter.
Example on MacOS 3.12.1:
```python
>>> from typing import Optional
>>> from threading import Lock
>>> Optional[Lock]
typing.Optional[<built-in function allocate_lock>]
>>> Lock | None
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: unsupported operand type(s) for |: 'builtin_function_or_method' and 'NoneType'
```
Given that the documentation states the following, the error makes sense:
> Note that Lock is actually a factory function which returns an instance of the most efficient version of the concrete Lock class that is supported by the platform.
I actually didn't know it was a factory function. I would not intentionally create a `<function> | None` type. But since the documentation describes `threading.Lock` as a class, one would expect to be able to use it as any other type.
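A hedged sketch of spellings that keep working whether `Lock` is a factory function or a real class (the helper name is illustrative):

```python
import threading
from typing import Optional, Union

# typing.Optional/Union accept callables, so these work even while
# threading.Lock is a factory function rather than a class:
OptLock = Optional[threading.Lock]
OptLock2 = Union[threading.Lock, None]

# A string annotation is another safe spelling, since it is not
# evaluated at definition time:
def run_guarded(lock: "threading.Lock | None" = None) -> None:
    if lock is None:
        lock = threading.Lock()
    with lock:
        pass
```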
### CPython versions tested on:
3.11, 3.12, 3.13.0a3
### Operating systems tested on:
Linux, macOS
<!-- gh-linked-prs -->
### Linked PRs
* gh-114479
<!-- /gh-linked-prs -->
| d96358ff9de646dbf64dfdfed46d510da7ec4803 | b52fc70d1ab3be7866ab71065bae61a03a28bfae |
python/cpython | python__cpython-114493 | # Add stats for "unlikely events" that will impact optimization.
Events such as using changing the `__code__` attribute of a function, or the `__bases__` of a class, are rare.
At least we presume that they are rare.
We want to optimize code with the assumption that they are rare (see See https://github.com/faster-cpython/ideas/issues/645).
So, we needs stats to confirm that they are rare, and hopefully give us some insight into how rare.
We should add a struct to the interpreter to track these events as they occur, and if stats are enabled add that data to the stats output.
<!-- gh-linked-prs -->
### Linked PRs
* gh-114493
* gh-114554
<!-- /gh-linked-prs -->
| ea3cd0498c443e93be441736c804258e93d21edd | c63c6142f9146e1e977f4c824c56e8979e6aca87 |
python/cpython | python__cpython-114287 | # Warning: `Modules/_io/fileio.c:175:9: warning: ‘exc’ may be used uninitialized [-Wmaybe-uninitialized]`
# Bug report
This code produces a warning:
https://github.com/python/cpython/blob/a34e4db28a98904f6c9976675ed7121ed61edabe/Modules/_io/fileio.c#L160-L176
Link: https://buildbot.python.org/all/#/builders/1061/builds/695/steps/7/logs/warnings__145_
I think that it is enough to add `= NULL;` to `PyObject *exc;`.
<!-- gh-linked-prs -->
### Linked PRs
* gh-114287
* gh-114288
<!-- /gh-linked-prs -->
| 05e47202a34e6ae05e699af1083455f5b8b59496 | a34e4db28a98904f6c9976675ed7121ed61edabe |
python/cpython | python__cpython-114282 | # `asyncio.staggered` has incorrect type hints
# Bug report
Correct ones: https://github.com/python/typeshed/blob/8885cc870cc1089cba30a6ac6d4ea1e32c83cedb/stdlib/asyncio/staggered.pyi
There are several major problems:
1. `loop: events.AbstractEventLoop = None`, relies on very old mypy behaviour
2. `coro_fns: typing.Iterable[typing.Callable[[], typing.Awaitable]]` does not have type var for `Awaitable`, which is very hard to read: reader has to infer this type variable themself
This is a problem if people are using runtime type-checkers that will use `__annotations__` from this function and not typeshed definitions.
Basically, we don't use annotations in the stdlib, but when we do, they should be correct.
I propose fixing it and even upgrading some other stuff:
1. It is a simple quality of life improvement
2. It only uses simple types
3. All python versions for backports are supported
4. It should not need to be edited again in the near future
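A hedged sketch of what the corrected annotations could look like (the TypeVar spelling mirrors the typeshed stub linked above; the stub name is illustrative, not the exact patch):

```python
from collections.abc import Awaitable, Callable, Iterable
from typing import Optional, TypeVar
import asyncio

_T = TypeVar("_T")

async def staggered_race_sketch(
    coro_fns: Iterable[Callable[[], Awaitable[_T]]],
    delay: Optional[float],
    *,
    loop: Optional[asyncio.AbstractEventLoop] = None,
) -> None:
    """Illustrative stub: Awaitable is parameterized with a TypeVar
    (fixing problem 2), and the loop default is explicitly Optional
    rather than a bare `= None` against a non-Optional type (problem 1)."""
```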
<!-- gh-linked-prs -->
### Linked PRs
* gh-114282
<!-- /gh-linked-prs -->
| 0554a9594e07f46836a58795c9d9af2a97acec66 | 1d6d5e854c375821a64fa9c2fbb04a36fb3b9aaa |
python/cpython | python__cpython-114451 | # Test failures on Windows ARM64 install
Ran the test suite on an installed ARM64 machine and saw the errors below.
This is mostly a note-to-self to come back to this, but if anyone else wants to jump in and fix a test or two (I think it's mostly obvious what's gone wrong in each), feel free. I'm including the skipped tests to double-check that they ought to be skipped.
The run command was just `py -m test -j4 -w` and it should be an otherwise clean machine.
```
== Tests result: FAILURE ==
42 tests skipped:
test.test_asyncio.test_unix_events test.test_gdb.test_backtrace
test.test_gdb.test_cfunction test.test_gdb.test_cfunction_full
test.test_gdb.test_misc test.test_gdb.test_pretty_print
test.test_multiprocessing_fork.test_manager
test.test_multiprocessing_fork.test_misc
test.test_multiprocessing_fork.test_processes
test.test_multiprocessing_fork.test_threads
test.test_multiprocessing_forkserver.test_manager
test.test_multiprocessing_forkserver.test_misc
test.test_multiprocessing_forkserver.test_processes
test.test_multiprocessing_forkserver.test_threads test_asdl_parser
test_clinic test_dbm_gnu test_dbm_ndbm test_devpoll test_epoll
test_fcntl test_fork1 test_generated_cases test_grp test_ioctl
test_kqueue test_openpty test_perf_profiler test_perfmaps
test_poll test_pty test_pwd test_readline test_resource
test_syslog test_termios test_threadsignals test_tty test_wait3
test_wait4 test_xxlimited test_xxtestfuzz
10 tests skipped (resource denied):
test_curses test_peg_generator test_smtpnet test_socketserver
test_tkinter test_ttk test_urllib2net test_urllibnet test_winsound
test_zipfile64
6 tests failed:
test.test_asyncio.test_subprocess test_audit test_ctypes
test_launcher test_os test_webbrowser
411 tests OK.
0:07:06 load avg: 19.35 Re-running 6 failed tests in verbose mode in subprocesses
0:07:06 load avg: 19.35 Run 6 tests in parallel using 4 worker processes
0:07:07 load avg: 19.34 [1/6/1] test_ctypes failed (1 failure)
Re-running test_ctypes in verbose mode (matching: test_array_in_struct)
test_array_in_struct (test.test_ctypes.test_structures.StructureTestCase.test_array_in_struct) ... FAIL
======================================================================
FAIL: test_array_in_struct (test.test_ctypes.test_structures.StructureTestCase.test_array_in_struct)
----------------------------------------------------------------------
Traceback (most recent call last):
File "C:\Program Files\Python313-arm64\Lib\test\test_ctypes\test_structures.py", line 669, in test_array_in_struct
self.assertEqual(result, expected)
~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^
AssertionError: 2.0 != 6.0
----------------------------------------------------------------------
Ran 1 test in 0.005s
FAILED (failures=1)
test test_ctypes failed
0:07:07 load avg: 19.34 [2/6/1] test_audit passed
Re-running test_audit in verbose mode (matching: test_wmi_exec_query)
test_wmi_exec_query (test.test_audit.AuditTest.test_wmi_exec_query) ... ('_wmi.exec_query', ' ', 'SELECT * FROM Win32_OperatingSystem')
ok
----------------------------------------------------------------------
Ran 1 test in 0.268s
OK
0:07:07 load avg: 19.34 [3/6/2] test_webbrowser failed (1 error)
Re-running test_webbrowser in verbose mode (matching: test_synthesize)
test_synthesize (test.test_webbrowser.ImportTest.test_synthesize) ... ERROR
======================================================================
ERROR: test_synthesize (test.test_webbrowser.ImportTest.test_synthesize)
----------------------------------------------------------------------
Traceback (most recent call last):
File "C:\Program Files\Python313-arm64\Lib\test\test_webbrowser.py", line 314, in test_synthesize
webbrowser.get(sys.executable)
~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^
File "C:\Program Files\Python313-arm64\Lib\webbrowser.py", line 65, in get
raise Error("could not locate runnable browser")
webbrowser.Error: could not locate runnable browser
----------------------------------------------------------------------
Ran 1 test in 0.017s
FAILED (errors=1)
test test_webbrowser failed
0:07:07 load avg: 19.34 [4/6/3] test_os failed (1 failure)
Re-running test_os in verbose mode (matching: test_pipe_spawnl)
test_pipe_spawnl (test.test_os.FDInheritanceTests.test_pipe_spawnl) ... C:\Program: can't open file 'C:\\Users\\USERNAME\\AppData\\Local\\Temp\\test_python_y8xoiflm\\test_python_4120æ\\Files\\Python313-arm64\\python.exe': [Errno 2] No such file or directory
FAIL
======================================================================
FAIL: test_pipe_spawnl (test.test_os.FDInheritanceTests.test_pipe_spawnl)
----------------------------------------------------------------------
Traceback (most recent call last):
File "C:\Program Files\Python313-arm64\Lib\test\test_os.py", line 4601, in test_pipe_spawnl
self.assertEqual(exitcode, 0)
~~~~~~~~~~~~~~~~^^^^^^^^^^^^^
AssertionError: 2 != 0
----------------------------------------------------------------------
Ran 1 test in 0.056s
FAILED (failures=1)
test test_os failed
0:07:08 load avg: 19.27 [5/6/4] test_launcher failed (3 failures)
Re-running test_launcher in verbose mode (matching: test_search_path, test_search_path_exe, test_shebang_command_in_venv)
test_search_path (test.test_launcher.TestLauncher.test_search_path) ... FAIL
test_search_path_exe (test.test_launcher.TestLauncher.test_search_path_exe) ... FAIL
test_shebang_command_in_venv (test.test_launcher.TestLauncher.test_shebang_command_in_venv) ... FAIL
======================================================================
FAIL: test_search_path (test.test_launcher.TestLauncher.test_search_path)
----------------------------------------------------------------------
Traceback (most recent call last):
File "C:\Program Files\Python313-arm64\Lib\test\test_launcher.py", line 626, in test_search_path
self.assertEqual(f"{sys.executable} -prearg {script} -postarg", data["stdout"].strip())
~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AssertionError: 'C:\\Program Files\\Python313-arm64\\pytho[115 chars]targ' != '"C:\\Program Files\\Python313-arm64\\pyth[117 chars]targ'
- C:\Program Files\Python313-arm64\python.exe -prearg C:\Users\USERNAME\AppData\Local\Temp\test_python_hde_mn6u\test_python_8920æ\tmpkbsz_fju.py -postarg
+ "C:\Program Files\Python313-arm64\python.exe" -prearg C:\Users\USERNAME\AppData\Local\Temp\test_python_hde_mn6u\test_python_8920æ\tmpkbsz_fju.py -postarg
? + +
======================================================================
FAIL: test_search_path_exe (test.test_launcher.TestLauncher.test_search_path_exe)
----------------------------------------------------------------------
Traceback (most recent call last):
File "C:\Program Files\Python313-arm64\Lib\test\test_launcher.py", line 637, in test_search_path_exe
self.assertEqual(f"{sys.executable} -prearg {script} -postarg", data["stdout"].strip())
~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AssertionError: 'C:\\Program Files\\Python313-arm64\\pytho[115 chars]targ' != '"C:\\Program Files\\Python313-arm64\\pyth[117 chars]targ'
- C:\Program Files\Python313-arm64\python.exe -prearg C:\Users\USERNAME\AppData\Local\Temp\test_python_hde_mn6u\test_python_8920æ\tmp1t02n1ni.py -postarg
+ "C:\Program Files\Python313-arm64\python.exe" -prearg C:\Users\USERNAME\AppData\Local\Temp\test_python_hde_mn6u\test_python_8920æ\tmp1t02n1ni.py -postarg
? + +
======================================================================
FAIL: test_shebang_command_in_venv (test.test_launcher.TestLauncher.test_shebang_command_in_venv)
----------------------------------------------------------------------
Traceback (most recent call last):
File "C:\Program Files\Python313-arm64\Lib\test\test_launcher.py", line 741, in test_shebang_command_in_venv
self.assertEqual(data["stdout"].strip(), f"{sys.executable} arg1 {script}")
~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AssertionError: '"C:\\Program Files\\Python313-arm64\\pyth[105 chars]f.py' != 'C:\\Program Files\\Python313-arm64\\pytho[103 chars]f.py'
- "C:\Program Files\Python313-arm64\python.exe" arg1 C:\Users\USERNAME\AppData\Local\Temp\test_python_hde_mn6u\test_python_8920æ\tmpk4vqht5f.py
? - -
+ C:\Program Files\Python313-arm64\python.exe arg1 C:\Users\USERNAME\AppData\Local\Temp\test_python_hde_mn6u\test_python_8920æ\tmpk4vqht5f.py
----------------------------------------------------------------------
Ran 3 tests in 1.318s
FAILED (failures=3)
test test_launcher failed
0:07:08 load avg: 19.27 [6/6/5] test.test_asyncio.test_subprocess failed (1 error, 1 failure)
Re-running test.test_asyncio.test_subprocess in verbose mode (matching: test_kill_issue43884, test_create_subprocess_env_shell)
test_create_subprocess_env_shell (test.test_asyncio.test_subprocess.SubprocessProactorTests.test_create_subprocess_env_shell) ... 'C:\Program' is not recognized as an internal or external command,
operable program or batch file.
FAIL
test_kill_issue43884 (test.test_asyncio.test_subprocess.SubprocessProactorTests.test_kill_issue43884) ... 'C:\Program' is not recognized as an internal or external command,
operable program or batch file.
ERROR
======================================================================
ERROR: test_kill_issue43884 (test.test_asyncio.test_subprocess.SubprocessProactorTests.test_kill_issue43884)
----------------------------------------------------------------------
Traceback (most recent call last):
File "C:\Program Files\Python313-arm64\Lib\test\test_asyncio\test_subprocess.py", line 225, in test_kill_issue43884
proc.send_signal(signal.CTRL_BREAK_EVENT)
~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Program Files\Python313-arm64\Lib\asyncio\subprocess.py", line 140, in send_signal
self._transport.send_signal(signal)
~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^
File "C:\Program Files\Python313-arm64\Lib\asyncio\base_subprocess.py", line 146, in send_signal
self._check_proc()
~~~~~~~~~~~~~~~~^^
File "C:\Program Files\Python313-arm64\Lib\asyncio\base_subprocess.py", line 143, in _check_proc
raise ProcessLookupError()
ProcessLookupError
======================================================================
FAIL: test_create_subprocess_env_shell (test.test_asyncio.test_subprocess.SubprocessProactorTests.test_create_subprocess_env_shell)
----------------------------------------------------------------------
Traceback (most recent call last):
File "C:\Program Files\Python313-arm64\Lib\test\test_asyncio\test_subprocess.py", line 756, in test_create_subprocess_env_shell
self.loop.run_until_complete(self.check_stdout_output(main(), b'bar'))
~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Program Files\Python313-arm64\Lib\asyncio\base_events.py", line 709, in run_until_complete
return future.result()
~~~~~~~~~~~~~^^
File "C:\Program Files\Python313-arm64\Lib\test\test_asyncio\test_subprocess.py", line 740, in check_stdout_output
self.assertEqual(stdout, output)
~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^
AssertionError: b'' != b'bar'
----------------------------------------------------------------------
Ran 2 tests in 1.204s
FAILED (failures=1, errors=1)
test test.test_asyncio.test_subprocess failed
5 tests failed again:
test.test_asyncio.test_subprocess test_ctypes test_launcher
test_os test_webbrowser
== Tests result: FAILURE then FAILURE ==
42 tests skipped:
test.test_asyncio.test_unix_events test.test_gdb.test_backtrace
test.test_gdb.test_cfunction test.test_gdb.test_cfunction_full
test.test_gdb.test_misc test.test_gdb.test_pretty_print
test.test_multiprocessing_fork.test_manager
test.test_multiprocessing_fork.test_misc
test.test_multiprocessing_fork.test_processes
test.test_multiprocessing_fork.test_threads
test.test_multiprocessing_forkserver.test_manager
test.test_multiprocessing_forkserver.test_misc
test.test_multiprocessing_forkserver.test_processes
test.test_multiprocessing_forkserver.test_threads test_asdl_parser
test_clinic test_dbm_gnu test_dbm_ndbm test_devpoll test_epoll
test_fcntl test_fork1 test_generated_cases test_grp test_ioctl
test_kqueue test_openpty test_perf_profiler test_perfmaps
test_poll test_pty test_pwd test_readline test_resource
test_syslog test_termios test_threadsignals test_tty test_wait3
test_wait4 test_xxlimited test_xxtestfuzz
10 tests skipped (resource denied):
test_curses test_peg_generator test_smtpnet test_socketserver
test_tkinter test_ttk test_urllib2net test_urllibnet test_winsound
test_zipfile64
6 re-run tests:
test.test_asyncio.test_subprocess test_audit test_ctypes
test_launcher test_os test_webbrowser
5 tests failed:
test.test_asyncio.test_subprocess test_ctypes test_launcher
test_os test_webbrowser
412 tests OK.
Total duration: 7 min 8 sec
Total tests: run=38,410 failures=13 skipped=2,693
Total test files: run=465/469 failed=5 skipped=42 resource_denied=10 rerun=6
Result: FAILURE then FAILURE
```
<!-- gh-linked-prs -->
### Linked PRs
* gh-114451
* gh-114602
* gh-128160
<!-- /gh-linked-prs -->
| c63c6142f9146e1e977f4c824c56e8979e6aca87 | d5c21c12c17b6e4db2378755af8e3699516da187 |
python/cpython | python__cpython-114839 | # Make `_threadmodule.c` thread-safe in `--disable-gil` builds
# Feature or enhancement
### Proposal:
Make the functionality in `_threadmodule.c` thread-safe in free-threaded builds.
Commits from nogil-3.12: https://github.com/colesbury/nogil-3.12/commits/nogil-3.12/Modules/_threadmodule.c
Context from @colesbury:
- Ignore the CriticalLock commit; this was an idea that didn't work and was prone to deadlock.
- Ignore the weakrefs change.
- Unclear if we need to deal with _tstate_lock.
- For the other changes, use your judgement. You may be able to put together smaller changes
- `_Py_ThreadId()` is currently only available in the free-threaded build, so the recursive lock implementation can probably (?) just stick with the existing `PyThread_get_thread_ident()`.
### Has this already been discussed elsewhere?
No response given
### Links to previous discussion of this feature:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-114839
* gh-115093
* gh-115102
* gh-115190
* gh-116433
* gh-116776
<!-- /gh-linked-prs -->
| 33da0e844c922b3dcded75fbb9b7be67cb013a17 | 86bc40dd414bceb3f93382cc9f670936de9d68be |
python/cpython | python__cpython-114267 | # The compiler's line number propagation algorithm is not quite correct
# Bug report
The compiler's algorithm for line propagation has some heuristics which are not quite correct (particularly `guarantee_lineno_for_exits`, which tidies up loose ends by assigning a possibly incorrect line number). Also, line numbers are propagated after optimisation, and this prevents optimisation in a few cases (like when it's not clear whether two jumps have the same line number or not).
<!-- gh-linked-prs -->
### Linked PRs
* gh-114267
* gh-114530
* gh-114535
<!-- /gh-linked-prs -->
| 7e49f27b41d5728cde1f8790586d113ddc25f18d | efb81a60f5ce7e192095230a0f7ff9684d6f835a |
python/cpython | python__cpython-114266 | # Optimize performance of copy.deepcopy by adding a fast path for atomic types
# Feature or enhancement
### Proposal:
The performance of `copy.deepcopy` can be improved by adding a fast path for the atomic types. This will reduce the overhead of checking the `memo` and the overhead of calling the dispatch method from the deepcopy dispatch table.
### Has this already been discussed elsewhere?
No response given
### Links to previous discussion of this feature:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-114266
<!-- /gh-linked-prs -->
| 9d6604222e9ef4e136ee9ccfa2d4d5ff9feee976 | 225aab7f70d804174cc3a75bc04a5bb1545e5adb |
python/cpython | python__cpython-114394 | # [`ctypes`][`linux`] `ctypes.util.find_library()` crash instead of return None
# Bug report
### Bug description:
# The problem:
`ctypes.util.find_library()` raises an exception on certain inputs when it cannot find the library.
# Expected behavior:
No exception, return `None` as [advertised](https://docs.python.org/3/library/ctypes.html#ctypes.util.find_library) if it cannot find the library.
# Reproducer:
```python
# on python 3.10, but same deal on other versions too
>>> import ctypes.util
>>> ctypes.util.find_library('libgomp')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/path/to/lib/python3.10/ctypes/util.py", line 351, in find_library
_get_soname(_findLib_gcc(name)) or _get_soname(_findLib_ld(name))
File "/path/to/lib/python3.10/ctypes/util.py", line 148, in _findLib_gcc
if not _is_elf(file):
File "/path/to/lib/python3.10/ctypes/util.py", line 100, in _is_elf
with open(filename, 'br') as thefile:
FileNotFoundError: [Errno 2] No such file or directory: b'liblibgomp.a'
```
Note that the crash does not occur if you put nonsense in:
1. `find_library('asdadsas')` -> OK
2. `find_library('libasdasdasd')` -> OK
The problem appears to be a bit more contrived. Python is attempting to parse the output from gcc, which in this case is:
```
/usr/lib/gcc/x86_64-linux-gnu/12/../../../x86_64-linux-gnu/Scrt1.o
/usr/lib/gcc/x86_64-linux-gnu/12/../../../x86_64-linux-gnu/crti.o
/usr/lib/gcc/x86_64-linux-gnu/12/crtbeginS.o
/usr/bin/ld: cannot find -llibgomp: No such file or directory
/usr/bin/ld: note to link with /usr/lib/gcc/x86_64-linux-gnu/12/libgomp.a use -l:libgomp.a or rename it to liblibgomp.a
/usr/lib/gcc/x86_64-linux-gnu/12/libgcc.a
/usr/lib/gcc/x86_64-linux-gnu/12/libgcc_s.so
/usr/lib/gcc/x86_64-linux-gnu/12/../../../x86_64-linux-gnu/libgcc_s.so.1
/usr/lib/gcc/x86_64-linux-gnu/12/libgcc.a
collect2: error: ld returned 1 exit status
```
But the parser gets confused by the diagnostic note that mentions `libgomp.a`.
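The mechanism can be reproduced in isolation. `_findLib_gcc` scans the linker output with a pattern shaped like `lib<name>.<suffix>`, so with `name='libgomp'` it matches the suggested filename from the diagnostic note (a hedged re-creation of the pattern shape; the exact expression lives in `ctypes/util.py`):

```python
import re

ld_output = (
    b"/usr/bin/ld: cannot find -llibgomp: No such file or directory\n"
    b"/usr/bin/ld: note to link with /usr/lib/gcc/x86_64-linux-gnu/12/libgomp.a "
    b"use -l:libgomp.a or rename it to liblibgomp.a\n"
)

name = b"libgomp"
# pattern shape used by _findLib_gcc: lib<name>.<something>, no spaces/parens
pattern = rb"[^\(\)\s]*lib%s\.[^\(\)\s]*" % re.escape(name)
match = re.search(pattern, ld_output)
print(match.group(0))  # the nonexistent file that _is_elf then tries to open
```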
Machine details (from `platform.uname()`)
```
uname_result(system='Linux', node='iblis', release='6.5.0-14-generic', version='#14~22.04.1-Ubuntu SMP PREEMPT_DYNAMIC Mon Nov 20 18:15:30 UTC 2', machine='x86_64')
```
### CPython versions tested on:
3.10, 3.11, 3.12
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-114394
* gh-114444
* gh-114445
<!-- /gh-linked-prs -->
| 7fc51c3f6b7b13f88480557ff14bdb1c049f9a37 | ed30a3c337f30abd2ea5357565a956ed3dc0719c |
python/cpython | python__cpython-114450 | # Enumerate pip's vendored dependencies in SBOM
# Bug report
### Bug description:
Part of https://github.com/python/cpython/issues/112302
Currently pip's package entry in the SBOM is quite simple, including only itself but not all the vendored projects and their licenses/IDs. See comment from @hroncok on DPO on why this is problematic: https://discuss.python.org/t/create-and-distribute-software-bill-of-materials-sbom-for-python-artifacts/39293/24
My proposed changes to `generate_sbom.py` is the following:
* Find the entry `pip/_vendor/vendor.txt` in the pip wheel archive.
* Read the content, parse the requirements into names and versions.
* Ensure all entries are represented as packages in the SBOM with a `pip DEPENDS_ON <package>` relationship.
This approach lets the license identifiers be specified in the SBOM like other packages, and raises an error if pip is upgraded with a change in vendored dependencies or versions, so the reviewer can acknowledge any changes.
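A hedged sketch of the parsing step (the function name is illustrative; `vendor.txt` uses pinned `name==version` lines with `#` comments, abbreviated here):

```python
def parse_vendor_txt(text):
    """Parse pip's _vendor/vendor.txt content into {name: version}."""
    requirements = {}
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()  # drop comments and whitespace
        if not line:
            continue
        name, sep, version = line.partition("==")
        if sep:  # keep only pinned name==version entries
            requirements[name.strip()] = version.strip()
    return requirements

sample = """\
CacheControl==0.13.1  # Make sure to update the license in pyproject.toml
distlib==0.3.8
# setuptools, wheel and pip are not vendored
"""
print(parse_vendor_txt(sample))
```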
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-114450
<!-- /gh-linked-prs -->
| 582d95e8bb0b78bf1b6b9a12371108b9993d3b84 | 456e274578dc9863f42ab24d62adc0d8c511b50f |
python/cpython | python__cpython-114242 | # ftplib CLI fails to retrieve files
# Bug report
The `ftplib` module has an undocumented feature -- a CLI that implements a simple FTP client. It allows listing directories and retrieving files. But retrieving files does not work in Python 3, because it writes binary data to sys.stdout.
Its help is also incomplete and unclear.
Documenting, testing and extending this CLI into a more complete FTP client is a matter for other issues. Here I only want to fix the small amount of functionality that it currently has.
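The core of the breakage: in Python 3, `sys.stdout` is a text stream and rejects `bytes`, so retrieved data must go to the underlying binary buffer instead. A minimal reproduction, with an in-memory stream standing in for stdout:

```python
import io

# A text stream like sys.stdout rejects raw bytes in Python 3:
stdout = io.TextIOWrapper(io.BytesIO(), encoding="utf-8")
try:
    stdout.write(b"\x89binary chunk")
except TypeError as exc:
    print("text stream:", exc)

# Writing to the binary layer (sys.stdout.buffer in the real CLI) works:
stdout.buffer.write(b"\x89binary chunk")
```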
<!-- gh-linked-prs -->
### Linked PRs
* gh-114242
* gh-114404
* gh-114405
<!-- /gh-linked-prs -->
| 42d72b23dd1ee0e100ee47aca64fc1e1bbe576c9 | 336030161a6cb8aa5b4f42a08510f4383984703f |
python/cpython | python__cpython-114232 | # Fix indentation in enum.rst
# Documentation
There is an indentation error in enum.rst (the second line of the description for "SECOND").
```rst
* ``FIRST = auto()`` will work (auto() is replaced with ``1``);
* ``SECOND = auto(), -2`` will work (auto is replaced with ``2``, so ``2, -2`` is
used to create the ``SECOND`` enum member;
* ``THREE = [auto(), -3]`` will *not* work (``<auto instance>, -3`` is used to
create the ``THREE`` enum member)
```
https://github.com/python/cpython/blob/v3.13.0a3/Doc/library/enum.rst?plain=1#L839-L843
In reStructuredText, the space after a line break within a list item should be two spaces.
```rst
* This is a bulleted list.
* It has two items, the second
item uses two lines.
```
https://devguide.python.org/documentation/markup/#lists-and-quotes
<!-- gh-linked-prs -->
### Linked PRs
* gh-114232
* gh-114234
* gh-114235
<!-- /gh-linked-prs -->
| ba683c22ecd035a1090f9fc7aba48d54854d23bd | 6f4b242a03e521a55f0b9e440703b424ed18ce2f |
python/cpython | python__cpython-114224 | # Deprecated statements in email.message.rst
# Documentation
In https://docs.python.org/3/library/email.message.html
I see:
> Unlike a real dict, there is an ordering to the keys
> in dictionaries there is no guaranteed order to the keys returned by [keys()](https://docs.python.org/3/library/email.message.html#email.message.EmailMessage.keys)
But this became false in 3.7 (https://docs.python.org/3/whatsnew/3.7.html « the insertion-order preservation nature of [dict](https://docs.python.org/3/library/stdtypes.html#typesmapping) objects [has been declared](https://mail.python.org/pipermail/python-dev/2017-December/151283.html) to be an official part of the Python language spec »).
I think this can just be removed.
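The guarantee that makes the old caveat obsolete is easy to demonstrate: since Python 3.7, iteration order of a plain dict is its insertion order by language specification.

```python
# Build a dict the way a message's headers accumulate; iteration order
# is guaranteed (since 3.7) to match insertion order.
headers = {}
headers["From"] = "a@example.com"
headers["To"] = "b@example.com"
headers["Subject"] = "ordering"

assert list(headers) == ["From", "To", "Subject"]
print(list(headers.keys()))
```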
<!-- gh-linked-prs -->
### Linked PRs
* gh-114224
* gh-114225
* gh-114226
<!-- /gh-linked-prs -->
| 8cda72037b262772399b2b7fc36dee9340d74fd6 | c1db9606081bdbe0207f83a861a3c70c356d3704 |
python/cpython | python__cpython-114251 | # Rename dataclass's __replace__ argument from "obj" to "self"?
# Bug report
### Bug description:
The first argument of the new `__replace__` method on dataclasses is conspicuously named `obj` rather than `self`. While that's not a bug, it would be kinder on introspection tools to use `self`, since it is otherwise a regular instance method.
```python
>>> import dataclasses, inspect, pprint
>>> @dataclasses.dataclass
... class Foo:
... x: int
...
>>> pprint.pprint({n:inspect.signature(f) for n, f in inspect.getmembers(Foo, inspect.isfunction)})
{'__eq__': <Signature (self, other)>,
'__init__': <Signature (self, x: int) -> None>,
'__replace__': <Signature (obj, /, **changes)>,
'__repr__': <Signature (self)>}
```
`__replace__` was added in #108752
### CPython versions tested on:
3.13
### Operating systems tested on:
macOS
<!-- gh-linked-prs -->
### Linked PRs
* gh-114251
<!-- /gh-linked-prs -->
| 339fc3c22446a148d27d9ec061594ac8d0abd33d | 9c93350f582fe6f5fed2cd873869dfe4fbf2dfe8 |
python/cpython | python__cpython-114179 | # Tools/build/generate_sbom.py fails in out-of-tree builds
# Bug report
```console
$ pwd
/Users/erlend.aasland/src/python/build-main
$ make regen-all
[...]
python3.13 /Users/erlend.aasland/src/python/main/Tools/build/generate_sbom.py
fatal: not a git repository (or any of the parent directories): .git
Traceback (most recent call last):
File "/Users/erlend.aasland/src/python/main/Tools/build/generate_sbom.py", line 310, in <module>
main()
File "/Users/erlend.aasland/src/python/main/Tools/build/generate_sbom.py", line 270, in main
paths = filter_gitignored_paths(paths)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/erlend.aasland/src/python/main/Tools/build/generate_sbom.py", line 113, in filter_gitignored_paths
assert git_check_ignore_proc.returncode in (0, 1)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AssertionError
make: *** [regen-sbom] Error 1
```
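One way the fix idea can be sketched (hypothetical names and layout, not the actual patch in gh-114179): pin the `git` subprocess's working directory to the source checkout instead of inheriting the out-of-tree build directory, so `git check-ignore` runs inside the repository.

```python
import subprocess
from pathlib import Path

# Hypothetical: resolve the source tree relative to this script rather than
# relying on the current working directory, which in an out-of-tree build
# is not inside the git repository.
CPYTHON_ROOT_DIR = Path(__file__).resolve().parent

def filter_gitignored_paths(paths: list[str]) -> subprocess.CompletedProcess:
    # cwd= anchors git to the repository regardless of where
    # 'make regen-sbom' was invoked from.
    return subprocess.run(
        ["git", "check-ignore", "--verbose", "--non-matching", *paths],
        cwd=CPYTHON_ROOT_DIR,
        stdout=subprocess.PIPE,
        check=False,
    )
```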
<!-- gh-linked-prs -->
### Linked PRs
* gh-114179
<!-- /gh-linked-prs -->
| 7a0ac89b292324b629114a4c49b95fc5a78df7ca | 8cf37f244fa88b4e860e853ecb59b75f3f218191 |
python/cpython | python__cpython-134508 | # `BaseSubprocessTransport.__del__` fails if the event loop is already closed, which can leak an orphan process
# Bug report
### Bug description:
there is a race where it's possible for `BaseSubprocessTransport.__del__` to try to close the transport *after* the event loop has been closed. this results in an unraisable exception in `__del__`, and it can also result in an orphan process being leaked.
the following is a reproducer that triggers the race between `run()` exiting and [the process dying and the event loop learning about the process's death]. on my machine, with this reproducer the bug occurs (due to `run()` winning the race) maybe 90% of the time:
```python
from __future__ import annotations
import asyncio
from subprocess import PIPE
async def main() -> None:
try:
async with asyncio.timeout(1):
process = await asyncio.create_subprocess_exec(
"/usr/bin/env",
"sh",
"-c",
"while true; do sleep 1; done",
stdin=PIPE,
stdout=PIPE,
stderr=PIPE,
)
try:
await process.wait()
except BaseException:
process.kill()
# N.B.: even though they send it SIGKILL, the user is (very briefly)
# leaking an orphan process to asyncio, because they are not waiting for
# the event loop to learn that the process died. if we added
# while True:
# try:
# await process.wait()
# except CancelledError:
# pass
# else:
# break
# (i.e. if we used structured concurrency) then the race would not
# occur.
raise
except TimeoutError:
pass
if __name__ == "__main__":
asyncio.run(main())
```
most of the time, running this emits
```
Exception ignored in: <function BaseSubprocessTransport.__del__ at 0x7fd25db428e0>
Traceback (most recent call last):
File "/usr/lib/python3.11/asyncio/base_subprocess.py", line 126, in __del__
self.close()
File "/usr/lib/python3.11/asyncio/base_subprocess.py", line 104, in close
proto.pipe.close()
File "/usr/lib/python3.11/asyncio/unix_events.py", line 566, in close
self._close(None)
File "/usr/lib/python3.11/asyncio/unix_events.py", line 590, in _close
self._loop.call_soon(self._call_connection_lost, exc)
File "/usr/lib/python3.11/asyncio/base_events.py", line 761, in call_soon
self._check_closed()
File "/usr/lib/python3.11/asyncio/base_events.py", line 519, in _check_closed
raise RuntimeError('Event loop is closed')
RuntimeError: Event loop is closed
```
this case looks similar to GH-109538. i think the following patch (analogous to GH-111983) fixes it:
```diff
diff --git a/Lib/asyncio/base_subprocess.py b/Lib/asyncio/base_subprocess.py
index 6dbde2b696..ba219ba39d 100644
--- a/Lib/asyncio/base_subprocess.py
+++ b/Lib/asyncio/base_subprocess.py
@@ -123,8 +123,11 @@ def close(self):
def __del__(self, _warn=warnings.warn):
if not self._closed:
- _warn(f"unclosed transport {self!r}", ResourceWarning, source=self)
- self.close()
+ if self._loop.is_closed():
+ _warn("loop is closed", ResourceWarning, source=self)
+ else:
+ _warn(f"unclosed transport {self!r}", ResourceWarning, source=self)
+ self.close()
def get_pid(self):
return self._pid
```
however, there is another case for which the above patch is not sufficient. in the above example the user orphaned the process after sending `SIGKILL`/`TerminateProcess` (which is not immediate, but only schedules the kill), but what if they fully orphan it?
```python
from __future__ import annotations
import asyncio
from subprocess import PIPE
async def main_leak_subprocess() -> None:
await asyncio.create_subprocess_exec(
"/usr/bin/env",
"sh",
"-c",
"while true; do sleep 1; done",
stdin=PIPE,
stdout=PIPE,
stderr=PIPE,
)
if __name__ == "__main__":
asyncio.run(main_leak_subprocess())
```
currently (on `main`), when the race condition occurs (for this example the condition is `run()` winning the race against `BaseSubprocessTransport` GC) then asyncio emits a loud complaint `Exception ignored in: <function BaseSubprocessTransport.__del__ at 0x7f5b3b291e40>` and leaks the orphan process (check `htop` after the interpreter exits!). asyncio probably also leaks the pipes.
but with the patch above, asyncio will quietly leak the orphan process (and probably pipes), but it will not yell about the leak unless the user enables `ResourceWarning`s. which is not good.
so a more correct patch (fixes both cases) may be something along the lines of
```diff
diff --git a/Lib/asyncio/base_subprocess.py b/Lib/asyncio/base_subprocess.py
index 6dbde2b696..9c86c8444c 100644
--- a/Lib/asyncio/base_subprocess.py
+++ b/Lib/asyncio/base_subprocess.py
@@ -123,8 +123,31 @@ def close(self):
def __del__(self, _warn=warnings.warn):
if not self._closed:
- _warn(f"unclosed transport {self!r}", ResourceWarning, source=self)
- self.close()
+ if not self._loop.is_closed():
+ _warn(f"unclosed transport {self!r}", ResourceWarning, source=self)
+ self.close()
+ else:
+ _warn("loop is closed", ResourceWarning, source=self)
+
+ # self.close() requires the event loop to be open, so we need to reach
+ # into close() and its dependencies and manually do the bare-minimum
+ # cleanup that they'd do if the loop was open. I.e. we do the syscalls
+ # only; we can't interact with an event loop.
+
+ # TODO: Probably need some more stuff here too so that we don't leak fd's...
+
+ if (self._proc is not None and
+ # has the child process finished?
+ self._returncode is None and
+ # the child process has finished, but the
+ # transport hasn't been notified yet?
+ self._proc.poll() is None):
+
+ try:
+ self._proc.kill()
+ except (ProcessLookupError, PermissionError):
+ # the process may have already exited or may be running setuid
+ pass
def get_pid(self):
return self._pid
```
with this patch applied, neither example leaks an orphan process out of `run()`, and both examples emit `ResourceWarning`. however this patch is rather messy. it is also perhaps still leaking pipe fd's out of `run()`. (the fd's probably get closed by the OS when the interpreter shuts down, but i suspect one end of each pipe will be an orphan from the time when `run()` exits to the time when the interpreter shuts down, which can be arbitrarily long).
### CPython versions tested on:
3.11, CPython main branch
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-134508
* gh-134561
* gh-134562
<!-- /gh-linked-prs -->
| 5804ee7b467d86131be3ff7d569443efb0d0f9fd | 8dbc11971974a725dc8a11c0dc65d8f6fcb4d902 |
python/cpython | python__cpython-114160 | # Using NamedTuple with custom Enum
# Documentation
This issue is related to https://github.com/python/cpython/issues/114071 and @ethanfurman suggested creating an isolated issue:
The handling of NamedTuple with a custom Enum is unexpected. Here are two examples that seek to achieve the same thing, but one will fail when using NamedTuple.
Using dataclasses (works):
<details>
```python
from dataclasses import dataclass
from enum import Enum
@dataclass
class CodeLabel:
code: int
label: str
class LabelledEnumMixin:
labels = {}
def __new__(cls, codelabel: CodeLabel):
member = object.__new__(cls)
member._value_ = codelabel.code
member.label = codelabel.label
cls.labels[codelabel.code] = codelabel.label
return member
@classmethod
def list_labels(cls):
return list(l for c, l in cls.labels.items())
class Test(LabelledEnumMixin, Enum):
A = CodeLabel(1, "Label A")
B = CodeLabel(2, "Custom B")
C = CodeLabel(3, "Custom label for value C + another string")
Test.list_labels()
```
</details>
Using NamedTuple (fails):
<details>
```python
from typing import NamedTuple
from enum import Enum
class CodeLabel(NamedTuple):
code: int
label: str
class LabelledEnumMixin:
labels = {}
def __new__(cls, codelabel: CodeLabel):
member = object.__new__(cls)
member._value_ = codelabel.code
member.label = codelabel.label
cls.labels[codelabel.code] = codelabel.label
return member
@classmethod
def list_labels(cls):
return list(l for c, l in cls.labels.items())
class Test(LabelledEnumMixin, Enum):
A = CodeLabel(1, "Label A")
B = CodeLabel(2, "Custom B")
C = CodeLabel(3, "Custom label for value C + another string")
Test.list_labels()
```
</details>
<!-- gh-linked-prs -->
### Linked PRs
* gh-114160
* gh-114196
* gh-114197
* gh-114215
* gh-114218
<!-- /gh-linked-prs -->
| 4c7e09d0129dafddba58979ced9580f856f65efa | 945540306c12116154d2e4cc6c17a8efd2290537 |
python/cpython | python__cpython-114124 | # csv module docstring imported from _csv module
# Documentation
While messing around trying to find undocumented module attributes, I noticed that `csv.__all__` contains both `__doc__` and `__version__`. Those inclusions are themselves weird, but I further noticed this:
```
from _csv import Error, __version__, writer, reader, register_dialect, \
unregister_dialect, get_dialect, list_dialects, \
field_size_limit, \
QUOTE_MINIMAL, QUOTE_ALL, QUOTE_NONNUMERIC, QUOTE_NONE, \
QUOTE_STRINGS, QUOTE_NOTNULL, \
__doc__
```
Note that both attributes are imported from the underlying `_csv` module. Furthermore, the `__doc__` attribute is pretty extensive.
I see no particular reason either attribute needs to be defined in the underlying C module. In particular, maintaining a docstring in C (with its constrained multi-line string syntax) is much more difficult than in Python (with its rather nice triple-quoted string syntax).
I hope I wasn't the ultimate source of this problem. I suspect that in pulling non-performance-critical bits out of the old CSV module (when it was written entirely in C) these two attributes were just missed.
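A quick, version-safe probe of the two attributes (the `__version__` re-export may be absent on versions where this was since cleaned up, hence the guarded access):

```python
import csv

# The module docstring is substantial because it documents dialects and
# quoting constants; at the time of this report it was authored in C.
print("docstring length:", len(csv.__doc__))
print("version:", getattr(csv, "__version__", "not present"))
```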
<!-- gh-linked-prs -->
### Linked PRs
* gh-114124
<!-- /gh-linked-prs -->
| 72abb8c5d487ead9eb115fec8132ccef5ba189e5 | 68a7b78cd5185cbd9456f42c15ecf872a7c16f44 |
python/cpython | python__cpython-114117 | # Update documentation of array.array
It still refers to bytes objects as strings in some cases and contains other outdated information.
<!-- gh-linked-prs -->
### Linked PRs
* gh-114117
* gh-114417
* gh-114418
<!-- /gh-linked-prs -->
| 650f9e4c94711ff49ea4e13bf800945a6147b7e0 | fd49e226700e2483a452c3c92da6f15d822ae054 |
python/cpython | python__cpython-114108 | # importlib.metadata test fails on Windows if symlinks aren't allowed
From the AMD64 Windows11 Refleaks 3.x buildbot:
```pytb
test_packages_distributions_symlinked_top_level (test.test_importlib.test_main.PackagesDistributionsTest.test_packages_distributions_symlinked_top_level)
Distribution is resolvable from a simple top-level symlink in RECORD. ... ERROR
======================================================================
ERROR: test_packages_distributions_symlinked_top_level (test.test_importlib.test_main.PackagesDistributionsTest.test_packages_distributions_symlinked_top_level)
Distribution is resolvable from a simple top-level symlink in RECORD.
----------------------------------------------------------------------
Traceback (most recent call last):
File "b:\uildarea\3.x.ware-win11.refleak\build\Lib\test\test_importlib\test_main.py", line 424, in test_packages_distributions_symlinked_top_level
fixtures.build_files(files, self.site_dir)
~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^
File "b:\uildarea\3.x.ware-win11.refleak\build\Lib\test\test_importlib\_path.py", line 70, in build
create(contents, _ensure_tree_maker(prefix) / name)
~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "b:\uildarea\3.x.ware-win11.refleak\build\Lib\functools.py", line 911, in wrapper
return dispatch(args[0].__class__)(*args, **kw)
~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^
File "b:\uildarea\3.x.ware-win11.refleak\build\Lib\test\test_importlib\_path.py", line 91, in _
path.symlink_to(content)
~~~~~~~~~~~~~~~^^^^^^^^^
File "b:\uildarea\3.x.ware-win11.refleak\build\Lib\pathlib\__init__.py", line 806, in symlink_to
os.symlink(target, self, target_is_directory)
~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
OSError: [WinError 1314] A required privilege is not held by the client: '.symlink.target' -> 'b:\\uildworker_temp\\test_python_sby1_qjd\\tmp16dtmklg\\symlinked'
```
The test needs to be skipped when symlinks are not supported.
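CPython's test suite has `test.support.os_helper.skip_unless_symlink` for exactly this; a portable version of that guard can be sketched without the internal helpers, probing once whether the current user can create symlinks (on Windows this fails with WinError 1314 unless the process holds the privilege):

```python
import os
import tempfile
import unittest

def can_symlink() -> bool:
    # Probe once: os.symlink raises OSError on Windows without
    # SeCreateSymbolicLinkPrivilege (or Developer Mode).
    with tempfile.TemporaryDirectory() as tmp:
        target = os.path.join(tmp, "target")
        link = os.path.join(tmp, "link")
        open(target, "w").close()
        try:
            os.symlink(target, link)
        except (OSError, NotImplementedError):
            return False
    return True

requires_symlink = unittest.skipUnless(can_symlink(), "requires symlink support")

class PackagesDistributionsTest(unittest.TestCase):
    @requires_symlink
    def test_symlinked_top_level(self):
        self.assertTrue(True)  # placeholder for the real fixture-based test
```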
<!-- gh-linked-prs -->
### Linked PRs
* gh-114108
* gh-114121
* gh-114128
* gh-114129
<!-- /gh-linked-prs -->
| c361a1f395de9e508b6e1b0a4c5e69204905a7b7 | ac44ec6206a1e5479effd91e02e2946b94e98ede |
python/cpython | python__cpython-121175 | # Not all generators containing `await` are compiled to async generators
# Documentation
A generator of the form `f(x) for x in await a` first awaits `a` and then creates a regular sync `generator`, as can be seen in the following minimal reproducer:
```py
import asyncio
async def foo():
print("Entered foo")
return [1, 2, 3]
async def main():
gen = (x for x in await foo())
print(gen)
print(list(gen))
asyncio.run(main())
```
```
Entered foo
<generator object main.<locals>.<genexpr> at 0x7f8fce7f8110>
[1, 2, 3]
```
However the [python language reference 6.2.8. Generator expressions](https://docs.python.org/3/reference/expressions.html#generator-expressions) states:
> If a generator expression contains either async for clauses or [await](https://docs.python.org/3/reference/expressions.html#await) expressions it is called an asynchronous generator expression. An asynchronous generator expression returns a new asynchronous generator object, which is an asynchronous iterator (see [Asynchronous Iterators](https://docs.python.org/3/reference/datamodel.html#async-iterators)).
This seems to suggest that the generator `f(x) for x in await a` does contain an await expression, so it should become an `async_generator`? However there is also:
> However, the iterable expression in the leftmost for clause is immediately evaluated, so that an error produced by it will be emitted at the point where the generator expression is defined, rather than at the point where the first value is retrieved.
This seems to hint at an implementation detail that the leftmost `for` clause has special treatment and is always evaluated before the generator is constructed, but the interactions with `await` are unclear from this, especially whether this disqualifies the generator from becoming async.
I don't know whether this is an undocumented feature or a bug.
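The distinction can be observed directly with `inspect`: an `await` confined to the leftmost iterable is evaluated eagerly and yields a plain generator, while an `await` anywhere else in the expression produces an asynchronous generator.

```python
import asyncio
import inspect

async def one() -> int:
    return 1

async def main() -> None:
    # await only in the leftmost iterable: evaluated eagerly, sync generator
    sync_gen = (x for x in [await one()])
    # await in the element expression: the whole thing is an async generator
    async_gen = (await one() for _ in range(2))

    assert inspect.isgenerator(sync_gen)
    assert inspect.isasyncgen(async_gen)
    print(list(sync_gen))                # [1]
    print([v async for v in async_gen])  # [1, 1]

asyncio.run(main())
```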
<!-- gh-linked-prs -->
### Linked PRs
* gh-121175
* gh-121234
* gh-121235
<!-- /gh-linked-prs -->
| 91313afdb392d0d6105e9aaa57b5a50112b613e7 | d44c550f7ebee7d33785142e6031a4621cf21573 |
python/cpython | python__cpython-114102 | # Some `PyErr_Format` in `_testcapi` module have invalid argument
# Bug report
### Bug description:
See the upcoming PR for detail.
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
macOS
<!-- gh-linked-prs -->
### Linked PRs
* gh-114102
<!-- /gh-linked-prs -->
| 6c502ba809ff662a5eebf8c6778dec6bd28918fb | 8fd287b18f20f0a310203f574adec196530627c7 |
python/cpython | python__cpython-114889 | # Add support for iOS as a target platform
# Feature or enhancement
### Proposal:
As proposed by [PEP 730](https://peps.python.org/pep-0730/), it should be possible to compile and run CPython on iOS.
This is a meta-issue for the various PRs implementing this support to reference.
For historical reference: this was previously managed as #67858 (BPO23670).
### Has this already been discussed elsewhere?
I have already discussed this feature proposal on Discourse
### Links to previous discussion of this feature:
[PEP 730 Discussion thread](https://discuss.python.org/t/pep-730-adding-ios-as-a-supported-platform/35854)
<!-- gh-linked-prs -->
### Linked PRs
* gh-114889
* gh-115023
* gh-115063
* gh-115120
* gh-115390
* gh-115930
* gh-116454
* gh-117052
* gh-117057
* gh-118020
* gh-118073
* gh-118378
<!-- /gh-linked-prs -->
| 391659b3da570bfa28fed5fbdb6f2d9c26ab3dd0 | 15f6f048a6ecdf0f6f4fc076d013be3d110f8ed6 |
python/cpython | python__cpython-114089 | # `_winapi.CreateJunction` does not restore privilege levels
Creating a junction requires enabling the `SE_RESTORE_NAME` privilege, which may cause changes in behaviour elsewhere in the process. We should restore it after creating the junction.
<!-- gh-linked-prs -->
### Linked PRs
* gh-114089
* gh-114134
* gh-114135
<!-- /gh-linked-prs -->
| de4ced54eb08e8630e3b6c13436d4ecc3fb14708 | 31a2543c80e1e38c97e50533249d9aa00e2f6cae |
python/cpython | python__cpython-114090 | # Reword error message for unawaitable types
# Feature or enhancement
### Proposal:
Currently, attempting to await something that cannot be awaited results in this error message:
```py
import asyncio
async def main():
await 1
asyncio.run(main())
# TypeError: object int can't be used in 'await' expression
```
I recently encountered this for the first time when the unawaitable object was a method (which was not being called), and found the phrase "object method" unclear. This PR changes the error message so that the name of the class precedes the word "object" and appears in single quotes. A similar construction appears elsewhere in the code base.
```python
>>> 1[2]
TypeError: 'int' object is not subscriptable
```
The PR already exists at #114090 but will be amended momentarily to implement changes suggested by Jelle and to resolve failed tests.
### Has this already been discussed elsewhere?
No response given
### Links to previous discussion of this feature:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-114090
<!-- /gh-linked-prs -->
| 2c7209a3bdf81a289ccd6b80a77497cfcd5732de | a26d27e7ee512cd822b7a7ba075171152779ffdd |
python/cpython | python__cpython-114088 | # Speed up dataclasses._asdict_inner
# Feature or enhancement
### Proposal:
There are several opportunities for speeding up `dataclasses._asdict_inner` (and therefore `asdict`) while maintaining the same underlying logic (including function signature and return values). My in-progress branch has ~18% speedup. Will open PR soon for scrutiny.
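A minimal way to sanity-check such a change (a hypothetical harness, not the benchmark behind the quoted ~18%): time `asdict` on an instance whose fields exercise the recursion in `_asdict_inner`.

```python
import dataclasses
import timeit

@dataclasses.dataclass
class Point:
    x: int
    y: int
    tags: list[str] = dataclasses.field(default_factory=list)

p = Point(1, 2, ["a", "b"])
assert dataclasses.asdict(p) == {"x": 1, "y": 2, "tags": ["a", "b"]}

# asdict recurses into nested dataclasses, lists, and dicts, which is why
# _asdict_inner dominates its cost; time it on a representative instance.
elapsed = timeit.timeit(lambda: dataclasses.asdict(p), number=10_000)
print(f"10k asdict() calls: {elapsed:.4f}s")
```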
### Has this already been discussed elsewhere?
No response given
### Links to previous discussion of this feature:
This grew out of https://github.com/python/cpython/issues/114011 as a more focused change.
<!-- gh-linked-prs -->
### Linked PRs
* gh-114088
<!-- /gh-linked-prs -->
| 2d3f6b56c5846df60b0b305e51a1d293ba0b2aae | 339fc3c22446a148d27d9ec061594ac8d0abd33d |
python/cpython | python__cpython-114408 | # Python/flowgraph.c:497: _Bool no_redundant_jumps(cfg_builder *): Assertion `0' failed.
# Bug report
### Bug description:
Another crash in the compiler found by the fuzzer:
```
~/p/cpython ❯❯❯ ./python.exe -c 'compile("if 9<9<9and 9or 9or 9:9", "", "exec")'
:1: SyntaxWarning: invalid decimal literal
:1: SyntaxWarning: invalid decimal literal
:1: SyntaxWarning: invalid decimal literal
Assertion failed: (0), function no_redundant_jumps, file flowgraph.c, line 497.
fish: Job 1, './python.exe -c 'compile("if 9<…' terminated by signal SIGABRT (Abort)
```
cc: @iritkatriel
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-114408
<!-- /gh-linked-prs -->
| ed30a3c337f30abd2ea5357565a956ed3dc0719c | a53e56e7d88b4f2a2943c9f191024198009fcf9e |
python/cpython | python__cpython-114079 | # socket.sendfile(): os.sendfile() can fail with OverflowError on 32-bit system
# Bug report
bpo-38319/gh-16491 (PR #82500) tried to fix the use of `os.sendfile()` for larger than 2 GiB files on 32-bit FreeBSD. But it only fixed `socket.sendfile()` for the case when `count` is false. If `count` is not false, `blocksize` is calculated as `count - total_sent` in the loop, and can cause an integer overflow if `count` is larger than 2 GiB.
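The overflow can be avoided by capping the value before it reaches `os.sendfile()`; a sketch of that idea (hypothetical helper names, not the actual patch):

```python
# On 32-bit platforms, values above 2 GiB overflow the C integer type that
# os.sendfile()'s count argument is converted to. Capping the per-call
# block size keeps 'count - total_sent' safe however large the file is.
MAX_BLOCKSIZE = 2 ** 30  # hypothetical cap, safely under 2 GiB

def safe_blocksize(count: int, total_sent: int) -> int:
    remaining = count - total_sent
    return min(remaining, MAX_BLOCKSIZE)

assert safe_blocksize(5 * 2 ** 30, 0) == 2 ** 30            # capped
assert safe_blocksize(5 * 2 ** 30, 5 * 2 ** 30 - 10) == 10  # final chunk
```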
<!-- gh-linked-prs -->
### Linked PRs
* gh-114079
* gh-114110
* gh-114111
<!-- /gh-linked-prs -->
| d4dfad2aa9e76038302b0c5a29ebacc2723ed50d | c85c0026a64c2a902138feeb73a9c66af1af31e0 |
python/cpython | python__cpython-114076 | # ``test_compileall`` prints unnecessary information
# Bug report
### Bug description:
Full trace:
```python
./python.exe -m test -R 3:3 test_compileall
Using random seed: 181893785
0:00:00 load avg: 2.79 Run 1 test sequentially
0:00:00 load avg: 2.79 [1/1] test_compileall
beginning 6 repetitions
123456
The stripdir path '/var/folders/02/ps_0fn7s0sx9kkngq6ykymdh0000gn/T/tmplkkvydvt/test/build/fake' is not a valid prefix for source path '/var/folders/02/ps_0fn7s0sx9kkngq6ykymdh0000gn/T/tmplkkvydvt/test/build/real/path/test.py'; ignoring
The stripdir path '/var/folders/02/ps_0fn7s0sx9kkngq6ykymdh0000gn/T/tmppru9pv_h/test/build/fake' is not a valid prefix for source path '/var/folders/02/ps_0fn7s0sx9kkngq6ykymdh0000gn/T/tmppru9pv_h/test/build/real/path/test.py'; ignoring
.The stripdir path '/var/folders/02/ps_0fn7s0sx9kkngq6ykymdh0000gn/T/tmp7xzrjbg_/test/build/fake' is not a valid prefix for source path '/var/folders/02/ps_0fn7s0sx9kkngq6ykymdh0000gn/T/tmp7xzrjbg_/test/build/real/path/test.py'; ignoring
The stripdir path '/var/folders/02/ps_0fn7s0sx9kkngq6ykymdh0000gn/T/tmp27hnkava/test/build/fake' is not a valid prefix for source path '/var/folders/02/ps_0fn7s0sx9kkngq6ykymdh0000gn/T/tmp27hnkava/test/build/real/path/test.py'; ignoring
.The stripdir path '/var/folders/02/ps_0fn7s0sx9kkngq6ykymdh0000gn/T/tmph6rxs48l/test/build/fake' is not a valid prefix for source path '/var/folders/02/ps_0fn7s0sx9kkngq6ykymdh0000gn/T/tmph6rxs48l/test/build/real/path/test.py'; ignoring
The stripdir path '/var/folders/02/ps_0fn7s0sx9kkngq6ykymdh0000gn/T/tmpt8mlethj/test/build/fake' is not a valid prefix for source path '/var/folders/02/ps_0fn7s0sx9kkngq6ykymdh0000gn/T/tmpt8mlethj/test/build/real/path/test.py'; ignoring
.The stripdir path '/var/folders/02/ps_0fn7s0sx9kkngq6ykymdh0000gn/T/tmpedg59hhr/test/build/fake' is not a valid prefix for source path '/var/folders/02/ps_0fn7s0sx9kkngq6ykymdh0000gn/T/tmpedg59hhr/test/build/real/path/test.py'; ignoring
The stripdir path '/var/folders/02/ps_0fn7s0sx9kkngq6ykymdh0000gn/T/tmpw3kb6a4i/test/build/fake' is not a valid prefix for source path '/var/folders/02/ps_0fn7s0sx9kkngq6ykymdh0000gn/T/tmpw3kb6a4i/test/build/real/path/test.py'; ignoring
.The stripdir path '/var/folders/02/ps_0fn7s0sx9kkngq6ykymdh0000gn/T/tmpxsgg9r_l/test/build/fake' is not a valid prefix for source path '/var/folders/02/ps_0fn7s0sx9kkngq6ykymdh0000gn/T/tmpxsgg9r_l/test/build/real/path/test.py'; ignoring
The stripdir path '/var/folders/02/ps_0fn7s0sx9kkngq6ykymdh0000gn/T/tmp0t11_owi/test/build/fake' is not a valid prefix for source path '/var/folders/02/ps_0fn7s0sx9kkngq6ykymdh0000gn/T/tmp0t11_owi/test/build/real/path/test.py'; ignoring
.The stripdir path '/var/folders/02/ps_0fn7s0sx9kkngq6ykymdh0000gn/T/tmp_vvh0tq2/test/build/fake' is not a valid prefix for source path '/var/folders/02/ps_0fn7s0sx9kkngq6ykymdh0000gn/T/tmp_vvh0tq2/test/build/real/path/test.py'; ignoring
The stripdir path '/var/folders/02/ps_0fn7s0sx9kkngq6ykymdh0000gn/T/tmp4t0whft4/test/build/fake' is not a valid prefix for source path '/var/folders/02/ps_0fn7s0sx9kkngq6ykymdh0000gn/T/tmp4t0whft4/test/build/real/path/test.py'; ignoring
.
test_compileall passed in 39.8 sec
== Tests result: SUCCESS ==
1 test OK.
Total duration: 39.8 sec
Total tests: run=141 skipped=6
Total test files: run=1/1
Result: SUCCESS
```
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
macOS
<!-- gh-linked-prs -->
### Linked PRs
* gh-114076
<!-- /gh-linked-prs -->
| 892155d7365c9c4a6c2dd6850b4527222ba5c217 | f8a79109d0c4f408d34d51861cc0a7c447f46d70 |
python/cpython | python__cpython-114871 | # Document tuples and __new__
In this StackOverflow [answer](https://stackoverflow.com/a/77817668/6890912), Ethan Furman, the author of the `enum` module, demonstrated how a tuple value is [special-cased](https://github.com/python/cpython/blob/1709020e8ebaf9bf1bc9ee14d56173c860613931/Lib/enum.py#L256) so that it can be [unpacked](https://github.com/python/cpython/blob/1709020e8ebaf9bf1bc9ee14d56173c860613931/Lib/enum.py#L262) as arguments to the constructor of the mixin type, making it possible to extremely elegantly implement a member type with additional information such as a label for each member as requested by the SO question:
```python
from enum import Enum
class LabelledEnumMixin:
labels = {}
def __new__(cls, value, label):
member = object.__new__(cls)
member._value_ = value
member.label = label
cls.labels[value] = label
return member
@classmethod
def list_labels(cls):
return list(l for c, l in cls.labels.items())
class Test(LabelledEnumMixin, Enum):
A = 1, "Label A"
B = 2, "Custom B"
C = 3, "Custom label for value C + another string"
print(list(Test))
print(Test.list_labels())
```
This outputs:
```
[<Test.A: 1>, <Test.B: 2>, <Test.C: 3>]
['Label A', 'Custom B', 'Custom label for value C + another string']
```
Such a neat behavior of a tuple value is currently undocumented in the [Supported `__dunder__` names](https://docs.python.org/3/library/enum.html#supported-dunder-names) section of the `enum`'s docs, however, making it more of an implementation detail rather than a publicly usable feature.
Please help document this feature properly so we can all benefit from the new possibilities this feature enables without fearing that we are using an undocumented implementation detail. Thanks.
<!-- gh-linked-prs -->
### Linked PRs
* gh-114871
* gh-114993
<!-- /gh-linked-prs -->
| ff7588b729a2a414ea189a2012904da3fbd1401c | ec69e1d0ddc9906e0fb755a5234aeabdc96450ab |
python/cpython | python__cpython-114080 | # Floating point syntax specification is incorrect
# Documentation
The floating point syntax spec is incorrect, and can be found in [/Doc/library/functions.rst](https://github.com/python/cpython/blob/main/Doc/library/functions.rst):
The syntax [rendered currently](https://docs.python.org/3/library/functions.html#float) is the following:
```ebnf
sign ::= "+" | "-"
infinity ::= "Infinity" | "inf"
nan ::= "nan"
digitpart ::= `!digit` (["_"] `!digit`)*
number ::= [digitpart] "." digitpart | digitpart ["."]
exponent ::= ("e" | "E") ["+" | "-"] digitpart
floatnumber ::= number [exponent]
floatvalue ::= [sign] (floatnumber | infinity | nan)
```
`digitpart` should not include an exclamation point or backtick quotes. The backticks appear to be the underlying syntax failing to render, but the exclamation points have a meaning, which is an issue. This is the underlying syntax:
```yaml
sign: "+" | "-"
infinity: "Infinity" | "inf"
nan: "nan"
digitpart: `!digit` (["_"] `!digit`)*
number: [`digitpart`] "." `digitpart` | `digitpart` ["."]
exponent: ("e" | "E") ["+" | "-"] `digitpart`
floatnumber: number [`exponent`]
```
EBNF includes a "[Special Sequences](https://en.wikipedia.org/wiki/Extended_Backus%E2%80%93Naur_form#Table_of_symbols)" syntax, that would be appropriate for this situation, but I'm uncertain if it is possible to use in the underlying YAML format. It should be rendered like this:
```ebnf
sign ::= "+" | "-"
infinity ::= "Infinity" | "inf"
nan ::= "nan"
digit ::= ? Any character in the unicode digit class ?
digitpart ::= digit (["_"] digit)*
number ::= [digitpart] "." digitpart | digitpart ["."]
exponent ::= ("e" | "E") ["+" | "-"] digitpart
floatnumber ::= number [exponent]
floatvalue ::= [sign] (floatnumber | infinity | nan)
```
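A few probes of what that digit class actually admits in CPython: `float()` accepts any Unicode decimal digit, not just ASCII `0`-`9`, plus underscores between digits and the special names.

```python
assert float("1_000.5") == 1000.5            # underscores between digits
assert float("-Infinity") == float("-inf")   # case-insensitive special name
assert float("١٢٣") == 123.0                 # Arabic-Indic decimal digits
assert float("nan") != float("nan")          # NaN never compares equal
print("all grammar probes passed")
```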
<!-- gh-linked-prs -->
### Linked PRs
* gh-114080
* gh-114094
* gh-114095
* gh-114169
* gh-114192
* gh-114193
<!-- /gh-linked-prs -->
| 4f24b92aa0677ed5310dd2d1572b55f4e30c88ef | d457345bbc6414db0443819290b04a9a4333313d |
python/cpython | python__cpython-114127 | # Suggested wording change in Class Method Objects Tutorial
# Documentation
This is not a bug but a suggestion to improve the wording of a paragraph.
While reading [section 9.3.4](https://docs.python.org/3/tutorial/classes.html#method-objects) of the python tutorial on the official website, I came across this sentence in the last paragraph:
> If the name denotes a valid class attribute that is a function object, a method object is created by packing (pointers to) the instance object and the function object just found together in an abstract object: this is the method object.
The subject of the sentence starts with "a method object is created by packing" but also weirdly and redundantly ends with the phrase ": this is the method object." We've already told the user that a method object is created by packing two objects into an abstract object, so it would sound nicer to remove the ":" phrase at the end of this sentence.
> If the name denotes a valid class attribute that is a function object, a method object is created by packing (pointers to) the instance object and the function object just found together into an abstract object.
It could also be better to remove the idea of an "abstract object" from the wording entirely. Is this method object a real object, or is it supposed to be an abstract concept?
> If the name denotes a valid class attribute that is a function object, the references to both the instance object and the function object are packed into a method object.
Just a small wording issue suggestion. Thanks!
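For what it's worth, the packing the paragraph describes is directly observable through the method object's attributes:

```python
class MyClass:
    def f(self):
        return "hello world"

x = MyClass()
xf = x.f                          # attribute lookup creates the method object
assert xf.__self__ is x           # the packed instance object
assert xf.__func__ is MyClass.f   # the packed function object
assert xf() == MyClass.f(x)       # calling it is just f(instance)
print(xf())
```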
<!-- gh-linked-prs -->
### Linked PRs
* gh-114127
* gh-114131
* gh-114132
<!-- /gh-linked-prs -->
| 31a2543c80e1e38c97e50533249d9aa00e2f6cae | e454f9383c5ea629a917dfea791da0bb92b90d8e |
python/cpython | python__cpython-118009 | # typing.get_type_hints fails when passed a class with PEP 695 type parameters and PEP 563 is enabled
# Bug report
### Bug description:
```python
from __future__ import annotations
import typing
class Test[M]:
attr: M
print(typing.get_type_hints(Test))
```
This fails with a NameError:
```
$ poetry run python3 pep_695_get_type_hints.py
Traceback (most recent call last):
File "/home/mjog/pep_695_get_type_hints.py", line 8, in <module>
print(typing.get_type_hints(Test))
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.12/typing.py", line 2197, in get_type_hints
value = _eval_type(value, base_globals, base_locals)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.12/typing.py", line 393, in _eval_type
return t._evaluate(globalns, localns, recursive_guard)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.12/typing.py", line 900, in _evaluate
eval(self.__forward_code__, globalns, localns),
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<string>", line 1, in <module>
NameError: name 'M' is not defined
```
Although `get_type_hints` does not work with imported type aliases, this one is not imported, so I would expect it to work. Further, the documentation for the function indicates that using PEP 563 should *help*, but in this case it actually causes an error.
FWIW, removing the PEP 563 import works fine, giving:
```
{'attr': M}
```
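For reference, the pre-PEP-695 spelling of the same class shows that the string annotation resolves only when the type variable is visible in the evaluation namespace (a sketch of the presumed root cause, not the committed fix):

```python
import typing
from typing import Generic, TypeVar

# Pre-PEP-695 spelling of the reporter's class. Under PEP 563 the
# annotation is the string "M", which get_type_hints() must eval() in some
# namespace; it can only succeed where M is actually visible.
M = TypeVar("M")

class Test(Generic[M]):
    attr: "M"  # what `from __future__ import annotations` stores

# Resolves because M is supplied in the evaluation namespace. A PEP 695
# type parameter instead lives in the class's type-parameter scope, not
# the module globals, which is presumably why the reporter's eval() raises
# NameError.
assert typing.get_type_hints(Test, globalns={"M": M})["attr"] is M
```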
Python version:
```
$ poetry run python3 --version
Python 3.12.0
```
### CPython versions tested on:
3.12
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-118009
* gh-118104
* gh-120270
* gh-120272
* gh-120474
* gh-120475
* gh-121003
* gh-121004
<!-- /gh-linked-prs -->
| 1e3e7ce11e3b0fc76e981db85d27019d6d210bbc | 15b3555e4a47ec925c965778a415dc11f0f981fd |
python/cpython | python__cpython-114067 | # crash in long_vectorcall in longobject.c
# Crash report
### What happened?
The `PyErr_Format` call has a stray `%s` conversion in its format string with no corresponding argument, so the `%s` specifier must be removed.
A Python executable built with the attached patch works correctly.
1. trigger code
```python
class evil(1):
pass
```
2. Root cause source location
```c
static PyObject *
long_vectorcall(PyObject *type, PyObject * const*args,
size_t nargsf, PyObject *kwnames)
{
Py_ssize_t nargs = PyVectorcall_NARGS(nargsf);
if (kwnames != NULL) {
PyThreadState *tstate = PyThreadState_GET();
return _PyObject_MakeTpCall(tstate, type, args, nargs, kwnames);
}
switch (nargs) {
case 0:
return _PyLong_GetZero();
case 1:
return PyNumber_Long(args[0]);
case 2:
return long_new_impl(_PyType_CAST(type), args[0], args[1]);
default:
return PyErr_Format(PyExc_TypeError,
"int expected at most 2 argument%s, got %zd", // <-- here
nargs);
}
}
```
4. patch file
[bugfix.patch](https://github.com/python/cpython/files/13930636/bugfix.patch)
5. asan log
<details><summary> asan</summary>
<p>
==146567==ERROR: AddressSanitizer: SEGV on unknown address 0x000000000003 (pc 0xffffa3159950 bp 0xffffcc068cc0 sp 0xffffcc068cc0 T0)
==146567==The signal is caused by a READ memory access.
==146567==Hint: address points to the zero page.
#0 0xffffa3159950 (/lib/aarch64-linux-gnu/libc.so.6+0x99950)
#1 0xffffa334e078 in __interceptor_strlen ../../../../src/libsanitizer/sanitizer_common/sanitizer_common_interceptors.inc:387
#2 0xaaaaca78de70 in unicode_fromformat_write_cstr Objects/unicodeobject.c:2384
#3 0xaaaaca78f3f0 in unicode_fromformat_arg Objects/unicodeobject.c:2697
#4 0xaaaaca78fa1c in PyUnicode_FromFormatV Objects/unicodeobject.c:2816
#5 0xaaaaca926bc4 in _PyErr_FormatV Python/errors.c:1161
#6 0xaaaaca9246e4 in PyErr_Format Python/errors.c:1196
#7 0xaaaaca62187c in long_vectorcall Objects/longobject.c:6173
#8 0xaaaaca58a540 in _PyObject_VectorcallDictTstate Objects/call.c:135
#9 0xaaaaca58a7b8 in PyObject_VectorcallDict Objects/call.c:159
#10 0xaaaaca861a10 in builtin___build_class__ Python/bltinmodule.c:216
#11 0xaaaaca66cc70 in cfunction_vectorcall_FASTCALL_KEYWORDS Objects/methodobject.c:441
#12 0xaaaaca58661c in _PyObject_VectorcallTstate Include/internal/pycore_call.h:168
#13 0xaaaaca586758 in PyObject_Vectorcall Objects/call.c:327
#14 0xaaaaca8a2120 in _PyEval_EvalFrameDefault Python/generated_cases.c.h:4344
#15 0xaaaaca8d5574 in _PyEval_EvalFrame Include/internal/pycore_ceval.h:115
#16 0xaaaaca8d5574 in _PyEval_Vector Python/ceval.c:1783
#17 0xaaaaca8d573c in PyEval_EvalCode Python/ceval.c:591
#18 0xaaaaca9cb214 in run_eval_code_obj Python/pythonrun.c:1294
#19 0xaaaaca9ce108 in run_mod Python/pythonrun.c:1379
#20 0xaaaaca9cebfc in PyRun_InteractiveOneObjectEx Python/pythonrun.c:287
#21 0xaaaaca9d0ce8 in _PyRun_InteractiveLoopObject Python/pythonrun.c:136
#22 0xaaaaca9d16c8 in _PyRun_AnyFileObject Python/pythonrun.c:71
#23 0xaaaaca9d181c in PyRun_AnyFileExFlags Python/pythonrun.c:103
#24 0xaaaacaa2dbd0 in pymain_run_stdin Modules/main.c:517
#25 0xaaaacaa2f9b8 in pymain_run_python Modules/main.c:631
#26 0xaaaacaa2fc18 in Py_RunMain Modules/main.c:707
#27 0xaaaacaa2fe08 in pymain_main Modules/main.c:737
#28 0xaaaacaa30144 in Py_BytesMain Modules/main.c:761
#29 0xaaaaca3eb4dc in main Programs/python.c:15
#30 0xffffa30e73f8 in __libc_start_call_main ../sysdeps/nptl/libc_start_call_main.h:58
#31 0xffffa30e74c8 in __libc_start_main_impl ../csu/libc-start.c:392
#32 0xaaaaca3eb3ec in _start (/home/kk/projects/cpython/python+0x27b3ec)
AddressSanitizer can not provide additional info.
SUMMARY: AddressSanitizer: SEGV (/lib/aarch64-linux-gnu/libc.so.6+0x99950)
==146567==ABORTING
</p>
</details>
6. correct output in the interpreter after applying the patch
```
>>> class evil(1):
... pass
...
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
class evil(1):
TypeError: int expected at most 2 arguments, got 3
>>>
```
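The same `default:` branch can also be exercised from pure Python without a class statement; a sketch of the expected post-fix behavior, asserting only on a message substring that is stable across releases:

```python
# Calling int() with three positional arguments reaches the same
# `default:` branch of long_vectorcall as the class statement above.
try:
    int("1", 10, None)
    msg = None
except TypeError as exc:
    msg = str(exc)

# e.g. "int expected at most 2 arguments, got 3" on recent releases
assert msg is not None and "at most 2 argument" in msg
```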
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Linux
### Output from running 'python -VV' on the command line:
Python 3.13.0a2 (tags/v3.13.0a2-dirty:9c4347ef8b, Jan 14 2024, 06:56:06) [GCC 11.4.0]
<!-- gh-linked-prs -->
### Linked PRs
* gh-114067
<!-- /gh-linked-prs -->
| a571a2fd3fdaeafdfd71f3d80ed5a3b22b63d0f7 | 311d1e2701037952eaf75f993be76f3092c1f01c |
python/cpython | python__cpython-114015 | # Inconsistency in `fractions.Fraction()` initialization error with invalid string
### Bug description:
The `fractions.Fraction()` error message differs when, for example, `"123.dd"` is passed instead of `"123.aa"`; the two cases should raise the same error.
```pycon
>>> from fractions import Fraction
>>> Fraction("123.dd")
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
Fraction("123.dd")
File "C:\Program Files\Python313\Lib\fractions.py", line 251, in __new__
numerator = numerator * scale + int(decimal)
^^^^^^^^^^^^
ValueError: invalid literal for int() with base 10: 'dd'
>>> Fraction("123.aa")
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
Fraction("123.aa")
File "C:\Program Files\Python313\Lib\fractions.py", line 239, in __new__
raise ValueError('Invalid literal for Fraction: %r' %
ValueError: Invalid literal for Fraction: '123.aa'
```
I discovered this bug while inspecting the rational parsing regex.
https://github.com/python/cpython/blob/dac1da21218a406652b35919aa2118cc32d4c65a/Lib/fractions.py#L57-L69
I think the bug stems from line 65's matching of `d*` instead of `\d*`.
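A minimal sketch of the typo's effect, using simplified patterns rather than the full `_RATIONAL_FORMAT` regex:

```python
import re

# Simplified patterns isolating the typo in the decimal-part branch:
# `d*` matches literal 'd' characters, while the intended `\d*` matches
# digits only.
buggy = re.compile(r"\A\d+\.(?P<decimal>d*)\Z")
fixed = re.compile(r"\A\d+\.(?P<decimal>\d*)\Z")

assert buggy.match("123.dd")            # wrongly accepted; int('dd') then fails
assert fixed.match("123.dd") is None    # rejected up front, as intended
assert fixed.match("123.45")
```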
### CPython versions tested on:
3.13
### Operating systems tested on:
Windows
<!-- gh-linked-prs -->
### Linked PRs
* gh-114015
* gh-114023
* gh-114025
<!-- /gh-linked-prs -->
| dd56b5748317c3d504d6a9660d9207620c547f5c | c7d59bd8cfa053e77ae3446c82afff1fd38a4886 |
python/cpython | python__cpython-114097 | # `make test` doesn't work w/ a WASI cross-build via (at least) `Tools/wasm/wasi.py`
# Bug report
### Bug description:
For some reason, `make test` is appending `python.wasm` to the command to run even though `python.wasm` was specified as part of the host runner.
<details>
<summary>`make test` output</summary>
```
❯ make -C cross-build/wasm32-wasi/ test
make: Entering directory '/home/brettcannon/Repositories/python/cpython/cross-build/wasm32-wasi'
The following modules are *disabled* in configure script:
_sqlite3
The necessary bits to build these optional modules were not found:
_bz2 _ctypes _hashlib
_lzma _ssl _testimportmultiple
_testmultiphase _uuid readline
xxlimited xxlimited_35 zlib
To find the necessary bits, look in configure.ac and config.log.
Could not build the ssl module!
Python requires a OpenSSL 1.1.1 or newer
Checked 109 modules (76 built-in, 0 shared, 20 n/a on wasi-wasm32, 1 disabled, 12 missing, 0 failed on import)
_PYTHON_HOSTRUNNER='/home/linuxbrew/.linuxbrew/bin/wasmtime run --wasm max-wasm-stack=8388608 --wasm threads=y --wasi threads=y --dir /home/brettcannon/Repositories/python/cpython::/ --env PYTHONPATH=/cross-build/wasm32-wasi/build/lib.wasi-wasm32-3.13-pydebug /home/brettcannon/Repositories/python/cpython/cross-build/wasm32-wasi/python.wasm' _PYTHON_PROJECT_BASE=/home/brettcannon/Repositories/python/cpython/cross-build/wasm32-wasi _PYTHON_HOST_PLATFORM=wasi-wasm32 PYTHONPATH=/home/brettcannon/Repositories/python/cpython/cross-build/wasm32-wasi/build/lib.wasi-wasm32-3.13-pydebug:../../Lib _PYTHON_SYSCONFIGDATA_NAME=_sysconfigdata_d_wasi_wasm32-wasi /home/brettcannon/Repositories/python/cpython/cross-build/build/python -m test --fast-ci --timeout=
+ /home/brettcannon/Repositories/python/cpython/cross-build/build/python -u -W default -bb -m test --fast-ci --timeout= --python '/home/linuxbrew/.linuxbrew/bin/wasmtime run --wasm max-wasm-stack=8388608 --wasm threads=y --wasi threads=y --dir /home/brettcannon/Repositories/python/cpython::/ --env PYTHONPATH=/cross-build/wasm32-wasi/build/lib.wasi-wasm32-3.13-pydebug /home/brettcannon/Repositories/python/cpython/cross-build/wasm32-wasi/python.wasm python.wasm' --dont-add-python-opts
== CPython 3.13.0a2+ (heads/main-dirty:3aa4b839e4, Jan 12 2024, 14:07:13) [Clang 17.0.6 ]
== Linux-5.15.133.1-microsoft-standard-WSL2-x86_64-with-glibc2.36 little-endian
== Python build: debug
== cwd: /home/brettcannon/Repositories/python/cpython/cross-build/wasm32-wasi/build/test_python_worker_13486æ
== CPU count: 8
== encodings: locale=UTF-8 FS=utf-8
== resources: all,-cpu
== cross compiled: Yes
== host python: /home/linuxbrew/.linuxbrew/bin/wasmtime run --wasm max-wasm-stack=8388608 --wasm threads=y --wasi threads=y --dir /home/brettcannon/Repositories/python/cpython::/ --env PYTHONPATH=/cross-build/wasm32-wasi/build/lib.wasi-wasm32-3.13-pydebug /home/brettcannon/Repositories/python/cpython/cross-build/wasm32-wasi/python.wasm python.wasm
python.wasm: can't open file '//python.wasm': [Errno 44] No such file or directory
== host platform: <command failed with exit code 2>
```
</details>
And for comparison ...
<details>
<summary>`./python.sh -m test` in `cross-build/wasm32-wasi`</summary>
```
❯ ./python.sh -m test
== CPython 3.13.0a2+ (heads/main-dirty:3aa4b839e4, Jan 12 2024, 23:19:57) [Clang 16.0.0 ]
== wasi-0.0.0-wasm32-32bit little-endian
== Python build: debug
== cwd: /build/test_python_worker_728488æ
== CPU count: 1
== encodings: locale=UTF-8 FS=utf-8
== resources: all test resources are disabled, use -u option to unskip tests
== host runner: /home/linuxbrew/.linuxbrew/bin/wasmtime run --wasm max-wasm-stack=8388608 --wasm threads=y --wasi threads=y --dir /home/brettcannon/Repositories/python/cpython::/ --env PYTHONPATH=/cross-build/wasm32-wasi/build/lib.wasi-wasm32-3.13-pydebug /home/brettcannon/Repositories/python/cpython/cross-build/wasm32-wasi/python.wasm
```
</details>
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Other
<!-- gh-linked-prs -->
### Linked PRs
* gh-114097
<!-- /gh-linked-prs -->
| 03f78397039760085b8ec07969835c89610bbc6d | 3d5df54cdc1e946bd953bc9906da5abf78a48357 |
python/cpython | python__cpython-120520 | # Interned strings are immortal, despite what the documentation says
# Bug report
### Bug description:
The [`sys.intern` documentation](https://docs.python.org/3.13/library/sys.html#sys.intern) explicitly says:
> Interned strings are not [immortal](https://docs.python.org/3.13/glossary.html#term-immortal); you must keep a reference to the return value of [`intern()`](https://docs.python.org/3.13/library/sys.html#sys.intern) around to benefit from it.
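For context, the documented contract promises canonicalization only, not immortality; a minimal sketch:

```python
import sys

# Equal strings interned at different times come back as the very same
# object; that is all the documentation guarantees.
a = sys.intern("".join(["spam", "_eggs"]))  # constructed at runtime
b = sys.intern("spam_eggs")
assert a is b
# Whether that canonical object is additionally immortal (never reclaimed)
# is exactly the question this report raises.
```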
However, they were made immortal in https://github.com/python/cpython/pull/19474 (Implement Immortal Objects -- implementation of PEP 683), without a documentation update. The 3.12 What's New also doesn't mention the change. [edit: it's now clear that the PR author (@eduardo-elizondo) intended it but the reviewer (@markshannon) did not]
PEP-683 itself only mentions the change as a possible optimization.
Meanwhile, `pathlib` [interns every path segment it processes](https://github.com/python/cpython/blob/79970792fd2c70f77c38e08c7b3a9daf6a11bde1/Lib/pathlib/__init__.py#L257), presumably depending on the documented behaviour. In CPython's test suite, that causes names of temporary directories to leak (stealthily, since interned strings are excluded from the total refcount).
[edit: Worse, `type_setattro`, `PyObject_SetAttr` or `PyDict_SetItemString` immortalize keys, so strings used for key/attribute names this way are not reclaimed until interpreter shutdown.]
On the other hand, free-threading (PEP-703) ~~seems to rely~~ [edit: relies] on these being immortal.
What's the correct behaviour? [edit: specifically, in 3.12 and non-free-threading 3.13, should this change be reverted or documented?]
@eduardo-elizondo @ericsnowcurrently
<!-- gh-linked-prs -->
### Linked PRs
* gh-120520
* gh-120945
* gh-121358
* gh-121364
* gh-121851
* gh-121854
* gh-123065
* gh-124938
* gh-127250
<!-- /gh-linked-prs -->
Related fixes:
- https://github.com/python/cpython/pull/121903
- https://github.com/python/cpython/pull/122303
- https://github.com/python/cpython/pull/122421 | 6f1d448bc110633eda110310fd833bd46e7b30f2 | 7595e6743ac78ac0dd19418176f66d251668fafc |
python/cpython | python__cpython-113979 | # Deprecation warning during text completion in REPL
# Bug report
### Bug description:
Define function inside REPL session:
```pycon
>>> def f():pass
```
then try to hit TAB to get suggestions for `f.__code__.` and you will be left with this ugly output without suggestions:
```pycon
>>> f.__code__./root/Desktop/cpython-main/Lib/rlcompleter.py:191: DeprecationWarning: co_lnotab is deprecated, use co_lines instead.
if (value := getattr(thisobject, word, None)) is not None:
```
then try to access `co_lnotab` and you will get this warning again:
```pycon
>>> f.__code__.co_lnotab
<stdin>-1:1: DeprecationWarning: co_lnotab is deprecated, use co_lines instead.
```
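One plausible mitigation (an assumption on my part, not the committed patch) is to suppress the warning while the completer probes candidate attributes; `safe_getattr` below is a hypothetical helper:

```python
import warnings

def safe_getattr(obj, name):
    # Probe one candidate attribute without letting property access leak a
    # DeprecationWarning into the user's terminal mid-completion.
    with warnings.catch_warnings():
        warnings.simplefilter("ignore", DeprecationWarning)
        return getattr(obj, name, None)

class Demo:
    @property
    def legacy(self):
        warnings.warn("legacy is deprecated", DeprecationWarning)
        return 42

assert safe_getattr(Demo(), "legacy") == 42
```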
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-113979
* gh-119429
<!-- /gh-linked-prs -->
| e03dde5a24d3953e0b16f7cdefdc8b00aa9d9e11 | 9db2fd7edaa9d03e8c649c3bb0e8d963233cde22 |
python/cpython | python__cpython-122101 | # Unbounded reads by `zipfile` may cause a `MemoryError`.
# Bug report
### Bug description:
```python
def _EndRecData(fpin):
"""Return data from the "End of Central Directory" record, or None.
The data is a list of the nine items in the ZIP "End of central dir"
record followed by a tenth item, the file seek offset of this record."""
# Determine file size
fpin.seek(0, 2)
filesize = fpin.tell()
# Check to see if this is ZIP file with no archive comment (the
# "end of central directory" structure should be the last item in the
# file if this is the case).
try:
fpin.seek(-sizeEndCentDir, 2)
except OSError:
return None
data = fpin.read()
if (len(data) == sizeEndCentDir and
data[0:4] == stringEndArchive and
data[-2:] == b"\000\000"):
```

When checking whether a file is a ZIP file, a `MemoryError` was triggered, followed by an OOM condition. Investigation traced it to the unbounded `read()` call shown above.
Debugging with pdb showed that the file being read was a symlink pointing at `/proc/kcore`. Why doesn't the existing ZIP check determine whether a file is a ZIP by reading the magic header bytes (`50 4B 03 04`) instead?
The current check places no limit on how much `read()` consumes, so reading an abnormal file like this can exhaust system memory.
I hope this can be resolved.
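A minimal sketch of the reporter's suggestion (`looks_like_zip` is a hypothetical helper; note a magic-byte check alone is not sufficient in general, since e.g. empty archives begin with the end-of-central-directory signature instead):

```python
import io

def looks_like_zip(fp):
    # Peek at the first four bytes for the local-file-header magic
    # b"PK\x03\x04" (50 4B 03 04), restoring the stream position afterwards.
    pos = fp.tell()
    try:
        fp.seek(0)
        return fp.read(4) == b"PK\x03\x04"
    finally:
        fp.seek(pos)

assert looks_like_zip(io.BytesIO(b"PK\x03\x04" + b"\x00" * 16))
assert not looks_like_zip(io.BytesIO(b"definitely not a zip"))
```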
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-122101
* gh-126347
* gh-126348
<!-- /gh-linked-prs -->
| 556dc9b8a78bad296513221f3f414a3f8fd0ae70 | 8161afe51c65afbf0332da58837d94975cec9f65 |
python/cpython | python__cpython-113969 | # `zipfile.ZipInfo._compresslevel` should be public.
# Feature or enhancement
### Proposal:
Today people pass `ZipInfo` instances into APIs that accept them, such as `ZipFile.writestr()`, in order to control how individual items are put into a zip archive, like so:
```python
zip_info = zipfile.ZipInfo(filename=filename, date_time=desired_timestamp)
zip_info.compress_type = this_files_compress_type
# This attribute is sadly not public; manual creation of ZipInfo objects
# is uncommon. Without this we cannot write compressed files with our
# own overridden info.
zip_info._compresslevel = this_files_compress_level # pylint: disable=protected-access
zip_info.external_attr = desired_permissions
zip_file.writestr(zip_info, data)
```
There is no reason for `._compresslevel` to be marked protected with a leading underscore. We should make it public, as it is useful.
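For context, a per-member compression level can already be passed through the public `compresslevel` parameter of `ZipFile.writestr()`; the request here is to also make the attribute carried on `ZipInfo` itself public. A runnable sketch of the existing public spelling:

```python
import io
import zipfile

buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    info = zipfile.ZipInfo("a.txt", date_time=(2024, 1, 1, 0, 0, 0))
    info.compress_type = zipfile.ZIP_DEFLATED
    # per-call compresslevel is the currently supported public route
    zf.writestr(info, b"data" * 100, compresslevel=9)

with zipfile.ZipFile(io.BytesIO(buf.getvalue())) as zf:
    assert zf.read("a.txt") == b"data" * 100
```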
### Has this already been discussed elsewhere?
This is a minor feature, which does not need previous discussion elsewhere
### Links to previous discussion of this feature:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-113969
<!-- /gh-linked-prs -->
| b44b9d99004f096619c962a8b42a19322f6a441b | ac92527c08d917dffdb9c0a218d06f21114614a2 |
python/cpython | python__cpython-113955 | # Tkinter: widget.tag_unbind(tag, sequence, funcid) unbind all bindings
# Bug report
This issue is similar to #75666. When *funcid* is passed to the `tag_unbind()` methods of `Text` or `Canvas`, they remove all bindings but delete only the *funcid* Tcl command. According to the docstrings, they should only remove the binding specified by *funcid*.
<!-- gh-linked-prs -->
### Linked PRs
* gh-113955
* gh-114997
* gh-114998
<!-- /gh-linked-prs -->
| 7e42fddf608337e83b30401910d76fd75d5cf20a | 3ddc5152550ea62280124c37d0b4339030ff7df4 |
python/cpython | python__cpython-115306 | # pydoc does not show unbound methods of builtin classes as module members
If you set an unbound method of a builtin class as a module global, `pydoc` will not show it in the module output. It does show builtin functions, static methods, class methods, and bound instance methods of builtin classes.
```py
list_count = list.count # not shown
list_repr = list.__repr__ # not shown
builtin_ord = ord
str_maketrans = str.maketrans
dict_fromkeys = dict.fromkeys
dict_get = {}.get
```
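A plausible explanation (an assumption, not verified against pydoc's source): the member filter recognizes builtin functions, but an unbound method of a builtin class is a distinct *method descriptor* type that such a filter can miss:

```python
import inspect

assert inspect.isbuiltin(ord)              # a builtin function: shown
assert not inspect.isbuiltin(list.count)   # unbound builtin method: not shown
assert inspect.ismethoddescriptor(list.count)
assert inspect.isroutine(list.count)       # yet it is still a callable routine
```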
<!-- gh-linked-prs -->
### Linked PRs
* gh-115306
<!-- /gh-linked-prs -->
| 72cff8d8e5a476d3406efb0491452bf9d6b02feb | 68c79d21fa791d7418a858b7aa4604880e988a02 |
python/cpython | python__cpython-113940 | # `traceback.clear_frames` does not clear locals when there have been previous access to `f_locals`
# Bug report
### Bug description:
```python
import traceback
import gc
class Obj:
def __init__(self, name: str):
self.name = name
def __repr__(self):
return f"Obj({self.name!r})"
def __del__(self):
print("del", self)
def deep(i: int):
a = Obj(f"a, i={i}")
if i == 2:
raise Exception(f"exception at i={i}")
print(a)
def func():
for i in range(5):
gc.collect()
print("** i:", i)
try:
deep(i)
except Exception as exc:
print("caught", exc)
print_tb(exc.__traceback__)
# traceback.clear_frames(prev_exc.__traceback__)
clear_tb(exc.__traceback__)
continue # continue with next i
print("deep", i, "done")
def print_tb(tb):
print("Call stack:")
while tb:
frame_i = tb.tb_frame.f_locals.get("i")
print(f" {tb.tb_frame.f_code.co_name}: i={frame_i}")
tb = tb.tb_next
def clear_tb(tb):
print("Clearing stack:")
while tb:
print(tb.tb_frame)
try:
tb.tb_frame.clear()
except RuntimeError:
print(" cannot clear?")
else:
print(" cleared")
# Using this code triggers that the ref actually goes out of scope, otherwise it does not!
# print(" now:", tb.tb_frame.f_locals)
tb = tb.tb_next
if __name__ == '__main__':
func()
print("exit")
```
Running this code gives the following output:
```
** i: 0
Obj('a, i=0')
del Obj('a, i=0')
deep 0 done
** i: 1
Obj('a, i=1')
del Obj('a, i=1')
deep 1 done
** i: 2
caught exception at i=2
Call stack:
func: i=2
deep: i=2
Clearing stack:
<frame at 0x7f9ee1cc72a0, file '/u/zeyer/code/playground/py-oom-out-of-scope.py', line 34, code func>
cannot clear?
<frame at 0x7f9ee1c168c0, file '/u/zeyer/code/playground/py-oom-out-of-scope.py', line 20, code deep>
cleared
** i: 3
Obj('a, i=3')
del Obj('a, i=3')
deep 3 done
** i: 4
Obj('a, i=4')
del Obj('a, i=4')
deep 4 done
exit
del Obj('a, i=2')
```
You see that `Obj('a, i=2')` only is deleted at exit.
This only happens when the `print_tb` is used before, which will access `f_locals` of each frame.
`traceback.clear_frames` should have cleared the locals. But as you see from the output, it does not.
`clear_tb` is basically a copy of `traceback.clear_frames`.
The problem goes away if you access `tb.tb_frame.f_locals` *after it was cleared* (i.e. `tb.tb_frame.clear()` was called).
Looking at the C code, this is what `tb_frame.clear()` will do:
https://github.com/python/cpython/blob/3.12/Objects/frameobject.c#L933-L946
```
static int
frame_tp_clear(PyFrameObject *f)
{
Py_CLEAR(f->f_trace);
/* locals and stack */
PyObject **locals = _PyFrame_GetLocalsArray(f->f_frame);
assert(f->f_frame->stacktop >= 0);
for (int i = 0; i < f->f_frame->stacktop; i++) {
Py_CLEAR(locals[i]);
}
f->f_frame->stacktop = 0;
return 0;
}
```
However, if you accessed `tb_frame.f_locals` before, it will have created a dictionary in `frame->f_locals` here: https://github.com/python/cpython/blob/5c238225f60c33cf1931b1a8c9a3310192c716ae/Objects/frameobject.c#L1218C18-L1218C33
That `frame->f_locals` dict will also have references to all the local vars. And that `f_locals` dict is not cleared in `tb_frame.clear()`.
However, then when you access `tb_frame.f_locals` again, it will update the existing `frame->f_locals` dict, and delete all the local vars in it, because they are not available anymore. Here:
https://github.com/python/cpython/blob/3.12/Objects/frameobject.c#L1256C13-L1256C55
I think it's a bug (or at least very unexpected) that `tb_frame.clear()` does not clear `frame->f_locals`.
So my suggestion would be to add `Py_CLEAR(f->f_frame->f_locals)` in `frame_tp_clear`.
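For reference, a minimal pure-Python sketch of the contract that holds when `f_locals` has never been accessed before the clear; the report is that a prior `f_locals` access defeats it:

```python
import sys

def f():
    x = object()          # a local that should die when the frame is cleared
    return sys._getframe()

fr = f()
fr.clear()                # allowed: the frame is no longer executing
assert "x" not in fr.f_locals
```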
---
There is then another related issue: When the `except` block is left, the exception goes out of scope, so then it should free all the locals (even when `frame.clear()` was not called). However, this is also not the case.
After inspecting this further: Once `frame.f_locals` was accessed from the current frame where the exception is handled, this `frame.f_locals` still has a reference to the exception, and thus to the frames, even though the `DELETE_FAST` for the exception deleted it from the fast locals. See the comments below for more on this.
---
Note, for PyTorch and others, when you first do extended exception reporting which accesses `f_locals` in any way, this here fixes two arising problems. Related:
* https://github.com/pytorch/pytorch/issues/18853
* https://github.com/pytorch/pytorch/issues/27600
E.g., this came up for us because we have this extended exception reporting, which accesses `f_locals`:
```python
# Extend exception message by module call stack.
module_names_by_id = {} # id -> name
for name, mod in model.named_modules():
if id(mod) not in module_names_by_id:
module_names_by_id[id(mod)] = name or "(root)"
exc_ext = []
for frame in iter_traceback(exc.__traceback__):
if frame.f_code.co_nlocals == 0:
continue
frame_self = frame.f_locals.get("self")
if isinstance(frame_self, (torch.nn.Module, rf.Module)):
func = get_func_from_code_object(frame.f_code, frame=frame)
if func and func.__name__ and func.__name__.startswith("_") and not func.__name__.startswith("__"):
continue
func_name = (func and func.__qualname__) or type(frame_self).__name__
exc_ext.append(f"({func_name}) {module_names_by_id.get(id(frame_self), '(unknown)')}")
if not exc_ext:
exc_ext.append("(No module call frames.)")
if len(exc.args) == 1 and isinstance(exc.args[0], str) and not always_direct_print:
exc.args = ("\n".join([exc.args[0], "", "Module call stack:"] + exc_ext),)
else:
print("Module call stack:", file=log.v3)
for msg in exc_ext:
print(msg, file=log.v3)
```
The normal `traceback.clear_frames` here does not help.
---
### CPython versions tested on:
3.11, 3.12, 3.13
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-113940
<!-- /gh-linked-prs -->
| 78c254582b1757c15098ae65e97a2589ae663cd7 | b905fad83819ec9102ecfb97e3d8ab0aaddd9784 |
python/cpython | python__cpython-113953 | # ``test_type_cache`` fails with ``--forever`` argument
# Bug report
### Bug description:
Trace:
```python
./python.exe -m test -q test_type_cache --forever
Using random seed: 2064316463
0:00:00 load avg: 2.54 Run tests sequentially
test test_type_cache failed -- Traceback (most recent call last):
File "/Users/admin/Projects/cpython/Lib/test/test_type_cache.py", line 132, in test_class_load_attr_specialization_static_type
self._assign_and_check_valid_version(str)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^
File "/Users/admin/Projects/cpython/Lib/test/test_type_cache.py", line 90, in _assign_and_check_valid_version
self.assertNotEqual(type_get_version(user_type), 0)
~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AssertionError: 0 == 0
== Tests result: FAILURE ==
1 test failed:
test_type_cache
Total duration: 50 ms
Total tests: run=14 failures=1
Total test files: run=2 failed=1
Result: FAILURE
```
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
macOS
<!-- gh-linked-prs -->
### Linked PRs
* gh-113953
<!-- /gh-linked-prs -->
| e58334e4c9ccbebce6858da1985c1f75a6063d05 | c4992f4106aa509375f5beca8dc044a7f6c36a72 |
python/cpython | python__cpython-113933 | # ``test_compile`` raises a ``SyntaxWarning``
# Bug report
### Bug description:
Trace:
<details>
```python
test_condition_expression_with_redundant_comparisons_compiles (test.test_compile.TestSpecifics.test_condition_expression_with_redundant_comparisons_compiles) ... <eval>:1: SyntaxWarning: invalid decimal literal
<eval>:1: SyntaxWarning: invalid decimal literal
```
</details>
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
macOS
<!-- gh-linked-prs -->
### Linked PRs
* gh-113933
<!-- /gh-linked-prs -->
| 9f088336b268dfe9011a7cb550f4e488ccd7e8f5 | ec23e90082ffdedc7f0bdd2dfadfc4983ddc0712 |
python/cpython | python__cpython-113973 | # test_idle fails on Devuan 5 Linux
# Bug report
### Bug description:
On Devuan 5, with this command in a build script:
```
./python -m test -uall -j0 > $test_file 2>&1
```
the following is logged:
```
0:10:21 load avg: 3.54 [229/469/2] test_idle failed (1 error)
test test_idle failed -- Traceback (most recent call last):
File "/usr/local/src/forks/cpython/Lib/idlelib/idle_test/test_configdialog.py", line 452, in test_highlight_target_text_mouse
click_it(start_index)
~~~~~~~~^^^^^^^^^^^^^
File "/usr/local/src/forks/cpython/Lib/idlelib/idle_test/test_configdialog.py", line 436, in click_it
x, y, dx, dy = hs.bbox(start)
^^^^^^^^^^^^
TypeError: cannot unpack non-iterable NoneType object
```
But this passes:
```
./python -m test -uall -j0 test_idle
Using random seed: 2055385424
0:00:00 load avg: 0.00 Run 1 test in parallel using 1 worker process
0:00:01 load avg: 0.00 [1/1] test_idle passed
== Tests result: SUCCESS ==
1 test OK.
Total duration: 1.8 sec
Total tests: run=283 skipped=81
Total test files: run=1/1
Result: SUCCESS
```
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-113973
* gh-113974
* gh-113975
<!-- /gh-linked-prs -->
| c4992f4106aa509375f5beca8dc044a7f6c36a72 | efa738e862da26f870ca659b01ff732649f400a7 |
python/cpython | python__cpython-113897 | # ``test_builtin`` raises ``DeprecationWarning``
# Bug report
### Bug description:
Trace:
<details>
```python
./python.exe -m test -v test_builtin
== CPython 3.13.0a2+ (heads/main:4826d52338, Jan 10 2024, 10:05:57) [Clang 15.0.0 (clang-1500.1.0.2.5)]
== macOS-14.2.1-arm64-arm-64bit little-endian
== Python build: debug
== cwd: /Users/admin/Projects/cpython/build/test_python_worker_41682æ
== CPU count: 8
== encodings: locale=UTF-8 FS=utf-8
== resources: all test resources are disabled, use -u option to unskip tests
Using random seed: 656825523
0:00:00 load avg: 1.80 Run 1 test sequentially
0:00:00 load avg: 1.80 [1/1] test_builtin
test___ne__ (test.test_builtin.BuiltinTest.test___ne__) ... /Users/admin/Projects/cpython/Lib/unittest/case.py:727: DeprecationWarning: NotImplemented should not be used in a boolean context
if not expr:
ok
test_abs (test.test_builtin.BuiltinTest.test_abs) ... ok
test_all (test.test_builtin.BuiltinTest.test_all) ... ok
test_any (test.test_builtin.BuiltinTest.test_any) ... ok
test_ascii (test.test_builtin.BuiltinTest.test_ascii) ... ok
test_bin (test.test_builtin.BuiltinTest.test_bin) ... ok
test_bug_27936 (test.test_builtin.BuiltinTest.test_bug_27936) ... ok
test_bytearray_extend_error (test.test_builtin.BuiltinTest.test_bytearray_extend_error) ... ok
test_bytearray_join_with_custom_iterator (test.test_builtin.BuiltinTest.test_bytearray_join_with_custom_iterator) ... ok
test_bytearray_join_with_misbehaving_iterator (test.test_builtin.BuiltinTest.test_bytearray_join_with_misbehaving_iterator) ... ok
test_bytearray_translate (test.test_builtin.BuiltinTest.test_bytearray_translate) ... ok
test_callable (test.test_builtin.BuiltinTest.test_callable) ... ok
test_chr (test.test_builtin.BuiltinTest.test_chr) ... ok
test_cmp (test.test_builtin.BuiltinTest.test_cmp) ... ok
test_compile (test.test_builtin.BuiltinTest.test_compile) ... ok
test_compile_ast (test.test_builtin.BuiltinTest.test_compile_ast) ... ok
test_compile_async_generator (test.test_builtin.BuiltinTest.test_compile_async_generator)
With the PyCF_ALLOW_TOP_LEVEL_AWAIT flag added in 3.8, we want to ... ok
test_compile_top_level_await (test.test_builtin.BuiltinTest.test_compile_top_level_await)
Test whether code some top level await can be compiled. ... ok
test_compile_top_level_await_invalid_cases (test.test_builtin.BuiltinTest.test_compile_top_level_await_invalid_cases) ... ok
test_compile_top_level_await_no_coro (test.test_builtin.BuiltinTest.test_compile_top_level_await_no_coro)
Make sure top level non-await codes get the correct coroutine flags ... ok
test_construct_singletons (test.test_builtin.BuiltinTest.test_construct_singletons) ... ok
test_delattr (test.test_builtin.BuiltinTest.test_delattr) ... ok
test_dir (test.test_builtin.BuiltinTest.test_dir) ... ok
test_divmod (test.test_builtin.BuiltinTest.test_divmod) ... ok
test_eval (test.test_builtin.BuiltinTest.test_eval) ... ok
test_eval_builtins_mapping (test.test_builtin.BuiltinTest.test_eval_builtins_mapping) ... ok
test_eval_builtins_mapping_reduce (test.test_builtin.BuiltinTest.test_eval_builtins_mapping_reduce) ... ok
test_exec (test.test_builtin.BuiltinTest.test_exec) ... ok
test_exec_builtins_mapping_import (test.test_builtin.BuiltinTest.test_exec_builtins_mapping_import) ... ok
test_exec_closure (test.test_builtin.BuiltinTest.test_exec_closure) ... ok
test_exec_globals (test.test_builtin.BuiltinTest.test_exec_globals) ... ok
test_exec_globals_dict_subclass (test.test_builtin.BuiltinTest.test_exec_globals_dict_subclass) ... ok
test_exec_globals_error_on_get (test.test_builtin.BuiltinTest.test_exec_globals_error_on_get) ... ok
test_exec_globals_frozen (test.test_builtin.BuiltinTest.test_exec_globals_frozen) ... ok
test_exec_redirected (test.test_builtin.BuiltinTest.test_exec_redirected) ... ok
test_filter (test.test_builtin.BuiltinTest.test_filter) ... ok
test_filter_dealloc (test.test_builtin.BuiltinTest.test_filter_dealloc) ... skipped "resource 'cpu' is not enabled"
test_filter_pickle (test.test_builtin.BuiltinTest.test_filter_pickle) ... ok
test_format (test.test_builtin.BuiltinTest.test_format) ... ok
test_general_eval (test.test_builtin.BuiltinTest.test_general_eval) ... ok
test_getattr (test.test_builtin.BuiltinTest.test_getattr) ... ok
test_hasattr (test.test_builtin.BuiltinTest.test_hasattr) ... ok
test_hash (test.test_builtin.BuiltinTest.test_hash) ... ok
test_hex (test.test_builtin.BuiltinTest.test_hex) ... ok
test_id (test.test_builtin.BuiltinTest.test_id) ... ok
test_import (test.test_builtin.BuiltinTest.test_import) ... ok
test_input (test.test_builtin.BuiltinTest.test_input) ... ok
test_isinstance (test.test_builtin.BuiltinTest.test_isinstance) ... ok
test_issubclass (test.test_builtin.BuiltinTest.test_issubclass) ... ok
test_iter (test.test_builtin.BuiltinTest.test_iter) ... ok
test_len (test.test_builtin.BuiltinTest.test_len) ... ok
test_map (test.test_builtin.BuiltinTest.test_map) ... ok
test_map_pickle (test.test_builtin.BuiltinTest.test_map_pickle) ... ok
test_max (test.test_builtin.BuiltinTest.test_max) ... ok
test_min (test.test_builtin.BuiltinTest.test_min) ... ok
test_neg (test.test_builtin.BuiltinTest.test_neg) ... ok
test_next (test.test_builtin.BuiltinTest.test_next) ... ok
test_oct (test.test_builtin.BuiltinTest.test_oct) ... ok
test_open (test.test_builtin.BuiltinTest.test_open) ... ok
test_open_default_encoding (test.test_builtin.BuiltinTest.test_open_default_encoding) ... ok
test_open_non_inheritable (test.test_builtin.BuiltinTest.test_open_non_inheritable) ... ok
test_ord (test.test_builtin.BuiltinTest.test_ord) ... ok
test_pow (test.test_builtin.BuiltinTest.test_pow) ... ok
test_repr (test.test_builtin.BuiltinTest.test_repr) ... ok
test_round (test.test_builtin.BuiltinTest.test_round) ... ok
test_round_large (test.test_builtin.BuiltinTest.test_round_large) ... ok
test_setattr (test.test_builtin.BuiltinTest.test_setattr) ... ok
test_sum (test.test_builtin.BuiltinTest.test_sum) ... ok
test_sum_accuracy (test.test_builtin.BuiltinTest.test_sum_accuracy) ... ok
test_type (test.test_builtin.BuiltinTest.test_type) ... ok
test_vars (test.test_builtin.BuiltinTest.test_vars) ... ok
test_warning_notimplemented (test.test_builtin.BuiltinTest.test_warning_notimplemented) ... ok
test_zip (test.test_builtin.BuiltinTest.test_zip) ... ok
test_zip_bad_iterable (test.test_builtin.BuiltinTest.test_zip_bad_iterable) ... ok
test_zip_pickle (test.test_builtin.BuiltinTest.test_zip_pickle) ... ok
test_zip_pickle_strict (test.test_builtin.BuiltinTest.test_zip_pickle_strict) ... ok
test_zip_pickle_strict_fail (test.test_builtin.BuiltinTest.test_zip_pickle_strict_fail) ... ok
test_zip_result_gc (test.test_builtin.BuiltinTest.test_zip_result_gc) ... ok
test_zip_strict (test.test_builtin.BuiltinTest.test_zip_strict) ... ok
test_zip_strict_error_handling (test.test_builtin.BuiltinTest.test_zip_strict_error_handling) ... ok
test_zip_strict_error_handling_stopiteration (test.test_builtin.BuiltinTest.test_zip_strict_error_handling_stopiteration) ... ok
test_zip_strict_iterators (test.test_builtin.BuiltinTest.test_zip_strict_iterators) ... ok
test_immortals (test.test_builtin.ImmortalTests.test_immortals) ... ok
test_list_repeat_respect_immortality (test.test_builtin.ImmortalTests.test_list_repeat_respect_immortality) ... ok
test_tuple_repeat_respect_immortality (test.test_builtin.ImmortalTests.test_tuple_repeat_respect_immortality) ... ok
test_input_no_stdout_fileno (test.test_builtin.PtyTests.test_input_no_stdout_fileno) ... ok
test_input_tty (test.test_builtin.PtyTests.test_input_tty) ... ok
test_input_tty_non_ascii (test.test_builtin.PtyTests.test_input_tty_non_ascii) ... ok
test_input_tty_non_ascii_unicode_errors (test.test_builtin.PtyTests.test_input_tty_non_ascii_unicode_errors) ... ok
test_input_tty_nondecodable_input (test.test_builtin.PtyTests.test_input_tty_nondecodable_input) ... ok
test_input_tty_nonencodable_prompt (test.test_builtin.PtyTests.test_input_tty_nonencodable_prompt) ... ok
test_input_tty_null_in_prompt (test.test_builtin.PtyTests.test_input_tty_null_in_prompt) ... ok
test_cleanup (test.test_builtin.ShutdownTest.test_cleanup) ... ok
test_breakpoint (test.test_builtin.TestBreakpoint.test_breakpoint) ... ok
test_breakpoint_with_args_and_keywords (test.test_builtin.TestBreakpoint.test_breakpoint_with_args_and_keywords) ... ok
test_breakpoint_with_breakpointhook_reset (test.test_builtin.TestBreakpoint.test_breakpoint_with_breakpointhook_reset) ... ok
test_breakpoint_with_breakpointhook_set (test.test_builtin.TestBreakpoint.test_breakpoint_with_breakpointhook_set) ... ok
test_breakpoint_with_passthru_error (test.test_builtin.TestBreakpoint.test_breakpoint_with_passthru_error) ... ok
test_envar_good_path_builtin (test.test_builtin.TestBreakpoint.test_envar_good_path_builtin) ... ok
test_envar_good_path_empty_string (test.test_builtin.TestBreakpoint.test_envar_good_path_empty_string) ... ok
test_envar_good_path_noop_0 (test.test_builtin.TestBreakpoint.test_envar_good_path_noop_0) ... ok
test_envar_good_path_other (test.test_builtin.TestBreakpoint.test_envar_good_path_other) ... ok
test_envar_ignored_when_hook_is_set (test.test_builtin.TestBreakpoint.test_envar_ignored_when_hook_is_set) ... ok
test_envar_unimportable (test.test_builtin.TestBreakpoint.test_envar_unimportable) ... ok
test_runtime_error_when_hook_is_lost (test.test_builtin.TestBreakpoint.test_runtime_error_when_hook_is_lost) ... ok
test_bad_arguments (test.test_builtin.TestSorted.test_bad_arguments) ... ok
test_baddecorator (test.test_builtin.TestSorted.test_baddecorator) ... ok
test_basic (test.test_builtin.TestSorted.test_basic) ... ok
test_inputtypes (test.test_builtin.TestSorted.test_inputtypes) ... ok
test_bad_args (test.test_builtin.TestType.test_bad_args) ... ok
test_bad_slots (test.test_builtin.TestType.test_bad_slots) ... ok
test_namespace_order (test.test_builtin.TestType.test_namespace_order) ... ok
test_new_type (test.test_builtin.TestType.test_new_type) ... ok
test_type_doc (test.test_builtin.TestType.test_type_doc) ... ok
test_type_name (test.test_builtin.TestType.test_type_name) ... ok
test_type_nokwargs (test.test_builtin.TestType.test_type_nokwargs) ... ok
test_type_qualname (test.test_builtin.TestType.test_type_qualname) ... ok
test_type_typeparams (test.test_builtin.TestType.test_type_typeparams) ... ok
bin (builtins)
Doctest: builtins.bin ... ok
hex (builtins.bytearray)
Doctest: builtins.bytearray.hex ... ok
hex (builtins.bytes)
Doctest: builtins.bytes.hex ... ok
as_integer_ratio (builtins.float)
Doctest: builtins.float.as_integer_ratio ... ok
fromhex (builtins.float)
Doctest: builtins.float.fromhex ... ok
hex (builtins.float)
Doctest: builtins.float.hex ... ok
hex (builtins)
Doctest: builtins.hex ... ok
int (builtins)
Doctest: builtins.int ... ok
as_integer_ratio (builtins.int)
Doctest: builtins.int.as_integer_ratio ... ok
bit_count (builtins.int)
Doctest: builtins.int.bit_count ... ok
bit_length (builtins.int)
Doctest: builtins.int.bit_length ... ok
hex (builtins.memoryview)
Doctest: builtins.memoryview.hex ... ok
oct (builtins)
Doctest: builtins.oct ... ok
zip (builtins)
Doctest: builtins.zip ... ok
----------------------------------------------------------------------
Ran 131 tests in 0.199s
OK (skipped=1)
== Tests result: SUCCESS ==
1 test OK.
Total duration: 350 ms
Total tests: run=131 skipped=1
Total test files: run=1/1
Result: SUCCESS
```
</details>
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
macOS
<!-- gh-linked-prs -->
### Linked PRs
* gh-113897
* gh-113928
<!-- /gh-linked-prs -->
| 9d33c23857cfd952bf3e1e7f34c77b7c9a5accc3 | fafb3275f25e116e51ff0b867aec597cb3de840f |
python/cpython | python__cpython-119519 | # `asyncio` `ProactorEventLoop` didn't check if the `socket` is blocking
# Bug report
### Bug description:
```python
import asyncio
import socket
s = socket.socket()
await asyncio.get_running_loop().sock_connect(s,('127.0.0.1',80))
```
according to https://docs.python.org/3/library/asyncio-eventloop.html#asyncio.loop.sock_connect and https://github.com/python/cpython/blob/5d8a3e74b51a59752f24cb869e7daa065b673f83/Lib/asyncio/selector_events.py#L617-L630
it should raise. However, `ProactorEventLoop` doesn't follow this rule.
Should we fix this? Any thoughts?
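For context, the selector implementation's guard (linked above) amounts to rejecting sockets that are still in blocking mode. A minimal sketch of such a check; `_check_nonblocking` is a hypothetical helper name, not the actual asyncio source:

```python
import socket

def _check_nonblocking(sock):
    # Hypothetical mirror of the selector event loop's guard:
    # a timeout of 0 means the socket is in non-blocking mode.
    if sock.gettimeout() != 0:
        raise ValueError("the socket must be non-blocking")

s = socket.socket()
try:
    _check_nonblocking(s)   # fresh sockets are blocking, so this raises
    blocked = False
except ValueError:
    blocked = True
s.setblocking(False)
_check_nonblocking(s)       # passes once the socket is non-blocking
s.close()
print(blocked)  # True
```

The proposal is essentially for the proactor implementation to perform the same kind of check.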
### CPython versions tested on:
3.12
### Operating systems tested on:
Windows
<!-- gh-linked-prs -->
### Linked PRs
* gh-119519
* gh-119912
* gh-119913
<!-- /gh-linked-prs -->
| cf3bba3f0671d2c9fee099e3ab0f78b98b176131 | ce2ea7d629788fd051cbec099b5947ecbe50e819 |
python/cpython | python__cpython-114161 | # Make queue.SimpleQueue thread-safe in `--disable-gil` builds
# Feature or enhancement
### Proposal:
Instances of `queue.SimpleQueue` have mutable state that is not safe to mutate without the GIL. The previous implementation (in 3.12) was https://github.com/colesbury/nogil-3.12/commit/7e60a01aee.
@colesbury - You mentioned in chat that porting the 3.12 approach wouldn't be straightforward. Can you say more?
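For context, a minimal sketch of the kind of concurrent mutation that must remain safe without the GIL (with the GIL this is already safe; the issue is about preserving that guarantee in `--disable-gil` builds):

```python
import queue
import threading

q = queue.SimpleQueue()

def producer():
    for i in range(100):
        q.put(i)

# Several threads mutate the queue's internal state concurrently;
# without the GIL this requires explicit synchronization in the C code.
threads = [threading.Thread(target=producer) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

items = [q.get() for _ in range(q.qsize())]
print(len(items))  # 400
```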
### Has this already been discussed elsewhere?
This is a minor feature, which does not need previous discussion elsewhere
### Links to previous discussion of this feature:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-114161
* gh-114259
<!-- /gh-linked-prs -->
| 925907ea362c4c014086be48625ac7dd67645cfc | 441affc9e7f419ef0b68f734505fa2f79fe653c7 |
python/cpython | python__cpython-113881 | # ``test_asyncio.test_server`` raises a ``ResourceWarning``
# Bug report
### Bug description:
```python
./python -m test -v test_asyncio.test_server
== CPython 3.13.0a2+ (heads/main:a9df076d7d, Jan 8 2024, 21:28:03) [GCC 9.4.0]
== Linux-5.10.16.3-microsoft-standard-WSL2-x86_64-with-glibc2.31 little-endian
== Python build: debug
== cwd: /home/eclips4/CLionProjects/cpython/build/test_python_worker_20604æ
== CPU count: 16
== encodings: locale=UTF-8 FS=utf-8
== resources: all test resources are disabled, use -u option to unskip tests
Using random seed: 711920436
0:00:00 load avg: 0.21 Run 1 test sequentially
0:00:00 load avg: 0.21 [1/1] test_asyncio.test_server
test_start_server_1 (test.test_asyncio.test_server.ProactorStartServerTests.test_start_server_1) ... skipped 'Windows only'
test_start_server_1 (test.test_asyncio.test_server.SelectorStartServerTests.test_start_server_1) ... ok
test_start_unix_server_1 (test.test_asyncio.test_server.SelectorStartServerTests.test_start_unix_server_1) ... ok
test_wait_closed_basic (test.test_asyncio.test_server.TestServer2.test_wait_closed_basic) ... ok
test_wait_closed_race (test.test_asyncio.test_server.TestServer2.test_wait_closed_race) ... ok
test_unix_server_addr_cleanup (test.test_asyncio.test_server.UnixServerCleanupTests.test_unix_server_addr_cleanup) ... ok
test_unix_server_cleanup_gone (test.test_asyncio.test_server.UnixServerCleanupTests.test_unix_server_cleanup_gone) ... ok
test_unix_server_cleanup_prevented (test.test_asyncio.test_server.UnixServerCleanupTests.test_unix_server_cleanup_prevented) ... ok
test_unix_server_cleanup_replaced (test.test_asyncio.test_server.UnixServerCleanupTests.test_unix_server_cleanup_replaced) ... /home/eclips4/CLionProjects/cpython/Lib/asyncio/events.py:84: ResourceWarning: unclosed <socket.socket fd=7, family=1, type=1, proto=0, laddr=./test_python_gustheh6.sock>
  self._context.run(self._callback, *self._args)
ResourceWarning: Enable tracemalloc to get the object allocation traceback
ok
test_unix_server_sock_cleanup (test.test_asyncio.test_server.UnixServerCleanupTests.test_unix_server_sock_cleanup) ... ok
----------------------------------------------------------------------
Ran 9 tests in 0.243s
OK (skipped=1)
== Tests result: SUCCESS ==
1 test OK.
Total duration: 389 ms
Total tests: run=9 skipped=1
Total test files: run=1/1
Result: SUCCESS
```
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-113881
<!-- /gh-linked-prs -->
| ab0ad62038317a3d15099c23d2b0f03bee9f8fa7 | 1b7e0024a16c1820f61c04a8a100498568410afd |
python/cpython | python__cpython-114051 | # Support field docstrings for dataclasses that use __slots__
# Feature or enhancement
### Proposal:
A really nice `__slots__` feature is the ability to provide a data dictionary in the form of docstrings. That is also available for `property` objects. Consider this example:
```
class Rocket:
    __slots__ = {
        'velocity': 'Speed relative to Earth in meters per second',
        'mass': 'Gross weight in kilograms',
    }

    @property
    def kinetic_energy(self):
        'Kinetic energy in Newtons'
        return self.mass * self.velocity ** 2
```
Running `help(Rocket)` shows all the field docstrings which is super helpful:
```
class Rocket(builtins.object)
| Readonly properties defined here:
|
| kinetic_energy
| Kinetic energy in Newtons
|
| ----------------------------------------------------------------------
| Data descriptors defined here:
|
| mass
| Gross weight in kilograms
|
| velocity
| Speed relative to Earth in meters per second
```
It would be nice if the same could be done with dataclasses that define `__slots__`:
```
@dataclass(slots=True)
class Rocket:
    velocity: float = field(doc='Speed relative to Earth in meters per second')
    weight: float = field(doc='Gross weight in kilograms')
```
### Has this already been discussed elsewhere?
No response given
### Links to previous discussion of this feature:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-114051
* gh-134065
* gh-134128
<!-- /gh-linked-prs -->
| 9c7657f09914254724683d91177aed7947637be5 | 0a3577bdfcb7132c92a3f7fb2ac231bc346383c0 |
python/cpython | python__cpython-113900 | # winfo_pathname gives an exception on 64-bit Python on Microsoft Windows
# Bug report
### Bug description:
```python
import sys
import tkinter
print(sys.version)
button = tkinter.Button()
print(button)
i = button.winfo_id()
print(i)
p = button.winfo_pathname(i)
print(p)
```
The code above was copy-typed from the attachment. It works on 32-bit Python but raises an exception on 64-bit Python on Windows.
"Other" means 64-bit BSDs, which are fine, and Msys2 on Windows, where 32-bit is OK but 64-bit raises an exception.
[winfo_id.txt](https://github.com/python/cpython/files/13878790/winfo_id.txt)
### CPython versions tested on:
3.10, 3.12
### Operating systems tested on:
Windows, Other
<!-- gh-linked-prs -->
### Linked PRs
* gh-113900
* gh-113901
* gh-113902
<!-- /gh-linked-prs -->
| 1b7e0024a16c1820f61c04a8a100498568410afd | 5d384b0468b35b393f8ae2d3149d13ff607c9501 |
python/cpython | python__cpython-113869 | # Some `MAP_` constants defined on macOS but not in `mmap`
# Feature or enhancement
### Proposal:
The following `MAP_*` flags are defined on macOS, but are not exposed in the module `mmap`:
- MAP_NORESERVE
- MAP_NOEXTEND
- MAP_HASSEMAPHORE
- MAP_NOCACHE
- MAP_JIT
- MAP_RESILIENT_CODESIGN
- MAP_RESILIENT_MEDIA
- MAP_32BIT
- MAP_TRANSLATED_ALLOW_EXECUTE
- MAP_UNIX03
- MAP_TPRO
The system also defines `MAP_COPY`, `MAP_RENAME`, but those seem to be less useful. `MAP_FILE` and `MAP_FIXED` are generic flags that aren't useful for module `mmap`.
This is based on the macOS 14.2 SDK.
Two other things I noticed, but something for different issues:
- There's a comment about `msync(2)` flags in the implementation for ``mmap.mmap.flush()``
- There is no way to change the memory protection (`PROT_READ` etc.) after creating the mapping, exposing such functionality could be useful for using `mmap` to write executable code in Python.
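Which of these flags made it into the module can be probed at runtime; a small sketch listing what the running interpreter exposes (the output varies by platform and build):

```python
import mmap

# Collect the MAP_* constants this build of the mmap module exposes.
map_flags = sorted(name for name in dir(mmap) if name.startswith("MAP_"))
print(map_flags)

# Feature-test pattern for flags that may be absent; MAP_NORESERVE is one
# of the constants proposed above, so fall back to 0 when it is missing.
noreserve = getattr(mmap, "MAP_NORESERVE", 0)
print(noreserve)
```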
### Has this already been discussed elsewhere?
This is a minor feature, which does not need previous discussion elsewhere
### Links to previous discussion of this feature:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-113869
<!-- /gh-linked-prs -->
| 79970792fd2c70f77c38e08c7b3a9daf6a11bde1 | 8aa0088ea2422ed6b95076fe48a13df7562a4355 |
python/cpython | python__cpython-113864 | # Make all executors execute tier 2 instructions (micro-ops)
[Our plan](https://github.com/faster-cpython/ideas/blob/main/3.13/engine.md) proposes that we either execute all tier 2 code in the tier 2 interpreter or by [jitting the code](https://github.com/python/cpython/pull/113465)
However, our current executor interface allows calls to arbitrary function pointers.
We should remove that interface, requiring all optimizers to produce tier 2 micro-ops, which the VM is responsible for executing.
This will prevent low-level JITs like Cinder from using executors, but Cinder uses `PyFunction_SetVectorcall` to insert machine code anyway.
Higher level JITs like PyTorch dynamo, can potentially still use executors, as they provide a more powerful and flexible interface than PEP 523.
<!-- gh-linked-prs -->
### Linked PRs
* gh-113864
* gh-113954
<!-- /gh-linked-prs -->
| a0c9cf9456c2ee7a89d9bd859c07afac8cf5e893 | 93930eaf0acd64dc0d08d58321d2682cb019bc1a |
python/cpython | python__cpython-113854 | # Guarantee progress in executors
Reasoning about and implementing trace stitching is made much easier if we guarantee that individual executors make progress.
We need to guarantee that tier 2 execution as a whole makes progress, otherwise we might get stuck.
Implementing this guarantee is blocking progress on https://github.com/python/cpython/issues/112354.
There are many ways to guarantee progress, but the simplest conceptually, to my mind, is to guarantee that all individual executors make progress. If all executors make progress, then any (non-empty) graph of executors must also make progress.
The implementation is reasonably straightforward: De-specialize the first (tier 1) instruction when creating the executor.
Our concern is that this will negatively impact performance.
~We await profiling and stats to see what the performance impact might be.~
Until we do stitching, all executors start with `JUMP_BACKWARD` which is not specialized, so de-specializing has no effect for now.
<!-- gh-linked-prs -->
### Linked PRs
* gh-113854
<!-- /gh-linked-prs -->
| 55824d01f866d1fa0f21996d897fba0e07d09ac8 | 0d8fec79ca30e67870752c6ad4e299f591271e69 |
python/cpython | python__cpython-113849 | # About checking for CancelledError and its subclasses
For a long time I have wanted to replace the use of `PyObject_IsInstance()` with `CancelledError` in `_asynciomodule.c`. Even if the C code is correct and closest to the corresponding Python code, it looks unnecessarily complicated and bug-prone. Also, `PyErr_GivenExceptionMatches()` is used in the `except` implementation, so it may be more correct than an isinstance check. But I did not have tests for CancelledError subclasses that would show the difference.
Another issue: @gvanrossum [noticed](https://github.com/python/cpython/pull/113819#discussion_r1445139243) that `asyncio.timeout()` only checks for exact CancelledError, and not its subclasses. `asyncio.TaskGroup()` also only checks for exact CancelledError. It is suspicious, because all other code (except `_convert_future_exc()` in `futures.py`) treats CancelledError subclasses the same way as CancelledError. `asyncio.timeout()` and `asyncio.TaskGroup()` were added recently, so perhaps it is an error in their implementation. On the other hand, I do not know of any use case for CancelledError subclasses.
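The distinction can be illustrated in pure Python: an `except` clause (the semantics that `PyErr_GivenExceptionMatches()` mirrors) catches subclasses, while an exact-type test, like the one `asyncio.timeout()` effectively performs, does not:

```python
import asyncio

class MyCancelled(asyncio.CancelledError):
    pass

try:
    raise MyCancelled()
except asyncio.CancelledError as exc:
    matches_subclass = True                           # except matches subclasses
    exact_type = type(exc) is asyncio.CancelledError  # the stricter exact check

print(matches_subclass, exact_type)  # True False
```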
<!-- gh-linked-prs -->
### Linked PRs
* gh-113849
* gh-113850
<!-- /gh-linked-prs -->
| 5273655bea050432756098641b9fda72361bf983 | 9100fc407e8c7038e7214b600b4ae568ae5510e3 |
python/cpython | python__cpython-113949 | # New warning: `‘_Py_stdlib_module_names’ defined but not used` in `Python/stdlib_module_names.h`
# Bug report
<img width="635" alt="Screenshot 2024-01-09 at 11 21 27" src="https://github.com/python/cpython/assets/4660275/3079a778-973e-457c-a910-3db41a688b45">
This happens in all recent PRs, examples:
- https://github.com/python/cpython/pull/113835/files
- https://github.com/python/cpython/pull/113843/files
<!-- gh-linked-prs -->
### Linked PRs
* gh-113949
<!-- /gh-linked-prs -->
| 8717f7b495c8c33fd37017f4e7684609c304c556 | 55824d01f866d1fa0f21996d897fba0e07d09ac8 |
python/cpython | python__cpython-113843 | # Missing error check in `update_symbols`
# Bug report
### Bug description:
There is an error check missing right after a loop in https://github.com/python/cpython/blob/f3d5d4aa8f0388217aeff69e28d078bdda464b38/Python/symtable.c#L935-L982
The loop breaks when `PyIter_Next` returns `NULL`, indicating that the iterator is exhausted; but if an exception occurs, it will also return `NULL` and additionally set the exception, which is not checked.
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-113843
* gh-113851
* gh-113852
<!-- /gh-linked-prs -->
| fda901a1ff94ea6cc338b74928acdbc5ee165ed7 | 2e17cad2b8899126eb2024bf75db331b871bd5bc |
python/cpython | python__cpython-127211 | # Possible undefined behavior division by zero in complex's `_Py_c_pow()`
# Bug report
### Bug description:
https://github.com/python/cpython/blob/main/Objects/complexobject.c#L150 `_Py_c_pow()` contains:
```c
at = atan2(a.imag, a.real);
phase = at*b.real;
if (b.imag != 0.0) {
    len /= exp(at*b.imag);
```
An oss-fuzz pycompiler fuzzer identified a problem compiling the code "` 9J**33J**3`" within the optimizer folding the constant by doing the math in place. We haven't been able to reproduce this ourselves - but code inspection reveals a potential problem:
C `exp(x)` can return 0 in a couple of cases, which would lead to the `/=` executing an undefined division-by-zero operation.
I believe the correct `_Py_c_pow()` code should look something like:
```c
if (b.imag != 0.0) {
    double tmp_exp = exp(at*b.imag);
    if (tmp_exp == 0.0 || errno == ERANGE) {
        // Detected as OverflowError by our caller.
        r.real = Py_HUGE_VAL;
        r.imag = Py_HUGE_VAL;
        return r;
    }
    len /= tmp_exp;
```
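The underflow case is easy to reproduce from Python: `math.exp` simply returns 0.0 for sufficiently negative arguments instead of raising, and that is exactly the value the `/=` above would divide by.

```python
import math

# exp() underflows to exactly 0.0 for large negative inputs, so a very
# negative at*b.imag makes the divisor in the C code above zero.
print(math.exp(-1000.0))  # 0.0
```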
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-127211
* gh-127216
* gh-127530
<!-- /gh-linked-prs -->
| f7bb658124aba74be4c13f498bf46cfded710ef9 | a4d4c1ede21f9fa72280f4fc0f50212eecfac9ae |
python/cpython | python__cpython-115926 | # pathlib: clarify difference between pathlib and os.path
# Documentation
Currently, in [pathlib's doc page](https://docs.python.org/3/library/pathlib.html) the reason to use `os.path` instead is only stated as below:
> For low-level path manipulation on strings, you can also use the [os.path](https://docs.python.org/3/library/os.path.html#module-os.path) module.
This doesn't explain when one might want to use `os.path` instead of `pathlib`.
I propose to add several examples where one might want to use `os.path` at the same place where the aforementioned sentence was (i.e. near the top) to let users choose the appropriate tool for their job.
## Example of difference against `os.path` (i.e. `str`)
Since `pathlib` is a library to handle filesystem paths, design decisions were made to format when creating an instance. Here are some examples:
| input | converting to `pathlib` |
| --- | --- |
| path with trailing slash (`a/b/`) | strips (`a/b`) |
| path with single leading dot (`./a/b`) | strips (`a/b`) |
| path with single dot inside (`a/./b`) | strips (`a/b`) |
As stated below, some type of stripping can change the meaning of the input. (trailing slash & leading dot)
## What are the impact if we aren't aware of this?
1. A quick google search for "python pathlib vs os.path" doesn't give any reason to choose `os.path` over `pathlib` on the 1st page. Even [a stack overflow question](https://stackoverflow.com/questions/70259852/) doesn't have an answer.
2. As discussed in the Python Discourse topics [topic: leading dot](https://discuss.python.org/t/suggestion-for-pathlib-differentiate-explicit-and-implicit-local-paths-pathlib-strictpath/31235) and [topic: trailing slash](https://discuss.python.org/t/pathlib-preserve-trailing-slash/33389), there are cases where preserving seemingly meaningless dots and slashes matters.
3. With the wide-spread adoption of libraries that automatically parse inputs into specific classes (`argparse`, `fastapi`/`pydantic`, `marshmallow`, etc.), it is more intuitive to directly cast the input to `pathlib`. As stated in (2.), there needs to be documentation pointing out when *not* to use `pathlib`.
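The normalization in the table above can be checked directly; using `PurePosixPath` keeps the result platform-independent:

```python
import posixpath
from pathlib import PurePosixPath

# pathlib normalizes the path when the instance is constructed:
assert str(PurePosixPath("a/b/")) == "a/b"    # trailing slash stripped
assert str(PurePosixPath("./a/b")) == "a/b"   # leading "./" stripped
assert str(PurePosixPath("a/./b")) == "a/b"   # inner "." stripped

# A plain string keeps the original spelling until explicitly normalized:
raw = "a/b/"
assert raw != "a/b"
assert posixpath.normpath(raw) == "a/b"
print("all checks passed")
```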
<!-- gh-linked-prs -->
### Linked PRs
* gh-115926
<!-- /gh-linked-prs -->
| 1904f0a2245f500aa85fba347b260620350efc78 | be1c808fcad201adc4d5d6cca52ddb24aeb5e367 |
python/cpython | python__cpython-113828 | # Windows PGInstrument build fails to find frozen modules
After #113303 we now write frozen modules to a configuration-specific directory. Unfortunately, the change doesn't take into account PGO builds, which use a mix of configurations.
We should just be able to omit the configuration from the path. Debug builds ought to have the same generated modules.
<!-- gh-linked-prs -->
### Linked PRs
* gh-113828
<!-- /gh-linked-prs -->
| 92f96240d713a5a36c54515e44445b3cd0947163 | 52161781a6134a4b846500ad68004fe9027a233c |
python/cpython | python__cpython-115199 | # asyncio DatagramTransport.sendto does not send empty UDP packets
# Bug report
### Bug description:
Trying to send multicast UDP packets with DatagramTransport.sendto, I notice no error in the Python program, but no packet is sent (I've checked with tcpdump). Modifying the call to use the underlying socket fixes the issue, but I get a warning that I should not use this socket directly.
Is this a known issue?
How should I use asyncio to properly send UDP multicast packets?
### CPython versions tested on:
3.10
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-115199
<!-- /gh-linked-prs -->
| 73e8637002639e565938d3f205bf46e7f1dbd6a8 | 8db8d7118e1ef22bc7cdc3d8b657bc10c22c2fd6 |
python/cpython | python__cpython-113837 | # inaccurate documentation for `shutil.move` when dst is an existing directory
# Documentation
(A clear and concise description of the issue.)
In the [doc of shutil](https://github.com/python/cpython/blob/3.12/Doc/library/shutil.rst?plain=1#L360) it says:
```rst
If the destination is an existing directory, then *src* is moved inside that
directory. If the destination already exists but is not a directory, it may
be overwritten depending on :func:`os.rename` semantics.
```
However, before calling `os.rename`, it checks the existence of the destination path, and raises Error if it exists. see [shutil.py](https://github.com/python/cpython/blob/3.12/Lib/shutil.py#L883).
This behavior is not described in the documentation, and may be different from how files are overwritten in `os.rename`. For example, `os.rename('a.txt', '/b')` will silently replace `/b/a.txt` on Unix, but `shutil.move('a.txt', '/b')` will raise an Error.
In the [docstring](https://github.com/python/cpython/blob/3.12/Lib/shutil.py#L849) however, this is described much more clear:
```
If the destination is a directory or a symlink to a directory, the source
is moved inside the directory. The destination path must not already
exist.
If the destination already exists but is not a directory, it may be
overwritten depending on os.rename() semantics.
```
This should also be in the documentation, IMO.
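The behavior described in the docstring is easy to demonstrate with a self-contained sketch using temporary files:

```python
import os
import shutil
import tempfile

with tempfile.TemporaryDirectory() as tmp:
    src = os.path.join(tmp, "a.txt")
    dst_dir = os.path.join(tmp, "b")
    os.mkdir(dst_dir)
    with open(src, "w") as f:
        f.write("new")
    # The destination directory already contains a file with the same name.
    with open(os.path.join(dst_dir, "a.txt"), "w") as f:
        f.write("old")
    try:
        shutil.move(src, dst_dir)
        raised = False
    except shutil.Error:
        # The existence check described above fires before os.rename
        # gets a chance to overwrite anything.
        raised = True
print(raised)  # True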
<!-- gh-linked-prs -->
### Linked PRs
* gh-113837
* gh-115006
* gh-115007
<!-- /gh-linked-prs -->
| da8f9fb2ea65cc2784c2400fc39ad8c800a67a42 | d466052ad48091a00a50c5298f33238aff591028 |
python/cpython | python__cpython-113797 | # Additional validation of CSV Dialect parameters
# Feature or enhancement
CSV writer and reader allow configuring parameters of the CSV dialect. They check for some incompatible combinations; for example, quotechar must be set if quoting is enabled. But it is still possible to specify values that produce ambiguous output which cannot be parsed by the CSV reader nor by any other CSV parser, for example if any two of delimiter, quotechar or escapechar match. The error can only be noticed when trying to read the CSV file, and it can simply produce a wrong result instead of failing.
I propose to add more validation checks in addition to the existing checks in the Dialect constructor.
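One of the existing checks mentioned above can be seen directly: specifying a quoting mode without a quotechar is already rejected when the dialect is constructed (the exact exception type is an implementation detail, so this sketch catches broadly):

```python
import csv
import io

try:
    # quoting enabled but no quote character: rejected by the Dialect
    # constructor's existing validation.
    csv.writer(io.StringIO(), quoting=csv.QUOTE_ALL, quotechar=None)
    rejected = False
except (TypeError, ValueError):
    rejected = True
print(rejected)  # True
```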
<!-- gh-linked-prs -->
### Linked PRs
* gh-113797
<!-- /gh-linked-prs -->
| c8351a617b8970dbe0d3af721c6aea873019c466 | 2f2ddabd1a02e3095b751100b94b529e4e0bcd20 |
python/cpython | python__cpython-113792 | # Missing clock definitions on macOS (time module)
# Feature or enhancement
### Proposal:
`time` exports a number of `CLOCK_*` values. MacOS has two constants that aren't exposed here:
- `CLOCK_MONOTONIC_RAW_APPROX`
- `CLOCK_UPTIME_RAW_APPROX`
Their definitions are the same as the corresponding clocks without `_APPROX` at the end, but with lower accuracy due to the value only changing at context switches.
### Has this already been discussed elsewhere?
This is a minor feature, which does not need previous discussion elsewhere
### Links to previous discussion of this feature:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-113792
<!-- /gh-linked-prs -->
| c6ca562138a0916192f9c3100cae678c616aed29 | b3dba18eab96dc95653031863bb2a222af912f2b |
python/cpython | python__cpython-113816 | # `test_capi` has refleaks
# Bug report
### Bug description:
```python
./python -m test -R 3:3 test_capi
Using random seed: 969223870
0:00:00 load avg: 10.38 Run 1 test sequentially
0:00:00 load avg: 10.38 [1/1] test_capi
beginning 6 repetitions
123456
......
test_capi leaked [90, 90, 91] references, sum=271
test_capi leaked [72, 72, 74] memory blocks, sum=218
test_capi failed (reference leak) in 33.9 sec
== Tests result: FAILURE ==
1 test failed:
test_capi
Total duration: 33.9 sec
Total tests: run=823 skipped=4
Total test files: run=1/1 failed=1
Result: FAILURE
```
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-113816
<!-- /gh-linked-prs -->
| ace4d7ff9a247cbe7350719b996a1c7d88a57813 | 61dd77b04ea205f492498d30637f2d516d8e2a8b |