| repo | instance_id | problem_statement | merge_commit | base_commit |
|---|---|---|---|---|
python/cpython | python__cpython-98103 | # Create modular packages for zipfile and test_zipfile
I'm about to embark on once again syncing changes from [zipp](/jaraco/zipp) to `zipfile`.
Historically, I've manually copied the contents of the [cpython branch](/jaraco/zipp/tree/cpython) to the relevant files in this repository, but that process is error prone (because it involves syncing a whole file to a portion of another file).
I'd like to instead create packages for `zipfile` and `test_zipfile`, such that the functionality that's synced with `zipp` can be kept separate from other zipfile functionality. Moreover, late versions of `zipp` bring in extra dependencies so a package can also serve as a home to vendor such functionality.
I'm not suggesting any change to the user-facing API. The names will still be presented through the `zipfile` module unchanged.
<!-- gh-linked-prs -->
### Linked PRs
* gh-98103
<!-- /gh-linked-prs -->
| 003f341e99234cf6088341e746ffef15e12ccda2 | 78365b8e283c78e23725748500f48dd2c2ca1161 |
python/cpython | python__cpython-98573 | # Remove more deprecated importlib APIs from Python 3.12
Issue #97850 is the meta issue tracking removals of long deprecated functions from importlib. This ticket tracks just the removals of the following previously deprecated APIs:
- [x] `find_loader()`
- [x] `find_module()`
- [x] `imp` module
- [x] `importlib.abc.Finder`
- [x] `pkgutil.ImpImporter`
- [x] `pkgutil.ImpLoader`
+ @brettcannon @ericsnowcurrently
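For reference, the supported replacement for the removed `find_module()`/`find_loader()` APIs is `importlib.util.find_spec()`; a quick sketch:

```python
import importlib.util

# find_spec() supersedes the removed find_module()/find_loader() APIs:
# it returns a ModuleSpec (or None) without importing the module.
spec = importlib.util.find_spec("json")
print(spec.name, spec.origin is not None)
```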
<!-- gh-linked-prs -->
### Linked PRs
* gh-98573
* gh-102561
* gh-98059
* gh-104131
* gh-105743
* gh-105754
<!-- /gh-linked-prs -->
| e1f14643dc0e6024f8df9ae975c3b05912a3cb28 | 79b9db9295a5a1607a0b4b10a8b4b72567eaf1ef |
python/cpython | python__cpython-98031 | # socket: add missing TCP socket options from Linux
# Feature or enhancement
The `socket` module already exposes most TCP socket options (`TCP_NODELAY`, `TCP_MAXSEG`, `TCP_CORK`, etc.), but not the more recent ones.
Here is the complete list from the last Linux kernel version:
https://elixir.bootlin.com/linux/v6.0/source/include/uapi/linux/tcp.h#L91
# Pitch
I noticed `TCP_FASTOPEN_CONNECT` was missing. I wanted to use it to write some quick tests for the kernel (MPTCP development). I was going to add only this one, but while at it, it seems best to add all the missing ones.
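Since these constants only exist when both the interpreter and the platform's kernel headers know about them, code that wants the newer options can probe for them at runtime; a minimal sketch:

```python
import socket

# Newer TCP options only exist if both the Python build and the
# platform's headers define them, so probe before use.
available = {name: hasattr(socket, name)
             for name in ("TCP_NODELAY", "TCP_FASTOPEN", "TCP_FASTOPEN_CONNECT")}
print(available)
```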
<!-- gh-linked-prs -->
### Linked PRs
* gh-98031
<!-- /gh-linked-prs -->
| cce836296016463032495c6ca739ab469ed13d3c | 90d5c9b195a8133688c2d7b6ad9e0ca8b76df1df |
python/cpython | python__cpython-98004 | # Inline call frames for CALL_FUNCTION_EX
* Inline frames for both call types.
<!-- gh-linked-prs -->
### Linked PRs
* gh-98004
<!-- /gh-linked-prs -->
| ed95e8cbd4cbc813666c7ce7760257cc0f169d03 | accb417c338630ac6e836a5c811a89d54a3cd1d3 |
python/cpython | python__cpython-98120 | # `pydoc` renders `from builtins.type` note, even if it is incorrect
While working on https://github.com/python/cpython/pull/97958 I've noticed that there's something strange with `help()` and `classmethod`s.
Take a look at this example:
```python
import pydoc
class My:
@classmethod
def __init_subclass__(cls, *args, **kwargs):
pass
@classmethod
def custom(cls):
pass
print(pydoc.plain(pydoc.render_doc(My)))
```
It prints:
```
Python Library Documentation: class My in module __main__
class My(builtins.object)
| Class methods defined here:
|
| __init_subclass__(*args, **kwargs) from builtins.type
| This method is called when a class is subclassed.
|
| The default implementation does nothing. It may be
| overridden to extend subclasses.
|
| custom() from builtins.type
|
| ----------------------------------------------------------------------
| Data descriptors defined here:
|
| __dict__
| dictionary for instance variables (if defined)
|
| __weakref__
| list of weak references to the object (if defined)
```
Take a look at these two entries:
1. `__init_subclass__(*args, **kwargs) from builtins.type`
2. `custom() from builtins.type`
While `type` has `__init_subclass__`, there's no `type.custom`. But, `help` says that there is!
```python
>>> type.__init_subclass__
<built-in method __init_subclass__ of type object at 0x10a50c360>
>>> type.custom
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: type object 'type' has no attribute 'custom'
```
I think that it is incorrect and can lead to confusion.
Instead it should be:
```
| __init_subclass__(*args, **kwargs) from builtins.type
| This method is called when a class is subclassed.
|
| The default implementation does nothing. It may be
| overridden to extend subclasses.
|
| custom()
```
<!-- gh-linked-prs -->
### Linked PRs
* gh-98120
* gh-113941
* gh-115296
* gh-115302
<!-- /gh-linked-prs -->
| 1aa74ee5c5f4647c12c21ab3f5262a33d263bb35 | 80ba8e85490515c293959a4196cbd99b1b3819a2 |
python/cpython | python__cpython-103996 | # Docs use deprecated Sphinx `.. index::` entries
In https://github.com/python/cpython/pull/97921#issuecomment-1269131226 @hugovk noticed that [Sphinx has deprecated some of the `.. index::` entry types](https://www.sphinx-doc.org/en/master/usage/restructuredtext/directives.html#index-generating-markup):
> module, keyword, operator, object, exception, statement, builtin
> These all create two index entries. For example, module: hashlib creates the entries module; hashlib and hashlib; module. (These are Python-specific and therefore deprecated.)
Currently we are using some of those in the docs, so we have at least 3 options:
* keep using them if it's a silent deprecation with no removal plans
* reimplement them in pyspecific.py, if we want to keep using them (and if it's possible to reimplement them)
* remove them and replace them with the non-deprecated ones
(cc @AA-Turner)
<!-- gh-linked-prs -->
### Linked PRs
* gh-103996
* gh-104000
* gh-104151
* gh-104153
* gh-104154
* gh-104155
* gh-104156
* gh-104157
* gh-104158
* gh-104159
* gh-104160
* gh-104161
* gh-104162
* gh-104163
* gh-104164
* gh-104221
* gh-107246
<!-- /gh-linked-prs -->
| d0122372f2acb4cc56b89ab8c577ff9039d17d89 | cd9a56c2b0e14f56f2e83dd4db43c5c69a74b232 |
python/cpython | python__cpython-100598 | # Refresh importlib.resources
This bug tracks the sync with importlib_resources 5.x ([history](https://importlib-resources.readthedocs.io/en/latest/history.html)) into Python 3.12. For now, 5.9, but probably some other releases soon.
- [x] 5.9 (#97929)
- [x] 5.10 (#100598)
- [x] 5.12 (#102010)
<!-- gh-linked-prs -->
### Linked PRs
* gh-100598
* gh-102010
* gh-102030
<!-- /gh-linked-prs -->
| 447d061bc7b978afedd3b0148715d2153ac726c5 | ba1342ce998c6c0c36078411d169f29179fbc9f6 |
python/cpython | python__cpython-98484 | # tkinter.Text.count(index1, index2) returns None not (0,) when index1 equals index2
Surely the `Text.count()` method should return `(0,)` for the code snippet in the title, following the example of what happens when more than one option is given? I suppose the `text_count` function below should do so too, but it was written as a hack for functionality that did not exist at the time.
Tk8.6 documentation says a list of integers is returned.
Sample code below.
```python
import tkinter
def text_count(widget, index1, index2, *options):
"""Hack Text count command. Return integer, or tuple if len(options) > 1.
Tkinter does not provide a wrapper for the Tk Text widget count command
at Python 2.7.1
widget is a Tkinter Text widget.
index1 and index2 are indices as specified in TkCmd documentation.
options must be a tuple of zero or more option values. If no options
are given the Tk default option is used. If less than two options are
given an integer is returned. Otherwise a tuple of integers is returned
(in the order specified in TkCmd documentation).
See text manual page in TkCmd documentation for valid option values and
index specification.
Example:
chars, lines = text_count(widget, start, end, '-chars', '-lines')
"""
return widget.tk.call((widget._w, 'count') + options + (index1, index2))
text = tkinter.Text()
print(text.count("1.0", tkinter.END)) # (1,0)
print(text.count("1.0", "1.0")) # None
print(text_count(text, "1.0", "1.0")) # 0
print(text_count(text, "1.0", tkinter.END)) # 1
print(text.count("1.0", tkinter.END, "chars")) # (1,)
print(text.count("1.0", "1.0", "chars")) # None
print(text_count(text, "1.0", "1.0", "-chars")) # 0
print(text_count(text, "1.0", tkinter.END, "-chars")) # 1
print(text.count(tkinter.END, "1.0", "chars")) # (-1,)
print(text_count(text, tkinter.END, "1.0", "-chars")) # -1
print(text.count("1.0", tkinter.END, "chars", "lines")) # (1, 1)
print(text.count("1.0", "1.0", "chars", "lines")) # (0, 0)
print(text_count(text, "1.0", "1.0", "-chars", "-lines")) # (0, 0)
print(text_count(text, "1.0", tkinter.END, "-chars", "-lines")) # (1, 1)
```
<!-- gh-linked-prs -->
### Linked PRs
* gh-98484
* gh-115031
<!-- /gh-linked-prs -->
| b8c20f90492f9ef2c4600e2739f8e871a31b93c3 | 81eba7645082a192c027e739b8eb99a94b4c0eec |
python/cpython | python__cpython-100089 | # Some C struct members are not marked up in docs
The members of [PyMemberDef](https://docs.python.org/3/c-api/structures.html#c.PyMemberDef) aren't marked up as members, so they can't be linked to individually.
<!-- gh-linked-prs -->
### Linked PRs
* gh-100089
* gh-100311
* gh-100312
<!-- /gh-linked-prs -->
| 8edcb30c3f8bdd8099a093146fedbd9b63a3f667 | a6f82f1fc68cb24e2d88d35fde4cfb663213a744 |
python/cpython | python__cpython-100054 | # Docs for some C struct members repeat the struct name
The members of [PyType_Spec](https://docs.python.org/3/c-api/type.html?highlight=pytype_spec#c.PyType_Spec.PyType_Spec.name) are documented as e.g. `PyType_Spec.PyType_Spec.name`:

This makes it cumbersome to link to them.
Fixing this will probably break URLs.
<!-- gh-linked-prs -->
### Linked PRs
* gh-100054
* gh-105057
<!-- /gh-linked-prs -->
| 1668b41dc477bc9562e4c50ab36a232839b4621b | bfd20d257e4ad16a25f4bac0ea4dbb719cdf6bc7 |
python/cpython | python__cpython-97902 | # Add missing text/rtf to mimetypes
# Feature or enhancement
As of Python 3.10, the types map does not contain an entry for the valid MIME type `text/rtf`; it only contains an `application/rtf` entry among the non-standard types. The missing `text/rtf` mapping should be added, since it is a valid MIME type. For example, this file type is guessed by the `file` command (e.g. version 5.04; see [libmagic1](https://www.darwinsys.com/file/)). My temporary workaround is to add the mapping manually:
```python
mimetypes.add_type("text/rtf", ".rtf", strict=False)
```
I would dare to add it to the list of standard types, but to be consistent with the other `rtf` mapping I'd add it to the non-standard (`strict=False`) list.
# Pitch
The feature is implemented in a pull request pointing to this issue. It simply adds a single line with the mapping. Other Python users could then take advantage of this mapping of a valid MIME type to an extension.
<!-- gh-linked-prs -->
### Linked PRs
* gh-97902
<!-- /gh-linked-prs -->
| 70969d53a77a8a190c40a30419e772bc874a4f62 | 4ec347760f98b156c6a2d42ca397af6b0b6ecc50 |
python/cpython | python__cpython-103405 | # typing.Annotated should document the __metadata__ field
Both `help(typing.Annotated)` and `help(typing._AnnotatedAlias)` should document the `__metadata__` field; based on the comments in #89543 the field was meant to be public but was inadvertently omitted from the PEP and documentation.
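For context, the field already works as intended at runtime; it is only the documentation that omits it:

```python
from typing import Annotated, get_args

Meters = Annotated[float, "unit:m"]

# __metadata__ holds everything after the first type argument;
# get_args() returns the underlying type plus the metadata.
print(Meters.__metadata__)
print(get_args(Meters))
```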
<!-- gh-linked-prs -->
### Linked PRs
* gh-103405
* gh-103413
* gh-105365
* gh-105448
* gh-105449
<!-- /gh-linked-prs -->
| dc604a8c58af748ce25aee1af36b6521a3592fa5 | a28e2ce3fbcc852959324879e0bbf5ba8ecf0105 |
python/cpython | python__cpython-101826 | # Compiler warning for `_Py_InIntegralTypeRange`
# Bug report
```
Python/pytime.c:297:10: warning: implicit conversion from 'long' to 'double' changes value from 9223372036854775807 to 9223372036854775808 [-Wimplicit-int-float-conversion]
if (!_Py_InIntegralTypeRange(time_t, intpart)) {
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
./Include/internal/pycore_pymath.h:72:45: note: expanded from macro '_Py_InIntegralTypeRange'
(_Py_IntegralTypeMin(type) <= v && v <= _Py_IntegralTypeMax(type))
~~ ^~~~~~~~~~~~~~~~~~~~~~~~~
./Include/internal/pycore_pymath.h:61:88: note: expanded from macro '_Py_IntegralTypeMax'
(_Py_IS_TYPE_SIGNED(type) ? (((((type)1 << (sizeof(type)*CHAR_BIT - 2)) - 1) << 1) + 1) : ~(type)0)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^~~
Python/pytime.c:352:14: warning: implicit conversion from 'long' to 'double' changes value from 9223372036854775807 to 9223372036854775808 [-Wimplicit-int-float-conversion]
if (!_Py_InIntegralTypeRange(time_t, intpart)) {
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
./Include/internal/pycore_pymath.h:72:45: note: expanded from macro '_Py_InIntegralTypeRange'
(_Py_IntegralTypeMin(type) <= v && v <= _Py_IntegralTypeMax(type))
~~ ^~~~~~~~~~~~~~~~~~~~~~~~~
./Include/internal/pycore_pymath.h:61:88: note: expanded from macro '_Py_IntegralTypeMax'
(_Py_IS_TYPE_SIGNED(type) ? (((((type)1 << (sizeof(type)*CHAR_BIT - 2)) - 1) << 1) + 1) : ~(type)0)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^~~
Python/pytime.c:518:10: warning: implicit conversion from 'long' to 'double' changes value from 9223372036854775807 to 9223372036854775808 [-Wimplicit-int-float-conversion]
if (!_Py_InIntegralTypeRange(_PyTime_t, d)) {
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
./Include/internal/pycore_pymath.h:72:45: note: expanded from macro '_Py_InIntegralTypeRange'
(_Py_IntegralTypeMin(type) <= v && v <= _Py_IntegralTypeMax(type))
~~ ^~~~~~~~~~~~~~~~~~~~~~~~~
./Include/internal/pycore_pymath.h:61:88: note: expanded from macro '_Py_IntegralTypeMax'
(_Py_IS_TYPE_SIGNED(type) ? (((((type)1 << (sizeof(type)*CHAR_BIT - 2)) - 1) << 1) + 1) : ~(type)0)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^~~
3 warnings generated.
```
# Your environment
- CPython `main` (3.11.0rc2 time frame)
- clang 10
<!-- gh-linked-prs -->
### Linked PRs
* gh-101826
* gh-102062
* gh-102150
<!-- /gh-linked-prs -->
| b022250e67449e0bc49a3c982fe9e6a2d6a7b71a | 8de59c1bb9fdcea69ff6e6357972ef1b75b71721 |
python/cpython | python__cpython-97665 | # Improved usability of WASM web REPL for testing/development.
# Feature or enhancement
There's a demo python WASM REPL included in cpython for development and testing purposes.
It has the following issues:
* It's currently impossible to 'stop' the python process - an infinite loop means you have to close the browser tab.
* The 'Start REPL' button can be pressed multiple times while the program is running, this does not work as expected. It queues another REPL to start as soon as the current REPL exits.
* Working in the REPL for testing is painful - especially since it's a limited REPL implementation, e.g. no up-arrow to repeat previous commands.
Example here: https://repl.ethanhs.me/
# Pitch
I propose adding the following to the REPL:
* A stop button that kills the webworker process (effectively terminating the python process)
* Disabling the start button while there is an active python process running.
* Adding a text area for python code so that code can be edited and executed multiple times.
# Previous discussion
<!-- gh-linked-prs -->
### Linked PRs
* gh-97665
* gh-119828
<!-- /gh-linked-prs -->
| 010aaa32fb93c5033a698d7213469af02d76fef3 | 0d07182821fad7b95a043d006f1ce13a2d22edcb |
python/cpython | python__cpython-101652 | # `asyncio.Task.print_stack` doesn't use `sys.stderr` by default
`asyncio.Task.print_stack` is described by the documentation as having a default output file of `sys.stderr`. However, the default `None` is passed all the way down to the underlying `print()` calls, leading to an actual default of `sys.stdout`.
<!-- gh-linked-prs -->
### Linked PRs
* gh-101652
* gh-101653
* gh-101654
<!-- /gh-linked-prs -->
| f87f6e23964d7a4c38b655089cda65538a24ec36 | 6fd5eb640af19b535f4f2ba27b1b61b8d17f02e9 |
python/cpython | python__cpython-97652 | # Lack of a blank line after `.. impl-detail::`
# Documentation
This issue is made for translation work of documentation.
The issue is that if a blank line is absent after `.. impl-detail::`, Sphinx does not build the paragraph after this line during HTML generation for the zh-tw translation.
I found this issue in three .rst files:
[Doc/library/queue.rst](https://github.com/python/cpython/blob/main/Doc/library/queue.rst)
[Doc/library/readline.rst](https://github.com/python/cpython/blob/main/Doc/library/readline.rst)
[Doc/howto/isolating-extensions.rst](https://github.com/python/cpython/blob/main/Doc/howto/isolating-extensions.rst)
A blank line after `.. impl-detail::` is suggested in those places so the corresponding translation pages build completely.
Thanks!
<!-- gh-linked-prs -->
### Linked PRs
* gh-97652
<!-- /gh-linked-prs -->
| e8165d47b852e933c176209ddc0b5836a9b0d5f4 | dcc82331c8f05a6a149ac15c519d4fbae72692b2 |
python/cpython | python__cpython-96825 | # `ftplib.FTP.voidcmd` incorrectly documents the return value
# Documentation
Since 2f3941d743481ac48628b8b2c075f2b82762050b this function returns the response string, rather than nothing, as is currently documented.
<!-- gh-linked-prs -->
### Linked PRs
* gh-96825
* gh-115601
* gh-115602
<!-- /gh-linked-prs -->
| e88ebc1c4028cf2f0db43659e513440257eaec01 | 26800cf25a0970d46934fa9a881c0ef6881d642b |
python/cpython | python__cpython-104136 | # Null characters in strings cause a C SystemError
# Crash report
Putting a null byte into a Python string causes a SystemError in Python 3.10, due to a call to strlen in the string parsing library. In Python 3.9, the following example runs without errors:
```
# -*- coding: latin-1 -*-
"""
<NULL>
"""
```
In Python 3.10, it raises `SystemError: ../Parser/string_parser.c:219: bad argument to internal function`.
Internally, the new string_parser library introduced in v3.10.0a1 uses a call to strlen to determine the string size, which is getting thrown off by the null byte. This call is actually unnecessary, as the length has already been calculated by the calling parser and can be retrieved with `PyBytes_AsStringAndSize`.
# Error messages
For single line strings, the error is `SystemError: Negative size passed to PyUnicode_New`
For multiline strings, the error is `SystemError: ../Parser/string_parser.c:219: bad argument to internal function`
<!-- gh-linked-prs -->
### Linked PRs
* gh-104136
<!-- /gh-linked-prs -->
| ef0df5284f929719b2ef3955b1b569ade0a5193c | 55d50d147c953fab37b273bca9ab010f40e067d3 |
python/cpython | python__cpython-102421 | # Analyze and improve `test_asyncio.py`: right now it might be flaky
Opening as proposed in https://github.com/python/cpython/issues/97535#issuecomment-1257220241
We had multiple issues with `test_asyncio` before. Some tests change the environment, some are just flaky:
- https://github.com/python/cpython/issues/97535
- https://github.com/python/cpython/issues/95027
- https://github.com/python/cpython/issues/91676
- https://github.com/python/cpython/issues/85848
- https://github.com/python/cpython/issues/85801
- https://github.com/python/cpython/issues/76639
- Any others?
It is proposed to analyze what is causing this to happen again and again.
And fix it, of course :)
<!-- gh-linked-prs -->
### Linked PRs
* gh-102421
<!-- /gh-linked-prs -->
| a74cd3ba5de1aad1a1e1ee57328b54c22be47f77 | eff9f43924fc836970b2378d58523388d9246194 |
python/cpython | python__cpython-99503 | # termios.tcdrain hangs on MacOS
# Bug report
The following Python program hangs forever on MacOS, but does not hang on Linux:
```
import os, termios, time
device_fd, tty_fd = os.openpty()
def loop():
while True:
data = os.read(device_fd, 1)
print(data) # <--- never gets here
time.sleep(.1)
from threading import Thread
thread = Thread(target=loop, daemon=True)
thread.start()
os.write(tty_fd, b'123')
termios.tcdrain(tty_fd) # <--- gets stuck here
time.sleep(3) # allow thread to read all data
```
Reading the data in a thread should mean there's absolutely no reason for `tcdrain` to hang. [The docs say it only waits for the data to be "transmitted"](https://docs.python.org/3/library/termios.html#termios.tcdrain), which sounds different to "read". And there's the fact this works on Linux.
Any single-threaded call to `tcdrain()` using `tty_fd` also hangs, even if the `os.write` line is removed, i.e. when there is nothing for `tcdrain` to wait on. When I try with a regular file descriptor or a pipe I get an error: `'Inappropriate ioctl for device'`, so it's unclear whether it's the combination of `tcdrain` and `openpty` or just `tcdrain` that's causing the issue.
When I sample the process using the MacOS Activity Monitor I get the following at the bottom of the call graph for the main thread (unchanging) and the secondary "read" thread (bottommost lines vary per sample) respectively:
```
...
+ 2553 termios_tcdrain (in termios.cpython-39-darwin.so) + 56 [0x1048dedcc]
+ 2553 tcdrain (in libsystem_c.dylib) + 48 [0x19f27c454]
+ 2553 ioctl (in libsystem_kernel.dylib) + 36 [0x19f30b0c0]
+ 2553 __ioctl (in libsystem_kernel.dylib) + 8 [0x19f30b0d4]
...
853 os_read (in python3.9) + 320 [0x100d45ad8]
853 _Py_read (in python3.9) + 92 [0x100d35ca0]
853 PyEval_RestoreThread (in python3.9) + 24 [0x100cd2c7c]
853 take_gil (in python3.9) + 176 [0x100cd2550]
852 _pthread_cond_wait (in libsystem_pthread.dylib) + 1236 [0x19f34483c]
! 852 __psynch_cvwait (in libsystem_kernel.dylib) + 8 [0x19f30a270]
1 _pthread_cond_wait (in libsystem_pthread.dylib) + 344 [0x19f3444c0]
1 __gettimeofday (in libsystem_kernel.dylib) + 12 [0x19f30aa0c]
```
The questions I'm struggling to answer:
- What is the expected behaviour of `tcdrain` in this scenario?
- If the Linux behaviour is expected, why does it not work on MacOS?
# Your environment
- CPython versions tested on: 3.9.13
- Operating system and architecture: MacOS 12.5.1 (ARM CPU).
I've also tested on MacOS with an Intel CPU and [another developer has tested on Linux](https://github.com/pyserial/pyserial/issues/625#issuecomment-1250544769) (to prove it terminates there). Originally I thought the issue might be an OS issue and [posted in the Apple dev forum about it](https://developer.apple.com/forums//thread/715119).
<!-- gh-linked-prs -->
### Linked PRs
* gh-99503
* gh-99679
* gh-99680
<!-- /gh-linked-prs -->
| 959ba45d75953caa911e16b4c2a277978fc4b9b0 | 4d82f628c44490d6fbc3f6998d2473d1304d891f |
python/cpython | python__cpython-97906 | # Port PyPy's new unicode name db format to optimize binary CPython size
# Feature or enhancement
PyPy has a new algorithm that might help us reduce the unicode name db size of our binaries (which would be helpful on our way forward to better WASM compatibility [through reduced download sizes]). For more details, you can see the new implementation in [PyPy side](https://foss.heptapod.net/pypy/pypy/-/blob/branch/default/rpython/rlib/unicodedata/dawg.py#L1-12) by @cfbolz.
CC: @ambv @cfbolz
We'll probably start with missing tests, and then build out a prototype to see how feasible it is and will share numbers on how much it would help before going forward with the implementation.
<!-- gh-linked-prs -->
### Linked PRs
* gh-97906
* gh-111764
* gh-112118
* gh-112120
<!-- /gh-linked-prs -->
| 9573d142157d8432f2772a109c304dafeaa454a5 | 0e9c364f4ac18a2237bdbac702b96bcf8ef9cb09 |
python/cpython | python__cpython-96932 | # ssl.SSLSocket.shared_ciphers always returns server cipher list
# Bug report
The function `ssl.SSLSocket.shared_ciphers` always returns the server's cipher list rather than the ciphers common to both client and server; i.e., it returns the same list as `ssl.SSLContext.get_ciphers`, in a different format.
This is due to the use of `SSL_get_ciphers` in the implementation, rather than `SSL_get_shared_ciphers`.
# Your environment
- CPython versions tested on: 3.8.10, 3.12.0a0
- Operating system and architecture: Ubuntu 20.04, x86_64
<!-- gh-linked-prs -->
### Linked PRs
* gh-96932
* gh-102918
* gh-102919
<!-- /gh-linked-prs -->
| af9c34f6ef8dceb21871206eb3e4d350f6e3d3dc | ea93bde4ece139d4152a59f2c38aa6568559447c |
python/cpython | python__cpython-114227 | # IDLE: Stop reusing built-in names as parameters
Some idlelib functions use built-in names such as 'object', 'dict', and 'type' as function parameter names. As a result, these parameter names are mistakenly highlighted as builtin names. Unless a parameter name is used as a keyword in any call of the function, changing a parameter name within a function should cause no problems.
The easiest name change is to append '\_'. That is what I have done so far to get 'object_' and 'dict_' in my b_ins branch.
EDIT: In appropriate places, I prefixed 'g' or 'l', for 'global' or 'local', to 'dict'.
Dependency of #87179
<!-- gh-linked-prs -->
### Linked PRs
* gh-114227
* gh-114228
* gh-114229
<!-- /gh-linked-prs -->
| 6f4b242a03e521a55f0b9e440703b424ed18ce2f | 8cda72037b262772399b2b7fc36dee9340d74fd6 |
python/cpython | python__cpython-106455 | # Show value in error of list.remove
Consider
```python
foo = ["a", "b"]
for i in ["c"]:
foo.remove(i)
```
This throws with a useless error message:
```
Traceback (most recent call last):
File "/Users/tdegeus/Downloads/t.py", line 3, in <module>
foo.remove(i)
ValueError: list.remove(x): x not in list
```
Instead it would be very helpful to show the value of `x`. As it is, one needs extra manual debugging or extra lines of code.
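A sketch of the requested behaviour as a user-level wrapper (the helper name is hypothetical; the real fix would live in the C implementation of `list.remove`):

```python
def remove_checked(lst, value):
    # Hypothetical wrapper producing the more informative message
    # the issue asks list.remove() itself to emit.
    try:
        lst.remove(value)
    except ValueError:
        raise ValueError(f"{value!r} is not in list") from None
```

With this wrapper, `remove_checked(["a", "b"], "c")` raises a `ValueError` whose message names the missing value.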
<!-- gh-linked-prs -->
### Linked PRs
* gh-106455
* gh-116956
<!-- /gh-linked-prs -->
| f6cdc6b4a191b75027de342aa8b5d344fb31313e | 2a4cbf17af19a01d942f9579342f77c39fbd23c4 |
python/cpython | python__cpython-98518 | # Rewrite `asyncio.wait_for` using `asyncio.timeout`
Over the years a lot of issues have accumulated for `asyncio.wait_for` and the code has become complicated. The `asyncio.timeout` can be used to simplify this a lot and will also fix bugs which are already fixed or don't exist in `asyncio.timeout`. This rewrite won't be backported. `asyncio.wait_for` should be nothing more than a wrapper around `asyncio.timeout`.
`asyncio.wait_for` issues: https://github.com/python/cpython/issues?q=is%3Aissue+is%3Aopen+wait_for+label%3Aexpert-asyncio
<!-- gh-linked-prs -->
### Linked PRs
* gh-98518
<!-- /gh-linked-prs -->
| a5024a261a75dafa4fb6613298dcb64a9603d9c7 | 226484e47599a93f5bf033ac47198e68ff401432 |
python/cpython | python__cpython-106660 | # Improved replacement functionality for deprecated crypt module
# Documentation
The `crypt` module is deprecated for 3.11. At the top of the doc page (https://docs.python.org/3/library/crypt.html), it suggests that maybe `hashlib` can provide a replacement, or PEP 594. Neither of these were very helpful.
What I **did** find is `passlib` has a functional replacement. Please see: https://passlib.readthedocs.io/en/stable/lib/passlib.hash.md5_crypt.html
I think this package/module should be recommended in the `crypt` module documentation.
Note that `passlib` will use the OS `crypt()` function *if available*, and will default to a pure-python solution based on the (guaranteed) presence of the MD5 hashing library.
<!-- gh-linked-prs -->
### Linked PRs
* gh-106660
* gh-106697
<!-- /gh-linked-prs -->
| da264121f4a529504dd931bad2b0af351df67d39 | 3a4231dd745e54ae533ce46371637f1fe8fa6a7b |
python/cpython | python__cpython-96716 | # Redundant NULL check in profile_trampoline function (sysmodule.c)
Code in the `profile_trampoline` function checks whether the `arg` argument is `NULL` and, if so, assigns `Py_None` to it:
https://github.com/python/cpython/blob/88a7f661ca02c0eb76b8f19234b8293b70f171e2/Python/sysmodule.c#L954-L956
The only place where `arg` is used in `profile_trampoline` is this call
https://github.com/python/cpython/blob/88a7f661ca02c0eb76b8f19234b8293b70f171e2/Python/sysmodule.c#L959
But a similar check is already done by the `call_trampoline` function:
https://github.com/python/cpython/blob/88a7f661ca02c0eb76b8f19234b8293b70f171e2/Python/sysmodule.c#L930
My suggestion is to remove the redundant check from `profile_trampoline`.
<!-- gh-linked-prs -->
### Linked PRs
* gh-96716
<!-- /gh-linked-prs -->
| 3221b0de6792145cb4d0d461065a956db82acc93 | 621a1790c4d9e316c7228a13735bf71f2b0f5372 |
python/cpython | python__cpython-103232 | # Better error message for assignment to non-existent __slots__
# Feature or enhancement
Currently, the error message for an attribute that isn't included in a class's `__slots__` is harder to understand than I think it needs to be.
```pycon
Python 3.12.0a0 (heads/main:4114bcc, Sep 7 2022, 19:35:54) [Clang 13.1.6 (clang-1316.0.21.2.5)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> class Foo:
... __slots__ = ("bar",)
...
>>> Foo().not_an_attribute = 1234
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: 'Foo' object has no attribute 'not_an_attribute'
```
# Pitch
I think the message can be improved in multiple ways and be made more similar to the error message for attribute access:
1. Make the error message contain a note as to why the assignment failed. Ideally, this would look something like
```py
AttributeError: 'Foo' cannot have attribute 'not_an_attribute' set as it is not included in its __slots__
```
2. Make the error message more forgiving in the case of a typo.
```pycon
>>> Foo().bat = "hello world"
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: 'Foo' cannot have attribute 'bat' set as it is not included in its __slots__. Did you mean: 'bar'?
```
3. Support more introspection on the raised AttributeError as currently name and obj are None.
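A rough sketch of how the suggestion logic could work, using `difflib` as a pure-Python approximation (the helper name and exact message wording are hypothetical):

```python
import difflib

class Foo:
    __slots__ = ("bar",)

def slot_error_message(obj, name):
    # Hypothetical helper approximating the proposed error text,
    # including a difflib-based "Did you mean" suggestion.
    slots = getattr(type(obj), "__slots__", ())
    msg = (f"{type(obj).__name__!r} cannot have attribute {name!r} set "
           f"as it is not included in its __slots__")
    close = difflib.get_close_matches(name, slots, n=1)
    if close:
        msg += f". Did you mean: {close[0]!r}?"
    return msg

print(slot_error_message(Foo(), "bat"))
```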
<!-- gh-linked-prs -->
### Linked PRs
* gh-103232
<!-- /gh-linked-prs -->
| cdeb1a6caad5e3067f01d6058238803b8517f9de | 41ca16455188db806bfc7037058e8ecff2755e6c |
python/cpython | python__cpython-96639 | # pty.spawn deadlock
# Bug report
```
% yes | python3 -c "import pty; pty.spawn(['cat'])"
y
...
y^CTraceback (most recent call last):
File "<string>", line 1, in <module>
File "/usr/lib/python3.10/pty.py", line 181, in spawn
_copy(master_fd, master_read, stdin_read)
File "/usr/lib/python3.10/pty.py", line 161, in _copy
_writen(master_fd, data)
File "/usr/lib/python3.10/pty.py", line 127, in _writen
n = os.write(fd, data)
KeyboardInterrupt
```
You will see 2048 lines of y due to tty echo, then the output stops and hangs forever.
Both python and cat block on write, each waiting for the other to consume something from the pty buffer.
# Your environment
Tested on Linux/macOS with CPython 3.10.
<!-- gh-linked-prs -->
### Linked PRs
* gh-96639
* gh-104655
<!-- /gh-linked-prs -->
| 9c5aa8967bd7c1b02fb1da055c6b3afcccbbb251 | c26d03d5d6da94367c7f9cd93185616f2385db30 |
python/cpython | python__cpython-96561 | # SyntaxError for walrus target in a comprehension when the target is a global variable with a private name.
# Bug report
If I have a walrus target with a private name, contained in a comprehension, the compiler mangles the name just fine.
However, if the name is declared global in a function containing the comprehension, the walrus is supposed to assign to the global variable with the mangled name.
Instead, I get a SyntaxError. BTW, if I use the mangled name instead of the private name in the walrus, it works fine.
Example:
```py
>>> class C:
... def f():
... global __x
... __x = 0
... [_C__x := 1 for a in [2]]
... [__x := 2 for a in [3]] # BUG
...
File "<stdin>", line 6
SyntaxError: no binding for nonlocal '_C__x' found
```
Line 4 correctly assigns 0 to the global variable `_C__x`, and line 5 assigns it 1.
Disassembly of this program, without line 5:
```
4 0 LOAD_CONST 1 (0)
2 STORE_GLOBAL 0 (_C__x)
5 4 LOAD_CONST 2 (<code object <listcomp> at 0x00000213F83B4C90, file "<stdin>", line 5>)
6 LOAD_CONST 3 ('C.f.<locals>.<listcomp>')
8 MAKE_FUNCTION 0
10 LOAD_CONST 4 ((2,))
12 GET_ITER
14 CALL_FUNCTION 1
16 POP_TOP
18 LOAD_CONST 0 (None)
20 RETURN_VALUE
Disassembly of <code object <listcomp> at 0x00000213F83B4C90, file "<stdin>", line 5>:
5 0 BUILD_LIST 0
2 LOAD_FAST 0 (.0)
>> 4 FOR_ITER 12 (to 18)
6 STORE_FAST 1 (a)
8 LOAD_CONST 0 (1)
10 DUP_TOP
12 STORE_GLOBAL 0 (_C__x)
14 LIST_APPEND 2
16 JUMP_ABSOLUTE 4
 >> 18 RETURN_VALUE
```
# Your environment
- CPython versions tested on: Python 3.9.6 (tags/v3.9.6:db3ff76, Jun 28 2021, 15:26:21) [MSC v.1929 64 bit (AMD64)] on win32
- Operating system and architecture: Windows 10.
# Suggestions
I don't have the facility to debug the compiler code, so I can only speculate about the cause of the bug.
It would appear that when `__x` is found in the NamedExpr, which is part of the `<listcomp>`, it is somehow using the original name `__x` in a symbol lookup instead of the mangled name `_C__x`. I don't know which symbol table is involved, but whichever it is, `__x` is of course not in it. And the SyntaxError has the mangled name in the message.
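A minimal illustration of the mangling rule itself, independent of the walrus bug (class and attribute names are illustrative):

```python
class C:
    __x = 1  # private names in a class body are stored under _C__x

# The mangled name exists on the class; the unmangled one does not
assert C._C__x == 1
assert not hasattr(C, "__x")
```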
<!-- gh-linked-prs -->
### Linked PRs
* gh-96561
* gh-115603
* gh-115604
<!-- /gh-linked-prs -->
| 664965a1c141e8af5eb465d29099781a6a2fc3f3 | e88ebc1c4028cf2f0db43659e513440257eaec01 |
python/cpython | python__cpython-117815 | # configure.ac: using $CC to check compiler names may result in erroneous judgements
configure.ac contains multiple checks of this kind:
```
case $CC in
```
When paths contain "clang"/"gcc"/"icc", they might be part of $CC,
for example because of the "--sysroot" parameter. That could cause
erroneous judgements about the clang/gcc/icc compilers. e.g.
when "icc" is contained in the working path, the following errors are reported when compiling python3:
x86_64-wrs-linux-gcc: error: strict: No such file or directory
x86_64-wrs-linux-gcc: error: unrecognized command line option '-fp-model'
<!-- gh-linked-prs -->
### Linked PRs
* gh-117815
* gh-117819
* gh-117825
* gh-117836
* gh-117857
<!-- /gh-linked-prs -->
| a5b94d066016be63d632cccee0ec2a2eb24536dc | 75f7cf91ec5afc6091a0fd442a1f0435c19300b2 |
python/cpython | python__cpython-96311 | # argparse sometimes tracebacks when all options in a mutually exclusive group are suppressed
# Bug report
When a subparser contains a mutually exclusive group and all options in the group are suppressed,
it sometimes errors out with a traceback. The crash depends on the other options (their ordering and length) and the terminal size.
Reproducer:
```
#!/usr/bin/python3
import argparse
parser = argparse.ArgumentParser()
commands = parser.add_subparsers(title="commands", dest="command")
cmd_foo = commands.add_parser("foo")
group = cmd_foo.add_mutually_exclusive_group()
group.add_argument('--verbose', action='store_true', help=argparse.SUPPRESS)
group.add_argument('--quiet', action='store_true', help=argparse.SUPPRESS)
cmd_foo.add_argument("--longlonglonglonglonglonglonglonglonglonglonglonglonglonglonglonglonglonglonglonglonglonglonglonglonglonglonglonglonglonglonglong")
parser.parse_args()
```
```
$ python3.11 reproducer.py foo --help
Traceback (most recent call last):
File ".../reproducer.py", line 13, in <module>
parser.parse_args()
File "/usr/lib64/python3.11/argparse.py", line 1862, in parse_args
args, argv = self.parse_known_args(args, namespace)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib64/python3.11/argparse.py", line 1895, in parse_known_args
namespace, args = self._parse_known_args(args, namespace)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib64/python3.11/argparse.py", line 2085, in _parse_known_args
positionals_end_index = consume_positionals(start_index)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib64/python3.11/argparse.py", line 2062, in consume_positionals
take_action(action, args)
File "/usr/lib64/python3.11/argparse.py", line 1971, in take_action
action(self, namespace, argument_values, option_string)
File "/usr/lib64/python3.11/argparse.py", line 1234, in __call__
subnamespace, arg_strings = parser.parse_known_args(arg_strings, None)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib64/python3.11/argparse.py", line 1895, in parse_known_args
namespace, args = self._parse_known_args(args, namespace)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib64/python3.11/argparse.py", line 2103, in _parse_known_args
start_index = consume_optional(start_index)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib64/python3.11/argparse.py", line 2043, in consume_optional
take_action(action, args, option_string)
File "/usr/lib64/python3.11/argparse.py", line 1971, in take_action
action(self, namespace, argument_values, option_string)
File "/usr/lib64/python3.11/argparse.py", line 1112, in __call__
parser.print_help()
File "/usr/lib64/python3.11/argparse.py", line 2590, in print_help
self._print_message(self.format_help(), file)
^^^^^^^^^^^^^^^^^^
File "/usr/lib64/python3.11/argparse.py", line 2574, in format_help
return formatter.format_help()
^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib64/python3.11/argparse.py", line 286, in format_help
help = self._root_section.format_help()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib64/python3.11/argparse.py", line 217, in format_help
item_help = join([func(*args) for func, args in self.items])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib64/python3.11/argparse.py", line 217, in <listcomp>
item_help = join([func(*args) for func, args in self.items])
^^^^^^^^^^^
File "/usr/lib64/python3.11/argparse.py", line 341, in _format_usage
assert ' '.join(opt_parts) == opt_usage
AssertionError
```
Please note that setting `COLUMNS` to a high number doesn't trigger the traceback:
```
COLUMNS=1000 python3.11 reproducer.py foo --help
usage: reproducer.py foo [-h] [--longlonglonglonglonglonglonglonglonglonglonglonglonglonglonglonglonglonglonglonglonglonglonglonglonglonglonglonglonglonglonglong LONGLONGLONGLONGLONGLONGLONGLONGLONGLONGLONGLONGLONGLONGLONGLONGLONGLONGLONGLONGLONGLONGLONGLONGLONGLONGLONGLONGLONGLONGLONGLONG]
options:
-h, --help show this help message and exit
--longlonglonglonglonglonglonglonglonglonglonglonglonglonglonglonglonglonglonglonglonglonglonglonglonglonglonglonglonglonglonglong LONGLONGLONGLONGLONGLONGLONGLONGLONGLONGLONGLONGLONGLONGLONGLONGLONGLONGLONGLONGLONGLONGLONGLONGLONGLONGLONGLONGLONGLONGLONGLONG
```
# Your environment
<!-- Include as many relevant details as possible about the environment you experienced the bug in -->
- CPython versions tested on: 3.8.13, 3.10.6, 3.11.0b5
- openSUSE Tumbleweed 20220823
<!-- gh-linked-prs -->
### Linked PRs
* gh-96311
* gh-115767
* gh-115768
<!-- /gh-linked-prs -->
| de1428f8c234a8731ced99cbfe5cd6c5c719e31d | 49258efada0cb0fc58ccffc018ff310b8f7f4570 |
python/cpython | python__cpython-100351 | # ntpath.normpath('\\\\') produces different result on Windows
In Python 3.11.0rc1 on Windows:
```python
>>> import ntpath
>>> ntpath.normpath('\\\\')
'\\\\'
```
In Python 3.11.0rc1 on non-Windows platforms, and in 3.10 across all platforms:
```python
>>> import ntpath
>>> ntpath.normpath('\\\\')
'\\'
```
<!-- gh-linked-prs -->
### Linked PRs
* gh-100351
* gh-100999
<!-- /gh-linked-prs -->
| 005e69403d638f9ff8f71e59960c600016e101a4 | eecd422d1ba849f0724962666ba840581217812b |
python/cpython | python__cpython-96194 | # os.path.ismount() doesn't properly use byte-paths from an os.DirEntry
# Bug report
It seems that `os.path.ismount()` doesn't properly use a bytes-path from an `os.DirEntry` object (despite both claiming to support/be PathLike).
Take e.g. the following code, when called with a bytes `path`:
```
def scandirtree(path=b".", xdev=True):
for p in os.scandir(path):
yield p
if p.is_dir(follow_symlinks=False) and ( not xdev or not os.path.ismount(p) ):
yield from scandirtree(p, xdev)
```
That fails with:
```
Traceback (most recent call last):
File "/home/calestyo/prj/generate-file-list/src/./generate-file-list", line 65, in <module>
main()
File "/home/calestyo/prj/generate-file-list/src/./generate-file-list", line 52, in main
for p in scandirtree(ap, args.xdev):
File "/home/calestyo/prj/generate-file-list/src/./generate-file-list", line 25, in scandirtree
if p.is_dir(follow_symlinks=False) and ( not xdev or not os.path.ismount(p) ):
File "/usr/lib/python3.10/posixpath.py", line 201, in ismount
parent = join(path, '..')
File "/usr/lib/python3.10/posixpath.py", line 90, in join
genericpath._check_arg_types('join', a, *p)
File "/usr/lib/python3.10/genericpath.py", line 155, in _check_arg_types
raise TypeError("Can't mix strings and bytes in path components") from None
TypeError: Can't mix strings and bytes in path components
```
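Until this is fixed, one workaround is to decode the `DirEntry` path before handing it to `os.path.ismount()`; a sketch of that idea (not the eventual fix):

```python
import os
import tempfile

with tempfile.TemporaryDirectory() as d:
    os.mkdir(os.path.join(d, "sub"))
    for entry in os.scandir(os.fsencode(d)):  # bytes paths, as in the report
        if entry.is_dir(follow_symlinks=False):
            # os.fsdecode() gives a str path, which ismount() handles fine
            assert not os.path.ismount(os.fsdecode(entry.path))
```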
See also https://discuss.python.org/t/bug-in-os-path-ismount-or-perhaps-os-direntry/18406
# Your environment
- CPython versions tested on: 3.10.6
- Operating system and architecture: Debian sid, x86_64
Cheers,
Chris.
<!-- gh-linked-prs -->
### Linked PRs
* gh-96194
* gh-99455
* gh-99456
<!-- /gh-linked-prs -->
| 367f552129341796d75fc4cc40edb49405235a2b | 1455c516fce829f8d46e4f15557afe8653e7e995 |
python/cpython | python__cpython-106451 | # sqlite3 docs: explain SELECT-with-literals trick
Some examples use the following SQL trick:
```python
# setup code; we only need a "dummy" connection
import sqlite3
cx = sqlite3.connect(":memory:")
# returns one row with one column: ("a",)
row = cx.execute("select 'a' as literal").fetchone()
# do stuff with result
print(row)
```
Some people may find such examples strange, because:
- we create a connection to an empty database; it is not immediately obvious why we do this
- it may not be immediately obvious that you can construct a resulting row by using literals in your query
We may consider one or more of the following:
- Add a _very short_ SQL tutorial
- Add an SQL (and/or) SQLite tips and tricks howto
- Explain how SQL queries work (what is a resulting row, etc.)
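For example, such an explanation could note that a result row can also be built from several literals, or from bound parameters, still without any table:

```python
import sqlite3

cx = sqlite3.connect(":memory:")
# Multiple literals in one result row
assert cx.execute("select 1, 'a'").fetchone() == (1, "a")
# The same trick with placeholders instead of literals
assert cx.execute("select ? + ?", (2, 3)).fetchone() == (5,)
cx.close()
```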
_Originally posted by @erlend-aasland in https://github.com/python/cpython/pull/96122#discussion_r951115951_
<!-- gh-linked-prs -->
### Linked PRs
* gh-106451
* gh-106513
* gh-106645
* gh-106646
* gh-106647
* gh-106648
<!-- /gh-linked-prs -->
| f520804b039df0d87fb9df6f1fed2a9bc9df8d61 | fc7ff1af457e27b7d9752600b3436641be90f598 |
python/cpython | python__cpython-96335 | # Calling inspect.signature with an AsyncMock instance raises an exception
<!--
If you're new to Python and you're not sure whether what you're experiencing is a bug, the CPython issue tracker is not
the right place to seek help. Consider the following options instead:
- reading the Python tutorial: https://docs.python.org/3/tutorial/
- posting in the "Users" category on discuss.python.org: https://discuss.python.org/c/users/7
- emailing the Python-list mailing list: https://mail.python.org/mailman/listinfo/python-list
- searching our issue tracker (https://github.com/python/cpython/issues) to see if
your problem has already been reported
-->
# Bug report
Starting in Python 3.10.6, the following code raises an exception:
```python
from inspect import signature
from unittest.mock import AsyncMock
mock = AsyncMock()
signature(mock)
```
* No exception is raised when `Mock` is used.
* The exception does not occur with Python 3.10.5.
# Your environment
<!-- Include as many relevant details as possible about the environment you experienced the bug in -->
- CPython versions tested on: 3.10.4, 3.10.5, 3.10.6 (Only occurs in 3.10.6)
- Operating system and architecture: Windows 10, Linux (Official Docker Image)
<!--
You can freely edit this text. Remove any lines you believe are unnecessary.
-->
<!-- gh-linked-prs -->
### Linked PRs
* gh-96335
* gh-101646
* gh-101647
* gh-108652
<!-- /gh-linked-prs -->
| 9e7d7266ecdcccc02385fe4ccb094f3444102e26 | a109454e828ce2d9bde15dea78405f8ffee653ec |
python/cpython | python__cpython-113819 | # TimeoutError not raised by 'timeout' context manager when "side errors" occur
Consider something like this:
```py
async with asyncio.timeout(1):
try:
await asyncio.sleep(2) # Will be interrupted after 1 second
finally:
1/0 # Crash in cleanup
```
This will report `ZeroDivisionError` but suppresses the fact that it happened during cleanup. Once #95761 lands the same problem will happen if the `try/finally` block happens to be in a task created using `TaskGroup`.
For end users who are looking why their code crashes it may be useful to know that the `timeout` context manager decided to cancel the thread, and that their crash was induced by the cancellation.
There are various ways we could solve this -- e.g. we might consider raising an `ExceptionGroup` that combines `TimeoutError` with the crash exception, but that would lead people to believe that they have to write `except* TimeoutError` to catch timeouts, and we don't want that (especially not since in 3.11 they don't have to).
I discussed this with @iritkatriel and she proposed that the `timeout` context manager could add a *note* to the crash exception (see [PEP 678](https://peps.python.org/pep-0678/)). For maximal usability the note should probably dig up the filename and linenumber of the caller and add those to its error message, so the note would read something like `"Timed out at file <file>, line <line>"` (details TBD by whoever writes a PR for this).
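A sketch of what attaching such a note could look like from user code, using PEP 678's `add_note()` (Python 3.11+; the note text, file name, and line number below are illustrative):

```python
try:
    1 / 0  # stand-in for the crash that happens during cancellation cleanup
except ZeroDivisionError as exc:
    if hasattr(exc, "add_note"):  # PEP 678, available since 3.11
        exc.add_note("Timed out at file example.py, line 3")
        # Notes are carried on the exception and shown in the traceback
        assert "Timed out" in exc.__notes__[0]
```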
<!-- gh-linked-prs -->
### Linked PRs
* gh-113819
<!-- /gh-linked-prs -->
| aef4a1203c06efde8505aefc9cf994e9a23f398d | ab0ad62038317a3d15099c23d2b0f03bee9f8fa7 |
python/cpython | python__cpython-98429 | # Edit What's New for Python 3.11
As discussed with @pablogsal , this is a meta-issue for coordinating an editing pass on the What's New in Python 3.11 document.
The focus with this issue and its accompanying PRs will be on textual and reST/Sphinx fixes and improvements to the existing content, rather than adding any missing NEWS entries, or touching the organization of the document itself. Another issue, #95914 , will cover adding the PEPs not currently listed to the Summary - Release highlights section, as well as a few PEPs that are not documented at all in What's New, but probably should be somewhere (PEP-624, PEP-654 and PEP-670).
It seems best to split this into separate PRs, one for each top-level section. I've listed them here for reference, with PRs linked as they are submitted:
## Prerequisites/General Changes
* #97740
* #97741
* #95914
* #96016
* #95937
* #95976
* #98315
* #98342
* Forward-ports
* #98344
* #98345
## Edit Sections
* [x] Summary
* #95914
* #95916
* #98416
* [x] New Features
* #95915
* #97739
* #97718
* [x] New Features related to Type Hints
* #96097
* [x] Other Language Changes
* #97719
* [x] Other CPython Implementation Changes
* #97720
* [x] New Modules
* #97721
* [ ] Improved Modules
* #97806
* #98295
* #98250
* #98304
* [x] Optimizations
* #98426
* [x] Faster CPython
* #98429
* [x] CPython bytecode changes
* #98559
* [x] Deprecated
* #98581
* [x] Pending Removal in Python 3.12
* #98583
* [x] Removed
* #98584
* [x] Porting to Python 3.11
* #98585
* [x] Build Changes
* #98588
* #98781
* [ ] C API Changes
## Add new APIs
Adapted and updated from @pablogsal 's post
* [x] module: asyncio.exceptions
added: ['BrokenBarrierError']
(Already implicitly referred to by Barrier mention, but will be linked directly in editing pass)
#97806
* [x] ~~module: asyncio.proactor_events
added: ['BaseProactorEventLoop.sock_recvfrom_into', 'BaseProactorEventLoop.sock_recvfrom', 'BaseProactorEventLoop.sock_sendto']~~
(Already added; will be improved in editing phase)
* [x] module: asyncio.runners
added: ['Runner']
#97806
* [x] ~~module: asyncio.selector_events
added: ['BaseSelectorEventLoop.sock_recvfrom_into', 'BaseSelectorEventLoop.sock_recvfrom', 'BaseSelectorEventLoop.sock_sendto']~~
(Already added; will be improved in editing phase)
#97806
* [x] ~~module: asyncio.sslproto
added: ['SSLProtocolState', 'AppProtocolState', 'add_flowcontrol_defaults', 'SSLProtocol.get_buffer', 'SSLProtocol.buffer_updated']~~
([Considered implementation details](https://github.com/python/cpython/issues/95913#issuecomment-1264565405), so @kumaraditya303 says no need to be documented (indeed, I don't see it documented anywhere else but the changelog))
* [x] module: asyncio.tasks
added: ['Task.cancelling', 'Task.uncancel']
#97806
* [x] ~~module: asyncio.windows_events
added: ['IocpProactor.recvfrom_into']~~
(Appears to be undocumented implementation detail of the added socket methods)
* [x] module: contextlib
added: ['chdir']
#95962
* [x] module: enum
added: ['global_enum_repr', 'global_str', 'show_flag_values', 'global_flag_repr']
(Not documented yet, and`ReprEnum` and `global_enum` mentioned in What's New aren't either)
#98298
#98455
* [x] module: hashlib
added: ['file_digest']
#95965
#95980
* [x] module: inspect
added: ['FrameInfo']
Already discussed, just not explicitly referenced
#98304
* [x] module: logging.handlers
added: ['SysLogHandler.createSocket']
(Not documented yet; seems like it should be?)
#98307
#98319
#98320
* [x] ~~module: pdb
added: ['ScriptTarget', 'ModuleTarget']~~
(Made private)
#96053
* [x] module: string
added: ['Template.is_valid', 'Template.get_identifiers']
#98311
* [x] module: tempfile
added: ['SpooledTemporaryFile.detach', 'SpooledTemporaryFile.read1', 'SpooledTemporaryFile.writable', 'SpooledTemporaryFile.readinto1', 'SpooledTemporaryFile.seekable', 'SpooledTemporaryFile.readable', 'SpooledTemporaryFile.readinto']
#98312
#98604
* [x] module: traceback
added: ['TracebackException.print', 'StackSummary.format_frame_summary']
#95980
* [x] module: zipfile
added: ['Path.suffix', 'Path.stem', 'Path.suffixes', 'ZipFile.mkdir']
#98314
## Related
* #93986
<!-- gh-linked-prs -->
### Linked PRs
* gh-98429
* gh-102490
* gh-102497
* gh-109750
* gh-109771
* gh-109772
<!-- /gh-linked-prs -->
| 80b19a30c0d5f9f8a8651e7f8847c0e68671c89a | 8606697f49dc58ff7e18147401ac65a09c38cf57 |
python/cpython | python__cpython-95897 | # posixmodule.c: osdefs.h inclusion should not depend on compiler
Inclusion of osdefs.h should not depend on the compiler used. This small patch moves an inclusion out of an `_MSC_VER` check into an `MS_WINDOWS` one to better honour the assumption that compiler and OS macros are independent.
<!-- gh-linked-prs -->
### Linked PRs
* gh-95897
* gh-99788
* gh-99789
<!-- /gh-linked-prs -->
| ec2b76aa8b7c6313293ff9c6814e8bc31e08fcaf | a86d8545221b16e714ffe3bda5afafc1d4748d13 |
python/cpython | python__cpython-95883 | # Traceback includes __aexit__ in contextlib which is not in previous versions
```python
import contextlib
@contextlib.asynccontextmanager
async def f():
try:
yield
finally:
pass
async def amain():
async with f(): 1/0
with contextlib.closing(amain().__await__()) as gen:
next(gen, None)
```
```
Traceback (most recent call last):
File "/home/graingert/projects/cpython/demo.py", line 16, in <module>
next(gen, None)
File "/usr/lib/python3.11/contextlib.py", line 222, in __aexit__
await self.gen.athrow(typ, value, traceback)
File "/home/graingert/projects/cpython/demo.py", line 7, in f
yield
File "/home/graingert/projects/cpython/demo.py", line 12, in amain
async with f(): 1/0
~^~
ZeroDivisionError: division by zero
```
_Originally posted by @graingert in https://github.com/python/cpython/issues/92118#issuecomment-1209663766_
<!-- gh-linked-prs -->
### Linked PRs
* gh-95883
* gh-100715
* gh-100718
<!-- /gh-linked-prs -->
| b3722ca058f6a6d6505cf2ea9ffabaf7fb6b6e19 | 85869498331f7020e18bb243c89cd694f674b911 |
python/cpython | python__cpython-107221 | # Target libc based suffix (-musl/-gnu) in PLATFORM_TRIPLET is chosen based on build machine configuration instead of configuration of the target
That, as an example, leads to cpython 3.9.13 build failing[^1] when cross-compiling on AMD64 Linux with glibc for mpc8548 Linux (OpenWrt) with musl.
As already described and confirmed as a bug in https://github.com/python/cpython/pull/24502#discussion_r938078760 :
If I'm not mistaken, `PLATFORM_TRIPLET` should refer to the target platform on which cpython will run. If that is the case, the musl libc vs. glibc decision should be based on `$host_os` rather than `$build_os` as the former is based on autoconf's `AC_CANONICAL_HOST` macro[^2] which refers to the target platform that might differ from the build platform in case of cross-compilation.
I'm creating this as a separate issue concerning a particular problem, but I think this also contributes to existing discussion in https://github.com/python/cpython/issues/87278 .
[^1]: "internal configure error for the platform triplet, please file a bug report"
[^2]: https://www.gnu.org/software/autoconf/manual/autoconf-2.68/html_node/Canonicalizing.html
<!-- gh-linked-prs -->
### Linked PRs
* gh-107221
<!-- /gh-linked-prs -->
| c163d7f0b67a568e9b64eeb9c1cbbaa127818596 | 809ea7c4b6c2b818ae510f1f58e82b6b05ed4ef9 |
python/cpython | python__cpython-125376 | # Ambiguity in behavior of ArgumentParser prefix_chars when same flag is specified with different prefixes
## Environment
```
$ python --version
Python 3.8.10
$ uname -a
Linux 4.4.0-19041-Microsoft #1237-Microsoft Sat Sep 11 14:32:00 PST 2021 x86_64 x86_64 x86_64 GNU/Linux
```
## Description
It seems that flags specified with different prefixes but the same 'name' (for lack of a better word) are treated as one flag.
## Example
```python
# example.py
import argparse
parser = argparse.ArgumentParser(prefix_chars='-+')
parser.add_argument('-e', metavar='<example>', action='append')
parser.add_argument('+e', metavar='<example>', action='append')
args = parser.parse_args()
print(args)
```
```sh
$ python example.py -e hello1 +e hello2
Namespace(e=['hello1', 'hello2'])
```
I was kind of hoping that somebody had thought of a smart way of treating them as two distinct flags : ). Regardless, I don't see anything about this on https://docs.python.org/3/library/argparse.html#prefix-chars, so I'm filing this as a bug in the documentation. It seems like a nitty detail but the page advertises itself as being thorough:
> This page contains the API reference information. For a more gentle introduction to Python command-line parsing, have a look at the [argparse tutorial](https://docs.python.org/3/howto/argparse.html#id1).
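Until the documentation spells this out, the collision can be avoided by giving one of the options an explicit `dest`, since the default `dest` for both `-e` and `+e` is `e` (the names below are illustrative):

```python
import argparse

parser = argparse.ArgumentParser(prefix_chars="-+")
parser.add_argument("-e", action="append")                 # dest defaults to "e"
parser.add_argument("+e", dest="plus_e", action="append")  # separate destination

args = parser.parse_args(["-e", "hello1", "+e", "hello2"])
assert args.e == ["hello1"]
assert args.plus_e == ["hello2"]
```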
<!-- gh-linked-prs -->
### Linked PRs
* gh-125376
* gh-125642
* gh-125643
<!-- /gh-linked-prs -->
| dbcc5ac4709dfd8dfaf323d51f135f2218d14068 | 7b04496e5c7ed47e9653f4591674fc9ffef34587 |
python/cpython | python__cpython-99571 | # os.remove is documented to raise IsADirectoryError but macOS raises PermissionError
# Documentation
In the documentation for `os.remove` it states:
> Remove (delete) the file path. If path is a directory, an [IsADirectoryError](https://docs.python.org/3/library/exceptions.html#IsADirectoryError) is raised. Use [rmdir()](https://docs.python.org/3/library/os.html#os.rmdir) to remove directories. If the file does not exist, a [FileNotFoundError](https://docs.python.org/3/library/exceptions.html#FileNotFoundError) is raised.
This is the case on Linux:
```bash
$ python -c 'import os, tempfile; os.remove(tempfile.mkdtemp())'
Traceback (most recent call last):
File "<string>", line 1, in <module>
IsADirectoryError: [Errno 21] Is a directory: '/tmp/tmphxrd1p6n'
```
However on macOS this results in a `PermissionError`:
```bash
$ python3 -c 'import os, tempfile; os.remove(tempfile.mkdtemp())'
Traceback (most recent call last):
File "<string>", line 1, in <module>
PermissionError: [Errno 1] Operation not permitted: '/var/folders/1g/q2y74s696ng0v_cmb9sv7qgr0000gn/T/tmpawxhz8wm'
```
<!-- gh-linked-prs -->
### Linked PRs
* gh-99571
* gh-99639
* gh-99641
<!-- /gh-linked-prs -->
| 1cae31d26ba621f6b1f0656ad3d69a0236338bad | cdde29dde90947df9bac39c1d19479914fb3db09 |
python/cpython | python__cpython-102343 | # [subinterpreter] Make type version tag counter threadsafe for per interpreter GIL
The `next_version_tag` in `typeobject.c` is currently not thread safe and relies on GIL to protect against concurrent increments across threads.
https://github.com/python/cpython/blob/63140b445e4a303df430b3d60c1cd4ef34f27c03/Objects/typeobject.c#L46-L50
For per-interpreter GIL, it must be made thread-safe, otherwise the type cache will be affected by race conditions. Static types are not affected because they are immutable, so this is an issue for pure Python classes, aka heap types. This issue is for discussion of possible implementations.
---
Possible Solutions:
- Make the `next_version_tag` counter per interpreter. While this may seem to fix the issue, objects are shared across the interpreters and the type version tag is also used to represent the current state of a type in the specializing interpreter; two interpreters could end up with the same tag for different types, so this won't work as expected.
- Make the `next_version_tag` an atomic counter and increment it atomically using the pyatomic APIs. This is my preferred solution: `next_version_tag` is only modified when a type is modified, so this is a rare operation and not a performance concern. Since this is just a counter, relaxed ordering can be used here.
cc @ericsnowcurrently
<!-- gh-linked-prs -->
### Linked PRs
* gh-102343
<!-- /gh-linked-prs -->
| 0acea96dad622cba41bf1e9ee25d87658579ba88 | d8627999d85cc5b000dbe17180250d919f8510ad |
python/cpython | python__cpython-99709 | # Negative `_io.BufferedReader.tell`
# Bug report
I believe `tell` should never be negative, even for `/dev/urandom`. And @benjaminp wrote in a [TODO](https://github.com/python/cpython/blob/cc9160a29bc3356ced92348bcd8e6668c67167c9/Modules/_io/bufferedio.c#L1189) that it shouldn't be negative.
```
>>> from pathlib import Path
>>> urandom = Path('/dev/urandom').open('rb')
>>> urandom.tell()
0
>>> urandom.read(1)
b'$'
>>> urandom.tell()
-4095
```
# Environment
Fedora linux, python 3.10.6
<!-- gh-linked-prs -->
### Linked PRs
* gh-99709
* gh-115599
* gh-115600
<!-- /gh-linked-prs -->
| 26800cf25a0970d46934fa9a881c0ef6881d642b | d5a30a1777f04523c7b151b894e999f5714d8e96 |
python/cpython | python__cpython-100627 | # CVE-2020-10735: Prevent DoS by large int<->str conversions
## Problem
A Denial Of Service (DoS) issue was identified in CPython because we use binary bignums for our `int` implementation. A huge integer will always consume a near-quadratic amount of CPU time in conversion to or from a base 10 (decimal) string with a large number of digits. No efficient algorithm exists to do otherwise.
It is quite common for Python code implementing network protocols and data serialization to do `int(untrusted_string_or_bytes_value)` on input to get a numeric value, without having limited the input length or to do `log("processing thing id %s", unknowingly_huge_integer)` or any similar concept to convert an `int` to a string without first checking its magnitude. (`http`, `json`, `xmlrpc`, `logging`, loading large values into integer via linear-time conversions such as hexadecimal stored in `yaml`, or anything computing larger values based on user controlled inputs… which then wind up attempting to output as decimal later on). All of these can suffer a CPU consuming DoS in the face of untrusted data.
Everyone auditing all existing code for this, adding length guards, and maintaining that practice everywhere is not feasible nor is it what we deem the vast majority of our users want to do.
This issue has been reported to the Python Security Response Team multiple times by a few different people since early 2020, most recently a few weeks ago while I was in the middle of polishing up the PR so it’d be ready before 3.11.0rc2.
## Mitigation
After discussion on the Python Security Response Team mailing list the conclusion was that we needed to limit the size of integer to string conversions for non-linear time conversions (anything not a power-of-2 base) by default. And offer the ability to configure or disable this limit.
The Python Steering Council is aware of this change and accepts it as necessary.
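On interpreters that ship the mitigation (3.11+ and the security backports), the limit is adjustable via `sys.set_int_max_str_digits()`; a minimal sketch, guarded so it only runs where the API exists:

```python
import sys

if hasattr(sys, "set_int_max_str_digits"):
    sys.set_int_max_str_digits(640)  # 640 is the minimum allowed limit
    try:
        str(10 ** 1000)  # 1001 decimal digits: over the limit
    except ValueError:
        print("conversion blocked by the digit limit")
    # Power-of-2 bases convert in linear time and are not limited
    hex(10 ** 1000)
```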
<!-- gh-linked-prs -->
### Linked PRs
* gh-100627
* gh-100628
* gh-101065
* gh-101066
* gh-101630
* gh-101631
<!-- /gh-linked-prs -->
| 46521826cb1883e29e4640f94089dd92c57efc5b | f4fcfdf8c593611f98b9358cc0c5604c15306465 |
python/cpython | python__cpython-112577 | # Add more specific error message when file has same name as an imported module
# Feature / enhancement
Currently, if a python file I create and am trying to run has the same name as a module I'm importing in it (i.e. my file is named `random.py` and I try to do `import random; print(random.randint(1, 5))`), the following error message appears:
```
AttributeError: partially initialized module 'random' has no attribute 'randint' (most likely due to a circular import)
```
Instead, for this particular scenario, a more specific error message would be helpful:
```
ImportError: random.py imported itself. File name should be different than the imported module name.
```
# Pitch
Frequently, someone learning python will try to name their python files after the topic they are learning about. These topics can be about a specific module (e.g. random, turtle, pygame, requests). For instance, someone experimenting with the `turtle` module, might name their python file `turtle.py`. This results in the scenario of a person trying to import a module, but instead their program tries to import itself because of the name collision.
The current error message isn't clear that the issue is the filename conflicting with the module name and overriding it. Folks with more experience can deduce that from the "circular import" portion, but beginners are often confused by the "AttributeError" portion. I think this scenario would be a good candidate for a more specific error message, to better warn about this common pitfall.
The recent improvements for more detailed and specific error messages have helped many people, especially those new to Python. I think this would be a worthwhile addition in that same vein.
# Considerations & discussion points
The example improved error message I have above is very much open to improvements, if folks have suggestions.
I am also operating under the somewhat informed assumption that there isn't a valid case for a file to import itself, so there would not be an issue with this error appearing if the imported module is the same name as the file itself.
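A hypothetical sketch of the kind of check the interpreter could perform (the helper name is made up; the real implementation would live in the import machinery):

```python
import os
import sys
import types

def check_self_import(modname, script_path):
    """Return True if the module registered under `modname` is actually
    the running script itself (i.e. the file shadowed the module)."""
    mod = sys.modules.get(modname)
    mod_file = getattr(mod, "__file__", None)
    if mod_file is None:
        return False
    return os.path.abspath(mod_file) == os.path.abspath(script_path)

# Simulate a script named random.py shadowing the module it imports
fake = types.ModuleType("shadowed")
fake.__file__ = "/home/user/random.py"
sys.modules["shadowed"] = fake
assert check_self_import("shadowed", "/home/user/random.py")
assert not check_self_import("shadowed", "/home/user/other.py")
del sys.modules["shadowed"]
```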
# Previous discussion
- https://groups.google.com/g/python-ideas/c/dNbXlL2XoJ8?pli=1
<!-- gh-linked-prs -->
### Linked PRs
* gh-112577
* gh-113769
<!-- /gh-linked-prs -->
| 61e818409567ce452af60605937cdedf582f6293 | 2d91409c690b113493e3e81efc880301d2949f5f |
python/cpython | python__cpython-102207 | # test_tar test_add_dir_getmember fails if uid/gid is larger than 16777215
# Bug report
The [`test_add_dir_getmember`](https://github.com/python/cpython/blob/12d92c733cfc00ee630b30e4e0250da400c83395/Lib/test/test_tarfile.py#L223) added in https://github.com/python/cpython/pull/30283 fails if the current user's uid or gid is larger than 16777215 because of limitations of UStar format.
Note that the other test cases don't have this problem.
This test should either avoid depending on the current user's uid/gid (such as by using an explicit uid/gid) or be skipped if the current uid/gid is 8**8 or larger.
For example,
```python
def filter(tarinfo):
tarinfo.uid = 100
tarinfo.gid = 100
return tarinfo
...
tar.add(name, filter=filter)
```
Will use 100 as the uid and gid instead of the current user's uid/gid.
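A self-contained version of that approach, writing the archive to memory so the result is independent of the current user's uid/gid (archive contents are illustrative):

```python
import io
import tarfile
import tempfile

def stable_ids(tarinfo):
    tarinfo.uid = 100  # well within UStar's 16777215 limit
    tarinfo.gid = 100
    return tarinfo

buf = io.BytesIO()
with tempfile.TemporaryDirectory() as d:
    with tarfile.open(fileobj=buf, mode="w", format=tarfile.USTAR_FORMAT) as tar:
        tar.add(d, arcname="d", filter=stable_ids)

buf.seek(0)
with tarfile.open(fileobj=buf) as tar:
    for member in tar.getmembers():
        assert member.uid == 100 and member.gid == 100
```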
<!-- gh-linked-prs -->
### Linked PRs
* gh-102207
* gh-102230
* gh-102231
<!-- /gh-linked-prs -->
| 56e93c8020e89e1712aa238574bca2076a225028 | 54dfa14c5a94b893b67a4d9e9e403ff538ce9023 |
python/cpython | python__cpython-102119 | # pipe test failures on some Linux configurations
# Bug report
On Linux systems, when configured with a default pipe capacity of 4096, the following test fail:
- test_fcntl.test_fcntl_f_pipesize
- test_subprocess.test_pipesizes
From a recent `fcntl(2)` Linux [man page](https://man7.org/linux/man-pages/man2/fcntl.2.html#:~:text=Attempts%20to%20set%20the%20pipe%20capacity%20below%20the%0A%20%20%20%20%20%20%20%20%20%20%20%20%20%20page%20size%20are%20silently%20rounded%20up%20to%20the%20page%20size.):
```
F_SETPIPE_SZ:
...
Attempts to set the pipe capacity below the page size are silently rounded up to the page size.
```
There's a check that attempts to skip the tests if the pipe capacity is [512 bytes](https://github.com/python/cpython/blob/6a5104f4fa83ed08fe31f712757dddabfede394c/Lib/test/test_fcntl.py#L202), but that's less than the smallest page size on x86.
Since this feature appears to be Linux specific, the check should:
1) Use `os.sysconf('SC_PAGESIZE')` as a minimum
2) Fix the typos ("SkitTest") in [test_fcntl_f_pipesize](https://github.com/python/cpython/blob/6a5104f4fa83ed08fe31f712757dddabfede394c/Lib/test/test_fcntl.py#L203) and [test_pipesizes](https://github.com/python/cpython/blob/f6dd14c65336cda4e2ebccbc6408dfe3b0a68a34/Lib/test/test_subprocess.py#L715)
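A minimal sketch of the page-size floor from point 1 (the helper name is hypothetical):

```python
import os

def minimum_pipe_capacity():
    # On Linux, F_SETPIPE_SZ requests below the page size are silently
    # rounded up to the page size, so that is the real lower bound for
    # a pipe-size test, not a hard-coded 512 bytes.
    try:
        return os.sysconf("SC_PAGESIZE")
    except (AttributeError, ValueError, OSError):
        return 4096  # assumed fallback where sysconf is unavailable
```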
<!-- gh-linked-prs -->
### Linked PRs
* gh-102119
* gh-102121
* gh-102122
* gh-102163
* gh-102365
* gh-102455
<!-- /gh-linked-prs -->
| d5c7954d0c3ff874d2d27d33dcc207bb7356f328 | 0d4c7fcd4f078708a5ac6499af378ce5ee8eb211 |
python/cpython | python__cpython-107536 | # New asyncio ssl implementation is lacking license and origin information
# Bug report
In GH-31275 / GH-88177 a new asyncio ssl implementation was added. The code was copied from uvloop project. The commit message 13c10bfb777483c7b02877aab029345a056b809c lacks information about origin, original authors, and original license of the code. The copied files and code also lacks provenance and license information.
Lack of license information is problematic from a legal standpoint. MIT license requires us to keep the license with code or non-trivial portions of code.
> The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
Each file with non-trivial code from uvloop should gain a file header with information about the origin of the files, the authors of the code, and the original license of the uvloop project. Furthermore, ``Docs/license.rst`` should be extended. Please get in contact with our lawyer VanL if you have further questions.
<!-- gh-linked-prs -->
### Linked PRs
* gh-107536
* gh-114045
* gh-114046
<!-- /gh-linked-prs -->
| dce30c9cbc212e5455e100f35ac6afeb30dfd23e | 8aa126354d93d7c928fb35b842cb3a4bd6e1881f |
python/cpython | python__cpython-118105 | # Remove unused indent_level from Modules/_json.c
There are commits from 2009-05-02 calculating indent_level but not using it to indent. There are some other commits from 2011-02-22 commenting out the code with the message "it does not work". Nobody has cared about it for over 10 years; time to remove the commented-out code and leftovers.
<!-- gh-linked-prs -->
### Linked PRs
* gh-118105
* gh-118636
<!-- /gh-linked-prs -->
| 05adfbba2abafcdd271bf144a7b3f80bcd927288 | 7758be431807d574e0f1bbab003796585ae46719 |
python/cpython | python__cpython-95378 | # Add support for other image formats(e.g. PNG) to the turtle module
# Feature or enhancement
Add support for PGM, PPM, and PNG formats to *turtle.bgpic()* and *turtle.register_shape()*.
# Pitch
Currently, *turtle* only supports GIF format for an image file,
whereas the backend *tkinter.PhotoImage* supports PGM, PPM, and PNG formats in addition to GIF format.
If *turtle* supports PNG format, you can animate true color images.
(It helps teaching Python with *turtle*, because you can search PNG images more easily than GIF ones on the Internet.)
Also it would be consistent if *turtle* supports all formats that *tkinter* supports.
<!-- gh-linked-prs -->
### Linked PRs
* gh-95378
<!-- /gh-linked-prs -->
| e1baa778f602ede66831eb34b9ef17f21e4d4347 | 60c65184695a3eab766b3bc26fc99f695deb998f |
python/cpython | python__cpython-101039 | # Remove bundled setuptools
# Feature or enhancement
Remove the bundled setuptools so that `ensurepip` and `python -m venv` only installs pip.
# Context
The `setup.py install` command of `setuptools` is deprecated.
However, in an environment where `setuptools` is installed but `wheel` is not (such as one created with `python -m venv`), pip falls back on running the deprecated and non-standard `setup.py install`.
Since version 22.1, pip has worked correctly by default in environments where `setuptools` is not installed, by enabling its PEP 517 mode automatically, leading to unsurprising installations in most cases.
So, in order to progressively expose more users to standard-based installation workflows, we (the pip team) would like that virtual environments are created without `setuptools` by default.
Users faced with failing installations following this change (likely due to packages with customized `setup.py` that do not support building a wheel) can easily `pip install setuptools` to solve the issue.
# Previous discussion
https://github.com/pypa/pip/issues/8102#issuecomment-1195566597 and following comments has some more context.
<!-- gh-linked-prs -->
### Linked PRs
* gh-101039
* gh-103316
* gh-103613
<!-- /gh-linked-prs -->
| ece20dba120a1a4745721c49f8d7389d4b1ee2a7 | f39e00f9521a0d412a5fc9a50f2a553ec2bb1a7c |
python/cpython | python__cpython-99506 | # failing configure command
# Bug report
When trying to build Python from source code on Mac, the configure command fails.
[config.log](https://github.com/python/cpython/files/9180910/config.log)
# Your environment
Python Version: 3.10.5
Operating System:
MacOS Monterey 12.3
64 Bit x68, Intel core CPU
<!-- gh-linked-prs -->
### Linked PRs
* gh-99506
* gh-99718
* gh-99719
<!-- /gh-linked-prs -->
| 8f024a02d7d63315ecc3479f0715e927f48fc91b | 8f18ac04d32515eab841172c956a8cb14bcee9c3 |
python/cpython | python__cpython-119888 | # Improve error message when using `in` for object that is not container
# Feature or enhancement
When using `in` to test containment, for example `if "a" in b:`, if b does not support the `in` operator, then it would raise an error message: `TypeError: Argument of type b is not iterable`.
To the reader/debugger of this code, the error message seems to suggest that you need to pass in an iterable object (an object that implements `__iter__`), but in reality, you can also pass in a container object (an object that implements `__contains__`).
It would be great if the error message can be improved and be more accurate and helpful.
# Pitch
The `in` keyword can be used in different ways:
## for loop
```
for item in [1,2,3]:
```
You can use a for loop when you have an iterable/sequence, such as a list, dict, or string, or any object that implements `__iter__`.
## testing for containment
```
if "a" in "abcd":
```
To make a class into a container, you just need to implement the `__contains__` method.
```
class Blabla:
def __contains__(self, item):
return False
b = Blabla()
>>> "a" in b
False
```
If a bad refactoring of this container class somehow removed or renamed `__contains__`, then the same existing code `"a" in b` would raise an error saying that b is not iterable.
If the object was not an iterable to begin with, this error message is confusing to the person debugging this, and they would not realize that this is due to the missing `__contains__` method.
I think it would be great if the error message when testing containment `if a in b` can be different than the error message when doing for loop `for a in b`.
Providing more accurate error message will be helpful to the user.
Example message:
`TypeError: Argument of type '%.200s' is not a container`
(and that this is only raised when doing `if a in b`)
I tried to look into the CPython code, and it seems like the error message is coming from this line:
https://github.com/python/cpython/blob/8a808952a61b4bd572d95f5efbff1680c59aa507/Objects/abstract.c#L2187
Which was introduced in https://github.com/python/cpython/pull/20537
Regarding the term container, it is used in this doc: https://docs.python.org/3/library/collections.abc.html#collections.abc.Container
# Previous discussion
I don't know if you'd count Twitter thread as previous discussions, but here are some links:
[Start of thread](https://twitter.com/mariatta/status/1549546791904808960)
Supporting message that the error message can be improved: [here](https://twitter.com/AdamChainz/status/1549555137311555587) and [here](https://twitter.com/liiight/status/1550163071921860608)
[Comment about Python's terminology of container and iterable](https://twitter.com/n7cmdr/status/1549550160316837889)
[Comment about Python oddity](https://twitter.com/treyhunner/status/1549881638473039873)
<!-- gh-linked-prs -->
### Linked PRs
* gh-119888
<!-- /gh-linked-prs -->
| dc03ce797ae8786a9711e6ee5dcaadde02c55864 | 65fededf9cc1780d5edbef8a6e0a7cf9bc15aea6 |
python/cpython | python__cpython-103779 | # String formatting clarification
https://docs.python.org/3/library/string.html#formatstrings has the following -
`An expression of the form '.name' selects the named attribute using [getattr()], while an expression of the form '[index]' does an index lookup using __getitem__().`
An expression of the form [1] correctly returns the element in position 1. An expression of the form [-1] fails with TypeError: list indices must be integers or slices, not str.
Discussion on python-list (https://mail.python.org/pipermail/python-list/2022-July/906930.html) led to unearthing this quote from PEP3101 - Advanced String Formatting (https://peps.python.org/pep-3101/#simple-and-compound-field-names) -
`The rules for parsing an item key are very simple. If it starts with a digit, then it is treated as a number, otherwise it is used as a string. `
It was felt that the documentation would be improved if this rule was made explicit.
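A short demonstration of the rule (an item key that starts with a digit is treated as a number, anything else as a string):

```python
items = ["a", "b", "c"]

# "1" starts with a digit, so it is converted to the integer index 1.
assert "{0[1]}".format(items) == "b"

# "-1" does not start with a digit, so it is passed to __getitem__
# as the string "-1", which a list rejects with TypeError.
try:
    "{0[-1]}".format(items)
except TypeError:
    pass
```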
<!-- gh-linked-prs -->
### Linked PRs
* gh-103779
* gh-105418
* gh-105419
<!-- /gh-linked-prs -->
| 3e7316d7e8969febb56fbc7416d483b073bd1702 | 9a89f1bf1e7bb819fe7240be779c99a84f47ea46 |
python/cpython | python__cpython-95151 | # Argument Clinic: add support for deprecating positional use of parameters
**Feature or enhancement**
Suggesting to add syntax for producing deprecation warnings for positional use of optional parameters.
I made a [proof-of-concept patch](https://github.com/erlend-aasland/cpython/pull/37) for this where I introduced a new special symbol `x`. Example use:
```c
/*[clinic input]
mod.func
stuff: object
/
x
optarg: int = 128
```
This will then generate code that emits a DeprecationWarning if `optarg` is passed as a positional argument, but not if it is passed as a keyword.
We can use this feature to introduce deprecation warnings for parameters that will become keyword-only in future releases. When the deprecation period is done, we can simply replace the `x` with `*`, to really make the optional params keyword-only.
**Pitch**
Quoting @serhiy-storchaka, in issue #93057:
> It is recommended to make optional rarely used arguments keyword-only. I do not think it will break much code, but we need a deprecation period for this.
>
> The problem is that sqlite3.connect() was converted to Argument Clinic, and it was much easier to add deprecation warning in the old code.
**Previous discussion**
- #93057
<!-- gh-linked-prs -->
### Linked PRs
* gh-95151
* gh-107712
* gh-107742
* gh-107745
* gh-107766
* gh-107768
* gh-107808
* gh-108132
<!-- /gh-linked-prs -->
| 33cb0b06efe33968eb32463fa1b02b5a729a17f8 | 3c8e8f3ceeae08fc43d885f5a4c65a3ee4b1a2c8 |
python/cpython | python__cpython-99740 | # [Enum] Enhance repr() when inheriting from dataclass
In 3.10 and prior, a combined dataclass/enum such as

```python
from dataclasses import dataclass
from enum import Enum

@dataclass(frozen=True)
class CreatureDataMixin:
    size: str
    legs: int

class Creature(CreatureDataMixin, Enum):
    BEETLE = ('small', 6)
    DOG = ('medium', 4)
```

had a repr() similar to

```
Creature(size='medium', legs=4)
```

In 3.11 that has been corrected to:

```
<Creature.DOG: CreatureDataMixin(size='medium', legs=4)>
```

Ideally, that would be:

```
<Creature.DOG: size='medium', legs=4>
```
<!-- gh-linked-prs -->
### Linked PRs
* gh-99740
<!-- /gh-linked-prs -->
| 2b2d607095335024e5e2bb358e3ef37650536839 | 616468b87bc5bcf5a4db688637ef748e1243db8a |
python/cpython | python__cpython-94962 | # `unitest.mock.create_autospec(async_def)` does not create functions valid for `inspect.iscoroutinefunction`
discovered while creating https://github.com/python/cpython/pull/94923
**Bug report**
```python
import unittest.mock
import inspect
import asyncio
import sys
def main():
async def demo():
pass
print(f"{inspect.iscoroutinefunction(unittest.mock.create_autospec(demo))=}")
print(f"{asyncio.iscoroutinefunction(unittest.mock.create_autospec(demo))=}")
# this prints:
# inspect.iscoroutinefunction(unittest.mock.create_autospec(demo))=False
# asyncio.iscoroutinefunction(unittest.mock.create_autospec(demo))=True
if __name__ == "__main__":
sys.exit(main())
```
see also https://github.com/python/cpython/issues/84753
<!-- gh-linked-prs -->
### Linked PRs
* gh-94962
<!-- /gh-linked-prs -->
| 9bf8d825a66ea2a76169b917c12c237a6af2ed75 | 0f885ffa94aa9b69ff556e119cb17deb23a5a4b3 |
python/cpython | python__cpython-103881 | # Let math.nextafter() compute multiple steps at a time.
Sometimes ``math.nextafter()`` needs to be applied multiple times in succession.

```python
x = nextafter(nextafter(nextafter(x, inf), inf), inf)  # Three steps up
```

It would be nice if the function supported this directly:

```python
x = nextafter(x, inf, n=3)
```
The implementation would just be a for-loop:
```
def newnextafter(x, y, /, *, n=1):
'Return the floating-point value n steps after x towards y.'
for i in range(n):
x = nextafter(x, y)
return x
```
The formal parameter can be just ``n`` or the longer but more descriptive ``steps``.
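As a self-contained check, three steps in one call of the pure-Python loop matches three nested `nextafter()` calls:

```python
import math

def newnextafter(x, y, /, *, n=1):
    'Return the floating-point value n steps after x towards y.'
    for _ in range(n):
        x = math.nextafter(x, y)
    return x

# Three steps up in one call equals three nested nextafter() calls.
three_nested = math.nextafter(
    math.nextafter(math.nextafter(1.0, math.inf), math.inf), math.inf)
assert newnextafter(1.0, math.inf, n=3) == three_nested
```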
<!-- gh-linked-prs -->
### Linked PRs
* gh-103881
<!-- /gh-linked-prs -->
| 6e39fa19555043588910d10f1fe677cf6b04d77e | c3f43bfb4bec39ff8f2c36d861a3c3a243bcb3af |
python/cpython | python__cpython-102684 | # Check for ref counting bugs in debug mode caused by immortal objects.
With https://github.com/python/cpython/issues/90699, the identifiers are statically allocated and are immortal. This makes it easy to make reference counting mistakes as they are not detected and cause negative ref count in `_Py_RefTotal`.
On my machine the reference count is negative because of missing incref on `&_Py_STR(empty)`:
```console
@kumaraditya303 ➜ /workspaces/cpython (main) $ ./python -I -X showrefcount -c pass
[-1 refs, 0 blocks]
```
PR https://github.com/python/cpython/pull/94850 fixes this issue.
---
To make it easy to discover reference counting issues, I propose that, after each runtime finalization, we check that all the statically allocated immortal objects still have a ref count of `999999999`; otherwise, in debug mode, `_PyObject_Dump` can be used to output the object and abort the process. This will help prevent these kinds of "unstable" ref count issues.
cc @ericsnowcurrently
<!-- gh-linked-prs -->
### Linked PRs
* gh-102684
<!-- /gh-linked-prs -->
| a703f743dbf2675948e59c44fa9d7112f7825100 | 88c262c086077377b40dfae5e46f597e28ffe3c9 |
python/cpython | python__cpython-104409 | # Help latest MSVC 1932 (v143, VS2022) to optimize PyObject_Free()
There is a report that python.org `3.11` Windows release has slowed down on micro benchmarks:
https://github.com/faster-cpython/ideas/issues/420
One reason is that `PyObject_Free()` is not well optimized by the latest MSVC (ver. 1932 in the v143 tool set), which is likely to be used for the next official `3.10` releases as well.
This issue has been reported to the MSVC team. However, their tools would not be fixed urgently.
<!-- gh-linked-prs -->
### Linked PRs
* gh-104409
* gh-104439
<!-- /gh-linked-prs -->
| a10b026f0fdceac42c4b928917894d77da996555 | 79b17f2cf0e1a2688bd91df02744f461835c4253 |
python/cpython | python__cpython-94784 | # ProcessPoolExecutor deadlock when a child process crashes while data is being sent in call queue
**Bug report**
When using a ProcessPoolExecutor with forked child processes, if one of the child processes suddenly dies (segmentation fault, not a Python exception) and if simultaneously data is being sent into the call queue, then the parent process hangs forever.
*Reproduction*
```
import ctypes
from concurrent.futures import ProcessPoolExecutor
def segfault():
ctypes.string_at(0)
def func(i, data):
print(f"Start {i}.")
if i == 1:
segfault()
print(f"Done {i}.")
return i
data = list(range(100_000_000))
count = 10
with ProcessPoolExecutor(2) as pool:
list(pool.map(func, range(count), [data] * count))
print(f"OK")
```
In Python 3.8.10 it raises a BrokenProcessPool exception whereas in 3.9.13 and 3.10.5 it hangs.
*Analysis*
When a crash happens in a child process, all workers are terminated and they stop reading in communication pipes. However, if data is being sent in the call queue, the call queue thread, which writes data from the buffer to the pipe (`multiprocessing.queues.Queue._feed`), can get stuck in `send_bytes(obj)` when the Unix pipe it's writing to is full. `_ExecutorManagerThread` is blocked in `self.join_executor_internals()` on line https://github.com/python/cpython/blob/da4912885f11f525a82a83f795ebffba06560e13/Lib/concurrent/futures/process.py#L515 (called from `self.terminate_broken()`). The main thread itself is blocked on https://github.com/python/cpython/blob/da4912885f11f525a82a83f795ebffba06560e13/Lib/concurrent/futures/process.py#L775 coming from the `__exit__` method of the Executor.
*Proposed solution*
Drain call queue buffer either in `terminate_broken` method before calling `join_executor_internals` or in queue `close` method.
I will create a pull request with a possible implementation.
**Your environment**
- CPython versions tested on: reproduced in 3.10.5 and 3.9.13 (works well in 3.8.10: BrokenProcessPool exception)
- Operating system and architecture: Linux, x86_64
<!-- gh-linked-prs -->
### Linked PRs
* gh-94784
* gh-106607
* gh-106609
<!-- /gh-linked-prs -->
| 6782fc050281205734700a1c3e13b123961ed15b | 9d582250d8fde240b8e7299b74ba888c574f74a3 |
python/cpython | python__cpython-112385 | # DocTest sorts by lineno which may be int or None
In `doctest.py` the following ordering is defined for the class `DocTest`:
```
def __lt__(self, other):
if not isinstance(other, DocTest):
return NotImplemented
return ((self.name, self.filename, self.lineno, id(self))
<
(other.name, other.filename, other.lineno, id(other)))
```
This is incorrect because the `lineno` field may be an integer and may be None, and comparisons between integers and None fail. Typically `lineno` is an integer, but `_find_lineno` explicitly can fall back to returning `None` so the field may be None:
```
def _find_lineno(self, obj, source_lines):
...
# We couldn't find the line number.
return None
```
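The failure, and one possible mitigation (mapping `None` to a sentinel such as `-1` in the sort key is an assumption, not the committed fix), can be reproduced with plain tuples:

```python
# Mimic (name, lineno) sort keys where lineno may be an int or None.
tests = [("a.txt", 3), ("a.txt", None), ("a.txt", 1)]

raised = False
try:
    sorted(tests, key=lambda t: (t[0], t[1]))
except TypeError:
    # The int-vs-None comparison fails, just as in DocTest.__lt__.
    raised = True

# Mapping None to -1 (an assumed sentinel) restores a total ordering.
fixed = sorted(tests, key=lambda t: (t[0], t[1] if t[1] is not None else -1))
```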
<!-- gh-linked-prs -->
### Linked PRs
* gh-112385
* gh-112400
* gh-112401
<!-- /gh-linked-prs -->
| fbb9027a037ff1bfaf3f596df033ca45743ee980 | 19a1fc1b3df30f64450d157dc3a5d40c992e347f |
python/cpython | python__cpython-112756 | # Python 3.10/3.8: shutil.rmtree(None, ignore_errors=True) behaves differently between Windows and *nix platforms
**Bug report**
Expected behavior: Passing None to shutil.rmtree's path argument should not yield an exception when ignore_errors=True.
Behavior on MacOS Python 3.10 (MacPorts) - Correct behavior:
```
>>> import shutil
>>> shutil.rmtree(None, ignore_errors=True)
>>> shutil.rmtree(None)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/opt/local/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/shutil.py", line 712, in rmtree
onerror(os.lstat, path, sys.exc_info())
File "/opt/local/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/shutil.py", line 710, in rmtree
orig_st = os.lstat(path)
TypeError: lstat: path should be string, bytes or os.PathLike, not NoneType
```
The above occurs on RedHat Linux 8 Python 3.8 and Python 3.10 as well.
Behavior on Windows (Incorrect/differs from MacOS/Linux):
```
>>> import shutil
>>> shutil.rmtree(None, ignore_errors=True)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Python310\lib\shutil.py", line 733, in rmtree
onerror(os.lstat, path, sys.exc_info())
File "C:\Python310\lib\shutil.py", line 577, in rmtree
orig_st = os.lstat(path)
TypeError: lstat: path should be string, bytes or os.PathLike, not NoneType
>>> shutil.rmtree(None)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Python310\lib\shutil.py", line 733, in rmtree
onerror(os.lstat, path, sys.exc_info())
File "C:\Python310\lib\shutil.py", line 577, in rmtree
orig_st = os.lstat(path)
TypeError: lstat: path should be string, bytes or os.PathLike, not NoneType
```
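Until the behavior is consistent across platforms, callers can guard the call themselves; a minimal sketch (the helper name is hypothetical):

```python
import os
import shutil
import tempfile

def rmtree_if_set(path, ignore_errors=True):
    # Skip the call entirely for None, so behavior no longer depends on
    # which platform-specific branch of rmtree inspects the path first.
    if path is not None:
        shutil.rmtree(path, ignore_errors=ignore_errors)
```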
**Environment**
* CPython Versions Tested: 3.8.x, 3.10.x
* Operating Systems: Windows 10 21H2, MacOS, RedHat Linux 8 (x86-64)
<!-- gh-linked-prs -->
### Linked PRs
* gh-112756
* gh-112846
<!-- /gh-linked-prs -->
| 563ccded6e83bfdd8c5622663c4edb679e96e08b | bc68f4a4abcfbea60bb1db1ccadb07613561931c |
python/cpython | python__cpython-102663 | # [subinterpreters] Static types incorrectly share some objects between interpreters
While static types (`PyTypeObject` values) don't themselves ever change (once `PyType_Ready()` has run), they do hold mutable data. This means they cannot be safely shared between multiple interpreters without a common GIL (and without state leaking between).
Mutable data:
* the object header (e.g. refcount), as with all objects
* otherwise immutable objects:
* ob_type (`__class__`)
* tp_base (`__base__`) - set by `PyType_Ready()` if not set
* tp_bases (`__bases__`) - always set by `PyType_Ready()`
* tp_mro (`__mro__`) - always set by `PyType_Ready()`
* tp_dict (`__dict__`) - set by `PyType_Ready()` if not set
* mutable containers:
* tp_subclasses (`__subclasses__`)
* tp_weaklist
(See https://docs.python.org/3/c-api/typeobj.html#tp-slots.)
(Note that `tp_cache` is no longer used.)
For the object header, if PEP 683 (immortal objects) is accepted then we can make static types immortal.
For the otherwise immutable objects, we can make sure each is immortal and then we're good. Even `tp_dict` is fine since it gets hidden behind `types.MappingProxyType`. We'd also need to make sure each contained item is immortal.
For `tp_subclasses` we will need a per-interpreter copy, and do the proper lookup in the `__subclasses__` getter. The cache could be stored on `PyInterpreterState` or even as a dict in `tp_subclasses`.
For `tp_weaklist` it's a similar story as for `tp_subclasses`. Note that `tp_weaklist` isn't very important for static types since they are never deallocated.
(The above is also discussed in [PEP 684](https://peps.python.org/pep-0684/#global-objects).)
CC @kumaraditya303
<!-- gh-linked-prs -->
### Linked PRs
* gh-102663
* gh-103912
* gh-103940
* gh-103961
* gh-104072
* gh-104074
* gh-105465
* gh-105471
* gh-117761
* gh-117980
<!-- /gh-linked-prs -->
| e6ecd3e6b437f3056e0a410a57c52e2639b56353 | 8d015fa000db5775d477cd04dc574ba13721e278 |
python/cpython | python__cpython-94641 | # email.message get_payload throws UnicodeEncodeError with some surrogate Unicode characters
email.message get_payload gets a UnicodeEncodeError if the message body contains a line that has either:
a Unicode surrogate code point that is valid for surrogateescape encoding (U+DC80 through U+DCFF) and a non-ASCII UTF-8 character
OR
a Unicode surrogate character that is not valid for surrogateescape encoding
Here is a minimal code example with one of the cases commented out
```
from email import message_from_string
from email.message import EmailMessage
m = message_from_string("surrogate char \udcc3 and 8-bit utf-8 ë on same line")
# m = message_from_string("surrogate char \udfff does it by itself")
payload = m.get_payload(decode=True)
```
On my python 3.10.5 on macOS this produces:
```
Traceback (most recent call last):
File "/Users/sidney/tmp/./test5.py", line 8, in <module>
payload = m.get_payload(decode=True)
File "/usr/local/Cellar/python@3.10/3.10.5/Frameworks/Python.framework/Versions/3.10/lib/python3.10/email/message.py", line 264, in get_payload
bpayload = payload.encode('ascii', 'surrogateescape')
UnicodeEncodeError: 'ascii' codec can't encode character '\xeb' in position 33: ordinal not in range(128)
```
This was tested with Python 3.10.5 on macOS; however, I tracked it down based on a report in the wild that was running Python 3.8 on Ubuntu 20.04, processing actual emails.
<!-- gh-linked-prs -->
### Linked PRs
* gh-94641
* gh-112971
* gh-112972
<!-- /gh-linked-prs -->
| 27a5fd8cb8c88537216d7a498eba9d9177951d76 | 3251ba8f1af535bf28e31a6832511ba19e96b262 |
python/cpython | python__cpython-94687 | # Port 23-argument `_posixsubprocess.fork_exec` to Argument Clinic
Currently the function is parsed with [the following behemoth](https://github.com/python/cpython/blob/main/Modules/_posixsubprocess.c#L816-L826):
```cpp
if (!PyArg_ParseTuple(
args, "OOpO!OOiiiiiiiiii" _Py_PARSE_PID "OOOiOp:fork_exec",
&process_args, &executable_list,
&close_fds, &PyTuple_Type, &py_fds_to_keep,
&cwd_obj, &env_list,
&p2cread, &p2cwrite, &c2pread, &c2pwrite,
&errread, &errwrite, &errpipe_read, &errpipe_write,
&restore_signals, &call_setsid, &pgid_to_set,
&gid_object, &groups_list, &uid_object, &child_umask,
&preexec_fn, &allow_vfork))
return NULL;
```
Conversion will:
- move *this* out of the realm of manual and error-prone labor into a precise and checked world of automation
- allow the use of faster calling conventions like `METH_FASTCALL`+`_PyArg_CheckPositional`.
<!-- gh-linked-prs -->
### Linked PRs
* gh-94687
* gh-94519
* gh-101054
<!-- /gh-linked-prs -->
| 124af17b6e49f0f22fbe646fb57800393235d704 | 49cae39ef020eaf242607bb2d2d193760b9855a6 |
python/cpython | python__cpython-136217 | # Log formatters which use newlines to separate messages should quote newlines for security reasons
The logging module, in most common configurations, is vulnerable to [log injection](https://owasp.org/www-community/attacks/Log_Injection) attacks.
For example:
```python3
import logging
logging.basicConfig(format='%(asctime)s %(message)s')
logging.warning('message\n2022-06-17 15:15:15,123 was logged.message')
```
results in
```text
2022-06-16 14:03:06,858 message
2022-06-17 15:15:15,123 was logged.message
```
All available log formatters in the standard library should provide a straightforward way to tell the difference between log message contents and log file format framing. For example, if your output format is newline-delimited, then it cannot allow raw newlines in messages and should "sanitize" by quoting them somehow.
Twisted deals with this by quoting them with trailing tabs, so, for example, the following code:
```python3
from twisted.logger import globalLogBeginner, textFileLogObserver, Logger
import sys
globalLogBeginner.beginLoggingTo(
[textFileLogObserver(sys.stdout)], redirectStandardIO=False
)
log = Logger()
log.info("regular log message\nhaha i tricked you this isn't a log message")
log.info("second log message")
```
Produces this output:
```text
2022-06-17T15:35:13-0700 [__main__#info] regular log message
haha i tricked you this isn't a log message
2022-06-17T15:35:13-0700 [__main__#info] second log message
```
I'd suggest that the stdlib do basically the same thing.
One alternate solution is just documenting that no application or framework is ever allowed to log a newlines without doing this manually themselves (and unfortunately this seems to be where the Java world has ended up, see for example https://github.com/spring-projects/spring-framework/commit/e9083d7d2053fea3919bfeb9057e9fdba4049119 ), but putting the responsibility on individual projects to do this themselves means making app and library authors predict all possible Formatters that they might have applied to them, then try to avoid any framing characters that that Formatter might use to indicate a message boundary. Today the most popular default formatter uses newlines. But what if some framework were to try to make parsing easier by using RFC2822? Now every application has to start avoiding colons as well as newlines. CSV? Better make sure you don't use commas. Et cetera, et cetera.
Pushing this up to the app or framework means that every library that wants to log anything derived from user data can't log the data in a straightforward structured way that will be useful to sophisticated application consumers, because they have to mangle their output in a way which won't trigger any log-parsing issues with the naive formats. In practice this really just means newlines, but if we make newlines part of the contract here, that also hems in any future Formatter improvements the stdlib might want to make.
I suspect that the best place to handle this would be logging.Formatter.format; there's even some precedent for this, since it already has a tiny bit of special-cased handling of newlines (albeit only when logging exceptions).
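A minimal sketch of that idea as a `Formatter` subclass (the class name and the backslash-escape quoting scheme are assumptions, not the stdlib's eventual design):

```python
import io
import logging

class NewlineQuotingFormatter(logging.Formatter):
    # Quote newlines in the fully formatted record, so that one log
    # record always occupies exactly one line of output and injected
    # newlines cannot forge extra records.
    def format(self, record):
        return super().format(record).replace("\n", "\\n")

stream = io.StringIO()
handler = logging.StreamHandler(stream)
handler.setFormatter(NewlineQuotingFormatter("%(levelname)s %(message)s"))
logger = logging.getLogger("injection-demo")
logger.addHandler(handler)
logger.propagate = False
logger.warning("message\nWARNING forged entry")
```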
(The right thing to do to avoid logging injection robustly is to emit all your logs as JSON, dump them into Cloudwatch or Honeycomb or something like that and skip this problem entirely. The more that the standard logging framework can encourage users to get onto that happy path quickly, the less Python needs to worry about trying to support scraping stuff out of text files with no schema as an interesting compatibility API surface, but this is a really big problem that I think spans more than one bug report.)
<!-- gh-linked-prs -->
### Linked PRs
* gh-136217
* gh-136357
* gh-136358
* gh-136446
* gh-136449
* gh-136450
<!-- /gh-linked-prs -->
| d05423a90ce0ee9ad5207dce3dd06ab2397f3d6e | 3e849d75f400569cbf3c29c356061c788284b71e |
python/cpython | python__cpython-98479 | # Tkinter Canvas.coords does not flatten arguments
**Bug report**
Doubly nested arrays passed to `Canvas.coords` produce errors in tkinter (`_tkinter.TclError: wrong # coordinates: expected at least 4, got 2` in this case)
```py
import tkinter as tk
coords = [[100, 100], [300, 300]]
root = tk.Tk()
canvas = tk.Canvas(width=400,
height=400,
background="bisque")
canvas.pack(fill="both", expand=True)
line = canvas.create_line(coords)
coords[1] = [200, 200]
canvas.coords(line, coords) # line with error
root.mainloop()
```
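As a workaround, callers can flatten the nested list themselves before calling `coords`; a hypothetical helper:

```python
def flatten_coords(coords):
    # Expand one level of nesting:
    # [[100, 100], [300, 300]] -> [100, 100, 300, 300]
    flat = []
    for item in coords:
        if isinstance(item, (list, tuple)):
            flat.extend(item)
        else:
            flat.append(item)
    return flat

# canvas.coords(line, *flatten_coords(coords)) then works as expected.
```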
**Your environment**
- CPython versions tested on: 3.10.4
- Operating system: Windows 10
<!-- gh-linked-prs -->
### Linked PRs
* gh-98479
<!-- /gh-linked-prs -->
| 9bc80dac47f6d43d0bbfbf10c4cc3848b175e97f | 6fba0314765d7c58a33e9859d9cd5bcc35d2fd0a |
python/cpython | python__cpython-94468 | # ProcessPoolExecutor shutdown hangs after future cancel was requested
**Bug report**
With a ProcessPoolExecutor, after submitting and quickly canceling a future, a call to `shutdown(wait=True)` would hang indefinitely.
This happens pretty much on all platforms and all recent Python versions.
Here is a minimal reproduction:
```py
import concurrent.futures
ppe = concurrent.futures.ProcessPoolExecutor(1)
ppe.submit(int).result()
ppe.submit(int).cancel()
ppe.shutdown(wait=True)
```
The first submission gets the executor going and creates its internal `queue_management_thread`.
The second submission appears to get that thread to loop, enter a wait state, and never receive a wakeup event.
Introducing a tiny sleep between the second submit and its cancel request makes the issue disappear. From my initial observation it looks like something in the way the `queue_management_worker` internal loop is structured doesn't handle this edge case well.
Shutting down with `wait=False` would return immediately as expected, but the `queue_management_thread` would then die with an unhandled `OSError: handle is closed` exception.
**Environment**
* Discovered on macOS-12.2.1 with cpython 3.8.5.
* Reproduced in Ubuntu and Windows (x64) as well, and in cpython versions 3.7 to 3.11.0-beta.3.
* Reproduced in pypy3.8 as well, but not consistently. Seen for example in Ubuntu with Python 3.8.13 (PyPy 7.3.9).
**Additional info**
When tested with `pytest-timeout` under Ubuntu and cpython 3.8.13, these are the tracebacks at the moment of timing out:
<details>
```pytb
_____________________________________ test _____________________________________
@pytest.mark.timeout(10)
def test():
ppe = concurrent.futures.ProcessPoolExecutor(1)
ppe.submit(int).result()
ppe.submit(int).cancel()
> ppe.shutdown(wait=True)
test_reproduce_python_bug.py:14:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
/opt/hostedtoolcache/Python/3.8.13/x64/lib/python3.8/concurrent/futures/process.py:686: in shutdown
self._queue_management_thread.join()
/opt/hostedtoolcache/Python/3.8.13/x64/lib/python3.8/threading.py:1011: in join
self._wait_for_tstate_lock()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <Thread(QueueManagerThread, started daemon 140003176535808)>
block = True, timeout = -1
def _wait_for_tstate_lock(self, block=True, timeout=-1):
# Issue #18808: wait for the thread state to be gone.
# At the end of the thread's life, after all knowledge of the thread
# is removed from C data structures, C code releases our _tstate_lock.
# This method passes its arguments to _tstate_lock.acquire().
# If the lock is acquired, the C code is done, and self._stop() is
# called. That sets ._is_stopped to True, and ._tstate_lock to None.
lock = self._tstate_lock
if lock is None: # already determined that the C code is done
assert self._is_stopped
> elif lock.acquire(block, timeout):
E Failed: Timeout >10.0s
/opt/hostedtoolcache/Python/3.8.13/x64/lib/python3.8/threading.py:1027: Failed
----------------------------- Captured stderr call -----------------------------
+++++++++++++++++++++++++++++++++++ Timeout ++++++++++++++++++++++++++++++++++++
~~~~~~~~~~~~~~~~~ Stack of QueueFeederThread (140003159754496) ~~~~~~~~~~~~~~~~~
File "/opt/hostedtoolcache/Python/3.8.13/x64/lib/python3.8/threading.py", line 890, in _bootstrap
self._bootstrap_inner()
File "/opt/hostedtoolcache/Python/3.8.13/x64/lib/python3.8/threading.py", line 932, in _bootstrap_inner
self.run()
File "/opt/hostedtoolcache/Python/3.8.13/x64/lib/python3.8/threading.py", line 870, in run
self._target(*self._args, **self._kwargs)
File "/opt/hostedtoolcache/Python/3.8.13/x64/lib/python3.8/multiprocessing/queues.py", line 227, in _feed
nwait()
File "/opt/hostedtoolcache/Python/3.8.13/x64/lib/python3.8/threading.py", line 302, in wait
waiter.acquire()
~~~~~~~~~~~~~~~~ Stack of QueueManagerThread (140003176535808) ~~~~~~~~~~~~~~~~~
File "/opt/hostedtoolcache/Python/3.8.13/x64/lib/python3.8/threading.py", line 890, in _bootstrap
self._bootstrap_inner()
File "/opt/hostedtoolcache/Python/3.8.13/x64/lib/python3.8/threading.py", line 932, in _bootstrap_inner
self.run()
File "/opt/hostedtoolcache/Python/3.8.13/x64/lib/python3.8/threading.py", line 870, in run
self._target(*self._args, **self._kwargs)
File "/opt/hostedtoolcache/Python/3.8.13/x64/lib/python3.8/concurrent/futures/process.py", line 362, in _queue_management_worker
ready = mp.connection.wait(readers + worker_sentinels)
File "/opt/hostedtoolcache/Python/3.8.13/x64/lib/python3.8/multiprocessing/connection.py", line 931, in wait
ready = selector.select(timeout)
File "/opt/hostedtoolcache/Python/3.8.13/x64/lib/python3.8/selectors.py", line 415, in select
fd_event_list = self._selector.poll(timeout)
+++++++++++++++++++++++++++++++++++ Timeout ++++++++++++++++++++++++++++++++++++
```
</details>
Tracebacks in PyPy are similar on the `concurrent.futures.process` level. Tracebacks in Windows are different in the lower-level areas, but again similar on the `concurrent.futures.process` level.
Linked PRs:
- #94468
<!-- gh-linked-prs -->
### Linked PRs
* gh-94468
* gh-102746
* gh-102747
<!-- /gh-linked-prs -->
| 2dc94634b50f0e5e207787e5ac1d56c68b22c3ae | a44553ea9f7745a1119148082edb1fb0372ac0e2 |
python/cpython | python__cpython-111237 | # frame.setlineno has serious flaws.
The `frame_setlineno` function works in stages:
* Determine a set of possible bytecode offsets as targets from the line number.
* Compute the stack state for these targets and the current position
* Determine a best target. That is, the first one that has a compatible stack.
* Pop values from the stack and jump.
The first step is faulty (I think; I haven't demonstrated this) as it might be possible to jump to an instruction involved in frame creation. This should be easy to fix using the new `_co_firsttraceable` field.
The second step has (at least) three flaws:
- [x] It does not account for `NULL`s on the stack, making it possible to jump from a stack with `NULL`s to one that cannot handle `NULL`s.
- [x] It does not skip over caches, so could produce incorrect stacks by misinterpreting cache entries as normal instructions.
- [x] It is out of date. For example it thinks that `PUSH_EXC_INFO` pushes three values. It only pushes one.
Setting the line number of a frame is only possible in the debugger, so this isn't as terrible as might appear, but it definitely needs fixing.
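For reference, the only supported way to exercise this code path is assigning `frame.f_lineno` from a trace function, which is what debuggers do for a "jump" command. A minimal sketch:

```python
import sys

def target():
    skipped = False
    skipped = True   # the tracer jumps over this line
    return skipped

SKIP = target.__code__.co_firstlineno + 2  # line of `skipped = True`

def tracer(frame, event, arg):
    if (event == "line" and frame.f_code is target.__code__
            and frame.f_lineno == SKIP):
        frame.f_lineno = SKIP + 1  # jump straight to the return statement
    return tracer

sys.settrace(tracer)
result = target()
sys.settrace(None)
assert result is False  # the assignment on the skipped line never ran
```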
<!-- gh-linked-prs -->
### Linked PRs
* gh-111237
* gh-111243
* gh-111338
* gh-111341
* gh-111369
<!-- /gh-linked-prs -->
| 6640f1d8d2462ca0877e1d2789e1721767e9caf2 | 4fbf20605bd70fb96711c92a0ce3309291ffd6fb |
python/cpython | python__cpython-112196 | # Deprecate typing.Hashable/Sized
[`typing.Hashable`](https://docs.python.org/3/library/typing.html#typing.Hashable) and [`typing.Sized`](https://docs.python.org/3/library/typing.html#typing.Sized) are aliases to their equivalents in the [`collections.abc module`](https://docs.python.org/3/library/collections.abc.html#collections.abc.Hashable); [PEP 585](https://peps.python.org/pep-0585/) deprecated all aliases like these while aiming to remove the duplication between the two modules, but the aforementioned two seem to have been left out of that because they're not generic.
If the others are deprecated, I don't think it makes sense to keep them when they're just aliases that provide no additional functionality.
<!-- gh-linked-prs -->
### Linked PRs
* gh-112196
* gh-112200
<!-- /gh-linked-prs -->
| fb4cddb0cc6c9b94929f846da8e95aeec3849212 | 0ee2d77331f2362fcaab20cc678530b18e467e3c |
python/cpython | python__cpython-95318 | # Documentation: Error in datetime.datetime.strptime()
**Documentation**
The documentation for [datetime.datetime.strptime](https://docs.python.org/3/library/datetime.html#datetime.datetime.strptime) is incorrect:
https://github.com/python/cpython/blob/bb8b931385ba9df4e01f7dd3ce4575d49f60efdf/Doc/library/datetime.rst#L1048-L1050
This is incorrect if `format` contains microseconds or timezone information. Counterexample:
```python
>>> timestr = '20200304050607.554321'
>>> print(datetime.strptime(timestr, '%Y%m%d%H%M%S.%f'))
2020-03-04 05:06:07.554321
>>> print(datetime(*(time.strptime(timestr, '%Y%m%d%H%M%S.%f')[0:6])))
2020-03-04 05:06:07
```
I suggest removing the cited part entirely, since I see no easy way of correcting these lines, especially concerning timezones.
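The truncation follows from `time.struct_time` itself: it has no sub-second field, so even though `time.strptime` accepts `%f`, the fractional part is discarded before the tuple is built. A quick check:

```python
import time

st = time.strptime('20200304050607.554321', '%Y%m%d%H%M%S.%f')
# struct_time stops at whole seconds; the microseconds are gone
assert st.tm_sec == 7
assert not hasattr(st, 'tm_usec')
```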
<!-- gh-linked-prs -->
### Linked PRs
* gh-95318
* gh-103785
<!-- /gh-linked-prs -->
| 5b404d6cad2bf53295fdf96305f95efe1ea0174e | ed948e01bb68e3f026f38a7e43241d850ee1bfb5 |
python/cpython | python__cpython-114152 | # fnmatch module parameter names are not correct
**Documentation**
fnmatch module parameter names are not correct
```python
>>> import fnmatch
>>> fnmatch.fnmatch(filename='file.txt', pattern='*.txt')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: fnmatch() got an unexpected keyword argument 'filename'
```
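For reference, the actual parameter names in `Lib/fnmatch.py` are `name` and `pat` (which is what the linked PRs align the documentation with), so keyword calls work once the right names are used:

```python
import fnmatch

assert fnmatch.fnmatch('file.txt', '*.txt')           # positional always works
assert fnmatch.fnmatch(name='file.txt', pat='*.txt')  # the real keyword names
```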
<!-- gh-linked-prs -->
### Linked PRs
* gh-114152
* gh-114155
* gh-114156
<!-- /gh-linked-prs -->
| 6e84f3b56f445b56ab48723d636c0a17090298ab | 7092b3f1319269accf4c02f08256d51f111b9ca3 |
python/cpython | python__cpython-134712 | # Trailing spaces at the end of file or directory names are not handled when extracting zip files on Windows.
**Bug report**
Trailing spaces at the end of file or directory names are not handled when extracting zip files on Windows.
`FileNotFoundError: [Errno 2] No such file or directory: 'C:\\Documents \\test.txt'`
Can be tested with this [Documents.zip](https://github.com/python/cpython/files/8939776/Documents.zip)
and this piece of code:
```
from zipfile import ZipFile
with ZipFile('Documents.zip', 'r') as zip:
zip.extractall()
```
Fix proposal
cpython/Lib/zipfile.py : 1690
```
# remove trailing spaces
def remove_end_spaces(x):
    for c in x[::-1]:
        if c == ' ':
            x = x[:-1]
        else:
            return x
    return x  # name consisted only of spaces

arcname = (remove_end_spaces(x) for x in arcname)
```
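An equivalent, more compact sketch of the same sanitisation (the helper name is illustrative, not from the codebase), stripping trailing spaces from every path component so nested directories are handled too:

```python
def strip_trailing_spaces(arcname):
    # Windows cannot create names ending in spaces, so trim each component
    return '/'.join(part.rstrip(' ') for part in arcname.split('/'))

assert strip_trailing_spaces('Documents /test.txt') == 'Documents/test.txt'
```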
**Your environment**
- CPython versions tested on: python 3.9
- Operating system and architecture: Windows 10 Professionnel 21H2 19044.1706
<!-- gh-linked-prs -->
### Linked PRs
* gh-134712
<!-- /gh-linked-prs -->
| 965662ee4a986605b60da470d9e7c1e9a6f922b3 | 9eb84d83e00070cec3cfe78f1d0c7a7a0fbef30f |
python/cpython | python__cpython-119720 | # Officially deprecate and remove abcs in importlib.abc moved to importlib.resources.
In #90276, I grouped the functionality related to `importlib.resources` into its own package (creating clearer separation of responsibility from `importlib.*`). That included moving some abstract base classes from `importlib.abc` to `importlib.resources.abc`. We need to officially deprecate the presence in `importlib.abc` and then remove them in a future release.
- [x] deprecation: https://github.com/python/cpython/pull/93965
- [x] documentation: #94546
- [x] cleanup: #95217, #96598
- [ ] ~removal: https://github.com/python/cpython/pull/94528~
- [x] removal: https://github.com/python/cpython/pull/119720
_Originally posted by @jaraco in https://github.com/python/cpython/issues/93610#issuecomment-1157848163_
<!-- gh-linked-prs -->
### Linked PRs
* gh-119720
<!-- /gh-linked-prs -->
| 0751511d24295c39fdf2f5b2255e3fa3d796ce4d | c8b45a385ab85c3f3463b1e7394c515e892dfba7 |
python/cpython | python__cpython-99695 | # [C API] Move PyFrame_* API to <Python.h>
Currently, getter functions of a Python frame object (PyFrameObject) are only accessible if the ``frameobject.h`` header is included explicitly. It's not documented in the frame doc: https://docs.python.org/dev/c-api/frame.html
In Python 3.11, the PyFrameObject structure was moved to the internal C API. Third party C extensions now must only use getter functions, as explained in What's New in Python 3.11: https://docs.python.org/dev/whatsnew/3.11.html#id6
Problem: functions like PyFrame_GetBack() require including ``frameobject.h``. I propose moving these getter functions to ``Python.h`` (to ``pyframe.h`` in practice) to make them less special.
<!-- gh-linked-prs -->
### Linked PRs
* gh-99695
* gh-99697
<!-- /gh-linked-prs -->
| d15b9f19ac0ffb29b646735d69b29f48a71c247f | 995f6170c78570eca818f7e7dbd8a7661c171a81 |
python/cpython | python__cpython-103236 | # Change in semantics and much worse performance for enum members.
Given the enum:
```Python
from enum import Enum

class Colours(Enum):
    RED = 1
```
In Python 3.9 and 3.10:
```Python
>>> Colours.__dict__["RED"] is Colours.RED
True
>>> Colours.RED.RED
<Colours.RED: 1>
```
In Python 3.11:
```Python
>>> Colours.__dict__["RED"] is Colours.RED
False
>>> Colours.RED.RED
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/mark/repos/cpython/Lib/enum.py", line 198, in __get__
raise AttributeError(
^^^^^^^^^^^^^^^^^^^^^
AttributeError: <enum 'Colours'> member has no attribute 'RED'
```
While these might seem like minor semantic changes, there is also a large performance impact.
Lookup of `Colours.RED` is simple and efficient in 3.10, but involves a lot of indirection and dispatching through the `enum.property` class in 3.11.
The performance impact is likely to get worse in 3.12, as we optimize more kinds of attributes.
Introduced in https://github.com/python/cpython/commit/c314e60388282d9829762fb6c30b12e2807caa19, I believe.
@pablogsal
@ethanfurman
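Whatever the internals do on a given version, the documented lookup forms behave consistently, which is useful when checking for this regression:

```python
from enum import Enum

class Colours(Enum):
    RED = 1

assert Colours["RED"] is Colours.RED   # name lookup is stable across versions
assert Colours.RED.value == 1
```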
<!-- gh-linked-prs -->
### Linked PRs
* gh-103236
* gh-103299
<!-- /gh-linked-prs -->
| 4ec8dd10bd4682793559c4eccbcf6ae00688c4c3 | b4978ff872be5102117b4e25d93dbbb4e04c8292 |
python/cpython | python__cpython-132862 | # pdb cannot find source code for frozen stdlib modules
**Bug report**
Context: this issue came out of a discussion in #python on the Libera.chat IRC network, where a user wanted to peek into `importlib._bootstrap` with `pdb` while chasing a bug.
`pdb` is capable of stepping into function calls in frozen modules, but the `list` command cannot locate the source code needed to display the lines being stepped through.
```python
# repro.py
import importlib._bootstrap
# some function call that we want to step into with pdb
importlib._bootstrap._resolve_name("os", ".", 1)
```
```
$ python3 -m pdb repro.py
> /home/snoopjedi/repro.py(2)<module>()
-> import importlib._bootstrap
(Pdb) n
> /home/snoopjedi/repro.py(5)<module>()
-> importlib._bootstrap._resolve_name("os", ".", 1)
(Pdb) s
--Call--
> <frozen importlib._bootstrap>(883)_resolve_name()
(Pdb) l
[EOF]
```
Note that executing `source importlib._bootstrap` from the frame that calls into this module _does_ successfully locate the source, but this isn't very useful to a user of `pdb`.
I believe that bringing the frame's `co_filename` into agreement with the module's `__file__` would fix this issue without changes to `pdb` (see #89815), but thought it would be good to track the issue with `pdb` in a separate ticket since that fix is more nuanced and I think I have an interim patch for the debugger.
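The mismatch is easy to observe directly; on a hedged assumption about your build (frozen stdlib modules enabled, the default), the code object still reports the frozen tag while the module object may know its real path:

```python
import importlib._bootstrap as bootstrap

code = bootstrap._resolve_name.__code__
print(code.co_filename)                      # e.g. "<frozen importlib._bootstrap>"
print(getattr(bootstrap, "__file__", None))  # may be the real _bootstrap.py path
```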
**Your environment**
- CPython versions tested on: 3.9.4, 3.11.0b3
- Operating system and architecture: Ubuntu 20.04, x86_64
<!-- gh-linked-prs -->
### Linked PRs
* gh-132862
<!-- /gh-linked-prs -->
| eef49c359505eaf109d519d39e53dfd3c78d066a | 0a387b311e617a9a614c593551d3c04a37331e53 |
python/cpython | python__cpython-120125 | # Bytecode positions seem way too broad
(Note that `dis` currently has a [bug](https://github.com/python/cpython/pull/93663) in displaying accurate location info in the presence of `CACHE`s. The correct information can be observed by working with `co_positions` directly or using the code from that PR.)
While developing `specialist`, I realized that there are lots of common code patterns that produce bytecode with unexpectedly large source ranges. In addition to being unhelpful for both friendly tracebacks (the original motivation) and things like bytecode introspection, I suspect these huge ranges may also be bloating the size of our internal position tables.
Consider the following function:
```py
def analyze(path):                          # 1
    upper = lower = total = 0               # 2
    with open(path) as file:                # 3
        for line in file:                   # 4
            for character in line:          # 5
                if character.isupper():     # 6
                    upper += 1              # 7
                elif character.islower():   # 8
                    lower += 1              # 9
                total += 1                  # 10
    return lower / total, upper / total     # 11

import dis
from pprint import pprint as pp

def pos(p):
    return (p.lineno, p.end_lineno, p.col_offset, p.end_col_offset)

pp([(pos(x.positions), x.opname, x.argval) for x in dis.get_instructions(analyze)])
```
Things that should probably span one line at most:
- The first `GET_ITER`/`FOR_ITER` pair span all of lines 4 through 10.
- The second `GET_ITER`/`FOR_ITER` pair spans all of lines 5 through 10.
- The first `POP_JUMP_FORWARD_IF_FALSE` spans all of lines 6 through 9.
- The second `POP_JUMP_FORWARD_IF_FALSE` spans all of lines 8 through 9.
- Ten instructions for `with` cleanup each span all of lines 3 through 10.
Things that should probably be artificial:
- A `JUMP_FORWARD` spans all of line 7.
- The first `JUMP_BACKWARD` spans all of line 10.
- The second `JUMP_BACKWARD` spans all of lines 5 through 10.
Things I don't get:
- A `NOP` spans all of lines 4 through 10.
As a result, over half of the generated bytecode for this function claims to span line 9, for instance. Also not shown here: the instructions for building functions and classes have similarly huge spans.
I think this can be tightened up in the compiler by:
- Being more aggressive in calling `SET_LOC` on child nodes.
- Being more aggressive in calling `UNSET_LOC` before unconditional jumps.
<!-- gh-linked-prs -->
### Linked PRs
* gh-120125
* gh-120330
* gh-120399
* gh-120405
* gh-123604
* gh-123605
<!-- /gh-linked-prs -->
| eca3f7762c23b22a73a5e0b09520748c88aab4a0 | d68a22e7a68ae09f7db61d5a1a3bd9c0360cf3ee |
python/cpython | python__cpython-119485 | # Conditional backward edges should help "warm up" code
#93229 introduced a regression in how aggressively we quicken some `for` loops. Minimal example:
```py
def f(x: bool) -> None:
    for i in range(1_000_000):
        if x:
            pass
```
`f(True)` will quicken this code, but `f(False)` will not, even though both contain the same number of back edges. The issue is that we only quicken on *unconditional* backwards jumps, not on conditional ones.
We've known about this limitation for some time, in particular with regard to `while` loops. Since we check the loop condition at the bottom of `while` loops, one call is not enough to quicken `w`:
```py
def w() -> None:
    i = 0
    while i < 1_000_000:
        i += 1
```
@markshannon has expressed a preference for having all branches be forward (i.e. replacing backward `POP_JUMP_IF_FALSE(x)` instructions with `POP_JUMP_FORWARD_IF_TRUE(1); JUMP_BACKWARD(x)` in the assembler). @iritkatriel believes that this shouldn't be too difficult, based on recent assembler rewrites.
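The loop shape is easy to inspect with `dis`; on current versions the condition test sits at the bottom of the loop with a conditional jump, so a sketch like this surfaces the backward edges involved:

```python
import dis

def w():
    i = 0
    while i < 3:
        i += 1

opnames = [ins.opname for ins in dis.get_instructions(w)]
# Expect some backward-jump opcode (exact names vary by version)
assert any("JUMP" in name for name in opnames)
```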
CC @sweeneyde
<!-- gh-linked-prs -->
### Linked PRs
* gh-119485
<!-- /gh-linked-prs -->
| 016a46ab572fc681e4a06760147b9ae311b21881 | 18c1a8d3a81bf8d287a06f2985bbf65c9a9b9794 |
python/cpython | python__cpython-98440 | # Finish deprecation in asyncio.get_event_loop()
Since 3.10 `asyncio.get_event_loop()` emits a deprecation warning if used outside of the event loop (see #83710). It is time to turn the warning into an error and make `asyncio.get_event_loop()` an alias of `asyncio.get_running_loop()`.
But maybe we should first deprecate `set_event_loop()`? It will be a no-op now.
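Inside a running event loop the two functions already agree, which is what makes the aliasing plausible; a quick sketch:

```python
import asyncio

async def main():
    # get_event_loop() returns the running loop when one exists
    return asyncio.get_running_loop() is asyncio.get_event_loop()

assert asyncio.run(main()) is True
```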
<!-- gh-linked-prs -->
### Linked PRs
* gh-98440
* gh-99949
* gh-100059
<!-- /gh-linked-prs -->
| fd38a2f0ec03b4eec5e3cfd41241d198b1ee555a | b72014c783e5698beb18ee1249597e510b8bcb5a |
python/cpython | python__cpython-93776 | # Running the Python test suite leaks .pem files in /tmp
Running the Python test suite has two issues:
* A test leaks 3 .pem files in /tmp
* The test suite is not marked as failed when it leaks temporary files (in /tmp)
Moreover, test_tools enters an infinite loop and fills the $TMPDIR directory if $TMPDIR is a sub-directory of the Python source code directory. Example:
* /home/vstinner/python/main/ : Python source code
* /home/vstinner/python/main/TMP/ : Temporary directory ($TMPDIR)
Running test_freeze_simple_script() of test_tools copies TMP/ into TMP/TMP/ and then into TMP/TMP/TMP/, etc. Quickly, it fills TMP/ with a "loop" of files :-)
<!-- gh-linked-prs -->
### Linked PRs
* gh-93776
<!-- /gh-linked-prs -->
| de1428f8c234a8731ced99cbfe5cd6c5c719e31d | 49258efada0cb0fc58ccffc018ff310b8f7f4570 |
python/cpython | python__cpython-127593 | # Expose PIDFD_NONBLOCK as os.PIDFD_NONBLOCK
**Feature or enhancement**
Expose `PIDFD_NONBLOCK` as `os.PIDFD_NONBLOCK` to be used with `os.waitid`.
**Pitch**
`pidfd_open` returns a nonblocking file descriptor if `PIDFD_NONBLOCK` is supplied as flag. If the process
referred to by the file descriptor has not yet terminated,
then an attempt to wait on the file descriptor using
[waitid(2)](https://man7.org/linux/man-pages/man2/waitid.2.html) will immediately return the error EAGAIN rather
than blocking.
<!-- gh-linked-prs -->
### Linked PRs
* gh-127593
* gh-127630
* gh-127631
<!-- /gh-linked-prs -->
| fcbe6ecdb6ed4dd93b2ee144f89a73af755e2634 | 1ef6e8ca3faf2c2b008fb170c7c44c38b86e874a |
python/cpython | python__cpython-93224 | # Multiple TimedRotatingFileHandler with similar names but different backup counts do not work
**Bug report**
When setting up multiple logging handlers of type `TimedRotatingFileHandler` that log to the same directory and only differ by the file extension, rollover does not work properly for either of them.
Consider this configuration:
```python
root = logging.getLogger()
root.addHandler(TimedRotatingFileHandler("test.log", when="S", backupCount=2, delay=True))
root.addHandler(TimedRotatingFileHandler("test.log.json", when="S", backupCount=1, delay=True))
```
running this for several seconds should cause the logging directory to contain 5 files (time stamps will obviously vary based on when you run it):
```
test.log
test.log.2022-05-25_05-19-19
test.log.2022-05-25_05-19-18
test.log.json
test.log.json.2022-05-25_05-19-19
```
However, the second handler deletes files that should only match the first handler, so it ends up not deleting its own files:
```
test.log
test.log.2022-05-25_05-19-19
test.log.json
test.log.json.2022-05-25_05-19-17
test.log.json.2022-05-25_05-19-18
test.log.json.2022-05-25_05-19-19
```
Digging through code this seems to be caused by the change in bpo-44753 aka #88916, reverting [this change](https://github.com/python/cpython/commit/882e4761c63ae76d994b57bbcd7e5adbf2aa7b4f) solves the issue for me.
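A simplified sketch of the over-matching (assuming the prefix-based backup filtering that the referenced change introduced): the first handler's prefix also matches every file belonging to the `.json` handler, so deletions hit the wrong files.

```python
prefix = "test.log."   # prefix used when rotating test.log
candidates = [
    "test.log.2022-05-25_05-19-18",       # belongs to this handler
    "test.log.json",                      # belongs to the other handler
    "test.log.json.2022-05-25_05-19-19",  # belongs to the other handler
]
matched = [name for name in candidates if name.startswith(prefix)]
assert matched == candidates   # all three match the prefix filter
```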
Here's the full code to reproduce the issue:
```python
import logging
import datetime
from logging.handlers import TimedRotatingFileHandler
from tempfile import TemporaryDirectory
from pathlib import Path
from time import sleep
with TemporaryDirectory() as td:
    tmp_path = Path(td)
    filename = "test.log"
    filename_json = f"(unknown).json"
    logfile = tmp_path / filename
    logfile_json = tmp_path / filename_json
    h1 = TimedRotatingFileHandler(logfile, when="S", backupCount=2, delay=True)
    h2 = TimedRotatingFileHandler(logfile_json, when="S", backupCount=1, delay=True)
    times = []
    for log_str in ("hi1", "hi2", "hi3", "hi4"):
        times.append(datetime.datetime.now().strftime("%Y-%m-%d_%H-%M-%S"))
        for h in (h1, h2):
            h.emit(logging.LogRecord("name", logging.INFO, "path", 1, log_str, (), None))
        sleep(1)
    assert logfile.is_file()
    actual = set(f.name for f in tmp_path.iterdir())
    expected = {
        "test.log",
        f"test.log.{times[-3]}",
        f"test.log.{times[-2]}",
        "test.log.json",
        f"test.log.json.{times[-2]}",
    }
    assert actual == expected, (
        f"\n\texpected:\t{','.join(expected)}" f"\n\tactual:\t\t{','.join(actual)}"
    )
    assert logfile.read_text() == "hi4\n"
    assert logfile_json.read_text() == "hi4\n"
    assert (tmp_path / f"(unknown).{times[-3]}").read_text() == "hi2\n"
    assert (tmp_path / f"{filename_json}.{times[-2]}").read_text() == "hi3\n"
```
**Your environment**
Tested this with Python 3.9.7 and 3.10.4, and with the current development cpython code.
Linux 3.10.0-957.27.2.el7.x86_64 #1 SMP Mon Jul 29 17:46:05 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-93224
* gh-115784
* gh-115785
<!-- /gh-linked-prs -->
| 113687a8381d6dde179aeede607bcbca5c09d182 | 347acded845d07b70232cade8d576b6f0aaeb473 |
python/cpython | python__cpython-100512 | # Document the thread safety of `functools.lru_cache`
**Documentation**
The documentation doesn't state whether `functools.lru_cache` (and decorators derived from it like `functools.cache`) are thread safe, meaning that it's not clear that it's safe to (for example) populate the cache by running the function in multiple threads. The comments in the source code certainly imply that it's intended to be thread safe, but I'll admit I haven't double checked all the details.
Can the thread safety be made explicit in the docs?
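For context, a sketch of the pattern in question: several threads populating the same cached function. The cache's internal state stays consistent, though the wrapped function may be invoked more than once under contention.

```python
import threading
from functools import lru_cache

@lru_cache(maxsize=None)
def square(x):
    return x * x

threads = [threading.Thread(target=square, args=(7,)) for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()

assert square(7) == 49
assert square.cache_info().hits >= 1  # at least the final call is a hit
```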
<!-- gh-linked-prs -->
### Linked PRs
* gh-100512
<!-- /gh-linked-prs -->
| 2b3feec58f82fee5a8f74ef78e7061bfb73f09a2 | 3fdd43ef3507d0db65cff4dd8252e35f7b81988f |
python/cpython | python__cpython-117000 | # Enable test_cppext on Windows
Once source build virtual environments are properly detected, this test can successfully run on Windows.
Depends on #92897.
<!-- gh-linked-prs -->
### Linked PRs
* gh-117000
<!-- /gh-linked-prs -->
| a114d08a8912a50530ab3f19842c6ba73b0d1017 | 27cf3ed00cfe942f4277c273a3dda8ee2ba61fc8 |
python/cpython | python__cpython-129102 | # Ensure venv works with source builds when using --copies
Currently a venv created with `--copies` or on Windows doesn't respect the `sysconfig.is_python_build()` property of the creating interpreter. This leads to compilation errors when attempting to build extensions in the virtual environment.
<!-- gh-linked-prs -->
### Linked PRs
* gh-129102
* gh-130583
* gh-130585
* gh-130586
* gh-133815
<!-- /gh-linked-prs -->
| e52ab564da84540a7544cb829f262b0154efccae | 5ed5572cac7ef204767ddf8e8888e15672ba558e |
python/cpython | python__cpython-99529 | # ARM64 macOS variadic arguments not passed properly in ctypes
**Bug report**
Using ctypes with variadic functions on ARM64 macOS machines seems to improperly pass the arguments, leading to truncation.
Minimal repro:
```py
>>> import ctypes
>>> from ctypes import util
>>> libc = ctypes.CDLL(util.find_library("c"))
>>> libc.printf(b"hello %d world\n", 128_000_000)
hello 0 world
14
```
This happens whether or not the value is explicitly cast to `ctypes.c_int`.
```py
>>> libc.printf(b"hello %ld world\n", ctypes.c_int(1))
hello 0 world
14
```
On my regular machine (in this case an x64 Windows machine) it works as expected:
```py
>>> import ctypes
>>> libc = ctypes.cdll.msvcrt
>>> libc.printf(b"hello %d world\n", 128_000_000)
hello 128000000 world
22
```
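As far as I understand the linked fix, ctypes now treats arguments beyond the declared `argtypes` as variadic, so declaring only the fixed parameters is the portable pattern (a sketch; `printf` output is not captured here since it writes to the C-level stdout):

```python
import ctypes
import ctypes.util

libc = ctypes.CDLL(ctypes.util.find_library("c"))
# Declare only the fixed parameters; on arm64 the remaining (variadic)
# arguments can then be passed on the stack as the ABI requires.
libc.printf.argtypes = [ctypes.c_char_p]
libc.printf.restype = ctypes.c_int
```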
**Your environment**
I do not personally have a macOS machine, but I got a few others who did have a machine test for me. Their versions were as follows:
Machine 1:
`Python 3.10.1, macOS 12.3.1 (21E258)`
Machine 2:
```
Python 3.9.13 (v3.9.13:6de2ca5339, May 17 2022, 11:37:23)
[Clang 13.0.0 (clang-1300.0.29.30)] on darwin
macOS-12.2.1-arm64-arm-64bit
```
Machine 3:
```
~ % python3 --version
Python 3.9.10
~ % sw_vers
ProductName: macOS
ProductVersion: 12.3.1
BuildVersion: 21E258
```
- CPython versions tested on: 3.9, 3.10
- Operating system and architecture: ARM64 Apple Silicon macOS
<!-- gh-linked-prs -->
### Linked PRs
* gh-99529
* gh-99681
* gh-99682
<!-- /gh-linked-prs -->
| bc3a11d21ddef28047b18c0f6a5068fa9fb16da2 | 959ba45d75953caa911e16b4c2a277978fc4b9b0 |
python/cpython | python__cpython-101055 | # No documentation for ast.Module class and others.
**Documentation**
The [documentation on the ast module](https://docs.python.org/3.10/library/ast.html) does not mention the `Module, Interactive, Expression, or FunctionType ` class. They should be added.
I haven't checked but I think these are the only module members that are not documented. You might want to check for any other public members (helper functions or node classes) that are not documented.
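For reference, these four classes are exactly the root node types produced by the four `ast.parse` modes, which is probably the clearest framing for the documentation:

```python
import ast

assert isinstance(ast.parse("x = 1"), ast.Module)                  # mode="exec"
assert isinstance(ast.parse("x + 1", mode="eval"), ast.Expression)
assert isinstance(ast.parse("x", mode="single"), ast.Interactive)
assert isinstance(ast.parse("(int) -> str", mode="func_type"), ast.FunctionType)
```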
<!-- gh-linked-prs -->
### Linked PRs
* gh-101055
* gh-106138
* gh-106139
<!-- /gh-linked-prs -->
| 33608fd67df8b1033519f808441ee00289e2dac0 | a8210b6df1ed2793c484b3380182ba56c4254a4e |
python/cpython | python__cpython-95596 | # MRO: Behavior change from 3.10 to 3.11: multiple inheritance issue with C-API (pybind11)
**Bug report**
* We have started testing the 3.11 beta branch in pybind11 and have found an issue with a class constructed using the C API that inherits from two base classes that have different `tp_getset` implementations. Interestingly, this issue occurs when the first base does not have [dynamic attributes enabled](https://github.com/pybind/pybind11/blob/5f9b090a915fbaa5904c6bd4a24f888f2ff55db7/include/pybind11/detail/class.h#L543), but another base class does. I tried making the child class also have dynamic attributes (and therefore a larger size to store the dict etc.), but `PyType_Ready` still fails. We've been encountering this issue since alpha 3, but I have not found any similar reports on BPO. I am wondering what changed and how we can fix our API usage to allow full support of Python 3.11 as it enters beta.
* I suspect it's something very subtle in how we are constructing our Python types, but there was nothing in the 3.11 migration guide that flags this issue. Any thoughts on how to fix this issue? Is this known behavior or a bug? Or is it something that should be added to the migration guide?
* @vstinner I know you are very familiar with the C-API and helped us deal with some of the other API changes, any thoughts?
Here is the failing test: https://github.com/pybind/pybind11/pull/3923
* **Your environment**
- CPython versions tested on: 3.11
- Operating system and architecture: Ubuntu-latest
<!-- gh-linked-prs -->
### Linked PRs
* gh-95596
<!-- /gh-linked-prs -->
| 906e4509328917fe9951f85457897f6a841e0529 | 89f52293281b6efc4ef666ef25e677683821f4b9 |
python/cpython | python__cpython-105267 | # socket.connect - support custom family value
**Feature or enhancement**
Currently `socket.connect()` accepts a wide range of values when connecting to a socket with the code in `socketmodule.c` transforming it based on the family the socket was created with. Unfortunately there is a check that fails the connection if the family is unrecognized https://github.com/python/cpython/blob/9d85aba9e245c1a0f6d1879f8bc6c260cb4eb721/Modules/socketmodule.c#L2530-L2534.
My proposal is to allow a caller to bypass this check if passing in a raw byte value to be used as the addr info on the native call.
**Pitch**
The reason why I am hoping for this feature is to support clients connecting to a Hyper-V socket. This uses the `AF_HYPERV` family which isn't known to Python so any attempts to connect to it won't work.
Currently I am using the following to work around this restriction by using ctypes to call the C API directly and using the fileno:
```python
import ctypes
import socket
import uuid

HV_GUID_VM_SESSION_SERVICE_ID = uuid.UUID("999e53d4-3d5c-4c3e-8779-bed06ec056e1")
HV_GUID_VM_SESSION_SERVICE_ID_2 = uuid.UUID("a5201c21-2770-4c11-a68e-f182edb29220")
AF_HYPERV = 34
HV_PROTOCOL_RAW = 1

# This is the GUID of the Win VM to connect to
vm_id = uuid.UUID("...")

win32sock = ctypes.WinDLL("Ws2_32.dll")
raw_sock = win32sock.socket(AF_HYPERV, socket.SOCK_STREAM, HV_PROTOCOL_RAW)
if raw_sock == -1:
    err = win32sock.WSAGetLastError()
    raise ctypes.WinError(code=err)

try:
    sock_addr = b"\x22\x00\x00\x00" + vm_id.bytes_le + HV_GUID_VM_SESSION_SERVICE_ID.bytes_le
    res = win32sock.connect(raw_sock, sock_addr, len(sock_addr))
    if res:
        err = win32sock.WSAGetLastError()
        raise ctypes.WinError(code=err)
    sock = socket.socket(fileno=raw_sock)
except:
    win32sock.closesocket(raw_sock)
    raise

...
sock.close()
```
It would be good to be able to do this instead
```python
import socket
sock = socket.socket(AF_HYPERV, socket.SOCK_STREAM, HV_PROTOCOL_RAW)
sock_addr = b"\x22\x00\x00\x00" + vm_id.bytes_le + HV_GUID_VM_SESSION_SERVICE_ID.bytes_le
sock.connect(sock_addr)
...
sock.close()
```
Currently that fails due to the hardcoded check against unknown families
```python
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
OSError: connect(): bad family
```
Another option is to add support for `AF_HYPERV` on Windows and support a tuple of (vm_id, service_id) and have Python create the struct. This could be done as a separate feature request potentially.
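For reference, the 4-byte prefix in the snippets above is just the packed family and reserved fields of `SOCKADDR_HV`; a sketch of the full layout (pure byte construction, no Windows needed, with a placeholder VM GUID):

```python
import struct
import uuid

AF_HYPERV = 34
vm_id = uuid.UUID(int=0)  # placeholder GUID
service_id = uuid.UUID("999e53d4-3d5c-4c3e-8779-bed06ec056e1")

# SOCKADDR_HV: USHORT Family, USHORT Reserved, GUID VmId, GUID ServiceId
sock_addr = struct.pack("<HH", AF_HYPERV, 0) + vm_id.bytes_le + service_id.bytes_le
assert sock_addr[:4] == b"\x22\x00\x00\x00"
assert len(sock_addr) == 36
```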
<!-- gh-linked-prs -->
### Linked PRs
* gh-105267
* gh-105398
<!-- /gh-linked-prs -->
| 3907de12b57b14f674cdcc80ae64350a23af53a0 | b1a91d26c67250ff7abeb20064e7766096604001 |
python/cpython | python__cpython-101873 | # sqlite3: remove features deprecated in 3.10
The following sqlite3 features were deprecated in Python 3.10, and are scheduled for removal in Python 3.12:
- sqlite3.OptimizedUnicode: #23163
- sqlite3.enable_shared_cache: #24008
3.12 development was just opened, so these features can now be removed.
<!-- gh-linked-prs -->
### Linked PRs
* gh-101873
<!-- /gh-linked-prs -->
| 2db2c4b45501eebef5b3ff89118554bd5eb92ed4 | d9199175c7386a95aaac91822a2197b9365eb0e8 |
python/cpython | python__cpython-105208 | # PEP 623: Remove wstr from Unicode
**Feature or enhancement**
[PEP 623](https://peps.python.org/pep-0623/)
<!-- gh-linked-prs -->
### Linked PRs
* gh-105208
* gh-105210
<!-- /gh-linked-prs -->
| cbb9ba844f15f2b8127028e6dfd4681b2cb2376f | 146939306adcff706ebddb047f7470d148125cdf |
python/cpython | python__cpython-94627 | # Argparse choices should be a sequence
**Documentation**
Instead of saying "any container" is supported, refer only to "sequences".
Technically, a *Container* is only required to support ``__contains__``, which is insufficient for argparse. Also, sets, though accepted, are a bad choice because the order shown in help and usage is non-deterministic. So *Sequence* is the only reasonable choice, because we need sizing and ordered iteration.
<!-- gh-linked-prs -->
### Linked PRs
* gh-94627
* gh-100528
* gh-100529
<!-- /gh-linked-prs -->
| ad3c99e521151680afc65d3f8a7d2167ec1969ad | dbc1e696ebf273bc62545d999eb185d6c9470e71 |
python/cpython | python__cpython-103678 | # argparse.BooleanOptionalAction accepts and silently discards choices, metavar, and type arguments
This is an elaboration of issue #85039.
```python
>>> parser = ArgumentParser()
>>> parser.add_argument('--foo', action=BooleanOptionalAction,
...                     choices=[1,2], metavar='FOOBAR', type=int)  # doctest: +ELLIPSIS
BooleanOptionalAction(...)
```
Note that the store_const, store_true, and store_false actions disallow those keyword arguments.
```python
>>> parser.add_argument('--bar', action='store_true', choices=[1,2])
Traceback (most recent call last):
...
TypeError: __init__() got an unexpected keyword argument 'choices'
>>> parser.add_argument('--bar', action='store_true', metavar='FOOBAR')
Traceback (most recent call last):
...
TypeError: __init__() got an unexpected keyword argument 'metavar'
>>> parser.add_argument('--bar', action='store_true', type=int)
Traceback (most recent call last):
...
TypeError: __init__() got an unexpected keyword argument 'type'
>>> parser.add_argument('--bar', action='store_true')  # doctest: +ELLIPSIS
_StoreTrueAction(...)
```
<!-- gh-linked-prs -->
### Linked PRs
* gh-103678
<!-- /gh-linked-prs -->
| 27a7d5e1cd5b937d5f164fce572d442672f53065 | ac56a854b418d35ad3838f3072604227dc718fca |
python/cpython | python__cpython-99977 | # Performance of attribute lookup for type objects
**Bug report**
The performance of attribute lookup for type objects is worse than for other objects. A benchmark
```python
import pyperf
runner=pyperf.Runner()
setup="""
class Class:
def all(self):
pass
x=Class()
"""
runner.timeit('hasattr x.all', "hasattr(x, 'all')", setup=setup)
runner.timeit('hasattr x.__array_ufunc__', "hasattr(x, '__array_ufunc__')", setup=setup)
runner.timeit('hasattr Class.all', "hasattr(Class, 'all')", setup=setup)
runner.timeit('hasattr Class.__array_ufunc__', "hasattr(Class, '__array_ufunc__')", setup=setup) # worse performance
```
Results:
```
hasattr x.all: Mean +- std dev: 68.1 ns +- 1.1 ns
hasattr x.__array_ufunc__: Mean +- std dev: 40.4 ns +- 0.3 ns
hasattr Class.all: Mean +- std dev: 38.1 ns +- 0.6 ns
hasattr Class.__array_ufunc__: Mean +- std dev: 255 ns +- 2 ns
```
The reason seems to be that [`type_getattro`](https://github.com/python/cpython/blob/364ed9409269fb321dc4eafdea677c09a4bc0d8d/Objects/typeobject.c#L3902) always executes `PyErr_Format`, whereas the "normal" attribute lookup avoids this (see [here](https://github.com/python/cpython/blob/364ed9409269fb321dc4eafdea677c09a4bc0d8d/Objects/object.c#L952) and [here](https://github.com/python/cpython/blob/364ed9409269fb321dc4eafdea677c09a4bc0d8d/Objects/object.c#L1348)).
Notes:
* The benchmark is from the python side, but we are working with the C-API
* The performance is important for numpy, see https://github.com/numpy/numpy/pull/21423
* Another location where this is a bottleneck: https://github.com/python/cpython/blob/v3.12.0a2/Lib/dataclasses.py#L1301
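For convenience, the same comparison can be reproduced without third-party dependencies using only `timeit`; absolute numbers are machine-dependent, only the relative gap matters:

```python
import timeit

setup = """
class Class:
    def all(self):
        pass
x = Class()
"""

cases = [
    ("x.all", "hasattr(x, 'all')"),
    ("x.__array_ufunc__", "hasattr(x, '__array_ufunc__')"),
    ("Class.all", "hasattr(Class, 'all')"),
    ("Class.__array_ufunc__", "hasattr(Class, '__array_ufunc__')"),
]
for label, stmt in cases:
    # Take the best of a few repeats to reduce scheduling noise.
    best = min(timeit.repeat(stmt, setup=setup, number=200_000, repeat=3))
    print(f"hasattr {label}: {best / 200_000 * 1e9:.1f} ns/call")
```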
**Your environment**
- CPython versions tested on: Python 3.11.0a7+
- Operating system and architecture: Linux Ubuntu
<!-- gh-linked-prs -->
### Linked PRs
* gh-99977
* gh-99979
<!-- /gh-linked-prs -->
| 014f103705afef0c1c6d7dc740e6a4f21b2da794 | f9f0b21653721c0c2a62a2c125fa343784435bf6 |
python/cpython | python__cpython-92185 | # Zipfile module doesn't replace `os.altsep` in filenames in some cases
The zipfile module currently doesn't replace `os.altsep` with `/` in filenames in cases where `os.altsep` is not `None` and not `"/"`.
I'm not currently aware of any cases where `os.altsep` is not `None` or `"/"`, so I don't think this is currently causing any issues, but it's at least not consistent with [the documentation](https://docs.python.org/3/library/os.html#os.altsep) which states that `os.altsep` is "An alternative character used by the operating system to separate pathname components".
To reproduce the issue, the code below manually sets `os.sep` and `os.altsep` to be the normal Windows values, then swaps them. This should have no effect on the resulting zip file since the values should both be treated as valid path separators.
```python
import io
import ntpath
import os
import zipfile
def show_zip():
zf = zipfile.ZipFile(io.BytesIO(), 'w')
zf.writestr("a/b/", "")
zf.writestr("a/b\\", "")
zf.writestr("a\\b/", "")
zf.writestr("a\\b\\", "")
print([x.filename for x in zf.infolist()])
os.sep = ntpath.sep
os.altsep = ntpath.altsep
show_zip()
os.altsep, os.sep = os.sep, os.altsep
show_zip()
```
Expected output:
```
['a/b/', 'a/b/', 'a/b/', 'a/b/']
['a/b/', 'a/b/', 'a/b/', 'a/b/']
```
Actual output:
```
['a/b/', 'a/b/', 'a/b/', 'a/b/']
['a/b/', 'a/b\\', 'a\\b/', 'a\\b\\']
```
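A minimal sketch of the normalization the report implies (the function name is illustrative, not zipfile's actual code): treat both `os.sep` and `os.altsep`, whenever the latter is set, as separators to rewrite to `/` in archive names.

```python
def normalize_arcname(filename, sep, altsep):
    # Rewrite the primary separator, then the alternative one (if any),
    # regardless of which of the two happens to be "/".
    if sep != "/":
        filename = filename.replace(sep, "/")
    if altsep and altsep != "/":
        filename = filename.replace(altsep, "/")
    return filename

# Identical results whether ntpath's sep/altsep are assigned normally
# or swapped, matching the expected output above:
print(normalize_arcname("a\\b/", "\\", "/"))  # a/b/
print(normalize_arcname("a\\b/", "/", "\\"))  # a/b/
```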
Environment: `Python 3.11.0a7+ (heads/main:56f9844014, May 2 2022, 15:22:18) [Clang 11.0.0 (clang-1100.0.33.17)] on darwin`
<!-- gh-linked-prs -->
### Linked PRs
* gh-92185
<!-- /gh-linked-prs -->
| 4abfe6a14b5be5decbaa3142d9e2549cf2d86c34 | fcd5fb49b1d71165f3c503c3d2e74a082ddb2f21 |
python/cpython | python__cpython-99221 | # [subinterpreters] crash in _elementtree when parsing in parallel
**Crash report**
I have Kodi crashing when certain add-ons are enabled. I traced the error to _elementtree and repurposed a script from https://github.com/python/cpython/issues/90228 to reproduce the bug. See [bug.py.txt](https://github.com/python/cpython/files/8599659/bug.py.txt)
**Error messages**
<details>
```
/opt/python-dbg/bin/python3 bug.py
Fatal Python error: Segmentation fault
Current thread 0x00007fd85f7fe640 (most recent call first):
File "/home/plv/tmp/bug.py", line 18 in doIt
File "/opt/python-dbg/lib/python3.10/threading.py", line 946 in run
File "/opt/python-dbg/lib/python3.10/threading.py", line 1009 in _bootstrap_inner
File "/opt/python-dbg/lib/python3.10/threading.py", line 966 in _bootstrap
Thread 0x00007fd8654d2640 (most recent call first):
File "/home/plv/tmp/bug.py", line 18 in doIt
File "/opt/python-dbg/lib/python3.10/threading.py", line 946 in run
File "/opt/python-dbg/lib/python3.10/threading.py", line 1009 in _bootstrap_inner
File "/opt/python-dbg/lib/python3.10/threading.py", line 966 in _bootstrap
Thread 0x00007fd865f20740 (most recent call first):
File "/opt/python-dbg/lib/python3.10/threading.py", line 1109 in _wait_for_tstate_lock
File "/opt/python-dbg/lib/python3.10/threading.py", line 1089 in join
File "/home/plv/tmp/bug.py", line 25 in func
File "/home/plv/tmp/bug.py", line 27 in <module>
Extension modules: _testcapi (total: 1)
Segmentation fault (core dumped)
```
```
Reading symbols from /opt/python-dbg/bin/python3.10...
[New LWP 3737917]
[New LWP 3737913]
[New LWP 3737914]
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/usr/lib/libthread_db.so.1".
Core was generated by `/opt/python-dbg/bin/python3 bug.py'.
Program terminated with signal SIGSEGV, Segmentation fault.
#0 0x00007fd865da234c in __pthread_kill_implementation () from /usr/lib/libc.so.6
[Current thread is 1 (Thread 0x7fd85f7fe640 (LWP 3737917))]
(gdb) bt
#0 0x00007fd865da234c in __pthread_kill_implementation () from /usr/lib/libc.so.6
#1 0x00007fd865d554b8 in raise () from /usr/lib/libc.so.6
#2 <signal handler called>
#3 0x00007fd8640c4c8a in expat_parse (self=0x7fd85efb45f0, data=0x7fd84c0c39e0 "<data />", data_len=8, final=0)
at /home/plv/Documents/projects/aur/python-dbg/src/Python-3.10.2/Modules/_elementtree.c:3835
#4 0x00007fd8640c5690 in _elementtree_XMLParser__parse_whole (self=0x7fd85efb45f0, file=<optimized out>)
at /home/plv/Documents/projects/aur/python-dbg/src/Python-3.10.2/Modules/_elementtree.c:3994
#5 0x00007fd8660bfbdf in method_vectorcall_O (func=func@entry=0x7fd85eef49b0, args=args@entry=0x7fd8642061e0, nargsf=<optimized out>, kwnames=0x0) at Objects/descrobject.c:460
#6 0x00007fd8660b544b in _PyObject_VectorcallTstate (kwnames=<optimized out>, nargsf=<optimized out>, args=<optimized out>, callable=<optimized out>, tstate=<optimized out>)
at ./Include/cpython/abstract.h:114
#7 PyObject_Vectorcall (kwnames=<optimized out>, nargsf=<optimized out>, args=<optimized out>, callable=<optimized out>) at ./Include/cpython/abstract.h:123
#8 call_function (tstate=0x7fd84c01cc00, trace_info=<optimized out>, pp_stack=0x7fd85f7fc940, oparg=<optimized out>, kwnames=<optimized out>) at Python/ceval.c:5867
#9 0x00007fd8660ab172 in _PyEval_EvalFrameDefault (tstate=<optimized out>, f=0x7fd864206050, throwflag=<optimized out>) at Python/ceval.c:4198
#10 0x00007fd8660a980b in _PyEval_EvalFrame (throwflag=0, f=0x7fd864206050, tstate=0x7fd84c01cc00) at ./Include/internal/pycore_ceval.h:46
#11 _PyEval_Vector (tstate=<optimized out>, con=<optimized out>, locals=<optimized out>, args=<optimized out>, argcount=3, kwnames=<optimized out>) at Python/ceval.c:5065
#12 0x00007fd8660b544b in _PyObject_VectorcallTstate (kwnames=<optimized out>, nargsf=<optimized out>, args=<optimized out>, callable=<optimized out>, tstate=<optimized out>)
at ./Include/cpython/abstract.h:114
#13 PyObject_Vectorcall (kwnames=<optimized out>, nargsf=<optimized out>, args=<optimized out>, callable=<optimized out>) at ./Include/cpython/abstract.h:123
#14 call_function (tstate=0x7fd84c01cc00, trace_info=<optimized out>, pp_stack=0x7fd85f7fcc10, oparg=<optimized out>, kwnames=<optimized out>) at Python/ceval.c:5867
#15 0x00007fd8660ab172 in _PyEval_EvalFrameDefault (tstate=<optimized out>, f=0x7fd864124240, throwflag=<optimized out>) at Python/ceval.c:4198
#16 0x00007fd8660a980b in _PyEval_EvalFrame (throwflag=0, f=0x7fd864124240, tstate=0x7fd84c01cc00) at ./Include/internal/pycore_ceval.h:46
#17 _PyEval_Vector (tstate=<optimized out>, con=<optimized out>, locals=<optimized out>, args=<optimized out>, argcount=1, kwnames=<optimized out>) at Python/ceval.c:5065
#18 0x00007fd8660b544b in _PyObject_VectorcallTstate (kwnames=<optimized out>, nargsf=<optimized out>, args=<optimized out>, callable=<optimized out>, tstate=<optimized out>)
at ./Include/cpython/abstract.h:114
#19 PyObject_Vectorcall (kwnames=<optimized out>, nargsf=<optimized out>, args=<optimized out>, callable=<optimized out>) at ./Include/cpython/abstract.h:123
#20 call_function (tstate=0x7fd84c01cc00, trace_info=<optimized out>, pp_stack=0x7fd85f7fcee0, oparg=<optimized out>, kwnames=<optimized out>) at Python/ceval.c:5867
#21 0x00007fd8660b12a6 in _PyEval_EvalFrameDefault (tstate=<optimized out>, f=0x7fd864356af0, throwflag=<optimized out>) at Python/ceval.c:4181
#22 0x00007fd8660a980b in _PyEval_EvalFrame (throwflag=0, f=0x7fd864356af0, tstate=0x7fd84c01cc00) at ./Include/internal/pycore_ceval.h:46
#23 _PyEval_Vector (tstate=<optimized out>, con=<optimized out>, locals=<optimized out>, args=<optimized out>, argcount=0, kwnames=<optimized out>) at Python/ceval.c:5065
#24 0x00007fd8661269b4 in PyEval_EvalCode (co=0x7fd864240520, globals=0x7fd86422f7d0, locals=<optimized out>) at Python/ceval.c:1134
#25 0x00007fd866140884 in run_eval_code_obj (tstate=0x7fd84c01cc00, co=0x7fd864240520, globals=0x7fd86422f7d0, locals=0x7fd86422f7d0) at Python/pythonrun.c:1291
#26 0x00007fd8661382d6 in run_mod (mod=<optimized out>, filename=<optimized out>, globals=0x7fd86422f7d0, locals=0x7fd86422f7d0, flags=<optimized out>, arena=<optimized out>)
at Python/pythonrun.c:1312
#27 0x00007fd86612c561 in PyRun_StringFlags (str=<optimized out>, start=257, globals=0x7fd86422f7d0, locals=0x7fd86422f7d0, flags=0x7fd85f7fd220) at Python/pythonrun.c:1183
#28 0x00007fd86612c480 in PyRun_SimpleStringFlags (command=0x7fd8656802b0 "\nimport xml.etree.ElementTree as ETree\nETree.parse(\"data.xml\")\n", flags=flags@entry=0x7fd85f7fd220)
at Python/pythonrun.c:503
#29 0x00007fd8655e0120 in run_in_subinterp (self=<optimized out>, args=<optimized out>) at /home/plv/Documents/projects/aur/python-dbg/src/Python-3.10.2/Modules/_testcapimodule.c:3639
#30 0x00007fd8660b9438 in cfunction_call (func=0x7fd8656e9c10, args=<optimized out>, kwargs=<optimized out>) at Objects/methodobject.c:552
#31 0x00007fd8660b6468 in _PyObject_MakeTpCall (tstate=0x556284058d10, callable=0x7fd8656e9c10, args=<optimized out>, nargs=<optimized out>, keywords=<optimized out>) at Objects/call.c:215
#32 0x00007fd8660b55d9 in _PyObject_VectorcallTstate (kwnames=<optimized out>, nargsf=<optimized out>, args=<optimized out>, callable=<optimized out>, tstate=<optimized out>)
at ./Include/cpython/abstract.h:112
#33 PyObject_Vectorcall (kwnames=<optimized out>, nargsf=<optimized out>, args=<optimized out>, callable=<optimized out>) at ./Include/cpython/abstract.h:123
#34 call_function (tstate=0x556284058d10, trace_info=<optimized out>, pp_stack=0x7fd85f7fd3c0, oparg=<optimized out>, kwnames=<optimized out>) at Python/ceval.c:5867
#35 0x00007fd8660b12a6 in _PyEval_EvalFrameDefault (tstate=<optimized out>, f=0x7fd865563b60, throwflag=<optimized out>) at Python/ceval.c:4181
#36 0x00007fd8660a980b in _PyEval_EvalFrame (throwflag=0, f=0x7fd865563b60, tstate=0x556284058d10) at ./Include/internal/pycore_ceval.h:46
#37 _PyEval_Vector (tstate=<optimized out>, con=<optimized out>, locals=<optimized out>, args=<optimized out>, argcount=0, kwnames=<optimized out>) at Python/ceval.c:5065
#38 0x00007fd8660ae50a in PyObject_Call (kwargs=0x7fd864c33650, args=0x7fd865700250, callable=0x7fd8657d4940) at Objects/call.c:317
--Type <RET> for more, q to quit, c to continue without paging--c
#39 do_call_core (kwdict=0x7fd864c33650, callargs=0x7fd865700250, func=0x7fd8657d4940, trace_info=0x7fd85f7fd660, tstate=<optimized out>) at Python/ceval.c:5919
#40 _PyEval_EvalFrameDefault (tstate=<optimized out>, f=0x7fd86552abd0, throwflag=<optimized out>) at Python/ceval.c:4277
#41 0x00007fd8660a980b in _PyEval_EvalFrame (throwflag=0, f=0x7fd86552abd0, tstate=0x556284058d10) at ./Include/internal/pycore_ceval.h:46
#42 _PyEval_Vector (tstate=<optimized out>, con=<optimized out>, locals=<optimized out>, args=<optimized out>, argcount=1, kwnames=<optimized out>) at Python/ceval.c:5065
#43 0x00007fd8660b544b in _PyObject_VectorcallTstate (kwnames=<optimized out>, nargsf=<optimized out>, args=<optimized out>, callable=<optimized out>, tstate=<optimized out>) at ./Include/cpython/abstract.h:114
#44 PyObject_Vectorcall (kwnames=<optimized out>, nargsf=<optimized out>, args=<optimized out>, callable=<optimized out>) at ./Include/cpython/abstract.h:123
#45 call_function (tstate=0x556284058d10, trace_info=<optimized out>, pp_stack=0x7fd85f7fd900, oparg=<optimized out>, kwnames=<optimized out>) at Python/ceval.c:5867
#46 0x00007fd8660ab172 in _PyEval_EvalFrameDefault (tstate=<optimized out>, f=0x7fd84c000ba0, throwflag=<optimized out>) at Python/ceval.c:4198
#47 0x00007fd8660a980b in _PyEval_EvalFrame (throwflag=0, f=0x7fd84c000ba0, tstate=0x556284058d10) at ./Include/internal/pycore_ceval.h:46
#48 _PyEval_Vector (tstate=<optimized out>, con=<optimized out>, locals=<optimized out>, args=<optimized out>, argcount=1, kwnames=<optimized out>) at Python/ceval.c:5065
#49 0x00007fd8660b544b in _PyObject_VectorcallTstate (kwnames=<optimized out>, nargsf=<optimized out>, args=<optimized out>, callable=<optimized out>, tstate=<optimized out>) at ./Include/cpython/abstract.h:114
#50 PyObject_Vectorcall (kwnames=<optimized out>, nargsf=<optimized out>, args=<optimized out>, callable=<optimized out>) at ./Include/cpython/abstract.h:123
#51 call_function (tstate=0x556284058d10, trace_info=<optimized out>, pp_stack=0x7fd85f7fdbd0, oparg=<optimized out>, kwnames=<optimized out>) at Python/ceval.c:5867
#52 0x00007fd8660ab172 in _PyEval_EvalFrameDefault (tstate=<optimized out>, f=0x7fd86552aa00, throwflag=<optimized out>) at Python/ceval.c:4198
#53 0x00007fd8660a980b in _PyEval_EvalFrame (throwflag=0, f=0x7fd86552aa00, tstate=0x556284058d10) at ./Include/internal/pycore_ceval.h:46
#54 _PyEval_Vector (tstate=<optimized out>, con=<optimized out>, locals=<optimized out>, args=<optimized out>, argcount=1, kwnames=<optimized out>) at Python/ceval.c:5065
#55 0x00007fd8660c0e2c in _PyObject_VectorcallTstate (kwnames=0x0, nargsf=1, args=0x7fd85f7fddb8, callable=0x7fd86555f540, tstate=0x556284058d10) at ./Include/cpython/abstract.h:114
#56 method_vectorcall (method=<optimized out>, args=<optimized out>, nargsf=<optimized out>, kwnames=<optimized out>) at Objects/classobject.c:61
#57 0x00007fd86618ac9a in thread_run (boot_raw=0x7fd864bf5ee0) at ./Modules/_threadmodule.c:1090
#58 0x00007fd866165f18 in pythread_wrapper (arg=<optimized out>) at Python/thread_pthread.h:248
#59 0x00007fd865da05c2 in start_thread () from /usr/lib/libc.so.6
#60 0x00007fd865e25584 in clone () from /usr/lib/libc.so.6
```
</details>
**Your environment**
- CPython versions tested on: `python --with-pydebug 3.10.2`
- Operating system and architecture: `Linux 5.17.1-zen1-1-zen #1 ZEN SMP PREEMPT Mon, 28 Mar 2022 21:56:46 +0000 x86_64 GNU/Linux`
<!-- gh-linked-prs -->
### Linked PRs
* gh-99221
* gh-101187
* gh-101189
* gh-101190
* gh-101285
<!-- /gh-linked-prs -->
| 3847a6c64b96bb2cb93be394a590d4df2c35e876 | 9109d460511a317f5598a26658ba495e77ea8686 |