| repo | instance_id | problem_statement | merge_commit | base_commit |
|---|---|---|---|---|
python/cpython | python__cpython-104788 | # Optimize and reduce memory usage for `asyncio.Task` and others
Tracker issue for performance work related to `asyncio.Task` for 3.13.
Two things off the top of my head are to use bitfields and to use the newer managed-dict optimization in `_asyncio`. I'll add a What's New entry once all changes land under this issue.
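For context, the per-object cost being targeted can be observed from Python (a rough sketch: exact sizes vary by version and build, and `sys.getsizeof` does not account for everything a Task owns):

```python
import asyncio
import sys

async def noop():
    pass

async def measure():
    # Create a Task and record its basic object size.
    task = asyncio.ensure_future(noop())
    size = sys.getsizeof(task)
    await task
    return size

size = asyncio.run(measure())
print("asyncio.Task basic size:", size)
```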
<!-- gh-linked-prs -->
### Linked PRs
* gh-104788
* gh-104795
* gh-106516
<!-- /gh-linked-prs -->
| 829ac13b69a2b53153e1b40670e6ef82f05130c1 | 8da9d1b16319f4a6bd78435016ef1f4bef6e2b41 |
python/cpython | python__cpython-104891 | # Remove kwargs-based TypedDict creation
#90224 deprecated the creation of TypedDicts through the kwargs-based syntax (`TypedDict("TD", a=int, b=str)`). The syntax is to be removed in 3.13, so now is the time to do that.
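For reference, a sketch of the spellings that remain supported next to the removed one (`Movie` and friends are illustrative names):

```python
from typing import TypedDict

# 1. Class-based syntax (still supported)
class Movie(TypedDict):
    title: str
    year: int

# 2. Functional syntax taking a dict of fields (still supported)
Movie2 = TypedDict("Movie2", {"title": str, "year": int})

# Removed in 3.13 (DeprecationWarning since the change in #90224):
# Movie3 = TypedDict("Movie3", title=str, year=int)

m = Movie(title="Metropolis", year=1927)
print(m["year"])
```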
<!-- gh-linked-prs -->
### Linked PRs
* gh-104891
<!-- /gh-linked-prs -->
| fea8632ec69d160a11b8ec506900c14989952bc1 | d08679212d9af52dd074cd4a6abb440edb944c9c |
python/cpython | python__cpython-104784 | # Remove locale.resetlocale() function in Python 3.13
Remove the locale.getdefaultlocale() and locale.resetlocale() functions, deprecated in Python 3.11: see issue #90817 for the rationale and the deprecation.
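A migration sketch for affected code, following the replacements suggested by the deprecation notices (hedged: `getlocale()` may return `(None, None)` in a bare environment, and `setlocale(LC_ALL, "")` can fail if the environment names an unsupported locale):

```python
import locale

# Instead of locale.resetlocale(): set the locale from the environment.
try:
    locale.setlocale(locale.LC_ALL, "")
except locale.Error:
    pass  # environment specifies an unsupported locale; keep the default

# Instead of locale.getdefaultlocale(): query the current locale.
current = locale.getlocale(locale.LC_CTYPE)
print(current)
```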
<!-- gh-linked-prs -->
### Linked PRs
* gh-104784
* gh-105381
* gh-105401
<!-- /gh-linked-prs -->
| 0cb6b9b0db5df6b3f902e86eb3d4a1e504afb851 | c7bf74bacd2b2db308e80e532153ffaf6dbca851 |
python/cpython | python__cpython-104781 | # Remove the 2to3 program and the lib2to3 module
The 2to3 program is implemented on top of the lib2to3 module, which implements a parser for the Python language. Problem: this parser is unable to parse the Python 3.10 grammar :-( This issue was already known in Python 3.9 when [PEP 617 – New PEG parser for CPython](https://peps.python.org/pep-0617/) was approved.
The 2to3 program and the lib2to3 module have been pending-deprecated since Python 3.9, and were officially deprecated in Python 3.11 (``import lib2to3`` emits a ``DeprecationWarning``).
One famous user of lib2to3 was the [black project](https://github.com/psf/black), but they decided to fork lib2to3, and their fork was made compatible with the Python 3.10 grammar (maybe also Python 3.11).
I propose to remove 2to3 and lib2to3 at the beginning of the Python 3.13 development cycle to give users more time to prepare for this incompatible change before the Python 3.13 final release (scheduled for October 2024).
cc @isidentical @lysnikolaou @pablogsal @ambv
---
Example of a valid Python script (``script.py``):
```py
with (open(__file__) as fp): data=fp.read()
print(len(data))
```
It works well on Python 3.11:
```
$ python3.11 script.py
61
```
But lib2to3 is unable to parse it:
```
$ python3.11 -m lib2to3 script.py
(...)
RefactoringTool: No files need to be modified.
RefactoringTool: There was 1 error:
RefactoringTool: Can't parse script.py: ParseError: bad input: type=1, value='as', context=(' ', (1, 21))
```
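For comparison, a sketch showing that the PEG-based parser behind `ast.parse()` accepts the same construct that lib2to3's pgen2-based parser rejects (requires Python 3.10+ for this grammar):

```python
import ast

# The same parenthesized context-manager syntax from script.py above.
source = 'with (open(__file__) as fp): data = fp.read()\nprint(len(data))\n'

# ast.parse() uses the PEG parser and handles it without error.
tree = ast.parse(source)
print(type(tree.body[0]).__name__)
```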
<!-- gh-linked-prs -->
### Linked PRs
* gh-104781
<!-- /gh-linked-prs -->
| ae00b810d1d3ad7f1f7e226b02ece37c986330e7 | ddb14859535ab8091381b9d0baf32dbe245b5e65 |
python/cpython | python__cpython-104774 | # PEP 594: Remove stdlib modules scheduled for deletion in Python 3.13
[PEP 594](https://peps.python.org/pep-0594/) ("Removing dead batteries from the standard library") deprecated the following 19 modules in Python 3.11 (or before) and scheduled their removal for Python 3.13:
* [x] aifc
* [x] audioop
* [x] cgi
* [x] cgitb
* [x] chunk
* [x] crypt
* [x] imghdr
* [x] mailcap
* [x] msilib
* [x] nis
* [x] nntplib
* [x] ossaudiodev
* [x] pipes
* [x] sndhdr
* [x] spwd
* [x] sunau
* [x] telnetlib
* [x] uu
* [x] xdrlib
I propose to remove them as soon as possible in the Python 3.13 development cycle to help users prepare for this incompatible change before the Python 3.13 final release (scheduled for October 2024).
I plan to work on pull requests to remove these modules.
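For downstream code that still touches a dead battery, one defensive pattern (a sketch; `optional_import` is an illustrative helper, not a stdlib API) is to import lazily and degrade when the module is gone:

```python
import sys

def optional_import(name):
    """Return the named module, or None if it is not available."""
    try:
        __import__(name)
    except ModuleNotFoundError:
        return None
    return sys.modules[name]

# telnetlib is one of the PEP 594 modules removed in 3.13.
telnetlib = optional_import("telnetlib")
print("telnetlib available:", telnetlib is not None)
```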
<!-- gh-linked-prs -->
### Linked PRs
* gh-104774
* gh-104775
* gh-104777
* gh-104778
* gh-104848
* gh-104862
* gh-104863
* gh-104864
* gh-104867
* gh-104868
* gh-104871
* gh-104894
* gh-104897
* gh-104900
* gh-104901
* gh-104908
* gh-104911
* gh-104927
* gh-104932
* gh-104933
* gh-104937
* gh-105139
<!-- /gh-linked-prs -->
| 7b00940f69ab26212ea375860a1956e157dd2c30 | 08d592389603500af398d278af4842cff6f22c33 |
python/cpython | python__cpython-104771 | # Let generator.close() return StopIteration.value
# Feature or enhancement
Change the `close()` method of generators to return the value of `StopIteration`.
# Pitch
If a generator handles the `GeneratorExit` thrown by `close()`, it can exit gracefully, raising `StopIteration` with its return value.
```py
def f():
try:
yield
except GeneratorExit:
pass
return 0
g = f()
g.send(None)
g.close() # StopIteration handled by close()
```
The `StopIteration` is handled by `close()`, but the return value is currently discarded, and `close()` always returns `None`.
The proposed change is to let `close()` return the value of a `StopIteration` it encounters after a graceful generator exit.
Every other case, including errors thrown between `except GeneratorExit` and `return`, is already handled by `close()` and would remain unchanged. This includes repeated calls to `close()`: Only the first such call might cause a generator to exit gracefully, after which it cannot raise `StopIteration` any longer.
The change enables more natural use of generators as "pipelines" that process input data until ended. As a trivial example of the functionality, consider this running-average calculator:
```py
def averager():
n = 0
sum = 0.
while True:
try:
x = yield n
except GeneratorExit:
break
n += 1
sum += x
mean = sum/n
return mean
avg = averager()
avg.send(None)
for number in get_some_numbers():
avg.send(number)
mean = avg.close()
```
The generator processes data in an infinite loop. Once the controlling process terminates processing, the generator performs post-processing, and returns a final result. Without the return value of `close()`, there is no intrinsic way of obtaining such a post-processed result.
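Until such a change lands, the proposed semantics can be emulated with a small helper (a sketch; `close_and_get_value` is an illustrative name, not a real API):

```python
def close_and_get_value(gen):
    # Emulate the proposed close()-returns-value semantics on current
    # Pythons, where close() still returns None.
    try:
        gen.throw(GeneratorExit)
    except StopIteration as exc:
        return exc.value          # generator exited gracefully
    except GeneratorExit:
        return None               # generator did not catch GeneratorExit
    else:
        raise RuntimeError("generator ignored GeneratorExit")

def averager():
    n = 0
    total = 0.0
    while True:
        try:
            x = yield n
        except GeneratorExit:
            break
        n += 1
        total += x
    return total / n

avg = averager()
avg.send(None)
for number in [1.0, 2.0, 3.0]:
    avg.send(number)
result = close_and_get_value(avg)
print(result)  # 2.0
```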
# Previous discussion
Discourse thread: [Let generator.close() return StopIteration.value](https://discuss.python.org/t/let-generator-close-return-stopiteration-value/24786?u=ntessore)
<!-- gh-linked-prs -->
### Linked PRs
* gh-104771
<!-- /gh-linked-prs -->
| d56c933992c86986bd58eb3880aed0ed1b0cadc9 | 50fce89d123b25e53fa8a0303a169e8887154a0e |
python/cpython | python__cpython-104765 | # test_enum tests fail because of the version bump to 3.13.0a0
Following the creation of the 3.12 branch and the bump of main to 3.13, test_enum started failing (https://github.com/python/cpython/actions/runs/5050528705/jobs/9061352655). This seems to be because a few tests are testing features that haven't landed in 3.13, or perhaps were rolled back. The failing tests are blocking CI.
<!-- gh-linked-prs -->
### Linked PRs
* gh-104765
* gh-104779
<!-- /gh-linked-prs -->
| 586aca3fc647c626c39e995d07d8bd7dd13e2d52 | 4194d8f2c40f478eb0fc9b6fa9b913baaff229da |
python/cpython | python__cpython-126649 | # AttributeError from stopping stopped patch which was started more than once
# Bug report
It seems this bug appeared after https://github.com/python/cpython/commit/4b222c9491d1700e9bdd98e6889b8d0ea1c7321e
Suppose we have an initialized patch, then call `start` more than once, and then call `stop` more than once:
```
>>> from unittest import mock
... class Foo:
... bar = None
... patch = mock.patch.object(Foo, 'bar', 'x')
>>> patch.start()
'x'
>>> patch.start()
'x'
>>> patch.stop()
False
>>> patch.stop()
Traceback (most recent call last):
File "...", line ..., in runcode
coro = func()
File "<input>", line 1, in <module>
File "/usr/lib/python3.8/unittest/mock.py", line 1542, in stop
return self.__exit__(None, None, None)
File "/usr/lib/python3.8/unittest/mock.py", line 1508, in __exit__
if self.is_local and self.temp_original is not DEFAULT:
AttributeError: '_patch' object has no attribute 'is_local'
```
But if we call `start` only once, multiple `stop`s don't cause this error.
At first glance, this is because `stop` catches the `ValueError` raised when removing the patch from `_active_patches`; if no `ValueError` was raised, it proceeds on the assumption that the patch has an `is_local` attribute.
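Until the fix is in, the bug can be sidestepped by keeping `start()`/`stop()` calls balanced (a sketch of the safe pattern):

```python
from unittest import mock

class Foo:
    bar = None

# The failure requires unbalanced calls; pairing each start() with
# exactly one stop() avoids the AttributeError on affected versions.
patcher = mock.patch.object(Foo, "bar", "x")
patcher.start()
assert Foo.bar == "x"   # attribute patched
patcher.stop()
print(Foo.bar)          # attribute restored
```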
# Your environment
- CPython versions tested on: Python 3.8
- Operating system: Ubuntu
- I think the error exists in newer versions too
<!-- gh-linked-prs -->
### Linked PRs
* gh-126649
* gh-126772
* gh-126773
<!-- /gh-linked-prs -->
| 1e40c5ba47780ddd91868abb3aa064f5ba3015e4 | 2e39d77ddeb51505d65fd54ccfcd72615c6b1927 |
python/cpython | python__cpython-104743 | # tokenize.generate_tokens no longer reports lineno for indentation errors
`tokenize.generate_tokens` raises an exception for indentation errors. It used to populate the `.lineno` attribute of the exception, but as of yesterday, `.lineno is None`.
Test file:
```python
# parsebug.py
import io
import sys
import tokenize
text = """\
0 spaces
2
1
"""
print(sys.version)
readline = io.StringIO(text).readline
try:
list(tokenize.generate_tokens(readline))
except Exception as exc:
print(f"{exc.lineno = }")
raise
```
With 3.12.0a7:
```
% .tox/py312/bin/python parsebug.py
3.12.0a7 (main, Apr 5 2023, 05:51:58) [Clang 14.0.3 (clang-1403.0.22.14.1)]
exc.lineno = 3
Traceback (most recent call last):
File "/Users/nedbatchelder/coverage/trunk/parsebug.py", line 15, in <module>
list(tokenize.generate_tokens(readline))
File "/usr/local/pyenv/pyenv/versions/3.12.0a7/lib/python3.12/tokenize.py", line 516, in _tokenize
raise IndentationError(
File "<tokenize>", line 3
1
IndentationError: unindent does not match any outer indentation level
```
With a nightly 3.12 build:
```
% .tox/anypy/bin/python parsebug.py
3.12.0a7+ (heads/main:9bc80dac47, May 22 2023, 05:34:19) [Clang 14.0.3 (clang-1403.0.22.14.1)]
exc.lineno = None
Traceback (most recent call last):
File "/Users/nedbatchelder/coverage/trunk/parsebug.py", line 15, in <module>
list(tokenize.generate_tokens(readline))
File "/usr/local/cpython/lib/python3.12/tokenize.py", line 451, in _tokenize
for token in _generate_tokens_from_c_tokenizer(source, extra_tokens=True):
File "/usr/local/cpython/lib/python3.12/tokenize.py", line 542, in _generate_tokens_from_c_tokenizer
for info in c_tokenizer.TokenizerIter(source, extra_tokens=extra_tokens):
IndentationError: unindent does not match any outer indentation level (<tokenize>, line 3)
```
The line number is clearly known, since it's reported. Is there another way I should be getting the line number from the exception?
/cc @mgmacias95
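As a stopgap sketch (not a guaranteed API: it assumes the `(filename, lineno, ...)` details tuple in `exc.args[1]` follows the usual `SyntaxError` layout), the line number can often be recovered even when `.lineno` is `None`:

```python
import io
import tokenize

text = """\
0 spaces
  2
 1
"""

readline = io.StringIO(text).readline
lineno = None
try:
    list(tokenize.generate_tokens(readline))
except IndentationError as exc:
    lineno = exc.lineno
    if lineno is None and len(exc.args) > 1:
        # Fall back to the (filename, lineno, offset, text) details tuple.
        lineno = exc.args[1][1]
print(lineno)
```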
<!-- gh-linked-prs -->
### Linked PRs
* gh-104743
<!-- /gh-linked-prs -->
| 729b252241966f464cc46e176fb854dbcc5296cb | 0a7796052acb9cec8b13f8d0a5f304f56f26ec5b |
python/cpython | python__cpython-104722 | # IDLE is unable to open any `.py` files
With a fresh CPython build (be0c106789322273f1f76d232c768c09880a14bd), IDLE is unable to open any `.py` files.
To reproduce:
1. Create an empty `.py` file with the name `repro.py`
2. Run `python -m idlelib repro.py`
IDLE still seems able to create new `.py` files and save them; it just can't open pre-existing `.py` files right now.
## Traceback observed
```pytb
C:\Users\alexw\coding\cpython>python -m idlelib repro.py
Running Debug|x64 interpreter...
Traceback (most recent call last):
File "C:\Users\alexw\coding\cpython\Lib\runpy.py", line 198, in _run_module_as_main
return _run_code(code, main_globals, None,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\alexw\coding\cpython\Lib\runpy.py", line 88, in _run_code
exec(code, run_globals)
File "C:\Users\alexw\coding\cpython\Lib\idlelib\__main__.py", line 7, in <module>
idlelib.pyshell.main()
File "C:\Users\alexw\coding\cpython\Lib\idlelib\pyshell.py", line 1640, in main
if flist.open(filename) is None:
^^^^^^^^^^^^^^^^^^^^
File "C:\Users\alexw\coding\cpython\Lib\idlelib\filelist.py", line 37, in open
edit = self.EditorWindow(self, filename, key)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\alexw\coding\cpython\Lib\idlelib\pyshell.py", line 135, in __init__
EditorWindow.__init__(self, *args)
File "C:\Users\alexw\coding\cpython\Lib\idlelib\editor.py", line 289, in __init__
self.set_indentation_params(is_py_src)
File "C:\Users\alexw\coding\cpython\Lib\idlelib\editor.py", line 1327, in set_indentation_params
i = self.guess_indent()
^^^^^^^^^^^^^^^^^^^
File "C:\Users\alexw\coding\cpython\Lib\idlelib\editor.py", line 1574, in guess_indent
opener, indented = IndentSearcher(self.text, self.tabwidth).run()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\alexw\coding\cpython\Lib\idlelib\editor.py", line 1646, in run
save_tabsize = tokenize.tabsize
^^^^^^^^^^^^^^^^
AttributeError: module 'tokenize' has no attribute 'tabsize'
```
## Environment
(Given the cause of the bug, the environment details shouldn't really be relevant; but they're included here anyway, for completeness.)
```
Python 3.12.0a7+ (heads/main:be0c106789, May 21 2023, 12:00:27) [MSC v.1932 64 bit (AMD64)] on win32
```
Reproduces on a debug and non-debug build, FWIW.
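The immediate failure mode can be sketched outside IDLE: the code assumed a private `tokenize` module attribute, and a guarded lookup avoids the `AttributeError` (8 is the historical value of the removed attribute):

```python
import tokenize

# tokenize.tabsize was an undocumented module attribute that some
# callers (like IDLE's IndentSearcher) read directly. A guarded lookup
# degrades gracefully on builds where it no longer exists.
tabsize = getattr(tokenize, "tabsize", 8)
print(tabsize)
```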
<!-- gh-linked-prs -->
### Linked PRs
* gh-104722
* gh-104726
* gh-104727
* gh-104767
* gh-104844
* gh-104845
<!-- /gh-linked-prs -->
| ffe47cb623999db05959ec4b5168d1c87a1e40ef | 93923793f602ea9117f13bfac8cbe01a864eeb01 |
python/cpython | python__cpython-104700 | # test_mmap leaks references
Tried on current main branch.
```
./python -m test -R 3:3 test_mmap
Running Debug|x64 interpreter...
0:00:00 Run tests sequentially
0:00:00 [1/1] test_mmap
beginning 6 repetitions
123456
......
test_mmap leaked [13, 13, 13] references, sum=39
test_mmap failed (reference leak)
== Tests result: FAILURE ==
1 test failed:
test_mmap
Total duration: 1.3 sec
Tests result: FAILURE
```
OS: Windows 10 & WSL Ubuntu 20.04
<!-- gh-linked-prs -->
### Linked PRs
* gh-104700
* gh-104710
<!-- /gh-linked-prs -->
| 99b641886a09252bbcf99a1d322fa8734f1ca30d | b870b1fa755808977578e99bd42859c519e00bb7 |
python/cpython | python__cpython-104693 | # Race condition during installation in parallel builds
# Bug report
When running `make -j install`, it is possible for the `python3` symlink to be installed before all standard library modules are installed. If an external program tries to run Python at this moment it can lead to an execution error.
We are getting sporadic build errors at OpenWrt (https://github.com/openwrt/packages/issues/19241) and believe this to be the cause: during the build of the "host" Python (the Python that runs on the buildbot), another package being built tries to run a script with this Python because the `python3` symlink is already in place, even though the standard library is not yet fully installed.
# Your environment
- CPython versions tested on: 3.10
- Operating system and architecture: Linux, not sure exactly which distribution
<!-- gh-linked-prs -->
### Linked PRs
* gh-104693
* gh-105428
* gh-105429
<!-- /gh-linked-prs -->
| 990cb3676c2edb7e5787372d6cbe360a73367f4c | 81c81328a4fa13fead6f8cc9053a1a32a62a0279 |
python/cpython | python__cpython-104826 | # Threads started in exit handler are still running after their thread states are destroyed
Reproduced on main (663c049ff78a299bdf7c1a0444b9900e6d37372d), bisected to 283ab0e1c0.
```python
import atexit
import threading
def t0():
pass
def t1():
threading.Thread(target=t0).start()
def f():
threading.Thread(target=t1).start()
atexit.register(f)
exit()
```
Output (it can also pass without a crash or segfault, depending on the point at which `t0` tries to access its `tstate`):
```
python: Python/pystate.c:244: bind_tstate: Assertion `tstate_is_alive(tstate) && !tstate->_status.bound' failed.
Aborted (core dumped)
```
Stack traces:
```
(gdb) b Python/pylifecycle.c:1810
Breakpoint 1 at 0x372529: file Python/pylifecycle.c, line 1828.
(gdb) r repro.py
Starting program: /home/chgnrdv/cpython/python repro.py
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
[New Thread 0x7ffff7621700 (LWP 1539803)]
[New Thread 0x7ffff6e20700 (LWP 1539804)]
python: Python/pystate.c:244: bind_tstate: Assertion `tstate_is_alive(tstate) && !tstate->_status.bound' failed.
Thread 3 "python" received signal SIGABRT, Aborted.
[Switching to Thread 0x7ffff6e20700 (LWP 1539804)]
__GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50
50 ../sysdeps/unix/sysv/linux/raise.c: No such file or directory.
(gdb) bt
#0 __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50
#1 0x00007ffff7c8f537 in __GI_abort () at abort.c:79
#2 0x00007ffff7c8f40f in __assert_fail_base (fmt=0x7ffff7e076a8 "%s%s%s:%u: %s%sAssertion `%s' failed.\n%n",
assertion=0x555555a1e830 "tstate_is_alive(tstate) && !tstate->_status.bound", file=0x555555a1df0f "Python/pystate.c", line=244, function=<optimized out>)
at assert.c:92
#3 0x00007ffff7c9e662 in __GI___assert_fail (assertion=assertion@entry=0x555555a1e830 "tstate_is_alive(tstate) && !tstate->_status.bound",
file=file@entry=0x555555a1df0f "Python/pystate.c", line=line@entry=244, function=function@entry=0x555555a1f678 <__PRETTY_FUNCTION__.52> "bind_tstate")
at assert.c:101
#4 0x00005555558c8213 in bind_tstate (tstate=tstate@entry=0x7ffff0000e40) at Python/pystate.c:244
#5 0x00005555558c9d9e in _PyThreadState_Bind (tstate=tstate@entry=0x7ffff0000e40) at Python/pystate.c:1929
#6 0x000055555596afac in thread_run (boot_raw=boot_raw@entry=0x7ffff77e80e0) at ./Modules/_threadmodule.c:1077
#7 0x00005555558e5ce7 in pythread_wrapper (arg=<optimized out>) at Python/thread_pthread.h:233
#8 0x00007ffff7f98ea7 in start_thread (arg=<optimized out>) at pthread_create.c:477
#9 0x00007ffff7d69a2f in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95
(gdb) frame level 4
#4 0x00005555558c8213 in bind_tstate (tstate=tstate@entry=0x7ffff0000e40) at Python/pystate.c:244
244 assert(tstate_is_alive(tstate) && !tstate->_status.bound);
(gdb) print *tstate
$1 = {prev = 0xdddddddddddddddd, next = 0xdddddddddddddddd, interp = 0xdddddddddddddddd, _status = {initialized = 1, bound = 0, unbound = 1, bound_gilstate = 1,
active = 1, finalizing = 0, cleared = 1, finalized = 1}, py_recursion_remaining = -572662307, py_recursion_limit = -572662307, c_recursion_remaining = -572662307,
recursion_headroom = -572662307, tracing = -572662307, what_event = -572662307, cframe = 0xdddddddddddddddd, c_profilefunc = 0xdddddddddddddddd,
c_tracefunc = 0xdddddddddddddddd, c_profileobj = 0xdddddddddddddddd, c_traceobj = 0xdddddddddddddddd, current_exception = 0xdddddddddddddddd,
exc_info = 0xdddddddddddddddd, dict = 0xdddddddddddddddd, gilstate_counter = -572662307, async_exc = 0xdddddddddddddddd, thread_id = 15987178197214944733,
native_thread_id = 15987178197214944733, trash = {delete_nesting = -572662307, delete_later = 0xdddddddddddddddd}, on_delete = 0xdddddddddddddddd,
on_delete_data = 0xdddddddddddddddd, coroutine_origin_tracking_depth = -572662307, async_gen_firstiter = 0xdddddddddddddddd,
async_gen_finalizer = 0xdddddddddddddddd, context = 0xdddddddddddddddd, context_ver = 15987178197214944733, id = 15987178197214944733,
datastack_chunk = 0xdddddddddddddddd, datastack_top = 0xdddddddddddddddd, datastack_limit = 0xdddddddddddddddd, exc_state = {exc_value = 0xdddddddddddddddd,
previous_item = 0xdddddddddddddddd}, root_cframe = {current_frame = 0xdddddddddddddddd, previous = 0xdddddddddddddddd}}
(gdb) info threads
Id Target Id Frame
1 Thread 0x7ffff7c6c280 (LWP 1539799) "python" Py_FinalizeEx () at Python/pylifecycle.c:1828
2 Thread 0x7ffff7621700 (LWP 1539803) "python" futex_abstimed_wait_cancelable (private=0, abstime=0x0, clockid=0, expected=0, futex_word=0x7ffff00010b0)
at ../sysdeps/nptl/futex-internal.h:323
* 3 Thread 0x7ffff6e20700 (LWP 1539804) "python" __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50
(gdb) thread 2
[Switching to thread 2 (Thread 0x7ffff7621700 (LWP 1539803))]
#0 futex_abstimed_wait_cancelable (private=0, abstime=0x0, clockid=0, expected=0, futex_word=0x7ffff00010b0) at ../sysdeps/nptl/futex-internal.h:323
323 ../sysdeps/nptl/futex-internal.h: No such file or directory.
(gdb) bt
#0 futex_abstimed_wait_cancelable (private=0, abstime=0x0, clockid=0, expected=0, futex_word=0x7ffff00010b0) at ../sysdeps/nptl/futex-internal.h:323
#1 do_futex_wait (sem=sem@entry=0x7ffff00010b0, abstime=0x0, clockid=0) at sem_waitcommon.c:112
#2 0x00007ffff7fa2278 in __new_sem_wait_slow (sem=sem@entry=0x7ffff00010b0, abstime=0x0, clockid=0) at sem_waitcommon.c:184
#3 0x00007ffff7fa22f1 in __new_sem_wait (sem=sem@entry=0x7ffff00010b0) at sem_wait.c:42
#4 0x00005555558e6120 in PyThread_acquire_lock_timed (lock=lock@entry=0x7ffff00010b0, microseconds=microseconds@entry=-1000000, intr_flag=intr_flag@entry=1)
at Python/thread_pthread.h:478
#5 0x000055555596a3be in acquire_timed (lock=0x7ffff00010b0, timeout=-1000000000) at ./Modules/_threadmodule.c:98
#6 0x000055555596a50d in lock_PyThread_acquire_lock (self=0x7ffff77d9c70, args=<optimized out>, kwds=<optimized out>) at ./Modules/_threadmodule.c:179
#7 0x000055555570defe in method_vectorcall_VARARGS_KEYWORDS (func=0x7ffff792ef90, args=0x7ffff7fbd378, nargsf=<optimized out>, kwnames=<optimized out>)
at Objects/descrobject.c:365
#8 0x00005555556fc9ca in _PyObject_VectorcallTstate (tstate=0x555555d24ac0, callable=0x7ffff792ef90, args=0x7ffff7fbd378, nargsf=9223372036854775809, kwnames=0x0)
at ./Include/internal/pycore_call.h:92
#9 0x00005555556fcae5 in PyObject_Vectorcall (callable=callable@entry=0x7ffff792ef90, args=args@entry=0x7ffff7fbd378, nargsf=<optimized out>,
kwnames=kwnames@entry=0x0) at Objects/call.c:325
#10 0x0000555555856917 in _PyEval_EvalFrameDefault (tstate=tstate@entry=0x555555d24ac0, frame=0x7ffff7fbd300, throwflag=throwflag@entry=0) at Python/bytecodes.c:2643
#11 0x000055555585e296 in _PyEval_EvalFrame (throwflag=0, frame=<optimized out>, tstate=0x555555d24ac0) at ./Include/internal/pycore_ceval.h:87
#12 _PyEval_Vector (tstate=0x555555d24ac0, func=0x7ffff7649fd0, locals=locals@entry=0x0, args=0x7ffff7620db8, argcount=1, kwnames=0x0) at Python/ceval.c:1610
#13 0x00005555556fc45b in _PyFunction_Vectorcall (func=<optimized out>, stack=<optimized out>, nargsf=<optimized out>, kwnames=<optimized out>) at Objects/call.c:419
#14 0x0000555555700b6f in _PyObject_VectorcallTstate (kwnames=0x0, nargsf=1, args=0x7ffff7620db8, callable=0x7ffff7649fd0, tstate=0x555555d24ac0)
at ./Include/internal/pycore_call.h:92
#15 method_vectorcall (method=<optimized out>, args=0x555555ca9190 <_PyRuntime+91760>, nargsf=<optimized out>, kwnames=<optimized out>) at Objects/classobject.c:67
#16 0x00005555556fecfe in _PyVectorcall_Call (tstate=tstate@entry=0x555555d24ac0, func=0x5555557008f9 <method_vectorcall>, callable=callable@entry=0x7ffff77b03b0,
tuple=tuple@entry=0x555555ca9178 <_PyRuntime+91736>, kwargs=kwargs@entry=0x0) at Objects/call.c:271
#17 0x00005555556ff0aa in _PyObject_Call (tstate=0x555555d24ac0, callable=0x7ffff77b03b0, args=0x555555ca9178 <_PyRuntime+91736>, kwargs=0x0) at Objects/call.c:354
#18 0x00005555556ff103 in PyObject_Call (callable=<optimized out>, args=<optimized out>, kwargs=<optimized out>) at Objects/call.c:379
#19 0x000055555596afdc in thread_run (boot_raw=boot_raw@entry=0x7ffff77ada80) at ./Modules/_threadmodule.c:1081
#20 0x00005555558e5ce7 in pythread_wrapper (arg=<optimized out>) at Python/thread_pthread.h:233
#21 0x00007ffff7f98ea7 in start_thread (arg=<optimized out>) at pthread_create.c:477
#22 0x00007ffff7d69a2f in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95
(gdb) frame level 19
#19 0x000055555596afdc in thread_run (boot_raw=boot_raw@entry=0x7ffff77ada80) at ./Modules/_threadmodule.c:1081
1081 PyObject *res = PyObject_Call(boot->func, boot->args, boot->kwargs);
(gdb) print *tstate
$2 = {prev = 0xdddddddddddddddd, next = 0xdddddddddddddddd, interp = 0xdddddddddddddddd, _status = {initialized = 1, bound = 0, unbound = 1, bound_gilstate = 1,
active = 1, finalizing = 0, cleared = 1, finalized = 1}, py_recursion_remaining = -572662307, py_recursion_limit = -572662307, c_recursion_remaining = -572662307,
recursion_headroom = -572662307, tracing = -572662307, what_event = -572662307, cframe = 0xdddddddddddddddd, c_profilefunc = 0xdddddddddddddddd,
c_tracefunc = 0xdddddddddddddddd, c_profileobj = 0xdddddddddddddddd, c_traceobj = 0xdddddddddddddddd, current_exception = 0xdddddddddddddddd,
exc_info = 0xdddddddddddddddd, dict = 0xdddddddddddddddd, gilstate_counter = -572662307, async_exc = 0xdddddddddddddddd, thread_id = 15987178197214944733,
native_thread_id = 15987178197214944733, trash = {delete_nesting = -572662307, delete_later = 0xdddddddddddddddd}, on_delete = 0xdddddddddddddddd,
on_delete_data = 0xdddddddddddddddd, coroutine_origin_tracking_depth = -572662307, async_gen_firstiter = 0xdddddddddddddddd,
async_gen_finalizer = 0xdddddddddddddddd, context = 0xdddddddddddddddd, context_ver = 15987178197214944733, id = 15987178197214944733,
datastack_chunk = 0xdddddddddddddddd, datastack_top = 0xdddddddddddddddd, datastack_limit = 0xdddddddddddddddd, exc_state = {exc_value = 0xdddddddddddddddd,
previous_item = 0xdddddddddddddddd}, root_cframe = {current_frame = 0xdddddddddddddddd, previous = 0xdddddddddddddddd}}
(gdb) thread 1
[Switching to thread 1 (Thread 0x7ffff7c6c280 (LWP 1539799))]
#0 Py_FinalizeEx () at Python/pylifecycle.c:1828
1828 if (flush_std_files() < 0) {
(gdb) bt
#0 Py_FinalizeEx () at Python/pylifecycle.c:1828
#1 0x00005555558c6dd4 in Py_Exit (sts=0) at Python/pylifecycle.c:3002
#2 0x00005555558cf99e in handle_system_exit () at Python/pythonrun.c:756
#3 0x00005555558cfcea in _PyErr_PrintEx (tstate=0x555555d07728 <_PyRuntime+478216>, set_sys_last_vars=set_sys_last_vars@entry=1) at Python/pythonrun.c:765
#4 0x00005555558d00e3 in PyErr_PrintEx (set_sys_last_vars=set_sys_last_vars@entry=1) at Python/pythonrun.c:845
#5 0x00005555558d00f3 in PyErr_Print () at Python/pythonrun.c:851
#6 0x00005555558d06bb in _PyRun_SimpleFileObject (fp=fp@entry=0x555555d35500, filename=filename@entry=0x7ffff775ac70, closeit=closeit@entry=1,
flags=flags@entry=0x7fffffffdfa8) at Python/pythonrun.c:439
#7 0x00005555558d08bd in _PyRun_AnyFileObject (fp=fp@entry=0x555555d35500, filename=filename@entry=0x7ffff775ac70, closeit=closeit@entry=1,
flags=flags@entry=0x7fffffffdfa8) at Python/pythonrun.c:78
#8 0x00005555558f80d7 in pymain_run_file_obj (program_name=program_name@entry=0x7ffff77a95b0, filename=filename@entry=0x7ffff775ac70, skip_source_first_line=0)
at Modules/main.c:360
#9 0x00005555558f83b9 in pymain_run_file (config=config@entry=0x555555ce9a60 <_PyRuntime+356160>) at Modules/main.c:379
#10 0x00005555558f9480 in pymain_run_python (exitcode=exitcode@entry=0x7fffffffe11c) at Modules/main.c:610
#11 0x00005555558f94d8 in Py_RunMain () at Modules/main.c:689
#12 0x00005555558f952c in pymain_main (args=args@entry=0x7fffffffe160) at Modules/main.c:719
#13 0x00005555558f95a1 in Py_BytesMain (argc=<optimized out>, argv=<optimized out>) at Modules/main.c:743
#14 0x000055555565273e in main (argc=<optimized out>, argv=<optimized out>) at ./Programs/python.c:15
```
This looks like a race condition between the main thread and threads that are started and still running at finalization, whose `tstate` the main thread is trying to delete, although I'm not an expert and didn't do further debugging.
Platform: `Linux 5.10.0-21-amd64 #1 SMP Debian 5.10.162-1 (2023-01-21) x86_64 GNU/Linux`
<!-- gh-linked-prs -->
### Linked PRs
* gh-104826
* gh-105277
* gh-109056
* gh-109133
* gh-109134
<!-- /gh-linked-prs -->
| ce558e69d4087dd3653207de78345fbb8a2c7835 | eaff9c39aa1a70d401521847cc35bec883ae9772 |
python/cpython | python__cpython-104708 | # PEP 695 changed how class decorators are traced
Commit 24d8b88420b81fc60aeb0cbcacef1e72d633824a ("gh-103763: Implement PEP 695 (#103764)") changed how nested class decorators are traced.
Example file nightly-20230518.py:
```python
def decorator(arg):
def _dec(c):
return c
return _dec
@decorator(6)
@decorator(
len([8]),
)
class MyObject(
object
):
X = 13
a = 14
```
Tracing with 3.12.0a7:
```
% python3.12 -m trace --trace nightly-20230518.py
--- modulename: nightly-20230518, funcname: <module>
nightly-20230518.py(1): def decorator(arg):
nightly-20230518.py(6): @decorator(6)
--- modulename: nightly-20230518, funcname: decorator
nightly-20230518.py(2): def _dec(c):
nightly-20230518.py(4): return _dec
nightly-20230518.py(7): @decorator(
nightly-20230518.py(8): len([8]),
nightly-20230518.py(7): @decorator(
--- modulename: nightly-20230518, funcname: decorator
nightly-20230518.py(2): def _dec(c):
nightly-20230518.py(4): return _dec
nightly-20230518.py(10): class MyObject( <*****
nightly-20230518.py(11): object
nightly-20230518.py(10): class MyObject(
--- modulename: nightly-20230518, funcname: MyObject
nightly-20230518.py(6): @decorator(6)
nightly-20230518.py(13): X = 13
nightly-20230518.py(7): @decorator(
--- modulename: nightly-20230518, funcname: _dec
nightly-20230518.py(3): return c
nightly-20230518.py(6): @decorator(6)
--- modulename: nightly-20230518, funcname: _dec
nightly-20230518.py(3): return c
nightly-20230518.py(10): class MyObject(
nightly-20230518.py(14): a = 14
```
```
% python3.12 -c "import sys; print(sys.version)"
3.12.0a7 (main, Apr 5 2023, 05:51:58) [Clang 14.0.3 (clang-1403.0.22.14.1)]
```
Running with newer code:
```
% /usr/local/cpython/bin/python3 -m trace --trace nightly-20230518.py
--- modulename: nightly-20230518, funcname: <module>
nightly-20230518.py(1): def decorator(arg):
nightly-20230518.py(6): @decorator(6)
--- modulename: nightly-20230518, funcname: decorator
nightly-20230518.py(2): def _dec(c):
nightly-20230518.py(4): return _dec
nightly-20230518.py(7): @decorator(
nightly-20230518.py(8): len([8]),
nightly-20230518.py(7): @decorator(
--- modulename: nightly-20230518, funcname: decorator
nightly-20230518.py(2): def _dec(c):
nightly-20230518.py(4): return _dec
nightly-20230518.py(6): @decorator(6) <*****
nightly-20230518.py(11): object
nightly-20230518.py(10): class MyObject(
--- modulename: nightly-20230518, funcname: MyObject
nightly-20230518.py(6): @decorator(6)
nightly-20230518.py(13): X = 13
nightly-20230518.py(7): @decorator(
--- modulename: nightly-20230518, funcname: _dec
nightly-20230518.py(3): return c
nightly-20230518.py(6): @decorator(6)
--- modulename: nightly-20230518, funcname: _dec
nightly-20230518.py(3): return c
nightly-20230518.py(10): class MyObject(
nightly-20230518.py(14): a = 14
```
```
% /usr/local/cpython/bin/python3 -c "import sys; print(sys.version)"
3.12.0a7+ (tags/v3.12.0a7-548-g24d8b88420:24d8b88420, May 20 2023, 06:39:38) [Clang 14.0.3 (clang-1403.0.22.14.1)]
```
The different lines are marked with `<*****`.
cc: @JelleZijlstra @markshannon
<!-- gh-linked-prs -->
### Linked PRs
* gh-104708
<!-- /gh-linked-prs -->
| cd9748409aa877d6d9905730bf68f25cf7a6a723 | 64d1b44a5459605af3e8572d4febfde021315633 |
python/cpython | python__cpython-104647 | # Modernise code in `Tools/clinic/`
# Feature or enhancement
Modernise some Python anachronisms in `Tools/clinic/`, to make the code more readable and maintainable. Some things that can be easily done:
- [x] Use `dict` instead of `OrderedDict` (#104647)
- [x] Simplify code like this by using the `str.removesuffix` and `str.removeprefix` methods, both new in Python 3.9:
https://github.com/python/cpython/blob/06eeee97e36aa6bb3d21d7cbc288763ae3a7b21e/Tools/clinic/clinic.py#L2572-L2575
(#104685)
- [x] [Pyupgrade](https://github.com/asottile/pyupgrade) (run using the `--py310-plus` setting) also finds various things that can be modernised (#104684)
- [ ] Refactor CLanguage.output_templates (see https://github.com/python/cpython/issues/104683#issuecomment-1633266536)
- [ ] Put templating in a separate .py file, kind of like Tools/clinic/cpp.py
- [x] #107468
- [x] #107467
Cc. @erlend-aasland
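As a sketch of the `removesuffix` modernisation mentioned above (the names below are illustrative, not taken from clinic.py):

```python
# Before Python 3.9, stripping a known affix typically used manual
# slicing, which is easy to get wrong when the affix is absent.
name = "_module_converter"

# Old idiom:
old = name[:-len("_converter")] if name.endswith("_converter") else name

# Modern idiom (3.9+): a no-op when the suffix is absent.
new = name.removesuffix("_converter")
print(new)
```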
<!-- gh-linked-prs -->
### Linked PRs
* gh-104647
* gh-104684
* gh-104685
* gh-104696
* gh-104729
* gh-104730
* gh-106362
* gh-106443
* gh-106444
* gh-106445
* gh-106476
* gh-106477
* gh-106478
* gh-106652
* gh-106698
* gh-106721
* gh-106837
* gh-107289
* gh-107340
* gh-107435
* gh-107439
* gh-107541
* gh-107543
* gh-107550
* gh-107551
* gh-107556
* gh-107623
* gh-107635
* gh-107638
* gh-107667
* gh-107770
* gh-107771
* gh-107790
* gh-107840
* gh-107964
* gh-108092
* gh-108552
<!-- /gh-linked-prs --> | 02b60035ceb735c3f7dd5bb7dc6a1508748a7a8d | 06eeee97e36aa6bb3d21d7cbc288763ae3a7b21e |
python/cpython | python__cpython-104682 | # Syntax highlighting issue in Turtle docs
# Documentation
A couple of function docs
- https://docs.python.org/3/library/turtle.html#turtle.setposition
- https://docs.python.org/3/library/turtle.html#turtle.pencolor
- https://docs.python.org/3/library/turtle.html#turtle.fillcolor
- https://docs.python.org/3/library/turtle.html#turtle.color
- https://docs.python.org/3/library/turtle.html#turtle.filling
- https://docs.python.org/3/library/turtle.html#turtle.shearfactor

seem to not have syntax highlighting working properly.
On inspection, I see this was due to a small indentation issue.
<!-- gh-linked-prs -->
### Linked PRs
* gh-104682
* gh-104695
<!-- /gh-linked-prs -->
| 2c97878bb8a09f5aba8bf413bb794f6efd5065df | ff7f7316326a19749c5d79f9e44acdbe7d54ac4e |
python/cpython | python__cpython-104674 | # The `PyOS_*` hooks interact poorly with subinterpreters
`PyOS_InputHook` and `PyOS_ReadlineFunctionPointer` are globally-registered `input` hooks that have no way of recovering module-level state for the extensions that registered them. In practice, extensions like `readline` and `tkinter` get around this by using global state, which obviously isn't subinterpreter-friendly.
What's more, extensions without advertised subinterpreter support (like `readline` and `tkinter`) who register these hooks might find themselves called from within a subinterpreter (where their extension hasn't even been loaded). That's definitely a situation we want to avoid.
It seems like the best solution for 3.12 is to only call these hooks from the main interpreter, which makes the bad situation quite a bit better. If libraries really need better APIs that work per-interpreter and have ways to access module state, we can certainly add them later (but for now maybe we'll just cross our fingers and hope that nobody actually cares).
@ericsnowcurrently, how does this sound to you?
<!-- gh-linked-prs -->
### Linked PRs
* gh-104674
* gh-104760
<!-- /gh-linked-prs -->
| 357bed0bcd3c5d7c4a8caad451754a9a172aca3e | 2c4e29e32260783207568dc8581d65f0022f773a |
python/cpython | python__cpython-104665 | # Plural vs Singular in enum documentation
# Documentation
Hi! I've just seen that classname in the example in the documentation for `IntEnum` (https://docs.python.org/3/library/enum.html#enum.IntEnum) is in the plural (`Numbers`) but for all the other enum examples a classname in the singular is used.
Since a singular classname IMO makes more sense for all enum classes, I think `Numbers` should be changed to `Number` in the documentation.
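For illustration, the suggested singular spelling (the member values here are illustrative, not taken from the docs):

```python
from enum import IntEnum

# Singular class name, as the issue proposes for the documentation example.
class Number(IntEnum):
    ONE = 1
    TWO = 2

# IntEnum members behave like ints, so arithmetic keeps working.
print(Number.ONE + Number.TWO)  # 3
```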
<!-- gh-linked-prs -->
### Linked PRs
* gh-104665
* gh-104666
<!-- /gh-linked-prs -->
| 3ac856e69734491ff8423020c700c9ae86ac9a08 | 27a7d5e1cd5b937d5f164fce572d442672f53065 |
python/cpython | python__cpython-104660 | # Location and text of f-string unclosed quote error is incorrect
Since the PEP 701 changes, the error for unclosed triple-quoted f-string literals is now incorrect:
```
x = 1 + 1
y = 2 + 2
z = f"""
sdfjnsdfjsdf
sdfsdfs{1+
2}dfigdf {3+
4}sdufsd
""
```
Executing this file will show:
```
File "/Users/pgalindo3/github/python/python_tokenizer_marta/lol.py", line 8
4}sdufsd
^
SyntaxError: unterminated triple-quoted f-string literal (detected at line 13)
```
both the line number and the text are wrong.
<!-- gh-linked-prs -->
### Linked PRs
* gh-104660
<!-- /gh-linked-prs -->
| ff7f7316326a19749c5d79f9e44acdbe7d54ac4e | 663c049ff78a299bdf7c1a0444b9900e6d37372d |
python/cpython | python__cpython-104657 | # PEP 695: Rename typeparams to type_params in AST
PEP-695 introduces a few new AST attributes that are currently called `typeparams`, as specified in https://peps.python.org/pep-0695/#ast-changes. However, @AlexWaygood rightly points out (https://github.com/python/cpython/pull/104642#discussion_r1198996246) that `type_params` would be more readable and in line with most of the rest of the AST. Should we change it?
cc @erictraut for this PEP, @cdce8p who made some AST changes in this area before, @pablogsal @isidentical as AST experts.
<!-- gh-linked-prs -->
### Linked PRs
* gh-104657
<!-- /gh-linked-prs -->
| a5f244d627a6815cf2d8ccec836b9b52eb3e8de2 | ef5d00a59207a63c6d5ae0d5d44054847d1bf3b5 |
python/cpython | python__cpython-104646 | # marshal tests are sloppy with error checking
pymarshal_write_long_to_file and pymarshal_write_object_to_file pretend that PyMarshal_WriteLongToFile/PyMarshal_WriteObjectToFile can set an error.
pymarshal_read_last_object_from_file and pymarshal_read_object_from_file don't check for errors so they can call Py_BuildValue with a NULL arg.
<!-- gh-linked-prs -->
### Linked PRs
* gh-104646
* gh-104663
<!-- /gh-linked-prs -->
| ac56a854b418d35ad3838f3072604227dc718fca | 8f1f3b9abdaa3e9d19aad22d6c310eb1f05ae5c2 |
python/cpython | python__cpython-104641 | # Walrus in comprehension can leak into PEP 695 scope
Example:
```pycon
>>> class B: pass
...
>>> class X[T]([(x := 3) for _ in range(2)] and B): print(x)
...
3
```
This is because a walrus in a comprehension assigns to the immediately enclosing scope, which is now the type parameter scope. I don't think this causes any concrete problem right now, but I don't like it because I'd like to preserve the invariant that type parameters are the only names that can be defined in PEP 695 scopes.
<!-- gh-linked-prs -->
### Linked PRs
* gh-104641
<!-- /gh-linked-prs -->
| 8a8853af24b41a73653d07dcc2f44d05b10e0673 | ab8f54668b0c49186f1da8e127e303ca73220017 |
python/cpython | python__cpython-104630 | # Clinic tests are skipped on Windows
`test_clinic` is skipped on Windows. This is a regression introduced by gh-96178.
- The `Modules/_testclinic.c` helper extension module is not included in the Windows build
- `test.support.import_helper.import_module("_testclinic")` fails and skips the whole test, since `_testclinic` is not found
<!-- gh-linked-prs -->
### Linked PRs
* gh-104630
* gh-104632
* gh-104723
* gh-107393
<!-- /gh-linked-prs -->
| 86ee49f469b84e4b746526a00d8191d0e374a268 | b9dce3aec46bf5190400bd8239fdd4ea9e64d674 |
python/cpython | python__cpython-104624 | # Bump SQLite to 3.42.0 on Windows and macOS installers
SQLite 3.42.0 was [released recently](https://sqlite.org/releaselog/3_42_0.html). Let's just bump the installers now. If patch releases appear (they probably will), we can just bump those during the beta.
See also https://github.com/python/cpython/issues/102997#issuecomment-1550108229
cc. @felixxm
<!-- gh-linked-prs -->
### Linked PRs
* gh-104624
* gh-104625
* gh-104633
* gh-104643
<!-- /gh-linked-prs -->
| fd04bfeaf7a4531120ad450dbd1afc121a2523ee | 70c77964778817907fbcc2a047a2abad4eb6e127 |
python/cpython | python__cpython-107184 | # Subinterpreters can load modules without subinterpreter support
The first time a subinterpreter attempts to import a module without proper subinterpreter support, an `ImportError` is raised (which is correct). However, *subsequent* imports of the same module succeed!
Using `readline` as an example, since it's currently single-phase init:
```py
>>> from _xxsubinterpreters import create, run_string
>>> s = "import readline; print(readline)"
>>> interp = create()
>>> run_string(interp, s)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
_xxsubinterpreters.RunFailedError: <class 'ImportError'>: module readline does not support loading in subinterpreters
>>> run_string(interp, s)
<module 'readline'>
```
<!-- gh-linked-prs -->
### Linked PRs
* gh-107184
* gh-107360
<!-- /gh-linked-prs -->
| 017f047183fa33743f7e36c5c360f5c670032be3 | bcdd3072316181b49d94567bb648825a07ca9ae1 |
python/cpython | python__cpython-104637 | # PEP 709: Comprehension iteration variable leaks into outer locals()
Code:
```python
def __b():
[__a for __b in [__b] for _ in []]
return locals()
```
On #104603, calling this gives:
```
>>> __b()
{'__b': <function __b at 0x100e3e390>}
```
But on 3.11:
```
>>> __b()
{}
```
So the locals() in the outer function can now include comprehension variables from the inner comprehension. cc @carljm
<!-- gh-linked-prs -->
### Linked PRs
* gh-104637
<!-- /gh-linked-prs -->
| 70c77964778817907fbcc2a047a2abad4eb6e127 | 86e6f16ccb97f66f2b9a31191ce347dca499d48c |
python/cpython | python__cpython-104620 | # compiler can incorrectly optimize a run of stores to the same name preceded by a SWAP
If the `apply_static_swaps` optimization in the compiler sees the instruction sequence `SWAP 2; STORE_FAST a; STORE_FAST a`, it will optimize that by removing the `SWAP` and swapping the two instructions, resulting in `STORE_FAST a; STORE_FAST a`.
But of course, in this case the two instructions are identical, and their ordering matters because they store to the same location. So this change results in the wrong value being stored to `a`.
This was exposed by comprehension inlining, since it can result in this bytecode sequence for code in the form `a = [1 for a in [0]]` (where the first `STORE_FAST a` is restoring the previous value of `a` from before the comprehension, if any, and the second `STORE_FAST a` is storing the result of the comprehension to `a`.).
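The triggering pattern can be checked from Python; with a correct compiler the comprehension result, not the iteration variable, must be the final value of `a`:

```python
# `a` is both the comprehension's iteration variable and the assignment
# target; with inlined comprehensions this compiles to the
# SWAP 2; STORE_FAST a; STORE_FAST a sequence described above.
a = [1 for a in [0]]
print(a)  # [1] -- the comprehension result must win, not 0
```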
<!-- gh-linked-prs -->
### Linked PRs
* gh-104620
* gh-104636
<!-- /gh-linked-prs -->
| 0589c6a4d3d822cace42050198cb9a5e99c879ad | dcdc90d384723920e8dea0ee04eae8c219333634 |
python/cpython | python__cpython-105122 | # Type object's ob_type does not get set when tp_bases is set before PyType_Ready
# Bug report
This came up while testing numpy on 3.12. A bug report has been filed on numpy/numpy#23766. It's happening due to #103912, which introduced [an additional check for `tp_bases` not being NULL](https://github.com/python/cpython/blob/cfa517d5a68bae24cbe8d9fe6b8e0d4935e507d2/Objects/typeobject.c#L6977) in `type_ready_set_bases`, which is called from `PyType_Ready`.
numpy sets `tp_bases` manually before calling `PyType_Ready`, which means that the aforementioned check succeeds, and so [the line that sets `ob_type`](https://github.com/python/cpython/blob/cfa517d5a68bae24cbe8d9fe6b8e0d4935e507d2/Objects/typeobject.c#L7012) does not get executed (it used to before #103912), which leads to a segmentation fault [later on, when trying to set `mro`](https://github.com/python/cpython/blob/cfa517d5a68bae24cbe8d9fe6b8e0d4935e507d2/Objects/typeobject.c#L2152).
This looks like a bug, but I'm not sure whether that's expected and numpy should be adjusted. If the latter is true, should a note be added in the `What's new` document?
<!-- gh-linked-prs -->
### Linked PRs
* gh-105122
* gh-105211
* gh-105225
* gh-105248
<!-- /gh-linked-prs -->
| 146939306adcff706ebddb047f7470d148125cdf | 3698fda06eefb3c01e78c4c07f46fcdd0559e0f6 |
python/cpython | python__cpython-104651 | # Remove the `PREDICT` macros.
The `PREDICT()` macros are supposed to speed up dispatch on platforms without computed gotos (i.e. Windows).
However the "predictions" are nothing like the actual [behavior observed](https://github.com/faster-cpython/ideas/blob/main/stats/pystats-2023-05-14-python-7d2deaf.md#pair-counts), especially with PEP 659.
We might want to insert something like the `PREDICT` macros automatically, but the manually inserted ones are useless, or worse, and should be removed.
<!-- gh-linked-prs -->
### Linked PRs
* gh-104651
* gh-105459
<!-- /gh-linked-prs -->
| 064de0e3fca014e5225830a35766fb7867cbf403 | d63a7c3694d5c4484fcaa01c33590b1d4bc2559e |
python/cpython | python__cpython-104603 | # PEP 709: Crash with lambda + nested scope
This crashes on main:
```python
def a():
def a():
[(lambda : b) for b in [a]]
print(b)
```
The original reproducer (found with my fork of https://github.com/carljm/compgenerator) was
```python
class a:
def a():
class a:
[(lambda : (a := a[(a := 2)])[b]) for b in (lambda b, a: 7)[a]]
[][2] = b
(1)[lambda a: a] = 4
(2)[2] = b = a
(4)[lambda b, a: b] = a = lambda : 1
```
Found in https://github.com/python/cpython/pull/104528#issuecomment-1551772622 but turned out be unrelated to that PR.
<!-- gh-linked-prs -->
### Linked PRs
* gh-104603
* gh-104639
<!-- /gh-linked-prs -->
| dbe171e6098fbb96beed81db2c34f6428109e005 | 8a8853af24b41a73653d07dcc2f44d05b10e0673 |
python/cpython | python__cpython-104601 | # function.__type_params__ and type.__type_params__ should be writable
The PEP-695 implementation added a new attribute `__type_params__` to functions. I made this field read-only, but I realized there is a use case for writing to it: `functools.wraps`, when wrapping a generic function, should add the `.__type_params__` to the wrapper. Making it writable is also more consistent with other fields on functions, as even the `__name__` of functions is writable.
The PEP also adds a `__type_params__` attribute to classes and type aliases. For classes it's already writable (it's just stored in the type's `__dict__`). For type aliases it's readonly, but as I don't see a use case for mutating a type alias's `__type_params__`, I'd like to keep it that way.
<!-- gh-linked-prs -->
### Linked PRs
* gh-104601
* gh-104634
<!-- /gh-linked-prs -->
| 3fadd7d5857842fc5cddd4c496b73161b0bcb421 | f7835fc7e9617cefd87e72002916e258f589c857 |
python/cpython | python__cpython-105100 | # Pluggable optimizer API
We need an API for optimizers to be plugged in to CPython.
The proposed model is that of client server, where the VM is the client and the optimizer is the server.
The optimizer registers with the VM, then VM calls the optimizer when hotspots are detected.
The API:
```C
typedef struct {
PyObject_HEAD
_PyInterpreterFrame *(*execute)(PyExecutorObject *self, _PyInterpreterFrame *frame, PyObject **stack_pointer);
/* Data needed by the executor goes here, but is opaque to the VM */
} PyExecutorObject;
/* This would be nicer as an enum, but C doesn't define the size of enums */
#define PY_OPTIMIZE_FUNCTION_ENTRY 1
#define PY_OPTIMIZE_RESUME_AFTER_YIELD 2
#define PY_OPTIMIZE_BACK_EDGE 4
typedef uint32_t PyOptimizerCapabilities;
typedef struct {
PyObject_HEAD
PyExecutorObject *(*compile)(PyOptimizerObject* self, PyCodeObject *code, int offset);
PyOptimizerCapabilities capabilities;
float optimization_cost;
float run_cost;
/* Data needed by the compiler goes here, but is opaque to the VM */
} PyOptimizerObject;
void _Py_Executor_Replace(PyCodeObject *code, int offset, PyExecutorObject *executor);
int _Py_Optimizer_Register(PyOptimizerObject* optimizer);
```
The semantics of a `PyExecutorObject` is that upon return from its `execute` function, the VM state will have advanced by `N` instructions, where `N` is a non-negative integer.
Full discussion here: https://github.com/faster-cpython/ideas/discussions/380
This is not a replacement for PEP 523. That will need a PEP. We should get this working first, before we consider replacing PEP 523.
<!-- gh-linked-prs -->
### Linked PRs
* gh-105100
* gh-105244
* gh-105683
* gh-105924
* gh-106126
* gh-106131
* gh-106141
* gh-106146
* gh-106163
* gh-106171
* gh-106277
* gh-106393
* gh-106484
* gh-106489
* gh-106492
* gh-106497
* gh-106500
* gh-106526
* gh-106641
* gh-106908
* gh-107513
* gh-108953
* gh-108961
* gh-109347
* gh-110593
<!-- /gh-linked-prs -->
| 4bfa01b9d911ce9358cf1a453bee15554f8e4c07 | 601ae09f0c8eda213b9050892f5ce9b91f0aa522 |
python/cpython | python__cpython-104581 | # Remove `eval_breaker` and `kwnames` local variables from interpreter definition.
Currently there are quite a few local variables in `_PyEval_EvalFrameDefault()`.
Each of these is:
* A candidate for register allocation, confusing the compiler.
* Potentially adds extra state to the interpreter
* Adds complexity to automatically generated interpreters and compilers
The local variables break down into three classes:
* The state of the interpreter
* Transient values
* Values for gathering stats, debugging, etc.
The last category are not present in release builds, so we can ignore those.
The transient values, like `opcode` and `oparg` are an artifact of the dispatching mechanism and will be absent for compiled code, or different for a different interpreter.
It is the remaining values that matter.
Each of these will consume a register in compiled code, and need to kept in sync where there is redundancy.
The current values are:
* `next_instr` -- The VM instruction pointer
* `frame` -- Pointer to the current frame
* `stackpointer` -- Pointer to the top of stack
* `tstate` -- Pointer to the current thread
* `eval_breaker` -- Pointer to the eval-breaker
* `kwnames` -- Pointer to the keyword names, if any for the next call
`next_instr` and `frame` are fundamental to VM operation.
Hypothetically `stackpointer` can be eliminated by a compiler, and `tstate` can be eliminated by some clever stack arrangement, or using thread-local storage. However, these approaches are complex and may not be worthwhile.
`eval_breaker` can easily be eliminated, as it is redundant. `eval_breaker == tstate->interp->eval_breaker`
`kwnames` is logically transient, in that it is only valid between the `KW_NAMES` instruction and a `CALL`. However, from the very local view of a single instruction, it is effectively an interpreter-wide value.
It can be removed by pushing the kwnames to the stack, but that is likely to cause a slowdown. We need to experiment.
<!-- gh-linked-prs -->
### Linked PRs
* gh-104581
* gh-107383
<!-- /gh-linked-prs -->
| 68b5f08b72e02f62ec787bfbb7aa99bac661daec | 662aede68b0ea222cf3db4715b310e91c51b665f |
python/cpython | python__cpython-104573 | # Error message for walrus etc. in PEP 695 contexts could be improved
Before:
```
>>> type A[T: (x:=3)] = int
File "<stdin>", line 1
SyntaxError: 'named expression' can not be used within a TypeVar bound
```
After:
```
>>> type A[T: (x:=3)] = int
File "<stdin>", line 1
SyntaxError: named expression cannot be used within a TypeVar bound
```
("cannot" is one word; the quotes don't really make sense here)
<!-- gh-linked-prs -->
### Linked PRs
* gh-104573
<!-- /gh-linked-prs -->
| 97db2f3e07bf7d56750e215e4f32653bf3867ef8 | 0cb2fdc6217aa7c04b5c798cfd195c8d0f4af353 |
python/cpython | python__cpython-104556 | # Typing: runtime-checkable protocols are broken on `main`
PEP-695 protocols don't work as intended:
Here's the behaviour you get with protocols that use pre-PEP 695 syntax, which is correct:
```pycon
>>> from typing import Protocol, runtime_checkable, TypeVar
>>> T_co = TypeVar("T_co", covariant=True)
>>> @runtime_checkable
... class SupportsAbsOld(Protocol[T_co]):
... def __abs__(self) -> T_co:
... ...
...
>>> isinstance(0, SupportsAbsOld)
True
>>> issubclass(float, SupportsAbsOld)
True
```
And here's the behaviour you get on `main` with protocols that use PEP 695 syntax, which is incorrect:
```pycon
>>> @runtime_checkable
... class SupportsAbsNew[T_co](Protocol):
... def __abs__(self) -> T_co:
... ...
...
>>> isinstance(0, SupportsAbsNew)
False
>>> issubclass(float, SupportsAbsNew)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Users\alexw\coding\cpython\Lib\abc.py", line 123, in __subclasscheck__
return _abc_subclasscheck(cls, subclass)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\alexw\coding\cpython\Lib\typing.py", line 1875, in _proto_hook
raise TypeError("Protocols with non-method members"
TypeError: Protocols with non-method members don't support issubclass()
```
<!-- gh-linked-prs -->
### Linked PRs
* gh-104556
* gh-104559
<!-- /gh-linked-prs -->
| 1163782868454287ca9ac170aaebca4beeb83192 | f40890b124a330b589c8093127be1274e15dbd7f |
python/cpython | python__cpython-104605 | # RTSPS scheme support in urllib.parse
In accordance with
https://www.iana.org/assignments/uri-schemes/uri-schemes.xhtml
There are three valid RTSP schemes:
URI Scheme | Description | Status | Reference
-- | -- | -- | --
rtsp | Real-Time Streaming Protocol (RTSP) | Permanent | [RFC2326][RFC7826]
rtsps | Real-Time Streaming Protocol (RTSP) over TLS | Permanent | [RFC2326][RFC7826]
rtspu | Real-Time Streaming Protocol (RTSP) over unreliable datagram transport | Permanent | [RFC2326]
But in `urllib/parse.py` only two of them are defined:
- rtsp
- rtspu
This makes it impossible to use functions like `urllib.parse.urljoin()` on `rtsps://*` URLs:
```
Type "help", "copyright", "credits" or "license" for more information.
>>> from urllib.parse import urljoin
>>> urljoin('rtsps://127.0.0.1/foo/bar', 'trackID=1')
'trackID=1'
```
Expected result is:
```
Type "help", "copyright", "credits" or "license" for more information.
>>> from urllib.parse import urljoin
>>> urljoin('rtsps://127.0.0.1/foo/bar', 'trackID=1')
'rtsps://127.0.0.1/foo/trackID=1'
>>>
```
I was able to get the expected behavior by patching `urllib/parse.py`:
```
--- parse.py.bak 2023-05-16 16:16:01.910634526 +0000
+++ parse.py 2023-05-16 16:17:35.793154580 +0000
@@ -46,17 +46,17 @@
uses_relative = ['', 'ftp', 'http', 'gopher', 'nntp', 'imap',
'wais', 'file', 'https', 'shttp', 'mms',
- 'prospero', 'rtsp', 'rtspu', 'sftp',
+ 'prospero', 'rtsp', 'rtsps', 'rtspu', 'sftp',
'svn', 'svn+ssh', 'ws', 'wss']
uses_netloc = ['', 'ftp', 'http', 'gopher', 'nntp', 'telnet',
'imap', 'wais', 'file', 'mms', 'https', 'shttp',
- 'snews', 'prospero', 'rtsp', 'rtspu', 'rsync',
+ 'snews', 'prospero', 'rtsp', 'rtsps', 'rtspu', 'rsync',
'svn', 'svn+ssh', 'sftp', 'nfs', 'git', 'git+ssh',
'ws', 'wss']
uses_params = ['', 'ftp', 'hdl', 'prospero', 'http', 'imap',
- 'https', 'shttp', 'rtsp', 'rtspu', 'sip', 'sips',
+ 'https', 'shttp', 'rtsp', 'rtsps', 'rtspu', 'sip', 'sips',
'mms', 'sftp', 'tel']
# These are not actually used anymore, but should stay for backwards
@@ -66,7 +66,7 @@
'telnet', 'wais', 'imap', 'snews', 'sip', 'sips']
uses_query = ['', 'http', 'wais', 'imap', 'https', 'shttp', 'mms',
- 'gopher', 'rtsp', 'rtspu', 'sip', 'sips']
+ 'gopher', 'rtsp', 'rtsps', 'rtspu', 'sip', 'sips']
uses_fragment = ['', 'ftp', 'hdl', 'http', 'gopher', 'news',
'nntp', 'wais', 'https', 'shttp', 'snews',
```
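Alternatively, the scheme lists are plain module-level lists, so a runtime workaround is possible without patching the stdlib. This is only a sketch that relies on mutating `urllib.parse`'s public lists in place:

```python
from urllib import parse

# Register the missing scheme before calling urljoin (no-op if already present).
for lst in (parse.uses_relative, parse.uses_netloc, parse.uses_params):
    if 'rtsps' not in lst:
        lst.append('rtsps')

print(parse.urljoin('rtsps://127.0.0.1/foo/bar', 'trackID=1'))
# rtsps://127.0.0.1/foo/trackID=1
```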
The issue presents in Python versions:
- 3.10.6
- 3.12.0a7
<!-- gh-linked-prs -->
### Linked PRs
* gh-104605
* gh-105759
* gh-105760
<!-- /gh-linked-prs -->
| f3266c05b6186ab6d1db0799c06b8f76aefe7cf1 | 4cefe3cf10f498c0927ae4fdba4880d5a64826e4 |
python/cpython | python__cpython-104550 | # TypeAliasType should have a __module__
Currently the `__module__` attribute of TypeAliasType instances (created through `type X = ...`) is always `"typing"`. Let's set it to the actual module.
<!-- gh-linked-prs -->
### Linked PRs
* gh-104550
<!-- /gh-linked-prs -->
| b9dce3aec46bf5190400bd8239fdd4ea9e64d674 | 1c55e8d00728ceabd97cd1a5bd4906c9875a80c6 |
python/cpython | python__cpython-104537 | # multiprocessing.process._children is not multithread-safe, BaseProcess.close() function would cause AttributeError
# Bug report
I have a project that uses threading.Thread and multiprocessing.Process for handling concurrent tasks. Under certain circumstances, when I create a new `Process()` instance and call its `Process.start()` method, an error occurs:
`AttributeError: 'NoneType' object has no attribute 'poll'`
I tried debugging my project and found that the `multiprocessing.process` module uses a global variable `_children` to manage all child processes. Access to this variable is not thread-safe, so the error occurs occasionally. I made a simple test with a minor modification to `multiprocessing.process`:
test.py
```py
from multiprocessing import Process
from threading import Thread
import time
def helper():
time.sleep(0.1)
return 1
def close_process(p_: Process):
p_.terminate()
p_.close()
if __name__ == "__main__":
process_list = []
for _ in range(1):
p = Process(target=helper)
p.start()
time.sleep(0.2)
process_list.append(p)
for _ in range(1):
t = Thread(target=close_process, args=(process_list[_],))
t.start()
new_p = Process(target=helper)
time.sleep(0.2)
new_p.start()
```
`multiprocessing/process.py`, class `BaseProcess`:
```py
def close(self):
'''
Close the Process object.
This method releases resources held by the Process object. It is
an error to call this method if the child process is still running.
'''
if self._popen is not None:
if self._popen.poll() is None:
raise ValueError("Cannot close a process while it is still running. "
"You should first call join() or terminate().")
self._popen.close()
self._popen = None
import time
time.sleep(5)
del self._sentinel
_children.discard(self)
self._closed = True
```
I simply added a `time.sleep()` call to `BaseProcess.close()`, and now every time I run `test.py` the error occurs.
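For comparison, a minimal sketch of the race-free pattern: guard every mutation of a shared registry (analogous to `_children`) with a lock. This is only an illustration, not the stdlib's actual fix:

```python
import threading

# Hypothetical registry standing in for multiprocessing.process._children.
_children = set()
_children_lock = threading.Lock()

def register(p):
    with _children_lock:
        _children.add(p)

def close(p):
    with _children_lock:
        _children.discard(p)

for i in range(100):
    register(i)
threads = [threading.Thread(target=close, args=(i,)) for i in range(100)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(len(_children))  # 0
```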
# Your environment
- CPython versions tested on: python 3.7
- Operating system and architecture: windows 10 or linux el7
<!-- gh-linked-prs -->
### Linked PRs
* gh-104537
* gh-104737
<!-- /gh-linked-prs -->
| ef5d00a59207a63c6d5ae0d5d44054847d1bf3b5 | b9c807a260f63284f16e25b5e98e18191f61a05f |
python/cpython | python__cpython-104531 | # Switch Python to using native Win32 condition variables
`Py_HAVE_CONDVAR` denotes support for Python condition variable APIs. On Win32, the `_PY_EMULATED_WIN_CV` macro exists to allow condition variables to be emulated using Win32 critical section and semaphore APIs.
[Native Win32 support for condition variables](https://learn.microsoft.com/en-us/windows/win32/sync/using-condition-variables) has existed from Windows Vista onwards (courtesy of my friend and fellow ex-MSFT colleague Neill Clift).
An implementation of the Python condition variable API also exists with `_PY_EMULATED_WIN_CV` set to `0`, which uses native Win32 condition variable / SRW lock (mutex) support.
Since [support for Windows XP was dropped as of Python 3.5](https://docs.python.org/3.5/whatsnew/3.5.html#unsupported-operating-systems), I propose moving to the native Win32 condition variable APIs, as they are more efficient and easier to maintain.
<!-- gh-linked-prs -->
### Linked PRs
* gh-104531
<!-- /gh-linked-prs -->
| b3f0b698daf2438a6e59d5d19ccb34acdba0bffc | d29f57f6036353b4e705a42637177442bf7e07e5 |
python/cpython | python__cpython-104857 | # zipfile.write should check that it isn't appending an archive to itself
A common use pattern for zipfile is to recursively compress an entire directory into a single .zip file. A common implementation of this pattern looks like this:
```python
#!/usr/bin/env python3
import zipfile, pathlib
rootpath=pathlib.Path('targetdir')
with zipfile.ZipFile('outputfile', 'w') as archive:
for file_path in sorted(rootpath.rglob('*')):
arcname=file_path.relative_to(rootpath)
archive.write(file_path, arcname.as_posix())
```
However, if outputfile is a path that is a child of targetdir, this results in the operation hanging once the `rglob` operation eventually causes the archive to attempt to write `outputfile` into `outputfile`, causing the write operation to continue indefinitely until the filesystem runs out of space or the archive hits its max file size, as can be observed in this example:
```python
#!/usr/bin/env python3
import zipfile, pathlib
rootpath=pathlib.Path('./')
with zipfile.ZipFile('./foo.zip', 'w') as archive:
for file_path in sorted(rootpath.rglob('*')):
arcname=file_path.relative_to(rootpath)
archive.write(file_path, arcname.as_posix())
```
Needless to say, this is hardly an intuitive error path, and can cause difficulties with debugging.
Note that it is not simply third-party libraries that allow this error to happen. Neither zipapp nor shutil.make_archive includes a check making sure the output file is not a child of the target dir.
There are two ways I think this could be fixed:
* make zipfile.write simply check that self.file and filename are not equal, raising a ValueError if they are
* patch all users of zipfile to silently skip over outputfile when they are compressing.
I think the first, at the least, should be implemented, in order to provide an actual error message in this situation, instead of hanging for however long it takes for someone to notice a multi-GB zip file growing by the second. I'll be submitting a PR doing so shortly.
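A hedged sketch of the first option; `safe_write` here is a hypothetical wrapper, not the proposed `zipfile` API:

```python
import os
import pathlib
import tempfile
import zipfile

def safe_write(archive, file_path, arcname=None):
    """Refuse to add the archive's own backing file to itself (sketch)."""
    backing = getattr(archive.fp, "name", None)  # path the ZipFile writes to
    if (isinstance(backing, (str, os.PathLike)) and os.path.exists(backing)
            and os.path.samefile(backing, file_path)):
        raise ValueError("attempt to write the archive into itself")
    archive.write(file_path, arcname)

with tempfile.TemporaryDirectory() as d:
    root = pathlib.Path(d)
    (root / "data.txt").write_text("payload")
    with zipfile.ZipFile(root / "out.zip", "w") as archive:
        for p in sorted(root.rglob("*")):
            try:
                safe_write(archive, p, p.relative_to(root).as_posix())
            except ValueError:
                pass  # skipped out.zip itself instead of hanging
    with zipfile.ZipFile(root / "out.zip") as zf:
        print(zf.namelist())  # ['data.txt']
```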
<!-- gh-linked-prs -->
### Linked PRs
* gh-104857
* gh-106076
<!-- /gh-linked-prs -->
| 42fabc3ea767f10989363536eaaa9da32616ab57 | c9e231de8551ab6d06c92dfa95033150e52d7f1f |
python/cpython | python__cpython-104516 | # Ref leaks introduced by _io isolation (gh-101948)
Tried on current main.
```python
./python -m test -R 3:3 test_nntplib
0:00:00 load avg: 2.49 Run tests sequentially
0:00:00 load avg: 2.49 [1/1] test_nntplib
beginning 6 repetitions
123456
......
test_nntplib leaked [1222, 1220, 1222] references, sum=3664
test_nntplib leaked [828, 827, 829] memory blocks, sum=2484
test_nntplib failed (reference leak)
== Tests result: FAILURE ==
1 test failed:
test_nntplib
Total duration: 640 ms
Tests result: FAILURE
```
OS: WSL Ubuntu 20.04 & Windows 10
UPD:
Leaked tests:
test_nntplib
test_gzip
test_httpservers
test_xmlrpc
test_tarfile
<!-- gh-linked-prs -->
### Linked PRs
* gh-104516
<!-- /gh-linked-prs -->
| 442a3e65da2594bedee88a3e81338d86926bde56 | 0bb61dd5b0ffc248e18f1b33cddd18788f28e60a |
python/cpython | python__cpython-108090 | # Run `mypy` on `cases_generator`
# Feature or enhancement
After https://github.com/python/cpython/pull/104421 is merged, we now have the precedent and the needed infrastructure to run `mypy` on things that support type annotations in `Tools/`
# Pitch
Maybe we should run it on `Tools/cases_generator`? It has all annotations in place, it needs very little work. Right now `mypy` finds only 24 errors.
Full list:
```python
» mypy .
parser.py:22: error: Return value expected [return-value]
return
^~~~~~
parser.py:142: error: Missing return statement [return]
def definition(self) -> InstDef | Super | Macro | Family | None:
^
parser.py:168: error: Module has no attribute "OVERRIDE" [attr-defined]
override = bool(self.expect(lx.OVERRIDE))
^~~~~~~~~~~
parser.py:169: error: Module has no attribute "REGISTER" [attr-defined]
register = bool(self.expect(lx.REGISTER))
^~~~~~~~~~~
parser.py:177: error: Argument 3 to "InstHeader" has incompatible type "str";
expected "Literal['inst', 'op', 'legacy']" [arg-type]
... return InstHeader(override, register, kind, name, inp, outp...
^~~~
parser.py:200: error: Incompatible return value type (got
"List[Union[StackEffect, CacheEffect, Node]]", expected
"Optional[List[Union[StackEffect, CacheEffect]]]") [return-value]
return [inp] + rest
^~~~~~~~~~~~
parser.py:202: error: List item 0 has incompatible type "Node"; expected
"Union[StackEffect, CacheEffect]" [list-item]
return [inp]
^~~
parser.py:228: error: Missing return statement [return]
def cache_effect(self) -> CacheEffect | None:
^
parser.py:241: error: Missing return statement [return]
def stack_effect(self) -> StackEffect | None:
^
parser.py:249: error: Module has no attribute "IF" [attr-defined]
if self.expect(lx.IF):
^~~~~
parser.py:284: error: Missing return statement [return]
def super_def(self) -> Super | None:
^
parser.py:295: error: Missing return statement [return]
def ops(self) -> list[OpName] | None:
^
parser.py:304: error: Missing return statement [return]
def op(self) -> OpName | None:
^
parser.py:309: error: Missing return statement [return]
def macro_def(self) -> Macro | None:
^
parser.py:320: error: Missing return statement [return]
def uops(self) -> list[UOp] | None:
^
parser.py:328: error: Incompatible return value type (got "List[Node]", expected
"Optional[List[Union[OpName, CacheEffect]]]") [return-value]
return uops
^~~~
parser.py:331: error: Missing return statement [return]
def uop(self) -> UOp | None:
^
parser.py:384: error: Missing return statement [return]
def block(self) -> Block | None:
^
generate_cases.py:395: error: Item "None" of "Optional[Context]" has no attribute
"owner" [union-attr]
filename = context.owner.filename
^~~~~~~~~~~~~
generate_cases.py:598: error: Incompatible types in assignment (expression has type
"Optional[Node]", variable has type "Union[InstDef, Super, Macro, Family, None]")
[assignment]
while thing := psr.definition():
^~~~~~~~~~~~~~~~
generate_cases.py:664: error: Item "None" of "Optional[Family]" has no attribute
"name" [union-attr]
f"Instruction {member} is a member of multiple f...
^
generate_cases.py:675: error: Item "None" of "Optional[Family]" has no attribute
"name" [union-attr]
f"Component {part.instr.name} of macro {...
^
test_generator.py:46: error: Missing positional argument "metadata_filename" in call
to "Analyzer" [call-arg]
a = generate_cases.Analyzer(temp_input.name, temp_output.name)
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
test_generator.py:46: error: Argument 1 to "Analyzer" has incompatible type "str";
expected "List[str]" [arg-type]
a = generate_cases.Analyzer(temp_input.name, temp_output.name)
^~~~~~~~~~~~~~~
Found 24 errors in 3 files (checked 5 source files)
```
If @gvanrossum finds it useful, I can surely work on this.
CC @AlexWaygood
<!-- gh-linked-prs -->
### Linked PRs
* gh-108090
* gh-108112
* gh-108454
<!-- /gh-linked-prs -->
| 28cab71f954f3a14de9f474ce9c4abbd23c97862 | fd195092204aa7fc9f13c5c6d423bc723d0b3520 |
python/cpython | python__cpython-104591 | # IDLE: completions toplevel blank on Tk Aqua 8.7
https://github.com/python/cpython/blob/27d8ecd7f3325a40a967d2d6b6b36b21d5328753/Lib/idlelib/autocomplete_w.py#L185-L190
Using `wm geometry` to move an overrideredirect toplevel far enough offscreen triggers a strange bug in recent Tk Aqua 8.7 (assuming it isn’t actually a macOS bug) which leaves the toplevel completely blank. I have reported it upstream: https://core.tcl-lang.org/tk/tktview/132dd3d350
However, I would like to see if IDLE can make a simple change to avoid the issue, in case Tk Aqua does not address it soon. Moving a toplevel far offscreen to hide it while configuring its contents seems improper, as opposed to something like withdrawing it. Hiding the toplevel to avoid visual artifacts is probably no longer necessary, except maybe on something sufficiently slow like X11 forwarding. IDLE also does not make a similar effort to temporarily hide the toplevel for calltips.
So I would suggest making this `wm_geometry()` call only when `acw._windowingsystem == 'x11'` or at least `acw._windowingsystem != 'aqua'`.
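The suggested guard could look something like the sketch below (the helper name is hypothetical; `acw._windowingsystem` is the Tkinter property the suggestion refers to, and the geometry string is only illustrative):

```python
def hide_completion_window(acw):
    """Hide the completion Toplevel while its contents are configured."""
    if acw._windowingsystem == 'x11':
        # Old behavior: park the window far offscreen (still useful on
        # slow X11 setups such as forwarded displays).
        acw.wm_geometry("250x150+10000+10000")
    else:
        # On aqua/win32, withdrawing avoids the blank-toplevel Tk Aqua bug.
        acw.wm_withdraw()
```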
<!-- gh-linked-prs -->
### Linked PRs
* gh-104591
* gh-104596
* gh-104598
* gh-104599
<!-- /gh-linked-prs -->
| 678bf57ed04b8c250f0bc031ebd264bece76e731 | 7fc8e2d4627cdba5cb0075c9052ed6f4b6ecd36d |
python/cpython | python__cpython-104789 | # Tkinter: Tk 8.7 alphabetizes options in certain error messages
As done by e.g. https://core.tcl-lang.org/tk/info/b7db31b3a38b and causing this Tkinter test to fail:
```
======================================================================
FAIL: test_configure_type (test.test_tkinter.test_widgets.MenuTest.test_configure_type)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/Users/user/git/cpython/Lib/test/test_tkinter/test_widgets.py", line 1403, in test_configure_type
self.checkEnumParam(
File "/Users/user/git/cpython/Lib/test/test_tkinter/widget_tests.py", line 134, in checkEnumParam
self.checkInvalidParam(widget, name, 'spam', errmsg=errmsg)
File "/Users/user/git/cpython/Lib/test/test_tkinter/widget_tests.py", line 63, in checkInvalidParam
self.assertEqual(str(cm.exception), errmsg)
AssertionError: 'bad type "spam": must be menubar, normal, or tearoff' != 'bad type "spam": must be normal, tearoff, or menubar'
- bad type "spam": must be menubar, normal, or tearoff
+ bad type "spam": must be normal, tearoff, or menubar
```
<!-- gh-linked-prs -->
### Linked PRs
* gh-104789
* gh-105028
* gh-105029
<!-- /gh-linked-prs -->
| 897e716d03d559a10dd5015ecb501ceb98955f3a | a989b73e8ebf869dcc71d06127e8797c92260a0f |
python/cpython | python__cpython-104585 | # Tkinter, IDLE: Future Tcl and Tk can have differing version/patchlevel
~~Tk 8.7 will be compatible with both Tcl 8.7 and Tcl 9.0; there is no Tk 9.0 currently in development. The version and/or patchlevel of Tcl and Tk will not necessarily be identical~~, but there are places in Tkinter code which assume they are. A few examples which I am aware of:
https://github.com/python/cpython/blob/48b3617de491f00a3bf978b355074cc8e228d61b/Lib/idlelib/help_about.py#L79
https://github.com/python/cpython/blob/48b3617de491f00a3bf978b355074cc8e228d61b/Lib/test/test_tkinter/support.py#L103
https://github.com/python/cpython/blob/48b3617de491f00a3bf978b355074cc8e228d61b/Lib/test/test_tcl.py#L28
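To query the two patchlevels independently, note that Tcl's own version is available without initializing Tk at all, while Tk's comes from the `tk_patchLevel` variable once Tk is loaded (the guard below is only so the sketch degrades gracefully when tkinter is not built):

```python
try:
    import tkinter
    # A Tcl-only interpreter; no display or Tk library required.
    tcl_patchlevel = tkinter.Tcl().eval('info patchlevel')
except ImportError:
    tcl_patchlevel = None

# Once Tk is actually loaded, its (potentially different) patchlevel
# can be read separately, e.g.:
#   root = tkinter.Tk()
#   tk_patchlevel = root.getvar('tk_patchLevel')
```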
<!-- gh-linked-prs -->
### Linked PRs
* gh-104585
* gh-104587
* gh-107688
* gh-107709
* gh-107719
<!-- /gh-linked-prs -->
| aed643baa968b4959b830d37750080cac546fba7 | c649df63e0d052044a4660101d5769ff46ae9234 |
python/cpython | python__cpython-104495 | # Tkinter: Tk 8.7 quotes pathnames in certain error messages
Tk 8.7 (as of https://core.tcl-lang.org/tk/info/2991150c09f6) adds quotes around window pathnames in a few error messages, including those triggered by two Tkinter tests, causing them to fail:
```
======================================================================
FAIL: test_pack_configure_in (test.test_tkinter.test_geometry_managers.PackTest.test_pack_configure_in)
----------------------------------------------------------------------
_tkinter.TclError: can't pack ".pack.a" inside itself
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Users/user/git/cpython/Lib/test/test_tkinter/test_geometry_managers.py", line 111, in test_pack_configure_in
with self.assertRaisesRegex(TclError,
AssertionError: "can't pack .pack.a inside itself" does not match "can't pack ".pack.a" inside itself"
======================================================================
FAIL: test_place_configure_in (test.test_tkinter.test_geometry_managers.PlaceTest.test_place_configure_in)
----------------------------------------------------------------------
_tkinter.TclError: can't place ".!toplevel2.!frame2" relative to itself
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Users/user/git/cpython/Lib/test/test_tkinter/test_geometry_managers.py", line 295, in test_place_configure_in
with self.assertRaisesRegex(TclError, "can't place %s relative to "
AssertionError: "can't place \.!toplevel2\.!frame2 relative to itself" does not match "can't place ".!toplevel2.!frame2" relative to itself"
```
<!-- gh-linked-prs -->
### Linked PRs
* gh-104495
* gh-104569
<!-- /gh-linked-prs -->
| 3cba61f111db9b5e8ef35632915309f81fff8c6c | 9d41f83c58e6dc2fc6eb4b91f803551850b0adeb |
python/cpython | python__cpython-104493 | # Move BOLT autoconf logic after PGO
# Feature or enhancement
Just a simple refactor to tee us up for better BOLT / PGO integration in the build system.
<!-- gh-linked-prs -->
### Linked PRs
* gh-104493
<!-- /gh-linked-prs -->
| 27d8ecd7f3325a40a967d2d6b6b36b21d5328753 | 48b3617de491f00a3bf978b355074cc8e228d61b |
python/cpython | python__cpython-104491 | # Handle virtual / phony make targets consistently and more robustly
# Feature or enhancement
I already typed a detailed commit message explaining this. It will be explained in full by the forthcoming PR linking to this issue.
<!-- gh-linked-prs -->
### Linked PRs
* gh-104491
<!-- /gh-linked-prs -->
| a6bcc8fb92ffb75bb1907cc568ba9fff516979c3 | b15a1a6ac6ea0d7792036e639e90f0e51400c2ee |
python/cpython | python__cpython-104488 | # PYTHON_FOR_REGEN only works with Python 3.10 or newer
On current `main`, `PYTHON_FOR_REGEN` only works with Python 3.10 or newer.
Previously, we've tried to maintain backwards compatibility with older Python versions in our build system. Last time we adjusted `PYTHON_FOR_REGEN` (gh-98988), we deliberately did not bump the minimal version requirements; our `configure` checks for Python 3.11 through Python 3.6.
While working on gh-104050, a discussion about version requirements came up, and we found out that on `main`, `make clinic` only works when `PYTHON_FOR_REGEN` is Python 3.10 or newer. Also, the newly introduced "generate cases script" uses pattern matching and recently added typing features. As of now, `PYTHON_FOR_REGEN` must be 3.10 or newer.
There are two solutions to this:
1. bump the `PYTHON_FOR_REGEN` version requirements to 3.10 in `configure`
2. amend the problematic build scripts, so older Python versions are still supported
Personally, I think I'm leaning towards the former. Apparently no core dev, triager or other contributor has noticed this issue until now. No buildbot has failed (we don't test the minimum required version; perhaps we should). OTOH, I understand that it makes stuff like bootstrapping slightly more cumbersome.
Anyway, we should probably at least bump the version requirement from Python 3.6 to 3.8.
<!-- gh-linked-prs -->
### Linked PRs
* gh-104488
<!-- /gh-linked-prs -->
| 146106a0f1cc61815fa33f0d3f808a3e3e3275be | 27d8ecd7f3325a40a967d2d6b6b36b21d5328753 |
python/cpython | python__cpython-104565 | # Add case_sensitive argument to `pathlib.PurePath.match()`
In #81079 we added a *case_sensitive* argument to `pathlib.Path.glob()` and `rglob()`. It would be good to have this functionality available in the closely-related `PurePath.match()` method. The rationale is much the same: this argument is useful when dealing with case-sensitive filesystems on Windows, and case-insensitive filesystems on Posix. Also, it would allow this method to be used from `glob()` to implement recursive globbing efficiently, which is important for #102613.
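To illustrate: today, case sensitivity in `match()` is fixed by the path flavour, and the proposed keyword (shown commented out below, mirroring the `glob()` signature from #81079) would let callers override that default:

```python
from pathlib import PurePosixPath, PureWindowsPath

# Current behavior: the flavour decides case sensitivity.
assert PurePosixPath('README.TXT').match('*.txt') is False   # case-sensitive
assert PureWindowsPath('README.TXT').match('*.txt') is True  # case-insensitive

# The proposed keyword would override the default, e.g.:
#   PurePosixPath('README.TXT').match('*.txt', case_sensitive=False)  # True
```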
<!-- gh-linked-prs -->
### Linked PRs
* gh-104565
<!-- /gh-linked-prs -->
| dcdc90d384723920e8dea0ee04eae8c219333634 | cfa517d5a68bae24cbe8d9fe6b8e0d4935e507d2 |
python/cpython | python__cpython-104483 | # Error handling bugs in ast.c
In `validate_pattern`, there are two places where, in case of error, we break out of a loop, but we also need to break out of the enclosing switch.
```
--- a/Python/ast.c
+++ b/Python/ast.c
@@ -580,7 +580,9 @@ validate_pattern(struct validator *state, pattern_ty p, int star_ok)
break;
}
}
-
+ if (ret == 0) {
+ break;
+ }
ret = validate_patterns(state, p->v.MatchMapping.patterns, /*star_ok=*/0);
break;
case MatchClass_kind:
@@ -620,6 +622,9 @@ validate_pattern(struct validator *state, pattern_ty p, int star_ok)
}
}
+ if (ret == 0) {
+ break;
+ }
if (!validate_patterns(state, p->v.MatchClass.patterns, /*star_ok=*/0)) {
ret = 0;
break;
```
If we don't do this we can end up calling _PyAST_Compile with an error set.
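A pure-Python illustration of the control flow: `break` exits only the innermost loop, so the statement after the loop still executes unless the error flag is re-checked — which is exactly the guard the patch adds (names here are stand-ins, not the actual C code):

```python
log = []
ret = 1

for p in [1, -2, 3]:      # stands in for the pattern-validation loop
    if p < 0:             # a validation failure
        ret = 0
        break             # leaves only this loop, not the enclosing "switch"

# The patch's guard: without it, the next validation call would still run
# with an error already set.
if ret != 0:
    log.append("validate_patterns")

assert ret == 0
assert log == []          # the guard prevented further validation
```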
<!-- gh-linked-prs -->
### Linked PRs
* gh-104483
* gh-104514
<!-- /gh-linked-prs -->
| 8a3702f0c79e5a99fcef61e35724f4b9ea3453b8 | 26baa747c2ebc2beeff769bb07b5fb5a51ad5f4b |
python/cpython | python__cpython-104681 | # Floating point tutorial: "today (November 2000)"
# Documentation
It [says](https://docs.python.org/3.11/tutorial/floatingpoint.html?highlight=floating#representation-error):
> Almost all machines today (November 2000) use IEEE-754 floating point arithmetic, and almost all platforms map Python floats to IEEE-754 “double precision”. 754 doubles contain 53 bits of precision, [...]
That feels really weird, computer documentation written 22.5 years ago. I don't know what to think of that, whether that can be trusted today. I suggest to update that.
I also suggest to not start the next sentence with "754 doubles" but with "IEEE-754 doubles", both for style (don't start a sentence with a digit) and because it reads like a number of doubles instead of the kind of doubles (i.e., like "3 apples").
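For reference, the 53-bit claim is easy to confirm from Python itself (true on any IEEE-754 platform, which today is effectively everywhere):

```python
import sys

# IEEE-754 "double precision": 53 bits of mantissa.
assert sys.float_info.mant_dig == 53

# The tutorial's classic representation-error example:
assert 0.1 + 0.2 != 0.3
assert repr(0.1 + 0.2) == '0.30000000000000004'
```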
@mdickinson
<!-- gh-linked-prs -->
### Linked PRs
* gh-104681
* gh-104960
* gh-104961
<!-- /gh-linked-prs -->
| 2cf04e455d8f087bd08cd1d43751007b5e41b3c5 | 3f9c60f51ef820937e7e0f95f45e63fa0ae21e6c |
python/cpython | python__cpython-104667 | # ASAN failure was detected while running test_threading
See: https://github.com/python/cpython/actions/runs/4971544925/jobs/8896154582
`unicodeobject.c` looks quite related.
`````
2023-05-14T09:18:05.5991524Z Direct leak of 1702 byte(s) in 27 object(s) allocated from:
2023-05-14T09:18:05.5991966Z #0 0x7fb1e4894c47 in __interceptor_malloc ../../../../src/libsanitizer/asan/asan_malloc_linux.cpp:145
2023-05-14T09:18:05.5992397Z #1 0x55f4ae5903f7 in PyUnicode_New Objects/unicodeobject.c:1207
2023-05-14T09:18:05.5992778Z #2 0x55f4ae5f3da8 in PyUnicode_New Objects/unicodeobject.c:1156
2023-05-14T09:18:05.5993182Z #3 0x55f4ae5f3da8 in unicode_decode_utf8 Objects/unicodeobject.c:4537
2023-05-14T09:18:05.5993607Z #4 0x55f4ae4e689a in PyDict_SetItemString Objects/dictobject.c:3914
2023-05-14T09:18:05.5994016Z #5 0x55f4ae858ed0 in setup_confname_table Modules/posixmodule.c:13345
2023-05-14T09:18:05.5994429Z #6 0x55f4ae858ed0 in setup_confname_tables Modules/posixmodule.c:13367
2023-05-14T09:18:05.5994830Z #7 0x55f4ae858ed0 in posixmodule_exec Modules/posixmodule.c:16663
2023-05-14T09:18:05.5995232Z #8 0x55f4ae510890 in PyModule_ExecDef Objects/moduleobject.c:440
2023-05-14T09:18:05.5995683Z #9 0x55f4ae75cc7a in _imp_exec_builtin (/home/runner/work/cpython/cpython/python+0x72fc7a)
2023-05-14T09:18:05.5996126Z #10 0x55f4ae50b4dc in cfunction_vectorcall_O Objects/methodobject.c:509
2023-05-14T09:18:05.5996568Z #11 0x55f4ae42a4c6 in PyObject_Call (/home/runner/work/cpython/cpython/python+0x3fd4c6)
2023-05-14T09:18:05.6133164Z #12 0x55f4ae2de55d in _PyEval_EvalFrameDefault Python/bytecodes.c:3157
2023-05-14T09:18:05.6134128Z #13 0x55f4ae42415e in _PyObject_VectorcallTstate Include/internal/pycore_call.h:92
2023-05-14T09:18:05.6134546Z #14 0x55f4ae42415e in object_vacall Objects/call.c:850
2023-05-14T09:18:05.6134938Z #15 0x55f4ae427dfe in PyObject_CallMethodObjArgs Objects/call.c:911
2023-05-14T09:18:05.6135351Z #16 0x55f4ae766927 in import_find_and_load Python/import.c:2715
2023-05-14T09:18:05.6135778Z #17 0x55f4ae766927 in PyImport_ImportModuleLevelObject Python/import.c:2798
2023-05-14T09:18:05.6136174Z #18 0x55f4ae2fb183 in import_name Python/ceval.c:2386
2023-05-14T09:18:05.6136645Z #19 0x55f4ae2fb183 in _PyEval_EvalFrameDefault Python/bytecodes.c:2031
2023-05-14T09:18:05.6137070Z #20 0x55f4ae6cf786 in _PyEval_EvalFrame Include/internal/pycore_ceval.h:87
2023-05-14T09:18:05.6137459Z #21 0x55f4ae6cf786 in _PyEval_Vector Python/ceval.c:1611
2023-05-14T09:18:05.6137826Z #22 0x55f4ae6cf786 in PyEval_EvalCode Python/ceval.c:567
2023-05-14T09:18:05.6138204Z #23 0x55f4ae6c9061 in builtin_exec_impl Python/bltinmodule.c:1079
2023-05-14T09:18:05.6138615Z #24 0x55f4ae6c9061 in builtin_exec Python/clinic/bltinmodule.c.h:586
2023-05-14T09:18:05.6139068Z #25 0x55f4ae50c015 in cfunction_vectorcall_FASTCALL_KEYWORDS Objects/methodobject.c:438
2023-05-14T09:18:05.6139533Z #26 0x55f4ae424f8f in _PyObject_VectorcallTstate Include/internal/pycore_call.h:92
2023-05-14T09:18:05.6139944Z #27 0x55f4ae424f8f in PyObject_Vectorcall Objects/call.c:325
2023-05-14T09:18:05.6140349Z #28 0x55f4ae2db394 in _PyEval_EvalFrameDefault Python/bytecodes.c:2609
2023-05-14T09:18:05.6140941Z #29 0x55f4ae42415e in _PyObject_VectorcallTstate Include/internal/pycore_call.h:92
2023-05-14T09:18:05.6141332Z #30 0x55f4ae42415e in object_vacall Objects/call.c:850
2023-05-14T09:18:05.6141720Z #31 0x55f4ae427dfe in PyObject_CallMethodObjArgs Objects/call.c:911
2023-05-14T09:18:05.6142136Z #32 0x55f4ae766927 in import_find_and_load Python/import.c:2715
2023-05-14T09:18:05.6142564Z #33 0x55f4ae766927 in PyImport_ImportModuleLevelObject Python/import.c:2798
2023-05-14T09:18:05.6142967Z #34 0x55f4ae2fb183 in import_name Python/ceval.c:2386
2023-05-14T09:18:05.6143360Z #35 0x55f4ae2fb183 in _PyEval_EvalFrameDefault Python/bytecodes.c:2031
2023-05-14T09:18:05.6143799Z #36 0x55f4ae42490f in _PyObject_VectorcallTstate Include/internal/pycore_call.h:92
2023-05-14T09:18:05.6144249Z #37 0x55f4ae42490f in _PyObject_CallNoArgsTstate Include/internal/pycore_call.h:99
2023-05-14T09:18:05.6144663Z #38 0x55f4ae42490f in _PyObject_CallFunctionVa Objects/call.c:535
2023-05-14T09:18:05.6145036Z #39 0x55f4ae4262e3 in callmethod Objects/call.c:634
2023-05-14T09:18:05.6145400Z #40 0x55f4ae4262e3 in PyObject_CallMethod Objects/call.c:653
2023-05-14T09:18:05.6145797Z #41 0x55f4ae769ca2 in init_importlib_external Python/import.c:2265
2023-05-14T09:18:05.6146192Z #42 0x55f4ae769ca2 in _PyImport_InitExternal Python/import.c:3186
2023-05-14T09:18:05.6146587Z #43 0x55f4ae7b30f3 in init_interp_main Python/pylifecycle.c:1116
2023-05-14T09:18:05.6146981Z #44 0x55f4ae7b48c4 in pyinit_main Python/pylifecycle.c:1227
2023-05-14T09:18:05.6147395Z #45 0x55f4ae7b76af in Py_InitializeFromConfig Python/pylifecycle.c:1258
2023-05-14T09:18:05.6147773Z #46 0x55f4ae829c19 in pymain_init Modules/main.c:67
`````
cc @gpshead
<!-- gh-linked-prs -->
### Linked PRs
* gh-104667
* gh-104669
* gh-104673
<!-- /gh-linked-prs -->
| c3f43bfb4bec39ff8f2c36d861a3c3a243bcb3af | 625887e6df5dbebe48be172b424ba519e2ba2ddc |
python/cpython | python__cpython-104470 | # Improve the readability and maintainability of test_capi using the AC
Currently, most test_capi modules do not use the Argument Clinic tool.
As a result, we implement docstrings explaining the test code by hand and handle parameter parsing manually with `PyArg_ParseTuple`.
To maintain code consistency in test_capi, I suggest using the Argument Clinic tool.
While some might criticize this as code churn, I believe it is necessary for maintaining consistent practices for writing test code. I will attach a sample PR to illustrate how it can make the test code easier to understand; I hope this clarifies the rationale behind the proposal.
cc @erlend-aasland @sobolevn
<!-- gh-linked-prs -->
### Linked PRs
* gh-104470
* gh-104502
* gh-104503
* gh-104529
* gh-104720
* gh-106557
* gh-107857
* gh-107859
* gh-107860
* gh-107951
* gh-107953
* gh-109690
* gh-109691
<!-- /gh-linked-prs -->
| 48b3617de491f00a3bf978b355074cc8e228d61b | 2cd1c87d2a23ffd00730b5d1648304593530326c |
python/cpython | python__cpython-104462 | # Tkinter: test_configure_screen should only run on X11
test_configure_screen uses the `DISPLAY` environment variable, which is only relevant to X11; and the purpose of the test is to set the `-screen` option for a toplevel, which is only useful on Tk for X11, since Tk for the Win32 or Aqua windowing systems supports only a single “screen” (called `:0` by default).
I use XQuartz installed from MacPorts, and so the `DISPLAY` environment variable is usually set. test_configure_screen tries to use `DISPLAY` even if I am running the test on Tk Aqua:
```
======================================================================
ERROR: test_configure_screen (test.test_tkinter.test_widgets.ToplevelTest.test_configure_screen)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/Users/user/git/cpython/Lib/test/test_tkinter/test_widgets.py", line 86, in test_configure_screen
widget2 = self.create(screen=display)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/user/git/cpython/Lib/test/test_tkinter/test_widgets.py", line 69, in create
return tkinter.Toplevel(self.root, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/user/git/cpython/Lib/tkinter/__init__.py", line 2678, in __init__
BaseWidget.__init__(self, master, 'toplevel', cnf, {}, extra)
File "/Users/user/git/cpython/Lib/tkinter/__init__.py", line 2629, in __init__
self.tk.call(
_tkinter.TclError: couldn't connect to display "/private/tmp/com.apple.launchd.yqEvGPTxFm/org.macports:0"
```
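One way to implement the suggestion — skip unless Tk reports the X11 windowing system — is a small decorator. This is only a sketch; the real test suite keeps the root window on the test case (e.g. `self.root`), and `tk windowingsystem` is the standard Tk command for this query:

```python
import functools
import unittest

def requires_x11(test):
    """Skip a Tk test unless the running windowing system is X11."""
    @functools.wraps(test)
    def wrapper(self, *args, **kwargs):
        if self.root.tk.call('tk', 'windowingsystem') != 'x11':
            self.skipTest('requires the X11 windowing system')
        return test(self, *args, **kwargs)
    return wrapper
```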
<!-- gh-linked-prs -->
### Linked PRs
* gh-104462
* gh-104526
<!-- /gh-linked-prs -->
| fdafdc235e74f2f4fedc1f745bf8b90141daa162 | 456d56698db6c6287500f591927c900a5f5221ca |
python/cpython | python__cpython-104457 | # test_ctypes are leaked
Tried on last commit.
```
PS C:\Users\KIRILL-1\CLionProjects\cpython> ./python -m test -R 3:3 test_ctypes
Running Debug|x64 interpreter...
0:00:00 Run tests sequentially
0:00:00 [1/1] test_ctypes
beginning 6 repetitions
123456
a=5, b=10, c=15
a=5, b=10, c=15
.a=5, b=10, c=15
a=5, b=10, c=15
.a=5, b=10, c=15
a=5, b=10, c=15
.a=5, b=10, c=15
a=5, b=10, c=15
.a=5, b=10, c=15
a=5, b=10, c=15
.a=5, b=10, c=15
a=5, b=10, c=15
.
test_ctypes leaked [5, 5, 5] references, sum=15
test_ctypes leaked [3, 3, 4] memory blocks, sum=10
test_ctypes failed (reference leak)
== Tests result: FAILURE ==
1 test failed:
test_ctypes
Total duration: 17.1 sec
Tests result: FAILURE
```
OS: Windows 10
However, I cannot reproduce it in Linux (WSL Ubuntu)
UPD:
It is `test_ctypes/test_win32.py/test_COMError`
<!-- gh-linked-prs -->
### Linked PRs
* gh-104457
<!-- /gh-linked-prs -->
| 2cd1c87d2a23ffd00730b5d1648304593530326c | fb8739f0b6291fb048a94d6312f59ba4d10a20ca |
python/cpython | python__cpython-104455 | # test_exceptions are leaked
Tried on [last commit](https://github.com/python/cpython/commit/46f1c78eebe08e96ed29d364b1804dd37364831d)
```python
PS C:\Users\KIRILL-1\CLionProjects\cpython> ./python -m test -R 3:3 test_exceptions
Running Debug|x64 interpreter...
0:00:00 Run tests sequentially
0:00:00 [1/1] test_exceptions
beginning 6 repetitions
123456
......
test_exceptions leaked [18, 18, 18] references, sum=54
test_exceptions leaked [12, 12, 12] memory blocks, sum=36
test_exceptions failed (reference leak) in 38.1 sec
== Tests result: FAILURE ==
1 test failed:
test_exceptions
Total duration: 38.2 sec
Tests result: FAILURE
```
OS: Windows 10
<!-- gh-linked-prs -->
### Linked PRs
* gh-104455
<!-- /gh-linked-prs -->
| 7d2deafb73237a2175971a26cfb544974661de4b | 46f1c78eebe08e96ed29d364b1804dd37364831d |
python/cpython | python__cpython-104433 | # UBSan misaligned load errors in `gethost_common()`, `mkgrent()`
test.test_asyncio.test_events.KqueueEventLoopTests.test_create_connection triggers `-fsanitize=alignment` errors on macOS (i.e. Darwin):
```
Modules/socketmodule.c:5790:34: runtime error: load of misaligned address 0x60d0001ca152 for type 'char *', which requires 8 byte alignment
0x60d0001ca152: note: pointer points here
6f 73 74 00 62 a1 1c 00 d0 60 00 00 00 00 00 00 00 00 00 00 31 2e 30 2e 30 2e 31 32 37 2e 69 6e
^
SUMMARY: UndefinedBehaviorSanitizer: undefined-behavior Modules/socketmodule.c:5790:34 in
Modules/socketmodule.c:5792:40: runtime error: load of misaligned address 0x60d0001ca152 for type 'char *', which requires 8 byte alignment
0x60d0001ca152: note: pointer points here
6f 73 74 00 62 a1 1c 00 d0 60 00 00 00 00 00 00 00 00 00 00 31 2e 30 2e 30 2e 31 32 37 2e 69 6e
^
SUMMARY: UndefinedBehaviorSanitizer: undefined-behavior Modules/socketmodule.c:5792:40 in
Modules/socketmodule.c:5804:32: runtime error: load of misaligned address 0x60d0001ca179 for type 'char *', which requires 8 byte alignment
0x60d0001ca179: note: pointer points here
72 70 61 00 89 a1 1c 00 d0 60 00 00 00 00 00 00 00 00 00 00 7f 00 00 01 00 00 00 b9 96 6e cc b9
^
SUMMARY: UndefinedBehaviorSanitizer: undefined-behavior Modules/socketmodule.c:5804:32 in
Modules/socketmodule.c:5817:35: runtime error: load of misaligned address 0x60d0001ca179 for type 'char *', which requires 8 byte alignment
0x60d0001ca179: note: pointer points here
72 70 61 00 89 a1 1c 00 d0 60 00 00 00 00 00 00 00 00 00 00 7f 00 00 01 00 00 00 b9 96 6e cc b9
^
SUMMARY: UndefinedBehaviorSanitizer: undefined-behavior Modules/socketmodule.c:5817:35 in
```
I believe this issue is also present in this line (though I am not aware of any tests that cover it):
https://github.com/python/cpython/blob/ac66cc17f21653b66321b50d0a1f792982fca21f/Modules/socketmodule.c#L5834
Likewise in test.test_grp.GroupDatabaseTestCase.test_errors:
```
Modules/grpmodule.c:68:30: runtime error: load of misaligned address 0x6080006331f4 for type 'char *', which requires 8 byte alignment
0x6080006331f4: note: pointer points here
72 00 2a 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 10 32 63 00 80 60 00 00
^
SUMMARY: UndefinedBehaviorSanitizer: undefined-behavior Modules/grpmodule.c:68:30 in
Modules/grpmodule.c:69:49: runtime error: load of misaligned address 0x60c00474bcdb for type 'char *', which requires 8 byte alignment
0x60c00474bcdb: note: pointer points here
64 00 2a 00 eb bc 74 04 c0 60 00 00 00 00 00 00 00 00 00 00 5f 6b 6e 6f 77 6c 65 64 67 65 67 72
^
SUMMARY: UndefinedBehaviorSanitizer: undefined-behavior Modules/grpmodule.c:69:49 in
```
To be clear, these tests pass, and the misaligned pointers are produced by the OS and not Python. The misaligned pointers appear to be a known issue, presumably one which Apple may never resolve (given these are functions inherited from BSD, and newer functions like `getaddrinfo()` are preferred according to man pages). The workaround is to use `memcpy()` (see e.g. https://github.com/php/php-src/commit/26ac6cb6be807d9e654b4a0c9b970908e210f269).
<!-- gh-linked-prs -->
### Linked PRs
* gh-104433
* gh-107355
* gh-107356
<!-- /gh-linked-prs -->
| f01e4cedba1a17d321664834bb255d9d04ad16ce | 983305268e2291b0a7835621b81bf40cba7c27f3 |
python/cpython | python__cpython-104416 | # `test_typing` fails refleak tests
Repro: `./python.exe -m test -R 3:3 -v test_typing`
The first test round passes, but the second one fails with:
```
======================================================================
FAIL: test_bytestring (test.test_typing.CollectionsAbcTests.test_bytestring)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/Users/sobolev/Desktop/cpython/Lib/test/test_typing.py", line 6007, in test_bytestring
with self.assertWarns(DeprecationWarning):
AssertionError: DeprecationWarning not triggered
----------------------------------------------------------------------
Ran 576 tests in 1.479s
FAILED (failures=1, skipped=1)
test test_typing failed
test_typing failed (1 failure)
```
I am working on a fix.
<!-- gh-linked-prs -->
### Linked PRs
* gh-104416
<!-- /gh-linked-prs -->
| 5b8cd5abe5924646b9ed90e7ba90085b56d5f634 | d50c37d8adb2d2da9808089d959ca7d6791ac59f |
python/cpython | python__cpython-104414 | # Refleak in LOAD_SUPER_ATTR_ATTR
The buildbots show a refleak in test_super. I narrowed it down to this:
```python
"""Unit tests for zero-argument super() & related machinery."""
import unittest
class TestSuper(unittest.TestCase):
def test_attribute_error(self):
class C:
def method(self):
return super().msg
try:
C().method()
except AttributeError:
pass
if __name__ == "__main__":
unittest.main()
```
And found a fix. PR incoming.
<!-- gh-linked-prs -->
### Linked PRs
* gh-104414
<!-- /gh-linked-prs -->
| 718b13277217e90232da5edf7ab3267e59189698 | a781484c8e9834538e5ee7b9e2e6bec7b679e033 |
python/cpython | python__cpython-104412 | # Tkinter: test_getint incompatible with Tcl 9.0
[TIP 114](https://core.tcl-lang.org/tips/doc/trunk/tip/114.md) means that as of Tcl 9.0, integers are only treated as octal when the `0o` prefix is specified; a leading 0 by itself is no longer sufficient. This change partially breaks test_getint:
```
======================================================================
FAIL: test_getint (test.test_tcl.TclTest.test_getint)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/Users/user/git/cpython/Lib/test/test_tcl.py", line 145, in test_getint
self.assertEqual(tcl.getint((' %#o ' % i).replace('o', '')), i)
AssertionError: 17777777777 != 2147483647
```
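The failing assertion comes from how the test constructs a leading-zero literal (the expression is taken from the traceback above); under TIP 114, Tcl 9.0 reads that literal as decimal rather than octal, which Python can illustrate directly:

```python
i = 2**31 - 1

# From test_tcl.py: format as a 0o... octal literal, then strip the 'o',
# leaving a bare leading zero.
s = (' %#o ' % i).replace('o', '')
assert s.strip() == '017777777777'

# Tcl 8.x parses the leading zero as octal:
assert int(s, 8) == 2147483647
# Tcl 9.0 (TIP 114) parses it as plain decimal instead:
assert int(s, 10) == 17777777777
```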
<!-- gh-linked-prs -->
### Linked PRs
* gh-104412
* gh-105356
* gh-105357
<!-- /gh-linked-prs -->
| 2c49c759e880a32539f50c31dbd35d2bc4b4e030 | 8ddf0dd264acafda29dc587ab8393387bb9a76ab |
python/cpython | python__cpython-104406 | # Some instructions and specializations ignore PEP 523
Four instructions ignore PEP 523 when executing:
- `BINARY_SUBSCR_GETITEM`
- `FOR_ITER_GEN`
- `SEND`
- `SEND_GEN`
Five instructions ignore it when specializing:
- `BINARY_SUBSCR_GETITEM`
- `FOR_ITER_GEN`
- `LOAD_ATTR_GETATTRIBUTE_OVERRIDDEN`
- `LOAD_ATTR_PROPERTY`
- `SEND_GEN`
I'll have a PR up soon.
<!-- gh-linked-prs -->
### Linked PRs
* gh-104406
* gh-104441
<!-- /gh-linked-prs -->
| 1eb950ca55b3e0b6524b3f03b0b519723916eca2 | a10b026f0fdceac42c4b928917894d77da996555 |
python/cpython | python__cpython-104442 | # PEP 709 segfault with nested comprehensions plus lambdas
This is another crasher related to PEP 709.
```python
def f():
[([lambda: x for x in range(4)], lambda: x) for x in range(3)]
```
Stack trace:
<details>
```
(lldb) bt
* thread #1, queue = 'com.apple.main-thread', stop reason = EXC_BAD_ACCESS (code=1, address=0x8)
* frame #0: 0x00000001001662f0 python.exe`Py_TYPE(ob=0x0000000000000000) at object.h:204:16 [opt]
frame #1: 0x000000010015d4e0 python.exe`_PyEval_EvalFrameDefault(tstate=<unavailable>, frame=0x0000000100a60020, throwflag=<unavailable>) at bytecodes.c:1398:31 [opt]
frame #2: 0x0000000100157c00 python.exe`_PyEval_EvalFrame(tstate=<unavailable>, frame=<unavailable>, throwflag=<unavailable>) at pycore_ceval.h:87:16 [opt]
frame #3: 0x0000000100157b20 python.exe`_PyEval_Vector(tstate=0x0000000100567f50, func=0x0000000100e19910, locals=0x0000000100e2cd70, args=0x0000000000000000, argcount=0, kwnames=0x0000000000000000) at ceval.c:1576:12 [opt]
frame #4: 0x0000000100157a0c python.exe`PyEval_EvalCode(co=0x0000000100e7a110, globals=0x0000000100e2cd70, locals=0x0000000100e2cd70) at ceval.c:567:21 [opt]
frame #5: 0x00000001001bcce0 python.exe`run_eval_code_obj(tstate=0x0000000100567f50, co=0x0000000100e7a110, globals=0x0000000100e2cd70, locals=0x0000000100e2cd70) at pythonrun.c:1695:9 [opt]
frame #6: 0x00000001001bae50 python.exe`run_mod(mod=<unavailable>, filename=<unavailable>, globals=0x0000000100e2cd70, locals=0x0000000100e2cd70, flags=<unavailable>, arena=<unavailable>) at pythonrun.c:1716:19 [opt]
frame #7: 0x00000001001ba42c python.exe`pyrun_file(fp=0x0000000209e4cd10, filename=0x0000000100e89b60, start=<unavailable>, globals=0x0000000100e2cd70, locals=0x0000000100e2cd70, closeit=1, flags=0x000000016fdff478) at pythonrun.c:1616:15 [opt]
frame #8: 0x00000001001b99dc python.exe`_PyRun_SimpleFileObject(fp=0x0000000209e4cd10, filename=0x0000000100e89b60, closeit=1, flags=0x000000016fdff478) at pythonrun.c:433:13 [opt]
frame #9: 0x00000001001b96cc python.exe`_PyRun_AnyFileObject(fp=0x0000000209e4cd10, filename=0x0000000100e89b60, closeit=1, flags=0x000000016fdff478) at pythonrun.c:78:15 [opt]
frame #10: 0x00000001001d9094 python.exe`pymain_run_file_obj(program_name=0x0000000100e89bd0, filename=0x0000000100e89b60, skip_source_first_line=0) at main.c:360:15 [opt]
frame #11: 0x00000001001d8c64 python.exe`pymain_run_file(config=0x000000010054a2c0) at main.c:379:15 [opt]
frame #12: 0x00000001001d8418 python.exe`pymain_run_python(exitcode=0x000000016fdff5dc) at main.c:610:21 [opt]
frame #13: 0x00000001001d82a0 python.exe`Py_RunMain at main.c:689:5 [opt]
frame #14: 0x00000001001d8518 python.exe`pymain_main(args=0x000000016fdff640) at main.c:719:12 [opt]
frame #15: 0x00000001001d855c python.exe`Py_BytesMain(argc=<unavailable>, argv=<unavailable>) at main.c:743:12 [opt]
frame #16: 0x0000000100003de4 python.exe`main(argc=<unavailable>, argv=<unavailable>) at python.c:15:12 [opt]
frame #17: 0x000000010086108c dyld`start + 520
```
</details>
Interestingly, this is an interpreter crash, not a compiler crash, even though we never call the function. The crash happens inside `COPY_FREE_VARS`, because the function's `func_closure` is NULL.
When we disassemble it we can see why:
```
>>> dis.dis("""
... def f():
... [([lambda: x for x in range(4)], lambda: x) for x in range(3)]
... """)
0 COPY_FREE_VARS 1
0 2 RESUME 0
2 4 LOAD_CONST 0 (<code object f at 0x104d2b740, file "<dis>", line 2>)
6 MAKE_FUNCTION 0
8 STORE_NAME 0 (f)
10 RETURN_CONST 1 (None)
Disassembly of <code object f at 0x104d2b740, file "<dis>", line 2>:
<snip>
```
There is a `COPY_FREE_VARS` at the module scope, but there is no closure to copy at that level.
Still reproduces with the fix from #104394 applied. cc @carljm
<!-- gh-linked-prs -->
### Linked PRs
* gh-104442
<!-- /gh-linked-prs -->
| 563c7dcba0ea1070698b77129628e9e1c86d34e2 | 1eb950ca55b3e0b6524b3f03b0b519723916eca2 |
python/cpython | python__cpython-104402 | # pygettext: use an AST parser instead of a tokenizer
Follow-up on this [forum discussion](https://discuss.python.org/t/modernize-and-add-missing-features-to-pygettext/26455)
This is part 1/X of improving [pygettext](https://github.com/python/cpython/blob/main/Tools/i18n/pygettext.py). Replacing the tokenizer that powers the message extraction with a parser will simplify the code (no more counting brackets and f-string madness) and make it much easier to extend with new features later down the road.
This change should also come with a healthy dose of new tests to verify the implementation.
PR coming shortly ;)
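To illustrate the direction (a minimal sketch, not the actual pygettext implementation, and assuming only the default `_()` call spelling): an AST walk can collect translatable strings with no bracket counting, and f-strings fall out naturally because they parse as `JoinedStr`, not `Constant`:

```python
import ast

def extract_messages(source):
    """Collect the first string argument of every _() call in *source*."""
    messages = []
    for node in ast.walk(ast.parse(source)):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id == "_"
                and node.args
                and isinstance(node.args[0], ast.Constant)
                and isinstance(node.args[0].value, str)):
            messages.append(node.args[0].value)
    return messages

# The f-string argument is skipped: it parses as JoinedStr, not Constant.
print(extract_messages('greeting = _("hello") + _(f"unsupported")'))  # ['hello']
```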
<!-- gh-linked-prs -->
### Linked PRs
* gh-104402
* gh-108173
* gh-126361
* gh-126362
* gh-129580
* gh-129672
<!-- /gh-linked-prs -->
| 374abded070b861cc389d509937344073193c36a | 1da412e574670cd8b48854112ba118c28ff2aba0 |
python/cpython | python__cpython-104407 | # Tkinter uses deprecated functions `mp_to_unsigned_bin_n()` and `mp_unsigned_bin_size()`
_tkinter.c currently uses the functions `mp_to_unsigned_bin_n()` and `mp_unsigned_bin_size()`, which were deprecated in libtommath 1.2.0 and replaced by new functions `mp_to_ubin()` and `mp_ubin_size()`.
The deprecated functions are removed from future libtommath and Tcl 9.0, leaving _tkinter.c unable to build:
```
./Modules/_tkinter.c:1089:16: error: call to undeclared function 'mp_unsigned_bin_size'; ISO C99 and later do not support implicit function declarations [-Werror,-Wimplicit-function-declaration]
numBytes = mp_unsigned_bin_size(&bigValue);
^
./Modules/_tkinter.c:1095:9: error: call to undeclared function 'TclBN_mp_to_unsigned_bin_n'; ISO C99 and later do not support implicit function declarations [-Werror,-Wimplicit-function-declaration]
if (mp_to_unsigned_bin_n(&bigValue, bytes,
^
/Users/user/tcl90p/include/tclTomMathDecls.h:153:30: note: expanded from macro 'mp_to_unsigned_bin_n'
#define mp_to_unsigned_bin_n TclBN_mp_to_unsigned_bin_n
^
```
[TIP 538](https://core.tcl-lang.org/tips/doc/trunk/tip/538.md) says that Tcl 8.7/9.0 can build against external libtommath 1.2.0 or later. So if `TCL_WITH_EXTERNAL_TOMMATH` is defined, or if `TCL_MAJOR_VERSION >= 9`, then `mp_to_ubin()` and `mp_ubin_size()` should be available.
Even though `mp_to_ubin()` and `mp_ubin_size()` have been available since Tcl 8.6.10 (the first release to bundle libtommath 1.2.0), `mp_to_ubin()` was added as a new ABI. By contrast, `mp_ubin_size()` was added by having the existing ABI for `mp_unsigned_bin_size()` return `size_t` instead of `int`. So Tkinter could presumably use the new functions when built with Tcl 8.6.10 through 8.7 only if there is no expectation for that build to work with Tcl 8.5.12 through 8.6.9; otherwise the deprecated functions can continue to be used for Tcl < 9.
Note that Tcl 8.7 with bundled libtommath currently only warns about `mp_to_unsigned_bin_n()` being deprecated:
```
./Modules/_tkinter.c:1099:9: warning: 'TclBN_mp_to_unsigned_bin_n' is deprecated [-Wdeprecated-declarations]
if (mp_to_unsigned_bin_n(&bigValue, bytes,
^
/Users/user/tcl87p/include/tclTomMathDecls.h:150:30: note: expanded from macro 'mp_to_unsigned_bin_n'
#define mp_to_unsigned_bin_n TclBN_mp_to_unsigned_bin_n
^
/Users/user/tcl87p/include/tclTomMathDecls.h:316:1: note: 'TclBN_mp_to_unsigned_bin_n' has been explicitly marked deprecated here
TCL_DEPRECATED("Use mp_to_ubin")
^
/Users/user/tcl87p/include/tclDecls.h:33:37: note: expanded from macro 'TCL_DEPRECATED'
# define TCL_DEPRECATED(msg) EXTERN TCL_DEPRECATED_API(msg)
^
/Users/user/tcl87p/include/tcl.h:181:50: note: expanded from macro 'TCL_DEPRECATED_API'
# define TCL_DEPRECATED_API(msg) __attribute__ ((__deprecated__))
^
```
<!-- gh-linked-prs -->
### Linked PRs
* gh-104407
* gh-105343
* gh-105344
<!-- /gh-linked-prs -->
| 00d73caf804c0474980e471347d6385757af975f | 852348ab65783601e0844b6647ea033668b45c11 |
python/cpython | python__cpython-104397 | # `uuid.py` checks platform for emscripten and wasi
# Bug report
`uuid` now always checks the platform except for `win32` and `darwin`.
The check can also be skipped for `emscripten` and `wasi`.
Strictly speaking this is not a bug, but skipping the useless checks is a small improvement.
# Your environment
- CPython versions tested on:
- Operating system and architecture: WASI
<!-- gh-linked-prs -->
### Linked PRs
* gh-104397
<!-- /gh-linked-prs -->
| 434db68ee31514ddc4aa93f8dfc2eb874d3669c5 | 7d7dd4cd70ed997ed7c3cda867c4e7b1ab02b205 |
python/cpython | python__cpython-104443 | # Add link to Download page on documentation index
# Documentation
In docs.python.org, there is no obvious link to the [download page](https://docs.python.org/3/download.html). I found one only in a question in the FAQ section.
So I suggest adding a link on the documentation index. I think a new entry in the Meta Information section would be a good place for it.
<!-- gh-linked-prs -->
### Linked PRs
* gh-104443
* gh-109345
* gh-109346
<!-- /gh-linked-prs -->
| 90cf345ed42ae4d17d2a073718985eb3432a7c20 | 44c8699196c1951037bc549c895ea5af26c7254e |
python/cpython | python__cpython-104393 | # _paramspec_tvars in typing.py does nothing
Several classes in typing.py pass around an attribute called `_paramspec_tvars`, but nothing depends on its value. It was added in b2f3f8e3d81 by @Fidget-Spinner to enforce some behavior around ParamSpec, but since then we have apparently loosened that behavior. Let's just remove the attribute.
<!-- gh-linked-prs -->
### Linked PRs
* gh-104393
<!-- /gh-linked-prs -->
| 37a5d256b97bc9d2a0ff445997fec851e328ebad | 434db68ee31514ddc4aa93f8dfc2eb874d3669c5 |
python/cpython | python__cpython-104390 | # Make it possible to mark Argument Clinic args as unused
For base classes, it is not unusual to create stubs for interface methods, and let those stubs simply raise an exception. It would be nice to be able to mark the arguments with Py\_UNUSED in those cases.
Implementing this is straightforward in clinic.py. `CConverter` gets a new keyword argument `unused`, which defaults to `False`, and we can write code like this:
```
/*[clinic input]
_io._TextIOBase.read
cls: defining_class
size: int(unused=True) = -1
[clinic start generated code]*/
```
<!-- gh-linked-prs -->
### Linked PRs
* gh-104390
<!-- /gh-linked-prs -->
| b2c1b4da1935639cb89fbbad0ce170a1182537bd | 15795b57d92ee6315b5c8263290944b16834b5f2 |
python/cpython | python__cpython-104394 | # Segfault with lambda nested in comprehension
```
def gen_params():
def bound():
return [lambda: T for T in (T, [1])[1]]
T = ...
```
This segfaults on current main.
Stack trace:
<details>
```
(lldb) bt
* thread #1, queue = 'com.apple.main-thread', stop reason = EXC_BAD_ACCESS (code=1, address=0x8)
* frame #0: 0x000000010008bcc4 python.exe`Py_TYPE(ob=0x0000000000000000) at object.h:204:16 [opt]
frame #1: 0x000000010008d414 python.exe`Py_IS_TYPE(ob=<unavailable>, type=0x00000001004210b8) at object.h:235:12 [opt]
frame #2: 0x000000010008e4c0 python.exe`_PyCode_ConstantKey(op=0x0000000000000000) at codeobject.c:2163:11 [opt]
frame #3: 0x000000010008e688 python.exe`_PyCode_ConstantKey(op=0x0000000102a42620) at codeobject.c:2225:24 [opt]
frame #4: 0x000000010016deec python.exe`_PyCompile_ConstCacheMergeOne(const_cache=0x000000010282d610, obj=0x000000016fdfefd8) at compile.c:6943:21 [opt]
frame #5: 0x0000000100149330 python.exe`makecode(umd=0x0000000102b04820, a=0x000000016fdff050, const_cache=0x000000010282d610, constslist=0x0000000102a419a0, maxdepth=5, nlocalsplus=2, code_flags=19, filename=0x00000001028761b0) at assemble.c:568:9 [opt]
frame #6: 0x000000010014905c python.exe`_PyAssemble_MakeCodeObject(umd=0x0000000102b04820, const_cache=0x000000010282d610, consts=0x0000000102a419a0, maxdepth=5, instrs=<unavailable>, nlocalsplus=2, code_flags=19, filename=0x00000001028761b0) at assemble.c:597:14 [opt]
frame #7: 0x000000010016f1ec python.exe`optimize_and_assemble_code_unit(u=0x0000000102b04600, const_cache=0x000000010282d610, code_flags=19, filename=0x00000001028761b0) at compile.c:7219:10 [opt]
frame #8: 0x000000010016f030 python.exe`optimize_and_assemble(c=0x000000010288b610, addNone=1) at compile.c:7246:12 [opt]
frame #9: 0x0000000100170f84 python.exe`compiler_function(c=0x000000010288b610, s=0x0000000102063f10, is_async=0) at compile.c:2166:10 [opt]
frame #10: 0x000000010016faec python.exe`compiler_visit_stmt(c=0x000000010288b610, s=0x0000000102063f10) at compile.c:3518:16 [opt]
frame #11: 0x0000000100170f40 python.exe`compiler_function(c=0x000000010288b610, s=0x0000000104008530, is_async=0) at compile.c:2158:9 [opt]
frame #12: 0x000000010016faec python.exe`compiler_visit_stmt(c=0x000000010288b610, s=0x0000000104008530) at compile.c:3518:16 [opt]
frame #13: 0x000000010016e438 python.exe`compiler_codegen(c=0x000000010288b610, mod=0x0000000104008590) at compile.c:1684:9 [opt]
frame #14: 0x000000010016dcdc python.exe`compiler_mod(c=0x000000010288b610, mod=<unavailable>) at compile.c:1702:9 [opt]
frame #15: 0x000000010016dbf8 python.exe`_PyAST_Compile(mod=0x0000000104008590, filename=<unavailable>, pflags=<unavailable>, optimize=<unavailable>, arena=<unavailable>) at compile.c:575:24 [opt]
frame #16: 0x00000001001bae6c python.exe`run_mod(mod=<unavailable>, filename=<unavailable>, globals=0x000000010282ce30, locals=0x000000010282ce30, flags=<unavailable>, arena=<unavailable>) at pythonrun.c:1707:24 [opt]
frame #17: 0x00000001001b9e90 python.exe`PyRun_InteractiveOneObjectEx(fp=0x0000000209e4b848, filename=0x00000001028761b0, flags=0x000000016fdff550) at pythonrun.c:260:9 [opt]
frame #18: 0x00000001001b9878 python.exe`_PyRun_InteractiveLoopObject(fp=0x0000000209e4b848, filename=0x00000001028761b0, flags=0x000000016fdff550) at pythonrun.c:137:15 [opt]
frame #19: 0x00000001001b96f8 python.exe`_PyRun_AnyFileObject(fp=0x0000000209e4b848, filename=0x00000001028761b0, closeit=0, flags=0x000000016fdff550) at pythonrun.c:72:15 [opt]
frame #20: 0x00000001001b9c64 python.exe`PyRun_AnyFileExFlags(fp=0x0000000209e4b848, filename=<unavailable>, closeit=0, flags=0x000000016fdff550) at pythonrun.c:104:15 [opt]
frame #21: 0x00000001001d8d78 python.exe`pymain_run_stdin(config=0x000000010054a2c0) at main.c:520:15 [opt]
frame #22: 0x00000001001d8444 python.exe`pymain_run_python(exitcode=0x000000016fdff5ec) at main.c:613:21 [opt]
frame #23: 0x00000001001d82ac python.exe`Py_RunMain at main.c:689:5 [opt]
frame #24: 0x00000001001d8524 python.exe`pymain_main(args=0x000000016fdff650) at main.c:719:12 [opt]
frame #25: 0x00000001001d8568 python.exe`Py_BytesMain(argc=<unavailable>, argv=<unavailable>) at main.c:743:12 [opt]
frame #26: 0x0000000100003e94 python.exe`main(argc=<unavailable>, argv=<unavailable>) at python.c:15:12 [opt]
frame #27: 0x000000010086108c dyld`start + 520
```
</details>
I got crashes here at some point too while working on PEP 695; I think it means there's something wrong with how the cellvars/freevars are tracked.
cc @carljm.
<!-- gh-linked-prs -->
### Linked PRs
* gh-104394
<!-- /gh-linked-prs -->
| ac66cc17f21653b66321b50d0a1f792982fca21f | 37a5d256b97bc9d2a0ff445997fec851e328ebad |
python/cpython | python__cpython-104376 | # Pathlib docs use `versionadded` incorrectly
The pathlib documentation uses `.. versionadded` blocks for new _arguments_. These should use `.. versionchanged`.
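For illustration, the requested convention looks like this (a hypothetical entry, not taken from the actual pathlib docs):

```rst
.. versionchanged:: 3.6
   The *strict* parameter was added.
```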
<!-- gh-linked-prs -->
### Linked PRs
* gh-104376
* gh-106058
<!-- /gh-linked-prs -->
| 4a6c84fc1ea8f26d84a0fbeeff6f8dedc32263d4 | d2cbb6e918d9ea39f0dd44acb53270f2dac07454 |
python/cpython | python__cpython-104378 | # PEP 688: Crash if `__release_buffer__` is called while an exception is active
```
class A:
def __buffer__(self, flags):
return memoryview(bytes(8))
def __release_buffer__(self, view):
pass # do not need to do anything here, just needs to exist
b = bytearray(8)
m = memoryview(b) # now b.extend will raise an exception due to exports
b.extend(A())
```
In a debug build this crashes with `Assertion failed: (!PyErr_Occurred()), function _PyType_Lookup, file typeobject.c, line 4707.`
Reported by @chilaxan. I'll work on a fix later today.
Most likely, the fix will be to call `PyErr_Fetch`/`PyErr_Restore` around calls to Python `__release_buffer__` methods.
<!-- gh-linked-prs -->
### Linked PRs
* gh-104378
* gh-104417
<!-- /gh-linked-prs -->
| a0a98ddb31591357bead4694b21717cb4034924f | ac66cc17f21653b66321b50d0a1f792982fca21f |
python/cpython | python__cpython-104361 | # ssl docs still refer to removed module-level ssl.wrap_socket
# Documentation
ssl docs still refer to removed global ssl.wrap_socket https://github.com/python/cpython/blob/a7a2dbbf72aceef61bfb50901bfa39bfb8d6d229/Doc/library/ssl.rst?plain=1#L2456-L2458
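For context, a sketch of the replacement the docs should point to instead: create an `SSLContext` and use its `wrap_socket()` method. No TLS handshake happens until the socket actually connects, so this runs without any network access:

```python
import socket
import ssl

context = ssl.create_default_context()
raw = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# Wrapping does not perform the handshake yet; that happens on connect().
tls = context.wrap_socket(raw, server_hostname="example.org")
print(type(tls).__name__)  # SSLSocket
tls.close()
```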
<!-- gh-linked-prs -->
### Linked PRs
* gh-104361
* gh-114528
* gh-114529
<!-- /gh-linked-prs -->
| 127a49785247ac8af158b18e38b722e520054d71 | 51d9068ede41d49e86c9637960f212e2a0f07f4c |
python/cpython | python__cpython-104368 | # Comprehension inlining: Bug if comprehension contains a lambda
Code sample:
```python
def outer(x):
return [lambda: x for x in range(x)]
print([f() for f in outer(2)])
```
On 3.11, this produces `[1, 1]` as expected.
But on current main, I get:
```
>>> [f() for f in outer(2)]
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<stdin>", line 2, in outer
TypeError: 'cell' object cannot be interpreted as an integer
```
Almost certainly due to PEP 709 / #101441, cc @carljm.
<!-- gh-linked-prs -->
### Linked PRs
* gh-104368
<!-- /gh-linked-prs -->
| fcd5fb49b1d71165f3c503c3d2e74a082ddb2f21 | 94f30c75576bb8a20724b2ac758fa33af089a522 |
python/cpython | python__cpython-104586 | # Unhandled BrokenPipeError in asyncio.streams
# Bug report
Kind of a weird one here; I'd been running into it for a while but just recently figured out how to reproduce it reliably.
Basically, if an async process is killed while a large amount of data remains to be written to its stdin, it fails to throw a `ConnectionResetError` and instead experiences a `BrokenPipeError` inside the `_drain_helper()` method. Because the exception happens inside an internal task, it evades handling by the user.
```python
Traceback (most recent call last):
File "/brokenpipeerror_bug.py", line 28, in main
await proc.stdin.drain()
File "/usr/lib/python3.10/asyncio/streams.py", line 371, in drain
await self._protocol._drain_helper()
File "/usr/lib/python3.10/asyncio/streams.py", line 173, in _drain_helper
await waiter
BrokenPipeError
```
### Minimal reproducible example:
```python
import asyncio
import traceback
async def main():
proc = await asyncio.create_subprocess_exec("sleep", "999", stdin=asyncio.subprocess.PIPE)
try:
for _ in range(10000): # NOTE: only triggers if this is a high number
i = b"www.blacklanternsecurity.com\n"
proc.stdin.write(i)
proc.kill()
await proc.stdin.drain() # This triggers error
except BrokenPipeError:
print(f"Handled error: {traceback.format_exc()}")
asyncio.run(main())
```
---
```bash
$ python brokenpipeerror_bug.py
Handled error: Traceback (most recent call last):
File "/brokenpipeerror_bug.py", line 28, in main
await proc.stdin.drain()
File "/usr/lib/python3.10/asyncio/streams.py", line 371, in drain
await self._protocol._drain_helper()
File "/usr/lib/python3.10/asyncio/streams.py", line 173, in _drain_helper
await waiter
BrokenPipeError
Future exception was never retrieved
future: <Future finished exception=BrokenPipeError()>
Traceback (most recent call last):
File "/brokenpipeerror_bug.py", line 28, in main
await proc.stdin.drain()
File "/usr/lib/python3.10/asyncio/streams.py", line 371, in drain
await self._protocol._drain_helper()
File "/usr/lib/python3.10/asyncio/streams.py", line 173, in _drain_helper
await waiter
BrokenPipeError
```
Tested on CPython 3.10.10 on Arch Linux, x86_64
<!-- gh-linked-prs -->
### Linked PRs
* gh-104586
* gh-104594
<!-- /gh-linked-prs -->
| 7fc8e2d4627cdba5cb0075c9052ed6f4b6ecd36d | b27fe67f3c643e174c3619b669228ef34b6d87ee |
python/cpython | python__cpython-104410 | # gammavariate uses misleading parameter names.
# Documentation for gammavariate
The parameters of the `gammavariate` function (in `random` module) are `alpha`, `beta`.
We can see, from the mathematical description given, that these correspond to _shape_ and _scale_. [Wikipedia conventionally describes these using the symbols `k` and `theta`](https://en.wikipedia.org/wiki/Gamma_distribution). Wikipedia uses `beta` for _rate_, which is the reciprocal of _scale_: _beta_ = 1/_theta_
This is a proposal to:
- explain that `alpha` and `beta` are also known as _shape_ and _scale_;
- reinforce that by adding that μ is proportional to _scale_;
- add a warning that these are not the conventional names.
or, more controversially, scrap all that and rename the parameters _k_ and _theta_. I understand that the parameter names leak into the API, so maybe not this option.
Side note: there is some variability in naming conventions. The only textbook I have to hand is Grimmett and Stirzaker, "Probability and Random Processes" (OUP, 1982). It uses Γ(λ, t), where _lambda_ is rate and _t_ is shape.
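The shape/scale reading can be checked empirically: for `gammavariate(alpha, beta)` the mean is `alpha * beta`, i.e. μ is proportional to the scale parameter. A quick sketch:

```python
import random

random.seed(12345)  # fixed seed so the run is reproducible
alpha, beta = 2.0, 3.0  # shape k = 2, scale theta = 3
samples = [random.gammavariate(alpha, beta) for _ in range(100_000)]
mean = sum(samples) / len(samples)
print(round(mean, 1))  # close to alpha * beta == 6.0
```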
<!-- gh-linked-prs -->
### Linked PRs
* gh-104410
* gh-104481
<!-- /gh-linked-prs -->
| 88c5c586708dcff369c49edae947d487a80f0346 | 2f7b5e458e9189fa1ffd44339848aa1e52add3fa |
python/cpython | python__cpython-104335 | # Bad error message when inheriting from Generic multiple times
```
>>> from typing import Generic, TypeVar
>>> T = TypeVar("T")
>>> U = TypeVar("U")
>>> class X(Generic[T], Generic[U]): pass
...
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/jelle/py/cpython/Lib/typing.py", line 1088, in _generic_init_subclass
raise TypeError(
TypeError: Cannot inherit from Generic[...] multiple types.
```
Should say "multiple times".
Noticed by @carljm in https://github.com/python/cpython/pull/103764#discussion_r1187992790.
<!-- gh-linked-prs -->
### Linked PRs
* gh-104335
* gh-104338
<!-- /gh-linked-prs -->
| 01c321ca34d99f35f174768c6f8c500801d4ef4c | 2866e030f01dc3ff08de32857fa77d52468b676b |
python/cpython | python__cpython-104311 | # Add importlib.util.allowing_all_extensions()
Per [PEP 684](https://peps.python.org/pep-0684/#restricting-extension-modules), we are providing `importlib.util.allowing_all_extensions()` as a context manager users may use to disable the strict compatibility checks.
I'll be adding docs for this separately from the code.
<!-- gh-linked-prs -->
### Linked PRs
* gh-104311
* gh-105255
* gh-105518
<!-- /gh-linked-prs -->
| 4541d1a0dba3ef0c386991cf54c4c3c411a364c0 | 5c9ee498c6f4b75e0e020f17b6860309c3b7e11e |
python/cpython | python__cpython-104307 | # socket.getnameinfo doesn't drop the GIL
It should do so. The trivial fix is in #104307, but I'm creating an issue to make bedevere happy.
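For reference, a call that exercises this code path without touching the network (the numeric flags skip DNS entirely); the point of the issue is that even a blocking lookup here would hold the GIL:

```python
import socket

host, service = socket.getnameinfo(
    ("127.0.0.1", 80), socket.NI_NUMERICHOST | socket.NI_NUMERICSERV
)
print(host, service)  # 127.0.0.1 80
```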
<!-- gh-linked-prs -->
### Linked PRs
* gh-104307
* gh-104313
<!-- /gh-linked-prs -->
| faf196213e60d8a90773e9e5680d3252bd294643 | 4541d1a0dba3ef0c386991cf54c4c3c411a364c0 |
python/cpython | python__cpython-104296 | # Unable to generate POT for Python documentation
# Bug report
I coordinate the Brazilian Portuguese translation of the Python documentation. As part of the translation update process, I run sphinx-build's `gettext` builder to generate POT (translation template) files and update the translations.
For the past 3 days I have been unable to generate the POT files because the `gettext` builder fails for me with the following output:
```
...
reading sources... [ 14%] distutils/apiref
Exception occurred:
File "/home/rffontenelle/cpython/Doc/venv/lib/python3.11/site-packages/sphinx/domains/python.py", line 1043, in run
indextext = '%s; %s' % (pairindextypes['module'], modname)
~~~~~~~~~~~~~~^^^^^^^^^^
KeyError: 'module'
The full traceback has been saved in /tmp/sphinx-err-ds0bs43h.log, if you want to report the issue to the developers.
Please also report this if it was a user error, so that a better error message can be provided next time.
A bug report can be filed in the tracker at <https://github.com/sphinx-doc/sphinx/issues>. Thanks!
make: *** [Makefile:53: build] Error 2
```
The log file containing the full traceback: [sphinx-err-ds0bs43h.log](https://github.com/python/cpython/files/11417013/sphinx-err-ds0bs43h.log)
This might be related to #104000.
Steps to reproduce:
- `cd cpython/Doc`
- `git switch 3.11` # current target of translation efforts
- `make venv`
- `make ALLSPHINXOPTS='-b gettext -d build/doctrees . locales/pot' build`
# Your environment
- CPython versions tested on: 3.11 and 3.10
- Operating system and architecture: Arch Linux x86_64 and Ubuntu 22.04 (GitHub runner image)
- Packages installed by `make venv`: Jinja2-3.1.2 MarkupSafe-2.1.2 Pygments-2.15.1 alabaster-0.7.13 babel-2.12.1 blurb-1.1.0 certifi-2023.5.7 charset-normalizer-3.1.0 contourpy-1.0.7 cycler-0.11.0 docutils-0.17.1 fonttools-4.39.3 idna-3.4 imagesize-1.4.1 kiwisolver-1.4.4 matplotlib-3.7.1 numpy-1.24.3 packaging-23.1 pillow-9.5.0 polib-1.2.0 pyparsing-3.0.9 python-dateutil-2.8.2 python-docs-theme-2023.3.1 regex-2023.5.5 requests-2.30.0 six-1.16.0 snowballstemmer-2.2.0 sphinx-4.5.0 sphinx-lint-0.6.7 sphinxcontrib-applehelp-1.0.4 sphinxcontrib-devhelp-1.0.2 sphinxcontrib-htmlhelp-2.0.1 sphinxcontrib-jsmath-1.0.1 sphinxcontrib-qthelp-1.0.3 sphinxcontrib-serializinghtml-1.1.5 sphinxext-opengraph-0.8.2 urllib3-2.0.2
<!-- gh-linked-prs -->
### Linked PRs
* gh-104296
* gh-104299
<!-- /gh-linked-prs -->
| 942482c8e660765f68098eae347d84b93e37661a | 9af485436b83003b5705a6e54bdeb900c70e0c69 |
python/cpython | python__cpython-104283 | # `lzma._decode_filter_properties` crashes with BCJ filter and buffer of zero length
Example:
```python
>>> import lzma
>>> lzma._decode_filter_properties(lzma.FILTER_X86, b"")
Segmentation fault (core dumped)
```
In `_lzma__decode_filter_properties_impl` call to `lzma_properties_decode` returns `LZMA_OK` and leaves `filter.options` intact (that is uninitialized) if `filter.id` is id of a BCJ filter (FILTER_X86, FILTER_POWERPC, FILTER_IA64, FILTER_ARM, FILTER_ARMTHUMB, FILTER_SPARC) and `encoded_props->len` is equal to zero.
https://github.com/python/cpython/blob/01cc9c1ff79bf18fe34c05c6cd573e79ff9487c3/Modules/_lzmamodule.c#L1487-L1495
Then, in `build_filter_spec`, access to `f->options->start_offset` leads to segmentation fault:
https://github.com/python/cpython/blob/01cc9c1ff79bf18fe34c05c6cd573e79ff9487c3/Modules/_lzmamodule.c#L489-L499
The PR is on the way.
3.9-3.12 are affected for sure.
<!-- gh-linked-prs -->
### Linked PRs
* gh-104283
* gh-114181
* gh-114182
<!-- /gh-linked-prs --> | 0154405350c272833bd51f68138223655e142a37 | b204c4beb44c1a9013f8da16984c9129374ed8c5 |
python/cpython | python__cpython-104277 | # Use type flag instead of custom constructor in `_struct.unpack_iterator`
Currently `_struct.unpack_iterator` type defines its own constructor just to report to user that it isn't instantiable (with a typo, by the way):
https://github.com/python/cpython/blob/8d95012c95988dc517db6e09348aab996868699c/Modules/_struct.c#L1835-L1838
It can be easily removed and replaced by appropriate type flag. I'll submit a PR.
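From Python, the intended behavior is observable either way — the iterator type is reachable but not instantiable. Both the old custom constructor and a `Py_TPFLAGS_DISALLOW_INSTANTIATION`-style flag raise `TypeError`:

```python
import struct

# Obtain the internal iterator type via a real iter_unpack() call.
unpack_iterator = type(struct.iter_unpack("i", b"\x00\x00\x00\x00"))
try:
    unpack_iterator()
except TypeError as exc:
    print("not instantiable:", exc)
```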
<!-- gh-linked-prs -->
### Linked PRs
* gh-104277
<!-- /gh-linked-prs -->
| c21f82876089f3e9a7b1e706c029664b799fa659 | 85f981880ae9591ba577e44d2945a771078a7c35 |
python/cpython | python__cpython-104274 | # argparse: remove redundant len()
# Feature or enhancement
This change removes a redundant call to the built-in len() function in argparse.
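Illustrative only (not the actual argparse diff): the kind of cleanup involved replaces explicit length tests with truthiness checks, which are equivalent for sequences:

```python
def has_items_old(seq):
    return len(seq) > 0   # explicit, redundant len() call

def has_items_new(seq):
    return bool(seq)      # same result for sequences, no len() call

print(has_items_new([1, 2]), has_items_new([]))  # True False
```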
<!-- gh-linked-prs -->
### Linked PRs
* gh-104274
<!-- /gh-linked-prs -->
| 01cc9c1ff79bf18fe34c05c6cd573e79ff9487c3 | ac020624b32820e8e6e272122b94883f8e75ac61 |
python/cpython | python__cpython-105406 | # `glob.glob('**/**', recursive=True)` yields duplicate results
Calling `glob.glob(pattern, recursive=True)`, where *pattern* contains two or more `**/` segments, can yield the same paths multiple times:
```python
>>> import glob
>>> len(glob.glob('**/**', recursive=True))
314206
>>> len(set(glob.glob('**/**', recursive=True)))
44849
```
<!-- gh-linked-prs -->
### Linked PRs
* gh-105406
* gh-117757
<!-- /gh-linked-prs -->
| c06be6bbb8d138dde50c0a07cbd64496bee537c5 | b8eaad30090b46f115dfed23266305b6546fb364 |
python/cpython | python__cpython-104266 | # `_csv` `Reader` and `Writer` types shouldn't be directly instantiable
The accepted way to create these objects is to use the constructor functions `_csv.reader()` and `_csv.writer()` with appropriate arguments. Objects created directly through the type constructors `_csv.Reader` and `_csv.Writer` turn out not to be properly initialized, and operations on them easily lead to a crash:
```python
>>> import _csv
>>> _csv.Writer().writerow([])
Segmentation fault (core dumped)
```
```python
>>> import _csv
>>> list(_csv.Reader())
Segmentation fault (core dumped)
```
Although this is an internal detail, I'm sure that this should be fixed. I'll submit a PR shortly.
The crash appears on 3.10, 3.11 and current main.
<!-- gh-linked-prs -->
### Linked PRs
* gh-104266
* gh-104278
<!-- /gh-linked-prs -->
| 06c2a4858b8806abc700a0471434067910db54ec | c0ece3dc9791694e960952ba74070efaaa79a676 |
python/cpython | python__cpython-104251 | # Task object in asyncio docs is missing the "context" argument
# Documentation
Neither the signature in the docs (https://docs.python.org/3/library/asyncio-task.html#asyncio.Task) nor the description covers the *context* argument that was added in 3.11.
<!-- gh-linked-prs -->
### Linked PRs
* gh-104251
* gh-104258
<!-- /gh-linked-prs -->
| 4ee2068c34bd45eddba7f6a8ee83f62d5b6932fc | 42f54d1f9244784fec99e0610aa05a5051e594bb |
python/cpython | python__cpython-104253 | # Immortalize Py_EMPTY_KEYS
Every dict has a keys "object", of type `PyDictKeysObject`. While it isn't actually a Python object, it does have a refcount, which is used to know when to free it. `PyDictKeysObject` (and the helpers `dictkeys_incref()` and `dictkeys_decref()`) was not updated to be immortal when the other singletons were. When it comes to interpreter isolation, that's a problem for empty dicts.
Every empty dict shares a global, statically allocated singleton for its keys: `Py_EMPTY_KEYS` (AKA `static PyDictKeysObject empty_keys_struct`). This singleton is defined and used internally in dictobject.c, so we don't have the same ABI compatibility concerns that we have with object ref counts generally.
One way or another, we need to isolate `Py_EMPTY_KEYS`. Otherwise we end up with races on the refcount.
cc @eduardo-elizondo @markshannon
----
Possible solutions:
1. update the code in dictobject.c to make `Py_EMPTY_KEYS` immortal
2. move `Py_EMPTY_KEYS` to `PyInterpreterState`
The first one seems simpler.
<!-- gh-linked-prs -->
### Linked PRs
* gh-104253
<!-- /gh-linked-prs -->
| b8f7ab5783b370004757af5a4c6e70c63dc5fe7a | 2dcb289ed08980c8f97d538060b4ad8d5e82b56a |
python/cpython | python__cpython-104241 | # combine compilation steps in a pipeline
Make it possible to chain compiler_codegen, optimize_cfg, assemble_code_object such that each stage takes the previous stage's output and collectively they do the same as compile().
<!-- gh-linked-prs -->
### Linked PRs
* gh-104241
* gh-104300
<!-- /gh-linked-prs -->
| 2c2dc61e8d01f44e5b7e63dd99196460a80905f1 | 1b19bd1a88e6c410fc9cd08db48e0d35cfa8bb5a |
python/cpython | python__cpython-104234 | # New warning: unused variable ‘main_interp’
<img width="916" alt="Screenshot 2023-05-06 at 15 17 06" src="https://user-images.githubusercontent.com/4660275/236623527-548a597a-b226-464a-8bf7-3c10c69a21a1.png">
Here: https://github.com/python/cpython/blob/f5088006ca8e9654fbc3de119462f0ab764e408b/Python/ceval_gil.c#L550-L553
Cause: https://github.com/python/cpython/commit/f3e7eb48f86057919c347f56dabf417acfd55845
I think the right thing to do here is to guard these lines with `#ifdef Py_DEBUG`.
I will send a PR.
CC @ericsnowcurrently
<!-- gh-linked-prs -->
### Linked PRs
* gh-104234
<!-- /gh-linked-prs -->
| 6616710731b9ad1a4e6b73696e8bd5c40cf8762d | f5088006ca8e9654fbc3de119462f0ab764e408b |
python/cpython | python__cpython-104227 | # PEP 688: Cannot call super().__buffer__()
The following will currently fail with a RecursionError:
```
class A(bytearray):
def __buffer__(self, flags):
return super().__buffer__(flags)
a = A()
mv = memoryview(a)
```
Thanks to @chilaxan for finding this, along with some related problems. I am working on a fix.
<!-- gh-linked-prs -->
### Linked PRs
* gh-104227
<!-- /gh-linked-prs -->
| 405eacc1b87a42e19fd176131e70537f0539e05e | 874010c6cab2e079069767619af2e0eab05ad0b2 |
python/cpython | python__cpython-104220 | # Inplace dunders can return NotImplemented, and that's not documented
See [this discussion](https://discuss.python.org/t/notimplemented-in-inplace-dunders/26536).
The normal NotImplemented situation: `A() + B()` resolves to `A.__add__(A(), B())`, which, if it returns NotImplemented, falls back to `B.__radd__(B(), A())`.
This behavior means there can be a double fallback: in `a += b`, if `A.__iadd__` exists but returns NotImplemented, the operation first falls back to `A.__add__` and then to `B.__radd__`.
This is a great feature, but it's not currently documented.
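A minimal sketch of the double fallback (the class names are made up for illustration):

```python
class A:
    def __iadd__(self, other):
        return NotImplemented      # step 1: in-place attempt declined
    def __add__(self, other):
        return NotImplemented      # step 2: A.__add__ declined too

class B:
    def __radd__(self, other):
        return "B.__radd__"        # step 3: reflected add wins

a = A()
a += B()                           # falls through __iadd__ -> __add__ -> __radd__
print(a)  # B.__radd__
```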
<!-- gh-linked-prs -->
### Linked PRs
* gh-104220
* gh-116210
* gh-116211
<!-- /gh-linked-prs -->
| 2713c2abc8d0f30cd0060cd307bb4ec92f1f04bf | fee86fd9a422612b39e5aabf2571b8fe4abac770 |
python/cpython | python__cpython-104191 | # usan failure on main
https://buildbot.python.org/all/#/builders/719/builds/2611
```
Include/object.h:227:12: runtime error: member access within address 0x7ffd008dd4d0 with insufficient space for an object of type 'PyObject' (aka 'struct _object')
0x7ffd008dd4d0: note: pointer points here
9d 7f 00 00 80 d9 ab d8 36 56 00 00 00 f8 24 3f e9 07 c3 81 00 ba 81 e0 9d 7f 00 00 f9 e9 1a d6
^
SUMMARY: UndefinedBehaviorSanitizer: undefined-behavior Include/object.h:227:12 in
make: *** [Makefile:1106: checksharedmods] Error 1
```
I already located the error. Will open a PR soon.
<!-- gh-linked-prs -->
### Linked PRs
* gh-104191
<!-- /gh-linked-prs -->
| 163034515a81f137d1dd7d289dc048eb0f1cd424 | e5b8b19d99861c73ab76ee0175a685acf6082d7e |
python/cpython | python__cpython-104217 | # build fails with `--enable-pystats --with-pydebug` (use of Py_SIZE on PyLongObject)
```
> ./configure --enable-pystats --with-pydebug
> make -j
...
_bootstrap_python: ../../cpython/Include/object.h:215: Py_SIZE: Assertion `ob->ob_type != &PyLong_Type' failed.
```
It looks like #102464 missed removing this use of `Py_SIZE` on a `PyLongObject` because it is gated behind `#ifdef Py_STATS`.
When debug is enabled, this causes an assertion to fail. Without assertions, I think it can cause inaccurate stats collection for `STORE_SUBSCR` specialization failures.
<!-- gh-linked-prs -->
### Linked PRs
* gh-104217
<!-- /gh-linked-prs -->
| afe7703744f813adb15719642444b5fd35888d86 | 82f789be3b15df5f6660f5fd0c563ad690ee00fb |
python/cpython | python__cpython-104181 | # Support detecting SOCKS proxy configuration on macOS
# Feature or enhancement
Allow `urllib.request.getproxies()` to detect SOCKS proxies configured in macOS system settings.
# Pitch
`urllib.request.getproxies()` is widely used, for example by `requests`, to detect proxy configurations from environment variables, macOS System Settings, or the Windows Registry. On macOS, however, it does not support reading SOCKS proxy configurations. This is supported on Windows; see #26307.
I'm not sure if this is intentional or just an oversight, but the difference is certainly not documented in https://docs.python.org/3/library/urllib.request.html#urllib.request.getproxies. It would be nice to have feature parity here.
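For reference, this is the API in question; the sketch below only shows its shape, since the returned mapping depends entirely on the environment and OS settings where it runs:

```python
import urllib.request

# getproxies() merges proxy settings from environment variables
# (http_proxy, https_proxy, ...) with platform sources: the Windows
# registry on Windows and System Configuration on macOS -- where SOCKS
# entries are currently not reported.
proxies = urllib.request.getproxies()
print(isinstance(proxies, dict))  # True
```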
<!-- gh-linked-prs -->
### Linked PRs
* gh-104181
<!-- /gh-linked-prs -->
| 9a9b176eb7e052af84c01c0cfb3231e51f980f2d | bf89d4283a28dd00836f2c312a9255f543f93fc7 |
python/cpython | python__cpython-104170 | # Improve tokenize error handling
There have been quite a lot of instances where poor error handling in the tokenizer leads to crashes or to errors being overwritten. We should not rely on a continuous stream of patches for every individual issue; instead, we should improve the situation with more robust infrastructure.
<!-- gh-linked-prs -->
### Linked PRs
* gh-104170
* gh-110684
* gh-110727
<!-- /gh-linked-prs -->
| eba64d2afb4c429e80d863dc0dd7808bdbef30d3 | e2ef5015d1b6cb56f1a7988583f2fb8c0e6d65fc |
python/cpython | python__cpython-104147 | # Remove dead code from clinic.py
There's a bit of unused code in clinic.py. For example, the code that manipulated `second_pass_replacements` was removed in 2015 (commit 0759f84d6260bad1234b802212e73fdc5873d261, gh-67688 (bpo-23500)), but the variable and a branch depending on it still persist.
I suggest removing all unused code in order to make clinic more readable and maintainable.
<!-- gh-linked-prs -->
### Linked PRs
* gh-104147
* gh-104214
* gh-104627
* gh-104680
* gh-106358
* gh-107555
* gh-107608
* gh-107627
* gh-107632
* gh-119162
<!-- /gh-linked-prs -->
| 9885677b0494e9be3eb8d7d69bebca0e79d8abcc | e95dd40aff35775efce4c03bec7d82f03711310b |
python/cpython | python__cpython-104172 | # bisect.bisect is not cross-referenced properly
# Documentation
The documentation page for bisect contains many cross-references to the bisect function, but it appears that the link isn't resolved correctly by Sphinx: although it is rendered as a function link, it points to the module header instead.
To reproduce: open https://docs.python.org/3/library/bisect.html#searching-sorted-lists, click on `bisect()`, and see it take you to the top of the page rather than to the function bisect.
<!-- gh-linked-prs -->
### Linked PRs
* gh-104172
* gh-104295
<!-- /gh-linked-prs -->
| 76eef552f3653179782afcc5063f10560a6e1a80 | 921185ed050efbca2f0adeab79f676b7f8cc3660 |
python/cpython | python__cpython-104140 | # Leverage eager tasks to optimize asyncio gather & TaskGroups further
# Feature or enhancement
gh-97696 introduced the eager task factory, which speeds up some async-heavy workloads by up to 50% when opted in.
Installing the eager task factory applies out of the box when gathering tasks (`asyncio.gather(...)`) and when creating tasks as part of a `TaskGroup`, e.g.:
```python
asyncio.get_event_loop().set_task_factory(asyncio.eager_task_factory)

await asyncio.gather(coro1, coro2, coro3)

async with asyncio.TaskGroup() as tg:
    tg.create_task(coro1)
    tg.create_task(coro2)
    tg.create_task(coro3)
```
In both examples, `coro{1,2,3}` will eagerly execute their first step, and can potentially complete without ever being scheduled to the event loop if they don't block.
However, the implementations of both `gather` and `TaskGroup` use callbacks internally that end up getting scheduled to the event loop even when all the tasks finish synchronously, blocking the coroutine in which `gather` or `TaskGroup()` was awaited and preventing that task from completing eagerly even when it otherwise could.
Applications that use multiple levels of nested gathers / TaskGroups can benefit significantly from eagerly completing multiple levels without blocking.
This can be achieved by modifying the implementations of `gather` and `TaskGroup` to detect when they can complete without blocking and to skip scheduling the wakeup & done callbacks.
<!-- gh-linked-prs -->
### Linked PRs
* gh-104140
* gh-104138
<!-- /gh-linked-prs -->
| 52d8f36e8c9f6048367d7bdfede3698e3f5f70d0 | f3e7eb48f86057919c347f56dabf417acfd55845 |
python/cpython | python__cpython-104143 | # `_Py_RefcntAdd` doesn't respect immortality
This is used by `list` and `tuple` internally:
```py
>>> import sys
>>> sys.getrefcount(None) == (1 << 32) - 1
True
>>> refs = [None] * 42
>>> sys.getrefcount(None) == (1 << 32) - 1
False
>>> del refs
>>> sys.getrefcount(None) == (1 << 32) - 1
True
```
I'll have a PR up in a bit.
CC: @ericsnowcurrently, @eduardo-elizondo
<!-- gh-linked-prs -->
### Linked PRs
* gh-104143
<!-- /gh-linked-prs -->
| ce871fdc3a02e8441ad73b13f9fced308a9d9ad1 | fa86a77589a06661fcebb806d36f3a7450e2aecf |
python/cpython | python__cpython-104312 | # urlunsplit for itms-services scheme returns invalid url
Relating to a Werkzeug issue (https://github.com/pallets/werkzeug/issues/2691): when parsing an iOS app install URL, e.g.
itms-services:action=download-manifest&url=https://theacmeinc.com/abcdefeg, urlunsplit returns an invalid URL.
e.g.
```python
from urllib.parse import urlparse, urlunsplit

vals = urlparse("itms-services://?action=download-manifest&url=https://theacmeinc.com/abcdefeg")
print(vals)
newURL = urlunsplit((vals.scheme, vals.netloc, vals.path, vals.query, vals.params))
print(newURL)
```
prints:
```
ParseResult(scheme='itms-services', netloc='', path='', params='', query='action=download-manifest&url=https://theacmeinc.com/abcdefeg', fragment='')
itms-services:?action=download-manifest&url=https://theacmeinc.com/abcdefeg
```
Note the newURL is missing the // after the itms-services scheme.
This scheme is used to install ad-hoc and enterprise iOS apps.
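As a side note, `urlparse` returns a 6-tuple with a separate `params` field, while `urlunsplit` expects the 5-tuple produced by `urlsplit`; pairing the matching functions avoids mixing fields. The split of the reporter's URL shows why the `//` is at risk on round-trip:

```python
from urllib.parse import urlsplit

parts = urlsplit("itms-services://?action=download-manifest&url=https://theacmeinc.com/abcdefeg")
print(parts.scheme)   # itms-services
print(parts.netloc)   # '' -- empty netloc, which is why '//' gets dropped
print(parts.query)    # action=download-manifest&url=https://theacmeinc.com/abcdefeg
```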
# Your environment
Tested on Apple M1 Max - 13.4 Beta (22F5049e)
Python: 3.10.10
For more details on the scheme here is a link to the Apple documentation (look for the "Use a website to distribute the app" section).
https://support.apple.com/en-gb/guide/deployment/depce7cefc4d/web
<!-- gh-linked-prs -->
### Linked PRs
* gh-104312
<!-- /gh-linked-prs -->
| 82f789be3b15df5f6660f5fd0c563ad690ee00fb | ca95edf177e3c10e10d7011ed52619b1312cf15e |