Dataset columns: repo (string, 1 distinct value) · instance_id (string, 20–22 chars) · problem_statement (string, 126–60.8k chars) · merge_commit (string, 40 chars) · base_commit (string, 40 chars)
python/cpython
python__cpython-113128
# `TemporaryDirectory.__exit__` sometimes raises bad PermissionError # Bug report `TemporaryDirectory.__exit__` usually raises PermissionError if another process keeping a handle to a file in the tmp directory has just finished its execution. ### Bug description: Please consider the following block of code: ```python import multiprocessing import os import tempfile import time def _open_func(file_path): with open(file_path, "w"): time.sleep(1000) def test(): with tempfile.TemporaryDirectory(suffix="used_by_another_process") as dir_path: file_path = os.path.join(dir_path, "file_being_used") proc = multiprocessing.Process(target=_open_func, args=(file_path,)) proc.start() while not os.path.exists(file_path): time.sleep(0.1) proc.terminate() proc.join() if __name__ == "__main__": test() ``` Despite the child process being terminated and joined, the `__exit__` method of `TemporaryDirectory` sometimes raises `PermissionError`: ``` Traceback (most recent call last): File "C:\Python\python.3.12.1\tools\Lib\shutil.py", line 634, in _rmtree_unsafe os.unlink(fullname) PermissionError: [WinError 32] The process cannot access the file because it is being used by another process: 'C:\\Users\\kamil\\AppData\\Local\\Temp\\tmpvaou_oibused_by_another_process\\file_being_used' During handling of the above exception, another exception occurred: Traceback (most recent call last): File "C:\Users\kamil\tmp_test\tmpdir.py", line 24, in <module> test() File "C:\Users\kamil\tmp_test\tmpdir.py", line 13, in test with tempfile.TemporaryDirectory(suffix="used_by_another_process") as dir_path: File "C:\Python\python.3.12.1\tools\Lib\tempfile.py", line 946, in __exit__ self.cleanup() File "C:\Python\python.3.12.1\tools\Lib\tempfile.py", line 950, in cleanup self._rmtree(self.name, ignore_errors=self._ignore_cleanup_errors) File "C:\Python\python.3.12.1\tools\Lib\tempfile.py", line 930, in _rmtree _shutil.rmtree(name, onexc=onexc) File "C:\Python\python.3.12.1\tools\Lib\shutil.py", line 808, in 
rmtree return _rmtree_unsafe(path, onexc) ^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Python\python.3.12.1\tools\Lib\shutil.py", line 636, in _rmtree_unsafe onexc(os.unlink, fullname, err) File "C:\Python\python.3.12.1\tools\Lib\tempfile.py", line 905, in onexc _os.unlink(path) PermissionError: [WinError 32] The process cannot access the file because it is being used by another process: 'C:\\Users\\kamil\\AppData\\Local\\Temp\\tmpvaou_oibused_by_another_process\\file_being_used' ``` It does not reproduce in 100% but most executions fail. I can reproduce it only using Python 3.12.1. It does not happen to me on 3.12.0 or 3.11. It seems to be a regression in the last release. With some small sleep after the `proc.join()` it stops reproducing so it looks like a kind of race condition. The Windows version I use is: Edition: Windows 10 Enterprise Version: 21H2 OS build: 19044.3693 ### CPython versions tested on: 3.12 ### Operating systems tested on: Windows <!-- gh-linked-prs --> ### Linked PRs * gh-113128 * gh-113177 * gh-113178 <!-- /gh-linked-prs -->
4026ad5b2c595b855a3605420cfa0e3d49e63db7
d1a2adfb0820ee730fa3e4bbc4bd88a67aa50666
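On Windows a file with any open handle cannot be unlinked, so the cleanup in the `TemporaryDirectory` report above is inherently racy against handle release. A user-side stopgap (not the CPython fix) is to retry cleanup briefly; the helper name `rmtree_with_retries` and the retry parameters below are ours:

```python
import os
import shutil
import tempfile
import time

def rmtree_with_retries(path, attempts=5, delay=0.1):
    """Best-effort rmtree that retries while another process still
    holds a handle to a file inside the tree (WinError 32)."""
    for i in range(attempts):
        try:
            shutil.rmtree(path)
            return
        except PermissionError:
            if i == attempts - 1:
                raise
            time.sleep(delay)  # give the OS a moment to release the handle
```

A small sleep after `proc.join()`, as the report notes, has the same effect; the retry loop just bounds how long we wait.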
python/cpython
python__cpython-113085
# argparse drops "|" when a positional, then a special "-" argument (order-sensitive), are defined in a mutually exclusive group # Bug report ### Bug description: I'm defining an argparse parser that works like vim: > support a file as a positional: > ```shell > vim file.py > ``` > or reading from stdin: > ```shell > cat file.py | vim - > ``` So I write a mutually exclusive group: ```python import argparse ap = argparse.ArgumentParser(description="fake vim") mg = ap.add_mutually_exclusive_group(required=True) mg.add_argument("input", nargs="?") mg.add_argument("-", dest="from_stdin", action="store_true") args = ap.parse_args() ``` When I use "-h" to show the help message, the "|" in the usage line is dropped, as if the 2 arguments are not mutually exclusive: ``` usage: fake_vim.py [-h] [-] [input] ``` However, if I ***swap*** the 2 `add_argument` lines in the source code: ```python import argparse ap = argparse.ArgumentParser(description="fake vim") mg = ap.add_mutually_exclusive_group(required=True) mg.add_argument("-", dest="from_stdin", action="store_true") mg.add_argument("input", nargs="?") args = ap.parse_args() ``` it magically behaves correctly: ``` usage: fake_vim.py [-h] (- | input) ``` ### CPython versions tested on: 3.9, 3.11 ### Operating systems tested on: Windows <!-- gh-linked-prs --> ### Linked PRs * gh-113085 <!-- /gh-linked-prs -->
d21b0b5d36834d4d35aec3a01661597019594936
4a5e4aade420c594c5b3fe0589e9e6b444bd6ee5
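The dropped-"|" behaviour in the argparse report above can be checked without triggering the `SystemExit` from `-h` by calling `format_usage()` directly (the `prog` name here is ours):

```python
import argparse

ap = argparse.ArgumentParser(prog="fake_vim", description="fake vim")
mg = ap.add_mutually_exclusive_group(required=True)
# Positional first, then the special "-" flag: on affected versions the
# usage line renders them as independent [-] [input] brackets instead
# of a (- | input) group.
mg.add_argument("input", nargs="?")
mg.add_argument("-", dest="from_stdin", action="store_true")
usage = ap.format_usage()
print(usage)
```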
python/cpython
python__cpython-113000
# Replace the outdated "deprecated" directives with "versionchanged" In a few cases the **deprecated** directive was left in place after the deprecation period ended and the deprecated features were removed. The **versionchanged** directive is more appropriate here. <!-- gh-linked-prs --> ### Linked PRs * gh-113000 * gh-113019 * gh-113020 <!-- /gh-linked-prs -->
fe9991bb672dd95fb9cd38b5a03180719ac4e722
eafc2381a0b891383098b08300ae766868a19ba6
python/cpython
python__cpython-115667
# Asyncio exception handling breaks convention by logging callback parameter values # Bug report ### Bug description: My understanding is that Python usually does not attempt to log or show parameter and variable values when errors occur. It seems uncharacteristic that asyncio does so, e.g. when it handles uncaught exceptions. This type of logging can also occur when asyncio reports on tasks with too slow execution times, but then only in debug mode. When not in debug mode, parameter values are logged for uncaught exceptions, but not for slow execution times. I don't see why these two cases behave differently. If you think that this logging is necessary, perhaps it can be limited to debug mode only? ### Input: ```python import asyncio import functools loop = asyncio.new_event_loop() asyncio.set_event_loop(loop) def test_callback(a, b, c, *, x, y, z): raise RuntimeError('Some uncaught error') loop.call_soon( functools.partial( test_callback, {'key': 'value'}, ['value'], ('value',), x = {'key': 'value'}, y = ['value'], z = ('value',) ) ) loop.call_soon(loop.stop) res = '' try: loop.run_forever() finally: loop.close() ``` ### Output: ``` Exception in callback test_callback({'key': 'value'}, ['value'], ('value',), x={'key': 'value'}, y=['value'], z=('value',))() at /home/aleze/Documents/scenario.py:7 handle: <Handle test_callback({'key': 'value'}, ['value'], ('value',), x={'key': 'value'}, y=['value'], z=('value',))() at /home/aleze/Documents/scenario.py:7> Traceback (most recent call last): File "/usr/lib64/python3.11/asyncio/events.py", line 80, in _run self._context.run(self._callback, *self._args) File "/home/aleze/Documents/scenario.py", line 8, in test_callback raise RuntimeError('Some uncaught error') RuntimeError: Some uncaught error ``` ### Extra: The function that actually creates the output is `format_helpers._format_args_and_kwargs()`. 
### CPython versions tested on: 3.12 ### Operating systems tested on: Linux <!-- gh-linked-prs --> ### Linked PRs * gh-115667 <!-- /gh-linked-prs -->
5a1559d9493dd298a08c4be32b52295aa3eb89e5
a355f60b032306651ca27bc53bbb82eb5106ff71
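Until or unless the default formatting changes, callers can already control what the asyncio report above complains about via `loop.set_exception_handler()`. A sketch that records only the exception object and drops the argument-laden default message (the handler below is ours, not a proposed API):

```python
import asyncio

captured = []

def quiet_handler(loop, context):
    # context["message"] embeds the repr of the callback and all of its
    # arguments; keep just the exception object instead.
    captured.append(context.get("exception"))

def boom():
    raise RuntimeError("Some uncaught error")

loop = asyncio.new_event_loop()
loop.set_exception_handler(quiet_handler)
loop.call_soon(boom)
loop.call_soon(loop.stop)
loop.run_forever()
loop.close()
print(captured)
```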
python/cpython
python__cpython-112991
# Calling loop.sock_connect raises a KeyError exception in the success path # Bug report ### Bug description: Similar to #106527 and #106664 ```python loop.sock_connect() ``` https://github.com/python/cpython/blob/616622cab7200bac476bf753d6cc98f4ee48808c/Lib/asyncio/selector_events.py#L266 <img width="1252" alt="Screenshot 2023-12-11 at 7 31 03 PM" src="https://github.com/python/cpython/assets/663432/87ede167-95fb-4be9-9876-1383ae76d582"> ### CPython versions tested on: CPython main branch ### Operating systems tested on: Linux <!-- gh-linked-prs --> ### Linked PRs * gh-112991 <!-- /gh-linked-prs -->
7e2d93f30b157e414924c32232bb748c8f66c828
a3c031884d2f16d84aacc3f733c047b3a6cae208
python/cpython
python__cpython-113129
# Build changes for Windows free-threaded builds We should use a different `$(IntDir)` and `$(OutDir)` for free threaded builds so that it's faster to switch between them and so they can be built in parallel. The output filename should be `python{major}.{minor}t.exe` and `python{major}{minor}t.dll`. Debug builds are `python{major}.{minor}t_d.exe` and `python{major}{minor}t_d.dll`. The ABI tag becomes `.cp{major}{minor}t-{plat}` All stdlib .pyd files get an ABI tag. The `python.bat` is updated as normal (overwriting the last yesgil build). The installer should not include a `python.exe`, to avoid causing PATH conflicts, but Nuget, embeddable and Store packages should include a copy/alias. PEP 514 registration (a.k.a. `sys.winver`) should use `{major}.{minor}t-{plat}`. Tests that rely on the executable name should use `sys.executable` wherever possible, and hopefully there's a support variable already for the rest. All of these only take effect when `$(DisableGil) == 'true'` (which is set by `build.bat --disable-gil`, but the build variable is canonical). <!-- gh-linked-prs --> ### Linked PRs * gh-113129 * gh-114217 * gh-114455 <!-- /gh-linked-prs -->
f56d132deb9fff861439ed56ed7414d22e4e4bb9
78fcde039a33d8463e34356d5462fecee0f2831a
python/cpython
python__cpython-112985
# Add magic number of 3.11 final bytecode # Bug report ### Bug description: The magic number 3495 of Python 3.11 final has somehow magically disappeared from the main branch. It's there in all 3.11.x: https://github.com/python/cpython/blob/v3.11.0/Lib/importlib/_bootstrap_external.py#L406 ... https://github.com/python/cpython/blob/v3.11.7/Lib/importlib/_bootstrap_external.py#L406 So I'm not sure how it went missing afterward; bad cherry-picking, maybe? This should be added back to the known values list. ### CPython versions tested on: CPython main branch ### Operating systems tested on: _No response_ <!-- gh-linked-prs --> ### Linked PRs * gh-112985 * gh-113023 <!-- /gh-linked-prs -->
616622cab7200bac476bf753d6cc98f4ee48808c
5b8664433829ea967c150363cf49a5c4c1380fe8
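For reference, the running interpreter's own magic number (the kind of value tracked in the list the issue above wants restored) is exposed via `importlib.util.MAGIC_NUMBER`; 3495 is what this yields on 3.11 final:

```python
import importlib.util

# MAGIC_NUMBER is four bytes: a little-endian 16-bit magic value
# followed by b"\r\n".
magic = int.from_bytes(importlib.util.MAGIC_NUMBER[:2], "little")
print(magic)
```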
python/cpython
python__cpython-112979
# Remove redundant condition inside `take_gil` Based on discussion: https://discuss.python.org/t/redundant-condition-inside-take-gil-function/40882 <!-- gh-linked-prs --> ### Linked PRs * gh-112979 <!-- /gh-linked-prs -->
fed294c6453527addd1644633849e2d8492058c5
1c5fc02fd0576be125638a5261be12eb3224be81
python/cpython
python__cpython-112969
# closefrom check is too strict # Bug report ### Bug description: `closefrom(3)` is available with >=glibc-2.34 on Linux (and Hurd, but, well...) The check in `Python/fileutils.c` is a bit too strict, as it hardcodes FreeBSD. This excludes both the aforementioned systems but also other BSDs where I believe it's available. ### CPython versions tested on: CPython main branch ### Operating systems tested on: Linux <!-- gh-linked-prs --> ### Linked PRs * gh-112969 <!-- /gh-linked-prs -->
c454e934d36193709aadba8e8e28739790086b95
0d2fe6bab01541301abe98a23ee15a16f493fe74
python/cpython
python__cpython-113016
# Make cache information a part of dis.Instruction # Feature or enhancement We currently create fake Instruction objects to represent cache entries. It would make more sense to make the cache info part of the Instruction. This will change the output of ``dis.get_instructions``, but not anything which is documented. <!-- gh-linked-prs --> ### Linked PRs * gh-113016 <!-- /gh-linked-prs -->
428c9812cb4ff3521e5904719224fe63fba5370a
3531ea441b8b76bff90d2ecc062335da65fd3341
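For context on the `dis` proposal above: cache entries are currently surfaced as pseudo-instructions via the `show_caches` flag (added in 3.11), and the proposal moves that information onto the real `Instruction` objects instead:

```python
import dis
import sys

def add(a, b):
    return a + b

plain = list(dis.get_instructions(add))
print([i.opname for i in plain])

if sys.version_info >= (3, 11):
    # On 3.11/3.12 this interleaves fake "CACHE" instructions with the
    # real ones -- exactly the behaviour the proposal wants to replace.
    cached = list(dis.get_instructions(add, show_caches=True))
    print(sum(i.opname == "CACHE" for i in cached))
```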
python/cpython
python__cpython-112988
# idlelib/News3.txt for 3.13.0 and backports Branch 'main' became 3.13.0a0 as of 3.12.0 beta 1: 2023-05-22 However, unless a patch is not backported to 3.12.b#, idlelib/News3.txt items continue going under What's New in IDLE 3.12.0 on both main and 3.12 until 3.12.0rc?, 2023-08 or so. (None for 3.12.0b_.) Subsequent news items go under What's New in IDLE 3.13.0 (new header) on the main branch, What's New in IDLE 3.12.z (new header) on the 3.12 branch, and What's New in IDLE 3.11.z (existing) on the 3.11 branch. In other words, idlelib News3.txt is handled as if main were branched off as of about 3.x.0rc1. This is different from the changelog attached to What's New in 3.x. Release PEPs -- needed for proposed and actual release dates: 3.11 [PEP-664](https://peps.python.org/pep-0664/) 3.12 [PEP-693](https://peps.python.org/pep-0693/) 3.13 [PEP-719](https://peps.python.org/pep-0719/) <!-- gh-linked-prs --> ### Linked PRs * gh-112988 * gh-112990 * gh-112992 * gh-129877 * gh-129878 * gh-129879 <!-- /gh-linked-prs -->
e0fb7004ede71389c9dd462cd03352cc3c3a4d8c
616622cab7200bac476bf753d6cc98f4ee48808c
python/cpython
python__cpython-112950
# Make `pdb` completion similar to repl completion # Feature or enhancement ### Proposal: Currently there are a couple of issues with `pdb`'s completion: 1. It does not know about built-in functions like repl. So `pri\t` won't complete to `print`. 2. Disabling of completion in multi-line mode is a bit too strong - the user should be able to complete on expressions when they need. Also it does not work with libedit. 3. If you are typing a statement/expression, rather than using a command like `p`, pdb will lose the ability to complete expressions. We have `rlcompleter` and we should utilize that to make `pdb`'s completion better - the code is already there and it has fixed a couple of corner issues. ### Has this already been discussed elsewhere? This is a minor feature, which does not need previous discussion elsewhere ### Links to previous discussion of this feature: _No response_ <!-- gh-linked-prs --> ### Linked PRs * gh-112950 <!-- /gh-linked-prs -->
01e7405da400e8997f8964d06cc414045e144681
9db2a8f914ad59019d448cecc43b6d45f46424a0
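Point 1 of the `pdb` proposal above is easy to demonstrate: `rlcompleter` already completes built-in names, which is what the proposal would bring to `pdb`:

```python
import rlcompleter

completer = rlcompleter.Completer()

# Collect all matches for the prefix "pri" by stepping the state
# index, the protocol readline itself uses.
matches = []
state = 0
while (m := completer.complete("pri", state)) is not None:
    matches.append(m)
    state += 1
print(matches)
```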
python/cpython
python__cpython-112949
# Change in tokenize.generate_tokens behaviour with non-ASCII # Bug report ### Bug description: This docstring has non-ASCII characters: ```python import io import tokenize src = '''\ def thing(): """Autorzy, którzy tą jednostkę mają wpisani jako AKTUALNA -- czyli aktualni pracownicy, obecni pracownicy""" ... ''' tokens = list(tokenize.generate_tokens(io.StringIO(src).readline)) for token in tokens: print(token) assert tokens[7].end == (3, 45), tokens[7].end ``` And `tokenize.generate_tokens` has different behaviour between 3.11, and 3.12 (and 3.13). ## Python 3.11 ```console ❯ python3.11 --version Python 3.11.7 ❯ python3.11 1.py TokenInfo(type=1 (NAME), string='def', start=(1, 0), end=(1, 3), line='def thing():\n') TokenInfo(type=1 (NAME), string='thing', start=(1, 4), end=(1, 9), line='def thing():\n') TokenInfo(type=54 (OP), string='(', start=(1, 9), end=(1, 10), line='def thing():\n') TokenInfo(type=54 (OP), string=')', start=(1, 10), end=(1, 11), line='def thing():\n') TokenInfo(type=54 (OP), string=':', start=(1, 11), end=(1, 12), line='def thing():\n') TokenInfo(type=4 (NEWLINE), string='\n', start=(1, 12), end=(1, 13), line='def thing():\n') TokenInfo(type=5 (INDENT), string=' ', start=(2, 0), end=(2, 4), line=' """Autorzy, którzy tą jednostkę mają wpisani jako AKTUALNA -- czyli\n') TokenInfo(type=3 (STRING), string='"""Autorzy, którzy tą jednostkę mają wpisani jako AKTUALNA -- czyli\n aktualni pracownicy, obecni pracownicy"""', start=(2, 4), end=(3, 45), line=' """Autorzy, którzy tą jednostkę mają wpisani jako AKTUALNA -- czyli\n aktualni pracownicy, obecni pracownicy"""\n') TokenInfo(type=4 (NEWLINE), string='\n', start=(3, 45), end=(3, 46), line=' aktualni pracownicy, obecni pracownicy"""\n') TokenInfo(type=54 (OP), string='...', start=(4, 4), end=(4, 7), line=' ...\n') TokenInfo(type=4 (NEWLINE), string='\n', start=(4, 7), end=(4, 8), line=' ...\n') TokenInfo(type=6 (DEDENT), string='', start=(5, 0), end=(5, 0), line='') TokenInfo(type=0 (ENDMARKER), 
string='', start=(5, 0), end=(5, 0), line='') ``` ## Python 3.12 ```console ❯ python3.12 --version Python 3.12.1 ❯ python3.12 1.py TokenInfo(type=1 (NAME), string='def', start=(1, 0), end=(1, 3), line='def thing():\n') TokenInfo(type=1 (NAME), string='thing', start=(1, 4), end=(1, 9), line='def thing():\n') TokenInfo(type=55 (OP), string='(', start=(1, 9), end=(1, 10), line='def thing():\n') TokenInfo(type=55 (OP), string=')', start=(1, 10), end=(1, 11), line='def thing():\n') TokenInfo(type=55 (OP), string=':', start=(1, 11), end=(1, 12), line='def thing():\n') TokenInfo(type=4 (NEWLINE), string='\n', start=(1, 12), end=(1, 13), line='def thing():\n') TokenInfo(type=5 (INDENT), string=' ', start=(2, 0), end=(2, 4), line=' """Autorzy, którzy tą jednostkę mają wpisani jako AKTUALNA -- czyli\n') TokenInfo(type=3 (STRING), string='"""Autorzy, którzy tą jednostkę mają wpisani jako AKTUALNA -- czyli\n aktualni pracownicy, obecni pracownicy"""', start=(2, 4), end=(3, 41), line=' """Autorzy, którzy tą jednostkę mają wpisani jako AKTUALNA -- czyli\n aktualni pracownicy, obecni pracownicy"""\n') TokenInfo(type=4 (NEWLINE), string='\n', start=(3, 45), end=(3, 46), line=' aktualni pracownicy, obecni pracownicy"""\n') TokenInfo(type=55 (OP), string='...', start=(4, 4), end=(4, 7), line=' ...\n') TokenInfo(type=4 (NEWLINE), string='\n', start=(4, 7), end=(4, 8), line=' ...\n') TokenInfo(type=6 (DEDENT), string='', start=(5, 0), end=(5, 0), line='') TokenInfo(type=0 (ENDMARKER), string='', start=(5, 0), end=(5, 0), line='') Traceback (most recent call last): File "/private/tmp/1.py", line 15, in <module> assert tokens[7].end == (3, 45), tokens[7].end AssertionError: (3, 41) ``` ## git bisect Points to PR https://github.com/python/cpython/pull/104323 (gh-102856: Python tokenizer implementation for PEP 701). 
[What’s New In Python 3.12 » Changes in the Python API](https://docs.python.org/3/whatsnew/3.12.html#changes-in-the-python-api) says: > Additionally, there may be some minor behavioral changes as a consequence of the changes required to support [PEP 701](https://peps.python.org/pep-0701/). Some of these changes include: This change isn't listed here, but is this an acceptable behavioural change or something to fix? cc @lysnikolaou @pablogsal ## More info Originally reported at https://github.com/asottile/pyupgrade/issues/923 by @mpasternak with the minimal reproducer created by @asottile. ### CPython versions tested on: 3.12, 3.13 ### Operating systems tested on: macOS <!-- gh-linked-prs --> ### Linked PRs * gh-112949 * gh-112957 <!-- /gh-linked-prs -->
a135a6d2c6d503b186695f01efa7eed65611b04e
4c5b9c107a1d158b245f21a1839a2bec97d05383
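A detail worth noting about the tokenize report above (a hint, not a confirmed diagnosis): the 45 → 41 end-column drift equals the UTF-8 encoding overhead of the non-ASCII characters on the docstring's first line, which is consistent with a byte-versus-character column mix-up somewhere in the new tokenizer:

```python
# First line of the docstring from the reproducer above.
line = '    """Autorzy, którzy tą jednostkę mają wpisani jako AKTUALNA -- czyli'
non_ascii = [ch for ch in line if ord(ch) > 127]
extra_bytes = len(line.encode("utf-8")) - len(line)
print(non_ascii, extra_bytes)
```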
python/cpython
python__cpython-124310
# IDLE: infinite print loop hangs and crashes # Bug report Interactively execute `while 1: 1` and Python hangs while, on Windows, a steady stream of `1`s is printed. In REPL, ^C stops execution with KeyboardInterrupt. In IDLE, ^C has no effect. Same for ^F6, which (at least on Windows) should kill and restart the execution process. On Windows, clicking top menu Shell, in an attempt to access Restart Shell on the dropdown menu, or clicking anywhere else, crashed IDLE and '(Not responding)' is added to the title bar. The Close button is also disabled. This [Discourse post](https://discuss.python.org/t/unable-to-quit-infinite-while-loop-in-idle-3-12-0-64-bit-with-keyboard-interrupt/40780) shows the eventual crash result. I suspect that the prints come so fast that the tk event loop is somehow 'jammed'. This might be unfixable, but it might be worth a look. Can prints get lower priority? Behavior is the same in -n (no subprocess) mode, so not because of IPC socket comms. If no fix, this difference from REPL should be documented. On macOS, the REPL behavior in Terminal is the same. In IDLE, nothing is printed. Instead, I get an immediate twirling colors ball. The only way to quit was to click the dock icon and select Force Quit. It is possible that this should be a tkinter bug. @chrstphrchvz Any comment from your tk knowledge? <!-- gh-linked-prs --> ### Linked PRs * gh-124310 * gh-124318 * gh-124319 <!-- /gh-linked-prs -->
d5f95ec07bb47a4d6554e04d13a979dbeac05f74
342e654b8eda24c68da64cc21bc9583e480d9e8e
python/cpython
python__cpython-126598
# IDLE: no Shell menu item in single-process mode Start IDLE with `python -m idlelib -n` and the Shell menu is missing 'Shell'. -n mode is semi-deprecated, but this is egregious and should be easily fixed. <!-- gh-linked-prs --> ### Linked PRs * gh-126598 * gh-133310 <!-- /gh-linked-prs -->
7e7e49be78e26d0a3b861a04bbec1635aabb71b9
a512905e156bc09a20b171686ac129e66c13f26a
python/cpython
python__cpython-112931
# Error in example of `datetime.time.fromisoformat` # Documentation https://github.com/python/cpython/blob/5bf7580d72259d7d64f5ee8cfc2df677de5310a4/Doc/library/datetime.rst?plain=1#L1811-L1812 I believe the correct example should be as follows. ```pycon >>> time.fromisoformat('04:23:01,000384') datetime.time(4, 23, 1, 384) ``` ## python docs version - 3.11 - 3.12 - 3.13 <!-- gh-linked-prs --> ### Linked PRs * gh-112931 * gh-113427 * gh-113428 <!-- /gh-linked-prs -->
bdc8d667ab545ccec0bf8c2769f5c5573ed293ea
4e5b27e6a3be85853bd04d45128dd7cc706bb1c8
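The corrected example from the report above can be verified directly; note that `fromisoformat()` only accepts the ISO 8601 comma decimal separator on 3.11+:

```python
import sys
from datetime import time

# ',000384' is a fractional second: 0.000384 s == 384 microseconds.
expected = time(4, 23, 1, 384)

if sys.version_info >= (3, 11):
    assert time.fromisoformat('04:23:01,000384') == expected
```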
python/cpython
python__cpython-112921
# datetime.replace() is very slow # Bug report ### Bug description: datetime.replace() is very slow when called with keyword arguments. For example, with python 3.13 from main on my laptop ```console $ python -m timeit -s "from datetime import datetime; dt = datetime.now()" "dt.replace(microsecond=0)" 500000 loops, best of 5: 501 nsec per loop ``` For comparison, ```console $ python -m timeit -s "from datetime import datetime" "datetime.now()" 2000000 loops, best of 5: 150 nsec per loop $ python -m timeit -s "from datetime import datetime" "datetime(2020, 1, 1, 12, 34, 56, 123456)" 2000000 loops, best of 5: 119 nsec per loop ``` So calling replace() is over 4x slower than constructing a new datetime from components and over 3x slower than now(), which makes a system call. ### CPython versions tested on: 3.9, 3.10, 3.13 ### Operating systems tested on: Linux <!-- gh-linked-prs --> ### Linked PRs * gh-112921 * gh-115344 <!-- /gh-linked-prs -->
1f515e8a109204f7399d85b7fd806135166422d9
4287e8608bcabcc5bde851d838d4709db5d69089
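The measurements in the `datetime.replace()` report above can be reproduced in-process with `timeit`; absolute numbers and the ratio vary by machine and Python version:

```python
import timeit
from datetime import datetime

dt = datetime.now()

# Keyword-argument path through replace() vs. rebuilding from
# components -- the two cases compared in the report.
t_replace = timeit.timeit(lambda: dt.replace(microsecond=0), number=100_000)
t_ctor = timeit.timeit(
    lambda: datetime(dt.year, dt.month, dt.day,
                     dt.hour, dt.minute, dt.second, 0),
    number=100_000,
)
print(f"replace: {t_replace:.3f}s  constructor: {t_ctor:.3f}s")
```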
python/cpython
python__cpython-112907
# Performance regression in pathlib path instantiation #110670 has increased the time taken to instantiate a `PurePath` object by ~30%. <!-- gh-linked-prs --> ### Linked PRs * gh-112907 <!-- /gh-linked-prs -->
23df46a1dde82bc5a51578d9443024cf85827b95
96f64a2b1b4e3d4902848c63e42717a9c5539e03
python/cpython
python__cpython-113195
# `unittest` Test Discovery page says "Python 3.11 dropped the namespace packages support," which is inaccurate # Documentation The [documentation for `unittest` Test Discovery ](https://docs.python.org/3/library/unittest.html#test-discovery) says: "Changed in version 3.11: Python 3.11 dropped the [namespace packages](https://docs.python.org/3/glossary.html#term-namespace-package) support. It has been broken since Python 3.7. Start directory and subdirectories containing tests must be regular package that have __init__.py file." I believe the most straightforward interpretation of this statement is that Python, _as a whole_, has dropped namespace packages, which I believe is false. I think the intended interpretation is "Changed in version 3.11: _unittest_ dropped the [namespace packages](https://docs.python.org/3/glossary.html#term-namespace-package) support in Python 3.11. It has been broken since Python 3.7. Start directory and subdirectories containing tests must be regular package that have __init__.py file." <!-- gh-linked-prs --> ### Linked PRs * gh-113195 * gh-113228 * gh-113229 <!-- /gh-linked-prs -->
21d52995ea490328edf9be3ba072821cd445dd30
48c907a15ceae7202fcfeb435943addff896c42c
python/cpython
python__cpython-112885
# build fails with -DWITH_PYMALLOC_RADIX_TREE=0 # Bug report ### Bug description: bpo-37448 added a radix tree based memory map, also allowing to disable it: To disable the radix tree map, set a preprocessor flag as follows: `-DWITH_PYMALLOC_RADIX_TREE=0`. However building with that fails: ``` In file included from ../Include/internal/pycore_interp.h:31, from ../Include/internal/pycore_runtime.h:18, from ../Include/internal/pycore_pystate.h:11, from ../Modules/_asynciomodule.c:7: ../Include/internal/pycore_obmalloc.h:668:28: error: field 'usage' has incomplete type 668 | struct _obmalloc_usage usage; | ^~~~~ ``` ### CPython versions tested on: 3.12 ### Operating systems tested on: Linux <!-- gh-linked-prs --> ### Linked PRs * gh-112885 * gh-113068 <!-- /gh-linked-prs -->
890ce430d94b0b2bccc92a8472b1e1030b4faeb8
c1652d6d6201e5407b94afc297115a584b5a0955
python/cpython
python__cpython-112856
# Optimize pathlib path pickling `pathlib.PurePath.__reduce__()` currently accesses and returns [the `parts` tuple](https://docs.python.org/3/library/pathlib.html#pathlib.PurePath.parts). Pathlib ensures that the strings therein are [interned](https://docs.python.org/3/library/sys.html#sys.intern). There's a good reason to do this: it ensures that the pickled data is as small as possible, with maximum re-use of small string objects. However, it comes with some disadvantages: 1. When normalising any path, we need to call `sys.intern(str(part))` on each part 2. When pickling a path, we must join, parse and normalise, and then generate the `parts` tuple. We could instead make `__reduce__()` return the raw paths fed to the constructor (the `_raw_paths` attribute). This would be faster but less space efficient. With the cost of storage and bandwidth falling at a faster rate than compute, I suspect this trade-off is worth making. <!-- gh-linked-prs --> ### Linked PRs * gh-112856 * gh-113243 <!-- /gh-linked-prs -->
15fbd53ba96be4b6a5abd94ceada684493c36bdd
d8f350309ded3130c43f0d2809dcb8ec13112320
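Whatever representation the `__reduce__()` change above ends up returning, the invariant to preserve is the pickle round-trip; the interning the issue mentions can also be observed on the `parts` strings (identity printed, not asserted, since it is an implementation detail):

```python
import pickle
from pathlib import PurePosixPath

p = PurePosixPath('usr/lib/python3/dist-packages')

# Round-trip invariant that any change of representation must keep.
q = pickle.loads(pickle.dumps(p))
assert q == p and isinstance(q, PurePosixPath)

# With parts-based pickling plus sys.intern, distinct paths share
# their component strings (implementation detail; may change).
a = PurePosixPath('usr/lib').parts[0]
b = PurePosixPath('usr/bin').parts[0]
print(a, b, a is b)
```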
python/cpython
python__cpython-115789
# Add Software Bill-of-Materials for Windows source dependencies ### Proposal: Part of https://github.com/python/cpython/issues/112302 An SBOM document has been added for dependencies within CPython itself. This document is kept up-to-date using tooling and CI within the CPython repository. For building on Windows there exists a repository [`cpython-source-deps`](https://github.com/python/cpython-source-deps) which "mirrors" the source code of projects not in the CPython git repo. These dependencies are pulled in optionally; I still need to investigate what combinations are possible, but I know the possible projects and versions for each CPython branch are currently captured in `PCBuild/get_externals.bat`. I'll be investigating the best method for creating an SBOM for these dependencies such that release-tools can stitch it into the final SBOMs that are distributed with release artifacts. There's a chance that no work needs to be done on this repository; in that case this issue will be migrated. cc @zooba @ned-deily @ambv ### Has this already been discussed elsewhere? See the [Discourse topic](https://discuss.python.org/t/create-and-distribute-software-bill-of-materials-sbom-for-python-artifacts/39293) <!-- gh-linked-prs --> ### Linked PRs * gh-115789 * gh-116128 * gh-117656 * gh-117951 * gh-118521 * gh-119237 * gh-119238 <!-- /gh-linked-prs -->
45d8871dc4da33fcef92991031707c5bf88a40cf
6a86030bc2519b4a6b055e0b47b9870c86db8588
python/cpython
python__cpython-112850
# gh-105716 breaks greenlet/eventlet # Bug report ### Bug description: gh-105716 breaks greenlet/eventlet, which apparently was working with the 3.12.0 release. this is greenlet 3.0.1, plus eventlet 0.33.3 plus the python3.12 for eventlet taken from https://src.fedoraproject.org/rpms/python-eventlet/tree/rawhide ```pytb Traceback (most recent call last): File "/home/packages/12/python-eventlet-0.33.3/tests/isolated/patcher_existing_locks_late.py", line 20, in <module> eventlet.monkey_patch() File "/home/packages/12/python-eventlet-0.33.3/eventlet/patcher.py", line 294, in monkey_patch modules_to_patch += modules_function() ^^^^^^^^^^^^^^^^^^ File "/home/packages/12/python-eventlet-0.33.3/eventlet/patcher.py", line 480, in _green_thread_modules from eventlet.green import threading File "/home/packages/12/python-eventlet-0.33.3/eventlet/green/threading.py", line 22, in <module> eventlet.patcher.inject( File "/home/packages/12/python-eventlet-0.33.3/eventlet/patcher.py", line 109, in inject module = __import__(module_name, {}, {}, module_name.split('.')[:-1]) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/lib/python3.12/threading.py", line 40, in <module> _is_main_interpreter = _thread._is_main_interpreter ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ AttributeError: module 'eventlet.green.thread' has no attribute '_is_main_interpreter' --- FAIL ``` ### CPython versions tested on: 3.12 ### Operating systems tested on: Linux <!-- gh-linked-prs --> ### Linked PRs * gh-112850 * gh-112853 <!-- /gh-linked-prs -->
64d8b4c7099a6097a7f7340c575679c5622fcd5c
cf6110ba1337cb67e5867d86e7c0e8d923a5bc8d
python/cpython
python__cpython-112809
# mimalloc build fails on Solaris due to undeclared 'large_os_page_size' # Bug report ### Bug description: The compilation of mimalloc on Solaris currently fails with the following error: ``` In file included from /.../cpython-main/Objects/mimalloc/prim/prim.c:22, from /.../cpython-main/Objects/mimalloc/static.c:37, from /.../cpython-main/Objects/obmalloc.c:15: /.../cpython-main/Objects/mimalloc/prim/unix/prim.c: In function ‘unix_mmap’: /.../cpython-main/Objects/mimalloc/prim/unix/prim.c:313:28: error: ‘large_os_page_size’ undeclared (first use in this function); did you mean ‘_mi_os_page_size’? 313 | cmd.mha_pagesize = large_os_page_size; | ^~~~~~~~~~~~~~~~~~ | _mi_os_page_size ``` This is already known in mimalloc upstream: https://github.com/microsoft/mimalloc/issues/802 and the PR is taken from there. (Seems like a leftover from https://github.com/microsoft/mimalloc/commit/08a01d26dc079756c8e94409fba051fd2eb5bd2c). ### CPython versions tested on: CPython main branch ### Operating systems tested on: Other <!-- gh-linked-prs --> ### Linked PRs * gh-112809 <!-- /gh-linked-prs -->
10d3f04aec745c6676ef31611549b970a78338b3
ae968d10326165b1114568ab1ca0a7776ea9c234
python/cpython
python__cpython-112807
# Unused function warnings during mimalloc build on Solaris # Bug report ### Bug description: Similarly to FreeBSD here #111906, when building Python main on Solaris, we are seeing unused function warnings: ``` /.../cpython-main/Objects/mimalloc/prim/unix/prim.c: At top level: /.../cpython-main/Objects/mimalloc/prim/unix/prim.c:90:12: warning: 'mi_prim_access' defined but not used [-Wunused-function] 90 | static int mi_prim_access(const char *fpath, int mode) { | ^~~~~~~~~~~~~~ /.../cpython-main/Objects/mimalloc/prim/unix/prim.c:87:12: warning: 'mi_prim_close' defined but not used [-Wunused-function] 87 | static int mi_prim_close(int fd) { | ^~~~~~~~~~~~~ /.../cpython-main/Objects/mimalloc/prim/unix/prim.c:84:16: warning: 'mi_prim_read' defined but not used [-Wunused-function] 84 | static ssize_t mi_prim_read(int fd, void* buf, size_t bufsize) { | ^~~~~~~~~~~~ /.../cpython-main/Objects/mimalloc/prim/unix/prim.c:81:12: warning: 'mi_prim_open' defined but not used [-Wunused-function] 81 | static int mi_prim_open(const char* fpath, int open_flags) { | ^~~~~~~~~~~~ ``` ### CPython versions tested on: CPython main branch ### Operating systems tested on: Other <!-- gh-linked-prs --> ### Linked PRs * gh-112807 <!-- /gh-linked-prs -->
ae968d10326165b1114568ab1ca0a7776ea9c234
0b2340263172ad0fdd95aed6266496b7f4db4de3
python/cpython
python__cpython-124914
# parking_lot.c should check for overflow in Windows `_PySemaphore_PlatformWait` code path # Bug report The cast in `millis = (DWORD) (timeout / 1000000);` may overflow because `DWORD` is an unsigned 32-bit integer and `timeout` is a 64-bit integer. We should clamp the result if the cast would overflow. https://github.com/python/cpython/blob/cc7e45cc572dd818412a649970fdee579417701f/Python/parking_lot.c#L93-L105 Noticed by @vstinner in https://github.com/python/cpython/pull/112733#pullrequestreview-1766327004 <!-- gh-linked-prs --> ### Linked PRs * gh-124914 * gh-124991 <!-- /gh-linked-prs -->
a5fc50994a3fae46d0c3d496c4e1d5e00548a1b8
adfe7657a3f1ce5d8384694ed27a40376a18fa6c
python/cpython
python__cpython-112803
# SubprocessTransport .close() can fail with PermissionError with setuid programs # Bug report ### Bug description: ``` Python 3.12.0 (main, Oct 2 2023, 00:00:00) [GCC 13.2.1 20230918 (Red Hat 13.2.1-3)] on linux ``` aka `python3-3.12.0-1.fc39.x86_64`. Run this code on your Linux system with pkexec setup. That should work on any normalish GNOME system, I'd guess. I'm using Fedora 39. ```python import asyncio class MyProtocol(asyncio.SubprocessProtocol): pass async def run(): loop = asyncio.get_running_loop() transport, protocol = await loop.subprocess_exec(MyProtocol, 'pkexec', 'cat') await asyncio.sleep(10) transport.close() asyncio.run(run()) ``` You should get a popup to enter your admin password. Do that within 10 seconds. Then `cat` (which now has the same PID as we spawned `pkexec` with) will be running as root. `transport.close()` attempts to `kill()` that PID, which fails: ``` Traceback (most recent call last): File "/var/home/lis/src/cockpit/ferny-transport/break.py", line 15, in <module> asyncio.run(run()) File "/usr/lib64/python3.12/asyncio/runners.py", line 194, in run return runner.run(main) ^^^^^^^^^^^^^^^^ File "/usr/lib64/python3.12/asyncio/runners.py", line 118, in run return self._loop.run_until_complete(task) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/lib64/python3.12/asyncio/base_events.py", line 664, in run_until_complete return future.result() ^^^^^^^^^^^^^^^ File "/var/home/lis/src/cockpit/ferny-transport/break.py", line 12, in run transport.close() File "/usr/lib64/python3.12/asyncio/base_subprocess.py", line 117, in close self._proc.kill() File "/usr/lib64/python3.12/subprocess.py", line 2209, in kill self.send_signal(signal.SIGKILL) File "/usr/lib64/python3.12/subprocess.py", line 2196, in send_signal os.kill(self.pid, sig) PermissionError: [Errno 1] Operation not permitted ``` Probably the call to `self._proc.kill()` in `.close()` should be guarded to ignore `PermissionError`. 
It already ignores `ProcessLookupError`: ```python try: self._proc.kill() except ProcessLookupError: pass ``` There are many other setuid utilities that this doesn't seem to be a problem with. The shadow-utils tools like `passwd` seem to remain killable, as does `sudo` (which keeps a process running around and forks off to spawn the desired command as root). In fact, `pkexec` was the only tool I could find that causes this issue, but as viewed from the Python side, we clearly cannot necessarily rely on being able to `.kill()` a PID that we created. Thanks! ### CPython versions tested on: 3.12, 3.13 ### Operating systems tested on: Linux <!-- gh-linked-prs --> ### Linked PRs * gh-112803 <!-- /gh-linked-prs -->
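A minimal sketch of the guard the report proposes (the helper name and the fake process object are hypothetical; the real change would extend the existing `except ProcessLookupError` in `base_subprocess.py`):

```python
def best_effort_kill(proc):
    """Kill a child process, tolerating the two cases where that can
    fail legitimately: the process already exited, or it escalated
    privileges (as pkexec does) and we may no longer signal it."""
    try:
        proc.kill()
    except (ProcessLookupError, PermissionError):
        pass

class _EscalatedProc:
    """Stand-in for a Popen whose child now runs as root."""
    def kill(self):
        raise PermissionError(1, "Operation not permitted")

best_effort_kill(_EscalatedProc())  # no traceback, unlike the report
```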
0187a7e4ec9550a6e35dd06b26f22863520242ab
ca71987f4e3be56a369a1dd57763c6077b3c4899
python/cpython
python__cpython-112932
# zipfile extractall cannot extract zip because of '/' in namelist # Bug report ### Bug description: I am trying to extract all members from a zip file that was created by my code. With Python 3.11 and the macOS default archiver I can extract this file, but with 3.12 I see an error because of '/' in the zipfile.namelist() [debug_data_1_analyze.zip](https://github.com/python/cpython/files/13581108/debug_data_1_analyze.zip) ```python import zipfile with open('./myarchive.zip', 'rb') as fh: z = zipfile.ZipFile(fh) z.extractall('.') ``` <img width="713" alt="image" src="https://github.com/python/cpython/assets/37744768/5fb48148-abc6-46bc-81c0-40a9f010a3d9"> <img width="842" alt="image" src="https://github.com/python/cpython/assets/37744768/69b44170-11c7-409a-8047-909bd7dea1d0"> ### CPython versions tested on: 3.11, 3.12 ### Operating systems tested on: Linux <!-- gh-linked-prs --> ### Linked PRs * gh-112932 * gh-113789 * gh-116823 * gh-116830 <!-- /gh-linked-prs -->
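For reference, a self-contained round-trip of an archive containing an explicit directory member (a name ending in '/'), which is the shape of namelist the report describes; on an affected 3.12 build, the `extractall()` call is where the failure would surface:

```python
import io
import os
import tempfile
import zipfile

def roundtrip(members):
    """Write members (name -> bytes) into an in-memory zip, extract it
    to a temporary directory, and return the extracted file paths."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w") as zf:
        for name, data in members.items():
            zf.writestr(name, data)
    with tempfile.TemporaryDirectory() as dest:
        with zipfile.ZipFile(io.BytesIO(buf.getvalue())) as zf:
            zf.extractall(dest)
        return sorted(
            os.path.relpath(os.path.join(root, name), dest)
            for root, _, files in os.walk(dest)
            for name in files
        )

# An explicit directory member ("data/") plus a file inside it.
print(roundtrip({"data/": b"", "data/hello.txt": b"hi"}))
```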
541c5dbb81c784afd587406be2cc82645979a107
84d1f76092c24c4d6614797cc10eb8a231397646
python/cpython
python__cpython-112819
# Build Error on RISC-V: undefined reference to `__atomic_exchange_1' # Bug report ### Bug description: Configuration: ``` ./configure ``` Build Output ([full-build-output.txt](https://github.com/python/cpython/files/13573831/full-build-output.txt)): ```sh user@starfive:~/cpython$ make gcc -c -fno-strict-overflow -Wsign-compare -DNDEBUG -g -O3 -Wall -std=c11 -Wextra -Wno-unused-parameter -Wno-missing-field-initializers -Wstrict-prototypes -Werror=implicit-function-declaration -fvisibility=hidden -I./Include/internal -I./Include/internal/mimalloc -I. -I./Include -DPy_BUILD_CORE -o Programs/_freeze_module.o Programs/_freeze_module.c gcc -c -fno-strict-overflow -Wsign-compare -DNDEBUG -g -O3 -Wall -std=c11 -Wextra -Wno-unused-parameter -Wno-missing-field-initializers -Wstrict-prototypes -Werror=implicit-function-declaration -fvisibility=hidden -I./Include/internal -I./Include/internal/mimalloc -I. -I./Include -DPy_BUILD_CORE -o Modules/getpath_noop.o Modules/getpath_noop.c gcc -o Programs/_freeze_module Programs/_freeze_module.o Modules/getpath_noop.o Modules/getbuildinfo.o Parser/token.o Parser/pegen.o Parser/pegen_errors.o Parser/action_helpers.o Parser/parser.o Parser/string_parser.o Parser/peg_api.o Parser/lexer/buffer.o Parser/lexer/lexer.o Parser/lexer/state.o Parser/tokenizer/file_tokenizer.o Parser/tokenizer/readline_tokenizer.o Parser/tokenizer/string_tokenizer.o Parser/tokenizer/utf8_tokenizer.o Parser/tokenizer/helpers.o Parser/myreadline.o Objects/abstract.o Objects/boolobject.o Objects/bytes_methods.o Objects/bytearrayobject.o Objects/bytesobject.o Objects/call.o Objects/capsule.o Objects/cellobject.o Objects/classobject.o Objects/codeobject.o Objects/complexobject.o Objects/descrobject.o Objects/enumobject.o Objects/exceptions.o Objects/genericaliasobject.o Objects/genobject.o Objects/fileobject.o Objects/floatobject.o Objects/frameobject.o Objects/funcobject.o Objects/interpreteridobject.o Objects/iterobject.o Objects/listobject.o Objects/longobject.o 
Objects/dictobject.o Objects/odictobject.o Objects/memoryobject.o Objects/methodobject.o Objects/moduleobject.o Objects/namespaceobject.o Objects/object.o Objects/obmalloc.o Objects/picklebufobject.o Objects/rangeobject.o Objects/setobject.o Objects/sliceobject.o Objects/structseq.o Objects/tupleobject.o Objects/typeobject.o Objects/typevarobject.o Objects/unicodeobject.o Objects/unicodectype.o Objects/unionobject.o Objects/weakrefobject.o Python/_warnings.o Python/Python-ast.o Python/Python-tokenize.o Python/asdl.o Python/assemble.o Python/ast.o Python/ast_opt.o Python/ast_unparse.o Python/bltinmodule.o Python/ceval.o Python/codecs.o Python/compile.o Python/context.o Python/critical_section.o Python/crossinterp.o Python/dynamic_annotations.o Python/errors.o Python/flowgraph.o Python/frame.o Python/frozenmain.o Python/future.o Python/getargs.o Python/getcompiler.o Python/getcopyright.o Python/getplatform.o Python/getversion.o Python/ceval_gil.o Python/hamt.o Python/hashtable.o Python/import.o Python/importdl.o Python/initconfig.o Python/instrumentation.o Python/intrinsics.o Python/legacy_tracing.o Python/lock.o Python/marshal.o Python/modsupport.o Python/mysnprintf.o Python/mystrtoul.o Python/optimizer.o Python/optimizer_analysis.o Python/parking_lot.o Python/pathconfig.o Python/preconfig.o Python/pyarena.o Python/pyctype.o Python/pyfpe.o Python/pyhash.o Python/pylifecycle.o Python/pymath.o Python/pystate.o Python/pythonrun.o Python/pytime.o Python/bootstrap_hash.o Python/specialize.o Python/structmember.o Python/symtable.o Python/sysmodule.o Python/thread.o Python/traceback.o Python/tracemalloc.o Python/getopt.o Python/pystrcmp.o Python/pystrtod.o Python/pystrhex.o Python/dtoa.o Python/formatter_unicode.o Python/fileutils.o Python/suggestions.o Python/perf_trampoline.o Python/dynload_shlib.o Modules/config.o Modules/main.o Modules/gcmodule.o Modules/atexitmodule.o Modules/faulthandler.o Modules/posixmodule.o Modules/signalmodule.o Modules/_tracemalloc.o 
Modules/_codecsmodule.o Modules/_collectionsmodule.o Modules/errnomodule.o Modules/_io/_iomodule.o Modules/_io/iobase.o Modules/_io/fileio.o Modules/_io/bytesio.o Modules/_io/bufferedio.o Modules/_io/textio.o Modules/_io/stringio.o Modules/itertoolsmodule.o Modules/_sre/sre.o Modules/_sysconfig.o Modules/_threadmodule.o Modules/timemodule.o Modules/_typingmodule.o Modules/_weakref.o Modules/_abc.o Modules/_functoolsmodule.o Modules/_localemodule.o Modules/_operator.o Modules/_stat.o Modules/symtablemodule.o Modules/pwdmodule.o -ldl -lm /usr/bin/ld: Python/critical_section.o: in function `PyMutex_Lock': /home/user/cpython/./Include/internal/pycore_lock.h:72: undefined reference to `__atomic_compare_exchange_1' /usr/bin/ld: /home/user/cpython/./Include/internal/pycore_lock.h:72: undefined reference to `__atomic_compare_exchange_1' /usr/bin/ld: Python/critical_section.o: in function `_Py_atomic_compare_exchange_uint8': /home/user/cpython/./Include/cpython/pyatomic_gcc.h:105: undefined reference to `__atomic_compare_exchange_1' /usr/bin/ld: /home/user/cpython/./Include/cpython/pyatomic_gcc.h:105: undefined reference to `__atomic_compare_exchange_1' /usr/bin/ld: Python/critical_section.o: in function `_PyCriticalSection_Resume': /home/user/cpython/Python/critical_section.c:80: undefined reference to `__atomic_compare_exchange_1' /usr/bin/ld: Python/critical_section.o:/home/user/cpython/Python/critical_section.c:90: more undefined references to `__atomic_compare_exchange_1' follow /usr/bin/ld: Python/lock.o: in function `_Py_atomic_compare_exchange_uintptr': /home/user/cpython/./Include/cpython/pyatomic_gcc.h:125: undefined reference to `__atomic_exchange_1' /usr/bin/ld: Python/lock.o: in function `_PyEvent_Notify': /home/user/cpython/Python/lock.c:267: undefined reference to `__atomic_compare_exchange_1' /usr/bin/ld: Python/lock.o: in function `PyEvent_Wait': /home/user/cpython/Python/lock.c:272: undefined reference to `__atomic_compare_exchange_1' /usr/bin/ld: 
Python/lock.o: in function `_PyOnceFlag_CallOnceSlow': /home/user/cpython/Python/lock.c:325: undefined reference to `__atomic_compare_exchange_1' /usr/bin/ld: Python/lock.o: in function `_Py_atomic_compare_exchange_uint8': /home/user/cpython/./Include/cpython/pyatomic_gcc.h:105: undefined reference to `__atomic_compare_exchange_1' /usr/bin/ld: Python/lock.o: in function `_PyOnceFlag_CallOnceSlow': /home/user/cpython/Python/lock.c:352: undefined reference to `__atomic_exchange_1' collect2: error: ld returned 1 exit status make: *** [Makefile:1342: Programs/_freeze_module] Error 1 ``` Enviroment: ``` user@starfive:~/cpython$ ld -v GNU ld (GNU Binutils for Debian) 2.39.50.20221224 ``` ``` user@starfive:~/cpython$ gcc -v Using built-in specs. COLLECT_GCC=gcc COLLECT_LTO_WRAPPER=/usr/lib/gcc/riscv64-linux-gnu/12/lto-wrapper Target: riscv64-linux-gnu Configured with: ../src/configure -v --with-pkgversion='Debian 12.2.0-10' --with-bugurl=file:///usr/share/doc/gcc-12/README.Bugs --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --prefix=/usr --with-gcc-major-version-only --program-suffix=-12 --program-prefix=riscv64-linux-gnu- --enable-shared --enable-linker-build-id --libexecdir=/usr/lib --without-included-gettext --enable-threads=posix --libdir=/usr/lib --enable-nls --enable-clocale=gnu --enable-libstdcxx-debug --enable-libstdcxx-time=yes --with-default-libstdcxx-abi=new --enable-gnu-unique-object --disable-libitm --disable-libquadmath --disable-libquadmath-support --enable-plugin --enable-default-pie --with-system-zlib --enable-libphobos-checking=release --with-target-system-zlib=auto --enable-objc-gc=auto --enable-multiarch --disable-werror --disable-multilib --with-arch=rv64gc --with-abi=lp64d --enable-checking=release --build=riscv64-linux-gnu --host=riscv64-linux-gnu --target=riscv64-linux-gnu Thread model: posix Supported LTO compression algorithms: zlib zstd gcc version 12.2.0 (Debian 12.2.0-10) ``` ``` user@starfive:~/cpython$ make -v GNU Make 4.3 Built 
for riscv64-unknown-linux-gnu Copyright (C) 1988-2020 Free Software Foundation, Inc. License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html> This is free software: you are free to change and redistribute it. There is NO WARRANTY, to the extent permitted by law. ``` ``` user@starfive:~/cpython$ uname -a Linux starfive 5.15.0-starfive #1 SMP Fri Nov 24 07:22:28 UTC 2023 riscv64 GNU/Linux user@starfive:~/cpython$ ``` ### CPython versions tested on: CPython main branch ### Operating systems tested on: Other <!-- gh-linked-prs --> ### Linked PRs * gh-112819 <!-- /gh-linked-prs -->
4d1eea59bd26d329417cc2252f1c91b52d0f4a28
c744dbe9ac0e88e395a7464f20a4fd4184a0a222
python/cpython
python__cpython-112771
# test.test_zlib.CompressObjectTestCase.test_flushes fails to parse ZLIB_VERSION with zlib-ng # Bug report ### Bug description: Hello. Fedora is switching from zlib to zlib-ng. https://fedoraproject.org/wiki/Changes/ZlibNGTransition zlib-ng defines `ZLIB_VERSION` as `"1.3.0.zlib-ng"`: https://github.com/zlib-ng/zlib-ng/blob/f3211aba349a1d4781d0d41cb00d29fb8325af06/zlib.h.in#L61 And `test.test_zlib.CompressObjectTestCase.test_flushes` fails to parse it a sequence of dot-separated integers: https://github.com/python/cpython/blob/11d88a178b077e42025da538b890db3151a47070/Lib/test/test_zlib.py#L476 ``` Traceback (most recent call last): File "/builddir/build/BUILD/Python-3.12.0/Lib/test/test_zlib.py", line 477, in test_flushes ver = tuple(int(v) for v in zlib.ZLIB_RUNTIME_VERSION.split('.')) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/builddir/build/BUILD/Python-3.12.0/Lib/test/test_zlib.py", line 477, in <genexpr> ver = tuple(int(v) for v in zlib.ZLIB_RUNTIME_VERSION.split('.')) ^^^^^^ ValueError: invalid literal for int() with base 10: 'zlib-ng' ``` Another test in this file already handles this differently via 4c7108a77144493d0aa6fc0105b67d3797e143f5: https://github.com/python/cpython/blob/11d88a178b077e42025da538b890db3151a47070/Lib/test/test_zlib.py#L798-L804 This could be made a function, so both the tests that need to compare the version could re-use it. ### CPython versions tested on: 3.12, 3.13, CPython main branch ### Operating systems tested on: Linux <!-- gh-linked-prs --> ### Linked PRs * gh-112771 * gh-112773 * gh-112774 * gh-119565 * gh-119566 * gh-119567 <!-- /gh-linked-prs -->
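The shared helper the report proposes could look roughly like this (the name and exact behavior are assumptions; it keeps only the leading run of numeric components so suffixed versions like zlib-ng's still compare as tuples):

```python
def zlib_version_tuple(version):
    """Parse the leading dotted integers of a zlib version string,
    stopping at the first non-numeric component, so zlib-ng's
    "1.3.0.zlib-ng" compares like (1, 3, 0)."""
    parts = []
    for component in version.split("."):
        if not component.isdigit():
            break
        parts.append(int(component))
    return tuple(parts)

assert zlib_version_tuple("1.2.11") == (1, 2, 11)
assert zlib_version_tuple("1.3.0.zlib-ng") == (1, 3, 0)
```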
d384813ff18b33280a90b6d2011654528a2b6ad1
d109f637c048c2b5fc95dc7fdfd50f8ac41a7747
python/cpython
python__cpython-112814
# `PurePath.match` accepts a `PurePath` pattern only since Python 3.12 # Documentation In the docs on [`PurePath.match(pattern, *, case_sensitive=None)`](https://docs.python.org/3/library/pathlib.html#pathlib.PurePath.match) on `pathlib`, it says: > The pattern may be another path object; this speeds up matching the same pattern against multiple files: But it **doesn't say that `pattern` can only be a path object since Python 3.12**. For example, on Python 3.11, you get this error when trying to use a `PurePath` as the pattern: ```python Python 3.11.6 (main, Oct 18 2023, 12:13:43) [GCC 9.4.0] on linux Type "help", "copyright", "credits" or "license" for more information. >>> from pathlib import PurePath >>> pattern = PurePath("*.py") >>> PurePath("hello.py").match(pattern) Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/victor/.pyenv/versions/3.11.6/lib/python3.11/pathlib.py", line 810, in match drv, root, pat_parts = self._flavour.parse_parts((path_pattern,)) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/victor/.pyenv/versions/3.11.6/lib/python3.11/pathlib.py", line 67, in parse_parts drv, root, rel = self.splitroot(part) ^^^^^^^^^^^^^^^^^^^^ File "/home/victor/.pyenv/versions/3.11.6/lib/python3.11/pathlib.py", line 240, in splitroot if part and part[0] == sep: ~~~~^^^ TypeError: 'PurePosixPath' object is not subscriptable >>> vim /home/victor/.pyenv/versions/3.11.6/lib/python3.11/pathlib.py File "<stdin>", line 1 vim /home/victor/.pyenv/versions/3.11.6/lib/python3.11/pathlib.py ^ SyntaxError: invalid syntax >>> :810 File "<stdin>", line 1 :810 ^ SyntaxError: invalid syntax >>> ``` <!-- gh-linked-prs --> ### Linked PRs * gh-112814 * gh-112882 <!-- /gh-linked-prs -->
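On 3.11 and earlier, coercing the pattern back to a string sidesteps the limitation; a quick sketch:

```python
from pathlib import PurePath

pattern = PurePath("*.py")

# Passing the path object directly only works on 3.12+;
# str() works on every version.
assert PurePath("hello.py").match(str(pattern))
assert not PurePath("hello.txt").match(str(pattern))
```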
ed8720ace4f73e49f149a1fdd548063ee05f42d5
f3bff4ee9d3e4276949e5cde81180195b95bacb9
python/cpython
python__cpython-112738
# del-safe symbols in subprocess module cannot be overwritten in cross-build environment # Bug report ### Bug description: The subprocess module has a number of symbols that must be available during the deletion process. To ensure this, the symbols that must be available for deletion are pre-declared, then used as the default value of arguments of methods. For example, see the handling of [`_waitstatus_to_exitcode`, `_WIFSTOPPED` and `_WSTOPSIG` in `_handle_exitstatus`]( https://github.com/python/cpython/blob/main/Lib/subprocess.py#L1954). However, in a cross-platform compilation environment, this poses a problem. The approach taken by cross-platform environments (such as [crossenv](https://github.com/benfogle/crossenv)) is to build a hybrid environment by creating a virtual environment on the build machine, then monkeypatch any standard library module that has platform-specific properties. This allows code to execute using build platform executables, but return properties of the host platform as required. By binding specific constant values as arguments, those values can't be overwritten by a monkeypatch; e.g.,: ``` import subprocess # Mock a disabled waitpid subprocess._waitpid = None ``` This won't work, because `_waitpid` has already been bound as an argument in the methods that use it when the subprocess module was imported. Most historical cross platform compilation environments won't be affected by this, as both the build and host platform will support subprocesses, so there either isn't a need to patch the subprocess module, or the constants on the build and host are the same. However, when building a cross-platform build environment for iOS, the build platform supports subprocesses, but iOS doesn't, and there is a need to monkeypatch the delete-safe symbols so that the hybrid environment disables subprocesses. 
If the symbols that need to be delete-safe were gathered into a single wrapper object, it would both simplify the process of passing in the delete safe properties (as you only need to bind a single object acting as a namespace), and allow for cross-platform environments to monkeypatch the symbols, as the monkeypatch can modify the properties of the namespace object: ``` import subprocess # Mock a disabled waitpid subprocess._del_safe.waitpid = None ``` This *will* work, because `_del_safe` is the object that has been bound as an argument, and `waitpid` is a property of that object. ### CPython versions tested on: 3.13 ### Operating systems tested on: Other <!-- gh-linked-prs --> ### Linked PRs * gh-112738 <!-- /gh-linked-prs -->
dc824c5dc120ffed84bafd23f95e95a99678ed6a
304a1b3f3a8ed9a734ef1d098cafccb6725162db
python/cpython
python__cpython-112732
# Use color to highlight error locations This has several advantages: * Will help a lot with readability, as parsing the error lines is easier if color highlights the error ranges. * In the future we can optionally (via config) drop the ranges and only use color, recovering the extra lines that the carets are taking. * All the cool kids are doing it: This feature has already been successfully implemented in various tools. It has proven to be an effective aid for developers in quickly identifying the source and location of errors. Control features: * The feature will use 16 ANSI color escapes so the actual color is configured via the terminal emulator. * Users can set the following env vars to control it (in the following order): - `PY_COLORS=1` activates the feature (used by pytest: https://github.com/pytest-dev/pytest/blob/022f1b4de546c8b3529e071965555888ecf01cb4/src/_pytest/_io/terminalwriter.py#L28) - `PY_COLORS=0` deactivates the feature - `NO_COLOR=1` deactivates the feature - `FORCE_COLOR=1` activates the feature - The feature is deactivated if the terminal is not a tty or if `TERM` is set to `dumb`. <!-- gh-linked-prs --> ### Linked PRs * gh-112732 * gh-112837 * gh-117672 * gh-118086 * gh-118288 <!-- /gh-linked-prs -->
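One possible reading of the listed precedence, sketched as a pure function (the ordering is an assumption from the issue text, not the merged implementation):

```python
def colors_enabled(environ, *, isatty, term=None):
    """Evaluate the proposed switches in the order the issue lists them,
    falling back to tty/TERM detection when none is set."""
    py_colors = environ.get("PY_COLORS")
    if py_colors == "1":
        return True
    if py_colors == "0":
        return False
    if environ.get("NO_COLOR"):
        return False
    if environ.get("FORCE_COLOR"):
        return True
    return isatty and term != "dumb"

assert colors_enabled({"PY_COLORS": "1"}, isatty=False)
assert not colors_enabled({"NO_COLOR": "1"}, isatty=True)
assert not colors_enabled({}, isatty=True, term="dumb")
```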
16448cab44e23d350824e9ac75e699f5bcc48a14
3870d19d151c31d77b737d6a480aa946b4e87af6
python/cpython
python__cpython-112728
# Optimize `pathlib.Path.absolute()` The implementation of `pathlib.Path.absolute()` can call `with_segments()` with multiple arguments; the resulting path object will be internally unjoined and unparsed, making `str(path.absolute())` slow, amongst other things. `absolute()` already has most of the information it needs to produce paths that are pre-joined, and either pre-parsed (if the input has tail segments) or pre-stringified (otherwise). This could provide a tasty speedup. <!-- gh-linked-prs --> ### Linked PRs * gh-112728 <!-- /gh-linked-prs -->
304a1b3f3a8ed9a734ef1d098cafccb6725162db
9fe7655c6ce0b8e9adc229daf681b6d30e6b1610
python/cpython
python__cpython-112776
# `PyThreadState_Clear()` should only be called from the same interpreter `PyThreadState_Clear()` includes the following comment: https://github.com/python/cpython/blob/1e4680ce52ab6c065f5e0bb27e0b156b897aff67/Python/pystate.c#L1553-L1558 We should enforce this, particularly the comment about the matching interpreters. Calling `PyThreadState_Clear()` from the "wrong" interpreter is unsafe because if any of the `PyObject`s on the `tstate` are not NULL, calling their destructors from the wrong thread can lead to memory corruption. This is also important for the "free threaded" builds because they have free lists associated with the `PyThreadState` and these will be cleared in `PyThreadState_Clear()` -- doing this in the wrong interpreter leads to memory corruption. There are currently two places which call `PyThreadState_Clear()` from the "wrong" interpreter: 1. `interp_create()` in `_xxsubinterpretersmodule.c`. This is pretty easy to fix by setting the thread state before calling clear. https://github.com/python/cpython/blob/1e4680ce52ab6c065f5e0bb27e0b156b897aff67/Modules/_xxsubinterpretersmodule.c#L266 2. `new_interpreter()` in `pylifecycle.c` in the error code path. This is trickier because the thread state is not fully initialized. https://github.com/python/cpython/blob/1e4680ce52ab6c065f5e0bb27e0b156b897aff67/Python/pylifecycle.c#L2164 Related: https://github.com/python/cpython/issues/101436#112722 cc @ericsnowcurrently <!-- gh-linked-prs --> ### Linked PRs * gh-112776 <!-- /gh-linked-prs -->
a3c031884d2f16d84aacc3f733c047b3a6cae208
8a4c1f3ff1e3d7ed2e00e77b94056f9bb7f9ae3b
python/cpython
python__cpython-112722
# Refactor dis module to separate instructions creation from formatting The dis module is not very flexible, because the formatting and instruction creation are all entangled with one another. I will refactor it so that we are able to feed an instruction sequence to a formatter, or to plug in a different formatter into dis functions. <!-- gh-linked-prs --> ### Linked PRs * gh-112722 * gh-113108 * gh-115564 <!-- /gh-linked-prs -->
c98c40227e8cd976a08ff0f6dc386b5d33f62f84
10e9bb13b8dcaa414645b9bd10718d8f7179e82b
python/cpython
python__cpython-112770
# SystemError when builtins is not a dict + eval # Bug report ### Bug description: If `__builtins__` is not a dict, you can get a SystemError: ```pycon >>> import types >>> exec("import builtins; builtins.print(3)", {"__builtins__": types.MappingProxyType({})}) Traceback (most recent call last): File "<stdin>", line 1, in <module> exec("import builtins; builtins.print(3)", {"__builtins__": types.MappingProxyType({})}) File "<string>", line 1, in <module> SystemError: Objects/dictobject.c:1761: bad argument to internal function ``` Originally found this while playing with https://oskaerik.github.io/theevalgame/ ### CPython versions tested on: 3.11, CPython main branch ### Operating systems tested on: macOS <!-- gh-linked-prs --> ### Linked PRs * gh-112770 * gh-113103 * gh-113105 <!-- /gh-linked-prs -->
1161c14e8c68296fc465cd48970b32be9bee012e
12f0bbd6e08bcc1e7165f2641716f7685c1db35c
python/cpython
python__cpython-112714
# Support for Partitioned cookies attribute # Feature or enhancement ### Proposal: Chrome is phasing out support for Third Party Cookies in Q1 2024, and for several key use cases, the proposed solution is CHIPS (https://github.com/privacycg/CHIPS). See Chrome's blogpost about these changes: https://developer.chrome.com/en/docs/privacy-sandbox/third-party-cookie-phase-out/#partitioned Currently, cookies with the `Partitioned` attribute cannot be set from within the `http.cookies` library. I'm proposing that we add support for that attribute. ### Has this already been discussed elsewhere? This is a minor feature, which does not need previous discussion elsewhere ### Links to previous discussion of this feature: _No response_ <!-- gh-linked-prs --> ### Linked PRs * gh-112714 <!-- /gh-linked-prs -->
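Until the attribute is supported, a commonly cited workaround is to register it in `Morsel`'s internal tables; note this leans on undocumented internals (`_reserved`, `_flags`) and is a sketch, not a supported API:

```python
from http import cookies

# Register the attribute in Morsel's internal tables. setdefault/add keep
# this a no-op on versions where "partitioned" is already known.
cookies.Morsel._reserved.setdefault("partitioned", "Partitioned")
cookies.Morsel._flags.add("partitioned")

jar = cookies.SimpleCookie()
jar["session"] = "abc"
jar["session"]["secure"] = True
jar["session"]["partitioned"] = True
print(jar["session"].OutputString())
```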
9abbb58e3f023555473d9e8b82738ef44077cfa8
3a3a6b86f4069a5a3561c65692937eb798053ae5
python/cpython
python__cpython-112679
# Tkinter: `Tkapp_CallDeallocArgs()` should be `static` …as per [PEP 7](https://peps.python.org/pep-0007/#code-lay-out). <!-- gh-linked-prs --> ### Linked PRs * gh-112679 * gh-112690 * gh-112691 <!-- /gh-linked-prs -->
23e001fa9f1897ba3384c02bbbe634313358a549
e5b0db0315941b968ebcc2414bfcdd2da44fd3c2
python/cpython
python__cpython-112676
# Tests for path joining in `test_pathlib` are misplaced Pathlib originally implemented its own path joining algorithm with its own tests. Nowadays pathlib calls through to `os.path.join()` (see #95450), but there's still a handful of tests for path joining remaining that are better suited to `test_posixpath` and `test_ntpath`. Affects the `test_drive_root_parts_common` and `test_drive_root_parts` test methods. <!-- gh-linked-prs --> ### Linked PRs * gh-112676 <!-- /gh-linked-prs -->
28b2b7407c25d448ff5d8836efabbe7c02316568
2c3906bc4b7ee62bf9d122a6fdd98b6ae330643f
python/cpython
python__cpython-112681
# Tkinter: incompatible pointer types warnings when built with Tcl 9 Tcl/Tk 9 changes C APIs to use a different type for sizes to allow transferring ≥32-bit sized things. The new type is `Tcl_Size`, which as of [TIP 660](https://core.tcl-lang.org/tips/doc/trunk/tip/660.md) is defined as either `ptrdiff_t` in Tcl 9, or `int` in Tcl 8.7 for binary compatibility. My impression is that Tcl/Tk wrappers such as Tkinter, which are primarily used for Tk GUI, have little if any need for this. But as of [TIP 664](https://core.tcl-lang.org/tips/doc/trunk/tip/664.md), usage of APIs which previously expected `int *` must be updated for Tcl 9. There was previously effort in Tcl to continue allowing `int *` usage for compatibility, and maybe that would have been good enough for Tkinter, but others in the Tcl/Tk inner circle (who already wish to abandon Tcl/Tk < 9 entirely) rejected that approach. `-Wincompatible-pointer-types` warnings now seen under Tcl 9: ``` ./Modules/_tkinter.c:504:21: warning: incompatible pointer types passing 'int *' to parameter of type 'Tcl_Size *' (aka 'long *') [-Wincompatible-pointer-types] 504 | const char *s = Tcl_GetStringFromObj(value, &len); | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ tcl90/include/tclDecls.h:4235:36: note: expanded from macro 'Tcl_GetStringFromObj' 4235 | (Tcl_GetStringFromObj)((objPtr), (sizePtr))) | ^~~~~~~~~ tcl90/include/tclDecls.h:1754:15: note: passing argument to parameter 'lengthPtr' here 1754 | Tcl_Size *lengthPtr); | ^ ./Modules/_tkinter.c:1138:29: warning: incompatible pointer types passing 'int *' to parameter of type 'Tcl_Size *' (aka 'long *') [-Wincompatible-pointer-types] 1138 | char *data = (char*)Tcl_GetByteArrayFromObj(value, &size); | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ tcl90/include/tclDecls.h:4229:41: note: expanded from macro 'Tcl_GetByteArrayFromObj' 4229 | (Tcl_GetBytesFromObj)(NULL, (objPtr), (sizePtr))) | ^~~~~~~~~ tcl90/include/tclDecls.h:1751:32: note: passing argument to parameter 'numBytesPtr' here 1751 
| Tcl_Obj *objPtr, Tcl_Size *numBytesPtr); | ^ ./Modules/_tkinter.c:1168:18: warning: incompatible pointer types passing 'int *' to parameter of type 'Tcl_Size *' (aka 'long *') [-Wincompatible-pointer-types] 1168 | status = Tcl_ListObjLength(interp, value, &size); | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ tcl90/include/tclDecls.h:4244:44: note: expanded from macro 'Tcl_ListObjLength' 4244 | (Tcl_ListObjLength)((interp), (listPtr), (lengthPtr))) | ^~~~~~~~~~~ tcl90/include/tclDecls.h:1790:33: note: passing argument to parameter 'lengthPtr' here 1790 | Tcl_Obj *listPtr, Tcl_Size *lengthPtr); | ^ ./Modules/_tkinter.c:2104:13: warning: incompatible pointer types passing 'int *' to parameter of type 'Tcl_Size *' (aka 'long *') [-Wincompatible-pointer-types] 2104 | if (Tcl_ListObjGetElements(Tkapp_Interp(self), | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2105 | ((PyTclObject*)arg)->value, | ~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2106 | &objc, &objv) == TCL_ERROR) { | ~~~~~~~~~~~~~ tcl90/include/tclDecls.h:4241:49: note: expanded from macro 'Tcl_ListObjGetElements' 4241 | (Tcl_ListObjGetElements)((interp), (listPtr), (objcPtr), (objvPtr))) | ^~~~~~~~~ tcl90/include/tclDecls.h:1786:33: note: passing argument to parameter 'objcPtr' here 1786 | Tcl_Obj *listPtr, Tcl_Size *objcPtr, | ^ ./Modules/_tkinter.c:2136:9: warning: incompatible pointer types passing 'int *' to parameter of type 'Tcl_Size *' (aka 'long *') [-Wincompatible-pointer-types] 2136 | if (Tcl_SplitList(Tkapp_Interp(self), list, | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2137 | &argc, &argv) == TCL_ERROR) { | ~~~~~~~~~~~~~ tcl90/include/tclDecls.h:4250:40: note: expanded from macro 'Tcl_SplitList' 4250 | (Tcl_SplitList)((interp), (listStr), (argcPtr), (argvPtr))) | ^~~~~~~~~ tcl90/include/tclDecls.h:1796:36: note: passing argument to parameter 'argcPtr' here 1796 | const char *listStr, Tcl_Size *argcPtr, | ^ 5 warnings generated. 
``` Code intending to remain compatible with Tcl 8.6 is suggested to use the following after including tcl.h: ``` #ifndef TCL_SIZE_MAX typedef int Tcl_Size; # define Tcl_GetSizeIntFromObj Tcl_GetIntFromObj # define Tcl_NewSizeIntObj Tcl_NewIntObj # define TCL_SIZE_MAX INT_MAX # define TCL_SIZE_MODIFIER "" #endif ``` I intend to open a PR updating _tkinter.c to pass `Tcl_Size *` where needed. There are several other instances where Tkinter assumes Tcl expects`int` or something not larger than `INT_MAX`, but migrating those to `Tcl_Size` and `TCL_SIZE_MAX` seems optional. <!-- gh-linked-prs --> ### Linked PRs * gh-112681 * gh-120208 * gh-120209 <!-- /gh-linked-prs -->
e0799352823289fafb8131341abd751923ee9c08
7111d9605f9db7aa0b095bb8ece7ccc0b8115c3f
python/cpython
python__cpython-112715
# Typo in Macro/C/Python Type Table The (*) footnote to the table in [this section](https://docs.python.org/3/c-api/structures.html#member-types) of the documentation has a typo. It says: `with Py_T_STRING_INLINE the string is stored directly in the structure` but it seems like it should be: `with Py_T_STRING_INPLACE the string is stored directly in the structure` based on what the (*) is referencing in the table. A quick search of the CPython repo shows no results for `Py_T_STRING_INLINE` (except for this documentation) and many results for `Py_T_STRING_INPLACE`. This bug would be resolved by substituting `Py_T_STRING_INLINE` with `Py_T_STRING_INPLACE`. <!-- gh-linked-prs --> ### Linked PRs * gh-112715 * gh-112726 <!-- /gh-linked-prs -->
a8ce149628c9eaafb8c38fbf25fbd1ed483d2902
4eddb4c9d9452482c9af7fa9eec223d12b5a9f33
python/cpython
python__cpython-112661
# Do not clear arbitrary errors on import Currently the import and module code can clear arbitrary errors when formatting error messages for ImportError or AttributeError, overriding them with ImportError or AttributeError. Usually this is not an issue, because these errors (accessing a missing attribute or dict key) should be ignored, but in theory the cleared error can be an arbitrary one, like KeyboardInterrupt, MemoryError or RecursionError, which should not be ignored. <!-- gh-linked-prs --> ### Linked PRs * gh-112661 <!-- /gh-linked-prs -->
45e6dd63b88a782f2ec96ab1da54eb5a074d8f4c
daa260ebb1c1b20321e7f26df7c9dbd35d4edcbf
python/cpython
python__cpython-112659
# Undeprecate the onerror argument in shutil.rmtree() # Bug report A new `onexc` parameter was added in `shutil.rmtree()` in 3.12 (see #102828). At the same time passing the `onerror` argument was deprecated. This creates inconvenience for code that must run on several Python versions: it forces users to modify their working code to avoid deprecation warnings. I think that there was nothing wrong with `onerror`. The only advantage of `onexc` over `onerror` was that it may require writing less code in some cases, but the deprecation has the opposite effect -- it requires writing more code. So I think the deprecation should be reverted. @iritkatriel <!-- gh-linked-prs --> ### Linked PRs * gh-112659 * gh-112665 <!-- /gh-linked-prs -->
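A sketch of the extra boilerplate the deprecation forces on cross-version code (the wrapper name is invented): `onexc` handlers receive the exception itself, while `onerror` handlers receive an `exc_info` triple, so code supporting both must branch on the version.

```python
import os
import shutil
import sys
import tempfile

# Hypothetical cross-version wrapper; the name rmtree_ignore is ours.
# onexc (3.12+) is called as (function, path, exception);
# onerror (pre-3.12, now deprecated) as (function, path, exc_info).
def rmtree_ignore(path):
    if sys.version_info >= (3, 12):
        shutil.rmtree(path, onexc=lambda func, p, exc: None)
    else:
        shutil.rmtree(path, onerror=lambda func, p, exc_info: None)

d = tempfile.mkdtemp()
with open(os.path.join(d, "f.txt"), "w") as fh:
    fh.write("x")
rmtree_ignore(d)
```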
97857ac0580057c3a4f75d34209841c81ee11a96
162d3d428a836850ba29c58bbf37c931843d9e37
python/cpython
python__cpython-112641
# `types.FunctionType` is missing `kwdefaults` from the constructor # Feature or enhancement https://github.com/python/cpython/blob/a9574c68f04695eecd19866faaf4cdee5965bc70/Objects/funcobject.c#L800-L813 It is rather strange to have `defaults`, but not `kwdefaults`. I propose adding `kwdefaults` as the last parameter. <!-- gh-linked-prs --> ### Linked PRs * gh-112641 <!-- /gh-linked-prs -->
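A small demonstration of the asymmetry the issue describes: positional defaults can be supplied to the `types.FunctionType` constructor, but keyword-only defaults currently must be patched on afterwards (the proposal would allow passing them directly).

```python
import types

def template(a=1, *, b=2):
    return (a, b)

# Positional defaults go in at construction time...
f = types.FunctionType(template.__code__, globals(), "f",
                       (10,), template.__closure__)
# ...but keyword-only defaults must be assigned after the fact:
f.__kwdefaults__ = {"b": 20}
```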
2ac4cf4743a65ac54c7ac6a762bed636800598fe
f653caa5a88d3b5027a8f286ff3a3ccd9e6fe4ed
python/cpython
python__cpython-112626
# stringlib bytearray.join function has the potential to leak memory when used with a custom iterator # Bug report ### Bug description: If a custom iterator is passed into `bytearray.join`, and then it frees the bytearray inside of its `__iter__`, then memory can be read after it is freed: ```python # stringlib_join_ReadAfterFree.py def ReadAfterFree(size, do): b = bytearray(size) class T: def __iter__(self): b.clear() self.v = do() yield b'' yield b'' c = b.join(t:=T()) return memoryview(c).cast('P'), t.v if __name__ == '__main__': leak, obj = ReadAfterFree(bytearray.__basicsize__, lambda: bytearray(8)) print('bytearray:', obj) print('leaked memory of buffer:', leak.tolist()) ``` ```sh ➜ ~/Desktop/Coding/cpython_source git:(main) ./python.exe ../python/stringlib_join_ReadAfterFree.py bytearray: bytearray(b'\x00\x00\x00\x00\x00\x00\x00\x00') leaked memory of buffer: [1, 4305259912, 8, 9, 4307812848, 4307812848, 0] ``` ### CPython versions tested on: CPython main branch ### Operating systems tested on: macOS <!-- gh-linked-prs --> ### Linked PRs * gh-112626 * gh-112693 * gh-112694 <!-- /gh-linked-prs -->
0e732d0997cff08855d98c17af4dd5527f10e419
23e001fa9f1897ba3384c02bbbe634313358a549
python/cpython
python__cpython-112623
# create_task does not pass name parameter to event loop # Bug report ### Bug description: The name parameter in the `asyncio.create_task` function is never passed to the event loop. [Task names are instead only ever set by the `asyncio.create_task` function](https://github.com/python/cpython/blob/main/Lib/asyncio/tasks.py#L411) and custom implementations of the event loop will not ever see the task name. A crude demonstration of this issue is shown below ```python import asyncio from unittest import mock class TestLoop(asyncio.BaseEventLoop): def create_task(self, coro, *, name=None, context=None): if coro.__name__ == "sleep": assert name == "bar" return super().create_task(coro, name=name, context=context) class TestPolicy(asyncio.DefaultEventLoopPolicy): def new_event_loop(self) -> asyncio.AbstractEventLoop: loop = TestLoop() loop._process_events = mock.Mock() loop._write_to_self = mock.Mock() loop._write_to_self.return_value = None loop._selector = mock.Mock() loop._selector.select.return_value = () loop.shutdown_ag_run = False return loop async def foo(): asyncio.create_task( asyncio.sleep(0.1), name="bar", ) def main(): asyncio.set_event_loop_policy(TestPolicy()) asyncio.run(foo()) if __name__ == "__main__": main() ``` ### CPython versions tested on: 3.12 ### Operating systems tested on: macOS <!-- gh-linked-prs --> ### Linked PRs * gh-112623 <!-- /gh-linked-prs -->
a3a1cb48456c809f7b1ab6a6ffe83e8b3f69be0f
c6e614fd81d7dca436fe640d63a307c7dc9f6f3b
python/cpython
python__cpython-112621
# AttributeError in 'python -m dis -C' # Bug report ### Bug description: Sorry @iritkatriel, I hit another crash in the new `dis` code: ```pytb ~/cpython$ ./python.exe -m dis -C Lib/re/_compiler.py >/dev/null Traceback (most recent call last): File "/Users/guido/cpython/Lib/runpy.py", line 198, in _run_module_as_main return _run_code(code, main_globals, None, ~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^ "__main__", mod_spec) ^^^^^^^^^^^^^^^^^^^^^ File "/Users/guido/cpython/Lib/runpy.py", line 88, in _run_code exec(code, run_globals) ~~~~^^^^^^^^^^^^^^^^^^^ File "/Users/guido/cpython/Lib/dis.py", line 991, in <module> main() ~~~~^^ File "/Users/guido/cpython/Lib/dis.py", line 988, in main dis(code, show_caches=args.show_caches, show_offsets=args.show_offsets) ~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/guido/cpython/Lib/dis.py", line 114, in dis _disassemble_recursive(x, file=file, depth=depth, show_caches=show_caches, adaptive=adaptive, show_offsets=show_offsets) ~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/guido/cpython/Lib/dis.py", line 714, in _disassemble_recursive disassemble(co, file=file, show_caches=show_caches, adaptive=adaptive, show_offsets=show_offsets) ~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/guido/cpython/Lib/dis.py", line 706, in disassemble _disassemble_bytes(_get_code_array(co, adaptive), ~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ lasti, co._varname_from_oparg, ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ...<2 lines>... 
co_positions=co.co_positions(), show_caches=show_caches, ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ original_code=co.co_code, show_offsets=show_offsets) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/guido/cpython/Lib/dis.py", line 777, in _disassemble_bytes print(instr._disassemble(lineno_width, is_current_instr, offset_width), ~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/guido/cpython/Lib/dis.py", line 493, in _disassemble fields.append(' ' * self.label_width) ^^^^^^^^^^^^^^^^ AttributeError: 'Instruction' object has no attribute 'label_width' ``` ### CPython versions tested on: CPython main branch ### Operating systems tested on: macOS <!-- gh-linked-prs --> ### Linked PRs * gh-112621 <!-- /gh-linked-prs -->
162d3d428a836850ba29c58bbf37c931843d9e37
4ed46d224401243399b41c7ceef4532bd249da27
python/cpython
python__cpython-112619
# `typing.get_type_hints` Can Return Incorrect Type for Annotated Metadata # Bug report ### Bug description: When running the following example: ```python from typing import Annotated, get_type_hints class MessageCode(str): pass class DisplayName(str): pass class Message: field_a: Annotated[str, MessageCode("A")] class Form: box_a: Annotated[str, DisplayName("A")] hints = get_type_hints(Message, include_extras=True) metadata = hints["field_a"].__metadata__[0] print(f"MessageCode metadata type: {type(metadata)}") hints = get_type_hints(Form, include_extras=True) metadata = hints["box_a"].__metadata__[0] print(f"DisplayName metadata type: {type(metadata)}") ``` the output will be: ``` MessageCode metadata type: <class '__main__.MessageCode'> DisplayName metadata type: <class '__main__.MessageCode'> ``` when it should be: ``` MessageCode metadata type: <class '__main__.MessageCode'> DisplayName metadata type: <class '__main__.DisplayName'> ``` This issue seems to occur only when: * the metadata subclasses an immutable type * when multiple instances of such metadata appear in different class definitions and they have the same value ### CPython versions tested on: 3.9 ### Operating systems tested on: Linux, Windows <!-- gh-linked-prs --> ### Linked PRs * gh-112619 * gh-112628 * gh-112633 <!-- /gh-linked-prs -->
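The conflation can be seen without `typing` at all. A sketch of why a value-keyed cache cannot distinguish the two metadata objects: both subclass `str` with the same contents, so they compare equal and hash equal.

```python
# Same classes as in the report above:
class MessageCode(str):
    pass

class DisplayName(str):
    pass

# Equal and hash-equal despite being different types, so any cache
# keyed on the annotation's value treats them as interchangeable:
a = MessageCode("A")
b = DisplayName("A")
```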
a35a30509820f956d6feeaa0dbf42e9ca82c12bb
a74daba7ca8b68f47284d82d4604721b8748bbde
python/cpython
python__cpython-112579
# `python -m zipfile` issues RuntimeWarning Running `python -m zipfile` on main gives: ``` <frozen runpy>:128: RuntimeWarning: 'zipfile.__main__' found in sys.modules after import of package 'zipfile', but prior to execution of 'zipfile.__main__'; this may result in unpredictable behaviour ``` This can be traced to https://github.com/python/cpython/pull/98103 Originally reported at https://github.com/python/cpython/issues/98098#issuecomment-1835629985 <!-- gh-linked-prs --> ### Linked PRs * gh-112579 * gh-112646 <!-- /gh-linked-prs -->
29e6c7b68acac628b084a82670708008be262379
fc9e24b01fb7da4160b82cef26981d72bb678c13
python/cpython
python__cpython-117169
# [windows/msys2] venv doesn't create activation script for `fish` shell # Bug report ### Bug description: 1. Install MSYS2 for Windows 2. Install `fish` shell under MSYS2 3. Start MSYS2 environment with `fish` shell 4. Run `python -m venv venv` Expect: `venv\Scripts` includes `activate.fish` Actual: `venv\Scripts` does not include `activate.fish` ### CPython versions tested on: 3.10 ### Operating systems tested on: Windows <!-- gh-linked-prs --> ### Linked PRs * gh-117169 <!-- /gh-linked-prs -->
83485a095363dad6c97b19af2826ca0c34343bfc
78a651fd7fbe7a3d1702e40f4cbfa72d87241ef0
python/cpython
python__cpython-112568
# time.perf_counter(): compute Greatest Common Denominator (GCD) to reduce risk of integer overflow The `time.perf_counter()` function is implemented by calling `QueryPerformanceCounter()` and `QueryPerformanceFrequency()`. It computes `QueryPerformanceCounter() * SEC_TO_NS / QueryPerformanceFrequency()` using int64_t integers. The problem is that `SEC_TO_NS` is big: `10^9`. QueryPerformanceFrequency() usually returns 10 MHz on Windows 10 and newer. The fraction `SEC_TO_NS / frequency` = `1_000_000_000 / 10_000_000` can be simplified to `100 / 1`. I propose using a fraction internally to convert the `QueryPerformanceCounter()` value to a number of seconds, and simplifying the fraction using the Greatest Common Denominator (GCD). There are multiple functions using a fraction: * `_PyTime_GetClockWithInfo()` for `clock()` -- `time.process_time()` in Python * `_PyTime_GetProcessTimeWithInfo()` for `times()` -- `time.process_time()` in Python * `py_get_monotonic_clock()` for `mach_absolute_time()` on macOS -- `time.monotonic()` in Python * `py_get_win_perf_counter()` for `QueryPerformanceCounter()` on Windows -- `time.perf_counter()` in Python <!-- gh-linked-prs --> ### Linked PRs * gh-112568 * gh-112587 <!-- /gh-linked-prs -->
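The proposed simplification can be sketched in Python (the real code is C in `pytime.c`; the function name here is invented): reducing the `SEC_TO_NS / frequency` fraction by the GCD makes the intermediate multiplication much smaller, so the int64_t product overflows far later.

```python
import math

SEC_TO_NS = 1_000_000_000

# Hypothetical sketch of the GCD reduction described above.
def ticks_to_ns(ticks, frequency):
    gcd = math.gcd(SEC_TO_NS, frequency)
    numer = SEC_TO_NS // gcd   # 100 for the common 10 MHz frequency
    denom = frequency // gcd   # 1   for the common 10 MHz frequency
    # ticks * 100 overflows int64_t much later than ticks * 10**9 would
    return ticks * numer // denom
```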
5c5022b8625e34f0035ad5a23bc4c2f16649d134
05a370abd6cdfe4b54be60b3b911f3a441026bb2
python/cpython
python__cpython-113040
# asyncio.run unnecessarily calls the repr of the task result twice since Python 3.11 # Bug report ### Bug description: Given the following code: ```python import asyncio import time class Foo: def __repr__(self): time.sleep(1) print('i am a repr, i should not be called. ') return '<Foo>' async def get_foo(): return Foo() asyncio.run(get_foo()) print('Done') ``` Output: ```bash $ python t.py i am a repr, i should not be called. i am a repr, i should not be called. Done ``` This was caused by the new SIGINT handler installed by asyncio.run here: https://github.com/python/cpython/commit/f08a191882f75bb79d42a49039892105b2212fb9 Upon investigation, changing the code with: ```python import asyncio import time class Foo: def __repr__(self): time.sleep(1) print('i am a repr, i should not be called. ') raise BaseException('where is this called???????') return '<Foo>' async def get_foo(): return Foo() asyncio.run(get_foo()) print('Done') ``` It shows: ``` i am a repr, i should not be called. Traceback (most recent call last): File "t.py", line 15, in <module> asyncio.run(get_foo()) File "lib/python3.12/asyncio/runners.py", line 194, in run return runner.run(main) ^^^^^^^^^^^^^^^^ File "lib/python3.12/asyncio/runners.py", line 127, in run and signal.getsignal(signal.SIGINT) is sigint_handler ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "lib/python3.12/signal.py", line 63, in getsignal return _int_to_enum(handler, Handlers) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "lib/python3.12/signal.py", line 29, in _int_to_enum return enum_klass(value) ^^^^^^^^^^^^^^^^^ File "lib/python3.12/enum.py", line 740, in __call__ return cls.__new__(cls, value) ^^^^^^^^^^^^^^^^^^^^^^^ File "lib/python3.12/enum.py", line 1152, in __new__ ve_exc = ValueError("%r is not a valid %s" % (value, cls.__qualname__)) ^^^^^ File "lib/python3.12/reprlib.py", line 21, in wrapper result = user_function(self) ^^^^^^^^^^^^^^^^^^^ File "lib/python3.12/asyncio/base_tasks.py", line 30, in _task_repr info = ' 
'.join(_task_repr_info(task)) ^^^^^^^^^^^^^^^^^^^^^ File "lib/python3.12/asyncio/base_tasks.py", line 10, in _task_repr_info info = base_futures._future_repr_info(task) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "lib/python3.12/asyncio/base_futures.py", line 54, in _future_repr_info result = reprlib.repr(future._result) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "lib/python3.12/reprlib.py", line 58, in repr return self.repr1(x, self.maxlevel) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "lib/python3.12/reprlib.py", line 68, in repr1 return self.repr_instance(x, level) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "lib/python3.12/reprlib.py", line 170, in repr_instance s = builtins.repr(x) ^^^^^^^^^^^^^^^^ File "t.py", line 8, in __repr__ raise BaseException('where is this called???????') BaseException: where is this called??????? ``` This looks like a series of unfortunate events, running on 3.12.0: 1. signal.getsignal tries to convert the handler function to an enum in case it is part of Handlers: https://github.com/python/cpython/blob/0fb18b02c8ad56299d6a2910be0bab8ad601ef24/Lib/signal.py#L29 2. enum raises a ValueError with the repr of the handler function: https://github.com/python/cpython/blob/0fb18b02c8ad56299d6a2910be0bab8ad601ef24/Lib/enum.py#L1152 3. the handler asyncio.run installs uses a functools.partial; its repr will include the repr of the task: https://github.com/python/cpython/blob/0fb18b02c8ad56299d6a2910be0bab8ad601ef24/Lib/asyncio/runners.py#L105 4.
when the repr is actually called at the end (one in `signal.getsignal`, the other in `signal.signal`), the repr of the asyncio task will include the repr of the result: https://github.com/python/cpython/blob/0fb18b02c8ad56299d6a2910be0bab8ad601ef24/Lib/asyncio/runners.py#L127-L129 While one can argue that calling `__repr__` shouldn't cause issues, I _think_ we could avoid them in the `signal._int_to_enum` function completely, by only trying to convert to enum when the value is an integer: ```python def _int_to_enum(value, enum_klass): """Convert a numeric value to an IntEnum member. If it's not a known member, return the numeric value itself. """ if not isinstance(value, int): return value try: return enum_klass(value) except ValueError: return value ``` This should be more efficient on its own anyway. This function's doc is also inaccurate, since it also accepts non-integers (usually a callable). Am I missing something? ### CPython versions tested on: 3.11, 3.12 ### Operating systems tested on: Linux <!-- gh-linked-prs --> ### Linked PRs * gh-113040 * gh-113443 * gh-113444 <!-- /gh-linked-prs -->
050783cb37d6a09d8238fa640814df8a915f6a68
0187a7e4ec9550a6e35dd06b26f22863520242ab
python/cpython
python__cpython-112880
# Add zero value support for statistics.geometric_mean() # Bug report ### Bug description: The [implementation of `statistics.geometric_mean` using logarithms](https://github.com/python/cpython/blob/3.12/Lib/statistics.py#L539C20-L539C20) requires that all input values must be positive. However, a real geometric mean is defined for all sets of *non-negative* real values. The geo mean of any set of numbers containing zero is itself zero. ```python from statistics import geometric_mean geometric_mean([1.0, 2.0, 0.0]) Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/milthorpe/miniconda3/lib/python3.11/statistics.py", line 489, in geometric_mean raise StatisticsError('geometric mean requires a non-empty dataset ' statistics.StatisticsError: geometric mean requires a non-empty dataset containing positive numbers ``` I believe `geometric_mean` should return 0 if any of the input values are zero. (It should continue to return a `StatisticsError` if any of the input values are negative.) ### CPython versions tested on: 3.11 ### Operating systems tested on: Linux <!-- gh-linked-prs --> ### Linked PRs * gh-112880 <!-- /gh-linked-prs -->
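A sketch of the proposed behavior (this is not the stdlib implementation; the function name is invented): return 0.0 as soon as any input is zero, keep raising for negatives, and otherwise use the usual log-space mean.

```python
import math
from statistics import StatisticsError

# Hypothetical sketch of geometric_mean with zero support, per the proposal.
def geometric_mean_with_zero(data):
    data = list(data)
    if not data:
        raise StatisticsError("geometric mean requires a non-empty dataset")
    if any(x < 0 for x in data):
        raise StatisticsError("geometric mean requires non-negative inputs")
    if any(x == 0 for x in data):
        return 0.0  # any zero makes the product, and hence the mean, zero
    return math.exp(math.fsum(map(math.log, data)) / len(data))
```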
f3bff4ee9d3e4276949e5cde81180195b95bacb9
76929fdeebc5f89655a7a535c19fdcece9728a7d
python/cpython
python__cpython-112560
# Add internal-only `_PyThreadStateImpl` "wrapper" for `PyThreadState` # Feature or enhancement The `PyThreadState` struct definition (i.e., `struct _ts`) is visible in Python's public C API. Our [documentation](https://docs.python.org/3/c-api/init.html#c.PyThreadState) lists only the "interp" field as public. In practice, all the fields are visible, and some extensions (like Cython) make use of some of those fields. We will want to add some private fields on a per-`PyThreadState` basis for the `--disable-gil` builds, but we also do not want to expose them publicly. The "solution" used in `nogil-3.12` was to add a new struct `_PyThreadStateImpl` that contains the `PyThreadState` as well as the private fields. Every `PyThreadState` is actually a `_PyThreadStateImpl` -- you can cast pointers from one to the other safely. Only the code that allocates & frees `PyThreadStates` (in `pystate.c`) and code that accesses those private fields needs to know about `_PyThreadStateImpl`. Everything else can keep using `PyThreadState*` pointers. Here is an example definition: ```c typedef struct { PyThreadState base; // private fields go here #ifdef Py_GIL_DISABLED struct _Py_float_state float_state; // e.g., float free-list #endif ... } _PyThreadStateImpl; ``` Some of the private fields include free lists, state for biased reference counting, and mimalloc heaps. Free lists are currently stored per-interpreter (in PyInterpreterState), but we will want them per-PyThreadState in `--disable-gil` builds. ### Alternative We could instead add an opaque pointer from PyThreadState to private data. For example, something like: ```c typedef struct _PyThreadStatePrivate _PyThreadStatePrivate; // opaque in the public API struct _ts { ... _PyThreadStatePrivate *private; }; ``` This requires an extra pointer-dereference to access private data, and many of the use cases for these private fields are performance-sensitive (e.g., free lists, memory allocation). 
cc @ericsnowcurrently, @vstinner <!-- gh-linked-prs --> ### Linked PRs * gh-112560 <!-- /gh-linked-prs -->
db460735af7503984d1b7d878069722db44b11e8
bf0beae6a05f3266606a21e22a4d803abbb8d731
python/cpython
python__cpython-112648
# Add support for thread sanitizer (TSAN) via `--with-thread-sanitizer` # Feature or enhancement GCC and Clang provide ThreadSanitizer, a tool that detects data races. We should add support for thread sanitizer to CPython. Note that Python already supports the memory and address sanitizers. * Add `--with-thread-sanitizer` as a configure option * Define `_Py_THREAD_SANITIZER` if thread-sanitizer is enabled (see, e.g., `_Py_ADDRESS_SANITIZER` in `pyport.h`) * Move the definition of `_Py_NO_SANITIZE_THREAD` from `obmalloc.c` to a place that's more widely available (like `pyport.h`). We're going to need this in a few places Eventually, it would be helpful to have a continuous build for the combination of `--disable-gil --with-thread-sanitizer`. Note that we probably won't want to run all the tests. ThreadSanitizer is slow and also not very useful for single-threaded tests. We should collect a subset of our tests that use threading for the ThreadSanitizer continuous build. <!-- gh-linked-prs --> ### Linked PRs * gh-112648 * gh-113232 * gh-116555 * gh-116558 * gh-116601 * gh-116872 * gh-116896 * gh-116898 * gh-116911 * gh-116924 * gh-116926 * gh-116929 * gh-116953 * gh-117702 * gh-117713 * gh-123833 <!-- /gh-linked-prs -->
88cb9720001295f82c7771ab4ebf20f3cd0b31fb
f46987b8281148503568516c29a4a04a75aaba8d
python/cpython
python__cpython-112883
# mimalloc: additional integration and changes for `--disable-gil` builds # Feature or enhancement Mimalloc was added as an allocator in https://github.com/python/cpython/issues/90815. The `--disable-gil` builds need further integration with mimalloc, as well as some modifications to mimalloc to support thread-safe garbage collection in `--disable-gil` builds and the dictionary accesses that mostly avoid locking. These changes can be split up across multiple PRs. * Currently, when mimalloc is enabled, all allocations go to the default heap. This is fine for `PyMem_Malloc` calls, but we need separate heaps for `PyObject_Malloc` and `PyObject_GC_New`. We should associate some `mi_heap_t`s with each `PyThreadState`. Every PyThreadState needs four heaps: one for PyMem_Malloc, one for non-GC objects (via PyObject_Malloc), one for GC objects with managed dicts (extra pre-header) and one for GC objects without a managed dict. We need some way to know which heap to use in `_PyObject_MiMalloc`. There's not a great way to do this, but I suggest adding something like a ["current pyobject heap"](https://github.com/colesbury/nogil-3.12/blob/cedde4f5ec3759ad723c89d44738776f362df564/Include/cpython/pystate.h#L127) variable to PyThreadState. It should generally point to the `PyObject_Malloc` heap, but `PyObject_GC_New` should temporarily override it to point to the correct GC heap when called * `--disable-gil` should imply `--with-mimalloc` and require mimalloc (i.e., disallow changing the allocator with `PYTHONMALLOC`). * We should tag each mi_heap_t and mi_page_t with a number identifying which type of allocation it's associated with. This is important for when pages are abandoned (i.e., when a thread exits with live blocks remaining) and the page is no longer associated with a heap. The GC still needs to identify which of those pages store GC-enabled objects. 
(see https://github.com/colesbury/nogil-3.12/commit/d447b6980856df7e0050ecaba4fd6cf21747d4f2) * When claiming a page from an abandoned segment, mimalloc should associate it with the correct heap from the current thread. In other words, pages that store GC-enabled objects should only be put back in the correct GC heap. cc @DinoV <!-- gh-linked-prs --> ### Linked PRs * gh-112883 * gh-113263 * gh-113492 * gh-113717 * gh-113742 * gh-113995 * gh-114133 <!-- /gh-linked-prs -->
fdee7b7b3e15931d58f07e5449de2e55b4d48b05
fed294c6453527addd1644633849e2d8492058c5
python/cpython
python__cpython-112533
# Make the garbage collector thread-safe in `--disable-gil` builds # Feature or enhancement Python's cyclic garbage collector relies on the global interpreter lock for thread-safety. There are a number of changes needed to make the GC thread-safe in `--disable-gil` builds. My intention is to implement these as a series of smaller changes. * Garbage collection is guarded by `gcstate->collecting`. This need to be made thread-safe. * The `--disable-gil` builds should stop-the-world when finding garbage, but not when calling finalizers or destructors. (depends on #111964) * The `--disable-gil` builds should find GC-enabled objects by traversing the mimalloc heaps, because the `_gc_next`/`_gc_prev` lists are not thread-safe. (Note it's safe to use the lists during stop-the-world pauses; we just can't maintain them safely during normal Python execution). * The `--disable-gil` builds should probably use only a single generation to avoid frequent stop-the-world pauses * Eventually, we can get rid of the GC pre-header in `--disable-gil` builds to save memory. (This will require updating the trashcan mechanism.) Remaining work: - [ ] Fix scheduling of free-threaded GC (currently scheduled as if there is a young generation) - [ ] Update Python docs - [ ] Refactor out common code from `gc.c` and `gc_free_threaded.c` See also https://github.com/python/cpython/issues/111964 <!-- gh-linked-prs --> ### Linked PRs * gh-112533 * gh-113747 * gh-114157 * gh-114262 * gh-114564 * gh-114732 * gh-114823 * gh-114880 * gh-115488 * gh-115524 * gh-117370 <!-- /gh-linked-prs -->
d70e27f25886e3ac1aa9fcc2d44dd38b4001d8bb
0738b9a338fd27ff2d4456dd9c15801a8858ffd9
python/cpython
python__cpython-112520
# Make it possible to specify flags for pseudo instructions defined in bytecodes.c The instruction metadata is incorrect for some of the pseudo instructions. For instance, these map to ``NOP``, and take their metadata from it, but they should have the ``HAS_ARG`` flag set. ``` [SETUP_FINALLY] = { true, 0, 0 }, [SETUP_CLEANUP] = { true, 0, 0 }, [SETUP_WITH] = { true, 0, 0 }, ``` We need to be able to specify flags for pseudo instructions in bytecodes.c (and relax assertions that it is identical to the target flags). <!-- gh-linked-prs --> ### Linked PRs * gh-112520 <!-- /gh-linked-prs -->
07ebd46f9e55ed2f18c5ea2a79ec5054bc26b915
7eeea13403882af63a71226433c9a13b80c22564
python/cpython
python__cpython-112517
# Update the bundled version of pip to 23.3.1 # Feature or enhancement ### Proposal: Update the bundled pip in ensurepip to the latest version of pip (23.3.1). This ensures that users who install newest release of Python get the newest version of pip. Also the latest pip version includes a couple of security improvements, fix of CVE-2023-5752 and updated urllib3. ### Has this already been discussed elsewhere? This is a minor feature, which does not need previous discussion elsewhere ### Links to previous discussion of this feature: _No response_ <!-- gh-linked-prs --> ### Linked PRs * gh-112517 * gh-112718 * gh-112719 <!-- /gh-linked-prs -->
1e4680ce52ab6c065f5e0bb27e0b156b897aff67
e08b70fab1fbc45fa498020aac522ae1d5da6136
python/cpython
python__cpython-112511
# Add `readline.backend` for the backend `readline` uses # Feature or enhancement ### Proposal: Currently we support two backends with readline: GNU readline and editline. They work in a similar way but have some differences. Notably the way to set `<tab>` as the complete key. The users need to distinguish the backend at run time, and currently the recommended way is: ```python if 'libedit' in getattr(readline, '__doc__', ''): ``` We have worse checks like ```python readline_doc = getattr(readline, '__doc__', '') if readline_doc is not None and 'libedit' in readline_doc: ``` in `site.py`. It would be nice to provide a more official and clean way to check the backend, instead of querying for the docstring for the module. This is also mentioned in https://github.com/python/cpython/pull/107748#discussion_r1395432432 by @encukou . In this proposal, a new attribute `backend` is added which could be either `readline` or `editline`. ### Has this already been discussed elsewhere? This is a minor feature, which does not need previous discussion elsewhere ### Links to previous discussion of this feature: _No response_ <!-- gh-linked-prs --> ### Linked PRs * gh-112511 <!-- /gh-linked-prs -->
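Side by side, the current docstring check and the proposed attribute (the `backend` name is taken from the proposal; it does not exist before the change lands, so the second check falls back gracefully):

```python
import readline

# Today's recommended detection, via the module docstring (quoted above):
uses_libedit = 'libedit' in (getattr(readline, '__doc__', '') or '')

# With the proposed attribute this would become (attribute name assumed
# from the proposal; getattr keeps it safe on older versions):
uses_libedit_new = getattr(readline, 'backend', None) == 'editline'
```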
c2982380f827e53057068eccf9f1a16b5a653728
f8ff80f63536e96b004d29112452a8f1738fde37
python/cpython
python__cpython-112512
# TypedDict: `__required_keys__` and `__optional_keys__` can be wrong in the presence of inheritance # Bug report ### Bug description: ```python from typing import NotRequired, Required, TypedDict class A(TypedDict): a: NotRequired[int] b: Required[int] class B(A): a: Required[int] b: NotRequired[int] print(B.__required_keys__) print(B.__optional_keys__) ``` This will print (tried on 3.11 and current-ish main): ``` frozenset({'b', 'a'}) frozenset({'b', 'a'}) ``` But obviously, a single key should only be either Required or NotRequired, not both. The child class's version should prevail. @alicederyn and I discovered this while discussing the implementation of PEP-705 (https://github.com/python/typing_extensions/pull/284/#discussion_r1408644357). cc @davidfstr for PEP 655. I also see some problems with how type checkers handle this case, but i'll post about that separately. ### CPython versions tested on: 3.11, CPython main branch ### Operating systems tested on: macOS <!-- gh-linked-prs --> ### Linked PRs * gh-112512 * gh-112530 * gh-112531 <!-- /gh-linked-prs -->
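A sketch of the merge semantics one would expect, using plain sets rather than the actual `typing` machinery: a child's redeclaration should *move* a key between the required and optional sets, never leave it in both.

```python
# Hypothetical helper, not the typing-module implementation.
def merge_keys(parent_required, parent_optional,
               child_required, child_optional):
    required = (parent_required - child_optional) | child_required
    optional = (parent_optional - child_required) | child_optional
    return required, optional

# Mirrors classes A and B from the report: A has required {'b'},
# optional {'a'}; B flips both declarations.
req, opt = merge_keys({"b"}, {"a"}, {"a"}, {"b"})
```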
403886942376210662610627b01fea6acd77d331
e0449b9a7fffc0c0eed806bf4cbb8f1f65397bbb
python/cpython
python__cpython-112508
# venv/scripts/common/activate should use `uname` instead of `$OSTYPE` to detect Cygwin and MSYS # Bug report ### Bug description: The venv activate script uses the `$OSTYPE` variable to determine if it's running on Cygwin or MSYS. https://github.com/python/cpython/blob/fb202af4470d6051a69bb9d2f44d7e8a1c99eb4f/Lib/venv/scripts/common/activate#L41-L49 `$OSTYPE` is not defined by POSIX. It's provided by bash and zsh, but may not be present in other shells like dash. The `uname` command is always available in any shell and should be used instead. ### CPython versions tested on: 3.12 ### Operating systems tested on: Windows <!-- gh-linked-prs --> ### Linked PRs * gh-112508 * gh-130674 <!-- /gh-linked-prs -->
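A sketch of the suggested replacement (the variable name is ours): dispatch on `uname -s`, which any POSIX shell can run, instead of the bash/zsh-only `$OSTYPE` variable.

```shell
# Hypothetical detection via uname(1); patterns cover the environments
# named in the report plus MinGW, which reports e.g. MINGW64_NT-10.0.
case "$(uname -s)" in
    CYGWIN*)      env_kind="cygwin" ;;
    MSYS*|MINGW*) env_kind="msys" ;;
    *)            env_kind="other" ;;
esac
echo "$env_kind"
```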
2a378dba987e125521b678364f0cd44b92dd5d52
4b421e8aca7f2dccc5ac8604b78589941dd7974c
python/cpython
python__cpython-112562
# gc.collect() docs are wrong https://docs.python.org/3/library/gc.html#gc.collect says "The number of unreachable objects found is returned." In fact, it returns (https://github.com/python/cpython/blob/48dfd74a9db9d4aa9c6f23b4a67b461e5d977173/Modules/gcmodule.c#L1378) the sum of two numbers: the number of objects collected plus the number of uncollectable objects. The docs should be changed. (What makes an object uncollectable? I didn't read too closely but it seems to have something to do with finalizers.) <!-- gh-linked-prs --> ### Linked PRs * gh-112562 <!-- /gh-linked-prs -->
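A quick demonstration of the distinction: the return value counts objects actually collected (plus uncollectable ones), which for a simple two-object cycle is at least two. The exact number can vary slightly across versions (instance `__dict__`s may be counted), so the assertion is deliberately loose.

```python
import gc

gc.collect()            # start from a clean slate

class Node:
    pass

a, b = Node(), Node()
a.ref, b.ref = b, a     # create a reference cycle
del a, b                # the cycle is now unreachable
n = gc.collect()        # returns collected + uncollectable, not "found"
```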
730d450d4334978f07e3cf39e1b320f2954e7963
482b0ee8f6cdecd96c246c8bcbda93292f4d08cc
python/cpython
python__cpython-112491
# AIX build break because of openssl with no PSK support # Bug report ### Bug description: AIX build broke after the commit #103181 `./Modules/_ssl.c:4757:5: error: unknown type name 'SSL_psk_client_cb_func' 4757 | SSL_psk_client_cb_func ssl_callback; | ^~~~~~~~~~~~~~~~~~~~~~ ./Modules/_ssl.c:4761:22: warning: assignment to 'int' from 'void *' makes integer from pointer without a cast [-Wint-conversion] 4761 | ssl_callback = NULL; | ^ ./Modules/_ssl.c:4767:22: warning: assignment to 'int' from 'unsigned int (*)(SSL *, const char *, char *, unsigned int, unsigned char *, unsigned int)' {aka 'unsigned int (*)(struct ssl_st *, const char *, char *, unsigned int, unsigned char *, unsigned int)'} makes integer from pointer without a cast [-Wint-conversion] 4767 | ssl_callback = psk_client_callback; | ^ ./Modules/_ssl.c:4774:5: error: implicit declaration of function 'SSL_CTX_set_psk_client_callback'; did you mean 'SSL_CTX_set_security_callback'? [-Werror=implicit-function-declaration] 4774 | SSL_CTX_set_psk_client_callback(self->ctx, ssl_callback); | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ | SSL_CTX_set_security_callback` Recent OpenSSL versions on AIX have been built without PSK support (--no-psk in configure). The PSK-related functions are protected under OPENSSL_NO_PSK (which is set) in the headers, and the functions are not part of the library. Hence the failure. Applications which use openssl can check OPENSSL_NO_PSK to not expose the PSK-related code when it is set (for example, python cryptography, proftpd, stunnel, etc. do that). I think in python we can also do that, so that an openssl library built without PSK support can also be used to build the _ssl module. ### CPython versions tested on: CPython main branch ### Operating systems tested on: Other <!-- gh-linked-prs --> ### Linked PRs * gh-112491 <!-- /gh-linked-prs -->
e413daf5f6b983bdb4e1965d76b5313cb93b266e
48dfd74a9db9d4aa9c6f23b4a67b461e5d977173
python/cpython
python__cpython-112439
# Fix support of format units with the "e" prefix in nested tuples in PyArg_Parse # Bug report `PyArg_Parse` format units `es`, `et`, `es#`, and `et#` are not correctly supported in nested tuples. This is because the code for parsing nested tuples counts the number of alphabetical symbols to determine the number of items. But "e" is a prefix and should not be counted. <!-- gh-linked-prs --> ### Linked PRs * gh-112439 * gh-112460 * gh-112461 <!-- /gh-linked-prs -->
4eea1e82369fbf7a795d1956e7a8212a1b58009f
812360fddda86d7aff5823f529ab720f57ddc411
python/cpython
python__cpython-113790
# Add ability to force alignment of ctypes.Structure # Feature or enhancement ### Proposal: When creating a `ctypes.Structure` to map data from c/c++ to python, I have been coming up against issues where I have a structure which has an alignment due to a `#pragma align`. Currently there is no way to define a `ctypes.Structure` which can map to an object like this. I propose we add an `_align_` attribute to the `ctypes.Structure` class which will instruct the code how to align the structure itself in memory. In the way that the `_pack_` attribute indicates the *maximum* alignment of the fields in the struct, this would indicate the *minimum* alignment of the struct itself. An example of such a struct and it's use is as follows: ```python import ctypes class IDString(ctypes.Structure): _align_ = 0x10 _fields_ = [ ("string", ctypes.c_char * 0x10), ] class main(ctypes.Structure): _fields_ = [ ("first", ctypes.c_uint32), ("second", ctypes.c_ubyte), ("string", IDString), ] data = bytearray( b"\x07\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00" b"\x68\x65\x6c\x6c\x6f\x20\x77\x6f\x72\x6c\x64\x21\x00\x00\x00\x00" ) m = main.from_buffer(data) print(f"first: {m.first}") # first: 7 print(f"second: {m.second}") # second: 1 print(f"string: {m.string.string}") # string: b'hello world!' ``` Without the `_align_` attribute the value of `m.string.string` would just be an empty `bytes` object since it would be reading from 8 bytes into the `bytearray`. I have already made a (preliminary) implementation of this [here](https://github.com/monkeyman192/cpython/commit/e3ad227c82a065ae2f8ca83b6fc4ff518296616e). Because the attribute is optional, I believe there are no potential backward compatibility issues as the default alignment will simply be what it was before (ie. 1). ### Has this already been discussed elsewhere? 
I have already discussed this feature proposal on Discourse ### Links to previous discussion of this feature: https://discuss.python.org/t/add-ability-to-force-alignment-of-ctypes-structure/39109 <!-- gh-linked-prs --> ### Linked PRs * gh-113790 * gh-125087 * gh-125113 <!-- /gh-linked-prs -->
298bcdc185d1a9709271e61a4cc529d33483add4
f42e112fd86edb5507a38a2eb850d0ebc6bc27a2
python/cpython
python__cpython-112432
# venv/scripts/common/activate should unconditionally call `hash -r` # Bug report ### Bug description: The venv/scripts/common/activate script calls `hash -r` in two places to make sure the shell picks up the environment changes the script makes. Before that, it checks to see if the shell running the script is bash or zsh. https://github.com/python/cpython/blob/fb202af4470d6051a69bb9d2f44d7e8a1c99eb4f/Lib/venv/scripts/common/activate#L20-L22 https://github.com/python/cpython/blob/fb202af4470d6051a69bb9d2f44d7e8a1c99eb4f/Lib/venv/scripts/common/activate#L75-L77 `hash -r` is specified by POSIX and is not exclusive to bash and zsh. This guard will prevent the script from calling `hash -r` in other `#!/bin/sh`-compatible shells like dash. ### CPython versions tested on: 3.11, 3.12, 3.13 ### Operating systems tested on: macOS, Windows <!-- gh-linked-prs --> ### Linked PRs * gh-112432 * gh-112492 * gh-112493 <!-- /gh-linked-prs -->
a194938f33a71e727e53490815bae874eece1460
f14d741daa1b9e5b9c9fc1edba93d0fa92b5ba8d
python/cpython
python__cpython-112421
# The `find_module` fallback has been removed but is still mentioned in `sys.meta_path`'s documentation # Documentation The `find_module` fallback for `sys.meta_path` has been [removed](https://github.com/python/cpython/issues/98040) in Python 3.12, but is still mentioned in the documentation for `sys.meta_path`: https://docs.python.org/3/library/sys.html#sys.meta_path. <!-- gh-linked-prs --> ### Linked PRs * gh-112421 * gh-113934 <!-- /gh-linked-prs -->
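For context, a modern meta path finder only needs `find_spec()`; since 3.12 the removed `find_module()` fallback is never consulted. A minimal sketch (class name is illustrative):

```python
import importlib.abc
import sys

class DeclineAllFinder(importlib.abc.MetaPathFinder):
    """A finder implementing only find_spec(), the modern hook."""

    def find_spec(self, fullname, path, target=None):
        # Returning None tells the import system to try the next
        # finder on sys.meta_path.
        return None

sys.meta_path.insert(0, DeclineAllFinder())
import json  # still resolved, by the later built-in finders
sys.meta_path.pop(0)
print(json.__name__)
```

A finder like this that defined only `find_module()` would simply be skipped on 3.12+, which is why the stale docs matter.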
ec23e90082ffdedc7f0bdd2dfadfc4983ddc0712
2ac4cf4743a65ac54c7ac6a762bed636800598fe
python/cpython
python__cpython-112425
# `ModuleType.__repr__` can fail on py312+ # Bug report ### Bug description: Calling `repr()` on a `ModuleType` instance can fail with `AttributeError` on py312+, due to changes made in #98870. This is because the implementation of `importlib._bootstrap._module_repr_from_spec` assumes that all loaders will have a `_path` attribute. But if the module had a custom loader, that won't necessarily hold true: `_path` is an undocumented, internal, private attribute that CPython's import loaders have, but custom third-party loaders won't necessarily have. An easy way to see this is the following (in a py312 venv with `setuptools` installed): ```pycon Python 3.12.0 (tags/v3.12.0:0fb18b0, Oct 2 2023, 13:03:39) [MSC v.1935 64 bit (AMD64)] on win32 Type "help", "copyright", "credits" or "license" for more information. >>> from setuptools._vendor import packaging >>> repr(packaging) Traceback (most recent call last): File "<stdin>", line 1, in <module> File "<frozen importlib._bootstrap>", line 545, in _module_repr File "<frozen importlib._bootstrap>", line 830, in _module_repr_from_spec AttributeError: 'VendorImporter' object has no attribute '_path' ``` `setuptools` implements the custom `VendorImporter` loader here: https://github.com/pypa/setuptools/blob/b1bf87be7869df40872e947e27296ef87e3125ae/setuptools/extern/__init__.py#L5-L70. Instances of `VendorImporter` do not have a `_path` attribute. This kind of thing trips up external tools such as mypy's `stubtest.py` script, which calls `repr()` on arbitrary instances of `types.ModuleType` on certain code paths and (reasonably) expects that to always succeed: https://github.com/python/mypy/blob/1200d1d956e589a0a33c86ef8a7cb3f5a9b64f1f/mypy/stubtest.py#L224 Cc. 
@FFY00 and @jaraco, as participants in #98139 (the issue #98870 was linked to) ### CPython versions tested on: 3.12 ### Operating systems tested on: Windows <!-- gh-linked-prs --> ### Linked PRs * gh-112425 * gh-112436 * gh-112440 * gh-112475 * gh-112480 <!-- /gh-linked-prs -->
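In the meantime, tools can reproduce the setup and guard the call. This sketch (my addition, not from the issue) builds a module whose loader, like setuptools' `VendorImporter`, carries no private `_path` attribute; whether `repr()` raises depends on the spec's attributes and the Python version, so the call is wrapped:

```python
import importlib.util

class MinimalLoader:
    # A third-party loader only has to satisfy the loader protocol;
    # nothing obliges it to define CPython's private `_path`.
    def create_module(self, spec):
        return None  # use default module creation

    def exec_module(self, module):
        pass

spec = importlib.util.spec_from_loader("demo_mod", MinimalLoader())
mod = importlib.util.module_from_spec(spec)

try:
    description = repr(mod)
except AttributeError:
    # Affected 3.12 builds can end up here for some specs.
    description = f"<module {mod.__name__!r} (unrepresentable)>"
```

Either way `description` is usable, which is the behaviour tools like stubtest need.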
0622839cfedacbb48eba27180fd0f0586fe97771
e954ac7205d7a6e356c1736eb372d2b50dbd9f69
python/cpython
python__cpython-112406
# Optimise `pathlib.Path.relative_to()` # Feature or enhancement ### Proposal: `pathlib.Path.relative_to()` can be significantly optimised by making use of `itertools.chain` ### Has this already been discussed elsewhere? This is a minor feature, which does not need previous discussion elsewhere ### Links to previous discussion of this feature: _No response_ <!-- gh-linked-prs --> ### Linked PRs * gh-112406 <!-- /gh-linked-prs -->
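The core of the trick can be sketched as follows (an illustration of the idea, not the merged patch): lazily chain the path with its `parents` sequence instead of materialising a combined list, and count how far down the walk the ancestor appears.

```python
from itertools import chain
from pathlib import PurePosixPath

path = PurePosixPath("/usr/local/lib/python")
other = PurePosixPath("/usr/local")

# chain() yields the path and then each ancestor without building
# a temporary list such as [path, *path.parents].
for step, candidate in enumerate(chain([path], path.parents)):
    if candidate == other:
        break
else:
    raise ValueError(f"{path!r} is not relative to {other!r}")

# `step` trailing components remain once the shared prefix is removed.
print(step)  # 2 -> relative path is lib/python
```

Avoiding the intermediate list is what makes the approach faster on hot paths.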
418d585febd280e779274f313f910613ac1a1c30
9fe60340d7e8dc22b3aec205c557bc69a1b2d18c
python/cpython
python__cpython-112410
# Assertion failure in `Objects/call.c`:342: `!_PyErr_Occurred(tstate)` failed # Crash report ### What happened? I found some crashes when adding an additional fuzz target in https://github.com/python/cpython/pull/111721. ```python # This script fails with a C assertion failure when assertions are enabled. # # I ran this through Python built from source with assertions enabled: # # ./configure --with-assertions --prefix "$PWD/debugbuild" # make -j12 altinstall # ./debugbuild/bin/python/crash2.py # # I ran this on ARM64 macOS: # # Darwin dialectic.local 23.1.0 Darwin Kernel Version 23.1.0: Mon Oct 9 21:27:24 PDT 2023; root:xnu-10002.41.9~6/RELEASE_ARM64_T6000 arm64 # Input found via the fuzz target added in https://github.com/python/cpython/pull/111721, then manually minimized s = b"with(0,,):\n\x01" # This line fails with a C assertion failure: # # fuzz_pycompile: Objects/call.c:342: PyObject *_PyObject_Call(PyThreadState *, PyObject *, PyObject *, PyObject *): Assertion `!_PyErr_Occurred(tstate)' failed. # compile(s, 's', 'exec') ``` ### CPython versions tested on: CPython main branch ### Operating systems tested on: macOS ### Output from running 'python -VV' on the command line: Python 3.13.0a1+ (heads/main:3701f3bc10, Nov 24 2023, 23:05:42) [Clang 15.0.0 (clang-1500.0.40.1)] <!-- gh-linked-prs --> ### Linked PRs * gh-112410 * gh-112466 * gh-112467 <!-- /gh-linked-prs -->
2c8b19174274c183eb652932871f60570123fe99
967f2a3052c2d22e31564b428a9aa8cc63dc2a9f
python/cpython
python__cpython-112409
# Assertion failure in `get_error_line_from_tokenizer_buffers` in `pegen_errors.c` # Crash report ### What happened? I found some crashes when adding an additional fuzz target in #111721. ```python # This script fails in two different ways. # # First, it fails with a C assertion failure when assertions are enabled. # # Second, it *nondeterministically* gives the C assertion failure or an # uncaught Python `SyntaxError` when assertions are enabled and pymalloc is # disabled. # # I ran these on ARM64 macOS: # # Darwin dialectic.local 23.1.0 Darwin Kernel Version 23.1.0: Mon Oct 9 21:27:24 PDT 2023; root:xnu-10002.41.9~6/RELEASE_ARM64_T6000 arm64 # # For the first case, I ran this through Python built from source with # assertions enabled: # # ./configure --with-assertions --prefix "$PWD/debugbuild" # make -j12 altinstall # ./debugbuild/bin/python/crash1.py # # For the second case, I ran this through Python built from source with # assertions enabled and pymalloc disabled: # # ./configure --with-assertions --without-pymalloc --prefix "$PWD/debugbuild" # make -j12 altinstall # ./debugbuild/bin/python/crash1.py # Input found via the fuzz target added in https://github.com/python/cpython/pull/111721, then manually minimized s = b'# coding=latin\r(aaaaaaaaaaaaaaaaa\raaaaaaaaaaa\xb5' # This line fails nondeterministically with either a C assertion failure or an # uncaught Python `SyntaxError`, depending on the build configuration: # # Outcome 1: # # Assertion failed: (new_line != NULL && new_line + 1 < buf_end), function get_error_line_from_tokenizer_buffers, file pegen_errors.c, line 286. 
# # Outcome 2: # # Traceback (most recent call last): # File "crash1.py", line 17, in <module> # compile(s, 's', 'exec') # File "s", line 2 # (aaaaaaaaaaaaaaaaa # ^ # SyntaxError: '(' was never closed compile(s, 's', 'exec') ``` ### CPython versions tested on: CPython main branch ### Operating systems tested on: macOS ### Output from running 'python -VV' on the command line: Python 3.13.0a1+ (heads/main:3701f3bc10, Nov 24 2023, 23:05:42) [Clang 15.0.0 (clang-1500.0.40.1)] <!-- gh-linked-prs --> ### Linked PRs * gh-112409 * gh-112468 * gh-112469 <!-- /gh-linked-prs -->
45d648597b1146431bf3d91041e60d7f040e70bf
2c8b19174274c183eb652932871f60570123fe99
python/cpython
python__cpython-113153
# teach dis how to interpret the oparg of ENTER_EXECUTOR The oparg of `ENTER_EXECUTOR` is not consistent with that of `JUMP_BACKWARD`. See https://github.com/python/cpython/pull/112377/files#r1404568464 <!-- gh-linked-prs --> ### Linked PRs * gh-113153 * gh-117171 <!-- /gh-linked-prs -->
d07483292b115a5a0e9b9b09f3ec1000ce879986
1addde0c698f7f8eb716bcecf63d119e19e1ecda
python/cpython
python__cpython-112368
# The perf trampoline can free the jitted code while it is still in use Deactivating the trampoline frees the code arenas, but this can happen while we are still executing under the trampolines themselves, which leads to undefined behavior. <!-- gh-linked-prs --> ### Linked PRs * gh-112368 * gh-112590 <!-- /gh-linked-prs -->
a73aa48e6bec900be7edd3431deaa5fc1d809e6f
bfb576ee23c133bec0ce7c26a8ecea76926b9d8e
python/cpython
python__cpython-115696
# cannot unparse code with ' in format_spec # Bug report ### Bug description: The following example shows that it is not possible to unparse an f-string with a ' in the format_spec, but such code can be generated when the f-string is double-quoted. Expected behaviour: `unparse` should use different quotes if quotes are part of the format_spec. This is only a problem in 3.12; it worked in 3.11. ```python import ast code="""f"{something:'}" """ print("original code:",code) tree=ast.parse(code) print("original tree:",ast.dump(tree,indent=2)) new_code=ast.unparse(tree) print("unparsed code:",new_code) ast.parse(new_code) ``` output (Python 3.12.0): ```python original code: f"{something:'}" original tree: Module( body=[ Expr( value=JoinedStr( values=[ FormattedValue( value=Name(id='something', ctx=Load()), conversion=-1, format_spec=JoinedStr( values=[ Constant(value="'")]))]))], type_ignores=[]) unparsed code: f'{something:'}' Traceback (most recent call last): File "/home/frank/projects/pysource-playground/pysource-codegen/codi.py", line 13, in <module> ast.parse(new_code) File "/home/frank/.pyenv/versions/3.12.0/lib/python3.12/ast.py", line 52, in parse return compile(source, filename, mode, flags, ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "<unknown>", line 1 f'{something:'}' ^ SyntaxError: unterminated string literal (detected at line 1) ``` ### CPython versions tested on: 3.12 ### Operating systems tested on: Linux <!-- gh-linked-prs --> ### Linked PRs * gh-115696 * gh-115782 <!-- /gh-linked-prs -->
69ab93082d14425aaac48b8393711c716575b132
074bbec9c4911da1d1155e56bd1693665800b814
python/cpython
python__cpython-112362
# Speed up pathlib by removing a few temporary objects A handful of pathlib methods that create paths with modified names, or additional segments, use list objects that are quickly thrown away. We can speed these methods up by only creating new lists where strictly necessary. Specifically: - `with_name()` (performance of `self._tail[:-1] + [name]` is cursed) - `with_suffix()` - `_make_child_relpath()` (used in `glob()` and `walk()`) - `glob()` (when parsing the pattern) <!-- gh-linked-prs --> ### Linked PRs * gh-112362 <!-- /gh-linked-prs -->
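To make the `with_name()` point concrete, the shape of the change looks like this (illustrative, not the exact patch): slicing plus concatenation allocates two lists, while copy-and-assign allocates one.

```python
tail = ["usr", "local", "lib", "old.txt"]

# Before: two throwaway lists (the slice, then the concatenation).
slow = tail[:-1] + ["new.txt"]

# After: a single new list, mutated in place; the original is untouched.
fast = tail.copy()
fast[-1] = "new.txt"

assert slow == fast == ["usr", "local", "lib", "new.txt"]
```

On methods called in tight loops such as `glob()`, shaving one allocation per call adds up.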
19a1fc1b3df30f64450d157dc3a5d40c992e347f
6b961b8ceaba372b78d03feaceb4837bf7236694
python/cpython
python__cpython-112424
# struct.Struct inheritance with Python 3.12.0 # Bug report ### Bug description: ```python import struct class MyStruct(struct.Struct): def __init__(self): super().__init__('>h') obj = MyStruct() ``` When I run this code I receive an error: ``` Traceback (most recent call last): File "/home/user/bug.py", line 7, in <module> obj = MyStruct() ^^^^^^^^^^ TypeError: Struct() missing required argument 'format' (pos 1) ``` It is a rather strange error: I passed the `format` parameter to the base class constructor, and I receive this error with any value of the `format` parameter. There are no problems with this code in Python 3.11 and older. ### CPython versions tested on: 3.12 ### Operating systems tested on: Linux <!-- gh-linked-prs --> ### Linked PRs * gh-112424 * gh-112426 <!-- /gh-linked-prs -->
9fe60340d7e8dc22b3aec205c557bc69a1b2d18c
3faf8e586d36e73faba13d9b61663afed6a24cb4
python/cpython
python__cpython-112377
# New dis.py gets tripped up by ENTER_EXECUTOR # Bug report ### Bug description: I have some code that calls `dis.dis(test, adaptive=True)` where `test` is a simple function containing a loop containing a branch. Somehow `dis` crashes like this: ``` Traceback (most recent call last): File "/Users/guido/cpython/t.py", line 20, in <module> dis.dis(test, adaptive=True) File "/Users/guido/cpython/Lib/dis.py", line 113, in dis _disassemble_recursive(x, file=file, depth=depth, show_caches=show_caches, adaptive=adaptive, show_offsets=show_offsets) File "/Users/guido/cpython/Lib/dis.py", line 709, in _disassemble_recursive disassemble(co, file=file, show_caches=show_caches, adaptive=adaptive, show_offsets=show_offsets) File "/Users/guido/cpython/Lib/dis.py", line 701, in disassemble _disassemble_bytes(_get_code_array(co, adaptive), File "/Users/guido/cpython/Lib/dis.py", line 754, in _disassemble_bytes for instr in _get_instructions_bytes(code, varname_from_oparg, names, File "/Users/guido/cpython/Lib/dis.py", line 668, in _get_instructions_bytes yield Instruction._create(op, arg, offset, start_offset, starts_line, line_number, ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/guido/cpython/Lib/dis.py", line 413, in _create argval, argrepr = cls._get_argval_argrepr( ^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/guido/cpython/Lib/dis.py", line 376, in _get_argval_argrepr argrepr = f"to L{labels_map[argval]}" ~~~~~~~~~~^^^^^^^^ KeyError: 108 ``` Here's the script: <details> ```py def pr(i): pass def is_prime(n): # Bogus return n == 2 or n == 3 or n == 5 or n == 7 or (n % 2 != 0 and n % 3 != 0 and n % 5 != 0 and n % 7 != 0) def test(): for i in range(2, 50): if is_prime(i): print(i) import _testinternalcapi _testinternalcapi.set_optimizer(_testinternalcapi.get_uop_optimizer()) test() _testinternalcapi.set_optimizer(None) import dis dis.dis(test, adaptive=True) ``` </details> ### CPython versions tested on: CPython main branch ### Operating
systems tested on: _No response_ <!-- gh-linked-prs --> ### Linked PRs * gh-112377 <!-- /gh-linked-prs -->
9eb3b35dd7d72ff73005abf20266a618215b9ae0
fafae08cc7caa25f2bd6b29106b50ef76c3e296f
python/cpython
python__cpython-112344
# Improve error message when trying to call `issubclass()` against a Protocol that has non-method members # Feature or enhancement ### Proposal: The error message could tell the user what the non-method members in the protocol are. ### Has this already been discussed elsewhere? This is a minor feature, which does not need previous discussion elsewhere ### Links to previous discussion of this feature: _No response_ <!-- gh-linked-prs --> ### Linked PRs * gh-112344 <!-- /gh-linked-prs -->
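For reference, the runtime behaviour in question (the member-name reporting is what this proposal adds, so the exact wording below will vary by version):

```python
from typing import Protocol, runtime_checkable

@runtime_checkable
class Versioned(Protocol):
    version: int              # a data (non-method) member

    def name(self) -> str: ...

message = ""
try:
    issubclass(dict, Versioned)
except TypeError as err:
    # The improved message would also name 'version' here.
    message = str(err)
print(message)
```

Today the user has to inspect the protocol themselves to find which members blocked the check.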
e9d1360c9a1072aa23950b321491dc542c3a19b8
d9fc15222e96942e30ea8b0561dec5c82ecb4663
python/cpython
python__cpython-112380
# pdb unintentionally interpolates strings with convenience variables # Bug report ### Bug description: Python 3.12 introduced $-prefixed convenience variables to pdb (#103693). It automatically replaces $-words in strings, which is undocumented. @nedbat suggests this is unintended behavior. ``` (Pdb) p '$in' '__pdb_convenience_variables["in"]' ``` This totally breaks MongoDB queries inside pdb, as it makes frequent use of such strings. ### CPython versions tested on: 3.12 ### Operating systems tested on: Linux <!-- gh-linked-prs --> ### Linked PRs * gh-112380 * gh-114202 <!-- /gh-linked-prs -->
5c351fc85afd2ed167694a7dfe066741a5cdab53
f49752552e673e5192f22eae0076b2650c7d6afc
python/cpython
python__cpython-112617
# subprocess.Popen: Performance regression on Linux since 124af17b6e [CVE-2023-6507] # Bug report ### Bug description: Apologies if this is a duplicate. I couldn’t find a similar report, though. # The issue and how to reproduce We’re seeing a performance regression since 124af17b6e. The best way to reproduce it is to spawn lots of processes from a `ThreadPoolExecutor`. For example: ```python #!/usr/bin/env python3 from concurrent.futures import ThreadPoolExecutor, wait from subprocess import Popen def target(i): p = Popen(['touch', f'/tmp/reproc-{i}']) p.communicate() executor = ThreadPoolExecutor(max_workers=64) futures = set() for i in range(10_000): futures.add(executor.submit(target, i)) wait(futures) ``` Before, on 49cae39ef0, it’s roughly this: ``` real 0m2.419s user 0m4.524s sys 0m0.976s ``` Since 124af17b6e, it’s roughly this: ``` real 0m11.772s user 0m10.287s sys 0m14.409s ``` # An attempt at an analysis and possible fix `strace` shows that the new code doesn’t use `vfork()` anymore but `clone()`. I believe that the reason for this is an incorrect check of `num_groups` (or `extra_group_size`, as it is now called on `main`). 124af17b6e checks if `extra_group_size` is _less than zero_ to determine if we can use `vfork()`. Is that correct? Maybe this should be _equal to zero_? I’m talking about these two locations (diff relative to `main`/9e56eedd018e1a46): ```diff diff --git a/Modules/_posixsubprocess.c b/Modules/_posixsubprocess.c index 2898eedc3e..fb6c235901 100644 --- a/Modules/_posixsubprocess.c +++ b/Modules/_posixsubprocess.c @@ -889,7 +889,7 @@ do_fork_exec(char *const exec_array[], /* These are checked by our caller; verify them in debug builds. 
*/ assert(uid == (uid_t)-1); assert(gid == (gid_t)-1); - assert(extra_group_size < 0); + assert(extra_group_size == 0); assert(preexec_fn == Py_None); /* Drop the GIL so that other threads can continue execution while this @@ -1208,7 +1208,7 @@ subprocess_fork_exec_impl(PyObject *module, PyObject *process_args, /* Use vfork() only if it's safe. See the comment above child_exec(). */ sigset_t old_sigs; if (preexec_fn == Py_None && allow_vfork && - uid == (uid_t)-1 && gid == (gid_t)-1 && extra_group_size < 0) { + uid == (uid_t)-1 && gid == (gid_t)-1 && extra_group_size == 0) { /* Block all signals to ensure that no signal handlers are run in the * child process while it shares memory with us. Note that signals * used internally by C libraries won't be blocked by ``` `extra_group_size` is the result of the call to `PySequence_Size(extra_groups_packed)`. If I understand [the docs](https://docs.python.org/3/c-api/sequence.html) correctly, then this function only returns negative values to indicate errors. This error condition is already checked, right after the call itself: ```C extra_group_size = PySequence_Size(extra_groups_packed); if (extra_group_size < 0) goto cleanup; ``` Later in the code, `extra_group_size` can never be less than zero. It can, however, be equal to zero if `extra_groups` is an empty list. I believe this is what was _meant to be_ checked here. I’ll happily open a PR for this if you agree that this is the way to go. ### CPython versions tested on: 3.11, 3.12, 3.13, CPython main branch ### Operating systems tested on: Linux <!-- gh-linked-prs --> ### Linked PRs * gh-112617 * gh-112731 * gh-112734 <!-- /gh-linked-prs -->
9fe7655c6ce0b8e9adc229daf681b6d30e6b1610
c5fa8a54dbdf564d482e2e3857aa3efa61edd329
python/cpython
python__cpython-112333
# Deprecate TracebackException.exc_type ``TracebackException`` is supposed to be a printable snapshot of an exception. It is intended to be lightweight (release references to frames, etc) and easily serialised. Since the ``exc_type`` field is causing issues for subinterpreters, I suggest we deprecate it and replace it with a snapshot of the type. Until removed, we can have an arg to optionally not save it (which can be used by subinterpreters). <!-- gh-linked-prs --> ### Linked PRs * gh-112333 <!-- /gh-linked-prs -->
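To illustrate the field in question on a captured snapshot (this shows current-release behaviour; under this proposal `exc_type` would go through a deprecation period first):

```python
import traceback

try:
    1 / 0
except ZeroDivisionError as exc:
    snapshot = traceback.TracebackException.from_exception(exc)

# The snapshot renders without keeping frames (or, under the
# proposal, the live type object) alive.
rendered = "".join(snapshot.format())
print(snapshot.exc_type.__name__)  # ZeroDivisionError
```

A serialisable replacement would only need what `format()` needs: the type's name and module, not the type itself.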
2c68011780bd68463f5183601ea9c10af368dff6
2df26d83486b8f9ac6b7df2a9a4669508aa61983
python/cpython
python__cpython-112375
# Reference Manual 6.3.1: wrong method mentioned # Documentation 6.3.1. Attribute references This section states that "This production can be customized by overriding the `__getattr__()` method." This is wrong and should be replaced with "This production can be customized by overriding the `__getattribute__(self, attr)` method.". `__getattr__()` is only called when the attribute is not found. `__getattribute__(self, attr)` is the method that is called when accessing an attribute. Here is an example code for both: ```python class C: def __init__(self): self.a = 1 def __getattr__(self, attr): # called when attribute not found print('attribute does not exist') def __getattribute__(self, attr): # called first print('attribute', attr, 'accessed') return object.__getattribute__(self, attr) # object is the top-level type for classes o = C() print(o.a) # found, calls __getattribute__ print(o.b) # not found, calls both methods. ``` Hence, either mention both methods or only `__getattribute__`. <!-- gh-linked-prs --> ### Linked PRs * gh-112375 * gh-112412 * gh-112413 <!-- /gh-linked-prs -->
97f8f28b3e50196f6713faceccc2e15039117470
f93a4ef7a9e8d6f831c62707c0d39e0be306c4e6
python/cpython
python__cpython-112514
# [Enum] make some private attributes public # Feature or enhancement Make `_EnumDict`, `_EnumDict._member_names`, and possibly other private names public. This is to make subclassing `EnumType` and other advanced behavior supported, such as having multiple values per member. <!-- gh-linked-prs --> ### Linked PRs * gh-112514 * gh-121720 * gh-123669 * gh-128142 <!-- /gh-linked-prs -->
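To show the kind of advanced use this would bless, here is a small sketch of subclassing the enum metaclass; note it deliberately touches the currently-private names (`_EnumDict`, `_member_names`), so it may break across versions:

```python
import enum

collected = []

class RecordingMeta(enum.EnumMeta):
    def __new__(mcls, name, bases, classdict, **kwds):
        # classdict is the (currently private) _EnumDict; its
        # _member_names attribute is one of the names this issue
        # proposes making public.
        collected.extend(classdict._member_names)
        return super().__new__(mcls, name, bases, classdict, **kwds)

class Color(enum.Enum, metaclass=RecordingMeta):
    RED = 1
    GREEN = 2

print(collected)
```

Making these names public would let code like this rely on a stable, documented API instead of internals.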
de6bca956432cc852a4a41e2a2cee9cdacd19f35
563ccded6e83bfdd8c5622663c4edb679e96e08b
python/cpython
python__cpython-112321
# Branch confidence decay in Tier 2 translator This is a follow-up to gh-109039 (and to a lesser extent gh-111848). (I'm sure I read about it somewhere on https://github.com/faster-cpython/ideas/issues too, but I can't find the relevant issue, so I'll describe the idea from scratch here.) When we translate a branch instruction (e.g. `POP_JUMP_IF_TRUE`), Tier 1 has a 16-bit shift register tracking how often we branched in the past 16 times this instruction was reached. During Tier 2 translation, if the bit count indicates that we've taken the branch more often than not, we continue the trace at the branch destination; otherwise, we continue following the branch instruction. What we should also do in the translator is keep a variable (for the entire trace) indicating how likely we are still "on trace". This "confidence factor" starts off at 100%. If we translate a branch that is taken X% of the time, for X >= 50%, we should multiply the confidence by X%. If the confidence drops too low (say, below 33%), we should end the trace at this point, generating an `_EXIT_TRACE` uop. **UPDATE:** Ideally we should also adjust the confidence each time we generate a guard. But what factor should we use there? Most guards fail rarely. I propose to punt on this now; later we can add code to adjust the same variable on deoptimization exits. <!-- gh-linked-prs --> ### Linked PRs * gh-112321 <!-- /gh-linked-prs -->
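The proposed bookkeeping can be modelled in a few lines (a toy model; the cutoff value and the function shape are illustrative, not a spec):

```python
CONFIDENCE_CUTOFF = 1 / 3  # "say, below 33%"

def trace_confidence(taken_counts):
    """taken_counts: for each translated branch, how many of the 16
    shift-register bits say the branch was taken (0..16).
    Returns (confidence, should_exit_trace)."""
    confidence = 1.0
    for taken in taken_counts:
        # The translator follows the likelier side, so the
        # multiplier is always >= 50%.
        confidence *= max(taken, 16 - taken) / 16
        if confidence < CONFIDENCE_CUTOFF:
            return confidence, True  # emit _EXIT_TRACE here
    return confidence, False

print(trace_confidence([16, 12]))          # (0.75, False)
print(trace_confidence([12, 12, 12, 12]))  # (0.31640625, True)
```

With a 75%-taken branch, four such branches in a row push the trace below the cutoff (0.75**4 ≈ 0.316), which is where the translator would stop.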
7316dfb0ebc46aedf484c1f15f03a0a309d12a42
dfaa9e060bf6d69cb862a2ac140b8fccbebf3000
python/cpython
python__cpython-112317
# Docs: incorrect signature for `inspect.Signature.from_callable` # Bug report https://github.com/python/cpython/blob/fef6fb876267f28fbb2c5fcb17aebe1a52cc8e12/Doc/library/inspect.rst#L755-L775 It has several problems: 1. Two `.. versionadded` entries, one should be `.. versionchanged` instead 2. `eval_str` is missing from the signature, here how it looks on 3.10+: https://github.com/python/cpython/blob/3.10/Lib/inspect.py#L2998C1-L3002 3. `eval_str` is not documented <!-- gh-linked-prs --> ### Linked PRs * gh-112317 * gh-112629 * gh-112630 * gh-112631 * gh-112649 * gh-112652 <!-- /gh-linked-prs -->
a74daba7ca8b68f47284d82d4604721b8748bbde
939fc6d6eab9b7ea8c244d513610dbdd556503a7
python/cpython
python__cpython-113344
# Builds outside source folder broken, failed import of _importlib - no detection or valid error message or recommended cleanup. ### Bug description: This bug was exposed by GH-108716. I think it was not the cause, just that it exposed an existing issue. The build fails with: ``` ./python -E -c 'import sys ; from sysconfig import get_platform ; print("%s-%d.%d" % (get_platform(), *sys.version_info[:2]))' >platform Fatal Python error: _PyImport_InitCore: failed to initialize importlib Python runtime state: preinitialized ImportError: Frozen object named '_frozen_importlib' is invalid Current thread 0x00007fb629644740 (most recent call first): <no Python frame> make: *** [Makefile:932: platform] Error 1 ``` Likely `python` doesn't know to look in `$(srcdir)/Lib` for the `_importlib` library. It probably looks in `./Lib/` and that fails. ### CPython versions tested on: CPython main branch ### Operating systems tested on: Linux <!-- gh-linked-prs --> ### Linked PRs * gh-113344 * gh-113346 * gh-113347 <!-- /gh-linked-prs -->
103c4ea27464cef8d1793dab347f5ff3629dc243
a2dd0e7038ad65a2464541f91604524d871d98a7
python/cpython
python__cpython-112303
# Add Software Bill of Materials (SBOM) for Python releases # Feature or enhancement ### Proposal: Software Bill of Materials (SBOM) is a format for tracking software and its components. This information will also soon become relevant for Python users due to [this Executive Order](https://www.federalregister.gov/documents/2021/06/02/2021-11592/software-bill-of-materials-elements-and-considerations) and other requirements elsewhere in the world. Instead of requiring each individual consumer and redistributor to create their own documents we can provide an authoritative document for each Python release. This would not require a change to Python itself, instead I imagine the SBOM files would be provided alongside the release artifacts on python.org/downloads. My goal with this project is to provide this information to consumers with minimal modification to core developer workflows. I've experimented with [creating SBOMs for past and present Python versions](https://github.com/sethmlarson/cpython-sbom) and have found that most of the work comes when dependencies are updated and in those cases the SBOM metadata needs to also be updated (ie: versions, hashes). Beyond that the rest can be automated downstream with the Python release tooling. I'm happy to make all the changes required to implement this proposal. I'm also happy to be the reviewer for all SBOM related PRs while I'm the Security Developer-in-Residence. ## Proposed changes * Create a file which tracks all bundled dependency paths and ignored files (ie `Modules/_hacl/...`) * Add a new makefile target `regen-sbom` which regenerates the SBOM file containing hashes * Run this target as a part of CI (via `regen-all`) to ensure that all updates to dependencies require an update to the SBOM metadata. Then downstream in the release-tools repository: * Grab the SBOM file for each tagged release to use as a base for each artifact * Generate metadata for that Python release (files, relationships, etc). 
* For each artifact, there may be specific dependencies pulled in that will be recorded (ie https://github.com/python/cpython-source-deps, https://github.com/python/cpython-bin-deps) * Upload SBOM files to python.org/downloads similar to Sigstore signatures. ## Example of updating dependencies * Pull a new version of hacl-star, for example. * `make regen-all` would cause changes to the checked in SBOM file. This would either fail in CI or require user to inspect the SBOM locally. * Dev would read the instructions on how to update the version of the SBOM. Usually this would only require updating the version number and committing the generated changes to file checksums. * Tool would check consistency of version information in other identifiers (PURL, CPE, download URL, etc) ### Sub-issues - Create an informational PEP on SBOMs in CPython - Create a "What's New" entry for Python 3.13 - https://github.com/python/pythondotorg/issues/2339 - https://github.com/python/devguide/issues/1241 - https://github.com/python/cpython/issues/112844 - https://github.com/python/pythondotorg/issues/2340 ### Has this already been discussed elsewhere? I have already discussed this feature proposal on Discourse ### Links to previous discussion of this feature: I've created [a Discourse topic](https://discuss.python.org/t/create-and-distribute-software-bill-of-materials-sbom-for-python-artifacts/39293) to discuss the impact to core developers and maintenance. <!-- gh-linked-prs --> ### Linked PRs * gh-112303 * gh-112854 * gh-113490 * gh-114730 * gh-115038 * gh-115088 * gh-115360 * gh-115486 <!-- /gh-linked-prs -->
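The checksum-regeneration step at the heart of `regen-sbom` can be sketched like this (the file layout and field names here are hypothetical, loosely modelled on SPDX, not the actual tooling):

```python
import hashlib
import pathlib

def sha256_of(path):
    """Hash one tracked file the way an SBOM checksum field expects."""
    return hashlib.sha256(pathlib.Path(path).read_bytes()).hexdigest()

def regen_checksums(sbom):
    """Recompute checksums for every tracked file, updating the SBOM
    in place; return the paths whose recorded hash was stale (these
    are the entries a dependency update must commit)."""
    stale = []
    for entry in sbom["files"]:
        actual = sha256_of(entry["fileName"])
        if entry["checksum"] != actual:
            entry["checksum"] = actual
            stale.append(entry["fileName"])
    return stale
```

Run in CI, a non-empty `stale` list is what would fail the build and prompt the developer to commit the regenerated metadata.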
21221c398f6d89b2d9295895d8a2fd71d28138fa
2d76be251d0aee89f76e6fa5a63fa1ad3f2b76cf
python/cpython
python__cpython-112313
# Check for sub interpreter support is not catching readline global state and crashing # Bug report ### Bug description: Sub interpreters have a check (`_PyImport_CheckSubinterpIncompatibleExtensionAllowed`) for extensions which aren't compatible. One example of an incompatible extension is `readline`. It has a global state and shouldn't be imported from a sub interpreter. The issue is that it can be imported dynamically and the crash happens in the module init mechanism, but the check is _after_ (https://github.com/python/cpython/blob/main/Python/importdl.c#L208) meaning it doesn't have the chance to stop the import before the segmentation fault. I discovered this by writing a simple test harness that goes through the Python test suite and tries to run each test in a sub interpreter. It segfaults on a number of test suites (`test_builtin` is the first one) ```python from test.libregrtest.findtests import findtests import _xxsubinterpreters as interpreters # Get a list of tests test_names = findtests() skip_tests = [ # "test_builtin" # Crashes ] def run_test(): import unittest test_cases = unittest.defaultTestLoader.loadTestsFromName(f"test.{test_name}") reasons = "" pass_count = 0 fail_count = 0 for case in test_cases: r = unittest.result.TestResult() case.run(r) if r.wasSuccessful(): pass_count += r.testsRun else: for failedcase, reason in r.failures: reasons += "---------------------------------------------------------------\n" reasons += f"Test case {failedcase} failed:\n" reasons += reason reasons += "\n---------------------------------------------------------------\n" fail_count += 1 for failedcase, reason in r.errors: reasons += ( "---------------------------------------------------------------\n" ) reasons += f"Test case {failedcase} failed with errors:\n" reasons += reason reasons += "\n---------------------------------------------------------------\n" fail_count += 1 interp = interpreters.create() for test in test_names: # Run the test suite if test in 
skip_tests: print(f"Skipping test {test}") continue print(f"Running test {test}") try: result = interpreters.run_func(interp, run_test, shared={"test_name": test}) except Exception as e: print(f"Test {test} failed with exception {e}") continue ``` This crashes during the `test_builtin` suite and any others which dynamically load the readline module. ```console > ./python.exe -X dev test_in_interp.py .... Running test test_bufio Running test test_builtin Fatal Python error: Segmentation fault Current thread 0x00007ff857472700 (most recent call first): File "<frozen importlib._bootstrap>", line 488 in _call_with_frames_removed File "<frozen importlib._bootstrap_external>", line 1304 in create_module File "<frozen importlib._bootstrap>", line 813 in module_from_spec File "<frozen importlib._bootstrap>", line 915 in _load_unlocked File "<frozen importlib._bootstrap>", line 1325 in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 1354 in _find_and_load File "/Users/anthonyshaw/projects/cpython/Lib/pdb.py", line 239 in __init__ File "/Users/anthonyshaw/projects/cpython/Lib/doctest.py", line 386 in __init__ File "/Users/anthonyshaw/projects/cpython/Lib/doctest.py", line 1527 in run File "/Users/anthonyshaw/projects/cpython/Lib/doctest.py", line 2261 in runTest File "/Users/anthonyshaw/projects/cpython/Lib/unittest/case.py", line 589 in _callTestMethod File "/Users/anthonyshaw/projects/cpython/Lib/unittest/case.py", line 636 in run File "/Users/anthonyshaw/projects/cpython/Lib/unittest/case.py", line 692 in __call__ File "/Users/anthonyshaw/projects/cpython/Lib/unittest/suite.py", line 122 in run File "/Users/anthonyshaw/projects/cpython/test_in_interp.py", line 19 in run_test Extension modules: _testcapi, _xxsubinterpreters (total: 2) zsh: segmentation fault ./python.exe -X dev test_in_interp.py ``` Stack trace: ```default python.exe!Py_TYPE (/Users/anthonyshaw/projects/cpython/Include/object.h:297) python.exe!Py_IS_TYPE 
(/Users/anthonyshaw/projects/cpython/Include/object.h:330) python.exe!PyObject_TypeCheck (/Users/anthonyshaw/projects/cpython/Include/object.h:487) python.exe!PyModule_GetState (/Users/anthonyshaw/projects/cpython/Objects/moduleobject.c:608) readline.cpython-313td-darwin.so!on_startup_hook (/Users/anthonyshaw/projects/cpython/Modules/readline.c:1029) libedit.3.dylib!rl_initialize (Unknown Source:0) readline.cpython-313td-darwin.so!setup_readline (/Users/anthonyshaw/projects/cpython/Modules/readline.c:1219) readline.cpython-313td-darwin.so!PyInit_readline (/Users/anthonyshaw/projects/cpython/Modules/readline.c:1515) python.exe!_PyImport_LoadDynamicModuleWithSpec (/Users/anthonyshaw/projects/cpython/Python/importdl.c:170) python.exe!_imp_create_dynamic_impl (/Users/anthonyshaw/projects/cpython/Python/import.c:3750) python.exe!_imp_create_dynamic (/Users/anthonyshaw/projects/cpython/Python/clinic/import.c.h:485) python.exe!cfunction_vectorcall_FASTCALL (/Users/anthonyshaw/projects/cpython/Objects/methodobject.c:425) python.exe!_PyVectorcall_Call (/Users/anthonyshaw/projects/cpython/Objects/call.c:273) python.exe!_PyObject_Call (/Users/anthonyshaw/projects/cpython/Objects/call.c:348) python.exe!PyObject_Call (/Users/anthonyshaw/projects/cpython/Objects/call.c:373) python.exe!_PyEval_EvalFrameDefault (/Users/anthonyshaw/projects/cpython/Python/generated_cases.c.h:5382) python.exe!_PyEval_EvalFrame (/Users/anthonyshaw/projects/cpython/Include/internal/pycore_ceval.h:115) python.exe!_PyEval_Vector (/Users/anthonyshaw/projects/cpython/Python/ceval.c:1783) python.exe!_PyFunction_Vectorcall (Unknown Source:0) python.exe!_PyObject_VectorcallTstate (/Users/anthonyshaw/projects/cpython/Include/internal/pycore_call.h:168) ``` ### CPython versions tested on: CPython main branch ### Operating systems tested on: macOS <!-- gh-linked-prs --> ### Linked PRs * gh-112313 <!-- /gh-linked-prs -->
154f099e611cea74daa755c77df3b8003861cc76
2e632fa07d13a58be62f59be4e656ad58b378f9b
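The fix implied by the report is to run the compatibility check *before* the extension's init function, so an incompatible module never gets to touch global state. A minimal Python sketch of that ordering — every name here is invented for illustration; the real check is `_PyImport_CheckSubinterpIncompatibleExtensionAllowed` in C inside `Python/importdl.c`:

```python
# Hypothetical model of the dynamic-import path: the compatibility check
# must run *before* the extension's init function is called, so a
# single-phase module like readline never initializes its global state.
def load_dynamic(name, *, single_phase_init, allow_single_phase):
    if single_phase_init and not allow_single_phase:
        # Stands in for _PyImport_CheckSubinterpIncompatibleExtensionAllowed()
        raise ImportError(
            f"module {name!r} does not support loading in subinterpreters")
    return f"<module {name!r}>"  # stands in for calling PyInit_<name>

# Multi-phase (PEP 489) modules load fine in a subinterpreter...
print(load_dynamic("array", single_phase_init=False, allow_single_phase=False))
# ...but single-phase modules are rejected up front instead of crashing.
try:
    load_dynamic("readline", single_phase_init=True, allow_single_phase=False)
except ImportError as exc:
    print("blocked:", exc)
```

The point of the sketch is only the ordering: the guard runs before any module initialization, which is what the segfault in the stack trace shows was not happening.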
python/cpython
python__cpython-112283
# Unioned types should be hashable: not documented? # Documentation It seems that `typing.Union[X, Y]` as well as `X | Y` type hints require `X` and `Y` to be hashable. This is not documented as far as I can see. Considering this example: ```python @dataclass class ValueRange: lo: int hi: int T1 = Annotated[int, ValueRange(-10, 5)] ``` This fails when unioned: ```pycon >>> Annotated[int, ValueRange(-10, 5)] | None (...) File "/usr/lib64/python3.11/typing.py", line 1375, in __or__ return Union[self, right] ~~~~~^^^^^^^^^^^^^ File "/usr/lib64/python3.11/typing.py", line 358, in inner return func(*args, **kwds) ^^^^^^^^^^^^^^^^^^^ File "/usr/lib64/python3.11/typing.py", line 481, in __getitem__ return self._getitem(self, parameters) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/lib64/python3.11/typing.py", line 695, in Union parameters = _remove_dups_flatten(parameters) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/lib64/python3.11/typing.py", line 326, in _remove_dups_flatten return tuple(_deduplicate(params)) ^^^^^^^^^^^^^^^^^^^^ File "/usr/lib64/python3.11/typing.py", line 301, in _deduplicate all_params = set(params) ^^^^^^^^^^^ File "/usr/lib64/python3.11/typing.py", line 2151, in __hash__ return hash((self.__origin__, self.__metadata__)) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ TypeError: unhashable type: 'ValueRange' ``` Should this be documented say here: https://docs.python.org/3/library/typing.html?highlight=typing#typing.Annotated ? And examples here should use **frozen** dataclasses: https://docs.python.org/3/library/typing.html?highlight=typing#typing.Annotated ? <!-- gh-linked-prs --> ### Linked PRs * gh-112283 * gh-116213 * gh-116288 <!-- /gh-linked-prs -->
a7549b03cec1699b5342cddf292c179315433fa2
2713c2abc8d0f30cd0060cd307bb4ec92f1f04bf
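The frozen-dataclass suggestion at the end of the report works because `@dataclass(frozen=True)` (with the default `eq=True`) generates a `__hash__`, whereas a plain mutable dataclass with `eq=True` sets `__hash__` to `None`. A short sketch:

```python
from dataclasses import dataclass
from typing import Annotated, Optional

@dataclass
class MutableRange:        # eq=True, frozen=False -> __hash__ set to None
    lo: int
    hi: int

@dataclass(frozen=True)
class FrozenRange:         # eq=True, frozen=True -> __hash__ generated
    lo: int
    hi: int

print(MutableRange.__hash__)        # None: instances are unhashable
print(hash(FrozenRange(-10, 5)))    # an int: usable as union metadata

# Union deduplication can hash the Annotated alias, so this succeeds:
T = Optional[Annotated[int, FrozenRange(-10, 5)]]
```

With `MutableRange` in place of `FrozenRange`, building `T` hits the `set(params)` deduplication shown in the traceback and raises `TypeError` on Python versions without the linked fixes.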
python/cpython
python__cpython-112658
# platform module functions execute for a long time if user has no permissions to WMI # Bug report ### Bug description: From Python 3.12, probably since https://github.com/python/cpython/issues/89545 was implemented, the invocation of functions from `platform` modules takes a multiple of 5 seconds if the user has no permissions to perform WMI queries. It can be reproduced by connecting to a Windows machine using SSH and executing a Python script from cmd or ps. Access to WMI for remote sessions is turned off by default but this probably can be achieved also by removing permissions using `wmimgmt.msc` application. For example, the execution of the `system` function in Python 3.12: ```shell user@WIN16 c:\Python3.12>python.exe -m timeit -s "import platform" -n 1 -r 1 "platform.system()" 1 loop, best of 1: 10.2 sec per loop ``` and the same in Python 3.11: ```shell user@WIN16 c:\Python3.11>python.exe -m timeit -s "import platform" -n 1 -r 1 "platform.system()" 1 loop, best of 1: 18.1 msec per loop ``` (`timeit` used with `-n 1 -r 1` to avoid using cache in the platform module. When results get cached then of course it works fast). The delay comes from https://github.com/python/cpython/commit/de33df27aaf930be6a34027c530a651f0b4c91f5#diff-cb1ba6039236dca71c97ea898f2798c60262c30614192862259889f839e4d503R323 which takes 5 seconds and raises `OSError`: `[WinError -2147217405] Windows Error 0x80041003`. `platform.system` apparently calls it twice. This applies to all functions from the module that makes WMI queries. Windows version used for tests: Edition: Windows Server 2016 Standard Version: 1607 OS Build: 14393.6452 ### CPython versions tested on: 3.12 ### Operating systems tested on: Windows <!-- gh-linked-prs --> ### Linked PRs * gh-112658 * gh-112878 * gh-113154 * gh-117818 * gh-117899 <!-- /gh-linked-prs -->
a955fd68d6451bd42199110c978e99b3d2959db2
b2923a61a10dc2717f4662b590cc9f6d181c6983
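For context on why `timeit -n 1 -r 1` was needed in the report: `platform` caches its answers at module level, so only the first call pays the WMI (or other lookup) cost. A sketch of the caching behavior — the cache itself is an implementation detail, observed here only through repeated calls:

```python
import platform
import time

t0 = time.perf_counter()
first = platform.system()    # may hit the slow lookup path once
t1 = time.perf_counter()
second = platform.system()   # served from the module-level cache
t2 = time.perf_counter()

assert first == second
print(f"first call: {t1 - t0:.6f}s, cached call: {t2 - t1:.6f}s")
```

On an affected Windows machine the first call is where the multi-second WMI timeout lands; every later call is effectively free, which is why naive benchmarking with repetitions hides the bug.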
python/cpython
python__cpython-112268
# `help()` on types has strange `(if defined)` notice for attributes that are defined # Feature or enhancement Let's say we have a regular class and we call `help()` on it: ```python >>> class A: ... ... >>> help(A) Help on class A in module __main__: class A(builtins.object) | Data descriptors defined here: | | __dict__ | dictionary for instance variables (if defined) | | __weakref__ | list of weak references to the object (if defined) ``` This leaves a strange impression: what does it mean for `__dict__`? > dictionary for instance variables (if defined) It is defined. The same for regular `__doc__`: ```python >>> A.__dict__['__dict__'].__doc__ 'dictionary for instance variables (if defined)' ``` Let's see what happens when `__dict__` and `__weakref__` are not defined: ```python >>> class B: ... __slots__ = () ... >>> help(B) Help on class B in module __main__: class B(builtins.object) ``` And: ```python >>> B.__dict__['__dict__'] Traceback (most recent call last): File "<stdin>", line 1, in <module> B.__dict__['__dict__'] ~~~~~~~~~~^^^^^^^^^^^^ KeyError: '__dict__' ``` The historical reason behind it is: https://github.com/python/cpython/commit/373c7412f297c375d84c5984f753557b441dd6f4#diff-1decebeef15f4e0b0ce106c665751ec55068d4d1d1825847925ad4f528b5b872R1356-R1377 What do others think: should we remove `(if defined)` part? If so, I have a PR ready. <!-- gh-linked-prs --> ### Linked PRs * gh-112268 * gh-112270 * gh-112276 <!-- /gh-linked-prs -->
f8129146ef9e1b71609ef4becc5d508061970733
77d9f1e6d9aad637667264c16c83d255526cc1ba
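The asymmetry the report describes is easy to verify directly: the `__dict__`/`__weakref__` descriptors only exist on classes that do not suppress them via `__slots__`, so by the time `help()` shows their docstrings they are necessarily defined:

```python
class WithDict:
    pass

class Slotted:
    __slots__ = ()

# The descriptors exist on a regular class...
assert '__dict__' in vars(WithDict)
assert '__weakref__' in vars(WithDict)

# ...and are simply absent when __slots__ = () suppresses them.
assert '__dict__' not in vars(Slotted)
assert '__weakref__' not in vars(Slotted)

# So the "(if defined)" suffix in the descriptor docstring never describes
# a real possibility at the point where help() can show it.
print(WithDict.__dict__['__dict__'].__doc__)
```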
python/cpython
python__cpython-112253
# Error in `venv/scripts/common/activate` when $OSTYPE is not set # Bug report ### Bug description: When trying to use `. .venv/bin/activate` to activate a virtualenv from within a `Justfile` the script exits with an error `"sh: 42: .venv/bin/activate: OSTYPE: parameter not set"`. Failing Test on 3.12.0: https://github.com/jamesturk/venv-just-experiment/actions/runs/6917021149/job/18817722528 Successful on 3.11.6: https://github.com/jamesturk/venv-just-experiment/actions/runs/6917021149/job/18817722369 This only started happening on Python 3.12, and I confirmed is still happening on the main branch. It seems a change to `Lib/venv/scripts/common/activate` is at issue here. I'll submit a matching PR. Due to the use of the variable without a default (unlike other variables in this script), running `activate` inside of a script with `set -u/set -o nounset` will fail. ### CPython versions tested on: 3.12, CPython main branch ### Operating systems tested on: Linux, macOS <!-- gh-linked-prs --> ### Linked PRs * gh-112253 * gh-112297 <!-- /gh-linked-prs -->
e1540ae74d1fce62f53e25838ba21746ba5d8444
44aa603d591388316d4e671656272d2a5bb9b334
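A minimal reproduction of the failure mode, and the standard fix: expand the variable with a default so that `set -u` (nounset) is satisfied even when `$OSTYPE` is not set, as in non-bash shells:

```shell
#!/bin/sh
set -u                         # same as set -o nounset, as used by Justfile recipes
unset OSTYPE 2>/dev/null || true

# A bare "$OSTYPE" would abort here with "OSTYPE: parameter not set".
# ${OSTYPE:-} substitutes an empty string when the variable is unset:
case "${OSTYPE:-}" in
    cygwin*|msys*) echo "windows-like shell" ;;
    *)             echo "OSTYPE unset or non-Windows" ;;
esac
```

`OSTYPE` is a bash/zsh-ism with no POSIX guarantee, which is why `sh`-based runners (like the one `just` uses) trip over the unguarded reference in the 3.12 `activate` script.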
python/cpython
python__cpython-112246
# Promote free-threaded CI # Feature or enhancement ### Proposal: I propose to promote the existing free-threaded GitHub Actions CI, following the official acceptance of PEP 703, including: 1. Change the triggering of the free-threaded jobs to always trigger, instead of conditionally on the presence of the topic-free-threaded label. 2. Remove the free-threaded jobs from the list of allowed failures (aka require that these jobs pass to get green signal). ### Has this already been discussed elsewhere? I have already discussed this feature proposal on Discourse ### Links to previous discussion of this feature: https://discuss.python.org/t/is-it-time-to-promote-free-threaded-ci-and-buildbots/39064 <!-- gh-linked-prs --> ### Linked PRs * gh-112246 <!-- /gh-linked-prs -->
48dfd74a9db9d4aa9c6f23b4a67b461e5d977173
3fdf7ae3d1a08894e53c263945fba67fe62ac05b
python/cpython
python__cpython-112284
# Decide how to handle comments with debug expressions in f-strings After PEP 701, having comments mixed with debug expressions is possible, but the interaction has not been discussed. For instance, consider this code: ``` a = 1 f"{a = # my comment }" ``` this produces: ``` 'a = # my comment\n1' ``` which is surprising. What should we do here? I think we should probably not include the comment, but we should agree on the behavior before a PR can be made. <!-- gh-linked-prs --> ### Linked PRs * gh-112284 * gh-112285 <!-- /gh-linked-prs -->
d59feb5dbe5395615d06c30a95e6a6a9b7681d4d
3b3ec0d77f0f836cbe5ff1ab97efcc8b7ed5d787
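For reference, the debug-expression feature this issue is about echoes the source text between the braces verbatim, followed by the value's repr; the open question above is which part of that source text a comment belongs to:

```python
a = 1
# The text up to and including "= " is copied from the source as-is:
print(f"{a = }")        # a = 1
print(f"{a + 1 = }")    # a + 1 = 2
```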
python/cpython
python__cpython-112241
# Add option to calendar module CLI to specify the weekday to start each week # Feature or enhancement ### Proposal: When running the calendar module CLI, there is no option to specify the weekday to start each week. It defaults to Monday (0). ```sh python -m calendar -h ``` Please consider adding an option (e.g. `--first-weekday`) to specify the weekday to start each week. ### Has this already been discussed elsewhere? This is a minor feature, which does not need previous discussion elsewhere ### Links to previous discussion of this feature: _No response_ <!-- gh-linked-prs --> ### Linked PRs * gh-112241 <!-- /gh-linked-prs -->
2c089b09ac0872e08d146c55ed60d754154761c3
39c766b579cabc71a4a50773d299d4350221a70b
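The library API already exposes this knob via the `firstweekday` parameter; the proposal is only about surfacing it on the CLI (the `--first-weekday` flag name is the proposal, not an existing option). A sketch of the existing API, assuming the default C locale for the English day abbreviations:

```python
import calendar

# Default: weeks start on Monday (firstweekday=0).
print(calendar.TextCalendar().formatweekheader(2))
# -> Mo Tu We Th Fr Sa Su

# Start weeks on Sunday instead (calendar.SUNDAY == 6):
print(calendar.TextCalendar(firstweekday=calendar.SUNDAY).formatweekheader(2))
# -> Su Mo Tu We Th Fr Sa
```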
python/cpython
python__cpython-112235
# Remove the toplevel parameter in converttuple() It is always 0, and it has always been 0 since this code was added in Python 1.1. <!-- gh-linked-prs --> ### Linked PRs * gh-112235 <!-- /gh-linked-prs -->
91d17305414923ae3f1cf98108ca42c50e60c8ea
0566ab9c4d966c7280a1c02fdeea8129ba65de81
python/cpython
python__cpython-112216
# `do_raise()` doesn't make sure the constructed cause is an exception. # Bug report ### Bug description: Consider the following: ```python class ConstructsNone(BaseException): @classmethod def __new__(*args, **kwargs): return None raise Exception("Printing this exception raises an exception. Mwa-ha-nyaa~ >:3") from ConstructsNone ``` ``` TypeError: print_exception(): Exception expected for value, NoneType found The above exception was the direct cause of the following exception: Traceback (most recent call last): File "<stdin>", line 1, in <module> Exception: Printing this exception raises an exception. Mwa-ha-nyaa~ >:3 ``` In `Python/ceval.c`, in `do_raise()`, when you raise an object, cpython checks if it's an exception type, and if it is, constructs it by calling it with no arguments. Then it checks to make sure that what was constructed is in fact an exception. Then it does the same thing for the exception's cause. If it's a type, it constructs the cause by calling it with no arguments. But, for the cause, it actually doesn't check to make sure that the result of the call is in fact an exception, it just stores the result without checking. This seems like a bug. Not a catastrophic one by any means, but probably unintentional considering that the very same condition is checked a few lines above. That doesn't necessarily explain the result above though. We've created an exception object where the cause is `None` (or any other sort of object that we want). Then, when the interpreter (interactive mode) goes to print the exception, it expects the cause to be an exception. This leads to yet another exception being raised, telling you that the cause is the wrong type. The solution of course is just to add the check when the cause is called. 
I've submitted the pull request, here: https://github.com/python/cpython/pull/112216 ### CPython versions tested on: 3.10, 3.11, 3.12, CPython main branch ### Operating systems tested on: Linux, Windows <!-- gh-linked-prs --> ### Linked PRs * gh-112216 <!-- /gh-linked-prs -->
8f71b349de1ff2b11223ff7a8241c62a5a932339
4dcfd02bed0d7958703ef44baa79a4a98475be2e
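A self-contained version of the reproducer that works on both sides of the fix: on interpreters with the check added, constructing the cause fails immediately with a `TypeError`; on unfixed ones, the `None` cause is stored silently and only blows up when the traceback is printed:

```python
class ConstructsNone(BaseException):
    @classmethod
    def __new__(cls, *args, **kwargs):
        return None  # deliberately not an exception instance

try:
    raise Exception("boom") from ConstructsNone
except TypeError as exc:
    # do_raise() rejected the constructed cause before storing it
    print("fixed:", exc)
except Exception as exc:
    # the bad cause slipped through; printing this exception would fail
    print("unfixed, __cause__ is", exc.__cause__)
```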
python/cpython
python__cpython-112232
# Support specifying object to lock in Argument Clinic's `@critical_section` directive # Feature or enhancement https://github.com/python/cpython/issues/111903 added support for the `@critical_section` directive to Argument Clinic. It currently assumes that the first argument is the one that should be locked. This is a good default, but there are at least a few cases where we want to lock a different argument. For example, in the `_weakref` module, we generally want to lock the `object` argument, not the `_weakref` module itself. Let's add support for specifying the argument to lock. For example, ```c /*[clinic input] @critical_section object _weakref.getweakrefs object: object / Return a list of all weak reference objects pointing to 'object'. [clinic start generated code]*/ static PyObject * _weakref_getweakrefs(PyObject *module, PyObject *object) /*[clinic end generated code: output=25c7731d8e011824 input=00c6d0e5d3206693]*/ ``` <!-- gh-linked-prs --> ### Linked PRs * gh-112232 * gh-112250 * gh-112374 <!-- /gh-linked-prs -->
e52cc80f7fc3a560bf3d0053e0821a2db070cdd1
607b5e30c67bad35b90240d9ac176131e51423a5