repo stringclasses 1 value | instance_id stringlengths 20 22 | problem_statement stringlengths 126 60.8k | merge_commit stringlengths 40 40 | base_commit stringlengths 40 40 |
|---|---|---|---|---|
python/cpython | python__cpython-122110 | # CSV reader inconsistent with combination of QUOTE_NONNUMERIC and escapechar
```pycon
>>> next(csv.reader([r'2\.5'], escapechar='\\', quoting=csv.QUOTE_NONNUMERIC))
[2.5]
>>> next(csv.reader([r'.5'], escapechar='\\', quoting=csv.QUOTE_NONNUMERIC))
[0.5]
>>> next(csv.reader([r'\.5'], escapechar='\\', quoting=csv.QUOTE_NONNUMERIC))
['.5']
```
The reader parses numbers with an escaped character in the middle and numbers starting with a dot, but it does not parse numbers starting with an escaped character. Either of the following variants would be more consistent:
1. Any escaped character disables parsing field as a number.
2. Allow parsing a field starting with an escaped character as a number.
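For comparison, quoting a field already disables numeric conversion under `QUOTE_NONNUMERIC`, so variant 1 would mirror the existing rule for quoted fields:

```python
import csv

# Under QUOTE_NONNUMERIC, a quoted field stays a string while an
# unquoted one is converted to float; variant 1 would extend the same
# "marked fields stay strings" rule to fields containing an escape.
row = next(csv.reader(['"2.5",3.5'], quoting=csv.QUOTE_NONNUMERIC))
print(row)  # ['2.5', 3.5]
```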
<!-- gh-linked-prs -->
### Linked PRs
* gh-122110
* gh-122258
* gh-122259
<!-- /gh-linked-prs -->
| a3327dbfd4db9e5ad1ca514963d503abbbbfede7 | 9b4fe9b718f27352ba0c1cf1184f5b90d77d7df4 |
python/cpython | python__cpython-113813 | # AttributeError for `tracemalloc.is_tracing`
In two different PRs (gh-113754 and gh-11166) (both making separate, innocent-seeming changes to async generators), some test (but not the same one) is consistently reporting an `AttributeError` for `tracemalloc.is_tracing()`. Here's one.
```
test_async_gen_ags_gen_agt_gen (test.test_asyncgen.AsyncGenTest.test_async_gen_ags_gen_agt_gen) ... Warning -- Unraisable exception
Exception ignored in: <async_generator object AsyncGenAsyncioTest.test_asyncgen_nonstarted_hooks_are_cancellable.<locals>.async_iterate at 0x7fe32080c050>
Traceback (most recent call last):
File "/home/runner/work/cpython/cpython-ro-srcdir/Lib/warnings.py", line 112, in _showwarnmsg
_showwarnmsg_impl(msg)
File "/home/runner/work/cpython/cpython-ro-srcdir/Lib/warnings.py", line 28, in _showwarnmsg_impl
text = _formatwarnmsg(msg)
File "/home/runner/work/cpython/cpython-ro-srcdir/Lib/warnings.py", line 128, in _formatwarnmsg
return _formatwarnmsg_impl(msg)
File "/home/runner/work/cpython/cpython-ro-srcdir/Lib/warnings.py", line 64, in _formatwarnmsg_impl
tracing = tracemalloc.is_tracing()
AttributeError: partially initialized module 'tracemalloc' from '/home/runner/work/cpython/cpython-ro-srcdir/Lib/tracemalloc.py' has no attribute 'is_tracing' (most likely due to a circular import)
/home/runner/work/cpython/cpython-ro-srcdir/Lib/unittest/case.py:589: RuntimeWarning: coroutine method 'athrow' of 'AsyncGenTest.test_async_gen_ags_gen_agt_gen.<locals>.agen' was never awaited
if method() is not None:
RuntimeWarning: Enable tracemalloc to get the object allocation traceback
ok
```
I think the `AttributeError` may be new? I've not seen it reported before. If there's no obvious root cause (other than that module finalization order is undefined), there's a simple fix: in warnings.py, where it says
```py
try:
import tracemalloc
# Logging a warning should not raise a new exception:
# catch Exception, not only ImportError and RecursionError.
except Exception:
# don't suggest to enable tracemalloc if it's not available
tracing = True
tb = None
else:
```
(and in the `else` block calls `tracemalloc.is_tracing()` and `tracemalloc.get_object_traceback()`), we could just add attribute accesses of those two functions to the `try` block, so if the module is incomplete, we don't crash on those calls.
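A minimal sketch of that change (simplified; `module` stands in for the imported `tracemalloc`): binding the attributes inside the `try` block routes a partially initialized module into the same fallback as a failed import:

```python
import types

def get_tracing_info(module):
    # Sketch of the suggested fix: access the attributes inside the
    # try block, so a partially initialized tracemalloc module (e.g.
    # during a circular import or interpreter shutdown) hits the
    # same fallback as a failed import.
    try:
        is_tracing = module.is_tracing
        get_object_traceback = module.get_object_traceback
    except Exception:
        # don't suggest to enable tracemalloc if it's not available
        return True, None
    return is_tracing(), get_object_traceback(object())

# Simulate a partially initialized module: is_tracing not defined yet.
partial = types.ModuleType("tracemalloc")
print(get_tracing_info(partial))  # (True, None) -- no AttributeError escapes
```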
<!-- gh-linked-prs -->
### Linked PRs
* gh-113813
* gh-113873
* gh-113874
<!-- /gh-linked-prs -->
| 0297418cacf998e778bc0517aa11eaac827b8c0f | a5db6a3351b440a875a5af84a8b2447981356e34 |
python/cpython | python__cpython-113756 | # Adapt the remaining functions in gcmodule.c to Argument Clinic
# Feature or enhancement
Follow-up from #64384: there are only three remaining functions in gcmodule.c that were not converted to use Argument Clinic. I suggest adapting these three remaining functions, so we get nicely formatted signatures for them (and slightly less argument-parsing code in the extension module).
<!-- gh-linked-prs -->
### Linked PRs
* gh-113756
<!-- /gh-linked-prs -->
| 35fa13d48b3247ea5e83b17c05a98163904e5a0b | ace4d7ff9a247cbe7350719b996a1c7d88a57813 |
python/cpython | python__cpython-113754 | # `PyAsyncGenASend` objects allocated from freelists may not have their finalizers called
# Bug report
CPython uses freelists to speed up allocation of certain frequently allocated types of objects. CPython also supports finalizers (i.e., `tp_finalize`) that are only called once, even if the object is resurrected by its finalizer. These two features do not work well together as currently implemented because we don't clear the `_PyGC_PREV_MASK_FINALIZED` bit when objects are allocated from free-lists.
As far as I can tell, this only affects `PyAsyncGenASend` objects -- I haven't seen other objects that are both allocated from free-lists and use `tp_finalize`.
The finalizer for `PyAsyncGenASend` (which may issue a warning), may not be called if the object is allocated from a free-list (and already finalized):
### Test case
The `test(False)` call should issue a warning about an unawaited "asend" coroutine. However, the presence of the `test(True)` call (if uncommented) will suppress this warning because it ensures that there is an already-finalized object in the free-list.
```python
import asyncio

def main():
    loop = asyncio.new_event_loop()

    async def gen():
        yield 1

    async def test(do_await):
        g = gen()
        if do_await:
            r = await g.asend(None)
        else:
            g.asend(None)
        await g.aclose()

    # Uncommenting this line prevents the warning on the following call
    # due to the already finalized PyAsyncGenASend object in the free-list.
    # loop.run_until_complete(test(True))

    # This should warn!
    loop.run_until_complete(test(False))
    loop.close()

if __name__ == '__main__':
    main()
```
<!-- gh-linked-prs -->
### Linked PRs
* gh-113754
<!-- /gh-linked-prs -->
| 73ae2023a76f199ff854f8da14bd9ff8e93ee7fd | 901a971e161e060bd95f3cf3aeebe8b48d6e6dac |
python/cpython | python__cpython-113751 | # Fix Python object resurrection on free-threaded builds
# Bug report
Python objects can be resurrected due to side effects of their finalizers (`__del__` functions). The relevant code is in `PyObject_CallFinalizerFromDealloc` in `object.c` (as well as in test code in `Modules/_testcapi/gc.c`):
https://github.com/python/cpython/blob/eb53750757062255b1793969ca4cb12ef82b91c6/Objects/object.c#L510-L514
Note that the `Py_REFCNT()`/`Py_SET_REFCNT()` calls **undo** the re-initialization of the refcount, so that the effect (in the default builds) is just to:
1) Call `_PyTraceMalloc_NewReference` if tracemalloc is enabled
2) Call `_Py_AddToAllObjects` if `Py_TRACE_REFS` is enabled
The problem is that the free-threaded builds do additional initialization in `_Py_NewReferenceNoTotal`. For example, they initialize the `ob_gc_bits` field, which will keep track of if the object has been finalized. We don't want to re-initialize that here.
We should add an internal-only function that only does the two desired calls (and not the other re-initialization) and call that from `PyObject_CallFinalizerFromDealloc` and the test in `Modules/_testcapi/gc.c` instead of re-purposing `_Py_NewReferenceNoTotal`.
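The once-only finalizer behavior that the "finalized" bit must preserve can be seen from pure Python (PEP 442 semantics, on CPython):

```python
# A finalizer runs at most once even if it resurrects the object;
# re-initializing the free-threaded "finalized" state on resurrection
# would break exactly this guarantee.
graveyard = []

class Phoenix:
    def __del__(self):
        graveyard.append(self)  # resurrect the object

p = Phoenix()
del p
print(len(graveyard))  # 1 -- finalizer ran and resurrected the object

obj = graveyard.pop()
del obj  # deallocated for real this time; __del__ does not run again
print(len(graveyard))  # 0
```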
<!-- gh-linked-prs -->
### Linked PRs
* gh-113751
<!-- /gh-linked-prs -->
| d0f0308a373298a8906ee5a7546275e1b2e906ea | 3375dfed400494ba5cc1b744d52f6fb8b7796059 |
python/cpython | python__cpython-113745 | # Formalize incomplete error exceptions to improve codeop module handling
Currently the way we detect incomplete errors in the `codeop` module is based on the exception text, which is suboptimal at least. To do this properly, add a new `SyntaxError` subclass called `IncompleteInputError` and use that in the codeop module to detect errors from the parser.
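The behavior in question, via `codeop.compile_command()`, which returns `None` for incomplete input and (before this change) decides that by inspecting the `SyntaxError` message text rather than an exception type:

```python
import codeop

# compile_command() signals incomplete input by returning None;
# internally this is currently detected by matching the parser's
# SyntaxError message text, which a dedicated IncompleteInputError
# subclass would make robust.
print(codeop.compile_command('if True:'))           # None -> incomplete
print(codeop.compile_command('1 + 1') is not None)  # True -> complete
```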
<!-- gh-linked-prs -->
### Linked PRs
* gh-113745
<!-- /gh-linked-prs -->
| 39d102c2ee8eec8ab0bacbcd62d62a72742ecc7c | 1f515e8a109204f7399d85b7fd806135166422d9 |
python/cpython | python__cpython-113930 | # Make the MRO cache thread-safe in free-threaded builds
# Feature or enhancement
The MRO (method resolution order) cache is used to cache lookups on the type hierarchy, such as method lookups. It's currently implemented per-interpreter, which will not be thread-safe in free-threaded builds.
https://github.com/python/cpython/blob/5e1916ba1bf521d6ff9d2c553c057f3ef7008977/Objects/typeobject.c#L4750-L4803
The nogil-3.9 and nogil-3.12 forks used different approaches to make the MRO cache thread-safe. The nogil-3.9 fork moved the type cache to the `PyThreadState`. The nogil-3.12 fork implemented a thread-safe cache shared across threads [^1]. The nogil-3.9 approach is much simpler, so I think we should start with that.
Suggested approach:
* If `Py_GIL_DISABLED` is defined, move the type_cache to the private struct `PyThreadStateImpl`.
* Refactor `_PyType_Lookup()` as necessary to use the correct cache
[^1]: For reference, here is the nogil-3.12 implementation: https://github.com/colesbury/nogil-3.12/commit/9c1f7ba1b4
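A pure-Python sketch of what the MRO cache memoizes (the real implementation is in C, keyed by type version tags, and invalidated when types mutate):

```python
# The MRO cache memoizes attribute lookups along the class hierarchy,
# keyed here by (type, name). Sketch only: the real per-interpreter
# cache uses tp_version_tag keys and needs invalidation on mutation.
_cache = {}

def type_lookup(tp, name):
    key = (tp, name)
    if key not in _cache:
        result = None
        for klass in tp.__mro__:
            if name in klass.__dict__:
                result = klass.__dict__[name]
                break
        _cache[key] = result
    return _cache[key]

print(type_lookup(int, '__add__') is int.__dict__['__add__'])   # True
print(type_lookup(bool, '__add__') is int.__dict__['__add__'])  # True (inherited)
```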
<!-- gh-linked-prs -->
### Linked PRs
* gh-113930
* gh-115541
* gh-115563
<!-- /gh-linked-prs -->
| ae460d450ab854ca66d509ef6971cfe1b6312405 | e74fa294c9b0c67bfcbefdda5a069f0a7648f524 |
python/cpython | python__cpython-113738 | # CSV reader should support QUOTE_NOTNULL and QUOTE_STRINGS
# Feature or enhancement
New quoting rules QUOTE_NOTNULL and QUOTE_STRINGS were introduced in #67230. But they only affect the CSV writer, not the CSV reader. I think that they should affect the CSV reader in the same way as QUOTE_NONNUMERIC does.
* QUOTE_NOTNULL -- unquoted empty strings are returned as None.
* QUOTE_STRINGS -- unquoted empty strings are returned as None and unquoted numeric strings are returned as float.
It is perhaps too late to change this in 3.12, so it can be considered a new feature in 3.13.
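For reference, this is the reader-side effect that QUOTE_NONNUMERIC already has and that the new modes would mirror:

```python
import csv

# QUOTE_NONNUMERIC already changes reader behavior: unquoted fields
# come back as float, quoted ones as str. The proposal gives
# QUOTE_NOTNULL and QUOTE_STRINGS analogous reader-side meanings.
row = next(csv.reader(['"spam",3.5'], quoting=csv.QUOTE_NONNUMERIC))
print(row)  # ['spam', 3.5]
```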
<!-- gh-linked-prs -->
### Linked PRs
* gh-113738
<!-- /gh-linked-prs -->
| ea30a28c3e89b69a214c536e61402660242c0f2a | 58f883b91bd8dd4cac38b58a026397363104a129 |
python/cpython | python__cpython-113731 | # IDLE.app on macOS stops when selection Help -> IDLE Help
# Bug report
### Bug description:
To reproduce:
- open IDLE.app
- open the help menu
- select "IDLE help"
Opening IDLE using `python3 -m idlelib` shows the following traceback when doing this:
```
$ python3.12 -m idlelib
2024-01-05 11:41:38.672 Python[2197:15344359] ApplePersistenceIgnoreState: Existing state will not be touched. New state will be written to /var/folders/d3/rc5nx4v12y96knh2px3bpqsc0000gn/T/org.python.python.savedState
Traceback (most recent call last):
File "<frozen runpy>", line 198, in _run_module_as_main
File "<frozen runpy>", line 88, in _run_code
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/idlelib/__main__.py", line 7, in <module>
idlelib.pyshell.main()
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/idlelib/pyshell.py", line 1693, in main
root.mainloop()
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/tkinter/__init__.py", line 1499, in mainloop
self.tk.mainloop(n)
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/idlelib/macosx.py", line 213, in help_dialog
help.show_idlehelp(root)
^^^^^^^^^^^^^^^^^^
AttributeError: module 'idlelib.help' has no attribute 'show_idlehelp'
```
### CPython versions tested on:
3.10, 3.12
### Operating systems tested on:
macOS
<!-- gh-linked-prs -->
### Linked PRs
* gh-113731
* gh-113765
* gh-113766
<!-- /gh-linked-prs -->
| 66f39648154214621d388f519210442d5fce738f | d0f0308a373298a8906ee5a7546275e1b2e906ea |
python/cpython | python__cpython-113711 | # The Tier 2 Optimizer
# Feature or enhancement
### Proposal:
We're getting a JIT. Now it's time to optimize the traces to pass them to the JIT. The following workstreams are somewhat parallel, and split into two parts. They will not necessarily land in order.
1. The specializer. This will be done as its own pass on uops. Please see Mark's issue here https://github.com/faster-cpython/ideas/issues/560.
2. The uops optimizer. This is a separate pass. In general there are two parts, uops abstract interpretation, and optimized code generation. Please see [this document](https://github.com/Fidget-Spinner/cpython_optimization_notes/blob/main/3.13/uops_optimizer.md) for more information. The following optimizations are targeted:
- Value numbering
- Type + constant propagation
- True function inlining
```[tasklist]
- [x] Interpreter DSL changes. https://github.com/python/cpython/pull/113711
- [ ] https://github.com/python/cpython/issues/114058
- [ ] https://github.com/python/cpython/issues/115419
```
### Has this already been discussed elsewhere?
No response given
### Links to previous discussion of this feature:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-113711
* gh-114487
* gh-114592
* gh-115166
* gh-115221
* gh-115248
* gh-116410
* gh-116460
<!-- /gh-linked-prs -->
| ac92527c08d917dffdb9c0a218d06f21114614a2 | 79970792fd2c70f77c38e08c7b3a9daf6a11bde1 |
python/cpython | python__cpython-113707 | # C Comments about long integer representation are out of date
I've recently been working on improving the performance of long integers that contain a single "digit". This ultimately is probably leading nowhere, but I did discover along the way that the comments about long integer representation in `longintrepr.h` refer to an earlier version of the code. This should be updated.
<!-- gh-linked-prs -->
### Linked PRs
* gh-113707
<!-- /gh-linked-prs -->
| 07824995a056b2894adac69813444307835cc245 | 015b97d19a24a169cc3c0939119e1228791e4253 |
python/cpython | python__cpython-113709 | # `code.InteractiveInterpreter.runsource()` regression on Python 3.12+
# Bug report
### Bug description:
`InteractiveInterpreter.runsource()` is supposed to return True when the input is incomplete, as per the doc: https://docs.python.org/3/library/code.html#code.InteractiveInterpreter.runsource. This appears to have regressed with f-strings in Python 3.12+
**Python 3.11.2:**
```python
>>> code.InteractiveInterpreter().runsource('a = f"""')
True
```
**Python 3.12.1 or Python 3.13.0a2+ (heads/main:1ae7ceba29, Jan 4 2024, 16:18:49):**
```python
>>> code.InteractiveInterpreter().runsource('a = f"""')
File "<input>", line 1
a = f"""
^
SyntaxError: unterminated triple-quoted f-string literal (detected at line 1)
False
```
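The documented contract still holds for non-f-string input on current versions:

```python
import code

interp = code.InteractiveInterpreter()
# Complete input compiles and runs: returns False.
print(interp.runsource('a = 1'))    # False
# Incomplete input (plain, non-f triple-quoted string): returns True,
# as documented.
print(interp.runsource('a = """'))  # True
```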
### CPython versions tested on:
3.11, 3.12, 3.13
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-113709
* gh-113733
<!-- /gh-linked-prs -->
| 3003fbbf00422bce6e327646063e97470afa9091 | 0ae60b66dea5140382190463a676bafe706608f5 |
python/cpython | python__cpython-113697 | # PyObject_CallOneArg and PyObject_CallNoArgs should be marked as returning a new reference
# Documentation
[`PyObject_CallOneArg` and `PyObject_CallNoArgs` should be updated in the docs](https://docs.python.org/3/c-api/call.html#c.PyObject_CallOneArg) with the reference count annotation `"Return value: New reference."` (as is the case with [`PyObject_CallFunction`](https://docs.python.org/3/c-api/call.html#c.PyObject_CallFunction) et al.)
<!-- gh-linked-prs -->
### Linked PRs
* gh-113697
* gh-113698
* gh-113699
<!-- /gh-linked-prs -->
| 1ae7ceba29771baf8f2e8d2d4c50a0355cb6b5c8 | 35ef8cb25917bfd6cbbd7c2bb55dd4f82131c9cf |
python/cpython | python__cpython-113704 | # `test_logging`'s test_111615 fails under WASI
# Bug report
### Bug description:
https://github.com/python/cpython/pull/111638 broke WASI builds due to assuming multiprocessing was always available.
Failure can be seen at https://buildbot.python.org/all/#/builders/1046/builds/3844 .
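A common guard pattern for this (class and method names below are hypothetical stand-ins for the real test):

```python
import importlib.util
import unittest

# Skip, rather than fail at import time, when multiprocessing is
# unavailable (as on WASI builds).
HAVE_MULTIPROCESSING = importlib.util.find_spec("multiprocessing") is not None

@unittest.skipUnless(HAVE_MULTIPROCESSING, "requires multiprocessing")
class Test111615(unittest.TestCase):  # hypothetical stand-in name
    def test_placeholder(self):
        self.assertTrue(True)

print(isinstance(HAVE_MULTIPROCESSING, bool))  # True
```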
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-113704
* gh-113844
<!-- /gh-linked-prs -->
| 842b738129021f52293dc053e014ecb4fe095baa | f3d5d4aa8f0388217aeff69e28d078bdda464b38 |
python/cpython | python__cpython-113694 | # Negative refcount crash in test_monitoring with -Xuops
# Bug report
### Bug description:
```sh
./python.exe -Xuops -m test test_monitoring -v
```
<details>
<summary>Full output</summary>
```
== CPython 3.13.0a2+ (heads/main:4c4b08dd2bd, Jan 3 2024, 14:05:09) [Clang 15.0.0 (clang-1500.1.0.2.5)]
== macOS-14.1.2-arm64-arm-64bit little-endian
== Python build: debug pystats
== cwd: /Users/guido/cpython/build/test_python_worker_40548æ
== CPU count: 12
== encodings: locale=UTF-8 FS=utf-8
== resources: all test resources are disabled, use -u option to unskip tests
Using random seed: 414366294
0:00:00 load avg: 2.46 Run 1 test sequentially
0:00:00 load avg: 2.46 [1/1] test_monitoring
test_async_for (test.test_monitoring.ExceptionMonitoringTest.test_async_for) ... ok
test_explicit_reraise (test.test_monitoring.ExceptionMonitoringTest.test_explicit_reraise) ... ok
test_explicit_reraise_named (test.test_monitoring.ExceptionMonitoringTest.test_explicit_reraise_named) ... ok
test_implicit_reraise (test.test_monitoring.ExceptionMonitoringTest.test_implicit_reraise) ... ok
test_implicit_reraise_named (test.test_monitoring.ExceptionMonitoringTest.test_implicit_reraise_named) ... ok
test_implicit_stop_iteration (test.test_monitoring.ExceptionMonitoringTest.test_implicit_stop_iteration) ... ok
test_simple_try_except (test.test_monitoring.ExceptionMonitoringTest.test_simple_try_except) ... ok
test_throw (test.test_monitoring.ExceptionMonitoringTest.test_throw) ... ok
test_try_finally (test.test_monitoring.ExceptionMonitoringTest.test_try_finally) ... ok
test_branch (test.test_monitoring.LineMonitoringTest.test_branch) ... ok
test_linear (test.test_monitoring.LineMonitoringTest.test_linear) ... ok
test_lines_loop (test.test_monitoring.LineMonitoringTest.test_lines_loop) ... ok
test_lines_single (test.test_monitoring.LineMonitoringTest.test_lines_single) ... ok
test_lines_two (test.test_monitoring.LineMonitoringTest.test_lines_two) ... ok
test_try_except (test.test_monitoring.LineMonitoringTest.test_try_except) ... ok
test_has_objects (test.test_monitoring.MonitoringBasicTest.test_has_objects) ... ok
test test_monitoring crashed -- Traceback (most recent call last):
File "/Users/guido/cpython/Lib/test/libregrtest/single.py", line 178, in _runtest_env_changed_exc
_load_run_test(result, runtests)
~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^
File "/Users/guido/cpython/Lib/test/libregrtest/single.py", line 135, in _load_run_test
regrtest_runner(result, test_func, runtests)
~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/guido/cpython/Lib/test/libregrtest/single.py", line 88, in regrtest_runner
test_result = test_func()
~~~~~~~~~^^
File "/Users/guido/cpython/Lib/test/libregrtest/single.py", line 132, in test_func
return run_unittest(test_mod)
~~~~~~~~~~~~^^^^^^^^^^
File "/Users/guido/cpython/Lib/test/libregrtest/single.py", line 37, in run_unittest
return _run_suite(tests)
~~~~~~~~~~^^^^^^^
File "/Users/guido/cpython/Lib/test/libregrtest/single.py", line 57, in _run_suite
result = runner.run(suite)
~~~~~~~~~~^^^^^^^
File "/Users/guido/cpython/Lib/unittest/runner.py", line 240, in run
test(result)
~~~~^^^^^^^^
File "/Users/guido/cpython/Lib/unittest/suite.py", line 84, in __call__
return self.run(*args, **kwds)
~~~~~~~~^^^^^^^^^^^^^^^
File "/Users/guido/cpython/Lib/unittest/suite.py", line 122, in run
test(result)
~~~~^^^^^^^^
File "/Users/guido/cpython/Lib/unittest/suite.py", line 84, in __call__
return self.run(*args, **kwds)
~~~~~~~~^^^^^^^^^^^^^^^
File "/Users/guido/cpython/Lib/unittest/suite.py", line 107, in run
for index, test in enumerate(self):
...<19 lines>...
self._removeTestAtIndex(index)
TypeError: 'enumerate' object does not support the asynchronous context manager protocol
test_monitoring failed (uncaught exception)
== Tests result: FAILURE ==
1 test failed:
test_monitoring
Total duration: 56 ms
Total tests: run=0
Total test files: run=1/1 failed=1
Result: FAILURE
Objects/codeobject.c:1493: _Py_NegativeRefcount: Assertion failed: object has negative ref count
<object at 0x13a2c8820 is freed>
Fatal Python error: _PyObject_AssertFailed: _PyObject_AssertFailed
Python runtime state: finalizing (tstate=0x0000000100d27968)
Current thread 0x00000001da335ec0 (most recent call first):
Garbage-collecting
<no Python frame>
Abort trap: 6
```
</details>
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
macOS
<!-- gh-linked-prs -->
### Linked PRs
* gh-113694
<!-- /gh-linked-prs -->
| 35ef8cb25917bfd6cbbd7c2bb55dd4f82131c9cf | 4c4b08dd2bd5f2cad4e41bf29119a3daa2956f6e |
python/cpython | python__cpython-113715 | # Split up Modules/gcmodule.c
# Feature or enhancement
The free-threaded builds require substantial changes to the garbage collection implementation, but we don't want to make those changes to the default build. In other parts of CPython, we have used "local" `#ifdef Py_GIL_DISABLED` guards around free-threaded specific bits, but I think doing this throughout `Modules/gcmodule.c` will make the GC harder to maintain than having separate files for the free-threaded and default builds. Additionally, I think @markshannon already suggested splitting the core GC parts from the Python "gc" module.
There's definitely an increased maintenance cost for having two GC implementations in separate files, but I think this maintenance will be less than having a single file with intermixed implementations and lots of `#ifdef Py_GIL_DISABLED` guards.
Here's a proposed split:
`Modules/gcmodule.c` - [`gc`](https://docs.python.org/3/library/gc.html) Python package interface
`Python/gc.c` - core GC functionality common to both build configurations (GIL & free-threaded)
`Python/gc_gil.c` - GIL (non-free-threaded) specific implementations
`Python/gc_free_threaded.c` - free-threaded specific implementations
At first, I'd expect most code to be in either `Modules/gcmodule.c` or `Python/gc.c`. As work on the free-threaded build continues, I would expect `Python/gc.c` to shrink as code is moved into `Python/gc_gil.c` and `Python/gc_free_threaded.c`.
@nascheme @markshannon @pablogsal @DinoV @vstinner
<!-- gh-linked-prs -->
### Linked PRs
* gh-113715
* gh-113814
<!-- /gh-linked-prs -->
| 99854ce1701ca4d1a0d153e501a29f9eec306ce5 | 0b7476080b58ea2ee71c6c1229994a3bb62fe4fa |
python/cpython | python__cpython-113667 | # Missing UF_ and SF_ flags in `Lib/stat.py`
# Feature or enhancement
### Proposal:
The list of `UF_` and `SF_` constants is missing a number of definitions from recent versions of macOS, in particular:
- meta constants: UF_SETTABLE, SF_SUPPORTED, SF_SETTABLE, SF_SYNTHETIC
- UF_TRACKED
- UF_DATAVAULT
- SF_RESTRICTED
- SF_FIRMLINK
- SF_DATALESS
### Has this already been discussed elsewhere?
This is a minor feature, which does not need previous discussion elsewhere
### Links to previous discussion of this feature:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-113667
<!-- /gh-linked-prs -->
| 2010d45327128594aed332befa687c8aead010bc | 892155d7365c9c4a6c2dd6850b4527222ba5c217 |
python/cpython | python__cpython-113856 | # 3.12 starts to return an error when all tests in a test file are skipped
# Bug report
### Bug description:
See https://github.com/apache/thrift/blob/5cf71b2beec3c67a4c8452ddabbbc6ae43fff16f/lib/py/test/test_sslsocket.py for a full example, we skipped all tests in that file via:
```python
@unittest.skip("failing SSL test to be fixed in subsequent pull request")
class TSSLSocketTest(unittest.TestCase):
...
```
On python 3.11 this is fine with:
```
/usr/bin/python3 test/test_sslsocket.py
sssssssssss
----------------------------------------------------------------------
Ran 11 tests in 0.000s
OK (skipped=11)
```
But on python 3.12, this starts to become an error:
```
/opt/hostedtoolcache/Python/3.12.1/x64/bin/python test/test_sslsocket.py
WARNING:thrift.transport.sslcompat:using legacy validation callback
sssssssssss
----------------------------------------------------------------------
Ran 0 tests in 0.000s
NO TESTS RAN (skipped=11)
make[1]: *** [Makefile:645: py3-test] Error 5
```
I have to add a not-skipped dummy test to work around this (https://github.com/apache/thrift/pull/2914/files)
Is this an intentional change? I find it weird that a fully skipped test file is considered a failure.
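The test run itself still succeeds; what changed is the exit-code logic in `unittest.main()` when `result.testsRun` is 0, which 3.12 hits because skipped tests are no longer counted as run:

```python
import io
import unittest

class AllSkipped(unittest.TestCase):
    @unittest.skip("failing SSL test to be fixed in subsequent pull request")
    def test_one(self):
        pass

suite = unittest.defaultTestLoader.loadTestsFromTestCase(AllSkipped)
result = unittest.TextTestRunner(stream=io.StringIO()).run(suite)
# wasSuccessful() is True on both 3.11 and 3.12; the "Error 5" above
# comes from unittest.main() treating testsRun == 0 as "no tests ran".
print(result.wasSuccessful(), len(result.skipped))  # True 1
```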
### CPython versions tested on:
3.12
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-113856
* gh-113875
<!-- /gh-linked-prs -->
| 3a9096c337c16c9335e0d4eba8d1d4196258af72 | 0297418cacf998e778bc0517aa11eaac827b8c0f |
python/cpython | python__cpython-113660 | # Security risk of hidden pth files
"pth files are evil." (Barry Warsaw, #78125)
There is a special kind of evilness:
1. pth files allow executing arbitrary Python code.
2. pth files are executed automatically, unlike normal py files, which need an explicit import or to be passed as an argument to the Python interpreter.
3. Some files are hidden by default (in shells and file managers). In particular, dot-files on POSIX.
In sum, this increases the risk of executing malicious code. When you receive a handful of files, you, as a cautious person, check their contents before executing them. If Python source files are hidden, that is okay, because you saw that nothing suspicious is imported in the files that you execute. But pth files can be executed even if you do not see them and there are no references to them in visible files.
This issue was first discussed in comments in #113357.
The severity of this issue is not very high, because it requires user interaction to activate. But it increases the risk. I think we should forbid processing hidden pth files.
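A sketch of the proposed filter (POSIX dot-file notion of "hidden" only; on Windows the hidden attribute would have to be read from `os.stat(...).st_file_attributes` instead):

```python
def visible_pth_files(names):
    # Proposed behavior sketch: when site scans a directory for .pth
    # files, skip hidden ones (dot-files, in the POSIX sense).
    return sorted(
        n for n in names
        if n.endswith('.pth') and not n.startswith('.')
    )

print(visible_pth_files(['.evil.pth', 'normal.pth', 'readme.txt']))
# ['normal.pth']
```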
<!-- gh-linked-prs -->
### Linked PRs
* gh-113660
* gh-114143
* gh-114144
* gh-114145
* gh-114146
* gh-114147
<!-- /gh-linked-prs -->
| 74208ed0c440244fb809d8acc97cb9ef51e888e3 | 7a24ecc953e1edc9c5bbedbd19cc587c3ff635ea |
python/cpython | python__cpython-113662 | # Test failures when running with Tier 2 enabled
# Bug report
### Bug description:
Over the last two weeks, some test failures were introduced when the Tier 2 compiler is turned on, such that building with `--pgo` and Tier 2 fails.
I am going to bisect to see if I can find the moment at which the failures were introduced and will report back.
[Full build log](https://gist.github.com/mdboom/81b017aa83ab67e147c508cb7a6fb299)
```
./python -m test --pgo --timeout=
Using random seed: 3445139140
0:00:00 load avg: 1.94 Run 44 tests sequentially
0:00:00 load avg: 1.94 [ 1/44] test_array
0:00:01 load avg: 1.94 [ 2/44] test_base64
0:00:02 load avg: 1.94 [ 3/44] test_binascii
0:00:02 load avg: 1.94 [ 4/44] test_binop
0:00:02 load avg: 1.94 [ 5/44] test_bisect
0:00:02 load avg: 1.94 [ 6/44] test_bytes
0:00:12 load avg: 1.79 [ 7/44] test_bz2
0:00:13 load avg: 1.73 [ 8/44] test_cmath
0:00:14 load avg: 1.73 [ 9/44] test_codecs
0:00:16 load avg: 1.73 [10/44] test_collections
0:00:19 load avg: 1.67 [11/44] test_complex
0:00:19 load avg: 1.67 [12/44] test_dataclasses
0:00:20 load avg: 1.67 [13/44] test_datetime -- test_dataclasses failed (uncaught exception)
0:00:26 load avg: 1.62 [14/44] test_decimal
test test_decimal failed
0:00:35 load avg: 1.52 [15/44] test_difflib -- test_decimal failed (2 errors)
0:00:38 load avg: 1.48 [16/44] test_embed
0:00:51 load avg: 1.41 [17/44] test_float
0:00:52 load avg: 1.41 [18/44] test_fstring
0:00:53 load avg: 1.37 [19/44] test_functools
test test_functools failed
0:00:55 load avg: 1.37 [20/44] test_generators -- test_functools failed (4 errors)
0:00:55 load avg: 1.37 [21/44] test_hashlib
0:00:56 load avg: 1.37 [22/44] test_heapq
0:00:57 load avg: 1.37 [23/44] test_int
0:00:58 load avg: 1.34 [24/44] test_itertools
0:01:08 load avg: 1.29 [25/44] test_json
0:01:15 load avg: 1.19 [26/44] test_long
0:01:21 load avg: 1.17 [27/44] test_lzma
0:01:22 load avg: 1.17 [28/44] test_math
0:01:29 load avg: 1.14 [29/44] test_memoryview
test test_memoryview failed
0:01:29 load avg: 1.14 [30/44] test_operator -- test_memoryview failed (4 errors)
0:01:29 load avg: 1.14 [31/44] test_ordered_dict
0:01:31 load avg: 1.14 [32/44] test_patma
0:01:31 load avg: 1.14 [33/44] test_pickle -- test_patma failed (uncaught exception)
0:01:43 load avg: 1.11 [34/44] test_pprint
0:01:43 load avg: 1.11 [35/44] test_re -- test_pprint failed (uncaught exception)
0:01:45 load avg: 1.11 [36/44] test_set
0:01:57 load avg: 1.09 [37/44] test_sqlite3
0:01:58 load avg: 1.09 [38/44] test_statistics
0:02:03 load avg: 1.08 [39/44] test_str
0:02:08 load avg: 1.07 [40/44] test_struct
0:02:10 load avg: 1.07 [41/44] test_tabnanny
0:02:10 load avg: 1.07 [42/44] test_time
0:02:13 load avg: 1.07 [43/44] test_xml_etree
0:02:13 load avg: 1.07 [44/44] test_xml_etree_c
Total duration: 2 min 15 sec
Total tests: run=8,409 skipped=199
Total test files: run=44/44 failed=6
make: *** [Makefile:842: profile-run-stamp] Error 2
Result: FAILURE
```
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-113662
<!-- /gh-linked-prs -->
| b0fb074d5983f07517cec76a37268f13c986d314 | bab0758ea4a1d4666a973ae2d65f21a09e4478ba |
python/cpython | python__cpython-113638 | # The whitespaces in the Doc/tools/templates/dummy.html make the translation process harder
The whitespace in the [dummy.html](https://github.com/python/cpython/blob/main/Doc/tools/templates/dummy.html) template becomes an issue when it comes to translating these strings.
- If the translator forgets to add a leading or a trailing space, it mostly results in phrases being concatenated without spaces in the HTML.
- Linters like [``sphinx-lint``](https://github.com/sphinx-contrib/sphinx-lint) also complain a lot about this whitespace.
- And some translation file editors (po editors) tend to trim this whitespace even if the translator did not forget to add it.
Here are some examples from the Turkish translation (scenario of a translator who forgot to add an extra space at the end):
(screenshots of the resulting concatenated Turkish text omitted)
How it looks in the English Doc (no problems):
(screenshot omitted)
We could remove the trailing whitespace from "Part of the" and insert a leading whitespace into "Stable ABI", but that would only please the linters. So, I propose removing all the whitespace (leading and trailing) in the template file and letting c_annotations.py handle the spacing.
These strings about the API and the ABI were made translatable in #107680, which also introduced the whitespaces.
<!-- gh-linked-prs -->
### Linked PRs
* gh-113638
* gh-113676
* gh-113679
<!-- /gh-linked-prs -->
| ea978c645edd7bc29d811c61477dff766d7318b6 | dc8df6e84024b79aa96e85a64f354bf8e827bcba |
python/cpython | python__cpython-113634 | # Use module state for _testcapi
Replace the "TestError" global variable with a module state structure.
<!-- gh-linked-prs -->
### Linked PRs
* gh-113634
<!-- /gh-linked-prs -->
| b2566d89ce50e9924bb2fccb87dcfa3ceb6cc0d6 | 8e4ff5c7885abb04a66d079499335c4d46106aff |
python/cpython | python__cpython-115192 | # Update autoconf to warn when building under `wasm32-unknown-emscripten`
# Bug report
### Bug description:
Support for wasm32-unknown-emscripten was dropped in https://github.com/python/peps/pull/3612 . As [part of PEP 11](https://peps.python.org/pep-0011/#unsupporting-platforms), a `configure` error needs to be triggered when building for the platform.
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-115192
<!-- /gh-linked-prs -->
| c968dc7ff3041137bb702436ff944692dede1ad1 | 553c90ccc2f5b15be76a2bb6e38d23e58d739e2f |
python/cpython | python__cpython-113640 | # test_site _pth file tests fail when stdlib path is very long
# Bug report
### Bug description:
The `getpath` module [tries to read certain `_pth` files during initialization](https://github.com/python/cpython/blob/b4b2cc101216ae1017898dfbe43c90da2fd0a308/Modules/getpath.py#L462).
This is tested in `test_site` with generated `_pth` files that [include the stdlib path 200 times](https://github.com/python/cpython/blob/b4b2cc101216ae1017898dfbe43c90da2fd0a308/Lib/test/test_site.py#L669).
The `getpath` module [disallows reading files over 32KB during initialization](https://github.com/python/cpython/blob/b4b2cc101216ae1017898dfbe43c90da2fd0a308/Modules/getpath.c#L375).
If the test suite runs from a very long base path, 200 repetitions of the stdlib path in the `_pth` file would be enough to exceed the 32KB limit.
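A quick back-of-the-envelope check shows how long the base path needs to be for 200 repetitions to cross the limit (hedged: the exact byte counts also depend on the file's other lines and line endings, so this is only an approximation):

```python
LIMIT = 32 * 1024   # getpath's 32KB read limit during initialization
REPS = 200          # stdlib path repetitions in the generated _pth file

# each repetition is roughly the stdlib path plus a newline; the other
# lines in the file are tiny, so a path of about 163 bytes or more is
# enough to exceed the limit
threshold = LIMIT // REPS
assert threshold == 163

for path_len in (100, 163, 200):
    size = REPS * (path_len + 1)  # +1 for the newline
    print(path_len, size, size > LIMIT)
```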
To demonstrate, artificially increase the number of repetitions to some high number that would exceed 32KB, e.g. this patch:
```diff
diff --git a/Lib/test/test_site.py b/Lib/test/test_site.py
index 33d0975bda8..ba2f7d90a80 100644
--- a/Lib/test/test_site.py
+++ b/Lib/test/test_site.py
@@ -668,7 +668,7 @@ def test_underpth_nosite_file(self):
exe_prefix = os.path.dirname(sys.executable)
pth_lines = [
'fake-path-name',
- *[libpath for _ in range(200)],
+ *[libpath for _ in range(5000)],
'',
'# comment',
]
```
and observe the test failing with the following error:
```bash
$ ./python.exe -m test test_site -v -m '*_pthFileTests.test_underpth_nosite_file'
== CPython 3.13.0a2+ (heads/main-dirty:b4b2cc10121, Jan 1 2024, 11:38:27) [Clang 15.0.0 (clang-1500.1.0.2.5)]
== macOS-14.2.1-arm64-arm-64bit little-endian
== Python build: debug
== cwd: /Users/itamaro/work/pyexe/main-dbg/build/test_python_worker_96404
== CPU count: 12
== encodings: locale=UTF-8 FS=utf-8
== resources: all test resources are disabled, use -u option to unskip tests
Using random seed: 4200167786
0:00:00 load avg: 8.99 Run 1 test sequentially
0:00:00 load avg: 8.99 [1/1] test_site
test_underpth_nosite_file (test.test_site._pthFileTests.test_underpth_nosite_file) ... Exception ignored in running getpath:
Traceback (most recent call last):
File "<frozen getpath>", line 462, in <module>
MemoryError: cannot read file larger than 32KB during initialization
Fatal Python error: error evaluating path
Python runtime state: core initialized
Current thread 0x00000001e0e39000 (most recent call first):
<no Python frame>
ERROR
======================================================================
ERROR: test_underpth_nosite_file (test.test_site._pthFileTests.test_underpth_nosite_file)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/Users/itamaro/work/cpython/Lib/test/test_site.py", line 683, in test_underpth_nosite_file
output = subprocess.check_output([exe_file, '-c',
~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^
'import sys; print("\\n".join(sys.path) if sys.flags.no_site else "")'
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
], env=env, encoding='utf-8', errors='surrogateescape')
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/itamaro/work/cpython/Lib/subprocess.py", line 470, in check_output
return run(*popenargs, stdout=PIPE, timeout=timeout, check=True,
~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
**kwargs).stdout
^^^^^^^^^
File "/Users/itamaro/work/cpython/Lib/subprocess.py", line 575, in run
raise CalledProcessError(retcode, process.args,
output=stdout, stderr=stderr)
subprocess.CalledProcessError: Command '['/var/folders/ng/hmf622ys3wnc3c21prtx7nd80000gn/T/tmpkc8z54mu/python.exe', '-c', 'import sys; print("\\n".join(sys.path) if sys.flags.no_site else "")']' returned non-zero exit status 1.
----------------------------------------------------------------------
Ran 1 test in 0.032s
FAILED (errors=1)
test test_site failed
test_site failed (1 error)
== Tests result: FAILURE ==
1 test failed:
test_site
Total duration: 150 ms
Total tests: run=1 (filtered)
Total test files: run=1/1 (filtered) failed=1
Result: FAILURE
```
### CPython versions tested on:
3.12, 3.13, CPython main branch
### Operating systems tested on:
Linux, macOS
<!-- gh-linked-prs -->
### Linked PRs
* gh-113640
* gh-113671
* gh-113672
<!-- /gh-linked-prs -->
| 5dc79e3d7f26a6a871a89ce3efc9f1bcee7bb447 | b0fb074d5983f07517cec76a37268f13c986d314 |
python/cpython | python__cpython-113648 | # Safer data serialization with marshal module
# Feature or enhancement
The main purpose of the `marshal` module is the serialization of precompiled module code objects. This requires support for code objects, strings for names, and the other primitive Python types and simple collection types referenced by the code object.
This also makes it usable as a more generic data serialization tool -- more limited than `pickle`, but less limited than JSON. The `marshal` module supports different versions of the format and is backward compatible with all earlier versions, but only if the data does not contain code objects. The format of code objects changes with every Python version, and this is not reflected in the marshal format version. Loading marshal data created in a different Python version has undefined behavior if the data contains a code object.
I propose adding a keyword-only parameter `allow_code` with a default value of True to the `marshal` functions. Specifying `allow_code=False` would forbid saving and loading code objects. This makes it safer to load external data and guarantees that the output can be safely loaded in other Python versions.
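A minimal sketch of the proposed behaviour, emulated in pure Python (the `safe_dumps` helper and its recursive check are hypothetical illustrations, not the proposed C implementation):

```python
import marshal
import types

def safe_dumps(data):
    """Hypothetical emulation of marshal.dumps(data, allow_code=False)."""
    def check(obj):
        if isinstance(obj, types.CodeType):
            raise ValueError("marshal: code objects are not allowed")
        if isinstance(obj, (list, tuple, set, frozenset)):
            for item in obj:
                check(item)
        elif isinstance(obj, dict):
            for key, value in obj.items():
                check(key)
                check(value)
    check(data)
    return marshal.dumps(data)

# plain data round-trips fine...
payload = {"a": [1, 2.0, "three"]}
assert marshal.loads(safe_dumps(payload)) == payload

# ...but a code object is rejected, instead of producing output whose
# meaning depends on the Python version that reads it
try:
    safe_dumps(compile("x = 1", "<test>", "exec"))
except ValueError:
    pass
```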
<!-- gh-linked-prs -->
### Linked PRs
* gh-113648
<!-- /gh-linked-prs -->
| d2d8332f71ae8059150a9d8d91498493f9b443fc | a482bc67ee786e60937a547776fcf9528810e1ce |
python/cpython | python__cpython-113894 | # [improvement] Align object addresses in the Descriptor HowTo Guide
# Documentation
In the [Functions and methods](https://docs.python.org/3.12/howto/descriptor.html#functions-and-methods) chapter of this wonderful guide there is an inconsistency between object addresses in the provided code examples and outputs.
Current outputs for `d.f` and `d.f.__self__` are `<bound method D.f of <__main__.D object at 0x00B18C90>>` and `<__main__.D object at 0x1012e1f98>` respectively. These two addresses should in fact be identical.
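The identity the guide's output should reflect is easy to verify (a minimal sketch, using a hypothetical class `D` standing in for the one in the guide):

```python
class D:
    def f(self, x):
        return x

d = D()

# the bound method's __self__ is the very same object as d...
assert d.f.__self__ is d

# ...so the instance repr embedded in the bound method's repr must show
# the same address as repr(d) itself
assert repr(d) in repr(d.f)
```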
<!-- gh-linked-prs -->
### Linked PRs
* gh-113894
* gh-113922
* gh-113923
<!-- /gh-linked-prs -->
| 901a971e161e060bd95f3cf3aeebe8b48d6e6dac | 70497218351ba44bffc8b571201ecb5652d84675 |
python/cpython | python__cpython-113636 | # Python/flowgraph.c:483: _Bool no_redundant_jumps(cfg_builder *): Assertion `last->i_target != b->b_next' failed
# Bug report
### Bug description:
The fuzz_pycompile fuzzer identified an assertion failure:
```
<fuzz input>:1: SyntaxWarning: invalid decimal literal
<fuzz input>:1: SyntaxWarning: invalid decimal literal
fuzz_pycompile: Python/flowgraph.c:483: _Bool no_redundant_jumps(cfg_builder *): Assertion `last->i_target != b->b_next' failed.
AddressSanitizer:DEADLYSIGNAL
=================================================================
==2627==ERROR: AddressSanitizer: ABRT on unknown address 0x053900000a43 (pc 0x790fafca100b bp 0x790fafe16588 sp 0x7ffee87019d0 T0)
SCARINESS: 10 (signal)
#0 0x790fafca100b in raise /build/glibc-SzIz7B/glibc-2.31/sysdeps/unix/sysv/linux/raise.c:51:1
#1 0x790fafc80858 in abort /build/glibc-SzIz7B/glibc-2.31/stdlib/abort.c:79:7
#2 0x790fafc80728 in __assert_fail_base /build/glibc-SzIz7B/glibc-2.31/assert/assert.c:92:3
#3 0x790fafc91fd5 in __assert_fail /build/glibc-SzIz7B/glibc-2.31/assert/assert.c:101:3
#4 0x825781 in no_redundant_jumps [cpython3/Python/flowgraph.c:483](https://github.com/python/cpython/blob/471aa752415029c508693fa7971076f5148022a6/Python/flowgraph.c#L483):17
#5 0x825781 in _PyCfg_OptimizedCfgToInstructionSequence [cpython3/Python/flowgraph.c:2719](https://github.com/python/cpython/blob/471aa752415029c508693fa7971076f5148022a6/Python/flowgraph.c#L2719):5
#6 0x7992f9 in optimize_and_assemble_code_unit [cpython3/Python/compile.c:7581](https://github.com/python/cpython/blob/471aa752415029c508693fa7971076f5148022a6/Python/compile.c#L7581):9
#7 0x7992f9 in optimize_and_assemble [cpython3/Python/compile.c:7616](https://github.com/python/cpython/blob/471aa752415029c508693fa7971076f5148022a6/Python/compile.c#L7616):12
#8 0x7925a9 in compiler_mod [cpython3/Python/compile.c:1779](https://github.com/python/cpython/blob/471aa752415029c508693fa7971076f5148022a6/Python/compile.c#L1779):24
#9 0x7925a9 in _PyAST_Compile [cpython3/Python/compile.c:555](https://github.com/python/cpython/blob/471aa752415029c508693fa7971076f5148022a6/Python/compile.c#L555):24
#10 0x8fce37 in Py_CompileStringObject [cpython3/Python/pythonrun.c:1452](https://github.com/python/cpython/blob/471aa752415029c508693fa7971076f5148022a6/Python/pythonrun.c#L1452):10
#11 0x8fcf2c in Py_CompileStringExFlags [cpython3/Python/pythonrun.c:1465](https://github.com/python/cpython/blob/471aa752415029c508693fa7971076f5148022a6/Python/pythonrun.c#L1465):10
#12 0x4f7e97 in fuzz_pycompile [cpython3/Modules/_xxtestfuzz/fuzzer.c:550](https://github.com/python/cpython/blob/471aa752415029c508693fa7971076f5148022a6/Modules/_xxtestfuzz/fuzzer.c#L550):24
#13 0x4f7e97 in _run_fuzz [cpython3/Modules/_xxtestfuzz/fuzzer.c:563](https://github.com/python/cpython/blob/471aa752415029c508693fa7971076f5148022a6/Modules/_xxtestfuzz/fuzzer.c#L563):14
#14 0x4f7e97 in LLVMFuzzerTestOneInput [cpython3/Modules/_xxtestfuzz/fuzzer.c:704](https://github.com/python/cpython/blob/471aa752415029c508693fa7971076f5148022a6/Modules/_xxtestfuzz/fuzzer.c#L704):11
#15 0x4f878d in ExecuteFilesOnyByOne /src/aflplusplus/utils/aflpp_driver/aflpp_driver.c:255:7
#16 0x4f8598 in LLVMFuzzerRunDriver /src/aflplusplus/utils/aflpp_driver/aflpp_driver.c:0
#17 0x4f8158 in main /src/aflplusplus/utils/aflpp_driver/aflpp_driver.c:300:10
#18 0x790fafc82082 in __libc_start_main /build/glibc-SzIz7B/glibc-2.31/csu/libc-start.c:308:16
#19 0x43906d in _start
```
Reproducer (note that the first two bytes are metadata for the fuzzer):
```
00000000: 2020 6966 2039 3c39 3c39 616e 6420 396f if 9<9<9and 9o
00000010: 7220 393a 39 r 9:9
```
Basic reproduction:
```
~/p/cpython ❯❯❯ ./python.exe -c "compile('if 9<9<9and 9or 9:9', '<na>', 'exec')"
<na>:1: SyntaxWarning: invalid decimal literal
<na>:1: SyntaxWarning: invalid decimal literal
Assertion failed: (last->i_target != b->b_next), function no_redundant_jumps, file flowgraph.c, line 483.
```
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Linux, macOS
<!-- gh-linked-prs -->
### Linked PRs
* gh-113636
<!-- /gh-linked-prs -->
| 7d01fb48089872155e1721ba0a8cc27ee5c4fecd | 0c3455a9693cfabcd991c4c33db7cccb1387de58 |
python/cpython | python__cpython-113607 | # Objects/call.c:342: PyObject *_PyObject_Call(PyThreadState *, PyObject *, PyObject *, PyObject *): Assertion `!_PyErr_Occurred(tstate)' failed.
# Bug report
### Bug description:
The `fuzz_pycompile` fuzzer identified an assertion failure:
(https://bugs.chromium.org/p/oss-fuzz/issues/detail?id=65451 - which should unembargo itself "soon" now that this is fixed)
```
Running: /mnt/scratch0/clusterfuzz/bot/inputs/fuzzer-testcases/a64c8acb44b2e25736a340a8e5865db3E.-6.ADDR.0.INSTR.[UNKNOWN].fuzz
fuzz_pycompile: Objects/call.c:342: PyObject *_PyObject_Call(PyThreadState *, PyObject *, PyObject *, PyObject *): Assertion `!_PyErr_Occurred(tstate)' failed.
==65602== ERROR: libFuzzer: deadly signal
#0 0x553b61 in __sanitizer_print_stack_trace /src/llvm-project/compiler-rt/lib/asan/asan_stack.cpp:87:3
#1 0x472678 in fuzzer::PrintStackTrace() /src/llvm-project/compiler-rt/lib/fuzzer/FuzzerUtil.cpp:210:5
#2 0x457353 in fuzzer::Fuzzer::CrashCallback() /src/llvm-project/compiler-rt/lib/fuzzer/FuzzerLoop.cpp:233:3
#3 0x79a033e1441f in libpthread.so.0
#4 0x79a033c2a00a in __libc_signal_restore_set /build/glibc-SzIz7B/glibc-2.31/sysdeps/unix/sysv/linux/internal-signals.h:86:3
#5 0x79a033c2a00a in raise /build/glibc-SzIz7B/glibc-2.31/sysdeps/unix/sysv/linux/raise.c:48:3
#6 0x79a033c09858 in abort /build/glibc-SzIz7B/glibc-2.31/stdlib/abort.c:79:7
#7 0x79a033c09728 in __assert_fail_base /build/glibc-SzIz7B/glibc-2.31/assert/assert.c:92:3
#8 0x79a033c1afd5 in __assert_fail /build/glibc-SzIz7B/glibc-2.31/assert/assert.c:101:3
#9 0xbaee90 in _PyObject_Call [cpython3/Objects/call.c:342](https://github.com/python/cpython/blob/f46987b8281148503568516c29a4a04a75aaba8d/Objects/call.c#L342):5
#10 0xbaf0d0 in PyObject_Call [cpython3/Objects/call.c:373](https://github.com/python/cpython/blob/f46987b8281148503568516c29a4a04a75aaba8d/Objects/call.c#L373):12
#11 0x85175e in PyErr_SetFromErrnoWithFilenameObjects [cpython3/Python/errors.c:874](https://github.com/python/cpython/blob/f46987b8281148503568516c29a4a04a75aaba8d/Python/errors.c#L874):13
#12 0x851551 in PyErr_SetFromErrnoWithFilenameObject [cpython3/Python/errors.c:785](https://github.com/python/cpython/blob/f46987b8281148503568516c29a4a04a75aaba8d/Python/errors.c#L785):12
#13 0x9a1acd in _Py_fopen_obj [cpython3/Python/fileutils.c:1832](https://github.com/python/cpython/blob/f46987b8281148503568516c29a4a04a75aaba8d/Python/fileutils.c#L1832):9
#14 0x8581b4 in _PyErr_ProgramDecodedTextObject [cpython3/Python/errors.c:1924](https://github.com/python/cpython/blob/f46987b8281148503568516c29a4a04a75aaba8d/Python/errors.c#L1924):16
#15 0xdbb0d2 in _PyPegen_raise_error_known_location [cpython3/Parser/pegen_errors.c:336](https://github.com/python/cpython/blob/f46987b8281148503568516c29a4a04a75aaba8d/Parser/pegen_errors.c#L336):22
#16 0xdcc92b in RAISE_ERROR_KNOWN_LOCATION [cpython3/Parser/pegen.h:182](https://github.com/python/cpython/blob/f46987b8281148503568516c29a4a04a75aaba8d/Parser/pegen.h#L182):5
#17 0xe4fc58 in invalid_class_pattern_rule [cpython3/Parser/parser.c:23691](https://github.com/python/cpython/blob/f46987b8281148503568516c29a4a04a75aaba8d/Parser/parser.c#L23691):20
#18 0xe49692 in class_pattern_rule [cpython3/Parser/parser.c:10330](https://github.com/python/cpython/blob/f46987b8281148503568516c29a4a04a75aaba8d/Parser/parser.c#L10330):42
#19 0xe45d6f in closed_pattern_rule [cpython3/Parser/parser.c:8166](https://github.com/python/cpython/blob/f46987b8281148503568516c29a4a04a75aaba8d/Parser/parser.c#L8166):34
#20 0xe43704 in _gather_65_rule [cpython3/Parser/parser.c:29094](https://github.com/python/cpython/blob/f46987b8281148503568516c29a4a04a75aaba8d/Parser/parser.c#L29094):21
#21 0xe43704 in or_pattern_rule [cpython3/Parser/parser.c:7969](https://github.com/python/cpython/blob/f46987b8281148503568516c29a4a04a75aaba8d/Parser/parser.c#L7969):44
#22 0xe40db8 in as_pattern_rule [cpython3/Parser/parser.c:7885](https://github.com/python/cpython/blob/f46987b8281148503568516c29a4a04a75aaba8d/Parser/parser.c#L7885):24
#23 0xe40db8 in pattern_rule [cpython3/Parser/parser.c:7817](https://github.com/python/cpython/blob/f46987b8281148503568516c29a4a04a75aaba8d/Parser/parser.c#L7817):31
#24 0xe46ffb in group_pattern_rule [cpython3/Parser/parser.c:9388](https://github.com/python/cpython/blob/f46987b8281148503568516c29a4a04a75aaba8d/Parser/parser.c#L9388):24
#25 0xe45437 in closed_pattern_rule [cpython3/Parser/parser.c:8109](https://github.com/python/cpython/blob/f46987b8281148503568516c29a4a04a75aaba8d/Parser/parser.c#L8109):34
#26 0xe43704 in _gather_65_rule [cpython3/Parser/parser.c:29094](https://github.com/python/cpython/blob/f46987b8281148503568516c29a4a04a75aaba8d/Parser/parser.c#L29094):21
#27 0xe43704 in or_pattern_rule [cpython3/Parser/parser.c:7969](https://github.com/python/cpython/blob/f46987b8281148503568516c29a4a04a75aaba8d/Parser/parser.c#L7969):44
#28 0xe40db8 in as_pattern_rule [cpython3/Parser/parser.c:7885](https://github.com/python/cpython/blob/f46987b8281148503568516c29a4a04a75aaba8d/Parser/parser.c#L7885):24
#29 0xe40db8 in pattern_rule [cpython3/Parser/parser.c:7817](https://github.com/python/cpython/blob/f46987b8281148503568516c29a4a04a75aaba8d/Parser/parser.c#L7817):31
#30 0xe41b27 in maybe_star_pattern_rule [cpython3/Parser/parser.c:9654](https://github.com/python/cpython/blob/f46987b8281148503568516c29a4a04a75aaba8d/Parser/parser.c#L9654):28
#31 0xe4075e in open_sequence_pattern_rule [cpython3/Parser/parser.c:9541](https://github.com/python/cpython/blob/f46987b8281148503568516c29a4a04a75aaba8d/Parser/parser.c#L9541):24
#32 0xe40064 in patterns_rule [cpython3/Parser/parser.c:7746](https://github.com/python/cpython/blob/f46987b8281148503568516c29a4a04a75aaba8d/Parser/parser.c#L7746):44
#33 0xe3e8ce in invalid_case_block_rule [cpython3/Parser/parser.c:23514](https://github.com/python/cpython/blob/f46987b8281148503568516c29a4a04a75aaba8d/Parser/parser.c#L23514):29
#34 0xe3e8ce in case_block_rule [cpython3/Parser/parser.c:7617](https://github.com/python/cpython/blob/f46987b8281148503568516c29a4a04a75aaba8d/Parser/parser.c#L7617):39
#35 0xe3e8ce in _loop1_64_rule [cpython3/Parser/parser.c:28954](https://github.com/python/cpython/blob/f46987b8281148503568516c29a4a04a75aaba8d/Parser/parser.c#L28954):31
#36 0xdcb16f in match_stmt_rule [cpython3/Parser/parser.c:7458](https://github.com/python/cpython/blob/f46987b8281148503568516c29a4a04a75aaba8d/Parser/parser.c#L7458):44
#37 0xdc1f10 in compound_stmt_rule [cpython3/Parser/parser.c:2244](https://github.com/python/cpython/blob/f46987b8281148503568516c29a4a04a75aaba8d/Parser/parser.c#L2244):31
#38 0xdc0a62 in statement_rule [cpython3/Parser/parser.c:1405](https://github.com/python/cpython/blob/f46987b8281148503568516c29a4a04a75aaba8d/Parser/parser.c#L1405):18
#39 0xdc0a62 in _loop1_3_rule [cpython3/Parser/parser.c:25234](https://github.com/python/cpython/blob/f46987b8281148503568516c29a4a04a75aaba8d/Parser/parser.c#L25234):30
#40 0xdc0a62 in statements_rule [cpython3/Parser/parser.c:1362](https://github.com/python/cpython/blob/f46987b8281148503568516c29a4a04a75aaba8d/Parser/parser.c#L1362):18
#41 0xdbcccd in file_rule [cpython3/Parser/parser.c:1164](https://github.com/python/cpython/blob/f46987b8281148503568516c29a4a04a75aaba8d/Parser/parser.c#L1164):18
#42 0xdbcccd in _PyPegen_parse [cpython3/Parser/parser.c:41840](https://github.com/python/cpython/blob/f46987b8281148503568516c29a4a04a75aaba8d/Parser/parser.c#L41840):18
#43 0xdb83b5 in _PyPegen_run_parser [cpython3/Parser/pegen.c:857](https://github.com/python/cpython/blob/f46987b8281148503568516c29a4a04a75aaba8d/Parser/pegen.c#L857):9
#44 0xdb8d48 in _PyPegen_run_parser_from_string [cpython3/Parser/pegen.c:965](https://github.com/python/cpython/blob/f46987b8281148503568516c29a4a04a75aaba8d/Parser/pegen.c#L965):14
#45 0xb2e517 in _PyParser_ASTFromString [cpython3/Parser/peg_api.c:13](https://github.com/python/cpython/blob/f46987b8281148503568516c29a4a04a75aaba8d/Parser/peg_api.c#L13):21
#46 0x92ea85 in Py_CompileStringObject [cpython3/Python/pythonrun.c:1437](https://github.com/python/cpython/blob/f46987b8281148503568516c29a4a04a75aaba8d/Python/pythonrun.c#L1437):11
#47 0x92ebf4 in Py_CompileStringExFlags [cpython3/Python/pythonrun.c:1465](https://github.com/python/cpython/blob/f46987b8281148503568516c29a4a04a75aaba8d/Python/pythonrun.c#L1465):10
#48 0x5874d1 in fuzz_pycompile [cpython3/Modules/_xxtestfuzz/fuzzer.c:550](https://github.com/python/cpython/blob/f46987b8281148503568516c29a4a04a75aaba8d/Modules/_xxtestfuzz/fuzzer.c#L550):24
#49 0x5874d1 in _run_fuzz [cpython3/Modules/_xxtestfuzz/fuzzer.c:563](https://github.com/python/cpython/blob/f46987b8281148503568516c29a4a04a75aaba8d/Modules/_xxtestfuzz/fuzzer.c#L563):14
#50 0x5874d1 in LLVMFuzzerTestOneInput [cpython3/Modules/_xxtestfuzz/fuzzer.c:704](https://github.com/python/cpython/blob/f46987b8281148503568516c29a4a04a75aaba8d/Modules/_xxtestfuzz/fuzzer.c#L704):11
#51 0x4588f3 in fuzzer::Fuzzer::ExecuteCallback(unsigned char const*, unsigned long) /src/llvm-project/compiler-rt/lib/fuzzer/FuzzerLoop.cpp:611:15
#52 0x444052 in fuzzer::RunOneTest(fuzzer::Fuzzer*, char const*, unsigned long) /src/llvm-project/compiler-rt/lib/fuzzer/FuzzerDriver.cpp:324:6
#53 0x4498fc in fuzzer::FuzzerDriver(int*, char***, int (*)(unsigned char const*, unsigned long)) /src/llvm-project/compiler-rt/lib/fuzzer/FuzzerDriver.cpp:860:9
#54 0x472e32 in main /src/llvm-project/compiler-rt/lib/fuzzer/FuzzerMain.cpp:20:10
#55 0x79a033c0b082 in __libc_start_main /build/glibc-SzIz7B/glibc-2.31/csu/libc-start.c:308:16
#56 0x43a21d in _start
```
Reproducer (note that the first two bytes are metadata for the fuzzer):
```
00000000: 2020 6d61 7463 6820 793a 0a20 6361 7365 match y:. case
00000010: 2065 2865 3d76 2c76 2c e(e=v,v,
```
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-113607
* gh-113652
* gh-113653
<!-- /gh-linked-prs -->
| 9ed36d533ab8b256f0a589b5be6d7a2fdcf4aff2 | 8ff44f855450244d965dbf82c7f0a31de666007c |
python/cpython | python__cpython-113596 | # We shouldn't be entering invalid executors, it rather invalidates the invalidation.
# Bug report
### Bug description:
`ENTER_EXECUTOR` doesn't check if an executor is invalid. We currently get away with this because only instrumentation invalidates executors, and it cleans up the `ENTER_EXECUTOR`s.
It would be safer if `ENTER_EXECUTOR` checked for invalid executors, and it would reduce the coupling between executors and instrumentation.
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-113596
<!-- /gh-linked-prs -->
| dc8df6e84024b79aa96e85a64f354bf8e827bcba | 5dc79e3d7f26a6a871a89ce3efc9f1bcee7bb447 |
python/cpython | python__cpython-113577 | # reprlib (used by pytest) infers builtin types based on class `__name__`
# Bug report
### Bug description:
Code like the following:
```python
import reprlib
class array:
def __repr__(self):
return "not array.array!"
reprlib.repr(array())
```
raises
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/jpivarski/mambaforge/lib/python3.10/reprlib.py", line 52, in repr
return self.repr1(x, self.maxlevel)
File "/home/jpivarski/mambaforge/lib/python3.10/reprlib.py", line 60, in repr1
return getattr(self, 'repr_' + typename)(x, level)
File "/home/jpivarski/mambaforge/lib/python3.10/reprlib.py", line 86, in repr_array
header = "array('%s', [" % x.typecode
AttributeError: 'array' object has no attribute 'typecode'
```
because reprlib uses `type(x).__name__` to infer that `x` has type `array.array`, rather than my user-defined class:
https://github.com/python/cpython/blob/cf34b7704be4c97d0479c04df0d9cd8fe210e5f4/Lib/reprlib.py#L62-L70
Perhaps there's good reason to check the `__name__` string instead of `isinstance(x, array.array)`: it avoids unnecessarily importing the `array` module, which helps start-up time and keeps `sys.modules` uncluttered. However, this test should check both the `__name__` string and the `__module__` string.
This affects any user-defined classes with the following names:
* tuple
* list
* array
* set
* frozenset
* deque
* dict
* str
* int
* instance
Some of these, admittedly, would be bad names for user-defined classes, but others are more reasonable, such as `array` and `deque`.[^1]
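The dual check can be sketched as follows (`dispatch_key` is a hypothetical helper, not reprlib's actual dispatch code):

```python
import array as stdlib_array

class array:  # a user-defined class that shadows the stdlib name
    def __repr__(self):
        return "not array.array!"

def dispatch_key(x):
    """Hypothetical dual check: dispatch on (module, name), not name alone."""
    cls = type(x)
    return (cls.__module__, cls.__name__)

# the real array.array still matches...
assert dispatch_key(stdlib_array.array("i")) == ("array", "array")
# ...while the user-defined class no longer does, because its module differs
assert dispatch_key(array()) != ("array", "array")
```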
Since these methods are publicly available on class `reprlib.Repr`, the method names can't change, but the lookup could be replaced using a dict like
```python
class Repr:
_lookup = {
("builtins", "tuple"): repr_tuple,
("builtins", "list"): repr_list,
("array", "array"): repr_array,
("builtins", "set"): repr_set,
("builtins", "frozenset"): repr_frozenset,
("collections", "deque"): repr_deque,
("builtins", "dict"): repr_dict,
("builtins", "str"): repr_str,
("builtins", "int"): repr_int,
}
```
-----------------------
I encountered this in pytest. My error output contained
```
x = <[ValueError('the truth value of an array whose length is not 1 is ambiguous; use ak.any() or ak.all()') raised in repr()] array object at 0x779b54763700>
y = array([1, 2, 3])
```
or
```
x = <[AttributeError("'array' object has no attribute 'typecode'") raised in repr()] array object at 0x728278bacd60>
y = array([1, 2, 3])
```
for reasons that had nothing to do with the actual error, and the `array.__repr__` code itself is error-free. (pytest overrides reprlib to provide a SafeRepr.)
Thanks!
[^1]: In my case, I want the `ragged` library to provide a `ragged.array` because it reads like English that way. I also don't want to change the `__name__` of my class to differ from its actual name. In particular, the [Array API specification](https://data-apis.org/array-api/latest/) uses "`array`" as a type name.
### CPython versions tested on:
3.10
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-113577
* gh-125654
* gh-125655
<!-- /gh-linked-prs -->
| 04d6dd23e2d8a3132772cf7ce928676e26313585 | ad3eac1963a5f195ef9b2c1dbb5e44fa3cce4c72 |
python/cpython | python__cpython-113573 | # Display actual calls for unittest.assert_has_calls even if empty
# Feature or enhancement
### Proposal:
Running this:
```python
import unittest
from unittest.mock import Mock, call
class TestCase(unittest.TestCase):
def test_thing(self):
mock = Mock()
mock.assert_has_calls([call(a=1)])
unittest.main()
```
currently gives this test failure message:
```
AssertionError: Calls not found.
Expected: [call(a=1)]
```
This message does not imply that there were no calls. I propose that the test failure message is instead this:
```
AssertionError: Calls not found.
Expected: [call(a=1)]
Actual: []
```
The "Actual" list is already displayed if there are calls. It's only excluded if there are no calls.
---
The logic that causes an empty list not to be displayed is in the [`_calls_repr`](https://github.com/python/cpython/blob/cf34b7704be4c97d0479c04df0d9cd8fe210e5f4/Lib/unittest/mock.py#L1088) method. For the other usages - `assert_called_once_with`, `assert_called_once`, and `assert_not_called` - the number of calls is displayed directly in the error message so it already says that there are no calls. This makes me think the current behaviour of `assert_has_calls` is just an oversight rather than intentional behaviour.
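A hedged sketch of the proposed change: `calls_repr` below is a standalone stand-in for mock's `_calls_repr` (not the real method), with the early return for an empty call list removed so the "Actual" line is always rendered:

```python
def calls_repr(calls, prefix="Calls"):
    """Stand-in for mock's _calls_repr, without the empty-list guard."""
    return f"\n{prefix}: {list(calls)!r}."

# with calls recorded, the behaviour is unchanged...
assert calls_repr(["call(a=1)"], prefix="Actual").startswith("\nActual: ")

# ...and with no calls, the empty list is now shown instead of nothing
assert calls_repr([], prefix="Actual") == "\nActual: []."
```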
I'd be happy to open a PR for this.
### Has this already been discussed elsewhere?
This is a minor feature, which does not need previous discussion elsewhere
### Links to previous discussion of this feature:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-113573
<!-- /gh-linked-prs -->
| 1600d78e2d090319930c6538b496ffcca120a696 | 1ae7ceba29771baf8f2e8d2d4c50a0355cb6b5c8 |
python/cpython | python__cpython-113571 | # pathlib ABCs should not raise auditing events or deprecation warnings
# Bug report
The (private) `pathlib._abc.PathBase` class raises auditing events in its implementations of `glob()`, `rglob()` and `walk()`. These events should only be raised from the concrete implementations of these methods in `pathlib.Path`, and not from the ABC.
Also, several methods raise deprecation warnings. Same deal there.
<!-- gh-linked-prs -->
### Linked PRs
* gh-113571
* gh-113757
<!-- /gh-linked-prs -->
| 3c4e972d6d0945a5401377bed25b307a88b19c75 | 99854ce1701ca4d1a0d153e501a29f9eec306ce5 |
python/cpython | python__cpython-119816 | # configure prefers system install of ncurses over available pkg-config on macOS
# Feature or enhancement
### Proposal:
I have ncurses 6.4 installed locally with corresponding pkg-config files. I'd expect that `configure` will pick up that installation, but it seems to prefer the system installation.
I ended up running `configure` with explicit overrides for the CURSES and PANEL CFLAGS and LIBS variables:
```sh
../configure ... CURSES_CFLAGS="$(pkg-config --cflags ncursesw)" CURSES_LIBS="$(pkg-config --libs ncursesw)" PANEL_CFLAGS="$(pkg-config --cflags panelw)" PANEL_LIBS="$(pkg-config --libs panelw)"
```
configure does pick up other local installs (sqlite, bz2, ...)
### Has this already been discussed elsewhere?
This is a minor feature, which does not need previous discussion elsewhere
### Links to previous discussion of this feature:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-119816
* gh-121202
* gh-121222
<!-- /gh-linked-prs -->
| f80376b129ad947263a6b03a6c3a874e9f8706e6 | bd473aa598c5161521a7018896dc124728214a6c |
python/cpython | python__cpython-113562 | # `help(set.issubset)` does not tell the truth that it could accept any iterable as an argument
# Documentation
The `set` [docs](https://docs.python.org/3.13/library/stdtypes.html#set) tell us:
> Note, the non-operator versions of [union()](https://docs.python.org/3.13/library/stdtypes.html#frozenset.union), [intersection()](https://docs.python.org/3.13/library/stdtypes.html#frozenset.intersection), [difference()](https://docs.python.org/3.13/library/stdtypes.html#frozenset.difference), [symmetric_difference()](https://docs.python.org/3.13/library/stdtypes.html#frozenset.symmetric_difference), [issubset()](https://docs.python.org/3.13/library/stdtypes.html#frozenset.issubset), and [issuperset()](https://docs.python.org/3.13/library/stdtypes.html#frozenset.issuperset) methods **will accept any iterable as an argument**.
But `help(set.issubset)` does not mention this. It only explains that it tests the inclusion relationship between two sets.
```python
>>> help(set.issubset)
Help on method_descriptor:
issubset(...)
Report whether another set contains this set.
# Actually, it could accept an iterable
>>> {'a'}.issubset('abc')
True
```
The same problem comes with the `help(set.issuperset)`.
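A quick check confirming the documented behaviour that the docstrings fail to mention:

```python
# Both non-operator methods accept any iterable, not just sets.
assert {'a'}.issubset('abc')              # string argument
assert {1, 2}.issubset([1, 2, 3])         # list argument
assert set('abc').issuperset('ab')        # issuperset too
assert set('abc').issuperset(iter('ab'))  # even a plain iterator
```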
<!-- gh-linked-prs -->
### Linked PRs
* gh-113562
* gh-114643
* gh-114644
<!-- /gh-linked-prs -->
| 11c582235d86b6020710eff282eeb381a7bf7bb7 | 6a8944acb61d0a2c210ab8066cdcec8602110e2f |
python/cpython | python__cpython-113557 | # pdb CLI argument parsing errors with arguments to module targets
# Bug report
### Bug description:
The `pdb` CLI is no longer correctly forwarding arguments to other tools.
Example reproduction:
- It raises an error `pdb: error: argument pyfile: not allowed with argument -m`
```
vscode ➜ ~ $ docker run -it --rm python:3.13.0a2 python -m pdb -m calendar 1
usage: pdb [-h] [-c command] (-m module | pyfile) [args ...]
pdb: error: argument pyfile: not allowed with argument -m
```
Expected behavior:
- It starts the `pdb` debugger.
```
vscode ➜ ~ $ docker run -it --rm python:3.12 python -m pdb -m calendar 1
> /usr/local/lib/python3.12/calendar.py(1)<module>()
-> """Calendar printing functions
(Pdb)
```
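The failure can be reproduced with a minimal stand-in parser (this is not pdb's actual parser, just the argparse pattern at play): a positional inside a mutually exclusive group conflicts with `-m` as soon as a trailing argument is matched against it.

```python
import argparse

# Minimal stand-in reproducing the failure reported above.
parser = argparse.ArgumentParser(prog='pdb')
group = parser.add_mutually_exclusive_group(required=True)
group.add_argument('-m', dest='module')
group.add_argument('pyfile', nargs='?')
parser.add_argument('args', nargs='*')

ns = parser.parse_args(['-m', 'calendar'])      # fine without args
assert ns.module == 'calendar'

try:
    parser.parse_args(['-m', 'calendar', '1'])  # "1" lands on pyfile
    exited = False
except SystemExit:                              # argparse errors exit
    exited = True
assert exited
```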
### CPython versions tested on:
3.13
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-113557
<!-- /gh-linked-prs -->
| b3e8c78ed7aa9bbd1084375587b99200c687cec9 | 48c0b05cf0dd2db275bd4653f84aa36c22bddcd2 |
python/cpython | python__cpython-113544 | # `MacOSXOSAScript` does not send `webbrowser.open` audit event
# Bug report
All other browsers do this:
- https://github.com/python/cpython/blob/f108468970bf4e70910862476900f924fb701399/Lib/webbrowser.py#L173
- https://github.com/python/cpython/blob/f108468970bf4e70910862476900f924fb701399/Lib/webbrowser.py#L193
- https://github.com/python/cpython/blob/f108468970bf4e70910862476900f924fb701399/Lib/webbrowser.py#L258
- https://github.com/python/cpython/blob/f108468970bf4e70910862476900f924fb701399/Lib/webbrowser.py#L348
- https://github.com/python/cpython/blob/f108468970bf4e70910862476900f924fb701399/Lib/webbrowser.py#L557
But, not `MacOSXOSAScript`: https://github.com/python/cpython/blob/f108468970bf4e70910862476900f924fb701399/Lib/webbrowser.py#L576-L593
I think that this needs to be fixed.
Found this while looking at https://github.com/python/cpython/issues/113539
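The fix amounts to calling `sys.audit("webbrowser.open", url)` in `MacOSXOSAScript.open()`, like the other classes do. For reference, audit hooks observe these events as follows:

```python
import sys

# Tiny demonstration of how an audit hook sees the
# "webbrowser.open" event the other browser classes raise.
events = []

def audit_hook(event, args):
    if event == "webbrowser.open":
        events.append(args[0])

sys.addaudithook(audit_hook)
sys.audit("webbrowser.open", "https://example.com")
assert events == ["https://example.com"]
```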
<!-- gh-linked-prs -->
### Linked PRs
* gh-113544
* gh-113549
* gh-113550
<!-- /gh-linked-prs -->
| fba324154e65b752e42aa59dea287d639935565f | f108468970bf4e70910862476900f924fb701399 |
python/cpython | python__cpython-113561 | # webbrowser.py cannot use a non-standard browser under MacOS
# Bug report
### Bug description:
webbrowser.py reads the environment variable "BROWSER" to find the executable using "_synthesize".
This does not work on macOS, where a browser is opened by either
```zsh
/Applications/Google\ Chrome.app/Contents/MacOS/Google\ Chrome
```
or
```zsh
open -a "Google Chrome"
```
Neither form matches the list of configured browsers.
Workaround is to create a zsh script in the path called 'chrome':
```zsh
#!/bin/zsh
#
open -a "Google Chrome.app" $
```
"_synthesize" needs to be fixed.
### CPython versions tested on:
3.11
### Operating systems tested on:
macOS
<!-- gh-linked-prs -->
### Linked PRs
* gh-113561
<!-- /gh-linked-prs -->
| 25e49841e3c943d5746f2eb57375a7460651d088 | 1a70f66ea856de1b1b0ca47baf9ee8ba6799ae18 |
python/cpython | python__cpython-113690 | # Cannot cleanly shut down an asyncio based server
# Bug report
### Bug description:
When writing an asyncio based service, you basically have this sequence:
1. Create an event loop
2. Register a SIGTERM handler
3. Start your server
4. `loop.run_forever()`
5. SIGTERM causes a `loop.stop()`
6. Close the server
7. Close event loop
If there are any connections active at this point, then they don't get discarded until interpreter shutdown, with the result that you get a bunch of ResourceWarnings (and cleanup code might not run).
It would be very useful if there was a `Server.close_clients()` or something like that. Even a `Server.all_transports()` would be useful, as then you could do something similar as when doing a `Task.cancel()` on what you get from `loop.all_tasks()`.
We could poke at `Server._transports`, but that is something internal that might change in the future.
There is `Server.wait_closed()`, but that hangs until all clients have gracefully disconnected. It doesn't help us when we want to shut down the service *now*.
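A hedged sketch of what "shut down *now*" currently requires, by poking the private `Server._transports` attribute mentioned above (exactly the hack a public `close_clients()` API would make unnecessary):

```python
import asyncio

async def main():
    async def handle(reader, writer):
        await reader.read()      # waits until the client disappears
        writer.close()

    server = await asyncio.start_server(handle, '127.0.0.1', 0)
    addr = server.sockets[0].getsockname()[:2]
    reader, writer = await asyncio.open_connection(*addr)
    await asyncio.sleep(0.1)     # let the server accept the client

    server.close()
    # Force-close any lingering connections instead of waiting for a
    # graceful goodbye; relies on a private attribute today.
    for tr in list(getattr(server, '_transports', [])):
        tr.abort()
    writer.close()               # client side hangs up as well
    await asyncio.wait_for(server.wait_closed(), timeout=5)
    return True

ok = asyncio.run(main())
assert ok
```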
### CPython versions tested on:
3.12
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-113690
* gh-113713
* gh-113714
* gh-114432
* gh-116632
* gh-116784
<!-- /gh-linked-prs -->
| 4681a5271a8598b46021cbc556ac8098ab8a1d81 | 1600d78e2d090319930c6538b496ffcca120a696 |
python/cpython | python__cpython-113582 | # support `str` type parameter in `plistlib.loads`
# Feature or enhancement
### Proposal:
The `plistlib` [loads](https://docs.python.org/3/library/plistlib.html#plistlib.loads) function only accepts `bytes` as input. Since the XML format (as opposed to the binary one) is text, it is natural for user code to pass the XML content as a `str`.
[json.loads](https://docs.python.org/3/library/json.html#json.loads) and [tomllib.loads](https://docs.python.org/3/library/tomllib.html#tomllib.loads) also accept `str` as input.
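A hedged sketch of the requested convenience (`loads_str` is a hypothetical helper, not part of `plistlib`): encode `str` input to UTF-8 and delegate to the existing bytes-only `loads()`:

```python
import plistlib

def loads_str(value):
    # Accept str for the XML format by encoding before delegating.
    if isinstance(value, str):
        value = value.encode('utf-8')
    return plistlib.loads(value)

xml = """<?xml version="1.0" encoding="UTF-8"?>
<plist version="1.0">
<dict><key>answer</key><integer>42</integer></dict>
</plist>"""
assert loads_str(xml) == {'answer': 42}
```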
### Has this already been discussed elsewhere?
No response given
### Links to previous discussion of this feature:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-113582
<!-- /gh-linked-prs -->
| bbf214df23be3ee5daead119e8a2506d810d7d1f | 66f39648154214621d388f519210442d5fce738f |
python/cpython | python__cpython-113542 | # `os.waitid` explicitly excludes macOS
# Feature or enhancement
### Proposal:
Expose `os.waitid` on macOS as well.
As mentioned in https://github.com/python/cpython/issues/55021 the `waitid(2)` function is exposed in `posixmodule.c`, but that code explicitly hides the function on macOS without a clear reason. And in any case, the reason to exclude the function was made for an OS version that we no longer support (10.8 or earlier).
It is better to just expose the function and document issues when they come up (and file issues with Apple as needed).
Filing this as a "feature" because this is IMHO not something we can backport to stable releases.
### Has this already been discussed elsewhere?
This is a minor feature, which does not need previous discussion elsewhere
### Links to previous discussion of this feature:
https://github.com/python/cpython/issues/55021
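For reference, the API being requested for macOS; this snippet runs anywhere `os.waitid` and `os.fork` are already available (e.g. Linux):

```python
import os

status = None
if hasattr(os, 'waitid') and hasattr(os, 'fork'):
    pid = os.fork()
    if pid == 0:          # child: exit immediately with status 7
        os._exit(7)
    # Wait for the child without reaping restrictions of wait()/waitpid().
    info = os.waitid(os.P_PID, pid, os.WEXITED)
    assert info.si_pid == pid
    status = info.si_status
assert status in (None, 7)
```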
<!-- gh-linked-prs -->
### Linked PRs
* gh-113542
<!-- /gh-linked-prs -->
| d0b0e3d2eff30f699c620bc87c4dadd8cd4a77d5 | 5f3cc90a12d6df404fd6f48a0df1334902e271f2 |
python/cpython | python__cpython-113527 | # Deoptimise pathlib ABCs
# Feature or enhancement
`pathlib._abc.PurePathBase` and `PathBase` have a few slots and methods that facilitate fast path object generation:
- `_drv`, `_root`, `_tail_cached`: roughly, the result of `os.path.splitroot()` on the path
- `_str`: the normalized path string
- `_make_child_relpath()`: used when walking directories
- `_from_parsed_parts()`: used in `parent`, `parents`, `with_name()`, `relative_to()`
These carefully-tuned features are useful in `pathlib`, where speed is important, but much less so in `pathlib._abc`, where the clarity of the interfaces and interactions should win out.
It should be possible to move them to `PurePath`, leaving `PurePathBase` with only `_raw_paths` and `_resolving` slots.
This should not affect the behaviour or performance of the public classes in `pathlib` proper.
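An illustrative sketch of the restructuring (slot names taken from the issue; these are not the real classes): the ABC keeps only the two essential slots, and the concrete class layers the caching slots on top.

```python
class PurePathBase:
    __slots__ = ('_raw_paths', '_resolving')

class PurePath(PurePathBase):
    __slots__ = ('_drv', '_root', '_tail_cached', '_str')

p = PurePath()
p._str = '/usr/bin'                  # caching slot lives on the subclass
assert not hasattr(p, '__dict__')    # slots keep instances compact
```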
<!-- gh-linked-prs -->
### Linked PRs
* gh-113527
* gh-113534
* gh-113763
* gh-113776
* gh-113777
* gh-113531
* gh-113530
* gh-113529
* gh-113820
* gh-113532
* gh-113782
* gh-113559
* gh-113788
* gh-113882
* gh-113883
<!-- /gh-linked-prs -->
| a9df076d7d5e113aab4dfd32118a14b62537a8a2 | aef375f56ec93740f0a9b5031c3d2063c553fc12 |
python/cpython | python__cpython-115495 | # Build fails with WASI SDK 21
# Bug report
### Bug description:
Hello!
when executing
`python3 Tools/wasm/wasi.py make-host`
on `ubuntu 22.04` with wasi-ld version: LLD 17.0.6 (tried earlier versions too)
Errors are as follows:
```
/opt/wasi-sdk/bin/wasm-ld -z stack-size=524288 -Wl,--stack-first -Wl,--initial-memory=10485760 Modules/_testimportmultiple.o -o Modules/_testimportmultiple.cpython-313d-wasm32-wasi.so
/opt/wasi-sdk/bin/wasm-ld -z stack-size=524288 -Wl,--stack-first -Wl,--initial-memory=10485760 Modules/_testmultiphase.o -o Modules/_testmultiphase.cpython-313d-wasm32-wasi.so
/opt/wasi-sdk/bin/wasm-ld -z stack-size=524288 -Wl,--stack-first -Wl,--initial-memory=10485760 Modules/_testsinglephase.o -o Modules/_testsinglephase.cpython-313d-wasm32-wasi.so
/opt/wasi-sdk/bin/wasm-ld -z stack-size=524288 -Wl,--stack-first -Wl,--initial-memory=10485760 Modules/xxlimited.o -o Modules/xxlimited.cpython-313d-wasm32-wasi.so
/opt/wasi-sdk/bin/wasm-ld -z stack-size=524288 -Wl,--stack-first -Wl,--initial-memory=10485760 Modules/xxlimited_35.o -o Modules/xxlimited_35.cpython-313d-wasm32-wasi.so
wasm-ld: error: unknown argument: -Wl,--stack-first
wasm-ld: error: unknown argument: -Wl,--stack-first
wasm-ld: error: unknown argument: -Wl,--initial-memory=10485760
wasm-ld: error: unknown argument: -Wl,--initial-memory=10485760
wasm-ld: error: unknown argument: -Wl,--stack-first
wasm-ld: error: unknown argument: -Wl,--initial-memory=10485760
wasm-ld: error: unknown argument: -Wl,--stack-first
wasm-ld: error: unknown argument: -Wl,--initial-memory=10485760
make: *** [Makefile:3134: Modules/_testimportmultiple.cpython-313d-wasm32-wasi.so] Error 1
```
if "-Wl" flag removed then clang compiler fails.
### CPython versions tested on:
3.13
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-115495
* gh-115496
* gh-115497
<!-- /gh-linked-prs -->
| 18343c09856fcac11f813a5a59ffe3a35928c0cc | 468430189d3ebe16f3067279f9be0fe82cdfadf6 |
python/cpython | python__cpython-113680 | # Coverage.py test suite fails on 3.13 (since June?)
# Bug report
### Bug description:
The coverage.py test suite is failing on Python 3.13 when using sys.settrace. Sorry it's taken so long to report. This is blocking me claiming 3.13 support, and running the test suite on nightly builds.
The test suite fails as if there are too many "return" events being traced, or not enough "calls". To see the failure, we can run just two tests from the suite. One fails:
```sh
% git clone https://github.com/nedbat/coveragepy
% cd coveragepy
% git show --oneline --no-patch # Though the specific commit of coveragepy shouldn't matter
75edefcd (HEAD -> master, origin/master, origin/HEAD) build: avoid an installation problem on Windows PyPy
% python3.13 -m venv venv
% source venv/bin/activate
% python3.13 -VV
Python 3.13.0a2 (main, Nov 24 2023, 14:53:47) [Clang 15.0.0 (clang-1500.0.40.1)]
% python3.13 -m pip install -r requirements/pytest.pip
% python3.13 -m pytest tests -vvv -k test_report_precision -n0
```
It should fail with:
```pytb
# Leaving this function, pop the filename stack.
self.cur_file_data, self.cur_file_name, self.last_line, self.started_context = (
> self.data_stack.pop()
)
E IndexError: pop from empty list
/private/tmp/bug/coveragepy/coverage/pytracer.py:263: IndexError
```
Just before that code in pytracer.py is some ugly opcode inspection which could now be wrong, but the symptoms aren't immediately pointing to that.
Matt Wozniski (@godlygeek) bisected CPython and found two commits of interest. This is his commentary:
> 04492cbc9aa45ac2c12d22083c406a0364c39f5b is the bad commit. if you build cpython at that version, the coverage test suite fails with a SIGABRT due to a failed assertion. The first version after that where there's no SIGABRT is 9339d70ac2d45743507e320e81e68acf77e366af - that's the first version where you get the test suite failure we see today. You get the same error as we see today on main if you check out the older one, and then cherry-pick the fix for the assertion failure:
```sh
git checkout 04492cbc9aa45ac2c12d22083c406a0364c39f5b
git cherry-pick --no-commit 9339d70ac2d45743507e320e81e68acf77e366af
```
> And if you instead check out the one that fixes the assertion error, and then revert the one that introduced the assertion error, the coverage test suite passes:
```sh
git checkout 9339d70ac2d45743507e320e81e68acf77e366af
git revert --no-commit 04492cbc9aa45ac2c12d22083c406a0364c39f5b
```
> Unfortunately that doesn't revert cleanly on main, but at this point I'm very convinced 04492cbc9aa45ac2c12d22083c406a0364c39f5b introduces the problem.
> We know a commit where it fails (9339d70), and we know that from that point if you revert another commit (04492cb) it passes. And we know that it happens even with a pure Python tracer, not just with a C extension module, so it's not related to changes in the C API.
### CPython versions tested on:
3.13, CPython main branch
### Operating systems tested on:
macOS
<!-- gh-linked-prs -->
### Linked PRs
* gh-113680
<!-- /gh-linked-prs -->
| 0ae60b66dea5140382190463a676bafe706608f5 | ed6ea3ea79fac68b127c7eb457c7ecb996461010 |
python/cpython | python__cpython-115890 | # Enhance itertools.takewhile() to allow the failed transition element to captured
The current version of `takewhile()` has a problem. The element that first fails the predicate condition is consumed from the iterator and there is no way to access it. This is the premise behind the existing recipe `before_and_after()`.
I propose to extend the current API to allow that element to be captured. This is fully backwards compatible but addresses use cases that need *all* of the data not returned by the takewhile iterator.
**Option 0:**
In pure Python, the new `takewhile()` could look like this:
```py
def takewhile(predicate, iterable, *, transition=None):
# takewhile(lambda x: x<5, [1,4,6,4,1]) --> 1 4
for x in iterable:
if predicate(x):
yield x
else:
if transition is not None: # <-- This is the new part
transition.append(x) # <-- This is the new part
break
```
It could be used like this:
```pycon
>>> input_it = iter([1, 4, 6 ,4, 1])
>>> transition_list = []
>>> takewhile_it = takewhile(lambda x: x<5, input_it, transition=transition_list)
>>> print('Under five:', list(takewhile_it))
Under five: [1, 4]
>>> remainder = chain(transition_list, input_it)
>>> print('Remainder:', list(remainder))
Remainder: [6, 4, 1]
```
The API is a bit funky. While this pattern is [common in C programming](https://www.tutorialspoint.com/c_standard_library/c_function_frexp.htm), I rarely see something like it in Python. This may be the simplest solution for accessing the last value (if any) consumed from the input. The keyword argument `transition` accurately describes a list containing the transition element if there is one, but some other parameter name may be better.
**Option 1:**
We could have a conditional signature that returns two iterators if a flag is set:
```
true_iterator = takewhile(predicate, iterable, remainder=False)
true_iterator, remainder_iterator = takewhile(predicate, iterable, remainder=True)
```
**Option 2:**
Create a completely separate itertool by promoting the `before_and_after()` recipe to be a real itertool:
```
true_iterator, remainder_iterator = before_and_after(predicate, iterable)
```
I don't really like option 2 because it substantially duplicates `takewhile()` leaving a permanent tension between the two.
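For comparison, Option 2 already exists as the `before_and_after()` recipe in the itertools documentation:

```python
from itertools import chain

def before_and_after(predicate, it):
    """Variant of takewhile() that retains the remainder."""
    it = iter(it)
    transition = []

    def true_iterator():
        for elem in it:
            if predicate(elem):
                yield elem
            else:
                transition.append(elem)
                return

    return true_iterator(), chain(transition, it)

before, after = before_and_after(lambda x: x < 5, [1, 4, 6, 4, 1])
before_list = list(before)   # must be consumed before "after"
after_list = list(after)
assert before_list == [1, 4]
assert after_list == [6, 4, 1]
```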
<!-- gh-linked-prs -->
### Linked PRs
* gh-115890
* gh-115910
<!-- /gh-linked-prs -->
| a0a8d9ffe0ddb0f55aeb02801f48e722c2660ed3 | cb287d342139509e03a2dbe5ea2608627fd3a350 |
python/cpython | python__cpython-113469 | # Remove the "_new" suffix from class names in pydocfodder
Previously this file contained two sets of classes: old-style classes and new-style classes. In Python 3 they are the same, and the set of "old-style" classes was removed in #99430. Classes that remain have the "_new" suffix which is useless, because all classes are now "new-style" classes. I'm planning to add more tests here, so the redundant suffix will only get in the way.
<!-- gh-linked-prs -->
### Linked PRs
* gh-113469
* gh-113483
* gh-115300
<!-- /gh-linked-prs -->
| 8a3d0e4a661e6c27e4c17c818ce4187a36579e5f | e87cadc1ce194aae2c076e81298d6e8074f1bb45 |
python/cpython | python__cpython-113465 | # JIT Compilation
It's probably about time to start thinking seriously about just-in-time compilation. To kick things off, here's my talk from this year's sprint about "copy-and-patch", which I think is a really promising path forward: https://youtu.be/HxSHIpEQRjs
I'll also be opening a draft PR for discussion soon.
<!-- gh-linked-prs -->
### Linked PRs
* gh-113465
* gh-114739
* gh-114759
* gh-115800
* gh-118481
* gh-118512
* gh-118545
* gh-119466
* gh-119490
* gh-122839
* gh-133278
* gh-133486
<!-- /gh-linked-prs -->
| a16a9f978f42b8a09297c1efbf33877f6388c403 | f6d9e5926b6138994eaa60d1c36462e36105733d |
python/cpython | python__cpython-114900 | # Incrementing class variable about 2**32 times causes the Python 3.12.1 interpreter to crash
# Crash report
### What happened?
The below example stores an integer counter as a class variable `C.counter`. Incrementing the counter using `C.counter += 1` between 2^32 - 2^24 and 2^32 times makes the Python 3.12.1 interpreter crash (after about 30 minutes).
Tested with Python 3.12.1 on Windows 11.
```python
# Windows 11: Python 3.12.1 (tags/v3.12.1:2305ca5, Dec 7 2023, 22:03:25) [MSC v.1937 64 bit (AMD64)] on win32
# CPython interpreter crashes, last printed value: 4278190080 = 2**32 - 2**24
class C:
counter = 0 # Class variable
while True:
C.counter += 1 # Increment class variable
if C.counter & 0b111111111111111111111111 == 0:
print(C.counter)
```
(In the original application, the class was a wrapper class implementing `__lt__` to count the number of comparisons performed by various algorithms)
### CPython versions tested on:
3.12
### Operating systems tested on:
Windows
### Output from running 'python -VV' on the command line:
Python 3.12.1 (tags/v3.12.1:2305ca5, Dec 7 2023, 22:03:25) [MSC v.1937 64 bit (AMD64)]
<!-- gh-linked-prs -->
### Linked PRs
* gh-114900
<!-- /gh-linked-prs -->
| 992446dd5bd3fff92ea0f8064fb19eebfe105cef | 87cd20a567aca56369010689e22a524bc1f1ac03 |
python/cpython | python__cpython-113457 | # Prepare ssl failed because the file name of LICENSE of openssl changed
While running `PCbuild\prepare_ssl.bat` for Python 3.11.5+, it fails with an MSB3030 error like:
`my_private_directory\Python-3.11.7\PCbuild\openssl.vcxproj(109,5): error MSB3030: Could not copy the file 'my_private_directory\Python-3.11.7\PCbuild\\..\externals\openssl-3.0.11\\LICENSE' because it was not found.`
This is because openssl renamed LICENSE to LICENSE.txt in this [commit](https://github.com/openssl/openssl/commit/036cbb6bbf30955abdcffaf6e52cd926d8d8ee75)
<!-- gh-linked-prs -->
### Linked PRs
* gh-113457
<!-- /gh-linked-prs --> | c5140945c723ae6c4b7ee81ff720ac8ea4b52cfd | 4e704d7847f2333f581f87e31b42e44a471df93a |
python/cpython | python__cpython-113891 | # Invalid "python equivalent" in PyObject_RichCompareBool()
It should be ``bool(o1 op o2)`` instead. Or (better?) the sentence "This is the equivalent of the Python expression ``o1 op o2``, where ``op`` is the operator corresponding to *opid*." could be removed, as the note below shows there is no simple equivalent Python expression:
https://github.com/python/cpython/blob/fc2cb86d210555d509debaeefd370d5331cd9d93/Doc/c-api/object.rst?plain=1#L232-L234
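That note exists because `PyObject_RichCompareBool()` short-circuits on identity before calling the comparison, which no single Python expression reproduces. The difference is observable from pure Python via list membership, which uses `PyObject_RichCompareBool()` under the hood:

```python
# NaN compares unequal to itself at the Python level...
nan = float('nan')
assert nan != nan
# ...but list membership uses PyObject_RichCompareBool(), whose
# identity shortcut reports a match anyway.
assert nan in [nan]
```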
<!-- gh-linked-prs -->
### Linked PRs
* gh-113891
* gh-114637
* gh-114638
<!-- /gh-linked-prs -->
| 926881dc10ebf77069e02e66eea3e0d3ba500fe5 | 7a7bce5a0ab249407e866a1e955d21fa2b0c8506 |
python/cpython | python__cpython-113455 | # Documentation about PyUnicode_AsWideChar() function
In file [_winapi.c](https://github.com/python/cpython/blob/3.11/Modules/_winapi.c#L851)
We can see code:
```c
size = PyUnicode_AsWideChar(key, NULL, 0);
```
Py_ssize_t PyUnicode_AsWideChar(PyObject *unicode, wchar_t *wstr, Py_ssize_t size)
But the doc says "Copies the Unicode object contents into the wchar_t buffer *w*. At most *size* wchar_t characters are copied." I think it misses the situation when *size* is 0.
<!-- gh-linked-prs -->
### Linked PRs
* gh-113455
* gh-115407
* gh-115408
<!-- /gh-linked-prs -->
| 5719aa23ab7f1c7a5f03309ca4044078a98e7b59 | f9f6156c5affc039d4ee6b6f4999daf0d5896428 |
python/cpython | python__cpython-121060 | # Crash of interpreter due to unclosed sub-interpreters
# Crash report
### Reproducer:
```python
import _xxsubinterpreters as interp
interp_id = interp.create()
```
Trace:
```
Fatal Python error: PyInterpreterState_Delete: remaining subinterpreters
Python runtime state: finalizing (tstate=0x00005576a7f343e0)
Aborted
```
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-121060
* gh-121067
<!-- /gh-linked-prs -->
| 4be1f37b20bd51498d3adf8ad603095c0f38d6e5 | 1c13b29d54ad6d7c9e030227d575ad7d21b4054f |
python/cpython | python__cpython-113423 | # multiprocessing logger gets bad %(filename)s
# Bug report
### Bug description:
`util.info` and `util.debug` produce a bad `%(filename)s` in the `multiprocessing` logger.
https://github.com/python/cpython/blob/c3f92f6a7513340dfe2d82bfcd38eb77453e935d/Lib/multiprocessing/pool.py#L116
### Reproducible example
test_logger.py
```python
import logging
import multiprocessing
logger = multiprocessing.get_logger()
logger.setLevel(logging.DEBUG)
handler = logging.StreamHandler()
handler.setFormatter(
logging.Formatter('[%(levelname)s] [%(processName)s] [%(filename)s:%(lineno)d:%(funcName)s] %(message)s'))
logger.addHandler(handler)
def add1(x):
return x + 1
logger.info("print info")
logger.debug("print debug")
with multiprocessing.Pool(3) as p:
result = p.map(add1, range(10))
```
It gets bad `%(filename)s` as follows
```sh
$ python test_logger.py
[INFO] [MainProcess] [test_logger.py:16:<module>] print info
[DEBUG] [MainProcess] [test_logger.py:17:<module>] print debug
[DEBUG] [MainProcess] [util.py:50:debug] created semlock with handle 140232166834176
[DEBUG] [MainProcess] [util.py:50:debug] created semlock with handle 140232166830080
[DEBUG] [MainProcess] [util.py:50:debug] created semlock with handle 140232157552640
[DEBUG] [MainProcess] [util.py:50:debug] created semlock with handle 140232157548544
[DEBUG] [MainProcess] [util.py:50:debug] created semlock with handle 140232157544448
[DEBUG] [MainProcess] [util.py:50:debug] created semlock with handle 140232157540352
[DEBUG] [MainProcess] [util.py:50:debug] added worker
[DEBUG] [MainProcess] [util.py:50:debug] added worker
[INFO] [ForkPoolWorker-1] [util.py:54:info] child process calling self.run()
[INFO] [ForkPoolWorker-2] [util.py:54:info] child process calling self.run()
[DEBUG] [MainProcess] [util.py:50:debug] added worker
[INFO] [ForkPoolWorker-3] [util.py:54:info] child process calling self.run()
...
```
After apply pull request https://github.com/python/cpython/pull/113423
```
$ python test_logger.py
[INFO] [MainProcess] [test_logger.py:16:<module>] print info
[DEBUG] [MainProcess] [test_logger.py:17:<module>] print debug
[DEBUG] [MainProcess] [synchronize.py:67:__init__] created semlock with handle 139963328901120
[DEBUG] [MainProcess] [synchronize.py:67:__init__] created semlock with handle 139963328897024
[DEBUG] [MainProcess] [synchronize.py:67:__init__] created semlock with handle 139963319619584
[DEBUG] [MainProcess] [synchronize.py:67:__init__] created semlock with handle 139963319615488
[DEBUG] [MainProcess] [synchronize.py:67:__init__] created semlock with handle 139963319611392
[DEBUG] [MainProcess] [synchronize.py:67:__init__] created semlock with handle 139963319607296
[DEBUG] [MainProcess] [pool.py:331:_repopulate_pool_static] added worker
[INFO] [ForkPoolWorker-1] [process.py:312:_bootstrap] child process calling self.run()
[DEBUG] [MainProcess] [pool.py:331:_repopulate_pool_static] added worker
[INFO] [ForkPoolWorker-2] [process.py:312:_bootstrap] child process calling self.run()
[DEBUG] [MainProcess] [pool.py:331:_repopulate_pool_static] added worker
[INFO] [ForkPoolWorker-3] [process.py:312:_bootstrap] child process calling self.run()
[DEBUG] [MainProcess] [pool.py:655:terminate] terminating pool
[DEBUG] [MainProcess] [pool.py:684:_terminate_pool] finalizing pool
[DEBUG] [MainProcess] [pool.py:694:_terminate_pool] helping task handler/workers to finish
[DEBUG] [MainProcess] [pool.py:525:_handle_workers] worker handler exiting
[DEBUG] [MainProcess] [pool.py:674:_help_stuff_finish] removing tasks from inqueue until task handler finished
[DEBUG] [MainProcess] [pool.py:557:_handle_tasks] task handler got sentinel
[DEBUG] [MainProcess] [pool.py:561:_handle_tasks] task handler sending sentinel to result handler
[DEBUG] [MainProcess] [pool.py:565:_handle_tasks] task handler sending sentinel to workers
[DEBUG] [MainProcess] [pool.py:590:_handle_results] result handler got sentinel
[DEBUG] [ForkPoolWorker-2] [pool.py:120:worker] worker got sentinel -- exiting
[DEBUG] [MainProcess] [pool.py:618:_handle_results] ensuring that outqueue is not full
[DEBUG] [ForkPoolWorker-2] [pool.py:140:worker] worker exiting after 3 tasks
...
```
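As a hedged illustration of the mechanism such a fix relies on: `stacklevel` tells logging how many wrapper frames to skip when computing `%(filename)s` / `%(funcName)s` / `%(lineno)d` for a record, so records raised through a helper can be attributed to the helper's caller:

```python
import logging

records = []

class Capture(logging.Handler):
    # Collect emitted records so their attributes can be inspected.
    def emit(self, record):
        records.append(record)

log = logging.getLogger('stacklevel-demo')
log.addHandler(Capture())
log.setLevel(logging.DEBUG)

def info_wrapper(msg):
    log.info(msg, stacklevel=2)   # attribute the record to *our* caller

def caller():
    info_wrapper('hello')

caller()
assert records[0].funcName == 'caller'   # not 'info_wrapper'
```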
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-113423
* gh-113450
* gh-113451
<!-- /gh-linked-prs -->
| ce77ee50358c0668eda5078f50b38f0770a370ab | 08398631a0298dcf785ee7bd0e26c7844823ce59 |
python/cpython | python__cpython-113408 | # Importing unittest.mock fails when CPython is built without docstrings
```
$ ./python -c 'import unittest.mock'
Traceback (most recent call last):
File "<string>", line 1, in <module>
import unittest.mock
File "/home/serhiy/py/cpython/Lib/unittest/mock.py", line 2233, in <module>
_CODE_SIG = inspect.signature(partial(CodeType.__init__, None))
~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/serhiy/py/cpython/Lib/inspect.py", line 3378, in signature
return Signature.from_callable(obj, follow_wrapped=follow_wrapped,
~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
globals=globals, locals=locals, eval_str=eval_str)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/serhiy/py/cpython/Lib/inspect.py", line 3108, in from_callable
return _signature_from_callable(obj, sigcls=cls,
~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^
follow_wrapper_chains=follow_wrapped,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
globals=globals, locals=locals, eval_str=eval_str)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/serhiy/py/cpython/Lib/inspect.py", line 2603, in _signature_from_callable
wrapped_sig = _get_signature_of(obj.func)
~~~~~~~~~~~~~~~~~^^^^^^^^^^
File "/home/serhiy/py/cpython/Lib/inspect.py", line 2599, in _signature_from_callable
return _signature_from_builtin(sigcls, obj,
~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^
skip_bound_arg=skip_bound_arg)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/serhiy/py/cpython/Lib/inspect.py", line 2398, in _signature_from_builtin
raise ValueError("no signature found for builtin {!r}".format(func))
ValueError: no signature found for builtin <slot wrapper '__init__' of 'object' objects>
```
<!-- gh-linked-prs -->
### Linked PRs
* gh-113408
* gh-113454
<!-- /gh-linked-prs -->
| 0c574540e07792cef5487aef61ab38bfe404060f | 0d74e9683b8567df933e415abf747d9e0b4cd7ef |
python/cpython | python__cpython-113420 | # Outdated PyObject_HasAttr documentation (it no longer behaves like builtin `hasattr`)
# Documentation
The documentation says
https://github.com/python/cpython/blob/c31943af16f885c8cf5d5a690c25c366afdb2862/Doc/c-api/object.rst#L50-L52
but since https://github.com/python/cpython/issues/53875 this is no longer equivalent to ``hasattr(o, attr_name)`` but, if I understood correctly, to something like:
```py
try:
return hasattr(o, attr_name)
except:
return False
```
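The divergence is easy to observe from Python: since Python 3, `hasattr()` only swallows `AttributeError`, whereas the documented "equivalent" would suggest every exception is suppressed:

```python
class X:
    @property
    def boom(self):
        raise KeyError('boom')   # not an AttributeError

try:
    hasattr(X(), 'boom')
    raised = False
except KeyError:
    raised = True
assert raised   # hasattr() lets the KeyError escape
```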
<!-- gh-linked-prs -->
### Linked PRs
* gh-113420
<!-- /gh-linked-prs -->
| 61dd77b04ea205f492498d30637f2d516d8e2a8b | a03ec20bcdf757138557127689405b3a525b3c44 |
python/cpython | python__cpython-113390 | # test_tools.test_freeze broken on Mac framework build–probably should be skipped
# Bug report
### Bug description:
I've been messing around with the Mac framework build and encountered a failing `test_tools.test_freeze`. It runs `Tools/freeze/freeze.py` which has this buried in it:
```
if sys.platform == "darwin" and sysconfig.get_config_var("PYTHONFRAMEWORK"):
print(f"{sys.argv[0]} cannot be used with framework builds of Python", file=sys.stderr)
```
That seems like the wrong way to skip the test, but I suppose it's useful if run from the command line. Based on a comment @ned-deily made in another issue, that might be better expressed as:
```
if sys.platform == "darwin" and sys._framework:
print(f"{sys.argv[0]} cannot be used with framework builds of Python",
file=sys.stderr)
```
That said, `test_tools/test_freeze.py` should perhaps skip the test on Darwin framework builds with something like this:
```
diff --git a/Lib/test/test_tools/test_freeze.py b/Lib/test/test_tools/test_freeze.py
index 671ec2961e..4d3f7e1a76 100644
--- a/Lib/test/test_tools/test_freeze.py
+++ b/Lib/test/test_tools/test_freeze.py
@@ -14,6 +14,8 @@
 @support.requires_zlib()
 @unittest.skipIf(sys.platform.startswith('win'), 'not supported on Windows')
+@unittest.skipIf(sys.platform == "darwin" and sys._framework,
+                 'not supported on Mac framework builds')
 @support.skip_if_buildbot('not all buildbots have enough space')
 # gh-103053: Skip test if Python is built with Profile Guided Optimization
 # (PGO), since the test is just too slow in this case.
```
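For reference, the proposed condition can be probed on any platform (a sketch; `sys._framework` is an empty string outside macOS framework builds, and the `getattr` guards against the attribute being absent entirely):

```python
import sys

# True only on a macOS framework build of Python
is_framework_build = (
    sys.platform == "darwin" and bool(getattr(sys, "_framework", ""))
)
print(type(is_framework_build).__name__)
```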
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
macOS
<!-- gh-linked-prs -->
### Linked PRs
* gh-113390
* gh-113395
* gh-113396
<!-- /gh-linked-prs -->
| bee627c1e29a070562d1a540a6e513d0daa322f5 | c31943af16f885c8cf5d5a690c25c366afdb2862 |
python/cpython | python__cpython-113371 | # Makefile.pre.in missing some mimalloc dependencies
# Bug report
The dependencies list for `obmalloc.o` is missing some files like `Objects/mimalloc/prim/unix/prim.c`, so modifying those files doesn't rebuild `obmalloc.o`.
https://github.com/python/cpython/blob/5f7a80fd02158d9c655eff4202498f5cab9b2ca4/Makefile.pre.in#L1566-L1580
<!-- gh-linked-prs -->
### Linked PRs
* gh-113371
<!-- /gh-linked-prs -->
| 31d8757b6070010b0fb92a989b1812ecf303059f | 61e818409567ce452af60605937cdedf582f6293 |
python/cpython | python__cpython-113359 | # Exception with broken __getattr__ causes error rendering tracebacks
# Bug report
### Bug description:
Running this code:
```python
import logging

class X(Exception):
    def __getattr__(self, a):
        raise KeyError(a)  # !!!

try:
    raise X()
except X:
    logging.getLogger().exception('')
```
does not print "ok" but instead:
```console
# python3.11 repro.py
--- Logging error ---
Traceback (most recent call last):
File "/workspaces/cpython/repro.py", line 8, in <module>
raise X()
object address : 0x7f3007c67e80
object refcount : 2
object type : 0x7f3015bf0b20
object type name: KeyError
object repr : KeyError('__notes__')
lost sys.stderr
```
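The underlying mechanism can be seen without logging at all: the traceback machinery probes attributes such as `__notes__` with three-argument `getattr()`, whose default only suppresses `AttributeError`, so the exception raised by the broken `__getattr__` leaks through. A minimal sketch:

```python
class X(Exception):
    def __getattr__(self, a):
        raise KeyError(a)

e = X()
leaked = None
try:
    # getattr with a default only swallows AttributeError,
    # so the KeyError raised by __getattr__ escapes
    getattr(e, "__notes__", None)
except KeyError as err:
    leaked = err
print(type(leaked).__name__)  # KeyError
```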
### CPython versions tested on:
3.11, CPython main branch
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-113359
* gh-114105
* gh-114109
* gh-114118
* gh-114173
* gh-114379
<!-- /gh-linked-prs -->
| 04fabe22dd98b4d87f672254b743fbadd5206352 | 17b73ab99ef12f89d41acec7500a244e68b1aaa4 |
python/cpython | python__cpython-113342 | # mmap(2) should check whether return value is MAP_FAILED instead of NULL
According to the [man page](https://man7.org/linux/man-pages/man2/mmap.2.html#RETURN_VALUE), mmap(2) returns MAP_FAILED (-1) on error instead of NULL. by @namhyung
<!-- gh-linked-prs -->
### Linked PRs
* gh-113342
* gh-113374
<!-- /gh-linked-prs -->
| 6b70c3dc5ab2f290fcdbe474bcb7d6fdf29eae4c | 5f7a80fd02158d9c655eff4202498f5cab9b2ca4 |
python/cpython | python__cpython-113341 | # Argument Clinic: remove the 'version' directive
# Feature or enhancement
The `version` directive was added in #63929 (Nov 2013), to make it possible for a programmer to require a specific version of Argument Clinic. However, since the "version number" has been hard-coded to "1" since Nov 2013, the `version` directive has not been used in the CPython code base, and since Argument Clinic is an _internal tool_, I suggest we remove this feature.
<!-- gh-linked-prs -->
### Linked PRs
* gh-113341
<!-- /gh-linked-prs -->
| fae096cd4b7025f91473546ca6a243c86374dd6a | 4b90b5d857ba88c7d85483e188a73323df9ba05c |
python/cpython | python__cpython-113333 | # _ssl.c calls SSL_CTX_get_verify_callback unnecessarily
# Bug report
### Bug description:
I'll upload a PR shortly for a small bit of cleanup. The PR template wants a GitHub issue, so this is that.
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-113333
<!-- /gh-linked-prs -->
| af2b8f6845e31dd6ab3bb0bac41b19a0e023fd61 | 2b53c767de0a7afd29598a87da084d0e125e1c34 |
python/cpython | python__cpython-113331 | # Makefile does not capture mimalloc header dependencies
# Bug report
Modifying `pycore_mimalloc.h` or any of the `mimalloc/*.h` files does not trigger recompilation of C files as one would expect. The problem is that we should use `$(MIMALLOC_HEADERS)` instead of `@MIMALLOC_HEADERS@` in Makefile.pre.in.
https://github.com/python/cpython/blob/713e42822f5c74d8420f641968c3bef4b10a513a/Makefile.pre.in#L1798-L1802
<!-- gh-linked-prs -->
### Linked PRs
* gh-113331
<!-- /gh-linked-prs -->
| a3e8afe0a3b5868440501edf579d1d4711c0fb18 | 713e42822f5c74d8420f641968c3bef4b10a513a |
python/cpython | python__cpython-113368 | # Unnecessary call of `print` in `test_symtable`?
# Bug report
### Bug description:
This line: https://github.com/python/cpython/blob/11ee912327ef51100d2a6b990249f25b6b1b435d/Lib/test/test_symtable.py#L340
This looks strange to me. Does this output really need to be part of the test run?
<details>
<summary> Test output </summary>
```python
./python -m test -v test_symtable
== CPython 3.13.0a2+ (heads/main:e1117cb886, Dec 19 2023, 23:25:01) [GCC 9.4.0]
== Linux-5.10.16.3-microsoft-standard-WSL2-x86_64-with-glibc2.31 little-endian
== Python build: debug
== cwd: /home/eclips4/CLionProjects/cpython/build/test_python_worker_6126
== CPU count: 16
== encodings: locale=UTF-8 FS=utf-8
== resources: all test resources are disabled, use -u option to unskip tests
Using random seed: 1554245783
0:00:00 load avg: 0.18 Run 1 test sequentially
0:00:00 load avg: 0.18 [1/1] test_symtable
test_file (test.test_symtable.CommandLineTest.test_file) ... ok
test_stdin (test.test_symtable.CommandLineTest.test_stdin) ... symbol table for module from file '<stdin>':
local symbol 'sys': def_import
local symbol 'glob': def_local
local symbol 'some_var': def_local
local symbol 'some_non_assigned_global_var': def_local
global_explicit symbol 'some_assigned_global_var': def_global, def_local
local symbol 'Mine': def_local
local symbol 'spam': def_local
global_explicit symbol 'bar': def_global
local symbol 'foo': def_local
local symbol 'namespace_test': def_local
local symbol 'Alias': def_local
local symbol 'GenericAlias': def_local
local symbol 'generic_spam': def_local
local symbol 'GenericMine': def_local
symbol table for class 'Mine':
local symbol 'instance_var': def_local
local symbol 'a_method': def_local
symbol table for function 'a_method':
local symbol 'p1': def_param
local symbol 'p2': def_param
symbol table for function 'spam':
local symbol 'a': def_param
local symbol 'b': def_param
local symbol 'var': def_param
local symbol 'kw': def_param
global_explicit symbol 'bar': def_global, def_local
global_explicit symbol 'some_assigned_global_var': def_global, def_local
cell symbol 'some_var': def_local
cell symbol 'x': def_local
global_implicit symbol 'glob': use
local symbol 'internal': use, def_local
local symbol 'other_internal': def_local
symbol table for nested function 'internal':
free symbol 'x': use
symbol table for nested function 'other_internal':
free symbol 'some_var': use, def_nonlocal, def_local
symbol table for function 'foo':
symbol table for function 'namespace_test':
symbol table for function 'namespace_test':
symbol table for type alias 'Alias':
global_implicit symbol 'int': use
symbol table for type parameter 'GenericAlias':
cell symbol 'T': def_local
symbol table for nested type alias 'GenericAlias':
global_implicit symbol 'list': use
free symbol 'T': use
symbol table for type parameter 'generic_spam':
local symbol '.defaults': def_param
local symbol 'T': def_local
symbol table for nested function 'generic_spam':
local symbol 'a': def_param
symbol table for type parameter 'GenericMine':
cell symbol '.type_params': use, def_local
local symbol '.generic_base': use, def_local
local symbol 'T': def_local
symbol table for nested TypeVar bound 'T':
global_implicit symbol 'int': use
symbol table for nested class 'GenericMine':
local symbol '__type_params__': def_local
free symbol '.type_params': use
ok
test_annotated (test.test_symtable.SymtableTest.test_annotated) ... ok
test_assigned (test.test_symtable.SymtableTest.test_assigned) ... ok
test_bytes (test.test_symtable.SymtableTest.test_bytes) ... ok
test_children (test.test_symtable.SymtableTest.test_children) ... ok
test_class_info (test.test_symtable.SymtableTest.test_class_info) ... ok
test_eval (test.test_symtable.SymtableTest.test_eval) ... ok
test_exec (test.test_symtable.SymtableTest.test_exec) ... ok
test_filename_correct (test.test_symtable.SymtableTest.test_filename_correct) ... ok
test_free (test.test_symtable.SymtableTest.test_free) ... ok
test_function_info (test.test_symtable.SymtableTest.test_function_info) ... ok
test_globals (test.test_symtable.SymtableTest.test_globals) ... ok
test_id (test.test_symtable.SymtableTest.test_id) ... ok
test_imported (test.test_symtable.SymtableTest.test_imported) ... ok
test_lineno (test.test_symtable.SymtableTest.test_lineno) ... ok
test_local (test.test_symtable.SymtableTest.test_local) ... ok
test_name (test.test_symtable.SymtableTest.test_name) ... ok
test_namespaces (test.test_symtable.SymtableTest.test_namespaces) ... ok
test_nested (test.test_symtable.SymtableTest.test_nested) ... ok
test_nonlocal (test.test_symtable.SymtableTest.test_nonlocal) ... ok
test_optimized (test.test_symtable.SymtableTest.test_optimized) ... ok
test_parameters (test.test_symtable.SymtableTest.test_parameters) ... ok
test_referenced (test.test_symtable.SymtableTest.test_referenced) ... ok
test_single (test.test_symtable.SymtableTest.test_single) ... ok
test_symbol_lookup (test.test_symtable.SymtableTest.test_symbol_lookup) ... ok
test_symbol_repr (test.test_symtable.SymtableTest.test_symbol_repr) ... ok
test_symtable_entry_repr (test.test_symtable.SymtableTest.test_symtable_entry_repr) ... ok
test_symtable_repr (test.test_symtable.SymtableTest.test_symtable_repr) ... ok
test_type (test.test_symtable.SymtableTest.test_type) ... ok
----------------------------------------------------------------------
Ran 30 tests in 0.012s
OK
== Tests result: SUCCESS ==
1 test OK.
Total duration: 75 ms
Total tests: run=30
Total test files: run=1/1
Result: SUCCESS
```
</details>
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-113368
<!-- /gh-linked-prs -->
| 5f7a80fd02158d9c655eff4202498f5cab9b2ca4 | df1eec3dae3b1eddff819fd70f58b03b3fbd0eda |
python/cpython | python__cpython-113401 | # Should typing be calling getattr on arbitrary classes?
# Bug report
### Bug description:
I was updating my CI to support python3.12 and ran into the following error that isn't present in the 3.12 builds https://github.com/Gobot1234/steam.py/actions/runs/7276342755/job/19825996032
MRE:
```python
from enum import IntEnum
from typing import Protocol

class classproperty:
    def __init__(self, func):
        self.__func__ = func

    def __get__(self, instance, type):
        return self.__func__(type)

class _CommentThreadType(IntEnum):
    WorkshopAccountDeveloper = 2

class Commentable(Protocol):
    @classproperty
    def _COMMENTABLE_TYPE(cls):
        return _CommentThreadType[cls.__name__]
```
Traceback on `main`:
```pytb
Running Debug|x64 interpreter...
Traceback (most recent call last):
File "C:\Users\alexw\coding\test_protocol.py", line 14, in <module>
class Commentable(Protocol):
...<2 lines>...
return _CommentThreadType[cls.__name__]
File "C:\Users\alexw\coding\cpython\Lib\typing.py", line 1838, in __init__
cls.__callable_proto_members_only__ = all(
~~~^
callable(getattr(cls, attr, None)) for attr in cls.__protocol_attrs__
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "C:\Users\alexw\coding\cpython\Lib\typing.py", line 1839, in <genexpr>
callable(getattr(cls, attr, None)) for attr in cls.__protocol_attrs__
~~~~~~~^^^^^^^^^^^^^^^^^
File "C:\Users\alexw\coding\test_protocol.py", line 9, in __get__
return self.__func__(type)
~~~~~~~~~~~~~^^^^^^
File "C:\Users\alexw\coding\test_protocol.py", line 17, in _COMMENTABLE_TYPE
return _CommentThreadType[cls.__name__]
~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^
File "C:\Users\alexw\coding\cpython\Lib\enum.py", line 763, in __getitem__
return cls._member_map_[name]
~~~~~~~~~~~~~~~~^^^^^^
KeyError: 'Commentable'
```
Should typing be more careful about accessing random attributes on classes to prevent these kinds of errors in the future?
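One possible mitigation (a sketch of the general technique, not necessarily what `typing` ended up doing) is `inspect.getattr_static()`, which fetches attributes without invoking descriptor `__get__` methods:

```python
import inspect

class classproperty:
    def __init__(self, func):
        self.__func__ = func

    def __get__(self, instance, owner):
        return self.__func__(owner)

class C:
    @classproperty
    def prop(cls):
        raise KeyError(cls.__name__)

# plain getattr invokes the descriptor, so the KeyError escapes
try:
    getattr(C, "prop")
    triggered = False
except KeyError:
    triggered = True

# getattr_static returns the raw descriptor object without calling __get__
static = inspect.getattr_static(C, "prop")
print(triggered, type(static).__name__)  # True classproperty
```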
### CPython versions tested on:
3.12
### Operating systems tested on:
macOS
<!-- gh-linked-prs -->
### Linked PRs
* gh-113401
* gh-113722
<!-- /gh-linked-prs -->
| ed6ea3ea79fac68b127c7eb457c7ecb996461010 | fcb3c2a444709d2a53faa20c5b43541674064018 |
python/cpython | python__cpython-113402 | # Argument Clinic: split out global stateless helpers and constants from clinic.py
# Feature or enhancement
`clinic.py` has a lot of globals in various forms. Getting rid of the globals will make it possible to split out `clinic.py` in multiple files (https://github.com/python/cpython/issues/113299). Combined, this will impact readability and maintainability of Argument Clinic in a positive way.
Most globals can be easily dealt with. We can structure them into two groups:
- global constants that are not mutated
- stateless helper functions and classes
It can make sense to split out some of these into separate files (for example a libclinic package, as suggested in #113309). For some globals, it makes more sense to embed them in one of the existing classes (for example, the global `version` variable clearly belongs in `DSLParser`).
Suggesting to start with the following:
- move stateless text accumulator helpers into a `Tools/clinic/libclinic/accumulators.py` file
- move stateless text formatting helpers into a `Tools/clinic/libclinic/formatters.py` file
- move `version` into `DSLParser` and version helper into a `Tools/clinic/libclinic/version.py` file
<!-- gh-linked-prs -->
### Linked PRs
* gh-113402
* gh-113413
* gh-113414
* gh-113438
* gh-113525
* gh-113986
* gh-114066
* gh-114319
* gh-114324
* gh-114330
* gh-114752
* gh-115369
* gh-115370
* gh-115371
* gh-115510
* gh-115513
* gh-115517
* gh-115518
* gh-115520
* gh-115522
* gh-116770
* gh-116807
* gh-116819
* gh-116821
* gh-116836
* gh-116853
* gh-117315
* gh-117451
* gh-117455
* gh-117513
* gh-117533
* gh-117624
* gh-117626
* gh-117707
<!-- /gh-linked-prs -->
| 9c3ddf31a34295ebcef6dc49b9e0ddd75d0ea9f1 | d1c711e757213b38cff177ba4b4d8ae201da1d21 |
python/cpython | python__cpython-113377 | # Documentation for object.__getitem__ is over-specified.
# Bug report
### Bug description:
`collections.deque` is a sequence but doesn't implement the full sequence protocol (doesn't support slices).
`collections.deque` is a sequence:
```py
from collections import deque
from collections.abc import Sequence
assert issubclass(deque, Sequence) # No AssertionError
```
The [docs on `collections.abc.Sequence`](https://docs.python.org/3/library/collections.abc.html#collections.abc.Sequence) say this is an ABC for sequences, linking to the [docs glossary entry on sequences](https://docs.python.org/3/glossary.html#term-sequence):
> ABCs for read-only and mutable [sequences](https://docs.python.org/3/glossary.html#term-sequence).
In turn, the glossary entry says sequences must implement `__getitem__`:
> An [iterable](https://docs.python.org/3/glossary.html#term-iterable) which supports efficient element access using integer indices via the [`__getitem__()`](https://docs.python.org/3/reference/datamodel.html#object.__getitem__) special method [...]
Finally, the `__getitem__` docs say that sequences must accept integers and slice objects:
> [...] For [sequence](https://docs.python.org/3/glossary.html#term-sequence) types, the accepted keys should be integers and slice objects.[...]
(Excerpts taken from the Python 3.12.1 documentation on 20/12/2023.)
I see one of three resolutions:
1. `issubclass(deque, Sequence)` returns `False`
2. we implement slicing for `deque`
3. change the docs on `__getitem__` to say “For sequence types, the accepted keys should be integers (and optionally slice objects).”
I don't know which one is the “best”.
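The gap between the glossary wording and `deque` is easy to observe: integer indexing works, while slicing raises `TypeError`.

```python
from collections import deque

d = deque([1, 2, 3])
print(d[1])  # integer indexing is supported

try:
    d[0:2]  # slicing is not
    slice_ok = True
except TypeError:
    slice_ok = False
print(slice_ok)  # False
```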
### CPython versions tested on:
3.8, 3.12
### Operating systems tested on:
macOS
<!-- gh-linked-prs -->
### Linked PRs
* gh-113377
* gh-113382
* gh-113383
<!-- /gh-linked-prs -->
| 6a5b4736e548fc5827c3bcf1a14193f77e1a989d | 31d8757b6070010b0fb92a989b1812ecf303059f |
python/cpython | python__cpython-115934 | # `uuid` has a bunch of deprecated functions: let's decide what to do with them
# Feature or enhancement
`uuid.py` has several protected deprecated functions which are no longer in use:
https://github.com/python/cpython/blob/4afa7be32da32fac2a2bcde4b881db174e81240c/Lib/uuid.py#L591-L594
https://github.com/python/cpython/blob/4afa7be32da32fac2a2bcde4b881db174e81240c/Lib/uuid.py#L567-L575
And one unused module-level var:
https://github.com/python/cpython/blob/4afa7be32da32fac2a2bcde4b881db174e81240c/Lib/uuid.py#L583
The problem is that they were never deprecated with a runtime warning, only in the docs.
Still, they have been documented as deprecated since 2020.
This has a big history:
- https://github.com/python/cpython/issues/72196
- https://github.com/python/cpython/issues/84681 and https://github.com/python/cpython/pull/19948
- https://github.com/python/cpython/pull/3796
Some projects in the wild use `_load_system_functions`, despite the fact it is deprecated and was never documented and always was protected.
Examples:
- https://github.com/spulec/freezegun/blob/5e06dff53244992204706adf6907e7138ac96d39/freezegun/api.py#L71
- https://github.com/pganssle/time-machine/blob/ccc59927b4c51e35895c34578cba7aca69f28055/src/time_machine/__init__.py#L211
- https://github.com/adamchainz/time-machine/blob/39e327e18d52ebc76d3973b465ff42e139fd651d/src/time_machine/__init__.py#L223
So, what should we do?
1. Add a proper warning, schedule it for removal in two versions
2. Just remove them
I think that `1.` is safer.
I would like to work on it after the decision is made.
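A minimal sketch of option 1 (the message and two-release schedule here are illustrative, not the actual patch):

```python
import warnings

def _load_system_functions():
    # sketch: keep the function as a no-op but make the deprecation loud,
    # scheduling removal for two releases later
    warnings.warn(
        "uuid._load_system_functions() is deprecated and slated for removal",
        DeprecationWarning,
        stacklevel=2,
    )

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    _load_system_functions()
print(caught[0].category.__name__)  # DeprecationWarning
```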
<!-- gh-linked-prs -->
### Linked PRs
* gh-115934
* gh-117832
<!-- /gh-linked-prs -->
| 61f576a5eff7cc1a7c163fbed491c1e75e0bd025 | 66fb613d90fe3dea32130a5937963a9362c8a59e |
python/cpython | python__cpython-113327 | # Segmentation fault with `compile`
# Crash report
### What happened?
### Minimal reproducible example
#### tests.py
```python
def mocks(self, search_rv=None, mock_internal_search=False):
    with mock.patch.object(
        TestResourceController, "_search" if mock_internal_search else "search", return_value=search_rv
    ) as mock_search, mock.patch.object(TestResourceController, "_count") as mock_count, mock.patch.object(
        TestResourceController, "_before_create"
    ) as mock_before_create, mock.patch.object(
        TestResourceController, "_create"
    ) as mock_create, mock.patch.object(
        TestResourceController, "_after_create"
    ) as mock_after_create, mock.patch.object(
        TestResourceController, "_before_update"
    ) as mock_before_update, mock.patch.object(
        TestResourceController, "_update"
    ) as mock_update, mock.patch.object(
        TestResourceController, "_after_update"
    ) as mock_after_update, mock.patch.object(
        TestResourceController, "_before_delete"
    ) as mock_before_delete, mock.patch.object(
        TestResourceController, "_delete"
    ) as mock_delete, mock.patch.object(
        TestResourceController, "_after_delete"
    ) as mock_after_delete, mock.patch.object(
        TestResourceController, "validate"
    ) as mock_validate, mock.patch.object(
        TestSourceManager, "search", side_effect=self.mock_source_search_func()
    ) as mock_source_search, mock.patch.object(
        TestSourceManager, "before_create"
    ) as mock_source_before_create, mock.patch.object(
        TestSourceManager, "after_create"
    ) as mock_source_after_create, mock.patch.object(
        TestSourceManager, "before_update"
    ) as mock_source_before_update, mock.patch.object(
        TestSourceManager, "after_update"
    ) as mock_source_after_update, mock.patch.object(
        TestSourceManager, "before_delete"
    ) as mock_source_before_delete, mock.patch.object(
        TestSourceManager, "after_delete"
    ) as mock_source_after_delete, mock.patch.object(
        TestSourceManager2, "search", side_effect=self.mock_source_search_func()
    ) as mock_source2_search:
        yield 1
```
#### crash.py
```python
with open("tests.py") as f:
    source = f.read()

compile(source, "tests.py", "exec")
```
#### Segmentation fault
```zsh
shadchin@i113735019 ~ % python3.12 crash.py
zsh: segmentation fault python3.12 crash.py
```
```
Thread 0 Crashed:: Dispatch queue: com.apple.main-thread
0 Python 0x1047dc148 _PyCfg_OptimizeCodeUnit + 348
1 Python 0x1047c1eb0 optimize_and_assemble + 252
2 Python 0x1047c9298 compiler_function + 584
3 Python 0x1047c2a30 compiler_body + 208
4 Python 0x1047c0c70 compiler_codegen + 148
5 Python 0x1047c02d8 _PyAST_Compile + 44
6 Python 0x10480c81c Py_CompileStringObject + 112
7 Python 0x1047a881c builtin_compile + 508
8 Python 0x104711e34 cfunction_vectorcall_FASTCALL_KEYWORDS + 92
9 Python 0x1047b7820 _PyEval_EvalFrameDefault + 43368
10 Python 0x1047acc48 PyEval_EvalCode + 184
11 Python 0x10480e4d0 run_eval_code_obj + 88
12 Python 0x10480c6a8 run_mod + 132
13 Python 0x10480bbb0 pyrun_file + 148
14 Python 0x10480af8c _PyRun_SimpleFileObject + 288
15 Python 0x10480ac64 _PyRun_AnyFileObject + 232
16 Python 0x10482fa14 pymain_run_file_obj + 220
17 Python 0x10482f674 pymain_run_file + 72
18 Python 0x10482ee0c Py_RunMain + 860
19 Python 0x10482ef50 Py_BytesMain + 40
20 dyld 0x195463f28 start + 2236
Thread 0 crashed with ARM Thread State (64-bit):
x0: 0x00000001042e4500 x1: 0x00000000000000b0 x2: 0x0000000000000000 x3: 0x000000000000003f
x4: 0x0000000000000020 x5: 0x000000000000003e x6: 0x0000000000000000 x7: 0x000000000000001c
x8: 0x000000000000000c x9: 0x00000000042d26f0 x10: 0x0000000000000660 x11: 0x00000001048cf440
x12: 0x0000000000000028 x13: 0x0000000000000001 x14: 0x0000000104aff8c0 x15: 0x0000000000000001
x16: 0x0000000000000008 x17: 0x0000600000acc090 x18: 0x0000000000000000 x19: 0x0000000142648960
x20: 0x00000001426380d0 x21: 0x000000016bdb2ad0 x22: 0x0000000000000000 x23: 0x0208805008000001
x24: 0x00000001042d26a0 x25: 0x00000001426488b0 x26: 0x0000000104183470 x27: 0x0000000000000000
x28: 0x00000001042d26f0 fp: 0x000000016bdb2ac0 lr: 0x00000001047dc314
sp: 0x000000016bdb29a0 pc: 0x00000001047dc148 cpsr: 0x20001000
far: 0x0000000125816bf0 esr: 0x92000006 (Data Abort) byte read Translation fault
```
### CPython versions tested on:
3.12
### Operating systems tested on:
Linux, macOS
### Output from running 'python -VV' on the command line:
Python 3.12.1 (main, Dec 8 2023, 18:57:37) [Clang 14.0.3 (clang-1403.0.22.14.1)]
<!-- gh-linked-prs -->
### Linked PRs
* gh-113327
* gh-113404
<!-- /gh-linked-prs -->
| c31943af16f885c8cf5d5a690c25c366afdb2862 | 9afb0e1606cad41ed57c42ea0a53ac90433f211b |
python/cpython | python__cpython-113271 | # IDLE - Fix test_editor hang on macOS with installed Python
On my MacBook Air Catalina, Python 3.13.0a2, IDLE's test_editor hangs in IndentSearchTest.test_searcher. The problem is that `insert` uses `update()` instead of `update_idletasks()`. I added the function in bpo-39885. It is called earlier in IndentAnd...Test without failure.
<!-- gh-linked-prs -->
### Linked PRs
* gh-113271
* gh-113272
* gh-113273
<!-- /gh-linked-prs -->
| fa9ba02353d79632983b9fe24da851894877e342 | c895403de0b1c301aeef86921348c13f608257dc |
python/cpython | python__cpython-114470 | # `TestResult.stopTest` called when test is skipped, despite `TestResult.startTest` not being called
# Bug report
### Bug description:
Following commit [551aa6ab9419109a80ad53900ad930e9b7f2e40d](https://github.com/python/cpython/commit/551aa6ab9419109a80ad53900ad930e9b7f2e40d) a bug seems to have been introduced in the `TestCase.run` method.
Because the `result.startTest(self)` call was moved inside the try/finally block after the check to see if the test is skipped, the `result.stopTest(self)` call will be made even if the test is skipped and `startTest` is never called.
While this does not cause issues with the standard `TestResult` or `TextTestResult` classes, it can cause issues with other classes which subclass those and expect every call to `stopTest` to be preceded by a call from `startTest`.
Most notably for me, the [`unittest-xml-reporting`](https://github.com/xmlrunner/unittest-xml-reporting) package, whose `stopTest` method uses data initialized in `startTest`.
**Note: Thinking further about this problem, it seems like changing the flow of methods like this was an ill-thought-out solution in the first place, as it may be an unexpected change of behaviour for different libraries. I'm still leaving what I think could be the solution in case I'm wrong about that.**
It looks like the fix would be to move the call to `stopTest` into yet another try/finally block right after the call to `startTest`, emulating the behaviour of the previous code while keeping the call to `startTest` after the skip check. The call to `stopTestRun` would need to remain where it is, as `startTestRun` has already been called at this point.
So using the code as found in that commit, something like:
```python
def run(self, result=None):
    if result is None:
        result = self.defaultTestResult()
        startTestRun = getattr(result, 'startTestRun', None)
        stopTestRun = getattr(result, 'stopTestRun', None)
        if startTestRun is not None:
            startTestRun()
    else:
        stopTestRun = None

    try:
        testMethod = getattr(self, self._testMethodName)
        if (getattr(self.__class__, "__unittest_skip__", False) or
                getattr(testMethod, "__unittest_skip__", False)):
            # If the class or method was skipped.
            skip_why = (getattr(self.__class__, '__unittest_skip_why__', '')
                        or getattr(testMethod, '__unittest_skip_why__', ''))
            _addSkip(result, self, skip_why)
            return result

        # Increase the number of tests only if it hasn't been skipped
        result.startTest(self)
        try:
            expecting_failure = (
                getattr(self, "__unittest_expecting_failure__", False) or
                getattr(testMethod, "__unittest_expecting_failure__", False)
            )
            outcome = _Outcome(result)
            start_time = time.perf_counter()
            try:
                self._outcome = outcome

                with outcome.testPartExecutor(self):
                    self._callSetUp()
                if outcome.success:
                    outcome.expecting_failure = expecting_failure
                    with outcome.testPartExecutor(self):
                        self._callTestMethod(testMethod)
                    outcome.expecting_failure = False
                    with outcome.testPartExecutor(self):
                        self._callTearDown()
                self.doCleanups()
                self._addDuration(result, (time.perf_counter() - start_time))

                if outcome.success:
                    if expecting_failure:
                        if outcome.expectedFailure:
                            self._addExpectedFailure(result, outcome.expectedFailure)
                        else:
                            self._addUnexpectedSuccess(result)
                    else:
                        result.addSuccess(self)
                return result
            finally:
                # explicitly break reference cycle:
                # outcome.expectedFailure -> frame -> outcome -> outcome.expectedFailure
                outcome.expectedFailure = None
                outcome = None

                # clear the outcome, no more needed
                self._outcome = None
        finally:
            result.stopTest(self)
    finally:
        if stopTestRun is not None:
            stopTestRun()
```
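The pairing can be checked directly with a small recording result class (a sketch; on affected 3.12 releases the `startTest` entry is missing for a skipped test, while on fixed versions both calls appear):

```python
import unittest

class RecordingResult(unittest.TestResult):
    # record the call order so start/stop pairing can be inspected
    def __init__(self):
        super().__init__()
        self.calls = []

    def startTest(self, test):
        self.calls.append("startTest")
        super().startTest(test)

    def stopTest(self, test):
        self.calls.append("stopTest")
        super().stopTest(test)

class Example(unittest.TestCase):
    @unittest.skip("always skipped")
    def test_skipped(self):
        pass

result = RecordingResult()
Example("test_skipped").run(result)
print(result.calls)
```

A subclass like `xmlrunner`'s, which initializes per-test state in `startTest` and reads it in `stopTest`, breaks as soon as the two calls stop being paired.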
### CPython versions tested on:
3.12
### Operating systems tested on:
Linux, Windows
<!-- gh-linked-prs -->
### Linked PRs
* gh-114470
* gh-114994
<!-- /gh-linked-prs -->
| ecabff98c41453f15ecd26ac255d531b571b9bc1 | ca715e56a13feabc15c368898df6511613d18987 |
python/cpython | python__cpython-113303 | # Frozen modules shouldn't be written to the source tree on Windows
# Feature or enhancement
### Proposal:
As a follow-up to gh-113039, this issue suggests we change the Windows build to avoid writing the generated frozen modules back into the source tree.
[Quoting @zooba ](https://github.com/python/cpython/issues/113039#issuecomment-1854406914)
> I'm not sure where the relative path came from, but I don't like it. I'm also not a huge fan of generated files going back into the source tree (I prefer them in $(IntDir) so that read-only source trees are supported)
This is already the case on Linux and MacOS, so making Windows use `$(IntDir)` for these is also more consistent.
While we're there, consider also changing `getpath.c` again to include `getpath.h`, as suggested [here](https://github.com/python/cpython/issues/113039#issuecomment-1861063000).
### Has this already been discussed elsewhere?
This is a minor feature, which does not need previous discussion elsewhere
### Links to previous discussion of this feature:
Previous discussion: gh-113039
<!-- gh-linked-prs -->
### Linked PRs
* gh-113303
<!-- /gh-linked-prs -->
| 178919cf2132a67bc03ae5994769d93cfb7e2cd3 | 7d01fb48089872155e1721ba0a8cc27ee5c4fecd |
python/cpython | python__cpython-113262 | # Automate pip's SBOM package entry
# Feature or enhancement
### Proposal:
Part of https://github.com/python/cpython/issues/112302
Most of the vendored dependencies in CPython's source tree can't be automated, due to being "outside" a package ecosystem and therefore difficult to automatically parse a version from; pip, however, likely can be automated further since it's a Python package.
See https://github.com/python/cpython/pull/113249#issuecomment-1859961692 and https://github.com/python/cpython/pull/113249#issuecomment-1859979747
### Has this already been discussed elsewhere?
No response given
### Links to previous discussion of this feature:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-113262
* gh-113295
<!-- /gh-linked-prs -->
| 4658464e9cf092be930d0d8f938e801a69f7f987 | fa9ba02353d79632983b9fe24da851894877e342 |
python/cpython | python__cpython-113286 | # Consistency of reveal_type results with other type checkers
# Bug report
### Bug description:
Not sure if this bug is reported before.
As far as I know, CPython's `reveal_type` was introduced to mirror the feature of the same name in type checkers. But I found that its behavior is inconsistent with them, at least between [CPython](https://github.com/python/cpython/pull/30646), mypy, and pytype, based on the cases in [Python 3 Types in the Wild: A Tale of Two Type Systems](https://www.cs.rpi.edu/~milanova/docs/dls2020.pdf).
Case 1:
```
a = 1
reveal_type(a)
a = 's'
reveal_type(a)
```
Case 2
```
a = [1]
reveal_type(a)
a.append('s')
reveal_type(a)
```
Case 3
```
class A:
    attr = 1
a = A()
reveal_type(a.attr)
a.attr = 's'
reveal_type(a.attr)
```
reveal_type|Case1|Case2|Case3
-|-|-|-
CPython|int, str|list, list|int, str
mypy|int, assignment error|list[int], arg-type error|int, assignment error
pytype|int, str|List[int], List[Union[int, str]]|int, str
For code without type annotations, the current behavior is closer to pytype. Case 2 is unclear, because CPython's `reveal_type` doesn't show the types for the elements.
And even if I add type to the code to address mypy errors, the result is still inconsistent:
Case 1:
```
a: int|str = 1
reveal_type(a)
a = 's'
reveal_type(a)
```
Case 2
```
a: list[int|str] = [1]
reveal_type(a)
a.append('s')
reveal_type(a)
```
Case 3
```
class A:
    attr: int|str = 1
a = A()
reveal_type(a.attr)
a.attr = 's'
reveal_type(a.attr)
```
reveal_type|Case1|Case2|Case3
-|-|-|-
CPython|int, str|list, list|int, str
mypy|Union[int,str], str|List[Union[int, str]], List[Union[int, str]]|Union[int,str], str
pytype|Union[int,str], str|List[Union[int, str]], List[Union[int, str]]|int, str
I don't know what the best behavior is, since CPython's `reveal_type` may never be fully aligned with any type checker's type system. Based on the motivation, would it make sense to align the behavior with at least mypy's `reveal_type`?
### CPython versions tested on:
3.11, 3.12
### Operating systems tested on:
Linux, macOS
<!-- gh-linked-prs -->
### Linked PRs
* gh-113286
* gh-113323
* gh-113326
<!-- /gh-linked-prs -->
| 11ee912327ef51100d2a6b990249f25b6b1b435d | de8a4e52a5f5e5f0c5057afd4c391afccfca57d3 |
python/cpython | python__cpython-113249 | # Update bundled pip to 23.3.2
<!-- gh-linked-prs -->
### Linked PRs
* gh-113249
* gh-113253
* gh-113254
<!-- /gh-linked-prs -->
| 4a24bf9a13a7cf055113c04bde0874186722c62c | f428c4dafbfa2425ea056e7f2ed2ea45fa90be87 |
python/cpython | python__cpython-113236 | # Add remaining TOML types to tomllib conversation table documentation
# Documentation
[The conversion table for Python's TOML parser](https://docs.python.org/3/library/tomllib.html#conversion-table) does not contain entries for two types listed in the TOML specification:
1. inline table (dict)
2. array of tables (list of dicts)
Adding those entries to the conversion table will make it clearer how these objects are represented in Python when using `tomllib.load`.
<!-- gh-linked-prs -->
### Linked PRs
* gh-113236
* gh-113283
* gh-113284
<!-- /gh-linked-prs -->
| 76d757b38b414964546393bdccff31c1f8be3843 | d71fcdee0f1e4daa35d47bbef103f30259037ddb |
python/cpython | python__cpython-113226 | # Optimize `pathlib.Path.iterdir()` and friends by using `os.DirEntry.path`
`pathlib.Path.glob()` calls [`os.scandir()`](https://docs.python.org/3/library/os.html#os.scandir) under the hood, converting [`os.DirEntry`](https://docs.python.org/3/library/os.html#os.DirEntry) objects to path objects using a private `_make_child_relpath()` method. This builds a child path from a given basename. The basename is obtained from the dir entry's [`name`](https://docs.python.org/3/library/os.html#os.DirEntry.name) attribute.
I've just spotted (or realised) that dir entries have a [`path`](https://docs.python.org/3/library/os.html#os.DirEntry.path) attribute that we could use, rather than constructing our own string path from the basename. This seems to be a fair bit faster in my testing.
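The difference can be sketched outside pathlib (the helper names below are ours, not pathlib's): building each child path from `entry.name`, roughly what `_make_child_relpath()` has to do, versus reusing the `entry.path` string the scan already produced:

```python
import os

def listdir_via_name(root):
    # join root + entry.name for every entry (extra string work per child)
    with os.scandir(root) as it:
        return [os.path.join(root, entry.name) for entry in it]

def listdir_via_path(root):
    # reuse the ready-made string on the DirEntry instead
    with os.scandir(root) as it:
        return [entry.path for entry in it]
```

Both return the same paths; the second avoids re-joining strings for every entry.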
<!-- gh-linked-prs -->
### Linked PRs
* gh-113226
* gh-113556
* gh-113693
<!-- /gh-linked-prs -->
| c2e8298eba3f8d75a58e5b3636f8edc8d60e68da | 4681a5271a8598b46021cbc556ac8098ab8a1d81 |
python/cpython | python__cpython-113334 | # asyncio.sslproto.SSLProtocol exception handling triggers TypeError: SSLProtocol._abort() takes 1 positional argument but 2 were given
# Bug report
### Bug description:
There are several pathways to this bug, all which call `asyncio.sslproto._SSLProtocolTransport._force_close()` with an exception instance:
- During shutdown, if the flushing state takes too long (`asyncio.sslproto.SSLProtocolTransport._check_shutdown_timeout()` is called)
- Anything that triggers a call to `asyncio.sslproto.SSLProtocol._fatal_error()`, e.g. SSL handshake timeout or exception, SSL shutdown timeout or exception, an exception during reading, exception raised in the app transport EOF handler, etc.
I'm seeing this when using an HTTPS proxy with an aiohttp client session (which wraps TLS in TLS), but I don't think it is specific to that context. I'm seeing these tracebacks:
```
Fatal error on SSL protocol
protocol: <asyncio.sslproto.SSLProtocol object at 0x7fe36f3a1350>
transport: <_SelectorSocketTransport closing fd=6 read=idle write=<idle, bufsize=0>>
Traceback (most recent call last):
File ".../lib/python3.11/asyncio/sslproto.py", line 644, in _do_shutdown
self._sslobj.unwrap()
File ".../lib/python3.11/ssl.py", line 983, in unwrap
return self._sslobj.shutdown()
^^^^^^^^^^^^^^^^^^^^^^^
ssl.SSLError: [SSL: APPLICATION_DATA_AFTER_CLOSE_NOTIFY] application data after close notify (_ssl.c:2702)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File ".../lib/python3.11/asyncio/sslproto.py", line 731, in _do_read
self._do_read__buffered()
File ".../lib/python3.11/asyncio/sslproto.py", line 765, in _do_read__buffered
self._app_protocol_buffer_updated(offset)
File ".../lib/python3.11/asyncio/sslproto.py", line 445, in buffer_updated
self._do_shutdown()
File ".../lib/python3.11/asyncio/sslproto.py", line 648, in _do_shutdown
self._on_shutdown_complete(exc)
File ".../lib/python3.11/asyncio/sslproto.py", line 660, in _on_shutdown_complete
self._fatal_error(shutdown_exc)
File ".../lib/python3.11/asyncio/sslproto.py", line 911, in _fatal_error
self._transport._force_close(exc)
File ".../lib/python3.11/asyncio/sslproto.py", line 252, in _force_close
self._ssl_protocol._abort(exc)
TypeError: SSLProtocol._abort() takes 1 positional argument but 2 were given
```
To me, the implementation of `_SSLProtocolTransport._force_close()` looks like an unfinished copy of the `_SSLProtocolTransport.abort()` method:
https://github.com/python/cpython/blob/1583c40be938d2caf363c126976bc8757df90b13/Lib/asyncio/sslproto.py#L239-L252
At any rate, the `self._ssl_protocol` attribute is an instance of `SSLProtocol` in the same module, and the `_abort()` method on that class doesn't accept an exception instance:
https://github.com/python/cpython/blob/1583c40be938d2caf363c126976bc8757df90b13/Lib/asyncio/sslproto.py#L664-L667
I find the test suite surrounding the SSL protocol to be dense enough that I can't easily spot how to provide an update there to reproduce this issue more easily, but the _fix_ looks simple enough: don't pass an argument to `_abort()`.
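A minimal, self-contained reproduction of the arity mismatch (the class names below are stand-ins for `SSLProtocol` and `_SSLProtocolTransport`, not the real asyncio classes):

```python
class Protocol:
    def _abort(self):          # accepts no exception argument, like SSLProtocol._abort()
        pass

class Transport:
    def __init__(self, protocol):
        self._ssl_protocol = protocol

    def _force_close(self, exc):
        # mirrors _SSLProtocolTransport._force_close() before the fix
        self._ssl_protocol._abort(exc)

caught = None
try:
    Transport(Protocol())._force_close(RuntimeError("boom"))
except TypeError as e:
    caught = e
print(caught)  # e.g. Protocol._abort() takes 1 positional argument but 2 were given
```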
### CPython versions tested on:
3.11, 3.12, CPython main branch
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-113334
* gh-113339
* gh-113340
<!-- /gh-linked-prs -->
| 1ff02385944924db7e683a607da2882462594764 | a3e8afe0a3b5868440501edf579d1d4711c0fb18 |
python/cpython | python__cpython-113307 | # Improve `super()` error message and document zero-arg `super()` usage inside nested functions and generator expressions.
# Bug report
### Bug description:
I tried searching for something related but I could not find exactly this case with `super`. Maybe I missed… so posting it here.
Following code works fine:
```python
class A:
def foo(self):
return 1
class B(A):
def foo(self):
return [super().foo() for _ in range(10)]
print(B().foo())
```
But if you modify it to be like this:
```python
class A:
def foo(self):
return 1
class B(A):
def foo(self):
return list(super().foo() for _ in range(10))
print(B().foo())
```
you will get following error:
```python
TypeError: super(type, obj): obj must be an instance or subtype of type
```
Before https://github.com/python/cpython/pull/101441 , `[super().foo() for _ in range(10)]` would cause the same error but it was fixed implicitly by [PEP709](https://peps.python.org/pep-0709/).
You can fix this yourself by specifying type and instance in `super()`:
```python
class A:
def foo(self):
return 1
class B(A):
def foo(self):
return list(super(B, self).foo() for _ in range(10))
print(B().foo())
```
If you decide that fixing this makes sense, can I work on it? If you decide to leave the behavior as it is, it would at least make sense to emit some kind of warning or error saying that in this case you should explicitly provide the type and instance to `super()`, because it is quite confusing to get this error when the code is semantically valid and should work.
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Linux, macOS
<!-- gh-linked-prs -->
### Linked PRs
* gh-113307
<!-- /gh-linked-prs -->
| 4a3d2419bb5b7cd862e3d909f53a2ef0a09cdcee | 237e2cff00cca49db47bcb7ea13683a4d9ad1ea5 |
python/cpython | python__cpython-113209 | # The statement` __init__.py files are required to make Python treat directories containing the file as packages.` might be confusing
# Documentation
The statement `__init__.py files are required to make Python treat directories containing the file as packages.` might be confusing.
It is possible to have a package (a namespace package) without an `__init__.py` file.
<!-- gh-linked-prs -->
### Linked PRs
* gh-113209
* gh-113276
* gh-113277
<!-- /gh-linked-prs -->
| d71fcdee0f1e4daa35d47bbef103f30259037ddb | 4658464e9cf092be930d0d8f938e801a69f7f987 |
python/cpython | python__cpython-114186 | # Occasionally `multiprocessing.test.*.test_threads` can take 10 minutes
# Bug report
### Bug description:
See the logs in https://buildbot.python.org/all/#/builders/1138/builds/498
```
10 slowest tests:
- test.test_multiprocessing_spawn.test_threads: 10 min 14 sec
- test.test_multiprocessing_fork.test_threads: 10 min 13 sec
- test_imaplib: 1 min 31 sec
- test_signal: 1 min 18 sec
- test.test_multiprocessing_spawn.test_processes: 52.6 sec
- test.test_concurrent_futures.test_wait: 48.6 sec
- test_socket: 40.7 sec
- test.test_multiprocessing_forkserver.test_processes: 40.1 sec
- test_io: 37.4 sec
- test_codecs: 37.3 sec
```
Normally those two `.test_threads` sub-suites in `test_multiprocessing` do not make the top 10 slowest at all.
Something odd is going on and occasionally making them sit around for a very long time before apparently succeeding anyways. I've seen it multiple times in the rpi buildbot logs. I expect it'll show up on some other bots as well but I haven't gone hunting.
### CPython versions tested on:
3.12
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-114186
* gh-114222
* gh-114223
* gh-114249
* gh-114516
* gh-114517
<!-- /gh-linked-prs -->
| c1db9606081bdbe0207f83a861a3c70c356d3704 | e2c097ebdee447ded1109f99a235e65aa3533bf8 |
python/cpython | python__cpython-113203 | # Add a strict option to itertools.batched()
This is modeled on the `strict` option for `zip()`.
One use case is improving the `reshape()` recipe. In numpy, a reshape will raise an exception if the new dimensions do not exactly fit the data.
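The option did not exist when this issue was filed, so here is a rough sketch of the proposed semantics (`batched_strict` is our name, not the stdlib API): like `zip(..., strict=True)`, a short final batch raises instead of being silently returned.

```python
from itertools import islice

def batched_strict(iterable, n, *, strict=False):
    if n < 1:
        raise ValueError('n must be at least one')
    it = iter(iterable)
    while batch := tuple(islice(it, n)):
        if strict and len(batch) != n:
            raise ValueError('batched(): incomplete batch')
        yield batch

print(list(batched_strict('ABCDEF', 2)))  # [('A', 'B'), ('C', 'D'), ('E', 'F')]
```

With `strict=True`, `batched_strict('ABCDE', 2)` would raise on the final one-element batch, which is exactly the check a `reshape()` recipe needs.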
<!-- gh-linked-prs -->
### Linked PRs
* gh-113203
* gh-115889
<!-- /gh-linked-prs -->
| 1583c40be938d2caf363c126976bc8757df90b13 | fe479fb8a979894224a4d279d1e46a5cdb108fa4 |
python/cpython | python__cpython-113200 | # `http.client.HTTPResponse.read1` does not close response after reading all data
# Bug report
### Bug description:
While trying to use `read1` in urllib3, we discovered that `read1` unlike `read` never closes the response after reading all data when the content length is known (https://github.com/urllib3/urllib3/pull/3235).
---
```python
import http.client
conn = http.client.HTTPSConnection("www.python.org")
conn.request("GET", "/")
response = conn.getresponse()
```
This runs successfully:
```python
response.read()
assert response.isclosed()
```
This fails:
```python
while response.read1():
pass
assert response.isclosed()
```
### CPython versions tested on:
3.8, 3.9, 3.10, 3.11, 3.12, 3.13, CPython main branch
### Operating systems tested on:
Linux, macOS, Windows
<!-- gh-linked-prs -->
### Linked PRs
* gh-113200
* gh-113259
* gh-113260
<!-- /gh-linked-prs -->
| 41336a72b90634d5ac74a57b6826e4dd6fe78eac | 2feec0fc7fd0b9caae7ab2e26e69311d3ed93e77 |
python/cpython | python__cpython-113192 | # Add os.fchmod() on Windows
# Feature or enhancement
After implementing `os.lchmod()` and `os.chmod(follow_symlinks=True)` on Windows in #59616, adding `os.fchmod()` is easy.
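For reference, this is the POSIX behavior being ported — the snippet below works on Unix today, while on Windows `os.fchmod()` simply does not exist yet:

```python
import os
import stat
import tempfile

# Change permissions through an open file descriptor instead of a path.
fd, path = tempfile.mkstemp()
try:
    os.fchmod(fd, stat.S_IRUSR | stat.S_IWUSR)  # 0o600, via the open fd
    print(oct(stat.S_IMODE(os.stat(path).st_mode)))
finally:
    os.close(fd)
    os.unlink(path)
```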
<!-- gh-linked-prs -->
### Linked PRs
* gh-113192
<!-- /gh-linked-prs -->
| 1f06baeabd7ef64b7be6af7cb6fc03ec410b1aa2 | 53330f167792a2947ab8b0faafb11019d7fb09b6 |
python/cpython | python__cpython-113601 | # python3.12 introduces numerous memory leaks (as reported by ASAN)
# Bug report
### Bug description:
Let example illustrate:
```
$ LD_PRELOAD=/usr/lib/llvm-14/lib/clang/14.0.0/lib/linux/libclang_rt.asan-x86_64.so \
PYTHONMALLOC=malloc \
/usr/bin/python3.10 -c 'print("Hello")'
Hello
$ LD_PRELOAD=/usr/lib/llvm-14/lib/clang/14.0.0/lib/linux/libclang_rt.asan-x86_64.so \
PYTHONMALLOC=malloc \
/usr/bin/python3.11 -c 'print("Hello")'
Hello
$ LD_PRELOAD=/usr/lib/llvm-14/lib/clang/14.0.0/lib/linux/libclang_rt.asan-x86_64.so \
PYTHONMALLOC=malloc \
/usr/bin/python3.12 -c 'print("Hello")'
Hello
=================================================================
==3317995==ERROR: LeakSanitizer: detected memory leaks
Direct leak of 55877 byte(s) in 1063 object(s) allocated from:
#0 0x7f5d145847ee in __interceptor_malloc (/usr/lib/llvm-14/lib/clang/14.0.0/lib/linux/libclang_rt.asan-x86_64.so+0xcd7ee) (BuildId: a6105a816e63299474c1078329a59ed80f244fbf)
#1 0x539ddf (/usr/bin/python3.12+0x539ddf) (BuildId: aba53fb0246abeea0af6e1c463dcb32848eb6c27)
Direct leak of 30732 byte(s) in 614 object(s) allocated from:
#0 0x7f5d145847ee in __interceptor_malloc (/usr/lib/llvm-14/lib/clang/14.0.0/lib/linux/libclang_rt.asan-x86_64.so+0xcd7ee) (BuildId: a6105a816e63299474c1078329a59ed80f244fbf)
#1 0x5573b9 (/usr/bin/python3.12+0x5573b9) (BuildId: aba53fb0246abeea0af6e1c463dcb32848eb6c27)
SUMMARY: AddressSanitizer: 86609 byte(s) leaked in 1677 allocation(s).
```
### CPython versions tested on:
3.12.1
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-113601
* gh-130581
* gh-130582
<!-- /gh-linked-prs -->
| 3203a7412977b8da3aba2770308136a37f48c927 | b6cb435ac06fe2bef0549bd1e46e15518a23089f |
python/cpython | python__cpython-113189 | # Harmonize shutil.copymode() on Windows and POSIX
The behavior of `shutil.copymode()` on Windows was changed by adding `os.lchmod()` in #59616. Now if *follow_symlinks* is false and both *src* and *dst* are symlinks, it copies the symlink mode (only the read-only bit). Previously it did nothing.
But there is still a difference between Windows and POSIX. If *follow_symlinks* is true and *dst* is a symlink, it sets the permission of the symlink on Windows (because `os.chmod()` still does not follow symlinks by default on Windows) and the permission of the target on POSIX.
`shutil.copystat()` has different behavior, because it passes the *follow_symlinks* argument explicitly.
I propose to harmonize `shutil.copymode()` behavior on Windows with POSIX and with `shutil.copystat()`.
<!-- gh-linked-prs -->
### Linked PRs
* gh-113189
* gh-113285
* gh-113426
<!-- /gh-linked-prs -->
| 6e02d79f96b30bacdbc7a85e42040920b3dee915 | bdc8d667ab545ccec0bf8c2769f5c5573ed293ea |
python/cpython | python__cpython-113175 | # Apply changes from importlib_metadata for Python 3.13
This bug tracks merging changes in importlib_metadata for Python 3.13.
- [x] v7.0.0: There have been a [number of changes in importlib_metadata](https://importlib-metadata.readthedocs.io/en/latest/history.html#v7-0-0) ([changelog](https://github.com/python/importlib_metadata/compare/v6.5.1...v7.0.0)) since the last sync at 6.5.1.
<!-- gh-linked-prs -->
### Linked PRs
* gh-113175
<!-- /gh-linked-prs -->
| 2d91409c690b113493e3e81efc880301d2949f5f | 6b70c3dc5ab2f290fcdbe474bcb7d6fdf29eae4c |
python/cpython | python__cpython-113173 | # Compiler warnings in Modules/_xxinterpqueuesmodule.c
In every old PR I get warnings in Modules/_xxinterpqueuesmodule.c. For example: https://github.com/python/cpython/pull/92635/files
"unused variable ‘name’ [-Wunused-variable]" for line 239.
https://github.com/python/cpython/pull/92635/files#file-modules-_xxinterpqueuesmodule-c-L239
"‘is_full’ may be used uninitialized in this function [-Wmaybe-uninitialized]" for line 1513.
https://github.com/python/cpython/pull/92635/files#file-modules-_xxinterpqueuesmodule-c-L1513
<!-- gh-linked-prs -->
### Linked PRs
* gh-113173
<!-- /gh-linked-prs -->
| e365c943f24ab577cf7a4bf12bc0d2619ab9ae47 | 4a153a1d3b18803a684cd1bcc2cdf3ede3dbae19 |
python/cpython | python__cpython-113179 | # ipaddress considers some not globally reachable addresses global and vice versa
# Bug report
### Bug description:
Judging by what the `is_private` and `is_global` implementations in the `ipaddress` module do[1], everything that the IANA special-purpose address registries[2][3] consider globally reachable should result in `is_global == True`, and the rest should result in `is_global == False`.
`is_private` is almost always the opposite of `is_global` (`100.64.0.0/10` is special).
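The intended relationship can be checked directly; the results for the ranges this issue is about vary with the Python version, so only the stable cases get expected values here:

```python
import ipaddress

for s in ("10.0.0.1", "8.8.8.8", "100.64.0.1"):
    a = ipaddress.ip_address(s)
    print(s, a.is_private, a.is_global)
# 10.0.0.1 is private, 8.8.8.8 is global;
# 100.64.0.0/10 (shared address space) is the special case: neither global nor private
```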
The problem is the ranges present in the code (in the `_private_networks` variables) are not quite right:
* There are some ranges missing (so we incorrectly treat them as globally reachable)
* Some of the ranges fail to account for more specific allocations defined as globally reachable (so we incorrectly treat them as not globally reachable)
This is what I have right now and I'll submit that as a PR:
```diff
diff --git a/Lib/ipaddress.py b/Lib/ipaddress.py
index 5b5520b92bde..3ee565bcd30b 100644
--- a/Lib/ipaddress.py
+++ b/Lib/ipaddress.py
@@ -1558,13 +1558,18 @@ class _IPv4Constants:
_public_network = IPv4Network('100.64.0.0/10')
+ # Not globally reachable address blocks listed on
+ # https://www.iana.org/assignments/iana-ipv4-special-registry/iana-ipv4-special-registry.xhtml
_private_networks = [
IPv4Network('0.0.0.0/8'),
IPv4Network('10.0.0.0/8'),
+ # 100.64.0.0/10 missing
IPv4Network('127.0.0.0/8'),
IPv4Network('169.254.0.0/16'),
IPv4Network('172.16.0.0/12'),
+ # Incomplete. Should be 192.0.0.0/24 - (192.0.0.9/32 + 192.0.0.10/32)
IPv4Network('192.0.0.0/29'),
+ # 192.0.0.8/32 missing
IPv4Network('192.0.0.170/31'),
IPv4Network('192.0.2.0/24'),
IPv4Network('192.168.0.0/16'),
@@ -2306,15 +2311,31 @@ class _IPv6Constants:
_multicast_network = IPv6Network('ff00::/8')
+ # Not globally reachable address blocks listed on
+ # https://www.iana.org/assignments/iana-ipv6-special-registry/iana-ipv6-special-registry.xhtml
_private_networks = [
IPv6Network('::1/128'),
IPv6Network('::/128'),
IPv6Network('::ffff:0:0/96'),
+ # 64:ff9b:1::/48 missing
IPv6Network('100::/64'),
+ # Not quite right as there are multiple more specific allocations that are globally
+ # reachable. Should be:
+ # 2001::/23 - (
+ # 2001:1::1/128
+ # + 2001:1::2/128
+ # + 2001:3::/32
+ # + 2001:4:112::/48
+ # + 2001:20::/28
+ # + 2001:30::/28
+ # )
IPv6Network('2001::/23'),
+ # Unnecessary, covered by 2001::/23
IPv6Network('2001:2::/48'),
IPv6Network('2001:db8::/32'),
+ # Unnecessary, covered by 2001::/23
IPv6Network('2001:10::/28'),
+ # Potentially 2002::/16 missing? IANA says N/A, there is no true/false value
IPv6Network('fc00::/7'),
IPv6Network('fe80::/10'),
]
```
[1] And which should really be documented per #65056 – a separate issue though
[2] https://www.iana.org/assignments/iana-ipv4-special-registry/iana-ipv4-special-registry.xhtml
[3] https://www.iana.org/assignments/iana-ipv6-special-registry/iana-ipv6-special-registry.xhtml
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-113179
* gh-118177
* gh-118227
* gh-118229
* gh-118472
* gh-118479
<!-- /gh-linked-prs -->
| 40d75c2b7f5c67e254d0a025e0f2e2c7ada7f69f | 3be9b9d8722696b95555937bb211dc4cda714d56 |
python/cpython | python__cpython-113227 | # Improving error message with trailing comma in json
# Bug report
### Bug description:
```python
import json
json.loads('{"a":1,}')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/python3.11/json/__init__.py", line 346, in loads
return _default_decoder.decode(s)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/json/decoder.py", line 337, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/json/decoder.py", line 353, in raw_decode
obj, end = self.scan_once(s, idx)
^^^^^^^^^^^^^^^^^^^^^^
json.decoder.JSONDecodeError: Expecting property name enclosed in double quotes: line 1 column 8 (char 7)
```
The error is the trailing comma.
Initial pull proposal: #113047 but apparently this needs more attention than I can contribute.
The actual fix needs to be further down, possibly in line 206
https://github.com/python/cpython/blob/25061f5c98a47691fdb70f550943167bda77f6e0/Lib/json/decoder.py#L199-L208
we already know that we have seen a comma, and can insert a more helpful error message in line 206.
```python
if nextchar == '}':
raise JSONDecodeError("No trailing commas allowed in JSON objects.", s, end - 1)
```
at this location, the previous character must have been a comma (line 201). Whitespace has been removed, so this will also catch `,\n}`
### CPython versions tested on:
3.11
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-113227
<!-- /gh-linked-prs -->
| cfa25fe3e39e09612a6ba8409c46cf35a091de3c | 21d52995ea490328edf9be3ba072821cd445dd30 |
python/cpython | python__cpython-113120 | # subprocess.Popen can pass incomplete environment when posix_spawn is used
# Bug report
### Bug description:
A subprocess created with `posix_spawn` can have a different environment compared to a process created with the standard `fork/exec`.
The reason is that `posix_spawn` uses `os.environ` (when no env was given), which is not affected by `os.putenv` and `os.unsetenv`. This is different from `execv`, which uses process environment (available through `environ`).
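The asymmetry is easy to observe directly (the variable name below is arbitrary): `os.putenv()` updates the C-level environment that `execv()` inherits, but not the `os.environ` snapshot that the `posix_spawn` path passes along.

```python
import os

os.putenv("PYTHONTESTVAR_DEMO", "testvalue")
print("PYTHONTESTVAR_DEMO" in os.environ)   # False: the snapshot is not updated

os.environ["PYTHONTESTVAR_DEMO"] = "testvalue"
print("PYTHONTESTVAR_DEMO" in os.environ)   # True: mapping and C env both updated
```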
This can be easily reproduced with `test_putenv_unsetenv` modified to call `subprocess.run` with `close_fds=False` (which makes it use the `posix_spawn` call):
```python
import subprocess
import sys
import os
name = "PYTHONTESTVAR"
value = "testvalue"
code = f'import os; print(repr(os.environ.get({name!r})))'
os.putenv(name, value)
proc = subprocess.run([sys.executable, '-c', code], check=True,
stdout=subprocess.PIPE, text=True, close_fds=False)
assert proc.stdout.rstrip() == repr(value)
os.unsetenv(name)
proc = subprocess.run([sys.executable, '-c', code], check=True,
stdout=subprocess.PIPE, text=True, close_fds=False)
assert proc.stdout.rstrip() == repr(None)
```
I found it when I patched Python with #113118 (on Solaris, but this is pretty platform independent, I think).
### CPython versions tested on:
3.9, 3.11
### Operating systems tested on:
Other
<!-- gh-linked-prs -->
### Linked PRs
* gh-113120
* gh-113268
<!-- /gh-linked-prs -->
| 48c907a15ceae7202fcfeb435943addff896c42c | cde1335485b7bffb12c378d230ae48ad77d76ab1 |
python/cpython | python__cpython-113118 | # Add support for posix_spawn call in subprocess.Popen with close_fds=True
# Feature or enhancement
### Proposal:
One of the reasons `posix_spawn` is rarely used with subprocess functions is that it doesn't support `close_fds=True` (which is the default value).
Previously, it was deemed infeasible to support this (https://github.com/python/cpython/issues/79718#issuecomment-1093807396), but since then, `posix_spawn_file_actions_addclosefrom_np()` has become available on many systems (I know about Solaris, Linux and FreeBSD), which should make it pretty simple to support it.
### Has this already been discussed elsewhere?
No response given
### Links to previous discussion of this feature:
These two are related:
https://github.com/python/cpython/issues/79718 was the original issue for using os.posix_spawn() in subprocess.
https://github.com/python/cpython/issues/86904 proposed changing close_fds to default to False, so posix_spawn() could be used more often
<!-- gh-linked-prs -->
### Linked PRs
* gh-113118
* gh-113268
<!-- /gh-linked-prs -->
| 2b93f5224216d10f8119373e72b5c2b3984e0af6 | 32d87a88994c131a9f3857d01ae7c07918577a55 |
python/cpython | python__cpython-113114 | # doc: use less ambiguously named variable
# Documentation
In the control flow tutorial -- https://docs.python.org/3/tutorial/controlflow.html -- one of the examples uses 'ok' as the variable name to capture input(). 'Ok' is ambiguous and, at least to me, read as a boolean type when I was learning python with the tutorials. Since the following code processes `ok` with some Python-specific syntax (a container check against a tuple of strings), as a new user it was hard to understand what was going on. I propose changing the variable name to 'reply' to more clearly indicate to the tutorial reader that it's storing unevaluated user input, without the affirmative connotations of the name 'ok'. PR forthcoming.
Example in question:
```python
while True:
    ok = input(prompt)
    if ok in ('y', 'ye', 'yes'):
        return True
    if ok in ('n', 'no', 'nop', 'nope'):
        return False
```
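With the proposed rename the same snippet would read as follows (wrapped in a function here so the `return` statements are valid; `ask_ok` follows the function name used in the tutorial's example):

```python
def ask_ok(prompt):
    while True:
        reply = input(prompt)
        if reply in ('y', 'ye', 'yes'):
            return True
        if reply in ('n', 'no', 'nop', 'nope'):
            return False
```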
<!-- gh-linked-prs -->
### Linked PRs
* gh-113114
* gh-113122
<!-- /gh-linked-prs -->
| fb4cb7ce310e28e25d3adf33b29cb97c637097ed | fd81afc624402816e3455fd8e932ac7702e4d988 |
python/cpython | python__cpython-113091 | # os_support.can_chmod() is broken
`test.support.os_support.can_chmod()` tests for `hasattr(os, "chown")` instead of `hasattr(os, "chmod")`. This, and also the fact that it tests bits not supported on Windows, makes it always return False on Windows. That may cause some meaningful tests to be skipped on Windows, and it prevents using the helper in new tests for `os.chmod()`.
<!-- gh-linked-prs -->
### Linked PRs
* gh-113091
* gh-113099
* gh-113100
<!-- /gh-linked-prs -->
| c6e953be125c491226adc1a5bc9c804ce256aa17 | 23a5711100271e5a8b9dd9ab48b10807627ef4fb |
python/cpython | python__cpython-113087 | # Add tests for os.chmod() and os.lchmod()
Currently the only tests for os.chmod() and os.lchmod() check that they raise exceptions for invalid arguments. There are no tests that os.chmod() and os.lchmod() actually work, nor for their behavior with symlinks (there is a difference between Windows and POSIX).
<!-- gh-linked-prs -->
### Linked PRs
* gh-113087
* gh-113088
* gh-113089
<!-- /gh-linked-prs -->
| b4f2c89118d5a48ce6c495ba931d7f6493422029 | fddc829236d7b29a522a2160e57b2d7ca23b9b95 |
python/cpython | python__cpython-113139 | # Python/flowgraph.c:1813: void insert_superinstructions(cfg_builder *): Assertion `no_redundant_nops(g)' failed
# Bug report
### Bug description:
The `fuzz_pycompile` identified an assertion failure:
```
Running: /mnt/scratch0/clusterfuzz/bot/inputs/fuzzer-testcases/crash-09bb1aea9610b3c790c03fc92383fb3d19f08654
--
| <fuzz input>:1: SyntaxWarning: invalid decimal literal
| <fuzz input>:1: SyntaxWarning: invalid decimal literal
| fuzz_pycompile: Python/flowgraph.c:1813: void insert_superinstructions(cfg_builder *): Assertion `no_redundant_nops(g)' failed.
| MemorySanitizer:DEADLYSIGNAL
| ==53716==ERROR: MemorySanitizer: ABRT on unknown address 0x05390000d1d4 (pc 0x7eaab279400b bp 0x7eaab2909588 sp 0x7ffd50f56110 T53716)
| #0 0x7eaab279400b in raise /build/glibc-SzIz7B/glibc-2.31/sysdeps/unix/sysv/linux/raise.c:51:1
| #1 0x7eaab2773858 in abort /build/glibc-SzIz7B/glibc-2.31/stdlib/abort.c:79:7
| #2 0x7eaab2773728 in __assert_fail_base /build/glibc-SzIz7B/glibc-2.31/assert/assert.c:92:3
| #3 0x7eaab2784fd5 in __assert_fail /build/glibc-SzIz7B/glibc-2.31/assert/assert.c:101:3
| #4 0xc79572 in insert_superinstructions cpython3/Python/flowgraph.c:1813:5
| #5 0xc79572 in _PyCfg_OptimizeCodeUnit cpython3/Python/flowgraph.c:2424:5
| #6 0xb388cf in optimize_and_assemble_code_unit cpython3/Python/compile.c:7597:9
| #7 0xb388cf in optimize_and_assemble cpython3/Python/compile.c:7639:12
| #8 0xb296b6 in compiler_mod cpython3/Python/compile.c:1802:24
| #9 0xb296b6 in _PyAST_Compile cpython3/Python/compile.c:555:24
| #10 0xe531b9 in Py_CompileStringObject cpython3/Python/pythonrun.c:1452:10
| #11 0xe53554 in Py_CompileStringExFlags cpython3/Python/pythonrun.c:1465:10
| #12 0x54f518 in fuzz_pycompile cpython3/Modules/_xxtestfuzz/fuzzer.c:550:24
| #13 0x54f518 in _run_fuzz cpython3/Modules/_xxtestfuzz/fuzzer.c:563:14
| #14 0x54f518 in LLVMFuzzerTestOneInput cpython3/Modules/_xxtestfuzz/fuzzer.c:704:11
| #15 0x458603 in fuzzer::Fuzzer::ExecuteCallback(unsigned char const*, unsigned long) /src/llvm-project/compiler-rt/lib/fuzzer/FuzzerLoop.cpp:611:15
| #16 0x443d62 in fuzzer::RunOneTest(fuzzer::Fuzzer*, char const*, unsigned long) /src/llvm-project/compiler-rt/lib/fuzzer/FuzzerDriver.cpp:324:6
| #17 0x44960c in fuzzer::FuzzerDriver(int*, char***, int (*)(unsigned char const*, unsigned long)) /src/llvm-project/compiler-rt/lib/fuzzer/FuzzerDriver.cpp:860:9
| #18 0x472b42 in main /src/llvm-project/compiler-rt/lib/fuzzer/FuzzerMain.cpp:20:10
| #19 0x7eaab2775082 in __libc_start_main /build/glibc-SzIz7B/glibc-2.31/csu/libc-start.c:308:16
| #20 0x439f2d in _start
|
<br class="Apple-interchange-newline">Running: /mnt/scratch0/clusterfuzz/bot/inputs/fuzzer-testcases/crash-09bb1aea9610b3c790c03fc92383fb3d19f08654
<fuzz input>:1: SyntaxWarning: invalid decimal literal
<fuzz input>:1: SyntaxWarning: invalid decimal literal
fuzz_pycompile: Python/flowgraph.c:1813: void insert_superinstructions(cfg_builder *): Assertion `no_redundant_nops(g)' failed.
MemorySanitizer:DEADLYSIGNAL
==53716==ERROR: MemorySanitizer: ABRT on unknown address 0x05390000d1d4 (pc 0x7eaab279400b bp 0x7eaab2909588 sp 0x7ffd50f56110 T53716)
#0 0x7eaab279400b in raise /build/glibc-SzIz7B/glibc-2.31/sysdeps/unix/sysv/linux/raise.c:51:1
#1 0x7eaab2773858 in abort /build/glibc-SzIz7B/glibc-2.31/stdlib/abort.c:79:7
#2 0x7eaab2773728 in __assert_fail_base /build/glibc-SzIz7B/glibc-2.31/assert/assert.c:92:3
#3 0x7eaab2784fd5 in __assert_fail /build/glibc-SzIz7B/glibc-2.31/assert/assert.c:101:3
#4 0xc79572 in insert_superinstructions [cpython3/Python/flowgraph.c:1813](https://github.com/python/cpython/blob/e0fb7004ede71389c9dd462cd03352cc3c3a4d8c/Python/flowgraph.c#L1813):5
#5 0xc79572 in _PyCfg_OptimizeCodeUnit [cpython3/Python/flowgraph.c:2424](https://github.com/python/cpython/blob/e0fb7004ede71389c9dd462cd03352cc3c3a4d8c/Python/flowgraph.c#L2424):5
#6 0xb388cf in optimize_and_assemble_code_unit [cpython3/Python/compile.c:7597](https://github.com/python/cpython/blob/e0fb7004ede71389c9dd462cd03352cc3c3a4d8c/Python/compile.c#L7597):9
#7 0xb388cf in optimize_and_assemble [cpython3/Python/compile.c:7639](https://github.com/python/cpython/blob/e0fb7004ede71389c9dd462cd03352cc3c3a4d8c/Python/compile.c#L7639):12
#8 0xb296b6 in compiler_mod [cpython3/Python/compile.c:1802](https://github.com/python/cpython/blob/e0fb7004ede71389c9dd462cd03352cc3c3a4d8c/Python/compile.c#L1802):24
#9 0xb296b6 in _PyAST_Compile [cpython3/Python/compile.c:555](https://github.com/python/cpython/blob/e0fb7004ede71389c9dd462cd03352cc3c3a4d8c/Python/compile.c#L555):24
#10 0xe531b9 in Py_CompileStringObject [cpython3/Python/pythonrun.c:1452](https://github.com/python/cpython/blob/e0fb7004ede71389c9dd462cd03352cc3c3a4d8c/Python/pythonrun.c#L1452):10
#11 0xe53554 in Py_CompileStringExFlags [cpython3/Python/pythonrun.c:1465](https://github.com/python/cpython/blob/e0fb7004ede71389c9dd462cd03352cc3c3a4d8c/Python/pythonrun.c#L1465):10
#12 0x54f518 in fuzz_pycompile [cpython3/Modules/_xxtestfuzz/fuzzer.c:550](https://github.com/python/cpython/blob/e0fb7004ede71389c9dd462cd03352cc3c3a4d8c/Modules/_xxtestfuzz/fuzzer.c#L550):24
#13 0x54f518 in _run_fuzz [cpython3/Modules/_xxtestfuzz/fuzzer.c:563](https://github.com/python/cpython/blob/e0fb7004ede71389c9dd462cd03352cc3c3a4d8c/Modules/_xxtestfuzz/fuzzer.c#L563):14
#14 0x54f518 in LLVMFuzzerTestOneInput [cpython3/Modules/_xxtestfuzz/fuzzer.c:704](https://github.com/python/cpython/blob/e0fb7004ede71389c9dd462cd03352cc3c3a4d8c/Modules/_xxtestfuzz/fuzzer.c#L704):11
#15 0x458603 in fuzzer::Fuzzer::ExecuteCallback(unsigned char const*, unsigned long) /src/llvm-project/compiler-rt/lib/fuzzer/FuzzerLoop.cpp:611:15
#16 0x443d62 in fuzzer::RunOneTest(fuzzer::Fuzzer*, char const*, unsigned long) /src/llvm-project/compiler-rt/lib/fuzzer/FuzzerDriver.cpp:324:6
#17 0x44960c in fuzzer::FuzzerDriver(int*, char***, int (*)(unsigned char const*, unsigned long)) /src/llvm-project/compiler-rt/lib/fuzzer/FuzzerDriver.cpp:860:9
#18 0x472b42 in main /src/llvm-project/compiler-rt/lib/fuzzer/FuzzerMain.cpp:20:10
#19 0x7eaab2775082 in __libc_start_main /build/glibc-SzIz7B/glibc-2.31/csu/libc-start.c:308:16
#20 0x439f2d in _start
```
Reproducer (note that the first two bytes are metadata for the fuzzer):
```
00000000: 2020 6966 2035 6966 2035 656c 7365 2054 if 5if 5else T
00000010: 3a79 :y
```
cc: @bradlarsen
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-113139
* gh-113636
<!-- /gh-linked-prs -->
| e51b4009454939e3ee5f1bfaed45ce65689a71b8 | 76d757b38b414964546393bdccff31c1f8be3843 |
python/cpython | python__cpython-113207 | # Error in the documentation for csv.reader
# Documentation
In the description for [csv.reader](https://docs.python.org/3/library/csv.html#csv.reader) function it says:
Return a reader object which will iterate over lines in the given csvfile. csvfile can be any object which supports the [iterator](https://docs.python.org/3/glossary.html#term-iterator) protocol and **returns a string** each time its `__next__()` method is called
(The bold emphasis is mine)
However, it returns a `list`, as is properly documented at [`csvreader.__next__()`](https://docs.python.org/3/library/csv.html#csv.csvreader.__next__)
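A two-line check confirms the `list` return type:

```python
import csv

row = next(csv.reader(["a,b,c"]))
print(type(row), row)  # <class 'list'> ['a', 'b', 'c']
```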
<!-- gh-linked-prs -->
### Linked PRs
* gh-113207
* gh-113210
* gh-113211
<!-- /gh-linked-prs -->
| 84df3172efe8767ddf5c28bdb6696b3f216bcaa6 | 5ae75e1be24bd6b031a68040cfddb71732461f67 |
python/cpython | python__cpython-113022 | # Relative include path of generated frozen module in getpath.c
# Feature or enhancement
### Proposal:
in `getpath.c`, the generated frozen `getpath.h` is [included using a relative path](https://github.com/python/cpython/blob/c6e614fd81d7dca436fe640d63a307c7dc9f6f3b/Modules/getpath.c#L25).
I'd like to consider changing this to [use absolute include path](https://github.com/python/cpython/pull/113022).
my primary motivation is to make it possible to build python using alternative build systems (buck2 in my case), where frozen modules are generated as intermediate build artifacts, and it's not straight forward (read: I couldn't figure it out yet) to make the generated header "look like" it's in that relative path.
### Has this already been discussed elsewhere?
This is a minor feature, which does not need previous discussion elsewhere
### Links to previous discussion of this feature:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-113022
<!-- /gh-linked-prs -->
| 2feec0fc7fd0b9caae7ab2e26e69311d3ed93e77 | d00dbf541525fcb36c9c6ebb7b41a5637c5aa6c0 |
python/cpython | python__cpython-113436 | # Difference between pickle.py and _pickle for certain strings
# Bug report
### Bug description:
There is a logical error in `pickle.Pickler.save_str` for protocol 0, such that it repeats pickling of a string object each time it is presented. The design clearly intends to re-use the first pickled representation, and the C-implementation `_pickle` does that.
In an implementation that does not provide a compiled `_pickle` (PyPy may be one) this is inefficient, but not actually wrong. The intended behaviour occurs with a simple string:
```python
>>> s = "hello"
>>> pickle._dumps((s,s), 0)
b'(Vhello\np0\ng0\ntp1\n.'
```
When read by `loads()` this string says:
1. stack "hello",
2. save a copy in memory 0,
3. stack the contents of memory 0,
4. make a tuple from the stack,
5. save a copy in memory 1.
The bug emerges when the pickled string needs pre-encoding:
```python
>>> s = "hello\n"
>>> pickle._dumps((s,s), 0)
b'(Vhello\\u000a\np0\nVhello\\u000a\np1\ntp2\n.'
```
Here we see identical data stacked and saved (but not used). The problem is here: https://github.com/python/cpython/blob/42a86df3a376a77a94ffe6b4011a82cf51dc336a/Lib/pickle.py#L860-L866
where the return from `obj.replace` may be a different object from `obj`. In CPython, that is only if a replacement takes place, which is why the problem only appears in the second case above.
`save_str` is only called when the object has not already been memoized, but in the cases at issue, the string memoized is not the original object, and so when the original string object is presented again, `save_str` is called again.
Depending upon the detailed behaviour of `str.replace` (in particular, if you decide to return an interned value when the result is, say, a Latin-1 character) an assertion may fail in `memoize()`: https://github.com/python/cpython/blob/42a86df3a376a77a94ffe6b4011a82cf51dc336a/Lib/pickle.py#L504-L507 I have not managed to trigger an `AssertionError` in CPython.
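For comparison, a small check against the C-accelerated pickler (assuming the `_pickle` extension is available, as in standard CPython builds) shows the intended memoization behaviour:

```python
import pickle

s = "hello\n"

# The C pickler memoizes the original string object, so the escaped text
# appears only once and the second reference is a memo get ("g0").
c_data = pickle.dumps((s, s), protocol=0)
assert c_data.count(b"hello") == 1

# After loading, both tuple slots refer to the same object.
t = pickle.loads(c_data)
assert t[0] is t[1] and t[0] == s
```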
This has probably gone unnoticed so long only because `pickle.py` is not tested. (At least, I think it isn't. #105250 and #53350 note this coverage problem.)
### CPython versions tested on:
3.11
### Operating systems tested on:
Windows
<!-- gh-linked-prs -->
### Linked PRs
* gh-113436
* gh-113448
* gh-113449
<!-- /gh-linked-prs -->
| 08398631a0298dcf785ee7bd0e26c7844823ce59 | 894f0e573d9eb49cd5864c44328f10a731852dab |
python/cpython | python__cpython-113821 | # test_variable_tzname fails in 3.11.7 and 3.12.1 due to timezone change
Commit 46407fe79ca78051cbf6c80e8b8e70a228f9fa50 seems not to be generic enough: running the test on a system where the tests were passing before now fails:
```
======================================================================
FAIL: test_variable_tzname (test.test_email.test_utils.LocaltimeTests.test_variable_tzname)
----------------------------------------------------------------------
Traceback (most recent call last):
File "python3.12/test/support/__init__.py", line 890, in inner
return func(*args, **kwds)
File "python3.12/test/test_email/test_utils.py", line 152, in test_variable_tzname
self.assertEqual(t1.tzname(), 'MSK')
AssertionError: 'Europe' != 'MSK'
- Europe
+ MSK
----------------------------------------------------------------------
```
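A minimal sketch of the distinction that seems to be in play (assuming a POSIX platform with `time.tzset()`): a POSIX-style `TZ` value carries the abbreviation directly, while an IANA zone name like `Europe/Moscow` is resolved through the system's zoneinfo database and may not resolve to `MSK` on every system, which is why hard-coding the expected name is fragile.

```python
import os
import time

# POSIX-style TZ string: the abbreviation "MSK" is taken straight from
# the value, independent of the system's zoneinfo database.
os.environ["TZ"] = "MSK-3"
time.tzset()
assert time.tzname[0] == "MSK"
```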
<!-- gh-linked-prs -->
### Linked PRs
* gh-113821
* gh-113831
* gh-113832
* gh-113835
* gh-126438
* gh-126477
<!-- /gh-linked-prs -->
| 931d7e052e22aa01e18fcc67ed71b6ea305aff71 | 92f96240d713a5a36c54515e44445b3cd0947163 |
python/cpython | python__cpython-113025 | # C API: Add PyObject_GenericHash() function
# Feature or enhancement
The just-added `Py_HashPointer()` function can be used to implement the default Python object hashing function (`object.__hash__()`), but only in CPython. In other Python implementations, the default object hash can depend not on the object's address but on its identity.
I think that we need a new function, `PyObject_GenericHash()` (similar to `PyObject_GenericGetAttr()` etc), that does not depend on CPython implementation details.
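For reference, CPython's current default hash is a rotated pointer value; the sketch below (an assumption-laden illustration: it presumes a 64-bit CPython build and mirrors the `Py_HashPointer()` rotate-right-by-4 detail) shows exactly the implementation detail that a portable `PyObject_GenericHash()` would hide from extension authors:

```python
class Plain:
    pass

o = Plain()
p = id(o)  # in CPython, id() is the object's address

# Rotate the pointer right by 4 bits, as Py_HashPointer() does, then
# reinterpret the result as a signed 64-bit value (64-bit build assumed).
rotated = ((p >> 4) | ((p & 0xF) << 60)) & (2**64 - 1)
if rotated >= 2**63:
    rotated -= 2**64
if rotated == -1:  # -1 is reserved for errors at the C level
    rotated = -2
assert hash(o) == rotated
```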
<!-- gh-linked-prs -->
### Linked PRs
* gh-113025
<!-- /gh-linked-prs -->
| e2e0b4b4b92694ba894e02b4a66fd87c166ed10f | 567ab3bd15398c8c7b791f3e376ae3e3c0bbe079 |
python/cpython | python__cpython-113032 | # pystats specialization deferred numbers are wrong
# Bug report
### Bug description:
For many of the "deferred" values in the "specialization" section for each bytecode, the number is enormous. [For example](https://github.com/faster-cpython/benchmarking-public/blob/main/results/bm-20231210-3.13.0a2%2B-23df46a-PYTHON_UOPS/bm-20231210-azure-x86_64-python-23df46a1dde82bc5a515-3.13.0a2%2B-23df46a-pystats.md#for_iter-1)
Kind | Count | Ratio
-- | -- | --
deferred | 737,869,762,948,500,785,112 | 53,482,007,414,484.3%
hit | 1,120,068,160 | 81.2%
miss | 138,055,084 | 10.0%
We need to investigate where this is coming from (either the Python runtime or the summarize_stats.py script) and fix appropriately.
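One quick sanity check on the reported number (a hypothetical decomposition, assuming the counter occupies adjacent 64-bit words in memory): the bogus value splits cleanly at a `2**64` boundary, with a small high word and a low word in the same ballpark as the other counters, which would be consistent with the high word being overwritten garbage rather than a genuine count.

```python
n = 737_869_762_948_500_785_112  # reported "deferred" count

hi, lo = divmod(n, 2**64)
# hi is small garbage (40); lo (~1.2e8) is plausible next to the
# reported hit (1,120,068,160) and miss (138,055,084) counts.
assert (hi, lo) == (40, 118_720_472)
```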
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-113032
<!-- /gh-linked-prs -->
| dfaa9e060bf6d69cb862a2ac140b8fccbebf3000 | 81a15ea74e2607728fceb822dfcc1aabff00478a |