repo | instance_id | problem_statement | merge_commit | base_commit |
|---|---|---|---|---|
python/cpython | python__cpython-117989 | # docs (easy): Add reference to the Python docs theme gh repo
The paragraph should be added in bugs.html
```
If you find a bug in the theme (HTML / CSS / JavaScript) of the
documentation, please submit a bug report on the `python-doc-theme bug
tracker <https://github.com/python/python-docs-theme>`_.
```
_Originally posted by @JulienPalard in https://github.com/python/python-docs-theme/issues/21#issuecomment-411396446_
<!-- gh-linked-prs -->
### Linked PRs
* gh-117989
* gh-118031
* gh-118040
* gh-118041
<!-- /gh-linked-prs -->
| 468b9aeb922470c26275ce7dda1e6d570a3323f3 | 17ed54bc96f0ae2420aa380bab004baa5a18d4d1 |
python/cpython | python__cpython-117988 | # tarfile.py: TarFile.addfile not adding all files
# Bug report
### Bug description:
I would expect both files `A` and `B` to be stored in the tar file. However, only `A` is archived.
```sh
# creating the test directory
!rm -rf test1.tar test1
!mkdir test1
!echo thisisa >test1/A
!echo thisisb >test1/B
```
```Python
import tarfile
archive = tarfile.open("test1.tar", mode="w", format=tarfile.GNU_FORMAT)
archive.addfile(archive.gettarinfo(name="test1/A"))
archive.addfile(archive.gettarinfo(name="test1/B"))
archive.close()
print(tarfile.open("test1.tar", mode="r").getnames())
```
Expected output:
```
['test1/A', 'test1/B']
```
Returned output:
```
['test1/A']
```
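For contrast, here is a self-contained sketch of the documented pattern for regular files, where a file object is passed alongside the `TarInfo` so the data is actually stored (temporary paths, not the reporter's exact setup):

```python
import os
import tarfile
import tempfile

def archive_with_fileobj(members):
    """Write each (name, data) pair with addfile(info, fileobj) and
    return the member names read back from the resulting archive."""
    with tempfile.TemporaryDirectory() as d:
        for name, data in members:
            with open(os.path.join(d, name), "wb") as f:
                f.write(data)
        tar_path = os.path.join(d, "out.tar")
        with tarfile.open(tar_path, mode="w", format=tarfile.GNU_FORMAT) as tar:
            for name, _ in members:
                path = os.path.join(d, name)
                info = tar.gettarinfo(name=path, arcname=name)
                with open(path, "rb") as f:
                    tar.addfile(info, f)   # fileobj supplied: data is stored
        with tarfile.open(tar_path) as tar:
            return tar.getnames()

print(archive_with_fileobj([("A", b"thisisa\n"), ("B", b"thisisb\n")]))  # ['A', 'B']
```

With the data written out, the archive offsets stay aligned and both members appear on read-back.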
Reproduced on these Python versions:
```
Python 3.11.6 (main, Nov 28 2023, 09:22:32) [Clang 14.0.0 (clang-1400.0.29.202)] on darwin
Python 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0] on linux
```
### CPython versions tested on:
3.11
### Operating systems tested on:
macOS
<!-- gh-linked-prs -->
### Linked PRs
* gh-117988
<!-- /gh-linked-prs -->
| 15b3555e4a47ec925c965778a415dc11f0f981fd | 3e7d990a09f0928050b2b0c85f724c2bce13fcbb |
python/cpython | python__cpython-116918 | # Why do we have two counters for function versions?
In the interpreter state there are two separate counters that are used to generate unique versions for code and function objects. Both counters share the following properties:
- The counter is an unsigned 32-bit int initialized to `1`
- When a new version is needed, the current value of the counter is used, and unless the counter is zero, the counter is incremented
- When the counter overflows to `0`, it remains zero, and no new versions can be allocated (in this case the specializer will just give up -- hopefully this never happens in real code, creating four billion code objects is just hard to imagine)
There are two ways that a **function object** obtains a version:
- The usual path is when a function object is created by the `MAKE_FUNCTION` bytecode. In this case it gets the version from the code object. (See below.)
- When an assignment is made to `func.__code__` or a few other attributes, the function version is reset, and if the specializer later needs a function version, a new version number is taken from `interp->func_state.next_version` by `_PyFunction_GetVersionForCurrentState()`.
Code objects obtain a version when they are created from `interp->next_func_version`.
I believe there's a scenario where two unrelated function objects could end up with the same version:
- one function is created with a version derived from its code object, i.e. from `interp->next_func_version`;
- the other has had its `__code__` attribute modified, and then obtains a fresh version from `interp->func_state.next_version`, which somehow has the same value as the other counter had -- this should be possible because the counters are managed independently.
I looked a bit into the history, and it looks like in 3.11 there was only a static global `next_func_version` in funcobject.c, which was used by `_PyFunction_GetVersionForCurrentState()`. In 3.12 the current structure exists, except `next_func_version` is still a static global (`func_state` however is an interpreter member). It seems that this was introduced by gh-98525 (commit: https://github.com/python/cpython/commit/fb713b21833a17cba8022af0fa4c486512157d4b). Was it intentional to use two separate counters? If so, what was the rationale? (If it had to do with deepfreeze, that rationale is now gone, as we no longer use it.)
<!-- gh-linked-prs -->
### Linked PRs
* gh-116918
<!-- /gh-linked-prs -->
| 7e1f38f2de8f93de362433203faa5605a0c47f0e | 76d086890790f1bfbe05d12e02cadb539db5b0b1 |
python/cpython | python__cpython-116934 | # ``test_capi`` leaks references
# Bug report
### Bug description:
```python
./python -m test -R 3:3 test_capi
Using random seed: 2023738098
0:00:00 load avg: 1.04 Run 1 test sequentially
0:00:00 load avg: 1.04 [1/1] test_capi
beginning 6 repetitions. Showing number of leaks (. for 0 or less, X for 10 or more)
123:456
XXX XXX
test_capi leaked [160, 160, 160] references, sum=480
test_capi leaked [112, 112, 112] memory blocks, sum=336
test_capi failed (reference leak) in 36.5 sec
== Tests result: FAILURE ==
1 test failed:
test_capi
Total duration: 36.5 sec
Total tests: run=840 skipped=4
Total test files: run=1/1 failed=1
Result: FAILURE
```
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-116934
<!-- /gh-linked-prs -->
| b3f0c1591a85d335c89dc38a177d116d2017502d | 2982bdb936f76518b29cf7de356eb5fafd22d112 |
python/cpython | python__cpython-117044 | # TSan: data race in pending calls
# Bug report
### Bug description:
Running test_capi with TSan enabled and the GIL enabled yields the following race condition:
```
test_isolated_subinterpreter (test.test_capi.test_misc.TestPendingCalls.test_isolated_subinterpreter) ...
==================
WARNING: ThreadSanitizer: data race (pid=84714)
Write of size 4 at 0x7f72e30d2038 by main thread:
#0 _push_pending_call /home/antoine/cpython/default/Python/ceval_gil.c:674:25 (python+0x4b860b) (BuildId: 80a378e5bf8e9a3b503d1ddb455283576a308db6)
#1 _PyEval_AddPendingCall /home/antoine/cpython/default/Python/ceval_gil.c:746:18 (python+0x4b860b)
#2 pending_identify /home/antoine/cpython/default/./Modules/_testinternalcapi.c:1135:13 (_testinternalcapi.cpython-313d-x86_64-linux-gnu.so+0x8d77) (BuildId: 0997eb086570be7275cd70cbfb0545cf77f5f79c)
#3 cfunction_call /home/antoine/cpython/default/Objects/methodobject.c:551:18 (python+0x2cf92c) (BuildId: 80a378e5bf8e9a3b503d1ddb455283576a308db6)
[...]
Previous atomic read of size 4 at 0x7f72e30d2038 by thread T1 (mutexes: write M0):
#0 _Py_atomic_load_int32_relaxed /home/antoine/cpython/default/./Include/cpython/pyatomic_gcc.h:319:10 (python+0x4b7cfc) (BuildId: 80a378e5bf8e9a3b503d1ddb455283576a308db6)
#1 update_eval_breaker_for_thread /home/antoine/cpython/default/Python/ceval_gil.c:87:27 (python+0x4b7cfc)
#2 take_gil /home/antoine/cpython/default/Python/ceval_gil.c:385:5 (python+0x4b7cfc)
#3 _PyEval_AcquireLock /home/antoine/cpython/default/Python/ceval_gil.c:555:5 (python+0x4b81be) (BuildId: 80a378e5bf8e9a3b503d1ddb455283576a308db6)
#4 _PyThreadState_Attach /home/antoine/cpython/default/Python/pystate.c:1878:5 (python+0x525943) (BuildId: 80a378e5bf8e9a3b503d1ddb455283576a308db6)
#5 PyEval_RestoreThread /home/antoine/cpython/default/Python/ceval_gil.c:623:5 (python+0x4b843e) (BuildId: 80a378e5bf8e9a3b503d1ddb455283576a308db6)
#6 _Py_write_impl /home/antoine/cpython/default/Python/fileutils.c:1970:13 (python+0x561f0e) (BuildId: 80a378e5bf8e9a3b503d1ddb455283576a308db6)
#7 _Py_write /home/antoine/cpython/default/Python/fileutils.c:2036:12 (python+0x561e1f) (BuildId: 80a378e5bf8e9a3b503d1ddb455283576a308db6)
#8 os_write_impl /home/antoine/cpython/default/./Modules/posixmodule.c:11495:12 (python+0x579d37) (BuildId: 80a378e5bf8e9a3b503d1ddb455283576a308db6)
#9 os_write /home/antoine/cpython/default/./Modules/clinic/posixmodule.c.h:7369:21 (python+0x579d37)
[...]
```
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-117044
<!-- /gh-linked-prs -->
| 9221ef2d8cb7f4cf37592eb650d4c8f972033000 | fc4599800778f9b130d5e336deadbdeb5bd3e5ee |
python/cpython | python__cpython-116903 | # Deprecate support of false values in urllib.parse.parse_qsl()
`urllib.parse.parse_qsl()` returns `[]` for any false value. There were no tests for this, so it was broken by accident in #115771 and restored in #116764.
Historically, the special case was needed to circumvent the fact that `''.split('&')` returns `['']` instead of `[]`. `parse_qsl('')` and `parse_qsl(b'')` should return `[]`. But zero numbers and empty sequences (like `parse_qsl(0)` and `parse_qsl([])`) should be errors. So I propose to deprecate the current behavior for general false values and make them errors in future.
There is an open question about `None`. There is code in the wild that expects `parse_qsl(None)` to work. Although it is not difficult to add workarounds for this, it may be more convenient if `None` is accepted as a valid value.
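The behaviors involved can be illustrated on a current CPython (what happens for other false values such as `0` or `[]` is precisely what this issue proposes to turn into an error):

```python
from urllib.parse import parse_qsl

# The split() quirk that motivated the special case:
assert ''.split('&') == ['']

# The desired empty-input behavior:
assert parse_qsl('') == []
assert parse_qsl(b'') == []

# Normal parsing for comparison:
assert parse_qsl('a=1&b=2') == [('a', '1'), ('b', '2')]
```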
<!-- gh-linked-prs -->
### Linked PRs
* gh-116903
<!-- /gh-linked-prs -->
| 7577307ebdaeef6702b639e22a896080e81aae4e | 03924b5deeb766fabd53ced28ba707e4dd08fb60 |
python/cpython | python__cpython-117018 | # CIFuzz build failures - _freeze_module segfaults during the build, fuzz testing doesn't start.
PR authors and core devs have been seeing CIFuzz failures on PRs not related to their changes of late. These are non-blocking, but confusing.
The CIFuzz build step is failing, it seems that _freeze_module crashes:
```
./Programs/_freeze_module importlib._bootstrap_external ./Lib/importlib/_bootstrap_external.py Python/frozen_modules/importlib._bootstrap_external.h
Segmentation fault (core dumped)
```
I believe _freeze_module got compiled with the memory sanitizer, but no useful MSAN report seems to be generated by that crash.
It is unclear where the problem lies; maybe this is related to the CIFuzz container or to clang?
CIFuzz background: read PR #107653 (discussion and links) and Issue #107652 (the pitch).
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-117018
* gh-117289
<!-- /gh-linked-prs -->
| 2cedd25c14d3acfdcb5e8ee55132ce3e334ab8fe | eefff682f09394fe4f18b7d7c6ac4c635caadd02 |
python/cpython | python__cpython-116885 | # NULL in lexical analysis of f-string
# Documentation
This is somewhat related to #116580. There are two references to NULL in the description of f-strings that don't have a clear meaning.
```
format_spec ::= (literal_char | NULL | replacement_field)*
literal_char ::= <any code point except "{", "}" or NULL>
```
In both cases (but especially `literal_char`), it could refer to U+0000, but I'm unaware that a null character is allowed anywhere in Python source. At least, my attempt to inject one failed:
```pytb
>>> print(ast.dump(ast.parse("f'{x:"'\x00'"}'"), indent=2))
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/ast.py", line 52, in parse
return compile(source, filename, mode, flags,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
SyntaxError: source code string cannot contain null bytes
```
For `format_spec`, it could refer to an empty specification (`f'{x:}'`), but `(literal_char | replacement_field)*` would cover that just as well.
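The empty-spec reading is easy to check; nothing here resolves what NULL means, it only shows that the alternation without NULL would still cover `f'{x:}'`:

```python
import ast

# An empty format_spec is already syntactically valid ...
ast.parse("f'{x:}'")

# ... and formats the same as having no spec at all.
assert f"{3.5:}" == str(3.5) == "3.5"
```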
<!-- gh-linked-prs -->
### Linked PRs
* gh-116885
* gh-116951
* gh-116952
<!-- /gh-linked-prs -->
| 4e45c6c54a9457b1ca5b4cf3aa2843b7218d4414 | 3a99f5c5f34dc7b67597ca7230da355d92927c71 |
python/cpython | python__cpython-116880 | # pystats: New optimizer stats for _Py_uop_analyze_and_optimize are missing from the table
# Bug report
### Bug description:
The new stats added in #115085 were not added to the summarize_stats.py script so are therefore missing from the tables.
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-116880
<!-- /gh-linked-prs -->
| 1a33513f99bf4a9e5122b9cd82945879e73ff44c | bee7e290cdedb17e06f473a2f318c720ba766852 |
python/cpython | python__cpython-116878 | # Update `wheel` to `0.43.0` in `Lib/test/wheeldata`
# Feature or enhancement
The `test_cppext` test is currently disabled in `--disable-gil` builds because the bundled version of `wheel` did not support the "t" suffix in the ABI tag.
This is now fixed and released upstream in wheel 0.43.0.
Let's update the bundled wheel `.whl` file in `Lib/test/wheeldata` and then re-enable `test_cppext` in the free-threaded build.
<!-- gh-linked-prs -->
### Linked PRs
* gh-116878
<!-- /gh-linked-prs -->
| c80d2d3263b3caf579777fd2a98399aeb3497f23 | 52ef4430a9b3e212fe9200675cddede77b90785b |
python/cpython | python__cpython-116930 | # Improve name suggestions for NameError/AttributeError by respecting underscore conventions
# Feature or enhancement
### Proposal:
## Problem description
In more recent versions of Python, for uncaught `NameError`s and `AttributeError`s, the system tries to suggest names that might have been mistyped:
```python
>>> intt
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
NameError: name 'intt' is not defined. Did you mean: 'int'?
```
However, the suggestion logic apparently does not take underscore conventions into account:
```
>>> _foo = 1
>>> foo
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
NameError: name 'foo' is not defined. Did you mean: '_foo'?
```
Commonly, leading underscores on names are used by convention to mark names that should not be referenced directly in client code. (Of course, it's occasionally necessary or desirable to call dunder methods directly, particularly when inheritance is involved, but generally not outside of classes.)
This has the **negative effect that Python itself could recommend** that users invoke undocumented, unstable, or unintentional APIs without good reason. One current real-world example occurs with Pandas, where version 2.0 removed the `append` method from `DataFrame`s, but an `_append` method happens to be left behind as an implementation detail:
```
>>> import pandas as pd
>>> pd.DataFrame().append
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/path/to/lib/python3.11/site-packages/pandas/core/generic.py", line 6296, in __getattr__
return object.__getattribute__(self, name)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: 'DataFrame' object has no attribute 'append'. Did you mean: '_append'?
```
## Proposal
I propose that: **when the invalid identifier or attribute name does not already start with an underscore, valid names that *do* start with an underscore should be excluded** from suggestions. As mentioned in the linked discussion, there is already precedent for this -
1. As implemented by `help`:
```
>>> import pydoc
>>> pydoc.pager = print # to simplify behaviour for the example
>>> class Example:
... __slots__ = [] # to simplify behaviour for the example
... def method(self): pass
... def _method(self): pass
...
>>> help(Example)
Help on class Example in module __main__:
class Example(builtins.object)
| Methods defined here:
|
| method(self)
>>>
```
2. When using star-imports:
```
>>> _, name, result = 1, 2, {}
>>> exec('from __main__ import *', result)
>>> '_' in result
False
>>> 'name' in result
True
```
It still makes sense IMO to suggest underscore names when the invalid name already starts with an underscore - even mixing and matching single- and double-underscore cases. For example, `_init_` is a very plausible typo for `__init__`, especially for beginners who are learning from a book or an old PDF.
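A hypothetical sketch of the proposed filtering, using `difflib` in the same spirit as the interpreter's suggestion machinery (the function name and cutoff are illustrative, not CPython's actual implementation):

```python
import difflib

def suggest(wrong_name, candidates):
    """Return a 'Did you mean' candidate, hiding _-prefixed names
    unless the mistyped name itself starts with an underscore."""
    if not wrong_name.startswith('_'):
        candidates = [c for c in candidates if not c.startswith('_')]
    matches = difflib.get_close_matches(wrong_name, candidates, n=1)
    return matches[0] if matches else None

print(suggest('intt', ['int', 'float']))        # 'int'
print(suggest('foo', ['_foo', 'bar']))          # None: don't surface _foo
print(suggest('_init_', ['__init__', 'len']))   # '__init__'
```

Note the last case: because the mistyped name starts with an underscore, underscore-prefixed candidates stay eligible, matching the mixing-and-matching behavior argued for above.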
### Has this already been discussed elsewhere?
I have already discussed this feature proposal on Discourse
### Links to previous discussion of this feature:
https://discuss.python.org/t/name-suggestions-for-attributeerrors-and-possibly-nameerrors-should-not-include-names-with-single-leading-underscores/48588
<!-- gh-linked-prs -->
### Linked PRs
* gh-116930
<!-- /gh-linked-prs -->
| 0085c3ae8f067abd4f6540d0f6dd2fb13107618e | d6fa1d4beef2bf9d83048469667e0ba5f2b41068 |
python/cpython | python__cpython-116950 | # Python 3.12.2 headers contain non-C90-compliant code
# Bug report
### Bug description:
When compiling a program that links against the Python libraries (and uses their headerfiles) with the `-Werror=declaration-after-statement` compiler flag enabled, the following error occurs:
```
/usr/include/python3.12/object.h:233:5: error: ISO C90 forbids mixed declarations and code [-Werror=declaration-after-statement]
233 | PyVarObject *var_ob = _PyVarObject_CAST(ob);
```
This is fairly similar to previous issue #92781.
Although the C90 style may be archaic and perhaps not worth supporting at some point, maintaining support could be helpful for some downstream platforms (I encountered this in Debian bug [#1057442](https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1057442)).
<details>
<summary>Minimal test cases showing that the problem did not occur with Python 3.11.8 and does occur on Python 3.12.2, to help verify any fixes.</summary>
```c
#include "/usr/include/python3.11/Python.h"
#include "/usr/include/python3.11/object.h"
int main() {}
```
```c
#include "/usr/include/python3.12/Python.h"
#include "/usr/include/python3.12/object.h"
int main() {}
```
Compiled in either case using:
```
$ gcc bugreport.c -Werror=declaration-after-statement
```
or equivalently, using `llvm` v17:
```
$ clang-17 bugreport.c -Werror=declaration-after-statement
```
</details>
### CPython versions tested on:
3.12
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-116950
* gh-116954
* gh-116963
* gh-116973
* gh-116990
* gh-116995
* gh-117010
* gh-117011
* gh-117043
* gh-117106
<!-- /gh-linked-prs -->
| a9c304cf020e2fa3ae78fd88359dfc808c9dd639 | 590a26010d5d7f27890f89820645580bb8f28547 |
python/cpython | python__cpython-116829 | # Free-threaded builds can experience heavy contention on `PyType_IsSubtype`
# Bug report
### Bug description:
`PyType_IsSubtype` takes the type lock to ensure that the MRO is accessed in a thread-safe manner, as it needs to read and incref it. This can result in heavy contention in some pretty common scenarios (e.g. when `sys.setprofile` is used, we experience heavy contention across threads doing `PyCFunction_Check`; profiling simple workloads can be nearly 50% slower in some cases). We can avoid this by using `Py_TryIncref` and only falling back to the lock when the MRO isn't yet shared.
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-116829
<!-- /gh-linked-prs -->
| 280de3661b42af9b3fe792764d0b09f403df5223 | ebf29b3a02d5b42a747e271e9cfc4dd73c01ebe6 |
python/cpython | python__cpython-116861 | # `test_parserhack` in `test_future` is outdated
# Bug report
This test was written for Python 2.6, which is long gone. It tests an implementation detail which is also long gone:
https://github.com/python/cpython/blob/2cf18a44303b6d84faa8ecffaecc427b53ae121e/Lib/test/test_future_stmt/test_future.py#L174-L192
We have new tests for `print` keyword in other modules:
https://github.com/python/cpython/blob/2cf18a44303b6d84faa8ecffaecc427b53ae121e/Lib/test/test_exceptions.py#L176-L183
I propose to remove this test; it does not add any value and might be confusing to readers.
<!-- gh-linked-prs -->
### Linked PRs
* gh-116861
* gh-119648
* gh-119649
<!-- /gh-linked-prs -->
| 669175bf8edc2c02d48401bac0e4c7d99a33f15b | b313cc68d50de5fb5f43acffd402c5c4da6516fc |
python/cpython | python__cpython-116859 | # Mark some tests `test_cmd_line` as `cpython_only`
# Bug report
There are different tests that are skipped due to various reasons (like exact `--help` output matches or `malloc` usages, frozen imports, etc) on other implementations. For example, RustPython: https://github.com/RustPython/RustPython/blob/92c8b371ae5db0d95bd8199bc42b08af115bb88a/Lib/test/test_cmd_line.py
Take note that some skips look more like bugs / incomplete features in RustPython; however, it is better to mark the tests that we can be sure about as CPython-only.
I have a PR ready with the full description of each case.
<!-- gh-linked-prs -->
### Linked PRs
* gh-116859
* gh-116889
* gh-116890
<!-- /gh-linked-prs -->
| a1c4923d65a3e4cea917745e7f6bc2e377cde5c5 | 0c7dc494f2a32494f8971a236ba59c0c35f48d94 |
python/cpython | python__cpython-116852 | # The documentation for ctypes may be ambiguous.
# Documentation

In this picture, using `libc.printf` right after `from ctypes import *` may mislead the reader into thinking that `libc` comes from `ctypes`.
I think we should remove `from ctypes import *` and instead tell the reader that this snippet continues from the code above.
<!-- gh-linked-prs -->
### Linked PRs
* gh-116852
* gh-116905
* gh-116906
<!-- /gh-linked-prs -->
| 744c0777952f1e535d1192ee15b286aa67b61533 | 33da0e844c922b3dcded75fbb9b7be67cb013a17 |
python/cpython | python__cpython-124667 | # argparse: parse_args with a subparser + namespace tries to setdefault on a mappingproxy object
# Bug report
### Bug description:
I was using the namespace argument in parse_args and found that argparse tries to setdefault on a mapping proxy when parsing unknown extra attributes:
```python
import argparse
parser = argparse.ArgumentParser()
subparser = argparse.ArgumentParser()
subparsers = parser.add_subparsers()
subparsers.add_parser(name="sub", add_help=False)
class MyNamespace:
pass
try:
parser.parse_args(["sub", "oops"], namespace=MyNamespace)
except SystemExit:
pass
```
```plaintext
Traceback (most recent call last):
File "<path_to_script>", line 16, in <module>
parser.parse_args(["sub", "oops"], namespace=MyNamespace)
File "<python_installation>/lib/python3.12/argparse.py", line 1891, in parse_args
args, argv = self.parse_known_args(args, namespace)
File "<python_installation>/lib/python3.12/argparse.py", line 1924, in parse_known_args
namespace, args = self._parse_known_args(args, namespace)
File "<python_installation>/lib/python3.12/argparse.py", line 2139, in _parse_known_args
stop_index = consume_positionals(start_index)
File "<python_installation>/lib/python3.12/argparse.py", line 2095, in consume_positionals
take_action(action, args)
File "<python_installation>/lib/python3.12/argparse.py", line 2000, in take_action
action(self, namespace, argument_values, option_string)
File "<python_installation>/lib/python3.12/argparse.py", line 1268, in __call__
vars(namespace).setdefault(_UNRECOGNIZED_ARGS_ATTR, [])
AttributeError: 'mappingproxy' object has no attribute 'setdefault'
```
I suspect this is because we're attempting to do vars(MyNamespace) instead of vars(argparse.Namespace) but I don't know much more than that.
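The suspicion can be checked directly, independent of argparse: `vars()` on a class returns a read-only `mappingproxy`, while `vars()` on an instance returns a plain `dict` that supports `setdefault` (the attribute name below is illustrative, not argparse's private constant):

```python
import types

class MyNamespace:
    pass

# vars() on the class itself yields a read-only mappingproxy ...
assert isinstance(vars(MyNamespace), types.MappingProxyType)

# ... while vars() on an instance yields a writable dict.
d = vars(MyNamespace())
assert isinstance(d, dict)
d.setdefault("_unrecognized_args", [])  # works on the instance's dict
```

So passing `namespace=MyNamespace` (the class) rather than `namespace=MyNamespace()` (an instance) is what puts the mappingproxy in argparse's hands.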
I skimmed the top couple entries on the issue tracker with keywords 'argparse mappingproxy / argparse namespace' and found nothing, so I've decided that maybe nobody has reported this yet.
Using python 3.12.1 / argparse 1.1 / osx-arm64.
### CPython versions tested on:
3.12
### Operating systems tested on:
macOS
<!-- gh-linked-prs -->
### Linked PRs
* gh-124667
* gh-124757
* gh-124758
<!-- /gh-linked-prs -->
| 95e92ef6c74e973ea13d15180190d0fa2af82fbf | f1a2417b9e2993e584610851ac004c8b0599b323 |
python/cpython | python__cpython-116846 | # Itertools recipes improvements
# roundrobin
https://github.com/python/cpython/blob/5f52d20a93908196f74271db8437cc1ba7e1e262/Doc/library/itertools.rst?plain=1#L1570-L1571
That looks weird. Round-robin with a single iterable? And the ranges aren't iterated? I suspect this was intended:
```python
collections.Counter(roundrobin(*ranges)) == collections.Counter(chain(*ranges))
```
# unique_everseen
These could be lazier by yielding *first* (before updating `seen`):
https://github.com/python/cpython/blob/7bbb9b57e67057d5ca3b7e3a434527fb3fcf5a2b/Doc/library/itertools.rst?plain=1#L894-L895
https://github.com/python/cpython/blob/7bbb9b57e67057d5ca3b7e3a434527fb3fcf5a2b/Doc/library/itertools.rst?plain=1#L900-L901
The similar "roughly equivalent" code of [`cycle`](https://docs.python.org/3.13/library/itertools.html#itertools.cycle) and the `factor` recipe also yield as early as they can.
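As a sketch, the lazier form suggested here simply moves the `yield` before the `seen.add()` (simplified from the documented recipe, which also handles unhashable elements):

```python
def unique_everseen(iterable, key=None):
    "Yield unique elements, preserving order, yielding before recording."
    seen = set()
    for element in iterable:
        k = element if key is None else key(element)
        if k not in seen:
            yield element      # yield first, so consumers aren't delayed
            seen.add(k)        # ... then remember the key

print(list(unique_everseen('AAAABBBCCDAABBB')))  # ['A', 'B', 'C', 'D']
```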
# iter_index
https://github.com/python/cpython/blob/7bbb9b57e67057d5ca3b7e3a434527fb3fcf5a2b/Doc/library/itertools.rst?plain=1#L1420
Not really subsequence searches but *substring* searches. For example, "fbr" is a subsequence of "foobar", but they don't search for that. See [Subsequence](https://en.wikipedia.org/wiki/Subsequence?wprov=sfla1) and [Substring](https://en.wikipedia.org/wiki/Substring?wprov=sfla1) at Wikipedia.
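The distinction can be illustrated in a few lines (the subsequence test below is the classic iterator idiom, not code from the recipes):

```python
# "fbr" is not a substring of "foobar" ...
assert "fbr" not in "foobar"

def is_subsequence(s, t):
    """True if the characters of s appear in t, in order, not necessarily
    contiguously. `c in it` advances the iterator past each match."""
    it = iter(t)
    return all(c in it for c in s)

# ... but it is a subsequence.
assert is_subsequence("fbr", "foobar")
```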
@rhettinger
<!-- gh-linked-prs -->
### Linked PRs
* gh-116846
* gh-116847
<!-- /gh-linked-prs -->
| 41e844a4acbd5070f675e034e31c988b4849dec9 | 1904f0a2245f500aa85fba347b260620350efc78 |
python/cpython | python__cpython-116775 | # Make `sys.settrace`, `sys.setprofile`, and `sys.monitoring` thread safe in `--disable-gil` builds
# Feature or enhancement
### Proposal:
These need to work properly when called with multiple threads running.
https://github.com/colesbury/nogil-3.12/commit/82800d8ec8
### Has this already been discussed elsewhere?
No response given
### Links to previous discussion of this feature:
https://github.com/python/cpython/issues/108219
<!-- gh-linked-prs -->
### Linked PRs
* gh-116775
<!-- /gh-linked-prs -->
| 07525c9a85e7fa04db565c6e6e6605ff6099dcb7 | b45af00bad3449b85c8f54ba7a9474ca1f52de69 |
python/cpython | python__cpython-116812 | # PathFinder.invalidate_caches should include MetadataPathFinder.invalidate_caches
`MetadataPathFinder.invalidate_caches` should be called from `PathFinder.invalidate_caches`, as that will match the behavior that would happen if MetadataPathFinder were on `sys.meta_path`.
_Originally posted by @jaraco in https://github.com/python/cpython/issues/116763#issuecomment-1997511772_
<!-- gh-linked-prs -->
### Linked PRs
* gh-116812
* gh-116864
* gh-116865
<!-- /gh-linked-prs -->
| 5f52d20a93908196f74271db8437cc1ba7e1e262 | be59aaf3abec37b27bdb31fadf433665e5471a46 |
python/cpython | python__cpython-123249 | # getting ssl.SSLSocket.session brings to memory leak
# Bug report
### Bug description:
Ubuntu 22.04
python 3.12, python3.11
```python
import ssl
import socket
import time
host = '192.168.16.66' # some server we can connect with https as example
port = 443
session = None
context = ssl._create_unverified_context(protocol=ssl.PROTOCOL_TLSv1_2)
with socket.create_connection((host, port)) as sock:
with context.wrap_socket(sock, server_hostname=host, session = session) as ssock:
for i in range(300000):
session = ssock.session
print('Sleeping')
time.sleep(200)
```
This is part of a script used for SSL session reuse when connecting to a server.
Running this script leads to a memory leak (process memory increased from 322 MB to 2.5 GB).
The memory is not freed during the `time.sleep` call.
Not reproduced on Python 3.8.
### CPython versions tested on:
3.11, 3.12
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-123249
* gh-124800
* gh-124801
<!-- /gh-linked-prs -->
| 7e7223e18f58ec48fb36a68fb75b5c5b7a45042a | 1c0bd8bd00287d3bd6830aca87bb14e047192008 |
python/cpython | python__cpython-116827 | # pystats: optimized_trace_length histogram includes exits
# Bug report
### Bug description:
With the addition of cold exits, the "optimized trace length" histogram now includes [the exit count + the optimized trace length](https://github.com/python/cpython/blob/main/Python/optimizer.c#L880).
While this is a useful metric for the overall executor size, it is no longer useful for seeing the number of instructions that the optimizer was able to remove from the hot path.
I propose we break this out into two stats: the optimized trace length and the exit count.
Does this make sense, @markshannon?
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-116827
<!-- /gh-linked-prs -->
| 0f278012e88fa9607d85bc6c7265fd394f0ac163 | 5405e9e5b51f3bd883aee5c1a52a39a56e2fb2b4 |
python/cpython | python__cpython-116787 | # Direct invocation of `test_inspect/test_inspect.py` fails
# Bug report
```pytb
» ./python.exe Lib/test/test_inspect/test_inspect.py
/Users/sobolev/Desktop/cpython2/Lib/test/test_inspect/test_inspect.py:43: ImportWarning: can't resolve package from __spec__ or __package__, falling back on __name__ and __path__
from . import inspect_fodder as mod
Traceback (most recent call last):
File "/Users/sobolev/Desktop/cpython2/Lib/test/test_inspect/test_inspect.py", line 43, in <module>
from . import inspect_fodder as mod
ImportError: attempted relative import with no known parent package
```
I will send a PR.
<!-- gh-linked-prs -->
### Linked PRs
* gh-116787
* gh-116794
* gh-116795
<!-- /gh-linked-prs -->
| 66fb613d90fe3dea32130a5937963a9362c8a59e | d4028724f2c8c674202615b772913765423c69fd |
python/cpython | python__cpython-116783 | # `inspect.getmembers` docs do not mention `__type_params__` for functions and classes
# Bug report
Docs do not mention the `__type_params__` key in the values returned for classes and functions: https://github.com/python/cpython/blob/a18c9854e8c255981f07c0a1c1503253f85b7540/Doc/library/inspect.rst#L47-L58 https://github.com/python/cpython/blob/a18c9854e8c255981f07c0a1c1503253f85b7540/Doc/library/inspect.rst#L75-L109
But, they are returned:
```python
>>> class A[T]: ...
...
>>> import inspect
>>> dict(inspect.getmembers(A))['__type_params__']
(T,)
```
I will send a PR for that.
<!-- gh-linked-prs -->
### Linked PRs
* gh-116783
* gh-116870
<!-- /gh-linked-prs -->
| 16349868d396cc1bff5188de3638321e87fe0293 | 8da83f3386da603605358dc9ec68796daa5ef455 |
python/cpython | python__cpython-116788 | # Test `test_inspect` fails with `-OO`
# Bug report
```
======================================================================
FAIL: test_class_inside_conditional (test.test_inspect.test_inspect.TestBuggyCases.test_class_inside_conditional)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/Users/sobolev/Desktop/cpython2/Lib/test/test_inspect/test_inspect.py", line 995, in test_class_inside_conditional
self.assertSourceEqual(mod2.cls238.cls239, 239, 240)
~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/sobolev/Desktop/cpython2/Lib/test/test_inspect/test_inspect.py", line 550, in assertSourceEqual
self.assertEqual(inspect.getsource(obj),
~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^
self.sourcerange(top, bottom))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AssertionError: " class cls239:\n '''else clause 239'''\n pass\n" != " class cls239:\n '''if clause cls239'''\n"
class cls239:
- '''else clause 239'''
? ^^^^
+ '''if clause cls239'''
? ^^ +++
- pass
======================================================================
FAIL: test_base_class_have_text_signature (test.test_inspect.test_inspect.TestSignatureDefinitions.test_base_class_have_text_signature)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/Users/sobolev/Desktop/cpython2/Lib/test/test_inspect/test_inspect.py", line 5315, in test_base_class_have_text_signature
self.assertEqual(text_signature, '(raw, buffer_size=DEFAULT_BUFFER_SIZE)')
~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AssertionError: None != '(raw, buffer_size=DEFAULT_BUFFER_SIZE)'
----------------------------------------------------------------------
Ran 297 tests in 0.536s
```
I will send a PR.
<!-- gh-linked-prs -->
### Linked PRs
* gh-116788
* gh-116799
<!-- /gh-linked-prs -->
| f20dfb7569a769da50cd75f4932b9abe78e55d75 | 19ac28bd08fdb16795e6f82ea7bfac73e8f3791b |
python/cpython | python__cpython-116774 | # Windows: ProactorEventLoop: possible memory corruption leading to crashes
# Crash report
### What happened?
Also falls under: `RuntimeError: <_overlapped.Overlapped object at 0x<address>> still has pending operation at deallocation, the process may crash`.
We have an application that embeds Python and needs to tick the event loop once (or more) per frame. Generally this is done with the event loop idiom: `stop()` / `run_forever()`. (Once #110771 is available we intend to use that; for now, our workaround is keeping the event loop running between frames, similar to #110773.)
(On my machine this has an 80% chance of printing the above error within 60 seconds, higher the longer it runs)
```python
import asyncio
import threading
import time
main_loop = asyncio.get_event_loop_policy().new_event_loop()
assert isinstance(main_loop, asyncio.ProactorEventLoop)
def threadMain():
while True:
main_loop.call_soon_threadsafe(lambda: None)
time.sleep(0.01)
thr = threading.Thread(target=threadMain, daemon=True)
thr.start()
while True:
main_loop.stop()
main_loop.run_forever()
```
## Initial diagnosis
Typically when the "still has pending operation" error message is printed, `_overlapped_Overlapped_cancel_impl()` will have been called which will call `CancelIoEx()` (returns `TRUE`).
When `Overlapped_dealloc()` is called and overlapped IO has not completed, `CancelIoEx()` is called again, returning `FALSE` with GLE=`ERROR_NOT_FOUND` (because the previous cancel succeeded). The existing code then releases the GIL and calls `GetOverlappedResult()` (`wait = FALSE`) which also returns `FALSE` with GLE=`ERROR_IO_PENDING`. Since Windows is not done with the `OVERLAPPED` struct the error message is printed and the memory is dangerously freed.
Changing `wait = TRUE` if either the `_overlapped_Overlapped_cancel_impl()` call to `CancelIoEx()` succeeds, or the one in `Overlapped_dealloc()` succeeds will result in a hang: `GetOverlappedResult()` will never return. This is because IO Completion Ports are in use and `GetQueuedCompletionStatus()` needs to be queried until it returns the `OVERLAPPED` in question.
### CPython versions tested on:
3.10, 3.11, CPython main branch
### Operating systems tested on:
Windows
### Output from running 'python -VV' on the command line:
Python 3.13.0a4+ (heads/main:bb66600558, Mar 13 2024, 18:46:58) [MSC v.1938 64 bit (AMD64)]
<!-- gh-linked-prs -->
### Linked PRs
* gh-116774
* gh-117078
* gh-117077
* gh-117079
* gh-117080
* gh-117083
<!-- /gh-linked-prs -->
| fc4599800778f9b130d5e336deadbdeb5bd3e5ee | 519b2ae22b54760475bbf62b9558d453c703f9c6 |
python/cpython | python__cpython-118348 | # Python/flowgraph.c:701: basicblock *push_except_block(struct _PyCfgExceptStack *, cfg_instr *): Assertion `stack->depth <= CO_MAXBLOCKS' failed
# Crash report
### What happened?
Minimal reproducer
```python
async def t():
async with h,t,t,o,f,y,o,t,r,o,f,t,f,r,t,m,r,o,t,l:n
```
```shell
~/p/cpython ❯❯❯ ./python.exe -c 'async def t():
async with h,t,t,o,f,y,o,t,r,o,f,t,f,r,t,m,r,o,t,l:n'
Assertion failed: (stack->depth <= CO_MAXBLOCKS), function push_except_block, file flowgraph.c, line 707.
fish: Job 1, './python.exe -c 'async def t():' terminated by signal async with h,t,t,o,f,y,o,t,r,o… (SIGABRT)
fish: Job Abort, '' terminated by signal ()
```
Full ASAN:
```
fuzz_pycompile: Python/flowgraph.c:701: basicblock *push_except_block(struct _PyCfgExceptStack *, cfg_instr *): Assertion `stack->depth <= CO_MAXBLOCKS' failed.
--
| AddressSanitizer:DEADLYSIGNAL
| =================================================================
| ==16858==ERROR: AddressSanitizer: ABRT on unknown address 0x0539000041da (pc 0x7a22fe4c600b bp 0x7a22fe63b588 sp 0x7ffe3dd3ff40 T0)
| SCARINESS: 10 (signal)
| #0 0x7a22fe4c600b in raise /build/glibc-SzIz7B/glibc-2.31/sysdeps/unix/sysv/linux/raise.c:51:1
| #1 0x7a22fe4a5858 in abort /build/glibc-SzIz7B/glibc-2.31/stdlib/abort.c:79:7
| #2 0x7a22fe4a5728 in __assert_fail_base /build/glibc-SzIz7B/glibc-2.31/assert/assert.c:92:3
| #3 0x7a22fe4b6fd5 in __assert_fail /build/glibc-SzIz7B/glibc-2.31/assert/assert.c:101:3
| #4 0x8f5084 in push_except_block cpython3/Python/flowgraph.c:701:5
| #5 0x8f5084 in label_exception_targets cpython3/Python/flowgraph.c:896:27
| #6 0x8e5db6 in _PyCfg_OptimizeCodeUnit cpython3/Python/flowgraph.c:2490:5
| #7 0x8702e3 in optimize_and_assemble_code_unit cpython3/Python/compile.c:7598:9
| #8 0x8702e3 in optimize_and_assemble cpython3/Python/compile.c:7640:12
| #9 0x8948d7 in compiler_function_body cpython3/Python/compile.c:2309:24
| #10 0x8948d7 in compiler_function cpython3/Python/compile.c:2410:9
| #11 0x8770af in compiler_visit_stmt cpython3/Python/compile.c:0
| #12 0x874f14 in compiler_body cpython3/Python/compile.c:1729:9
| #13 0x86c2fc in compiler_codegen cpython3/Python/compile.c:1740:13
| #14 0x869346 in compiler_mod cpython3/Python/compile.c:1781:9
| #15 0x869346 in _PyAST_Compile cpython3/Python/compile.c:555:24
| #16 0x9f3d07 in Py_CompileStringObject cpython3/Python/pythonrun.c:1449:10
| #17 0x9f3dfc in Py_CompileStringExFlags cpython3/Python/pythonrun.c:1462:10
| #18 0x4f8fcc in fuzz_pycompile cpython3/Modules/_xxtestfuzz/fuzzer.c:551:24
| #19 0x4f8fcc in _run_fuzz cpython3/Modules/_xxtestfuzz/fuzzer.c:570:14
| #20 0x4f8fcc in LLVMFuzzerTestOneInput cpython3/Modules/_xxtestfuzz/fuzzer.c:711:11
| #21 0x4f99ed in ExecuteFilesOnyByOne /src/aflplusplus/utils/aflpp_driver/aflpp_driver.c:255:7
| #22 0x4f97e8 in LLVMFuzzerRunDriver /src/aflplusplus/utils/aflpp_driver/aflpp_driver.c:0
| #23 0x4f939d in main /src/aflplusplus/utils/aflpp_driver/aflpp_driver.c:311:10
| #24 0x7a22fe4a7082 in __libc_start_main /build/glibc-SzIz7B/glibc-2.31/csu/libc-start.c:308:16
| #25 0x43a0ad in _start
```
cc: @iritkatriel
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
_No response_
### Output from running 'python -VV' on the command line:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-118348
* gh-118477
<!-- /gh-linked-prs -->
| c1bf4874c1e9db2beda1d62c8c241229783c789b | f6fab21721c8aedc5dca97dbeb6292a067c19bf1 |
python/cpython | python__cpython-116801 | # Regression in `urllib.parse.parse_qsl(None)` behavior
# Bug report
### Bug description:
https://github.com/python/cpython/pull/115771 introduced a change in some currently working behavior. Previously:
```python
import urllib.parse
>>> urllib.parse.parse_qsl(None)
[]
```
but now it raises `TypeError: cannot convert 'NoneType' object to bytes`.
(`parse_qs` also broke, since it calls `parse_qsl`)
I think it's obviously pretty debatable whether the old behavior was desirable but there *is* popular code in the wild that depends on it: https://github.com/encode/httpx/blob/7df47ce4d93a06f2c3310cd692b4c2336d7663ba/httpx/_urls.py#L431-L433.
(I stumbled across this when one of my colleagues was accidentally using a current dev branch of 3.11 and started getting mysterious errors from inside the bowels of the httpx client.)
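For code that needs to keep working across both behaviors, a minimal defensive wrapper (illustrative only, not part of the stdlib) is to normalize `None` to an empty query string before the call:

```python
from urllib.parse import parse_qsl

def parse_qsl_or_empty(qs):
    # Treat None like an empty query string, matching the old behavior
    # regardless of whether this interpreter's parse_qsl accepts None.
    return parse_qsl(qs if qs is not None else "")

assert parse_qsl_or_empty(None) == []
assert parse_qsl_or_empty("a=1&b=2") == [("a", "1"), ("b", "2")]
```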
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-116801
* gh-116894
* gh-116895
<!-- /gh-linked-prs -->
| 1069a462f611f0b70b6eec0bba603d618a0378f3 | 269051d20e65eda30734cbbbdb07d21df61978d6 |
python/cpython | python__cpython-116761 | # pystats: More traces created than optimizations attempted
# Bug report
### Bug description:
In the pystats for many benchmarks, such as hexiom, the number of traces created exceeds the number of optimization attempts. This doesn't make sense because a trace shouldn't ever be created unless we tried to create one first. Probably there is a case where the trace creation code is being called without the trace-attempt stat being incremented.
For example:
| | Count | Ratio |
| -- | -- | -- |
| Optimization attempts | 1,540 | |
| Traces created | 52,900 | 3,435.1% |
| Trace stack overflow | 0 | 0.0% |
| Trace stack underflow | 16,060 | 1,042.9% |
| Trace too long | 0 | 0.0% |
| Trace too short | 1,197,960 | 77,789.6% |
| Inner loop found | 500 | 32.5% |
| Recursive call | 0 | 0.0% |
| Low confidence | 100 | 6.5% |
| Executors invalidated | 0 | 0.0% |
| Traces executed | 109,768,500 | |
| Uops executed | 2,948,922,980 | 2,686.5% |
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-116761
<!-- /gh-linked-prs -->
| cef0ec1a3ca40db69b56bcd736c1b3bb05a1cf48 | 8c6db45ce34df7081d7497e638daf3e130303295 |
python/cpython | python__cpython-124568 | # Provide a mechanism to "unload" the monitoring tool
# Feature or enhancement
### Proposal:
As mentioned in #111963, we don't have a way to fully unload the monitoring tool and this is a bad experience for users. Debuggers have to track code objects of their breakpoints if they want to do it with local events. I think we should have a `sys.monitoring.clear_tool_id(tool_id: int)` to clear up all the residues for the tool without giving up the `tool_id`. Also `sys.monitoring.free_tool_id()` should run `clear_tool_id` first - it's just safer and makes more sense.
Global events should be trivial, by `set_events(tool_id, 0)` all the global events should be disabled and if the next tool claims the id, it needs to explicitly set them again anyway. Callbacks are easy as they are just in an array.
The hard part is local events, as they are instrumented lazily. Currently we don't have a way to inform the code object that a tool is unloaded(cleared) and we don't track all the instrumented code objects.
I think the way to go is to version the tool and the local monitors. Each time a tool is registered (or cleared), a new version number is assigned to the tool (from the same source as the code objects' `global_version`, basically). Incrementing the version number forces all code objects to check their instrumentation, and in `update_instrumentation_data` we can check whether the local monitors are in sync with the tools, removing the tool's events if they are not.
Impacts:
* For each `_Py_LocalMonitors`, another `uint32_t versions[PY_MONITORING_TOOL_IDS]` to keep the version of the tools
* Extra check in `update_instrumentation_data`
* No run-time impact on programs without instrumentation
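As a purely illustrative model of the versioning scheme (the names below are hypothetical, not the real interpreter data structures): each registration or clear bumps the tool's version, and a code object whose recorded version is stale drops its local events on the next sync:

```python
TOOL_VERSION = {2: 0}  # tool_id -> current version (illustrative)

class CodeEvents:
    """Stand-in for a code object's local-event bookkeeping."""
    def __init__(self, tool_id):
        self.tool_id = tool_id
        self.version = TOOL_VERSION[tool_id]  # recorded at instrumentation time
        self.events = {"LINE"}

    def sync(self):
        # Analogue of the check proposed for update_instrumentation_data():
        # a stale version means the tool was cleared, so drop its events.
        if self.version != TOOL_VERSION[self.tool_id]:
            self.events.clear()

code = CodeEvents(2)
TOOL_VERSION[2] += 1   # clear_tool_id() would bump the version
code.sync()
assert code.events == set()
```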
### Has this already been discussed elsewhere?
No response given
### Links to previous discussion of this feature:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-124568
<!-- /gh-linked-prs -->
| 5e0abb47886bc665eefdcc19fde985f803e49d4c | b48253852341c01309b0598852841cd89bc28afd |
python/cpython | python__cpython-116746 | # BLD: not all @LIBPYTHON@ changed to @MODULE_LDFLAGS@
# Bug report
### Bug description:
Attempting to build numpy fails with the errors that look like:
```
FAILED: numpy/_core/_struct_ufunc_tests.cpython-313-x86_64-linux-gnu.so
cc -o numpy/_core/_struct_ufunc_tests.cpython-313-x86_64-linux-gnu.so numpy/_core/_struct_ufunc_tests.cpython-313-x86_64-linux-gnu.so.p/src_umath__struct_ufunc_tests.c.o -Wl,--as-needed -Wl,--allow-shlib-undefined -Wl,-O1 -shared -fPIC -Wl,--start-group -lm -Wl,--end-group -Wall -O2 -pipe -fomit-frame-pointer -fno-strict-aliasing -Wmaybe-uninitialized -Wdeprecated-declarations -Wimplicit-function-declaration -march=native -DCYTHON_FAST_THREAD_STATE=0 @LIBPYTHON@
/usr/bin/ld: cannot find @LIBPYTHON@: No such file or directory
```
I bisected this back to
https://github.com/python/cpython/pull/115780/
and found that there were still two instances of `@LIBPYTHON@` left in the repo. I will open a PR shortly.
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-116746
<!-- /gh-linked-prs -->
| c4bf58a14f162557038a1535ca22c52b49d81d7b | 3ec57307e70ee6f42410e844d3399bbd598917ba |
python/cpython | python__cpython-117296 | # Please upgrade bundled Expat to 2.6.2 (e.g. for the fix to CVE-2024-28757)
# Bug report
### Bug description:
Hi! :wave:
Please upgrade bundled Expat to 2.6.2 (e.g. for the fix to CVE-2024-28757).
- GitHub release: https://github.com/libexpat/libexpat/releases/tag/R_2_6_2
- Change log: https://github.com/libexpat/libexpat/blob/R_2_6_2/expat/Changes
The CPython issue for previous 2.6.0 was #115399 and the related merged main pull request was #115431, in case you want to have a look. Comment https://github.com/python/cpython/pull/115431#discussion_r1488706550 could be of help by raising confidence in the bump pull request when going forward.
Thanks in advance!
### CPython versions tested on:
3.8, 3.9, 3.10, 3.11, 3.12, 3.13, CPython main branch
### Operating systems tested on:
Linux, macOS, Windows, Other
<!-- gh-linked-prs -->
### Linked PRs
* gh-117296
* gh-118166
* gh-118185
* gh-118186
* gh-118187
* gh-118188
<!-- /gh-linked-prs -->
| c9829eec0883a8991ea4d319d965e123a3cf6c20 | 1b85b3424c081835406592868123fe898ee029ad |
python/cpython | python__cpython-116737 | # `INSTRUMENTED_FUNCTION_CALL_EX` should set `arg0` to `MISSING` instead of `None` if argument is absent
# Bug report
### Bug description:
I should've caught this in #116626 but somehow I missed it. `INSTRUMENTED_FUNCTION_CALL_EX` sets `arg0` to `None` now, instead of `MISSING`, when there's no argument.
```python
def f(a=1, b=2):
return a + b
empty_args = []
f(*empty_args) # arg0 will be None
```
### CPython versions tested on:
3.12, CPython main branch
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-116737
* gh-116873
<!-- /gh-linked-prs -->
| 59e30f41ed6f2388a99ac0a8aebf0a12f7460a4a | d180b507c4929be399395bfd7946948f98ffc4f7 |
python/cpython | python__cpython-117407 | # Nested TaskGroup can silently swallow cancellation request from parent TaskGroup
# Bug report
### Bug description:
In the following code snippet, I start an `asyncio.TaskGroup` called `outer_tg`, then start another one within it called `inner_tg`. The inner task group is wrapped in an `except*` block to catch a specific type of exception. A background task in each task group raises a different exception at roughly the same time, so the inner one ends up (correctly) raising the inner exception in an `ExceptionGroup`, which is caught and discarded by the `except*` block. However, this means that there is now no longer any exception bubbling up through the call stack, and the main body of the outer task group just continues on, even allowing waiting on more coroutines (but not creating more tasks in the outer group, as it is still shutting down).
```python
import asyncio
class ExceptionOuter(Exception):
pass
class ExceptionInner(Exception):
pass
async def raise_after(t, e):
await asyncio.sleep(t)
print(f"Raising {e}")
raise e()
async def my_main():
try:
async with asyncio.TaskGroup() as outer_tg:
try:
async with asyncio.TaskGroup() as inner_tg:
inner_tg.create_task(raise_after(1, ExceptionInner))
outer_tg.create_task(raise_after(1, ExceptionOuter))
except* ExceptionInner:
print("Got inner exception")
print("should not get here")
await asyncio.sleep(0.2)
print("waited")
# outer_tg.create_task(asyncio.sleep(0.2)) # raises RuntimeError("TaskGroup shutting down")
except* ExceptionOuter:
print("Got outer exception")
print("Done")
asyncio.run(my_main())
```
### Expected vs observed behaviour:
Observed behaviour:
```
Raising <class '__main__.ExceptionInner'>
Raising <class '__main__.ExceptionOuter'>
Got inner exception
should not get here
waited
Got outer exception
Done
```
Expected behaviour: the rest of the main body of the outer task group is skipped, so we just see:
```
Raising <class '__main__.ExceptionInner'>
Raising <class '__main__.ExceptionOuter'>
Got inner exception
Got outer exception
Done
```
### Variations:
Making either of these changes (or both) still gives the same issue:
* Add a third line within the inner task group `await asyncio.sleep(10)`. This means that the main body of the inner task group finishes with `asyncio.CancelledError` rather than just finishing before any exceptions are raised by tasks.
* Replace `inner_tg.create_task(raise_after(1, ExceptionInner))` with `inner_tg.create_task(raise_in_cancel())`, where:
```python
async def raise_in_cancel():
try:
await asyncio.sleep(10)
except asyncio.CancelledError:
raise ExceptionInner()
```
It's fair to dismiss this second case because it violates the rule about not suppressing `CancelledError`.
### Root cause:
I think the problem is that `TaskGroup.__aexit__()` never includes `CancelledError` in the ExceptionGroup it raises if there are any other types of exception, even if a parent task group is shutting down. That appears to be due to this code in `TaskGroup.__aexit__()` (specifically the `and not self._errors` part, because `self._errors` is the list of all non-cancellation exceptions):
```python
# Propagate CancelledError if there is one, except if there
# are other errors -- those have priority.
if propagate_cancellation_error and not self._errors:
raise propagate_cancellation_error
```
I'm not sure of the right solution. Here are a couple of possibilities that come to mind, but I'm not sure if either would work or what the wider implications would be.
* Perhaps, after all child tasks are completed, `TaskGroup.__aexit__()` needs to walk up the stack of parent task groups and see if any of those are shutting down, and if so include a `CancelledError` in the `ExceptionGroup`.
* Maybe CancelledError needs to include some metadata about what task group caused it to be raised and then it's only suppressed by the task group it's associated with (that's what Trio and AnyIO do I believe).
### CPython versions tested on:
3.11, 3.12
### Operating systems tested on:
Windows
<!-- gh-linked-prs -->
### Linked PRs
* gh-117407
<!-- /gh-linked-prs -->
| fa58e75a8605146a89ef72b58b4529669ac48366 | 22b25d1ebaab7b8c4833a8c120c8b4699a830f40 |
python/cpython | python__cpython-116715 | # Handle error correctly in `PyFloat_GetInfo`
# Bug report
`floatobject` contains this code: https://github.com/python/cpython/blob/ba82a241ac7ddee7cad18e9994f8dd560c68df02/Objects/floatobject.c#L101-L123
It only shows the last error and swallows any others.
I propose to change the error handling with the smallest possible diff.
<!-- gh-linked-prs -->
### Linked PRs
* gh-116715
* gh-116722
* gh-116723
<!-- /gh-linked-prs -->
| fcd49b4f47f1edd9a2717f6619da7e7af8ea73cf | aa7bcf284f006434b07839d82f325618f7a5c06c |
python/cpython | python__cpython-116683 | # test_concurrent_futures/test_shutdown.py: test_cancel_futures_wait_false flaky
The test looks like:
https://github.com/python/cpython/blob/bb66600558cb8d5dd9a56f562bd9531eb1e1685f/Lib/test/test_concurrent_futures/test_shutdown.py#L243-L250
The problem is that if the worker thread starts executing after the `shutdown()` call, then the task will be canceled (because `cancel_futures=True`) and nothing will be printed. This happens sometimes with the GIL and more frequently in the free-threaded build with the GIL disabled.
You can reliably trigger the problematic behavior by adding a short sleep at the beginning of the worker thread:
https://github.com/python/cpython/blob/bb66600558cb8d5dd9a56f562bd9531eb1e1685f/Lib/concurrent/futures/thread.py#L69-L70
i.e., add a `time.sleep(0.2)` before the `if` statement.
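The losing side of the race can be shown deterministically (outside the test suite) by pinning the single worker with a blocking task, which guarantees the second future is still pending when `shutdown(cancel_futures=True)` runs, so its body never executes:

```python
import time
from concurrent.futures import ThreadPoolExecutor

ex = ThreadPoolExecutor(max_workers=1)
blocker = ex.submit(time.sleep, 0.2)     # occupies the only worker
victim = ex.submit(print, "never runs")  # still queued behind the blocker
ex.shutdown(wait=True, cancel_futures=True)
assert victim.cancelled()                # the queued task was dropped
assert blocker.done()
```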
This test has been reported failing previously, but those issues were closed:
* https://github.com/python/cpython/issues/89099
* https://github.com/python/cpython/issues/88650 (not entirely clear if it's the same issue)
<!-- gh-linked-prs -->
### Linked PRs
* gh-116683
* gh-116692
* gh-116693
<!-- /gh-linked-prs -->
| 7d1abe9502641a3602e9773aebc29ee56d8f40ae | 3f54d1cfe78f7c88fb0ecdbc250d9f8be092ec5a |
python/cpython | python__cpython-130888 | # docs: `token` is not defined, but mentioned
The `token` term is not being explained (even by means of reference), but mentioned in the chapters [8](https://docs.python.org/3/tutorial/errors.html) and [14](https://docs.python.org/3/tutorial/interactive.html) of the tutorial, maybe it can be a good point to add a reference as well?
As a potential solution, we can add another entry in the glossary to briefly provide a basic intuition for a `token`/`code unit`. In general, introducing the term might be useful for explaining some concepts, such as error highlighting/handling and syntax. Maybe it would even make sense to introduce it right in chapter 8 along with error handling.
I would like to see other perspectives on this question.
_Originally posted by @Privat33r-dev in https://github.com/python/cpython/issues/116569#issuecomment-1987354514_
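For intuition, the `tokenize` module exposes the same lexical analysis the interpreter performs, so a short snippet already shows what the tutorial would call tokens:

```python
import io
import tokenize

src = "x = 1 + 2\n"
# Each TokenInfo carries the token's type, string, and source position;
# here we keep just the strings.
tokens = [t.string for t in tokenize.generate_tokens(io.StringIO(src).readline)]
assert tokens[:5] == ["x", "=", "1", "+", "2"]
```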
<!-- gh-linked-prs -->
### Linked PRs
* gh-130888
* gh-131367
* gh-131368
<!-- /gh-linked-prs -->
| 30d52058493e07fd1d3efea960482f4001bd2f86 | 863d54cbaf6c0b45fff691ab275515c1483ad68d |
python/cpython | python__cpython-116768 | # Make `_warnings.c` thread-safe in free-threaded build
# Feature or enhancement
The `warnings` implementation is split between Python (`warnings.py`) and C (`_warnings.c`). There are a few bits of code in the C module that are not thread-safe without the GIL:
The `Py_SETREF` calls are not thread-safe if concurrent writers try to overwrite the same field (e.g., `st->once_registry`). We can probably use critical sections to make it thread-safe.
`get_once_registry`: https://github.com/python/cpython/blob/df4784b3b7519d137ca6a1aeb500ef59e24a7f9b/Python/_warnings.c#L259
`get_default_action`:
https://github.com/python/cpython/blob/df4784b3b7519d137ca6a1aeb500ef59e24a7f9b/Python/_warnings.c#L290
`get_filter`:
https://github.com/python/cpython/blob/df4784b3b7519d137ca6a1aeb500ef59e24a7f9b/Python/_warnings.c#L315
Some uses of borrowed references are likely not thread-safe
* `_PyDict_GetItemWithError` (replace with `PyDict_GetItemRef`?)
<!-- gh-linked-prs -->
### Linked PRs
* gh-116768
* gh-116959
* gh-117373
* gh-117374
<!-- /gh-linked-prs -->
| 762f489b31afe0f0589aa784bf99c752044e7f30 | 4e45c6c54a9457b1ca5b4cf3aa2843b7218d4414 |
python/cpython | python__cpython-116658 | # test_capi.test_misc fails with `-u all`
# Bug report
### Bug description:
In current `main`, test_capi.test_misc fails when run with `-uall`.
```
centurion:/tmp/testbuild/Python-3.13.0a5 > ./python -m test -uall test_capi.test_misc
Using random seed: 468383501
0:00:00 load avg: 7.27 Run 1 test sequentially
0:00:00 load avg: 7.27 [1/1] test_capi.test_misc
Unknown option: -a
usage: ./python [option] ... [-c cmd | -m mod | file | -] [arg] ...
Try `python -h' for more information.
Traceback (most recent call last):
File "<string>", line 8, in <module>
SystemError: _PyErr_SetFromPyStatus() status is not an error
test test_capi.test_misc failed -- Traceback (most recent call last):
File "/tmp/testbuild/Python-3.13.0a5/Lib/test/test_capi/test_misc.py", line 1806, in test_py_config_isoloated_per_interpreter
self.assertEqual(support.run_in_subinterp(code), 0,
~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
'subinterp code failure, check stderr.')
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AssertionError: -1 != 0 : subinterp code failure, check stderr.
test_capi.test_misc failed (1 failure)
== Tests result: FAILURE ==
1 test failed:
test_capi.test_misc
Total duration: 11.0 sec
Total tests: run=260 failures=1 skipped=3
Total test files: run=1/1 failed=1
Result: FAILURE
```
This came up during the release process for alpha 5, since we actually enable all resources for one of the test runs. I know we disable most resources on CI, but apparently we have no buildbots with the right resources enabled either?
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-116658
* gh-116668
<!-- /gh-linked-prs -->
| f6e7a6ce651b43c6e060608a4bb20685f39e9eaa | 5d72b753889977fa6d2d015499de03f94e16b035 |
python/cpython | python__cpython-116790 | # Python 3.13 regression: Recursive dataclasses fail to ==: RecursionError: maximum recursion depth exceeded
# Bug report
### Bug description:
There is a regression in comparing recursive dataclasses for equality in Python 3.13.0 from the first alpha until at least a4.
### Python 3.12
```pycon
>>> from dataclasses import dataclass
>>> @dataclass
... class C:
... recursive: object = ...
...
>>> c1 = C()
>>> c1.recursive = c1
>>> c1 == c1
True
```
### Python 3.13
```pycon
>>> from dataclasses import dataclass
>>> @dataclass
... class C:
... recursive: object = ...
...
>>> c1 = C()
>>> c1.recursive = c1
>>> c1 == c1
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
c1 == c1
File "<string>", line 4, in __eq__
File "<string>", line 4, in __eq__
File "<string>", line 4, in __eq__
[Previous line repeated 996 more times]
RecursionError: maximum recursion depth exceeded
```
This has started happening since 18cfc1eea569f0ce72ad403840c0e6cc5f81e1c2.
Previously, tuples were compared via `==`, which skips calling `__eq__` for items with the same identity. Now it skips the identity check and compares items directly with `__eq__`, causing `RecursionError`.
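The identity shortcut lives in `PyObject_RichCompareBool`, which container comparisons go through; a small demonstration of the difference:

```python
class Loud:
    def __eq__(self, other):
        raise RuntimeError("__eq__ called")

w = Loud()
# Tuple comparison uses PyObject_RichCompareBool, whose identity fast
# path reports equality for the same object without calling __eq__.
assert (w,) == (w,)
# Comparing the items directly always invokes __eq__:
try:
    w == w
    called = False
except RuntimeError:
    called = True
assert called
```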
This breaks sip-build; hence, we cannot build pyqt5 or pyqt6 in Fedora with Python 3.13 to test the entire stack. For details, see this [Fedora bugzilla](https://bugzilla.redhat.com/show_bug.cgi?id=2250649).
```
sip-build: An internal error occurred...
Traceback (most recent call last):
File "/usr/bin/sip-build", line 8, in <module>
sys.exit(main())
^^^^^^
File "/usr/lib64/python3.13/site-packages/sipbuild/tools/build.py", line 37, in main
handle_exception(e)
File "/usr/lib64/python3.13/site-packages/sipbuild/exceptions.py", line 81, in handle_exception
raise e
File "/usr/lib64/python3.13/site-packages/sipbuild/tools/build.py", line 34, in main
project.build()
File "/usr/lib64/python3.13/site-packages/sipbuild/project.py", line 245, in build
self.builder.build()
File "/usr/lib64/python3.13/site-packages/sipbuild/builder.py", line 48, in build
self._generate_bindings()
File "/usr/lib64/python3.13/site-packages/sipbuild/builder.py", line 280, in _generate_bindings
buildable = bindings.generate()
^^^^^^^^^^^^^^^^^^^
File "/builddir/build/BUILD/PyQt5-5.15.9/project.py", line 619, in generate
buildable = super().generate()
^^^^^^^^^^^^^^^^^^
File "/usr/lib64/python3.13/site-packages/sipbuild/bindings.py", line 214, in generate
output_pyi(spec, project, pyi_path)
File "/usr/lib64/python3.13/site-packages/sipbuild/generator/outputs/pyi.py", line 53, in output_pyi
_module(pf, spec, module)
File "/usr/lib64/python3.13/site-packages/sipbuild/generator/outputs/pyi.py", line 132, in _module
_class(pf, spec, module, klass, defined)
File "/usr/lib64/python3.13/site-packages/sipbuild/generator/outputs/pyi.py", line 267, in _class
_class(pf, spec, module, nested, defined, indent)
File "/usr/lib64/python3.13/site-packages/sipbuild/generator/outputs/pyi.py", line 289, in _class
_callable(pf, spec, module, member, klass.overloads,
File "/usr/lib64/python3.13/site-packages/sipbuild/generator/outputs/pyi.py", line 485, in _callable
_overload(pf, spec, module, overload, overloaded, first_overload,
File "/usr/lib64/python3.13/site-packages/sipbuild/generator/outputs/pyi.py", line 575, in _overload
signature = _python_signature(spec, module, py_signature, defined,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib64/python3.13/site-packages/sipbuild/generator/outputs/pyi.py", line 599, in _python_signature
as_str = _argument(spec, module, arg, defined, arg_nr=arg_nr)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib64/python3.13/site-packages/sipbuild/generator/outputs/pyi.py", line 676, in _argument
s += _type(spec, module, arg, defined, out=out)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib64/python3.13/site-packages/sipbuild/generator/outputs/pyi.py", line 710, in _type
return ArgumentFormatter(spec, arg).as_type_hint(module, out, defined)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib64/python3.13/site-packages/sipbuild/generator/outputs/formatters/argument.py", line 327, in as_type_hint
s += TypeHintManager(self.spec).as_type_hint(hint, out, context,
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib64/python3.13/site-packages/sipbuild/generator/outputs/type_hints.py", line 107, in __new__
manager = cls._spec_manager_map[spec]
~~~~~~~~~~~~~~~~~~~~~^^^^^^
File "/usr/lib64/python3.13/weakref.py", line 415, in __getitem__
return self.data[ref(key)]
~~~~~~~~~^^^^^^^^^^
File "<string>", line 4, in __eq__
File "<string>", line 4, in __eq__
File "<string>", line 4, in __eq__
[Previous line repeated 495 more times]
RecursionError: maximum recursion depth exceeded in comparison
```
cc @rhettinger @ericvsmith
Thanks to @swt2c for bisecting the problem. Thanks to @encukou who gave me a hint of what to look for when finding the smaller reproducer.
### CPython versions tested on:
3.13
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-116790
<!-- /gh-linked-prs -->
| 75935746be0cbd32b9d710b93db9bd49c8d634ba | 3cac2af5ecfa9e2a47bfdd15e114b65780836b9d |
python/cpython | python__cpython-116670 | # Race condition in `test_queue.test_shutdown_immediate_put_join`
There is a race condition in `test_shutdown_immediate_put_join` that leads to `ValueError: task_done() called too many times`. This happens frequently in the free-threaded build. It may also happen in the default build depending on when the GIL is released. You can reliably reproduce the issue by adding a short sleep at the start of [`Queue.task_done()`](https://gist.github.com/colesbury/18341ad58d3981b05b5d5703f296e53e).
The problem is that we call `q.task_done()` in a thread concurrently with `q.shutdown(immediate=True)`.
https://github.com/python/cpython/blob/3e45030076bf2cfab41c4456c73fb212b7322c60/Lib/test/test_queue.py#L581-L586
The queue has one unfinished task. If `q.task_done()` finishes first, everything is okay. However, if `q.shutdown(immediate=True)` happens first, it will remove the single outstanding task and `q.task_done()` will raise an exception.
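The `ValueError` from the report can be produced deterministically, without the shutdown race: calling `task_done()` more times than items were fetched is exactly the state the losing thread ends up in.

```python
import queue

q = queue.Queue()
q.put(1)
q.get()
q.task_done()          # balances the single get()
try:
    q.task_done()      # one call too many
    raised = False
except ValueError:
    raised = True
assert raised
```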
cc @EpicWink
<!-- gh-linked-prs -->
### Linked PRs
* gh-116670
<!-- /gh-linked-prs -->
| 98ab21cce6d4c7bd2b5a0a1521b50b1ce2566a44 | 25684e71310642ffd20b45eea9b5226a1fa809a5 |
python/cpython | python__cpython-116627 | # Inconsistent behavior for `sys.monitoring.events.CALL`
# Bug report
### Bug description:
According to the docs, a `CALL` event should be emitted as long as there's a function call in Python code. However, `CALL_FUNCTION_EX` does it differently - it only emits the event when it's a call to a C function. So monitoring the following code produces different results:
```python
def f(a, b):
return a + b
f(1, 2) # Emits CALL event
args = (1, 2)
f(*args) # Does NOT emit CALL event
```
I think we should just fix `CALL_FUNCTION_EX` to make it work the same as the other CALL instructions.
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-116627
* gh-116732
<!-- /gh-linked-prs -->
| 8332e85b2f079e8b9334666084d1f8495cff25c1 | ba82a241ac7ddee7cad18e9994f8dd560c68df02 |
python/cpython | python__cpython-116758 | # Add support for Android as a target platform
I am now working on [PEP 738](https://peps.python.org/pep-0738/), which proposes adding Android as a Tier 3 supported platform for Python 3.13. I'll use this page to track all the issues related to this work.
This was previously managed at #71052, but most of that issue's history is now irrelevant, so I'm opening a fresh one.
Android-related issues which were open before the PEP 738 work began:
* https://github.com/python/cpython/issues/71042
* https://github.com/python/cpython/issues/76391
* https://github.com/python/cpython/issues/75824
* https://github.com/python/cpython/issues/83032
* https://github.com/python/cpython/issues/91028
* https://github.com/python/cpython/issues/111225
Issues added during the PEP 738 work:
* #114875
* #116057
PRs from the PEP 738 work which are not linked to one of these issues will be listed below.
FYI: @freakboy3742 @encukou
<!-- gh-linked-prs -->
### Linked PRs
* https://github.com/python/cpython/pull/115576
* https://github.com/python/cpython/pull/115917
* https://github.com/python/cpython/pull/115918
* https://github.com/python/cpython/pull/115923
* https://github.com/python/cpython/pull/115955
* https://github.com/python/cpython/pull/116215
* https://github.com/python/cpython/pull/116379
* https://github.com/python/cpython/pull/116423
* https://github.com/python/cpython/pull/116426
* https://github.com/python/cpython/pull/116617
* https://github.com/python/cpython/pull/116618
* gh-116758
* gh-117299
* gh-117878
* gh-118063
* gh-118352
* gh-121595
* gh-122487
* gh-122490
* gh-122521
* gh-122522
* gh-122530
* gh-122539
* gh-122698
* gh-122719
* gh-122764
* gh-122842
* gh-123061
* gh-123981
* gh-123982
* gh-123988
* gh-124012
* gh-124015
* gh-124034
* gh-124035
* gh-124259
* gh-124395
* gh-124462
* gh-124516
<!-- /gh-linked-prs -->
| 22b25d1ebaab7b8c4833a8c120c8b4699a830f40 | f2132fcd2a6da7b2b86e80189fa009ce1d2c753b |
python/cpython | python__cpython-116657 | # `list(set)` should be atomic in the free-threaded build
We have code that constructs a list from a set, while the set may be concurrently updated. For example:
https://github.com/python/cpython/blob/44f9a84b67c97c94f0d581ffd63b24b73fb79610/Lib/multiprocessing/process.py#L61-L65
This is a fairly common pattern currently, but in the free-threaded build, this may lead to `RuntimeError: Set changed size during iteration`. We should make it so that `list(set)` locks the set to avoid the error and so that the list contains a consistent copy of the set.
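The failure mode is easy to show deterministically in a single thread, and a coarse user-level workaround is an explicit lock around both the mutation and the snapshot (sketch only — `shared`, `add`, and `snapshot` are hypothetical names):

```python
import threading

# Single-threaded stand-in for the race: mutating a set invalidates
# any live iterator over it.
s = {1, 2, 3}
it = iter(s)
next(it)
s.add(99)                      # size change while iterating
try:
    next(it)
    raise AssertionError("expected RuntimeError")
except RuntimeError as exc:
    assert "changed size" in str(exc)

# Until list(set) is atomic, guard both the mutation and the snapshot
# with one lock.
_lock = threading.Lock()
shared = {1, 2, 3}

def add(item):
    with _lock:
        shared.add(item)

def snapshot():
    with _lock:
        return list(shared)

add(4)
assert sorted(snapshot()) == [1, 2, 3, 4]
```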
For additional context, see the following code from the nogil-3.12 fork:
* https://github.com/colesbury/nogil-3.12/blob/cedde4f5ec3759ad723c89d44738776f362df564/Objects/listobject.c#L1223-L1243
* https://github.com/colesbury/nogil-3.12/blob/cedde4f5ec3759ad723c89d44738776f362df564/Objects/listobject.c#L1372-L1381
cc @corona10
<!-- gh-linked-prs -->
### Linked PRs
* gh-116657
* gh-116816
* gh-116888
<!-- /gh-linked-prs -->
| 3325699ffa3c633084f3e3fd94c4f3843066db85 | 126186674ed3d6abd0f87e817100b5ec7290e146 |
python/cpython | python__cpython-116623 | # socketmodule.c: use atomics to access `defaulttimeout` in free-threaded build
# Feature or enhancement
The `defaulttimeout` in `socketmodule.c` is shared, mutable state:
https://github.com/python/cpython/blob/9f983e00ec55b87a098a4c8229fe5bb9acb9f3ac/Modules/socketmodule.c#L551
In the free-threaded build, we should use relaxed atomic operations to access `defaulttimeout` to avoid data races. This probably requires adding wrappers for 64-bit relaxed atomic loads and stores to https://github.com/python/cpython/blob/main/Include/internal/pycore_pyatomic_ft_wrappers.h. Note that `_PyTime_t` is a typedef for `int64_t`.
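At the Python level the module-wide default is plainly shared state — every new socket reads it at creation time (illustrative sketch):

```python
import socket

old = socket.getdefaulttimeout()
try:
    socket.setdefaulttimeout(5.0)          # writes the shared default
    assert socket.getdefaulttimeout() == 5.0
    s = socket.socket()
    try:
        assert s.gettimeout() == 5.0       # new sockets read it at creation
    finally:
        s.close()
finally:
    socket.setdefaulttimeout(old)          # restore for other code
```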
For context, here is a similar change from `nogil-3.12`, but note that `defaulttimeout` is now part of the module state:
* https://github.com/colesbury/nogil-3.12/commit/360a79cb88
<!-- gh-linked-prs -->
### Linked PRs
* gh-116623
<!-- /gh-linked-prs -->
| 3b7fe117fab91371f6b621e9efd02f3925f5d53b | 02918aa96117781261cb1a564e37a861b01eb883 |
python/cpython | python__cpython-116628 | # C API: `PyGC_Disable()` not respected
# Bug report
### Bug description:
The Python C API provides the function [PyGC_Disable()](https://docs.python.org/3/c-api/gcsupport.html#c.PyGC_Disable) to temporarily disable garbage collection. Calling it causes `PyGC_Collect()` to become a no-op. So far so good.
Unfortunately, CPython >= 3.12 no longer respects this flag beyond direct calls to `PyGC_Collect()`. Let's consider, for example, the operation `PyErr_CheckSignals()` at [line 1773](https://github.com/python/cpython/blob/3.12/Modules/signalmodule.c#L1773). This calls `_Py_RunGC`, which ignores the GC `enabled` flag. And this _indeed happens_! In fact, I just spent quite a bit of time debugging an issue where the GC was supposed to be disabled for a brief moment, and yet it runs on Python 3.12. (Whether disabling the GC for a brief period of time is a good design pattern is another discussion. I would like to steer the discussion away from this and focus on documented API behavior.)
The `PyErr_CheckSignals()` function is called from 135 locations in the CPython codebase including very common ones like `PyObject_Str()`, so making any fixes in callers of this API does not look feasible.
The flag-ignoring `_Py_RunGC()` function is only called by two places: besides the mentioned `PyErr_CheckSignals()`, there is also `_Py_HandlePending()` in `Python/ceval_gil.c`.
To restore the documented behavior, I see three options:
- `_Py_RunGC()` could be modified to exit immediately if the `enabled` flag is set to zero.
- The implementation of `_Py_HandlePending()` and `PyErr_CheckSignals()` could be modified to check `PyGC_IsEnabled()` before calling `_Py_RunGC()`.
- Something is setting `_PY_GC_SCHEDULED_BIT`, and that causes the implementation to enter `_Py_RunGC()`. I'm not really sure about how that works, but perhaps this flag could be cleared in `PyGC_Disable()`, and Python could then avoid setting the flag.
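For reference, the Python-level mirror of the flag these options consult is `gc.isenabled()`. Note that an *explicit* `gc.collect()` from Python still collects while disabled — the bug above is specifically about *implicit* collections ignoring the flag (illustrative sketch):

```python
import gc

gc.disable()
try:
    assert not gc.isenabled()   # same flag PyGC_IsEnabled() reads

    class Cycle:
        pass
    a, b = Cycle(), Cycle()
    a.ref, b.ref = b, a
    del a, b
    collected = gc.collect()    # explicit request: runs even while disabled
    assert collected >= 2       # the two Cycle objects were unreachable
finally:
    gc.enable()
assert gc.isenabled()
```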
(Tagging @pablogsal and @swtaarrs, who commited changes to the relevant code.)
### CPython versions tested on:
3.12
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-116628
* gh-116653
* gh-116662
* gh-116663
<!-- /gh-linked-prs -->
| 02918aa96117781261cb1a564e37a861b01eb883 | eb947cdc1374842a32fa82249ba3c688abf252dc |
python/cpython | python__cpython-116615 | # Pydoc fails for test.test_enum
# Bug report
```
$ ./python -m pydoc test.test_enum
Traceback (most recent call last):
File "/home/serhiy/py/cpython/Lib/runpy.py", line 198, in _run_module_as_main
return _run_code(code, main_globals, None,
~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^
"__main__", mod_spec)
^^^^^^^^^^^^^^^^^^^^^
File "/home/serhiy/py/cpython/Lib/runpy.py", line 88, in _run_code
exec(code, run_globals)
~~~~^^^^^^^^^^^^^^^^^^^
File "/home/serhiy/py/cpython/Lib/pydoc.py", line 2948, in <module>
cli()
~~~^^
File "/home/serhiy/py/cpython/Lib/pydoc.py", line 2909, in cli
help.help(arg, is_cli=True)
~~~~~~~~~^^^^^^^^^^^^^^^^^^
File "/home/serhiy/py/cpython/Lib/pydoc.py", line 2165, in help
elif request: doc(request, 'Help on %s:', output=self._output, is_cli=is_cli)
~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/serhiy/py/cpython/Lib/pydoc.py", line 1881, in doc
pager(render_doc(thing, title, forceload))
~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/serhiy/py/cpython/Lib/pydoc.py", line 1874, in render_doc
return title % desc + '\n\n' + renderer.document(object, name)
~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^
File "/home/serhiy/py/cpython/Lib/pydoc.py", line 521, in document
if inspect.ismodule(object): return self.docmodule(*args)
~~~~~~~~~~~~~~^^^^^^^
File "/home/serhiy/py/cpython/Lib/pydoc.py", line 1341, in docmodule
contents.append(self.document(value, key, name))
~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^
File "/home/serhiy/py/cpython/Lib/pydoc.py", line 522, in document
if inspect.isclass(object): return self.docclass(*args)
~~~~~~~~~~~~~^^^^^^^
File "/home/serhiy/py/cpython/Lib/pydoc.py", line 1512, in docclass
attrs = spilldata("Data and other attributes %s:\n" % tag, attrs,
~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
lambda t: t[1] == 'data')
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/serhiy/py/cpython/Lib/pydoc.py", line 1475, in spilldata
push(self.docother(obj, name, mod, maxlen=70, doc=doc) +
~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/serhiy/py/cpython/Lib/pydoc.py", line 1620, in docother
repr = self.repr(object)
~~~~~~~~~^^^^^^^^
File "/home/serhiy/py/cpython/Lib/reprlib.py", line 60, in repr
return self.repr1(x, self.maxlevel)
~~~~~~~~~~^^^^^^^^^^^^^^^^^^
File "/home/serhiy/py/cpython/Lib/pydoc.py", line 1213, in repr1
return cram(stripid(repr(x)), self.maxother)
~~~~^^^
File "/home/serhiy/py/cpython/Lib/enum.py", line 1685, in global_flag_repr
if _is_single_bit(self):
~~~~~~~~~~~~~~^^^^^^
File "/home/serhiy/py/cpython/Lib/enum.py", line 96, in _is_single_bit
num &= num - 1
~~~~^~~
TypeError: unsupported operand type(s) for -: 'NoName' and 'int'
```
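The helper that blows up is the standard power-of-two bit trick; a pure-Python rendering shows why it assumes an `int`-like operand (the `NoName` pseudo-member does not support `-` with an int, hence the `TypeError`):

```python
def is_single_bit(num: int) -> bool:
    """True if exactly one bit is set -- the check enum._is_single_bit performs."""
    if num == 0:
        return False
    num &= num - 1          # clears the lowest set bit
    return num == 0

assert is_single_bit(8) is True    # 0b1000
assert is_single_bit(6) is False   # 0b0110
assert is_single_bit(0) is False
```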
cc @ethanfurman
<!-- gh-linked-prs -->
### Linked PRs
* gh-116615
* gh-116629
* gh-116630
<!-- /gh-linked-prs -->
| 06e29a224fac9edeba55422d2e60f2fbb88dddce | 3e45030076bf2cfab41c4456c73fb212b7322c60 |
python/cpython | python__cpython-116597 | # Too many tier 2 micro-ops are marked as escaping
# Bug report
### Bug description:
There are too many `_SET_IP` and `_CHECK_VALIDITY` micro-ops being inserted into tier 2 code.
There are two causes of this:
1. Too many micro-ops are marked as escaping
2. Python call uops do not technically escape but must be preceded by `_SET_IP` in order to correctly set the return address. We need a way of marking these, so that we insert `_SET_IP` before them, but do not needlessly insert `_CHECK_VALIDITY` after them.
### CPython versions tested on:
3.13, CPython main branch
### Operating systems tested on:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-116597
<!-- /gh-linked-prs -->
| b6ae6da1bd987506b599a30e37fb452f909b5cbe | 6c4fc209e1941958164509204cdc3505130c1820 |
python/cpython | python__cpython-116591 | # Warning "unused function 'current_thread_holds_gil'" in `ceval_gil.c`
# Bug report
https://buildbot.python.org/all/#/builders/350/builds/5582 has this warning:
```
Python/ceval_gil.c:421:1: warning: unused function 'current_thread_holds_gil' [-Wunused-function]
1 warning generated.
```
`current_thread_holds_gil` is only used in `assert` once:
```
» ag current_thread_holds_gil
Python/ceval_gil.c
422:current_thread_holds_gil(struct _gil_runtime_state *gil, PyThreadState *tstate)
459: assert(!current_thread_holds_gil(gil, tstate));
```
I propose adding `# ifndef NDEBUG` to this internal function, so this warning will be silenced.
I have a PR ready.
<!-- gh-linked-prs -->
### Linked PRs
* gh-116591
<!-- /gh-linked-prs -->
| 817fe33a1da747c57b467f73a47b701c0b0eb911 | ffd79bea0f032df5a2e7f75e8c823a09cdc7c7a2 |
python/cpython | python__cpython-116577 | # `sortperf.py` is broken
# Bug report
### Bug description:
It's supposed to measure sort speed for different kinds of data, from best to worst case. But currently it shows about the same speed for all kinds, because the same list object is sorted a thousand times. Which means all but the first sort are just sorting the already sorted list. And that dominates the total time, making the first (real) sort insignificant.
Oddly it was specifically changed to be this way, [here](https://github.com/python/cpython/pull/114687/commits/7c2dd81535c67477188e9adf157f5fe158023ac8), from
```python
def _prepare_data(self, loops: int) -> list[float]:
bench = BENCHMARKS[self._name]
return [
bench(self._size, self._random)
for _ in range(loops)
]
```
to:
```python
def _prepare_data(self, loops: int) -> list[float]:
bench = BENCHMARKS[self._name]
return [bench(self._size, self._random)] * loops
```
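The two versions differ only in aliasing: `[x] * loops` repeats one list object, so the first sort leaves every subsequent "copy" already sorted. A minimal demonstration:

```python
# `[obj] * n` repeats one object; a comprehension builds fresh ones.
aliased = [[3, 1, 2]] * 3
fresh = [[3, 1, 2] for _ in range(3)]

assert aliased[0] is aliased[1]         # same list object three times
assert fresh[0] is not fresh[1]

aliased[0].sort()
assert aliased[2] == [1, 2, 3]          # the "other" copies got sorted too

fresh[0].sort()
assert fresh[2] == [3, 1, 2]            # untouched
```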
There was even a [comment](https://github.com/python/cpython/pull/114687#issuecomment-1945650107) showing that all cases from best to worst now take about the same tiny time 29.2 us, calling it an optimization (and I think it refers to the above change). So is this intentional/desired? I highly doubt that, but if it's a mistake, it's really weird.
The whole update was btw called *"Move Lib/test/sortperf.py to Tools/scripts"*, but in reality it wasn't just moved but completely rewritten. I don't think that was good, in my opinion moves should just be moves, not hide massive changes.
@sobolevn @tim-one
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-116577
* gh-116582
* gh-116583
<!-- /gh-linked-prs -->
| 4704e55a71c859c5d17cc2747ba62f49da58ea2d | 2339e7cff745271f0e4a919573a347ab2bc1c2e9 |
python/cpython | python__cpython-116569 | # Docs: incorrect error message
# Documentation
[Part 8](https://docs.python.org/3/tutorial/errors.html#syntax-errors) of the Python tutorial contains the following example:
```python
>>> while True print('Hello world')
File "<stdin>", line 1
while True print('Hello world')
^
SyntaxError: invalid syntax
```
As tested on Python 3.12 and 3.13, the actual error highlight would be:
```py
>>> while True print('Hello world')
File "<stdin>", line 1
while True print('Hello world')
^^^^^
SyntaxError: invalid syntax
```
hence the following description might be inaccurate as well:
> The parser repeats the offending line and displays a little ‘arrow’ pointing at the earliest point in the line where the error was detected. The error is caused by (or at least detected at) the token preceding the arrow: in the example, the error is detected at the function [print()](https://docs.python.org/3/library/functions.html#print), since a colon (':') is missing before it. File name and line number are printed so you know where to look in case the input came from a script.
I guess that @pablogsal as an expert in a Python error messages might provide more data on current error handling :)
<!-- gh-linked-prs -->
### Linked PRs
* gh-116569
* gh-116624
* gh-116625
<!-- /gh-linked-prs -->
| 3e45030076bf2cfab41c4456c73fb212b7322c60 | 44f9a84b67c97c94f0d581ffd63b24b73fb79610 |
python/cpython | python__cpython-116561 | # Consider adding public PyLong_GetSign() function
# Feature or enhancement
### Proposal:
Currently there is no way to determine the sign of a PyLongObject value, and CPython extensions use private macros like ``_PyLong_IsNegative()``: https://github.com/aleaxit/gmpy/blob/eb8dfcbd84abcfcb36b4adcb0d5c6d050731dd75/src/gmpy2_convert_gmp.c#L56
``PyLong_Sign()`` will offer GMP-like API to do this.
This was suggested before: https://github.com/python/cpython/issues/102471#issuecomment-1620284985
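For illustration, the GMP-style convention the proposed API would follow returns -1, 0, or +1; a Python-level equivalent:

```python
def sign(n: int) -> int:
    """-1, 0, or +1, matching the GMP-style sign convention."""
    return (n > 0) - (n < 0)

assert sign(-10**100) == -1
assert sign(0) == 0
assert sign(42) == 1
```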
### Has this already been discussed elsewhere?
This is a minor feature, which does not need previous discussion elsewhere
### Links to previous discussion of this feature:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-116561
<!-- /gh-linked-prs -->
| 61d3ab32da92e70bb97a544d76ef2b837501024f | 367adc91fb9834eb35b168048fd54705621c3f21 |
python/cpython | python__cpython-116578 | # Relax list.sort()'s notion of "descending" runs
# Feature or enhancement
### Bug description:
A question on StackOverflow got me thinking about "the other end" of `list.sort()`: the start, with `count_run()`.
https://stackoverflow.com/questions/78108792/
That descending runs have to be strictly descending was always annoying, but since we only have `__lt__` to work with, it seemed too expensive to check for equality. As comments in the cited issue say, if we _could_ tell when adjacent elements are equal, then runs of equal elements encountered during a descending run could be reversed in-place on the fly, and then their original order would be restored by a reverse-the-whole-run at the end.
Another minor annoyance is that, e.g., `[3, 2, 1, 3, 4, 5]` gives up after finding the `[3, 2, 1]` prefix. But after a descending run is reversed, it's all but free to go on to see whether it can be extended by a luckily ascending run immediately following.
So the idea is to make `count_run()` smarter, and have it do all reversing by itself.
A Python prototype convinced me this can be done without slowing processing of ascending runs(*), and even on randomized data it creates runs about 20% longer on average.
Which doesn't much matter on its own: on randomized data, all runs are almost certain to be artificially boosted to length MINRUN anyway. But starting with longer short runs would have beneficial effect anyway, by cutting the number of searches the binary insertion sort has to do.
Not a major thing regardless. The wins would be large on various new cases of high partial order, and on the brain relief by not having to explain the difference between the definitions of "ascending" and "descending" by appeal to implementation details.
(*) Consider `[100] * 1_000_000 + [2]`. The current code does a million compares to detect that this starts with a million-element ascending run, followed by a drop. The whole thing would be a "descending run" under the new definition, but just not worth it if it required 2 million compares to detect the long run of equalities.
But, happily, the current code for that doesn't need to change at all. After the million compares, it knows
`a[0] <= a[1] <= ... <= a[-2] > a[-1]`
At that point, it only needs one more compare, "is a[0] < a[-2]?". If so, we're done: at _some_ point (& we don't care where) the sequence increased, so this cannot be a descending run. But if not, the prefix both starts and ends with 100, so all elements in the prefix must be equal: this _is_ a descending run. We reverse the first million elements in-place, extend the right end to include the trailing 2, then finally reverse the whole thing in-place.
Yes, that final result could be gotten with about half the stores using a suitable in-place array-rotation algorithm instead, but that's out of scope for _this_ issue report. The approach sketched works no matter how many distinct blocks of all-equal runs are encountered.
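A simplified Python sketch of the single extra compare described above (since an all-equal prefix makes the intermediate reversal a no-op, one reversal of the extended run suffices here; `resolve_prefix` is a hypothetical name):

```python
def resolve_prefix(a, k):
    """Given a[0] <= ... <= a[k-1] > a[k], decide the run with one compare.

    Returns the length of the run found, reversing in place when the
    prefix turns out to be all-equal (a "descending" run under the
    relaxed definition, including the trailing drop).
    """
    if a[0] < a[k - 1]:
        return k                 # strictly increased somewhere: ascending run
    # a[0] and a[k-1] compare equal, so the whole prefix is equal elements;
    # together with the trailing drop it is a descending run: reverse it.
    a[: k + 1] = reversed(a[: k + 1])
    return k + 1

data = [100] * 5 + [2]
assert resolve_prefix(data, 5) == 6
assert data == [2, 100, 100, 100, 100, 100]

data2 = [1, 2, 3, 0]
assert resolve_prefix(data2, 3) == 3
assert data2 == [1, 2, 3, 0]     # genuinely ascending prefix: untouched
```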
### CPython versions tested on:
3.12
### Operating systems tested on:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-116578
<!-- /gh-linked-prs -->
| bf121d6a694bea4fe9864a19879fe0c70c4e0656 | 7d1abe9502641a3602e9773aebc29ee56d8f40ae |
python/cpython | python__cpython-116548 | # Multiple things wrong with `mkpwent` in `pwdmodule`
# Bug report
Here's what's wrong:
- https://github.com/python/cpython/blob/b9cb855621c0813846421e4ced97260a32e57063/Modules/pwdmodule.c#L71-L73 is not checked to be `NULL`, while it is possible
- All errors here can overwrite each other: https://github.com/python/cpython/blob/b9cb855621c0813846421e4ced97260a32e57063/Modules/pwdmodule.c#L88-L104
- `XDECREF` used, while `v` cannot be `NULL`: https://github.com/python/cpython/blob/b9cb855621c0813846421e4ced97260a32e57063/Modules/pwdmodule.c#L109
I will send a PR with the fix.
<!-- gh-linked-prs -->
### Linked PRs
* gh-116548
* gh-116593
* gh-116594
<!-- /gh-linked-prs -->
| ffd79bea0f032df5a2e7f75e8c823a09cdc7c7a2 | 1cc02ca063f50b8c527fbdde9957b03c145c1575 |
python/cpython | python__cpython-116542 | # Handle errors correctly in `_pystatvfs_fromstructstatvfs` in `posixmodule`
# Bug report
Here the first possible error will be overwritten by the following ones:
https://github.com/python/cpython/blob/b4b4e764a798bab60324871074ce4cdebb9d01bb/Modules/posixmodule.c#L12969-L13014
<!-- gh-linked-prs -->
### Linked PRs
* gh-116542
* gh-116643
* gh-116644
<!-- /gh-linked-prs -->
| f8147d01da44da2434496d868c86c2785f7244cd | d308d33e098d8e176f1e5169225d3cf800ed6aa1 |
python/cpython | python__cpython-116607 | # Use stop-the-world to make fork and interpreter shutdown thread-safe
# Feature or enhancement
In the free-threaded build, we should use a stop-the-world call to make `fork()` [^1] and shutdown (i.e, [`Py_FinalizeEx`](https://github.com/python/cpython/blob/c951e25c24910064a4c8b7959e2f0f7c0d4d0a63/Python/pylifecycle.c#L1866)) thread-safe.
The two operations are similar in that they both involve deleting thread states for other threads.
### Fork
Fork in multithreaded environments is notorious for the problems it causes, but we can make the free-threaded build at least as safe as the default (GIL) build by pausing other threads with a stop-the-world call before forking. In the parent, we can use a normal start-the-world call to resume other threads after the fork. In the child, we want to start the world after other thread states are removed, but before we execute Python code. This may require some refactoring.
### Shutdown
The interpreter waits for non-daemon threads created by the threading module during shutdown, but this wait can be canceled by a `ctrl-c`, and there may still be active thread states for daemon threads and for threads created from C outside of the threading module. We should use a stop-the-world call to bring these threads to a consistent state before we try deleting their thread states.
[^1]: or at least less thread-unsafe...
<!-- gh-linked-prs -->
### Linked PRs
* gh-116607
* gh-117131
<!-- /gh-linked-prs -->
| e728303532168efab7694c55c82ea19b18bf8385 | 1f8b24ef69896680d6ba6005e75e1cc79a744f9e |
python/cpython | python__cpython-116521 | # Handle errors correctly in `os_get_terminal_size_impl` in `posixmodule.c`
# Bug report
Here the first possible error will be overwritten by the second one:
https://github.com/python/cpython/blob/3cdfdc07a9dd39bcd6855b8c104584f9c34624f2/Modules/posixmodule.c#L14981-L14990
<!-- gh-linked-prs -->
### Linked PRs
* gh-116521
* gh-116539
* gh-116540
<!-- /gh-linked-prs -->
| b4b4e764a798bab60324871074ce4cdebb9d01bb | 03f86b1b626ac5b0df1cc74d8f80ea11117aec8c |
python/cpython | python__cpython-116517 | # Ensure current thread state is cleared before deleting it in _PyThreadState_DeleteCurrent
# Feature or enhancement
In general, when `_PyThreadState_GET()` is non-NULL then the current thread is "attached", but there is a small window during `PyThreadState_DeleteCurrent()` where that's not true: tstate_delete_common() is called when the thread is detached, but before current_fast_clear():
https://github.com/python/cpython/blob/601f3a7b3391e9d219a8ec44a6c56d00ce584d2a/Python/pystate.c#L1689-L1691
I think it's worth swapping the order of the calls so that we call `current_fast_clear` before `tstate_delete_common`. This would also make the order of operations in `PyThreadState_DeleteCurrent()` more similar to the order used when calling `PyThreadState_Delete()`.
See also: https://github.com/python/cpython/pull/116483
cc @ericsnowcurrently
<!-- gh-linked-prs -->
### Linked PRs
* gh-116517
<!-- /gh-linked-prs -->
| 9f983e00ec55b87a098a4c8229fe5bb9acb9f3ac | 05070f40bbc3384c36c8b3dab76345ba92098d42 |
python/cpython | python__cpython-116494 | # Do not try to import `_winreg` in `platform` module
# Bug report
`_winreg` was renamed to `winreg` during Python2 -> Python3 migration. Right now, there's no `_winreg` module.
Somehow this code got in 5 years ago in https://github.com/python/cpython/issues/80101 and https://github.com/python/cpython/commit/62dfd7d6fe11bfa0cd1d7376382c8e7b1275e38c
I think this was just missed during the code review and should be removed. I am going to send a PR that simplifies this code.
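A quick Python-level sanity check of the rename (illustrative):

```python
import importlib.util
import sys

# `_winreg` was renamed to `winreg` in Python 3; the old name resolves nowhere.
assert importlib.util.find_spec("_winreg") is None

# `winreg` itself is only available on Windows builds.
has_winreg = importlib.util.find_spec("winreg") is not None
assert has_winreg == (sys.platform == "win32")
```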
<!-- gh-linked-prs -->
### Linked PRs
* gh-116494
<!-- /gh-linked-prs -->
| 7cee276d551a41d9271daf2a6bcd7def55555973 | fdb2d90a274158aee23b526d972172bf41bd4b7e |
python/cpython | python__cpython-116506 | # Improve `test_win32_ver` in `test_platform`
# Bug report
Right now it looks like this:
https://github.com/python/cpython/blob/0b647141d587065c5b82bd658485adca8823a943/Lib/test/test_platform.py#L329-L330
Looks like this test was added as a part of this commit: https://github.com/python/cpython/commit/c69d1c498f3896803f78de613a54d17be88bbeaf 19 years ago.
And for some reason, `win32_ver` was never improved.
I will send a PR for this; however, I might need some help, because I don't have access to a Windows machine.
<!-- gh-linked-prs -->
### Linked PRs
* gh-116506
* gh-116708
* gh-116709
<!-- /gh-linked-prs -->
| ee0dbbc04504e0e0f1455e2bab8801ce0a682afd | 27df81d5643f32be6ae84a00c5cf84b58e849b21 |
python/cpython | python__cpython-116486 | # Typo in documentation in the collections library.
# Documentation
It seems like there is a typo on line 634 [here](https://github.com/python/cpython/blob/a8e814db96ebfeb1f58bc471edffde2176c0ae05/Lib/collections/__init__.py#L634)?
Instead of `in the some of original`, it should be `in some of the original`?
<!-- gh-linked-prs -->
### Linked PRs
* gh-116486
* gh-116489
* gh-116490
<!-- /gh-linked-prs -->
| 4d952737e62b833d6782e0180ee89088fe601317 | d864b0094f9875c5613cbb0b7f7f3ca8f1c6b606 |
python/cpython | python__cpython-116495 | # tkinter breaks when mixing tk.Checkbutton and ttk.Checkbutton
# Bug report
### Bug description:
When mixing tk.Checkbutton and ttk.Checkbutton (and maybe custom class Checkbutton)
tkinter behaves weirdly (calling method of one object changes the other too).
It is because of **name collision**.
```python
import tkinter as tk
from tkinter import ttk
wn = tk.Tk()
tk.Checkbutton(wn)
ttk.Checkbutton(wn)
print(tk.Checkbutton(wn).winfo_name())
print(ttk.Checkbutton(wn).winfo_name())
```
this example prints
```
!checkbutton2
!checkbutton2
```
This happens because of the overridden `tk.Checkbutton._setup` method,
which is not called in ttk.Checkbutton.
I think that method can be **erased**, which **would solve the problem**,
but I'm not sure why it is there in the first place.
### CPython versions tested on:
3.12
### Operating systems tested on:
Windows
<!-- gh-linked-prs -->
### Linked PRs
* gh-116495
* gh-116901
* gh-116902
<!-- /gh-linked-prs -->
| c61cb507c10c5b597928284e087a9a384ab267d0 | 1069a462f611f0b70b6eec0bba603d618a0378f3 |
python/cpython | python__cpython-116473 | # PCbuild/regen.targets contains invalid newlines in XML attribute
# Bug report
### Bug description:
`PCbuild/regen.targets` encodes a multi-line batch script in an XML attribute using literal newlines:
https://github.com/python/cpython/blob/b2d74cdbcd0b47bc938200969bb31e5b37dc11e1/PCbuild/regen.targets#L153-L156
https://www.w3.org/TR/2006/REC-xml11-20060816/#AVNormalize says that newlines in XML attributes will be normalized to spaces when parsed. It appears that MSBuild (that's the consumer of that file, right?) doesn't perform that normalization so this typically doesn't cause any problems, but any programmatic rewriting of that file (which we do in Android, to append additional license sources for the dependencies we use) will break the script if done using a compliant XML parser (such as minidom).
Not really sure what to provide as a reproducer here. A trivial read/rewrite of `PCbuild/regen.targets` done with minidom will cause the problem to happen:
```python
from xml.dom import minidom
xml = minidom.parse("PCbuild/regen.targets")
with open("PCbuild/regen.targets", "w", encoding="utf-8") as output:
xml.writexml(output)
```
Then build on Windows, and it'll fail with something like:
```
Regenerate test_frozenmain.h
Invalid parameter to SETLOCAL command
C:\tmpfs\src\git\external\python\cpython3\PCbuild\regen.targets(82,5): error MSB3073: The command "setlocal set PYTHONPATH=C:\tmpfs\src\git\external\python\cpython3\Lib "C:\tmpfs\src\git\external\python\cpython3\PCbuild\amd64\python.exe" Programs\freeze_test_frozenmain.py Programs\test_frozenmain.h" exited with code 1. [C:\tmpfs\src\git\external\python\cpython3\PCbuild\python.vcxproj]
```
The fix is trivial: replace the literal newlines with `%0D%0A` as is done in other places (`Tools/nuget/make_pkg.proj`, for example). Only filing this because the PR template told me to :)
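The normalization is easy to confirm with `minidom` itself; the `Exec`/`Command` names below just mimic MSBuild syntax, and note that `%0D%0A` is MSBuild's own escaping, while the XML-level equivalent is a character reference such as `&#10;`:

```python
from xml.dom import minidom

# A literal newline inside an attribute is normalized to a space on parse...
doc = minidom.parseString('<Exec Command="line one\nline two"/>')
cmd = doc.documentElement.getAttribute("Command")
assert cmd == "line one line two"

# ...but a character reference survives the round trip, which is why
# escaped newlines keep multi-line scripts intact.
doc2 = minidom.parseString('<Exec Command="line one&#10;line two"/>')
cmd2 = doc2.documentElement.getAttribute("Command")
assert cmd2 == "line one\nline two"
```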
### CPython versions tested on:
3.11
### Operating systems tested on:
Windows
<!-- gh-linked-prs -->
### Linked PRs
* gh-116473
* gh-116474
* gh-116475
<!-- /gh-linked-prs -->
| 5d0cdfe519e6f35ccae1a1adca1ffd7fac10cee0 | 13ffd4bd9f529b6a5fe33741fbd57f14b4b80137 |
python/cpython | python__cpython-116469 | # Use explicit constants in stack effects when known.
We use `oparg` for stack effects even when `oparg` is a constant for a particular specialization.
For example `UNPACK_SEQUENCE_TWO_TUPLE` is defined as:
```
inst(UNPACK_SEQUENCE_TWO_TUPLE, (unused/1, seq -- values[oparg]))
```
resulting in slightly inefficient code. It should be defined as:
```
inst(UNPACK_SEQUENCE_TWO_TUPLE, (unused/1, seq -- val1, val0))
```
<!-- gh-linked-prs -->
### Linked PRs
* gh-116469
<!-- /gh-linked-prs -->
| 4e5df2013fc29ed8bdb71572f1d12ff36e7028d5 | 8d7fde655fbb57e393831b9f30ebba80d6da366f |
python/cpython | python__cpython-116449 | # Handle errors correctly in `os_waitid_impl` in `posixmodule`
# Bug report
Any possible first errors will be overwritten by following ones:
https://github.com/python/cpython/blob/40b79efae7f8e1e0d4fd50c13f8b7514203bc6da/Modules/posixmodule.c#L9731-L9745
I propose to use our custom macro for this as well.
<!-- gh-linked-prs -->
### Linked PRs
* gh-116449
* gh-116451
* gh-116453
<!-- /gh-linked-prs -->
| 882fcede83af783a834b759e4643130dc1307ee3 | 808a77612fb89b125d25efac2788522a100e8a6d |
python/cpython | python__cpython-116459 | # Possible undefined behavior in `arraymodule` and `getargs`
# Bug report
## Problem
Recently I got contacted by a team of static analysis enthusiasts, who analyze CPython's internals for bugs.
They have found this:

Problematic lines:
- https://github.com/python/cpython/blob/40b79efae7f8e1e0d4fd50c13f8b7514203bc6da/Modules/arraymodule.c#L242-L250
- https://github.com/python/cpython/blob/40b79efae7f8e1e0d4fd50c13f8b7514203bc6da/Python/getargs.c#L615-L630
- `object.h` is ignored for now as unrelated
Basically, this issue is about `char` vs `unsigned char`.
Built with:
```
CC=clang CXX=clang++ CXXFLAGS=$"-fsanitize=integer,undefined" CFLAGS="-fsanitize=integer,undefined" LDFLAGS="-fsanitize=integer,undefined" ./configure
```
Built on Linux.
To reproduce this warning you would need to run this code with CPython built with some external extension, but it just finds the described issues:
```python
from array import array
array('B', [234])
```
**Important note**: I cannot find and reproduce any buggy behavior, I can only see the static analysis report.
Found by Linux Verification Center ([portal.linuxtesting.ru](http://portal.linuxtesting.ru/)) with SVACE.
Reporter: Svyatoslav Tereshin (s.tereshin@fobos-nt.ru).
## Proposed solution
The solution is proposed by me :)
```diff
diff --git Modules/arraymodule.c Modules/arraymodule.c
index df09d9d8478..317f4974814 100644
--- Modules/arraymodule.c
+++ Modules/arraymodule.c
@@ -247,7 +247,7 @@ BB_setitem(arrayobject *ap, Py_ssize_t i, PyObject *v)
if (!PyArg_Parse(v, "b;array item must be integer", &x))
return -1;
if (i >= 0)
- ((char *)ap->ob_item)[i] = x;
+ ((unsigned char *)ap->ob_item)[i] = x;
return 0;
}
diff --git Python/getargs.c Python/getargs.c
index 08e97ee3e62..a7bfad4dd69 100644
--- Python/getargs.c
+++ Python/getargs.c
@@ -612,7 +612,7 @@ convertsimple(PyObject *arg, const char **p_format, va_list *p_va, int flags,
switch (c) {
case 'b': { /* unsigned byte -- very short int */
- char *p = va_arg(*p_va, char *);
+ unsigned char *p = va_arg(*p_va, unsigned char *);
long ival = PyLong_AsLong(arg);
if (ival == -1 && PyErr_Occurred())
RETURN_ERR_OCCURRED;
```
```
Why?
- `arraymodule` does similar casts for other setters, like: `(wchar_t *)ap->ob_item`, etc. This seems like a correct thing to do here as well
- `getargs`'s `'b'` works with `unsigned char`, according to the docs: https://github.com/python/cpython/blob/40b79efae7f8e1e0d4fd50c13f8b7514203bc6da/Doc/c-api/arg.rst#L232-L234
Local tests pass after this change.
However, I know that different platforms and different compilers might have different pitfalls here.
So, I would like to discuss this before I send a patch (or whether to send one at all).
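To see why 234 is the interesting test value: it fits an unsigned byte but not a signed `char` (-128..127), and reinterpreting the same bit pattern as signed wraps it — illustrated with `struct`, which performs exactly this reinterpretation:

```python
import struct
from array import array

# 234 round-trips fine as an unsigned byte:
assert array('B', [234])[0] == 234
assert struct.pack("B", 234) == b"\xea"

# Reinterpreting that same byte as signed shows the value an
# implementation-defined signed `char` path may produce:
(as_signed,) = struct.unpack("b", struct.pack("B", 234))
assert as_signed == -22
```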
CC @serhiy-storchaka
<!-- gh-linked-prs -->
### Linked PRs
* gh-116459
* gh-116496
* gh-116497
<!-- /gh-linked-prs -->
| fdb2d90a274158aee23b526d972172bf41bd4b7e | 0003285c8d78f0b463f2acc164655456fcfc3206 |
python/cpython | python__cpython-116442 | # test_multiprocessing.test_empty_authkey triggers env changed failure
The test starts a thread but doesn't join it, which can lead to test failures with "ENV CHANGED"
https://github.com/python/cpython/blob/c62144a02cfae412a9deb4059fae141693a6edc9/Lib/test/_test_multiprocessing.py#L3518-L3521
See also https://github.com/python/cpython/pull/25845
From https://buildbot.python.org/all/#/builders/249/builds/7957/steps/5/logs/stdio:
```
...
test_empty_authkey (test.test_multiprocessing_fork.test_processes.WithProcessesTestListener.test_empty_authkey) ... ok
test_multiple_bind (test.test_multiprocessing_fork.test_processes.WithProcessesTestListener.test_multiple_bind) ... ok
Warning -- Dangling threads: {<Thread(Thread-13 (run), started 139968890533568)>}
...
```
<!-- gh-linked-prs -->
### Linked PRs
* gh-116442
<!-- /gh-linked-prs -->
| c4ab9a4f1d923bb60a341856da1d273b17a545e0 | 936e2f36ade1506d56dd5f10e1967936aabe70b3 |
python/cpython | python__cpython-116438 | # Use PyDict_Pop()
`PyDict_Pop()` can be used in two cases:
* If the `PyDict_Get*` call is followed by the `PyDict_Del*` call.
* If the `PyDict_Del*` call is followed by the code that handles KeyError.
In both cases the use of `PyDict_Pop()` can make the code clearer.
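The Python-level analogue of both C patterns, for illustration:

```python
d = {"color": "red"}

# Case 1: a get followed by a delete...
value = d["color"]
del d["color"]

# ...reads better as a single pop:
d = {"color": "red"}
value = d.pop("color")
assert value == "red" and d == {}

# Case 2: delete-and-ignore-KeyError collapses to pop with a default:
try:
    del d["missing"]
except KeyError:
    pass
assert d.pop("missing", None) is None
```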
<!-- gh-linked-prs -->
### Linked PRs
* gh-116438
<!-- /gh-linked-prs -->
| 72d3cc94cd8cae1925e7a14f297b06ac6184f916 | 882fcede83af783a834b759e4643130dc1307ee3 |
python/cpython | python__cpython-116813 | # Split hot and cold parts of the templates.
<!-- gh-linked-prs -->
### Linked PRs
* gh-116813
* gh-116817
* gh-116832
<!-- /gh-linked-prs -->
| bf82f77957a31c3731b4ec470c406f5708ca9ba3 | 61599a48f52e951d8813877ee311d2a830ba2cd8 |
python/cpython | python__cpython-116466 | # Warning: "variable ‘right’ set but not used" in `optimizer_cases.c.h`
# Bug report
<img width="983" alt="Screenshot 2024-03-06 at 19 36 37" src="https://github.com/python/cpython/assets/4660275/64916f6b-93bb-4723-9c62-9e86d57048bf">
CC @Fidget-Spinner
<!-- gh-linked-prs -->
### Linked PRs
* gh-116466
<!-- /gh-linked-prs -->
| 4298d69d4b2f7d0e9d93ad325238930bd6235dbf | 68157446aa39dedf7c90d85a7b0924beda004e76 |
python/cpython | python__cpython-116419 | # C API: Move limited C API tests from _testcapi to a new _testlimitedcapi extension
Currently, the `_testcapi` C extension is partially built with the limited C API and partially with the non-limited C API. It can lead to some confusion: which API is being tested?
I propose creating a new `_testlimitedcapi` extension that is built only with the limited C API. It can only access the limited C API, so there is a lower risk of testing the non-limited C API by mistake.
<!-- gh-linked-prs -->
### Linked PRs
* gh-116419
* gh-116567
* gh-116568
* gh-116570
* gh-116571
* gh-116573
* gh-116598
* gh-116602
* gh-116974
* gh-116986
* gh-116993
* gh-117001
* gh-117006
* gh-117014
<!-- /gh-linked-prs -->
| d9bcdda39c62a8c37637ecd5f82f83f6e8828243 | d9ccde28c4321ffc0d3f8b18c6346d075b784c40 |
python/cpython | python__cpython-116405 | # Handle errors correctly in `wait_helper` in `posixmodule.c`
# Bug report
There are several issues in this function:
1. Our traditional case, when all errors are overwritten in a sequence preparation: https://github.com/python/cpython/blob/d2f1b0eb4956b4923f111c7c740ba7ab25f3312d/Modules/posixmodule.c#L9584-L9609
2. `PyLong_FromPid` can return `NULL`, but this case is ignored https://github.com/python/cpython/blob/d2f1b0eb4956b4923f111c7c740ba7ab25f3312d/Modules/posixmodule.c#L9611
I have a PR ready.
<!-- gh-linked-prs -->
### Linked PRs
* gh-116405
* gh-116406
* gh-116407
<!-- /gh-linked-prs -->
| 22ccf13b332902142fe0c52c593f9efc152c7761 | d2f1b0eb4956b4923f111c7c740ba7ab25f3312d |
python/cpython | python__cpython-122447 | # [macOS] ``test_builtin.PtyTests.test_input_tty`` hangs if ``rlcompleter`` is imported
# Bug report
### Bug description:
```python
./python.exe -m test
...many lines
0:05:14 load avg: 4.04 [ 87/472] test_bool
0:05:14 load avg: 4.04 [ 88/472] test_buffer
0:05:15 load avg: 3.80 [ 89/472] test_bufio
0:05:16 load avg: 3.80 [ 90/472] test_builtin
/Users/admin/Projects/cpython/Lib/pty.py:95: DeprecationWarning: This process (pid=47600) is multi-threaded, use of forkpty() may lead to deadlocks in the child.
pid, fd = os.forkpty()
```
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
macOS
<!-- gh-linked-prs -->
### Linked PRs
* gh-122447
* gh-122472
<!-- /gh-linked-prs -->
| 1d8e45390733d3eb29164799ea10f8406f53e830 | 8fb88b22b7a932ff16002dd19e904f9cafd59e9f |
python/cpython | python__cpython-116421 | # shutil.rmtree() gets stuck on opening named pipe
# Bug report
### Bug description:
When the target is a named pipe, shutil.rmtree() gets stuck on opening it.
```python
# Create a named pipe
import os, tempfile
filename = os.path.join(tempfile.mkdtemp(), "namedpipe")
os.mkfifo(filename)
# Try to remove it
import shutil
shutil.rmtree(filename) # <- This blocks indefinitely
^CTraceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/python3.11/shutil.py", line 725, in rmtree
fd = os.open(path, os.O_RDONLY, dir_fd=dir_fd)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
KeyboardInterrupt
```
This seems to be caused by `os.open()` with `os.O_RDONLY`.
https://github.com/python/cpython/blob/4637a1fcbd52d89b0561b588485e6240eac7b07d/Lib/shutil.py#L745
(This issue seems to exist on the main branch, IIUC)
Currently, the caller needs to check the file type and use `os.remove()` if the target is a named pipe.
Should this be handled inside `shutil.rmtree()`?
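A hedged workaround sketch (function name hypothetical, not a proposed patch): since `shutil.rmtree()` `os.open()`s its target and a FIFO with no writer blocks that open, checking the file type first sidesteps the hang.

```python
import os
import shutil
import stat

def rmtree_safe(path):
    """Hypothetical workaround: only hand directories to shutil.rmtree().

    shutil.rmtree() os.open()s its target, which blocks on a FIFO that has
    no writer; checking the file type up front avoids that open entirely.
    """
    if stat.S_ISDIR(os.lstat(path).st_mode):
        shutil.rmtree(path)
    else:
        os.remove(path)
```

With this guard, passing the named pipe from the reproducer removes it immediately instead of blocking.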
### CPython versions tested on:
3.11
### Operating systems tested on:
macOS
<!-- gh-linked-prs -->
### Linked PRs
* gh-116421
* gh-116716
* gh-116717
<!-- /gh-linked-prs -->
| aa7bcf284f006434b07839d82f325618f7a5c06c | 8332e85b2f079e8b9334666084d1f8495cff25c1 |
python/cpython | python__cpython-116414 | # _PyGC_ClearAllFreeLists called while other threads may be running
# Bug report
In the free-threaded GC, we currently call `_PyGC_ClearAllFreeLists()` after other threads are resumed. That's not safe because the other threads may be using their own freelists at that point.
https://github.com/python/cpython/blob/d2f1b0eb4956b4923f111c7c740ba7ab25f3312d/Python/gc_free_threading.c#L1163-L1164
We should probably move the call earlier, such as immediately after `handle_resurrected_objects` https://github.com/python/cpython/blob/d2f1b0eb4956b4923f111c7c740ba7ab25f3312d/Python/gc_free_threading.c#L1060-L1061
(Alternatively, we could make each thread responsible for clearing its own freelist, but that seems more complicated given the current implementation)
cc @corona10
<!-- gh-linked-prs -->
### Linked PRs
* gh-116414
<!-- /gh-linked-prs -->
| 2d4955fcf2a54d7ffc06a48774863ff65ba250d2 | 68b8ffff8c4b20d2f46b708b1a7906377ecc255f |
python/cpython | python__cpython-116398 | # Stop the world doesn't set paused threads states to `_Py_THREAD_SUSPENDED`
# Bug report
The `detach_thread` function should set the state to the `detached_state` argument, but currently that argument is unused.
https://github.com/python/cpython/blob/d2f1b0eb4956b4923f111c7c740ba7ab25f3312d/Python/pystate.c#L1925-L1941
<!-- gh-linked-prs -->
### Linked PRs
* gh-116398
<!-- /gh-linked-prs -->
| 834bf57eb79e9bf383a7173fccda032f4c53f69b | d9bcdda39c62a8c37637ecd5f82f83f6e8828243 |
python/cpython | python__cpython-116387 | # Warning: "'fprintf' : format string '%ld' requires an argument of type 'long', but variadic argument 1 has type 'int64_t'"
# Bug report
<img width="1297" alt="Screenshot 2024-03-06 at 00 54 29" src="https://github.com/python/cpython/assets/4660275/92f67751-1bba-438d-8a4a-f15e405c31fb">
Happens in `Modules/_xxinterpqueuesmodule.c`, spotted on https://github.com/python/cpython/pull/116371
<!-- gh-linked-prs -->
### Linked PRs
* gh-116387
<!-- /gh-linked-prs -->
| 40b79efae7f8e1e0d4fd50c13f8b7514203bc6da | b33980a2e3f195c63e3aadeeebd8e50eb41ad70c |
python/cpython | python__cpython-116385 | # Specialize `CONTAINS_OP`
# Feature or enhancement
### Proposal:
I'll work on the following specializations:
* `list`
* `dict`
* `tuple`
* `set`
* `str`
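For reference, a quick sketch of the `in` checks each proposed specialization would cover at the Python level:

```python
# One line per container type listed above; these are the shapes of
# CONTAINS_OP the proposed specializations target.
assert 2 in [1, 2, 3]       # list
assert 'k' in {'k': 1}      # dict (membership tests keys)
assert 2 in (1, 2)          # tuple
assert 2 in {1, 2}          # set
assert 'ab' in 'xaby'       # str (substring search)
```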
### Has this already been discussed elsewhere?
_No response_
### Links to previous discussion of this feature:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-116385
* gh-116464
<!-- /gh-linked-prs -->
| 7114cf20c015b99123b32c1ba4f5475b7a6c3a13 | 73807eb634315f70a464a18feaae33d9e065de09 |
python/cpython | python__cpython-116378 | # `glob.translate()` rejects non-recursive pattern segments that include "**"
# Bug report
### Bug description:
`glob.translate(recursive=True)` (new in 3.13) rejects any pattern with a segment that includes `**`, unless `**` is the entire segment. For example, `translate('**a')` is rejected but not `translate('**')`
Rejecting such segments is longstanding **pathlib** behaviour, but it has no precedent in the **glob** module -- `glob.glob(recursive=True)` happily accepts such segments.
```python
>>> import glob
>>> glob.glob('**.py', recursive=True)
['blah.py']
>>> glob.translate('**.py', recursive=True)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
glob.translate('**.py', recursive=True)
~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/barney/projects/cpython/Lib/glob.py", line 304, in translate
raise ValueError("Invalid pattern: '**' can only be an entire path component")
ValueError: Invalid pattern: '**' can only be an entire path component
```
`translate()` should treat these segments similar to `glob()`, and leave the pattern validation stuff to pathlib.
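Not the actual glob internals, but a sketch using `fnmatch` (which glob relies on for ordinary segments) of why `'**.py'` is a perfectly usable non-recursive pattern: within one component, a doubled star simply degenerates to `*`.

```python
import fnmatch
import re

# '**.py' translated as a plain (non-recursive) segment: the two adjacent
# stars match no more than a single star would.
regex = fnmatch.translate('**.py')
assert re.match(regex, 'blah.py')
assert not re.match(regex, 'blah.txt')
```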
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-116378
<!-- /gh-linked-prs -->
| 0634201f5391242524dbb5225de37f81a2cc1826 | 1cf03010865c66c2c3286ffdafd55e7ce2d97444 |
python/cpython | python__cpython-116471 | # Consider deprecating `platform.java_ver` because it is only helpful for Jython
# Feature or enhancement
What do you think about deprecating `platform.java_ver`? It is never used on CPython; it is a helper for `Jython` only, which has been basically stuck on 2.7 for the last 10 years.
I think that we should deprecate and remove such compatibility shims for tools that no longer exist. In the future, Jython can add its own implementation to its own stdlib, if ever. https://github.com/python/cpython/blob/a29998a06bf75264c3faaeeec4584a5f75b45a1f/Lib/platform.py#L516-L547
I propose to deprecate it in 3.13 and remove it in 3.15.
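A minimal sketch of what the deprecation step could look like (the wording of the message is illustrative, not the actual patch; the signature matches the documented `platform.java_ver`):

```python
import warnings

def java_ver(release='', vendor='', vminfo=('', '', ''), osinfo=('', '', '')):
    # Illustrative shim: emit a DeprecationWarning while keeping the old
    # return shape, so callers keep working during the deprecation period.
    warnings.warn('java_ver() is deprecated and slated for removal',
                  DeprecationWarning, stacklevel=2)
    return release, vendor, vminfo, osinfo
```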
<!-- gh-linked-prs -->
### Linked PRs
* gh-116471
<!-- /gh-linked-prs -->
| 0b647141d587065c5b82bd658485adca8823a943 | 4d952737e62b833d6782e0180ee89088fe601317 |
python/cpython | python__cpython-116334 | # Relax error string text expectations in SSL-related tests
# Feature or enhancement
### Proposal:
This Issue is a follow-up to [prior discussion](https://discuss.python.org/t/support-building-ssl-and-hashlib-modules-against-aws-lc/44505) on the python Ideas discussion board. Please see that page for background and discussion.
### Has this already been discussed elsewhere?
There is ongoing discussion of this feature proposal on [Python's discussion board](https://discuss.python.org/t/support-building-ssl-and-hashlib-modules-against-aws-lc/44505) as well PR #116334.
### Links to previous discussion of this feature:
https://discuss.python.org/t/support-building-ssl-and-hashlib-modules-against-aws-lc/44505
<!-- gh-linked-prs -->
### Linked PRs
* gh-116334
* gh-117136
* gh-117227
* gh-117229
<!-- /gh-linked-prs -->
| c85d84166a84a5cb2d724012726bad34229ad24e | 1f72fb5447ef3f8892b4a7a6213522579c618e8e |
python/cpython | python__cpython-116339 | # `sys_getwindowsversion_impl` might potentially swallow errors in `sysmodule.c`
# Bug report
While merging https://github.com/python/cpython/pull/115321 I've noticed a similar issue to https://github.com/python/cpython/issues/115320:
https://github.com/python/cpython/blob/207030f5527d405940b79c10c1413c1e8ff696c1/Python/sysmodule.c#L1661-L1699
It can also overwrite errors, I will send a PR similar to #115321 to fix this as well.
<!-- gh-linked-prs -->
### Linked PRs
* gh-116339
* gh-116354
* gh-116388
<!-- /gh-linked-prs -->
| c91bdf86ef1cf9365b61a46aa2e51e5d1932b00a | cbf3d38cbeb6e640d5959549169ec45cdedc1a71 |
python/cpython | python__cpython-116341 | # Unexpected IndexError in typing.List['']
# Bug report
### Bug description:
Calling `typing.List['']` produces an unexpected `IndexError`.
```python
>>> import typing
>>> typing.List['']
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/python3.11/typing.py", line 362, in inner
return cached(*args, **kwds)
^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/typing.py", line 1575, in __getitem__
params = tuple(_type_check(p, msg) for p in params)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/typing.py", line 1575, in <genexpr>
params = tuple(_type_check(p, msg) for p in params)
^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/typing.py", line 186, in _type_check
arg = _type_convert(arg, module=module, allow_special_forms=allow_special_forms)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/typing.py", line 164, in _type_convert
return ForwardRef(arg, module=module, is_class=allow_special_forms)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/typing.py", line 859, in __init__
if arg[0] == '*':
~~~^^^
IndexError: string index out of range
```
The construct is invalid, but I'd expect a more friendly error, something similar to
```python
>>> typing.Dict[1]
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/python3.11/typing.py", line 365, in inner
return func(*args, **kwds)
^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/typing.py", line 1576, in __getitem__
_check_generic(self, params, self._nparams)
File "/usr/lib/python3.11/typing.py", line 293, in _check_generic
raise TypeError(f"Too {'many' if alen > elen else 'few'} arguments for {cls};"
TypeError: Too few arguments for typing.Dict; actual 1, expected 2
```
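A hedged sketch of the kind of up-front validation that would turn the `IndexError` into a friendlier error (the helper name is hypothetical; the real check belongs in `typing`'s `ForwardRef`/`_type_check` machinery):

```python
def check_forward_ref_arg(arg):
    # Reject non-strings and the empty string before any arg[0] indexing,
    # so the caller sees a TypeError instead of an IndexError.
    if not isinstance(arg, str) or not arg:
        raise TypeError(
            f"Forward reference must be a non-empty string, got {arg!r}")
    return arg
```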
### CPython versions tested on:
3.11
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-116341
* gh-116347
* gh-116348
<!-- /gh-linked-prs -->
| a29998a06bf75264c3faaeeec4584a5f75b45a1f | ffcc450a9b8b6927549b501eff7ac14abc238448 |
python/cpython | python__cpython-116882 | # Allow C extensions to declare compatibility with free-threading
# Feature or enhancement
There are a few pieces to this:
1. Add a `Py_mod_gil` slot, as [described in PEP 703](https://peps.python.org/pep-0703/#py-mod-gil-slot), that multi-phase init modules can use to indicate that they support free-threading.
2. Add a `PyModule_ExperimentalSetGIL()` function (discussed [here](https://github.com/python/cpython/pull/116882#discussion_r1575006053)) that single-phase init modules can use in place of `Py_mod_gil`.
3. Mark all built-in modules as free-threading compatible with one of the above mechanisms.
4. Enable the GIL while loading a C module and leave it permanently enabled if the module does not declare free-threading compatibility.
1-3 are addressed in gh-116882; 4 will be addressed in a separate PR.
<!-- gh-linked-prs -->
### Linked PRs
* gh-116882
* gh-118560
* gh-118645
* gh-122268
* gh-122282
* gh-122284
<!-- /gh-linked-prs -->
| c2627d6eea924daf80f374c18a5fd73ef61283fa | 3e818afb9b7c557aa633aeb3d5c4959750feeab0 |
python/cpython | python__cpython-116317 | # Typo in `UNARY_FUNC(PyNumber_Positive)`
# Bug report
### Bug description:
https://github.com/python/cpython/blob/ea1b1c579f600cc85d145c60862b2e6b98701b24/Objects/abstract.c#L1393
There should be `__pos__` instead of `__pow__`.
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-116317
<!-- /gh-linked-prs -->
| 8a84eb75a94ead1cc0dcdde635096e58910d9356 | 60743a9a7ee3c3c16a61ff6715e8d170237b5458 |
python/cpython | python__cpython-116445 | # Update `Tools/wasm/README.md` to link to the devguide for building for WASI
Update https://github.com/python/cpython/blob/main/Tools/wasm/README.md to point to https://devguide.python.org/getting-started/setup-building/#wasi .
<!-- gh-linked-prs -->
### Linked PRs
* gh-116445
<!-- /gh-linked-prs -->
| bc708c76d2b3ad7bbfd6577a4444e1e47db60fd6 | 2d4955fcf2a54d7ffc06a48774863ff65ba250d2 |
python/cpython | python__cpython-116327 | # Get tests passing under wasmtime 18 using preview2 / WASI 0.2 primitives
This is a tracking issue for getting `wasmtime --wasi preview2` working for wasmtime 18+ (older versions had bugs we needed fixed).
- [x] `main`
- [x] `3.12`
- [x] `3.11`
<!-- gh-linked-prs -->
### Linked PRs
* gh-116327
* gh-116373
* gh-116384
<!-- /gh-linked-prs -->
| 7af063d1d85f965da06a65eca800f4c537d55fa5 | 6cddc731fb59edb66b64b7a8dbd9e281309a8384 |
python/cpython | python__cpython-116307 | # Tests fail if `--disable-test-modules` is supplied
# Bug report
### Bug description:
I compiled python 3.12 on a Debian system with `--enable-optimizations` and `--disable-test-modules`, and during the profile-guided optimization tests, multiple tests failed (see below).
Compiling python without `--disable-test-modules` makes the tests pass, and the output indicates that the failures are due to the test API not being available.
I suppose it's not too big of a deal, but this might normalize seeing errors when building python, leading users to ignore actual test failures.
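One possible fix direction, sketched as a guarded import (helper name hypothetical): tests that need the test API could skip cleanly when it was not built, instead of erroring out.

```python
import unittest

def load_testcapi_constants():
    """Sketch: skip cleanly when _testcapi is not built."""
    try:
        from _testcapi import INT_MAX, ULLONG_MAX
    except ImportError:  # e.g. a --disable-test-modules build
        raise unittest.SkipTest('_testcapi is not available')
    return INT_MAX, ULLONG_MAX
```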
<details>
```
# Next, run the profile task to generate the profile information.
LD_LIBRARY_PATH=/home/chaosflaws/Python-3.12.2 ./python -m test --pgo --timeout=1200 || true
Using random seed: 3095374302
0:00:00 load avg: 1.00 Run 44 tests sequentially (timeout: 20 min)
0:00:00 load avg: 1.00 [ 1/44] test_array
0:00:05 load avg: 1.00 [ 2/44] test_base64
0:00:07 load avg: 1.00 [ 3/44] test_binascii
0:00:08 load avg: 1.00 [ 4/44] test_binop
0:00:08 load avg: 1.00 [ 5/44] test_bisect
0:00:09 load avg: 1.00 [ 6/44] test_bytes
0:00:09 load avg: 1.00 [ 7/44] test_bz2 -- test_bytes skipped
0:00:12 load avg: 1.08 [ 8/44] test_cmath
0:00:13 load avg: 1.08 [ 9/44] test_codecs
0:00:20 load avg: 1.07 [10/44] test_collections
0:00:25 load avg: 1.07 [11/44] test_complex
0:00:27 load avg: 1.06 [12/44] test_dataclasses
0:00:29 load avg: 1.06 [13/44] test_datetime
0:00:44 load avg: 1.05 [14/44] test_decimal
0:01:08 load avg: 1.03 [15/44] test_difflib
0:01:14 load avg: 1.03 [16/44] test_embed
test test_embed failed
0:01:28 load avg: 1.02 [17/44] test_float -- test_embed failed (37 errors, 3 failures)
0:01:30 load avg: 1.02 [18/44] test_fstring
0:01:34 load avg: 1.02 [19/44] test_functools
0:01:37 load avg: 1.02 [20/44] test_generators
0:01:39 load avg: 1.02 [21/44] test_hashlib
0:01:41 load avg: 1.02 [22/44] test_heapq
0:01:45 load avg: 1.02 [23/44] test_int
test test_int failed
0:01:47 load avg: 1.01 [24/44] test_itertools -- test_int failed (2 errors)
0:02:15 load avg: 1.01 [25/44] test_json
0:02:26 load avg: 1.08 [26/44] test_long
0:02:44 load avg: 1.14 [27/44] test_lzma
0:02:45 load avg: 1.14 [28/44] test_math
0:03:02 load avg: 1.10 [29/44] test_memoryview
0:03:05 load avg: 1.10 [30/44] test_operator
0:03:05 load avg: 1.10 [31/44] test_ordered_dict
0:03:13 load avg: 1.08 [32/44] test_patma
0:03:14 load avg: 1.08 [33/44] test_pickle
0:03:48 load avg: 1.05 [34/44] test_pprint -- test_pickle passed in 34.1 sec
0:03:50 load avg: 1.05 [35/44] test_re
0:03:54 load avg: 1.04 [36/44] test_set
0:04:24 load avg: 1.02 [37/44] test_sqlite3
Failed to import test module: test.test_sqlite3.test_dbapi
Traceback (most recent call last):
File "/home/chaosflaws/Python-3.12.2/Lib/unittest/loader.py", line 394, in _find_test_path
module = self._get_module_from_name(name)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/chaosflaws/Python-3.12.2/Lib/unittest/loader.py", line 337, in _get_module_from_name
__import__(name)
File "/home/chaosflaws/Python-3.12.2/Lib/test/test_sqlite3/test_dbapi.py", line 37, in <module>
from _testcapi import INT_MAX, ULLONG_MAX
ModuleNotFoundError: No module named '_testcapi'
Failed to import test module: test.test_sqlite3.test_dump
Traceback (most recent call last):
File "/home/chaosflaws/Python-3.12.2/Lib/unittest/loader.py", line 394, in _find_test_path
module = self._get_module_from_name(name)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/chaosflaws/Python-3.12.2/Lib/unittest/loader.py", line 337, in _get_module_from_name
__import__(name)
File "/home/chaosflaws/Python-3.12.2/Lib/test/test_sqlite3/test_dump.py", line 5, in <module>
from .test_dbapi import memory_database
File "/home/chaosflaws/Python-3.12.2/Lib/test/test_sqlite3/test_dbapi.py", line 37, in <module>
from _testcapi import INT_MAX, ULLONG_MAX
ModuleNotFoundError: No module named '_testcapi'
Failed to import test module: test.test_sqlite3.test_hooks
Traceback (most recent call last):
File "/home/chaosflaws/Python-3.12.2/Lib/unittest/loader.py", line 394, in _find_test_path
module = self._get_module_from_name(name)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/chaosflaws/Python-3.12.2/Lib/unittest/loader.py", line 337, in _get_module_from_name
__import__(name)
File "/home/chaosflaws/Python-3.12.2/Lib/test/test_sqlite3/test_hooks.py", line 29, in <module>
from test.test_sqlite3.test_dbapi import memory_database, cx_limit
File "/home/chaosflaws/Python-3.12.2/Lib/test/test_sqlite3/test_dbapi.py", line 37, in <module>
from _testcapi import INT_MAX, ULLONG_MAX
ModuleNotFoundError: No module named '_testcapi'
Failed to import test module: test.test_sqlite3.test_regression
Traceback (most recent call last):
File "/home/chaosflaws/Python-3.12.2/Lib/unittest/loader.py", line 394, in _find_test_path
module = self._get_module_from_name(name)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/chaosflaws/Python-3.12.2/Lib/unittest/loader.py", line 337, in _get_module_from_name
__import__(name)
File "/home/chaosflaws/Python-3.12.2/Lib/test/test_sqlite3/test_regression.py", line 31, in <module>
from test.test_sqlite3.test_dbapi import memory_database, cx_limit
File "/home/chaosflaws/Python-3.12.2/Lib/test/test_sqlite3/test_dbapi.py", line 37, in <module>
from _testcapi import INT_MAX, ULLONG_MAX
ModuleNotFoundError: No module named '_testcapi'
Failed to import test module: test.test_sqlite3.test_transactions
Traceback (most recent call last):
File "/home/chaosflaws/Python-3.12.2/Lib/unittest/loader.py", line 394, in _find_test_path
module = self._get_module_from_name(name)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/chaosflaws/Python-3.12.2/Lib/unittest/loader.py", line 337, in _get_module_from_name
__import__(name)
File "/home/chaosflaws/Python-3.12.2/Lib/test/test_sqlite3/test_transactions.py", line 30, in <module>
from test.test_sqlite3.test_dbapi import memory_database
File "/home/chaosflaws/Python-3.12.2/Lib/test/test_sqlite3/test_dbapi.py", line 37, in <module>
from _testcapi import INT_MAX, ULLONG_MAX
ModuleNotFoundError: No module named '_testcapi'
Failed to import test module: test.test_sqlite3.test_userfunctions
Traceback (most recent call last):
File "/home/chaosflaws/Python-3.12.2/Lib/unittest/loader.py", line 394, in _find_test_path
module = self._get_module_from_name(name)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/chaosflaws/Python-3.12.2/Lib/unittest/loader.py", line 337, in _get_module_from_name
__import__(name)
File "/home/chaosflaws/Python-3.12.2/Lib/test/test_sqlite3/test_userfunctions.py", line 35, in <module>
from test.test_sqlite3.test_dbapi import cx_limit
File "/home/chaosflaws/Python-3.12.2/Lib/test/test_sqlite3/test_dbapi.py", line 37, in <module>
from _testcapi import INT_MAX, ULLONG_MAX
ModuleNotFoundError: No module named '_testcapi'
0:04:25 load avg: 1.02 [38/44] test_statistics -- test_sqlite3 failed (uncaught exception)
0:04:38 load avg: 1.02 [39/44] test_struct
0:04:41 load avg: 1.02 [40/44] test_tabnanny
0:04:43 load avg: 1.02 [41/44] test_time
0:04:46 load avg: 1.02 [42/44] test_unicode
0:04:59 load avg: 1.01 [43/44] test_xml_etree
0:05:02 load avg: 1.01 [44/44] test_xml_etree_c
Total duration: 5 min 8 sec
Total tests: run=8,196 failures=3 skipped=267
Total test files: run=44/44 failed=3 skipped=1
Result: FAILURE
```
</details>
### CPython versions tested on:
3.12
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-116307
* gh-116479
* gh-116482
* gh-117341
* gh-117368
* gh-117554
<!-- /gh-linked-prs -->
| a2548077614f81f25a2c3465dabb7a0a3885c40c | bb66600558cb8d5dd9a56f562bd9531eb1e1685f |
python/cpython | python__cpython-116297 | # Refleak in `reduce_newobj` in typeobject.c
_Originally posted by @brandtbucher in https://github.com/python/cpython/issues/115874#issuecomment-1965775536_:
> I also think I found an unrelated refleak while chasing this down:
>
> ```diff
> diff --git a/Objects/typeobject.c b/Objects/typeobject.c
> index fe3b7b87c8..181d032328 100644
> --- a/Objects/typeobject.c
> +++ b/Objects/typeobject.c
> @@ -6549,6 +6549,7 @@ reduce_newobj(PyObject *obj)
> }
> else {
> /* args == NULL */
> + Py_DECREF(copyreg);
> Py_DECREF(kwargs);
> PyErr_BadInternalCall();
> return NULL;
> ```
<!-- gh-linked-prs -->
### Linked PRs
* gh-116297
* gh-116299
* gh-116300
<!-- /gh-linked-prs -->
| 17c4849981905fb1c9bfbb2b963b6ee12e3efb2c | 1dce0073da2e48f3cd387f4d57b14d6813bb8a85 |
python/cpython | python__cpython-116282 | # Some functions/methods include '\*' in the docs
# Documentation
I've found some doc issues similar to the one in https://github.com/python/cpython/issues/114811. I searched `::.*\\\*` by regex to find them out, and I hope that's everything (please refer to the PR).
<!-- gh-linked-prs -->
### Linked PRs
* gh-116282
* gh-116285
* gh-116289
<!-- /gh-linked-prs -->
| 4859ecb8609b51e2f6b8fb1b295e9ee0f83e1be6 | 4d3ee77aef7c3f739b3f8d4dc46dd946c2a80627 |
python/cpython | python__cpython-116283 | # Docs: clarify object assignments intuition in the tutorial section
# Documentation
In the Tutorial section of the Python documentation, two parts ([introduction](https://docs.python.org/3/tutorial/introduction.html#lists) and [datastructures](https://docs.python.org/3/tutorial/datastructures.html#more-on-lists)) explain `list`s, and neither goes deeper into the concept of mutability in Python: assigning a list (or any other mutable object) is essentially a pointer assignment. It does not create a new object, but rather creates another reference to the same one, which can lead to unexpected results.
```python
>>> rgb = ['red', 'green', 'blue']
>>> rgba = rgb
>>> rgba.append('alpha')
>>> 'alpha' in rgb
True # Unexpected!
>>> id(rgb) == id(rgba)
True # Explains the nature
```
### Mentions
@nedbat gave advice in the IRC channel regarding better positioning of this material.
### Sidenote
While researching this topic, I found a Google's [paper](https://developers.google.com/edu/python/lists) on the topic that starts introduction to the `list` concept with this peculiarity.
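The aliasing shown above disappears as soon as a copy is made, which is the intuition the tutorial could spell out:

```python
rgb = ['red', 'green', 'blue']
rgba = rgb.copy()   # rgb[:] or list(rgb) behave the same way
rgba.append('alpha')
assert 'alpha' not in rgb   # the copy is a distinct object
assert rgb is not rgba
```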
<!-- gh-linked-prs -->
### Linked PRs
* gh-116283
* gh-116305
* gh-116306
<!-- /gh-linked-prs -->
| 45a92436c5c42ca100f3ea0de9e7d37f1a97439b | eda2963378a3c292cf6bb202bb00e94e46ee6d90 |
python/cpython | python__cpython-116284 | # The README of c-analyzer needs update
# Documentation
https://github.com/python/cpython/blob/87faec28c78f6fa8eaaebbd1ababf687c7508e71/Tools/c-analyzer/README#L14
The README mentions an `ignored-globals.txt` file that was removed in #22841.
<!-- gh-linked-prs -->
### Linked PRs
* gh-116284
* gh-116331
* gh-116332
<!-- /gh-linked-prs -->
| 88b5c665ee1624af1bc5097d3eb2af090b9cabed | eb22e2b251002b65f3b93e67c990c21e1151b25d |
python/cpython | python__cpython-122788 | # RotatingFileHandler can create empty backups
# Bug report
In the following example the rotating file handler writes the message in file "test.log" and creates an empty backup file "test.log.1".
```python
import logging.handlers
fh = logging.handlers.RotatingFileHandler('test.log', maxBytes=100, backupCount=1)
fh.emit(logging.makeLogRecord({'msg': 'x'*100}))
fh.close()
```
I think creating an empty backup file is meaningless. `shouldRollover()` should return False if `self.stream.tell()` returns 0 (which happens for a just-created file).
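One possible direction, sketched here as a subclass rather than a patch to `shouldRollover()` itself (the class name is hypothetical):

```python
import logging.handlers

class NonEmptyRotatingFileHandler(logging.handlers.RotatingFileHandler):
    """Never roll over a file that has not been written to yet."""

    def shouldRollover(self, record):
        if self.stream is None:  # handler was created with delay=True
            self.stream = self._open()
        if self.stream.tell() == 0:
            return False  # just-created file: the backup would be empty
        return super().shouldRollover(record)
```

Running the report's reproducer with this handler writes the oversized record to `test.log` without creating an empty `test.log.1`.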
<!-- gh-linked-prs -->
### Linked PRs
* gh-122788
* gh-122814
* gh-122815
<!-- /gh-linked-prs -->
| 6094c6fc2fc30eb9ee7c2f9f1088a851f71bf1b9 | 3f76b6b8ac706be46de0b23c3fd582ec4bd176d5 |
python/cpython | python__cpython-116227 | # Compiling `Modules/_testexternalinspection.c` failing due to `PTHREADS_KEYS_MAX` not being defined
# Bug report
### Bug description:
On my local machine, I'm getting this compile error:
```
In file included from ../../Modules/_testexternalinspection.c:47:
In file included from ../../Include/internal/pycore_runtime.h:12:
In file included from ../../Include/internal/pycore_ceval_state.h:12:
In file included from ../../Include/internal/pycore_gil.h:11:
In file included from ../../Include/internal/pycore_condvar.h:8:
../../Include/internal/pycore_pythread.h:76:46: error: use of undeclared identifier 'PTHREAD_KEYS_MAX'
struct py_stub_tls_entry tls_entries[PTHREAD_KEYS_MAX];
^
In file included from ../../Modules/_testexternalinspection.c:47:
In file included from ../../Include/internal/pycore_runtime.h:12:
In file included from ../../Include/internal/pycore_ceval_state.h:12:
../../Include/internal/pycore_gil.h:14:4: error: You need either a POSIX-compatible or a Windows system!
# error You need either a POSIX-compatible or a Windows system!
^
../../Include/internal/pycore_gil.h:36:5: error: unknown type name 'PyCOND_T'
PyCOND_T cond;
^
../../Include/internal/pycore_gil.h:37:5: error: unknown type name 'PyMUTEX_T'
PyMUTEX_T mutex;
^
../../Include/internal/pycore_gil.h:41:5: error: unknown type name 'PyCOND_T'
PyCOND_T switch_cond;
^
../../Include/internal/pycore_gil.h:42:5: error: unknown type name 'PyMUTEX_T'
PyMUTEX_T switch_mutex;
```
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Other
<!-- gh-linked-prs -->
### Linked PRs
* gh-116227
<!-- /gh-linked-prs -->
| 90a1e9880fb0c03a82df6eaa9c7430b03e86531c | 9e88173d363fb22c2c7bf3da3a266817db6bf24b |
python/cpython | python__cpython-116256 | # Availability of resource constants not properly documented
# Documentation
In the [docs for the 'resource' module](https://docs.python.org/3.11/library/resource.html), RLIMIT_VMEM does not have any availability notes; however, on my system (Python 3.11.7 on Arch Linux virtualized under WSL) RLIMIT_VMEM is not defined. I see:
```sh
Python 3.11.7 (main, Jan 29 2024, 16:03:57) [GCC 13.2.1 20230801] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import resource as r
>>> dir(r)
['RLIMIT_AS', 'RLIMIT_CORE', 'RLIMIT_CPU', 'RLIMIT_DATA', 'RLIMIT_FSIZE', 'RLIMIT_MEMLOCK', 'RLIMIT_MSGQUEUE', 'RLIMIT_NICE', 'RLIMIT_NOFILE', 'RLIMIT_NPROC', 'RLIMIT_OFILE', 'RLIMIT_RSS', 'RLIMIT_RTPRIO', 'RLIMIT_RTTIME', 'RLIMIT_SIGPENDING', 'RLIMIT_STACK', 'RLIM_INFINITY', 'RUSAGE_CHILDREN', 'RUSAGE_SELF', 'RUSAGE_THREAD', '__doc__', '__file__', '__loader__', '__name__', '__package__', '__spec__', 'error', 'getpagesize', 'getrlimit', 'getrusage', 'prlimit', 'setrlimit', 'struct_rusage']
```
Not sure which systems RLIMIT_VMEM is available on, but that should probably be mentioned.
<!-- gh-linked-prs -->
### Linked PRs
* gh-116256
* gh-116533
* gh-116534
<!-- /gh-linked-prs -->
| 03f86b1b626ac5b0df1cc74d8f80ea11117aec8c | c951e25c24910064a4c8b7959e2f0f7c0d4d0a63 |
python/cpython | python__cpython-121260 | # Remove `Py_BUILD_CORE_MODULE` and `Py_BUILD_CORE_BUILTIN` in `rotatingtree.c`
# Feature or enhancement
### Proposal:
Currently, we require `PyMutex` to isolate the state of the pseudo-random generator in `rotatingtree.c`; thus `Py_BUILD_CORE_MODULE` was introduced in #115301. However, we should refrain from using `Py_BUILD_CORE_MODULE` or `Py_BUILD_CORE_BUILTIN` to build non-builtin stdlib modules.
Therefore, we should either remove them once `PyMutex` becomes public or explore alternative solutions to address this issue.
### Has this already been discussed elsewhere?
Previous discussion: https://github.com/python/cpython/pull/115301#discussion_r1508284232
### Links to previous discussion of this feature:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-121260
* gh-121307
<!-- /gh-linked-prs -->
| 705a123898f1394b62076c00ab6008c18fd8e115 | ff5806c78edda1feed61254ac55193772dc9ec48 |
python/cpython | python__cpython-116172 | # Argument Clinic: overriding the return converter of `__init__` functions generates incorrect code
# Bug report
Instead of trying to fix this, let's just disallow this like we do for PyGetSet methods.
<!-- gh-linked-prs -->
### Linked PRs
* gh-116172
<!-- /gh-linked-prs -->
| cc6f807760300b575195bb8e678b82c10e24231c | 41baa03d30bc6b8a439ccca42b656d2c50392896 |
python/cpython | python__cpython-117242 | # Tier2 peephole optimization: remove extraneous _CHECK_STACK_SPACE ops
Implement a new peephole optimization for the tier2 optimizer that removes _CHECK_STACK_SPACE if we see that this check is already present
<!-- gh-linked-prs -->
### Linked PRs
* gh-117242
<!-- /gh-linked-prs -->
| 1c434688866db79082def4f9ef572b85d85908c8 | 976bcb2379709da57073a9e07b518ff51daa617a |
python/cpython | python__cpython-116338 | # Add a mechanism to disable the GIL
# Feature or enhancement
### Proposal:
`PYTHON_GIL=0 python ...` or `python -Xgil=0 ...` should disable the GIL at runtime, in `Py_GIL_DISABLED` builds. This will be similar in spirit to colesbury/nogil-3.12@f546dbf16a.
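Code that wants to check the result of these knobs should probe defensively: the introspection hook `sys._is_gil_enabled()` only exists on 3.13+ free-threading-aware builds, and on any other build the GIL is always enabled (a minimal sketch):

```python
import sys

# sys._is_gil_enabled() was added alongside this feature (3.13+);
# fall back to True elsewhere, since the GIL is always on there.
probe = getattr(sys, "_is_gil_enabled", None)
gil_enabled = probe() if probe is not None else True
print("GIL enabled:", gil_enabled)
```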
### Has this already been discussed elsewhere?
I have already discussed this feature proposal on Discourse.
### Links to previous discussion of this feature:
https://peps.python.org/pep-0703/
<!-- gh-linked-prs -->
### Linked PRs
* gh-116338
<!-- /gh-linked-prs -->
| 2731913dd5234ff5ab630a3b7f1c98ad79d4d9df | 546eb7a3be241c5abd8a83cebbbab8c71107edcf |
python/cpython | python__cpython-116162 | # argparse is slow when parsing large number of optional flags
# Bug report
### Bug description:
When parsing positional vs. optional arguments, the use of `min` with a
list comprehension inside a loop [1][] results in quadratic time in
the number of optional arguments given. When combined with the use of
prefix-based argument files and a large number of optional flags, this
can result in extremely slow parsing behavior.
Example test case: https://gist.github.com/amyreese/b825bc092210ea7b64287f033dc995d8
The test case parses a list of arguments that contains 30,000 copies
of `--flag=something`. This takes roughly 15-16 seconds to parse:
```console
$ ~/opt/cpython/bin/python3.13 -VV
Python 3.13.0a4+ (heads/main:0656509033, Feb 29 2024, 11:59:43) [Clang 15.0.0 (clang-1500.1.0.2.5)]
$ ~/opt/cpython/bin/python3.13 test_many_flags.py
parsed args in 15,195,914 µs
```
[1]: https://github.com/python/cpython/blob/fa1d675309c6a08b0833cf25cffe476c6166aba3/Lib/argparse.py#L2156
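A scaled-down version of the linked test case (3,000 copies instead of 30,000 keeps the quadratic cost visible while staying quick):

```python
import argparse
import time

parser = argparse.ArgumentParser()
parser.add_argument("--flag")

argv = ["--flag=something"] * 3000
t0 = time.perf_counter()
ns = parser.parse_args(argv)  # repeated options: the last value wins
elapsed_us = (time.perf_counter() - t0) * 1e6

# On affected versions this grows quadratically with len(argv).
print(f"parsed args in {elapsed_us:,.0f} µs")
```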
### CPython versions tested on:
3.8, 3.9, 3.10, 3.11, 3.12, 3.13, CPython main branch
### Operating systems tested on:
macOS
<!-- gh-linked-prs -->
### Linked PRs
* gh-116162
<!-- /gh-linked-prs -->
| 4a630980328b67f0aba6a04c3950aa9dbd532895 | 7b4794319c56b6b00e852c745af50fd95bd1a550 |