| repo | instance_id | problem_statement | merge_commit | base_commit |
|---|---|---|---|---|
python/cpython | python__cpython-115499 | # `_xxinterpchannelsmodule.c` incorrect error handling in `SET_COUNT` macro
# Bug report
Right now it uses `Py_DECREF(info)`, which is strange because `info` has type `struct channel_info *`.
But it does not clear `PyObject *self = PyStructSequence_New(state->ChannelInfoType);`, which is declared right above. I think this might be a typo.
Link: https://github.com/python/cpython/blob/4ebf8fbdab1c64041ff0ea54b3d15624f6e01511/Modules/_xxinterpchannelsmodule.c#L2140-L2165
I have a PR ready.
<!-- gh-linked-prs -->
### Linked PRs
* gh-115499
<!-- /gh-linked-prs -->
| fd2bb4be3dd802b1957cf37fe68a3634ab054b2e | 26f23daa1ea30dea368f00c2131017cef2586adc |
python/cpython | python__cpython-115573 | # Keep `ob_tid`, `ob_ref_local`, and `ob_ref_shared` fields valid across allocations in free-threaded build
# Feature or enhancement
The free-threaded implementations of dict and list try to avoid acquiring locks during read operations. To support this, we need to be able to access the reference count fields of Python objects after they are deallocated (and possibly reallocated). Some of this support is provided by https://github.com/python/cpython/issues/115103. Additionally, we need to ensure that the debug allocators do not overwrite these fields with "dead" bytes `0xDD`, which might make the object look "alive" by having a non-zero reference count.
We still would like to overwrite the rest of the allocation (i.e., from `ob_type` onwards) to detect use-after-frees in debug builds.
See also: https://peps.python.org/pep-0703/#optimistically-avoiding-locking
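As a toy model of the intended behaviour (purely illustrative — the real change lives in the C debug allocator, and the 24-byte "header" below is just a stand-in for `ob_tid`, `ob_ref_local`, and `ob_ref_shared`):

```python
DEAD_BYTE = 0xDD
HEADER = 24  # stand-in size for ob_tid + ob_ref_local + ob_ref_shared

def debug_free(block: bytearray) -> None:
    """Scribble dead bytes over the allocation, but keep the header fields
    valid so racing readers never see a bogus non-zero refcount pattern."""
    for i in range(HEADER, len(block)):
        block[i] = DEAD_BYTE

obj = bytearray(64)          # pretend this is a freed object's memory
debug_free(obj)
assert all(b != DEAD_BYTE for b in obj[:HEADER])   # header left intact
assert all(b == DEAD_BYTE for b in obj[HEADER:])   # rest poisoned for UAF detection
```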
<!-- gh-linked-prs -->
### Linked PRs
* gh-115573
* gh-115745
* gh-116153
<!-- /gh-linked-prs -->
| cc82e33af978df793b83cefe4e25e07223a3a09e | c0b0c2f2015fb27db4306109b2b3781eb2057c2b |
python/cpython | python__cpython-115493 | # test_interpreters fails when running tests sequentially
# Bug report
### Bug description:
test_interpreters fails when running tests sequentially (which we do as part of the release process). The easiest reproducer seems to be `python -m test test_interpreters test_interpreters.test_channels`:
```
% bin/python -m test test_interpreters test_interpreters.test_channels
0:00:00 load avg: 0.86 [1/2] test_interpreters
0:00:02 load avg: 0.87 [2/2] test_interpreters.test_channels
test test_interpreters.test_channels crashed -- Traceback (most recent call last):
File "/tmp/testinstall/lib/python3.13/test/libregrtest/single.py", line 178, in _runtest_env_changed_exc
_load_run_test(result, runtests)
~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^
File "/tmp/testinstall/lib/python3.13/test/libregrtest/single.py", line 125, in _load_run_test
test_mod = importlib.import_module(module_name)
~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^
File "/tmp/testinstall/lib/python3.13/importlib/__init__.py", line 88, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<frozen importlib._bootstrap>", line 1387, in _gcd_import
File "<frozen importlib._bootstrap>", line 1360, in _find_and_load
File "<frozen importlib._bootstrap>", line 1331, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 935, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 1014, in exec_module
File "<frozen importlib._bootstrap>", line 488, in _call_with_frames_removed
File "/tmp/testinstall/lib/python3.13/test/test_interpreters/test_channels.py", line 10, in <module>
from test.support.interpreters import channels
File "/tmp/testinstall/lib/python3.13/test/support/interpreters/channels.py", line 171, in <module>
_channels._register_end_types(SendChannel, RecvChannel)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: already registered
```
The cause is libregrtest's behaviour of [unloading newly imported modules from `sys.modules`](https://github.com/python/cpython/blob/main/Lib/test/libregrtest/main.py#L338) after each test run. Either test.support.interpreters.channels should support being imported multiple times (perhaps by not doing any setup at import time, but via deliberate calls?), or libregrtest should be taught to skip unloading test.support.interpreters modules.
@ericsnowcurrently, since this seems to be your area, any opinion on making test.support.interpreters do the right thing here?
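One possible shape for the "support being imported multiple times" option, as a self-contained sketch — the `Registry` class below is a hypothetical stand-in for the C-level `_channels` state, which outlives the Python module:

```python
class Registry:
    """Stand-in for the C-level _channels state, which survives module unloading."""
    def __init__(self):
        self._types = None

    def register(self, send_cls, recv_cls):
        if self._types is not None:
            raise TypeError("already registered")
        self._types = (send_cls, recv_cls)

registry = Registry()  # lives in the extension, not in the Python module

def module_body():
    """Simulates the code that runs at import time of channels.py."""
    class SendChannel: pass
    class RecvChannel: pass
    try:
        registry.register(SendChannel, RecvChannel)
    except TypeError:
        pass  # an earlier import already registered; keep the existing state

module_body()
module_body()  # re-import after libregrtest unloads the module: no TypeError
```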
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-115493
* gh-115515
<!-- /gh-linked-prs -->
| eb22e2b251002b65f3b93e67c990c21e1151b25d | 207030f5527d405940b79c10c1413c1e8ff696c1 |
python/cpython | python__cpython-115484 | # `test.test_interpreters.test_api.TestInterpreterIsRunning.test_main` Fails in Embedded App
# Bug report
(See https://discuss.python.org/t/clarification-on-pep-734-subinterpreters-with-embedded-c-api/45889/1)
The test fails if CPython (main/3.13) is embedded without using `Py_Main()`. This is because `Py_Main()` calls `_PyInterpreterState_SetRunningMain()`, which is a new *internal* API that embedders are unlikely to even know about.
The failure can be reproduced in a regular build by commenting out the `_PyInterpreterState_SetRunningMain()` and `_PyInterpreterState_SetNotRunningMain()` calls in Modules/main.c:
<details>
<summary>(expand)</summary>
```diff
diff --git a/Modules/main.c b/Modules/main.c
index df2ce55024..5c0cb8222d 100644
--- a/Modules/main.c
+++ b/Modules/main.c
@@ -612,7 +612,7 @@ pymain_run_python(int *exitcode)
pymain_header(config);
- _PyInterpreterState_SetRunningMain(interp);
+// _PyInterpreterState_SetRunningMain(interp);
assert(!PyErr_Occurred());
if (config->run_command) {
@@ -638,7 +638,7 @@ pymain_run_python(int *exitcode)
*exitcode = pymain_exit_err_print();
done:
- _PyInterpreterState_SetNotRunningMain(interp);
+// _PyInterpreterState_SetNotRunningMain(interp);
Py_XDECREF(main_importer_path);
}
```
</details>
FTR, I added the "running"-tracking API (1dd9dee45d2591b4e701039d1673282380696849) to support PEP 734's `Interpreter.is_running()`. The PEP hasn't been accepted yet, but we still use the implementation to exercise subinterpreters in the test suite.
Possible solutions:
* always assume the main interpreter is running (in the main thread)
* make calls to the “running”-tracking API implicit to calls to the PyRun_*() family (and similar)
* infer the “running”-tracking API should have been called in certain situations
* make it public API
* stop tracking if an interpreter is “running” (i.e. drop the API and the related state)
I plan on implementing a short-term fix right away. If we keep "running"-tracking then we'll definitely at least want to make the API public.
CC @freakboy3742
<!-- gh-linked-prs -->
### Linked PRs
* gh-115484
<!-- /gh-linked-prs -->
| 468430189d3ebe16f3067279f9be0fe82cdfadf6 | 3e7b7df5cbaad5617cc28f0c005010787c48e6d6 |
python/cpython | python__cpython-115478 | # Type/constant/value propagation for `BINARY_OP`
<!-- gh-linked-prs -->
### Linked PRs
* gh-115478
* gh-115507
* gh-115550
* gh-115710
* gh-118050
<!-- /gh-linked-prs -->
| 4ebf8fbdab1c64041ff0ea54b3d15624f6e01511 | ed23839dc5ce21ea9ca087fac170fa1412005210 |
python/cpython | python__cpython-115558 | # Split micro-ops that have different behavior depending on low bit of oparg.
Splitting these micro-ops will improve performance by reducing the number of branches, the size of code generated, and the number of holes in the JIT stencils. There is no real downside; the increase in complexity at runtime is negligible and there isn't much increased complexity in the tooling.
Take `_LOAD_ATTR_INSTANCE_VALUE` as an example, as it is dynamically the most common.
```
op(_LOAD_ATTR_INSTANCE_VALUE, (index/1, owner -- attr, null if (oparg & 1))) {
...
```
can be split into
```
op(_LOAD_ATTR_INSTANCE_VALUE_0, (index/1, owner -- attr)) {
assert((oparg & 1) == 0);
...
```
and
```
op(_LOAD_ATTR_INSTANCE_VALUE_1, (index/1, owner -- attr, null)) {
assert((oparg & 1) == 1);
...
```
Each of these is simpler, thus smaller and faster than the base version.
We can always choose one of the two split versions when projecting the trace, so we don't need an implementation of the base version at all. This means that the tier 2 interpreter and stencils aren't much bigger than before.
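As a toy illustration of that projection step (the function below is illustrative, not actual CPython code), the optimizer can pick the split variant directly from the low bit of `oparg`:

```python
def project_uop(base_name: str, oparg: int) -> str:
    # Select the split micro-op variant from the low bit of oparg,
    # so the trace never needs the branchy base version.
    return f"{base_name}_{oparg & 1}"

assert project_uop("_LOAD_ATTR_INSTANCE_VALUE", 2) == "_LOAD_ATTR_INSTANCE_VALUE_0"
assert project_uop("_LOAD_ATTR_INSTANCE_VALUE", 3) == "_LOAD_ATTR_INSTANCE_VALUE_1"
```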
<!-- gh-linked-prs -->
### Linked PRs
* gh-115558
<!-- /gh-linked-prs -->
| 626c414995bad1dab51c7222a6f7bf388255eb9e | 7b21403ccd16c480812a1e857c0ee2deca592be0 |
python/cpython | python__cpython-115451 | # Direct invocation of `test_descrtut.py` fails
# Bug report
Output:
```
» ./python.exe Lib/test/test_descrtut.py
F..F....
======================================================================
FAIL: tut1 (__main__.__test__)
Doctest: __main__.__test__.tut1
----------------------------------------------------------------------
Traceback (most recent call last):
File "/Users/sobolev/Desktop/cpython2/Lib/doctest.py", line 2271, in runTest
raise self.failureException(self.format_failure(new.getvalue()))
AssertionError: Failed doctest test for __main__.__test__.tut1
File "/Users/sobolev/Desktop/cpython2/Lib/test/test_descrtut.py", line unknown line number, in tut1
----------------------------------------------------------------------
File "/Users/sobolev/Desktop/cpython2/Lib/test/test_descrtut.py", line ?, in __main__.__test__.tut1
Failed example:
print(defaultdict) # show our type
Expected:
<class 'test.test_descrtut.defaultdict'>
Got:
<class '__main__.defaultdict'>
----------------------------------------------------------------------
File "/Users/sobolev/Desktop/cpython2/Lib/test/test_descrtut.py", line ?, in __main__.__test__.tut1
Failed example:
print(type(a)) # show its type
Expected:
<class 'test.test_descrtut.defaultdict'>
Got:
<class '__main__.defaultdict'>
----------------------------------------------------------------------
File "/Users/sobolev/Desktop/cpython2/Lib/test/test_descrtut.py", line ?, in __main__.__test__.tut1
Failed example:
print(a.__class__) # show its class
Expected:
<class 'test.test_descrtut.defaultdict'>
Got:
<class '__main__.defaultdict'>
======================================================================
FAIL: tut4 (__main__.__test__)
Doctest: __main__.__test__.tut4
----------------------------------------------------------------------
Traceback (most recent call last):
File "/Users/sobolev/Desktop/cpython2/Lib/doctest.py", line 2271, in runTest
raise self.failureException(self.format_failure(new.getvalue()))
AssertionError: Failed doctest test for __main__.__test__.tut4
File "/Users/sobolev/Desktop/cpython2/Lib/test/test_descrtut.py", line unknown line number, in tut4
----------------------------------------------------------------------
File "/Users/sobolev/Desktop/cpython2/Lib/test/test_descrtut.py", line ?, in __main__.__test__.tut4
Failed example:
C.foo(1)
Expected:
classmethod <class 'test.test_descrtut.C'> 1
Got:
classmethod <class '__main__.C'> 1
----------------------------------------------------------------------
File "/Users/sobolev/Desktop/cpython2/Lib/test/test_descrtut.py", line ?, in __main__.__test__.tut4
Failed example:
c.foo(1)
Expected:
classmethod <class 'test.test_descrtut.C'> 1
Got:
classmethod <class '__main__.C'> 1
----------------------------------------------------------------------
File "/Users/sobolev/Desktop/cpython2/Lib/test/test_descrtut.py", line ?, in __main__.__test__.tut4
Failed example:
D.foo(1)
Expected:
classmethod <class 'test.test_descrtut.D'> 1
Got:
classmethod <class '__main__.D'> 1
----------------------------------------------------------------------
File "/Users/sobolev/Desktop/cpython2/Lib/test/test_descrtut.py", line ?, in __main__.__test__.tut4
Failed example:
d.foo(1)
Expected:
classmethod <class 'test.test_descrtut.D'> 1
Got:
classmethod <class '__main__.D'> 1
----------------------------------------------------------------------
File "/Users/sobolev/Desktop/cpython2/Lib/test/test_descrtut.py", line ?, in __main__.__test__.tut4
Failed example:
E.foo(1)
Expected:
E.foo() called
classmethod <class 'test.test_descrtut.C'> 1
Got:
E.foo() called
classmethod <class '__main__.C'> 1
----------------------------------------------------------------------
File "/Users/sobolev/Desktop/cpython2/Lib/test/test_descrtut.py", line ?, in __main__.__test__.tut4
Failed example:
e.foo(1)
Expected:
E.foo() called
classmethod <class 'test.test_descrtut.C'> 1
Got:
E.foo() called
classmethod <class '__main__.C'> 1
----------------------------------------------------------------------
Ran 8 tests in 0.008s
FAILED (failures=2)
```
I have a PR ready.
<!-- gh-linked-prs -->
### Linked PRs
* gh-115451
* gh-115453
* gh-115454
<!-- /gh-linked-prs -->
| ec8909a23931338f81803ea3f18dc2073f74a152 | 029ec91d43b377535ff7eb94993e0d2add4af720 |
python/cpython | python__cpython-115460 | # New warning: `missing braces around initializer [-Wmissing-braces]`
# Bug report
### Bug description:
Popped up in https://github.com/python/cpython/pull/115440/files
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-115460
<!-- /gh-linked-prs -->
| 17773fcb863d5aef299487b07207c2ced8e9477e | a2d4281415e67c62f91363376db97eb66a9fb716 |
python/cpython | python__cpython-115433 | # Add variant of `Py_BEGIN_CRITICAL_SECTION` that accepts a `NULL` argument
# Feature or enhancement
We should add a variant of `Py_BEGIN_CRITICAL_SECTION` / `Py_END_CRITICAL_SECTION` that accepts a possibly `NULL` object. If the passed object is `NULL` then nothing is locked or unlocked. Otherwise, it behaves like `Py_BEGIN_CRITICAL_SECTION`.
For example:
```c
PyObject *object = maybe ? real_object : NULL;
Py_XBEGIN_CRITICAL_SECTION(object);
...
Py_XEND_CRITICAL_SECTION();
```
This will be useful in making `set` thread-safe. There are a number of functions that take an optional `iterable` that may be NULL. We want to lock it in the cases where it's not `NULL`.
I don't think we will need a version of `Py_BEGIN_CRITICAL_SECTION2` that accepts optionally `NULL` arguments.
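For intuition, a rough Python analog of the proposed semantics (illustration only — the real API is a pair of C macros operating on the object's per-object mutex):

```python
import threading
from contextlib import contextmanager

@contextmanager
def x_critical_section(obj):
    """Lock obj's mutex only when obj is not None (NULL in C); otherwise no-op."""
    if obj is None:
        yield
        return
    with obj.lock:
        yield

class Locked:
    def __init__(self):
        self.lock = threading.Lock()

item = Locked()
with x_critical_section(item):   # locks item.lock for the duration
    assert item.lock.locked()
with x_critical_section(None):   # locks nothing, mirrors the NULL case
    pass
```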
<!-- gh-linked-prs -->
### Linked PRs
* gh-115433
* gh-118861
* gh-118872
<!-- /gh-linked-prs -->
| 46c808172fd3148e3397234b23674bf70734fb55 | 68fbc00dc870f6a8dcbecd2ec19298e21015867f |
python/cpython | python__cpython-115422 | # Not all tests are installed.
# Bug report
### Bug description:
Not all of the tests in Lib/test are being installed, which means some tests are (silently) not run from an installed Python:
```
% /tmp/testinstall/bin/python3 -m test test_multiprocessing_fork
Traceback (most recent call last):
File "<frozen runpy>", line 198, in _run_module_as_main
File "<frozen runpy>", line 88, in _run_code
File "/tmp/testinstall/lib/python3.13/test/__main__.py", line 2, in <module>
main(_add_python_opts=True)
~~~~^^^^^^^^^^^^^^^^^^^^^^^
File "/tmp/testinstall/lib/python3.13/test/libregrtest/main.py", line 680, in main
Regrtest(ns, _add_python_opts=_add_python_opts).main(tests=tests)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^
File "/tmp/testinstall/lib/python3.13/test/libregrtest/main.py", line 662, in main
selected, tests = self.find_tests(tests)
~~~~~~~~~~~~~~~^^^^^^^
File "/tmp/testinstall/lib/python3.13/test/libregrtest/main.py", line 198, in find_tests
selected = split_test_packages(selected)
~~~~~~~~~~~~~~~~~~~^^^^^^^^^^
File "/tmp/testinstall/lib/python3.13/test/libregrtest/findtests.py", line 70, in split_test_packages
splitted.extend(findtests(testdir=subdir, exclude=exclude,
~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
split_test_dirs=split_test_dirs,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
base_mod=name))
^^^^^^^^^^^^^^
File "/tmp/testinstall/lib/python3.13/test/libregrtest/findtests.py", line 43, in findtests
for name in os.listdir(testdir):
~~~~~~~~~~^^^^^^^^^
FileNotFoundError: [Errno 2] No such file or directory: '/tmp/testinstall/lib/python3.13/test/test_multiprocessing_fork'
```
From a quick look it seems we're missing test_concurrent_futures, test_interpreters, test_multiprocessing_fork, test_multiprocessing_forkserver and test_multiprocessing_spawn. This is a problem in 3.12 and earlier as well.
Besides tests it's also missing the test.support.interpreters package:
```
% ls Lib/test/support/interpreters/
channels.py __init__.py queues.py
% ls /tmp/testinstall/lib/python3.13/test/support/interpreters
ls: cannot access '/tmp/testinstall/lib/python3.13/test/support/interpreters': No such file or directory
```
... which has not caused issues because its main user, test/test_subinterpreters, isn't being installed, and the other uses of it in the test suite ignore the import error if it's not available.
This is a problem back to at least 3.10 (which is missing test_capi as well as various testdata directories).
We should either make the Makefile install all of the test subdirectories automatically, or have a CI check to make sure all subdirectories are listed in TESTSUBDIRS in the Makefile.
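A sketch of what the suggested CI check could look like (the `TESTSUBDIRS` variable name is from the Makefile; the parser and the sample inputs below are hypothetical):

```python
import re

def parse_testsubdirs(makefile_text: str) -> set[str]:
    """Collect the entries of a backslash-continued TESTSUBDIRS= assignment."""
    m = re.search(r"TESTSUBDIRS=\s*((?:[^\n\\]*\\\n)*[^\n]*)", makefile_text)
    return set(m.group(1).replace("\\\n", " ").split())

# Hypothetical sample data standing in for the real Makefile and Lib/test contents:
makefile = "TESTSUBDIRS=\ttest/test_capi \\\n\t\ttest/test_import\n"
on_disk = {"test/test_capi", "test/test_import", "test/test_interpreters"}

missing = on_disk - parse_testsubdirs(makefile)
print(sorted(missing))  # → ['test/test_interpreters'], silently not installed
```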
### CPython versions tested on:
3.10, 3.11, 3.12, CPython main branch
### Operating systems tested on:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-115422
* gh-115511
* gh-115813
* gh-116462
* gh-116498
<!-- /gh-linked-prs -->
| 518af37eb569f52a3daf2cf9f4787deed10754ca | 514b1c91b8651e8ab9129a34b7482033d2fd4e5b |
python/cpython | python__cpython-115425 | # _testinternalcapi.optimize_cfg returns incorrect exception handler labels
```
def test_except_handler_label(self):
#
insts = [
('SETUP_FINALLY', handler := self.Label(), 10),
('POP_BLOCK', 0, -1),
('RETURN_CONST', 1, 11),
handler,
('RETURN_CONST', 2, 12),
]
expected_insts = [
('SETUP_FINALLY', handler := self.Label(), 10),
('RETURN_CONST', 1, 11),
handler,
('RETURN_CONST', 2, 12),
]
self.cfg_optimization_test(insts, expected_insts, consts=list(range(5)))
```
This test fails because the code returns with ``SETUP_FINALLY`` uninitialized.
<!-- gh-linked-prs -->
### Linked PRs
* gh-115425
<!-- /gh-linked-prs -->
| f42e112fd86edb5507a38a2eb850d0ebc6bc27a2 | 3a9e67a9fdb4fad13bf42df6eb91126ab2ef45a1 |
python/cpython | python__cpython-116562 | # Tier 2 optimizations for 3.13
# Feature or enhancement
## Prerequisites
- [x] Overhaul Symbol API and add tests: https://github.com/python/cpython/pull/116028
- [x] Re-enable optimizer by default: https://github.com/python/cpython/pull/116062
- [x] https://github.com/python/cpython/issues/116088
## Optimizations to be added to the tier 2 optimizer for 3.13
- [x] https://github.com/python/cpython/issues/115685
- [x] Normal _TO_BOOL specializations and friends.
- [x] TO_BOOL_ALWAYS_TRUE (only if it's worth it?)
- [x] #115651
- [x] https://github.com/python/cpython/issues/115480
- #115478
- #115507
- #115550
- #115710
- [ ] https://github.com/python/cpython/issues/115506
- [ ] BINARY_OP https://github.com/python/cpython/issues/115758
- [x] TO_BOOL https://github.com/python/cpython/issues/115759
- [x] #115819
- [x] _GUARD_IS_TRUE_POP
- [x] _GUARD_IS_FALSE_POP
- [x] _GUARD_IS_NONE_POP
- [x] _GUARD_IS_NOT_NONE_POP
- [ ] All remaining type and value propagation.
- [x] Extract type guards for `COMPARE_OP_INT/FLOAT/STR`
- [ ] https://github.com/python/cpython/issues/115687
- [x] Split up into uops
- [ ] Propagate bool value and constants too.
- [ ] Any remaining guard elimination not handled above
- [ ] https://github.com/python/cpython/pull/116562
- [x] Eliminate/combine redundant stack checks (`_CHECK_STACK_SPACE`)
- [x] https://github.com/python/cpython/issues/115709
- [x] https://github.com/python/cpython/issues/116168
- [x] https://github.com/python/cpython/issues/116202
- [x] https://github.com/python/cpython/issues/116291
## If we have time:
- [x] https://github.com/python/cpython/issues/116381
- [ ] https://github.com/python/cpython/issues/116432
- [ ] Replace `CALL_TUPLE_1` for list and generator arguments
- [ ] Only specialize `CALL` in tier for type. Generate optimal argument handling code in tier 2.
<!-- gh-linked-prs -->
### Linked PRs
* gh-116562
* gh-117997
* gh-118054
* gh-118088
<!-- /gh-linked-prs -->
| 617aca9e745b3dfb5e6bc8cda07632d2f716426d | fcd49b4f47f1edd9a2717f6619da7e7af8ea73cf |
python/cpython | python__cpython-115418 | # ``test_capi.test_time`` prints unnecessary information
# Bug report
### Bug description:
```python
./python -m test -R 3:3 test_capi.test_time
Using random seed: 1733803764
0:00:00 load avg: 0.08 Run 1 test sequentially
0:00:00 load avg: 0.08 [1/1] test_capi.test_time
beginning 6 repetitions
123456
... 1225850373
.... 1240693653
.... 1255919419
.... 1272033289
.... 1289622205
.... 1304837321
.
== Tests result: SUCCESS ==
1 test OK.
Total duration: 158 ms
Total tests: run=5
Total test files: run=1/1
Result: SUCCESS
```
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-115418
<!-- /gh-linked-prs -->
| 225cd55fe676d128518af31f53b63a591fc4a569 | 02b63239f1e91f8a03c0b455c5201e6d07f642ab |
python/cpython | python__cpython-115411 | # co_qualname "new in version" information missing in doc
# Documentation
Usually documentation includes deprecated or "new in version" tags for all features.
The field `co_qualname` in code objects was added in Python 3.11, but the documentation does not mention that, so it reads as if the attribute had always been defined.
https://docs.python.org/3/reference/datamodel.html#index-58
<!-- gh-linked-prs -->
### Linked PRs
* gh-115411
* gh-115412
* gh-115413
<!-- /gh-linked-prs -->
| de07941729b8899b187b8ef9690f9a74b2d6286b | 5719aa23ab7f1c7a5f03309ca4044078a98e7b59 |
python/cpython | python__cpython-115452 | # datetime: ".. doctest:" showing above code example
# Documentation
There's a stray `.. doctest::` showing at https://docs.python.org/dev/library/datetime.html#datetime.time.fromisoformat

The markup should be fixed so it doesn't show in the rendered page, and the doctest is run for this example.
<!-- gh-linked-prs -->
### Linked PRs
* gh-115452
* gh-115455
* gh-115456
<!-- /gh-linked-prs -->
| 6755c4e0c8803a246e632835030c0b8837b3b676 | 6d9141ed766f4003f39362937dc397e9f734c7e5 |
python/cpython | python__cpython-115431 | # Please upgrade bundled Expat to 2.6.0 (e.g. for the fix to CVE-2023-52425)
# Bug report
### Bug description:
Hi! :wave:
Please upgrade bundled Expat to 2.6.0 (e.g. for the fix to CVE-2023-52425).
- GitHub release: https://github.com/libexpat/libexpat/releases/tag/R_2_6_0
- Change log: https://github.com/libexpat/libexpat/blob/R_2_6_0/expat/Changes
The CPython issue for previous 2.5.0 was #98739 and the related merged pull request was #98742, in case you want to have a look. In particular comment https://github.com/python/cpython/pull/98742#pullrequestreview-1158468748 could be of help.
Thanks in advance!
### CPython versions tested on:
3.8, 3.9, 3.10, 3.11, 3.12, 3.13, CPython main branch
### Operating systems tested on:
Linux, macOS, Windows, Other
<!-- gh-linked-prs -->
### Linked PRs
* gh-115431
* gh-115468
* gh-115469
* gh-115473
* gh-115474
* gh-115475
* gh-115400
* gh-115760
* gh-115761
* gh-115762
* gh-115763
* gh-115764
<!-- /gh-linked-prs -->
| 4b2d1786ccf913bc80ff571c32b196be1543ca54 | 671360161f0b7a5ff4c1d062e570962e851b4bde |
python/cpython | python__cpython-115623 | # Please expose Expat >=2.6.0 API function `XML_SetReparseDeferralEnabled`
# Feature or enhancement
### Proposal:
Hello CPython team! :wave:
Expat 2.6.0 introduced a new [function `XML_SetReparseDeferralEnabled`](https://libexpat.github.io/doc/api/latest/#XML_SetReparseDeferralEnabled) that neither [`xml.parsers.expat.XMLParserType` of pyexpat](https://docs.python.org/3/library/pyexpat.html#xmlparser-objects) nor [`xml.etree.ElementTree.XMLParser` of ElementTree](https://docs.python.org/3/library/xml.etree.elementtree.html#xml.etree.ElementTree.XMLParser) currently gives access to. So when a Python-based library or application finds itself in a situation where toggling reparse deferral is beneficial or needed, it will not be able to do so with ease, or at all.
Please note that this is a sibling ticket to #90949 in some sense and implementations could be combined or done in one go, potentially.
Also please note that…
- for [`xml.etree.ElementTree.XMLParser`](https://docs.python.org/3/library/xml.etree.elementtree.html#xmlparser-objects) a new method `def flush(self):` could be a reasonable API addition based on the idea https://github.com/python/cpython/pull/115138#issuecomment-1932444270 ;
- the same seems to apply to [`xml.sax.xmlreader.IncrementalParser`](https://docs.python.org/3/library/xml.sax.reader.html#xml.sax.xmlreader.IncrementalParser).
I cannot provide a pull request on the topic economically myself unfortunately, but I am happy to support any related efforts (including voice calls on the topic as needed). Thanks in advance!
Best, Sebastian
CC #90949
CC #115133
### Has this already been discussed elsewhere?
This is a minor feature, which does not need previous discussion elsewhere
### Links to previous discussion of this feature:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-115623
* gh-116248
* gh-116268
* gh-116270
* gh-116272
* gh-116275
* gh-116278
* gh-116301
* gh-116411
<!-- /gh-linked-prs -->
| eda2963378a3c292cf6bb202bb00e94e46ee6d90 | 17c4849981905fb1c9bfbb2b963b6ee12e3efb2c |
python/cpython | python__cpython-115440 | # Doctest incorrectly locates a decorated function
# Bug report
### Bug description:
TL;DR: If a function is decorated, the doctest is unable to find the correct location of the function.
## Example
Consider two simple files, `main.py` and `decorate.py`.
Contents of `main.py`:
```python
from decorate import decorator
@decorator
def foo():
"""
>>> foo()
2
"""
return 42
```
Contents of `decorate.py`:
```python
import functools
def decorator(f):
@functools.wraps(f)
def inner():
return f()
return inner
```
If we run a doctest like so: `python3 -m doctest main.py`, we find the error **correctly** on the line number 7, the line which says `>>> foo()`. Traceback is output as follows.
```
**********************************************************************
File "/codemill/chaudhat/learning/demo/main.py", line 7, in main.foo
Failed example:
foo()
Expected:
2
Got:
42
**********************************************************************
1 items had failures:
1 of 2 in main.foo
***Test Failed*** 1 failures.
```
## Incorrect Output
However, if we move the `decorator` definition in the `decorate.py` file by a few lines, as shown, (the space between could be empty/defining a function, etc.), we see that the doctest is unable to find the location of the decorated function, `foo`, and just outputs `?` as the line number.
```python
import functools
def decorator(f):
@functools.wraps(f)
def inner():
return f()
return inner
```
Traceback:
```
**********************************************************************
File "/codemill/chaudhat/learning/demo/main.py", line ?, in main.foo
Failed example:
foo()
Expected:
2
Got:
42
**********************************************************************
1 items had failures:
1 of 1 in main.foo
***Test Failed*** 1 failures.
```
PS: If we move the `decorator` definition up by even one line, doctest incorrectly reports that the line `>>> foo()` lives on line 10, not line 7.
## Why?
The "?" is printed simply because, while doctest is able to find the example's lineno, it is unable to determine the test's lineno. I found this after printing out the line numbers in the `_failure_header` function in `doctest.py`.
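A self-contained illustration of the mismatch (stdlib only): `functools.wraps` copies `__doc__` and sets `__wrapped__`, but the wrapper's code object still records where `inner` was compiled, which is what throws off doctest's line-number search.

```python
import functools

def decorator(f):
    @functools.wraps(f)
    def inner():
        return f()
    return inner

@decorator
def foo():
    """
    >>> foo()
    2
    """
    return 42

# The visible function is the wrapper: its code object was compiled where
# `inner` is defined (in the decorator), not where `foo` is defined.
print(foo.__code__.co_name)              # 'inner'
print(foo.__wrapped__.__code__.co_name)  # 'foo'
```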
### CPython versions tested on:
3.11
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-115440
* gh-115458
* gh-115459
<!-- /gh-linked-prs -->
| bb791c7728e0508ad5df28a90b27e202d66a9cfa | 6755c4e0c8803a246e632835030c0b8837b3b676 |
python/cpython | python__cpython-115379 | # _testinternalcapi.compiler_codegen segfaults on bad input
This segfaults:
```
import ast
from _testinternalcapi import compiler_codegen
a = ast.parse("return 42", "my_file.py", "exec")
compiler_codegen(a, "my_file.py", 0)
```
Expected SyntaxError (return not in function).
<!-- gh-linked-prs -->
### Linked PRs
* gh-115379
<!-- /gh-linked-prs -->
| 3a9e67a9fdb4fad13bf42df6eb91126ab2ef45a1 | 94f1334e52fbc1550feba4f433b16589d55255b9 |
python/cpython | python__cpython-115365 | # Add documentation to the pystats output (from summarize_stats.py)
# Feature or enhancement
### Proposal:
The output from `summarize_stats.py` ([example here](https://github.com/faster-cpython/benchmarking-public/blob/main/results/bm-20240210-3.13.0a3%2B-4821f08-PYTHON_UOPS/bm-20240210-azure-x86_64-python-4821f08674e290a396d2-3.13.0a3%2B-4821f08-pystats.md)) isn't terribly self-explanatory. Adding more text (and tooltips) would be very helpful in making it more obvious.
### Has this already been discussed elsewhere?
No response given
### Links to previous discussion of this feature:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-115365
<!-- /gh-linked-prs -->
| fbb016973149d983d30351bdd1aaf00df285c776 | 2ac9d9f2fbeb743ae6d6b1cbf73337c230e21f3c |
python/cpython | python__cpython-115494 | # Redundant NOP is generated in -OO mode
Run `test_dis`:
```
$ ./python -OO -m test -vuall test_dis
...
======================================================================
FAIL: test_disassemble_recursive (test.test_dis.DisTests.test_disassemble_recursive)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/serhiy/py/cpython/Lib/test/test_dis.py", line 1105, in test_disassemble_recursive
check(dis_nested_1, depth=1)
~~~~~^^^^^^^^^^^^^^^^^^^^^^^
File "/home/serhiy/py/cpython/Lib/test/test_dis.py", line 1102, in check
self.assertEqual(dis, expected)
~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^
AssertionError: ' --[771 chars]\n 725 NOP\n\n 726 LOAD_GL[532 chars]UE\n' != ' --[771 chars]\n 726 LOAD_GLOBAL 1 (l[510 chars]UE\n'
-- MAKE_CELL 0 (y)
723 RESUME 0
724 LOAD_FAST 0 (y)
BUILD_TUPLE 1
LOAD_CONST 1 (<code object foo at 0x..., file "/home/serhiy/py/cpython/Lib/test/test_dis.py", line 724>)
MAKE_FUNCTION
SET_FUNCTION_ATTRIBUTE 8 (closure)
STORE_FAST 1 (foo)
727 LOAD_FAST 1 (foo)
RETURN_VALUE
Disassembly of <code object foo at 0x..., file "/home/serhiy/py/cpython/Lib/test/test_dis.py", line 724>:
-- COPY_FREE_VARS 1
MAKE_CELL 0 (x)
724 RESUME 0
- 725 NOP
-
726 LOAD_GLOBAL 1 (list + NULL)
LOAD_FAST 0 (x)
BUILD_TUPLE 1
LOAD_CONST 1 (<code object <genexpr> at 0x..., file "/home/serhiy/py/cpython/Lib/test/test_dis.py", line 726>)
MAKE_FUNCTION
SET_FUNCTION_ATTRIBUTE 8 (closure)
LOAD_DEREF 1 (y)
GET_ITER
CALL 0
CALL 1
RETURN_VALUE
```
<!-- gh-linked-prs -->
### Linked PRs
* gh-115494
<!-- /gh-linked-prs -->
| 732faf17a618d65d264ffe775fa6db4508e3d9ff | ad4f909e0e7890e027c4ae7fea74586667242ad3 |
python/cpython | python__cpython-115342 | # Unittests with doctests cannot be loaded in -OO mode
# Bug report
For example:
```
$ ./pythonx -OO -m test -vuall test_code
...
0:00:00 load avg: 1.64 [1/1] test_code
Failed to call load_tests:
Traceback (most recent call last):
File "/home/serhiy/py/cpython/Lib/unittest/loader.py", line 113, in loadTestsFromModule
return load_tests(self, tests, pattern)
~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^
File "/home/serhiy/py/cpython/Lib/test/test_code.py", line 880, in load_tests
tests.addTest(doctest.DocTestSuite())
~~~~~~~~~~~~~~~~~~~~^^
File "/home/serhiy/py/cpython/Lib/doctest.py", line 2452, in DocTestSuite
suite.addTest(SkipDocTestCase(module))
~~~~~~~~~~~~~~~^^^^^^^^
File "/home/serhiy/py/cpython/Lib/doctest.py", line 2386, in __init__
DocTestCase.__init__(self, None)
~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^
File "/home/serhiy/py/cpython/Lib/doctest.py", line 2228, in __init__
self._dt_globs = test.globs.copy()
^^^^^^^^^^
AttributeError: 'NoneType' object has no attribute 'globs'
test test_code crashed -- Traceback (most recent call last):
File "/home/serhiy/py/cpython/Lib/test/libregrtest/single.py", line 178, in _runtest_env_changed_exc
_load_run_test(result, runtests)
~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^
File "/home/serhiy/py/cpython/Lib/test/libregrtest/single.py", line 135, in _load_run_test
regrtest_runner(result, test_func, runtests)
~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/serhiy/py/cpython/Lib/test/libregrtest/single.py", line 88, in regrtest_runner
test_result = test_func()
~~~~~~~~~^^
File "/home/serhiy/py/cpython/Lib/test/libregrtest/single.py", line 132, in test_func
return run_unittest(test_mod)
~~~~~~~~~~~~^^^^^^^^^^
File "/home/serhiy/py/cpython/Lib/test/libregrtest/single.py", line 35, in run_unittest
raise Exception("errors while loading tests")
Exception: errors while loading tests
test_code failed (uncaught exception)
== Tests result: FAILURE ==
...
```
It seems that the regression was caused by 7ba7eae50803b11766421cb8aae1780058a57e2b (GH-31932, bpo-2604).
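A possible workaround in affected test modules is to skip doctest collection when docstrings are stripped; this is only a sketch (it uses a synthetic module so it stays self-contained, and it is not necessarily the fix taken in the linked PRs):

```python
import doctest
import sys
import types
import unittest

# stand-in for a module whose docstrings carry doctests; under -OO,
# real docstrings are stripped and DocTestSuite() has nothing to collect
mod = types.ModuleType("demo")
mod.__doc__ = ">>> 1 + 1\n2\n"

def load_tests(loader, tests, pattern):
    # sys.flags.optimize >= 2 means -OO, i.e. docstrings removed
    if sys.flags.optimize < 2:
        tests.addTest(doctest.DocTestSuite(mod))
    return tests

suite = load_tests(unittest.defaultTestLoader, unittest.TestSuite(), None)
result = unittest.TestResult()
suite.run(result)
assert result.wasSuccessful()
```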
<!-- gh-linked-prs -->
### Linked PRs
* gh-115342
* gh-115671
* gh-115672
<!-- /gh-linked-prs -->
| 872cc9957a9c8b971448e7377fad865f351da6c9 | 07ef9d86a5efa82d06a8e7e15dd3aff1e946aa6b |
python/cpython | python__cpython-115332 | # bytearray.extend: Misleading error message
### Bug description:
```python
b = bytearray(b"abc")
b.extend('def')
```
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: 'str' object cannot be interpreted as an integer
```
… except that `bytearray.extend` doesn't even accept integers as arguments; the error refers to the first element of the string, which happens also to be a string in Python. Meh.
Could CPython please complain about requiring the string to be encoded instead?
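For contrast, a quick sketch of the spellings that do work today — `extend` wants an iterable of ints in `range(256)`, so a str has to be encoded first:

```python
ba = bytearray(b"abc")

# a str must be encoded to bytes before extending
ba.extend("def".encode("ascii"))
assert ba == bytearray(b"abcdef")

# a plain iterable of ints in range(256) also works
ba.extend([0x21])
assert ba == bytearray(b"abcdef!")
```

A clearer error message could point users at `.encode()` directly.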
### CPython versions tested on:
main branch
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-115332
* gh-115894
* gh-115895
<!-- /gh-linked-prs -->
| 948acd6ed856251dc5889cc34cf7a58210c4f9a9 | e3dedeae7abbeda0cb3f1d872ebbb914635d64f2 |
python/cpython | python__cpython-115321 | # `get_hash_info` might potentially swallow errors in `sysmodule.c`
# Bug report
https://github.com/python/cpython/blob/54bde5dcc3c04c4ddebcc9df2904ab325fa0b486/Python/sysmodule.c#L1494-L1526
This code only checks for errors in the very last line:
https://github.com/python/cpython/blob/54bde5dcc3c04c4ddebcc9df2904ab325fa0b486/Python/sysmodule.c#L1522-L1526
I think that this pattern should not be used, because it can potentially swallow previous errors and only show the very last one.
I propose to refactor this code to show the first error instead.
<!-- gh-linked-prs -->
### Linked PRs
* gh-115321
* gh-116323
* gh-116324
<!-- /gh-linked-prs -->
| 207030f5527d405940b79c10c1413c1e8ff696c1 | 01440d3a3993b26f4a5f53c44be42cdbb0925289 |
python/cpython | python__cpython-115324 | # Changelog filter in the docs is broken
# Bug report
Long ago I wrote a script [to filter changelog entries](https://docs.python.org/3.10/whatsnew/changelog.html).
This worked until 3.10, but broke starting from 3.11:
* https://docs.python.org/3.10/whatsnew/changelog.html (see the inputbox at the top)
* https://docs.python.org/3.11/whatsnew/changelog.html (the inputbox is missing)
This is because jQuery was removed, probably when Sphinx was updated in:
* #99380
* https://github.com/sphinx-doc/sphinx/issues/7405
Assuming that changelog filtering is still valuable, it should be rewritten using vanilla JavaScript.
<!-- gh-linked-prs -->
### Linked PRs
* gh-115324
* gh-115372
* gh-115373
<!-- /gh-linked-prs -->
| 341d7874f063dcb141672b09f62c19ffedd0a557 | bee2a11946a8d6df6b6c384abccf3dfb4e75d3fc |
python/cpython | python__cpython-115316 | # The time library documentation should include the field to catch microseconds in a date time string
# Documentation #
## Summary ##
Add documentation of the `%f` field to convert microseconds in the time library.
## Details ##
On this webpage (https://docs.python.org/3.10/library/time.html), the discussion of `strptime` and the conversion fields used (`%H` for hour, `%Y` for year, etc.) doesn't mention the `%f` field for matching microseconds. On my system, this is needed to convert back and forth between `datetime` and `time.struct_time`.
For example, the command sequence:
```python
from datetime import datetime, timezone

now = datetime.now(timezone.utc).isoformat()
print(now)
```
Yields: `2024-02-12T00:18:44.621471+00:00`
To convert this to unix time or the python time tuple, I need to use the `strptime` format string of `"%Y-%m-%dT%H:%M:%S.%f%z"`
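For example, round-tripping the timestamp above — a sketch using `datetime.strptime`; `time.strptime` accepts the same `%f` directive, although `struct_time` has no field to keep the fraction:

```python
from datetime import datetime
import time

stamp = "2024-02-12T00:18:44.621471+00:00"
fmt = "%Y-%m-%dT%H:%M:%S.%f%z"

dt = datetime.strptime(stamp, fmt)
assert dt.microsecond == 621471

# unix time; well-defined because the parsed datetime is timezone-aware
unix = dt.timestamp()

# time.strptime understands %f too, but struct_time drops the fraction
st = time.strptime(stamp, fmt)
assert st.tm_year == 2024 and st.tm_sec == 44
```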
In short, I advocate adding a brief discussion of the `%f` field, perhaps just in the table, to this section of the documentation.
This probably impacts documentation for other versions of python too.
Thank you!
<!-- gh-linked-prs -->
### Linked PRs
* gh-115316
* gh-115990
* gh-115991
<!-- /gh-linked-prs -->
| 3a72fc36f93d40048371b789e32eefc97b6ade63 | 6ecfcfe8946cf701c6c7018e30ae54d8b7a7ac2a |
python/cpython | python__cpython-115305 | # The typical initialization of PyMutex in its comment document cannot be compiled on Windows
https://github.com/python/cpython/blob/2939ad02be62110ffa2ac6c4d9211c85e1d1720f/Include/internal/pycore_lock.h#L28-L29
But it cannot be compiled outside a function on Windows (see https://github.com/python/cpython/actions/runs/7862752441/job/21452578591), which fails with this error:
```
3>D:\a\cpython\cpython\Modules\rotatingtree.c(19,16): error C2099: initializer is not a constant [D:\a\cpython\cpython\PCbuild\pythoncore.vcxproj]
```
Also check here https://github.com/python/cpython/pull/115301/commits/be0427fc6551fb861b9a52bf726eeb539fbb277c#diff-266bed9d854389a88ddd0696e9a225a2570559901edf0e909de43dd1f49ce80aR19 to see the usage and the compile error.
Changing it to `PyMutex m = {0};` resolves this issue.
<!-- gh-linked-prs -->
### Linked PRs
* gh-115305
<!-- /gh-linked-prs -->
| 87a65a5bd446a6fc74db651e56b04c332e33fa07 | 5f7df88821347c5f44fc4e2c691e83a60a6c6cd5 |
python/cpython | python__cpython-115286 | # `test_dataclass` fails with `-OO` mode
# Bug report
Output:
```
» ./python.exe -OO Lib/test/test_dataclasses/__init__.py
.................................................................................................................................F..........................................................................................................
======================================================================
FAIL: test_existing_docstring_not_overridden (__main__.TestDocString.test_existing_docstring_not_overridden)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/Users/sobolev/Desktop/cpython2/Lib/test/test_dataclasses/__init__.py", line 2225, in test_existing_docstring_not_overridden
self.assertEqual(C.__doc__, "Lorem ipsum")
~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^
AssertionError: 'C(x: int)' != 'Lorem ipsum'
- C(x: int)
+ Lorem ipsum
```
I have a PR ready.
<!-- gh-linked-prs -->
### Linked PRs
* gh-115286
* gh-115358
* gh-115359
<!-- /gh-linked-prs -->
| 4297d7301b97aba2e0df9f9cc5fa4010e53a8950 | de7d67b19b9f31d7712de7211ffac5bf6018157f |
python/cpython | python__cpython-115283 | # Direct invocation of `test_traceback.py` fails
# Bug report
Output:
```
» ./python.exe Lib/test/test_traceback.py
.................................................................................................................................................................................................................................................................................................F................................................
======================================================================
FAIL: test_smoke_user_exception (__main__.TestTracebackException.test_smoke_user_exception)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/Users/sobolev/Desktop/cpython2/Lib/test/test_traceback.py", line 3134, in test_smoke_user_exception
self.do_test_smoke(MyException('bad things happened'), expected)
~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/sobolev/Desktop/cpython2/Lib/test/test_traceback.py", line 3117, in do_test_smoke
self.assertEqual(expected_type_str, exc.exc_type_str)
~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AssertionError: 'test.test_traceback.TestTracebackExceptio[44 chars]tion' != 'TestTracebackException.test_smoke_user_ex[24 chars]tion'
- test.test_traceback.TestTracebackException.test_smoke_user_exception.<locals>.MyException
? --------------------
+ TestTracebackException.test_smoke_user_exception.<locals>.MyException
```
I have a PR ready.
<!-- gh-linked-prs -->
### Linked PRs
* gh-115283
<!-- /gh-linked-prs -->
| cc573c70b7d5e169de2a6e4297068de407dc8d4d | 2939ad02be62110ffa2ac6c4d9211c85e1d1720f |
python/cpython | python__cpython-115275 | # Direct invocation of `Lib/test/test_unittest/testmock/testpatch.py` fails
# Bug report
```
» ./python.exe Lib/test/test_unittest/testmock/testpatch.py
........................................................................................................F.....
======================================================================
FAIL: test_special_attrs (__main__.PatchTest.test_special_attrs)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/Users/sobolev/Desktop/cpython2/Lib/test/test_unittest/testmock/testpatch.py", line 1915, in test_special_attrs
self.assertEqual(foo.__module__, 'test.test_unittest.testmock.testpatch')
~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AssertionError: '__main__' != 'test.test_unittest.testmock.testpatch'
- __main__
+ test.test_unittest.testmock.testpatch
```
I have a PR ready.
<!-- gh-linked-prs -->
### Linked PRs
* gh-115275
* gh-115280
* gh-115281
<!-- /gh-linked-prs -->
| f8e9c57067e32baab4ed2fd824b892c52ecb7225 | 1f23837277e604f41589273aeb3a10377d416510 |
python/cpython | python__cpython-115276 | # `test_functools` fails when run with `-OO`
# Bug report
Output:
```
======================================================================
FAIL: test_doc (test.test_functools.TestCachedProperty.test_doc)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/Users/sobolev/Desktop/cpython2/Lib/test/test_functools.py", line 3126, in test_doc
self.assertEqual(CachedCostItem.cost.__doc__, "The cost of the item.")
~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AssertionError: None != 'The cost of the item.'
======================================================================
FAIL: test_double_wrapped_methods (test.test_functools.TestSingleDispatch.test_double_wrapped_methods) (meth=<function TestSingleDispatch.test_double_wrapped_methods.<locals>.WithSingleDispatch.cls_context_manager at 0x105c34a10>)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/Users/sobolev/Desktop/cpython2/Lib/test/test_functools.py", line 2798, in test_double_wrapped_methods
self.assertEqual(meth.__doc__, 'My function docstring')
~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AssertionError: None != 'My function docstring'
======================================================================
FAIL: test_double_wrapped_methods (test.test_functools.TestSingleDispatch.test_double_wrapped_methods) (meth=<function TestSingleDispatch.test_double_wrapped_methods.<locals>.WithSingleDispatch.cls_context_manager at 0x105ced0d0>)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/Users/sobolev/Desktop/cpython2/Lib/test/test_functools.py", line 2798, in test_double_wrapped_methods
self.assertEqual(meth.__doc__, 'My function docstring')
~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AssertionError: None != 'My function docstring'
======================================================================
FAIL: test_double_wrapped_methods (test.test_functools.TestSingleDispatch.test_double_wrapped_methods) (meth=<function TestSingleDispatch.test_double_wrapped_methods.<locals>.WithSingleDispatch.decorated_classmethod at 0x105ced250>)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/Users/sobolev/Desktop/cpython2/Lib/test/test_functools.py", line 2798, in test_double_wrapped_methods
self.assertEqual(meth.__doc__, 'My function docstring')
~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AssertionError: None != 'My function docstring'
======================================================================
FAIL: test_double_wrapped_methods (test.test_functools.TestSingleDispatch.test_double_wrapped_methods) (meth=<function TestSingleDispatch.test_double_wrapped_methods.<locals>.WithSingleDispatch.decorated_classmethod at 0x105ced310>)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/Users/sobolev/Desktop/cpython2/Lib/test/test_functools.py", line 2798, in test_double_wrapped_methods
self.assertEqual(meth.__doc__, 'My function docstring')
~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AssertionError: None != 'My function docstring'
======================================================================
FAIL: test_method_wrapping_attributes (test.test_functools.TestSingleDispatch.test_method_wrapping_attributes) (meth=<function TestSingleDispatch.test_method_wrapping_attributes.<locals>.A.func at 0x105c36d50>)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/Users/sobolev/Desktop/cpython2/Lib/test/test_functools.py", line 2709, in test_method_wrapping_attributes
self.assertEqual(meth.__doc__, 'My function docstring')
~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AssertionError: None != 'My function docstring'
======================================================================
FAIL: test_method_wrapping_attributes (test.test_functools.TestSingleDispatch.test_method_wrapping_attributes) (meth=<function TestSingleDispatch.test_method_wrapping_attributes.<locals>.A.func at 0x105c36b10>)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/Users/sobolev/Desktop/cpython2/Lib/test/test_functools.py", line 2709, in test_method_wrapping_attributes
self.assertEqual(meth.__doc__, 'My function docstring')
~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AssertionError: None != 'My function docstring'
======================================================================
FAIL: test_method_wrapping_attributes (test.test_functools.TestSingleDispatch.test_method_wrapping_attributes) (meth=<function TestSingleDispatch.test_method_wrapping_attributes.<locals>.A.cls_func at 0x105c36510>)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/Users/sobolev/Desktop/cpython2/Lib/test/test_functools.py", line 2709, in test_method_wrapping_attributes
self.assertEqual(meth.__doc__, 'My function docstring')
~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AssertionError: None != 'My function docstring'
======================================================================
FAIL: test_method_wrapping_attributes (test.test_functools.TestSingleDispatch.test_method_wrapping_attributes) (meth=<function TestSingleDispatch.test_method_wrapping_attributes.<locals>.A.cls_func at 0x105c365d0>)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/Users/sobolev/Desktop/cpython2/Lib/test/test_functools.py", line 2709, in test_method_wrapping_attributes
self.assertEqual(meth.__doc__, 'My function docstring')
~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AssertionError: None != 'My function docstring'
======================================================================
FAIL: test_method_wrapping_attributes (test.test_functools.TestSingleDispatch.test_method_wrapping_attributes) (meth=<function TestSingleDispatch.test_method_wrapping_attributes.<locals>.A.static_func at 0x105c368d0>)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/Users/sobolev/Desktop/cpython2/Lib/test/test_functools.py", line 2709, in test_method_wrapping_attributes
self.assertEqual(meth.__doc__, 'My function docstring')
~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AssertionError: None != 'My function docstring'
======================================================================
FAIL: test_method_wrapping_attributes (test.test_functools.TestSingleDispatch.test_method_wrapping_attributes) (meth=<function TestSingleDispatch.test_method_wrapping_attributes.<locals>.A.static_func at 0x105c37d10>)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/Users/sobolev/Desktop/cpython2/Lib/test/test_functools.py", line 2709, in test_method_wrapping_attributes
self.assertEqual(meth.__doc__, 'My function docstring')
~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AssertionError: None != 'My function docstring'
----------------------------------------------------------------------
Ran 262 tests in 0.124s
FAILED (failures=11, skipped=4)
test test_functools failed
test_functools failed (11 failures)
== Tests result: FAILURE ==
1 test failed:
test_functools
Total duration: 229 ms
Total tests: run=262 failures=11 skipped=4
Total test files: run=1/1 failed=1
Result: FAILURE
```
I have a PR ready.
<!-- gh-linked-prs -->
### Linked PRs
* gh-115276
* gh-116706
* gh-116707
<!-- /gh-linked-prs -->
| 27df81d5643f32be6ae84a00c5cf84b58e849b21 | 43986f55671ba2f7b08f8c5cea69aa136a093697 |
python/cpython | python__cpython-115269 | # ``test_queue`` times out
# Bug report
### Bug description:
See https://github.com/python/cpython/actions/runs/7856342409/job/21439020143?pr=115257 for more details.
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-115269
* gh-115361
* gh-115898
* gh-115940
<!-- /gh-linked-prs -->
| 1a6e2138773b94fdae449b658a9983cd1fc0f08a | 4821f08674e290a396d27aa8256fd5b8a121f3d6 |
python/cpython | python__cpython-115257 | # Remove reference cycle when writing tarfiles
# Feature or enhancement
### Proposal:
The following code keeps a file handle for eternity and even `gc.collect` does not help.
```python
>>> import tarfile
>>> tarfile.TarFile("archive.tar.gz","w").add("somefile.py")
```
The culprit is a line whose own comment says it is not needed; indeed, it is never used anywhere: https://github.com/python/cpython/blob/main/Lib/tarfile.py#L2033
The example above might be rather bad style, but because of this line even a typical use case, where a TarFile variable is defined and closed afterwards, keeps objects in memory until `gc.collect` runs. Removing the line also makes the bad-style example emit a ResourceWarning, which currently does not appear.
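The lingering cycle can be observed with a weakref — a sketch; on affected versions the TarFile object only disappears once the cyclic collector runs:

```python
import gc
import os
import tarfile
import tempfile
import weakref

with tempfile.TemporaryDirectory() as d:
    src = os.path.join(d, "somefile.py")
    with open(src, "w") as f:
        f.write("print('hi')\n")

    tf = tarfile.TarFile(os.path.join(d, "archive.tar"), "w")
    tf.add(src, arcname="somefile.py")
    ref = weakref.ref(tf)
    tf.close()
    del tf

    # with the self-referencing tarinfo the object survives to this point;
    # without it, plain refcounting would have freed it already
    gc.collect()
    assert ref() is None
```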
### Has this already been discussed elsewhere?
This is a minor feature, which does not need previous discussion elsewhere
### Links to previous discussion of this feature:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-115257
<!-- /gh-linked-prs -->
| 0dfa7ce346ac003475aa45d25c76b13081b81217 | cfbdce72083fca791947cbb18114115c90738d99 |
python/cpython | python__cpython-115255 | # `test_property` fails with `-00` mode
# Bug report
```
» ./python.exe -OO -m test test_property
Using random seed: 1955187317
0:00:00 load avg: 2.91 Run 1 test sequentially
0:00:00 load avg: 2.91 [1/1] test_property
test test_property failed -- Traceback (most recent call last):
File "/Users/sobolev/Desktop/cpython2/Lib/test/test_property.py", line 233, in test_slots_docstring_copy_exception
with self.assertRaises(AttributeError):
...<4 lines>...
return 1
AssertionError: AttributeError not raised
test_property failed (1 failure)
== Tests result: FAILURE ==
1 test failed:
test_property
Total duration: 27 ms
Total tests: run=27 failures=1 skipped=12
Total test files: run=1/1 failed=1
Result: FAILURE
```
I have a PR ready.
<!-- gh-linked-prs -->
### Linked PRs
* gh-115255
* gh-115261
* gh-115262
<!-- /gh-linked-prs -->
| b70a68fbd6b72a25b5ef430603e39c9e40f40d29 | 3a5b38e3b465e00f133ff8074a2d4afb1392dfb5 |
python/cpython | python__cpython-115253 | # `test_enum` fails when run with `-OO`
# Bug report
```
» ./python.exe -OO -m test test_enum
Using random seed: 918426150
0:00:00 load avg: 2.87 Run 1 test sequentially
0:00:00 load avg: 2.87 [1/1] test_enum
test test_enum failed -- Traceback (most recent call last):
File "/Users/sobolev/Desktop/cpython2/Lib/test/test_enum.py", line 4931, in test_pydoc
self.assertEqual(result, expected_text, result)
~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AssertionError: 'Help[234 chars]n | CYAN = <Color.CYAN: 1>\n |\n | MAGENTA =[658 chars]rs__' != 'Help[234 chars]n | YELLOW = <Color.YELLOW: 3>\n |\n | MAGEN[361 chars]rs__'
Help on class Color in module test.test_enum:
class Color(enum.Enum)
| Color(*values)
|
| Method resolution order:
| Color
| enum.Enum
| builtins.object
|
| Data and other attributes defined here:
|
- | CYAN = <Color.CYAN: 1>
+ | YELLOW = <Color.YELLOW: 3>
|
| MAGENTA = <Color.MAGENTA: 2>
|
- | YELLOW = <Color.YELLOW: 3>
+ | CYAN = <Color.CYAN: 1>
|
| ----------------------------------------------------------------------
| Data descriptors inherited from enum.Enum:
|
| name
|
| value
|
| ----------------------------------------------------------------------
- | Methods inherited from enum.EnumType:
? ^ - ^
+ | Data descriptors inherited from enum.EnumType:
? ^^^^^^ +++++ ^
- |
- | __contains__(value) from enum.EnumType
- |
- | __getitem__(name) from enum.EnumType
- |
- | __iter__() from enum.EnumType
- |
- | __len__() from enum.EnumType
- |
- | ----------------------------------------------------------------------
- | Readonly properties inherited from enum.EnumType:
|
| __members__
: Help on class Color in module test.test_enum:
class Color(enum.Enum)
| Color(*values)
|
| Method resolution order:
| Color
| enum.Enum
| builtins.object
|
| Data and other attributes defined here:
|
| CYAN = <Color.CYAN: 1>
|
| MAGENTA = <Color.MAGENTA: 2>
|
| YELLOW = <Color.YELLOW: 3>
|
| ----------------------------------------------------------------------
| Data descriptors inherited from enum.Enum:
|
| name
|
| value
|
| ----------------------------------------------------------------------
| Methods inherited from enum.EnumType:
|
| __contains__(value) from enum.EnumType
|
| __getitem__(name) from enum.EnumType
|
| __iter__() from enum.EnumType
|
| __len__() from enum.EnumType
|
| ----------------------------------------------------------------------
| Readonly properties inherited from enum.EnumType:
|
| __members__
test_enum failed (1 failure)
== Tests result: FAILURE ==
1 test failed:
test_enum
Total duration: 831 ms
Total tests: run=1,068 failures=1 skipped=4
Total test files: run=1/1 failed=1
Result: FAILURE
```
I have a PR ready.
<!-- gh-linked-prs -->
### Linked PRs
* gh-115253
* gh-115260
* gh-115279
* gh-115334
* gh-115396
* gh-115397
<!-- /gh-linked-prs -->
| 33f56b743285f8419e92cfabe673fa165165a580 | 6f93b4df92b8fbf80529cb6435789f5a75664a20 |
python/cpython | python__cpython-115250 | # `test_descr` fails when run with `-OO`
# Bug report
Output:
```
» ./python.exe -OO -m test test_descr
Using random seed: 1321086896
0:00:00 load avg: 3.34 Run 1 test sequentially
0:00:00 load avg: 3.34 [1/1] test_descr
test test_descr failed -- Traceback (most recent call last):
File "/Users/sobolev/Desktop/cpython2/Lib/test/test_descr.py", line 1601, in test_classmethods
self.assertEqual(cm.__dict__, cm_dict)
~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^
AssertionError: {'__module__': 'test.test_descr', '__name__':[114 chars]: {}} != {'__annotations__': {}, '__doc__': 'f docstri[123 chars]>.f'}
{'__annotations__': {},
- '__doc__': None,
+ '__doc__': 'f docstring',
'__module__': 'test.test_descr',
'__name__': 'f',
'__qualname__': 'ClassPropertiesAndMethods.test_classmethods.<locals>.f'}
test_descr failed (1 failure)
== Tests result: FAILURE ==
1 test failed:
test_descr
Total duration: 353 ms
Total tests: run=156 failures=1 skipped=2
Total test files: run=1/1 failed=1
Result: FAILURE
```
I have a fix ready.
<!-- gh-linked-prs -->
### Linked PRs
* gh-115250
* gh-115277
* gh-115278
<!-- /gh-linked-prs -->
| 1f23837277e604f41589273aeb3a10377d416510 | 1a6e2138773b94fdae449b658a9983cd1fc0f08a |
python/cpython | python__cpython-115247 | # Use After Free in deque_index_impl
# Crash report
### What happened?
### Version
Python 3.13.0a3+ (heads/v3.13.0a2:e2c4038924, Feb 10 2024, 12:05:47) [GCC 11.4.0]
bisect from commit 32ea16577d2fd8994730250572957888c3e48f84
### Root cause
The `deque_index_impl` function retrieves an element from the deque via `b->data`. However, the reference count of that item may drop to zero inside the `PyObject_RichCompareBool` call, leading to a use-after-free:
```c
static PyObject *
deque_index_impl(dequeobject *deque, PyObject *v, Py_ssize_t start,
Py_ssize_t stop){
...
while (n--) {
CHECK_NOT_END(b);
        item = b->data[index]; // <--- borrowed reference; not protected with Py_NewRef
cmp = PyObject_RichCompareBool(item, v, Py_EQ); // <--- arbitrary call to __eq__
if (cmp > 0)
return PyLong_FromSsize_t(stop - (n + 1));
else if (cmp < 0)
return NULL;
if (start_state != deque->state) {
PyErr_SetString(PyExc_RuntimeError,
"deque mutated during iteration");
return NULL;
}
}
```
### POC
```python
import collections
class evil_pre1(object):
    def __eq__(self, o):
        deq.clear()
        return NotImplemented

deq = collections.deque([evil_pre1()])
deq.index(3)
```
### ASAN
<details>
<summary><b>asan</b></summary>
```
=================================================================
==246599==ERROR: AddressSanitizer: heap-use-after-free on address 0xffffb086b3c8 at pc 0xaaaabb857308 bp 0xffffc5757c40 sp 0xffffc5757c50
READ of size 8 at 0xffffb086b3c8 thread T0
#0 0xaaaabb857304 in Py_TYPE Include/object.h:333
#1 0xaaaabb857304 in long_richcompare Objects/longobject.c:3265
#2 0xaaaabb8c1158 in do_richcompare Objects/object.c:922
#3 0xaaaabb8c1438 in PyObject_RichCompare Objects/object.c:965
#4 0xaaaabb8c1578 in PyObject_RichCompareBool Objects/object.c:987
#5 0xaaaabbcc1a5c in deque_index_impl Modules/_collectionsmodule.c:1222
#6 0xaaaabbcc1c94 in deque_index Modules/clinic/_collectionsmodule.c.h:248
#7 0xaaaabb7f4ee4 in method_vectorcall_FASTCALL Objects/descrobject.c:401
#8 0xaaaabb7ccb84 in _PyObject_VectorcallTstate Include/internal/pycore_call.h:168
#9 0xaaaabb7cccc0 in PyObject_Vectorcall Objects/call.c:327
#10 0xaaaabbab73e4 in _PyEval_EvalFrameDefault Python/generated_cases.c.h:815
#11 0xaaaabbb008f4 in _PyEval_EvalFrame Include/internal/pycore_ceval.h:115
#12 0xaaaabbb008f4 in _PyEval_Vector Python/ceval.c:1788
#13 0xaaaabbb00abc in PyEval_EvalCode Python/ceval.c:592
#14 0xaaaabbc02aec in run_eval_code_obj Python/pythonrun.c:1294
#15 0xaaaabbc059e0 in run_mod Python/pythonrun.c:1379
#16 0xaaaabbc0677c in pyrun_file Python/pythonrun.c:1215
#17 0xaaaabbc08c3c in _PyRun_SimpleFileObject Python/pythonrun.c:464
#18 0xaaaabbc08ff4 in _PyRun_AnyFileObject Python/pythonrun.c:77
#19 0xaaaabbc66e68 in pymain_run_file_obj Modules/main.c:357
#20 0xaaaabbc693d8 in pymain_run_file Modules/main.c:376
#21 0xaaaabbc69de0 in pymain_run_python Modules/main.c:628
#22 0xaaaabbc6a084 in Py_RunMain Modules/main.c:707
#23 0xaaaabbc6a274 in pymain_main Modules/main.c:737
#24 0xaaaabbc6a5b0 in Py_BytesMain Modules/main.c:761
#25 0xaaaabb63145c in main Programs/python.c:15
#26 0xffffb93d73f8 in __libc_start_call_main ../sysdeps/nptl/libc_start_call_main.h:58
#27 0xffffb93d74c8 in __libc_start_main_impl ../csu/libc-start.c:392
#28 0xaaaabb63136c in _start (/home/kk/projects/cpython/python+0x27136c)
0xffffb086b3c8 is located 56 bytes inside of 72-byte region [0xffffb086b390,0xffffb086b3d8)
freed by thread T0 here:
#0 0xffffb96a9fe8 in __interceptor_free ../../../../src/libsanitizer/asan/asan_malloc_linux.cpp:127
#1 0xaaaabb8c9744 in _PyMem_RawFree Objects/obmalloc.c:84
#2 0xaaaabb8cc580 in _PyMem_DebugRawFree Objects/obmalloc.c:2398
#3 0xaaaabb8ccd74 in _PyMem_DebugFree Objects/obmalloc.c:2531
#4 0xaaaabb8f0f78 in PyObject_Free Objects/obmalloc.c:995
#5 0xaaaabbb7087c in PyObject_GC_Del Python/gc.c:1903
#6 0xaaaabb914e1c in object_dealloc Objects/typeobject.c:5569
#7 0xaaaabb938da8 in subtype_dealloc Objects/typeobject.c:2092
#8 0xaaaabb8bf0b8 in _Py_Dealloc Objects/object.c:2889
#9 0xaaaabbb69bc4 in Py_DECREF Include/object.h:922
#10 0xaaaabbb69bc4 in Py_XDECREF Include/object.h:1030
#11 0xaaaabbb69bc4 in _PyFrame_ClearExceptCode Python/frame.c:140
#12 0xaaaabbaa276c in clear_thread_frame Python/ceval.c:1652
#13 0xaaaabbaab750 in _PyEval_FrameClearAndPop Python/ceval.c:1679
#14 0xaaaabbadc91c in _PyEval_EvalFrameDefault Python/generated_cases.c.h:4914
#15 0xaaaabbb008f4 in _PyEval_EvalFrame Include/internal/pycore_ceval.h:115
#16 0xaaaabbb008f4 in _PyEval_Vector Python/ceval.c:1788
#17 0xaaaabb7cc2fc in _PyFunction_Vectorcall Objects/call.c:413
#18 0xaaaabb94bfe0 in _PyObject_VectorcallTstate Include/internal/pycore_call.h:168
#19 0xaaaabb94bfe0 in vectorcall_unbound Objects/typeobject.c:2271
#20 0xaaaabb94bfe0 in slot_tp_richcompare Objects/typeobject.c:8983
#21 0xaaaabb8c1058 in do_richcompare Objects/object.c:916
#22 0xaaaabb8c1438 in PyObject_RichCompare Objects/object.c:965
#23 0xaaaabb8c1578 in PyObject_RichCompareBool Objects/object.c:987
#24 0xaaaabbcc1a5c in deque_index_impl Modules/_collectionsmodule.c:1222
#25 0xaaaabbcc1c94 in deque_index Modules/clinic/_collectionsmodule.c.h:248
#26 0xaaaabb7f4ee4 in method_vectorcall_FASTCALL Objects/descrobject.c:401
#27 0xaaaabb7ccb84 in _PyObject_VectorcallTstate Include/internal/pycore_call.h:168
#28 0xaaaabb7cccc0 in PyObject_Vectorcall Objects/call.c:327
#29 0xaaaabbab73e4 in _PyEval_EvalFrameDefault Python/generated_cases.c.h:815
#30 0xaaaabbb008f4 in _PyEval_EvalFrame Include/internal/pycore_ceval.h:115
#31 0xaaaabbb008f4 in _PyEval_Vector Python/ceval.c:1788
#32 0xaaaabbb00abc in PyEval_EvalCode Python/ceval.c:592
#33 0xaaaabbc02aec in run_eval_code_obj Python/pythonrun.c:1294
#34 0xaaaabbc059e0 in run_mod Python/pythonrun.c:1379
#35 0xaaaabbc0677c in pyrun_file Python/pythonrun.c:1215
previously allocated by thread T0 here:
#0 0xffffb96aa2f4 in __interceptor_malloc ../../../../src/libsanitizer/asan/asan_malloc_linux.cpp:145
#1 0xaaaabb8cb160 in _PyMem_RawMalloc Objects/obmalloc.c:56
#2 0xaaaabb8c9028 in _PyMem_DebugRawAlloc Objects/obmalloc.c:2331
#3 0xaaaabb8c9084 in _PyMem_DebugRawMalloc Objects/obmalloc.c:2364
#4 0xaaaabb8ccdc0 in _PyMem_DebugMalloc Objects/obmalloc.c:2516
#5 0xaaaabb8f0df0 in PyObject_Malloc Objects/obmalloc.c:966
#6 0xaaaabb92e22c in _PyObject_MallocWithType Include/internal/pycore_object_alloc.h:46
#7 0xaaaabb92e22c in _PyType_AllocNoTrack Objects/typeobject.c:1739
#8 0xaaaabb92e538 in PyType_GenericAlloc Objects/typeobject.c:1763
#9 0xaaaabb92830c in object_new Objects/typeobject.c:5555
#10 0xaaaabb92ec48 in type_call Objects/typeobject.c:1682
#11 0xaaaabb7cc64c in _PyObject_MakeTpCall Objects/call.c:242
#12 0xaaaabb7ccc90 in _PyObject_VectorcallTstate Include/internal/pycore_call.h:166
#13 0xaaaabb7cccc0 in PyObject_Vectorcall Objects/call.c:327
#14 0xaaaabbab73e4 in _PyEval_EvalFrameDefault Python/generated_cases.c.h:815
#15 0xaaaabbb008f4 in _PyEval_EvalFrame Include/internal/pycore_ceval.h:115
#16 0xaaaabbb008f4 in _PyEval_Vector Python/ceval.c:1788
#17 0xaaaabbb00abc in PyEval_EvalCode Python/ceval.c:592
#18 0xaaaabbc02aec in run_eval_code_obj Python/pythonrun.c:1294
#19 0xaaaabbc059e0 in run_mod Python/pythonrun.c:1379
#20 0xaaaabbc0677c in pyrun_file Python/pythonrun.c:1215
#21 0xaaaabbc08c3c in _PyRun_SimpleFileObject Python/pythonrun.c:464
#22 0xaaaabbc08ff4 in _PyRun_AnyFileObject Python/pythonrun.c:77
#23 0xaaaabbc66e68 in pymain_run_file_obj Modules/main.c:357
#24 0xaaaabbc693d8 in pymain_run_file Modules/main.c:376
#25 0xaaaabbc69de0 in pymain_run_python Modules/main.c:628
#26 0xaaaabbc6a084 in Py_RunMain Modules/main.c:707
#27 0xaaaabbc6a274 in pymain_main Modules/main.c:737
#28 0xaaaabbc6a5b0 in Py_BytesMain Modules/main.c:761
#29 0xaaaabb63145c in main Programs/python.c:15
#30 0xffffb93d73f8 in __libc_start_call_main ../sysdeps/nptl/libc_start_call_main.h:58
#31 0xffffb93d74c8 in __libc_start_main_impl ../csu/libc-start.c:392
SUMMARY: AddressSanitizer: heap-use-after-free Include/object.h:333 in Py_TYPE
Shadow bytes around the buggy address:
0x200ff610d620: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
0x200ff610d630: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
0x200ff610d640: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
0x200ff610d650: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
0x200ff610d660: fa fa fa fa fd fd fd fd fd fd fd fd fd fa fa fa
=>0x200ff610d670: fa fa fd fd fd fd fd fd fd[fd]fd fa fa fa fa fa
0x200ff610d680: fd fd fd fd fd fd fd fd fd fd fa fa fa fa fd fd
0x200ff610d690: fd fd fd fd fd fd fd fd fa fa fa fa fd fd fd fd
0x200ff610d6a0: fd fd fd fd fd fd fa fa fa fa 00 00 00 00 00 00
0x200ff610d6b0: 00 00 00 00 fa fa fa fa 00 00 00 00 00 00 00 00
0x200ff610d6c0: 00 00 fa fa fa fa 00 00 00 00 00 00 00 00 00 00
Shadow byte legend (one shadow byte represents 8 application bytes):
Addressable: 00
Partially addressable: 01 02 03 04 05 06 07
Heap left redzone: fa
Freed heap region: fd
Stack left redzone: f1
Stack mid redzone: f2
Stack right redzone: f3
Stack after return: f5
Stack use after scope: f8
Global redzone: f9
Global init order: f6
Poisoned by user: f7
Container overflow: fc
Array cookie: ac
Intra object redzone: bb
ASan internal: fe
Left alloca redzone: ca
Right alloca redzone: cb
Shadow gap: cc
==246599==ABORTING
```
</details>
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Linux
### Output from running 'python -VV' on the command line:
Python 3.13.0a3+ (heads/v3.13.0a2:e2c4038924, Feb 10 2024, 12:05:47) [GCC 11.4.0]
<!-- gh-linked-prs -->
### Linked PRs
* gh-115247
* gh-115465
* gh-115466
<!-- /gh-linked-prs -->
| 671360161f0b7a5ff4c1d062e570962e851b4bde | 81e140d10b77f0a41a5581412e3f3471cc77981f |
python/cpython | python__cpython-115239 | # Redundant f-string in graphlib.TopologicalSorter prepare method.
The `prepare()` method of `graphlib.TopologicalSorter` class raises a `CycleError` if any cycles are detected in the graph.
https://github.com/python/cpython/blob/e2c403892400878707a20d4b7e183de505a64ca5/Lib/graphlib.py#L104-L106
The docstring for `CycleError` promises the cycle (list of nodes) to be accessible in the Exception's args attribute.
> The detected cycle can be accessed via the second
element in the *args* attribute of the exception instance
The f-string for the first argument to the CycleError is therefore redundant and may cause confusion.
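The documented contract can be relied on directly; a minimal sketch of recovering the cycle from `args` (node names here are arbitrary):

```python
import graphlib

ts = graphlib.TopologicalSorter({'a': {'b'}, 'b': {'a'}})
try:
    ts.prepare()
except graphlib.CycleError as exc:
    message, cycle = exc.args  # the cycle itself is args[1]
    print(cycle)               # e.g. ['a', 'b', 'a']
```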
<!-- gh-linked-prs -->
### Linked PRs
* gh-115239
<!-- /gh-linked-prs -->
| 917283ada6fb01a3221b708d64f0a5195e1672dc | cf472577e24911cb70b619304c0108c7fba97cac |
python/cpython | python__cpython-115210 | # urllib.request resolves the host before checking it against the system's proxy bypass list [Security: LOW, minor info leak]
# Bug report
### Bug description:
When system proxy bypass list is set, the urllib.request library on macOS and Windows resolves the hostname to an IP address and the IP address to a hostname (on Windows) before checking it against the system proxy bypass list (see [here](https://github.com/python/cpython/blob/5914a211ef5542edd1f792c2684e373a42647b04/Lib/urllib/request.py#L2589-L2594) and [here](https://github.com/python/cpython/blob/5914a211ef5542edd1f792c2684e373a42647b04/Lib/urllib/request.py#L2731-L2743)).
This causes a DNS leak, and HTTP requests hang while waiting for the DNS timeout in some air-gapped environments. This behavior also differs from other system applications (tested on macOS Sonoma with Safari and Windows Server 2022 with the Edge browser).
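A bypass check that compares the URL's host against the bypass list purely as strings/patterns — without any `gethostbyname`/`getfqdn` call — would avoid the leak. A minimal sketch (`bypass_without_dns` is a hypothetical helper, not the stdlib code; real bypass entries also carry ports and CIDR forms):

```python
from fnmatch import fnmatch

def bypass_without_dns(host, bypass_list):
    # Match the host literally against each bypass pattern; crucially,
    # never resolve the name, so no DNS query can leak.
    host = host.lower()
    return any(fnmatch(host, pattern.lower()) for pattern in bypass_list)

bypass_without_dns('192.0.2.1', ['192.0.2.1', '*.internal'])    # True
bypass_without_dns('example.net', ['192.0.2.1', '*.internal'])  # False
```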
<details>
<summary>Test process on macOS and Windows:</summary>
Creating an A record from `<my-test-domain>.net` to `<my-test-ip>`.
macOS with Safari:
In the system network setting:
- "Web proxy (HTTP)" is set to 172.16.0.1:8000
- "Secure web proxy (HTTPS)" is set to 172.16.0.1:8000
- "Bypass proxy settings" is set to `<my-test-ip>`
In Safari:
- visiting `http://<my-test-ip>`: does not use the proxy
- visiting `http://<my-test-domain>.net`: uses the proxy
Windows Server 2022 with Edge browser:
in system network setting:
- "HTTP proxy" is set to 172.16.0.1:8000
- "Do not use proxy server" is set to `<my-test-ip>`
In Edge browser:
- visiting `http://<my-test-ip>`: does not use the proxy
- visiting `http://<my-test-domain>.net`: uses the proxy
urllib.request on Windows also resolves the IP address back to FQDN before check, here's a test for that:
Windows Server 2022 with Edge browser:
Update the Host file so the IP address can be resolved back to FQDN (`socket.getfqdn("<my-test-ip>") == "<my-test-domain>.net"`).
In system network setting:
- "HTTP proxy" is set to 172.16.0.1:8000
- "Do not use proxy server" is set to <my-test-domain>.net
In Edge browser:
- visiting `http://<my-test-ip>`: uses the proxy
- visiting `http://<my-test-domain>.net`: does not use the proxy
</details>
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
macOS, Windows
<!-- gh-linked-prs -->
### Linked PRs
* gh-115210
* gh-116066
* gh-116067
* gh-116068
* gh-116069
* gh-116070
<!-- /gh-linked-prs -->
| c43b26d02eaa103756c250e8d36829d388c5f3be | 6c1c94dc517b77afcebb25436a4b7b0d13b6eb4d |
python/cpython | python__cpython-115188 | # Fix refleak tracking in free-threaded build
# Bug report
There are a few bugs with refleak tracking in the free-threaded build uncovered in https://github.com/python/cpython/pull/114824:
* We should account for blocks in abandoned segments
* We should stop-the-world before traversing mimalloc heaps in case there are still other threads running
* The mimalloc heap and segment traversal must call `_mi_page_free_collect(page, true);` to properly account for blocks that were deferred freed by other threads. We need to call this earlier than we currently do (before computing page stats).
* `_Py_DecRefSharedDebug` was missing a `_Py_IncRefTotal`
<!-- gh-linked-prs -->
### Linked PRs
* gh-115188
<!-- /gh-linked-prs -->
| 31633f4473966b3bcd470440bab7f348711be48f | 769d4448260aaec687d9306950225316f9faefce |
python/cpython | python__cpython-115181 | # Add UOp Pair counts to `pystats`
# Feature or enhancement
### Proposal:
Add a new section to the stats produced by `--enable-pystats`, which keeps track of the pairs, triples, and longer sequences in which UOps appear. This will be useful for analyzing candidate pairs for condensing into a single superinstruction, as well as other kinds of optimization.
### Has this already been discussed elsewhere?
No response given
### Links to previous discussion of this feature:
This came up in the context of exploring superinstructions for the Copy and Patch JIT at faster-cpython/ideas#647.
<!-- gh-linked-prs -->
### Linked PRs
* gh-115181
<!-- /gh-linked-prs -->
| acf69e09c66f8473399fabab36b81f56496528a6 | c053d52edd1e05ccc339e380b705749a3240d645 |
python/cpython | python__cpython-115173 | # Improve index for the C API
Index entries are added either implicitly by directives like **c:function** or explicitly by the **index** directive. And some explicitly added index entries do not match implicitly added index entries. For example:
```rst
single: PyList_GetItem()
```
creates index entry `PyList_GetItem()`, but the function declaration creates index entry `PyList_GetItem (C function)`. As a result, the two entries are not merged.
The following PR makes explicit index entries look like automatically generated ones.
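Concretely, the explicit entry above would be rewritten so it merges with the auto-generated one (a sketch of the direction of the fix, based on the description above):

```rst
single: PyList_GetItem (C function)
```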
<!-- gh-linked-prs -->
### Linked PRs
* gh-115173
* gh-115292
* gh-115293
<!-- /gh-linked-prs -->
| 573acb30f22a84c0f2c951efa002c9946e29b6a3 | 4a08e7b3431cd32a0daf22a33421cd3035343dc4 |
python/cpython | python__cpython-115169 | # Add pystats counter for invalidated executors
# Feature or enhancement
### Proposal:
As discussed [in an investigation into why some pystats are surprising](https://github.com/faster-cpython/ideas/issues/652#issuecomment-1934168807), I suggested that it would be useful to have counts for when executors are invalidated (for example, when the globals or builtins dictionary changes).
### Has this already been discussed elsewhere?
No response given
### Links to previous discussion of this feature:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-115169
<!-- /gh-linked-prs -->
| b05afdd5ec325bdb4cc89bb3be177ed577bea41f | 96c10c648565c7406d5606099dbbb937310c26dc |
python/cpython | python__cpython-115176 | # vcruntime140_threads.dll is erroneously vendored when building with Visual Studio 17.8
# Bug report
### Bug description:
As described in https://devblogs.microsoft.com/cppblog/c11-threads-in-visual-studio-2022-version-17-8-preview-2/, Visual Studio 17.8 introduced support for C11 threads, provided by `vcruntime140_threads.dll`. When I use the `Tools\msi\buildrelease.bat` script to build Python 3.12.2 on Windows using this Visual Studio version, the `vcruntime140_threads.dll` ends up in the Python distribution. For instance, the `python-3.12.2-embed-amd64.zip` file that is built contains the file `vcruntime140_threads.dll` at its root.
I traced the cause of this to https://github.com/python/cpython/blob/17689e3c41d9f61bcd1928b24d3c50c37ceaf3f2/PCbuild/pyproject.props#L253 which copies all DLLs matching the pattern `vcruntime*.dll` from the Visual C++ runtime directory. Starting with Visual Studio 17.8, this directory (which for me is `C:\Program Files\Microsoft Visual Studio\2022\Community\VC\Redist\MSVC\14.38.33135\x64\Microsoft.VC143.CRT`) contains the file `vcruntime140_threads.dll`.
Given that we aren't using C11 threads on Windows, I think we should change that line to
`<VCRuntimeDLL Include="$(VCRedistDir)\Microsoft.VC*.CRT\vcruntime*.dll" Exclude="$(VCRedistDir)\Microsoft.VC*.CRT\vcruntime*_threads.dll" />` to ensure that the file is not included in the Python distribution. I can make a PR for this change.
### CPython versions tested on:
3.12
### Operating systems tested on:
Windows
<!-- gh-linked-prs -->
### Linked PRs
* gh-115176
* gh-115186
* gh-115187
<!-- /gh-linked-prs -->
| 5914a211ef5542edd1f792c2684e373a42647b04 | ed1a8daf10bc471f929c14c2d1e0474d44a63b00 |
python/cpython | python__cpython-115213 | # typing.Annotated fails for functions returning UUIDs
# Bug report
### Bug description:
`typing.Annotated` tries to set an attribute on the _return value_ of calling an annotated callable. This fails when the returned object is immutable. I believe that adding `TypeError` to the `except` clause in `typing._BaseGenericAlias.__call__` will fix the problem. I'm not sure if there are more cases of the same pattern or not.
https://github.com/python/cpython/blob/17689e3c41d9f61bcd1928b24d3c50c37ceaf3f2/Lib/typing.py#L1128-L1131
### Example
```python
import typing
import uuid
class MyAnnotation:
def __init__(self, **properties) -> None:
self.properties = properties
def uuid_from_str(s: str) -> uuid.UUID:
return uuid.UUID(f'urn:uuid:{s}')
coercion = typing.Annotated[uuid_from_str, MyAnnotation(type='str', format='uuid')]
coercion('00000000-0000-0000-0000-000000000000')
```
### Result
```pythoncon
Traceback (most recent call last):
File "/Users/.../foo.py", line 12, in <module>
coercion('00000000-0000-0000-0000-000000000000')
File "/Users/.../lib/python3.12/typing.py", line 1142, in __call__
result.__orig_class__ = self
^^^^^^^^^^^^^^^^^^^^^
File "/Users/.../lib/python3.12/uuid.py", line 278, in __setattr__
raise TypeError('UUID objects are immutable')
TypeError: UUID objects are immutable
```
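The immutability failure can be reproduced without `typing` at all, which shows why adding `TypeError` to that `except` clause should be sufficient (a sketch of the failure mode, not the actual patch):

```python
import uuid

u = uuid.UUID('00000000-0000-0000-0000-000000000000')
try:
    u.__orig_class__ = None  # what _BaseGenericAlias.__call__ attempts
    caught = False
except (TypeError, AttributeError):  # the proposed broader except clause
    caught = True  # UUID.__setattr__ raises TypeError, now suppressed
```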
### CPython versions tested on:
3.9, 3.10, 3.11, 3.12
### Operating systems tested on:
macOS
<!-- gh-linked-prs -->
### Linked PRs
* gh-115213
* gh-115227
* gh-115228
<!-- /gh-linked-prs -->
| 564385612cdf72c2fa8e629a68225fb2cd3b3d99 | a3af3cb4f424034b56404704fdf8f18e8c0a9982 |
python/cpython | python__cpython-115171 | # Python 3.12 tokenize generates invalid locations for f'\N{unicode}'
# Bug report
### Bug description:
```python
from tokenize import untokenize, generate_tokens
from io import StringIO
untokenize(generate_tokens(StringIO("f'\\N{EXCLAMATION MARK}'").readline))
```
ValueError: start (1,22) precedes previous end (1,24)
### CPython versions tested on:
3.12
### Operating systems tested on:
Windows
<!-- gh-linked-prs -->
### Linked PRs
* gh-115171
* gh-115662
<!-- /gh-linked-prs -->
| ecf16ee50e42f979624e55fa343a8522942db2e7 | d504968983c5cd5ddbdf73ccd3693ffb89e7952f |
python/cpython | python__cpython-115148 | # Typo in pickletools doc for long4 opcode
# Documentation
The `doc` parameter for the `LONG4` opcode in [pickletools says](https://github.com/python/cpython/blob/ef3ceab09d2d0959c343c662461123d5b0e0b64b/Lib/pickletools.py#L1256):
```py
doc="""Long integer using found-byte length.
A more efficient encoding of a Python long; the long4 encoding
says it all."""
```
It should say `four-byte length` instead of `found-byte length`.
<!-- gh-linked-prs -->
### Linked PRs
* gh-115148
* gh-115155
* gh-115156
<!-- /gh-linked-prs -->
| 4a7f63869aa61b24a7cc2d33f8a5e5a7fd0d76a4 | 38b970dfcc3cdebc87a456f17ef1e0f06dde7375 |
python/cpython | python__cpython-124920 | # `PyThreadState_DeleteCurrent` documentation incorrect
The documentation for [`PyThreadState_DeleteCurrent()`](https://docs.python.org/3/c-api/init.html#c.PyThreadState_DeleteCurrent) says:
> Destroy the current thread state and release the global interpreter lock. Like PyThreadState_Delete(), **the global interpreter lock need not be held**. The thread state must have been reset with a previous call to PyThreadState_Clear().
However, calling the function without holding the GIL results in a fatal Python error:
```
Fatal Python error: _PyThreadState_DeleteCurrent: the function must be called with the GIL held, after Python initialization and before Python finalization, but the GIL is released (the current Python thread state is NULL)
```
It looks to me like CPython code since at least 3.8 has required holding the GIL when calling `PyThreadState_DeleteCurrent()`, so I think we should update the documentation rather than change the implementation.
<!-- gh-linked-prs -->
### Linked PRs
* gh-124920
* gh-124930
* gh-124931
<!-- /gh-linked-prs -->
| 9eeb21bf761070649bf8d78976a62dabb6d67a99 | 656b7a3c83c79f99beac950b59c47575562ea729 |
python/cpython | python__cpython-115137 | # Possible NULL dereference in `getpath_joinpath()`
# Bug report
### Bug description:
```python
wchar_t **parts = (wchar_t **)PyMem_Malloc(n * sizeof(wchar_t *));
memset(parts, 0, n * sizeof(wchar_t *));
```
The return value of `PyMem_Malloc` is not checked for NULL and is then dereferenced in `memset()`.
Found by Linux Verification Center ([portal.linuxtesting.ru](http://portal.linuxtesting.ru/)) with SVACE.
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-115137
* gh-115157
* gh-115158
<!-- /gh-linked-prs -->
| 9e90313320a2af2d9ff7049ed3842344ed236630 | 4a7f63869aa61b24a7cc2d33f8a5e5a7fd0d76a4 |
python/cpython | python__cpython-115138 | # test.test_xml_etree*.XMLPullParserTest.test_simple_xml fails with (system) expat 2.6.0
# Bug report
### Bug description:
[Expat 2.6.0](https://github.com/libexpat/libexpat/blob/R_2_6_0/expat/Changes) was released yesterday, with CVE fixes. After upgrading the system library and building CPython `--with-system-expat`, I'm getting the following test failures:
```pytb
======================================================================
FAIL: test_simple_xml (test.test_xml_etree.XMLPullParserTest.test_simple_xml) (chunk_size=1)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/mgorny/git/cpython/Lib/test/test_xml_etree.py", line 1495, in test_simple_xml
self.assert_event_tags(parser, [('end', 'element')])
~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/mgorny/git/cpython/Lib/test/test_xml_etree.py", line 1480, in assert_event_tags
self.assertEqual([(action, elem.tag) for action, elem in events],
~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
expected)
^^^^^^^^^
AssertionError: Lists differ: [] != [('end', 'element')]
Second list contains 1 additional elements.
First extra element 0:
('end', 'element')
- []
+ [('end', 'element')]
======================================================================
FAIL: test_simple_xml (test.test_xml_etree.XMLPullParserTest.test_simple_xml) (chunk_size=5)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/mgorny/git/cpython/Lib/test/test_xml_etree.py", line 1498, in test_simple_xml
self.assert_event_tags(parser, [
~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^
('end', 'element'),
^^^^^^^^^^^^^^^^^^^
('end', 'empty-element'),
^^^^^^^^^^^^^^^^^^^^^^^^^
])
^^
File "/home/mgorny/git/cpython/Lib/test/test_xml_etree.py", line 1480, in assert_event_tags
self.assertEqual([(action, elem.tag) for action, elem in events],
~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
expected)
^^^^^^^^^
AssertionError: Lists differ: [('end', 'element')] != [('end', 'element'), ('end', 'empty-element')]
Second list contains 1 additional elements.
First extra element 1:
('end', 'empty-element')
- [('end', 'element')]
+ [('end', 'element'), ('end', 'empty-element')]
----------------------------------------------------------------------
======================================================================
FAIL: test_simple_xml (test.test_xml_etree_c.XMLPullParserTest.test_simple_xml) (chunk_size=1)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/mgorny/git/cpython/Lib/test/test_xml_etree.py", line 1495, in test_simple_xml
self.assert_event_tags(parser, [('end', 'element')])
~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/mgorny/git/cpython/Lib/test/test_xml_etree.py", line 1480, in assert_event_tags
self.assertEqual([(action, elem.tag) for action, elem in events],
~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
expected)
^^^^^^^^^
AssertionError: Lists differ: [] != [('end', 'element')]
Second list contains 1 additional elements.
First extra element 0:
('end', 'element')
- []
+ [('end', 'element')]
======================================================================
FAIL: test_simple_xml (test.test_xml_etree_c.XMLPullParserTest.test_simple_xml) (chunk_size=5)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/mgorny/git/cpython/Lib/test/test_xml_etree.py", line 1498, in test_simple_xml
self.assert_event_tags(parser, [
~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^
('end', 'element'),
^^^^^^^^^^^^^^^^^^^
('end', 'empty-element'),
^^^^^^^^^^^^^^^^^^^^^^^^^
])
^^
File "/home/mgorny/git/cpython/Lib/test/test_xml_etree.py", line 1480, in assert_event_tags
self.assertEqual([(action, elem.tag) for action, elem in events],
~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
expected)
^^^^^^^^^
AssertionError: Lists differ: [('end', 'element')] != [('end', 'element'), ('end', 'empty-element')]
Second list contains 1 additional elements.
First extra element 1:
('end', 'empty-element')
- [('end', 'element')]
+ [('end', 'element'), ('end', 'empty-element')]
----------------------------------------------------------------------
```
I have reproduced with 3.11.8, 3.12.8 and main as of 2afc7182e66635b3ec7efb59d2a6c18a7ad1f215, both using Gentoo ebuild and raw git repository. I've tested the latter like this:
```
./configure -C --with-system-expat
make -j12
./python -u -W default -bb -E -m test -vv test_xml_etree{,_c}
```
CC @hartwork
### CPython versions tested on:
3.11, 3.12, CPython main branch
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-115138
* gh-115164
* gh-115288
* gh-115289
* gh-115525
* gh-115535
* gh-115536
<!-- /gh-linked-prs -->
| 6a95676bb526261434dd068d6c49927c44d24a9b | d01886c5c9e3a62921b304ba7e5145daaa56d3cf |
python/cpython | python__cpython-115123 | # regrtest: add --bisect option to run test.bisect_cmd on failed tests
When running tests with `-R 3:3` to check for a reference leak, it's not always obvious which tests lead to the leak. I propose to add a `--bisect` option which runs the `-m test.bisect_cmd` command to bisect failing tests.
<!-- gh-linked-prs -->
### Linked PRs
* gh-115123
<!-- /gh-linked-prs -->
| 1e5719a663d5b1703ad588dda4fccd763c7d3e99 | 0c80da4c14d904a367968955544dd6ae58c8101c |
python/cpython | python__cpython-115115 | # Typo in "What’s New In Python 3.13", `file:/` missing another trailing slash
# Documentation
The example URI prefix is missing a trailing slash:
> Add `pathlib.Path.from_uri()`, a new constructor to create a `pathlib.Path` object from a 'file' URI (`file:/`).
> (Contributed by Barney Gale in [gh-107465](https://github.com/python/cpython/issues/107465).)
https://github.com/python/cpython/blob/11ac6f5354ec7a4da2a7e052d27d636b5a41c714/Doc/whatsnew/3.13.rst#L394
<!-- gh-linked-prs -->
### Linked PRs
* gh-115115
<!-- /gh-linked-prs -->
| 60375a38092b4d4dec9a826818a20adc5d4ff2f7 | 3f71c416c085cfaed49ef325f70eb374a4966256 |
python/cpython | python__cpython-115107 | # `enum.Flag.__iter__()` is marked as "changed" in Python 3.11, but it's actually new
# Documentation
`Doc/library/enum.rst` near line 537 reads:

`Flag.__iter__()` is new in Python 3.11, as already noted in typeshed and as may be verified using the following script:
```python
from __future__ import annotations
from enum import (
auto,
Flag,
)
class SomeFlag(Flag):
A = auto()
B = auto()
C = auto()
if __name__ == '__main__':
combined = SomeFlag.A | SomeFlag.B | SomeFlag.C
# this works in Python 3.11, but not in Python 3.10 (no '__iter__()'
# method)
for flag in combined:
print(flag)
```
<!-- gh-linked-prs -->
### Linked PRs
* gh-115107
* gh-115117
* gh-115118
<!-- /gh-linked-prs -->
| 3f71c416c085cfaed49ef325f70eb374a4966256 | 11ac6f5354ec7a4da2a7e052d27d636b5a41c714 |
python/cpython | python__cpython-115180 | # Add delayed reclamation mechanism for free-threaded build (QSBR)
# Feature or enhancement
Many operations in the free-threaded build are protected by locks. However, in some cases, we want to allow reads to happen concurrently with updates [^1]. For example, we want to avoid locking during most list read accesses. If there is a concurrent list resize operation, when is it safe to free the list's previous array?
"Safe memory reclamation" (SMR) schemes address this usually by delaying the "free" operation until all concurrent read accesses are guaranteed to have completed. Quiescent state-based reclamation (QSBR) is a safe memory reclamation scheme that will be useful in the free-threaded build. QSBR relies on marked "quiescent states", when a thread reports that it is definitely **not** in the middle of a non-locking read operation. The eval breaker checks serve as a good place to report a quiescent state.
### Background
SMR schemes are most common in systems that don't use garbage collection or reference counting. CPython uses reference counting and has a tracing garbage collector, so it may seem like an odd fit. However, some data structures are not reference counted. For example, a list object's [array](https://github.com/python/cpython/blob/b6228b521b4692b2de1c1c12f4aa5623f8319084/Include/cpython/listobject.h#L8) is not separately reference counted, but may have a shorter lifetime than the containing `PyListObject`. We could delay reclamation until the GC runs, but we want reclamation to happen quickly, and we generally want to run the GC less frequently in the free-threaded build, because it requires pausing all threads and, at least for now, scanning the entire heap.
### Use cases
* Dict keys (`PyDictKeysObject`) and list arrays (`ob_item`): If we resize a dict or list that may be [shared](https://github.com/python/cpython/blob/b6228b521b4692b2de1c1c12f4aa5623f8319084/Include/internal/pycore_gc.h#L74-L77) between threads, we use QSBR to delay freeing the old keys/array until it's safe to do so. Note that most dicts and lists are not accessed by multiple threads; their keys/arrays can be immediately freed on resize.
* Mimalloc `mi_page_t`: The non-locking [dict and list accesses](https://peps.python.org/pep-0703/#mimalloc-changes-for-optimistic-list-and-dict-access) require cooperation from mimalloc. We want to ensure that even if an item is freed during access and the memory reused for a new object, the new object’s reference count field is placed at the same location in memory. In practice, this means that when an `mi_page_t` is empty, we don't immediately allow it to be re-used for allocations of a different size class. We use QSBR to determine when it's safe to use the `mi_page_t` for a different size class (or return the memory to the OS).
### Implementation Overview
The implementation will be partially based on FreeBSD's ["Global Unbounded Sequences"](https://github.com/freebsd/freebsd-src/blob/main/sys/kern/subr_smr.c). The FreeBSD implementation provides a good starting point because it's relatively simple, self contained, and the license (2-Clause BSD) is compatible with CPython. The implementation relies on a few sequence counters:
* Global (per-interpreter) write sequence: When we want to safely free a data structure, we increment the global write sequence counter and tag the data to be freed with the new value.
* Per-thread state read sequence: When a thread reaches a quiescent state (such as when it checks the eval breaker), it copies the global write sequence to its local read counter.
It's safe to free data once all thread states' read sequences are greater than or equal to the write sequence used to tag the data.
To check that, we scan over all the thread states' read sequences to compute the minimum value (excluding detached threads). For efficiency, we also cache that computed value globally (per-interpreter).
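The bookkeeping can be sketched in a few lines of Python (purely illustrative — the class and method names are invented, and the real implementation is per-interpreter C state with atomic operations rather than a lock):

```python
import threading

class ToyQSBR:
    def __init__(self):
        self._lock = threading.Lock()
        self._wr_seq = 0    # global (per-interpreter) write sequence
        self._rd_seq = {}   # per-thread read sequences
        self._pending = []  # (seq, free_callback) entries awaiting safety

    def register(self, tid):
        with self._lock:
            self._rd_seq[tid] = self._wr_seq

    def defer_free(self, free_cb):
        # Bump the write sequence and tag the deferred free with it.
        with self._lock:
            self._wr_seq += 1
            self._pending.append((self._wr_seq, free_cb))

    def quiescent(self, tid):
        # Called at a safe point (e.g. an eval-breaker check): report our
        # read sequence, then free anything every thread has moved past.
        with self._lock:
            self._rd_seq[tid] = self._wr_seq
            safe = min(self._rd_seq.values())
            keep = []
            for seq, cb in self._pending:
                if seq <= safe:
                    cb()  # all threads have passed this tag: safe to free
                else:
                    keep.append((seq, cb))
            self._pending = keep
```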
### Limitations
Determining the current minimum read sequence requires scanning over all thread states. This will likely become a bottleneck if we have a large number of threads (>1,000?). We will likely eventually want to address this, possibly with some combination of a tree-based mechanism [^2] and incremental scanning.
For now, I think keeping the implementation simple is important until we are able to run and benchmark multithreaded programs with the GIL disabled.
[^1]: See ["Optimistically Avoiding Locking"](https://peps.python.org/pep-0703/#optimistically-avoiding-locking) in PEP 703.
[^2]: https://people.kernel.org/joelfernandes/gus-vs-rcu
<!-- gh-linked-prs -->
### Linked PRs
* gh-115180
* gh-115367
* gh-115435
* gh-116238
* gh-116251
* gh-116343
* gh-116480
<!-- /gh-linked-prs -->
| 590319072773bd6cdcca655c420d3adb84838e96 | 711f42de2e3749208cfa7effa0d45b04e4e1fdd4 |
python/cpython | python__cpython-115092 | # A paragraph in ctypes documentation refers to an example that is no longer present in the documentation
# Documentation
In ctypes documentation for 3.11 there was this excerpt in [ctypes documentation](https://docs.python.org/3.11/library/ctypes.html#accessing-values-exported-from-dlls):
> [ctypes](https://docs.python.org/3.11/library/ctypes.html#module-ctypes) can access values like this with the in_dll() class methods of the type. pythonapi is a predefined symbol giving access to the Python C api:
>
> ```
> >>> opt_flag = c_int.in_dll(pythonapi, "Py_OptimizeFlag")
> >>> print(opt_flag)
> c_long(0)
> >>>
> ```
>
> If the interpreter would have been started with [-O](https://docs.python.org/3.11/using/cmdline.html#cmdoption-O), the sample would have printed c_long(1), or c_long(2) if [-OO](https://docs.python.org/3.11/using/cmdline.html#cmdoption-OO) would have been specified.
The [example was rewritten for 3.12](https://docs.python.org/3.12/library/ctypes.html#module-ctypes) to use the - probably more widely understood `Py_Version` instead:
> [ctypes](https://docs.python.org/3.12/library/ctypes.html#module-ctypes) can access values like this with the [in_dll()](https://docs.python.org/3.12/library/ctypes.html#ctypes._CData.in_dll) class methods of the type. pythonapi is a predefined symbol giving access to the Python C api:
>
> ```
> >>> version = ctypes.c_int.in_dll(ctypes.pythonapi, "Py_Version")
> >>> print(hex(version.value))
> 0x30c00a0
> ```
However the last paragraph referring to the old example is **still** present:
> _If the interpreter would have been started with [-O](https://docs.python.org/3.12/using/cmdline.html#cmdoption-O), the sample would have printed c_long(1), or c_long(2) if [-OO](https://docs.python.org/3.12/using/cmdline.html#cmdoption-OO) would have been specified._
>
<!-- gh-linked-prs -->
### Linked PRs
* gh-115092
* gh-115936
<!-- /gh-linked-prs -->
| 915d7dd090387b52f62bdc2f572413bc87297cee | 7a3518e43aa50ea57fd35863da831052749b6115 |
python/cpython | python__cpython-115061 | # Speed up `pathlib.Path.glob()` by removing redundant regex matching
In #104512 we made `pathlib.Path.glob()` use a "walk-and-filter" strategy for expanding `**` wildcards in patterns: when we encounter a `**` segment, we immediately consume subsequent segments and use them to build a regex that is used to filter results. This saves a bunch of `scandir()` calls.
However! We actually build a regex for the _entire_ pattern given to `glob()`, rather than just the segments following `**` wildcards. And so when evaluating a pattern like `dir*/**/file*`, the `dir*` part is needlessly matched twice against each path. @zooba noted this in a [review comment](https://github.com/python/cpython/pull/104512#discussion_r1212825322) at the time.
We should be able to improve performance by building an `re.Pattern` only for segments following `**` wildcards, and not the entire `glob()` pattern.
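A sketch of the split (`split_at_recursive` is a hypothetical helper; the real code works on parsed path segments): everything before the first `**` can be matched while walking directories with ordinary wildcards, and only the tail needs a compiled regex.

```python
def split_at_recursive(pattern):
    # Split a POSIX-style glob pattern at the first '**' segment.
    parts = pattern.split('/')
    if '**' in parts:
        i = parts.index('**')
        return parts[:i], parts[i:]   # (walk-and-match head, regex tail)
    return parts, []
```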
<!-- gh-linked-prs -->
### Linked PRs
* gh-115061
* gh-116152
* gh-117732
* gh-117831
<!-- /gh-linked-prs -->
| 6f93b4df92b8fbf80529cb6435789f5a75664a20 | 9d1a353230f555fc28239c5ca1e82b758084e02a |
python/cpython | python__cpython-115163 | # io.TextIOWrapper.read does not flush the underlying write buffer when given a size argument
# Bug report
### Bug description:
As reported by the StackOverflow question:
https://stackoverflow.com/questions/76142400/python-file-write-stuck-in-append-mode-when-read-has-byte-count-as-parameter
In the code below, when `f.read` is not given an argument, the pending content in the write buffer gets flushed so the output is `***s is a line` as expected.
```
with open("new.txt", 'w+') as f:
f.write("this is a line")
with open("new.txt", 'r+') as f:
f.write("***")
f.read() # << note here
f.seek(0)
print(f.read()) # outputs: ***s is a line
```
But when `f.read` is given a size argument, it is apparent that the underlying write buffer is not flushed immediately, causing the pending content to be written at the end of the file, rendering an output of `this is a line***` instead:
```
with open("new.txt", 'w+') as f:
f.write("this is a line")
with open("new.txt", 'r+') as f:
f.write("***")
f.read(1) # << note here
f.seek(0)
print(f.read()) # outputs: this is a line***
```
This issue occurs only with the C implementation of `io.TextIOWrapper` since the code above would work as expected if it is prepended with:
```
from _pyio import open
```
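Until this is fixed, an explicit `flush()` before the sized read avoids the problem on both implementations (a minimal sketch):

```python
import os
import tempfile

with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "new.txt")
    with open(path, "w+") as f:
        f.write("this is a line")
    with open(path, "r+") as f:
        f.write("***")
        f.flush()        # force the pending write out at offset 0 before reading
        f.read(1)
        f.seek(0)
        print(f.read())  # outputs: ***s is a line
```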
### CPython versions tested on:
3.8, 3.11
### Operating systems tested on:
Linux, Windows
<!-- gh-linked-prs -->
### Linked PRs
* gh-115163
* gh-115206
* gh-115205
* gh-115240
* gh-115244
* gh-115245
<!-- /gh-linked-prs -->
| 846fd721d518dda88a7d427ec3d2c03c45d9fa90 | c968dc7ff3041137bb702436ff944692dede1ad1 |
python/cpython | python__cpython-115128 | # Test failure in `tests/test_optimizer.py`
# Bug report
### Bug description:
As of 01dceba1, I'm seeing a test failure on macOS and iOS in the `tests/test_optimizer.py` test case.
If you run the test by itself, (`python -m test tests/test_optimizer.py`) it passes.
However, if you run the entire test suite in default (alphabetical) order, it fails:
```
test test_optimizer failed -- Traceback (most recent call last):
File "/Users/rkm/projects/python/host/lib/python3.13/test/test_optimizer.py", line 52, in test_builtin_dict
self.assertEqual(
~~~~~~~~~~~~~~~~^
orig_counter + 1,
^^^^^^^^^^^^^^^^^
_testinternalcapi.get_rare_event_counters()["builtin_dict"]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
AssertionError: 256 != 255
test_optimizer failed (1 failure)
== Tests result: FAILURE ==
```
I've been able to narrow down a minimal reproduction case if you run the following sequence of tests:
python -m test test_fileinput test_funcattrs test_functools test_generators test_genexps test_getopt test_gettext test_optimizer
If you remove any one of the tests before `test_optimizer`, the test passes.
I'm observing the failure on both x86_64 (Ventura 13.5.2) and M1 (Ventura 13.6.2) hardware.
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
macOS, Other
<!-- gh-linked-prs -->
### Linked PRs
* gh-115128
<!-- /gh-linked-prs -->
| 93ac78ac3ee124942bca7492149c3ff0003b6e30 | 95ebd45613d6bf0a8b76778454f1d413d54209db |
python/cpython | python__cpython-115185 | # py command fails with RC_INTERNAL_ERROR (109) return code when run with an App Pool identity in Windows
# Bug report
### Bug description:
I have a .NET 6 web application that responds to REST requests. Amongst other things, it can execute python scripts. This is achieved by starting a new process that uses the "py" command to execute our script.
I recently discovered that this is failing with an exit code of 109, which is described [in the documentation](https://docs.python.org/3/using/windows.html#return-codes) as "Unexpected error. Please report a bug."
I had installed python 3.12.1 for all users on the host machine. I also found that when I'm debugging the application (it runs as me) it works fine, but when it's installed in IIS (running as the ApplicationPoolIdentity) it always fails with that error, even if I try to just run "py -h".
I did find that if I changed my Application Pool Identity to be "LocalSystem" it did work.
I also found that if I changed my code to execute "C:\Program Files\Python312\python.exe" it worked correctly.
It seems to be a problem with py.exe trying to discover the installed versions of python while being executed with an AppPoolIdentity.
Is this a bug in py.exe? Is there some special permission I need to grant my AppPoolIdentity on a particular folder or something?
### CPython versions tested on:
3.12
### Operating systems tested on:
Windows
<!-- gh-linked-prs -->
### Linked PRs
* gh-115185
* gh-115354
<!-- /gh-linked-prs -->
| c39272e143b346bd6a3c04ca4fbf299163888277 | 91bf01d4b15a40be4510fd9ee5e6dc8e9c019fce |
python/cpython | python__cpython-115046 | # Atomic operations for only free-threaded builds
# Feature or enhancement
### Proposal:
Based on discussion in https://github.com/python/cpython/pull/113830 and elsewhere, there are situations in which we need to use an atomic operation in free-threaded builds, but do not need to use one in the default build, and do not want to introduce the potential performance overhead of an atomic operation in the default build. Since we anticipate that these situations will be common, we'd like to introduce wrappers around the functions found in `cpython/pyatomic.h` that perform the operation atomically in free-threaded builds and use the non-atomic equivalent in default builds.
To get discussion started, I propose the following:
1. We add a header that lives alongside `cpython/pyatomic.h` (`pyatomic_free_threaded_wrappers.h`?) that provides definitions of the wrappers.
2. Wrappers look like:
```
static inline Py_ssize_t
_Py_ft_only_atomic_load_ssize(const Py_ssize_t *obj)
{
#ifdef Py_GIL_DISABLED
    /* free-threaded build: a real atomic load */
    return _Py_atomic_load_ssize(obj);
#else
    /* default build: a plain load, no atomic overhead */
    return *obj;
#endif
}
```
Naming is hard. I'm not particularly in love with `_ft_only_` and would love a better alternative to communicate that the operation is atomic in free-threaded builds and non-atomic in default builds.
### Has this already been discussed elsewhere?
No response given
### Links to previous discussion of this feature:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-115046
<!-- /gh-linked-prs -->
| a95b1a56bbba76a382a5c676b71db025915e8695 | d9f4cbe5e1e3c31518724d87d0d379d7ce6823ca |
python/cpython | python__cpython-115042 | # Crash in `ThreadHandle_dealloc` after fork in free-threaded build
# Bug report
I've seen this in the free-threaded build, but I think the problem can theoretically occur in the default build as well.
The problem is that after a `fork()`, an already dead `ThreadHandle` may be deallocated **before** it's marked as not joinable. The `ThreadHandle_dealloc()` function can crash in `PyThread_detach_thread()`:
https://github.com/python/cpython/blob/bcccf1fb63870c1b7f8abe246e27b7fff343abd7/Modules/_threadmodule.c#L66-L70
The steps leading to the crash are:
1) A thread `T2` starts and finishes, but is not joined. The `ThreadHandle` is **not** immediately deallocated, either because it's part of a larger reference cycle or due to biased reference counting (in the free-threaded build)
2) The main thread calls `fork()`
3) In the child process, during `PyOS_AfterFork_Child()`, the `ThreadHandle` is deallocated. I've seen this happen in the free-threaded build due to biased reference counting merging the thread states in `PyThreadState_Clear()`. I believe this can also happen in the default build if, for example, a GC is triggered early on during `threading._after_fork()` before we get to marking the `ThreadHandle` as not joinable.
### Proposed fix
Early on in `PyOS_AfterFork_Child()`, we should fix up all `ThreadHandle` objects from C (without executing Python code) -- we should mark the dead ones as not joinable and update the remaining active thread.
I think it's important to do this without executing Python code. Once we start executing Python code, almost anything can happen, such as GC collections, destructors, etc.
cc @pitrou @gpshead @ericsnowcurrently
<!-- gh-linked-prs -->
### Linked PRs
* gh-115042
<!-- /gh-linked-prs -->
| b6228b521b4692b2de1c1c12f4aa5623f8319084 | 71239d50b54c90afd3fdde260848e0c6d73a5c27 |
python/cpython | python__cpython-115303 | # Deprecate old backward compatible shims in configure_formatter()/handler().
# Feature or enhancement
### Proposal:
`DictConfigurator.configure_formatter()` and `configure_handler()` contain workarounds for old configurations
https://github.com/python/cpython/blob/bcccf1fb63870c1b7f8abe246e27b7fff343abd7/Lib/logging/config.py#L670-L676
https://github.com/python/cpython/blob/bcccf1fb63870c1b7f8abe246e27b7fff343abd7/Lib/logging/config.py#L844-L851
Django doesn't use `fmt` and `strm` for many years. I think both can be deprecated and removed.
I'd like to prepare a patch, if accepted.
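For reference, a modern configuration already spells these with the current key names, `format` for formatters and `stream` for handlers, so the legacy `fmt`/`strm` shims go unused:

```python
import logging
import logging.config

# Modern dictConfig spelling: "format" (not the legacy "fmt") and
# "stream" (not "strm").
logging.config.dictConfig({
    "version": 1,
    "formatters": {"plain": {"format": "%(levelname)s:%(message)s"}},
    "handlers": {
        "console": {
            "class": "logging.StreamHandler",
            "formatter": "plain",
            "stream": "ext://sys.stderr",
        },
    },
    "root": {"handlers": ["console"], "level": "INFO"},
})
logging.getLogger().info("configured")  # -> INFO:configured on stderr
```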
### Has this already been discussed elsewhere?
This is a minor feature, which does not need previous discussion elsewhere
### Links to previous discussion of this feature:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-115303
* gh-115314
<!-- /gh-linked-prs -->
| d823c235495e69fb4c1286b4ed751731bb31bda9 | 0a6e1a4119864bec0247b04a5c99fdd9799cd8eb |
python/cpython | python__cpython-115027 | # Argument Clinic does not check for errors in `PyBuffer_FillInfo` in generated code
# Bug report
AC generates:
- https://github.com/python/cpython/blob/39ec7fbba84663ab760853da2ac422c2e988d189/Modules/clinic/_codecsmodule.c.h#L300
- https://github.com/python/cpython/blob/39ec7fbba84663ab760853da2ac422c2e988d189/Modules/clinic/_codecsmodule.c.h#L1102
- https://github.com/python/cpython/blob/39ec7fbba84663ab760853da2ac422c2e988d189/Modules/clinic/_codecsmodule.c.h#L1178
- https://github.com/python/cpython/blob/39ec7fbba84663ab760853da2ac422c2e988d189/Modules/clinic/_codecsmodule.c.h#L1647
- https://github.com/python/cpython/blob/39ec7fbba84663ab760853da2ac422c2e988d189/Modules/clinic/_ssl.c.h#L1300
However, `PyBuffer_FillInfo` can raise errors and return `-1` on multiple occasions, so it is not safe to not check for it.
I have a PR ready.
<!-- gh-linked-prs -->
### Linked PRs
* gh-115027
<!-- /gh-linked-prs -->
| 87cd20a567aca56369010689e22a524bc1f1ac03 | f71bdd34085d31a826148b2e5da57e0302655056 |
python/cpython | python__cpython-115016 | # Argument Clinic generates incorrect code for METH_METHOD methods without args
# Bug report
Argument Clinic generates incorrect code for methods without arguments, that need the defining class (`METH_METHOD`). Currently, only positional arguments are checked; any keyword argument is silently accepted. This affects methods in several extension modules (see PR).
See also a055dac0b45031878a8196a8735522de018491e3.
<!-- gh-linked-prs -->
### Linked PRs
* gh-115016
* gh-115067
* gh-115069
<!-- /gh-linked-prs -->
| 09096a1647913526a3d4fa69a9d2056ec82a8f37 | 4aa4f0906df9fc9c6c6f6657f2c521468c6b1688 |
python/cpython | python__cpython-115029 | # Setters of members with unsigned integer type and __index__()
Yet another strange thing about member setters with unsigned integer types is that they support different ranges for `int` and for int-like objects (objects with an `__index__()` method).
For Py_T_ULONG the range for `int` is `LONG_MIN`-`ULONG_MAX`, but for indexes it is smaller: `LONG_MIN`-`LONG_MAX`.
The same holds for Py_T_UINT, except that the maximal hard limit for an index, `LONG_MAX`, can be larger than the maximal safe limit `UINT_MAX`, depending on the platform, so on some platforms it may be a lesser issue.
Py_T_ULLONG is even more limited. The range for `int` is 0-`ULLONG_MAX` (negatives not allowed!), and the range for indexes is `LONG_MIN`-`LONG_MAX`, which includes negatives but has a much lower upper limit. It is a remnant of the old days, when Python had two not yet completely compatible integer types.
Py_T_PYSSIZET does not support `__index__()` at all, because `PyLong_AsSsize_t()` does not support it.
<!-- gh-linked-prs -->
### Linked PRs
* gh-115029
* gh-115294
* gh-115295
<!-- /gh-linked-prs -->
| d9d6909697501a2604d5895f9f88aeec61274ab0 | d2c4baa41ff93cd5695c201d40e20a88458ecc26 |
python/cpython | python__cpython-115065 | # Upgrade Windows and macOS installers to use SQLite 3.45
SQLite 3.45 was released [a couple of weeks ago](https://sqlite.org/releaselog/3_45_1.html). A patch version recently arrived, so more might follow. Some fixes for long-standing bugs are included, so we might want to backport this to the 3.12 and 3.11 installers.
<!-- gh-linked-prs -->
### Linked PRs
* gh-115065
* gh-115066
* gh-115071
* gh-115072
* gh-115110
* gh-115111
* gh-117443
* gh-117445
* gh-117981
* gh-118008
<!-- /gh-linked-prs -->
| 11ac6f5354ec7a4da2a7e052d27d636b5a41c714 | 92abb0124037e5bc938fa870461a26f64c56095b |
python/cpython | python__cpython-114991 | # Some mixin methods in `collections.abc` are not mentioned in the Document
## Documentation
https://docs.python.org/3/library/collections.abc.html
## Set
[`Set.__rsub__`](https://github.com/python/cpython/blob/848c86786be588312bff948441929816cdd19e28/Lib/_collections_abc.py#L648) and [`Set.__rxor__`](https://github.com/python/cpython/blob/848c86786be588312bff948441929816cdd19e28/Lib/_collections_abc.py#L663) do exist but not mentioned in the Document.
## MappingView
[`MappingView.__init__`](https://github.com/python/cpython/blob/848c86786be588312bff948441929816cdd19e28/Lib/_collections_abc.py#L845) and [`MappingView.__repr__`](https://github.com/python/cpython/blob/848c86786be588312bff948441929816cdd19e28/Lib/_collections_abc.py#L851) have the same issue.
Since `__init__` and `__repr__` always have default implementations in the base class `object`, I'm not sure whether we should treat them as mixin methods. But the implementations in `MappingView` do have their own logic, so I assume they should be mixin methods.
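A quick introspection check confirms these methods are defined on the ABCs themselves rather than merely inherited from `object`:

```python
from collections.abc import Set, MappingView

# __rsub__ and __rxor__ are defined directly in Set's class dict.
print('__rsub__' in vars(Set), '__rxor__' in vars(Set))   # True True

# MappingView overrides __init__ and __repr__ rather than inheriting
# object's defaults.
print(MappingView.__init__ is object.__init__)   # False
print(MappingView.__repr__ is object.__repr__)   # False
```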
<!-- gh-linked-prs -->
### Linked PRs
* gh-114991
* gh-128535
* gh-128536
<!-- /gh-linked-prs -->
| 5768fef355a55aa9c6522e5444de9346bd267972 | 60c415bd531392a239c23c754154a7944695ac99 |
python/cpython | python__cpython-114968 | # Documentation says "The built-in exceptions listed below", but they are no longer just below that section
# Documentation
In the documentation of "Built-in Exceptions", the second paragraph states `The built-in exceptions listed below can be generated by the interpreter or built-in functions`. Because of the new sections "Exception context" and "Inheriting from built-in exceptions" were introduced in Python 3.9, the word "below" became ambiguous. I suggest changing it to `The built-in exceptions listed in the document...`.
<!-- gh-linked-prs -->
### Linked PRs
* gh-114968
* gh-115033
* gh-115034
<!-- /gh-linked-prs -->
| 750489cc774df44daa2c0d23e8a404fe62be93d1 | bcccf1fb63870c1b7f8abe246e27b7fff343abd7 |
python/cpython | python__cpython-114966 | # Update bundled pip to 24.0
<!-- gh-linked-prs -->
### Linked PRs
* gh-114966
* gh-114971
* gh-114973
* gh-115097
<!-- /gh-linked-prs -->
| a4c298c1494b602a9650b597aad50b48e3fa1f41 | 94ec2b9c9ce898723c3fe61fbc64d6c8f4f68700 |
python/cpython | python__cpython-114960 | # Tarfile ignores an error when trying to extract a directory on top of a file
# Bug report
During review of #112966 and #103263 I found an inconsistency between `zipfile` and `tarfile`. When `zipfile` tries to extract a directory on top of an existing file, it fails. When `tarfile` tries to extract a directory on top of an existing file, it silently returns, keeping the existing file. This is an obvious bug in `tarfile`.
Both modules should be more cautious when extracting on top of symlinks, but that is a different issue.
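A minimal repro sketch; on unpatched versions the existing file is silently kept, while a corrected `tarfile` fails the way `zipfile` does:

```python
import io
import os
import tarfile
import tempfile

# Build an in-memory archive whose only member is a directory named "d".
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w") as tf:
    info = tarfile.TarInfo("d")
    info.type = tarfile.DIRTYPE
    tf.addfile(info)

with tempfile.TemporaryDirectory() as dest:
    # Pre-create a regular *file* at the path where the directory belongs.
    with open(os.path.join(dest, "d"), "w") as f:
        f.write("existing file")
    buf.seek(0)
    with tarfile.open(fileobj=buf) as tf:
        try:
            tf.extractall(dest)
            kept = os.path.isfile(os.path.join(dest, "d"))
            print("file silently kept:", kept)   # True on unpatched versions
        except OSError as exc:
            print("extraction failed:", type(exc).__name__)
```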
<!-- gh-linked-prs -->
### Linked PRs
* gh-114960
* gh-114963
* gh-114964
<!-- /gh-linked-prs -->
| 96bce033c4a4da7112792ba335ef3eb9a3eb0da0 | b4240fd68ecd2c22ec82ac549eabfe5fd35fab2a |
python/cpython | python__cpython-114956 | # Missing `clear` mixin method in MutableSequence's document
Link: https://docs.python.org/3/library/collections.abc.html#collections-abstract-base-classes
But `MutableSequence` does have this mixin method: https://github.com/python/cpython/blob/28bb2961ba2f650452c949fcfc75ccfe0b5517e9/Lib/_collections_abc.py#L1132
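A quick check, plus a minimal subclass that inherits the mixin:

```python
from collections.abc import MutableSequence

# clear() is defined directly on the ABC (implemented in terms of pop()).
print('clear' in vars(MutableSequence))   # True

class MiniSeq(MutableSequence):
    def __init__(self):
        self._items = []
    def __getitem__(self, i): return self._items[i]
    def __setitem__(self, i, v): self._items[i] = v
    def __delitem__(self, i): del self._items[i]
    def __len__(self): return len(self._items)
    def insert(self, i, v): self._items.insert(i, v)

s = MiniSeq()
s.extend([1, 2, 3])   # extend is also a mixin method
s.clear()             # inherited mixin implementation
print(len(s))         # 0
```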
<!-- gh-linked-prs -->
### Linked PRs
* gh-114956
* gh-114961
* gh-114962
<!-- /gh-linked-prs -->
| b4240fd68ecd2c22ec82ac549eabfe5fd35fab2a | efc489021c2a5dba46979bd304563aee0c479a31 |
python/cpython | python__cpython-114945 | # Race between `_PyParkingLot_Park` and `_PyParkingLot_UnparkAll` when handling interrupts
# Bug report
### Bug description:
There is a potential race when `_PyParkingLot_UnparkAll` is executing in one thread and another thread is unblocked because of an interrupt in `_PyParkingLot_Park`. Consider the following scenario:
1. Thread T0 is [blocked](https://github.com/python/cpython/blob/f35c7c070ca6b49c5d6a97be34e6c02f828f5873/Python/parking_lot.c#L303) in `_PyParkingLot_Park` on address `A`.
2. Thread T1 executes `_PyParkingLot_UnparkAll` on address `A`. It finds the `wait_entry` for `T0` and [unlinks](https://github.com/python/cpython/blob/f35c7c070ca6b49c5d6a97be34e6c02f828f5873/Python/parking_lot.c#L369) its list node.
3. Immediately after (2), T0 is woken up due to an interrupt. It then segfaults trying to [unlink](https://github.com/python/cpython/blob/f35c7c070ca6b49c5d6a97be34e6c02f828f5873/Python/parking_lot.c#L320) the node that was previously unlinked in (2).
I haven't attempted to write a minimal repro for this. It occurs reliably on macOS on [this PR](https://github.com/python/cpython/pull/114839) when running `./python.exe -m test test_asyncio.test_events --match test_get_event_loop_new_process`.
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
macOS
<!-- gh-linked-prs -->
### Linked PRs
* gh-114945
<!-- /gh-linked-prs -->
| c32bae52904723d99e1f98e2ef54570268d86467 | 652fbf88c4c422ff17fefd4dcb5e06b5c0e26e74 |
python/cpython | python__cpython-114918 | # add support for AI_NUMERICSERV in getaddrinfo emulation
# Feature or enhancement
### Proposal:
Add support for `AI_NUMERICSERV` in getaddrinfo emulation similar to what is done for `AI_NUMERICHOST`
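For illustration, the flag's semantics at the Python level on platforms where the C library already provides it:

```python
import socket

# AI_NUMERICSERV promises that the service argument is a numeric port
# string, so no service-name lookup (e.g. /etc/services) is performed.
infos = socket.getaddrinfo('127.0.0.1', '80',
                           family=socket.AF_INET,
                           type=socket.SOCK_STREAM,
                           flags=socket.AI_NUMERICSERV)
print(infos[0][4])   # ('127.0.0.1', 80)

try:
    socket.getaddrinfo('127.0.0.1', 'http',
                       family=socket.AF_INET,
                       type=socket.SOCK_STREAM,
                       flags=socket.AI_NUMERICSERV)
except socket.gaierror:
    print('service names are rejected')
```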
### Has this already been discussed elsewhere?
No response given
### Links to previous discussion of this feature:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-114918
* gh-131413
<!-- /gh-linked-prs -->
| 3453b5c1d652a0424a333332b576f9b878424061 | b0a4f6599a7d36cc08fe63d6f7d5d4dea64579f3 |
python/cpython | python__cpython-115661 | # Abandoned StreamWriter isn't reliably detected
# Bug report
### Bug description:
A StreamWriter should be `close()`:d when you are done with it. There is code in the destructor for StreamWriter to detect when this is overlooked and trigger a `ResourceWarning`. However, the current code often maintains a strong reference to the writer, preventing it from being garbage collected and hence no warning.
Test case:
```python
#!/usr/bin/python3
import asyncio
import gc
import socket
async def handle_echo(reader, writer):
addr = writer.get_extra_info('peername')
print(f"Connection from {addr!r}")
# Forgetting to close the writer
#writer.close()
#await writer.wait_closed()
async def main():
server = await asyncio.start_server(
handle_echo, '127.0.0.1', 8888)
addrs = ', '.join(str(sock.getsockname()) for sock in server.sockets)
print(f'Serving on {addrs}')
client = socket.create_connection(('127.0.0.1', 8888))
client.send(b'a' * 64 * 1024)
async with server:
for i in range(25):
await asyncio.sleep(0.1)
gc.collect()
print('Exiting')
print('Done serving')
client.close()
asyncio.run(main())
```
Test case locks up waiting for the client connection, when instead I would expect a `ResourceWarning` and everything exiting nicely.
### CPython versions tested on:
3.12, CPython main branch
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-115661
<!-- /gh-linked-prs -->
| a355f60b032306651ca27bc53bbb82eb5106ff71 | 686ec17f506cddd0b14a8aad5849c15ffc20ed46 |
python/cpython | python__cpython-114941 | # Popen misleading documentation
# Documentation
It took me hours to realize that I had made an error in my code. However, this error was probably the result of unfortunate formatting in the docs.
The documentation for `subprocess.Popen` reads:
If given, _startupinfo_ will be a [STARTUPINFO](https://docs.python.org/3/library/subprocess.html#subprocess.STARTUPINFO) object, which is passed to the underlying CreateProcess function. _creationflags_, if given, can be one or more of the following flags ...
I suggest to start the documentation for _creationflags_ in a new paragraph instead:
If given, _startupinfo_ will be a [STARTUPINFO](https://docs.python.org/3/library/subprocess.html#subprocess.STARTUPINFO) object, which is passed to the underlying CreateProcess function.
_creationflags_, if given, can be one or more of the following flags ...
I mixed up options for `creationflags` and `startupinfo` in my code.
<!-- gh-linked-prs -->
### Linked PRs
* gh-114941
* gh-114942
* gh-114943
<!-- /gh-linked-prs -->
| 1183f1e6bfba06ae6c8ea362f96e977bc288e627 | c4a2e8a2c5188c3288d57b80852e92c83f46f6f3 |
python/cpython | python__cpython-114910 | # calendar module CLI option --first-weekday is missing from usage message
# Documentation
PR https://github.com/python/cpython/pull/112241 added the `--first-weekday` option to the calendar module CLI.
It was only partly documented in `calendar.rst`, as it's missing from the first usage message.
> python -m calendar [-h] [-L LOCALE] [-e ENCODING] [-t {text,html}]
> [-w WIDTH] [-l LINES] [-s SPACING] [-m MONTHS] [-c CSS]
> [year] [month]
The following should be added:
> [-f FIRST_WEEKDAY]
<!-- gh-linked-prs -->
### Linked PRs
* gh-114910
<!-- /gh-linked-prs -->
| ee66c333493105e014678be118850e138e3c62a8 | b3f0b698daf2438a6e59d5d19ccb34acdba0bffc |
python/cpython | python__cpython-114919 | # Add array.clear()
# Bug report
### Bug description:
```python
from array import array
x = array("B")
x.clear() # AttributeError: 'array.array' object has no attribute 'clear'
```
But `array` is a `MutableSequence`, so we expect the clear() method to be implemented.
If this is correct, then I'm ready to work on a PR.
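Meanwhile, a portable spelling that uses `clear()` where available and falls back to the long-standing slice-deletion workaround elsewhere:

```python
from array import array

x = array('B', [1, 2, 3])

# clear() exists only on versions where this feature landed; del x[:] is
# the equivalent workaround on older versions.
if hasattr(x, 'clear'):
    x.clear()
else:
    del x[:]
print(len(x), x.typecode)   # 0 B
```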
Original issue from the typeshed repository: https://github.com/python/typeshed/issues/11008
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-114919
<!-- /gh-linked-prs -->
| 9d1a353230f555fc28239c5ca1e82b758084e02a | 5319c66550a6d6c6698dea75c0a0ee005873ce61 |
python/cpython | python__cpython-114884 | # `jit.c` always recompiles, even on non-JIT builds
I think I messed up the `Makefile` rules for `Python/jit.o` and `regen-jit`. Since `regen-jit` is "phony", the rule isn't in the dependency graph and `make` has no way of knowing if a recompile is necessary.
We *may* have to do something similar to what we did for the Windows builds and encode the dependencies in the Makefile. Or maybe there's an easier way. If anyone has any tips, please share!
<!-- gh-linked-prs -->
### Linked PRs
* gh-114884
<!-- /gh-linked-prs -->
| 1032326fe46afaef57c3e01160a4f889dadfee95 | 72d2d0f10d5623bceb98a2014926ea0b87594ecb |
python/cpython | python__cpython-114876 | # `grp` module attempts to build even if `getgrent` family of functions are unavailable
# Bug report
### Bug description:
In the configure script, the only prerequisite for the `grp` module is currently the `getgrgid` function. Older versions of Android have this function, but they don't have the `getgrent` function, which the `grp` module uses unconditionally. This causes the build to fail as follows:
```
/Users/msmith/Library/Android/sdk/ndk/22.1.7171670/toolchains/llvm/prebuilt/darwin-x86_64/bin/x86_64-linux-android21-clang -fno-strict-overflow -Wsign-compare -Wunreachable-code -DNDEBUG -g -O3 -Wall -std=c11 -Wextra -Wno-unused-parameter -Wno-missing-field-initializers -Wstrict-prototypes -Werror=implicit-function-declaration -fvisibility=hidden -I./Include/internal -I./Include/internal/mimalloc -I. -I./Include -c ./Modules/grpmodule.c -o Modules/grpmodule.o
./Modules/grpmodule.c:127:24: warning: unused variable 'buf2' [-Wunused-variable]
char *buf = NULL, *buf2 = NULL;
^
./Modules/grpmodule.c:205:24: warning: unused variable 'buf2' [-Wunused-variable]
char *buf = NULL, *buf2 = NULL, *name_chars;
^
./Modules/grpmodule.c:287:5: error: implicit declaration of function 'setgrent' is invalid in C99 [-Werror,-Wimplicit-function-declaration]
setgrent();
^
./Modules/grpmodule.c:288:17: error: implicit declaration of function 'getgrent' is invalid in C99 [-Werror,-Wimplicit-function-declaration]
while ((p = getgrent()) != NULL) {
^
```
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Other
<!-- gh-linked-prs -->
### Linked PRs
* gh-114876
<!-- /gh-linked-prs -->
| f35c7c070ca6b49c5d6a97be34e6c02f828f5873 | 1183f1e6bfba06ae6c8ea362f96e977bc288e627 |
python/cpython | python__cpython-114850 | # JIT workflow should have `timeout-minutes: 60` set
In the past, our GitHub Actions jobs hung from time to time (for up to several hours). It can happen again at any time.
It is the best practice to use timeout for jobs. For long jobs like these we use 60 minutes:
https://github.com/python/cpython/blob/5ce193e65a7e6f239337a8c5305895cf8a4d2726/.github/workflows/build.yml#L121-L126
<!-- gh-linked-prs -->
### Linked PRs
* gh-114850
<!-- /gh-linked-prs -->
| 1aec0644447e69e981d582449849761b23702ec8 | 618d7256e78da8200f6e2c6235094a1ef885dca4 |
python/cpython | python__cpython-114848 | # Speed up `posixpath.realpath()`
Some optimizations to `posixpath.realpath()` are possible - see attached PR.
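For context, a small illustration of the semantics `realpath()` must preserve (symlinks are resolved before `..` components are applied, which is part of what makes it expensive):

```python
import os
import tempfile

with tempfile.TemporaryDirectory() as tmp:
    os.makedirs(os.path.join(tmp, 'sub', 'real'))
    link = os.path.join(tmp, 'link')
    os.symlink(os.path.join(tmp, 'sub', 'real'), link)

    # "link/.." resolves to tmp/sub (the parent of the symlink *target*),
    # not tmp (the parent of the symlink itself).
    resolved = os.path.realpath(os.path.join(link, '..'))
    print(resolved == os.path.realpath(os.path.join(tmp, 'sub')))   # True
```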
<!-- gh-linked-prs -->
### Linked PRs
* gh-114848
* gh-117481
<!-- /gh-linked-prs -->
| abfa16b44bb9426312613893b6e193b02ee0304f | 9ceaee74db7da0e71042ab0b385d844e9f282adb |
python/cpython | python__cpython-115139 | # Python/flowgraph.c:528: _Bool all_exits_have_lineno(basicblock *): Assertion `0' failed
# Crash report
### What happened?
Reproducing code:
```python
class i:
if i:d<2<[super for()in e]
```
```
~/p/cpython ❯❯❯ ./python.exe -c '
class i:
if i:d<2<[super for()in e]'
Assertion failed: (0), function all_exits_have_lineno, file flowgraph.c, line 528.
```
Crash:
```
fuzz_pycompile: Python/flowgraph.c:528: _Bool all_exits_have_lineno(basicblock *): Assertion `0' failed.
--
| AddressSanitizer:DEADLYSIGNAL
| =================================================================
| ==27322==ERROR: AddressSanitizer: ABRT on unknown address 0x053900006aba (pc 0x7b9e312b000b bp 0x7b9e31425588 sp 0x7ffddbefe650 T0)
| SCARINESS: 10 (signal)
| #0 0x7b9e312b000b in raise /build/glibc-SzIz7B/glibc-2.31/sysdeps/unix/sysv/linux/raise.c:51:1
| #1 0x7b9e3128f858 in abort /build/glibc-SzIz7B/glibc-2.31/stdlib/abort.c:79:7
| #2 0x7b9e3128f728 in __assert_fail_base /build/glibc-SzIz7B/glibc-2.31/assert/assert.c:92:3
| #3 0x7b9e312a0fd5 in __assert_fail /build/glibc-SzIz7B/glibc-2.31/assert/assert.c:101:3
| #4 0x86784f in all_exits_have_lineno cpython3/Python/flowgraph.c:528:21
| #5 0x86784f in _PyCfg_OptimizeCodeUnit cpython3/Python/flowgraph.c:2474:5
| #6 0x7e624f in optimize_and_assemble_code_unit cpython3/Python/compile.c:7583:9
| #7 0x7e624f in optimize_and_assemble cpython3/Python/compile.c:7625:12
| #8 0x814aeb in compiler_class_body cpython3/Python/compile.c:2545:24
| #9 0x7f248e in compiler_class cpython3/Python/compile.c:2607:9
| #10 0x7f248e in compiler_visit_stmt cpython3/Python/compile.c:3978:16
| #11 0x7e8973 in compiler_body cpython3/Python/compile.c:1731:9
| #12 0x7e276e in compiler_codegen cpython3/Python/compile.c:1747:13
| #13 0x7dff40 in compiler_mod cpython3/Python/compile.c:1775:9
| #14 0x7dff40 in _PyAST_Compile cpython3/Python/compile.c:555:24
| #15 0x93b2e3 in Py_CompileStringObject cpython3/Python/pythonrun.c:1452:10
| #16 0x93b3d4 in Py_CompileStringExFlags cpython3/Python/pythonrun.c:1465:10
| #17 0x587501 in fuzz_pycompile cpython3/Modules/_xxtestfuzz/fuzzer.c:550:24
| #18 0x587501 in _run_fuzz cpython3/Modules/_xxtestfuzz/fuzzer.c:563:14
| #19 0x587501 in LLVMFuzzerTestOneInput cpython3/Modules/_xxtestfuzz/fuzzer.c:704:11
| #20 0x458923 in fuzzer::Fuzzer::ExecuteCallback(unsigned char const*, unsigned long) /src/llvm-project/compiler-rt/lib/fuzzer/FuzzerLoop.cpp:611:15
| #21 0x444082 in fuzzer::RunOneTest(fuzzer::Fuzzer*, char const*, unsigned long) /src/llvm-project/compiler-rt/lib/fuzzer/FuzzerDriver.cpp:324:6
| #22 0x44992c in fuzzer::FuzzerDriver(int*, char***, int (*)(unsigned char const*, unsigned long)) /src/llvm-project/compiler-rt/lib/fuzzer/FuzzerDriver.cpp:860:9
| #23 0x472e62 in main /src/llvm-project/compiler-rt/lib/fuzzer/FuzzerMain.cpp:20:10
| #24 0x7b9e31291082 in __libc_start_main /build/glibc-SzIz7B/glibc-2.31/csu/libc-start.c:308:16
| #25 0x43a24d in _start
```
cc: @iritkatriel
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Linux
### Output from running 'python -VV' on the command line:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-115139
* gh-115140
* gh-115143
* gh-115149
<!-- /gh-linked-prs -->
| fedbf77191ea9d6515b39f958cc9e588d23517c9 | 8a3c499ffe7e15297dd4c0b446a0b97b4d32108a |
python/cpython | python__cpython-114819 | # `warnings.warn(...)` arguments include '\*', which is wrong - 3.12
# Documentation
Looking at the online python docs at https://docs.python.org/3.11/library/warnings.html#warnings.warn. The function call is listed on the site like this:
```python
warnings.warn(message, category=None, stacklevel=1, source=None, \*, skip_file_prefixes=None)
```
As far as I know (my apologies if I am very wrong), `\*` isn't a valid argument, and it should just be `*`.
Best guess is that since the actual code ([here](https://github.com/python/cpython/blob/0536bbb192c5ecca5e21385f82b0ac86f2e7e34c/Lib/warnings.py#L297-L298)) has a linebreak in the arguments, that linebreak got formatted for the docs as a `\*`.
I saw this all come up in [this stackoverflow question](https://stackoverflow.com/questions/77912637/what-is-in-python-function-definition#77915107) where other people thought it all through, i'm just posting the issue.
This seems to be new in 3.12
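For readers puzzled by the signature: the bare `*` is Python's keyword-only marker (PEP 3102), and `\*` is just an escaping artifact. A toy signature with the same shape:

```python
# The lone "*" ends the positional parameters; everything after it must be
# passed by keyword. (Toy stand-in for warnings.warn's signature.)
def warn_like(message, category=None, stacklevel=1, source=None, *,
              skip_file_prefixes=()):
    return message, skip_file_prefixes

print(warn_like("careful", skip_file_prefixes=("/tmp",)))

try:
    warn_like("careful", None, 1, None, ("/tmp",))   # 5th positional arg
except TypeError:
    print("skip_file_prefixes is keyword-only")
```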
<!-- gh-linked-prs -->
### Linked PRs
* gh-114819
* gh-114837
<!-- /gh-linked-prs -->
| ff8939e5abaad7cd87f4d1f07ca7f6d176090f6c | 854e2bc42340b22cdeea5d16ac8b1ef3762c6909 |
python/cpython | python__cpython-114808 | # Allow import of multiprocessing.connection even if _multiprocessing is missing
# Bug report
### Bug description:
Currently if `_multiprocessing` is missing then `from multiprocessing import connection` raises an `ImportError`. It would be nice to delay the error until someone attempts to actually use multiprocessing.
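One common pattern for this (a sketch, not the actual patch; the missing module name below is hypothetical and stands in for the `_multiprocessing` C extension):

```python
# Record the ImportError at import time and re-raise it on first use, so
# merely importing the wrapper module succeeds on platforms without the
# C extension.
try:
    import _definitely_missing_ext as _ext  # hypothetical missing C module
except ImportError as exc:
    _ext = None
    _ext_error = exc

def reduce_connection(conn):
    """Stand-in for functionality that genuinely needs the C module."""
    if _ext is None:
        raise _ext_error
    return _ext.reduce(conn)

# Import succeeded above; the error only surfaces on actual use:
try:
    reduce_connection(None)
except ImportError as exc:
    print("deferred:", type(exc).__name__)
```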
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-114808
<!-- /gh-linked-prs -->
| 4b75032c88046505cad36157aa94a41fd37638f4 | 1b895914742d20ccebd1b56b1b0936b7e00eb95e |
python/cpython | python__cpython-114870 | # Python 3.13.0a3 metaclass `__call__` runs only once
# Bug report
### Bug description:
A metaclass's `__call__` is expected to run every time a class is instantiated, but in Python 3.13 it only runs once.
Minimal reproducer:
```python
class _TypeMetaclass(type):
def __call__(cls, *args, **kwargs):
print("Metaclass.__call__", cls.__qualname__, args)
inst = type.__call__(cls, *args, **kwargs)
return inst
class Type(metaclass=_TypeMetaclass):
def __init__(self, obj):
self._obj = obj
class Function(Type):
pass
for i in range(100):
Function(i)
```
On python 3.12, the output is:
```
Metaclass.__call__ Function (0,)
Metaclass.__call__ Function (1,)
Metaclass.__call__ Function (2,)
...
Metaclass.__call__ Function (98,)
Metaclass.__call__ Function (99,)
```
On python 3.13 (docker image `3.13.0a3-bullseye`), the output is unexpectedly:
```
Metaclass.__call__ Function (0,)
```
### CPython versions tested on:
3.12, 3.13
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-114870
<!-- /gh-linked-prs -->
| e66d0399cc2e78fcdb6a0113cd757d2ce567ca7c | 97cc58f9777ee8b8e91f4ca8726cdb9f79cf906c |
python/cpython | python__cpython-114891 | # Match on Enum with dataclass decorator
# Bug report
### Bug description:
Hello,
when an Enum with the @dataclass decorator is used in a match statement, the results seem to be wrong. Consider this example:
```python
from enum import Enum, auto
from dataclasses import dataclass
@dataclass
class Color(str, Enum):
Red = auto()
Green = auto()
mode = Color.Green
match mode:
case Color.Red:
print(f"Matched RED, actual {mode}")
case Color.Green:
print(f"Matched GREEN, actual {mode}")
case _:
assert False
```
I think it should print:
```
Matched GREEN, actual Color.Green
```
but it prints:
```
Matched RED, actual Color.Green
```
I would suggest this either prints the expected result, or at least produces some error instead of the current behaviour.
### CPython versions tested on:
3.11
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-114891
<!-- /gh-linked-prs -->
| 72d2d0f10d5623bceb98a2014926ea0b87594ecb | ab76d37948fd506af44762dc1c3e32f27d1327a8 |
python/cpython | python__cpython-114791 | # `require-pr-label.yml` should not be executed on forks
It does not make much sense to execute it on forks, because it is a workflow specific to `python/cpython`.
Here's an example PR on my own fork:
<img width="915" alt="Screenshot 2024-01-31 at 11 58 56" src="https://github.com/python/cpython/assets/4660275/e0c11280-8e15-4438-abb0-b72cc501cf4c">
I think that it does make sense to add `if: github.repository == 'python/cpython'` to it, so it can be safely skipped on forks.
<!-- gh-linked-prs -->
### Linked PRs
* gh-114791
* gh-114800
* gh-114801
<!-- /gh-linked-prs -->
| 1c2ea8b33c6b1f995db0aca0b223a9cc22426708 | 25ce7f872df661de9392122df17111c75c77dee0 |
python/cpython | python__cpython-114789 | # JIT workflow is executed on unrelated changes in my local fork
# Bug report
There are several potential problems in this:
<img width="1341" alt="Screenshot 2024-01-31 at 11 26 45" src="https://github.com/python/cpython/assets/4660275/578aabbd-5b89-4549-abc9-a1af23a2ee73">
1. We have a failing JIT test job (CC @brandtbucher), link: https://github.com/sobolevn/cpython/actions/runs/7722621656/job/21051101804. Note that it does not fail in 100% of cases, only sometimes
2. It is executed in my fork, while no other tests are
3. It is executed on a doc-only change, while other tests ignore such changes
I will send a PR to fix 2. and 3.
<!-- gh-linked-prs -->
### Linked PRs
* gh-114789
<!-- /gh-linked-prs -->
| b25b7462d520f38049d25888f220f20f759bc077 | 78c254582b1757c15098ae65e97a2589ae663cd7 |
python/cpython | python__cpython-114805 | # "Porting Python2 to Python3" recommends tools that no longer support Python2
This section recommends several tools: https://docs.python.org/3/howto/pyporting.html#how-to-port-python-2-code-to-python-3
Including:
- pylint: https://github.com/pylint-dev/pylint/issues/1763
- coverage: https://pypi.org/project/coverage/ (python3 only)
- mypy: https://github.com/python/mypy/issues/12237
- tox: https://github.com/tox-dev/tox/issues/1130
But these tools no longer support Python 2, so users will probably struggle with this advice.
CC @brettcannon as the author.
<!-- gh-linked-prs -->
### Linked PRs
* gh-114805
* gh-115327
* gh-115328
<!-- /gh-linked-prs -->
| 705c76d4a202f1faf41027d48d44eac0e76bb1f0 | 72340d15cdfdfa4796fdd7c702094c852c2b32d2 |
python/cpython | python__cpython-114781 | # Accessing attributes of a lazily-loaded module is not thread-safe
# Bug report
### Bug description:
Attempting to access an attribute of a lazily-loaded module causes the module's `__class__` to be reset before its attributes have been populated.
```python
import importlib.util
import sys
import threading
import time
# Lazy load http
spec = importlib.util.find_spec("http")
module = importlib.util.module_from_spec(spec)
http = sys.modules["http"] = module
loader = importlib.util.LazyLoader(spec.loader)
loader.exec_module(module)
def check():
    time.sleep(0.2)
    return http.HTTPStatus.ACCEPTED == 202

def multicheck():
    for _ in range(10):
        threading.Thread(target=check).start()

if sys.argv[1:] == ["single"]:
    check()
else:
    multicheck()
```
The issue is here:
https://github.com/python/cpython/blob/6de8aa31f39b3d8dbfba132e6649724eb07b8348/Lib/importlib/util.py#L168-L177
When an attribute is accessed, the module's `__dict__` is not updated until after `__class__` is reset. If other threads attempt an attribute access between these two points, the lookup can fail.
Assuming this is considered a bug, the two fixes I can think of are:
1) A module-scoped lock that is used to protect `__getattribute__`'s critical section. The `self.__class__ = type.ModuleType` would need to be moved below `__dict__.update()`, which in turn would mean that `self.__spec__` and `self.__dict__` would need to change to `object.__getattribute__(self, ...)` lookups to avoid recursion.
2) A module-scoped dictionary of locks, one-per-`_LazyModule`. Here, additional work would be needed to remove no-longer-needed locks without creating another critical section where a thread enters `_LazyModule.__getattribute__` but looks up its lock after it is removed by the first thread.
My suspicion is that one lock is enough, so I would suggest going with 1.
### CPython versions tested on:
3.8, 3.10, 3.11, 3.12
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-114781
* gh-115870
* gh-115871
<!-- /gh-linked-prs -->
| 200271c61db44d90759f8a8934949aefd72d5724 | ef6074b352a95706f44a592ffe31baace690cc1c |
python/cpython | python__cpython-114957 | # GIL section of the FAQ is outdated (or shortly to be outdated)
# Documentation
https://docs.python.org/3.13/faq/library.html#can-t-we-get-rid-of-the-global-interpreter-lock (link is to the current alpha version)
Obviously, work is in progress to remove the GIL, which isn't reflected in the FAQ yet.
Also the second option:
> It has been suggested that the GIL should be a per-interpreter-state lock rather than truly global; interpreters then wouldn’t be able to share objects. Unfortunately, this isn’t likely to happen either.
If I understand correctly, interpreters with an isolated GIL are now at least partially implemented in a release version, and again this isn't reflected in the FAQ.
<!-- gh-linked-prs -->
### Linked PRs
* gh-114957
<!-- /gh-linked-prs -->
| 0e2ab73dc31e0b8ea1827ec24bae93ae2644c617 | d7334e2c2012defaf7aae920d6a56689464509d1 |
python/cpython | python__cpython-114817 | # test_pendingcalls_threaded times out on Windows free-threading builds
`test.test_capi.test_misc.TestPendingCalls.test_pendingcalls_threaded` usually (but not always) times out on the [AMD64 Windows Server 2022 NoGIL 3.x](https://buildbot.python.org/all/#/builders/1241) buildbot, e.g. here: https://buildbot.python.org/all/#/builders/1241/builds/1116/steps/4/logs/stdio
It started 5 days ago, after #114262 (Implement GC for free-threaded builds) and #114479 (Make threading.Lock a real class, not a factory function), and it looks like it got more frequent after #113412 (Use pointer for interp->obmalloc state). (This is just from looking at the buildbot logs, I haven't bisected.)
<!-- gh-linked-prs -->
### Linked PRs
* gh-114817
<!-- /gh-linked-prs -->
| e6d6d5dcc00af50446761b0c4d20bd6e92380135 | 5ce193e65a7e6f239337a8c5305895cf8a4d2726 |
python/cpython | python__cpython-114772 | # `test_runpy.test_main_recursion_error()` exhausts the stack under WASI debug build
# Bug report
### Bug description:
Has to be run directly to trigger it.
```shell
cross-build/wasm32-wasi/python.sh -m test.test_runpy
```
Seen on wasmtime 17.
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Other
<!-- gh-linked-prs -->
### Linked PRs
* gh-114772
<!-- /gh-linked-prs -->
| 2ed8f924ee05ec17ce0639a424e5ca7661b09a6b | 574291963f6b0eb7da3fde1ae9763236e7ece306 |
python/cpython | python__cpython-114755 | # Fix unintended behavior change in elementtree introduced in #114269
# Bug report
### Bug description:
As discussed [here](https://github.com/python/cpython/pull/114269#discussion_r1468734070), after #114269 was merged, `it.root` is no longer `None` once an iterator is created. There was no intention to change this, so it should be reverted.
Meanwhile, I think it would be good to add tests for the `root` attribute:
- Right after the function returns the iterator: `it.root` must be `None`.
- After iterator exhaustion: `it.root` must be an instance of `xml.etree.ElementTree.Element`.
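A sketch of such a test (illustrative; on current released versions both assertions hold):

```python
import io
import xml.etree.ElementTree as ET

it = ET.iterparse(io.StringIO('<root><child/></root>'))
assert it.root is None                   # right after iterparse() returns
for _event, _elem in it:
    pass
assert isinstance(it.root, ET.Element)   # after exhaustion
print(it.root.tag)
```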
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Other
<!-- gh-linked-prs -->
### Linked PRs
* gh-114755
* gh-114798
* gh-114799
<!-- /gh-linked-prs -->
| 66f95ea6a65deff547cab0d312b8c8c8a4cf8beb | b7688ef71eddcaf14f71b1c22ff2f164f34b2c74 |
python/cpython | python__cpython-116771 | # WASI build fails on both SDK 20 and 21 (Mac OS)
# Bug report
### Bug description:
On trying to build CPython for wasmtime with both SDK 20 and 21, I get an error on running: `python Tools/wasm/wasi.py configure-host`
Error:

I've tried modifying the env vars to clang and tried to replace it with gcc as well. Still fails. Any pointers?
### CPython versions tested on:
3.11
### Operating systems tested on:
macOS
<!-- gh-linked-prs -->
### Linked PRs
* gh-116771
<!-- /gh-linked-prs -->
| 3a25d9c5a95d4e57513ea7edd9e184f4609ebe20 | 8c094c3095feb4de2efebd00f67fb6cc3b2bc240 |
python/cpython | python__cpython-114749 | # Documentation for date and datetime comparison is outdated
The implementation of comparison for `date` and `datetime` objects was changed 18 years ago, in 19960597adb65c9ecd33e4c3d320390eecd38625, but the documentation still describes the old behavior, with special-casing of other objects with a `timetuple` attribute. There were also other changes in this code. In particular, there is a significant difference between equality comparison and order comparison. This part of the documentation should be rewritten.
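For reference, the current behavior that the rewritten documentation should describe (the `str` operands below are just arbitrary examples of an unsupported type):

```python
from datetime import date, datetime

d = date(2006, 1, 1)

# Equality comparison with an unsupported type no longer consults a
# timetuple attribute; it simply returns False/True.
print(d == "2006-01-01")   # False
print(d != "2006-01-01")   # True

# A date is also never equal to a datetime, even for the same day.
print(d == datetime(2006, 1, 1))  # False

# Order comparison with an unsupported type raises TypeError.
try:
    d < "2006-01-01"
except TypeError as exc:
    print(type(exc).__name__)  # TypeError
```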
<!-- gh-linked-prs -->
### Linked PRs
* gh-114749
* gh-114928
* gh-114929
<!-- /gh-linked-prs -->
| 05b04903a14279421ecdc6522b8202822de6ebb5 | 936d4611d63d0c109e05d385e99acc0592eff341 |
python/cpython | python__cpython-114731 | # ZoneInfo gives a surprising exception for `''`
# Bug report
### Bug description:
```python
import zoneinfo
zoneinfo.ZoneInfo('')
```
results in the following exception:
```pycon
>>> zoneinfo.ZoneInfo('')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Users/asottile/.pyenv/versions/3.11.6/lib/python3.11/zoneinfo/_tzpath.py", line 67, in find_tzfile
    _validate_tzfile_path(key)
  File "/Users/asottile/.pyenv/versions/3.11.6/lib/python3.11/zoneinfo/_tzpath.py", line 91, in _validate_tzfile_path
    raise ValueError(
ValueError: ZoneInfo keys must be normalized relative paths, got:
```
I expect `zoneinfo.ZoneInfoNotFound` instead, or some other error that's more specific about this case
it seems this stems from the code internally using the length of the `normpath`'d result of this string and:
```pycon
>>> normpath('')
'.'
```
a small improvement would be to use `!r` in the error message as well
### CPython versions tested on:
3.11, CPython main branch
### Operating systems tested on:
macOS
<!-- gh-linked-prs -->
### Linked PRs
* gh-114731
* gh-132563
* gh-132582
* gh-133330
* gh-133331
<!-- /gh-linked-prs -->
| fe44fc4f4351bb4b457c01d94b4ae8b9eda501aa | ca0a96dfaa686c314e9d706023a59d26b6cc33b9 |
python/cpython | python__cpython-114710 | # posixpath.commonpath: Check for empty iterables broken
# Bug report
### Bug description:
This came up in python/typeshed#11310: When passing an empty sequence to `commonpath()`, a `ValueError` is raised with an appropriate error message. When an "empty" iterable is passed, an `IndexError` is raised instead, although iterables otherwise work fine:
```python
from posixpath import commonpath
commonpath([]) # -> ValueError: commonpath() arg is an empty sequence
commonpath(iter([])) # -> IndexError: tuple index out of range
```
The fix is trivial; I'll send a PR. Technically this is an API change, although the old behavior is unexpected.
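The fix boils down to materializing the argument before the emptiness check, roughly like this (a sketch, not the actual patch):

```python
import posixpath

def commonpath_sketch(paths):
    """Illustrative fix: materialize any iterable before checking emptiness."""
    paths = tuple(paths)
    if not paths:
        raise ValueError('commonpath() arg is an empty sequence')
    return posixpath.commonpath(paths)

print(commonpath_sketch(iter(['/usr/lib', '/usr/local'])))  # /usr
```

With this ordering, both `[]` and `iter([])` raise the same `ValueError`.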
### CPython versions tested on:
3.11, 3.12
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-114710
* gh-115639
<!-- /gh-linked-prs -->
| 371c9708863c23ddc716085198ab07fa49968166 | f9154f8f237e31e7c30f8698f980bee5e494f1e0 |
python/cpython | python__cpython-114748 | # Allow repeatedly asking for termination of `QueueListener`
# Feature or enhancement
### Proposal:
Currently it's impossible to call `QueueListener.stop` twice; you get an `AttributeError`:
```python
from queue import Queue
from logging import StreamHandler
from logging.handlers import (
    QueueHandler,
    QueueListener
)

sh = StreamHandler()
q = Queue(-1)
listener = QueueListener(
    q,
    sh,
    respect_handler_level=True
)
listener.start()
listener.stop()
listener.stop() # Crash here: 'NoneType' object has no attribute 'join'
```
But it may be desirable in cases where you can't control at which point termination will happen; you just want it to happen. For example, having it in multiple places: on app shutdown (normal behaviour) and in atexit callbacks (a backup in case shutdown wasn't planned and its callbacks didn't run).
This appears to be a simple "fix" (not that it's a bug) to
https://github.com/python/cpython/blob/97fb2480e4807a34b8197243ad57566ed7769e24/Lib/logging/handlers.py#L1589-L1591
since you're already setting the attribute to `None`, a simple check should do:
```diff
-        self.enqueue_sentinel()
-        self._thread.join()
-        self._thread = None
+        if self._thread:
+            self.enqueue_sentinel()
+            self._thread.join()
+            self._thread = None
```
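Until such a change lands, a subclass can apply the same check as a workaround (illustrative; it relies on the private `_thread` attribute):

```python
import logging
import queue
from logging.handlers import QueueListener

class IdempotentQueueListener(QueueListener):
    """Workaround subclass: repeated stop() calls become no-ops."""
    def stop(self):
        if self._thread is not None:  # skip if already stopped
            super().stop()

listener = IdempotentQueueListener(queue.Queue(-1), logging.NullHandler())
listener.start()
listener.stop()
listener.stop()  # second call is now a harmless no-op
```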
### Has this already been discussed elsewhere?
This is a minor feature, which does not need previous discussion elsewhere
### Links to previous discussion of this feature:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-114748
<!-- /gh-linked-prs -->
| e21754d7f8336d4647e28f355d8a3dbd5a2c7545 | ea30a28c3e89b69a214c536e61402660242c0f2a |
python/cpython | python__cpython-115152 | # Tier two memory and reference "leaks"
# Bug report
### Bug description:
CPython was built with `./configure --enable-experimental-jit --with-pydebug`
Full trace
[trace.txt](https://github.com/python/cpython/files/14080416/trace.txt)
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
macOS
<!-- gh-linked-prs -->
### Linked PRs
* gh-115152
<!-- /gh-linked-prs -->
| 235cacff81931a68e8c400bb3919ae6e55462fb5 | 54bde5dcc3c04c4ddebcc9df2904ab325fa0b486 |