repo stringclasses 1 value | instance_id stringlengths 20 22 | problem_statement stringlengths 126 60.8k | merge_commit stringlengths 40 40 | base_commit stringlengths 40 40 |
|---|---|---|---|---|
python/cpython | python__cpython-120528 | # `inspect.signature(map)` works in Python 3.13, but gives a signature that looks like only one arg is required
# Bug report
### Bug description:
```
>>> list(inspect.signature(map).parameters.values())[0].kind
<_ParameterKind.POSITIONAL_ONLY: 0>
>>> list(inspect.signature(map).parameters.values())[1].kind
<_ParameterKind.VAR_POSITIONAL: 2>
```
Up to Python 3.12, `inspect.signature(map)` just doesn't work, it raises `ValueError`. In 3.13, it works, but the signature you get seems like kind of a lie. The `toolz` library has this code to work out how many arguments a function requires:
```
return sum(1 for p in sigspec.parameters.values()
if p.default is p.empty
and p.kind in (p.POSITIONAL_OR_KEYWORD, p.POSITIONAL_ONLY))
```
where `sigspec` is the result of `inspect.signature(function)`. That code seems sensible to me. But it gives the wrong answer for `map`, because the signature returned by `inspect.signature(map)` says the "kind" of the second arg (`iterables`) is `VAR_POSITIONAL`, which the [docs say](https://docs.python.org/3/library/inspect.html#inspect.Parameter.kind) "corresponds to a *args parameter in a Python function definition". That does make it seem optional, because of course if you define a function like `myfunc(*args)`, it does not require any arguments. But really, for `map`, you have to provide at least *one* iterable, or you get the error "map() must have at least two arguments." from https://github.com/python/cpython/blob/b2e71ff4f8fa5b7d8117dd8125137aee3d01f015/Python/bltinmodule.c#L1322C12-L1322C53 .
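To make the under-count concrete, here is a sketch using a hypothetical pure-Python stand-in (`fake_map` and `required_args` are illustrations, not CPython or toolz code) whose signature mirrors what 3.13 reports for `map`:

```python
import inspect

def fake_map(func, /, *iterables):
    # signature shape 3.13 reports for map: one POSITIONAL_ONLY
    # parameter followed by a VAR_POSITIONAL parameter
    return map(func, *iterables)

def required_args(function):
    # the toolz-style counter quoted above
    sig = inspect.signature(function)
    return sum(1 for p in sig.parameters.values()
               if p.default is p.empty
               and p.kind in (p.POSITIONAL_OR_KEYWORD, p.POSITIONAL_ONLY))

print(required_args(fake_map))  # counts 1, although map() needs at least 2
```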
It kinda seems to me like maybe the signature of `map` should show *three* args, the first one `POSITIONAL_ONLY`, the second (`iterable`) `POSITIONAL_OR_KEYWORD`, and the third (`*iterables`) `VAR_POSITIONAL`? This is how it looks in [the docs](https://docs.python.org/3/library/functions.html#map).
toolz can work around this by special-casing `map` one way or another, but it seemed worth at least reporting upstream.
### CPython versions tested on:
3.12, 3.13
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-120528
* gh-120539
<!-- /gh-linked-prs -->
| d4039d3f6f8cb7738c5cd272dde04171446dfd2b | 5c58e728b1391c258b224fc6d88f62f42c725026 |
python/cpython | python__cpython-120984 | # Python 3.12 change results in Apple App Store rejection
# Bug report
### Bug description:
This is not a bug in the traditional sense, but I recently went through an ordeal where updates to my app on Apple's App Store (Mac App Store to be specific) started to be rejected after updating my bundled version of Python from 3.11 to 3.12.
It took me quite a while to get to the bottom of this so I wanted to mention it here in case it saves others some pain.
Here is the rejection note I was getting:
**Guideline 2.5.2 - Performance - Software Requirements**
_The app installed or launched executable code. Specifically, the app uses the itms-services URL scheme to install an app._
Eventually I learned that the offending files were `Lib/urllib/parse.py` and its associated .pyc. It seems that an 'itms-services' string [was added here](https://github.com/python/cpython/issues/104139) in Python 3.12 and it seems that Apple is scanning for this string and auto-rejecting anything containing it (at least in my case).
After removing that string from my bundled copy of Python, my update finally passed review.
Has anyone else run into this? Would it be worth slightly obfuscating that string or something to avoid triggering that rejection? With Python set to officially support iOS in the next release it would be a bummer if it is unable to pass App Store reviews out of the box.
### CPython versions tested on:
3.12
### Operating systems tested on:
macOS
<!-- gh-linked-prs -->
### Linked PRs
* gh-120984
* gh-121173
* gh-121174
* gh-121830
* gh-121844
* gh-121845
* gh-121947
* gh-122105
<!-- /gh-linked-prs -->
| f27593a87c344f3774ca73644a11cbd5614007ef | 8549559f383dfcc0ad0c32496f62a4b737c05b4f |
python/cpython | python__cpython-120523 | # Clarify documentation for except*
# Documentation
The [documentation for except*](https://docs.python.org/3/reference/compound_stmts.html#except-clause) states
> An except* clause must have a matching type, and this type cannot be a subclass of [BaseExceptionGroup](https://docs.python.org/3/library/exceptions.html#BaseExceptionGroup).
This incorrectly implies that there can only be one matching type, when tuples of types are allowed in except*.
I propose to change this to:
> An except* clause must have a matching type or a tuple of matching types, which cannot be a subclass of [BaseExceptionGroup](https://docs.python.org/3/library/exceptions.html#BaseExceptionGroup).
This has been discussed in the forum: https://discuss.python.org/t/clarifying-typing-behavior-for-exception-handlers-except-except/55060
<!-- gh-linked-prs -->
### Linked PRs
* gh-120523
* gh-120750
* gh-120751
<!-- /gh-linked-prs -->
| 58b3f111767148e9011ccd52660e208f0c834b2a | d484383861b44b4cf76f31ad6af9a0b413334a89 |
python/cpython | python__cpython-120640 | # Lower `BEFORE_WITH` and `BEFORE_ASYNC_WITH` to attribute lookups and calls.
With the JIT and better tier 2 optimizations, effective specialization is becoming more important than plain speed in tier 1.
`BEFORE_WITH` and `BEFORE_ASYNC_WITH` could be specialized, but they are bulky and won't optimize well in tier 2.
Instead, we should lower them to attribute lookups and calls which can then be optimized.
We should add a `LOAD_SPECIAL` instruction for loading dunder methods and replace `BEFORE_WITH` as follows:
```
COPY 1
LOAD_SPECIAL __enter__ + NULL|self
SWAP 3
LOAD_SPECIAL __exit__
SWAP 3
CALL 0
```
Likewise for `BEFORE_ASYNC_WITH`.
Even without any specialization of `LOAD_SPECIAL`, the `CALL` will be specialized and the JIT can eliminate the `COPY` and `SWAP`s.
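For reference, a rough Python-level sketch of the setup the expansion above performs (an approximation of the semantics, not the actual bytecode): both dunders are looked up on the type, bypassing the instance, and `__enter__` is then called.

```python
class Ctx:
    def __enter__(self):
        return "entered"

    def __exit__(self, exc_type, exc, tb):
        return False

mgr = Ctx()
# special-method lookup goes through the type, not the instance
exit_bound = type(mgr).__exit__.__get__(mgr, type(mgr))
entered = type(mgr).__enter__(mgr)  # corresponds to the CALL 0 above
print(entered)
```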
<!-- gh-linked-prs -->
### Linked PRs
* gh-120640
* gh-120648
<!-- /gh-linked-prs -->
| 9cefcc0ee781a1bef9e0685c2271237005fb488b | 73dc1c678eb720c2ced94d2f435a908bb6d18566 |
python/cpython | python__cpython-120498 | # Incorrect exception handling in Tab Nanny
# Bug report
### Bug description:
In the `Lib/tabnanny.py` module, the exception handling for `IndentationError` might not be reachable after `SyntaxError` is caught and a `return` statement is executed.

See https://github.com/python/cpython/blob/main/Lib/tabnanny.py#L108-L114
```python
except SyntaxError as msg:
errprint("%r: Token Error: %s" % (file, msg))
return
except IndentationError as msg:
errprint("%r: Indentation Error: %s" % (file, msg))
return
```
**Why This Happens:**
According to the exception hierarchy at https://docs.python.org/3/library/exceptions.html#exception-hierarchy , `IndentationError` is a subclass of `SyntaxError`, so the broader `SyntaxError` clause matches first and the `IndentationError` clause written after it can never execute.

**How to Fix:**
We need to put the more specific `IndentationError` handler before the `SyntaxError` handler.
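A minimal sketch of why the ordering matters:

```python
# IndentationError is a subclass of SyntaxError, so the broader
# handler must come second or it shadows the specific one
assert issubclass(IndentationError, SyntaxError)

def classify(exc):
    try:
        raise exc
    except IndentationError:
        return "indentation"  # reachable only because it is listed first
    except SyntaxError:
        return "syntax"

print(classify(IndentationError("bad indent")))  # indentation
print(classify(SyntaxError("bad syntax")))       # syntax
```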
<!-- gh-linked-prs -->
### Linked PRs
* gh-120498
* gh-120548
* gh-120549
<!-- /gh-linked-prs -->
| c501261c919ceb97c850ef9427a93326f06a8f2e | 42ebdd83bb194f054fe5a10b3caa0c3a95be3679 |
python/cpython | python__cpython-120488 | # Some classes which inherit `socketserver.TCPServer ` should override `allow_reuse_port`
# Bug report
### Bug description:
In [#30072](https://github.com/python/cpython/pull/30072) there was a change to `socketserver.TCPServer` which added a new variable `allow_reuse_port` to support the usage of `SO_REUSEPORT` along with `SO_REUSEADDR`. The problem is that the classes which inherit from `socketserver.TCPServer` and override `allow_reuse_address` weren't updated to include the override of the new variable `allow_reuse_port`.
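A sketch of the kind of override the affected subclasses would need (the class name is illustrative, not one of the actual stdlib classes):

```python
import socketserver

class MyHTTPServer(socketserver.TCPServer):
    # subclasses already override this...
    allow_reuse_address = True
    # ...but after #30072 they should also override the new variable
    allow_reuse_port = True
```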
### CPython versions tested on:
3.11
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-120488
<!-- /gh-linked-prs -->
| 192d17c3fd9945104bc0303cf248bb0d074d260e | 0c0348adbfca991f78b3aaa6790e5c26606a1c0f |
python/cpython | python__cpython-120451 | # Improve documentation about private name mangling
Private name mangling is quite common if you want to emulate a "private" interface without leaving the possibility of the interface being considered "protected" (in the C++/Java sense). I've seen a few constructions in the standard library that are using this so I think it should be clear (visually speaking) in the docs as well.
I tend to forget about the rule of "no transformation if name only consists of underscores" or "underscores stripped + 1 underscore", so I think it's good to highlight them.
<!-- gh-linked-prs -->
### Linked PRs
* gh-120451
* gh-121715
* gh-121716
<!-- /gh-linked-prs -->
| f4d6e45c1e7161878b36ef9e876ca3e44b80a97d | 422855ad21f09b82c0bfa891dfb8fb48182c6d2b |
python/cpython | python__cpython-120450 | # [tests] mangled names are incorrectly introspected by `test_pyclbr.py`
# Bug report
### Bug description:
This is something I encountered when implementing #120422. It should not affect the `pyclbr.py` module itself since introspection is achieved without any loading (so the names are correctly picked).
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-120450
* gh-120700
* gh-120701
<!-- /gh-linked-prs -->
| d8cd0fa4e347f460d0f3277e2392504e61ed087d | 16f8e22e7c681d8e8184048ed1bf927d33e11758 |
python/cpython | python__cpython-120712 | # [3.13] Ensurepip fails when built with `--enable-experimental-jit` and `--with-pydebug` (Linux)
# Bug report
### Bug description:
When building for Linux/X64, the 3.13 branch will not successfully run `python -m ensurepip` if built with both `--enable-experimental-jit` and `--with-pydebug`:
```sh
git checkout 3.13
make clean
./configure --enable-experimental-jit --with-pydebug
make -j8
./python -m ensurepip
```
This fails with an error message like:
```sh
subprocess.CalledProcessError: Command '['/home/jglass/Documents/cpython/python', '-W', 'ignore::DeprecationWarning', '-c', '\nimport runpy\nimport sys\nsys.path = [\'/tmp/tmpsu81mj6o/pip-24.0-py3-none-any.whl\'] + sys.path\nsys.argv[1:] = [\'install\', \'--no-cache-dir\', \'--no-index\', \'--find-links\', \'/tmp/tmpsu81mj6o\', \'pip\']\nrunpy.run_module("pip", run_name="__main__", alter_sys=True)\n']' died with <Signals.SIGABRT: 6>.
```
Building with either configure flag individually succeeds.
Result of `./python -VV`: `Python 3.13.0b2+ (heads/3.13:ff358616ddb, Jun 12 2024, 21:17:57) [GCC 11.4.0]`
Built on Ubuntu 22.04; same failure on two separate machines. These options build successfully on main (`030b452e`).
### CPython versions tested on:
3.13
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-120712
* gh-120747
<!-- /gh-linked-prs -->
| f385d99f57773e48285e0bcdbcd66dcbfdc647b3 | ace2045ea673e14a4c403d4bb56704cdde83ad07 |
python/cpython | python__cpython-120434 | # Mention ``chocolatey`` in ``Tools/jit/README.md``
Chocolatey is a well-known package manager for Windows users, but our readme for the JIT doesn't mention it, which surprised me.
I think it would be a good addition 🙂
<!-- gh-linked-prs -->
### Linked PRs
* gh-120434
* gh-120651
<!-- /gh-linked-prs -->
| 95737bbf18765a24b6585708588c9b707dc30d27 | f4d301d8b99e5a001c89b0aea091b07b70c6354c |
python/cpython | python__cpython-123191 | # [Docs] Possibly Add More Information About Immortal Objects
From @ncoghlan in https://github.com/python/peps/issues/3817#issuecomment-2151651033:
> I'm wondering if it might be worth adding more detail on immortal objects to a new subheading in https://docs.python.org/3/c-api/refcounting.html though, as the best canonical doc reference I could find in the reference docs is to the term `reference count` which really just mentions immortal objects rather than explaining them. A glossary entry referencing that new subsection would also be helpful.
>
> The two explanatory links in the C API docs go to the PEP, which is about to be marked as no longer to be trusted as living documentation.
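For illustration, immortality is observable from Python code: on 3.12+ the reference count of an immortal object such as `None` is a sentinel value that never changes (a sketch; the exact sentinel value is an implementation detail):

```python
import sys

before = sys.getrefcount(None)
refs = [None] * 10_000          # would add ~10k references before 3.12
after = sys.getrefcount(None)

# on 3.12+ both reads return the same (huge) sentinel value
print(before == after)
```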
<!-- gh-linked-prs -->
### Linked PRs
* gh-123191
* gh-123491
* gh-123636
<!-- /gh-linked-prs -->
| 6754566a51a5706e8c9da0094b892113311ba20c | da4302699f0af50bcfc1df977481c117876c7117 |
python/cpython | python__cpython-126593 | # `nturl2path.pathname2url()` doesn't handle forward slashes
# Bug report
### Bug description:
Paths with forward slashes aren't handled the same as paths with backslashes:
```python
>>> from nturl2path import pathname2url
>>> pathname2url('//?/unc/server/share/dir')
'//%3F/unc/server/share/dir' # NOK
>>> pathname2url(r'\\?\unc\server\share\dir')
'//server/share/dir' # OK
```
### CPython versions tested on:
3.13
### Operating systems tested on:
Windows
<!-- gh-linked-prs -->
### Linked PRs
* gh-126593
* gh-126763
* gh-126764
<!-- /gh-linked-prs -->
| bf224bd7cef5d24eaff35945ebe7ffe14df7710f | 7577307ebdaeef6702b639e22a896080e81aae4e |
python/cpython | python__cpython-120419 | # Test regression from gh-117711: Assuming wheels are deleted from source when WHEEL_PKG_DIR is set
# Bug report
### Bug description:
```python
test_makefile_test_folders (test.test_tools.test_makefile.TestMakefile.test_makefile_test_folders) ... FAIL
======================================================================
FAIL: test_makefile_test_folders (test.test_tools.test_makefile.TestMakefile.test_makefile_test_folders)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/usr/lib/python3.12/test/test_tools/test_makefile.py", line 75, in test_makefile_test_folders
self.assertSetEqual(unique_test_dirs, set(used))
AssertionError: Items in the second set but not the first: 'test/wheeldata'
----------------------------------------------------------------------
Ran 1 test in 0.009s
```
This is a regression caused by the change in #117712, which assumed that whenever `WHEEL_PKG_DIR` is set, the `.whl` files in the testdata will be deleted.
### CPython versions tested on:
3.12
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-120419
* gh-120432
<!-- /gh-linked-prs -->
| 030b452e34bbb0096acacb70a31915b9590c8186 | 3453362183f083e37ea866a7ae1b34147ffaf81d |
python/cpython | python__cpython-120420 | # Remove unused imports from the stdlib
Using pyflakes or ruff, I found unused imports and I created this issue to remove them.
<!-- gh-linked-prs -->
### Linked PRs
* gh-120420
* gh-120421
* gh-120429
* gh-120454
* gh-120461
* gh-120574
* gh-120622
* gh-120623
* gh-120626
* gh-120627
* gh-120628
* gh-120629
* gh-120630
* gh-120631
* gh-120632
* gh-120679
* gh-120680
<!-- /gh-linked-prs -->
| 4c6d4f5cb33e48519922d635894eef356faddba2 | 4b5d3e0e721a952f4ac9d17bee331e6dfe543dcd |
python/cpython | python__cpython-120089 | # Linux perf profile can not see Python calls on RISC-V architecture
# Bug report
### Bug description:
Check the output of `python -m sysconfig | grep HAVE_PERF_TRAMPOLINE` to see that the RISC-V build does not support the perf profiler:
```sh
(python312_perf_venv) [root@openeuler-riscv64 pytorch]# python3 -m sysconfig | grep HAVE_PERF_TRAMPOLINE
PY_HAVE_PERF_TRAMPOLINE = "0"
```
Refer to https://docs.python.org/3.12/howto/perf_profiling.html
### CPython versions tested on:
3.12
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-120089
* gh-120413
* gh-121328
<!-- /gh-linked-prs -->
| ca2e8765009d0d3eb9fe6c75465825c50808f4dd | 84512c0e7f4441f060026f4fd9ddb7611fc10de4 |
python/cpython | python__cpython-120398 | # The count method on strings, bytes, bytearray etc. can be significantly faster
# Feature or enhancement
### Proposal:
Counting single characters in a string is very useful. For instance calculating the GC content in a DNA sequence.
```python3
def gc_content(sequence: str) -> float:
upper_seq = sequence.upper()
a_count = upper_seq.count('A')
c_count = upper_seq.count('C')
g_count = upper_seq.count('G')
t_count = upper_seq.count('T')
# Unknown N bases should not influence the GC content, do not use len(sequence)
total = a_count + c_count + g_count + t_count
return (c_count + g_count) / total
```
Another example would be counting newline characters.
The current code counts one character at a time.
```c
static inline Py_ssize_t
STRINGLIB(count_char)(const STRINGLIB_CHAR *s, Py_ssize_t n,
const STRINGLIB_CHAR p0, Py_ssize_t maxcount)
{
Py_ssize_t i, count = 0;
for (i = 0; i < n; i++) {
if (s[i] == p0) {
count++;
if (count == maxcount) {
return maxcount;
}
}
}
return count;
}
```
By providing the appropriate hints to the compiler, the function can be sped up significantly.
### Has this already been discussed elsewhere?
This is a minor feature, which does not need previous discussion elsewhere
### Links to previous discussion of this feature:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-120398
* gh-120455
<!-- /gh-linked-prs -->
| 2078eb45ca0db495972a20fcaf96df8fcf48451d | 6ae254aaa0a5a3985a52d1ab387a2b68c001bd96 |
python/cpython | python__cpython-120390 | # [C API] Add PyLong_FromInt64() and PyLong_ToInt64()
# Feature or enhancement
I propose to add functions to convert <stdint.h> integers to/from Python int objects:
```c
PyObject* PyLong_FromInt32(int32_t value);
PyObject* PyLong_FromInt64(int64_t value);
PyObject* PyLong_FromUInt32(uint32_t value);
PyObject* PyLong_FromUInt64(uint64_t value);
int PyLong_ToInt32(PyObject *obj, int32_t *value);
int PyLong_ToInt64(PyObject *obj, int64_t *value);
int PyLong_ToUInt32(PyObject *obj, uint32_t *value);
int PyLong_ToUInt64(PyObject *obj, uint64_t *value);
```
Notes:
* I prefer to limit the API to 4 types for now: int32/64_t and uint32/64_t. Later, we can discuss add more types, but let's start with the most common ones. (**UPDATE:** I removed 8-bit and 16-bit types.)
* I prefer `UInt` to `Uint` since there are two words: Unsigned INTeger.
* `To` functions don't return the result, but a status: 0 on success, -1 on error (with an exception set). It's to solve the [C API Problem #1](https://github.com/capi-workgroup/problems/issues/1): "Ambiguous return values". PyLong_AsLong() returns -1 both as a valid result and on error (with an exception set).
Related discussion: [Avoid C-specific Types](https://github.com/capi-workgroup/api-evolution/issues/10).
<!-- gh-linked-prs -->
### Linked PRs
* gh-120390
<!-- /gh-linked-prs -->
| 4c6dca82925bd4be376a3e4a53c8104ad0b0cb5f | 1a0b828994ed4ec1f2ba05123995a7d1e852f4b4 |
python/cpython | python__cpython-120401 | # Improve deprecation warning message, when a test case returns non `None` value
# Feature or enhancement
Right now there are two potential things we can improve in this warning message: https://github.com/python/cpython/blob/19435d299a1fae9ad9a6bbe6609e41ddfd7f6cbe/Lib/unittest/case.py#L605-L609
1. The returned value's type is not shown. It can be useful for users
2. A common case for this warning is that `async def test_some` is defined in a regular `TestCase` class, I think that we can improve this specific case and suggest to use `IsolatedAsyncioTestCase` instead
I have a patch ready.
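A sketch reproducing the warning in question (3.11+; the exact message text varies by version):

```python
import io
import sys
import unittest
import warnings

class Bad(unittest.TestCase):
    async def test_async(self):  # a coroutine is returned, never awaited
        pass

suite = unittest.defaultTestLoader.loadTestsFromTestCase(Bad)
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    unittest.TextTestRunner(stream=io.StringIO()).run(suite)

deprecations = [w for w in caught
                if issubclass(w.category, DeprecationWarning)]
print(len(deprecations) >= 1)
```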
<!-- gh-linked-prs -->
### Linked PRs
* gh-120401
<!-- /gh-linked-prs -->
| fabcf6bc8f89f008319442dea614d5cbeb959544 | 92c9c6ae147e1e658bbc8d454f8c7b2c4dea31d1 |
python/cpython | python__cpython-120386 | # ``test_coroutines`` leaks references
# Bug report
### Bug description:
```python
./python.exe -m test -R 3:3 test_coroutines
Using random seed: 2850221778
0:00:00 load avg: 2.19 Run 1 test sequentially in a single process
0:00:00 load avg: 2.19 [1/1] test_coroutines
beginning 6 repetitions. Showing number of leaks (. for 0 or less, X for 10 or more)
123:456
XXX XXX
test_coroutines leaked [10, 10, 10] references, sum=30
test_coroutines leaked [10, 10, 10] memory blocks, sum=30
test_coroutines failed (reference leak)
== Tests result: FAILURE ==
1 test failed:
test_coroutines
Total duration: 2.6 sec
Total tests: run=97
Total test files: run=1/1 failed=1
Result: FAILURE
```
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
macOS
<!-- gh-linked-prs -->
### Linked PRs
* gh-120386
<!-- /gh-linked-prs -->
| 19435d299a1fae9ad9a6bbe6609e41ddfd7f6cbe | f5a9c34f38886c5cf9c2f8d860eee3473447e030 |
python/cpython | python__cpython-120442 | # Array out of bounds assignment in list_ass_subscript
# Crash report
### What happened?
### Root Cause
When step is not 1 in slice assignment, `list_ass_subscript` first calculates the length of the slice and then converts the input iterable into a list. During the conversion, arbitrary code in Python can be executed to modify the length of the current list or even clear it:
```c
/* Python 3.10 source code */
static int
list_ass_subscript(PyListObject* self, PyObject* item, PyObject* value)
{
if (_PyIndex_Check(item)) { /* ... */ }
else if (PySlice_Check(item)) {
Py_ssize_t start, stop, step, slicelength;
if (PySlice_Unpack(item, &start, &stop, &step) < 0) {
return -1;
}
slicelength = PySlice_AdjustIndices(Py_SIZE(self), &start, &stop,
step);
if (step == 1)
return list_ass_slice(self, start, stop, value);
/* Make sure s[5:2] = [..] inserts at the right place:
before 5, not before 2. */
if ((step < 0 && start < stop) ||
(step > 0 && start > stop))
stop = start;
if (value == NULL) { /* ... */ }
else {
/* assign slice */
PyObject *ins, *seq;
PyObject **garbage, **seqitems, **selfitems;
Py_ssize_t i;
size_t cur;
/* protect against a[::-1] = a */
if (self == (PyListObject*)value) {
seq = list_slice((PyListObject*)value, 0,
PyList_GET_SIZE(value));
}
else {
seq = PySequence_Fast(value, // <-- call arbitrary code in python
"must assign iterable "
"to extended slice");
}
if (!seq)
return -1;
/* ... */
selfitems = self->ob_item;
seqitems = PySequence_Fast_ITEMS(seq);
for (cur = start, i = 0; i < slicelength;
cur += (size_t)step, i++) {
garbage[i] = selfitems[cur];
ins = seqitems[i];
Py_INCREF(ins);
selfitems[cur] = ins; // <-- maybe out of bounds
}
/* ... */
}
/* ... */
}
```
### POC
```python
class evil:
def __init__(self, lst):
self.lst = lst
def __iter__(self):
yield from self.lst
self.lst.clear()
lst = list(range(10))
lst[::-1] = evil(lst)
```
### CPython versions tested on:
3.10, 3.11, 3.12
### Operating systems tested on:
Windows
### Output from running 'python -VV' on the command line:
Python 3.10.11 (tags/v3.10.11:7d4cc5a, Apr 5 2023, 00:38:17) [MSC v.1929 64 bit (AMD64)]
Python 3.11.8 (tags/v3.11.8:db85d51, Feb 6 2024, 22:03:32) [MSC v.1937 64 bit (AMD64)]
Python 3.12.3 (tags/v3.12.3:f6650f9, Apr 9 2024, 14:05:25) [MSC v.1938 64 bit (AMD64)]
<!-- gh-linked-prs -->
### Linked PRs
* gh-120442
* gh-120825
* gh-120826
* gh-121345
<!-- /gh-linked-prs -->
| 8334a1b55c93068f5d243852029baa83377ff6c9 | 733dac01b0dc3047efc9027dba177d7116e47c50 |
python/cpython | python__cpython-120383 | # inspect.ismethoddescriptor(): lack of __delete__() is not checked
# Bug report
### Bug description:
The `inspect.ismethoddescriptor()` function reports descriptor objects which implement `__delete__()` but not `__set__()` as *method descriptors*, even though they are, in fact, *data descriptors*:
Actual behavior example:
```python
>>> import inspect
>>> class Descriptor:
... def __get__(self, *_): pass
... def __delete__(self, *_): pass # Note: `__set__()` *not* defined.
...
>>> inspect.ismethoddescriptor(Descriptor()) # Wrong result:
True
```
Expected result:
```python
>>> inspect.ismethoddescriptor(Descriptor()) # Correct result:
False
```
See also: https://discuss.python.org/t/inspect-ismethoddescriptor-checks-for-the-lack-of-set-but-ignores-delete-should-it-be-fixed/55039
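A sketch of the inconsistency (the `ismethoddescriptor()` result is printed rather than asserted, since it depends on whether the fix has been applied):

```python
import inspect

class DelOnly:
    def __get__(self, obj, objtype=None):
        return 42
    def __delete__(self, obj):  # note: __set__ deliberately absent
        pass

class Owner:
    attr = DelOnly()

desc = Owner.__dict__["attr"]
print(Owner().attr)                        # 42: __get__ is invoked
# __delete__ alone already makes this a *data* descriptor, yet
# unpatched versions also report it as a method descriptor:
print(inspect.ismethoddescriptor(desc))
```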
***
There is an additional question: *to which Python versions should the fix be applied?*
IMHO, the fix can be safely applied to the branches 3.14 (main) and 3.13 (still in the beta phase). Backporting it to the earlier versions might be considered too disruptive. Obviously, the decision will need to be made by a core developer, not me.
***
### CPython versions tested on:
3.8, 3.9, 3.10, 3.11, 3.12, 3.13, CPython main branch
<!-- gh-linked-prs -->
### Linked PRs
* gh-120383
* gh-120684
<!-- /gh-linked-prs -->
| dacc5ac71a8e546f9ef76805827cb50d4d40cabf | 7c5da94b5d674e112dc77f6494463014b7137193 |
python/cpython | python__cpython-120422 | # Python pickler unable to pickle object the native pickler can
# Bug report
### Bug description:
```python
import io
import pickle
class ZeroCopyByteArray(bytearray):
def __reduce_ex__(self, protocol):
return type(self), (pickle.PickleBuffer(self),), None
data = [
ZeroCopyByteArray(),
ZeroCopyByteArray(),
]
# this works
f = io.BytesIO()
pickler = pickle.Pickler(f, protocol=5)
pickler.dump(data)
# this hits memo assertion
f = io.BytesIO()
pickler = pickle._Pickler(f, protocol=5)
pickler.dump(data)
```
```pytb
Traceback (most recent call last):
File "bug.py", line 24, in <module>
pickler.dump(data)
File "/pickle.py", line 487, in dump
self.save(obj)
File "/pickle.py", line 560, in save
f(self, obj) # Call unbound method with explicit self
^^^^^^^^^^^^
File "/pickle.py", line 932, in save_list
self._batch_appends(obj)
File "/pickle.py", line 956, in _batch_appends
save(x)
File "/pickle.py", line 603, in save
self.save_reduce(obj=obj, *rv)
File "/pickle.py", line 692, in save_reduce
save(args)
File "/pickle.py", line 560, in save
f(self, obj) # Call unbound method with explicit self
^^^^^^^^^^^^
File "/pickle.py", line 887, in save_tuple
save(element)
File "/pickle.py", line 560, in save
f(self, obj) # Call unbound method with explicit self
^^^^^^^^^^^^
File "/pickle.py", line 842, in save_picklebuffer
self.save_bytearray(m.tobytes())
File "/pickle.py", line 821, in save_bytearray
self.memoize(obj)
File "/pickle.py", line 508, in memoize
assert id(obj) not in self.memo
^^^^^^^^^^^^^^^^^^^^^^^^
AssertionError
```
### CPython versions tested on:
3.11
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-120422
* gh-120832
* gh-120833
<!-- /gh-linked-prs -->
| 7595e6743ac78ac0dd19418176f66d251668fafc | 83d3d7aace32b8536f552f78dd29610344f13160 |
python/cpython | python__cpython-124555 | # Segmentation Fault in _curses
# Crash report
### What happened?
### Build
```
./configure --with-pydebug --with-address-sanitizer
apt-get install libncurses5-dev
```
### Root Cause
When `_curses.initscr` is called, `initialised` is set to `True`. Then, if `_curses.resizeterm` is called with an improper size for the first argument, an error occurs and `stdscr` is freed. Wrapping the call in a try-except block catches the exception but does not undo the teardown.
Because `initialised` is still `True`, a second call to `_curses.initscr` invokes `wrefresh(stdscr)` even though `stdscr` has already been freed.
https://github.com/python/cpython/blob/34e4d3287e724c065cc07b04a1ee8715817db284/Modules/_cursesmodule.c#L3265-L3283
### POC
```python
import _curses
_curses.initscr()
try:
_curses.resizeterm(+35000, 1)
except:
pass
_curses.initscr()
```
### ASAN
<details>
<summary>asan</summary>
```
AddressSanitizer:DEADLYSIGNAL
=================================================================
==1373==ERROR: AddressSanitizer: SEGV on unknown address 0x7f4c7b59d370 (pc 0x7f4c7b7eb5aa bp 0x61b000018880 sp 0x7ffd073842c0 T0)
==1373==The signal is caused by a READ memory access.
#0 0x7f4c7b7eb5aa (/lib/x86_64-linux-gnu/libncursesw.so.6+0x275aa)
#1 0x7f4c7b7edd09 in doupdate_sp (/lib/x86_64-linux-gnu/libncursesw.so.6+0x29d09)
#2 0x7f4c7b7e16d7 in wrefresh (/lib/x86_64-linux-gnu/libncursesw.so.6+0x1d6d7)
#3 0x7f4c7b9908f6 in _curses_initscr_impl Modules/_cursesmodule.c:3258
#4 0x7f4c7b999675 in _curses_initscr Modules/clinic/_cursesmodule.c.h:2661
#5 0x562817924edd in cfunction_vectorcall_NOARGS Objects/methodobject.c:481
#6 0x5628175fddeb in _PyObject_VectorcallTstate Include/internal/pycore_call.h:92
#7 0x5628175fe0a0 in PyObject_Vectorcall Objects/call.c:325
#8 0x56281800d628 in _PyEval_EvalFrameDefault Python/bytecodes.c:2706
#9 0x5628180346d0 in _PyEval_EvalFrame Include/internal/pycore_ceval.h:89
#10 0x5628180346d0 in _PyEval_Vector Python/ceval.c:1683
#11 0x562818034a7c in PyEval_EvalCode Python/ceval.c:578
#12 0x562818377486 in run_eval_code_obj Python/pythonrun.c:1691
#13 0x56281837cb70 in run_mod Python/pythonrun.c:1712
#14 0x56281837d4f1 in pyrun_file Python/pythonrun.c:1612
#15 0x562818397728 in _PyRun_SimpleFileObject Python/pythonrun.c:433
#16 0x562818398a0c in _PyRun_AnyFileObject Python/pythonrun.c:78
#17 0x5628184e2cf0 in pymain_run_file_obj Modules/main.c:360
#18 0x5628184e4c04 in pymain_run_file Modules/main.c:379
#19 0x5628184f0722 in pymain_run_python Modules/main.c:629
#20 0x5628184f0be4 in Py_RunMain Modules/main.c:709
#21 0x5628184f1077 in pymain_main Modules/main.c:739
#22 0x5628184f14f4 in Py_BytesMain Modules/main.c:763
#23 0x562817147c3a in main Programs/python.c:15
#24 0x7f4c7ec56d8f in __libc_start_call_main ../sysdeps/nptl/libc_start_call_main.h:58
#25 0x7f4c7ec56e3f in __libc_start_main_impl ../csu/libc-start.c:392
#26 0x562817072344 in _start (/cpython/python+0x3a7344)
AddressSanitizer can not provide additional info.
SUMMARY: AddressSanitizer: SEGV (/lib/x86_64-linux-gnu/libncursesw.so.6+0x275aa)
==1373==ABORTING
```
</details>
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Linux
### Output from running 'python -VV' on the command line:
Python 3.14.0a0 (heads/main:34f5ae69fe, Jun 9 2024, 21:27:54) [GCC 11.4.0]
<!-- gh-linked-prs -->
### Linked PRs
* gh-124555
* gh-124905
* gh-124911
<!-- /gh-linked-prs -->
| c2ba931318280796a6dcc33d1a5c5c02ad4d035b | 7bd9dbf8e148f14f9c9c6715a820bfda6adff957 |
python/cpython | python__cpython-120374 | # test_audit.test_http is running even if the 'network' resource is not enabled
# Bug report
### Bug description:
Running the `test_audit` test with no resources enabled in an environment with no Internet access takes ~9 minutes. Most of that time is spent waiting for `test_http` to timeout on network:
```sh
./python -m test test_audit -v
...
test_http (test.test_audit.AuditTest.test_http) ... ('http.client.connect', ' ', 'www.python.org 80')
('http.client.send', ' ', '[cannot send]')
ok
...
```
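A sketch of the usual guard for such tests (the decorator exists in `test.support`; whether this is exactly what the linked PRs do is not shown here):

```python
from test import support

class Demo:
    @support.requires_resource('network')
    def test_http(self):
        # only runs when -u network / -u all is passed to regrtest
        pass
```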
### CPython versions tested on:
3.13, CPython main branch
### Operating systems tested on:
Linux, macOS
<!-- gh-linked-prs -->
### Linked PRs
* gh-120374
* gh-120948
* gh-120949
<!-- /gh-linked-prs -->
| b0e1c51882e3a129d1e4db8291f7a0d869d6f1d6 | ac61d58db0753a3b37de21dbc6e86b38f2a93f1b |
python/cpython | python__cpython-121523 | # Test w/ wasmtime 22
# Feature or enhancement
https://github.com/bytecodealliance/wasmtime/releases/
<!-- gh-linked-prs -->
### Linked PRs
* gh-121523
* gh-121557
<!-- /gh-linked-prs -->
| 80209468144fbd1af5cd31f152a6631627a9acab | 04397434aad9b31328785e17ac7b3a2d5097269b |
python/cpython | python__cpython-121870 | # Test w/ WASI SDK >=22
https://github.com/WebAssembly/wasi-sdk/releases/tag/wasi-sdk-22
https://github.com/WebAssembly/wasi-sdk/releases/tag/wasi-sdk-23 (currently pre-release)
<!-- gh-linked-prs -->
### Linked PRs
* gh-121870
* gh-121873
<!-- /gh-linked-prs -->
| f589f263bcb54332e47bfc76cbb06f775e82b778 | e65cb4c6f01a687f451ad9db1600525e1c5832c4 |
python/cpython | python__cpython-120425 | # no_redundant_jumps: Assertion `0' failed
# Bug report
### Bug description:
The following code causes a crash on cpython 3.13 branch (f5289c450a324bd560b328ecd42ac9faf578276e) and the current main.
```python
import ast
code="""
try:
if name_4:
pass
else:
pass
except* name_0:
pass
else:
name_4
"""
tree = ast.parse(code)
for node in ast.walk(tree):
if hasattr(node,"lineno"):
del node.lineno
del node.end_lineno
del node.col_offset
del node.end_col_offset
compile(ast.fix_missing_locations(tree), "<file>", "exec")
```
output (Python 3.13.0a3+):
```python
python: Python/flowgraph.c:511: no_redundant_jumps: Assertion `0' failed.
```
I bisected the problem down to 2091fb2a85c1aa2d9b22c02736b07831bd875c2a.
@iritkatriel can you take a look at it?
### CPython versions tested on:
3.13, CPython main branch
### Operating systems tested on:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-120425
* gh-120621
* gh-120714
* gh-120716
<!-- /gh-linked-prs -->
| 21866c8ed296524f0ca175c0f55b43744c2b30df | c2d5df5787b1f7fbd2583811c66c34a417593cad |
python/cpython | python__cpython-120364 | # `enum.nonmember` type-decays Flag values
# Bug report
### Bug description:
```pycon
>>> from enum import Flag, auto, nonmember
>>>
>>> class DepType(Flag):
... LIBRARY = auto()
... SCRIPT = auto()
... TEST = auto()
... ALL = nonmember(LIBRARY | SCRIPT | TEST)
...
>>> DepType.ALL
7
>>> type(DepType.ALL)
<class 'int'>
>>> DepType.LIBRARY | DepType.SCRIPT | DepType.TEST
<DepType.LIBRARY|SCRIPT|TEST: 7>
>>> type(DepType.LIBRARY | DepType.SCRIPT | DepType.TEST)
<flag 'DepType'>
```
This manifests as a confusing error:
```pycon
>>> DepType.LIBRARY in DepType.ALL
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: argument of type 'int' is not iterable
```
If this isn't a bug, it should at least be mentioned in the docs.
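Until this is resolved, one workaround that sidesteps the decay entirely is to build the alias outside the class body, so the `Enum` machinery never touches it (a sketch, reusing the `DepType` definition from above without the `ALL` member):

```python
from enum import Flag, auto

class DepType(Flag):
    LIBRARY = auto()
    SCRIPT = auto()
    TEST = auto()

# Defined after the class body, so it is a plain module-level constant
# and keeps its DepType flag type instead of decaying to int.
ALL = DepType.LIBRARY | DepType.SCRIPT | DepType.TEST

assert type(ALL) is DepType
assert DepType.LIBRARY in ALL  # no TypeError here
```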
### CPython versions tested on:
3.11
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-120364
* gh-120511
* gh-120512
<!-- /gh-linked-prs -->
| 7fadfd82ebf6ea90b38cb3f2a046a51f8601a205 | 7c38097add9cc24e9f68414cd3e5e1b6cbe38a17 |
python/cpython | python__cpython-120349 | # PYTHON_BASIC_REPL is ignored by interactive inspect
# Bug report
### Bug description:
The code in `pymain_repl()` in "Modules/main.c" needs to check `_Py_GetEnv(config->use_environment, "PYTHON_BASIC_REPL")`. Otherwise running with `-i`, or with `PYTHONINSPECT` set in the environment, ends up running the new REPL instead of the basic REPL after the command or script finishes.
https://github.com/python/cpython/blob/02c1dfff073a3dd6ce34a11b038defde291c2203/Modules/main.c#L545-L550
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-120349
* gh-120351
<!-- /gh-linked-prs -->
| ec3af291fe2f680ab277edde7113e2762754f4aa | 9b8611eeea172cd4aa626ccd1ca333dc4093cd8c |
python/cpython | python__cpython-120347 | # Incorrect use of the :class: role with the "()" suffix
There are several uses of the `:class:` role with the name followed by `()`.
```
Doc/tutorial/stdlib2.rst:296:The :mod:`array` module provides an :class:`~array.array()` object that is like
Doc/tutorial/stdlib2.rst:309:The :mod:`collections` module provides a :class:`~collections.deque()` object
Doc/whatsnew/2.5.rst:1727:calling into the interpreter's code. There's a :class:`py_object()` type
Doc/whatsnew/2.5.rst:1737:Don't forget to use :class:`py_object()`; if it's omitted you end up with a
Doc/whatsnew/3.12.rst:742:* Add :class:`itertools.batched()` for collecting into even-sized
Doc/howto/descriptor.rst:790:object returned by :class:`super()`.
Doc/library/datetime.rst:2156: This is called from the default :class:`datetime.astimezone()`
Doc/library/fileinput.rst:50:*openhook* parameter to :func:`fileinput.input` or :class:`FileInput()`. The
Doc/library/collections.rst:102: Note, the iteration order of a :class:`ChainMap()` is determined by
```
This is incorrect. If we refer to a class as a type, we should not add `()` to the name. If we refer to it as a callable, we can use the `:func:` role instead (which adds `()` automatically). In one place `:class:` was improperly used for something that is not a class at all.
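A hedged sketch of one way to locate such occurrences programmatically (the regex is an approximation of the pattern above, not the exact query that produced the list):

```python
import re

# Matches a :class: role whose target ends in "()", e.g. :class:`~array.array()`.
ROLE_WITH_PARENS = re.compile(r":class:`~?[\w.]+\(\)`")

line = "The :mod:`array` module provides an :class:`~array.array()` object that is like"
assert ROLE_WITH_PARENS.search(line).group() == ":class:`~array.array()`"
```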
<!-- gh-linked-prs -->
### Linked PRs
* gh-120347
* gh-120411
* gh-120412
<!-- /gh-linked-prs -->
| 92c9c6ae147e1e658bbc8d454f8c7b2c4dea31d1 | 42b25dd61ff3593795c4cc2ffe876ab766098b24 |
python/cpython | python__cpython-120352 | # Multiple lines f-string with non-ASCII breaks tokenize.generate_tokens in 3.12.4
# Bug report
### Bug description:
```python
import io
import tokenize
src = '''\
a = f"""
Autorzy, którzy tą jednostkę mają wpisani jako AKTUALNA -- czyli"""
'''
tokens = list(tokenize.generate_tokens(io.StringIO(src).readline))
for token in tokens:
print(token)
assert tokens[4].start == (2, 68), tokens[4].start
```
## Python 3.12.3 (correct)
```python
❯ python --version
Python 3.12.3
❯ python c.py
TokenInfo(type=1 (NAME), string='a', start=(1, 0), end=(1, 1), line='a = f"""\n')
TokenInfo(type=55 (OP), string='=', start=(1, 2), end=(1, 3), line='a = f"""\n')
TokenInfo(type=61 (FSTRING_START), string='f"""', start=(1, 4), end=(1, 8), line='a = f"""\n')
TokenInfo(type=62 (FSTRING_MIDDLE), string='\n Autorzy, którzy tą jednostkę mają wpisani jako AKTUALNA -- czyli', start=(1, 8), end=(2, 68), line='a = f"""\n Autorzy, którzy tą jednostkę mają wpisani jako AKTUALNA -- czyli"""\n')
TokenInfo(type=63 (FSTRING_END), string='"""', start=(2, 68), end=(2, 71), line=' Autorzy, którzy tą jednostkę mają wpisani jako AKTUALNA -- czyli"""\n')
TokenInfo(type=4 (NEWLINE), string='\n', start=(2, 71), end=(2, 72), line=' Autorzy, którzy tą jednostkę mają wpisani jako AKTUALNA -- czyli"""\n')
TokenInfo(type=0 (ENDMARKER), string='', start=(3, 0), end=(3, 0), line='')
```
## Python 3.12.4 (broken)
```python
❯ python --version
Python 3.12.4
❯ python c.py
TokenInfo(type=1 (NAME), string='a', start=(1, 0), end=(1, 1), line='a = f"""\n')
TokenInfo(type=55 (OP), string='=', start=(1, 2), end=(1, 3), line='a = f"""\n')
TokenInfo(type=61 (FSTRING_START), string='f"""', start=(1, 4), end=(1, 8), line='a = f"""\n')
TokenInfo(type=62 (FSTRING_MIDDLE), string='\n Autorzy, którzy tą jednostkę mają wpisani jako AKTUALNA -- czyli', start=(1, 8), end=(2, 68), line='a = f"""\n Autorzy, którzy tą jednostkę mają wpisani jako AKTUALNA -- czyli"""\n')
TokenInfo(type=63 (FSTRING_END), string='"""', start=(2, 72), end=(2, 75), line=' Autorzy, którzy tą jednostkę mają wpisani jako AKTUALNA -- czyli"""\n')
TokenInfo(type=4 (NEWLINE), string='\n', start=(2, 75), end=(2, 76), line=' Autorzy, którzy tą jednostkę mają wpisani jako AKTUALNA -- czyli"""\n')
TokenInfo(type=0 (ENDMARKER), string='', start=(3, 0), end=(3, 0), line='')
Traceback (most recent call last):
File "/private/tmp/flake8/c.py", line 13, in <module>
assert tokens[4].start == (2, 68), tokens[4].start
^^^^^^^^^^^^^^^^^^^^^^^^^^
AssertionError: (2, 72)
```
## More info
I found a previous, similar issue here: https://github.com/python/cpython/issues/112943
### CPython versions tested on:
3.12
### Operating systems tested on:
macOS
<!-- gh-linked-prs -->
### Linked PRs
* gh-120352
* gh-120355
* gh-120356
* gh-120391
* gh-120427
* gh-120428
<!-- /gh-linked-prs -->
| 1b62bcee941e54244b3ce6476aef8913604987c9 | 32a0faba439b239d7b0c242c1e3cd2025c52b8cf |
python/cpython | python__cpython-120329 | # Python 3.13 beta 2 build fails on Windows when using both `--experimental-jit` and `--disable-gil`
# Bug report
### Bug description:
```bash
PCbuild\build.bat --experimental-jit --disable-gil
```
commit: 14ff4c979c8564376707de4b5a84dd3e4fcb5d1d
The build output repeats the same error many times:
```
C:\Users\redacted\Projects\cpython\Include\object.h(249,11): error G10448251: call to undeclared library function '__readgsqword' with type 'unsigned long long (unsigned long)'; ISO C99 and later do not support implicit function declarations [-Wimplicit-function-declaration] [C:\Users\redacted\Projects\cpython\PCbuild\pythoncore.vcxproj]
```
[build_output.txt](https://github.com/user-attachments/files/15777328/build_output.txt)
Both `--experimental-jit` and `--disable-gil` work on their own.
### CPython versions tested on:
3.13
### Operating systems tested on:
Windows
<!-- gh-linked-prs -->
### Linked PRs
* gh-120329
* gh-120363
* gh-120407
* gh-120414
<!-- /gh-linked-prs -->
| 939c201e00943c6dc2d515185168c30606ae522c | 86a8a1c57a386fb3330bee0fa44fc3fd6c3042a3 |
python/cpython | python__cpython-120318 | # Using internal tokenize module's TokenizerIter in multiple threads crashes
# Crash report
### What happened?
Because the tokenizer is not thread-safe, using the same `TokenizerIter` in multiple threads under the free-threaded build leads to all kinds of unpredictable behavior. It sometimes succeeds, sometimes throws a `SyntaxError` where there is none, and sometimes crashes with the following.
<details>
<summary>Example error backtrace</summary>
```
Fatal Python error: tok_backup: tok_backup: wrong character
Python runtime state: initialized
Current thread 0x0000000172e1b000 (most recent call first):
File "/Users/lysnikolaou/repos/python/cpython/tmp/t1.py", line 9 in next_token
File "/Users/lysnikolaou/repos/python/cpython/Lib/concurrent/futures/thread.py", line 58 in run
File "/Users/lysnikolaou/repos/python/cpython/Lib/concurrent/futures/thread.py", line 92 in _worker
File "/Users/lysnikolaou/repos/python/cpython/Lib/threading.py", line 990 in run
File "/Users/lysnikolaou/repos/python/cpython/Lib/threading.py", line 1039 in _bootstrap_inner
File "/Users/lysnikolaou/repos/python/cpython/Lib/threading.py", line 1010 in _bootstrap
Thread 0x0000000171e0f000 (most recent call first):
File "/Users/lysnikolaou/repos/python/cpython/tmp/t1.py", line 10 in next_token
File "/Users/lysnikolaou/repos/python/cpython/Lib/concurrent/futures/thread.py", line 58 in run
File "/Users/lysnikolaou/repos/python/cpython/Lib/concurrent/futures/thread.py", line 92 in _worker
File "/Users/lysnikolaou/repos/python/cpython/Lib/threading.py", line 990 in run
File "/Users/lysnikolaou/repos/python/cpython/Lib/threading.py", line 1039 in _bootstrap_inner
File "/Users/lysnikolaou/repos/python/cpython/Lib/threading.py", line 1010 in _bootstrap
Thread 0x0000000170e03000 (most recent call first):
File "/Users/lysnikolaou/repos/python/cpython/Lib/concurrent/futures/_base.py", line 550 in set_exception
File "/Users/lysnikolaou/repos/python/cpython/Lib/concurrent/futures/thread.py", line 60 in run
File "/Users/lysnikolaou/repos/python/cpython/Lib/concurrent/futures/thread.py", line 92 in _worker
File "/Users/lysnikolaou/repos/python/cpython/Lib/threading.py", line 990 in run
File "/Users/lysnikolaou/repos/python/cpython/Lib/threading.py", line 1039 in _bootstrap_inner
File "/Users/lysnikolaou/repos/python/cpython/Lib/threading.py", line 1010 in _bootstrap
Thread 0x000000016fdf7000 (most recent call first):
File "/Users/lysnikolaou/repos/python/cpython/tmp/t1.py", line 10 in next_token
File "/Users/lysnikolaou/repos/python/cpython/Lib/concurrent/futures/thread.py", line 58 in run
File "/Users/lysnikolaou/repos/python/cpython/Lib/concurrent/futures/thread.py", line 92 in Assertion failed: (tok->done != E_ERROR), function _syntaxerror__workerrange, file helpers.c, line 17.
File "/Users/lysnikolaou/repos/python/cpython/Lib/threading.py", line 990 in run
File "/Users/lysnikolaou/repos/python/cpython/Lib/threading.py", line 1039 in _bootstrap_inner
File "/Users/lysnikolaou/repos/python/cpython/Lib/threading.py", line 1010 in _bootstrap
Thread 0x000000016edeb000 (most recent call first):
File "/Users/lysnikolaou/repos/python/cpython/tmp/t1.py", line 10 in next_token
File "/Users/lysnikolaou/repos/python/cpython/Lib/concurrent/futures/thread.pyzsh: abort ./python.exe tmp/t1.py
```
</details>
A minimal reproducer is the following:
```python
import concurrent.futures
import io
import time
import tokenize
def next_token(it):
while True:
try:
r = next(it)
print(tokenize.TokenInfo._make(r))
time.sleep(1)
except StopIteration:
return
for _ in range(20):
with concurrent.futures.ThreadPoolExecutor() as executor:
source = io.StringIO("a = 'abc'\nprint(b)\nfor _ in a: do_something()")
it = tokenize._tokenize.TokenizerIter(source.readline, extra_tokens=False)
threads = (executor.submit(next_token, it) for _ in range(5))
for t in concurrent.futures.as_completed(threads):
t.result()
print("######################################################")
```
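Until the tokenizer itself is made thread-safe, a hedged user-side mitigation is to serialize access to the shared iterator with a lock (sketched here with a plain generator standing in for `TokenizerIter`, since the point is the access pattern, not the tokenizer):

```python
import concurrent.futures
import threading

lock = threading.Lock()
results = []

def items():  # stand-in for the shared TokenizerIter
    yield from range(100)

it = items()

def next_item(it):
    while True:
        with lock:  # only one thread advances the iterator at a time
            try:
                r = next(it)
            except StopIteration:
                return
        results.append(r)

with concurrent.futures.ThreadPoolExecutor() as executor:
    futures = [executor.submit(next_item, it) for _ in range(5)]
    for f in concurrent.futures.as_completed(futures):
        f.result()  # re-raise any worker exception

# Every item was consumed exactly once, with no interleaved iterator state.
assert sorted(results) == list(range(100))
```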
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
macOS
### Output from running 'python -VV' on the command line:
Python 3.14.0a0 experimental free-threading build (heads/main:c3b6dbff2c8, Jun 10 2024, 14:33:07) [Clang 15.0.0 (clang-1500.3.9.4)]
<!-- gh-linked-prs -->
### Linked PRs
* gh-120318
* gh-121841
<!-- /gh-linked-prs -->
| 8549559f383dfcc0ad0c32496f62a4b737c05b4f | 8b6d4755812d0b02e9f26beb9c9a7714e4c5ac28 |
python/cpython | python__cpython-120989 | # `ctypes._FuncPtr` doesn't seem to exist as described in the doc (misspelled `_CFuncType`?)
# Documentation
Module `ctypes` doesn't seem to provide [`_FuncPtr`](https://docs.python.org/3.13/library/ctypes.html#ctypes._FuncPtr) when imported.
However, it provides `_CFuncType`, which according to `help()` looks very similar to the `_FuncPtr` described in the documentation.
```
Help on class CFuncPtr in module _ctypes:
class CFuncPtr(_CData)
| Function Pointer
|
| Method resolution order:
| CFuncPtr
| _CData
| builtins.object
|
| Methods defined here:
|
| __bool__(self, /)
| True if self else False
|
| __buffer__(self, flags, /)
| Return a buffer object that exposes the underlying memory of the object.
|
| __call__(self, /, *args, **kwargs)
| Call self as a function.
|
| __repr__(self, /)
| Return repr(self).
|
| ----------------------------------------------------------------------
| Static methods defined here:
|
| __new__(*args, **kwargs)
| Create and return a new object. See help(type) for accurate signature.
|
| ----------------------------------------------------------------------
| Data descriptors defined here:
|
| argtypes
| specify the argument types
|
| errcheck
| a function to check for errors
|
| restype
| specify the result type
|
| ----------------------------------------------------------------------
| Methods inherited from _CData:
|
| __ctypes_from_outparam__(...)
|
| __hash__(self, /)
| Return hash(self).
|
| __reduce__(...)
| Helper for pickle.
|
| __setstate__(...)
```
Could it be a typo in the documentation?
<!-- gh-linked-prs -->
### Linked PRs
* gh-120989
* gh-125978
* gh-125979
<!-- /gh-linked-prs -->
| 417c130ba55ca29e132808a0a500329f73b6ec41 | cae853e3b44cd5cb033b904e163c490dd28bc30a |
python/cpython | python__cpython-120303 | # Use After Free in list_richcompare_impl
# Crash report
### Bisect
Bisected from https://github.com/python/cpython/commit/65e1cea6e372ed809501cf5b927a65c111f79cd3
### Build
```
./configure --with-pydebug --with-address-sanitizer
```
### Root Cause
The `list_richcompare_impl` function calls arbitrary code while comparing nested list structures. This can cause the reference counts of `vl->ob_item[i]` and `wl->ob_item[i]` to drop to zero, triggering a use-after-free. The issue is reachable from `bisect`, `deque` and `heapq` (https://github.com/python/cpython/issues/115706), which compare items without revalidating the lists afterwards.
```
static PyObject *
list_richcompare_impl(PyObject *v, PyObject *w, int op)
{
PyListObject *vl, *wl;
Py_ssize_t i;
if (!PyList_Check(v) || !PyList_Check(w))
Py_RETURN_NOTIMPLEMENTED;
vl = (PyListObject *)v;
wl = (PyListObject *)w;
if (Py_SIZE(vl) != Py_SIZE(wl) && (op == Py_EQ || op == Py_NE)) {
/* Shortcut: if the lengths differ, the lists differ */
if (op == Py_EQ)
Py_RETURN_FALSE;
else
Py_RETURN_TRUE;
}
/* Search for the first index where items are different */
for (i = 0; i < Py_SIZE(vl) && i < Py_SIZE(wl); i++) {
PyObject *vitem = vl->ob_item[i];
PyObject *witem = wl->ob_item[i];
if (vitem == witem) {
continue;
}
Py_INCREF(vitem);
Py_INCREF(witem);
int k = PyObject_RichCompareBool(vitem, witem, Py_EQ);
Py_DECREF(vitem);
Py_DECREF(witem);
if (k < 0)
return NULL;
if (!k)
break;
}
if (i >= Py_SIZE(vl) || i >= Py_SIZE(wl)) {
/* No more items to compare -- compare sizes */
Py_RETURN_RICHCOMPARE(Py_SIZE(vl), Py_SIZE(wl), op);
}
/* We have an item that differs -- shortcuts for EQ/NE */
if (op == Py_EQ) {
Py_RETURN_FALSE;
}
if (op == Py_NE) {
Py_RETURN_TRUE;
}
/* Compare the final item again using the proper operator */
return PyObject_RichCompare(vl->ob_item[i], wl->ob_item[i], op); // <-- call arbitrary code in python
}
```
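To make the "calls arbitrary code" point concrete in pure Python (a safe illustration, not the crashing POC): comparing nested lists re-enters any user-defined `__eq__`, and at that point the user's code could mutate the outer lists out from under the C loop:

```python
calls = []

class Spy:
    def __eq__(self, other):
        calls.append(other)  # arbitrary user code runs mid-comparison;
        return True          # a hostile version could mutate the lists here

# Element-wise comparison of the outer lists recurses into the inner
# lists and reaches Spy.__eq__.
[[Spy()]] == [[object()]]
assert len(calls) == 1
```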
### POC
```python
import _bisect
class evil(object):
def __lt__(self, other):
other.clear()
return NotImplemented
a = [ [ evil()]]
_bisect.insort_left( a ,a )
```
```python
import collections
class evil(object):
def __lt__(self, other):
other.pop()
return NotImplemented
a = [ [ [ evil() ] ] ]
collections.deque( a[0] ) < collections.deque( a )
```
### asan
<details>
<summary> bisect asan </summary>
```
=================================================================
==148257==ERROR: AddressSanitizer: heap-use-after-free on address 0x61300001ff78 at pc 0x55564b4e5fe2 bp 0x7ffe8b09d4b0 sp 0x7ffe8b09d4a0
READ of size 8 at 0x61300001ff78 thread T0
#0 0x55564b4e5fe1 in Py_TYPE Include/object.h:249
#1 0x55564b4e5fe1 in list_richcompare_impl Objects/listobject.c:3338
#2 0x55564b4e6bcb in list_richcompare Objects/listobject.c:3393
#3 0x55564b561388 in do_richcompare Objects/object.c:933
#4 0x55564b561654 in PyObject_RichCompare Objects/object.c:976
#5 0x55564b4e66c9 in list_richcompare_impl Objects/listobject.c:3385
#6 0x55564b4e6bcb in list_richcompare Objects/listobject.c:3393
#7 0x7fd307a05a2b in internal_bisect_left Modules/_bisectmodule.c:288
#8 0x7fd307a063b6 in _bisect_insort_left_impl Modules/_bisectmodule.c:396
#9 0x7fd307a06a74 in _bisect_insort_left Modules/clinic/_bisectmodule.c.h:432
#10 0x55564b55224a in cfunction_vectorcall_FASTCALL_KEYWORDS Objects/methodobject.c:441
#11 0x55564b45bbb9 in _PyObject_VectorcallTstate Include/internal/pycore_call.h:168
#12 0x55564b45bd14 in PyObject_Vectorcall Objects/call.c:327
#13 0x55564b7988c4 in _PyEval_EvalFrameDefault Python/generated_cases.c.h:813
#14 0x55564b7d0a7b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:119
#15 0x55564b7d0a7b in _PyEval_Vector Python/ceval.c:1819
#16 0x55564b7d0c9c in PyEval_EvalCode Python/ceval.c:599
#17 0x55564b8e8c51 in run_eval_code_obj Python/pythonrun.c:1292
#18 0x55564b8ebb96 in run_mod Python/pythonrun.c:1377
#19 0x55564b8ec976 in pyrun_file Python/pythonrun.c:1210
#20 0x55564b8eee55 in _PyRun_SimpleFileObject Python/pythonrun.c:459
#21 0x55564b8ef349 in _PyRun_AnyFileObject Python/pythonrun.c:77
#22 0x55564b950718 in pymain_run_file_obj Modules/main.c:357
#23 0x55564b952fea in pymain_run_file Modules/main.c:376
#24 0x55564b953bfb in pymain_run_python Modules/main.c:639
#25 0x55564b953d8b in Py_RunMain Modules/main.c:718
#26 0x55564b953f72 in pymain_main Modules/main.c:748
#27 0x55564b9542ea in Py_BytesMain Modules/main.c:772
#28 0x55564b2bdb15 in main Programs/python.c:15
#29 0x7fd30a683d8f (/lib/x86_64-linux-gnu/libc.so.6+0x29d8f)
#30 0x7fd30a683e3f in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x29e3f)
#31 0x55564b2bda44 in _start (/home/kcats/cpython/python+0x282a44)
0x61300001ff78 is located 56 bytes inside of 352-byte region [0x61300001ff40,0x6130000200a0)
freed by thread T0 here:
#0 0x7fd30aa1e537 in __interceptor_free ../../../../src/libsanitizer/asan/asan_malloc_linux.cpp:127
#1 0x55564b5689c5 in _PyMem_RawFree Objects/obmalloc.c:90
#2 0x55564b56ac2f in _PyMem_DebugRawFree Objects/obmalloc.c:2754
#3 0x55564b56b55d in _PyMem_DebugFree Objects/obmalloc.c:2891
#4 0x55564b59f467 in PyObject_Free Objects/obmalloc.c:1323
#5 0x55564b848285 in PyObject_GC_Del Python/gc.c:2123
#6 0x55564b5c4af8 in object_dealloc Objects/typeobject.c:6324
#7 0x55564b5ec8e4 in subtype_dealloc Objects/typeobject.c:2534
#8 0x55564b55f3b1 in _Py_Dealloc Objects/object.c:2854
#9 0x55564b83f056 in Py_DECREF Include/refcount.h:351
#10 0x55564b83f056 in Py_XDECREF Include/refcount.h:459
#11 0x55564b83f056 in _PyFrame_ClearLocals Python/frame.c:104
#12 0x55564b83f21c in _PyFrame_ClearExceptCode Python/frame.c:129
#13 0x55564b7819a6 in clear_thread_frame Python/ceval.c:1681
#14 0x55564b78a486 in _PyEval_FrameClearAndPop Python/ceval.c:1708
#15 0x55564b7c3ea5 in _PyEval_EvalFrameDefault Python/generated_cases.c.h:5279
#16 0x55564b7d0a7b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:119
#17 0x55564b7d0a7b in _PyEval_Vector Python/ceval.c:1819
#18 0x55564b45b20d in _PyFunction_Vectorcall Objects/call.c:413
#19 0x55564b60196b in _PyObject_VectorcallTstate Include/internal/pycore_call.h:168
#20 0x55564b60196b in vectorcall_unbound Objects/typeobject.c:2716
#21 0x55564b60196b in slot_tp_richcompare Objects/typeobject.c:9812
#22 0x55564b561280 in do_richcompare Objects/object.c:927
#23 0x55564b561654 in PyObject_RichCompare Objects/object.c:976
#24 0x55564b4e66c9 in list_richcompare_impl Objects/listobject.c:3385
#25 0x55564b4e6bcb in list_richcompare Objects/listobject.c:3393
#26 0x7fd307a05a2b in internal_bisect_left Modules/_bisectmodule.c:288
#27 0x7fd307a063b6 in _bisect_insort_left_impl Modules/_bisectmodule.c:396
#28 0x7fd307a06a74 in _bisect_insort_left Modules/clinic/_bisectmodule.c.h:432
#29 0x55564b55224a in cfunction_vectorcall_FASTCALL_KEYWORDS Objects/methodobject.c:441
#30 0x55564b45bbb9 in _PyObject_VectorcallTstate Include/internal/pycore_call.h:168
#31 0x55564b45bd14 in PyObject_Vectorcall Objects/call.c:327
#32 0x55564b7988c4 in _PyEval_EvalFrameDefault Python/generated_cases.c.h:813
#33 0x55564b7d0a7b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:119
#34 0x55564b7d0a7b in _PyEval_Vector Python/ceval.c:1819
#35 0x55564b7d0c9c in PyEval_EvalCode Python/ceval.c:599
previously allocated by thread T0 here:
#0 0x7fd30aa1e887 in __interceptor_malloc ../../../../src/libsanitizer/asan/asan_malloc_linux.cpp:145
#1 0x55564b56956c in _PyMem_RawMalloc Objects/obmalloc.c:62
#2 0x55564b56889f in _PyMem_DebugRawAlloc Objects/obmalloc.c:2686
#3 0x55564b568907 in _PyMem_DebugRawMalloc Objects/obmalloc.c:2719
#4 0x55564b56b59f in _PyMem_DebugMalloc Objects/obmalloc.c:2876
#5 0x55564b59f323 in PyObject_Malloc Objects/obmalloc.c:1294
#6 0x55564b5e16bc in _PyObject_MallocWithType Include/internal/pycore_object_alloc.h:46
#7 0x55564b5e16bc in _PyType_AllocNoTrack Objects/typeobject.c:2187
#8 0x55564b5e1b9b in PyType_GenericAlloc Objects/typeobject.c:2216
#9 0x55564b5da5bf in object_new Objects/typeobject.c:6314
#10 0x55564b5e8851 in type_call Objects/typeobject.c:2131
#11 0x55564b45b5e7 in _PyObject_MakeTpCall Objects/call.c:242
#12 0x55564b45bce8 in _PyObject_VectorcallTstate Include/internal/pycore_call.h:166
#13 0x55564b45bd14 in PyObject_Vectorcall Objects/call.c:327
#14 0x55564b7988c4 in _PyEval_EvalFrameDefault Python/generated_cases.c.h:813
#15 0x55564b7d0a7b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:119
#16 0x55564b7d0a7b in _PyEval_Vector Python/ceval.c:1819
#17 0x55564b7d0c9c in PyEval_EvalCode Python/ceval.c:599
#18 0x55564b8e8c51 in run_eval_code_obj Python/pythonrun.c:1292
#19 0x55564b8ebb96 in run_mod Python/pythonrun.c:1377
#20 0x55564b8ec976 in pyrun_file Python/pythonrun.c:1210
#21 0x55564b8eee55 in _PyRun_SimpleFileObject Python/pythonrun.c:459
#22 0x55564b8ef349 in _PyRun_AnyFileObject Python/pythonrun.c:77
#23 0x55564b950718 in pymain_run_file_obj Modules/main.c:357
#24 0x55564b952fea in pymain_run_file Modules/main.c:376
#25 0x55564b953bfb in pymain_run_python Modules/main.c:639
#26 0x55564b953d8b in Py_RunMain Modules/main.c:718
#27 0x55564b953f72 in pymain_main Modules/main.c:748
#28 0x55564b9542ea in Py_BytesMain Modules/main.c:772
#29 0x55564b2bdb15 in main Programs/python.c:15
#30 0x7fd30a683d8f (/lib/x86_64-linux-gnu/libc.so.6+0x29d8f)
SUMMARY: AddressSanitizer: heap-use-after-free Include/object.h:249 in Py_TYPE
Shadow bytes around the buggy address:
0x0c267fffbf90: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
0x0c267fffbfa0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
0x0c267fffbfb0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
0x0c267fffbfc0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
0x0c267fffbfd0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
=>0x0c267fffbfe0: fa fa fa fa fa fa fa fa fd fd fd fd fd fd fd[fd]
0x0c267fffbff0: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
0x0c267fffc000: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
0x0c267fffc010: fd fd fd fd fa fa fa fa fa fa fa fa fa fa fa fa
0x0c267fffc020: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
0x0c267fffc030: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
Shadow byte legend (one shadow byte represents 8 application bytes):
Addressable: 00
Partially addressable: 01 02 03 04 05 06 07
Heap left redzone: fa
Freed heap region: fd
Stack left redzone: f1
Stack mid redzone: f2
Stack right redzone: f3
Stack after return: f5
Stack use after scope: f8
Global redzone: f9
Global init order: f6
Poisoned by user: f7
Container overflow: fc
Array cookie: ac
Intra object redzone: bb
ASan internal: fe
Left alloca redzone: ca
Right alloca redzone: cb
Shadow gap: cc
==148257==ABORTING
```
</details>
<details>
<summary> deque asan </summary>
```
=================================================================
==144863==ERROR: AddressSanitizer: heap-use-after-free on address 0x6130000290b8 at pc 0x55ced2414fe2 bp 0x7ffd0b9be680 sp 0x7ffd0b9be670
READ of size 8 at 0x6130000290b8 thread T0
#0 0x55ced2414fe1 in Py_TYPE Include/object.h:249
#1 0x55ced2414fe1 in list_richcompare_impl Objects/listobject.c:3338
#2 0x55ced2415bcb in list_richcompare Objects/listobject.c:3393
#3 0x55ced2490388 in do_richcompare Objects/object.c:933
#4 0x55ced2490654 in PyObject_RichCompare Objects/object.c:976
#5 0x55ced24156c9 in list_richcompare_impl Objects/listobject.c:3385
#6 0x55ced2415bcb in list_richcompare Objects/listobject.c:3393
#7 0x55ced2490280 in do_richcompare Objects/object.c:927
#8 0x55ced2490654 in PyObject_RichCompare Objects/object.c:976
#9 0x55ced2490782 in PyObject_RichCompareBool Objects/object.c:998
#10 0x55ced28eeca2 in deque_richcompare Modules/_collectionsmodule.c:1678
#11 0x55ced2490280 in do_richcompare Objects/object.c:927
#12 0x55ced2490654 in PyObject_RichCompare Objects/object.c:976
#13 0x55ced26d50e9 in _PyEval_EvalFrameDefault Python/generated_cases.c.h:2218
#14 0x55ced26ffa7b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:119
#15 0x55ced26ffa7b in _PyEval_Vector Python/ceval.c:1819
#16 0x55ced26ffc9c in PyEval_EvalCode Python/ceval.c:599
#17 0x55ced2817c51 in run_eval_code_obj Python/pythonrun.c:1292
#18 0x55ced281ab96 in run_mod Python/pythonrun.c:1377
#19 0x55ced281b976 in pyrun_file Python/pythonrun.c:1210
#20 0x55ced281de55 in _PyRun_SimpleFileObject Python/pythonrun.c:459
#21 0x55ced281e349 in _PyRun_AnyFileObject Python/pythonrun.c:77
#22 0x55ced287f718 in pymain_run_file_obj Modules/main.c:357
#23 0x55ced2881fea in pymain_run_file Modules/main.c:376
#24 0x55ced2882bfb in pymain_run_python Modules/main.c:639
#25 0x55ced2882d8b in Py_RunMain Modules/main.c:718
#26 0x55ced2882f72 in pymain_main Modules/main.c:748
#27 0x55ced28832ea in Py_BytesMain Modules/main.c:772
#28 0x55ced21ecb15 in main Programs/python.c:15
#29 0x7f384351dd8f (/lib/x86_64-linux-gnu/libc.so.6+0x29d8f)
#30 0x7f384351de3f in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x29e3f)
#31 0x55ced21eca44 in _start (/home/kcats/cpython/python+0x282a44)
0x6130000290b8 is located 56 bytes inside of 352-byte region [0x613000029080,0x6130000291e0)
freed by thread T0 here:
#0 0x7f38438b8537 in __interceptor_free ../../../../src/libsanitizer/asan/asan_malloc_linux.cpp:127
#1 0x55ced24979c5 in _PyMem_RawFree Objects/obmalloc.c:90
#2 0x55ced2499c2f in _PyMem_DebugRawFree Objects/obmalloc.c:2754
#3 0x55ced249a55d in _PyMem_DebugFree Objects/obmalloc.c:2891
#4 0x55ced24ce467 in PyObject_Free Objects/obmalloc.c:1323
#5 0x55ced2777285 in PyObject_GC_Del Python/gc.c:2123
#6 0x55ced24f3af8 in object_dealloc Objects/typeobject.c:6324
#7 0x55ced251b8e4 in subtype_dealloc Objects/typeobject.c:2534
#8 0x55ced248e3b1 in _Py_Dealloc Objects/object.c:2854
#9 0x55ced276e056 in Py_DECREF Include/refcount.h:351
#10 0x55ced276e056 in Py_XDECREF Include/refcount.h:459
#11 0x55ced276e056 in _PyFrame_ClearLocals Python/frame.c:104
#12 0x55ced276e21c in _PyFrame_ClearExceptCode Python/frame.c:129
#13 0x55ced26b09a6 in clear_thread_frame Python/ceval.c:1681
#14 0x55ced26b9486 in _PyEval_FrameClearAndPop Python/ceval.c:1708
#15 0x55ced26f2ea5 in _PyEval_EvalFrameDefault Python/generated_cases.c.h:5279
#16 0x55ced26ffa7b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:119
#17 0x55ced26ffa7b in _PyEval_Vector Python/ceval.c:1819
#18 0x55ced238a20d in _PyFunction_Vectorcall Objects/call.c:413
#19 0x55ced253096b in _PyObject_VectorcallTstate Include/internal/pycore_call.h:168
#20 0x55ced253096b in vectorcall_unbound Objects/typeobject.c:2716
#21 0x55ced253096b in slot_tp_richcompare Objects/typeobject.c:9812
#22 0x55ced2490280 in do_richcompare Objects/object.c:927
#23 0x55ced2490654 in PyObject_RichCompare Objects/object.c:976
#24 0x55ced24156c9 in list_richcompare_impl Objects/listobject.c:3385
#25 0x55ced2415bcb in list_richcompare Objects/listobject.c:3393
#26 0x55ced2490280 in do_richcompare Objects/object.c:927
#27 0x55ced2490654 in PyObject_RichCompare Objects/object.c:976
#28 0x55ced2490782 in PyObject_RichCompareBool Objects/object.c:998
#29 0x55ced28eeca2 in deque_richcompare Modules/_collectionsmodule.c:1678
#30 0x55ced2490280 in do_richcompare Objects/object.c:927
#31 0x55ced2490654 in PyObject_RichCompare Objects/object.c:976
#32 0x55ced26d50e9 in _PyEval_EvalFrameDefault Python/generated_cases.c.h:2218
#33 0x55ced26ffa7b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:119
#34 0x55ced26ffa7b in _PyEval_Vector Python/ceval.c:1819
#35 0x55ced26ffc9c in PyEval_EvalCode Python/ceval.c:599
previously allocated by thread T0 here:
#0 0x7f38438b8887 in __interceptor_malloc ../../../../src/libsanitizer/asan/asan_malloc_linux.cpp:145
#1 0x55ced249856c in _PyMem_RawMalloc Objects/obmalloc.c:62
#2 0x55ced249789f in _PyMem_DebugRawAlloc Objects/obmalloc.c:2686
#3 0x55ced2497907 in _PyMem_DebugRawMalloc Objects/obmalloc.c:2719
#4 0x55ced249a59f in _PyMem_DebugMalloc Objects/obmalloc.c:2876
#5 0x55ced24ce323 in PyObject_Malloc Objects/obmalloc.c:1294
#6 0x55ced25106bc in _PyObject_MallocWithType Include/internal/pycore_object_alloc.h:46
#7 0x55ced25106bc in _PyType_AllocNoTrack Objects/typeobject.c:2187
#8 0x55ced2510b9b in PyType_GenericAlloc Objects/typeobject.c:2216
#9 0x55ced25095bf in object_new Objects/typeobject.c:6314
#10 0x55ced2517851 in type_call Objects/typeobject.c:2131
#11 0x55ced238a5e7 in _PyObject_MakeTpCall Objects/call.c:242
#12 0x55ced238ace8 in _PyObject_VectorcallTstate Include/internal/pycore_call.h:166
#13 0x55ced238ad14 in PyObject_Vectorcall Objects/call.c:327
#14 0x55ced26c78c4 in _PyEval_EvalFrameDefault Python/generated_cases.c.h:813
#15 0x55ced26ffa7b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:119
#16 0x55ced26ffa7b in _PyEval_Vector Python/ceval.c:1819
#17 0x55ced26ffc9c in PyEval_EvalCode Python/ceval.c:599
#18 0x55ced2817c51 in run_eval_code_obj Python/pythonrun.c:1292
#19 0x55ced281ab96 in run_mod Python/pythonrun.c:1377
#20 0x55ced281b976 in pyrun_file Python/pythonrun.c:1210
#21 0x55ced281de55 in _PyRun_SimpleFileObject Python/pythonrun.c:459
#22 0x55ced281e349 in _PyRun_AnyFileObject Python/pythonrun.c:77
#23 0x55ced287f718 in pymain_run_file_obj Modules/main.c:357
#24 0x55ced2881fea in pymain_run_file Modules/main.c:376
#25 0x55ced2882bfb in pymain_run_python Modules/main.c:639
#26 0x55ced2882d8b in Py_RunMain Modules/main.c:718
#27 0x55ced2882f72 in pymain_main Modules/main.c:748
#28 0x55ced28832ea in Py_BytesMain Modules/main.c:772
#29 0x55ced21ecb15 in main Programs/python.c:15
#30 0x7f384351dd8f (/lib/x86_64-linux-gnu/libc.so.6+0x29d8f)
SUMMARY: AddressSanitizer: heap-use-after-free Include/object.h:249 in Py_TYPE
Shadow bytes around the buggy address:
0x0c267fffd1c0: 00 00 00 00 00 00 00 00 00 00 00 00 fa fa fa fa
0x0c267fffd1d0: fa fa fa fa fa fa fa fa fd fd fd fd fd fd fd fd
0x0c267fffd1e0: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
0x0c267fffd1f0: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
0x0c267fffd200: fd fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
=>0x0c267fffd210: fd fd fd fd fd fd fd[fd]fd fd fd fd fd fd fd fd
0x0c267fffd220: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
0x0c267fffd230: fd fd fd fd fd fd fd fd fd fd fd fd fa fa fa fa
0x0c267fffd240: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
0x0c267fffd250: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
0x0c267fffd260: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
Shadow byte legend (one shadow byte represents 8 application bytes):
Addressable: 00
Partially addressable: 01 02 03 04 05 06 07
Heap left redzone: fa
Freed heap region: fd
Stack left redzone: f1
Stack mid redzone: f2
Stack right redzone: f3
Stack after return: f5
Stack use after scope: f8
Global redzone: f9
Global init order: f6
Poisoned by user: f7
Container overflow: fc
Array cookie: ac
Intra object redzone: bb
ASan internal: fe
Left alloca redzone: ca
Right alloca redzone: cb
Shadow gap: cc
==144863==ABORTING
```
</details>
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Linux
### Output from running 'python -VV' on the command line:
Python 3.14.0a0 (heads/main:34f5ae69fe, Jun 9 2024, 21:27:54) [GCC 11.4.0]
<!-- gh-linked-prs -->
### Linked PRs
* gh-120303
* gh-120339
* gh-120340
<!-- /gh-linked-prs -->
| 141babad9b4eceb83371bf19ba3a36b50dd05250 | 9e9ee50421c857b443e2060274f17fb884d54473 |
python/cpython | python__cpython-120301 | # ``fcntlmodule.fcntl_ioctl_impl`` has outdated format string
# Bug report
### Bug description:
After 92fab3356f4c61d4c73606e4fae705c6d8f6213b, the fcntl.ioctl `code` argument has type `unsigned long`, so
this code should be updated:
from
```c
if (PySys_Audit("fcntl.ioctl", "iIO", fd, code,
ob_arg ? ob_arg : Py_None) < 0) {
return NULL;
}
```
to
```c
if (PySys_Audit("fcntl.ioctl", "ikO", fd, code,
ob_arg ? ob_arg : Py_None) < 0) {
return NULL;
}
```
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-120301
<!-- /gh-linked-prs -->
| e5a7bc6f2eb9a3875063423caa67bb0ffcc3a6b8 | 0ae8579b85f9b0cd3f287082ad6e194bdb025d88 |
python/cpython | python__cpython-120292 | # [3.13] shell version of `python-config` started using bashisms
# Bug report
### Bug description:
Since de2a73dc4649b110351fce789de0abb14c460b97, `python-config.sh.in` uses bashisms but declares `#!/bin/sh` as its shell. As a result, it now throws errors on systems using non-bash shells:
```
$ python3.13-config
/usr/bin/python3.13-config: 8: [[: not found
Usage: /usr/bin/python3.13-config --prefix|--exec-prefix|--includes|--libs|--cflags|--ldflags|--extension-suffix|--help|--abiflags|--configdir|--embed
```
FWICS, there's no reason to use `[[ ... ]]` over plain shell `[ ... ]` there.
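For reference, a hedged sketch of the POSIX-compatible form (the variable name here is illustrative, not the script's actual code):

```shell
#!/bin/sh
# bash-only form that breaks under dash/ash:  if [[ "$arg" == "--help" ]]
# POSIX form that any /bin/sh understands:
arg="--help"
if [ "$arg" = "--help" ] ; then
    echo "matched"
fi
```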
CC @kiendang
### CPython versions tested on:
3.13, CPython main branch
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-120292
* gh-120341
<!-- /gh-linked-prs -->
| 7d2447137e117ea9a6ee1493bce0b071c76b1bd7 | 141babad9b4eceb83371bf19ba3a36b50dd05250 |
python/cpython | python__cpython-120297 | # Use After Free in initContext(_lsprof.c)
# Crash report
### What happened?
Version
Python 3.14.0a0 (heads/main:34f5ae69fe, Jun 9 2024, 21:27:54) [GCC 11.4.0]
bisect from commit https://github.com/python/cpython/pull/8378/commits/2158977f913f68ff4df15ca3e6e8fb19c6ef0cf2
### Root Cause
The `call_timer` function can execute arbitrary code from `pObj`, which is initialized by the user. If that code calls `_lsprof_type_Profiler_post1.disable()` in Python, it hits `profiler_disable` in C, and `self` (the `ProfilerContext`) is freed. Consequently, after `call_timer` returns, the write to `self->t0` is a use-after-free.
```C
static void
initContext(ProfilerObject *pObj, ProfilerContext *self, ProfilerEntry *entry)
{
self->ctxEntry = entry;
self->subt = 0;
self->previous = pObj->currentProfilerContext;
pObj->currentProfilerContext = self;
++entry->recursionLevel;
if ((pObj->flags & POF_SUBCALLS) && self->previous) {
/* find or create an entry for me in my caller's entry */
ProfilerEntry *caller = self->previous->ctxEntry;
ProfilerSubEntry *subentry = getSubEntry(pObj, caller, entry);
if (subentry == NULL)
subentry = newSubEntry(pObj, caller, entry);
if (subentry)
++subentry->recursionLevel;
}
self->t0 = call_timer(pObj); // <-- execute arbitrary code
}
```
### POC
```python
import _lsprof
class evil():
def __call__(self):
_lsprof_type_Profiler_post1.disable()
return True
_lsprof_type_Profiler_post1 = _lsprof.Profiler(evil())
_lsprof_type_Profiler_post1.enable()
print ("dummy")
```
### ASAN
<details>
<summary><b>asan</b></summary>
```
=================================================================
==21434==ERROR: AddressSanitizer: heap-use-after-free on address 0x6060000626d0 at pc 0x7f2a7eaa4f22 bp 0x7ffe7a876d30 sp 0x7ffe7a876d20
WRITE of size 8 at 0x6060000626d0 thread T0
#0 0x7f2a7eaa4f21 in initContext Modules/_lsprof.c:310
#1 0x7f2a7eaa684f in ptrace_enter_call Modules/_lsprof.c:379
#2 0x7f2a7eaa8098 in ccall_callback Modules/_lsprof.c:653
#3 0x563c6a7ce4da in cfunction_vectorcall_FASTCALL Objects/methodobject.c:425
#4 0x563c6ab10cb2 in _PyObject_VectorcallTstate Include/internal/pycore_call.h:168
#5 0x563c6ab10cb2 in call_one_instrument Python/instrumentation.c:907
#6 0x563c6ab1258b in call_instrumentation_vector Python/instrumentation.c:1095
#7 0x563c6ab1663d in _Py_call_instrumentation_2args Python/instrumentation.c:1150
#8 0x563c6aa2b5c5 in _PyEval_EvalFrameDefault Python/generated_cases.c.h:3229
#9 0x563c6aa4ca7b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:119
#10 0x563c6aa4ca7b in _PyEval_Vector Python/ceval.c:1819
#11 0x563c6aa4cc9c in PyEval_EvalCode Python/ceval.c:599
#12 0x563c6ab64c51 in run_eval_code_obj Python/pythonrun.c:1292
#13 0x563c6ab67b96 in run_mod Python/pythonrun.c:1377
#14 0x563c6ab68976 in pyrun_file Python/pythonrun.c:1210
#15 0x563c6ab6ae55 in _PyRun_SimpleFileObject Python/pythonrun.c:459
#16 0x563c6ab6b349 in _PyRun_AnyFileObject Python/pythonrun.c:77
#17 0x563c6abcc718 in pymain_run_file_obj Modules/main.c:357
#18 0x563c6abcefea in pymain_run_file Modules/main.c:376
#19 0x563c6abcfbfb in pymain_run_python Modules/main.c:639
#20 0x563c6abcfd8b in Py_RunMain Modules/main.c:718
#21 0x563c6abcff72 in pymain_main Modules/main.c:748
#22 0x563c6abd02ea in Py_BytesMain Modules/main.c:772
#23 0x563c6a539b15 in main Programs/python.c:15
#24 0x7f2a81d33d8f (/lib/x86_64-linux-gnu/libc.so.6+0x29d8f)
#25 0x7f2a81d33e3f in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x29e3f)
#26 0x563c6a539a44 in _start (/home/kcats/cpython/python+0x282a44)
0x6060000626d0 is located 16 bytes inside of 56-byte region [0x6060000626c0,0x6060000626f8)
freed by thread T0 here:
#0 0x7f2a820ce537 in __interceptor_free ../../../../src/libsanitizer/asan/asan_malloc_linux.cpp:127
#1 0x563c6a7e49c5 in _PyMem_RawFree Objects/obmalloc.c:90
#2 0x563c6a7e6c2f in _PyMem_DebugRawFree Objects/obmalloc.c:2754
#3 0x563c6a7e755d in _PyMem_DebugFree Objects/obmalloc.c:2891
#4 0x563c6a81abad in PyMem_Free Objects/obmalloc.c:1010
#5 0x7f2a7eaa4c21 in flush_unmatched Modules/_lsprof.c:766
#6 0x7f2a7eaa6d4f in profiler_disable Modules/_lsprof.c:815
#7 0x563c6a70183f in method_vectorcall_NOARGS Objects/descrobject.c:447
#8 0x563c6a6d7bb9 in _PyObject_VectorcallTstate Include/internal/pycore_call.h:168
#9 0x563c6a6d7d14 in PyObject_Vectorcall Objects/call.c:327
#10 0x563c6aa148c4 in _PyEval_EvalFrameDefault Python/generated_cases.c.h:813
#11 0x563c6aa4ca7b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:119
#12 0x563c6aa4ca7b in _PyEval_Vector Python/ceval.c:1819
#13 0x563c6a6d720d in _PyFunction_Vectorcall Objects/call.c:413
#14 0x563c6a6dbe79 in _PyObject_VectorcallDictTstate Objects/call.c:135
#15 0x563c6a6dc351 in _PyObject_Call_Prepend Objects/call.c:504
#16 0x563c6a876cc8 in slot_tp_call Objects/typeobject.c:9668
#17 0x563c6a6d75e7 in _PyObject_MakeTpCall Objects/call.c:242
#18 0x7f2a7eaa452d in _PyObject_VectorcallTstate Include/internal/pycore_call.h:166
#19 0x7f2a7eaa452d in _PyObject_CallNoArgs Include/internal/pycore_call.h:184
#20 0x7f2a7eaa452d in CallExternalTimer Modules/_lsprof.c:90
#21 0x7f2a7eaa4e1c in call_timer Modules/_lsprof.c:121
#22 0x7f2a7eaa4e1c in initContext Modules/_lsprof.c:310
#23 0x7f2a7eaa684f in ptrace_enter_call Modules/_lsprof.c:379
#24 0x7f2a7eaa8098 in ccall_callback Modules/_lsprof.c:653
#25 0x563c6a7ce4da in cfunction_vectorcall_FASTCALL Objects/methodobject.c:425
#26 0x563c6ab10cb2 in _PyObject_VectorcallTstate Include/internal/pycore_call.h:168
#27 0x563c6ab10cb2 in call_one_instrument Python/instrumentation.c:907
#28 0x563c6ab1258b in call_instrumentation_vector Python/instrumentation.c:1095
#29 0x563c6ab1663d in _Py_call_instrumentation_2args Python/instrumentation.c:1150
#30 0x563c6aa2b5c5 in _PyEval_EvalFrameDefault Python/generated_cases.c.h:3229
#31 0x563c6aa4ca7b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:119
#32 0x563c6aa4ca7b in _PyEval_Vector Python/ceval.c:1819
#33 0x563c6aa4cc9c in PyEval_EvalCode Python/ceval.c:599
#34 0x563c6ab64c51 in run_eval_code_obj Python/pythonrun.c:1292
#35 0x563c6ab67b96 in run_mod Python/pythonrun.c:1377
previously allocated by thread T0 here:
#0 0x7f2a820ce887 in __interceptor_malloc ../../../../src/libsanitizer/asan/asan_malloc_linux.cpp:145
#1 0x563c6a7e556c in _PyMem_RawMalloc Objects/obmalloc.c:62
#2 0x563c6a7e489f in _PyMem_DebugRawAlloc Objects/obmalloc.c:2686
#3 0x563c6a7e4907 in _PyMem_DebugRawMalloc Objects/obmalloc.c:2719
#4 0x563c6a7e759f in _PyMem_DebugMalloc Objects/obmalloc.c:2876
#5 0x563c6a81aa69 in PyMem_Malloc Objects/obmalloc.c:981
#6 0x7f2a7eaa6892 in ptrace_enter_call Modules/_lsprof.c:373
#7 0x7f2a7eaa8098 in ccall_callback Modules/_lsprof.c:653
#8 0x563c6a7ce4da in cfunction_vectorcall_FASTCALL Objects/methodobject.c:425
#9 0x563c6ab10cb2 in _PyObject_VectorcallTstate Include/internal/pycore_call.h:168
#10 0x563c6ab10cb2 in call_one_instrument Python/instrumentation.c:907
#11 0x563c6ab1258b in call_instrumentation_vector Python/instrumentation.c:1095
#12 0x563c6ab1663d in _Py_call_instrumentation_2args Python/instrumentation.c:1150
#13 0x563c6aa2b5c5 in _PyEval_EvalFrameDefault Python/generated_cases.c.h:3229
#14 0x563c6aa4ca7b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:119
#15 0x563c6aa4ca7b in _PyEval_Vector Python/ceval.c:1819
#16 0x563c6aa4cc9c in PyEval_EvalCode Python/ceval.c:599
#17 0x563c6ab64c51 in run_eval_code_obj Python/pythonrun.c:1292
#18 0x563c6ab67b96 in run_mod Python/pythonrun.c:1377
#19 0x563c6ab68976 in pyrun_file Python/pythonrun.c:1210
#20 0x563c6ab6ae55 in _PyRun_SimpleFileObject Python/pythonrun.c:459
#21 0x563c6ab6b349 in _PyRun_AnyFileObject Python/pythonrun.c:77
#22 0x563c6abcc718 in pymain_run_file_obj Modules/main.c:357
#23 0x563c6abcefea in pymain_run_file Modules/main.c:376
#24 0x563c6abcfbfb in pymain_run_python Modules/main.c:639
#25 0x563c6abcfd8b in Py_RunMain Modules/main.c:718
#26 0x563c6abcff72 in pymain_main Modules/main.c:748
#27 0x563c6abd02ea in Py_BytesMain Modules/main.c:772
#28 0x563c6a539b15 in main Programs/python.c:15
#29 0x7f2a81d33d8f (/lib/x86_64-linux-gnu/libc.so.6+0x29d8f)
SUMMARY: AddressSanitizer: heap-use-after-free Modules/_lsprof.c:310 in initContext
Shadow bytes around the buggy address:
0x0c0c80004480: fa fa fa fa fd fd fd fd fd fd fd fa fa fa fa fa
0x0c0c80004490: 00 00 00 00 00 00 00 fa fa fa fa fa fd fd fd fd
0x0c0c800044a0: fd fd fd fa fa fa fa fa fd fd fd fd fd fd fd fa
0x0c0c800044b0: fa fa fa fa fd fd fd fd fd fd fd fa fa fa fa fa
0x0c0c800044c0: fd fd fd fd fd fd fd fa fa fa fa fa fd fd fd fd
=>0x0c0c800044d0: fd fd fd fa fa fa fa fa fd fd[fd]fd fd fd fd fa
0x0c0c800044e0: fa fa fa fa fd fd fd fd fd fd fd fa fa fa fa fa
0x0c0c800044f0: fd fd fd fd fd fd fd fa fa fa fa fa fd fd fd fd
0x0c0c80004500: fd fd fd fa fa fa fa fa fd fd fd fd fd fd fd fa
0x0c0c80004510: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
0x0c0c80004520: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
Shadow byte legend (one shadow byte represents 8 application bytes):
Addressable: 00
Partially addressable: 01 02 03 04 05 06 07
Heap left redzone: fa
Freed heap region: fd
Stack left redzone: f1
Stack mid redzone: f2
Stack right redzone: f3
Stack after return: f5
Stack use after scope: f8
Global redzone: f9
Global init order: f6
Poisoned by user: f7
Container overflow: fc
Array cookie: ac
Intra object redzone: bb
ASan internal: fe
Left alloca redzone: ca
Right alloca redzone: cb
Shadow gap: cc
==21434==ABORTING
```
</details>
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Linux
### Output from running 'python -VV' on the command line:
Python 3.14.0a0 (heads/main:34f5ae69fe, Jun 9 2024, 21:27:54) [GCC 11.4.0]
<!-- gh-linked-prs -->
### Linked PRs
* gh-120297
* gh-121984
* gh-121989
* gh-121998
* gh-122000
* gh-122001
<!-- /gh-linked-prs -->
| 1ab17782832bb1b6baa915627aead3e3516a0894 | 7431c3799efbd06ed03ee70b64420f45e83b3667 |
python/cpython | python__cpython-120277 | # Fix incorrect `email.header.Header` `maxlinelen` default.
# Documentation
Update `email.header.Header` documentation for `maxlinelen` default from 76 to 78.
<!-- gh-linked-prs -->
### Linked PRs
* gh-120277
* gh-120278
* gh-120279
<!-- /gh-linked-prs -->
| 7c016deae62308dd1b4e2767fc6abf04857c7843 | 5d59b870effa0f576acf7264cfcbfca2b36e34e3 |
python/cpython | python__cpython-120269 | # ``date.fromtimestamp(None)`` behaves differently between ``_pydatetime`` and ``_datetime``
# Bug report
### Bug description:
```python
./python
Python 3.14.0a0 (heads/main-dirty:55402d3232, Jun 8 2024, 11:03:56) [GCC 13.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import _datetime
>>> _datetime.date.fromtimestamp(None)
Traceback (most recent call last):
File "<python-input-1>", line 1, in <module>
_datetime.date.fromtimestamp(None)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^
TypeError: 'NoneType' object cannot be interpreted as an integer
>>> import _pydatetime
>>> _pydatetime.date.fromtimestamp(None)
datetime.date(2024, 6, 8)
>>>
```
This happens because `_pydatetime.date.fromtimestamp` uses `time.localtime`, which accepts ``None`` as a valid argument, while `_datetime.date.fromtimestamp` tries to convert ``None`` to an integer.
I would prefer to change the Python implementation, as the documentation for ``datetime.date.fromtimestamp`` doesn't mention that it can accept ``None`` as a valid argument.
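The divergence is easy to observe in isolation: `time.localtime` treats ``None`` the same as no argument and falls back to the current time, which is why the pure-Python path silently succeeds.

```python
import time

# time.localtime(None) behaves like time.localtime(): it falls back to the
# current time instead of rejecting None, so the pure-Python
# date.fromtimestamp(None) quietly returns today's date.
print(isinstance(time.localtime(None), time.struct_time))  # → True
```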
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-120269
* gh-120282
* gh-120283
<!-- /gh-linked-prs -->
| 34f5ae69fe9ab0f5b23311d5c396d0cbb5902913 | 7c016deae62308dd1b4e2767fc6abf04857c7843 |
python/cpython | python__cpython-120255 | # Add a `commands` argument to `pdb.set_trace` so the user can feed commands from source code
# Feature or enhancement
### Proposal:
I propose to add a `commands` argument to `pdb.set_trace` and `pdb.Pdb.set_trace`. The `commands` argument should be a list of pdb commands that will be executed the first time the debugger is brought up - very similar to `.pdbrc`.
```python
import pdb
def f():
x = 1
# should print 1 and continue
pdb.set_trace(commands=['p x', 'c'])
f()
```
There are many use cases
1. Some users use `.pdbrc` as debugging scripts, and sometimes it's not the best approach because it applies to the whole directory. Being able to feed pdb commands can save users a lot of time re-typing all the commands after they try some fix.
2. It would be much easier to serve a repro regarding pdb on a silver platter. When users report bugs about pdb, they have to give both the source code and the commands they used to trigger the bug. With this argument, the user can provide a simple one-piece script for the minimal repro.
3. For cases where `stdin` is not available, or not super convenient, this could help. For example, in testing. Also, I talked with the `pyodide` maintainer about supporting `pdb` in the web browser, and one of the issues was the lack of `stdin`. They successfully made it work with some workaround, but this could help that case.
4. It's useful for remote help. It's not uncommon that a more senior developer needs to help a newcomer remotely. They could send a fully ready piece of code that contains the commands and ask the person to simply run it and report the result.
This feature is also trivial to implement and easy to maintain - all the pieces are already there.
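One of the pieces already there is `Pdb.rcLines`, the list that `.pdbrc` contents are loaded into and that is drained when the prompt first comes up. A hedged sketch of the idea, not the actual patch (`set_trace_with_commands` is a hypothetical name):

```python
import pdb

def set_trace_with_commands(commands=None):
    """Hypothetical helper: queue pdb commands to run at the first prompt."""
    p = pdb.Pdb()
    if commands:
        # rcLines is the same queue that .pdbrc contents are loaded into
        p.rcLines.extend(commands)
    return p  # the caller would then invoke p.set_trace() on the right frame
```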
### Has this already been discussed elsewhere?
This is a minor feature, which does not need previous discussion elsewhere
### Links to previous discussion of this feature:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-120255
<!-- /gh-linked-prs -->
| af8403a58dbe45130400a133f756cbf53c5f1d7e | fc9e6bf53d1c9ce2b5f802864e0da265a77c111f |
python/cpython | python__cpython-120245 | # `re.sub()` leaks references
# Bug report
### Bug description:
```
>python_d -X showrefcount -c "import re; re.sub(r'()', r'\1', '')"
[7140 refs, 4669 blocks]
```
[`_strptime`](https://github.com/python/cpython/blob/e6076d1e1303c3cc14bc02baf607535af2cf1501/Lib/_strptime.py#L250-#L253) is a module affected by this.
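On a debug build, `sys.gettotalrefcount()` makes the leak directly measurable; a hedged helper for checking (it returns `None` on release builds, where that counter doesn't exist):

```python
import re
import sys

def refcount_delta(func, warmup=5, n=100):
    """Total-refcount growth across n calls; None on non-debug builds."""
    if not hasattr(sys, 'gettotalrefcount'):
        return None
    for _ in range(warmup):  # let caches (e.g. compiled patterns) settle
        func()
    before = sys.gettotalrefcount()
    for _ in range(n):
        func()
    return sys.gettotalrefcount() - before

# On an affected debug build the delta grows with n; on a fixed build it stays near 0.
delta = refcount_delta(lambda: re.sub(r'()', r'\1', ''))
```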
### CPython versions tested on:
3.10, 3.11, 3.12, 3.13, CPython main branch
### Operating systems tested on:
Windows
<!-- gh-linked-prs -->
### Linked PRs
* gh-120245
* gh-120264
* gh-120265
<!-- /gh-linked-prs -->
| 38a25e9560cf0ff0b80d9e90bce793ff24c6e027 | 55402d3232ca400ebafe4fe3bd70f252304ebe07 |
python/cpython | python__cpython-120243 | # `test_datetime` does not respect custom `setUpClass` and `tearDownClass`
# Bug report
Example test case that can be possibly added to `test_datetime.py`:
```python
class TestExample(unittest.TestCase):
@classmethod
def setUpClass(cls) -> None:
cls.value_to_test = 1
return super().setUpClass()
def test_example_bug(self):
self.assertEqual(self.value_to_test, 1)
```
`./python.exe -m test test_datetime -v -m test_example_bug` outputs:
```
test_example_bug (test.datetimetester.TestExample_Pure.test_example_bug) ... ERROR
test_example_bug (test.datetimetester.TestExample_Fast.test_example_bug) ... ERROR
======================================================================
ERROR: test_example_bug (test.datetimetester.TestExample_Pure.test_example_bug)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/Users/sobolev/Desktop/cpython2/Lib/test/datetimetester.py", line 77, in test_example_bug
self.assertEqual(self.value_to_test, 1)
^^^^^^^^^^^^^^^^^^
AttributeError: 'TestExample_Pure' object has no attribute 'value_to_test'
======================================================================
ERROR: test_example_bug (test.datetimetester.TestExample_Fast.test_example_bug)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/Users/sobolev/Desktop/cpython2/Lib/test/datetimetester.py", line 77, in test_example_bug
self.assertEqual(self.value_to_test, 1)
^^^^^^^^^^^^^^^^^^
AttributeError: 'TestExample_Fast' object has no attribute 'value_to_test'
----------------------------------------------------------------------
Ran 2 tests in 0.003s
FAILED (errors=2)
test test_datetime failed
test_datetime failed (2 errors)
== Tests result: FAILURE ==
1 test failed:
test_datetime
Total duration: 208 ms
Total tests: run=2 (filtered)
Total test files: run=1/1 (filtered) failed=1
Result: FAILURE
```
This happens because of these lines: https://github.com/python/cpython/blob/e6076d1e1303c3cc14bc02baf607535af2cf1501/Lib/test/test_datetime.py#L39-L55
I proposed this fix: https://github.com/python/cpython/pull/119675/files#diff-d4ea73ff6f3d1428ce4536793404e64ce7dbc1f6d0afe22f707ee0a24777afea but it was not merged.
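The failure mode can be reproduced outside test_datetime: the suite rebuilds each TestCase class, and the rebuild loses the `setUpClass`/`tearDownClass` classmethods. A hedged sketch of a rebuild that keeps them, by copying the full `__dict__` (names here are illustrative, not the proposed patch):

```python
import unittest

def clone_testcase(cls, suffix):
    """Rebuild a TestCase class; copying the full __dict__ preserves
    classmethods such as setUpClass/tearDownClass."""
    return type(cls.__name__ + suffix, cls.__bases__, dict(cls.__dict__))

class Example(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        cls.value_to_test = 1

    def test_example(self):
        self.assertEqual(self.value_to_test, 1)

Cloned = clone_testcase(Example, '_Copy')
result = unittest.TestResult()
unittest.defaultTestLoader.loadTestsFromTestCase(Cloned).run(result)
print(result.wasSuccessful())  # → True
```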
Refs https://github.com/python/cpython/pull/119675
Refs https://github.com/python/cpython/issues/119659
<!-- gh-linked-prs -->
### Linked PRs
* gh-120243
* gh-120259
* gh-120260
<!-- /gh-linked-prs -->
| 95f4db88d5ab7d900f05d0418b2a2e77bf9ff126 | 4fc82b6d3b99f873179937215833e7a573ca7876 |
python/cpython | python__cpython-120227 | # test_sendfile_close_peer_in_the_middle_of_receiving fails with Linux 6.10 release candidates and non-4KiB page size
# Bug report
### Bug description:
With Linux-6.10-rc2 configured to use 16KiB page:
```shellsession
$ python3 -m test test.test_asyncio.test_sendfile -m test_sendfile_close_peer_in_the_middle_of_receiving -v
== CPython 3.12.4 (main, Jun 7 2024, 14:39:05) [GCC 14.1.0]
== Linux-6.10.0-rc2+-loongarch64-with-glibc2.39 little-endian
== Python build: release shared LTO+PGO
== cwd: /tmp/test_python_worker_1673æ
== CPU count: 8
== encodings: locale=UTF-8 FS=utf-8
== resources: all test resources are disabled, use -u option to unskip tests
Using random seed: 3315485672
0:00:00 load avg: 0.03 Run 1 test sequentially
0:00:00 load avg: 0.03 [1/1] test.test_asyncio.test_sendfile
test_sendfile_close_peer_in_the_middle_of_receiving (test.test_asyncio.test_sendfile.EPollEventLoopTests.test_sendfile_close_peer_in_the_middle_of_receiving) ... FAIL
test_sendfile_close_peer_in_the_middle_of_receiving (test.test_asyncio.test_sendfile.PollEventLoopTests.test_sendfile_close_peer_in_the_middle_of_receiving) ... FAIL
test_sendfile_close_peer_in_the_middle_of_receiving (test.test_asyncio.test_sendfile.SelectEventLoopTests.test_sendfile_close_peer_in_the_middle_of_receiving) ... FAIL
======================================================================
FAIL: test_sendfile_close_peer_in_the_middle_of_receiving (test.test_asyncio.test_sendfile.EPollEventLoopTests.test_sendfile_close_peer_in_the_middle_of_receiving)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/usr/lib/python3.12/test/test_asyncio/test_sendfile.py", line 466, in test_sendfile_close_peer_in_the_middle_of_receiving
with self.assertRaises(ConnectionError):
AssertionError: ConnectionError not raised
======================================================================
FAIL: test_sendfile_close_peer_in_the_middle_of_receiving (test.test_asyncio.test_sendfile.PollEventLoopTests.test_sendfile_close_peer_in_the_middle_of_receiving)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/usr/lib/python3.12/test/test_asyncio/test_sendfile.py", line 466, in test_sendfile_close_peer_in_the_middle_of_receiving
with self.assertRaises(ConnectionError):
AssertionError: ConnectionError not raised
======================================================================
FAIL: test_sendfile_close_peer_in_the_middle_of_receiving (test.test_asyncio.test_sendfile.SelectEventLoopTests.test_sendfile_close_peer_in_the_middle_of_receiving)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/usr/lib/python3.12/test/test_asyncio/test_sendfile.py", line 466, in test_sendfile_close_peer_in_the_middle_of_receiving
with self.assertRaises(ConnectionError):
AssertionError: ConnectionError not raised
----------------------------------------------------------------------
Ran 3 tests in 0.028s
FAILED (failures=3)
test test.test_asyncio.test_sendfile failed
test.test_asyncio.test_sendfile failed (3 failures)
== Tests result: FAILURE ==
1 test failed:
test.test_asyncio.test_sendfile
Total duration: 88 ms
Total tests: run=3 (filtered) failures=3
Total test files: run=1/1 (filtered) failed=1
Result: FAILURE
```
I have to raise DATA to 272KiB to make the test pass. On an AArch64 system where I configured the kernel to use 64KiB pages, I have to raise it to 1088KiB.
Bisection shows https://github.com/torvalds/linux/commit/8ee602c635206ed012f979370094015857c02359 is the first commit triggering the issue but frankly I've no idea why.
### CPython versions tested on:
3.12, CPython main branch
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-120227
* gh-123421
* gh-123422
<!-- /gh-linked-prs -->
| a7584245661102a5768c643fbd7db8395fd3c90e | 10fb1b8f36ab2fc3d2fe7392d5735dd19c5e2365 |
python/cpython | python__cpython-120235 | # flowgraph.c: push_cold_blocks_to_end: Assertion `prev_instr' failed
# Bug report
### Bug description:
The following code causes a crash with cpython 3.13 (56a7e0483436d1ebd2af97c02defe0e67c4bb495) and with the current main branch (d68a22e7a68ae09f7db61d5a1a3bd9c0360cf3ee)
```python
async def name_4():
match b'':
case True:
pass
case name_5 if f'e':
{name_3: name_4 async for name_2 in name_5}
case []:
pass
[[]]
```
output (Python 3.13.0b2+):
```python
python: Python/flowgraph.c:2282: push_cold_blocks_to_end: Assertion `prev_instr' failed.
```
I bisected the problem down to this commit 2091fb2a85c1aa2d9b22c02736b07831bd875c2a.
@iritkatriel can you take a look at it?
### CPython versions tested on:
3.13, CPython main branch
### Operating systems tested on:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-120235
* gh-120249
<!-- /gh-linked-prs -->
| 4fc82b6d3b99f873179937215833e7a573ca7876 | e6076d1e1303c3cc14bc02baf607535af2cf1501 |
python/cpython | python__cpython-120354 | # Overriding SIGINT doesn't work as expected in the new REPL
# Bug report
In the old REPL, overriding SIGINT and pressing Ctrl-C works as expected:
```
❯ python
Python 3.11.6 (main, Nov 3 2023, 17:05:41) [Clang 15.0.0 (clang-1500.0.40.1)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import signal
>>> signal.signal(signal.SIGINT, lambda *x: print("NOOOO"))
<built-in function default_int_handler>
>>> NOOOO
NOOOO
```
but in the new REPL it doesn't trigger the signal handler:
```
❯ ./python.exe
Python 3.14.0a0 (heads/more_offsets-dirty:9403c2cf58, Jun 6 2024, 12:10:11) [Clang 15.0.0 (clang-1500.3.9.4)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import signal
>>> signal.signal(signal.SIGINT, lambda *x: print("NOOOO"))
<built-in function default_int_handler>
>>>
KeyboardInterrupt
>>>
```
<!-- gh-linked-prs -->
### Linked PRs
* gh-120354
* gh-120368
* gh-123795
* gh-123799
<!-- /gh-linked-prs -->
| 34e4d3287e724c065cc07b04a1ee8715817db284 | 203565b2f9c74656ba519780049b46d4e5afcba1 |
python/cpython | python__cpython-120223 | # Tkinter: emit deprecation warning for trace_variable() etc
The `Variable` methods `trace_variable()`, `trace_vdelete()` and `trace_vinfo()` wrap deprecated Tcl commands which were deleted in Tcl 9.0. Modern replacements `trace_add()`, `trace_remove()`, and `trace_info()` were added in Python 3.6 (#66313). They wrap Tcl commands supported since Tcl 8.4 or older, i.e. in all supported Tcl versions.
It is time to add runtime warnings for the old methods.
<!-- gh-linked-prs -->
### Linked PRs
* gh-120223
<!-- /gh-linked-prs -->
| c46635aa5a20fc1b4c5e85370fa0fa2303c47c14 | 814ca116d54f40d6958a623deb614c3e3734e237 |
python/cpython | python__cpython-120213 | # Fix tkinter.ttk with Tcl/Tk 9.0
There are two issues in `tkinter.ttk` with Tcl/Tk 9.0.
* Using the Variable methods trace_variable() and trace_vdelete(). They wrap deprecated Tcl commands which were removed in Tcl 9.0. New methods added in #66313 should be used instead (added in Python 3.6 and supported in Tcl 8.4 or older).
* Combobox.current() fails because the underlying Tk command now returns an empty string instead of -1 for an empty combobox. It would be right to translate it to `None`, but -1 is more backward compatible. In future we can change this method to return `None`.
<!-- gh-linked-prs -->
### Linked PRs
* gh-120213
* gh-120215
* gh-120216
<!-- /gh-linked-prs -->
| d68a22e7a68ae09f7db61d5a1a3bd9c0360cf3ee | 6646a9da26d12fc54263b22dd2916a2f710f1db7 |
python/cpython | python__cpython-120214 | # inspect.iscoroutinefunction(inspect) returns True
# Bug report
### Bug description:
I would expect `inspect.iscoroutinefunction` to only return True for coroutine functions and those functions decorated with `inspect.markcoroutinefunction`, but it also returns True for the `inspect` module object itself.
```
Python 3.12.3 (main, Apr 23 2024, 09:16:07) [GCC 13.2.1 20240417] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import inspect
>>> inspect.iscoroutinefunction(inspect)
True
```
This is a regression - in 3.11 it returned False.
The issue seems to be that `iscoroutinefunction(obj)` checks if `obj._is_coroutine_marker is inspect._is_coroutine_marker`, and this check passes for obj being the `inspect` module.
I tested 3.12.3 but I can see that the implementation of `iscoroutinefunction` hasn't changed between 3.12.3 and the main branch ([GitHub link](https://github.com/python/cpython/blob/47816f465e833a5257a82b759b1081e06381e528/Lib/inspect.py#L405-L429), in particular `inspect.py` line 412), so I believe the issue is still present in tip of main.
The bug was triggered in our proprietary codebase in a test file that defines a number of tests as coroutine functions and then uses `checks = {k: v for k, v in globals().items() if inspect.iscoroutinefunction(v)}` to iterate over all tests. This broke when updating from 3.11 to 3.12.
I think `inspect.iscoroutinefunction(inspect)` returning True is quite surprising behavior so I would propose changing inspect.py so it doesn't use the same attribute name `_is_coroutine_marker` as the global variable `_is_coroutine_marker`, perhaps by renaming the global to `_is_coroutine_marker_object` or something.
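The identity check fails open for any module that stores the sentinel as a global, because a module's globals are also its attributes. A minimal reproduction of the pattern (generic names, not inspect's private ones):

```python
import types

# A sentinel stored as a module global is, by definition, also an attribute
# of the module object itself - so `getattr(obj, '_marker') is mod._marker`
# holds trivially when obj *is* the module.
mod = types.ModuleType('m')
mod._marker = object()

def looks_marked(obj):
    return getattr(obj, '_marker', None) is mod._marker

print(looks_marked(mod))  # → True, though mod was never "marked" on purpose
```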
### CPython versions tested on:
3.12
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-120214
* gh-120237
* gh-120239
<!-- /gh-linked-prs -->
| 10fb1b8f36ab2fc3d2fe7392d5735dd19c5e2365 | 9d6604222e9ef4e136ee9ccfa2d4d5ff9feee976 |
python/cpython | python__cpython-120195 | # Read and write of __class__ in two threads causes crash
# Crash report
### What happened?
See https://github.com/python/cpython/pull/120195 for crashing CI.
Apply diff from there and run
```
python -m test test_free_threading -m test_type
```
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
_No response_
### Output from running 'python -VV' on the command line:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-120195
* gh-120366
* gh-120393
* gh-120672
* gh-121591
<!-- /gh-linked-prs -->
| 203565b2f9c74656ba519780049b46d4e5afcba1 | 939c201e00943c6dc2d515185168c30606ae522c |
python/cpython | python__cpython-120179 | # Three typos in documentation
# Documentation
I noticed a few typos in the documentation.
1. In `Doc/glossary.rst`, there is `immmortal` instead of `immortal`
2. In `Doc/library/dis.rst`, there is `oparand` instead of `operand`
3. In `Doc/library/pdb.rst`, there is `Accepatable` instead of `Acceptable`
I'm also submitting a PR to fix these.
<!-- gh-linked-prs -->
### Linked PRs
* gh-120179
<!-- /gh-linked-prs -->
| 5bdc87b8859c837092e7c5b19583f98488f7a387 | e21057b99967eb5323320e6d1121955e0cd2985e |
python/cpython | python__cpython-120171 | # Importing multiprocessing breaks _checkmodule in _pickle.c
# Bug report
### Bug description:
#86572 describes a bug where importing `multiprocessing` causes `pickle` to determine the wrong module for objects without `__module__`. #23403 fixed the bug, but only for the pure Python implementation of `pickle`. The C extension module is still broken.
Consider a file `foo.py` containing:
```python
def f():
pass
del f.__module__
```
Then, in an interactive session:
```python
>>> import multiprocessing
>>> from foo import f
>>> import pickle
>>> import pickletools
>>> pickletools.dis(pickle.dumps(f))
0: \x80 PROTO 4
2: \x95 FRAME 21
11: \x8c SHORT_BINUNICODE '__mp_main__'
24: \x94 MEMOIZE (as 0)
25: \x8c SHORT_BINUNICODE 'f'
28: \x94 MEMOIZE (as 1)
29: \x93 STACK_GLOBAL
30: \x94 MEMOIZE (as 2)
31: . STOP
highest protocol among opcodes = 4
>>> pickletools.dis(pickle._dumps(f))
0: \x80 PROTO 4
2: \x95 FRAME 13
11: \x8c SHORT_BINUNICODE 'foo'
16: \x94 MEMOIZE (as 0)
17: \x8c SHORT_BINUNICODE 'f'
20: \x94 MEMOIZE (as 1)
21: \x93 STACK_GLOBAL
22: \x94 MEMOIZE (as 2)
23: . STOP
highest protocol among opcodes = 4
```
`pickle.dumps` tries to import `f` from `__mp_main__`. `pickle._dumps` correctly determines that `f` comes from the `foo` module.
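For context, the pure-Python side works because `pickle.whichmodule` falls back to scanning `sys.modules` for a module whose attribute is the object, and gh-23403 made it skip `'__mp_main__'` (which multiprocessing registers as an alias of `__main__`). A simplified sketch of that scan, not the exact stdlib code:

```python
import sys

def whichmodule_sketch(obj, name):
    """Simplified version of pickle.whichmodule's sys.modules scan."""
    for module_name, module in sys.modules.copy().items():
        if module_name == '__mp_main__':  # skip multiprocessing's alias of __main__
            continue
        if getattr(module, name, None) is obj:
            return module_name
    return '__main__'
```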
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-120171
<!-- /gh-linked-prs -->
| 05a19b5e56894fd1e63aff6b38fb23ad7c7b3047 | 393773ae8763202ecf7afff81c4d57bd37c62ff6 |
python/cpython | python__cpython-120177 | # ``test_os.test_win32_mkdir_700`` fails on Windows
# Bug report
### Bug description:
```python
./python -m test -v test_os -m test_win32_mkdir_700
Running Debug|x64 interpreter...
== CPython 3.14.0a0 (heads/main-dirty:78634cfa3d, Jun 6 2024, 18:39:33) [MSC v.1933 64 bit (AMD64)]
== Windows-10-10.0.19043-SP0 little-endian
== Python build: debug
== cwd: C:\Users\KIRILL-1\CLionProjects\cpython\build\test_python_worker_6160æ
== CPU count: 16
== encodings: locale=cp1251 FS=utf-8
== resources: all test resources are disabled, use -u option to unskip tests
Using random seed: 3576474721
0:00:00 Run 1 test sequentially in a single process
0:00:00 [1/1] test_os
test_win32_mkdir_700 (test.test_os.MakedirTests.test_win32_mkdir_700) ... FAIL
======================================================================
FAIL: test_win32_mkdir_700 (test.test_os.MakedirTests.test_win32_mkdir_700)
----------------------------------------------------------------------
Traceback (most recent call last):
File "C:\Users\KIRILL-1\CLionProjects\cpython\Lib\test\test_os.py", line 1840, in test_win32_mkdir_700
self.assertEqual(
~~~~~~~~~~~~~~~~^
out.strip(),
^^^^^^^^^^^^
f'{path} "D:P(A;OICI;FA;;;SY)(A;OICI;FA;;;BA)(A;OICI;FA;;;OW)"',
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
AssertionError: 'C:\\[66 chars]_6160?\\@test_6160_tmp?\\dir "D:P(A;OICI;FA;;;[32 chars]OW)"' != 'C:\\[66 chars]_6160æ\\@test_6160_tmpæ\\dir "D:P(A;OICI;FA;;;[32 chars]OW)"'
- C:\Users\KIRILL-1\CLionProjects\cpython\build\test_python_worker_6160?\@test_6160_tmp?\dir "D:P(A;OICI;FA;;;SY)(A;OICI;FA;;;BA)(A;OICI;FA;;;OW)"
? ^ ^
+ C:\Users\KIRILL-1\CLionProjects\cpython\build\test_python_worker_6160æ\@test_6160_tmpæ\dir "D:P(A;OICI;FA;;;SY)(A;OICI;FA;;;BA)(A;OICI;FA;;;OW)"
? ^ ^
----------------------------------------------------------------------
Ran 1 test in 0.036s
FAILED (failures=1)
test test_os failed
test_os failed (1 failure)
== Tests result: FAILURE ==
1 test failed:
test_os
Total duration: 483 ms
Total tests: run=1 (filtered) failures=1
Total test files: run=1/1 (filtered) failed=1
Result: FAILURE
```
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Windows
<!-- gh-linked-prs -->
### Linked PRs
* gh-120177
* gh-120202
* gh-120203
<!-- /gh-linked-prs -->
| d5ba4fc9bc9b2d9eff2a90893e8d500e0c367237 | 47816f465e833a5257a82b759b1081e06381e528 |
python/cpython | python__cpython-120182 | # `Objects/typeobject.c:143: managed_static_type_index_get: Assertion `managed_static_type_index_is_set(self)' failed.` in 3.13.0b2+, with freezegun
# Crash report
### What happened?
Starting with Python 3.13.0b2, importing `freezegun` causes the interpreter to crash on exit. I was able to reduce it into the following ugly quasi-standalone snippet:
```python
import asyncio
import datetime
from typing import Type
class tzutc(datetime.tzinfo):
    pass

_EPOCHTZ = datetime.datetime(1970, 1, 1, tzinfo=tzutc())

class FakeDateMeta(type):
    def __instancecheck__(self, obj):
        return True

class FakeDate(datetime.date, metaclass=FakeDateMeta):
    pass

def pickle_fake_date(datetime_) -> Type[FakeDate]:
    # A pickle function for FakeDate
    return FakeDate
```
```
$ python -c 'import repro'
python: Objects/typeobject.c:143: managed_static_type_index_get: Assertion `managed_static_type_index_is_set(self)' failed.
Aborted (core dumped)
```
I've been able to reproduce it with 3.13.0b2, tip of 3.13 branch and tip of main branch.
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Linux
### Output from running 'python -VV' on the command line:
Python 3.14.0a0 (heads/main:fd104dfcb83, Jun 6 2024, 16:55:57) [GCC 14.1.1 20240516]
<!-- gh-linked-prs -->
### Linked PRs
* gh-120182
* gh-120518
* gh-120663
<!-- /gh-linked-prs -->
| 2c66318cdc0545da37e7046533dfe74bde129d91 | e73c42e15cf83c7a81de016ce2827c04110c80c3 |
python/cpython | python__cpython-120187 | # Unused code in `concurrent.future`
# Bug report
### Bug description:
https://github.com/python/cpython/blob/fd104dfcb838d735ef8128e3539d7a730d403422/Lib/concurrent/futures/_base.py#L26C1-L32
The `_FUTURE_STATES` variable isn't used anywhere in this file, nor in any other file: https://github.com/search?q=repo%3Apython%2Fcpython%20_FUTURE_STATES&type=code
And since it has an underscore prefix in the name, it's a private variable and won't be used by users.
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-120187
<!-- /gh-linked-prs -->
| bd826b9c77dbf7c789433cb8061c733c08634c0e | 5c115567b1e3aecb7a53cfd5757e25c088398411 |
python/cpython | python__cpython-120156 | # Fix some Coverity warnings and false alarms
I got access to a Coverity scan of Python 3.12.2 and there are about 67 warnings. Most of them seem to be false alarms, my team is still investigating the warnings.
I propose to make minor changes, when it makes sense, to make some false alarms quiet.
And propose fixes for real issues :-)
<!-- gh-linked-prs -->
### Linked PRs
* gh-120156
* gh-120228
* gh-120231
* gh-120232
* gh-120238
* gh-120240
* gh-120310
* gh-120311
* gh-120402
* gh-120409
* gh-120410
* gh-120997
* gh-121005
* gh-121006
<!-- /gh-linked-prs -->
| 78634cfa3dd4b542897835d5f097604dbeb0f3fd | cccc9f63c63ae693ccd0e2d8fc6cfd3aa18feb8e |
python/cpython | python__cpython-120173 | # Incorrect case string in configure script
# Bug report
### Bug description:
I think this: https://github.com/python/cpython/blob/fd104dfcb838d735ef8128e3539d7a730d403422/configure#L12895
Should be: `Emscripten*|WASI*`
I could be wrong, but I think this will never be matched otherwise since even if `ac_sys_release` is empty, we would be trying to match `Emscripten` against `Emscripten/`
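A standalone shell sketch (with assumed values for `ac_sys_system` and `ac_sys_release`, mimicking the configure logic) shows why the unsuffixed pattern can never match the combined string:

```shell
# Standalone sketch; variable values are assumed for illustration.
ac_sys_system=Emscripten
ac_sys_release=

case $ac_sys_system/$ac_sys_release in   # the matched string is "Emscripten/"
    Emscripten|WASI) echo "matched old pattern" ;;   # never taken
    *) echo "no match" ;;
esac

case $ac_sys_system/$ac_sys_release in
    Emscripten*|WASI*) echo "matched proposed pattern" ;;
    *) echo "no match" ;;
esac
```

The first `case` prints "no match" because `Emscripten` (with no `*`) cannot match `Emscripten/`; the second prints "matched proposed pattern".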
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Other
<!-- gh-linked-prs -->
### Linked PRs
* gh-120173
* gh-120199
* gh-120204
* gh-120309
<!-- /gh-linked-prs -->
| 47816f465e833a5257a82b759b1081e06381e528 | 6a97929a5ad76c55bc6e1cf32898a1c31093334d |
python/cpython | python__cpython-124533 | # Make it possible to use `sys.monitoring` for pdb/bdb
# Feature or enhancement
### Proposal:
I had a few proposals to utilize `sys.monitoring` for pdb and all of them raised concerns. We had a discussion during the language summit and I got feedback from several members that breaking backwards compatibility for bdb is not acceptable. I can totally understand the concern, so I have a much more conservative proposal to do this. I would like to get opinions from people before starting to work on it.
TL; DR:
1. All existing code will work exactly as before.
2. `sys.monitoring` will be an opt-in feature for bdb.
3. Minimum changes are needed for existing debugger (including pdb) to onboard.
Details:
What I propose, is to keep all the code for bdb as it is, and add extra code for `sys.monitoring` without disturbing the existing methods. Everything will be hidden behind a feature gate so all the existing code will work exactly as before.
In order to use `sys.monitoring` instead of `sys.settrace`, the user needs to init `bdb.Bdb` with an argument (`use_monitoring=True`?). Then the underlying `bdb.Bdb` will work in `sys.monitoring` mode. Ideally, that's the *only* change the debugger needs to make.
Of course, in reality, if the debugger wants to onboard this feature, it may need a few tweaks. For example, in pdb, `debug` command toggles `sys.settrace` to make it possible to debug something while tracing, that needs to be modified. However, the goal is to make it trivial for debugger developers to switch from `sys.settrace` to `sys.monitoring`, if they knew how `sys.monitoring` works.
Let me re-emphasize it in case that's still a confusion - there's nothing the developer needs to do, if they just want the old `sys.settrace`, everything will simply work because all the new stuff will be behind a feature gate.
If they chose to use `sys.monitoring`, there will be a few APIs in `bdb.Bdb` that does not even make sense anymore - `trace_dispatch` and `dispatch_*` functions. The documentation already says:
> The following methods of [Bdb](https://docs.python.org/3/library/bdb.html#bdb.Bdb) normally don’t need to be overridden.
and a normal debugger should not override those methods anyway (pdb does not). Again, those APIs will still work in `sys.settrace` mode.
As for pdb, we can also add a feature gate for it. The user can choose between the `sys.settrace` mode and the `sys.monitoring` mode. The behaviors in the two modes should be identical, except for the performance. We can even make 3.14 a transition version, where the default mechanism is still `sys.settrace`, and the user can opt into the `sys.monitoring` mode by explicitly asking for it (through an initialization argument or a command line argument). This way, we can get feedback from the brave pioneers without disturbing most pdb users. We will have a full year to fix bugs introduced by the mechanism and stabilize it.
In 3.15, we can make `sys.monitoring` the default and still keep `sys.settrace` as a fallback plan.
So, why bother? Because we can really gain 100x performance with breakpoints. Not only with breakpoints, even without breakpoints, there's a significant overhead to run with debugger attached:
```python
import sys, time

def trace(*args):
    return None

def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)

def f():
    start = time.perf_counter()
    fib(22)
    print(time.perf_counter() - start)

sys.settrace(trace)  # attach a no-op tracer
f()
```
The overhead with trace attached is 4x-5x for `f()` because the `call` event will still be triggered and even if `f_trace == None`, the instrumentation is still there and will be executed! We can have an almost zero-overhead debugger and people are very excited about the possibility.
### Has this already been discussed elsewhere?
I have already discussed this feature proposal on Discourse
### Links to previous discussion of this feature:
https://github.com/python/cpython/issues/103103
https://discuss.python.org/t/make-pdb-faster-with-pep-669/37674
<!-- gh-linked-prs -->
### Linked PRs
* gh-124533
* gh-131390
* gh-132484
<!-- /gh-linked-prs -->
| a936af924efc6e2fb59e27990dcd905b7819470a | f48887fb97651c02c5e412a75ed8b51a4ca11e6a |
python/cpython | python__cpython-120131 | # `ipaddress`: argument to `collapse_addresses()` should be described as iterable
# Documentation
In the description of the [`ipaddress.collapse_addresses()](https://docs.python.org/3/library/ipaddress.html#ipaddress.collapse_addresses) function, we can read that:
> *addresses* is an iterator of [IPv4Network](https://docs.python.org/3/library/ipaddress.html#ipaddress.IPv4Network) or [IPv6Network](https://docs.python.org/3/library/ipaddress.html#ipaddress.IPv6Network) objects.
Whereas, in fact, *addresses* can be any *iterable object* (not necessarily an *iterator*).
Therefore, I propose to change that fragment to:
> *addresses* is an [iterable](https://docs.python.org/3/glossary.html#term-iterable) of [IPv4Network](https://docs.python.org/3/library/ipaddress.html#ipaddress.IPv4Network) or [IPv6Network](https://docs.python.org/3/library/ipaddress.html#ipaddress.IPv6Network) objects.
...and, also, to fix the [related fragment](https://github.com/python/cpython/blob/v3.12.3/Lib/ipaddress.py#L313) of the function's docstring – by replacing: `addresses: An iterator of IPv4Network or IPv6Network objects.` with `addresses: An iterable of IPv4Network or IPv6Network objects.`
[Edit] PS Please note that even the already existing example in the docs (below the function's description) sets the *addresses* argument to an *iterable* (namely: a *list*), rather than an *iterator*.
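For illustration, both a plain list and an actual iterator are accepted in practice:

```python
import ipaddress

nets = [ipaddress.ip_network("192.0.2.0/25"),
        ipaddress.ip_network("192.0.2.128/25")]

# A plain list (an iterable, not an iterator) works fine:
print(list(ipaddress.collapse_addresses(nets)))
# [IPv4Network('192.0.2.0/24')]

# ...and so does an actual iterator:
print(list(ipaddress.collapse_addresses(iter(nets))))
# [IPv4Network('192.0.2.0/24')]
```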
<!-- gh-linked-prs -->
### Linked PRs
* gh-120131
* gh-120135
* gh-120136
<!-- /gh-linked-prs -->
| f878d46e5614f08a9302fcb6fc611ef49e9acf2f | e83ce850f433fd8bbf8ff4e8d7649b942639db31 |
python/cpython | python__cpython-120123 | # `concurrent.futures` does not include `InvalidStateError` in its `__all__`
# Bug report
### Bug description:
`InvalidStateError` is introduced in #7056, but not included in the module's `__all__`. A PR is on the way.
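A quick way to see the effect: the class exists as a module attribute, but a star import only pulls in names listed in `__all__`, so on affected versions it is silently skipped.

```python
import concurrent.futures

# The exception class is present as a module attribute:
assert hasattr(concurrent.futures, "InvalidStateError")

# But `from concurrent.futures import *` only imports names in __all__:
ns = {}
exec("from concurrent.futures import *", ns)
print("InvalidStateError" in ns)  # False before the fix
```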
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-120123
* gh-120273
* gh-120274
<!-- /gh-linked-prs -->
| 5d59b870effa0f576acf7264cfcbfca2b36e34e3 | 38a25e9560cf0ff0b80d9e90bce793ff24c6e027 |
python/cpython | python__cpython-120116 | # Cirrus M1 macOS runners never start on fork
The Cirrus M1 macOS runners introduced in #119979 never start on [my fork](https://github.com/nineteendo/cpython/actions/runs/9371716405/job/25801596396). This causes every workflow to fail. Can we skip this job in that case?
<!-- gh-linked-prs -->
### Linked PRs
* gh-120116
* gh-120152
* gh-120153
<!-- /gh-linked-prs -->
| fd104dfcb838d735ef8128e3539d7a730d403422 | eeb8f67f837facb37f092a8b743f4d249515e82f |
python/cpython | python__cpython-120114 | # RecursionError during copy.deepcopy of an ast-tree with parent references
# Bug report
### Bug description:
python 3.13 raises an `RecursionError` when I try to `deepcopy` the ast in the following example.
```python
import ast
import copy

code = """
('',)
while i < n:
    if ch == '':
        ch = format[i]
        if ch == '':
            if freplace is None:
                '' % getattr(object)
        elif ch == '':
            if zreplace is None:
                if hasattr:
                    offset = object.utcoffset()
                    if offset is not None:
                        if offset.days < 0:
                            offset = -offset
                        h = divmod(timedelta(hours=0))
                if u:
                    zreplace = '' % (sign,)
                elif s:
                    zreplace = '' % (sign,)
                else:
                    zreplace = '' % (sign,)
        elif ch == '':
            if Zreplace is None:
                Zreplace = ''
                if hasattr(object):
                    s = object.tzname()
                    if s is not None:
                        Zreplace = s.replace('')
            newformat.append(Zreplace)
        else:
            push('')
    else:
        push(ch)
"""
tree = ast.parse(code)

# add a back reference to the parent node
for node in ast.walk(tree):
    for child in ast.iter_child_nodes(node):
        child.parent = node

tree2 = copy.deepcopy(tree)
```
output (Python 3.13.0b1+):
```python
Traceback (most recent call last):
File "/home/frank/projects/cpython/../cpython_bugs/deep_copy_ast_with_parents.py", line 50, in <module>
tree2=copy.deepcopy(tree)
File "/home/frank/projects/cpython/Lib/copy.py", line 162, in deepcopy
y = _reconstruct(x, memo, *rv)
File "/home/frank/projects/cpython/Lib/copy.py", line 253, in _reconstruct
y = func(*args)
File "/home/frank/projects/cpython/Lib/copy.py", line 252, in <genexpr>
args = (deepcopy(arg, memo) for arg in args)
~~~~~~~~^^^^^^^^^^^
File "/home/frank/projects/cpython/Lib/copy.py", line 136, in deepcopy
y = copier(x, memo)
File "/home/frank/projects/cpython/Lib/copy.py", line 196, in _deepcopy_list
append(deepcopy(a, memo))
~~~~~~~~^^^^^^^^^
...
File "/home/frank/projects/cpython/Lib/copy.py", line 162, in deepcopy
y = _reconstruct(x, memo, *rv)
File "/home/frank/projects/cpython/Lib/copy.py", line 259, in _reconstruct
state = deepcopy(state, memo)
File "/home/frank/projects/cpython/Lib/copy.py", line 136, in deepcopy
y = copier(x, memo)
File "/home/frank/projects/cpython/Lib/copy.py", line 221, in _deepcopy_dict
y[deepcopy(key, memo)] = deepcopy(value, memo)
~~~~~~~~^^^^^^^^^^^^^
File "/home/frank/projects/cpython/Lib/copy.py", line 162, in deepcopy
y = _reconstruct(x, memo, *rv)
File "/home/frank/projects/cpython/Lib/copy.py", line 253, in _reconstruct
y = func(*args)
File "/home/frank/projects/cpython/Lib/copy.py", line 252, in <genexpr>
args = (deepcopy(arg, memo) for arg in args)
~~~~~~~~^^^^^^^^^^^
File "/home/frank/projects/cpython/Lib/copy.py", line 136, in deepcopy
y = copier(x, memo)
RecursionError: maximum recursion depth exceeded
```
The problem can be reproduced on the current main (5c02ea8bae2287a828840f5734966da23dc573dc)
I was able to bisect the problem down to ed4dfd8825b49e16a0fcb9e67baf1b58bb8d438f (@JelleZijlstra may be you know more here)
The code in the example looks big but is already minimized.
I hope that this example is simple enough to find and fix the bug, I can try to find a smaller one if it is needed.
### CPython versions tested on:
3.13, CPython main branch
### Operating systems tested on:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-120114
* gh-121000
<!-- /gh-linked-prs -->
| 42b2c9d78da7ebd6bd5925a4d4c78aec3c9e78e6 | ead676516d4687b5e6d947a5c0808b5b93ba87e3 |
python/cpython | python__cpython-120107 | # IDLE uses incorrect screen dimension units
In two places IDLE uses incorrect unit for screen dimension:
https://github.com/python/cpython/blob/983efcf15b2503fe0c05d5e03762385967962b33/Lib/idlelib/configdialog.py#L114
https://github.com/python/cpython/blob/983efcf15b2503fe0c05d5e03762385967962b33/Lib/idlelib/searchbase.py#L89
It uses "5px", but the only documented valid suffixes are "c", "i", "m" and "p". Tk ignores the rest in versions < 8.7, but in 8.7 and 9.0 this is an error. And this is for good, because "px" did not mean pixels, as was expected, but printer's points (1/72 inch).
If we want to keep the same look, we should change "px" to "p". But if it originally should be in pixels, we should remove the suffix. In all other places padding is specified in pixels, and this makes sense, so I believe the latter option is better.
<!-- gh-linked-prs -->
### Linked PRs
* gh-120107
* gh-122740
* gh-122741
<!-- /gh-linked-prs -->
| 4b66b6b7d6e65f9eb2d61435b9b37ffeb7bb00fb | 4c317918486348ff8486168e1003be8c1daa6cf5 |
python/cpython | python__cpython-120101 | # FrameLocalsProxy is not a mapping
# Bug report
### Bug description:
`FrameLocalsProxy` should be a mapping. I.e. it should subclass `collections.abc.Mapping` and match `{}` in a match statement.
```python
from collections.abc import Mapping
import sys

def f():
    return sys._getframe().f_locals

proxy = f()
assert isinstance(proxy, Mapping)

match proxy:
    case {}:
        kind = "mapping"
    case _:
        kind = "other"

assert kind == "mapping"
```
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-120101
* gh-120749
<!-- /gh-linked-prs -->
| d1c673b658977a8e6236feee579308e0ed6a0187 | 00257c746c447a2e026b5a2a618f0e033fb90111 |
python/cpython | python__cpython-120605 | # Add IDLE Hovertip foreground color needed for recent macOS
# Bug report
### Bug description:
### Minimal Reproducible Example
```python
import tkinter as tk
from idlelib.tooltip import Hovertip
root = tk.Tk()
root.geometry('200x100')
label = tk.Label(root, text='Hover Me!')
label.pack()
tip = Hovertip(label, text='Pro Tip')
root.mainloop()
```

### Proposed Fix
Specifying a foreground color, e.g. `foreground="black"`, at the `Label` declaration in `Hovertip.showcontents()` fixes the issue
```python
class Hovertip(OnHoverTooltipBase):
    "A tooltip that pops up when a mouse hovers over an anchor widget."

    def __init__(self, anchor_widget, text, hover_delay=1000):
        """Create a text tooltip with a mouse hover delay.

        anchor_widget: the widget next to which the tooltip will be shown
        hover_delay: time to delay before showing the tooltip, in milliseconds

        Note that a widget will only be shown when showtip() is called,
        e.g. after hovering over the anchor widget with the mouse for enough
        time.
        """
        super().__init__(anchor_widget, hover_delay=hover_delay)
        self.text = text

    def showcontents(self):
        label = Label(self.tipwindow, text=self.text, justify=LEFT,
                      foreground="black", background="#ffffe0",
                      relief=SOLID, borderwidth=1)
        label.pack()
```

### CPython versions tested on:
3.12
### Operating systems tested on:
macOS
<!-- gh-linked-prs -->
### Linked PRs
* gh-120605
* gh-122592
* gh-122739
<!-- /gh-linked-prs -->
| 5a7f7c48644baf82988f30bcb43e03dcfceb75dd | dbdbef3668293abdceac2b8a7b3e4615e6bde143 |
python/cpython | python__cpython-120088 | # `int.__round__(None)` is not supported
# Bug report
### Bug description:
The `__round__` method on floats accepts None as an explicit argument, but the one on int does not:
```python
>>> (1.0).__round__(None)
1
>>> (1).__round__(None)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: 'NoneType' object cannot be interpreted as an integer
```
This goes against the general principle that any operation that works on a float should also work on an int.
(Note that both `round(x)` and `round(x, None)` work on both floats and ints; this issue is entirely about direct calls to `.__round__()`.)
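A quick check of the asymmetry; the first three calls work on every recent version, while the last one is what fails before the fix:

```python
# round() itself accepts None for both numeric types:
assert round(1, None) == 1
assert round(1.0, None) == 1

# float.__round__ also accepts an explicit None:
assert (1.0).__round__(None) == 1

# ...but int.__round__ does not (on affected versions):
try:
    (1).__round__(None)
except TypeError as exc:
    print(exc)  # 'NoneType' object cannot be interpreted as an integer
```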
### CPython versions tested on:
3.12, CPython main branch
### Operating systems tested on:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-120088
* gh-120197
* gh-120328
<!-- /gh-linked-prs -->
| 57ad769076201c858a768d81047f6ea44925a33b | bd826b9c77dbf7c789433cb8061c733c08634c0e |
python/cpython | python__cpython-120081 | # Typo in the documentation for the time module (specifically struct_time)
# Documentation
The table under class time.struct_time says 'tm_day' instead of 'tm_mday', which is incorrect.
<!-- gh-linked-prs -->
### Linked PRs
* gh-120081
* gh-120084
* gh-120085
<!-- /gh-linked-prs -->
| b6b0dcbfc054f581b6f78602e4c2e9474e3efe21 | 770f3c1eadd3392c72fd55be47770234dd143a14 |
python/cpython | python__cpython-120068 | # Speed up `test_weakref` in the free-threaded build
The `test_weakref` test is one of the slowest tests when running in the free-threaded build.
The problem is that the default period for the `collect_in_thread()` function is too short (100 µs). This isn't much of a problem for the default build because the GIL switch interval is 5 ms, which effectively makes the smaller period irrelevant.
With the free-threaded build, the 100 µs period means that the test spends the majority of its time calling `gc.collect()` nearly non-stop.
We should increase the period to 5 ms or so. On my machine, this decreases the overall `test_weakref` time from 1 minute to 8 seconds.
https://github.com/python/cpython/blob/d9095194dde27eaabfc0b86a11989cdb9a2acfe1/Lib/test/test_weakref.py#L84-L103
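The helper in question looks roughly like this (a simplified sketch with a `period` parameter defaulting to the proposed 5 ms; the real helper lives in `Lib/test/test_weakref.py`):

```python
import contextlib
import gc
import threading
import time

@contextlib.contextmanager
def collect_in_thread(period=0.005):
    """Run gc.collect() every `period` seconds while the with-block runs."""
    please_stop = threading.Event()

    def collect():
        while not please_stop.is_set():
            time.sleep(period)
            gc.collect()

    thread = threading.Thread(target=collect)
    thread.start()
    try:
        yield
    finally:
        please_stop.set()
        thread.join()

with collect_in_thread():
    time.sleep(0.02)  # the test body would run here
```

With a 100 µs period the background thread barely sleeps between collections, which is harmless under the default GIL switch interval but dominates runtime on the free-threaded build.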
<!-- gh-linked-prs -->
### Linked PRs
* gh-120068
* gh-120110
<!-- /gh-linked-prs -->
| 4bba1c9e6cfeaf69302b501a4306668613db4b28 | 5c02ea8bae2287a828840f5734966da23dc573dc |
python/cpython | python__cpython-120059 | # Add os.reload_environ() function
# Feature or enhancement
When the environment is modified outside Python, `os.environ` is not updated. I propose adding a new `os.environ.refresh()` method to manually update `os.environ`.
Discussion: https://discuss.python.org/t/method-to-refresh-os-environ/54774
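The staleness can be demonstrated by changing the C-level environment behind Python's back (a Linux/glibc sketch; `ctypes.CDLL(None)` loads the running process's own symbols, and `DEMO_VAR` is an invented name for the demo):

```python
import ctypes
import os

# Modify the C environment without going through os.environ:
libc = ctypes.CDLL(None)
libc.setenv(b"DEMO_VAR", b"42", 1)

# os.environ was snapshotted at interpreter startup and is not updated:
print(os.environ.get("DEMO_VAR"))  # None
# os.getenv() reads the same snapshot:
print(os.getenv("DEMO_VAR"))       # None
```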
<!-- gh-linked-prs -->
### Linked PRs
* gh-120059
* gh-120494
* gh-120789
* gh-120790
* gh-120808
* gh-126268
<!-- /gh-linked-prs -->
| 7aff2de62bc28eb23888270b698c6b6915f69b21 | 56c3815ba14c790d2e9a227b4ac0ead5e6b1e570 |
python/cpython | python__cpython-120058 | # Add `IP_RECVTTL` and `IP_RECVERR` constants to `socket` module
# Feature or enhancement
I recently needed these constants in real code; I had to define them manually.
Docs from Linux: https://man7.org/linux/man-pages/man7/ip.7.html
```c
IP_RECVERR (since Linux 2.2)
Enable extended reliable error message passing. When
enabled on a datagram socket, all generated errors will be
queued in a per-socket error queue. When the user
receives an error from a socket operation, the errors can
be received by calling [recvmsg(2)](https://man7.org/linux/man-pages/man2/recvmsg.2.html) with the MSG_ERRQUEUE
flag set. The sock_extended_err structure describing the
error will be passed in an ancillary message with the type
IP_RECVERR and the level IPPROTO_IP. This is useful for
reliable error handling on unconnected sockets. The
received data portion of the error queue contains the
error packet.
The IP_RECVERR control message contains a
sock_extended_err structure:
#define SO_EE_ORIGIN_NONE 0
#define SO_EE_ORIGIN_LOCAL 1
#define SO_EE_ORIGIN_ICMP 2
#define SO_EE_ORIGIN_ICMP6 3
struct sock_extended_err {
uint32_t ee_errno; /* error number */
uint8_t ee_origin; /* where the error originated */
uint8_t ee_type; /* type */
uint8_t ee_code; /* code */
uint8_t ee_pad;
uint32_t ee_info; /* additional information */
uint32_t ee_data; /* other data */
/* More data may follow */
};
struct sockaddr *SO_EE_OFFENDER(struct sock_extended_err *);
ee_[errno](https://man7.org/linux/man-pages/man3/errno.3.html) contains the errno number of the queued error.
ee_origin is the origin code of where the error
originated. The other fields are protocol-specific. The
macro SO_EE_OFFENDER returns a pointer to the address of
the network object where the error originated from given a
pointer to the ancillary message. If this address is not
known, the sa_family member of the sockaddr contains
AF_UNSPEC and the other fields of the sockaddr are
undefined.
IP uses the sock_extended_err structure as follows:
ee_origin is set to SO_EE_ORIGIN_ICMP for errors received
as an ICMP packet, or SO_EE_ORIGIN_LOCAL for locally
generated errors. Unknown values should be ignored.
ee_type and ee_code are set from the type and code fields
of the ICMP header. ee_info contains the discovered MTU
for EMSGSIZE errors. The message also contains the
sockaddr_in of the node caused the error, which can be
accessed with the SO_EE_OFFENDER macro. The sin_family
field of the SO_EE_OFFENDER address is AF_UNSPEC when the
source was unknown. When the error originated from the
network, all IP options (IP_OPTIONS, IP_TTL, etc.) enabled
on the socket and contained in the error packet are passed
as control messages. The payload of the packet causing
the error is returned as normal payload. Note that TCP
has no error queue; MSG_ERRQUEUE is not permitted on
SOCK_STREAM sockets. IP_RECVERR is valid for TCP, but all
errors are returned by socket function return or SO_ERROR
only.
For raw sockets, IP_RECVERR enables passing of all
received ICMP errors to the application, otherwise errors
are reported only on connected sockets
It sets or retrieves an integer boolean flag. IP_RECVERR
defaults to off.
IP_RECVTTL (since Linux 2.2)
When this flag is set, pass a IP_TTL control message with
the time-to-live field of the received packet as a 32 bit
integer. Not supported for SOCK_STREAM sockets.
```
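Until the constants land, code has to hard-code the kernel values. A hedged sketch (the fallback values 11 and 12 are the Linux `<linux/in.h>` definitions and are platform-specific; this is a Linux-only illustration):

```python
import socket

# Fall back to the Linux values when the constants are absent:
IP_RECVERR = getattr(socket, "IP_RECVERR", 11)
IP_RECVTTL = getattr(socket, "IP_RECVTTL", 12)

with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
    s.setsockopt(socket.IPPROTO_IP, IP_RECVTTL, 1)  # ask for TTL ancillary data
    s.setsockopt(socket.IPPROTO_IP, IP_RECVERR, 1)  # enable extended error queue
```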
<!-- gh-linked-prs -->
### Linked PRs
* gh-120058
<!-- /gh-linked-prs -->
| f5a9c34f38886c5cf9c2f8d860eee3473447e030 | 34e4d3287e724c065cc07b04a1ee8715817db284 |
python/cpython | python__cpython-120050 | # `test_imaplib` takes 40+ minutes in refleaks builds
# Bug report
Similar to https://github.com/python/cpython/issues/120039, `test_imaplib` has at least one test that *expects* a timeout, and that timeout is scaled up with the global timeout, so the refleaks tests unnecessarily take 40+ minutes.
https://github.com/python/cpython/blob/4dcd91ceafce91ec37bb1a9d544e41fc65578994/Lib/test/test_imaplib.py#L116-L117
https://github.com/python/cpython/blob/4dcd91ceafce91ec37bb1a9d544e41fc65578994/Lib/test/test_imaplib.py#L461-L472
<!-- gh-linked-prs -->
### Linked PRs
* gh-120050
* gh-120069
* gh-120070
<!-- /gh-linked-prs -->
| 710cbea6604d27c7d59ae4953bf522b997a82cc7 | d9095194dde27eaabfc0b86a11989cdb9a2acfe1 |
python/cpython | python__cpython-120042 | # PyREPL: Completion menu does not get updated when inserting a character
# Bug report
### Bug description:
The completions menu does not get updated when inserting a character. Relatedly, if one presses `Tab` once when there are multiple possible completions, a `[ not unique ]` message appears. If one then presses another key instead of pressing `Tab` a second time, it messes up the cursor and appends the pressed character after the message.
### CPython versions tested on:
3.13, CPython main branch
### Operating systems tested on:
Linux, macOS
<!-- gh-linked-prs -->
### Linked PRs
* gh-120042
* gh-120051
* gh-120055
<!-- /gh-linked-prs -->
| 8fc7653766b106bdbc4ff6154e0020aea4ab15e6 | 4dcd91ceafce91ec37bb1a9d544e41fc65578994 |
python/cpython | python__cpython-120047 | # `test_siginterrupt_off` unnecessarily takes 30 minutes in refleak test
# Bug report
The `test_siginterrupt_off` test in `test_signal` *expects* a timeout and uses `support.SHORT_TIMEOUT`.
In at least some of the refleak test runners:
* we bump the overall timeout to 200 minutes:
* which adjusts `support.SHORT_TIMEOUT` to five minutes
* and the test case is run six times
So the test always takes a minimum of 30 minutes.
https://github.com/python/cpython/blob/e69d068ad0bd6a25434ea476a647b635da4d82bb/Lib/test/test_signal.py#L749
https://github.com/python/cpython/blob/e69d068ad0bd6a25434ea476a647b635da4d82bb/Lib/test/test_signal.py#L775-L781
<!-- gh-linked-prs -->
### Linked PRs
* gh-120047
* gh-120060
* gh-120061
<!-- /gh-linked-prs -->
| d419d468ff4aaf6bc673354d0ee41b273d09dd3f | bf8e5e53d0c359a1f9c285d855e7a5e9b6d91375 |
python/cpython | python__cpython-120028 | # Some flags are not publicly exported by `symtablemodule.c`
# Bug report
### Bug description:
The `symtablemodule.c` file does not export `DEF_COMP_ITER`, `DEF_TYPE_PARAM` and `DEF_COMP_CELL`. Those flags seem to have been added after the original ones so they were probably missed/forgotten. Here is a MWE for `DEF_TYPE_PARAM`:
```python
>>> import symtable
>>> s = symtable.symtable("class A[T]: pass", "?", "exec")
>>> s.get_children()[0].lookup('T')
<symbol 'T': LOCAL, DEF_LOCAL>
```
By the way, there are tests that are missing for those cases in `test_symtable`, so I can also add the corresponding test. I'm opening a PR now, but feel free to close it if you do not want to expose too many compiler flags (though, I fail to understand why you would do so).
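The gap is easy to check directly against the private `_symtable` C module (which backs `symtable`):

```python
import _symtable  # private C helper module backing `symtable`

# Flags that have long been exported are available:
assert hasattr(_symtable, "DEF_LOCAL")
assert hasattr(_symtable, "DEF_GLOBAL")

# The newer flags are missing on affected versions:
for name in ("DEF_COMP_ITER", "DEF_TYPE_PARAM", "DEF_COMP_CELL"):
    print(name, hasattr(_symtable, name))
```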
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-120028
* gh-120099
* gh-120218
* gh-120222
<!-- /gh-linked-prs -->
| ff1857d6ed52fab8ef1507c289d89ee545ca6478 | e69d068ad0bd6a25434ea476a647b635da4d82bb |
python/cpython | python__cpython-120027 | # Deprecate (soft) Py_HUGE_VAL macro
# Feature or enhancement
### Proposal:
Nowadays, it's just an alias for ``Py_INFINITY``. Using both macros in a single codebase probably slightly confuses code readers. I think it is worth replacing all such cases with ``Py_INFINITY``, as mentioned in [this comment](https://github.com/python/cpython/pull/104202#discussion_r1187191082).
### Has this already been discussed elsewhere?
This is a minor feature, which does not need previous discussion elsewhere
### Links to previous discussion of this feature:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-120027
<!-- /gh-linked-prs -->
| 8477951a1c460ff9b7dc7c54e7bf9b66b1722459 | 28b148fb32e4548b461137d18d1ab6d366395d36 |
python/cpython | python__cpython-120018 | # Wrap some multi-line macros in `do { ... } while(0)` in `{codegen,compile}.c`
# Feature or enhancement
### Proposal:
Follow-up from the following observation (I'm sending a PR just after):
> There are similar patterns in `compile.c`, maybe it is also worth taking a look? 🤔
_Originally posted by @sobolevn in https://github.com/python/cpython/issues/119981#issuecomment-2145636560_
### Has this already been discussed elsewhere?
No response given
### Links to previous discussion of this feature:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-120018
<!-- /gh-linked-prs -->
| c222441fa7f89d448e476c252ba09be588568392 | bbe9b21d06c192a616bc1720ec8f7d4ccc16cab8 |
python/cpython | python__cpython-120102 | # The behavior of `multiprocessing.Queue.empty()` when the Queue is closed is not explained in the docs
# Documentation
https://docs.python.org/3/library/multiprocessing.html#multiprocessing.Queue.empty
It should be added that if `Queue.empty()` is called after `Queue.close()`, it will raise `OSError: handle is closed`.
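A minimal reproducer of the undocumented behavior:

```python
import multiprocessing

q = multiprocessing.Queue()
q.close()
try:
    q.empty()
except OSError as exc:
    print(exc)  # handle is closed
```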
<!-- gh-linked-prs -->
### Linked PRs
* gh-120102
* gh-120469
* gh-120470
<!-- /gh-linked-prs -->
| a3711afefa7a520b3de01be3b2367cb830d1fc84 | 6674c63dc7bb175acc997ddcb799e8dbbafd2968 |
python/cpython | python__cpython-120287 | # Invalid corner cases (resulting in nan+nanj) in _Py_c_prod()
# Bug report
### Bug description:
Reproducer:
```pycon
>>> z = 1e300+1j
>>> z*complex('(inf+infj)') # should be (nan+infj)
(nan+nanj)
```
<details>
<summary>c.f. C code</summary>
```c
#include <stdio.h>
#include <complex.h>
#include <math.h>
int main (void)
{
complex double z = CMPLX(1e300, 1);
z = CMPLX(INFINITY, INFINITY)*z;
printf("%e %e\n", creal(z), cimag(z));
return 0;
}
```
```
$ cc a.c && ./a.out
-nan inf
```
</details>
That quickly leads to problems in the optimized exponentiation algorithm (by squaring), which claims to be "more accurate":
```pycon
>>> z**5
(nan+nanj)
>>> z**5.0000001 # generic algorithm
Traceback (most recent call last):
File "<python-input-5>", line 1, in <module>
z**5.0000001
~^^~~~~~~~~~
OverflowError: complex exponentiation
>>> _testcapi._py_c_pow(z, 5) # for integer exponent it also signals overflow
((inf+infj), 34)
```
The following patch (mostly a literal translation of the ``_Cmultd()`` routine from C11 Annex G.5.2) should solve the issue.
<details>
<summary>a patch</summary>
```diff
diff --git a/Objects/complexobject.c b/Objects/complexobject.c
index 59c84f1359..1d9707895c 100644
--- a/Objects/complexobject.c
+++ b/Objects/complexobject.c
@@ -53,11 +53,48 @@ _Py_c_neg(Py_complex a)
}
Py_complex
-_Py_c_prod(Py_complex a, Py_complex b)
+_Py_c_prod(Py_complex z, Py_complex w)
{
Py_complex r;
- r.real = a.real*b.real - a.imag*b.imag;
- r.imag = a.real*b.imag + a.imag*b.real;
+ double a = z.real, b = z.imag, c = w.real, d = w.imag;
+ double ac = a*c, bd = b*d, ad = a*d, bc = b*c;
+
+ r.real = ac - bd;
+ r.imag = ad + bc;
+
+ if (isnan(r.real) && isnan(r.imag)) {
+ /* Recover infinities that computed as (nan+nanj) */
+ int recalc = 0;
+ if (isinf(a) || isinf(b)) { /* z is infinite */
+ /* "Box" the infinity and change nans in the other factor to 0 */
+ a = copysign(isinf(a) ? 1.0 : 0.0, a);
+ b = copysign(isinf(b) ? 1.0 : 0.0, b);
+ if (isnan(c)) c = copysign(0.0, c);
+ if (isnan(d)) d = copysign(0.0, d);
+ recalc = 1;
+ }
+ if (isinf(c) || isinf(d)) { /* w is infinite */
+ /* "Box" the infinity and change nans in the other factor to 0 */
+ c = copysign(isinf(c) ? 1.0 : 0.0, c);
+ d = copysign(isinf(d) ? 1.0 : 0.0, d);
+ if (isnan(a)) a = copysign(0.0, a);
+ if (isnan(b)) b = copysign(0.0, b);
+ recalc = 1;
+ }
+ if (!recalc && (isinf(ac) || isinf(bd) || isinf(ad) || isinf(bc))) {
+ /* Recover infinities from overflow by changing nans to 0 */
+ if (isnan(a)) a = copysign(0.0, a);
+ if (isnan(b)) b = copysign(0.0, b);
+ if (isnan(c)) c = copysign(0.0, c);
+ if (isnan(d)) d = copysign(0.0, d);
+ recalc = 1;
+ }
+ if (recalc) {
+ r.real = Py_INFINITY*(a*c - b*d);
+ r.imag = Py_INFINITY*(a*d + b*c);
+ }
+ }
+
return r;
}
```
c.f. clang's version:
https://github.com/llvm/llvm-project/blob/4973ad47181710d2a69292018cad7bc6f95a6c1a/libcxx/include/complex#L712-L792
</details>
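As a sketch, the recovery logic in the patch can be translated to Python (illustrative only; the actual fix lives in the C implementation of ``_Py_c_prod()``):

```python
import math

def c_prod(z, w):
    """Complex product with C11 Annex G.5.2 style recovery of
    infinities that the naive formula turns into (nan+nanj)."""
    a, b = z.real, z.imag
    c, d = w.real, w.imag
    ac, bd, ad, bc = a * c, b * d, a * d, b * c
    real, imag = ac - bd, ad + bc
    if math.isnan(real) and math.isnan(imag):
        recalc = False
        if math.isinf(a) or math.isinf(b):  # z is infinite
            # "Box" the infinity and change nans in the other parts to 0
            a = math.copysign(1.0 if math.isinf(a) else 0.0, a)
            b = math.copysign(1.0 if math.isinf(b) else 0.0, b)
            if math.isnan(c):
                c = math.copysign(0.0, c)
            if math.isnan(d):
                d = math.copysign(0.0, d)
            recalc = True
        if math.isinf(c) or math.isinf(d):  # w is infinite
            c = math.copysign(1.0 if math.isinf(c) else 0.0, c)
            d = math.copysign(1.0 if math.isinf(d) else 0.0, d)
            if math.isnan(a):
                a = math.copysign(0.0, a)
            if math.isnan(b):
                b = math.copysign(0.0, b)
            recalc = True
        if not recalc and any(map(math.isinf, (ac, bd, ad, bc))):
            # Recover infinities lost to overflow by changing nans to 0
            a, b, c, d = (x if not math.isnan(x) else math.copysign(0.0, x)
                          for x in (a, b, c, d))
            recalc = True
        if recalc:
            real = math.inf * (a * c - b * d)
            imag = math.inf * (a * d + b * c)
    return complex(real, imag)

print(c_prod(complex(float("nan"), math.inf), 1 + 1j))  # (-inf+infj)
```

With the naive formula, the example in the last line would produce `(nan+nanj)`; the recovery "boxes" the infinite factor and recomputes, so the infinity survives.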
Also, maybe the ``_Py_c_prod()`` code should set errno on overflow, just as ``_Py_c_abs()`` does. E.g., with the above version we have:
```pycon
>>> z**5
Traceback (most recent call last):
File "<python-input-1>", line 1, in <module>
z**5
~^^~
OverflowError: complex exponentiation
>>> z*z*z*z*z
(-inf+infj)
```
If this issue does make sense, I'll provide a patch.
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-120287
<!-- /gh-linked-prs -->
| 8b7c194c7bf7e547e4f6317528f0dcb9344c18c7 | e991ac8f2037d78140e417cc9a9486223eb3e786 |
python/cpython | python__cpython-120000 | # Race condition in `_Py_ExplicitMergeRefcount`
# Bug report
There is a subtle possible race condition in `_Py_ExplicitMergeRefcount`: we set `ob_ref_local` and `ob_tid` to zero *after* writing the merged refcount to `ob_ref_shared`.
That's not safe, because another thread might deallocate the object after we merge the refcount. For example:
* Assume that the merged refcount is `1`
* Some other thread calls `Py_DECREF()` and immediately frees the object
* We write zero to `ob_ref_local` and `ob_tid` -- BUG!
https://github.com/python/cpython/blob/41c1cefbae71d687d1a935233b086473df65e15c/Objects/object.c#L413-L422
<!-- gh-linked-prs -->
### Linked PRs
* gh-120000
* gh-120073
<!-- /gh-linked-prs -->
| 4055577221f5f52af329e87f31d81bb8fb02c504 | 109e1082ea92f89d42cd70f2cc7ca6fba6be9bab |
python/cpython | python__cpython-119982 | # Wrap some multi-line macros in ``do { ... } while(0)`` in ``symtable.c``
# Feature or enhancement
### Proposal:
While I was implementing #119976, I observed that some macros are not guarded against problematic expansions because their content is wrapped only in `{}` and not in the classical `do { ... } while(0)` construction.
For instance, the current `VISIT` macro sometimes requires the semicolon and sometimes not, depending on how it is used, which I find a bit confusing. I think it's preferable to always require adding the `;`.
### Has this already been discussed elsewhere?
No response given
### Links to previous discussion of this feature:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-119982
<!-- /gh-linked-prs -->
| 153b118b78588209850cc2a4cbc977f193a3ab6e | 6acb32fac3511c1d5500cac66f1d6397dcdab835 |
python/cpython | python__cpython-119969 | # A small typo in Doc/c-api/monitoring.rst
# Documentation
The title of the page (https://docs.python.org/3.14/c-api/monitoring.html) should be "***Monitoring*** C API", not "***Monitorong*** C API"
<!-- gh-linked-prs -->
### Linked PRs
* gh-119969
* gh-119971
<!-- /gh-linked-prs -->
| cae4c80714e7266772025676977e2a1b98cdcd7b | d7fcaa73b71f4c49c1b24cac04c9b6f1cf69b944 |
python/cpython | python__cpython-119962 | # Tests badge in README shows incorrect status
In the repo README, the tests badge currently shows as failing even though [all the recent tests workflows on main](https://github.com/python/cpython/actions/workflows/build.yml?query=branch%3Amain) are passing.

The URL currently being used is https://github.com/python/cpython/workflows/Tests/badge.svg, which is an old, now undocumented way to select a badge. I can't actually work out where that's finding a failing workflow but [apparently it's a bit dodgy](https://github.com/badges/shields/issues/8146#issuecomment-1262572065) so moving to the new format based on the file name, https://github.com/python/cpython/actions/workflows/build.yml/badge.svg, seems like a good idea, and seems to fix the issue.
PR Incoming
<!-- gh-linked-prs -->
### Linked PRs
* gh-119962
<!-- /gh-linked-prs -->
| 8e6321efd72d12263398994e59c5216edcada3c0 | 0594a27e5f1d87d59fa8a761dd8ca9df4e42816d |
python/cpython | python__cpython-119978 | # RegexFlag parameters not described in most re methods.
# Documentation
In [https://docs.python.org/3/library/re.html](https://docs.python.org/3/library/re.html) (also 3.12, 3.13, and 3.14) the `flags` parameter is defined in the documentation for [`re.compile()`](https://docs.python.org/3/library/re.html#re.compile), but not for any other method that takes a `flags` parameter (`search, match, fullmatch, split, findall, finditer, sub, subn`).
I expected the documentation for each of those methods to at least specify the type of the `flags` parameter (presumably [`re.RegexFlag`](https://docs.python.org/3/library/re.html#re.RegexFlag)), with a link to the detailed description under the “Module Contents” section.
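For reference, the module-level functions do accept the same flag values as `re.compile()`, combinable with the `|` operator; e.g.:

```python
import re

# search/match/fullmatch/split/findall/finditer/sub/subn all take the
# same flags values as re.compile(): re.RegexFlag members, OR-ed together.
m = re.search("spam", "SPAM\nham", flags=re.IGNORECASE | re.MULTILINE)
print(m.group())  # SPAM
```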
<!-- gh-linked-prs -->
### Linked PRs
* gh-119978
* gh-120730
* gh-120908
<!-- /gh-linked-prs -->
| a86e6255c371e14cab8680dee979a7393b339ce5 | b8fb369193029d10059bbb5f760092071f3a9f5f |
python/cpython | python__cpython-119976 | # Type parameters: Incorrect text in SyntaxError for disallowed expression
# Bug report
### Bug description:
We disallow certain expressions (e.g., `yield`) in type parameter bounds, constraints, and defaults. But the error message always says it's a bound:
```pycon
>>> def f[T=(yield)](): pass
File "<python-input-0>", line 1
SyntaxError: yield expression cannot be used within a TypeVar bound
```
```
>>> def f[T: (int, (yield))](): pass
File "<python-input-2>", line 1
SyntaxError: yield expression cannot be used within a TypeVar bound
```
We could either add some machinery in the symbol table so it knows whether we're in a bound, constraints, or default, or just change the error message to something like "within a TypeVar bound, constraints, or default".
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-119976
* gh-120641
<!-- /gh-linked-prs -->
| 4bf17c381fb7b465f0f26aecb94a6c54cf9be2d3 | 274f844830898355f14d6edb6e71894a2f37e53c |
python/cpython | python__cpython-121341 | # Flaky test `test_asyncio_repl_is_ok` in `test_repl`
The test fails because running the asyncio REPL exits with:
```
Fatal Python error: _enter_buffered_busy: could not acquire lock for <_io.BufferedWriter name='<stderr>'> at interpreter shutdown, possibly due to daemon threads
```
I'm not sure if this is just a flaky test (i.e, `_enter_buffered_busy` failures are expected), or if it's a bug in the underlying repl.
The exit code `-6` means the process crashes due to an `abort()`, I think.
Example: https://github.com/python/cpython/actions/runs/9331095550/job/25685390386?pr=119908
<details>
<summary>Test Output</summary>
```
======================================================================
FAIL: test_asyncio_repl_is_ok (test.test_repl.TestInteractiveInterpreter.test_asyncio_repl_is_ok)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/runner/work/cpython/cpython-ro-srcdir/Lib/test/test_repl.py", line 199, in test_asyncio_repl_is_ok
assert_python_ok("-m", "asyncio")
~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^
File "/home/runner/work/cpython/cpython-ro-srcdir/Lib/test/support/script_helper.py", line 180, in assert_python_ok
return _assert_python(True, *args, **env_vars)
File "/home/runner/work/cpython/cpython-ro-srcdir/Lib/test/support/script_helper.py", line 165, in _assert_python
res.fail(cmd_line)
~~~~~~~~^^^^^^^^^^
File "/home/runner/work/cpython/cpython-ro-srcdir/Lib/test/support/script_helper.py", line 75, in fail
raise AssertionError("Process return code is %d\n"
...<13 lines>...
err))
AssertionError: Process return code is -6
command line: ['/home/runner/work/cpython/cpython-builddir/hypovenv/bin/python', '-X', 'faulthandler', '-I', '-m', 'asyncio']
stdout:
---
---
stderr:
---
asyncio REPL 3.14.0a0 (remotes/pull/119908/merge-dirty:43e023a, Jun 1 2024, 14:47:23) [GCC 11.4.0] on linux
Use "await" directly instead of "asyncio.run()".
Type "help", "copyright", "credits" or "license" for more information.
>>> import asyncio
Exception in thread Interactive thread:
Traceback (most recent call last):
File "/home/runner/work/cpython/cpython-ro-srcdir/Lib/asyncio/__main__.py", line 109, in run
raise OSError(errno.ENOTTY, "tty required", "stdin")
OSError: [Errno 25] tty required: 'stdin'
During handling of the above exception, another exception occurred:
exiting asyncio REPL...
Traceback (most recent call last):
Fatal Python error: _enter_buffered_busy: could not acquire lock for <_io.BufferedWriter name='<stderr>'> at interpreter shutdown, possibly due to daemon threads
Python runtime state: finalizing (tstate=0x000055910f31bbf0)
Current thread 0x00007fa49bcdb740 (most recent call first):
<no Python frame>
---
----------------------------------------------------------------------
Ran 8 tests in 3.[354](https://github.com/python/cpython/actions/runs/9331095550/job/25685390386?pr=119908#step:22:355)s
FAILED (failures=1)
test test_repl failed
```
</details>
<!-- gh-linked-prs -->
### Linked PRs
* gh-121341
* gh-121447
<!-- /gh-linked-prs -->
| 114389470ec3db457c589b3991b695258d23ce5a | 53e12025cd7d7ee46ce10cc8f1b722c55716b892 |
python/cpython | python__cpython-120467 | # [3.12] Lambda generators cause assertion failure: `python: Objects/genobject.c:400: gen_close: Assertion `exception_handler_depth > 0' failed.`
# Crash report
### What happened?
When CPython 3.12 is built with assertions enabled, the following snippet causes the interpreter to crash:
```python
x = lambda: (yield 3)
next(x())
```
```
$ ./python /tmp/LambdasTest.py
python: Objects/genobject.c:400: gen_close: Assertion `exception_handler_depth > 0' failed.
Aborted (core dumped)
```
This doesn't happen with 3.11 or with main. From a quick bisect:
- #115818 (eb4774d2b7f5d50cd3cb8dc5abce9f94ce54197e) broke it on 3.12
- #111459 (52cc4af6ae9002f11605f91b672746c127494efd) fixed it on main
This was triggered in the wild by Nuitka's test suite: https://github.com/Nuitka/Nuitka/issues/2893.
CC @iritkatriel
### CPython versions tested on:
3.12
### Operating systems tested on:
Linux
### Output from running 'python -VV' on the command line:
Python 3.12.3+ (heads/3.12:6d9677d78e, Jun 1 2024, 11:06:51) [GCC 14.1.1 20240516]
<!-- gh-linked-prs -->
### Linked PRs
* gh-120467
* gh-120658
* gh-120673
<!-- /gh-linked-prs -->
| 73dc1c678eb720c2ced94d2f435a908bb6d18566 | 2cf47389e26cb591342d07dad98619916d5a1b15 |
python/cpython | python__cpython-122217 | # Error exiting the new REPL with Ctrl+Z on Windows
# Bug report
### Bug description:
With the new REPL implementation on Windows, I'm getting an error from code that mistakenly tries to suspend the current process via `os.kill(os.getpid(), signal.SIGSTOP)` when exiting via Ctrl+Z, Enter. This raises the following `AttributeError`:
```python
os.kill(os.getpid(), signal.SIGSTOP)
^^^^^^^^^^^^^^
AttributeError: module 'signal' has no attribute 'SIGSTOP'
```
This is due to Ctrl+Z getting mapped to the "suspend" command. This key sequence has always been supported for exiting the REPL on Windows. It's also supported when reading from `io._WindowsConsoleIO` console-input files. Anything after Ctrl+Z at the start of a line gets ignored, yielding an empty read.
Note that the kernel on Windows has no support for POSIX signals. There's just a basic emulation of a few signals in the C runtime library. There's no support for `SIGSTOP`. It could be emulated, but it's not, and the GUI task manager hasn't implemented support for suspending and resuming processes. Thus on Windows you could just map Ctrl+Z to the "delete" command, and update the `delete` class to also exit the REPL if the raw event character is `"\x1a"`.
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Windows
<!-- gh-linked-prs -->
### Linked PRs
* gh-122217
* gh-122451
<!-- /gh-linked-prs -->
| d1a1bca1f0550a4715f1bf32b1586caa7bc4487b | d27a53fc02a87e76066fc4e15ff1fff3922a482d |
python/cpython | python__cpython-119880 | # Utilize last character gap for two-way periodic needles
# Feature or enhancement
### Proposal:
The two-way algorithm currently does not incorporate the *Horspool skip rule* when the needle is periodic. As part of my work, this led to difficulties in run-time prediction.
This rule is a significant determinant of run-time and a common factor of the two algorithms. If this rule were implemented for periodic needles, it might become possible to predict runtimes and implement a dynamic `adaptive_find`.
The image below shows different permutations of string-search inputs. The X-axis is of the format: `needle matching characters + needle non-matching characters`. All cases are defined so that they have a window for 100K iterations. Both actual and predicted runtimes are shown.
The period circled in red is where periodic needles are constructed; all of the other periods construct non-periodic needles.
The default algorithm always uses the *Horspool skip rule*, and adding it to the `two-way` algorithm makes both algorithms consistent and comparable.

Below are run-times for 3 points: one from the period circled in red, and one from each nearby period at the same point within the period.
```python
# Adjacent period
nd = '_' * 3 + '1' * 10 + '_'
hs = '_' * (100_000 + 5)
%timeit hs.find(nd) # 36.3 µs
# Target Period
nd = '_' + '1' * 10 + '_'
hs = '_' * (100_000 + 3)
%timeit hs.find(nd) # 388 µs
# Adjacent period on the other side
nd = '1' * 10 + '_'
hs = '_' * (100_000 + 2)
%timeit hs.find(nd) # 36.7 µs
```
Here is an example of the absence of the rule:
```
>>> '_111_' in '_____________'
###### Finding "_111_" in "_____________".
split: "_" + "111_"
===== Two-way: "_111_" in "_____________". =====
Needle is periodic.
> "_____________"
> "_111_"
Right half does not match.
> "_____________"
> "_111_"
Right half does not match.
> "_____________"
> "_111_"
Right half does not match.
> "_____________"
> "_111_"
Right half does not match.
> "_____________"
> "_111_"
Right half does not match.
> "_____________"
> "_111_"
Right half does not match.
> "_____________"
> "_111_"
Right half does not match.
> "_____________"
> "_111_"
Right half does not match.
> "_____________"
> "_111_"
```
Finally, same plot after inclusion of the rule for periodic needles:
<img width="736" alt="Screenshot 2024-05-31 at 22 06 44" src="https://github.com/python/cpython/assets/3577712/e104d262-fd48-4689-9537-5ee46cae749b">
### Has this already been discussed elsewhere?
This is a minor feature, which does not need previous discussion elsewhere
### Links to previous discussion of this feature:
There was no discussion on this particular issue, but this is one of the building blocks to achieve:
https://discuss.python.org/t/optimize-str-find-aka-fastsearch/54483/12
<!-- gh-linked-prs -->
### Linked PRs
* gh-119880
<!-- /gh-linked-prs -->
| a8f1152b70d707340b394689cd09aa0831da3601 | 8d63c8d47b9edd8ac2f0b395b2fa0ae5f571252d |
python/cpython | python__cpython-119858 | # pyrepl help (pydoc): "exit" does not exit, whereas "quit" does
When I enter Python help in the REPL, if I type "exit", I get:
```
Help on Quitter in module _sitebuiltins object:
exit = class Quitter(builtins.object)
| exit(name, eof)
|
| Methods defined here:
(...)
```
If I try exit(), I get an error:
```
help> exit()
No Python documentation found for 'exit()'.
Use help() to get the interactive help utility.
Use help(str) for help on the str class.
```
I propose supporting just typing "exit", similar to "quit", to quit the help.
<!-- gh-linked-prs -->
### Linked PRs
* gh-119858
* gh-119967
<!-- /gh-linked-prs -->
| 4223f1d828d3a3e1c8d803e3fdd420afd7d85faf | 1e5f615086d23c71a9701abe641b5241e4345234 |
python/cpython | python__cpython-119854 | # [C API] Split large object.h file: create refcount.h
The `Include/object.h` header file became quite big (1260 lines of C code). A large part of it is just reference count management. I propose adding a new `Include/refcount.h` header file to ease the navigation in `Include/object.h` and ease the maintenance of the C API.
<!-- gh-linked-prs -->
### Linked PRs
* gh-119854
* gh-119860
<!-- /gh-linked-prs -->
| 891c1e36f4e08da107443772a4eb50c72a83836d | 91601a55964fdb3c02b21fa3c8dc629daff2390f |
python/cpython | python__cpython-119843 | # Honor PyOS_InputHook in the new REPL
# Bug report
We are currently not calling `PyOS_InputHook`, if one is set, before blocking for input in the new REPL.
<!-- gh-linked-prs -->
### Linked PRs
* gh-119843
* gh-120066
<!-- /gh-linked-prs -->
| d9095194dde27eaabfc0b86a11989cdb9a2acfe1 | bf5e1065f4ec2077c6ca352fc1ad940a76d1f6c9 |
python/cpython | python__cpython-119839 | # Treat Fraction as a real value in mixed arithmetic operations with complex
In arithmetic operations, if one of the operands is a `Fraction` and the other operand is a `complex` or a `numbers.Complex`, the fraction is implicitly converted to a `complex`. Third-party types implementing the `numbers.Complex` interface can distinguish between real and complex operands. In the future we could also support mixed real-complex arithmetic for the builtin `complex` type. A fraction is a real value, so it makes sense to convert it to `float` instead of `complex` in such cases.
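A quick illustration of the current behavior: the fraction operand is converted to `complex` before the operation, even though it is a purely real value.

```python
from fractions import Fraction

# Fraction's operator fallbacks convert the fraction via complex(...)
# when the other operand is a complex instance.
z = Fraction(1, 3) * (1 + 2j)
print(type(z).__name__)  # complex
```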
<!-- gh-linked-prs -->
### Linked PRs
* gh-119839
<!-- /gh-linked-prs -->
| d7fcaa73b71f4c49c1b24cac04c9b6f1cf69b944 | 70934fb46982ad2ae677cca485a730b39635919c |
python/cpython | python__cpython-119938 | # Make `ntpath.abspath()` always return an absolute path
# Feature or enhancement
### Proposal:
`ntpath.abspath()` doesn't always return an absolute path:
```python
>>> import ntpath
>>> ntpath.abspath('C:\x00')
'C:\x00' # instead of 'C:\\Users\\wanne\\\x00'
>>> ntpath.abspath('\x00:')
'\x00:' # instead of '\x00:\\'
```
### Has this already been discussed elsewhere?
This is a minor feature, which does not need previous discussion elsewhere
### Links to previous discussion of this feature:
- #117587
<!-- gh-linked-prs -->
### Linked PRs
* gh-119938
* gh-127534
* gh-127535
<!-- /gh-linked-prs -->
| 4b00aba42e4d9440d22e399ec2122fe8601bbe54 | 5610860840aa71b186fc5639211dd268b817d65f |
python/cpython | python__cpython-119882 | # Python 3.13.0b1: suboptimal `pdb._post_mortem` behavior if `.pdbrc` exists
# Bug report
### Bug description:
`pdb.post_mortem` used to display the current stack entry before the command loop (which has been useful). With Python 3.13, this is no longer the case when there is a `.pdbrc` file.
The reason is the following code in `pdb.Pdb.interaction`:
```
# if we have more commands to process, do not show the stack entry
if not self.cmdqueue:
self.print_stack_entry(self.stack[self.curindex])
```
If a `.pdbrc` exists, `self.cmdqueue` contains the commands from this file and the current stack entry is not displayed.
I suggest the introduction of a new `interaction` parameter `show_stack_entry` (with default `False`) and to display the current stack entry if it has a true value. Set it to `True` in the `_post_mortem` call of `interaction`.
### CPython versions tested on:
3.13
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-119882
* gh-120533
* gh-120594
* gh-120919
* gh-120928
<!-- /gh-linked-prs -->
| ed60ab5fab6d187068cb3e0f0d4192ebf3a228b7 | 7fadfd82ebf6ea90b38cb3f2a046a51f8601a205 |
python/cpython | python__cpython-119822 | # Name lookup in annotation scopes in classes does not work with non-dict globals
# Bug report
### Bug description:
This works:
```python
class customdict(dict):
def __missing__(self, key):
return key
code = compile("type Alias = undefined", "test", "exec")
ns = customdict()
exec(code, ns)
Alias = ns["Alias"]
assert Alias.__value__ == "undefined"
```
But this does not:
```python
code = compile("class A: type Alias = undefined", "test", "exec")
ns = customdict()
exec(code, ns)
Alias = ns["A"].Alias
assert Alias.__value__ == "undefined"
```
This is a problem for PEP 649 because we're going to rely on a non-dict globals namespace.
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
macOS
<!-- gh-linked-prs -->
### Linked PRs
* gh-119822
* gh-119889
* gh-119890
* gh-119975
<!-- /gh-linked-prs -->
| 80a4e3899420faaa012c82b4e82cdb6675a6a944 | 2237946af0981c46dc7d3886477e425ccfb37f28 |
python/cpython | python__cpython-120030 | # Python 3.12+ breaks backwards compatibility for logging QueueHandler with some Queue classes
# Bug report
### Bug description:
Related bug: https://github.com/python/cpython/issues/111615
Related pull: https://github.com/python/cpython/pull/111638
Related [codes](https://github.com/python/cpython/blob/3.12/Lib/logging/config.py#L789-L804):
https://github.com/python/cpython/blob/bd0d97ce3497da35d17daa329c294f4cb87196ee/Lib/logging/config.py#L789-L804
#### reproducible example using Python 3.12.3
```python
import logging.config
def main(q):
config = {
'version': 1,
'handlers': {
'sink': {
'class': 'logging.handlers.QueueHandler',
'queue': q,
},
},
'root': {
'handlers': ['sink'],
},
}
logging.config.dictConfig(config)
if __name__ == '__main__':
import multiprocessing as mp
main(mp.Manager().Queue())
#import asyncio
#main(asyncio.Queue()) # broken too
```
error:
```
Traceback (most recent call last):
File "/usr/lib/python3.12/logging/config.py", line 581, in configure
handler = self.configure_handler(handlers[name])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.12/logging/config.py", line 801, in configure_handler
raise TypeError('Invalid queue specifier %r' % qspec)
TypeError: Invalid queue specifier <AutoProxy[Queue] object, typeid 'Queue' at 0x7f8b07189d90>
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/xxx/logging-queue-handler-bug.py", line 33, in <module>
main(mp.Manager().Queue())
File "/home/xxx/logging-queue-handler-bug.py", line 29, in main
logging.config.dictConfig(config)
File "/usr/lib/python3.12/logging/config.py", line 914, in dictConfig
dictConfigClass(config).configure()
File "/usr/lib/python3.12/logging/config.py", line 588, in configure
raise ValueError('Unable to configure handler '
ValueError: Unable to configure handler 'sink'
```
#### Queue classes to check for logging QueueHandler
* queue.Queue(), `<queue.Queue at 0x7fb86eeef170>`, **works**.
* multiprocessing
- multiprocessing.Queue(), `<multiprocessing.queues.Queue at 0x7fb871790500>`, **works**.
- multiprocessing.Manager().Queue() `<AutoProxy[Queue] object, typeid 'Queue' at 0x7fb86ef4a840>`, **broken**.
Its class is `multiprocessing.managers.AutoProxy[Queue]`, a subclass of `multiprocessing.managers.BaseProxy`.
* asyncio.queues.Queue() `<Queue at 0x7fb86f0be4e0 maxsize=0>`, **broken**.
* any other Queue?
#### discuss: how to fix
Check all Queue classes mentioned above in
https://github.com/python/cpython/blob/bd0d97ce3497da35d17daa329c294f4cb87196ee/Lib/logging/config.py#L792
or just focus on handling the special cases (str and dict), while leaving other cases unchecked (e.g., queue.Queue, multiprocessing.queues.Queue).
### CPython versions tested on:
3.12
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-120030
* gh-120034
* gh-120035
* gh-120067
* gh-120071
* gh-120072
* gh-120090
* gh-120092
* gh-120093
* gh-120476
* gh-120531
* gh-120532
<!-- /gh-linked-prs -->
| 99d945c0c006e3246ac00338e37c443c6e08fc5c | dce14bb2dce7887df40ae5c13b0d13e0dafceff7 |
python/cpython | python__cpython-124006 | # Update memory management docs for free-threaded build
The memory management docs say:
> There is no hard requirement to use the memory returned by the allocation functions belonging to a given domain for only the purposes hinted by that domain (although this is the recommended practice). For example, one could use the memory returned by [PyMem_RawMalloc()](https://docs.python.org/3/c-api/memory.html#c.PyMem_RawMalloc) for allocating Python objects or the memory returned by [PyObject_Malloc()](https://docs.python.org/3/c-api/memory.html#c.PyObject_Malloc) for allocating memory for buffers.
https://docs.python.org/3/c-api/memory.html#allocator-domains
We should update this to account for the free-threaded build, which requires that the "object" domain be used only for allocating Python objects (and that Python objects be allocated only via the "object" domain).
See also: https://peps.python.org/pep-0703/#memory-management
<!-- gh-linked-prs -->
### Linked PRs
* gh-124006
* gh-124054
<!-- /gh-linked-prs -->
| e6bb1a2b28ac8aed1e1b7f1c74221ca1d02a7235 | bb904e063d0cbe4c7c83ebfa5fbed2d9c4980a64 |
python/cpython | python__cpython-119800 | # `_Py_NewRefWithLock` missing `_Py_IncRefTotal`
# Bug report
@encukou noticed that the free-threaded buildbots were reporting a lot of negative refcount deltas (among other problems). This is because `_Py_NewRefWithLock` is missing a `_Py_IncRefTotal` (although it has a `_Py_INCREF_STAT_INC`).
https://github.com/python/cpython/blob/1c04c63ced5038e8f45a2aac7dc45f0815a4ddc5/Include/internal/pycore_object.h#L494-L514
<!-- gh-linked-prs -->
### Linked PRs
* gh-119800
* gh-119878
<!-- /gh-linked-prs -->
| 879d43b705faab0c59f1a6a0042e286f39f3a4ef | 9bc6045842ebc91ec48ab163a9e1e8644231607c |
python/cpython | python__cpython-120471 | # Add a "strict" option for map()
These two examples silently truncate the unmatched inputs:
```python
>>> list(map(pow, [1, 2, 3], [2, 2, 2, 2]))
[1, 4, 9]
>>> list(map(pow, [1, 2, 3, 4, 5], [2, 2, 2, 2]))
[1, 4, 9, 16]
```
The current workaround is:
```
starmap(pow, zip(vec1, vec2, strict=True))
```
Ideally, map() should support this directly. The reasoning is the same as what motivated the `strict` option for `zip()`.
<!-- gh-linked-prs -->
### Linked PRs
* gh-120471
* gh-126407
<!-- /gh-linked-prs -->
| 3032fcd90ecb745b737cbc93f694f9a802062a3a | bfc1d2504c183a9464e65c290e48516d176ea41f |
python/cpython | python__cpython-119792 | # Fix new Tkinter tests for wantobjects=0
Some of the recently added Tkinter tests for `PhotoImage` do not pass if `tkinter.wantobjects=0`, because `PhotoImage.get()` returns a string like `"255 0 0"` instead of a 3-tuple of integers like `(255, 0, 0)`.
Another solution is to make `PhotoImage.get()` always return a 3-tuple of integers, and that is the correct solution, but I am not sure it can be backported. It could break code that uses `tkinter.wantobjects=0` and manually parses the returned string.
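A test-side workaround (a sketch with a hypothetical helper name `normalize_pixel`, assuming the tests only need to compare pixel values) is to normalize both return forms before comparing:

```python
def normalize_pixel(value):
    # PhotoImage.get() returns a tuple like (255, 0, 0) when wantobjects
    # is enabled, but a string like "255 0 0" with tkinter.wantobjects=0;
    # accept both forms and return a tuple of ints.
    if isinstance(value, str):
        return tuple(map(int, value.split()))
    return tuple(value)

print(normalize_pixel("255 0 0"))   # (255, 0, 0)
print(normalize_pixel((255, 0, 0))) # (255, 0, 0)
```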
<!-- gh-linked-prs -->
### Linked PRs
* gh-119792
* gh-119794
* gh-119798
<!-- /gh-linked-prs -->
| e875c2d752fed0a8d16958dc7b331e66a2476247 | e91fc11fafb657cab88c5e6f13822432a3b9dc64 |