repo | instance_id | problem_statement | merge_commit | base_commit |
|---|---|---|---|---|
python/cpython | python__cpython-109026 | # `frame.f_locals` for list/dict/set comprehension in module/class scope doesn't contain iteration variables
# Bug report
### Checklist
- [X] I am confident this is a bug in CPython, not a bug in a third-party project
- [X] I have searched the [CPython issue tracker](https://github.com/python/cpython/issues?q=is%3Aissue+sort%3Acreated-desc),
and am confident this bug has not been reported before
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Linux
### Output from running 'python -VV' on the command line:
Python 3.13.0a0 (heads/main:dd32611f4f, Aug 31 2023, 20:23:23) [GCC 10.2.1 20210110]
### A clear and concise description of the bug:
The following assertion fails with PEP 709 (c3b595e73efac59360d6dc869802abc752092460):
```python
import sys
f_locs = None
[f_locs := sys._getframe().f_locals for a in range(1)]
assert 'a' in f_locs
```
This was discovered when trying to inspect listcomp iteration variable under `pdb`, as it makes inspection of these variables impossible:
```
> /home/radislav/projects/cpython/listcomp.py(1)<module>()
-> [print(a) for a in range(1)]
(Pdb) s
0
> /home/radislav/projects/cpython/listcomp.py(1)<module>()
-> [print(a) for a in range(1)]
(Pdb) print(a)
*** NameError: name 'a' is not defined
(Pdb) s
/home/radislav/projects/cpython/listcomp.py:1: RuntimeWarning: assigning None to unbound local 'a'
[print(a) for a in range(1)]
--Return--
> /home/radislav/projects/cpython/listcomp.py(1)<module>()->None
-> [print(a) for a in range(1)]
```
As far as I can see, this happens because iteration variables are now stored in "fast hidden locals" that aren't included in `frame->f_locals` by the `_PyFrame_FastToLocalsWithError(frame)` call, and thus are not accessible through the `PyFrameObject` `f_locals` attribute.
cc @carljm
<!-- gh-linked-prs -->
### Linked PRs
* gh-109026
* gh-109097
<!-- /gh-linked-prs -->
| f2584eade378910b9ea18072bb1dab3dd58e23bb | b72251de930c8ec6893f1b3f6fdf1640cc17dfed |
python/cpython | python__cpython-108745 | # types.MemberDescriptorType docs should mention it is used for __slots__
The documentation for types.MemberDescriptorType in the types module indicates that it is used in extension modules. This is true, but it fails to mention the other common use of the class: as the type of a slot.
For those who aren't aware, if we define a class like so:
```
class Foo:
    __slots__ = ('bar',)
```
then it will create an attribute `bar` on the class. Calling `type(Foo.bar)` will return types.MemberDescriptorType.
<!-- gh-linked-prs -->
### Linked PRs
* gh-108745
* gh-114536
* gh-114537
<!-- /gh-linked-prs -->
| 6888cccac0776d965cc38a7240e1bdbacb952b91 | 191531f352ce387a2d3a61544fb6feefab754d4a |
python/cpython | python__cpython-108734 | # _testinternalcapi.get_counter_optimizer() segfaults
# Crash report
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
macOS
### Output from running 'python -VV' on the command line:
Python 3.13.0a0 (heads/main:13a00078b8, Aug 31 2023, 18:44:11) [Clang 14.0.3 (clang-1403.0.22.14.1)]
### What happened?
```python
# cat zz.py
__lltrace__ = True
import _testinternalcapi
def f():
    _testinternalcapi.get_counter_optimizer()
f()
```
### Error messages
```
% ./python.exe zz.py
Resuming frame for 'f' in module '__main__'
stack=[]
0: RESUME 0
stack=[]
2: LOAD_GLOBAL 0
stack=[<module at 0x1073f2150>]
12: LOAD_ATTR 2
stack=[<builtin_function_or_method at 0x1073f3650>]
32: PUSH_NULL
stack=[<builtin_function_or_method at 0x1073f3650>, <nil>]
34: CALL 0
stack=[<Counter optimizer at 0x1073a47c0>]
42: POP_TOP
zsh: segmentation fault ./python.exe zz.py
```
<!-- gh-linked-prs -->
### Linked PRs
* gh-108734
<!-- /gh-linked-prs -->
| 844f4c2e12a7c637d1de93dbbb0718be06553510 | 044b8b3b6a65e6651b161e3badfa5d57c666db19 |
python/cpython | python__cpython-109344 | # Add lightweight locking C API
# Feature or enhancement
Implementing [PEP 703](https://peps.python.org/pep-0703/) will require adding additional fine grained locks and other synchronization mechanisms. For good performance, it's important that these locks be "lightweight" in the sense that they don't take up much space and don't require memory allocations to create. Additionally, it's important that these locks are fast in the common uncontended case, perform reasonably under contention, and avoid thread starvation.
Platform provided mutexes like `pthread_mutex_t` are large (40 bytes on x86-64 Linux) and our current cross-platform wrappers ([[1]](https://github.com/python/cpython/blob/2a3926fa51b7264787d5988abf083d8c4328f4ad/Python/thread_pthread.h#L385), [[2]](https://github.com/python/cpython/blob/2a3926fa51b7264787d5988abf083d8c4328f4ad/Python/thread_pthread.h#L567), [[3]](https://github.com/python/cpython/blob/2a3926fa51b7264787d5988abf083d8c4328f4ad/Python/thread_nt.h#L38)) require additional memory allocations.
I'm proposing a lightweight mutex (`PyMutex`) along with internal-only APIs used for building an efficient `PyMutex` as well as other synchronization primitives. The design is based on WebKit's `WTF::Lock` and `WTF::ParkingLot`, which is described in detail in the [Locking in WebKit](https://webkit.org/blog/6161/locking-in-webkit/) blog post. (The design has also been ported to Rust in the [`parking_lot`](https://docs.rs/parking_lot/latest/parking_lot/) crate.)
### Public API
The public API (in `Include/cpython`) would provide a `PyMutex` that occupies one byte and can be zero-initialized:
```c
typedef struct PyMutex { uint8_t state; } PyMutex;
void PyMutex_Lock(PyMutex *m);
void PyMutex_Unlock(PyMutex *m);
```
I'm proposing making `PyMutex` public because it's useful in C extensions such as NumPy (as opposed to C++ extensions), where it can be a pain to wrap cross-platform synchronization primitives.
### Internal APIs
The internal-only API (in `Include/internal`) would provide APIs for building `PyMutex` and other synchronization primitives. The main addition is a compare-and-wait primitive, like Linux's [`futex`](https://man7.org/linux/man-pages/man2/futex.2.html) or Windows' [`WaitOnAddress`](https://learn.microsoft.com/en-us/windows/win32/api/synchapi/nf-synchapi-waitonaddress).
```c
int _PyParkingLot_Park(const void *address, const void *expected, size_t address_size,
_PyTime_t timeout_ns, void *arg, int detach)
```
The API closely matches `WaitOnAddress` but with two additions: `arg` is an optional, arbitrary pointer passed to the wake-up thread and `detach` indicates whether to release the GIL (or [detach](https://peps.python.org/pep-0703/#thread-states) in `--disable-gil` builds) while waiting. The additional `arg` pointer allows the locks to be only one byte (instead of at least pointer sized), since it allows passing additional (stack allocated) data between the waiting and the waking thread.
The wakeup API looks like:
```c
// wake up all threads waiting on `address`
void _PyParkingLot_UnparkAll(const void *address);
// or wake up a single thread
_PyParkingLot_Unpark(address, unpark, {
// code here is executed after the thread to be woken up is identified but before we wake it up
void *arg = unpark->arg;
int more_waiters = unpark->more_waiters;
...
});
```
`_PyParkingLot_Unpark` is currently a macro that takes a code block. For `PyMutex` we need to update the mutex bits after we identify the thread but before we actually wake it up.
cc @ericsnowcurrently
<!-- gh-linked-prs -->
### Linked PRs
* gh-109344
* gh-109583
* gh-112222
* gh-116483
<!-- /gh-linked-prs -->
| 0c89056fe59ac42f09978582479d40e58a236856 | 0a31ff0050eec5079fd4c9cafd33b4e3e9afd9ab |
python/cpython | python__cpython-108718 | # `os.DirEntry.is_junction` can be ~twice as fast
Right now it is implemented as:
https://github.com/python/cpython/blob/79823c103b66030f10e07e04a5462f101674a4fc/Modules/posixmodule.c#L14566-L14583
Removing the unused `defining_class: defining_class` from the clinic input has one big advantage (aside from the fact that it is unused in the first place): it speeds up the `is_junction` call.
The exact benchmark is system-dependent; here are my numbers (note that I am on macOS and always get `False` as the result).
Setup:
- `./configure --with-pydebug && make -j`
- Install `pyperf`
- `pyperf timeit --setup 'import os; d = os.scandir("."); d1 = next(d); d.close()' 'd1.is_junction()'`
With `defining_class`:
```
(.venv) ~/Desktop/cpython main ✔ 1 ⚠️
» pyperf timeit --setup 'import os; d = os.scandir("."); d1 = next(d); d.close()' 'd1.is_junction()'
.....................
Mean +- std dev: 46.1 ns +- 0.5 ns
```
Without:
```
(.venv) ~/Desktop/cpython main ✗
» pyperf timeit --setup 'import os; d = os.scandir("."); d1 = next(d); d.close()' 'd1.is_junction()'
.....................
Mean +- std dev: 25.0 ns +- 0.3 ns
```
This happens because the `is_junction` definition changes from `METH_METHOD|METH_FASTCALL|METH_KEYWORDS` to `METH_NOARGS`.
I have a PR ready.
<!-- gh-linked-prs -->
### Linked PRs
* gh-108718
<!-- /gh-linked-prs -->
| 65a2bce70421197605feeed3351a4577462aae06 | 6f8411cfd68134ccae01b0b4cb332578008a69e3 |
python/cpython | python__cpython-108722 | # Remove deep-freezing of code objects and modules.
Deep-freezing, as implemented has a number of problems:
* It is slow to build
* It does not fit into the normal build system, `make` will not recreate deep-frozen modules
* Deep-freezing only makes sense if the objects are immutable, but code objects are not (at the VM level)
* It is [a bit slower](https://github.com/faster-cpython/benchmarking-public/tree/main/results/bm-20230827-3.13.0a0-1ad9bed)
* It gets in the way of other optimizations, notably faster loading from `pyc` files.
So let's remove it.
But let's keep the ability to deep-freeze objects.
The ability to deep-freeze objects into the executable is valuable; we already do it with strings, and deep-freezing whole immutable object graphs like the `sys` and `builtins` modules could improve startup considerably.
<!-- gh-linked-prs -->
### Linked PRs
* gh-108722
* gh-110078
* gh-116919
* gh-117141
<!-- /gh-linked-prs -->
| 15d4c9fabce67b8a1b5bd9dec9612014ec18291a | 00cf626cd41f806062c22a913b647b4efa84c476 |
python/cpython | python__cpython-108698 | # test_import leaks references
# Bug report
### Checklist
- [X] I am confident this is a bug in CPython, not a bug in a third-party project
- [X] I have searched the [CPython issue tracker](https://github.com/python/cpython/issues?q=is%3Aissue+sort%3Acreated-desc),
and am confident this bug has not been reported before
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Linux
### Output from running 'python -VV' on the command line:
Python 3.13.0a0 (bisect/good-edb569e8e1f469ac5ce629cc647ab169d895de41-1-gfeb9a49c9c-dirty:feb9a49) [GCC 8.5.0 20210514 (Red Hat 8.5.0-20)]
### A clear and concise description of the bug:
Reproduce with `./python -m test -R: -v test_import`.
<!-- gh-linked-prs -->
### Linked PRs
* gh-108698
<!-- /gh-linked-prs -->
| 157b89e55ed1ec12418a4853a0ba10eabc11ce60 | d52c4482a82f3f98f1a78efa948144a1fe3c52b2 |
python/cpython | python__cpython-108704 | # 3.11.5 Regression: Flags with custom __new__ can't be iterated
# Bug report
### Checklist
- [X] I am confident this is a bug in CPython, not a bug in a third-party project
- [X] I have searched the [CPython issue tracker](https://github.com/python/cpython/issues?q=is%3Aissue+sort%3Acreated-desc),
and am confident this bug has not been reported before
### CPython versions tested on:
3.11
### Operating systems tested on:
Linux
### Output from running 'python -VV' on the command line:
Python 3.11.5 (main, Aug 25 2023, 23:47:33) [GCC 12.2.0]
### A clear and concise description of the bug:
The behavior of `Flag` changed in Python 3.11.5. When a `Flag` with a custom `__new__` is used in combination with the functional API, its members can't be retrieved by iterating over the class.
MWE:
```python
from enum import IntFlag

Perm = IntFlag('Perm', {'R': 4, 'W': 2, 'X': 1})
print(tuple(Perm))

class LabeledFlag(IntFlag):
    def __new__(cls, value: int, label: str):
        obj = super().__new__(cls, value)
        obj._value_ = value
        obj.label = label
        return obj

LabeledPerm = LabeledFlag('LabeledPerm', {'R': (4, 'Read'), 'W': (2, 'Write'), 'X': (1, 'Exec')})
print(tuple(LabeledPerm))
```
The output in python 3.11.4:
```
(<Perm.R: 4>, <Perm.W: 2>, <Perm.X: 1>)
(<LabeledPerm.R: 4>, <LabeledPerm.W: 2>, <LabeledPerm.X: 1>)
```
The output in python 3.11.5:
```
(<Perm.R: 4>, <Perm.W: 2>, <Perm.X: 1>)
()
```
I suspect this commit introduced the regression: https://github.com/python/cpython/commit/59f009e5898a006cdc8f5249be589de6edfe5cd0
A workaround for 3.11.5:
```python
class LabeledFlag(IntFlag):
    def __new__(cls, value: int, label: str):
        obj = super().__new__(cls, value)
        obj._value_ = value
        obj.label = label
        return obj

    @classmethod
    def _missing_(cls, value):
        pseudo_member = super(LabeledFlag, cls)._missing_(value)
        if value in cls._value2member_map_ and pseudo_member.name is None:
            cls._value2member_map_.pop(value)
        return pseudo_member
```
<!-- gh-linked-prs -->
### Linked PRs
* gh-108704
* gh-108733
* gh-108739
<!-- /gh-linked-prs -->
| d48760b2f1e28dd3c1a35721939f400a8ab619b8 | 13a00078b81776b23b0b6add69b848382240d1f2 |
python/cpython | python__cpython-108670 | # Documentation for `TestResult.collectedDurations` is outdated
# Documentation
Following #106888, the documentation for `TestResult.collectedDurations` is outdated as it's now a 2-tuple of `str` and `float` instances. Noticed in python/typeshed#10636. I will prepare a PR. Cc @bityob.
<!-- gh-linked-prs -->
### Linked PRs
* gh-108670
* gh-108672
<!-- /gh-linked-prs -->
| 6c484c39beeb66d40ef0a73cc4f1e900ea498cfa | 400a1cebc743515e40157ed7af86e48d654290ce |
python/cpython | python__cpython-108659 | # Comprehension iteration variable overwrite a variable of the same name in the outer scope inside a function with try..except
# Bug report
### Checklist
- [X] I am confident this is a bug in CPython, not a bug in a third-party project
- [X] I have searched the [CPython issue tracker](https://github.com/python/cpython/issues?q=is%3Aissue+sort%3Acreated-desc),
and am confident this bug has not been reported before
### CPython versions tested on:
3.12
### Operating systems tested on:
macOS
### Output from running 'python -VV' on the command line:
Python 3.12.0rc1 (main, Aug 29 2023, 19:34:25) [Clang 12.0.5 (clang-1205.0.22.9)]
### A clear and concise description of the bug:
In the [release notes](https://docs.python.org/3.12/whatsnew/3.12.html#pep-709-comprehension-inlining) for 3.12.0rc1 there is the following paragraph about comprehension inlining:
> Comprehension iteration variables remain isolated; they don’t overwrite a variable of the same name in the outer scope, nor are they visible after the comprehension. This isolation is now maintained via stack/locals manipulation, not via separate function scope.
I found a case where a comprehension iteration variable overwrites a variable of the same name in the outer scope:
```python
def foo(value):
    try:
        {int(key): value for key, value in value.items()}
    except:
        print(repr(value))  # will print 'baz' instead of {'bar': 'baz'}

foo({'bar': 'baz'})
```
However, if you run this code outside of the function, isolation works:
```python
value = {'bar': 'baz'}
try:
    {int(key): value for key, value in value.items()}
except:
    print(repr(value))  # ok, it will print {'bar': 'baz'}
```
<!-- gh-linked-prs -->
### Linked PRs
* gh-108659
* gh-108700
<!-- /gh-linked-prs -->
| d52c4482a82f3f98f1a78efa948144a1fe3c52b2 | f59c66e8c8a6715c585cf2cdf1f99715480b4da1 |
python/cpython | python__cpython-108639 | # Test suite expects _stat extension to be available
# Bug report
### Checklist
- [X] I am confident this is a bug in CPython, not a bug in a third-party project
- [X] I have searched the [CPython issue tracker](https://github.com/python/cpython/issues?q=is%3Aissue+sort%3Acreated-desc),
and am confident this bug has not been reported before
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Linux
### Output from running 'python -VV' on the command line:
_No response_
### A clear and concise description of the bug:
The ``_stat`` extension is specific to CPython. For example, PyPy has no ``_stat`` module.
But the Python test suite expects the ``_stat`` module to be available: test_inspect, test_pydoc, and test_tarfile fail when it's missing.
Disable compilation of the ``_stat`` extension to reproduce the issue:
```diff
diff --git a/Modules/Setup.bootstrap.in b/Modules/Setup.bootstrap.in
index 8ef0f203a8..a1f6b8cc85 100644
--- a/Modules/Setup.bootstrap.in
+++ b/Modules/Setup.bootstrap.in
@@ -29,7 +29,7 @@ _abc _abc.c
_functools _functoolsmodule.c
_locale _localemodule.c
_operator _operator.c
-_stat _stat.c
+#_stat _stat.c
_symtable symtablemodule.c
# for systems without $HOME env, used by site._getuserbase()
```
Example:
```
$ ./python -c 'import _stat'
ModuleNotFoundError: No module named '_stat'
```
<!-- gh-linked-prs -->
### Linked PRs
* gh-108639
* gh-108689
<!-- /gh-linked-prs -->
| b62a76043e543fbb15cab7e9e8874a798bc600d2 | 7659128b9d7a30ddbcb063bc12e2ddb0f1f119e0 |
python/cpython | python__cpython-108636 | # Use positional-only parameters for some special methods
# Feature or enhancement
### Has this already been discussed elsewhere?
No response given
### Links to previous discussion of this feature:
_No response_
### Proposal:
Most special methods (double-underscored on both sides) are only called with positional arguments. But some implementations forgot to add `/` in the Argument Clinic description. This makes the code unnecessarily complex and prevents using the efficient limited C API for them (Argument Clinic does not yet fully support the limited C API for keyword parameters).
We can just add the missing `/`. No deprecation period is needed, as these functions were never intended to be called with keyword arguments.
<!-- gh-linked-prs -->
### Linked PRs
* gh-108636
<!-- /gh-linked-prs -->
| 9205dfeca54cb24aa9a984c12a6419d71635be9f | f8be2e262c5c2fdbc9721210ae1cb46edc16db82 |
python/cpython | python__cpython-108663 | # Py_TRACE_REFS is not compatible with Py_LIMITED_API
# Bug report
### Checklist
- [X] I am confident this is a bug in CPython, not a bug in a third-party project
- [X] I have searched the [CPython issue tracker](https://github.com/python/cpython/issues?q=is%3Aissue+sort%3Acreated-desc),
and am confident this bug has not been reported before
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Linux
### Output from running 'python -VV' on the command line:
_No response_
### A clear and concise description of the bug:
Trying to build a C extension targetting the limited C API (defining Py_LIMITED_API) fails if Python is configured with ``./configure --with-trace-refs``. Example with PR #108573:
```
In file included from ./Include/Python.h:44,
from ./Modules/_stat.c:16:
./Include/object.h:62:4: error: #error Py_LIMITED_API is incompatible with Py_TRACE_REFS
62 | # error Py_LIMITED_API is incompatible with Py_TRACE_REFS
| ^~~~~
make: *** [Makefile:3194: Modules/_stat.o] Error 1
```
The ``#error`` comes from ``Include/object.h``:
```c
#if defined(Py_LIMITED_API) && defined(Py_TRACE_REFS)
# error Py_LIMITED_API is incompatible with Py_TRACE_REFS
#endif
```
The problem is that the PyObject ABI is different: Py_TRACE_REFS adds two members to PyObject structure:
```c
PyObject *_ob_next;
PyObject *_ob_prev;
```
One solution to make Py_TRACE_REFS compatible with Py_LIMITED_API would be to **not** add these two members to PyObject, but store the double-linked list outside ``PyObject``. Store it in a different structure, so ``PyObject`` stays ABI compatible.
<!-- gh-linked-prs -->
### Linked PRs
* gh-108663
* gh-108748
<!-- /gh-linked-prs -->
| f42edf1e7be5018a8988a219a168e231cbaa25e5 | 3bfa24e29f286cbc1f42bdb4d2b1c0c9d643c8d6 |
python/cpython | python__cpython-108624 | # New warnings related to `_PyOS_IsMainThread` in `Modules/_multiprocessing/semaphore.c`
# Bug report
This commit introduced new compile warnings in `Modules/_multiprocessing/semaphore.c`: https://github.com/python/cpython/commit/fadc2dc7df99501a40171f39b7cd23be732860cc
<img width="926" alt="Screenshot 2023-08-29 at 13 13 48" src="https://github.com/python/cpython/assets/4660275/e897a9ee-84e3-423f-a945-4e1e250f734e">
I have a PR ready.
Refs https://github.com/python/cpython/issues/106320
<!-- gh-linked-prs -->
### Linked PRs
* gh-108624
<!-- /gh-linked-prs -->
| 30305d6d01e3952f409d352a794e7a367b8c4b8b | bf08131e0ae3a2e59a1428c648f433da6921c561 |
python/cpython | python__cpython-108556 | # Extend sqlite3 CLI tests
# Bug report
### Has this already been discussed elsewhere?
This is a minor feature, which does not need previous discussion elsewhere
### Links to previous discussion of this feature:
_No response_
### Proposal:
The interactive sqlite3 CLI tests are suboptimal; Serhiy easily proved they can be silently broken in a number of ways.
<!-- gh-linked-prs -->
### Linked PRs
* gh-108556
* gh-108626
<!-- /gh-linked-prs -->
| ecb2bf02a4a564b638f756ce6e644ec17b6edf16 | c8847841cc5629cbceead0c09dc6f537d7b92612 |
python/cpython | python__cpython-108630 | # Remove `TIER_ONE` and `TIER_TWO` from bytecodes.c
The `TIER_ONE` and `TIER_TWO` macros cause a few problems:
Their presence in the bytecode definitions means that anyone maintaining the interpreter or adding new instructions needs to understand the internals of the multiple execution layers and how they differ.
This makes the entry barrier for contribution, which is already high for these components, even higher.
It makes reasoning about the correctness of optimizations harder.
It makes adding new execution tiers, or changing the design of the current tiers, much harder.
We can remove the `TIER_ONE` and `TIER_TWO` macros by making modifications to internal execution engine state explicit, without describing how that state is implemented.
For example, rather than modifying `next_instr` in the tier 1 interpreter, we can use `JUMP_BY` or `SAVE_IP` macros.
See https://github.com/faster-cpython/ideas/issues/618
<!-- gh-linked-prs -->
### Linked PRs
* gh-108630
* gh-108685
* gh-108725
* gh-109132
* gh-109247
<!-- /gh-linked-prs -->
| 0858328ca2457ae95715eb93e347d5c0547bec6f | d485551c9d1792ff3539eef1d6374bd4c01dcd5d |
python/cpython | python__cpython-108657 | # sqlite3.iterdump() incompatible with binary data
# Bug report
### Checklist
- [X] I am confident this is a bug in CPython, not a bug in a third-party project
- [X] I have searched the [CPython issue tracker](https://github.com/python/cpython/issues?q=is%3Aissue+sort%3Acreated-desc),
and am confident this bug has not been reported before
### CPython versions tested on:
3.11
### Operating systems tested on:
Linux
### Output from running 'python -VV' on the command line:
Python 3.11.5 (main, Aug 25 2023, 13:19:53) [GCC 9.4.0]
### A clear and concise description of the bug:
Apologies if I'm misunderstanding. Please advise if I should post elsewhere. But shouldn't iterdump() properly detect VARCHAR columns with binary data and output X'' strings instead of throwing an error? This is what `sqlite3 .dump` does.
```python
import sqlite3
with sqlite3.connect(db_path) as conn:
    with open(dump_path, 'w') as dump:
        for line in conn.iterdump():
            pass
```
The above will throw an error:
```
File "foo.py", line 79, in dump_sqlite_db
for line in conn.iterdump():
File "/usr/lib/python3.11/sqlite3/dump.py", line 63, in _iterdump
for row in query_res:
sqlite3.OperationalError: Could not decode to UTF-8 column ''INSERT INTO "sync_entities_metadata" VALUES('||quote("storage_key")||','||quote("metadata")||')'' with text 'INSERT INTO "sync_entities_metadata" VALUES(1,'v10����
```
I tried enabling `conn.text_factory = bytes` as a workaround, but now get a different error.
```
File "foo.py", line 79, in dump_sqlite_db
for line in conn.iterdump():
File "/usr/lib/python3.11/sqlite3/dump.py", line 43, in _iterdump
elif table_name.startswith('sqlite_'):
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: startswith first arg must be bytes or a tuple of bytes, not str
```
<!-- gh-linked-prs -->
### Linked PRs
* gh-108657
* gh-108673
* gh-108674
* gh-108683
* gh-108686
* gh-108694
* gh-108695
* gh-108699
* gh-111324
* gh-111325
<!-- /gh-linked-prs -->
| 2a3926fa51b7264787d5988abf083d8c4328f4ad | 2928e5dc6512e4206c616cd33e0bcc3288abf6ed |
python/cpython | python__cpython-108578 | # Add explicit test for sqlite3.Row.keys()
### Has this already been discussed elsewhere?
No response given
### Links to previous discussion of this feature:
_No response_
### Proposal:
It seems there are no tests for `sqlite3.Row.keys()`.
<!-- gh-linked-prs -->
### Linked PRs
* gh-108578
* gh-108615
* gh-108616
* gh-108628
<!-- /gh-linked-prs -->
| 6eaddc10e972273c1aed8b88c538e65e4773496e | 5f85b443f7119e1c68a15fc9a342655e544d2852 |
python/cpython | python__cpython-108551 | # Speed up sqlite3 tests
# Feature or enhancement
### Has this already been discussed elsewhere?
This is a minor feature, which does not need previous discussion elsewhere
### Links to previous discussion of this feature:
_No response_
### Proposal:
On my MacBook Pro M1, the sqlite3 test suite takes between 1.7 and 3 seconds to run. This is caused by the following inefficiencies:
- the sqlite3 CLI tests are run by launching Python in a new process
- tests relying on locking are using relatively large busy handler timeouts
With the following improvements, I can run the sqlite3 test suite in 350 milliseconds:
- refactor the CLI so we can invoke it and mock command-line arguments simply by importing the main() and passing a list of strings to it
- disable the busy handler for all concurrency tests; we have full control over the order of the SQLite C API calls, so we can safely do this
<!-- gh-linked-prs -->
### Linked PRs
* gh-108551
* gh-108566
* gh-108567
* gh-108618
* gh-108621
<!-- /gh-linked-prs -->
| c8847841cc5629cbceead0c09dc6f537d7b92612 | 6eaddc10e972273c1aed8b88c538e65e4773496e |
python/cpython | python__cpython-108543 | # Duplicate entry in 3.11.5 changelog - pickle module
# Documentation
The [3.11.5 changelog](https://docs.python.org/3/whatsnew/changelog.html#python-3-11-5-final) has an entry that appears twice:
> [gh-105375](https://github.com/python/cpython/issues/105375): Fix bugs in [pickle](https://docs.python.org/3/library/pickle.html#module-pickle) where exceptions could be overwritten.
> [gh-105497](https://github.com/python/cpython/issues/105497): Fix flag inversion when alias/mask members exist.
> [gh-105375](https://github.com/python/cpython/issues/105375): Fix bugs in [pickle](https://docs.python.org/3/library/pickle.html#module-pickle) where exceptions could be overwritten.
The relevant files with identical contents:
[Misc/NEWS.d/next/Library/2023-06-08-08-58-36.gh-issue-105375.bTcqS9.rst](https://github.com/python/cpython/blob/HEAD/Misc/NEWS.d/next/Library/2023-06-08-08-58-36.gh-issue-105375.bTcqS9.rst)
[Misc/NEWS.d/next/Library/2023-06-09-21-04-39.gh-issue-105375.bTcqS9.rst](https://github.com/python/cpython/blob/HEAD/Misc/NEWS.d/next/Library/2023-06-09-21-04-39.gh-issue-105375.bTcqS9.rst)
<!-- gh-linked-prs -->
### Linked PRs
* gh-108543
* gh-108544
* gh-108545
<!-- /gh-linked-prs -->
| a429eafef2d86eafc007ac19682e7d372c32da31 | 042aa88bcc6541cb8b312f1119452f7a58a5b4df |
python/cpython | python__cpython-108568 | # Nested multiprocessing leads to `AttributeError: is_fork_ctx` with `forkserver` or `spawn` methods
# Bug report
### Checklist
- [X] I am confident this is a bug in CPython, not a bug in a third-party project
- [X] I have searched the [CPython issue tracker](https://github.com/python/cpython/issues?q=is%3Aissue+sort%3Acreated-desc),
and am confident this bug has not been reported before
### CPython versions tested on:
3.11
### Operating systems tested on:
Linux
### Output from running 'python -VV' on the command line:
Python 3.11.5 (main, Aug 26 2023, 00:26:34) [GCC 12.2.1 20220924]
### A clear and concise description of the bug:
Using nested `multiprocessing` (i.e. spawning a child process inside a child process) is broken as of Python 3.11.5, leading to an attribute error.
```python
Process Process-1:
Traceback (most recent call last):
File "/usr/local/lib/python3.11/multiprocessing/process.py", line 314, in _bootstrap
self.run()
File "/usr/local/lib/python3.11/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/io/pyi_multiprocessing_nested_process.py", line 15, in process_function
process.start()
File "/usr/local/lib/python3.11/multiprocessing/process.py", line 121, in start
self._popen = self._Popen(self)
^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/multiprocessing/context.py", line 224, in _Popen
return _default_context.get_context().Process._Popen(process_obj)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/multiprocessing/context.py", line 288, in _Popen
return Popen(process_obj)
^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/multiprocessing/popen_spawn_posix.py", line 32, in __init__
super().__init__(process_obj)
File "/usr/local/lib/python3.11/multiprocessing/popen_fork.py", line 19, in __init__
self._launch(process_obj)
File "/usr/local/lib/python3.11/multiprocessing/popen_spawn_posix.py", line 47, in _launch
reduction.dump(process_obj, fp)
File "/usr/local/lib/python3.11/multiprocessing/reduction.py", line 60, in dump
ForkingPickler(file, protocol).dump(obj)
File "/usr/local/lib/python3.11/multiprocessing/synchronize.py", line 106, in __getstate__
if self.is_fork_ctx:
^^^^^^^^^^^^^^^^
AttributeError: 'Lock' object has no attribute 'is_fork_ctx'
Results: [1]
```
Minimal code example below. Invoke with argument `fork`, `forkserver` or `spawn`. `fork` will work. `forkserver` and `spawn` will both raise the above error. All three variants work with Python 3.11.4.
```python
import sys
import multiprocessing

def nested_process_function(queue):
    print("Running nested sub-process!")
    queue.put(2)

def process_function(queue):
    print("Running sub-process!")
    queue.put(1)
    process = multiprocessing.Process(target=nested_process_function, args=(queue,))
    process.start()
    process.join()

def main(start_method):
    multiprocessing.set_start_method(start_method)
    queue = multiprocessing.Queue()
    process = multiprocessing.Process(target=process_function, args=(queue,))
    process.start()
    process.join()
    results = []
    while not queue.empty():
        results.append(queue.get())
    print(f"Results: {results}")
    assert results == [1, 2]

if __name__ == '__main__':
    if len(sys.argv) != 2:
        raise SystemExit(f"Usage: {sys.argv[0]} fork|forkserver|spawn")
    main(sys.argv[1])
```
I believe that the source of this regression is https://github.com/python/cpython/commit/34ef75d3ef559288900fad008f05b29155eb8b59 which adds the attribute `is_fork_ctx` to `multiprocessing.Lock()` but doesn't update the pickle methods (`__getstate__()` and `__setstate__()`), so after being serialised and deserialised, the `Lock()` object loses that attribute.
The following patch, adding `is_fork_ctx` to the pickle methods, makes the above work again.
```diff
diff --git a/Lib/multiprocessing/synchronize.py b/Lib/multiprocessing/synchronize.py
index 2328d33212..9c5c2aada6 100644
--- a/Lib/multiprocessing/synchronize.py
+++ b/Lib/multiprocessing/synchronize.py
@@ -109,10 +109,11 @@ def __getstate__(self):
                                'not supported. Please use the same context to create '
                                'multiprocessing objects and Process.')
         h = sl.handle
-        return (h, sl.kind, sl.maxvalue, sl.name)
+        return (h, sl.kind, sl.maxvalue, sl.name, self.is_fork_ctx)
 
     def __setstate__(self, state):
-        self._semlock = _multiprocessing.SemLock._rebuild(*state)
+        self._semlock = _multiprocessing.SemLock._rebuild(*state[:4])
+        self.is_fork_ctx = state[4]
         util.debug('recreated blocker with handle %r' % state[0])
         self._make_methods()
```
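The underlying failure mode can be shown in isolation: any attribute that `__getstate__()` does not include in the returned state simply vanishes after a pickle round-trip. This is a minimal sketch with a hypothetical `FakeLock` class standing in for the real SemLock wrapper, not the actual `multiprocessing` code:

```python
import pickle


class FakeLock:
    """Simplified stand-in for multiprocessing's SemLock wrapper."""
    def __init__(self):
        self.handle = 42
        self.is_fork_ctx = False  # the attribute added by the regression commit

    def __getstate__(self):
        # is_fork_ctx is *not* part of the state -- mirroring the bug
        return (self.handle,)

    def __setstate__(self, state):
        (self.handle,) = state


clone = pickle.loads(pickle.dumps(FakeLock()))
print(hasattr(clone, "is_fork_ctx"))  # False -- the attribute was lost
```

The deserialised object then raises `AttributeError` the first time anything reads `is_fork_ctx`, which is exactly what the spawn/forkserver paths hit.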
<!-- gh-linked-prs -->
### Linked PRs
* gh-108568
* gh-108691
* gh-108692
<!-- /gh-linked-prs -->
| add8d45cbe46581b9748909fbbf60fdc8ee8f71e | 2a3926fa51b7264787d5988abf083d8c4328f4ad |
python/cpython | python__cpython-111035 | # C API: Add a replacement for PySys_GetObject
# Feature or enhancement
### Proposal:
The `PySys_GetObject()` function has two flaws:
* It clears any error raised inside the function, including important and critical errors.
* It returns a borrowed reference.
We need to add a replacement free from these flaws. Any ideas about the API and the name?
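The error-clearing flaw can be illustrated at the Python level. This is a rough analogy (the helper names are invented for illustration, and Python can't model borrowed vs. strong C references): the legacy behavior collapses "attribute missing" and "lookup failed with a critical error" into the same `None` result, while a replacement would keep the two cases distinct:

```python
import sys


def sys_get_legacy(name):
    """Mimics PySys_GetObject(): returns None on *any* failure; the error is lost."""
    try:
        return getattr(sys, name)
    except Exception:
        return None


def sys_get_checked(name):
    """Mimics a replacement: (found, value); only a plain missing attribute
    reads as "not found" -- any other exception would propagate."""
    try:
        return True, getattr(sys, name)
    except AttributeError:
        return False, None


print(sys_get_legacy("maxsize") == sys.maxsize)  # True
print(sys_get_checked("no_such_attr"))           # (False, None)
```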
### Links to previous discussion of this feature:
- https://github.com/python/cpython/issues/75753
- https://github.com/python/cpython/issues/106672
<!-- gh-linked-prs -->
### Linked PRs
* gh-111035
* gh-129736
* gh-130503
<!-- /gh-linked-prs -->
| bac3fcba5b2d83aa294267a456ccc36d86151dd4 | b265a7ddeb12b2040d80b471d447ce4c3ff4bb95 |
python/cpython | python__cpython-109025 | # C API: Add replacements for PyObject_HasAttr() etc
# Feature or enhancement
### Has this already been discussed elsewhere?
No response given
### Links to previous discussion of this feature:
https://github.com/python/cpython/issues/75753
https://github.com/python/cpython/issues/106672
### Proposal:
Functions `PyDict_GetItem()`, `PyDict_GetItemString()`, `PyMapping_HasKey()`, `PyMapping_HasKeyString()`, `PyObject_HasAttr()`, `PyObject_HasAttrString()` and `PySys_GetObject()` have a flaw -- they clear any error raised inside the function, including important and critical errors. They cannot be fixed, because the user code which uses them does not handle errors. There are replacements free from this flaw for `PyDict_GetItem()` (`PyDict_GetItemWithError()` and `PyDict_GetItemRef()`) and, in some applications, for `PyDict_GetItemString()` (`PyDict_GetItemStringRef()`).
We need new functions similar to `PyMapping_HasKey()`, `PyMapping_HasKeyString()`, `PyObject_HasAttr()` and `PyObject_HasAttrString()` which return a three-state value (`1` -- yes, `0` -- no, and `-1` -- error). What should their names be? Add a `WithError` suffix? Add an `Ex` suffix? Add a `2` suffix?
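A Python-level sketch of the difference between the two contracts (helper names are invented for illustration): a legacy-style check swallows *every* exception and reads it as "no", so a critical error during attribute lookup silently disappears, while a three-state check surfaces it:

```python
class Boom:
    @property
    def attr(self):
        raise RuntimeError("critical error during lookup")


def has_attr_legacy(obj, name):
    """Mimics PyObject_HasAttr(): any error is swallowed and reads as 'no'."""
    try:
        getattr(obj, name)
        return True
    except Exception:
        return False


def has_attr_3state(obj, name):
    """Mimics the proposed API: 1 = yes, 0 = no, -1 = error occurred."""
    try:
        getattr(obj, name)
        return 1
    except AttributeError:
        return 0
    except Exception:
        return -1


print(has_attr_legacy(Boom(), "attr"))    # False -- the RuntimeError is lost
print(has_attr_3state(Boom(), "attr"))    # -1 -- the error is reported
print(has_attr_3state(object(), "attr"))  # 0 -- genuinely absent
```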
<!-- gh-linked-prs -->
### Linked PRs
* gh-109025
<!-- /gh-linked-prs -->
| add16f1a5e4013f97d33cc677dc008e8199f5b11 | e57ecf6bbc59f999d27b125ea51b042c24a07bd9 |
python/cpython | python__cpython-108495 | # Add support for the limited C API to Argument Clinic
# Feature or enhancement
### Has this already been discussed elsewhere?
I have already discussed this feature proposal on Discourse
### Links to previous discussion of this feature:
* https://github.com/python/cpython/issues/85283
* https://github.com/python/cpython/pull/26080
### Proposal:
I propose adding support for the limited C API to Argument Clinic. Since Argument Clinic supports a wide diversity of parameter types and object types, I propose to add the support incrementally (step by step).
The final goal is to restrict some stdlib C extensions to the limited C API.
<!-- gh-linked-prs -->
### Linked PRs
* gh-108495
* gh-108498
* gh-108504
* gh-108516
* gh-108536
* gh-108574
* gh-108575
* gh-108584
* gh-108589
* gh-108608
* gh-108622
* gh-110077
* gh-110230
* gh-110232
* gh-116610
<!-- /gh-linked-prs -->
| 1dd951097728d735d46a602fc43285d35b7b32cb | 4eae1e53425d3a816a26760f28d128a4f05c1da4 |
python/cpython | python__cpython-108591 | # Counter for JUMP_BACKWARD is initialized to 17 instead of 0
# Bug report
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
macOS
### Output from running 'python -VV' on the command line:
Python 3.13.0a0 (heads/main-dirty:5a25daa512, Aug 25 2023, 21:20:04) [Clang 14.0.3 (clang-1403.0.22.14.1)]
### A clear and concise description of the bug:
From https://github.com/python/cpython/issues/108311#issuecomment-1693569380:
> FWIW I also discovered that the JUMP_BACKWARD counter is [initialized to 17](https://github.com/python/cpython/blob/66b4d9c9f0b8a935b5d464abd2f6ee0253832fd9/Python/specialize.c#L305) (the constant computed by adaptive_counter_warmup()), like all other counters, by _PyCode_Quicken() in specialize.c. Arguably, since this counter is supposed to count upwards from zero (in steps of 16, i.e. 1 << OPTIMIZER_BITS_IN_COUNTER), it ought to be initialized to zero (as I had expected it would be). But this ought to be a separate issue and PR.
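A small arithmetic sketch of why the initial value matters. The constant name mirrors the report (`1 << OPTIMIZER_BITS_IN_COUNTER` = 16 per backward jump); the 4096 threshold is an illustrative assumption, not the actual tier-2 threshold:

```python
OPTIMIZER_BITS_IN_COUNTER = 4
STEP = 1 << OPTIMIZER_BITS_IN_COUNTER  # counter advances by 16 per tick


def ticks_to_reach(threshold, start):
    """Number of increments until the counter crosses the threshold."""
    counter, ticks = start, 0
    while counter < threshold:
        counter += STEP
        ticks += 1
    return ticks


# Starting at 17 instead of 0 shifts when any given threshold is crossed:
print(ticks_to_reach(4096, 0))   # 256
print(ticks_to_reach(4096, 17))  # 255
```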
<!-- gh-linked-prs -->
### Linked PRs
* gh-108591
<!-- /gh-linked-prs -->
| 59e46932c8d2dc6fe84a8cf144dde962838c0204 | 4f22152713d008cdd7c1d373a0f0c8dcf30e217e |
python/cpython | python__cpython-108553 | # ast.unparse doesn't observe the new PEP701 string delimiter rules
# Bug report
### Checklist
- [X] I am confident this is a bug in CPython, not a bug in a third-party project
- [X] I have searched the [CPython issue tracker](https://github.com/python/cpython/issues?q=is%3Aissue+sort%3Acreated-desc),
and am confident this bug has not been reported before
### CPython versions tested on:
3.12, CPython main branch
### Operating systems tested on:
macOS
### Output from running 'python -VV' on the command line:
Python 3.12.0b4 (v3.12.0b4:97a6a41816, Jul 11 2023, 11:19:02) [Clang 13.0.0 (clang-1300.0.29.30)] on darwin
### A clear and concise description of the bug:
The `ast.unparse` Unparser doesn't seem to respect PEP 701. For example, if you use double quotes inside an f-string expression and then unparse the AST, it swaps them for single quotes instead of reusing double quotes:
```python
>>> import ast
>>> code = 'f" something { my_dict["key"] } something else "'
>>> tree = ast.parse(code)
>>> ast.unparse(tree)
'f" something {my_dict[\'key\']} something else "'
```
Furthermore, if you use the nested f-string example in the PEP, it crashes completely when unparsing the AST
```python
>>> f"{f"{f"{f"{f"{f"{1+1}"}"}"}"}"}"
'2'
>>> code = 'f"{f"{f"{f"{f"{f"{1+1}"}"}"}"}"}"'
>>> import ast
>>> tree = ast.parse(code)
>>> tree
<ast.Module object at 0x10992fed0>
>>> ast.unparse(tree)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/ast.py", line 1777, in unparse
return unparser.visit(ast_obj)
^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/ast.py", line 859, in visit
self.traverse(node)
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/ast.py", line 850, in traverse
super().visit(node)
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/ast.py", line 407, in visit
return visitor(node)
^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/ast.py", line 874, in visit_Module
self._write_docstring_and_traverse_body(node)
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/ast.py", line 867, in _write_docstring_and_traverse_body
self.traverse(node.body)
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/ast.py", line 848, in traverse
self.traverse(item)
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/ast.py", line 850, in traverse
super().visit(node)
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/ast.py", line 407, in visit
return visitor(node)
^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/ast.py", line 889, in visit_Expr
self.traverse(node.value)
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/ast.py", line 850, in traverse
super().visit(node)
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/ast.py", line 407, in visit
return visitor(node)
^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/ast.py", line 1240, in visit_JoinedStr
self._write_fstring_inner(value)
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/ast.py", line 1268, in _write_fstring_inner
self.visit_FormattedValue(node)
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/ast.py", line 1281, in visit_FormattedValue
raise ValueError(
ValueError: Unable to avoid backslash in f-string expression part
```
<!-- gh-linked-prs -->
### Linked PRs
* gh-108553
* gh-108960
<!-- /gh-linked-prs -->
| 3d5df54cdc1e946bd953bc9906da5abf78a48357 | 74208ed0c440244fb809d8acc97cb9ef51e888e3 |
python/cpython | python__cpython-108392 | # Python build fails if the path to the compiler includes strings common with the compiler names (icc, gcc, etc)
# Bug report
### Checklist
- [X] I am confident this is a bug in CPython, not a bug in a third-party project
- [X] I have searched the [CPython issue tracker](https://github.com/python/cpython/issues?q=is%3Aissue+sort%3Acreated-desc),
and am confident this bug has not been reported before
### CPython versions tested on:
3.11
### Operating systems tested on:
Linux
### Output from running 'python -VV' on the command line:
Python 3.11.4 (main, Jun 6 2023, 22:16:46) [GCC 12.2.0]
### A clear and concise description of the bug:
My home directory includes the string "icc" because that is my userid (jmicco).
When using GCC installed in my home directory, an invalid option was passed to the compiler, because the configure script matches *icc* anywhere in `$CC` to decide when to add ICC-specific flags. This causes the build to fail immediately with an illegal compiler option for GCC:
```sh
case "$CC" in
*mpicc*)
    CFLAGS_NODIST="$CFLAGS_NODIST"
    ;;
*icc*)
    # ICC needs -fp-model strict or floats behave badly
    CFLAGS_NODIST="$CFLAGS_NODIST -fp-model strict"
    ;;
*xlc*)
    CFLAGS_NODIST="$CFLAGS_NODIST -qalias=noansi -qmaxmem=-1"
    ;;
esac
```
The -fp-model strict option was passed to GCC when running the compiler from my sandbox - this caused the build to completely fail
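A minimal shell demonstration of the false match. The path is made up; the second `case` shows one possible mitigation (matching only the basename of the compiler), which is an illustration rather than necessarily the fix adopted in gh-108392:

```shell
# A compiler path containing "icc" anywhere -- e.g. a home directory like
# /home/jmicco -- matches the *icc* glob even though the compiler is gcc.
CC=/home/jmicco/tools/bin/gcc

case "$CC" in
    *icc*) echo "matched icc" ;;   # this branch is taken: the *path* matches
    *)     echo "no match" ;;
esac

# Matching only the basename avoids false positives from directory names:
case "$(basename "$CC")" in
    *icc*) echo "matched icc" ;;
    *)     echo "no match" ;;      # "gcc" does not contain "icc"
esac
```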
<!-- gh-linked-prs -->
### Linked PRs
* gh-108392
<!-- /gh-linked-prs -->
| fecb9faf0b2df6a219696502a34b918c5d2bfe9d | 38afa4af9bfc8297a5ee270c37f3f120a04297ea |
python/cpython | python__cpython-108464 | # Make expressions in pdb work as people would expect
# Feature or enhancement
### Has this already been discussed elsewhere?
This is a minor feature, which does not need previous discussion elsewhere
### Links to previous discussion of this feature:
_No response_
### Proposal:
When people are using pdb, they often input an expression to check the evaluation of the expression.
```python
(Pdb) data = 1
(Pdb) data
1
(Pdb)
```
This would normally work, but it could get complicated when the variable happens to be a command as well. If the variable is a command, there's nothing we can do, but we can deal with it in the following situation:
```python
(Pdb) c.a
```
Before #103464, this would simply do a `continue`, which is a horrible experience for users - it's so confusing. After the argument check, it can realize the command "c(ontinue)" comes with an argument ".a" which is weird, and refuse to execute it with a warning to the user. However, the more intuitive result for the input is to print `c.a` - that's what the user expects.
The problem behind it is how `cmd.Cmd` parses commands - by default the `identchars` are the ones that can be used for a variable name, which makes sense, but would parse `c.a` as command `c` and argument `.a`.
For pdb, we'd rather parse this as command `c.a` -- search for its corresponding `do_` method, fail, and fall back to the default, which is to run it in Python.
Therefore, we should expand the `identchars` for `Pdb`, so that the full expression/statement is read as one command and triggers the default behavior, rather than using only the variable-name-like prefix as the command -- which never works as expected.
This would make inputs like
```python
c.a
c['a']
n()
j=1
r"a"
```
work.
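The parsing difference is easy to see with `cmd.Cmd.parseline()` directly. This sketch widens `identchars` on a plain `Cmd` instance to illustrate the idea; the real change would be on the `Pdb` class:

```python
import cmd

c = cmd.Cmd()
# Default identchars (letters, digits, '_') stop at the '.', so "c.a" is
# parsed as command 'c' with argument '.a' -- i.e. continue with a bogus arg.
print(c.parseline("c.a"))   # ('c', '.a', 'c.a')

# Sketch of the proposed fix: widen identchars so the whole expression is
# treated as one (unknown) command, which then falls through to default().
c.identchars += '.'
print(c.parseline("c.a"))   # ('c.a', '', 'c.a')
```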
<!-- gh-linked-prs -->
### Linked PRs
* gh-108464
<!-- /gh-linked-prs -->
| 6304d983a0656c1841769bf36e5b42819508d21c | 7855d325e638a4b7f7b40f2c35dc80de82d8fe70 |
python/cpython | python__cpython-108456 | # Run `mypy` on `Tools/peg_generator`
# Feature or enhancement
Now that we have the `mypy` CI infrastructure ready (https://github.com/python/cpython/blob/main/.github/workflows/mypy.yml), we can add more directories to the check.
https://github.com/python/cpython/tree/main/Tools/peg_generator has `mypy.ini` and annotations. So, let's check it as well.
How I plan to do it (separate PRs):
1. Start small with the CI job itself and initial fixes
2. Add `types-setuptools` dependency and fix more problems
3. Update config to be stricter and refactor
<!-- gh-linked-prs -->
### Linked PRs
* gh-108456
* gh-108620
* gh-108627
* gh-108629
* gh-108637
* gh-108697
* gh-109160
<!-- /gh-linked-prs -->
| cf7ba83eb274df8389cb9ebdf8601132c47de48a | f75cefd402c4c830228d85ca3442377ebaf09454 |
python/cpython | python__cpython-108045 | # Add platform triplets for x86_64 GNU/Hurd
# Bug report
### Checklist
- [X] I am confident this is a bug in CPython, not a bug in a third-party project
- [X] I have searched the [CPython issue tracker](https://github.com/python/cpython/issues?q=is%3Aissue+sort%3Acreated-desc),
and am confident this bug has not been reported before
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Other
### Output from running 'python -VV' on the command line:
N/A
### A clear and concise description of the bug:
Building on x86_64 GNU/Hurd fails with
```
Misc/platform_triplet.c:260 error: #error unknown platform triplet
```
Trivial patch posted to https://github.com/python/cpython/pull/108045
<!-- gh-linked-prs -->
### Linked PRs
* gh-108045
* gh-111987
* gh-111988
<!-- /gh-linked-prs -->
| edb569e8e1f469ac5ce629cc647ab169d895de41 | a071ecb4d13595f3580cf82061dcd7b39cd475c5 |
python/cpython | python__cpython-108445 | # [C API] Add PyLong_AsInt() public function
# Feature or enhancement
### Has this already been discussed elsewhere?
No response given
### Links to previous discussion of this feature:
_No response_
### Proposal:
The _PyLong_AsInt() function was added in 2013 by commit 74f49ab28b91d3c23524356230feb2724ee9b23f. It was added as a private function (with a ``_Py`` prefix) and is not part of the limited C API.
This function is widely used in the Python code base, which means that it's useful. Casting the result of PyLong_AsLong() to ``int`` and then checking for overflow is error prone: it requires raising the right exception.
Using a private function is bad practice: I removed many private functions from the Python 3.13 C API, see issue #106320.
I propose to add PyLong_AsInt() to the limited C API, with documentation and tests.
<!-- gh-linked-prs -->
### Linked PRs
* gh-108445
* gh-108458
* gh-108459
* gh-108461
<!-- /gh-linked-prs -->
| be436e08b8bd9fcd2202d6ce4d924bba7551e96f | feb9a49c9c09d08cb8c24cb74d90a218de6af244 |
python/cpython | python__cpython-108419 | # Speed up bigmem compression tests in dry mode
### Has this already been discussed elsewhere?
This is a minor feature, which does not need previous discussion elsewhere
### Links to previous discussion of this feature:
_No response_
### Proposal:
Three bigmem tests test compression of large (slightly more than 4 GiB) data. They create 10 MiB of random data and write it multiple times. When run in "dry run" mode (the default) they only write a small sample, but they still generate the whole 10 MiB of random data, and that takes time.
The proposed PR makes them generate only a small amount of random data in dry-run mode. This speeds up running the tests in the default mode.
<!-- gh-linked-prs -->
### Linked PRs
* gh-108419
* gh-108473
* gh-108481
<!-- /gh-linked-prs -->
| 4ae3edf3008b70e20663143553a736d80ff3a501 | e59a95238b76f518e936b6e70da9207d923964db |
python/cpython | python__cpython-108421 | # Mark slow test methods with @requires_resource('cpu')
### Has this already been discussed elsewhere?
This is a minor feature, which does not need previous discussion elsewhere
### Links to previous discussion of this feature:
_No response_
### Proposal:
The proposed PR marks all test methods that take longer than 3 seconds with the `@test.support.requires_resource('cpu')` decorator.
The purpose is to reduce manual testing time. It happens that all tests in a file run in a fraction of a second, but a few tests take a long time. When you work on a large feature or bugfix you need to run the corresponding tests multiple times. You can exclude the slowest tests manually, but you need to know which of them are the culprits. When they are marked as CPU-hungry, you can simply not enable the "cpu" resource.
For example, the whole of `test_math` takes over 1.5 minutes to finish, but with `test_sumprod_stress` excluded it takes only 3 seconds.
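A sketch of how the gating behaves. Normally regrtest's `-u` option controls `test.support.use_resources`; setting it directly here is just to simulate a run where the "cpu" resource is not enabled (note the resource check happens at decoration time, so it must be set before the class body runs):

```python
import unittest
from test import support

support.use_resources = []  # simulate running without `-u cpu`


class MyTests(unittest.TestCase):
    @support.requires_resource('cpu')
    def test_slow(self):
        self.assertEqual(sum(range(1000)), 499500)


suite = unittest.TestLoader().loadTestsFromTestCase(MyTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(len(result.skipped))  # 1 -- the CPU-hungry test was skipped
```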
<!-- gh-linked-prs -->
### Linked PRs
* gh-108421
* gh-108480
* gh-108798
* gh-108799
* gh-108923
* gh-108924
<!-- /gh-linked-prs -->
| f3ba0a74cd50274acdcd592d4ce8395b92492b7c | aa52888e6a0269f0c31a24bd0d1adb3238147261 |
python/cpython | python__cpython-108420 | # Crash when tracing specialized normal class call at deep Python recursion
# Crash report
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Linux
### Output from running 'python -VV' on the command line:
Python 3.13.0a0 (heads/main:29bc6165ab, Aug 23 2023, 18:18:02) [GCC 10.2.1 20210110]
### What happened?
Bisected to 04492cbc9aa45ac2c12d22083c406a0364c39f5b.
```python
import sys


def trace(frame, event, arg):
    return trace


class Foo:
    def __init__(self):
        pass


def f():
    Foo()


f()
sys.settrace(trace)
f()
```
### Error messages
```
Program received signal SIGSEGV, Segmentation fault.
0x00005555558c3fc0 in allocate_instrumentation_data (code=code@entry=0x555555bc97e0 <_Py_InitCleanup>) at Python/instrumentation.c:1458
1458 code->_co_monitoring = PyMem_Malloc(sizeof(_PyCoMonitoringData));
(gdb) bt
#0 0x00005555558c3fc0 in allocate_instrumentation_data (code=code@entry=0x555555bc97e0 <_Py_InitCleanup>) at Python/instrumentation.c:1458
#1 0x00005555558c620b in update_instrumentation_data (code=code@entry=0x555555bc97e0 <_Py_InitCleanup>, interp=interp@entry=0x555555cdc7f8 <_PyRuntime+93016>)
at Python/instrumentation.c:1478
#2 0x00005555558c7522 in _Py_Instrument (code=0x555555bc97e0 <_Py_InitCleanup>, interp=interp@entry=0x555555cdc7f8 <_PyRuntime+93016>) at Python/instrumentation.c:1549
#3 0x00005555558c7d63 in instrument_all_executing_code_objects (interp=interp@entry=0x555555cdc7f8 <_PyRuntime+93016>) at ./Include/internal/pycore_frame.h:84
#4 0x00005555558c7f12 in _PyMonitoring_SetEvents (tool_id=tool_id@entry=7, events=0) at Python/instrumentation.c:1735
#5 0x00005555558ca8a0 in _PyEval_SetTrace (tstate=tstate@entry=0x555555d42108 <_PyRuntime+509032>, func=func@entry=0x0, arg=arg@entry=0x0)
at Python/legacy_tracing.c:511
#6 0x00005555558fe68c in trace_trampoline (self=<optimized out>, frame=0x7ffff77d5700, what=<optimized out>, arg=<optimized out>) at ./Python/sysmodule.c:1035
#7 0x00005555558c97ae in call_trace_func (self=0x7ffff777e880, arg=0x555555be1600 <_Py_NoneStruct>) at Python/legacy_tracing.c:123
#8 0x00005555558c990c in sys_trace_func2 (self=<optimized out>, args=<optimized out>, nargsf=<optimized out>, kwnames=<optimized out>) at Python/legacy_tracing.c:162
#9 0x00005555558c4fb1 in _PyObject_VectorcallTstate (kwnames=0x0, nargsf=9223372036854775810, args=0x7fffffffdb08, callable=0x7ffff777e880,
tstate=0x555555d42108 <_PyRuntime+509032>) at ./Include/internal/pycore_call.h:186
#10 call_one_instrument (interp=interp@entry=0x555555cdc7f8 <_PyRuntime+93016>, tstate=tstate@entry=0x555555d42108 <_PyRuntime+509032>, args=args@entry=0x7fffffffdb08,
nargsf=nargsf@entry=-9223372036854775806, tool=tool@entry=7 '\a', event=event@entry=0) at Python/instrumentation.c:850
#11 0x00005555558c5890 in call_instrumentation_vector (tstate=tstate@entry=0x555555d42108 <_PyRuntime+509032>, event=event@entry=0, frame=frame@entry=0x7ffff7fae988,
instr=instr@entry=0x7ffff7796e08, nargs=<optimized out>, nargs@entry=2, args=args@entry=0x7fffffffdb00) at Python/instrumentation.c:980
#12 0x00005555558c653b in _Py_call_instrumentation (tstate=tstate@entry=0x555555d42108 <_PyRuntime+509032>, event=event@entry=0, frame=frame@entry=0x7ffff7fae988,
instr=instr@entry=0x7ffff7796e08) at Python/instrumentation.c:1015
#13 0x000055555584a828 in _PyEval_EvalFrameDefault (tstate=tstate@entry=0x555555d42108 <_PyRuntime+509032>, frame=0x7ffff7fae988, throwflag=throwflag@entry=0)
at Python/generated_cases.c.h:43
#14 0x000055555586a69c in _PyEval_EvalFrame (throwflag=0, frame=<optimized out>, tstate=0x555555d42108 <_PyRuntime+509032>) at ./Include/internal/pycore_ceval.h:88
#15 _PyEval_Vector (tstate=tstate@entry=0x555555d42108 <_PyRuntime+509032>, func=func@entry=0x7ffff77407d0, locals=locals@entry=0x7ffff7755070, args=args@entry=0x0,
argcount=argcount@entry=0, kwnames=kwnames@entry=0x0) at Python/ceval.c:1622
#16 0x000055555586a73f in PyEval_EvalCode (co=co@entry=0x7ffff791d300, globals=globals@entry=0x7ffff7755070, locals=locals@entry=0x7ffff7755070) at Python/ceval.c:557
#17 0x00005555558eaf8e in run_eval_code_obj (tstate=tstate@entry=0x555555d42108 <_PyRuntime+509032>, co=co@entry=0x7ffff791d300, globals=globals@entry=0x7ffff7755070,
locals=locals@entry=0x7ffff7755070) at Python/pythonrun.c:1725
#18 0x00005555558eb994 in run_mod (mod=mod@entry=0x555555e01738, filename=filename@entry=0x7ffff7753140, globals=globals@entry=0x7ffff7755070,
locals=locals@entry=0x7ffff7755070, flags=flags@entry=0x7fffffffdf18, arena=arena@entry=0x7ffff7793940) at Python/pythonrun.c:1746
#19 0x00005555558ebaa4 in pyrun_file (fp=fp@entry=0x555555d85aa0, filename=filename@entry=0x7ffff7753140, start=start@entry=257, globals=globals@entry=0x7ffff7755070,
locals=locals@entry=0x7ffff7755070, closeit=closeit@entry=1, flags=0x7fffffffdf18) at Python/pythonrun.c:1646
#20 0x00005555558eeced in _PyRun_SimpleFileObject (fp=fp@entry=0x555555d85aa0, filename=filename@entry=0x7ffff7753140, closeit=closeit@entry=1,
flags=flags@entry=0x7fffffffdf18) at Python/pythonrun.c:463
#21 0x00005555558eef40 in _PyRun_AnyFileObject (fp=fp@entry=0x555555d85aa0, filename=filename@entry=0x7ffff7753140, closeit=closeit@entry=1,
flags=flags@entry=0x7fffffffdf18) at Python/pythonrun.c:79
#22 0x0000555555916e68 in pymain_run_file_obj (program_name=program_name@entry=0x7ffff77a1690, filename=filename@entry=0x7ffff7753140, skip_source_first_line=0)
at Modules/main.c:360#23 0x0000555555917145 in pymain_run_file (config=config@entry=0x555555cdccf8 <_PyRuntime+94296>) at Modules/main.c:379
#24 0x000055555591820c in pymain_run_python (exitcode=exitcode@entry=0x7fffffffe08c) at Modules/main.c:610
#25 0x0000555555918264 in Py_RunMain () at Modules/main.c:688
#26 0x00005555559182b8 in pymain_main (args=args@entry=0x7fffffffe0d0) at Modules/main.c:718
#27 0x000055555591832d in Py_BytesMain (argc=<optimized out>, argv=<optimized out>) at Modules/main.c:742
#28 0x000055555565073e in main (argc=<optimized out>, argv=<optimized out>) at ./Programs/python.c:15
```
<!-- gh-linked-prs -->
### Linked PRs
* gh-108420
* gh-108899
<!-- /gh-linked-prs -->
| 5a2a04615171899885b977d77dc379bd78bac87f | 04a0830b00879efe057e3dfe75e9aa9c0caf1a26 |
python/cpython | python__cpython-108729 | # Add `--disable-gil` option for Windows builds
# Feature or enhancement
Like https://github.com/python/cpython/issues/108223 but for Windows builds instead of the configure script.
The immediate motivation is for supporting additional Windows buildbots.
cc @terryjreedy @itamaro
<!-- gh-linked-prs -->
### Linked PRs
* gh-108729
<!-- /gh-linked-prs -->
| 6fafa6b919227cab06d0e3d7b20120e72d9b2bfd | ecc61a6d76ea329e56f98c4af0f24a17ed3b2d6c |
python/cpython | python__cpython-108353 | # Benchmark test fails for large numbers in `decimal` module
# Bug report
### Checklist
- [X] I am confident this is a bug in CPython, not a bug in a third-party project
- [X] I have searched the [CPython issue tracker](https://github.com/python/cpython/issues?q=is%3Aissue+sort%3Acreated-desc),
and am confident this bug has not been reported before
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Linux
### Output from running 'python -VV' on the command line:
Python 3.13.0a0 (heads/demo/parser:a794ebeb02, Aug 22 2023, 11:44:58) [GCC 11.3.0]
### A clear and concise description of the bug:
There are some tests in the decimal module related to large numbers; they may fail due to the fix for CVE-2020-10735 (large int<->str conversions). Here is a benchmark in decimal.
```shell
cd ./Modules/_decimal/tests
../../../python bench.py
```
Output:
```shell
...
# ======================================================================
# Factorial
# ======================================================================
n = 100000
cdecimal:
calculation time: 0.942886s
conversion time: 0.002155s
Traceback (most recent call last):
File "/home/github/cpython/Modules/_decimal/tests/bench.py", line 121, in <module>
sy = str(y)
^^^^^^
ValueError: Exceeds the limit (4300 digits) for integer string conversion; use sys.set_int_max_str_digits() to increase the limit
```
<!-- gh-linked-prs -->
### Linked PRs
* gh-108353
<!-- /gh-linked-prs -->
| b39f65a495b3300caa4144089ef7cb20a0bb5bd0 | 388d91cd474de80355f5a8f6a26e8962813a3128 |
python/cpython | python__cpython-108344 | # test_ssl fails with "env changed" on ARM64 Windows buildbot: thread doesn't catch TimeoutError
# Bug report
### Checklist
- [X] I am confident this is a bug in CPython, not a bug in a third-party project
- [X] I have searched the [CPython issue tracker](https://github.com/python/cpython/issues?q=is%3Aissue+sort%3Acreated-desc),
and am confident this bug has not been reported before
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Windows
### Output from running 'python -VV' on the command line:
_No response_
### A clear and concise description of the bug:
ARM64 Windows Non-Debug PR buildbot [build failed](https://buildbot.python.org/all/#/builders/738/builds/3479) with: test_ssl failed (**env changed**):
```
== CPython 3.13.0a0 (heads/refs/pull/108299/merge-dirty:5af459ef07, Aug 23 2023, 01:37:38) [MSC v.1936 64 bit (AMD64)]
== Windows-11-10.0.22621-SP0 little-endian
== Python build: debug
== cwd: R:\buildarea\pull_request.ambv-bb-win11.bigmem\build\build\test_python_12132
== CPU count: 6
== encodings: locale=cp1252, FS=utf-8
0:00:00 Run tests in parallel using 4 child processes (timeout: 1 hour 15 min, worker timeout: 1 hour 20 min)
(...)
0:08:08 load avg: 0.81 [228/447/1] test_ssl failed (env changed) (32.5 sec) -- ...
(...)
test_https_client_non_tls_response_ignored (test.test_ssl.TestPreHandshakeClose.test_https_client_non_tls_response_ignored) ...
Warning -- Uncaught thread exception: TimeoutError
Exception in thread non_tls_http_RST_responder:
Traceback (most recent call last):
File "R:\buildarea\pull_request.ambv-bb-win11.bigmem\build\Lib\threading.py", line 1059, in _bootstrap_inner
self.run()
File "R:\buildarea\pull_request.ambv-bb-win11.bigmem\build\Lib\test\test_ssl.py", line 4708, in run
conn, address = self.listener.accept()
^^^^^^^^^^^^^^^^^^^^^^
File "R:\buildarea\pull_request.ambv-bb-win11.bigmem\build\Lib\socket.py", line 295, in accept
fd, addr = self._accept()
^^^^^^^^^^^^^^
TimeoutError: timed out
ok
```
test.pythoninfo:
```
ssl.OPENSSL_VERSION: OpenSSL 3.0.9 30 May 2023
ssl.OPENSSL_VERSION_INFO: (3, 0, 0, 9, 0)
```
<!-- gh-linked-prs -->
### Linked PRs
* gh-108344
* gh-108348
* gh-108349
* gh-108350
* gh-108351
* gh-108352
* gh-108370
* gh-108404
* gh-108405
* gh-108406
* gh-108407
* gh-108408
<!-- /gh-linked-prs -->
| 64f99350351bc46e016b2286f36ba7cd669b79e3 | 9173b2bbe13aeccc075b571da05c653a2a91de1b |
python/cpython | python__cpython-108324 | # Speed-up statistics.NormalDist.samples()
Currently, `samples()` calls `random.gauss()`. It would be several times faster to use the inverse CDF method instead.
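The inverse CDF method means pushing uniform variates through `NormalDist.inv_cdf()`. This is a minimal sketch of the idea, not the actual patch in the linked PRs (the function name and vectorization strategy there may differ):

```python
import random
from statistics import NormalDist


def samples_inv_cdf(dist, n, seed=None):
    """Draw n samples by mapping uniforms through dist.inv_cdf().
    Hoisting inv_cdf out of the loop avoids repeated attribute lookups."""
    rnd = random.Random(seed)
    inv = dist.inv_cdf
    return [inv(rnd.random()) for _ in range(n)]


nd = NormalDist(mu=0.0, sigma=1.0)
xs = samples_inv_cdf(nd, 10_000, seed=42)
mean = sum(xs) / len(xs)
print(len(xs), abs(mean) < 0.05)  # sample mean is close to mu
```

Besides speed, a nice property of this approach is that a single uniform stream maps deterministically to the output, which also makes quasi-random (low-discrepancy) inputs usable.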
<!-- gh-linked-prs -->
### Linked PRs
* gh-108324
* gh-108658
<!-- /gh-linked-prs -->
| 042aa88bcc6541cb8b312f1119452f7a58a5b4df | 09343dba44cdb5c279ec51df34552ef451434958 |
python/cpython | python__cpython-108323 | # [C API] Add PyDict_ContainsString() function
# Feature or enhancement
### Has this already been discussed elsewhere?
No response given
### Links to previous discussion of this feature:
_No response_
### Proposal:
Most, if not all, PyDict C APIs have a "String" flavor where the key argument is expressed as a UTF-8 encoded bytes string. But the ``PyDict_Contains()`` API is missing such a variant.
I suppose that it was not proposed before since ``PyDict_GetItemString(dict, key) != NULL`` can already be used. My problem is that ``PyDict_GetItemString()`` **ignores errors**: I would like to report errors.
The newly added ``PyDict_GetItemStringRef()`` can be used to check if a dictionary has a key and report errors, but it requires calling ``Py_DECREF()`` which is not convenient. Example:
```c
PyObject *value;
if (PyDict_GetItemStringRef(dict, key, &value) < 0) {
    // ... handle error ...
}
int has_value = (value != NULL);
Py_XDECREF(value);
// ... use has_value ...
```
I would like to be able to replace this code with:
```c
int has_value = PyDict_ContainsString(dict, key);
if (has_value < 0) {
    // ... handle error ...
}
// ... use has_value ...
```
There is no need to INCREF/DECREF just to check if a dictionary has a key.
<!-- gh-linked-prs -->
### Linked PRs
* gh-108323
* gh-108448
* gh-108489
<!-- /gh-linked-prs -->
| 67266266469fe0e817736227f39537182534c1a5 | c163d7f0b67a568e9b64eeb9c1cbbaa127818596 |
python/cpython | python__cpython-108312 | # test_opcache fails when run with -Xuops
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
macOS
### Output from running 'python -VV' on the command line:
Python 3.13.0a0 (heads/fix-func-cache-dirty:531930f47f, Aug 22 2023, 07:17:15) [Clang 14.0.3 (clang-1403.0.22.14.1)]
### A clear and concise description of the bug:
```shell
~/cpython$ ./python.exe -Xuops -m test test_opcache -m test_store_attr_with_hint
0:00:00 load avg: 27.04 Run tests sequentially
0:00:00 load avg: 27.04 [1/1] test_opcache
test test_opcache failed -- Traceback (most recent call last):
File "/Users/guido/cpython/Lib/test/test_opcache.py", line 955, in test_store_attr_with_hint
self.assert_races_do_not_crash(opname, get_items, read, write)
File "/Users/guido/cpython/Lib/test/test_opcache.py", line 524, in assert_races_do_not_crash
self.assert_specialized(read, opname)
File "/Users/guido/cpython/Lib/test/test_opcache.py", line 503, in assert_specialized
self.assertIn(opname, opnames)
AssertionError: 'STORE_ATTR_WITH_HINT' not found in {'RETURN_CONST', 'FOR_ITER_LIST', 'LOAD_CONST', 'RESUME', 'STORE_ATTR', 'STORE_FAST', 'ENTER_EXECUTOR', 'END_FOR', 'GET_ITER', 'LOAD_FAST'}
test_opcache failed (1 failure)
== Tests result: FAILURE ==
1 test failed:
test_opcache
Total duration: 268 ms
Tests result: FAILURE
```
<!-- gh-linked-prs -->
### Linked PRs
* gh-108312
<!-- /gh-linked-prs -->
| 53470184091f6fe1c7a1cf4de8fd90dc2ced7654 | 66b4d9c9f0b8a935b5d464abd2f6ee0253832fd9 |
python/cpython | python__cpython-108315 | # CVE-2023-40217: Bypass TLS handshake on closed sockets
# Bug report
Originally reported by @AapoOksman via the [Python Security Response Team](https://www.python.org/dev/security/) mailing list on 2023-08-08. Thanks for the responsible disclosure!
### Checklist
- [X] I am confident this is a bug in CPython, not a bug in a third-party project
- [X] I have searched the [CPython issue tracker](https://github.com/python/cpython/issues?q=is%3Aissue+sort%3Acreated-desc),
and am confident this bug has not been reported before
### CPython versions tested on:
3.8, 3.9, 3.10, 3.11, 3.12, CPython main branch
### Operating systems tested on:
Linux, macOS
### A clear and concise description of the bug:
Instances of ssl.SSLSocket are vulnerable to a bypass of the TLS handshake and included protections (like certificate verification) and could lead applications to treat unencrypted data received pre-TLS-handshake that is followed by an immediate connection close as if it were post-handshake TLS encrypted data.
<!-- gh-linked-prs -->
### Linked PRs
* gh-108315
* gh-108316
* gh-108317
* gh-108318
* gh-108320
* gh-108321
* gh-110718
<!-- /gh-linked-prs -->
| 0cb0c238d520a8718e313b52cffc356a5a7561bf | d2879f2095abd5c8186c7f69c964a341c2053572 |
python/cpython | python__cpython-108309 | # [C API] Use new PyDict_GetItemRef() C API
# Bug report
### Checklist
- [X] I am confident this is a bug in CPython, not a bug in a third-party project
- [X] I have searched the [CPython issue tracker](https://github.com/python/cpython/issues?q=is%3Aissue+sort%3Acreated-desc),
and am confident this bug has not been reported before
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Linux
### Output from running 'python -VV' on the command line:
_No response_
### A clear and concise description of the bug:
Calls to PyDict_GetItem() should be replaced with PyDict_GetItemRef().
PyDict_GetItemWithError() may also be replaced with PyDict_GetItemRef().
<!-- gh-linked-prs -->
### Linked PRs
* gh-108309
* gh-108371
* gh-108372
* gh-108381
* gh-108426
<!-- /gh-linked-prs -->
| f5559f38d9831e7e55a518e516bcd620ec13af14 | 154477be722ae5c4e18d22d0860e284006b09c4f |
python/cpython | python__cpython-108305 | # Move test files into test subdirectories
# Feature or enhancement
### Proposal:
The Python test suite has around 166 files and sub-directories whose names are not clearly associated with a test. For example, it's not obvious to me which test uses ``talos-2019-0758.pem`` or ``coding20731.py``.
When possible, I propose to move these files into sub-directories related to their test. Example:
* create ``Lib/test/test_module/`` sub-directory
* move ``Lib/test/test_module.py`` to ``Lib/test/test_module/__init__.py``
* move ``good_getattr.py`` and ``bad_getattr*.py`` scripts to ``Lib/test/test_module/``
Well, I created PR #108293 for this specific example.
<!-- gh-linked-prs -->
### Linked PRs
* gh-108305
* gh-108325
* gh-108328
* gh-108336
* gh-108354
* gh-108978
* gh-108979
* gh-109265
* gh-109314
* gh-109368
* gh-109489
* gh-109512
* gh-109513
* gh-109607
* gh-109670
* gh-109671
* gh-109672
* gh-109673
* gh-109674
* gh-109675
* gh-109677
* gh-109678
* gh-109679
* gh-109680
* gh-109682
* gh-109683
* gh-109686
* gh-109724
* gh-110646
* gh-110732
* gh-111543
* gh-111549
* gh-111825
* gh-111859
* gh-111860
* gh-111879
* gh-111880
* gh-111882
* gh-111883
* gh-111891
* gh-111892
* gh-111899
* gh-111945
* gh-111946
* gh-112108
* gh-112109
* gh-112110
* gh-112114
* gh-112976
* gh-112977
* gh-114254
* gh-114313
* gh-114343
* gh-114344
* gh-114356
* gh-114368
* gh-114372
* gh-114427
* gh-114433
* gh-114488
* gh-114506
* gh-114687
* gh-115501
* gh-115502
* gh-115625
* gh-115626
* gh-130540
<!-- /gh-linked-prs -->
| d2879f2095abd5c8186c7f69c964a341c2053572 | b8f96b5eda5b376b05a9dbf046208388249e30a6 |
python/cpython | python__cpython-108517 | # Crash in clear_weakref() in pydantic test suite, in 3.12.0rc1 and newer
# Crash report
### CPython versions tested on:
3.12, CPython main branch
### Operating systems tested on:
Linux
### Output from running 'python -VV' on the command line:
Python 3.12.0rc1+ (heads/3.12:149d70c254, Aug 22 2023, 16:11:47) [GCC 13.2.0]
### What happened?
The [pydantic](https://github.com/pydantic/pydantic) test suite started causing pytest to segfault on exit recently, with Python 3.12. I've been able to bisect it to pydantic commit pydantic/pydantic@f5e3aa950ffbf2d2ab14d90392d23a980da39ba1 that looks relatively harmless. I've also been able to bisect CPython into commit 58f9c8889d8fa79d617fddf6cb48942d3003c7fd:
> gh-106403: Restore weakref support for TypeVar and friends (GH-106418)
I have been able to reproduce the problem with tip of `main` as well, though I have to note it's a bit of a heisenbug. I've basically had a very lucky day today in that I've managed to correctly bisect on the second attempt.
To reproduce:
```
git clone https://github.com/pydantic/pydantic/
cd pydantic
python -m venv .venv
. .venv/bin/activate
pip install -e . pytest dirty-equals
# may need to be repeated a few times
python -m pytest -o addopts= tests/test_generics.py --deselect tests/test_generics.py::test_partial_specification_name -s
```
### Error messages
<details>
<summary>Backtrace</summary>
```
Core was generated by `python -m pytest -o addopts= tests/test_generics.py --deselect tests/test_gener'.
Program terminated with signal SIGSEGV, Segmentation fault.
#0 clear_weakref (self=self@entry=0x7f0a9a171df0) at Objects/weakrefobject.c:62
62 if (*list == self)
(gdb) bt
#0 clear_weakref (self=self@entry=0x7f0a9a171df0) at Objects/weakrefobject.c:62
#1 0x0000557c3663d843 in _PyWeakref_ClearRef (self=self@entry=0x7f0a9a171df0) at Objects/weakrefobject.c:102
#2 0x0000557c366ec77e in handle_weakrefs (unreachable=unreachable@entry=0x7ffeec923ad0,
old=old@entry=0x557c36a40f68 <_PyRuntime+76552>) at Modules/gcmodule.c:804
#3 0x0000557c366ecad3 in gc_collect_main (tstate=0x557c36a9e7b0 <_PyRuntime+459600>, generation=generation@entry=2,
n_collected=n_collected@entry=0x0, n_uncollectable=n_uncollectable@entry=0x0, nofail=nofail@entry=1) at Modules/gcmodule.c:1284
#4 0x0000557c366ed333 in _PyGC_CollectNoFail (tstate=tstate@entry=0x557c36a9e7b0 <_PyRuntime+459600>) at Modules/gcmodule.c:2135
#5 0x0000557c366c39bb in finalize_modules (tstate=tstate@entry=0x557c36a9e7b0 <_PyRuntime+459600>) at Python/pylifecycle.c:1602
#6 0x0000557c366c524e in Py_FinalizeEx () at Python/pylifecycle.c:1863
#7 0x0000557c366eb4f9 in Py_RunMain () at Modules/main.c:691
#8 0x0000557c366eb56e in pymain_main (args=args@entry=0x7ffeec923c10) at Modules/main.c:719
#9 0x0000557c366eb63d in Py_BytesMain (argc=<optimized out>, argv=<optimized out>) at Modules/main.c:743
#10 0x0000557c3655979e in main (argc=<optimized out>, argv=<optimized out>) at ./Programs/python.c:15
(gdb) p self
$1 = (PyWeakReference *) 0x7f0a9a171df0
(gdb) p *self
$2 = {ob_base = {{ob_refcnt = 1, ob_refcnt_split = {1, 0}}, ob_type = 0x557c38bda680}, wr_object = 0x7f0a9a171c70, wr_callback = 0x0,
hash = -1, wr_prev = 0x0, wr_next = 0x0, vectorcall = 0x557c3663aa08 <weakref_vectorcall>}
(gdb) p *list
Cannot access memory at address 0xfe15355ba7e0
(gdb) p list
$3 = (PyWeakReference **) 0xfe15355ba7e0
```
</details>
<!-- gh-linked-prs -->
### Linked PRs
* gh-108517
* gh-108527
<!-- /gh-linked-prs -->
| 732ad44cec971be5255b1accbac6555d3615c2bf | 8d92b6eff3bac45e7d4871c46c4511218b9b685a |
python/cpython | python__cpython-108298 | # Add audit event for time.sleep
# Feature or enhancement
### Has this already been discussed elsewhere?
I have already discussed this feature proposal on Discourse
### Links to previous discussion of this feature:
https://discuss.python.org/t/27214/8
### Proposal:
Add an audit event that async frameworks can use to warn about a common beginner mistake -- using `time.sleep` instead of `await asyncio.sleep`.
This is not meant for async experts. There are better tools for diagnosing “freezes” in async code, but to use those you need to know/suspect that what you're debugging is an “async issue”.
Audit events are cheap (if unused), so I'd like to put it in even though the responses on Discourse were “meh”.
This audit event is not really security-related, but [Steve said he wouldn't oppose it going in](https://twitter.com/zooba/status/1663199659957862402).
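A minimal sketch of how a framework could consume such a hook. The `"time.sleep"` event name is the one proposed in this issue; on interpreters that predate it, `time.sleep()` won't raise the event, so the snippet simulates it from Python with `sys.audit()` to show the mechanism:

```python
import sys

seen = []

def hook(event, args):
    # "time.sleep" is the event name proposed by this issue; the hook
    # cheaply records its arguments (and could warn inside async code).
    if event == "time.sleep":
        seen.append(args)

sys.addaudithook(hook)

# Older interpreters don't raise the event from time.sleep() itself,
# so demonstrate the mechanism by raising it manually:
sys.audit("time.sleep", 0.1)
print(seen)
```

Note that audit hooks are process-wide and cannot be removed once added, which is why they are cheap enough to leave in place when unused.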
<!-- gh-linked-prs -->
### Linked PRs
* gh-108298
* gh-108363
<!-- /gh-linked-prs -->
| 31b61d19abcc63aa28625a31ed75411948fc1e7e | 2dfbd4f36dd83f88f5df64c33612dd34eff256bb |
python/cpython | python__cpython-108281 | # Clean up sqlite3.Connection APIs
# Feature or enhancement
### Has this already been discussed elsewhere?
https://discuss.python.org/t/clean-up-some-sqlite3-apis/32093
### Links to previous discussion of this feature:
https://github.com/python/cpython/issues/87260#issuecomment-1156275354
### Proposal:
The sqlite3.Connection class has groups of similar APIs with inconsistent parameter specs.
Create user-defined function APIs:
- `create_function(name, narg, func, *, deterministic=False)`
- `create_aggregate(name, /, n_arg, aggregate_class)`
- `create_window_function(name, num_params, aggregate_class, /)`
- `create_collation(name, callable, /)`
Set callback APIs:
- `set_authorizer(authorizer_callback)`
- `set_progress_handler(progress_handler, n)`
- `set_trace_callback(trace_callback)`
For all APIs but `create_function`, I suggest making all parameters positional-only; for `create_function`, I suggest making the first three parameters positional-only:
Create user-defined function APIs:
- `create_function(name, nargs, callable, /, *, deterministic=False)`
- `create_aggregate(name, nargs, aggregate_class, /)`
- `create_window_function(name, nargs, aggregate_class, /)`
- `create_collation(name, callable, /)`
Set callback APIs:
- `set_authorizer(authorizer_callback, /)`
- `set_progress_handler(progress_handler, /, n)`
- `set_trace_callback(trace_callback, /)`
Obviously, `create_window_function` stays as it is.
**UPDATE**: I noticed the docs are wrong about `create_collation`; its signature is actually `create_collation(name, callable, /)`.
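For illustration, a call that matches both the current and the proposed specs: the first three arguments are passed positionally and `deterministic` stays keyword-only, so code written this way keeps working after the clean-up:

```python
import sqlite3

con = sqlite3.connect(":memory:")
# name, narg, and the callable passed positionally; deterministic keyword-only.
con.create_function("sign", 1, lambda x: (x > 0) - (x < 0), deterministic=True)
row = con.execute("SELECT sign(-5), sign(3)").fetchone()
print(row)
con.close()
```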
<!-- gh-linked-prs -->
### Linked PRs
* gh-108281
* gh-108632
<!-- /gh-linked-prs -->
| 4116592b6f014a2720e9b09e2c8dec4bf4b4cd8f | bc5356bb5d7e3eda44128e89a695c05066e0840b |
python/cpython | python__cpython-108382 | # Add wrapper for timerfd_create, timerfd_settime, and timerfd_gettime to `os` module
# Feature or enhancement
### Has this already been discussed elsewhere?
This is a minor feature, which does not need previous discussion elsewhere
### Links to previous discussion of this feature:
### Proposal:
eventfd is implemented at #20930 (#85173).
But timerfd is not implemented yet.
timerfd_create, timerfd_settime, and timerfd_gettime are Linux syscalls that create and operate on a timer that delivers timer expiration notifications via a file descriptor.
See https://man7.org/linux/man-pages/man2/timerfd_create.2.html
I propose to add those wrapper functions to the `os` module.
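A sketch of what using the proposed wrappers could look like (these names landed as `os.timerfd_create()`/`os.timerfd_settime()` in Python 3.13 and are Linux-only, so the snippet guards for availability):

```python
import os
import time

# os.timerfd_create()/os.timerfd_settime() are the wrappers this issue
# proposes; they exist only on Linux with a new enough Python, so guard:
if hasattr(os, "timerfd_create"):
    fd = os.timerfd_create(time.CLOCK_MONOTONIC)
    os.timerfd_settime(fd, initial=0.05)   # one-shot timer, 50 ms from now
    os.read(fd, 8)                         # blocks until the timer expires
    os.close(fd)
    status = "timer fired"
else:
    status = "timerfd not available"
print(status)
```

The file-descriptor interface is the point of timerfd: the fd can be passed to `select`/`epoll` alongside sockets instead of blocking on `os.read()` as above.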
<!-- gh-linked-prs -->
### Linked PRs
* gh-108382
* gh-110515
* gh-110661
* gh-117223
* gh-117231
<!-- /gh-linked-prs -->
| de2a4036cbfd5e41a5bdd2b81122b7765729af83 | 64f158e7b09e67d0bf5c8603ff88c86ed4e8f8fd |
python/cpython | python__cpython-113213 | # Enable `CFBundleAllowMixedLocalizations` property list key inside Info.plist in macOS
# Feature or enhancement
### Has this already been discussed elsewhere?
This is a minor feature, which does not need previous discussion elsewhere
### Links to previous discussion of this feature:
_No response_
### Proposal:
### Feature Request
The [QLocale](https://doc.qt.io/qtforpython-6/PySide6/QtCore/QLocale.html) module of Python packages like PySide6 and PyQt6, incorporates a notion of locale on macOS that extends beyond the POSIX locale (used by the locale Python module) and depends on the `Info.plist` of the application bundle. This is because both PySide6 and PyQt6 are Python bindings to the C++ Qt framework, where every application is compiled and mostly built as a framework, and has its own `Info.plist` file.
In the case of Python, the application bundle is Python macOS application framework and this module looks at the CFBundleDevelopmentRegion property key inside the [`Info.plist`](https://github.com/python/cpython/blob/main/Mac/IDLE/IDLE.app/Contents/Info.plist) file of the Python framework (generally in _/Library/Frameworks/Python.framework/Versions/3.11/Resources/Python.app/Contents/Info.plist_), which is always English. Therefore, even when the system language is something different, QLocale always returns `English` as the system language. **In order to support the retrieval of localized strings, [CFBundleAllowMixedLocalizations](https://developer.apple.com/documentation/bundleresources/information_property_list/cfbundleallowmixedlocalizations) property key must be turned on in Python's `Info.plist`.**
In the case of non-framework builds of Python, the locale reverts back to the POSIX locale. However, the Python installed by default on macOS and the installers available from https://www.python.org/downloads/ are all framework builds.
Corresponding bug in PYSIDE: https://bugreports.qt.io/browse/PYSIDE-2419
### Solution
The solution is pretty simple and only includes adding the following line
` <key>CFBundleAllowMixedLocalizations</key><true/> `
to CPython's [Info.plist](https://github.com/python/cpython/blob/main/Mac/IDLE/IDLE.app/Contents/Info.plist) file. Afaik, the addition of this does not cause any issues elsewhere.
_I would be happy to make this change myself and contribute to CPython, if you all agree with the idea._
<!-- gh-linked-prs -->
### Linked PRs
* gh-113213
* gh-113294
<!-- /gh-linked-prs -->
| 4cfce3a4da7ca9513e7f2c8ec94d50f8bddfa41b | 41336a72b90634d5ac74a57b6826e4dd6fe78eac |
python/cpython | python__cpython-108355 | # Minor mistake in dataclasses documentation update
An [update](https://github.com/python/cpython/commit/0f23eda4b996dacd19dbe91bd47a30433bf236d2) to the dataclasses docs, intended to make magic method names link to the relevant data model documentation, accidentally changed a line that shouldn't have been changed.
The docs used to say
> There is a tiny performance penalty when using `frozen=True`: `__init__()` cannot use simple assignment to initialize fields, and must use `object.__setattr__()`.
The documentation update accidentally changed `object.__setattr__` to just `__setattr__` here, so now it reads
> There is a tiny performance penalty when using `frozen=True`: `__init__()` cannot use simple assignment to initialize fields, and must use `__setattr__()`.
This line was specifically meant to refer to `object.__setattr__`, the `__setattr__` method of the base `object` class, as simple attribute assignment would hit the frozen dataclass's `__setattr__` override.
This part of the documentation should be reverted. I think it should just take a 1-character change, simply removing a tilde.
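To illustrate why the distinction matters, a short sketch with a hypothetical frozen dataclass: plain assignment goes through the frozen class's `__setattr__` override and is rejected, while `object.__setattr__` (what the docs meant to reference) bypasses it, which is exactly what the generated `__init__()` has to do:

```python
from dataclasses import dataclass, FrozenInstanceError

@dataclass(frozen=True)
class Point:
    x: int

p = Point(1)
blocked = False
try:
    p.x = 2  # plain assignment hits the frozen __setattr__ override
except FrozenInstanceError:
    blocked = True

object.__setattr__(p, "x", 2)  # the base-class method bypasses the override
print(blocked, p.x)
```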
<!-- gh-linked-prs -->
### Linked PRs
* gh-108355
* gh-108357
* gh-108358
* gh-119082
* gh-119097
* gh-119098
* gh-119277
* gh-119279
* gh-119280
<!-- /gh-linked-prs -->
| 79fdacc0059a3959074d2d9d054653eae1dcfe06 | 64f99350351bc46e016b2286f36ba7cd669b79e3 |
python/cpython | python__cpython-108296 | # heap-use-after-free in _PyFunction_LookupByVersion
# Crash report
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
macOS
### Output from running 'python -VV' on the command line:
Python 3.13.0a0 (heads/main:531930f47f, Aug 21 2023, 17:14:02) [Clang 14.0.3 (clang-1403.0.22.14.1)]
### What happened?
Shortest repro I've found so far:
```
./python.exe -Xuops -m test test_opcache -v
```
I'm guessing I'm doing something wrong with the `func_version_cache` logic (the table contains borrowed pointers which are cleared by `func_dealloc`), but I can't quite figure out what. You can't subclass the function type so clearing the table entry in `func_dealloc` would *seem* to be safe. The output from address sanitizer hasn't jogged my memory yet.
Click for address sanitizer output:
<details>
```
0:00:00 load avg: 2.38 Run tests sequentially
0:00:00 load avg: 2.38 [1/1] test_opcache
=================================================================
==48442==ERROR: AddressSanitizer: heap-use-after-free on address 0x610000f109e8 at pc 0x000102bf6885 bp 0x7ff7bd537070 sp 0x7ff7bd537068
READ of size 4 at 0x610000f109e8 thread T0
#0 0x102bf6884 in _PyFunction_LookupByVersion funcobject.c:285
#1 0x102fd3c74 in uop_optimize optimizer.c:792
#2 0x102fd0c8d in _PyOptimizer_BackEdge optimizer.c:169
#3 0x102e98232 in _PyEval_EvalFrameDefault generated_cases.c.h:2960
#4 0x102e6511f in _PyEval_Vector ceval.c:1622
#5 0x102b87545 in _PyFunction_Vectorcall call.c
#6 0x102b84b0a in _PyObject_VectorcallDictTstate call.c:146
#7 0x102b87bbe in _PyObject_Call_Prepend call.c:504
#8 0x102cdbd0e in slot_tp_init typeobject.c:9057
#9 0x102cc3411 in type_call typeobject.c:1700
#10 0x102b85226 in _PyObject_MakeTpCall call.c:242
#11 0x102b8473d in _PyObject_VectorcallTstate pycore_call.h:184
#12 0x102b86d4f in PyObject_Vectorcall call.c:327
#13 0x102e78d52 in _PyEval_EvalFrameDefault generated_cases.c.h:3759
#14 0x102e6511f in _PyEval_Vector ceval.c:1622
#15 0x102b87545 in _PyFunction_Vectorcall call.c
#16 0x102b8faf0 in _PyObject_VectorcallTstate pycore_call.h:186
#17 0x102b8d9af in method_vectorcall classobject.c:91
#18 0x102b86bb1 in _PyVectorcall_Call call.c:273
#19 0x102b86ebd in _PyObject_Call call.c:348
#20 0x102b8723c in PyObject_Call call.c:373
#21 0x102e846a4 in _PyEval_EvalFrameDefault generated_cases.c.h:4601
#22 0x102bd92f2 in gen_send_ex2 genobject.c:232
#23 0x102bd8c60 in gen_send_ex genobject.c:273
#24 0x102bda780 in _gen_throw genobject.c:561
#25 0x102bd999e in gen_throw genobject.c:598
#26 0x102baa5ab in method_vectorcall_FASTCALL descrobject.c:410
#27 0x102b8468a in _PyObject_VectorcallTstate pycore_call.h:186
#28 0x102b86d4f in PyObject_Vectorcall call.c:327
#29 0x102e78d52 in _PyEval_EvalFrameDefault generated_cases.c.h:3759
#30 0x102e6511f in _PyEval_Vector ceval.c:1622
#31 0x102b87545 in _PyFunction_Vectorcall call.c
#32 0x102b8faf0 in _PyObject_VectorcallTstate pycore_call.h:186
#33 0x102b8d8c3 in method_vectorcall classobject.c:61
#34 0x102b8468a in _PyObject_VectorcallTstate pycore_call.h:186
#35 0x102b86d4f in PyObject_Vectorcall call.c:327
#36 0x102e840b6 in _PyEval_EvalFrameDefault generated_cases.c.h:3519
#37 0x102e6511f in _PyEval_Vector ceval.c:1622
#38 0x102b87545 in _PyFunction_Vectorcall call.c
#39 0x102b8faf0 in _PyObject_VectorcallTstate pycore_call.h:186
#40 0x102b8d9af in method_vectorcall classobject.c:91
#41 0x102b86bb1 in _PyVectorcall_Call call.c:273
#42 0x102b86ebd in _PyObject_Call call.c:348
#43 0x102b8723c in PyObject_Call call.c:373
#44 0x102e846a4 in _PyEval_EvalFrameDefault generated_cases.c.h:4601
#45 0x102e6511f in _PyEval_Vector ceval.c:1622
#46 0x102b87545 in _PyFunction_Vectorcall call.c
#47 0x102b84bd7 in _PyObject_VectorcallDictTstate call.c:135
#48 0x102b87bbe in _PyObject_Call_Prepend call.c:504
#49 0x102cd971b in slot_tp_call typeobject.c:8813
#50 0x102b85226 in _PyObject_MakeTpCall call.c:242
#51 0x102b8473d in _PyObject_VectorcallTstate pycore_call.h:184
#52 0x102b86d4f in PyObject_Vectorcall call.c:327
#53 0x102e78d52 in _PyEval_EvalFrameDefault generated_cases.c.h:3759
#54 0x102e6511f in _PyEval_Vector ceval.c:1622
#55 0x102b87545 in _PyFunction_Vectorcall call.c
#56 0x102b8faf0 in _PyObject_VectorcallTstate pycore_call.h:186
#57 0x102b8d9af in method_vectorcall classobject.c:91
#58 0x102b86bb1 in _PyVectorcall_Call call.c:273
#59 0x102b86ebd in _PyObject_Call call.c:348
#60 0x102b8723c in PyObject_Call call.c:373
#61 0x102e846a4 in _PyEval_EvalFrameDefault generated_cases.c.h:4601
#62 0x102e6511f in _PyEval_Vector ceval.c:1622
#63 0x102b87545 in _PyFunction_Vectorcall call.c
#64 0x102b84bd7 in _PyObject_VectorcallDictTstate call.c:135
#65 0x102b87bbe in _PyObject_Call_Prepend call.c:504
#66 0x102cd971b in slot_tp_call typeobject.c:8813
#67 0x102b85226 in _PyObject_MakeTpCall call.c:242
#68 0x102b8473d in _PyObject_VectorcallTstate pycore_call.h:184
#69 0x102b86d4f in PyObject_Vectorcall call.c:327
#70 0x102e78d52 in _PyEval_EvalFrameDefault generated_cases.c.h:3759
#71 0x102e6511f in _PyEval_Vector ceval.c:1622
#72 0x102b87545 in _PyFunction_Vectorcall call.c
#73 0x102b8faf0 in _PyObject_VectorcallTstate pycore_call.h:186
#74 0x102b8d9af in method_vectorcall classobject.c:91
#75 0x102b86bb1 in _PyVectorcall_Call call.c:273
#76 0x102b86ebd in _PyObject_Call call.c:348
#77 0x102b8723c in PyObject_Call call.c:373
#78 0x102e846a4 in _PyEval_EvalFrameDefault generated_cases.c.h:4601
#79 0x102e6511f in _PyEval_Vector ceval.c:1622
#80 0x102b87545 in _PyFunction_Vectorcall call.c
#81 0x102b84bd7 in _PyObject_VectorcallDictTstate call.c:135
#82 0x102b87bbe in _PyObject_Call_Prepend call.c:504
#83 0x102cd971b in slot_tp_call typeobject.c:8813
#84 0x102b85226 in _PyObject_MakeTpCall call.c:242
#85 0x102b8473d in _PyObject_VectorcallTstate pycore_call.h:184
#86 0x102b86d4f in PyObject_Vectorcall call.c:327
#87 0x102e78d52 in _PyEval_EvalFrameDefault generated_cases.c.h:3759
#88 0x102e6511f in _PyEval_Vector ceval.c:1622
#89 0x102b87545 in _PyFunction_Vectorcall call.c
#90 0x102b8faf0 in _PyObject_VectorcallTstate pycore_call.h:186
#91 0x102b8d9af in method_vectorcall classobject.c:91
#92 0x102b86bb1 in _PyVectorcall_Call call.c:273
#93 0x102b86ebd in _PyObject_Call call.c:348
#94 0x102b8723c in PyObject_Call call.c:373
#95 0x102e846a4 in _PyEval_EvalFrameDefault generated_cases.c.h:4601
#96 0x102e6511f in _PyEval_Vector ceval.c:1622
#97 0x102b87545 in _PyFunction_Vectorcall call.c
#98 0x102b84bd7 in _PyObject_VectorcallDictTstate call.c:135
#99 0x102b87bbe in _PyObject_Call_Prepend call.c:504
#100 0x102cd971b in slot_tp_call typeobject.c:8813
#101 0x102b85226 in _PyObject_MakeTpCall call.c:242
#102 0x102b8473d in _PyObject_VectorcallTstate pycore_call.h:184
#103 0x102b86d4f in PyObject_Vectorcall call.c:327
#104 0x102e78d52 in _PyEval_EvalFrameDefault generated_cases.c.h:3759
#105 0x102e6511f in _PyEval_Vector ceval.c:1622
#106 0x102b87545 in _PyFunction_Vectorcall call.c
#107 0x103199e70 in _PyObject_VectorcallTstate pycore_call.h:186
#108 0x10319ce35 in partial_vectorcall _functoolsmodule.c:230
#109 0x102b8468a in _PyObject_VectorcallTstate pycore_call.h:186
#110 0x102b86d4f in PyObject_Vectorcall call.c:327
#111 0x102e78d52 in _PyEval_EvalFrameDefault generated_cases.c.h:3759
#112 0x102e6511f in _PyEval_Vector ceval.c:1622
#113 0x102b87545 in _PyFunction_Vectorcall call.c
#114 0x102b8faf0 in _PyObject_VectorcallTstate pycore_call.h:186
#115 0x102b8d8c3 in method_vectorcall classobject.c:61
#116 0x102b86b77 in _PyVectorcall_Call call.c:285
#117 0x102b86ebd in _PyObject_Call call.c:348
#118 0x102b8723c in PyObject_Call call.c:373
#119 0x102e846a4 in _PyEval_EvalFrameDefault generated_cases.c.h:4601
#120 0x102e64cc6 in PyEval_EvalCode ceval.c:557
#121 0x102e5b47d in builtin_exec bltinmodule.c.h:540
#122 0x102c770b2 in cfunction_vectorcall_FASTCALL_KEYWORDS methodobject.c:441
#123 0x102b8468a in _PyObject_VectorcallTstate pycore_call.h:186
#124 0x102b86d4f in PyObject_Vectorcall call.c:327
#125 0x102e78d52 in _PyEval_EvalFrameDefault generated_cases.c.h:3759
#126 0x102e6511f in _PyEval_Vector ceval.c:1622
#127 0x102b87545 in _PyFunction_Vectorcall call.c
#128 0x102b86bb1 in _PyVectorcall_Call call.c:273
#129 0x102b86ebd in _PyObject_Call call.c:348
#130 0x102b8723c in PyObject_Call call.c:373
#131 0x10308900f in pymain_run_module main.c:300
#132 0x103086b09 in Py_RunMain main.c:688
#133 0x1030888b1 in pymain_main main.c:718
#134 0x103088bb0 in Py_BytesMain main.c:742
#135 0x1029c1918 in main python.c:15
#136 0x7ff80da2d41e in start+0x76e (dyld:x86_64+0xfffffffffff6e41e) (BuildId: 5db85b72c63a318291e55c942ec30e4832000000200000000100000000040d00)
0x610000f109e8 is located 168 bytes inside of 184-byte region [0x610000f10940,0x610000f109f8)
freed by thread T0 here:
#0 0x10444cee9 in wrap_free+0xa9 (libclang_rt.asan_osx_dynamic.dylib:x86_64h+0x48ee9) (BuildId: 756bb7515781379f84412f22c4274ffd2400000010000000000a0a0000030d00)
#1 0x102c8ca5b in _PyMem_RawFree obmalloc.c:74
#2 0x102c90ee5 in _PyMem_DebugFree obmalloc.c:2297
#3 0x102c8fb42 in PyObject_Free obmalloc.c:831
#4 0x10308eccd in PyObject_GC_Del gcmodule.c:2415
#5 0x102bf8228 in func_dealloc funcobject.c:934
#6 0x102c8c164 in _Py_Dealloc object.c:2656
#7 0x102c0628b in list_dealloc listobject.c:357
#8 0x102c8c164 in _Py_Dealloc object.c:2656
#9 0x102f594a8 in _PyFrame_ClearExceptCode frame.c:140
#10 0x102eb2819 in clear_thread_frame ceval.c:1486
#11 0x102eac816 in _PyEval_FrameClearAndPop ceval.c:1513
#12 0x102e95dab in _PyEval_EvalFrameDefault generated_cases.c.h:1064
#13 0x102e6511f in _PyEval_Vector ceval.c:1622
#14 0x102b87545 in _PyFunction_Vectorcall call.c
#15 0x102b8faf0 in _PyObject_VectorcallTstate pycore_call.h:186
#16 0x102b8d9af in method_vectorcall classobject.c:91
#17 0x102b86bb1 in _PyVectorcall_Call call.c:273
#18 0x102b86ebd in _PyObject_Call call.c:348
#19 0x102b8723c in PyObject_Call call.c:373
#20 0x102e846a4 in _PyEval_EvalFrameDefault generated_cases.c.h:4601
#21 0x102e6511f in _PyEval_Vector ceval.c:1622
#22 0x102b87545 in _PyFunction_Vectorcall call.c
#23 0x102b84bd7 in _PyObject_VectorcallDictTstate call.c:135
#24 0x102b87bbe in _PyObject_Call_Prepend call.c:504
#25 0x102cd971b in slot_tp_call typeobject.c:8813
#26 0x102b85226 in _PyObject_MakeTpCall call.c:242
#27 0x102b8473d in _PyObject_VectorcallTstate pycore_call.h:184
#28 0x102b86d4f in PyObject_Vectorcall call.c:327
#29 0x102e78d52 in _PyEval_EvalFrameDefault generated_cases.c.h:3759
previously allocated by thread T0 here:
#0 0x10444cda0 in wrap_malloc+0xa0 (libclang_rt.asan_osx_dynamic.dylib:x86_64h+0x48da0) (BuildId: 756bb7515781379f84412f22c4274ffd2400000010000000000a0a0000030d00)
#1 0x102c8c9f4 in _PyMem_RawMalloc obmalloc.c:46
#2 0x102c90a21 in _PyMem_DebugMalloc obmalloc.c:2282
#3 0x102c8f9a7 in PyObject_Malloc obmalloc.c:802
#4 0x10308e2a2 in _PyObject_GC_New gcmodule.c:2337
#5 0x102bf5d2c in PyFunction_NewWithQualName funcobject.c:188
#6 0x102bf6aba in PyFunction_New funcobject.c:312
#7 0x102e71421 in _PyEval_EvalFrameDefault generated_cases.c.h:4621
#8 0x102e64cc6 in PyEval_EvalCode ceval.c:557
#9 0x1030261e8 in run_eval_code_obj pythonrun.c:1725
#10 0x103021d94 in run_mod pythonrun.c:1746
#11 0x10301f75f in PyRun_StringFlags pythonrun.c:1621
#12 0x102e5a6a1 in builtin_eval bltinmodule.c.h:456
#13 0x102f3995f in _PyUopExecute executor_cases.c.h:2417
#14 0x102e97857 in _PyEval_EvalFrameDefault generated_cases.c.h:2982
#15 0x102e6511f in _PyEval_Vector ceval.c:1622
#16 0x102b87545 in _PyFunction_Vectorcall call.c
#17 0x102b8faf0 in _PyObject_VectorcallTstate pycore_call.h:186
#18 0x102b8d9af in method_vectorcall classobject.c:91
#19 0x102b86bb1 in _PyVectorcall_Call call.c:273
#20 0x102b86ebd in _PyObject_Call call.c:348
#21 0x102b8723c in PyObject_Call call.c:373
#22 0x102e846a4 in _PyEval_EvalFrameDefault generated_cases.c.h:4601
#23 0x102e6511f in _PyEval_Vector ceval.c:1622
#24 0x102b87545 in _PyFunction_Vectorcall call.c
#25 0x102b84bd7 in _PyObject_VectorcallDictTstate call.c:135
#26 0x102b87bbe in _PyObject_Call_Prepend call.c:504
#27 0x102cd971b in slot_tp_call typeobject.c:8813
#28 0x102b85226 in _PyObject_MakeTpCall call.c:242
#29 0x102b8473d in _PyObject_VectorcallTstate pycore_call.h:184
SUMMARY: AddressSanitizer: heap-use-after-free funcobject.c:285 in _PyFunction_LookupByVersion
Shadow bytes around the buggy address:
0x1c20001e20e0: fa fa fa fa fa fa fa fa fd fd fd fd fd fd fd fd
0x1c20001e20f0: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fa
0x1c20001e2100: fa fa fa fa fa fa fa fa fd fd fd fd fd fd fd fd
0x1c20001e2110: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fa
0x1c20001e2120: fa fa fa fa fa fa fa fa fd fd fd fd fd fd fd fd
=>0x1c20001e2130: fd fd fd fd fd fd fd fd fd fd fd fd fd[fd]fd fa
0x1c20001e2140: fa fa fa fa fa fa fa fa fd fd fd fd fd fd fd fd
0x1c20001e2150: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fa
0x1c20001e2160: fa fa fa fa fa fa fa fa fd fd fd fd fd fd fd fd
0x1c20001e2170: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
0x1c20001e2180: fa fa fa fa fa fa fa fa fd fd fd fd fd fd fd fd
Shadow byte legend (one shadow byte represents 8 application bytes):
Addressable: 00
Partially addressable: 01 02 03 04 05 06 07
Heap left redzone: fa
Freed heap region: fd
Stack left redzone: f1
Stack mid redzone: f2
Stack right redzone: f3
Stack after return: f5
Stack use after scope: f8
Global redzone: f9
Global init order: f6
Poisoned by user: f7
Container overflow: fc
Array cookie: ac
Intra object redzone: bb
ASan internal: fe
Left alloca redzone: ca
Right alloca redzone: cb
==48442==ABORTING
Fatal Python error: Aborted
Current thread 0x00007ff851463640 (most recent call first):
File "/Users/guido/cpython/Lib/traceback.py", line 357 in _walk_tb_with_full_positions
File "/Users/guido/cpython/Lib/traceback.py", line 421 in _extract_from_extended_frame_gen
File "/Users/guido/cpython/Lib/traceback.py", line 697 in __init__
File "/Users/guido/cpython/Lib/traceback.py", line 139 in format_exception
File "/Users/guido/cpython/Lib/test/support/testresult.py", line 96 in __makeErrorDict
File "/Users/guido/cpython/Lib/test/support/testresult.py", line 113 in addFailure
File "/Users/guido/cpython/Lib/unittest/case.py", line 98 in _addError
File "/Users/guido/cpython/Lib/unittest/case.py", line 75 in testPartExecutor
File "/Users/guido/cpython/Lib/contextlib.py", line 159 in __exit__
File "/Users/guido/cpython/Lib/unittest/case.py", line 633 in run
File "/Users/guido/cpython/Lib/unittest/case.py", line 690 in __call__
File "/Users/guido/cpython/Lib/unittest/suite.py", line 122 in run
File "/Users/guido/cpython/Lib/unittest/suite.py", line 84 in __call__
File "/Users/guido/cpython/Lib/unittest/suite.py", line 122 in run
File "/Users/guido/cpython/Lib/unittest/suite.py", line 84 in __call__
File "/Users/guido/cpython/Lib/unittest/suite.py", line 122 in run
File "/Users/guido/cpython/Lib/unittest/suite.py", line 84 in __call__
File "/Users/guido/cpython/Lib/test/support/testresult.py", line 143 in run
File "/Users/guido/cpython/Lib/test/support/__init__.py", line 1116 in _run_suite
File "/Users/guido/cpython/Lib/test/support/__init__.py", line 1242 in run_unittest
File "/Users/guido/cpython/Lib/test/libregrtest/runtest.py", line 294 in _test_module
File "/Users/guido/cpython/Lib/test/libregrtest/runtest.py", line 330 in _runtest_inner2
File "/Users/guido/cpython/Lib/test/libregrtest/runtest.py", line 373 in _runtest_inner
File "/Users/guido/cpython/Lib/test/libregrtest/runtest.py", line 248 in _runtest
File "/Users/guido/cpython/Lib/test/libregrtest/runtest.py", line 278 in runtest
File "/Users/guido/cpython/Lib/test/libregrtest/main.py", line 483 in run_tests_sequential
File "/Users/guido/cpython/Lib/test/libregrtest/main.py", line 621 in run_tests
File "/Users/guido/cpython/Lib/test/libregrtest/main.py", line 799 in _main
File "/Users/guido/cpython/Lib/test/libregrtest/main.py", line 758 in main
File "/Users/guido/cpython/Lib/test/libregrtest/main.py", line 822 in main
File "/Users/guido/cpython/Lib/test/__main__.py", line 2 in <module>
File "/Users/guido/cpython/Lib/runpy.py", line 88 in _run_code
File "/Users/guido/cpython/Lib/runpy.py", line 198 in _run_module_as_main
Extension modules: _testcapi, _testinternalcapi (total: 2)
Abort trap: 6
```
</details>
### Error messages
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-108296
* gh-108383
<!-- /gh-linked-prs -->
| b8f96b5eda5b376b05a9dbf046208388249e30a6 | adfc118fdab66882599e01a84c22bd897055f3f1 |
python/cpython | python__cpython-108230 | # Inconsistent documentation about calling subprocess inside async function in a child thread
# Documentation
This page states: https://docs.python.org/3/library/asyncio-dev.html#concurrency-and-multithreading
```
To handle signals and to execute subprocesses, the event loop must be run in the main thread.
```
For me this means that if I want to use a subprocess call, my async function must be run by a loop on the main thread. In other words, async functions cannot call subprocesses when running in a child thread.
However this page states: https://docs.python.org/3/library/asyncio-subprocess.html#subprocess-and-threads
```
Standard asyncio event loop supports running subprocesses from different threads by default.
```
To me this means that I can spawn a child thread, call an async function inside it, and that async function is free to call subprocesses without problems.
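The second reading can be checked empirically; a minimal sketch (assuming Python 3.8+ on POSIX, where the default child watcher works from non-main threads) that runs a fresh event loop in a child thread and spawns a subprocess from it:

```python
import asyncio
import sys
import threading

async def run_child():
    proc = await asyncio.create_subprocess_exec(
        sys.executable, "-c", "print('hi')",
        stdout=asyncio.subprocess.PIPE,
    )
    out, _ = await proc.communicate()
    return out.decode().strip()

result = {}

def worker():
    # A brand-new event loop running entirely in a non-main thread.
    result["out"] = asyncio.run(run_child())

t = threading.Thread(target=worker)
t.start()
t.join()
print(result["out"])
```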
<!-- gh-linked-prs -->
### Linked PRs
* gh-108230
* gh-108231
* gh-108232
<!-- /gh-linked-prs -->
| 1cc391d9e2ea24ca750005335507b52933fc0b52 | 0dd3fc2a640b273979f94299b545e1e40ac0633c |
python/cpython | python__cpython-108227 | # Add --disable-gil option to configure
# Feature or enhancement
@itamaro is working on setting up additional buildbots for PEP 703 builds. To support this, we'll add an optional `--disable-gil` flag to the configure script. For now, this flag won't do anything other than define the `Py_NOGIL` macro, but once the buildbots are set up, any future code behind this flag will be tested in the CI.
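As a rough sketch, a build configured this way could be detected at runtime by probing the build configuration for the macro named above (`"Py_NOGIL"` is taken from this issue; `sysconfig.get_config_var` simply returns a falsy value on ordinary builds or if the variable is unknown):

```python
import sysconfig

# Best-effort probe for the macro this issue introduces; on a standard
# build the config var is absent and the check falls through.
if sysconfig.get_config_var("Py_NOGIL"):
    build = "built with --disable-gil"
else:
    build = "standard build"
print(build)
```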
<!-- gh-linked-prs -->
### Linked PRs
* gh-108227
* gh-108236
* gh-108238
* gh-108239
* gh-111060
* gh-112780
<!-- /gh-linked-prs -->
| b16ecb88e70d696a93ce993661973330baeafee1 | 21c0844742cf15db8e56e8848ecbb2e25f314aed |
python/cpython | python__cpython-108221 | # [C API] Cleanup internal C header files
# Feature or enhancement
### Has this already been discussed elsewhere?
This is a minor feature, which does not need previous discussion elsewhere
### Links to previous discussion of this feature:
_No response_
### Proposal:
Cleanup internal C header files: see related PRs for details.
<!-- gh-linked-prs -->
### Linked PRs
* gh-108221
<!-- /gh-linked-prs -->
| 21c0844742cf15db8e56e8848ecbb2e25f314aed | db55383829ccd5ce80c551d60f26851346741fdf |
python/cpython | python__cpython-123802 | # PEP 703 -- Making the Global Interpreter Lock Optional in CPython
# Feature or enhancement
The steering council has [accepted](https://discuss.python.org/t/pep-703-making-the-global-interpreter-lock-optional-in-cpython-acceptance/37075) [PEP 703](https://peps.python.org/pep-0703/). This is intended as a top-level issue to keep track of integration status.
The "up for grabs" list contains issues that no one is currently working on and are ready to be implemented. If you are interested in working on one of them, please comment on the specific issue and CC me (@colesbury) on related PRs.
```[tasklist]
### Up For Grabs (comment on specific issue to take)
```
```[tasklist]
### Linked issues
- [ ] https://github.com/python/cpython/issues/111506
- [ ] https://github.com/python/cpython/issues/111870
- [ ] https://github.com/python/cpython/issues/111924
- [ ] https://github.com/python/cpython/issues/112606
- [ ] https://github.com/python/cpython/issues/114203
- [ ] https://github.com/python/cpython/issues/114214
- [ ] https://github.com/python/cpython/issues/115999
- [ ] https://github.com/python/cpython/issues/116024
- [ ] https://github.com/python/cpython/issues/116738
- [ ] https://github.com/python/cpython/issues/117139
- [ ] https://github.com/python/cpython/issues/117721
- [ ] https://github.com/python/cpython/issues/117657
- [ ] https://github.com/python/cpython/issues/119053
```
<details>
<summary>Completed Issues</summary>
```[tasklist]
### Completed Issues
- [ ] https://github.com/python/cpython/issues/108223
- [ ] https://github.com/python/cpython/issues/108374
- [ ] https://github.com/python/cpython/issues/108337
- [ ] https://github.com/python/cpython/issues/108724
- [ ] https://github.com/python/cpython/issues/109549
- [ ] https://github.com/python/cpython/issues/109740
- [ ] https://github.com/python/cpython/issues/110119
- [ ] https://github.com/pypa/packaging/issues/727
- [ ] https://github.com/python/cpython/issues/111062
- [ ] https://github.com/python/cpython/issues/111569
- [ ] https://github.com/python/cpython/issues/112062
- [ ] https://github.com/python/cpython/issues/111903
- [ ] https://github.com/python/cpython/issues/111916
- [ ] https://github.com/python/cpython/issues/111928
- [ ] https://github.com/python/cpython/issues/112070
- [ ] https://github.com/python/cpython/issues/111956
- [ ] https://github.com/python/cpython/issues/111965
- [ ] https://github.com/python/cpython/issues/112213
- [ ] https://github.com/python/cpython/issues/112071
- [ ] https://github.com/python/cpython/issues/111972
- [ ] https://github.com/python/cpython/issues/112538
- [ ] https://github.com/python/cpython/issues/111962
- [ ] https://github.com/python/cpython/issues/112535
- [ ] https://github.com/python/cpython/issues/112205
- [ ] https://github.com/python/cpython/issues/111971
- [ ] https://github.com/python/cpython/issues/111650
- [ ] https://github.com/python/cpython/issues/113750
- [ ] https://github.com/python/cpython/issues/111964
- [ ] https://github.com/python/cpython/issues/113884
- [ ] https://github.com/python/cpython/issues/114569
- [ ] https://github.com/python/cpython/issues/114329
- [ ] https://github.com/python/cpython/issues/112066
- [ ] https://github.com/python/cpython/issues/110481
- [ ] https://github.com/python/cpython/issues/112532
- [ ] https://github.com/python/cpython/issues/112050
- [ ] https://github.com/python/cpython/issues/111968
- [ ] https://github.com/python/cpython/issues/115491
- [ ] https://github.com/python/cpython/issues/112175
- [ ] https://github.com/python/cpython/issues/113743
- [ ] https://github.com/python/cpython/issues/115103
- [ ] https://github.com/python/cpython/issues/112087
- [ ] https://github.com/python/cpython/issues/116616
- [ ] https://github.com/python/cpython/issues/116167
- [ ] https://github.com/python/cpython/issues/112536
- [ ] https://github.com/python/cpython/issues/116522
- [ ] https://github.com/python/cpython/issues/116664
- [ ] https://github.com/python/cpython/issues/117323
- [ ] https://github.com/python/cpython/issues/117300
- [ ] https://github.com/python/cpython/issues/111926
- [ ] https://github.com/python/cpython/issues/117293
- [ ] https://github.com/python/cpython/issues/117435
- [ ] https://github.com/python/cpython/issues/114271
- [ ] https://github.com/python/cpython/issues/112069
- [ ] https://github.com/python/cpython/issues/116329
- [ ] https://github.com/python/cpython/issues/116818
- [ ] https://github.com/python/cpython/issues/117514
- [ ] https://github.com/python/cpython/issues/112529
- [ ] https://github.com/python/cpython/issues/112075
- [ ] https://github.com/python/cpython/issues/116322
```
</details>
```[tasklist]
### Deferred tasks
- [ ] Revisit and update [`--disable-gil`](https://docs.python.org/dev/using/configure.html#cmdoption-disable-gil) configure documentation
- [x] Reenable test_cppext on `--disable-gil` builds
- [x] Audit usage of PyObject_Malloc for non-PyObject allocations
- [ ] Consider avoiding refcounting `tp_mro` during PyType_IsSubtype
```
### Upstream functionality from nogil-3.12
This is a list of commits from the nogil-3.12 PR plan. The crossed-out entries are commits that do not need to be upstreamed, usually because the functionality is already in the main branch.
- [ ] [`cefe5dfee9`](https://github.com/colesbury/nogil-3.12/commit/cefe5dfee9) configure: disallow "--with-trace-refs" for "--disable-gil" builds
- [x] [`dcddbe2ddb`](https://github.com/colesbury/nogil-3.12/commit/dcddbe2ddb) configure: add support for --with-thread-sanitizer (#112536)
- [x] [`f546dbf16a`](https://github.com/colesbury/nogil-3.12/commit/f546dbf16a) Enable/disable the GIL at runtime (gh-116167)
- [x] [`f30d8d8f50`](https://github.com/colesbury/nogil-3.12/commit/f30d8d8f50) Add pyatomic.h
- [x] ~~[`385eb1d99c`](https://github.com/colesbury/nogil-3.12/commit/385eb1d99c) pyport: add new macros~~
- [x] ~~[`de2be447b3`](https://github.com/colesbury/nogil-3.12/commit/de2be447b3) Make PyThreadState_GET thread-local~~
- [x] [`a24dc2ecc3`](https://github.com/colesbury/nogil-3.12/commit/a24dc2ecc3) pystate: keep track of attached vs. detached state
- [x] [`4584be5950`](https://github.com/colesbury/nogil-3.12/commit/4584be5950) parking_lot: add mutexes and one-time notifications
- [x] [`6845b133cc`](https://github.com/colesbury/nogil-3.12/commit/6845b133cc) critical_section: helpers for fine-grained locking (gh-111569)
- [x] [`8ed62cab6a`](https://github.com/colesbury/nogil-3.12/commit/8ed62cab6a) pystate: use _PyRawMutex for internal mutexes (https://github.com/python/cpython/issues/111924)
- [x] [`e15443b1f2`](https://github.com/colesbury/nogil-3.12/commit/e15443b1f2) ceval: move eval_breaker to per-thread state (#112175)
- [x] [`b6b12a9a94`](https://github.com/colesbury/nogil-3.12/commit/b6b12a9a94) Implement biased reference counting
- [x] [`b6b12a9a94`](https://github.com/colesbury/nogil-3.12/commit/b6b12a9a94) Implement BRC inter-thread queue (#110481)
- [x] ~~[`7b6b6f1a01`](https://github.com/colesbury/nogil-3.12/commit/7b6b6f1a01) unicode: immortalize interned strings~~
- [x] ~~[`fc173e3711`](https://github.com/colesbury/nogil-3.12/commit/fc173e3711) unicode: always immortalize interned strings~~
- [x] [`dd9b78460c`](https://github.com/colesbury/nogil-3.12/commit/dd9b78460c) Add safe memory reclamation scheme based on FreeBSD's GUS
- [x] [`901e134921`](https://github.com/colesbury/nogil-3.12/commit/901e134921) Add mimalloc v2.0.9 (DinoV)
- [x] [`b6980856df`](https://github.com/colesbury/nogil-3.12/commit/d447b6980856df7e0050ecaba4fd6cf21747d4f2) Additional mimalloc changes (https://github.com/python/cpython/issues/112532)
- [ ] [`d13c63dee9`](https://github.com/colesbury/nogil-3.12/commit/d13c63dee9) pymem: remove uses of _PyMem_SetDefaultAllocator during finalization (https://github.com/python/cpython/issues/111924)
- [x] [`654be8ffd6`](https://github.com/colesbury/nogil-3.12/commit/654be8ffd6) gc: make the garbage collector non-generational (#112529)
- [x] [`967fe31473`](https://github.com/colesbury/nogil-3.12/commit/967fe31473) gc: Traverese mimalloc heaps to find all objects. (#112529)
- [x] [`2864b6b36e`](https://github.com/colesbury/nogil-3.12/commit/2864b6b36e) Implement stop-the-world pauses (#111964)
- [x] [`2864b6b36e`](https://github.com/colesbury/nogil-3.12/commit/2864b6b36e) gc: implement stop-the-world GC (#112529)
- [x] [`c1befd7689`](https://github.com/colesbury/nogil-3.12/commit/c1befd7689) Stop the world before fork() and Python shutdown (#116522)
- [x] [`82800d8ec8`](https://github.com/colesbury/nogil-3.12/commit/82800d8ec8) ceval: stop the world when enabling profiling/tracing for all threads (#116818)
- [x] [`7423dff344`](https://github.com/colesbury/nogil-3.12/commit/7423dff344) pystate: use stop-the-world in a few places (#117300)
- [ ] [`86efa7dfe3`](https://github.com/colesbury/nogil-3.12/commit/86efa7dfe3) pystate: implement _PyRuntime.multithreaded
- [x] [`d896dfc8db`](https://github.com/colesbury/nogil-3.12/commit/d896dfc8db) dict: make dict thread-safe (#112075)
- [x] [`df4c51f82b`](https://github.com/colesbury/nogil-3.12/commit/df4c51f82b) list: make list thread-safe (#112087)
- [x] [`9c1f7ba1b4`](https://github.com/colesbury/nogil-3.12/commit/9c1f7ba1b4) mro: thread-safe MRO cache (#113743)
- [x] [`7a7aca096b`](https://github.com/colesbury/nogil-3.12/commit/7a7aca096b) getargs.c: make parser_init thread-safe (#111956)
- [x] [`0dddcb6f9d`](https://github.com/colesbury/nogil-3.12/commit/0dddcb6f9d) weakref: make weakrefs thread-safe without the GIL (#111926)
- [x] [`410ba1036a`](https://github.com/colesbury/nogil-3.12/commit/410ba1036a) dtoa: make dtoa thread-safe (#111962)
- [x] [`6540bf3e6a`](https://github.com/colesbury/nogil-3.12/commit/6540bf3e6a) unicode: make unicodeobject.c thread-safe (#111971)
- [x] [`5d006db9fa`](https://github.com/colesbury/nogil-3.12/commit/5d006db9fa) codecs.c: fix race condition (#111972)
- [x] [`d1b5ed128e`](https://github.com/colesbury/nogil-3.12/commit/d1b5ed128e) _threadmodule: make _thread.lock thread-safe (#114271)
- [ ] [`cfc11bcb1a`](https://github.com/colesbury/nogil-3.12/commit/cfc11bcb1a) typeobject: thread safety
- [x] [`cfecf6f4eb`](https://github.com/colesbury/nogil-3.12/commit/cfecf6f4eb) threading: remove _tstate_lock from threading.Thread (#114271)
- [x] ~~[`74df7785f5`](https://github.com/colesbury/nogil-3.12/commit/74df7785f5) pyqueue: add internal queue data structure~~
- [x] [`4450445f51`](https://github.com/colesbury/nogil-3.12/commit/4450445f51) pymem: add _PyMem_FreeQsbr (#115103)
- [x] [`7e60a01aee`](https://github.com/colesbury/nogil-3.12/commit/7e60a01aee) queue: make SimpleQueue thread-safe (#113884)
- [x] [`4ca2924f0d`](https://github.com/colesbury/nogil-3.12/commit/4ca2924f0d) set: make set thread-safe (#112069)
- [x] ~~[`3cfbc49229`](https://github.com/colesbury/nogil-3.12/commit/3cfbc49229) moduleobject: fix data races~~
- [x] [`5722416ef5`](https://github.com/colesbury/nogil-3.12/commit/5722416ef5) _threadmodule: thread-safety fixes (#114271)
- [ ] [`31ec6f0290`](https://github.com/colesbury/nogil-3.12/commit/31ec6f0290) pystate: refcount threads to handle race between interpreter shutdown and thread exit
- [x] [`45bdd27ee5`](https://github.com/colesbury/nogil-3.12/commit/45bdd27ee5) threading: make _thread.lock thread-safe
- [x] [`07f5f8c318`](https://github.com/colesbury/nogil-3.12/commit/07f5f8c318) slice: move slice_cache to per-thread state (#111968)
- [x] [`ea1160c6d7`](https://github.com/colesbury/nogil-3.12/commit/ea1160c6d7) asyncio: fix race conditions in enter_task and leave_task
- [x] [`212fef480e`](https://github.com/colesbury/nogil-3.12/commit/212fef480e) object.c: fix race when accessing attributes and methods (#111789)
- [x] [`70856f126d`](https://github.com/colesbury/nogil-3.12/commit/70856f126d) asdl: use _PyOnceFlag in Python-ast.c (#111956)
- [x] [`360a79cb88`](https://github.com/colesbury/nogil-3.12/commit/360a79cb88) socketmodule.c: use relaxed atomics for global 'defaulttimeout' (#116616)
- [x] [`041a08e339`](https://github.com/colesbury/nogil-3.12/commit/041a08e339) functools: make lru_cache thread-safe (#112070)
- [x] [`9bf62ffc4b`](https://github.com/colesbury/nogil-3.12/commit/9bf62ffc4b) random: add a mutex to guard random.Random (#112071)
- [x] [`a8251a8d25`](https://github.com/colesbury/nogil-3.12/commit/a8251a8d25) clinic: support '@' syntax for recursive mutexes (#111903)
- [x] [`ffade9d6f6`](https://github.com/colesbury/nogil-3.12/commit/ffade9d6f6) bufferedio: add locks to make print() thread-safe (#111965)
- [x] [`5b83c16dcd`](https://github.com/colesbury/nogil-3.12/commit/5b83c16dcd) textio: add locks to make textio thread-safe (#111965)
- [x] [`6323ca60f9`](https://github.com/colesbury/nogil-3.12/commit/6323ca60f9) stringio: make stringio thread-safe (#111965)
- [x] [`f1e4742eaa`](https://github.com/colesbury/nogil-3.12/commit/f1e4742eaa) deque: make most functions thread-safe (#112050)
- [x] [`78825e0508`](https://github.com/colesbury/nogil-3.12/commit/78825e0508) importlib: fix data race in imports (PyImport_ImportModuleLevelObject)
- [x] [`2f5c90a284`](https://github.com/colesbury/nogil-3.12/commit/2f5c90a284) semaphore.c: decrease count before release sem_lock (#117435)
- [x] [`22eca6e215`](https://github.com/colesbury/nogil-3.12/commit/22eca6e215) sha1: make sha1module thread-safe (https://github.com/python/cpython/issues/111916)
- [x] [`ada9b73feb`](https://github.com/colesbury/nogil-3.12/commit/ada9b73feb) _struct: fix race condition in cache_struct_converter (#112062)
- [ ] [`0e0b3899d1`](https://github.com/colesbury/nogil-3.12/commit/0e0b3899d1) signalmodule: fix thread-safety issue on macOS (Unclear if this is still an issue on macOS)
- [x] [`964bb33962`](https://github.com/colesbury/nogil-3.12/commit/964bb33962) json: make JSON scanner thread safe (https://github.com/python/cpython/issues/111928)
- [ ] [`86e7772c64`](https://github.com/colesbury/nogil-3.12/commit/86e7772c64) http: fix dependency on finalization order
- [ ] [`9ab96964e7`](https://github.com/colesbury/nogil-3.12/commit/9ab96964e7) faulthandler: don't dump all threads when running without the GIL
- [x] [`cff32694a4`](https://github.com/colesbury/nogil-3.12/commit/cff32694a4) test_gdb: skip test_threads when running without GIL
- [x] ~~[`2ae5ee5ed4`](https://github.com/colesbury/nogil-3.12/commit/2ae5ee5ed4) tests: fix and work around some race conditions in tests~~
- [ ] [`9f9b3d085f`](https://github.com/colesbury/nogil-3.12/commit/9f9b3d085f) ceval: fix some thread-safety issues
- [x] [`2a4c17e896`](https://github.com/colesbury/nogil-3.12/commit/2a4c17e896) pystate: move freelists to per-thread state (#111968)
- [ ] [`149ea9dc43`](https://github.com/colesbury/nogil-3.12/commit/149ea9dc43) Deferred reference counting
- [x] [`7e7568672d`](https://github.com/colesbury/nogil-3.12/commit/7e7568672d) specialize: make specialization thread-safe
- [x] [`90d34f0d18`](https://github.com/colesbury/nogil-3.12/commit/90d34f0d18) specialize: optimize for single-threaded programs
- [x] [`42d3e11d8c`](https://github.com/colesbury/nogil-3.12/commit/42d3e11d8c) code: make code object use deferred reference counting
- [x] [`c9fc49666c`](https://github.com/colesbury/nogil-3.12/commit/c9fc49666c) test: add support for checking for TSAN
- [x] [`7507a77a98`](https://github.com/colesbury/nogil-3.12/commit/7507a77a98) thread: don't use sem_clockwait with TSAN
- [x] ~~[`a62d37674c`](https://github.com/colesbury/nogil-3.12/commit/a62d37674c) _posixsubprocess: disable vfork when running with ASAN~~
- [ ] [`398204d57b`](https://github.com/colesbury/nogil-3.12/commit/398204d57b) object: fix reported TSAN races
- [x] [`4526c07cae`](https://github.com/colesbury/nogil-3.12/commit/4526c07cae) Disable the GIL by default in `--disable-gil` builds (gh-116329)
<!-- gh-linked-prs -->
### Linked PRs
* gh-123802
* gh-123847
<!-- /gh-linked-prs -->
| aa3f11f80a644dac7184e8546ddfcc9b68be364c | 8ef8354ef15e00d484ac2ded9442b789c24b11e0 |
python/cpython | python__cpython-108195 | # Allow specifying SimpleNamespace attributes as a mapping/iterable in a positional argument
# Feature or enhancement
### Has this already been discussed elsewhere?
I have already discussed this feature proposal on Discourse
### Links to previous discussion of this feature:
https://github.com/python/cpython/issues/96145
https://mail.python.org/archives/list/python-dev@python.org/message/TYIXRGOLWGV5TNJ3XMV443O3RWLEYB65/
### Proposal:
It was discussed in the context of adding the controversial AttrDict. The reason for adding AttrDict was to allow JSON object attributes to be accessed as Python object attributes rather than dict keys. The reason for not using SimpleNamespace for this was that you need a wrapper, `lambda x: SimpleNamespace(**x)`, which makes this solution less obvious.
Supporting a positional argument in `SimpleNamespace()`, which can be a mapping or an iterable of key-value pairs (like in the `dict` constructor) will make using SimpleNamespace with JSON more convenient.
`SimpleNamespace({'a': 1, 'b': 2})` is the same as `SimpleNamespace([('a', 1), ('b', 2)])` is the same as `SimpleNamespace(a=1, b=2)`.
<!-- gh-linked-prs -->
### Linked PRs
* gh-108195
<!-- /gh-linked-prs -->
| 93b7ed7c6b1494f41818fa571b1843ca3dfe1bd1 | 85ec1c2dc67c2a506e847dbe2c3c740e81c3ab9b |
python/cpython | python__cpython-108256 | # Improve error message for parser stack overflow
# Crash report
### CPython versions tested on:
3.10
### Operating systems tested on:
Linux
### Output from running 'python -VV' on the command line:
Python 3.12.0rc1 (main, Aug 6 2023, 17:56:34) [GCC 9.4.0]
### What happened?
I'm working on a project where I execute AI generated code. One of the files contained the line below and it crashed the compiler. I know it's kind of a random string and an unlikely scenario, so feel free to ignore it.
Tested in:
3.8: it worked there
3.10: crash
3.12: crash
`minbug.py`
```python
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
```
### Error messages
MemoryError
<!-- gh-linked-prs -->
### Linked PRs
* gh-108256
* gh-108263
<!-- /gh-linked-prs -->
| 86617518c4ac824e2b6dc20691ba5a08df04f285 | 7f87ebbc3f52680c939791f397b9a478edf0c8d4 |
python/cpython | python__cpython-113011 | # webbrowser.register incorrectly prefers wrong browser if its name is a substring of `xdg-settings get default-web-browser` result
# Bug report
### Checklist
- [X] I am confident this is a bug in CPython, not a bug in a third-party project
- [X] I have searched the [CPython issue tracker](https://github.com/python/cpython/issues?q=is%3Aissue+sort%3Acreated-desc),
and am confident this bug has not been reported before
### CPython versions tested on:
3.11
### Operating systems tested on:
Linux
### Output from running 'python -VV' on the command line:
Python 3.11.4 (main, Jun 9 2023, 07:59:55) [GCC 12.3.0]
### A clear and concise description of the bug:
I'm using a custom desktop entry (Linux application launcher) that starts Google Chrome with my custom data dir (or any other setting that is relevant). The custom desktop entry is called `google-chrome-work.desktop` (I have a few other Google Chrome custom launchers for other cases where I need the data dir separated), and this is set as the default browser in the operating system:
```
$ xdg-settings get default-web-browser
google-chrome-work.desktop
```
When using the `webbrowser` module to open a URL, without first registering any specific browser, it calls `register_standard_browsers()`, which in turn runs `xdg-settings get default-web-browser` and, if that succeeds, stores the resulting string (which is expected to be a `.desktop` entry) in `_os_preferred_browser`. After that `register_X_browsers()` gets run, which knows about a bunch of browsers one might expect to find on Unix systems and tries to register each one found with a call to `register()`. The first such "browser" to be registered is `xdg-open` - which will automatically use the system preferred web browser (using the XDG spec that was consulted earlier), and which one can always expect to find when `xdg-settings` is available (both are part of the same spec and always come from the same software package).
All this is great. The problem is that `register()` wants to check if the registered browser is supposedly the `_os_preferred_browser` and uses substring comparison to check that: [`name in _os_preferred_browser`](https://github.com/python/cpython/blob/e5d45b7444733861153d6e8959c34323fd361322/Lib/webbrowser.py#L32C52-L32C81). In my case - because the text "google-chrome" (which is a browser name that `register_X_browsers()` knows) is a substring of `google-chrome-work.desktop` - it chooses the bare command for Google Chrome as the preferred browser, even though this is definitely not what is needed.
I've looked at the code, and `_os_preferred_browser` is **only** used to handle the result of `xdg-settings get default-web-browser`, so this substring search makes no sense - if `_os_preferred_browser` is set and has a value - it is guaranteed that using `xdg-open` is the correct command - we have basically verified that we have a valid FreeDesktop.org default browser configuration so using it is the only thing that is compatible with the FreeDesktop.org specifications (theoretically, we should execute the desktop entry file directly, but that is a lot more complicated and calling `xdg-open` with an HTTP URL will do the Right Thing™️).
This issue should be fixed by:
1. Removing the substring search - it makes no sense.
2. Make sure to register "xdg-open" ([here](https://github.com/python/cpython/blob/e5d45b7444733861153d6e8959c34323fd361322/Lib/webbrowser.py#L414)) in a way that is set as the preferred browser if `_os_preferred_browser` is set, for example:
```python
if shutil.which("xdg-open"):
    register("xdg-open", None, BackgroundBrowser("xdg-open"),
             preferred=_os_preferred_browser is not None)
```
I can offer a PR if this seems reasonable.
<!-- gh-linked-prs -->
### Linked PRs
* gh-113011
* gh-123527
* gh-123528
<!-- /gh-linked-prs -->
| 10bf615bab9f832971a098f0a42b0d617aea6993 | 74bfb53e3afb6f5dd90dff3ef0e2dc3b2fba823e |
python/cpython | python__cpython-108154 | # compile() with PyCF_ONLY_AST flag ignores the optimize arg
``compile()`` (or ``ast.parse()``, which calls it) can return the AST for some source code, converted to a Python object representation. But this AST is not optimized (with ``_PyAST_Optimize``).
Static Python re-implemented ``_PyAST_Optimize`` for this reason. If we expose a way for them to get an optimized AST, they won't have to.
For performance, it would be better not to call another API function on a Python AST, but to add an API option that runs ``_PyAST_Optimize`` before converting the AST to Python.
<!-- gh-linked-prs -->
### Linked PRs
* gh-108154
* gh-108282
* gh-113241
<!-- /gh-linked-prs -->
| 10a91d7e98d847b05292eab828ff9ae51308d3ee | 47022a079eb9d2a2af781abae3de4a71f80247c2 |
python/cpython | python__cpython-108341 | # GzipFile.seek makes invalid write if buffer is not flushed in Python 3.12rc1
# Bug report
### Checklist
- [X] I am confident this is a bug in CPython, not a bug in a third-party project
- [X] I have searched the [CPython issue tracker](https://github.com/python/cpython/issues?q=is%3Aissue+sort%3Acreated-desc),
and am confident this bug has not been reported before
### CPython versions tested on:
3.12
### Operating systems tested on:
Linux
### Output from running 'python -VV' on the command line:
Python 3.12.0rc1 (main, Aug 16 2023, 05:03:59) [GCC 12.2.0]
### A clear and concise description of the bug:
I have code that writes out sections of a data file in chunks, and uses seeks to ensure that the position is correct before writing.
In the following example, I write 5 bytes, seek to position 5 and write five more bytes. If I flush the buffer, the result is as expected. If I do not, 5 null bytes are written between the two groups of intended bytes.
```python
#!/usr/bin/env python
import io
import gzip
for flush in (True, False):
data = io.BytesIO()
gzip_writer = gzip.GzipFile(fileobj=data, mode='wb')
gzip_writer.write(b'abcde')
# If the buffer isn't flushed, seek works from unchanged offset
if flush and hasattr(gzip_writer, '_buffer'):
gzip_writer._buffer.flush()
gzip_writer.seek(5)
gzip_writer.write(b'fghij')
gzip_writer.close()
# Recover result
data.seek(0)
gzip_reader = gzip.GzipFile(fileobj=data, mode='rb')
result = gzip_reader.read()
print(f'{flush=}: {result}')
```
In the case where I seek without flushing first, I get spurious `\x00` bytes:
```
flush=True: b'abcdefghij'
flush=False: b'abcde\x00\x00\x00\x00\x00fghij'
```
Here is the output in Python 3.10.10:
```
flush=True: b'abcdefghij'
flush=False: b'abcdefghij'
```
<!-- gh-linked-prs -->
### Linked PRs
* gh-108341
* gh-108402
<!-- /gh-linked-prs -->
| 2eb60c1934f47671e6b3c9b90b6d9f1912d829a0 | aa9a359ca2663195b0f04eef46109c28c4ff74d3 |
python/cpython | python__cpython-108084 | # sqlite3: some code paths ignore exceptions
_Originally posted by @vstinner in https://github.com/python/cpython/pull/108015#discussion_r1297029863_:
> Ignoring the exception here is a bug.
>
> connection_finalize() clears any exception with PyErr_SetRaisedException(), but pysqlite_connection_close_impl() must not ignore silently error, since here we are talking about a raised Python exception! The function must report if an exception was raised. Then the caller is free to ignore it or not.
>
> I suggest to continue ignoring it in finalize, but then write a separated PR to *log* the "unraisable exception".
<!-- gh-linked-prs -->
### Linked PRs
* gh-108084
* gh-108141
<!-- /gh-linked-prs -->
| fd195092204aa7fc9f13c5c6d423bc723d0b3520 | 3ff5ef2ad3d89c3ccf4e07ac8fdd798267ae6c61 |
python/cpython | python__cpython-111086 | # [C API] Enhance PyErr_WriteUnraisable() API to pass an error message
### Has this already been discussed elsewhere?
This is a minor feature, which does not need previous discussion elsewhere
### Links to previous discussion of this feature:
_No response_
### Proposal:
I added a private ``_PyErr_WriteUnraisableMsg()`` API, but it cannot be used in third-party code, nor in stdlib extensions which try to avoid private/internal functions (like sqlite3). Moreover, the ``_PyErr_WriteUnraisableMsg()`` API doesn't allow fully customizing the error message.
The limitation of ``_PyErr_WriteUnraisableMsg()`` affected PR #106674 which has to add ``; consider using ...`` in the error message which is not great.
```c
_PyErr_WriteUnraisableMsg(
"in PyMapping_HasKeyString(); consider using "
"PyMapping_GetOptionalItemString() or PyMapping_GetItemString()",
NULL);
```
@serhiy-storchaka suggested to add an API which allows string formatting, similar to ``PyErr_Format()``.
``_PyErr_WriteUnraisableMsg()`` was added in issue #81010.
---
By the way, we should go through all calls to PyErr_WriteUnraisable() and add context to make these logs easier for developers to act on: what the issue is, how to fix it, where it occurred, etc.
<!-- gh-linked-prs -->
### Linked PRs
* gh-111086
* gh-111455
* gh-111507
* gh-111580
* gh-111643
<!-- /gh-linked-prs -->
| f6a02327b5fcdc10df855985ca9d2d9dc2a0a46f | 453e96e3020d38cfcaebf82b24cb681c6384fa82 |
python/cpython | python__cpython-116118 | # Windows build favours NuGet over local install of branch version
### Checklist
- [X] I am confident this is a bug in CPython, not a bug in a third-party project
- [X] I have searched the [CPython issue tracker](https://github.com/python/cpython/issues?q=is%3Aissue+sort%3Acreated-desc),
and am confident this bug has not been reported before
### CPython versions tested on:
3.11, 3.12, CPython main branch
### Operating systems tested on:
Windows
### Output from running 'python -VV' on the command line:
_No response_
### A clear and concise description of the bug:
Windows builds of Python use a script called find_python.bat to locate a valid python.exe to use during the build.
It tries, in order:
1. whatever's in `VIRTUAL_ENV`
2. whatever's already in `externals\pythonx86\tools\`
3. query `py.exe` from $PATH, but only a permitted version, as per this line https://github.com/python/cpython/blob/main/PCbuild/find_python.bat#L45
4. downloading Python from the default configured NuGet repo (empty therefore nuget.exe defaults)
The problem is with case 3.
The permitted $PATH versions of Python to build 3.10 are [3.9 or 3.8](https://github.com/python/cpython/blob/3.10/PCbuild/find_python.bat#L45) - if 3.10 is installed, it is ignored and 3.12RC1 from NuGet is used instead
The permitted $PATH versions of Python to build 3.11 are [3.10 or 3.9](https://github.com/python/cpython/blob/3.11/PCbuild/find_python.bat#L45) - if 3.11 is installed, it is ignored and 3.12RC1 from NuGet is used instead
The permitted $PATH versions of Python to build 3.12 are [3.11, 3.10 or 3.9](https://github.com/python/cpython/blob/3.12/PCbuild/find_python.bat#L45) - if 3.12 is installed, it is ignored and 3.12RC1 from NuGet is used instead
The permitted $PATH versions of Python to build main are [3.11, 3.10 or 3.9](https://github.com/python/cpython/blob/main/PCbuild/find_python.bat#L45) - if main is installed, it is ignored and 3.12RC1 from NuGet is used instead
This behaviour is clearly wrong (especially since the version downloaded in the fallback case 4 is not version locked, so right now it's 3.12RC1 and doesn't account for compatibility breakages). Version X should always be in the permitted list to build version X.
3.9 _did_ allow builds from a system install of 3.9, but this seems like an accident rather than design.
<!-- gh-linked-prs -->
### Linked PRs
* gh-116118
<!-- /gh-linked-prs -->
| 83c5ecdeec80fbd1f667f234f626c4154d40ebb5 | 91c3c64237f56bde9d1c1b8127fdcb02a112b5a4 |
python/cpython | python__cpython-108036 | # Remove the "CFrame" struct as it is no longer needed for performance.
In 3.11 we added a "CFrame" struct that sits on the C stack. This struct held a pointer to the current frame, and a "use_tracing" field.
Since we needed to check this field on every bytecode dispatch, it was very hot and needed to be on the C stack for performance.
With PEP 669, that field is gone and there is no need for the "CFrame" any more. We should remove it and revert to keeping a pointer to the current frame in the thread state.
<!-- gh-linked-prs -->
### Linked PRs
* gh-108036
<!-- /gh-linked-prs -->
| 006e44f9502308ec3d14424ad8bd774046f2be8e | 33e6e3fec02ff3035dec52692542d3dd10124bef |
python/cpython | python__cpython-108032 | # _Py_IsFinalizing has been removed, meaning C/C++ threads cannot avoid termination during shutdown
### Checklist
- [X] I am confident this is a bug in CPython, not a bug in a third-party project
- [X] I have searched the [CPython issue tracker](https://github.com/python/cpython/issues?q=is%3Aissue+sort%3Acreated-desc),
and am confident this bug has not been reported before
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Linux
### Output from running 'python -VV' on the command line:
Python 3.13.0a0 (main, Jul 29 2023, 00:25:05) [GCC 13.1.1 20230614 (Red Hat 13.1.1-4)]
### A clear and concise description of the bug:
Threads that call PyEval_RestoreThread while the runtime is finalizing are terminated. The documented way to avoid this is to call _Py_IsFinalizing() to check. https://docs.python.org/3.13/c-api/init.html#c.PyEval_RestoreThread
_Py_IsFinalizing() has been removed from Python 3.13. Please provide a public API function that has the equivalent behavior, or some other way for a non-Python thread to safely attempt to call into Python code when the interpreter may be finalizing.
Rather than the calling code checking for finalization, an alternative solution would be to provide a variant of PyEval_RestoreThread that returns success or failure, so the calling code can know that it was unable to acquire the GIL, and react accordingly.
<!-- gh-linked-prs -->
### Linked PRs
* gh-108032
<!-- /gh-linked-prs -->
| 3ff5ef2ad3d89c3ccf4e07ac8fdd798267ae6c61 | cc58ec9724772a8d5c4a5c9a6525f9f96e994227 |
python/cpython | python__cpython-108002 | # `lambda` is not tested to have `__type_params__` attribute
We need to add a test that `lambda` has `__type_params__`.
Probably, this test should go to `test_funcattrs`.
<!-- gh-linked-prs -->
### Linked PRs
* gh-108002
* gh-108019
<!-- /gh-linked-prs -->
| a8d440b3837273926af5ce996162b019290ddad5 | e35c722d22cae605b485e75a69238dc44aab4c96 |
python/cpython | python__cpython-107996 | # @cached_property is not checked for doctests
### Checklist
- [X] I am confident this is a bug in CPython, not a bug in a third-party project
- [X] I have searched the [CPython issue tracker](https://github.com/python/cpython/issues?q=is%3Aissue+sort%3Acreated-desc),
and am confident this bug has not been reported before
### CPython versions tested on:
3.10, CPython main branch
### Operating systems tested on:
Linux
### Output from running 'python -VV' on the command line:
Python 3.13.0a0 (heads/main:607f18c894, Aug 15 2023, 08:58:26) [GCC 11.4.0]
### A clear and concise description of the bug:
Doctests belonging to a `functools.cached_property` are not run by doctest.
Here is a minimal example (saved in a file `t.py`):
```python
from functools import cached_property
class Foo:
@cached_property
def my_cached_property(self):
"""
>>> assert False
"""
return 1
```
If I then invoke doctest with this file, I'd expect to see an assertion error. Instead, this doctest is skipped:
```console
$ python -m doctest t.py # no failing output due to assertion error
$ echo $? # zero exit code
0
```
Note, this issue was originally reported here: https://github.com/pytest-dev/pytest/issues/11237.
<!-- gh-linked-prs -->
### Linked PRs
* gh-107996
<!-- /gh-linked-prs -->
| 9bb576cb07b42f34fd882b8502642b024f771c62 | 28cab71f954f3a14de9f474ce9c4abbd23c97862 |
python/cpython | python__cpython-108016 | # Drop the "Distributing Python Modules" section of the docs
# Documentation
https://docs.python.org/3/distributing/index.html is woefully outdated and the packaging ecosystem is managed externally to the core dev team. It would probably be better to have a link pointing at packaging.python.org than to try to maintain this part of the docs.
<!-- gh-linked-prs -->
### Linked PRs
* gh-108016
* gh-108081
* gh-108091
<!-- /gh-linked-prs -->
| 33e6e3fec02ff3035dec52692542d3dd10124bef | f51f0466c07eabc6177c2f64f70c952dada050e8 |
python/cpython | python__cpython-108126 | # ``asyncio.{timeout_at,timeout}`` are incorrectly documented as ``coroutine``
# Documentation
[``asyncio.timeout_at``](https://docs.python.org/3/library/asyncio-task.html#asyncio.timeout_at) and [``asyncio.timeout``](https://docs.python.org/3/library/asyncio-task.html#asyncio.timeout) are documented as ``coroutine`` but they are synchronous functions that return async context managers.
<!-- gh-linked-prs -->
### Linked PRs
* gh-108126
* gh-108152
* gh-108153
<!-- /gh-linked-prs -->
| a47c13cae5b32e6f3d7532cc6dbb4e1ac31219de | c31c61c04e55ef431615ffec959d84ac73a3db81 |
python/cpython | python__cpython-107973 | # Argument Clinic silently allow empty custom C basenames
See https://github.com/python/cpython/pull/107964#discussion_r1294498693:
> Here's a very contrived example that silently passes on this PR, current main and on 3.12:
>
> ```c
> /*[clinic input]
> output everything block
> class C "void *" ""
> # Trailing whitespace on following line!!!
> C.meth as
> [clinic start generated code]*/
> ```
<!-- gh-linked-prs -->
### Linked PRs
* gh-107973
<!-- /gh-linked-prs -->
| 607f18c89456cdc9064e27f86a7505e011209757 | e90036c9bdf25621c15207a9cabd119fb5366c27 |
python/cpython | python__cpython-107968 | # Infinite recursion in the tokenizer when emitting invalid escape sequence warnings
### Checklist
- [X] I am confident this is a bug in CPython, not a bug in a third-party project
- [X] I have searched the [CPython issue tracker](https://github.com/python/cpython/issues?q=is%3Aissue+sort%3Acreated-desc),
and am confident this bug has not been reported before
### CPython versions tested on:
3.12, CPython main branch
### Operating systems tested on:
macOS
### Output from running 'python -VV' on the command line:
Python 3.11.3 (main, May 8 2023, 13:16:43) [Clang 14.0.3 (clang-1403.0.22.14.1)]
### A clear and concise description of the bug:
The tokenizer has an infinite recursion when emitting an invalid escape sequence `SyntaxWarning`. This was probably introduced in #106993. Fix will be up shortly.
<!-- gh-linked-prs -->
### Linked PRs
* gh-107968
* gh-107970
<!-- /gh-linked-prs -->
| d66bc9e8a7a8d6774d912a4b9d151885c4d8de1d | 13c36dc9ae5240124932137de4a94d81292c6c5f |
python/cpython | python__cpython-107965 | # set_forkserver_preload should check type of elements in passed list
### Checklist
- [X] I am confident this is a bug in CPython, not a bug in a third-party project
- [X] I have searched the [CPython issue tracker](https://github.com/python/cpython/issues?q=is%3Aissue+sort%3Acreated-desc),
and am confident this bug has not been reported before
### CPython versions tested on:
3.10
### Operating systems tested on:
Linux
### Output from running 'python -VV' on the command line:
Python 3.10.12 (main, Jul 5 2023, 18:54:27) [GCC 11.2.0]
### A clear and concise description of the bug:
The code for [`set_forkserver_preload`](https://github.com/python/cpython/blob/a482e5bf0022f85424a6308529a9ad51f1bfbb71/Lib/multiprocessing/forkserver.py#L64) (also in the latest `main` branch) should check the type of the elements passed through the `module_names` parameter, but instead checks the elements of `self._preload_modules`.
Code for reproducing:
```python
#!/usr/bin/env python3
import multiprocessing as mp
import time


def work(param):
    print("work")


if __name__ == '__main__':
    ctx = mp.get_context('forkserver')
    ctx.set_forkserver_preload(['time', 1])
    with ctx.Pool(2) as p:
        p.map(work, [0, 1, 2, 3])
```
Raises error:
```
Traceback (most recent call last):
File "<string>", line 1, in <module>
File ".../lib/python3.10/multiprocessing/forkserver.py", line 178, in main
__import__(modname)
TypeError: __import__() argument 1 must be str, not int
```
<!-- gh-linked-prs -->
### Linked PRs
* gh-107965
* gh-107975
* gh-107976
<!-- /gh-linked-prs -->
| 6515ec3d3d5acd3d0b99c88794bdec09f0831e5b | d66bc9e8a7a8d6774d912a4b9d151885c4d8de1d |
python/cpython | python__cpython-107960 | # clarify documentation on when `os.lchmod()` is missing
# Documentation
`lchmod()` is not part of POSIX, see
https://pubs.opengroup.org/onlinepubs/9699919799/functions/chmod.html but only mentioned there for systems which support changing the file permission bits of symbolic links:
> Some implementations might allow changing the mode of symbolic links. This is not supported by the interfaces in the POSIX specification. Systems with such support provide an interface named lchmod(). To support such implementations fchmodat() has a flag parameter.
Therefore, `lchmod()` isn't available on Unixes which don't support this.
According to https://unix.stackexchange.com/questions/224979/why-do-linux-posix-have-lchown-but-not-lchmod this might be a majority. In any case, Linux, which nowadays is undoubtedly the most important POSIX/Unix-like system, doesn't do so.
I shall provide a MR, which adds some clarification to the documentation about this.
Cheers,
Chris.
<!-- gh-linked-prs -->
### Linked PRs
* gh-107960
* gh-113066
* gh-113067
<!-- /gh-linked-prs -->
| f14e3d59c955fb3cf89e5241727ec566164dcf42 | 85923cb377c4a13720c135da9ae3ed93465a81e7 |
python/cpython | python__cpython-107993 | # Unnecessary comment about increasing the reference count in usage of ``Py_None``
After merging of #19474 this comment looks unnecessary:
https://github.com/python/cpython/blob/8d3cb1bc4b5de091d7b5fcc5ce7378151a8f4f45/Include/object.h#L849
<!-- gh-linked-prs -->
### Linked PRs
* gh-107993
<!-- /gh-linked-prs -->
| f6099871fac0581ed4d9bcd9b15ce136fb0de8d6 | abd9cc52d94b8e2835322b62c29f09bb0e6fcfe9 |
python/cpython | python__cpython-107969 | # Improve error message for function calls with bad keyword arguments
Current traceback message:
```
>>> pow(bass=5, exp=2)
Traceback (most recent call last):
...
TypeError: pow() missing required argument 'base' (pos 1)
```
Proposed:
```
>>> pow(bass=5, exp=2)
Traceback (most recent call last):
...
TypeError: pow() Keyword argument 'bass' not defined. Did you mean "base"?
```
<!-- gh-linked-prs -->
### Linked PRs
* gh-107969
* gh-114792
<!-- /gh-linked-prs -->
| 75b3db8445188c2ad38cabc0021af694df0829b8 | 61c7249759ce88465ea655d5c19d17d03ff3f74b |
python/cpython | python__cpython-107939 | # Synchronise the signatures of `sqlite3.connect` and `sqlite3.Connection.__init__`
See https://github.com/python/cpython/issues/93057#issuecomment-1675380260:
> [...] we have some technical debt when it comes to `sqlite3.connect` and `sqlite3.Connection.__init__`. Currently, the latter uses clinic, and the former does not use clinic. However, it is `sqlite3.connect` that needs the docstring, not `sqlite3.Connection.__init__`. We should make it so that the docstring from `sqlite3.Connection.__init__` is used for `sqlite3.connect` and output to a separate file, for inclusion by Module/_sqlite/module.c.
Previously, the two argument specs needed to be kept in sync manually. Some fairly recent changes improved this, but we still need to keep the docstring and methoddef of sqlite3.connect up to date manually.
<!-- gh-linked-prs -->
### Linked PRs
* gh-107939
* gh-107943
* gh-107946
<!-- /gh-linked-prs -->
| 6fbaba552a52f93ecbe8be000888afa0b65b967e | 608927b01447b110de5094271fbc4d49c60130b0 |
python/cpython | python__cpython-107988 | # `dis.dis()` reports line numbers where there are none.
Take this example function from https://github.com/python/cpython/issues/107901:
```
def f():
for i in range(30000000):
if not i%1000000:
pass
```
`dis.dis` produces the following output:
```
1 0 RESUME 0
2 2 LOAD_GLOBAL 1 (range + NULL)
12 LOAD_CONST 1 (30000000)
14 CALL 1
22 GET_ITER
>> 24 FOR_ITER 14 (to 56)
28 STORE_FAST 0 (i)
3 30 LOAD_FAST 0 (i)
32 LOAD_CONST 2 (1000000)
34 BINARY_OP 6 (%)
38 TO_BOOL
46 POP_JUMP_IF_FALSE 2 (to 52)
48 JUMP_BACKWARD 14 (to 24)
4 >> 52 JUMP_BACKWARD 16 (to 24)
2 >> 56 END_FOR
58 RETURN_CONST 0 (None)
```
Which has incorrect line numbers.
The issue is not that the line numbers are wrong, but that you can't tell from the `dis` output.
The whole point of `dis` is to show what is going on at the bytecode level, so it is failing if it gives wrong line numbers.
The actual line numbers can be seen from `co_lines()`:
```
>>> list(f.__code__.co_lines())
[(0, 2, 1), (2, 30, 2), (30, 48, 3), (48, 52, None), (52, 56, 4), (56, 60, 2)]
```
The correct output should be:
```
1 0 RESUME 0
2 2 LOAD_GLOBAL 1 (range + NULL)
12 LOAD_CONST 1 (30000000)
14 CALL 1
22 GET_ITER
>> 24 FOR_ITER 14 (to 56)
28 STORE_FAST 0 (i)
3 30 LOAD_FAST 0 (i)
32 LOAD_CONST 2 (1000000)
34 BINARY_OP 6 (%)
38 TO_BOOL
46 POP_JUMP_IF_FALSE 2 (to 52)
None 48 JUMP_BACKWARD 14 (to 24)
4 >> 52 JUMP_BACKWARD 16 (to 24)
2 >> 56 END_FOR
58 RETURN_CONST 0 (None)
```
<!-- gh-linked-prs -->
### Linked PRs
* gh-107988
* gh-108479
* gh-108478
<!-- /gh-linked-prs -->
| 24aa249a6633249570978d6aae6f7b21581ee085 | 8d4052075ec43ac1ded7d2fa55c474295410bbdc |
python/cpython | python__cpython-107926 | # os.sendfile() flags improperly ordered in documentation
# Bug report
There are some minor issues in the documentation for which I'm about to submit MRs:
1. There's some bad ordering around ``os.sendfile()``: it makes little sense that the `os.set_blocking()` function comes in-between the `os.sendfile()` function and its flags.
2. Also, the flags could possibly be joined into one section
https://github.com/python/cpython/blob/cc2cf85d03cf29994a707aae5cc9a349a4165b84/Doc/library/os.rst?plain=1#L1542-L1618
## Checklist
- [X] I am confident this is a bug in CPython, not a bug in a third-party project
- [X] I have searched the CPython issue tracker, and am confident this bug has not been reported before
<!-- gh-linked-prs -->
### Linked PRs
* gh-107926
* gh-109099
* gh-109178
<!-- /gh-linked-prs -->
| 403ab1306a6e9860197bce57eadcb83418966f21 | f2584eade378910b9ea18072bb1dab3dd58e23bb |
python/cpython | python__cpython-107929 | # C API functions like `PyErr_SetFromErrnoWithFilename()` can use incorrect error code
Calling `PyUnicode_DecodeFSDefault()` in C API functions `PyErr_SetFromErrnoWithFilename()`, `PyErr_SetExcFromWindowsErrWithFilename()` and `PyErr_SetFromWindowsErrWithFilename()` can modify the value of `errno` or the result of `GetLastError()` which are used in these functions.
<!-- gh-linked-prs -->
### Linked PRs
* gh-107929
* gh-108205
* gh-108206
<!-- /gh-linked-prs -->
| 80bdebdd8593f007a2232ec04a7729bba6ebf12c | acbd3f9c5c5f23e95267714e41236140d84fe962 |
python/cpython | python__cpython-107918 | # Correctly handle errors in `PyErr_Set*()` functions
C API functions `PyErr_SetFromErrnoWithFilename()`, `PyErr_SetExcFromWindowsErrWithFilename()`, `PyErr_SetFromWindowsErrWithFilename()`, `_PyErr_SetString()` and `_PyErr_FormatV()` convert some of their arguments from C strings to Python strings, but do not check for errors. If a conversion fails, the behavior is undefined -- it can range from ignoring the new error and using None or another default value instead of the string, to a crash.
<!-- gh-linked-prs -->
### Linked PRs
* gh-107918
* gh-108134
* gh-108135
<!-- /gh-linked-prs -->
| 633ea217a85f6b6ba5bdbc73094254d5811b3485 | 79db9d9a0e8f51ad4ea5caae31d7ae4b29985271 |
python/cpython | python__cpython-107930 | # PyErr_SetFromErrno() etc should be called immediately after setting the error code
Functions like `PyErr_SetFromErrno()` rely on the global variable `errno` (actually it is thread-local, but that does not matter here). They should be called immediately after calling a function which sets `errno`. Calling another function (like `close()`) can change the value of `errno`. `Py_DECREF()` and `PyBuffer_Release()` can execute arbitrary code, in particular code which changes the value of `errno`. Even `PyMem_Free()` is not safe, because the memory allocator can be customized.
There is the same issue with `SetFromWindowsErr()` and friends. If 0 is passed as the Windows error code, it calls `GetLastError()` to retrieve the global value, which may already have been changed by other functions called before `SetFromWindowsErr()`.
Most uses in the code are correct, but there are several sites in the code where some cleanup code is inserted between function which sets the error code and function which consumes it.
Two ways to resolve this issue:
1. Reorganize the code so that `PyErr_SetFromErrno()` and `SetFromWindowsErr()` are called immediately after function which sets the error code (not counting simple memory reads or writes). In some cases it may require duplicating the cleanup code (usually just one line).
2. Save the error code to a local variable before executing the intermediate code and restore it after.
<!-- gh-linked-prs -->
### Linked PRs
* gh-107930
* gh-108523
* gh-108524
<!-- /gh-linked-prs -->
| 2b15536fa94d07e9e286826c23507402313ec7f4 | e407cea1938b80b1d469f148a4ea65587820e3eb |
python/cpython | python__cpython-107928 | # `TypeError: Cannot create a consistent method resolution order`, why the newline?
Repro:
```
>>> class My[X](object): ...
...
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<stdin>", line 1, in <generic parameters of My>
TypeError: Cannot create a consistent method resolution
order (MRO) for bases object, Generic
```
Notice that there's a `\n` between "resolution" and "order".
This code has existed for 21 years now: https://github.com/python/cpython/blame/main/Objects/typeobject.c#L2447
And it does not look like anyone else is bothered :)
For me, `TypeError: Cannot create a consistent method resolution order (MRO) for bases object, Generic` would be a much better error message.
Opinions?
<!-- gh-linked-prs -->
### Linked PRs
* gh-107928
<!-- /gh-linked-prs -->
| c3887b57a75a105615dd555aaf74e6c9a243ebdd | cc2cf85d03cf29994a707aae5cc9a349a4165b84 |
python/cpython | python__cpython-108001 | # `class My[X](object):` raises `TypeError: Cannot create a consistent method resolution order`
```python
>>> class My[X](object): ...
...
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<stdin>", line 1, in <generic parameters of My>
TypeError: Cannot create a consistent method resolution
order (MRO) for bases object, Generic
```
It looks like a correct error:
```python
>>> from typing import Generic
>>> from typing import TypeVar
>>> T = TypeVar('T')
>>> class A(object, Generic[T]): ...
...
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: Cannot create a consistent method resolution
order (MRO) for bases object, Generic
>>> class A(Generic[T], object): ...
...
>>>
```
But it is not tested. I think that `object` is quite an important corner case to test explicitly.
`class My[X]():` is tested.
<!-- gh-linked-prs -->
### Linked PRs
* gh-108001
* gh-108022
<!-- /gh-linked-prs -->
| b61f5995aebb93496e968ca8d307375fa86d9329 | a8d440b3837273926af5ce996162b019290ddad5 |
python/cpython | python__cpython-107997 | # `TypeAliasType.__value__` might raise `NameError`, should we test it?
```python
>>> type Invalid = dict[X, Y]
>>> Invalid[str, int]
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: Only generic type aliases are subscriptable
>>> Invalid.__value__
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<stdin>", line 1, in Invalid
NameError: name 'X' is not defined
```
This seems correct to me.
However, this case is not covered right now with existing tests.
But, before adding a test case for this error (which in my opinion will be quite common among devs: forgetting to add `[X, Y]` to a type alias definition), I want to double-check this with @JelleZijlstra
<!-- gh-linked-prs -->
### Linked PRs
* gh-107997
* gh-108217
<!-- /gh-linked-prs -->
| 13104f3b7412dce9bf7cfd09bf2d6dad1f3cc2ed | 71962e5237bb7e12d8fc1447b8ed8ba082e23f77 |
python/cpython | python__cpython-108368 | # test_tarfile fails on file modes
## Checklist
<!-- Bugs in third-party projects (e.g. `requests`) do not belong in the CPython issue tracker -->
- [X] I am confident this is a bug in CPython, not a bug in a third-party project
- [X] I have searched the CPython issue tracker, and am confident this bug has not been reported before
#
```bash
[ted@rasp4 cpython]$ ./python -m test test_tarfile
0:00:00 load avg: 0.27 Run tests sequentially
0:00:00 load avg: 0.27 [1/1] test_tarfile
test test_tarfile failed -- Traceback (most recent call last):
File "/home/ted/cpython/Lib/test/test_tarfile.py", line 3679, in test_modes
self.expect_file('all_bits', mode='?rwsrwsrwt')
File "/home/ted/cpython/Lib/test/test_tarfile.py", line 3429, in expect_file
self.assertEqual(got, mode)
AssertionError: '?rwsrwxrwt' != '?rwsrwsrwt'
- ?rwsrwxrwt
? ^
+ ?rwsrwsrwt
? ^
```
# Your environment
<!-- Include all relevant details about the environment you experienced the bug in -->
- CPython versions tested on: Python 3.13.0a0
- Operating system and architecture: Manjaro ARM Linux aarch64 | Raspberry Pi 4 Model B Rev 1.5
- Configuration of CPython: ./configure --with-pydebug
<!--
You can freely edit this form. Remove any lines you believe are unnecessary.
-->
<!-- gh-linked-prs -->
### Linked PRs
* gh-108368
* gh-125255
* gh-125576
* gh-125835
<!-- /gh-linked-prs -->
| 40e52c94a27e4cd94b48e8a705914823cbb6afed | f3b6608ba2b1db6ac449f656bf439bda8d66eb9f |
python/cpython | python__cpython-108242 | # Traceback says line -1 when KeyboardInterrupt during minimal for-if-loop
More generally, the issue happens with all exceptions raised by other threads.
Run the code below and interrupt with ctrl+c.
```py
# If part of a file, potentially more code before the loop. Traceback says -1 regardless.
for i in range(30000000):
if not i%1000000:
pass # Or any other operation here that is faster than iterating a million i values.
```
With the following traceback:
```py
Traceback (most recent call last):
File "/home/panta/issue.py", line -1, in <module>
KeyboardInterrupt
```
The issue disappears as soon as we add just a bit more to the loop
```py
for i in range(30000000):
if not i%1000000: # The 3.10 traceback points to this line.
pass
pass # The 3.11 and 3.12 tracebacks point to this line.
```
```py
Traceback (most recent call last):
File "/home/panta/issue.py", line 4, in <module>
pass
KeyboardInterrupt
```
The issue persists whether running from the shell or as a file or even using `exec`. It works the same inside functions/methods. It exists in 3.11 and 3.12 but not 3.10, across Windows and Linux.
- CPython versions tested on: 3.10.6 (no issue), 3.11.1 (issue), 3.11.4 (issue), 3.12.0rc1 (issue)
- Operating system and architecture: Windows 10 and Ubuntu 22.04.2 LTS (WSL)
<!-- gh-linked-prs -->
### Linked PRs
* gh-108242
* gh-108275
* gh-108326
* gh-108375
* gh-113721
* gh-113943
* gh-113950
* gh-114008
* gh-114530
* gh-114750
<!-- /gh-linked-prs -->
| a1cc74c4eebc55795877eb3be019a1bec34402f8 | e6db23f66d8741db0ffc526d8fd75373a5543e3e |
python/cpython | python__cpython-108168 | # test.test_asyncio.test_runners fails in "development mode"
```pytb
$ ./python -Xdev -m test test.test_asyncio.test_runners -m test_asyncio_run_debug
0:00:00 load avg: 2.08 Run tests sequentially
0:00:00 load avg: 2.08 [1/1] test.test_asyncio.test_runners
test test.test_asyncio.test_runners failed -- Traceback (most recent call last):
File ".../Lib/test/test_asyncio/test_runners.py", line 104, in test_asyncio_run_debug
asyncio.run(main(False))
File ".../Lib/asyncio/runners.py", line 194, in run
return runner.run(main)
^^^^^^^^^^^^^^^^
File ".../Lib/asyncio/runners.py", line 118, in run
return self._loop.run_until_complete(task)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".../Lib/asyncio/base_events.py", line 664, in run_until_complete
return future.result()
^^^^^^^^^^^^^^^
File ".../Lib/test/test_asyncio/test_runners.py", line 102, in main
self.assertIs(loop.get_debug(), expected)
AssertionError: True is not False
test.test_asyncio.test_runners failed (1 failure)
...
```
<!-- gh-linked-prs -->
### Linked PRs
* gh-108168
* gh-108196
* gh-108197
<!-- /gh-linked-prs -->
| 014a5b71e7538926ae1c03c8c5ea13c96e741be3 | 8f3d09bf5d16b508fece5420a22abe6f0c1f00b7 |
python/cpython | python__cpython-107892 | # Typo "consecuence" in 3.12 whatsnew
Typo is here, "consecuence" should be "consequence":
https://github.com/python/cpython/blob/2e27da18952c6561f48dab706b5911135cedd7cf/Doc/whatsnew/3.12.rst?plain=1#L1579
I'll open a PR.
## Checklist
<!-- Bugs in third-party projects (e.g. `requests`) do not belong in the CPython issue tracker -->
- [x] I am confident this is a bug in CPython, not a bug in a third-party project
- [x] I have searched the CPython issue tracker, and am confident this bug has not been reported before
<!-- gh-linked-prs -->
### Linked PRs
* gh-107892
* gh-107893
<!-- /gh-linked-prs -->
| 2e1f688fe0f0a612e54c09f5a7027a834dd8b8d5 | 2e27da18952c6561f48dab706b5911135cedd7cf |
python/cpython | python__cpython-109928 | # test_mmap.test_access_parameter failing on macOS Sonoma 14.0 beta on Apple Silicon
# Bug report
The other day I decided to give the MacOS Sonoma beta a go on my M1 MBP. ```test_mmap``` is now consistently failing with a ```PermissionError``` on ```main``` and ```3.12``` branches.
## A clear and concise description of the bug
I happened to have an old MacBook Air laying about which is still running MacOS Monterey. I verified that all tests pass there.
Here's the output from the failing test:
```
test_access_parameter (test.test_mmap.MmapTests.test_access_parameter) ... ERROR
======================================================================
ERROR: test_access_parameter (test.test_mmap.MmapTests.test_access_parameter)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/Users/skip/src/python/cpython/Lib/test/test_mmap.py", line 258, in test_access_parameter
m = mmap.mmap(f.fileno(), mapsize, prot=prot)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
PermissionError: [Errno 1] Operation not permitted
```
# Your environment
- CPython versions tested on: ```main``` and ```3.12``` branches
- Operating system and architecture: MacBook Pro M1 Pro, MacOS Sonoma 14.0 beta 5
<!-- gh-linked-prs -->
### Linked PRs
* gh-109928
* gh-109929
* gh-109930
* gh-110125
* gh-110130
* gh-114183
* gh-114184
* gh-114185
<!-- /gh-linked-prs -->
| 9dbfe2dc8e7bba25e52f9470ae6969821a365297 | 9abba715e3225b8e4c4b7dd0ed528ef3a3057bea |
python/cpython | python__cpython-107884 | # Argument Clinic: Function.fulldisplayname does not handle full module/class hierarchy
`Function.fulldisplayname` fails to return the correct string for class methods; as of now, it wrongly assumes every Function has only a `Module` parent.
<!-- gh-linked-prs -->
### Linked PRs
* gh-107884
<!-- /gh-linked-prs -->
| ee40b3e20d9b8d62a9b36b777dff42db1e9049d5 | bf707749e87f47b2fee2a208a654511ac318d4b9 |
python/cpython | python__cpython-107885 | # Argument Clinic: make it possible to clone __init__ functions
See https://github.com/python/cpython/issues/93057#issuecomment-1675380260
In sqlite3, both `sqlite3.connect` and `sqlite3.Connection.__init__` have the same param spec. This has led to the far-from-optimal status quo:
https://github.com/python/cpython/blob/d93b4ac2ff7bce07fb1c8805f43838818598191c/Modules/_sqlite/module.c#L51-L69
https://github.com/python/cpython/blob/d93b4ac2ff7bce07fb1c8805f43838818598191c/Modules/_sqlite/connection.c#L215-L239
Instead, we want to be able to do this in connection.c:
```c
/*[clinic input]
sqlite3.Connection.__init__ as pysqlite_connection_init
database: object
timeout: double = 5.0
detect_types: int = 0
isolation_level: IsolationLevel = ""
check_same_thread: bool = True
factory: object(c_default='(PyObject*)clinic_state()->ConnectionType') = ConnectionType
cached_statements as cache_size: int = 128
uri: bool = False
*
autocommit: Autocommit(c_default='LEGACY_TRANSACTION_CONTROL') = sqlite3.LEGACY_TRANSACTION_CONTROL
[clinic start generated code]*/
static int
pysqlite_connection_init_impl(pysqlite_Connection *self, PyObject *database,
double timeout, int detect_types,
const char *isolation_level,
int check_same_thread, PyObject *factory,
int cache_size, int uri,
enum autocommit_mode autocommit)
/*[clinic end generated code: output=cba057313ea7712f input=a0949fb85339104d]*/
{
// __init__ function is here; fast forward ...
}
/*[clinic input]
# Save the clinic output config.
output push
# Create a new destination 'connect' for the docstring and methoddef only.
destination connect new file '{dirname}/clinic/_sqlite3.connect.c.h'
output everything suppress
output docstring_definition connect
output methoddef_define connect
# Now, define the connect function.
sqlite3.connect as pysqlite_connect = sqlite3.Connection.__init__
[clinic start generated code]*/
/*[clinic end generated code: output=da39a3ee5e6b4b0d input=7913cd0b3bfc1b4a]*/
/*[clinic input]
# Restore the clinic output config.
output pop
[clinic start generated code]*/
```
The methoddef and docstring for `sqlite3.connect` could then be included in module.c.
However, to achieve this, we need to teach Argument Clinic how to clone `__init__` functions.
<!-- gh-linked-prs -->
### Linked PRs
* gh-107885
* gh-107974
<!-- /gh-linked-prs -->
| e90036c9bdf25621c15207a9cabd119fb5366c27 | 6515ec3d3d5acd3d0b99c88794bdec09f0831e5b |
python/cpython | python__cpython-107894 | # The table on ``logging`` debug levels is unclear
## Existing Table
Many readers need a quick discussion to interpret [the current table](https://docs.python.org/3.12/library/logging.html#logging-levels):
Level | Numeric value
-- | --
logging.CRITICAL | 50
logging.ERROR | 40
logging.WARNING | 30
logging.INFO | 20
logging.DEBUG | 10
logging.NOTSET | 0
## Suggested Table
Level | Numeric value | Info
-- | -- | --
logging.CRITICAL | 50 | Only crashing bugs | Fewest Messages
logging.ERROR | 40 | The above, & major errors
logging.WARNING | 30 | The above, & minor errors
logging.INFO | 20 | The above, & detailed info
logging.DEBUG | 10 | The above, & 'helpful' messages
logging.NOTSET | 0 | No error logging
The number of times I've had a bug in code due to this misunderstanding, or the number of times I've needed a whiteboard and 5 minutes to explain this concept, is astounding. For some reason, it's just not intuitive for about 1/3 of the people I have trained or worked with.
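The cumulative reading the extra column describes can be demonstrated directly: a logger set to a given level keeps records at that level and above. A minimal sketch (logger name and messages are illustrative):

```python
import logging

records = []

class ListHandler(logging.Handler):
    """Collect level names instead of writing anywhere."""
    def emit(self, record):
        records.append(record.levelname)

logger = logging.getLogger("levels-demo")
logger.setLevel(logging.WARNING)   # keep WARNING and everything above it
logger.addHandler(ListHandler())
logger.propagate = False           # keep the demo self-contained

logger.debug("helpful message")    # 10 -- dropped
logger.info("detailed info")       # 20 -- dropped
logger.warning("minor error")      # 30 -- kept
logger.error("major error")        # 40 -- kept
logger.critical("crashing bug")    # 50 -- kept

print(records)  # ['WARNING', 'ERROR', 'CRITICAL']
```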
<!-- gh-linked-prs -->
### Linked PRs
* gh-107894
* gh-107921
* gh-107922
<!-- /gh-linked-prs -->
| cc2cf85d03cf29994a707aae5cc9a349a4165b84 | 2b6dc2accc315ce279d259ed39e058a225068531 |
python/cpython | python__cpython-107846 | # tarfile.data_filter wrongly rejects some tarballs with symlinks
My implementation of PEP-706 has a bug: it wrongly determines the target of symlinks, and thus wrongly raises `LinkOutsideDestinationError` on some valid tarballs.
I didn't pay enough attention to this quirk of the format (which I'd like to add to the [TarInfo.linkname](https://docs.python.org/3/library/tarfile.html#tarfile.TarInfo.linkname) docs):
> For symbolic links (``SYMTYPE``), the linkname is relative to the directory that contains the link.
> For hard links (``LNKTYPE``), the linkname is relative to the root of the archive.
So, in a tarball that contains the following, the links point to `dir/target`:
- `dir/target`
- `other_dir/symlink` -> `../dir/target`
- `other_dir/hardlink` -> `dir/target`
But `data_filter` thinks that `other_dir/symlink` will point to `../dir/target` outside the destination directory.
I have a fix but would like to test it more next week, before merging.
Sorry for the extra work this'll cause for a lot of people :(
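The two linkname conventions can be shown with an in-memory archive built to match the layout above; both members name the same file, just relative to different bases:

```python
import io
import tarfile

buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w") as tf:
    target = tarfile.TarInfo("dir/target")
    tf.addfile(target, io.BytesIO(b""))

    sym = tarfile.TarInfo("other_dir/symlink")
    sym.type = tarfile.SYMTYPE
    sym.linkname = "../dir/target"   # relative to other_dir/, per SYMTYPE
    tf.addfile(sym)

    hard = tarfile.TarInfo("other_dir/hardlink")
    hard.type = tarfile.LNKTYPE
    hard.linkname = "dir/target"     # relative to the archive root, per LNKTYPE
    tf.addfile(hard)

buf.seek(0)
with tarfile.open(fileobj=buf) as tf:
    links = {m.name: m.linkname for m in tf.getmembers() if m.linkname}
print(links)
```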
<!-- gh-linked-prs -->
### Linked PRs
* gh-107846
* gh-108209
* gh-108210
* gh-108211
* gh-108274
* gh-108279
<!-- /gh-linked-prs -->
| acbd3f9c5c5f23e95267714e41236140d84fe962 | 622ddc41674c2566062af82f7b079aa01d2aae8c |
python/cpython | python__cpython-107815 | # find_python.bat is not properly silenced
# Bug report
When `find_python.bat` is run without a Python interpreter present on the system, it installs an interpreter via nuget. However, in this case it does not properly silence output to stdout when `-q` is passed. This leads to issues when the user relies on only the interpreter path being printed, which is the case when it is used inside Visual Studio.
<!-- gh-linked-prs -->
### Linked PRs
* gh-107815
* gh-107823
* gh-107824
<!-- /gh-linked-prs -->
| 1e229e2c3d212accbd5fbe3a46cd42f8252b2868 | 326f0ba1c5dda1d9613dbba11ea2470654b0d9c8 |
python/cpython | python__cpython-107813 | # Extend netlink support to FreeBSD
# Feature or enhancement
Extend [`socket`](https://docs.python.org/3/library/socket.html)'s `AF_NETLINK` support to FreeBSD.
# Pitch
FreeBSD 13.2-RELEASE was the first FreeBSD release with solid support for netlink.
Previously this was a Linux-only technology, but it has established itself as a standard: https://www.rfc-editor.org/rfc/rfc3549
# Previous discussion
This is all pretty new, so I am unaware of previous discussions.
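Since `AF_NETLINK` is only exposed where the platform supports it, portable code has to feature-test. A hedged sketch (NETLINK_ROUTE, protocol 0, is the routing/link-state family; socket creation may still fail in restricted environments):

```python
import socket

if hasattr(socket, "AF_NETLINK"):
    try:
        # Protocol 0 is NETLINK_ROUTE: routing and link-state messages.
        with socket.socket(socket.AF_NETLINK, socket.SOCK_RAW, 0) as s:
            print("netlink socket opened, family:", s.family)
    except OSError as exc:
        print("AF_NETLINK present but socket creation failed:", exc)
else:
    print("AF_NETLINK not available on this platform")
```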
<!--
You can freely edit this form. Remove any lines you believe are unnecessary.
-->
<!-- gh-linked-prs -->
### Linked PRs
* gh-107813
<!-- /gh-linked-prs -->
| f50c17243a87b02086000185f6ed1cad4b8c2376 | 2ec16fed14aae896e38dd5bd9e73e2eddc974439 |
python/cpython | python__cpython-108369 | # test_tarfile fails on 32 bit systems in NoneInfoExtractTests_Data
These tests are newly added, and they produce the following failures on 32-bit x86 systems (running Python 3.11.4):
```python
======================================================================
ERROR: test_extractall_none_gid (test.test_tarfile.NoneInfoExtractTests_Data.test_extractall_none_gid)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/usr/lib/python3.11/test/test_tarfile.py", line 3067, in test_extractall_none_gid
with self.extract_with_none('gid'):
File "/usr/lib/python3.11/contextlib.py", line 137, in __enter__
return next(self.gen)
^^^^^^^^^^^^^^
File "/usr/lib/python3.11/test/test_tarfile.py", line 3026, in extract_with_none
self.tar.extractall(DIR, filter='fully_trusted')
File "/usr/lib/python3.11/tarfile.py", line 2257, in extractall
self._extract_one(tarinfo, path, set_attrs=not tarinfo.isdir(),
File "/usr/lib/python3.11/tarfile.py", line 2320, in _extract_one
self._extract_member(tarinfo, os.path.join(path, tarinfo.name),
File "/usr/lib/python3.11/tarfile.py", line 2418, in _extract_member
self.chown(tarinfo, targetpath, numeric_owner)
File "/usr/lib/python3.11/tarfile.py", line 2546, in chown
os.chown(targetpath, u, g)
OverflowError: uid is greater than maximum
======================================================================
ERROR: test_extractall_none_gname (test.test_tarfile.NoneInfoExtractTests_Data.test_extractall_none_gname)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/usr/lib/python3.11/test/test_tarfile.py", line 3075, in test_extractall_none_gname
with self.extract_with_none('gname'):
File "/usr/lib/python3.11/contextlib.py", line 137, in __enter__
return next(self.gen)
^^^^^^^^^^^^^^
File "/usr/lib/python3.11/test/test_tarfile.py", line 3026, in extract_with_none
self.tar.extractall(DIR, filter='fully_trusted')
File "/usr/lib/python3.11/tarfile.py", line 2257, in extractall
self._extract_one(tarinfo, path, set_attrs=not tarinfo.isdir(),
File "/usr/lib/python3.11/tarfile.py", line 2320, in _extract_one
self._extract_member(tarinfo, os.path.join(path, tarinfo.name),
File "/usr/lib/python3.11/tarfile.py", line 2418, in _extract_member
self.chown(tarinfo, targetpath, numeric_owner)
File "/usr/lib/python3.11/tarfile.py", line 2546, in chown
os.chown(targetpath, u, g)
OverflowError: uid is greater than maximum
======================================================================
ERROR: test_extractall_none_mode (test.test_tarfile.NoneInfoExtractTests_Data.test_extractall_none_mode)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/usr/lib/python3.11/test/test_tarfile.py", line 3053, in test_extractall_none_mode
with self.extract_with_none('mode') as DIR:
File "/usr/lib/python3.11/contextlib.py", line 137, in __enter__
return next(self.gen)
^^^^^^^^^^^^^^
File "/usr/lib/python3.11/test/test_tarfile.py", line 3026, in extract_with_none
self.tar.extractall(DIR, filter='fully_trusted')
File "/usr/lib/python3.11/tarfile.py", line 2257, in extractall
self._extract_one(tarinfo, path, set_attrs=not tarinfo.isdir(),
File "/usr/lib/python3.11/tarfile.py", line 2320, in _extract_one
self._extract_member(tarinfo, os.path.join(path, tarinfo.name),
File "/usr/lib/python3.11/tarfile.py", line 2418, in _extract_member
self.chown(tarinfo, targetpath, numeric_owner)
File "/usr/lib/python3.11/tarfile.py", line 2546, in chown
os.chown(targetpath, u, g)
OverflowError: uid is greater than maximum
======================================================================
ERROR: test_extractall_none_mtime (test.test_tarfile.NoneInfoExtractTests_Data.test_extractall_none_mtime)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/usr/lib/python3.11/test/test_tarfile.py", line 3034, in test_extractall_none_mtime
with self.extract_with_none('mtime') as DIR:
File "/usr/lib/python3.11/contextlib.py", line 137, in __enter__
return next(self.gen)
^^^^^^^^^^^^^^
File "/usr/lib/python3.11/test/test_tarfile.py", line 3026, in extract_with_none
self.tar.extractall(DIR, filter='fully_trusted')
File "/usr/lib/python3.11/tarfile.py", line 2257, in extractall
self._extract_one(tarinfo, path, set_attrs=not tarinfo.isdir(),
File "/usr/lib/python3.11/tarfile.py", line 2320, in _extract_one
self._extract_member(tarinfo, os.path.join(path, tarinfo.name),
File "/usr/lib/python3.11/tarfile.py", line 2418, in _extract_member
self.chown(tarinfo, targetpath, numeric_owner)
File "/usr/lib/python3.11/tarfile.py", line 2546, in chown
os.chown(targetpath, u, g)
OverflowError: uid is greater than maximum
======================================================================
ERROR: test_extractall_none_uid (test.test_tarfile.NoneInfoExtractTests_Data.test_extractall_none_uid)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/usr/lib/python3.11/test/test_tarfile.py", line 3063, in test_extractall_none_uid
with self.extract_with_none('uid'):
File "/usr/lib/python3.11/contextlib.py", line 137, in __enter__
return next(self.gen)
^^^^^^^^^^^^^^
File "/usr/lib/python3.11/test/test_tarfile.py", line 3026, in extract_with_none
self.tar.extractall(DIR, filter='fully_trusted')
File "/usr/lib/python3.11/tarfile.py", line 2257, in extractall
self._extract_one(tarinfo, path, set_attrs=not tarinfo.isdir(),
File "/usr/lib/python3.11/tarfile.py", line 2320, in _extract_one
self._extract_member(tarinfo, os.path.join(path, tarinfo.name),
File "/usr/lib/python3.11/tarfile.py", line 2418, in _extract_member
self.chown(tarinfo, targetpath, numeric_owner)
File "/usr/lib/python3.11/tarfile.py", line 2546, in chown
os.chown(targetpath, u, g)
OverflowError: gid is greater than maximum
======================================================================
ERROR: test_extractall_none_uname (test.test_tarfile.NoneInfoExtractTests_Data.test_extractall_none_uname)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/usr/lib/python3.11/test/test_tarfile.py", line 3071, in test_extractall_none_uname
with self.extract_with_none('uname'):
File "/usr/lib/python3.11/contextlib.py", line 137, in __enter__
return next(self.gen)
^^^^^^^^^^^^^^
File "/usr/lib/python3.11/test/test_tarfile.py", line 3026, in extract_with_none
self.tar.extractall(DIR, filter='fully_trusted')
File "/usr/lib/python3.11/tarfile.py", line 2257, in extractall
self._extract_one(tarinfo, path, set_attrs=not tarinfo.isdir(),
File "/usr/lib/python3.11/tarfile.py", line 2320, in _extract_one
self._extract_member(tarinfo, os.path.join(path, tarinfo.name),
File "/usr/lib/python3.11/tarfile.py", line 2418, in _extract_member
self.chown(tarinfo, targetpath, numeric_owner)
File "/usr/lib/python3.11/tarfile.py", line 2546, in chown
os.chown(targetpath, u, g)
OverflowError: uid is greater than maximum
======================================================================
ERROR: setUpClass (test.test_tarfile.NoneInfoExtractTests_Default)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/usr/lib/python3.11/test/test_tarfile.py", line 3002, in setUpClass
tar.extractall(cls.control_dir, filter=cls.extraction_filter)
File "/usr/lib/python3.11/tarfile.py", line 2257, in extractall
self._extract_one(tarinfo, path, set_attrs=not tarinfo.isdir(),
File "/usr/lib/python3.11/tarfile.py", line 2320, in _extract_one
self._extract_member(tarinfo, os.path.join(path, tarinfo.name),
File "/usr/lib/python3.11/tarfile.py", line 2418, in _extract_member
self.chown(tarinfo, targetpath, numeric_owner)
File "/usr/lib/python3.11/tarfile.py", line 2546, in chown
os.chown(targetpath, u, g)
OverflowError: uid is greater than maximum
======================================================================
ERROR: setUpClass (test.test_tarfile.NoneInfoExtractTests_FullyTrusted)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/usr/lib/python3.11/test/test_tarfile.py", line 3002, in setUpClass
tar.extractall(cls.control_dir, filter=cls.extraction_filter)
File "/usr/lib/python3.11/tarfile.py", line 2257, in extractall
self._extract_one(tarinfo, path, set_attrs=not tarinfo.isdir(),
File "/usr/lib/python3.11/tarfile.py", line 2320, in _extract_one
self._extract_member(tarinfo, os.path.join(path, tarinfo.name),
File "/usr/lib/python3.11/tarfile.py", line 2418, in _extract_member
self.chown(tarinfo, targetpath, numeric_owner)
File "/usr/lib/python3.11/tarfile.py", line 2546, in chown
os.chown(targetpath, u, g)
OverflowError: uid is greater than maximum
======================================================================
ERROR: setUpClass (test.test_tarfile.NoneInfoExtractTests_Tar)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/usr/lib/python3.11/test/test_tarfile.py", line 3002, in setUpClass
tar.extractall(cls.control_dir, filter=cls.extraction_filter)
File "/usr/lib/python3.11/tarfile.py", line 2257, in extractall
self._extract_one(tarinfo, path, set_attrs=not tarinfo.isdir(),
File "/usr/lib/python3.11/tarfile.py", line 2320, in _extract_one
self._extract_member(tarinfo, os.path.join(path, tarinfo.name),
File "/usr/lib/python3.11/tarfile.py", line 2418, in _extract_member
self.chown(tarinfo, targetpath, numeric_owner)
File "/usr/lib/python3.11/tarfile.py", line 2546, in chown
os.chown(targetpath, u, g)
OverflowError: uid is greater than maximum
----------------------------------------------------------------------
Ran 545 tests in 8.309s
FAILED (errors=9, skipped=8)
```
I believe the file they are tripping over is:
```
alex@Zen2:/srv/work/alex/cpython$ tar --list --numeric-owner -vf Lib/test/testtar.tar|grep regtype-gnu-uid
-rw-r--r-- 4294967295/4294967295 7011 2003-01-06 00:19 gnu/regtype-gnu-uid
```
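The member's metadata can be reproduced without the test tarball: a uid of `0xFFFFFFFF` doesn't fit the plain 8-byte octal uid field (max `0o7777777`), so the GNU format stores it base-256, and `os.chown()` then overflows a 32-bit `uid_t` at extraction time. A minimal reproduction of the round-trip (the overflow itself only occurs on 32-bit builds):

```python
import io
import tarfile

buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w", format=tarfile.GNU_FORMAT) as tf:
    info = tarfile.TarInfo("gnu/regtype-gnu-uid")
    info.uid = info.gid = 0xFFFFFFFF  # forces GNU base-256 encoding
    tf.addfile(info, io.BytesIO(b""))

buf.seek(0)
with tarfile.open(fileobj=buf) as tf:
    member = tf.getmember("gnu/regtype-gnu-uid")

# 4294967295 is larger than a 32-bit uid_t, hence the OverflowError
# from os.chown() during extraction on 32-bit systems.
print(member.uid, member.gid)
```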
<!-- gh-linked-prs -->
### Linked PRs
* gh-108369
<!-- /gh-linked-prs -->
| 5d1871576500adc4ebaa7f59b8559605a57ad36b | 72119d16a5f658939809febef29dadeca02cf34d |
python/cpython | python__cpython-107834 | # Improve message of PyType_Spec deprecation warning with location
<!--
New to Python? The issue tracker isn't the right place to get help.
Consider instead:
- reading the Python tutorial: https://docs.python.org/3/tutorial/
- posting at https://discuss.python.org/c/users/7
- emailing https://mail.python.org/mailman/listinfo/python-list
-->
# Bug report
## A clear and concise description of the bug
Followup to https://github.com/python/cpython/issues/103968#issuecomment-1666804797.
The DeprecationWarning for `PyType_Spec` in `3.12.0rc1` is emitted on `<frozen importlib._bootstrap>:400` which makes it quite difficult to know what package is responsible.
In my particular case I found that, after reading #103968, it might be `protobuf`.
And indeed that's how I can reproduce it.
```py
# test.py
from google.protobuf import descriptor
```
```bash
pip install protobuf==4.24.0
python -X dev test.py
```
```
<frozen importlib._bootstrap>:400
DeprecationWarning: Using PyType_Spec with metaclasses that have custom tp_new is deprecated and will no longer be allowed in Python 3.14.
```
To be clear, I don't mind the warning. I'm just wondering if it's possible to improve the error location.
/CC @encukou
--
The relevant CPython code
https://github.com/python/cpython/blob/3bb43b7b1b75154bc4e94b1fa81afe296a8150d0/Objects/typeobject.c#L4243-L4249
And protobuf where it's likely (?) created:
https://github.com/protocolbuffers/protobuf/blob/v4.24.0/python/google/protobuf/pyext/map_container.cc#L562
https://github.com/protocolbuffers/protobuf/blob/v4.24.0/python/google/protobuf/pyext/map_container.cc#L778
# Your environment
- CPython versions tested on: `3.12.0rc1` and with the latest `3.12` commit: 3bb43b7b1b75154bc4e94b1fa81afe296a8150d0
- Operating system and architecture: MacOS
<!-- gh-linked-prs -->
### Linked PRs
* gh-107834
* gh-107864
<!-- /gh-linked-prs -->
| 16dcce21768ba381996a88ac8c255bf1490b3680 | e4275f4df36a7cdd58cd4daa7d65b1947a2593d3 |
python/cpython | python__cpython-107807 | # `turtle.teleport` has incorrect-ish signature
# Bug report
`turtle.teleport` added in https://github.com/python/cpython/pull/103974 seems not quite correct.
Why?
1. It is produced automagically from `Turtle.teleport` instance method. It has this signature: `def teleport(self, x=None, y=None, *, fill_gap: bool = False) -> None:`
2. The resulting function `turtle.teleport` has this signature: `def teleport(x, y=None, fill_gap=None)`
Notice that it is missing the `x=None` default and, for some reason, `fill_gap` is no longer keyword-only.
The second problem happens because inside it uses `inspect.getargs`: https://github.com/python/cpython/blob/52fbcf61b5a70993c2d32332ff0ad9f369d968d3/Lib/turtle.py#L3927 which translates kw-only to positional-or-keyword params.
So, when I try to run `turtle.teleport` I get:
```python
Traceback (most recent call last):
File "/Users/sobolev/Desktop/cpython/ex.py", line 60, in <module>
print(teleport())
^^^^^^^^^^
TypeError: teleport() missing 1 required positional argument: 'x'
```
And with just one arg:
```python
Traceback (most recent call last):
File "/Users/sobolev/Desktop/cpython/ex.py", line 60, in <module>
print(teleport(1))
^^^^^^^^^^^
File "<string>", line 4, in teleport
TypeError: Turtle.teleport() takes from 1 to 3 positional arguments but 4 were given
```
Annotations are also missing. It is not a big problem, but it is inconvenient.
I propose using `inspect.signature` instead. It will let us use positional-only and keyword-only params with ease.
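A quick illustration of the difference, using a standalone function with the same signature shape as `Turtle.teleport`: `inspect.getargs` appends keyword-only names to the positional list and returns no defaults at all, while `inspect.signature` preserves both.

```python
import inspect

def teleport(self, x=None, y=None, *, fill_gap=False):
    pass

# Old approach: getargs() flattens keyword-only parameters into the
# positional list and carries no default information.
flat = inspect.getargs(teleport.__code__)
print(flat.args)  # ['self', 'x', 'y', 'fill_gap']

# signature() keeps both the parameter kinds and the defaults.
params = inspect.signature(teleport).parameters
print(params["fill_gap"].kind)  # KEYWORD_ONLY
print(params["x"].default)      # None
```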
I have a PR ready.
Found while working on https://github.com/python/typeshed/pull/10548
CC @AlexWaygood and @terryjreedy
<!-- gh-linked-prs -->
### Linked PRs
* gh-107807
* gh-108749
<!-- /gh-linked-prs -->
| 044b8b3b6a65e6651b161e3badfa5d57c666db19 | 3edcf743e88b4ac4431d4b0f3a66048628cf5c70 |
python/cpython | python__cpython-107804 | # Double linked list implementation for asyncio tasks
Currently `asyncio` tasks are stored in a `WeakSet`. This is inefficient and in some cases causes thread-safety bugs (https://github.com/python/cpython/issues/80788). In terms of memory usage it requires maintaining a full set, plus a weakref callback per task to clean up entries when objects are deallocated and finalized by the GC. In applications where tasks are created at a fast pace this becomes a bottleneck. To mitigate this, `asyncio` tasks will now be stored in a global doubly linked list of tasks when the task is an instance of `_asyncio.Task`; in other cases we still rely on the weak set. This reduces the work done by the GC, speeds up execution, and reduces memory usage. In some of my own benchmarks I have seen 15-20% improvement, and the pyperformance benchmarks reflect roughly the same.
https://github.com/faster-cpython/benchmarking-public/blob/main/results/bm-20230805-3.13.0a0-1d32835/bm-20230805-linux-x86_64-kumaraditya303-linked_list-3.13.0a0-1d32835-vs-base.md
Updated: https://github.com/faster-cpython/benchmarking-public/tree/main/results/bm-20240622-3.14.0a0-4717aaa#vs-base
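The core idea is an intrusive doubly linked list with a sentinel node: linking and unlinking a task are both O(1) pointer updates, with no weakref callbacks for the GC to run. An illustrative pure-Python sketch (the actual change implements this in C inside `_asyncio`; names here are made up):

```python
class Node:
    """A list node; in the real design the task itself carries the links."""
    __slots__ = ("value", "prev", "next")

    def __init__(self, value=None):
        self.value = value
        self.prev = self.next = self

class TaskList:
    def __init__(self):
        self.head = Node()  # sentinel: head.next is the first task

    def link(self, node):
        # O(1) append before the sentinel.
        tail = self.head.prev
        tail.next = node
        node.prev = tail
        node.next = self.head
        self.head.prev = node

    def unlink(self, node):
        # O(1) removal when a task finishes; no set lookup, no weakrefs.
        node.prev.next = node.next
        node.next.prev = node.prev
        node.prev = node.next = node

    def __iter__(self):
        node = self.head.next
        while node is not self.head:
            yield node.value
            node = node.next

tasks = TaskList()
a, b = Node("t1"), Node("t2")
tasks.link(a)
tasks.link(b)
tasks.unlink(a)
print(list(tasks))  # ['t2']
```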
<!-- gh-linked-prs -->
### Linked PRs
* gh-107804
* gh-120995
* gh-121007
* gh-126577
<!-- /gh-linked-prs -->
| 4717aaa1a72d1964f1531a7c613f37ce3d9056d9 | e21347549535b16f51a39986b78a2c2cd4ed09f4 |
python/cpython | python__cpython-107786 | # Pydoc: fall back to __text_signature__ if inspect.signature() fails
Pydoc uses `inspect.signature()` to get a function signature. `inspect.signature()` supports Python functions, and also extension functions defined with Argument Clinic (by parsing the `__text_signature__` attribute). Unfortunately, Argument Clinic is also used with functions whose signature cannot be expressed in Python, e.g. `getattr()` or `dict.pop()`. It produces a human-readable `__text_signature__`, but that cannot be represented as a `Signature` object, so `inspect.signature()` fails. Pydoc displays a generic `(...)` for such functions.
Since pydoc only needs a string representation of signature, not a `Signature` object, I propose to use `__text_signature__` if `inspect.signature()` fails. It needs only some trivial processing.
Before:
```
getattr(...)
Get a named attribute from an object.
getattr(x, 'y') is equivalent to x.y
When a default argument is given, it is returned when the attribute doesn't
exist; without it, an exception is raised in that case.
```
After:
```
getattr(object, name, default=<unrepresentable>, /)
Get a named attribute from an object.
getattr(x, 'y') is equivalent to x.y
When a default argument is given, it is returned when the attribute doesn't
exist; without it, an exception is raised in that case.
```
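A hedged sketch of the fallback (the helper name and the exact cleanup are illustrative, not the actual pydoc patch): try `inspect.signature()` first; on failure, use `__text_signature__` with trivial processing, and only then give up with `(...)`.

```python
import inspect

def format_signature(obj):
    """Best-effort string signature for pydoc-style display."""
    try:
        return str(inspect.signature(obj))
    except (ValueError, TypeError):
        text = getattr(obj, "__text_signature__", None)
        if text:
            # Trivial cleanup: drop the "$module"/"$self" markers.
            return text.replace("($module, ", "(").replace("($self, ", "(")
        return "(...)"

def pure_python(a, b=1):
    pass

print(format_signature(pure_python))  # (a, b=1)
print(format_signature(getattr))      # text signature, or generic (...)
```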
<!-- gh-linked-prs -->
### Linked PRs
* gh-107786
* gh-109325
<!-- /gh-linked-prs -->
| a39f0a350662f1978104ee1136472d784aa6f29c | 5f7d4ecf301ef12eb1d1d347add054f4fcd8fc5c |
python/cpython | python__cpython-107775 | # Missing audit event when registering a callback function in `sys.monitoring`.
PEP 669 states that "Registering or unregistering a callback function will generate a `sys.audit` event.", but it isn't implemented yet.
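For reference, the raising/observing machinery the PEP relies on looks like this; the event name below is purely illustrative (the real name is whatever the linked PRs add), and note that audit hooks cannot be removed once installed:

```python
import sys

seen = []

def hook(event, args):
    # Only collect our demo events; hooks see every audit event.
    if event.startswith("demo."):
        seen.append((event, args))

sys.addaudithook(hook)

# Shape of what PEP 669 asks for: an audit event raised when a
# monitoring callback is registered or unregistered.
sys.audit("demo.register_callback", "tool_id", "event", "callback")
print(seen)
```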
<!-- gh-linked-prs -->
### Linked PRs
* gh-107775
* gh-107839
<!-- /gh-linked-prs -->
| 494e3d4436774a5ac1a569a635b8c5c881ef1c0c | 39ef93edb9802dccdb6555d4209ac2e60875a011 |