repo stringclasses 1 value | instance_id stringlengths 20 22 | problem_statement stringlengths 126 60.8k | merge_commit stringlengths 40 40 | base_commit stringlengths 40 40 |
|---|---|---|---|---|
python/cpython | python__cpython-128097 | # NotImplementedError raised by annotationlib (?) when importing classes derived from NamedTuple
# Bug report
### Bug description:
With 3.14.0a3 there is an issue affecting all of the building-block packages (setuptools, pip, wheel), causing build failures for over 2800 packages in Fedora Linux in total.
In cases I explored, the issue appears when importing classes derived from NamedTuple.
To reproduce in the Python 3.14.0a3 interpreter:
```python
>>> import setuptools.package_index
Traceback (most recent call last):
File "<python-input-18>", line 1, in <module>
import setuptools.package_index
File "/usr/lib/python3.14/site-packages/setuptools/package_index.py", line 1005, in <module>
class Credential(NamedTuple):
...<12 lines>...
return f'{self.username}:{self.password}'
File "/usr/lib64/python3.14/typing.py", line 2971, in __new__
types = annotationlib.call_annotate_function(original_annotate, annotationlib.Format.FORWARDREF)
File "/usr/lib64/python3.14/annotationlib.py", line 613, in call_annotate_function
result = func(Format.VALUE_WITH_FAKE_GLOBALS)
File "/usr/lib/python3.14/site-packages/setuptools/package_index.py", line 1005, in __annotate__
class Credential(NamedTuple):
NotImplementedError
```
The class in question looks like this: https://github.com/pypa/setuptools/blob/main/setuptools/package_index.py#L1003
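For reference, a minimal class of the same shape is enough to exercise the failing code path on an affected build, while importing cleanly elsewhere (a sketch; field names inferred from the traceback above, not copied from setuptools):

```python
from typing import NamedTuple

class Credential(NamedTuple):
    # Sketch of setuptools' Credential class; field names inferred from
    # the f-string in the traceback above.
    username: str
    password: str

    def __str__(self):
        return f'{self.username}:{self.password}'

print(Credential('user', 'secret'))  # prints user:secret
```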
### CPython versions tested on:
3.14
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-128097
<!-- /gh-linked-prs -->
| d50fa0541b6d4f458d7ab16d3a11b1117c607f85 | 879d287f49edcd4fa68324265f8ba63758716540 |
python/cpython | python__cpython-128081 | # It seems that `enum.Enum.__init__` serves no purpose
# Feature or enhancement
### Proposal:
Enum gained an `__init__` method in 3.11 which just passes:
```python
def __init__(self, *args, **kwds):
pass
```
That was added in https://github.com/python/cpython/pull/30582. [One person questioned it at the time](https://github.com/python/cpython/pull/30582#issuecomment-1181055042), but there was no response. The best I can figure is that it was related to special handling of `__init__` in `EnumType.__dir__`, which you can see here: https://github.com/python/cpython/blob/3852269b91fcc8ee668cd876b3669eba6da5b1ac/Lib/enum.py#L768-L779
That special handling of `__init__` was removed by https://github.com/python/cpython/pull/30677 prior to 3.11 being released, which leaves no plausible purpose for `Enum.__init__` that I can see. It muddies introspection of the type, so I think we should remove it if that's really the case.
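The muddied introspection can be seen directly (a sketch; the reported signature depends on the Python version):

```python
import enum
import inspect

class Color(enum.Enum):
    RED = 1

# On 3.11+ the pass-through Enum.__init__ is what signature() finds, so
# every Enum subclass reports a generic (*args, **kwds) signature instead
# of falling back to object.__init__.
print('__init__' in vars(enum.Enum))
print(inspect.signature(Color.__init__))
```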
### Has this already been discussed elsewhere?
This is a minor feature, which does not need previous discussion elsewhere
### Links to previous discussion of this feature:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-128081
<!-- /gh-linked-prs -->
| c14db202750ff9eaf3919298f1172270b7dfd64e | 255762c09fe518757bb3e8ce1bb6e5d8eec9f466 |
python/cpython | python__cpython-128780 | # Async generator/anext with default-tuple-value results in SystemError: <class 'StopIteration'> returned with exception set
# Bug report
### Bug description:
I asked a question on [stackoverflow](https://stackoverflow.com/questions/79289349/why-does-this-async-generator-anext-snippet-result-in-systemerror-class-stopi) because I found a strange error while writing code that makes use of an async generator function and the built-in `anext()` function with a tuple value for the `default` argument of `anext()`.
One of the [stackoverflow answers](https://stackoverflow.com/a/79289763/3379953) contains a good analysis and a possible cause of the problem.
The failing code is the second line with `anext()`:
```python
import asyncio
async def generator(it=None):
if it is not None:
yield (it, it)
async def my_func():
# results in a=1 b=1
a, b = await anext(generator(1), (2, 3))
# results in no printing
async for a, b in generator():
print(a, b)
# raises exception
a, b = await anext(generator(), (2, 3))
loop = asyncio.new_event_loop()
loop.run_until_complete(my_func())
```
It raises this unexpected exception:
```shell
StopAsyncIteration
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/manuel/.pyenv/versions/3.12.0/lib/python3.12/asyncio/base_events.py", line 664, in run_until_complete
return future.result()
^^^^^^^^^^^^^^^
File "<stdin>", line 6, in my_func
SystemError: <class 'StopIteration'> returned a result with an exception set
```
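Until this is fixed, one workaround sketch (the helper name is mine) is to catch `StopAsyncIteration` directly instead of handing a tuple default to `anext()`:

```python
import asyncio

async def generator(it=None):
    if it is not None:
        yield (it, it)

async def anext_with_default(ait, default):
    # Avoid anext(ait, default): catch the exhaustion ourselves so the
    # interpreter never has to return a tuple while StopIteration is set.
    try:
        return await ait.__anext__()
    except StopAsyncIteration:
        return default

async def my_func():
    return await anext_with_default(generator(), (2, 3))

print(asyncio.run(my_func()))  # prints (2, 3)
```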
### CPython versions tested on:
3.11, 3.12, 3.13
### Operating systems tested on:
macOS
<!-- gh-linked-prs -->
### Linked PRs
* gh-128780
* gh-128784
* gh-128785
* gh-128287
* gh-128789
* gh-128790
<!-- /gh-linked-prs -->
| 76ffaef729c91bb79da6df2ade48f3ec51118300 | 517dc65ffcea8413e1a60c4cb5d63e5fa39e7f72 |
python/cpython | python__cpython-128537 | # Code generator should reject escaping calls in DEOPT_IF/EXIT_IF
I had an
`EXIT_IF(escaping_call(), OP);`
and it produced:
```c
_PyFrame_SetStackPointer(frame, stack_pointer);
DEOPT_IF(escaping_call(), OP);
stack_pointer = _PyFrame_GetStackPointer(frame);
```
and then the frame's stack pointer got messed up when deopt happened. It was very upsetting.
@markshannon
<!-- gh-linked-prs -->
### Linked PRs
* gh-128537
<!-- /gh-linked-prs -->
| b9c693dcca01537eee1ef716ffebc632be37594b | f89e5e20cb8964653ea7d6f53d3e40953b6548ce |
python/cpython | python__cpython-128063 | # Fix the font size and shortcut display of the turtledemo menu
### Bug description:
The font size of turtledemo's menu bar is too large and the shortcut keys are not displayed well enough.
Currently:
<img width="732" alt="2024-12-18-18-07-39" src="https://github.com/user-attachments/assets/496c27e6-00c4-4b85-8d20-671e92a38792" />
But this looks better (changed the font and the way shortcuts are displayed):
<img width="732" alt="2024-12-18-18-05-29" src="https://github.com/user-attachments/assets/f4e96311-d656-483f-ab33-be19c9187e6d" />
I'll create a PR later to fix it.
### CPython versions tested on:
3.13
### Operating systems tested on:
Windows
<!-- gh-linked-prs -->
### Linked PRs
* gh-128063
* gh-128101
<!-- /gh-linked-prs -->
| e163e8d4e1a9844b8615ef38b9917b887a377948 | 1b15c89a17ca3de6b05de5379b8717e9738c51ef |
python/cpython | python__cpython-128068 | # [Free Threading] test_builtin.ImmortalTests fail on i686 (32-bit)
# Bug report
### Bug description:
In Fedora Linux, when building 3.14.0a3 on the i686 architecture with the freethreading-debug build, `test_tuple_repeat_respect_immortality`, `test_list_repeat_respect_immortality` and `test_immortals` fail.
This is not happening on x86_64, aarch64. We don't know about the other arches yet.
```shell
FAIL: test_tuple_repeat_respect_immortality (test.test_builtin.ImmortalTests.test_tuple_repeat_respect_immortality) [256]
----------------------------------------------------------------------
Traceback (most recent call last):
File "/builddir/build/BUILD/python3.14-3.14.0_a3-build/Python-3.14.0a3/Lib/test/test_builtin.py", line 2702, in assert_immortal
self.assertEqual(sys.getrefcount(immortal), self.IMMORTAL_REFCOUNT)
~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AssertionError: 1342177280 != 1879048192
FAIL: test_immortals (test.test_builtin.ImmortalTests.test_immortals) [None]
----------------------------------------------------------------------
Traceback (most recent call last):
File "/builddir/build/BUILD/Python-3.14.0a3/Lib/test/test_builtin.py", line 2702, in assert_immortal
self.assertEqual(sys.getrefcount(immortal), self.IMMORTAL_REFCOUNT)
~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AssertionError: 1342177280 != 1879048192
```
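For context, `sys.getrefcount()` on an immortal object reports a fixed sentinel value rather than a live count, and the failing assertions compare that sentinel against a per-build expectation (a sketch; the exact number varies by build and Python version):

```python
import sys

# None is immortal on recent CPython builds, so its reported refcount is
# a large constant encoding the immortal bit pattern, not a real count.
print(sys.getrefcount(None))
```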
### CPython versions tested on:
3.14
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-128068
<!-- /gh-linked-prs -->
| daa260ebb1c1b20321e7f26df7c9dbd35d4edcbf | 39e69a7cd54d44c9061db89bb15c460d30fba7a6 |
python/cpython | python__cpython-128070 | # test_sysconfig.TestSysConfig.test_sysconfigdata_json fails: it expects 'userbase' to be '/root/.local'
# Bug report
### Bug description:
As in the title. We hit this in Fedora; could it be caused by a mismatch of users (one generated the file, another runs the tests)?
```shell
FAIL: test_sysconfigdata_json (test.test_sysconfig.TestSysConfig.test_sysconfigdata_json)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/usr/lib64/python3.14/test/test_sysconfig.py", line 671, in test_sysconfigdata_json
self.assertEqual(system_config_vars, json_config_vars)
~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
<snip>
- 'userbase': '/root/.local'}
? ---
+ 'userbase': '/builddir/.local'}
? +++++++
```
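A likely explanation sketch: `userbase` derives from the home directory of the user running the interpreter, so a JSON snapshot generated as root will disagree with config vars recomputed at test time by another user:

```python
import site

# getuserbase() is the source of the 'userbase' value; it follows the
# current user's home directory (e.g. /root vs /builddir in the diff).
print(site.getuserbase())
```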
### CPython versions tested on:
3.14
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-128070
<!-- /gh-linked-prs -->
| e7980ba233bcbdb811e96bd5003c7d51a4e25155 | 3a8cefba0b60bd25c6b13a32cab4eb8d1dbf94ce |
python/cpython | python__cpython-135908 | # Broken tests with "legacy" sys.float_repr_style
# Bug report
### Bug description:
In #128005, a build broken by a downstream patch revealed some problems in the CPython test suite. Except for a few explicitly marked tests, the rest don't account for the different repr() behavior for floats. Many tests fail [when the interpreter is built with ``_PY_SHORT_FLOAT_REPR`` set to 0](https://github.com/python/cpython/issues/128005#issuecomment-2548548630): https://github.com/python/cpython/issues/128005#issuecomment-2548584912
Currently we don't test platforms with ``sys.float_repr_style == "legacy"``. Maybe we should? At least this define can be explicitly set by some configure option, and we could test such a build.
Then the failing tests should either be fixed (by using floats with an exact decimal representation, like 1.25) or xfailed with a suitable decorator.
Is this worth doing?
BTW, as [mentioned](https://github.com/python/cpython/issues/128005?#issuecomment-2548539759) by Victor Stinner, all [PEP 11 platforms](https://peps.python.org/pep-0011/) support the floating-point "short" format.
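A sketch of why fixing tests with exactly representable floats works: such values round-trip to the same string under both repr modes, while most decimal fractions only do so under the "short" mode:

```python
# 1.25 is exactly representable in binary floating point, so its repr is
# the same string under "short" and "legacy" modes; 0.1 is inexact and
# is where the two modes produce different digit strings.
assert (1.25).hex() == '0x1.4000000000000p+0'
assert float(repr(1.25)) == 1.25
print(repr(1.25), repr(0.1))
```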
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-135908
* gh-136025
* gh-136026
<!-- /gh-linked-prs -->
| f3aec60d7a01c5f085a3ef2d6670d46b42b8ddd3 | e23518fa96583d0190d457adb807b19545df26cf |
python/cpython | python__cpython-128079 | # Return value of <ExceptionGroup class>.split has insufficient checks leading to a type confusion bug
# Crash report
### What happened?
The following code has checks to make sure the return value is a tuple of size 2, but only in asserts, which means these checks won't happen on a non-debug build.
https://github.com/python/cpython/blob/b92f101d0f19a1df32050b8502cfcce777b079b2/Python/ceval.c#L2093-L2101
So you can create an ExceptionGroup subclass with a custom `split` function that doesn't return a tuple, and the interpreter will try to interpret that object as a tuple.
PoC
```python
class Evil(BaseExceptionGroup):
def split(self, *args):
return "NOT A TUPLE!"
print("Running...")
try:
raise Evil("wow!", [Exception()])
except* Exception:
pass
print("program should crash before reaching this")
```
Output
```
Running...
Segmentation fault (core dumped)
```
### CPython versions tested on:
3.11, 3.12, 3.13
### Operating systems tested on:
Linux, Windows
### Output from running 'python -VV' on the command line:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-128079
* gh-128139
* gh-128140
<!-- /gh-linked-prs -->
| 3879ca0100942ae15a09ac22889cbe3e46d424eb | 5a584c8f54bbeceae7ffa501291e29b7ddc8a0b9 |
python/cpython | python__cpython-128044 | # code objects remove unknown opcodes from the instruction stream when accessing `co_code`
# Bug report
### Bug description:
If you construct a code object with an unknown opcode, the opcode gets removed when the code object's instructions are de-opted on access to `co_code`. We should just ignore unknown opcodes instead.
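A sketch of the round trip in question (opcode value 200 is assumed to be unused; on affected versions the byte comes back altered rather than preserved, so no particular output is asserted here):

```python
def f():
    return 1

c = f.__code__
raw = bytearray(c.co_code)
raw[0] = 200  # overwrite the first opcode with an unknown value
c2 = c.replace(co_code=bytes(raw))
# Reading co_code runs the de-opt pass over the instructions; affected
# versions strip the unknown opcode here instead of passing it through.
print(c2.co_code[0])
```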
### CPython versions tested on:
3.12, 3.14
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-128044
* gh-134228
* gh-134231
<!-- /gh-linked-prs -->
| cc9add695da42defc72e62c5d5389621dac54b2b | 71ea6a6798c5853ed33188865a73b044ede8aba8 |
python/cpython | python__cpython-128043 | # Add `terminate_workers` to `ProcessPoolExecutor`
# Feature or enhancement
### Proposal:
This is an interpretation of the feature ask in: https://discuss.python.org/t/cancel-running-work-in-processpoolexecutor/58605/1. It would be a way to stop all the workers running in a `ProcessPoolExecutor`
```python
p = ProcessPoolExecutor()
# use p
# i know i want p to die at this point no matter what
p.terminate_workers()
```
Previously the way to do this was to loop through the `._processes` of the `ProcessPoolExecutor`; preferably this should be possible without accessing implementation details.
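The workaround in question looks roughly like this (a sketch relying on the private `_processes` attribute, which may change between versions; the proposed `terminate_workers()` API would make it unnecessary):

```python
from concurrent.futures import ProcessPoolExecutor

def terminate_workers(pool):
    # Reach into the private process map and terminate each worker;
    # this is exactly the implementation detail the proposed public
    # API would hide.
    for proc in list(pool._processes.values()):
        proc.terminate()

if __name__ == "__main__":
    pool = ProcessPoolExecutor(max_workers=1)
    pool.submit(int, "7").result()  # force a worker process to spawn
    terminate_workers(pool)
    pool.shutdown(wait=False)
```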
### Has this already been discussed elsewhere?
I have already discussed this feature proposal on Discourse
### Links to previous discussion of this feature:
https://discuss.python.org/t/cancel-running-work-in-processpoolexecutor/58605/1
<!-- gh-linked-prs -->
### Linked PRs
* gh-128043
* gh-130812
* gh-130838
* gh-130849
* gh-130900
<!-- /gh-linked-prs -->
| efadc5874cdecc0420926afd5540b9b25c5e97fe | 63ffb406bb000a42b0dbddcfc01cb98a12f8f76a |
python/cpython | python__cpython-128036 | # Add ssl.HAS_PHA to detect libssl PHA support
# Feature or enhancement
### Proposal:
[TLSv1.3 post-handshake client authentication](https://datatracker.ietf.org/doc/html/rfc8446#section-4.2.6) (PHA), often referred to as "mutual TLS" or "mTLS", allows TLS servers to authenticate client identities using digital certificates. Some TLS libraries do not implement PHA, including actively maintained and widely used libraries such as [AWS-LC](https://github.com/aws/aws-lc/) and [BoringSSL](https://github.com/google/boringssl/).
This issue proposes the addition of a boolean property `ssl.HAS_PHA` to indicate whether the crypto library CPython is built against supports PHA, allowing Python's test suite and consuming modules to branch accordingly.
This feature has precedent in the `ssl.HAS_PSK` flag indicating support for another TLS feature that is not universally implemented across TLS libraries.
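The intended usage mirrors the existing `HAS_PSK` check; since `HAS_PHA` is only proposed at this point, a sketch has to feature-detect it with `getattr`:

```python
import ssl

# HAS_PSK is the existing precedent; HAS_PHA is the proposed flag, so
# probe for it with getattr until it exists on all supported versions.
has_psk = getattr(ssl, "HAS_PSK", False)
has_pha = getattr(ssl, "HAS_PHA", False)
print(f"PSK support: {has_psk}, PHA support: {has_pha}")
```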
### Has this already been discussed elsewhere?
This is a minor feature, which does not need previous discussion elsewhere
### Links to previous discussion of this feature:
Related changes to increase libcrypto/libssl compatibility (specifically with AWS-LC) have been discussed with the community [here](https://discuss.python.org/t/support-building-ssl-and-hashlib-modules-against-aws-lc/44505/2).
<!-- gh-linked-prs -->
### Linked PRs
* gh-128036
<!-- /gh-linked-prs -->
| 418114c139666f33abff937e40ccbbbdce15bc39 | 7985d460c731b2c48419a33fc1820f9512bb6f21 |
python/cpython | python__cpython-128054 | # Change PyMutex_LockFast to take mutex as argument
`PyMutex_LockFast` is the only function that takes the bits from the mutex directly rather than the mutex itself. That breaks a good abstraction, so I propose changing it to take the mutex as its argument like all the other functions.
cc @colesbury
<!-- gh-linked-prs -->
### Linked PRs
* gh-128054
<!-- /gh-linked-prs -->
| 91c55085a959016250f1877e147ef379bb97dd12 | 8a433b683fecafe1cb04469a301df2b4618167d0 |
python/cpython | python__cpython-128047 | # Regression in 3.13.1 for imports of nonexistent names from CFFI modules
# Bug report
### Bug description:
cffi's `TestNewFFI1.test_import_from_lib` started failing with:
```python
> from _test_import_from_lib.lib import bar
E TypeError: bad argument type for built-in operation
```
This is where `_test_import_from_lib` is a CFFI C extension, which does not contain a member `lib.bar`. So in this case `_test_import_from_lib.lib` probably doesn't have an origin.
More details here: https://github.com/python-cffi/cffi/issues/150
The offending change seems to be the introduction of `PyModule_GetFilenameObject` in #127775. (@hauntsaninja, @serhiy-storchaka)
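For comparison, the expected behavior when importing a missing name is a plain `ImportError`, not a `TypeError` (using `os.path` with a made-up attribute name as a stand-in):

```python
# Importing a nonexistent name should raise ImportError, not TypeError;
# 'no_such_name' is a made-up attribute used for illustration only.
try:
    from os.path import no_such_name
except ImportError as exc:
    print(type(exc).__name__)  # prints ImportError
```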
### CPython versions tested on:
3.13
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-128047
* gh-128114
<!-- /gh-linked-prs -->
| 45e6dd63b88a782f2ec96ab1da54eb5a074d8f4c | daa260ebb1c1b20321e7f26df7c9dbd35d4edcbf |
python/cpython | python__cpython-128887 | # The sys.float_repr_style should be read-only
# Bug report
### Bug description:
It's possible to "change" the repr style for floats:
```pycon
Python 3.14.0a2+ (heads/long_export-decimal:4bb9ad04f5, Dec 17 2024, 05:39:57) [GCC 12.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import sys
>>> sys.float_repr_style
'short'
>>> sys.float_repr_style = 'legacy'
>>> sys.float_repr_style
'legacy'
```
Of course, this has no effect, as this variable just shows build-time settings. That can certainly mislead people, e.g. see this: https://github.com/python/cpython/issues/128005#issuecomment-2547366487
It would be nice to make this variable read-only (unfortunately, PEP 726 was rejected and it might not be so easy). Or, at least, improve the docs to mention that it's intended to be read-only (we emphasize this e.g. for the int_info fields).
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-128887
* gh-128908
* gh-128909
<!-- /gh-linked-prs -->
| 313b96eb8b8d0ad3bac58d633822a0a3705ce60b | d05140f9f77d7dfc753dd1e5ac3a5962aaa03eff |
python/cpython | python__cpython-128020 | # Make `SyntaxWarning` for invalid escape sequences better reflect their intended deprecation
# Feature or enhancement
### Proposal:
I would like to propose that the `SyntaxWarning` that is raised when an invalid escape sequence is used be updated to better reflect the fact that the ability of Python users to employ invalid escape sequences is [intended to be removed](https://docs.python.org/3/whatsnew/3.12.html#other-language-changes) in a future Python release.
At present, if one runs the code `path = 'C:\Windows'` in Python, they get this warning: `SyntaxWarning: invalid escape sequence '\W'`.
I argue that that warning is not as semantically meaningful as it could be. It does not immediately convey to the *untrained* and/or *uninitiated* that `path = 'C:\Windows'` will in fact **break** in a future Python release.
What is a better way of communicating that? How about: `SyntaxWarning: '\W' is currently an invalid escape sequence. In the future, invalid escape sequences will raise a SyntaxError. Did you mean '\\W'?`.
That message is a little bit longer **but** it immediately tells the user, without the need for heading to Stack Overflow or Python documentation, that:
1. Although the code runs today, **it won't run soon**!
2. You can fix the code easily, **just add an extra backslash**.
Whereas all that `SyntaxWarning: invalid escape sequence '\W'` tells me is, at best, hey, there's something wrong here, but you've gotta figure out what that is. Someone could easily read that message and think maybe Python is trying to be helpful and make me double check that I didn't actually mean to type a valid escape sequence like `\f` or `\n`.
A message like `SyntaxWarning: '\W' is currently an invalid escape sequence. In the future, invalid escape sequences will raise a SyntaxError. Did you mean '\\W'?` makes it much more difficult to come away with the message that the Python developers might not like what I've just done but it works and it'll keep working forever.
----
Update: The proposed message has been revised to `"\W" is an invalid escape sequence. Such sequences will not work in the future. Did you mean "\\W"?` following [consultation](https://discuss.python.org/t/make-syntaxwarning-for-invalid-escape-sequences-better-reflect-their-intended-deprecation/74416/13).
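The current warning can be observed by compiling source text that contains the invalid sequence (a sketch; the category and wording vary by version, `DeprecationWarning` before 3.12 and `SyntaxWarning` after):

```python
import warnings

src = "path = 'C:\\Windows'"  # the compiled source text contains \W
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    compile(src, "<demo>", "exec")
print(caught[0].category.__name__, "-", caught[0].message)
```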
### Has this already been discussed elsewhere?
I have already discussed this feature proposal on Discourse
### Links to previous discussion of this feature:
https://discuss.python.org/t/make-syntaxwarning-for-invalid-escape-sequences-better-reflect-their-intended-deprecation/74416/2
<!-- gh-linked-prs -->
### Linked PRs
* gh-128020
<!-- /gh-linked-prs -->
| 8d8b854824c4723d7c5924f1d5c6a397ea7214a5 | 40a4d88a14c741172a158683c39d232c587c6f11 |
python/cpython | python__cpython-128015 | # `tkinter.Wm.wm_iconbitmap` has no effect when passing an empty string to the parameter `default`
# Bug report
### Bug description:
```python
import tkinter
from tkinter import messagebox
root = tkinter.Tk()
root.update()
root.tk.call("wm", "iconbitmap", root._w, "-default", "")
messagebox.showinfo()
root.mainloop()
```
Running the code above can get the following effect:

---
```python
import tkinter
from tkinter import messagebox
root = tkinter.Tk()
root.update()
root.wm_iconbitmap(default="") # the different line
messagebox.showinfo()
root.mainloop()
```
Running the code above can get the following effect:

However, these two pieces of code should in fact be equivalent; an inaccurate logical check causes the discrepancy.
This is a very easy bug to fix, and I'll create a PR to fix it later.
The issue may only exist on Windows systems.
### CPython versions tested on:
3.13
### Operating systems tested on:
Windows
<!-- gh-linked-prs -->
### Linked PRs
* gh-128015
* gh-128418
* gh-128420
<!-- /gh-linked-prs -->
| 58e9f95c4aa970db32a94b9152b51ede22f823bd | c9d2bc6d7f6d74e0539afb0f7066997ae736dfc8 |
python/cpython | python__cpython-128021 | # Data race in PyUnicode_AsUTF8AndSize under free-threading
# Bug report
### Bug description:
Repro: build this C extension (`race.so`) with free threading enabled, with a CPython built with thread-sanitizer enabled:
```c
#define PY_SSIZE_T_CLEAN
#include <Python.h>
static PyObject *ConvertStr(PyObject *self, PyObject *arg) {
    Py_ssize_t size;
    const char *str = PyUnicode_AsUTF8AndSize(arg, &size);
    if (str == NULL) {
        return NULL;  /* propagate conversion errors */
    }
    Py_RETURN_NONE;
}
static PyMethodDef race_methods[] = {
    {"convert_str", ConvertStr, METH_O, "Converts a string to utf8"},
    {NULL, NULL, 0, NULL}};
static struct PyModuleDef race_module = {
    PyModuleDef_HEAD_INIT, "race",
    NULL, -1,
    race_methods};
#define EXPORT_SYMBOL __attribute__ ((visibility("default")))
EXPORT_SYMBOL PyMODINIT_FUNC PyInit_race(void) {
    PyObject *module = PyModule_Create(&race_module);
    if (module == NULL) {
        return NULL;
    }
    PyUnstable_Module_SetGIL(module, Py_MOD_GIL_NOT_USED);
    return module;
}
```
and run this python module:
```python
import concurrent.futures
import threading
import race
num_threads = 8
b = threading.Barrier(num_threads)
def closure():
b.wait()
print("start")
for _ in range(10000):
race.convert_str("😊")
with concurrent.futures.ThreadPoolExecutor(max_workers=num_threads) as executor:
for _ in range(num_threads):
executor.submit(closure)
```
I built the module with `-fsanitize=thread` (`clang-18 -fsanitize=thread t.c -shared -o race.so -I ~/p/cpython-tsan/include/python3.13t/`) although I doubt it matters a whole lot.
After running it a few times on my machine, I received the following thread-sanitizer report:
```
WARNING: ThreadSanitizer: data race (pid=2939235)
Write of size 8 at 0x7f4b601ebd98 by thread T3:
#0 unicode_fill_utf8 /usr/local/google/home/phawkins/p/cpython/Objects/unicodeobject.c:5445:37 (python3.13+0x323820) (BuildId: 9c1c16fb1bb8a435fa6fa4c6944da5d41f654e96)
#1 PyUnicode_AsUTF8AndSize /usr/local/google/home/phawkins/p/cpython/Objects/unicodeobject.c:4066:13 (python3.13+0x323820)
#2 ConvertStr t.c (race.so+0x1205) (BuildId: 2ca767157d7177c993bad36fb4e26c7315893616)
#3 cfunction_vectorcall_O /usr/local/google/home/phawkins/p/cpython/Objects/methodobject.c:512:24 (python3.13+0x28a4b5) (BuildId: 9c1c16fb1bb8a435fa6fa4c6944da5d41f654e96)
#4 _PyObject_VectorcallTstate /usr/local/google/home/phawkins/p/cpython/./Include/internal/pycore_call.h:168:11 (python3.13+0x1eafaa) (BuildId: 9c1c16fb1bb8a435fa6fa4c6944da5d41f654e96)
#5 PyObject_Vectorcall /usr/local/google/home/phawkins/p/cpython/Objects/call.c:327:12 (python3.13+0x1eafaa)
#6 _PyEval_EvalFrameDefault /usr/local/google/home/phawkins/p/cpython/Python/generated_cases.c.h:813:23 (python3.13+0x3e24fb) (BuildId: 9c1c16fb1bb8a435fa6fa4c6944da5d41f654e96)
#7 _PyEval_EvalFrame /usr/local/google/home/phawkins/p/cpython/./Include/internal/pycore_ceval.h:119:16 (python3.13+0x3de62a) (BuildId: 9c1c16fb1bb8a435fa6fa4c6944da5d41f654e96)
#8 _PyEval_Vector /usr/local/google/home/phawkins/p/cpython/Python/ceval.c:1811:12 (python3.13+0x3de62a)
#9 _PyFunction_Vectorcall /usr/local/google/home/phawkins/p/cpython/Objects/call.c (python3.13+0x1eb61f) (BuildId: 9c1c16fb1bb8a435fa6fa4c6944da5d41f654e96)
#10 _PyObject_VectorcallTstate /usr/local/google/home/phawkins/p/cpython/./Include/internal/pycore_call.h:168:11 (python3.13+0x1ef5ef) (BuildId: 9c1c16fb1bb8a435fa6fa4c6944da5d41f654e96)
#11 method_vectorcall /usr/local/google/home/phawkins/p/cpython/Objects/classobject.c:70:20 (python3.13+0x1ef5ef)
#12 _PyVectorcall_Call /usr/local/google/home/phawkins/p/cpython/Objects/call.c:273:16 (python3.13+0x1eb293) (BuildId: 9c1c16fb1bb8a435fa6fa4c6944da5d41f654e96)
#13 _PyObject_Call /usr/local/google/home/phawkins/p/cpython/Objects/call.c:348:16 (python3.13+0x1eb293)
#14 PyObject_Call /usr/local/google/home/phawkins/p/cpython/Objects/call.c:373:12 (python3.13+0x1eb315) (BuildId: 9c1c16fb1bb8a435fa6fa4c6944da5d41f654e96)
#15 thread_run /usr/local/google/home/phawkins/p/cpython/./Modules/_threadmodule.c:337:21 (python3.13+0x564292) (BuildId: 9c1c16fb1bb8a435fa6fa4c6944da5d41f654e96)
#16 pythread_wrapper /usr/local/google/home/phawkins/p/cpython/Python/thread_pthread.h:243:5 (python3.13+0x4bd637) (BuildId: 9c1c16fb1bb8a435fa6fa4c6944da5d41f654e96)
Previous read of size 8 at 0x7f4b601ebd98 by thread T7:
#0 PyUnicode_AsUTF8AndSize /usr/local/google/home/phawkins/p/cpython/Objects/unicodeobject.c:4075:18 (python3.13+0x3236cc) (BuildId: 9c1c16fb1bb8a435fa6fa4c6944da5d41f654e96)
#1 ConvertStr t.c (race.so+0x1205) (BuildId: 2ca767157d7177c993bad36fb4e26c7315893616)
#2 cfunction_vectorcall_O /usr/local/google/home/phawkins/p/cpython/Objects/methodobject.c:512:24 (python3.13+0x28a4b5) (BuildId: 9c1c16fb1bb8a435fa6fa4c6944da5d41f654e96)
#3 _PyObject_VectorcallTstate /usr/local/google/home/phawkins/p/cpython/./Include/internal/pycore_call.h:168:11 (python3.13+0x1eafaa) (BuildId: 9c1c16fb1bb8a435fa6fa4c6944da5d41f654e96)
#4 PyObject_Vectorcall /usr/local/google/home/phawkins/p/cpython/Objects/call.c:327:12 (python3.13+0x1eafaa)
#5 _PyEval_EvalFrameDefault /usr/local/google/home/phawkins/p/cpython/Python/generated_cases.c.h:813:23 (python3.13+0x3e24fb) (BuildId: 9c1c16fb1bb8a435fa6fa4c6944da5d41f654e96)
#6 _PyEval_EvalFrame /usr/local/google/home/phawkins/p/cpython/./Include/internal/pycore_ceval.h:119:16 (python3.13+0x3de62a) (BuildId: 9c1c16fb1bb8a435fa6fa4c6944da5d41f654e96)
#7 _PyEval_Vector /usr/local/google/home/phawkins/p/cpython/Python/ceval.c:1811:12 (python3.13+0x3de62a)
#8 _PyFunction_Vectorcall /usr/local/google/home/phawkins/p/cpython/Objects/call.c (python3.13+0x1eb61f) (BuildId: 9c1c16fb1bb8a435fa6fa4c6944da5d41f654e96)
#9 _PyObject_VectorcallTstate /usr/local/google/home/phawkins/p/cpython/./Include/internal/pycore_call.h:168:11 (python3.13+0x1ef5ef) (BuildId: 9c1c16fb1bb8a435fa6fa4c6944da5d41f654e96)
#10 method_vectorcall /usr/local/google/home/phawkins/p/cpython/Objects/classobject.c:70:20 (python3.13+0x1ef5ef)
#11 _PyVectorcall_Call /usr/local/google/home/phawkins/p/cpython/Objects/call.c:273:16 (python3.13+0x1eb293) (BuildId: 9c1c16fb1bb8a435fa6fa4c6944da5d41f654e96)
#12 _PyObject_Call /usr/local/google/home/phawkins/p/cpython/Objects/call.c:348:16 (python3.13+0x1eb293)
#13 PyObject_Call /usr/local/google/home/phawkins/p/cpython/Objects/call.c:373:12 (python3.13+0x1eb315) (BuildId: 9c1c16fb1bb8a435fa6fa4c6944da5d41f654e96)
#14 thread_run /usr/local/google/home/phawkins/p/cpython/./Modules/_threadmodule.c:337:21 (python3.13+0x564292) (BuildId: 9c1c16fb1bb8a435fa6fa4c6944da5d41f654e96)
#15 pythread_wrapper /usr/local/google/home/phawkins/p/cpython/Python/thread_pthread.h:243:5 (python3.13+0x4bd637) (BuildId: 9c1c16fb1bb8a435fa6fa4c6944da5d41f654e96)
```
I'd guess that this CPython code really needs to hold a mutex:
```c
if (PyUnicode_UTF8(unicode) == NULL) {
    if (unicode_fill_utf8(unicode) == -1) {
        if (psize) {
            *psize = -1;
        }
        return NULL;
    }
}
```
### CPython versions tested on:
3.13
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-128021
* gh-128061
* gh-128417
<!-- /gh-linked-prs -->
| 3c168f7f79d1da2323d35dcf88c2d3c8730e5df6 | 46dc1ba9c6e8b95635fa27607d01d6108d8f677e |
python/cpython | python__cpython-128009 | # Add `PyWeakref_IsDead()` to test if a weak reference is dead
# Feature or enhancement
```c
// Returns 1 if the pointed to object is dead, 0 if it's alive, and -1 with an error set if `ref` is not a weak reference.
int PyWeakref_IsDead(PyObject *ref);
```
### C API Working Group Issue
* https://github.com/capi-workgroup/decisions/issues/48
### Motivation
Prior to Python 3.13, you could check if a weak reference is dead via `PyWeakref_GetObject(ref) == Py_None`, but that function is now deprecated. You might try writing an "is dead" check using `PyWeakref_GetRef`. For example:
```c
int is_dead(PyObject *ref) {
    PyObject *tmp;
    if (PyWeakref_GetRef(ref, &tmp) < 0) {
        return -1;
    }
    else if (tmp == NULL) {
        return 1;
    }
    Py_DECREF(tmp);
    return 0;
}
```
In addition to not being ergonomic, the problem with this code is that the `Py_DECREF(tmp)` may introduce a side effect from calling a destructor, at least in the free threading build where some other thread may concurrently drop the last reference. Our internal [`_PyWeakref_IS_DEAD`](https://github.com/python/cpython/blob/47cbf038850852cdcbe7a404ed7c64542340d58a/Include/internal/pycore_weakref.h#L85) implementation avoids this problem, but it's not possible to reimplement that code using our existing public APIs.
This can be a problem when you need to check if a weak reference is dead within a lock, such as when cleaning up dictionaries or lists of weak references -- you don't want to execute arbitrary code via a destructor while holding the lock.
I've run into this in two C API extensions this week that are not currently thread-safe with free threading:
* CFFI uses a [dictionary](https://github.com/python-cffi/cffi/blob/88f48d22484586d48079fc8780241dfdfa3379c8/src/c/_cffi_backend.c#L461-L468) that maps string keys to unowned references. I'm working on updating it to use `PyWeakReference`, but the "is dead" clean-up checks are more difficult due to the above issues.
* Pandas [cleans up](https://github.com/pandas-dev/pandas/blob/45aa7a55a5a1ef38c5e242488d2419ae84dd874b/pandas/_libs/internals.pyx#L912-L915) a list of weak references. (The code is not currently thread-safe, and probably needs a lock.)
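At the Python level the same distinction exists: calling a weakref returns a new strong reference while the referent is alive (the side effect the proposed C API avoids), whereas a dead ref just returns None (sketch):

```python
import weakref

class Obj:
    pass

o = Obj()
r = weakref.ref(o)
# Calling the ref creates a fresh strong reference while alive -- the
# very side effect PyWeakref_IsDead() is designed to avoid in C.
assert r() is not None
del o
assert r() is None  # dead: no object returned, no destructor run here
```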
<!-- gh-linked-prs -->
### Linked PRs
* gh-128009
<!-- /gh-linked-prs -->
| 7b811d0562a0bf7433165785f1549ac199610f8b | b9b3e4a076caddf7876d1d4d762a117a26faffcf |
python/cpython | python__cpython-128147 | # Audit asyncio thread safety
The following functions need to be made thread safe and atomic under free-threading, for both the pure Python and C implementations:
- [x] `asyncio._enter_task`
- [x] `asyncio._leave_task`
- [x] `asyncio._register_task`
- [x] `asyncio._unregister_task`
- [x] `asyncio._swap_current_task`
- [x] `asyncio.current_task`
- [x] `asyncio.all_tasks`
Note that some of these were already made thread safe in the C implementation in https://github.com/python/cpython/issues/120974
The following classes need to be thread safe under free-threading in the C implementation:
- [x] `asyncio.Task`
- [x] `asyncio.Future`
Both of these classes are documented as not thread safe, but currently calling methods on them from multiple threads can crash the interpreter. The pure Python implementation cannot crash the interpreter when called from multiple threads, so changes are only needed for the C implementation. Before making these thread safe in C, I would gather some numbers on how much of a difference the C implementation makes under free-threading; if it isn't much, we can just disable the C extension for free-threading.
cc @colesbury
<!-- gh-linked-prs -->
### Linked PRs
* gh-128147
* gh-128256
* gh-128480
* gh-128541
* gh-128869
* gh-128885
* gh-129267
* gh-129942
* gh-129943
* gh-129995
* gh-130518
* gh-131106
* gh-131397
* gh-131399
* gh-131797
* gh-134362
* gh-134324
<!-- /gh-linked-prs -->
| 513a4efa75bf78c9d629ddabc9516fb058787289 | f1574859d7d6cd259f867194762f04b72ef2c340 |
python/cpython | python__cpython-127990 | # Refer to the GIL as a thread state in the C API
# Documentation
"The GIL" is used all over the place in the C API documentation, but it's not really correct, and is a bit misleading for users. The biggest issue is that for free-threading, there is no GIL, so users erroneously call the C API inside `Py_BEGIN_ALLOW_THREADS` blocks or omit `PyGILState_Ensure` in fresh threads. Similarly, PEP-684 lets us have *multiple* GILs. Which one is "the" GIL?
Really, what "holding the GIL" should actually mean these days is having an attached thread state (which also means holding the GIL for an interpreter on default builds). Thread states are documented, but not well. As far as I can tell, users know very little about them, other than that they hold some thread-specific information. That isn't great, because basically the only way to use subinterpreters right now is with thread state shenanigans, and the free-threading problem I mentioned above is also an issue.
Documenting them better, and replacing phrases like "the caller must hold the GIL" with "the caller must have an attached thread state" should help users understand both subinterpreters and nogil better. This will be a decently big change to the documentation, though. Right now, we're sort of all over the place in what a thread state means. From [PyInterpreterState_Get](https://docs.python.org/3/c-api/init.html#c.PyInterpreterState_Get)
> Get the current interpreter.
> Issue a fatal error if there is no current Python thread state or no current interpreter. It cannot return NULL.
> The caller must hold the GIL.
Most of this is redundant. If there is no thread state, then there is no current interpreter, and therefore no GIL can be held. Redundant differences like this make it seem like there *is* a meaningful difference (for users) between an attached thread state and holding the GIL, which isn't good.
I have a PR ready (which I'll submit as a draft), but I'm putting this up for discussion first, because I'm not too sure how people will feel about it.
<!-- gh-linked-prs -->
### Linked PRs
* gh-127990
<!-- /gh-linked-prs -->
| 86d5fa95cf9edf6c6fbc2a72080ac1ee87c4e941 | 83d54fa8760f54935086bb40a2e721f927e705b9 |
python/cpython | python__cpython-127980 | # `ast.unparse` is needlessly "hip" for f-string quotes in Python 3.12+
# Feature or enhancement
### Proposal:
Up until Python 3.11, the output of the following `ast.parse`/`unparse` run
```python
import ast
from sys import version_info as vi
print(f"{vi.major}.{vi.minor}.{vi.micro}")
print(ast.unparse(ast.parse("f\"{'.' * 5}\"")))
```
was
```
3.11.11
f"{'.' * 5}"
```
In Python 3.12, [new f-string features were introduced](https://docs.python.org/3/whatsnew/3.12.html#pep-701-syntactic-formalization-of-f-strings), allowing for more general quote combinations. The output of the above script in Python 3.12 and 3.13 is
```
3.12.7
f'{'.' * 5}'
```
While this is legal Python 3.12/3.13, this representation needlessly restricts the usability of the generated code: It will not work on Python 3.11 and earlier.
I would thus like to suggest for `ast.unparse` to return, if possible, the more compatible code; in this case
```
f"{'.' * 5}"
```
which works across all currently supported Python versions.
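Whatever quoting a given interpreter version picks, a version-agnostic way to check that the unparsed form is at least self-consistent is to round-trip it (a sketch, not part of the proposal itself):

```python
import ast

src = 'f"{\'.\' * 5}"'
out = ast.unparse(ast.parse(src))
# The exact quoting of `out` differs between 3.11 and 3.12+, but it must
# always evaluate back to the same value as the original source.
assert eval(out) == eval(src) == "....."
```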
### Has this already been discussed elsewhere?
No response given
### Links to previous discussion of this feature:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-127980
* gh-129600
* gh-129601
<!-- /gh-linked-prs -->
| 8df5193d37f70a1478642c4b456dcc7d6df6c117 | 95504f429eec04010d0b815345ebcc3af2402af0 |
python/cpython | python__cpython-132574 | # ASan: heap-buffer-overflow in ucs2lib_default_find
# Bug report
### Bug description:
**Environments**:
macOS 15.2
Homebrew 4.4.11
Homebrew clang version 19.1.5
Target: arm64-apple-darwin24.2.0
Thread model: posix
InstalledDir: /opt/homebrew/Cellar/llvm/19.1.5/bin
Configuration file: /opt/homebrew/etc/clang/arm64-apple-darwin24.cfg
Ubuntu 22.04.3 LTS
Codename: jammy
gcc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
**Address sanitizer report**
```=================================================================
==114762==ERROR: AddressSanitizer: heap-buffer-overflow on address 0x616000005982 at pc 0x5634dd2183f6 bp 0x7fff2ea0d040 sp 0x7fff2ea0d030
READ of size 2 at 0x616000005982 thread T0
#0 0x5634dd2183f5 in ucs2lib_default_find Objects/stringlib/fastsearch.h:600
#1 0x5634dd2183f5 in ucs2lib_fastsearch Objects/stringlib/fastsearch.h:775
#2 0x5634dd2183f5 in ucs2lib_find Objects/stringlib/find.h:18
#3 0x5634dd2183f5 in ucs2lib_find Objects/stringlib/find.h:8
#4 0x5634dd2183f5 in anylib_find Objects/unicodeobject.c:10226
#5 0x5634dd23d120 in replace Objects/unicodeobject.c:10384
#6 0x5634dd0b49b2 in method_vectorcall_FASTCALL Objects/descrobject.c:408
#7 0x5634dd08d80f in _PyObject_VectorcallTstate Include/internal/pycore_call.h:92
#8 0x5634dd08d80f in PyObject_Vectorcall Objects/call.c:325
#9 0x5634dcf3e5ba in _PyEval_EvalFrameDefault Python/bytecodes.c:2715
#10 0x5634dd345626 in _PyEval_EvalFrame Include/internal/pycore_ceval.h:89
#11 0x5634dd345626 in _PyEval_Vector Python/ceval.c:1683
#12 0x5634dd345626 in PyEval_EvalCode Python/ceval.c:578
#13 0x5634dd4451a2 in run_eval_code_obj Python/pythonrun.c:1716
#14 0x5634dd44535c in run_mod Python/pythonrun.c:1737
#15 0x5634dd44ba83 in pyrun_file Python/pythonrun.c:1637
#16 0x5634dd44ba83 in _PyRun_SimpleFileObject Python/pythonrun.c:433
#17 0x5634dd44c39b in _PyRun_AnyFileObject Python/pythonrun.c:78
#18 0x5634dd4b3223 in pymain_run_file_obj Modules/main.c:360
#19 0x5634dd4b3223 in pymain_run_file Modules/main.c:379
#20 0x5634dd4b3223 in pymain_run_python Modules/main.c:633
#21 0x5634dd4b482f in Py_RunMain Modules/main.c:713
#22 0x5634dd4b482f in pymain_main Modules/main.c:743
#23 0x5634dd4b482f in Py_BytesMain Modules/main.c:767
#24 0x7f8c83883d8f in __libc_start_call_main ../sysdeps/nptl/libc_start_call_main.h:58
#25 0x7f8c83883e3f in __libc_start_main_impl ../csu/libc-start.c:392
#26 0x5634dcf6c5e4 in _start (/space/src/python-heap-buffer-overflow/cpython/python+0x2ed5e4)
0x616000005982 is located 0 bytes to the right of 514-byte region [0x616000005780,0x616000005982)
allocated by thread T0 here:
#0 0x7f8c83c1e887 in __interceptor_malloc ../../../../src/libsanitizer/asan/asan_malloc_linux.cpp:145
#1 0x5634dd2002d1 in unicode_askind Objects/unicodeobject.c:2401
#2 0x5634dd23ceeb in replace Objects/unicodeobject.c:10363
#3 0x5634dd0b49b2 in method_vectorcall_FASTCALL Objects/descrobject.c:408
#4 0x5634dd08d80f in _PyObject_VectorcallTstate Include/internal/pycore_call.h:92
#5 0x5634dd08d80f in PyObject_Vectorcall Objects/call.c:325
#6 0x5634dcf3e5ba in _PyEval_EvalFrameDefault Python/bytecodes.c:2715
#7 0x5634dd345626 in _PyEval_EvalFrame Include/internal/pycore_ceval.h:89
#8 0x5634dd345626 in _PyEval_Vector Python/ceval.c:1683
#9 0x5634dd345626 in PyEval_EvalCode Python/ceval.c:578
#10 0x5634dd4451a2 in run_eval_code_obj Python/pythonrun.c:1716
#11 0x5634dd44535c in run_mod Python/pythonrun.c:1737
#12 0x5634dd44ba83 in pyrun_file Python/pythonrun.c:1637
#13 0x5634dd44ba83 in _PyRun_SimpleFileObject Python/pythonrun.c:433
#14 0x5634dd44c39b in _PyRun_AnyFileObject Python/pythonrun.c:78
#15 0x5634dd4b3223 in pymain_run_file_obj Modules/main.c:360
#16 0x5634dd4b3223 in pymain_run_file Modules/main.c:379
#17 0x5634dd4b3223 in pymain_run_python Modules/main.c:633
#18 0x5634dd4b482f in Py_RunMain Modules/main.c:713
#19 0x5634dd4b482f in pymain_main Modules/main.c:743
#20 0x5634dd4b482f in Py_BytesMain Modules/main.c:767
#21 0x7f8c83883d8f in __libc_start_call_main ../sysdeps/nptl/libc_start_call_main.h:58
SUMMARY: AddressSanitizer: heap-buffer-overflow Objects/stringlib/fastsearch.h:600 in ucs2lib_default_find
Shadow bytes around the buggy address:
0x0c2c7fff8ae0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
0x0c2c7fff8af0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
0x0c2c7fff8b00: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
0x0c2c7fff8b10: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
0x0c2c7fff8b20: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
=>0x0c2c7fff8b30:[02]fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
0x0c2c7fff8b40: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
0x0c2c7fff8b50: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
0x0c2c7fff8b60: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
0x0c2c7fff8b70: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
0x0c2c7fff8b80: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
Shadow byte legend (one shadow byte represents 8 application bytes):
Addressable: 00
Partially addressable: 01 02 03 04 05 06 07
Heap left redzone: fa
Freed heap region: fd
Stack left redzone: f1
Stack mid redzone: f2
Stack right redzone: f3
Stack after return: f5
Stack use after scope: f8
Global redzone: f9
Global init order: f6
Poisoned by user: f7
Container overflow: fc
Array cookie: ac
Intra object redzone: bb
ASan internal: fe
Left alloca redzone: ca
Right alloca redzone: cb
Shadow gap: cc
==114762==ABORTING
```
**Python code to reproduce**
```python
# reproduce.py
any_three_nonblank_codepoints = '!!!'
seven_codepoints = any_three_nonblank_codepoints + ' ' + any_three_nonblank_codepoints
a = (' ' * 243) + seven_codepoints + (' ' * 7)
#b = ' ' * 6 + chr(0) # OK
#b = ' ' * 6 + chr(255) # OK
b = ' ' * 6 + chr(256) # heap-buffer-overflow
a.replace(seven_codepoints, b)
```
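For context, on a regular (non-sanitized) build the same operation completes and appears to produce the correct result; the overread is only flagged by ASan. A hedged pure-Python check of the result (the `chr(256)` character is what forces the widened UCS-2 buffer seen in the `unicode_askind` allocation stack above):

```python
seven = '!!! !!!'
a = (' ' * 243) + seven + (' ' * 7)
b = ' ' * 6 + chr(256)  # non-latin-1 codepoint: triggers the UCS-2 path
res = a.replace(seven, b)
# The replacement itself is value-correct even on affected versions.
assert res == (' ' * 243) + b + (' ' * 7)
assert len(res) == len(a)
```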
**Shell script to git clone CPython, build, and elicit the warning from ASan:**
```bash
#!/bin/bash
git clone -b 3.12 https://github.com/python/cpython.git
cd cpython
./configure --with-address-sanitizer
make
cd ..
if [[ -x "cpython/python" ]]; then
cpython/python reproduce.py
elif [[ -x "cpython/python.exe" ]]; then
cpython/python.exe reproduce.py
else
echo "error: no CPython binary"
exit 1
fi
```
Verified to happen under 3.12 and 3.13; I did not try any earlier Python version.
### CPython versions tested on:
3.12, 3.13
### Operating systems tested on:
Linux, macOS
<!-- gh-linked-prs -->
### Linked PRs
* gh-132574
* gh-136628
* gh-136645
* gh-136648
<!-- /gh-linked-prs -->
| 85ec3b3b503ffd5b7e45f8b3fa2cec0c10e4bef0 | a93d9aaf62bb2565e9eec00a2a8d06a91305127b |
python/cpython | python__cpython-127972 | # Inconsistent platform support for taking the loaded `libpython` path into account in `getpath`
# Bug report
### Bug description:
The current `getpath.py` code tries determining `base_prefix`/`base_exec_prefix` by searching the location of the `libpython` library loaded in the current process, falling back to the location of the Python interpreter executable.
https://github.com/python/cpython/blob/7900a85019457c14e8c6abac532846bc9f26760d/Modules/getpath.py#L559-L594
Looking at the location of the `libpython` library in use first makes sense, as that is more reliable than looking at the interpreter location — it works when embedding, where there isn't any interpreter executable, it works when the executable is not on `base_prefix`, etc. However, this is only currently supported on Windows and macOS framework builds.
https://github.com/python/cpython/blob/7b8bd3b2b81f4aca63c5b603b56998f6b3ee2611/Modules/getpath.c#L802-L837
The spotty platform support struck me as odd, especially on macOS, as I see no apparent reason for only supporting framework builds, so I traced back the origin of this code.
The macOS logic goes back to Python 2.0, having been introduced in 54ecc3d24f52ae45ca54a24167e434915c88b60f. At this time, we were determining `base_prefix`/`base_exec_prefix` based on the Python interpreter location, which was problematic on [OS X Frameworks](https://developer.apple.com/library/archive/documentation/MacOSX/Conceptual/OSX_Technology_Overview/SystemFrameworks/SystemFrameworks.html), as the Python interpreter is provided via a launcher. The comment traces back to 55070f5d966f09256c603c3a82fab9b034430c6f and is unrelated to the change made by that commit; it just highlights the special case for macOS framework builds.
In GH-29041, which introduced `getpath.py`, rewriting the old path initialization C code in Python, the logic changed to purposefully search the `libpython` location before the executable location, also adding Windows support. I imagine the existing macOS code was kept as-is by mistake, leaving it behind the `WITH_NEXT_FRAMEWORK` flag, maybe under the assumption that it was needed for some reason.
Considering the clear intent in the code, I am treating this as a bug.
cc @zooba
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-127972
<!-- /gh-linked-prs -->
| 95cd9c669cdc7718198addb1abb49941a2c61fae | eb26e170695f15714b5e2ae0c0b83aa790c97869 |
python/cpython | python__cpython-134275 | # Executing `from . import *` within the REPL will import the `_pyrepl` package, but executing with the `-c` argument will result in an error.
# Bug report
### Bug description:
My working directory is under `MyProject`, which includes an `__init__.py` file and other necessary files to be recognized as a package.
But when I try to run the code `from . import *` in the shell using Python, the two invocation methods **(argument passing and REPL execution)** produce two different outcomes.
The folder looks like:
├── __main__.py
├── __init__.py
├── _implements.py
└── code.py
```bash
(PythonProject) C:\Users\KrisT\Desktop\PythonProject>python -c "from . import *"
Traceback (most recent call last):
File "<string>", line 1, in <module>
from . import *
ImportError: attempted relative import with no known parent package
(PythonProject) C:\Users\KrisT\Desktop\PythonProject>python -q
>>> import sys;print(sys.version_info)
sys.version_info(major=3, minor=13, micro=1, releaselevel='final', serial=0)
>>> from . import *
>>> dir()
['__annotations__', '__builtins__', '__cached__', '__doc__', '__file__', '__loader__', '__name__', '__package__', '__spec__', 'commands', 'completing_reader', 'console', 'historical_reader', 'input', 'keymap', 'main', 'reader', 'readline', 'simple_interact', 'sys', 'trace', 'types', 'utils', 'windows_console']
>>> print(console)
<module '_pyrepl.console' from 'E:\\Python\\Lib\\_pyrepl\\console.py'>
>>> print(input)
<module '_pyrepl.input' from 'E:\\Python\\Lib\\_pyrepl\\input.py'>
>>> print(keymap)
<module '_pyrepl.keymap' from 'E:\\Python\\Lib\\_pyrepl\\keymap.py'>
>>>
```
It loaded the _pyrepl package instead of my PythonProject.
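The divergence comes down to what `__package__` is in each entry mode. Under `-c`, `__main__` has no parent package, so the relative import correctly fails; a quick check (sketch, with a tolerant assertion covering both `None` and `''`):

```python
import subprocess
import sys

out = subprocess.run(
    [sys.executable, "-c", "print(repr(__package__))"],
    capture_output=True, text=True,
).stdout.strip()
# With -c there is no known parent package, hence the ImportError above;
# the REPL path differs because it runs inside the _pyrepl package.
assert out in ("None", "''")
```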
### CPython versions tested on:
3.13
### Operating systems tested on:
Windows
<!-- gh-linked-prs -->
### Linked PRs
* gh-134275
* gh-134473
<!-- /gh-linked-prs -->
| b1b8962443e7d418601658a4b05347a5a9161910 | a66bae8bb52721ea597ade6222f83876f9e939ba |
python/cpython | python__cpython-127986 | # `PyObject_DelItemString` is not documented
# Documentation
`PyObject_DelItemString` is part of the limited C-API and is undocumented. This function was implemented in commit https://github.com/python/cpython/commit/b0d71d0ec6e575b7c379d55cb8366b26509ece53 , but no documentation was added.
<!-- gh-linked-prs -->
### Linked PRs
* gh-127986
* gh-128496
* gh-128497
<!-- /gh-linked-prs -->
| 8ade15343d5daec3bf79ff7c47f03726fb2bcadf | 87ee76062a7eb9c0fa2b94e36cfed21d86ae90ac |
python/cpython | python__cpython-127952 | # Add option to display pystats on windows
# Feature or enhancement
### Proposal:
The internal Python performance statistics gathering option [PyStats](https://docs.python.org/3/using/configure.html#cmdoption-enable-pystats) is not available on Windows.
### Has this already been discussed elsewhere?
This is a minor feature, which does not need previous discussion elsewhere
### Links to previous discussion of this feature:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-127952
<!-- /gh-linked-prs -->
| b9b3e4a076caddf7876d1d4d762a117a26faffcf | b5d1e4552f0ba40d8380368e1b099261686a89cf |
python/cpython | python__cpython-128024 | # Deprecate asyncio policy system
#### asyncio's policy system deprecation
asyncio's policy system[^1] has been a source of confusion and problems in asyncio for a very long time. The policies no longer serve a real purpose. Loops are always per thread, there is no need to have a "current loop" when no loop is currently running. This issue discusses the changes to deprecate it in Python 3.14 and schedule its removal in 3.16 or later.
The usual user applications would use the runner APIs (see flowchart), while those who want more control, like the Jupyter project, can create an event loop and manage it themselves. The difference is that instead of first getting the policy and then the event loop, they can create the loop directly, like
```py
loop = MyCustomEventLoop()
loop.run_until_complete(task)
```
rather than currently
```py
asyncio.set_event_loop_policy(MyPolicy())
policy = asyncio.get_event_loop_policy()
loop = policy.new_event_loop()
loop.run_until_complete(task)
```
See these discussions for more background:
- https://github.com/python/cpython/issues/83710#issuecomment-1093855411
- https://github.com/python/cpython/issues/93453#issue-1259542498
- https://github.com/python/cpython/issues/94597#issuecomment-1270840197
- https://github.com/python/cpython/issues/94597
##### Functions and classes to be deprecated and later removed
- [x] `asyncio.get_event_loop_policy`
- [x] `asyncio.set_event_loop_policy`
- [x] `asyncio.AbstractEventLoopPolicy`
- [x] `asyncio.DefaultEventLoopPolicy`
- [x] `asyncio.WindowsSelectorEventLoopPolicy`
- [x] `asyncio.WindowsProactorEventLoopPolicy`
- [x] `asyncio.set_event_loop`
##### Functions to be modified
- `asyncio.get_event_loop` - In 3.16 or later this will become an alias to `get_running_loop`.
- `asyncio.new_event_loop` - In 3.16 or later this will ignore custom policies and will be an alias to `asyncio.EventLoop`
- `asyncio.run` & `asyncio.Runner` - In 3.16 or later this will be modified to not use policy system as that will be gone and rely solely on *loop_factory*.
#### The Grand Plan
- To minimize changes, all the deprecated functions will be underscored, i.e. `set_event_loop` -> `_set_event_loop`, and `set_event_loop` will emit the warning and then call `_set_event_loop` as its underlying implementation. This way asyncio can still call these functions internally until they are removed without needing to suppress many warnings, and the tests too can easily be adapted.
- The deprecated classes will emit warnings when they are subclassed as it was done for child watchers.
- The runner APIs will remain unmodified, but we will make sure that the correct warnings are emitted internally when the policy system is used.
#### The Future
```mermaid
---
title: Flowchart for asyncio.run
---
flowchart TD
A["asyncio.run(coro, loop_factory=...)"] --> B{loop_factory}
B -->|loop_factory is None| D{platform}
B -->|loop_factory is not None| E["loop = loop_factory()"]
D --> |Unix| F["loop = SelectorEventLoop()"]
D --> |Windows| G["loop = ProactorEventLoop()"]
E --> H
F --> H
G --> H
H["loop.run_until_complete(coro)"]
```
[^1]: https://docs.python.org/3.14/library/asyncio-policy.html
<!-- gh-linked-prs -->
### Linked PRs
* gh-128024
* gh-128053
* gh-128172
* gh-128215
* gh-128216
* gh-128218
* gh-128269
* gh-128290
* gh-131805
* gh-131806
<!-- /gh-linked-prs -->
| 5892853fb71acd6530e1e241a9a4bcf71a61fb21 | 559b0e7013f9cbf42a12fe9c86048d5cbb2f6f22 |
python/cpython | python__cpython-128109 | # Crash when modifying DLL function pointers concurrently
It seems that `ctypes` doesn't like having function pointers on a `CDLL` modified concurrently. Here's a reproducer (for Linux):
```py
import ctypes
from threading import Thread
dll = ctypes.CDLL("libc.so.6")
def main():
for _ in range(100):
dll.puts.argtypes = ctypes.c_char_p,
dll.puts.restype = ctypes.c_int
threads = [Thread(target=main) for _ in range(100)]
for thread in threads:
thread.start()
```
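Until this is fixed, the safe pattern is to configure each function's prototype once, from a single thread, before any concurrent use. A Linux-oriented sketch (the `strlen` example and libc lookup are illustrative, not from the reproducer):

```python
import ctypes
import ctypes.util
import threading

libc = ctypes.CDLL(ctypes.util.find_library("c"))

# Set argtypes/restype exactly once, up front, from one thread.
strlen = libc.strlen
strlen.argtypes = (ctypes.c_char_p,)
strlen.restype = ctypes.c_size_t

results = [None] * 8

def worker(i):
    # Concurrent *calls* only; no concurrent mutation of the prototype.
    results[i] = strlen(b"hello")

threads = [threading.Thread(target=worker, args=(i,)) for i in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
assert all(r == 5 for r in results)
```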
<!-- gh-linked-prs -->
### Linked PRs
* gh-128109
<!-- /gh-linked-prs -->
| ba45e5cdd41a39ce0b3de08bdcfa9d8e28e0e4f3 | cbfe3023e46b544b80ea1a38a8c900c6fb881554 |
python/cpython | python__cpython-131662 | # `ctypes` thread safety auditing (and fixing)
# Feature or enhancement
This is a tracking issue for all thread safety problems related to ctypes. I'll be working on this, but others can feel free to give me some help.
First of all, we need to find where the thread safety problems are.
```[tasklist]
### Auditing
- [x] Audit `_ctypes.c`
- [ ] Audit `_ctypes_test.c` (and probably the generated file too)
- [ ] Audit `callbacks.c`
- [ ] Audit `callproc.c`
- [ ] Audit `cfield.c`
- [x] Audit `malloc_closure.c`
- [ ] Audit `stgdict.c`
```
I'll be tracking the issues that get found when auditing here. The plan is to just create a new issue and link to it for each new problem instead of flooding this issue with PRs.
Generally, the workflow for fixes should follow most of the rules from #116738, but I suspect we'll need recursive mutexes for quite a few things related to callbacks, because it's difficult to tell what might be re-entrant, and we can't use critical sections for arbitrary function pointers.
```[tasklist]
### Known Issues
- [ ] https://github.com/python/cpython/issues/127946
- [ ] https://github.com/python/cpython/issues/128182
- [ ] https://github.com/python/cpython/issues/128485
- [ ] https://github.com/python/cpython/issues/128567
- [ ] https://github.com/python/cpython/issues/128570
- [ ] https://github.com/python/cpython/issues/131974
```
cc @encukou, as the ctypes genius, and @colesbury as the free-threading mastermind.
<!-- gh-linked-prs -->
### Linked PRs
* gh-131662
* gh-131710
* gh-131716
* gh-131896
* gh-131898
* gh-131899
* gh-132473
* gh-132552
* gh-132575
* gh-132646
* gh-132682
* gh-132720
* gh-132727
* gh-134332
* gh-134364
<!-- /gh-linked-prs -->
| 96ef4c511f3ec763dbb06a1f3c23c658a09403a1 | 6fb5f7f4d9d22c49f5c29d2ffcbcc32b6cd7d06a |
python/cpython | python__cpython-127939 | # Remove private _PyLong_FromDigits() function
With [PEP 757](https://peps.python.org/pep-0757/), this function can be replaced by the PyLongWriter API.
I was able to find [just one project](https://github.com/xia-mc/PyFastUtil/blob/7183e961c5eb3da879f002e4e81e91cbcc9702e3/pyfastutil/src/utils/PythonUtils.h#L125) on GitHub using this function. On the other hand, this function was removed and then restored in #112026. So, maybe we should first deprecate this function.
<!-- gh-linked-prs -->
### Linked PRs
* gh-127939
* gh-127925
<!-- /gh-linked-prs -->
| 233fd00f0a19d33932e35f2fb6794ecae745b780 | 3d8fc8b9ae5beec852acf1a0e8102da030eeb1aa |
python/cpython | python__cpython-128530 | # Remove private _PyLong_New() function
With [PEP 757](https://peps.python.org/pep-0757/), this function can be replaced by the PyLongWriter API.
Although this is a private function, it's used by some projects (e.g. Sagemath). Maybe we should first deprecate this function.
<!-- gh-linked-prs -->
### Linked PRs
* gh-128530
<!-- /gh-linked-prs -->
| d7d066c3ab6842117f9e0fb1c9dde4bce00fa1e3 | 1d485db953fa839942a609202ace49d03708f797 |
python/cpython | python__cpython-128003 | # Optionally run Python regression tests in parallel
# Feature or enhancement
This proposes adding an option to the [regression test runner](https://docs.python.org/3/library/test.html#module-test.regrtest) to run individual tests multiple times in parallel with the goal of uncovering multithreading bugs, especially in the free threading build.
Note that this is different from `-j, --multiprocess`, which uses multiple processes to run different test files in parallel. The motivation of `-j` is to speed up the time it takes to run tests, and the use of processes improves isolation. Here the goal is to detect thread-safety bugs (at the cost of extra time).
This is motivated by the experience of using https://github.com/Quansight-Labs/pytest-run-parallel, which is a pytest plugin written by @lysnikolaou and @andfoy. We've used that effectively to find thread-safety bugs in C API extensions while working on free threading compatibility.
The proposed changes are limited to the Python-internal `test.regrtest` and `test.support` package and not the public `unittest` package.
### New command line arguments
These are chosen to match https://github.com/Quansight-Labs/pytest-run-parallel. I think using the same names across the Python ecosystem makes things a bit easier to remember if you work on multiple projects.
* `--parallel-threads=N [default=0]`, runs each test in N threads
* `--iterations=N [default=1]`, number of times to run each test
### Other support
Some of our test cases are not thread-safe (even with the GIL) because they modify global state or use modules that are not thread-safe (like `warnings`):
* `@support.thread_unsafe` - marks a test case as not safe to run in multiple threads. The test will be run once in the main thread (like normal) even when `--parallel-threads=N` is specified.
### Similar projects
There are a few existing projects that do essentially this for pytest and unittest, but they're not directly usable for Python's own test suite.
* https://github.com/Quansight-Labs/pytest-run-parallel
* https://github.com/tonybaloney/pytest-freethreaded
* https://github.com/amyreese/unittest-ft
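A rough pure-Python sketch of what `--parallel-threads`/`--iterations` amount to (names and details here are illustrative, not the actual regrtest implementation):

```python
import threading

def run_parallel(test, num_threads=4, iterations=1):
    """Invoke one test callable simultaneously from several threads."""
    for _ in range(iterations):
        barrier = threading.Barrier(num_threads)
        errors = []

        def runner():
            barrier.wait()  # line all threads up to maximize overlap
            try:
                test()
            except Exception as exc:
                errors.append(exc)  # collect now, re-raise after join

        threads = [threading.Thread(target=runner) for _ in range(num_threads)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()
        if errors:
            raise errors[0]
```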
<!-- gh-linked-prs -->
### Linked PRs
* gh-128003
<!-- /gh-linked-prs -->
| e5f10a741408206e61cf793451cbd373bbe61594 | 285c1c4e9543299c8bf69ceb39a424782b8c632e |
python/cpython | python__cpython-127876 | # Crash in `Objects/unicodeobject.c::_copy_characters` when there is nothing to copy (DEBUG build only)
# Crash report
### What happened?
Reproducible on Python 3.12.8+ or 3.13.1+; `main` works fine, but there is a problem there too:
```bash
./configure --with-pydebug
make
./python --version
Python 3.13.1+
./python -c 'import datetime as dt; dt.datetime(2013, 11, 10, 14, 20, 59).strftime("%z")'
Segmentation fault
```
No need to check `to` if we don't write there
Fix in #127876
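On a fixed (or non-debug) build, the same call simply returns an empty string, since a naive datetime has no UTC offset for `%z` to render:

```python
import datetime as dt

s = dt.datetime(2013, 11, 10, 14, 20, 59).strftime("%z")
assert s == ""  # naive datetime: no offset, so %z renders nothing
```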
### CPython versions tested on:
3.12, 3.13
### Operating systems tested on:
Linux
### Output from running 'python -VV' on the command line:
Python 3.13.1+ (heads/3.13:d51c1444e3, Dec 12 2024, 19:36:26) [GCC 9.4.0]
<!-- gh-linked-prs -->
### Linked PRs
* gh-127876
* gh-128458
* gh-128459
<!-- /gh-linked-prs -->
| 46cb6340d7bad955edfc0a20f6a52dabc03b0932 | 4c14f03495724f2c52de2d34f1bfa35dd94757c0 |
python/cpython | python__cpython-127932 | # Unable to build a HACL* module on macOS under Catalina
# Bug report
### Bug description:
The recent HACL* change in Python 3.14.0a2 causes local build breakage on macOS versions below Catalina due to the lack of support for `aligned_alloc` (and `memset_s` on older macOS < 10.9).
Fixes for the affected dependency have already been submitted to the repositories:
- https://github.com/FStarLang/karamel/pull/504
- https://github.com/hacl-star/hacl-star/pull/1012
### CPython versions tested on:
3.14
### Operating systems tested on:
macOS
<!-- gh-linked-prs -->
### Linked PRs
* gh-127932
<!-- /gh-linked-prs -->
| 329165639f9ac00ba64f6493dbcafcef6955e2cb | 5892853fb71acd6530e1e241a9a4bcf71a61fb21 |
python/cpython | python__cpython-127979 | # `PySequence_In` is not documented
# Documentation
`PySequence_In` is part of the limited C-API and is undocumented. PEP 3100 ( https://peps.python.org/pep-3100/ ) says this function is "To be removed", but it has not been removed in the latest version, so documentation seems to be needed.
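The symbol is still exported, so it can even be exercised from Python via `ctypes` (a sketch; it behaves like `PySequence_Contains`, returning 1 if found, 0 if not, -1 on error):

```python
import ctypes

PySequence_In = ctypes.pythonapi.PySequence_In
PySequence_In.argtypes = (ctypes.py_object, ctypes.py_object)
PySequence_In.restype = ctypes.c_int

# Same semantics as PySequence_Contains.
assert PySequence_In([1, 2, 3], 2) == 1
assert PySequence_In([1, 2, 3], 9) == 0
```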
<!-- gh-linked-prs -->
### Linked PRs
* gh-127979
<!-- /gh-linked-prs -->
| 52d552cda7614c7aa9f08b680089c630587e747f | 0d8e7106c260e96c4604f501165bd106bff51f6b |
python/cpython | python__cpython-127880 | # Race in `_PyFreeList_Push`
# Bug report
Seen in https://github.com/python/cpython/actions/runs/12216713951/job/34310688405?pr=126865:
Reported by @markshannon
```
==================
WARNING: ThreadSanitizer: data race (pid=20210)
Write of size 8 at 0x7fbb3e0415f0 by thread T31:
#0 _PyFreeList_Push /home/runner/work/cpython/cpython/./Include/internal/pycore_freelist.h:54:23 (python+0x2cbe10) (BuildId: 7dc3814d1a5683dfc5de7e8c3c8846a47a478499)
#1 _PyFreeList_Free /home/runner/work/cpython/cpython/./Include/internal/pycore_freelist.h:67:10 (python+0x2cbe10)
#2 long_dealloc /home/runner/work/cpython/cpython/Objects/longobject.c:3667:[13](https://github.com/python/cpython/actions/runs/12216713951/job/34310688405?pr=126865#step:13:14) (python+0x2cbe10)
#3 _Py_Dealloc /home/runner/work/cpython/cpython/Objects/object.c:2977:5 (python+0x3186c4) (BuildId: 7dc3814d1a5683dfc5de7e8c3c8846a47a478499)
#4 _Py_MergeZeroLocalRefcount /home/runner/work/cpython/cpython/Objects/object.c (python+0x318d15) (BuildId: 7dc38[14](https://github.com/python/cpython/actions/runs/12216713951/job/34310688405?pr=126865#step:13:15)d1a5683dfc5de7e8c3c8846a47a478499)
#5 Py_DECREF /home/runner/work/cpython/cpython/./Include/refcount.h:323:13 (python+0x5ea28e) (BuildId: 7dc3814d1a5683dfc5de7e8c3c8846a47a478499)
#6 Py_XDECREF /home/runner/work/cpython/cpython/./Include/refcount.h:476:9 (python+0x5ea28e)
#7 PyMember_SetOne /home/runner/work/cpython/cpython/Python/structmember.c:315:9
...
Previous atomic read of size 8 at 0x7fbb3e0415f0 by thread T32:
#0 _Py_atomic_load_uintptr_relaxed /home/runner/work/cpython/cpython/./Include/cpython/pyatomic_gcc.h:375:10 (python+0x5e96b9) (BuildId: 7dc[38](https://github.com/python/cpython/actions/runs/12216713951/job/34310688405?pr=126865#step:13:39)14d1a5683dfc5de7e8c3c8846a47a478499)
#1 _Py_IsOwnedByCurrentThread /home/runner/work/cpython/cpython/./Include/object.h:242:12 (python+0x5e96b9)
#2 _Py_TryIncrefFast /home/runner/work/cpython/cpython/./Include/internal/pycore_object.h:547:9 (python+0x5e96b9)
#3 _Py_TryIncrefCompare /home/runner/work/cpython/cpython/./Include/internal/pycore_object.h:586:9 (python+0x5e96b9)
#4 PyMember_GetOne /home/runner/work/cpython/cpython/Python/structmember.c:99:18 (python+0x5e93d5) (BuildId: 7dc3814d1a5683dfc5de7e8c3c8846a47a478499)
#5 member_get /home/runner/work/cpython/cpython/Objects/descrobject.c:179:12 (python+0x268c59) (BuildId: 7dc3814d1a5683dfc5de7e8c3c8846a47a478499)
#6 _PyObject_GenericGetAttrWithDict /home/runner/work/cpython/cpython/Objects/object.c:1690:19 (python+0x31d32b) (BuildId: 7dc3814d1a5683dfc5de7e8c3c8846a47a478499)
#7 PyObject_GenericGetAttr /home/runner/work/cpython/cpython/Objects/object.c:1772:12 (python+0x31d1[44](https://github.com/python/cpython/actions/runs/12216713951/job/34310688405?pr=126865#step:13:45)) (BuildId: 7dc3814d1a5683dfc5de7e8c3c8846a47a478499)
#8 PyObject_GetAttr /home/runner/work/cpython/cpython/Objects/object.c:1286:18 (python+0x31c6da) (BuildId: 7dc3814d1a5683dfc5de7e8c3c8846a47a478499)
#9 _PyEval_EvalFrameDefault /home/runner/work/cpython/cpython/Python/generated_cases.c.h:5254:30 (python+0x4da626) (BuildId: 7dc3814d1a5683dfc5de7e8c3c88
```
We should be using a relaxed atomic write in pycore_freelist.h.
The background is that we have a few places (dict, list, structmember) that have a fast-path that attempts to avoid a lock. (https://peps.python.org/pep-0703/#optimistically-avoiding-locking). They may access the ob_tid and refcount fields while the object is freed (but the memory is still valid).
The freelist overwrites the first field (ob_tid in the free threading build). That's okay semantically because valid pointers are distinct from the thread ids we use for ob_tid (the GC also relies on this property). However, we still need to use an atomic operation (relaxed is fine here) when we write the field.
<!-- gh-linked-prs -->
### Linked PRs
* gh-127880
<!-- /gh-linked-prs -->
| f8dcb8200626a1a06c4a26d8129257f42658a9ff | 7146f1894638130940944d4808dae7d144d46227 |
python/cpython | python__cpython-127877 | # `can_colorize()` ignores `FORCE_COLOR`/`NO_COLOR`/`TERM` when `-E` is set
# Bug report
### Bug description:
Setting any of the `FORCE_COLOR`, `NO_COLOR` or `TERM=dumb` environment variables is ignored when you run Python with `-E`.
* https://docs.python.org/3/using/cmdline.html#using-on-controlling-color
`-E` means:
> Ignore all `PYTHON*` environment variables, e.g. [PYTHONPATH](https://docs.python.org/3/using/cmdline.html#envvar-PYTHONPATH) and [PYTHONHOME](https://docs.python.org/3/using/cmdline.html#envvar-PYTHONHOME), that might be set.
* https://docs.python.org/3/using/cmdline.html#cmdoption-E
The `-E` is stored in `sys.flags.ignore_environment`.
* https://docs.python.org/3/library/sys.html#sys.flags.ignore_environment
`sys.flags.ignore_environment` is used to ignore `PYTHON_COLORS` (correct) but it's also ignoring these other env vars (incorrect).
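A sketch of the intended precedence (a hypothetical standalone helper, not the real `_colorize` code): only `PYTHON_COLORS` is a `PYTHON*` variable, so only it should be gated on `-E`, while `FORCE_COLOR`, `NO_COLOR` and `TERM` are always honoured.

```python
import os

def can_colorize(*, environ=os.environ, ignore_environment=False):
    # PYTHON_COLORS is a PYTHON* variable: skip it under -E.
    if not ignore_environment:
        if environ.get("PYTHON_COLORS") == "0":
            return False
        if environ.get("PYTHON_COLORS") == "1":
            return True
    # These are not PYTHON* variables, so -E should not affect them.
    if "FORCE_COLOR" in environ:
        return True
    if "NO_COLOR" in environ:
        return False
    if environ.get("TERM") == "dumb":
        return False
    return True  # the real code additionally checks isatty() here
```

With this ordering, `NO_COLOR=1 python -E` would correctly disable colour.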
For example, this is not colourised, as expected:
```
❯ NO_COLOR=1 python3.13 -c 1/0
Traceback (most recent call last):
File "<string>", line 1, in <module>
1/0
~^~
ZeroDivisionError: division by zero
```
However, `NO_COLOR=1` is ignored when passing `-E` and the output has colour when it should not:
```
❯ NO_COLOR=1 python3.13 -E -c 1/0
Traceback (most recent call last):
File "<string>", line 1, in <module>
1/0
~^~
ZeroDivisionError: division by zero
```
This bit needs updating:
https://github.com/python/cpython/blob/487fdbed40734fd7721457c6f6ffeca03da0b0e7/Lib/_colorize.py#L43-L56
### CPython versions tested on:
3.13, 3.14
### Operating systems tested on:
Linux, macOS, Windows
<!-- gh-linked-prs -->
### Linked PRs
* gh-127877
* gh-129138
<!-- /gh-linked-prs -->
| 05d12eecbde1ace39826320cadf8e673d709b229 | 13475e0a5a317fa61f302f030b0effcb021873d6 |
python/cpython | python__cpython-127872 | # Segfaults in ctypes _as_parameter_ handling when called with `MagicMock`
# Crash report
### What happened?
It's possible to segfault the interpreter by calling any of the three functions below with a `MagicMock` as the argument. It takes a long time to trigger the crash (up to 3 minutes on my slow machine).
```python
from unittest.mock import MagicMock
import _pyrepl._minimal_curses
obj = _pyrepl._minimal_curses.tparm(MagicMock(), 0, 0, 0, 0, 0, 0, 0)
obj = _pyrepl._minimal_curses.setupterm(MagicMock(), 0)
obj = _pyrepl._minimal_curses.tigetstr(MagicMock())
```
The backtrace is very long, with over 87k entries in one case. Here's part of it:
```gdb
#0 0x00005555556b04e6 in compare_unicode_unicode (mp=0x0, dk=0x55556ce6f460, ep0=0x55556ce6f500, ix=38, key=0x7ffff73ae860, hash=5657245593745306375)
at Objects/dictobject.c:1136
#1 0x00005555556af1cb in do_lookup (mp=mp@entry=0x0, dk=dk@entry=0x55556ce6f460, key=key@entry=0x7ffff73ae860, hash=hash@entry=5657245593745306375,
check_lookup=check_lookup@entry=0x5555556b04da <compare_unicode_unicode>) at Objects/dictobject.c:1066
#2 0x00005555556af242 in unicodekeys_lookup_unicode (dk=dk@entry=0x55556ce6f460, key=key@entry=0x7ffff73ae860, hash=hash@entry=5657245593745306375)
at Objects/dictobject.c:1151
#3 0x00005555556b3c99 in _Py_dict_lookup (mp=0x7fffbe6a3c50, key=key@entry=0x7ffff73ae860, hash=hash@entry=5657245593745306375,
value_addr=value_addr@entry=0x7fffff7ff0f0) at Objects/dictobject.c:1265
#4 0x00005555556b4746 in _PyDict_GetItemRef_KnownHash (op=<optimized out>, key=key@entry=0x7ffff73ae860, hash=hash@entry=5657245593745306375,
result=result@entry=0x7fffff7ff130) at Objects/dictobject.c:2317
#5 0x00005555556ff874 in find_name_in_mro (type=type@entry=0x55556ce6cca0, name=name@entry=0x7ffff73ae860, error=error@entry=0x7fffff7ff194)
at Objects/typeobject.c:5108
#6 0x00005555556ffa3c in _PyType_LookupRef (type=type@entry=0x55556ce6cca0, name=name@entry=0x7ffff73ae860) at Objects/typeobject.c:5260
#7 0x00005555556ca8b8 in _PyObject_GenericSetAttrWithDict (obj=obj@entry=0x7fffbe6b9920, name=0x7ffff73ae860, value=0x555555ae8100 <_Py_NoneStruct>,
dict=dict@entry=0x0) at Objects/object.c:1773
#8 0x00005555556cab5f in PyObject_GenericSetAttr (obj=obj@entry=0x7fffbe6b9920, name=<optimized out>, value=<optimized out>) at Objects/object.c:1849
#9 0x00005555556f541e in wrap_setattr (self=0x7fffbe6b9920, args=<optimized out>, wrapped=0x5555556cab4d <PyObject_GenericSetAttr>) at Objects/typeobject.c:8792
#10 0x000055555567ee41 in wrapperdescr_raw_call (descr=descr@entry=0x7ffff7b002f0, self=self@entry=0x7fffbe6b9920, args=args@entry=0x7fffc5b5bd90,
kwds=kwds@entry=0x0) at Objects/descrobject.c:531
#11 0x000055555567f2c5 in wrapperdescr_call (_descr=_descr@entry=0x7ffff7b002f0, args=0x7fffc5b5bd90, args@entry=0x7fffc5a48c50, kwds=kwds@entry=0x0)
at Objects/descrobject.c:569
#12 0x0000555555672a51 in _PyObject_MakeTpCall (tstate=tstate@entry=0x555555b564c0 <_PyRuntime+299040>, callable=callable@entry=0x7ffff7b002f0,
args=args@entry=0x7ffff7e29850, nargs=<optimized out>, keywords=keywords@entry=0x0) at Objects/call.c:242
#13 0x0000555555672c91 in _PyObject_VectorcallTstate (tstate=0x555555b564c0 <_PyRuntime+299040>, callable=callable@entry=0x7ffff7b002f0,
args=args@entry=0x7ffff7e29850, nargsf=<optimized out>, kwnames=kwnames@entry=0x0) at ./Include/internal/pycore_call.h:166
#14 0x0000555555672ce7 in PyObject_Vectorcall (callable=callable@entry=0x7ffff7b002f0, args=args@entry=0x7ffff7e29850, nargsf=<optimized out>,
kwnames=kwnames@entry=0x0) at Objects/call.c:327
#15 0x00005555557a293c in _PyEval_EvalFrameDefault (tstate=0x555555b564c0 <_PyRuntime+299040>, frame=0x7ffff7e297c8, throwflag=0) at Python/generated_cases.c.h:1839
#16 0x00005555557b0957 in _PyEval_EvalFrame (tstate=tstate@entry=0x555555b564c0 <_PyRuntime+299040>, frame=<optimized out>, throwflag=throwflag@entry=0)
at ./Include/internal/pycore_ceval.h:119
#17 0x00005555557b0a76 in _PyEval_Vector (tstate=0x555555b564c0 <_PyRuntime+299040>, func=0x7ffff6a893d0, locals=locals@entry=0x0, args=0x7fffff7ff650, argcount=3,
kwnames=0x0) at Python/ceval.c:1807
#18 0x00005555556728a2 in _PyFunction_Vectorcall (func=<optimized out>, stack=<optimized out>, nargsf=<optimized out>, kwnames=<optimized out>) at Objects/call.c:413
#19 0x00005555556f4c21 in _PyObject_VectorcallTstate (tstate=0x555555b564c0 <_PyRuntime+299040>, callable=0x7ffff6a893d0, args=0x7fffff7ff650, nargsf=3,
kwnames=kwnames@entry=0x0) at ./Include/internal/pycore_call.h:168
#20 0x00005555556f4cd8 in vectorcall_unbound (tstate=<optimized out>, unbound=<optimized out>, func=<optimized out>, args=<optimized out>, nargs=<optimized out>)
at Objects/typeobject.c:2566
#21 0x0000555555700c61 in vectorcall_method (name=<optimized out>, args=args@entry=0x7fffff7ff650, nargs=nargs@entry=3) at Objects/typeobject.c:2597
```
I realize these functions are implemented with `ctypes` and are internal to an internal package, so this issue may be of very low importance. I'm just reporting it in case it points to some interesting related bug.
Found using [fusil](https://github.com/devdanzin/fusil) by @vstinner.
### CPython versions tested on:
3.13, 3.14, CPython main branch
### Operating systems tested on:
Linux
### Output from running 'python -VV' on the command line:
Python 3.13.1+ (heads/3.13:d51c1444e36, Dec 12 2024, 11:22:09) [GCC 11.4.0]
<!-- gh-linked-prs -->
### Linked PRs
* gh-127872
* gh-127917
* gh-127918
<!-- /gh-linked-prs -->
| 6ff38fc4e2af8e795dc791be6ea596d2146d4119 | e62e1ca4553dbcf9d7f89be24bebcbd9213f9ae5 |
python/cpython | python__cpython-127866 | # Build failure without thread local support
# Bug report
### Bug description:
When building environments without thread local support, this build failure occurs:
```
~/cpython-3.13.1/Python/import.c:750: error: argument of type "PyMutex" is incompatible with parameter of type "PyThread_type_lock"
PyThread_acquire_lock(EXTENSIONS.mutex, WAIT_LOCK);
^
~/cpython-3.13.1/Python/import.c:760: error: argument of type "PyMutex" is incompatible with parameter of type "PyThread_type_lock"
PyThread_release_lock(EXTENSIONS.mutex);
^
~/cpython-3.13.1/Python/import.c:769: error: argument of type "PyMutex" is incompatible with parameter of type "PyThread_type_lock"
PyThread_acquire_lock(EXTENSIONS.mutex, WAIT_LOCK);
^
~/cpython-3.13.1/Python/import.c:774: error: argument of type "PyMutex" is incompatible with parameter of type "PyThread_type_lock"
PyThread_release_lock(EXTENSIONS.mutex);
```
The issue is caused by incomplete change introduced with the commit 628f6eb from the PR https://github.com/python/cpython/pull/112207.
### CPython versions tested on:
3.13
### Operating systems tested on:
Other
<!-- gh-linked-prs -->
### Linked PRs
* gh-127866
* gh-127882
<!-- /gh-linked-prs -->
| f823910bbd4bf01ec3e1ab7b3cb1d77815138296 | f8dcb8200626a1a06c4a26d8129257f42658a9ff |
python/cpython | python__cpython-127878 | # New warning in the main: Python/import.c [-Wstringop-truncation]
# Bug report
### Bug description:
```
$ cc --version|head -1
cc (Debian 12.2.0-14) 12.2.0
$ ./configure -q && make -s
In function ‘hashtable_key_from_2_strings’,
inlined from ‘_extensions_cache_find_unlocked’ at Python/import.c:1266:17:
Python/import.c:1179:5: warning: ‘strncpy’ output truncated before terminating nul copying as many bytes from a string as its length [-Wstringop-truncation]
1179 | strncpy(key, str1_data, str1_len);
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Python/import.c:1165:27: note: length computed here
1165 | Py_ssize_t str1_len = strlen(str1_data);
| ^~~~~~~~~~~~~~~~~
Written build/lib.linux-x86_64-3.14/_sysconfigdata__linux_x86_64-linux-gnu.py
Written build/lib.linux-x86_64-3.14/_sysconfig_vars__linux_x86_64-linux-gnu.json
Checked 112 modules (34 built-in, 77 shared, 1 n/a on linux-x86_64, 0 disabled, 0 missing, 0 failed on import)
```
Edit: happens on non-debug builds.
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-127878
* gh-128018
<!-- /gh-linked-prs -->
| 081673801e3d47d931d2e2b6a4a1515e1207d938 | 52d552cda7614c7aa9f08b680089c630587e747f |
python/cpython | python__cpython-127854 | # It's undocumented, that comma is invalid as separator for "b", "o" and "x" integer types
```pycon
>>> format(12347834, '_x')
'bc_69ba'
>>> format(12347834, ',x')
Traceback (most recent call last):
File "<python-input-1>", line 1, in <module>
format(12347834, ',x')
~~~~~~^^^^^^^^^^^^^^^^
ValueError: Cannot specify ',' with 'x'.
>>> format(12347834, ',b')
Traceback (most recent call last):
File "<python-input-2>", line 1, in <module>
format(12347834, ',b')
~~~~~~^^^^^^^^^^^^^^^^
ValueError: Cannot specify ',' with 'b'.
>>> format(12347834, ',o')
Traceback (most recent call last):
File "<python-input-3>", line 1, in <module>
format(12347834, ',o')
~~~~~~^^^^^^^^^^^^^^^^
ValueError: Cannot specify ',' with 'o'.
>>> format(12347834, ',d')
'12,347,834'
>>> format(12347111111111834, ',f')
'12,347,111,111,111,834.000000'
```
<!-- gh-linked-prs -->
### Linked PRs
* gh-127854
* gh-127941
* gh-127942
<!-- /gh-linked-prs -->
| e2325c9db0650fc06d909eb2b5930c0573f24f71 | 78e766f2e217572eacefba9ec31396b016aa88c2 |
python/cpython | python__cpython-127856 | # Data corruption when reading from two files in parallel in an uncompressed .zip
# Bug report
### Bug description:
I ran into a data corruption bug that seems to be triggered by interleaving reads/seeks from different files inside of an uncompressed zip file. As far as I can tell from the docs, this is allowed by `zipfile`. It works correctly in Python 3.7 and 3.9, but fails in 3.12.
I'm attaching a somewhat convoluted testcase (still working on a simpler one). It parses a dBase IV database by reading records from a .dbf file, and for each record, reading a corresponding record from a .dbt file.
When run using Python 3.9, you will see a bunch of data printed out. When run using Python 3.12, you will get an exception `ValueError: Invalid dBase IV block: b'PK\x03\x04\n\x00\x00\x00'`. That block does not appear in the input file at all. (Though, when tested with a larger input, I got a block of bytes that appeared in the _wrong_ file.)
For some context, [here is a workaround](https://github.com/dimaryaz/jdmtool/commit/97ca5b168b2db5675878db676413d8c66e464584) I used in my project: I changed it to read the .dbf file first, then the .dbt.
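A much smaller sketch of the access pattern involved (member names here are invented; on an affected 3.12 release, interleaved access like this could return bytes from the wrong member, while on a fixed version the assertions hold):

```python
import io
import zipfile

# Build an uncompressed (STORED) archive with two members, in memory.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w", zipfile.ZIP_STORED) as z:
    z.writestr("a.bin", b"A" * 64)
    z.writestr("b.bin", b"B" * 64)

buf.seek(0)
chunks_a, chunks_b = [], []
with zipfile.ZipFile(buf) as z:
    with z.open("a.bin") as fa, z.open("b.bin") as fb:
        # Interleave seeks and reads between the two open members.
        for offset in range(0, 64, 16):
            fa.seek(offset)
            chunks_a.append(fa.read(16))
            fb.seek(offset)
            chunks_b.append(fb.read(16))

assert b"".join(chunks_a) == b"A" * 64
assert b"".join(chunks_b) == b"B" * 64
```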
Testcase:
```python
#!/usr/bin/env python3
import datetime
import pathlib
import struct
import zipfile
from dataclasses import dataclass
from typing import Any, BinaryIO, List, Tuple
ZIP_PATH = pathlib.Path(__file__).parent / 'notams.zip'
@dataclass
class DbfHeader:
SIZE = 32
VERSION = 3
info: int
last_update: datetime.date
num_records: int
header_bytes: int
record_bytes: int
@classmethod
def from_bytes(cls, data: bytes):
info, year, month, day, num_records, header_bytes, record_bytes = struct.unpack('<4BIHH20x', data)
version = info & 0x3
if version != cls.VERSION:
raise ValueError(f"Unsupported DBF version: {version}")
return cls(info, datetime.date(year + 1900, month, day), num_records, header_bytes, record_bytes)
@dataclass
class DbfField:
SIZE = 32
name: str
type: str
length: int
@classmethod
def from_bytes(cls, data: bytes):
name, typ, length = struct.unpack('<11sc4xB15x', data)
return cls(name.rstrip(b'\x00').decode(), typ.decode(), length)
class DbfFile:
@classmethod
def read_header(cls, fd: BinaryIO) -> Tuple[DbfHeader, List[DbfField]]:
header = DbfHeader.from_bytes(fd.read(DbfHeader.SIZE))
num_fields = (header.header_bytes - 33) // 32
fields = [DbfField.from_bytes(fd.read(DbfField.SIZE)) for _ in range(num_fields)]
if fd.read(1) != b'\x0D':
raise ValueError("Missing array terminator")
return header, fields
@classmethod
def read_record(cls, fd: BinaryIO, fields: List[DbfField]) -> List[Any]:
fd.read(1)
values = []
for field in fields:
data = fd.read(field.length).decode('latin-1').strip(' ')
if field.type == 'C':
value = data
elif field.type == 'D':
s = data.strip(' ')
if s:
value = datetime.datetime.strptime(data, '%Y%m%d').date()
else:
value = None
elif field.type == 'L':
if len(data) != 1:
raise ValueError(f"Incorrect length: {data!r}")
if data in 'YyTt':
value = True
elif data in 'NnFf':
value = False
elif data == '?':
value = None
else:
raise ValueError(f"Incorrect boolean: {data!r}")
elif field.type in ('M', 'N'):
value = int(data) if data else None
else:
raise ValueError(f"Unsupported field: {field.type}")
values.append(value)
return values
@dataclass
class DbtHeader:
SIZE = 512
next_free_block: int
dbf_filename: str
reserved: int
block_length: int
@classmethod
def from_bytes(cls, data: bytes):
next_free_block, dbf_filename, reserved, block_length = struct.unpack('<I4x8sIH490x', data)
return cls(next_free_block, dbf_filename.decode('latin-1'), reserved, block_length)
class DbtFile:
DBT3_BLOCK_SIZE = 512
DBT4_BLOCK_START = b'\xFF\xFF\x08\x00'
@classmethod
def read_header(cls, fd: BinaryIO) -> DbtHeader:
fd.seek(0)
block = fd.read(DbtHeader.SIZE)
return DbtHeader.from_bytes(block)
@classmethod
def read_record(cls, fd: BinaryIO, header: DbtHeader, idx: int) -> str:
fd.seek(header.block_length * idx)
block_start = fd.read(8)
if block_start[0:4] != cls.DBT4_BLOCK_START:
raise ValueError(f"Invalid dBase IV block: {block_start}")
length = int.from_bytes(block_start[4:8], 'little')
data = fd.read(length - len(block_start))
return data.decode('latin-1')
def main():
with zipfile.ZipFile(ZIP_PATH) as z:
with z.open('notams.dbf') as dbf_in, z.open('notams.dbt') as dbt_in:
dbf_header, dbf_fields = DbfFile.read_header(dbf_in)
dbt_header = DbtFile.read_header(dbt_in)
for _ in range(dbf_header.num_records):
record = DbfFile.read_record(dbf_in, dbf_fields)
print(record)
memo = DbtFile.read_record(dbt_in, dbt_header, record[3])
print(memo)
if __name__ == '__main__':
main()
```
Input file:
[notams.zip](https://github.com/user-attachments/files/18103952/notams.zip)
### CPython versions tested on:
3.9, 3.12
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-127856
* gh-128225
* gh-128226
<!-- /gh-linked-prs -->
| 7ed6c5c6961d0849f163d4d449fb36bae312b6bc | 9fce90682553e2cfe93e98e2ae5948bf9c7c4456 |
python/cpython | python__cpython-127846 | # Enhance the iOS test runner
# Feature or enhancement
### Proposal:
The iOS test runner can now be used to run third-party project test suites; using it in practice has revealed some possible enhancements.
* A `--verbose` option, so that by default the amount of output from Xcode builds (which is mostly boilerplate) can be reduced
* Use symlinks instead of a copy when installing the iOS framework into the testbed
* Include `PY_COLORS=0` in the list of environment variables, to disable colors
### Has this already been discussed elsewhere?
This is a minor feature, which does not need previous discussion elsewhere
### Links to previous discussion of this feature:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-127846
* gh-127892
<!-- /gh-linked-prs -->
| ba2d2fda93a03a91ac6cdff319fd23ef51848d51 | 0cbc19d59e409854f2b9bdda75e1af2b6cd89ac2 |
python/cpython | python__cpython-127844 | # The JIT's understanding of `**` is wrong
# Crash report
`**` is weird in that the *type* of the result depends on the *values* of the inputs. The logic for `int`/`float` power is:
```py
def type_of_pow(lhs: float, rhs: float) -> type[complex]:
if isinstance(lhs, int) and isinstance(rhs, int) and rhs >= 0:
return int
if lhs < 0 and not rhs.is_integer():
return complex
return float
```
However, our optimizer wrongly assumes:
```py
def type_of_pow(lhs: float, rhs: float) -> type[float]:
if isinstance(lhs, int) and isinstance(rhs, int):
return int
return float
```
This means that tons of different poorly-chosen values can cause JIT code to crash:
```py
import _testinternalcapi
import itertools
def f(a, b):
for _ in range(_testinternalcapi.TIER2_THRESHOLD):
a + b # Remove guards...
a ** b + a # ...BOOM!
for la, ra, lb, rb in itertools.product([1, -1, 1.0, -1.0, 0.5, -0.5], repeat=4):
f(la, ra)
f(lb, rb)
```
Normally we could just ignore the problem and produce an unknown type during abstract interpretation, but a `**` containing at least one constant value is actually reasonably common (think `x ** 2`, `2 ** n`, or `s ** 0.5`).
We should probably teach the optimizer how to handle these properly.
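The value dependence described above is easy to confirm interactively:

```python
assert type(2 ** 3) is int            # int ** non-negative int -> int
assert type(2 ** -1) is float         # negative int exponent -> float
assert type((-1) ** 0.5) is complex   # negative base, non-integral exponent
assert type(2.0 ** 3) is float
assert type(4 ** 0.5) is float
```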
<!-- gh-linked-prs -->
### Linked PRs
* gh-127844
<!-- /gh-linked-prs -->
| 65ae3d5a73ca3c53a0c6b601dddb8e9b3b6e3f51 | e08b28235a863323ca3a7e444776bb7803e77caf |
python/cpython | python__cpython-127810 | # pathlib ABCs: remove or namespace private attributes
@encukou [pointed out](https://github.com/python/cpython/pull/127725#issuecomment-2528154896):
> I see there's currently no distinction between pathlib-internal attribute names and names that are free for third-party subclasses to use.
> Should there be a stronger namespace for these, like `_pathlib_sep`, `_pathlib_globber` & `_pathlib_stack`?
> Or should `_sep` & co. be documented so users know to not override them?
`PurePathBase` and `PathBase` have ~10 private attributes for facilitating copying, ~3 for matching and globbing, and ~3 more for miscellaneous purposes. We should try to eliminate as many as possible, and namespace the rest (e.g. under a `_pathlib_` prefix).
<!-- gh-linked-prs -->
### Linked PRs
* gh-127810
* gh-127851
* gh-127855
* gh-127883
<!-- /gh-linked-prs -->
| 8d9f52a7be5c09c0fd4423943edadaacf6d7f917 | f5ba74b81979b621e38be70ec3ddad1e7f1365ce |
python/cpython | python__cpython-127850 | # Logger formatter class initialization state clarification
# Documentation
https://docs.python.org/3/library/logging.html#logging.Handler.setFormatter
When providing a logger handler with a formatter class, there is no documentation stating whether that formatter class needs to be instantiated before passing it as an argument. Through trial and error, I found that it needs to be instantiated first. Please either clarify the documentation or have the underlying code check whether the formatter has been instantiated.
Problematic formatter configuration:
```
handler.setFormatter(xml_log_formatter)
logging.info('test_msg')
```
Error:
Fixed formatter configuration:
```
handler.setFormatter(xml_log_formatter())
logging.info('test_msg')
```
Proposed code check (from a custom logging module):
```
#Instantiate formatter class (if needed)
if isinstance(cls.log_formatter, type):
cls.log_formatter = cls.log_formatter()
```
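Putting the two snippets together, a defensive wrapper along the lines proposed above might look like this (the helper name is hypothetical):

```python
import logging

def set_formatter_safely(handler, formatter):
    # Instantiate the formatter if a class (not an instance) was passed.
    if isinstance(formatter, type):
        formatter = formatter()
    handler.setFormatter(formatter)

handler = logging.StreamHandler()
set_formatter_safely(handler, logging.Formatter)  # class: instantiated for us
assert isinstance(handler.formatter, logging.Formatter)
```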
<!-- gh-linked-prs -->
### Linked PRs
* gh-127850
* gh-130392
* gh-130393
<!-- /gh-linked-prs -->
| 5d66c55c8ad0a0aeff8d06021ddca1d02c5f4416 | d63af9540f6163104699a6e09267696307f0d002 |
python/cpython | python__cpython-127820 | # email.message.EmailMessage accepts invalid header field names without error, which raise an error when parsed
# Bug report
### Bug description:
`email.message.EmailMessage` accepts invalid header field names without error; these then raise an error when the message is parsed, regardless of policy, and cause corrupt emails.
Case in point (with python 3.13.1 installed via pyenv, occurs in 3.11
and earlier as well):
```python
delgado@tuxedo-e101776:~> python3.13
Python 3.13.1 (main, Dec 10 2024, 15:13:47) [GCC 7.5.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import email.message
>>> message = email.message.EmailMessage()
>>> message.add_header('From', 'me@example.example')
None
>>> message.add_header('To', 'me@example.example')
None
>>> message.add_header('Subject', 'Example Subject')
None
>>> message.add_header('Invalid Header', 'Contains a space, which is illegal')
None
>>> message.add_header('X-Valid Header', 'Custom header as recommended')
None
>>> message.set_content('Hello, this is an example!')
None
>>> message.defects
[]
>>> message._headers
[('From', 'me@example.example'),
('To', 'me@example.example'),
('Subject', 'Example Subject'),
('Invalid Header', 'Contains a space, which is illegal'),
('X-Valid Header', 'Custom header as recommended'),
('Content-Type', 'text/plain; charset="utf-8"'),
('Content-Transfer-Encoding', '7bit'),
('MIME-Version', '1.0')]
>>> message.as_string()
('From: me@example.example\n'
'To: me@example.example\n'
'Subject: Example Subject\n'
'Invalid Header: Contains a space, which is illegal\n'
'X-Valid Header: Custom header as recommended\n'
'Content-Type: text/plain; charset="utf-8"\n'
'Content-Transfer-Encoding: 7bit\n'
'MIME-Version: 1.0\n'
'\n'
'Hello, this is an example!\n')
>>> message.policy
EmailPolicy()
>>> msg_string = message.as_string()
>>> msg_string
('From: me@example.example\n'
'To: me@example.example\n'
'Subject: Example Subject\n'
'Invalid Header: Contains a space, which is illegal\n'
'X-Valid Header: Custom header as recommended\n'
'Content-Type: text/plain; charset="utf-8"\n'
'Content-Transfer-Encoding: 7bit\n'
'MIME-Version: 1.0\n'
'\n'
'Hello, this is an example!\n')
>>> import email.parser
>>> parsed_message = email.parser.Parser().parsestr(msg_string)
>>> parsed_message._headers
[('From', 'me@example.example'),
('To', 'me@example.example'),
('Subject', 'Example Subject')]
>>> parsed_message.as_string()
('From: me@example.example\n'
'To: me@example.example\n'
'Subject: Example Subject\n'
'\n'
'Invalid Header: Contains a space, which is illegal\n'
'X-Valid Header: Custom header as recommended\n'
'Content-Type: text/plain; charset="utf-8"\n'
'Content-Transfer-Encoding: 7bit\n'
'MIME-Version: 1.0\n'
'\n'
'Hello, this is an example!\n')
>>> parsed_message.policy
Compat32()
>>> parsed_message.defects
[MissingHeaderBodySeparatorDefect()]
>>> import email.policy
>>> parsed_message_strict = email.parser.Parser(policy=email.policy.strict).parsestr(msg_string)
Traceback (most recent call last):
File "<python-input-19>", line 1, in <module>
parsed_message_strict = email.parser.Parser(policy=email.policy.strict).parsestr(msg_string)
File "/home/delgado/git/pyenv/versions/3.13.1/lib/python3.13/email/parser.py", line 64, in parsestr
return self.parse(StringIO(text), headersonly=headersonly)
~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/delgado/git/pyenv/versions/3.13.1/lib/python3.13/email/parser.py", line 53, in parse
feedparser.feed(data)
~~~~~~~~~~~~~~~^^^^^^
File "/home/delgado/git/pyenv/versions/3.13.1/lib/python3.13/email/feedparser.py", line 176, in feed
self._call_parse()
~~~~~~~~~~~~~~~~^^
File "/home/delgado/git/pyenv/versions/3.13.1/lib/python3.13/email/feedparser.py", line 180, in _call_parse
self._parse()
~~~~~~~~~~~^^
File "/home/delgado/git/pyenv/versions/3.13.1/lib/python3.13/email/feedparser.py", line 234, in _parsegen
self.policy.handle_defect(self._cur, defect)
~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^
File "/home/delgado/git/pyenv/versions/3.13.1/lib/python3.13/email/_policybase.py", line 193, in handle_defect
raise defect
email.errors.MissingHeaderBodySeparatorDefect
>>> parsed_message_nonstrict = email.parser.Parser(policy=email.policy.default).parsestr(msg_string)
>>> parsed_message_nonstrict.as_string()
('From: me@example.example\n'
'To: me@example.example\n'
'Subject: Example Subject\n'
'\n'
'Invalid Header: Contains a space, which is illegal\n'
'X-Valid Header: Custom header as recommended\n'
'Content-Type: text/plain; charset="utf-8"\n'
'Content-Transfer-Encoding: 7bit\n'
'MIME-Version: 1.0\n'
'\n'
'Hello, this is an example!\n')
>>> parsed_message_nonstrict.defects
[MissingHeaderBodySeparatorDefect()]
```
The illegal header field name is accepted by `EmailMessage` without a defect, but when the resulting message is parsed, regardless of policy, header parsing appears to stop at that point: the line with the defective header is treated as the first line of the body, which leads to the `MissingHeaderBodySeparatorDefect`.
It's interesting that the `email.header` module contains the following:
~~~python
# Field name regexp, including trailing colon, but not separating whitespace,
# according to RFC 2822. Character range is from tilde to exclamation mark.
# For use with .match()
fcre = re.compile(r'[\041-\176]+:$')
~~~
which is the correct regex according to the RFC (including the final colon), but which apparently isn't used anywhere in the code.
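The quoted regex does reject the problematic names; a quick check, reusing the pattern verbatim:

```python
import re

# Field name plus trailing colon; \041-\176 is '!' through '~' (no space).
fcre = re.compile(r'[\041-\176]+:$')

assert fcre.match('Subject:')
assert fcre.match('X-Valid-Header:')
assert fcre.match('Invalid Header:') is None  # space is not allowed
assert fcre.match('X-Valid Header:') is None
```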
A MUA (such as claws or mutt) will display the resulting email with the remaining headers as part of the body, breaking any mime multipart rendering.
### CPython versions tested on:
3.11, 3.13
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-127820
<!-- /gh-linked-prs -->
| c432d0147bdf1a66604e7a3d6a71660ae79b5f45 | 55150a79cacbce44f50cea128c511782df0ab277 |
python/cpython | python__cpython-127793 | # `PyUnstable_AtExit` isn't well tested and undocumented
# Bug report
### Bug description:
While working on #126908, @vstinner noted that this part of `PyUnstable_AtExit` looks wrong: https://github.com/python/cpython/blob/cef0a90d8f3a94aa534593f39b4abf98165675b9/Modules/atexitmodule.c#L45-L47
This will result in loss of callbacks after one has been stored, because the second-to-last one is always overwritten. Noted in gh-118915, `PyUnstable_AtExit` is also undocumented. I'll work on fixing both.
### CPython versions tested on:
3.13, 3.14, CPython main branch
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-127793
* gh-127819
<!-- /gh-linked-prs -->
| d5d84c3f13fe7fe591b375c41979d362bc11957a | 2cdeb61b57e638ae46a04386330a12abe9cddf2c |
python/cpython | python__cpython-127789 | # Refactor `PyUnicodeError` internal C helpers
# Feature or enhancement
### Proposal:
Some helpers make the realization of https://github.com/python/cpython/issues/126004 a bit harder, as we end up with a lot of duplicated code. In order to make the related PRs and https://github.com/python/cpython/pull/127694 smoother, I suggest refactoring the following helpers:
- Unify `get_unicode` and `get_string` as a single `as_unicode_error_attribute`.
- Allow retrieving the underlying `object` attribute and its size in one function, say `get_unicode_error_object_and_size`. This is typically useful for unifying
- The `PyUnicode{Encode,Decode,Translate}Error_GetObject` public functions can use a common implementation `unicode_error_get_object_impl`. Since this depends on the underlying `object` type, the implementation function would take a flag as a parameter to indicate whether it's a bytes or a string.
- All `PyUnicode{Encode,Decode}Error_GetEncoding` public functions can use a common implementation `unicode_error_get_encoding_impl`. The encoding is always a string.
- All `PyUnicode{Encode,Decode,Translate}Error_{Get,Set}Reason` public functions can use a common implementation `unicode_error_{get,set}_reason_impl`. The reason is always a string.
- All `PyUnicode{Encode,Decode,Translate}_{Get,Set}{Start,End}` public functions can use a common implementation `unicode_error_{get,set}_{start,end}_impl`. Since this depends on the underlying `object` type, the implementation function would take a flag as a parameter to indicate whether it's a bytes or a string.
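For reference, the attributes these helpers get and set are the ones visible from Python on the exception objects:

```python
# Decode errors carry a bytes `object`; encode errors carry a str one.
try:
    b'\xff'.decode('utf-8')
except UnicodeDecodeError as exc:
    assert exc.encoding == 'utf-8'
    assert exc.object == b'\xff'
    assert (exc.start, exc.end) == (0, 1)
    assert isinstance(exc.reason, str)

try:
    '\u20ac'.encode('ascii')
except UnicodeEncodeError as exc:
    assert exc.encoding == 'ascii'
    assert isinstance(exc.object, str)
    assert (exc.start, exc.end) == (0, 1)
```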
### Has this already been discussed elsewhere?
This is a minor feature, which does not need previous discussion elsewhere
### Links to previous discussion of this feature:
https://github.com/python/cpython/pull/127674#discussion_r1877752636
<!-- gh-linked-prs -->
### Linked PRs
* gh-127789
* gh-128803
* gh-128980
<!-- /gh-linked-prs -->
| fa985bee6189aabac1c329f2de32aa9a4e88e550 | 8abd6cef68a0582a4d912be76caddd9da5d55ccd |
python/cpython | python__cpython-130596 | # Verify permissions in `require-pr-label.yml` workflow
# Bug report
### Bug description:
This came up in https://github.com/python/cpython/pull/127749#discussion_r1876778887
The write permissions for the `require-pr-label.yml` workflow might not be needed. We should double-check that and remove them if they aren't necessary.
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-130596
* gh-130623
* gh-130624
* gh-130625
<!-- /gh-linked-prs -->
| 5ba69e747fa9da984a307b2cbc9f82bac1e0db04 | daeb0efaf445be5634d73e13d39a2641851b0bb4 |
python/cpython | python__cpython-127849 | # Unhandled `BytesWarning` when `test.support.strace_helper.strace_python` fails
```text
======================================================================
FAIL: test_all (test.test___all__.AllTest.test_all)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/abuild/rpmbuild/BUILD/Python-3.14.0a2/Lib/test/test___all__.py", line 129, in test_all
self.check_all(modname)
~~~~~~~~~~~~~~^^^^^^^^^
File "/home/abuild/rpmbuild/BUILD/Python-3.14.0a2/Lib/test/test___all__.py", line 35, in check_all
with warnings_helper.check_warnings(
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^
(f".*{modname}", DeprecationWarning),
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
...<2 lines>...
("", ResourceWarning),
^^^^^^^^^^^^^^^^^^^^^^
quiet=True):
^^^^^^^^^^^
File "/home/abuild/rpmbuild/BUILD/Python-3.14.0a2/Lib/contextlib.py", line 148, in __exit__
next(self.gen)
~~~~^^^^^^^^^^
File "/home/abuild/rpmbuild/BUILD/Python-3.14.0a2/Lib/test/support/warnings_helper.py", line 185, in _filterwarnings
raise AssertionError("unhandled warning %s" % reraise[0])
AssertionError: unhandled warning {message : BytesWarning('str() on a bytes instance'), category : 'BytesWarning', filename : '/home/abuild/rpmbuild/BUILD/Python-3.14.0a2/Lib/test/support/strace_helper.py', lineno : 85, line : None}
----------------------------------------------------------------------
```
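The warning class in the log is straightforward to reproduce outside the test suite: formatting a `bytes` object with `%s` into a `str` triggers it when the interpreter runs with `-b`/`-bb`:

```python
import subprocess
import sys

# Under -bb, BytesWarning is raised as an error instead of being suppressed.
proc = subprocess.run(
    [sys.executable, "-bb", "-c", "print('output: %s' % b'raw')"],
    capture_output=True,
    text=True,
)
assert proc.returncode != 0
assert "BytesWarning" in proc.stderr
```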
<!-- gh-linked-prs -->
### Linked PRs
* gh-127849
<!-- /gh-linked-prs -->
| c0264fc57c51e68015bef95a2208711356b57c1f | 2de048ce79e621f5ae0574095b9600fe8595f607 |
python/cpython | python__cpython-127756 | # The error message for odd-length input to `bytes.fromhex` is cryptic
# Bug report
### Bug description:
```pytb
Python 3.14.0a2+ (main, Dec 8 2024, 10:50:32) [GCC 14.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> bytes.fromhex('deadbee')
Traceback (most recent call last):
File "<python-input-0>", line 1, in <module>
bytes.fromhex('deadbee')
~~~~~~~~~~~~~^^^^^^^^^^^
ValueError: non-hexadecimal number found in fromhex() arg at position 7
>>> bytes.fromhex('deadbeef')
b'\xde\xad\xbe\xef'
```
I suggest: `ValueError: fromhex() arg must be of even length`.
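A sketch of the suggested behaviour (a hypothetical Python-level wrapper, not the C implementation):

```python
def fromhex_checked(s: str) -> bytes:
    digits = "".join(s.split())  # fromhex() skips ASCII whitespace
    if len(digits) % 2:
        raise ValueError("fromhex() arg must be of even length")
    return bytes.fromhex(s)

assert fromhex_checked("deadbeef") == b"\xde\xad\xbe\xef"
```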
### CPython versions tested on:
3.12, CPython main branch
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-127756
* gh-127818
<!-- /gh-linked-prs -->
| c33b6fbf358c1bc14b20e14a1fffff62c6826ecd | 41f29e5d16c314790559e563ce5ca0334fcd54df |
python/cpython | python__cpython-127735 | # Improving signature of `urllib.request.HTTPPasswordMgrWithPriorAuth.__init__`
# Feature or enhancement
### Proposal:
`urllib.request.HTTPPasswordMgrWithPriorAuth.__init__` ([link](https://github.com/python/cpython/blob/79b7cab50a3292a1c01466cf0e69fb7b4e56cfb1/Lib/urllib/request.py#L879)) has `(self, *args, **kwargs)`, but the args are just passed through to super, and the superclass for `HTTPPasswordMgrWithPriorAuth` takes no arguments. The passthrough doesn't serve a purpose and it muddies introspection of the method. I'd like to update `urllib.request.HTTPPasswordMgrWithPriorAuth.__init__` to have no arguments, so that signature introspection is more accurate. Currently, typeshed needs to have `urllib.request.HTTPPasswordMgrWithPriorAuth.__init__` on its allowlist for stubtest errors as a result of the poor introspection.
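For illustration, a minimal sketch (class names hypothetical) of how a pure passthrough `__init__` muddies signature introspection compared to the proposed explicit form:

```python
import inspect

class Base:
    def __init__(self):
        self.passwd = {}

class Passthrough(Base):
    # Current style: args are only forwarded to a superclass that takes
    # no arguments, so the reported signature is uninformative.
    def __init__(self, *args, **kwargs):
        self.authenticated = {}
        super().__init__(*args, **kwargs)

class Explicit(Base):
    # Proposed style: same behaviour, accurate signature.
    def __init__(self):
        self.authenticated = {}
        super().__init__()

print(inspect.signature(Passthrough.__init__))  # (self, *args, **kwargs)
print(inspect.signature(Explicit.__init__))     # (self)
```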
### Has this already been discussed elsewhere?
This is a minor feature, which does not need previous discussion elsewhere
### Links to previous discussion of this feature:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-127735
* gh-127744
* gh-127745
<!-- /gh-linked-prs -->
| a03efb533a58fd13fb0cc7f4a5c02c8406a407bd | 7f8ec523021427a5c1ab3ce0cdd6e4bb909f1dc5 |
python/cpython | python__cpython-127733 | # Fix detection of Windows Server 2025 by `platform.py`
# Feature or enhancement
### Proposal:
Windows Server 2025 was officially released a month ago, but is currently detected as Server 2022 instead of Server 2025

Microsoft Lifecycle:
https://learn.microsoft.com/en-us/lifecycle/products/windows-server-2025
https://www.microsoft.com/en-us/windows-server/blog/
### Has this already been discussed elsewhere?
This is a minor feature, which does not need previous discussion elsewhere
### Links to previous discussion of this feature:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-127733
* gh-127762
* gh-127763
<!-- /gh-linked-prs -->
| 5eb7fd4d0fc37b91058086181afebec41e66e5ad | 2041a95e68ebf6d13f867e214ada28affa830669 |
python/cpython | python__cpython-127731 | # `venv.EnvBuilder.ensure_directories` returns incorrect `inc_path` value
# Bug report
### Bug description:
```python
$ ./python
Python 3.14.0a2+ experimental free-threading build (heads/main:79b7cab50a3, Dec 7 2024, 19:01:51) [GCC 14.2.1 20240910] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import venv
>>> builder = venv.EnvBuilder()
>>> context = builder.ensure_directories('venv')
>>> context.inc_path
'venv/include/python3.14td'
```
```python
$ venv/bin/python
Python 3.14.0a2+ experimental free-threading build (heads/main:79b7cab50a3, Dec 7 2024, 19:01:51) [GCC 14.2.1 20240910] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import sysconfig
>>> sysconfig.get_path('include')
'/usr/local/include/python3.14td'
```
I noticed this when looking at the `venv` code:
https://github.com/python/cpython/blob/79b7cab50a3292a1c01466cf0e69fb7b4e56cfb1/Lib/venv/__init__.py#L102-L109
As the name suggests, `installed_base` and `installed_platbase` should be pointing to the base installation, not the virtual environment.
###### Note: The `include` directory being shared between all environments in a Python installation is a known issue.
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-127731
<!-- /gh-linked-prs -->
| 8ac307f0d6834148471d2e12a45bf022e659164c | a8ffe661548e16ad02dbe6cb8a89513d7ed2a42c |
python/cpython | python__cpython-127719 | # Add colour to `test.regrtest` output
# Feature or enhancement
In Python 3.13 we added colour output to the new REPL, tracebacks and doctest, and in 3.14 to unittest, that can also be controlled with the `PYTHON_COLORS`, `NO_COLOR` and `FORCE_COLOR` environment variables:
* https://docs.python.org/3.14/whatsnew/3.14.html#unittest
* https://docs.python.org/3.14/using/cmdline.html#using-on-controlling-color
Let's add colour to `test.regrtest` output.
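A sketch of the environment-variable precedence the new output would follow (the exact ordering — `PYTHON_COLORS` first, then `NO_COLOR`, then `FORCE_COLOR`, then tty detection — is an assumption based on the documentation linked above, not a quote of the implementation):

```python
def use_color(env: dict, stream_isatty: bool) -> bool:
    # PYTHON_COLORS=0/1 is an explicit override in either direction.
    if env.get("PYTHON_COLORS") == "0":
        return False
    if env.get("PYTHON_COLORS") == "1":
        return True
    # NO_COLOR wins over FORCE_COLOR when both are set.
    if "NO_COLOR" in env:
        return False
    if "FORCE_COLOR" in env:
        return True
    # Otherwise fall back to whether the stream is a terminal.
    return stream_isatty

print(use_color({"NO_COLOR": "1"}, True))  # False
```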
<!-- gh-linked-prs -->
### Linked PRs
* gh-127719
<!-- /gh-linked-prs -->
| 212448b1623b45f19c5529595e081da72a6521a0 | 2233c303e476496fc4c85a29a1429a7e4b1f707b |
python/cpython | python__cpython-127726 | # logging.handlers.SMTPHandler: secure argument is broken in Python 3.12+
# Bug report
### Bug description:
The [documentation](https://docs.python.org/3/library/logging.handlers.html#smtphandler) for `logging.handlers.SMTPHandler` describes the `secure` argument as follows:
> To specify the use of a secure protocol (TLS), pass in a tuple to the `secure` argument. This will only be used when authentication credentials are supplied. The tuple should be either an empty tuple, or a single-value tuple with the name of a keyfile, or a 2-value tuple with the names of the keyfile and certificate file. (This tuple is passed to the `smtplib.SMTP.starttls()` method.)
However, from Python 3.12 on, only the empty tuple actually works.
With a single-value-tuple or a two-value-tuple, `smtplib.SMTP.starttls()` throws an exception like this:
```
Traceback (most recent call last):
File "/usr/lib/python3.13/logging/handlers.py", line 1097, in emit
smtp.starttls(*self.secure)
~~~~~~~~~~~~~^^^^^^^^^^^^^^
TypeError: SMTP.starttls() takes 1 positional argument but 3 were given
```
The reason seems immediately obvious from the last sentence in the description of the `secure` argument: The tuple is passed to `smtplib.SMTP.starttls()`.
The [documentation](https://docs.python.org/3/library/smtplib.html#smtplib.SMTP.starttls) for this method states:
> Changed in version 3.12: The deprecated `keyfile` and `certfile` parameters have been removed.
`SMTPHandler` still relies on these parameters.
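A sketch of one possible adaptation: translating the legacy `(keyfile[, certfile])` tuple into an `ssl.SSLContext`, which is the only form `starttls()` accepts on 3.12+. `secure_to_context` is a hypothetical helper; the actual fix in the linked PRs may differ, and the 1-tuple handling below assumes the key file also contains the certificate.

```python
import ssl

def secure_to_context(secure):
    # Empty tuple: plain STARTTLS with a default client context.
    context = ssl.create_default_context()
    if secure:
        keyfile = secure[0]
        certfile = secure[1] if len(secure) > 1 else None
        # Mirror the old keyfile/certfile semantics of starttls().
        context.load_cert_chain(certfile or keyfile, keyfile=keyfile)
    return context

# Then, inside emit():  smtp.starttls(context=secure_to_context(self.secure))
```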
Here's a minimal script to reproduce the error (requires a mocked SMTP server on port 1025):
```python
#!/usr/bin/env python3
import logging
from logging.handlers import SMTPHandler
logger = logging.getLogger()
handler = SMTPHandler(
mailhost=("localhost", 1025),
fromaddr="foo@localhost",
toaddrs="bar@localhost",
subject="Test",
credentials=("foo", "bar"),
secure=(
"/path/to/private.key",
"/path/to/cert.pem"
),
)
logger.addHandler(handler)
logger.error("SMTPHandler is broken")
```
Note that Python 3.11 is not affected and the above code successfully attempts to issue a STARTTLS command.
### CPython versions tested on:
3.11, 3.12, 3.13
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-127726
* gh-129955
* gh-129956
<!-- /gh-linked-prs -->
| d7672e5d5a7b9580a72dbe75d3a9e8840bcc604c | 94cd2e0ddeff83dee3254ca356d9e4396927d075 |
python/cpython | python__cpython-127875 | # Use tagged pointers on the stack in the default build.
Currently all references to objects in frameobjects use `_PyStackRef` instead of `PyObject *`.
This is necessary for the free-threaded build to support deferred references.
For the default build `_PyStackRef` is just an alias for `PyObject *`.
We should change `_PyStackRef` to use proper tagged pointers in the default build for two important reasons:
* It will reduce the maintenance burden of using tagged pointers if they were the same in both builds
* It offers a lot of optimization potential. The overhead of reference counting operations is large, and tagged pointers will allow us to reduce that overhead considerably.
My initial implementation is [0.8% slower](https://github.com/faster-cpython/benchmarking-public/tree/main/results/bm-20241206-3.14.0a2+-0c20416), although I'd like to get that closer to 0 before merging anything. There is some speedup in the GC due to streamlined immortality checks, and some slowdown due to increased overhead of turning new `PyObject *` references into `_PyStackRef`s.
This small slowdown will allow us a large speedup (maybe more than 5%) as we can do the following:
* Reduce the overhead of refcount operations by using tagged references for the majority of `LOAD_` instructions in the interpreter.
* Completely eliminate many decref operations by tracking which references are tagged in the JIT.
The tagging scheme:
Tag | Meaning
--- | ---
00 | Normal pointers
01 | Pointers with embedded reference count
10 | Unused
11 | Pointer to immortal object<sup>1</sup> (including NULL)
This tagging scheme is chosen as it provides the best performance for the most common operations:
* PyStackRef_DUP: Can check to see if the object's reference count needs updating with a single check and no memory read: `ptr & 1`
* PyStackRef_CLOSE: As for PyStackRef_DUP, only a single bit check is needed
* PyStackRef_XCLOSE: Since `NULL` is treated as immortal and tagged, this is the same as PyStackRef_CLOSE.
Maintaining the invariant that tag `11` is used for all immortal objects is a bit expensive, but can be mitigated by pushing the conversion from `PyObject *` to `_PyStackRef` down to a point where it is known whether an object is newly created or not.
For newly created objects `PyStackRef_FromPyObjectStealMortal` can be used which performs no immortality check.
--------------
1. Actually, any object that was immortal when the reference was created. References to objects that are made immortal after the reference is created would have the low bits set to `00`, or `01`. This is OK as immortal refcounts have a huge margin of error and the number of possible references to one of these immortal objects is very small.
<!-- gh-linked-prs -->
### Linked PRs
* gh-127875
* gh-128121
* gh-130785
* gh-131072
* gh-131140
* gh-131198
* gh-131365
* gh-131500
* gh-131508
* gh-136178
* gh-136206
<!-- /gh-linked-prs -->
| 2bef8ea8ea045d20394f0daec7a5c5b1046a4e22 | 7cc99a54b755cc7cc0b3680fde7834cf612d4fbc |
python/cpython | python__cpython-127694 | # Check for type consistency for `PyUnicodeError` API
# Feature or enhancement
### Proposal:
This is a follow-up to https://github.com/python/cpython/pull/123380#discussion_r1865910020.
The idea is to add assertion type-checks when calling helper functions on unicode objects:
```C
assert(PyObject_TypeCheck(exc, (PyTypeObject*)&PyExc_UnicodeError));
```
### Has this already been discussed elsewhere?
This is a minor feature, which does not need previous discussion elsewhere
### Links to previous discussion of this feature:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-127694
<!-- /gh-linked-prs -->
| 8bc18182a7c28f86265c9d82bd0338137480921c | 6446408d426814bf2bc9d3911a91741f04d4bc4e |
python/cpython | python__cpython-127689 | # Add `SCHED_DEADLINE` and `SCHED_NORMAL` constants to `os` module
# Feature or enhancement
### Proposal:
This issue suggests adding new os constants.
`os.SCHED_DEADLINE`
```
# deadline scheduling
# Set the current schedule to real-time schedule, To be precise, it
# is not real-time scheduling, but it is relatively real-time.
prio = os.sched_param(sched_priority=10)
os.sched_setscheduler(0, os.SCHED_DEADLINE, prio)
```
`SCHED_NORMAL` is the same as `SCHED_OTHER`.
But to keep running on old Linux systems, we can't remove `SCHED_OTHER`, even though it no longer exists in the current mainline kernel.
We still need to add `SCHED_NORMAL`, because `SCHED_OTHER` only exists in old distributions.
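Because these scheduling-policy constants are platform-dependent, code probing for them has to degrade gracefully; a sketch (which constants exist will vary by OS and Python version):

```python
import os

# Probe with getattr so the code stays portable across kernels/libcs.
for name in ("SCHED_OTHER", "SCHED_NORMAL", "SCHED_BATCH",
             "SCHED_IDLE", "SCHED_FIFO", "SCHED_RR", "SCHED_DEADLINE"):
    value = getattr(os, name, None)
    print(name, "available" if value is not None else "not available")
```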
### Has this already been discussed elsewhere?
No response given
### Links to previous discussion of this feature:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-127689
<!-- /gh-linked-prs -->
| ea578fc6d310c85538aefbb900a326c5c3424dd5 | 19c5134d57764d3db7b1cacec4f090c74849a5c1 |
python/cpython | python__cpython-132351 | # Regression of 3.13.1 with iterator creation being duplicated
# Bug report
### Bug description:
For my Python compiler Nuitka, I use CPython as the oracle of what the correct behaviour is. I am running some tests that I used to clarify the behavior from decades ago; in this case I wanted to know when exactly the iterator creation happens. I stripped this test down a bunch, and the regression is still visible. I first noticed the issue on GitHub Actions, where 3.13.0 got replaced with 3.13.1 for Windows and Linux, but it applies to all OSes. See below for a diff showing that the same iterator is created multiple times.
```python
""" Generator expression tests
"""
from __future__ import print_function
import inspect
print("Generator expression that demonstrates the timing:")
def iteratorCreationTiming():
def getIterable(x):
print("Getting iterable", x)
return Iterable(x)
class Iterable:
def __init__(self, x):
self.x = x # pylint: disable=invalid-name
self.values = list(range(x))
self.count = 0
def __iter__(self):
print("Giving iterator now", self.x)
return self
def __next__(self):
print("Next of", self.x, "is", self.count)
if len(self.values) > self.count:
self.count += 1
return self.values[self.count - 1]
else:
print("Raising StopIteration for", self.x)
raise StopIteration
# Python2/3 compatibility.
next = __next__
def __del__(self):
print("Deleting", self.x)
gen = ((y, z) for y in getIterable(3) for z in getIterable(2))
print("next value is", next(gen))
res = tuple(gen)
print("remaining generator is", res)
try:
next(gen)
except StopIteration:
print("Usage past end gave StopIteration exception as expected.")
try:
print("Generator state then is", inspect.getgeneratorstate(gen))
except AttributeError:
pass
print("Its frame is now", gen.gi_frame)
print("Early aborting generator:")
gen2 = ((y, z) for y in getIterable(3) for z in getIterable(2))
del gen2
iteratorCreationTiming()
```
The unified diff between 3.13.0 output (and basically all Python versions before) and 3.13.1 output.
```
--- out-3.13.0.txt 2024-12-06 12:37:19.447115100 +0100
+++ out-3.13.1.txt 2024-12-06 12:37:23.452239500 +0100
@@ -1,9 +1,11 @@
Generator expression that demonstrates the timing:
Getting iterable 3
Giving iterator now 3
+Giving iterator now 3
Next of 3 is 0
Getting iterable 2
Giving iterator now 2
+Giving iterator now 2
Next of 2 is 0
next value is (0, 0)
Next of 2 is 1
@@ -13,6 +15,7 @@
Next of 3 is 1
Getting iterable 2
Giving iterator now 2
+Giving iterator now 2
Next of 2 is 0
Next of 2 is 1
Next of 2 is 2
@@ -21,6 +24,7 @@
Next of 3 is 2
Getting iterable 2
Giving iterator now 2
+Giving iterator now 2
Next of 2 is 0
```
The duplicated prints of the iterator creation are new. This is not optimal, and it is new behaviour. I don't know if the iterator being fetched through a slot could cause this, or what it is. I checked if `generator.c` changed, but I think it didn't at all.
My self compiled Python 3.13.1 for Linux and the official Windows download agree in behaviour.
### CPython versions tested on:
3.13
### Operating systems tested on:
Linux, Windows
<!-- gh-linked-prs -->
### Linked PRs
* gh-132351
* gh-132384
<!-- /gh-linked-prs -->
| d87e7f35297d34755026173d84a38eedfbed78de | bc0b94b30c9d65ba550daee2c2ef20035defd980 |
python/cpython | python__cpython-127668 | # Improve error-branches of `hashlib`
# Bug report
### Bug description:
While reading the hashlib code, I found some issues in the error branches where the `EVP_MD_ctx` is not freed upon failure or when we call `py_digest_name` with a NULL `EVP_MD *`.
@gpshead Should I consider this as a security issue? (some places might be a security issue since we are leaking some EVP_MD context objects but others are just leaking un-initialized contexts).
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-127668
* gh-130783
* gh-130784
* gh-131145
* gh-131347
* gh-131348
<!-- /gh-linked-prs -->
| 097846502b7f33cb327d512e2a396acf4f4de46e | 7e3b788e8f3dc986bcccb047ddc3f0a7a99bb08c |
python/cpython | python__cpython-127656 | # `_SelectorSocketTransport.writelines` is missing a flow control check allowing writes to fill memory until exhausted
# Bug report
### Bug description:
This is the public issue for GHSA-fw89-6wjj-8j95
### CPython versions tested on:
3.12
### Operating systems tested on:
Linux, macOS
<!-- gh-linked-prs -->
### Linked PRs
* gh-127656
* gh-127663
* gh-127664
<!-- /gh-linked-prs -->
| e991ac8f2037d78140e417cc9a9486223eb3e786 | 25eee578c8e369b027da6d9d2725f29df6ef1cbd |
python/cpython | python__cpython-127704 | # Drop `--wasi preview2` from `wasi.py`
# Bug report
### Bug description:
Need to remove
https://github.com/python/cpython/blob/23f2e8f13c4e4a34106cf96fad9329cbfbf8844d/Tools/wasm/wasi.py#L300-L301
as we only officially support preview1.
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Other
<!-- gh-linked-prs -->
### Linked PRs
* gh-127704
<!-- /gh-linked-prs -->
| 0fc4063747c96223575f6f5a0562eddf2ed0ed62 | 5b6635f772d187d6049a56bfea76855644cd4ca1 |
python/cpython | python__cpython-127660 | # Regression of 3.13.1 for module paths in from import
# Bug report
### Bug description:
```python
import os
print("FILE", os.__file__, os.__spec__)
def localImportFailure():
try:
from os import listdir, listdir2, path
except Exception as e:
print("gives", type(e), repr(e))
print("From import that fails in the middle", end=" ")
localImportFailure()
```
This code outputs with 3.13.1 the following:
```
FILE C:\Python313_64\Lib\os.py ModuleSpec(name='os', loader=<class '_frozen_importlib.FrozenImporter'>, origin='frozen')
From import that fails in the middle gives <class 'ImportError'> ImportError("cannot import name 'listdir2' from 'os' (unknown location)")
```
As you can see, the `__file__` and `__spec__` values are fine. I found this in a regression test of Nuitka which worked with 3.13.0 and all Python versions before it, giving the proper path from Python (which I use as a test oracle to know what the behavior to compare against is).
From my look at the code, this "unknown location" is coming from a code path that tries to recognize shadowed stdlib modules, a new feature added in 3.13.1. Can you please consider repairing it for 3.13.2? I think file paths are a very important part of exceptions for developers.
The issue is not OS specific. It only occurs with 3.13.1, not with 3.13.0.
### CPython versions tested on:
3.13
### Operating systems tested on:
Windows
<!-- gh-linked-prs -->
### Linked PRs
* gh-127660
* gh-127775
<!-- /gh-linked-prs -->
| 45e6dd63b88a782f2ec96ab1da54eb5a074d8f4c | daa260ebb1c1b20321e7f26df7c9dbd35d4edcbf |
python/cpython | python__cpython-127648 | # Add simple `Reader` and `Writer` protocols
# Feature or enhancement
### Proposal:
See the [discourse thread](https://discuss.python.org/t/simple-reader-and-writer-protocols/61510) for the rationale.
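A hypothetical sketch of what such protocols could look like (the exact names, module, and signatures in the final API may differ):

```python
import io
from typing import Protocol, runtime_checkable

@runtime_checkable
class Reader(Protocol):
    def read(self, size: int = ..., /) -> bytes: ...

@runtime_checkable
class Writer(Protocol):
    def write(self, data: bytes, /) -> int: ...

# runtime_checkable only verifies method presence, not signatures:
print(isinstance(io.BytesIO(), Reader), isinstance(io.BytesIO(), Writer))  # True True
```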
### Has this already been discussed elsewhere?
I have already discussed this feature proposal on Discourse
### Links to previous discussion of this feature:
https://discuss.python.org/t/simple-reader-and-writer-protocols/61510
<!-- gh-linked-prs -->
### Linked PRs
* gh-127648
<!-- /gh-linked-prs -->
| c6dd2348ca61436fc1444ecc0343cb24932f6fa7 | 9c691500f9412ecd8f6221c20984dc7a55a8a9e8 |
python/cpython | python__cpython-127759 | # Add tests for `dis` command-line interface
The `dis` CLI (added in 3.10 or so) is lacking tests. I'll be writing them tomorrow or tonight, depending on when I'm done with my IRL tasks.
I think we can backport tests for all command-line options that were introduced in previous versions (up to 3.12) and only write tests for those that were introduced in 3.14 (namely `--show-positions` (#123168) and `--specialized` (#127414), and maybe more, but I added those two myself so I know they are 3.14+). To that end, I'll write two separate PRs so as to ease backports.
cc @iritkatriel
<!-- gh-linked-prs -->
### Linked PRs
* gh-127759
* gh-127780
* gh-127781
<!-- /gh-linked-prs -->
| e85f2f1703e0f79cfd0d0e3010190b71c0eb18da | 5eb7fd4d0fc37b91058086181afebec41e66e5ad |
python/cpython | python__cpython-127683 | # Emscripten: Add ctypes to the Emscripten build
<!-- gh-linked-prs -->
### Linked PRs
* gh-127683
<!-- /gh-linked-prs -->
| 3b18af964da9814474a5db9e502962c7e0593e8d | 5c89adf385aaaca97c2ee9074f8b1fda0f57ad26 |
python/cpython | python__cpython-127628 | # Emscripten: Add function to add JS breakpoint
For debugging the Python test suite on the Emscripten platform, it's quite useful to be able to inject a breakpoint for the JavaScript debugger in Python code. This is particularly helpful when debugging problems with the file system -- the code paths I want to inspect are hit hundreds of times on startup. Inserting a breakpoint directly above the failing Python code makes everything much easier.
<!-- gh-linked-prs -->
### Linked PRs
* gh-127628
<!-- /gh-linked-prs -->
| 25eee578c8e369b027da6d9d2725f29df6ef1cbd | 8b3cccf3f9508572d85b0044519f2bd5715dacad |
python/cpython | python__cpython-128503 | # configure is checking for `ttyname` but code is using `ttyname_r`
# Bug report
### Bug description:
PR https://github.com/python/cpython/pull/14868 by @chibby0ne changed the code to use `ttyname_r` but the `configure` script is checking for `ttyname`.
There are some old systems, such as OSX Tiger, that don't have `ttyname_r` (at least not by default), so it'd be nice if `ttyname_r` was checked for separately with fallback to `ttyname`.
### CPython versions tested on:
3.11, 3.12
### Operating systems tested on:
macOS
<!-- gh-linked-prs -->
### Linked PRs
* gh-128503
* gh-128598
* gh-128599
<!-- /gh-linked-prs -->
| e08b28235a863323ca3a7e444776bb7803e77caf | 07e6aa2efc5b2c7ecbbdb41636013c64e335a2ba |
python/cpython | python__cpython-127657 | # No validation on erroneous creation of 2 or more parameter variables (VAR_KEYWORD and VAR_POSITIONAL) in inspect.Signature
# Bug report
### Bug description:
We can create `inspect.Signature` with two or more variable parameters, but this will be syntactically incorrect when defining the function manually:
```python
import inspect
parameters = [
inspect.Parameter("args", kind=inspect.Parameter.VAR_POSITIONAL),
inspect.Parameter("other_args", kind=inspect.Parameter.VAR_POSITIONAL),
inspect.Parameter("kwargs", kind=inspect.Parameter.VAR_KEYWORD),
inspect.Parameter("other_kwargs", kind=inspect.Parameter.VAR_KEYWORD),
]
signature = inspect.Signature(parameters)
print(signature)
```
OUTPUT:
```
(*args, *other_args, **kwargs, **other_kwargs)
```
And naturally we get a syntax error when defining a function with such a signature:
```python
def func(*args, *other_args, **kwargs, **other_kwargs):
...
```
OUTPUT:
```
File "/home/apostol_fet/inspect_signature.py", line 13
def func(*args, *other_args, **kwargs, **other_kwargs):
^
SyntaxError: * argument may appear only once
```
Can we add validations for this case in `inspect.Signature.__init__`?
I would like to send a PR to fix this if it is not intended behavior.
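A sketch of the proposed validation, as a wrapper (the real fix would live inside `inspect.Signature.__init__` itself; `make_signature` is a hypothetical name):

```python
import inspect
from collections import Counter

def make_signature(parameters):
    # Mirror the syntax rule that * and ** parameters may each
    # appear at most once in a function definition.
    kinds = Counter(p.kind for p in parameters)
    if kinds[inspect.Parameter.VAR_POSITIONAL] > 1:
        raise ValueError("* argument may appear only once")
    if kinds[inspect.Parameter.VAR_KEYWORD] > 1:
        raise ValueError("** argument may appear only once")
    return inspect.Signature(parameters)
```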
### CPython versions tested on:
3.12
### Operating systems tested on:
Linux, Windows
<!-- gh-linked-prs -->
### Linked PRs
* gh-127657
<!-- /gh-linked-prs -->
| 1503fc8f88d4903e61f76a78a30bcd581b0ee0cd | 70154855cf698560dd9a5e484a649839cd68dc7c |
python/cpython | python__cpython-128159 | # Add a way of printing a C backtrace to `faulthandler`
# Feature or enhancement
### Proposal:
The `faulthandler` module allows registering a `SIGSEGV` handler to print a Python stacktrace if the program encounters a segfault. However, when developing C/C++ extension modules, that may not be particularly useful on its own, and you also want the C stacktrace.
The suggested API would be a kwarg to [faulthandler.enable()](https://docs.python.org/3/library/faulthandler.html#faulthandler.enable).
Implementation could use https://github.com/timmaxw/cfaulthandler as a starting point, https://github.com/timmaxw/cfaulthandler/commit/561dbdd2da4e5afa846f9742cfb9cbb36e134214 in particular.
The availability/usability of the feature would likely depend on platform and/or compile flags.
### Has this already been discussed elsewhere?
I have already discussed this feature proposal on Discourse
### Links to previous discussion of this feature:
https://discuss.python.org/t/print-c-stacktrace-with-faulthandler/56834
where @gpshead approved of opening a feature request
<!-- gh-linked-prs -->
### Linked PRs
* gh-128159
* gh-132800
* gh-132840
* gh-132841
* gh-132854
* gh-132897
* gh-133040
* gh-133071
* gh-133081
* gh-136081
* gh-136102
<!-- /gh-linked-prs -->
| 8dfa840773d1d6bae1bf6e0dfa07618ea10c9d71 | ea8ec95cfadbf58a11ef8e41341254d982a1a479 |
python/cpython | python__cpython-127717 | # `_Py_RefcntAdd` doesn't increment ref count stats
# Bug report
### Bug description:
Hi,
I am unsure if this intentional or not so apologies if this turns out not to be a bug.
The code for `_Py_RefcntAdd` does not include `_Py_INCREF_STAT_INC`. This appears to mean the statistics do not track these additions to the reference count, which are increments (although not increments by `1`, they are increasing the ref count in a single operation).
https://github.com/python/cpython/blob/ad9d059eb10ef132edd73075fa6d8d96d95b8701/Include/internal/pycore_object.h#L121
I believe this may be an oversight because this stat should be tracked, and also because the corresponding decrement ref count code does call `_Py_DECREF_STAT_INC` (e.g. https://github.com/python/cpython/blob/ad9d059eb10ef132edd73075fa6d8d96d95b8701/Include/internal/pycore_object.h#L383 )
I have checked the callers of `_Py_RefcntAdd` (in `listobject.c` and `tupleobject.c`) and they do not separately perform the stats tracking.
Is this an issue or is there a reason this ref count increment is excluded?
Thanks,
Ed
### CPython versions tested on:
3.12, 3.13
### Operating systems tested on:
Linux, macOS
<!-- gh-linked-prs -->
### Linked PRs
* gh-127717
* gh-127963
* gh-128712
* gh-128713
<!-- /gh-linked-prs -->
| ab05beb8cea62636bd86f6f7cf1a82d7efca7162 | 7900a85019457c14e8c6abac532846bc9f26760d |
python/cpython | python__cpython-127587 | # `multiprocessing.Pool` does not properly restore blocked signals
# Bug report
### Bug description:
When instantiating a `multiprocessing.Pool`, for start methods "spawn" and "forkserver", the resource tracker must register a signal mask in the parent process to block some signals prior to creating the spawned childs, as explained by the comment below:
https://github.com/python/cpython/blob/bfb0788bfcaab7474c1be0605552744e15082ee9/Lib/multiprocessing/resource_tracker.py#L152-L164
The comment makes sense, however the way in which the mask is unregistered seems like a bug, because it can have unintended effects on the parent process.
```python
import signal
from multiprocessing import Pool, set_start_method
def get_blocked_signals():
return signal.pthread_sigmask(signal.SIG_BLOCK, {})
if __name__ == "__main__":
set_start_method("spawn")
signal.pthread_sigmask(signal.SIG_BLOCK, {signal.SIGTERM})
print(f"I am blocking {get_blocked_signals()}")
my_pool = Pool(5)
print(f"After creating Pool, now I am blocking {get_blocked_signals()}")
```
Running this code on the latest commit here, the output is as follows:
```
I am blocking {<Signals.SIGTERM: 15>}
After creating Pool, now I am blocking set()
```
This does not match what I would expect to see, I would have expected the set of blocked signals to remain unchanged after the call to construct the Pool. As far as I know I don't think Pool is supposed to have side effects on the signal handling of the main process.
It appears that in the resource tracker, after spawning the child process, the tracker unblocks the ignored signals rather than restoring the previous sigmask. This works fine for most use cases but has the unintended side effect of unblocking any of the ignored signals that were already blocked prior (as shown in the example above).
This seems relatively trivial to fix so I should have a PR up shortly.
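The save-and-restore shape of the fix can be sketched like this (POSIX-only, since `pthread_sigmask` is unavailable on Windows; the real change is in the resource tracker, not a context manager):

```python
import signal
from contextlib import contextmanager

@contextmanager
def signals_blocked(sigs):
    # pthread_sigmask(SIG_BLOCK, ...) returns the previous mask;
    # restoring it with SIG_SETMASK, instead of unconditionally
    # unblocking `sigs`, leaves pre-existing blocked signals intact.
    old_mask = signal.pthread_sigmask(signal.SIG_BLOCK, sigs)
    try:
        yield
    finally:
        signal.pthread_sigmask(signal.SIG_SETMASK, old_mask)
```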
One important thing to note here is with the fix, child processes created by spawn or forkserver will now inherit blocked signals such as SIGTERM, if they are blocked in the parent process. This is already the preexisting behavior for signals that weren't ignored by the resource manager, such as SIGUSR1. Calling methods like terminate(), or garbage collecting the Pool, will hang due to the child processes blocking SIGTERM. In this case Pool cleanup should be handled in the parent using close() followed by join(), or when the Pool is created an initializer is provided in the constructor to unblock SIGTERM in the child. To me this feels more in line with expected behavior.
It might be worthwhile to either adjust Pool.terminate() to call kill() (i.e. SIGKILL) instead on the workers or, to avoid breaking compatibility, provide an implementation of kill() in Pool to match parity with Process, but I'll leave that up for discussion for now.
Please let me know if anything here seems inaccurate or if there are any questions.
### CPython versions tested on:
3.9, 3.10, 3.11, 3.12, 3.13, CPython main branch
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-127587
* gh-127973
* gh-127983
* gh-128011
* gh-128298
* gh-128299
<!-- /gh-linked-prs -->
| 1d276ec6f8403590a6a1a18c560ce75b9221572b | e4981e33b82ac14cca0f2d9b95257301fa201810 |
python/cpython | python__cpython-127612 | # Non-thread-safe use of `Py_SET_REFCNT` in destructors
# Bug report
When calling finalizers from dealloc, we temporarily increase the refcount with `Py_SET_REFCNT` and then decrease it again (also with `Py_SET_REFCNT`). This isn't thread-safe in the free threading build because some other thread may concurrently try to increase the refcount during a racy dictionary or list access.
https://github.com/python/cpython/blob/dabcecfd6dadb9430733105ba36925b290343d31/Objects/object.c#L552-L566
We have similar issues with some watcher events:
https://github.com/python/cpython/blob/dabcecfd6dadb9430733105ba36925b290343d31/Objects/funcobject.c#L1095-L1102
This can lead to crashes when we have racy accesses to objects that may be concurrently finalized. For example:
```
./python -m test test_free_threading -m test_racing_load_super_attr -v -F
Test (un)specialization of LOAD_SUPER_ATTR opcode. ... ./Include/internal/pycore_object.h:593: _Py_NegativeRefcount: Assertion failed: object has negative ref count
Enable tracemalloc to get the memory block allocation traceback
<object at 0x200041800f0 is freed>
Fatal Python error: _PyObject_AssertFailed: _PyObject_AssertFailed
Python runtime state: initialized
```
(Observed more frequently on macOS, but also occurs on Linux)
In macOS buildbot: https://buildbot.python.org/#/builders/1368/builds/2251/steps/6/logs/stdio
<!-- gh-linked-prs -->
### Linked PRs
* gh-127612
* gh-127659
<!-- /gh-linked-prs -->
| f4f530804b9d8f089eba0f157ec2144c03b13651 | 657d0e99aa8754372786120d6ec00c9d9970e775 |
python/cpython | python__cpython-127568 | # UBSan: misaligned memory loads in `Objects/dictobject.c`
# Bug report
`clang` 18+'s [undefined behavior sanitizer](https://devguide.python.org/development-tools/clang/) reports two cases of misaligned load:
```
Objects/unicodeobject.c:5088:24: runtime error: load of misaligned address 0x0000008b74be for type 'const size_t' (aka 'const unsigned long'), which requires 8 byte alignment
0x0000008b74be: note: pointer points here
20 25 73 2e 5f 5f 72 65 70 72 5f 5f 00 72 61 77 20 73 74 72 65 61 6d 20 68 61 73 20 62 65 65 6e
^
```
and
```
Objects/dictobject.c:2015:40: runtime error: load of misaligned address 0x5f7d064233d1 for type 'PyDictUnicodeEntry *', which requires 8 byte alignment
0x5f7d064233d1: note: pointer points here
00 00 00 ff ff ff ff ff ff ff ff 00 00 00 00 00 00 00 00 74 da 0f 06 7d 5f 00 00 df 01 00 00 14
^
```
<!-- gh-linked-prs -->
### Linked PRs
* gh-127568
* gh-127798
* gh-127813
* gh-135463
<!-- /gh-linked-prs -->
| 9af96f440618304e7cc609c246e1f8c8b2d7a119 | cef0a90d8f3a94aa534593f39b4abf98165675b9 |
python/cpython | python__cpython-127564 | # Remove `TODO` comment in `_pydatetime._isoweek1monday`
# Feature or enhancement
### Proposal:
According to the existing note `# XXX This could be done more efficiently`, the `_isoweek1monday` function can be improved.
### Current Implementation:
```python
def _isoweek1monday(year):
# Helper to calculate the day number of the Monday starting week 1
# XXX This could be done more efficiently
THURSDAY = 3
firstday = _ymd2ord(year, 1, 1)
firstweekday = (firstday + 6) % 7 # See weekday() above
week1monday = firstday - firstweekday
if firstweekday > THURSDAY:
week1monday += 7
return week1monday
```
### Proposed Change
Replace the current implementation with simplified logic that is both more performant and easier to understand.
```python
def _isoweek1monday_new(year):
# Calculate the ordinal day of the Monday starting ISO week 1.
# ISO week 1 is defined as the week that contains January 4th.
jan4 = _ymd2ord(year, 1, 4)
jan4_weekday = (jan4 + 6) % 7
return jan4 - jan4_weekday
```
### Rationale
The new logic:
1. Aligns directly with the ISO week definition, which makes it easier to comprehend.
2. Reduces unnecessary computations, improving performance slightly.
### Validation and Performance Comparison
Using timeit to compare the current and proposed implementations:
```python
import timeit
from _pydatetime import _isoweek1monday, _ymd2ord
def _isoweek1monday_new(year):
jan4 = _ymd2ord(year, 1, 4)
jan4_weekday = (jan4 + 6) % 7
return jan4 - jan4_weekday
# Validation results:
# >>> all(_isoweek1monday_new(i) == _isoweek1monday(i) for i in range(2000))
# True
# Timing results:
# >>> timeit.timeit('_isoweek1monday(2000)', globals=globals())
# 0.4805744589539245
# >>> timeit.timeit('_isoweek1monday_new(2000)', globals=globals())
# 0.4299807079951279
```
### Has this already been discussed elsewhere?
No response given
### Links to previous discussion of this feature:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-127564
* gh-128500
* gh-128501
<!-- /gh-linked-prs -->
| e8b6b39ff707378da654e15ccddde9c28cefdd30 | 8ade15343d5daec3bf79ff7c47f03726fb2bcadf |
python/cpython | python__cpython-127590 | # Remove comment questioning 4-digit restriction for ‘Y’ in datetime.strptime patterns
I found a comment in _strptime.py that I believe is unnecessary and potentially outdated:
https://github.com/python/cpython/blob/8c3fd1f245fbdc747966daedfd22ed48491309dc/Lib/_strptime.py#L304
The comment reads:
`# XXX: Does 'Y' need to worry about having less or more than 4 digits?`
This question seems irrelevant based on the current documentation and implementation standards. Here are my reasons:
1. Python documentation specifies a 4-digit restriction
The [Python documentation](https://docs.python.org/3/library/datetime.html#datetime.datetime.strptime) for datetime.strptime states: The `strptime()` method can parse years in the full [1, 9999] range, but years < 1000 must be zero-filled to 4-digit width. This makes it clear that Y is explicitly designed for 4-digit years only.
2. Supporting more than 4 digits is impractical
Years with 5 or more digits cannot be represented in the resulting datetime type. The datetime implementation enforces the [1, 9999] year range, which inherently limits support for larger year values.
3. Support for fewer than 4 digits is generally not standard-compliant
Some popular specifications expect a minimum of 4 digits for representing a year:
* ISO 8601: Requires at least 4 digits for year representation.
* RFC 3339: Also requires at least 4 digits for the year.
* POSIX.1-2008: While the standard doesn’t explicitly mandate the number of digits for %Y, implementations commonly expect 4 digits as a minimum.
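The documented 4-digit behaviour can be checked directly; a quick sketch (current CPython, where `_strptime.py` matches `%Y` against exactly four digits):

```python
from datetime import datetime

# Years below 1000 must be zero-filled to four digits for %Y
d = datetime.strptime("0099-01-02", "%Y-%m-%d")
print(d.year)  # 99

# A bare two-digit year does not match %Y
try:
    datetime.strptime("99-01-02", "%Y-%m-%d")
except ValueError as exc:
    print("rejected:", exc)
```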
Given these points, I propose removing the comment as it no longer adds value and could potentially confuse contributors or maintainers. If desired, I can submit a pull request to address this.
<!-- gh-linked-prs -->
### Linked PRs
* gh-127590
* gh-127649
* gh-127650
<!-- /gh-linked-prs -->
| 51cfa569e379f84b3418db0971a71b1ef575a42b | 6bc3e830a518112a4e242217807681e3908602f4 |
python/cpython | python__cpython-127765 | # Update os.walk example to avoid .git subdirs instead of CVS subdirs
# Documentation
While historically humorous or painful (depending on your experience), the os.walk example shows how to avoid CVS directories. Someone could update this to be more practically modern and avoid .git subdirectories. https://docs.python.org/3/library/os.html#os.walk
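A sketch of the pruning idiom the updated example could use (the idiom is identical to the CVS version, only the directory name changes; the temporary tree below is just for demonstration):

```python
import os
import tempfile

# Build a tiny tree containing a .git subdirectory
base = tempfile.mkdtemp()
os.makedirs(os.path.join(base, ".git", "objects"))
os.makedirs(os.path.join(base, "src"))

visited = []
for root, dirs, files in os.walk(base):
    visited.append(root)
    if ".git" in dirs:
        dirs.remove(".git")  # don't visit .git directories

# .git (and everything under it) was never entered
print(all(".git" not in r for r in visited))  # True
```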
<!-- gh-linked-prs -->
### Linked PRs
* gh-127765
* gh-131869
* gh-131873
<!-- /gh-linked-prs -->
| a5949986d631391d37b1b329ad8badcf2000f9a9 | 9c1e85fd64cf6a85f7c96390d7b2dfad03a76944 |
python/cpython | python__cpython-127580 | # Free threading locking issues in `listobject.c`
# Bug report
In https://github.com/python/cpython/pull/127524, @mpage suggested adding an assert to check that the list object is locked when calling `ensure_shared_on_resize`. This uncovers a few more locking issues.
Additionally, we may return out of the critical section without unlocking here:
https://github.com/python/cpython/blob/c7dec02de2ed4baf3cd22ad094350265b52c18af/Objects/listobject.c#L956-L964
<!-- gh-linked-prs -->
### Linked PRs
* gh-127580
* gh-127613
<!-- /gh-linked-prs -->
| e51da64ac3bc6cd45339864db32d05115af39ead | 51cfa569e379f84b3418db0971a71b1ef575a42b |
python/cpython | python__cpython-127532 | # asyncio's `BaseSelectorEventLoop._accept_connection` `return`s when it should `continue` on `ConnectionAbortedError`
# Bug report
### Bug description:
PR incoming! It's a 10 second fix.
## TLDR
`BaseSelectorEventLoop._accept_connection` incorrectly `return`s early from its `for _ in range(backlog)` loop when `accept(2)` returns `-ECONNABORTED` (raised in Python as `ConnectionAbortedError`), whereas it should `continue`. This was introduced in #27906 by [this commit](https://github.com/jb2170/cpython/commit/a1b0e7db7315ff0d8d0f8edc056f387f198cf5a1#diff-905d4384eaa0f1d54083b6d643e90b9404e32339f2c151e880491cb2e595f207L177), which whilst great, had a slight oversight in not separating `ConnectionAbortedError` from (`BlockingIOError` and `InterruptedError`) when putting them inside a loop ;) Ironically the commit was introduced to give a more contiguous timeslot for accepting sockets in an eventloop, and now with the fix to this issue it'll be *even more* contiguous on OpenBSD, continuing past the aborted connections instead of the event loop having to re-poll the server socket and call `_accept_connection` again. All is good! :D
## A brief explanation / reproduction of `ECONNABORTED` from `accept(2)`, for `AF_INET` on OpenBSD
It's worth writing this up as there is not much documentation online about `ECONNABORTED`s occurrences from `accept(2)`, and I have been intermittently in pursuit of this errno for over 2 years!
Some OS kernels, including OpenBSD and Linux (tested and confirmed), continue queueing connections that were aborted before `accept(2)` is called. However, the behaviour of `accept`'s return value differs between OpenBSD and Linux!
Suppose the following sequence of TCP packets occurs when a client connects to a server, the client's kernel and server's kernel communicating over TCP/IP, and this happens before the server's userspace program calls `accept` on its listening socket:
`>SYN, <SYNACK, >ACK, >RST`, ie a standard TCP 3WHS but followed by the client sending a `RST`.
- On OpenBSD when the server's userspace program calls `accept` on the listening socket it receives `-1`, with `errno==ECONNABORTED`
- On Linux when the server's userspace program calls `accept` on the listening socket the call succeeds (a valid socket is returned), with no `errno` set, ie everything looks fine. But of course when trying to `send` on the socket `EPIPE` is either set as `errno` or delivered as `SIGPIPE`
One can test this with the following script
```py
#!/usr/bin/env python3
import socket
import time
import struct
ADDR = ("127.0.0.1", 3156)
def connect_disconnect_client(*, enable_rst: bool):
client = socket.socket()
if enable_rst:
# send an RST when we call close()
client.setsockopt(socket.SOL_SOCKET, socket.SO_LINGER, struct.pack("ii", 1, 0))
client.connect(ADDR)
client.close()
time.sleep(0.1) # let the FIN/RST reach the kernel's TCP/IP machinery
def main() -> None:
server_server = socket.socket()
server_server.bind(ADDR)
server_server.listen(64)
connect_disconnect_client(enable_rst=True)
connect_disconnect_client(enable_rst=False)
connect_disconnect_client(enable_rst=False)
connect_disconnect_client(enable_rst=True)
connect_disconnect_client(enable_rst=False)
for _ in range(5):
try:
server_client, server_client_addr = server_server.accept()
print("Okay")
except ConnectionAbortedError as e:
print(f"{e.strerror}")
if __name__ == "__main__":
main()
```
On Linux the output is
```
Okay
Okay
Okay
Okay
Okay
```
On OpenBSD the output is
```
Software caused connection abort
Okay
Okay
Software caused connection abort
Okay
```
Observe that both kernels kept the aborted connections queued. I used OpenBSD 7.4 on [Instant Workstation](https://instantworkstation.com/virtual-machine) to test this.
## `BaseSelectorEventLoop._accept_connection`'s fix
To demonstrate `asyncio`'s issue, we create the following test script to connect five clients to a `base_events.Server` being served in a `selector_events.BaseSelectorEventLoop`. Two of the clients are going to be naughty and send an `RST` to abort their connection before it is accepted into userspace. We monkey patch in a `print()` statement just to let us know when `BaseSelectorEventLoop._accept_connection` is called. Ideally this should be once, since the server's default `backlog` of `100` is sufficient, but as we will see OpenBSD's raising of `ConnectionAbortedError` changes this:
```py
#!/usr/bin/env python3
import socket
import asyncio
import time
import struct
ADDR = ("127.0.0.1", 31415)
def connect_disconnect_client(*, enable_rst: bool):
client = socket.socket()
if enable_rst:
# send an RST when we call close()
client.setsockopt(socket.SOL_SOCKET, socket.SO_LINGER, struct.pack("ii", 1, 0))
client.connect(ADDR)
client.close()
time.sleep(0.1) # let the FIN/RST reach the kernel's TCP/IP machinery
async def handler(reader: asyncio.StreamReader, writer: asyncio.StreamWriter):
try:
print("connected handler")
finally:
writer.close()
# monkey patch in a print() statement just for debugging sake
import asyncio.selector_events
_accept_connection_old = asyncio.selector_events.BaseSelectorEventLoop._accept_connection
def _accept_connection_new(*args, **kwargs):
print("_accept_connection called")
return _accept_connection_old(*args, **kwargs)
asyncio.selector_events.BaseSelectorEventLoop._accept_connection = _accept_connection_new
async def amain() -> None:
server = await asyncio.start_server(handler, *ADDR)
connect_disconnect_client(enable_rst=True)
connect_disconnect_client(enable_rst=False)
connect_disconnect_client(enable_rst=False)
connect_disconnect_client(enable_rst=True)
connect_disconnect_client(enable_rst=False)
await server.start_serving() # listen(3)
await server.serve_forever()
def main() -> None:
asyncio.run(amain())
if __name__ == "__main__":
main()
```
On Linux the output is
```
_accept_connection called
connected handler
connected handler
connected handler
connected handler
connected handler
```
On OpenBSD the output is
```
_accept_connection called
_accept_connection called
_accept_connection called
connected handler
connected handler
connected handler
```
The first `_accept_connection` returns immediately because of client 1's `ECONNABORTED`. The second `_accept_connection` brings in clients 2 and 3, then returns because of 4's `ECONNABORTED`, and then the third `_accept_connection` returns due to client 5's `ECONNABORTED`.
With the PR patch incoming the OpenBSD behaviour / output is corrected to
```
_accept_connection called
connected handler
connected handler
connected handler
```
All connections are accepted in one single stroke of `_accept_connection`.
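The corrected control flow can be sketched as a standalone helper (a hypothetical simplification, not the actual patch to `_accept_connection`): `BlockingIOError`/`InterruptedError` end the loop because the accept queue is drained, while `ConnectionAbortedError` merely skips one dead connection and keeps going:

```python
import socket

def drain_accepts(server_sock, backlog):
    """Accept up to `backlog` pending connections, skipping aborted ones."""
    accepted = []
    for _ in range(backlog):
        try:
            conn, _addr = server_sock.accept()
        except (BlockingIOError, InterruptedError):
            break  # nothing left in the accept queue
        except ConnectionAbortedError:
            continue  # client aborted before accept(); keep draining
        accepted.append(conn)
    return accepted
```

On Linux the `ConnectionAbortedError` branch is effectively unreachable (as shown above), but on OpenBSD it is what keeps the accept sweep contiguous.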
## The Odyssey for `ECONNABORTED` on Linux
This is just a personal addendum for the record.
I use Linux and I like collecting all the `signal(7)`s and `errno(3)`s, it reminds me in a way of [Lego Star Wars](https://i.ytimg.com/vi/ar0Lx-ancNs/maxresdefault.jpg); it's nice to have a complete collection. Part of [Python's exception hierarchy](https://docs.python.org/3/library/exceptions.html#exception-hierarchy) is
```
ConnectionError
├── BrokenPipeError
├── ConnectionAbortedError
├── ConnectionRefusedError
└── ConnectionResetError
```
In the past two years of me doing socket programming on Linux, for `AF_INET` and `AF_UNIX` I have easily been able to produce `ConnectionRefusedError`, `ConnectionResetError`, and `BrokenPipeError`, but I have still never been able to produce `ConnectionAbortedError` with `accept()`. Looking at the Linux kernel's source code for [`net/socket.c`](https://github.com/torvalds/linux/blob/e70140ba0d2b1a30467d4af6bcfe761327b9ec95/net/socket.c#L1948) and `net/ipv4/` implementing sockets and TCP/IP I can only conclude that `ECONNABORTED` could possibly occur as a race condition between `ops->accept()` and `ops->getname()`, where there is a nanosecond when the socket is not protected by a spinlock.
I've tried various TCP situations including `TCP_FASTOPEN`, `TCP_NODELAY`, `O_NONBLOCK` `connect()`s, combined with `SO_LINGER`, trying to create the most disgusting TCP handshakes, all to no avail. `SYN,SYNACK,RST` gets dropped and does not get `accept()`ed.
So to any similarly eclectically minded programmers out there who wish to know for the record how to get `accept(2)` to produce `ECONNABORTED`: just try the scripts above on OpenBSD and save your time lol!
This one's for you, OpenBSD friends, thanks for OpenSSH!
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Other
<!-- gh-linked-prs -->
### Linked PRs
* gh-127532
<!-- /gh-linked-prs -->
| 830e10651b1f45cd0af36ff611397b9f53171220 | bb2dfadb9221fa3035fda42a2c153c831013e3d3 |
python/cpython | python__cpython-127525 | # Document `start_response` parameters in `wsgiref` functions
# Bug report
### Bug description:
The [example WSGI server in the documentation](https://docs.python.org/3/library/wsgiref.html#examples) sets the headers as a list inside the `hello_world_app` function.
```python
from wsgiref.simple_server import make_server
def hello_world_app(environ, start_response):
status = "200 OK" # HTTP Status
headers = [("Content-type", "text/plain; charset=utf-8")] # HTTP Headers
start_response(status, headers)
# The returned object is going to be printed
return [b"Hello World"]
with make_server("", 8000, hello_world_app) as httpd:
print("Serving on port 8000...")
# Serve until process is killed
httpd.serve_forever()
```
When moving the headers from the function to a global (as these headers might not change), this global list of tuples will be modified by the calls to `start_response`. This is because that function creates a `wsgiref.headers.Headers` from the passed reference to the global variable, instead of using a copy.
```python
from wsgiref.simple_server import make_server
count = 10
increase = True
headers = [('Content-type', 'text/plain; charset=utf-8')]
def bug_reproducer_app(environ, start_response):
status = '200 OK'
start_response(status, headers)
# Something that will change its Content-Length on every request
global count
count -= 1
if count == 0:
count = 10
return [b"Hello " + (b"x" * count)]
with make_server('', 8000, bug_reproducer_app) as httpd:
print("Serving on port 8000...")
httpd.serve_forever()
```
This results in the `Content-Length` value being set once but never updated, producing too-short or too-long answers as shown by curl:
```console
$ # First request will set the Content-Length just fine
$ curl localhost:8000 -v
* Host localhost:8000 was resolved.
* IPv6: ::1
* IPv4: 127.0.0.1
* Trying [::1]:8000...
* connect to ::1 port 8000 from ::1 port 60888 failed: Connection refused
* Trying 127.0.0.1:8000...
* Connected to localhost (127.0.0.1) port 8000
> GET / HTTP/1.1
> Host: localhost:8000
> User-Agent: curl/8.5.0
> Accept: */*
>
* HTTP 1.0, assume close after body
< HTTP/1.0 200 OK
< Date: Mon, 02 Dec 2024 16:34:16 GMT
< Server: WSGIServer/0.2 CPython/3.12.3
< Content-type: text/plain; charset=utf-8
< Content-Length: 15
<
* Closing connection
Hello xxxxxxxxx%
```
```console
$ # Second request will reuse the previous Content-Length set in the global variable
$ curl localhost:8000 -v
* Host localhost:8000 was resolved.
* IPv6: ::1
* IPv4: 127.0.0.1
* Trying [::1]:8000...
* connect to ::1 port 8000 from ::1 port 60462 failed: Connection refused
* Trying 127.0.0.1:8000...
* Connected to localhost (127.0.0.1) port 8000
> GET / HTTP/1.1
> Host: localhost:8000
> User-Agent: curl/8.5.0
> Accept: */*
>
* HTTP 1.0, assume close after body
< HTTP/1.0 200 OK
< Date: Mon, 02 Dec 2024 16:34:18 GMT
< Server: WSGIServer/0.2 CPython/3.12.3
< Content-type: text/plain; charset=utf-8
< Content-Length: 15
<
* transfer closed with 1 bytes remaining to read
* Closing connection
curl: (18) transfer closed with 1 bytes remaining to read
Hello xxxxxxxx%
```
The solution for this problem would be to perform a shallow copy of the headers list, ideally in the `wsgiref.headers.Headers` class, so that it does not modify the passed reference but its own copy.
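The aliasing is easy to demonstrate in isolation, with no server involved (a minimal sketch):

```python
from wsgiref.headers import Headers

original = [("Content-type", "text/plain; charset=utf-8")]
h = Headers(original)        # Headers stores the list by reference, not a copy
h["Content-Length"] = "15"

# The caller's list has been mutated behind its back
print(original)
# [('Content-type', 'text/plain; charset=utf-8'), ('Content-Length', '15')]
```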
### CPython versions tested on:
3.12
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-127525
* gh-130504
* gh-130505
<!-- /gh-linked-prs -->
| 39ba4b6619026bbe65c5844702a8a0bb6e75c4f7 | f8eefc2f35c002b6702990add7f33d445977de8d |
python/cpython | python__cpython-127524 | # [Free Threading] test_opcache: test_binary_subscr_list_int(): list_get_item_ref: Assertion `cap != -1 && cap >= size' failed
test_binary_subscr_list_int() crashes randomly with an assertion error if Python is built with ``--disable-gil``:
```
$ ./python -m test test_opcache -v -m test_binary_subscr_list_int -j10 -F
(...)
test_binary_subscr_list_int (test.test_opcache.TestRacesDoNotCrash.test_binary_subscr_list_int) ...
python: Objects/listobject.c:342: list_get_item_ref: Assertion `cap != -1 && cap >= size' failed.
Fatal Python error: Aborted
Thread 0x00007f244350f6c0 (most recent call first):
File "/home/vstinner/python/main/Lib/test/test_opcache.py", line 638 in write
File "/home/vstinner/python/main/Lib/threading.py", line 992 in run
File "/home/vstinner/python/main/Lib/threading.py", line 1041 in _bootstrap_inner
File "/home/vstinner/python/main/Lib/threading.py", line 1012 in _bootstrap
Thread 0x00007f2443d106c0 (most recent call first):
File "/home/vstinner/python/main/Lib/test/test_opcache.py", line 639 in write
File "/home/vstinner/python/main/Lib/threading.py", line 992 in run
File "/home/vstinner/python/main/Lib/threading.py", line 1041 in _bootstrap_inner
File "/home/vstinner/python/main/Lib/threading.py", line 1012 in _bootstrap
Current thread 0x00007f2451be5740 (most recent call first):
File "/home/vstinner/python/main/Lib/test/test_opcache.py", line 632 in read
File "/home/vstinner/python/main/Lib/test/test_opcache.py", line 586 in assert_races_do_not_crash
File "/home/vstinner/python/main/Lib/test/test_opcache.py", line 22 in wrapper
File "/home/vstinner/python/main/Lib/test/test_opcache.py", line 642 in test_binary_subscr_list_int
(...)
Extension modules: _testinternalcapi (total: 1)
```
Bug seen on macOS CI, and reproduced on Linux.
<!-- gh-linked-prs -->
### Linked PRs
* gh-127524
* gh-127533
<!-- /gh-linked-prs -->
| c7dec02de2ed4baf3cd22ad094350265b52c18af | c4303763dac4494300e299e54c079a4a11931a55 |
python/cpython | python__cpython-127526 | # `--enable-pystats` build fails on main
# Bug report
### Bug description:
```
Python/specialize.c: In function ‘store_subscr_fail_kind’:
Python/specialize.c:1821:50: error: ‘PyObject’ {aka ‘struct _object’} has no member named ‘tp_as_mapping’
1821 | PyMappingMethods *as_mapping = container_type->tp_as_mapping;
| ^~
Python/specialize.c:1826:30: error: ‘container’ undeclared (first use in this function)
1826 | if (PyObject_CheckBuffer(container)) {
| ^~~~~~~~~
Python/specialize.c:1826:30: note: each undeclared identifier is reported only once for each function it appears in
In file included from ./Include/Python.h:63,
from Python/specialize.c:1:
Python/specialize.c:1827:31: error: ‘sub’ undeclared (first use in this function); did you mean ‘Sub’?
1827 | if (PyLong_CheckExact(sub) && (!_PyLong_IsNonNegativeCompact((PyLongObject *)sub))) {
| ^~~
./Include/pyport.h:37:38: note: in definition of macro ‘_Py_CAST’
37 | #define _Py_CAST(type, expr) ((type)(expr))
| ^~~~
./Include/object.h:284:43: note: in expansion of macro ‘_PyObject_CAST’
284 | # define Py_IS_TYPE(ob, type) Py_IS_TYPE(_PyObject_CAST(ob), (type))
| ^~~~~~~~~~~~~~
./Include/longobject.h:14:31: note: in expansion of macro ‘Py_IS_TYPE’
14 | #define PyLong_CheckExact(op) Py_IS_TYPE((op), &PyLong_Type)
| ^~~~~~~~~~
Python/specialize.c:1827:13: note: in expansion of macro ‘PyLong_CheckExact’
1827 | if (PyLong_CheckExact(sub) && (!_PyLong_IsNonNegativeCompact((PyLongObject *)sub))) {
| ^~~~~~~~~~~~~~~~~
Python/specialize.c:1830:39: error: ‘PyObject’ {aka ‘struct _object’} has no member named ‘tp_name’
1830 | else if (strcmp(container_type->tp_name, "array.array") == 0) {
| ^~
Python/specialize.c:1865:43: error: passing argument 1 of ‘_PyType_Lookup’ from incompatible pointer type [-Wincompatible-pointer-types]
1865 | PyObject *descriptor = _PyType_Lookup(container_type, &_Py_ID(__setitem__));
| ^~~~~~~~~~~~~~
| |
| PyObject * {aka struct _object *}
In file included from ./Include/object.h:754,
from ./Include/Python.h:72:
./Include/cpython/object.h:281:39: note: expected ‘PyTypeObject *’ {aka ‘struct _typeobject *’} but argument is of type ‘PyObject *’ {aka ‘struct _object *’}
281 | PyAPI_FUNC(PyObject *) _PyType_Lookup(PyTypeObject *, PyObject *);
| ^~~~~~~~~~~~~~
Python/specialize.c: In function ‘_Py_Specialize_StoreSubscr’:
Python/specialize.c:1918:62: error: passing argument 1 of ‘store_subscr_fail_kind’ from incompatible pointer type [-Wincompatible-pointer-types]
1918 | SPECIALIZATION_FAIL(STORE_SUBSCR, store_subscr_fail_kind(container_type));
| ^~~~~~~~~~~~~~
| |
| PyTypeObject * {aka struct _typeobject *}
Python/specialize.c:430:70: note: in definition of macro ‘SPECIALIZATION_FAIL’
430 | _Py_stats->opcode_stats[opcode].specialization.failure_kinds[kind]++; \
| ^~~~
Python/specialize.c:1819:34: note: expected ‘PyObject *’ {aka ‘struct _object *’} but argument is of type ‘PyTypeObject *’ {aka ‘struct _typeobject *’}
1819 | store_subscr_fail_kind(PyObject *container_type)
| ~~~~~~~~~~^~~~~~~~~~~~~~
```
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-127526
<!-- /gh-linked-prs -->
| edefb8678a11a20bdcdcbb8bb6a62ae22101bb51 | c7dec02de2ed4baf3cd22ad094350265b52c18af |
python/cpython | python__cpython-127506 | # Emscripten: Make python.sh function as proper Python CLI
There are several problems with `python.sh` right now:
1. It doesn't work correctly on Macs with BSD coreutils
2. It doesn't mount the native file system into the Emscripten file system, so `python.sh /some/file.py` will fail to find the file
3. It doesn't set the current directory in the Emscripten file system to the same as the host working directory so `python.sh file.py` won't look for `file.py` in the current directory
4. It doesn't inherit environment variables
@freakboy3742
<!-- gh-linked-prs -->
### Linked PRs
* gh-127506
* gh-127632
* gh-127633
* gh-131158
<!-- /gh-linked-prs -->
| 87faf0a9c4aa7f8eb5b6b6c8f6e8f5f99b1e3d9b | 43634fc1fcc88b35171aa79258f767ba6477f764 |
python/cpython | python__cpython-135294 | # Reconsider XML Security warnings / obsolete vulnerabilities
# Documentation
The documentation for the xml.etree.ElementTree API contains the following stark warning:
*Warning: The [xml.etree.ElementTree](https://docs.python.org/3/library/xml.etree.elementtree.html#module-xml.etree.ElementTree) module is not secure against maliciously constructed data. If you need to parse untrusted or unauthenticated data see [XML vulnerabilities](https://docs.python.org/3/library/xml.html#xml-vulnerabilities).*
Similar warnings exist on the documentation pages of other XML standard library functions.
From what I can tell, this warning is outdated, and should probably be reconsidered. If I look at the referenced info here
https://docs.python.org/3/library/xml.html#xml-vulnerabilities
it does say "Vulnerable" for 3 of the 6 issues for the etree API, but each contains a footnote, essentially saying that this is no longer true for a current version of Expat.
Correct me if I'm wrong, but I interpret this to mean that using these APIs is fine and secure, as long as one does not use an outdated version of Expat with known vulnerabilities. I don't think this justifies the stark warning above.
<!-- gh-linked-prs -->
### Linked PRs
* gh-135294
* gh-136359
* gh-136360
<!-- /gh-linked-prs -->
| cb99d992774b67761441e122965ed056bac09241 | d05423a90ce0ee9ad5207dce3dd06ab2397f3d6e |
python/cpython | python__cpython-132294 | # Preserve command-line history after interpreter crash
When the CPython interpreter crashes, recent history entries are lost. An easy reproducer (on any Linux):
```
$ ulimit -v $((1024*256))
$ pip install -q gmpy2
$ python # started with empty history
Python 3.14.0a2+ (heads/main:e2713409cf, Dec 2 2024, 07:50:36) [GCC 12.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from gmpy2 import mpz
>>> mpz(2222222222222211111111122222222222222)**33322222
GNU MP: Cannot allocate memory (size=503998640)
Aborted
$ python # history is empty, again :(
...
```
That "works" both in the old repl and in the new repl. Sometimes you even can't recover all session from the terminal screen.
IMO, if it's a feature - it's a misfeature. I would expect, that all entered input will be preserved in the history. Or such behavior can be optionally turned on. BTW, this feature "doesn't work" in the IPython shell.
<!-- gh-linked-prs -->
### Linked PRs
* gh-132294
<!-- /gh-linked-prs -->
| 276252565ccfcbc6408abcbcbe6af7c56eea1e10 | 614d79231d1e60d31b9452ea2afbc2a7d2f0034b |
python/cpython | python__cpython-127482 | # Add `EPOLLWAKEUP` to the select module
# Feature or enhancement
### Proposal:
This issue suggests adding a new `select` module constant, `EPOLLWAKEUP`; for example:
```
fd = ...  # a socket object
p = ...   # an epoll object
# registering with the proposed EPOLLWAKEUP flag prevents the system
# from suspending (e.g. low-power or standby) while events are pending
p.register(fd, EPOLLWAKEUP)
```
### Has this already been discussed elsewhere?
No response given
### Links to previous discussion of this feature:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-127482
<!-- /gh-linked-prs -->
| 6bc3e830a518112a4e242217807681e3908602f4 | bc0f2e945993747c8b1a6dd66cbe902fddd5758b |
python/cpython | python__cpython-127494 | # pathlib ABCs: add protocols for supporting types
# Feature or enhancement
For typing and documentation purposes, it would be useful to define three protocols in pathlib:
- `Parser`: like `os.path`, but including only pure functionality that's essential for `PurePathBase`
- This [already exists](https://github.com/python/cpython/blob/328187cc4fcdd578db42cf6a16c197c3382157a7/Lib/pathlib/_abc.py#L38-L87), but it's defined and used in an odd way.
- `DirEntry`: like `os.DirEntry`, but without some non-portable methods (like `inode()`)
- `StatResult`: like `os.stat_result`, but without the tuple-like interface, and dropping non-essential attributes
These could be defined in a private module like `pathlib._types`. For performance reasons that module shouldn't be imported by any other pathlib module.
If/when we make `PathBase` public, we'll also make these protocols public.
See also: https://discuss.python.org/t/make-pathlib-extensible/3428/196
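A rough sketch of what the `Parser` protocol might look like (the member list here is illustrative, not the proposed final set):

```python
from typing import Protocol, runtime_checkable

@runtime_checkable
class Parser(Protocol):
    """Pure path syntax, modeled on a subset of the os.path interface."""
    sep: str
    def join(self, path: str, *paths: str) -> str: ...
    def split(self, path: str) -> tuple[str, str]: ...
    def splitext(self, path: str) -> tuple[str, str]: ...
    def normcase(self, path: str) -> str: ...

import posixpath, ntpath
# Both os.path flavours satisfy the protocol structurally
print(isinstance(posixpath, Parser), isinstance(ntpath, Parser))  # True True
```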
<!-- gh-linked-prs -->
### Linked PRs
* gh-127494
* gh-127725
<!-- /gh-linked-prs -->
| 5c89adf385aaaca97c2ee9074f8b1fda0f57ad26 | e85f2f1703e0f79cfd0d0e3010190b71c0eb18da |
python/cpython | python__cpython-127575 | # iOS `clang` aliases crash with "invalid directory name" error if called with arguments having spaces
# Bug report
### Bug description:
While cross-compiling Python packages for iOS, calling the [`arm64-apple-ios-clang`](https://github.com/python/cpython/blob/main/iOS/Resources/bin/arm64-apple-ios-clang) script (and the other aliases as well) fails if arguments contain spaces, for example:
```
arm64-apple-ios-clang -DPACKAGE_TARNAME="aaa" -DPACKAGE_STRING="aaa 1.2.2" -c -o src/main.o src/main.c
```
gives `clang: invalid directory name: '1.2.2"'` error.
To fix that, `$@` inside the script should be wrapped in quotes: `"$@"`.
I'm going to make a PR.
### CPython versions tested on:
3.12
### Operating systems tested on:
macOS
<!-- gh-linked-prs -->
### Linked PRs
* gh-127575
* gh-127624
<!-- /gh-linked-prs -->
| 6cf77949fba7b44f6885794b2028f091f42f5d6c | 87faf0a9c4aa7f8eb5b6b6c8f6e8f5f99b1e3d9b |
python/cpython | python__cpython-127430 | # sysconfig data is generated with the host Python's Makefile when cross-compiling
# Bug report
### Bug description:
Currently, the POSIX sysconfig data is generated with the `Makefile` from the host interpreter, instead of the target. This results in incorrect data, as demonstrated by https://github.com/python/cpython/actions/runs/12090193539/job/33716932397?pr=127426.
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-127430
<!-- /gh-linked-prs -->
| 2950bc50af8fc2539e64731359bfb39b335a614d | e2713409cff5b71b1176b0e3fa63dae447548672 |
python/cpython | python__cpython-127549 | # `test_start_new_thread_failed` flaky: failed on macOS (free threading)
# Bug report
Seen in https://github.com/python/cpython/actions/runs/12082953758/job/33695120037?pr=127399
```
======================================================================
FAIL: test_start_new_thread_failed (test.test_threading.ThreadTests.test_start_new_thread_failed)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/Users/admin/actions-runner/_work/cpython/cpython/Lib/test/test_threading.py", line 1201, in test_start_new_thread_failed
_, out, err = assert_python_ok("-u", "-c", code)
~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^
File "/Users/admin/actions-runner/_work/cpython/cpython/Lib/test/support/script_helper.py", line 182, in assert_python_ok
return _assert_python(True, *args, **env_vars)
File "/Users/admin/actions-runner/_work/cpython/cpython/Lib/test/support/script_helper.py", line 167, in _assert_python
res.fail(cmd_line)
~~~~~~~~^^^^^^^^^^
File "/Users/admin/actions-runner/_work/cpython/cpython/Lib/test/support/script_helper.py", line 80, in fail
raise AssertionError(f"Process return code is {exitcode}\n"
...<10 lines>...
f"---")
AssertionError: Process return code is -6 (SIGABRT)
command line: ['/Users/admin/actions-runner/_work/cpython/cpython/python.exe', '-X', 'faulthandler', '-I', '-u', '-c', 'if 1:\n import resource\n import _thread\n\n def f():\n print("shouldn\'t be printed")\n\n limits = resource.getrlimit(resource.RLIMIT_NPROC)\n [_, hard] = limits\n resource.setrlimit(resource.RLIMIT_NPROC, (0, hard))\n\n try:\n _thread.start_new_thread(f, ())\n except RuntimeError:\n print(\'ok\')\n else:\n print(\'!skip!\')\n ']
```
<!-- gh-linked-prs -->
### Linked PRs
* gh-127549
* gh-127574
<!-- /gh-linked-prs -->
| 13b68e1a61e92a032d255aff5d5af435bbb63e8b | 0cb52220790d8bc70ec325fd89d52b5f3b7ad29c |
python/cpython | python__cpython-127433 | # AIX build broken with Illegal instruction
# Bug report
### Bug description:
The AIX build is broken with an Illegal instruction error after this merge https://github.com/python/cpython/pull/126025
./_bootstrap_python ./Programs/_freeze_module.py abc ./Lib/abc.py Python/frozen_modules/abc.h
./_bootstrap_python ./Programs/_freeze_module.py codecs ./Lib/codecs.py Python/frozen_modules/codecs.h
./_bootstrap_python ./Programs/_freeze_module.py io ./Lib/io.py Python/frozen_modules/io.h
./_bootstrap_python ./Programs/_freeze_module.py _collections_abc ./Lib/_collections_abc.py Python/frozen_modules/_collections_abc.h
./_bootstrap_python ./Programs/_freeze_module.py _sitebuiltins ./Lib/_sitebuiltins.py Python/frozen_modules/_sitebuiltins.h
./_bootstrap_python ./Programs/_freeze_module.py genericpath ./Lib/genericpath.py Python/frozen_modules/genericpath.h
gmake: *** [Makefile:1767: Python/frozen_modules/io.h] Illegal instruction
gmake: *** Waiting for unfinished jobs....
gmake: *** [Makefile:1764: Python/frozen_modules/codecs.h] Illegal instruction
gmake: *** [Makefile:1776: Python/frozen_modules/genericpath.h] Illegal instruction
gmake: *** [Makefile:1773: Python/frozen_modules/_sitebuiltins.h] Illegal instruction
gmake: *** [Makefile:1770: Python/frozen_modules/_collections_abc.h] Illegal instruction
gmake: *** [Makefile:1761: Python/frozen_modules/abc.h] Illegal instruction (core dumped)
Looking into the core dump:
dbx ./_bootstrap_python core
Illegal instruction (illegal opcode) in find_first_nonascii at line 5118 in file "Objects/unicodeobject.c" ($t1)
5118 size_t u = load_unaligned(p, end - p) & ASCII_CHAR_MASK;
(dbx) where
find_first_nonascii(start = 0x0000000100463852, end = 0x0000000000000140), line 5118 in "unicodeobject.c"
PyUnicode_DecodeUTF8(??, ??, ??), line 5371 in "unicodeobject.c"
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Other
<!-- gh-linked-prs -->
### Linked PRs
* gh-127433
<!-- /gh-linked-prs -->
| 7043bbd1ca6f0d84ad2211dd60114535ba3d51fc | 49f15d8667e4755876698a8daa13bab6acee5fa1 |
python/cpython | python__cpython-127414 | # Allow to show specialized bytecode via `dis` CLI
# Feature or enhancement
### Proposal:
It's already possible to show specialized bytecode using `dis.dis(code, adaptive=True)`, but there is no way to do it from the CLI. I think we can add a flag `-S` that does it (most of the other `dis` options are already exposed to the CLI).
### Has this already been discussed elsewhere?
This is a minor feature, which does not need previous discussion elsewhere
### Links to previous discussion of this feature:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-127414
<!-- /gh-linked-prs -->
| 67b9a5331ae45aa126877d7f96a1e235600f9c4b | fcbe6ecdb6ed4dd93b2ee144f89a73af755e2634 |
python/cpython | python__cpython-127466 | # invalid conversion from ‘void*’ to ‘_PyCodeArray*’ found in internal headers when compiling with Python 3.14t
# Bug report
### Bug description:
Cross-reporting from ~~scipy/scipy#21968~~ scipy/scipy#21970
Installing scipy from git with the free-threaded interpreter yielded the following errors:
```
[65/1464] Compiling C++ object scipy/special/_specfun.cpython-314t-x86_64-linux-gnu.so.p/meson-generated__specfun.cpp.o
FAILED: scipy/special/_specfun.cpython-314t-x86_64-linux-gnu.so.p/meson-generated__specfun.cpp.o
c++ -Iscipy/special/_specfun.cpython-314t-x86_64-linux-gnu.so.p -Iscipy/special -I../scipy/special -I../../../workspaces/venv/lib/python3.14t/site-packages/numpy/_core/include -I/usr/include/python3.14t -fvisibility=hidden -fvisibility-inlines-hidden -fdiagnostics-color=always -DNDEBUG -D_FILE_OFFSET_BITS=64 -Wall -Winvalid-pch -std=c++17 -O3 -fPIC -DNPY_NO_DEPRECATED_API=NPY_1_9_API_VERSION -MD -MQ scipy/special/_specfun.cpython-314t-x86_64-linux-gnu.so.p/meson-generated__specfun.cpp.o -MF scipy/special/_specfun.cpython-314t-x86_64-linux-gnu.so.p/meson-generated__specfun.cpp.o.d -o scipy/special/_specfun.cpython-314t-x86_64-linux-gnu.so.p/meson-generated__specfun.cpp.o -c scipy/special/_specfun.cpython-314t-x86_64-linux-gnu.so.p/_specfun.cpp
In file included from /usr/include/python3.14t/internal/pycore_frame.h:13,
from scipy/special/_specfun.cpython-314t-x86_64-linux-gnu.so.p/_specfun.cpp:12385:
/usr/include/python3.14t/internal/pycore_code.h: In function ‘_Py_CODEUNIT* _PyCode_GetTLBCFast(PyThreadState*, PyCodeObject*)’:
/usr/include/python3.14t/internal/pycore_code.h:617:53: error: invalid conversion from ‘void*’ to ‘_PyCodeArray*’ [-fpermissive]
617 | _PyCodeArray *code = _Py_atomic_load_ptr_acquire(&co->co_tlbc);
| ~~~~~~~~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~
| |
| void*
In file included from scipy/special/_specfun.cpython-314t-x86_64-linux-gnu.so.p/_specfun.cpp:12385:
/usr/include/python3.14t/internal/pycore_frame.h: In function ‘_Py_CODEUNIT* _PyFrame_GetBytecode(_PyInterpreterFrame*)’:
/usr/include/python3.14t/internal/pycore_frame.h:96:53: error: invalid conversion from ‘void*’ to ‘_PyCodeArray*’ [-fpermissive]
96 | _PyCodeArray *tlbc = _Py_atomic_load_ptr_acquire(&co->co_tlbc);
| ~~~~~~~~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~
| |
| void*
[66/1464] Compiling C++ object scipy/special/_gufuncs.cpython-314t-x86_64-linux-gnu.so.p/_gufuncs.cpp.o
[67/1464] Compiling C++ object scipy/special/_special_ufuncs.cpython-314t-x86_64-linux-gnu.so.p/_special_ufuncs.cpp.o
ninja: build stopped: subcommand failed.
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed
× Encountered error while generating package metadata.
╰─> See above for output.
note: This is an issue with the package mentioned above, not pip.
hint: See above for details.
```
### CPython versions tested on:
3.14
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-127466
<!-- /gh-linked-prs -->
| c4303763dac4494300e299e54c079a4a11931a55 | c46acd3588864e97d0e0fe37a41aa5e94ac7af51 |
python/cpython | python__cpython-127386 | # Add `F_DUPFD_QUERY` to `fcntl`
# Feature or enhancement
### Proposal:
It's used to check whether another file descriptor refers to the same open file description. For example:
```
import fcntl
import os

f = open("example.txt")
dup = os.dup(f.fileno())
# returns 1 if both descriptors share the same open file description, else 0
ret = fcntl.fcntl(f.fileno(), fcntl.F_DUPFD_QUERY, dup)
```
### Has this already been discussed elsewhere?
No response given
### Links to previous discussion of this feature:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-127386
<!-- /gh-linked-prs -->
| 8e18a9dce2af361d9974812e6ffa22bf3970d057 | eef49c359505eaf109d519d39e53dfd3c78d066a |
python/cpython | python__cpython-127382 | # pathlib ABCs: prune PathBase interface
It's time to make some difficult decisions about which methods deserve to stay in the `pathlib._abc.PathBase` interface, and which ought to be made `pathlib.Path`-only. Guidelines:
- Compare to `zipfile.Path` and `os.DirEntry`; don't evict shared methods.
- Include abstract methods only for the most basic functionality common to all virtual filesystems.
- Include concrete methods only when they combine abstract methods to produce widely useful functionality.
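To make the first guideline concrete, here is a rough way to list the public names that `zipfile.Path` and `pathlib.Path` already share (a quick comparison, not the ABC's actual interface):

```python
import pathlib
import zipfile

# Public attribute names present on both classes; methods that appear here
# (exists, iterdir, open, ...) are candidates to keep in the shared interface.
shared = sorted(
    name
    for name in set(dir(zipfile.Path)) & set(dir(pathlib.Path))
    if not name.startswith("_")
)
print(shared)
```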
<!-- gh-linked-prs -->
### Linked PRs
* gh-127382
* gh-127427
* gh-127658
* gh-127707
* gh-127736
* gh-127709
* gh-127714
* gh-127853
* gh-128334
* gh-128337
* gh-129147
* gh-130207
* gh-130520
* gh-130611
* gh-131024
<!-- /gh-linked-prs -->
| 38264a060a8178d58046e90e9beb8220e3c22046 | 15d6506d175780bb29e5fcde654e3860408aa93e |
python/cpython | python__cpython-127372 | # Unbounded growth in `SpooledTemporaryFile.writelines()`
# Bug report
### Bug description:
`SpooledTemporaryFile` provides a temporary file backed by a buffer in memory that spills over to disk when it grows too large. However, the `writelines()` method only checks whether it should roll over once, after the entire lines iterator has been exhausted. This causes unexpectedly high memory use when feeding it a large iterator.
```python
from tempfile import SpooledTemporaryFile
with SpooledTemporaryFile(mode="w", max_size=1024) as temp:
temp.writelines(map(str, range(1000)))
```
With the above code, one might expect that the buffer doesn't grow (much) past 1024 bytes, but it grows to almost three times that size before finally rolling over.
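A minimal sketch of the per-line check that would bound the growth (`writelines_spilling` and `on_overflow` are hypothetical names; the real fix belongs inside `SpooledTemporaryFile.writelines` itself):

```python
import io

def writelines_spilling(buf, lines, max_size, on_overflow):
    """Write lines one at a time, reporting overflow as soon as the
    in-memory buffer exceeds max_size rather than once at the end."""
    for line in lines:
        buf.write(line)
        if buf.tell() > max_size:
            on_overflow()
            break  # the real method would roll over to disk and keep writing

buf = io.StringIO()
overflow_at = []
writelines_spilling(buf, map(str, range(1000)), 1024,
                    lambda: overflow_at.append(buf.tell()))
print(overflow_at)  # overflow detected just past 1024 bytes, not ~3x later
```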
### CPython versions tested on:
3.12, 3.13, CPython main branch
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-127372
* gh-127378
* gh-130885
* gh-130886
<!-- /gh-linked-prs -->
| cb67b44ca92f9930b3aa2aba8420c89d12a25303 | 691354ccb04f0e8f4faa864ee5003fc5efe8377e |
python/cpython | python__cpython-127365 | # macOS CI is failing for 3.13 due to Tcl/Tk 9 on Homebrew
# Bug report
### Bug description:
The recent builds on the 3.13 and 3.12 branches are failing for macos-13 (and 3.11 and earlier likely will as well):
* https://github.com/python/cpython/actions/workflows/build.yml?query=branch%3A3.13
* https://github.com/python/cpython/actions/workflows/build.yml?query=branch%3A3.12
For example https://github.com/python/cpython/actions/runs/12052665181/job/33606528979#step:9:1072 :
```
2 tests failed:
test_tkinter test_ttk
```
---
This is because we install unpinned Tcl/Tk from Homebrew, which has recently upgraded from 8.6 to 9.0:
https://github.com/python/cpython/blob/9328db7652677a23192cb51b0324a0fdbfa587c9/.github/workflows/reusable-macos.yml#L40
`main` isn't failing because it's also recently been updated to prepare for 9.0.
For now, let's pin `main` and earlier branches to 8.6 and start testing 9.0 later (or on a buildbot):
https://formulae.brew.sh/formula/tcl-tk@8
### CPython versions tested on:
3.12, 3.13, 3.14
### Operating systems tested on:
macOS
<!-- gh-linked-prs -->
### Linked PRs
* gh-127365
* gh-127393
* gh-127394
* gh-127407
* gh-127408
* gh-127409
<!-- /gh-linked-prs -->
| b83be9c9718aac42d0d8fc689a829d6594192afa | 20657fbdb14d50ca4ec115da0cbef155871d8d33 |