repo stringclasses 1 value | instance_id stringlengths 20 22 | problem_statement stringlengths 126 60.8k | merge_commit stringlengths 40 40 | base_commit stringlengths 40 40 |
|---|---|---|---|---|
python/cpython | python__cpython-127357 | # gettext Make target replaces SPHINXOPTS from command line
# Documentation
Running `make gettext` overrides any value provided in SPHINXOPTS. The Makefile has contained `gettext: SPHINXOPTS += -d build/doctrees-gettext` since fb0cf7d (edited in 315a933), but it replaces the user-supplied value instead of appending to it.
Steps to reproduce (from Doc folder):
1. `make gettext` # See '-d build/doctrees-gettext' added to command line
2. `make gettext SPHINXOPTS='-q' ` # See '-q' ignored, and '-d build/doctrees-gettext' added to command line
<!-- gh-linked-prs -->
### Linked PRs
* gh-127357
* gh-127470
* gh-127471
<!-- /gh-linked-prs -->
| a880358af03d9cab37f7db04385c5a97051b03b6 | 11c01092d5fa8f02c867a7f1f3c135ce63db4838 |
python/cpython | python__cpython-127354 | # Unable to force color output on Windows
# Bug report
### Bug description:
In the [can_colorize function](https://github.com/python/cpython/blob/3a77980002845c22e5b294ca47a12d62bf5baf53/Lib/_colorize.py#L34), the nt._supports_virtual_terminal call takes precedence over checking the environment variables. If the terminal emulator does not support this mode, the environment variables are never checked. This is contrary to the behavior on POSIX OSes, where the variables are checked first.
There are third-party terminal emulators that support ANSI color codes, but do not support the new API added in Windows 10. For example, ConEmu and Cmder.
Also, the current behavior makes it impossible to output errors with ANSI codes to a file or through a pipe.
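To illustrate the ordering the report asks for, here is a minimal sketch of the POSIX-style precedence, where environment variables are consulted before any terminal-capability probing (variable names follow `Lib/_colorize.py`; the helper name `env_color_preference` is hypothetical):

```python
import os

def env_color_preference():
    """Sketch of the POSIX-side precedence the report asks to mirror
    on Windows: environment variables are consulted *before* probing
    the terminal (variable names follow Lib/_colorize.py)."""
    if os.environ.get("PYTHON_COLORS") == "0":
        return False
    if os.environ.get("PYTHON_COLORS") == "1":
        return True
    if "NO_COLOR" in os.environ:
        return False
    if "FORCE_COLOR" in os.environ:
        return True
    return None  # undecided: fall through to terminal-capability checks

os.environ.pop("PYTHON_COLORS", None)
os.environ.pop("NO_COLOR", None)
os.environ["FORCE_COLOR"] = "1"
assert env_color_preference() is True
```

Under this ordering, FORCE_COLOR would enable color in ConEmu/Cmder even when nt._supports_virtual_terminal reports no support.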
Screenshot of exception in ConEmu terminal:

### CPython versions tested on:
3.13
### Operating systems tested on:
Windows
<!-- gh-linked-prs -->
### Linked PRs
* gh-127354
* gh-127886
* gh-127889
* gh-127926
* gh-127944
<!-- /gh-linked-prs -->
| a8ffe661548e16ad02dbe6cb8a89513d7ed2a42c | ed037d229f64db90aea00f397e9ce1b2f4a22d3f |
python/cpython | python__cpython-127821 | # _Py_wfopen no longer exported
# Bug report
### Bug description:
_Py_wfopen is no longer exported since 3.13.
I'm using the embedded version and I cannot use fopen or _wfopen.
Please reconsider the decision to remove _Py_wfopen, since it is the only way to open a file in embedded mode when fopen/_wfopen are not available. Without it, PyRun_FileExFlags is useless to me and my application can no longer call external scripts.
### CPython versions tested on:
3.13
### Operating systems tested on:
Windows
<!-- gh-linked-prs -->
### Linked PRs
* gh-127821
* gh-128587
<!-- /gh-linked-prs -->
| f89e5e20cb8964653ea7d6f53d3e40953b6548ce | 7e8c571604cd18e65cefd26bfc48082840264549 |
python/cpython | python__cpython-127387 | # PyREPL crashes when there aren't enough columns
# Bug report
### Bug description:
### To reproduce
<sub>(order doesn't matter)</sub>
* Open the new REPL
* Resize the terminal emulator window to be less than 5 cells wide
### Traceback
```
Traceback (most recent call last):
File "<frozen runpy>", line 198, in _run_module_as_main
File "<frozen runpy>", line 88, in _run_code
File "/Users/trag1c/.pyenv/versions/3.13.0/lib/python3.13/_pyrepl/__main__.py", line 6, in <module>
__pyrepl_interactive_console()
~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^
File "/Users/trag1c/.pyenv/versions/3.13.0/lib/python3.13/_pyrepl/main.py", line 59, in interactive_console
run_multiline_interactive_console(console)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^
File "/Users/trag1c/.pyenv/versions/3.13.0/lib/python3.13/_pyrepl/simple_interact.py", line 151, in run_multiline_interactive_console
statement = multiline_input(more_lines, ps1, ps2)
File "/Users/trag1c/.pyenv/versions/3.13.0/lib/python3.13/_pyrepl/readline.py", line 389, in multiline_input
return reader.readline()
~~~~~~~~~~~~~~~^^
File "/Users/trag1c/.pyenv/versions/3.13.0/lib/python3.13/_pyrepl/reader.py", line 801, in readline
self.handle1()
~~~~~~~~~~~~^^
File "/Users/trag1c/.pyenv/versions/3.13.0/lib/python3.13/_pyrepl/reader.py", line 770, in handle1
self.refresh()
~~~~~~~~~~~~^^
File "/Users/trag1c/.pyenv/versions/3.13.0/lib/python3.13/_pyrepl/reader.py", line 691, in refresh
self.screen = self.calc_screen()
~~~~~~~~~~~~~~~~^^
File "/Users/trag1c/.pyenv/versions/3.13.0/lib/python3.13/_pyrepl/completing_reader.py", line 261, in calc_screen
screen = super().calc_screen()
File "/Users/trag1c/.pyenv/versions/3.13.0/lib/python3.13/_pyrepl/reader.py", line 413, in calc_screen
self.cxy = self.pos2xy()
~~~~~~~~~~~^^
File "/Users/trag1c/.pyenv/versions/3.13.0/lib/python3.13/_pyrepl/reader.py", line 595, in pos2xy
p, l2 = self.screeninfo[y]
~~~~~~~~~~~~~~~^^^
IndexError: list index out of range
```
### The relevant fragment of code
https://github.com/python/cpython/blob/f4e5643df64d0c2a009ed224560044b3409a47c0/Lib/_pyrepl/reader.py#L587-L596
### An additional demo
https://github.com/user-attachments/assets/2992890d-7412-461b-b61e-b28a6ae15d6a
### CPython versions tested on:
3.13, 3.14
### Operating systems tested on:
Linux, macOS
<!-- gh-linked-prs -->
### Linked PRs
* gh-127387
* gh-129484
* gh-129485
<!-- /gh-linked-prs -->
| 510fefdc625dd2ed2b6b3975314a59e291b94ae8 | 4ca9fc08f89bf7172d41e523d9e520eb1729ee8c |
python/cpython | python__cpython-127348 | # Document `traceback.print_list`
`traceback.print_list` is currently [undocumented](https://docs.python.org/3.14/library/traceback.html), let's fix that! I'll send a PR shortly :)
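As context for the documentation, a minimal usage sketch (the `buf` variable is just for capturing output):

```python
import io
import traceback

# print_list renders a list of FrameSummary objects, such as the one
# returned by traceback.extract_stack() or extract_tb(), to a file.
buf = io.StringIO()
traceback.print_list(traceback.extract_stack(), file=buf)
rendered = buf.getvalue()
assert 'File "' in rendered
```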
<!-- gh-linked-prs -->
### Linked PRs
* gh-127348
* gh-127569
* gh-127570
<!-- /gh-linked-prs -->
| 8ba9f5bca9c0ce6130e1f4ba761a68f74f8457d0 | 412e11fe6e37f15971ef855f88b8b01bb3297679 |
python/cpython | python__cpython-127310 | # Clinic causes compiler warnings when using a getter and setter with a docstring
# Bug report
### Bug description:
I noticed this while working on GH-126890, as some odd macro redefinition warnings came up from the generated clinic file. As it turns out, clinic has a bug where using *both* `@getter` and `@setter` where the getter has a docstring doesn't work correctly. Quick reproducer:
```c
/*[clinic input]
@getter
Test.test
My silly docstring
[clinic start generated code]*/
/*[clinic input]
@setter
Test.test
[clinic start generated code]*/
```
This results in generated code that looks like this:
```c
PyDoc_STRVAR(Test_test__doc__,
"My silly docstring");
#define Test_test_HAS_DOCSTR
#if defined(Test_test_HAS_DOCSTR)
# define Test_test_DOCSTR Test_test__doc__
#else
# define Test_test_DOCSTR NULL
#endif
#if defined(TEST_TEST_GETSETDEF)
# undef TEST_TEST_GETSETDEF
# define TEST_TEST_GETSETDEF {"test", (getter)Test_test_get, (setter)Test_test_set, Test_test_DOCSTR},
#else
# define TEST_TEST_GETSETDEF {"test", (getter)Test_test_get, NULL, Test_test_DOCSTR},
#endif
static PyObject *
Test_test_get_impl(TestObj *self);
static PyObject *
Test_test_get(TestObj *self, void *Py_UNUSED(context))
{
return Test_test_get_impl(self);
}
#if defined(TEST_TEST_HAS_DOCSTR)
# define Test_test_DOCSTR Test_test__doc__
#else
# define Test_test_DOCSTR NULL
#endif
#if defined(TEST_TEST_GETSETDEF)
# undef TEST_TEST_GETSETDEF
# define TEST_TEST_GETSETDEF {"test", (getter)Test_test_get, (setter)Test_test_set, Test_test_DOCSTR},
#else
# define TEST_TEST_GETSETDEF {"test", NULL, (setter)Test_test_set, NULL},
#endif
```
There's two bugs here:
- The setter uses the wrong name for the `HAS_DOCSTR` part: it checks `TEST_TEST_HAS_DOCSTR` while the macro actually defined is `Test_test_HAS_DOCSTR`, so the resulting docstring ends up being redefined to NULL.
- Even with that fixed, the docstring macro would get redefined anyway.
The simplest fix is to drop `HAS_DOCSTR` entirely and instead check whether the actual `DOCSTR` macro is defined, defining it as `NULL` when it is not (thanks @erlend-aasland for this suggestion).
I've created GH-127310 as a fix.
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-127310
* gh-127431
<!-- /gh-linked-prs -->
| 99490913a08adcf2fe5e69b82772a829ec462275 | 762c603a866146afc7db2591fb49605e0858e9b1 |
python/cpython | python__cpython-127331 | # Add support for OpenSSL 3.4
# Feature or enhancement
OpenSSL 3.4 is out and it needs:
- new error information
- adding to multissltests
I plan to document what I learned about the process in `make_ssl_data.py` and point to that from all the places that need changing.
<!-- gh-linked-prs -->
### Linked PRs
* gh-127331
<!-- /gh-linked-prs -->
| db5c5763f3e3172f1dd011355b41469770dafc0f | 3a77980002845c22e5b294ca47a12d62bf5baf53 |
python/cpython | python__cpython-127457 | # Setting `f_trace_opcodes` to `True` can lead to `f_lineno` being removed in some cases (using `breakpoint()`/`pdb.set_trace()`)
# Bug report
### Bug description:
Since https://github.com/python/cpython/pull/118579 (introduced in 3.13), `Bdb.set_trace` will call `Bdb.set_stepinstr` instead of `Bdb.set_step` (on L406):
https://github.com/python/cpython/blob/6d3b5206cfaf5a85c128b671b1d9527ed553c930/Lib/bdb.py#L389-L407
This ends up setting `f_trace_opcodes` to True on all the frames of the stack.
This is fine for most use cases, but for some reason, this removes the `f_lineno` attribute of frames in exotic setups using `breakpoint()`:
```python
any(some_cond for el in it) and breakpoint()
# -> Warning: lineno is None
[1, 2] and breakpoint()
# -> Warning: lineno is None
True and breakpoint()
# Fine, lineno available
```
I'm using these inline conditions a lot to conditionally add a breakpoint, and not having access to the line number is annoying, as many commands (such as `list`) expect `frame.f_lineno` to _not_ be `None` and thus crash and exit the debugger.
I'm not familiar with opcodes and how this interacts with frames. Is it expected for `f_lineno` to be lost here?
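For reference, `f_trace_opcodes` is a plain frame attribute that defaults to `False`; `Bdb.set_stepinstr` is what flips it to `True` across the stack. A minimal sketch of the attributes involved (no pdb required; the helper name `probe` is hypothetical):

```python
import sys

def probe():
    frame = sys._getframe()
    # f_trace_opcodes defaults to False; Bdb.set_stepinstr() sets it to
    # True on every frame in the stack, which triggers the report above.
    return frame.f_trace_opcodes, frame.f_lineno

opcodes_flag, lineno = probe()
assert opcodes_flag is False
assert isinstance(lineno, int)
```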
### CPython versions tested on:
3.13, 3.14
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-127457
* gh-127487
<!-- /gh-linked-prs -->
| 1bc4f076d193ad157bdc69a1d62685a15f95113f | a880358af03d9cab37f7db04385c5a97051b03b6 |
python/cpython | python__cpython-135405 | # Python 3.14.0a2 should have raised an exception when a socket is already in use, shouldn't it?
# Bug report
### Bug description:
In earlier versions of Python, all the way to 3.13, a server was not allowed to listen on a port that is already in use, and rightfully so.
```bash
$ python -m http.server 8000
Serving HTTP on 0.0.0.0 port 8000 (http://0.0.0.0:8000/) ...
# Trying to start the second server on the same port
$ python -m http.server 8000
Traceback (most recent call last):
File "<frozen runpy>", line 198, in _run_module_as_main
File "<frozen runpy>", line 88, in _run_code
File "/usr/lib/python3.11/http/server.py", line 1309, in <module>
test(
File "/usr/lib/python3.11/http/server.py", line 1256, in test
with ServerClass(addr, HandlerClass) as httpd:
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/socketserver.py", line 456, in __init__
self.server_bind()
File "/usr/lib/python3.11/http/server.py", line 1303, in server_bind
return super().server_bind()
^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/http/server.py", line 136, in server_bind
socketserver.TCPServer.server_bind(self)
File "/usr/lib/python3.11/socketserver.py", line 472, in server_bind
self.socket.bind(self.server_address)
OSError: [Errno 98] Address already in use
```
But in Python 3.14.0a2, the second command would also start a server without exception. Is that expected?
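A minimal sketch of the pre-3.14 behavior using raw sockets instead of `http.server` (the port is chosen by the OS; on POSIX, without SO_REUSEADDR/SO_REUSEPORT, the second bind should fail with `EADDRINUSE`):

```python
import socket

# A listening server already owns the port.
s1 = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s1.bind(("127.0.0.1", 0))      # let the OS pick a free port
s1.listen()
port = s1.getsockname()[1]

# A second bind to the same (address, port) should be rejected.
s2 = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    s2.bind(("127.0.0.1", port))
    second_bind_failed = False
except OSError:                # EADDRINUSE
    second_bind_failed = True
finally:
    s2.close()
    s1.close()

assert second_bind_failed
```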
### CPython versions tested on:
3.14
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-135405
* gh-135538
<!-- /gh-linked-prs -->
| 2bd3895fcabb2dfdae5c0c72e60483e3d3267f0f | 8979d3afe376c67931665070a79f6939ebcd940b |
python/cpython | python__cpython-127399 | # [FreeThreading] object_set_class() fails with an assertion error in _PyCriticalSection_AssertHeld()
# Crash report
### What happened?
On a free-threaded debug build, even with `PYTHON_GIL=1`, it's possible to abort the interpreter by calling `_DummyThread._after_fork` after a `__reduce__` call:
```python
import threading
obj = threading._DummyThread()
res = obj.__reduce__()
res = obj._after_fork(1)
```
Abort message:
```
python: ./Include/internal/pycore_critical_section.h:222: _PyCriticalSection_AssertHeld: Assertion `cs != NULL && cs->_cs_mutex == mutex' failed.
Aborted (core dumped)
```
Found using [fusil](https://github.com/devdanzin/fusil) by @vstinner.
### CPython versions tested on:
3.13, 3.14, CPython main branch
### Operating systems tested on:
Linux
### Output from running 'python -VV' on the command line:
Python 3.14.0a2+ experimental free-threading build (heads/main:0af4ec3, Nov 20 2024, 21:48:16) [GCC 13.2.0]
<!-- gh-linked-prs -->
### Linked PRs
* gh-127399
* gh-127422
<!-- /gh-linked-prs -->
| 45c5cba318a19dda3ee6f9fc84781cc7a2fbde80 | b14fdadc6c620875a20b7ccc3c9b069e85d8557a |
python/cpython | python__cpython-127315 | # Change error message for a `NULL` thread state on the free-threaded build
# Feature or enhancement
### Proposal:
In the C API, this message tends to be shown if you call a function without an active thread state:
```
the function must be called with the GIL held, after Python initialization and before Python finalization, but the GIL is released (the current Python thread state is NULL)
```
While reviewing #125962, I noticed that users don't really understand the distinction between having a thread state and the GIL being acquired, so I would imagine that C API users might end up very confused as to why Python is telling them that they need the GIL while it's disabled. In fact, I sort of noticed this in passing when figuring out #123134: users erroneously seemed to think that on the free-threaded build, it was OK to call functions in a fresh thread without calling `PyGILState_Ensure` or something first.
I think this is sort of a documentation problem (we should try to do a better job of clarifying the difference between "GIL" and "thread state"), but changing this error on the free-threaded build is a good start.
### Has this already been discussed elsewhere?
This is a minor feature, which does not need previous discussion elsewhere
### Links to previous discussion of this feature:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-127315
<!-- /gh-linked-prs -->
| 12680ec5bd45c85b6daebe0739d30ef45f089efa | a353455fca1b8f468ff3ffbb4b5e316510b4fd43 |
python/cpython | python__cpython-127304 | # Document `token.EXACT_TOKEN_TYPES`
# Documentation
`token.EXACT_TOKEN_TYPES` isn't documented, but it's the only way to get to the string representation of every token in Python. It has existed since at least Python 3.9.
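For example, the mapping looks like this (a couple of entries, assuming the current shape of the dict):

```python
import token

# EXACT_TOKEN_TYPES maps the exact source text of each operator or
# delimiter token to its numeric token type.
lpar = token.EXACT_TOKEN_TYPES["("]
arrow = token.EXACT_TOKEN_TYPES["->"]
assert lpar == token.LPAR
assert arrow == token.RARROW
```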
<!-- gh-linked-prs -->
### Linked PRs
* gh-127304
* gh-127390
* gh-127391
<!-- /gh-linked-prs -->
| dd3a87d2a8f8750978359a99de2c5cb2168351d1 | b83be9c9718aac42d0d8fc689a829d6594192afa |
python/cpython | python__cpython-127297 | # ctypes: Switch field accessors to fixed-width integers
# Feature or enhancement
I believe the next step toward untangling `ctypes` should be switching `cfield.c` to be based on fixed-width integer types.
This should be a pure refactoring, without user-visible behaviour changes.
Currently, we use traditional native C types, usually identified by [`struct` format characters][struct-chars] when a short (and identifier-friendly) name is needed:
- `signed char` (`b`) / `unsigned char` (`B`)
- `short` (`h`) / `unsigned short` (`H`)
- `int` (`i`) / `unsigned int` (`I`)
- `long` (`l`) / `unsigned long` (`L`)
- `long long` (`q`) / `unsigned long long` (`Q`)
These map to C99 fixed-width types, which I propose switching to:
- `int8_t`/`uint8_t`
- `int16_t`/`uint16_t`
- `int32_t`/`uint32_t`
- `int64_t`/`uint64_t`
The C standard doesn't guarantee that the “traditional” types must map to the fixints.
But, [`ctypes` currently requires it][swapdefs], so the assumption won't break anything.
By “map” I mean that the *size* of the types matches. The *alignment* requirements might not.
This needs to be kept in mind but is not an issue in `ctypes` accessors, which [explicitly handle unaligned memory][memcpy] for the integer types.
Note that there are 5 “traditional” C type sizes, but 4 fixed-width ones. Two of the former are functionally identical to one another; which ones they are is platform-specific (e.g. `int`==`long`==`int32_t`.)
This means that one of the [current][current-impls-1] [implementations][current-impls-2] is redundant on any given platform.
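This can be checked from Python with `ctypes` itself; a quick sketch (which pair of types coincides depends on the platform):

```python
import ctypes

sizes = [ctypes.sizeof(t) for t in (
    ctypes.c_byte, ctypes.c_short, ctypes.c_int,
    ctypes.c_long, ctypes.c_longlong,
)]
# Five "traditional" types, but only four distinct widths: two of them
# coincide (e.g. long == long long on LP64, int == long on LLP64).
assert len(set(sizes)) == 4
```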
The fixint types are parametrized by the number of bytes/bits, and one bit for signedness. This makes it easier to autogenerate code for them or to write generic macros (though generic API like [`PyLong_AsNativeBytes`][PyLong_AsNativeBytes] is problematic for performance reasons -- especially compared to a `memcpy` with compile-time-constant size).
When one has a *different* integer type, determining the corresponding fixint means a `sizeof` and signedness check. This is easier and more robust than the current implementations (see [`wchar_t`][sizeof-wchar_t] or [`_Bool`][sizeof-bool]).
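A sketch of that mapping in Python terms (`fixint_name` is a hypothetical helper, not part of `ctypes`):

```python
import ctypes

def fixint_name(ctype, signed):
    # Hypothetical helper: derive the fixed-width type name from the
    # type's size (via sizeof) and a signedness flag.
    bits = ctypes.sizeof(ctype) * 8
    return f"{'' if signed else 'u'}int{bits}_t"

assert fixint_name(ctypes.c_short, True) == "int16_t"
# Whether c_long is int32_t or int64_t is platform-specific:
assert fixint_name(ctypes.c_long, True) in {"int32_t", "int64_t"}
```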
The refactoring can pave the way for:
- Separating bitfield accessors, so bitfield logic doesn't slow down normal access. (This can be done today, but would mean another set of nearly-identical hand-written functions, which is hard to maintain, let alone experiment with. We need more metaprogramming.)
- Integer types with arbitrary size & alignment (useful for getting the `__int128` type, or matching other platforms). This would be a new future feature.
[swapdefs]: https://github.com/python/cpython/blob/v3.13.0/Modules/_ctypes/cfield.c#L420-L444
[struct-chars]: https://docs.python.org/3/library/struct.html#format-characters
[current-impls-1]: https://github.com/python/cpython/blob/v3.13.0/Modules/_ctypes/cfield.c#L470-L653
[current-impls-2]: https://github.com/python/cpython/blob/v3.13.0/Modules/_ctypes/cfield.c#L703-L944
[memcpy]: https://github.com/python/cpython/blob/v3.13.0/Modules/_ctypes/cfield.c#L613
[PyLong_AsNativeBytes]: https://docs.python.org/3/c-api/long.html#c.PyLong_AsNativeBytes
[sizeof-wchar_t]: https://github.com/python/cpython/blob/v3.13.0/Modules/_ctypes/cfield.c#L1547-L1555
[sizeof-bool]: https://github.com/python/cpython/blob/v3.13.0/Modules/_ctypes/cfield.c#L1562-L1572
<!-- gh-linked-prs -->
### Linked PRs
* gh-127297
<!-- /gh-linked-prs -->
| 78ffba4221dcb2e39fd5db80c297d1777588bb59 | ba45e5cdd41a39ce0b3de08bdcfa9d8e28e0e4f3 |
python/cpython | python__cpython-127286 | # Wrong ARM platform configuration in pcbuild.sln
https://github.com/python/cpython/blob/4fd9eb2aca21489ea0841155cdb81cb30dda73d3/PCbuild/pcbuild.sln#L1102C70-L1102C75
<!-- gh-linked-prs -->
### Linked PRs
* gh-127286
<!-- /gh-linked-prs -->
| 6da9d252ac39d53342455a17bfec7b1087fba697 | 987311d42e3ec838de8ff27f9f0575aa791a6bde |
python/cpython | python__cpython-128012 | # Defer functions defined in nested classes in free-threaded builds
# Feature or enhancement
### Proposal:
We currently only defer functions that do not have the `CO_NESTED` flag set:
https://github.com/python/cpython/blob/26ff32b30553e1f7b0cc822835ad2da8890c180c/Objects/funcobject.c#L213-L218
This also excludes functions defined on nested classes. In the example below, the `Foo.__init__` function will not use deferred reference counting because the `__init__` method's code object has the `CO_NESTED` flag set.
```py
def func():
class Foo:
def __init__(self):
pass
```
We would like to relax the restriction on `CO_NESTED` to allow functions that are defined on nested classes to use deferred reference counting.
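The flag in question is observable from Python; a small sketch confirming that the nested `__init__` carries `CO_NESTED`:

```python
import inspect

def func():
    class Foo:
        def __init__(self):
            pass
    return Foo

Foo = func()
# The nested method's code object carries CO_NESTED, which currently
# disqualifies the function from deferred reference counting.
assert Foo.__init__.__code__.co_flags & inspect.CO_NESTED
```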
### Has this already been discussed elsewhere?
This is a minor feature, which does not need previous discussion elsewhere
### Links to previous discussion of this feature:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-128012
<!-- /gh-linked-prs -->
| 255762c09fe518757bb3e8ce1bb6e5d8eec9f466 | e163e8d4e1a9844b8615ef38b9917b887a377948 |
python/cpython | python__cpython-127272 | # PyCell_GET/SET are not thread-safe in free-threaded builds
# Bug report
### Bug description:
The `PyCell_GET` and `PyCell_SET` macros are not safe for free-threaded builds. First, they don't do the required locking. Second, `PyCell_GET` returns a borrowed reference.
For CPython internals, the usages of these macros can be replaced (either with `PyCell_GetRef()`, `PyCell_SetTakeRef()`, or similar). For external API users, these macros will need to be marked as deprecated, at least for free-threaded builds.
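For context, the cells these macros manipulate are the same objects visible from Python as closure cells; a pure-Python sketch of the get/set semantics (without the free-threading concerns; `make_cell` is a hypothetical helper):

```python
def make_cell(value):
    def inner():
        return value  # closing over `value` creates the cell
    return inner.__closure__[0]

cell = make_cell(1)
assert cell.cell_contents == 1   # Python-level analogue of PyCell_GET
cell.cell_contents = 2           # Python-level analogue of PyCell_SET
assert cell.cell_contents == 2
```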
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-127272
* gh-130383
<!-- /gh-linked-prs -->
| fc5a0dc22483a35068888e828c65796d7a792c14 | 276cd66ccbbf85996a57bd1db3dd29b93a6eab64 |
python/cpython | python__cpython-131174 | # Type slots are not thread-safe in free-threaded builds
# Bug report
### Bug description:
Modification of type slots is protected by the global type lock, however, type slots are read non-atomically without holding the type lock. For example, in `PyObject_SetItem`:
https://github.com/python/cpython/blob/5bb059fe606983814a445e4dcf9e96fd7cb4951a/Objects/abstract.c#L231-L235
It's not clear how we want to address this. From @colesbury in https://github.com/python/cpython/pull/127169#discussion_r1857029790:
```
I'd lean towards doing a stop-the-world pause when modifying the slot so that we don't need
to change every read. I expect that to be a bit tricky since class definitions look like they're mutating
the class.
```
### CPython versions tested on:
3.13, 3.14, CPython main branch
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-131174
* gh-133177
<!-- /gh-linked-prs -->
| eecafc33800c84ecb67f5d3ed819fbed057677ab | 219b1f9d1d97e271213fe324b94ed544e890630b |
python/cpython | python__cpython-127267 | # Confusing single quotes and/or apostrophe
# Documentation
The first apostrophe threw me off, and I realized it doesn't make sense for the second apostrophe to indicate the possessive case either. Then I realized the "apostrophes" are supposed to be single quotes around the word `arrow`. Nowhere else in the docs is the word `arrow` surrounded in single quotes, so I think they should be removed for consistency.
<!-- gh-linked-prs -->
### Linked PRs
* gh-127267
* gh-127268
* gh-127269
<!-- /gh-linked-prs -->
| 26ff32b30553e1f7b0cc822835ad2da8890c180c | 5bb059fe606983814a445e4dcf9e96fd7cb4951a |
python/cpython | python__cpython-130134 | # Inconsistent behavior of `fromisoformat` methods in `datetime` module implementations
# Bug report
## Bug description:
### 1. Incorrect timezone validation in `_pydatetime` (solved)
As far as I understand, the [documentation](https://docs.python.org/3.14/library/datetime.html) says that the `Z` char means that `tzinfo` is `timezone.utc`, so there cannot be any time zone fields after it.
Based on this, `_pydatetime` implementation is incorrect, right?
```python
>>> import _datetime, _pydatetime
>>> _pydatetime.datetime.fromisoformat('2020-01-01T00:00Z00:50')
datetime.datetime(2020, 1, 1, 0, 0, tzinfo=datetime.timezone(datetime.timedelta(seconds=3000)))
>>> _datetime.datetime.fromisoformat('2020-01-01T00:00Z00:50')
Traceback (most recent call last):
File "<python-input-54>", line 1, in <module>
_datetime.datetime.fromisoformat('2020-01-01T00:00Z00:50')
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^
ValueError: Invalid isoformat string: '2020-01-01T00:00Z00:50'
```
### 2. `_datetime` misses the wrong millisecond separator (solved)
In `_pydatetime` the separator for milliseconds must be either a period `.` or a comma `,`.
Should we allow a colon `:` as the millisecond separator?
```python
>>> import _datetime, _pydatetime
>>> _datetime.datetime.fromisoformat('2020-01-01T00:00:01:1')
datetime.datetime(2020, 1, 1, 0, 0, 1, 100000)
>>> _pydatetime.datetime.fromisoformat('2020-01-01T00:00:01:1')
Traceback (most recent call last):
File "<python-input-119>", line 1, in <module>
_pydatetime.datetime.fromisoformat('2020-01-01T00:00:01:1')
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^
File ".../cpython/Lib/_pydatetime.py", line 1969, in fromisoformat
"Return local time tuple compatible with time.localtime()."
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
dst = self.dst()
^^^^^^^^^^^^^^^^
ValueError: Invalid isoformat string: '2020-01-01T00:00:01:1'
```
### 3. The first errors caught can be different
If these errors occur separately, then both implementations are able to detect them, but when there are several problems, the methods may behave differently. In this case `_pydatetime` first detected an error due to the separator, and `_datetime` first detected an error in exceeding the limits.
```python
>>> import _datetime, _pydatetime
>>> _pydatetime.datetime.fromisoformat('2009-04-19T03:15:45+10:90.11')
Traceback (most recent call last):
File "<python-input-40>", line 1, in <module>
_pydatetime.datetime.fromisoformat('2009-04-19T03:15:45+10:90.11')
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".../cpython/Lib/_pydatetime.py", line 1969, in fromisoformat
f'Invalid isoformat string: {date_string!r}') from None
else:
ValueError: Invalid isoformat string: '2009-04-19T03:15:45+10:90.11'
>>> _datetime.datetime.fromisoformat('2009-04-19T03:15:45+10:90.11')
Traceback (most recent call last):
File "<python-input-41>", line 1, in <module>
_datetime.datetime.fromisoformat('2009-04-19T03:15:45+10:90.11')
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ValueError: minute must be in 0..59
```
---
Also, an issue has already been created about the fact that some errors have different output, here:
* [datetime error message is different between _pydatetime.py and _datetimemodule.c #109798](https://github.com/python/cpython/issues/109798)
*I'll send a PR.*
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-130134
<!-- /gh-linked-prs -->
| 427dd10250f94d79bad7927d6ea231950c395bd6 | 46ac85e4d9fcffe1a8f921989414a89648b5501a |
python/cpython | python__cpython-127259 | # [tests] asyncio: test_staggered_race_with_eager_tasks() fails randomly
AMD64 Windows11 Refleaks 3.x build: https://buildbot.python.org/#/builders/920/builds/1141
```
FAIL: test_staggered_race_with_eager_tasks (test.test_asyncio.test_eager_task_factory.PyEagerTaskFactoryLoopTests.test_staggered_race_with_eager_tasks)
----------------------------------------------------------------------
Traceback (most recent call last):
File "b:\uildarea\3.x.ware-win11.refleak\build\Lib\test\test_asyncio\test_eager_task_factory.py", line 238, in test_staggered_race_with_eager_tasks
self.run_coro(run())
~~~~~~~~~~~~~^^^^^^^
File "b:\uildarea\3.x.ware-win11.refleak\build\Lib\test\test_asyncio\test_eager_task_factory.py", line 36, in run_coro
return self.loop.run_until_complete(coro)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^
File "b:\uildarea\3.x.ware-win11.refleak\build\Lib\asyncio\base_events.py", line 720, in run_until_complete
return future.result()
~~~~~~~~~~~~~^^
File "b:\uildarea\3.x.ware-win11.refleak\build\Lib\asyncio\futures.py", line 198, in result
raise self._exception.with_traceback(self._exception_tb)
File "b:\uildarea\3.x.ware-win11.refleak\build\Lib\asyncio\tasks.py", line 289, in __step_run_and_handle_result
result = coro.send(None)
File "b:\uildarea\3.x.ware-win11.refleak\build\Lib\test\test_asyncio\test_eager_task_factory.py", line 232, in run
self.assertEqual(winner, 'sleep1')
~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^
AssertionError: 'sleep2' != 'sleep1'
- sleep2
? ^
+ sleep1
? ^
```
I'm working on a fix.
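For context, `staggered_race` starts the given coroutine functions `delay` seconds apart and returns a `(winner_result, winner_index, exceptions)` triple; a minimal sketch of the API under test (the `fast`/`slow` coroutines are illustrative, not from the failing test):

```python
import asyncio
from asyncio.staggered import staggered_race

async def main():
    async def fast():
        return "fast"

    async def slow():
        await asyncio.sleep(0.5)
        return "slow"

    # Coroutine functions are started `delay` seconds apart; the first
    # one to finish successfully wins.
    winner, index, exceptions = await staggered_race([fast, slow], delay=0.05)
    return winner, index

winner, index = asyncio.run(main())
assert winner == "fast" and index == 0
```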
<!-- gh-linked-prs -->
### Linked PRs
* gh-127259
* gh-127358
* gh-127401
* gh-127402
<!-- /gh-linked-prs -->
| 770617b23e286f1147f9480b5f625e88e7badd50 | 77b20f099e6cbad346b18c0e6db94b6ab7fd4d39 |
python/cpython | python__cpython-127360 | # [tests] test_poplib fails with "env changed" on Arch Linux with OpenSSL 3.4: [SYS] unknown error (_ssl.c:2634)
First failure: https://buildbot.python.org/#/builders/484/builds/6224
Example of failure:
```
test_stls_context (test.test_poplib.TestPOP3Class.test_stls_context) ...
Warning -- Uncaught thread exception: SSLError
Exception in thread Thread-16:
Traceback (most recent call last):
(...)
File "/buildbot/buildarea/3.x.pablogsal-arch-x86_64/build/Lib/test/test_poplib.py", line 203, in handle_read
asynchat.async_chat.handle_read(self)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^
File "/buildbot/buildarea/3.x.pablogsal-arch-x86_64/build/Lib/test/support/asynchat.py", line 128, in handle_read
self.handle_error()
~~~~~~~~~~~~~~~~~^^
File "/buildbot/buildarea/3.x.pablogsal-arch-x86_64/build/Lib/test/support/asynchat.py", line 124, in handle_read
data = self.recv(self.ac_in_buffer_size)
File "/buildbot/buildarea/3.x.pablogsal-arch-x86_64/build/Lib/test/support/asyncore.py", line 377, in recv
data = self.socket.recv(buffer_size)
File "/buildbot/buildarea/3.x.pablogsal-arch-x86_64/build/Lib/ssl.py", line 1285, in recv
return self.read(buflen)
~~~~~~~~~^^^^^^^^
File "/buildbot/buildarea/3.x.pablogsal-arch-x86_64/build/Lib/ssl.py", line 1140, in read
return self._sslobj.read(len)
~~~~~~~~~~~~~~~~~^^^^^
ssl.SSLError: [SYS] unknown error (_ssl.c:2634)
ok
```
According to @encukou, the failure started to occur after a system update:
* OpenSSL 3.3.0→3.4.0
* Linux 5.15.0→6.8.0
* glibc 2.39→2.40
* GCC 13.2.1→14.2.1
* etc.
<!-- gh-linked-prs -->
### Linked PRs
* gh-127360
* gh-127361
* gh-127812
* gh-127905
* gh-131970
* gh-131971
<!-- /gh-linked-prs -->
| 802556abfa008abe0bdd78e6f9e18bef71db90c1 | 688f3a0d4b94874ff6d72af3baafd8bbf911153e |
python/cpython | python__cpython-127275 | # Make `CopyComPointer` public and add to `ctypes` doc.
# Feature or enhancement
### Proposal:
As with `COMError` in gh-126615 and gh-126686, I think `CopyComPointer` should also be made public and documented.
Tests covering the behavior of this API have already been added in gh-127183 and gh-127184.
I plan to write documentation based not only on the C implementation source code but also on these tests.
### Has this already been discussed elsewhere?
No response given
### Links to previous discussion of this feature:
https://github.com/python/cpython/pull/127184
<!-- gh-linked-prs -->
### Linked PRs
* gh-127275
<!-- /gh-linked-prs -->
| 412e11fe6e37f15971ef855f88b8b01bb3297679 | 979bf2489d0c59ae451b97d7e3c148f47e259f0b |
python/cpython | python__cpython-127254 | # Clarify stability of Stable ABI
I used some unfortunate wording in the docs:
> The extension will work without recompilation with all Python 3 releases from the specified one onward [...]
Clarify that compiling for the Stable ABI prevents ABI issues (missing symbols, or data corruption due to changed layouts/signatures), but behaviour can still change per PEP 387 (though we *should* keep it stable if we can).
<!-- gh-linked-prs -->
### Linked PRs
* gh-127254
* gh-127557
* gh-127558
<!-- /gh-linked-prs -->
| 35d37d6592d1be71ea76042165f6cbfa6c4c3a17 | 8c3fd1f245fbdc747966daedfd22ed48491309dc |
python/cpython | python__cpython-127249 | # Correct name of zconf.in.h is zconf.h.in
The file zconf.in.h should be renamed to zconf.h.in (https://github.com/python/cpython-source-deps/blob/4dc98e1909830e2bdc2a9cc2236e3c5d5037335b/zconf.h.in).
https://github.com/python/cpython/blob/c595eae84c7f665eced09ce67a1ad83b7574d6e5/PCbuild/pythoncore.vcxproj#L418
<!-- gh-linked-prs -->
### Linked PRs
* gh-127249
<!-- /gh-linked-prs -->
| 3e7ce6e9aed8616c8ce53eaef4279402d3ee38ec | d9331f1b16033796a3958f6e0b12626ed7d3e199 |
python/cpython | python__cpython-127241 | # sys.set_int_max_str_digits ValueError Message is Inaccurate
The sys.set_int_max_str_digits ValueError message is shown to be inaccurate in the following Python interpreter execution sequence:
```
>>> import sys
>>> sys.set_int_max_str_digits(maxdigits=639)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ValueError: maxdigits must be 0 or larger than 640
>>> sys.set_int_max_str_digits(maxdigits=640)
>>>
```
The sys.set_int_max_str_digits ValueError message states that "maxdigits" must be larger than "640." However, "sys.set_int_max_str_digits(maxdigits=640)" is a valid statement for Python to execute. Because argument "640" is valid for built-in function "sys.set_int_max_str_digits" and 640 is not larger than 640, this results in a discrepancy between what the ValueError message states as true and what is actually true.
<!-- gh-linked-prs -->
### Linked PRs
* gh-127241
<!-- /gh-linked-prs -->
| c595eae84c7f665eced09ce67a1ad83b7574d6e5 | 17c16aea66b606d66f71ae9af381bc34d0ef3f5f |
python/cpython | python__cpython-127194 | # `urllib.request.pathname2url()`: generate RFC 1738 URLs where possible
# Feature or enhancement
### Proposal:
[`urllib.request.pathname2url`](https://docs.python.org/3/library/urllib.request.html#urllib.request.pathname2url) currently generates [RFC 1738](https://www.rfc-editor.org/rfc/rfc1738#section-3.10)-compliant `file:` URIs in the following cases:
- DOS drive paths (including drive-relative paths) on Windows.
- UNC paths on Windows.
- POSIX paths beginning `//` (since GH-127217)
This function _cannot_ generate RFC 1738-compliant URLs for:
- True relative paths, because of differing requirements for the first character of the path.
That leaves one case where the function _could_ generate RFC 1738-compatible URLs, but doesn't:
- Paths beginning with exactly one slash (both POSIX and Windows)
```python
>>> from urllib.request import pathname2url
>>> pathname2url('/etc/hosts')
'/etc/hosts' # expected: '///etc/hosts'
```
For consistency with `pathname2url()`'s handling of other paths, and consistency with [`pathlib.Path.as_uri()`](https://docs.python.org/3/library/pathlib.html#pathlib.Path.as_uri), I propose we prepend two slashes to any path beginning with precisely one slash to produce a URL authority section with a zero-length authority.
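The proposed rule is a one-line transformation; a hypothetical sketch (the helper name is illustrative, not stdlib code):

```python
def add_empty_authority(url_path):
    # Exactly one leading slash -> prepend '//' so the URL gains an
    # empty authority section, as pathlib.Path.as_uri() already does.
    # Paths with two or more leading slashes, or no leading slash,
    # are left alone.
    if url_path.startswith('/') and not url_path.startswith('//'):
        return '//' + url_path
    return url_path

print(add_empty_authority('/etc/hosts'))    # '///etc/hosts'
print(add_empty_authority('//host/share'))  # unchanged
```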
### Has this already been discussed elsewhere?
This is a minor feature, which does not need previous discussion elsewhere
### Links to previous discussion of this feature:
https://github.com/python/cpython/issues/125866#issuecomment-2495919168
<!-- gh-linked-prs -->
### Linked PRs
* gh-127194
<!-- /gh-linked-prs -->
| 5bb059fe606983814a445e4dcf9e96fd7cb4951a | a2ee89968299fc4f0da4b5a4165025b941213ba5 |
python/cpython | python__cpython-127223 | # Add colour to unittest output
# Feature or enhancement
In Python 3.13, we added colour output to the new REPL, tracebacks and doctest, that can also be controlled with the `PYTHON_COLORS`, `NO_COLOR` and `FORCE_COLOR` environment variables:
* https://docs.python.org/3/whatsnew/3.13.html#summary-release-highlights
* https://docs.python.org/3.13/using/cmdline.html#using-on-controlling-color
Let's add colour to unittest output.
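For reference, a simplified sketch of the environment-variable precedence those features follow (`PYTHON_COLORS` is decisive, then `NO_COLOR` beats `FORCE_COLOR`); this is an illustration, not the actual `_colorize` implementation:

```python
import os
import sys

def want_color(env=None, stream=sys.stdout):
    # Simplified precedence: PYTHON_COLORS=0/1 is decisive, NO_COLOR
    # beats FORCE_COLOR, otherwise fall back to a tty check.
    env = os.environ if env is None else env
    if env.get("PYTHON_COLORS") == "0":
        return False
    if env.get("PYTHON_COLORS") == "1":
        return True
    if "NO_COLOR" in env:
        return False
    if "FORCE_COLOR" in env:
        return True
    return hasattr(stream, "isatty") and stream.isatty()

print(want_color(env={"NO_COLOR": "1", "FORCE_COLOR": "1"}))    # False
print(want_color(env={"PYTHON_COLORS": "1", "NO_COLOR": "1"}))  # True
```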
<!-- gh-linked-prs -->
### Linked PRs
* gh-127223
<!-- /gh-linked-prs -->
| 23f2e8f13c4e4a34106cf96fad9329cbfbf8844d | d958d9f4a1b71c6d30960bf6c53c41046ea94590 |
python/cpython | python__cpython-127218 | # pathname2url() does not work if path starts with //
# Bug report
For example:
```pycon
>>> from urllib.request import pathname2url
>>> 'file:' + pathname2url('//foo/bar')
'file://foo/bar'
```
This is a file URI with path "/bar" and authority "foo". A non-empty authority other than "localhost" is usually rejected. The right URI for path "//foo/bar" is "file:////foo/bar" -- a URI with an explicit empty authority.
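The two forms parse differently, which shows why the extra slashes matter:

```python
from urllib.parse import urlsplit

# With two slashes, 'foo' is absorbed as the URL authority;
# with four, the authority is empty and the path keeps '//foo/bar'.
bad = urlsplit('file://foo/bar')
good = urlsplit('file:////foo/bar')
print(bad.netloc, bad.path)
print(good.netloc, good.path)
```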
Similar bug in urlunparse() and urlunsplit() was fixed in #67693.
<!-- gh-linked-prs -->
### Linked PRs
* gh-127218
* gh-127230
* gh-127231
<!-- /gh-linked-prs -->
| 97b2ceaaaf88a73a45254912a0e972412879ccbf | 2bb7846cacb342246aada5ed92d323e54c946063 |
python/cpython | python__cpython-127400 | # `ExtensionFileLoader.load_module` aborts when initialized with a path containing null-bytes
# Crash report
### What happened?
It's possible to abort a debug build by initializing a `_frozen_importlib_external.ExtensionFileLoader` with a path containing null-bytes, then calling `load_module()`:
```python
import _frozen_importlib_external
_frozen_importlib_external.ExtensionFileLoader("a", "\x00").load_module(None)
```
Abort message:
```
python: Python/import.c:939: hashtable_key_from_2_strings: Assertion `strlen(key) == size - 1' failed.
Aborted
```
Found using [fusil](https://github.com/devdanzin/fusil) by @vstinner.
### CPython versions tested on:
3.12, 3.13, 3.14, CPython main branch
### Operating systems tested on:
Linux
### Output from running 'python -VV' on the command line:
Python 3.14.0a2+ (heads/main:0af4ec3, Nov 20 2024, 21:45:19) [GCC 13.2.0]
<!-- gh-linked-prs -->
### Linked PRs
* gh-127400
* gh-127418
* gh-127419
<!-- /gh-linked-prs -->
| b14fdadc6c620875a20b7ccc3c9b069e85d8557a | 3afb639f39e89888194d8e74cc498c8da3a58d8e |
python/cpython | python__cpython-127220 | # `_interpreters.exec` with invalid dict as `shared` segfaults
# Crash report
### What happened?
Passing a dict with an invalid key as the `shared` parameter to `_interpreters.exec` segfaults the interpreter (or aborts in a debug build):
```python
import _interpreters
_interpreters.exec(0, "1", {"\uFD7C\u5124\u7B91\u92E9\u1850\u39AA\u0DF2\uD82A\u2D68\uACAD\u92DE\u47C5\uFFD0\uDE0B\uAA9C\u2C17\\u6577\u4C92\uD37C": 0})
```
Backtrace:
```gdb
#0 0x00005555557c496c in _PyXI_ApplyError (error=0x0) at Python/crossinterp.c:1056
#1 0x00007ffff79db822 in _run_in_interpreter (p_excinfo=0x7fffffffd820, flags=1,
shareables=0x7ffff7a186c0, codestrlen=<optimized out>, codestr=0x555555aceff8 <_PyRuntime+76888> "1",
interp=0x555555ad1f18 <_PyRuntime+88952>) at ./Modules/_interpretersmodule.c:463
#2 _interp_exec (interp=interp@entry=0x555555ad1f18 <_PyRuntime+88952>, code_arg=<optimized out>,
shared_arg=0x7ffff7a186c0, p_excinfo=p_excinfo@entry=0x7fffffffd820, self=<optimized out>)
at ./Modules/_interpretersmodule.c:955
#3 0x00007ffff79db9b0 in interp_exec (self=<optimized out>, args=<optimized out>, kwds=<optimized out>)
at ./Modules/_interpretersmodule.c:1000
#4 0x00005555556abb43 in cfunction_call (func=0x7ffff7a6d9e0, args=<optimized out>,
kwargs=<optimized out>) at Objects/methodobject.c:551
#5 0x0000555555643350 in _PyObject_MakeTpCall (tstate=0x555555b08c10 <_PyRuntime+313456>,
callable=callable@entry=0x7ffff7a6d9e0, args=args@entry=0x7ffff7fb0080, nargs=<optimized out>,
keywords=keywords@entry=0x0) at Objects/call.c:242
#6 0x0000555555643c76 in _PyObject_VectorcallTstate (kwnames=0x0, nargsf=<optimized out>,
args=0x7ffff7fb0080, callable=0x7ffff7a6d9e0, tstate=<optimized out>)
at ./Include/internal/pycore_call.h:165
#7 0x00005555555d8e75 in _PyEval_EvalFrameDefault (tstate=0x555555b08c10 <_PyRuntime+313456>,
frame=0x7ffff7fb0020, throwflag=<optimized out>) at Python/generated_cases.c.h:955
#8 0x00005555557a559c in _PyEval_EvalFrame (throwflag=0, frame=0x7ffff7fb0020,
tstate=0x555555b08c10 <_PyRuntime+313456>) at ./Include/internal/pycore_ceval.h:116
#9 _PyEval_Vector (args=0x0, argcount=0, kwnames=0x0, locals=0x7ffff7a18680, func=0x7ffff7a033d0,
tstate=0x555555b08c10 <_PyRuntime+313456>) at Python/ceval.c:1898
#10 PyEval_EvalCode (co=co@entry=0x7ffff7a32230, globals=globals@entry=0x7ffff7a18680,
locals=locals@entry=0x7ffff7a18680) at Python/ceval.c:659
```
The abort message is:
```
python: ./Modules/_interpretersmodule.c:462: _run_in_interpreter: Assertion `!PyErr_Occurred()' failed.
Aborted
```
Related to https://github.com/python/cpython/issues/126654.
Found using fusil by @vstinner.
### CPython versions tested on:
3.13, 3.14, CPython main branch
### Operating systems tested on:
Linux
### Output from running 'python -VV' on the command line:
Python 3.14.0a2+ (heads/main:3c770e3f097, Nov 22 2024, 09:48:39) [GCC 11.4.0]
<!-- gh-linked-prs -->
### Linked PRs
* gh-127220
* gh-128689
<!-- /gh-linked-prs -->
| 087bb48acac997c06e69dae25bae2dd75194b980 | 8af57810946c216b3e18c94c8f0ee3c0c96566a9 |
python/cpython | python__cpython-127366 | # Segfault from `asyncio.events._running_loop.__setattr__` with invalid name
# Crash report
### What happened?
It's possible to segfault the interpreter by calling `asyncio.events._running_loop.__setattr__` with a special class as `name`, as in this example:
```python
import asyncio.events
class Liar1:
def __eq__(self, other):
return True
asyncio.events._running_loop.__setattr__(Liar1(), type)
```
The backtrace is:
```gdb
#0 _copy_characters (to=<optimized out>, to_start=<optimized out>, from=<optimized out>,
from_start=<optimized out>, how_many=<optimized out>, check_maxchar=0) at Objects/unicodeobject.c:1530
#1 0x000055555573497b in _copy_characters (check_maxchar=0, how_many=16842781, from_start=0,
from=0x7ffff71a3cb0, to_start=<optimized out>, to=<optimized out>) at Objects/unicodeobject.c:1435
#2 _PyUnicode_FastCopyCharacters (how_many=16842781, from_start=0, from=0x7ffff71a3cb0,
to_start=<optimized out>, to=<optimized out>) at Objects/unicodeobject.c:1562
#3 _PyUnicodeWriter_WriteStr (writer=0x7fffffffd660, str=0x7ffff71a3cb0) at Objects/unicodeobject.c:13635
#4 0x0000555555738abd in unicode_fromformat_arg (vargs=0x7fffffffd5d8,
f=0x5555558d2a04 "U' is read-only", writer=0x7fffffffd660) at Objects/unicodeobject.c:2993
#5 unicode_from_format (writer=writer@entry=0x7fffffffd660,
format=format@entry=0x5555558d29e8 "'%.100s' object attribute '%U' is read-only",
vargs=vargs@entry=0x7fffffffd6c0) at Objects/unicodeobject.c:3167
#6 0x0000555555739a1f in PyUnicode_FromFormatV (
format=format@entry=0x5555558d29e8 "'%.100s' object attribute '%U' is read-only",
vargs=vargs@entry=0x7fffffffd6c0) at Objects/unicodeobject.c:3201
#7 0x00005555557c7770 in _PyErr_FormatV (vargs=0x7fffffffd6c0,
format=0x5555558d29e8 "'%.100s' object attribute '%U' is read-only",
exception=0x555555a89960 <_PyExc_AttributeError>, tstate=0x555555b08c10 <_PyRuntime+313456>)
at Python/errors.c:1163
#8 PyErr_Format (exception=0x555555a89960 <_PyExc_AttributeError>,
format=format@entry=0x5555558d29e8 "'%.100s' object attribute '%U' is read-only")
at Python/errors.c:1198
#9 0x00005555558a613d in local_setattro (self=0x7ffff718d020, name=0x7ffff71a3cb0,
v=0x555555a9c6e0 <PyType_Type>) at ./Modules/_threadmodule.c:1625
#10 0x00005555556dff7d in wrap_setattr (self=0x7ffff718d020, args=<optimized out>,
wrapped=0x5555558a6070 <local_setattro>) at Objects/typeobject.c:9172
```
No threads, free-threading or JIT necessary for this to work.
In a debug build, an assertion fails instead:
```
Objects/unicodeobject.c:640: _PyUnicode_CheckConsistency: Assertion failed: PyType_HasFeature((_Py_TYPE(((PyObject*)((op))))), ((1UL << 28)))
Enable tracemalloc to get the memory block allocation traceback
object address : 0x20000549920
object refcount : 2
object type : 0x200011e2810
object type name: Liar1
object repr : <__main__.Liar1 object at 0x20000549920>
Fatal Python error: _PyObject_AssertFailed: _PyObject_AssertFailed
Python runtime state: initialized
Aborted
```
Found using [fusil](https://github.com/devdanzin/fusil) by @vstinner.
### CPython versions tested on:
3.13, 3.14, CPython main branch
### Operating systems tested on:
Linux
### Output from running 'python -VV' on the command line:
Python 3.14.0a2+ experimental free-threading build (heads/main-dirty:a13e94d84bf, Nov 23 2024, 07:16:19) [GCC 11.4.0]
<!-- gh-linked-prs -->
### Linked PRs
* gh-127366
* gh-127367
* gh-127368
<!-- /gh-linked-prs -->
| 20657fbdb14d50ca4ec115da0cbef155871d8d33 | 49fee592a4fad17781bb4a78f95085d6edbb24d5 |
python/cpython | python__cpython-127184 | # Add tests to verify the behavior of `_ctypes.CopyComPointer`.
# Feature or enhancement
### Proposal:
There are other untested features related to COM besides [`COMError`](https://docs.python.org/3.14/library/ctypes.html#ctypes.COMError).
[`_ctypes.CopyComPointer`](https://github.com/python/cpython/blob/39e60aeb3837f1f23d8b7f30d3b8d9faf805ef88/Modules/_ctypes/callproc.c#L1463-L1495), as the name suggests, copies a COM pointer from *src* to *dst*.
It also manages reference counts before and after the copy.
In the future, this functionality may also [need to be made public similar to `COMError`](https://github.com/python/cpython/pull/126686).
However, my plan is to first add tests to confirm its current behavior, and then consider making it public and adding documentation.
### Has this already been discussed elsewhere?
No response given
### Links to previous discussion of this feature:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-127184
* gh-127251
* gh-127252
<!-- /gh-linked-prs -->
| c7f1e3e150ca181f4b4bd1e5b59d492749f00be6 | f4f075b3d5a5b0bc1b13cc27ef2a7de8c103fd04 |
python/cpython | python__cpython-127219 | # Assertion failure from `StringIO.__setstate__`
# Crash report
### What happened?
It's possible to abort the interpreter by calling `StringIO.__setstate__` with a non-string `initial_value`:
```python
python -c "from io import StringIO; StringIO().__setstate__((None, '', 0, {}))"
python: Objects/unicodeobject.c:2542: as_ucs4: Assertion `PyUnicode_Check(string)' failed.
Aborted (core dumped)
```
Interestingly, on a non-debug build passing an int as `initial_value` gives an error message saying that `None` should be a valid value:
```python
python -c "from io import StringIO; StringIO().__setstate__((1, '', 0, {}))"
Traceback (most recent call last):
File "<string>", line 1, in <module>
from io import StringIO; StringIO().__setstate__((1, '', 0, {}))
~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^
TypeError: initial_value must be str or None, not int
```
Found using [fusil](https://github.com/devdanzin/fusil) by @vstinner.
### CPython versions tested on:
3.13, 3.14, CPython main branch
### Operating systems tested on:
Linux
### Output from running 'python -VV' on the command line:
Python 3.14.0a2+ (heads/main:0af4ec3, Nov 20 2024, 21:45:19) [GCC 13.2.0]
<!-- gh-linked-prs -->
### Linked PRs
* gh-127219
* gh-127262
* gh-127263
<!-- /gh-linked-prs -->
| a2ee89968299fc4f0da4b5a4165025b941213ba5 | d3da04bfc91ec065fe587451409102213af0e57c |
python/cpython | python__cpython-127640 | # recommend what to use instead of asyncio.get_event_loop in 3.14 what's new
Could you please mention what should be used instead?
_Originally posted by @hroncok in https://github.com/python/cpython/pull/126354#discussion_r1853686937_
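A minimal sketch of the commonly recommended replacements (`asyncio.run()` to own the loop, `get_running_loop()` inside coroutines):

```python
import asyncio

async def main():
    # Inside a coroutine, get_running_loop() is the supported way
    # to reach the active event loop.
    loop = asyncio.get_running_loop()
    return loop.time() >= 0

# asyncio.run() creates, runs and closes the loop, replacing the old
# get_event_loop()/run_until_complete() pattern.
ok = asyncio.run(main())
print(ok)
```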
<!-- gh-linked-prs -->
### Linked PRs
* gh-127640
<!-- /gh-linked-prs -->
| 0581e3f52b6116774698761088f0daa7ec23fc0b | 329165639f9ac00ba64f6493dbcafcef6955e2cb |
python/cpython | python__cpython-127199 | # Segfault in invalid `concurrent.futures.interpreter.WorkerContext`
# Crash report
### What happened?
It's possible to segfault the interpreter by calling `initialize()` on a `concurrent.futures.interpreter.WorkerContext` instance that was created with the `shared` argument being a dict containing the null byte as a key:
```python
python -c "import concurrent.futures.interpreter; w = concurrent.futures.interpreter.WorkerContext(0, {'\x00': ''}).initialize()"
```
This doesn't require threads or free-threading. It can be traced to the `_interpreters` module:
```python
import _interpreters
_interpreters.create()
_interpreters.set___main___attrs(1, {"\x00": 1}, restrict=True)
Segmentation fault (core dumped)
```
The backtrace is:
```gdb
#0 Py_INCREF (op=op@entry=0x0) at ./Include/refcount.h:241
#1 0x00005555557e6aa7 in _Py_NewRef (obj=0x0) at ./Include/refcount.h:492
#2 _sharednsitem_apply (item=0x555555cd55e0, ns=ns@entry=0x200028329b0, dflt=dflt@entry=0x0) at Python/crossinterp.c:1224
#3 0x00005555557e7d14 in _PyXI_ApplyNamespace (ns=ns@entry=0x555555cd55b0, nsobj=nsobj@entry=0x200028329b0, dflt=dflt@entry=0x0) at Python/crossinterp.c:1523
#4 0x00005555557e7ec8 in _PyXI_Enter (session=session@entry=0x7fffffffe040, interp=interp@entry=0x7ffff7bb8020, nsupdates=<optimized out>)
at Python/crossinterp.c:1754
#5 0x00007ffff7e1fafd in interp_set___main___attrs (self=self@entry=0x20000966980, args=args@entry=0x20000943850, kwargs=kwargs@entry=0x20000736eb0)
at ./Modules/_interpretersmodule.c:836
#6 0x00005555556c1565 in cfunction_call (func=func@entry=0x20000966830, args=args@entry=0x20000943850, kwargs=kwargs@entry=0x20000736eb0)
at Objects/methodobject.c:551
#7 0x000055555566987f in _PyObject_MakeTpCall (tstate=tstate@entry=0x555555c39510 <_PyRuntime+360208>, callable=callable@entry=0x20000966830,
args=args@entry=0x7fffffffe3c8, nargs=<optimized out>, keywords=keywords@entry=0x2000057e700) at Objects/call.c:242
#8 0x0000555555669ada in _PyObject_VectorcallTstate (tstate=0x555555c39510 <_PyRuntime+360208>, callable=callable@entry=0x20000966830,
args=args@entry=0x7fffffffe3c8, nargsf=<optimized out>, kwnames=kwnames@entry=0x2000057e700) at ./Include/internal/pycore_call.h:165
#9 0x0000555555669b30 in PyObject_Vectorcall (callable=callable@entry=0x20000966830, args=args@entry=0x7fffffffe3c8, nargsf=<optimized out>,
kwnames=kwnames@entry=0x2000057e700) at Objects/call.c:327
#10 0x00005555557adebb in _PyEval_EvalFrameDefault (tstate=<optimized out>, frame=0x7ffff7e6a0a0, throwflag=<optimized out>) at Python/generated_cases.c.h:1982
#11 0x00005555557c755c in _PyEval_EvalFrame (tstate=tstate@entry=0x555555c39510 <_PyRuntime+360208>, frame=<optimized out>, throwflag=throwflag@entry=0)
at ./Include/internal/pycore_ceval.h:116
#12 0x00005555557c766a in _PyEval_Vector (tstate=tstate@entry=0x555555c39510 <_PyRuntime+360208>, func=func@entry=0x200007b3490, locals=locals@entry=0x20000737870,
args=args@entry=0x0, argcount=argcount@entry=0, kwnames=kwnames@entry=0x0) at Python/ceval.c:1898
#13 0x00005555557c7739 in PyEval_EvalCode (co=co@entry=0x200003a3f10, globals=globals@entry=0x20000737870, locals=locals@entry=0x20000737870) at Python/ceval.c:659
#14 0x000055555588bac3 in run_eval_code_obj (tstate=tstate@entry=0x555555c39510 <_PyRuntime+360208>, co=co@entry=0x200003a3f10, globals=globals@entry=0x20000737870,
locals=locals@entry=0x20000737870) at Python/pythonrun.c:1338
#15 0x000055555588bca3 in run_mod (mod=mod@entry=0x200007e42b0, filename=filename@entry=0x20000a462f0, globals=globals@entry=0x20000737870,
locals=locals@entry=0x20000737870, flags=flags@entry=0x7fffffffe720, arena=arena@entry=0x200000508b0, interactive_src=0x200002338f0, generate_new_source=0)
at Python/pythonrun.c:1423
#16 0x000055555588c6ad in _PyRun_StringFlagsWithName (
str=str@entry=0x200002341e0 "import concurrent.futures.interpreter; w = concurrent.futures.interpreter.WorkerContext(0, {'\\x00': ''}).initialize()\n",
name=name@entry=0x20000a462f0, start=start@entry=257, globals=globals@entry=0x20000737870, locals=locals@entry=0x20000737870, flags=flags@entry=0x7fffffffe720,
generate_new_source=0) at Python/pythonrun.c:1222
```
Found using [fusil](https://github.com/devdanzin/fusil) by @vstinner.
### CPython versions tested on:
3.14, CPython main branch
### Operating systems tested on:
Linux
### Output from running 'python -VV' on the command line:
Python 3.14.0a2+ (heads/main:0af4ec3, Nov 20 2024, 21:45:19) [GCC 13.2.0]
<!-- gh-linked-prs -->
### Linked PRs
* gh-127199
* gh-127463
<!-- /gh-linked-prs -->
| 46bfd26fb294a8769ef6d0c056ee6c9df022a037 | 328187cc4fcdd578db42cf6a16c197c3382157a7 |
python/cpython | python__cpython-127212 | # Turn off PGO for JIT CI
I don't think we need PGO turned on in JIT CI for non-debug jobs. It's mostly just a waste of time, and doesn't really give us any additional information.
@mdboom, I know that the Windows PGO builds were a nice canary for "breaking" ceval.c under MSVC (since it's not tested in CI or any buildbot), but I figure our weekly benchmarking runs are a good enough "buildbot" for that, so we don't need to run it all the time on PRs.
@savannahostrowski, want to take this?
<!-- gh-linked-prs -->
### Linked PRs
* gh-127212
<!-- /gh-linked-prs -->
| 2247dd0f11058502f44b289df0e18ecc08be8657 | 78cb377c622a98b1bf58df40c828e886575a6927 |
python/cpython | python__cpython-127151 | # Emscripten: Get test suite passing
First I'd like to get the test suite to run all the way through, then to pass.
cc @freakboy3742
#### Update 2025/01/30
Emscripten 4.0.2 has fixes for all of the file system bugs I've looked into. There are still problems with nonblocking I/O which I haven't looked into.
<!-- gh-linked-prs -->
### Linked PRs
* gh-127151
* gh-127562
* gh-127565
* gh-127843
* gh-127984
* gh-127992
* gh-128545
* gh-128549
* gh-128556
* gh-128557
* gh-129375
* gh-129421
* gh-129422
* gh-129474
* gh-131672
* gh-132092
* gh-134358
* gh-134382
* gh-135622
* gh-135624
* gh-135626
* gh-135634
* gh-135635
* gh-135650
* gh-135651
* gh-135652
* gh-135653
* gh-135655
* gh-135722
* gh-135733
* gh-135764
* gh-135784
* gh-136227
* gh-136509
* gh-136510
* gh-136624
* gh-136631
* gh-136699
* gh-136706
* gh-136707
* gh-136708
* gh-136711
* gh-136712
* gh-136717
* gh-136740
* gh-136745
<!-- /gh-linked-prs -->
### Emscripten PRs:
* https://github.com/emscripten-core/emscripten/pull/22886
* https://github.com/emscripten-core/emscripten/pull/23000
* https://github.com/emscripten-core/emscripten/pull/23025
* https://github.com/emscripten-core/emscripten/pull/23017
* https://github.com/emscripten-core/emscripten/pull/23072
* https://github.com/emscripten-core/emscripten/pull/23073
* https://github.com/emscripten-core/emscripten/pull/22925
* https://github.com/emscripten-core/emscripten/pull/23058
* https://github.com/emscripten-core/emscripten/pull/23002
* https://github.com/emscripten-core/emscripten/pull/22998
* https://github.com/emscripten-core/emscripten/pull/23074
* https://github.com/emscripten-core/emscripten/pull/23045
* https://github.com/emscripten-core/emscripten/pull/23061
* https://github.com/emscripten-core/emscripten/pull/23135
* https://github.com/emscripten-core/emscripten/pull/23136
* https://github.com/emscripten-core/emscripten/pull/23137
* https://github.com/emscripten-core/emscripten/pull/23139
* https://github.com/emscripten-core/emscripten/pull/23310
* https://github.com/emscripten-core/emscripten/pull/23307
* https://github.com/emscripten-core/emscripten/pull/23306
* https://github.com/emscripten-core/emscripten/pull/23366
* https://github.com/emscripten-core/emscripten/pull/23364
* https://github.com/emscripten-core/emscripten/pull/23381
* https://github.com/emscripten-core/emscripten/pull/23470
* https://github.com/emscripten-core/emscripten/pull/23480/
* https://github.com/emscripten-core/emscripten/pull/24591
* https://github.com/emscripten-core/emscripten/pull/24593 | 43634fc1fcc88b35171aa79258f767ba6477f764 | 2f1cee8477e22bfc36a704310e4c0f409357e7e9 |
python/cpython | python__cpython-127137 | # Document forward compatibility for `suggest_on_error`
Support for `suggest_on_error` was added in https://github.com/python/cpython/pull/124456. There's been some demand to document how users can opportunistically add this to their code.
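One opportunistic pattern (a sketch, assuming the keyword is simply absent on older versions) is to pass the argument only when this interpreter's `argparse` accepts it:

```python
import argparse
import inspect

# Pass suggest_on_error only if this Python's ArgumentParser accepts
# it (the keyword is new in 3.14); older versions get the default
# behaviour without raising TypeError.
kwargs = {}
if "suggest_on_error" in inspect.signature(argparse.ArgumentParser).parameters:
    kwargs["suggest_on_error"] = True

parser = argparse.ArgumentParser(prog="demo", **kwargs)
parser.add_argument("--color")
args = parser.parse_args(["--color", "red"])
print(args.color)
```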
<!-- gh-linked-prs -->
### Linked PRs
* gh-127137
<!-- /gh-linked-prs -->
| a13e94d84bff334da3da2cab523ba75b57e0787f | 39e60aeb3837f1f23d8b7f30d3b8d9faf805ef88 |
python/cpython | python__cpython-127186 | # Raise exception when nesting argument groups and mutually exclusive groups
The ability to nest argument groups and mutually exclusive groups was deprecated in 3.11 via https://github.com/python/cpython/pull/30098.
At present, we raise warnings to let users know this has been deprecated. I'd like to remove the logic since we're now two (going on three) releases from initial deprecation, and doing so will also simplify the logic to support new features, like #55797.
<!-- gh-linked-prs -->
### Linked PRs
* gh-127186
<!-- /gh-linked-prs -->
| 2104bde572aa60f547274fd529312c5708599001 | f7bb658124aba74be4c13f498bf46cfded710ef9 |
python/cpython | python__cpython-127118 | # The Last Fields of PyInterpreterState and of _PyRuntimeState Should Not Change
The last field of `PyInterpreterState` should be `_initial_thread` and the last field of `_PyRuntimeState` should be `_main_interpreter`. Respectively, they are the preallocated struct values that are used for the "main" thread and main interpreter.
Having them at the end simplifies some backporting scenarios. It also gives better locality to *all* the state in the respective interpreter/runtime; otherwise any fields after `_initial_thread` or `_main_interpreter` will be separated from the rest by those structs.
I'll make sure there's a note on each struct and that any out-of-place fields are moved up.
<!-- gh-linked-prs -->
### Linked PRs
* gh-127118
* gh-132721
<!-- /gh-linked-prs -->
| 5ba67af006079915af0a1312735efc40fa36c4f3 | a5440d4a38316cc709ce0304175beda124b8e7f4 |
python/cpython | python__cpython-127113 | # Emscripten: Make the web example work again
My recent changes to the Emscripten build broke the web example. I'll fix it and also move all of the components of the web example into a subfolder under `Tools/wasm/emscripten`.
<!-- gh-linked-prs -->
### Linked PRs
* gh-127113
* gh-127551
* gh-127666
<!-- /gh-linked-prs -->
| bfb0788bfcaab7474c1be0605552744e15082ee9 | edefb8678a11a20bdcdcbb8bb6a62ae22101bb51 |
python/cpython | python__cpython-127228 | # ConfigParser replaces unnamed section on every read call
# Bug report
### Bug description:
```python
from configparser import ConfigParser
config = ConfigParser(allow_unnamed_section=True)
config.read(['first.ini', 'second.ini'])
# now the unnamed section contains values only from the second file:
print(config._sections)
# {
# <UNNAMED_SECTION>: {'second_unnamed_option1': '1', 'second_unnamed_option2': '2'},
# 'first_section': {'first_section_option1': '1', 'first_section_option2': '2'},
# 'second_section': {'second_section_option1': '1', 'second_section_option2': '2'}
# }
```
I think the problem is somewhere [here](https://github.com/python/cpython/blob/main/Lib/configparser.py#L1107). The unnamed section is recreated on every call.
[first.ini.txt](https://github.com/user-attachments/files/17847219/first.ini.txt)
[second.ini.txt](https://github.com/user-attachments/files/17847220/second.ini.txt)
### CPython versions tested on:
3.13
### Operating systems tested on:
Windows
<!-- gh-linked-prs -->
### Linked PRs
* gh-127228
* gh-129593
<!-- /gh-linked-prs -->
| 914c232e9391e8e5014b089ba12c75d4a3b0cc7f | 5d9b62005a110164c6be5bf412d344917e872e10 |
python/cpython | python__cpython-127091 | # urllib.request.urlopen('file:...').url incorrect value
# Bug report
### Bug description:
When a `file:` URL is given to `urllib.request.urlopen()`, it returns an [`addinfourl`](https://docs.python.org/3/library/urllib.request.html#urllib.response.addinfourl) object. This object's `url` attribute (and its deprecated `geturl()` method) usually return incorrect results. For example:
```python
>>> from urllib.request import urlopen
>>> urlopen('file:Doc/requirements.txt').url
'file://Doc/requirements.txt' # expected: 'file:Doc/requirements.txt'
>>> urlopen('file:C:/requirements.txt').url
'file://C:/requirements.txt' # expected: 'file:C:/requirements.txt' or 'file:///C:/requirements.txt'
```
The code always prepends `file://`, but this might be too many slashes or too few depending on the path's drive and root.
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-127091
<!-- /gh-linked-prs -->
| 79b7cab50a3292a1c01466cf0e69fb7b4e56cfb1 | 27d0d2141319d82709eb09ba20065df3e1714fab |
python/cpython | python__cpython-127100 | # Add missing description for codes in `http.HTTPStatus`
# Bug report
### Bug description:
We have an Enum of **`62`** HTTP status codes. They are stored in the `http.HTTPStatus` class, and **`14`** of them don't have a description field.
I think it is non-obvious behavior that when accessing the description field, one code returns text and another returns an empty string:
```python
import http
assert http.HTTPStatus.PROCESSING.description == ''
assert http.HTTPStatus.OK.description == 'Request fulfilled, document follows'
```
Path file: [`./Lib/http/__init__.py`](https://github.com/python/cpython/blob/main/Lib/http/__init__.py)
List of codes with empty description:
```python
PROCESSING = 102, 'Processing'
EARLY_HINTS = 103, 'Early Hints'
MULTI_STATUS = 207, 'Multi-Status'
ALREADY_REPORTED = 208, 'Already Reported'
IM_USED = 226, 'IM Used'
UNPROCESSABLE_CONTENT = 422, 'Unprocessable Content'
LOCKED = 423, 'Locked'
FAILED_DEPENDENCY = 424, 'Failed Dependency'
TOO_EARLY = 425, 'Too Early'
UPGRADE_REQUIRED = 426, 'Upgrade Required'
VARIANT_ALSO_NEGOTIATES = 506, 'Variant Also Negotiates'
INSUFFICIENT_STORAGE = 507, 'Insufficient Storage'
LOOP_DETECTED = 508, 'Loop Detected'
NOT_EXTENDED = 510, 'Not Extended'
```
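Filling these in follows the pattern `HTTPStatus` already uses, where each member is built from a `value, phrase, description` triple; a minimal sketch (the class name and description text here are illustrative, not the stdlib source):

```python
from enum import IntEnum

class DemoStatus(IntEnum):
    # Mirrors the (value, phrase, description) triple that
    # http.HTTPStatus members are constructed from.
    def __new__(cls, value, phrase, description=''):
        obj = int.__new__(cls, value)
        obj._value_ = value
        obj.phrase = phrase
        obj.description = description
        return obj

    PROCESSING = 102, 'Processing', 'Interim response, continue processing'

print(DemoStatus.PROCESSING.description)
```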
*I am ready to send PR*
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-127100
<!-- /gh-linked-prs -->
| 71de839ec987bb67b98fcfecfc687281841a713c | 0b5f1fae573a2c658eb000433ad7b87e9c40c697 |
python/cpython | python__cpython-127412 | # Calling `ShareableList.count` in threads aborts: `Assertion 'self->exports == 0' failed`
# Crash report
### What happened?
It's possible to abort the interpreter by calling `multiprocessing.shared_memory.ShareableList.count` in threads with `PYTHON_GIL=0` in a debug build:
```python
import gc
import multiprocessing.shared_memory
from threading import Thread
obj = multiprocessing.shared_memory.ShareableList("Uq..SeDAmB+EBrkLl.SG.Z+Z.ZdsV..wT+zLxKwdN\b")
for x in range(10):
Thread(target=obj.count, args=(1,)).start()
del obj
gc.collect()
```
Result:
```python
Exception ignored in: <function SharedMemory.__del__ at 0x200006bbfb0>
Traceback (most recent call last):
File "/home/danzin/projects/mycpython/Lib/multiprocessing/shared_memory.py", line 189, in __del__
self.close()
File "/home/danzin/projects/mycpython/Lib/multiprocessing/shared_memory.py", line 229, in close
self._buf.release()
BufferError: memoryview has 2 exported buffers
python: Objects/memoryobject.c:1143: memory_dealloc: Assertion `self->exports == 0' failed.
Aborted
```
Found using [fusil](https://github.com/devdanzin/fusil) by @vstinner.
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Linux
### Output from running 'python -VV' on the command line:
Python 3.14.0a2+ experimental free-threading build (heads/main-dirty:c9b399fbdb0, Nov 19 2024, 20:12:48) [GCC 11.4.0]
<!-- gh-linked-prs -->
### Linked PRs
* gh-127412
* gh-128019
<!-- /gh-linked-prs -->
| 4937ba54c0ff7cc4a83d7345d398b804365af2d6 | 1d276ec6f8403590a6a1a18c560ce75b9221572b |
python/cpython | python__cpython-127099 | # Replacing "Windows only: ..." with the `.. availability:: Windows` directive in `ctypes` doc.
# Documentation
This is a follow-up to gh-126615.
In the `ctypes` documentation, the supported platforms were previously described in plain text.
This will be replaced with a more modern approach using directives.
<!-- gh-linked-prs -->
### Linked PRs
* gh-127099
* gh-127144
* gh-127145
<!-- /gh-linked-prs -->
| 3c770e3f0978d825c5ebea98fcd654660e7e135f | 89125e9f9f3d3099267ddaddfe72642e2af6495c |
python/cpython | python__cpython-127132 | # urllib.request.url2pathname() mishandles a UNC URI variant
# Bug report
### Bug description:
On Windows, `urllib.request.url2pathname()` mishandles an uncommon file URI variant encoding a UNC path. Specifically, a URI with _five_ leading slashes should be converted to a UNC path with _two_ leading slashes, but `url2pathname()` returns a path with _three_ leading slashes. Such URIs are created by software that simply prepends `file:///` to a Windows path. See [RFC 8089 E.3.2](https://datatracker.ietf.org/doc/html/rfc8089#appendix-E.3.2), final example.
```python
>>> from urllib.request import url2pathname
>>> url2pathname('/////server/share')
'\\\\\\server\\share' # expected: '\\\\server\\share'
```
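A hypothetical sketch of the intended mapping (illustrative only, not the stdlib fix):

```python
def unc_from_uri_path(uri_path):
    # Five leading slashes ('file:///' prepended to '//server/share')
    # should collapse to the two leading backslashes of a UNC path.
    if uri_path.startswith('/////'):
        return '\\\\' + uri_path[5:].replace('/', '\\')
    return uri_path.replace('/', '\\')

print(unc_from_uri_path('/////server/share'))  # \\server\share
```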
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Windows
<!-- gh-linked-prs -->
### Linked PRs
* gh-127132
* gh-127135
* gh-127136
<!-- /gh-linked-prs -->
| 8c98ed846a7d7e50c4cf06f823d94737144dcf6a | ebf564a1d3e2e81b9846535114e481d6096443d2 |
python/cpython | python__cpython-127086 | # FileIO Strace test fails on Gentoo (test_fileio test_syscalls_read)
# Bug report
### Bug description:
Initially reported in PR comments: https://github.com/python/cpython/pull/123413
There looks like three cases to investigate/work on from this:
1. Gentoo + `musl` -> there is an extra mmap call
2. Gentoo + sandbox -> many additional calls (probably an environment in which to disable the check, if it can be detected)
3. Should there be a more general switch to disable the check?
relates to: gh-120754
My first thought for case 1 is that `mmap` could generally be filtered out (it has shown up moderately often in my testing on other systems, where it is generally used for memory allocation under the hood). I am working on setting up a Gentoo environment to experiment and investigate locally and to develop patches.
cc: @mgorny, @vstinner, @gpshead, @hauntsaninja
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-127086
* gh-127088
<!-- /gh-linked-prs -->
| ff2278e2bf660155ca8f7c0529190ca59a41c13a | 1629d2ca56014beb2d46c42cc199a43ac97e3b97 |
python/cpython | python__cpython-127256 | # Outdated `socket.NETLINK_*` constants
# Feature or enhancement
### Proposal:
When working through the constants in `socket` for typeshed, I found that these appear to be obsolete:
```python
NETLINK_ARPD: int # linux 2.0 to 2.6.12 (EOL August 2005)
NETLINK_ROUTE6: int # linux 2.2 to 2.6.12 (EOL August 2005)
NETLINK_SKIP: int # linux 2.0 to 2.6.12 (EOL August 2005)
NETLINK_TAPBASE: int # linux 2.2 to 2.6.12 (EOL August 2005)
NETLINK_TCPDIAG: int # linux 2.6.0 to 2.6.13 (EOL December 2005)
NETLINK_W1: int # linux 2.6.13 to 2.6.17 (EOL October 2006)
```
The netlink constants are defined in `include/linux/netlink.h` on linux 2.0 to 3.6. Starting in 3.7 they moved to `include/uapi/linux/netlink.h`. I've annotated these with the versions of linux where I was able to find them, and confirmed that they're not present in FreeBSD's Netlink implementation either.
I suspect these can be safely removed, but I'm not terribly familiar with Netlink myself.
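Until a decision is made, downstream code can feature-test these constants instead of assuming they exist; the attribute set already varies by platform and kernel headers, so a `getattr` guard is a reasonable pattern:

```python
import socket

# NETLINK_* constants are only defined where the build's kernel headers
# provide them, so guard lookups with getattr rather than assuming presence.
NETLINK_TCPDIAG = getattr(socket, "NETLINK_TCPDIAG", None)
if NETLINK_TCPDIAG is None:
    print("NETLINK_TCPDIAG not available on this platform")
else:
    print("NETLINK_TCPDIAG =", NETLINK_TCPDIAG)
```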
### Has this already been discussed elsewhere?
This is a minor feature, which does not need previous discussion elsewhere
### Links to previous discussion of this feature:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-127256
<!-- /gh-linked-prs -->
| 6d3b5206cfaf5a85c128b671b1d9527ed553c930 | 71ede1142ddad2d31cc966b8fe4a5aff664f4d53 |
python/cpython | python__cpython-127109 | # `methodcaller` is not thread-safe (or re-entrant)
# Bug report
**EDIT**: edited to clarify that the issue is in the C implementation of `operator.methodcaller`.
Originally reported by @ngoldbaum in https://github.com/crate-py/rpds/issues/101
<details>
<summary>Reproducer</summary>
```python
from operator import methodcaller
from concurrent.futures import ThreadPoolExecutor

class HashTrieMap:
    def keys(self):
        return None

    def values(self):
        return None

    def items(self):
        return None

num_workers = 1000
views = [methodcaller(p) for p in ["keys", "values", "items"]]

def work(view):
    m, d = HashTrieMap(), {}
    view(m)
    view(d)

iterations = 10
for _ in range(iterations):
    executor = ThreadPoolExecutor(max_workers=num_workers)
    for view in views:
        futures = [executor.submit(work, view) for _ in range(num_workers)]
        results = [future.result() for future in futures]
```
</details>
Once every 5-10 runs, the program prints:
```
TypeError: descriptor 'keys' for 'dict' objects doesn't apply to a 'HashTrieMap' object
```
The problem is that `operator.methodcaller` is not thread-safe because it modifies the `vectorcall_args`, which is shared across calls:
https://github.com/python/cpython/blob/0af4ec30bd2e3a52350344d1011c0c125d6dcd71/Modules/_operator.c#L1646-L1666
I think this is generally unsafe, not just for free threading. The `vectorcall` args array needs to be valid for the duration of the call, and it's possible for `methodcaller` to be called reentrantly or by another thread while the call is still ongoing.
<!-- gh-linked-prs -->
### Linked PRs
* gh-127109
* gh-127150
* gh-127245
* gh-127746
<!-- /gh-linked-prs -->
| f83ca6962af973fff6a3124f4bd3d45fea4dd5b8 | 3c770e3f0978d825c5ebea98fcd654660e7e135f |
python/cpython | python__cpython-127024 | # Simplify `PyStackRef_FromPyObjectSteal`
# Feature or enhancement
Currently, `PyStackRef_FromPyObjectSteal` checks if the object is immortal in order to set the `Py_TAG_DEFERRED` bit.
https://github.com/python/cpython/blob/29cbcbd73bbfd8c953c0b213fb33682c289934ff/Include/internal/pycore_stackref.h#L96-L105
This check isn't necessary and has a performance cost that's not made up for by the slightly faster `PyStackRef_CLOSE()` calls or `PyStackRef_Is()` checks.
We should simplify `PyStackRef_FromPyObjectSteal` so that it creates `_PyStackRef` directly from the `PyObject *` without setting any tag bits.
<!-- gh-linked-prs -->
### Linked PRs
* gh-127024
* gh-127168
<!-- /gh-linked-prs -->
| 4759ba6eec9f0b36b24b8eb7e7b120d471c67e82 | 8214e0f709010a0e1fa06dc2ce004b5f6103cc6b |
python/cpython | python__cpython-127043 | # `PyCode_GetCode` is not thread-safe and causes assertion fail with Python 3.13td
# Crash report
### What happened?
Race condition here:
https://github.com/python/cpython/blob/60403a5409ff2c3f3b07dd2ca91a7a3e096839c7/Objects/codeobject.c#L1663-L1665
Core dump and backtrace:
https://github.com/metaopt/optree/actions/runs/11913729282/job/33200071659#step:15:172
```text
Core was generated by `python -X dev -m pytest --verbose --color=yes --durations=10 --showlocals --cov'.
Program terminated with signal SIGABRT, Aborted.
#0 __pthread_kill_implementation (no_tid=0, signo=6, threadid=140335707584064) at ./nptl/pthread_kill.c:44
[Current thread is 1 (Thread 0x7fa273fff640 (LWP 2491))]
#0 __pthread_kill_implementation (no_tid=0, signo=6, threadid=140335707584064) at ./nptl/pthread_kill.c:44
#1 __pthread_kill_internal (signo=6, threadid=140335707584064) at ./nptl/pthread_kill.c:78
#2 __GI___pthread_kill (threadid=140335707584064, signo=signo@entry=6) at ./nptl/pthread_kill.c:89
#3 0x00007fa27b642476 in __GI_raise (sig=6) at ../sysdeps/posix/raise.c:26
#4 0x00007fa27be3f0f0 in faulthandler_fatal_error (signum=6) at ./Modules/faulthandler.c:338
#5 <signal handler called>
#6 __pthread_kill_implementation (no_tid=0, signo=6, threadid=140335707584064) at ./nptl/pthread_kill.c:44
#7 __pthread_kill_internal (signo=6, threadid=140335707584064) at ./nptl/pthread_kill.c:78
#8 __GI___pthread_kill (threadid=140335707584064, signo=signo@entry=6) at ./nptl/pthread_kill.c:89
#9 0x00007fa27b642476 in __GI_raise (sig=sig@entry=6) at ../sysdeps/posix/raise.c:26
#10 0x00007fa27b6287f3 in __GI_abort () at ./stdlib/abort.c:79
#11 0x00007fa27b62871b in __assert_fail_base (fmt=0x7fa27b7dd130 "%s%s%s:%u: %s%sAssertion `%s' failed.\n%n",
assertion=0x7fa27bf0b730 "co->_co_cached->_co_code == NULL", file=0x7fa27bf0b174 "Objects/codeobject.c", line=1664,
function=<optimized out>) at ./assert/assert.c:92
#12 0x00007fa27b639e96 in __GI___assert_fail (assertion=0x7fa27bf0b730 "co->_co_cached->_co_code == NULL",
file=0x7fa27bf0b174 "Objects/codeobject.c", line=1664, function=0x7fa27bf0c0e0 <__PRETTY_FUNCTION__.13> "_PyCode_GetCode")
at ./assert/assert.c:101
#13 0x00007fa27bb7f165 in _PyCode_GetCode (co=0x200023271d0) at Objects/codeobject.c:1664
#14 0x00007fa27bb7f1a5 in PyCode_GetCode (co=0x200023271d0) at Objects/codeobject.c:1672
#15 0x00007fa27b1b2547 in CTracer_handle_call (frame=0x20026051010, self=0x20026070110) at coverage/ctracer/tracer.c:557
#16 CTracer_trace (self=0x20026070110, frame=0x20026051010, what=0, arg_unused=<optimized out>)
at coverage/ctracer/tracer.c:844
#17 0x00007fa27bdd738d in call_trace_func (self=0x20002152ea0, arg=0x7fa27c14fc60 <_Py_NoneStruct>)
at Python/legacy_tracing.c:189
#18 0x00007fa27bdd75fa in sys_trace_start (self=0x20002152ea0, args=0x7fa273ffcdf8, nargsf=9223372036854775810, kwnames=0x0)
at Python/legacy_tracing.c:229
#19 0x00007fa27bdcc0a5 in _PyObject_VectorcallTstate (tstate=0x5620713a56e0, callable=0x20002152ea0, args=0x7fa273ffcdf8,
nargsf=9223372036854775810, kwnames=0x0) at ./Include/internal/pycore_call.h:168
#20 0x00007fa27bdce182 in call_one_instrument (interp=0x7fa27c1975c0 <_PyRuntime+128640>, tstate=0x5620713a56e0,
args=0x7fa273ffcdf8, nargsf=9223372036854775810, tool=7 '\a', event=0) at Python/instrumentation.c:907
#21 0x00007fa27bdcea63 in call_instrumentation_vector (tstate=0x5620713a56e0, event=0, frame=0x7fa27b0d8350,
instr=0x200023272aa, nargs=2, args=0x7fa273ffcdf0) at Python/instrumentation.c:1095
#22 0x00007fa27bdcec5d in _Py_call_instrumentation (tstate=0x5620713a56e0, event=0, frame=0x7fa27b0d8350, instr=0x200023272aa)
at Python/instrumentation.c:1132
#23 0x00007fa27bd46096 in _PyEval_EvalFrameDefault (tstate=0x5620713a56e0, frame=0x7fa27b0d8350, throwflag=0)
at Python/generated_cases.c.h:3474
#24 0x00007fa27bd32d31 in _PyEval_EvalFrame (tstate=0x5620713a56e0, frame=0x7fa27b0d8020, throwflag=0)
at ./Include/internal/pycore_ceval.h:119
#25 0x00007fa27bd56877 in _PyEval_Vector (tstate=0x5620713a56e0, func=0x200012a69d0, locals=0x0, args=0x7fa273ffec40,
argcount=1, kwnames=0x0) at Python/ceval.c:1806
#26 0x00007fa27bb7271e in _PyFunction_Vectorcall (func=0x200012a69d0, stack=0x7fa273ffec40, nargsf=1, kwnames=0x0)
at Objects/call.c:413
#27 0x00007fa27bb765f3 in _PyObject_VectorcallTstate (tstate=0x5620713a56e0, callable=0x200012a69d0, args=0x7fa273ffec40,
nargsf=1, kwnames=0x0) at ./Include/internal/pycore_call.h:168
#28 0x00007fa27bb76c80 in method_vectorcall (method=0x200248b8890, args=0x7fa27c197548 <_PyRuntime+128520>, nargsf=0,
kwnames=0x0) at Objects/classobject.c:70
#29 0x00007fa27bb7208f in _PyVectorcall_Call (tstate=0x5620713a56e0, func=0x7fa27bb76a66 <method_vectorcall>,
callable=0x200248b8890, tuple=0x7fa27c197520 <_PyRuntime+128480>, kwargs=0x0) at Objects/call.c:273
#30 0x00007fa27bb7243c in _PyObject_Call (tstate=0x5620713a56e0, callable=0x200248b8890,
args=0x7fa27c197520 <_PyRuntime+128480>, kwargs=0x0) at Objects/call.c:348
#31 0x00007fa27bb72517 in PyObject_Call (callable=0x200248b8890, args=0x7fa27c197520 <_PyRuntime+128480>, kwargs=0x0)
at Objects/call.c:373
#32 0x00007fa27bec8297 in thread_run (boot_raw=0x5620712f0d80) at ./Modules/_threadmodule.c:337
#33 0x00007fa27be210f7 in pythread_wrapper (arg=0x5620712f0f30) at Python/thread_pthread.h:243
#34 0x00007fa27b694ac3 in start_thread (arg=<optimized out>) at ./nptl/pthread_create.c:442
#35 0x00007fa27b726850 in clone3 () at ../sysdeps/unix/sysv/linux/x86_64/clone3.S:81
```
### CPython versions tested on:
3.13
### Operating systems tested on:
Linux
### Output from running 'python -VV' on the command line:
Python 3.13.0 experimental free-threading build
<!-- gh-linked-prs -->
### Linked PRs
* gh-127043
* gh-127107
<!-- /gh-linked-prs -->
| 3926842117feffe5d2c9727e1899bea5ae2adb28 | dc7a2b6522ec7af41282bc34f405bee9b306d611 |
python/cpython | python__cpython-127027 | # Remove lazy dictionary tracking
# Feature or enhancement
### Proposal:
In order to reduce the overhead of cycle GC for objects that cannot be part of cycles, we lazily untrack tuples and dictionaries that refer only to objects that cannot be part of a cycle.
This is fine for tuples, but dictionaries are mutable, so we need to check every time a dictionary is modified whether it needs to be tracked.
Since most objects no longer have a `__dict__` dictionary, the complexity and overhead of this lazy tracking is not worth the small benefit in the cycle GC.
This was originally implemented in https://github.com/python/cpython/pull/126502, but is largely orthogonal to the main purpose of that PR, so should be implemented separately.
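The documented `gc.is_tracked()` API makes this behavior observable (the dict results depend on the interpreter version, since this proposal removes the dict-specific untracking; the tuple untracking is unaffected):

```python
import gc

t = (1, 2, 3)            # tuple of atomic values
gc.collect()             # tuples are lazily untracked during collection
print(gc.is_tracked(t))  # False once a full collection has run

d = {"a": []}            # holds a container, so it can be part of a cycle
print(gc.is_tracked(d))  # True: must stay visible to the cycle GC
```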
### Has this already been discussed elsewhere?
This is a minor feature, which does not need previous discussion elsewhere
### Links to previous discussion of this feature:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-127027
<!-- /gh-linked-prs -->
| aea0c586d181abb897511b6b46d28bfbe4858f79 | 7191b7662efcd79f2f19821c9b9fa2155df6f698 |
python/cpython | python__cpython-127035 | # shutil.which() can return non-executable path starting with 3.12 on Windows
# Bug report
### Bug description:
Starting with Python 3.12, if a file on PATH does not end with a PATHEXT extension but happens to have the same name as the searched command, it is returned instead of the real command that appears later on PATH with a PATHEXT extension.
```python
# Assume:
# PATH=C:\foo;C:\WINDOWS\system32
# "C:\foo\cmd" exists
# "C:\WINDOWS\system32\cmd.exe" exists
import shutil
print(shutil.which('cmd'))
# Actual: C:\foo\cmd
# Expected: C:\WINDOWS\system32\cmd.exe (3.11 result)
```
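The 3.11 behavior can be sketched with a hypothetical predicate (not the actual `shutil` code) that only treats a Windows candidate as executable when its name carries a `PATHEXT` extension:

```python
# Hypothetical check mirroring the pre-3.12 rule; real code would read
# os.environ["PATHEXT"] rather than hard-coding these defaults.
def has_pathext(name, pathext=(".COM", ".EXE", ".BAT", ".CMD")):
    return name.upper().endswith(tuple(ext.upper() for ext in pathext))

print(has_pathext("cmd"))      # False: extensionless candidate is skipped
print(has_pathext("cmd.exe"))  # True
```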
I have verified that this change was introduced with https://github.com/python/cpython/pull/103179 and reverting it fixes the issue.
@csm10495 ^
Downstream context: https://github.com/mesonbuild/meson/pull/13886
### CPython versions tested on:
3.12
### Operating systems tested on:
Windows
<!-- gh-linked-prs -->
### Linked PRs
* gh-127035
* gh-127156
* gh-127158
<!-- /gh-linked-prs -->
| 8899e85de100557899da05f0b37867a371a73800 | 615abb99a4538520f380ab26a42f1506e08ffd09 |
python/cpython | python__cpython-127062 | # Pickletools Default Encoding
# Bug report
### Bug description:
When unpickling using `_pickle.c` or `pickle.py` through `load`/`loads`, an encoding can be specified using the `encoding` argument, with the default being ASCII. However, pickletools does not support custom encodings and instead hard-codes its own assumptions, which can lead to incorrect data being displayed, or to pickletools raising (or failing to raise) an error where the normal unpickling process would behave differently.
The three opcodes that I have found this in are `STRING`, `BINSTRING`, and `SHORT_BINSTRING`:
* On [line 359](https://github.com/python/cpython/blob/main/Lib/pickletools.py#L359), it assumes ASCII for `STRING`
* On [line 456](https://github.com/python/cpython/blob/main/Lib/pickletools.py#L456), it assumes `latin-1` for `BINSTRING`
* On [line 422](https://github.com/python/cpython/blob/main/Lib/pickletools.py#L422), it assumes `latin-1` for `SHORT_BINSTRING`
I think the best solution would be to support encodings as an optional argument in `pickletools.py`, with the default being set to ASCII (since that's the default encoding for `pickle.py` and `_pickle.c`).
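The discrepancy is easy to demonstrate with a `SHORT_BINSTRING` payload containing a non-ASCII byte: `pickle.loads()` rejects it under the default `encoding='ASCII'`, while `pickletools.dis()` decodes it as latin-1 without complaint:

```python
import pickle
import pickletools

data = b'U\x01\xe9.'  # SHORT_BINSTRING, length 1, byte 0xE9, then STOP
try:
    pickle.loads(data)
except UnicodeDecodeError as exc:
    print("pickle.loads:", exc)

pickletools.dis(data)  # prints the byte as latin-1
```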
### CPython versions tested on:
3.11
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-127062
* gh-127094
* gh-127095
<!-- /gh-linked-prs -->
| eaf217108226633c03cc5c4c90f0b6e4587c8803 | ff2278e2bf660155ca8f7c0529190ca59a41c13a |
python/cpython | python__cpython-127042 | # Pickle `INT`/`LONG` base discrepancy
# Bug report
### Bug description:
The `INT` opcode in pickle is the `I` character followed by an ASCII number and a newline. There are multiple comments asking if the base should be explicitly set to 10, or kept as 0. However, a discrepancy exists between pickle implementations:
* [`_pickle.c`](https://github.com/python/cpython/blob/main/Modules/_pickle.c#L5216) uses `strtol(s, &endptr, 0);` with a base of 0, meaning `0xf` would succeed
* [`pickle.py`](https://github.com/python/cpython/blob/main/Lib/pickle.py#L1390) uses `int(data, 0)` with a base of 0, meaning `0xf` would succeed
* [`pickletools.py`](https://github.com/python/cpython/blob/main/Lib/pickletools.py#L750) uses `read_decimalnl_short()`, which calls `int(s)`, meaning any non-decimal base would fail
This same inconsistency exists with the `LONG` opcode:
* [`_pickle.c`](https://github.com/python/cpython/blob/main/Modules/_pickle.c#L5374)
* [`pickle.py`](https://github.com/python/cpython/blob/main/Lib/pickle.py#L1410)
* [`pickletools.py`](https://github.com/Legoclones/cpython/blob/main/Lib/pickletools.py#L786)
This means that an attempt to disassemble such a pickle bytestream using `pickletools` would fail here, while the actual unpickling process would succeed.
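A hex argument to `INT` shows the split directly: both unpicklers accept it via base 0, while `pickletools` fails on the same stream:

```python
import pickle
import pickletools

data = b'I0xf\n.'          # INT opcode with a hex literal, then STOP
print(pickle.loads(data))  # 15: int(arg, 0) / strtol(..., 0) accept hex
try:
    pickletools.dis(data)  # read_decimalnl_short() calls int(s)
except ValueError as exc:
    print("pickletools:", exc)
```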
Personally, I don't really care whether all implementations are changed to base 10 or base 0 (`save_long()` only puts it in decimal form), but I think it should be consistent across all implementations. I'd submit a pull request for one way or the other, but I'm not sure which way you'd prefer it.
Also, as a note: the pickle bytestream `b'I0001\n.'` (`INT` with the argument `0001`) fails in `pickle.py` because leading zeros in a number parsed with base 0 cause an error. No error is thrown in `_pickle.c` (which uses `strtol`) or in `pickletools.py` (which does not specify base 0). If we keep base 0, that discrepancy between `pickle.py` and the other implementations would remain, whereas changing to base 10 (i.e. removing base 0) would also remove it. For `LONG`, both `pickle.py` and `_pickle.c` fail with `b'L0001L\n.'`, but `pickletools.py` has no problem displaying that number (since it specifies no base).
### CPython versions tested on:
3.11
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-127042
<!-- /gh-linked-prs -->
| ce76b547f94de6b1c9c74657b4e8f150365ad76f | d5d84c3f13fe7fe591b375c41979d362bc11957a |
python/cpython | python__cpython-126990 | # Missing Py_DECREF for load_build in _pickle.c
# Bug report
### Bug description:
In the `load_build` function of [`Modules/_pickle.c`](https://github.com/python/cpython/blob/3.11/Modules/_pickle.c#L5168), if setting a value in a dictionary fails, the `dict` variable does not have its reference counter decreased.
Pull request was made at https://github.com/python/cpython/pull/126990
### CPython versions tested on:
3.11
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-126990
* gh-127018
* gh-127019
* gh-127031
* gh-127063
* gh-127064
<!-- /gh-linked-prs -->
| 29cbcbd73bbfd8c953c0b213fb33682c289934ff | c5c9286804e38c95fe717f22ce1bf2f18eee5b17 |
python/cpython | python__cpython-126988 | # Crash in _PyXI_ApplyErrorCode()
# Crash report
### Bug description:
This is a problem in Python/crossinterp.c, as exposed by the _interpreters module.
reproducer:
1. interp is running as "main" in thread 1
2. thread 2 calls `_PyXI_Enter()`, which emits the `_PyXI_ERR_ALREADY_RUNNING` error code
3. thread 1 finishes
4. thread 2 calls `_PyXI_ApplyError()`
5. `_PyXI_ApplyError()` calls `_PyInterpreterState_FailIfNotRunning()` to raise the exception, but it doesn't raise anything
I noticed this while working on gh-126914.
### CPython versions tested on:
3.14
### Operating systems tested on:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-126988
* gh-126995
* gh-127112
<!-- /gh-linked-prs -->
| d6b3e78504b3168c432b20002dbcf8ec9a435e61 | 0063f5f314350ad5122a86f31df65f5dff4f4e5c |
python/cpython | python__cpython-126987 | # Make `pyvenv.cfg`-relocation part of the interpreter initialization instead of the `site` module
# Feature or enhancement
### Proposal:
Currently, the mechanism behind virtual environments is implemented in the `site` module. This means that disabling the `site` initialization (passing `-S`) disables virtual environments.
I would like to move the virtual environment detection from `site` to the interpreter initialization (`getpath`), which would essentially consist of setting `sys.prefix` and `sys.exec_prefix` to the virtual environment prefix in `getpath`.
The motivation is the same as GH-126793, the same UX example applies. Running `python -S -m ensurepip` inside a virtual environment would install to the system, instead of the virtual environment.
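The effect is visible by comparing `sys.prefix` with `sys.base_prefix` (a demonstration of venv detection, not of the proposed change): inside a virtual environment with `site` enabled they differ, and under `python -S` the rewrite currently does not happen:

```python
import sys

# Inside a virtual environment (site enabled), sys.prefix points at the
# venv while sys.base_prefix points at the base installation; with -S,
# current interpreters leave sys.prefix at the base installation.
in_venv = sys.prefix != sys.base_prefix
print("prefix:", sys.prefix)
print("base_prefix:", sys.base_prefix)
print("virtual environment:", in_venv)
```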
### Has this already been discussed elsewhere?
No response given
### Links to previous discussion of this feature:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-126987
* gh-127968
<!-- /gh-linked-prs -->
| 2b0e2b2893a821ca36cd65a204bed932741ac189 | ab237ff81d2201c70aaacf51f0c033df334e5d07 |
python/cpython | python__cpython-126981 | # `bytearray().__buffer__(256)` crashes
# Crash report
Code says that it cannot fail: https://github.com/python/cpython/blob/4cd10762b06ec57252e3c7373e74240b4d0c5ed8/Objects/bytearrayobject.c#L44-L56
But, it can: https://github.com/python/cpython/blob/4cd10762b06ec57252e3c7373e74240b4d0c5ed8/Objects/abstract.c#L770-L774
I have a PR ready.
CC @JelleZijlstra
<!-- gh-linked-prs -->
### Linked PRs
* gh-126981
* gh-127023
<!-- /gh-linked-prs -->
| 3932e1db5353bbcf3e3c1133cc9d2cde654cb645 | 4d771977b17e5ffaa9c2e8a2e6f5d393f68fc63c |
python/cpython | python__cpython-126949 | # Typecheck can be added to _pydatetime.timedelta.__new__
# Feature or enhancement
### Proposal:
It seems that a typecheck can be added as suggested by the existing comment
```python
def __new__(cls, days=0, seconds=0, microseconds=0,
            milliseconds=0, minutes=0, hours=0, weeks=0):
    # Doing this efficiently and accurately in C is going to be difficult
    ...
    # XXX Check that all inputs are ints or floats.
```
Currently, an error is raised, but it's not explicit that it's because of an argument of wrong type
```python
_pydatetime.timedelta(seconds='1')
# TypeError: can only concatenate str (not "int") to str
```
If this type check is not desirable, I think we can simply remove the comment.
### Has this already been discussed elsewhere?
No response given
### Links to previous discussion of this feature:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-126949
<!-- /gh-linked-prs -->
| 8da9920a80c60fb3fc326c623e0f217c84011c1d | 88dc84bcf9fef32afa9af0ab41fa467c9733483f |
python/cpython | python__cpython-126871 | # Error message in getopt.do_longs can be improved based on the existing comment
# Feature or enhancement
### Proposal:
`getopt.do_longs` raises an error when there are 2 or more possibilities for a given long option. Currently the error only shows the user input, but not the possibilities. For example, `option --he not a unique prefix`.
This behavior can be improved by including the possible matches in the error message, as suggested by the existing comment: “XXX since possibilities contains all valid continuations, might be nice to work them into the error msg”. For example, `option --he not a unique prefix; possible options: help, hello, hearts`.
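The current behavior is easy to reproduce:

```python
import getopt

# Three long options share the prefix "he", so "--he" is ambiguous.
try:
    getopt.getopt(['--he'], '', ['help=', 'hello', 'hearts'])
except getopt.GetoptError as exc:
    print(exc)  # option --he not a unique prefix
```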
### Has this already been discussed elsewhere?
No response given
### Links to previous discussion of this feature:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-126871
<!-- /gh-linked-prs -->
| f46d8475749a0eadbc1f37079906a8e1ed5d56dc | 733fe59206e04141fd5cf65606ebd3d42a996226 |
python/cpython | python__cpython-126938 | # Struct fields with size > 65535 bytes throw TypeError
# Bug report
### Bug description:
`PyCField_new_impl()` processes fields with size > 65535 bytes as bitfields, because it relies on the `NUM_BITS()` function. It then throws a `TypeError` because such fields aren't a valid bitfield type.
### Minimal Example
```python
from ctypes import Structure, c_byte
class OneByte(Structure):
    _fields_ = [('b', c_byte)]

TooBig = OneByte * 65536

class CausesTypeError(Structure):
    _fields_ = [('tb', TooBig)]  # TypeError: bit fields not allowed for type OneByte_Array_65536
```
### Root Cause
When ctypes/_layout.py creates fields for a type, if the field type is a bitfield, it passes the size in bits to `CField()`'s `bit_size` parameter. Otherwise, it passes `None`. That parameter gets passed as `PyCField_new_impl()`'s `bit_size_obj` parameter. However, that parameter goes unused. Instead, when checking if the field is a bitfield, we use `NUM_BITS()`, passing in `size` (in bytes), which effectively does an int division by 65536, and check if the result > `0`. So, for fields with size > 65535 bytes, they will be treated as bitfields, and rejected as they're not one of the bitfield-compatible types.
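The misclassification follows from the arithmetic alone:

```python
# NUM_BITS(size) is size >> 16, so any byte size >= 65536 looks like a
# nonzero bit count and takes the bitfield path.
for size in (65535, 65536):
    print(size, '->', size >> 16)
```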
### Tested versions
#### No Bug
Python 3.12.1 (main, Jul 26 2024, 14:03:47) [Clang 19.0.0git (https:/github.com/llvm/llvm-project 0a8cd1ed1f4f35905df318015b on emscripten
Python 3.11.5 (tags/v3.11.5:cce6ba9, Aug 24 2023, 14:38:34) [MSC v.1936 64 bit (AMD64)] on win32
#### Bug
Python 3.14.0a1 (tags/v3.14.0a1:8cdaca8, Oct 15 2024, 20:08:21) [MSC v.1941 64 bit (AMD64)] on win32
### Commentary
Found this bug while using the latest 3.14 python build and attempting to import the Scapy library. A `TypeError` is thrown on import. They use a struct with a large field to hold tables they expect to receive from a Windows API call.
If it's considered expected behavior to limit the size of a field to 64KB, then that would contradict the size test earlier in `PyCField_new_impl()`, which throws a `ValueError` only if the size is bigger than what `Py_ssize_t` can hold.
As long as it's possible to define one of these types without actually instantiating it, we should add a test that defines a max-size field.
I don't understand the naming of `NUM_BITS()`. See below for reference. In context, it's tightly integrated in bitfield get/set behavior, in a way that's too convoluted for me to follow. And it's used sporadically throughout Modules/_ctypes/cfield.c, so I'd hesitate to touch other usages. But this bug represents one misusage, so are there others?
```C
static inline
Py_ssize_t NUM_BITS(Py_ssize_t bitsize) {
    return bitsize >> 16;
}
```
### CPython versions tested on:
3.14, CPython main branch
### Operating systems tested on:
Windows
<!-- gh-linked-prs -->
### Linked PRs
* gh-126938
* gh-127825
* gh-127909
<!-- /gh-linked-prs -->
| cef0a90d8f3a94aa534593f39b4abf98165675b9 | f4b31edf2d9d72878dab1f66a36913b5bcc848ec |
python/cpython | python__cpython-127592 | # "make testios" fails with Xcode 16+ due to changes in xcresulttool
As of Xcode 16, Apple has changed the command interface to `xcresulttool` such that the way it is invoked when running the `testios` recipe in `Makefile.pre.in` is no longer valid, causing the iOS test step to fail while attempting to extract the results of the test run.
```pytb
[...]
# Regardless of success or failure, extract and print the test output
xcrun xcresulttool get --path iOSTestbed.arm64-iphonesimulator.1731824398/arm64-iphonesimulator.xcresult \
--id $( xcrun xcresulttool get --path iOSTestbed.arm64-iphonesimulator.1731824398/arm64-iphonesimulator.xcresult --format json | _PYTHON_PROJECT_BASE=/Users/nad/Projects/PyDev/active/dev/3x/source/build-arm64-apple-ios-simulator _PYTHON_HOST_PLATFORM=ios-13.0-arm64-iphonesimulator PYTHONPATH=../Lib _PYTHON_SYSCONFIGDATA_NAME=_sysconfigdata__ios_arm64-iphonesimulator _PYTHON_SYSCONFIGDATA_PATH=/Users/nad/Projects/PyDev/active/dev/3x/source/build-arm64-apple-ios-simulator/build/lib.ios-13.0-arm64-iphonesimulator-3.14 /Users/nad/Projects/PyDev/active/dev/3x/source/build-arm64-apple-darwin/root-arm64-apple-darwin/bin/python3 -c "import sys, json; result = json.load(sys.stdin); print(result['actions']['_values'][0]['actionResult']['logRef']['id']['_value'])" ) \
--format json | \
_PYTHON_PROJECT_BASE=/Users/nad/Projects/PyDev/active/dev/3x/source/build-arm64-apple-ios-simulator _PYTHON_HOST_PLATFORM=ios-13.0-arm64-iphonesimulator PYTHONPATH=../Lib _PYTHON_SYSCONFIGDATA_NAME=_sysconfigdata__ios_arm64-iphonesimulator _PYTHON_SYSCONFIGDATA_PATH=/Users/nad/Projects/PyDev/active/dev/3x/source/build-arm64-apple-ios-simulator/build/lib.ios-13.0-arm64-iphonesimulator-3.14 /Users/nad/Projects/PyDev/active/dev/3x/source/build-arm64-apple-darwin/root-arm64-apple-darwin/bin/python3 -c "import sys, json; result = json.load(sys.stdin); print(result['subsections']['_values'][1]['subsections']['_values'][0]['emittedOutput']['_value'])"
Error: This command is deprecated and will be removed in a future release, --legacy flag is required to use it.
Usage: xcresulttool get object [--legacy] --path <path> [--id <id>] [--version <version>] [--format <format>]
See 'xcresulttool get object --help' for more information.
Traceback (most recent call last):
File "<string>", line 1, in <module>
import sys, json; result = json.load(sys.stdin); print(result['actions']['_values'][0]['actionResult']['logRef']['id']['_value'])
~~~~~~~~~^^^^^^^^^^^
File "/Users/nad/Projects/PyDev/active/dev/3x/source/Lib/json/__init__.py", line 293, in load
return loads(fp.read(),
cls=cls, object_hook=object_hook,
parse_float=parse_float, parse_int=parse_int,
parse_constant=parse_constant, object_pairs_hook=object_pairs_hook, **kw)
File "/Users/nad/Projects/PyDev/active/dev/3x/source/Lib/json/__init__.py", line 346, in loads
return _default_decoder.decode(s)
~~~~~~~~~~~~~~~~~~~~~~~^^^
File "/Users/nad/Projects/PyDev/active/dev/3x/source/Lib/json/decoder.py", line 345, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/nad/Projects/PyDev/active/dev/3x/source/Lib/json/decoder.py", line 363, in raw_decode
raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
Error: Missing value for '--id <id>'
Help: --id <id> The ID of the object [optional, assumes rootID if not specified].
Usage: xcresulttool get object [--legacy] --path <path> [--id <id>] [--version <version>] [--format <format>]
See 'xcresulttool get object --help' for more information.
Traceback (most recent call last):
File "<string>", line 1, in <module>
import sys, json; result = json.load(sys.stdin); print(result['subsections']['_values'][1]['subsections']['_values'][0]['emittedOutput']['_value'])
~~~~~~~~~^^^^^^^^^^^
File "/Users/nad/Projects/PyDev/active/dev/3x/source/Lib/json/__init__.py", line 293, in load
return loads(fp.read(),
cls=cls, object_hook=object_hook,
parse_float=parse_float, parse_int=parse_int,
parse_constant=parse_constant, object_pairs_hook=object_pairs_hook, **kw)
File "/Users/nad/Projects/PyDev/active/dev/3x/source/Lib/json/__init__.py", line 346, in loads
return _default_decoder.decode(s)
~~~~~~~~~~~~~~~~~~~~~~~^^^
File "/Users/nad/Projects/PyDev/active/dev/3x/source/Lib/json/decoder.py", line 345, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/nad/Projects/PyDev/active/dev/3x/source/Lib/json/decoder.py", line 363, in raw_decode
raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
make: *** [testios] Error 1
```
<!-- gh-linked-prs -->
### Linked PRs
* gh-127592
* gh-127754
* gh-129124
<!-- /gh-linked-prs -->
| 2041a95e68ebf6d13f867e214ada28affa830669 | d8d12b37b5e5acb354db84b07dab8de64a6b9475 |
python/cpython | python__cpython-126913 | # Update "credits" command output
# Bug report
### Bug description:
```python
credits
```
Returns
```
Thanks to CWI, CNRI, BeOpen.com, Zope Corporation and a cast of thousands
for supporting Python development. See www.python.org for more information.
```
This should be updated, as the BeOpen.com domain has changed ownership and the command is suggested by IDLE at every startup.
### CPython versions tested on:
3.13
### Operating systems tested on:
Windows
<!-- gh-linked-prs -->
### Linked PRs
* gh-126913
* gh-126973
* gh-126974
<!-- /gh-linked-prs -->
| 8fe1926164932f868e6e907ad72a74c2f2372b07 | b0fcc2c47a34a69c35c1a8031cd0589d3747c1af |
python/cpython | python__cpython-126930 | # test_os.ExtendedAttributeTests fail on filesystems with low xattr limits (e.g. ext4 with small blocks)
# Bug report
### Bug description:
This is roughly the same as #66210, except that bug was closed due to "not enough info", and I'm here to provide "enough info", and I can't reopen that one.
I'm seeing the following tests fail on Gentoo ARM devboxen:
<details>
```pytb
$ ./python -m test test_os -v -m ExtendedAttributeTests
== CPython 3.14.0a1+ (heads/main:2313f84, Nov 16 2024, 16:54:48) [GCC 13.3.1 20241024]
== Linux-5.15.169-gentoo-dist-aarch64-with-glibc2.40 little-endian
== Python build: release
== cwd: /var/tmp/cpython/build/test_python_worker_802177æ
== CPU count: 96
== encodings: locale=UTF-8 FS=utf-8
== resources: all test resources are disabled, use -u option to unskip tests
Using random seed: 892697182
0:00:00 load avg: 1.53 Run 1 test sequentially in a single process
0:00:00 load avg: 1.53 [1/1] test_os
test_fds (test.test_os.ExtendedAttributeTests.test_fds) ... ERROR
test_lpath (test.test_os.ExtendedAttributeTests.test_lpath) ... ERROR
test_simple (test.test_os.ExtendedAttributeTests.test_simple) ... ERROR
======================================================================
ERROR: test_fds (test.test_os.ExtendedAttributeTests.test_fds)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/var/tmp/cpython/Lib/test/test_os.py", line 4006, in test_fds
self._check_xattrs(getxattr, setxattr, removexattr, listxattr)
~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/var/tmp/cpython/Lib/test/test_os.py", line 3979, in _check_xattrs
self._check_xattrs_str(str, *args, **kwargs)
~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^
File "/var/tmp/cpython/Lib/test/test_os.py", line 3970, in _check_xattrs_str
setxattr(fn, s("user.test"), b"a"*1024, **kwargs)
~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/var/tmp/cpython/Lib/test/test_os.py", line 3999, in setxattr
os.setxattr(fp.fileno(), *args)
~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^
OSError: [Errno 28] No space left on device: 3
======================================================================
ERROR: test_lpath (test.test_os.ExtendedAttributeTests.test_lpath)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/var/tmp/cpython/Lib/test/test_os.py", line 3990, in test_lpath
self._check_xattrs(os.getxattr, os.setxattr, os.removexattr,
~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
os.listxattr, follow_symlinks=False)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/var/tmp/cpython/Lib/test/test_os.py", line 3979, in _check_xattrs
self._check_xattrs_str(str, *args, **kwargs)
~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^
File "/var/tmp/cpython/Lib/test/test_os.py", line 3970, in _check_xattrs_str
setxattr(fn, s("user.test"), b"a"*1024, **kwargs)
~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
OSError: [Errno 28] No space left on device: '@test_802177_tmpæ'
======================================================================
ERROR: test_simple (test.test_os.ExtendedAttributeTests.test_simple)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/var/tmp/cpython/Lib/test/test_os.py", line 3986, in test_simple
self._check_xattrs(os.getxattr, os.setxattr, os.removexattr,
~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
os.listxattr)
^^^^^^^^^^^^^
File "/var/tmp/cpython/Lib/test/test_os.py", line 3979, in _check_xattrs
self._check_xattrs_str(str, *args, **kwargs)
~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^
File "/var/tmp/cpython/Lib/test/test_os.py", line 3970, in _check_xattrs_str
setxattr(fn, s("user.test"), b"a"*1024, **kwargs)
~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
OSError: [Errno 28] No space left on device: '@test_802177_tmpæ'
----------------------------------------------------------------------
Ran 3 tests in 0.008s
FAILED (errors=3)
test test_os failed
test_os failed (3 errors)
== Tests result: FAILURE ==
1 test failed:
test_os
Total duration: 154 ms
Total tests: run=3 (filtered)
Total test files: run=1/1 (filtered) failed=1
Result: FAILURE
```
</details>
These tests attempt to set quite a fair number of extended attributes, notably including one attribute with 1024-byte value and 100 short attributes (that should take another 1 KiB). However, according to `xattr(7)`:
> In the current ext2, ext3, and ext4 filesystem implementations, the total bytes used by the names and values of all of a file’s extended attributes must fit in a single filesystem block (1024, 2048 or 4096 bytes, depending on the block size specified when the filesystem was created).
Well, I don't know why exactly, but the filesystems here (on both ARM machines we have) use 1024-byte blocks. Hence, attempting to write over 2 KiB of xattrs to them triggers ENOSPC.
I can get the test to pass if I lower the numbers significantly, e.g.:
```diff
diff --git a/Lib/test/test_os.py b/Lib/test/test_os.py
index 9a4be78..919ed92 100644
--- a/Lib/test/test_os.py
+++ b/Lib/test/test_os.py
@@ -3967,10 +3967,10 @@ def _check_xattrs_str(self, s, getxattr, setxattr, removexattr, listxattr, **kwa
xattr.remove("user.test")
self.assertEqual(set(listxattr(fn)), xattr)
self.assertEqual(getxattr(fn, s("user.test2"), **kwargs), b"foo")
- setxattr(fn, s("user.test"), b"a"*1024, **kwargs)
- self.assertEqual(getxattr(fn, s("user.test"), **kwargs), b"a"*1024)
+ setxattr(fn, s("user.test"), b"a"*256, **kwargs)
+ self.assertEqual(getxattr(fn, s("user.test"), **kwargs), b"a"*256)
removexattr(fn, s("user.test"), **kwargs)
- many = sorted("user.test{}".format(i) for i in range(100))
+ many = sorted("user.test{}".format(i) for i in range(32))
for thing in many:
setxattr(fn, thing, b"x", **kwargs)
self.assertEqual(set(listxattr(fn)), set(init_xattr) | set(many))
```
However, I don't know if that's desirable. The alternatives might be to catch `ENOSPC` and use lower numbers then, or use a larger value in `supports_extended_attributes()` to have the tests skipped when the filesystem has an xattr limit lower than 4096 bytes.
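The first alternative can be sketched as a small helper (hypothetical, not the test suite's actual code), which tries progressively smaller xattr values and falls back when the filesystem reports ENOSPC:

```python
import errno


def set_xattr_with_fallback(setter, name, sizes=(1024, 256, 64)):
    """Try progressively smaller xattr values, falling back on ENOSPC.

    `setter` is assumed to be a callable like
    functools.partial(os.setxattr, path). Returns the size that fit,
    or None if every size triggered ENOSPC.
    """
    for size in sizes:
        try:
            setter(name, b"a" * size)
            return size
        except OSError as exc:
            if exc.errno != errno.ENOSPC:
                raise
    return None
```

Skipping (or shrinking) only after a real ENOSPC keeps the full-size test on filesystems with larger blocks.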
### CPython versions tested on:
3.9, 3.10, CPython main branch
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-126930
* gh-126964
* gh-126965
<!-- /gh-linked-prs -->
| 2c0a21c1aad65ab8362491acf856eb574b1257ad | f9c5573dedcb2f2e9ae152672ce157987cdea612 |
python/cpython | python__cpython-126900 | # Add `**kw` to `tkinter.Misc.after` and `tkinter.Misc.after_idle`
# Feature or enhancement
### Proposal:
Add the argument `**kw` to the method `after` so that keyword arguments can be passed to `func` conveniently.
The current function definition of the `after` is as follows:
```python
def after(self, ms, func=None, *args): ...
```
If we have the argument `**kw`, we can do this:
```python
import tkinter
root = tkinter.Tk()
root.after(1000, root.configure, bg="red")
root.mainloop()
```
Otherwise, we may need something like this:
```python
import tkinter
root = tkinter.Tk()
root.after(1000, lambda: root.configure(bg="red"))
root.mainloop()
```
Obviously, the `lambda` here looks a bit redundant.
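A sketch of how the proposed behaviour could be emulated today with `functools.partial` (the `after_with_kwargs` helper is hypothetical, not part of tkinter):

```python
from functools import partial


def after_with_kwargs(widget, ms, func, *args, **kw):
    """If keyword arguments are given, wrap the callback so they
    reach `func`; otherwise delegate unchanged.

    `widget` is assumed to expose the standard tkinter `after` method.
    """
    if kw:
        func = partial(func, *args, **kw)
        args = ()
    return widget.after(ms, func, *args)
```

With `**kw` on `after` itself, `root.after(1000, root.configure, bg="red")` would work without such a wrapper.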
### Has this already been discussed elsewhere?
This is a minor feature, which does not need previous discussion elsewhere
### Links to previous discussion of this feature:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-126900
<!-- /gh-linked-prs -->
| 7ea523f47cdb4cf512a1e2ae1f93f5d19a48945d | 04673d2f14414fce7a2372de3048190f66488e6e |
python/cpython | python__cpython-126903 | # Emscripten support: Use es6 modules
We are in the future and ES6 module support is finally universal in mainstream JavaScript runtimes (Firefox was the straggler but finally added Module WebWorker support in v114 released on June 6 2023). They have many benefits, such as fewer global variables and more sanity. We should switch to using them.
cc @freakboy3742
<!-- gh-linked-prs -->
### Linked PRs
* gh-126903
<!-- /gh-linked-prs -->
| 1629d2ca56014beb2d46c42cc199a43ac97e3b97 | 32428cf9ea03bce6d64c7acd28e0b7d92774eb53 |
python/cpython | python__cpython-126897 | # Outdated docs about `asyncio.start_server()`
# Documentation
Since 3.13, the function `asyncio.start_server()` has added a parameter named `keep_alive` (see https://github.com/python/cpython/pull/112485), but the docs didn't completely update in that PR, so the website still shows the old function signature (see https://docs.python.org/3/library/asyncio-stream.html#asyncio.start_server).
It's actually my fault and I'll submit a PR to fix it.
<!-- gh-linked-prs -->
### Linked PRs
* gh-126897
* gh-126934
<!-- /gh-linked-prs -->
| 0c5c80928c476ac0dcb9a053b15a562af899cfba | 9d6366b60d01305fc5e45100e0cd13e358aa397d |
python/cpython | python__cpython-131208 | # Segfault/aborts calling `readline.set_completer_delims` in threads in a free-threaded build
# Crash report
### What happened?
Calling `difflib._test` in threads in a free-threaded build (with `PYTHON_GIL=0`) will result in aborts or segfaults, apparently related to memory issues:
```python
from threading import Thread
import difflib
for x in range(100):
Thread(target=difflib._test, args=()).start()
```
Segfault backtrace:
```gdb
Thread 92 "python" received signal SIGSEGV, Segmentation fault.
[Switching to Thread 0x7ffeca7e4640 (LWP 631239)]
0x00007ffff7d3f743 in unlink_chunk (p=p@entry=0x555555f56c00, av=0x7ffff7eb8c80 <main_arena>) at ./malloc/malloc.c:1634
1634 ./malloc/malloc.c: No such file or directory.
(gdb) bt
#0 0x00007ffff7d3f743 in unlink_chunk (p=p@entry=0x555555f56c00, av=0x7ffff7eb8c80 <main_arena>)
at ./malloc/malloc.c:1634
#1 0x00007ffff7d40d2b in _int_free (av=0x7ffff7eb8c80 <main_arena>, p=0x555555f528a0,
have_lock=<optimized out>) at ./malloc/malloc.c:4616
#2 0x00007ffff7d43453 in __GI___libc_free (mem=<optimized out>) at ./malloc/malloc.c:3391
#3 0x000055555570ed2c in _PyMem_RawFree (_unused_ctx=<optimized out>, ptr=<optimized out>)
at Objects/obmalloc.c:90
#4 0x0000555555713955 in _PyMem_DebugRawFree (ctx=0x555555cdb8f8 <_PyRuntime+824>, p=0x555555f528c0)
at Objects/obmalloc.c:2767
#5 0x000055555572c462 in PyMem_RawFree (ptr=ptr@entry=0x555555f528c0) at Objects/obmalloc.c:971
#6 0x00005555559527fc in free_threadstate (tstate=tstate@entry=0x555555f528c0) at Python/pystate.c:1411
#7 0x00005555559549c0 in _PyThreadState_DeleteCurrent (tstate=tstate@entry=0x555555f528c0)
at Python/pystate.c:1834
#8 0x0000555555a11b74 in thread_run (boot_raw=boot_raw@entry=0x555555f52870)
at ./Modules/_threadmodule.c:355
#9 0x0000555555974fdb in pythread_wrapper (arg=<optimized out>) at Python/thread_pthread.h:242
#10 0x00007ffff7d32ac3 in start_thread (arg=<optimized out>) at ./nptl/pthread_create.c:442
#11 0x00007ffff7dc4850 in clone3 () at ../sysdeps/unix/sysv/linux/x86_64/clone3.S:81
```
Abort 1 backtrace:
```gdb
Thread 78 "python" received signal SIGABRT, Aborted.
[Switching to Thread 0x7ffed17f2640 (LWP 633651)]
__pthread_kill_implementation (no_tid=0, signo=6, threadid=140732413191744) at ./nptl/pthread_kill.c:44
44 ./nptl/pthread_kill.c: No such file or directory.
(gdb) bt
#0 __pthread_kill_implementation (no_tid=0, signo=6, threadid=140732413191744) at ./nptl/pthread_kill.c:44
#1 __pthread_kill_internal (signo=6, threadid=140732413191744) at ./nptl/pthread_kill.c:78
#2 __GI___pthread_kill (threadid=140732413191744, signo=signo@entry=6) at ./nptl/pthread_kill.c:89
#3 0x00007ffff7ce0476 in __GI_raise (sig=sig@entry=6) at ../sysdeps/posix/raise.c:26
#4 0x00007ffff7cc67f3 in __GI_abort () at ./stdlib/abort.c:79
#5 0x00007ffff7d27676 in __libc_message (action=action@entry=do_abort,
fmt=fmt@entry=0x7ffff7e79b77 "%s\n") at ../sysdeps/posix/libc_fatal.c:155
#6 0x00007ffff7d3ecfc in malloc_printerr (
str=str@entry=0x7ffff7e7c6c0 "free(): unaligned chunk detected in tcache 2") at ./malloc/malloc.c:5664
#7 0x00007ffff7d40e3f in _int_free (av=0x7ffff7eb8c80 <main_arena>, p=0x555555dc79b0, have_lock=0)
at ./malloc/malloc.c:4471
#8 0x00007ffff7d43453 in __GI___libc_free (mem=<optimized out>) at ./malloc/malloc.c:3391
#9 0x00007ffff41d4a2b in readline_set_completer_delims (module=<optimized out>, string=<optimized out>)
at ./Modules/readline.c:593
#10 0x00005555557021ef in cfunction_vectorcall_O (
func=<built-in method set_completer_delims of module object at remote 0x2000297b920>,
args=<optimized out>, nargsf=<optimized out>, kwnames=<optimized out>) at Objects/methodobject.c:523
#11 0x000055555567cc55 in _PyObject_VectorcallTstate (tstate=0x555555f15bc0,
callable=<built-in method set_completer_delims of module object at remote 0x2000297b920>,
args=0x7ffed17f15a8, nargsf=9223372036854775809, kwnames=0x0) at ./Include/internal/pycore_call.h:167
#12 0x000055555567cd74 in PyObject_Vectorcall (
callable=callable@entry=<built-in method set_completer_delims of module object at remote 0x2000297b920>,
args=args@entry=0x7ffed17f15a8, nargsf=<optimized out>, kwnames=kwnames@entry=0x0) at Objects/call.c:327
#13 0x00005555558441f7 in _PyEval_EvalFrameDefault (tstate=tstate@entry=0x555555f15bc0,
frame=<optimized out>, throwflag=throwflag@entry=0) at Python/generated_cases.c.h:955
#14 0x0000555555872978 in _PyEval_EvalFrame (throwflag=0, frame=<optimized out>, tstate=0x555555f15bc0)
at ./Include/internal/pycore_ceval.h:116
```
Abort 2 backtrace:
```gdb
Thread 43 "python" received signal SIGABRT, Aborted.
[Switching to Thread 0x7fff54ff9640 (LWP 636245)]
__pthread_kill_implementation (no_tid=0, signo=6, threadid=140734619424320) at ./nptl/pthread_kill.c:44
44 ./nptl/pthread_kill.c: No such file or directory.
(gdb) bt
#0 __pthread_kill_implementation (no_tid=0, signo=6, threadid=140734619424320) at ./nptl/pthread_kill.c:44
#1 __pthread_kill_internal (signo=6, threadid=140734619424320) at ./nptl/pthread_kill.c:78
#2 __GI___pthread_kill (threadid=140734619424320, signo=signo@entry=6) at ./nptl/pthread_kill.c:89
#3 0x00007ffff7ce0476 in __GI_raise (sig=sig@entry=6) at ../sysdeps/posix/raise.c:26
#4 0x00007ffff7cc67f3 in __GI_abort () at ./stdlib/abort.c:79
#5 0x00007ffff7d27676 in __libc_message (action=action@entry=do_abort,
fmt=fmt@entry=0x7ffff7e79b77 "%s\n") at ../sysdeps/posix/libc_fatal.c:155
#6 0x00007ffff7d3ecfc in malloc_printerr (
str=str@entry=0x7ffff7e7cf20 "tcache_thread_shutdown(): unaligned tcache chunk detected")
at ./malloc/malloc.c:5664
#7 0x00007ffff7d436c4 in tcache_thread_shutdown () at ./malloc/malloc.c:3224
#8 __malloc_arena_thread_freeres () at ./malloc/arena.c:1003
#9 0x00007ffff7d461ca in __libc_thread_freeres () at ./malloc/thread-freeres.c:44
#10 0x00007ffff7d3294f in start_thread (arg=<optimized out>) at ./nptl/pthread_create.c:456
#11 0x00007ffff7dc4850 in clone3 () at ../sysdeps/unix/sysv/linux/x86_64/clone3.S:81
```
Found using fusil by @vstinner.
### CPython versions tested on:
3.13, CPython main branch
### Operating systems tested on:
Linux
### Output from running 'python -VV' on the command line:
Python 3.14.0a1+ experimental free-threading build (heads/main-dirty:612ac283b81, Nov 16 2024, 01:37:56) [GCC 11.4.0]
<!-- gh-linked-prs -->
### Linked PRs
* gh-131208
<!-- /gh-linked-prs -->
| 863d54cbaf6c0b45fff691ab275515c1483ad68d | 9a634c5c0cde70a0575c006d04e4cfe917f647ef |
python/cpython | python__cpython-126893 | # Warmup counters should be reset when JIT compiling code
When we tier up into the JIT, we don't reset the warmup counter to a non-zero value. This means that after JIT code is invalidated (either because of watchers or because it's cold), we'll re-JIT that code the very next time it's hit. That's not ideal.
<!-- gh-linked-prs -->
### Linked PRs
* gh-126893
<!-- /gh-linked-prs -->
| 48c50ff1a22f086c302c52a70eb9912d76c66f91 | addb225f3823b03774cddacce35214dd471bec46 |
python/cpython | python__cpython-127281 | # Restore docstrings in `ssl`
# Bug report
### Bug description:
As mentioned [here](https://github.com/python/cpython/issues/124984#issuecomment-2477870588), apparently I stripped some of the docstrings while switching things over to argument clinic in the big SSL thread safety fix (GH-124993).
I'm not certain how much needs to be restored, but I'll get to fixing this sometime this weekend or next week.
cc @JelleZijlstra, @neonene
### CPython versions tested on:
3.14, CPython main branch
### Operating systems tested on:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-127281
* gh-127513
<!-- /gh-linked-prs -->
| c112de1da2d18e3b5c2ea30b0e409f18e574efd8 | bf21e2160d1dc6869fb230b90a23ab030835395b |
python/cpython | python__cpython-127242 | # `datetime.fromisoformat()` parses offset minutes outside 00-59 range
# Bug report
### Bug description:
```python
>>> datetime.fromisoformat('2020-01-01T00:00+00:90')
datetime.datetime(2020, 1, 1, 0, 0, tzinfo=datetime.timezone(datetime.timedelta(seconds=5400)))
```
expected result: `ValueError` (90 minutes is out of range 00-59)
I wasn't able to find a definitive paragraph of the ISO8601 to quote (it's not an open standard), but it appears datetime libraries in other languages (Temporal, NodaTime) do enforce this. Also, RFC3339 does explicitly forbid offset minutes outside the 00-59 range.
What are your thoughts?
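For illustration, a strict wrapper that enforces the RFC 3339 minute range before parsing might look like this (`fromisoformat_strict` is a hypothetical helper, not a proposed API; the minutes must be checked on the string, since the parsed `timedelta` has already normalized `+00:90` to 1h30m):

```python
import re
from datetime import datetime

# Trailing UTC offset like +05:30, -0530, +00:90
_OFFSET_RE = re.compile(r'[+-](\d{2}):?(\d{2})$')


def fromisoformat_strict(s):
    """Parse an ISO 8601 string, rejecting offsets whose minutes fall
    outside 00-59, as RFC 3339 requires."""
    m = _OFFSET_RE.search(s)
    if m and int(m.group(2)) > 59:
        raise ValueError(f"offset minutes out of range: {s!r}")
    return datetime.fromisoformat(s)
```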
### CPython versions tested on:
3.13
### Operating systems tested on:
macOS
<!-- gh-linked-prs -->
### Linked PRs
* gh-127242
<!-- /gh-linked-prs -->
| 71c42b778dfc0831734bb7bc6121ffd44beae1d3 | 8d490b368766ee7b28d2ccf47704b1d4b5f1ea23 |
python/cpython | python__cpython-126887 | # Internal docs: Update code blocks in `garbage_collector.md` to improve syntax highlighting
Code blocks in [garbage_collector.md](https://github.com/python/cpython/blob/main/InternalDocs/garbage_collector.md) are annotated with `pycon`, which doesn't play well with GitHub's syntax highlighting (nor with VS Code's Markdown preview):

I suggest we change that to `python` which highlights the code blocks properly:

<!-- gh-linked-prs -->
### Linked PRs
* gh-126887
<!-- /gh-linked-prs -->
| 14a05a8f433197c40293028f01797da444fb8409 | 58e334e1431b2ed6b70ee42501ea73e08084e769 |
python/cpython | python__cpython-126968 | # Assertion failure for `socket` with too large default timeout (larger than INT_MAX)
# Crash report
### What happened?
A debug build will abort when calling `socket._fallback_socketpair()` after a call to `socket.setdefaulttimeout` with too high a value:
```python
python -c "import socket; socket.setdefaulttimeout(2**31) ; socket._fallback_socketpair()"
python: ./Modules/socketmodule.c:819: internal_select: Assertion `ms <= INT_MAX' failed.
Aborted (core dumped)
```
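A sketch, in Python for brevity, of the clamping the C code could apply instead of asserting (`timeout_to_ms` is a hypothetical helper mirroring what `internal_select` receives):

```python
INT_MAX = 2**31 - 1  # the limit internal_select asserts against


def timeout_to_ms(timeout):
    """Convert a timeout in seconds to milliseconds, clamping to
    INT_MAX instead of letting an oversized default trip the
    `ms <= INT_MAX` assertion."""
    if timeout is None or timeout < 0:
        return -1  # negative means "block indefinitely"
    ms = int(timeout * 1000)
    return min(ms, INT_MAX)
```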
Found using fusil by @vstinner.
### CPython versions tested on:
3.12, 3.13, 3.14, CPython main branch
### Operating systems tested on:
Linux
### Output from running 'python -VV' on the command line:
Python 3.13.0+ (heads/3.13:7be8743, Nov 15 2024, 15:20:16) [GCC 13.2.0]
<!-- gh-linked-prs -->
### Linked PRs
* gh-126968
* gh-127002
* gh-127003
* gh-127517
<!-- /gh-linked-prs -->
| b3687ad454c4ac54c8599a10f3ace8a13ca48915 | d6b3e78504b3168c432b20002dbcf8ec9a435e61 |
python/cpython | python__cpython-126865 | # Add freelist for compact int objects
# Feature or enhancement
### Proposal:
By adding a freelist for compact int objects we can improve performance. More details in the draft PR.
### Has this already been discussed elsewhere?
No response given
### Links to previous discussion of this feature:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-126865
* gh-128181
<!-- /gh-linked-prs -->
| 5fc6bb2754a25157575efc0b37da78c629fea46e | 9b4bbf4401291636e5db90511a0548fffb23a505 |
python/cpython | python__cpython-127523 | # Possible overflow in typeobject.c:tail_contains
# Bug report
### Bug description:
[whence+1](https://github.com/python/cpython/blob/c0f045f7fd3bb7ccf9828f4bfad55347d097fd41/Objects/typeobject.c#L2867C14-L2867C22) could lead to overflow for large values of _whence_. I think [changing type from](https://github.com/python/cpython/blob/c0f045f7fd3bb7ccf9828f4bfad55347d097fd41/Objects/typeobject.c#L2986C5-L2986C44) _int_ to _Py_ssize_t_ could fix the problem (_remain_ is [input parameter](https://github.com/python/cpython/blob/c0f045f7fd3bb7ccf9828f4bfad55347d097fd41/Objects/typeobject.c#L3016)):
```c
static int
pmerge(PyObject *acc, PyObject **to_merge, Py_ssize_t to_merge_size)
{
...
remain = PyMem_New(Py_ssize_t, to_merge_size);
```
### CPython versions tested on:
3.11
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-127523
* gh-128699
* gh-128700
<!-- /gh-linked-prs -->
| 2fcdc8488c32d18f4567f797094068a994777f16 | c1417487e98e270d614965ed78ff9439044b65a6 |
python/cpython | python__cpython-126880 | # Provide visualizations of executor graphs
# Feature or enhancement
### Proposal:
Visualizations can help us understand what is going on with executors. How they link to each other, where they drop to tier 1, etc, etc.
### Has this already been discussed elsewhere?
This is a minor feature, which does not need previous discussion elsewhere
### Links to previous discussion of this feature:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-126880
<!-- /gh-linked-prs -->
| e62e1ca4553dbcf9d7f89be24bebcbd9213f9ae5 | 5fc6bb2754a25157575efc0b37da78c629fea46e |
python/cpython | python__cpython-127755 | # Add option for redirecting stdout and stderr to the system log
# Feature or enhancement
### Proposal:
Python currently assumes that stdout and stderr are streams that are universally available. This is true for most systems.
However, on iOS, stdout/stderr content is only really visible while in Xcode. When an app is running in the wild, the expectation is that the app will use the system log for all output needs.
On iOS, stdout and stderr should be automatically directed to the system log (with appropriate log levels). This matches what Android does by default (see #118063). Existing iOS apps in the wild need to use a shim such as [std-nslog](https://pypi.org/project/std-nslog/) to ensure that stdout/err content can be loaded at runtime. This also complicates the iOS testing story, as stdout/stderr content is only visible at the end of a test run, rather than being streamable content as the test runs.
As iOS and macOS share an underlying logging framework, it should also be possible to have this functionality on macOS as well (on an opt-in basis when the Python interpreter is configured). This would be useful for Python code that has been embedded into a native macOS GUI app. At present, stdout and stderr are essentially inaccessible for apps of this type.
### Has this already been discussed elsewhere?
This is a minor feature, which does not need previous discussion elsewhere
### Links to previous discussion of this feature:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-127755
* gh-127806
<!-- /gh-linked-prs -->
| 51216857ca8283f5b41c8cf9874238da56da4968 | 035f512046337e64a018d11fdaa3b21758625291 |
python/cpython | python__cpython-126808 | # pygettext: false positives when extracting messages
# Bug report
### Bug description:
Originally came up here: https://github.com/python/cpython/pull/126755#issuecomment-2471298923
To summarize, pygettext produces a warning for this code:
```
def _(x):
pass
```
The reason is that it confuses the function definition with a function call. I have a PR ready.
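For illustration, an AST-based extractor naturally avoids this confusion, since a `def` produces a `FunctionDef` node rather than a `Call` (pygettext itself is token-based; this sketch only demonstrates the distinction):

```python
import ast


def extract_gettext_calls(source, keywords=("_",)):
    """Collect string arguments of true calls to gettext keywords,
    so `def _(x): ...` is never mistaken for a call."""
    messages = []
    for node in ast.walk(ast.parse(source)):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id in keywords
                and node.args
                and isinstance(node.args[0], ast.Constant)
                and isinstance(node.args[0].value, str)):
            messages.append(node.args[0].value)
    return messages
```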
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-126808
* gh-126846
* gh-126847
<!-- /gh-linked-prs -->
| 9a456383bed52010b90bd491277ea855626a7bba | cae9d9d20f61cdbde0765efa340b6b596c31b67f |
python/cpython | python__cpython-126816 | # Use a higher tier-up threshold for JIT code
Our current tier-up threshold is 16, which was chosen a while ago because:
- in theory, it gives some of our 16-bit branch counters time to stabilize
- it seemed to work fine in practice
It turns out that we're leaving significant performance and memory improvements on the table by not using higher thresholds. Here are the results of some experiments I ran:
| warmup | speedup | memory | traces created | traces executed | uops executed |
| -------------------------------------------------------------------------------------------------------------------------------- | ------- | ------ | -------------- | --------------- | ------------- |
| [64](https://github.com/faster-cpython/benchmarking-public/blob/main/results/bm-20241108-3.14.0a1%2B-48ade84-JIT/README.md) | +0.3% | -1.2% | -8.0% | -0.1% | +0.2% |
| [256](https://github.com/faster-cpython/benchmarking-public/blob/main/results/bm-20241110-3.14.0a1%2B-29895e9-JIT/README.md) | +1.0% | -2.6% | -22.0% | -0.7% | -1.3% |
| [1024](https://github.com/faster-cpython/benchmarking-public/blob/main/results/bm-20241111-3.14.0a1%2B-aaa9ae0-JIT/README.md) | +1.2% | -3.2% | -38.6% | -3.0% | -1.5% |
| [2048](https://github.com/faster-cpython/benchmarking-public/blob/main/results/bm-20241112-3.14.0a1%2B-f863657-JIT/README.md) | +1.1% | -3.3% | -44.9% | -12.4% | -3.8% |
| [4096](https://github.com/faster-cpython/benchmarking-public/blob/main/results/bm-20241111-3.14.0a1%2B-a2be6fd-JIT/README.md) | +2.1% | -3.6% | -52.2% | -11.2% | -3.1% |
| [8192](https://github.com/faster-cpython/benchmarking-public/blob/main/results/bm-20241112-3.14.0a1%2B-1236a9d-JIT/README.md)\* | +2.0% | -3.4% | -59.2% | -12.8% | -3.1% |
| [16384](https://github.com/faster-cpython/benchmarking-public/blob/main/results/bm-20241111-3.14.0a1%2B-1723e00-JIT/README.md)\* | +2.0% | -3.6% | -65.2% | -14.5% | -4.7% |
| [32768](https://github.com/faster-cpython/benchmarking-public/blob/main/results/bm-20241112-3.14.0a1%2B-c561277-JIT/README.md)\* | +1.8% | -3.8% | -73.1% | -18.3% | -7.1% |
| [65536](https://github.com/faster-cpython/benchmarking-public/blob/main/results/bm-20241112-3.14.0a1%2B-c17f578-JIT/README.md)\* | +1.4% | -3.9% | -79.7% | -21.9% | -9.2% |
\* For warmups above 4096, exponential backoff is disabled.
Based on these numbers, I think 4096 as a new threshold makes sense (2% faster and 3% less memory without significant hits to the amount of work we actually do in JIT code). I'll open a PR.
My next steps will be conducting similar experiments with higher side-exit warmup values, and then lastly with different `JIT_CLEANUP_THRESHOLD` values.
<!-- gh-linked-prs -->
### Linked PRs
* gh-126816
* gh-127155
<!-- /gh-linked-prs -->
| 4cd10762b06ec57252e3c7373e74240b4d0c5ed8 | 933f21c3c92f758fb0615d6a4cca10249c686ae7 |
python/cpython | python__cpython-126801 | # Fix `ntpath.normpath()` for drive-relative paths
# Bug report
### Bug description:
from @eryksun (https://github.com/python/cpython/pull/117855#discussion_r1581286073):
> Note that the behavior for normalizing drive-relative paths is currently wrong in this case on Windows:
>
> ```
> >>> nt._path_normpath('C:./spam')
> 'C:.\\spam'
> ```
>
> That's the result that we want here when returning an explicit current directory for _internal use_, but that's not the expected behavior for `normpath()`. The fallback implementation that's used on POSIX gets it right:
>
> ```
> >>> ntpath.normpath('C:./spam')
> 'C:spam'
> ```
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Windows
<!-- gh-linked-prs -->
### Linked PRs
* gh-126801
* gh-127103
<!-- /gh-linked-prs -->
| 60ec854bc297e04718fe13db3605d0465bf8badb | 0c5556fcb7315f26aa4b192e341cb2a72bb78f41 |
python/cpython | python__cpython-126776 | # linecache.checkcache() is not threadsafe or GC finalizer re-entrancy safe
# Bug report
### Bug description:
```python
import linecache
import weakref
def gen_func(n):
func_code = """
def func():
pass
"""
g = {}
exec(func_code, g, g)
func = g['func']
filename = f"<generated-{n}>"
linecache.cache[filename] = (len(func_code), None, func_code.splitlines(True), filename)
def cleanup_linecache(filename):
def _cleanup():
if filename in linecache.cache:
del linecache.cache[filename]
return _cleanup
weakref.finalize(func, cleanup_linecache(filename))
return func
def main():
n = 0
while True:
func = gen_func(n)
del func
linecache.checkcache()
n += 1
if n % 100000 == 0:
print(n)
if __name__ == '__main__':
main()
```
This crashes with a KeyError every time for me on 3.12 and 3.13, but some people report that it never crashes on CPython:
```pytb
✘ graingert@conscientious ~/projects/weakref-func-cycle-never-gc main python demo.py
Traceback (most recent call last):
File "/home/graingert/projects/weakref-func-cycle-never-gc/demo.py", line 38, in <module>
main()
File "/home/graingert/projects/weakref-func-cycle-never-gc/demo.py", line 32, in main
linecache.checkcache()
File "/usr/lib/python3.12/linecache.py", line 64, in checkcache
entry = cache[filename]
~~~~~^^^^^^^^^^
KeyError: '<generated-147>'
✘ graingert@conscientious ~/projects/weakref-func-cycle-never-gc main phyt
✘ graingert@conscientious ~/projects/weakref-func-cycle-never-gc main python3.13 demo.py
Traceback (most recent call last):
File "/home/graingert/projects/weakref-func-cycle-never-gc/demo.py", line 38, in <module>
main()
~~~~^^
File "/home/graingert/projects/weakref-func-cycle-never-gc/demo.py", line 32, in main
linecache.checkcache()
~~~~~~~~~~~~~~~~~~~~^^
File "/usr/lib/python3.13/linecache.py", line 59, in checkcache
entry = cache[filename]
~~~~~^^^^^^^^^^
KeyError: '<generated-2637>'
```
The script seems to run "forever" on Python 3.9, 3.10, 3.11 and 3.14.0a1+ (heads/main:ba088c8f9cf).
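A defensive pattern that tolerates the race is to iterate over a snapshot of the keys and delete with `dict.pop(key, None)`, so a GC finalizer removing an entry mid-iteration cannot raise `KeyError` (sketch; `drop_stale_entries` and its `predicate` argument are hypothetical, not linecache API):

```python
import linecache


def drop_stale_entries(predicate):
    """Remove cache entries selected by `predicate`, tolerating
    concurrent deletion by another thread or a finalizer."""
    for filename in list(linecache.cache):  # snapshot of keys
        if predicate(filename):
            linecache.cache.pop(filename, None)  # never raises KeyError
```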
### CPython versions tested on:
3.12, 3.13
### Operating systems tested on:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-126776
* gh-127778
* gh-127779
<!-- /gh-linked-prs -->
# important meta issue:
This reproducer absolutely should not reproduce on cpython!!
There does seem to be another issue, because this function should be deleted instantly because its refcount drops to 0, and never run the finalizer during the linecache.checkcache call
| 2233c303e476496fc4c85a29a1429a7e4b1f707b | 3983527c3a6b389e373a233e514919555853ccb3 |
python/cpython | python__cpython-126767 | # urllib.request.url2pathname() mishandles empty authority sections (mostly)
# Bug report
### Bug description:
File URIs that start with 3+ slashes should be parsed as having an empty authority section ([ref](https://datatracker.ietf.org/doc/html/rfc8089#section-3)), but `urllib.request.url2pathname()` incorrectly retains the slashes introducing the authority section. This means it can't properly parse the most common form of POSIX absolute file URIs (e.g. `file:///etc/hosts`).
On Windows, `url2pathname()` correctly discards slashes before DOS drives (so `file:///c:/foo` is parsed as `c:\foo`), and before old-fashioned UNC URIs (so `file:////server/share` is parsed as `\\server\share`), but incorrectly retains slashes if a rooted, driveless path is decoded (so `file:///foo/bar` is decoded as `\\\foo\bar` instead of `\foo\bar`). This is much less of a problem because such paths are rare on Windows.
```python
>>> from urllib.request import url2pathname
>>> url2pathname('///etc/hosts')
'///etc/hosts' # expected: '/etc/hosts'
```
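A minimal sketch of the RFC 8089 rule being asked for (the `normalize_uri_path` helper is hypothetical, not the actual urllib change): a URI path starting with `//` carries a possibly empty authority, and an empty (or `localhost`) authority should be dropped so the local path keeps a single leading slash.

```python
def normalize_uri_path(path):
    """Strip an empty or 'localhost' authority from a file URI path,
    so '///etc/hosts' becomes '/etc/hosts'."""
    if path.startswith('//'):
        authority, sep, rest = path[2:].partition('/')
        if authority in ('', 'localhost'):
            return sep + rest
    return path
```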
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Linux, Windows
<!-- gh-linked-prs -->
### Linked PRs
* gh-126767
* gh-126836
* gh-126837
* gh-127129
* gh-127130
* gh-127131
<!-- /gh-linked-prs -->
| cae9d9d20f61cdbde0765efa340b6b596c31b67f | 47cbf038850852cdcbe7a404ed7c64542340d58a |
python/cpython | python__cpython-126758 | # Indentation fix to one line of the source code?
Hi, is there a reason that this [line](https://github.com/python/cpython/blob/03924b5deeb766fabd53ced28ba707e4dd08fb60/Python/bltinmodule.c#L3339) is indented differently?
If not, will it be ok if I open a PR to make it align with others?
(Not really a big deal...)
<!-- gh-linked-prs -->
### Linked PRs
* gh-126758
<!-- /gh-linked-prs -->
| 8cc6e5c8751139e86b2a9fa5228795e6c5caaff9 | 4b00aba42e4d9440d22e399ec2122fe8601bbe54 |
python/cpython | python__cpython-126746 | # Allow to set non-UTF8 exception messages
# Feature or enhancement
### Proposal:
This is a follow-up to the discussion in https://github.com/python/cpython/pull/126555#discussion_r1837812348.
`dlerror()` may return non-UTF-8 messages, possibly translated. We should be able to set exception messages according to the current locale. To that end, we'll expose some internal helper:
```C
extern void
_PyErr_SetLocaleStringTstate(PyThreadState *tstate, PyObject *exception, const char *string);
extern void
_PyErr_SetLocaleString(PyObject *exception, const char *string);
```
cc @ZeroIntensity @encukou
For now, both functions would be only declared as `extern` and not part of the public C API.
<!-- gh-linked-prs -->
### Linked PRs
* gh-126746
* gh-128023
* gh-128027
* gh-128034
* gh-128056
* gh-128057
* gh-128025
* gh-128059
* gh-128060
<!-- /gh-linked-prs -->
| 7303f06846b69016a075bca7ad7c6055f29ad024 | b9a492b809d8765ee365a5dd3c6ba4e5130a80af |
python/cpython | python__cpython-126732 | # Outdated `pprint.pp` doc example about pypi `sampleproject` version
# Documentation
Currently, the examples in `pprint` that demonstrate the `pprint.pp` function are outdated. The last edit was in 2018, six years ago, and the JSON link in the example now points to `sampleproject` (see https://github.com/python/cpython/pull/10201). The documentation currently references version 1.2 of `sampleproject`, which was released in 2015 (https://pypi.org/project/sampleproject/#history). The latest version is now 4.0, released last week. Thus, the information in the documentation no longer matches the current state, and I have submitted a PR to address this. The information has varied widely over the past nine years.
A possible question might be: do we need to frequently update this example? The answer is no: it changes infrequently and thus won't increase maintenance overhead.
<!-- gh-linked-prs -->
### Linked PRs
* gh-126732
* gh-126818
* gh-126819
<!-- /gh-linked-prs -->
| 6a93a1adbb56a64ec6d20e8aab911439998502c9 | 4ae50615d2beef0f93d904ccbce44bbf7500b94a |
python/cpython | python__cpython-126730 | # locale.nl_langinfo(locale.ERA) does not work for past eras
# Bug report
According to the Posix specification (https://pubs.opengroup.org/onlinepubs/9799919799/basedefs/V1_chap07.html#tag_07_03_05_02), `nl_langinfo(ERA)` should return a string containing semicolon-separated era description segments. But Glibc uses NUL instead of a semicolon as a separator. As a result, `locale.nl_langinfo(locale.ERA)` in Python only returns the first segment, corresponding to the last (current) era. For example, in a Japanese locale the result cannot be used for dates before 2020:
```pycon
>>> import locale
>>> locale.setlocale(locale.LC_ALL, 'ja_JP')
'ja_JP'
>>> locale.nl_langinfo(locale.ERA)
'+:2:2020/01/01:+*:令和:%EC%Ey年'
```
This issue is similar to #124969, but at least the result can be used for the current date.
cc @methane, @kulikjak.
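For reference, a sketch of parsing the full POSIX-format string once all segments are returned. The second segment below is an invented placeholder, not real glibc data; the field layout follows the POSIX spec (`direction:offset:start_date:end_date:era_name:era_format`):

```python
# Parse a semicolon-separated ERA string into its per-era segments.
era = ('+:2:2020/01/01:+*:令和:%EC%Ey年;'
       '+:2:1990/01/01:2019/12/31:平成:%EC%Ey年')

segments = []
for seg in era.split(';'):
    direction, offset, start, end, name, fmt = seg.split(':')
    segments.append({'direction': direction, 'offset': int(offset),
                     'start': start, 'end': end,
                     'name': name, 'format': fmt})
```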
<!-- gh-linked-prs -->
### Linked PRs
* gh-126730
* gh-127097
* gh-127098
* gh-127327
* gh-127645
* gh-127646
<!-- /gh-linked-prs -->
| 4803cd0244847f286641c85591fda08b513cea52 | eaf217108226633c03cc5c4c90f0b6e4587c8803 |
python/cpython | python__cpython-127741 | # Missing explanation about math.fmod
# Documentation
This issue was first raised in a [PR](https://github.com/python/cpython/pull/126337), but per this [comment](https://github.com/python/cpython/pull/126337#discussion_r1835301380) it was preferred to make the change independently. This issue was created to track that modification so it can be added later.
This is the proposed modification:
```
+ Return the remainder of division ``x / y``,
+ as defined by the platform C library function ``fmod(x, y)``.
+
+ Note that the
- Return ``fmod(x, y)``, as defined by the platform C library. Note that the
```
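For context, the sign behavior that makes the C-library wording worth spelling out: `fmod` takes the sign of `x`, while Python's `%` operator takes the sign of `y`.

```python
import math

# C-library fmod: result has the sign of x.
assert math.fmod(-1.0, 3.0) == -1.0
assert math.fmod(1.0, -3.0) == 1.0

# Python's % operator: result has the sign of y.
assert -1.0 % 3.0 == 2.0
assert 1.0 % -3.0 == -2.0
```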
<!-- gh-linked-prs -->
### Linked PRs
* gh-127741
* gh-128491
* gh-128492
<!-- /gh-linked-prs -->
| f28d471fbe99f9eaac05d60ed40da47b0b56fe86 | fd94c6a8032676d0659aa9e38cdaa7c17093119c |
python/cpython | python__cpython-126706 | # Make os.Pathlike more similar to a Protocol
# Feature or enhancement
### Proposal:
https://github.com/python/cpython/issues/83066 considered making `os.PathLike` a protocol, but rejected the idea due to not wanting to import `typing` into `os.py`.
As an alternative, I propose making it more protocol-like in two ways:
1) add it to typing._PROTO_ALLOWLIST so it can be used as a base class in other protocols
2) use `class PathLike(metaclass=abc.ABCMeta)` instead of `class PathLike(abc.ABC)`. This aligns with the other protocol-like ABCs in `collections.abc`. Actual protocols do have that metaclass but don't have the `abc.ABC` base class. Doing this makes it a little smoother for typeshed to define the class as `class PathLike(Protocol)` in the stubs.
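For context, `os.PathLike` already behaves structurally at runtime via its `__subclasshook__` (any class with `__fspath__` passes `isinstance` checks without inheritance or registration), so the proposal mostly affects static typing:

```python
import os

class MyPath:
    def __fspath__(self):
        return "/tmp/example"

# No inheritance from os.PathLike and no ABC registration needed:
assert isinstance(MyPath(), os.PathLike)
assert os.fspath(MyPath()) == "/tmp/example"
```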
### Has this already been discussed elsewhere?
This is a minor feature, which does not need previous discussion elsewhere
### Links to previous discussion of this feature:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-126706
<!-- /gh-linked-prs -->
| a83472f49b958c55befd82c871be26afbf500306 | 73cf0690997647c9801d9ebba43305a868d3b776 |
python/cpython | python__cpython-126912 | # pygettext: Add support for multi-argument gettext functions
# Feature or enhancement
### Proposal:
This came up while adding translation tests for various stdlib modules: https://github.com/python/cpython/issues/124295 and https://github.com/python/cpython/pull/126698. More specifically: https://github.com/python/cpython/issues/124295#issuecomment-2440904998
pygettext currently only supports single-argument `gettext` and its aliases (can be controlled via CLI arguments). `pgettext`, `ngettext`, etc., are not supported. pygettext should support these by default. I'm gonna take a stab at this hopefully this week :)
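For reference, these are the multi-argument call forms in question, shown with their runtime fallbacks on `gettext.NullTranslations` (signatures per the `gettext` module API):

```python
import gettext

t = gettext.NullTranslations()

# ngettext(msgid1, msgid2, n): plural selection.
assert t.ngettext("one file", "{n} files", 1) == "one file"
assert t.ngettext("one file", "{n} files", 2) == "{n} files"

# pgettext(context, message): context-disambiguated message.
assert t.pgettext("menu", "Open") == "Open"

# npgettext(context, msgid1, msgid2, n): context plus plural.
assert t.npgettext("menu", "{n} item", "{n} items", 2) == "{n} items"
```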
### Has this already been discussed elsewhere?
This is a minor feature, which does not need previous discussion elsewhere
### Links to previous discussion of this feature:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-126912
<!-- /gh-linked-prs -->
| 0a1944cda8504ba0478a51075eba540576570336 | f83ca6962af973fff6a3124f4bd3d45fea4dd5b8 |
python/cpython | python__cpython-126702 | # Protocols can't inherit from AsyncIterator
# Bug report
### Bug description:
AsyncIterator was removed from `_PROTO_ALLOWLIST` by https://github.com/python/cpython/pull/15647, without any discussion. It looks like an accident to me.
```python
from typing import AsyncIterator, Protocol
class MyProto(AsyncIterator, Protocol): ...
```
result:
```
TypeError: Protocols can only inherit from other protocols, got <class 'collections.abc.AsyncIterator'>
```
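Until this is fixed, one workaround is to declare the async-iterator methods on the protocol directly instead of inheriting from `collections.abc.AsyncIterator` (sketch; names are illustrative):

```python
from typing import Protocol, TypeVar

T_co = TypeVar("T_co", covariant=True)

class ProtoAsyncIterator(Protocol[T_co]):
    def __aiter__(self): ...
    def __anext__(self): ...
```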
### CPython versions tested on:
3.12
### Operating systems tested on:
macOS
<!-- gh-linked-prs -->
### Linked PRs
* gh-126702
* gh-126761
* gh-126762
<!-- /gh-linked-prs -->
| feb3e0b19cb03f06364a3f5e970f0861b8883d1c | 0052a8c638518447baf39ae02b6ff6a309efd4ce |
python/cpython | python__cpython-126787 | # Emscripten support: Remove `--with-emscripten-target`
It's possible to support both node and the browser with a single configuration; for instance, Pyodide does this. I think this is preferable, since it means fewer options to maintain test coverage for (or to leave uncovered and be in doubt about whether they work). Requires refactoring the way the Python.html example and node runner work.
cc @freakboy3742
<!-- gh-linked-prs -->
### Linked PRs
* gh-126787
<!-- /gh-linked-prs -->
| 544b001b233ac57dfce17587ffbd10a70abe3ab0 | d6bcc154e93a0a20ab97187d3e8b726fffb14f8f |
python/cpython | python__cpython-126692 | # CPython should not assume that pthread_self returns the same value in fork parent and child
Summarized from https://github.com/SerenityOS/serenity/issues/25263 by request of @colesbury:
It seems that at least Python 3.13 relies on pthread_self returning the same value in both the parent and the child process.
[PyThread_get_thread_ident_ex](https://github.com/python/cpython/blob/3.13/Python/thread_pthread.h#L349)
POSIX doesn't seem to mention if the value returned by pthread_self should be inherited by the child process.
[pthread_self](https://pubs.opengroup.org/onlinepubs/9799919799/functions/pthread_self.html)
[fork](https://pubs.opengroup.org/onlinepubs/9799919799/functions/fork.html)
(It is the case on Linux, Hurd, FreeBSD, OpenBSD, Cygwin. It isn't on SerenityOS, and also not on Solaris 9/HP-UX 11 per https://bugs.python.org/issue7242.
It looks like the code that broke things is from commit https://github.com/python/cpython/commit/e21057b99967eb5323320e6d1121955e0cd2985e, added in https://github.com/python/cpython/pull/118523 as part of https://github.com/python/cpython/issues/117657.
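The assumption can be seen at the Python level too, since `threading.get_ident()` is backed by `PyThread_get_thread_ident()` (illustrative sketch; on glibc/Linux the value survives `fork()`, but POSIX does not guarantee it, which is what bites SerenityOS):

```python
import os
import threading

parent_ident = threading.get_ident()
pid = os.fork()
if pid == 0:
    # In the child: compare against the ident recorded in the parent.
    same = threading.get_ident() == parent_ident
    os._exit(0 if same else 1)
_, status = os.waitpid(pid, 0)
matches_on_this_platform = (os.WEXITSTATUS(status) == 0)
```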
<!-- gh-linked-prs -->
### Linked PRs
* gh-126692
* gh-126765
* gh-127121
<!-- /gh-linked-prs -->
| 5610860840aa71b186fc5639211dd268b817d65f | bf224bd7cef5d24eaff35945ebe7ffe14df7710f |
python/cpython | python__cpython-126665 | # Use `else` instead of `finally` in "The with statement" documentation.
# Documentation
In [8.5 The `with` statement](https://docs.python.org/3.13/reference/compound_stmts.html#the-with-statement), we can use `else` instead of `finally`.
```python
manager = (EXPRESSION)
enter = type(manager).__enter__
exit = type(manager).__exit__
value = enter(manager)
hit_except = False
try:
TARGET = value
SUITE
except:
hit_except = True
if not exit(manager, *sys.exc_info()):
raise
finally:
if not hit_except:
exit(manager, None, None, None)
```
is semantically equivalent to:
```python
manager = (EXPRESSION)
enter = type(manager).__enter__
exit = type(manager).__exit__
value = enter(manager)
try:
TARGET = value
SUITE
except:
if not exit(manager, *sys.exc_info()):
raise
else:
exit(manager, None, None, None)
```
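The equivalence can be checked with a toy context manager run through the `else` expansion and the real `with` statement (illustrative sketch, not the reference implementation):

```python
import sys

class Tracker:
    """Records enter/exit calls; optionally swallows exceptions."""
    def __init__(self, swallow=False):
        self.swallow = swallow
        self.calls = []
    def __enter__(self):
        self.calls.append("enter")
        return "value"
    def __exit__(self, exc_type, exc, tb):
        self.calls.append(("exit", exc_type))
        return self.swallow

def run_with_else(manager, suite):
    # The `else`-based expansion, written as a function.
    enter = type(manager).__enter__
    exit_ = type(manager).__exit__
    value = enter(manager)
    try:
        suite(value)
    except:
        if not exit_(manager, *sys.exc_info()):
            raise
    else:
        exit_(manager, None, None, None)

# No exception: __exit__ is called with (None, None, None).
m1 = Tracker()
run_with_else(m1, lambda v: None)

# Exception swallowed by __exit__ returning True.
m2 = Tracker(swallow=True)
run_with_else(m2, lambda v: 1 / 0)

# Real with statement for comparison.
m3 = Tracker(swallow=True)
with m3 as v:
    1 / 0
```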
<!-- gh-linked-prs -->
### Linked PRs
* gh-126665
* gh-126670
* gh-126671
* gh-128169
* gh-128170
* gh-128171
<!-- /gh-linked-prs -->
| 228f275737615cc9be713a8c3f9325b359bf8aec | 8d9f52a7be5c09c0fd4423943edadaacf6d7f917 |
python/cpython | python__cpython-126678 | # `_interpreters.exec` with invalid parameters segfaults
# Crash report
### What happened?
The code below segfaults on non-debug builds and aborts on debug builds.
```python
import _interpreters
_interpreters.exec(False, "aaaa", 1)
```
The abort looks like:
```
python: ./Modules/_interpretersmodule.c:462: _run_in_interpreter: Assertion `!PyErr_Occurred()' failed.
Aborted
```
The backtrace of the segfault is:
```
Program received signal SIGSEGV, Segmentation fault.
0x00005555557c4e1c in _PyXI_ApplyError (error=0x0) at Python/crossinterp.c:1057
1057 if (error->code == _PyXI_ERR_UNCAUGHT_EXCEPTION) {
(gdb) bt
#0 0x00005555557c4e1c in _PyXI_ApplyError (error=0x0) at Python/crossinterp.c:1057
#1 0x00007ffff79db912 in _run_in_interpreter (p_excinfo=0x7fffffffd0a0, flags=1, shareables=0x555555abe9d0 <_PyRuntime+14032>,
codestrlen=<optimized out>, codestr=0x7ffff7a53358 "aaaa", interp=0x555555ad0e48 <_PyRuntime+88904>)
at ./Modules/_interpretersmodule.c:463
#2 _interp_exec (interp=interp@entry=0x555555ad0e48 <_PyRuntime+88904>, code_arg=<optimized out>,
shared_arg=0x555555abe9d0 <_PyRuntime+14032>, p_excinfo=p_excinfo@entry=0x7fffffffd0a0, self=<optimized out>)
at ./Modules/_interpretersmodule.c:950
#3 0x00007ffff79dbaa0 in interp_exec (self=<optimized out>, args=<optimized out>, kwds=<optimized out>)
at ./Modules/_interpretersmodule.c:995
#4 0x00005555556ac233 in cfunction_call (func=0x7ffff7a6d4e0, args=<optimized out>, kwargs=<optimized out>)
at Objects/methodobject.c:551
#5 0x00005555556433f0 in _PyObject_MakeTpCall (tstate=0x555555b07b20 <_PyRuntime+313376>, callable=callable@entry=0x7ffff7a6d4e0,
args=args@entry=0x7ffff7fb0080, nargs=<optimized out>, keywords=keywords@entry=0x0) at Objects/call.c:242
#6 0x0000555555643d16 in _PyObject_VectorcallTstate (kwnames=0x0, nargsf=<optimized out>, args=0x7ffff7fb0080,
callable=0x7ffff7a6d4e0, tstate=<optimized out>) at ./Include/internal/pycore_call.h:165
#7 0x00005555555d8e85 in _PyEval_EvalFrameDefault (tstate=0x555555b07b20 <_PyRuntime+313376>, frame=0x7ffff7fb0020,
throwflag=<optimized out>) at Python/generated_cases.c.h:955
#8 0x00005555557a5abc in _PyEval_EvalFrame (throwflag=0, frame=0x7ffff7fb0020, tstate=0x555555b07b20 <_PyRuntime+313376>)
at ./Include/internal/pycore_ceval.h:116
#9 _PyEval_Vector (args=0x0, argcount=0, kwnames=0x0, locals=0x7ffff7a187c0, func=0x7ffff7a033d0,
tstate=0x555555b07b20 <_PyRuntime+313376>) at Python/ceval.c:1901
#10 PyEval_EvalCode (co=co@entry=0x7ffff7a3a120, globals=globals@entry=0x7ffff7a187c0, locals=locals@entry=0x7ffff7a187c0)
at Python/ceval.c:662
#11 0x0000555555811018 in run_eval_code_obj (locals=0x7ffff7a187c0, globals=0x7ffff7a187c0, co=0x7ffff7a3a120,
tstate=0x555555b07b20 <_PyRuntime+313376>) at Python/pythonrun.c:1338
```
Found using fusil by @vstinner.
### CPython versions tested on:
3.13, CPython main branch
### Operating systems tested on:
Linux
### Output from running 'python -VV' on the command line:
Python 3.14.0a1+ (heads/main:54c63a32d06, Nov 8 2024, 19:53:10) [GCC 11.4.0]
<!-- gh-linked-prs -->
### Linked PRs
* gh-126678
* gh-126681
<!-- /gh-linked-prs -->
| 9fc2808eaf4e74a9f52f44d20a7d1110bd949d41 | 6ee542d491589b470ec7cdd353463ff9ff52d098 |
python/cpython | python__cpython-126648 | # `Doc/using/configure.rst` missing an entry for `--enable-experimental-jit` option
<!-- gh-linked-prs -->
### Linked PRs
* gh-126648
* gh-126655
<!-- /gh-linked-prs -->
| f435de6765e0327995850d719534be38c9b5ec49 | ca878b6e45f9c7934842f7bb94274e671b155e09 |
python/cpython | python__cpython-126677 | # NamedTemporaryFile doesn't issue a ResourceWarning when left unclosed on POSIX
# Bug report
### Bug description:
```python
import sys
import tempfile
def main():
tempfile.NamedTemporaryFile()
if __name__ == "__main__":
sys.exit(main())
```
when run with `python -Werror demo.py` nothing happens
if you open a file normally you get a ResourceWarning:
```python
import sys
import tempfile
def main():
open("example", "w")
if __name__ == "__main__":
sys.exit(main())
```
```
graingert@conscientious ~/projects/temp_file_resource_warnings python -Werror demo2.py
Exception ignored in: <_io.FileIO name='example' mode='wb' closefd=True>
Traceback (most recent call last):
File "/home/graingert/projects/temp_file_resource_warnings/demo2.py", line 6, in main
open("example", "w")
ResourceWarning: unclosed file <_io.TextIOWrapper name='example' mode='w' encoding='UTF-8'>
```
It appears that you do get a ResourceWarning on Windows, but I've only seen it in CI; I haven't reproduced it locally yet.
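For reference, the plain `open()` half of the comparison can be observed in-process instead of via `-Werror` (sketch; uses a throwaway temp directory):

```python
import gc
import os
import tempfile
import warnings

path = os.path.join(tempfile.mkdtemp(), "example")

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always", ResourceWarning)
    open(path, "w")   # result discarded -> finalized unclosed
    gc.collect()      # ensure finalizers ran on non-refcounting impls

got_warning = any(issubclass(w.category, ResourceWarning) for w in caught)
```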
### CPython versions tested on:
3.12
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-126677
<!-- /gh-linked-prs -->
| bad3cdefa840ff099e5e08cf88dcf6dfed7d37b8 | 2610bccfdf55bc6519808f8e1b5db2cfb03ae809 |
python/cpython | python__cpython-126625 | # Expose error code `XML_ERROR_NOT_STARTED` of Expat >=2.6.4
# Bug report
### Bug description:
The error code was introduced by a security fix (at https://github.com/libexpat/libexpat/commit/51c7019069b862e88d94ed228659e70bddd5de09), but the [XML_StopParser C-API](https://libexpat.github.io/doc/api/latest/#XML_StopParser) is not exposed through CPython yet, so it *should* not be possible to encounter such an error. In particular, exposing the error code can be considered a feature or postponed until https://github.com/python/cpython/issues/59979 is resolved.
### CPython versions tested on:
3.9, 3.10, 3.11, 3.12, 3.13, 3.14, CPython main branch
### Operating systems tested on:
Linux, macOS, Windows, Other
<!-- gh-linked-prs -->
### Linked PRs
* gh-126625
<!-- /gh-linked-prs -->
| 8e48a6edc75ca67a34924bbe54463ca913ae6e58 | a3711d1541c1b7987941b41d2247f87dae347117 |
python/cpython | python__cpython-126792 | # Please upgrade bundled Expat to 2.6.4 (e.g. for the fix to CVE-2024-50602)
# Bug report
### Bug description:
Hi! :wave:
Please upgrade bundled Expat to 2.6.4 (e.g. for the fix to CVE-2024-50602).
- GitHub release: https://github.com/libexpat/libexpat/releases/tag/R_2_6_4
- Change log: https://github.com/libexpat/libexpat/blob/R_2_6_4/expat/Changes
The CPython issue for previous 2.6.3 was #123678 and the related merged main pull request was #123689, in case you want to have a look. The Dockerfile from comment https://github.com/python/cpython/pull/123689#pullrequestreview-2280929950 could be of help with raising confidence in a bump pull request when going forward.
Thanks in advance!
### CPython versions tested on:
3.9, 3.10, 3.11, 3.12, 3.13, 3.14, CPython main branch
### Operating systems tested on:
Linux, macOS, Windows, Other
<!-- gh-linked-prs -->
### Linked PRs
* gh-126792
* gh-126796
* gh-126797
* gh-126798
* gh-126799
* gh-126800
<!-- /gh-linked-prs -->
| 3c9996909402fadc98e6ca2a64e75a71a7427352 | 8c9c6d3c1234e730c0beb2a6123e68fe98e57ede |