| repo | instance_id | problem_statement | merge_commit | base_commit |
|---|---|---|---|---|
python/cpython | python__cpython-117681 | # Need a detailed explanation of a line added to the documentation
Consider this line: "A class in an [except] clause is compatible with an exception if it is the same class or a base class thereof (but not the other way around — an except clause listing a derived class is not compatible with a base class)", which appears under Errors and Exceptions on the Python tutorial web page. Please give a clear explanation, because as written it is not understandable by programmers who are new to Python. (Screenshot added below.)
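A short illustration of the quoted rule (my own sketch, not from the tutorial): an `except` clause naming a base class handles derived exceptions, but an `except` clause naming a derived class does not handle the base class.

```python
class Base(Exception):
    pass

class Derived(Base):
    pass

# Catching the base class DOES handle a derived exception:
try:
    raise Derived("oops")
except Base:
    caught_by_base = True

# Catching the derived class does NOT handle a base exception;
# it propagates past `except Derived` to the outer handler:
try:
    try:
        raise Base("oops")
    except Derived:
        caught_by_derived = True  # never reached
except Base:
    caught_by_derived = False

print(caught_by_base, caught_by_derived)  # True False
```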

<!-- gh-linked-prs -->
### Linked PRs
* gh-117681
* gh-117700
<!-- /gh-linked-prs -->
| a05068db0cb43337d20a936d919b9d88c35d9818 | ca7591577926d13083291c3caef408116429f539 |
python/cpython | python__cpython-117595 | # Define test_search_anchor_at_beginning as a CPU-heavy test
Currently, test_search_anchor_at_beginning is required to complete within an absolute time limit.
This can be flaky in a low-CPU environment or on a slower alternative implementation like RustPython.
I would like to suggest adding `@support.requires_resource('cpu')` to the test.
https://github.com/python/cpython/blob/62aeb0ee69b06091396398de56dcb755ca3b9dc9/Lib/test/test_re.py#L2285-L2296
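The suggested decorator in place would look something like this (a sketch; the class name here is made up, and the real test body lives in `Lib/test/test_re.py`):

```python
import unittest
from test import support

class ReSearchTests(unittest.TestCase):
    # Marking the test as CPU-heavy means it only runs when the 'cpu'
    # resource is enabled (e.g. `python -m test -u cpu test_re`),
    # instead of failing on slow machines because of a wall-clock limit.
    @support.requires_resource('cpu')
    def test_search_anchor_at_beginning(self):
        pass  # timing-sensitive body elided
```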
<!-- gh-linked-prs -->
### Linked PRs
* gh-117595
* gh-117616
<!-- /gh-linked-prs -->
| 784623c63c45a4d13dfb04318c39fdb1ab790218 | fd3679025d9d0da7eb11f2810ed270c214926992 |
python/cpython | python__cpython-117589 | # Speed up `pathlib.Path.glob()` by working with strings internally
`pathlib.Path.glob()` currently generates `Path` objects for intermediate paths that might never be yielded to the user, which is slow and unnecessary. For example, a pattern like `**/*.mp3` is evaluated by creating a `Path` object for every directory visited.
There are already a few tricks employed to avoid instantiation, but it would be better if only _real results_ were converted to path objects.
<!-- gh-linked-prs -->
### Linked PRs
* gh-117589
* gh-117726
<!-- /gh-linked-prs -->
| 0cc71bde001950d3634c235e2b0d24cda6ce7dce | 6258844c27e3b5a43816e7c559089a5fe0a47123 |
python/cpython | python__cpython-117585 | # Raise TypeError for non-paths in `posixpath.relpath()`
# Bug report
### Bug description:
```python
>>> import posixpath
>>> posixpath.relpath(None)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "<frozen posixpath>", line 509, in relpath
ValueError: no path specified
```
Expected: a TypeError, as raised by `ntpath.relpath()`.
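For comparison, a quick check (my own repro): `ntpath.relpath()` passes its argument through `os.fspath()` first, so a non-path argument raises `TypeError` there, while `posixpath.relpath()` hits its empty-path check first.

```python
import ntpath
import posixpath

def error_type(func):
    # Return the name of the exception type raised for a non-path argument.
    try:
        func(None)
    except Exception as exc:
        return type(exc).__name__

print(error_type(ntpath.relpath))    # TypeError (raised by os.fspath)
print(error_type(posixpath.relpath)) # ValueError before the fix
```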
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Windows
<!-- gh-linked-prs -->
### Linked PRs
* gh-117585
* gh-119388
<!-- /gh-linked-prs -->
| 733e56ef9656dd79055acc2a3cecaf6054a45b6c | 62aeb0ee69b06091396398de56dcb755ca3b9dc9 |
python/cpython | python__cpython-117661 | # PyType_GetModuleByDef family for binary functions performance
# Feature or enhancement
### Proposal:
When implementing an extension module with module state enabled,
binary tp slot functions (e.g. `nb_add`) need two `PyType_GetModuleByDef()` calls, used like the following, to compare the given types with a heap-type object in the module state, which can be slow:
```c
static PyObject *
foo_add(PyObject *left, PyObject *right)
{
    ...
    PyObject *module = PyType_GetModuleByDef(Py_TYPE(left), &module_def);
    if (module == NULL) {
        PyErr_Clear();
        module = PyType_GetModuleByDef(Py_TYPE(right), &module_def);
    }
    ...
}
```
* 3.13.0 alpha5 `_decimal` (module state ver.)
```py
from timeit import timeit
f = lambda s: timeit(s, 'import _decimal; d = _decimal.Decimal(1)')
a = f('d + 1')
b = f('1 + d')
print('d + 1:', a)
print('1 + d:', b, b / a)
```
```
d + 1: 0.11202071857405826
1 + d: 0.49533294743326095 4.421797625818569
```
The difference mainly comes from a `TypeError` emission from `PyType_GetModuleByDef()`, so it would be nice if we could have a new function which receives two types and emits an error after checking them.
cc @encukou
### Related issue:
* #103092
<!-- gh-linked-prs -->
### Linked PRs
* gh-117661
* gh-123100
<!-- /gh-linked-prs -->
| 2c451489122d539080c8d674b391dedc1dedcb53 | f180b31e7629d36265fa36f1560365358b4fd47c |
python/cpython | python__cpython-117567 | # ipaddress.IPv6Address.is_loopback broken for IPv4-mapped addresses
# Bug report
### Bug description:
While properties like `is_private` account for IPv4-mapped IPv6 addresses, such as for example:
```python
>>> ipaddress.ip_address("192.168.0.1").is_private
True
>>> ipaddress.ip_address("::ffff:192.168.0.1").is_private
True
```
...the same doesn't currently apply to the `is_loopback` property:
```python
>>> ipaddress.ip_address("127.0.0.1").is_loopback
True
>>> ipaddress.ip_address("::ffff:127.0.0.1").is_loopback
False
```
At minimum, this inconsistency between different properties is counter-intuitive. Moreover, `::ffff:127.0.0.0/104` is for all intents and purposes a loopback range, and should be treated as such.
For the record, this will now match the behavior of other languages such as Rust, Go and .NET, cf. rust-lang/rust#69772.
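Until the fix, a workaround (my own sketch) is to unwrap mapped addresses via the documented `IPv6Address.ipv4_mapped` attribute before checking the property:

```python
import ipaddress

def is_loopback(addr_str):
    # Treat an IPv4-mapped IPv6 address like its embedded IPv4 address.
    addr = ipaddress.ip_address(addr_str)
    mapped = getattr(addr, "ipv4_mapped", None)  # None for plain IPv4/IPv6
    if mapped is not None:
        return mapped.is_loopback
    return addr.is_loopback

print(is_loopback("127.0.0.1"))         # True
print(is_loopback("::ffff:127.0.0.1"))  # True
print(is_loopback("::1"))               # True
print(is_loopback("8.8.8.8"))           # False
```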
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-117567
* gh-118391
<!-- /gh-linked-prs -->
| fb7f79b4da35b75cdc82ff3cf20816d2bf93d416 | eb20a7d12c4b2ab7931074843f8602a48b5b07bd |
python/cpython | python__cpython-117553 | # Avoid indefinite hang in `test.test_logging.HTTPHandlerTest.test_output` if request fails
This unit test logs to an HTTP backend. If for some reason the request fails, then `self.handled.wait()` will block indefinitely. In practice that means until the test runner times out, which can take over an hour.
We should add a timeout to the `self.handled.wait()` call and fail the test quickly if the timeout expires.
https://github.com/python/cpython/blob/42205143f8b3211d1392f1d9f2cf6717bdaa5b47/Lib/test/test_logging.py#L2189-L2194
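The pattern being suggested, in isolation (hypothetical names; the real change would go in `HTTPHandlerTest.test_output`): `threading.Event.wait(timeout)` returns `False` on timeout, which lets the test fail fast instead of hanging.

```python
import threading

handled = threading.Event()  # stand-in for self.handled

# Blocks at most 1 second; returns False because the event was never set.
ok = handled.wait(timeout=1.0)
print(ok)  # False

# In the test this would become something like:
#     self.assertTrue(self.handled.wait(support.LONG_TIMEOUT),
#                     "timed out waiting for the HTTP request")
```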
<!-- gh-linked-prs -->
### Linked PRs
* gh-117553
<!-- /gh-linked-prs -->
| 59864edd572b5c0cc3be58087a9ea3a700226146 | ca62ffd1a5ef41401abceddfd171c76c68825a35 |
python/cpython | python__cpython-117551 | # New tier 2 counters break some C extensions due to a field-order mismatch
# Bug report
### Bug description:
@tacaswell reported [here](https://github.com/python/cpython/pull/117144#issuecomment-2037824706):
#117144 appears to have broken building scipy
```
FAILED: scipy/special/_specfun.cpython-313-x86_64-linux-gnu.so.p/meson-generated__specfun.cpp.o
ccache c++ -Iscipy/special/_specfun.cpython-313-x86_64-linux-gnu.so.p -Iscipy/special -I../scipy/special -I../../../../home/tcaswell/.virtualenvs/py313/lib/python3.13/site-packages/numpy/_core/include -I/home/tcaswell/.pybuild/py313/include/python3.13 -fvisibility=hidden -fvisibility-inlines-hidden -fdiagnostics-color=always -DNDEBUG -D_FILE_OFFSET_BITS=64 -Wall -Winvalid-pch -std=c++17 -O3 -fpermissive -fPIC -DNPY_NO_DEPRECATED_API=NPY_1_9_API_VERSION -MD -MQ scipy/special/_specfun.cpython-313-x86_64-linux-gnu.so.p/meson-generated__specfun.cpp.o -MF scipy/special/_specfun.cpython-313-x86_64-linux-gnu.so.p/meson-generated__specfun.cpp.o.d -o scipy/special/_specfun.cpython-313-x86_64-linux-gnu.so.p/meson-generated__specfun.cpp.o -c scipy/special/_specfun.cpython-313-x86_64-linux-gnu.so.p/_specfun.cpp
In file included from /home/tcaswell/.pybuild/py313/include/python3.13/internal/pycore_code.h:461,
from /home/tcaswell/.pybuild/py313/include/python3.13/internal/pycore_frame.h:13,
from scipy/special/_specfun.cpython-313-x86_64-linux-gnu.so.p/_specfun.cpp:14948:
/home/tcaswell/.pybuild/py313/include/python3.13/internal/pycore_backoff.h: In function ‘_Py_BackoffCounter make_backoff_counter(uint16_t, uint16_t)’:
/home/tcaswell/.pybuild/py313/include/python3.13/internal/pycore_backoff.h:47:67: error: designator order for field ‘_Py_BackoffCounter::<unnamed union>::<unnamed struct>::backoff’ does not match declaration order in ‘_Py_BackoffCounter::<unnamed union>::<unnamed struct>’
47 | return (_Py_BackoffCounter){.value = value, .backoff = backoff};
|
```
I confirmed scipy builds with https://github.com/python/cpython/commit/63bbe77d9bb2be4db83ed09b96dd22f2a44ef55b
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-117551
* gh-118580
<!-- /gh-linked-prs -->
| b5e60918afa53dfd59ad26a9f4b5207a9b304bc1 | 63998a1347f3970ea4c69c881db69fc72b16a54c |
python/cpython | python__cpython-117548 | # mimalloc compile error on OpenBSD
# Bug report
```
In file included from Objects/mimalloc/prim/prim.c:22,
                 from Objects/mimalloc/static.c:37,
                 from Objects/obmalloc.c:22:
Objects/mimalloc/prim/unix/prim.c: In function 'mi_prim_open':
Objects/mimalloc/prim/unix/prim.c:67:10: error: implicit declaration of function 'syscall'; did you mean 'sysconf'? [-Werror=implicit-function-declaration]
   return syscall(SYS_open,fpath,open_flags,0);
          ^~~~~~~
          sysconf
```
Reported in https://discuss.python.org/t/compiling-python3-13-alpha-release-on-openbsd/50273
<!-- gh-linked-prs -->
### Linked PRs
* gh-117548
<!-- /gh-linked-prs -->
| 2067da25796ea3254d0edf61a39bcc0326c4f71d | df7317904849a41d51db39d92c5d431a18e22637 |
python/cpython | python__cpython-117568 | # `os.path.realpath('loop/../link')` does not resolve link
If `os.path.realpath()` encounters a symlink loop on posix, it prematurely stops querying paths and resolving symlink targets. This differs from coreutils `realpath`:
```shell
$ cd /tmp
$ ln -s loop loop
$ ln -s target link
$ realpath -m loop/../link
/tmp/target
$ python -c 'import os.path; print(os.path.realpath("loop/../link"))'
/tmp/link
```
It also differs from how `lstat()` errors are handled:
```shell
$ realpath -m missing/../link
/tmp/target
$ python -c 'import os.path; print(os.path.realpath("missing/../link"))'
/tmp/target
```
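A self-contained repro of the first case (my own sketch, using a temporary directory instead of `/tmp`):

```python
import os
import tempfile

with tempfile.TemporaryDirectory() as tmp:
    os.symlink("loop", os.path.join(tmp, "loop"))    # self-referential symlink loop
    os.symlink("target", os.path.join(tmp, "link"))  # dangling link to 'target'
    result = os.path.realpath(os.path.join(tmp, "loop/../link"))
    # coreutils `realpath -m` resolves this to .../target;
    # the buggy realpath() stops early and returns .../link instead.
    print(os.path.basename(result))
```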
<!-- gh-linked-prs -->
### Linked PRs
* gh-117568
<!-- /gh-linked-prs -->
| 630df37116b1c5b381984c547ef9d23792ceb464 | 6bc0b33a91713ee62fd1860d28b19cb620c45971 |
python/cpython | python__cpython-117543 | # ./Modules/_datetimemodule.c:290: days_before_year: Assertion `year >= 1' failed.
# Crash report
### What happened?
Python terminates with core dumped on the following test:
```python
import datetime
y = datetime.datetime.fromisoformat('0000W25')
print(y)
```
The test result:
```
python: ./Modules/_datetimemodule.c:290: days_before_year: Assertion `year >= 1' failed.
Aborted (core dumped)
```
The issue happens due to a lack of input-parameter checking in `iso_to_ymd()`.
Validating the `iso_year` input parameter would fix the issue.
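A sketch of the missing validation in Python terms (the real fix belongs in C, inside `iso_to_ymd()`; the helper name here is made up):

```python
import datetime

def check_iso_year(iso_year):
    # Reject years outside the range the datetime module supports,
    # mirroring the guard that iso_to_ymd() should perform before
    # calling days_before_year() (which asserts year >= 1).
    if not datetime.MINYEAR <= iso_year <= datetime.MAXYEAR:
        raise ValueError(f"Invalid ISO year: {iso_year}")
    return iso_year

check_iso_year(2024)   # fine
try:
    check_iso_year(0)  # '0000W25' would parse to iso_year == 0
except ValueError as exc:
    print(exc)         # Invalid ISO year: 0
```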
### CPython versions tested on:
3.11, 3.12, CPython main branch
### Operating systems tested on:
Linux
### Output from running 'python -VV' on the command line:
Python 3.13.0a5+ (heads/main:dc54714044, Apr 4 2024, 12:28:42) [GCC 11.4.0]
<!-- gh-linked-prs -->
### Linked PRs
* gh-117543
* gh-117689
<!-- /gh-linked-prs -->
| d5f1139c79525b4e7e4e8ad8c3e5fb831bbc3f28 | a25c02eaf01abc7ca79efdbcda986b9cc2787b6c |
python/cpython | python__cpython-117532 | # Queue non-immediate shutdown doesn't unblock getters
# Bug report
### Bug description:
Caused by #104750
```python
import queue
import threading
q = queue.Queue()
t = threading.Thread(target=q.get)
t.start()
q.shutdown()
t.join(timeout=10.0)
assert not t.is_alive() # raises AssertionError
```
Discovered here: https://github.com/python/cpython/pull/104228#issuecomment-2035201238
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-117532
<!-- /gh-linked-prs -->
| 6bc0b33a91713ee62fd1860d28b19cb620c45971 | dfcae4379f2cc4d352a180f9fef2381570aa9bcb |
python/cpython | python__cpython-117522 | # Improve typing.TypeGuard docstring
The [current docstring](https://github.com/python/cpython/blob/1c434688866db79082def4f9ef572b85d85908c8/Lib/typing.py#L844) for `typing.TypeGuard` includes a single example, but that example does not use `TypeGuard`, and in fact demonstrates behavior that differs from that of `TypeGuard` in essential ways. It would be less confusing if the TypeGuard example actually exemplified TypeGuard.
<!-- gh-linked-prs -->
### Linked PRs
* gh-117522
* gh-117538
<!-- /gh-linked-prs -->
| b32789ccb91bbe43e88193f68b1364a8da6d9866 | 3f5bcc86d0764b691087e8412941e947554c93fd |
python/cpython | python__cpython-117517 | # Implement PEP-742
# Feature or enhancement
### Proposal:
PEP-742 was just accepted. Let's implement it.
### Has this already been discussed elsewhere?
I have already discussed this feature proposal on Discourse
### Links to previous discussion of this feature:
PEP-742
<!-- gh-linked-prs -->
### Linked PRs
* gh-117517
<!-- /gh-linked-prs -->
| f2132fcd2a6da7b2b86e80189fa009ce1d2c753b | 57183241af76bf33e44d886a733f799d20fc680c |
python/cpython | python__cpython-118514 | # Add `sys._is_gil_enabled()`
# Feature or enhancement
I propose adding a new function to the `sys` module to check whether the GIL is currently enabled:
```
sys._is_gil_enabled() -> bool
Return True if the GIL is currently enabled and False if the GIL is currently disabled.
```
The function would always return `True` in builds that do not support free-threading. In the free-threaded build, the GIL might be dynamically enabled or disabled depending on imported modules.
**EDIT**: Changed name to use underscore prefix
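Code that needs to run on both builds (and on versions that predate the function) can probe for it defensively; this is my own sketch, not part of the proposal:

```python
import sys

def gil_enabled():
    # sys._is_gil_enabled() only exists on recent versions; on older
    # versions the GIL is always enabled, so fall back to True.
    return getattr(sys, "_is_gil_enabled", lambda: True)()

print(gil_enabled())  # True on any build where the GIL is active
```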
<!-- gh-linked-prs -->
### Linked PRs
* gh-118514
<!-- /gh-linked-prs -->
| 2dae505e87e3815f087d4b07a71bb2c5cce22304 | 24e643d4ef024a3561c927dc07c59c435bb27bcc |
python/cpython | python__cpython-117527 | # JIT compiler doesn't support 64-bit operands on 32-bit systems
# Bug report
### Bug description:
Tracking issue for @brandtbucher , and for me to write TODOs in code.
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-117527
<!-- /gh-linked-prs -->
| 62aeb0ee69b06091396398de56dcb755ca3b9dc9 | df4d84c3cdca572f1be8f5dc5ef8ead5351b51fb |
python/cpython | python__cpython-117731 | # Make some `PyMutex` functions public
# Feature or enhancement
### Overview
The `PyMutex` APIs are currently internal-only in `pycore_lock.h`. This proposes making the type and two functions public as part of the general, non-limited API in `Include/cpython/lock.h`.
The APIs to be included in the public header are:
```c
typedef struct PyMutex {
    uint8_t v;
} PyMutex;

static inline void PyMutex_Lock(PyMutex *m) { ... }
static inline void PyMutex_Unlock(PyMutex *m) { ... }

// (private) slow path for locking the mutex
PyAPI_FUNC(void) _PyMutex_LockSlow(PyMutex *m);

// (private) slow path for unlocking the mutex
PyAPI_FUNC(void) _PyMutex_UnlockSlow(PyMutex *m);
```
### Motivation
* With the free-threaded build, C API extensions are more likely to require locking for thread-safety
* The `PyMutex` API is easier to use correctly than the existing `PyThread_type_lock`. The `PyMutex` APIs release the GIL before blocking, whereas with the `PyThread_type_lock` APIs you often want to use a [two step](https://github.com/python/cpython/blob/1c434688866db79082def4f9ef572b85d85908c8/Modules/zlibmodule.c#L175-L179) approach to locking to release the GIL before blocking.
* The `PyMutex` API enables a more efficient implementation: it's faster to acquire when the lock is not contended and does not require allocation.
### C-API WG issue
* https://github.com/capi-workgroup/decisions/issues/22
cc @ngoldbaum
<!-- gh-linked-prs -->
### Linked PRs
* gh-117731
* gh-120800
<!-- /gh-linked-prs -->
| 3af7263037de1d0ef63b070fc7bfc2cf042eaebe | e8e151d4715839f785ff853c77594d7302b40266 |
python/cpython | python__cpython-118257 | # [BUG] Newest 3.12 install on windows misses pip
# Bug report
### Bug description:
I recently did a fresh python installation, from python.org (https://www.python.org/ftp/python/3.12.2/python-3.12.2-amd64.exe)
I installed it in an "uncommon" directory (`D:\EDM115\Programmes\Python312\`), but this had never caused any issue in the past.
The main issue is that the `Scripts` folder was simply empty, meaning that no pip command worked. I verified that the pip checkbox was checked, and tried to repair and later redo the install, only to face the same issues.
What fixed it was running `py -m ensurepip --upgrade` and then renaming `D:\EDM115\Programmes\Python312\Scripts\pip3.exe` to `D:\EDM115\Programmes\Python312\Scripts\pip.exe`, but I believe that this should be done automatically.
### CPython versions tested on:
3.12
### Operating systems tested on:
Windows
<!-- gh-linked-prs -->
### Linked PRs
* gh-118257
* gh-119421
* gh-119422
<!-- /gh-linked-prs -->
| c9073eb1a99606df1efeb8959e9f11a8ebc23ae2 | baf347d91643a83483bae110092750d39471e0c2 |
python/cpython | python__cpython-117504 | # Support non-ASCII user name in posixpath.expanduser()
# Bug report
Currently `posixpath.expanduser()` only supports ASCII user names in bytes paths. It supports non-ASCII user names in string paths. `ntpath.expanduser()` also "supports" non-ASCII user names (although this support is only a guess, non-ASCII user names are at least not rejected).
So I consider it a bug.
<!-- gh-linked-prs -->
### Linked PRs
* gh-117504
* gh-117970
* gh-118056
* gh-118058
<!-- /gh-linked-prs -->
| 51132da0c4dac13500d9bb86b2fdad42091d3fd9 | 44890b209ebe2efaf4f57eed04967948547cfa3b |
python/cpython | python__cpython-117496 | # Extract the instruction sequence data structure into its own file and add a Python interface
The Instruction Sequence data structure used by the compiler/assembler should be extracted into its own file, so that it can be properly abstracted. It should get a convenient Python interface so that it can be directly accessed from tests without translation to and from a list-of-tuples representation.
<!-- gh-linked-prs -->
### Linked PRs
* gh-117496
<!-- /gh-linked-prs -->
| 04697bcfaf5dd34c9312f4f405083b6d33b3511f | 060a96f1a9a901b01ed304aa82b886d248ca1cb6 |
python/cpython | python__cpython-117678 | # typing.NoReturn and typing.Never are somewhat unclear
# Documentation
In [the typing docs](https://docs.python.org/3/library/typing.html), [typing.Never](https://docs.python.org/3/library/typing.html#typing.Never) and [typing.NoReturn](https://docs.python.org/3/library/typing.html#typing.NoReturn) are somewhat unclear.
NoReturn used to be a type for function annotations when the function never returns. [PEP 484](https://peps.python.org/pep-0484/#the-noreturn-type) even says "The NoReturn type is only valid as a return annotation of functions, and considered an error if it appears in other positions". In practice, however, NoReturn was used as a bottom type, which eventually culminated in the addition of Never in #90633. However, it's still somewhat unclear from the typing docs exactly how you're supposed to use typing.Never and typing.NoReturn. In particular, are you supposed to prefer NoReturn for the return types of functions that never return?
## Never
Never says, in part:
> This can be used to define a function that should never be called, or a function that never returns
It then gives one example, a function never_call_me with an argument of type Never. Presumably this is the "a function that should never be called" case.
- It should be stated emphatically that a function that has a Never in its argument types should never be called, if this is the case.
- Possibly, this sense of the word "should" should also be clarified. Is it a constraint on the user, the type system, or both?
- If Never is also supposed to be used as the return type for a function that should be called but never returns, then this also needs an example.
- If you're instead supposed to use NoReturn for that, then this fact should be mentioned here.
- The documentation should also mention the claim "Type checkers should treat the two equivalently." from NoReturn, if that fact is in fact true.
## NoReturn
NoReturn says, in part
> NoReturn can also be used as a [bottom type](https://en.wikipedia.org/wiki/Bottom_type), a type that has no values. Starting in Python 3.11, the [Never](https://docs.python.org/3/library/typing.html#typing.Never) type should be used for this concept instead. Type checkers should treat the two equivalently.
- It should be stated emphatically that a function that has a NoReturn in its argument types should never be called, if this is the case.
- It should be clarified that here "this concept" means using NoReturn as the type annotation of anything but a function return type, and not the entire concept of NoReturn as a whole. This could also be clarified by amending the first sentence in the quote from "NoReturn can also be used as a..." to "NoReturn can also be used as a type annotation in other positions as a...", perhaps.
Further information: https://typing.readthedocs.io/en/latest/spec/special-types.html#noreturn , https://typing.readthedocs.io/en/latest/spec/special-types.html#never, https://typing.readthedocs.io/en/latest/source/unreachable.html#never-and-noreturn
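A sketch of both uses under discussion (my own example, not from the docs; the fallback import lets it run before 3.11, where `Never` does not exist):

```python
import sys
from typing import NoReturn

if sys.version_info >= (3, 11):
    from typing import Never
else:
    Never = NoReturn  # type checkers treat the two equivalently

def fatal(msg: str) -> NoReturn:
    # A function that never returns: it always raises.
    raise SystemExit(msg)

def never_call_me(arg: Never) -> None:
    # A function that should never be called: no value inhabits Never,
    # so a type checker flags every call site as an error.
    raise AssertionError(f"unreachable, got {arg!r}")

try:
    fatal("boom")
except SystemExit as exc:
    print(exc)  # boom
```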
<!-- gh-linked-prs -->
### Linked PRs
* gh-117678
* gh-118547
<!-- /gh-linked-prs -->
| 852263e1086748492602a90347ecc0a3925e1dda | f201628073f22a785a096eccb010e2f78601b60f |
python/cpython | python__cpython-118212 | # ast: Improve behavior for user-defined classes
From @encukou:
Thanks for the fix. Since it's a blocker I merged it, even though I'd have one more review comment:
It would be good to add this to the new test, and improve the behaviour (the error is quite opaque):
```python
class MoreFieldsThanTypes(ast.AST):
    _fields = ('a', 'b')
    _field_types = {'a': int | None}
    a: int | None = None
    b: int | None = None
```
_Originally posted by @encukou in https://github.com/python/cpython/issues/117266#issuecomment-2024873895_
Based on https://github.com/robotframework/robotframework/issues/5091#issuecomment-2033207000, we should also document `_field_types`.
<!-- gh-linked-prs -->
### Linked PRs
* gh-118212
<!-- /gh-linked-prs -->
| e0422198fb4de0a5d81edd3de0d0ed32c119e9bb | 040571f258d13a807f5c8e4ce0a182d5f9a2e81b |
python/cpython | python__cpython-117484 | # `test_ssl.test_wrong_cert_tls13` should accept "Broken pipe" as a valid error
The `test_wrong_cert_tls13` unit test checks the behavior when the server rejects the client's certificate. On macOS, this can sometimes lead to a "Broken pipe" on the client instead of a "Connection reset by peer" when the connection is closed during the `s.write()` call.
This happens frequently in the free-threaded build, but can also be reproduced on the default (with GIL) build by adding a short `time.sleep(0.1)` immediately before the `s.write(b'data')`.
https://github.com/python/cpython/blob/8eda146e87d5531c9d2bc62fd1fea3bd3163f9b1/Lib/test/test_ssl.py#L3153-L3178
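The shape of the fix being suggested (a sketch; the actual assertion in `test_wrong_cert_tls13` may differ): treat both `ECONNRESET` and `EPIPE` as acceptable outcomes of the rejected handshake.

```python
import errno

# Errors a client may legitimately see when the server tears down the
# connection after rejecting its certificate.
ACCEPTABLE_ERRNOS = {errno.ECONNRESET, errno.EPIPE}

def is_expected_disconnect(exc: OSError) -> bool:
    return exc.errno in ACCEPTABLE_ERRNOS

print(is_expected_disconnect(OSError(errno.EPIPE, "Broken pipe")))         # True
print(is_expected_disconnect(OSError(errno.ECONNRESET, "Reset by peer")))  # True
print(is_expected_disconnect(OSError(errno.EBADF, "Bad fd")))              # False
```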
<!-- gh-linked-prs -->
### Linked PRs
* gh-117484
<!-- /gh-linked-prs -->
| a214f55b274df9782e78e99516a372e0b800162a | 33ee5cb3e92ea8798e7f1a2f3a13b92b39cee6d6 |
python/cpython | python__cpython-117479 | # Add `@support.requires_gil_enabled` decorator to mark tests that require the GIL
# Feature or enhancement
@Eclips4 [points out](https://github.com/python/cpython/pull/117475#discussion_r1548349232) that it might be time to add a dedicated decorator for marking tests that require the GIL:
He writes:
> I think this helper should look like this:
```python
def requires_gil_enabled(test):
    if Py_GIL_DISABLED:
        return unittest.skip('needs the GIL enabled')(test)
    return test
```
<!-- gh-linked-prs -->
### Linked PRs
* gh-117479
<!-- /gh-linked-prs -->
| 42205143f8b3211d1392f1d9f2cf6717bdaa5b47 | de5ca0bf71760aad8f2b8449c89242498bff64c8 |
python/cpython | python__cpython-117475 | # Skip `test_gdb.test_backtrace.test_threads` in free-threaded build
# Feature or enhancement
The `test_threads` test checks that `py-bt` reports when a thread is waiting on the GIL. In the free-threaded build, the GIL will typically be disabled. I think we should just skip the test in that case.
https://github.com/python/cpython/blob/027fa2eccf39ddccdf7b416d16601277a7112054/Lib/test/test_gdb/test_backtrace.py#L53-L83
<!-- gh-linked-prs -->
### Linked PRs
* gh-117475
<!-- /gh-linked-prs -->
| 63998a1347f3970ea4c69c881db69fc72b16a54c | 434bc593df4c0274b8afd3c1dcdc9234f469d9bf |
python/cpython | python__cpython-117510 | # mailbox.mbox.flush() loses mailbox owner
In the `flush()` method for mbox mailboxes, after writing the new file, it correctly copies the file modes from the old file to the new one:
https://github.com/python/cpython/blob/027fa2eccf39ddccdf7b416d16601277a7112054/Lib/mailbox.py#L753-L755
However, it doesn't copy the user/group ownership of the old file to the new one, and I think it should.
I implemented a Python program to modify a mailbox, in this case to delete old messages from a spam folder. The program runs as root, so that it can operate on folders belonging to different users without having to setuid to each user. I found that on flushing after any changes, the mailbox was owned by root:root instead of by its original owner.
Suggest the following code instead:
```python
# Make sure the new file's mode and owner are the same as the old file's
info = os.stat(self._path)
os.chmod(new_file.name, info.st_mode)
try:
    os.chown(new_file.name, info.st_uid, info.st_gid)
except OSError:
    pass
```
<!-- gh-linked-prs -->
### Linked PRs
* gh-117510
* gh-117537
<!-- /gh-linked-prs -->
| 3f5bcc86d0764b691087e8412941e947554c93fd | dc5471404489da53e6d591b52ba8886897ed3743 |
python/cpython | python__cpython-117460 | # asyncio.run_coroutine_threadsafe drops specified exceptions' traceback
# Bug report
### Bug description:
```python
import asyncio
import threading
import traceback

async def raiseme():
    raise ValueError(42)

async def raiseme2():
    raise asyncio.TimeoutError()

loop = asyncio.new_event_loop()
asyncio.set_event_loop(loop)
thr = threading.Thread(target=loop.run_forever, daemon=True)
thr.start()

print('raiseme() run_coroutine_threadsafe')
try:
    task = asyncio.run_coroutine_threadsafe(raiseme(), loop)
    task.result()
except:
    traceback.print_exc()

print('raiseme2() run_coroutine_threadsafe')
try:
    task = asyncio.run_coroutine_threadsafe(raiseme2(), loop)
    task.result()
except:
    traceback.print_exc()
```
```
raiseme() run_coroutine_threadsafe
Traceback (most recent call last):
  File "g:\Projects\NowPlaying\test.py", line 18, in <module>
    task.result()
  File "C:\Program Files\Python312\Lib\concurrent\futures\_base.py", line 456, in result
    return self.__get_result()
           ^^^^^^^^^^^^^^^^^^^
  File "C:\Program Files\Python312\Lib\concurrent\futures\_base.py", line 401, in __get_result
    raise self._exception
  File "g:\Projects\NowPlaying\test.py", line 6, in raiseme
    raise ValueError(42)
ValueError: 42
raiseme2() run_coroutine_threadsafe
Traceback (most recent call last):
  File "g:\Projects\NowPlaying\test.py", line 25, in <module>
    task.result()
  File "C:\Program Files\Python312\Lib\concurrent\futures\_base.py", line 456, in result
    return self.__get_result()
           ^^^^^^^^^^^^^^^^^^^
  File "C:\Program Files\Python312\Lib\concurrent\futures\_base.py", line 401, in __get_result
    raise self._exception
TimeoutError
```
The traceback of the second exception (`TimeoutError`) is dropped.
The reason is that `_convert_future_exc` drops the original exception's traceback:
https://github.com/python/cpython/blob/812245ecce2d8344c3748228047bab456816180a/Lib/asyncio/futures.py#L316-L325
To fix it, construct the new exception with the original traceback like that:
```python
return exceptions.CancelledError(*exc.args).with_traceback(exc.__traceback__)
```
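The suggested `with_traceback()` call preserves the chain; a minimal standalone demonstration (my own sketch, outside asyncio):

```python
import traceback

def boom():
    raise TimeoutError("original")

try:
    boom()
except TimeoutError as exc:
    # Rebuild the exception the way _convert_future_exc could, carrying
    # the original traceback along instead of dropping it.
    converted = TimeoutError(*exc.args).with_traceback(exc.__traceback__)

tb_text = "".join(
    traceback.format_exception(type(converted), converted, converted.__traceback__)
)
print("in boom" in tb_text)  # True: the frame that raised is preserved
```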
### CPython versions tested on:
3.10, CPython main branch
### Operating systems tested on:
Linux, Windows
<!-- gh-linked-prs -->
### Linked PRs
* gh-117460
<!-- /gh-linked-prs -->
| 85843348c5f0b8c2f973e8bc586475e69af19cd2 | b4fe02f595fcb9f78261920a268ef614821ec195 |
python/cpython | python__cpython-117477 | # pystats: Uop "miss" counts are incorrect
# Bug report
### Bug description:
After the trace stitching, all uop miss counts are attributed to either `_DEOPT` or `_SIDE_EXIT`. These should instead be counted against the uop immediately prior.
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-117477
* gh-117559
<!-- /gh-linked-prs -->
| 757b62493b47c6d2f07fc8ecaa2278a7c8a3bea6 | 9c1dfe21fdaf2d93c3e1d1bba1cbe240e35ff35d |
python/cpython | python__cpython-118482 | # A loop split between two hot executors never calls CHECK_EVAL_BREAKER()
When a tight for loop without calls is turned into two traces that transfer directly to each other, it seems that `CHECK_EVAL_BREAKER()` is never called, and the loop won't be interruptible.
See https://github.com/faster-cpython/ideas/issues/669#issuecomment-2030810567, where @brandtbucher and I reasoned that this might occur in the _nbody_ benchmark (when the max trace length is reduced to 400).
While we didn't observe the predicted speedup, we really do believe that in this case the thread is uninterruptible.
@markshannon Any thoughts?
<!-- gh-linked-prs -->
### Linked PRs
* gh-118482
<!-- /gh-linked-prs -->
| 67bba9dd0f5b9c2d24c2bc6d239c4502040484af | f8e088df2a87f613ee23ea4f6787f87d9196b9de |
python/cpython | python__cpython-117441 | # Make syslog thread-safe in `--disable-gil` builds
# Feature or enhancement
The `syslog` module has two mutable global variables:
https://github.com/python/cpython/blob/9dae05ee59eeba0e67af2a46f2a2907c9f8d7e4a/Modules/syslogmodule.c#L71-L72
We need to protect access to these variables in the `--disable-gil` builds.
<!-- gh-linked-prs -->
### Linked PRs
* gh-117441
<!-- /gh-linked-prs -->
| 954d616b4c8cd091214aa3b8ea886bcf9067243a | e569f9132b5bdc1c103116a020e19e3ccc20cf34 |
python/cpython | python__cpython-117469 | # Make refleak checking thread-safe without the GIL
# Feature or enhancement
The refleak checking relies on per-interpreter "total" refcount tracking. It uses non-atomic operations and is not thread-safe without the GIL.
In the free-threaded build, I think we should primarily track counts in `PyThreadState` and occasionally aggregate the results into the per-interpreter total refcount using atomic operations.
See:
https://github.com/python/cpython/blob/9dae05ee59eeba0e67af2a46f2a2907c9f8d7e4a/Objects/object.c#L72-L91
There is also the legacy `_Py_RefTotal`, but that's just preserved for ABI compatibility. I don't think we have to do anything with that.
<!-- gh-linked-prs -->
### Linked PRs
* gh-117469
<!-- /gh-linked-prs -->
| 1a6594f66166206b08f24c3ba633c85f86f99a56 | 2067da25796ea3254d0edf61a39bcc0326c4f71d |
python/cpython | python__cpython-117436 | # Make multiprocessing `SemLock` thread-safe in the free-threaded build
# Feature or enhancement
The multiprocessing `SemLock` code maintains an internal `count` field. When operating as a recursive mutex, `count` is the number of times the thread has acquired the mutex (i.e., 0, 1, .. N). When operating as a semaphore, `count` is often `0` or `1`, but can be negative if the `SemLock` is initialized with `maxvalue > 1`
The modification of `count` is not thread-safe without the GIL within the process. Note that the `count` field is not shared across processes, unlike the underlying `sem_t` or `HANDLE`.
In particular, the modification of `count` after the semaphore is released is not thread-safe without the GIL:
https://github.com/python/cpython/blob/9dae05ee59eeba0e67af2a46f2a2907c9f8d7e4a/Modules/_multiprocessing/semaphore.c#L444-L448
I think this is the source of deadlocks when running `test_multiprocessing_forkserver.test_processes` with the GIL disabled.
<!-- gh-linked-prs -->
### Linked PRs
* gh-117436
<!-- /gh-linked-prs -->
| de5ca0bf71760aad8f2b8449c89242498bff64c8 | 04697bcfaf5dd34c9312f4f405083b6d33b3511f |
python/cpython | python__cpython-117412 | # PyFutureFeatures should not be public
``PyFutureFeatures`` is defined in Include/cpython/compile.h. It is not used in anything user-facing, so there is no reason for anyone to use it. It should be private.
<!-- gh-linked-prs -->
### Linked PRs
* gh-117412
<!-- /gh-linked-prs -->
| 1d5479b236e9a66dd32a24eff6fb83e3242b999d | 5fd1897ec51cb64ef7990ada538fcd8d9ca1f74b |
python/cpython | python__cpython-117390 | # `test_compileall.EncodingTest` is broken
# Bug report
This test was added a long time ago as a part of https://github.com/python/cpython/commit/4b00307425bb3219f269a13ba5a9526903d21ce8
But, it is broken right now for several reasons:
1. It does not assert anything at all, so it passes when it really should not
2. It uses the Python 2 `print` statement, which actually results in
```pytb
======================================================================
FAIL: test_error (test.test_compileall.EncodingTest.test_error)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/Users/sobolev/Desktop/cpython2/Lib/test/test_compileall.py", line 516, in test_error
self.assertEqual(buffer.getvalue(), '')
~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^
AssertionError: 'Listing \'/var/folders/vw/n7c5l8m94zb072h[327 chars]\n\n' != ''
- Listing '/var/folders/vw/n7c5l8m94zb072h487pnlpzm0000gn/T/tmpeolpbxlq'...
- Compiling '/var/folders/vw/n7c5l8m94zb072h487pnlpzm0000gn/T/tmpeolpbxlq/_test.py'...
- *** File "/var/folders/vw/n7c5l8m94zb072h487pnlpzm0000gn/T/tmpeolpbxlq/_test.py", line 2
- print u"€"
- ^^^^^^^^^^
- SyntaxError: Missing parentheses in call to 'print'. Did you mean print(...)?
-
----------------------------------------------------------------------
```
But we cannot see this failure, because the return value of `compileall.compile_dir()` is ignored, and `sys.stdout` is not checked either.
I will send a PR with the fix.
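For reference, `compileall.compile_dir()` does report the failure through its return value; the broken test simply never checks it. A minimal demonstration (temporary paths, made-up file name):

```python
import compileall
import os
import tempfile

# compile_dir() returns a true value only if every file compiled successfully.
with tempfile.TemporaryDirectory() as d:
    with open(os.path.join(d, "bad.py"), "w") as f:
        f.write('print u"x"\n')  # Python 2 syntax: a SyntaxError today
    ok = compileall.compile_dir(d, quiet=2)  # quiet=2: print nothing

print(ok)  # falsy: compilation failed
```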
<!-- gh-linked-prs -->
### Linked PRs
* gh-117390
* gh-118603
* gh-118604
<!-- /gh-linked-prs -->
| 44f67916dafd3583f482e6d001766581a1a734fc | 5092ea238e28c7d099c662d416b2a96fdbea4790 |
python/cpython | python__cpython-117386 | # sys.settrace set some events without callbacks
# Bug report
### Bug description:
`sys.settrace` sets `PY_MONITORING_EVENT_BRANCH` and `PY_MONITORING_EVENT_EXCEPTION_HANDLED` monitoring events but does not have any registered callbacks. Those two events should not generate any legacy trace events (and `sys.settrace` works fine without it for a long time), so we should just remove those.
This improves trace performance a bit (~3% for fib + empty tracefunc) as the events do not fire anymore.
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-117386
<!-- /gh-linked-prs -->
| 0f998613324bcb6fa1cd9a3a2fc7e46f67358df7 | a5eeb832c2bbbd6ce1e9d545a553de926af468d5 |
python/cpython | python__cpython-117382 | # Fix error message for `ntpath.commonpath`
# Bug report
### Bug description:
```python
>>> import ntpath
>>> ntpath.commonpath(["/foo", "foo"])
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<frozen ntpath>", line 808, in commonpath
ValueError: Can't mix absolute and relative paths
```
`/foo` is not an absolute path but a rooted path, which can't be mixed with non-rooted paths.
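The distinction shows up in `ntpath.splitdrive()`: on Windows a path is absolute only when it has both a drive and a root, whereas `/foo` has a root but no drive (`ntpath` is importable on any platform):

```python
import ntpath

# 'C:/foo' has a drive ('C:') and a root ('/'), so it is absolute.
print(ntpath.splitdrive("C:/foo"))  # ('C:', '/foo')

# '/foo' has a root but no drive: it is "rooted", yet still relative
# to the current drive.
print(ntpath.splitdrive("/foo"))    # ('', '/foo')
```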
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Windows
<!-- gh-linked-prs -->
### Linked PRs
* gh-117382
<!-- /gh-linked-prs -->
| 2ec6bb4111d2c03c1cac02b27c74beee7e5a2a05 | a214f55b274df9782e78e99516a372e0b800162a |
python/cpython | python__cpython-126538 | # forkserver.set_forkserver_preload() silent preload import failures when sys.path is required
# Bug report
### Bug description:
Bug in `def main()`: https://github.com/python/cpython/blob/main/Lib/multiprocessing/forkserver.py#L167
The param `sys_path` is ignored. Result: `ModuleNotFoundError` for preloaded modules.
a) Using `sys_path` fixes this issue
b) Better still, remove the `pass` and report an error, to simplify problem solving
```python
if preload:
if '__main__' in preload and main_path is not None:
process.current_process()._inheriting = True
try:
spawn.import_main_path(main_path)
finally:
del process.current_process()._inheriting
+ if sys_path:
+ sys.path = sys_path
for modname in preload:
try:
__import__(modname)
except ImportError:
- pass
+ warnings.warn('forkserver: preloading module failed %s' % modname)
```
### CPython versions tested on:
3.12
### Operating systems tested on:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-126538
* gh-126632
* gh-126633
* gh-126634
* gh-126635
* gh-126652
* gh-126653
* gh-126668
* gh-126669
<!-- /gh-linked-prs -->
| 9d08423b6e0fa89ce9cfea08e580ed72e5db8c70 | 266328552e922fd9030cd699e10a25f03a67c8ba |
python/cpython | python__cpython-117696 | # Implement deferred reference counting in free-threaded builds
# Feature or enhancement
@Fidget-Spinner has started implementing tagged pointers in the evaluation stack in https://github.com/python/cpython/issues/117139.
There are two other pieces needed for deferred reference counting support in the free-threaded build:
1) We need a way to indicate that a `PyObject` uses deferred reference counting (in the `PyObject` header)
2) We need to collect unreachable objects that use deferred reference counting in the GC
### Object representation (done)
I think we should use a bit in [`ob_gc_bits`](https://github.com/python/cpython/blob/5d21d884b6ffa45dac50a5f9a07c41356a8478b4/Include/object.h#L221) [^1] to mark objects that support deferred reference counting. This differs a bit from [PEP 703](https://peps.python.org/pep-0703/#queued-0b10:~:text=The%20two%20most%20significant%20bits%20are%20used%20to%20indicate%20the%20object%20is%20immortal%20or%20uses%20deferred%20reference%20counting), which says "The two most significant bits [of `ob_ref_local`] are used to indicate the object is immortal or uses deferred reference counting."
The flags in `ob_gc_bits` are, I think, a better marker than `ob_ref_local` because it avoids any concerns with underflow of a 32-bit field in 64-bit builds. This will make the check if an object is immortal *or* supports deferred reference counting a tiny bit more expensive (because it will need to check two fields), but I think it's still a better design choice.
Additionally, we don't want objects with deferred references to be destroyed when their refcount would reach zero. We can handle this by adding `1` to the refcount when we mark it as deferred, and account for that when we compute the "gc refs" in the GC.
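As a toy model (plain Python, not CPython internals) of the accounting just described: marking an object as deferred adds one to its refcount so it survives a "real" count of zero, and the GC subtracts that one back out when seeding `gc_refs`:

```python
# Toy model of the deferred-RC accounting: the +1 added when an object is
# marked deferred keeps it alive outside the GC, and stays invisible to the
# GC because seeding gc_refs subtracts it back out.
class Obj:
    def __init__(self, refcount):
        self.refcount = refcount
        self.deferred = False

def mark_deferred(obj):
    obj.deferred = True
    obj.refcount += 1  # prevents destruction when the real count hits zero

def seed_gc_refs(obj):
    # gc_refs starts from the refcount, minus the artificial +1
    return obj.refcount - (1 if obj.deferred else 0)

o = Obj(refcount=2)
mark_deferred(o)
assert o.refcount == 3        # extra reference added
assert seed_gc_refs(o) == 2   # the GC sees only the real references
```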
#### What types of objects support deferred references to them?
* code objects
* descriptors
* functions (only enabled for top-level functions, not closures)
* modules and module dictionaries
* heap types [^2]
#### Where can deferred references live?
Deferred references are stored in `localsplus[]` in frames: both the local variables and the evaluation stack can contain deferred references. This includes suspended generators, so deferred references may occur in the heap without being present in the frame stack.
### GC algorithm changes part 1 (done)
We'll need to:
1) account for the extra reference. Since we don't have a [zero count table](https://www.memorymanagement.org/glossary/z.html#term-zero-count-table) we've added one to each deferred RC object (see previous section). When we initialize the computed `gc_refs` from the refcount, we should subtract one for these objects.
2) mark objects as no longer deferred before we finalize them. This ensures that finalized objects are freed promptly.
3) mark objects as no longer deferred during shutdown. This ensures prompt destruction during shutdown.
### GC algorithm changes part 2 (not yet implemented)
The GC needs special handling for frames with deferred references:
* When computing `gc_refs` (i.e., `visit_decref`) the GC should *skip* deferred references. We don't want to "subtract one" from `gc_refs` in this step because deferred references don't have a corresponding "+1".
* When marking objects as transitively reachable (i.e., `visit_clear_unreachable`/`visit_reachable`), the GC should treat deferred references in frames just like other references.
Note that deferred references might be "in the heap" (and possibly form cyclic trash) due to suspended generators or captured frame objects (e.g., from exceptions, `sys._getframe()`, etc.)
### GC thread stack scanning (not yet implemented)
The GC needs to account for deferred references from threads' stacks. This requires an extra step at the start of each GC: scan each frame in each thread and add one to each object with a deferred reference in the frame in order to ensure that it's kept alive. This step is complicated by the fact that active frames generally do not have [valid `stacktop` pointers](https://github.com/python/cpython/blob/7d7eec595a47a5cd67ab420164f0059eb8b9aa28/Include/internal/pycore_frame.h#L159-L169) on the frame. The true `stacktop` is stored in a [local "register" variable](https://github.com/python/cpython/blob/7d7eec595a47a5cd67ab420164f0059eb8b9aa28/Python/ceval.c#L749-L752) in `_PyEval_EvalFrameDefault` that is not accessible to the GC.
In order to work around this limitation, we do the following:
* When scanning thread stacks, the GC scans up to the *maximum* stack top for the frame (i.e., up to `co_stacksize`) rather than up to current stack top, which the GC can't determine.
* The stack needs to be zero-initialized: it can't contain garbage data.
* It's okay for the GC to consider "dead" deferred references from the running frame. The pointed-to objects can't be dead because only the GC can free objects that use deferred reference counting. Note that the GC doesn't consider non-deferred references in this step.
We also need to ensure that the frame does not contain dead references from *previous* executions. The simple way to deal with this is to zero out the frame's stack in `_PyThreadState_PushFrame`, but there are more efficient strategies that we should consider. For example, the responsibility can be shifted to the GC, which can clear anything above the top frame up to `datastack_limit`.
### Relevant commits from the nogil-3.12 fork
* https://github.com/colesbury/nogil-3.12/commit/149ea9dc439c6b34fe07a14d74acbd60a70be243
* https://github.com/colesbury/nogil-3.12/commit/42d3e11d8ced1bdccf170303a571dc2a1b708d9e
[^1]: `ob_gc_bits` has grown to encompass more than GC related-state, so we may want to consider renaming it.
[^2]: heap types will also need additional work beyond deferred reference counting so that creating instances scales well.
<!-- gh-linked-prs -->
### Linked PRs
* gh-117696
* gh-117823
* gh-122975
* gh-124301
<!-- /gh-linked-prs -->
| 4ad8f090cce03c24fd4279ec8198a099b2d0cf97 | c50cb6dd09d5a1bfdd1b896cc31ccdc96c72e561 |
python/cpython | python__cpython-117679 | # Confusing wording in `os.path.lexists` docs
os.path.lexists(path) https://docs.python.org/3/library/os.path.html#os.path.lexists
Return True if path refers to an existing path. Returns **True** for broken symbolic links. Equivalent to [exists()](https://docs.python.org/3/library/os.path.html#os.path.exists) on platforms lacking [os.lstat()](https://docs.python.org/3/library/os.html#os.lstat).
Should read **False**
<!-- gh-linked-prs -->
### Linked PRs
* gh-117679
* gh-117701
<!-- /gh-linked-prs -->
| 73906d5c908c1e0b73c5436faeff7d93698fc074 | a05068db0cb43337d20a936d919b9d88c35d9818 |
python/cpython | python__cpython-117350 | # Speed up `os.path`
# Feature or enhancement
### Proposal:
There are some minor optimisations possible in `os.path`, including some fast returns.
### Has this already been discussed elsewhere?
This is a minor feature, which does not need previous discussion elsewhere
### Links to previous discussion of this feature:
- #117202
<!-- gh-linked-prs -->
### Linked PRs
* gh-117350
* gh-117610
<!-- /gh-linked-prs -->
| cae4cdd07ddfcd8bcc05d683bac53815391c9907 | 8eda146e87d5531c9d2bc62fd1fea3bd3163f9b1 |
python/cpython | python__cpython-117372 | # configparser.RawConfigParser._read is unmanageably complex
From the review:
> This function is already unmanageably complex (and complexity checks [disabled in the backport](https://github.com/jaraco/configparser/blob/b244f597d871a83986d5146ab49a4e7c19f8b528/backports/configparser/__init__.py#L1039)). Adding this single boolean parameter has expanded this function from ~120 lines to almost 160 and increases the mccabe cyclometric complexity of this function from 25 to 31 (where a target complexity is <10).
>
> ```
> cpython main @ pip-run mccabe -- -m mccabe --min 14 Lib/configparser.py
> 940:4: 'RawConfigParser._read' 25
> cpython main @ gh pr checkout 117273
> Switched to branch 'unnamed-section'
> cpython unnamed-section @ pip-run mccabe -- -m mccabe --min 14 Lib/configparser.py
> 961:4: 'RawConfigParser._read' 31
> ```
_Originally posted by @jaraco in https://github.com/python/cpython/pull/117273#discussion_r1541849160_
<!-- gh-linked-prs -->
### Linked PRs
* gh-117372
* gh-117703
<!-- /gh-linked-prs -->
| 019143fecbfc26e69800d28d2a9e3392a051780b | 01bd74eadbc4ff839d39762fae6366f50c1e116e |
python/cpython | python__cpython-117363 | # `test_clinic` fails with the `--forever` argument
# Bug report
### Bug description:
```python
./python -m test -q test_clinic --forever
Using random seed: 2016588083
0:00:00 load avg: 16.10 Run tests sequentially
test test_clinic failed -- Traceback (most recent call last):
File "/home/eclips4/CLionProjects/cpython/Lib/test/test_clinic.py", line 2675, in test_cli_converters
self.assertTrue(out.endswith(finale), out)
~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^
AssertionError: False is not true :
Legacy converters:
B C D L O S U Y Z Z#
b c d f h i l p s s# s* u u# w* y y# y* z z# z*
Converters:
bool(accept={<class 'object'>})
byte(bitwise=False)
char()
Custom()
defining_class(type=None)
double()
fildes()
float()
int(accept={<class 'int'>}, type=None)
long()
long_long()
object(converter=None, type=None, subclass_of=None)
Py_buffer(accept={<class 'clinic.buffer'>})
Py_complex()
Py_ssize_t(accept={<class 'int'>})
Py_UNICODE(accept={<class 'str'>}, zeroes=False)
PyByteArrayObject()
PyBytesObject()
self(type=None)
short()
size_t()
slice_index(accept={<class 'NoneType'>, <class 'int'>})
str(accept={<class 'str'>}, encoding=None, zeroes=False)
unicode()
unsigned_char(bitwise=False)
unsigned_int(bitwise=False)
unsigned_long(bitwise=False)
unsigned_long_long(bitwise=False)
unsigned_short(bitwise=False)
Return converters:
bool()
Custom()
double()
float()
int()
long()
object()
Py_ssize_t()
size_t()
unsigned_int()
unsigned_long()
All converters also accept (c_default=None, py_default=None, annotation=None).
All return converters also accept (py_default=None).
== Tests result: FAILURE ==
1 test failed:
test_clinic
Total duration: 586 ms
Total tests: run=290 failures=1
Total test files: run=2 failed=1
Result: FAILURE
```
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-117363
* gh-117365
* gh-117366
<!-- /gh-linked-prs -->
| 35b6c4a4da201a947b2ceb96ae4c0d83d4d2df4f | 7e2fef865899837c47e91ef0180fa59eb03e840b |
python/cpython | python__cpython-117343 | # dis [docs]: LOAD_SUPER_ATTR should push NULL instead of None if the lowest bit is set
# Documentation
The docs for LOAD_SUPER_ATTR currently states
> The low bit of namei signals to attempt a method load, as with [LOAD_ATTR](https://docs.python.org/3/library/dis.html#opcode-LOAD_ATTR), which results in pushing None and the loaded method. When it is unset a single value is pushed to the stack.
However, LOAD_METHOD/LOAD_ATTR push NULL to the stack, not None:
LOAD_ATTR
> Otherwise, NULL and the object returned by the attribute lookup are pushed.
Additionally, the code for LOAD_SUPER_ATTR seems to push NULL instead of None too: https://github.com/python/cpython/blob/efcc96844e7c66fcd6c23ac2d557ca141614ce9a/Python/generated_cases.c.h#L4499-L4502
<!-- gh-linked-prs -->
### Linked PRs
* gh-117343
* gh-117345
<!-- /gh-linked-prs -->
| a17f313e3958e825db9a83594c8471a984316536 | 26d328b2ba26374fb8d9ffe8215ecef7c5e3f7a2 |
python/cpython | python__cpython-117371 | # Deprecate glob.glob0() and glob.glob1()
Undocumented functions `glob0()` and `glob1()` were just helpers for `glob.iglob()`. They are not underscored because the `glob` module has `__all__`. They survived numerous refactorings only because they were used in the `msilib` module and some MSI related scripts. But the `msilib` module and these scripts have been removed.
`glob1(root_dir, pattern)` is equivalent to `iglob(os.path.join(glob.escape(root_dir), pattern))`, but more restricted and slightly faster, because it does not need to escape `root_dir` and process the result.
Another alternative is `iglob(pattern, root_dir=root_dir)`, which can be even faster, but it emits paths relative to `root_dir`. You can use `(os.path.join(root_dir, p) for p in iglob(pattern, root_dir=root_dir))` if you need to prepend `root_dir`. Actually, creating an efficient replacement for `glob1()` was one of the purposes of `root_dir`.
`glob0(root_dir, name)` just checks that `os.path.join(root_dir, name)` exists. I did not see its use in third-party code.
<!-- gh-linked-prs -->
### Linked PRs
* gh-117371
<!-- /gh-linked-prs -->
| fc8007ee3635db6ab73e132ebff987c910b6d538 | c741ad3537193c63fe697a8f0316aecd45eeb9ba |
python/cpython | python__cpython-117336 | # Handle non-iterables for `ntpath.commonpath`
# Bug report
### Bug description:
```python
>>> import ntpath
>>> ntpath.commonpath(None)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<frozen ntpath>", line 826, in commonpath
ValueError: commonpath() arg is an empty sequence
```
Expected: `TypeError: 'NoneType' object is not iterable` like for `posixpath.commonpath`.
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
macOS
<!-- gh-linked-prs -->
### Linked PRs
* gh-117336
<!-- /gh-linked-prs -->
| 14f1ca7d5363386163839b31ce987423daecc3de | 18cf239e39e25e6cef50ecbb7f197a82f8920ff5 |
python/cpython | python__cpython-117330 | # Make `cell` thread-safe in the free-threaded build
# Feature or enhancement
From the docs: “Cell” objects are used to implement variables referenced by multiple scopes.
Currently, modifying cell contents concurrently with other accesses is not thread-safe without the GIL. This shows up, for example, in `test_signal.test_stress_modifying_handlers`, which modifies the `num_received_signals` variable from multiple threads:
https://github.com/python/cpython/blob/9a388b9a64927c372d85f0eaec3de9b7320a6fb5/Lib/test/test_signal.py#L1352-L1374
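For context, a cell is what backs a `nonlocal` variable, and the read-modify-write on its contents is exactly the operation that is unsafe without the GIL (pure-Python illustration):

```python
def make_counter():
    n = 0
    def inc():
        nonlocal n  # n lives in a cell shared by both scopes
        n += 1      # read cell contents, add, write back: three steps
        return n
    return inc

inc = make_counter()
inc()
inc()
cell = inc.__closure__[0]   # the underlying cell object
print(cell.cell_contents)   # 2
```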
<!-- gh-linked-prs -->
### Linked PRs
* gh-117330
<!-- /gh-linked-prs -->
| 19c1dd60c5b53fb0533610ad139ef591294f26e8 | 397d88db5e9ab2a43de3fdf5f8b973a949edc405 |
python/cpython | python__cpython-117369 | # email.policy.EmailPolicy._fold() breaking multi-byte Unicode sequences
https://github.com/python/cpython/blob/eefff682f09394fe4f18b7d7c6ac4c635caadd02/Lib/email/policy.py#L208
I think it's problematic that the method `email.policy.EmailPolicy._fold()` relies on the generic `str` / `bytes` method `.splitlines()`, especially in an email-processing context where the "official" line ending is `\r\n`.
I'm one of many devs who also leniently recognise (regex) `[\r\n]+` as a line break in emails. But I have no idea why all the other ending characters from other contexts are also used in a specific mail-manipulation context.
On the surface, `.splitlines()` seems a simple way to cover the case of a header value itself containing line endings.
However, in cases where a header value may contain multi-byte Unicode sequences, this causes breakage, because characters such as `\x0C` (which may potentially be part of a sequence) instead get treated as legacy ASCII 'form-feed', and deemed to be a line ending. This then breaks the sequence, which in turn, causes problems in the subsequent processing of the email message.
A specimen header (from real-world production traffic) which triggers this behaviour is:
```
b'Subject: P/L SEND : CARA-23PH00021,, 0xf2\x0C\xd8/FTEP'
```
Here, the `\x0C` is treated as a line-ending, so the trailing portion `b'\xd8/FTEP'` gets wrapped and indented on the next line.
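The behaviour is easy to reproduce with the header value alone (transcribed as `str` here): `str.splitlines()` treats `\x0c` (form feed) as a line boundary, while a CR/LF-only split leaves the value intact:

```python
import re

value = 'P/L SEND : CARA-23PH00021,, 0xf2\x0c\xd8/FTEP'

# splitlines() breaks at the form feed, producing two "lines"
print(value.splitlines())
# ['P/L SEND : CARA-23PH00021,, 0xf2', '\xd8/FTEP']

# a CR/LF-only split keeps the value whole
print(re.split(r'[\r\n]+', value))
# ['P/L SEND : CARA-23PH00021,, 0xf2\x0c\xd8/FTEP']
```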
To work around this in my networks, I've had to subclass `email.policy.EmailPolicy`, and override the method `._fold()` to instead split only on CR/LFs, via
```python
import re
import sys
import email.policy

RE_EOL_STR = re.compile(r'[\r\n]+')
RE_EOL_BYTES = re.compile(rb'[\r\n]+')
...
class MyPolicy(email.policy.EmailPolicy):
...
def _fold(self, name, value, refold_binary=False):
"""
Need to override this from email.policy.EmailPolicy to stop it treating chars other than
CR and LF as newlines
:param name:
:param value:
:param refold_binary:
:return:
"""
if hasattr(value, 'name'):
return value.fold(policy=self)
maxlen = self.max_line_length if self.max_line_length else sys.maxsize
# this is from the library version, and it improperly breaks on chars like 0x0c, treating
# them as 'form feed' etc.
# we need to ensure that only CR/LF is used as end of line
#lines = value.splitlines()
# this is a workaround which splits only on CR/LF characters
if refold_binary:
lines = RE_EOL_BYTES.split(value)
else:
lines = RE_EOL_STR.split(value)
refold = (self.refold_source == 'all' or
self.refold_source == 'long' and
(lines and len(lines[0])+len(name)+2 > maxlen or
any(len(x) > maxlen for x in lines[1:])))
if refold or refold_binary and _has_surrogates(value):
return self.header_factory(name, ''.join(lines)).fold(policy=self)
return name + ': ' + self.linesep.join(lines) + self.linesep
```
Can the maintainers of this class please advise with their thoughts?
Given that RFC822 and related standards specify that the "official" line ending is `\r\n`, is there any reason to catch everything else that may also be considered in other string contexts to constitute a line ending?
<!-- gh-linked-prs -->
### Linked PRs
* gh-117369
* gh-117971
* gh-117972
<!-- /gh-linked-prs -->
| aec1dac4efe36a7db51f08385ddcce978814dbe3 | 4e502a4997af4c8042a6ac13115a3f8ba31520ea |
python/cpython | python__cpython-117309 | # `_ssl._SSLContext` construction crashes when a bad build results in an empty `PY_SSL_DEFAULT_CIPHER_STRING` #define
# Bug report
### What happened?
This doesn't come up in practice as nobody would ever have such a misbehaving build in a supported released config.
Not a security issue.
I stumbled upon this while working on my draft https://github.com/python/cpython/pull/116399 BoringSSL linkage branch in my own non-OpenSSL Linux environment.
### Steps to reproduce
```
./configure --with-ssl-default-suites="" && make -j24
./python -m test test_ssl
... SIGSEGV ...
```
The desired result is a Python exception. Clearly nobody builds intentionally with the above flag as it would've crashed when they tried to use their build. I encountered this error a _different_ way due to a non-functional ssl library config.
### CPython versions tested on:
CPython main branch
<!-- gh-linked-prs -->
### Linked PRs
* gh-117309
* gh-117317
* gh-117318
<!-- /gh-linked-prs -->
| 8cb7d7ff86a1a2d41195f01ba4f218941dd7308c | 6c8ac8a32fd6de1960526561c44bc5603fab0f3e |
python/cpython | python__cpython-117304 | # Free-threading crash involving `os.fork()` and `PyThreadState_DeleteCurrent()`
When running with `PYTHON_GIL=0`, `test_threading.test_reinit_tls_after_fork()` may crash in the forked child process with an assertion error that one of the dead thread states was already cleared.
https://github.com/python/cpython/blob/669ef49c7d42f35da6f7ee280102353b9b37f83e/Python/pystate.c#L1562
The cause is that a thread may be in the process of calling `PyThreadState_DeleteCurrent()` during fork:
https://github.com/python/cpython/blob/669ef49c7d42f35da6f7ee280102353b9b37f83e/Modules/_threadmodule.c#L354-L355
https://github.com/python/cpython/blob/669ef49c7d42f35da6f7ee280102353b9b37f83e/Python/pystate.c#L1734-L1746
This isn't a problem with the GIL because we hold the GIL from the end of `PyThreadState_Clear()`, when `tstate->_status.cleared` is set to `1`, until the end of `_PyThreadState_DeleteCurrent()`.
We should do something similar in the free-threaded build: don't set the thread state to "DETACHED" before it's removed from the linked list of states. This will have a similar effect: prevent the stop-the-world pause from commencing while a thread state is deleting itself.
<!-- gh-linked-prs -->
### Linked PRs
* gh-117304
<!-- /gh-linked-prs -->
| bfc57d43d8766120ba0c8f3f6d7b2ac681a81d8a | 05e0b67a43c5c1778dc2643c8b7c12864e135999 |
python/cpython | python__cpython-117301 | # Use stop-the-world to make `sys._current_frames()` and `sys._current_exceptions()` thread-safe in free-threaded build
# Feature or enhancement
The [`sys._current_frames`](https://docs.python.org/3/library/sys.html#sys._current_frames) and [`sys._current_exceptions`](https://docs.python.org/3/library/sys.html#sys._current_exceptions) functions rely on the GIL for thread-safety. We should use the stop the world mechanism to pause other threads so that we can safely capture the current frames and exceptions in other threads in the free-threaded build.
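For reference, the contract these functions must preserve is a per-thread snapshot of the topmost frame; a minimal demonstration:

```python
import sys
import threading

release = threading.Event()
t = threading.Thread(target=release.wait)
t.start()

frames = sys._current_frames()          # mapping: thread id -> topmost frame
assert t.ident in frames                # the worker thread is captured mid-wait
assert threading.get_ident() in frames  # so is the calling thread

release.set()
t.join()
```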
<!-- gh-linked-prs -->
### Linked PRs
* gh-117301
<!-- /gh-linked-prs -->
| 01bd74eadbc4ff839d39762fae6366f50c1e116e | 94c97423a9c4969f8ddd4a3aa4aacb99c4d5263d |
python/cpython | python__cpython-117297 | # DocTestCase should report as skipped if the doctest itself is skipped
# Feature or enhancement
### Proposal:
When a doctest is run through `unittest`, each docstring is wrapped in a `DocTestCase`. Currently, if the docstring's examples are all skipped (either with a `# doctest: +SKIP` comment, or the `optionflags` argument), the `DocTestCase` still reports as a test pass. Even in the verbose unittest log, there's no indication that anything has been skipped.
It would make sense for the `DocTestCase` to report as skipped in this situation.
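A minimal reproduction using `doctest.DocTestCase` directly (the `"demo"` name is made up for illustration):

```python
import doctest
import unittest

# Build a doctest whose only example carries +SKIP, wrap it in a DocTestCase,
# and run it under unittest.
parser = doctest.DocTestParser()
source = """
>>> 1 + 1  # doctest: +SKIP
3
"""
test = parser.get_doctest(source, {}, "demo", "<demo>", 0)
case = doctest.DocTestCase(test)

result = unittest.TestResult()
case.run(result)

# Before the proposed change: testsRun == 1, wasSuccessful() is True, and
# result.skipped stays empty -- no sign anything was skipped.
print(result.testsRun, result.wasSuccessful(), len(result.skipped))
```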
### Has this already been discussed elsewhere?
This is a minor feature, which does not need previous discussion elsewhere
<!-- gh-linked-prs -->
### Linked PRs
* gh-117297
<!-- /gh-linked-prs -->
| 29829b58a8328a7c2ccacaa74c1d7d120a5e5ca5 | efcc96844e7c66fcd6c23ac2d557ca141614ce9a |
python/cpython | python__cpython-117298 | # test.libregrtest race condition in runtest_mp leads to 30 second delay in free-threaded build
This is related to https://github.com/python/cpython/issues/90363.
In https://github.com/python/cpython/pull/30470, the (partial) fix for that issue I wrote:
> There are still potential race conditions between checking if any workers are alive and reading from the output queue. This means that there can be up to an extra 30 second delay (instead of hanging forever) while waiting for `self.output.get()`.
We are now seeing that 30 second delay when running some tests with the GIL disabled, such as `test_regrtest`.
I think it's worth pursuing the approach used here, which avoids the race condition by tracking exiting workers by passing a sentinel value through the queue:
https://github.com/colesbury/nogil/commit/406e5d79a03c22751b04570de4d2e1930be5c541
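The pattern from that commit, reduced to a sketch (not the regrtest code): each worker announces its exit in-band through the queue, so the reader never races a liveness check against `queue.get()` and needs no timeout:

```python
import queue
import threading

SENTINEL = object()

def worker(q, items):
    try:
        for item in items:
            q.put(("result", item * 2))
    finally:
        q.put(("exit", SENTINEL))  # always announce exit through the queue

q = queue.Queue()
workers = [threading.Thread(target=worker, args=(q, range(3))) for _ in range(2)]
for w in workers:
    w.start()

alive = len(workers)
results = []
while alive:
    kind, value = q.get()          # no timeout needed: exits arrive in-band
    if kind == "exit":
        alive -= 1
    else:
        results.append(value)

for w in workers:
    w.join()
print(sorted(results))  # [0, 0, 2, 2, 4, 4]
```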
<!-- gh-linked-prs -->
### Linked PRs
* gh-117298
<!-- /gh-linked-prs -->
| 26a680a58524fe39eecb243e37adfa6e157466f6 | 59864edd572b5c0cc3be58087a9ea3a700226146 |
python/cpython | python__cpython-117290 | # CFG to instructions creates too many label IDs
``_PyCfg_ToInstructionSequence`` creates an instruction sequence where the jump target labels are equal to the offsets of the jump targets. This bypasses the "labelmap" mechanism of the instruction sequence data structure (the mapping is done at source) but also requires creating more label IDs (so a larger label map is built in the background).
It would be more efficient, and more logical, for this function to act like codegen, and use logical (incremental) labels which are later mapped to offsets.
<!-- gh-linked-prs -->
### Linked PRs
* gh-117290
<!-- /gh-linked-prs -->
| 262fb911ab7df8e890ebd0efb0773c3e0b5a757f | 74c8568d07719529b874897598d8b3bc25ff0434 |
python/cpython | python__cpython-117916 | # PyTuple_SetItem documentation doesn't warn about refcount requirements
The PyTuple_SetItem() function (correctly) raises a SystemError when used on a tuple whose refcount isn't 1. The `PyTuple_SET_ITEM()` macro doesn't have any checks on the refcount, but it's still really bad to use it on a tuple that is already in use. However, the documentation doesn't mention this requirement. It could probably use a bright red flashing box with waving hands and a shouting man telling you not to use it on anything but newly created tuples.
By contrast, the docs for the `_PyTuple_Resize()` function, which also raises SystemError when used on a tuple whose refcount isn't 1, _does_ warn about the refcount requirement, although it isn't clear how hard a requirement it is. (It says "should not be used" rather than "cannot be used".)
<!-- gh-linked-prs -->
### Linked PRs
* gh-117916
<!-- /gh-linked-prs -->
| 041a566f3f987619cef7d6ae7915ba93e39d2d1e | 3b26cd8ca0e6c65e4b61effea9aa44d06e926797 |
python/cpython | python__cpython-117285 | # 3.13.0a5 SIGSEGV when calling weakref.proxy on a class instance
### What happened?
When building [dill](https://github.com/uqfoundation/dill) for Fedora Linux 41 with Python 3.13.0a5, a segmentation fault happens. The investigation leads back to Python.
The minimal reproducer for this to happen with Python 3.13.0a5 ~~(this was working in one of the first alphas and stopped ~around alpha3)~~:
```python
import weakref
class A:
pass
repr(weakref.proxy(A()))
```
A bt from gdb:
```
(gdb) bt
#0 0x00007ffff78a0a02 in Py_TYPE (ob=0x0) at /usr/src/debug/python3.13-3.13.0~a5-2.fc39.x86_64/Include/object.h:333
#1 0x00007ffff78a20f8 in proxy_repr (proxy=0x7fffe9cd8bb0)
at /usr/src/debug/python3.13-3.13.0~a5-2.fc39.x86_64/Objects/weakrefobject.c:476
#2 0x00007ffff77eb936 in PyObject_Repr (v=0x7fffe9cd8bb0)
at /usr/src/debug/python3.13-3.13.0~a5-2.fc39.x86_64/Objects/object.c:671
#3 0x00007ffff790bdf4 in builtin_repr (module=0x7fffe9da7a70, obj=0x7fffe9cd8bb0)
at /usr/src/debug/python3.13-3.13.0~a5-2.fc39.x86_64/Python/bltinmodule.c:2377
#4 0x00007ffff77e5c00 in cfunction_vectorcall_O (func=0x7fffe9dbc9b0, args=0x7ffff7fb8078,
nargsf=9223372036854775809, kwnames=0x0)
at /usr/src/debug/python3.13-3.13.0~a5-2.fc39.x86_64/Objects/methodobject.c:512
#5 0x00007ffff776d021 in _PyObject_VectorcallTstate (tstate=0x7ffff7d54970 <_PyRuntime+247824>,
callable=0x7fffe9dbc9b0, args=0x7ffff7fb8078, nargsf=9223372036854775809, kwnames=0x0)
at /usr/src/debug/python3.13-3.13.0~a5-2.fc39.x86_64/Include/internal/pycore_call.h:168
#6 0x00007ffff776de9e in PyObject_Vectorcall (callable=0x7fffe9dbc9b0, args=0x7ffff7fb8078,
nargsf=9223372036854775809, kwnames=0x0) at /usr/src/debug/python3.13-3.13.0~a5-2.fc39.x86_64/Objects/call.c:327
#7 0x00007ffff7915d85 in _PyEval_EvalFrameDefault (tstate=0x7ffff7d54970 <_PyRuntime+247824>, frame=0x7ffff7fb8020,
throwflag=0) at /usr/src/debug/python3.13-3.13.0~a5-2.fc39.x86_64/Python/generated_cases.c.h:817
#8 0x00007ffff790fad6 in _PyEval_EvalFrame (tstate=0x7ffff7d54970 <_PyRuntime+247824>, frame=0x7ffff7fb8020,
throwflag=0) at /usr/src/debug/python3.13-3.13.0~a5-2.fc39.x86_64/Include/internal/pycore_ceval.h:114
#9 0x00007ffff7941c7f in _PyEval_Vector (tstate=0x7ffff7d54970 <_PyRuntime+247824>, func=0x7fffe9c66210,
locals=0x7fffe9c7ce30, args=0x0, argcount=0, kwnames=0x0)
at /usr/src/debug/python3.13-3.13.0~a5-2.fc39.x86_64/Python/ceval.c:1820
#10 0x00007ffff79115e0 in PyEval_EvalCode (co=0x7fffe9da1f80, globals=0x7fffe9c7ce30, locals=0x7fffe9c7ce30)
at /usr/src/debug/python3.13-3.13.0~a5-2.fc39.x86_64/Python/ceval.c:599
#11 0x00007ffff79eda45 in run_eval_code_obj (tstate=0x7ffff7d54970 <_PyRuntime+247824>, co=0x7fffe9da1f80,
globals=0x7fffe9c7ce30, locals=0x7fffe9c7ce30)
at /usr/src/debug/python3.13-3.13.0~a5-2.fc39.x86_64/Python/pythonrun.c:1291
#12 0x00007ffff79ede15 in run_mod (mod=0x5555555ef230, filename=0x7fffe9cc4c80, globals=0x7fffe9c7ce30,
locals=0x7fffe9c7ce30, flags=0x7fffffffda78, arena=0x7fffe9cccfa0, interactive_src=0x0, generate_new_source=0)
at /usr/src/debug/python3.13-3.13.0~a5-2.fc39.x86_64/Python/pythonrun.c:1376
#13 0x00007ffff79ed822 in pyrun_file (fp=0x555555584db0, filename=0x7fffe9cc4c80, start=257, globals=0x7fffe9c7ce30,
locals=0x7fffe9c7ce30, closeit=1, flags=0x7fffffffda78)
at /usr/src/debug/python3.13-3.13.0~a5-2.fc39.x86_64/Python/pythonrun.c:1212
#14 0x00007ffff79ebbd3 in _PyRun_SimpleFileObject (fp=0x555555584db0, filename=0x7fffe9cc4c80, closeit=1,
flags=0x7fffffffda78) at /usr/src/debug/python3.13-3.13.0~a5-2.fc39.x86_64/Python/pythonrun.c:461
#15 0x00007ffff79eaf35 in _PyRun_AnyFileObject (fp=0x555555584db0, filename=0x7fffe9cc4c80, closeit=1,
flags=0x7fffffffda78) at /usr/src/debug/python3.13-3.13.0~a5-2.fc39.x86_64/Python/pythonrun.c:77
#16 0x00007ffff7a2478a in pymain_run_file_obj (program_name=0x7fffe9c7d000, filename=0x7fffe9cc4c80,
skip_source_first_line=0) at /usr/src/debug/python3.13-3.13.0~a5-2.fc39.x86_64/Modules/main.c:357
#17 0x00007ffff7a24864 in pymain_run_file (config=0x7ffff7d2f898 <_PyRuntime+96056>)
at /usr/src/debug/python3.13-3.13.0~a5-2.fc39.x86_64/Modules/main.c:376
#18 0x00007ffff7a250d6 in pymain_run_python (exitcode=0x7fffffffdc14)
at /usr/src/debug/python3.13-3.13.0~a5-2.fc39.x86_64/Modules/main.c:628
#19 0x00007ffff7a2521b in Py_RunMain () at /usr/src/debug/python3.13-3.13.0~a5-2.fc39.x86_64/Modules/main.c:707
#20 0x00007ffff7a252f1 in pymain_main (args=0x7fffffffdc90)
at /usr/src/debug/python3.13-3.13.0~a5-2.fc39.x86_64/Modules/main.c:737
#21 0x00007ffff7a253b9 in Py_BytesMain (argc=2, argv=0x7fffffffde08)
at /usr/src/debug/python3.13-3.13.0~a5-2.fc39.x86_64/Modules/main.c:761
#22 0x000055555555517d in main (argc=2, argv=0x7fffffffde08)
at /usr/src/debug/python3.13-3.13.0~a5-2.fc39.x86_64/Programs/python.c:15
```
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Linux
### Output from running 'python -VV' on the command line:
Python 3.13.0a5 (main, Mar 13 2024, 00:00:00) [GCC 13.2.1 20240316 (Red Hat 13.2.1-7)]
<!-- gh-linked-prs -->
### Linked PRs
* gh-117285
* gh-117287
<!-- /gh-linked-prs -->
| 8ef98924d304b5c9430e23f8170e2c32ec3a9920 | 8987a5c809343ae0dd2b8e607bf2c32a87773127 |
python/cpython | python__cpython-117354 | # scandir direntry.stat() always returns st_ctime of 0 on Windows
# Bug report
### Bug description:
In 3.12 on Windows, calling stat() on a direntry returned from os.scandir() always returns an st_ctime of 0:
```python
import os
for direntry in os.scandir('.'):
st = direntry.stat()
print(st.st_ctime)
```
In 3.10, this gave reasonable timestamps. In 3.12, it's always 0. This only affects os.scandir, not os.stat. I'm guessing it's related to this (https://docs.python.org/3/whatsnew/3.12.html#deprecated):
"The st_ctime fields return by [os.stat()](https://docs.python.org/3/library/os.html#os.stat) and [os.lstat()](https://docs.python.org/3/library/os.html#os.lstat) on Windows are deprecated. In a future release, they will contain the last metadata change time, consistent with other platforms. For now, they still contain the creation time, which is also available in the new st_birthtime field. (Contributed by Steve Dower in [gh-99726](https://github.com/python/cpython/issues/99726).)"
The os.stat results are as described, but maybe direntry.stat() was overlooked. I think it should also give st_ctime = st_birthtime for the compatibility period, like os.stat() is doing, not 0.
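In the meantime, a hedged workaround is possible on the caller's side (a sketch, not part of the proposed fix): prefer the new `st_birthtime` field when the stat result has it, and fall back to `st_ctime` elsewhere.

```python
import os

def creation_time(entry: os.DirEntry) -> float:
    # Prefer st_birthtime (present on Windows since 3.12, and on macOS);
    # fall back to st_ctime on platforms/versions without it.
    st = entry.stat()
    return getattr(st, "st_birthtime", st.st_ctime)

for entry in os.scandir('.'):
    print(entry.name, creation_time(entry))
```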
### CPython versions tested on:
3.12
### Operating systems tested on:
Windows
<!-- gh-linked-prs -->
### Linked PRs
* gh-117354
* gh-117525
<!-- /gh-linked-prs -->
| 985917dc8d34e2d2f717f7a981580a8dcf18d53a | 2057c92125f2e37caee209f032be9fe9c208357b |
python/cpython | python__cpython-117276 | # SystemError: <method-wrapper '__init__' of ast.AST-child object at ...> returned NULL without setting an exception
# Bug report
### Bug description:
```python
import ast
class File(ast.AST):
_fields = ("xxx",)
def __init__(self):
super().__init__()
File()
```
```
Traceback (most recent call last):
File ".../method_wrapper.py", line 11, in <module>
File()
~~~~^^
File ".../method_wrapper.py", line 8, in __init__
super().__init__()
~~~~~~~~~~~~~~~~^^
SystemError: <method-wrapper '__init__' of File object at 0x7f923918cf10> returned NULL without setting an exception
```
This has happened since 3.13.0a5. Discovered via real usage in https://github.com/robotframework/robotframework/issues/5091
### CPython versions tested on:
3.13, CPython main branch
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-117276
<!-- /gh-linked-prs -->
| 4c71d51a4b7989fc8754ba512c40e21666f9db0d | 8cb7d7ff86a1a2d41195f01ba4f218941dd7308c |
python/cpython | python__cpython-117234 | # Allow CPython to build against cryptography libraries lacking BLAKE2 support
# Feature or enhancement
### Proposal:
As part of a series of changes [discussed on the python Ideas board](https://discuss.python.org/t/support-building-ssl-and-hashlib-modules-against-aws-lc/44505/6), this issue proposes placing guards around references to BLAKE2's NIDs. This would allow cryptography libraries lacking full BLAKE2 support to be used with CPython.
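For context, `hashlib` itself always guarantees BLAKE2 via its bundled implementation, independently of the linked OpenSSL-compatible library; a quick way to inspect what a given build exposes (a sketch, unrelated to the proposed C-level NID guards):

```python
import hashlib

# Names hashlib guarantees regardless of the linked crypto library:
print(sorted(hashlib.algorithms_guaranteed))
# Whether this build exposes blake2b at all (guaranteed + OpenSSL extras):
print("blake2b" in hashlib.algorithms_available)
```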
### Has this already been discussed elsewhere?
I have already discussed this feature proposal on Discourse
### Links to previous discussion of this feature:
https://discuss.python.org/t/support-building-ssl-and-hashlib-modules-against-aws-lc/44505
<!-- gh-linked-prs -->
### Linked PRs
* gh-117234
* gh-117767
<!-- /gh-linked-prs -->
| b8eaad30090b46f115dfed23266305b6546fb364 | 01a51f949475f1590eb5899f3002304060501ab2 |
python/cpython | python__cpython-117228 | # Refresh doctest summary
# Feature or enhancement
The doctest output looks like [this](https://github.com/hugovk/cpython/actions/runs/8421808774/job/23059608592?pr=62):
```
Document: howto/ipaddress
-------------------------
1 items passed all tests:
10 tests in default
10 tests in 1 items.
10 passed and 0 failed.
Test passed.
Document: howto/functional
--------------------------
1 items passed all tests:
50 tests in default
50 tests in 1 items.
50 passed and 0 failed.
Test passed.
Document: whatsnew/3.2
----------------------
1 items passed all tests:
81 tests in default
81 tests in 1 items.
81 passed and 0 failed.
Test passed.
Document: tutorial/inputoutput
------------------------------
1 items passed all tests:
4 tests in default
4 tests in 1 items.
4 passed and 0 failed.
Test passed.
Document: library/configparser
------------------------------
1 items passed all tests:
82 tests in default
82 tests in 1 items.
82 passed and 0 failed.
Test passed.
1 items passed all tests:
1 tests in default (cleanup code)
1 tests in 1 items.
1 passed and 0 failed.
Test passed.
Document: library/weakref
-------------------------
1 items passed all tests:
31 tests in default
31 tests in 1 items.
31 passed and 0 failed.
Test passed.
Document: library/multiprocessing.shared_memory
-----------------------------------------------
1 items passed all tests:
79 tests in default
79 tests in 1 items.
79 passed and 0 failed.
Test passed.
Document: library/base64
------------------------
1 items passed all tests:
5 tests in default
5 tests in 1 items.
5 passed and 0 failed.
Test passed.
Document: howto/sorting
-----------------------
1 items passed all tests:
42 tests in default
42 tests in 1 items.
42 passed and 0 failed.
Test passed.
Document: whatsnew/2.7
----------------------
1 items passed all tests:
39 tests in default
39 tests in 1 items.
39 passed and 0 failed.
Test passed.
Document: reference/executionmodel
----------------------------------
1 items passed all tests:
5 tests in default
5 tests in 1 items.
5 passed and 0 failed.
Test passed.
Document: tutorial/floatingpoint
--------------------------------
1 items passed all tests:
35 tests in default
35 tests in 1 items.
35 passed and 0 failed.
Test passed.
Document: library/binascii
--------------------------
1 items passed all tests:
5 tests in default
5 tests in 1 items.
5 passed and 0 failed.
Test passed.
Document: library/decimal
-------------------------
2 items passed all tests:
66 tests in default
42 tests in newcontext
108 tests in 2 items.
108 passed and 0 failed.
Test passed.
2 items passed all tests:
1 tests in default (cleanup code)
1 tests in newcontext (cleanup code)
2 tests in 2 items.
2 passed and 0 failed.
Test passed.
Document: library/inspect
-------------------------
**********************************************************************
File "library/inspect.rst", line 438, in default
Failed example:
inspect.isasyncgenfunction(agen)
Expected:
False
Got:
True
**********************************************************************
1 items had failures:
1 of 35 in default
35 tests in 1 items.
34 passed and 1 failed.
***Test Failed*** 1 failures.
Document: library/glob
----------------------
1 items passed all tests:
5 tests in default
5 tests in 1 items.
5 passed and 0 failed.
Test passed.
Document: library/email.policy
------------------------------
1 items passed all tests:
10 tests in default
10 tests in 1 items.
10 passed and 0 failed.
Test passed.
1 items passed all tests:
1 tests in default (cleanup code)
1 tests in 1 items.
1 passed and 0 failed.
Test passed.
Document: library/datetime
--------------------------
1 items passed all tests:
36 tests in default
36 tests in 1 items.
36 passed and 0 failed.
Test passed.
Document: library/fractions
---------------------------
1 items passed all tests:
8 tests in default
8 tests in 1 items.
8 passed and 0 failed.
Test passed.
Document: reference/datamodel
-----------------------------
1 items passed all tests:
5 tests in default
5 tests in 1 items.
5 passed and 0 failed.
Test passed.
Document: library/functools
---------------------------
1 items passed all tests:
4 tests in default
4 tests in 1 items.
4 passed and 0 failed.
Test passed.
Document: library/random
------------------------
1 items passed all tests:
1 tests in import random
1 tests in 1 items.
1 passed and 0 failed.
Test passed.
Document: library/traceback
---------------------------
1 items passed all tests:
5 tests in default
5 tests in 1 items.
5 passed and 0 failed.
Test passed.
Document: faq/programming
-------------------------
1 items passed all tests:
34 tests in default
34 tests in 1 items.
34 passed and 0 failed.
Test passed.
Document: library/shlex
-----------------------
1 items passed all tests:
23 tests in default
23 tests in 1 items.
23 passed and 0 failed.
Test passed.
Document: library/urllib.parse
------------------------------
1 items passed all tests:
26 tests in default
26 tests in 1 items.
26 passed and 0 failed.
Test passed.
Document: library/unittest.mock-examples
----------------------------------------
1 items passed all tests:
200 tests in default
200 tests in 1 items.
200 passed and 0 failed.
Test passed.
Document: library/collections.abc
---------------------------------
1 items passed all tests:
9 tests in default
9 tests in 1 items.
9 passed and 0 failed.
Test passed.
Document: library/fnmatch
-------------------------
1 items passed all tests:
5 tests in default
5 tests in 1 items.
5 passed and 0 failed.
Test passed.
Document: library/graphlib
--------------------------
1 items passed all tests:
11 tests in default
11 tests in 1 items.
11 passed and 0 failed.
Test passed.
Document: library/email.message
-------------------------------
1 items passed all tests:
4 tests in default
4 tests in 1 items.
4 passed and 0 failed.
Test passed.
Document: library/exceptions
----------------------------
1 items passed all tests:
11 tests in default
11 tests in 1 items.
11 passed and 0 failed.
Test passed.
Document: library/unicodedata
-----------------------------
1 items passed all tests:
7 tests in default
7 tests in 1 items.
7 passed and 0 failed.
Test passed.
Document: library/re
--------------------
1 items passed all tests:
12 tests in default
12 tests in 1 items.
12 passed and 0 failed.
Test passed.
Document: whatsnew/3.13
-----------------------
1 items passed all tests:
1 tests in default
1 tests in 1 items.
1 passed and 0 failed.
Test passed.
Document: library/http.cookies
------------------------------
1 items passed all tests:
30 tests in default
30 tests in 1 items.
30 passed and 0 failed.
Test passed.
Document: library/ftplib
------------------------
1 items passed all tests:
2 tests in default
2 tests in 1 items.
2 passed and 0 failed.
Test passed.
Document: library/pprint
------------------------
1 items passed all tests:
16 tests in default
16 tests in 1 items.
16 passed and 0 failed.
Test passed.
Document: howto/descriptor
--------------------------
1 items passed all tests:
183 tests in default
183 tests in 1 items.
183 passed and 0 failed.
Test passed.
1 items passed all tests:
1 tests in default (cleanup code)
1 tests in 1 items.
1 passed and 0 failed.
Test passed.
Document: library/turtle
------------------------
1 items passed all tests:
314 tests in default
314 tests in 1 items.
314 passed and 0 failed.
Test passed.
Document: library/difflib
-------------------------
1 items passed all tests:
43 tests in default
43 tests in 1 items.
43 passed and 0 failed.
Test passed.
Document: library/math
----------------------
1 items passed all tests:
8 tests in default
8 tests in 1 items.
8 passed and 0 failed.
Test passed.
Document: library/pathlib
-------------------------
1 items passed all tests:
5 tests in default
5 tests in 1 items.
5 passed and 0 failed.
Test passed.
Document: library/unittest.mock
-------------------------------
1 items passed all tests:
462 tests in default
462 tests in 1 items.
462 passed and 0 failed.
Test passed.
Document: library/email.compat32-message
----------------------------------------
1 items passed all tests:
3 tests in default
3 tests in 1 items.
3 passed and 0 failed.
Test passed.
Document: library/copyreg
-------------------------
1 items passed all tests:
7 tests in default
7 tests in 1 items.
7 passed and 0 failed.
Test passed.
Document: library/getopt
------------------------
1 items passed all tests:
12 tests in default
12 tests in 1 items.
12 passed and 0 failed.
Test passed.
Document: reference/simple_stmts
--------------------------------
1 items passed all tests:
1 tests in default
1 tests in 1 items.
1 passed and 0 failed.
Test passed.
Document: library/typing
------------------------
1 items passed all tests:
58 tests in default
58 tests in 1 items.
58 passed and 0 failed.
Test passed.
Document: whatsnew/3.8
----------------------
1 items passed all tests:
6 tests in default
6 tests in 1 items.
6 passed and 0 failed.
Test passed.
Document: library/argparse
--------------------------
1 items passed all tests:
11 tests in default
11 tests in 1 items.
11 passed and 0 failed.
Test passed.
Document: library/stdtypes
--------------------------
1 items passed all tests:
59 tests in default
59 tests in 1 items.
59 passed and 0 failed.
Test passed.
Document: library/secrets
-------------------------
1 items passed all tests:
7 tests in default
7 tests in 1 items.
7 passed and 0 failed.
Test passed.
Document: tutorial/introduction
-------------------------------
1 items passed all tests:
3 tests in default
3 tests in 1 items.
3 passed and 0 failed.
Test passed.
Document: library/codecs
------------------------
1 items passed all tests:
2 tests in default
2 tests in 1 items.
2 passed and 0 failed.
Test passed.
Document: library/statistics
----------------------------
1 items passed all tests:
117 tests in default
117 tests in 1 items.
117 passed and 0 failed.
Test passed.
Document: library/enum
----------------------
1 items passed all tests:
19 tests in default
19 tests in 1 items.
19 passed and 0 failed.
Test passed.
Document: library/multiprocessing
---------------------------------
1 items passed all tests:
48 tests in default
48 tests in 1 items.
48 passed and 0 failed.
Test passed.
Document: library/ast
---------------------
1 items passed all tests:
74 tests in default
74 tests in 1 items.
74 passed and 0 failed.
Test passed.
Document: library/doctest
-------------------------
1 items passed all tests:
9 tests in default
9 tests in 1 items.
9 passed and 0 failed.
Test passed.
Document: library/email.iterators
---------------------------------
1 items passed all tests:
2 tests in default
2 tests in 1 items.
2 passed and 0 failed.
Test passed.
1 items passed all tests:
1 tests in default (cleanup code)
1 tests in 1 items.
1 passed and 0 failed.
Test passed.
Document: library/bz2
---------------------
1 items passed all tests:
17 tests in default
17 tests in 1 items.
17 passed and 0 failed.
Test passed.
1 items passed all tests:
1 tests in default (cleanup code)
1 tests in 1 items.
1 passed and 0 failed.
Test passed.
Document: library/functions
---------------------------
1 items passed all tests:
29 tests in default
29 tests in 1 items.
29 passed and 0 failed.
Test passed.
Document: library/struct
------------------------
1 items passed all tests:
4 tests in default
4 tests in 1 items.
4 passed and 0 failed.
Test passed.
Document: library/time
----------------------
1 items passed all tests:
2 tests in default
2 tests in 1 items.
2 passed and 0 failed.
Test passed.
Document: library/itertools
---------------------------
1 items passed all tests:
268 tests in default
268 tests in 1 items.
268 passed and 0 failed.
Test passed.
Document: library/ipaddress
---------------------------
1 items passed all tests:
50 tests in default
50 tests in 1 items.
50 passed and 0 failed.
Test passed.
Document: library/sqlite3
-------------------------
4 items passed all tests:
71 tests in default
3 tests in sqlite3.cursor
3 tests in sqlite3.limits
8 tests in sqlite3.trace
85 tests in 4 items.
85 passed and 0 failed.
Test passed.
Exception ignored in sys.unraisablehook: <function debug at 0x7fbdb071bc50>
Traceback (most recent call last):
File "<doctest sqlite3.trace[4]>", line 2, in debug
AttributeError: 'sqlite3.Connection' object has no attribute '__name__'
Exception ignored in sys.unraisablehook: <function debug at 0x7fbdb071bc50>
Traceback (most recent call last):
File "<doctest sqlite3.trace[4]>", line 2, in debug
AttributeError: 'sqlite3.Connection' object has no attribute '__name__'
Exception ignored in sys.unraisablehook: <function debug at 0x7fbdb071bc50>
Traceback (most recent call last):
File "<doctest sqlite3.trace[4]>", line 2, in debug
AttributeError: 'sqlite3.Connection' object has no attribute '__name__'
Exception ignored in sys.unraisablehook: <function debug at 0x7fbdb071bc50>
Traceback (most recent call last):
File "<doctest sqlite3.trace[4]>", line 2, in debug
AttributeError: 'sqlite3.Connection' object has no attribute '__name__'
Exception ignored in sys.unraisablehook: <function debug at 0x7fbdb071bc50>
Traceback (most recent call last):
File "<doctest sqlite3.trace[4]>", line 2, in debug
AttributeError: 'sqlite3.Connection' object has no attribute '__name__'
Exception ignored in sys.unraisablehook: <function debug at 0x7fbdb071bc50>
Traceback (most recent call last):
File "<doctest sqlite3.trace[4]>", line 2, in debug
AttributeError: 'sqlite3.Connection' object has no attribute '__name__'
Exception ignored in sys.unraisablehook: <function debug at 0x7fbdb071bc50>
Traceback (most recent call last):
File "<doctest sqlite3.trace[4]>", line 2, in debug
AttributeError: 'sqlite3.Connection' object has no attribute '__name__'
Exception ignored in sys.unraisablehook: <function debug at 0x7fbdb071bc50>
Traceback (most recent call last):
File "<doctest sqlite3.trace[4]>", line 2, in debug
AttributeError: 'sqlite3.Connection' object has no attribute '__name__'
Exception ignored in sys.unraisablehook: <function debug at 0x7fbdb071bc50>
Traceback (most recent call last):
File "<doctest sqlite3.trace[4]>", line 2, in debug
AttributeError: 'sqlite3.Connection' object has no attribute '__name__'
Exception ignored in sys.unraisablehook: <function debug at 0x7fbdb071bc50>
Traceback (most recent call last):
File "<doctest sqlite3.trace[4]>", line 2, in debug
AttributeError: 'sqlite3.Connection' object has no attribute '__name__'
Exception ignored in sys.unraisablehook: <function debug at 0x7fbdb071bc50>
Traceback (most recent call last):
File "<doctest sqlite3.trace[4]>", line 2, in debug
AttributeError: 'sqlite3.Connection' object has no attribute '__name__'
Exception ignored in sys.unraisablehook: <function debug at 0x7fbdb071bc50>
Traceback (most recent call last):
File "<doctest sqlite3.trace[4]>", line 2, in debug
AttributeError: 'sqlite3.Connection' object has no attribute '__name__'
Exception ignored in sys.unraisablehook: <function debug at 0x7fbdb071bc50>
Traceback (most recent call last):
File "<doctest sqlite3.trace[4]>", line 2, in debug
AttributeError: 'sqlite3.Connection' object has no attribute '__name__'
Document: library/collections
-----------------------------
1 items passed all tests:
121 tests in default
121 tests in 1 items.
121 passed and 0 failed.
Test passed.
Document: library/reprlib
-------------------------
2 items passed all tests:
7 tests in default
8 tests in indent
15 tests in 2 items.
15 passed and 0 failed.
Test passed.
Document: library/dis
---------------------
1 items passed all tests:
3 tests in default
3 tests in 1 items.
3 passed and 0 failed.
Test passed.
Exception ignored in sys.unraisablehook: <function debug at 0x7fbdb071bc50>
Traceback (most recent call last):
File "<doctest sqlite3.trace[4]>", line 2, in debug
AttributeError: 'sqlite3.Connection' object has no attribute '__name__'
Document: library/operator
--------------------------
1 items passed all tests:
15 tests in default
15 tests in 1 items.
15 passed and 0 failed.
Test passed.
Document: whatsnew/3.12
-----------------------
1 items passed all tests:
12 tests in default
12 tests in 1 items.
12 passed and 0 failed.
Test passed.
Document: library/hashlib
-------------------------
1 items passed all tests:
58 tests in default
58 tests in 1 items.
58 passed and 0 failed.
Test passed.
Document: library/ssl
---------------------
2 items passed all tests:
3 tests in default
5 tests in newcontext
8 tests in 2 items.
8 passed and 0 failed.
Test passed.
Document: library/ctypes
------------------------
1 items passed all tests:
6 tests in default
6 tests in 1 items.
6 passed and 0 failed.
Test passed.
Doctest summary
===============
3222 tests
1 failure in tests
0 failures in setup code
0 failures in cleanup code
```
When you know a doctest run has failed on the CI, you might search the logs for "failed" to find it. But when 81 of 84 results are for "0 failed" it's not so helpful.
* There's no need to print "0 failed" in the logs, we should only print that when there's 1+ failed.
* There are lots of logs with "1 items" or "1 tests". Let's skip the "s" when only one.
* And now we have [colour handling](https://docs.python.org/3.13/using/cmdline.html#using-on-controlling-color) in 3.13, it would also help to add some colour to the output.
I'll open some PRs.
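One possible shape for the singular/plural tweak (illustrative only, not the actual patch):

```python
def pluralize(n: int, noun: str) -> str:
    # "1 item", "2 items" -- skip the trailing "s" when the count is one.
    return f"{n} {noun}" if n == 1 else f"{n} {noun}s"

print(pluralize(1, "test"))   # 1 test
print(pluralize(35, "test"))  # 35 tests
```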
<!-- gh-linked-prs -->
### Linked PRs
* gh-117228
* gh-117583
* gh-118268
* gh-118283
<!-- /gh-linked-prs -->
| ce00de4c8cd39816f992e749c1074487d93abe9d | 92397d5ead38dde4154e70d00f24973bcf2a925a |
python/cpython | python__cpython-117493 | # collections.ChainMap.fromkeys argument 'value' has incorrect argument kind
# Bug report
### Bug description:
The fromkeys method of collections.ChainMap has a vararg argument kind. The description of the 'fromkeys' method and its redirection to dict.fromkeys imply that there should be a single value (a positional-only or positional-or-keyword argument).
see https://github.com/python/cpython/blob/eebea7e515462b503632ada74923ec3246599c9c/Lib/collections/__init__.py#L1040
```python
@classmethod
def fromkeys(cls, iterable, *args):
    'Create a ChainMap with a single dict created from the iterable.'
    return cls(dict.fromkeys(iterable, *args))
```
While passing a value to ChainMap.fromkeys works correctly, it may confuse the reader of the code. It is more of a cosmetic issue than a bug. I suggest changing it to the clearer:
```python
@classmethod
def fromkeys(cls, iterable, value=None):
    'Create a ChainMap with a single dict created from the iterable.'
    return cls(dict.fromkeys(iterable, value))
```
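A short demonstration that, despite the vararg signature, ChainMap.fromkeys behaves like dict.fromkeys with a single shared value (defaulting to None):

```python
from collections import ChainMap

# The extra argument is forwarded to dict.fromkeys as the shared value.
cm = ChainMap.fromkeys(["a", "b"], 0)
print(cm)  # ChainMap({'a': 0, 'b': 0})
```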
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-117493
<!-- /gh-linked-prs -->
| 03f7aaf953f00bf2953c21a057d8e6e88db659c8 | fc5f68e58ecfbc8c452e1c2f33a2a53d3f2d7ea2 |
python/cpython | python__cpython-117206 | # compileall with workers is faster with larger chunksize
# Feature or enhancement
### Proposal:
Currently we do a map with the default chunksize=1 over files:
https://github.com/python/cpython/blob/3726cb0f146cb229a5e9db8d41c713b023dcd474/Lib/compileall.py#L109
A quick local benchmark suggests that chunksize=4 would be 20% faster
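The chunksize effect can be reproduced with a toy `executor.map` benchmark (a sketch: the workload and sizes are illustrative, not the author's actual compileall run, and the 20% figure will vary by machine):

```python
import time
from concurrent.futures import ProcessPoolExecutor

def work(x):
    return x * x  # cheap stand-in for byte-compiling one file

if __name__ == "__main__":
    items = list(range(2000))
    for chunksize in (1, 4):
        start = time.perf_counter()
        with ProcessPoolExecutor() as executor:
            # Larger chunks amortize the per-task IPC overhead.
            list(executor.map(work, items, chunksize=chunksize))
        print(f"chunksize={chunksize}: {time.perf_counter() - start:.3f}s")
```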
### Has this already been discussed elsewhere?
This is a minor feature, which does not need previous discussion elsewhere
### Links to previous discussion of this feature:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-117206
<!-- /gh-linked-prs -->
| b4fe02f595fcb9f78261920a268ef614821ec195 | 985917dc8d34e2d2f717f7a981580a8dcf18d53a |
python/cpython | python__cpython-117220 | # object.__sizeof__(1) has different output between 3.9.10 and 3.12.2
# Bug report
### Bug description:
on a Windows x64 machine
```python
>>> #python 3.9.10
>>> object.__sizeof__(1)
28
>>>
>>> #python 3.12.2
>>> object.__sizeof__(1)
56
>>>
```
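A quick way to see the discrepancy is to compare the generic accounting against int's own (a sketch; exact numbers are version- and platform-dependent, so none are shown):

```python
import sys

n = 1
# int's own __sizeof__ accounting:
print(int.__sizeof__(n))
# generic accounting derived from tp_basicsize/tp_itemsize:
print(object.__sizeof__(n))
# sys.getsizeof uses the most derived __sizeof__:
print(sys.getsizeof(n))
```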
### CPython versions tested on:
3.9, 3.12
### Operating systems tested on:
Windows
<!-- gh-linked-prs -->
### Linked PRs
* gh-117220
* gh-119456
* gh-127605
<!-- /gh-linked-prs -->
| 406ffb5293a8c9ca315bf63de1ee36a9b33f9aaf | c85e3526736d1cf8226686fdf4f5117e105a7b13 |
python/cpython | python__cpython-117198 | # Wrong formatting in the "What's new in Python 3.13" article, "Improved Modules" section.
There is incorrect formatting in the "What's new in Python 3.13" article, "Improved Modules" section: the name of the `base64` module is rendered smaller than the other modules' names. Everything becomes clear if you look at the [source code of the article markup](https://github.com/python/cpython/blob/main/Doc/whatsnew/3.13.rst?plain=1#L286):
```rst
asyncio
-------
...
base64
--- # <--
```
It can be seen that there are too few `-` characters under the word `base64`: there should be as many as the word length (6), but there are only 3. Therefore, you just need to add three more `-`.
I could have submitted a pull request myself, but it takes too long ;) so, can you do it?
<!-- gh-linked-prs -->
### Linked PRs
* gh-117198
<!-- /gh-linked-prs -->
| 78a651fd7fbe7a3d1702e40f4cbfa72d87241ef0 | f267d5bf2a99fbeb26a720d1c87c1f0557424b14 |
python/cpython | python__cpython-117203 | # Both test_flush_reparse_deferral_disabled tests fail with older libexpat (2.4.4)
# Bug report
### Bug description:
Running the test suite for 3.9.19 (and other currently released versions) breaks with the old `libexpat` (of course, patched for the security issues):
```
[ 684s] 0:04:10 load avg: 0.64 Re-running test_sax in verbose mode (matching: test_flush_re
parse_deferral_disabled)
[ 684s] test_flush_reparse_deferral_disabled (test.test_sax.ExpatReaderTest) ... FAIL
[ 684s]
[ 684s] ======================================================================
[ 684s] FAIL: test_flush_reparse_deferral_disabled (test.test_sax.ExpatReaderTest)
[ 684s] ----------------------------------------------------------------------
[ 684s] Traceback (most recent call last):
[ 684s] File "/home/abuild/rpmbuild/BUILD/Python-3.9.19/Lib/test/test_sax.py", line 1251,
in test_flush_reparse_deferral_disabled
[ 684s] self.assertEqual(result.getvalue(), start) # i.e. no elements started
[ 684s] AssertionError: b'<?xml version="1.0" encoding="iso-8859-1"?>\n<doc>' != b'<?xml ve
rsion="1.0" encoding="iso-8859-1"?>\n'
[ 684s]
[ 684s] ----------------------------------------------------------------------
[ 684s] Ran 1 test in 0.001s
[ 684s]
[ 684s] FAILED (failures=1)
[ 684s] test test_sax failed
[ 684s] 0:04:10 load avg: 0.64 Re-running test_xml_etree in verbose mode (matching: test_fl
ush_reparse_deferral_disabled)
[ 684s] test_flush_reparse_deferral_disabled (test.test_xml_etree.XMLPullParserTest) ... FA
IL
[ 684s]
[ 684s] ======================================================================
[ 684s] FAIL: test_flush_reparse_deferral_disabled (test.test_xml_etree.XMLPullParserTest)
[ 684s] ----------------------------------------------------------------------
[ 684s] Traceback (most recent call last):
[ 684s] File "/home/abuild/rpmbuild/BUILD/Python-3.9.19/Lib/test/test_xml_etree.py", line
1659, in test_flush_reparse_deferral_disabled
[ 684s] self.assert_event_tags(parser, []) # i.e. no elements started
[ 684s] File "/home/abuild/rpmbuild/BUILD/Python-3.9.19/Lib/test/test_xml_etree.py", line
1395, in assert_event_tags
[ 684s] self.assertEqual([(action, elem.tag) for action, elem in events],
[ 684s] AssertionError: Lists differ: [('start', 'doc')] != []
[ 684s]
[ 684s] First list contains 1 additional elements.
[ 684s] First extra element 0:
[ 684s] ('start', 'doc')
[ 684s]
[ 684s] - [('start', 'doc')]
[ 684s] + []
[ 684s]
[ 684s] ----------------------------------------------------------------------
[ 684s] Ran 1 test in 0.001s
```
[Complete build log](https://github.com/python/cpython/files/14733899/_log.txt) with all package versions and steps taken recorded.
### CPython versions tested on:
3.8, 3.9, 3.10
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-117203
* gh-117244
* gh-117245
* gh-117246
* gh-117247
* gh-117248
<!-- /gh-linked-prs -->
| 9f74e86c78853c101a23e938f8e32ea838d8f62e | 872e212378ef86392069034afd80bb53896fd93d |
python/cpython | python__cpython-117184 | # new `TestUopsOptimization` test case failing an assert
# Bug report
### Bug description:
While writing test cases for #116168, I ran into a case that failed on `main` too. Here's the test as a diff. It's under `test_opt.TestUopsOptimization`:
```diff
diff --git a/Lib/test/test_capi/test_opt.py b/Lib/test/test_capi/test_opt.py
index b0859a382d..269753e4c1 100644
--- a/Lib/test/test_capi/test_opt.py
+++ b/Lib/test/test_capi/test_opt.py
@@ -955,6 +955,31 @@ def testfunc(n):
_, ex = self._run_with_optimizer(testfunc, 16)
self.assertIsNone(ex)
+ def test_many_nested(self):
+ # overflow the trace_stack
+ def dummy_a(x):
+ return x
+ def dummy_b(x):
+ return dummy_a(x)
+ def dummy_c(x):
+ return dummy_b(x)
+ def dummy_d(x):
+ return dummy_c(x)
+ def dummy_e(x):
+ return dummy_d(x)
+ def dummy_f(x):
+ return dummy_e(x)
+ def dummy_g(x):
+ return dummy_f(x)
+ def dummy_h(x):
+ return dummy_g(x)
+ def testfunc(n):
+ a = 0
+ for _ in range(n):
+ a += dummy_h(n)
+ return a
+
+ self._run_with_optimizer(testfunc, 32)
if __name__ == "__main__":
unittest.main()
```
I'm running this locally (source build of `main` `9967b568edd2e35b0415c14c7242f3ca2c0dc03d` on an M1 mac).
Command:
```
./python.exe -m test test.test_capi.test_opt
```
Output:
```
Using random seed: 1780919823
Raised RLIMIT_NOFILE: 256 -> 1024
0:00:00 load avg: 2.88 Run 1 test sequentially
0:00:00 load avg: 2.88 [1/1] test.test_capi.test_opt
Assertion failed: ((this_instr + 2)->opcode == _PUSH_FRAME), function optimize_uops, file optimizer_cases.c.h, line 1600.
Fatal Python error: Aborted
Current thread 0x000000020140a100 (most recent call first):
File "/Users/peterlazorchak/repos/oss-clones/forks/cpython/Lib/test/test_capi/test_opt.py", line 979 in testfunc
File "/Users/peterlazorchak/repos/oss-clones/forks/cpython/Lib/test/test_capi/test_opt.py", line 588 in _run_with_optimizer
File "/Users/peterlazorchak/repos/oss-clones/forks/cpython/Lib/test/test_capi/test_opt.py", line 982 in test_many_nested
File "/Users/peterlazorchak/repos/oss-clones/forks/cpython/Lib/unittest/case.py", line 589 in _callTestMethod
....[truncated for brevity]....
File "/Users/peterlazorchak/repos/oss-clones/forks/cpython/Lib/runpy.py", line 88 in _run_code
File "/Users/peterlazorchak/repos/oss-clones/forks/cpython/Lib/runpy.py", line 198 in _run_module_as_main
Extension modules: _testcapi, _testinternalcapi (total: 2)
zsh: abort ./python.exe -m test test.test_capi.test_opt
```
This assertion is in the optimizer case for `_INIT_CALL_PY_EXACT_ARGS`. I haven't investigated further than that.
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
macOS
<!-- gh-linked-prs -->
### Linked PRs
* gh-117184
<!-- /gh-linked-prs -->
| 6c83352bfe78a7d567c8d76257df6eb91d5a7245 | f11d0d8be8af28e1368c3c7c116218cf65ddf93e |
python/cpython | python__cpython-117179 | # 3.13.0a5 broke `importlib.util.LazyLoader`
# Bug report
### Bug description:
Test program:
```python
# An importlib.util.LazyLoader test, based on example from
# https://docs.python.org/3/library/importlib.html#implementing-lazy-imports
import sys
import importlib.util
def lazy_import(name):
spec = importlib.util.find_spec(name)
loader = importlib.util.LazyLoader(spec.loader)
spec.loader = loader
module = importlib.util.module_from_spec(spec)
sys.modules[name] = module
loader.exec_module(module)
return module
# Lazy-load the module...
lazy_module = lazy_import(sys.argv[1])
# ... and then trigger load by listing its contents
print(dir(lazy_module))
```
Running under python 3.13.0a4:
```python
$ python3 test_program.py json
['JSONDecodeError', 'JSONDecoder', 'JSONEncoder', '__all__', '__author__', '__builtins__', '__cached__', '__doc__', '__file__', '__loader__', '__name__', '__package__', '__path__', '__spec__', '__version__', '_default_decoder', '_default_encoder', 'codecs', 'decoder', 'detect_encoding', 'dump', 'dumps', 'encoder', 'load', 'loads', 'scanner']
```
```python
$ python3 test_program.py typing
['ABCMeta', 'AbstractSet', 'Annotated', 'Any', 'AnyStr', 'AsyncGenerator', 'AsyncIterable', 'AsyncIterator', 'Awaitable', 'BinaryIO', 'ByteString', 'CT_co', 'Callable', 'ChainMap', 'ClassVar', 'Collection', 'Concatenate', 'Container', 'Coroutine', 'Counter', 'DefaultDict', 'Deque', 'Dict', 'EXCLUDED_ATTRIBUTES', 'Final', 'ForwardRef', 'FrozenSet', 'Generator', 'Generic', 'GenericAlias', 'Hashable', 'IO', 'ItemsView', 'Iterable', 'Iterator', 'KT', 'KeysView', 'List', 'Literal', 'LiteralString', 'Mapping', 'MappingView', 'MethodDescriptorType', 'MethodWrapperType', 'MutableMapping', 'MutableSequence', 'MutableSet', 'NamedTuple', 'NamedTupleMeta', 'Never', 'NewType', 'NoReturn', 'NotRequired', 'Optional', 'OrderedDict', 'ParamSpec', 'ParamSpecArgs', 'ParamSpecKwargs', 'Protocol', 'Required', 'Reversible', 'Self', 'Sequence', 'Set', 'Sized', 'SupportsAbs', 'SupportsBytes', 'SupportsComplex', 'SupportsFloat', 'SupportsIndex', 'SupportsInt', 'SupportsRound', 'T', 'TYPE_CHECKING', 'T_co', 'T_contra', 'Text', 'TextIO', 'Tuple', 'Type', 'TypeAlias', 'TypeAliasType', 'TypeGuard', 'TypeVar', 'TypeVarTuple', 'TypedDict', 'Union', 'Unpack', 'VT', 'VT_co', 'V_co', 'ValuesView', 'WrapperDescriptorType', '_ASSERT_NEVER_REPR_MAX_LENGTH', '_AnnotatedAlias', '_AnyMeta', '_BaseGenericAlias', '_CallableGenericAlias', '_CallableType', '_ConcatenateGenericAlias', '_DeprecatedGenericAlias', '_Final', '_Func', '_GenericAlias', '_IdentityCallable', '_LiteralGenericAlias', '_NamedTuple', '_NotIterable', '_PROTO_ALLOWLIST', '_ProtocolMeta', '_SPECIAL_NAMES', '_Sentinel', '_SpecialForm', '_SpecialGenericAlias', '_TYPING_INTERNALS', '_TupleType', '_TypedCacheSpecialForm', '_TypedDict', '_TypedDictMeta', '_TypingEllipsis', '_UnionGenericAlias', '_UnpackGenericAlias', '__all__', '__builtins__', '__cached__', '__doc__', '__file__', '__getattr__', '__loader__', '__name__', '__package__', '__spec__', '_abc_instancecheck', '_abc_subclasscheck', '_alias', '_allow_reckless_class_checks', 
'_allowed_types', '_caches', '_caller', '_check_generic', '_cleanups', '_collect_parameters', '_deduplicate', '_eval_type', '_flatten_literal_params', '_generic_class_getitem', '_generic_init_subclass', '_get_protocol_attrs', '_idfunc', '_is_dunder', '_is_param_expr', '_is_typevar_like', '_is_unpacked_typevartuple', '_lazy_load_getattr_static', '_make_nmtuple', '_make_union', '_namedtuple_mro_entries', '_no_init_or_replace_init', '_overload_dummy', '_overload_registry', '_paramspec_prepare_subst', '_paramspec_subst', '_prohibited', '_proto_hook', '_remove_dups_flatten', '_sentinel', '_should_unflatten_callable_args', '_special', '_strip_annotations', '_tp_cache', '_type_check', '_type_check_issubclass_arg_1', '_type_convert', '_type_repr', '_typevar_subst', '_typevartuple_prepare_subst', '_unpack_args', '_value_and_type_iter', 'abstractmethod', 'assert_never', 'assert_type', 'cast', 'clear_overloads', 'collections', 'copyreg', 'dataclass_transform', 'defaultdict', 'final', 'functools', 'get_args', 'get_origin', 'get_overloads', 'get_protocol_members', 'get_type_hints', 'is_protocol', 'is_typeddict', 'no_type_check', 'no_type_check_decorator', 'operator', 'overload', 'override', 'reveal_type', 'runtime_checkable', 'sys', 'types']
```
Running under python 3.13.0a5:
```pytb
$ python3 test_program.py json
Traceback (most recent call last):
File "/home/rok/tmp/pyi-py313/test_program.py", line 21, in <module>
print(dir(lazy_module))
~~~^^^^^^^^^^^^^
File "<frozen importlib.util>", line 209, in __getattribute__
File "<frozen importlib._bootstrap_external>", line 1015, in exec_module
File "<frozen importlib._bootstrap>", line 488, in _call_with_frames_removed
File "/home/rok/python3.13-bin/lib/python3.13/json/__init__.py", line 106, in <module>
from .decoder import JSONDecoder, JSONDecodeError
File "/home/rok/python3.13-bin/lib/python3.13/json/decoder.py", line 5, in <module>
from json import scanner
File "<frozen importlib.util>", line 209, in __getattribute__
File "/home/rok/python3.13-bin/lib/python3.13/json/__init__.py", line 106, in <module>
from .decoder import JSONDecoder, JSONDecodeError
ImportError: cannot import name 'JSONDecoder' from partially initialized module 'json.decoder' (most likely due to a circular import) (/home/rok/python3.13-bin/lib/python3.13/json/decoder.py)
```
```pytb
$ python3 test_program.py typing
Traceback (most recent call last):
File "/home/rok/tmp/pyi-py313/test_program.py", line 21, in <module>
print(dir(lazy_module))
~~~^^^^^^^^^^^^^
File "<frozen importlib.util>", line 209, in __getattribute__
File "<frozen importlib._bootstrap_external>", line 1015, in exec_module
File "<frozen importlib._bootstrap>", line 488, in _call_with_frames_removed
File "/home/rok/python3.13-bin/lib/python3.13/typing.py", line 1961, in <module>
class Protocol(Generic, metaclass=_ProtocolMeta):
...<49 lines>...
cls.__init__ = _no_init_or_replace_init
File "/home/rok/python3.13-bin/lib/python3.13/typing.py", line 1870, in __new__
return super().__new__(mcls, name, bases, namespace, **kwargs)
~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<frozen abc>", line 106, in __new__
File "<frozen importlib.util>", line 209, in __getattribute__
File "<frozen importlib._bootstrap_external>", line 1015, in exec_module
File "<frozen importlib._bootstrap>", line 488, in _call_with_frames_removed
File "/home/rok/python3.13-bin/lib/python3.13/typing.py", line 1961, in <module>
class Protocol(Generic, metaclass=_ProtocolMeta):
...<49 lines>...
cls.__init__ = _no_init_or_replace_init
File "/home/rok/python3.13-bin/lib/python3.13/typing.py", line 1870, in __new__
return super().__new__(mcls, name, bases, namespace, **kwargs)
~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[...]
File "/home/rok/python3.13-bin/lib/python3.13/typing.py", line 1870, in __new__
return super().__new__(mcls, name, bases, namespace, **kwargs)
~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<frozen abc>", line 106, in __new__
File "<frozen importlib.util>", line 209, in __getattribute__
File "<frozen importlib._bootstrap_external>", line 1015, in exec_module
File "<frozen importlib._bootstrap>", line 488, in _call_with_frames_removed
File "/home/rok/python3.13-bin/lib/python3.13/typing.py", line 1961, in <module>
class Protocol(Generic, metaclass=_ProtocolMeta):
...<49 lines>...
cls.__init__ = _no_init_or_replace_init
File "/home/rok/python3.13-bin/lib/python3.13/typing.py", line 1870, in __new__
return super().__new__(mcls, name, bases, namespace, **kwargs)
~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<frozen abc>", line 106, in __new__
File "<frozen importlib.util>", line 209, in __getattribute__
File "<frozen importlib._bootstrap_external>", line 1015, in exec_module
File "<frozen importlib._bootstrap>", line 488, in _call_with_frames_removed
File "/home/rok/python3.13-bin/lib/python3.13/typing.py", line 1961, in <module>
class Protocol(Generic, metaclass=_ProtocolMeta):
...<49 lines>...
cls.__init__ = _no_init_or_replace_init
File "/home/rok/python3.13-bin/lib/python3.13/typing.py", line 1870, in __new__
return super().__new__(mcls, name, bases, namespace, **kwargs)
~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<frozen abc>", line 106, in __new__
File "<frozen importlib.util>", line 209, in __getattribute__
File "<frozen importlib._bootstrap_external>", line 1011, in exec_module
File "<frozen importlib._bootstrap_external>", line 1089, in get_code
RecursionError: maximum recursion depth exceeded
```
Bisection points at 200271c61db44d90759f8a8934949aefd72d5724 from #114781. Since this was backported to 3.11 and 3.12 branches, I expect to see the same problem in the next 3.11 and 3.12 releases.
At a cursory glance, it seems that moving `self.__class__ = types.ModuleType` to the very end of loading does not play well with modules/packages whose initialization ends up referring to themselves (e.g., `from . import something`, or by importing a module that ends up referring to the lazy-loaded module).
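The failure mode can be modeled outside importlib with a toy proxy (all names here are hypothetical; this is not the real `LazyLoader` code): as long as the proxy keeps intercepting attribute access until loading has *finished*, any self-referring access performed by the module body re-enters the loader.

```python
class ToyLazy:
    """Toy stand-in for a lazy module proxy; not the importlib code."""
    def __init__(self, load):
        object.__setattr__(self, "_load", load)
        object.__setattr__(self, "_loading", False)

    def __getattr__(self, name):
        if object.__getattribute__(self, "_loading"):
            # The module body touched the proxy while it was still
            # executing: the same shape of failure as the json and
            # typing tracebacks above.
            raise RecursionError("self-reference during load")
        object.__setattr__(self, "_loading", True)
        object.__getattribute__(self, "_load")(self)
        # A real loader would only now swap __class__ to ModuleType.
        return object.__getattribute__(self, name)

def module_body(mod):
    # Stand-in for a body that refers back to its own module,
    # e.g. json/decoder.py doing `from json import scanner`.
    mod.some_attr

try:
    ToyLazy(module_body).trigger
except RecursionError as exc:
    print("caught:", exc)
```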
### CPython versions tested on:
3.13
### Operating systems tested on:
Linux, macOS, Windows
<!-- gh-linked-prs -->
### Linked PRs
* gh-117179
* gh-117319
* gh-117320
<!-- /gh-linked-prs -->
| 9a1e55b8c5723206116f7016921be3937ef2f4e5 | 4c71d51a4b7989fc8754ba512c40e21666f9db0d |
python/cpython | python__cpython-117199 | # New warning: `'initializing': conversion from 'uint64_t' to 'uintptr_t', possible loss of data [D:\a\cpython\cpython\PCbuild\_freeze_module.vcxproj]`
# Bug report
### Bug description:
Popped up in https://github.com/python/cpython/pull/117171/files and https://github.com/python/cpython/pull/117170/files
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-117199
<!-- /gh-linked-prs -->
| eebea7e515462b503632ada74923ec3246599c9c | 83485a095363dad6c97b19af2826ca0c34343bfc |
python/cpython | python__cpython-117500 | # `linecache.cache` sometimes has an entry for `<string>` under Python 3.13.0a5
# Bug report
### Bug description:
I noticed this via Hypothesis' pretty-printer for `lambda` functions, and tracked the divergence through the `inspect` module to `linecache`:
```python
import linecache
def test():
print(linecache.cache["<string>"]) # expected to raise KeyError
assert False
```
If I run this snippet with `python3.13 -Wignore -m pytest repro.py`, or any older Python version, you'll get the expected `KeyError`. But if I append `-n2` to the command, it prints `(44, None, ['import sys;exec(eval(sys.stdin.readline()))\n'], '<string>')`!
(`python3.13 -m pip install pytest pytest-xdist` will get the dependencies for this)
That's the [`popen` bootstrap line from `execnet`](https://github.com/pytest-dev/execnet/blob/c302c836a3a4c00918d3a98cb5ef4ef7c9128114/src/execnet/gateway_io.py#L52), which handles running code across multiple processes for `pytest-xdist`. At this point I've found `linecache.cache.pop("<string>", None)` to be an acceptable workaround, and since I don't have any particular knowledge of either the CPython or execnet code that's as far as I investigated.
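The workaround can be sketched as follows; the cache entry below simulates what execnet's bootstrap leaves behind, and the tuple matches linecache's `(size, mtime, lines, fullname)` shape:

```python
import linecache

# Simulate the stale entry that pytest-xdist/execnet leaves behind.
linecache.cache["<string>"] = (
    44, None, ["import sys;exec(eval(sys.stdin.readline()))\n"], "<string>",
)

# The workaround: discard it so later lookups raise KeyError again,
# matching the behavior of older Python versions.
linecache.cache.pop("<string>", None)
assert "<string>" not in linecache.cache
```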
### CPython versions tested on:
3.8, 3.9, 3.10, 3.11, 3.12, 3.13
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-117500
* gh-118288
* gh-131060
* gh-131065
* gh-131095
* gh-131120
* gh-131143
* gh-131836
* gh-131841
* gh-131850
<!-- /gh-linked-prs -->
| a931a8b32415f311008dbb3f09079aae1e6d7a3d | ecdf6b15b0c0570c3c3302ab95bdbfd3007ea941 |
python/cpython | python__cpython-117190 | # FAIL: test_makefile_test_folders (test.test_tools.test_makefile.TestMakefile.test_makefile_test_folders)
### Bug description:
Configuration:
```sh
./configure --with-pydebug
```
Test:
```python
./python -m test test_tools --junit-xml test-resuls.xml -j2 --timeout 1200 -v
```
Output:
```python
FAIL: test_makefile_test_folders (test.test_tools.test_makefile.TestMakefile.test_makefile_test_folders) (relpath='test/test_lib2to3')
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/arf/cpython/Lib/test/test_tools/test_makefile.py", line 51, in test_makefile_test_folders
self.assertIn(
~~~~~~~~~~~~~^
relpath,
^^^^^^^^
...<4 lines>...
)
^
)
^
AssertionError: 'test/test_lib2to3' not found in ['idlelib/idle_test', 'test', 'test/archivetestdata', 'test/audiodata', 'test/certdata', 'test/certdata/capath', 'test/cjkencodings', 'test/configdata', 'test/crashers', 'test/data', 'test/decimaltestdata', 'test/dtracedata', 'test/encoded_modules', 'test/leakers', 'test/libregrtest', 'test/mathdata', 'test/regrtestdata', 'test/regrtestdata/import_from_tests', 'test/regrtestdata/import_from_tests/test_regrtest_b', 'test/subprocessdata', 'test/support', 'test/support/_hypothesis_stubs', 'test/support/interpreters', 'test/test_asyncio', 'test/test_capi', 'test/test_cext', 'test/test_concurrent_futures', 'test/test_cppext', 'test/test_ctypes', 'test/test_dataclasses', 'test/test_doctest', 'test/test_email', 'test/test_email/data', 'test/test_future_stmt', 'test/test_gdb', 'test/test_import', 'test/test_import/data', 'test/test_import/data/circular_imports', 'test/test_import/data/circular_imports/subpkg', 'test/test_import/data/circular_imports/subpkg2', 'test/test_import/data/circular_imports/subpkg2/parent', 'test/test_import/data/package', 'test/test_import/data/package2', 'test/test_import/data/unwritable', 'test/test_importlib', 'test/test_importlib/builtin', 'test/test_importlib/extension', 'test/test_importlib/frozen', 'test/test_importlib/import_', 'test/test_importlib/metadata', 'test/test_importlib/metadata/data', 'test/test_importlib/metadata/data/sources', 'test/test_importlib/metadata/data/sources/example', 'test/test_importlib/metadata/data/sources/example/example', 'test/test_importlib/metadata/data/sources/example2', 'test/test_importlib/metadata/data/sources/example2/example2', 'test/test_importlib/namespace_pkgs', 'test/test_importlib/namespace_pkgs/both_portions', 'test/test_importlib/namespace_pkgs/both_portions/foo', 'test/test_importlib/namespace_pkgs/module_and_namespace_package', 'test/test_importlib/namespace_pkgs/module_and_namespace_package/a_test', 
'test/test_importlib/namespace_pkgs/not_a_namespace_pkg', 'test/test_importlib/namespace_pkgs/not_a_namespace_pkg/foo', 'test/test_importlib/namespace_pkgs/portion1', 'test/test_importlib/namespace_pkgs/portion1/foo', 'test/test_importlib/namespace_pkgs/portion2', 'test/test_importlib/namespace_pkgs/portion2/foo', 'test/test_importlib/namespace_pkgs/project1', 'test/test_importlib/namespace_pkgs/project1/parent', 'test/test_importlib/namespace_pkgs/project1/parent/child', 'test/test_importlib/namespace_pkgs/project2', 'test/test_importlib/namespace_pkgs/project2/parent', 'test/test_importlib/namespace_pkgs/project2/parent/child', 'test/test_importlib/namespace_pkgs/project3', 'test/test_importlib/namespace_pkgs/project3/parent', 'test/test_importlib/namespace_pkgs/project3/parent/child', 'test/test_importlib/partial', 'test/test_importlib/resources', 'test/test_importlib/resources/data01', 'test/test_importlib/resources/data01/subdirectory', 'test/test_importlib/resources/data02', 'test/test_importlib/resources/data02/one', 'test/test_importlib/resources/data02/subdirectory', 'test/test_importlib/resources/data02/subdirectory/subsubdir', 'test/test_importlib/resources/data02/two', 'test/test_importlib/resources/data03', 'test/test_importlib/resources/data03/namespace', 'test/test_importlib/resources/data03/namespace/portion1', 'test/test_importlib/resources/data03/namespace/portion2', 'test/test_importlib/resources/namespacedata01', 'test/test_importlib/resources/zipdata01', 'test/test_importlib/resources/zipdata02', 'test/test_importlib/source', 'test/test_inspect', 'test/test_interpreters', 'test/test_json', 'test/test_module', 'test/test_multiprocessing_fork', 'test/test_multiprocessing_forkserver', 'test/test_multiprocessing_spawn', 'test/test_pathlib', 'test/test_peg_generator', 'test/test_pydoc', 'test/test_sqlite3', 'test/test_tkinter', 'test/test_tomllib', 'test/test_tomllib/data', 'test/test_tomllib/data/invalid', 'test/test_tomllib/data/invalid/array', 
'test/test_tomllib/data/invalid/array-of-tables', 'test/test_tomllib/data/invalid/boolean', 'test/test_tomllib/data/invalid/dates-and-times', 'test/test_tomllib/data/invalid/dotted-keys', 'test/test_tomllib/data/invalid/inline-table', 'test/test_tomllib/data/invalid/keys-and-vals', 'test/test_tomllib/data/invalid/literal-str', 'test/test_tomllib/data/invalid/multiline-basic-str', 'test/test_tomllib/data/invalid/multiline-literal-str', 'test/test_tomllib/data/invalid/table', 'test/test_tomllib/data/valid', 'test/test_tomllib/data/valid/array', 'test/test_tomllib/data/valid/dates-and-times', 'test/test_tomllib/data/valid/multiline-basic-str', 'test/test_tools', 'test/test_ttk', 'test/test_unittest', 'test/test_unittest/testmock', 'test/test_warnings', 'test/test_warnings/data', 'test/test_zipfile', 'test/test_zipfile/_path', 'test/test_zoneinfo', 'test/test_zoneinfo/data', 'test/tkinterdata', 'test/tokenizedata', 'test/tracedmodules', 'test/typinganndata', 'test/wheeldata', 'test/xmltestdata', 'test/xmltestdata/c14n-20'] : 'test/test_lib2to3' is not included in the Makefile's list of test directories to install
```
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-117190
* gh-117367
<!-- /gh-linked-prs -->
| d9cfe7e565a6e2dc15747a904736264e31a10be4 | 35b6c4a4da201a947b2ceb96ae4c0d83d4d2df4f |
python/cpython | python__cpython-118037 | # performance: Update io.DEFAULT_BUFFER_SIZE to make python IO faster?
# Bug report
### Bug description:
Hello,
I was doing some benchmarking of python and package installation.
That got me down a rabbit hole of buffering optimizations between pip, requests, urllib and the cpython interpreter.
TL;DR I would like to discuss updating the value of io.DEFAULT_BUFFER_SIZE. It has been set to 8192 for 16 years.
original commit: https://github.com/python/cpython/blame/main/Lib/_pyio.py#L27
It was a reasonable size given hardware and OS at the time. It's far from optimal today.
Remember, in 2008 you'd run a 32-bit operating system with less than 2 GB of memory available, shared between all running applications.
Buffers had to be small, a few kB; it wasn't conceivable to have buffers measured in entire MB.
I will attach benchmarks in the next messages showing 3 to 5 times write performance improvement when adjusting the buffer size.
I think the python interpreter can adopt a buffer size somewhere between 64k and 256k by default.
I think 64k is the minimum for python and it should be safe to adjust to.
Higher is better for performance in most cases, though there may be some cases where it's unwanted
(seek and small read/writes, unwanted trigger of write ahead, slow devices with throughput measured in kB/s where you don't want to block for long)
In addition, I think there is a bug in open() on Linux.
open() sets the buffer size to the device block size on Linux when available (st_blksize, 4k on most disks), instead of io.DEFAULT_BUFFER_SIZE=8k.
I believe this is unwanted behavior: the block size is the minimal size for IO operations on the device, not the optimal size, and it should not be preferred.
I think open() on Linux should be corrected to use a default buffer size of `max(st_blksize, io.DEFAULT_BUFFER_SIZE)` instead of `st_blksize`?
Related, the doc might be misleading for saying st_blksize is the preferred size for efficient I/O. https://github.com/python/cpython/blob/main/Doc/library/os.rst#L3181
The GNU doc was updated to clarify: "This is not guaranteed to give optimum performance" https://www.gnu.org/software/gnulib/manual/html_node/stat_002dsize.html
Thoughts?
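The proposed `open()` fix can be sketched as follows (the helper name is hypothetical; the real logic lives in the buffering setup of `open()` in `Lib/_pyio.py` and `Modules/_io`):

```python
import io
import os

def pick_buffer_size(path):
    # Hypothetical helper: let the device block size *grow* the
    # buffer, but never shrink it below io.DEFAULT_BUFFER_SIZE.
    try:
        blksize = os.stat(path).st_blksize
    except (OSError, AttributeError):
        # Missing file, or platform without st_blksize (e.g. Windows).
        blksize = 0
    return max(blksize, io.DEFAULT_BUFFER_SIZE)
```

Compared with the current behavior, a 4k `st_blksize` would no longer halve the default 8k buffer.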
Annex: some historical context and technical considerations around buffering.
On the hardware side:
* HDD had 512 bytes blocks historically, then HDD moved to 4096 bytes blocks in the 2010s.
* SSD have 4096 bytes blocks as far as I know.
On filesystems:
* buffer size should never be smaller than device and filesystem blocksize
* I think ext3, ext4, xfs, ntfs, etc... follow the device block size of 4k as default, though they can be configured for any block size.
* NTFS is capped to 16TB maximum disk size with 4k blocks.
* microsoft recommends 64k block size for windows server 2019+ and larger disks https://learn.microsoft.com/en-us/windows-server/storage/file-server/ntfs-overview
* RAID setups and assimilated with zfs/btrfs/xfs can have custom block size, I think anywhere 4kB-1MB. I don't know if there is any consensus, I think anything 16k-32k-64k-128k can be seen in the wild.
On network filesystems:
* shared network home directories are common on linux (NFS share) and windows (SMB share).
* enterprise storage vendors like Pure/Vast/NetApp recommend 524488 or 1048576 bytes for IO.
* see rsize wsize in mount settings:
* `host:path on path type nfs (rw,relatime,vers=3,rsize=1048576,wsize=1048576,acregmin=60,acdirmin=60,hard,proto=tcp,nconnect=8,mountproto=tcp, ...)`
* for windows I cannot find documentation for network clients, though the windows server should have the NTFS filesystem with at least 64k block size as per microsoft recommendation above.
On pipes:
* buffering is used by pipes and for interprocess communications. see subprocess.py
* POSIX guarantees that writes to pipes are atomic up to PIPE_BUF: 4096 bytes on the Linux kernel, and guaranteed to be at least 512 bytes by the standard.
* Python had a default of io.DEFAULT_BUFFER_SIZE=8192 so it never benefitted from that atomic property :D
On compression code, they probably all need to be adjusted:
* the buffer size is used by compression code in cpython: gzip.py lzma.py bz2.py
* I think lzma and bz2 are using the default size.
* gzip now uses a 128 kB read buffer: somebody realized it was very slow 2 years ago and rewrote the buffering to 128k.
* then somebody else realized last year it was still very slow to write and added an arbitrary write buffer 4*io.DEFAULT_BUFFER_SIZE.
* https://github.com/python/cpython/commit/eae7dad40255bad42e4abce53ff8143dcbc66af5
* https://github.com/python/cpython/issues/89550
* base64 is reading in chunks of 76 characters???
* https://github.com/python/cpython/blob/main/Lib/base64.py#L532
On network IO:
* On Linux, TCP read and write buffers were a minimum of 16k historically. The read buffer was increased to 64k in kernel v4.20, year 2018
* the buffer is resized dynamically with the TCP window, up to 4 MB write / 6 MB read; let's not get into TCP. See sysctl_tcp_rmem / sysctl_tcp_wmem
* linux code: https://github.com/torvalds/linux/blame/master/net/ipv4/tcp.c#L4775
* commit Sep 2018: https://github.com/torvalds/linux/commit/a337531b942bd8a03e7052444d7e36972aac2d92
* I think socket buffers are managed separately by the kernel, the io.DEFAULT_BUFFER_SIZE matters when you read a file and write to network, or read from network and write to file.
on HTTP, a large subset of networking:
* HTTP is mostly large file transfer and would benefit from a much larger buffer, but that's probably more of a concern for urllib/requests.
* requests.content is 10k chunk by default.
* requests iter_lines(chunk_size=512, decode_unicode=False, delimiter=None) is 512 chunk by default.
* requests iter_content(chunk_size=1, decode_unicode=False) is 1 byte by default
* source: set in 2012 https://github.com/psf/requests/blame/8dd3b26bf59808de24fd654699f592abf6de581e/src/requests/models.py#L80
note to self: remember to publish code and result in next message
### CPython versions tested on:
3.11
### Operating systems tested on:
Other
<!-- gh-linked-prs -->
### Linked PRs
* gh-118037
* gh-118144
* gh-119783
* gh-131052
<!-- /gh-linked-prs -->
| 8fa124868519eeda0a6dfe9191ceabd708d84fa7 | 23950beff84c39d50f48011e930f4c6ebf32fc73 |
python/cpython | python__cpython-117143 | # Convert _ctypes extension module to multi-phase init
# Feature or enhancement
### Proposal:
This issue is available to keep track of PRs, following the heap type conversion completed at #114314.
### TODO:
- ~[ ] Make `free_list` in `malloc_closure.c` per-module variables.~ [rejected](https://github.com/python/cpython/pull/117874#issuecomment-2096089529)
- [x] Fix intentional memory leaks of `StgInfo` during a finalization: [comment](https://github.com/python/cpython/pull/117181#issuecomment-2029764312).
- [x] Enable `Py_MOD_MULTIPLE_INTERPRETERS_SUPPORTED`(DONE): [comment](https://github.com/python/cpython/pull/117181#discussion_r1557516762).
FUTURE?: `Py_MOD_PER_INTERPRETER_GIL_SUPPORTED` after a compatible `PyGILState_Ensure()` is introduced, see also the links below.
### Links to documents:
* [**libffi: a foreign function interface library**](https://gensoft.pasteur.fr/docs/libffi/3.3/libffi.pdf)
* [**libffi - GitHub**](https://github.com/libffi/libffi)
### Links to previous discussion of this feature:
* #103092
* #114314
* **`PyGILState_Ensure()`** for sub-interpreters:
* #59956
* #55124
* [`_ctypes` patch](https://bugs.python.org/file20417/gilstateinterp.patch) (2011)
<!-- gh-linked-prs -->
### Linked PRs
* gh-117143
* gh-117181
* gh-117189
* gh-117544
* gh-117874
* gh-117875
* gh-118139
* gh-119424
* gh-119468
* gh-119991
* gh-120008
<!-- /gh-linked-prs -->
| dd44ab994b7262f0704d64996e0a1bc37b233407 | 3de09cadde788065a4f2d45117e789c9353bbd12 |
python/cpython | python__cpython-117135 | # Microoptimize glob() for include_hidden=True
If `include_hidden` is false (the default), names starting with a dot are filtered out unless the pattern itself starts with a dot. If `include_hidden` is true, no filtering is needed, but the current code still creates a generator expression that filters names with a condition which is always true.
It is unlikely to have a measurable effect in most cases, given the time spent in IO operations, `os.path.join()`, etc.
<!-- gh-linked-prs -->
### Linked PRs
* gh-117135
<!-- /gh-linked-prs -->
| 5a78f6e798d5c2af1dba2df6c9f1f1e5aac02a86 | 00baaa21de229a6db80ff2b84c2fd6ad1999a24c |
python/cpython | python__cpython-117128 | # glob tests miss `mypipe`
On my system, `test_glob_named_pipe` has been failing since it was added in #116421:
```
======================================================================
FAIL: test_glob_named_pipe (test.test_glob.GlobTests.test_glob_named_pipe)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/encukou/dev/cpython/Lib/test/test_glob.py", line 354, in test_glob_named_pipe
self.assertEqual(self.rglob('mypipe*'), [path])
~~~~~~~~~~^^^^^^^^^^^
File "/home/encukou/dev/cpython/Lib/test/test_glob.py", line 254, in rglob
return self.glob(*parts, recursive=True, **kwargs)
~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/encukou/dev/cpython/Lib/test/test_glob.py", line 93, in glob
self.assertCountEqual(
~~~~~~~~~~~~~~~~~~~~~^
glob.glob(pattern, dir_fd=self.dir_fd, **kwargs), res2)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AssertionError: Element counts were not equal:
First has 0, Second has 1: 'mypipe'
```
I assume that it's because the `self.dir_fd = os.open(self.tempdir)` in `setUp` is done before `mkfifo` in the test, and the contents at `open` time show up in the directory walk.
Indeed, if I change `setUp` to move some of the `mktemp`s before the `open`, lots more tests start failing.
I'm on Fedora 39, kernel `6.7.9-200.fc39.x86_64`, btrfs.
<!-- gh-linked-prs -->
### Linked PRs
* gh-117128
* gh-117149
* gh-117150
<!-- /gh-linked-prs -->
| 42ae924d278c48a719fb0ab86357f3235a9f7ab9 | 8383915031942f441f435a5ae800790116047b80 |
python/cpython | python__cpython-117115 | # Add `isdevdrive` to `posixpath`
# Feature or enhancement
### Proposal:
Four changes:
1. Add link to `genericpath.py` & remove redundant availability in the documentation for `os.path`
2. Deduplicate `lexists` & fallback `isjunction` implementations by defining them in `genericpath`.
3. Move fallback `ntpath.isdevdrive` to `genericpath` to provide the same interface for `posixpath` & `ntpath`.
4. Add `isdevdrive` to `ntpath.__all__`.
### Has this already been discussed elsewhere?
This is a minor feature, which does not need previous discussion elsewhere
### Links to previous discussion of this feature:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-117115
* gh-117249
* gh-117756
<!-- /gh-linked-prs -->
| 0821923aa979a72464c5da8dfa53a719bba5801c | c2276176d543a2fc2d57709c2787f99850fbb073 |
python/cpython | python__cpython-117111 | # Subclass of `typing.Any` cannot be instantiated with arguments
# Bug report
### Bug description:
```python
from typing import Any
class C(Any):
def __init__(self, v): ...
C(0) # TypeError: object.__new__() takes exactly one argument (the type to instantiate)
```
The fix should be easy. Just replace --
```python
return super().__new__(cls, *args, **kwargs)
```
with --
```python
return super().__new__(cls)
```
in `typing.Any.__new__(...)`, as the superclass is `object` and its `__new__` doesn't accept additional arguments.
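The failure mode can be demonstrated outside of `typing` with a hypothetical base class: `object.__new__` rejects extra arguments, so a pass-through `__new__` must drop them and leave their consumption to `__init__`, which is exactly the shape of the proposed fix.

```python
class PassThrough:
    # Mirrors the shape of typing.Any.__new__ after the fix: extra
    # constructor arguments are consumed by __init__, not __new__,
    # so they must not be forwarded to object.__new__.
    def __new__(cls, *args, **kwargs):
        return super().__new__(cls)

class C(PassThrough):
    def __init__(self, v):
        self.val = v

assert C(0).val == 0  # no TypeError from object.__new__
```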
### CPython versions tested on:
3.12
### Operating systems tested on:
Linux, Windows
<!-- gh-linked-prs -->
### Linked PRs
* gh-117111
* gh-117357
* gh-117358
<!-- /gh-linked-prs -->
| 8eec7ed714e65d616573b7331780b0aa43c6ed6a | a17f313e3958e825db9a83594c8471a984316536 |
python/cpython | python__cpython-117120 | # Major slowdown on large listcomp
# Bug report
### Bug description:
That started within the last 48 hours, on the main branch. Here's test code; it's possible it could be simplified
```python
if 1:
class Obj:
def __init__(self, i):
self.val = i
import sys
print(sys.version)
from time import perf_counter as now
start = now()
xs = [Obj(i) for i in range(2**21)]
finish = now()
print(finish - start)
```
Under 3.12.2 it takes about a second, but under current main about 21 seconds.
```none
C:\Code\Python\PCbuild>py \MyPy\small.py
3.12.2 (tags/v3.12.2:6abddd9, Feb 6 2024, 21:26:36) [MSC v.1937 64 bit (AMD64)]
0.9562742999987677
C:\Code\Python\PCbuild>amd64\python.exe \MyPy\small.py
3.13.0a5+ (heads/main:f4cc77d494, Mar 20 2024, 20:28:50) [MSC v.1939 64 bit (AMD64)]
20.99507549998816
```
If the size of the list is increased, it appears to suffer much worse than linear-time slowdown.
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Windows
<!-- gh-linked-prs -->
### Linked PRs
* gh-117120
* gh-117213
<!-- /gh-linked-prs -->
| e28477f214276db941e715eebc8cdfb96c1207d9 | e2e0b4b4b92694ba894e02b4a66fd87c166ed10f |
python/cpython | python__cpython-117092 | # Sync with importlib_metadata for Python 3.13
This issue tracks incorporating updates from importlib_metadata into CPython for Python 3.13, including:
<!-- gh-linked-prs -->
### Linked PRs
* gh-117092
* gh-117094
<!-- /gh-linked-prs -->
| 8ad88984200b2ccddc0a08229dd2f4c14d1a71fc | 7d446548ef53f6c3de1097c6d44cada6642ddc85 |
python/cpython | python__cpython-132595 | # Python 3.13a5 fails to build on AIX
# Bug report
### Bug description:
```
gcc -pthread -maix64 -bI:Modules/python.exp -Wl,-blibpath:/QOpenSys/pkgs/lib:/QOpenSys/usr/lib -lutil -Wl,-hlibpython3.13.so ...
ld: 0706-012 The -h flag is not recognized.
ld: 0706-006 Cannot find or open library file: -l ibpython3.13.so
ld:open(): No such file or directory
```
`-h` is not a valid linker flag on AIX. This seems to be a regression introduced in [fa1d675309c6a08b0833cf25cffe476c6166aba3](https://github.com/python/cpython/commit/fa1d675309c6a08b0833cf25cffe476c6166aba3#diff-1f0a8db227d22005511b0d90f5339b97db345917b863954b3b3ccb9ec308767c) as previously on AIX it would have gone into the else leg and not used this flag.
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Other
<!-- gh-linked-prs -->
### Linked PRs
* gh-132595
* gh-133838
* gh-133839
<!-- /gh-linked-prs -->
| 47f1722d8053fb4f79e68cba07cbf08fb58a511c | efcc42ba70fb09333a2be16401da731662e2984b |
python/cpython | python__cpython-117113 | # importlib.resources.abc.Traversable.joinpath docs incomplete
[`importlib.resources.abc.Traversable.joinpath`](https://docs.python.org/3.13/library/importlib.resources.abc.html#importlib.resources.abc.Traversable.joinpath) should mention that it can take multiple descendants.
The versionchanged note should mention that some providers might still not support this.
@jaraco, this was officially added in 3.11, right?
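Since the multi-descendant signature mirrors `pathlib`, a quick sketch using `PurePosixPath` (standing in for a `Traversable` here) shows what the docs should cover:

```python
from pathlib import PurePosixPath

# Traversable.joinpath has accepted multiple descendants since 3.11,
# matching pathlib's signature; PurePosixPath stands in for a Traversable.
base = PurePosixPath("pkg")
joined = base.joinpath("data", "subdir", "file.txt")
assert str(joined) == "pkg/data/subdir/file.txt"
```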
<!-- gh-linked-prs -->
### Linked PRs
* gh-117113
* gh-117571
<!-- /gh-linked-prs -->
| e569f9132b5bdc1c103116a020e19e3ccc20cf34 | 52f5b7f9e05fc4a25e385c046e0b091641674556 |
python/cpython | python__cpython-117069 | # Unreachable (useless) code in bytesio.c:resize_buffer()
# Bug report
### Bug description:
https://github.com/NGRsoftlab/cpython/blob/main/Modules/_io/bytesio.c#L158
```C
size_t alloc = PyBytes_GET_SIZE(self->buf);
/* skipped for short */
if (alloc > ((size_t)-1) / sizeof(char))
goto overflow;
```
This code is useless and the goto is unreachable, because the condition is always false:
1. 'alloc' has a type 'size_t' with minimum value '0' and a maximum value of size_t ('18446744073709551615' on x86_64)
2. ((size_t)-1) is a maximum value of size_t ('18446744073709551615' on x86_64)
3. size_t is built-in type for C
4. sizeof(char) is always 1 in C
Found by Linux Verification Center ([portal.linuxtesting.ru](http://portal.linuxtesting.ru/)) with SVACE.
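The dead guard can be checked numerically from Python (a sketch of the reasoning, not the C code itself):

```python
import ctypes

# sizeof(char) is 1 by definition, and (size_t)-1 wraps around to SIZE_MAX,
# so the guard `alloc > ((size_t)-1) / sizeof(char)` compares a size_t value
# against SIZE_MAX itself -- it can never be true.
bits = ctypes.sizeof(ctypes.c_size_t) * 8
size_max = (1 << bits) - 1        # value of the C expression (size_t)-1
sizeof_char = 1
assert all(alloc <= size_max // sizeof_char for alloc in (0, 1, size_max))
```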
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-117069
<!-- /gh-linked-prs -->
| 63d6f2623ef2aa90f51c6a928b96845b9b380d89 | 42ae924d278c48a719fb0ab86357f3235a9f7ab9 |
python/cpython | python__cpython-117067 | # Repeated optimizations in tier 2, due to the optimizer "failing" when it cannot optimize perfectly.
The tier 2 optimizer fails whenever it cannot optimize the entire trace, which results in a large number of repeated attempts to optimize the same code.
While we should add some form of backoff (#116968) to mitigate this, we should also fix the root cause.
If the optimizer can only optimize up to some point in the code, it should not throw away the work it has done.
<!-- gh-linked-prs -->
### Linked PRs
* gh-117067
<!-- /gh-linked-prs -->
| 63289b9dfbc7d87e81f1517422ee91b6b6d19531 | dcaf33a41d5d220523d71c9b35bc08f5b8405dac |
python/cpython | python__cpython-120854 | # Change `JUMP_TO_TOP` to a more general backwards jump for tier 2.
Currently we have the `_JUMP_TO_TOP` uop which jumps to the *second* uop in the tier 2 trace, as the first is reserved for `_START_EXECUTOR`.
Since this is a bit of an odd special case, we might as well add a more general jump backward.
Doing so would have a few advantages:
* It is less confusing
* It would actually simplify the JIT a bit
* It would allow loop invariant code motion for looping traces.
<!-- gh-linked-prs -->
### Linked PRs
* gh-120854
<!-- /gh-linked-prs -->
| a47abdb45d4f1c3195c324812c33b6ef1d9147da | ce1064e4c9bcfd673323ad690e60f86e1ab907bb |
python/cpython | python__cpython-117126 | # test_posix: test_sched_setaffinity() fails on CentOS9
# Bug report
Example of AMD64 CentOS9 3.x failure: https://buildbot.python.org/all/#/builders/838/builds/5082
```
======================================================================
FAIL: test_sched_setaffinity (test.test_posix.PosixTester.test_sched_setaffinity)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/buildbot/buildarea/3.x.cstratak-CentOS9-ppc64le/build/Lib/test/test_posix.py", line 1343, in test_sched_setaffinity
self.assertRaises(OSError, posix.sched_setaffinity, 0, [])
~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AssertionError: OSError not raised by sched_setaffinity
```
<!-- gh-linked-prs -->
### Linked PRs
* gh-117126
* gh-117137
* gh-117138
<!-- /gh-linked-prs -->
| 50f9b0b1e0fb181875751cef951351ed007b6397 | 0907871d43bffb613cbd560224e1a9db13d06c06 |
python/cpython | python__cpython-117059 | # macOS GUI and packaging suggestions are out of date
# Documentation
The current macOS usage guide doesn't mention several prominent GUI toolkits (Kivy, Toga) or app packaging tools (PyInstaller, Briefcase). The guide should be updated to include links to these projects.
<!-- gh-linked-prs -->
### Linked PRs
* gh-117059
* gh-117081
* gh-117082
<!-- /gh-linked-prs -->
| 44fbab43d8f3f2df07091d237824cf4fa1f6c57c | 9221ef2d8cb7f4cf37592eb650d4c8f972033000 |
python/cpython | python__cpython-117028 | # Store both function and code object in the function version cache
(Copied from https://github.com/faster-cpython/ideas/issues/665.)
The idea here is to avoid function version cache misses for generator expressions. (See https://github.com/faster-cpython/ideas/issues/664#issuecomment-2000948111.)
We have a complicated mechanism to reset the function version whenever `__code__`, `__defaults__` and a few other critical attributes are mutated. (BTW: nothing is affected by changes to `__annotations__`, and yet that is also considered critical; I will fix this.)
Why not instead just reset the function version to zero and stick to that? We then guarantee that the function version is either zero or matches the code object version.
Nothing changes for specialization except that `_PyFunction_GetVersionForCurrentState()` returns 0 for mutated functions. This is unlikely to affect any benchmark or other perf-critical real-world code.
The function version cache will double in size, and store both the function and the code object. When a function is deallocated or its version is reset to zero, it evicts itself from the cache, but keeps the code object. Code objects remove themselves from the cache when deallocated (and probably also evict the function object).
For Tier 2, when generating `_PUSH_FRAME` or `_POP_FRAME`, we can handle the case where the function version maps to a pair `(NULL, some_code_object)` -- we store `NULL` in the operand, but we use `some_code_object` to trace through. Globals removal will no longer work (boo hoo), but at least we still have a trace. Presumably at least *some* generator expressions don't use globals (they can still use builtins, which can be reached without the function object).
<!-- gh-linked-prs -->
### Linked PRs
* gh-117028
<!-- /gh-linked-prs -->
| 570a82d46abfebb9976961113fb0f8bb400ad182 | c85d84166a84a5cb2d724012726bad34229ad24e |
python/cpython | python__cpython-117042 | # "-X gil" is not included in the Python CLI help
This option was added in #116167. `PYTHON_GIL` was added in the help output, but `-X gil` was not.
<!-- gh-linked-prs -->
### Linked PRs
* gh-117042
<!-- /gh-linked-prs -->
| 2d17309cc719c41e02ffd1d6cac10f95a7e2359c | 332ac46c09cd500a16a5f03b53f038b1d9ce77ef |
python/cpython | python__cpython-117027 | # http.server misleading point in doc about "text/*" mime types
# Documentation
It's said in the [http.server docs](https://docs.python.org/3/library/http.server.html#http.server.SimpleHTTPRequestHandler.do_GET):
> If the file’s MIME type starts with text/ the file is opened in text mode; otherwise binary mode is used.
But [the code](https://github.com/python/cpython/blob/main/Lib/http/server.py#L731) always opens file in binary mode so this is not true. The quoted sentence should be removed.
<!-- gh-linked-prs -->
### Linked PRs
* gh-117027
* gh-134107
* gh-134108
<!-- /gh-linked-prs -->
| ea2d707bd59963bd4f53407108026930ff12ae56 | ac8df4b5892d2e4bd99731e7d87223a35c238f81 |
python/cpython | python__cpython-117064 | # Unchecked signed integer overflow in PyLong_AsPid()
If `pid_t` has the same size as `int`, `PyLong_AsPid` is defined as `PyLong_AsLong`. If the size of `int` is less than the size of `long`, there are values out of the C `int` range but in the C `long` range. Calling `PyLong_AsPid()` with such an argument will not raise an exception, but casting the result out of the C `int` range to `pid_t` has undefined behavior.
Most non-Windows 64-bit platforms are affected.
The simplest solution is to define `PyLong_AsPid` as `PyLong_AsInt`. It is only applicable to 3.13, because `PyLong_AsInt` is new in 3.13. In older versions there is `_PyLong_AsInt`, but it is declared in `Include/cpython/longobject.h`, not in `Include/longobject.h`.
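The undefined cast can be modelled in pure Python (a sketch for a platform with 32-bit `int`; the helper name is mine):

```python
INT_MIN, INT_MAX = -2**31, 2**31 - 1

def cast_long_to_int(value):
    """Model the unchecked C cast (int)value with a 32-bit int."""
    value &= 0xFFFFFFFF
    return value - 0x100000000 if value > INT_MAX else value

# A pid that fits in a C long but not in a C int silently wraps:
assert cast_long_to_int(2**32 + 42) == 42
assert cast_long_to_int(INT_MAX + 1) == INT_MIN
```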
cc @vstinner, @pitrou
<!-- gh-linked-prs -->
### Linked PRs
* gh-117064
* gh-117070
* gh-117075
<!-- /gh-linked-prs -->
| 519b2ae22b54760475bbf62b9558d453c703f9c6 | 8182319de33a9519a2f243ac8c35a20ef82a4d2d |
python/cpython | python__cpython-117009 | # test_functools: test_recursive_pickle() crash on ARM64 Windows Non-Debug 3.x
# Bug report
build: https://buildbot.python.org/all/#/builders/730/builds/9247
```
test_recursive_pickle (test.test_functools.TestPartialC.test_recursive_pickle) ...
Windows fatal exception: stack overflow
Current thread 0x000015c8 (most recent call first):
File "C:\Workspace\buildarea\3.x.linaro-win-arm64.nondebug\build\Lib\test\test_functools.py", line 338 in test_recursive_pickle
File "C:\Workspace\buildarea\3.x.linaro-win-arm64.nondebug\build\Lib\unittest\case.py", line 589 in _callTestMethod
(...)
```
<!-- gh-linked-prs -->
### Linked PRs
* gh-117009
<!-- /gh-linked-prs -->
| 9967b568edd2e35b0415c14c7242f3ca2c0dc03d | 72eea512b88f8fd68b7258242c37da963ad87360 |
python/cpython | python__cpython-116997 | # pystats: Add some stats about where _Py_uop_analyse_and_optimize bails out
# Feature or enhancement
### Proposal:
It would be useful to know the cases where the new optimizer can't optimize a trace.
### Has this already been discussed elsewhere?
No response given
### Links to previous discussion of this feature:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-116997
<!-- /gh-linked-prs -->
| 50369e6c34d05222e5a0ec9443a9f7b230e83112 | 617158e07811edfd6fd552a3d84b0beedd8f1d18 |
python/cpython | python__cpython-116992 | # `pegen --help` should mention that it needs different grammar files
# Bug report
I've got contacted by a user who tried to generate a pure-python parser with `pegen` from https://github.com/python/cpython/blob/main/Grammar/python.gram
It was failing with
```python
python -m pegen -o parser.py cpython\Grammar\python.gram
File "<unknown>", line 1
_PyAST_Interactive ( a , p -> arena )
^^
SyntaxError: invalid syntax
```
I think that the `--help` for the `python` subparser should mention something like this:
```
» PYTHONPATH=$PWD/Tools/peg_generator python3 -m pegen --help
usage: pegen [-h] [-q] [-v] {c,python} ...
Experimental PEG-like parser generator
positional arguments:
{c,python} target language for the generated code
c Generate C code for inclusion into CPython
python Generate Python code, needs grammar definition with Python actions, see
`we-like-parsers/pegen`
options:
-h, --help show this help message and exit
-q, --quiet Don't print the parsed grammar
-v, --verbose Print timing stats; repeat for more debug output
```
I have a PR ready.
<!-- gh-linked-prs -->
### Linked PRs
* gh-116992
<!-- /gh-linked-prs -->
| 1e5f615086d23c71a9701abe641b5241e4345234 | 84c3191954b40e090db15da49a59fcc40afe34fd |
python/cpython | python__cpython-117004 | # Simplify the Grammar
# Feature or enhancement
### Proposal:
I've been implementing a Python runtime for a couple of months and I've been meaning to ask about something I noticed in the grammar.
Why do we use:
```
assignment:
| NAME ':' expression ['=' annotated_rhs ]
| ('(' single_target ')'
| single_subscript_attribute_target) ':' expression ['=' annotated_rhs ]
| (star_targets '=' )+ (yield_expr | star_expressions) !'=' [TYPE_COMMENT]
| single_target augassign ~ (yield_expr | star_expressions)
annotated_rhs: yield_expr | star_expressions
```
Instead of:
```
assignment:
| NAME ':' expression ['=' annotated_rhs ]
| ('(' single_target ')'
| single_subscript_attribute_target) ':' expression ['=' annotated_rhs ]
| (star_targets '=' )+ (annotated_rhs) !'=' [TYPE_COMMENT]
| single_target augassign ~ (annotated_rhs)
annotated_rhs: yield_expr | star_expressions
```
Forgive my ignorance when it comes to grammars but are these not the same thing?
### Has this already been discussed elsewhere?
This is a minor feature, which does not need previous discussion elsewhere
### Links to previous discussion of this feature:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-117004
<!-- /gh-linked-prs -->
| 9b280ab0ab97902d55ea3bde66b2e23f8b23959f | 1acd2497983f1a78dffd6e5b3e0f5dd0920a550f |
python/cpython | python__cpython-117025 | # inspect.getsource does not work for a class-defined code object
# Bug report
### Bug description:
In the following MRE, `code.co_firstlineno` correctly returns 4, the first line number of the class definition, but `getsource(code)` returns the content of the entire file:
```python
import sys
from inspect import getsource
class A:
code = sys._getframe(0).f_code
print(code.co_firstlineno)
print(getsource(code))
```
This is because `inspect.findsource` uses a regex pattern that does not match `class`:
https://github.com/python/cpython/blob/a8e93d3dca086896e668b88b6c5450eaf644c0e7/Lib/inspect.py#L1155
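A simplified stand-in for that pattern (not the exact regex from `inspect.py`) shows why a `class` statement is never matched:

```python
import re

# Simplified stand-in for the pattern inspect.findsource uses to locate a
# code object's defining line: it matches def/lambda/decorator lines, so a
# `class` statement never matches and the search falls through.
pat = re.compile(r'^(\s*def\s)|(\s*async\s+def\s)|(.*(?<!\w)lambda(:|\s))|^(\s*@)')
assert pat.match("def f():")
assert pat.match("    async def g():")
assert pat.match("f = lambda x: x")
assert not pat.match("class A:")
```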
### CPython versions tested on:
3.12
### Operating systems tested on:
Linux, Windows
<!-- gh-linked-prs -->
### Linked PRs
* gh-117025
<!-- /gh-linked-prs -->
| d16c9d1278164f04778861814ebc87ed087511fc | 6547330f4e896c6748da23704b617e060e6cc68e |
python/cpython | python__cpython-116985 | # Mimalloc header is not installed
# Bug report
### Bug description:
Mimalloc is introduced in https://github.com/python/cpython/pull/109914.
This causes a "header not found" error when building any extension that includes a `pycore_*.h` header as:
```
[build] In file included from /Users/yyc/repo/py/install/include/python3.13d/internal/pycore_long.h:13:
[build] In file included from /Users/yyc/repo/py/install/include/python3.13d/internal/pycore_runtime.h:17:
[build] In file included from /Users/yyc/repo/py/install/include/python3.13d/internal/pycore_interp.h:30:
[build] /Users/yyc/repo/py/install/include/python3.13d/internal/pycore_mimalloc.h:39:10: fatal error: 'mimalloc.h' file not found
[build] #include "mimalloc.h"
[build] ^~~~~~~~~~~~
[build] 1 error generated.
```
See following PR for detail.
### CPython versions tested on:
3.13, CPython main branch
### Operating systems tested on:
Linux, macOS
<!-- gh-linked-prs -->
### Linked PRs
* gh-116985
* gh-118808
* gh-118866
<!-- /gh-linked-prs -->
| e17cd1fbfd4f20824c686c7242423e84ba6a6cc5 | 456c29cf85847c67dfc0fa36d6fe6168569b46fe |
python/cpython | python__cpython-116958 | # configparser.DuplicateOptionError leaves ConfigParser instance in bad state
# Bug report
### Bug description:
If you catch `configparser.Error` when reading a config file (the intention is to skip invalid config files) and then attempt to use the ConfigParser instance, you can get really weird errors. In the following example, `read` raises DuplicateOptionError from the 2nd section, but attempting to `get` a value from the first section fails in the interpolation code, because somehow the value has been stored as a list instead of a string!
```python
>>> import configparser
>>> cp = configparser.ConfigParser()
>>> cp.read_string('[a]\nx=1\n[b]\ny=1\ny=2')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/python3.12/configparser.py", line 710, in read_string
self.read_file(sfile, source)
File "/usr/lib/python3.12/configparser.py", line 705, in read_file
self._read(f, source)
File "/usr/lib/python3.12/configparser.py", line 1074, in _read
raise DuplicateOptionError(sectname, optname,
configparser.DuplicateOptionError: While reading from '<string>' [line 5]: option 'y' in section 'b' already exists
>>> cp.get('a', 'x')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/python3.12/configparser.py", line 777, in get
return self._interpolation.before_get(self, section, option, value,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.12/configparser.py", line 367, in before_get
self._interpolate_some(parser, option, L, value, section, defaults, 1)
File "/usr/lib/python3.12/configparser.py", line 384, in _interpolate_some
p = rest.find("%")
^^^^^^^^^
AttributeError: 'list' object has no attribute 'find'
```
With interpolation disabled, the value from the 1st section can be read, but it's a list, not a str!
```
>>> cp.get('a', 'x', raw=True)
['1']
```
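Until the parser state is fixed, a defensive workaround (my sketch, not part of the report) is to discard the instance whenever reading raises:

```python
import configparser

def read_config(text):
    """Return a parser for *text*, or a fresh empty one if parsing fails."""
    cp = configparser.ConfigParser()
    try:
        cp.read_string(text)
    except configparser.Error:
        cp = configparser.ConfigParser()  # never reuse the tainted instance
    return cp

cp = read_config('[a]\nx=1\n[b]\ny=1\ny=2')
assert cp.sections() == []                # bad file: parser stays pristine
assert read_config('[a]\nx=1').get('a', 'x') == '1'
```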
### CPython versions tested on:
3.10, 3.12, CPython main branch
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-116958
* gh-117012
* gh-117013
<!-- /gh-linked-prs -->
| b1bc37597f0d36084c4dcb15977fe6d4b9322cd4 | a8e93d3dca086896e668b88b6c5450eaf644c0e7 |
python/cpython | python__cpython-116967 | # `./Include/cpython/pyatomic_std.h` will not compile
# Bug report
### Bug description:
These two functions have unclosed `return` expressions:
https://github.com/python/cpython/blob/2982bdb936f76518b29cf7de356eb5fafd22d112/Include/cpython/pyatomic_std.h#L913-L927
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-116967
<!-- /gh-linked-prs -->
| 165cb4578c3cbd4d21faf1050193c297662fd031 | ecb4a2b711d62f1395ddbff52576d0cca8a1b43e |
python/cpython | python__cpython-116940 | # Rewrite binarysort()
# Feature or enhancement
### Proposal:
I tried many things to speed up listobject.c's `binarysort()`, but nothing really helped (neither code tweaks nor entirely different approaches).
However, I got annoyed enough at the ancient code to rewrite it :wink: Some specific annoyances:
- When sortslices were introduced, this was left with a slightly weird signature, mixing a semi-abstract sortslice argument with raw pointers into the key part.
- My aging eyes can often no longer see the difference between "l" and "1".
- While it's still idiomatic C to do pointer arithmetic "by hand", the code is clearer when written in `a[index]` style. At least for simple code like this, modern optimizing compilers should produce much the same machine instructions either way.
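For reference, the algorithm `binarysort()` implements (binary insertion sort) can be sketched in the `a[index]` style the rewrite prefers; this is a Python stand-in, not the C code:

```python
def binary_insertion_sort(a):
    """Stable binary insertion sort, index-style like the C rewrite."""
    for i in range(1, len(a)):
        pivot = a[i]
        # Binary-search the sorted prefix a[:i] for the insertion point.
        lo, hi = 0, i
        while lo < hi:
            mid = (lo + hi) // 2
            if pivot < a[mid]:
                hi = mid
            else:
                lo = mid + 1      # equal keys insert after: keeps the sort stable
        # Shift the tail right by one (the memmove in listobject.c) and insert.
        a[lo + 1:i + 1] = a[lo:i]
        a[lo] = pivot
    return a

assert binary_insertion_sort([5, 1, 4, 1, 3]) == [1, 1, 3, 4, 5]
```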
### Has this already been discussed elsewhere?
No response given
### Links to previous discussion of this feature:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-116940
<!-- /gh-linked-prs -->
| 8383915031942f441f435a5ae800790116047b80 | 97ba910e47ad298114800587979ce7beb0a705a3 |
python/cpython | python__cpython-125213 | # `dict` and `dict.update` treat the first argument as a mapping when it has attribute `keys` without attribute `__getitem__`
# Bug report
## How to reproduce
1. Make such a class:
```python
class Object:
def __iter__(self, /):
return iter(())
def keys(self, /):
return ['1', '2']
```
2. Call `dict(Object())`.
3. Call `d = {}` and then `d.update(Object())`.
## Expected result
At the step 2 an empty dictionary is returned.
At the step 3 `d` stays empty.
## Actual result
Both steps 2 and 3 raise a TypeError `'Object' object is not subscriptable`.
## CPython versions tested on:
3.10
3.11
## Operating systems tested on:
Windows 21H2 (19044.1645)
Ubuntu 20.04.6 LTS
---
[Docs](https://docs.python.org/3/library/stdtypes.html#dict) of `dict` state:
> If a positional argument is given and it is a mapping object, a dictionary is created with the same key-value pairs as the mapping object. Otherwise, the positional argument must be an [iterable](https://docs.python.org/3/glossary.html#term-iterable) object.
Unfortunately, there is no link to what is considered as a mapping object.
In [typeshed](https://github.com/python/typeshed/blob/main/stdlib/builtins.pyi) both `dict` and `dict.update` accept `SupportsKeysAndGetItem`, i.e., any object with attributes `keys` and `__getitem__`.
But the experiment above shows that only `keys` is enough. While `typeshed` is a bit too restrictive in the case for iterable (only iterables of 2-sized tuples are allowed, but `dict` accepts any iterable of 2-sized iterables), I think just checking for `keys` is not enough.
In the actual C code there is such comment:
https://github.com/python/cpython/blob/2982bdb936f76518b29cf7de356eb5fafd22d112/Include/dictobject.h#L39-L43
Thus, it is intended to check the presence of two attributes.
---
The error is here:
https://github.com/python/cpython/blob/2982bdb936f76518b29cf7de356eb5fafd22d112/Objects/dictobject.c#L3426-L3441
This code evaluates whether attribute `keys` is present. If the answer is true, calls `PyDict_Merge`, and calls `PyDict_MergeFromSeq2` otherwise.
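A pure-Python sketch of that dispatch (function and class names are mine) reproduces the failure mode:

```python
def dict_update_dispatch(arg):
    """Sketch of the C dispatch: presence of `keys` selects the mapping path."""
    if hasattr(arg, "keys"):
        # PyDict_Merge path: also requires __getitem__.
        return {key: arg[key] for key in arg.keys()}
    # PyDict_MergeFromSeq2 path: iterate key/value pairs.
    return {key: value for key, value in arg}

class KeysOnly:
    def __iter__(self):
        return iter(())
    def keys(self):
        return ['1', '2']

raised = False
try:
    dict_update_dispatch(KeysOnly())   # keys() alone routes to the mapping path
except TypeError:
    raised = True
assert raised
assert dict_update_dispatch([('k', 1)]) == {'k': 1}
```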
<!-- gh-linked-prs -->
### Linked PRs
* gh-125213
* gh-125336
* gh-125337
* gh-125421
* gh-126150
* gh-126151
<!-- /gh-linked-prs -->
| 21ac0a7f4cf6d11da728b33ed5e8cfa65a5a8ae7 | 89515be596a0ca05fd9ab4ddf76c8013dd093545 |
python/cpython | python__cpython-116937 | # C API: Add PyType_GetModuleByDef() to the limited C API version 3.13
I propose adding `PyType_GetModuleByDef()` to the limited C API version 3.13. The function was added in Python 3.9 and is now well tested. We could have added it to the limited C API much earlier.
cc @encukou
---
The `PyType_GetModuleByDef()` function is needed to access the **module state** in C functions which don't get a module as first parameter, but only an instance of a heap type. Heap types should be created with `PyType_FromModuleAndSpec(module, spec, bases)` to store the module in the type. `PyType_GetModuleByDef(type)` gives the module and then `PyModule_GetState()` gives the module state.
Examples of functions which don't get a `module` argument:
* Class methods
* Most slots, examples: `tp_new`, `tp_richcompare`, `tp_iternext`, etc.
* `PyGetSetDef` getter and setter functions
Without `PyType_GetModuleByDef()`, many static types cannot be converted to heap types.
The following stdlib extensions use `PyType_GetModuleByDef()`
```
Modules/_asynciomodule.c
Modules/_bz2module.c
Modules/_collectionsmodule.c
Modules/_csv.c
Modules/_decimal/_decimal.c
Modules/_elementtree.c
Modules/_functoolsmodule.c
Modules/_io/_iomodule.h
Modules/_pickle.c
Modules/_queuemodule.c
Modules/_randommodule.c
Modules/_sqlite/module.h
Modules/_ssl.c
Modules/_ssl.h
Modules/_struct.c
Modules/_testmultiphase.c
Modules/_threadmodule.c
Modules/_xxinterpchannelsmodule.c
Modules/_zoneinfo.c
Modules/arraymodule.c
Modules/cjkcodecs/multibytecodec.c
Modules/clinic/_testmultiphase.c.h
Modules/itertoolsmodule.c
Modules/socketmodule.c
Python/Python-tokenize.c
```
<!-- gh-linked-prs -->
### Linked PRs
* gh-116937
<!-- /gh-linked-prs -->
| 507896d97dcff2d7999efa264b29d9003c525c49 | 0c1a42cf9c8cd0d4534d5c1d58f118ce7c5c446e |
python/cpython | python__cpython-118021 | # docs(easy): Document that heap types need to support garbage collection
# Documentation
In https://docs.python.org/3/c-api/typeobj.html#c.Py_TPFLAGS_HEAPTYPE, it does not mention that the heap type must support GC. However, this is actually required: the type itself needs to be visited by the GC because it forms a reference cycle with its own module object. We should document this.
<!-- gh-linked-prs -->
### Linked PRs
* gh-118021
* gh-118092
<!-- /gh-linked-prs -->
| 5d544365742a117027747306e2d4473f3b73d921 | 4605a197bd84da1a232bd835d8e8e654f2fef220 |