| repo | instance_id | problem_statement | merge_commit | base_commit |
|---|---|---|---|---|
python/cpython | python__cpython-103557 | # Incorrect `inspect.Signature` can be created: pos-only with a default followed by pos-or-kw without one
While working on https://github.com/python/cpython/issues/103553 I found one more problem that merits its own issue.
Right now it is impossible to create a function that has a pos-or-keyword parameter without a default coming after a pos-only parameter with a default. Demo:
```python
>>> def test(pod=42, /, pk): ...
File "<stdin>", line 1
def test(pod=42, /, pk): ...
^^
SyntaxError: parameter without a default follows parameter with a default
>>> lambda pod=42, /, pk: ...
File "<stdin>", line 1
lambda pod=42, /, pk: ...
^^
SyntaxError: parameter without a default follows parameter with a default
```
But, it is possible to create a signature like this. Repro:
```python
>>> import inspect
>>> def one(pk): ...
...
>>> def two(pod=42, /): ...
...
>>> sig1 = inspect.signature(one)
>>> sig2 = inspect.signature(two)
>>> inspect.Signature((sig2.parameters['pod'], sig1.parameters['pk']))
<Signature (pod=42, /, pk)>
```
This looks like a bug to me: there's no reason for `Signature` to behave differently and to accept signatures of functions that cannot be created.
Desired behaviour:
```python
Traceback (most recent call last):
File "/Users/sobolev/Desktop/cpython/ex.py", line 7, in <module>
print(inspect.Signature((sig2.parameters['pod'], sig1.parameters['pk'])))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/sobolev/Desktop/cpython/Lib/inspect.py", line 3039, in __init__
raise ValueError(msg)
ValueError: non-default argument follows default argument
```
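The check itself is simple; here is a minimal sketch of the ordering rule (not the actual patch) showing why the combination above should be rejected:

```python
import inspect

def validate_order(parameters):
    # Sketch of the rule: once a positional parameter with a default is
    # seen, every later positional parameter must also have one.
    seen_default = False
    for param in parameters:
        if param.kind in (param.POSITIONAL_ONLY, param.POSITIONAL_OR_KEYWORD):
            if param.default is not param.empty:
                seen_default = True
            elif seen_default:
                raise ValueError('non-default argument follows default argument')

pod = inspect.Parameter('pod', inspect.Parameter.POSITIONAL_ONLY, default=42)
pk = inspect.Parameter('pk', inspect.Parameter.POSITIONAL_OR_KEYWORD)

try:
    validate_order([pod, pk])
    failed = False
except ValueError as e:
    failed = True
    print(e)  # non-default argument follows default argument
```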
The fix is quite easy. PR is incoming.
I've also tried the same thing with https://github.com/python/cpython/pull/103404 (because it is very related) but, it still does not solve this corner case.
<!-- gh-linked-prs -->
### Linked PRs
* gh-103557
* gh-103675
<!-- /gh-linked-prs -->
| 6b58d419a1bd62ac94274d108d59980a3eb6f6e0 | 7d20783d45a9c78379fe79229b57e4c31610a623 |
python/cpython | python__cpython-103554 | # Improve `test_inspect`
While working on https://github.com/python/cpython/issues/103406 I've noticed that there are some problems with `test_inspect` that we can easily fix.
List of problems I found:
1. https://github.com/python/cpython/blob/2b6f5c3483597abcb8422508aeffab04f500f568/Lib/test/test_inspect.py#L1840-L1845 Here `f4` is never used, but `f3` has duplicate asserts. It looks like a copy-paste error to me. I propose adding asserts for `f4` as well
2. https://github.com/python/cpython/blob/2b6f5c3483597abcb8422508aeffab04f500f568/Lib/test/test_inspect.py#L1835-L1838 Unbound methods do not exist anymore, so this comment is outdated. It should probably be removed and the assert restored
3. https://github.com/python/cpython/blob/2b6f5c3483597abcb8422508aeffab04f500f568/Lib/test/test_inspect.py#L1823-L1824 It says that the result depends on the dict order. But dicts are ordered now. Let's see if it is possible to uncomment / modernize this test somehow
4. https://github.com/python/cpython/blob/2b6f5c3483597abcb8422508aeffab04f500f568/Lib/test/test_inspect.py#L2991 This variable is not needed, because it is not used and there's an identical test right above it: https://github.com/python/cpython/blob/2b6f5c3483597abcb8422508aeffab04f500f568/Lib/test/test_inspect.py#L2987-L2989
5. There are also several unused variables that can be removed during this cleanup
Here's my PR :)
<!-- gh-linked-prs -->
### Linked PRs
* gh-103554
* gh-103568
<!-- /gh-linked-prs -->
| 4fe1c4b97e39429abbb9c2117fe40f585de00887 | 0097c36e07eeb516bcccba34feb4d494a67832d5 |
python/cpython | python__cpython-103549 | # Improve performance of `pathlib.Path.[is_]absolute()`
Two optimizations are possible:
- `is_absolute()` can pass the _unnormalized_ path to `os.path.isabs()`
- `absolute()` on an empty path (or `Path.cwd()`) can avoid joining an empty string onto the working directory (thus avoiding a call to `os.path.join()`), and short-circuit the string normalization (as the result of `getcwd()` is fully normalized)
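The first optimization works because `isabs()` only inspects the head of the path, so normalization cannot change the answer. A quick illustration (using `posixpath` explicitly so it behaves the same on any OS):

```python
import posixpath

# isabs() looks only at the leading separator, so an unnormalized
# path gives the same answer as its normalized form.
raw = '/foo/../././bar'
print(posixpath.isabs(raw))                      # True
print(posixpath.isabs(posixpath.normpath(raw)))  # True
print(posixpath.isabs('foo/bar'))                # False
```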
<!-- gh-linked-prs -->
### Linked PRs
* gh-103549
<!-- /gh-linked-prs -->
| de7f694e3c92797fe65f04cd2c6941ed0446bb24 | 376137f6ec73e0800e49cec6100e401f6154b693 |
python/cpython | python__cpython-104606 | # expose platform-specific `PRIO_DARWIN_*` constants for core type selection on macOS in `os` module.
# Feature or enhancement
As described in [this stack overflow answer](https://apple.stackexchange.com/a/443735/26977), the way to select "efficiency" vs. "performance" cores on Apple Silicon macOS machines is to use `setpriority`. However, you need to use custom platform-specific constants to do this, specifically, `PRIO_DARWIN_BG`, and either `PRIO_DARWIN_THREAD` or `PRIO_DARWIN_PROCESS`.
# Pitch
It's just a few integer constants and it would expose a pretty important bit of power-efficiency/performance-tuning functionality for Python.
# Previous discussion
No previous discussion anywhere, this seems like a fairly minor addition. This platform-specific functionality seems consistent with the way these functions are already treated, given that `os.PRIO_*` are already "Unix, not Emscripten, not WASI".
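For illustration, here is roughly how the constants would be used once exposed. The numeric values and the helper below are assumptions of this sketch, taken from macOS's `<sys/resource.h>`; the real feature would expose them as `os.PRIO_DARWIN_*`, and the `setpriority` call itself is only meaningful on macOS:

```python
import os

# Assumed values from macOS <sys/resource.h> (illustrative only).
PRIO_DARWIN_THREAD = 3
PRIO_DARWIN_PROCESS = 4
PRIO_DARWIN_BG = 0x1000

def demote_to_efficiency_cores(pid=0):
    # Hypothetical helper: marks the process as "background", which on
    # Apple Silicon steers it to efficiency cores. macOS only.
    os.setpriority(PRIO_DARWIN_PROCESS, pid, PRIO_DARWIN_BG)
```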
<!-- gh-linked-prs -->
### Linked PRs
* gh-104606
<!-- /gh-linked-prs -->
| 616fcad6e2e10b0d0252e7f3688e61c468c54e6e | fd04bfeaf7a4531120ad450dbd1afc121a2523ee |
python/cpython | python__cpython-103539 | # Remove unused TK_AQUA code
Sections of code which are guarded by `#ifdef TK_AQUA` were committed in cb85244228ed. But from examining the CPython
and Tcl/Tk repository histories, it appears nothing has ever defined this macro. (Tk Aqua itself instead defines/uses the `MAC_OSX_TK` macro.) The only mentions of it I found elsewhere were as `-DTK_AQUA` passed to build commands, either in a Setup.local file (https://mail.python.org/pipermail/pythonmac-sig/2001-December/004553.html, https://mail.python.org/pipermail/pythonmac-sig/2002-November/006746.html) or manual command line invocation (https://mail.python.org/pipermail/pythonmac-sig/2002-January/004776.html).
So I believe this code has long been unused, and I suspect that trying to use it now is not a good idea. Can it be removed?
<!-- gh-linked-prs -->
### Linked PRs
* gh-103539
<!-- /gh-linked-prs -->
| e464ec9f4c4af6c2fdb6db9a3fa70ffd703ae01f | a33ce66dca57d4c36b1022fdf3b7e322f3203468 |
python/cpython | python__cpython-103534 | # Use PEP 669 API for cprofile
# Feature or enhancement
Replace the current `setprofile` mechanism with PEP 669 API for cProfile.
# Pitch
It's faster.
Before:
```
Hanoi: 0.342869599997357, 1.4579414000036195, 4.252174587699983
fib: 0.15549560000363272, 0.7588815999988583, 4.88040561907301
Hanoi: 0.33156009999947855, 1.394542599999113, 4.2060024713507635
fib: 0.16397210000286577, 0.7469802000050549, 4.555532313070332
Hanoi: 0.3341937000004691, 1.394005099995411, 4.171248889471747
fib: 0.15704140000161715, 0.7399598000047263, 4.711877250184387
Hanoi: 0.33133889999589883, 1.3821628000005148, 4.171447421409387
fib: 0.15387790000386303, 0.754370700000436, 4.902397940064804
Hanoi: 0.3403002000050037, 1.3969628000049852, 4.105089564991276
fib: 0.15468130000226665, 0.761441399998148, 4.922646758121312
```
After:
```
Hanoi: 0.3318088000014541, 1.2147841000041808, 3.661096691826309
fib: 0.15522490000148537, 0.6360437999974238, 4.097562955372092
Hanoi: 0.33085879999998724, 1.1502422000048682, 3.476535005279934
fib: 0.15410060000431258, 0.6056611999956658, 3.9302974808580635
Hanoi: 0.35002540000277804, 1.145935599997756, 3.273864125256799
fib: 0.1540030999967712, 0.5974198000039905, 3.8792712615299036
Hanoi: 0.3338971999983187, 1.1459307999975863, 3.431986851052829
fib: 0.16891020000184653, 0.6197690999979386, 3.6692224625343126
Hanoi: 0.3318254999976489, 1.1875411000000895, 3.578812056362467
fib: 0.1544136999946204, 0.5971600999982911, 3.867274082669449
```
20%+ speed up for overhead.
I guess the incentive is not in doubt, but I did have some issues when I implemented it.
1. There is a very slight backward compatibility issue, or it's more like a design decision. The profiler has to disable its own profiling now.
```python
# This works on main, but I don't think that's intuitive.
pr = cProfile.Profile()
pr.enable()
pr = cProfile.Profile()
pr.disable()
```
We can make it work as before; I just think this is not the right way to do it with PEP 669. Because of this, I changed a 15-year-old test in `test_cprofile`.
2. We need to get the actual C function from the descriptor, for which I reused the code from the current legacy tracing. However, `_PyInstrumentation_MISSING` is not exposed, so I had to store it locally (repeatedly reading it from `sys` is a bit expensive).
3. On that matter, are we going to expose some APIs in C? It would be nice not to have to fetch `sys.monitoring` and work from there. We have some defined constants, but a few C APIs could be handy; we might be able to reduce the overhead a bit with an interface like `PyEval_SetProfile`.
Addendum:
<details>
<summary> Benchmark Code </summary>
```python
import timeit
hanoi_setup = """
import cProfile
def test():
def TowerOfHanoi(n, source, destination, auxiliary):
if n == 1:
return
TowerOfHanoi(n - 1, source, auxiliary, destination)
TowerOfHanoi(n - 1, auxiliary, destination, source)
TowerOfHanoi(16, "A", "B", "C")
pr = cProfile.Profile()
"""
fib_setup = """
import cProfile
def test():
def fib(n):
if n <= 1:
return 1
return fib(n - 1) + fib(n - 2)
fib(21)
pr = cProfile.Profile()
"""
test_baseline = """
test()
"""
test_profile = """
pr.enable()
test()
pr.disable()
"""
baseline = timeit.timeit(test_baseline, setup=hanoi_setup, number=100)
profile = timeit.timeit(test_profile, setup=hanoi_setup, number=100)
print(f"Hanoi: {baseline}, {profile}, {profile / baseline}")
baseline = timeit.timeit(test_baseline, setup=fib_setup, number=100)
profile = timeit.timeit(test_profile, setup=fib_setup, number=100)
print(f"fib: {baseline}, {profile}, {profile / baseline}")
```
</details>
<!-- gh-linked-prs -->
### Linked PRs
* gh-103534
<!-- /gh-linked-prs -->
| b9797417315cc2d1700cb2d427685016d3380711 | a0df9ee8fc77443510ab7e9ba8fd830f255a8fec |
python/cpython | python__cpython-103535 | # Remove unused TKINTER_PROTECT_LOADTK code
The `TKINTER_PROTECT_LOADTK` macro and relevant code (originally for #49372) are only defined for Tk 8.4.13 and earlier, but Tkinter already requires at least 8.5.12. Is there a reason to keep this code around, or can it be removed?
<!-- gh-linked-prs -->
### Linked PRs
* gh-103535
* gh-103542
* gh-103544
<!-- /gh-linked-prs -->
| 69e2c42f42f1d6fb1287ac5f9c6d19f2822df8fe | be8903eb9d66ef1229f93a3a6036aeafc3bb0bda |
python/cpython | python__cpython-103528 | # Make: multibyte codecs lack dependencies
There are no dependencies for the \_codec\_\* and \_multibytecodec modules, so if you're hacking away on a header file in Modules/cjkcodecs/ and execute `make`, you're not going to get a correct rebuild of those extension modules.
<!-- gh-linked-prs -->
### Linked PRs
* gh-103528
* gh-103567
<!-- /gh-linked-prs -->
| 3d71b5ec5ecccc14c00707d73ebbc907877b3448 | 95ee7c47ab53eb0e26f2b2b4f681015f49c0bff7 |
python/cpython | python__cpython-103526 | # Misleading exception message from `pathlib.PurePath()` when mixing str and bytes arguments
Since 6716254e71eeb4666fd6d1a13857832caad7b19f, attempting to create a `pathlib.PurePath` or `Path` object with mixed `str` and `bytes` arguments raises a TypeError (yay!) with a misleading message (booo!):
```python
>>> import pathlib
>>> pathlib.Path('foo', b'bar')
TypeError: Can't mix strings and bytes in path components
```
This message implies that bytes are supported, as long as we don't mix them with strings. And yet when we try that:
```python
>>> pathlib.Path(b'foo', b'bar')
TypeError: argument should be a str or an os.PathLike object where __fspath__ returns a str, not 'bytes'
```
This message is better, and should be used in the former case.
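A sketch of applying the better message uniformly (`check_path_arg` is a hypothetical helper for illustration, not the actual patch):

```python
import os

def check_path_arg(arg):
    # Hypothetical helper: every component must be a str, or a
    # path-like object whose __fspath__ returns a str.
    path = os.fspath(arg)
    if not isinstance(path, str):
        raise TypeError(
            "argument should be a str or an os.PathLike object "
            f"where __fspath__ returns a str, not {type(path).__name__!r}")
    return path

print(check_path_arg('foo'))   # foo
try:
    check_path_arg(b'bar')
    msg = None
except TypeError as exc:
    msg = str(exc)
    print(msg)
```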
<!-- gh-linked-prs -->
### Linked PRs
* gh-103526
<!-- /gh-linked-prs -->
| 8611e7bf5ceace998fefcbf26ab1c5d5bc8a0e2a | d81ca7ec029ba05084751c8df64292bb48f4f30f |
python/cpython | python__cpython-103518 | # Tests for `pathlib.Path.walk()` are a little fragile
`test_walk_topdown()` attempts to handle all possible visitation orders, but the result is difficult to read. We can enforce a specific order by sorting `dirnames`.
https://github.com/python/cpython/blob/a6f95941a3d686707fb38e0f37758e666f25e180/Lib/test/test_pathlib.py#L2680-L2694
`test_walk_bottom_up()` suffers similar problems, and also makes unjustified assertions about the order that siblings are visited (which is arbitrary and cannot be influenced by the user, contrary to top-down mode). It can be simplified to ensure that children are yielded before parents.
https://github.com/python/cpython/blob/a6f95941a3d686707fb38e0f37758e666f25e180/Lib/test/test_pathlib.py#L2717-L2735
<!-- gh-linked-prs -->
### Linked PRs
* gh-103518
<!-- /gh-linked-prs -->
| 0097c36e07eeb516bcccba34feb4d494a67832d5 | 2b6f5c3483597abcb8422508aeffab04f500f568 |
python/cpython | python__cpython-103511 | # PEP 697 -- Limited C API for Extending Opaque Types
[PEP 697](https://peps.python.org/pep-0697/) was accepted and needs to be implemented.
<!-- gh-linked-prs -->
### Linked PRs
* gh-103511
<!-- /gh-linked-prs -->
| cd9a56c2b0e14f56f2e83dd4db43c5c69a74b232 | 35d273825abc319d0ecbd69110e847f6040d0cd7 |
python/cpython | python__cpython-103493 | # Clarify SyntaxWarning with literal comparison
I had a coworker slightly confused about the following warning, since None is a literal and `is not None` is idiomatic:
```
>>> 'a' is not None
<stdin>:1: SyntaxWarning: "is not" with a literal. Did you mean "!="?
True
```
I think this would be better as:
```
>>> 'a' is not None
<stdin>:1: SyntaxWarning: "is not" with str literal. Did you mean "!="?
True
```
<!-- gh-linked-prs -->
### Linked PRs
* gh-103493
<!-- /gh-linked-prs -->
| ae25855045e8f19f4715c9b2c02cbcd81e7f6f95 | 79ae019164eeb6b94118bc17bc1e937405684c75 |
python/cpython | python__cpython-103506 | # Expose `sqlite3_db_config` and verbs (or equivalent)
# Feature or enhancement
Python's SQLite bindings should expose [`sqlite3_db_config`](https://www.sqlite.org/c3ref/db_config.html) and at least [`SQLITE_DBCONFIG_DEFENSIVE`](https://www.sqlite.org/c3ref/c_dbconfig_defensive.html) (or an idiomatic version of the same)
# Pitch
The libsqlite3.dylib built into Darwin enables defensive mode by default for all connections in processes linked on or after macOS 11 Big Sur as a mitigation layer against the general class of security vulnerabilities that can be exploited with pure SQL, but it's still useful to be able to disable it when using certain tools (like [sqlite-utils](https://github.com/simonw/sqlite-utils/issues/235)). Conversely, developers may find it useful to be able to enable defensive mode on other platforms when opening a database with an untrusted schema, or executing untrusted SQL statements from remote sources.
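An idiomatic Python surface might look like the sketch below. The `setconfig`/`getconfig` method names and the constant are assumptions about the eventual API, so the call is guarded to stay runnable on builds that do not expose them:

```python
import sqlite3

con = sqlite3.connect(":memory:")

# Hypothetical API shape: toggle defensive mode per-connection.
enabled = None
if hasattr(con, "setconfig"):
    con.setconfig(sqlite3.SQLITE_DBCONFIG_DEFENSIVE, True)
    enabled = con.getconfig(sqlite3.SQLITE_DBCONFIG_DEFENSIVE)
    print(enabled)
con.close()
```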
# Previous discussion
This was prompted by a [brief discussion on Mastodon](https://fosstodon.org/@erlendaasland/110184534512599533) with @erlend-aasland
<!-- gh-linked-prs -->
### Linked PRs
* gh-103506
<!-- /gh-linked-prs -->
| bb8aa7a2b41ad7649d66909e5266fcee039e63ed | 222c63fc6b91f42e7cc53574615f4e9b7a33c28f |
python/cpython | python__cpython-103502 | # Python 3.12.0a7 can fail to catch exceptions when an iterator is involved
# Bug report
Python 3.12.0a7 (at least as of commit d65ed693a8a13a2a7f9b201bda1224d6ae5fcf0e) can fail to catch exceptions when an iterator is involved. The following code works on Python 3.11 but the `ValueError` is uncaught on Python 3.12:
```python
#!/usr/bin/env python
def do_work():
yield
raise ValueError()
def main():
try:
for _ in do_work():
if True is False:
return
except ValueError:
pass
if __name__ == '__main__':
main()
```
With this code in `repro.py` the following traceback is shown on Python 3.12:
```
Traceback (most recent call last):
File "//repro.py", line 18, in <module>
main()
File "//repro.py", line 12, in main
return
File "//repro.py", line 5, in do_work
raise ValueError()
ValueError
```
No error or output occurs under Python 3.11.
# Your environment
Tested on Ubuntu 20.04.5 using the deadsnakes nightly build: python3.12 - 3.12.0~a7-98-gd65ed693a8-1+focal1
<!-- gh-linked-prs -->
### Linked PRs
* gh-103502
<!-- /gh-linked-prs -->
| efb8a2553c88a295514be228c44fb99ef035e3fa | 4307feaddc76b9e93cd38e325a1f0ee59d593093 |
python/cpython | python__cpython-103495 | # Unexpected behavior change in subclassing `Enum` class as of 3.11
I had a question regarding changes in subclassing `Enum` in 3.11. As of 3.11, if an enum class has a mixin parent apart from `Enum` that defines `__init__` or `__new__`, `member._value_` is set to another instance of the mixin class
unless the user defines `__new__` in the enum class and sets `_value_`. However, the user naturally expects `_value_` to be the value assigned to the member in the class definition. In the example below
```python
class mixin:
def __init__(self):
pass
class CustomEnum(mixin, Enum):
MEMBER = "member"
```
`CustomEnum.MEMBER` is an instance of mixin and `CustomEnum.MEMBER.value` is another instance of it (with different id), while it was expected that `CustomEnum.MEMBER.value` would be `"member"`. What has been the rationale for this change?
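The workaround mentioned above looks like this, in a hedged sketch (the `Mixin` here takes `*args` in `__init__` so member creation can pass the assigned value through; names are illustrative):

```python
from enum import Enum

class Mixin:
    # Hypothetical mixin; *args so enum member creation can forward
    # the member's assigned value to __init__ without error.
    def __init__(self, *args):
        pass

class CustomEnum(Mixin, Enum):
    def __new__(cls, value):
        obj = object.__new__(cls)
        obj._value_ = value   # pin .value to the assigned literal
        return obj

    MEMBER = "member"

print(CustomEnum.MEMBER.value)  # member
```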
By the way, this change is not noted on the enumeration page of the Python docs.
<!-- gh-linked-prs -->
### Linked PRs
* gh-103495
* gh-103514
<!-- /gh-linked-prs -->
| a6f95941a3d686707fb38e0f37758e666f25e180 | 2194071540313e2bbdc7214d77453b9ce3034a5c |
python/cpython | python__cpython-103669 | # Document that cache() and lru_cache() do not have a "call once" guarantee
# Documentation
The documentation for `@functools.cache` states that
> The cache is threadsafe so the wrapped function can be used in multiple threads.
but the first call is not synchronized: until the value is cached, concurrent callers may each execute the wrapped function.
In the following example the wrapped function will return two different instances of list.
```python
import functools
from threading import Thread
from time import sleep
@functools.cache
def get_cached_object():
sleep(1) # stand-in for some logic which takes some time to execute
return list()
def run():
print(f"{id(get_cached_object())}\n")
if __name__ == "__main__":
first_thread = Thread(target=run)
second_thread = Thread(target=run)
first_thread.start()
second_thread.start()
sleep(2) # wait for both threads to finish
run()
```
This is an issue, for example, when you use `@functools.cache` for implementation of singletons (we can leave debate about singletons not being a good practice out of this).
The documentation should not claim the cache to be threadsafe or there should be an explicit warning about this situation.
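When a true "call once" guarantee is needed, it has to be built explicitly. A sketch of a lock-guarded wrapper for zero-argument functions (illustrative, not a proposed stdlib addition):

```python
import functools
import threading

def call_once(func):
    # Guarantees func runs at most once, which @functools.cache alone
    # does not: the lock serializes the first computation.
    lock = threading.Lock()
    sentinel = object()
    result = sentinel

    @functools.wraps(func)
    def wrapper():
        nonlocal result
        if result is sentinel:
            with lock:
                if result is sentinel:   # re-check under the lock
                    result = func()
        return result
    return wrapper

@call_once
def get_singleton():
    return []

ids = set()
threads = [threading.Thread(target=lambda: ids.add(id(get_singleton())))
           for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(len(ids))  # 1
```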
<!-- gh-linked-prs -->
### Linked PRs
* gh-103669
* gh-103682
<!-- /gh-linked-prs -->
| e5eaac6064561c8f7643011a31fa506e78330798 | 7b134d3e71af03c4593678f36fbb202cc3650f4e |
python/cpython | python__cpython-103473 | # ResourceWarning due to unclosed response in http.client.HTTPConnection._tunnel
# Bug report
Calling `http.client.HTTPConnection._tunnel()` may result in a `ResourceWarning` because the response is not closed.
This results in an intermittent failure on the urllib3 test `test/with_dummyserver/test_proxy_poolmanager.py::TestHTTPProxyManager::test_tunneling_proxy_request_timeout[https-https]` with ResourceWarnings enabled, see https://github.com/urllib3/urllib3/actions/runs/4638944124/jobs/8209257011#step:6:2471 and https://github.com/urllib3/urllib3/blob/4d425a2d767ce2e0c988fff4385046ea57458c49/test/with_dummyserver/test_proxy_poolmanager.py#L480-L498
# Your environment
- CPython versions tested on: 3.8-3.11
<!-- gh-linked-prs -->
### Linked PRs
* gh-103473
* gh-104077
<!-- /gh-linked-prs -->
| 9de0cf20fa0485e327e57cc0864c7476da85cfad | 690df4c16ca4f0054d27a6148da9e6af809a2658 |
python/cpython | python__cpython-103465 | # Add argument checking on pdb commands
# Feature or enhancement
Add some argument checking on pdb commands so it is not ignored silently
# Pitch
Currently for some pdb commands, we simply throw out provided arguments and proceed. This could confuse users. A super common case that I had (more than a few times) was this:
```python
(Pdb) a = 1
(Pdb)
```
Guess what it does? It lists all the arguments to the function (which were empty). The command succeeded (in a way the user would never expect) without any kind of warning.
It's a similar story for `step` or `next`, which do not take any arguments. `gdb` and some other debuggers accept a count for such commands, so people might do
```python
(Pdb) step 3
```
And `pdb` just does a `step`. Normally the user would expect it to do something different from a plain `step`, but that's not the case. Of course, there's a similar issue when the user does ```s = "abc"``` - surprise! the code steps forward!
We should catch these cases (and it's pretty cheap) and let the users know it's not the correct way to use the command (or for most cases, let them know that this IS actually a command).
So we could have something like:
```python
(Pdb) a = 1
*** Invalid argument: = 1
Usage: a(rgs)
(Pdb) step 3
*** Invalid argument: 3
Usage: s(tep)
(Pdb)
```
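The check can live in a small decorator on the `do_*` methods. A hypothetical sketch (the `FakePdb` stand-in and `require_no_args` name are illustrative, not the actual patch):

```python
import functools

def require_no_args(usage):
    # Hypothetical decorator: reject trailing arguments for commands
    # that take none, instead of silently discarding them.
    def decorator(method):
        @functools.wraps(method)
        def wrapper(self, arg):
            if arg.strip():
                self.error(f"Invalid argument: {arg.strip()}\n      Usage: {usage}")
                return
            return method(self, arg)
        return wrapper
    return decorator

class FakePdb:
    # Stand-in for pdb.Pdb, just enough to demo the check.
    def __init__(self):
        self.messages = []

    def error(self, msg):
        self.messages.append('*** ' + msg)

    @require_no_args('s(tep)')
    def do_step(self, arg):
        self.messages.append('stepping')

pdb_ = FakePdb()
pdb_.do_step('3')   # rejected with a usage message
pdb_.do_step('')    # accepted
print(pdb_.messages)
```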
<!-- gh-linked-prs -->
### Linked PRs
* gh-103465
<!-- /gh-linked-prs -->
| d944d873b2d7a627c20246762e931f9d4fcf8fe7 | ed86e14b1672f32f0a31d72070e93d361ee0e2b4 |
python/cpython | python__cpython-103463 | # Calling SelectorSocketTransport.writelines with a very large payload doesn't get completely written to the socket
# Bug report
As a consequence of the optimization introduced in #91166, when `SelectorSocketTransport.writelines` is used with a very large buffer which can't be written in one shot either by using `socket.send` or `socket.sendmsg` the remaining data in the buffer is never written to the socket.
The following example can be used to reproduce the error:
```python
import asyncio
async def send_message(chunk_size: int, chunks: int):
await asyncio.sleep(1)
reader, writer = await asyncio.open_connection("127.0.0.1", 9898)
data = [bytes(chunk_size)] * chunks + [b"end"]
writer.writelines(data)
await writer.drain()
writer.close()
await writer.wait_closed()
async def handle_message(reader, writer):
data = await reader.read()
print(f"End marker: {data[-3:]!r}")
writer.close()
await writer.wait_closed()
async def run_server():
server = await asyncio.start_server(handle_message, "127.0.0.1", 9898)
async with server:
await server.serve_forever()
async def main():
await asyncio.gather(run_server(), send_message(6500, 100))
asyncio.run(main())
```
The example above prints `End marker: b'end'` with python 3.11 but not with python `3.12.0a6+`
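The underlying pitfall is that `socket.send()` and `socket.sendmsg()` may perform a partial write and return how many bytes actually went out, so the unsent tail must be retried rather than dropped. A self-contained sketch of the correct loop (simplified: it flattens the buffers once):

```python
import socket

def send_all(sock, buffers):
    # send()/sendmsg() may write only part of the data and return the
    # count of bytes sent; keep retrying the remainder.
    data = b"".join(buffers)
    total = 0
    while total < len(data):
        total += sock.send(data[total:])

a, b = socket.socketpair()
send_all(a, [b"x" * 1000, b"end"])
a.close()

chunks = []
while chunk := b.recv(4096):
    chunks.append(chunk)
b.close()
received = b"".join(chunks)
print(received[-3:])  # b'end'
```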
# Your environment
- CPython versions tested on: `3.12.0a7`
- Operating system and architecture: macOS 13.2
<!-- gh-linked-prs -->
### Linked PRs
* gh-103463
<!-- /gh-linked-prs -->
| 19d2639d1e6478e2e251479d842bdfa2e8272396 | 9e677406ee6666b32d869ce68c826519ff877445 |
python/cpython | python__cpython-103454 | # [dataclasses] Exception on `__doc__` generation, when object signature cannot be fetched
Reproducible on current main branch:
```python
from dataclasses import dataclass
from typing import TypedDict
@dataclass
class Foo(TypedDict):
bar: int
```
Traceback:
```python
Traceback (most recent call last):
File "/home/eclips4/projects/test.py", line 6, in <module>
@dataclass
^^^^^^^^^
File "/usr/local/lib/python3.12/dataclasses.py", line 1237, in dataclass
return wrap(cls)
^^^^^^^^^
File "/usr/local/lib/python3.12/dataclasses.py", line 1227, in wrap
return _process_class(cls, init, repr, eq, order, unsafe_hash,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dataclasses.py", line 1113, in _process_class
str(inspect.signature(cls)).replace(' -> None', ''))
^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/inspect.py", line 3252, in signature
return Signature.from_callable(obj, follow_wrapped=follow_wrapped,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/inspect.py", line 2996, in from_callable
return _signature_from_callable(obj, sigcls=cls,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/inspect.py", line 2503, in _signature_from_callable
sig = _get_signature_of(call)
^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/inspect.py", line 2556, in _signature_from_callable
raise ValueError(
ValueError: no signature found for builtin type <class 'dict'>
```
However, this code works on 3.8 (though of course we cannot instantiate the class).
Maybe we need to add some check to the `dataclasses` sources.
Also, this error is not critical, so feel free to "close as not planned" =)
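One possible shape of such a check, as a hedged sketch (the `make_doc` helper and its `(...)` fallback are assumptions for illustration, not the actual fix):

```python
import inspect

def make_doc(cls):
    # Sketch: fall back gracefully when the signature cannot be
    # fetched, instead of letting ValueError escape from @dataclass.
    try:
        text = str(inspect.signature(cls)).replace(' -> None', '')
    except (ValueError, TypeError):
        text = '(...)'   # assumed placeholder when no signature exists
    return cls.__name__ + text

class C:
    def __init__(self, x): ...

print(make_doc(C))     # C(x)
print(make_doc(dict))  # dict has no introspectable signature
```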
<!-- gh-linked-prs -->
### Linked PRs
* gh-103454
* gh-103599
<!-- /gh-linked-prs -->
| b57f55c23e15654e9dd77680ff1462603e360b76 | d83faf7f1ba2de95e98e3eeb5ce9009d9cd62192 |
python/cpython | python__cpython-103418 | # sched.scheduler docs improvement
# Documentation
The docs for `sched.scheduler` have an example that uses `time.time` as the `timefunc` https://docs.python.org/3/library/sched.html#sched.scheduler
Using `time.time` instead of `time.monotonic` (the default value of `sched.scheduler`) creates the potential for "strange things" to happen when seasonal time shifts make the system clock go backwards, resulting in bugs in user's code.
It seems like it would be better to use `time.monotonic` so as to introduce better practices to people learning the language.
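For comparison, the docs example works unchanged with the defaults, since `sched.scheduler()` already uses `time.monotonic` and `time.sleep` when no arguments are given:

```python
import sched

# No timefunc/delayfunc arguments: the defaults are time.monotonic
# and time.sleep, which are immune to system clock changes.
s = sched.scheduler()

out = []
s.enter(0.05, 1, out.append, argument=('first',))
s.enter(0.10, 1, out.append, argument=('second',))
s.run()
print(out)  # ['first', 'second']
```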
<!-- gh-linked-prs -->
### Linked PRs
* gh-103418
* gh-103468
* gh-111497
* gh-115908
* gh-115909
<!-- /gh-linked-prs -->
| f2b7ecb7783299c4555e89125ca9ba8e89854643 | d65ed693a8a13a2a7f9b201bda1224d6ae5fcf0e |
python/cpython | python__cpython-103407 | # Modernize `test_inspect` by adding real pos-only parameters
Right now `test_inspect` uses several hacks to pretend that some parameters are positional only:
https://github.com/python/cpython/blob/ecad802e3f9ef7063e7d7fe9f0c037561396ef8e/Lib/test/test_inspect.py#L2465-L2469
https://github.com/python/cpython/blob/ecad802e3f9ef7063e7d7fe9f0c037561396ef8e/Lib/test/test_inspect.py#L2519-L2522
https://github.com/python/cpython/blob/ecad802e3f9ef7063e7d7fe9f0c037561396ef8e/Lib/test/test_inspect.py#L3047-L3052
https://github.com/python/cpython/blob/ecad802e3f9ef7063e7d7fe9f0c037561396ef8e/Lib/test/test_inspect.py#L3559-L3565
https://github.com/python/cpython/blob/ecad802e3f9ef7063e7d7fe9f0c037561396ef8e/Lib/test/test_inspect.py#L4160-L4166
And maybe others.
It makes the code more complex and unclear, and hides the real purpose of these tests.
This is not a design decision, but rather a limitation of the time: the commits are quite old, and pos-only syntax was not available 9 and 11 years ago:
- https://github.com/python/cpython/commit/3f73ca23cfe4e4058689bc5a46622c68ef1b6aa6
- https://github.com/python/cpython/commit/7c7cbfc00fc8d655fc267ff57f8084357858b1db
So, I propose to simplify these tests and make them more correct by using explicit pos-only parameters.
Plus, we can keep one test like this to be sure that changing a parameter kind still works as before. But there's no need to keep the rest of these old tests the way they are.
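With real syntax the intent is explicit and no parameter-kind rewriting hack is needed:

```python
import inspect

# Explicit pos-only syntax (3.8+): everything before `/` is
# positional-only, everything after `*` is keyword-only.
def f(a, b=1, /, c=2, *, d):
    pass

kinds = [p.kind for p in inspect.signature(f).parameters.values()]
print(kinds)
```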
I will send a PR with the fix 👍
<!-- gh-linked-prs -->
### Linked PRs
* gh-103407
* gh-103536
<!-- /gh-linked-prs -->
| 756978117698eeac3af270db25d22599e681bcb3 | a3d313a9e2d894f301557c18788aa1eec16c507c |
python/cpython | python__cpython-103396 | # Improve `typing._GenericAlias.__dir__` coverage
Right now the only test we have for `dir()` on `_GenericAlias` is
```python
def test_genericalias_dir(self):
class Foo(Generic[T]):
def bar(self):
pass
baz = 3
# The class attributes of the original class should be visible even
# in dir() of the GenericAlias. See bpo-45755.
self.assertIn('bar', dir(Foo[int]))
self.assertIn('baz', dir(Foo[int]))
```
And here's how it is defined:
https://github.com/python/cpython/blob/c330b4a3e7b34308ad97d51de9ab8f4e51a0805c/Lib/typing.py#L1323-L1325
We clearly need more tests:
1. That dunder methods are not included
2. Which parts of the `_GenericAlias` API we expose (at least those we consider user-visible, like `__args__`, `__parameters__` and `__origin__`)
3. `_GenericAlias` has subclasses, they are also not tested
I will send a PR for this :)
<!-- gh-linked-prs -->
### Linked PRs
* gh-103396
* gh-103410
<!-- /gh-linked-prs -->
| a28e2ce3fbcc852959324879e0bbf5ba8ecf0105 | ecad802e3f9ef7063e7d7fe9f0c037561396ef8e |
python/cpython | python__cpython-103391 | # `logging.config` `cfg` protocol rejects non-alphanumeric names, contrary to the documentation
## Description
In `logging.config`, when using the `cfg` protocol, names with spaces or non-alphanumeric characters (e.g. `cfg://nested[prop with spaces]`) raise a `ValueError` contrary to what is stated in the documentation.
The [documentation](https://docs.python.org/3/library/logging.config.html#access-to-internal-objects) states that one can use the bracket notation:
> The latter form only needs to be used if the key contains spaces or non-alphanumeric characters.
This does not work however:
```python
from logging.config import BaseConfigurator
config = {'nested': {'prop': 1, 'prop with spaces': 2}} # Create a simple logging config
bc = BaseConfigurator(config)
bc.cfg_convert('nested[prop]') # returns 1 (expected)
bc.cfg_convert('nested[prop with spaces]') # Raises ValueError
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/tomas/dev/cpython/Lib/logging/config.py", line 433, in cfg_convert
raise ValueError('Unable to convert '
ValueError: Unable to convert 'nested[prop with spaces]' at '[prop with spaces]'
```
`ValueError` is also raised for any non-alphanumeric sequence:
```python
bc.cfg_convert('nested[!?]') # Raises
```
The culprit is the regex pattern (`BaseConfigurator.INDEX_PATTERN`) which is used for matching the bracket contents: `^\[\s*(\w+)\s*\]\s*`.
This pattern only matches word characters (`\w`). Simply changing it to `^\[([^\[\]]*)\]\s*` would give us the behavior described in the docs.
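The difference is easy to demonstrate with the two patterns side by side (taken verbatim from the report):

```python
import re

current = re.compile(r'^\[\s*(\w+)\s*\]\s*')    # today's INDEX_PATTERN
proposed = re.compile(r'^\[([^\[\]]*)\]\s*')    # proposed replacement

print(current.match('[prop]').group(1))              # prop
print(current.match('[prop with spaces]'))           # None
print(proposed.match('[prop with spaces]').group(1)) # prop with spaces
print(proposed.match('[!?]').group(1))               # !?
```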
# Your environment
- CPython versions tested on: 3.12.0a6+
- Operating system and architecture: Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-103391
<!-- /gh-linked-prs -->
| 8d4052075ec43ac1ded7d2fa55c474295410bbdc | 135098743a0fae0efbcd98e35458e5bc721702e9 |
python/cpython | python__cpython-103380 | # Vestigial pathlib 'flavour' tests make assertions about internal method
Pathlib used to contain internal "flavour classes" with their own test suites. These were removed in a68e585c8b7b27323f67905868467ce0588a1dae (#31691) and e5b08ddddf1099f04bf65e63017de840bd4b5980 (#101002) along with most of their tests. Only the tests for the internal `_parse_path()` method remain now.
These tests make assertions about an _internal_ method that we're liable to change, rename or remove in future. They should instead make assertions about the public `PurePath.drive`, `root` and `parts` attributes. I think.
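The same facts can be asserted through the public attributes instead of the internal parser, for example:

```python
import pathlib

# Public API assertions: drive/root/parts cover what the internal
# _parse_path() tests were really checking.
p = pathlib.PureWindowsPath('c:/Program Files/PSF')
print(p.drive)  # c:
print(p.root)   # \
print(p.parts)  # ('c:\\', 'Program Files', 'PSF')

q = pathlib.PurePosixPath('/usr/bin/python3')
print(q.drive, q.root, q.parts)
```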
<!-- gh-linked-prs -->
### Linked PRs
* gh-103380
<!-- /gh-linked-prs -->
| 0a675f4bb57d01a5e69f8f58ae934ad7ca501a8d | 8317d51996e68d8bb205385c1d47a9edcd128e7b |
python/cpython | python__cpython-103374 | # Improve documentation for `__mro_entries__`
# Documentation
The `__mro_entries__` method is documented in the data model here: https://docs.python.org/3.11/reference/datamodel.html#resolving-mro-entries.
However, it's not currently possible to reference it using the expected syntax (`~object.__mro_entries__`) in other parts of the documentation.
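For context, a minimal demonstration of the hook itself (PEP 560); `Alias` is an illustrative name:

```python
class Alias:
    # A non-class object that can still appear in a class's bases list;
    # __mro_entries__ substitutes real base classes at creation time.
    def __init__(self, origin):
        self.origin = origin

    def __mro_entries__(self, bases):
        return (self.origin,)

class Base:
    pass

class Derived(Alias(Base)):  # not a type, but class creation works
    pass

assert Derived.__mro__ == (Derived, Base, object)
# The original bases tuple is preserved on __orig_bases__:
assert isinstance(Derived.__orig_bases__[0], Alias)
```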
<!-- gh-linked-prs -->
### Linked PRs
* gh-103374
* gh-103376
* gh-103398
* gh-103440
<!-- /gh-linked-prs -->
| 0ba0ca05d2b56afa0b055db02233e703fe138918 | b22d021ee6309f9b0d43e683056db84cfb6fd3c0 |
python/cpython | python__cpython-103494 | # enum.CONFORM behavior breaks backwards compatibility
Encountered yet another breaking change for enums. There are two problems here:
* CONFORM doesn't "conform" to members that are partially unsupported
* CONFORM does not match the previous behavior for Flag. It used to be more similar to STRICT, but STRICT also doesn't allow partially unsupported members, while Flag in 3.10 did.
Example to reproduce:
```python
import enum
class SkipFlag(enum.Flag):
A = 1
B = 2
C = 4 | B
print(SkipFlag.C in (SkipFlag.A|SkipFlag.C))
class SkipIntFlag(enum.IntFlag):
A = 1
B = 2
C = 4 | B
print(SkipIntFlag.C in (SkipIntFlag.A|SkipIntFlag.C))
print(SkipIntFlag(42).value)
print(SkipFlag(42).value)
```
In Python 3.10.6, this code outputs:
```
True
True
42
Traceback (most recent call last):
...
ValueError: 42 is not a valid SkipFlag
```
In Python 3.11.2:
```
False
True
42
2
```
Having members like C is useful for creating flags that are distinct from B but always imply B.
In a previous bug report comment it was stated that the previous behavior of Flag was CONFORM:
> Actually, won't need a deprecation period, since the proper default for `Flag` is `CONFORM` as that matches previous behavior.
_Originally posted by @ethanfurman in https://github.com/python/cpython/issues/96865#issuecomment-1251372597_
Clearly this is incorrect. The previous behavior was more similar to STRICT, but there appears to be no precise equivalent in Python 3.11 to the old behavior. Using STRICT in Python 3.11, `SkipFlag.A | SkipFlag.C` would fail to be created completely.
<!-- gh-linked-prs -->
### Linked PRs
* gh-103494
* gh-103513
<!-- /gh-linked-prs -->
| 2194071540313e2bbdc7214d77453b9ce3034a5c | efb8a2553c88a295514be228c44fb99ef035e3fa |
python/cpython | python__cpython-107962 | # `pathlib.Path.owner()` and `.group()` should accept a `follow_symlinks` argument
# Feature or enhancement
`pathlib.Path.owner()` and `pathlib.Path.group()` always follow symbolic links. They should gain a new argument, `follow_symlinks`, that defaults to `True`. This will align their API with that of `pathlib.Path.stat()` and `pathlib.Path.chmod()`.
# Pitch
Most `pathlib.Path` methods follow symbolic links, so it's natural for these ownership methods to do the same.
However, doing so unconditionally results in a surprising error when you'd actually like them to look at a link's ownership - for example, to work with a system that uses dangling symbolic links as lock files - and there's no convenient API to call instead.
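For illustration, the proposed behaviour can be approximated today with `os.stat()`; the signature below mirrors `Path.stat()` and is only a sketch, not the final API:

```python
import os
import pwd

def owner(path, *, follow_symlinks=True):
    # With follow_symlinks=False this reports the link's own uid
    # (an lstat), so it works even for dangling symlinks.
    uid = os.stat(path, follow_symlinks=follow_symlinks).st_uid
    return pwd.getpwuid(uid).pw_name
```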
# Previous discussion
#65521 brought up the same issue but proposed fixing it by changing the system call from `stat` to `lstat`, which was deemed too disruptive.
#18864 added `follow_symlinks` to `pathlib.Path.stat` and `pathlib.Path.chmod`. [A commenter](https://github.com/python/cpython/pull/18864#issuecomment-645263576) asked about adding this argument to other methods, namely `is_file()` and `is_dir()`.
I'm happy to contribute the code if this seems worth doing.
<!-- gh-linked-prs -->
### Linked PRs
* gh-107962
<!-- /gh-linked-prs -->
| a1551b48eebb4a68fda031b5ee9e5cbde8d924dd | 2ed20d3bd84fdcf511235cc473a241c5e8278a91 |
python/cpython | python__cpython-103638 | # String escape sequences should be listed in the stdtypes page
# Documentation
Since string escape sequences are FREQUENTLY used and referred to, they should be listed on the [Built In Types](https://docs.python.org/3/library/stdtypes.html) page, not tucked away in an obscure [lexical analysis](https://docs.python.org/3/reference/lexical_analysis.html) page :( Finding it from a search engine is just HORRIBLE.
<!-- gh-linked-prs -->
### Linked PRs
* gh-103638
* gh-114907
* gh-114908
<!-- /gh-linked-prs -->
| d29f57f6036353b4e705a42637177442bf7e07e5 | d0f1307580a69372611d27b04bbf2551dc85a1ef |
python/cpython | python__cpython-103359 | # Support Formatter defaults parameter in logging.config.dictconfig and fileconfig
# Feature or enhancement
Currently the logging.Formatter defaults parameter is unsupported in logging.config methodologies. Simply support it.
# Pitch
There's no reason for inconsistency between the rest of the variables which are supported, and this that isn't.
The implementation is fast and simple, and was requested on the original issue.
# Previous discussion
https://github.com/python/cpython/pull/20668#issuecomment-646049778
https://github.com/python/cpython/issues/85061#issuecomment-1416803343
I will create a PR right after this issue with the addition.
<!-- gh-linked-prs -->
### Linked PRs
* gh-103359
<!-- /gh-linked-prs -->
| 8f54302ab49a07e857843f1a551db5ddb536ce56 | 449bf2a76b23b97a38158d506bc30d3ebe006321 |
python/cpython | python__cpython-103335 | # `Tools/c-analyzer/cpython/_parser.py` should be ignored from indentation check in `patchcheck`
Right now if you touch this file, it will fail the CI: https://dev.azure.com/Python/cpython/_build/results?buildId=124277&view=logs&j=256d7e09-002a-52d7-8661-29ee3960640e
with:
```
Getting the list of files that have been added/changed ... 5 files
Fixing Python file whitespace ... 1 file:
Tools/c-analyzer/cpython/_parser.py
```
This happens because this file contains tab separated values that are needed for c-analyzer.
I propose to ignore this file by default in `patchcheck`, because it is a false positive.
PR is in the works.
CC @ericsnowcurrently
<!-- gh-linked-prs -->
### Linked PRs
* gh-103335
<!-- /gh-linked-prs -->
| 40db5c65b7ca4e784613b6122106a92576aba2d6 | dc604a8c58af748ce25aee1af36b6521a3592fa5 |
python/cpython | python__cpython-103352 | # Pickling is losing some fields on exceptions
# Bug report
Say we have an AttributeError with some fields on it (like name). When we pickle and unpickle it, we can see that .name (at least) is lost (and set to None). Ideally those fields should still be intact on the unpickled instance.
```
C:\Users\csm10495\Desktop>python
Python 3.11.0 (main, Oct 24 2022, 18:26:48) [MSC v.1933 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import pickle
>>> a = AttributeError("test text", name="test name", obj="test obj")
>>> a.name
'test name'
>>> pickle.loads(pickle.dumps(a)).name
>>> pickle.loads(pickle.dumps(a))
AttributeError('test text')
```
# Your environment
```
C:\Users\csm10495\Desktop>python --version
Python 3.11.0
C:\Users\csm10495\Desktop>ver
Microsoft Windows [Version 10.0.19045.2364]
C:\Users\csm10495\Desktop>
```
Same thing seems to happen on 3.10.7 and 3.11.3 as well.
<!-- gh-linked-prs -->
### Linked PRs
* gh-103352
<!-- /gh-linked-prs -->
| 79b17f2cf0e1a2688bd91df02744f461835c4253 | cf720acfcbd8c9c25a706a4b6df136465a803992 |
python/cpython | python__cpython-103358 | # 3.12.0a7 changes behavior of PropertyMock
# Bug report
3.12.0a7 changed the behavior of `unittest.mock.PropertyMock`
Setup:
```
from unittest.mock import MagicMock, PropertyMock
m = MagicMock()
p1 = PropertyMock(return_value=3)
p2 = PropertyMock(side_effect=ValueError)
type(m).foo = p1
type(m).bar = p2
```
In Python 3.12.0a6 and earlier, `m.foo` evaluates as 3, and `m.bar` raises a ValueError.
In Python 3.12.0a7, `m.foo` evaluates as 3, and `m.bar` returns `<MagicMock name='mock.bar' id=...>`
# Your environment
- CPython versions tested on: 3.12.0a7, 3.12.0a6, 3.11.2, 3.10.10, 3.9.16, 3.8.16
- Operating system and architecture:
- macOS Ventura 13.2.1, on an M1 MacBook Pro.
- GitHub Actions macOS 12 x86_64
- GitHub Actions Ubuntu 22.04
- GitHub Actions Windows Server 2022
<!-- gh-linked-prs -->
### Linked PRs
* gh-103358
* gh-103364
<!-- /gh-linked-prs -->
| 26c65980dc6d842879d133165bb7c461d98cc6c7 | 91794e587306343619a451473efae22aa9f4b9bd |
python/cpython | python__cpython-103331 | # Remove `Python/importlib.h`
https://github.com/python/cpython/blob/main/Python/importlib.h doesn't seem to be used anymore. I _think_ it can simply be deleted and then a quick search through the code base to remove any references (it's mentioned in some comments).
<!-- gh-linked-prs -->
### Linked PRs
* gh-103331
<!-- /gh-linked-prs -->
| 7f3c10650385907b5a2234edb2b1334cafd47a0a | 52f96d3ea39fea4c16e26b7b10bd2db09726bd7c |
python/cpython | python__cpython-103324 | # Get the "Current" Thread State from a Thread-local
(This is a revival of gh-84702, which @vstinner worked on several years ago.)
For a per-interpreter GIL (PEP 684) we need to start storing the "current" thread state in a thread-local variable, rather than `_PyRuntime.tstate_current`. There may be other benefits to the approach if we can take advantage of native compiler support (e.g. C11's `__thread_local`).
Also see https://discuss.python.org/t/how-to-get-the-current-thread-state/22655.
<!-- gh-linked-prs -->
### Linked PRs
* gh-103324
* gh-104171
* gh-114593
<!-- /gh-linked-prs -->
| f8abfa331421e2c28388c5162f37820495e3c2ee | 7ef614c1adad2b8857442bf0fea649891b591109 |
python/cpython | python__cpython-103301 | # `patchcheck.py` hangs infinitely on branches with merges from `main`
# Bug report
`Tools/patchcheck/patchcheck.py` stalls on `Getting base branch for PR ...` on my Windows system if a processed branch has a moderately large diff relative to `main`.
It's easy to achieve with merging `main` into the branch. For example, `git diff --name-status origin/main` gives 8KB worth of text for gh-103244:
<details>
```plain
M .devcontainer/devcontainer.json
M Doc/howto/enum.rst
M Doc/library/enum.rst
M Doc/library/functions.rst
M Doc/library/http.client.rst
M Doc/library/multiprocessing.rst
M Doc/library/shutil.rst
M Doc/library/socket.rst
M Doc/library/sys.rst
D Doc/tools/.nitignore
A Doc/tools/clean-files.txt
M Doc/tools/touch-clean-files.py
M Doc/whatsnew/3.12.rst
M Include/patchlevel.h
M Include/pymacro.h
M Lib/enum.py
M Lib/http/client.py
M Lib/inspect.py
M Lib/pathlib.py
M Lib/pydoc_data/topics.py
M Lib/shutil.py
M Lib/test/test_exceptions.py
M Lib/test/test_httplib.py
M Lib/test/test_pathlib.py
M Lib/test/test_pickle.py
M Lib/test/test_shutil.py
M Lib/test/test_sys_settrace.py
M Lib/test/test_webbrowser.py
M Lib/test/test_zipfile/test_core.py
M Lib/typing.py
M Lib/webbrowser.py
M Lib/zipfile/__init__.py
M Mac/BuildScript/resources/Welcome.rtf
M Makefile.pre.in
M Misc/ACKS
D Misc/NEWS.d/3.12.0a7.rst
D Misc/NEWS.d/next/Build/2023-02-11-05-31-05.gh-issue-99069.X4LDvY.rst
A Misc/NEWS.d/next/Build/2023-03-15-02-03-39.gh-issue-102711.zTkjts.rst
A Misc/NEWS.d/next/Build/2023-03-23-20-58-56.gh-issue-102973.EaJUrw.rst
A Misc/NEWS.d/next/C API/2023-02-18-00-55-14.gh-issue-102013.83mrtI.rst
A Misc/NEWS.d/next/Core and Builtins/2023-02-21-17-22-06.gh-issue-101865.fwrTOA.rst
A Misc/NEWS.d/next/Core and Builtins/2023-02-21-23-42-39.gh-issue-102027.fQARG0.rst
A Misc/NEWS.d/next/Core and Builtins/2023-02-26-11-43-56.gh-issue-102255.cRnI5x.rst
A Misc/NEWS.d/next/Core and Builtins/2023-02-26-13-12-55.gh-issue-102213.fTH8X7.rst
A Misc/NEWS.d/next/Core and Builtins/2023-02-27-15-48-31.gh-issue-102300.8o-_Mt.rst
A Misc/NEWS.d/next/Core and Builtins/2023-03-02-13-49-21.gh-issue-102281.QCuu2N.rst
A Misc/NEWS.d/next/Core and Builtins/2023-03-03-23-21-16.gh-issue-102406.XLqYO3.rst
A Misc/NEWS.d/next/Core and Builtins/2023-03-04-06-48-34.gh-issue-102397.ACJaOf.rst
A Misc/NEWS.d/next/Core and Builtins/2023-03-06-10-02-22.gh-issue-101291.0FT2QS.rst
A Misc/NEWS.d/next/Core and Builtins/2023-03-08-08-37-36.gh-issue-102491.SFvvsC.rst
A Misc/NEWS.d/next/Core and Builtins/2023-03-09-13-57-35.gh-issue-90997.J-Yhn2.rst
A Misc/NEWS.d/next/Core and Builtins/2023-03-14-00-11-46.gh-issue-102594.BjU-m2.rst
A Misc/NEWS.d/next/Core and Builtins/2023-03-16-14-44-29.gh-issue-102755.j1GxlV.rst
A Misc/NEWS.d/next/Core and Builtins/2023-03-16-17-24-44.gh-issue-102701.iNGVaS.rst
A Misc/NEWS.d/next/Core and Builtins/2023-03-17-12-09-45.gh-issue-100982.Pf_BI6.rst
A Misc/NEWS.d/next/Core and Builtins/2023-03-17-13-43-34.gh-issue-102778.ANDv8I.rst
A Misc/NEWS.d/next/Core and Builtins/2023-03-18-02-36-39.gh-issue-101975.HwMR1d.rst
A Misc/NEWS.d/next/Core and Builtins/2023-03-21-00-46-36.gh-issue-102859.PRkGca.rst
A Misc/NEWS.d/next/Core and Builtins/2023-03-24-02-50-33.gh-issue-89987.oraTzh.rst
A Misc/NEWS.d/next/Core and Builtins/2023-03-31-12-22-25.gh-issue-102192.gYxJP_.rst
D Misc/NEWS.d/next/Documentation/2023-03-10-04-59-35.gh-issue-86094.zOYdy8.rst
A Misc/NEWS.d/next/Documentation/2023-03-29-14-51-39.gh-issue-103112.XgGSEO.rst
D Misc/NEWS.d/next/Library/2018-07-16-14-10-29.bpo-22708.592iRR.rst
A Misc/NEWS.d/next/Library/2019-03-15-22-50-27.bpo-36305.Pbkv6u.rst
D Misc/NEWS.d/next/Library/2021-12-03-23-00-56.bpo-44844.tvg2VY.rst
A Misc/NEWS.d/next/Library/2022-04-11-18-34-33.gh-issue-72346.pC7gnM.rst
A Misc/NEWS.d/next/Library/2022-06-30-21-28-41.gh-issue-94440.LtgX0d.rst
A Misc/NEWS.d/next/Library/2022-07-09-13-07-30.gh-issue-94684.nV5yno.rst
A Misc/NEWS.d/next/Library/2022-07-30-23-01-43.gh-issue-95495.RA-q1d.rst
A Misc/NEWS.d/next/Library/2022-09-19-08-12-58.gh-issue-96931.x0WQhh.rst
A Misc/NEWS.d/next/Library/2022-10-10-19-14-51.gh-issue-98169.DBWIxL.rst
A Misc/NEWS.d/next/Library/2022-11-24-13-23-07.gh-issue-48330.6uAX9F.rst
A Misc/NEWS.d/next/Library/2022-12-09-11-21-38.gh-issue-100131.v863yR.rst
A Misc/NEWS.d/next/Library/2022-12-16-10-27-58.gh-issue-89727.y64ZLM.rst
A Misc/NEWS.d/next/Library/2022-12-20-10-55-14.gh-issue-100372.utfP65.rst
A Misc/NEWS.d/next/Library/2023-01-27-14-51-07.gh-issue-101313.10AEXh.rst
A Misc/NEWS.d/next/Library/2023-02-09-19-40-41.gh-issue-101673.mX-Ppq.rst
A Misc/NEWS.d/next/Library/2023-02-18-23-03-50.gh-issue-98886.LkKGWv.rst
A Misc/NEWS.d/next/Library/2023-02-19-01-49-46.gh-issue-102038.n3if3D.rst
A Misc/NEWS.d/next/Library/2023-02-20-16-47-56.gh-issue-102069.FS7f1j.rst
A Misc/NEWS.d/next/Library/2023-02-21-11-56-16.gh-issue-102103.Dj0WEj.rst
A Misc/NEWS.d/next/Library/2023-02-26-17-29-57.gh-issue-79940.SAfmAy.rst
A Misc/NEWS.d/next/Library/2023-03-03-19-53-08.gh-issue-102378.kRdOZc.rst
A Misc/NEWS.d/next/Library/2023-03-04-20-58-29.gh-issue-74468.Ac5Ew_.rst
A Misc/NEWS.d/next/Library/2023-03-08-23-08-38.gh-issue-102519.wlcsFI.rst
A Misc/NEWS.d/next/Library/2023-03-10-13-21-16.gh-issue-102578.-gujoI.rst
A Misc/NEWS.d/next/Library/2023-03-10-13-51-21.gh-issue-100112.VHh4mw.rst
A Misc/NEWS.d/next/Library/2023-03-13-12-05-55.gh-issue-102615.NcA_ZL.rst
A Misc/NEWS.d/next/Library/2023-03-13-18-27-00.gh-issue-102670.GyoThv.rst
A Misc/NEWS.d/next/Library/2023-03-16-08-17-29.gh-issue-102748.WNACpI.rst
A Misc/NEWS.d/next/Library/2023-03-16-16-43-04.gh-issue-78530.Lr8eq_.rst
A Misc/NEWS.d/next/Library/2023-03-18-14-59-21.gh-issue-88965.kA70Km.rst
A Misc/NEWS.d/next/Library/2023-03-19-15-30-59.gh-issue-102828.NKClXg.rst
A Misc/NEWS.d/next/Library/2023-03-20-12-21-19.gh-issue-102839.RjRi12.rst
A Misc/NEWS.d/next/Library/2023-03-21-15-17-07.gh-issue-102871.U9mchn.rst
A Misc/NEWS.d/next/Library/2023-03-22-16-15-18.gh-issue-102780.NEcljy.rst
A Misc/NEWS.d/next/Library/2023-03-23-13-34-33.gh-issue-102947.cTwcpU.rst
A Misc/NEWS.d/next/Library/2023-03-25-02-08-05.gh-issue-103023.Qfn7Hl.rst
A Misc/NEWS.d/next/Library/2023-03-25-16-57-18.gh-issue-102433.L-7x2Q.rst
A Misc/NEWS.d/next/Library/2023-03-26-20-54-57.gh-issue-103046.xBlA2l.rst
A Misc/NEWS.d/next/Library/2023-03-27-15-01-16.gh-issue-103056.-Efh5Q.rst
A Misc/NEWS.d/next/Library/2023-03-27-19-21-51.gh-issue-102549.NQ6Nlv.rst
A Misc/NEWS.d/next/Library/2023-03-28-05-14-59.gh-issue-103068.YQTmrA.rst
A Misc/NEWS.d/next/Library/2023-03-28-15-12-53.gh-issue-103085.DqNehf.rst
D Misc/NEWS.d/next/Library/2023-04-02-17-51-08.gh-issue-103193.xrZbM1.rst
D Misc/NEWS.d/next/Library/2023-04-02-22-04-26.gh-issue-75586.526iJm.rst
A Misc/NEWS.d/next/Tests/2023-01-27-18-10-40.gh-issue-101377.IJGpqh.rst
A Misc/NEWS.d/next/Tests/2023-03-08-13-54-20.gh-issue-102537.Vfplpb.rst
A Misc/NEWS.d/next/Tests/2023-03-23-23-25-18.gh-issue-102980.Zps4QF.rst
A Misc/NEWS.d/next/Tests/2023-04-05-06-45-20.gh-issue-103186.640Eg-.rst
A Misc/NEWS.d/next/Tools-Demos/2023-03-21-01-27-07.gh-issue-102809.2F1Byz.rst
A Misc/NEWS.d/next/Windows/2023-02-22-17-26-10.gh-issue-99726.76t957.rst
A Misc/NEWS.d/next/Windows/2023-03-14-10-52-43.gh-issue-102690.sbXtqk.rst
D Misc/NEWS.d/next/macOS/2023-04-04-13-37-28.gh-issue-103207.x0vvQp.rst
M Modules/_pickle.c
M Modules/_ssl.c
M Modules/_ssl.h
M Modules/_tkinter.c
M Modules/_winapi.c
M Modules/_xxsubinterpretersmodule.c
M Modules/clinic/_pickle.c.h
M Modules/clinic/_winapi.c.h
M Modules/posixmodule.c
M PC/launcher.c
M PC/launcher2.c
M Python/ceval_gil.c
M Python/import.c
M Python/initconfig.c
M Python/sysmodule.c
M README.rst
M Tools/c-analyzer/cpython/globals-to-fix.tsv
```
</details>
The root cause is that the script waits for Git using `Popen.wait()`. According to [the documentation](https://docs.python.org/3/library/subprocess.html#subprocess.Popen.wait):
> This will deadlock when using `stdout=PIPE` or `stderr=PIPE` and the child process generates enough output to a pipe such that it blocks waiting for the OS pipe buffer to accept more data. Use `Popen.communicate()` when using pipes to avoid that.
As a result, the Azure Pipelines PR buildbot, with a larger pipe buffer, runs the script with no issue, while my desktop machine hangs.
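A minimal illustration of the safe pattern (the child command is a stand-in for the actual `git diff` call):

```python
import subprocess
import sys

# The child writes far more than a typical 64 KiB pipe buffer can hold.
proc = subprocess.Popen(
    [sys.executable, '-c', "print('x' * 1_000_000)"],
    stdout=subprocess.PIPE, text=True,
)
# proc.wait() here could deadlock once the pipe fills up;
# communicate() drains stdout while waiting for the child to exit.
out, _ = proc.communicate()
assert proc.returncode == 0
assert len(out) >= 1_000_000
```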
# Your environment
- CPython versions tested on: Python 3.12.0a7+ (heads/main-dirty:de18267685, Apr 5 2023, 22:28:44) [MSC v.1929 64 bit (AMD64)] on win32
- Operating system and architecture: Windows 10 Home 22H2
<!-- gh-linked-prs -->
### Linked PRs
* gh-103301
<!-- /gh-linked-prs -->
| 86d20441557bedbea3dadd5d0818a492148335bd | d9305f8e9d3e0f6267286c2da4b24e97b7a569f2 |
python/cpython | python__cpython-103546 | # expose API for writing perf map files
https://github.com/python/cpython/pull/96123 added support for CPython to write `/tmp/perf-<pid>.map` files, associating instruction address ranges with a human-readable frame name for the Linux `perf` profiler.
Two external Python JIT compilers, [Cinder](https://github.com/facebookincubator/cinder/blob/cinder/3.10/Jit/perf_jitdump.cpp#L177-L185) and [Pyston](https://github.com/pyston/pyston/blob/dee48edd457518cf3d2cb93648f9cffe8aad6bce/Python/aot_ceval_jit.c#L5782-L5796), both also independently write to perf map files.
Since perf map files are one-per-process, multiple separate libraries trying to write perf map entries independently can lead to file corruption from simultaneous writes.
It's unlikely for both Cinder and Pyston JITs to be used in the same process, but it's quite reasonable to use one of these JITs along with CPython's native perf trampoline support.
In order for this to be safe, CPython should expose a thread-safe API for writing perf map entries that all these clients can use.
(We've backported the 3.12 perf trampolines feature to Cinder 3.10 and experimented with using it, and we've seen this write corruption problem occur in practice; it's not just a theoretical risk we identified.)
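For a sense of scale, the file format is trivial: one `<start-addr> <size> <name>` line per entry, so the contended resource is just the shared file. A rough sketch of a lock-protected writer (names and signature are illustrative only, not the proposed C API):

```python
import threading

_perf_map_lock = threading.Lock()

def write_perf_map_entry(path, code_addr, code_size, entry_name):
    # perf expects each line as: "<start addr hex> <size hex> <name>\n"
    line = f"{code_addr:x} {code_size:x} {entry_name}\n"
    with _perf_map_lock:              # serialize writers in this process
        with open(path, "a") as f:
            f.write(line)
```

A real implementation also has to coordinate across extension modules (e.g. through a C-level API), since a Python-level lock only protects callers that share it.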
cc @pablogsal , @kmod
<!-- gh-linked-prs -->
### Linked PRs
* gh-103546
* gh-104811
* gh-104823
<!-- /gh-linked-prs -->
| be0c106789322273f1f76d232c768c09880a14bd | 2e91c7e62609ef405901dd5c4cb9d5aa794591ab |
python/cpython | python__cpython-103307 | # `ast.get_source_segment` is slower than it needs to be because it reads every line of the source.
# Bug report
There is a private function `_splitlines_no_ff` which is only ever called in `ast.get_source_segment`. This function splits the entire source given to it, but `ast.get_source_segment` only needs at most `node.end_lineno` lines to work.
https://github.com/python/cpython/blob/1acdfec359fdf3db936168480be0f4157273c200/Lib/ast.py#L308-L330
https://github.com/python/cpython/blob/1acdfec359fdf3db936168480be0f4157273c200/Lib/ast.py#L344-L378
If, for example, you want to extract an import line from a very long file, this can seriously degrade performance.
The introduction of a `max_lines` kwarg in `_splitlines_no_ff` which functions like `maxsplit` in `str.split` would minimize unneeded work. An implementation of the proposed fix is below (which makes my use case twice as fast):
```diff
--- a/Lib/ast.py
+++ b/Lib/ast.py
@@ -305,11 +305,16 @@ def get_docstring(node, clean=True):
return text
-def _splitlines_no_ff(source):
+def _splitlines_no_ff(source, max_lines=-1):
"""Split a string into lines ignoring form feed and other chars.
This mimics how the Python parser splits source code.
+
+ If max_lines is given, at most max_lines will be returned. If max_lines is not
+ specified or negative, then there is no limit on the number of lines returned.
"""
+ if not max_lines:
+ return []
idx = 0
lines = []
next_line = ''
@@ -323,6 +328,8 @@ def _splitlines_no_ff(source):
idx += 1
if c in '\r\n':
lines.append(next_line)
+ if max_lines == len(lines):
+ return lines
next_line = ''
if next_line:
@@ -360,7 +367,7 @@ def get_source_segment(source, node, *, padded=False):
except AttributeError:
return None
- lines = _splitlines_no_ff(source)
+ lines = _splitlines_no_ff(source, max_lines=end_lineno + 1)
if end_lineno == lineno:
return lines[lineno].encode()[col_offset:end_col_offset].decode()
```
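For reference, a small reproducible call to the public entry point this affects:

```python
import ast

source = "x = 1\nimport os\ny = 2\n"
tree = ast.parse(source)
# Only the first two lines of `source` are actually needed here:
assert ast.get_source_segment(source, tree.body[1]) == "import os"
```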
# Your environment
- CPython versions tested on: 3.11
<!-- gh-linked-prs -->
### Linked PRs
* gh-103307
<!-- /gh-linked-prs -->
| 36860134a9eda8df5af5a38d6c7533437c594c2f | f0ed293f6aec1c2ed22725301b77d6ccedc2d486 |
python/cpython | python__cpython-103336 | # Maximum recursion depth exceeded in __getattr__().
# Bug report
We're hitting `RecursionError: maximum recursion depth exceeded` in Django test suite with Python 3.12.0a7 when accessing an attribute with a custom `__getattr__()` method:
```
ERROR: test_access_warning (deprecation.test_storages.DefaultStorageDeprecationTests.test_access_warning)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/django/tests/deprecation/test_storages.py", line 128, in test_access_warning
settings.DEFAULT_FILE_STORAGE
File "/django/django/conf/__init__.py", line 83, in __getattr__
if (_wrapped := self._wrapped) is empty:
^^^^^^^^^^^^^
File "/django/django/conf/__init__.py", line 83, in __getattr__
if (_wrapped := self._wrapped) is empty:
^^^^^^^^^^^^^
File "/django/django/conf/__init__.py", line 83, in __getattr__
if (_wrapped := self._wrapped) is empty:
^^^^^^^^^^^^^
[Previous line repeated 790 more times]
RecursionError: maximum recursion depth exceeded
```
See [affected test](https://github.com/django/django/blob/fdf0a367bdd72c70f91fb3aed77dabbe9dcef69f/tests/deprecation/test_storages.py#L47-L49) and [`LazySettings.__getattr__()`](https://github.com/django/django/blob/fdf0a367bdd72c70f91fb3aed77dabbe9dcef69f/django/conf/__init__.py#L81-L96).
Bisected to the aa0a73d1bc53dcb6348a869df1e775138991e561.
I'd try to prepare a small regression test.
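A minimal regression-test sketch distilled from the Django code above (`Lazy` and `_setup` are illustrative names): reading a class attribute inside `__getattr__` must not re-enter `__getattr__`, because normal lookup succeeds for it.

```python
import types

empty = object()

class Lazy:
    _wrapped = empty  # class attribute, found by normal lookup

    def __getattr__(self, name):
        # __getattr__ only runs when normal lookup fails, so this
        # self._wrapped read must not recurse back into __getattr__.
        if (_wrapped := self._wrapped) is empty:
            self._setup()
            _wrapped = self._wrapped
        return getattr(_wrapped, name)

    def _setup(self):
        self._wrapped = types.SimpleNamespace(value=42)

assert Lazy().value == 42
```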
# Your environment
- CPython versions tested on: Python 3.12.0a7
- Operating system and architecture: x86_64 GNU/Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-103336
* gh-103351
<!-- /gh-linked-prs -->
| 5d7d86f2fdbbfc23325e7256ee289bf20ce7124e | a90863c993157ae65e040476cf46abd73ae54b4a |
python/cpython | python__cpython-103267 | # Misspelling Speilberg vs. Spielberg in example code for bisect() function
The Python documentation has example code for the `bisect` function [HERE](https://docs.python.org/3/library/bisect.html#examples). In the second example, the surname of the movie director Steven Spielberg is written incorrectly as *Speilberg* in every occurrence. His surname should be corrected from *Speilberg* to the correct spelling *Spielberg*.
Perhaps his Wikipedia article should be enough to prove my point: [Steven Spielberg Wikipedia article](https://en.wikipedia.org/wiki/Steven_Spielberg)
I am politely asking to fix this misspelling in the example code for every such occurrence. Thank you.
<!-- gh-linked-prs -->
### Linked PRs
* gh-103267
* gh-103327
<!-- /gh-linked-prs -->
| f0424ba4b6663d2a4240239266bea08aff46bb6c | a653c32d08abfa34d70186542479e3a7e704f8bd |
python/cpython | python__cpython-103286 | # hmac algorithm fallback is broken
`hmac` won't fall back if OpenSSL is available, the requested algorithm isn't in OpenSSL, but the algorithm is in `hashlib`.
If you [monkey]patch `hashlib` to include a new algorithm, you can't use that algorithm from `hmac` by name.
It appears that the OpenSSL implementation (known as `_hashlib` from inside `hashlib`, or `_hashopenssl` from inside `hmac`) doesn't actually raise an `UnsupportedDigestmodError`, but rather its base class `ValueError`.
### MRE
```pytb
# The following is MRE-specific to easily introduce a new name
# My use case involves a monkeypatch, but imagine any algorithm NOT implemented by OpenSSL, ONLY by hashlib
>>> hashlib.__builtin_constructor_cache['myhashalg'] = hashlib.md5
>>> hashlib.new('myhashalg', b'').digest().hex() # confirm hashlib can use that name
'd41d8cd98f00b204e9800998ecf8427e'
>>> hmac.digest(b'key', b'message', 'myhashalg')
Traceback (most recent call last):
File "<pyshell#nnn>", line 1, in <module>
hmac.digest(b'key', b'message', 'myhashalg')
File "C:\Python311\Lib\hmac.py", line 198, in digest
return _hashopenssl.hmac_digest(key, msg, digest)
ValueError: unsupported hash type myhashalg
```
The exception goes unhandled at https://github.com/python/cpython/blob/933dfd7504e521a27fd8b94d02b79f9ed08f4631/Lib/hmac.py#L199 instead of falling through to let `hashlib` handle it.
This also shows up in the stateful (non-oneshot) code at https://github.com/python/cpython/blob/933dfd7504e521a27fd8b94d02b79f9ed08f4631/Lib/hmac.py#L61
Passing a callable works as intended with my monkeypatch, so I have a workaround. However, I'd argue that either `hmac` is trying to catch the wrong thing, or OpenSSL is throwing the wrong thing, so some sort of fix is called for.
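For completeness, the callable workaround looks like this:

```python
import hashlib
import hmac

# Passing the hash constructor itself instead of a name string
# sidesteps the by-name lookup that raises ValueError:
mac = hmac.new(b'key', b'message', digestmod=hashlib.md5)
assert mac.digest() == hmac.digest(b'key', b'message', hashlib.md5)
```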
### Environment
Windows 10 64-bit
Python 3.11.2
### Possible fixes
- Change `_hashopenssl.hmac_digest` to correctly raise an `UnsupportedDigestmodError` (this looks like what was intended, given #24920)
- Catch a `ValueError` instead (as `UnsupportedDigestmodError` is derived from `ValueError` this would work, but may not be what is truly intended)
- Something closer to what existed before like https://github.com/tiran/cpython/blob/837f9e42e3a1ad03b340661afe85e67d2719334f/Lib/hmac.py#L181 ??
<!-- gh-linked-prs -->
### Linked PRs
* gh-103286
* gh-103328
<!-- /gh-linked-prs -->
| efb0a2cf3adf4629cf4669cb558758fb78107319 | f0424ba4b6663d2a4240239266bea08aff46bb6c |
python/cpython | python__cpython-103378 | # Can't build '_ssl' extension with Python 3.11.2 and OpenSSL 3.1
# Bug report
I have tried to build latest tagged release of Python v3.11.2 on a CentOS7 system with locally built OpenSSL library v3.1.0. During configuration step ssl library and headers are properly found. When I build Python I encounter error:
```
building '_ssl' extension
gcc -pthread .... /Modules/_ssl.o
_ssl.c: In function '_ssl__SSLContext_set_ecdh_curve':
/.../Python/Modules/_ssl.c:4344:11: error: implicit declaration of function 'EC_KEY_new_by_curve_name' [-Werror=implicit-function-declaration]
4344 | key = EC_KEY_new_by_curve_name(nid);
| ^~~~~~~~~~~~~~~~~~~~~~~~
/.../Python/Modules/_ssl.c:4344:9: warning: assignment to 'EC_KEY *' {aka 'struct ec_key_st *'} from 'int' makes pointer from integer without a cast [-Wint-conversion]
4344 | key = EC_KEY_new_by_curve_name(nid);
| ^
/.../Python/Modules/_ssl.c:4350:5: error: implicit declaration of function 'EC_KEY_free'; did you mean 'EVP_KEM_free'? [-Werror=implicit-function-declaration]
4350 | EC_KEY_free(key);
| ^~~~~~~~~~~
| EVP_KEM_free
cc1: some warnings being treated as errors
```
Looking inside OpenSSL sources for mentioned functions `EC_KEY_new_by_curve_name` and `EC_KEY_free` I can see in the file `include/openssl/ec.h` that they have been deprecated. I suppose that these functions need to be removed from Python's file `Python/Modules/_ssl.c` but I don't know anything about the details of OpenSSL so I would appreciate any help what to do in this case.
# Your environment
- virtual computer running CentOS7
- GCC 12.2
- Python 3.11.2
- OpenSSL 3.1
- configure called with:
```
./configure --prefix="$INSTALLROOT"
${OPENSSL_ROOT:+--with-openssl=$OPENSSL_ROOT}
${OPENSSL_ROOT:+--with-openssl-rpath=no}
--enable-shared --with-system-expat --with-ensurepip=install
```
<!-- gh-linked-prs -->
### Linked PRs
* gh-103378
* gh-103382
<!-- /gh-linked-prs -->
| 35167043e3a21055a94cf3de6ceccd1585554cb8 | 0ba0ca05d2b56afa0b055db02233e703fe138918 |
python/cpython | python__cpython-103238 | # Polish documentation of pdb
# Documentation
pdb's documentation has not been polished for a while and there are a couple of issues with it.
First of all, the introductory part was pretty awkward. It starts with `pdb.run()`, which no one uses for actual debugging. I've never seen a 3rd-party tutorial for pdb starting with this usage, or even mentioning it. The example also uses a secluded module `mymodule` in which users have no idea what's going on.
Starting with the most unusual (and arguably most useless) usage of `pdb` is not friendly to the users and could push users away.
Then, `pdb.py` is used multiple times in the documentation, which is very unusual. Users never need to know about `pdb.py`, and no other module's documentation refers to the module as `XX.py` when describing `-m` usage. Most of the modules that can be invoked with `-m` on the command line use the phrase "command line" in their usage explanation ([zipfile](https://docs.python.org/3/library/zipfile.html#command-line-interface), [unittest](https://docs.python.org/3/library/unittest.html#command-line-interface), [webbrowser](https://docs.python.org/3/library/webbrowser.html)), so I modified that a bit.
Moreover, the example we were giving was not reproducible as it uses a hidden module. We should give users an example that is easy to understand and reproduce.
A couple of the debugger commands also need some polish.
* `where` - we should show the arrow (`>`) because there's also `->` in the output of `where` command and it could also be referred to as an "arrow".
* `args` - we should clarify that we are listing the **current** value of the arguments, because the arguments could have changed since they were passed into the function.
* `display` - added a detailed explanation of expressions of mutable result, which was discussed in https://github.com/python/cpython/issues/102886
* `retval` - clarified the ambiguous usage "a function" to "the current function" as all the other commands use.
<!-- gh-linked-prs -->
### Linked PRs
* gh-103238
* gh-103466
<!-- /gh-linked-prs -->
| 449bf2a76b23b97a38158d506bc30d3ebe006321 | 2f41a009b7311a4b44bae5b3583cde3d6d10d8d1 |
python/cpython | python__cpython-103221 | # `ntpath.join()` adds slash to incomplete UNC drive with trailing slash
`ntpath.join()` incorrectly inserts an additional slash when joining an argument onto an incomplete UNC drive with a trailing slash:
```python
>>> import ntpath
>>> ntpath.join('\\\\server\\share\\foo\\', 'bar')
'\\\\server\\share\\foo\\bar' # ok
>>> ntpath.join('\\\\server\\share\\', 'foo')
'\\\\server\\share\\foo' # ok
>>> ntpath.join('\\\\server\\', 'share')
'\\\\server\\\\share' # wrong!
>>> ntpath.join('\\\\', 'server')
'\\\\\\server' # wrong!
```
Before 005e69403d638f9ff8f71e59960c600016e101a4 (3.12), the last test case succeeds because `splitdrive()` doesn't identify `'\\\\'` as a UNC drive. But the third test case is reproducible going back to 3.11 and 3.10.
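The rule the fix needs can be sketched in pure Python. `join_drive` below is an illustrative helper, not the actual patch; it leans on whatever `splitdrive()` returns on the running version:

```python
import ntpath

def join_drive(root, part, sep='\\'):
    # Only insert a separator when neither the drive nor the remaining
    # path already ends in one; this covers incomplete UNC drives like
    # '\\server\' and '\\'.
    drive, rest = ntpath.splitdrive(root)
    if rest and rest[-1] not in '\\/':
        rest += sep
    elif drive and drive[-1] not in '\\/' and not rest:
        rest = sep
    return drive + rest + part

print(join_drive('\\\\server\\share\\', 'foo'))  # \\server\share\foo
print(join_drive('\\\\server\\', 'share'))       # \\server\share
```

The second branch matters on versions where `splitdrive()` leaves an incomplete UNC prefix entirely in the tail rather than the drive.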
<!-- gh-linked-prs -->
### Linked PRs
* gh-103221
<!-- /gh-linked-prs -->
| b57105ae33e1f61e6bdf0eec45c4135d067b9b22 | 50b4b1598411ed393f47ce7f4fffbe5b9063cd42 |
python/cpython | python__cpython-103222 | # Redundant `if` statement in `EnumType._find_data_type_()` in `enum.py`
This statement is redundant:
https://github.com/python/cpython/blob/823622212eb258fcec467db56f6e726d841297b3/Lib/enum.py#L996-L997
in
https://github.com/python/cpython/blob/823622212eb258fcec467db56f6e726d841297b3/Lib/enum.py#L990-L997
<!-- gh-linked-prs -->
### Linked PRs
* gh-103222
<!-- /gh-linked-prs -->
| d3a7732dd54c27ae523bef73efbb0c580ce2fbc0 | 24facd60f6e5458cc783e429fe63d6a80d476f84 |
python/cpython | python__cpython-103251 | # Installation on macOS fails with "The installer encountered an error"
Thread on discuss: https://discuss.python.org/t/is-there-a-recent-mac-install-issue/23755/20
There are sporadic reports of errors during the installation of Python on macOS using our installer in which "Installer.app" gives a vague generic error, in particular “The installer encountered an error”.
This appears to be related to system security settings. In particular, the Installer.app needs access to the install package and hence needs the right to access the "Downloads" folder (for most users) or possibly "Full Disk Access" (for users storing the package outside of the Downloads folder).
The system normally asks the user to approve such access the first time the permissions are needed, but only once. If a user (accidentally) does not approve, the system won't ask for the permission again later on.
To fix the error:
- Go to System Settings (or System Preferences), "Security & Privacy"
- On Ventura: Look for "Files & Folders" and then "Installer". Make sure the "Downloads" folder is present and enabled.
- On Ventura (if the previous step doesn't fix the issue): Look for "Full Disk Access" and check that "Installer" is present and enabled (you can add the application using the "+" button at the bottom when it isn't present).
A different workaround for users: Install using the installer(1) command line tool.
On my system (where installing Python using the installer works):
- "Files & Folders": Installer is present and has access to "Downloads"
- "Full Disk Access": Installer is present but does *not* have this permission enabled
At this time it is not clear what we can do about this other than documenting this (website, installer start page)
<!-- gh-linked-prs -->
### Linked PRs
* gh-103251
* gh-103252
* gh-103253
* gh-103302
* gh-103303
* gh-103304
* gh-104815
* gh-104883
* gh-104885
<!-- /gh-linked-prs -->
| f184abbdc9ac3a5656de5f606faf505aa42ff391 | 30f82cf6c9bec4d9abe44f56e7ad84ef32ee2022 |
python/cpython | python__cpython-103205 | # `http.server` parses HTTP version numbers too permissively.
`http.server` accepts request lines with HTTP version numbers that have `'_'`, `'+'`, and `'-'`.
# Reproduction steps:
(Requires netcat)
```bash
python3 -m http.server --bind 127.0.0.1
printf 'GET / HTTP/-9_9_9.+9_9_9\r\n\r\n' | nc 127.0.0.1 8000
```
# Justification
Here are the `HTTP-version` definitions from each of the three HTTP RFCs:
- RFC 2616:
```
HTTP-Version = "HTTP" "/" 1*DIGIT "." 1*DIGIT
```
- RFC 7230:
```
HTTP-version = HTTP-name "/" DIGIT "." DIGIT
HTTP-name = %x48.54.54.50 ; "HTTP", case-sensitive
```
- RFC 9112:
```
HTTP-version = HTTP-name "/" DIGIT "." DIGIT
HTTP-name = %s"HTTP"
```
I understand allowing multiple digits for backwards-compatibility with RFC 2616, but I don't think it makes sense to let the lenient parsing rules of Python's `int()` (which tolerates `_`, `+`, and `-`) leak out into the world. We should at least ensure that only digits are permitted in HTTP version numbers.
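A stricter check can be sketched with a regular expression; the names here (`HTTP_VERSION_RE`, `parse_http_version`) are illustrative, not the actual `http.server` patch:

```python
import re

# Keep 1*DIGIT for RFC 2616 compatibility, but reject '_', '+', '-'
# and anything else that int() would quietly accept.
HTTP_VERSION_RE = re.compile(r'HTTP/(\d+)\.(\d+)\Z')

def parse_http_version(word):
    match = HTTP_VERSION_RE.match(word)
    if match is None:
        raise ValueError(f'bad HTTP version: {word!r}')
    return int(match.group(1)), int(match.group(2))

print(parse_http_version('HTTP/1.1'))  # (1, 1)
```

With this check, the request line from the reproduction above (`HTTP/-9_9_9.+9_9_9`) is rejected instead of being parsed.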
# My environment
- CPython 3.12.0a6+
- Operating system and architecture: Arch Linux on x86_64
<!-- gh-linked-prs -->
### Linked PRs
* gh-103205
* gh-104438
<!-- /gh-linked-prs -->
| cf720acfcbd8c9c25a706a4b6df136465a803992 | 25db95d224d18fb7b7f53165aeaa87632b0230f2 |
python/cpython | python__cpython-103208 | # `zipimport.invalidate_caches()` implementation causes performance regression for `importlib.invalidate_caches()`
In Python 3.10+, [an implementation of `zipimport.invalidate_caches()` was introduced](https://github.com/python/cpython/pull/24159).
An Apache Spark developer recently identified this implementation of `zipimport.invalidate_caches()` as the source of a performance regression in `importlib.invalidate_caches()`. They observed that importing only two zipped packages (py4j and pyspark) slows `importlib.invalidate_caches()` down by up to 3500%. See the [new discussion thread on the original PR](https://github.com/python/cpython/pull/24159#discussion_r1154007661) where `zipimport.invalidate_caches()` was introduced for more context.
The reason for this regression is an incorrect design of the API.
Currently, `zipimport.invalidate_caches()` repopulates the cache of zip files at the point of invalidation. This violates the semantics of cache invalidation, which should simply clear the cache; repopulation should occur on the next access of the files.
There are three relevant events to consider:
1. The cache is accessed while valid
2. `invalidate_caches()` is called
3. The cache is accessed after being invalidated
Events (1) and (2) should be fast, while event (3) can be slow since we're repopulating a cache. In the original PR, we made (1) and (3) fast, but (2) slow. To fix this we can do the following:
- Add a boolean flag `cache_is_valid` that is set to false when `invalidate_caches()` is called.
- In `_get_files()`, if `cache_is_valid` is true, use the cache. If `cache_is_valid` is false, call `_read_directory()`.
This approach avoids any behaviour change introduced in Python 3.10+ and keeps the common path of reading the cache performant, while also shifting the cost of reading the directory out of cache invalidation.
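A minimal sketch of that flag-based design (class and attribute names are illustrative, not the real `zipimport` internals):

```python
class ZipDirectoryCache:
    """Lazy invalidation: flipping the flag is cheap; the expensive
    directory read happens only on the next access."""

    def __init__(self, read_directory):
        self._read_directory = read_directory   # the expensive operation
        self._files = read_directory()
        self._cache_is_valid = True

    def invalidate_caches(self):
        # Event (2): no I/O here, just mark the cache stale.
        self._cache_is_valid = False

    def get_files(self):
        # Events (1) and (3): repopulate only after an invalidation.
        if not self._cache_is_valid:
            self._files = self._read_directory()
            self._cache_is_valid = True
        return self._files
```

Repeated `get_files()` calls on a valid cache cost nothing extra, and an `invalidate_caches()` that is never followed by an access costs nothing at all.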
We can go further and consider the fact that we rarely expect zip archives to change. Given this, we can consider adding a new flag to give users the option of disabling implicit invalidation of zipimported libraries when `importlib.invalidate_caches()` is called.
cc @brettcannon @HyukjinKwon
<!-- gh-linked-prs -->
### Linked PRs
* gh-103208
<!-- /gh-linked-prs -->
| 1fb9bd222bfe96cdf8a82701a3192e45d0811555 | 6e6a4cd52332017b10c8d88fbbbfe015948093f4 |
python/cpython | python__cpython-103846 | # Tkinter: C API changes are needed for Tcl 8.7 and 9.0 value types
# Bug report
I recently made changes to Tcl.pm (a Tcl wrapper for Perl) to support Tcl 8.7/9.0 (https://github.com/gisle/tcl.pm/pull/42). While I was doing so, I briefly looked at _tkinter.c for comparison (particularly `AsObj()` and `FromObj()`). Although I have not attempted to run Tkinter with Tcl 8.7/9.0, I did notice a few issues in the code which would need to be addressed first.
* Tcl 9.0 currently renames the Tcl 8.5 "booleanString" value type to "boolean", which will break the existing check in `FromObj()`. Tcl has suggested an idiom for retrieving the `Tcl_ObjType *` of unregistered types like "booleanString", regardless of Tcl version (see https://core.tcl-lang.org/tcl/info/3bb3bcf2da5b); although I would think that approach entails initializing `v->BooleanType` from `Tkapp_New()` rather than lazily initializing `tkapp->BooleanType` from `FromObj()`.
* [TIP 568](https://core.tcl-lang.org/tips/doc/trunk/tip/568.md): As of Tcl 9.0, "bytearray" is unregistered; the suggested idiom can be used instead.
* [TIP 484](https://core.tcl-lang.org/tips/doc/trunk/tip/484.md): As of Tcl 8.7, on platforms with 32-bit `long`, there are no longer separate 32-bit "int" and 64-bit "wideInt" value types; there is only a single 64-bit type. And as of Tcl 9.0, the "int" type is unregistered. It is possible to reliably get the `Tcl_ObjType *` of "int" from a value created using `Tcl_NewIntObj(0)` and of "wideInt" (if present) using e.g. `Tcl_NewWideIntObj((Tcl_WideInt)(1) << 32)`. However it would not be appropriate for `FromObj()` to continue using `Tcl_GetLongFromObj()` when `long` is 32-bit, as that would retrieve an overflowed result for values within (`LONG_MAX`,`ULONG_MAX`] rather than force `FromObj()` to try again with `Tcl_GetWideIntFromObj()` (via `fromWideIntObj()`); an alternative would be to always use `Tcl_GetWideIntFromObj()` for both "int" and "wideInt" values.
I am not familiar with contributing to Python, but I may be able to propose fixes for the above issues if no one else already has or intends to. I believe the fixes for the above issues are backward compatible with Tcl 8.5 and 8.6, and so can be implemented and tested without other changes that are likely also needed (such as for Tcl-syntax APIs) before Tkinter is usable with ~~Tcl 8.7/9.0 and Tk 8.7 (which will be compatible with both Tcl 8.7 and 9.0)~~ Tcl and Tk 8.7/9.0.
<!-- gh-linked-prs -->
### Linked PRs
* gh-103846
* gh-119830
* gh-119831
<!-- /gh-linked-prs -->
| 94e9585e99abc2d060cedc77b3c03e06b4a0a9c4 | b278c723d79a238b14e99908e83f4b1b6a39ed3d |
python/cpython | python__cpython-103195 | # Optimise `inspect.getattr_static`
# Feature or enhancement
Following 6d59c9e, `typing._ProtocolMeta.__instancecheck__` uses `inspect.getattr_static` in a tight loop, where it had previously used `hasattr`. This improves semantics in several highly desirable ways, but causes a considerable slowdown for `_ProtocolMeta.__instancecheck__`, as `inspect.getattr_static` is _much_ slower than `hasattr`.
The performance hit to `_ProtocolMeta.__instancecheck__` has already been mostly mitigated through several `typing`-specific optimisations that are tracked in this issue:
- https://github.com/python/cpython/issues/74690
However, it would be good to also see if we can improve the performance of `inspect.getattr_static`. This will not only improve the performance of `isinstance()` checks against classes subclassing `typing.Protocol`. It will also improve the performance of all other tools that use `inspect.getattr_static` for introspection without side effects.
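A small illustration of the side-effect-free behaviour that makes `getattr_static` worth its cost (the `Lazy` class is a made-up example):

```python
import inspect

class Lazy:
    def __getattr__(self, name):
        # stands in for an expensive or observable side effect
        raise RuntimeError("side effect ran")

obj = Lazy()

# hasattr() calls __getattr__, so the side effect fires ...
try:
    hasattr(obj, "missing")
except RuntimeError:
    print("hasattr triggered __getattr__")

# ... while getattr_static inspects the type statically and never does.
print(inspect.getattr_static(obj, "missing", "absent"))  # absent
```

This is why `_ProtocolMeta.__instancecheck__` switched to it, and also why making it faster pays off broadly.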
<!-- gh-linked-prs -->
### Linked PRs
* gh-103195
* gh-103318
* gh-103321
* gh-103349
* gh-104267
* gh-104286
* gh-104290
* gh-104320
<!-- /gh-linked-prs -->
| 264c00a1c512a9bd58f47c80e72e436c639763c8 | 119f67de08f1fddc2a3f7b7caac7454cb57ef800 |
python/cpython | python__cpython-103196 | # Regression tests pass but return unnecessary output (errors and warnings)
Just built python for the first time using the devguide and I noticed some (deprecation) warnings and errors in the regression tests, even though all the tests (that ran) passed.
```
/opt/python/cpython/Lib/subprocess.py:849: RuntimeWarning: pass_fds overriding close_fds.
warnings.warn("pass_fds overriding close_fds.", RuntimeWarning)
/opt/python/cpython/Lib/http/server.py:1168: DeprecationWarning: This process (pid=55357) is multi-threaded, use of fork() may lead to deadlocks in the child.
/opt/python/cpython/Lib/multiprocessing/popen_fork.py:66: DeprecationWarning: This process (pid=52754) is multi-threaded, use of fork() may lead to deadlocks in the child.
0:07:26 load avg: 8.24 [216/433] test_mailcap passed -- running: test_gdb (2 min 45 sec)
/opt/python/cpython/Lib/mailcap.py:228: UnsafeMailcapInput: Refusing to substitute MIME type 'audio/*' into a shell command.
warnings.warn(msg, UnsafeMailcapInput)
/opt/python/cpython/Lib/mailcap.py:228: UnsafeMailcapInput: Refusing to substitute MIME type 'audio/*' into a shell command.
warnings.warn(msg, UnsafeMailcapInput)
/opt/python/cpython/Lib/multiprocessing/popen_fork.py:66: DeprecationWarning: This process (pid=59900) is multi-threaded, use of fork() may lead to deadlocks in the child.
/opt/python/cpython/Lib/multiprocessing/popen_fork.py:66: DeprecationWarning: This process (pid=59550) is multi-threaded, use of fork() may lead to deadlocks in the child.
/opt/python/cpython/Lib/multiprocessing/popen_fork.py:66: DeprecationWarning: This process (pid=62788) is multi-threaded, use of fork() may lead to deadlocks in the child.
/opt/python/cpython/Lib/multiprocessing/popen_fork.py:66: DeprecationWarning: This process (pid=59550) is multi-threaded, use of fork() may lead to deadlocks in the child.
/opt/python/cpython/Lib/test/test_sys_settrace.py:1771: RuntimeWarning: assigning None to 1 unbound local
/opt/python/cpython/Lib/test/test_sys_settrace.py:1771: RuntimeWarning: assigning None to 2 unbound locals
: error: Incorrect number of arguments. uuid3 requires a namespace and a name. Run 'python -m uuid -h' for more information.
: error: Incorrect number of arguments. uuid3 requires a namespace and a name. Run 'python -m uuid -h' for more information.
: error: Incorrect number of arguments. uuid3 requires a namespace and a name. Run 'python -m uuid -h' for more information.
: error: Incorrect number of arguments. uuid3 requires a namespace and a name. Run 'python -m uuid -h' for more information.
0:16:23 load avg: 3.94 [405/433] test_warnings passed -- running: test_tools (59.0 sec)
/opt/python/cpython/Lib/test/test_warnings/__init__.py:390: UserWarning: Other types of warnings are not errors
self.module.warn("Other types of warnings are not errors")
/opt/python/cpython/Lib/test/test_warnings/__init__.py:390: UserWarning: Other types of warnings are not errors
self.module.warn("Other types of warnings are not errors")
/opt/python/cpython/Lib/test/test_xxtestfuzz.py:13: DeprecationWarning: module 'sre_compile' is deprecated
/opt/python/cpython/Lib/test/test_xxtestfuzz.py:13: DeprecationWarning: module 'sre_constants' is deprecated
CalledProcessError: Command '['/tmp/test_python_9xherg3o/tmpbgbtwgi2/cpython/python', '-c', 'import sysconfig; print(sysconfig.get_config_var("CONFIG_ARGS"))']' returned non-zero exit status 1.
ModuleNotFoundError: No module named '_sysconfigdata_d_linux_x86_64-linux-gnu'
```
Will attempt to investigate and patch these where appropriate.
Raising this issue as a base for my first pull request.
Any feedback is welcome!
<!-- gh-linked-prs -->
### Linked PRs
* gh-103196
* gh-103213
* gh-103217
* gh-103244
* gh-106605
* gh-106606
* gh-106667
* gh-108800
* gh-109066
* gh-109075
* gh-109077
* gh-109084
* gh-109085
* gh-109086
* gh-109100
* gh-109138
<!-- /gh-linked-prs -->
| 9d582250d8fde240b8e7299b74ba888c574f74a3 | a840806d338805fe74a9de01081d30da7605a29f |
python/cpython | python__cpython-103175 | # use vectorcall in _asyncio instead of variadic calling APIs
The `_asyncio` module's use of the variadic calling APIs is a relic of the past; changing it to use vectorcall will allow it to benefit from the bound-method calling optimization.
<!-- gh-linked-prs -->
### Linked PRs
* gh-103175
<!-- /gh-linked-prs -->
| e6f7d35be7fb65d8624e9411251554c9dee0c931 | 385b5d6e091da454c3e0d3f7271acf3af26d8532 |
python/cpython | python__cpython-103437 | # Add a timeout to CI builds
The joke PR #103178 included some very slow code that runs at import time, and CI took >3 hours before I canceled it. This wastes resources and is a possible abuse vector; a malicious actor could use it to eat up our CI resource quota. We should have some reasonable timeout (1 hour?) for all CI builds.
<!-- gh-linked-prs -->
### Linked PRs
* gh-103437
* gh-103543
* gh-130375
* gh-130431
* gh-130432
<!-- /gh-linked-prs -->
| be8903eb9d66ef1229f93a3a6036aeafc3bb0bda | 1aa376f946ef68ba46937019a424d925bf5b9611 |
python/cpython | python__cpython-103177 | # sys._current_exceptions() should return exceptions rather than exc_info tuples
The function ``sys._current_exceptions()`` returns a mapping of thread-id to an ``exc_info`` tuple representing that thread's handled exception. As part of the larger effort of moving away from exc_info to simple exception instances, we should change this mapping to have just the exception instance as value.
While this is a breaking change, I think we can do this without a deprecation period because this function is named with a leading ``_``, and it is documented as "for internal and specialized purposes only".
<!-- gh-linked-prs -->
### Linked PRs
* gh-103177
<!-- /gh-linked-prs -->
| 78b763f63032a7185c0905c319ead9e9b35787b6 | 8026cda10ccd3cbc7f7ff84dc6970266512961e4 |
python/cpython | python__cpython-103173 | # Typing: undocumented behaviour change for protocols decorated with `@final` and `@runtime_checkable` in 3.11
Python 3.11 introduced an undocumented behaviour change for protocols decorated with both `@final` and `@runtime_checkable`. On 3.10:
```pycon
>>> from typing import *
>>> @final
... @runtime_checkable
... class Foo(Protocol):
... def bar(self): ...
...
>>> class Spam:
... def bar(self): ...
...
>>> issubclass(Spam, Foo)
True
>>> isinstance(Spam(), Foo)
True
```
On 3.11:
```pycon
>>> from typing import *
>>> @final
... @runtime_checkable
... class Foo(Protocol):
... def bar(self): ...
...
>>> class Spam:
... def bar(self): ...
...
>>> issubclass(Spam, Foo)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<frozen abc>", line 123, in __subclasscheck__
File "C:\Users\alexw\coding\cpython\Lib\typing.py", line 1547, in _proto_hook
raise TypeError("Protocols with non-method members"
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: Protocols with non-method members don't support issubclass()
>>> isinstance(Spam(), Foo)
False
```
This is because, following 0bbf30e (by @JelleZijlstra), the `@final` decorator sets a `__final__` attribute wherever it can, so that it is introspectable by runtime tools. But the runtime-checkable-protocol `isinstance()` machinery doesn't know anything about the `__final__` attribute, so it assumes that the `__final__` attribute is just a regular protocol member.
This should be pretty easy to fix: we just need to add `__final__` to the set of "special attributes" ignored by runtime-checkable-protocol `isinstance()`/`issubclass()` checks here:
https://github.com/python/cpython/blob/848bdbe166b71ab2ac2c0c1d88432fb995d1444c/Lib/typing.py#L1906-L1909
@JelleZijlstra do you agree with that course of action?
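The shape of the fix, with an illustrative subset of the excluded-attribute set (the real, larger set lives in `typing.py`):

```python
# Attributes ignored when collecting protocol members; the fix is to
# make '__final__' one of them.
EXCLUDED_ATTRIBUTES = {
    '__abstractmethods__', '__annotations__', '__dict__', '__doc__',
    '__final__',  # set by @final since 3.11, not a protocol member
}

attrs = {'bar': None, '__final__': True, '__doc__': None}
members = {name for name in attrs if name not in EXCLUDED_ATTRIBUTES}
print(members)  # {'bar'}
```

With `__final__` excluded, `Foo` above is once again treated as a methods-only protocol, so `issubclass()` works.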
<!-- gh-linked-prs -->
### Linked PRs
* gh-103173
* gh-105445
* gh-105473
* gh-105474
<!-- /gh-linked-prs -->
| 62c11155eb13e950e10d660b8f5150e04efb3a5e | 3d15bd88063019352d11382f5d1b36b6845f65dc |
python/cpython | python__cpython-103168 | # `-Wstrict-prototypes` warnings on macOS
Recently, I've started noticing new issues while compiling CPython on my mac.
Something like:
```
Python/ceval_gil.c:470:40: warning: a function declaration without a prototype is deprecated in all versions of C [-Wstrict-prototypes]
unsigned long _PyEval_GetSwitchInterval()
^
void
1 warning generated.
```
Details about `-Wstrict-prototypes`: https://www.cism.ucl.ac.be/Services/Formations/ICS/ics_2013.0.028/composer_xe_2013/Documentation/en_US/compiler_c/main_cls/GUID-22FD192C-4C9D-4C16-B3DE-C3629D2EE483.htm
It seems that `clang` now requires an explicit `(void)` for functions that take no arguments.
Other developers have also seen these. I have a PR with the fix ready.
To find functions without arguments I've used this regex: `\(\)\n+\{`
It showed:
```
» ag '\(\)\n+\{' */**/*.c
Modules/_tkinter.c
124:_get_tcl_lib_path()
125:{
Modules/posixmodule.c
8549:win32_getppid()
8550:{
13333:check_ShellExecute()
13334:{
PC/launcher.c
452:locate_store_pythons()
453:{
469:locate_venv_python()
470:{
498:locate_all_pythons()
499:{
697:locate_wrapped_script()
698:{
1037:static void read_commands()
1038:{
1687:get_process_name()
1688:{
PC/launcher2.c
135:_getNativeMachine()
136:{
166:isAMD64Host()
167:{
173:isARM64Host()
174:{
Python/ceval_gil.c
470:unsigned long _PyEval_GetSwitchInterval()
471:{
Python/initconfig.c
2358:config_envvars_usage()
2359:{
2364:config_xoptions_usage()
2365:{
Python/sysmodule.c
1491:_sys_getwindowsversion_from_kernel32()
1492:{
```
<!-- gh-linked-prs -->
### Linked PRs
* gh-103168
<!-- /gh-linked-prs -->
| 119f67de08f1fddc2a3f7b7caac7454cb57ef800 | 1a8f862e329c3872a11d4ef8eb85cf353ca2f4d5 |
python/cpython | python__cpython-103144 | # Polish pdb help messages and doc strings
# Feature or enhancement
Make `pdb`'s help message less ugly.
# Pitch
Currently, the help message of pdb is not pretty.
```
(Pdb) h a
a(rgs)
Print the argument list of the current function.
(Pdb)
```
`a(rgs)` is a little bit confusing by itself, and the description's indentation is too deep (it comes directly from the docstring). `commands` is even worse.
```
(Pdb) h commands
commands [bpnumber]
(com) ...
(com) end
(Pdb)
```
The docstrings for the commands are not entirely consistent (e.g. whether there's a space between the command and the description), and some of them don't match the documentation.
```
(Pdb) h enable
enable bpnumber [bpnumber ...]
```
is different than the [documentation](https://docs.python.org/3/library/pdb.html#pdbcommand-enable) `enable [bpnumber ...]`
We can make it better and more user-friendly.
<!-- gh-linked-prs -->
### Linked PRs
* gh-103144
<!-- /gh-linked-prs -->
| 2f41a009b7311a4b44bae5b3583cde3d6d10d8d1 | 96663875b2ea55c65e83551cdb741bbcdcaa7f21 |
python/cpython | python__cpython-103133 | # Update multiprocessing.managers.ListProxy and multiprocessing.managers.DictProxy
# Feature or enhancement
Standard lists accept `clear` as a shortcut for `del xs[:]`
```
xs = ['a']
xs.clear()
```
However, `multiprocessing.ListProxy` omitted support for `clear()`.
```
from multiprocessing import Manager
with Manager() as manager:
    xs = manager.list()
    xs.clear()
```
raises the following exception
```
AttributeError: 'ListProxy' object has no attribute 'clear'
```
# Pitch
If `clear` is supported in a standard list, it should be supported in a multiprocessing list!
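Manager proxies only forward a whitelist of method names, which is why the attribute is missing rather than broken. A sketch with `multiprocessing.managers.MakeProxyType` (the exposed subset here is illustrative, not the full `ListProxy` list):

```python
from multiprocessing.managers import MakeProxyType

# A proxy type only gains the methods it is built with; leaving 'clear'
# out of this tuple reproduces the AttributeError above.
DemoListProxy = MakeProxyType('DemoListProxy', (
    '__len__', '__getitem__', '__setitem__', 'append', 'clear',
))

print(hasattr(DemoListProxy, 'clear'))  # True
```

In the meantime, `del xs[:]` should still work as a workaround, since `__delitem__` is among the exposed methods.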
# Previous discussion
This issue was not previously discussed.
<!-- gh-linked-prs -->
### Linked PRs
* gh-103133
<!-- /gh-linked-prs -->
| bbb49888a752869ae93423c42039a3a8dfab34d4 | 1db4695644388817c727db80cbbd38714cc5125d |
python/cpython | python__cpython-103125 | # Support multi-line statements in pdb
# Feature or enhancement
Support multi-line statements in pdb just like in normal Python interactive shells
# Pitch
Currently, we have this:
```
(Pdb) def f():
*** IndentationError: expected an indented block after function definition on line 1
(Pdb)
```
We can have this:
```
(Pdb) def f():
... pass
...
(Pdb)
```
The fundamental logic is handled by `codeop.compile_command`, we just need to port it in.
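Its behaviour in a nutshell (a quick demonstration, not pdb code):

```python
import codeop

# Returns None while the input could still become a valid statement,
# a code object once it is complete, and raises if it never can be.
assert codeop.compile_command("def f():") is None             # keep reading
assert codeop.compile_command("print(") is None               # keep reading
assert codeop.compile_command("def f():\n    pass\n") is not None
print("incomplete input is detectable")
```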
It's kind of a breaking change: code that previously failed the single-line-only check could now work. For example, we currently have this in the error case:
```
(Pdb) print(
*** SyntaxError: '(' was never closed
```
And it won't fail anymore, since the parenthesis can be closed on the next line. But in general, I believe this feature has more benefits than problems.
I'll make the PR draft for now and wait for some more discussion on the matter.
<!-- gh-linked-prs -->
### Linked PRs
* gh-103125
<!-- /gh-linked-prs -->
| 25a64fd28aaaaf2d21fae23212a0554c24fc7b20 | a4056c8f9c2d9970d39e3cb6bffb255cd4b8a42c |
python/cpython | python__cpython-103113 | # Wrong parameter name in http.client.read documentation: “amt” vs “n”
# Documentation
`pydoc http.client.HTTPResponse.read`
results in help text:
```
http.client.HTTPResponse.read = read(self, amt=None)
Read and return up to n bytes.
```
The parameter is called “amt” but the help text refers to “n”.
Transferred from [discuss.python.org](https://discuss.python.org/t/wrong-parameter-name-in-http-client-read-documentation-amt-vs-n/25284).
<!-- gh-linked-prs -->
### Linked PRs
* gh-103113
* gh-103119
* gh-103120
<!-- /gh-linked-prs -->
| d052a383f1a0c599c176a12c73a761ca00436d8b | e375bff03736f809fbc234010c087ef9d7e0d384 |
python/cpython | python__cpython-103110 | # Improve the documentation for `ignore_warnings`
# Documentation
Recently, I found that using `ignore_warnings(*, category)`, a simple decorator in `test.support.warning_helper`, is sometimes a concise way to suppress warnings in test cases. For example:
```python
# use `with` statement
def test_suppress_warning():
with warnings.catch_warnings():
warnings.simplefilter("ignore", category=SyntaxWarning)
# do something
# use decorator
@warning_helper.ignore_warnings(category=SyntaxWarning)
def test_suppress_warning():
# do something
```
**What I want to improve:**
1. The [comment](https://github.com/python/cpython/blob/e375bff03736f809fbc234010c087ef9d7e0d384/Lib/test/support/warnings_helper.py#L47) on the function `ignore_warnings` reads
> Decorator to suppress deprecation warnings
But in fact it can serve as a more general warning-suppression decorator, not one just for deprecation warnings; we can improve this comment.
2. The document of [test.support.warnings_helper](https://docs.python.org/3.12/library/test.html?highlight=warnings_helper#module-test.support.warnings_helper), it is missing information about the function `ignore_warnings`.
3. Perhaps we can also add a default value to it, just like `ignore_warnings(*, category=Warning)`, so that the behavior of this decorator is consistent with `warnings.simplefilter`.
<!-- gh-linked-prs -->
### Linked PRs
* gh-103110
* gh-103198
* gh-103199
<!-- /gh-linked-prs -->
| 32937d6aa414ec7db5c63ef277f21db1880b3af4 | 6883007a86bdf0d7cf4560b949fd5e577dab1013 |
python/cpython | python__cpython-103100 | # Link mypy docs from typing.rst
The mypy documentation is the best documentation there is for the Python type system. We should link users to it.
In the long run, typing.readthedocs.io could be a better home, but that shouldn't stop us from helping users today. In my opinion the most likely outcome is that typing.readthedocs.io ends up importing mypy's docs anyway.
<!-- gh-linked-prs -->
### Linked PRs
* gh-103100
* gh-103126
* gh-103127
<!-- /gh-linked-prs -->
| fda95aa19447fe444ac2670afbf98ec42aca0c6f | dcd6f226d6596b25b6f4004058a67acabe012120 |
python/cpython | python__cpython-103098 | # Windows ARM64 builds fail index tests
Due to a compiler bug, some functionality involving large integers fail on ARM64. Bug details at https://developercommunity.visualstudio.com/t/Regression-in-MSVC-1433-1434-ARM64-co/10224361
The issue has been fixed, but has not been released in the last three months. So I'm going to add the targeted workaround suggested by the devs.
<!-- gh-linked-prs -->
### Linked PRs
* gh-103098
* gh-103101
* gh-103102
<!-- /gh-linked-prs -->
| 24ba507b1dd70cf468a1beadea6ca6336fe1d40f | ba65a065cf07a7a9f53be61057a090f7311a5ad7 |
python/cpython | python__cpython-103093 | # Isolate Stdlib Extension Modules
See [PEP 687](https://peps.python.org/pep-0687/).
Currently most stdlib extensions have been ported to multi-phase init. There are still a number of them to be ported, almost entirely non-builtin modules. Also, some that have already been ported still have global state that needs to be fixed.
(This is part of the effort to finish isolating multiple interpreters from each other. See gh-100227.)
### High-Level Info
How to isolate modules: https://docs.python.org/3/howto/isolating-extensions.html (AKA PEP 630).
The full list of modules that need porting can be found with: `...`
The full list of remaining (unsupported) global variables is:
* builtin extensions: https://github.com/python/cpython/blob/main/Tools/c-analyzer/cpython/globals-to-fix.tsv#L308-L335
* non-builtin extensions: https://github.com/python/cpython/blob/main/Tools/c-analyzer/cpython/globals-to-fix.tsv#L338-L551
A full analysis of the modules may be found at the bottom of this post.
<details>
<summary>(other info)</summary>
#### Previous Work
* https://github.com/python/cpython/issues/84258
* _xxsubinterpreters [[issue](https://github.com/python/cpython/issues/99741)] [[PR](https://github.com/python/cpython/pull/99742)]
* _testinternalcapi [[issue](https://github.com/python/cpython/issues/100997)] [[PR](https://github.com/python/cpython/pull/100998)]
* ...
#### Related Links
* gh-85283
* https://discuss.python.org/t/a-new-c-api-for-extensions-that-need-runtime-global-locks/20668
* https://discuss.python.org/t/how-to-share-module-state-among-multiple-instances-of-an-extension-module/20663
</details>
### TODO
Here is the list of modules that need attention, in a *rough*, best-effort priority order. Additional details (e.g. whether there is an issue and/or PR) are found in the analysis table at the bottom.
* builtins (high priority)
* [x] #101819
* [x] **isolate** _collections
* [x] #101520
* [ ] #101509
* essential (higher priority)
* [x] **port** _socket
* [x] **port** readline (leave as is: only available in main interp)
* [x] **isolate** _ssl
* [x] **port** _pickle
* [x] #117398
* [x] **isolate** _asyncio
* [x] #106078
* [x] **isolate** array
* [x] **port** _ctypes
* [x] #114314
* [x] #117142
* [x] **port** _tkinter (leave as is: only available in main interp)
* [x] #103583
* [x] #101714
* [x] **isolate** _curses_panel (leave as is: only available in main interp)
* [x] **isolate** _elementtree
* [x] **isolate** pyexpat
* [x] **port** winreg (Windows)
* [x] **port** msvcrt (Windows)
* non-essential (lower priority)
* [x] **port** winsound (Windows)
* [x] **isolate** _lsprof
The above does not include test modules. They don't need to be ported/isolated (except for a few which already have been).
---------------
### Modules Analysis
| module | builtin | Windows | PEP 594 | issue | PR | ported | # static types | # other global objects | # other globals |
| --- | :---: | :---: | :---: | --- | --- | :---: | ---: | ---: | ---: |
| _asyncio | | | | | | yes | | [2](https://github.com/python/cpython/blob/main/Tools/c-analyzer/cpython/globals-to-fix.tsv#L463-L464) | |
| _collections | X | | | (???) | (branch) | yes | [7](https://github.com/python/cpython/blob/main/Tools/c-analyzer/cpython/globals-to-fix.tsv#L314-L318) | | |
| _ctypes | | | | | | **\*\*NO\*\*** | [37](https://github.com/python/cpython/blob/main/Tools/c-analyzer/cpython/globals-to-fix.tsv#L344-L380) | [6](https://github.com/python/cpython/blob/main/Tools/c-analyzer/cpython/globals-to-fix.tsv#L419-L465) | [4](https://github.com/python/cpython/blob/main/Tools/c-analyzer/cpython/globals-to-fix.tsv#L484-L530) |
| _curses | | | | | | **\*\*NO\*\*** | [1](https://github.com/python/cpython/blob/main/Tools/c-analyzer/cpython/globals-to-fix.tsv#L381) | [2](https://github.com/python/cpython/blob/main/Tools/c-analyzer/cpython/globals-to-fix.tsv#L420-L442) | [4](https://github.com/python/cpython/blob/main/Tools/c-analyzer/cpython/globals-to-fix.tsv#L486-L489) |
| _curses_panel | | | | | | yes | | | [1](https://github.com/python/cpython/blob/main/Tools/c-analyzer/cpython/globals-to-fix.tsv#L531) |
| _datetime | | | | gh-71587 | gh-102995 | **\*\*NO\*\*** | [7](https://github.com/python/cpython/blob/main/Tools/c-analyzer/cpython/globals-to-fix.tsv#L382-L388) | [10](https://github.com/python/cpython/blob/main/Tools/c-analyzer/cpython/globals-to-fix.tsv#L443-L452) | [1](https://github.com/python/cpython/blob/main/Tools/c-analyzer/cpython/globals-to-fix.tsv#L483) |
| _decimal | | | | | | **\*\*NO\*\*** | [4](https://github.com/python/cpython/blob/main/Tools/c-analyzer/cpython/globals-to-fix.tsv#L389-L392) | [10](https://github.com/python/cpython/blob/main/Tools/c-analyzer/cpython/globals-to-fix.tsv#L411-L459) | [6](https://github.com/python/cpython/blob/main/Tools/c-analyzer/cpython/globals-to-fix.tsv#L490-L495) |
| _elementtree | | | | | | yes | | | [1](https://github.com/python/cpython/blob/main/Tools/c-analyzer/cpython/globals-to-fix.tsv#L496) |
| _io | X | | | gh-101819 | gh-101520 | **\*\*NO\*\*** | [5](https://github.com/python/cpython/blob/main/Tools/c-analyzer/cpython/globals-to-fix.tsv#L319-L325) | | |
| _lsprof | | | | | | yes | | | [2](https://github.com/python/cpython/blob/main/Tools/c-analyzer/cpython/globals-to-fix.tsv#L549-L550) |
| _multibytecodec | | | | | | yes | | | [23](https://github.com/python/cpython/blob/main/Tools/c-analyzer/cpython/globals-to-fix.tsv#L497-L519) |
| _pickle | | | | (???) | (yes) | **\*\*NO\*\*** | [5](https://github.com/python/cpython/blob/main/Tools/c-analyzer/cpython/globals-to-fix.tsv#L382-L388) | | |
| _socket | | | | | | **\*\*NO\*\*** | [1](https://github.com/python/cpython/blob/main/Tools/c-analyzer/cpython/globals-to-fix.tsv#L400) | [2](https://github.com/python/cpython/blob/main/Tools/c-analyzer/cpython/globals-to-fix.tsv#L424-L425) | [3](https://github.com/python/cpython/blob/main/Tools/c-analyzer/cpython/globals-to-fix.tsv#L523-L551) |
| _ssl | | | | (???) | (branch) | yes | | | [1](https://github.com/python/cpython/blob/main/Tools/c-analyzer/cpython/globals-to-fix.tsv#L532) |
| _tkinter | | | | | | **\*\*NO\*\*** | | [8](https://github.com/python/cpython/blob/main/Tools/c-analyzer/cpython/globals-to-fix.tsv#L413-L469) | [9](https://github.com/python/cpython/blob/main/Tools/c-analyzer/cpython/globals-to-fix.tsv#L533-L541) |
| _tracemalloc | X | | | gh-101520 | | **\*\*NO\*\*** | | [6](https://github.com/python/cpython/blob/main/Include/internal/pycore_tracemalloc.h#L86-L104) | [7](https://github.com/python/cpython/blob/main/Include/internal/pycore_tracemalloc.h#L68-L85) |
| array | | | | | | yes | | [1](https://github.com/python/cpython/blob/main/Tools/c-analyzer/cpython/globals-to-fix.tsv#L460) | |
| faulthandler | X | | | gh-101509 | | yes | | [3+](https://github.com/python/cpython/blob/main/Include/internal/pycore_faulthandler.h#L39-L79) | [~22+](https://github.com/python/cpython/blob/main/Include/internal/pycore_faulthandler.h#L38-L84) |
| msvcrt | | Y | | | | **\*\*NO\*\*** | ??? | ??? | ??? |
| pyexpat | | | | | | yes | | | [1](https://github.com/python/cpython/blob/main/Tools/c-analyzer/cpython/globals-to-fix.tsv#L480) |
| readline | | | | | | **\*\*NO\*\*** | | | [9](https://github.com/python/cpython/blob/main/Tools/c-analyzer/cpython/globals-to-fix.tsv#L520-L548) |
| winreg | | Y | | | | **\*\*NO\*\*** | ??? | ??? | ??? |
| winsound | | Y | | | | **\*\*NO\*\*** | ??? | ??? | ??? |
<details>
<summary>test/example modules</summary>
These can be ported/isolated but don't have to be. They are the lowest priority.
| module | issue | PR | ported | # static types | # other global objects | # other globals |
| --- | --- | --- | :---: | ---: | ---: | ---: |
| xxmodule | | | | [3](https://github.com/python/cpython/blob/main/Tools/c-analyzer/cpython/globals-to-fix.tsv#L401-L403) | [1](https://github.com/python/cpython/blob/main/Tools/c-analyzer/cpython/globals-to-fix.tsv#L427) | |
| xxsubtype | | | | [2](https://github.com/python/cpython/blob/main/Tools/c-analyzer/cpython/globals-to-fix.tsv#L404-L405) | | |
| xxlimited_35 | | | | | [2](https://github.com/python/cpython/blob/main/Tools/c-analyzer/cpython/globals-to-fix.tsv#L416-L426) | |
| ... | | | | | | |
| ... | | | | | | |
</details>
<!-- gh-linked-prs -->
### Linked PRs
* gh-103093
* gh-102982
* gh-103094
* gh-103108
* gh-103248
* gh-103249
* gh-103250
* gh-103381
* gh-103540
* gh-103612
* gh-103893
* gh-103932
* gh-104020
* gh-104196
* gh-104506
* gh-104561
* gh-104725
* gh-113434
* gh-113555
* gh-113620
* gh-113630
* gh-113727
* gh-113774
* gh-113857
* gh-115130
* gh-115242
* gh-115301
* gh-23091
* gh-133674
* gh-133695
<!-- /gh-linked-prs --> | 52f96d3ea39fea4c16e26b7b10bd2db09726bd7c | 411b1692811b2ecac59cb0df0f920861c7cf179a |
python/cpython | python__cpython-103095 | # Add a C API to assign a new version to a PyTypeObject
# Feature or enhancement
I would like to add an API function like this:
```
/* Attempt to assign a version tag to the given type.
*
* Returns 1 if the type already had a valid version tag or a new one was
* assigned, or 0 if a new tag could not be assigned.
*/
PyAPI_FUNC(int) _PyType_AssignVersionTag(PyTypeObject* type);
```
It would be a thin wrapper around the existing [`assign_version_tag`](https://github.com/python/cpython/blob/main/Objects/typeobject.c#L568-L599) function, and is modeled after a similar function we added to Cinder as part of https://github.com/facebookincubator/cinder/commit/43c4e2dfe0e4a967251a72468adfc6602011dbda.
# Pitch
Cinder (and possibly other JITs or optimizing interpreters) would benefit from being able to ensure that a type has a valid version tag without having to call `_PyType_Lookup()` just for its version-assigning side effect. We rely on notifications from `PyType_Modified()` to invalidate code in our JIT when types change, and this only works when the types in question have valid version tags.
# Previous discussion
There's no written discussion I'm aware of, but @carljm and @markshannon have discussed this offline.
<!-- gh-linked-prs -->
### Linked PRs
* gh-103095
<!-- /gh-linked-prs -->
| b7f4811c88609d1ff12f42a3335fda49f8a478f3 | dca27a69a8261353f7f986eb8f808f0d487ac4b7 |
python/cpython | python__cpython-103325 | # If the working directory and the venv are on different disks, venv doesn't work in bash on Windows
**Environment**
- OS: Windows 10
- Architecture: x86_64
- Shell: bash 4.4.23, part of the git 2.37.0 install
- Python version: 3.11.1
- Output of `path/to/python -m pip list`:
```console
Package Version
---------- -------
pip 23.0.1
setuptools 65.5.0
```
**Issue**
When I create a venv on D: and use it from C:, executables like pip or python can't be found:
```console
$ pwd
/d
$ /path/to/python -m venv myenv
$ . myenv/Scripts/activate
$ pip --version
pip 22.3.1 from D:\myenv\Lib\site-packages\pip (python 3.11)
$ cd /c
$ pip --version
bash: \myenv/Scripts/pip: No such file or directory
```
<!-- gh-linked-prs -->
### Linked PRs
* gh-103325
* gh-103470
* gh-103500
* gh-103591
<!-- /gh-linked-prs -->
| ebc81034278ea186b4110c7dbf6d1c2a0ada9398 | b57105ae33e1f61e6bdf0eec45c4135d067b9b22 |
python/cpython | python__cpython-103086 | # python version locale.getencoding() warns deprecation and advice to use getencoding() again
# Bug report
When calling `locale.getencoding()` while `_locale.getencoding()` is not available, it shows a deprecation warning that confusingly recommends calling `getencoding()` itself:
```
../Lib/locale.py:657: DeprecationWarning: Use setlocale(), getencoding() and getlocale() instead
```
This is happening because the Python implementation of `locale.getencoding()` calls `locale.getdefaultlocale()` https://github.com/python/cpython/blob/v3.12.0a5/Lib/locale.py#L642
and `locale.getdefaultlocale()` warns regardless of where the call came from
https://github.com/python/cpython/blob/v3.12.0a5/Lib/locale.py#L544-L547
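A minimal sketch of one possible fix (the helper name is made up for illustration): move the locale-guessing logic into a private helper that does not warn, and have `getencoding()` call that helper instead of the deprecated public function.

```python
import warnings


def _getdefaultlocale():
    # The platform-specific guessing logic would live here;
    # hard-coded for the sake of the sketch.
    return ("en_US", "UTF-8")


def getdefaultlocale():
    # Only the public, deprecated entry point emits the warning.
    warnings.warn(
        "Use setlocale(), getencoding() and getlocale() instead",
        DeprecationWarning,
        stacklevel=2,
    )
    return _getdefaultlocale()


def getencoding():
    # Internal callers use the private helper, so calling the
    # replacement API no longer triggers a spurious warning.
    return _getdefaultlocale()[1]
```

With this split, `getencoding()` stays warning-free while `getdefaultlocale()` keeps warning its direct callers.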
# Your environment
- CPython versions tested on: v3.11.2 Lib with RustPython HEAD
- Operating system and architecture: aarch64-apple-darwin
<!-- gh-linked-prs -->
### Linked PRs
* gh-103086
<!-- /gh-linked-prs -->
| 21e9de3bf0ecf32cd61296009518bfb9fdfcd04f | fda95aa19447fe444ac2670afbf98ec42aca0c6f |
python/cpython | python__cpython-103083 | # Implement and document PEP 669.
Now that PEP 669 is accepted, it needs to be implemented and documented.
- [x] Implement the PEP
- [x] Decide on and implement policy for monitors seeing the code in other monitors.
- [x] Document the PEP
- [ ] Use "macro" instructions to reduce code duplication in bytecodes.c
<!-- gh-linked-prs -->
### Linked PRs
* gh-103083
* gh-103474
* gh-103507
* gh-103561
* gh-104387
* gh-107069
* gh-107075
* gh-107772
* gh-107882
* gh-107978
* gh-108909
* gh-111048
<!-- /gh-linked-prs -->
| 411b1692811b2ecac59cb0df0f920861c7cf179a | dce2d38cb04b541bad477ccc1040a68fa70a9a69 |
python/cpython | python__cpython-103069 | # pdb's breakpoint should check if condition is a valid expression
This could be either a bug report or a feature request. `pdb` has a conditional breakpoint feature where you can set a condition for the breakpoint to be effective. You can also use the `condition` command to change it.
The problem is that it never checks whether that condition is a valid expression. The expression is evaluated when the breakpoint is hit, and if there's a syntax error, the breakpoint is simply treated as `True`. This is not ideal behavior. We should warn users when the `condition` they enter is not even a valid expression, because under no circumstances is that the expected input - it isn't even a valid command.
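The check itself is cheap; a sketch of one way to do it (the helper name is hypothetical): compiling the input in `"eval"` mode rejects anything that could never evaluate as an expression.

```python
def is_valid_condition(cond):
    """Return True if `cond` parses as a single expression."""
    try:
        compile(cond, "<breakpoint condition>", "eval")
    except SyntaxError:
        return False
    return True
```

Note that `"eval"` mode also rejects statements (e.g. `import os`), which is what we want here since a breakpoint condition must be an expression.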
<!-- gh-linked-prs -->
### Linked PRs
* gh-103069
<!-- /gh-linked-prs -->
| e375bff03736f809fbc234010c087ef9d7e0d384 | d835b3f05de7e2d800138e5969eeb9656b0ed860 |
python/cpython | python__cpython-103777 | # site.py documentation does not mention special objects help, quit etc added to builtins
The [site.py](https://docs.python.org/3/library/site.html) documentation does not mention the special objects that it adds to builtins:
* help
* quit and exit
* license viewer
* copyright and credit strings.
(Have I missed any?) The second paragraph hints at them, saying "Importing this module will append site-specific paths to the module search path and add a few builtins" but fails to mention what those builtins are. We should add a brief section listing those added objects, and in the case of `help` linking to [pydoc](https://docs.python.org/3/library/pydoc.html).
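For reference, each of these names can be confirmed from a normal interpreter session (one where `site` has run at startup, i.e. no `-S` flag), since they are all injected into the `builtins` module:

```python
import builtins

# Under a normal startup, site.main() installs these names into builtins.
for name in ("help", "quit", "exit", "copyright", "credits", "license"):
    print(name, "->", type(getattr(builtins, name)).__name__)
```

Running Python with `-S` skips `site` and none of these names exist, which is exactly why the documentation should spell them out.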
<!-- gh-linked-prs -->
### Linked PRs
* gh-103777
* gh-123762
* gh-123763
<!-- /gh-linked-prs -->
| 67957ea77da8c667df1508a9d3d9b39e59f671d6 | d359c7c47b7e713cfbf7ba335d96b5f45e0f13e3 |
python/cpython | python__cpython-112473 | # Provide a script to create a complete WASI build
https://github.com/brettcannon/cpython-wasi-build currently constructs a zip file that one can use to run CPython when built for WASI. But the bundling of everything together is slightly finicky, so having a script or command in `wasm_build.py` that constructs the zip file would be helpful. This also works towards not only a universal understanding of how these sorts of zip file should be constructed, but also towards making it easy for release managers to create a release.
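The core of such a script is small; here is a hedged sketch of the bundling step (the function name and layout are assumptions for illustration, not the actual structure used by `wasm_build.py`):

```python
import zipfile
from pathlib import Path


def bundle_stdlib(lib_dir, out_zip):
    """Zip every .py file under lib_dir, preserving relative paths."""
    with zipfile.ZipFile(out_zip, "w", compression=zipfile.ZIP_DEFLATED) as zf:
        for path in sorted(Path(lib_dir).rglob("*.py")):
            # Store paths relative to Lib/ so the zip can sit on sys.path.
            zf.write(path, path.relative_to(lib_dir))
    return out_zip
```

The finicky part the issue refers to is everything around this loop: deciding what to exclude, where the zip sits relative to the wasm binary, and keeping that layout consistent across releases.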
<!-- gh-linked-prs -->
### Linked PRs
* gh-112473
<!-- /gh-linked-prs -->
| 37589d76bbe97b0bf13ffafb8dd6aab361a0209a | d4a6229afe10d7e5a9a59bf9472f36d7698988db |
python/cpython | python__cpython-103058 | # gc.freeze documentation is unclear
# Documentation
The [current advice](https://docs.python.org/3/library/gc#gc.freeze) seems to mean you should `gc.disable()` and `gc.freeze()` right before `os.fork()`, which doesn't do anything.
Łukasz explained the advice in https://bugs.python.org/issue31558#msg302780.
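For reference, the mechanics of the API itself (this only shows what `freeze()` does, not a recommendation on ordering around `fork()`):

```python
import gc

# freeze() moves every currently tracked object into a permanent
# generation that the collector never scans; unfreeze() moves them back.
gc.freeze()
print(gc.get_freeze_count())  # > 0: objects now exempt from collection
gc.unfreeze()
print(gc.get_freeze_count())  # back to 0
```

The documentation fix is about *when* to call these around `fork()` so that copy-on-write pages in the children are not dirtied by the collector.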
I'm proposing a fix in PR #103058.
<!-- gh-linked-prs -->
### Linked PRs
* gh-103058
* gh-103416
<!-- /gh-linked-prs -->
| 8b1b17134e2241a8cdff9e0c869013a7ff3ca2fe | 40db5c65b7ca4e784613b6122106a92576aba2d6 |
python/cpython | python__cpython-103062 | # Enum._generate_next_value_ should be static
# Bug report
[Here](https://github.com/python/cpython/blob/2cdc5189a6bc3157fddd814662bde99ecfd77529/Lib/enum.py#L1149) and in two more places in the same file, the function `_generate_next_value_` looks like it can be called on an instance, but it will not work properly in that case because it behaves like a static method. Consider applying the `staticmethod` decorator.
As far as I understand there is no problem right now and all current calls are correct, but making it static would be more robust going forward.
<!-- gh-linked-prs -->
### Linked PRs
* gh-103062
* gh-103231
<!-- /gh-linked-prs -->
| b838d80085b0162cc2ae7b4db5d2a9d9c6a28366 | 56d055a0d81a809e4ff8e1d56756a3bf32317efb |
python/cpython | python__cpython-103055 | # Improve test cases for type substitution of a list of types after initial ParamSpec substitution
After merging https://github.com/python/cpython/pull/102808 I realized that I've missed several important cases.
Right now the substitution algorithm has these lines: https://github.com/python/cpython/blob/2cdc5189a6bc3157fddd814662bde99ecfd77529/Lib/typing.py#L1477-L1488
It is tested, but never for nested arguments:
https://github.com/python/cpython/blob/2cdc5189a6bc3157fddd814662bde99ecfd77529/Lib/typing.py#L1500-L1511
I think that `Callable` is complex and important enough to be covered by as many test cases as possible. Furthermore, `Callable` can be nested deeply in real types, and we need to be sure that this use case works as intended.
I will send a PR with more tests :)
Related https://github.com/python/cpython/issues/88965
<!-- gh-linked-prs -->
### Linked PRs
* gh-103055
* gh-103105
<!-- /gh-linked-prs -->
| 60bdc16b459cf8f7b359c7f87d8ae6c5928147a4 | 24ba507b1dd70cf468a1beadea6ca6336fe1d40f |
python/cpython | python__cpython-103170 | # test_tools: test_freeze_simple_script() times out after 15 min: --with-lto --enable-optimizations (s390x LTO buildbots)
Reproduction:
1. `make clean && ./configure --with-lto --enable-optimizations && make`
2. `./python.exe -E ./Tools/scripts/run_tests.py -j 1 -u all -W --slowest --fail-env-changed --timeout=900 -j2 --junit-xml test-results.xml -j6 -v test_tools -m test.test_tools.test_freeze.TestFreeze.test_freeze_simple_script`
Env:
- Fedora in the test suite: https://buildbot.python.org/all/#/builders/545/builds/3524
- Macos, locally
Traceback:
```
» ./python.exe -E ./Tools/scripts/run_tests.py -j 1 -u all -W --slowest --fail-env-changed --timeout=900 -j2 --junit-xml test-results.xml -j6 -v test_tools
/Users/sobolev/Desktop/cpython/python.exe -u -W default -bb -E -E -m test -r -w -j 1 -u all -W --slowest --fail-env-changed --timeout=900 -j2 --junit-xml test-results.xml -j6 -v test_tools
== CPython 3.12.0a6+ (heads/main:1fd603fad2, Mar 27 2023, 09:38:01) [Clang 11.0.0 (clang-1100.0.33.16)]
== macOS-10.14.6-x86_64-i386-64bit little-endian
== Python build: release LTO+PGO
== cwd: /Users/sobolev/Desktop/cpython/build/test_python_89351æ
== CPU count: 4
== encodings: locale=UTF-8, FS=utf-8
Using random seed 5597523
0:00:00 load avg: 1.77 Run tests in parallel using 6 child processes (timeout: 15 min, worker timeout: 20 min)
0:00:30 load avg: 2.74 running: test_tools (30.0 sec)
0:01:00 load avg: 3.09 running: test_tools (1 min)
0:01:30 load avg: 2.95 running: test_tools (1 min 30 sec)
0:02:00 load avg: 3.15 running: test_tools (2 min)
0:02:30 load avg: 3.15 running: test_tools (2 min 30 sec)
0:03:00 load avg: 3.19 running: test_tools (3 min)
0:03:30 load avg: 4.23 running: test_tools (3 min 30 sec)
0:04:00 load avg: 4.75 running: test_tools (4 min)
0:04:30 load avg: 4.91 running: test_tools (4 min 30 sec)
0:05:00 load avg: 3.85 running: test_tools (5 min)
0:05:30 load avg: 4.70 running: test_tools (5 min 30 sec)
0:06:00 load avg: 4.65 running: test_tools (6 min)
0:06:30 load avg: 3.73 running: test_tools (6 min 30 sec)
0:07:00 load avg: 3.65 running: test_tools (7 min)
0:07:30 load avg: 3.33 running: test_tools (7 min 30 sec)
0:08:00 load avg: 3.28 running: test_tools (8 min)
0:08:30 load avg: 3.01 running: test_tools (8 min 30 sec)
0:09:00 load avg: 3.82 running: test_tools (9 min)
0:09:30 load avg: 4.18 running: test_tools (9 min 30 sec)
0:10:00 load avg: 3.76 running: test_tools (10 min)
0:10:30 load avg: 3.58 running: test_tools (10 min 30 sec)
0:11:00 load avg: 4.73 running: test_tools (11 min)
0:11:30 load avg: 4.45 running: test_tools (11 min 30 sec)
0:12:00 load avg: 4.22 running: test_tools (12 min)
0:12:30 load avg: 3.22 running: test_tools (12 min 30 sec)
0:13:00 load avg: 2.67 running: test_tools (13 min)
0:13:30 load avg: 4.42 running: test_tools (13 min 30 sec)
0:14:00 load avg: 4.41 running: test_tools (14 min)
0:14:30 load avg: 3.99 running: test_tools (14 min 30 sec)
0:15:00 load avg: 5.46 running: test_tools (15 min)
0:15:02 load avg: 5.46 [1/1/1] test_tools crashed (Exit code 1)
Timeout (0:15:00)!
Thread 0x0000000113c4d5c0 (most recent call first):
File "/Users/sobolev/Desktop/cpython/Lib/selectors.py", line 415 in select
File "/Users/sobolev/Desktop/cpython/Lib/subprocess.py", line 2075 in _communicate
File "/Users/sobolev/Desktop/cpython/Lib/subprocess.py", line 1207 in communicate
File "/Users/sobolev/Desktop/cpython/Lib/subprocess.py", line 550 in run
File "/Users/sobolev/Desktop/cpython/Tools/freeze/test/freeze.py", line 25 in _run_quiet
File "/Users/sobolev/Desktop/cpython/Tools/freeze/test/freeze.py", line 180 in prepare
File "/Users/sobolev/Desktop/cpython/Lib/test/test_tools/test_freeze.py", line 27 in test_freeze_simple_script
File "/Users/sobolev/Desktop/cpython/Lib/unittest/case.py", line 579 in _callTestMethod
File "/Users/sobolev/Desktop/cpython/Lib/unittest/case.py", line 623 in run
File "/Users/sobolev/Desktop/cpython/Lib/unittest/case.py", line 678 in __call__
File "/Users/sobolev/Desktop/cpython/Lib/unittest/suite.py", line 122 in run
File "/Users/sobolev/Desktop/cpython/Lib/unittest/suite.py", line 84 in __call__
File "/Users/sobolev/Desktop/cpython/Lib/unittest/suite.py", line 122 in run
File "/Users/sobolev/Desktop/cpython/Lib/unittest/suite.py", line 84 in __call__
File "/Users/sobolev/Desktop/cpython/Lib/unittest/suite.py", line 122 in run
File "/Users/sobolev/Desktop/cpython/Lib/unittest/suite.py", line 84 in __call__
File "/Users/sobolev/Desktop/cpython/Lib/unittest/suite.py", line 122 in run
File "/Users/sobolev/Desktop/cpython/Lib/unittest/suite.py", line 84 in __call__
File "/Users/sobolev/Desktop/cpython/Lib/unittest/suite.py", line 122 in run
File "/Users/sobolev/Desktop/cpython/Lib/unittest/suite.py", line 84 in __call__
File "/Users/sobolev/Desktop/cpython/Lib/unittest/runner.py", line 208 in run
File "/Users/sobolev/Desktop/cpython/Lib/test/support/__init__.py", line 1106 in _run_suite
File "/Users/sobolev/Desktop/cpython/Lib/test/support/__init__.py", line 1232 in run_unittest
File "/Users/sobolev/Desktop/cpython/Lib/test/libregrtest/runtest.py", line 281 in _test_module
File "/Users/sobolev/Desktop/cpython/Lib/test/libregrtest/runtest.py", line 317 in _runtest_inner2
File "/Users/sobolev/Desktop/cpython/Lib/test/libregrtest/runtest.py", line 360 in _runtest_inner
File "/Users/sobolev/Desktop/cpython/Lib/test/libregrtest/runtest.py", line 219 in _runtest
File "/Users/sobolev/Desktop/cpython/Lib/test/libregrtest/runtest.py", line 265 in runtest
File "/Users/sobolev/Desktop/cpython/Lib/test/libregrtest/runtest_mp.py", line 98 in run_tests_worker
File "/Users/sobolev/Desktop/cpython/Lib/test/libregrtest/main.py", line 732 in _main
File "/Users/sobolev/Desktop/cpython/Lib/test/libregrtest/main.py", line 711 in main
File "/Users/sobolev/Desktop/cpython/Lib/test/libregrtest/main.py", line 775 in main
File "/Users/sobolev/Desktop/cpython/Lib/test/regrtest.py", line 43 in _main
File "/Users/sobolev/Desktop/cpython/Lib/test/regrtest.py", line 47 in <module>
File "<frozen runpy>", line 88 in _run_code
File "<frozen runpy>", line 198 in _run_module_as_main
== Tests result: FAILURE ==
10 slowest tests:
1 test failed:
test_tools
0:15:02 load avg: 5.46
0:15:02 load avg: 5.46 Re-running failed tests in verbose mode
0:15:02 load avg: 5.46 Re-running test_tools in verbose mode
test_freeze_simple_script (test.test_tools.test_freeze.TestFreeze.test_freeze_simple_script) ... Timeout (0:15:00)!
Thread 0x000000010b8a55c0 (most recent call first):
File "/Users/sobolev/Desktop/cpython/Lib/selectors.py", line 415 in select
File "/Users/sobolev/Desktop/cpython/Lib/subprocess.py", line 2075 in _communicate
File "/Users/sobolev/Desktop/cpython/Lib/subprocess.py", line 1207 in communicate
File "/Users/sobolev/Desktop/cpython/Lib/subprocess.py", line 550 in run
File "/Users/sobolev/Desktop/cpython/Tools/freeze/test/freeze.py", line 25 in _run_quiet
File "/Users/sobolev/Desktop/cpython/Tools/freeze/test/freeze.py", line 180 in prepare
File "/Users/sobolev/Desktop/cpython/Lib/test/test_tools/test_freeze.py", line 27 in test_freeze_simple_script
File "/Users/sobolev/Desktop/cpython/Lib/unittest/case.py", line 579 in _callTestMethod
File "/Users/sobolev/Desktop/cpython/Lib/unittest/case.py", line 623 in run
File "/Users/sobolev/Desktop/cpython/Lib/unittest/case.py", line 678 in __call__
File "/Users/sobolev/Desktop/cpython/Lib/unittest/suite.py", line 122 in run
File "/Users/sobolev/Desktop/cpython/Lib/unittest/suite.py", line 84 in __call__
File "/Users/sobolev/Desktop/cpython/Lib/unittest/suite.py", line 122 in run
File "/Users/sobolev/Desktop/cpython/Lib/unittest/suite.py", line 84 in __call__
File "/Users/sobolev/Desktop/cpython/Lib/unittest/suite.py", line 122 in run
File "/Users/sobolev/Desktop/cpython/Lib/unittest/suite.py", line 84 in __call__
File "/Users/sobolev/Desktop/cpython/Lib/unittest/suite.py", line 122 in run
File "/Users/sobolev/Desktop/cpython/Lib/unittest/suite.py", line 84 in __call__
File "/Users/sobolev/Desktop/cpython/Lib/unittest/suite.py", line 122 in run
File "/Users/sobolev/Desktop/cpython/Lib/unittest/suite.py", line 84 in __call__
File "/Users/sobolev/Desktop/cpython/Lib/unittest/runner.py", line 208 in run
File "/Users/sobolev/Desktop/cpython/Lib/test/support/__init__.py", line 1106 in _run_suite
File "/Users/sobolev/Desktop/cpython/Lib/test/support/__init__.py", line 1232 in run_unittest
File "/Users/sobolev/Desktop/cpython/Lib/test/libregrtest/runtest.py", line 281 in _test_module
File "/Users/sobolev/Desktop/cpython/Lib/test/libregrtest/runtest.py", line 317 in _runtest_inner2
File "/Users/sobolev/Desktop/cpython/Lib/test/libregrtest/runtest.py", line 360 in _runtest_inner
File "/Users/sobolev/Desktop/cpython/Lib/test/libregrtest/runtest.py", line 235 in _runtest
File "/Users/sobolev/Desktop/cpython/Lib/test/libregrtest/runtest.py", line 265 in runtest
File "/Users/sobolev/Desktop/cpython/Lib/test/libregrtest/main.py", line 353 in rerun_failed_tests
File "/Users/sobolev/Desktop/cpython/Lib/test/libregrtest/main.py", line 756 in _main
File "/Users/sobolev/Desktop/cpython/Lib/test/libregrtest/main.py", line 711 in main
File "/Users/sobolev/Desktop/cpython/Lib/test/libregrtest/main.py", line 775 in main
File "/Users/sobolev/Desktop/cpython/Lib/test/__main__.py", line 2 in <module>
File "<frozen runpy>", line 88 in _run_code
File "<frozen runpy>", line 198 in _run_module_as_main
```
Looks like the timeout is killing this test. I tried to increase it, and it helped:
```
» ./python.exe -E ./Tools/scripts/run_tests.py -j 1 -u all -W --slowest --fail-env-changed --timeout=1800 -j2 --junit-xml test-results.xml -j6 -v test_tools -m test.test_tools.test_freeze.TestFreeze.test_freeze_simple_script
/Users/sobolev/Desktop/cpython/python.exe -u -W default -bb -E -E -m test -r -w -j 1 -u all -W --slowest --fail-env-changed --timeout=1800 -j2 --junit-xml test-results.xml -j6 -v test_tools -m test.test_tools.test_freeze.TestFreeze.test_freeze_simple_script
== CPython 3.12.0a6+ (heads/main:1fd603fad2, Mar 27 2023, 09:38:01) [Clang 11.0.0 (clang-1100.0.33.16)]
== macOS-10.14.6-x86_64-i386-64bit little-endian
== Python build: release LTO+PGO
== cwd: /Users/sobolev/Desktop/cpython/build/test_python_39615æ
== CPU count: 4
== encodings: locale=UTF-8, FS=utf-8
Using random seed 1597906
0:00:00 load avg: 2.79 Run tests in parallel using 6 child processes (timeout: 30 min, worker timeout: 35 min)
0:01:00 load avg: 3.25 running: test_tools (1 min)
0:01:30 load avg: 3.33 running: test_tools (1 min 30 sec)
0:02:00 load avg: 3.41 running: test_tools (2 min)
0:02:30 load avg: 3.59 running: test_tools (2 min 30 sec)
0:03:00 load avg: 3.33 running: test_tools (3 min)
0:03:30 load avg: 3.43 running: test_tools (3 min 30 sec)
0:04:00 load avg: 3.08 running: test_tools (4 min)
0:04:30 load avg: 2.80 running: test_tools (4 min 30 sec)
0:05:00 load avg: 5.62 running: test_tools (5 min)
0:05:30 load avg: 5.53 running: test_tools (5 min 30 sec)
0:06:00 load avg: 4.24 running: test_tools (6 min)
0:06:30 load avg: 3.55 running: test_tools (6 min 30 sec)
0:07:00 load avg: 4.57 running: test_tools (7 min)
0:07:30 load avg: 4.09 running: test_tools (7 min 30 sec)
0:08:00 load avg: 3.80 running: test_tools (8 min)
0:08:30 load avg: 4.14 running: test_tools (8 min 30 sec)
0:09:00 load avg: 7.55 running: test_tools (9 min)
0:09:30 load avg: 7.24 running: test_tools (9 min 30 sec)
0:10:00 load avg: 5.84 running: test_tools (10 min)
0:10:30 load avg: 4.38 running: test_tools (10 min 30 sec)
0:11:00 load avg: 4.52 running: test_tools (11 min)
0:11:30 load avg: 4.17 running: test_tools (11 min 30 sec)
0:12:00 load avg: 4.23 running: test_tools (12 min)
0:12:30 load avg: 4.35 running: test_tools (12 min 30 sec)
0:13:00 load avg: 4.11 running: test_tools (13 min)
0:13:30 load avg: 3.37 running: test_tools (13 min 30 sec)
0:14:00 load avg: 3.12 running: test_tools (14 min)
0:14:30 load avg: 3.73 running: test_tools (14 min 30 sec)
0:15:00 load avg: 3.18 running: test_tools (15 min)
0:15:30 load avg: 2.94 running: test_tools (15 min 30 sec)
0:16:00 load avg: 4.23 running: test_tools (16 min)
0:16:30 load avg: 4.90 running: test_tools (16 min 30 sec)
0:17:00 load avg: 4.58 running: test_tools (17 min)
0:17:30 load avg: 4.44 running: test_tools (17 min 30 sec)
0:18:00 load avg: 5.53 running: test_tools (18 min)
0:18:30 load avg: 4.19 running: test_tools (18 min 30 sec)
0:19:00 load avg: 3.39 running: test_tools (19 min)
0:19:30 load avg: 2.78 running: test_tools (19 min 30 sec)
0:19:55 load avg: 2.72 [1/1] test_tools passed (19 min 55 sec)
== Tests result: SUCCESS ==
1 test OK.
10 slowest tests:
- test_tools: 19 min 55 sec
Total duration: 19 min 55 sec
Tests result: SUCCESS
```
So, should we increase this timeout in CI? Or should we try optimizing this test somehow?
<!-- gh-linked-prs -->
### Linked PRs
* gh-103170
* gh-109591
* gh-109614
* gh-109616
* gh-110075
* gh-110447
* gh-110448
* gh-110449
* gh-110451
* gh-110453
* gh-110454
* gh-110456
* gh-110457
* gh-110458
<!-- /gh-linked-prs -->
| f68f2fec7df1224a031c3feed8a0ef6028cfcddd | 5ac97957f72d42d5bc3ec658b4321cc207cb038e |
python/cpython | python__cpython-103047 | # dis.disco()/dis.dis() does not show current line correctly with CACHE entries
Now that instructions can be followed by CACHE entries, `lasti` may point to a CACHE entry, and the current-line indicator gets confused.
```python
import dis
def f():
print(a)
dis.disco(f.__code__, lasti=2)
dis.disco(f.__code__, lasti=4)
```
will display
```
3 0 RESUME 0
4 --> 2 LOAD_GLOBAL 1 (NULL + print)
14 LOAD_GLOBAL 2 (a)
26 PRECALL 1
30 CALL 1
40 POP_TOP
42 LOAD_CONST 0 (None)
44 RETURN_VALUE
3 0 RESUME 0
4 2 LOAD_GLOBAL 1 (NULL + print)
14 LOAD_GLOBAL 2 (a)
26 PRECALL 1
30 CALL 1
40 POP_TOP
42 LOAD_CONST 0 (None)
44 RETURN_VALUE
```
This is confusing for users, who probably won't realize that CACHE entries exist. With `show_caches=False`, the current line indicator should be attached to the instruction that the CACHE entry belongs to.
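A sketch of the mapping that `dis` needs to do internally (the helper name is made up): walk the real instructions and attribute any offset, including one that lands on a CACHE slot, to the last instruction at or before it.

```python
import dis


def owning_instruction(code, lasti):
    """Return the instruction that owns offset `lasti`, even when
    `lasti` points into that instruction's CACHE entries."""
    owner = None
    for instr in dis.get_instructions(code):
        if instr.offset > lasti:
            break
        owner = instr
    return owner


def f():
    print(a)
```

With this mapping, `lasti=4` in the example above would be attributed to the `LOAD_GLOBAL` at offset 2 rather than to an invisible CACHE entry.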
<!-- gh-linked-prs -->
### Linked PRs
* gh-103047
<!-- /gh-linked-prs -->
| 34eb6f727632d9a1a6998f90c9421e420c785643 | 36067532467ec58c87ac9c140e555e1866431981 |
python/cpython | python__cpython-103028 | # Outdated `dataclass.make_dataclass` docstring
This docstring line in `make_dataclass` is outdated:
https://github.com/python/cpython/blob/1fd603fad20187496619930e5b74aa7690425926/Lib/dataclasses.py#L1424-L1425
There are new parameters (`kw_only`, `slots`, `weakref_slot`) that are not mentioned.
I will send a PR.
<!-- gh-linked-prs -->
### Linked PRs
* gh-103028
<!-- /gh-linked-prs -->
| 8ec6486462b920ab92ecb685a79fc3446681e1b8 | 0708437ad043657f992cb985fd5c37e1ac052f93 |
python/cpython | python__cpython-103026 | # [ctypes documentation issues] varadic typo and doctest directive mistake
# Documentation
Two issues on https://docs.python.org/3/library/ctypes.html
Issue 1: typo "Calling varadic functions" => should be "variadic"
Issue 2:

Issue 2 has been discussed [here](https://discuss.python.org/t/ctypes-page-doctest-typo/25138)
<!-- gh-linked-prs -->
### Linked PRs
* gh-103026
* gh-103029
* gh-103030
<!-- /gh-linked-prs -->
| 0708437ad043657f992cb985fd5c37e1ac052f93 | 1fd603fad20187496619930e5b74aa7690425926 |
python/cpython | python__cpython-103024 | # Add SyntaxError check for pdb's command `display`
# Feature or enhancement
Check `SyntaxError` when doing `display` in `pdb`.
# Pitch
`display` adds an expression to the watch list. However, if the expression has a syntax error, the command is meaningless. Currently, `display` still adds the invalid expression to the watch list, which does not make sense in almost all cases. Adding a syntax error check could prevent users from accidentally adding wrong expressions and then having to remove them with `undisplay`.
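The check that `do_display` could run before touching the watch list is a single `compile()` in `"eval"` mode; a hedged sketch (the helper name is made up), here returning the error text so it can be echoed back to the user:

```python
def validate_display_expression(expr):
    """Return None if `expr` is a valid expression, else the error text."""
    try:
        compile(expr, "<display>", "eval")
    except SyntaxError as exc:
        return str(exc)
    return None
```

Only expressions for which this returns `None` would be added to the watch list; for the rest, pdb would print the error instead.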
<!-- gh-linked-prs -->
### Linked PRs
* gh-103024
<!-- /gh-linked-prs -->
| 36067532467ec58c87ac9c140e555e1866431981 | 2cdc5189a6bc3157fddd814662bde99ecfd77529 |
python/cpython | python__cpython-103073 | # Enhancement request: Add "entrypoint" as an option to `sqlite3.load_extension()`
# Enhancement Request:
The current version of [`sqlite3.Connection.load_extension()`](https://docs.python.org/3/library/sqlite3.html#sqlite3.Connection.load_extension) takes in a single parameter `path`. However, the underlying C API [`sqlite3_load_extension`](https://www.sqlite.org/c3ref/load_extension.html) has an additional `entrypoint` argument that I think should be added to the Python `load_extension` function.
Currently in `connection.c`, the entrypoint parameter is ignored with a null pointer:
https://github.com/python/cpython/blob/f2e5a6ee628502d307a97f587788d7022a200229/Modules/_sqlite/connection.c#L1624
# Pitch
We add a new second optional `entrypoint` parameter to the `load_extension()` Python function. If it is provided, it's passed along to the underlying `sqlite3_load_extension` C function as the extension entrypoint.
```python
import sqlite3
db = sqlite3.connect(":memory:")
db.enable_load_extension(True)
# Currently, only a single path argument is supported on load_extension()
db.load_extension("./lines0")
# In this proposal, an optional 2nd entrypoint parameter would be added
db.load_extension("./lines0", "sqlite3_lines_no_read_init")
db.enable_load_extension(False)
```
I've been building [several SQLite extensions](https://github.com/asg017?tab=repositories&q=topic%3Asqlite-extension&type=&language=&sort=), some of which use different entrypoints to enable/disable security-sensitive features. For example, in [`sqlite-lines`](https://github.com/asg017/sqlite-lines) (a SQLite extension for reading files line by line), there will be a secondary entrypoint called `sqlite3_lines_no_read_init` that disables all functions that read the filesystem, for use in [Datasette](https://datasette.io/) or other security-sensitive environments.
Many of these extensions [are also distributed as Python packages](https://observablehq.com/@asg017/making-sqlite-extensions-pip-install-able), so any extra customizable APIs for loading SQLite extensions is greatly appreciated!
# Workaround
There technically is a workaround for this: A user can use the [`load_extension()` SQL function](https://www.sqlite.org/lang_corefunc.html#load_extension) to load an extension with an entrypoint, like so:
```python
db = sqlite3.connect(":memory:")
db.enable_load_extension(True)
db.execute("select load_extension(?, ?)", ["path/to/extension", "entrypoint"])
```
However, doing so will circumvent the `sqlite3.load_extension` auditing event, as that is only triggered on calls to the Python `Connection.load_extension()` function.
There's also some limitations to the pure SQL `load_extension()` function, mentioned here:
> *The load_extension() function will fail if the extension attempts to modify or delete an SQL function or collating sequence. The extension can add new functions or collating sequences, but cannot modify or delete existing functions or collating sequences because those functions and/or collating sequences might be used elsewhere in the currently running SQL statement. To load an extension that changes or deletes functions or collating sequences, use the [sqlite3_load_extension()](https://www.sqlite.org/c3ref/load_extension.html) C-language API.*
> **Source:** https://www.sqlite.org/lang_corefunc.html#load_extension
Also, in general, it's recommended to avoid the SQL `load_extension()` function altogether:
> *Security warning: It is recommended that extension loading be enabled using the [SQLITE_DBCONFIG_ENABLE_LOAD_EXTENSION](https://www.sqlite.org/c3ref/c_dbconfig_defensive.html#sqlitedbconfigenableloadextension) method rather than this interface, so the [load_extension()](https://www.sqlite.org/lang_corefunc.html#load_extension) SQL function remains disabled. This will prevent SQL injections from giving attackers access to extension loading capabilities.*
> **Source:** https://www.sqlite.org/c3ref/enable_load_extension.html
It's also just a little awkward, having to "eject" to SQL to do something that the Python API could readily support.
<!-- gh-linked-prs -->
### Linked PRs
* gh-103073
<!-- /gh-linked-prs -->
| 222c63fc6b91f42e7cc53574615f4e9b7a33c28f | 28a05f4cc2b150b3ff02ec255daf1b6ec886ca6f |
python/cpython | python__cpython-103005 | # Dataclasses - Improve the performance of asdict/astuple for common types and default values
# Feature or enhancement
Improve the performance of asdict/astuple in common cases by adding a shortcut in the inner loop for common types that are unaffected by deepcopy, and by special-casing the default `dict_factory=dict` to construct the dictionary directly.
The goal here is to improve performance in common cases without significantly impacting less common cases, while not changing the API or output in any way.
# Pitch
In cases where a dataclass contains a lot of data of common python types (eg: bool/str/int/float) currently the inner loops for `asdict` and `astuple` require the values to be compared to check if they are dataclasses, namedtuples, lists, tuples, and then dictionaries before passing them to `deepcopy`. This proposes to special case and shortcut objects of types where `deepcopy` returns the object unchanged.
It is much faster for these cases to instead check for them at the first opportunity and shortcut their return, skipping the recursive call and all of the other comparisons. In the case where this is being used to prepare an object to serialize to JSON this can be quite significant as this covers most of the remaining types handled by the stdlib `json` module.
Note: Anything that skips deepcopy with this alteration is already unchanged, as `deepcopy(obj) is obj` is always True for these types.
Currently when constructing the `dict` for a dataclass, a list of tuples is created and passed to the `dict_factory` constructor. In the case where the `dict_factory` constructor is the default - `dict` - it is faster to construct the dictionary directly.
# Previous discussion
Discussed here with a few more details and earlier examples: https://discuss.python.org/t/dataclasses-make-asdict-astuple-faster-by-skipping-deepcopy-for-objects-where-deepcopy-obj-is-obj/24662
# Code Details
## Types to skip deepcopy
This is the current set of types to be checked for and shortcut returned, ordered in a way that I think makes more sense for `dataclasses` than the original ordering copied from the `copy` module. These are known to be safe to skip as they are all sent to `_deepcopy_atomic` (which returns the original object) in the `copy` module.
```python
# Types for which deepcopy(obj) is known to return obj unmodified
# Used to skip deepcopy in asdict and astuple for performance
_ATOMIC_TYPES = {
# Common JSON Serializable types
types.NoneType,
bool,
int,
float,
complex,
bytes,
str,
# Other types that are also unaffected by deepcopy
types.EllipsisType,
types.NotImplementedType,
types.CodeType,
types.BuiltinFunctionType,
types.FunctionType,
type,
range,
property,
# weakref.ref, # weakref is not currently imported by dataclasses directly
}
```
## Function changes
With that added the change is essentially replacing each instance of
```python
_asdict_inner(v, dict_factory)
```
inside `_asdict_inner`, with
```python
v if type(v) in _ATOMIC_TYPES else _asdict_inner(v, dict_factory)
```
Instances of subclasses of these types are not guaranteed to have `deepcopy(obj) is obj` so this checks specifically for instances of the base types.
# Performance tests
Test file: https://gist.github.com/DavidCEllis/a2c2ceeeeda2d1ac509fb8877e5fb60d
Results on my development machine (not a perfectly stable test machine, but these differences are large enough).
## Main
Current Main python branch:
```
Dataclasses asdict/astuple speed tests
--------------------------------------
Python v3.12.0alpha6
GIT branch: main
Test Iterations: 10000
List of Int case asdict: 5.80s
Test Iterations: 1000
List of Decimal case asdict: 0.65s
Test Iterations: 1000000
Basic types case asdict: 3.76s
Basic types astuple: 3.48s
Test Iterations: 100000
Opaque types asdict: 2.15s
Opaque types astuple: 2.11s
Test Iterations: 100
Mixed containers asdict: 3.66s
Mixed containers astuple: 3.28s
```
## Modified
[Modified Branch](https://github.com/DavidCEllis/cpython/blob/faster_dataclasses_serialize/Lib/dataclasses.py):
```
Dataclasses asdict/astuple speed tests
--------------------------------------
Python v3.12.0alpha6
GIT branch: faster_dataclasses_serialize
Test Iterations: 10000
List of Int case asdict: 0.53s
Test Iterations: 1000
List of Decimal case asdict: 0.68s
Test Iterations: 1000000
Basic types case asdict: 1.33s
Basic types astuple: 1.28s
Test Iterations: 100000
Opaque types asdict: 2.14s
Opaque types astuple: 2.13s
Test Iterations: 100
Mixed containers asdict: 1.99s
Mixed containers astuple: 1.84s
```
<!-- gh-linked-prs -->
### Linked PRs
* gh-103005
* gh-104364
<!-- /gh-linked-prs -->
| d034590294d4618880375a6db513c30bce3e126b | f80014a9b0e8d00df3e65379070be1dfd2721682 |
python/cpython | python__cpython-102998 | # Update Windows and MacOS installers to SQLite 3.41.2+.
Updating Windows and MacOS installers to SQLite 3.41.2, see [release notes](https://sqlite.org/releaselog/3_41_2.html).
There is a small [bugfix](https://www.sqlite.org/src/info/76e683c5f25fe047) in the `3.41` branch, so we can wait 2-3 weeks for SQLite 3.41.3.
<!-- gh-linked-prs -->
### Linked PRs
* gh-102998
* gh-102999
* gh-104080
* gh-104085
<!-- /gh-linked-prs -->
| f0ad4567319ee4ae878d570ab7709ab63df9123e | 9de0cf20fa0485e327e57cc0864c7476da85cfad |
python/cpython | python__cpython-103074 | # Profile docs has error in example in tldr
# Documentation
https://docs.python.org/3/library/profile.html
The following example is given in the Instant User's Manual section:
```
214 function calls (207 primitive calls) in 0.002 seconds
Ordered by: cumulative time
ncalls tottime percall cumtime percall filename:lineno(function)
1 0.000 0.000 0.002 0.002 {built-in method builtins.exec}
1 0.000 0.000 0.001 0.001 <string>:1(<module>)
1 0.000 0.000 0.001 0.001 __init__.py:250(compile)
1 0.000 0.000 0.001 0.001 __init__.py:289(_compile)
1 0.000 0.000 0.000 0.000 _compiler.py:759(compile)
1 0.000 0.000 0.000 0.000 _parser.py:937(parse)
1 0.000 0.000 0.000 0.000 _compiler.py:598(_code)
1 0.000 0.000 0.000 0.000 _parser.py:435(_parse_sub)
```
however in the description of the example:
`The next line: Ordered by: cumulative name, indicates ...`
Which does not match the example
<!-- gh-linked-prs -->
### Linked PRs
* gh-103074
* gh-103201
<!-- /gh-linked-prs -->
| 55decb72c4d2e4ea00ed13da5dd0fd22cecb9083 | 32937d6aa414ec7db5c63ef277f21db1880b3af4 |
python/cpython | python__cpython-102981 | # Improve test coverage on pdb
# Feature or enhancement
Add more tests on pdb module
# Pitch
The test coverage of the pdb module is poor - some of the commands are not tested at all. There is at least some low-hanging fruit we can pick.
<!-- gh-linked-prs -->
### Linked PRs
* gh-102981
* gh-103016
* gh-103017
* gh-111194
<!-- /gh-linked-prs -->
| ded9a7fc194a1d5c0e38f475a45f8f77dbe9c6bc | f2e5a6ee628502d307a97f587788d7022a200229 |
python/cpython | python__cpython-103228 | # unittest.mock.patch with autospec doesn't hold for classmethods nor staticmethods
# Bug report
Using `patch` with `autospec=True` does not work when a method is decorated with `@classmethod` or `@staticmethod`. The resulting mock can be called with any arguments without raising a `TypeError`.
Example:
```python
from unittest.mock import patch
import pytest
class Foo:
def foo(self):
pass
@staticmethod
def bar():
pass
@classmethod
def baz(cls):
pass
@pytest.mark.parametrize("method", ["foo", "bar", "baz"])
def test_foo(method: str):
with patch.object(Foo, method, autospec=True):
getattr(Foo(), method)(5)
```
The only subtest that fails is `foo`. The other two pass, even though they're clearly being called incorrectly.
If you prefer not to use `pytest` to demo/repro this:
```python
with patch.object(Foo, "foo", autospec=True):
try:
Foo().foo(5)
except TypeError:
print("Correctly raises on foo")
else:
print("Incorrectly does not raise with foo")
with patch.object(Foo, "bar", autospec=True):
try:
Foo().bar(5)
except TypeError:
print("Correctly raises on bar")
else:
print("Incorrectly does not raise with bar")
with patch.object(Foo, "baz", autospec=True):
try:
Foo().baz(5)
except TypeError:
print("Correctly raises on baz")
else:
print("Incorrectly does not raise with baz")
```
This has output:
```
Correctly raises on foo
Incorrectly does not raise with bar
Incorrectly does not raise with baz
```
# Your environment
- CPython versions tested on: 3.10, 3.11
- Operating system and architecture: macOS 12.6 with Apple M1 chip
<!-- gh-linked-prs -->
### Linked PRs
* gh-103228
* gh-103499
<!-- /gh-linked-prs -->
| 59e0de4903c02e72b329e505fddf1ad9794928bc | 19d2639d1e6478e2e251479d842bdfa2e8272396 |
python/cpython | python__cpython-102975 | # Add a dev container to the repository
Originally discussed at https://discuss.python.org/t/add-support-for-devcontainers-to-facilitate-github-codespaces-usage/21330 , the idea is to get to the point where we can have [GitHub Codespaces prebuilds](https://docs.github.com/en/codespaces/prebuilding-your-codespaces/about-github-codespaces-prebuilds). This would get people a running container in under a minute with CPython and the docs already fully built. The idea is it saves everyone time by having a preconfigured setup, making things like getting started at sprints extremely easy. It can also help in situations like WASI, where the toolchain can come pre-installed to make getting started much easier.
FYI everyone on GitHub gets 120 CPU-hours of free Codespaces, so that means a 2-core CPU instance is worth 60 hours of use. And with prebuilds, at least the initial building of `python` and the docs comes for "free" for users.
Adding this setup has already been approved by the SC.
<!-- gh-linked-prs -->
### Linked PRs
* gh-102975
* gh-103283
<!-- /gh-linked-prs -->
| 0b1d9c44f1f091a499856d81542eeafda25011e1 | d052a383f1a0c599c176a12c73a761ca00436d8b |
python/cpython | python__cpython-103565 | # Regression in zipfile, read1(-1) after seek() returns empty byte string
# Bug report
When debugging a test failure in https://pypi.org/project/fs/ I found a regression in `zipfile`: `read1(-1)` after `seek()` returns an empty byte string instead of the remaining data. I've bisected it to commit 330f1d58282517bdf1f19577ab9317fa9810bf95.
## Reproducer:
~~~~
import zipfile
# First, create the zip:
# echo 'Hello, World' > hello.txt
# zip hello.zip hello.txt
with zipfile.ZipFile('hello.zip') as myzip:
with myzip.open('hello.txt') as myfile:
print(myfile.read(5))
print(myfile.seek(2, 1))
print(myfile.read1(-1))
~~~~
## Expected output (3.11.2):
~~~~
❯ python3.11 reproduce.py
b'Hello'
7
b'World\n'
~~~~
## Actual output (3.12.0a6):
~~~~
❯ python3.12 reproduce.py
b'Hello'
7
b''
~~~~
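A self-contained variant of the reproducer that builds the archive in memory instead of with shell commands (on unaffected versions the final read returns the remaining `b'World\n'`):

```python
import io
import zipfile

buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as z:
    z.writestr("hello.txt", "Hello, World\n")

with zipfile.ZipFile(buf) as z, z.open("hello.txt") as f:
    assert f.read(5) == b"Hello"
    assert f.seek(2, 1) == 7  # skip ", "
    rest = f.read1(-1)        # b'' on the regressed 3.12.0a6
```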
# Your environment
- CPython versions tested on: main
- Operating system and architecture: Fedora 37, x86_64
<!-- gh-linked-prs -->
### Linked PRs
* gh-103565
* gh-111289
<!-- /gh-linked-prs -->
| c73b0f35602abf5f283bf64266641f19bc82fce0 | e5168ff3f8abe651d0a96d9e2d49028183e21b15 |
python/cpython | python__cpython-102953 | # Implement PEP 706 – Filter for tarfile.extractall
This issue tracks implementation of [PEP 706](https://peps.python.org/pep-0706/) – Filter for tarfile.extractall
- [x] Fix WASI tests (or at least understand and document the symlink behavior)
<!-- gh-linked-prs -->
### Linked PRs
* gh-102953
* gh-103832
* gh-104128
* gh-104327
* gh-104382
* gh-104548
* gh-104583
<!-- /gh-linked-prs -->
### Unofficial backport
* https://github.com/encukou/cpython/pull/26
| af530469954e8ad49f1e071ef31c844b9bfda414 | 36860134a9eda8df5af5a38d6c7533437c594c2f |
python/cpython | python__cpython-102948 | # Bad traceback when `dataclasses.fields` is called on a non-dataclass
Currently, if you call `dataclasses.fields` on a non-dataclass, quite a poor traceback is produced:
```pycon
>>> import dataclasses
>>> dataclasses.fields(object)
Traceback (most recent call last):
File "C:\Users\alexw\AppData\Local\Programs\Python\Python311\Lib\dataclasses.py", line 1232, in fields
fields = getattr(class_or_instance, _FIELDS)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: type object 'object' has no attribute '__dataclass_fields__'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Users\alexw\AppData\Local\Programs\Python\Python311\Lib\dataclasses.py", line 1234, in fields
raise TypeError('must be called with a dataclass type or instance')
TypeError: must be called with a dataclass type or instance
```
There are two issues here:
- The traceback mentions the `__dataclass_fields__` attribute, which is an internal implementation detail of the `dataclasses` module and should be hidden from the user.
- The "during handling of the above exception, another exception occurred" message implies to the user that there's a bug in the `dataclasses` module itself, rather than this being intentional behaviour.
This one-line change to `dataclasses.py` produces a much better traceback:
```diff
diff --git a/Lib/dataclasses.py b/Lib/dataclasses.py
index 82b08fc017..e3fd0b3e38 100644
--- a/Lib/dataclasses.py
+++ b/Lib/dataclasses.py
@@ -1248,7 +1248,7 @@ def fields(class_or_instance):
try:
fields = getattr(class_or_instance, _FIELDS)
except AttributeError:
- raise TypeError('must be called with a dataclass type or instance')
+ raise TypeError('must be called with a dataclass type or instance') from None
```
With this change applied, we get this traceback instead:
```pycon
>>> import dataclasses
>>> dataclasses.fields(object)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Users\alexw\coding\cpython\Lib\dataclasses.py", line 1251, in fields
raise TypeError('must be called with a dataclass type or instance') from None
TypeError: must be called with a dataclass type or instance
```
Cc. @ericvsmith and @carljm, as `dataclasses` experts.
<!-- gh-linked-prs -->
### Linked PRs
* gh-102948
* gh-102951
* gh-102954
<!-- /gh-linked-prs -->
| baf4eb083c09b323cc12b8636c28c14089b87de8 | 08254be6c5324416081cc28a047b377e1f4ed644 |
python/cpython | python__cpython-102944 | # test case test_socket failed on Windows with non English locale
On Windows (tested on Win11) with a non-English locale (zh-CN), test_socket fails with this error message:
```
Traceback (most recent call last):
File "C:\Users\xxxxxx\source\cpython\Lib\test\test_socket.py", line 2567, in testCreateHyperVSocketWithUnknownProtoFailure
with self.assertRaisesRegex(OSError, expected):
AssertionError: "A protocol was specified in the socket function call that does not support the semantics of the socket type requested" does not match "[WinError 10041] 在套接字函数调用中指定的一个协议不支持请求的套接字类型的语法。"
```
This is because Windows localizes its error messages.
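One locale-robust pattern for such assertions is to match on the numeric error code rather than the translated message (a generic sketch, not the actual fix applied to test_socket):

```python
import errno

try:
    open("/nonexistent-dir-for-test/some-file")
except OSError as e:
    # e.errno is stable across locales; e.strerror is translated by the OS
    code = e.errno

assert code == errno.ENOENT
```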
<!-- gh-linked-prs -->
### Linked PRs
* gh-102944
<!-- /gh-linked-prs -->
| bf42eb8722befddf099a7bc26ea4a258179c32bf | adb0621652f489033b9db8d3949564c9fe545c1d |
python/cpython | python__cpython-102942 | # New warning: "‘subobj’ may be used uninitialized in this function" in `Objects/bytes_methods.c`
Example:
<img width="662" alt="Screenshot 2023-03-23 at 12 05 59" src="https://user-images.githubusercontent.com/4660275/227154730-efd0169d-fb5d-4bb6-bbb6-72ba7cc5159f.png">
This looks like a false-positive, because `subobj` is always initialized in `stringlib_parse_args_finds(function_name, args, &subobj, &start, &end)`:
https://github.com/python/cpython/blob/87be8d95228ee95de9045cf2952311d20dc5de45/Objects/bytes_methods.c#L770-L786
Morever, it is used before in `PyTuple_Check(subobj)`.
Any ideas on how to fix / silence it?
And why do we only get this warning now? The code has not changed for 10 months.
<!-- gh-linked-prs -->
### Linked PRs
* gh-102942
<!-- /gh-linked-prs -->
| 2cdc5189a6bc3157fddd814662bde99ecfd77529 | 30a306c2ade6c2af3c0b1d988a17dae3916e0d27 |
python/cpython | python__cpython-102940 | # New warnings in `bltnsmodule.c`: "conversion from 'Py_ssize_t' to 'long'"
After https://github.com/python/cpython/commit/7559f5fda94ab568a1a910b17683ed81dc3426fb#diff-e4fd8b8ee6a147f86c0719ff122aca6dfca36edbd4812c87892698b3b72e40a1 there are two new warnings on GH:
<img width="908" alt="Screenshot 2023-03-23 at 10 15 26" src="https://user-images.githubusercontent.com/4660275/227130592-e3ba71eb-2e80-44d4-9cb4-a79c209a9cfe.png">
<img width="455" alt="Screenshot 2023-03-23 at 10 15 10" src="https://user-images.githubusercontent.com/4660275/227130626-3a9f33cd-cd17-4f70-ab9f-2d33a681854f.png">
I am working on a fix, PR is incoming.
<!-- gh-linked-prs -->
### Linked PRs
* gh-102940
<!-- /gh-linked-prs -->
| 0f2ba6580565c3b51396c840406211ad81297735 | 87be8d95228ee95de9045cf2952311d20dc5de45 |
python/cpython | python__cpython-102937 | # Typing: Document performance pitfalls of protocols decorated with `@runtime_checkable`
# Documentation
The following two calls look similar, but the second is dramatically slow compared to the first:
```python
from typing import SupportsInt
isinstance(3, int)
isinstance(3, SupportsInt)
```
The reason for the slowness is because of [`_ProtocolMeta.__instancecheck__`](https://github.com/python/cpython/blob/d3ca042c99b2e86a2bb927a877fdfcbacdc22f89/Lib/typing.py#L1977-L1999), which makes calling `isinstance()` against any runtime-checkable protocol pretty slow. It's possible we can find ways of optimising `_ProtocolMeta.__instancecheck__`, but whatever we do, it is likely to remain substantially slower to call `isinstance()` against runtime-checkable protocols than it is to call `isinstance()` against "normal" classes. This is often pretty surprising for users, so one of the consensus points in https://github.com/python/typing/issues/1363 was that this should be better documented.
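The gap is easy to measure (a rough illustration; absolute numbers vary by machine and Python version):

```python
import timeit

# Compare an ordinary isinstance() check against one routed
# through _ProtocolMeta.__instancecheck__
t_class = timeit.timeit("isinstance(3, int)", number=100_000)
t_proto = timeit.timeit(
    "isinstance(3, SupportsInt)",
    setup="from typing import SupportsInt",
    number=100_000,
)
assert t_proto > t_class  # the protocol check is substantially slower
```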
<!-- gh-linked-prs -->
### Linked PRs
* gh-102937
* gh-102963
* gh-102964
<!-- /gh-linked-prs -->
| 58d2b30c012c3a9fe5ab747ae47c96af09e0fd15 | 46957091433bfa097d7ea19b177bf42a52412f2d |
python/cpython | python__cpython-102922 | # `ExceptionGroup.derive` example uses different argument name than base class specifies
# Documentation
## Link
https://docs.python.org/3/library/exceptions.html#BaseExceptionGroup.derive
## Description
The type for the `derive` method is specified as:
```python
derive(excs)
```
The type hints for derive (from mypy 1.1.1 compiled, so typeshed) are:
```python
@overload
def derive(self, __excs: Sequence[_ExceptionT]) -> ExceptionGroup[_ExceptionT]: ...
@overload
def derive(self, __excs: Sequence[_BaseExceptionT]) -> BaseExceptionGroup[_BaseExceptionT]: ...
```
This clarifies that exceptions being received are a sequence of one or more, not a singular exception outside of a sequence.
This issue is to change the example below the linked example from:
```python
class MyGroup(ExceptionGroup):
def derive(self, exc):
return MyGroup(self.message, exc)
```
to
```python
class MyGroup(ExceptionGroup):
def derive(self, excs):
        return MyGroup(self.message, excs)
```
to reflect this and reduce the risk of confusion.
<!-- gh-linked-prs -->
### Linked PRs
* gh-102922
* gh-102924
<!-- /gh-linked-prs -->
| 9b19d3936d7cabef67698636d2faf6fa23a95059 | 3468c768ce5e467799758ec70b840da08c3c1da9 |
python/cpython | python__cpython-102901 | # Link to wrong function in sys.getfilesystemencoding to get filesystem error handler
# Documentation
The doc of the `sys.getfilesystemencoding` function indicates:
> [...] the encoding used with the [filesystem error handler](https://docs.python.org/3/glossary.html#term-filesystem-encoding-and-error-handler) to convert between Unicode filenames and bytes filenames. The filesystem error handler is returned from [getfilesystemencoding()](https://docs.python.org/3/library/sys.html?highlight=sys#sys.getfilesystemencoding).
The filesystem error handler is actually returned by the `sys.getfilesystemencodeerrors` function.
<!-- gh-linked-prs -->
### Linked PRs
* gh-102901
* gh-103278
* gh-103279
<!-- /gh-linked-prs -->
| fdd0fff277a55c010a4da0a7af0e986e38560545 | 5e7c468fc4ceb801c30844b794e04a2ac6a35743 |
python/cpython | python__cpython-102896 | # code.interact() will exit the whole process when given exit() or quit()
`code.interact()` is provided as a method to embed an interactive shell in arbitrary python code. It was used in `pdb` for `interact` command.
However, when the user needs to exit, they will probably use a very familiar command, `exit()` (or `quit()`), and then realize that the whole process has been terminated. In most cases, that's not the expected behavior. The user probably just wants to exit the interpreter and go back to the original code.
Ctrl+D (Ctrl+Z then Enter on Windows) works as expected because there's logic to handle that specifically.
Simply catching `SystemExit` won't work because `builtins.exit()` and `builtins.quit()` (if provided) will also close `sys.stdin` (weird behavior to me) and it's not trivial to reopen `stdin`.
To provide a more reasonable and unified behavior, we can replace `exit` and `quit` in `builtins` inside the interactive shell, and switch them back afterwards. This will also make `pdb` happier.
<!-- gh-linked-prs -->
### Linked PRs
* gh-102896
<!-- /gh-linked-prs -->
| e6eb8cafca441046b5f9ded27c68d9a84c42022a | cb1bf89c4066f30c80f7d1193b586a2ff8c40579 |
python/cpython | python__cpython-102877 | # Redundant parens in itertools.batched recipe
# Documentation
```
def batched(iterable, n):
# batched('ABCDEFG', 3) --> ABC DEF G
if n < 1:
raise ValueError('n must be at least one')
it = iter(iterable)
while (batch := tuple(islice(it, n))):
yield batch
```
The parentheses around the `while` condition are superfluous and could be removed for a slight improvement in readability.
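For reference, the recipe with the parentheses dropped (behavior unchanged):

```python
from itertools import islice

def batched(iterable, n):
    # batched('ABCDEFG', 3) --> ABC DEF G
    if n < 1:
        raise ValueError('n must be at least one')
    it = iter(iterable)
    while batch := tuple(islice(it, n)):
        yield batch

assert list(batched('ABCDEFG', 3)) == [('A', 'B', 'C'), ('D', 'E', 'F'), ('G',)]
```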
<!-- gh-linked-prs -->
### Linked PRs
* gh-102877
<!-- /gh-linked-prs -->
| 4bb1dd3c5c14338c9d9cea5988431c858b3b76e0 | 3bb475662ba998e1b2d26fa370794d0804e33927 |
python/cpython | python__cpython-102875 | # logging module LogRecord can accept non-str msg but it's not documented
# Documentation
In the documentation of the logging `LogRecord` class it's mentioned that it accepts `msg` as a str argument:
https://docs.python.org/3/library/logging.html#logging.LogRecord
But just below that it explains that in `getMessage`:
> Returns the message for this [LogRecord](https://docs.python.org/3/library/logging.html#logging.LogRecord) instance after merging any user-supplied arguments with the message. If the user-supplied message argument to the logging call is not a string, [str()](https://docs.python.org/3/library/stdtypes.html#str) is called on it to convert it to a string. This allows use of user-defined classes as messages, whose __str__ method can return the actual format string to be used.
So non-str messages are allowed, and also in LogRecord attributes it's mentioned that non-str messages work and are accepted:
https://docs.python.org/3/library/logging.html#logrecord-attributes
Initially started the discussion in https://github.com/python/typeshed/pull/9914
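A minimal demonstration of a non-str `msg` being converted by `getMessage()` (the `BraceMessage` class is a hypothetical example, in the spirit of the logging cookbook):

```python
import logging

class BraceMessage:
    """A non-str message object; str() produces the final text."""
    def __init__(self, fmt, *args):
        self.fmt = fmt
        self.args = args
    def __str__(self):
        return self.fmt.format(*self.args)

record = logging.LogRecord(
    name="demo", level=logging.INFO, pathname="demo.py",
    lineno=1, msg=BraceMessage("x={}", 42), args=None, exc_info=None,
)
# getMessage() calls str() on the non-str msg
assert record.getMessage() == "x=42"
```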
<!-- gh-linked-prs -->
### Linked PRs
* gh-102875
* gh-103003
* gh-103004
<!-- /gh-linked-prs -->
| f2e5a6ee628502d307a97f587788d7022a200229 | acfe02f3b05436658d92add6b168538b30f357f0 |
python/cpython | python__cpython-102872 | # Cleanup and Modernize webbrowser
# Feature or enhancement
Certain browsers supported by webbrowser are now completely obsolete (and in some cases, have been for over 20 years).
I suggest to remove support for historical browsers from webbrowser, as they aren't used today, and it leaves us with a simpler and more modern module.
# Pitch
Propose to remove the following:
- Mosaic - Final release in 1997
- Grail - Final release in 1999
- Netscape - Final release in 2008
- Galeon - Final release in 2008
- SkipStone - Final release in 2012
- Iceape - Development ended in 2013
- Old Firefox versions 35 and below (2014)
- Related tests which do not work for modern Firefox
Other improvements:
- Add "Chrome" to the PATH lookup performed on Windows when the user hasn't set a default browser.
- Add commentary for the symbolic links checked on Linux.
- Rename the Galeon class to Epiphany.
- Update dead links.
# Previous discussion
#86496
https://github.com/python/cpython/issues/67451
https://github.com/python/cpython/issues/87303
#102690
Any feedback or suggestions welcome!
<!-- gh-linked-prs -->
### Linked PRs
* gh-102872
<!-- /gh-linked-prs -->
| b0422e140df8fdc83c51cc64a3ed5426188de7f1 | 048d6243d45b9381dd4fcaad28ae9c5d443b8b4b |
python/cpython | python__cpython-102861 | # test.compile.test_stack_overflow is very slow
It's because of the quadratic performance of instr_sequence_to_cfg, which is easy to fix.
<!-- gh-linked-prs -->
### Linked PRs
* gh-102861
<!-- /gh-linked-prs -->
| 8d015fa000db5775d477cd04dc574ba13721e278 | d1b883b52a99427d234c20e4a92ddfa6a1da8880 |