| repo | instance_id | problem_statement | merge_commit | base_commit |
|---|---|---|---|---|
python/cpython | python__cpython-112396 | # Support getters and setters in Argument Clinic
# Feature or enhancement
In the C API, getters and setters are implemented using `PyGetSetDef`. Argument Clinic doesn't currently support writing getters and setters, probably because they are pretty straightforward to write manually -- there's not much argument parsing to be done.
Argument Clinic now supports the `@critical_section` directive, which avoids a bunch of boilerplate code when making things thread-safe with the `--disable-gil` builds. It would be helpful if Argument Clinic supported getters/setters so that we could avoid the critical section boilerplate in getters and setters as well.
<!-- gh-linked-prs -->
### Linked PRs
* gh-112396
* gh-112549
* gh-112922
* gh-113095
* gh-113160
* gh-113278
<!-- /gh-linked-prs -->
| 7eeea13403882af63a71226433c9a13b80c22564 | 0785c685599aaa052f85d6163872bdecb9c66486 |
python/cpython | python__cpython-112201 | # Race between asyncio Condition.notify() and Task.cancel() may result in lost wakeups.
# Bug report
### Bug description:
A task which issues a `condition.notify(1)` to wake up one `Task` from a set of waiting tasks (e.g. to consume a piece of data) may hit a *race condition* with a simultaneous `cancel()` of a task among the waiters, resulting in none of the tasks successfully returning from `cond.wait()`. This is problematic because the `notify()` is essentially lost, and starvation or deadlocks may occur.
PR #112201 contains a fix, as well as documentation updates.
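Independent of the fix, the defensive pattern on the consumer side is to re-check the guarded predicate in a loop around `wait()`, so a spurious or redistributed wakeup is harmless. A minimal, runnable round-trip:

```python
import asyncio

async def main() -> int:
    cond = asyncio.Condition()
    items: list[int] = []

    async def consumer() -> int:
        async with cond:
            # Re-check the predicate in a loop: a wakeup that turns out to
            # be spurious (or handed off from another waiter) is harmless.
            while not items:
                await cond.wait()
            return items.pop()

    task = asyncio.create_task(consumer())
    await asyncio.sleep(0)  # let the consumer block in cond.wait()
    async with cond:
        items.append(42)
        cond.notify(1)      # wake exactly one waiter
    return await task

assert asyncio.run(main()) == 42
```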
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Windows
<!-- gh-linked-prs -->
### Linked PRs
* gh-112201
<!-- /gh-linked-prs -->
| 6b53d5fe04eadad76fb3706f0a4cc42d8f19f948 | 96bce033c4a4da7112792ba335ef3eb9a3eb0da0 |
python/cpython | python__cpython-112195 | # Convert more examples to doctests in `typing` module
# Feature or enhancement
There are multiple examples that are very similar to doctests, but are not doctests.
I propose adding `>>>` and `...` to them, so these examples will be checked during tests (now that we have this feature).
There are some easy ones, where just adding `>>>` (and some imports) is enough.
There are also some more complex ones, where some new types / vars would be needed; I don't think those are worth doing, because it would increase the complexity of the examples.
Examples:
- https://github.com/python/cpython/blob/0ee2d77331f2362fcaab20cc678530b18e467e3c/Lib/typing.py#L207-L217
- https://github.com/python/cpython/blob/0ee2d77331f2362fcaab20cc678530b18e467e3c/Lib/typing.py#L252-L259
- https://github.com/python/cpython/blob/0ee2d77331f2362fcaab20cc678530b18e467e3c/Lib/typing.py#L672-L686
- https://github.com/python/cpython/blob/0ee2d77331f2362fcaab20cc678530b18e467e3c/Lib/typing.py#L2239-L2255
- https://github.com/python/cpython/blob/0ee2d77331f2362fcaab20cc678530b18e467e3c/Lib/typing.py#L2268-L2280
- https://github.com/python/cpython/blob/0ee2d77331f2362fcaab20cc678530b18e467e3c/Lib/typing.py#L2293-L2304
- https://github.com/python/cpython/blob/0ee2d77331f2362fcaab20cc678530b18e467e3c/Lib/typing.py#L2899-L2909
This actually reveals a bug:
https://github.com/python/cpython/blob/0ee2d77331f2362fcaab20cc678530b18e467e3c/Lib/typing.py#L207-L219
`assert collections.abc.Callable[ParamSpec, str].__args__ == (ParamSpec, str)` example is invalid:
```
File "/Users/sobolev/Desktop/cpython2/Lib/typing.py", line 218, in typing._should_unflatten_callable_args
Failed example:
assert collections.abc.Callable[ParamSpec, str].__args__ == (ParamSpec, str)
Exception raised:
Traceback (most recent call last):
File "/Users/sobolev/Desktop/cpython2/Lib/doctest.py", line 1374, in __run
exec(compile(example.source, filename, "single",
File "<doctest typing._should_unflatten_callable_args[2]>", line 1, in <module>
assert collections.abc.Callable[ParamSpec, str].__args__ == (ParamSpec, str)
~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^
File "/Users/sobolev/Desktop/cpython2/Lib/_collections_abc.py", line 477, in __new__
raise TypeError(f"Expected a list of types, an ellipsis, "
TypeError: Expected a list of types, an ellipsis, ParamSpec, or Concatenate. Got <class 'typing.ParamSpec'>
```
<!-- gh-linked-prs -->
### Linked PRs
* gh-112195
* gh-112208
* gh-112209
<!-- /gh-linked-prs -->
| 949b2cc6eae6ef4f3312dfd4e2650a138446fe77 | 9fb0f2dfeed6cf534b0188154e96b976d6a67152 |
python/cpython | python__cpython-126972 | # Increase the precision of the summary in `trace` module
# Feature or enhancement
### Proposal:
With the current trace module, you'll have a summary like this if you enable summary
```
lines cov% module (path)
154 24% _weakrefset (/home/gaogaotiantian/programs/mycpython/Lib/_weakrefset.py)
1293 0% ast (/home/gaogaotiantian/programs/mycpython/Lib/ast.py)
160 5% bz2 (/home/gaogaotiantian/programs/mycpython/Lib/bz2.py)
133 39% cProfile (/home/gaogaotiantian/programs/mycpython/Lib/cProfile.py)
903 7% collections.__init__ (/home/gaogaotiantian/programs/mycpython/Lib/collections/__init__.py)
```
I propose to increase the precision of `cov%` to one decimal place so you'll have something like
```
lines cov% module (path)
154 24.7% _weakrefset (/home/gaogaotiantian/programs/mycpython/Lib/_weakrefset.py)
1293 0.2% ast (/home/gaogaotiantian/programs/mycpython/Lib/ast.py)
160 5.0% bz2 (/home/gaogaotiantian/programs/mycpython/Lib/bz2.py)
133 39.8% cProfile (/home/gaogaotiantian/programs/mycpython/Lib/cProfile.py)
903 7.8% collections.__init__ (/home/gaogaotiantian/programs/mycpython/Lib/collections/__init__.py)
```
This may seem a bit nonsensical at first glance, but it's related to a very practical question - how do we know our coverage improved after adding some new test cases? With an integer percentage, we have a 1% resolution, which means we can't see a one-line improvement in any file larger than 100 lines. For a file with 1000 lines, the resolution is 10 lines! That's not trivial!
By introducing a single decimal digit, we improve the resolution to 0.1% - which is arbitrary, I know. But in practice, most Python modules are between 100 and 1000 lines.
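A sketch of the proposed formatting (the helper name is made up; `trace` itself formats the summary inline):

```python
# Hypothetical formatter mirroring the proposal: one decimal place makes a
# single-line change visible even in a ~1000-line module.
def cov_pct(covered: int, total: int, digits: int = 1) -> str:
    return f"{100.0 * covered / total:.{digits}f}%"

assert cov_pct(2, 1293) == "0.2%"      # integer rounding would print 0%
assert cov_pct(398, 1000) == "39.8%"
```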
<details>
<summary>
Check a quick "wc -l Lib/*.py" result. Remember, `wc` counts comments and blank lines too, so the real executable line count is smaller
</summary>
147 Lib/__future__.py
16 Lib/__hello__.py
108 Lib/_aix_support.py
1173 Lib/_collections_abc.py
251 Lib/_compat_pickle.py
162 Lib/_compression.py
396 Lib/_markupbase.py
335 Lib/_opcode_metadata.py
574 Lib/_osx_support.py
147 Lib/_py_abc.py
2649 Lib/_pydatetime.py
6425 Lib/_pydecimal.py
2683 Lib/_pyio.py
285 Lib/_pylong.py
103 Lib/_sitebuiltins.py
566 Lib/_strptime.py
242 Lib/_threading_local.py
205 Lib/_weakrefset.py
188 Lib/abc.py
17 Lib/antigravity.py
2656 Lib/argparse.py
1836 Lib/ast.py
584 Lib/base64.py
900 Lib/bdb.py
118 Lib/bisect.py
344 Lib/bz2.py
195 Lib/cProfile.py
796 Lib/calendar.py
400 Lib/cmd.py
371 Lib/code.py
1132 Lib/codecs.py
151 Lib/codeop.py
166 Lib/colorsys.py
469 Lib/compileall.py
1283 Lib/configparser.py
814 Lib/contextlib.py
4 Lib/contextvars.py
305 Lib/copy.py
217 Lib/copyreg.py
451 Lib/csv.py
1592 Lib/dataclasses.py
9 Lib/datetime.py
11 Lib/decimal.py
2056 Lib/difflib.py
933 Lib/dis.py
2856 Lib/doctest.py
2052 Lib/enum.py
313 Lib/filecmp.py
442 Lib/fileinput.py
192 Lib/fnmatch.py
988 Lib/fractions.py
959 Lib/ftplib.py
1026 Lib/functools.py
167 Lib/genericpath.py
215 Lib/getopt.py
185 Lib/getpass.py
657 Lib/gettext.py
311 Lib/glob.py
250 Lib/graphlib.py
694 Lib/gzip.py
253 Lib/hashlib.py
603 Lib/heapq.py
219 Lib/hmac.py
1632 Lib/imaplib.py
3451 Lib/inspect.py
99 Lib/io.py
2341 Lib/ipaddress.py
64 Lib/keyword.py
191 Lib/linecache.py
1742 Lib/locale.py
356 Lib/lzma.py
2198 Lib/mailbox.py
645 Lib/mimetypes.py
666 Lib/modulefinder.py
192 Lib/netrc.py
901 Lib/ntpath.py
81 Lib/nturl2path.py
418 Lib/numbers.py
112 Lib/opcode.py
467 Lib/operator.py
1681 Lib/optparse.py
1152 Lib/os.py
1637 Lib/pathlib.py
2236 Lib/pdb.py
1814 Lib/pickle.py
2890 Lib/pickletools.py
529 Lib/pkgutil.py
1357 Lib/platform.py
911 Lib/plistlib.py
466 Lib/poplib.py
581 Lib/posixpath.py
655 Lib/pprint.py
615 Lib/profile.py
778 Lib/pstats.py
211 Lib/pty.py
212 Lib/py_compile.py
314 Lib/pyclbr.py
2851 Lib/pydoc.py
326 Lib/queue.py
237 Lib/quopri.py
1000 Lib/random.py
194 Lib/reprlib.py
219 Lib/rlcompleter.py
318 Lib/runpy.py
167 Lib/sched.py
71 Lib/secrets.py
603 Lib/selectors.py
250 Lib/shelve.py
345 Lib/shlex.py
1581 Lib/shutil.py
92 Lib/signal.py
671 Lib/site.py
1109 Lib/smtplib.py
967 Lib/socket.py
858 Lib/socketserver.py
7 Lib/sre_compile.py
7 Lib/sre_constants.py
7 Lib/sre_parse.py
1507 Lib/ssl.py
201 Lib/stat.py
1480 Lib/statistics.py
309 Lib/string.py
272 Lib/stringprep.py
15 Lib/struct.py
2209 Lib/subprocess.py
374 Lib/symtable.py
340 Lib/tabnanny.py
2904 Lib/tarfile.py
925 Lib/tempfile.py
497 Lib/textwrap.py
28 Lib/this.py
1733 Lib/threading.py
381 Lib/timeit.py
140 Lib/token.py
545 Lib/tokenize.py
747 Lib/trace.py
1227 Lib/traceback.py
560 Lib/tracemalloc.py
76 Lib/tty.py
4176 Lib/turtle.py
341 Lib/types.py
3432 Lib/typing.py
793 Lib/uuid.py
595 Lib/warnings.py
663 Lib/wave.py
674 Lib/weakref.py
620 Lib/webbrowser.py
206 Lib/zipapp.py
724 Lib/zipimport.py
</details>
As we can see, most files are between 100 and 1000 lines, and even the files with more than 1000 lines are normally under 3000. Again, the number of executable lines is lower, because `wc` also counts empty lines and comments.
So, 0.1% is arbitrary, but I think it's the sweet spot for a coverage report - it's not too verbose, yet it can almost always show coverage changes for common Python modules.
Is this backward compatible? No. However, the summary is mainly meant for humans to read, not for machines, so I don't think much code relies on its exact output. I believe most users would be more pleased than disturbed by the extra resolution.
### Has this already been discussed elsewhere?
This is a minor feature, which does not need previous discussion elsewhere
### Links to previous discussion of this feature:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-126972
<!-- /gh-linked-prs -->
| 12397a5781664bf43da98454db07cdfdec3ab815 | dabcecfd6dadb9430733105ba36925b290343d31 |
python/cpython | python__cpython-112187 | # Improve test case `test_loop_is_closed_resource_warnings`
If this exception is expected then why not use `self.assertRaises` etc?
_Originally posted by @kumaraditya303 in https://github.com/python/cpython/pull/111983#discussion_r1393608295_
<!-- gh-linked-prs -->
### Linked PRs
* gh-112187
* gh-112255
* gh-112261
<!-- /gh-linked-prs -->
| 18c692946953e586db432fd06c856531a2b05127 | 2bcc0f7d348fd978c8e7c7c377fcdcb7b050cb1d |
python/cpython | python__cpython-113220 | # `StopIteration` in a generator in a thread hanging on asyncio
# Bug report
### Bug description:
```python
import asyncio

def a():
    raise StopIteration

async def main():
    try:
        await asyncio.to_thread(a)
    except BaseException:
        print("caught")

asyncio.run(main())  # hangs on 3.11/3.12 instead of printing "caught"
```
other cases: https://github.com/agronholm/anyio/pull/477
### CPython versions tested on:
3.11, 3.12
### Operating systems tested on:
Windows
<!-- gh-linked-prs -->
### Linked PRs
* gh-113220
* gh-123033
<!-- /gh-linked-prs -->
| 4826d52338396758b2d6790a498c2a06eec19a86 | 5d8a3e74b51a59752f24cb869e7daa065b673f83 |
python/cpython | python__cpython-115194 | # Move the `eval_breaker` to `PyThreadState`
# Feature or enhancement
The `eval_breaker` is a variable that keeps track of requests to break out of the eval loop to handle things like signals, run a garbage collection, or handle asynchronous exceptions. It is currently in the interpreter state (in `interp->ceval.eval_breaker`). However, some of the events are specific to a given thread. For example, signals and some pending calls can only be executed on the "main" thread of an interpreter.
We should move the `eval_breaker` to `PyThreadState` to better handle these thread-specific events. This is more important for the `--disable-gil` builds where multiple threads within the same interpreter may be running at the same time.
@markshannon suggested a combination of per-interpreter and per-thread state, where the thread copies the per-interpreter eval_breaker state to the per-thread state when it acquires the GIL.
<!-- gh-linked-prs -->
### Linked PRs
* gh-115194
<!-- /gh-linked-prs -->
| 0749244d13412d7cb5b53d834f586f2198f5b9a6 | e71468ba4f5fb2da0cefe9e923b01811cb53fb5f |
python/cpython | python__cpython-112191 | # `asyncio.getaddrinfo` manifests in misleading DNS timeouts when thread pool executor is saturated
# Bug report
### Bug description:
There is no mention in https://docs.python.org/3/library/asyncio-eventloop.html#dns that `asyncio.getaddrinfo` is just a ThreadPoolExecutor-based wrapper around the synchronous `getaddrinfo` (base_events.py:860, `BaseEventLoop.getaddrinfo`).
This can lead to mysterious "DNS timeouts" or `CancelledError`s during DNS resolution if the executor pool is saturated by other, often totally unrelated, code - which is not uncommon for any app using asgiref's `@sync_to_async` decorator.
At a minimum, until this function is converted to real async (if feasible), it would be nice if the documentation warned about the shared thread pool executor.
Sample stack trace from httpcore => anyio, which uses asyncio.getaddrinfo under the hood, when the thread pool is saturated:
```pytb
Traceback (most recent call last):
File "/app/.venv/lib/python3.11/site-packages/httpcore/_backends/anyio.py", line 114, in connect_tcp
stream: anyio.abc.ByteStream = await anyio.connect_tcp(
^^^^^^^^^^^^^^^^^^^^^^^^
File "/app/.venv/lib/python3.11/site-packages/anyio/_core/_sockets.py", line 192, in connect_tcp
gai_res = await getaddrinfo(
^^^^^^^^^^^^^^^^^^
asyncio.exceptions.CancelledError
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/app/.venv/lib/python3.11/site-packages/httpcore/_exceptions.py", line 10, in map_exceptions
yield
File "/app/.venv/lib/python3.11/site-packages/httpcore/_backends/anyio.py", line 113, in connect_tcp
with anyio.fail_after(timeout):
File "/app/.venv/lib/python3.11/site-packages/anyio/_core/_tasks.py", line 119, in __exit__
raise TimeoutError
TimeoutError
```
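Until then, one workaround is to make sure name resolution isn't competing with other work for a tiny default pool, e.g. by installing a larger default executor. The pool size below is an illustrative choice, and note this still *shares* the pool rather than isolating resolution:

```python
import asyncio
import socket
from concurrent.futures import ThreadPoolExecutor

async def main():
    loop = asyncio.get_running_loop()
    # Enlarge the default executor so unrelated run_in_executor work is
    # less likely to starve getaddrinfo's threads. (32 is illustrative.)
    loop.set_default_executor(ThreadPoolExecutor(max_workers=32))
    return await loop.getaddrinfo("localhost", 80, type=socket.SOCK_STREAM)

infos = asyncio.run(main())
assert infos and infos[0][0] in (socket.AF_INET, socket.AF_INET6)
```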
### CPython versions tested on:
3.11
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-112191
* gh-120935
* gh-120936
<!-- /gh-linked-prs -->
| fc297b4ba4c61febeb2d8f5d718f2955c6bbea0a | 1500a23f33f5a6d052ff1ef6383d9839928b8ff1 |
python/cpython | python__cpython-112183 | # Off-by-one index error in __main__.py doc
Very minor issue: the '2' here should be '3' on the RHS of the comparison (given we really want to return `sys.argv[2]`).
<img width="635" alt="image" src="https://github.com/python/cpython/assets/135147320/77b4c680-0c6e-48e1-9642-6e2fa55f8bae">
https://docs.python.org/3.10/library/__main__.html
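Whatever the exact wording ends up being in the doc, the underlying rule is simple; a hypothetical helper spelling it out:

```python
# The rule behind the fix: argv index i is only valid when
# len(argv) > i, i.e. at least i + 1 elements are present.
def arg_or_default(argv: list[str], i: int, default: str) -> str:
    return argv[i] if len(argv) > i else default

assert arg_or_default(["prog", "x"], 2, "fallback") == "fallback"
assert arg_or_default(["prog", "x", "y"], 2, "fallback") == "y"
```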
<!-- gh-linked-prs -->
### Linked PRs
* gh-112183
* gh-112184
* gh-112185
<!-- /gh-linked-prs -->
| 8cd70eefc7f3363cfa0d43f34522c3072fa9e160 | f92ea63f6f2c37917fc095a1bc036a8b0c45a084 |
python/cpython | python__cpython-112161 | # Backport `regen-configure` target to supported branches
# Feature or enhancement
### Proposal:
As a follow-up to https://github.com/python/cpython/pull/112090, and related to https://github.com/python/release-tools/issues/70, I propose using the `regen-configure` makefile target in release tools, and therefore need to add the target to each branch. There will be no changes to the version of autotools used; I'll continue using the existing quay.io image corresponding to each branch.
After all branches are updated, I'll create an update to release-tools to use `make regen-configure` instead of its own implementation of that process to avoid per-branch differences.
While I'm there, I'll pin to a sha256 manifest instead of a tag, to prevent an upstream image update from being able to inject into the release process. I don't foresee these images needing an update (they haven't been updated in over 2 years), but if that's an incorrect assumption let me know!
### Has this already been discussed elsewhere?
This is a minor feature, which does not need previous discussion elsewhere
### Links to previous discussion of this feature:
Talked with @pablogsal already for 3.10 and 3.11 branches. Need to confirm with @ambv for 3.9 and 3.8.
<!-- gh-linked-prs -->
### Linked PRs
* gh-112161
* gh-112163
* gh-112164
* gh-112167
* gh-114333
<!-- /gh-linked-prs -->
| f21a5f773964d34c7b6deb7e3d753fae2b9c70e2 | 33a5f2af59ddcf3f1b0447a8dbd0576fd78de303 |
python/cpython | python__cpython-112156 | # `typing.py` contains two doctests that are not executed, ever
# Feature or enhancement
`typing.py` contains at least two doctests: https://github.com/python/cpython/blob/81ab0e8a4add53035c87b040afda6d554cace528/Lib/typing.py#L3377-L3414
But they are never executed.
I propose to add a `DocTestSuite` to `test_typing` to run these tests.
Note that this is related, but not identical, to https://github.com/python/cpython/pull/111682
<!-- gh-linked-prs -->
### Linked PRs
* gh-112156
* gh-112230
* gh-112231
<!-- /gh-linked-prs -->
| 7680da458398c5a08b9c32785b1eeb7b7c0887e4 | 12c7e9d573de57343cf018fb4e67521aba46c90f |
python/cpython | python__cpython-112158 | # Typo in docstring for override decorator from `typing` module.
# `typing` module documentation
## Typo description
A typo in docstring for `override` decorator. Under `Usage::` we see the following code.
```python
class Base:
def method(self) -> None: ...
pass
class Child(Base):
@override
def method(self) -> None:
super().method()
```
Both `...` and `pass` appear in `Base`'s method. Since `...` already serves as the one-line body, the indented `pass` statement does not belong to the method at all, causing an `IndentationError`.
## Suggested solution: remove `...`
```python
class Base:
def method(self) -> None:
pass
class Child(Base):
@override
def method(self) -> None:
super().method()
```
https://github.com/python/cpython/blob/985679f05d1b72965bfbed99d1499c22815375e4/Lib/typing.py#L3348
<!-- gh-linked-prs -->
### Linked PRs
* gh-112158
* gh-112162
<!-- /gh-linked-prs -->
| 12c7e9d573de57343cf018fb4e67521aba46c90f | bd89bca9e2a57779c251ee6fadf4887acb364824 |
python/cpython | python__cpython-112143 | # help(): Display complex signatures in multiple lines
# Feature or enhancement
### Proposal:
Here's the help() output for a reasonably complex typed method ([link](https://github.com/quora/pyanalyze/blob/fc239e15e52cb524138c02b16135a03008a08c64/pyanalyze/name_check_visitor.py#L1107)):
```
Help on function __init__ in module pyanalyze.name_check_visitor:
__init__(self, filename: str, contents: str, tree: ast.Module, *, settings: Optional[Mapping[pyanalyze.error_code.ErrorCode, bool]] = None, fail_after_first: bool = False, verbosity: int = 50, unused_finder: Optional[pyanalyze.find_unused.UnusedObjectFinder] = None, module: Optional[module] = None, attribute_checker: Optional[pyanalyze.name_check_visitor.ClassAttributeChecker] = None, collector: Optional[pyanalyze.name_check_visitor.CallSiteCollector] = None, annotate: bool = False, add_ignores: bool = False, checker: pyanalyze.checker.Checker, is_code_only: bool = False) -> None
Constructor.
filename: name of the file to run on (to show in error messages)
contents: code that the visitor is run on
fail_after_first: whether to throw an error after the first problem is detected
verbosity: controls how much logging is emitted
```
It would be much easier to read if each parameter was on its own line, something like this:
```
Help on function __init__ in module pyanalyze.name_check_visitor:
__init__(
self,
filename: str,
contents: str,
tree: ast.Module,
*,
settings: Optional[Mapping[pyanalyze.error_code.ErrorCode, bool]] = None,
fail_after_first: bool = False,
verbosity: int = 50,
unused_finder: Optional[pyanalyze.find_unused.UnusedObjectFinder] = None,
module: Optional[module] = None,
attribute_checker: Optional[pyanalyze.name_check_visitor.ClassAttributeChecker] = None,
collector: Optional[pyanalyze.name_check_visitor.CallSiteCollector] = None,
annotate: bool = False,
add_ignores: bool = False,
checker: pyanalyze.checker.Checker,
is_code_only: bool = False,
) -> None
Constructor.
filename: name of the file to run on (to show in error messages)
contents: code that the visitor is run on
fail_after_first: whether to throw an error after the first problem is detected
verbosity: controls how much logging is emitted
```
help() should switch to this output format only for reasonably complex signatures, perhaps when the signature line would otherwise exceed 80 characters.
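A hypothetical sketch of that threshold rule (names are made up; `pydoc` would apply this to the rendered signature string):

```python
# Keep short signatures on one line; break long ones onto one parameter
# per line once the flat form exceeds a width threshold.
def render_signature(name: str, params: list[str], ret: str, width: int = 80) -> str:
    flat = f"{name}({', '.join(params)}) -> {ret}"
    if len(flat) <= width:
        return flat
    body = "\n".join(f"    {p}," for p in params)
    return f"{name}(\n{body}\n) -> {ret}"

assert render_signature("f", ["x: int"], "None") == "f(x: int) -> None"
long_sig = render_signature(
    "__init__",
    ["self", "filename: str"] + [f"arg{i}: int = {i}" for i in range(8)],
    "None",
)
assert long_sig.startswith("__init__(\n    self,\n")
```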
### Has this already been discussed elsewhere?
This is a minor feature, which does not need previous discussion elsewhere
### Links to previous discussion of this feature:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-112143
<!-- /gh-linked-prs -->
| a9574c68f04695eecd19866faaf4cdee5965bc70 | 0229d2a9b1d6ce6daa6a773f92e3754e7dc86d50 |
python/cpython | python__cpython-112138 | # Change dis output to display labels instead of offsets
TODO:
- [x] test for output with offsets
- [x] ``--`` instead of ``None`` for missing line number? - in another PR.
# Feature or enhancement
```
>>> def f():
... for i in x:
... if y:
... z
...
>>> dis.dis(f)
1 RESUME 0
2 LOAD_GLOBAL 0 (x)
GET_ITER
L1: FOR_ITER 22 (to L3)
STORE_FAST 0 (i)
3 LOAD_GLOBAL 2 (y)
TO_BOOL
POP_JUMP_IF_TRUE 2 (to L2)
JUMP_BACKWARD 16 (to L1)
4 L2: LOAD_GLOBAL 4 (z)
POP_TOP
JUMP_BACKWARD 24 (to L1)
2 L3: END_FOR
RETURN_CONST 0 (None)
```
This is easier to read than the current output of:
```
>>> dis.dis(f)
1 0 RESUME 0
2 2 LOAD_GLOBAL 0 (x)
12 GET_ITER
>> 14 FOR_ITER 22 (to 62)
18 STORE_FAST 0 (i)
3 20 LOAD_GLOBAL 2 (y)
30 TO_BOOL
38 POP_JUMP_IF_TRUE 2 (to 46)
42 JUMP_BACKWARD 16 (to 14)
4 >> 46 LOAD_GLOBAL 4 (z)
56 POP_TOP
58 JUMP_BACKWARD 24 (to 14)
2 >> 62 END_FOR
64 RETURN_CONST 0 (None)
```
<!-- gh-linked-prs -->
### Linked PRs
* gh-112138
* gh-112335
<!-- /gh-linked-prs -->
| 10e1a0c91613908757a5b97602834defbe575ab0 | 790db85c7737c2ebbb145f9a26f675a586c5f0d1 |
python/cpython | python__cpython-121262 | # [C API] Removed private _PyArg_Parser API has no replacement in Python 3.13
Copy of @tacaswell's [message](https://github.com/python/cpython/issues/112026#issuecomment-1813356876):
> aio-libs/multidict is also using `_PyArg_Parser` and friends (e.g. https://github.com/aio-libs/multidict/blob/18d981284b9e97b11a4c0cc1e2ad57a21c82f323/multidict/_multidict.c#L452-L456 but there are many)
The _PyArg_Parser API is used by Argument Clinic to generate efficient code parsing function arguments.
Since Python 3.12, when targeting Python internals, Argument Clinic initializes `_PyArg_Parser.kwtuple` with singleton objects using the `_Py_SINGLETON(name)` API. This API cannot be used outside Python.
Private functions with a `_PyArg_Parser` argument:
* _PyArg_ParseStackAndKeywords()
* _PyArg_ParseTupleAndKeywordsFast()
* _PyArg_UnpackKeywords()
* _PyArg_UnpackKeywordsWithVararg()
cc @serhiy-storchaka
<!-- gh-linked-prs -->
### Linked PRs
* gh-121262
* gh-121344
* gh-127093
<!-- /gh-linked-prs -->
| f8373db153920b890c2e2dd8def249e8df63bcc6 | 7c66906802cd8534b05264bd47acf9eb9db6d09e |
python/cpython | python__cpython-112504 | # None __ne__ behavior changes in 3.12
# Bug report
### Bug description:
```
Python 3.11.6 (tags/v3.11.6:8b6ee5b, Oct 2 2023, 14:57:12) [MSC v.1935 64 bit (AMD64)] on win32
>>> None.__ne__(None)
False
```
```
Python 3.12.0 (tags/v3.12.0:0fb18b0, Oct 2 2023, 13:03:39) [MSC v.1935 64 bit (AMD64)] on win32
>>> None.__ne__(None)
NotImplemented
```
In 3.11, `None.__ne__(None)` returned `False` but in 3.12 it returns `NotImplemented`. This is a breaking behavior change that is either a bug or should be documented.
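Note that the comparison *operators* are unaffected, because the operator machinery treats `NotImplemented` from both operands as "fall back to identity"; only direct dunder calls observe the change:

```python
# The operators behave identically on 3.11 and 3.12: when both sides
# return NotImplemented, == falls back to identity and != to non-identity.
assert (None == None) is True
assert (None != None) is False
```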
### CPython versions tested on:
3.11, 3.12
### Operating systems tested on:
Windows
<!-- gh-linked-prs -->
### Linked PRs
* gh-112504
* gh-112827
<!-- /gh-linked-prs -->
| 9c3458e05865093dd55d7608810a9d0ef0765978 | b449415b2f1b41e1c44cb453428657fdf6ff1d36 |
python/cpython | python__cpython-112106 | # `readline.set_completer_delims` has no effect with libedit
# Bug report
### Bug description:
`readline.set_completer_delims` works fine with GNU readline, but not with libedit.
A quick example:
```python
import readline
readline.set_completer_delims("\n")
if "libedit" in getattr(readline, "__doc__", ""):
readline.parse_and_bind("bind ^I rl_complete")
else:
readline.parse_and_bind("tab: complete")
def completer(text, state):
if text == "$" and state == 0:
return "$complete"
return None
readline.set_completer(completer)
input()
# Type $ then <tab>
```
With GNU readline, it completes correctly, but with libedit it can't - libedit still considers `$` a delimiter. You can confirm that by printing `text` inside the `completer` function.
The issue is in `readline.c`:
https://github.com/python/cpython/blob/d4f83e1e3a19e2f881115f20d58ae6a019ddb48f/Modules/readline.c#L576-L581
`readline.c` writes to `rl_completer_word_break_characters`, which works fine with GNU readline ([source](https://git.savannah.gnu.org/cgit/readline.git/tree/complete.c#n1079)).
However, libedit does not do the same thing; it uses `rl_basic_word_break_characters` [instead](https://opensource.apple.com/source/libedit/libedit-40/src/readline.c):
```C
if (rl_completion_word_break_hook != NULL)
breakchars = (*rl_completion_word_break_hook)();
else
breakchars = rl_basic_word_break_characters;
```
Thus, writing to `rl_completer_word_break_characters` has no effect with libedit. The simplest fix I can think of is to write to both `rl_completer_word_break_characters` and `rl_basic_word_break_characters`. Both exist in GNU readline and libedit, for slightly different purposes.
* GNU readline:
* `rl_completer_word_break_characters` is the one that takes effect
* `rl_basic_word_break_characters` just holds the default value for `rl_completer_word_break_characters`
* libedit
* `rl_completer_word_break_characters` is not used at all
* `rl_basic_word_break_characters` is the one used to break words
From what I can observe, writing to both variables has no unexpected side effects, because `rl_basic_word_break_characters` is not otherwise used by CPython with GNU readline.
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Linux, macOS
<!-- gh-linked-prs -->
### Linked PRs
* gh-112106
* gh-112487
* gh-112488
<!-- /gh-linked-prs -->
| 2df26d83486b8f9ac6b7df2a9a4669508aa61983 | ac4b44266d61651aea5928ce7d3fae4de226f83d |
python/cpython | python__cpython-112093 | # Contradicting statement in c-api/stable.rst
# Documentation
> So, code compiled for Python 3.10.0 will work on 3.10.8 and vice versa, but will need to be compiled separately for 3.9.x and 3.10.x.
> \- [https://docs.python.org/3/c-api/stable.html](https://docs.python.org/3/c-api/stable.html#stable:~:text=So%2C%20code%20compiled%20for%20Python%203.10.0%20will%20work%20on%203.10.8%20and%20vice%20versa%2C%20but%20will%20need%20to%20be%20compiled%20separately%20for%203.9.x%20and%203.10.x.)
This contradicts itself, though it may be that the `Unless documented otherwise` case is what is meant.
If so, it needs clearer wording.
<!-- gh-linked-prs -->
### Linked PRs
* gh-112093
* gh-114260
* gh-114261
<!-- /gh-linked-prs -->
| 68a7b78cd5185cbd9456f42c15ecf872a7c16f44 | 7fa511ba576b9a760f3971ad16dbbbbf91c3f39c |
python/cpython | python__cpython-112090 | # Consider avoiding the dependency on quay.io server in the GitHub Action "Check if generated files are up to date" job
Over the last weeks and months, the `quay.io` server has failed from time to time. The problem is that the Python workflow has a GitHub Actions "Check if generated files are up to date" job which pulls a container image from `quay.io` to run the `make regen-configure` command.
Should we consider avoiding the dependency on the `quay.io` server to make our workflow more reliable?
<!-- gh-linked-prs -->
### Linked PRs
* gh-112090
* gh-112126
* gh-112159
* gh-125457
* gh-125459
<!-- /gh-linked-prs -->
| d9fd33a869d2be769ff596530f63ee099465b037 | 7e2308aaa29419eadeef3a349c7c4d78e2cbc989 |
python/cpython | python__cpython-113764 | # Make `list` objects thread-safe in `--disable-gil` builds
# Feature or enhancement
I expect this to be implemented across multiple PRs.
For context, here is the change from the `nogil-3.12` fork, but things might be done a bit differently in CPython 3.13: https://github.com/colesbury/nogil-3.12/commit/df4c51f82b.
- [x] Most operations should acquire the list object lock using the critical section API.
- [x] Accessing a single element should optimistically avoid locking for performance
- [x] Iterators need some special handling
- [x] list.sort
### Iterators
For performance reasons, we don't want to acquire locks while iterating over lists. This means that list iterators have a lesser notion of thread-safety: multiple threads concurrently using the same *iterator* should not crash but may revisit elements. We should make clear in the documentation that iterators are not generally thread-safe. From an implementation point of view, we should use relaxed atomics to update the `it_index` variable.
Additionally, we don't want to clear the `it_seq` reference to the list when the iterator is exhausted in `--disable-gil` builds. Doing so would pose thread-safety issues (in the "may segfault sense").
I don't expect this to pose any issues for real code. While it's fairly common to share lists between threads, it's rare to share the same iterator object between threads.
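For code that does need to distribute one stream of items across threads, the portable pattern (on any build) is a queue rather than a shared iterator:

```python
import queue
import threading

# Thread-safe distribution of one stream of items: share a queue, not a
# single list iterator.
q: "queue.Queue[int]" = queue.Queue()
for item in range(100):
    q.put(item)

seen: list[int] = []
seen_lock = threading.Lock()

def worker() -> None:
    while True:
        try:
            item = q.get_nowait()
        except queue.Empty:
            return
        with seen_lock:
            seen.append(item)

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Each item is consumed exactly once, with no duplicates or crashes.
assert sorted(seen) == list(range(100))
```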
<!-- gh-linked-prs -->
### Linked PRs
* gh-113764
* gh-113863
* gh-114268
* gh-114582
* gh-114651
* gh-114843
* gh-114916
* gh-115471
* gh-115472
* gh-115605
* gh-115854
* gh-115875
* gh-116233
* gh-116237
* gh-116353
* gh-116529
* gh-116553
* gh-117438
<!-- /gh-linked-prs -->
| a023bc252dc744736bd21897c5a23a25b800df92 | 10d3f04aec745c6676ef31611549b970a78338b3 |
python/cpython | python__cpython-112128 | # Make `random.Random` thread-safe in `--disable-gil` builds
# Feature or enhancement
`random.Random` has mutable internal state. We should use the critical section API to make it thread-safe in `--disable-gil` builds.
For context, here is the change from the `nogil-3.12` fork. https://github.com/colesbury/nogil-3.12/commit/9bf62ffc4b. Note that we want to do things a bit differently in CPython 3.13:
1) No need for an extra `_PyMutex mutex` in `RandomObject`
2) Use the critical section API. Locking can be added around operations that use Argument Clinic by adding the `@critical_section` directive as the first line.
3) Compared to `nogil-3.12`, we want to push the locking "up" to around the methods (see note 2 above).
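Independent of the interpreter-level locking, application code can sidestep shared RNG state entirely; a common pattern (the helper name is mine):

```python
import random
import threading

# One Random instance per thread via threading.local(): no instance is
# ever shared, so its internal state needs no cross-thread protection.
_local = threading.local()

def thread_rng() -> random.Random:
    rng = getattr(_local, "rng", None)
    if rng is None:
        rng = _local.rng = random.Random()
    return rng

results: list[float] = []

def worker() -> None:
    results.append(thread_rng().random())

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

assert len(results) == 4 and all(0.0 <= x < 1.0 for x in results)
```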
<!-- gh-linked-prs -->
### Linked PRs
* gh-112128
<!-- /gh-linked-prs -->
| ac4b44266d61651aea5928ce7d3fae4de226f83d | 154f099e611cea74daa755c77df3b8003861cc76 |
python/cpython | python__cpython-112111 | # Make `functools.lru_cache` thread-safe in `--disable-gil` builds
# Feature or enhancement
We should make `functools.lru_cache` thread-safe in the `--disable-gil` builds.
For context, here is the commit from the `nogil-3.12` fork: https://github.com/colesbury/nogil-3.12/commit/041a08e339
**NOTES** (differences in 3.13 from `nogil-3.12`):
1) No need for an extra mutex in `lru_cache_object`; every PyObject has a mutex in the `--disable-gil` builds
2) For `_functools__lru_cache_wrapper_cache_info_impl` and `_functools__lru_cache_wrapper_cache_clear_impl` we should instead use the `@critical_section` Argument Clinic directive. This will be simpler and require fewer changes to the code. `lru_cache_call` still needs explicit calls to the critical section API.
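For reference, the two wrapper methods named in note 2, exercised from Python; both read or reset the cache's internal state, which is why they need the lock:

```python
import functools

@functools.lru_cache(maxsize=128)
def square(x: int) -> int:
    return x * x

square(2)
square(2)
info = square.cache_info()          # reads internal counters
assert (info.hits, info.misses, info.currsize) == (1, 1, 1)

square.cache_clear()                # resets internal state
assert square.cache_info().currsize == 0
```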
<!-- gh-linked-prs -->
### Linked PRs
* gh-112111
<!-- /gh-linked-prs -->
| 0ee2d77331f2362fcaab20cc678530b18e467e3c | 8cd70eefc7f3363cfa0d43f34522c3072fa9e160 |
python/cpython | python__cpython-113800 | # Make `set` thread-safe in `--disable-gil` builds
# Feature or enhancement
The `set` object is not currently thread-safe in `--disable-gil` builds. We should make it thread-safe by using the ["critical section" API](https://github.com/python/cpython/blob/0ff6368519ed7542ad8b443de01108690102420a/Include/internal/pycore_critical_section.h) to acquire the per-object locks around operations. There should be no effective change in the default build (i.e., with the GIL) because critical sections are no-ops in the default build.
**Notes**:
1. Unlike `dict` and `list`, I don't think it's worth the complexity to try to "optimistically avoid locking" around any set operation (except `set_len`). We could consider doing this in the future if there is a performance justification, but not for 3.13.
2. `set_len` can avoid locking and instead use relaxed atomics for reading the "used" field. Note that writes to "used" should then also use relaxed atomics.
3. Some operations require locking two containers (like `set_merge`). Some of these will need refactorings so that the critical sections macros can be added in the correct places.
For context, here is the change from the `nogil-3.12` fork: https://github.com/colesbury/nogil-3.12/commit/4ca2924f0d. Note that the critical section API is slightly changed in 3.13 from `nogil-3.12`; In 3.13 `Py_BEGIN_CRITICAL_SECTION` takes a PyObject instead of a PyMutex.
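For reference, here is the kind of Python-level behavior the per-object locking has to preserve; today each individual `set` operation is effectively atomic because of the GIL (a sketch):

```python
import threading

s = set()

def add_range(start):
    for i in range(start, start + 1000):
        s.add(i)   # one atomic operation today, thanks to the GIL

threads = [threading.Thread(target=add_range, args=(n * 1000,))
           for n in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

assert len(s) == 4000   # no lost updates; len(s) maps to set_len in C
```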
### TODO:
- [x] Improve `set_init` (see https://github.com/python/cpython/pull/113800#discussion_r1517017947). We also want to avoid locking in `set_init` if possible
- [x] `_PySet_NextEntry`
- [x] `setiter_iternext`
<!-- gh-linked-prs -->
### Linked PRs
* gh-113800
* gh-115112
* gh-116525
* gh-117935
* gh-117990
* gh-118053
* gh-118069
<!-- /gh-linked-prs -->
| c951e25c24910064a4c8b7959e2f0f7c0d4d0a63 | 3cdfdc07a9dd39bcd6855b8c104584f9c34624f2 |
python/cpython | python__cpython-112123 | # Provide a variant of `PyDict_SetDefault` that returns a new reference (instead of a borrowed reference)
The `PyDict_SetDefault(mp, key, defaultobj)` function returns a borrowed reference to the value corresponding to key. This poses a thread-safety issue particularly for the case where `key` is already in the dict. In the `--disable-gil` builds, the returned value may no longer be valid if another thread concurrently modifies the dict.
### Proposal (from Victor)
`int PyDict_SetDefaultRef(PyObject *dict, PyObject *key, PyObject *default_value, PyObject **value)`;
The `**value` pointer is optional. If it is NULL, it is not used.
* If the *key* is present in the dict, set `*value` to a new reference to the current value (if `value` is not NULL), and return 1.
* If the *key* is missing from the dict, insert the key and default_value, and set `*value` to a new reference to `default_value` (if value is not NULL), and return 0.
* On error, set `*value` to `NULL` if `value` is not `NULL`, and return -1.
Ideally, this new function would be public and part of the stable ABI so that it could be used by all extensions, but even an internal-only function would unblock some of the nogil changes.
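The proposed return-value convention mirrors what Python-level `dict.setdefault` already does, except that the C function additionally reports which branch was taken and hands back a strong reference. A sketch of the semantics:

```python
d = {}

# Key missing: inserts the default and returns it
# (PyDict_SetDefaultRef would return 0 and set *value).
v = d.setdefault("key", "default")
assert v == "default" and d["key"] == "default"

# Key present: returns the existing value, leaving the dict untouched
# (PyDict_SetDefaultRef would return 1 and set *value).
v = d.setdefault("key", "other")
assert v == "default"
```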
**EDIT**: Updated with @vstinner's proposal
<!-- gh-linked-prs -->
### Linked PRs
* gh-112123
* gh-112211
* gh-118696
* gh-119430
<!-- /gh-linked-prs -->
| de61d4bd4db868ce49a729a283763b94f2fda961 | 0e2ab73dc31e0b8ea1827ec24bae93ae2644c617 |
python/cpython | python__cpython-128270 | # http.client.HTTPResponse.read(-1) handled incorrectly
# Bug report
### Bug description:
`http.client.HTTPResponse` doesn't handle negative reads the same way as other readers; for example, the following code will hang for a significant amount of time:
```python
con = http.client.HTTPConnection("httpbin.org")
con.request("GET", "/get")
resp = con.getresponse()
# "connection: close" doesn't trigger this
assert resp.headers["connection"] == "keep-alive"
while chunk := resp.read(-1):
print(chunk)
```
The negative parameter is passed onto the underlying socket which will cause it to try and read to the end-of-stream. For `keep-alive` connections this just blocks until the connection is closed by the server due to inactivity.
I think this is a bug with not checking for negative `amt` values in:
https://github.com/python/cpython/blob/24216d0530fda84b482ab29490ddd0d496605e43/Lib/http/client.py#L469-L471
Changing the call to `read1` causes the above to display the response promptly as I'd expect. This is due to it correctly checking for negative sizes.
https://github.com/python/cpython/blob/24216d0530fda84b482ab29490ddd0d496605e43/Lib/http/client.py#L654-L655
Note that in earlier Python versions, e.g. 3.9, the above fails with `ValueError: negative count` which seems better than timing out, but I think reading to the end of the response makes more sense.
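For comparison, this is how other readers in the stdlib treat a negative size, which is the behavior proposed here for `HTTPResponse.read`:

```python
import io

buf = io.BytesIO(b"response body")
assert buf.read(-1) == b"response body"    # negative size: read to EOF
buf.seek(0)
assert buf.read(None) == b"response body"  # None behaves the same way
buf.seek(0)
assert buf.read(4) == b"resp"              # non-negative: bounded read
```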
### CPython versions tested on:
3.11, 3.12
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-128270
* gh-129395
* gh-129396
<!-- /gh-linked-prs -->
| 4d0d24f6e3dff2864007c3cfd1cf7d49c6ee5317 | 8e57877e3f43e80762196f6869526d0c1585783a |
python/cpython | python__cpython-112094 | # Make `_struct` module thread-safe in `--disable-gil` builds
# Feature or enhancement
The `_struct` module has a few small issues:
* The use of `PyDict_GetItemWithError` returns a borrowed reference (should use `PyDict_GetItemRef`)
* The `state->cache` is lazily created; we should instead create it during `_structmodule_exec`
* We want `state->cache` to be an immutable reference to a mutable dict. (The `dict` will be thread-safe.) Use `PyDict_Clear` to empty the dict instead of clearing the reference.
See the commit from the `nogil-3.12` fork for context: https://github.com/colesbury/nogil-3.12/commit/ada9b73feb. Note that in CPython the relevant function is `PyDict_GetItemRef` not `PyDict_FetchItemWithError`.
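For reference, the module-level functions below are the ones that consult `state->cache`; precompiling the format with `struct.Struct` sidesteps the cache lookup entirely (a sketch):

```python
import struct

# struct.pack/unpack with a format string consult the per-module cache
# of compiled format objects (state->cache in C).
packed = struct.pack("<HH", 1, 2)
assert packed == b"\x01\x00\x02\x00"
assert struct.unpack("<HH", packed) == (1, 2)

# A precompiled Struct carries its own parsed format and avoids the cache.
fmt = struct.Struct("<HH")
assert fmt.pack(1, 2) == packed
```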
<!-- gh-linked-prs -->
### Linked PRs
* gh-112094
<!-- /gh-linked-prs -->
| 4744f59a5e86690e76c35d9b9d79971ebe9a87d7 | 55f3cce821f8f18ddb485aa07bdf0190c358d081 |
python/cpython | python__cpython-113830 | # Make collections.deque thread-safe in `--disable-gil` builds
# Feature or enhancement
The `collections.deque` object has mutable internal state that would not be thread-safe without the GIL. We should use the critical section API to make it thread-safe. I think it might be cleanest to first convert most of the deque implementation to Argument Clinic and use the `@critical_section` directive rather than writing the critical sections manually.
Mostly for my own reference, here is the implementation from `nogil-3.12`: https://github.com/colesbury/nogil-3.12/commit/f1e4742eaa. That implementation did not use the critical section API, which made it more complicated.
Depends on: https://github.com/python/cpython/issues/111903
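`deque.append` and `deque.popleft` are documented as thread-safe, so the critical sections must preserve behavior like the following, which works today because of the GIL (a sketch):

```python
import collections
import threading

d = collections.deque()

def producer(base):
    for i in range(1000):
        d.append(base + i)   # thread-safe today thanks to the GIL

threads = [threading.Thread(target=producer, args=(n * 1000,))
           for n in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

assert len(d) == 4000
assert sorted(d) == list(range(4000))   # no appends lost or duplicated
```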
<!-- gh-linked-prs -->
### Linked PRs
* gh-113830
* gh-113963
<!-- /gh-linked-prs -->
| dc978f6ab62b68c66d3b354638c310ee1cc844a6 | 474204765bdbdd4bc84a8ba49d3a6558e9e4e3fd |
python/cpython | python__cpython-114153 | # [Doc] Wrong parameter name for `concurrent.futures.Executor.map(fn, ...)
# Documentation
Just noticed that the published documentation for `concurrent.futures.Executor.map` says it takes a first parameter named `func`, whereas the implementation names it `fn`.
This is strictly a documentation issue with no runtime impact, since the parameter is followed by `*iterables` and therefore cannot be passed as a keyword argument anyway.
Implementation: https://github.com/python/cpython/blob/d2f305dfd183025a95592319b280fcf4b20c8694/Lib/concurrent/futures/_base.py#L583
CPython Doc: https://github.com/python/cpython/blob/d2f305dfd183025a95592319b280fcf4b20c8694/Doc/library/concurrent.futures.rst#executor-objects
Published docs: https://docs.python.org/3.12/library/concurrent.futures.html#concurrent.futures.Executor.map
I came across this while overwriting the `Executor` class and using the `@override` decorator which warned me that the naming is inconsistent.
Happy to provide a PR if needed.
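The mismatch is easy to confirm by introspection, which is also how `@override`-style checkers notice it:

```python
import inspect
from concurrent.futures import Executor, ThreadPoolExecutor

# The implementation names the first parameter "fn", not "func" as documented.
params = list(inspect.signature(Executor.map).parameters)
assert params[:2] == ["self", "fn"]

# Positional use, the only way the parameter can be passed, is unaffected.
with ThreadPoolExecutor(max_workers=2) as ex:
    assert list(ex.map(abs, [-1, -2, 3])) == [1, 2, 3]
```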
<!-- gh-linked-prs -->
### Linked PRs
* gh-114153
* gh-114164
* gh-114165
<!-- /gh-linked-prs -->
| 8d26db45df479a54eccd2aced7d8a5ea9fd0ffa5 | 05008c27b73da640b63c0d335c65ade517c0eb84 |
python/cpython | python__cpython-113372 | # mimalloc: warning: unable to directly request hinted aligned OS memory
# Bug report
### Bug description:
Configuration:
```sh
./configure --disable-gil --with-pydebug
```
Test Output:
```pytb
dietpi@DietPi:~/cpython$ ./python -m test test_cmd_line -v
======================================================================
FAIL: test_pythonmalloc (test.test_cmd_line.CmdLineTest.test_pythonmalloc) (env_var='mimalloc', name='mimalloc')
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/dietpi/cpython/Lib/test/test_cmd_line.py", line 845, in test_pythonmalloc
self.check_pythonmalloc(env_var, name)
File "/home/dietpi/cpython/Lib/test/test_cmd_line.py", line 811, in check_pythonmalloc
self.assertEqual(proc.stdout.rstrip(), name)
AssertionError: 'mimalloc: warning: unable to directly request hin[119 chars]lloc' != 'mimalloc'
- mimalloc: warning: unable to directly request hinted aligned OS memory (error: 2 (0x2), size: 0x40000000 bytes, alignment: 0x2000000, hint address: 0x20000000000)
mimalloc
======================================================================
FAIL: test_pythonmalloc (test.test_cmd_line.CmdLineTest.test_pythonmalloc) (env_var='mimalloc_debug', name='mimalloc_debug')
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/dietpi/cpython/Lib/test/test_cmd_line.py", line 845, in test_pythonmalloc
self.check_pythonmalloc(env_var, name)
File "/home/dietpi/cpython/Lib/test/test_cmd_line.py", line 811, in check_pythonmalloc
self.assertEqual(proc.stdout.rstrip(), name)
AssertionError: 'mimalloc: warning: unable to directly request hin[125 chars]ebug' != 'mimalloc_debug'
- mimalloc: warning: unable to directly request hinted aligned OS memory (error: 2 (0x2), size: 0x40000000 bytes, alignment: 0x2000000, hint address: 0x20000000000)
mimalloc_debug
----------------------------------------------------------------------
Ran 56 tests in 41.751s
FAILED (failures=2, skipped=2)
test test_cmd_line failed
test_cmd_line failed (2 failures) in 43.1 sec
== Tests result: FAILURE ==
1 test failed:
test_cmd_line
```
Environment:
```
dietpi@DietPi:~/cpython$ uname -a
Linux DietPi 6.1.21-v8+ #1642 SMP PREEMPT Mon Apr 3 17:24:16 BST 2023 aarch64 GNU/Linux
```
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-113372
<!-- /gh-linked-prs -->
| 9afb0e1606cad41ed57c42ea0a53ac90433f211b | 7de9855410d034b2b7624a057dbf7c3f58ee5328 |
python/cpython | python__cpython-112046 | # [C API] Revert of private functions removed in Python 3.13 causing most problems
Copy of my [Discourse message](https://discuss.python.org/t/revert-python-3-13-c-api-incompatible-changes-causing-most-troubles/38214).
Hi,
My [C API: My plan to clarify private vs public functions in Python 3.13](https://discuss.python.org/t/c-api-my-plan-to-clarify-private-vs-public-functions-in-python-3-13/30131) uses Python 3.13 beta 1 (May 2024) as the deadline to make most C extensions compatible with Python 3.13.
Problem: **I wasn't prepared for the number of people eager to test their project on Python 3.13 alpha 1 as soon as it was released.** Some people are working on porting their own code to Python 3.13, whereas others are blocked by dependencies that are not compatible yet. **For Python 3.12, it took months** to get Cython (7 months) and numpy (10 months) to become compatible (with a release), and apparently people were fine with this pace. I expected to have 6 months to go through issues and make decisions on a case by case basis. But Cython was made compatible with Python 3.13 (alpha 1) in 1 week! Well, for the Cython case, becoming compatible mostly means "disable optimizations", sadly. I don't want Cython (or other projects) to become slower.
Overall, it's great that people test Python 3.13 alpha 1! I love that: getting feedback as soon as possible! I want to encourage that and help people who are going through issues.
I'm still bug triaging C API reports, whereas some people ask for Python 3.13 to be fixed "right now". My plan is about designing public C API to replace the removed private functions, but this work takes time since we want to design "future proof" APIs which cover most use cases, expose fewer implementation details, and follow the [latest C API guidelines](https://devguide.python.org/developer-workflow/c-api/index.html). It's not just "take the private API and put it in the public C API": even if new public functions are added, they will not be drop-in replacements (there might be more work than "remove the underscore prefix").
I tried to communicate as much as I can in advance about my plan removing private C functions. **Now that the changes are done, the impact is known better, and we should reevaluate the situation with these new information.**
Another problem is that it's unclear to me how C API decisions are taken. Latest C API questions to the Steering Council were delegated to the future C API Working Group, but the [Working Group doesn't exist yet](https://github.com/python/steering-council/issues/210).
---
So well, let me propose concrete actions.
What should be done right now:
* **Revert** immediately C API changes impacting at least 5 projects. My colleague Karolina did [great bug triage work counting build failures per C API issue](https://discuss.python.org/t/ongoing-packages-rebuild-with-python-3-13-in-fedora/38134) by recompiling 4000+ Python packages in Fedora with Python 3.13.
* Consider reverting more changes on a case by case basis.
What should be done next:
* For removed functions which impact at least one project but we decide to not revert, document a solution in What's New in Python 3.13 (even if the function was "private", usually we don't document changes related to private APIs). I expected the existing public C API to be complete enough, but the devil is in the details, and "it's complicated". Some functions are only available in the private/internal C API, there is "no good" replacement in the public C API. Well, that's the part that I'm trying to fix.
* Add better public C API function to replace private C API functions.
* Help projects to migrate to better public C API functions.
What can be done in the long term:
* Clarify what the ``_Py`` prefix means: does it mean "can be changed anytime, no backward compatibility"? Or does it mean "it depends..."?
* Design a new approach to introduce incompatible C API changes. IMO the most practical one would be to introduce the concept of a "compatibility version" and let C extensions opt-in for "future" incompatible changes. It may give more control on who is impacted and when. For example, you can target the "C API version 3.15" and you will be impacted by all planned changes, especially deprecated functions scheduled for removal.
* Design guidelines on how to introduce incompatible changes.
---
The overall plan on private APIs was done in the frame of the [private API contract](https://docs.python.org/dev/c-api/stable.html):
> Names prefixed by an underscore, such as _Py_InternalState, are private API that **can change without notice** even in patch releases.
Well. That's the theory, and the practice shows me that users have "different expectations". Apparently, we have to update/complete this contract...
---
If we revert private C API function removals, does it mean that these **private** functions become part of the **public** C API and have to stay forever, be documented and tested? Or would they move back to their previous state: not supported and "can change anytime (which includes being removed)"?
---
See also:
* [Is Python 3.13 going to be Python 4?](https://discuss.python.org/t/is-python-3-13-going-to-be-python-4/37490) discussion
* [C API: Meta issue to replace removed functions with new clean public functions](https://github.com/python/cpython/issues/111481)
* [C API: Remove private C API functions (move them to the internal C API)](https://github.com/python/cpython/issues/106320#issuecomment-1762776276): link to recent discussions in this issue. I asked people to create new issues, since this one history is too long, but people prefer to commenting this issues instead.
<!-- gh-linked-prs -->
### Linked PRs
* gh-112046
* gh-112115
* gh-112119
* gh-112121
* gh-112171
* gh-119855
<!-- /gh-linked-prs -->
| b338ffa4bc078fd363e8b0078eef4e0d6a071546 | 4bbb367ba65e1df7307f7c6a33afd3c369592188 |
python/cpython | python__cpython-112018 | # Provide `ctypes.memoryview_at()`
# Feature or enhancement
### Proposal:
It should be possible to easily make memoryview objects from pointer-able ctypes objects with an arbitrary length, in the same way we can currently use `ctypes.string_at()` to create `bytes` objects. The advantage of using `memoryview` objects is that we can elide a buffer copy.
```python
import ctypes
a = (ctypes.c_ubyte * 10)()
ctypes.buffer_at(a, 10, True)[0] = 1
assert a[0] == 1
a[0] = 2
assert ctypes.buffer_at(a, 10, True)[0] == 2
```
### Has this already been discussed elsewhere?
This is a minor feature, which does not need previous discussion elsewhere
### Links to previous discussion of this feature:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-112018
<!-- /gh-linked-prs -->
| b4f799b1e78ede17b41de9a2bc51b437a7e6dd74 | f21af186bf21c1c554209ac67d78d3cf99f7d7c0 |
python/cpython | python__cpython-112017 | # quit() and exit() don't work in interactive help
# Bug report
### Bug description:
I tried to use `quit()` and `exit()` to return to the interactive Python shell from the interactive help utility, but these do not work. I need to use Ctrl+Z.
```python
Python 3.13.0a1 (tags/v3.13.0a1:ad056f0, Oct 13 2023, 09:51:17) [MSC v.1935 64 bit (AMD64)] on win32
# Add a code block here, if required
help> exit()
No Python documentation found for 'exit()'.
Use help() to get the interactive help utility.
Use help(str) for help on the str class.
help> quit()
No Python documentation found for 'quit()'.
Use help() to get the interactive help utility.
Use help(str) for help on the str class.
```
### CPython versions tested on:
3.13
### Operating systems tested on:
Windows
<!-- gh-linked-prs -->
### Linked PRs
* gh-112017
* gh-112047
* gh-112048
<!-- /gh-linked-prs -->
| b28bb130bbc2ad956828819967d83e06d30a65c5 | d5491a6eff516ad47906bd91a13d71cdde18f5ab |
python/cpython | python__cpython-115540 | # inspect.unwrap() does not work with types with the `__wrapped__` data descriptor
# Bug report
`inspect.unwrap()` follows the chain by `__wrapped__` links and returns the last item in the chain, or the original object if it does not have a `__wrapped__` attribute (there is also an additional stop predicate and protection against loops, but that is unrelated). It works well in most cases, except with a type that has a `__wrapped__` data descriptor.
For example the following code
```python
class W:
def __init__(self, x):
self._wrapped = x
@property
def __wrapped__(self):
return self._wrapped
import inspect
print(inspect.unwrap(W(chr)))
print(inspect.unwrap(W))
```
prints
```
<built-in function chr>
<property object at 0x7f334092dc50>
```
The former output is correct, `W(chr)` wraps `chr`. But the latter is wrong: the `W` type does not wrap a `property` object.
It is not a hypothetical issue. `staticmethod` and `classmethod` now have the `__wrapped__` attribute (bpo-43682/#87848). `inspect.signature()` uses `inspect.unwrap()`, and it cannot support `staticmethod` and `classmethod` even if they get a correct `__text_signature__`. `inspect.getsourcelines()` also uses `inspect.unwrap()` indirectly and can fail with Python classes that have the `__wrapped__` attribute.
`inspect.unwrap()` should stop before such attribute. But how to detect such case? There are several ways:
* Stop if `func` is a class. `pickle` does it for its special methods, this is why classes are handled separately from instances. But it means that `functools.wraps()`, `staticmethod` and `classmethod` cannot be used to decorate classes. Although if they are currently used, the result can be weird, because instances will have the same `__wrapped__` attribute as a class. I do not know how often wrapped classes are used in the real code, but there is a test for this. It may be the right way at the end, although it can break some questionable code.
* Stop if `func.__wrapped__` is a data descriptor. I afraid that it will affect multidecorated properties.
* Stop if `func.__wrapped__` is not callable. Do not know what can be consequences.
Maybe there are other ways?
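For illustration, the first option ("stop if `func` is a class") can already be expressed with `inspect.unwrap`'s `stop` predicate; this is only a sketch of the intended semantics, not the proposed implementation:

```python
import inspect

class W:
    def __init__(self, x):
        self._wrapped = x
    @property
    def __wrapped__(self):
        return self._wrapped

# Do not follow __wrapped__ when the current object is a class.
def unwrap(f):
    return inspect.unwrap(f, stop=lambda obj: isinstance(obj, type))

assert unwrap(W(chr)) is chr  # instance: the property is invoked, yielding chr
assert unwrap(W) is W         # class: stop before following the raw property
```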
<!-- gh-linked-prs -->
### Linked PRs
* gh-115540
* gh-115965
* gh-115966
<!-- /gh-linked-prs -->
| 68c79d21fa791d7418a858b7aa4604880e988a02 | b05afdd5ec325bdb4cc89bb3be177ed577bea41f |
python/cpython | python__cpython-112002 | # Fix test_builtins_have_signatures in test_inspect
`test_builtins_have_signatures` in `test_inspect` only passes by accident. It always passed because the loop repeatedly checked the same object (`zip`, the last object in the `builtins` module), which had no signature. But after adding the signature for `zip` in #111999 it exploded with failures.
<!-- gh-linked-prs -->
### Linked PRs
* gh-112002
* gh-112003
* gh-112004
<!-- /gh-linked-prs -->
| 40752c1c1e8cec80e99a2c9796f4fde2f8b5d3e2 | 12a30bc1aa0586308bf3fe12c915bcc5e54a032f |
python/cpython | python__cpython-112000 | # Add signatures for some builtins
# Feature or enhancement
Signature can be manually added for some builtin functions and classes that do not use Argument Clinic.
<!-- gh-linked-prs -->
### Linked PRs
* gh-112000
* gh-119540
* gh-119543
<!-- /gh-linked-prs -->
| 1d75ef6b6186619081c166b21c71c15b4f98beb8 | d0058cbd1cd7e72307adab45759e1ea6adc05866 |
python/cpython | python__cpython-116413 | # C-API for signalling monitoring events
# Feature or enhancement
### Proposal:
Language implementations for the CPython runtime (Cython, JIT compilers, regular expression engines, template engines, etc.) need a way to signal [PEP-669](https://peps.python.org/pep-0669/) monitoring events to the registered listeners.
1) We need a way to create events and inject them into the monitoring system. Since events have more than one signature, we might end up needing more than one C-API function for this.
2) We need a way to map 3D source code positions (file name, line, character) to 1D integer offsets. Code objects help, but branches might cross source file boundaries, so that's more tricky. For many use cases, a mapping between (line, character) positions and an integer offset would probably suffice, although templating languages usually provide some kind of include commands, as does Cython. There should be some help in the C-API for building up such a mapping.
The reason why I think we need CPython's help for mapping indices is that both sides, listeners and even producers, need to agree on the same mapping. Sadly, the events don't include line/character positions directly but only a single integer. So event producers need a way to produce a number that event listeners like coverage analysers can map back to a source code position.
### Has this already been discussed elsewhere?
I have already discussed this feature proposal on Discourse
### Links to previous discussion of this feature:
https://discuss.python.org/t/pep-669-low-impact-monitoring-for-cpython/13018/61?u=scoder
<!-- gh-linked-prs -->
### Linked PRs
* gh-116413
* gh-119179
* gh-119575
* gh-123822
* gh-123997
<!-- /gh-linked-prs -->
| 85af78996117dbe8ad45716633a3d6c39ff7bab2 | da2cfc4cb6b756b819b45bf34dd735c27b74d803 |
python/cpython | python__cpython-111994 | # socket module does not include all getnameinfo flags
The `socket` module does not include the `NI_IDN` `getnameinfo` flag.
See: https://man7.org/linux/man-pages/man3/getnameinfo.3.html
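For reference, the flags that are already exposed combine as a bitmask, and `NI_IDN` would slot into the same mechanism. A sketch using the numeric flags, which avoid any DNS lookup:

```python
import socket

# NI_NUMERICHOST / NI_NUMERICSERV force numeric output instead of
# resolving names, so this runs without network access.
host, service = socket.getnameinfo(
    ("127.0.0.1", 80),
    socket.NI_NUMERICHOST | socket.NI_NUMERICSERV,
)
assert host == "127.0.0.1"
assert service == "80"
```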
<!-- gh-linked-prs -->
### Linked PRs
* gh-111994
<!-- /gh-linked-prs -->
| fe9db901b2446b047e537447ea5bad3d470b0f78 | 62802b6228f001e1a4af6ac668a21d2dcec0ce57 |
python/cpython | python__cpython-113279 | # Update to SQLite 3.44 in Windows and macOS installers
SQLite 3.44 [is out](https://sqlite.org/releaselog/3_44_0.html). Let's wait some weeks and see if patch releases appear before bumping the version used by the installers.
<!-- gh-linked-prs -->
### Linked PRs
* gh-113279
* gh-113281
<!-- /gh-linked-prs -->
| 6a1d5a4c0f209d51ab33d6529935d643bcdb3ba2 | 76bef3832bae64664882e27ecb6f89800a12cf43 |
python/cpython | python__cpython-112249 | # unicode: make ucnhash_capi initialization thread-safe in `--disable-gil` builds
# Feature or enhancement
The `_PyUnicode_Name_CAPI` provides functions to get the name for a given Unicode character code and vice versa. It is lazily initialized and stored in the per-interpreter `_Py_unicode_state`:
https://github.com/python/cpython/blob/3932b0f7b1566374427daa8bc47203032015e350/Include/internal/pycore_unicodeobject.h#L425-L432
The initialization of the `ucnhash_capi` isn't thread-safe without the GIL. (There can be a data race on reading and writing `ucnhash_capi`).
Mostly for my own reference, here are the similar modifications in the `nogil-3.12` fork: https://github.com/colesbury/nogil-3.12/commit/5d006db9fa
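At the Python level, the `_PyUnicode_Name_CAPI` capsule is provided by `unicodedata` and consumed when decoding `\N{...}` escapes; the first such use triggers the lazy `ucnhash_capi` initialization in question:

```python
import unicodedata

# Both directions of the name <-> code point mapping go through the
# name database that the capsule exposes.
assert unicodedata.name("\u20ac") == "EURO SIGN"
assert unicodedata.lookup("EURO SIGN") == "\u20ac"
assert "\N{EURO SIGN}" == "\u20ac"   # resolved via the ucnhash C API
```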
<!-- gh-linked-prs -->
### Linked PRs
* gh-112249
<!-- /gh-linked-prs -->
| 0785c685599aaa052f85d6163872bdecb9c66486 | 81261fa67ff82b03c255733b0d1abbbb8a228187 |
python/cpython | python__cpython-113489 | # unicode: make `_PyUnicode_FromId` thread-safe in `--disable-gil` builds
# Feature or enhancement
The `_PyUnicode_FromId(_Py_Identifier *id)` function gets a Python `str` object (i.e., a `PyUnicodeObject`) from a static C string. Subsequent calls to `_PyUnicode_FromId()` return the same object. The initialization is not currently thread-safe without the GIL.
Mostly for my own reference, here is the implementation from `nogil-3.12`: https://github.com/colesbury/nogil-3.12/commit/6540bf3e6a. We will want to do things a bit differently in CPython 3.13.
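The closest Python-level analogy is string interning: repeated requests for the same string must yield the same object, which is the same once-only initialization pattern that needs to be made race-free here:

```python
import sys

# Like _PyUnicode_FromId, sys.intern returns the same object for
# subsequent calls with equal input.
a = sys.intern("a_static_identifier")
b = sys.intern("a_static_identifier")
assert a is b
```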
<!-- gh-linked-prs -->
### Linked PRs
* gh-113489
<!-- /gh-linked-prs -->
| 8f5b9987066f46daa67b622d913ff2c51c949ed4 | 36adc79041f4d2764e1daf7db5bb478923e89a1f |
python/cpython | python__cpython-111970 | # refactor dis so that it's easier to construct and display a list of Instruction() instances
I would like to use the dis module to display a list of Instruction instances, which is constructed externally.
Since the dis module currently expects a code object or a source code string, this requires a refactor.
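Today the public entry points are tied to compiled code; you cannot hand dis a pre-built list of `Instruction` instances for display. For example:

```python
import dis

def f(x):
    return x + 1

# get_instructions requires something compilable (a function, code
# object, or source string); the Instruction list it yields is what
# this refactor would let callers construct and display directly.
instrs = list(dis.get_instructions(f))
assert all(isinstance(i, dis.Instruction) for i in instrs)
assert any(i.opname == "RETURN_VALUE" for i in instrs)
```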
<!-- gh-linked-prs -->
### Linked PRs
* gh-111970
<!-- /gh-linked-prs -->
| b2af50cb0266f654cff126c7a78a4a7755bc3fbe | 40752c1c1e8cec80e99a2c9796f4fde2f8b5d3e2 |
python/cpython | python__cpython-113584 | # Use per-thread freelists in `--disable-gil` builds
# Feature or enhancement
CPython uses freelists for frequently allocated Python objects, like `dict`, `list`, and `slice`. These freelists are generally stored in the per-interpreter state, which is not thread-safe without the GIL. In `--disable-gil` builds, the freelists should be stored in the per-thread state (i.e., `PyThreadState`). I think we probably want to keep the freelists in the interpreter state for the default build. This will probably require some refactoring.
Freelists:
* `float`
* `slice` (`slice_cache`)
* `tuple`
* `list`
* `dict` (`PyDictKeysObject` and `PyDictObject`)
* `generator` (`value_freelist` and `asend_freelist`)
* `PyContext`
For context, here are similar changes in the `nogil-3.12` fork, but I expect the changes in CPython 3.13 to be a bit different:
* https://github.com/colesbury/nogil-3.12/commit/07f5f8c318
* https://github.com/colesbury/nogil-3.12/commit/2a4c17e896
<!-- gh-linked-prs -->
### Linked PRs
* gh-113584
* gh-113886
* gh-113919
* gh-113921
* gh-113929
* gh-113972
* gh-114122
* gh-114189
* gh-114270
* gh-114323
* gh-114581
* gh-114899
* gh-115329
* gh-115505
* gh-115546
* gh-136673
<!-- /gh-linked-prs -->
| 57bdc6c30d2665c2760ff5a88487e57c8b3c397a | cdca0ce0ad47604b7007229415817a7a152f7f9a |
python/cpython | python__cpython-112116 | # Use critical sections to protect I/O objects (in `--disable-gil` builds)
# Feature or enhancement
The I/O objects, like [`io.BufferedIOBase`](https://docs.python.org/3/library/io.html#io.BufferedIOBase), [`io.TextIOWrapper`](https://docs.python.org/3/library/io.html#io.TextIOWrapper), and [`io.StringIO`](https://docs.python.org/3/library/io.html#io.StringIO) have internal state that would not be thread-safe without the GIL.
We should be able to mostly use Argument Clinic's (AC) support for "critical sections" to guard methods on these objects. For operations that don't use AC, we can either convert them to use AC or write the `Py_BEGIN_CRITICAL_SECTION`/`Py_END_CRITICAL_SECTION` manually.
For context, here are the similar modifications in the `nogil-3.12` fork, but the implementation in CPython 3.13 will be a bit different (no need for extra locks, use the syntax from #111903):
* https://github.com/colesbury/nogil-3.12/commit/ffade9d6f6 (PR by @Mayuresh16)
* https://github.com/colesbury/nogil-3.12/commit/5b83c16dcd (PR by @aisk)
* https://github.com/colesbury/nogil-3.12/commit/6323ca60f9 (PR by @aisk)
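For example, `io.StringIO` keeps a buffer and a stream position internally; with the GIL, interleaved writes from multiple threads cannot corrupt them, and the critical sections must preserve that (a sketch):

```python
import io
import threading

buf = io.StringIO()

def writer(ch):
    for _ in range(1000):
        buf.write(ch)   # mutates the buffer and the stream position

threads = [threading.Thread(target=writer, args=(c,)) for c in "ab"]
for t in threads:
    t.start()
for t in threads:
    t.join()

out = buf.getvalue()
assert len(out) == 2000   # no write lost or torn
assert out.count("a") == 1000 and out.count("b") == 1000
```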
<!-- gh-linked-prs -->
### Linked PRs
* gh-112116
* gh-112193
* gh-112298
<!-- /gh-linked-prs -->
| 77d9f1e6d9aad637667264c16c83d255526cc1ba | a6d25de375087e27777ebc1c0bd106d532ef9083 |
python/cpython | python__cpython-112471 | # Implement stop-the-world functionality (for `--disable-gil` builds)
# Feature or enhancement
The `--disable-gil` builds occasionally need to pause all but one thread. Some examples include:
* Cyclic garbage collection, where this is often called a "stop the world event"
* Before calling `fork()`, to ensure a consistent state for internal data structures
* During interpreter shutdown, to ensure that daemon threads aren't accessing Python objects
In the `nogil-3.12` fork, a stop-the-world call paused all threads in all interpreters. In CPython 3.13, we probably want to provide two levels of "stop-the-world": per-interpreter and global. In general, per-interpreter pauses are preferable to global pauses, but there are some cases (like before `fork()`) where we want *all* threads to pause, so that we don't `fork()` while a thread is modifying some global runtime structure.
In the default build, stop-the-world calls should generally be no-ops, but global pauses may be useful in a few cases, such as before `fork()` when using multiple interpreters each with their own "GIL".
In the `nogil-3.12` fork, this was implemented together with other GC modifications: https://github.com/colesbury/nogil-3.12/commit/2864b6b36e. I'd like to implement it separately in 3.13 to keep the PRs smaller.
<!-- gh-linked-prs -->
### Linked PRs
* gh-112471
* gh-112859
<!-- /gh-linked-prs -->
| 441affc9e7f419ef0b68f734505fa2f79fe653c7 | 5f1997896d9c3ecf92e9863177c452b468a6a2c8 |
python/cpython | python__cpython-112291 | # sys.monitoring: local events still sent after free_tool_id
# Bug report
### Bug description:
I use set_local_events with a tool id. Later I free the tool id, but then still get those events. Shouldn't freeing the tool id stop all events?
```python
import sys
print(sys.version)
sysmon = sys.monitoring
events = sysmon.events
myid = 1
def simple_function():
print("Inside simple function")
def sysmon_py_start(code, instruction_offset):
print(f"sysmon_py_start({code=}, {instruction_offset=})")
sysmon.set_local_events(myid, code, events.LINE)
def sysmon_line(code, line_number):
print(f"sysmon_line({code=}, {line_number=})")
print("Registering")
sysmon.use_tool_id(myid, "try669.py")
sysmon.set_events(myid, events.PY_START)
sysmon.register_callback(myid, events.PY_START, sysmon_py_start)
sysmon.register_callback(myid, events.LINE, sysmon_line)
simple_function()
print("Freeing")
sysmon.set_events(myid, 0)
sysmon.free_tool_id(myid)
simple_function()
```
When I run this, I see:
```
3.12.0 (main, Oct 2 2023, 13:16:02) [Clang 14.0.3 (clang-1403.0.22.14.1)]
Registering
sysmon_py_start(code=<code object simple_function at 0x10ead1530, file "/private/tmp/try669.py", line 8>, instruction_offset=0)
sysmon_line(code=<code object simple_function at 0x10ead1530, file "/private/tmp/try669.py", line 8>, line_number=9)
Inside simple function
Freeing
sysmon_line(code=<code object simple_function at 0x10ead1530, file "/private/tmp/try669.py", line 8>, line_number=9)
Inside simple function
```
Shouldn't freeing the tool id stop sysmon_line from being called again?
The same behavior happens with 3.13.0a1.
cc @markshannon
### CPython versions tested on:
3.12, 3.13
### Operating systems tested on:
macOS
<!-- gh-linked-prs -->
### Linked PRs
* gh-112291
* gh-112304
<!-- /gh-linked-prs -->
| 46500c42f09a8342efde48ad74327d5225158ff3 | 9d70831cb7127855a8bf83b585525f13cffb9f59 |
python/cpython | python__cpython-112049 | # dtoa: thread safety in `--disable-gil` builds
# Feature or enhancement
The `Python/dtoa.c` library is responsible for formatting floating point numbers (`double`) as strings and for parsing strings into numbers. The shared state is not thread-safe without the GIL:
1. `Balloc` and `Bfree` use a per-interpreter free-list to avoid some allocations of `Bigint` objects
2. `pow5mult` uses a per-interpreter append-only list of `Bigint` powers of 5
For (1), we can just skip using the freelists in `--disable-gil` builds. We already have a code path (`Py_USING_MEMORY_DEBUGGER`) that doesn't use freelists.
https://github.com/python/cpython/blob/d61313bdb1eee3e4bb111e0b248ac2dbb48be917/Python/dtoa.c#L312
For (2), we can use atomic operations to append to the powers-of-5 linked list in a thread-safe manner. I don't think this needs to be guarded by a `Py_NOGIL` check, since each power-of-5 is only ever created once.
For context, here is the modification to `Python/dtoa.c` in nogil-3.12. Note that it uses a `PyMutex` for (2), which I think we can avoid.
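The append-only structure in (2) can be illustrated in Python. This is a hypothetical sketch of `pow5mult`'s caching strategy (the names `_SMALL`, `_p5s`, and `pow5mult` only loosely mirror the C code): each cached power is computed at most once and then only ever read, which is why a lock-free, atomically published list suffices.

```python
# Hypothetical Python sketch of dtoa's pow5mult caching idea: small
# powers come from a fixed table, larger ones from an append-only list
# of successive squarings that grows but is never mutated in place.

_SMALL = (1, 5, 25, 125)   # 5**0 .. 5**3
_p5s = [5 ** 4]            # cache of 5**4, 5**8, 5**16, ... (append-only)

def pow5mult(b, k):
    """Return b * 5**k, extending the shared cache as needed."""
    b *= _SMALL[k & 3]
    k >>= 2
    i = 0
    while k:
        while i >= len(_p5s):
            _p5s.append(_p5s[-1] * _p5s[-1])  # each power created once
        if k & 1:
            b *= _p5s[i]
        k >>= 1
        i += 1
    return b
```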
### dragonbox, Ryū, Grisu, Schubfach
In the past 5 or so years, there have been a number of faster float-to-string algorithms with the desirable attributes (correctly rounded, no "garbage" digits, etc.). To my knowledge they are also all thread-safe. ["dragonbox"](https://github.com/jk-jeon/dragonbox) looks the most promising, but porting it to C is a bigger undertaking than making the existing `dtoa.c` thread-safe.
<!-- gh-linked-prs -->
### Linked PRs
* gh-112049
<!-- /gh-linked-prs -->
| 2d76be251d0aee89f76e6fa5a63fa1ad3f2b76cf | 9f67042f28bf886a9bf30fed6795d26cff255f1e |
python/cpython | python__cpython-111960 | # Thread-safe one-time initialization
# Feature or enhancement
Some CPython internals require initialization exactly once. Some of these one-time initializations are not thread-safe without the GIL or have data races according to the C11 memory model.
We should add a lightweight, thread-safe one-time initialization API similar to C++11's [`std::call_once`](https://en.cppreference.com/w/cpp/thread/call_once) [^1]. The proposed internal-only API follows C++11's `std::call_once`, but adapted for C (i.e., error returns and function pointers):
```c
typedef struct {
uint8_t v;
} _PyOnceFlag;
typedef int _Py_once_fn_t(void *arg);
// Calls `fn` once using `flag`. The `arg` is passed to the call to `fn`.
//
// Returns 1 on success and 0 on failure.
//
// If `fn` returns 1 (success), then subsequent calls immediately return 1.
// If `fn` returns 0 (failure), then subsequent calls will retry the call.
int _PyOnceFlag_CallOnce(_PyOnceFlag *flag, _Py_once_fn_t *fn, void *arg);
```
As an example, `Python-ast.c` relies on the GIL and an `initialized` variable to ensure that it is only initialized once:
https://github.com/python/cpython/blob/d61313bdb1eee3e4bb111e0b248ac2dbb48be917/Python/Python-ast.c#L1126-L1135
[^1]: Also, [`pthread_once`](https://pubs.opengroup.org/onlinepubs/7908799/xsh/pthread_once.html) and C11's [`call_once`](https://en.cppreference.com/w/c/thread/call_once). `std::call_once` supports error returns, which is important for CPython's use cases.
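For illustration, here is a hypothetical Python analogue of the proposed semantics (the `OnceFlag` class and its lock are my own names, not part of the proposal): a successful call latches the flag, while a failed call leaves it clear so later callers retry.

```python
import threading

class OnceFlag:
    """Python sketch of the proposed _PyOnceFlag semantics: the callable
    runs at most once successfully; a failure allows later retries."""

    def __init__(self):
        self._lock = threading.Lock()
        self._done = False

    def call_once(self, fn, arg):
        if self._done:            # fast path: already succeeded
            return True
        with self._lock:
            if self._done:        # another thread won the race
                return True
            if fn(arg):           # truthy return == success
                self._done = True
                return True
            return False          # failure: next caller retries
```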
<!-- gh-linked-prs -->
### Linked PRs
* gh-111960
<!-- /gh-linked-prs -->
| 446f18a911916eabd2c0ceed0c2a109fc8480727 | f66afa395a6d06097ad1ca222ed076e18a7a8126 |
python/cpython | python__cpython-111977 | # Document better assignment expression parentheses requirements
# Feature or enhancement
### Proposal:
Should we maybe think about allowing unparenthesized top-level assignment expressions? It came up on ipython/ipython#14214 that people would like to have something like that, especially in REPLs, in order to be able to assign a value to a variable and have it be displayed afterwards. I played around with the parser a bit and the change to allow this is very straightforward.
We could also discuss adding it behind a flag or only enable it on the REPL.
Behavior on main:
```python
❯ ./python.exe
Python 3.13.0a1+ (heads/main:baeb7718f8, Nov 10 2023, 12:05:38) [Clang 15.0.0 (clang-1500.0.40.1)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> a := 3
File "<stdin>", line 1
a := 3
^^
SyntaxError: invalid syntax
```
New behavior:
```python
❯ ./python.exe
Python 3.13.0a1+ (heads/main-dirty:baeb7718f8, Nov 10 2023, 12:20:20) [Clang 15.0.0 (clang-1500.0.40.1)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> a := 3
3
```
### Has this already been discussed elsewhere?
This is a minor feature, which does not need previous discussion elsewhere
### Links to previous discussion of this feature:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-111977
* gh-112010
* gh-112011
<!-- /gh-linked-prs -->
| 9a2f25d374f027f6509484d66e1c7bba03977b99 | d7cef7bc7ea5478abb90a37c8ffb0792cc6e7518 |
python/cpython | python__cpython-111976 | # TextIOWrapper.reconfigure() crashes if encoding is not string or None
# Crash report
### What happened?
Unlike the `TextIOWrapper` constructor, `TextIOWrapper.reconfigure()` does not check that the encoding argument is actually None or a string. If it is not None, it calls `_PyUnicode_EqualToASCIIString`, which only works with strings and crashes in a debug build. There may be other similar errors in other arguments.
```pycon
>>> import sys
>>> sys.stdout.reconfigure(encoding=42)
Objects/unicodeobject.c:552: _PyUnicode_CheckConsistency: Assertion failed: PyType_HasFeature((Py_TYPE(((PyObject*)((op))))), ((1UL << 28)))
Enable tracemalloc to get the memory block allocation traceback
object address : 0x55a2e3147ca8
object refcount : 4294967295
object type : 0x55a2e3119f00
object type name: int
object repr : 42
Fatal Python error: _PyObject_AssertFailed: _PyObject_AssertFailed
Python runtime state: initialized
Current thread 0x00007fc99ee43740 (most recent call first):
File "<stdin>-1", line 1 in <module>
Aborted (core dumped)
```
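For comparison, the constructor path already rejects a non-string encoding with `TypeError`. A sketch of the missing validation for `reconfigure()` might look like the hypothetical `validate_reconfigure_args` helper below (this is an illustration, not the actual C fix, and it only covers the str-typed keyword arguments):

```python
import io

# The constructor validates the encoding argument before any
# C-level string comparison happens...
try:
    io.TextIOWrapper(io.BytesIO(), encoding=42)
    constructor_rejects = False
except TypeError:
    constructor_rejects = True

# ...so reconfigure() should perform an equivalent check up front.
def validate_reconfigure_args(encoding=None, errors=None, newline=None):
    for name, value in (("encoding", encoding), ("errors", errors),
                        ("newline", newline)):
        if value is not None and not isinstance(value, str):
            raise TypeError(f"{name} must be str or None, "
                            f"not {type(value).__name__}")
```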
### CPython versions tested on:
3.10, 3.11, 3.12
### Operating systems tested on:
_No response_
### Output from running 'python -VV' on the command line:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-111976
* gh-112058
* gh-112059
* gh-112061
* gh-112067
* gh-112085
* gh-112089
* gh-126542
<!-- /gh-linked-prs -->
| ee06fffd38cb51ce1c045da9d8336d9ce13c318a | a519b87958da0b340caef48349d6e3c23c98e47e |
python/cpython | python__cpython-111937 | # broken link to A.Neumaier article in built-in sum comment
# Bug report
### Bug description:
The new implementation of `sum` on Python 3.12 (cfr. https://github.com/python/cpython/issues/100425 , https://github.com/python/cpython/pull/100426 , https://github.com/python/cpython/pull/107785 ) is not associative on **simple** input values. This minimal code shows the bug:
On Python 3.11:
```python
>>> a = [0.1, -0.2, 0.3, -0.4, 0.5]
>>> a.append(-sum(a))
>>> sum(a) == 0
True
```
On Python 3.12:
```python
>>> a = [0.1, -0.2, 0.3, -0.4, 0.5]
>>> a.append(-sum(a))
>>> sum(a) == 0
False
```
I'm sure this affects more users than the "improved numerical accuracy" on badly scaled input data which most users don't ever deal with, and for which exact arithmetic is already available in the Standard Library
-> https://docs.python.org/3/library/decimal.html.
I'm surprised this low-level change was accepted **with so little review**. There are other red flags connected with this change:
- The link to the new algorithm's description in `cPython`'s official code is dead -> https://github.com/python/cpython/blob/289af8612283508b67d7969d7182070381b4349b/Python/bltinmodule.c#L2614
- The researcher to which the algorithm is credited has an empty academic page, with no PDFs -> https://www.mat.univie.ac.at/~neum/
Is anybody interested in keeping the quality of `cPython`'s codebase high? When I learned Python, I remember one of the first thing in the official tutorial was that Python is a handy calculator, and now to me it seems broken. @gvanrossum ?
### CPython versions tested on:
3.12
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-111937
* gh-111993
<!-- /gh-linked-prs -->
| 12a30bc1aa0586308bf3fe12c915bcc5e54a032f | ce6a533c4bf1afa3775dfcaee5fc7d5c15a4af8c |
python/cpython | python__cpython-112005 | # json: make "memo" dict local to scan_once call
# Feature or enhancement
The `Modules/_json.c` parser is mostly stateless (or the state is immutable). The one exception is the "memo" dictionary, which is used to avoid duplicate `PyUnicodeObject` instances for the same JSON C strings.
https://github.com/python/cpython/blob/289af8612283508b67d7969d7182070381b4349b/Modules/_json.c#L696-L700
The `memo` dictionary is already cleared after each call `scan_once`:
https://github.com/python/cpython/blob/289af8612283508b67d7969d7182070381b4349b/Modules/_json.c#L1118
We should move the creation and destruction of the `memo` dict to the invocation of `scan_once` instead of having it as part of the module state. This will avoid contention on the dictionary locks in `--disable-gil` builds if multiple threads are concurrently parsing JSON strings.
For an example modification, see https://github.com/colesbury/nogil-3.12/commit/964bb33962.
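As an illustrative Python analogue (not the C implementation), the memo is just a dictionary created per parse call, so equal key strings collapse to one object without any cross-call shared state:

```python
def load_keys(raw_keys, memo=None):
    """Sketch of _json's memo: deduplicate key strings within a single
    parse so equal keys share one str object; the dict is local to the
    call, so concurrent parses never contend on it."""
    if memo is None:
        memo = {}  # created per call instead of living in module state
    return [memo.setdefault(k, k) for k in raw_keys]

# A key that appears twice is stored once and reused.
keys = load_keys(["id", "name", "".join(["i", "d"])])
```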
<!-- gh-linked-prs -->
### Linked PRs
* gh-112005
<!-- /gh-linked-prs -->
| d0058cbd1cd7e72307adab45759e1ea6adc05866 | 9a2f25d374f027f6509484d66e1c7bba03977b99 |
python/cpython | python__cpython-112189 | # Make weakref thread-safe without the GIL
# Feature or enhancement
The current weakref implementation relies on the GIL for thread-safety.
The `nogil-3.12` fork substantially modifies the weakref implementation. I think we can implement a simpler change in CPython 3.13 (main) now that all PyObject's have their own mutex (in `--disable-gil` builds).
### Basic idea
Protect access to the weakrefs linked list using the mutex in the weakly referenced object. Use the critical section API.
Prior implementation: https://github.com/colesbury/nogil-3.12/commit/0dddcb6f9d
<!-- gh-linked-prs -->
### Linked PRs
* gh-112189
* gh-112267
* gh-113621
* gh-116825
* gh-116843
* gh-116844
* gh-117168
* gh-117275
<!-- /gh-linked-prs -->
| 0566ab9c4d966c7280a1c02fdeea8129ba65de81 | eb3c94ea669561a0dfacaca715d4b2723bb2c6f4 |
python/cpython | python__cpython-112207 | # Avoid changing the PYMEM_DOMAIN_RAW allocator during initialization and finalization
# Bug report
CPython currently changes `PYMEM_DOMAIN_RAW` temporarily to the default allocator during initialization and shutdown. The motivation is to ensure that core runtime structures are allocated and freed using the same allocator. However, modifying the current allocator changes global state and is not thread-safe even with the GIL. Other threads may be allocating or freeing objects using `PYMEM_DOMAIN_RAW`; they are not required to hold the GIL to call `PyMem_RawMalloc`/`PyMem_RawFree`.
We can avoid changing global state while still ensuring that we use a consistent allocator during initialization and shutdown.
### `PyThread_type_lock`
Many of the runtime structures are `PyThread_type_lock` objects. We can avoid allocation/freeing entirely for these locks by using `PyMutex` or `PyRawMutex` instead.
https://github.com/python/cpython/blob/b9f814ce6fdc2fd636bb01e60c60f3ed708a245f/Python/pystate.c#L396-L418
### Calls to `PyMem_RawMalloc`, `PyMem_RawCalloc`, `PyMem_RawFree`, etc.
For the other calls to `PyMem_RawMalloc`, etc. where we *know* we want to use the default allocator, we should directly call a new internal-only function that always uses the default allocator. This will avoid unnecessarily modifying global state.
For example, we can add a new function `_PyMem_DefaultRawMalloc` that behaves like `PyMem_RawMalloc`, except that it is not modifiable by `_PyMem_SetDefaultAllocator`.
For an example implementation in the nogil-3.12 fork, see https://github.com/colesbury/nogil-3.12/commit/d13c63dee9.
<!-- gh-linked-prs -->
### Linked PRs
* gh-112207
* gh-125321
* gh-130287
<!-- /gh-linked-prs -->
| f823910bbd4bf01ec3e1ab7b3cb1d77815138296 | f8dcb8200626a1a06c4a26d8129257f42658a9ff |
python/cpython | python__cpython-111981 | # Make hashlib related modules thread-safe without the GIL
# Feature or enhancement
The hashlib based modules already have some locking to make some operations thread-safe (with the GIL), but the logic isn't sufficient if running with the GIL disabled.
Relevant files:
* Modules/_blake2/blake2b_impl.c
* Modules/_blake2/blake2s_impl.c
* Modules/_hashopenssl.c
* Modules/hashlib.h
* Modules/md5module.c
* Modules/sha1module.c
* Modules/sha2module.c
* Modules/sha3module.c
Basic idea:
1. Replace `PyThread_type_lock lock` with `PyMutex`. This should be both simpler and faster in general and avoid the need for dynamically assigning a lock, which can pose thread-safety issues without the GIL
2. Add a field `bool use_mutex` to indicate if the code should lock the mutex. This should always be set to `true` in `Py_NOGIL`. In the default build, we should dynamically set it to `true` in places where we previously allocated `self->lock`
3. Update `ENTER_HASHLIB` and `EXIT_HASHLIB` macros.
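Steps 1 and 2 can be sketched in Python. The `LockedHash` class below is a hypothetical analogue of the C pattern, with `threading.Lock` standing in for `PyMutex` and the branch on `use_mutex` standing in for the `ENTER_HASHLIB`/`EXIT_HASHLIB` macros:

```python
import hashlib
import threading

class LockedHash:
    """Sketch of the proposed pattern: a per-object mutex plus a
    use_mutex flag that would always be true in --disable-gil builds
    and is flipped on lazily in the default build once the object may
    be shared across threads."""

    def __init__(self, name):
        self._h = hashlib.new(name)
        self._mutex = threading.Lock()  # stands in for PyMutex
        self.use_mutex = False          # Py_NOGIL builds would set True

    def update(self, data):
        if self.use_mutex:              # ENTER_HASHLIB
            with self._mutex:
                self._h.update(data)
        else:                           # single-owner fast path
            self._h.update(data)
        # EXIT_HASHLIB

    def hexdigest(self):
        return self._h.hexdigest()
```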
<!-- gh-linked-prs -->
### Linked PRs
* gh-111981
<!-- /gh-linked-prs -->
| a6465605c1417792ec04ced88340cdf104a402b6 | 7218bac8c84115a8e9a18a4a8f3146235068facb |
python/cpython | python__cpython-111913 | # Run test_posix on Windows
When I searched for a place for tests for gh-111841, I found that they already exist, but in `test_posix`, which is skipped on Windows.
The simplest way is to make `test_posix` run on Windows. It turned out that most tests are already compatible with Windows, except this test and one other test that checks Posix-specific constants.
<!-- gh-linked-prs -->
### Linked PRs
* gh-111913
* gh-111953
* gh-111954
<!-- /gh-linked-prs -->
| 64fea3211d08082236d05c38ee728f922eb7d8ed | 65d6dc27156112ac6a9f722b7b62529c94e0344b |
python/cpython | python__cpython-111907 | # Unused function warnings during mimalloc build on FREEBSD
# Bug report
### Bug description:
```c
cc -pthread -c -fno-strict-overflow -Wsign-compare -g -Og -Wall -O2 -pipe -std=c11 -Wextra -Wno-unused-parameter -Wno-missing-field-initializers -Wstrict-prototypes -Werror=implicit-function-declaration -fvisibility=hidden -I./Include/internal -I./Include/internal/mimalloc -I. -I./Include -DPy_BUILD_CORE -o Objects/rangeobject.o Objects/rangeobject.c
--- Objects/obmalloc.o ---
In file included from Objects/obmalloc.c:15:
In file included from Objects/mimalloc/static.c:37:
In file included from Objects/mimalloc/prim/prim.c:22:
Objects/mimalloc/prim/unix/prim.c:66:12: warning: unused function 'mi_prim_open' [-Wunused-function]
static int mi_prim_open(const char* fpath, int open_flags) {
^
Objects/mimalloc/prim/unix/prim.c:69:16: warning: unused function 'mi_prim_read' [-Wunused-function]
static ssize_t mi_prim_read(int fd, void* buf, size_t bufsize) {
^
Objects/mimalloc/prim/unix/prim.c:72:12: warning: unused function 'mi_prim_close' [-Wunused-function]
static int mi_prim_close(int fd) {
^
Objects/mimalloc/prim/unix/prim.c:75:12: warning: unused function 'mi_prim_access' [-Wunused-function]
static int mi_prim_access(const char *fpath, int mode) {
^
--- Objects/setobject.o ---
```
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Other
<!-- gh-linked-prs -->
### Linked PRs
* gh-111907
<!-- /gh-linked-prs -->
| 0ff6368519ed7542ad8b443de01108690102420a | ee06fffd38cb51ce1c045da9d8336d9ce13c318a |
python/cpython | python__cpython-111904 | # Argument Clinic: Add support for PEP 703's critical sections (`Py_BEGIN_CRITICAL_SECTION()`)
# Feature or enhancement
PEP 703 in large part relies on replacing the GIL with fine grained per-object locks. The primary way to acquire and release these locks is through the critical section API (https://github.com/python/cpython/issues/111569). It would be helpful if argument clinic could auto generate the API calls in the binding code when specified.
For example, the [`bufferedio.c`](https://github.com/python/cpython/blob/6f09f69b7f85962f66d10637c3325bbb2b2d9853/Modules/_io/bufferedio.c) will require per-object locking around most calls for thread-safety in `--disable-gil` builds.
As an example, the `_io._Buffered.close` function is written as:
https://github.com/python/cpython/blob/6f09f69b7f85962f66d10637c3325bbb2b2d9853/Modules/_io/bufferedio.c#L523-L529
We might add a `@critical_section` directive to designate that argument clinic should generate `Py_BEGIN_CRITICAL_SECTION()` and `Py_END_CRITICAL_SECTION()` in the binding code before calling `_io__Buffered_close_impl`.
```c
/*[clinic input]
@critical_section
_io._Buffered.close
[clinic start generated code]*/
static PyObject *
_io__Buffered_close_impl(buffered *self)
/*[clinic end generated code: output=7280b7b42033be0c input=d20b83d1ddd7d805]*/
```
The generated binding code in `bufferedio.c.h` would then look like:
```c
static PyObject *
_io__Buffered_close(buffered *self, PyObject *Py_UNUSED(ignored))
{
PyObject *return_value = NULL;
Py_BEGIN_CRITICAL_SECTION(self);
return_value = _io__Buffered_close_impl(self);
Py_END_CRITICAL_SECTION();
return return_value;
}
```
Note that `Py_BEGIN/END_CRITICAL_SECTION()` are no-ops in the default build.
<!-- gh-linked-prs -->
### Linked PRs
* gh-111904
* gh-112251
<!-- /gh-linked-prs -->
| 324531df909721978446d504186738a33ab03fd5 | 16055c160412544e2a49794aaf3aa70c584f843a |
python/cpython | python__cpython-111898 | # Enum HOWTO doesn't render well on mobile
Specifically the purple sidebar on in section [Functional API](https://docs.python.org/3/howto/enum.html#functional-api).
* The bar itself is hard to read.
* Text is way wider than the rest of the site.
* Text renders in a different font size.
In Chrome devtools device mode the font size seems consistent, but the other problems still appear.
Tested in Safari on iPhone 12 mini, with docs for Python 3.12 and 3.13.
<details>
<summary>Screenshot</summary>

</details>
<!-- gh-linked-prs -->
### Linked PRs
* gh-111898
* gh-111908
* gh-111909
<!-- /gh-linked-prs -->
| 7d21e3d5ee9858aee570aa6c5b6a6e87d776f4b5 | bc12f791127896cd3538fac4465e3190447b2257 |
python/cpython | python__cpython-111884 | # libregrtest: Reduce number of imports at startup
Currently, when a test is run by ``./python -m test (...)``, the test starts with 170 modules already imported. The problem is that imports can have side effects and I would prefer to start tests in a "minimal environment".
Running a test with ``./python -m test -j1 (...)`` only imports 164 modules.
Example by adding ``Lib/test/test_x.py``:
```py
import sys
print(len(sys.modules))
```
Output:
```
vstinner@mona$ ./python -m test test_x
170
vstinner@mona$ ./python -m test -j1 test_x
164
```
I propose to try to make more imports lazy: only import the module the first time that it's needed, tests are rarely "performance sensitive". It may improve the "startup time".
<!-- gh-linked-prs -->
### Linked PRs
* gh-111884
* gh-111885
* gh-111886
* gh-111887
* gh-111888
* gh-111889
* gh-111890
* gh-111893
* gh-111894
* gh-111902
* gh-112172
<!-- /gh-linked-prs -->
| 6f09f69b7f85962f66d10637c3325bbb2b2d9853 | 2f2a0a3a6c5494bcf13a8f587c28e30d86617ff0 |
python/cpython | python__cpython-113716 | # os.stat returns incorrect a/c/mtime for files without permissions under Windows in Python 3.12
# Bug report
### Bug description:
We are switching our product to Python 3.12 from Python 3.11.2 and our tests caught a difference in the way `os.stat` works under Windows. It returns negative values of `st_atime`, `st_mtime`, and `st_ctime` if there are no permissions assigned to the file.
How to reproduce: you can remove all permissions using Windows Explorer, right-click on a file, select "Properties", select "Security" tab, press "Edit" button, and finally remove all entries from the "Group and user names" list. Or remove all inherited permissions in "Properties / Security /Advanced" window, `icacls` should show an empty list of permissions, for example:
```shell
PS C:\...> icacls a.txt
a.txt
Successfully processed 1 files; Failed processing 0 files
```
and then check the output in Python 3.12:
```python
(python-312) PS C:\...> python -c "import os; print(os.stat('a.txt'))"
os.stat_result(st_mode=33206, st_ino=0, st_dev=0, st_nlink=0, st_uid=0, st_gid=0, st_size=0, st_atime=-11644473600, st_mtime=-11644473600, st_ctime=-11644473600)
```
and the same for Python 3.11.2, which is correct:
```python
(python-311) PS C:\...> python -c "import os; print(os.stat('a.txt'))"
os.stat_result(st_mode=33206, st_ino=0, st_dev=0, st_nlink=0, st_uid=0, st_gid=0, st_size=0, st_atime=1699527432, st_mtime=1699527008, st_ctime=1699527008)
```
The returned value sounds like `secs_between_epochs` from https://github.com/python/cpython/blob/main/Python/fileutils.c#L1050 Maybe it has something to do with the issue.
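The suspicion about `secs_between_epochs` checks out arithmetically: 11644473600 is exactly the number of seconds between the Windows FILETIME epoch (1601-01-01) and the Unix epoch (1970-01-01), so getting `-11644473600` back is what a zero FILETIME looks like after conversion to Unix time:

```python
from datetime import datetime

# Seconds between the FILETIME epoch and the Unix epoch; a FILETIME of
# zero (what the failing stat call apparently produces) converts to
# exactly -11644473600 in Unix time.
SECS_BETWEEN_EPOCHS = int(
    (datetime(1970, 1, 1) - datetime(1601, 1, 1)).total_seconds()
)
```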
### CPython versions tested on:
3.12
### Operating systems tested on:
Windows
<!-- gh-linked-prs -->
### Linked PRs
* gh-113716
* gh-113989
* gh-114089
<!-- /gh-linked-prs -->
| ed066481c76c6888ff5709f5b9f93b92c232a4a6 | e68806c7122070078507b370b13bb225f8501ff8 |
python/cpython | python__cpython-111876 | # Objects in `NamedTuple` class namespaces don't have `__set_name__` called on them
# Bug report
### Bug description:
Generally speaking, any objects present in the namespace of a class `Foo` that define the special `__set_name__` method will have that method called on them as part of the creation of the class `Foo`. (`__set_name__` is generally only used for descriptors, but can be defined on any class.)
For example:
```pycon
Python 3.12.0 (tags/v3.12.0:0fb18b0, Oct 2 2023, 13:03:39) [MSC v.1935 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> class Annoying:
... def __set_name__(self, owner, name):
... raise Exception('no')
...
>>> class Foo:
... attr = Annoying()
...
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<stdin>", line 3, in __set_name__
Exception: no
Error calling __set_name__ on 'Annoying' instance 'attr' in 'Foo'
```
Descriptors inside `typing.NamedTuple` namespaces generally work the same way as in other class namespaces...
```pycon
>>> from typing import NamedTuple
>>> class Foo(NamedTuple):
... bar = property(lambda self: 42)
...
>>> Foo().bar
42
```
...but with one notable exception: they don't have `__set_name__` called on them!
```pycon
>>> class Annoying:
... def __set_name__(self, owner, name):
... raise Exception('no')
...
>>> from typing import NamedTuple
>>> class Foo(NamedTuple):
... bar = Annoying() # this should cause the creation of the `Foo` class to fail...
...
>>> # ...but it didn't!
```
## Why does this happen?
`__set_name__` would normally be called on all members of a class dictionary during the class's creation. But the `NamedTuple` class `Foo` is _created_ here:
https://github.com/python/cpython/blob/97c4c06d0d235aad00e5b6b10af8b8d68c889b9b/Lib/typing.py#L2721-L2723
And the `bar` attribute is only monkey-patched onto the `Foo` class _after_ the class has actually been created. This happens a few lines lower down in `typing.py`, here:
https://github.com/python/cpython/blob/97c4c06d0d235aad00e5b6b10af8b8d68c889b9b/Lib/typing.py#L2732-L2733
`__set_name__` isn't called on `Foo.bar` as part of the creation of the `Foo` class, because the `Foo` class doesn't _have_ a `bar` attribute at the point in time when it's actually being created.
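One possible shape of a fix (a hypothetical sketch, not the actual patch) is to replicate what `type.__new__` would have done when the leftover namespace entries are monkey-patched onto the class:

```python
# Hypothetical helper: after setattr, look up __set_name__ on the
# value's type (as type.__new__ does) and invoke it with the owner
# class and attribute name.
def _patch_attribute(cls, name, value):
    setattr(cls, name, value)
    set_name = getattr(type(value), "__set_name__", None)
    if set_name is not None:
        set_name(value, cls, name)

class Recorder:
    def __set_name__(self, owner, name):
        self.owner, self.name = owner, name

class Demo(tuple):          # stands in for the created NamedTuple class
    pass

_patch_attribute(Demo, "attr", Recorder())
```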
### CPython versions tested on:
3.8, 3.11, 3.12, CPython main branch
### Operating systems tested on:
Windows
<!-- gh-linked-prs -->
### Linked PRs
* gh-111876
<!-- /gh-linked-prs -->
| 22e411e1d107f79a0904d41a489a82355a39b5de | ffe1b2d07b88e185f373ad696fbea5a7f2a315c1 |
python/cpython | python__cpython-118134 | # No max_children in socketserver.ForkingMixIn documentation
# Documentation
There is nothing about the `max_children` attribute of the `socketserver.ForkingMixIn` class in the documentation.
<!-- gh-linked-prs -->
### Linked PRs
* gh-118134
<!-- /gh-linked-prs -->
| ff5751a208e05f9d054b6df44f7651b64d415908 | 705a123898f1394b62076c00ab6008c18fd8e115 |
python/cpython | python__cpython-111864 | # Rename `Py_NOGIL` to a positive term
From the [PEP 703 acceptance](https://discuss.python.org/t/pep-703-making-the-global-interpreter-lock-optional-in-cpython-acceptance/37075?u=hugovk):
> We want to avoid negatives in terms and flags and such, so we won’t get into double-negative terrain (like we do when we talk about ‘non no-GIL’). We’d like a positive, clear term to talk about the no-GIL build, and we’re suggesting ‘free-threaded’. (Relatedly, that’s why the build mode/ABI letter is ‘t’ and not ‘n’; that change was already made.)
`Py_NOGIL` is a user-facing macro, let's rename it to something positive.
Maybe `Py_DISABLE_GIL` or `Py_FREE_THREADED`?
cc @colesbury
<!-- gh-linked-prs -->
### Linked PRs
* gh-111864
* gh-112300
* gh-112307
<!-- /gh-linked-prs -->
| 3b3ec0d77f0f836cbe5ff1ab97efcc8b7ed5d787 | 1c8f912ebdfdb146cd7dd2d7a3a67d2c5045ddb0 |
python/cpython | python__cpython-112038 | # os.fstat() fails on FAT32 file system on Windows
# Bug report
### Bug description:
This is Python 3.12.0 x64 running on Windows 11. On a FAT32 drive, I have a file `f.txt`.
```python
>>> import os
>>> os.stat('f.txt')
os.stat_result(st_mode=33206, st_ino=4194560, st_dev=1589430838, st_nlink=1, st_uid=0, st_gid=0, st_size=0, st_atime=1699419600, st_mtime=1699459266, st_ctime=1699459265)
>>> f = open('f.txt','rb')
>>> f.fileno()
3
>>> os.fstat(f.fileno())
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
OSError: [WinError 87] The parameter is incorrect
```
This error does not occur with Python 3.11.6. I suspect that the issue was introduced with https://github.com/python/cpython/pull/102149. https://github.com/python/cpython/commit/6031727a37c6003f78e3b0c7414a0a214855dd08 provided a fix for `os.stat()`, but it seems that an equivalent bug exists for `os.fstat()`.
I believe the fix would involve changing the line https://github.com/python/cpython/blob/74b868f636a8af9e5540e3315de666500147d47a/Python/fileutils.c#L1275C48-L1275C48 to account for the possibility that the file system does not support `FileIdInfo`.
### CPython versions tested on:
3.12
### Operating systems tested on:
Windows
<!-- gh-linked-prs -->
### Linked PRs
* gh-112038
* gh-112044
<!-- /gh-linked-prs -->
| 29af7369dbbbba8cefafb196e977bce8189a527d | d2f305dfd183025a95592319b280fcf4b20c8694 |
python/cpython | python__cpython-111849 | # Unify branches and deopts in the tier 2 IR.
The basic premise of optimizing traces is that most branches are highly biased.
So we use `DEOPT_IF` to branch in the uncommon case.
However in the optimizer we may want to move work to exit branches, which needs jumps.
Unifying branches and jumps also simplifies the tier 2 interpreter *and* the JIT compiler.
So, let's merge jumps and branches, as follows:
* Convert most, if not all jumps in the tier 2 code to `DEOPT_IF`s in `bytecodes.c`
* Convert deopts into jumps internally to allow work like `SET_IP` to be sunk into exit branches.
See also https://github.com/python/cpython/issues/111610
<!-- gh-linked-prs -->
### Linked PRs
* gh-111849
* gh-112045
* gh-112065
* gh-112274
<!-- /gh-linked-prs -->
| 06efb602645226f108e02bde716f9061f1ec4cdd | 11e83488c5a4a6e75a4f363a2e1a45574fd53573 |
python/cpython | python__cpython-111850 | # Far too many failed optimization attempts in the tier 2 optimizer.
The [stats show](https://github.com/faster-cpython/benchmarking-public/blob/main/results/bm-20231102-3.13.0a1%2B-8794817/bm-20231102-azure-x86_64-mdboom-collect_tier2_stats-3.13.0a1%2B-8794817-pystats.md) that there are millions of optimization attempts, over 99.8% of which fail.
We should do something about this.
For tier 1 we use exponential backoff. We should do the same for tier 2.
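The tier 1 mechanism can be sketched in Python (a hypothetical model, not the actual counters): each failed attempt doubles the wait before the same site is tried again, so a site whose optimization always fails is attempted O(log n) times instead of n times.

```python
class BackoffCounter:
    """Sketch of exponential backoff for optimization attempts: after
    each failure, double the number of executions to skip (up to a cap)
    before the same spot is tried again."""

    def __init__(self, initial=1, cap=4096):
        self.backoff = initial
        self.cap = cap
        self.counter = initial

    def should_attempt(self):
        self.counter -= 1
        return self.counter <= 0

    def attempt_failed(self):
        self.backoff = min(self.backoff * 2, self.cap)
        self.counter = self.backoff

# A hot spot executed 1000 times whose optimization always fails:
attempts = 0
counter = BackoffCounter()
for _ in range(1000):
    if counter.should_attempt():
        attempts += 1
        counter.attempt_failed()
```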
<!-- gh-linked-prs -->
### Linked PRs
* gh-111850
<!-- /gh-linked-prs -->
| 34a03e951b027902d993c7066ba8e6b7e92cb2a9 | 25c49564880e6868e4c76602f9f1650f0bc71c75 |
python/cpython | python__cpython-111842 | # os.putenv() on Windows truncates value on an embedded NUL
`os.putenv()` on Windows truncates a value containing an embedded null character.
```pycon
>>> import os
>>> os.putenv('xyz', 'abc\0def')
>>> os.system('''python.bat -c "import os; print(repr(os.environ['xyz']))"''')
Running Debug|x64 interpreter...
'abc'
0
```
`os.putenv()` and `os.unsetenv()` also truncate a name. It leads to OSError because it truncates before "=".
```pycon
>>> os.putenv('abc\0def', 'xyz')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
os.putenv('abc\0def', 'xyz')
OSError: [Errno 22] Invalid argument
>>> os.unsetenv('abc\0def')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
os.unsetenv('abc\0def')
OSError: [Errno 22] Invalid argument
```
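A sketch of the validation these functions could perform instead of truncating (the `check_env_str` helper is hypothetical, not the actual posixmodule fix):

```python
def check_env_str(kind, value):
    """Reject an environment variable name or value containing an
    embedded null up front, rather than letting the Windows C API
    silently truncate the string at the first NUL."""
    if "\x00" in value:
        raise ValueError(f"embedded null character in {kind}")
    return value
```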
<!-- gh-linked-prs -->
### Linked PRs
* gh-111842
* gh-111966
* gh-111967
<!-- /gh-linked-prs -->
| 0b06d2482d77e02c5d40e221f6046c9c355458b2 | 2e7f0700800c0337a0b1b9471fcef410e3158250 |
python/cpython | python__cpython-111852 | # Add `seekable` method for `mmap.mmap`
# Bug report
### Bug description:
`mmap.mmap` has a `seek()` method, but no `seekable()` method. Because of this, it cannot be used as an argument for ZipFile, which requires a file-like object with `seekable()`. A small patch is enough to fix this:
```python
class SeekableMmap(mmap.mmap):
def seekable(self):
return True
```
Now SeekableMmap can be passed to ZipFile without issue. It's easy to fix, but I can't think of any reasons why mmap shouldn't have a seekable method.
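The subclass can be exercised end to end. This sketch (assuming a real file on disk, since `mmap` needs a file descriptor) round-trips a zip archive through the memory map:

```python
import mmap
import os
import tempfile
import zipfile

class SeekableMmap(mmap.mmap):
    def seekable(self):
        return True

# Write a small zip to disk, memory-map it, and read a member back
# through ZipFile via the patched mmap object.
with tempfile.TemporaryDirectory() as tmp:
    path = os.path.join(tmp, "demo.zip")
    with zipfile.ZipFile(path, "w") as zf:
        zf.writestr("hello.txt", "hello world")
    with open(path, "rb") as f:
        with SeekableMmap(f.fileno(), 0, access=mmap.ACCESS_READ) as m:
            with zipfile.ZipFile(m) as zf:
                payload = zf.read("hello.txt")
```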
### CPython versions tested on:
3.8, 3.11
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-111852
* gh-111865
<!-- /gh-linked-prs -->
| 6046aec377311efb89c4438f7cf412e2c6568ba1 | 30ec968befde2313f66af4754d410dc5a080a20a |
python/cpython | python__cpython-112130 | # `test_recursive_repr` from `test_xml_etree` triggers stack overflow protections for a debug build under WASI
Detected with WASI-SDK 20 and wasmtime 14.
<!-- gh-linked-prs -->
### Linked PRs
* gh-112130
* gh-112131
* gh-112132
<!-- /gh-linked-prs -->
| 7218bac8c84115a8e9a18a4a8f3146235068facb | d9fd33a869d2be769ff596530f63ee099465b037 |
python/cpython | python__cpython-112197 | # `test_repr_deep` from `test_userlist` triggers stack overflow protections for a debug build under WASI
Detected with WASI-SDK 20 and wasmtime 14.
<!-- gh-linked-prs -->
### Linked PRs
* gh-112197
<!-- /gh-linked-prs -->
| 43b1c33204d125e256f7a0c3086ba547b71a105e | f489ace9e7aaa29b5a83b0a74a247abc8b032910 |
python/cpython | python__cpython-112229 | # `test_repr_deep` from `test_userdict` triggers stack overflow protections for a debug build under WASI
Detected with WASI-SDK 20 and wasmtime 14.
<!-- gh-linked-prs -->
### Linked PRs
* gh-112229
<!-- /gh-linked-prs -->
| 14e539f0977aaf2768c58f1dcbbbab5ad0205ec5 | 10e1a0c91613908757a5b97602834defbe575ab0 |
python/cpython | python__cpython-111819 | # `test_forward_recursion_actually` from `test_typing` triggers stack overflow protections for a debug build under WASI
Detected with WASI-SDK 20 and wasmtime 14.
<!-- gh-linked-prs -->
### Linked PRs
* gh-111819
* gh-112223
<!-- /gh-linked-prs -->
| 0e83d941bea921380ce4a1494121f3ec30ae652e | 2f9cb7e095370e38bde58c79c8a8ea7705eefdc2 |
python/cpython | python__cpython-112225 | # `test_error_on_parser_stack_overflow` from `test_syntax` triggers out-of-bounds memory protection for a debug build under WASI
Detected with WASI-SDK 20 and wasmtime 14.
<!-- gh-linked-prs -->
### Linked PRs
* gh-112225
<!-- /gh-linked-prs -->
| 56e59a49ae4d9f518c5cc918aefe7eeee11736b4 | ce1096f974d3158a92e050f9226700775b8db398 |
python/cpython | python__cpython-111830 | # `test_recursion` from `test_richcmp` triggers stack overflow protections for a debug build under WASI
Detected with WASI-SDK 20 and wasmtime 14.
<!-- gh-linked-prs -->
### Linked PRs
* gh-111830
* gh-111831
* gh-111832
<!-- /gh-linked-prs -->
| f115a55f0e455a4b43a1da9fd838a60a101f182a | 0e83d941bea921380ce4a1494121f3ec30ae652e |
python/cpython | python__cpython-111869 | # `test_posix_fallocate()` fails under `test_posix` under WASI
```
======================================================================
ERROR: test_posix_fallocate (__main__.PosixTester.test_posix_fallocate)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/Lib/test/test_posix.py", line 405, in test_posix_fallocate
posix.posix_fallocate(fd, 0, 10)
OSError: [Errno 58] Not supported
----------------------------------------------------------------------
```
Detected with WASI-SDK 20 and wasmtime 14.
<!-- gh-linked-prs -->
### Linked PRs
* gh-111869
* gh-111919
* gh-111920
<!-- /gh-linked-prs -->
| 97c4c06d0d235aad00e5b6b10af8b8d68c889b9b | 31c90d5838e8d6e4c47d98500a34810ccb33a6d4 |
python/cpython | python__cpython-113996 | # `test_bad_getatttr` from `test_pickle` triggers stack exhaustion protections in a debug build under WASI
Detected using WASI-SDK 20 and wasmtime 14.
<!-- gh-linked-prs -->
### Linked PRs
* gh-113996
<!-- /gh-linked-prs -->
| 8aa126354d93d7c928fb35b842cb3a4bd6e1881f | b44b9d99004f096619c962a8b42a19322f6a441b |
python/cpython | python__cpython-113997 | # `test_infinitely_many_bases` from `test_isinstance` triggers stack overflow protection for a debug build under WASI
Detected with WASI-SDK 20 and wasmtime 14.
<!-- gh-linked-prs -->
### Linked PRs
* gh-113997
<!-- /gh-linked-prs -->
| 3c19ee0422e9b9f1582fb74931c174a84583bca0 | a47353d587b78bb5501b21343d9bca739c49a43a |
python/cpython | python__cpython-112150 | # `test_recursive_repr` from `test_io` triggers stack overflow protections in a debug build under WASI
Detected with WASI-SDK 20 and wasmtime 14.
<!-- gh-linked-prs -->
### Linked PRs
* gh-112150
<!-- /gh-linked-prs -->
| 974847be443e9798615e197ec6642e546a71a6b0 | 762eb58220992d1ab809b9a281d47c0cd48a5aec |
python/cpython | python__cpython-112181 | # `testRecursiveRepr` from `test_fileio` triggers stack overflow protections in a debug build under WASI
Detected using WASI-SDK 20 and wasmtime 14.
<!-- gh-linked-prs -->
### Linked PRs
* gh-112181
<!-- /gh-linked-prs -->
| f92ea63f6f2c37917fc095a1bc036a8b0c45a084 | ceefa0b0795b5cc7adef89bd036ce843b5c78d3e |
python/cpython | python__cpython-112124 | # `test_super_deep()` for `test_call` triggers a stack overflow protection under a debug build for WASI
Tested using WASI-SDK 20 and wasmtime 14.
<!-- gh-linked-prs -->
### Linked PRs
* gh-112124
* gh-114010
<!-- /gh-linked-prs -->
| bd89bca9e2a57779c251ee6fadf4887acb364824 | 81ab0e8a4add53035c87b040afda6d554cace528 |
python/cpython | python__cpython-111794 | # PGO build broken on Windows
# Bug report
### Bug description:
Building with --pgo on Windows seems to be broken on main (853b4b5).
My *hunch*, since this is failing in `ceval.c`, is that the recent [merge of the Tier 1 and Tier 2 interpreters](https://github.com/python/cpython/pull/111428) may have made the interpreter loop function too large for MSVC to handle.
Build command:
```
PCbuild\build.bat --pgo -c Release
```
Error:
```
Merging C:\actions-runner\_work\benchmarking\benchmarking\cpython\PCbuild\amd64\python313!1.pgc
C:\actions-runner\_work\benchmarking\benchmarking\cpython\PCbuild\amd64\python313!1.pgc: Used 25.9% (15643384 / 60370944) of total space reserved. 0.0% of the counts were dropped due to overflow.
Reading PGD file 1: C:\actions-runner\_work\benchmarking\benchmarking\cpython\PCbuild\amd64\python313.pgd
Creating library C:\actions-runner\_work\benchmarking\benchmarking\cpython\PCbuild\amd64\python313.lib and object C:\actions-runner\_work\benchmarking\benchmarking\cpython\PCbuild\amd64\python313.exp
Generating code
0 of 0 ( 0.0%) original invalid call sites were matched.
0 new call sites were added.
257 of 14721 ( 1.75%) profiled functions will be compiled for speed, and the rest of the functions will be compiled for size
C:\actions-runner\_work\benchmarking\benchmarking\cpython\Python\ceval.c(669): fatal error C1001: Internal compiler error. [C:\actions-runner\_work\benchmarking\benchmarking\cpython\PCbuild\pythoncore.vcxproj]
(compiler file 'D:\a\_work\1\s\src\vctools\Compiler\Utc\src\p2\main.c', line 224)
To work around this problem, try simplifying or changing the program near the locations listed above.
If possible please provide a repro here: https://developercommunity.visualstudio.com/
Please choose the Technical Support command on the Visual C++
Help menu, or open the Technical Support help file for more information
link!InvokeCompilerPass()+0x10e636
link!InvokeCompilerPass()+0x10e636
link!InvokeCompilerPass()+0x10e373
link!InvokeCompilerPass()+0x10b310
link!InvokeCompilerPass()+0x10b215
link!InvokeCompilerPass()+0x102cea
Build FAILED.
```
[Complete build log](https://gist.github.com/mdboom/321507e5a51c605f71c39c6850f6a4f0)
We don't currently have a buildbot that builds with --pgo on Windows -- probably something to fix as well.
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Windows
### Linked issue
[Microsoft Developer Community MSVC bug report](https://developercommunity.visualstudio.com/t/C1001-when-compiling-very-large-function/10509459)
<!-- gh-linked-prs -->
### Linked PRs
* gh-111794
* gh-112289
<!-- /gh-linked-prs -->
| bc12f791127896cd3538fac4465e3190447b2257 | 6f09f69b7f85962f66d10637c3325bbb2b2d9853 |
python/cpython | python__cpython-111778 | # Assertion errors in Py_DEBUG mode when forgetting to untrack GC objects
This has been distilled from a crash we found at Google while upgrading to Python 3.11. It's a complicated setup and I have only been able to partially reproduce the issue, but I believe my solution is correct.
When extension types set `Py_TPFLAGS_HAVE_GC` they _should_ call `PyObject_GC_Untrack()` before starting destruction of the object so that the GC module doesn't see partially destructed objects if it happens to trigger during it. Python 3.11 started warning when types didn't do this correctly: https://github.com/python/cpython/blob/d78c872e0d680f6e63afa6661df5021775a03690/Modules/gcmodule.c#L2398-L2407
There's two subtle issues here: one is that `Py_DECREF()`, and thus object deallocation, can be called with a pending exception (it's not unreasonable or even uncommon to do so). The `PyErr_WarnExplicitFormat` call, however, may (and likely will) call other Python code, which will trigger an assertion that no exception is pending, usually this one: https://github.com/python/cpython/blob/d78c872e0d680f6e63afa6661df5021775a03690/Objects/typeobject.c#L4784-L4785
The code should store any pending exception before calling anything that might care about the exception.
The other subtle issue is that the code warns _before untracking the object_. The warning (which can call arbitrary Python code via `warnings.showwarning`) can easily trigger a GC run, which then turns up seeing the still-erroneously-tracked object in an invalid state. The act of warning about a potential issue is triggering the issue (but only with `Py_DEBUG` defined). As far as I can see there's no reason not to untrack _before_ raising the warning, which would avoid the problem. I have a PR that fixes both these issues.
<!-- gh-linked-prs -->
### Linked PRs
* gh-111778
* gh-111989
* gh-111990
<!-- /gh-linked-prs -->
| ce6a533c4bf1afa3775dfcaee5fc7d5c15a4af8c | 21615f77b5a580e83589abae618dbe7c298700e2 |
python/cpython | python__cpython-111773 | # Specialization of member descriptors only handles `Py_T_OBJECT_EX` but `_Py_T_OBJECT` seems more common
Despite `_Py_T_OBJECT` being marked as deprecated in `descrobject.h` it is commonly used in the standard library.
`LOAD_ATTR_SLOT` de-optimizes if the stored value is `NULL`. If the stored value is not `NULL`, both `_Py_T_OBJECT` and `Py_T_OBJECT_EX` act the same, so we can reuse `LOAD_ATTR_SLOT` for `_Py_T_OBJECT`.
For storing attributes, `_Py_T_OBJECT` and `Py_T_OBJECT_EX` are exactly the same.
I don't know why `_Py_T_OBJECT` is deprecated, the semantics seem quite reasonable.
<!-- gh-linked-prs -->
### Linked PRs
* gh-111773
<!-- /gh-linked-prs -->
| a7b0f63cdb83c0652fab19bbbc8547dfe309b1d2 | d78c872e0d680f6e63afa6661df5021775a03690 |
python/cpython | python__cpython-111770 | # Visibility of wsgiref.util.is_hop_by_hop
# Documentation
`wsgiref.util.is_hop_by_hop` is available inside the official documentation (https://docs.python.org/3/library/wsgiref.html#wsgiref.util.is_hop_by_hop), but https://github.com/python/cpython/blob/ba8aa1fd3735aa3dd13a36ad8f059a422d25ff37/Lib/wsgiref/util.py#L5-L8 does not expose the method. This leads to IDE warnings when actually using it.
Is this method intended for public usage and just missing from `__all__` - or should this be considered an internal method instead?
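For context, the helper is a simple case-insensitive membership test over the RFC 2616 hop-by-hop header names, and it is importable today despite being missing from `__all__`:

```python
from wsgiref.util import is_hop_by_hop

# Hop-by-hop headers must not be forwarded by proxies/gateways.
assert is_hop_by_hop("Connection") is True
assert is_hop_by_hop("keep-alive") is True       # matching is case-insensitive
assert is_hop_by_hop("Content-Type") is False    # end-to-end header
```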
<!-- gh-linked-prs -->
### Linked PRs
* gh-111770
<!-- /gh-linked-prs -->
| f88caab467eb57cfe293cdf9fb7cce29b24fda7f | fe3fd2c333ac080dba1fa64452c2f62098107731 |
python/cpython | python__cpython-111766 | # Move old C API tests for floats to Lib/test/test_capi/
After #111624, we have C API tests for PyFloat_* both in Lib/test/test_float.py (pack/unpack tests) and in Lib/test/test_capi/test_float.py. I think we should move the remaining tests to Lib/test/test_capi/.
<!-- gh-linked-prs -->
### Linked PRs
* gh-111766
* gh-111818
<!-- /gh-linked-prs -->
| a077b2fbb88f5192bb47e514334f760bf08d0295 | 931f4438c92ec0eb2aa894092a91749f1d5bd216 |
python/cpython | python__cpython-135578 | # CI should run Undefined Behavior Sanitizer (UBSAN), as already done for ASAN
# Feature or enhancement
### Proposal:
cpython's CI should run Undefined Behavior Sanitizer (UBSAN), like it already does for Address Sanitizer (ASAN).
It already has support in configure for this with `./configure --with-undefined-behavior-sanitizer`.
This is important for portability, future proofing (keeping CPython working with a range of compilers), and avoiding confusing bug reports later when UB manifests.
### Has this already been discussed elsewhere?
No response given
### Links to previous discussion of this feature:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-135578
* gh-136820
* gh-136883
* gh-137029
<!-- /gh-linked-prs -->
| 7a20c72cb612653fa46c41c3a969aefa2b52738b | 140731ff671395fb7a869c2784429c14dc83fb27 |
python/cpython | python__cpython-111748 | # DOC: fix link redirect to "Documentation Translations"
# Documentation
On the [Dealing with Bugs](https://docs.python.org/3.13/bugs.html) page there is a link to [Documentation Translations](https://devguide.python.org/documenting/#translating) which has been moved from https://devguide.python.org/documenting/#translating to https://devguide.python.org/documentation/translating/
<!-- gh-linked-prs -->
### Linked PRs
* gh-111748
* gh-111749
* gh-111750
<!-- /gh-linked-prs -->
| 72e27a67b97993f277e69c9dafb063007ba79adf | 853b4b549dab445c1b54610e118fefaeba3f35e2 |
python/cpython | python__cpython-111834 | # `breakpoint` does not enter pdb until the "next event" happens
# Bug report
### Bug description:
Consider the following code:
```python
try:
raise ValueError()
except Exception as exc:
breakpoint()
```
You can't stop in the `except` block, you'll get:
```
--Return--
> /home/gaogaotiantian/programs/mycpython/example.py(4)<module>()->None
-> breakpoint()
(Pdb)
```
which is pretty misleading because you can't access anything in the `except` block:
```
(Pdb) import sys
(Pdb) sys.exc_info()
(None, None, None)
(Pdb) p exc
*** NameError: name 'exc' is not defined
(Pdb)
```
If you put a `pass` in the main function:
```python
try:
raise ValueError()
except Exception as exc:
breakpoint()
pass
```
At least the result is less misleading, even though it's still not what I want - I want to stop in the `except` block.
```
> /home/gaogaotiantian/programs/mycpython/example.py(5)<module>()
-> pass
```
The "correct" way to do it is to put another line in the block
```python
try:
raise ValueError()
except Exception as exc:
breakpoint()
pass
```
We could make it stop at the `breakpoint()` line itself, but that would be a breaking behavior change; I don't think we want that. So maybe we should at least document this in pdb so users get less confused?
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-111834
* gh-118579
<!-- /gh-linked-prs -->
| f34e965e52b9bdf157b829371870edfde45b80bf | f6b5d3bdc83f8daca05e8b379190587a236585ef |
python/cpython | python__cpython-111742 | # Support webp formats in mimetypes as standard types
# Feature or enhancement
### Proposal:
WEBP is not recognised as a standard type, which forces the user to pass `strict=False`, as in `mimetypes.guess_type("foobar.webp", strict=False)`, for it to be recognised. This is because it was previously not recognised by IANA as an official type.
Since the introduction of `webp` support in the `mimetypes` module in #89802, it has officially been accepted by IANA and thus is no longer a "non-standard format".
Refs:
- https://bugs.chromium.org/p/webp/issues/detail?id=448#c46
- https://www.iana.org/assignments/media-types/media-types.xhtml#image
- https://datatracker.ietf.org/doc/draft-zern-webp/12/
### Has this already been discussed elsewhere?
This is a minor feature, which does not need previous discussion elsewhere
### Links to previous discussion of this feature:
https://bugs.python.org/issue38902
https://bugs.python.org/issue45639
https://github.com/python/cpython/issues/89802
<!-- gh-linked-prs -->
### Linked PRs
* gh-111742
<!-- /gh-linked-prs -->
| b905fad83819ec9102ecfb97e3d8ab0aaddd9784 | 765b9ce9fb357bdb79a50ce51207c827fbd13dd8 |
python/cpython | python__cpython-111734 | # Outdated example code of typing.Concatenate
# Documentation
https://docs.python.org/3.12/library/typing.html#typing.Concatenate
Changed in version 3.12: Added support new generic syntax (pep695)
```python
from collections.abc import Callable
from threading import Lock
from typing import Concatenate, ParamSpec, TypeVar
P = ParamSpec('P')
R = TypeVar('R')
# Use this lock to ensure that only one thread is executing a function
# at any time.
my_lock = Lock()
def with_lock(f: Callable[Concatenate[Lock, P], R]) -> Callable[P, R]:
'''A type-safe decorator which provides a lock.'''
def inner(*args: P.args, **kwargs: P.kwargs) -> R:
# Provide the lock as the first argument.
return f(my_lock, *args, **kwargs)
return inner
@with_lock
def sum_threadsafe(lock: Lock, numbers: list[float]) -> float:
'''Add a list of numbers together in a thread-safe manner.'''
with lock:
return sum(numbers)
# We don't need to pass in the lock ourselves thanks to the decorator.
sum_threadsafe([1.1, 2.2, 3.3])
```
can be
```python
from collections.abc import Callable
from threading import Lock
from typing import Concatenate
# Use this lock to ensure that only one thread is executing a function
# at any time.
my_lock = Lock()
def with_lock[**P, R](f: Callable[Concatenate[Lock, P], R]) -> Callable[P, R]:
'''A type-safe decorator which provides a lock.'''
def inner(*args: P.args, **kwargs: P.kwargs) -> R:
# Provide the lock as the first argument.
return f(my_lock, *args, **kwargs)
return inner
@with_lock
def sum_threadsafe(lock: Lock, numbers: list[float]) -> float:
'''Add a list of numbers together in a thread-safe manner.'''
with lock:
return sum(numbers)
# We don't need to pass in the lock ourselves thanks to the decorator.
sum_threadsafe([1.1, 2.2, 3.3])
```
This is a small change, and other parts of the document have also been switched to the new generic syntax, so I think this change is reasonable.
<!-- gh-linked-prs -->
### Linked PRs
* gh-111734
* gh-111814
<!-- /gh-linked-prs -->
| c3e19c3a62e82b9e77563e934059895b6230de6e | 3e99c9cbf67225ec1d3bb6af812e883f19ef53de |
python/cpython | python__cpython-111730 | # `Doc/library/sqlite3.rst` has multiple doctest warnings
# Bug report
Link: https://github.com/python/cpython/actions/runs/6753865568/job/18360794210?pr=111723#step:8:82
```pytb
Document: library/sqlite3
-------------------------
Exception ignored in: <sqlite3.Connection object at 0x7f1684307130>
Traceback (most recent call last):
File "/home/runner/work/cpython/cpython/Lib/functools.py", line 55, in update_wrapper
pass
ResourceWarning: unclosed database in <sqlite3.Connection object at 0x7f1684307130>
Exception ignored in: <sqlite3.Connection object at 0x7f1684307ac0>
Traceback (most recent call last):
File "/home/runner/work/cpython/cpython/Lib/functools.py", line 55, in update_wrapper
pass
ResourceWarning: unclosed database in <sqlite3.Connection object at 0x7f1684307ac0>
Exception ignored in: <sqlite3.Connection object at 0x7f16891eb680>
Traceback (most recent call last):
File "/home/runner/work/cpython/cpython/Lib/functools.py", line 55, in update_wrapper
pass
ResourceWarning: unclosed database in <sqlite3.Connection object at 0x7f16891eb680>
Exception ignored in: <sqlite3.Connection object at 0x7f16891eb240>
Traceback (most recent call last):
File "/home/runner/work/cpython/cpython/Lib/functools.py", line 55, in update_wrapper
pass
ResourceWarning: unclosed database in <sqlite3.Connection object at 0x7f16891eb240>
Exception ignored in: <sqlite3.Connection object at 0x7f16891ea9c0>
Traceback (most recent call last):
File "/home/runner/work/cpython/cpython/Lib/functools.py", line 55, in update_wrapper
pass
ResourceWarning: unclosed database in <sqlite3.Connection object at 0x7f16891ea9c0>
Exception ignored in: <sqlite3.Connection object at 0x7f16891eb350>
Traceback (most recent call last):
File "/home/runner/work/cpython/cpython/Lib/functools.py", line 55, in update_wrapper
pass
ResourceWarning: unclosed database in <sqlite3.Connection object at 0x7f16891eb350>
Exception ignored in: <sqlite3.Connection object at 0x7f16891eb130>
Traceback (most recent call last):
File "/home/runner/work/cpython/cpython/Lib/functools.py", line 55, in update_wrapper
pass
ResourceWarning: unclosed database in <sqlite3.Connection object at 0x7f16891eb130>
Exception ignored in: <sqlite3.Connection object at 0x7f16891ea8b0>
Traceback (most recent call last):
File "/home/runner/work/cpython/cpython/Lib/functools.py", line 55, in update_wrapper
pass
ResourceWarning: unclosed database in <sqlite3.Connection object at 0x7f16891ea8b0>
Exception ignored in: <sqlite3.Connection object at 0x7f16891e9bf0>
Traceback (most recent call last):
File "/home/runner/work/cpython/cpython/Lib/functools.py", line 55, in update_wrapper
pass
ResourceWarning: unclosed database in <sqlite3.Connection object at 0x7f16891e9bf0>
4 items passed all tests:
71 tests in default
3 tests in sqlite3.cursor
3 tests in sqlite3.limits
8 tests in sqlite3.trace
85 tests in 4 items.
85 passed and 0 failed.
Test passed.
Exception ignored in sys.unraisablehook: <function debug at 0x7f167bb73c50>
Traceback (most recent call last):
File "<doctest sqlite3.trace[4]>", line 2, in debug
AttributeError: 'sqlite3.Connection' object has no attribute '__name__'
Exception ignored in sys.unraisablehook: <function debug at 0x7f167bb73c50>
Traceback (most recent call last):
File "<doctest sqlite3.trace[4]>", line 2, in debug
AttributeError: 'sqlite3.Connection' object has no attribute '__name__'
Exception ignored in sys.unraisablehook: <function debug at 0x7f167bb73c50>
Traceback (most recent call last):
File "<doctest sqlite3.trace[4]>", line 2, in debug
AttributeError: 'sqlite3.Connection' object has no attribute '__name__'
Exception ignored in sys.unraisablehook: <function debug at 0x7f167bb73c50>
Traceback (most recent call last):
File "<doctest sqlite3.trace[4]>", line 2, in debug
AttributeError: 'sqlite3.Connection' object has no attribute '__name__'
Exception ignored in sys.unraisablehook: <function debug at 0x7f167bb73c50>
Traceback (most recent call last):
File "<doctest sqlite3.trace[4]>", line 2, in debug
AttributeError: 'sqlite3.Connection' object has no attribute '__name__'
```
I will examine possible fixes and send a PR.
<!-- gh-linked-prs -->
### Linked PRs
* gh-111730
* gh-117622
* gh-117623
* gh-117625
* gh-117630
<!-- /gh-linked-prs -->
| a7702663e3f7efc81f0b547f1f13ba64c4e5addc | e338e1a4ec5e43a02447f4ec80320d7fc12b3ed4 |
python/cpython | python__cpython-111725 | # `Doc/howto/descriptor.rst` has a warning during doctest run
# Bug report
Warning:
```
Document: howto/descriptor
--------------------------
1 items passed all tests:
173 tests in default
173 tests in 1 items.
173 passed and 0 failed.
Test passed.
Exception ignored in: <sqlite3.Connection object at 0x7f8ea14d0d10>
Traceback (most recent call last):
File "/home/runner/work/cpython/cpython/Doc/venv/lib/python3.13/site-packages/docutils/nodes.py", line 387, in __new__
def __new__(cls, data, rawsource=None):
ResourceWarning: unclosed database in <sqlite3.Connection object at 0x7f8ea14d0d10>
```
I propose adding:
```
.. testcleanup::
conn.close()
```
To silence it.
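The cleanup just closes the connection the doctest opened; the pattern it enforces is ordinary explicit closing (`conn` here mirrors the name used in the howto's doctest):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
try:
    assert conn.execute("SELECT 1").fetchone() == (1,)
finally:
    conn.close()  # closing explicitly avoids the ResourceWarning at GC time
```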
<!-- gh-linked-prs -->
### Linked PRs
* gh-111725
* gh-111727
* gh-111728
<!-- /gh-linked-prs -->
| f48e669504ce53040a04e0181064c11741a87817 | ac01e2243a1104b2154c0d1bdbc9f8d5b3ada778 |
python/cpython | python__cpython-111720 | # Input validations for `alias` command in `pdb`
# Feature or enhancement
### Proposal:
This is kind of between a bug and a feature. `alias`, admittedly a rarely used command, is very unpolished.
The documentation claimed
> Replaceable parameters can be indicated by %1, %2, and so on
But that's not the truth: `%10` will not work because it contains `%1` and we are using pure string replacement. I doubt anyone would need `%10`, so I just updated the document to imply that we only support up to `%9`.
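A small sketch of why (this mimics the pure string replacement pdb does for alias expansion; the `line`/`args` names are illustrative, not pdb's actual code):

```python
# The alias body may use %1..%9; expansion replaces each %N in turn,
# so a "%10" token is consumed as "%1" followed by a literal "0".
line = "p %10"
args = ["spam"]  # the single argument the user passed to the alias
for i, arg in enumerate(args, start=1):
    line = line.replace("%" + str(i), arg)
print(line)  # expands to "p spam0", not a reference to a 10th argument
```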
All the replaceable parameters should be consecutive as well, if `%*` does not exist. I can't find a reason why an alias would need parameters 1 and 3, but not 2. (Feel free to show a useful example if you can think of one.) We should catch this because it's almost certainly a mistake made by users.
When using an alias, we should match all the replaceable parameters against the user's input arguments if `%*` does not exist; otherwise something has to be wrong and we should warn the user.
I ran into this today when I was trying to use an alias for some testing:
```
(Pdb) alias myp p
(Pdb) myp 1
*** SyntaxError: invalid syntax
(Pdb)
```
Really confusing, huh? What if it were:
```
(Pdb) alias myp p
(Pdb) myp 1
*** Too many arguments for alias 'myp'
(Pdb)
```
It'll probably be easier to track the logic and realize that there are no replaceable parameters defined in `myp` and that it should be `alias myp %1`.
It's not the most intuitive mechanism, but it has been there for a long time. Making the error message better will hopefully leave users less confused.
### Has this already been discussed elsewhere?
This is a minor feature, which does not need previous discussion elsewhere
### Links to previous discussion of this feature:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-111720
<!-- /gh-linked-prs -->
| 853b4b549dab445c1b54610e118fefaeba3f35e2 | a6c1c04d4d2339f0094422974ae3f26f8c7c8565 |
python/cpython | python__cpython-113609 | # What’s New In Python 3.12: bad highlighting for Py_NewInterpreterFromConfig() example
In the "[PEP 684: A Per-Interpreter GIL](https://docs.python.org/3/whatsnew/3.12.html#pep-684-a-per-interpreter-gil)" section of "What’s New In Python 3.12", the C code snippet is highlighted as if it was Python.
<!-- gh-linked-prs -->
### Linked PRs
* gh-113609
* gh-113610
<!-- /gh-linked-prs -->
| 9ce6c01e38a2fc7a5ce832f1f8c8d9097132556d | 2849cbb53afc8c6a4465f1b3490c67c2455caf6f |
python/cpython | python__cpython-111707 | # [C API] Py_mod_multiple_interpreters Added to Limited C API Without Versioning
# Bug report
(See https://github.com/python/cpython/issues/110968#issuecomment-1766504893.)
When I added `Py_mod_multiple_interpreters`[^1] to Include/moduleobject.h, I forgot to restrict the limited API version in which it should be there.
[^1]: I added `Py_mod_multiple_interpreters` in [gh-104148](https://github.com/python/cpython/pull/104148) (1c420e138fd828895b6bd3c44ef99156e8796095), along with some pre-defined values (e.g. `Py_MOD_MULTIPLE_INTERPRETERS_SUPPORTED`). The direct motivation was to make an implicit contract in [PEP 489](https://peps.python.org/pep-0489/) (i.e. "must support subinterpreters") explicit. (See [gh-104108](https://github.com/python/cpython/issues/104108).) The indirect motivation was [PEP 684](https://peps.python.org/pep-0684/).
The fix should be something similar to what we did in [gh-110969](https://github.com/python/cpython/pull/110969), combined with [gh-111584](https://github.com/python/cpython/pull/111584). We will need to backport the change to 3.12.
Basically, the change would be something like:
```diff
- #define Py_mod_multiple_interpreters 3
+ #if !defined(Py_LIMITED_API) || Py_LIMITED_API+0 >= 0x030c0000
+ # define Py_mod_multiple_interpreters 3
+ #endif
```
FYI, gh-110968 already dealt with the same issue for the pre-defined slot values (e.g. `Py_MOD_MULTIPLE_INTERPRETERS_SUPPORTED`).
CC @encukou, @vstinner
<!-- gh-linked-prs -->
### Linked PRs
* gh-111707
* gh-111787
<!-- /gh-linked-prs -->
| 836e0a75d565ecb7e2485fee88dbe67e649a1d5f | c5063fd62a3fd3c5c2af33fc17c60fabe54282fe |
python/cpython | python__cpython-111694 | # asyncio.Condition.wait() sometimes propagates incorrect asyncio.CancelledError
# Bug report
### Bug description:
`asyncio.Condition.wait()` contains an edge case where a `CancelledError` on lock re-aquire is overwritten with a fresh `CancelledError` instance.
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Windows
<!-- gh-linked-prs -->
### Linked PRs
* gh-111694
<!-- /gh-linked-prs -->
| 52161781a6134a4b846500ad68004fe9027a233c | c6ca562138a0916192f9c3100cae678c616aed29 |
python/cpython | python__cpython-111706 | # New warning: "‘_session_is_active’ defined but not used [-Wunused-function]"
# Bug report
<img width="536" alt="Screenshot 2023-11-03 at 11 51 48" src="https://github.com/python/cpython/assets/4660275/98a21700-9d5e-44b3-88f8-a7204bb3e6b6">
Introduced in https://github.com/python/cpython/commit/93206d19a35106f64a1aef5fa25eb18966970534
CC @ericsnowcurrently
<!-- gh-linked-prs -->
### Linked PRs
* gh-111706
<!-- /gh-linked-prs -->
| df9815eb11b58dfaae02a8e3fb85ae8aa725dc17 | 20cfab903db70cf952128bc6b606e3ec4a216498 |
python/cpython | python__cpython-111667 | # Speed up `BaseExceptionGroup.{derive,subgroup,split}` by ~20%
There's a simple change that we can make to increase the performance of these three methods. Right now they are defined as:
https://github.com/python/cpython/blob/f4b5588bde656d8ad048b66a0be4cb5131f0d83f/Objects/exceptions.c#L1491-L1493
However, they only ever use one argument: https://github.com/python/cpython/blob/f4b5588bde656d8ad048b66a0be4cb5131f0d83f/Objects/exceptions.c#L883-L885
So, it would be much faster to use `METH_O` instead. I did these measurements, before and after:
```
» pyperf timeit -s 'e = BaseExceptionGroup("Message", [ValueError(1)] * 10); i = [TypeError(2)] * 10' 'e.derive(i)'
.....................
Mean +- std dev: 353 ns +- 2 ns
» pyperf timeit -s 'e = BaseExceptionGroup("Message", [ValueError(1)] * 10); i = [TypeError(2)] * 10' 'e.derive(i)'
.....................
Mean +- std dev: 319 ns +- 4 ns
```
```
» pyperf timeit -s 'e = BaseExceptionGroup("Message", [ValueError(n) if n % 2 == 0 else TypeError(n) for n in range(100)]); f = lambda e: True' 'e.split(f)'
.....................
Mean +- std dev: 180 ns +- 1 ns
» pyperf timeit -s 'e = BaseExceptionGroup("Message", [ValueError(n) if n % 2 == 0 else TypeError(n) for n in range(100)]); f = lambda e: True' 'e.split(f)'
.....................
Mean +- std dev: 151 ns +- 1 ns
```
```
» pyperf timeit -s 'e = BaseExceptionGroup("Message", [ValueError(n) if n % 2 == 0 else TypeError(n) for n in range(100)]); f = lambda e: True' 'e.subgroup(f)'
.....................
Mean +- std dev: 153 ns +- 0 ns
» pyperf timeit -s 'e = BaseExceptionGroup("Message", [ValueError(n) if n % 2 == 0 else TypeError(n) for n in range(100)]); f = lambda e: True' 'e.subgroup(f)'
.....................
Mean +- std dev: 121 ns +- 0 ns
```
<!-- gh-linked-prs -->
### Linked PRs
* gh-111667
<!-- /gh-linked-prs -->
| a28a3967ab9a189122f895d51d2551f7b3a273b0 | 890ef1b035457fe5d0b0faf27a703c74c33e0141 |
python/cpython | python__cpython-111664 | # Tier 2 pystats uop counts are missing
# Bug report
### Bug description:
7e135a48d619407cd4b2a6d80a4ce204b2f5f938 inadvertently removed the uop count pystats.
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-111664
<!-- /gh-linked-prs -->
| 25937e31883862c8f290bfb1f3b8ba0cd16675b3 | f4b5588bde656d8ad048b66a0be4cb5131f0d83f |
python/cpython | python__cpython-111661 | # socket.htons uses unnecessary METH_VARARGS
# Feature or enhancement
### Proposal:
Replacing the calling convention with `METH_O` will make it run faster.
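`htons` takes a single integer argument, which is exactly the call shape `METH_O` serves; e.g.:

```python
import socket
import sys

n = socket.htons(0x1234)
# On little-endian hosts the two bytes are swapped; on big-endian
# hosts network order equals host order, so it is a no-op.
expected = 0x3412 if sys.byteorder == "little" else 0x1234
assert n == expected
print(hex(n))
```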
### Has this already been discussed elsewhere?
This is a minor feature, which does not need previous discussion elsewhere
### Links to previous discussion of this feature:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-111661
<!-- /gh-linked-prs -->
| 8fbe5314cd6544bdcd50b3a57e0f8a9c6bf97374 | 06efb602645226f108e02bde716f9061f1ec4cdd |