| repo | instance_id | problem_statement | merge_commit | base_commit |
|---|---|---|---|---|
python/cpython | python__cpython-102870 | # Remove JUMP_IF_FALSE_OR_POP and JUMP_IF_TRUE_OR_POP
See https://github.com/faster-cpython/ideas/issues/567.
<!-- gh-linked-prs -->
### Linked PRs
* gh-102870
<!-- /gh-linked-prs -->
| 3468c768ce5e467799758ec70b840da08c3c1da9 | 04adf2df395ded81922c71360a5d66b597471e49 |
python/cpython | python__cpython-102855 | # PEP 701 – Syntactic formalization of f-strings
- [x] Changes in the C tokenizer
- [x] Categorize failing tests
- [x] Fix failing tests or modify/remove them as needed
- [x] Changes in Python tokenizer
<!-- gh-linked-prs -->
### Linked PRs
* gh-102855
* gh-103633
* gh-103634
* gh-104006
* gh-104323
* gh-104731
* gh-104824
* gh-104847
* gh-104852
* gh-104854
* gh-104861
* gh-104865
<!-- /gh-linked-prs -->
| 1ef61cf71a218c71860ff6aecf0fd51edb8b65dc | a6b07b5a345f7f54ee9f6d75e81d2fb55971b35c |
python/cpython | python__cpython-102842 | # Confusing traceback when `floordiv` or `mod` happens between `Fraction` and `complex` objects
```python
from fractions import Fraction
a = Fraction(1, 2)
b = a // 1j
>
Traceback (most recent call last):
File "C:\Users\KIRILL-1\CLionProjects\cpython\example.py", line 5, in <module>
b = a // 1j
~~^^~~~
File "C:\Users\KIRILL-1\AppData\Local\Programs\Python\Python311\Lib\fractions.py", line 363, in forward
return fallback_operator(complex(a), b)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: unsupported operand type(s) for //: 'complex' and 'complex'
```
This traceback really confused me.
However, it's easy to fix; we just need to change these lines:
https://github.com/python/cpython/blob/5e6661bce968173fa45b74fa2111098645ff609c/Lib/fractions.py#L620-L621
To this:
```python
elif isinstance(b, complex):
try:
return fallback_operator(complex(a), b)
except TypeError:
raise TypeError(
"unsupported operand type(s) for %r: %r and %r" % (
fallback_operator.__name__,
type(a).__qualname__,
type(b).__qualname__)
) from None
```
So, after that, we have a pretty nice traceback:
```python
Traceback (most recent call last):
File "C:\Users\KIRILL-1\CLionProjects\cpython\example.py", line 5, in <module>
b = a // 1j
~~^^~~~
File "C:\Users\KIRILL-1\CLionProjects\cpython\Lib\fractions.py", line 624, in forward
raise TypeError(
TypeError: unsupported operand type(s) for 'floordiv': 'Fraction' and 'complex'
```
But there is one problem: it would be nice to have the original `//` in the traceback instead of `floordiv`. I see no way to do that without creating a mapping of names. Theoretically we could add a special attribute for it in `operator`, but that should be a separate discussion.
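A minimal sketch of the name-to-symbol mapping idea; `_OPERATOR_SYMBOLS` and `format_unsupported` are hypothetical names for illustration, not part of `fractions.py`:

```python
from fractions import Fraction

# Hypothetical mapping from operator.__name__ values to surface syntax;
# illustrative only, not part of fractions.py.
_OPERATOR_SYMBOLS = {"floordiv": "//", "mod": "%", "divmod": "divmod()"}

def format_unsupported(op_name, a, b):
    # Fall back to the raw name for operators without a known symbol.
    symbol = _OPERATOR_SYMBOLS.get(op_name, op_name)
    return "unsupported operand type(s) for %s: %r and %r" % (
        symbol, type(a).__qualname__, type(b).__qualname__)

msg = format_unsupported("floordiv", Fraction(1, 2), 1j)
assert msg == "unsupported operand type(s) for //: 'Fraction' and 'complex'"
```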
<!-- gh-linked-prs -->
### Linked PRs
* gh-102842
<!-- /gh-linked-prs --> | 5319c66550a6d6c6698dea75c0a0ee005873ce61 | 597fad07f7bf709ac7084ac20aa3647995759b01 |
python/cpython | python__cpython-102863 | # Speedup math.log by removing AC stuff
In #64385 this function was converted to Argument Clinic. Unfortunately, the function signature was "corrupted", probably to avoid showing the "magic number" 2.718281828459045 in the function docstring, and this has a noticeable speed impact (~2x regression). In the current main:
```
$ ./python -m timeit -s 'from math import log' 'log(1.1)'
1000000 loops, best of 5: 341 nsec per loop
$ ./python -m timeit -s 'from math import log' 'log(1.1, 3.2)'
500000 loops, best of 5: 576 nsec per loop
```
while without the AC stuff (but with METH_FASTCALL):
```
$ ./python -m timeit -s 'from math import log' 'log(1.1)'
2000000 loops, best of 5: 171 nsec per loop
$ ./python -m timeit -s 'from math import log' 'log(1.1, 3.2)'
1000000 loops, best of 5: 346 nsec per loop
```
(A different "corruption" was used in cmath.log, without a noticeable speed regression. Unfortunately, it can't be adapted for the math module.)
Given that with the current "state of the art" for AC and the inspect module it's impossible to represent math.log() correctly (see #89381), I think it makes sense to revert the AC patch for this function. BTW, the math module has several other functions that are not converted to AC, e.g. for performance reasons (like hypot(), see #30312).
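For reference, the same measurement can be reproduced from within Python rather than the shell (absolute numbers vary by machine and build; only the ratio between builds matters):

```python
import timeit

# Programmatic version of the shell timings above.
one_arg = timeit.timeit("log(1.1)", setup="from math import log", number=100_000)
two_arg = timeit.timeit("log(1.1, 3.2)", setup="from math import log", number=100_000)
print(f"log(x):       {one_arg / 100_000 * 1e9:.0f} ns per call")
print(f"log(x, base): {two_arg / 100_000 * 1e9:.0f} ns per call")
```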
<!-- gh-linked-prs -->
### Linked PRs
* gh-102863
<!-- /gh-linked-prs -->
| d1a89ce5156cd4e1eff5823ec2200885c43e395e | 41ef502d740b96ca6333a2d0202df7cce4a84e7d |
python/cpython | python__cpython-103009 | # Documentation for `bisect` with keys
# Documentation
Documentation for the *key* parameter is missing from the `bisect` functions' docstrings.
I've also found the wording in the [official documentation](https://docs.python.org/3/library/bisect.html) to be a little confusing. I think it might be clearer to use an overloaded definition like this:
```python
bisect_left(a, x, lo=0, hi=None, *, key=None)
bisect_left(a, k, lo=0, hi=None, *, key)
```
> Locate the insertion point for *x* in *a* to maintain sorted order. The parameters *lo* and *hi* may be used to
> specify a subset of the list which should be considered; by default the entire list is used. If *x* is already
> present in *a*, the insertion point will be before (to the left of) any existing entries. The return value is suitable
> for use as the first parameter to `list.insert()` assuming that *a* is already sorted.
>
> The returned insertion point *i* partitions the array *a* into two halves so that `all(val < x for val in a[lo :
> i])` for the left side and `all(val >= x for val in a[i : hi])` for the right side.
>
> *key* specifies a [key function][1] of one argument that is used to extract a comparison key from each element in
> the array. ___If given, *k = key(x)* should be given instead of *x*, which allows searching for an unknown *x* by known key.___
>
> If *key* is `None`, the elements are compared directly with no intervening function call.
>
> *Changed in version 3.10:* Added the *key* parameter.
```python
insort_left(a, x, lo=0, hi=None, *, key=None)
```
> Insert *x* in *a* in sorted order.
>
> This function first runs [bisect_left()][2] to locate an insertion point. Next, it runs the `insert()` method on *a*
> to insert *x* at the appropriate position to maintain sort order.
>
> To support inserting records in a table, the *key* function (if any) is applied to *x* for the search step but not for
> the insertion step. ___Unlike `bisect_left`, *key(x)* should not be given in place of *x* since *x* needs to be inserted into *a*.___
>
> Keep in mind that the `O(log n)` search is dominated by the slow O(n) insertion step.
>
> *Changed in version 3.10:* Added the *key* parameter.
[1]: https://docs.python.org/3/glossary.html#term-key-function
[2]: https://docs.python.org/3/library/bisect.html#bisect.bisect_left
<!-- gh-linked-prs -->
### Linked PRs
* gh-103009
<!-- /gh-linked-prs -->
| 1fd603fad20187496619930e5b74aa7690425926 | 87adc63d66349566b6459f93be60861b9d37782f |
python/cpython | python__cpython-103339 | # IDLE - Remove use of deprecated sys.last_xyzs in stackviewer
#102778 deprecated sys.last_type/value/traceback in favor of sys.last_exc, added in 3.12. We cannot access the system-set sys.last_exc in 3.10 and 3.11, but we can set it from sys.last_value or sys.exc_info()[1] and nearly never set or otherwise access the deprecated values. The exception is `sys.last_type, sys.last_value, sys.last_traceback = excinfo` in run.print_exception, which should remain as long as the REPL does the same (add a comment). Any `sys.last_exc = sys.last_value` statements can be deleted in June 2024. There is a [draft issue](https://github.com/orgs/python/projects/31/views/1?pane=issue&itemId=23303905) for this.
The files to be patched are run (where it handles exceptions or stackviewer), pyshell (where it starts stackviewer), stackviewer, and test_stackviewer. [This diff](https://github.com/python/cpython/pull/102825/files) has the relevant locations.
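A hedged sketch of the planned backfill; the attribute assignments mirror what run.print_exception already does, and details may differ in the final patch:

```python
import sys

# Simulate what run.print_exception does with the legacy attributes.
try:
    raise ValueError("boom")
except ValueError:
    sys.last_type, sys.last_value, sys.last_traceback = sys.exc_info()

# On 3.10/3.11 the interpreter never sets sys.last_exc, so derive it
# from sys.last_value; this statement becomes removable in June 2024.
sys.last_exc = sys.last_value
assert isinstance(sys.last_exc, ValueError)
```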
<!-- gh-linked-prs -->
### Linked PRs
* gh-103339
* gh-105526
* gh-105527
* gh-105528
* gh-105534
<!-- /gh-linked-prs -->
| 3ee921d84f06da9dfa8aa29e0d33778b9dbf8f23 | 68dfa496278aa21585eb4654d5f7ef13ef76cb50 |
python/cpython | python__cpython-102829 | # add option for error callback of to shutil.rmtree to accept exception rather than exc_info
shutil.rmtree accepts as ``onerror`` a callback that should expect an exc_info tuple. To move off this, I will add a new kwarg "onexc" whose callback should expect just an exception object. ``onerror`` will continue to work for now, but will be deprecated.
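For context, the two handler protocols side by side; the ``onexc`` branch assumes a 3.12+ interpreter where this issue's keyword exists:

```python
import shutil
import sys
import tempfile

seen = []

def onexc(func, path, exc):
    # Proposed protocol: handler receives the exception object directly.
    seen.append(type(exc).__name__)

def onerror(func, path, excinfo):
    # Legacy protocol: the third argument is a (type, value, traceback) tuple.
    seen.append(type(excinfo[1]).__name__)

d = tempfile.mkdtemp()
shutil.rmtree(d)  # remove the directory once; the second attempt must fail
if sys.version_info >= (3, 12):
    shutil.rmtree(d, onexc=onexc)      # new keyword from this issue
else:
    shutil.rmtree(d, onerror=onerror)  # pre-3.12 spelling
assert seen and seen[0] == "FileNotFoundError"
```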
<!-- gh-linked-prs -->
### Linked PRs
* gh-102829
* gh-102835
* gh-102850
* gh-103422
<!-- /gh-linked-prs -->
| d51a6dc28e1b2cd0353a78bd13f46e288fa39aa6 | 4d1f033986675b883b9ff14588ae6ff78fdde313 |
python/cpython | python__cpython-102824 | # Document that float // float returns a float
PEP 238 [states clearly](https://peps.python.org/pep-0238/#semantics-of-floor-division) that the result of `x // y` is a `float`, when `x` and `y` both have type `float`.
> For floating point inputs, the result is a float. For example:
>
> 3.5//2.0 == 1.0
>
And indeed this is the behaviour that's been in place since Python 2.2. However, we seem to be missing a clear statement of this in the official docs. It would be good to fix that.
Motivated by the thread https://discuss.python.org/t/make-float-floordiv-and-rfloordiv-return-an-int/24959, where it's been asserted that this behaviour is an 'implementation detail'.
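The behaviour in question, as a runnable check:

```python
# PEP 238 semantics: floor division of two floats yields a float,
# rounded toward negative infinity.
assert 3.5 // 2.0 == 1.0
assert type(3.5 // 2.0) is float
assert -3.5 // 2.0 == -2.0   # floor, not truncation toward zero
```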
<!-- gh-linked-prs -->
### Linked PRs
* gh-102824
* gh-109092
* gh-109093
<!-- /gh-linked-prs -->
| b72251de930c8ec6893f1b3f6fdf1640cc17dfed | b2729e93e9d73503b1fda4ea4fecd77c58909091 |
python/cpython | python__cpython-104579 | # Duplicate frame in traceback of exception raised inside trace function
First appeared in e028ae99ecee671c0e8a3eabb829b5b2acfc4441.
Reproducer:
```python
import sys
def f():
pass
def trace(frame, event, arg):
raise ValueError()
sys.settrace(trace)
f()
```
Before 'bad' commit (3e43fac2503afe219336742b150b3ef6e470686f):
```
Traceback (most recent call last):
File "/home/.../trace_tb_bug.py", line 10, in <module>
f()
^^^
File "/home/.../trace_tb_bug.py", line 3, in f
def f():
File "/home/.../trace_tb_bug.py", line 7, in trace
raise ValueError()
^^^^^^^^^^^^^^^^^^
ValueError
```
After 'bad' commit (e028ae99ecee671c0e8a3eabb829b5b2acfc4441):
```
Traceback (most recent call last):
File "/home/.../trace_tb_bug.py", line 10, in <module>
f()
^^^
File "/home/.../trace_tb_bug.py", line 3, in f
def f():
File "/home/.../trace_tb_bug.py", line 3, in f
def f():
File "/home/.../trace_tb_bug.py", line 7, in trace
raise ValueError()
^^^^^^^^^^^^^^^^^^
ValueError
```
3.11.0 release and main (039714d00f) also lack pointers to error locations, but this probably needs a different issue:
```
Traceback (most recent call last):
File "/home/.../trace_tb_bug.py", line 10, in <module>
f()
File "/home/.../trace_tb_bug.py", line 3, in f
def f():
File "/home/.../trace_tb_bug.py", line 3, in f
def f():
File "/home/.../trace_tb_bug.py", line 7, in trace
raise ValueError()
ValueError
```
<!-- gh-linked-prs -->
### Linked PRs
* gh-104579
* gh-104650
<!-- /gh-linked-prs -->
| c26d03d5d6da94367c7f9cd93185616f2385db30 | 616fcad6e2e10b0d0252e7f3688e61c468c54e6e |
python/cpython | python__cpython-102811 | # Add docstrings to asyncio.Timeout
# Feature or enhancement
`asyncio.Timeout` doesn't have docstrings. (`asyncio.timeout` does, though. :))
# Pitch
Adding docstrings would increase the usability of this class.
# Notes
I will open a PR for this shortly.
I envisage starting out copying from the docs: https://docs.python.org/3/library/asyncio-task.html#asyncio.Timeout
and hammering out the wording in the PR, similar to what was done for `asyncio.TaskGroup` in https://github.com/python/cpython/issues/102560.
<!-- gh-linked-prs -->
### Linked PRs
* gh-102811
* gh-102834
* gh-102934
* gh-102958
<!-- /gh-linked-prs -->
| 699cb20ae6fdef8b0f13d633cf4858465ef3469f | d51a6dc28e1b2cd0353a78bd13f46e288fa39aa6 |
python/cpython | python__cpython-102854 | # What to do with `Misc/gdbinit`?
Quoting: https://github.com/python/cpython/pull/101866#discussion_r1140997764
> Does this file have any tests? I suspect that most of the code in this file is wrong (as we have changed much of CPython since it was written).
Problems:
1. After https://github.com/python/cpython/pull/101866 `gdbinit` will use a deprecated `co_lnotab` property
2. There are no tests for it as far as I know
3. When the internals of CPython are changed, there's no way to detect that the file was broken
4. There's no owner of this file, who can validate changes and keep it up-to-date
What should we do with it?
<!-- gh-linked-prs -->
### Linked PRs
* gh-102854
* gh-103269
<!-- /gh-linked-prs -->
| ef000eb3e2a8d0ecd51b6d44b390fefd820a61a6 | 094cf392f49d3c16fe798863717f6c8e0f3734bb |
python/cpython | python__cpython-102800 | # some usages of sys.exc_info can be replaced by more modern code
There are some obvious places where sys.exc_info() can be replaced by direct access to a captured exception (without needing any new features that could complicate backports).
Need to avoid changing tests that are testing exc_info, like generator tests or sys.
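A typical instance of the modernization described above:

```python
import sys

# Old style: reach for sys.exc_info() inside the handler.
try:
    1 / 0
except ZeroDivisionError:
    exc_type, exc_value, tb = sys.exc_info()

# Modern equivalent: the bound exception already carries everything.
try:
    1 / 0
except ZeroDivisionError as e:
    assert type(e) is exc_type
    assert e.__traceback__ is not None
```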
<!-- gh-linked-prs -->
### Linked PRs
* gh-102800
* gh-102830
* gh-102885
* gh-103293
* gh-103294
* gh-103311
* gh-104032
* gh-106746
<!-- /gh-linked-prs -->
| b3cc11a08e1e9121ddba3b9afb9fae032e86f449 | 72186aa637bc88cd5f5e234803af64acab25994c |
python/cpython | python__cpython-102798 | # Implement TODO in `test_ast.py`
In `test_ast.py` we have a old TODO:
https://github.com/python/cpython/blob/72186aa637bc88cd5f5e234803af64acab25994c/Lib/test/test_ast.py#L266-L267
As I understand it, it refers to the code snippets in `exec_tests` and `eval_tests`.
So, I've decided to improve it.
<!-- gh-linked-prs -->
### Linked PRs
* gh-102798
<!-- /gh-linked-prs -->
| b8fb369193029d10059bbb5f760092071f3a9f5f | a816cd67f43d9adb27ccdb6331e08c835247d1df |
python/cpython | python__cpython-102796 | # test_epoll.test_control_and_wait flakes from improper poll() usage
# Bug report
test_epoll.test_control_and_wait can flake. In the test we wait for events on two file descriptors. This is done in a single call to select.epoll's poll() function. However, it is valid for the OS to return only one event via poll() and the next via a subsequent call to poll(). This rarely happens, but it can cause the test to fail despite properly functioning polling.
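The robust pattern is to loop until every expected descriptor has reported, rather than assuming a single `poll()` call returns all ready events. A sketch (epoll is Linux-only, hence the guard):

```python
import os
import select

seen = set()
if hasattr(select, "epoll"):     # epoll exists only on Linux
    r1, w1 = os.pipe()
    r2, w2 = os.pipe()
    ep = select.epoll()
    ep.register(r1, select.EPOLLIN)
    ep.register(r2, select.EPOLLIN)
    os.write(w1, b"x")
    os.write(w2, b"x")

    # The OS may report the two descriptors across separate poll() calls,
    # so collect events until both have been seen.
    while seen != {r1, r2}:
        for fd, _event in ep.poll(timeout=1.0):
            seen.add(fd)
    assert seen == {r1, r2}

    ep.close()
    for fd in (r1, w1, r2, w2):
        os.close(fd)
```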
# Your environment
- CPython versions tested on: 3.10.2
- Operating system and architecture: Linux 5.19.11 and gVisor
<!-- gh-linked-prs -->
### Linked PRs
* gh-102796
* gh-104173
<!-- /gh-linked-prs -->
| c9ecd3ee75b472bb0a7538e0288c5cfea146da83 | 45398ad51220b63b8df08fb5551c6b736205daed |
python/cpython | python__cpython-102782 | # Cases generator: inserted #line directives are dependent on cwd
# Bug report
Python supports out-of-tree builds, e.g.:
```
> git clone git@github.com:python/cpython
> mkdir -p builds/dbg
> cd builds/dbg
> ../../cpython/configure --with-pydebug
> make -j
```
This is useful for e.g. keeping both debug and opt and asan builds around, without constantly having to `make clean` and recompile.
Currently the output of `make regen-cases` is cwd-dependent, so running it from an out-of-tree build (with no other changes) will change all `#line` directives in `generated_cases.c.h` from e.g. `Python/bytecodes.c` to e.g. `../../cpython/Python/bytecodes.c`.
This is easily fixable by inserting a couple `os.path.relpath(..., ROOT)` calls in the cases generator, to ensure all the filenames used in `#line` directives are always relative to the repo root, regardless of cwd.
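A sketch of that fix; the function and variable names here are illustrative, not the cases generator's actual API:

```python
import os.path

def line_directive(lineno, filename, root):
    # Normalize the filename so it is always relative to the repo root,
    # making the emitted directive independent of the current directory.
    rel = os.path.relpath(filename, root).replace(os.sep, "/")
    return f'#line {lineno} "{rel}"'

# The same absolute input produces the same directive from any cwd.
d = line_directive(7, "/repo/cpython/Python/bytecodes.c", "/repo/cpython")
assert d == '#line 7 "Python/bytecodes.c"'
```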
<!-- gh-linked-prs -->
### Linked PRs
* gh-102782
<!-- /gh-linked-prs -->
| 174c4bfd0fee4622657a604af7a2e7d20a3f0dbc | 65fb7c4055f280caaa970939d16dd947e6df8a8d |
python/cpython | python__cpython-102815 | # Catching CancelledError requires use of task.uncancel()
<!--
If you're new to Python and you're not sure whether what you're experiencing is a bug, the CPython issue tracker is not
the right place to seek help. Consider the following options instead:
- reading the Python tutorial: https://docs.python.org/3/tutorial/
- posting in the "Users" category on discuss.python.org: https://discuss.python.org/c/users/7
- emailing the Python-list mailing list: https://mail.python.org/mailman/listinfo/python-list
- searching our issue tracker (https://github.com/python/cpython/issues) to see if
your problem has already been reported
-->
# Bug report
Today I came across the following problem:
```python
import asyncio
ready = asyncio.Event()
async def task_func():
try:
ready.set()
await asyncio.sleep(1)
except asyncio.CancelledError:
# this is required, otherwise a TimeoutError is not created below,
# instead, the timeout-generated CancelledError is just raised upwards.
asyncio.current_task().uncancel()
finally:
# Perform some cleanup, with a timeout
try:
async with asyncio.timeout(0):
await asyncio.sleep(1)
except asyncio.TimeoutError:
pass
async def main():
task = asyncio.create_task(task_func())
await ready.wait()
await asyncio.sleep(0.01)
# the task is now sleeping, lets send it an exception
task.cancel()
# We expect the task to finish cleanly.
await task
asyncio.run(main())
```
The documentation mentions that sometimes an application may want to suppress `CancelledError`. What it fails
to mention is that unless the cancellation is subsequently undone with `task.uncancel()`, the task remains in a
_cancelled_ state, and context managers such as `asyncio.timeout` will not work as designed. However, `task.uncancel()`
is not supposed to be called by user code.
I think the documentation should mention this use case for `task.uncancel()`, particularly in the context of catching (and choosing to ignore) `CancelledError`.
This could also be considered a bug in `asyncio.timeout`. It prevents the use of the Timeout context manager within cleanup code, even if the intention is _not_ to ignore a `CancelledError`.
It should also be noted that the library `asyncio_timeout` on which the `asyncio.timeout` implementation is based, does not have this problem. Timeouts continue to work as designed, even if the task has been previously cancelled.
# Your environment
<!-- Include as many relevant details as possible about the environment you experienced the bug in -->
- CPython versions tested on: 3.11
<!--
You can freely edit this text. Remove any lines you believe are unnecessary.
-->
<!-- gh-linked-prs -->
### Linked PRs
* gh-102815
* gh-102923
<!-- /gh-linked-prs -->
| 04adf2df395ded81922c71360a5d66b597471e49 | 1ca315538f2f9da6c7b86c4c46e76d454c1ec4b9 |
python/cpython | python__cpython-102779 | # Add sys.last_exc, deprecate sys.last_type, sys.last_value and sys.last_traceback
<!-- gh-linked-prs -->
### Linked PRs
* gh-102779
* gh-102825
* gh-103314
* gh-105190
* gh-105246
<!-- /gh-linked-prs -->
| e1e9bab0061e8d4bd7b94ed455f3bb7bf8633ae7 | 039714d00f147be4d018fa6aeaf174aad7e8fa32 |
python/cpython | python__cpython-103485 | # Improve performance of ntpath.isdir/isfile/exists/islink
These functions should use `GetFileAttributesW` for their fast paths, and the ones that traverse links can fall back to `stat` if a reparse point is found.
<!-- gh-linked-prs -->
### Linked PRs
* gh-103485
<!-- /gh-linked-prs -->
| b701dce340352e1a20c1776feaa368d4bba91128 | 8a0c7f1e402768c7e806e2472e0a493c1800851f |
python/cpython | python__cpython-102759 | # Function signature mismatch for `functools.reduce` between C implementation, Python implementation, and online documentation
# Documentation
(A clear and concise description of the issue.)
The function signature mismatch for `functools.reduce`:
- C implementation:
```python
_initial_missing = object()
@overload
def reduce(function, iterable, /): ...
@overload
def reduce(function, iterable, initial=_initial_missing, /): ...
```
- Python implementation:
```python
_initial_missing = object()
@overload
def reduce(function, sequence): ...
@overload
def reduce(function, sequence, initial=_initial_missing): ...
```
Argument change: `iterable -> sequence`.
- Online documentation:
```python
def reduce(function, iterable, initializer=None): ...
```
Argument change: `initial -> initializer`.
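The third positional argument in action, whatever its name:

```python
from functools import reduce

# Two-argument form: the first element seeds the accumulator.
assert reduce(lambda acc, x: acc + x, [1, 2, 3, 4]) == 10

# Three-argument form: the seed ("initial" in the implementations,
# "initializer" in the online docs) also covers empty input.
assert reduce(lambda acc, x: acc + x, [], 0) == 0
```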
<!-- gh-linked-prs -->
### Linked PRs
* gh-102759
<!-- /gh-linked-prs -->
| 74f315edd01b4d6c6c99e50c03a90116820d8d47 | 0bb0d88e2d4e300946e399e088e2ff60de2ccf8c |
python/cpython | python__cpython-102756 | # Add PyErr_DisplayException(exc) as replacement for PyErr_Display(typ, exc, tb)
Create a single-arg version of PyErr_Display.
PyErr_Display is in the stable ABI but it is not documented. PyErr_DisplayException will therefore need to be in the stable ABI as well (and documented).
<!-- gh-linked-prs -->
### Linked PRs
* gh-102756
* gh-102826
* gh-102849
<!-- /gh-linked-prs -->
| 3f9285a8c52bf776c364f0cf4aecdd8f514ac4e1 | 405739f9166592104a5b0b945de92e28415ae972 |
python/cpython | python__cpython-102749 | # Remove legacy code for generator based coroutines in `asyncio`
Generator-based coroutines were long deprecated and have been removed, and `asyncio` no longer supports them. There is some leftover code for supporting them which can be removed now.
The first thing is this behavior of `asyncio.iscoroutine`, which makes no sense now. I propose to remove this first.
```py
import asyncio
def gen():
yield 1
assert not asyncio.iscoroutine(gen()) # fails
```
<!-- gh-linked-prs -->
### Linked PRs
* gh-102749
<!-- /gh-linked-prs -->
| adaed17341e649fabb0f522e9b4f5962fcdf1d48 | fbe82fdd777cead5d6e08ac0d1da19738c19bd81 |
python/cpython | python__cpython-136431 | # Clarify "system-wide" in docs for time.monotonic()
The documentation for [`time.monotonic()`](https://docs.python.org/dev/library/time.html#time.monotonic) currently states:
> Changed in version 3.5: The function is now always available and always system-wide.
I didn't know what system-wide meant, and so propose expanding the documentation to:
> Changed in version 3.5: The function is now always available and always system-wide, that is, the clock is the same for all processes on the system and the reference point does not change after start-up time.
Or anything equally enlightening. This would make it clear that, among other things, the delta may not be valid across reboots.
<!-- gh-linked-prs -->
### Linked PRs
* gh-136431
* gh-136488
* gh-136489
<!-- /gh-linked-prs -->
| 9c4d28777526e9975b212d49fb0a530f773a3209 | 92b33c9590140263343c3e780bfc2ea1c1aded5c |
python/cpython | python__cpython-102739 | # remove register instruction related code from the cases generator script
Since we won't have register instructions in 3.12, we should remove this from the generator for now.
<!-- gh-linked-prs -->
### Linked PRs
* gh-102739
<!-- /gh-linked-prs -->
| 675b97a6ab483573f07ce6e0b79b0bc370ab76c9 | 0a22aa0528a4ff590854996b8854e9a79120987a |
python/cpython | python__cpython-102745 | # C-analyzer tool cannot parse `#line` directives
# Bug report
`#line` directives were added to `generated_cases.c.h` in https://github.com/python/cpython/commit/70185de1abfe428049a5c43d58fcb656b46db96c, but the C-analyzer tool cannot handle these directives. This caused the CI check to start failing on all PRs, e.g.:
- https://github.com/python/cpython/pull/94468
- https://github.com/python/cpython/pull/102734
The short term fix is to exclude `ceval.c` from the C-analyzer tool:
- #102735
But ideally we'd remove `ceval.c` from the excluded files and teach the C-analyzer tool to handle `#line` directives.
<!-- gh-linked-prs -->
### Linked PRs
* gh-102745
<!-- /gh-linked-prs -->
| 84e20c689a8b3b6cebfd50d044c62af5d0e7dec1 | adaed17341e649fabb0f522e9b4f5962fcdf1d48 |
python/cpython | python__cpython-102722 | # Improve coverage of `_collections_abc._CallableGenericAlias`
Right now several important implementation details of this type are not covered:
1. Pickle with `ParamSpec`
<img width="1254" alt="Screenshot 2023-03-15 at 12 55 15" src="https://user-images.githubusercontent.com/4660275/225277373-31b4b04e-a172-49d9-bd86-5ff98a52a66d.png">
2. Invalid type argument
<img width="1247" alt="Screenshot 2023-03-15 at 12 55 28" src="https://user-images.githubusercontent.com/4660275/225277473-0f2c3c7c-c0ff-4625-91c5-dd03e33aa2e9.png">
Related to https://github.com/python/cpython/issues/102615 where this was originally discovered.
I will send a PR shortly.
<!-- gh-linked-prs -->
### Linked PRs
* gh-102722
* gh-102788
* gh-102790
<!-- /gh-linked-prs -->
| fbe82fdd777cead5d6e08ac0d1da19738c19bd81 | b0ec6253c9cf1b22b87cd99f317dbaeb31263b20 |
python/cpython | python__cpython-102712 | # Warnings found by clang
When building Python with clang 15, the warnings below were reported:
```
/home/lge/rpmbuild/BUILD/Python-3.10.9/Parser/pegen.c:812:31: warning: a function declaration without a prototype is deprecated in all versions of C [-Wstrict-prototypes]
_PyPegen_clear_memo_statistics()
^
void
/home/lge/rpmbuild/BUILD/Python-3.10.9/Parser/pegen.c:820:29: warning: a function declaration without a prototype is deprecated in all versions of C [-Wstrict-prototypes]
_PyPegen_get_memo_statistics()
^
void
```
The above warnings exist from 3.9.x through the latest main branch.
<!-- gh-linked-prs -->
### Linked PRs
* gh-102712
* gh-103075
* gh-103076
<!-- /gh-linked-prs -->
| 7703def37e4fa7d25c3d23756de8f527daa4e165 | f4ed2c6ae5915329e49b9f94033ef182400e29fa |
python/cpython | python__cpython-102707 | # Typo in https://docs.python.org/3/tutorial/modules.html
# Documentation
> In “This prevents directories with a common name, such as string, unintentionally hiding valid modules that occur later on the module search path”: s/ unintentionally/ from unintentionally/
Originally reported in the mailing list by Goldberg, Arthur P.
https://mail.python.org/archives/list/docs@python.org/thread/UMN53Z2TI6XIJ3BG4NDZ2X2BH3GLT4LV/
<!-- gh-linked-prs -->
### Linked PRs
* gh-102707
* gh-102708
* gh-102709
<!-- /gh-linked-prs -->
| 0a539b5db312d126ff45dd4aa6a53d40a292c512 | 152292b98fd52f8b3db252e5609d6a605a11cb57 |
python/cpython | python__cpython-102750 | # dict() resize failure
# Bug report
When running under Python 3.11.1, 3.11.2 and 3.12.a6, the dict() data structure fails to resize past 357913941 elements. This last worked (in the versions I have available) under Python 3.10.8, and I can find no documentation that limits a dict to a finite size. The system has >512G of memory and this example easily fits in that memory.
My reproducer is mem.py:
```python
d=dict()
for a in range(500000000):
try:
d[a]=1
except:
print(a)
exit(0)
```
output:
```
357913941
```
As an aside, (2^31)/3/2 = 357913941. It looks to me like there is an int32 in the code trying to represent the size of the dict (GROWTH_FACTOR=3). I'm not sure where the extra factor of 2 is coming from yet.
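The aside's arithmetic, checked exactly (where the extra factor of 2 comes from is still the open question above):

```python
# 2**31 (a signed 32-bit boundary) divided by GROWTH_FACTOR (3) and by
# the unexplained additional factor of 2 gives the observed limit.
assert (2**31) // 3 // 2 == 357913941
```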
Simplest reproducer is:
```python
d=dict()
for a in range(500000000):
d[a]=1
```
output:
```
d[a]=1
~^^^
MemoryError
```
exit code=1
# Your environment
I have tested on 3.10.8 (no failure) 3.11.1, 3.11.2, 3.12.a6 (fail)
This test must be run on a 64-bit system with >40G of RAM free. (I have not tested on a system with less memory, but I assume an out-of-memory error from the OS will stop the Python program before the dict() failure.)
To reproduce I built the python with:
```
./configure --prefix=/mydir/python-3.12.a6 --enable-ipv6 --enable-shared --with-system-ffi --with-system-expat --with-ssl-default-suites=openssl --enable-optimizations
make -j
make install
set PATH=/mydir/python-3.12.a6/bin:$PATH and LD_LIBRARY_PATH=/mydir/python-3.12.a6/lib
python3 mem.py
```
In pyconfig.h
SIZEOF_VOID_P = 8
SIZEOF_SIZE_T = 8
OS:
Red Hat Enterprise Linux release 8.7 (Ootpa)
cat /proc/meminfo
MemTotal: 263686408 kB
MemFree: 150322584 kB
MemAvailable: 221228188 kB
...
Fails with both Intel and AMD servers.
<!-- gh-linked-prs -->
### Linked PRs
* gh-102750
* gh-102777
<!-- /gh-linked-prs -->
| 65fb7c4055f280caaa970939d16dd947e6df8a8d | 4f5774f648eafd1a7076ecf9af9629fb81baa363 |
python/cpython | python__cpython-103162 | # Allow for built-in modules to be submodules
# Bug report
If you statically link a namespaced extension (such as `numpy.core._multiarray_umath`) and add it to the inittab, it will not get loaded properly. In this case, the path to the extension is passed to the `BuiltinImporter.find_spec` method, which causes it to return `None`. The path is a pointer to the namespace of the extension.
One possible fix would be to check for a `.` in the name of the extension being loaded. If it contains that, consider it to be a fully-qualified import path and check to see if that fully-qualified name is in the builtins list before moving on to other checks.
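A hypothetical sketch of that check (`find_builtin` is illustrative, not the real `BuiltinImporter.find_spec` logic):

```python
import sys

def find_builtin(fullname, path=None):
    # Proposed tweak: a dotted name is fully qualified, so do not bail
    # out just because importlib supplied a (namespace) path for it.
    if path is not None and "." not in fullname:
        return None               # current behavior for plain names
    if fullname in sys.builtin_module_names:
        return f"<spec for builtin {fullname}>"
    return None

assert find_builtin("_imp") is not None               # always built in
assert find_builtin("no.such.module", path=["ns"]) is None
```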
# Your environment
- CPython 3.10 (although I believe it's the same in newer versions)
- Linux x86 and Wasm
<!-- gh-linked-prs -->
### Linked PRs
* gh-103162
* gh-103322
<!-- /gh-linked-prs -->
| 5d08c3ff7d89ca11556f18663a372f6c12435504 | dca7d174f1dc3f9e67c7451a27bc92dc5a733008 |
python/cpython | python__cpython-102691 | # Update webbrowser to use Edge as fallback instead of IE
# Feature or enhancement
[webbrowser.py](https://github.com/python/cpython/blob/54060ae91da2df44b3f6e6c698694d40284687e9/Lib/webbrowser.py#L546) currently uses Internet Explorer as the fallback browser on Windows, if the user has no default browser, and no user-installed browsers are detected. I propose to update this code to use Microsoft Edge as the fallback instead.
# Pitch
- IE is deprecated and no longer functional on most Windows installations today.
- Any Windows installation with current Python has had Microsoft Edge installed since release.
- It seems that certain enterprise policies will block the user from directly invoking IE ([See this Github issue](https://github.com/python/cpython/issues/102520)).
- While iexplore.exe currently redirects to Edge when executed, this PR makes the fallback behavior transparent.
# Previous discussion
https://discuss.python.org/t/update-webbrowser-py-to-use-microsoft-edge-as-fallback-instead-of-ie/24791
<!-- gh-linked-prs -->
### Linked PRs
* gh-102691
<!-- /gh-linked-prs -->
| 1c9f3391b916939e5ad18213e553f8d6bfbec25e | 2a03ed034edc6bb2db6fb972b2efe08db5145a28 |
python/cpython | python__cpython-103969 | # Make `dis.Instruction` more useful
I recently was [playing with bytecode instructions](https://github.com/faster-cpython/tools/tree/guidos-explorations/explore) and found that the `dis.Instruction` did *almost* what I needed, but not quite -- I ended up reimplementing it, mostly reinventing the wheel. I propose to improve this class in dis.py as follows:
- `start_offset`: start of the instruction _including_ `EXTENDED_ARG` prefixes
- `jump_target`: bytecode offset where a jump goes (can be property computed from `opcode` and `arg`)
- `baseopname`, `baseopcode`: name and opcode of the "family head" for specialized instructions (can be properties)
- `cache_offset`, `end_offset`: start and end of the cache entries (can be properties)
- `oparg`: alias for `arg` (in most places we seem to use `oparg`)
Of these, only `start_offset` needs to be a new data field -- the others can be computed properties. For my application, `start_offset` was important to have (though admittedly I could have done without if I had treated `EXTENDED_ARG` as a `NOP`). It just feels wrong that `offset` points to the opcode but `oparg` includes the extended arg -- this means one has to explicitly represent `EXTENDED_ARG` instructions in a sequence of instructions even though their _effect_ (`arg`) is included in the `Instruction`.
I also added a `__str__` method to my `Instruction` class that shows the entire instruction -- this could call the `_disassemble` method with default arguments. Here I made one improvement over `_disassemble`: if the opname is longer than `_OPNAME_WIDTH` but the arg is shorter than `_OPARG_WIDTH`, I let the opname overflow into the space reserved for the oparg, leaving at least one space in between. This makes for fewer misaligned opargs, since the opname is left-justified and the oparg is right-justified.
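A hedged sketch of the proposed `start_offset`, built as a helper on the current `dis.Instruction` fields (the real proposal would make it a data field on the class):

```python
import dis

def start_offset(instructions, index):
    # Walk back over any EXTENDED_ARG prefixes so the offset covers the
    # whole logical instruction, as proposed for Instruction.start_offset.
    off = instructions[index].offset
    j = index - 1
    while j >= 0 and instructions[j].opname == "EXTENDED_ARG":
        off = instructions[j].offset
        j -= 1
    return off

def f():
    return 1

ins = list(dis.get_instructions(f))
# With no EXTENDED_ARG prefixes, start_offset equals the plain offset.
assert all(start_offset(ins, i) == ins[i].offset for i in range(len(ins)))
```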
@isidentical @serhiy-storchaka @iritkatriel @markshannon @brandtbucher
<!-- gh-linked-prs -->
### Linked PRs
* gh-103969
<!-- /gh-linked-prs -->
| 18d16e93b6d4b7b10c5145890daa92b760fe962a | 845e593c4ec97dd9f73b50536c1e1e7ed10ceecd |
python/cpython | python__cpython-102685 | # Generate Lib/opcode.py from Python/bytecodes.c
This could also auto-generate Include/opcode.h, Include/internal/pycore_opcode.h and Python/opcode_targets.h, subsuming both Tools/build/generate_opcode_h.py and Python/makeopcodetargets.py -- although the simplest approach would probably be to just keep those tools and only ensure that they are re-run after opcode.py is updated.
## Variables to generate
The auto-generatable contents of opcode.py is as follows:
- `opmap` -- make up opcode numbers in a pass after reading all input; special-case `CACHE = 0`
- `HAVE_ARGUMENT` -- done while making up opcode numbers (group argument-less ones in lower group)
- `ENABLE_SPECIALIZATION = True`
- `EXTENDED_ARG` -- set from opcode for `EXTENDED_ARG` inst, if there is one
- `opname` -- invert `opmap`
- pseudo opcode definitions, can add DSL syntax `pseudo(NAME) = { name1, name2, ... };`
- `hasarg` -- can be derived from `instr_format` metadata
- `hasconst` -- could hardcode to `LOAD_CONST`
- `hasname` -- may have to check for occurrences of `co_names` in code?
- `hasjrel` -- check for `JUMPBY` with arg that doesn't start with `INLINE_CACHE_ENTRIES_`
- `hasjabs` -- no longer used, set to `[]` for backwards compatibility
- `haslocal` -- opcode name contains '_FAST'
- `hascompare` -- opcode name starts with `COMPARE_`
- `hasfree` -- opcode name ends in `DEREF` or `CELL` or `CLOSURE`
- `hasexc` -- pseudo opcode, name starts with `SETUP_`
- `MIN_PSEUDO_OPCODE = 256`
- `MAX_PSEUDO_OPCODE` -- derive from pseudo opcodes
- `__all__` -- hardcode
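For illustration, a hedged sketch of how the name-based heuristics above could derive a few of these lists (the tiny `opmap` here is made up for the example; it is not the real opcode table):

```python
# Assumed miniature opmap for demonstration purposes only.
opmap = {"LOAD_FAST": 124, "STORE_FAST": 125, "COMPARE_OP": 107,
         "LOAD_DEREF": 137, "MAKE_CELL": 135, "LOAD_CLOSURE": 136,
         "JUMP_FORWARD": 110}

# haslocal: opcode name contains '_FAST'
haslocal = sorted(op for name, op in opmap.items() if "_FAST" in name)
# hascompare: opcode name starts with 'COMPARE_'
hascompare = sorted(op for name, op in opmap.items()
                    if name.startswith("COMPARE_"))
# hasfree: opcode name ends in DEREF, CELL, or CLOSURE
hasfree = sorted(op for name, op in opmap.items()
                 if name.endswith(("DEREF", "CELL", "CLOSURE")))
```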
The following are not public but imported by `dis.py` so they are still needed:
- `_nb_ops` -- just hardcode?
- `_specializations` -- derive from families (with some adjustments)
- `_specialized_instructions` -- compute from `_specializations`
- `_specialization_stats` -- only used by `test__opcode.py`, move into there?
- `_cache_format` -- compute from cache effects? (how to make up names?)
- `_inline_cache_entries` -- compute from `_cache_format`
The hardcoded stuff can go in prologue and epilogue sections that are updated manually.
This project (if we decide to do it) might be a good reason to refactor `generate_cases.py` into a library (the code for this shouldn't be shoved into the main file).
## Benefits
We can wait on this project until we are sure we need at least one of the following benefits:
- Once we are generating the numeric opcode values it will be easier to also generate numeric values for micro-opcodes.
- Avoid having to keep multiple definitions in sync (e.g. families, cache formats).
- Easier to understand. E.g. where are the numeric opcode values for specialized instructions defined? (Would require also subsuming generate_opcode_h.py.)
<!-- gh-linked-prs -->
### Linked PRs
* gh-102685
<!-- /gh-linked-prs -->
| d77c48740f5cd17597693bd0d27e32db725fa3a0 | cdb21ba74d933e262bc1696b6ce78b50cb5a4443 |
python/cpython | python__cpython-102649 | # Use sumprod() to simplify, speed up, and improve accuracy of statistics functions
* Use sumprod() which is faster, simpler, and more accurate than rounding each multiplication before summation.
* For an additional speed-up and simplification, compute the (x_xi - bar) only once instead of multiple times.
* For Spearman's rank correlation, we can skip the (x_xi - bar) step because the ranks are centered around zero.
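As a hedged illustration of the pattern (not the actual `statistics` code), centering each sequence once and then taking a single `sumprod()` per moment looks like this; the fallback lambda is only for Pythons without `math.sumprod` (added in 3.12):

```python
import math
import operator

# Fall back to a plain sum of products where math.sumprod is unavailable.
sumprod = getattr(math, "sumprod",
                  lambda p, q: sum(map(operator.mul, p, q)))

def correlation_sketch(xs, ys):
    # Compute (x_i - xbar) and (y_i - ybar) only once, then feed the
    # centered sequences to sumprod() instead of rounding each
    # multiplication before summation.
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    dx = [x - xbar for x in xs]
    dy = [y - ybar for y in ys]
    return sumprod(dx, dy) / math.sqrt(sumprod(dx, dx) * sumprod(dy, dy))
```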
<!-- gh-linked-prs -->
### Linked PRs
* gh-102649
<!-- /gh-linked-prs -->
| 457e4d1a516c2b83edeff2f255f4cd6e7b114feb | 61479d46848bc7a7f9b571b0b09c4a4b4436d839 |
python/cpython | python__cpython-102669 | # Add `#line` directives to generated_cases.c.h
It would be nice if compiler errors in Python/bytecodes.c actually pointed to the file you have to edit.
(Filing for @markshannon.)
<!-- gh-linked-prs -->
### Linked PRs
* gh-102669
<!-- /gh-linked-prs -->
| 70185de1abfe428049a5c43d58fcb656b46db96c | 61b9ff35cbda0cc59816951a17de073968fc25c6 |
python/cpython | python__cpython-102742 | # `random_combination_with_replacement` recipe has misleading docstring
# Documentation
The `random` module has four [recipes](https://docs.python.org/3/library/random.html#recipes) that are supposed to *"efficiently make random selections from the combinatoric iterators in the itertools module"*. And their docstrings all say *"Random selection from [iterator]"*. Both suggest they're equivalent to `random.choice(list(iterator))`, just efficiently.
For example, `itertools.combinations_with_replacement([0, 1], r=4)` produces these five combinations:
```
(0, 0, 0, 0)
(0, 0, 0, 1)
(0, 0, 1, 1)
(0, 1, 1, 1)
(1, 1, 1, 1)
```
So `random.choice(list(iterator))` would return one of those five with 20% probability each.
But the `random_combination_with_replacement` recipe instead produces these probabilities:
```
(0, 0, 0, 0) 6.25%
(0, 0, 0, 1) 25.00%
(0, 0, 1, 1) 37.50%
(0, 1, 1, 1) 25.00%
(1, 1, 1, 1) 6.25%
```
Here's an implementation that *is* equivalent to `random.choice(list(iterator))`; it uses the standard "stars and bars" bijection between r-multisets of an n-element pool and r-subsets of `range(n+r-1)`:
```python
def random_combination_with_replacement(iterable, r):
"Random selection from itertools.combinations_with_replacement(iterable, r)"
pool = tuple(iterable)
n = len(pool)
indices = sorted(random.sample(range(n+r-1), k=r))
return tuple(pool[i-j] for j, i in enumerate(indices))
```
---
One can view the combinations as the result of actually simulating r random draws with replacement, where the multiset `{0,0,1,1}` indeed occurs more often, namely as `0011`, `0101`, `0110`, etc. But that is not the only valid view and isn't the view suggested by the documentation (as my first paragraph argued). Though if that view and the bias is the intention, then I suggest its documentation should mention the bias.
<details><summary>Test code</summary>
[Attempt This Online!](https://ato.pxeger.com/run?1=1ZXNbtQwEMc5cfBTWEVoE5GELqoQqrSn3vfAtaqs1Jmwbv2F7QghxJNw6QUeiqdh_JH9EIvY9oCET7E985-Znyf2tx_2c9gY_fDwfQpj--7n82dCWeMCdb0ejCJlJgK4YIz0ZHRGUW6kBB6E0Z4WiyszaTQiJJr2txLoil6fN3R5Qxx-XhBCXrTtToi2Rwch1gkdqsXWcFGT0TgMqW6p0DuBLq4I3acs2CcRNsyBlT0HBSgwp9FQV18SiiMLR6-6JJNqSYYBIxxLZoCxkGB74X6LxqISm5WOxD57n0Soh8JtF_uxxZwlwW3WqyeJJA0HYXK61NfxjREcKil8MUX1eibFJ-dQo9iiJxcW6BNI_Qs2Fr2QS5ishO12LlnjugRdRZO8IvSAZXtc99jGMFQHOHycfoBK1w29X0Uee-BygCh1LW5obFKROjQrzuisM9Z4GP7E7vQmy0q9_G8Y-l5F94LwlWuXf8PY3mWQd01mCXpSsRUxh0J1xnpllO2d8EaTdD-Mk-bR49F_a3OKy0lG2wPav2_qve-YY8eY7hUwljfWCG15Hkea8hD_6HKXJvuDs0lsWCkTma4LyfmCbPDY4WPcLweBeh0KKF_VpVMOb0J0WHyJPq_Xl2-7Ny-_Lur8EJT3YH4XfgE)
```python
import random
import itertools
from collections import Counter
iterable = [0, 1]
r = 4
#-- itertools ----------------------
print('itertools')
for comb in itertools.combinations_with_replacement(iterable, r):
print(comb)
#-- from iterator ------------------
def random_combination_with_replacement_from_iterator(iterable, r):
"Random selection from itertools.combinations_with_replacement(iterable, r)"
iterator = itertools.combinations_with_replacement(iterable, r)
return random.choice(list(iterator))
#-- current random recipe ----------
def random_combination_with_replacement(iterable, r):
"Random selection from itertools.combinations_with_replacement(iterable, r)"
pool = tuple(iterable)
n = len(pool)
indices = sorted(random.choices(range(n), k=r))
return tuple(pool[i] for i in indices)
#-- proposed random recipe ---------
def random_combination_with_replacement_proposal(iterable, r):
"Random selection from itertools.combinations_with_replacement(iterable, r)"
pool = tuple(iterable)
n = len(pool)
indices = sorted(random.sample(range(n+r-1), k=r))
return tuple(pool[i-j] for j, i in enumerate(indices))
#-- Comparison
for func in random_combination_with_replacement_from_iterator, random_combination_with_replacement, random_combination_with_replacement_proposal:
print()
print(func.__name__)
N = 100000
ctr = Counter(func(iterable, r) for _ in range(N))
for comb, freq in sorted(ctr.items()):
print(comb, f'{freq/N:6.2%}')
```
</details>
<details><summary>Test results</summary>
```
itertools
(0, 0, 0, 0)
(0, 0, 0, 1)
(0, 0, 1, 1)
(0, 1, 1, 1)
(1, 1, 1, 1)
random_combination_with_replacement_from_iterator
(0, 0, 0, 0) 19.89%
(0, 0, 0, 1) 20.08%
(0, 0, 1, 1) 20.01%
(0, 1, 1, 1) 19.88%
(1, 1, 1, 1) 20.14%
random_combination_with_replacement
(0, 0, 0, 0) 6.14%
(0, 0, 0, 1) 24.98%
(0, 0, 1, 1) 37.71%
(0, 1, 1, 1) 25.04%
(1, 1, 1, 1) 6.13%
random_combination_with_replacement_proposal
(0, 0, 0, 0) 20.17%
(0, 0, 0, 1) 19.82%
(0, 0, 1, 1) 20.18%
(0, 1, 1, 1) 19.88%
(1, 1, 1, 1) 19.95%
```
</details>
<!-- gh-linked-prs -->
### Linked PRs
* gh-102742
* gh-102754
<!-- /gh-linked-prs -->
| b0ec6253c9cf1b22b87cd99f317dbaeb31263b20 | a297d59609038ccfc3bdf6f350e8401f07b0a931 |
python/cpython | python__cpython-102651 | # Duplicate #include directives in multiple C files
These are examples of duplicate `#include` directives that, as far as I can see, do not cause any side effects and therefore can be freely removed (although I have doubts about a few cases):
https://github.com/python/cpython/blob/634cb61909b1392f0eef3e749931777adb54f16e/Mac/Tools/pythonw.c#L30
https://github.com/python/cpython/blob/634cb61909b1392f0eef3e749931777adb54f16e/Modules/_hashopenssl.c#L40
https://github.com/python/cpython/blob/634cb61909b1392f0eef3e749931777adb54f16e/Modules/arraymodule.c#L16
https://github.com/python/cpython/blob/634cb61909b1392f0eef3e749931777adb54f16e/Modules/signalmodule.c#L31
https://github.com/python/cpython/blob/634cb61909b1392f0eef3e749931777adb54f16e/Programs/_testembed.c#L12
The PR is on the way.
<!-- gh-linked-prs -->
### Linked PRs
* gh-102651
<!-- /gh-linked-prs -->
| 85ba8a3e03707092800cbf2a29d95e0b495e3cb7 | 2d370da5702f07d044f0e612df68172852b89f42 |
python/cpython | python__cpython-103898 | # Prompt text in sqlite3 CLI (new in 3.12) is Linux oriented and doesn't apply to Windows
# Bug report
The command line command `python3.12 -m sqlite3` gives a prompt stating "[...]Type ".help" for more information; type ".quit" or CTRL-D to quit."
But on Windows, the Unixism `CTRL-D` doesn't work (instead, after typing ";" and ENTER, it reports: "OperationalError (SQLITE_ERROR): unrecognized token: "♦""); `CTRL-Z` _does_ work on Windows (but not on Linux, I guess).
Request to clarify the prompt.
# Environment
- CPython: Python 3.12.0a6 (tags/v3.12.0a6:f9774e5, Mar 7 2023, 23:52:43) [MSC v.1934 64 bit (AMD64)]
- Operating system: MS Windows 3.11 Pro
- Architecture: Intel i5
<!-- gh-linked-prs -->
### Linked PRs
* gh-103898
* gh-103945
<!-- /gh-linked-prs -->
| 8def5ef0160979c05a20f35b143d2314e639982b | 44b5c21f4124f9fa1312fada313c80c6abfa6d49 |
python/cpython | python__cpython-102630 | # Some supposedly invalid addresses in the documentation point toward malicious websites
## Describe the problem
I found in the documentation about concurrency some examples that have been "exploited" by malicious people:
in the [ThreadPoolExecutor Example](https://docs.python.org/3/library/concurrent.futures.html#threadpoolexecutor-example)
```python
import concurrent.futures
import urllib.request
URLS = ['http://www.foxnews.com/',
'http://www.cnn.com/',
'http://europe.wsj.com/',
'http://www.bbc.co.uk/',
'http://some-made-up-domain.com/'] # <<< (DO NOT TRY IT IN A BROWSER)
...
```
The last domain name is supposed to be non-existent.
However, when I tried the snippet, I got a valid response on the second try (the first one woke up their server).
It's not problematic with the code example, since the code of the page is just plain text, but anyone trying to go there through their browser might end up in some kind of troubles...
The content of the hosted page is apparently a "hard redirection" toward... something :
```js
<html><head><title>Loading...</title></head>
<body>
<script type='text/javascript'>window.location.replace(
'http://some-made-up-domain.com/?ch=1&js=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJhdWQiOiJKb2tlbiIsImV4cCI6MTY3ODYxNjgxMywiaWF0IjoxNjc4NjA5NjEzLCJpc3MiOiJKb2tlbiIsImpzIjoxLCJqdGkiOiIydDVwdDM2ajgyNjU0YjRma281ZjhhMGciLCJuYmYiOjE2Nzg2MDk2MTMsInRzIjoxNjc4NjA5NjEzODAyNDEzfQ.H4l5qNGb5Ex8ehG3hxX_kWx8ODqTMRgJs0HBeQyCx1Q&sid=a4f97e10-c0af-11ed-b324-9d77bf5b132c'
);
</script>
</body>
</html>
```
## Expected solution
Any invalid address in the docs should point to invalid page in trustful domains, to not allow this kind of security hole.
---
Cheers
<!-- gh-linked-prs -->
### Linked PRs
* gh-102630
* gh-102664
* gh-102665
* gh-102666
* gh-102667
* gh-102668
<!-- /gh-linked-prs -->
| 61479d46848bc7a7f9b571b0b09c4a4b4436d839 | 392f2ad3cbf2e1f24656fe0410a9b65882257582 |
python/cpython | python__cpython-102637 | # Representation of ParamSpecs at runtime compared to Callable
I think a list representation is more consistent with the text of PEP 612 and the runtime representation of Callable. We should change the runtime repr of ParamSpec generics to be more like Callable, because right now they're inconsistent:
```
>>> Callable[[int, str], bool]
typing.Callable[[int, str], bool]
>>> P = ParamSpec("P")
>>> T = TypeVar("T")
>>> class MyCallable(Generic[P, T]): pass
...
>>> MyCallable[[int, str], bool]
__main__.MyCallable[(<class 'int'>, <class 'str'>), bool]
>>> get_args(Callable[[int, str], bool])
([<class 'int'>, <class 'str'>], <class 'bool'>)
>>> get_args(MyCallable[[int, str], bool])
((<class 'int'>, <class 'str'>), <class 'bool'>)
```
_Originally posted by @JelleZijlstra in https://github.com/python/typing/issues/1274#issuecomment-1288302661_
<!-- gh-linked-prs -->
### Linked PRs
* gh-102637
* gh-102681
<!-- /gh-linked-prs -->
| 2b5781d659ce3ffe03d4c1f46e4875e604cf2d88 | 8647ba4b639077e201751ae6dbd82e8bfcf80895 |
python/cpython | python__cpython-104244 | # Path.rglob performance issues in deeply nested directories compared to glob.glob(recursive=True)
# Bug report
`Path.rglob` can be orders of magnitude slower than `glob.glob(recursive=True)`.
With a 1000-deep nested directory, `glob.glob` and `Path.glob` both took under 1 second. `Path.rglob` took close to 1.5 minutes.
```python
import glob
import os
from pathlib import Path
x = ""
for _ in range(1000):
x += "a/"
os.mkdir(x)
# ~ 0.5s
print(glob.glob("**/*", recursive=True))
# ~ 87s
print(list(Path(".").rglob("**/*")))
```
<!-- gh-linked-prs -->
### Linked PRs
* gh-104244
* gh-104373
* gh-104512
* gh-105744
* gh-105749
<!-- /gh-linked-prs -->
| c0ece3dc9791694e960952ba74070efaaa79a676 | 8d95012c95988dc517db6e09348aab996868699c |
python/cpython | python__cpython-102599 | # Remove special-casing from `FORMAT_VALUE` opcode
Right now there's a special case in handling this opcode:
https://github.com/python/cpython/blob/5ffdaf748d98da6065158534720f1996a45a0072/Python/bytecodes.c#L3060-L3073
It comes from older version of `ceval.c`:
https://github.com/python/cpython/blob/281078794f10a42d0aa99d660e25a434fde96ec4/Python/ceval.c#L4379-L4395
But, I think that the needed optimization is there: https://github.com/python/cpython/blob/5ffdaf748d98da6065158534720f1996a45a0072/Objects/abstract.c#L774-L777
So, I think this can be safely removed.
I will send a PR.
<!-- gh-linked-prs -->
### Linked PRs
* gh-102599
<!-- /gh-linked-prs -->
| 82eb9469e717e0047543732287a8342e646c2262 | 1a5a14183ec807ead1c6a46464540159124e5260 |
python/cpython | python__cpython-102596 | # `PyObject_Format` c-api function is not documented
# Documentation
I think it should be documented, for the following reasons:
1. It is a stable API that has existed since Python 3
2. It is implementing a builtin magic method `__format__`
3. It is quite useful for others
4. It is quite simple
5. There are no plans to remove / deprecate it
6. It is used in some parts of docs, example: https://github.com/python/cpython/blame/5ffdaf748d98da6065158534720f1996a45a0072/Doc/library/dis.rst#L1426
I will send a PR.
<!-- gh-linked-prs -->
### Linked PRs
* gh-102596
* gh-102878
* gh-102879
<!-- /gh-linked-prs -->
| 910a64e3013bce821bfac75377cbe88bedf265de | 7f760c2fca3dc5f46a518f5b7622783039bc8b7b |
python/cpython | python__cpython-102635 | # Another invalid JSON in
# Documentation
When building the documentation for Python 3.10.10 (with Sphinx 4.2.0), the build fails with this error:
```
[ 143s] Warning, treated as error:
[ 143s] /home/abuild/rpmbuild/BUILD/Python-3.10.10/Doc/howto/logging-cookbook.rst:341:Could not lex literal_block as "json". Highlighting skipped.
[ 144s] make: *** [Makefile:52: build] Error 2
[ 144s] error: Bad exit status from /var/tmp/rpm-tmp.wHpiCT (%build)
```
I blame https://github.com/python/cpython/commit/11c25a402d77bda507f8012ee2c14c95c835cf15, because two examples of JSON are not valid JSON documents [according to JSONLint](https://jsonlint.com/).
[Complete build log](https://github.com/python/cpython/files/10942937/_log.txt) listing all packages used and all steps taken to reproduce.
This patch makes documentation to be buildable:
```diff
---
Doc/howto/logging-cookbook.rst | 24 ++++++++++++++----------
1 file changed, 14 insertions(+), 10 deletions(-)
--- a/Doc/howto/logging-cookbook.rst
+++ b/Doc/howto/logging-cookbook.rst
@@ -340,10 +340,12 @@ adding a ``filters`` section parallel to
.. code-block:: json
- "filters": {
- "warnings_and_below": {
- "()" : "__main__.filter_maker",
- "level": "WARNING"
+ {
+ "filters": {
+ "warnings_and_below": {
+ "()" : "__main__.filter_maker",
+ "level": "WARNING"
+ }
}
}
@@ -351,12 +353,14 @@ and changing the section on the ``stdout
.. code-block:: json
- "stdout": {
- "class": "logging.StreamHandler",
- "level": "INFO",
- "formatter": "simple",
- "stream": "ext://sys.stdout",
- "filters": ["warnings_and_below"]
+ {
+ "stdout": {
+ "class": "logging.StreamHandler",
+ "level": "INFO",
+ "formatter": "simple",
+ "stream": "ext://sys.stdout",
+ "filters": ["warnings_and_below"]
+ }
}
A filter is just a function, so we can define the ``filter_maker`` (a factory
```
<!-- gh-linked-prs -->
### Linked PRs
* gh-102635
* gh-103106
* gh-103107
<!-- /gh-linked-prs -->
| d835b3f05de7e2d800138e5969eeb9656b0ed860 | 60bdc16b459cf8f7b359c7f87d8ae6c5928147a4 |
python/cpython | python__cpython-102573 | # [Enhancement] Speed up setting and deleting mutable attributes on non-dataclass subclasses of frozen dataclasses
# Feature or enhancement
The `dataclasses` library provides an easy way to create classes. The library will automatically generate relevant methods for the users.
Creating `dataclass`es with argument `frozen=True` will automatically generate methods `__setattr__` and `__delattr__` in `_frozen_get_del_attr`.
This issue proposes to change the `tuple`-based lookup to a `set`-based lookup, reducing the time complexity from $O(n)$ to $O(1)$.
```python
In [1]: # tuple-based
In [2]: %timeit 'a' in ('a', 'b', 'c', 'd', 'e', 'f', 'g')
9.91 ns ± 0.0982 ns per loop (mean ± std. dev. of 7 runs, 100,000,000 loops each)
In [3]: %timeit 'd' in ('a', 'b', 'c', 'd', 'e', 'f', 'g')
33.2 ns ± 0.701 ns per loop (mean ± std. dev. of 7 runs, 10,000,000 loops each)
In [4]: %timeit 'g' in ('a', 'b', 'c', 'd', 'e', 'f', 'g')
56.4 ns ± 0.818 ns per loop (mean ± std. dev. of 7 runs, 10,000,000 loops each)
In [5]: # set-based
In [6]: %timeit 'a' in {'a', 'b', 'c', 'd', 'e', 'f', 'g'}
11.3 ns ± 0.0723 ns per loop (mean ± std. dev. of 7 runs, 100,000,000 loops each)
In [7]: %timeit 'd' in {'a', 'b', 'c', 'd', 'e', 'f', 'g'}
11 ns ± 0.106 ns per loop (mean ± std. dev. of 7 runs, 100,000,000 loops each)
In [8]: %timeit 'g' in {'a', 'b', 'c', 'd', 'e', 'f', 'g'}
11.1 ns ± 0.126 ns per loop (mean ± std. dev. of 7 runs, 100,000,000 loops each)
```
A tiny benchmark script:
```python
from contextlib import suppress
from dataclasses import FrozenInstanceError, dataclass
@dataclass(frozen=True)
class Foo2:
a: int
b: int
foo2 = Foo2(1, 2)
def bench2(inst):
with suppress(FrozenInstanceError):
inst.a = 0
with suppress(FrozenInstanceError):
inst.b = 0
@dataclass(frozen=True)
class Foo7:
a: int
b: int
c: int
d: int
e: int
f: int
g: int
foo7 = Foo7(1, 2, 3, 4, 5, 6, 7)
def bench7(inst):
with suppress(FrozenInstanceError):
inst.a = 0
with suppress(FrozenInstanceError):
inst.b = 0
with suppress(FrozenInstanceError):
inst.c = 0
with suppress(FrozenInstanceError):
inst.d = 0
with suppress(FrozenInstanceError):
inst.e = 0
with suppress(FrozenInstanceError):
inst.f = 0
with suppress(FrozenInstanceError):
inst.g = 0
class Bar(Foo7):
def __init__(self, a, b, c, d, e, f, g):
super().__init__(a, b, c, d, e, f, g)
self.baz = 0
def bench(inst):
inst.baz = 1
```
Result:
`set`-based lookup:
```python
In [2]: %timeit bench2(foo2)
1.08 µs ± 28.1 ns per loop (mean ± std. dev. of 7 runs, 1,000,000 loops each)
In [3]: %timeit bench7(foo7)
3.81 µs ± 20.3 ns per loop (mean ± std. dev. of 7 runs, 100,000 loops each)
In [4]: %timeit bench(bar)
249 ns ± 6.31 ns per loop (mean ± std. dev. of 7 runs, 1,000,000 loops each)
```
`tuple`-based lookup (original):
```python
In [2]: %timeit bench2(foo2)
1.15 µs ± 10.9 ns per loop (mean ± std. dev. of 7 runs, 1,000,000 loops each)
In [3]: %timeit bench7(foo7)
3.97 µs ± 15.7 ns per loop (mean ± std. dev. of 7 runs, 100,000 loops each)
In [4]: %timeit bench(bar)
269 ns ± 4.09 ns per loop (mean ± std. dev. of 7 runs, 1,000,000 loops each)
```
The `set`-based lookup is consistently faster than the old approach, and the theoretical time complexity is also lower ($O(1)$ vs. $O(n)$).
Ref: #102573
# Pitch
In the autogenerate `__setattr__` and `__delattr__`, they have a sanity check at the beginning of the method. For example:
```python
def __setattr__(self, name, value):
if type(self) is {{UserType}} or name in ({{a tuple of field names}}):
raise FrozenInstanceError(f"cannot assign to field {name!r}")
super(cls, self).__setattr__(name, value)
```
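A standalone sketch of the proposed change: the generated method tests membership against a `frozenset` of field names ($O(1)$) rather than a tuple ($O(n)$). The class and field names here are made up for the example:

```python
from dataclasses import FrozenInstanceError

# Assumed field names; in the generated code this frozenset would be
# built by dataclasses from the actual fields.
_FIELDS = frozenset({'a', 'b', 'c', 'd', 'e', 'f', 'g'})

class FrozenLike:
    def __setattr__(self, name, value):
        # O(1) membership test instead of a linear tuple scan.
        if type(self) is FrozenLike or name in _FIELDS:
            raise FrozenInstanceError(f"cannot assign to field {name!r}")
        object.__setattr__(self, name, value)

class Child(FrozenLike):
    pass
```

Subclasses can still set non-field attributes, while assignments to declared fields raise as before.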
If someone inherits from the frozen dataclass, the sanity check spends $O(n)$ time in `tuple.__contains__(...)` before finally calling `super().__setattr__(...)`. For example:
```python
@dataclass(frozen=True)
class FrozenBase:
x: int
y: int
... # N_FIELDS
class Foo(FrozenBase):
def __init__(self, x, y, somevalue, someothervalue):
super().__init__(x, y)
self.somevalue = somevalue # takes O(N_FIELDS)
self.someothervalue = someothervalue # takes O(N_FIELDS) time again
foo = Foo(1, 2, 3, 4)
foo.extravalue = extravalue # takes O(N_FIELDS) time again
```
# Previous discussion
N/A.
<!-- gh-linked-prs -->
### Linked PRs
* gh-102573
<!-- /gh-linked-prs -->
| ee6f8413a99d0ee4828e1c81911e203d3fff85d5 | 90f1d777177e28b6c7b8d9ba751550e373d61b0a |
python/cpython | python__cpython-118655 | # Add `-X importtime=2` for additional logging when an imported module is already loaded
# Feature or enhancement
Add special handling for `-X importtime=2` that provides additional output when already-loaded modules are imported. This will allow users to get a complete picture of runtime imports.
# Pitch
While `-X importtime` is incredibly useful for analyzing module import times, by design, it doesn't log anything if an imported module has already been loaded. `-X importtime=2` would provide additional output for every module that's already been loaded:
```
>>> import uuid
import time: cached | cached | _io
import time: cached | cached | _io
import time: cached | cached | os
import time: cached | cached | sys
import time: cached | cached | enum
import time: cached | cached | _io
import time: cached | cached | _io
import time: cached | cached | collections
import time: cached | cached | os
import time: cached | cached | re
import time: cached | cached | sys
import time: cached | cached | functools
import time: cached | cached | itertools
import time: 151 | 151 | _wmi
import time: 18290 | 18440 | platform
import time: 372 | 372 | _uuid
import time: 10955 | 29766 | uuid
```
In codebases with convoluted/poorly managed import graphs (and consequently, workloads that suffer from long import times), the ability to record all paths to an expensive dependency–not just the first-imported–can help expedite refactoring (and help scale identification of this type of issue). More generally, this flag would provide a more efficient path to tracking runtime dependencies.
The changes required are largely unintrusive: here's my reference implementation: [will relink, need to do some paperwork].
# Previous discussion
Discussion: https://discuss.python.org/t/x-importtrace-to-supplement-x-importtime-for-loaded-modules/23882/5
Prior email chain: https://mail.python.org/archives/list/python-ideas@python.org/thread/GEISYQ5BXWGKT33RWF77EOSOMMMFUBUS/
<!-- gh-linked-prs -->
### Linked PRs
* gh-118655
* gh-133443
* gh-136391
* gh-136403
<!-- /gh-linked-prs -->
| c4bcc6a77864b42574ec663cde36f0cc10e97b46 | e6f8e0a035f4cbeffb331d90c295ab5894852c39 |
python/cpython | python__cpython-102565 | # Add docstrings to asyncio.TaskGroup
Python 3.11 added `asyncio.TaskGroup`[0]. We should encourage users of the language to use this class by adding docstrings and type-hinting, particularly since it has wide applications for users wishing to refactor their existing async code.
I'm happy to work on this but any suggestions for what the docstrings should include are welcome.
[0] https://docs.python.org/3/library/asyncio-task.html
<!-- gh-linked-prs -->
### Linked PRs
* gh-102565
* gh-102715
<!-- /gh-linked-prs -->
| e94edab727d07bef851e0e1a345e18a453be863d | 5fce813d8e547d6508daa386b67f230105c3a174 |
python/cpython | python__cpython-102601 | # AttributeError in 3.11 on repr() of enum member if the enum has a non-Enum EnumType parent
# Bug report
In CPython 3.11.0 - 3.11.2 (but not 3.10.10), I get a weird AttributeError when repr() is called on an Enum member of a class that was defined using multiple inheritance. Minimal code to reproduce:
```
from enum import Enum, EnumMeta
class _EnumSuperClass(metaclass=EnumMeta):
pass
class E(_EnumSuperClass, Enum):
A=1
print(repr(E.A))
```
With Python up to 3.10, the output is:
```
<E.A: 1>
```
With Python 3.11, the output is:
```
Traceback (most recent call last):
File "C:\devel\tmp\api\e.py", line 9, in <module>
print(repr(E.A))
^^^^^^^^^
File "C:\Program Files\Python311\Lib\enum.py", line 1169, in __repr__
return "<%s.%s: %s>" % (self.__class__.__name__, self._name_, v_repr(self._value_))
^^^^^^^^^^^^^^^^^^^^
File "C:\Program Files\Python311\Lib\enum.py", line 1168, in __repr__
v_repr = self.__class__._value_repr_ or repr
^^^^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: type object 'int' has no attribute '_value_repr_'
```
# Your environment
`Python 3.11.0 (main, Oct 24 2022, 18:26:48) [MSC v.1933 64 bit (AMD64)] on win32`
(But reproduced also on 3.11.2)
<!-- gh-linked-prs -->
### Linked PRs
* gh-102601
* gh-102977
* gh-103060
* gh-103064
<!-- /gh-linked-prs -->
| bd063756b34003c1bc7cacf5b1bd90a409180fb6 | 16f6165b71e81b5e4d0be660ac64a9fce7dfd86c |
python/cpython | python__cpython-103149 | # [Enum] Exception being ignored in custom datatypes
```python
import enum
class Base:
def __init__(self, x):
print('In Base init')
raise ValueError("I don't like", x)
class MyEnum(Base, enum.Enum):
A = 'a'
def __init__(self, y):
print('In MyEnum init')
self.y = y
```
`MyEnum` should not be created because of the exception in `Base.__init__`.
<!-- gh-linked-prs -->
### Linked PRs
* gh-103149
* gh-103154
<!-- /gh-linked-prs -->
| 2a4d8c0a9e88f45047da640ce5a92b304d2d39b1 | dfc4c95762f417e84dcb21dbbe6399ab7b7cef19 |
python/cpython | python__cpython-102614 | # `help` CLI shows a traceback when import failed
Currently, `help` will show a traceback when we type name of something which doesn't exist:
```python
help> 123
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<frozen _sitebuiltins>", line 103, in __call__
File "C:\Users\KIRILL-1\CLionProjects\cpython\Lib\pydoc.py", line 2004, in __call__
self.interact()
File "C:\Users\KIRILL-1\CLionProjects\cpython\Lib\pydoc.py", line 2031, in interact
self.help(request)
File "C:\Users\KIRILL-1\CLionProjects\cpython\Lib\pydoc.py", line 2057, in help
elif request: doc(request, 'Help on %s:', output=self._output)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\KIRILL-1\CLionProjects\cpython\Lib\pydoc.py", line 1781, in doc
pager(render_doc(thing, title, forceload))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\KIRILL-1\CLionProjects\cpython\Lib\pydoc.py", line 1755, in render_doc
object, name = resolve(thing, forceload)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\KIRILL-1\CLionProjects\cpython\Lib\pydoc.py", line 1741, in resolve
raise ImportError('''\
ImportError: No Python documentation found for '123'.
Use help() to get the interactive help utility.
Use help(str) for help on the str class.
>>>
```
I think this shouldn't be shown.
Is there any reason for this?
Can we show something like
`No python documentation found for '123'.`?
Reproducible on current main.
<!-- gh-linked-prs -->
### Linked PRs
* gh-102614
* gh-105778
* gh-105830
* gh-105934
* gh-106322
* gh-106323
* gh-106340
* gh-106639
* gh-106640
<!-- /gh-linked-prs -->
| ba516e70c6d156dc59dede35b6fc3db0151780a5 | 03160630319ca26dcbbad65225da4248e54c45ec |
python/cpython | python__cpython-102538 | # It is possible for python_tzpath_context to fail in test_zoneinfo
# Bug report
[This code](https://github.com/python/cpython/blob/11a2c6ce516b24b2435cb627742a6c4df92d411c/Lib/test/test_zoneinfo/test_zoneinfo.py#L1544-L1555) started failing the `pylint` check on `backports.zoneinfo` recently:
```python
@staticmethod
@contextlib.contextmanager
def python_tzpath_context(value):
path_var = "PYTHONTZPATH"
try:
with OS_ENV_LOCK:
old_env = os.environ.get(path_var, None)
os.environ[path_var] = value
yield
finally:
if old_env is None:
del os.environ[path_var]
else:
os.environ[path_var] = old_env # pragma: nocover
```
It's kind of a non-issue, but it *is* true that if an error is raised while acquiring `OS_ENV_LOCK` or when calling `os.environ.get`, `old_env` won't be set, which will raise a `NameError`. This is a kind of rare situation anyway, and it would probably only come up during a non-recoverable error, but we may as well fix it — if only so that people who use CPython as an example of "good code" will have an example of the right way to handle this sort of situation.
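One way to fix it is to bind `old_env` before entering the `try` block, so it is always defined when `finally` runs. A hedged sketch (not necessarily the exact patch that landed):

```python
import contextlib
import os
import threading

OS_ENV_LOCK = threading.Lock()

@contextlib.contextmanager
def python_tzpath_context(value):
    path_var = "PYTHONTZPATH"
    # Acquire the lock and read the old value *before* the try block,
    # so a failure here can't leave old_env unbound in `finally`.
    with OS_ENV_LOCK:
        old_env = os.environ.get(path_var, None)
        try:
            os.environ[path_var] = value
            yield
        finally:
            if old_env is None:
                del os.environ[path_var]
            else:
                os.environ[path_var] = old_env
```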
Probably this is trivial enough that we can skip an issue and news, but I'll just create one anyway 😛
<!-- gh-linked-prs -->
### Linked PRs
* gh-102538
* gh-102586
* gh-102587
<!-- /gh-linked-prs -->
| 64bde502cf89963bc7382b03ea9e1c0967d22e35 | 53dceb53ade15587b9cfd30c0a0942232517dee9 |
python/cpython | python__cpython-102544 | # Add `os.listdrives` on Windows
We don't currently have a way to get all the root directories on Windows, since `os.listdir('/')` doesn't contain everything.
I propose a very simple API:
```
>>> os.listdrives()
["C:\\", "D:\\", ...]
>>> os.listdrives(uuids=True)
["\\?\Volume{4c1b02c1-d990-11dc-99ae-806e6f6e6963}\", ...]
```
Basically, `listdrives(uuids=True)` would return everything found by [`FindNextVolume`](https://learn.microsoft.com/en-us/windows/win32/api/fileapi/nf-fileapi-findnextvolumew), while `listdrives()` would apply [`GetVolumePathNamesForVolumeName`](https://learn.microsoft.com/en-us/windows/win32/api/fileapi/nf-fileapi-getvolumepathnamesforvolumenamew) to each GUID and return all of those. (This may return the same volume multiple times under different names, which is fine - `os.stat` can be used to see if they're the same `st_dev`.)
There's an endless amount of variations we could also apply here, but the most important functionality in my mind is to expose the OS API. App developers at least then have a starting point to do whatever they may need.
<!-- gh-linked-prs -->
### Linked PRs
* gh-102544
* gh-102585
* gh-102706
<!-- /gh-linked-prs -->
| cb35882773a3ffc7fe0671e64848f4c926a2d52f | 2999e02836f9112de6b17784eaca762fb87e71a9 |
python/cpython | python__cpython-102516 | # Unused imports in the `Lib/` directory
The [`pycln`](https://github.com/hadialqattan/pycln) tool has highlighted 34 files in the `Lib/` directory which appear to have unnecessary imports.
<!-- gh-linked-prs -->
### Linked PRs
* gh-102516
<!-- /gh-linked-prs -->
| 401d7a7f009ca2e282b1a0d1b880dc602afd39dc | 7d801f245e2021d19daff105ce722f22aa844391 |
python/cpython | python__cpython-118089 | # Add C implementation of os.path.splitroot()
# Feature or enhancement
Speed up `os.path.splitroot()` by implementing it in C.
# Pitch
[Per](https://github.com/python/cpython/pull/102454#discussion_r1128466021) @eryksun:
> I think `splitroot()` warrants a C implementation since it's a required step in our basic path handling on Windows -- `join()`, `split()`, `relpath()`, and `commonpath()`. Speeding it up gives a little boost across the board. Also, it would be less confusing if `nt._path_splitroot()` actually implemented `ntpath.splitroot()`.
>
> If implemented, using `_Py_splitroot()` in `_Py_normpath()` would help to ensure consistency. Currently, for example, `ntpath.normpath('//?/UNC/server/share/../..')` is correct on POSIX but wrong on Windows because `_Py_normpath()` incorrectly handles "//?/UNC/" as the root instead of "//?/UNC/server/share/".
# Previous discussion
- https://discuss.python.org/t/add-os-path-splitroot/22243
- https://github.com/python/cpython/issues/101000
- https://github.com/python/cpython/pull/102454
<!-- gh-linked-prs -->
### Linked PRs
* gh-118089
* gh-119394
* gh-119895
* gh-119980
* gh-124097
* gh-124919
<!-- /gh-linked-prs -->
| 10bb90ed49a81a525b126ce8e4d8564c1616d0b3 | e38b43c213a8ab2ad9748bac2732af9b58c816ae |
python/cpython | python__cpython-102510 | # cpython3:fuzz_builtin_unicode: Use-of-uninitialized-value in maybe_small_long
There is a bug disclosed by oss-fuzz: https://bugs.chromium.org/p/oss-fuzz/issues/detail?id=51574.
I can reproduce it by running `CC=clang ./configure --with-memory-sanitizer && make -j12`.
<!-- gh-linked-prs -->
### Linked PRs
* gh-102510
* gh-102838
* gh-107464
<!-- /gh-linked-prs -->
| fc130c47daa715d60d8925c478a96d5083e47b6a | a1b679572e7156bc4299819e73235dc608d4a570 |
python/cpython | python__cpython-102531 | # Remove invisible pagebreak chars from the standard library
Following a discussion on a PR to clean up some unused vars in the email module: https://github.com/python/cpython/pull/102482,
there seemed to be a general feeling that we should remove invisible pagebreak characters (U+000C) from the standard library.
A quick Ctrl-F with VS Code shows <50 occurrences, which I propose be deleted and optionally replaced with newlines. Each change should be checked for PEP 8 compliance, in particular the two-blank-lines rule.
I'm happy to submit a PR for this.
<!-- gh-linked-prs -->
### Linked PRs
* gh-102531
* gh-108266
<!-- /gh-linked-prs -->
| b097925858c6975c73e989226cf278cc382c0416 | 401d7a7f009ca2e282b1a0d1b880dc602afd39dc |
python/cpython | python__cpython-102521 | # Implement PEP 688: Making the buffer protocol accessible in Python
PEP-688 has just been accepted. I will use this issue to track its implementation in CPython.
<!-- gh-linked-prs -->
### Linked PRs
* gh-102521
* gh-102571
* gh-104174
* gh-104281
* gh-104288
* gh-104317
<!-- /gh-linked-prs -->
| 04f673327530f47f002e784459037231de478412 | b17d32c1142d16a5fea0c95bce185bf9be696491 |
python/cpython | python__cpython-102482 | # Clean up unused variables and imports in the email module
I propose cleaning up some unused vars/imports in the email module.
This is mostly just tidying up a few things / housekeeping.
PR: https://github.com/python/cpython/pull/102482
<!-- gh-linked-prs -->
### Linked PRs
* gh-102482
<!-- /gh-linked-prs --> | 04ea04807dc76955f56ad25a81a0947536343c09 | 58b6be3791f55ceb550822ffd8664eca10fd89c4 |
python/cpython | python__cpython-102502 | # gh-101607 causes regression in third-party psutil package on macOS
As part of the CPython release process for macOS installers, besides the standard CPython test suite, we run a small set of additional smoke tests. One of these involves an install from source of [the psutil package](https://github.com/giampaolo/psutil) which includes the building of a C extension module. In the run up to the 3.12.0a6 release, we found that our long-unchanged simple test was now failing, a regression from 3.12.0a5 and 3.11.x. The commit causing the failure was bisected to feec49c40736fc05626a183a8d14c4ebbea5ae28 from GH-101607, a part of issue #101578. From a cursory glance at the traceback and the psutil code, it appears that psutil uses a [decorator to translate certain OSError exceptions](https://github.com/giampaolo/psutil/blob/master/psutil/_psosx.py#L339) from its C module into other Python exceptions that its calling code is prepared to handle or ignore. For some reason, that exception translation is now failing. FWIW, this is the first failure of this psutil smoke test in recent memory including previous 3.12.0 alphas. Also, FWIW, this same smoke test does not fail on a reasonably recent Debian Linux system but, of course, that exercises a different platform-dependent code path in psutil.
Because of its dependencies, it is a bit complicated to build and reproduce without changes. Here is one way to do it; it is likely not optimal. This assumes a fairly recent macOS system with Homebrew installed (as described in the devguide) and assumes git checkouts of the cpython and psutil Github repos.
```
brew install pkg-config openssl@1.1 xz
mkdir /tmp/test_psutil
cd /tmp/test_psutil
git clone https://github.com/giampaolo/psutil.git
git clone https://github.com/python/cpython.git # or from an existing repo
cd cpython
git checkout feec49c40736fc05626a183a8d14c4ebbea5ae28^ # last good commit
git clean -fdxq
./configure -q --prefix=/tmp/none --with-pydebug \
--with-openssl="$(brew --prefix openssl@1.1)"
make -s -j2
cd ..
git -C ./psutil clean -fdxq
rm -rf venv
./cpython/python.exe -m venv venv
venv/bin/python -m pip install --no-binary psutil ./psutil/
venv/bin/python -c 'import psutil, pprint; pprint.pprint(list(psutil.process_iter()))'
```
The expected result should be something like:
```
[psutil.Process(pid=0, name='kernel_task', status='running', started='03:25:52'),
psutil.Process(pid=1, name='launchd', status='running', started='03:25:52'),
psutil.Process(pid=75, name='logd', status='running', started='03:25:54'),
psutil.Process(pid=76, name='UserEventAgent', status='running', started='03:25:54'),
...
```
With the failing commit applied (up to and including the current head of the main branch):
```
git checkout feec49c40736fc05626a183a8d14c4ebbea5ae28  # failing commit
```
and repeating the above build and test, the result is now something like:
```
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/private/tmp/cpython/Lib/pprint.py", line 55, in pprint
printer.pprint(object)
File "/private/tmp/cpython/Lib/pprint.py", line 153, in pprint
self._format(object, self._stream, 0, 0, {}, 0)
File "/private/tmp/cpython/Lib/pprint.py", line 175, in _format
rep = self._repr(object, context, level)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/private/tmp/cpython/Lib/pprint.py", line 455, in _repr
repr, readable, recursive = self.format(object, context.copy(),
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/private/tmp/cpython/Lib/pprint.py", line 468, in format
return self._safe_repr(object, context, maxlevels, level)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/private/tmp/cpython/Lib/pprint.py", line 619, in _safe_repr
orepr, oreadable, orecur = self.format(
^^^^^^^^^^^^
File "/private/tmp/cpython/Lib/pprint.py", line 468, in format
return self._safe_repr(object, context, maxlevels, level)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/private/tmp/cpython/Lib/pprint.py", line 629, in _safe_repr
rep = repr(object)
^^^^^^^^^^^^
File "/private/tmp/venv/lib/python3.12/site-packages/psutil/__init__.py", line 389, in __str__
info["name"] = self.name()
^^^^^^^^^^^
File "/private/tmp/venv/lib/python3.12/site-packages/psutil/__init__.py", line 628, in name
cmdline = self.cmdline()
^^^^^^^^^^^^^^
File "/private/tmp/venv/lib/python3.12/site-packages/psutil/__init__.py", line 681, in cmdline
return self._proc.cmdline()
^^^^^^^^^^^^^^^^^^^^
File "/private/tmp/venv/lib/python3.12/site-packages/psutil/_psosx.py", line 346, in wrapper
return fun(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/private/tmp/venv/lib/python3.12/site-packages/psutil/_psosx.py", line 404, in cmdline
return cext.proc_cmdline(self.pid)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
OSError: [Errno 1] Operation not permitted (originated from sysctl(KERN_PROCARGS2))
```
Note this seems to come from trying to inspect a privileged process and the point of the exception wrapping is to effectively ignore such errors.
Flagging as a potential release-blocker for @Yhg1s
<!-- gh-linked-prs -->
### Linked PRs
* gh-102502
* gh-102602
<!-- /gh-linked-prs -->
| a33ca2ad1fcf857817cba505a788e15cf9d6ed0c | 54060ae91da2df44b3f6e6c698694d40284687e9 |
python/cpython | python__cpython-102492 | # Remove legacy ironpython 2 version check
<!--
If you're new to Python and you're not sure whether what you're experiencing is a bug, the CPython issue tracker is not
the right place to seek help. Consider the following options instead:
- reading the Python tutorial: https://docs.python.org/3/tutorial/
- posting in the "Users" category on discuss.python.org: https://discuss.python.org/c/users/7
- emailing the Python-list mailing list: https://mail.python.org/mailman/listinfo/python-list
- searching our issue tracker (https://github.com/python/cpython/issues) to see if
your problem has already been reported
-->
# Bug report
The platform module contains various version checks for systems such as CPython and IronPython. The file also contains a specific case for IronPython 2.6 and 2.7 which can be removed (platform.py does not run with Python 2).
Removing the check improves the import speed as it avoids the compilation of a regular expression.
```
%timeit _ironpython26_sys_version_parser = re.compile(r'([\d.]+)\s*' r'\(IronPython\s*' r'[\d.]+\s*' r'\(([\d.]+)\) on ([\w.]+ [\d.]+(?: \(\d+-bit\))?)\)' )
383 ns ± 1.66 ns per loop (mean ± std. dev. of 7 runs, 1,000,000 loops each)
```
<!-- gh-linked-prs -->
### Linked PRs
* gh-102492
<!-- /gh-linked-prs -->
| 382ee2f0f2be662fbcabcb4a6b38de416cea0cae | 699cb20ae6fdef8b0f13d633cf4858465ef3469f |
python/cpython | python__cpython-112557 | # PyCFunction_New is not documented
`PyCFunction_New`, `PyCFunction_NewEx` and `PyCMethod_New`, which can be used to turn `PyMethodDef` into a Python callable, are currently not documented, though they're useful, stable, and used in the wild.
<!-- gh-linked-prs -->
### Linked PRs
* gh-112557
* gh-114119
* gh-114120
<!-- /gh-linked-prs -->
| a482bc67ee786e60937a547776fcf9528810e1ce | c361a1f395de9e508b6e1b0a4c5e69204905a7b7 |
python/cpython | python__cpython-102457 | # Remove docstring and getopt short options for deleted option "python -m base64 -t"
The `-t` option for base64 (`python -m base64 -t`) was removed in PR https://github.com/python/cpython/pull/94230,
but some references to it still exist in the docstring and getopt short options. Because of this, we can still run the following command
without getting any error:
```bash
python -m base64 -t /dev/null
```
<!-- gh-linked-prs -->
### Linked PRs
* gh-102457
* gh-103183
* gh-103184
<!-- /gh-linked-prs -->
| d828b35785eeb590b8ca92684038f33177989e46 | 06249ec89fffdab25f7088c86fcbbfdcebcc3ebd |
python/cpython | python__cpython-105856 | # datetime library doesn't support valid ISO-8601 alternative for midnight
<!--
If you're new to Python and you're not sure whether what you're experiencing is a bug, the CPython issue tracker is not
the right place to seek help. Consider the following options instead:
- reading the Python tutorial: https://docs.python.org/3/tutorial/
- posting in the "Users" category on discuss.python.org: https://discuss.python.org/c/users/7
- emailing the Python-list mailing list: https://mail.python.org/mailman/listinfo/python-list
- searching our issue tracker (https://github.com/python/cpython/issues) to see if
your problem has already been reported
-->
# Bug report
According to [ISO-8601](https://en.wikipedia.org/wiki/ISO_8601), a time of `24:00` on a given date is a valid alternative to `00:00` of the following date, however Python does not support this, raising the following error when attempted: `ValueError: hour must be in 0..23`.
This bug can be seen from multiple scenarios, specifically anything that internally calls the [`_check_time_fields` function](https://github.com/python/cpython/blob/main/Lib/datetime.py#L531-L546), such as the following:
```python
>>> import datetime
>>> datetime.datetime(2022, 1, 2, 24, 0, 0) # should be equivalent to 2022-01-03 00:00:00
Traceback (most recent call last):
File "<pyshell#2>", line 1, in <module>
datetime.datetime(2022, 3, 4, 24, 0, 0)
ValueError: hour must be in 0..23
```
The fix for this is relatively simple: have an explicit check within [`_check_time_fields`](https://github.com/python/cpython/blob/main/Lib/datetime.py#L531-L546) for the scenario where `hour == 24 and minute == 0 and second == 0 and microsecond == 0`, or more concisely `hour == 24 and not any((minute, second, microsecond))`, and in this scenario increase the day by one (adjusting the week/month/year as necessary) and set the hour to 0.
I imagine the [`check_time_args` C function](https://github.com/python/cpython/blob/main/Modules/_datetimemodule.c#L467-L499) would also have to be updated.
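A hypothetical wrapper (the helper name is invented for illustration, not part of the proposed patch) shows the intended normalization:

```python
from datetime import date, datetime, timedelta


def datetime_iso_midnight(year, month, day, hour=0, minute=0, second=0,
                          microsecond=0):
    # Hypothetical helper: accept ISO 8601 "24:00" by rolling over to
    # 00:00 of the following day, adjusting month/year as needed.
    if hour == 24 and not any((minute, second, microsecond)):
        next_day = date(year, month, day) + timedelta(days=1)
        return datetime(next_day.year, next_day.month, next_day.day)
    return datetime(year, month, day, hour, minute, second, microsecond)


assert datetime_iso_midnight(2022, 1, 2, 24) == datetime(2022, 1, 3)
assert datetime_iso_midnight(2022, 12, 31, 24) == datetime(2023, 1, 1)
```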
# Your environment
<!-- Include as many relevant details as possible about the environment you experienced the bug in -->
- CPython versions tested on: 3.9.12, 3.10.7, 3.11.2 (presumably applies to all)
- Operating system and architecture: MacOS Ventura arm64 (presumably applies to all)
<!--
You can freely edit this text. Remove any lines you believe are unnecessary.
-->
<!-- gh-linked-prs -->
### Linked PRs
* gh-105856
<!-- /gh-linked-prs -->
| b0c6cf5f17f0be13aa927cf141a289f7b76ae6b1 | 68e384c2179fba41bc3be469e6ef34927a37f4a5 |
python/cpython | python__cpython-102445 | # Several minor bugs in `test_typing` highlighted by pyflakes
Following #102437 by @JosephSBoyle, I tried running pyflakes on `test_typing.py`. It highlighted:
- Another instance where a test isn't testing what it was meant to be testing
- An instance where a test is entirely duplicated
- Several unused imports or variables.
<!-- gh-linked-prs -->
### Linked PRs
* gh-102445
* gh-102451
* gh-102452
<!-- /gh-linked-prs -->
| 96e10229292145012bc462a6ab3ce1626c8acf71 | 32220543e2db36c6146ff2704ed1714a6adecc1b |
python/cpython | python__cpython-102449 | # `isinstance` on `runtime_checkable` `Protocol` has side-effects for `@property` methods
For example:
```python
from typing import Protocol, runtime_checkable
@runtime_checkable
class X(Protocol):
@property
def myproperty(self): ...
class Y:
@property
def myproperty(self):
raise RuntimeError("hallo")
isinstance(Y(), X)
```
will raise the `RuntimeError`
This is an issue, for example, if `myproperty` is an expensive call, has unwanted side effects, or excepts outside of a context manager
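A side-effect-free lookup is possible via `inspect.getattr_static`, which finds an attribute without invoking the descriptor protocol. This is a sketch of the kind of check a fix could use, not necessarily the exact implementation in the linked PRs:

```python
import inspect


class Y:
    @property
    def myproperty(self):
        raise RuntimeError("hallo")


# getattr(Y(), "myproperty") would execute the property body and raise,
# but getattr_static looks the name up on the type without triggering
# descriptors, so the RuntimeError never fires.
attr = inspect.getattr_static(Y(), "myproperty", None)
assert isinstance(attr, property)
```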
<!-- gh-linked-prs -->
### Linked PRs
* gh-102449
* gh-102592
* gh-102593
* gh-103034
<!-- /gh-linked-prs -->
| 5ffdaf748d98da6065158534720f1996a45a0072 | 08b67fb34f4519be1b0bb4673643a2c761c7ae92 |
python/cpython | python__cpython-102467 | # Pegen improperly memoizes loop rules
# Bug report
In the following part of pegen, the memoized result is only used if `self._should_memoize(node)` is true. Since loop rules are autogenerated, they are not marked with `(memo)`, so this is always false. Despite that, in the end the result is memoized based on the different condition `if node.name`. Because of that, in the generated parser loop rules' results are always stored in the cache, but never accessed later.
https://github.com/python/cpython/blob/8de59c1bb9fdcea69ff6e6357972ef1b75b71721/Tools/peg_generator/pegen/c_generator.py#L608-L647
# Your environment
Does not matter - discovered through manual code analysis
<!-- gh-linked-prs -->
### Linked PRs
* gh-102467
* gh-102473
* gh-102474
<!-- /gh-linked-prs -->
| f533f216e6aaba3f36639ae27210420e7dcf9de1 | 6716254e71eeb4666fd6d1a13857832caad7b19f |
python/cpython | python__cpython-102407 | # codecs can use PEP-678 notes instead of wrapping/chaining exceptions
[In codecs.c](https://github.com/python/cpython/blob/cb944d0be869dfb1189265467ec8a986176cc104/Python/codecs.c#L397) there is a call to _PyErr_TrySetFromCause (a function that is only called from here), which tries to create a new exception of the same type with a different message, but bails out if there are any complications that make this unsafe. The purpose is to add information about the context of the operation that failed, without changing the type of the exception. This can be solved more easily and robustly with PEP-678 exception notes.
<!-- gh-linked-prs -->
### Linked PRs
* gh-102407
<!-- /gh-linked-prs -->
| 76350e85ebf96df588384f3d9bdc20d11045bef4 | e6ecd3e6b437f3056e0a410a57c52e2639b56353 |
python/cpython | python__cpython-105251 | # Document the build process for WASI
While it's great we have Python scripts to automate the WASI build process, it would probably be good to write down the process the scripts follow as independent documentation.
<!-- gh-linked-prs -->
### Linked PRs
* gh-105251
<!-- /gh-linked-prs -->
| 70dc2fb9732ba3848ad3ae511a9d3195b1378915 | e01b04c9075c6468ed57bc883693ec2a06a6dd8e |
python/cpython | python__cpython-102412 | # Logging's msecs doesn't handle "100ms" well.
# Bug report
`LogRecord.msecs` returns an incorrect value when the timestamp (`self.ct`) has exactly 100 ms.
One liner check:
```python
assert int((1677793338.100_000_0 - int(1677793338.100_000_0)) * 1000) + 0.0 == 100.0
```
The issue is binary representation of "0.1" / floating point error:
```python
>>> # Definition of LogRecord.msecs:
>>> # https://github.com/python/cpython/blob/12011dd8bafa6867f2b4a8a9e8e54cb0fbf006e4/Lib/logging/__init__.py#L343
>>> # int((ct - int(ct)) * 1000) + 0.0
>>> ct = 1677793338.100_000_0
>>> ct
1677793338.1
>>> ct - int(ct)
0.09999990463256836
>>> _ * 1000
99.99990463256836
>>> int(_)
99
>>> _ + 0.0
99.0
```
# Your environment
- CPython versions tested on:
- 3.10.9
- 3.11.2
- Operating system and architecture:
- Custom company OS based on Debian Testing
- 64-bit
# Discussion
I think switching to [`time.time_ns`](https://docs.python.org/3/library/time.html#time.time_ns) when [setting the creation time](https://github.com/python/cpython/blob/12011dd8bafa6867f2b4a8a9e8e54cb0fbf006e4/Lib/logging/__init__.py#L303) might be one solution.
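A sketch of why the integer-nanosecond route sidesteps the problem: milliseconds derived with integer arithmetic cannot pick up the binary rounding of `0.1`.

```python
import time

# The timestamp from the report, expressed exactly as an integer in ns.
ct_ns = 1_677_793_338_100_000_000

# Integer arithmetic: no floating-point representation of 0.1 involved,
# so the result is 100, not 99.
msecs = (ct_ns // 1_000_000) % 1000
assert msecs == 100

# time.time_ns() would supply such an integer at record creation time.
assert isinstance(time.time_ns(), int)
```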
<!-- gh-linked-prs -->
### Linked PRs
* gh-102412
* gh-117985
* gh-118062
<!-- /gh-linked-prs -->
| 1316692e8c7c1e1f3b6639e51804f9db5ed892ea | 1a1e013a4a526546c373afd887f2e25eecc984ad |
python/cpython | python__cpython-102399 | # Possible race condition in signal handling
The following code segfaults the interpreter on Linux. Tested on current main.
```py
import gc
import _thread
gc.set_threshold(1, 0, 0)
def cb(*args):
_thread.interrupt_main()
gc.callbacks.append(cb)
def gen():
yield 1
g = gen()
g.__next__()
```
```console
Exception ignored in: <function cb at 0x7f7f4f6fe200>
Traceback (most recent call last):
File "/workspaces/cpython/main.py", line 7, in cb
_thread.interrupt_main()
KeyboardInterrupt:
Exception ignored in: <function cb at 0x7f7f4f6fe200>
Traceback (most recent call last):
File "/workspaces/cpython/main.py", line 7, in cb
_thread.interrupt_main()
KeyboardInterrupt:
Exception ignored in: <function cb at 0x7f7f4f6fe200>
Traceback (most recent call last):
File "/workspaces/cpython/main.py", line 7, in cb
_thread.interrupt_main()
KeyboardInterrupt:
Exception ignored in: <function cb at 0x7f7f4f6fe200>
Traceback (most recent call last):
File "/workspaces/cpython/main.py", line 7, in cb
_thread.interrupt_main()
KeyboardInterrupt:
Segmentation fault (core dumped)
```
<!-- gh-linked-prs -->
### Linked PRs
* gh-102399
* gh-102527
* gh-102528
<!-- /gh-linked-prs -->
| 1a84cc007e207f2dd61f86a7fc3d86632fdce72f | 061325e0d2bbec6ff89d03f527c91dc7bfa14003 |
python/cpython | python__cpython-102389 | # Python cannot run in the ja_JP.sjis locale used windows-31j encoding.
<!--
If you're new to Python and you're not sure whether what you're experiencing is a bug, the CPython issue tracker is not
the right place to seek help. Consider the following options instead:
- reading the Python tutorial: https://docs.python.org/3/tutorial/
- posting in the "Users" category on discuss.python.org: https://discuss.python.org/c/users/7
- emailing the Python-list mailing list: https://mail.python.org/mailman/listinfo/python-list
- searching our issue tracker (https://github.com/python/cpython/issues) to see if
your problem has already been reported
-->
# Bug report
Linux using glibc cannot run Python when ja_JP.sjis locale is set as follows.
```
$ sudo dnf install glibc-locale-source # RHEL or RHEL compatible Linux distribution
$ sudo localedef -f WINDOWS-31J -i ja_JP ja_JP.sjis
$ export LANG=ja_JP.SJIS
$ python3
Python path configuration:
PYTHONHOME = (not set)
PYTHONPATH = (not set)
program name = 'python3'
isolated = 0
environment = 1
user site = 1
import site = 1
sys._base_executable = '/usr/bin/python3'
sys.base_prefix = '/usr'
sys.base_exec_prefix = '/usr'
sys.platlibdir = 'lib64'
sys.executable = '/usr/bin/python3'
sys.prefix = '/usr'
sys.exec_prefix = '/usr'
sys.path = [
'/usr/lib64/python39.zip',
'/usr/lib64/python3.9',
'/usr/lib64/python3.9/lib-dynload',
]
Fatal Python error: init_fs_encoding: failed to get the Python codec of the filesystem encoding
Python runtime state: core initialized
LookupError: unknown encoding: WINDOWS-31J
Current thread 0x00007f49d76ee740 (most recent call first):
<no Python frame>
```
The charset name "Windows-31J" is registered in the IANA Charset Registry[1].
Windows-31J is supported by perl[2], php[3], ruby[4], java[5], etc.
Python's cp932 is equivalent to Windows-31J, so I propose to add windows_31j to aliases for cp932.
[1] https://www.iana.org/assignments/charset-reg/windows-31J
[2] https://perldoc.perl.org/Encode::JP
[3] https://www.php.net/manual/en/mbstring.encodings.php
[4] https://docs.ruby-lang.org/ja/latest/class/Encoding.html
[5] https://docs.oracle.com/en/java/javase/19/intl/supported-encodings.html
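Since cp932 is equivalent to Windows-31J, the change amounts to an alias; a quick round-trip check with the existing codec (the alias lookup in the comment only succeeds once the proposal lands):

```python
import codecs

s = "テスト"  # Japanese text representable in Windows-31J / CP932
assert s.encode("cp932").decode("cp932") == s

# On unpatched interpreters this lookup raises LookupError; the proposal
# would make "windows-31j" resolve to the cp932 codec.
try:
    codec = codecs.lookup("windows-31j")
except LookupError:
    codec = None
```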
# Your environment
<!-- Include as many relevant details as possible about the environment you experienced the bug in -->
- CPython versions tested on: 3.9.13, 3.12.0a5+
- Operating system and architecture: MIRACLE LINUX 8.6 x86_64 (RHEL 8.6 compatible)
<!--
You can freely edit this text. Remove any lines you believe are unnecessary.
-->
<!-- gh-linked-prs -->
### Linked PRs
* gh-102389
<!-- /gh-linked-prs -->
| 1476ac2c58669b0a78d3089ebb9cc87955a1f5b0 | 177b9cb52e57da4e62dd8483bcd5905990d03f9e |
python/cpython | python__cpython-102390 | # Check documentation of PyObject_CopyData
According to the documentation, `PyObject_CopyData` takes two arguments of type `Py_buffer*`:
https://github.com/python/cpython/blob/71db5dbcd714b2e1297c43538188dd69715feb9a/Doc/c-api/buffer.rst?plain=1#L502
But in the implementation, it actually takes two arguments of `PyObject*`:
https://github.com/python/cpython/blob/71db5dbcd714b2e1297c43538188dd69715feb9a/Objects/abstract.c#L613
<!-- gh-linked-prs -->
### Linked PRs
* gh-102390
* gh-102401
<!-- /gh-linked-prs -->
| 7b9132057d8f176cb9c40e8324f5122a3132ee58 | 4e7c0cbf59595714848cf9827f6e5b40c3985924 |
python/cpython | python__cpython-102382 | # raising an exception in a dict/code/function watcher callback will segfault on a DEALLOCATED event
Because the object we pass into the callback is dead (zero refcount), and if the callback raises an error we try to `PyErr_WriteUnraisable` with that object as context, which increfs and decrefs it, which recursively again tries to deallocate it, and we end up in infinite recursion and overflowing the stack.
There are various other ways a callback could trigger this issue. We shouldn't pass around dead objects; instead `*_dealloc` should temporarily resurrect it before calling the callback.
<!-- gh-linked-prs -->
### Linked PRs
* gh-102382
<!-- /gh-linked-prs -->
| 1e703a473343ed198c9a06a876b25d7d69d4bbd0 | a33ca2ad1fcf857817cba505a788e15cf9d6ed0c |
python/cpython | python__cpython-102379 | # `inspect._signature_strip_non_python_syntax` doesn't need to strip `/`
`inspect._signature_strip_non_python_syntax` removes `/`; this is probably a historical artifact dating from before Python 3.8 made `/` real Python syntax.
Removing this "feature" would simplify code in `inspect`.
<!-- gh-linked-prs -->
### Linked PRs
* gh-102379
<!-- /gh-linked-prs -->
| 71cf7c3dddd9c49ec70c1a95547f2fcd5daa7034 | c6858d1e7f4cd3184d5ddea4025ad5dfc7596546 |
python/cpython | python__cpython-102372 | # Move _Py_Mangle from compile.c to symtable.c
_Py_Mangle doesn't need to be in compile.c (which is > 10K lines and needs to be reduced).
Moving it to symtable.c will
(1) remove one reason for symtable.c to depend on compile.c
(2) make typeobject.c depend on the much smaller symtable.c instead of on compile.c
(3) make compile.c a little smaller.
<!-- gh-linked-prs -->
### Linked PRs
* gh-102372
<!-- /gh-linked-prs -->
| 71db5dbcd714b2e1297c43538188dd69715feb9a | 73250000ac7d6a5e41917e8bcea7234444cbde78 |
python/cpython | python__cpython-112942 | # `_osx_support.get_platform_osx()` does not always return the minor release number
`_osx_support.get_platform_osx()`, particularly when called from `sysconfig.get_platform()`, might not return the minor release number. This seems to be because the `MACOSX_DEPLOYMENT_TARGET` environment variable will be used instead (if present), which may or may not include a minor version number.
E.g. on macOS 12.6 with `MACOSX_DEPLOYMENT_TARGET=12`:
```
>>> _osx_support.get_platform_osx({}, "", "", "")
('macosx', '12.6', '')
>>> _osx_support.get_platform_osx(sysconfig.get_config_vars(), "", "", "")
('macosx', '12', '')
```
Perhaps `release = macver` could be switched to `release = macrelease` [here](https://github.com/python/cpython/blob/2f62a5da949cd368a9498e6a03e700f4629fa97f/Lib/_osx_support.py#L514) so that the internal `_get_system_version()` will be used instead?
<!-- gh-linked-prs -->
### Linked PRs
* gh-112942
* gh-113264
* gh-113265
<!-- /gh-linked-prs -->
| 893c9ccf48eacb02fa6ae93632f2d0cb6778dbb6 | 4cfce3a4da7ca9513e7f2c8ec94d50f8bddfa41b |
python/cpython | python__cpython-102426 | # Stack overflow on GC of deeply nested filter()
<!--
Use this template for hard crashes of the interpreter, segmentation faults, failed C-level assertions, and similar.
Do not submit this form if you encounter an exception being unexpectedly raised from a Python function.
Most of the time, these should be filed as bugs, rather than crashes.
The CPython interpreter is itself written in a different programming language, C.
For CPython, a "crash" is when Python itself fails, leading to a traceback in the C stack.
-->
# Crash report
The in-built function filter() crashes as the following:
```python
i = filter(bool, range(10000000))
for _ in range(10000000):
i = filter(bool, i)
```
# Error messages
```
Segmentation fault (core dumped)
```
# Your environment
<!-- Include as many relevant details as possible about the environment you experienced the bug in -->
- CPython versions tested on: Python 3.9.12
- Operating system and architecture: Linux hx-rs4810gs 5.15.0-57-generic 63~20.04.1-Ubuntu SMP Wed Nov 30 13:40:16 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
<!--
You can freely edit this text. Remove any lines you believe are unnecessary.
-->
<!-- gh-linked-prs -->
### Linked PRs
* gh-102426
* gh-102435
* gh-102436
<!-- /gh-linked-prs -->
| 66aa78cbe604a7c5731f074b869f92174a8e3b64 | 5da379ca7dff44b321450800252be01041b3320b |
python/cpython | python__cpython-102696 | # Usage of venv specifies a python3 command not found in 3.11.2 on Windows
# Documentation
The documentation on using `venv` at https://docs.python.org/3/library/venv.html for 3.11.2 specifies to use a `python3` executable.
I do not find a `python3` executable in the 3.11.2 installation for Windows 64-bit.
<!-- gh-linked-prs -->
### Linked PRs
* gh-102696
* gh-102697
* gh-102698
<!-- /gh-linked-prs -->
| 80abd62647b2a36947a11a6a8e395061be6f0c61 | 1ff81c0cb67215694f084e51c4d35ae53b9f5cf9 |
python/cpython | python__cpython-102345 | # implement winreg QueryValue / SetValue using QueryValueEx / SetValueEx
# Feature or enhancement
`QueryValue` and `SetValue` can be implemented using `QueryValueEx` and `SetValueEx`. These are compatible with more Windows API families.
<!-- gh-linked-prs -->
### Linked PRs
* gh-102345
<!-- /gh-linked-prs -->
| c1748ed59dc30ab99fe69c22bdbab54f93baa57c | d3d20743ee1ae7e0be17bacd278985cffa864816 |
python/cpython | python__cpython-102342 | # Improve the test function for pow
Currently, the `powtest` function in `Lib/test/test_pow.py` does not test the results of calling `pow` with negative exponents.
This PR will add appropriate assert statements to test the results of calling pow with negative exponents.
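A sketch of the kind of assertions involved (illustrative values, not the PR's exact test cases):

```python
# Negative exponents on ints produce floats...
assert pow(2, -1) == 0.5
assert pow(2, -2) == 0.25
assert pow(-2, -3) == -0.125

# ...and with a modulus, a negative exponent requests a modular inverse
# (supported since Python 3.8):
assert pow(2, -1, 7) == 4  # because (2 * 4) % 7 == 1
```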
<!-- gh-linked-prs -->
### Linked PRs
* gh-102342
* gh-102446
* gh-102447
<!-- /gh-linked-prs -->
| 32220543e2db36c6146ff2704ed1714a6adecc1b | 7894bbe94ba319eb650f383cb5196424c77b2cfd |
python/cpython | python__cpython-102337 | # Remove Windows 7 specific code
# Feature or enhancement
There are still a couple of code sections that special-case Windows 7. Removing at least part of those is required for https://github.com/python/cpython/issues/102255. So we might as well go ahead and clean up a large part of them.
<!-- gh-linked-prs -->
### Linked PRs
* gh-102337
* gh-102347
<!-- /gh-linked-prs -->
| 938e36f824c5f834d6b77d47942ad81edd5491d0 | 360ef843d8fc03aad38ed84f9f45026ab08ca4f4 |
python/cpython | python__cpython-102325 | # Improve tests of `typing.override`
Right now we have several things to improve:
1. This branch is not covered by tests: https://github.com/python/cpython/blob/4c87537efb5fd28b4e4ee9631076ed5953720156/Lib/typing.py#L3490 We don't have a single test that ensures that `@override` will not fail for some weird cases. Since 99.99% of use-cases will simply be `@override def some(...): ...`, it is not very important - but still desirable
2. We don't test `@classmethod`s, I guess it is special enough to be included
3. `@property` is also missing from tests, but is mentioned in PEP: https://peps.python.org/pep-0698/#specification and https://peps.python.org/pep-0698/#limitations-of-setting-override
4. PEP also mentions `@functools.lru_cache` and similar decorators as an example of how wrappers should be nested. Why don't we test it? Right now there are no tests for nested method decorators (except `staticmethod`)
5. It might not hurt to test that parent's methods definitions are not mutated (I think this is a part of the contract)
PR is on its way :)
<!-- gh-linked-prs -->
### Linked PRs
* gh-102325
<!-- /gh-linked-prs -->
| 12011dd8bafa6867f2b4a8a9e8e54cb0fbf006e4 | 71db5dbcd714b2e1297c43538188dd69715feb9a |
python/cpython | python__cpython-103663 | # (🐞) Column marker in error message off when line contains non ASCII character
```py
b"Ā"
```
```
👉 python test.py
File "/test.py", line 1
b"Ā"
^
SyntaxError: bytes can only contain ASCII literal characters
```
Here the caret is pointing to blank space after the code.
So each non-ASCII character is adding more incorrect offset:
```
👉 $c:temp = "b'ĀĀĀĀĀĀĀĀ'"
👉 py temp
File "C:\temp", line 1
b'ĀĀĀĀĀĀĀĀ'
^
```
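The error can also be reproduced without a file by compiling the source directly (sketch; the exact offset value depends on the Python version):

```python
# Reproduce the SyntaxError and inspect the reported column offset
try:
    compile('b"\u0100"', "<test>", "exec")
except SyntaxError as e:
    # On affected versions the offset points past the literal
    print(e.msg, e.offset)
```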
## Expected
```
👉 python test.py
File "/test.py", line 1
b"Ā"
^
SyntaxError: bytes can only contain ASCII literal characters
```
<!-- gh-linked-prs -->
### Linked PRs
* gh-103663
* gh-103703
<!-- /gh-linked-prs -->
| 0fd38917582aae0728e20d8a641e56d9be9270c7 | 7bf94568a9a4101c72b8bf555a811028e5b45ced |
python/cpython | python__cpython-102309 | # Interpreter generator should emit code closer to pure C
Rather than emitting macros such as `DISPATCH`, `POKE`, `TARGET` etc, it would be useful if the code generator emitted something closer to plain C. Some macros will still be needed for portability.
Doing so would make the overhead in dispatch explicit and expose redundancies that can be eliminated.
For example, not all instructions need to save `frame->prev_instr`, but all do because the assignment is hidden in a macro.
<!-- gh-linked-prs -->
### Linked PRs
* gh-102309
<!-- /gh-linked-prs -->
| b5ff38243355c06d665ba8245d461a0d82504581 | e1a90ec75cd0f5cab21b467a73404081eaa1077f |
python/cpython | python__cpython-102514 | # Move _Py_RefTotal to PyInterpreterState
(See gh-100227.)
`_Py_RefTotal` holds the current global total number of refcounts. It only exists if `Py_REF_DEBUG` is defined (implied by `Py_DEBUG`). It is exposed by `sys.gettotalrefcount()` and set by `Py_INCREF()`, `Py_DECREF()`, etc. and `_Py_NewReference()`.
Modications to `_Py_RefTotal` are currently protected by the GIL so it should be moved to `PyInterpreterState`. For various aspects of compatibility, it makes sense to keep the `_Py_RefTotal` symbol around (and correct) and keep returning the global total from `sys.gettotalrefcount()`.
Also, `_Py_RefTotal` is used by stable ABI extensions only where `Py_REF_DEBUG` is defined (unlikely) and only where built against 3.9 or earlier. Just in case, though, we must still keep the global variable around, so any solution here must respect that.
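A quick way to see the `Py_REF_DEBUG` dependency from Python (small sketch):

```python
import sys

# sys.gettotalrefcount() exists only when CPython is built with Py_REF_DEBUG
# (implied by Py_DEBUG); on release builds the attribute is absent.
if hasattr(sys, "gettotalrefcount"):
    print("total refcount:", sys.gettotalrefcount())
else:
    print("release build: no total refcount available")
```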
<!-- gh-linked-prs -->
### Linked PRs
* gh-102514
* gh-102543
* gh-102545
* gh-102845
* gh-102846
* gh-104762
* gh-104763
* gh-104766
* gh-105123
* gh-105345
* gh-105347
* gh-105351
* gh-105352
* gh-105371
* gh-105389
* gh-105391
* gh-105550
* gh-105551
* gh-105552
* gh-105553
* gh-107193
* gh-107199
* gh-109901
<!-- /gh-linked-prs -->
| cbb0aa71d040022db61390380b8aebc7c04f3275 | 11a2c6ce516b24b2435cb627742a6c4df92d411c |
python/cpython | python__cpython-102303 | # Make inspect.Parameter.__hash__ use _default instead of default
# Feature or enhancement
Enable subclasses of Parameter to define a `default` property that itself uses `hash(self)`, by having `Parameter.__hash__` rely on `_default` rather than the `default` property.
# Pitch
I wanted to make a subclass of Parameter which would store as `_default` the string eval-ing to the actual default value, and which would only eval it when accessed, memoizing it at that time.
So, I coded this :
```py
class Parameter(inspect.Parameter):
__slots__ = ()
@property
@functools.cache
def default(self):
return renpy.python.py_eval(super(Parameter, self).default)
```
(not using @cached_property because I want to keep the empty slots)
This doesn't work, because `cache` tries to hash `self`, and Parameter.\_\_hash\_\_ accesses `self.default`, rather than `self._default` (so it does an infinite recursion loop).
The last edit to the hash method was purportedly to "fix a discrepancy between eq and hash". However, the eq still accesses the raw _default, and hash did so too before the "fix".
Overall, apart from my selfish use-case, I think it makes more sense for an immutable class to hash relative to the actual value instead of the less-reliable propertied value.
And there's a minor performance bonus too.
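A minimal sketch of the intended pattern (names hypothetical; plain `eval` stands in for the original `py_eval`; the `functools.cache` wrapper is omitted so the sketch runs on either `__hash__` implementation):

```python
import inspect

# Hypothetical subclass: the stored default is a string, eval'd on access
class LazyParameter(inspect.Parameter):
    __slots__ = ()

    @property
    def default(self):
        # base class stores the raw value in the _default slot
        return eval(super().default)

p = LazyParameter("x", inspect.Parameter.POSITIONAL_OR_KEYWORD, default="1 + 1")
assert p.default == 2
# Once __hash__ is based on _default, wrapping the property above with
# @functools.cache (which hashes self) no longer recurses.
hash(p)
```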
<!-- gh-linked-prs -->
### Linked PRs
* gh-102303
<!-- /gh-linked-prs -->
| 90801e48fdd2a57c5c5a5e202ff76c3a7dc42a50 | c2bd55d26f8eb2850eb9f9026b5d7f0ed1420b65 |
python/cpython | python__cpython-102301 | # Reuse objects with refcount == 1 in specialized instructions
Many VM instructions consume references to their operands and produce a reference to a result, often of the same type.
For some of those, we specialize for common types.
We can speed up the operations by reusing one of the operands if the refcount is one.
For example, in `BINARY_OP_ADD_FLOAT` if the refcount of the left operand is one, we can do the operation in place.
This is effectively free as we need to check the refcount of the operands in `Py_DECREF()` anyway.
We have attempted doing this in the past, but got bogged down trying to merge the operation and the following store.
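The precondition is visible from Python, though the in-place reuse itself happens inside the C interpreter loop (illustrative sketch only):

```python
import sys

# A float produced at runtime is typically held by a single reference,
# which is the case the specialized instruction could reuse in place.
left = float("1.5")
# getrefcount adds one temporary reference for its own argument
assert sys.getrefcount(left) >= 2
```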
<!-- gh-linked-prs -->
### Linked PRs
* gh-102301
<!-- /gh-linked-prs -->
| 233e32f93614255bf5fc7c93cd98af453e58cc98 | 78e4e6c3d71980d4e6687f07afa6ddfc83e29b04 |
python/cpython | python__cpython-102297 | # Documenting that inspect.Parameter.kind attributes support ordering
# Documentation
As described [here](https://discuss.python.org/t/documenting-that-inspect-parameter-kind-attributes-are-a-sorted-intenum/24103).
This is more an improvement proposition than a documentation _problem_ strictly speaking.
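For illustration, the ordering that could be documented already holds at runtime (the kinds are an `IntEnum`):

```python
import inspect

P = inspect.Parameter
# kind members compare in declaration order
assert (P.POSITIONAL_ONLY < P.POSITIONAL_OR_KEYWORD < P.VAR_POSITIONAL
        < P.KEYWORD_ONLY < P.VAR_KEYWORD)
```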
<!-- gh-linked-prs -->
### Linked PRs
* gh-102297
* gh-102298
* gh-102299
<!-- /gh-linked-prs -->
| 0db6f442598a1994c37f24e704892a2bb71a0a1b | e3c3f9fec099fe78d2f98912be337d632f6fcdd1 |
python/cpython | python__cpython-102282 | # potential nullptr dereference + use of uninitialized memory in fileutils
# Bug report
https://github.com/python/cpython/blob/6daf42b28e1c6d5f0c1a6350cfcc382789e11293/Python/fileutils.c#L2161-L2162 can lead to use of uninitialized memory when `join_relfile` fails.
https://github.com/python/cpython/blob/6daf42b28e1c6d5f0c1a6350cfcc382789e11293/Python/fileutils.c#L2197-L2199 in combination with https://github.com/python/cpython/blob/6daf42b28e1c6d5f0c1a6350cfcc382789e11293/Modules/getpath.c#L450
leads to a nullptr dereference.
<!-- gh-linked-prs -->
### Linked PRs
* gh-102282
* gh-103040
<!-- /gh-linked-prs -->
| afa6092ee4260bacf7bc11905466e4c3f8556cbb | 2b5781d659ce3ffe03d4c1f46e4875e604cf2d88 |
python/cpython | python__cpython-102264 | # Confusion in regular expression syntax example.
In the 3.11 documentation, the regular expression syntax section reads, in part:
> To match a literal ']' inside a set, precede it with a backslash, or place it at the beginning of the set. For example, both `[()[\]{}]` and `[]()[{}]` will both match a parenthesis.
The discussion centers on matching a right square bracket, but the end phrase reads, "... will both match a parenthesis." I suspect the author meant something like "... will both match left or right braces, parentheses or square brackets."
The same wording exists in 2.7 and 3.5 through 3.12. I doubt it's worth backporting beyond whatever is still being actively bugfixed.
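Whatever wording is chosen, the behavior itself is easy to check (small sketch; both sets match any of the six bracket characters):

```python
import re

# ']' escaped inside the set, and ']' placed first in the set
for pattern in (r"[()[\]{}]", r"[]()[{}]"):
    assert sorted(re.findall(pattern, "([{}])")) == ["(", ")", "[", "]", "{", "}"]
```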
<!-- gh-linked-prs -->
### Linked PRs
* gh-102264
* gh-102270
* gh-102271
<!-- /gh-linked-prs -->
| bcadcde7122f6d3d08b35671d67e105149371a2f | d71edbd1b7437706519a9786211597d95934331a |
python/cpython | python__cpython-102256 | # Improve build support on xbox
# Feature or enhancement
Since many game studios use Python for their game scripting, it would be great to have better build support for CPython on the Xbox. The Xbox runs Windows, but does not provide all the APIs CPython uses on Windows.
<!-- gh-linked-prs -->
### Linked PRs
* gh-102256
* gh-102583
<!-- /gh-linked-prs -->
| c6858d1e7f4cd3184d5ddea4025ad5dfc7596546 | ca066bdbed85094a9c4d9930823ce3587807db48 |
python/cpython | python__cpython-102253 | # Acceptance of bool objects by complex() is not tested
Operations such as `complex(False) == 0j` are not covered by `test_bool.py`
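The missing coverage amounts to assertions like (sketch):

```python
# bool is a subclass of int, so complex() accepts bool arguments
assert complex(False) == 0j
assert complex(True) == 1 + 0j
assert complex(False, True) == 1j
assert complex(True, True) == 1 + 1j
```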
<!-- gh-linked-prs -->
### Linked PRs
* gh-102253
* gh-102257
* gh-102258
<!-- /gh-linked-prs -->
| 41970436373f4be813fe8f5a07b6da04d5f4c80e | a35fd38b57d3eb05074ca36f3d57e1993c44ddc9 |
python/cpython | python__cpython-102254 | # Fix Refleak in test_import
Commit 096d0097a09e (for gh-101758) resulted in refleak failures. It's unclear if the leak is new or if the test simply exposed it. In 984f8ab018f847fe8d66768af962f69ec0e81849, I skipped the leaking tests until we can fix the leak.
UPDATE: I've updated the tests to only skip when checking for refleaks. I've also added a few more tests.
----
### Context
<details>
<summary>(expand)</summary>
<BR/>
(Also see gh-101758.)
<BR/><BR/>
Loading an extension module involves meaningfully different code paths depending on the content of the `PyModuleDef`. There's the difference between legacy (single-phase-init, PEP 3121) and modern (multi-phase-init, PEP 489) modules. For single-phase init, there are those that support being loaded more than once (`m_size` >= 0) and those that can't (`m_size` == -1). I've added several long comments in import.c explaining about single-phase init modules in detail. I also added some tests to verify the behavior of the single-phase-init cases more thoroughly. Those are the leaking tests.
Relevant state:
* `PyModuleDef.m_size` - the size of the module's per-interpreter state (`PyModuleObject.md_state`)
* single-phase init and multi-phase init modules can have a value of 0 or greater
* such modules must not have any process-global state
* only single-phase init modules can have a value of -1, which means the module does not support being loaded more than once
* such modules may have process-global state
* `PyModuleDef.m_base.m_index` - the index into `PyInterpreterState.imports.modules_by_index` (same index used for every interpreter)
* set by `PyModuleDef_Init()` (e.g. when the module is created)
* `PyModuleDef.m_base.m_copy` - a copy of the `__dict__` of the last time a module was *loaded* using this def (only single-phase init where `m_size` is -1)
* set exclusively by `fix_up_extension()`
* used exclusively by `import_find_extension()`
* cleared by (replaced in) `fix_up_extension()` if `m_copy` was already set (e.g. multiple interpreters, multiple modules in same file using same def)
* cleared by `_PyImport_ClearModulesByIndex()` during interpreter finalization
* `_PyRuntime.imports.extensions` - a dict mapping `(filename, name)` to `PyModuleDef`
* entry set exclusively by `fix_up_extension()` (only for single-phase init modules)
* entry used exclusively by `import_find_extension()`
* entry cleared by `_PyImport_Fini()` at runtime finalization
* `interp->imports.modules_by_index` - a list of single-phase-init modules loaded in this interpreter; lookups (e.g. `PyState_FindModule()`) use `PyModuleDef.m_base.m_index`
* entry set normally only by `fix_up_extension()` and `import_find_extension()` (only single-phase init modules)
* (entry also set by `PyState_AddModule()`)
* used exclusively by `PyState_FindModule()`
* entry cleared normally by `_PyImport_ClearModulesByIndex()` during interpreter finalization
* (entry also cleared by `PyState_RemoveModule()`)
Code path when loading a single-phase-init module for the first time (in import.c, unless otherwise noted):
* `imp.load_dynamic()`
* `importlib._bootstrap._load()` (using a spec with `ExtensionFileLoader`)
* `ExtensionFileLoader.create_module()` (in _bootstrap_external.py)
* `_imp.create_dynamic()` (`_imp_create_dynamic_impl()`)
* `import_find_extension()` (not found in `_PyRuntime.imports.extensions`)
* `_PyImport_LoadDynamicModuleWithSpec()` (importdl.c)
* `_PyImport_FindSharedFuncptr()`
* `<module init func from loaded binary>()`
* sets process-global state (only where `m_size == -1`)
* `PyModule_Create()`
* populates per-interpreter module state (only where `m_size > 0`)
* sets module attrs (on `__dict__`)
* sets `def->m_base.m_init` (only needed for single-phase-init where `m_size >=0`)
* `_PyImport_FixupExtensionObject()`
* sets `sys.modules[spec.name]`
* `fix_up_extension()`
* set `interp->imports.modules_by_index[def->m_base.m_index]`
* clear `def->m_base.m_copy` (only if set and only if `m_size == -1`)
* set `def->m_base.m_copy` to a copy of the module's `__dict__` (only if `m_size == -1`)
* set `_PyRuntime.imports.extensions[(filename, name)]`
* sets missing module attrs (e.g. `__file__`)
During testing we use a helper to erase (nearly) any evidence of the module having been imported before. That means clearing the state described above.
Here's the code path:
* `_testinternalcapi.clear_extension()` (Modules/_testinternalcapi.c)
* `_PyImport_ClearExtension()`
* `clear_singlephase_extension()`
* (only if there's a `_PyRuntime.imports.extensions` entry)
* clear the module def's `m_copy`
* replace the `interp->imports.modules_by_index` entry with `Py_None`
* delete the `_PyRuntime.imports.extensions` entry
</details>
<!-- gh-linked-prs -->
### Linked PRs
* gh-102254
* gh-104796
* gh-105082
* gh-105083
* gh-105085
* gh-105170
* gh-106013
* gh-109540
<!-- /gh-linked-prs -->
| bb0cf8fd60e71581a90179af63e60e8704c3814b | 0db6f442598a1994c37f24e704892a2bb71a0a1b |
python/cpython | python__cpython-102287 | # Regression segfault under CPython 3.12a5 in the SymPy test suite
# Crash report
This comes from https://github.com/sympy/sympy/pull/24776 which adds CPython 3.12 prerelease testing in SymPy's CI.
This is seen with CPython 3.12a5 but not with 3.11 or earlier versions.
The reproducer is to run the SymPy test suite:
```
pip install sympy==1.11.1
```
Then
```
>>> import sympy
>>> sympy.test(subprocess=False)
...
(it takes about 5 minutes)
...
sympy/functions/special/tests/test_delta_functions.py[3] ... [OK]
sympy/functions/special/tests/test_elliptic_integrals.py[4] .... [OK]
sympy/functions/special/tests/test_error_functions.py[24] ................Segmentation fault: 11
```
I don't yet have a simpler reproducer for this because it seems to be non-deterministic but the SymPy test suite reliably invokes a segfault under 3.12 alpha 5 after about 5 minutes. The tests that are running at the time of the segfault will pass if run in isolation. Running the whole test suite though will cause it to fail randomly at one of a few specific places. I don't have a simpler reproducer because running a smaller part of the test suite does not reproduce the problem.
# Error messages
Usually:
```
Segmentation fault: 11
```
In one case I have also seen (on OSX):
```
Bus error: 10
```
# Your environment
- CPython versions tested on: 3.12 alpha 5 (the problem is not seen with 3.11 or earlier).
- Operating system and architecture: Ubuntu and OSX, both on x86-64 CPU.
The problem was seen initially in GitHub Actions CI on an Ubuntu 20.04 runner but I have also reproduced it locally in OSX (an Intel-CPU Macbook).
I don't immediately have a setup that I can use to bisect this but I will get one set up soon to narrow this down.
<!-- gh-linked-prs -->
### Linked PRs
* gh-102287
<!-- /gh-linked-prs -->
| e3c3f9fec099fe78d2f98912be337d632f6fcdd1 | 101a12c5767a8c6ca6e32b8e24a462d2606d24ca |
python/cpython | python__cpython-102806 | # cProfile does not work in pdb
# Bug report
While debugging, cProfile fails to work, while profile does work, see session below:
``` python
Python 3.10.8 (main, Oct 13 2022, 09:48:40) [Clang 14.0.0 (clang-1400.0.29.102)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import pdb
>>> db = pdb.Pdb()
>>> db.run("aaa")
> <string>(1)<module>()
(Pdb) import profile, cProfile
(Pdb) cProfile.run("print(0)")
0
*** TypeError: Cannot create or construct a <class 'pstats.Stats'> object from <cProfile.Profile object at 0x1028d5660>
(Pdb) profile.run("print(0)")
0
1 function calls in 0.000 seconds
Ordered by: standard name
ncalls tottime percall cumtime percall filename:lineno(function)
1 0.000 0.000 0.000 0.000 profile:0(print(0))
0 0.000 0.000 profile:0(profiler)
(Pdb) q
>>> cProfile.run("print(0)")
0
4 function calls in 0.000 seconds
Ordered by: standard name
ncalls tottime percall cumtime percall filename:lineno(function)
1 0.000 0.000 0.000 0.000 <string>:1(<module>)
1 0.000 0.000 0.000 0.000 {built-in method builtins.exec}
1 0.000 0.000 0.000 0.000 {built-in method builtins.print}
1 0.000 0.000 0.000 0.000 {method 'disable' of '_lsprof.Profiler' objects}
>>> profile.run("print(0)")
0
5 function calls in 0.000 seconds
Ordered by: standard name
ncalls tottime percall cumtime percall filename:lineno(function)
1 0.000 0.000 0.000 0.000 :0(exec)
1 0.000 0.000 0.000 0.000 :0(print)
1 0.000 0.000 0.000 0.000 :0(setprofile)
1 0.000 0.000 0.000 0.000 <string>:1(<module>)
1 0.000 0.000 0.000 0.000 profile:0(print(0))
0 0.000 0.000 profile:0(profiler)
>>>
```
# Your environment
<!-- Include as many relevant details as possible about the environment you experienced the bug in -->
- CPython versions tested on: Python 3.10.8 (main, Oct 13 2022, 09:48:40) [Clang 14.0.0 (clang-1400.0.29.102)] on darwin
- Operating system and architecture: Apple M1 Max - 13.1 (22C65)
<!-- gh-linked-prs -->
### Linked PRs
* gh-102806
* gh-111557
* gh-111558
<!-- /gh-linked-prs -->
| 244567398370bfa62a332676762bb1db395c02fc | 5cc6c80a7797c08c884edf94de4fc7b6075e32ce |
python/cpython | python__cpython-102248 | # Improve the Efficiency of Python3.11.1 __getattr__
# Feature or enhancement
`__getattr__` in Python 3.11.1 is much slower than `@property` access and plain attribute access. It's even slower than in Python 3.10.4.
`_PyObject_GenericGetAttrWithDict` is the key reason. If Python fails to find an attribute through the normal lookup, it returns NULL and raises an exception, and raising an exception has a performance cost. Python 3.11.1 adds `set_attribute_error_context` to support fine-grained error locations in tracebacks, which makes things worse.
# Pitch
We can use this test code:
```py
import time
import sys
class A:
def foo(self):
print("Call A.foo!")
def __getattr__(self, name):
return 2
@property
def ppp(self):
return 3
class B(A):
def foo(self):
print("Call B.foo!")
class C(B):
def __init__(self) -> None:
self.pps = 1
def foo(self):
print("Call C.foo!")
def main():
start = time.time()
for i in range(1, 1000000):
pass
end = time.time()
peer = end - start
c = C()
print(f"Python version of {sys.version}")
start = time.time()
for i in range(1, 1000000):
s = c.pps
end = time.time()
print(f"Normal getattr spend time: {end - start - peer}")
start = time.time()
for i in range(1, 1000000):
s = c.ppa
end = time.time()
print(f"Call __getattr__ spend time: {end - start - peer}")
start = time.time()
for i in range(1, 1000000):
s = c.ppp
end = time.time()
print(f"Call property spend time: {end - start - peer}")
if __name__ == "__main__":
main()
```
The result shows how slow `__getattr__` is:
```
Python version of 3.11.1 (main, Dec 26 2022, 16:32:50) [GCC 8.3.0]
Normal getattr spend time: 0.03204226493835449
Call __getattr__ spend time: 0.4767305850982666
Call property spend time: 0.06345891952514648
```
When we define `__getattr__`, failing to find an attribute through the normal lookup is the expected path. If we can detect that result and then call `__getattr__` without exception handling, it will be faster.
I tried to modify Python3.11.1 like this:
1. add a new function in `object.c`:
```c
PyObject *
PyObject_GenericTryGetAttr(PyObject *obj, PyObject *name)
{
return _PyObject_GenericGetAttrWithDict(obj, name, NULL, 1);
}
```
2. change `typeobject.c` :
```c
if (getattribute == NULL ||
(Py_IS_TYPE(getattribute, &PyWrapperDescr_Type) &&
((PyWrapperDescrObject *)getattribute)->d_wrapped ==
(void *)PyObject_GenericGetAttr))
// res = PyObject_GenericGetAttr(self, name);
res = PyObject_GenericTryGetAttr(self, name);
else {
Py_INCREF(getattribute);
res = call_attribute(self, getattribute, name);
Py_DECREF(getattribute);
}
if (res == NULL) {
if (PyErr_ExceptionMatches(PyExc_AttributeError))
PyErr_Clear();
res = call_attribute(self, getattr, name);
}
Py_DECREF(getattr);
return res;
```
Rebuilding Python, it really becomes faster: `spend time: 0.13772845268249512`.
# Previous discussion
[__getattr__ is much slower in Python3.11](https://discuss.python.org/t/getattr-is-much-slower-in-python3-11/24028)
<!-- gh-linked-prs -->
### Linked PRs
* gh-102248
* gh-103332
* gh-103761
<!-- /gh-linked-prs -->
| 059bb04245a8b3490f93dfd72522a431a113eef1 | efb0a2cf3adf4629cf4669cb558758fb78107319 |
python/cpython | python__cpython-102212 | # re.Pattern and re.Match aren’t advertised
# Documentation
The `re` documentation only documents the classes’ methods, but not their existence as part of the `re` module.
`typing` however mentions their existence, so I assume they are officially supported.
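For illustration, both names are importable today and usable with `isinstance()`:

```python
import re

pattern = re.compile(r"\d+")
match = pattern.search("abc 123")

# re.Pattern and re.Match are the types returned by compile() and search()
assert isinstance(pattern, re.Pattern)
assert isinstance(match, re.Match)
assert match.group() == "123"
```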
<!-- gh-linked-prs -->
### Linked PRs
* gh-102212
* gh-108490
* gh-108491
<!-- /gh-linked-prs -->
| 6895ddf6cb2bada7e392eb971c88ded03d8fc79e | 53470184091f6fe1c7a1cf4de8fd90dc2ced7654 |
python/cpython | python__cpython-102225 | # test_implied_dirs_performance is flaky
Reported [in discord](https://discord.com/channels/854719841091715092/868504620970962944/1078325012651593768), in #102018 a timing check was added to `test_implied_dirs_performance`, and this check is frequently failing ([example](https://github.com/python/cpython/actions/runs/4248047744/jobs/7386765925)):
```
ERROR: test_implied_dirs_performance (test.test_zipfile.test_path.TestPath.test_implied_dirs_performance)
----------------------------------------------------------------------
Traceback (most recent call last):
File "D:\a\cpython\cpython\Lib\contextlib.py", line 80, in inner
with self._recreate_cm():
File "D:\a\cpython\cpython\Lib\test\test_zipfile\_context.py", line 30, in __exit__
raise DeadlineExceeded(duration, self.max_duration)
test.test_zipfile._context.DeadlineExceeded: (3.140999999999849, 3)
```
These failures aren't occurring on [zipp](/jaraco/zipp), where the check has been running for years without fail.
Furthermore, the check is currently not capturing the failure case because the invocation fails to consume the generator:
https://github.com/python/cpython/blob/9f3ecd1aa3566947648a053bd9716ed67dd9a718/Lib/test/test_zipfile/test_path.py#L336
That indicates that the flaky failures are due to the construction of test data:
https://github.com/python/cpython/blob/9f3ecd1aa3566947648a053bd9716ed67dd9a718/Lib/test/test_zipfile/test_path.py#L335
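The "fails to consume the generator" point in miniature (hypothetical generator; the body only runs on iteration):

```python
def build_data():
    raise RuntimeError("only runs when iterated")
    yield  # makes this a generator function

g = build_data()   # no error yet: the generator body has not started
try:
    list(g)        # consuming the generator finally runs the body
    body_raised = False
except RuntimeError:
    body_raised = True
assert body_raised
```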
<!-- gh-linked-prs -->
### Linked PRs
* gh-102225
* gh-102232
<!-- /gh-linked-prs -->
| 89b4c1205327cc8032f4a39e3dfbdb59009a0704 | 2db23d10bf64bf7c061fd95c6a8079ddc5c9aa4b |
python/cpython | python__cpython-102193 | # Replace Fetch/Restore etc by the new exception APIs
PyErr_Fetch/Restore etc are now legacy APIs, in some places we can replace them by more efficient alternatives.
<!-- gh-linked-prs -->
### Linked PRs
* gh-102193
* gh-102196
* gh-102218
* gh-102319
* gh-102466
* gh-102472
* gh-102477
* gh-102619
* gh-102631
* gh-102743
* gh-102760
* gh-102769
* gh-102816
* gh-102935
* gh-103157
<!-- /gh-linked-prs -->
| 4c87537efb5fd28b4e4ee9631076ed5953720156 | 85b1fc1fc5a2e49d521382eaf5e7793175d00371 |
python/cpython | python__cpython-102182 | # Improve stats presentation for specialization
`SEND` and `FOR_ITER` use the same function to count specialization failures, so a minor improvement is needed to make the correct specialization failure kind appear for each instruction.
For example:
### Before improvement:
Specialization stats for the `SEND` family
|Failure kind | Count | Ratio |
|---|---:|---:|
| other | 27,540 | 96.6% |
| kind 13 | 980 | 3.4% |
### After improvement:
Specialization stats for the `SEND` family
|Failure kind | Count | Ratio |
|---|---:|---:|
| async generator send | 24,440 | 86.5% |
| other | 3,100 | 11.0% |
| list | 700 | 2.5% |
<!-- gh-linked-prs -->
### Linked PRs
* gh-102182
<!-- /gh-linked-prs -->
| 373bca0cc5256dc512ffc22bdff4424f7ee8baa2 | 7b8d7f56b64ab4370fea77e77ea4984dd2a73979 |
python/cpython | python__cpython-102180 | # os.dup2() raises wrong OSError for negative fds
```
>>> os.dup2(-1, 0)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
OSError: [Errno 0] Error
```
The bug is caused by the old manual sign check that at some point lost the part that set `errno`: https://github.com/python/cpython/blob/c3a178398c199038f3a0891d09f0363ec73f3b38/Modules/posixmodule.c#L9832-L9834
I'm going to submit a PR with the fix.
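A sketch of the behavior expected after the fix (the exact errno is an assumption based on POSIX `dup2`, which specifies `EBADF` for invalid descriptors):

```python
import errno
import os

# A negative fd is invalid; the OSError should carry EBADF, not errno 0
try:
    os.dup2(-1, 0)
except OSError as e:
    print(e.errno == errno.EBADF)  # True on fixed versions
```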
<!-- gh-linked-prs -->
### Linked PRs
* gh-102180
* gh-102419
* gh-102420
<!-- /gh-linked-prs -->
| c2bd55d26f8eb2850eb9f9026b5d7f0ed1420b65 | 705487c6557c3d8866622b4d32528bf7fc2e4204 |