| repo | instance_id | problem_statement | merge_commit | base_commit |
|---|---|---|---|---|
python/cpython | python__cpython-102159 | # keyword: Tests for `softkwlist`
Python 3.10 introduced a new category of keywords: soft keywords.
There are no tests for them, so I decided to write some =)
Following the issue at [typeshed](https://github.com/python/typeshed/pull/9680) about the type hints of `kwlist` & `softkwlist`, we came to the conclusion to use `assertSequenceEqual` in `test_keywords_are_sorted` instead of `assertListEqual`.
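A minimal sketch of what such a test could look like (class and method names here are illustrative; the real tests live in `Lib/test/test_keyword.py`):

```python
import keyword
import unittest

class SoftKeywordTests(unittest.TestCase):
    def test_softkwlist_is_sorted(self):
        # assertSequenceEqual keeps the test agnostic about the exact
        # sequence type of softkwlist, per the typeshed discussion.
        self.assertSequenceEqual(keyword.softkwlist, sorted(keyword.softkwlist))

    def test_issoftkeyword(self):
        for kw in keyword.softkwlist:
            self.assertTrue(keyword.issoftkeyword(kw))
```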
<!-- gh-linked-prs -->
### Linked PRs
* gh-102159
* gh-102198
* gh-102199
<!-- /gh-linked-prs -->
| 9f3ecd1aa3566947648a053bd9716ed67dd9a718 | 0c857865e4f255f99d58678f878e09c11da89892 |
python/cpython | python__cpython-102152 | # test_freeze_simple_script in test_tools fails to fetch CONFIG_ARGS
`Tools/freeze/test/freeze.py` fetches `CONFIG_ARGS` from the [clean] build directory, instead of fetching it from the source directory.
Discovered while working on #102131.
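For context (this is background, not the fix itself): the value in question is the configure-time argument string, which a regular build exposes through `sysconfig`:

```python
import sysconfig

# CONFIG_ARGS records the ./configure invocation of the running build;
# it may be None on builds (e.g. Windows) that don't use configure.
config_args = sysconfig.get_config_var("CONFIG_ARGS")
assert config_args is None or isinstance(config_args, str)
```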
<!-- gh-linked-prs -->
### Linked PRs
* gh-102152
* gh-102176
<!-- /gh-linked-prs -->
| c3a178398c199038f3a0891d09f0363ec73f3b38 | 665730d2176aabd05ca5741056aef43189b6f754 |
python/cpython | python__cpython-102142 | # replace use of getpid on Windows with GetCurrentProcessId
# Feature or enhancement
Windows provides `getpid`, `_getpid` and `GetCurrentProcessId` to retrieve the process identifier. `getpid` is deprecated in favor of `_getpid`. In addition, both `getpid` and `_getpid` cannot be used by applications that execute in the Windows Runtime.
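From Python, the equivalence of the two identifiers can be spot-checked via `ctypes` (a sketch for illustration; the actual change is in CPython's C code):

```python
import os
import sys

if sys.platform == "win32":
    import ctypes
    # GetCurrentProcessId is usable from the Windows Runtime, unlike the
    # deprecated CRT getpid; both report the same process identifier.
    kernel32 = ctypes.WinDLL("kernel32")
    assert kernel32.GetCurrentProcessId() == os.getpid()
else:
    assert os.getpid() > 0  # nothing to compare on non-Windows platforms
```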
<!-- gh-linked-prs -->
### Linked PRs
* gh-102142
<!-- /gh-linked-prs -->
| 1fa38906f0b228e6b0a6baa89ab6316989b0388a | 347f7406df62b2bbe551685d72a466f27b951f8e |
python/cpython | python__cpython-119271 | # Entering interactive mode after -m
# Documentation
This might be an omission in the docs or a bug in `-m`, not certain. The [documentation](https://docs.python.org/3/using/cmdline.html#cmdoption-i) currently states that `-i` used before a script or the `-c` switch launches the REPL:
> When a script is passed as first argument or the [-c](https://docs.python.org/3/using/cmdline.html#cmdoption-c) option is used, enter interactive mode after executing the script or the command [...]
but this works equally well with `-m` (unless this is also considered a script):
```
python -i -m timeit "pass"
50000000 loops, best of 5: 4.99 nsec per loop
Traceback (most recent call last):
File "/home/imijmi/anaconda3/lib/python3.9/runpy.py", line 197, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/home/imijmi/anaconda3/lib/python3.9/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/home/imijmi/anaconda3/lib/python3.9/timeit.py", line 375, in <module>
sys.exit(main())
SystemExit
>>>
```
<!-- gh-linked-prs -->
### Linked PRs
* gh-119271
* gh-119285
* gh-119286
<!-- /gh-linked-prs -->
| 172690227e771c2e8ab137815073e3a172c08dec | 423bbcbbc43cacfb6a217c04f890a47d3cf7c3a9 |
python/cpython | python__cpython-102137 | # "wikipedia" turtledemo example in docs is supposed to be "rosette"
The `turtle` module documentation, at [this link](https://docs.python.org/3.12/library/turtle.html) has a section on `turtledemo`. It says the demo scripts are
1. bytedesign
2. chaos
3. clock
4. colormixer
5. forest
6. fractalcurves
7. lindenmayer
8. minimal_hanoi
9. nim
10. paint
11. peace
12. penrose
13. planet_and_moon
14. round_dance
15. sorting_animate
16. tree
17. two_canvases
18. **wikipedia**
19. yinyang
The second-to-last entry is wrong. On my Python installation, the `turtledemo` list shows `rosette` where the docs say `wikipedia`. Is the `rosette` title wrong, or is it the `wikipedia` title?
<!-- gh-linked-prs -->
### Linked PRs
* gh-102137
* gh-102138
* gh-102139
<!-- /gh-linked-prs -->
| 8d46c7ed5e83e22d55fe4f4e6e873d87f340c1dc | 3ba7743b0696202e9caa283a0be253fd26a5cfbd |
python/cpython | python__cpython-102222 | # Possible deadlock at shutdown while recursively acquiring head lock
https://github.com/HypothesisWorks/hypothesis/issues/3585 is reproducible on Python 3.10.10 but not 3.10.9, and so we suspect that https://github.com/python/cpython/pull/100922 may have introduced a (rare?) deadlock while fixing the data race in https://github.com/python/cpython/issues/100892.
The downstream bug report on Hypothesis includes a reliable (but not minimal) reproducer on OSX, though it's unclear whether this might be an artifact of different patch versions of CPython on the various machines which have checked so far.
<!-- gh-linked-prs -->
### Linked PRs
* gh-102222
* gh-102234
* gh-102235
* gh-102236
<!-- /gh-linked-prs -->
| 5f11478ce7fda826d399530af4c5ca96c592f144 | 56e93c8020e89e1712aa238574bca2076a225028 |
python/cpython | python__cpython-102128 | # tarfile's cache balloons in memory when streaming a big tarfile
# Bug report
I've got a bunch of tar files containing millions of small files. Never mind how we got here - I need to process those tars, handling each of the files inside. Furthermore, I need to process a lot of these tars, and I'd like to do it relatively quickly on cheapish hardware, so I'm at least a little sensitive to memory consumption.
The natural thing to do is to iterate through the tarfile, extracting each file one at a time, and carefully closing them when done:
```python
with tarfile.open(filepath, "r:gz") as tar:
for member in tar:
file_buf = tar.extractfile(member)
try:
handle(file_buf)
finally:
file_buf.close()
```
This looks like it should handle each small file and discard it when done, so memory should stay pretty tame. I was very surprised to discover that this actually uses gigabytes of memory. That's fixed if you do this:
```python
with tarfile.open(filepath, "r:gz") as tar:
for member in tar:
file_buf = tar.extractfile(member)
try:
handle(file_buf)
finally:
file_buf.close()
tar.members = [] # evil!
```
That works because `tarfile.TarFile` keeps a cache, `self.members`. That cache is appended to in [`TarFile.next()`](https://github.com/python/cpython/blob/024ac542d738f56b36bdeb3517a10e93da5acab9/Lib/tarfile.py#L2381), which in turn is used in [`TarFile.__iter__`](https://github.com/python/cpython/blob/024ac542d738f56b36bdeb3517a10e93da5acab9/Lib/tarfile.py#L2471).
That cache is storing `TarInfo`s, which are headers describing each file. They're not very large, but with lots and lots of files, those headers can add up.
The net result is that it's not possible to stream a tarfile's contents without memory growing linearly with the number of files in the tarfile. This has been partially addressed in the past (see #46334, from way back in 2008), but never fully resolved. It shows up in [StackOverflow](https://stackoverflow.com/questions/21039974/high-memory-usage-with-pythons-native-tarfile-lib) and probably elsewhere, with a clumsy recommended solution of resetting `tar.members` each time, but there ought to be a better way.
# Your environment
CPython 3.10, mostly; I don't think OS or architecture etc are relevant.
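The workaround can be exercised end-to-end on a small synthetic archive (the helper name `stream_tar` is made up for illustration):

```python
import io
import tarfile

def stream_tar(fileobj, handle):
    # Iterate members while keeping the TarInfo cache bounded.
    with tarfile.open(fileobj=fileobj, mode="r:gz") as tar:
        for member in tar:
            buf = tar.extractfile(member)  # None for non-regular files
            try:
                if buf is not None:
                    handle(buf)
            finally:
                if buf is not None:
                    buf.close()
            tar.members = []  # evil, but keeps memory flat

# Build a small in-memory archive to demonstrate.
raw = io.BytesIO()
with tarfile.open(fileobj=raw, mode="w:gz") as tar:
    for i in range(100):
        data = b"x" * 10
        info = tarfile.TarInfo(name=f"file{i}.txt")
        info.size = len(data)
        tar.addfile(info, io.BytesIO(data))

raw.seek(0)
seen = []
stream_tar(raw, lambda buf: seen.append(buf.read()))
assert len(seen) == 100
```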
<!-- gh-linked-prs -->
### Linked PRs
* gh-102128
<!-- /gh-linked-prs -->
| 50fce89d123b25e53fa8a0303a169e8887154a0e | 097b7830cd67f039ff36ba4fa285d82d26e25e84 |
python/cpython | python__cpython-102115 | # dis._try_compile: traceback printed on error in source argument is too wordy
The `_try_compile` function from the `dis` module first tries to compile the given source string in `'eval'` mode and, if an exception occurs, catches it and tries again in `'exec'` mode. This leads to a long chained traceback when the given string contains a syntax/indentation error, because the string gets rejected by `compile` in both `'eval'` and `'exec'` modes.
All functions/constructors from the `dis` module that use `_try_compile` and accept a source string as an argument (`dis`, `get_instructions`, `code_info`, `show_code`, `Bytecode`) show this behavior:
```python
>>> dis.dis(')')
Traceback (most recent call last):
File "/home/.../cpython/Lib/dis.py", line 67, in _try_compile
c = compile(source, name, 'eval')
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<dis>", line 1
)
^
SyntaxError: unmatched ')'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/.../cpython/Lib/dis.py", line 112, in dis
_disassemble_str(x, file=file, depth=depth, show_caches=show_caches, adaptive=adaptive)
File "/home/.../cpython/Lib/dis.py", line 593, in _disassemble_str
_disassemble_recursive(_try_compile(source, '<dis>'), **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/.../cpython/Lib/dis.py", line 69, in _try_compile
c = compile(source, name, 'exec')
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<dis>", line 1
)
^
SyntaxError: unmatched ')'
```
Python versions affected: 3.10.8, 3.11.0, current main.
<!-- gh-linked-prs -->
### Linked PRs
* gh-102115
<!-- /gh-linked-prs -->
| 2b6f5c3483597abcb8422508aeffab04f500f568 | 8af8f52d175959f03cf4a2786b6a81ec09e15e95 |
python/cpython | python__cpython-106995 | # `\N` not properly documented in `re` module documentation
# Documentation
In the `re` module, the `\N` escape is documented in the changelog from Python 3.8 but never actually explained in the documentation proper.
The changelog entry (which is quite helpful) says the following:
> Changed in version 3.8: The `'\N{name}'` escape sequence has been added. As in string literals, it expands to the named Unicode character (e.g. `'\N{EM DASH}'`).
While this pretty much hits all the high points of how you'd use `\N`, it still probably ought to be written out a little more formally in the documentation somewhere, especially given how thoroughly and with what care the standard library tends to be documented.
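A short example of the behavior that deserves proper documentation:

```python
import re

# In patterns (3.8+), \N{name} matches the named Unicode character,
# just as it does in string literals.
assert re.search(r"\N{EM DASH}", "a\u2014b") is not None
assert re.fullmatch(r"\N{LATIN SMALL LETTER A}+", "aaa") is not None
```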
<!-- gh-linked-prs -->
### Linked PRs
* gh-106995
* gh-107096
<!-- /gh-linked-prs -->
| 0af247da0932692592ed85ba8b4a1520627ab4ac | 9eeb4b485f4f9e13e5cb638e43a0baecb3ff4e86 |
python/cpython | python__cpython-102189 | # Predicates in itertools and filter
# Documentation
## `filterfalse`
The [doc](https://docs.python.org/3/library/itertools.html#itertools.filterfalse) says *"Make an iterator that filters elements from iterable returning only those for which the predicate is `False`"*. That's not quite correct: it also returns other falsy elements:
```pycon
>>> list(filterfalse(lambda x: x, [False, 0, 1, '', ()]))
[False, 0, '', ()]
```
## `quantify`
This [recipe](https://docs.python.org/3/library/itertools.html#itertools-recipes) has the opposite issue:
```python
def quantify(iterable, pred=bool):
"Count how many times the predicate is true"
return sum(map(pred, iterable))
```
It says "true", but it only really works with `True`. Other predicates, like `re.compile(...).match` returning `re.Match` objects or `None`, don't work and `quantify` crashes. Predicates returning numbers other than 1 and 0 don't crash it but lead to wrong "counts". So `quantify` is limited. A pity and an odd outlier, since the other six tools/recipes and `filter` all support Python's general truth value concept.
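The crash is easy to demonstrate with a `re` predicate:

```python
import re

def quantify(iterable, pred=bool):
    "Count how many times the predicate is true"
    return sum(map(pred, iterable))

matcher = re.compile("a").match
try:
    quantify(["ab", "b", "ab"], matcher)
except TypeError:
    pass  # sum() cannot add re.Match objects and None
```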
The easy fix is to map `bool` over the predicate values:
```python
def quantify(iterable, pred=bool):
"Count how many times the predicate is true"
return sum(map(bool, map(pred, iterable)))
```
I don't see something "nicer" with the available tools/recipes. If only there was a `length` recipe, then we could do:
```python
def quantify(iterable, pred=bool):
"Count how many times the predicate is true"
return length(filter(pred, iterable))
```
## `filter`
The [doc](https://docs.python.org/3/library/functions.html#filter) says:
> **`filter`**(*`function`*, *`iterable`*)
> Construct an iterator from those elements of *iterable* for which *function* returns true.
The "returns true" doesn't work well. I'd say "returns a true value" or "for which *function* is true".
The last paragraph has the same issue:
> See `itertools.filterfalse()` for the complementary function that returns elements of *iterable* for which *function* returns false.
<!-- gh-linked-prs -->
### Linked PRs
* gh-102189
* gh-102219
<!-- /gh-linked-prs -->
| 81bf10e4f20a0f6d36b67085eefafdf7ebb97c33 | e5e1c1fabd8b5626f9193e6c61b9d7ceb7fb2f95 |
python/cpython | python__cpython-102104 | # Classes and instances produced by `dataclasses.make_dataclass` are not pickleable
Repro:
```python
## Setup
>>> import pickle
>>> import dataclasses
>>> A = dataclasses.make_dataclass('A', [])
>>> @dataclasses.dataclass
... class B: pass
...
## Correct
>>> pickle.loads(pickle.dumps(B))
<class '__main__.B'>
>>> pickle.loads(pickle.dumps(B()))
B()
## Wrong
>>> pickle.loads(pickle.dumps(A))
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
_pickle.PicklingError: Can't pickle <class 'types.A'>: attribute lookup A on types failed
>>> pickle.loads(pickle.dumps(A()))
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
_pickle.PicklingError: Can't pickle <class 'types.A'>: attribute lookup A on types failed
```
I think that this happens because of this:
```python
>>> A.__module__
'types'
>>> B.__module__
'__main__'
```
After a manual fix it works as it should:
```python
>>> A.__module__ = '__main__'
>>> pickle.loads(pickle.dumps(A()))
A()
```
So, I propose to do it for all dataclasses that we create in `make_dataclass`.
I have a simple PR ready, it is inspired by `namedtuple`'s logic :)
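A self-contained version of the workaround, pending the fix (the `setattr` line is just belt-and-braces to make the class reachable where pickle will look for it):

```python
import dataclasses
import pickle
import sys

A = dataclasses.make_dataclass("A", [("x", int)])
# Workaround: on affected versions, make_dataclass leaves __module__ as 'types'.
A.__module__ = __name__
setattr(sys.modules[__name__], "A", A)

assert pickle.loads(pickle.dumps(A)) is A
assert pickle.loads(pickle.dumps(A(1))) == A(1)
```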
<!-- gh-linked-prs -->
### Linked PRs
* gh-102104
<!-- /gh-linked-prs -->
| b48be8fa18518583abb21bf6e4f5d7e4b5c9d7b2 | ee6f8413a99d0ee4828e1c81911e203d3fff85d5 |
python/cpython | python__cpython-102360 | # Optimize `iter_index` itertools recipe
# Documentation
The *"Slow path for general iterables"* somewhat reinvents `operator.indexOf`. It seems faster to use it, and doing so could show off an effective combination of the other itertools.
Benchmark with the current implementation and two new alternatives, finding the indices of `0` in a million random digits:
```
108.1 ms ± 0.1 ms iter_index_current
39.7 ms ± 0.3 ms iter_index_new1
31.0 ms ± 0.1 ms iter_index_new2
Python version:
3.10.8 (main, Oct 11 2022, 11:35:05) [GCC 11.3.0]
```
Code of all three (just the slow path portion):
```python
def iter_index_current(iterable, value, start=0):
it = islice(iterable, start, None)
for i, element in enumerate(it, start):
if element is value or element == value:
yield i
def iter_index_new1(iterable, value, start=0):
it = islice(iterable, start, None)
i = start - 1
try:
while True:
yield (i := i+1 + operator.indexOf(it, value))
except ValueError:
pass
def iter_index_new2(iterable, value, start=0):
it = iter(iterable)
consume(it, start)
i = start - 1
try:
for d in starmap(operator.indexOf, repeat((it, value))):
yield (i := i + (1 + d))
except ValueError:
pass
```
The `new1` version is written to be similar to the *"Fast path for sequences"*. The `new2` version adds optimizations and uses three more itertools/recipes. Besides using `starmap(...)`, the optimizations are:
- Not piping all values through an `islice` iterator, instead only using `consume` to *advance* the direct iterator `it`, then using `it`.
- Add `1` to `d`, which is more often one of the existing small ints (up to 256) than adding `1` to `i`.
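A quick correctness check of the `new1` slow path on a small input (the function is repeated here so the snippet is self-contained):

```python
import operator
from itertools import islice

def iter_index_new1(iterable, value, start=0):
    it = islice(iterable, start, None)
    i = start - 1
    try:
        while True:
            # indexOf returns the offset within the remaining iterator,
            # so add 1 (for the consumed match) plus that offset.
            yield (i := i + 1 + operator.indexOf(it, value))
    except ValueError:
        pass

data = [1, 0, 2, 0, 0, 3]
assert list(iter_index_new1(iter(data), 0)) == [1, 3, 4]
assert list(iter_index_new1(iter(data), 0, start=2)) == [3, 4]
```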
<details><summary>Rest of benchmark script</summary>
```python
funcs = iter_index_current, iter_index_new1, iter_index_new2
from itertools import islice, repeat, starmap
from timeit import timeit
import collections
import random
from statistics import mean, stdev
import sys
import operator
# Existing recipe
def consume(iterator, n=None):
if n is None:
collections.deque(iterator, maxlen=0)
else:
next(islice(iterator, n, n), None)
# Test arguments
iterable = random.choices(range(10), k=10**6)
value = 0
start = 100
def args():
return iter(iterable), value, start
# Correctness
expect = list(funcs[0](*args()))
for f in funcs[1:]:
print(list(f(*args())) == expect, f.__name__)
# For timing
times = {f: [] for f in funcs}
def stats(f):
ts = [t*1e3 for t in sorted(times[f])[:5]]
return f'{mean(ts):6.1f} ms ± {stdev(ts):3.1f} ms '
# Timing
number = 1
for _ in range(25):
for f in funcs:
t = timeit(lambda: consume(f(*args())), number=number) / number
times[f].append(t)
# Timing results
for f in sorted(funcs, key=stats, reverse=True):
print(stats(f), f.__name__)
print('\nPython version:')
print(sys.version)
```
</details>
<!-- gh-linked-prs -->
### Linked PRs
* gh-102360
* gh-102363
<!-- /gh-linked-prs -->
| eaae563b6878aa050b4ad406b67728b6b066220e | 2f62a5da949cd368a9498e6a03e700f4629fa97f |
python/cpython | python__cpython-102075 | # Unexpected behavior with dataclasses and weakref
We recently discovered the following error in our project:
`TypeError: descriptor '__weakref__' for 'XYZ' objects doesn't apply to a 'XYZ' object`
XYZ in our case is a dataclass which uses the slots (added in 3.10) and weakref_slot (added in 3.11) parameters.
We further investigated the error and have come to the following example to reproduce it:
```py
import dataclasses
import weakref
@dataclasses.dataclass(slots=True, weakref_slot=True)
class EntityAsDataclass:
x: int = 10
class Entity:
__slots__ = ("x", "__weakref__")
def __init__(self, x: int = 10) -> None:
self.x = x
e1 = Entity()
e1_ref = weakref.ref(e1)
e2 = EntityAsDataclass()
e2_ref = weakref.ref(e2)
assert e1.__weakref__ is e1_ref # works
assert e2.__weakref__ is e2_ref # fails with "TypeError: descriptor '__weakref__' for 'EntityAsDataclass' objects doesn't apply to a 'EntityAsDataclass' object"
```
I don't know if this is intended, but we expect EntityAsDataclass and Entity to have the same behavior here.
Thanks!
<!-- gh-linked-prs -->
### Linked PRs
* gh-102075
* gh-102662
<!-- /gh-linked-prs -->
| d97757f793ea53dda3cc6882b4a92d3e921b17c9 | 71e37d907905b0504c5bb7b25681adeea2157492 |
python/cpython | python__cpython-102078 | # Assertion failure on interrupt when catching exception group
This (maybe too made-up and detached from reality) repro causes an assertion failure when executed on current main (32df540635cacce1053ee0ef98ee23f3f6a43c02).
On 3.11 it warns about "lost sys.stderr" and terminates.
```python3
import time
import threading
import _thread
def f():
try:
f()
except RecursionError:
f()
def g():
try:
raise ValueError()
except* ValueError:
f()
def h():
time.sleep(1)
_thread.interrupt_main()
t = threading.Thread(target=h)
t.start()
g()
t.join()
```
Output:
```
python: Objects/typeobject.c:4183: _PyType_Lookup: Assertion `!PyErr_Occurred()' failed.
Aborted (core dumped)
```
Stack trace:
```c++
#0 0x00007ffff7d3f34c in __pthread_kill_implementation () from /usr/lib/libc.so.6
#1 0x00007ffff7cf24b8 in raise () from /usr/lib/libc.so.6
#2 0x00007ffff7cdc534 in abort () from /usr/lib/libc.so.6
#3 0x00007ffff7cdc45c in __assert_fail_base.cold () from /usr/lib/libc.so.6
#4 0x00007ffff7ceb116 in __assert_fail () from /usr/lib/libc.so.6
#5 0x000055555574ecd9 in _PyType_Lookup (type=type@entry=0x555555be5430, name=name@entry=0x555555bb7c10 <_PyRuntime+334480>)
at Objects/typeobject.c:4183
#6 0x00005555557319e3 in _PyObject_GenericGetAttrWithDict (obj=0x7ffff77a7c90, name=0x555555bb7c10 <_PyRuntime+334480>, dict=dict@entry=0x0,
suppress=suppress@entry=1) at Objects/object.c:1266
#7 0x0000555555731d8d in _PyObject_LookupAttr (v=0x7ffff77a7c90, name=<optimized out>, result=result@entry=0x7ffffff40780) at Objects/object.c:933
#8 0x000055555582764c in print_exception_file_and_line (ctx=ctx@entry=0x7fffffffe270, value_p=value_p@entry=0x7ffffff40808)
at Python/pythonrun.c:941
#9 0x0000555555827ed2 in print_exception (ctx=ctx@entry=0x7fffffffe270, value=value@entry=0x7ffff77a7c90) at Python/pythonrun.c:1209
#10 0x0000555555827fcc in print_exception_group (ctx=ctx@entry=0x7fffffffe270, value=value@entry=0x7ffff77a7c90) at Python/pythonrun.c:1377
#11 0x0000555555828418 in print_exception_recursive (ctx=ctx@entry=0x7fffffffe270, value=value@entry=0x7ffff77a7c90) at Python/pythonrun.c:1491
#12 0x0000555555828491 in print_chained (ctx=ctx@entry=0x7fffffffe270, value=value@entry=0x7ffff77a7c90,
message=message@entry=0x55555594b200 <context_message> "During handling of the above exception, another exception occurred:\n",
tag=tag@entry=0x5555558deda0 "context") at Python/pythonrun.c:1254
#13 0x000055555582868e in print_exception_cause_and_context (ctx=ctx@entry=0x7fffffffe270, value=value@entry=0x7ffff780e900)
at Python/pythonrun.c:1344
#14 0x00005555558283d3 in print_exception_recursive (ctx=ctx@entry=0x7fffffffe270, value=value@entry=0x7ffff780e900) at Python/pythonrun.c:1482
#15 0x0000555555828491 in print_chained (ctx=ctx@entry=0x7fffffffe270, value=value@entry=0x7ffff780e900,
message=message@entry=0x55555594b200 <context_message> "During handling of the above exception, another exception occurred:\n",
tag=tag@entry=0x5555558deda0 "context") at Python/pythonrun.c:1254
...
#16186 0x000055555582868e in print_exception_cause_and_context (ctx=ctx@entry=0x7fffffffe270, value=value@entry=0x7ffff6be0ec0)
at Python/pythonrun.c:1344
#16187 0x00005555558283d3 in print_exception_recursive (ctx=ctx@entry=0x7fffffffe270, value=value@entry=0x7ffff6be0ec0) at Python/pythonrun.c:1482
#16188 0x0000555555828491 in print_chained (ctx=ctx@entry=0x7fffffffe270, value=value@entry=0x7ffff6be0ec0,
message=message@entry=0x55555594b200 <context_message> "During handling of the above exception, another exception occurred:\n",
tag=tag@entry=0x5555558deda0 "context") at Python/pythonrun.c:1254
#16189 0x000055555582868e in print_exception_cause_and_context (ctx=ctx@entry=0x7fffffffe270, value=value@entry=0x7ffff6be0f30)
at Python/pythonrun.c:1344
#16190 0x00005555558283d3 in print_exception_recursive (ctx=ctx@entry=0x7fffffffe270, value=value@entry=0x7ffff6be0f30) at Python/pythonrun.c:1482
#16191 0x0000555555828969 in _PyErr_Display (file=file@entry=0x7ffff79ba7b0, exception=exception@entry=0x555555a75160 <_PyExc_KeyboardInterrupt>,
value=value@entry=0x7ffff6be0f30, tb=tb@entry=0x7ffff6c22ad0) at Python/pythonrun.c:1530
#16192 0x0000555555828ad3 in PyErr_Display (exception=0x555555a75160 <_PyExc_KeyboardInterrupt>, value=0x7ffff6be0f30, tb=0x7ffff6c22ad0)
at Python/pythonrun.c:1562
#16193 0x0000555555835660 in sys_excepthook_impl (module=module@entry=0x7ffff7947f50, exctype=<optimized out>, value=<optimized out>,
traceback=<optimized out>) at ./Python/sysmodule.c:748
#16194 0x00005555558356c6 in sys_excepthook (module=0x7ffff7947f50, args=args@entry=0x7fffffffe3f0, nargs=nargs@entry=3)
at ./Python/clinic/sysmodule.c.h:102
#16195 0x000055555572c28c in cfunction_vectorcall_FASTCALL (func=0x7ffff7948470, args=0x7fffffffe3f0, nargsf=<optimized out>, kwnames=<optimized out>)
at Objects/methodobject.c:422
#16196 0x00005555556e1aa0 in _PyObject_VectorcallTstate (tstate=0x555555bd8808 <_PyRuntime+468616>, callable=callable@entry=0x7ffff7948470,
args=0x7fffffffe3f0, args@entry=0x7fffffffe360, nargsf=nargsf@entry=3, kwnames=kwnames@entry=0x0) at ./Include/internal/pycore_call.h:92
#16197 0x00005555556e1b90 in _PyObject_FastCallTstate (nargs=3, args=0x7fffffffe360, func=0x7ffff7948470, tstate=<optimized out>)
at ./Include/internal/pycore_call.h:116
#16198 _PyObject_FastCall (func=func@entry=0x7ffff7948470, args=args@entry=0x7fffffffe3f0, nargs=nargs@entry=3) at Objects/call.c:310
#16199 0x0000555555828c2b in _PyErr_PrintEx (tstate=0x555555bd8808 <_PyRuntime+468616>, set_sys_last_vars=set_sys_last_vars@entry=1)
at Python/pythonrun.c:813
#16200 0x0000555555828eb1 in PyErr_PrintEx (set_sys_last_vars=set_sys_last_vars@entry=1) at Python/pythonrun.c:861
#16201 0x0000555555828ec1 in PyErr_Print () at Python/pythonrun.c:867
#16202 0x00005555558293ab in _PyRun_SimpleFileObject (fp=fp@entry=0x555555bdb530, filename=filename@entry=0x7ffff77adbc0, closeit=closeit@entry=1,
flags=flags@entry=0x7fffffffe508) at Python/pythonrun.c:439
#16203 0x000055555582951d in _PyRun_AnyFileObject (fp=fp@entry=0x555555bdb530, filename=filename@entry=0x7ffff77adbc0, closeit=closeit@entry=1,
flags=flags@entry=0x7fffffffe508) at Python/pythonrun.c:78
#16204 0x00005555558487a5 in pymain_run_file_obj (program_name=program_name@entry=0x7ffff77c7370, filename=filename@entry=0x7ffff77adbc0,
skip_source_first_line=0) at Modules/main.c:360
#16205 0x00005555558488cd in pymain_run_file (config=config@entry=0x555555bbd3c0 <_PyRuntime+356928>) at Modules/main.c:379
#16206 0x0000555555849063 in pymain_run_python (exitcode=exitcode@entry=0x7fffffffe684) at Modules/main.c:610
#16207 0x000055555584930f in Py_RunMain () at Modules/main.c:689
#16208 0x0000555555849386 in pymain_main (args=args@entry=0x7fffffffe6e0) at Modules/main.c:719
#16209 0x000055555584944c in Py_BytesMain (argc=<optimized out>, argv=<optimized out>) at Modules/main.c:743
#16210 0x0000555555651772 in main (argc=<optimized out>, argv=<optimized out>) at ./Programs/python.c:15
```
<!-- gh-linked-prs -->
### Linked PRs
* gh-102078
<!-- /gh-linked-prs -->
| 022b44f2546c44183e4df7b67e3e64502fae9143 | 4d3bc89a3f54c4f09756a9b644b3912bf54191a7 |
python/cpython | python__cpython-102039 | # site.py does a sometimes unnecessary stat
I had reason to look at site.py and noticed that if pyvenv.cfg exists alongside sys.executable, we do an unnecessary stat. It's a small thing, but we do run site.py on almost every interpreter startup.
<!-- gh-linked-prs -->
### Linked PRs
* gh-102039
<!-- /gh-linked-prs -->
| 385b5d6e091da454c3e0d3f7271acf3af26d8532 | 55decb72c4d2e4ea00ed13da5dd0fd22cecb9083 |
python/cpython | python__cpython-102066 | # Syntax error in _delim.py
# Bug report
As part of packaging in Fink, we try to byte-compile all .py files using the just-built interpreter in the staging directory for installation. After compiling from the source tarball:
```
make install DESTDIR=/sw/build.build/root-python311-3.11.2-1
[...]
DYLD_LIBRARY_PATH=/sw/build.build/root-python311-3.11.2-1/sw/lib
+ /sw/build.build/root-python311-3.11.2-1/sw/bin/python3.11 -m compileall -f -o 0 -o 1 -o 2 _sysconfigdata__darwin_darwin.py Tools
[...]
Compiling 'Tools/c-analyzer/c_parser/parser/_delim.py'...
*** File "Tools/c-analyzer/c_parser/parser/_delim.py", line 37
''')
^
SyntaxError: f-string: empty expression not allowed
```
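The error class can be reproduced directly (the exact message wording varies across versions, so only the exception type is checked here):

```python
# An empty expression inside an f-string is a compile-time SyntaxError.
try:
    compile("f'{}'", "<test>", "eval")
except SyntaxError:
    pass
else:
    raise AssertionError("expected SyntaxError")
```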
# Your environment
This is OS X 10.13, I see the same results on python-3.11.2, and 3.10.10 and 3.10.4 but not 3.9.12, which makes sense since _delim.py was introduced via PR #22841 and unchanged since that point.
<!-- gh-linked-prs -->
### Linked PRs
* gh-102066
<!-- /gh-linked-prs -->
| 1ca315538f2f9da6c7b86c4c46e76d454c1ec4b9 | 7559f5fda94ab568a1a910b17683ed81dc3426fb |
python/cpython | python__cpython-102071 | # `threading.RLock` must not support `*args, **kwargs` arguments
Right now we have an interesting problem: `_CRLock` supports `*args, **kwargs` in `__new__`, while `_PYRLock` does not.
See:
1. https://github.com/python/cpython/blob/128379b8cdb88a6d3d7fed24df082c9a654b3fb8/Lib/threading.py#L133
2. https://github.com/python/cpython/blob/128379b8cdb88a6d3d7fed24df082c9a654b3fb8/Modules/_threadmodule.c#L506 (they are unused)
I am leaving aside the problem that `_PYRLock` is not ever used (unless imported directly) for now.
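The asymmetry is observable directly (a quick check; `_PyRLock` is an internal name):

```python
import threading

# The pure-Python implementation already rejects constructor arguments...
try:
    threading._PyRLock(1, timeout=2)
except TypeError:
    pass
else:
    raise AssertionError("expected TypeError")

# ...while a plain RLock() works with either implementation.
lock = threading.RLock()
with lock:
    pass
```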
Right now our docs do not say what the signature should be.
Some facts:
1. CPython's code never uses `RLock` with arguments (and never says it has arguments)
2. typeshed (a source of truth for all typecheckers and several IDEs [never had a single issue](https://github.com/python/typeshed/issues?q=is%3Aissue+rlock+is%3Aclosed) about missing `*args, **kwargs` in their `RLock`'s stub: https://github.com/python/typeshed/blob/0bb7d621d39d38bee7ce32e1ee920bd5bc4f9503/stdlib/threading.pyi#L113-L119
3. We don't have any docs about `RLock`'s signature
So, I think we can:
1. Document that `RLock` has `()` signature
2. Remove `*args, **kwargs` from C impl
3. Document this in "What's new"
Does it sound like a plan? If yes, I would like to do the honours :)
<!-- gh-linked-prs -->
### Linked PRs
* gh-102071
<!-- /gh-linked-prs -->
| 80f30cf51bd89411ef1220d714b33667d6a39901 | 0b243c2f665e6784b74ac4d602d4ee429a2ad7f0 |
python/cpython | python__cpython-102124 | # blake2-config.h `HAVE_SSE4_1` typo fix
# Bug report
### Unable to compile cpython on Windows with **`AVX`** enable
1. adding `<EnableEnhancedInstructionSet>AdvancedVectorExtensions</EnableEnhancedInstructionSet>` inside `<ClCompile>` in the vcxproj files (resulting in `/arch:AVX` on the cl.exe command line)
2. `MSBuild.exe C:\sdk\src\python\PCbuild\pcbuild.proj /nologo /nr:false /m:16 /p:Turbo=true /p:CL_MPCount=16 /p:RunCodeAnalysis=false /p:DebugType=None /p:DebugSymbols=true /p:WindowsTargetPlatformVersion=10.0.22621.0 /p:PlatformToolset=v143 /clp:EnableMPLogging;Summary;ShowCommandLine /v:d /p:Configuration=Release /p:Platform="x64" /p:IncludeExtensions=true /p:IncludeExternals=true /p:IncludeTests=false /p:IncludeCTypes=true /p:IncludeSSL=true /p:IncludeTkinter=false /p:IncludeUwp=true /t:RebuildAll /p:EXTERNALS_DIR="C:\sdk\release\vs17_x64-avx" /p:ExternalsSrcDir="C:\sdk\src" /p:_DLLSuffix=-81_3-x64 /p:SqliteVersionStrToParse=3.40.1.0 /nowarn:C4244`
3. result:
> C:\sdk\src\python\Modules\_blake2\impl\blake2-config.h(67,1): fatal error C1189: #error: "This code requires at least SSE2." (compiling source file ..\Modules\_blake2\blake2s_impl.c) [C:\sdk\src\python\PCbuild\pythoncore.vcxproj]
> C:\sdk\src\python\Modules\_blake2\impl\blake2-config.h(67,1): fatal error C1189: #error: "This code requires at least SSE2." (compiling source file ..\Modules\_blake2\blake2b_impl.c) [C:\sdk\src\python\PCbuild\pythoncore.vcxproj]
### It's just a typo issue in `Modules/_blake2/impl/blake2-config.h`
```diff
diff --git "a/Modules/_blake2/impl/blake2-config.h" "b/Modules/_blake2/impl/blake2-config.h"
index f5dd6faa9e..c09cb4bcf0 100644
--- "a/Modules/_blake2/impl/blake2-config.h"
+++ "b/Modules/_blake2/impl/blake2-config.h"
@@ -53,7 +53,7 @@
#endif
#endif
-#ifdef HAVE_SSE41
+#ifdef HAVE_SSE4_1
#ifndef HAVE_SSSE3
#define HAVE_SSSE3
#endif
```
The patch fixes the issue.
The typo seems to be present on every branch.
# Your environment
<!-- Include as many relevant details as possible about the environment you experienced the bug in -->
- CPython versions tested on: branch 3.8 / tag 3.8.16
- Operating system and architecture: Win64 vs17 AVX
<!--
You can freely edit this text. Remove any lines you believe are unnecessary.
-->
<!-- gh-linked-prs -->
### Linked PRs
* gh-102124
* gh-102916
* gh-102917
<!-- /gh-linked-prs -->
| ea93bde4ece139d4152a59f2c38aa6568559447c | 61405da9a5689f554aa53929a2a9c168cab6056b |
python/cpython | python__cpython-102025 | # Reduce _idle_semaphore calls in ThreadPoolExecutor
# Feature or enhancement
In `concurrent.futures.thread.ThreadPoolExecutor`, the `executor._idle_semaphore.release()` method should be called only if the work queue is empty.
# Pitch
Currently `_idle_semaphore.release()` is called after processing every item in the queue. That produces useless semaphore updates when the queue still has items to process.
This optimization would increase the processing speed of the work queue.
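The proposed guard, sketched against a plain `queue.Queue` (names are illustrative; the real code lives in `Lib/concurrent/futures/thread.py`):

```python
import queue

work_queue = queue.Queue()

def maybe_signal_idle(release):
    # Proposed: only release the idle semaphore when no work is pending,
    # instead of after every processed item.
    if work_queue.empty():
        release()

released = []
work_queue.put("task")
maybe_signal_idle(lambda: released.append(True))  # queue non-empty: no release
work_queue.get()
maybe_signal_idle(lambda: released.append(True))  # queue drained: release
assert released == [True]
```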
<!-- gh-linked-prs -->
### Linked PRs
* gh-102025
* gh-104959
<!-- /gh-linked-prs -->
| 0242e9a57aa87ed0b5cac526f65631c654a39054 | 72c3d2e105f120f6f2bce410699b34dac4e948fd |
python/cpython | python__cpython-102022 | # Enable definition for generated interpreter cases to be composed from multiple files
# Feature or enhancement
Allow specifying multiple input files to the `generate_cases.py` script, making it behave mostly as if the input files were concatenated. Additionally allow existing definitions of instructions to be explicitly overridden using a new `override` keyword.
I attach a PR with a proposed initial implementation.
# Pitch
In [Cinder](https://github.com/facebookincubator/cinder) we add a number of new instructions to support our features like Static Python. We also currently have a few tweaks to existing instructions. When we migrate to the new upstream generated interpreter it would be preferable if we could avoid having to make changes to the core `bytecodes.c` and keep our own definitions/changes separate. As well as easing upstream merges, this would also avoid us having to copy/fork more than we need for Cinder features in a standalone module.
I've made an initial implementation which allows extra files to be passed to `generate_cases.py` by repeated use of the `-i` argument. E.g.:
```
$ generate_cases.py -i bytecodes.c -i cinder-bytecodes.c -o generated_cases.c.h
```
This mostly behaves as if the input files are concatenated but parsing only takes place between the `BEGIN/END BYTECODES` markers in each file. We also take advantage of mostly existing book-keeping features to track which input files definitions come from when producing errors.
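A hypothetical sketch of the CLI shape (names assumed for illustration): repeated `-i` flags accumulate into a list, as with multiple input files for `generate_cases.py`.

```python
import argparse

parser = argparse.ArgumentParser()
# action="append" collects every -i occurrence into a list, in order.
parser.add_argument("-i", "--input", action="append", default=[])
parser.add_argument("-o", "--output")

args = parser.parse_args(
    ["-i", "bytecodes.c", "-i", "cinder-bytecodes.c", "-o", "generated_cases.c.h"]
)
print(args.input)
```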
I've also added a new `override` keyword which can prefix instruction definitions to explicitly express the intent to override an existing definition. E.g.:
```
inst(NOP, (--)) {
}
// This is the definition which ends up being used in generation.
override inst(NOP, (--)) {
magic();
}
// Error - previous definition of NOP exists and "override" not specified.
inst(NOP, (--)) {
}
// Error - requested override but no previous definition of ZOP exists.
override inst(ZOP, (--)) {
}
```
The goal of explicitly calling out overrides is to quickly reveal if either: something we modify is removed from upstream, or if a new opcode we add ends up with a name clash with a new upstream opcode.
# Previous discussion
The idea of having multiple input files for interpreter generation was briefly [discussed around the faster-cpython project](https://github.com/faster-cpython/ideas/issues/479#issuecomment-1279598946).
<!-- gh-linked-prs -->
### Linked PRs
* gh-102022
<!-- /gh-linked-prs -->
| 8de59c1bb9fdcea69ff6e6357972ef1b75b71721 | cb944d0be869dfb1189265467ec8a986176cc104 |
python/cpython | python__cpython-102014 | # Add a public C-API function to iterate over GC’able objects
# Feature or enhancement
An API similar to:
```
/* Visit all live GC-capable objects, similar to gc.get_objects(None).
*
* Users should avoid allocating or deallocating objects on the Python heap in
* the callback. */
typedef void (*gcvisitobjects_t)(PyObject *, void *);
PyAPI_FUNC(void) PyGC_VisitObjects(gcvisitobjects_t callback, void *arg);
```
Which could be used as:
```
void count_functions(PyObject *op, void *arg) {
if (PyFunction_Check(op)) {
(*(int*)arg)++;
}
}
int get_num_functions() {
int count;
PyGC_VisitObjects(count_functions, &count);
return count;
}
```
# Pitch
We have a version of this in [Cinder](https://github.com/facebookincubator/cinder) already and right now and use it to [identify all generator objects so they can be de-opted when our JIT is shutdown](https://github.com/facebookincubator/cinder/blob/cinder/3.10/Jit/pyjit.cpp#L1871). In future we plan to use it for things like discovering existing PyFunction objects, and then using gh-91049 to mark them as JIT’able. This could facilitate loading the JIT feature at a later time (e.g. as part of a module).
[Edited] In general, there already exists a Python API for iterating over GC’able objects via [gc.get_objects()](https://docs.python.org/3/library/gc.html#gc.get_objects), however there is no good way of doing this from native extensions. While it is technically possible to import the `gc` module and extract the `get_objects` function in C, this is cumbersome, and more importantly might lead to unexpected behavior if Python code has replaced the `get_objects` function.
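The Python-level analogue of the proposed C callback looks like this sketch, counting all live function objects tracked by the GC:

```python
import gc
import types

def count_functions():
    # Equivalent of the count_functions C callback above, using the
    # existing Python API rather than the proposed PyGC_VisitObjects.
    return sum(
        1 for obj in gc.get_objects() if isinstance(obj, types.FunctionType)
    )

print(count_functions() > 0)
```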
<!-- gh-linked-prs -->
### Linked PRs
* gh-102014
* gh-115560
<!-- /gh-linked-prs -->
| cbd3fbfb6e5c1cc96bbeb99483a580f165b01671 | 457e4d1a516c2b83edeff2f255f4cd6e7b114feb |
python/cpython | python__cpython-102012 | # Some of the docs no longer need to mention sys.exc_info()
There are a number of places in the documentation where sys.exc_info() is no longer necessary, and can be replaced by sys.exception(). They can be updated now.
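For example, where the docs only need the exception instance, the two spellings relate like this (guarded so the snippet also runs on versions before 3.11, where `sys.exception()` does not exist):

```python
import sys

try:
    1 / 0
except ZeroDivisionError:
    exc_type, exc_value, _ = sys.exc_info()  # legacy three-tuple
    # sys.exception() (new in 3.11) returns the instance directly.
    exc = sys.exception() if hasattr(sys, "exception") else exc_value
    print(exc is exc_value, type(exc).__name__)
```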
<!-- gh-linked-prs -->
### Linked PRs
* gh-102012
* gh-102101
<!-- /gh-linked-prs -->
| 4d3bc89a3f54c4f09756a9b644b3912bf54191a7 | 36854bbb240e417c0df6f0014924fcc899388186 |
python/cpython | python__cpython-102009 | # test_except_star can be simplified with sys.exception()
Migrating test_except_star to use sys.exception() instead of sys.exc_info() can make it a bit simpler. It won't cause any backporting issues because this test was only added in 3.11.
<!-- gh-linked-prs -->
### Linked PRs
* gh-102009
* gh-102116
<!-- /gh-linked-prs -->
| c2b85a95a50687a2e5d1873e17266370876e77e9 | 7346a381be1bc5911924fcf298408e6909f3949f |
python/cpython | python__cpython-101998 | # Upgrade the bundled version of pip to 23.0.1
# Feature or enhancement
This is the latest pip bugfix release.
# Pitch
This ensures that users who install the newest release of Python get the newest version of pip.
<!-- gh-linked-prs -->
### Linked PRs
* gh-101998
* gh-102241
* gh-102242
* gh-102243
* gh-102244
* gh-102273
* gh-108998
<!-- /gh-linked-prs -->
| 89d9ff0f48c51a85920c7372a7df4a2204e32ea5 | b7c11264476ccc11e4bdf4bd3c3664ccd1b7c5f9 |
python/cpython | python__cpython-101994 | # plistlib documentation examples are not runnable
# Documentation
The current `plistlib` documentation examples do not run as written. It would be helpful if users were able to copy and paste the code example and successfully run it.
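A minimal, self-contained round-trip of the kind a runnable doc example could show:

```python
import plistlib

data = {"name": "example", "count": 3, "enabled": True}
payload = plistlib.dumps(data)    # serialize to XML plist bytes
restored = plistlib.loads(payload)  # parse it back
print(restored == data)
```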
<!-- gh-linked-prs -->
### Linked PRs
* gh-101994
* gh-101999
* gh-102000
* gh-102133
* gh-102428
* gh-102429
<!-- /gh-linked-prs -->
| a1723caabfcdca5d675c4cb04554fb04c7edf601 | f482ade4c7887c49dfd8bba3be76f839e562608d |
python/cpython | python__cpython-107680 | # The annotations in c_annotations.py cannot be translated
The c_annotations Sphinx extension at https://github.com/python/cpython/blob/main/Doc/tools/extensions/c_annotations.py adds some markup to the documentation files but those do not appear in the translation dumps, e.g. the file at https://git.afpy.org/AFPy/python-docs-fr/src/branch/3.11/c-api/refcounting.po contains no annotations of the form "Part of the Stable ABI since version 3.10.".
Because there is no translation support in c_annotations.py and the annotations are not in the translation dump either, it is not possible to translate those messages at the moment.
<!-- gh-linked-prs -->
### Linked PRs
* gh-107680
* gh-112940
* gh-112941
<!-- /gh-linked-prs -->
| 9cdf05bc28c5cd8b000b9541a907028819b3d63e | 42a86df3a376a77a94ffe6b4011a82cf51dc336a |
python/cpython | python__cpython-102318 | # argparse metavar parentheses dropped on usage line
Parentheses in argparse `metavar`s are dropped on the `usage` line but appear as expected in the argument list. In particular, the very first and very last parentheses disappear. It looks from other issues (e.g. #56083) that braces, brackets and parentheses in `metavar`s are problematic, so this may be part of a wider and more difficult set of puzzlers to fix.
Illustrative code:
```
#!/usr/bin/python3
"""Demonstrate parenthesis drops in argparse metavar."""
import argparse
# The very first and last parentheses disappear in the usage line, but are
# present in the argument list.
parser = argparse.ArgumentParser(
description='Demonstrate parenthesis drops in argument meta-descriptions')
parser.add_argument(
'positional',
help='positional argument',
metavar='(example) positional')
parser.add_argument(
'-p',
'--optional',
help='optional argument',
type=int,
choices=[1, 2],
metavar='{1 (option A), 2 (option B)}',
default=1)
arguments = parser.parse_args()
print(arguments)
```
When this is run with `-h`, the help text is rendered as shown below. Note the parentheses are missing before `option A` and after `example`:
```
usage: parens.py [-h] [-p {1 option A), 2 (option B)}] (example positional
Demonstrate parenthesis drops in argument meta-descriptions
positional arguments:
(example) positional positional argument
optional arguments:
-h, --help show this help message and exit
-p {1 (option A), 2 (option B)}, --optional {1 (option A), 2 (option B)}
optional argument
```
I've tried this on Python 3.8.10 and Python 3.10.6.
<!-- gh-linked-prs -->
### Linked PRs
* gh-102318
* gh-102438
* gh-102439
<!-- /gh-linked-prs -->
| 9a478be1a4314734c697dda7a7b0e633a6fb0751 | 66aa78cbe604a7c5731f074b869f92174a8e3b64 |
python/cpython | python__cpython-102803 | # Potential SegFault with multithreading garbage collection.
For now, I can only occasionally observe the segfault on GitHub Actions. This is an issue that's not easy to reproduce, but I tried to understand the cause of it.
The direct cause would be in `deduce_unreachable` in `gcmodule.c`. In that function, `gc` tries to find cycles by traversing objects, including frame, which uses `_PyFrame_Traverse` for all its objects. In `_PyFrame_Traverse`, it uses `frame->stacktop` as the index range for all the locals and temporary data on stack(not sure if that's on purpose). However, `frame->stacktop` is not updated in real-time, which means the object it traverses might not be valid.
For example, in the `FOR_ITER` dispatch code, there's a `Py_DECREF(iter); STACK_SHRINK(1);` when the iterator is exhausted. However, `STACK_SHRINK` only adjusts `stack_pointer`, not `frame->stacktop`. At this point, the `iter` that was just freed can still be traversed during garbage collection.
There might be something I missed because it's not trivial to reproduce this, but I got a demo that could reproduce this issue occasionally.
```python
from multiprocessing import Pool
import sys
def tracefunc(frame, *args):
a = 100 ** 100
def pool_worker(item):
return {"a": 1}
def pool_indexer(path):
item_count = 0
with Pool(processes=8) as pool:
for i in pool.imap(pool_worker, range(1, 2000), chunksize=10):
item_count = item_count + 1
sys.setprofile(tracefunc)
pool_indexer(10)
```
It might have something to do with the profile function, I think I can only reproduce this with it. You need to enable `--with-address-sanitizer` to find an error of ``ERROR: AddressSanitizer: heap-use-after-free on address``. Normally in ``Py_TYPE Include/object.h:135``, where the code dereferenced ``ob``, which could be freed already.
The memory it points to is often benign so I'm not able to reliably generate SegFaults, but in theory, this is a memory violation.
Python Version: cpython/main
OS Version: Ubuntu 20 on WSL
<!-- gh-linked-prs -->
### Linked PRs
* gh-102803
* gh-102807
<!-- /gh-linked-prs -->
| 039714d00f147be4d018fa6aeaf174aad7e8fa32 | b3cc11a08e1e9121ddba3b9afb9fae032e86f449 |
python/cpython | python__cpython-101976 | # Docs: Name of parameter should be 'def' in PyModule_FromDefAndSpec.
# Documentation
In the description of **PyModule_FromDefAndSpec** and **PyModule_FromDefAndSpec2**, the definition(PyModuleDef) parameter should be _def_, not _module_.
https://docs.python.org/3/c-api/module.html
<!-- gh-linked-prs -->
### Linked PRs
* gh-101976
* gh-101982
* gh-101983
<!-- /gh-linked-prs -->
| a3bb7fbe7eecfae6bf7b2f0912f9b2b12fac8db1 | 984f8ab018f847fe8d66768af962f69ec0e81849 |
python/cpython | python__cpython-101968 | # ceval.c: `positional_only_passed_as_keyword` can be failed with segfault
https://github.com/python/cpython/blob/4d8959b73ac194ca9a2f623dcb5c23680f7d8536/Python/ceval.c#L1251-L1285
This implementation doesn't take into account the case when `PyList_New` returns `NULL`.
If `PyList_New(0)` returns `NULL`, `PyList_Append` will fail with a segfault, because `Py_TYPE` will try to reach `ob_type` of `(PyObject *) NULL`.
This is hard to reproduce, because the only way `PyList_New` can fail is if it runs out of memory, but theoretically it can happen.
<!-- gh-linked-prs -->
### Linked PRs
* gh-101968
* gh-102016
* gh-102015
* gh-114738
<!-- /gh-linked-prs -->
| 89413bbccb9261b72190e275eefe4b0d49671477 | 7f1c72175600b21c1c840e8988cc6e6b4b244582 |
python/cpython | python__cpython-101966 | # `Py_EnterRecursiveCall()`/`_Py_EnterRecursiveCall()` return value misused in a number of places
Using ``grep``, I identified 3 instances in `main` where code checks if ``Py_EnterRecursiveCall(...) < 0`` (which just returns ``_Py_EnterRecursiveCall(...)``) or ``_Py_EnterRecursiveCall(...) < 0``.
1. ``Py_EnterRecursiveCall()`` [documentation](https://docs.python.org/3/c-api/exceptions.html#c.Py_EnterRecursiveCall) only guarantees that a **nonzero** value is returned in the event of an error.
2. The actual implementation can't return a negative value:
https://github.com/python/cpython/blob/76449350b3467b85bcb565f9e2bf945bd150a66e/Include/internal/pycore_ceval.h#L130-L138
<!-- gh-linked-prs -->
### Linked PRs
* gh-101966
<!-- /gh-linked-prs -->
| 0f7a9725300e750947159409abbf5722a4a79f8b | 02d9f1504beebd98dea807f4b8761d3675a500d0 |
python/cpython | python__cpython-102068 | # Exception using fileinput.hook_compressed in binary mode
# Bug report
When using the [`fileinput.hook_compressed`](https://docs.python.org/3/library/fileinput.html#fileinput.hook_compressed) hook in binary mode in the case that the file is *not* compressed, an exception is raised instead of processing the file like normal.
```python
# fileinput_cat.py
import fileinput
import sys
with fileinput.input(mode='rb', openhook=fileinput.hook_compressed) as f:
for line in f:
sys.stdout.buffer.write(line)
```
```console
$ python3.10 fileinput_cat.py fileinput_cat.py
Traceback (most recent call last):
File "fileinput_bug.py", line 5, in <module>
for line in f:
File ".../lib/python3.10/fileinput.py", line 256, in __next__
line = self._readline()
File ".../lib/python3.10/fileinput.py", line 385, in _readline
self._file = self._openhook(self._filename, self._mode)
File ".../lib/python3.10/fileinput.py", line 432, in hook_compressed
return open(filename, mode, encoding=encoding, errors=errors)
ValueError: binary mode doesn't take an encoding argument
```
# Your environment
I tried this on Python 3.10.9 (installed from nixpkgs) on Intel macOS 11.7.3. It was working on Python 3.9.16.
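A hedged workaround sketch (the function name is hypothetical; this is not the stdlib fix): only pass the encoding arguments when the uncompressed file is opened in text mode.

```python
import bz2
import gzip
import os
import tempfile

def hook_compressed_fixed(filename, mode, *, encoding=None, errors=None):
    if filename.endswith(".gz"):
        return gzip.open(filename, mode)
    if filename.endswith(".bz2"):
        return bz2.open(filename, mode)
    if "b" in mode:
        # Binary mode: open() must not receive an encoding argument.
        return open(filename, mode)
    return open(filename, mode, encoding=encoding, errors=errors)

# Demonstrate that an uncompressed file now opens fine in binary mode.
fd, path = tempfile.mkstemp()
os.write(fd, b"hello\n")
os.close(fd)
with hook_compressed_fixed(path, "rb") as f:
    first = f.read()
os.unlink(path)
print(type(first).__name__)
```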
<!-- gh-linked-prs -->
### Linked PRs
* gh-102068
* gh-102098
* gh-102099
<!-- /gh-linked-prs -->
| 6f25657b83d7a680a97849490f6e973b3a695e1a | 022b44f2546c44183e4df7b67e3e64502fae9143 |
python/cpython | python__cpython-111362 | # SystemError in re.match with a "*+" pattern
# Bug report
The following code raises a SystemError.
```python
import re
re.match('((x)|y|z)*+', 'xyz')
```
Error message:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Users\Arttu\AppData\Local\Programs\Python\Python311\Lib\re\__init__.py", line 166, in match
return _compile(pattern, flags).match(string)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
SystemError: The span of capturing group is wrong, please report a bug for the re module.
```
`((x)|y|z)++` gives the same error but `(?>((x)|y|z)*)` does not.
# Your environment
Python 3.11.2 (tags/v3.11.2:878ead1, Feb 7 2023, 16:38:35) [MSC v.1934 64 bit (AMD64)] on win32
<!-- gh-linked-prs -->
### Linked PRs
* gh-111362
* gh-126962
* gh-126963
<!-- /gh-linked-prs -->
| f9c5573dedcb2f2e9ae152672ce157987cdea612 | 7538e7f5696408fa0aa02fce8a413a7dfac76a04 |
python/cpython | python__cpython-101958 | # Set: `BUILD_SET` opcode can be failed with segfault
https://github.com/python/cpython/blob/36b139af638cdeb671cb6b8b0315b254148688f7/Python/generated_cases.c.h#L1648-L1667
&
https://github.com/python/cpython/blob/36b139af638cdeb671cb6b8b0315b254148688f7/Python/bytecodes.c#L1303-L1316
The code doesn't take into account the case when `PySet_New(NULL)` returns NULL.
We check that `PySet_Add` doesn't return a non-zero (-1) value.
But `PySet_Add` has a check that its first argument is a subclass of `set`, which fails if we pass `(PyObject *) NULL` as the first argument. Why?
```c
#define PySet_Check(ob) \
(Py_IS_TYPE((ob), &PySet_Type) || \
PyType_IsSubtype(Py_TYPE(ob), &PySet_Type))
```
`PySet_Add` uses this macro. But `Py_TYPE` will fail with a segfault when trying to access `ob_type` of `(PyObject *) NULL`.
Implementation of `Py_TYPE`:
```c
static inline PyTypeObject* Py_TYPE(PyObject *ob) {
return ob->ob_type;
}
```
```gdb
(gdb) call (PyObject *) NULL
$1 = (PyObject *) 0x0
(gdb) call $1->ob_type
Cannot access memory at address 0x8
```
So we should add a check that the value returned by `PySet_New` is not NULL.
<!-- gh-linked-prs -->
### Linked PRs
* gh-101958
<!-- /gh-linked-prs -->
| ed4dfd8825b49e16a0fcb9e67baf1b58bb8d438f | 5a1559d9493dd298a08c4be32b52295aa3eb89e5 |
python/cpython | python__cpython-101950 | # use textwrap.dedent in compiler tests to make them more readable
Some of them use dedent and some don't, it looks messy.
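For reference, `textwrap.dedent` lets test snippets be written at the surrounding indentation level and normalized before compiling:

```python
import textwrap

source = textwrap.dedent("""
    def f():
        return 1
""")
# The common leading whitespace is stripped from every line.
print(repr(source.splitlines()[1]))
```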
<!-- gh-linked-prs -->
### Linked PRs
* gh-101950
<!-- /gh-linked-prs -->
| 36b139af638cdeb671cb6b8b0315b254148688f7 | df7ccf6138b1a2ce0b82ff06aa3497ca4d38c90d |
python/cpython | python__cpython-102914 | # test_sqlite3 failure with SQLite 3.40.1
Since we recently updated from SQLite 3.35.5 to 3.40.1, I am seeing the following test failure:
```
======================================================================
FAIL: test_serialize_deserialize (test.test_sqlite3.test_dbapi.SerializeTests.test_serialize_deserialize)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/.../Python-3.11.0/Lib/test/test_sqlite3/test_dbapi.py", line 607, in test_serialize_deserialize
self.assertEqual(len(data), 8192)
AssertionError: 65536 != 8192
```
It seems that the size of serialized data changed between those two SQLite versions, causing this issue; when I remove this assert, the test passes.
This is on Oracle Solaris (both SPARC and Intel); in both 3.11.0 and the latest changes cloned from GitHub (both 3.11 and main branches).
<!-- gh-linked-prs -->
### Linked PRs
* gh-102914
* gh-102915
<!-- /gh-linked-prs -->
| 61405da9a5689f554aa53929a2a9c168cab6056b | 3d7eb66c963c0c86021738271483bef27c425b17 |
python/cpython | python__cpython-102100 | # HTTPError fp.read returns string instead of bytes
Due to https://github.com/python/cpython/pull/99966/files
However, this returns a bytes stream on the live run:
```python
from http import HTTPStatus
from mailbox import Message
from urllib.error import HTTPError
from urllib.request import urlopen
try:
urlopen("http://asadsad.sd")
except HTTPError as exception:
content = exception.fp.read()
print(type(content))
error = HTTPError(url="url", code=HTTPStatus.IM_A_TEAPOT, msg="msg", hdrs=Message(), fp=None)
print(type(error.fp.read()))
```
```
<class 'bytes'>
<class 'str'>
```
<!-- gh-linked-prs -->
### Linked PRs
* gh-102100
* gh-102117
* gh-102118
<!-- /gh-linked-prs -->
| 0d4c7fcd4f078708a5ac6499af378ce5ee8eb211 | c2b85a95a50687a2e5d1873e17266370876e77e9 |
python/cpython | python__cpython-101933 | # Multi-line arguments in a function call crashes CPython
# Crash report
In a function call, if one argument is split across two lines, CPython crashes. For example, we split the lambda expression `lambda_exp=lambda: 1` in test2.py into two lines (see test1.py). CPython reports a segmentation fault.
This crash just occurs on the latest main branch(e.g.commit a1f08f5, 8a2b7ee ). The older version of CPython(e.g. CPython 3.10.8, CPython 3.9.0 ) does not report any crash.
test1.py (Segmentation fault)
```
def foo(param, lambda_exp):
pass
foo(param=0,
lambda_exp=lambda:
1)
```
test2.py (work well )
```
def foo(param, lambda_exp):
pass
foo(param=0,
lambda_exp=lambda:1)
```
# Error messages
Segmentation Fault
# Your environment
<!-- Include as many relevant details as possible about the environment you experienced the bug in -->
- CPython versions tested on: main branch from GitHub (commit a1f08f5)
- Operating system and architecture: Ubuntu 18.04
<!-- gh-linked-prs -->
### Linked PRs
* gh-101933
<!-- /gh-linked-prs -->
| df7ccf6138b1a2ce0b82ff06aa3497ca4d38c90d | 0b13575e74ff3321364a3389eda6b4e92792afe1 |
python/cpython | python__cpython-101899 | # Missing term references for hashable definition
In the documentation, the word `hashable` is written in plain text without the term keyword (`:term:'hashable'`).
Example;
```
- Remove the entry in dictionary *p* with key *key*. *key* must be hashable;
+ Remove the entry in dictionary *p* with key *key*. *key* must be :term:`hashable`;
```
<!-- gh-linked-prs -->
### Linked PRs
* gh-101899
* gh-101901
* gh-101902
<!-- /gh-linked-prs -->
| 3690688149dca11589af59b7704541336613199a | e5da9ab2c82c6b4e4f8ffa699a9a609ea1bea255 |
python/cpython | python__cpython-101896 | # Callable iterator can SystemError if call exhausts the iterator
# Bug report
If the callable of a callable iterator (with sentinel) exhausts the iterator itself during the call, a SystemError occurs.
> Example discovered by @chilaxan
```python
def bug():
if bug.clear:
return 1
else:
bug.clear = True
list(bug.iterator)
return 0
bug.iterator = iter(bug, 1)
bug.clear = False
next(bug.iterator)
```
```
SystemError: Objects\object.c:722: bad argument to internal function
```
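For contrast, a well-behaved two-argument `iter(callable, sentinel)` terminates cleanly when the callable returns the sentinel:

```python
import io

buf = io.BytesIO(b"abcdef")
# Call buf.read(2) repeatedly until it returns the sentinel b"".
chunks = list(iter(lambda: buf.read(2), b""))
print(chunks)
```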
# Likely cause
`iterobject.calliter_iternext` does not check whether `it->it_sentinel` is NULL after the `_PyObject_CallNoArgs` call
https://github.com/python/cpython/blob/main/Objects/iterobject.c#L207-L237
```diff
static PyObject *
calliter_iternext(calliterobject *it)
{
PyObject *result;
if (it->it_callable == NULL) {
return NULL;
}
+ result = _PyObject_CallNoArgs(it->it_callable);
if (result != NULL) {
int ok;
+ ok = PyObject_RichCompareBool(it->it_sentinel, result, Py_EQ);
if (ok == 0) {
return result; /* Common case, fast path */
}
(...)
```
# Your environment
Appears to affect 3.7-3.12
<!-- gh-linked-prs -->
### Linked PRs
* gh-101896
* gh-102418
* gh-102422
<!-- /gh-linked-prs -->
| 705487c6557c3d8866622b4d32528bf7fc2e4204 | b022250e67449e0bc49a3c982fe9e6a2d6a7b71a |
python/cpython | python__cpython-101882 | # os: support blocking functions on Windows
The os.get_blocking and os.set_blocking functions are only currently supported on Unix.
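On Unix the API in question behaves like this sketch (run on a pipe file descriptor):

```python
import os

r, w = os.pipe()
os.set_blocking(r, False)          # clear O_NONBLOCK's inverse: fd is now non-blocking
nonblocking = os.get_blocking(r)   # False
os.set_blocking(r, True)
blocking = os.get_blocking(r)      # True
os.close(r)
os.close(w)
print(nonblocking, blocking)
```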
<!-- gh-linked-prs -->
### Linked PRs
* gh-101882
<!-- /gh-linked-prs -->
| 739c026f4488bd2e37d500a2c3d948aaf929b641 | 36b139af638cdeb671cb6b8b0315b254148688f7 |
python/cpython | python__cpython-101883 | # Docs: link hash doc to `object.__hash__` doc
# Documentation
A note in the documentation for the hash builtin writes
```
See __hash__ for details
```
`__hash__` should include a link.
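For context, this sketch illustrates the `hash()`/`__hash__` contract the linked section documents: objects that compare equal must have equal hashes.

```python
class Point:
    def __init__(self, x, y):
        self.x, self.y = x, y

    def __eq__(self, other):
        return (self.x, self.y) == (other.x, other.y)

    def __hash__(self):
        # hash() delegates to __hash__; hashing the value tuple keeps
        # the hash consistent with __eq__.
        return hash((self.x, self.y))

print(hash(Point(1, 2)) == hash(Point(1, 2)))
```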
<!-- gh-linked-prs -->
### Linked PRs
* gh-101883
* gh-106546
* gh-106547
<!-- /gh-linked-prs -->
| ec7180bd1b3c156d4484e8e6babc5ecb707420e3 | 69a39bd9ad52241ca0e9a1926b4536c73017d067 |
python/cpython | python__cpython-101886 | # Documentation for smtplib Missing Italics
The input parameter `source_address` for `class smtplib.SMTP` is missing _italics_. I was scanning the text looking for the documentation for this input and couldn't find it because I am used to scanning for _italics_ according to the conventions of Python docs. Here is a highlight of the location of the parameter that needs italics.
https://docs.python.org/3/library/smtplib.html#smtplib.SMTP:~:text=raised.%20The%20optional-,source_address,-parameter%20allows%20binding
<!-- gh-linked-prs -->
### Linked PRs
* gh-101886
* gh-103888
<!-- /gh-linked-prs -->
| 28a05f4cc2b150b3ff02ec255daf1b6ec886ca6f | 37e37553b09dba073cfe77d9fb96863b94df2fbc |
python/cpython | python__cpython-101866 | # `co_lnotab` must be removed in 3.12 according to PEP 626
> The co_lnotab attribute will be deprecated in 3.10 and removed in 3.12.
https://peps.python.org/pep-0626/#backwards-compatibility
It was documented as deprecated in 3.10 and to-be-removed in 3.12: https://docs.python.org/3/whatsnew/3.10.html#pep-626-precise-line-numbers-for-debugging-and-other-tools
Original issue: https://github.com/python/cpython/issues/86412
Right now CPython does not use `co_lnotab` in its source code.
But, there are two mentions of it:
1. In `gdb`: https://github.com/python/cpython/blob/6ef6915d3530e844243893f91bf4bd702dfef570/Misc/gdbinit#L60-L61 And I have no idea how it works! Some weird mix of C and some kind of a DSL / scripting language
2. In `clinic.test`: https://github.com/python/cpython/blob/6ef6915d3530e844243893f91bf4bd702dfef570/Lib/test/clinic.test#L3575 But, this is just a string-based test. So, I think it can stay there
If that's fine - I would like to do the honours.
CC @markshannon
<!-- gh-linked-prs -->
### Linked PRs
* gh-101866
* gh-126392
* gh-126403
* gh-126404
<!-- /gh-linked-prs -->
| 2a721258a199e9bcdcee2069719ad9c8f8c0d030 | e6f7d35be7fb65d8624e9411251554c9dee0c931 |
python/cpython | python__cpython-102417 | # euc_kr char '0x3164' decode('ksx1001') cause UnicodeDecodeError
# Bug report
The char '0x3164' can be encoded with `encode('ksx1001')`, but the result cannot be decoded with `decode('ksx1001')`.
```python
def main():
code_point = 0x3164
c = chr(code_point)
raw = c.encode('ksx1001')
c2 = raw.decode('ksx1001') # <--- this cause error
print(f'{c} {c2}')
if __name__ == '__main__':
main()
```
```
Traceback (most recent call last):
File "/Users/takwolf/Develop/FontDev/fusion-pixel-font/build.py", line 11, in <module>
main()
File "/Users/takwolf/Develop/FontDev/fusion-pixel-font/build.py", line 6, in main
c2 = raw.decode('ksx1001')
^^^^^^^^^^^^^^^^^^^^^
UnicodeDecodeError: 'euc_kr' codec can't decode bytes in position 0-1: incomplete multibyte sequence
```
The char is `Hangul Compatibility Jamo -> Hangul Filler`
https://unicode-table.com/en/3164/
<img width="676" alt="image" src="https://user-images.githubusercontent.com/6064962/218377969-f6a6dc6e-3464-448a-ae2d-89dea00a8efc.png">
The following code gets the zone in KS X 1001:
```python
def main():
code_point = 0x3164
c = chr(code_point)
raw = c.encode('ksx1001')
block_offset = 0xA0
zone_1 = raw[0] - block_offset
zone_2 = raw[1] - block_offset
print(f'{zone_1} {zone_2}')
if __name__ == '__main__':
main()
```
```
zone_1 = 4
zone_2 = 52
```
https://en.wikipedia.org/wiki/KS_X_1001#Hangul_Filler
<img width="737" alt="image" src="https://user-images.githubusercontent.com/6064962/218380463-1cee6580-9dcf-4804-91f1-fb5ed59ee5e1.png">

Other chars in ksx1001 encode and decode fine; only this one fails.
# Your environment
<!-- Include as many relevant details as possible about the environment you experienced the bug in -->
- CPython versions tested on: Python 3.11.1
- Operating system and architecture: macOS 13.0
<!-- gh-linked-prs -->
### Linked PRs
* gh-102417
<!-- /gh-linked-prs -->
| 77a3196b7cc17d90a8aae5629aa71ff183b9266a | 90801e48fdd2a57c5c5a5e202ff76c3a7dc42a50 |
python/cpython | python__cpython-101876 | # Add `__name__` to property
```python
class C:
@property
def foo(self):
return 1
assert C.foo.__name__ == "foo"
```
It would be very handy if this worked, so properties can be introspected the same way as functions, classmethods, etc. Name can be simply taken from `fget` the same way this is done for `__isabstractmethod__`.
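A workaround available today is going through the wrapped getter, which keeps its own name:

```python
class C:
    @property
    def foo(self):
        return 1

# The property object wraps the original function as fget,
# and that function still carries its __name__.
print(C.foo.fget.__name__)
```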
<!-- gh-linked-prs -->
### Linked PRs
* gh-101876
* gh-123399
* gh-123428
<!-- /gh-linked-prs -->
| c0b0c2f2015fb27db4306109b2b3781eb2057c2b | 9af80ec83d1647a472331bd1333a7fa9108fe98e |
python/cpython | python__cpython-101858 | # `posixmodule.c` doesn't detect xattr support on Linux with non-glibc (e.g. musl)
# Bug report
`posixmodule.c` detects xattr support by checking for `__GLIBC__` (and some other conditions) at https://github.com/python/cpython/blob/86ebd5c3fa9ac0fba3b651f1d4abfca79614af5f/Modules/posixmodule.c#L277. This incorrectly excludes the musl libc, which supports xattr functionality.
On a musl system, it's easy to reproduce with:
```
>>> import os
>>> os.listxattr
Traceback (most recent call last):
File "", line 1, in
AttributeError: module 'os' has no attribute 'listxattr'
```
# Your environment
- CPython versions tested on: 3.10.10/3.11.2/3.12.0_alpha5
- Operating system and architecture: Gentoo Linux, amd64, musl libc (not glibc)
- Downstream report in Gentoo: https://bugs.gentoo.org/894130
- Report in a Python-using project: https://github.com/GNS3/gns3-gui/issues/1392
<!-- gh-linked-prs -->
### Linked PRs
* gh-101858
* gh-101894
<!-- /gh-linked-prs -->
| 8be8101bca34b60481ec3d7ecaea4a3379fb7dbb | 928752ce4c23f47d3175dd47ecacf08d86a99c9d |
python/cpython | python__cpython-101877 | # Windows Launcher MSI package has wrong version in 3.11.2, breaks upgrade from 3.11.0, .1
# Bug report
The upgrade from Python 3.11.1 to 3.11.2 on Windows fails because the launcher package refuses to downgrade itself. That it believes a downgrade to be involved in the installation is surprising. The reason is that the official 3.11.1 installers (at https://www.python.org/downloads/windows/) contain a `launcher.msi` with ProductVersion 3.11.8009.0, while the 3.11.2 installers' `launcher.msi` has version 3.11.2150.0.
The same package in the 3.11.0 installers has version 3.11.7966.0.
Installation log:
Action start 17:08:12: LaunchConditions.
MSI (s) (50:88) [17:08:12:404]: Note: 1: 2205 2: 3: Error
MSI (s) (50:88) [17:08:12:404]: Note: 1: 2228 2: 3: Error 4: SELECT `Message` FROM `Error` WHERE `Error` = 1709
MSI (s) (50:88) [17:08:12:404]: Product: Python Launcher -- A newer version of the Python launcher is already installed.
A newer version of the Python launcher is already installed.
Action ended 17:08:12: LaunchConditions. Return value 3.
Action ended 17:08:12: INSTALL. Return value 3.
- MSI from the 3.11.2 x64 installer:

- MSI from the 3.11.1 x64 installer:

The situation is the same with the x86 installers.
# Ceterum censeo
This bug would have been avoided by using (a single) MSI as the distribution package format because it would have had one ProductVersion rather than somewhere between 12 and 23.
<!-- gh-linked-prs -->
### Linked PRs
* gh-101877
* gh-101889
<!-- /gh-linked-prs -->
| 0c6fe81dce9d6bb1dce5e4503f1b42bc5355ba24 | 95cbb3d908175ccd855078b3fab7f99e7d0bca88 |
python/cpython | python__cpython-101846 | # Doc: content of availability directive lost in i18n
# Bug report
See https://git.afpy.org/AFPy/python-docs-fr/issues/28
For example, take the `os.getgid` function. https://docs.python.org/3/library/os.html#os.getgid reads
"""
os.getgid()
Return the real group id of the current process.
[Availability](https://docs.python.org/3/library/intro.html#availability): Unix.
The function is a stub on Emscripten and WASI, see [WebAssembly platforms](https://docs.python.org/3/library/intro.html#wasm-availability) for more information.
"""
However, the French translation reads
"""
os.getgid()
Renvoie l'identifiant de groupe réel du processus actuel.
[Disponibilité](https://docs.python.org/fr/3/library/intro.html#availability) : Unix.
"""
This is unexpected. When a paragraph is not translated yet, the original is normally shown. Here, it is dropped.
Not only does this lose content, but it leads to build failures when warnings are turned into errors because the translation can lose some references.
CC @tiran (commit e3b6ff19aaa318a813130ba9ad2ab0a332f27feb)
<!-- gh-linked-prs -->
### Linked PRs
* gh-101846
* gh-101852
<!-- /gh-linked-prs -->
| 6ef6915d3530e844243893f91bf4bd702dfef570 | dfc2e065a2e71011017077e549cd2f9bf4944c54 |
python/cpython | python__cpython-120884 | # Accessing a tkinter object's string representation converts the object to a string on Windows
Introduced in #16545
Affects Python 3.7+ on Windows only
---
The [`FromObj`](https://github.com/python/cpython/blob/3eb12df8b526aa5a2ca6b43f21a1c5e7d38ee634/Modules/_tkinter.c#L1115) function in `_tkinter.c` attempts to convert a `Tcl_Obj` to an equivalent Python object if possible, and otherwise returns a `_tkinter.Tcl_Obj` with the `typename` attribute set to the original object's type.
However, on Windows, accessing the resulting object's string representation [calls `Tcl_GetUnicodeFromObj`](https://github.com/python/cpython/blob/3eb12df8b526aa5a2ca6b43f21a1c5e7d38ee634/Modules/_tkinter.c#L492), which converts the `Tcl_Obj` to a `String`. This side effect isn't mentioned in the Tcl documentation, but is [in the Tcl source code](https://github.com/tcltk/tcl/blob/c529405c659f132d217336a626895a13a0ecaef0/generic/tclStringObj.c#L655-L672). As a result, retrieving the same tk property afterwards will return a Python string instead.
Minimal example:
```py
import tkinter as tk
root = tk.Tk()
print(type(root.cget("padx")))
_ = str(root.cget("padx")) # should really not cause any side effects
print(type(root.cget("padx")))
# Windows:
# <class '_tkinter.Tcl_Obj'>
# <class 'str'>
# Other platforms:
# <class '_tkinter.Tcl_Obj'>
# <class '_tkinter.Tcl_Obj'>
```
---
Possible solutions: `unicodeFromTclObj` should copy the object before calling `Tcl_GetUnicodeFromObj`, or handle Unicode without it, as on other platforms.
<!-- gh-linked-prs -->
### Linked PRs
* gh-120884
* gh-120905
* gh-120913
<!-- /gh-linked-prs -->
| f4ddaa396715855ffbd94590f89ab7d55feeec07 | 18b6ca9660370c7fb5fd50c4036be078c3267f4c |
python/cpython | python__cpython-101843 | # as_integer_ratio docstrings
For builtin/stdlib types we have
int:
```
Return integer ratio.
Return a pair of integers, whose ratio is exactly equal to the original int
and with a positive denominator.
```
Fraction:
```
as_integer_ratio(self)
Return the integer ratio as a tuple.
Return a tuple of two integers, whose ratio is equal to the
Fraction and with a positive denominator.
```
float:
```
Return integer ratio.
Return a pair of integers, whose ratio is exactly equal to the original float
and with a positive denominator.
```
Decimal:
```
Return a pair of integers, whose ratio is exactly equal to the original
Decimal and with a positive denominator. The ratio is in lowest terms.
Raise OverflowError on infinities and a ValueError on NaNs.
```
All methods actually return a pair whose ratio is in lowest terms, but only the last docstring says this explicitly. Shouldn't the other methods do the same? (Taken from [this](https://github.com/python/cpython/pull/101780#issuecomment-1426699906).)
Edit: Ditto for some sphinx docs, e.g. for the float.as_integer_ratio(): "Return a pair of integers whose ratio is exactly equal to the original float and with a positive denominator. Raises OverflowError on infinities and a ValueError on NaNs."
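A quick check (my own illustration, not from the issue) confirming that all four types do reduce the ratio to lowest terms, even though only Decimal's docstring says so:

```python
from fractions import Fraction
from decimal import Decimal

# All four as_integer_ratio implementations return the ratio
# in lowest terms with a positive denominator.
assert (4).as_integer_ratio() == (4, 1)
assert (0.25).as_integer_ratio() == (1, 4)
assert Fraction(2, 4).as_integer_ratio() == (1, 2)
assert Decimal("0.50").as_integer_ratio() == (1, 2)
```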
<!-- gh-linked-prs -->
### Linked PRs
* gh-101843
<!-- /gh-linked-prs -->
| 4624987b296108c2dc1e6e3a24e65d2de7afd451 | 4f3786b7616dd464242b88ad6914053d409fe9d2 |
python/cpython | python__cpython-101822 | # `ast.main` is not covered
All this code is not covered: https://github.com/python/cpython/blob/b652d40f1c88fcd8595cd401513f6b7f8e499471/Lib/ast.py#L1712-L1737
Proof: https://github.com/python/cpython/blob/b652d40f1c88fcd8595cd401513f6b7f8e499471/Lib/test/test_ast.py
If I remove `def main`, tests still pass.
So, my proposal is: add a simple test case that ensures that `ast.main` is there. I don't think that we need any fancy stuff. `ast.main` is only a thin wrapper around `ast.parse` and `ast.dump` which are fully tested on their own.
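A minimal smoke test along the proposed lines might look like this (a sketch, not the final test):

```python
import ast

# The simplest possible guard: `ast.main` exists and is callable.
assert callable(ast.main)

# The parse + dump round trip that `main` thinly wraps is already
# covered elsewhere; a quick sanity check:
tree = ast.parse("x = 1")
assert "Assign" in ast.dump(tree)
```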
<!-- gh-linked-prs -->
### Linked PRs
* gh-101822
<!-- /gh-linked-prs -->
| bb396eece44036a71427e7766fbb8e0247373102 | 534660f1680955c7a6a47d5c6bd9649704b74a87 |
python/cpython | python__cpython-101904 | # Isolate the `_io` extension module
Isolate the `_io` extension module by moving all global variables to module state, porting static types to heap types, and implementing multi-phase init. All global variables in the `_io` module are static types:
- Modules/_io/bufferedio.c: PyBufferedIOBase_Type
- Modules/_io/bufferedio.c: PyBufferedRWPair_Type
- Modules/_io/bufferedio.c: PyBufferedRandom_Type
- Modules/_io/bufferedio.c: PyBufferedReader_Type
- Modules/_io/bufferedio.c: PyBufferedWriter_Type
- Modules/_io/bytesio.c: PyBytesIO_Type
- Modules/_io/bytesio.c: _PyBytesIOBuffer_Type
- Modules/_io/fileio.c: PyFileIO_Type
- Modules/_io/iobase.c: PyIOBase_Type
- Modules/_io/iobase.c: PyRawIOBase_Type
- Modules/_io/textio.c: PyIncrementalNewlineDecoder_Type
- Modules/_io/textio.c: PyTextIOBase_Type
- Modules/_io/textio.c: PyTextIOWrapper_Type
- Modules/_io/winconsoleio.c: PyWindowsConsoleIO_Type
Converting the static types to heap types involves applying PEP-687 to `_io`.
Adapting multi-phase init involves applying PEP-489 to `_io`.
<!-- gh-linked-prs -->
### Linked PRs
* gh-101904
* gh-101918
* gh-101948
* gh-101949
* gh-104178
* gh-104197
* gh-104246
* gh-104249
* gh-104264
* gh-104334
* gh-104352
* gh-104354
* gh-104355
* gh-104367
* gh-104369
* gh-104370
* gh-104383
* gh-104384
* gh-104386
* gh-104388
* gh-104418
<!-- /gh-linked-prs -->
| eb0c485b6c836abb71932537a5058344d11d7bc8 | c7766245c14fa03b8afd3aff9be30b13d0069f95 |
python/cpython | python__cpython-101811 | # Duplicated st_ino calculation
# Feature or enhancement
The `status->st_ino = (((uint64_t)info.nFileIndexHigh) << 32) + info.nFileIndexLow;` in function `_Py_fstat_noraise()` in file `Python/fileutils.c` is already calculated and assigned in the function `_Py_attribute_data_to_stat()` in same file.
So I'm proposing removing the duplicated code.
# Pitch
It's a duplicated code that is already done in the previous function call `_Py_attribute_data_to_stat()`.
# Previous discussion
<!-- gh-linked-prs -->
### Linked PRs
* gh-101811
<!-- /gh-linked-prs -->
| 95cbb3d908175ccd855078b3fab7f99e7d0bca88 | 2db2c4b45501eebef5b3ff89118554bd5eb92ed4 |
python/cpython | python__cpython-101800 | # PREP_RERAISE_STAR can be replaced by an intrinsic function
The PREP_RERAISE_STAR bytecode is not performance critical and it can be implemented as a binary instrinsic function.
<!-- gh-linked-prs -->
### Linked PRs
* gh-101800
<!-- /gh-linked-prs -->
| 81e3aa835c32363f4547b6566edf1125386f1f6d | 3690688149dca11589af59b7704541336613199a |
python/cpython | python__cpython-101798 | # Allocate PyExpat_CAPI capsule on heap
To make it compatible with sub-interpreters it needs to be allocated on heap rather than static storage.
<!-- gh-linked-prs -->
### Linked PRs
* gh-101798
<!-- /gh-linked-prs -->
| b652d40f1c88fcd8595cd401513f6b7f8e499471 | 17143e2c30ae5e51945e04eeaec7ebb0e1f07fb5 |
python/cpython | python__cpython-103877 | # Incorrect / incomplete documentation of asyncio.Server
# Documentation
URL: https://docs.python.org/3.7/library/asyncio-eventloop.html#asyncio.Server.sockets
The documentation states that `sockets` is a list of `socket.socket` objects
The actual type is a list of `asyncio.trsock.TransportSocket` (that is not documented)
We should add the missing documentation to the `asyncio.trsock` module, and make the proper relationship in the `Server` section
<!-- gh-linked-prs -->
### Linked PRs
* gh-103877
* gh-103890
<!-- /gh-linked-prs -->
| 1c0a9c5a1c84bc334f2bde9d45676f19d9632823 | 214e5684c274f359d4cc34381f65e9f1a9952802 |
python/cpython | python__cpython-102026 | # Weird PriorityQueue description
# Documentation
Someone asked [Does PriorityQueue call sorted every time get is called?](https://stackoverflow.com/questions/75407200/does-priorityqueue-call-sorted-every-time-get-is-called). First I laughed, how did they come up with that silly idea, but then I saw they got it from [the doc](https://docs.python.org/3/library/queue.html#queue.PriorityQueue) and it really does make it sound like that:
> The lowest valued entries are retrieved first (the lowest valued entry is the one returned by `sorted(list(entries))[0]`).
Not only `sorted` but also `list` first, and of course even if the queue didn't use a heap, it should be the straightforward `min(entries)` instead. This is so over-the-top inefficient and complicated that it looks like a weird joke.
Unless there's good reason for it, I suggest to add "as if" and use `min`:
> The lowest valued entries are retrieved first (the lowest valued entry is the one as if returned by `min(entries)`).
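A quick demonstration (my own, not from the report) that retrieval order matches `min` of the remaining entries, with no `sorted(list(...))` involved:

```python
import queue

q = queue.PriorityQueue()
entries = [(3, "c"), (1, "a"), (2, "b")]
for e in entries:
    q.put(e)

# The entry retrieved first is simply min(entries).
assert q.get() == min(entries)      # (1, "a")
assert q.get() == (2, "b")
assert q.get() == (3, "c")
```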
<!-- gh-linked-prs -->
### Linked PRs
* gh-102026
* gh-102106
* gh-102107
* gh-102143
* gh-102154
* gh-102155
<!-- /gh-linked-prs -->
| 350ba7c07f8983537883e093c5c623287a2a22e5 | 0f7a9725300e750947159409abbf5722a4a79f8b |
python/cpython | python__cpython-101780 | # Optimize creation of unnormalized Fractions in private methods (arithmetic, etc)
Right now we just use `__new__()` with a private kwarg `_normalize=False`. That's slightly less efficient than a dedicated class method.
POC patch
---------------
```diff
diff --git a/Lib/fractions.py b/Lib/fractions.py
index 49a3f2841a..27c75785fb 100644
--- a/Lib/fractions.py
+++ b/Lib/fractions.py
@@ -315,6 +315,13 @@ def from_decimal(cls, dec):
(cls.__name__, dec, type(dec).__name__))
return cls(*dec.as_integer_ratio())
+ @classmethod
+ def _from_pair(cls, num, den):
+ obj = super(Fraction, cls).__new__(cls)
+ obj._numerator = num
+ obj._denominator = den
+ return obj
+
def is_integer(self):
"""Return True if the Fraction is an integer."""
return self._denominator == 1
@@ -703,13 +710,13 @@ def _add(a, b):
nb, db = b._numerator, b._denominator
g = math.gcd(da, db)
if g == 1:
- return Fraction(na * db + da * nb, da * db, _normalize=False)
+ return Fraction._from_pair(na * db + da * nb, da * db)
s = da // g
t = na * (db // g) + nb * s
g2 = math.gcd(t, g)
if g2 == 1:
- return Fraction(t, s * db, _normalize=False)
- return Fraction(t // g2, s * (db // g2), _normalize=False)
+ return Fraction._from_pair(t, s * db)
+ return Fraction._from_pair(t // g2, s * (db // g2))
__add__, __radd__ = _operator_fallbacks(_add, operator.add)
```
We can drop private kwarg of `__new__()` after adopting this way.
Some benchmarks
-----------------
With above patch
```
$ ./python -m timeit -s 'from fractions import Fraction as Q' -s 'a,b=Q(3,7),Q(5, 8)' -u usec 'a+b'
50000 loops, best of 5: 4.4 usec per loop
$ ./python -m timeit -s 'from fractions import Fraction as Q' -s 'a,b=Q(3,7)**100,Q(5, 8)**100' -u usec 'a+b'
20000 loops, best of 5: 12.6 usec per loop
```
On the ~~master~~main:
```
$ ./python -m timeit -s 'from fractions import Fraction as Q' -s 'a,b=Q(3,7),Q(5, 8)' -u usec 'a+b'
50000 loops, best of 5: 6.99 usec per loop
$ ./python -m timeit -s 'from fractions import Fraction as Q' -s 'a,b=Q(3,7)**100,Q(5, 8)**100' -u usec 'a+b'
20000 loops, best of 5: 15.9 usec per loop
```
<!-- gh-linked-prs -->
### Linked PRs
* gh-101780
<!-- /gh-linked-prs -->
| 4f3786b7616dd464242b88ad6914053d409fe9d2 | bb0cf8fd60e71581a90179af63e60e8704c3814b |
python/cpython | python__cpython-101942 | # Refleak in `test_importlib` on `aarch64 RHEL8`
I am not sure what is going on with this test run:
```
Ran 1377 tests in 3.713s
OK (skipped=18, expected failures=1)
......
test_importlib leaked [6, 4, 2] references, sum=12
test_importlib leaked [4, 4, 2] memory blocks, sum=10
0:32:52 load avg: 0.42 Re-running test_asyncio in verbose mode (matching: test_fork_asyncio_subprocess)
beginning 6 repetitions
123456
/home/buildbot/buildarea/pull_request.cstratak-RHEL8-aarch64.refleak/build/Lib/multiprocessing/popen_fork.py:66: DeprecationWarning: This process (pid=4145974) is multi-threaded, use of fork() may lead to deadlocks in the child.
self.pid = os.fork()
test_fork_asyncio_subprocess (test.test_asyncio.test_unix_events.TestFork.test_fork_asyncio_subprocess) ... ok
----------------------------------------------------------------------
Ran 1 test in 0.072s
OK
./home/buildbot/buildarea/pull_request.cstratak-RHEL8-aarch64.refleak/build/Lib/multiprocessing/popen_fork.py:66: DeprecationWarning: This process (pid=4145974) is multi-threaded, use of fork() may lead to deadlocks in the child.
self.pid = os.fork()
test_fork_asyncio_subprocess (test.test_asyncio.test_unix_events.TestFork.test_fork_asyncio_subprocess) ... ok
----------------------------------------------------------------------
Ran 1 test in 0.060s
OK
./home/buildbot/buildarea/pull_request.cstratak-RHEL8-aarch64.refleak/build/Lib/multiprocessing/popen_fork.py:66: DeprecationWarning: This process (pid=4145974) is multi-threaded, use of fork() may lead to deadlocks in the child.
self.pid = os.fork()
test_fork_asyncio_subprocess (test.test_asyncio.test_unix_events.TestFork.test_fork_asyncio_subprocess) ... ok
----------------------------------------------------------------------
Ran 1 test in 0.059s
OK
./home/buildbot/buildarea/pull_request.cstratak-RHEL8-aarch64.refleak/build/Lib/multiprocessing/popen_fork.py:66: DeprecationWarning: This process (pid=4145974) is multi-threaded, use of fork() may lead to deadlocks in the child.
self.pid = os.fork()
test_fork_asyncio_subprocess (test.test_asyncio.test_unix_events.TestFork.test_fork_asyncio_subprocess) ... ok
----------------------------------------------------------------------
Ran 1 test in 0.060s
OK
./home/buildbot/buildarea/pull_request.cstratak-RHEL8-aarch64.refleak/build/Lib/multiprocessing/popen_fork.py:66: DeprecationWarning: This process (pid=4145974) is multi-threaded, use of fork() may lead to deadlocks in the child.
self.pid = os.fork()
test_fork_asyncio_subprocess (test.test_asyncio.test_unix_events.TestFork.test_fork_asyncio_subprocess) ... ok
----------------------------------------------------------------------
Ran 1 test in 0.061s
OK
./home/buildbot/buildarea/pull_request.cstratak-RHEL8-aarch64.refleak/build/Lib/multiprocessing/popen_fork.py:66: DeprecationWarning: This process (pid=4145974) is multi-threaded, use of fork() may lead to deadlocks in the child.
self.pid = os.fork()
test_fork_asyncio_subprocess (test.test_asyncio.test_unix_events.TestFork.test_fork_asyncio_subprocess) ... ok
----------------------------------------------------------------------
Ran 1 test in 0.057s
OK
.
1 test failed again:
test_importlib
```
See https://buildbot.python.org/all/#builders/802/builds/623
<!-- gh-linked-prs -->
### Linked PRs
* gh-101942
* gh-101987
* gh-101988
<!-- /gh-linked-prs -->
| 775f8819e319127f9bfb046773b74bcc62c68b6a | 3c0a31cbfd1258bd96153a007dd44a96f2947dbf |
python/cpython | python__cpython-101769 | # iter `__reduce__` can segfault if accessing `__builtins__.__dict__['iter']` mutates the iter object
# Crash report
Example from @chilaxan
```python
corrupt = iter(lambda:0, 0)
class Cstr:
def __hash__(self):
return hash('iter')
def __eq__(self, other):
[*corrupt]
return other == 'iter'
builtins = __builtins__.__dict__ if hasattr(__builtins__, '__dict__') else __builtins__
oiter = builtins['iter']
del builtins['iter']
builtins[Cstr()] = oiter
print(corrupt.__reduce__())
```
# Expected result
This should return a valid `__reduce__` tuple of the exhausted iterator. Instead, the behavior is inconsistent: segmentation faults, SystemErrors, and sometimes the iterator being returned without having been exhausted.
# Error messages
- 3.11, windows, `PYTHONMALLOC=debug`
- 3.12.0a4, windows, `PYTHONMALLOC=debug`
```py
Windows fatal exception: access violation
> exit code -1073741819 (0xC0000005)
```
- 3.12.0a4, windows, compiled with debug mode
```py
print(corrupt.__reduce__())
^^^^^^^^^^^^^^^^^^^^
SystemError: NULL object passed to Py_BuildValue
```
- 3.11, ubuntu
```py
(<built-in function iter>, (<function at 0x7fb772c3c4a0>, 0))
> terminated by signal SIGSEGV (Address boundary error)
```
- 3.12.0a4, ubuntu
```py
(<built-in function iter>, (<function at 0x7f3480d71f80>, 0))
```
- 3.12.0a4, ubuntu, `PYTHONMALLOC=debug`
```py
Fatal Python error: Segmentation fault
```
<!-- gh-linked-prs -->
### Linked PRs
* gh-101769
* gh-102228
* gh-102229
* gh-102265
* gh-102268
* gh-102269
* gh-102283
* gh-102285
* gh-102286
<!-- /gh-linked-prs -->
| d71edbd1b7437706519a9786211597d95934331a | a498de4c0ef9e264cab3320afbc4d38df6394800 |
python/cpython | python__cpython-101784 | # Update libffi in Windows installer to 3.4.4
libffi-3.4.4 was released a few months ago, and claims to fix unspecified issues on all Windows platforms.
We're currently using 3.4.3 and 3.11 and 3.12, so should update them both.
<!-- gh-linked-prs -->
### Linked PRs
* gh-101784
* gh-101795
<!-- /gh-linked-prs -->
| e1aadedf099e645fd2eb1aa8bdcde5a105cee95d | 366b94905869d680b3f1d4801fb497e78811e511 |
python/cpython | python__cpython-101764 | # imaplib's IMAP4 last example does not use the 'host' parameter which leads to a crash
# Documentation
When using the example at the bottom of the page:
```python
import getpass, imaplib
M = imaplib.IMAP4()
M.login(getpass.getuser(), getpass.getpass())
M.select()
typ, data = M.search(None, 'ALL')
for num in data[0].split():
typ, data = M.fetch(num, '(RFC822)')
print('Message %s\n%s\n' % (num, data[0][1]))
M.close()
M.logout()
```
You get a crash looking like this:
```shell
Traceback (most recent call last):
File "/Users/bob/project/main.py", line 33, in crash_example
M = imaplib.IMAP4()
^^^^^^^^^^^^^^^
File "/Users/bob/.pyenv/versions/3.11.1/lib/python3.11/imaplib.py", line 202, in __init__
self.open(host, port, timeout)
File "/Users/bob/.pyenv/versions/3.11.1/lib/python3.11/imaplib.py", line 312, in open
self.sock = self._create_socket(timeout)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/bob/.pyenv/versions/3.11.1/lib/python3.11/imaplib.py", line 302, in _create_socket
return socket.create_connection(address)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/bob/.pyenv/versions/3.11.1/lib/python3.11/socket.py", line 851, in create_connection
raise exceptions[0]
File "/Users/bob/.pyenv/versions/3.11.1/lib/python3.11/socket.py", line 836, in create_connection
sock.connect(sa)
ConnectionRefusedError: [Errno 61] Connection refused
```
Which is caused by the following line that falls back to trying to connect to a localhost IMAP server which probably does not exist.
```python
M = imaplib.IMAP4()
```
As is mentioned at the top of the page, you should provide your server name/address unless you are using a localhost instance. However it might not be obvious at first.
Adding a fake domain like in the first example leads to the same kind of message, but it invites the developer to at least provide their server address.
Thus I would suggest to stick to what is shown in the first example as:
```python
M = imaplib.IMAP4('domain.org')
```
<!-- gh-linked-prs -->
### Linked PRs
* gh-101764
* gh-117191
* gh-117192
<!-- /gh-linked-prs -->
| 39df7732178c8e8f75b12f069a3dbc1715c99995 | a1e948edba9ec6ba61365429857f7a087c5edf51 |
python/cpython | python__cpython-101761 | # Bump to SQLite 3.40.1 in macOS and Windows installers
SQLite 3.40.1 has been out for more than one month now. Suggesting to bump in all active branches (main/3.12 through 3.10).
See also [the SQLite 3.40 release log](https://sqlite.org/releaselog/3_40_1.html)
<!-- gh-linked-prs -->
### Linked PRs
* gh-101761
* gh-101762
* gh-101775
* gh-101776
* gh-101791
* gh-101792
* gh-102485
* gh-102488
* gh-102489
<!-- /gh-linked-prs -->
| d40a23c0a11060ba7fa076d50980c18a11a13a40 | 448c7d154e72506158d0a7a766e9f1cb8adf3dec |
python/cpython | python__cpython-101891 | # Isolate PyModuleDef to Each Interpreter for Extension/Builtin Modules
Typically each `PyModuleDef` for a builtin/extension module is a static global variable. Currently it's shared between all interpreters, whereas we are working toward interpreter isolation (for a variety of reasons). Isolating each `PyModuleDef` is worth doing, especially if you consider we've already run into problems[^1] because of `m_copy`.
The main focus here is on `PyModuleDef.m_base.m_copy`[^2] specifically. It's the state that facilitates importing legacy (single-phase init) extension/builtin modules that do not support repeated initialization[^3] (likely the vast majority).
<details>
<summary>(expand for more context)</summary>
----
`PyModuleDef` for an extension/builtin module is usually stored in a static variable and (with immortal objects, see gh-101755) is mostly immutable. The exception is `m_copy`, which is problematic in some cases for modules imported in multiple interpreters.
Note that `m_copy` is only relevant for legacy (single-phase init) modules, whether builtin and an extension, and only if the module does *not* support repeated initialization[^3]. It is never relevant for multi-phase init (PEP 489) modules.
* initialization
* `m_copy` is only set by `_PyImport_FixupExtensionObject()` (and thus indirectly `_PyImport_FixupBuiltin()` and `_imp.create_builtin()`)
* `_PyImport_FixupExtensionObject()` is called by `_PyImport_LoadDynamicModuleWithSpec()` when a legacy (single-phase init) extension module is loaded
* usage
* `m_copy` is only used in `import_find_extension()`, which is only called by `_imp.create_builtin()` and `_imp.create_dynamic()` (via the respective importers)
When such a legacy module is imported for the first time, `m_copy` is set to a new copy of the just-imported module's `__dict__`, which is "owned" by the current interpreter (the one importing the module). Whenever the module is loaded again (e.g. reloaded or deleted from `sys.modules` and then imported), a new empty module is created and `m_copy` is [shallow] copied into that object's `__dict__`.
When `m_copy` is originally initialized, normally that will be the first time the module is imported. However, that code can be triggered multiple times for that module if it is imported under a different name (an unlikely case but apparently a real one). In that case the `m_copy` from the previous import is replaced with the new one right after it is released (decref'ed). This isn't the ideal approach but it's also been the behavior for [quite a while](https://peps.python.org/pep-3121/).
The tricky problem here is that the same code is triggered for each interpreter that imports the legacy module. Things are fine when a module is imported for the first time in any interpreter. However, currently, any subsequent import of that module in another interpreter will trigger that replacing code. The second interpreter decref's the old `m_copy`, but that object is "owned" by the first interpreter. This is a problem[^1].
Furthermore, even if the decref-in-the-wrong-interpreter problem was gone. When `m_copy` is copied into the new module's `__dict__` on subsequent imports, it's only a shallow copy. Thus such a legacy module, imported in other interpreters than the first one, would end up with its `__dict__` filled with objects not owned by the correct interpreter.
----
</details>
Here are some possible approaches to isolating each module's `PyModuleDef` to the interpreter that imports it:
1. keep a copy of `PyModuleDef` for each interpreter (would `_PyRuntimeState.imports.extensions` need to move to the interpreter?)
2. keep just `m_copy` for/on each interpreter
3. fix `_PyImport_FixupExtensionObject()` some other way...
[^1]: see https://github.com/python/cpython/pull/101660#issuecomment-1424507393
[^2]: We should probably consider isolating `PyModuleDef.m_base.m_index`, but for now we simply sync the `modules_by_index` list of each interpreter. (Also, `modules_by_index` and `m_index` are only used for single-phase init modules.)
[^3]: specifically `def->m_size == -1`; multi-phase init modules always have `def->m_size >= 0`; single-phase init modules can also have a non-negative `m_size`
<!-- gh-linked-prs -->
### Linked PRs
* gh-101891
* gh-101919
* gh-101920
* gh-101943
* gh-101956
* gh-101969
<!-- /gh-linked-prs -->
| 984f8ab018f847fe8d66768af962f69ec0e81849 | 4d8959b73ac194ca9a2f623dcb5c23680f7d8536 |
python/cpython | python__cpython-101840 | # Document that `os.environ` forcibly upper-cases keys on case-insensitive OSs
# Documentation
[`os.environ` docs](https://docs.python.org/3/library/os.html?highlight=os%20environ#os.environ) don't mention that the keys get upper-cased automatically on e.g. Windows.
<!-- gh-linked-prs -->
### Linked PRs
* gh-101840
* gh-102393
* gh-102394
<!-- /gh-linked-prs -->
| 4e7c0cbf59595714848cf9827f6e5b40c3985924 | 12011dd8bafa6867f2b4a8a9e8e54cb0fbf006e4 |
python/cpython | python__cpython-101748 | # Refleak after new `OrderedDict` repr
It is reported by @corona10 that after https://github.com/python/cpython/commit/790ff6bc6a56b4bd6e403aa43a984b99f7171dd7 we have a new refleak:
* test_ordered_dict
* test_pprint
* test_trace
Looks like `dcopy = PyDict_Copy((PyObject *)self);` is created, but never freed :(
PR is on its way.
<!-- gh-linked-prs -->
### Linked PRs
* gh-101748
<!-- /gh-linked-prs -->
| 34c50ceb1e2d40f7faab673d2033ecaafd3c611a | 5b946d371979a772120e6ee7d37f9b735769d433 |
python/cpython | python__cpython-101746 | # enum.Flag: default for boundary is incorrect
In c9480d5ad59b052ad3d3422fcf0ac8dd5fed7f6d the default was changed from `STRICT` to `CONFORM`, but the docs were never updated to reflect this.
See https://docs.python.org/3.11/library/enum.html#enum.FlagBoundary
> STRICT
> Out-of-range values cause a [ValueError](https://docs.python.org/3.11/library/exceptions.html#ValueError) to be raised. This is the default for [Flag](https://docs.python.org/3.11/library/enum.html#enum.Flag):
<!-- gh-linked-prs -->
### Linked PRs
* gh-101746
* gh-102004
<!-- /gh-linked-prs -->
| 7f1c72175600b21c1c840e8988cc6e6b4b244582 | 77d95c83733722ada35eb1ef89ae5b84a51ddd32 |
python/cpython | python__cpython-118425 | # test_ssl fails after 2038
# Environment
- i.MX6 32 bits CPU
- Yocto Kirkstone (kirkstone-4.0.6)
- Linux 5.10.109
- glibc 2.35 (8d125a1f9145ad90c94e438858d6b5b7578686f2)
- openssl 3.0.7
- Python 3.10.8
- RFS built with "-D_TIME_BITS=64 -D_FILE_OFFSET_BITS=64" flags
- System date set after 011903142038 (2038 Jan 19 03:14:00)
```
# openssl version
OpenSSL 3.0.7 1 Nov 2022 (Library: OpenSSL 3.0.7 1 Nov 2022)
# python3 -V
Python 3.10.8
# date
Thu Jan 19 04:04:50 UTC 2040
```
# Description
test_ssl returns the following error on 32 bits board with system date after 2038:
```pytb
======================================================================
FAIL: test_session (test.test_ssl.ThreadedTests)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/usr/lib/python3.10/test/test_ssl.py", line 4366, in test_session
self.assertGreater(session.time, 0)
AssertionError: -2084411619 not greater than 0
----------------------------------------------------------------------
```
The same test passes with a system date before 2038.
# How to reproduce
- Set `enddate` to `20421028142316Z` in make_ssl_certs.py file
- Run make_ssl_certs.py to create new certificates
- Update test_ssl.py and test_asyncio/utils.py according to make_ssl_certs.py output
- Run the following command:
```
python3 -m test test_ssl -v
```
<!-- gh-linked-prs -->
### Linked PRs
* gh-118425
<!-- /gh-linked-prs -->
| 37ccf167869d101c4021c435868b7f89ccda8148 | ca269e58c290be8ca11bb728004ea842d9f85e3a |
python/cpython | python__cpython-101727 | # OpenSSL used in binary builds needs updating for CVE-2023-0286
https://www.openssl.org/news/secadv/20230207.txt
https://nvd.nist.gov/vuln/detail/CVE-2023-0286
The just released binaries on Windows and macOS were all build using 1.1.1s, we need to use 1.1.1t in the release branches.
<!-- gh-linked-prs -->
### Linked PRs
* gh-101727
* gh-101749
* gh-101750
* gh-101751
* gh-101752
* gh-101753
<!-- /gh-linked-prs -->
| b41c47cd0606e8273aef4813e83fe2deaf9ab33b | 6d92373f500eb81a175516b3abb16e21f0806c1f |
python/cpython | python__cpython-101697 | # `_PyStaticType_Dealloc` does not invalidate type version tag
`_PyStaticType_Dealloc` does not invalidate the type version tag and when the interpreter is reinitialized it still points to the old index. It should be cleared after the runtime finalization to avoid a rare case where there is a cache hit from the old index.
Found while working on https://github.com/python/cpython/actions/runs/4112993949/jobs/7098592004
<!-- gh-linked-prs -->
### Linked PRs
* gh-101697
* gh-101722
* gh-101742
<!-- /gh-linked-prs -->
| d9de0792482d2ded364b0c7d2867b97a5da41b12 | 2a8bf2580441147f1a15e61229d669abc0ab86ee |
python/cpython | python__cpython-101698 | # sqlite3: issue a warning if a sequence of params are used with named placeholders in queries
(See [Discourse topic](https://discuss.python.org/t/sqlite3-consider-deprecating-combining-named-placeholders-with-sequence-of-params/22450?u=erlendaasland).)
Per now, it is possible to supply a sequence of params to queries with named placeholders:
```python
>>> cx.execute("select :name", [42]).fetchall()
[(42,)]
>>> cx.execute("select :other", [42]).fetchall()
[(42,)]
```
This may produce unexpected results if a user misuses the sqlite3 module and uses PEP-249 style _numeric_ placeholders:
```python
>>> cx.execute("select :1", ("first",)).fetchall()
[('first',)]
>>> cx.execute("select :1, :2", ("first", "second")).fetchall()
[('first', 'second')]
>>> cx.execute("select :2, :1", ("first", "second")).fetchall() # Unexpected result follows
[('first', 'second')]
```
PEP-249 style _numeric_ placeholders are not supported by sqlite3; it only supports PEP-249 style _named_ placeholders and PEP-249 style _qmark_ placeholders, so the placeholders in the above example are interpreted as _named_, not _numeric_, placeholders.
Based on the discussion in the above linked Discourse topic, I propose to now issue a deprecation warning if sequences are used with named placeholders. The deprecation warning should inform that from Python 3.14 and onward, `sqlite3.ProgrammingError` will be raised instead.
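For comparison, the two placeholder styles sqlite3 does support, each paired with the matching parameter type (a standalone sketch using an in-memory database):

```python
import sqlite3

cx = sqlite3.connect(":memory:")

# qmark style: positional placeholders take a sequence of parameters.
assert cx.execute("select ?, ?", ("first", "second")).fetchall() == \
    [("first", "second")]

# named style: named placeholders take a mapping of parameters.
assert cx.execute("select :a, :b", {"a": 1, "b": 2}).fetchall() == [(1, 2)]
```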
<!-- gh-linked-prs -->
### Linked PRs
* gh-101698
<!-- /gh-linked-prs -->
| 8a2b7ee64d1bde762438b458ea7fe88f054a3a88 | d777790bab878b8d1a218a1a60894b2823485cca |
python/cpython | python__cpython-101827 | # Add typing.get_orig_bases and typing.get_orig_class
# Feature or enhancement
typing has had `type.__orig_bases__` and `type.__orig_class__` for quite some time now, but there is no stable API to access these attributes.
# Pitch
I would like to propose adding `typing.get_orig_bases` as something like
```py
@overload
def get_orig_bases(cls: type[object]) -> tuple[type[Any], ...] | None: ...
@overload
def get_orig_bases(cls: Any) -> None: ...
def get_orig_bases(cls: Any) -> tuple[type[Any], ...] | None:
return getattr(cls, "__orig_bases__", None)
```
and `typing.get_orig_class`
```py
@overload
def get_orig_class(cls: type[object]) -> GenericAlias | None: ...
@overload
def get_orig_class(cls: Any) -> None: ...
def get_orig_class(cls: Any) -> GenericAlias | None:
return getattr(cls, "__orig_class__", None)
```
(side note: it might be possible to fully type `get_orig_class` if `types.GenericAlias` were generic over the `__origin__` and `__args__`, i.e. `Foo[int] == GenericAlias[Foo, int]`)
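For context, the underlying dunders these helpers would wrap behave like this (illustration only, not part of the proposal):

```python
from typing import Generic, TypeVar

T = TypeVar("T")

class Box(Generic[T]):
    pass

class IntList(list[int]):
    pass

# __orig_bases__ is stored by type.__new__ when a parametrized
# generic is used as a base class.
assert IntList.__orig_bases__ == (list[int],)

# __orig_class__ is stored on instances of parametrized generics.
box = Box[int]()
assert box.__orig_class__ == Box[int]
```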
<!-- gh-linked-prs -->
### Linked PRs
* gh-101827
<!-- /gh-linked-prs -->
| 730bbddfdf610343a2e132b0312d12254c3c73d6 | 05b3ce7339b9ce44eec728e88e80ba1f125436ed |
python/cpython | python__cpython-101679 | # Refactor math module to use C99+ special functions (erf, erfc, etc)
PEP 7 documents that we must use C11 (without optional features) since CPython 3.11. erf, erfc, lgamma and tgamma (math.gamma) are part of C99 (and mandatory in C11 too).
Probably, it's not a bad idea to get rid of custom implementations of such functions, or to document why one is present (e.g. for some exotic platform, a broken libc implementation, a speed optimization, etc.). Keep in mind that this code isn't tested by CI when the standard implementations are used only conditionally (as for erf with `#ifdef HAVE_ERF`).
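The functions in question are already exposed through the math module; a quick sanity check of the identities they satisfy:

```python
import math

# erf and erfc are complements: erf(x) + erfc(x) == 1
assert math.isclose(math.erf(1.0) + math.erfc(1.0), 1.0)
# tgamma is exposed as math.gamma; gamma(n) == (n-1)!
assert math.gamma(5) == 24.0
# lgamma is the log of the absolute value of gamma
assert math.isclose(math.lgamma(5), math.log(24.0))
```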
<!-- gh-linked-prs -->
### Linked PRs
* gh-101679
* gh-101730
<!-- /gh-linked-prs -->
| 58395759b04273edccf3d199606088e0703ae6b1 | 244d4cd9d22d73fb3c0938937c4f435bd68f32d4 |
python/cpython | python__cpython-101674 | # pdb loses local variable change after command longlist
# Bug report
In pdb, `ll` will clear the local variable change.
```python
def main():
    a = 1
    breakpoint()
    print(a)

main()
```
```python
-> print(a)
(Pdb) p a
1
(Pdb) !a = 2
(Pdb) p a
2
(Pdb) ll
1 def main():
2 a = 1
3 breakpoint()
4 -> print(a)
(Pdb) p a
1
(Pdb) s
1
--Return--
```
As you can tell, `a` was changed through `!a = 2` but it was reverted after `ll`. `print(a)` also prints the unmodified value. Without `ll`, everything works fine.
The reason lies in `getsourcelines` in `pdb.py`. That function accesses `obj.f_locals`, which refreshes the dict with `PyFrame_FastToLocalsWithError` now that `f_locals` is a property.
```python
if inspect.isframe(obj) and obj.f_globals is obj.f_locals:
```
As a result, the local variable changes will be erased.
The original reason for checking `obj.f_globals is obj.f_locals` was to determine whether `obj` is a module frame. Now that the check has side effects in pdb, we can do it with:
```python
if inspect.isframe(obj) and obj.f_code.co_name == "<module>":
```
It might not be the most delicate way, but I can't think of a situation where this won't work.
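The premise of the proposed check can be verified directly: code compiled at module level gets `co_name == "<module>"`, while function frames carry the function name (a sketch, not the pdb code itself):

```python
import sys

def f():
    return sys._getframe()

# a function frame carries the function's name
assert f().f_code.co_name == "f"

# module-level code objects are compiled with the name "<module>"
code = compile("x = 1", "<string>", "exec")
assert code.co_name == "<module>"
```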
I did this change locally and I have confirmed:
1. `./python -m test -j3` passed
2. The original bug is fixed
3. `ll` still prints the full file for module frame
I'll make a PR soon and please let me know if there are any concerns about the fix.
# Your environment
- CPython versions tested on: 3.11.1
- Operating system and architecture: Ubuntu 20.04
<!-- gh-linked-prs -->
### Linked PRs
* gh-101674
* gh-102632
* gh-102633
<!-- /gh-linked-prs -->
| 5d677c556f03a34d1c2d86e4cc96025870c20c12 | f6ca71a42268dcd890bd1930501b6c7e6d7ce66e |
python/cpython | python__cpython-101672 | # Grammatical error in message string for `FatalError` when calling `PyImport_AppendInittab()` after `Py_Initialize()`
The following line should have the first instance of "be" replaced with "not":
https://github.com/python/cpython/blob/aacbdb0c650492756738b044e6ddf8b72f89a1a2/Python/import.c#L2695
<!-- gh-linked-prs -->
### Linked PRs
* gh-101672
* gh-101723
<!-- /gh-linked-prs -->
| 35dd55005ee9aea2843eff7f514ee689a0995df8 | 86ebd5c3fa9ac0fba3b651f1d4abfca79614af5f |
python/cpython | python__cpython-101660 | # Isolate the Default Object Allocator between Interpreters
(see gh-100227)
By default, the allocator for the "object" and "mem" domains is historically known as "obmalloc". The implementation of the allocator is `_PyObject_Malloc()`, etc., for which the runtime state is found in `_PyRuntimeState.obmalloc`. Thus all interpreters currently share the runtime state for the default allocator.
Isolating that state to each interpreter is important for a per-interpreter GIL (my short-term motivation here), but there are other benefits. For example, it helps us manage resources (e.g. objects) relative to the lifecycle of each interpreter, and to do so more efficiently. Furthermore, any situation where we share memory between interpreters is an invitation for memory leaks and other bugs.
The solution here should be as simple as moving the "obmalloc" state to `PyInterpreterState`.
<!-- gh-linked-prs -->
### Linked PRs
* gh-101660
* gh-102659
* gh-103048
* gh-103151
* gh-103287
* gh-103298
<!-- /gh-linked-prs -->
| df3173d28ef25a0f97d2cca8cf4e64e062a08d06 | 01be52e42eac468b6511b56ee60cd1b99baf3848 |
python/cpython | python__cpython-101657 | # New warning: `conversion from 'Py_ssize_t' to 'int', possible loss of data` in `Modules/_testcapimodule.c`
<img width="910" alt="Снимок экрана 2023-02-07 в 21 22 57" src="https://user-images.githubusercontent.com/4660275/217332496-cef2e448-0522-47fc-89b1-c87c60fab42f.png">
This happens because:
- `Py_ssize_t c_args_len = 0;`
- But, `PyEval_EvalCodeEx` expects `int`:
```c
PyObject *
PyEval_EvalCodeEx(PyObject *_co, PyObject *globals, PyObject *locals,
                  PyObject *const *args, int argcount,
                  PyObject *const *kws, int kwcount,
                  PyObject *const *defs, int defcount,
                  PyObject *kwdefs, PyObject *closure)
```
Looks like this is a side effect of https://github.com/python/cpython/commit/ae62bddaf81176a7e55f95f18d55621c9c46c23d
Possible solution is to use an explicit `(int)` converter.
I don't think that we should really worry about an overflow in the test code.
I will send a PR to check if this is a proper fix.
CC @ambv as the original PR reviewer.
<!-- gh-linked-prs -->
### Linked PRs
* gh-101657
* gh-101680
<!-- /gh-linked-prs -->
| acc2f3b19d28d4bf3f8fb32357f581cba5ba24c7 | dec1ab03879e959f7efb910a723caf4a9ce453cf |
python/cpython | python__cpython-101802 | # argparse: Check if stderr is defined before writing to it
# Bug report
`argparse` tries to write to stderr unconditionally, even when it's `None`. This is only a problem on Windows. Here:
https://github.com/python/cpython/blob/3.11/Lib/argparse.py#L2596-L2600
I detected this when using [PyInstaller](https://pyinstaller.org/en/stable/usage.html) to create a binary on Windows using `--noconsole`, which triggers using `pythonw.exe`. In this case, `sys.stderr is None`.
This is similar to #89057
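The kind of guard the fix needs can be sketched like this (`safe_error_write` is a hypothetical helper, not argparse's actual method):

```python
import io
import sys

def safe_error_write(message):
    # pythonw.exe runs with sys.stderr set to None, so guard before writing
    if sys.stderr is not None:
        sys.stderr.write(message)

# works when stderr exists...
buf = io.StringIO()
old_stderr, sys.stderr = sys.stderr, buf
safe_error_write("usage error\n")
sys.stderr = old_stderr
assert buf.getvalue() == "usage error\n"

# ...and is a silent no-op instead of an AttributeError when stderr is None
sys.stderr = None
safe_error_write("ignored\n")
sys.stderr = old_stderr
```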
# Your environment
- Any version of Python on Windows, up to 3.11
<!-- gh-linked-prs -->
### Linked PRs
* gh-101802
* gh-104250
<!-- /gh-linked-prs -->
| 42f54d1f9244784fec99e0610aa05a5051e594bb | 92d8bfffbf377e91d8b92666525cb8700bb1d5e8 |
python/cpython | python__cpython-106169 | # regrtest can reports false SUCCESS and skip tests when a -j worker dies
See in https://buildbot.python.org/all/#/builders/424/builds/3407/steps/5/logs/stdio (Raspbian)
```
...
0:04:46 load avg: 7.28 [ 48/434] test_isinstance passed -- running: test_largefile (2 min 54 sec)
0:05:03 load avg: 7.39 [ 49/434] test_utf8source passed -- running: test_largefile (3 min 11 sec), test_unittest (38.7 sec)
Warning -- regrtest worker thread failed: Traceback (most recent call last):
Warning -- File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/test/libregrtest/runtest_mp.py", line 342, in run
Warning -- mp_result = self._runtest(test_name)
Warning -- ^^^^^^^^^^^^^^^^^^^^^^^^
Warning -- File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/test/libregrtest/runtest_mp.py", line 301, in _runtest
Warning -- stdout = stdout_fh.read().strip()
Warning -- ^^^^^^^^^^^^^^^^
Warning -- File "<frozen codecs>", line 322, in decode
Warning -- UnicodeDecodeError: 'utf-8' codec can't decode byte 0xa9 in position 35: invalid start byte
Kill <TestWorkerProcess #1 running test=test_tempfile pid=7836 time=21.8 sec> process group
Kill <TestWorkerProcess #2 running test=test_largefile pid=7700 time=3 min 32 sec> process group
== Tests result: SUCCESS ==
385 tests omitted:
test___all__ test__locale test__opcode test__osx_support
test__xxinterpchannels test__xxsubinterpreters test_abc test_aifc
test_argparse test_array test_asdl_parser test_ast test_asyncgen
test_asyncio test_atexit test_audioop test_audit test_base64
...
```
This might be an infrastructure issue or an actual issue with real non-UTF-8 data making it through the mp sockets. Regardless, it should have triggered an error and exit code 1 instead of a false SUCCESS and exit code 0.
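One way to harden the worker read (the actual fix may differ) is to decode with a non-strict error handler so stray bytes cannot raise:

```python
# simulate the 0xa9 byte that crashed the worker's strict UTF-8 decode
data = b"worker output \xa9 more"
try:
    data.decode("utf-8")
    strict_failed = False
except UnicodeDecodeError:
    strict_failed = True
assert strict_failed

# a lenient error handler keeps the read from raising
assert data.decode("utf-8", errors="backslashreplace") == "worker output \\xa9 more"
```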
<!-- gh-linked-prs -->
### Linked PRs
* gh-106169
* gh-106174
* gh-106175
<!-- /gh-linked-prs -->
| 2ac3eec103cf450aaaebeb932e51155d2e7fb37b | 3f8483cad2f3b94600c3ecf3f0bb220bb1e61d7d |
python/cpython | python__cpython-101633 | # Add return const instruction
From the pystats doc ([pystats-2023-02-05-python-5a2b984.md](https://github.com/faster-cpython/ideas/blob/main/stats/pystats-2023-02-05-python-5a2b984.md)), I find that the `LOAD_CONST + RETURN_VALUE` pair occurs at a very high frequency (because the default return of a function is None).
Successors for LOAD_CONST
Successors | Count | Percentage
-- | -- | --
RETURN_VALUE | 969,173,651 | 21.8%
BINARY_OP_ADD_INT | 418,647,997 | 9.4%
LOAD_CONST | 403,185,774 | 9.1%
COMPARE_AND_BRANCH_INT | 314,633,792 | 7.1%
STORE_FAST | 295,563,626 | 6.6%
And predecessors for RETURN_VALUE
Predecessors | Count | Percentage
-- | -- | --
LOAD_CONST | 969,173,651 | 29.9%
LOAD_FAST | 505,933,343 | 15.6%
RETURN_VALUE | 382,698,373 | 11.8%
BUILD_TUPLE | 328,532,240 | 10.1%
COMPARE_OP | 107,210,803 | 3.3%
This means that if we add a `RETURN_CONST` instruction, we can eliminate about 30% of `RETURN_VALUE` instructions and about 20% of `LOAD_CONST` instructions.
```
./bin/python3 -m pyperf timeit -w 3 --compare-to ../python-3.12/bin/python3 -s "
def test():
    return 10000
" "test()"
/python-3.12/bin/python3: ..................... 27.0 ns +- 0.3 ns
/cpython/bin/python3: ..................... 25.0 ns +- 0.5 ns
Mean +- std dev: [/python-3.12/bin/python3] 27.0 ns +- 0.3 ns -> [/cpython/bin/python3] 25.0 ns +- 0.5 ns: 1.08x faster
./bin/python3 -m pyperf timeit -w 3 --compare-to ../python-3.12/bin/python3 -s "
def test():
    return None
" "test()"
/python-3.12/bin/python3: ..................... 27.2 ns +- 1.3 ns
/cpython/bin/python3: ..................... 25.1 ns +- 0.6 ns
Mean +- std dev: [/python-3.12/bin/python3] 27.2 ns +- 1.3 ns -> [/cpython/bin/python3] 25.1 ns +- 0.6 ns: 1.08x faster
```
The microbenchmark shows there is indeed a ~10% improvement (considering the interference of function calls, I think 10% is plausible), which is not very high, but it should be an optimization without adverse effects.
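You can see which form the compiler emits with dis; the exact final opcode depends on the interpreter version (`RETURN_CONST` exists only in the releases that adopted this optimization):

```python
import dis

def f():
    return None

ops = [ins.opname for ins in dis.get_instructions(f)]
# depending on the version, the tail is either the LOAD_CONST/RETURN_VALUE
# pair or the fused RETURN_CONST instruction
assert ops[-1] in ("RETURN_VALUE", "RETURN_CONST")
```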
<!-- gh-linked-prs -->
### Linked PRs
* gh-101633
<!-- /gh-linked-prs -->
| 753fc8a5d64369cd228c3e43fef1811ac3cfde83 | 0d3d5007b136ff1bc0faa960d6526e047dd92396 |
python/cpython | python__cpython-101617 | # Mention the Docs Discourse forum in the "Reporting Documentation Bugs" section
# Documentation
On the section for reporting [Documentation bugs](https://docs.python.org/3/bugs.html#documentation-bugs), we tell people to use the "tracker" or to email docs@python.org about it.
The Documentation Discourse forum was not mentioned.
Let's mention the Discourse forum there.
<!-- gh-linked-prs -->
### Linked PRs
* gh-101617
* gh-101621
* gh-101622
<!-- /gh-linked-prs -->
| 949c58f945b93af5b7bb70c6448e940da669065d | 132b3f8302c021ac31e9c1797a127d57faa1afee |
python/cpython | python__cpython-101615 | # Cannot import extensions linked against `python3_d.dll` debug build on Windows
# Bug report
Build a debug version of CPython on Windows.
Build an an extension module linked against this debug CPython's `python3_d.dll`.
This crashes at runtime with the following error:
```
ImportError: Module use of python3_d.dll conflicts with this version of Python.
```
This originates from https://github.com/python/cpython/blob/7a253103d4c64fcca4c0939a584c2028d8da6829/Python/dynload_win.c#L311-L330
It looks like this code doesn't account for the possibility of linking against `python3_d.dll` when using a debug build.
# Your environment
- CPython versions tested on: main branch, 7a253103d4
- Operating system and architecture: Windows 12 amd64
Originally reported to me in https://github.com/PyO3/pyo3/issues/2780
<!-- gh-linked-prs -->
### Linked PRs
* gh-101615
* gh-101690
* gh-101691
<!-- /gh-linked-prs -->
| 3a88de7a0af00872d9d57e1d98bc2f035cb15a1c | eb49d32b9af0b3b01a5588626179187f11d145c9 |
python/cpython | python__cpython-101610 | # New warning: `‘state’ may be used uninitialized in this function` in `_xxinterpchannelsmodule.c`
I am not sure if this is a false-positive or not, but here's the warning:
<img width="747" alt="Снимок экрана 2023-02-06 в 21 57 19" src="https://user-images.githubusercontent.com/4660275/217060580-f1888605-77ce-48ad-8dae-7cc4b9846443.png">
It happens on all GitHub PRs right now, example: https://github.com/python/cpython/pull/101600/files
Looks like it is the result of https://github.com/python/cpython/commit/c67b00534abfeca83016a00818cf1fd949613d6b
🤔
Looks like it is not a false positive:
1. `(void)_PyCrossInterpreterData_UnregisterClass(state->ChannelIDType);` uses `state`
2. But, we can go to `error`, before `state` is initialized. For example, here: `if (exceptions_init(mod) != 0) { goto error; }`
Moreover, `_PyCrossInterpreterData_UnregisterClass(state->ChannelIDType)` might get `NULL`, because of this code:
```c
state->ChannelIDType = add_new_type(
    mod, &ChannelIDType_spec, _channelid_shared);
if (state->ChannelIDType == NULL) {
    goto error;
}
```
I am not sure that I understand this code.
When is `state->ChannelIDType` not `NULL` during `error:`? Why is `_PyCrossInterpreterData_UnregisterClass(state->ChannelIDType)` needed?
<!-- gh-linked-prs -->
### Linked PRs
* gh-101610
<!-- /gh-linked-prs -->
| 262003fd3297f7f4ee09cebd1abb225066412ce7 | b96b344f251954bb64aeb13c3e0c460350565c2a |
python/cpython | python__cpython-103372 | # ``argparse`` Prints options per flag name when only once is necessary
<!--
If you're new to Python and you're not sure whether what you're experiencing is a bug, the CPython issue tracker is not
the right place to seek help. Consider the following options instead:
- reading the Python tutorial: https://docs.python.org/3/tutorial/
- posting in the "Users" category on discuss.python.org: https://discuss.python.org/c/users/7
- emailing the Python-list mailing list: https://mail.python.org/mailman/listinfo/python-list
- searching our issue tracker (https://github.com/python/cpython/issues) to see if
your problem has already been reported
-->
# Bug report
When running a command line program with the ``--help/-h`` flag with ``argparse``, if another flag has multiple names and choices to pick from, the options are printed multiple times instead of just once. For example, the program below:
```python
import argparse
if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument('-m', '--metric', choices=["accuracy", "precision", "recall"])
    parser.parse_args(["-h"])
```
Will print
```bash
usage: argparse_test.py [-h] [-m {accuracy,precision,recall}]
options:
-h, --help show this help message and exit
-m {accuracy,precision,recall}, --metric {accuracy,precision,recall}
```
Notice that the flag choices are printed out twice, once for each flag name. This is redundant and negatively impacts readability. The program should output:
```bash
usage: argparse_test.py [-h] [-m {accuracy,precision,recall}]
options:
-h, --help show this help message and exit
-m, --metric {accuracy,precision,recall}
```
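Until this is changed in argparse itself, the desired output can be approximated with a custom formatter; this relies on private `HelpFormatter` internals, so it is a sketch rather than a stable recipe:

```python
import argparse

class CompactFormatter(argparse.HelpFormatter):
    # print "-m, --metric {...}" instead of repeating the metavar per alias
    def _format_action_invocation(self, action):
        if not action.option_strings or action.nargs == 0:
            return super()._format_action_invocation(action)
        default = self._get_default_metavar_for_optional(action)
        return ", ".join(action.option_strings) + " " + self._format_args(action, default)

parser = argparse.ArgumentParser(formatter_class=CompactFormatter)
parser.add_argument("-m", "--metric", choices=["accuracy", "precision", "recall"])
assert "-m, --metric {accuracy,precision,recall}" in parser.format_help()
```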
# Your environment
<!-- Include as many relevant details as possible about the environment you experienced the bug in -->
- CPython versions tested on: 3.10.6
- Operating system and architecture: Windows 11 with WSL 2
<!--
You can freely edit this text. Remove any lines you believe are unnecessary.
-->
<!-- gh-linked-prs -->
### Linked PRs
* gh-103372
* gh-124025
* gh-124036
* gh-124037
<!-- /gh-linked-prs -->
| c4a2e8a2c5188c3288d57b80852e92c83f46f6f3 | 73d20cafb54193c94577ca60df1ba0410b3ced74 |
python/cpython | python__cpython-104965 | # Deprecate pickle support for itertools
Pickle support was long ago added to some itertools. It was done mostly to support an atypical use case for a single company. It was implemented in a very inefficient manner, essentially replaying iteration from the beginning to the mid-stream state where it was frozen. The implementation was of low quality and had many bugs. Also, it was not a documented or advertised feature. Newer itertools don't support pickling and no one has noticed or cared. The popular third-party package `more-itertools` is implemented with generators which do not have pickle support — again, none of their users seems to have noticed or cared.
IMO, this is just cruft that has made maintenance more difficult and we should get rid of it. As an undocumented feature, we could just remove it directly. But to be on the safe side, we can go through a deprecation cycle.
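The generator-based approach mentioned above indeed has no pickle support; a quick illustration:

```python
import pickle

def count(start=0):
    # generator equivalent of itertools.count
    while True:
        yield start
        start += 1

g = count()
assert next(g) == 0

# generator objects cannot be pickled at all
try:
    pickle.dumps(g)
    raised = False
except TypeError:
    raised = True
assert raised
```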
<!-- gh-linked-prs -->
### Linked PRs
* gh-104965
* gh-104997
* gh-118816
<!-- /gh-linked-prs -->
| 402ee5a68b306b489b782478ab96e8e3b913587a | 328422ce6162eb18735a2c0de12f8a696be97d0c |
python/cpython | python__cpython-101607 | # `PyErr_SetObject()` behavior is strange and not as documented.
Briefly:
`PyErr_SetObject(exc_type, exc_val)` does not create a new exception iff `isinstance(exc_val, BaseException)`, but uses `exc_val` instead.
Callers of `PyErr_SetObject()` need various workarounds to handle this.
The long version:
Internally CPython handles exceptions as a triple `(type, value, traceback)`, but the language treats exceptions as a single value.
This a legacy of the olden days before proper exceptions.
To handle adding proper exceptions to Python, various error handling functions, specifically `_PyErr_SetObject` still treat exceptions as triples, with the convention that if the value is an exception, then the exception is already normalized.
One other oddity is that if `exc_val` is a tuple, it is treated as the `*` arguments to `exc_type` when calling it. So, if `isinstance(exc_val, BaseException)` the desired behavior can be achieved by wrapping `exc_val` in a one-tuple.
As a consequence, both `_PyErr_SetKeyError` and `_PyGen_SetStopIterationValue` are a lot more complex than they should be, to work around this behavior.
We could make `PyErr_SetObject` act as documented, but that is likely to break C extensions, given how old this behavior is, and that it is relied on throughout CPython.
Code that does the following is common:
```C
exc = new_foo_exception();
PyErr_SetObject(&PyFooException_Type, exc);
```
We could just document the current behavior, but the current behavior is strange.
What I suggest is this:
* Create a new API function, `PyErr_SetException(exc)`, that takes a single exception object.
* Document `PyErr_SetObject()` accurately
* Deprecate the old function
This is an old bug going back to the 2 series.
<!-- gh-linked-prs -->
### Linked PRs
* gh-101607
* gh-101962
* gh-102057
* gh-102157
* gh-102702
* gh-113369
* gh-113606
<!-- /gh-linked-prs -->
Also relevant:
* gh-102502 | feec49c40736fc05626a183a8d14c4ebbea5ae28 | 027adf42cd85db41fee05b0a40d89ef822876c97 |
python/cpython | python__cpython-101737 | # Document the behaviour of Decimal.__round__
# Documentation
The documentation is missing a description of the behaviour of `round` on `Decimal` objects. It's worth documenting, since the behaviour (particularly with respect to the context) is non-obvious.
Of particular note:
* two-argument `round` respects the rounding mode from the `Decimal` context.
* single-argument `round` always does round-ties-to-even, ignoring information from the context
```python
Python 3.12.0a4 (main, Jan 20 2023, 16:04:05) [Clang 13.1.6 (clang-1316.0.21.2.5)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> from decimal import Decimal, getcontext, ROUND_DOWN
>>> getcontext().rounding = ROUND_DOWN
>>> round(Decimal('3.7')) # round-ties-to-even; context rounding ignored
4
>>> round(Decimal('3.7'), 0) # uses the context rounding
Decimal('3')
```
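The same behaviour as a self-contained check, using `localcontext` so the rounding change stays scoped:

```python
from decimal import Decimal, localcontext, ROUND_DOWN

with localcontext() as ctx:
    ctx.rounding = ROUND_DOWN
    # single-argument round: round-half-to-even, context ignored, returns int
    assert round(Decimal("3.7")) == 4
    # two-argument round: honours the context rounding, returns Decimal
    assert round(Decimal("3.7"), 0) == Decimal("3")
```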
<!-- gh-linked-prs -->
### Linked PRs
* gh-101737
* gh-120394
* gh-120395
<!-- /gh-linked-prs -->
| 7dd8c37a067f9fcb6a2a658d6a93b294cc2e6fb4 | 02e74c356223feb0771759286d24d1dbac01d4ca |
python/cpython | python__cpython-101571 | # Update the bundled version of pip to 23.0
# Feature or enhancement
Update the copy of pip bundled with CPython to 23.0 (the current latest).
# Pitch
This ensures that users can use functionality in the newer versions of pip.
<!-- gh-linked-prs -->
### Linked PRs
* gh-101571
* gh-102239
* gh-102240
<!-- /gh-linked-prs -->
| 19ac43629e6fd73f2dbf9fd5a0b97d791c28acc7 | 39017e04b55d4c110787551dc9a0cb753f27d700 |
python/cpython | python__cpython-102018 | # ZipFile.extractall fails if zipfile.Path object is created based on the ZipFile object
# Bug report
The following code works fine with Python 3.8 and 3.9 but starts failing with Python 3.10.
It seems that `zipfile.Path` creates an entry in the `zipfile.ZipFile` object that does not exist in the underlying file and therefore makes `zipfile.ZipFile.extractall` fail.
```python
import zipfile
import tempfile
path = tempfile.mktemp()
f = zipfile.ZipFile(path, "w")
f.writestr("data/test.txt", "test1")
f.writestr("data/test2.txt", "test2")
f.close()
dest = tempfile.mkdtemp()
p = zipfile.ZipFile(path)
src_zipfile_path = zipfile.Path(p, "data")
p.extractall(dest)
```
# Your environment
Working version:
```
(python39) [moggi@mars cloudfluid]$ python --version
Python 3.9.16
(python39) [moggi@mars cloudfluid]$ python test.py
(python39) [moggi@mars cloudfluid]$
```
Failing versions:
3.10.9:
```
(python_310_2) [moggi@mars cloudfluid]$ python --version
Python 3.10.9
(python_310_2) [moggi@mars cloudfluid]$ python test.py
Traceback (most recent call last):
File "/home/moggi/devel/cloudfluid/test.py", line 15, in <module>
p.extractall(dest)
File "/home/moggi/devel/envs/python_310_2/lib/python3.10/zipfile.py", line 1645, in extractall
self._extract_member(zipinfo, path, pwd)
File "/home/moggi/devel/envs/python_310_2/lib/python3.10/zipfile.py", line 1667, in _extract_member
member = self.getinfo(member)
File "/home/moggi/devel/envs/python_310_2/lib/python3.10/zipfile.py", line 1441, in getinfo
raise KeyError(
KeyError: "There is no item named 'data/' in the archive"
```
and 3.11.0
```
(python311) [moggi@mars cloudfluid]$ python --version
Python 3.11.0
(python311) [moggi@mars cloudfluid]$ python test.py
Traceback (most recent call last):
File "/home/moggi/devel/cloudfluid/test.py", line 15, in <module>
p.extractall(dest)
File "/home/moggi/devel/envs/python311/lib/python3.11/zipfile.py", line 1677, in extractall
self._extract_member(zipinfo, path, pwd)
File "/home/moggi/devel/envs/python311/lib/python3.11/zipfile.py", line 1699, in _extract_member
member = self.getinfo(member)
^^^^^^^^^^^^^^^^^^^^
File "/home/moggi/devel/envs/python311/lib/python3.11/zipfile.py", line 1473, in getinfo
raise KeyError(
KeyError: "There is no item named 'data/' in the archive"
```
If this is the expected new behavior I think the documentation for 3.10 should mention this breaking change and the `zipfile.Path` documentation might need an entry about this problem.
The `zipfile.Path` object is used in the original code to check if the "data/" directory exists in the zip file.
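A workaround (a sketch, not the eventual fix) is to give `zipfile.Path` the file name rather than the open `ZipFile` object, so the latter's name list is never mutated:

```python
import os
import tempfile
import zipfile

tmp = tempfile.mkdtemp()
path = os.path.join(tmp, "archive.zip")
with zipfile.ZipFile(path, "w") as f:
    f.writestr("data/test.txt", "test1")

dest = tempfile.mkdtemp()
with zipfile.ZipFile(path) as p:
    # pass the file name, not p, so p's internal entries stay untouched
    src = zipfile.Path(path, "data/")
    assert src.exists()
    p.extractall(dest)

assert os.path.exists(os.path.join(dest, "data", "test.txt"))
```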
<!-- gh-linked-prs -->
### Linked PRs
* gh-102018
* gh-102090
* gh-102091
<!-- /gh-linked-prs -->
| 36854bbb240e417c0df6f0014924fcc899388186 | 84181c14040ed2befe7f1a55b4f560c80fa61154 |
python/cpython | python__cpython-101563 | # typing: Inheritance with `NotRequired` or `Required` in parent fields is not tested
Here is a case that `Lib/test/test_typing.py` doesn't cover:
```py
from typing import TypedDict, Required, NotRequired, get_type_hints
class Parent(TypedDict, total=False):
    field: Required[str]

class Child(Parent):
    another_field: NotRequired[str]

assert Child.__required_keys__ == frozenset({"field"})
assert Child.__optional_keys__ == frozenset({"another_field"})
```
I think we should test it.
<!-- gh-linked-prs -->
### Linked PRs
* gh-101563
* gh-101611
<!-- /gh-linked-prs -->
| b96b344f251954bb64aeb13c3e0c460350565c2a | 7a253103d4c64fcca4c0939a584c2028d8da6829 |
python/cpython | python__cpython-101564 | # Add typing.override decorator
# Feature or enhancement
See [PEP 698](https://peps.python.org/pep-0698/) for details.
The `typing.override` decorator should, at runtime, attempt to set the `__override__` attribute on its argument to `True` and then return the argument. If it cannot set the `__override__` flag it should return its argument unchanged.
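A minimal sketch of that runtime behaviour (hypothetical, not the final implementation):

```python
def override(method):
    # best-effort: mark the method, fall back silently if the object
    # rejects attribute assignment
    try:
        method.__override__ = True
    except (AttributeError, TypeError):
        pass
    return method

class Base:
    def ping(self):
        return 1

class Child(Base):
    @override
    def ping(self):
        return 2

assert Child.ping.__override__ is True
assert Child().ping() == 2
```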
# Pitch
The purpose of `typing.override` is to inform static type checkers that we expect this method to override some attribute of an ancestor class. By having this decorator in place, a developer can ensure that static type checkers will warn them if a base class method name changes.
To quote the PEP consider a code change where we rename a parent class method.
The original code looks like this:
```python
class Parent:
    def foo(self, x: int) -> int:
        return x

class Child(Parent):
    def foo(self, x: int) -> int:
        return x + 1

def parent_callsite(parent: Parent) -> None:
    parent.foo(1)

def child_callsite(child: Child) -> None:
    child.foo(1)
```
And after our rename it looks like this:
```python
class Parent:
    # Rename this method
    def new_foo(self, x: int) -> int:
        return x

class Child(Parent):
    # This (unchanged) method used to override `foo` but is unrelated to `new_foo`
    def foo(self, x: int) -> int:
        return x + 1

def parent_callsite(parent: Parent) -> None:
    # If we pass a Child instance we’ll now run Parent.new_foo - likely a bug
    parent.new_foo(1)

def child_callsite(child: Child) -> None:
    # We probably wanted to invoke new_foo here. Instead, we forked the method
    child.foo(1)
```
In the code snippet above, renaming `foo` to `new_foo` in `Parent` invalidated the override `foo` of `Child`. But type checkers have no way of knowing this, because they only see a snapshot of the code.
If we mark `Child.foo` as an override, then static type checkers will catch the mistake when we rename only `Parent.foo`:
```py
from typing import override
class Parent:
    def new_foo(self) -> int:
        return 1

    def bar(self, x: str) -> str:
        return x

class Child(Parent):
    @override
    def foo(self) -> int:  # Type Error: foo does not override an attribute of any ancestor
        return 2
```
# Previous discussion
[PEP 698](https://peps.python.org/pep-0698/) has details about the proposal itself.
Discussion on this proposal has happened on [typing-sig](https://mail.python.org/archives/list/typing-sig@python.org/thread/V23I4D6DEOFW4BBPWBMYTHZUOMKR7KQE/#MZMBDDKOOH2KUE2IRRMQLOEN6MJ2WHCN) and on [discuss.python.org](https://discuss.python.org/t/pep-698-a-typing-override-decorator/20839).
<!-- gh-linked-prs -->
### Linked PRs
* gh-101564
<!-- /gh-linked-prs -->
| 12011dd8bafa6867f2b4a8a9e8e54cb0fbf006e4 | 71db5dbcd714b2e1297c43538188dd69715feb9a |
python/cpython | python__cpython-124669 | # The builtin `help(...)` should unstringify (and "unforwardref") annotations
# Feature or enhancement
When using `help(...)`, annotations should be unstringified and displayed without `typing.ForwardRef`.
# Pitch
Current behavior displays string annotations with quotes as follows:
```python
def foo(x: List["A"], y: "B") -> None:
    ...
help(foo)
"""
Help on function foo in module ...:
foo(x: List[ForwardRef('A')], y: 'B') -> None
"""
```
It should be fairly obvious how clunky this is to users, and that the desirable behavior should be something like:
```python
help(foo)
"""
Help on function foo in module ...:
foo(x: List[A], y: B) -> None
"""
```
#84171 is related, but whereas the suggestion there is to actually evaluate the annotations using `typing.get_type_hints` or similar, this proposal aims to only remove quotations and `typing.ForwardRef(...)` from the outputted documentation.
This means that the resulting documentation may not be 100% accurate but will not require evaluating the annotation, which can avoid issues such as annotations which cannot be evaluated for some reason.
For example:
```python
import typing
if typing.TYPE_CHECKING:
    import numpy as np

def foo(x: "np.ndarray") -> None:
    ...
help(foo)
"""
Help on function foo in module ...:
foo(x: np.ndarray) -> None
"""
```
Note that the `np.ndarray` is not expanded out into its fully qualified name `numpy.ndarray`.
There are also some additional edge cases to consider, such as `"ForwardRef('A')"`. Ideally this should be changed to `A`, but I don't think there's much impact if it is purposefully left as that, and leaving it as-is avoids other problems e.g. `ForwardRef` is not actually `typing.ForwardRef` in that example.
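As a rough illustration of the intended transformation (a hypothetical post-processing sketch operating on the rendered signature string, not a proposed implementation):

```python
import re

def unquote_annotations(sig: str) -> str:
    # drop ForwardRef(...) wrappers first, then any remaining quoted names
    sig = re.sub(r"ForwardRef\('([^']*)'\)", r"\1", sig)
    return re.sub(r"'([^']*)'", r"\1", sig)

rendered = "foo(x: List[ForwardRef('A')], y: 'B') -> None"
assert unquote_annotations(rendered) == "foo(x: List[A], y: B) -> None"
```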
<!-- gh-linked-prs -->
### Linked PRs
* gh-124669
<!-- /gh-linked-prs -->
| 78406382c97207b985b5c1d24db244ec398b7e3f | b502573f7f800dbb2e401fa2a7a05eceac692c7a |
python/cpython | python__cpython-101550 | # wrong type documented for xml.etree.ElementInclude.default_loader
# Documentation
In the description of xml.etree.ElementInclude in the function reference at [ElementInclude reference](https://docs.python.org/3.7/library/xml.etree.elementtree.html?highlight=xml#elementinclude-functions), in all versions starting with 3.7 and ending with 3.12 dev (apparently the first with XInclude support, or at least with such documentation, though it is not mentioned everywhere):
1. issue with [xml.etree.ElementInclude.default_loader](https://docs.python.org/3.7/library/xml.etree.elementtree.html?highlight=xml#xml.etree.ElementTree.xml.etree.ElementInclude.default_loader)
it does not return an `ElementTree`; it returns an `Element`. Returning an `Element` causes exceptions later in the library. So the suggested text is: "If the parse mode is "xml", this is an Element instance."
2. issue with [xml.etree.ElementInclude.include](https://docs.python.org/3.12/library/xml.etree.elementtree.html?highlight=xml#xml.etree.ElementInclude.include)
It does not return anything; it just expands one of its parameters in place. Also, a parse mode is mentioned there, but it is not under the control of this function. The whole phrase starting from "If the parse ..." can simply be dropped as confusing, or it can be mentioned that only XML and text files are supported for inclusion via xi:include.
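The first point is easy to verify: `default_loader` in "xml" mode returns the root `Element`, not an `ElementTree` (a sketch using a throwaway file):

```python
import os
import tempfile
from xml.etree import ElementInclude, ElementTree

path = os.path.join(tempfile.mkdtemp(), "doc.xml")
with open(path, "w") as fh:
    fh.write("<root><child/></root>")

node = ElementInclude.default_loader(path, "xml")
# documented as returning an ElementTree, but it is actually an Element
assert isinstance(node, ElementTree.Element)
assert not isinstance(node, ElementTree.ElementTree)
```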
<!-- gh-linked-prs -->
### Linked PRs
* gh-101550
* gh-117754
<!-- /gh-linked-prs -->
| 898f6de63fd5285006ee0f4993aeb8ed3e8f97f9 | 91d7605ac3c15a185fd67921907cebaf0def1674 |
python/cpython | python__cpython-101544 | # Python 3.11 on Windows prioritises registry paths over build directory
If you have Python 3.11 installed by the EXE installer, and then build from source and run it from the build directory, it will detect and use the installed stdlib rather than its own.
I believe this is a regression since 3.11.1, so it hasn't shipped yet. But since it only matters in a build directory, that doesn't really matter anyway.
<!-- gh-linked-prs -->
### Linked PRs
* gh-101544
* gh-101606
<!-- /gh-linked-prs -->
| 7a253103d4c64fcca4c0939a584c2028d8da6829 | 46416b9004b687856eaa73e5d48520cd768bbf82 |
python/cpython | python__cpython-101590 | # Unexpected behaviour of IntFlag with a custom __new__ in Python 3.11.0.
# Bug report
According to the [enum documentation](https://docs.python.org/3/howto/enum.html#when-to-use-new-vs-init), it is possible to customize the enumeration *value* via a custom ``__new__`` method and the enumeration *member* (e.g., by adding an attribute) via a custom ``__init__`` method. However, the implementation of the [enum.Flag](https://docs.python.org/3/library/enum.html#enum.Flag) class in 3.11.0 (and probably in 3.11.1) introduces some issues compared to the one in 3.10.3, especially in the case of an [enum.IntFlag](https://docs.python.org/3/library/enum.html#enum.IntFlag):
```shell
$ read -r -d '' code << EOM
from enum import IntFlag
class Flag(IntFlag):
    def __new__(cls, ch: str, *args):
        value = 1 << ord(ch)
        self = int.__new__(cls, value)
        self._value_ = value
        return self
    def __init__(self, _, *args):
        super().__init__()
        # do something with the positional arguments
    a = ('a', 'A')
print(repr(Flag.a ^ Flag.a))
EOM
$ python3.10 -c "$code"
<Flag.0: 0>
$ python3.11 -c "$code"
ValueError: 0 is not a valid Flag
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<string>", line 16, in <module>
File "/Lib/enum.py", line 1501, in __xor__
return self.__class__(value ^ other)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Lib/enum.py", line 695, in __call__
return cls.__new__(cls, value)
^^^^^^^^^^^^^^^^^^^^^^^
File "/Lib/enum.py", line 1119, in __new__
raise exc
File "/Lib/enum.py", line 1096, in __new__
result = cls._missing_(value)
^^^^^^^^^^^^^^^^^^^^
File "/Lib/enum.py", line 1416, in _missing_
pseudo_member = (__new__ or cls._member_type_.__new__)(cls, value)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<string>", line 5, in __new__
TypeError: ord() expected string of length 1, but int found
```
This also makes it impossible to write ``Flag.a | i`` for ``i != 0`` (for ``i = 0`` it does work, which is confusing!), which IMHO is a regression compared to what 3.10.3 offered. It also clashes with the following assumption:
> If a Flag operation is performed with an IntFlag member and:
> * the result is a valid IntFlag: an IntFlag is returned
> * result is not a valid IntFlag: the result depends on the FlagBoundary setting
Currently, the FlagBoundary for IntFlag is KEEP, so ``Flag.a | 12`` is expected to be ``Flag.a|8|4`` as in 3.10.3.
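As a minimal illustration of that KEEP behaviour (using a hypothetical `Color` flag rather than the reporter's class), bits that match no member are retained rather than rejected:

```python
from enum import IntFlag

# Hypothetical flag, not from the report. IntFlag's default boundary is
# KEEP, so bits that correspond to no defined member survive the operation.
class Color(IntFlag):
    RED = 1
    GREEN = 2

combined = Color.RED | 12   # 4 and 8 are not members, but are kept
print(int(combined))        # 13
print(isinstance(combined, Color))  # True
```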
In order to avoid this issue, users need to write something like:
```python
def __new__(cls, ch, *args):
    value = ch if isinstance(ch, int) else 1 << ord(ch)
    self = int.__new__(cls, value)
    self._value_ = value
    return self
```
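Assuming that guard, a self-contained sketch (mirroring the reporter's class; the checks at the end are illustrative) shows that both member declaration and mixed int operations then go through the same ``__new__``:

```python
from enum import IntFlag

# Sketch of the workaround in action: __new__ must cope with both the
# declaration argument (a one-character string) and a plain int, because
# the enum machinery re-invokes it for composite/missing values.
class Flag(IntFlag):
    def __new__(cls, ch, *args):
        value = ch if isinstance(ch, int) else 1 << ord(ch)
        self = int.__new__(cls, value)
        self._value_ = value
        return self

    def __init__(self, _, *args):
        super().__init__()

    a = ('a', 'A')

combined = Flag.a | 12
print(Flag.a.value == 1 << ord('a'))       # True
print(combined.value == Flag.a.value | 12)  # True
```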
Nevertheless, this is only possible if ``__new__`` converts an input ``U`` to an entirely different type ``V`` (the enum member type) or if ``args`` is non-empty when declaring enumeration members. However, this fails in the following example:
```python
class FlagFromChar(IntFlag):
    def __new__(cls, e: int):
        value = 1 << e
        self = int.__new__(cls, value)
        self._value_ = value
        return self

    a = ord('a')

# in Python 3.10.3
repr(FlagFromChar.a ^ FlagFromChar.a) == '<FlagFromChar.0: 0>'
# in Python 3.11.1
repr(FlagFromChar.a ^ FlagFromChar.a) == '<FlagFromChar: 1>'
```
# Environment
- CPython versions tested on: 3.10.3 and 3.11.0 (compiled from sources with GCC 7.5.0)
- Operating System: openSUSE Leap 15.2 x86_64
- Kernel: 5.3.18-lp152.106-default
<!-- gh-linked-prs -->
### Linked PRs
* gh-101590
* gh-101596
<!-- /gh-linked-prs -->
| ef7c2bfcf1fd618438f981ace64499a99ae9fae0 | d3e2dd6e71bd8e5482973891926d5df19be687eb |
python/cpython | python__cpython-101540 | # System transitions tests are skipped in datetime tester
# Bug report
Tests that should be conditionally skipped only when time.tzset is not present are skipped all the time.
```sh
% ./python.exe Lib/test/test_datetime.py -k test_system_transitions -v
test_system_transitions (test.datetimetester.IranTest_Pure.test_system_transitions) ... skipped 'time module has no attribute tzset'
test_system_transitions (test.datetimetester.ZoneInfoTest_Pure.test_system_transitions) ... skipped 'time module has no attribute tzset'
test_system_transitions (test.datetimetester.ZoneInfoTest[Africa/Abidjan]_Pure.test_system_transitions) ... skipped 'time module has no attribute tzset'
test_system_transitions (test.datetimetester.ZoneInfoTest[Africa/Accra]_Pure.test_system_transitions) ... skipped 'time module has no attribute tzset'
test_system_transitions (test.datetimetester.ZoneInfoTest[Africa/Addis_Ababa]_Pure.test_system_transitions) ... skipped 'time module has no attribute tzset'
test_system_transitions (test.datetimetester.ZoneInfoTest[Africa/Algiers]_Pure.test_system_transitions) ... skipped 'time module has no attribute tzset'
...
----------------------------------------------------------------------
Ran 842 tests in 0.036s
OK (skipped=842)
```
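For reference, the intended conditional skip looks something like the sketch below (a minimal stand-in, not the actual `datetimetester` code): the decorator should test the real `time` module, so the tests run wherever `tzset` exists and are skipped only where it does not.

```python
import time
import unittest

# Sketch of the intended pattern: evaluate hasattr(time, 'tzset') against the
# real time module, so only platforms that genuinely lack tzset (e.g. Windows)
# skip these tests.
@unittest.skipUnless(hasattr(time, 'tzset'), "time module has no attribute tzset")
class SystemTransitionTest(unittest.TestCase):
    def test_tzset_is_callable(self):
        # Only reached on platforms where time.tzset actually exists.
        self.assertTrue(callable(time.tzset))

suite = unittest.defaultTestLoader.loadTestsFromTestCase(SystemTransitionTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())  # True whether the test ran or was skipped
```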
# Your environment
- CPython versions tested on:
```sh
% ./python.exe
Python 3.12.0a4+ (heads/main:04e06e20ee, Feb 3 2023, 17:55:49) [Clang 14.0.0 (clang-1400.0.29.202)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
```
- Operating system and architecture: macOS
<!-- gh-linked-prs -->
### Linked PRs
* gh-101540
<!-- /gh-linked-prs -->
| ddd619cffa457776a22f224b7111bd39de289d66 | 5a2b984568f72f0d7ff7c7b4ee8ce31af9fd1b7e |
python/cpython | python__cpython-101537 | # Add threading support based on wasi-threads
# Feature or enhancement
Use [wasi-threads](https://github.com/WebAssembly/wasi-threads), as shipped with [wasi-sdk](https://github.com/WebAssembly/wasi-sdk), to add threading support to the WASI build.
<!-- gh-linked-prs -->
### Linked PRs
* gh-101537
* gh-106834
<!-- /gh-linked-prs -->
| d8f87cdf94a6533c5cf2d25e09e6fa3eb06720b9 | 13237a2da846efef9ce9b93fd4bcfebd49933568 |
python/cpython | python__cpython-101612 | # HTTPError documentation needs improvement
# Bug report
The [documentation for HTTPError](https://docs.python.org/dev/library/urllib.error.html#urllib.error.HTTPError) needs some love.
At least:
- include the arguments that it needs to receive (currently none are listed)
- describe those arguments below (currently only some are described, with wrong names!)
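As a sketch of what the docs should list (based on the constructor in the current CPython source, `HTTPError(url, code, msg, hdrs, fp)`; the example values here are made up):

```python
from urllib.error import HTTPError

# HTTPError's constructor takes five arguments: url, code, msg, hdrs, fp.
# The values below are illustrative.
err = HTTPError(url='http://www.example.com/', code=404,
                msg='Not Found', hdrs=None, fp=None)

print(err.code)    # 404
print(err.reason)  # Not Found  (msg is also exposed via the .reason property)
print(str(err))    # HTTP Error 404: Not Found
```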
<!-- gh-linked-prs -->
### Linked PRs
* gh-101612
<!-- /gh-linked-prs -->
| af446bbb76f64e67831444a0ceee6863a1527088 | 89413bbccb9261b72190e275eefe4b0d49671477 |
python/cpython | python__cpython-101526 | # Maybe Drop "channels" from _xxsubinterpreters
The `_xxsubinterpreters` module is essentially the low-level implementation of PEP 554. However, we added it a while back for testing purposes, especially to further exercise the runtime relative to subinterpreters. Since then, I've removed "channels" from PEP 554. So it may make sense to drop that part of the implementation. That part of Modules/_xxsubinterpretersmodule.c is much more code and certainly much more complex than the basic functionality the PEP now describes.
<!-- gh-linked-prs -->
### Linked PRs
* gh-101526
* gh-102655
* gh-105258
* gh-107303
* gh-107359
<!-- /gh-linked-prs -->
| c67b00534abfeca83016a00818cf1fd949613d6b | d4c410f0f922683f38c9d435923939d037fbd8c2 |