repo stringclasses 1 value | instance_id stringlengths 20 22 | problem_statement stringlengths 126 60.8k | merge_commit stringlengths 40 40 | base_commit stringlengths 40 40 |
|---|---|---|---|---|
python/cpython | python__cpython-122875 | # Deprecate `asyncio.iscoroutinefunction`
`asyncio.iscoroutinefunction` should be deprecated in favor of `inspect.iscoroutinefunction`; we don't need two different functions that do the same thing. I propose to deprecate `asyncio.iscoroutinefunction` because, IMO, these introspection functions belong in the `inspect` module rather than in `asyncio`. It will be deprecated in Python 3.14 and scheduled for removal in Python 3.16.
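A minimal sketch of the migration; `inspect.iscoroutinefunction` covers plain `async def` functions (note that `asyncio`'s variant additionally recognizes the legacy `@asyncio.coroutine` marker):
```python
import inspect

async def coro():
    pass

def plain():
    pass

# inspect.iscoroutinefunction is the proposed replacement for
# asyncio.iscoroutinefunction for ordinary `async def` functions.
assert inspect.iscoroutinefunction(coro)
assert not inspect.iscoroutinefunction(plain)
```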
<!-- gh-linked-prs -->
### Linked PRs
* gh-122875
<!-- /gh-linked-prs -->
| bc9d92c67933917b474e61905451c6408c68e71d | 3aaed083a3f5eb7e490495c460b3dc1ce7451ce8 |
python/cpython | python__cpython-122855 | # [C API] Add Py_HashBuffer() function
Implementation of the C API Working Group decision: https://github.com/capi-workgroup/decisions/issues/13.
<!-- gh-linked-prs -->
### Linked PRs
* gh-122855
<!-- /gh-linked-prs -->
| d8e69b2c1b3388c31a6083cfdd9dc9afff5b9860 | 3d60dfbe1755e00ab20d0ee81281886be77ad5da |
python/cpython | python__cpython-122847 | # Function definition grammar is incorrect
https://github.com/python/cpython/blob/2f5c3b09e45798a18d60841d04a165fb062be666/Doc/reference/compound_stmts.rst?plain=1#L1211-L1224
In ``parameter_list_starargs`` everything can be optional except for ``*``, and that permits incorrect function definitions like ``def foo(*)``:
```pycon
>>> def foo(*): pass
File "<stdin>", line 1
def foo(*): pass
^
SyntaxError: named arguments must follow bare *
```
See the d.p.o thread https://discuss.python.org/t/56998. I'll provide a patch based on the suggested solution.
CC @norpadon
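The constraint the grammar should express (a bare ``*`` must be followed by at least one named parameter) can be demonstrated with ``compile()``; a small sketch:
```python
def is_valid(src):
    """Return True if `src` compiles, False on SyntaxError."""
    try:
        compile(src, "<grammar-check>", "exec")
        return True
    except SyntaxError:
        return False

assert is_valid("def f(*, a): pass")    # * followed by a keyword-only param
assert is_valid("def f(*args): pass")   # *args alone is fine
assert not is_valid("def f(*): pass")   # bare * alone is rejected
```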
<!-- gh-linked-prs -->
### Linked PRs
* gh-122847
* gh-129150
* gh-129151
<!-- /gh-linked-prs -->
| 610584639003317193cdcfe38bbdc8268b612828 | 7ad793e5dbdf07e51a71b70d20f3e6e3ab60244d |
python/cpython | python__cpython-123073 | # Several Python 3.13 opcodes are not documented
# Documentation
The following opcodes, which can appear in the output of `dis`, do not appear in the documentation https://docs.python.org/3.13/library/dis.html:
- LOAD_FAST_LOAD_FAST
- LOAD_FROM_DICT_OR_DEREF
- LOAD_FROM_DICT_OR_GLOBALS
- STORE_FAST_STORE_FAST
- STORE_FAST_LOAD_FAST
- ENTER_EXECUTOR
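These opcodes can be surfaced with the ``dis`` module; which names appear depends on the interpreter version (the super-instructions below are emitted by 3.13), so this is only a sketch:
```python
import dis

def swap(a, b):
    a, b = b, a
    return a, b

# On 3.13 the disassembly typically contains LOAD_FAST_LOAD_FAST and
# STORE_FAST_STORE_FAST; earlier versions emit separate LOAD_FAST/STORE_FAST.
opnames = sorted({ins.opname for ins in dis.get_instructions(swap)})
print(opnames)
```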
<!-- gh-linked-prs -->
### Linked PRs
* gh-123073
* gh-126492
<!-- /gh-linked-prs -->
| 9cba47d9f151734815a61e32391ea7fca877ea55 | b1c4ffc20573befb4db66bbbdd569b9bd13bb127 |
python/cpython | python__cpython-122836 | # Running ``test_typing`` file directly fails
# Bug report
### Bug description:
```
eclips4@nixos ~/p/p/cpython (main)> ./python Lib/test/test_typing.py
...................................................................................................................................................................................................................................................................s.........................................................................................................................................................................................................................................................................................................................................F............................................................................................
======================================================================
FAIL: test_annotations (__main__.TypedDictTests.test_annotations)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/eclips4/programming/programming-languages/cpython/Lib/test/test_typing.py", line 8832, in test_annotations
self.assertEqual(Y.__annotations__, {'a': type(None), 'b': fwdref})
~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AssertionError: {'a': <class 'NoneType'>, 'b': ForwardRef('int', module='__main__')} != {'a': <class 'NoneType'>, 'b': ForwardRef('int', module='test.test_typing')}
- {'a': <class 'NoneType'>, 'b': ForwardRef('int', module='__main__')}
? ^^^ ^^
+ {'a': <class 'NoneType'>, 'b': ForwardRef('int', module='test.test_typing')}
? +++++++++ ^^^ ^
----------------------------------------------------------------------
Ran 682 tests in 0.506s
FAILED (failures=1, skipped=1)
```
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-122836
<!-- /gh-linked-prs -->
| 2037d8cbaea4aed7000e3265113814657a9aea54 | 2d9d3a9f5319ce3f850341d116b63cc51869df3a |
python/cpython | python__cpython-122934 | # Inconsistent locations for conditional branches in `while` statements
# Bug report
### Bug description:
Reported here: https://github.com/python/cpython/issues/122762#issuecomment-2273809777
The locations for branches in `while` statements are not consistent. The first iteration shows different locations from the second and later iterations.
I've not checked earlier versions, but this could be an issue for 3.12 as well.
### CPython versions tested on:
3.13, CPython main branch
### Operating systems tested on:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-122934
<!-- /gh-linked-prs -->
| fe23f8ed970425828de20fb48750fa89da914886 | 0e207f3e7adc6a0fdbe1482ce01163ff04d5ddcb |
python/cpython | python__cpython-122799 | # Make tests for warnings in the re module more strict
#122357 passed the existing tests because the tests for warnings in the `re` module only exercised `re.compile()` and did not check the stack level.
<!-- gh-linked-prs -->
### Linked PRs
* gh-122799
* gh-122804
* gh-122805
<!-- /gh-linked-prs -->
| d2e5be1f39bc3d48c7bc8c146c4bcadee266672a | 3e753c689a802d2e6d909cce3e22173977b2edbf |
python/cpython | python__cpython-122793 | # Some IPv4 and IPv4-mapped IPv6 properties don't match
# Bug report
### Bug description:
The following properties on an `IPv6Address` don't match their `IPv4Address` counterparts when using an IPv4-mapped IPv6 address (i.e. `::ffff:<ipv4>`):
* is_multicast
* is_reserved
* is_link_local
* is_global
* is_unspecified
Proposed fix is to make all properties use their `IPv4Address` values for IPv4-mapped addresses.
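A sketch of the intended behavior (the ``ipv4_mapped`` property itself is stable across versions; whether the boolean properties delegate depends on the Python version, so only the mapping is asserted):
```python
import ipaddress

v4 = ipaddress.ip_address("169.254.0.1")             # IPv4 link-local
mapped = ipaddress.ip_address("::ffff:169.254.0.1")  # IPv4-mapped IPv6 form

# Extracting the embedded IPv4 address always works:
assert mapped.ipv4_mapped == v4

# The proposal: properties such as is_link_local on `mapped` should
# agree with the IPv4 counterpart. Before the fix they may differ.
print(v4.is_link_local, mapped.is_link_local)
```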
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-122793
* gh-123814
* gh-123815
* gh-123818
* gh-123819
* gh-127571
<!-- /gh-linked-prs -->
| 76a1c5d18312712baed4699fe7333abb050ec9b7 | 033510e11dff742d9626b9fd895925ac77f566f1 |
python/cpython | python__cpython-122848 | # Documentation doesn't specify how "new style" formatting works for complex numbers
[Old string formatting](https://docs.python.org/3/tutorial/inputoutput.html#old-string-formatting) has no support for complex numbers:
```pycon
>>> "%f" % (1+0j)
Traceback (most recent call last):
File "<python-input-0>", line 1, in <module>
"%f" % (1+0j)
~~~~~^~~~~~~~
TypeError: must be real number, not complex
```
But formatted string literals and str.format() have:
```pycon
>>> f"{1+0j:f}"
'1.000000+0.000000j'
>>> f"{1+0j:}"
'(1+0j)'
>>> f"{1.:}" # default formatting slightly different than for floats
'1.0'
```
Unfortunately, formatting rules for complex numbers aren't documented in the [Format Specification Mini-Language](https://docs.python.org/3/library/string.html#formatspec) section. I'll work on a patch.
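The behavior that needs documenting, condensed into assertions (these hold on current CPython):
```python
# %-formatting rejects complex numbers:
try:
    "%f" % (1 + 0j)
except TypeError:
    pass

# format()/f-strings apply the spec to the real and imaginary parts:
assert f"{1+0j:f}" == "1.000000+0.000000j"
assert format(1 + 2j, ".2f") == "1.00+2.00j"

# With an empty spec, complex falls back to its str()/repr() form:
assert f"{1+0j}" == "(1+0j)"
```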
<!-- gh-linked-prs -->
### Linked PRs
* gh-122848
* gh-126128
* gh-126129
<!-- /gh-linked-prs -->
| 0bbbe15f5688552236c48f2b6e320c5312720b8e | faa3272fb8d63d481a136cc0467a0cba6ed7b264 |
python/cpython | python__cpython-123751 | # venv activate.csh fails when user prompt contains newline
# Bug report
### Bug description:
activate.csh uses `"$prompt"`, quoted like that, which fails when the user's prompt contains a newline (error: unbalanced quotes).
Using `$prompt:q` or `"$prompt:q"` solves this with no harm, at least in tcsh (not sure about vanilla csh), at least for activate. (Deactivate may have a similar issue.)
### CPython versions tested on:
3.11
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-123751
* gh-124185
<!-- /gh-linked-prs -->
| a15a584bf3f94ea11ab9363548c8872251364000 | f4dd4402108cc005d45acd4ca83c8530c36a93ca |
python/cpython | python__cpython-122760 | # Error handling of `RERAISE` is strange
# Bug report
There's a strange pattern used in `RERAISE` opcode: https://github.com/python/cpython/blob/4767a6e31c0550836b2af45d27e374e721f0c4e6/Python/bytecodes.c#L1174-L1189
Especially these lines: https://github.com/python/cpython/blob/4767a6e31c0550836b2af45d27e374e721f0c4e6/Python/bytecodes.c#L1180-L1187
It looks like the `assert` call is not needed here:
- In debug builds it will always crash, since `PyLong_Check(lasti)` will always be false
- In non-debug builds it will set `SystemError`, as it should (at least, I think so). That's the only line that matches `r'assert\(.*\);\n\s+_PyErr'`
I have a PR ready.
<!-- gh-linked-prs -->
### Linked PRs
* gh-122760
<!-- /gh-linked-prs -->
| 61a8bf28530558da239834785a0b590ae8900a16 | 76bdeebef6c6206f3e0af1e42cbfc75c51fbb8ca |
python/cpython | python__cpython-122745 | # Upgrade bundled pip to 24.2
# Feature or enhancement
### Proposal:
`ensurepip`'s bundled version of pip gets updated to the current latest 24.2.
### Has this already been discussed elsewhere?
No response given
### Links to previous discussion of this feature:
N/A
<!-- gh-linked-prs -->
### Linked PRs
* gh-122745
* gh-122746
* gh-122747
* gh-122776
* gh-122822
* gh-122823
<!-- /gh-linked-prs -->
| 5b8a6c5186be299d96dd483146dc6ea737ffdfe7 | dc093010672207176857a747c61da9c046ad9d3e |
python/cpython | python__cpython-122735 | # 3.13.0rc1 regression in PyEval_GetLocals(): `SystemError: Objects/dictobject.c:3774: bad argument to internal function`
# Crash report
### What happened?
After upgrading to CPython 3.13.0rc1, the `gpgme` test suite started failing. The tests fail with errors resembling:
```pytb
Traceback (most recent call last):
File "/tmp/gpgme-1.23.2/lang/python/tests/./final.py", line 24, in <module>
import support
File "/tmp/gpgme-1.23.2/lang/python/tests/support.py", line 60, in <module>
assert_gpg_version((2, 1, 12))
~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^
File "/tmp/gpgme-1.23.2/lang/python/tests/support.py", line 35, in assert_gpg_version
c.engine_info.version).group(0)
^^^^^^^^^^^^^
File "/tmp/gpgme-1.23.2/lang/python/python3.13-gpg/lib.linux-x86_64-cpython-313/gpg/core.py", line 1352, in engine_info
infos = [i for i in self.get_engine_info() if i.protocol == p]
~~~~~~~~~~~~~~~~~~~~^^
File "/tmp/gpgme-1.23.2/lang/python/python3.13-gpg/lib.linux-x86_64-cpython-313/gpg/core.py", line 1366, in get_engine_info
return gpgme.gpgme_ctx_get_engine_info(self.wrapped)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^
File "/tmp/gpgme-1.23.2/lang/python/python3.13-gpg/lib.linux-x86_64-cpython-313/gpg/gpgme.py", line 880, in gpgme_ctx_get_engine_info
return _gpgme.gpgme_ctx_get_engine_info(ctx)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^
File "<frozen importlib._bootstrap>", line 649, in parent
SystemError: Objects/dictobject.c:3774: bad argument to internal function
FAIL: final.py
```
I've been able to bisect it to 233ed46e6d2ba1d1f7d83a72ccf9aad9e628ede1 (#121869). However, on this commit I'm also getting a bunch of `python3.13: Objects/typeobject.c:5257: _PyType_LookupRef: Assertion '!PyErr_Occurred()' failed.` — they are fixed in some subsequent commit. But the `final.py` failure occurs from that commit to the tip of 3.13 (fe65a8b0d7a826158c6589ceb8cb83e91ed18977).
Now, gpgme's build system is horrible. To reproduce, I've done the following:
1. Built CPython with `--with-assertions`.
2. Created `ln -s python python3.13` symlink in the build directory.
3. Installed the gpgme C library 1.23.2 using the system package manager.
4. Then:
```
wget https://www.gnupg.org/ftp/gcrypt/gpgme/gpgme-1.23.2.tar.bz2
tar -xf gpgme-1.23.2.tar.bz2
cd gpgme-1.23.2
export PATH=${CPYTHON_BUILD_DIRECTORY}:$PATH
./configure # will probably need some -dev packages
cd lang/python
make PYTHONS=python3.13 check
```
Note that in order to rebuild, you need to `rm -rf python3.13-gpg`.
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Linux
### Output from running 'python -VV' on the command line:
Python 3.13.0rc1 (main, Aug 2 2024, 18:56:30) [GCC 14.2.0]
<!-- gh-linked-prs -->
### Linked PRs
* gh-122735
* gh-122757
<!-- /gh-linked-prs -->
| 4767a6e31c0550836b2af45d27e374e721f0c4e6 | 5b8a6c5186be299d96dd483146dc6ea737ffdfe7 |
python/cpython | python__cpython-122713 | # Specializing interpreter may crash if the code object of a class's `__init__` method is reassigned
# Crash report
### What happened?
The interpreter will specialize `CALL` instructions into [`CALL_ALLOC_AND_ENTER_INIT`](https://github.com/python/cpython/blob/44659d392751f0161a0f958fec39ad013da45427/Python/bytecodes.c#L3528-L3578) when it sees that the target of a `CALL` is a heap type with a [simple](https://github.com/python/cpython/blob/44659d392751f0161a0f958fec39ad013da45427/Python/specialize.c#L1471-L1481) `__init__` method (i.e. no `*args`, `**kwargs`, or kwonly args) and the correct number of arguments are provided to the call. The `CALL_ALLOC_AND_ENTER_INIT` verifies this condition using a [weaker check](https://github.com/python/cpython/blob/44659d392751f0161a0f958fec39ad013da45427/Python/bytecodes.c#L3544) based only on the argument count. It's possible to reassign the code object for a class's `__init__` method using a code object that passes the argcount check in `CALL_ALLOC_AND_ENTER_INIT`, but uses one of the other properties that the specializer guards against (`*args` in the repro below).
The following repro causes the interpreter to crash (we should construct the `*args` tuple but do not):
```python
class MyClass:
    def __init__(self):
        pass

def count_args(self, *args):
    print(f"there are {len(args)}")

def instantiate():
    MyClass()

SPECIALIZATION_THRESHOLD = 10

def main():
    # Trigger specialization
    for i in range(SPECIALIZATION_THRESHOLD):
        instantiate()
    # Trigger the bug
    MyClass.__init__.__code__ = count_args.__code__
    instantiate()

if __name__ == "__main__":
    main()
```
Backtrace:
```
python: Python/generated_cases.c.h:5011: _PyEval_EvalFrameDefault: Assertion `!PyStackRef_IsNull(GETLOCAL(oparg))' failed.
Program received signal SIGABRT, Aborted.
0x00007ffff7c8b94c in __pthread_kill_implementation () from /lib64/libc.so.6
(gdb) bt
#0 0x00007ffff7c8b94c in __pthread_kill_implementation () from /lib64/libc.so.6
#1 0x00007ffff7c3e646 in raise () from /lib64/libc.so.6
#2 0x00007ffff7c287f3 in abort () from /lib64/libc.so.6
#3 0x00007ffff7c2871b in __assert_fail_base.cold () from /lib64/libc.so.6
#4 0x00007ffff7c37386 in __assert_fail () from /lib64/libc.so.6
#5 0x000000000065f3d8 in _PyEval_EvalFrameDefault (tstate=tstate@entry=0xa7c008 <_PyRuntime+416008>, frame=0x7ffff7e1d1c0, throwflag=throwflag@entry=0) at Python/generated_cases.c.h:5016
#6 0x00000000006697e2 in _PyEval_EvalFrame (throwflag=0, frame=<optimized out>, tstate=0xa7c008 <_PyRuntime+416008>) at ./Include/internal/pycore_ceval.h:119
#7 _PyEval_Vector (tstate=tstate@entry=0xa7c008 <_PyRuntime+416008>, func=func@entry=0x7ffff7a625d0,
locals=locals@entry={'__name__': '__main__', '__doc__': None, '__package__': None, '__loader__': <SourceFileLoader(name='__main__', path='/home/mpage/local/scratch/repro_init_issue.py') at remote 0x7ffff7ac3120>, '__spec__': None, '__annotations__': {}, '__builtins__': <module at remote 0x7ffff7bbcbf0>, '__file__': '/home/mpage/local/scratch/repro_init_issue.py', '__cached__': None, 'MyClass': <type at remote 0xba9bf0>, 'count_args': <function at remote 0x7ffff7a63590>, 'instantiate': <function at remote 0x7ffff7ae8050>, 'SPECIALIZATION_THRESHOLD': 10, 'main': <function at remote 0x7ffff7ae8110>}, args=args@entry=0x0, argcount=argcount@entry=0,
kwnames=kwnames@entry=0x0) at Python/ceval.c:1821
#8 0x000000000066989c in PyEval_EvalCode (co=co@entry=<code at remote 0x7ffff7b63480>,
globals=globals@entry={'__name__': '__main__', '__doc__': None, '__package__': None, '__loader__': <SourceFileLoader(name='__main__', path='/home/mpage/local/scratch/repro_init_issue.py') at remote 0x7ffff7ac3120>, '__spec__': None, '__annotations__': {}, '__builtins__': <module at remote 0x7ffff7bbcbf0>, '__file__': '/home/mpage/local/scratch/repro_init_issue.py', '__cached__': None, 'MyClass': <type at remote 0xba9bf0>, 'count_args': <function at remote 0x7ffff7a63590>, 'instantiate': <function at remote 0x7ffff7ae8050>, 'SPECIALIZATION_THRESHOLD': 10, 'main': <function at remote 0x7ffff7ae8110>},
locals=locals@entry={'__name__': '__main__', '__doc__': None, '__package__': None, '__loader__': <SourceFileLoader(name='__main__', path='/home/mpage/local/scratch/repro_init_issue.py') at remote 0x7ffff7ac3120>, '__spec__': None, '__annotations__': {}, '__builtins__': <module at remote 0x7ffff7bbcbf0>, '__file__': '/home/mpage/local/scratch/repro_init_issue.py', '__cached__': None, 'MyClass': <type at remote 0xba9bf0>, 'count_args': <function at remote 0x7ffff7a63590>, 'instantiate': <function at remote 0x7ffff7ae8050>, 'SPECIALIZATION_THRESHOLD': 10, 'main': <function at remote 0x7ffff7ae8110>}) at Python/ceval.c:618
#9 0x00000000006e51c9 in run_eval_code_obj (tstate=tstate@entry=0xa7c008 <_PyRuntime+416008>, co=co@entry=0x7ffff7b63480,
globals=globals@entry={'__name__': '__main__', '__doc__': None, '__package__': None, '__loader__': <SourceFileLoader(name='__main__', path='/home/mpage/local/scratch/repro_init_issue.py') at remote 0x7ffff7ac3120>, '__spec__': None, '__annotations__': {}, '__builtins__': <module at remote 0x7ffff7bbcbf0>, '__file__': '/home/mpage/local/scratch/repro_init_issue.py', '__cached__': None, 'MyClass': <type at remote 0xba9bf0>, 'count_args': <function at remote 0x7ffff7a63590>, 'instantiate': <function at remote 0x7ffff7ae8050>, 'SPECIALIZATION_THRESHOLD': 10, 'main': <function at remote 0x7ffff7ae8110>},
locals=locals@entry={'__name__': '__main__', '__doc__': None, '__package__': None, '__loader__': <SourceFileLoader(name='__main__', path='/home/mpage/local/scratch/repro_init_issue.py') at remote 0x7ffff7ac3120>, '__spec__': None, '__annotations__': {}, '__builtins__': <module at remote 0x7ffff7bbcbf0>, '__file__': '/home/mpage/local/scratch/repro_init_issue.py', '__cached__': None, 'MyClass': <type at remote 0xba9bf0>, 'count_args': <function at remote 0x7ffff7a63590>, 'instantiate': <function at remote 0x7ffff7ae8050>, 'SPECIALIZATION_THRESHOLD': 10, 'main': <function at remote 0x7ffff7ae8110>}) at Python/pythonrun.c:1292
#10 0x00000000006e69aa in run_mod (mod=mod@entry=0xbb8770, filename=filename@entry='/home/mpage/local/scratch/repro_init_issue.py',
globals=globals@entry={'__name__': '__main__', '__doc__': None, '__package__': None, '__loader__': <SourceFileLoader(name='__main__', path='/home/mpage/local/scratch/repro_init_issue.py') at remote 0x7ffff7ac3120>, '__spec__': None, '__annotations__': {}, '__builtins__': <module at remote 0x7ffff7bbcbf0>, '__file__': '/home/mpage/local/scratch/repro_init_issue.py', '__cached__': None, 'MyClass': <type at remote 0xba9bf0>, 'count_args': <function at remote 0x7ffff7a63590>, 'instantiate': <function at remote 0x7ffff7ae8050>, 'SPECIALIZATION_THRESHOLD': 10, 'main': <function at remote 0x7ffff7ae8110>},
locals=locals@entry={'__name__': '__main__', '__doc__': None, '__package__': None, '__loader__': <SourceFileLoader(name='__main__', path='/home/mpage/local/scratch/repro_init_issue.py') at remote 0x7ffff7ac3120>, '__spec__': None, '__annotations__': {}, '__builtins__': <module at remote 0x7ffff7bbcbf0>, '__file__': '/home/mpage/local/scratch/repro_init_issue.py', '__cached__': None, 'MyClass': <type at remote 0xba9bf0>, 'count_args': <function at remote 0x7ffff7a63590>, 'instantiate': <function at remote 0x7ffff7ae8050>, 'SPECIALIZATION_THRESHOLD': 10, 'main': <function at remote 0x7ffff7ae8110>}, flags=flags@entry=0x7fffffffd2b8, arena=arena@entry=0x7ffff7ad96c0,
interactive_src=0x0, generate_new_source=0) at Python/pythonrun.c:1377
#11 0x00000000006e705b in pyrun_file (fp=fp@entry=0xba0550, filename=filename@entry='/home/mpage/local/scratch/repro_init_issue.py', start=start@entry=257,
globals=globals@entry={'__name__': '__main__', '__doc__': None, '__package__': None, '__loader__': <SourceFileLoader(name='__main__', path='/home/mpage/local/scratch/repro_init_issue.py') at remote 0x7ffff7ac3120>, '__spec__': None, '__annotations__': {}, '__builtins__': <module at remote 0x7ffff7bbcbf0>, '__file__': '/home/mpage/local/scratch/repro_init_issue.py', '__cached__': None, 'MyClass': <type at remote 0xba9bf0>, 'count_args': <function at remote 0x7ffff7a63590>, 'instantiate': <function at remote 0x7ffff7ae8050>, 'SPECIALIZATION_THRESHOLD': 10, 'main': <function at remote 0x7ffff7ae8110>},
locals=locals@entry={'__name__': '__main__', '__doc__': None, '__package__': None, '__loader__': <SourceFileLoader(name='__main__', path='/home/mpage/local/scratch/repro_init_issue.py') at remote 0x7ffff7ac3120>, '__spec__': None, '__annotations__': {}, '__builtins__': <module at remote 0x7ffff7bbcbf0>, '__file__': '/home/mpage/local/scratch/repro_init_issue.py', '__cached__': None, 'MyClass': <type at remote 0xba9bf0>, 'count_args': <function at remote 0x7ffff7a63590>, 'instantiate': <function at remote 0x7ffff7ae8050>, 'SPECIALIZATION_THRESHOLD': 10, 'main': <function at remote 0x7ffff7ae8110>}, closeit=closeit@entry=1, flags=0x7fffffffd2b8)
at Python/pythonrun.c:1210
#12 0x00000000006e8199 in _PyRun_SimpleFileObject (fp=fp@entry=0xba0550, filename=filename@entry='/home/mpage/local/scratch/repro_init_issue.py', closeit=closeit@entry=1, flags=flags@entry=0x7fffffffd2b8)
at Python/pythonrun.c:459
#13 0x00000000006e8445 in _PyRun_AnyFileObject (fp=fp@entry=0xba0550, filename=filename@entry='/home/mpage/local/scratch/repro_init_issue.py', closeit=closeit@entry=1, flags=flags@entry=0x7fffffffd2b8)
at Python/pythonrun.c:77
#14 0x0000000000712c8d in pymain_run_file_obj (program_name=program_name@entry='/data/users/mpage/cpython/python', filename=filename@entry='/home/mpage/local/scratch/repro_init_issue.py', skip_source_first_line=0)
at Modules/main.c:409
#15 0x0000000000714203 in pymain_run_file (config=config@entry=0xa472d8 <_PyRuntime+199640>) at Modules/main.c:428
#16 0x00000000007146bd in pymain_run_python (exitcode=exitcode@entry=0x7fffffffd444) at Modules/main.c:696
#17 0x0000000000714727 in Py_RunMain () at Modules/main.c:775
#18 0x000000000071479e in pymain_main (args=args@entry=0x7fffffffd4a0) at Modules/main.c:805
#19 0x0000000000714859 in Py_BytesMain (argc=<optimized out>, argv=<optimized out>) at Modules/main.c:829
#20 0x000000000041e86f in main (argc=<optimized out>, argv=<optimized out>) at ./Programs/python.c:15
```
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Linux
### Output from running 'python -VV' on the command line:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-122713
* gh-123184
<!-- /gh-linked-prs -->
| 79ddf7571016a04f0e1d5f416ece2d45c4440b1b | ec89620e5e147ba028a46dd695ef073a72000b84 |
python/cpython | python__cpython-128092 | # Asyncio socket-level methods do not support SSLSocket
Attempting to use an `SSLSocket` with the `asyncio.loop.sock_sendall` method and the other socket-level async functions raises the following error:
```pytb
TypeError: Socket cannot be of type SSLSocket
```
This can be reproduced by the following if `socket` is an `SSLSocket`:
```python
await asyncio.wait_for(loop.sock_sendall(socket, buf), timeout=timeout)
```
However, the [ssl-nonblocking](https://docs.python.org/3/library/ssl.html#ssl-nonblocking) section of the docs says the following:
"See also: The [asyncio](https://docs.python.org/3/library/asyncio.html#module-asyncio) module supports [non-blocking SSL sockets](https://docs.python.org/3/library/ssl.html#ssl-nonblocking) and provides a higher level API. It polls for events using the [selectors](https://docs.python.org/3/library/selectors.html#module-selectors) module and handles [SSLWantWriteError](https://docs.python.org/3/library/ssl.html#ssl.SSLWantWriteError), [SSLWantReadError](https://docs.python.org/3/library/ssl.html#ssl.SSLWantReadError) and [BlockingIOError](https://docs.python.org/3/library/exceptions.html#BlockingIOError) exceptions. It runs the SSL handshake asynchronously as well."
Is this section specifically referencing the [streams](https://docs.python.org/3/library/asyncio-stream.html) API as the SSL-compatible asyncio API? The current wording is unclear and suggests to me that non-blocking `SSLSocket` sockets are supported by asyncio, despite the error above. Is there a different section that specifies that `SSLSocket` is not supported by the socket-level asyncio API?
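A self-contained repro sketch of the error above; no connection is made, since the type check fires before any I/O (`example.com` is only a placeholder hostname):
```python
import asyncio
import socket
import ssl

async def main():
    loop = asyncio.get_running_loop()
    ctx = ssl.create_default_context()
    raw = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    raw.setblocking(False)
    # Wrap without handshaking; the socket is never connected.
    ssock = ctx.wrap_socket(raw, server_hostname="example.com",
                            do_handshake_on_connect=False)
    try:
        await loop.sock_sendall(ssock, b"data")
        return None
    except TypeError as exc:
        return str(exc)
    finally:
        ssock.close()

err = asyncio.run(main())
print(err)  # Socket cannot be of type SSLSocket
```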
<!-- gh-linked-prs -->
### Linked PRs
* gh-128092
* gh-128093
* gh-128094
<!-- /gh-linked-prs -->
| 19c5134d57764d3db7b1cacec4f090c74849a5c1 | 3c168f7f79d1da2323d35dcf88c2d3c8730e5df6 |
python/cpython | python__cpython-122705 | # Reference leaks in test suite on current main
# Bug report
### Bug description:
```
./python -m test -R 3:3 -j 32
...many lines
== Tests result: FAILURE ==
37 tests skipped:
test.test_asyncio.test_windows_events
test.test_asyncio.test_windows_utils test.test_gdb.test_backtrace
test.test_gdb.test_cfunction test.test_gdb.test_cfunction_full
test.test_gdb.test_misc test.test_gdb.test_pretty_print
test_android test_bz2 test_ctypes test_dbm_gnu test_dbm_ndbm
test_dbm_sqlite3 test_devpoll test_free_threading test_gzip
test_idle test_kqueue test_launcher test_lzma test_msvcrt
test_readline test_smtpnet test_sqlite3 test_ssl
test_stable_abi_ctypes test_startfile test_tcl test_tkinter
test_ttk test_ttk_textonly test_turtle test_winapi
test_winconsoleio test_winreg test_wmi test_zlib
8 tests skipped (resource denied):
test_curses test_peg_generator test_pyrepl test_socketserver
test_urllib2net test_urllibnet test_winsound test_zipfile64
6 tests failed:
test.test_concurrent_futures.test_shutdown test_dataclasses
test_enum test_type_aliases test_typing test_xml_etree_c
428 tests OK.
Total duration: 14 min 35 sec
Total tests: run=41,650 skipped=2,520
Total test files: run=471/479 failed=6 skipped=37 resource_denied=8
Result: FAILURE
```
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-122705
<!-- /gh-linked-prs -->
| 94a4bd79a7ab7b0ff5f216782d6fdaff6ed348fc | b0c48b8fd88f26b31ec2f743358091073277dcde |
python/cpython | python__cpython-122702 | # Is this doc wording about raw bytes literal correct?
Link: https://docs.python.org/3/reference/lexical_analysis.html#string-and-bytes-literals
<img width="816" alt="Снимок экрана 2024-08-05 в 21 38 55" src="https://github.com/user-attachments/assets/e77cf7aa-49cf-4208-b253-9d7e1ee2ee48">
> Both string and bytes literals may optionally be prefixed with a letter 'r' or 'R'; such strings are called raw strings
I don't think that this is correct wording. You cannot call `rb''` a "raw string"; it is called "raw bytes", or a "raw bytes literal". Just like the docs say in the next paragraph:
> The 'rb' prefix of raw bytes literals has been added as a synonym of 'br'.
I propose to fix this by using this phrase:
> Both string and bytes literals may optionally be prefixed with a letter 'r' or 'R'; such objects are called raw strings and raw bytes respectively.
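The distinction the proposed wording draws, shown concretely:
```python
# r'' yields a raw *string*; rb''/br'' yield raw *bytes*: different
# object types, hence "raw strings and raw bytes respectively".
assert isinstance(r"\n", str)
assert isinstance(rb"\n", bytes)
assert rb"\n" == br"\n"     # 'rb' is a synonym of 'br'
assert r"\n" == "\\n"       # raw: the backslash is kept literally
assert rb"\n" == b"\\n"
```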
<!-- gh-linked-prs -->
### Linked PRs
* gh-122702
* gh-122914
* gh-122915
<!-- /gh-linked-prs -->
| ea70439bd2b5a1c881342646f30942f527f61373 | db8b83c2b0247f1d9b15152bbfcfe4afc7e588ed |
python/cpython | python__cpython-122703 | # Memory leaks in free-threaded build at shutdown
# Bug report
The `-X showrefcount` option isn't reporting leaked blocks on exit, and we're hiding some leaked objects on shutdown. Note that we're still able to catch most leaks via the refleaks tests.
### -X showrefcount
`_Py_GetGlobalAllocatedBlocks()` is returning zero because the main interpreter is already `NULL`. This works in the default build because we've stashed away the number of leaked blocks in `_PyRuntime.obmalloc.interpreter_leaks`. We should do the same thing in the free-threaded build by updating `_PyInterpreterState_FinalizeAllocatedBlocks()`.
### Leaked objects on exit
Some objects that are referenced by global modules and use deferred reference counting are leaked on exit. Objects that use deferred reference counting are only collected by the GC. However, these objects still have valid, live references when we run the GC for the final time. For example, one leaked object is a `member_descriptor` whose last reference is removed in `clear_static_type_objects`.
I think we'll cause too many problems if we try to invoke the GC later. Instead, I think that the free-threaded GC should disable deferred reference counting on all objects when it's invoked for the final time.
<!-- gh-linked-prs -->
### Linked PRs
* gh-122703
<!-- /gh-linked-prs -->
| 2d9d3a9f5319ce3f850341d116b63cc51869df3a | 833eb106f5ebbac258f236d50177712d98a36715 |
python/cpython | python__cpython-122687 | # Hypothesis tests on Ubuntu fail on main
# Bug report
### Bug description:
See e.g.:
https://github.com/python/cpython/actions/runs/10240047164/job/28326351304
It seems pinning the attrs dependency helps. I'll provide a patch.
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-122687
* gh-122729
<!-- /gh-linked-prs -->
| 35ae4aab1aae93c1c11c45ac431787ff79ce7907 | 44659d392751f0161a0f958fec39ad013da45427 |
python/cpython | python__cpython-122682 | # Merge m_atan2() and c_atan2() helper functions (or replace libm's atan2)
# Feature or enhancement
### Proposal:
With #29179, many workarounds for buggy libm implementations were removed. Yet, there is ``m_atan2()``:
https://github.com/python/cpython/blob/d0b92dd5ca46a10558857adeb7bb48ecf39fa783/Modules/mathmodule.c#L546-L572
and
https://github.com/python/cpython/blob/d0b92dd5ca46a10558857adeb7bb48ecf39fa783/Modules/cmathmodule.c#L329-L355
They are identical; the second one just uses ``Py_complex`` to hold its arguments.
We should either 1) move the first helper to ``_math.h`` and reuse it in ``cmathmodule.c``, or 2) just remove these helpers. I'll provide a patch for option 1).
But 2) also looks safe to me: it seems that the ``c_atan2()`` helper was used only for ``cmath.phase()`` and ``cmath.polar()``; libm's ``atan2()`` is used for the rest, and this doesn't pose any problems in CI or on the build bots. ("Problem" values *are* tested, e.g. infinities and NaNs for ``acos()``.)
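The special values the ``m_atan2()``/``c_atan2()`` workaround pins down are observable from Python; a sketch of a few C99 Annex F cases:
```python
import cmath
import math

inf = float("inf")

# C99 Annex F special values that buggy platform atan2() implementations
# historically got wrong; CPython's helper guarantees them:
assert math.atan2(1.0, -inf) == math.pi
assert math.atan2(inf, inf) == math.pi / 4
assert math.atan2(0.0, 0.0) == 0.0

# cmath.phase() is where c_atan2() is exercised for complex input:
assert cmath.phase(complex(-1.0, 0.0)) == math.pi
```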
### Has this already been discussed elsewhere?
This is a minor feature, which does not need previous discussion elsewhere
### Links to previous discussion of this feature:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-122682
* gh-122715
<!-- /gh-linked-prs -->
| 0b433aa9df6b5bb84e77ff97e59b7bcd04f2199a | 6ff82fdb56fa0381f94c7a45aa67ab4c4aa71930 |
python/cpython | python__cpython-126939 | # argparse: allow to override the type name in `'invalid %(type)s value: %(value)r'` error messages
# Feature or enhancement
### Proposal:
Hey.
When doing something like:
```python
import argparse, enum

class SomeEnumClass(enum.StrEnum):
    foo = enum.auto()
    bar = enum.auto()

parser = argparse.ArgumentParser(prog="my.py")
parser.add_argument("--enum", type=SomeEnumClass, choices=[e.value for e in SomeEnumClass])
```
and one calls `--enum` with the argument `baz`, it will error out with a message like:
```
my.py: error: argument --enum: invalid SomeEnumClass value: 'baz'
```
via:
https://github.com/python/cpython/blob/d0b92dd5ca46a10558857adeb7bb48ecf39fa783/Lib/argparse.py#L2540
For the end user (who sees this message), the `SomeEnumClass` part may be rather useless, as it may be some arbitrary internal class name which may or may not be related to the option.
So in principle I think one should be able to configure a string to be printed instead of `SomeEnumClass`.
One candidate would of course be `metavar`, but the problem with that is that if one sets it, the `--help` text would be different too, and would no longer print the choices but rather the `metavar`, which one may not want.
The only way I've found to get that without extending `argparse` is:
```python
class SomeEnumClass(enum.StrEnum):
foo = enum.auto()
bar = enum.auto()
SomeEnumClass.__name__ = "blubb"
```
which causes:
```
my.py: error: argument --enum: invalid blubb value: 'baz'
```
to be printed... but I guess that is rather not well defined, or is it?
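For reference, the quoted error text can be reproduced programmatically. This is a minimal sketch using `exit_on_error=False` (which makes `parse_args()` raise `argparse.ArgumentError` instead of exiting); `SomeEnumClass` is a stand-in for the example class above, using a plain `Enum` so it also runs on versions without `StrEnum`:

```python
import argparse
import enum

class SomeEnumClass(enum.Enum):
    foo = "foo"
    bar = "bar"

parser = argparse.ArgumentParser(prog="my.py", exit_on_error=False)
parser.add_argument("--enum", type=SomeEnumClass)

try:
    parser.parse_args(["--enum", "baz"])
except argparse.ArgumentError as exc:
    message = str(exc)

# the %(type)s placeholder is filled in from type.__name__
print(message)  # argument --enum: invalid SomeEnumClass value: 'baz'
```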
Cheers,
Chris.
### Has this already been discussed elsewhere?
This is a minor feature, which does not need previous discussion elsewhere
### Links to previous discussion of this feature:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-126939
* gh-127148
* gh-127149
<!-- /gh-linked-prs -->
| fcfdb55465636afc256bc29781b283404d88e6ca | fd133d4f21cd7f5cbf6bcf332290ce52e5501167 |
python/cpython | python__cpython-122667 | # Add tests for ast optimizations
# Feature or enhancement
### Proposal:
Currently we have only one test case checking that the `optimize` parameter of `ast.parse` works as expected.
I propose to add tests for these ast optimizations:
* Optimization for binary operations if left and right operands are constants
* Optimization for unary operations if operand is constant
* Optimization for `in` operator if right operand is constant list or set
* Folding of tuples
* Subscription for constant sequences (tuple, string)
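Several of these optimizations are already observable from Python, since the AST optimizer runs during compilation. A small sketch (checking `co_consts` rather than the AST itself, so it also works on current releases):

```python
# constant folding of binary operations: 1 + 2 is folded to 3 at compile time
code = compile("x = 1 + 2", "<demo>", "exec")
assert 3 in code.co_consts

# a constant list on the right-hand side of `in` is folded to a tuple
code = compile("x in [1, 2]", "<demo>", "eval")
assert (1, 2) in code.co_consts
```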
### Has this already been discussed elsewhere?
This is a minor feature, which does not need previous discussion elsewhere
### Links to previous discussion of this feature:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-122667
* gh-123359
<!-- /gh-linked-prs -->
| 9f9b00d52ceafab6c183e8b0f502071d59dc6d22 | 1eed0f968f5f44d6a13403c1676298a322cbfbad |
python/cpython | python__cpython-122662 | # `make distclean` needs `GNU make` to work
# Bug report
### Bug description:
Introduced in bc37ac7b440b5e816f0b3915b830404290522603, the `define` directive is GNU make specific - https://www.gnu.org/software/make/manual/html_node/Multi_002dLine.html
This is the output of me running `make distclean` on main, where `make` is of the BSD variant - https://man.freebsd.org/cgi/man.cgi?make(1)
---
```
$ make distclean
find . -depth -name '__pycache__' -exec rm -rf {} ';'
find . -name '*.py[co]' -exec rm -f {} ';'
find . -name '*.[oa]' -exec rm -f {} ';'
find . -name '*.s[ol]' -exec rm -f {} ';'
find . -name '*.so.[0-9]*.[0-9]*' -exec rm -f {} ';'
find . -name '*.lto' -exec rm -f {} ';'
find . -name '*.wasm' -exec rm -f {} ';'
find . -name '*.lst' -exec rm -f {} ';'
find build -name 'fficonfig.h' -exec rm -f {} ';' || true
find build -name '*.py' -exec rm -f {} ';' || true
find build -name '*.py[co]' -exec rm -f {} ';' || true
rm -f pybuilddir.txt
rm -f _bootstrap_python
rm -f python.html python*.js python.data python*.symbols python*.map
rm -f ./usr/local/lib/python3.14/os.py
rm -f Programs/_testembed Programs/_freeze_module
rm -rf Python/deepfreeze
rm -f Python/frozen_modules/*.h
rm -f Python/frozen_modules/MANIFEST
rm -f jit_stencils.h
find build -type f -a ! -name '*.gc??' -exec rm -f {} ';'
rm -f Include/pydtrace_probes.h
rm -f profile-gen-stamp
rm -rf iOS/testbed/Python.xcframework/ios-*/bin
rm -rf iOS/testbed/Python.xcframework/ios-*/lib
rm -rf iOS/testbed/Python.xcframework/ios-*/include
rm -rf iOS/testbed/Python.xcframework/ios-*/Python.framework
rm -f *.fdata
rm -f *.prebolt
rm -f *.bolt_inst
rm -f python libpython3.14.a libpython3.14.a tags TAGS config.cache config.log pyconfig.h Modules/config.c
rm -rf build platform
rm -rf no-framework
rm -rf iOS/Frameworks
rm -rf iOSTestbed.*
rm -f python-config.py python-config
rm -rf cross-build
make -C ./Doc clean
make[1]: "/tmp/forked/cpython/Doc/Makefile" line 243: Invalid line type
make[1]: "/tmp/forked/cpython/Doc/Makefile" line 249: Invalid line type
make[1]: Fatal errors encountered -- cannot continue
make[1]: stopped in /tmp/forked/cpython/Doc
*** Error code 1
Stop.
make: stopped in /tmp/forked/cpython
```
Here are the lines:
```
$ cat -n Doc/Makefile | tail -n +243 | head -n7
243 define ensure_package
244 if uv --version > /dev/null; then \
245 $(VENVDIR)/bin/python3 -m $(1) --version > /dev/null || VIRTUAL_ENV=$(VENVDIR) uv pip install $(1); \
246 else \
247 $(VENVDIR)/bin/python3 -m $(1) --version > /dev/null || $(VENVDIR)/bin/python3 -m pip install $(1); \
248 fi
249 endef
```
---
Tested on: FreeBSD14.1
### CPython versions tested on:
3.13, CPython main branch
### Operating systems tested on:
Other
<!-- gh-linked-prs -->
### Linked PRs
* gh-122662
* gh-122668
* gh-122669
<!-- /gh-linked-prs -->
| f5c39b3e9cc88d1eaa9229d610b0221305a83ad9 | e6fad7a0e3d824f4a3c9cd71a48208880606d705 |
python/cpython | python__cpython-122638 | # cmath.tanh(±0+infj) and tanh(±0+nanj) should return ±0+nanj
# Bug report
### Bug description:
As per C11 [DR#471](https://www.open-std.org/jtc1/sc22/wg14/www/docs/n2244.htm#dr_471) (accepted for C17), ``ctanh (0 + i NaN)`` and ``ctanh (0 + i Inf)`` should return ``0 + i NaN`` (with an "invalid" exception in the second case). Currently, the real part is NaN.
This has corresponding implications for ``ctan(z)``, as its errors and special cases are handled as if the operation is implemented by ``-i*ctanh(i*z)``.
See glibc patch: https://sourceware.org/git/?p=glibc.git;a=commitdiff;h=d15e83c5f5231d971472b5ffc9219d54056ca0f1
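The change is observable from Python via `cmath.tanh`. A hedged check: the imaginary part is NaN on all versions, while the real part is NaN before the fix and ±0 after it, so only the version-independent part is asserted here:

```python
import cmath
import math

z = cmath.tanh(complex(0.0, math.nan))
# the imaginary part is NaN both before and after the fix
assert math.isnan(z.imag)
# the real part is NaN before the fix, +0.0 with DR#471 semantics
print(z.real)
```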
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-122638
<!-- /gh-linked-prs -->
| e6fad7a0e3d824f4a3c9cd71a48208880606d705 | 3462a80d2cf37a63fe43f46f64a8c9823f84531d |
python/cpython | python__cpython-122651 | # Translation strings for auditing events no longer have reST markup
# Documentation
The Python-specific 'audit-event' directive is not being extracted to message catalogs (.po files) with the usual reST markups. For instance, expected a source string as ```Raises an :ref:`auditing event `sys._getframe <auditing>` with argument ``frame``.``` but I'm getting ```Raises an auditing event sys._getframe with argument frame.``` instead.
#122325 seems to be related to this issue, so cc'ing @AA-Turner .
Steps to reproduce:
- `make -C Doc gettext`
- `grep -R 'Raises an auditing event' build/gettext/` for an output like:
```
build/gettext/library/tempfile.pot:msgid "Raises an auditing event tempfile.mkstemp with argument fullpath."
build/gettext/library/tempfile.pot:msgid "Raises an auditing event tempfile.mkdtemp with argument fullpath."
build/gettext/library/types.pot:msgid "Raises an auditing event function.__new__ with argument code."
build/gettext/library/types.pot:msgid "Raises an auditing event code.__new__ with arguments code, filename, name, argcount, posonlyargcount, kwonlyargcount, nlocals, stacksize, flags."
build/gettext/library/marshal.pot:msgid "Raises an auditing event marshal.dumps with arguments value, version."
build/gettext/library/marshal.pot:msgid "Raises an auditing event marshal.load with no arguments."
build/gettext/library/marshal.pot:msgid "Raises an auditing event marshal.loads with argument bytes."
build/gettext/library/fcntl.pot:msgid "Raises an auditing event fcntl.fcntl with arguments fd, cmd, arg."
build/gettext/library/fcntl.pot:msgid "Raises an auditing event fcntl.ioctl with arguments fd, request, arg."
build/gettext/library/fcntl.pot:msgid "Raises an auditing event fcntl.flock with arguments fd, operation."
build/gettext/library/fcntl.pot:msgid "Raises an auditing event fcntl.lockf with arguments fd, cmd, len, start, whence."
build/gettext/library/functions.pot:msgid "Raises an auditing event builtins.breakpoint with argument breakpointhook."
build/gettext/library/functions.pot:msgid "Raises an auditing event compile with arguments source and filename. This event may also be raised by implicit compilation."
build/gettext/library/functions.pot:msgid "Raises an auditing event exec with the code object as the argument. Code compilation events may also be raised."
build/gettext/library/functions.pot:msgid "Raises an auditing event builtins.id with argument id."
build/gettext/library/functions.pot:msgid "Raises an auditing event builtins.input with argument prompt before reading input"
build/gettext/library/functions.pot:msgid "Raises an auditing event builtins.input/result with the result after successfully reading input."
build/gettext/library/functions.pot:msgid "Raises an auditing event open with arguments path, mode, flags."
...
```
<!-- gh-linked-prs -->
### Linked PRs
* gh-122651
<!-- /gh-linked-prs -->
| 95f5c89b545beaafad73f05a695742da3e90bc41 | 1573d90ce17f27fd30a251de897a35bf598d2655 |
python/cpython | python__cpython-122624 | # Improve `c-api/bytearray.rst` by adding exception information
Only one function among `PyByteArray` functions says anything about error handling: https://github.com/python/cpython/blob/7aca84e557d0a6d242f322c493d53947a56bde91/Doc/c-api/bytearray.rst#L40-L56
I suggest that all of these functions should state that they can return `NULL` with an exception set. I have a PR ready.
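For illustration, this is the kind of wording such a change would add (a sketch only; the exact phrasing belongs to the PR):

```rst
.. c:function:: PyObject *PyByteArray_FromObject(PyObject *o)

   Return a new bytearray object from any object, *o*, that implements the
   :ref:`buffer protocol <bufferobjects>`.

   On failure, return ``NULL`` with an exception set.
```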
<!-- gh-linked-prs -->
### Linked PRs
* gh-122624
* gh-122658
* gh-122659
<!-- /gh-linked-prs -->
| 151934a324789c58cca9c7bbd6753d735454df5a | 95f5c89b545beaafad73f05a695742da3e90bc41 |
python/cpython | python__cpython-122620 | # LOAD_ATTR_WITH_HINT and STORE_ATTR_WITH_HINT have internal branching.
https://github.com/python/cpython/blob/main/InternalDocs/adaptive.md describes how specialized instructions should be a series of guards followed by simple, linear code.
Both instructions have two different paths with guards on each path. We should eliminate one of the branches.
Since the instance dictionary is almost certain to have only `str` keys, we should choose that path.
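The str-keys assumption can be illustrated from Python. In this sketch, ordinary attribute assignment only ever creates `str` keys; writing a non-`str` key into `__dict__` directly is the rare case that needs the other path:

```python
class C:
    pass

o = C()
o.attr = 1
# ordinary attribute assignment produces only str keys
assert all(isinstance(k, str) for k in o.__dict__)

# non-str keys are possible, but only via direct __dict__ manipulation
o.__dict__[42] = "odd"
assert o.attr == 1  # str-keyed attributes still resolve normally
```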
<!-- gh-linked-prs -->
### Linked PRs
* gh-122620
<!-- /gh-linked-prs -->
| 5bd72912a1a85be96092de302608a4298741c6cd | 1bb955a2fe0237721c141fdfe520fd3ba46db11e |
python/cpython | python__cpython-122280 | # PyLong_GetInfo() (part of Limited API) is missing in sphinx docs
See here: https://docs.python.org/3.12/c-api/stable.html#contents-of-limited-api
<!-- gh-linked-prs -->
### Linked PRs
* gh-122280
* gh-122644
* gh-122645
<!-- /gh-linked-prs -->
| d91ac525ef166edc0083acf5a96f81b87324fe7f | 7a5c4103b094aaf1b65af6de65795d172cfe8fe0 |
python/cpython | python__cpython-122601 | # Move the bodies of large instructions into helper functions.
There are two reasons for this:
1. Big chunks of code slow down the JIT and the resulting machine code.
2. These big chunks of code usually have complex control flow which makes analysis of the code for stack spilling and top-of-stack much harder.
Using token counts as a (quite approximate) guide to complexity, here are the uops with over 200 tokens:
_CALL_BUILTIN_FAST_WITH_KEYWORDS 200
_BINARY_SUBSCR_GETITEM 203
_LOAD_ATTR_GETATTRIBUTE_OVERRIDDEN 210
_CALL_METHOD_DESCRIPTOR_NOARGS 212
_CALL_METHOD_DESCRIPTOR_O 214
_LOAD_FROM_DICT_OR_GLOBALS 215
_YIELD_VALUE 219
_LOAD_SUPER_ATTR 241
_CALL_METHOD_DESCRIPTOR_FAST 247
_CALL_METHOD_DESCRIPTOR_FAST_WITH_KEYWORDS 258
_DYNAMIC_EXIT 269
_STORE_ATTR_WITH_HINT 273
_SEND 274
_EXIT_TRACE 335
_CALL_ALLOC_AND_ENTER_INIT 358
_DO_CALL 365
_CALL_FUNCTION_EX 469
_CALL_KW 498
<!-- gh-linked-prs -->
### Linked PRs
* gh-122601
<!-- /gh-linked-prs -->
| 7aca84e557d0a6d242f322c493d53947a56bde91 | 498376d7a7d6f704f22a2c963130cc15c17e7a6f |
python/cpython | python__cpython-122596 | # Add error checks in the compiler
Historically the compiler (`Python/compile.c` and `Python/symtable.c`) contained a lot of code that did not check for errors after C API calls. For example, it did not check that `PyLong_AS_LONG()`, `PySequence_Contains()`, `PySet_Contains()`, etc. can fail, and it used `PyDict_GetItem()`, which silences errors. There were so many such cases that code cleanups often stopped before reaching the compiler. Finally, many of these cases were gradually fixed. What's left are unchecked `PyLong_AS_LONG()` calls and some `PyDict_GetItemWithError()` calls.
The following PR adds many error checks. Some internal functions now return -1 to signal an error. In most cases such checks are redundant: we know that the values in the dicts are Python integers which can be converted to a C `long`, because they were created here. But it is safer to always check for errors than to omit checks in the "safe" cases and be wrong.
<!-- gh-linked-prs -->
### Linked PRs
* gh-122596
<!-- /gh-linked-prs -->
| e74680b7186e6823ea37cf7ab326d3d6bfa6f59a | 94a4bd79a7ab7b0ff5f216782d6fdaff6ed348fc |
python/cpython | python__cpython-122587 | # "template with C linkage" error when including "pycore_interp.h" from C++ file
# Bug report
### Bug description:
Tested on official docker image `python3.13.0b4-bullseye`. `pycore_interp.h` causes compiler error when included from C++ file. No problem when included from C file.
`mypackage/_good.c`:
```c
#include "Python.h"
#define Py_BUILD_CORE 1
#include "internal/pycore_interp.h"
```
`mypackage/_bad.cpp` (same content as above):
```c++
#include "Python.h"
#define Py_BUILD_CORE 1
#include "internal/pycore_interp.h"
```
`setup.py`:
```python
from setuptools import Extension, find_packages, setup
def get_ext_modules():
ext_good = Extension(name="mypackage.a",
sources=['mypackage/_good.c'])
ext_bad = Extension(name="mypackage.b",
sources=['mypackage/_bad.cpp'])
return [ext_good, ext_bad]
metadata = dict(
name='debugthis',
packages=find_packages(include=["mypackage"])
)
metadata['ext_modules'] = get_ext_modules()
setup(**metadata)
```
Running `python setup.py build` produces error:
```
running build
running build_py
copying mypackage/__init__.py -> build/lib.linux-x86_64-cpython-313/mypackage
running build_ext
building 'mypackage.a' extension
creating build/temp.linux-x86_64-cpython-313
creating build/temp.linux-x86_64-cpython-313/mypackage
gcc -pthread -fno-strict-overflow -Wsign-compare -DNDEBUG -g -O3 -Wall -fPIC -I/usr/local/include/python3.13 -c mypackage/_good.c -o build/temp.linux-x86_64-cpython-313/mypackage/_good.o
gcc -pthread -shared build/temp.linux-x86_64-cpython-313/mypackage/_good.o -L/usr/local/lib -o build/lib.linux-x86_64-cpython-313/mypackage/a.cpython-313-x86_64-linux-gnu.so
building 'mypackage.b' extension
gcc -pthread -fno-strict-overflow -Wsign-compare -DNDEBUG -g -O3 -Wall -fPIC -I/usr/local/include/python3.13 -c mypackage/_bad.cpp -o build/temp.linux-x86_64-cpython-313/mypackage/_bad.o
In file included from /usr/local/include/python3.13/internal/mimalloc/mimalloc.h:429,
from /usr/local/include/python3.13/internal/pycore_mimalloc.h:39,
from /usr/local/include/python3.13/internal/pycore_interp.h:31,
from mypackage/_bad.cpp:4:
/usr/include/c++/10/type_traits:56:3: error: template with C linkage
56 | template<typename _Tp, _Tp __v>
| ^~~~~~~~
In file included from mypackage/_bad.cpp:4:
/usr/local/include/python3.13/internal/pycore_interp.h:4:1: note: ‘extern "C"’ linkage started here
4 | extern "C" {
| ^~~~~~~~~~
...
```
### CPython versions tested on:
3.13
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-122587
* gh-123035
<!-- /gh-linked-prs -->
| 1dad23edbc9db3a13268c1000c8dd428edba29f8 | 3203a7412977b8da3aba2770308136a37f48c927 |
python/cpython | python__cpython-122694 | # TSan reported race condition in `_PyPegen_is_memoized` in `test_importlib`
# Bug report
I think this is a race on the global `_PyRuntime.parser.memo_statistics`. That's only used in the debug build for stats, so the race isn't particularly dangerous, but we should fix it.
```
WARNING: ThreadSanitizer: data race (pid=8921)
Write of size 8 at 0x55b27b2a92f0 by thread T84:
#0 _PyPegen_is_memoized /home/runner/work/cpython/cpython/Parser/pegen.c:351:39 (python+0x166292) (BuildId: 5a0193644aa87641efb42b5a646a83190f2fd33b)
#1 bitwise_or_rule /home/runner/work/cpython/cpython/Parser/parser.c:12893:9 (python+0x1842c2) (BuildId: 5a0193644aa87641efb42b5a646a83190f2fd33b)
#2 bitwise_or_raw /home/runner/work/cpython/cpython/Parser/parser.c:12951:18 (python+0x184511) (BuildId: 5a0193644aa87641efb42b5a646a83190f2fd33b)
#3 bitwise_or_rule /home/runner/work/cpython/cpython/Parser/parser.c:12906:22 (python+0x184511)
#4 comparison_rule /home/runner/work/cpython/cpython/Parser/parser.c:12146:18 (python+0x19bc7f) (BuildId: 5a0193644aa87641efb42b5a646a83190f2fd33b)
#5 inversion_rule /home/runner/work/cpython/cpython/Parser/parser.c:12097:31 (python+0x19b869) (BuildId: 5a0193644aa87641efb42b5a646a83190f2fd33b)
#6 conjunction_rule /home/runner/work/cpython/cpython/Parser/parser.c:11974:18 (python+0x19a9b8) (BuildId: 5a0193644aa87641efb42b5a646a83190f2fd33b)
#7 disjunction_rule /home/runner/work/cpython/cpython/Parser/parser.c:11886:18 (python+0x1938f8) (BuildId: 5a0193644aa87641efb42b5a646a83190f2fd33b)
#8 expression_rule /home/runner/work/cpython/cpython/Parser/parser.c:11174:18 (python+0x182e81) (BuildId: 5a0193644aa87641efb42b5a646a83190f2fd33b)
#9 named_expression_rule /home/runner/work/cpython/cpython/Parser/parser.c:11832:31 (python+0x181b40) (BuildId: 5a0193644aa87641efb42b5a646a83190f2fd33b)
#10 if_stmt_rule /home/runner/work/cpython/cpython/Parser/parser.c:5927:18 (python+0x1769b7) (BuildId: 5a0193644aa87641efb42b5a646a83190f2fd33b)
#11 compound_stmt_rule /home/runner/work/cpython/cpython/Parser/parser.c:2114:28 (python+0x1769b7)
#12 statement_rule /home/runner/work/cpython/cpython/Parser/parser.c:1419:18 (python+0x17582f) (BuildId: 5a0193644aa87641efb42b5a646a83190f2fd33b)
#13 _loop1_3_rule /home/runner/work/cpython/cpython/Parser/parser.c:25738:30 (python+0x17582f)
#14 statements_rule /home/runner/work/cpython/cpython/Parser/parser.c:1376:18 (python+0x17582f)
#15 file_rule /home/runner/work/cpython/cpython/Parser/parser.c:1178:18 (python+0x170f5a) (BuildId: 5a0193644aa87641efb42b5a646a83190f2fd33b)
#16 _PyPegen_parse /home/runner/work/cpython/cpython/Parser/parser.c:42410:18 (python+0x170f5a)
#17 _PyPegen_run_parser /home/runner/work/cpython/cpython/Parser/pegen.c:869:17 (python+0x167e01) (BuildId: 5a0193644aa87641efb42b5a646a83190f2fd33b)
#18 _PyPegen_run_parser_from_string /home/runner/work/cpython/cpython/Parser/pegen.c:992:14 (python+0x1684e7) (BuildId: 5a0193644aa87641efb42b5a646a83190f2fd33b)
#19 _PyParser_ASTFromString /home/runner/work/cpython/cpython/Parser/peg_api.c:13:21 (python+0x2175dd) (BuildId: 5a0193644aa87641efb42b5a646a83190f2fd33b)
#20 Py_CompileStringObject /home/runner/work/cpython/cpython/Python/pythonrun.c:1435:11 (python+0x5fa5ab) (BuildId: 5a0193644aa87641efb42b5a646a83190f2fd33b)
...
Previous write of size 8 at 0x55b27b2a92f0 by thread T83:
#0 _PyPegen_is_memoized /home/runner/work/cpython/cpython/Parser/pegen.c:351:39 (python+0x166292) (BuildId: 5a0193644aa87641efb42b5a646a83190f2fd33b)
#1 bitwise_or_rule /home/runner/work/cpython/cpython/Parser/parser.c:12893:9 (python+0x1842c2) (BuildId: 5a0193644aa87641efb42b5a646a83190f2fd33b)
#2 bitwise_or_raw /home/runner/work/cpython/cpython/Parser/parser.c:12951:18 (python+0x184511) (BuildId: 5a0193644aa87641efb42b5a646a83190f2fd33b)
#3 bitwise_or_rule /home/runner/work/cpython/cpython/Parser/parser.c:12906:22 (python+0x184511)
#4 comparison_rule /home/runner/work/cpython/cpython/Parser/parser.c:12146:18 (python+0x19bc7f) (BuildId: 5a0193644aa87641efb42b5a646a83190f2fd33b)
#5 inversion_rule /home/runner/work/cpython/cpython/Parser/parser.c:12097:31 (python+0x19b869) (BuildId: 5a0193644aa87641efb42b5a646a83190f2fd33b)
#6 conjunction_rule /home/runner/work/cpython/cpython/Parser/parser.c:11974:18 (python+0x19a9b8) (BuildId: 5a0193644aa87641efb42b5a646a83190f2fd33b)
#7 disjunction_rule /home/runner/work/cpython/cpython/Parser/parser.c:11886:18 (python+0x1938f8) (BuildId: 5a0193644aa87641efb42b5a646a83190f2fd33b)
#8 expression_rule /home/runner/work/cpython/cpython/Parser/parser.c:11174:18 (python+0x182e81) (BuildId: 5a0193644aa87641efb42b5a646a83190f2fd33b)
#9 star_expression_rule /home/runner/work/cpython/cpython/Parser/parser.c:11551:31 (python+0x1a0839) (BuildId: 5a0193644aa87641efb42b5a646a83190f2fd33b)
#10 star_expressions_rule /home/runner/work/cpython/cpython/Parser/parser.c:11391:18 (python+0x19f602) (BuildId: 5a0193644aa87641efb42b5a646a83190f2fd33b)
#11 simple_stmt_rule /home/runner/work/cpython/cpython/Parser/parser.c:1763:18 (python+0x20241e) (BuildId: 5a0193644aa87641efb42b5a646a83190f2fd33b)
#12 simple_stmts_rule /home/runner/work/cpython/cpython/Parser/parser.c:1618:18 (python+0x177d4a) (BuildId: 5a0193644aa87641efb42b5a646a83190f2fd33b)
#13 statement_rule /home/runner/work/cpython/cpython/Parser/parser.c:1443:34 (python+0x175911) (BuildId: 5a0193644aa87641efb42b5a646a83190f2fd33b)
#14 _loop1_3_rule /home/runner/work/cpython/cpython/Parser/parser.c:25738:30 (python+0x175911)
#15 statements_rule /home/runner/work/cpython/cpython/Parser/parser.c:1376:18 (python+0x175911)
#16 file_rule /home/runner/work/cpython/cpython/Parser/parser.c:1178:18 (python+0x170f5a) (BuildId: 5a0193644aa87641efb42b5a646a83190f2fd33b)
#17 _PyPegen_parse /home/runner/work/cpython/cpython/Parser/parser.c:42410:18 (python+0x170f5a)
#18 _PyPegen_run_parser /home/runner/work/cpython/cpython/Parser/pegen.c:869:17 (python+0x167e01) (BuildId: 5a0193644aa87641efb42b5a646a83190f2fd33b)
#19 _PyPegen_run_parser_from_string /home/runner/work/cpython/cpython/Parser/pegen.c:992:14 (python+0x1684e7) (BuildId: 5a0193644aa87641efb42b5a646a83190f2fd33b)
#20 _PyParser_ASTFromString /home/runner/work/cpython/cpython/Parser/peg_api.c:13:21 (python+0x2175dd) (BuildId: 5a0193644aa87641efb42b5a646a83190f2fd33b)
#21 Py_CompileStringObject /home/runner/work/cpython/cpython/Python/pythonrun.c:1435:11 (python+0x5fa5ab) (BuildId: 5a0193644aa87641efb42b5a646a83190f2fd33b)
...
```
Job: https://github.com/python/cpython/actions/runs/10204655178/job/28233674832?pr=122577
Full log archive: https://gist.github.com/colesbury/bfd65049b17505fcae64f5afd3ed388e
<!-- gh-linked-prs -->
### Linked PRs
* gh-122694
* gh-122733
<!-- /gh-linked-prs -->
| ce0d66c8d238c9676c6ecd3f04294a3299e07f74 | a8be8fc6c4682089be45a87bd5ee1f686040116c |
python/cpython | python__cpython-122960 | # Update to WASI SDK 24
I've already verified everything works.
<!-- gh-linked-prs -->
### Linked PRs
* gh-122960
* gh-122961
<!-- /gh-linked-prs -->
| 0e207f3e7adc6a0fdbe1482ce01163ff04d5ddcb | 9621a7d0170bf1ec48bcfc35825007cdf75265ea |
python/cpython | python__cpython-122574 | # Incorrect minimum version of Python for Windows build bootstrapping
# Bug report
### Bug description:
Python now requires 3.10 to build itself, because `match` case statements have begun proliferating in the cpython repository. However, the Windows build still thinks 3.9 is enough, which results in a build error on a standard Visual Studio 2022 environment.
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-122574
* gh-122674
* gh-122677
<!-- /gh-linked-prs -->
| d0b92dd5ca46a10558857adeb7bb48ecf39fa783 | 3bde3d8e03eb3d0632d0dced0ab710ab9e3b2894 |
python/cpython | python__cpython-122572 | # Repeated definition of PY_BUILTIN_HASHLIB_HASHES in configure.ac
# Bug report
### Bug description:
The PY_BUILTIN_HASHLIB_HASHES variable is defined twice in configure.ac, which
results in the corresponding #define appearing twice in the autoconf
confdefs.h, which results in an unconditional compile error, meaning the test
isn't testing what it is supposed to test.
Found via https://github.com/python/cpython/pull/119316
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-122572
* gh-122763
<!-- /gh-linked-prs -->
| b5e142ba7c2063efe9bb8065c3b0bad33e2a9afa | 4767a6e31c0550836b2af45d27e374e721f0c4e6 |
python/cpython | python__cpython-122563 | # symtable: ste_free and ste_child_free are unused
`symtable.c` computes the `ste_free` and `ste_child_free` values but nothing in the code uses them. Let's clean up those fields.
cc @carljm @iritkatriel
<!-- gh-linked-prs -->
### Linked PRs
* gh-122563
* gh-122825
<!-- /gh-linked-prs -->
| 8234419c32b9890689e26da936882bc1e9ee161f | df13a1821a90fcfb75eca59aad6af1f0893b1e77 |
python/cpython | python__cpython-122932 | # Clean up and microoptimize str.translate and charmap codec
Initially I just planned to get rid of `PyLong_AS_LONG` (an undocumented transitional alias of `PyLong_AsLong`) in `Objects/unicodeobject.c`. But I took the opportunity to add a few optimizations in the nearby code. I do not expect a significant performance boost, but some overhead was removed in specific corner cases:
* `PyLong_AsLongAndOverflow` is now only called once for the replacement code (`PyLong_AS_LONG` was called twice).
* Using `PyMapping_GetOptionalItem` instead of `PyObject_GetItem` makes it possible to avoid raising a `KeyError` if the translation table is a `dict`. I left this case alone in the previous round (#106307) because it does not make the code simpler (we still need to handle other `LookupError`s), but it still has a tiny performance benefit.
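The corner case in question is visible from Python: with a `dict` table, `str.translate` treats a missing key as identity and `None` as deletion, so no `KeyError` should ever escape to the caller (the new code merely avoids building that exception internally):

```python
table = {ord("a"): "b", ord("b"): None}
# 'a' -> 'b', 'b' is deleted, and 'c' is not in the table so it is kept as-is
assert "abc".translate(table) == "bc"
```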
<!-- gh-linked-prs -->
### Linked PRs
* gh-122932
<!-- /gh-linked-prs -->
| 1a0b828994ed4ec1f2ba05123995a7d1e852f4b4 | 6f563e364d1a7902417573f842019746a79cdc1b |
python/cpython | python__cpython-122582 | # inlined comprehension implementation in symtable - missing test or redundant code
No test fails if I remove the symtable code that modifies the symtable for an inlined comprehension:
https://github.com/python/cpython/pull/122557
I don't know whether this step is necessary. I'd suggest we either find a test covering it or remove it.
<!-- gh-linked-prs -->
### Linked PRs
* gh-122582
<!-- /gh-linked-prs -->
| fe0a28d850943cf2ba132c9b0a933bb0c98ff0ae | 4b63cd170e5dd840bffc80922f09f2d69932ff5c |
python/cpython | python__cpython-122628 | # Pickle ignores custom getstate methods on TextIOWrapper in Python 3.12
# Bug report
### Bug description:
So I am not entirely sure whether this is unintended behaviour, but it is definitely a noticeable change between 3.11 and 3.12 that is rather unintuitive:
```python
import pickle
from io import BytesIO, TextIOWrapper
class EncodedFile(TextIOWrapper):
def __getstate__(self):
return "string"
def __setstate__(self, state):
pass
file = EncodedFile(BytesIO(b"string"))
pickle.dumps(file)
```
This works in Python 3.11 and 3.10, but fails in 3.12 with
```python
pickle.dumps(file)
TypeError: cannot pickle 'EncodedFile' instances
```
### CPython versions tested on:
3.10, 3.11, 3.12
### Operating systems tested on:
macOS
<!-- gh-linked-prs -->
### Linked PRs
* gh-122628
* gh-133381
<!-- /gh-linked-prs -->
| e9253ebf74433de5ae6d7f1bce693a3a1173b3b1 | a247dd300ea0c839154e2e38dbc0fdc9fdff673f |
python/cpython | python__cpython-122556 | # [cleanup] Remove removed functions from `Doc/data/refcounts.dat`
# Feature or enhancement
### Proposal:
This is a follow-up of https://github.com/python/cpython/pull/122331#discussion_r1699206142 where a typo was not spotted. Some functions are still in this file but were removed in previous versions (see the corresponding PR for how I detected them).
### Has this already been discussed elsewhere?
This is a minor feature, which does not need previous discussion elsewhere
### Links to previous discussion of this feature:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-122556
<!-- /gh-linked-prs -->
| 88030861e216ac791725c8784752201d6fe31329 | 58ffc4cf4aa4cfb47f8768a3c3eaf1dd7a7c4584 |
python/cpython | python__cpython-122547 | # Add `platform.invalidate_caches` for invalidating cached results
# Feature or enhancement
### Proposal:
In #122525, it was observed that many functions of the `platform` module use cached results. This is, in practice, preferable, but it should also be possible, if needed, to invalidate cached results. One use case is when the host name changes (i.e., one of the `uname()` components changes).
### Has this already been discussed elsewhere?
No response given
### Links to previous discussion of this feature:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-122547
<!-- /gh-linked-prs -->
| 612ac283b81907d328891b102f5bfafcf62bd833 | 08f98f4576f95f9ae1a4423d151fce053416f39f |
python/cpython | python__cpython-123217 | # Inconsistency between file names of SyntaxErrors and other Exceptions in the new repl
A small issue, but the new repl reports `<unknown>` as the file name for `SyntaxErrors`, but `<python-input-x>` for other errors:
```
>>> a b c
File "<unknown>", line 1
a b c
^
SyntaxError: invalid syntax
>>> 1 / 0
Traceback (most recent call last):
File "<python-input-1>", line 1, in <module>
1 / 0
~~^~~
ZeroDivisionError: division by zero
```
Classic repl:
```
>>> a b c
File "<stdin>", line 1
a b c
^
SyntaxError: invalid syntax
>>> a
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
a
NameError: name 'a' is not defined
>>>
```
I think this is because `_pyrepl.console.InteractiveColoredConsole.showsyntaxerror` does not pass on the `filename` argument to the super method:
```python
def showsyntaxerror(self, filename=None):
super().showsyntaxerror(colorize=self.can_colorize)
```
Should probably wait till #122528 is done. Then, `InteractiveColoredConsole` could be simplified to override only `_showtraceback`. The undocumented colorize keyword arguments that `showtraceback` and `showsyntaxerror` have gained for pyrepl could then be removed again.
<!-- gh-linked-prs -->
### Linked PRs
* gh-123217
* gh-123226
* gh-123233
* gh-123246
* gh-123247
<!-- /gh-linked-prs -->
| 3d7b1a526d858496add5b188c790b8d5fe73b06b | 427b106162c7467de8a84476a053dfba9ef16dfa |
python/cpython | python__cpython-122566 | # Change base OS image to Ubuntu-24.04 in CI testing
In the current `ci.yml`, the base OS image is `ubuntu-22.04`; should we move to the latest LTS version, `ubuntu-24.04`?
>
> pool:
> vmImage: ubuntu-22.04
Going further, should we switch all current CI testing that runs on `ubuntu-22.04` over to `ubuntu-24.04`?
<!-- gh-linked-prs -->
### Linked PRs
* gh-122566
* gh-122568
* gh-122570
* gh-122593
* gh-122594
* gh-125344
* gh-126479
* gh-126480
* gh-126619
* gh-126621
* gh-130260
* gh-130268
* gh-130295
<!-- /gh-linked-prs -->
| fc233f46d3761b4e808be2c44fda0b843179004e | c3a12ae13ee0212a096f570064407f8ba954e6aa |
python/cpython | python__cpython-122577 | # Crash on deallocator of type 'time.struct_time' during finalizing for free-threaded Python
# Crash report
### What happened?
A simple standalone file:
```python
# test1.py
import time
obj = time.struct_time(range(1, 10))
```
```console
$ pyenv install 3.14t-dev --debug
$ pyenv local 3.14t-dev-debug
$ python3 -VV
Python 3.14.0a0 experimental free-threading build (heads/main:bd3d31f, Aug 1 2024, 01:22:33) [Clang 15.0.0 (clang-1500.3.9.4)]
$ python3 test1.py
Fatal Python error: _Py_Dealloc: Deallocator of type 'time.struct_time' raised an exception
Python runtime state: finalizing (tstate=0x00000001018ca548)
Exception ignored in the internal traceback machinery:
ImportError: sys.meta_path is None, Python is likely shutting down
TypeError: Missed attribute 'n_fields' of type time.struct_time
Current thread 0x00000001fff70c00 (most recent call first):
Garbage-collecting
<no Python frame>
[1] 89377 abort python3 test1.py
$ python3 -c 'import test1'
Fatal Python error: _Py_Dealloc: Deallocator of type 'time.struct_time' raised an exception
Python runtime state: finalizing (tstate=0x00000001040a2548)
Exception ignored in the internal traceback machinery:
ImportError: sys.meta_path is None, Python is likely shutting down
TypeError: Missed attribute 'n_fields' of type time.struct_time
Current thread 0x00000001fff70c00 (most recent call first):
Garbage-collecting
<no Python frame>
[1] 89737 abort python3 -c 'import test1'
```
If I remove the assignment for `obj`, the program can exit successfully.
```python
# test2.py
import time
time.struct_time(range(1, 10))
```
```console
$ python3 test2.py # no output
$ python3 -c 'import test2' # no output
```
### CPython versions tested on:
CPython main branch, CPython 3.13t, CPython 3.14t
### Operating systems tested on:
Linux, macOS
### Output from running 'python -VV' on the command line:
Python 3.14.0a0 experimental free-threading build (heads/main:bd3d31f, Aug 1 2024, 01:22:33) [Clang 15.0.0 (clang-1500.3.9.4)]
<!-- gh-linked-prs -->
### Linked PRs
* gh-122577
* gh-122625
* gh-122626
<!-- /gh-linked-prs -->
| 4b63cd170e5dd840bffc80922f09f2d69932ff5c | 7aca84e557d0a6d242f322c493d53947a56bde91 |
python/cpython | python__cpython-122547 | # platform.node() does not return the latest hostname value
# Bug report
### Bug description:
I think it's because of how the cache is implemented for the uname() helper: https://github.com/python/cpython/blob/5377f55b4e022041b7b57b5489c66c9b3c046c7e/Lib/platform.py#L890-L905.\
Given that socket.gethostname() exists as well and respects the latest hostname value, maybe this issue is wontfix. Thought I would file it anyhow.
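A quick way to observe the difference is to compare the cached value with a live lookup; `socket.gethostname()` is the workaround mentioned above:

```python
import platform
import socket

# platform.node() comes from a uname() result cached on first use, so it can
# go stale after a hostname change; socket.gethostname() queries the OS anew.
cached = platform.node()
live = socket.gethostname()
print("cached:", cached, "| live:", live)
```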
### CPython versions tested on:
3.12
### Operating systems tested on:
macOS
<!-- gh-linked-prs -->
### Linked PRs
* gh-122547
<!-- /gh-linked-prs -->
| 612ac283b81907d328891b102f5bfafcf62bd833 | 08f98f4576f95f9ae1a4423d151fce053416f39f |
python/cpython | python__cpython-122543 | # socket module shutdown() constants SHUT_RD, etc not included in Constants section
# Documentation
In the documentation for the socket module, the constants SHUT_RD, SHUT_WR and SHUT_RDWR are mentioned in the description of shutdown(), but they are missing from the Constants section on that page.
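The constants are present at module level, so the Constants section only needs to document them; a quick check:

```python
import socket

# The shutdown() flags exist on the module even though the Constants section
# of the docs currently omits them.
for name in ("SHUT_RD", "SHUT_WR", "SHUT_RDWR"):
    print(name, "=", getattr(socket, name))
```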
<!-- gh-linked-prs -->
### Linked PRs
* gh-122543
* gh-123093
* gh-123094
<!-- /gh-linked-prs -->
| 8a59deca59aa9452e71bb49e909199fbb41a5de7 | 19be0ee9316f9f0d0cae63d3cdd384f44f9a2fce |
python/cpython | python__cpython-122512 | # Improve the documentation for identity semantics of mutable and immutable types
Current paragraph:
> Types affect almost all aspects of object behavior. Even the importance of object identity is affected in some sense: for immutable types, operations that compute new values may actually return a reference to any existing object with the same type and value, while for mutable objects this is not allowed. E.g., after a = 1; b = 1, a and b may or may not refer to the same object with the value one, depending on the implementation, but after c = []; d = [], c and d are guaranteed to refer to two different, unique, newly created empty lists. (Note that c = d = [] assigns the same object to both c and d.)
Proposed changes:
> Types affect almost all aspects of object behavior. Even the importance of
object identity is affected in some sense.
>
> For immutable types such as :class:`int` or :class:`str`, operations that
compute new values may actually return a reference to any existing object
with the same type and value, e.g., after ``a = 1; b = 1``, *a* and *b* may
or may not refer to the same object with the value one.
>
> For mutable types such as :class:`list` or :class:`dict`, this is not allowed,
e.g., after ``c = []; d = []``, *c* and *d* are guaranteed to refer to two
different, unique, newly created empty lists (note that ``e = f = []`` assigns
the *same* object to both *e* and *f*).
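The proposed wording maps directly onto observable behavior; a short runnable illustration of all three cases:

```python
# Immutable type: identity of equal ints is implementation-dependent.
a = 1; b = 1
print(a is b)  # may be True in CPython (small-int caching), not guaranteed

# Mutable type: two list displays always create two distinct objects.
c = []; d = []
assert c is not d

# Chained assignment binds one object to both names.
e = f = []
assert e is f
e.append(1)
assert f == [1]
```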
cc @mdeiana @terryjreedy
Related: https://github.com/python/cpython/issues/122463
<!-- gh-linked-prs -->
### Linked PRs
* gh-122512
* gh-122778
* gh-122779
<!-- /gh-linked-prs -->
| 76bdeebef6c6206f3e0af1e42cbfc75c51fbb8ca | 674a50ef2f8909c1c5d812e166bcc12ae6377908 |
python/cpython | python__cpython-122483 | # About IDLE: direct discussion to DPO
Change About IDLE to direct discussions to discuss.python.org. Users are already doing so.
Currently, idle-dev@python.org and the idle-dev mailing list serve, 90+% of the time, to collect spam. The few user messages per year are off-topic requests for help, mostly not about IDLE. The email should be disabled and the mailing list archived, but those actions are not part of this tracker.
<!-- gh-linked-prs -->
### Linked PRs
* gh-122483
* gh-122485
* gh-122486
<!-- /gh-linked-prs -->
| 29c04dfa2718dd25ad8b381a1027045b312f9739 | 097633981879b3c9de9a1dd120d3aa585ecc2384 |
python/cpython | python__cpython-122528 | # The new REPL outputs a different traceback when a custom sys.excepthook is used
# Bug report
If `sys.excepthook` is not `sys.__excepthook__`, the new REPL includes the `code` module's frame in the traceback:
```pycon
>>> xxx
Traceback (most recent call last):
File "<python-input-0>", line 1, in <module>
xxx
NameError: name 'xxx' is not defined
>>> import sys; sys.excepthook = lambda *args: sys.__excepthook__(*args)
>>> xxx
Traceback (most recent call last):
File "/home/serhiy/py/cpython/Lib/code.py", line 91, in runcode
exec(code, self.locals)
~~~~^^^^^^^^^^^^^^^^^^^
File "<python-input-2>", line 1, in <module>
xxx
NameError: name 'xxx' is not defined
```
<!-- gh-linked-prs -->
### Linked PRs
* gh-122528
* gh-122816
* gh-122817
* gh-123227
<!-- /gh-linked-prs -->
| e73e7a7abdc3fed252affcb1629df1b3c8fff2ef | 42d9bec98fd846e16a3f4fa9a07e2024aae533ce |
python/cpython | python__cpython-122460 | # Pickling objects by name without __module__ is suboptimal
When an object that is pickled by name (classes, functions, and other enum-like objects) has no `__module__` attribute, or it is None, the code searches for the object in all imported modules: it tries to resolve the object's name relative to each module and checks that the result is the same object.
Whether the module was obtained from the `__module__` attribute or found in `sys.modules`, the code then checks it again: it tries to resolve the name relative to the module and checks that the result is the same object.
So in some cases the work is performed twice. The following PR makes it only be performed once. It also contains a few other minor optimizations and simplifications.
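A minimal illustration of the fallback search; `demo_mod` and `f` are throwaway names made up for this sketch:

```python
import pickle
import sys
import types

# Sketch: an object whose __module__ is None is located by scanning
# sys.modules for a module attribute that is the exact same object.
mod = types.ModuleType("demo_mod")

def f():
    return 42

f.__module__ = None      # force the sys.modules scan
f.__qualname__ = "f"
mod.f = f
sys.modules["demo_mod"] = mod
try:
    data = pickle.dumps(f)        # found in demo_mod, pickled by name
    result = pickle.loads(data)()
finally:
    del sys.modules["demo_mod"]
print(result)
```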
<!-- gh-linked-prs -->
### Linked PRs
* gh-122460
<!-- /gh-linked-prs -->
| 1bb955a2fe0237721c141fdfe520fd3ba46db11e | 1422500d020bd199b26357fc387f8b79b82226cd |
python/cpython | python__cpython-122446 | # __static_attributes__ doesn't include method calls
# Feature or enhancement
### Proposal:
```python
class C:
def f(self):
self.x
self.y[3]
self.z()
print(C.__static_attributes__) # gives ('y', 'x'), but 'z' is absent
```
From 3.13 documentation, `__static_attributes__` is _A tuple containing names of attributes of this class which are accessed through self.X from any function in its body.`_. Is there a reason why subscriptions are included in the result, but not method calls ?
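A guarded, version-tolerant check of the behavior described (the attribute is new in 3.13, so the fallback keeps this runnable on older interpreters):

```python
class C:
    def f(self):
        self.x
        self.y[3]
        self.z()

# On 3.13+, plain attribute access (self.x) and subscripted access (self.y[3])
# are recorded; whether the method call self.z() should be is the question.
attrs = set(getattr(C, "__static_attributes__", ()))
print(attrs)
assert not attrs or {"x", "y"} <= attrs
```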
### Has this already been discussed elsewhere?
This is a minor feature, which does not need previous discussion elsewhere
### Links to previous discussion of this feature:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-122446
* gh-122621
<!-- /gh-linked-prs -->
| 498376d7a7d6f704f22a2c963130cc15c17e7a6f | 9fc1c992d6fcea0b7558c581846eef6bdd811f6c |
python/cpython | python__cpython-122469 | # Segmentation Fault in append_history_file of readline
# Crash report
### What happened?
### Build
```
apt-get install libreadline6-dev
./configure --with-pydebug --with-address-sanitizer
```
### Root Cause
When calling readline.append_history_file, the first argument can be set to -2147483648, with a valid file path as the second argument. There is no proper validation of this value before it is passed to append_history, which can cause a crash.
```c
static PyObject *
readline_append_history_file(PyObject *module, PyObject *const *args, Py_ssize_t nargs)
{
PyObject *return_value = NULL;
int nelements;
PyObject *filename_obj = Py_None;
if (!_PyArg_CheckPositional("append_history_file", nargs, 1, 2)) {
goto exit;
}
nelements = PyLong_AsInt(args[0]); // input from user
if (nelements == -1 && PyErr_Occurred()) {
goto exit;
}
if (nargs < 2) {
goto skip_optional;
}
filename_obj = args[1];
skip_optional:
return_value = readline_append_history_file_impl(module, nelements, filename_obj); // nelements : -2147483648
exit:
return return_value;
}
```
```c
static PyObject *
readline_append_history_file_impl(PyObject *module, int nelements,
PyObject *filename_obj)
/*[clinic end generated code: output=5df06fc9da56e4e4 input=784b774db3a4b7c5]*/
{
...
errno = err = append_history(
nelements - libedit_append_replace_history_offset, filename); // nelements : -2147483648
}
```
### POC
```python
import readline
readline.append_history_file(-2147483648, __file__)
```
### ASAN
<details>
<summary>asan</summary>
```
AddressSanitizer:DEADLYSIGNAL
=================================================================
==10389==ERROR: AddressSanitizer: SEGV on unknown address 0x620c0002a900 (pc 0x7fdf36f7aee0 bp 0x604000003ed0 sp 0x7ffd4d0abf50 T0)
==10389==The signal is caused by a READ memory access.
#0 0x7fdf36f7aee0 (/lib/x86_64-linux-gnu/libreadline.so.8+0x3dee0) (it doesn't crash on the Python side; it crashes inside the GNU readline C library, which is why this frame matters)
#1 0x7fdf36fa169e in readline_append_history_file_impl Modules/readline.c:365
#2 0x7fdf36fa192b in readline_append_history_file Modules/clinic/readline.c.h:154
#3 0x564386c5b367 in cfunction_vectorcall_FASTCALL Objects/methodobject.c:425
#4 0x564386b64981 in _PyObject_VectorcallTstate Include/internal/pycore_call.h:167
#5 0x564386b64adc in PyObject_Vectorcall Objects/call.c:327
#6 0x564386ec6fea in _PyEval_EvalFrameDefault Python/generated_cases.c.h:857
#7 0x564386f0b295 in _PyEval_EvalFrame Include/internal/pycore_ceval.h:119
#8 0x564386f0b295 in _PyEval_Vector Python/ceval.c:1823
#9 0x564386f0b4b6 in PyEval_EvalCode Python/ceval.c:621
#10 0x56438701b139 in run_eval_code_obj Python/pythonrun.c:1292
#11 0x56438701e07e in run_mod Python/pythonrun.c:1377
#12 0x56438701ee5e in pyrun_file Python/pythonrun.c:1210
#13 0x56438702133d in _PyRun_SimpleFileObject Python/pythonrun.c:459
#14 0x564387021831 in _PyRun_AnyFileObject Python/pythonrun.c:77
#15 0x5643870869dc in pymain_run_file_obj Modules/main.c:409
#16 0x564387089854 in pymain_run_file Modules/main.c:428
#17 0x56438708a465 in pymain_run_python Modules/main.c:696
#18 0x56438708a5f5 in Py_RunMain Modules/main.c:775
#19 0x56438708a7dc in pymain_main Modules/main.c:805
#20 0x56438708ab54 in Py_BytesMain Modules/main.c:829
#21 0x5643869c5b15 in main Programs/python.c:15
#22 0x7fdf3a238d8f in __libc_start_call_main ../sysdeps/nptl/libc_start_call_main.h:58
#23 0x7fdf3a238e3f in __libc_start_main_impl ../csu/libc-start.c:392
#24 0x5643869c5a44 in _start (/cpython_latest/python+0x28aa44)
AddressSanitizer can not provide additional info.
SUMMARY: AddressSanitizer: SEGV (/lib/x86_64-linux-gnu/libreadline.so.8+0x3dee0)
==10389==ABORTING
```
</details>
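Until the module validates its argument, a caller-side guard keeps libreadline from ever seeing a negative count; the wrapper name below is hypothetical, not CPython API:

```python
def append_history_file_checked(nelements, filename):
    # Hypothetical guard: reject counts that would underflow once the
    # libedit offset is subtracted inside readline's implementation.
    if nelements < 0:
        raise ValueError("nelements must be non-negative")
    import readline  # imported lazily so the guard works without readline
    readline.append_history_file(nelements, filename)

try:
    append_history_file_checked(-2147483648, "/tmp/history")
except ValueError as exc:
    print("rejected:", exc)
```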
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Linux
### Output from running 'python -VV' on the command line:
Python 3.14.0a0 (heads/main:bb09ba6792, Jul 27 2024, 09:44:43) [GCC 11.4.0]
<!-- gh-linked-prs -->
### Linked PRs
* gh-122469
* gh-127641
* gh-127642
<!-- /gh-linked-prs -->
| 208b0fb645c0e14b0826c0014e74a0b70c58c9d6 | 67b9a5331ae45aa126877d7f96a1e235600f9c4b |
python/cpython | python__cpython-122421 | # `test_intern` in `test_sys` leaks negative references in free-threaded build
# Bug report
The `test_intern` test "leaks" -2 references in the free-threaded build. This isn't caught by the refleak build bots because negative refleaks are treated as "suspicious" rather than "failing". This is a problem because the negative "leak" can hide a real leak somewhere else.
I think there's likely something wrong with the accounting for references in the `immortalize=1` case in `intern_common` because this shows up in the free-threaded build, but not the default build.
```
./python -m test test_sys -m test_intern -R 3:3
Using random seed: 3390398091
0:00:00 load avg: 14.98 Run 1 test sequentially
0:00:00 load avg: 14.98 [1/1] test_sys
beginning 6 repetitions. Showing number of leaks (. for 0 or less, X for 10 or more)
123:456
X8. ...
test_sys leaked [-2, -2, -2] references, sum=-6 (this is fine)
```
<!-- gh-linked-prs -->
### Linked PRs
* gh-122421
* gh-122430
<!-- /gh-linked-prs -->
| ac8da34621a574cd5773217404757a294025ba49 | 7797182b78baf78f64fe16f436aa2279cf6afc23 |
python/cpython | python__cpython-122418 | # Use per-thread reference count for heap type objects in the free-threaded build
# Feature or enhancement
### Overview
As described in [PEP 703](https://peps.python.org/pep-0703/#reference-counting-type-objects), heap type objects use a mix of reference counting techniques in the free-threaded build. They use deferred reference counting (see #117376) in combination with per-thread reference counting. Deferred reference counting alone is not sufficient to address the multi-threaded scaling bottlenecks with heap types because most references to heap types are from object instances, not references on the interpreter stack.
To address this, heap type reference counts are partially stored in a distributed manner in per-thread arrays. Every thread stores an array of local reference counts for each heap type object. A heap type's true reference count is the sum of:
* The PyObject reference count (i.e., `ob_ref_local` and `ob_ref_shared`)
* The deferred references on the stack (like with other objects that use deferred reference counting)
* The per-thread reference counts stored in each `_PyThreadStateImpl`
### Mechanics
Two new internal functions for reference counting types:
* `void _Py_INCREF_TYPE(PyTypeObject *type)`
* `void _Py_DECREF_TYPE(PyTypeObject *type)`
These are like `Py_INCREF` and `Py_DECREF`, but modify the per-thread reference counts when possible. If necessary, they fallback to `Py_INCREF/DECREF`. In the default build, these just call `Py_INCREF/DECREF`. Note that it's always *safe* to call `Py_INCREF` instead of `_Py_INCREF_TYPE`; it's just that `_Py_INCREF_TYPE` avoids scalability bottlenecks in some cases.
The free-threaded GC will scan the per-thread refcount arrays when computing an object's reference count in addition to scanning each thread's stack for deferred reference counting.
We'll also need some code to manage the lifetime and resizing of the per-thread refcount arrays.
### Internal Usages
[`_PyObject_Init`](https://github.com/python/cpython/blob/490e0ad83ac72c5688dfbbab4eac61ccfd7be5fd/Include/internal/pycore_object.h#L294) should call `_Py_INCREF_TYPE(typeobj)` instead of `Py_INCREF(typeobj)`. (The PEP says `PyType_GenericAlloc`, but `_PyObject_Init` is now the right place to do it.)
`subtype_dealloc` should call `_Py_DECREF_TYPE(type)` instead of `Py_DECREF(type)`. (Same as described in the PEP).
Additionally, with the conversion of some static types to heap types, we now use the following `tp_dealloc` pattern in more places:
```c
PyTypeObject *tp = Py_TYPE(self);
tp->tp_free(self);
Py_DECREF(tp);
```
We will eventually want to change internal usages to the following to avoid reference count contention on `tp`:
```c
PyTypeObject *tp = Py_TYPE(self);
tp->tp_free(self);
_Py_DECREF_TYPE(tp);
```
<!-- gh-linked-prs -->
### Linked PRs
* gh-122418
<!-- /gh-linked-prs -->
| dc093010672207176857a747c61da9c046ad9d3e | 1429651a06611a9dbcb1928b746faf52934c12e2 |
python/cpython | python__cpython-122401 | # Handle `ValueError` in `filecmp.dircmp` and `filecmp.cmpfiles`
# Bug report
### Bug description:
In `filecmp.cmpfiles`, when a path is not stat-able, it is put in the "errors" list. This should include paths that would raise `ValueError`.
Note that `filecmp.cmp` should *not* be protected against that since it already ignores an OSError possibly raised by `os.stat`.
Similarly, `filecmp.dircmp` should not suppress `OSError` or `ValueError` when listing the directory contents. It should, however, silence `ValueError` when checking the common files, since it already silences an `OSError`.
TL;DR: Only protect against `ValueError` if the current code is already protecting against `OSError`.
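A runnable illustration of the `cmpfiles` contract referenced above: paths that cannot be stat-ed end up in the third ("errors") list rather than raising:

```python
import filecmp
import tempfile

# A file absent from both directories cannot be stat-ed, so cmpfiles()
# reports it in the errors list instead of propagating the exception.
with tempfile.TemporaryDirectory() as a, tempfile.TemporaryDirectory() as b:
    match, mismatch, errors = filecmp.cmpfiles(a, b, ["missing.txt"])
    print(match, mismatch, errors)
```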
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-122401
* gh-122441
* gh-122442
<!-- /gh-linked-prs -->
| 3a9b2aae615165a40614db9aaa8b90c55ff0c7f9 | 3833d27f985a62c4709dcd9dc73724fc19d46ebf |
python/cpython | python__cpython-122407 | # Minor error in the doc for controller.name in the webbrowser documentation page
# Documentation
In https://docs.python.org/3/library/webbrowser.html#browser-controller-objects it says:
```
webbrowser.name
System-dependent name for the browser.
```
when it should be `controller` instead of `webbrowser` (it is the controller object that has the `name` attribute, not the webbrowser module).
In addition to that, I think that the first sentence,
```
Browser controllers provide these methods which parallel three of the module-level convenience functions:
```
would probably be better with something like:
```
Browser controllers provide the `name` attribute, and these methods which parallel three of the module-level convenience functions:
```
Thanks!
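The fix is easy to sanity-check: the module itself has no `name` attribute, while controller objects do (`get()` may raise `webbrowser.Error` on headless systems, hence the guard):

```python
import webbrowser

# `name` belongs to controller objects, not to the webbrowser module.
assert not hasattr(webbrowser, "name")

try:
    controller = webbrowser.get()
    print(controller.name)
except webbrowser.Error:
    print("no runnable browser available")
```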
<!-- gh-linked-prs -->
### Linked PRs
* gh-122407
* gh-122410
* gh-132874
<!-- /gh-linked-prs -->
| 1583f9cdd482f6ddc1535799c03e0a0b5ec29c61 | 210f027d02f2f2a6e953fc3a4c4ebd9dc77563a5 |
python/cpython | python__cpython-124975 | # IDLE: Path Browser vertical spacing is too small, causing the text to be cut off
### Bug description:
Noise: @terryjreedy
The vertical spacing is too small, causing the text to be cut off. I'm not sure if this issue exists on other systems as well. For now I may not create a PR to fix it, because I haven't found where the related code is or what the cause of the bug is. IDLE is a program with a long history; the folder icons here are very "retro"~~old~~, and the code is quite heavy.
<img width="297" alt="2024-07-29 193916" src="https://github.com/user-attachments/assets/0a335bcf-6572-452e-89b1-ec9fc9285c38">
### Operating systems tested on:
Windows or all
<!-- gh-linked-prs -->
### Linked PRs
* gh-124975
* gh-125061
* gh-125062
<!-- /gh-linked-prs -->
| c5df1cb7bde7e86f046196b0e34a0b90f8fc11de | 7ffe94fb242fd51bb07c7f0d31e94efeea3619d4 |
python/cpython | python__cpython-122942 | # Improve internal API for fetching original instruction, replacing `_Py_GetBaseOpcode`
The function `_Py_GetBaseOpcode` returns the original opcode for an instruction, even if that instruction has been instrumented.
However it doesn't work for `ENTER_EXECUTOR` and doesn't allow for the `op.arg` to have been changed.
We should add a new `_Py_GetBaseInstruction` which handles `ENTER_EXECUTOR` as well, and is efficient enough to use in the few places where it is used outside of an assert, such as in `is_resume` which is used in `gen_close`.
The new API will be either
```
_Py_CODEUNIT _Py_GetBaseInstruction(PyCodeObject *code, int i)
```
or
```
_Py_CODEUNIT _Py_GetBaseInstruction(PyCodeObject *code, _Py_CODEUNIT *)
```
depending on which is the most convenient for the current use cases.
<!-- gh-linked-prs -->
### Linked PRs
* gh-122942
<!-- /gh-linked-prs -->
| 7a65439b93d6ee4d4e32757b55909b882f9a2056 | fe23f8ed970425828de20fb48750fa89da914886 |
python/cpython | python__cpython-122385 | # Make Download page translatable
# Documentation
The Python docs' Download page (https://docs.python.org/3/download.html) is not currently available for translation, hence differentiating from other pages that received the translations effort of the language teams.
Strings should be marked for translation, so the file's strings are properly extracted into translation files.
To reproduce:
1. `make -C Doc venv gettext` # from local CPython checkout
2. search (unsuccessfully) in the .pot files generated in the 'Doc/build/gettext/' directory for the translation strings that should be extracted from the 'Doc/tools/templates/download.html' file (e.g. `grep -R 'These archives contain all the content in the documentation.' Doc/build/gettext`)
<!-- gh-linked-prs -->
### Linked PRs
* gh-122385
* gh-122553
* gh-122554
<!-- /gh-linked-prs -->
| 58ffc4cf4aa4cfb47f8768a3c3eaf1dd7a7c4584 | a9d56e38a08ec198a2289d8fff65444b39dd4a32 |
python/cpython | python__cpython-122362 | # `_PyUnicodeWriter*` is passed where `PyUnicodeWriter*` is expected
# Bug report
<img width="483" alt="Снимок экрана 2024-07-27 в 20 32 19" src="https://github.com/user-attachments/assets/a4c9cb14-fe0a-471c-9576-03c64dd50b21">
In this commit: https://github.com/python/cpython/commit/ae192262ad1cffb6ece9d16e67804386c382be0c
Why? Because `_Py_typing_type_repr` expects `PyUnicodeWriter*` not `_PyUnicodeWriter*`
<!-- gh-linked-prs -->
### Linked PRs
* gh-122362
<!-- /gh-linked-prs -->
| 04eb5c8db1e24cabd0cb81392bb2632c03be1550 | ae192262ad1cffb6ece9d16e67804386c382be0c |
python/cpython | python__cpython-122397 | # is_zipfile does not set the position to the file header
https://github.com/python/cpython/blob/4e7550934941050f54c86338cd5e40cd565ceaf2/Lib/zipfile/__init__.py#L243-L244
When *filename* is a BytesIO, is_zipfile doesn't set the position back to the file header.
This will affect other code.
This is my first issue; I'm not sure how best to report it, sorry.
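Until the fix is in, callers can save and restore the position around the check themselves; a defensive sketch:

```python
import io
import zipfile

buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("a.txt", "hello")

buf.seek(0)
pos = buf.tell()
is_zip = zipfile.is_zipfile(buf)
buf.seek(pos)  # manual restore; without it, later reads may start mid-stream
with zipfile.ZipFile(buf) as zf:
    assert zf.read("a.txt") == b"hello"
print(is_zip)
```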
<!-- gh-linked-prs -->
### Linked PRs
* gh-122397
<!-- /gh-linked-prs -->
| e0ef08f5b444950ad9e900b27f5b5dbc706f4459 | 3d8ac48aed6e27c43bf5d037018edcc31d9e66d8 |
python/cpython | python__cpython-122481 | # Importing ssl After Reinitializing Crashes
# Crash report
### What happened?
Using 3.13b4 and main (5592399313c963c110280a7c98de974889e1d353):
```
$ Programs/_testembed test_repeated_init_exec 'import ssl'
--- Loop #1 ---
--- Loop #2 ---
Segmentation fault (core dumped)
```
Using 3.12:
```
$ Programs/_testembed test_repeated_init_exec 'import ssl'
--- Loop #1 ---
--- Loop #2 ---
_testembed: Python/getargs.c:2052: parser_init: Assertion `parser->kwtuple != NULL' failed.
Aborted (core dumped)
```
The 3.12 output implies a problem related to argument clinic and the kwarg names tuple that gets cached.
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
_No response_
### Output from running 'python -VV' on the command line:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-122481
* gh-122495
* gh-122614
* gh-122630
* gh-122647
* gh-122648
<!-- /gh-linked-prs -->
| 9fc1c992d6fcea0b7558c581846eef6bdd811f6c | b5e6fb39a246bf7ee470d58632cdf588bb9d0298 |
python/cpython | python__cpython-122338 | # Eager task factory segfaults when hosting and calling an API from the same event loop
# Crash report
### What happened?
Love eager task factory, but it's crashing one of my tests. I'm using `uvicorn` and `fastapi` to host an API, and testing it with `httpx` running in the same event loop. This reliably produces a segfault.
It seems to only happen with streaming responses.
Here's a minimal code example:
```python
import asyncio
import httpx
import uvicorn
from fastapi import FastAPI
from starlette.responses import StreamingResponse
async def main():
loop = asyncio.get_running_loop()
loop.set_task_factory(asyncio.eager_task_factory)
app = FastAPI()
@app.get("/")
async def get():
async def _():
yield "1"
return StreamingResponse(
_(),
)
server = uvicorn.Server(
uvicorn.Config(
app,
host="0.0.0.0",
port=8080,
)
)
asyncio.create_task(server.serve())
client = httpx.AsyncClient()
await client.get("http://localhost:8080")
if __name__ == "__main__":
asyncio.run(main())
```
### CPython versions tested on:
3.12
### Operating systems tested on:
macOS
### Output from running 'python -VV' on the command line:
Python 3.12.3 | packaged by conda-forge | (main, Apr 15 2024, 18:35:20) [Clang 16.0.6 ]
<!-- gh-linked-prs -->
### Linked PRs
* gh-122338
* gh-122344
* gh-122345
<!-- /gh-linked-prs -->
| c08696286f52d286674f264eecf7b33a335a890b | 863a92f2bc708b9e3dfa9828bb8155b8d371e09c |
python/cpython | python__cpython-122640 | # symtable recursion depth updates seem inconsistent
The symtable ``VISIT`` macro decrements ``st->recursion_depth`` (and returns) in case of error.
For explicit returns, visitor functions should use the ``VISIT_QUIT`` macro (which also decrements the recursion depth).
Each function should either always decrement it, or never. However, I can see that this doesn't always hold. For example, the following function decrements only in error cases:
```c
static int
symtable_handle_namedexpr(struct symtable *st, expr_ty e)
{
if (st->st_cur->ste_comp_iter_expr > 0) {
/* Assignment isn't allowed in a comprehension iterable expression */
PyErr_Format(PyExc_SyntaxError, NAMED_EXPR_COMP_ITER_EXPR);
SET_ERROR_LOCATION(st->st_filename, LOCATION(e));
return 0;
}
if (st->st_cur->ste_comprehension) {
/* Inside a comprehension body, so find the right target scope */
if (!symtable_extend_namedexpr_scope(st, e->v.NamedExpr.target))
return 0;
}
VISIT(st, expr, e->v.NamedExpr.value);
VISIT(st, expr, e->v.NamedExpr.target);
return 1;
}
```
I think error handling in this file needs to be reviewed.
<!-- gh-linked-prs -->
### Linked PRs
* gh-122640
<!-- /gh-linked-prs -->
| efcd65cd84d5ebcc6cacb67971f235a726a205e7 | fe0a28d850943cf2ba132c9b0a933bb0c98ff0ae |
python/cpython | python__cpython-122312 | # Add more tests for pickle and fix error messages
# Bug report
Much of the error-generating code in the `pickle` module is not tested at all. As a result, not only can exception types and messages differ between the Python and C implementations, but other bugs are left unnoticed:
* NameError in formatting one of error messages.
* "Can't pickle" in the error message during unpickling.
* Error message including the repr of wrong object.
* Some errors are only detected in one of implementations (Python or C).
* Crash in untested corner case (#122306).
I am going to add more tests and fix the most odious errors. This will be backported to 3.12 and 3.13, but the backport to 3.12 will have more lenient variants of the tests. Later, I will unify and improve other error messages (this is for main only).
<!-- gh-linked-prs -->
### Linked PRs
* gh-122312
* gh-122314
* gh-122315
* gh-122373
* gh-122376
* gh-122377
* gh-122378
* gh-122386
* gh-122387
* gh-122388
* gh-122411
* gh-122415
* gh-122416
* gh-122771
<!-- /gh-linked-prs -->
| 7c2921844f9fa713f93152bf3a569812cee347a0 | dcafb362f7eab84710ad924cac1724bbf3b9c304 |
python/cpython | python__cpython-122308 | # [3.13.0b4] f-string with precision specifier has `format_spec` of `Constant`, not `JoinedStr`
# Bug report
### Bug description:
```py
Python 3.12.0 (v3.12.0:0fb18b02c8, Oct 2 2023, 09:45:56) [Clang 13.0.0 (clang-1300.0.29.30)] on darwin
>>> import ast
>>> ast.parse("f'{e:.3}'").body[0].value.values[0].format_spec
<ast.JoinedStr object at 0x1002aa050>
```
```py
Python 3.13.0b4 (v3.13.0b4:567c38b4eb, Jul 18 2024, 07:28:29) [Clang 15.0.0 (clang-1500.3.9.4)] on darwin
>>> import ast
>>> ast.parse("f'{e:.3}'").body[0].value.values[0].format_spec
<ast.Constant object at 0x104af45d0>
```
[Docs](https://docs.python.org/3/library/ast.html#ast.FormattedValue) say `FormattedValue.format_spec` is a `JoinedStr`.
Reported downstream in pylint-dev/astroid#2478
Not present in 3.13.0b3
### CPython versions tested on:
3.13
### Operating systems tested on:
Linux, macOS
<!-- gh-linked-prs -->
### Linked PRs
* gh-122308
* gh-122363
* gh-122364
<!-- /gh-linked-prs -->
| db2d8b6db1b56c2bd3802b86f9b76da33e8898d7 | 7c2921844f9fa713f93152bf3a569812cee347a0 |
python/cpython | python__cpython-123261 | # gc.DEBUG_STATS no longer print out anything
# Bug report
### Bug description:
The command `gc.set_debug(gc.DEBUG_STATS)` no longer prints out any statistics, even though `gc.set_debug(gc.DEBUG_COLLECTABLE)` shows that the `gc` is still being run.
(Probably caused by the incremental GC implementation, #116206 .)
For testing you can use the following code --- in previous Python versions the `collecting generation 0` text is printed, now it's not
```python
import gc
import sys
from time import sleep
gc.set_debug(gc.DEBUG_STATS | gc.DEBUG_COLLECTABLE)
for __ in range(3000):
a=[1]
a[0]=a
del a
print(__)
sleep(0.001)
sys.exit()
```
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-123261
* gh-123268
<!-- /gh-linked-prs -->
| 7cd3aa42f0cf72bf9a214e2630850879fe078377 | adc5190014efcf7b7a4c5dfc9998faa8345527ed |
python/cpython | python__cpython-122293 | # Split up ``Lib/test/test_ast.py``
# Feature or enhancement
### Proposal:
``Lib/test/test_ast.py`` currently contains the ``ast`` test cases as well as code-generated data such as ``exec_results``, ``eval_results`` and ``single_results``.
I propose to move the snippets which are used to generate these variables and the code which is generated into a separate file.
The ``test_ast`` directory would look like this:
* test_ast/test_ast.py - the previous version of test_ast.py but without the snippets and code-generation
* test_ast/test_cases.py - contains the snippets and code-generation
### Has this already been discussed elsewhere?
This is a minor feature, which does not need previous discussion elsewhere
### Links to previous discussion of this feature:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-122293
* gh-122393
* gh-122395
<!-- /gh-linked-prs -->
| 9187484dd97f6beb94fc17676014706922e380e1 | 0697188084bf61b28f258fbbe867e1010d679b3e |
python/cpython | python__cpython-122303 | # `intern_static` is not thread-safe with multiple interpreters
# Bug report
Most static strings are interned during Python initialization in [`_PyUnicode_InitStaticStrings`](https://github.com/python/cpython/blob/5f6001130f8ada871193377954cfcfee01ef93b6/Include/internal/pycore_unicodeobject_generated.h). However, the `_Py_LATIN1_CHR` characters (code points 0-255) are static, but not interned. They may be interned later while Python is running. This can happen for various reasons, including calls to `sys.intern`.
This isn't thread-safe: it modifies the hashtable `_PyRuntime.cached_objects.interned_strings`, which is shared across threads and interpreters, without any synchronization.
It can also break the interning identity invariant. You can have a non-static, interned 1-character string later shadowed by the global interning of the static 1-character string.
**Suggestions**
* The `_PyRuntime.cached_objects.interned_strings` should be immutable. We should not modify it after `Py_Initialize()` until shutdown (i.e., `_PyUnicode_ClearInterned` called from `finalize_interp_types()`)
* The 1-character latin1 strings should be interned. This can be done either by explicitly interning them during startup, or by handling 1-character strings specially in `intern_common`.
cc @encukou @ericsnowcurrently
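The invariant the suggestions protect can be stated as a one-line property, here checked over a few sampled latin-1 code points:

```python
import sys

# Interning identity invariant: interning equal strings must always yield
# the same object, including 1-character latin-1 strings (code points 0-255).
for cp in (0, 65, 0xE9, 0xFF):
    a = sys.intern(chr(cp))
    b = sys.intern(chr(cp))
    assert a is b
print("invariant holds for sampled code points")
```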
<!-- gh-linked-prs -->
### Linked PRs
* gh-122303
* gh-122347
<!-- /gh-linked-prs -->
| bb09ba679223666e01f8da780f97888a29d07131 | c08696286f52d286674f264eecf7b33a335a890b |
python/cpython | python__cpython-122289 | # Improve performances of `fnmatch.translate`
# Feature or enhancement
### Proposal:
I implemented `fnmatch.translate` and `fnmatch.filter` in C in #121446, but the performance gains for `fnmatch.filter` are less pronounced than those for `fnmatch.translate`. While I believe that the factor 2x improvement brought by the C implementation is worthwhile for `fnmatch.translate`, it comes at a high maintenance cost. Instead, I translated (no pun intended) the implementation into Python and obtained the following timings (PGO build):
```
+------------------------------------------+-----------------------+-----------------------+
| Benchmark | fnmatch-translate-ref | fnmatch-translate-py |
+==========================================+=======================+=======================+
| abc/[!]b-ac-z9-1]/def/\*?/*/**c/?*[][!] | 6.09 us | 3.99 us: 1.53x faster |
+------------------------------------------+-----------------------+-----------------------+
| !abc/[!]b-ac-z9-1]/def/\*?/*/**c/?*[][!] | 6.39 us | 4.07 us: 1.57x faster |
+------------------------------------------+-----------------------+-----------------------+
| a**?**cd**?**??k*** | 2.24 us | 1.51 us: 1.49x faster |
+------------------------------------------+-----------------------+-----------------------+
| a/**/b/**/c | 1.97 us | 1.12 us: 1.76x faster |
+------------------------------------------+-----------------------+-----------------------+
| man/man1/bash.1 | 3.00 us | 1.21 us: 2.48x faster |
+------------------------------------------+-----------------------+-----------------------+
| a*b*c*d*e*f*g*h*i*j*k*l*m*n*o** | 5.40 us | 3.33 us: 1.62x faster |
+------------------------------------------+-----------------------+-----------------------+
| Geometric mean | (ref) | 1.71x faster |
+------------------------------------------+-----------------------+-----------------------+
```
It's not as good as the benchmarks reported in https://github.com/python/cpython/issues/121445, but I think this could be considered. Note that the improvements are brought by a new (but equivalent) algorithm during the translation phase. Currently, the translation algorithm:
- Does not compress consecutive '*' into one within the same loop iteration.
- Uses a sentinel to indicate a wildcard instead.
- Appends `re.escape(c)` to a list for every non-special character (i.e., not for groups beginning with `?*[`).
- In a second phase, merges the groups delimited by the sentinels to produce the final regex.
Our implementation (Barney's and mine) instead does the following:
- Caches `re.escape` so that the lookup is faster.
- Remembers the positions of '*' in the split parts instead of using a sentinel object.
That way, I can easily merge parts during the second phase (technically, it'd be possible to do it in one pass, but two passes make it clearer).
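To make the two-pass idea concrete, here is a simplified sketch (an illustration only: it omits `[...]` character classes, which the real `fnmatch.translate` handles):

```python
import re

def translate_sketch(pat):
    # Pass 1: split the pattern into escaped literal fragments while
    # remembering *positions* of '*' runs instead of using a sentinel.
    parts = []          # escaped fragments ('.' stands for '?')
    star_after = []     # indices in `parts` where a '*' run occurred
    for c in pat:
        if c == '*':
            # collapse consecutive '*' into a single wildcard marker
            if not star_after or star_after[-1] != len(parts):
                star_after.append(len(parts))
        elif c == '?':
            parts.append('.')
        else:
            parts.append(re.escape(c))
    # Pass 2: merge fragments, inserting '.*' at the recorded positions.
    out = []
    for idx, frag in enumerate(parts):
        if star_after and star_after[0] == idx:
            out.append('.*')
            star_after.pop(0)
        out.append(frag)
    if star_after:      # trailing '*' run
        out.append('.*')
    return r'(?s:%s)\Z' % ''.join(out)
```

For example, `translate_sketch("a*b")` produces `(?s:a.*b)\Z`, which matches `axxb` but not `axc`.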
### Has this already been discussed elsewhere?
No response given
### Links to previous discussion of this feature:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-122289
<!-- /gh-linked-prs -->
| 78cb377c622a98b1bf58df40c828e886575a6927 | 14a05a8f433197c40293028f01797da444fb8409 |
python/cpython | python__cpython-122436 | # Inconsistent `datetime.*.strftime()` year padding behavior across `%y`, `%Y`, `%F` and `%C`
# Bug report
### Bug description:
#120713 changed the way `%y` and `%Y` format works with `strftime()` to pad "short" years with leading zeros. However, the `%F` and `%C` formats were left alone, creating inconsistency:
```python
>>> from datetime import date
>>> date(909, 9, 9).strftime("%Y")
'0909'
>>> date(909, 9, 9).strftime("%y")
'09'
>>> date(909, 9, 9).strftime("%C")
'9'
>>> date(909, 9, 9).strftime("%F")
'909-09-09'
```
While I don't have a strong opinion about `%C`, the change to `%Y` now means that `%Y-%m-%d` no longer matches `%F`.
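Until `%F` and `%C` are made consistent with `%Y`, a portable workaround is to format the fields manually (a sketch of the workaround, not the proposed fix):

```python
from datetime import date

def iso_padded(d):
    # Always-zero-padded ISO date, independent of the platform's
    # %F / %C behavior for years below 1000.
    return f"{d.year:04d}-{d.month:02d}-{d.day:02d}"

assert iso_padded(date(909, 9, 9)) == "0909-09-09"
```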
Notably, this change broke the logic in Django that assumed that if `%Y` returns a zero-padded year number, no padding needs to be done for all formats:
https://github.com/django/django/blob/0e94f292cda632153f2b3d9a9037eb0141ae9c2e/django/utils/formats.py#L266-L273
I will report a bug there as well, but I think it would be beneficial to preserve consistency here.
Django bug: https://code.djangoproject.com/ticket/35630
CC @blhsing @martinvuyk
### CPython versions tested on:
3.13, CPython main branch
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-122436
<!-- /gh-linked-prs -->
| 126910edba812a01794f307b0cfa2a7f02bda190 | 7cd3aa42f0cf72bf9a214e2630850879fe078377 |
python/cpython | python__cpython-122271 | # Typos in the Py_DEBUG macro name
# Bug report
`Parser/pegen.c` and `Tools/peg_generator/peg_extension/peg_extension.c` contain checks for wrong macro:
```c
#if defined(PY_DEBUG)
```
They are always false, because the correct name is `Py_DEBUG`, not `PY_DEBUG`. Therefore, the code that was supposed to run in the debug build is never compiled.
cc @pablogsal
<!-- gh-linked-prs -->
### Linked PRs
* gh-122271
* gh-122275
* gh-122276
<!-- /gh-linked-prs -->
| 6c09b8de5c67406113e8d082e05c9587e35a852a | dc07f65a53baf60d9857186294d3d7ba92d5606d |
python/cpython | python__cpython-122251 | # Instrumentation assertion failure when calling the registered handler explicitly
# Bug report
### Bug description:
On my mac M1 Sonoma (14.5), Python 3.13.0b4, I see this crash when registering an opcode monitor, calling the handler explicitly while it's enabled, and then restarting events. To reproduce:
```python
import sys
def mymonitor(code, instruction_offset):
pass
tool_id = 4
sys.monitoring.use_tool_id(tool_id, "mytool")
sys.monitoring.register_callback(
tool_id,
sys.monitoring.events.INSTRUCTION,
mymonitor,
)
sys.monitoring.set_events(tool_id, sys.monitoring.events.INSTRUCTION)
mymonitor(None, 0) # call the *same* handler while it is registered
sys.monitoring.restart_events()
```
Triggers an assertion failure (my minimal debugging suggests it's about some inconsistency with the "instrumentation_version"):
```
$ 3.13.0b4-debug/bin/python -X dev repro.py
Assertion failed: (debug_check_sanity(tstate->interp, code)), function _Py_call_instrumentation_instruction, file instrumentation.c, line 1347.
Fatal Python error: Aborted
Current thread 0x00000002064b0c00 (most recent call first):
File "/Users/phillipschanely/proj/CrossHair/repro.py", line 4 in mymonitor
File "/Users/phillipschanely/proj/CrossHair/repro.py", line 16 in <module>
zsh: abort /Users/phillipschanely/.pyenv/versions/3.13.0b4-debug/bin/python -X dev
```
Thanks for investigating!
### CPython versions tested on:
3.13
### Operating systems tested on:
macOS
<!-- gh-linked-prs -->
### Linked PRs
* gh-122251
* gh-122812
<!-- /gh-linked-prs -->
| 57d7c3e78fb635a0c6ccce38ec3e2f4284d5fac7 | e006c7371d8e57db26254792c67292956e88d81d |
python/cpython | python__cpython-122246 | # Move detection of writes and shadowing of __debug__ from compiler to symtable.
There are checks in the compiler for writes and shadowing of ``__debug__``. This can be done earlier, in the symtable, where it is somewhat simpler.
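The check being moved is observable from Python code: a write to ``__debug__`` is rejected before execution, at compile time.

```python
# Writes to __debug__ fail at compile time, not at run time.
try:
    compile("__debug__ = 1", "<demo>", "exec")
    message = "no error"
except SyntaxError as exc:
    message = str(exc)
assert "__debug__" in message
```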
<!-- gh-linked-prs -->
### Linked PRs
* gh-122246
* gh-122322
<!-- /gh-linked-prs -->
| bc94cf7e254e43318223553a7959115573c679a5 | 2c42e13e80610a9dedcb15b57d142602e8143481 |
python/cpython | python__cpython-122244 | # Improve error message for unbalanced unpacking
# Feature or enhancement
### Proposal:
This is great:
```pytb
>>> x, y, z = 1, 2
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ValueError: not enough values to unpack (expected 3, got 2)
```
But here, it doesn't seem to tell how many values were found:
```pytb
>>> x, y, z = 1, 2, 3, 4
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ValueError: too many values to unpack (expected 3)
```
I was hoping to see `(expected 3, got 4)` here. The eval loop also seems to have access to an entire `PyList` object at this point, so getting its length for the error message should not cause any changes in behaviour.
### Has this already been discussed elsewhere?
This is a minor feature, which does not need previous discussion elsewhere
### Links to previous discussion of this feature:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-122244
<!-- /gh-linked-prs -->
| 3597642ed57d184511ca2dbd1a382ffe8e280ac4 | 07f0bf5aa4ca34e692c16e14129d79c161ee206f |
python/cpython | python__cpython-122236 | # Accuracy issues of sum() specialization for floats/complexes
# Bug report
### Bug description:
Unfortunately, #121176 was merged with a bug:
https://github.com/python/cpython/blob/e9681211b9ad11d1c1f471c43bc57cac46814779/Python/bltinmodule.c#L2749-L2755
L2751 lacks cs_add(). Sorry for that. Reproducer: ``sum([2j, 1., 10E100, 1., -10E100])`` (should be ``2+2j``). I'll provide a patch.
But maybe cases for integer arguments also should use compensated summation? E.g.:
https://github.com/python/cpython/blob/e9681211b9ad11d1c1f471c43bc57cac46814779/Python/bltinmodule.c#L2689-L2698
on L2694 (and use ``PyLong_AsDouble()``). An example:
```pycon
>>> sum([1.0, 10E100, 1.0, -10E100])
2.0
>>> sum([1.0, 10**100, 1.0, -10**100]) # huh?
0.0
```
I would guess that integer values in this case are treated as exact and are allowed to smash the floating-point result to garbage. But... this looks like a bug to me. ``fsum()`` also chooses ``2.0``:
```pycon
>>> math.fsum([1.0, 10**100, 1.0, -10**100])
2.0
```
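For reference, a pure-Python sketch of Neumaier-style compensated summation (the scheme behind the `cs_add()` calls above); converting integers via their float value, as suggested, gives the expected `2.0` in both cases:

```python
def csum(values):
    # Neumaier compensated summation: accumulate the low-order bits
    # lost in each addition in a separate compensation term.
    total = 0.0
    comp = 0.0
    for x in values:
        x = float(x)                 # treat ints via their float value
        t = total + x
        if abs(total) >= abs(x):
            comp += (total - t) + x  # low bits of x were lost
        else:
            comp += (x - t) + total  # low bits of total were lost
        total = t
    return total + comp

assert csum([1.0, 10E100, 1.0, -10E100]) == 2.0
assert csum([1.0, 10**100, 1.0, -10**100]) == 2.0
```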
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-122236
* gh-122406
<!-- /gh-linked-prs -->
| 169e7138ab84db465b6bf28e6c1dc6c39dbf89f4 | bc93923a2dee00751e44da58b6967c63e3f5c392 |
python/cpython | python__cpython-122230 | # Missing decref in error handling of `func_get_annotation_dict`
# Bug report
Here: https://github.com/python/cpython/blob/af4329e7b1a25d58bb92f79480f5059c3683517b/Objects/funcobject.c#L537-L551
- We allocate `ann_dict`
- If `PyDict_SetItem` fails, we return `NULL`
- We don't deallocate the dict
I will send a PR.
<!-- gh-linked-prs -->
### Linked PRs
* gh-122230
<!-- /gh-linked-prs -->
| e9681211b9ad11d1c1f471c43bc57cac46814779 | af4329e7b1a25d58bb92f79480f5059c3683517b |
python/cpython | python__cpython-122214 | # Add details for pickle serialization errors
# Feature or enhancement
When pickling a complex object, or a graph of objects, it is difficult to locate the source of the error. At best you get the type of the unpicklable object at the bottom level, but you cannot tell which part of which object or data structure it belongs to.
The proposed PR adds notes to the raised exception which help identify the source of the error. For example:
```pycon
>>> import pickle
>>> pickle.dumps([{'a': 1, 'b': 2}, {'a': 3, 'b': pickle}])
Traceback (most recent call last):
File "<python-input-1>", line 1, in <module>
pickle.dumps([{'a': 1, 'b': 2}, {'a': 3, 'b': pickle}])
~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: cannot pickle 'module' object
when serializing dict item 'b'
when serializing list item 1
```
```pycon
>>> class A: pass
...
>>> a = A()
>>> a.x = pickle
>>> pickle.dumps(a)
Traceback (most recent call last):
File "<python-input-5>", line 1, in <module>
pickle.dumps(a)
~~~~~~~~~~~~^^^
TypeError: cannot pickle 'module' object
when serializing dict item 'x'
when serializing A state
```
See also similar issue #122163 for JSON.
<!-- gh-linked-prs -->
### Linked PRs
* gh-122214
<!-- /gh-linked-prs -->
| c0c2aa7644ebd4953682784dbb9904fe955ff647 | 4a6b1f179667e2a8c6131718eb78a15f726e047b |
python/cpython | python__cpython-122207 | # Dictionary watchers deliver `added` event before it's guaranteed to be successful leading to possible inconsistent state
# Bug report
### Bug description:
Currently dictionary watchers deliver the `PyDict_EVENT_ADDED` event before they have done everything that is necessary to ensure success. If the dictionary requires a resize for insertion to succeed (or a new keys object needs to be allocated from an empty dict) and that fails, then the dictionary watcher will have state inconsistent with what's stored in the dictionary. Watchers should only receive the event once the dictionary implementation can guarantee that the insertion will succeed.
The event can still be delivered before the insertion happens meaning there's no visible change in behavior except for the OOM case.
### CPython versions tested on:
3.12, CPython main branch
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-122207
* gh-122326
* gh-122327
<!-- /gh-linked-prs -->
| 5592399313c963c110280a7c98de974889e1d353 | 9ac606080a0074cdf7589d9b7c9413a73e0ddf37 |
python/cpython | python__cpython-122204 | # Race condition in `make_pending_calls` in free-threaded build
# Bug report
`make_pending_calls` uses a mutex and the `handling_thread` field to ensure that only one thread per interpreter is handling pending calls at a time:
https://github.com/python/cpython/blob/41a91bd67f86c922f350894a797738038536e1c5/Python/ceval_gil.c#L911-L928
However, the clearing of `handling_thread` is done outside of the mutex:
https://github.com/python/cpython/blob/41a91bd67f86c922f350894a797738038536e1c5/Python/ceval_gil.c#L959-L960
There are two problems with this (for the free-threaded build):
* It's a data race because there's a read of `handling_thread` (in the mutex) concurrently with a write (outside the mutex)
* The logic that sets `_PY_CALLS_TO_DO_BIT` on `pending->handling_thread` is subject to a time-of-check to time-of-use hazard: `pending->handling_thread` may be non-NULL when evaluating the if-statement, but then cleared before setting the eval breaker bit.
## Relevant unit test
TSan catches this race when running `test_subthreads_can_handle_pending_calls` from `test_capi.test_misc`.
## Suggested Fix
We should set `pending->handling_thread = NULL;` while holding `pending->mutex`, at least in the free-threaded build
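The suggested fix, sketched generically in Python (the names mirror the C fields, but this is not CPython code): the marker is both read and cleared under the same mutex, eliminating the unsynchronized read/write pair.

```python
import threading

class PendingCalls:
    def __init__(self):
        self.mutex = threading.Lock()
        self.handling_thread = None

    def try_begin(self):
        # Read the marker under the mutex.
        with self.mutex:
            if self.handling_thread is not None:
                return False        # another thread is handling calls
            self.handling_thread = threading.current_thread()
            return True

    def end(self):
        # Clear the marker while holding the same mutex, not outside it.
        with self.mutex:
            self.handling_thread = None
```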
<!-- gh-linked-prs -->
### Linked PRs
* gh-122204
* gh-122319
<!-- /gh-linked-prs -->
| c557ae97d6bd9d04164a19b4fe136610e54dbdd8 | 64857d849f3079a73367525ce93fd7a463b83908 |
python/cpython | python__cpython-122222 | # `test_warnings` fails if run with `-Werror`
# Bug report
### Bug description:
`./python.exe -Werror -m test test_warnings` fails, which feels sort-of ironic:
```pytb
~/dev/cpython (main)⚡ % ./python.exe -We -m test test_warnings
Using random seed: 673638140
Raised RLIMIT_NOFILE: 256 -> 1024
0:00:00 load avg: 2.83 Run 1 test sequentially in a single process
0:00:00 load avg: 2.83 [1/1] test_warnings
test test_warnings failed -- Traceback (most recent call last):
File "/Users/alexw/dev/cpython/Lib/test/test_warnings/__init__.py", line 904, in test_issue31285
wmod.warn_explicit(
~~~~~~~~~~~~~~~~~~^
'foo', UserWarning, 'bar', 1,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
module_globals={'__loader__': get_bad_loader(42),
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
'__name__': 'foobar'})
^^^^^^^^^^^^^^^^^^^^^^
File "<frozen importlib._bootstrap_external>", line 934, in _bless_my_loader
DeprecationWarning: Module globals is missing a __spec__.loader
test_warnings failed (1 error)
== Tests result: FAILURE ==
1 test failed:
test_warnings
Total duration: 482 ms
Total tests: run=146
Total test files: run=1/1 failed=1
Result: FAILURE
```
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
macOS
<!-- gh-linked-prs -->
### Linked PRs
* gh-122222
* gh-122256
* gh-122257
<!-- /gh-linked-prs -->
| 9b4fe9b718f27352ba0c1cf1184f5b90d77d7df4 | 5592399313c963c110280a7c98de974889e1d353 |
python/cpython | python__cpython-122243 | # Move opcode magic number out of `Lib/importlib/_bootstrap_external.py`
# Feature or enhancement
### Proposal:
The number changes way more frequently than it used to, and currently triggers a review request for CODEOWNERS on `Lib/importlib`. Maybe it's time to put it in its own file somewhere else?
### Has this already been discussed elsewhere?
This is a minor feature, which does not need previous discussion elsewhere
### Links to previous discussion of this feature:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-122243
* gh-122503
<!-- /gh-linked-prs -->
| af0a00f022d0fb8f1edb4abdda1bc6b915f0448d | 2b163aa9e796b312bb0549d49145d26e4904768e |
python/cpython | python__cpython-122189 | # Avoid TSan reported race in `run_udp_echo_server`
The `test_asyncio.test_sock_lowlevel.py` test uses a UDP echo server:
https://github.com/python/cpython/blob/a15feded71dd47202db169613effdafc468a8cf3/Lib/test/test_asyncio/utils.py#L288-L310
Thread sanitizer complains about the `sock.sendto(b'STOP', sock.getsockname())` line in the main thread happening concurrently with the `sock.close()` in the `echo_datagrams` thread.
This seems a bit bogus to me: the `sendto` has to start before the `close` starts because it triggers the `echo_datagrams` shutdown, but it's easy enough to avoid the data race. I also think it's better in this case to do a small code change to the test, instead of adding or keeping a global suppression.
<!-- gh-linked-prs -->
### Linked PRs
* gh-122189
* gh-122263
<!-- /gh-linked-prs -->
| 2f74b709b637cad7a9c18a2d90b0747823f2ff51 | bb108580dec5d8655ccdfb6c8737b5f64e3366d0 |
python/cpython | python__cpython-122183 | # `hashlib.file_digest()` can't handle non-blocking I/O
# Bug report
### Bug description:
This came up in python/typeshed#12414.
The current implementation of `file_digest()` does not check the return value of `fileobj.readinto()` for `None`:
https://github.com/python/cpython/blob/2a5d1eb7073179a13159bce937afdbe240432e7d/Lib/hashlib.py#L232-L236
While buffered file objects can't return `None`, unbuffered ones can when they are doing non-blocking I/O. Specifically, `file_digest()` [is documented](https://docs.python.org/3/library/hashlib.html#hashlib.file_digest) to take `SocketIO` objects, which can very much return `None`:
https://github.com/python/cpython/blob/2a5d1eb7073179a13159bce937afdbe240432e7d/Lib/socket.py#L694-L714
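A sketch of a `readinto()` loop that handles the `None` case (a busy retry here for brevity; a real fix would likely wait for readiness instead):

```python
def digest_fileobj(fileobj, hasher, bufsize=2**18):
    # readinto() may return None for non-blocking objects, meaning
    # "no data available right now" rather than EOF (which is 0).
    buf = bytearray(bufsize)
    view = memoryview(buf)
    while True:
        size = fileobj.readinto(buf)
        if size is None:
            continue                # no data yet; retry
        if size == 0:
            break                   # EOF
        hasher.update(view[:size])
    return hasher
```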
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Other
<!-- gh-linked-prs -->
### Linked PRs
* gh-122183
* gh-132787
<!-- /gh-linked-prs -->
| 2b47f46d7dc30d27b2486991fea4acd83553294b | fa70bf85931eff62cb24fb2f5b7e86c1dcf642d0 |
python/cpython | python__cpython-122165 | # Add details for JSON serialization errors
# Feature or enhancement
When a JSON-unserializable object occurs deep in a large structure, it is difficult to find the culprit, because the error message by default only contains the type of the unserializable object. This is a pretty common error; for example, you can forget to convert a datetime object to a timestamp or string.
The proposed PR adds notes to the raised exception which help identify the source of the error. For example:
```pycon
>>> import json
>>> json.dumps([{'a': 1, 'b': 2}, {'a': 3, 'b': ...}])
Traceback (most recent call last):
File "<python-input-16>", line 1, in <module>
json.dumps([{'a': 1, 'b': 2}, {'a': 3, 'b': ...}])
~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/serhiy/py/cpython/Lib/json/__init__.py", line 231, in dumps
return _default_encoder.encode(obj)
~~~~~~~~~~~~~~~~~~~~~~~^^^^^
File "/home/serhiy/py/cpython/Lib/json/encoder.py", line 200, in encode
chunks = self.iterencode(o, _one_shot=True)
File "/home/serhiy/py/cpython/Lib/json/encoder.py", line 261, in iterencode
return _iterencode(o, 0)
File "/home/serhiy/py/cpython/Lib/json/encoder.py", line 180, in default
raise TypeError(f'Object of type {o.__class__.__name__} '
f'is not JSON serializable')
TypeError: Object of type ellipsis is not JSON serializable
when serializing dict item 'b'
when serializing list item 1
```
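For the common datetime case mentioned above, the usual workaround today is a `default=` hook (shown as a sketch; the function name is illustrative):

```python
import json
from datetime import datetime

def to_jsonable(o):
    # Convert known-unserializable types up front; re-raise for the rest.
    if isinstance(o, datetime):
        return o.isoformat()
    raise TypeError(f"Object of type {type(o).__name__} "
                    f"is not JSON serializable")

s = json.dumps({"when": datetime(2024, 7, 22, 16, 5)}, default=to_jsonable)
```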
<!-- gh-linked-prs -->
### Linked PRs
* gh-122165
<!-- /gh-linked-prs -->
| e6b25e9a09dbe09839b36f97b9174a30b1db2dbf | c908d1f87d287a4b3ec58c85b692a7eb617fa6ea |
python/cpython | python__cpython-122164 | # Remove `BUILD_CONST_KEY_MAP` opcode
According to our stats the `BUILD_CONST_KEY_MAP` represents fewer than 1 in 20_000 instructions executed.
Presumably it is more common in startup code, but it has no real value there either.
For run-once code, turning the keys into a constant merely moves the cost of building the keys from the interpreter to unmarshalling.
<!-- gh-linked-prs -->
### Linked PRs
* gh-122164
<!-- /gh-linked-prs -->
| 2e14a52cced9834ed5f7e0665a08055de554360f | 9bb2e4623f504c44655436eae181d802f544fff9 |
python/cpython | python__cpython-122158 | # Cases generator gets confused about the number of values to pop in case of errors in some macros
# Bug report
### Bug description:
This input:
```
op(FIRST, (x, y -- a, b)) {
a = x;
b = y;
}
op(SECOND, (a, b -- a, b)) {
}
op(THIRD, (j, k --)) {
ERROR_IF(cond, error);
}
macro(TEST) = FIRST + SECOND + THIRD;
"""
```
Generates:
```
...
if (cond) goto pop_4_error;
...
```
This should be `if (cond) goto pop_2_error;`
I noticed this error when working on a fix to https://github.com/python/cpython/issues/122029, so this isn't just a theoretical bug.
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-122158
* gh-122174
* gh-122286
<!-- /gh-linked-prs -->
| 624bda76386efd8eecf73c4ad06f997b9b25f07f | 498cb6dff10f97fa3d348a4c0ad9374d14af3312 |
python/cpython | python__cpython-122161 | # Exception raised in traceback.StackSummary._should_show_carets exits interpreter: IndexError on tree.body[0]
# Bug report
### Bug description:
Running the code sample from #122071 in 3.13.0b4 or main exits the interpreter due to `traceback.StackSummary._should_show_carets` raising an exception:
```python
>>> exec(compile("tuple()[0]", "s", "exec"))
Traceback (most recent call last):
Exception ignored in the internal traceback machinery:
Traceback (most recent call last):
File "~\PycharmProjects\cpython\Lib\traceback.py", line 139, in _print_exception_bltin
return print_exception(exc, limit=BUILTIN_EXCEPTION_LIMIT, file=file, colorize=colorize)
File "~\PycharmProjects\cpython\Lib\traceback.py", line 130, in print_exception
te.print(file=file, chain=chain, colorize=colorize)
File "~\PycharmProjects\cpython\Lib\traceback.py", line 1448, in print
for line in self.format(chain=chain, colorize=colorize):
File "~\PycharmProjects\cpython\Lib\traceback.py", line 1384, in format
yield from _ctx.emit(exc.stack.format(colorize=colorize))
File "~\PycharmProjects\cpython\Lib\traceback.py", line 747, in format
formatted_frame = self.format_frame_summary(frame_summary, colorize=colorize)
File "~\PycharmProjects\cpython\Lib\traceback.py", line 583, in format_frame_summary
show_carets = self._should_show_carets(start_offset, end_offset, all_lines, anchors)
File "~\PycharmProjects\cpython\Lib\traceback.py", line 701, in _should_show_carets
statement = tree.body[0]
IndexError: list index out of range
Traceback (most recent call last):
File "~\PycharmProjects\cpython\Lib\code.py", line 91, in runcode
exec(code, self.locals)
File "<python-input-2>", line 1, in <module>
File "s", line 1, in <module>
IndexError: tuple index out of range
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "~\PycharmProjects\cpython\Lib\runpy.py", line 198, in _run_module_as_main
return _run_code(code, main_globals, None,
File "~\PycharmProjects\cpython\Lib\runpy.py", line 88, in _run_code
exec(code, run_globals)
File "~\PycharmProjects\cpython\Lib\_pyrepl\__main__.py", line 6, in <module>
__pyrepl_interactive_console()
File "~\PycharmProjects\cpython\Lib\_pyrepl\main.py", line 59, in interactive_console
run_multiline_interactive_console(console)
File "~\PycharmProjects\cpython\Lib\_pyrepl\simple_interact.py", line 156, in run_multiline_interactive_console
more = console.push(_strip_final_indent(statement), filename=input_name, _symbol="single") # type: ignore[call-arg]
File "~\PycharmProjects\cpython\Lib\code.py", line 303, in push
more = self.runsource(source, filename, symbol=_symbol)
File "~\PycharmProjects\cpython\Lib\_pyrepl\console.py", line 200, in runsource
self.runcode(code)
File "~\PycharmProjects\cpython\Lib\code.py", line 95, in runcode
self.showtraceback()
File "~\PycharmProjects\cpython\Lib\_pyrepl\console.py", line 168, in showtraceback
super().showtraceback(colorize=self.can_colorize)
File "~\PycharmProjects\cpython\Lib\code.py", line 147, in showtraceback
lines = traceback.format_exception(ei[0], ei[1], last_tb.tb_next, colorize=colorize)
File "~\PycharmProjects\cpython\Lib\traceback.py", line 155, in format_exception
return list(te.format(chain=chain, colorize=colorize))
File "~\PycharmProjects\cpython\Lib\traceback.py", line 1384, in format
yield from _ctx.emit(exc.stack.format(colorize=colorize))
File "~\PycharmProjects\cpython\Lib\traceback.py", line 747, in format
formatted_frame = self.format_frame_summary(frame_summary, colorize=colorize)
File "~\PycharmProjects\cpython\Lib\traceback.py", line 583, in format_frame_summary
show_carets = self._should_show_carets(start_offset, end_offset, all_lines, anchors)
File "~\PycharmProjects\cpython\Lib\traceback.py", line 701, in _should_show_carets
statement = tree.body[0]
IndexError: list index out of range
[Thread 31424.0x8ec0 exited with code 1]
[Thread 31424.0x15dc exited with code 1]
[Thread 31424.0x8b8c exited with code 1]
[Inferior 1 (process 31424) exited with code 01]
```
We can either protect the `_should_show_carets` call with `suppress(Exception)` here:
https://github.com/python/cpython/blob/2762c6cc5e4c1c0d630568db5fbba7a3a71a507c/Lib/traceback.py#L581-L583
Or guard against `tree.body` being empty here:
https://github.com/python/cpython/blob/2762c6cc5e4c1c0d630568db5fbba7a3a71a507c/Lib/traceback.py#L700-L701
(or both?)
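The second option could look roughly like this (a sketch of the guard, not the exact patch): an exec-mode parse of the recovered source can legitimately produce an empty body, in which case there is no `tree.body[0]` to inspect for carets.

```python
import ast

def has_statement(source):
    # Comment-only or blank source parses to a Module with an empty body.
    try:
        tree = ast.parse(source)
    except SyntaxError:
        return False
    return bool(tree.body)
```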
Should I submit a PR with one of the fixes above? Any preferences?
### CPython versions tested on:
3.13, CPython main branch
### Operating systems tested on:
Windows, Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-122161
* gh-124214
<!-- /gh-linked-prs -->
| 5cd50cb6eb28e525f0c838e049e900ea982a5a23 | 8b6c7c7877c26f0201f37f69d4db2f35d7abd760 |
python/cpython | python__cpython-123423 | # test.test_asyncio.test_server.TestServer2.test_abort_clients consistently fails on Linux 6.10.x
# Bug report
### Bug description:
Hello, we run the testsuite of the optimized and debug builds of Python in Fedora CI. Since the addition in https://github.com/python/cpython/commit/415964417771946dcb7a163951913adf84644b6d the test has consistently failed like this on Fedora Rawhide / Fedora Linux 41 (the development version). It passes on Fedora 40 and 39.
```pytb
======================================================================
FAIL: test_abort_clients (test.test_asyncio.test_server.TestServer2.test_abort_clients)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/usr/lib64/python3.13/unittest/async_case.py", line 93, in _callTestMethod
if self._callMaybeAsync(method) is not None:
~~~~~~~~~~~~~~~~~~~~^^^^^^^^
File "/usr/lib64/python3.13/unittest/async_case.py", line 115, in _callMaybeAsync
return self._asyncioRunner.run(
~~~~~~~~~~~~~~~~~~~~~~~^
func(*args, **kwargs),
^^^^^^^^^^^^^^^^^^^^^^
context=self._asyncioTestContext,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "/usr/lib64/python3.13/asyncio/runners.py", line 118, in run
return self._loop.run_until_complete(task)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^
File "/usr/lib64/python3.13/asyncio/base_events.py", line 721, in run_until_complete
return future.result()
~~~~~~~~~~~~~^^
File "/usr/lib64/python3.13/test/test_asyncio/test_server.py", line 249, in test_abort_clients
self.assertNotEqual(s_wr.transport.get_write_buffer_size(), 0)
~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AssertionError: 0 == 0
----------------------------------------------------------------------
```
I was unable to reproduce this outside of Fedora CI. Perhaps this has to do with how the network is configured, no idea.
This is the output of `python3.13 -m test.pythoninfo`:
<details>
```
Python debug information
========================
CC.version: gcc (GCC) 14.1.1 20240701 (Red Hat 14.1.1-7)
_decimal.__libmpdec_version__: 2.5.1
_testcapi.LONG_MAX: 9223372036854775807
_testcapi.PY_SSIZE_T_MAX: 9223372036854775807
_testcapi.Py_C_RECURSION_LIMIT: 10000
_testcapi.SIZEOF_TIME_T: 8
_testcapi.SIZEOF_WCHAR_T: 4
_testinternalcapi.SIZEOF_PYGC_HEAD: 16
_testinternalcapi.SIZEOF_PYOBJECT: 16
build.NDEBUG: ignore assertions (macro defined)
build.Py_DEBUG: No (sys.gettotalrefcount() missing)
build.Py_TRACE_REFS: No (sys.getobjects() missing)
build.WITH_DOC_STRINGS: Yes
build.WITH_DTRACE: Yes
build.WITH_FREELISTS: Yes
build.WITH_MIMALLOC: Yes
build.WITH_PYMALLOC: Yes
build.WITH_VALGRIND: Yes
builtins.float.double_format: IEEE, little-endian
builtins.float.float_format: IEEE, little-endian
config[_config_init]: 2
config[_init_main]: True
config[_install_importlib]: True
config[_is_python_build]: False
config[argv]: ['-m']
config[base_exec_prefix]: '/usr'
config[base_executable]: '/usr/bin/python3.13'
config[base_prefix]: '/usr'
config[buffered_stdio]: True
config[bytes_warning]: 0
config[check_hash_pycs_mode]: 'default'
config[code_debug_ranges]: True
config[configure_c_stdio]: True
config[cpu_count]: -1
config[dev_mode]: False
config[dump_refs]: False
config[dump_refs_file]: None
config[exec_prefix]: '/usr'
config[executable]: '/usr/bin/python3.13'
config[faulthandler]: False
config[filesystem_encoding]: 'utf-8'
config[filesystem_errors]: 'surrogateescape'
config[hash_seed]: 0
config[home]: None
config[import_time]: False
config[inspect]: False
config[install_signal_handlers]: True
config[int_max_str_digits]: 4300
config[interactive]: False
config[isolated]: False
config[malloc_stats]: False
config[module_search_paths]: ['/usr/lib64/python313.zip', '/usr/lib64/python3.13', '/usr/lib64/python3.13/lib-dynload']
config[module_search_paths_set]: True
config[optimization_level]: 0
config[orig_argv]: ['python3.13', '-m', 'test.pythoninfo']
config[parse_argv]: True
config[parser_debug]: False
config[pathconfig_warnings]: True
config[perf_profiling]: 0
config[platlibdir]: 'lib64'
config[prefix]: '/usr'
config[program_name]: 'python3.13'
config[pycache_prefix]: None
config[pythonpath_env]: None
config[quiet]: False
config[run_command]: None
config[run_filename]: None
config[run_module]: 'test.pythoninfo'
config[safe_path]: False
config[show_ref_count]: False
config[site_import]: True
config[skip_source_first_line]: False
config[stdio_encoding]: 'utf-8'
config[stdio_errors]: 'strict'
config[stdlib_dir]: '/usr/lib64/python3.13'
config[sys_path_0]: '/var/str/python/selftest'
config[tracemalloc]: 0
config[use_environment]: True
config[use_frozen_modules]: True
config[use_hash_seed]: False
config[user_site_directory]: True
config[verbose]: 0
config[warn_default_encoding]: False
config[warnoptions]: []
config[write_bytecode]: True
config[xoptions]: []
curses.ncurses_version: curses.ncurses_version(major=6, minor=4, patch=20240127)
datetime.datetime.now: 2024-07-22 16:05:29.258162
expat.EXPAT_VERSION: expat_2.6.2
fips.linux_crypto_fips_enabled: 0
fips.openssl_fips_mode: 0
gdb_version: GNU gdb (Fedora Linux) 14.2-14.fc41
gdbm.GDBM_VERSION: 1.23.0
global_config[Py_BytesWarningFlag]: 0
global_config[Py_DebugFlag]: 0
global_config[Py_DontWriteBytecodeFlag]: 0
global_config[Py_FileSystemDefaultEncodeErrors]: 'surrogateescape'
global_config[Py_FileSystemDefaultEncoding]: 'utf-8'
global_config[Py_FrozenFlag]: 0
global_config[Py_HasFileSystemDefaultEncoding]: 0
global_config[Py_HashRandomizationFlag]: 1
global_config[Py_IgnoreEnvironmentFlag]: 0
global_config[Py_InspectFlag]: 0
global_config[Py_InteractiveFlag]: 0
global_config[Py_IsolatedFlag]: 0
global_config[Py_NoSiteFlag]: 0
global_config[Py_NoUserSiteDirectory]: 0
global_config[Py_OptimizeFlag]: 0
global_config[Py_QuietFlag]: 0
global_config[Py_UTF8Mode]: 0
global_config[Py_UnbufferedStdioFlag]: 0
global_config[Py_VerboseFlag]: 0
global_config[_Py_HasFileSystemDefaultEncodeErrors]: 0
libregrtests.build_info: release shared LTO+PGO valgrind dtrace
locale.getencoding: UTF-8
os.cpu_count: 2
os.environ[HOME]: /root
os.environ[LANG]: en_US.UTF-8
os.environ[PATH]: /root/.local/bin:/root/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/var/str/python/selftest
os.environ[SHELL]: /bin/bash
os.environ[TERM]: xterm
os.getcwd: /var/str/python/selftest
os.getegid: 0
os.geteuid: 0
os.getgid: 0
os.getgrouplist: 0
os.getgroups: 0
os.getloadavg: (1.19873046875, 0.60791015625, 0.2412109375)
os.getrandom: ready (initialized)
os.getresgid: (0, 0, 0)
os.getresuid: (0, 0, 0)
os.getuid: 0
os.login: root
os.name: posix
os.process_cpu_count: 2
os.supports_bytes_environ: True
os.supports_effective_ids: ['access']
os.supports_fd: ['chdir', 'chmod', 'chown', 'execve', 'listdir', 'pathconf', 'scandir', 'stat', 'statvfs', 'truncate', 'utime']
os.supports_follow_symlinks: ['access', 'chown', 'link', 'stat', 'utime']
os.umask: 0o022
os.uname: posix.uname_result(sysname='Linux', nodename='ip-172-31-17-184.us-east-2.compute.internal', release='6.11.0-0.rc0.20240719git720261cfc732.7.fc41.x86_64', version='#1 SMP PREEMPT_DYNAMIC Fri Jul 19 16:59:06 UTC 2024', machine='x86_64')
platform.architecture: 64bit ELF
platform.freedesktop_os_release[ID]: fedora
platform.freedesktop_os_release[NAME]: Fedora Linux
platform.freedesktop_os_release[VARIANT_ID]: cloud
platform.freedesktop_os_release[VERSION]: 41 (Cloud Edition Prerelease)
platform.freedesktop_os_release[VERSION_ID]: 41
platform.libc_ver: glibc 2.39.9000
platform.platform: Linux-6.11.0-0.rc0.20240719git720261cfc732.7.fc41.x86_64-x86_64-with-glibc2.39.9000
platform.python_implementation: CPython
pre_config[_config_init]: 2
pre_config[allocator]: 0
pre_config[coerce_c_locale]: 0
pre_config[coerce_c_locale_warn]: 0
pre_config[configure_locale]: 1
pre_config[dev_mode]: 0
pre_config[isolated]: 0
pre_config[parse_argv]: 1
pre_config[use_environment]: 1
pre_config[utf8_mode]: 0
pwd.getpwuid(0): pwd.struct_passwd(pw_name='root', pw_passwd='x', pw_uid=0, pw_gid=0, pw_gecos='Super User', pw_dir='/root', pw_shell='/bin/bash')
pymem.allocator: pymalloc
readline._READLINE_LIBRARY_VERSION: 8.2
readline._READLINE_RUNTIME_VERSION: 0x802
readline._READLINE_VERSION: 0x802
resource.RLIMIT_AS: (-1, -1)
resource.RLIMIT_CORE: (-1, -1)
resource.RLIMIT_CPU: (-1, -1)
resource.RLIMIT_DATA: (-1, -1)
resource.RLIMIT_FSIZE: (-1, -1)
resource.RLIMIT_MEMLOCK: (8388608, 8388608)
resource.RLIMIT_MSGQUEUE: (819200, 819200)
resource.RLIMIT_NICE: (0, 0)
resource.RLIMIT_NOFILE: (1024, 524288)
resource.RLIMIT_NPROC: (15042, 15042)
resource.RLIMIT_OFILE: (1024, 524288)
resource.RLIMIT_RSS: (-1, -1)
resource.RLIMIT_RTPRIO: (0, 0)
resource.RLIMIT_RTTIME: (-1, -1)
resource.RLIMIT_SIGPENDING: (15042, 15042)
resource.RLIMIT_STACK: (8388608, -1)
resource.pagesize: 4096
socket.hostname: ip-172-31-17-184.us-east-2.compute.internal
sqlite3.sqlite_version: 3.46.0
ssl.HAS_SNI: True
ssl.OPENSSL_VERSION: OpenSSL 3.2.2 4 Jun 2024
ssl.OPENSSL_VERSION_INFO: (3, 2, 0, 2, 0)
ssl.OP_ALL: 0x80000050
ssl.OP_NO_TLSv1_1: 0x10000000
ssl.SSLContext.maximum_version: -1
ssl.SSLContext.minimum_version: -2
ssl.SSLContext.options: 2186412112
ssl.SSLContext.protocol: 16
ssl.SSLContext.verify_mode: 2
ssl.default_https_context.maximum_version: -1
ssl.default_https_context.minimum_version: -2
ssl.default_https_context.options: 2186412112
ssl.default_https_context.protocol: 16
ssl.default_https_context.verify_mode: 2
ssl.environ[OPENSSL_CONF]: /non-existing-file
ssl.stdlib_context.maximum_version: -1
ssl.stdlib_context.minimum_version: -2
ssl.stdlib_context.options: 2186412112
ssl.stdlib_context.protocol: 16
ssl.stdlib_context.verify_mode: 0
subprocess._USE_POSIX_SPAWN: True
support.MS_WINDOWS: False
support._is_gui_available: False
support.check_sanitizer(address=True): False
support.check_sanitizer(memory=True): False
support.check_sanitizer(ub=True): False
support.has_fork_support: True
support.has_socket_support: True
support.has_strftime_extensions: True
support.has_subprocess_support: True
support.is_android: False
support.is_emscripten: False
support.is_jython: False
support.is_wasi: False
support.python_is_optimized: True
support_os_helper.can_chmod: True
support_os_helper.can_dac_override: True
support_os_helper.can_symlink: True
support_os_helper.can_xattr: True
support_socket_helper.IPV6_ENABLED: True
support_socket_helper.has_gethostname: True
support_socket_helper.tcp_blackhole: False
support_threading_helper.can_start_thread: True
sys._is_gil_enabled: True
sys.api_version: 1013
sys.builtin_module_names: ('_abc', '_ast', '_codecs', '_collections', '_functools', '_imp', '_io', '_locale', '_operator', '_signal', '_sre', '_stat', '_string', '_suggestions', '_symtable', '_sysconfig', '_thread', '_tokenize', '_tracemalloc', '_typing', '_warnings', '_weakref', 'atexit', 'builtins', 'errno', 'faulthandler', 'gc', 'itertools', 'marshal', 'posix', 'pwd', 'sys', 'time')
sys.byteorder: little
sys.dont_write_bytecode: False
sys.executable: /usr/bin/python3.13
sys.filesystem_encoding: utf-8/surrogateescape
sys.flags: sys.flags(debug=0, inspect=0, interactive=0, optimize=0, dont_write_bytecode=0, no_user_site=0, no_site=0, ignore_environment=0, verbose=0, bytes_warning=0, quiet=0, hash_randomization=1, isolated=0, dev_mode=False, utf8_mode=0, warn_default_encoding=0, safe_path=False, int_max_str_digits=4300)
sys.float_info: sys.float_info(max=1.7976931348623157e+308, max_exp=1024, max_10_exp=308, min=2.2250738585072014e-308, min_exp=-1021, min_10_exp=-307, dig=15, mant_dig=53, epsilon=2.220446049250313e-16, radix=2, rounds=1)
sys.float_repr_style: short
sys.getrecursionlimit: 1000
sys.hash_info: sys.hash_info(width=64, modulus=2305843009213693951, inf=314159, nan=0, imag=1000003, algorithm='siphash13', hash_bits=64, seed_bits=128, cutoff=0)
sys.hexversion: 51183796
sys.implementation: namespace(name='cpython', cache_tag='cpython-313', version=sys.version_info(major=3, minor=13, micro=0, releaselevel='beta', serial=4), hexversion=51183796, _multiarch='x86_64-linux-gnu')
sys.int_info: sys.int_info(bits_per_digit=30, sizeof_digit=4, default_max_str_digits=4300, str_digits_check_threshold=640)
sys.maxsize: 9223372036854775807
sys.maxunicode: 1114111
sys.path: ['/var/str/python/selftest', '/usr/lib64/python313.zip', '/usr/lib64/python3.13', '/usr/lib64/python3.13/lib-dynload', '/usr/lib64/python3.13/site-packages', '/usr/lib/python3.13/site-packages']
sys.platform: linux
sys.platlibdir: lib64
sys.prefix: /usr
sys.stderr.encoding: utf-8/backslashreplace
sys.stdin.encoding: utf-8/strict
sys.stdout.encoding: utf-8/strict
sys.thread_info: sys.thread_info(name='pthread', lock='semaphore', version='NPTL 2.39.9000')
sys.version: 3.13.0b4 (main, Jul 19 2024, 00:00:00) [GCC 14.1.1 20240701 (Red Hat 14.1.1-7)]
sys.version_info: sys.version_info(major=3, minor=13, micro=0, releaselevel='beta', serial=4)
sysconfig.is_python_build: False
sysconfig[CCSHARED]: -fPIC
sysconfig[CC]: gcc
sysconfig[CFLAGSFORSHARED]: -fPIC
sysconfig[CFLAGS]: -fno-strict-overflow -Wsign-compare -DDYNAMIC_ANNOTATIONS_ENABLED=1 -DNDEBUG -fcf-protection -fexceptions -fcf-protection -fexceptions -fcf-protection -fexceptions -O3
sysconfig[CONFIG_ARGS]: '--build=x86_64-redhat-linux-gnu' '--host=x86_64-redhat-linux-gnu' '--program-prefix=' '--disable-dependency-tracking' '--prefix=/usr' '--exec-prefix=/usr' '--bindir=/usr/bin' '--sbindir=/usr/sbin' '--sysconfdir=/etc' '--datadir=/usr/share' '--includedir=/usr/include' '--libdir=/usr/lib64' '--libexecdir=/usr/libexec' '--localstatedir=/var' '--runstatedir=/run' '--sharedstatedir=/var/lib' '--mandir=/usr/share/man' '--infodir=/usr/share/info' '--with-platlibdir=lib64' '--enable-ipv6' '--enable-shared' '--with-computed-gotos=yes' '--with-dbmliborder=gdbm:ndbm:bdb' '--with-system-expat' '--with-system-ffi' '--with-system-libmpdec' '--enable-loadable-sqlite-extensions' '--with-dtrace' '--with-lto' '--with-ssl-default-suites=openssl' '--without-static-libpython' '--with-wheel-pkg-dir=/usr/share/python-wheels' '--with-valgrind' '--without-ensurepip' '--enable-experimental-jit=yes-off' '--enable-optimizations' 'build_alias=x86_64-redhat-linux-gnu' 'host_alias=x86_64-redhat-linux-gnu' 'PKG_CONFIG_PATH=:/usr/lib64/pkgconfig:/usr/share/pkgconfig' 'CC=gcc' 'CFLAGS=-fcf-protection -fexceptions ' 'LDFLAGS= ' 'CPPFLAGS='
sysconfig[HOST_GNU_TYPE]: x86_64-redhat-linux-gnu
sysconfig[MACHDEP]: linux
sysconfig[MULTIARCH]: x86_64-linux-gnu
sysconfig[OPT]: -DDYNAMIC_ANNOTATIONS_ENABLED=1 -DNDEBUG -fcf-protection -fexceptions
sysconfig[PGO_PROF_USE_FLAG]: -fprofile-use -fprofile-correction
sysconfig[PY_CFLAGS]: -fno-strict-overflow -Wsign-compare -DDYNAMIC_ANNOTATIONS_ENABLED=1 -DNDEBUG -fcf-protection -fexceptions -fcf-protection -fexceptions -fcf-protection -fexceptions -O3
sysconfig[PY_CFLAGS_NODIST]: -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Wno-complain-wrong-lang -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -D_GNU_SOURCE -fPIC -fwrapv -D_Py_TIER2=3 -D_Py_JIT -fno-semantic-interposition -flto -fuse-linker-plugin -ffat-lto-objects -flto-partition=none -g -std=c11 -Wextra -Wno-unused-parameter -Wno-missing-field-initializers -Wstrict-prototypes -Werror=implicit-function-declaration -fvisibility=hidden -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Wno-complain-wrong-lang -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -D_GNU_SOURCE -fPIC -fwrapv -O3 -fprofile-use -fprofile-correction -I/builddir/build/BUILD/python3.13-3.13.0_b4-build/Python-3.13.0b4/Include/internal -I/builddir/build/BUILD/python3.13-3.13.0_b4-build/Python-3.13.0b4/Include/internal/mimalloc
sysconfig[PY_CORE_LDFLAGS]: -Wl,-z,relro -Wl,--as-needed -Wl,-z,pack-relative-relocs -Wl,-z,now -specs=/usr/lib/rpm/redhat/redhat-hardened-ld -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -Wl,--build-id=sha1 -specs=/usr/lib/rpm/redhat/redhat-package-notes -g -fno-semantic-interposition -flto -fuse-linker-plugin -ffat-lto-objects -flto-partition=none -g -Wl,-z,relro -Wl,--as-needed -Wl,-z,pack-relative-relocs -Wl,-z,now -specs=/usr/lib/rpm/redhat/redhat-hardened-ld -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -Wl,--build-id=sha1 -specs=/usr/lib/rpm/redhat/redhat-package-notes -g
sysconfig[PY_LDFLAGS_NODIST]: -Wl,-z,relro -Wl,--as-needed -Wl,-z,pack-relative-relocs -Wl,-z,now -specs=/usr/lib/rpm/redhat/redhat-hardened-ld -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -Wl,--build-id=sha1 -specs=/usr/lib/rpm/redhat/redhat-package-notes -g -fno-semantic-interposition -flto -fuse-linker-plugin -ffat-lto-objects -flto-partition=none -g -Wl,-z,relro -Wl,--as-needed -Wl,-z,pack-relative-relocs -Wl,-z,now -specs=/usr/lib/rpm/redhat/redhat-hardened-ld -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -Wl,--build-id=sha1 -specs=/usr/lib/rpm/redhat/redhat-package-notes -g
sysconfig[PY_STDMODULE_CFLAGS]: -fno-strict-overflow -Wsign-compare -DDYNAMIC_ANNOTATIONS_ENABLED=1 -DNDEBUG -fcf-protection -fexceptions -fcf-protection -fexceptions -fcf-protection -fexceptions -O3 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Wno-complain-wrong-lang -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -D_GNU_SOURCE -fPIC -fwrapv -D_Py_TIER2=3 -D_Py_JIT -fno-semantic-interposition -flto -fuse-linker-plugin -ffat-lto-objects -flto-partition=none -g -std=c11 -Wextra -Wno-unused-parameter -Wno-missing-field-initializers -Wstrict-prototypes -Werror=implicit-function-declaration -fvisibility=hidden -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Wno-complain-wrong-lang -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -D_GNU_SOURCE -fPIC -fwrapv -O3 -fprofile-use -fprofile-correction -I/builddir/build/BUILD/python3.13-3.13.0_b4-build/Python-3.13.0b4/Include/internal -I/builddir/build/BUILD/python3.13-3.13.0_b4-build/Python-3.13.0b4/Include/internal/mimalloc -IObjects -IInclude -IPython -I. -I/builddir/build/BUILD/python3.13-3.13.0_b4-build/Python-3.13.0b4/Include -fPIC
sysconfig[Py_DEBUG]: 0
sysconfig[Py_ENABLE_SHARED]: 1
sysconfig[Py_GIL_DISABLED]: 0
sysconfig[SHELL]: /bin/sh -e
sysconfig[SOABI]: cpython-313-x86_64-linux-gnu
sysconfig[TEST_MODULES]: yes
sysconfig[abs_builddir]: /builddir/build/BUILD/python3.13-3.13.0_b4-build/Python-3.13.0b4/build/optimized
sysconfig[abs_srcdir]: /builddir/build/BUILD/python3.13-3.13.0_b4-build/Python-3.13.0b4
sysconfig[prefix]: /usr
sysconfig[srcdir]: /usr/lib64/python3.13/config-3.13-x86_64-linux-gnu
tempfile.gettempdir: /tmp
test_socket.HAVE_SOCKET_ALG: True
test_socket.HAVE_SOCKET_BLUETOOTH: False
test_socket.HAVE_SOCKET_CAN: False
test_socket.HAVE_SOCKET_CAN_ISOTP: False
test_socket.HAVE_SOCKET_CAN_J1939: False
test_socket.HAVE_SOCKET_HYPERV: False
test_socket.HAVE_SOCKET_QIPCRTR: True
test_socket.HAVE_SOCKET_RDS: False
test_socket.HAVE_SOCKET_UDPLITE: True
test_socket.HAVE_SOCKET_VSOCK: True
time.altzone: 0
time.daylight: 0
time.get_clock_info(monotonic): namespace(implementation='clock_gettime(CLOCK_MONOTONIC)', monotonic=True, adjustable=False, resolution=1e-09)
time.get_clock_info(perf_counter): namespace(implementation='clock_gettime(CLOCK_MONOTONIC)', monotonic=True, adjustable=False, resolution=1e-09)
time.get_clock_info(process_time): namespace(implementation='clock_gettime(CLOCK_PROCESS_CPUTIME_ID)', monotonic=True, adjustable=False, resolution=1e-09)
time.get_clock_info(thread_time): namespace(implementation='clock_gettime(CLOCK_THREAD_CPUTIME_ID)', monotonic=True, adjustable=False, resolution=1e-09)
time.get_clock_info(time): namespace(implementation='clock_gettime(CLOCK_REALTIME)', monotonic=False, adjustable=True, resolution=1e-09)
time.time: 1721664329.3427575
time.timezone: 0
time.tzname: ('UTC', 'UTC')
tkinter.TCL_VERSION: 8.6
tkinter.TK_VERSION: 8.6
tkinter.info_patchlevel: 8.6.14
zlib.ZLIB_RUNTIME_VERSION: 1.3.1.zlib-ng
zlib.ZLIB_VERSION: 1.3.1.zlib-ng
```
</details>
We invoke the installed tests like this:
```
python3.13 -m test -wW -j0 -i test_check_probes
```
I'd like to debug this and see if something is wrong with the test or perhaps in Fedora 41. But I don't know where to start.
### CPython versions tested on:
3.13
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-123423
* gh-123443
<!-- /gh-linked-prs -->
| b379f1b26c1e89c8e9160b4dede61b980cc77be6 | 61bef6245c4a32bf430d684ede8603f423d63284 |
python/cpython | python__cpython-122134 | # Pure-Python implementation of socket.socketpair() doesn't authenticate connected socket
# Bug report
### Bug description:
`socket.socketpair()` has a fall-back implementation, used on platforms that don't support `socket.AF_UNIX`, which uses AF_INET[6] sockets bound to localhost. The connection is expected to come from the same process, but nothing authenticates the accepted peer, so another local process that wins the race to connect could be accepted instead.
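For context, the fallback follows the classic listen/connect/accept pattern on localhost. The sketch below is not CPython's actual implementation; it only shows the shape of the fallback and one basic cross-check (comparing the two ends' addresses) that hints at why an explicit authentication step is needed:

```python
import socket

def inet_socketpair():
    # Minimal sketch of the AF_INET fallback (not CPython's actual
    # code): listen on localhost, connect to ourselves, accept.
    lsock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        lsock.bind(("127.0.0.1", 0))
        lsock.listen(1)
        addr = lsock.getsockname()
        csock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        csock.connect(addr)
        ssock, _ = lsock.accept()
    finally:
        lsock.close()
    # Without an authentication step, whichever local process wins the
    # race to connect() is the one that gets accepted.  A basic sanity
    # check is that the two ends see each other's addresses:
    if csock.getsockname() != ssock.getpeername():
        csock.close()
        ssock.close()
        raise OSError("socketpair peer mismatch")
    return csock, ssock
```

As with `socket.socketpair()`, data written on one end is readable on the other; the address comparison above is only a sketch of the idea, not the full fix.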
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Linux, Windows
<!-- gh-linked-prs -->
### Linked PRs
* gh-122134
* gh-122424
* gh-122425
* gh-122426
* gh-122427
* gh-122428
* gh-122429
* gh-122493
* gh-122504
* gh-122505
* gh-122506
* gh-122507
* gh-122508
* gh-122509
<!-- /gh-linked-prs -->
| 78df1043dbdce5c989600616f9f87b4ee72944e5 | 76bdfa4cd02532519fb43ae91244e2b4b3650d78 |
python/cpython | python__cpython-122157 | # Invoking 'help(...)' unexpectedly includes name of internal wrapper function.
# Bug report
### Bug description:
Attempting to retrieve documentation for the [`urlsplit` function](https://docs.python.org/3/library/urllib.parse.html#urllib.parse.urlsplit) mostly works as expected, but the first line is confusing and appears to originate from a caching wrapper applied to the function (not relevant to usage documentation).
```pycon
Python 3.12.4 (main, Jul 15 2024, 12:17:32) [GCC 13.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from urllib.parse import urlsplit
>>> help(urlsplit)
Help on _lru_cache_wrapper in module urllib.parse:
urlsplit(url, scheme='', allow_fragments=True)
...
```
Note: there is some existing mention of this (`help` and the interaction with the `@lru_cache` wrapper) in https://github.com/python/cpython/issues/88169 - if my report is a dup/invalid, please feel free to close this.
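The behaviour is easy to reproduce with any `lru_cache`-decorated function (the function below is only for illustration): `functools` copies the wrapped function's metadata onto the wrapper, but the object that `help()` actually inspects is still a `_lru_cache_wrapper`, and that type name is what lands in the headline:

```python
import functools
import pydoc

@functools.lru_cache(maxsize=None)
def square(n):
    """Return n squared."""
    return n * n

# functools copies the wrapped function's metadata onto the wrapper...
assert square.__name__ == "square" and "squared" in square.__doc__
# ...and keeps the original function reachable via __wrapped__:
assert square.__wrapped__(3) == 9
# ...but the object itself is the C-level cache wrapper, whose type
# name is what pydoc/help() put in the first line:
assert type(square).__name__ == "_lru_cache_wrapper"
print(pydoc.render_doc(square).splitlines()[0])
```

On affected versions the printed headline mentions `_lru_cache_wrapper` rather than the function itself, matching the `urlsplit` output above.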
### CPython versions tested on:
3.12
### Operating systems tested on:
Linux
Edit: fixup: add missing statement from interpreter session.
<!-- gh-linked-prs -->
### Linked PRs
* gh-122157
<!-- /gh-linked-prs -->
| 4606eff0aa01d6ce30d25b05ed347567ea59b00b | a15feded71dd47202db169613effdafc468a8cf3 |
python/cpython | python__cpython-122097 | # ``test_pathlib`` prints unnecessary information
# Bug report
### Bug description:
```python
./python -m test -q test_pathlib
Using random seed: 2693504937
0:00:00 load avg: 0.51 Run 1 test sequentially in a single process
[OSError(), OSError()]
[PermissionError(13, 'Permission denied'), PermissionError(13, 'Permission denied'), OSError(39, 'Directory not empty')]
[OSError(), OSError()]
[PermissionError(13, 'Permission denied'), PermissionError(13, 'Permission denied'), OSError(39, 'Directory not empty')]
[OSError(), OSError()]
[PermissionError(13, 'Permission denied'), PermissionError(13, 'Permission denied'), OSError(39, 'Directory not empty')]
== Tests result: SUCCESS ==
Total duration: 6.9 sec
Total tests: run=1,710 skipped=571
Total test files: run=1/1
Result: SUCCESS
```
I have a PR ready.
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-122097
<!-- /gh-linked-prs -->
| 5901d92739c6e53668e3924eaff38e2e9eb95162 | a3f7db905c5aecda1d06fb60ed382a17e5b9c7aa |
python/cpython | python__cpython-121241 | # Docs: move deprecations into include files
# Documentation
Re: https://discuss.python.org/t/streamline-whats-new-by-moving-deprecations-and-removals-out-of-news/53997/8
To avoid needing to duplicate and sync the deprecation sections across What's New files, and ease backports, move them into include files.
* Python:
* [x] Use include files for `whatsnew/3.12.rst` deprecations
* [x] Use include files for `whatsnew/3.13.rst` deprecations
* [x] Use include files for `whatsnew/3.14.rst` deprecations
* C API:
* [x] Use include files for deprecations
* [x] Create dedicated page to list deprecations using include files
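A hedged sketch of the mechanism (directory and file names are illustrative): each deprecation blurb lives once under something like `Doc/deprecations/`, and every What's New file pulls it in with the standard reST `include` directive, so a backport only needs to copy the shared file:

```rst
.. in Doc/whatsnew/3.13.rst (and identically in 3.14.rst):

Pending removal in Python 3.15
------------------------------

.. include:: ../deprecations/pending-removal-in-3.15.rst
```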
<!-- gh-linked-prs -->
### Linked PRs
* gh-121241
* gh-122038
* gh-122093
* gh-122223
* gh-122225
* gh-122224
* gh-122242
* gh-122350
* gh-122351
* gh-109843
* gh-122352
* gh-122374
* gh-122375
* gh-122422
* gh-122423
<!-- /gh-linked-prs -->
| a1df1b44394784721239615f307b273455536d14 | 709db44255eb5d73fc22a1341dd0253e71ddfda9 |
python/cpython | python__cpython-122082 | # Optional support for ieee contexts in the decimal module doesn't work
# Crash report
### What happened?
Reproducer:
```
$ ./configure CFLAGS=-DEXTRA_FUNCTIONALITY -q && make -s
configure: WARNING: no system libmpdecimal found; falling back to bundled libmpdecimal (deprecated and scheduled for removal in Python 3.15)
In function ‘word_to_string’,
inlined from ‘coeff_to_string’ at ./Modules/_decimal/libmpdec/io.c:411:13,
inlined from ‘_mpd_to_string’ at ./Modules/_decimal/libmpdec/io.c:612:18:
./Modules/_decimal/libmpdec/io.c:349:40: warning: writing 1 byte into a region of size 0 [-Wstringop-overflow=]
349 | if (s == dot) *s++ = '.'; *s++ = '0' + (char)(x / d); x %= d
| ~~~~~^~~~~~~~~~~~~~~~~~~~~
./Modules/_decimal/libmpdec/io.c:360:14: note: in expansion of macro ‘EXTRACT_DIGIT’
360 | case 15: EXTRACT_DIGIT(s, x, 100000000000000ULL, dot);
| ^~~~~~~~~~~~~
[... and similar warnings from issue #108562, hardly relevant]
Checked 112 modules (34 built-in, 77 shared, 1 n/a on linux-x86_64, 0 disabled, 0 missing, 0 failed on import)
$ ./python
Python 3.14.0a0 (heads/main:cecaceea31, Jul 19 2024, 05:34:00) [GCC 12.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from decimal import *
>>> IEEEContext(11)
Traceback (most recent call last):
File "<python-input-1>", line 1, in <module>
IEEEContext(11)
~~~~~~~~~~~^^^^
ValueError: argument must be a multiple of 32, with a maximum of 512
>>> IEEEContext(1024)
Traceback (most recent call last):
File "<python-input-2>", line 1, in <module>
IEEEContext(1024)
~~~~~~~~~~~^^^^^^
ValueError: argument must be a multiple of 32, with a maximum of 512
>>> IEEEContext(512) # oops
Segmentation fault
```
There are tests (decorated by ``@requires_extra_functionality``) for this, but it seems nobody (including the build bots) has run them in a long time. These tests are broken too (they crash).
I'll provide a patch.
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Linux
### Output from running 'python -VV' on the command line:
Python 3.14.0a0 (heads/fix-decimal-extra:c8d2630995, Jul 21 2024, 10:20:04) [GCC 12.2.0]
<!-- gh-linked-prs -->
### Linked PRs
* gh-122082
* gh-123136
<!-- /gh-linked-prs -->
| b9e10d1a0fc4d8428d4b36eb127570a832c26b6f | be257c58152e9b960827362b11c9ef2223fd6267 |
python/cpython | python__cpython-122059 | # Outdated docstrings in `Lib/inspect.py`
The `inspect.isfunction` docstring is missing:
- `__dict__`
- `__closure__`
- `__qualname__`
- `__module__`
- `__type_params__`
`inspect.isgenerator`:
- `gi_yieldfrom`
- `gi_suspended` (?): an undocumented attribute, so I guess we should not expose it there
- `next`: an old attribute that has been removed, so it should be removed from the docstring as well
`inspect.isframe`:
- `clear`
- `f_trace_lines`
- `f_trace_opcodes`
`inspect.iscode`:
- `co_positions`
- `co_lines`
- `co_qualname`
- `replace`
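A quick runtime check confirms which attributes a plain function actually carries today; the PEP 695 attribute only exists on 3.12+, hence the version guard in this sketch:

```python
import inspect
import sys

def f(): pass

attrs = ["__doc__", "__name__", "__qualname__", "__module__",
         "__dict__", "__defaults__", "__kwdefaults__", "__closure__",
         "__code__", "__globals__", "__annotations__"]
if sys.version_info >= (3, 12):
    attrs.append("__type_params__")  # added by PEP 695
# Every attribute above exists on a plain function object, while the
# isfunction docstring only lists a subset of them.
missing = [a for a in attrs if not hasattr(f, a)]
assert inspect.isfunction(f) and not missing, missing
```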
I have a PR ready.
<!-- gh-linked-prs -->
### Linked PRs
* gh-122059
<!-- /gh-linked-prs -->
| 8ce70d6c697c8179e007169ba2ec5d3a0dc77362 | 0b433aa9df6b5bb84e77ff97e59b7bcd04f2199a |
python/cpython | python__cpython-133169 | # Question about adjacent empty matches in regular expressions
# Documentation
The Python [documentation](https://docs.python.org/3/library/re.html#re.sub) for re.sub() states:
> Empty matches for the pattern are replaced only when not adjacent to a previous empty match.
However, after some testing, I have been unable to construct a regular expression pattern that produces adjacent empty matches. This leads to the following questions:
Is it actually possible to create a regular expression pattern that results in adjacent empty matches in Python's re module?
If not, should we consider updating the documentation to avoid potential confusion among developers?
## My Investigation
I've tried various patterns that might theoretically produce adjacent empty matches, such as:
```python
import re
def find_all_matches(pattern, string):
return [m.start() for m in re.finditer(pattern, string)]
test_string = "abcd"
patterns = [
r'\b|\b', # Word boundaries
r'^|$', # Start or end of string
r'(?=.)|(?<=.)', # Positive lookahead or lookbehind
r'.*?', # Non-greedy any character
]
for pattern in patterns:
matches = find_all_matches(pattern, test_string)
print(f"Pattern {pattern}: {matches}")
```
None of these patterns produce adjacent empty matches. The regex engine seems to always move forward after finding a match, even an empty one.
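One thing worth separating out: since Python 3.7, `re.sub()` does replace an empty match that is adjacent to a previous *non-empty* match; the rule quoted above only suppresses an empty match immediately after another *empty* match, a situation the scanner's forced forward step appears to prevent. The 3.7 example from the docs shows the first half of this:

```python
import re

# An empty match right after the non-empty match of 'x' IS replaced
# (behaviour changed in Python 3.7):
assert re.sub('x*', '-', 'abxd') == '-a-b--d-'

# finditer shows why two empty matches never touch: after an empty
# match the scanner advances one character before trying again.
spans = [m.span() for m in re.finditer('x*', 'abxd')]
assert spans == [(0, 0), (1, 1), (2, 3), (3, 3), (4, 4)]
```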
## Request for Clarification
This issue sincerely requests clarification on this matter:
1. If adjacent empty matches are indeed possible as the documentation suggests, could you provide some examples that demonstrate this behavior?
2. Are there specific scenarios or edge cases where adjacent empty matches can occur?
3. If possible, could you share a minimal working example that shows how `re.sub()` handles adjacent empty matches differently from non-adjacent ones?
These examples would greatly help in understanding the documentation and the behavior of the `re` module in such cases.
Thank you for your time and attention to this matter. Any insights or examples you can provide would be greatly appreciated.
<!-- gh-linked-prs -->
### Linked PRs
* gh-133169
* gh-134217
* gh-134218
<!-- /gh-linked-prs -->
| 44b73d3cd4466e148460883acf4494124eae8c91 | 605022aeb69ae19cae1c020a6993ab5c433ce907 |
python/cpython | python__cpython-122045 | # SBOM generation tool fails during gitignore filtering for libraries with no files
In https://github.com/python/cpython/pull/119316, the libb2 library was removed. When running the SBOM generation tool afterwards, a confusing error is raised during gitignore filtering instead of triggering the helpful error for the "no files for package" condition. This can be avoided by short-circuiting gitignore filtering when no files are present.
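The shape of the fix can be sketched as a guard at the top of the filtering helper; the function name and the `git check-ignore` invocation below are illustrative, not the tool's exact code:

```python
import subprocess

def filter_gitignored_paths(paths):
    """Return the subset of `paths` not ignored by .gitignore."""
    # Sketch of the short-circuit: with zero paths there is nothing to
    # filter, and invoking 'git check-ignore' with no pathspecs is what
    # produces the confusing error instead of the intended
    # "no files for package" diagnostic further up the stack.
    if not paths:
        return []
    proc = subprocess.run(
        ["git", "check-ignore", "--stdin"],
        input="\n".join(paths),
        capture_output=True,
        text=True,
    )
    ignored = set(proc.stdout.splitlines())
    return [path for path in paths if path not in ignored]
```

With the guard in place, an empty package file list falls straight through to the existing "no files for package" check.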
<!-- gh-linked-prs -->
### Linked PRs
* gh-122045
* gh-122354
* gh-122355
<!-- /gh-linked-prs -->
| 4e04d1a3d237abd0cba354024556c39519e0d163 | 7a6d4ccf0ec16e09f0d8b21c5a0c591e5e3e45f7 |
python/cpython | python__cpython-132201 | # Misleading comment in `Modules/xxmodule.c`
Everything is clear on how to use xxmodule.c as a starting point for building your own module, based on the explanation in the file itself, but I am confused by the following portion of the comment in its header:
>If your object type is needed in other files, you'll have to create a file "foobarobject.h"..
Before that there is the following:
> If your module is named foo your sourcefile should be named `foomodule.c`
That part is clear from the name of the file itself, **xxmodule.c**.
Do we have a typo here for "bar", or is it just another way to say use `module name + anything + object`?
https://github.com/python/cpython/blob/d66b06107b0104af513f664d9a5763216639018b/Modules/xxmodule.c#L12
<!-- gh-linked-prs -->
### Linked PRs
* gh-132201
* gh-132207
* gh-132208
<!-- /gh-linked-prs -->
| af8d1b95377917036aaedf18b9cc047d8877259c | b865871486987e7622a2059981cc8d708f9b04b0 |
python/cpython | python__cpython-122072 | # CPython profiler broken with TensorFlow 2.17.0 code in Python 3.12.1+
# Bug report
### Bug description:
Lately, I've been testing AI code on various Python interpreters and their corresponding profilers across multiple platforms. After multiple attempts, I've noticed that CPython profilers consistently fail to analyze the following code.
```python
import tensorflow as tf
mnist = tf.keras.datasets.mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train = tf.keras.utils.normalize(x_train, axis=1)
x_test = tf.keras.utils.normalize(x_test, axis=1)
model = tf.keras.models.Sequential()
model.add(tf.keras.layers.Flatten())
model.add(tf.keras.layers.Dense(128, activation=tf.nn.relu))
model.add(tf.keras.layers.Dense(128, activation=tf.nn.relu))
model.add(tf.keras.layers.Dense(10, activation=tf.nn.softmax))
model.compile(optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
model.fit(x_train, y_train, epochs=2)
val_loss, val_acc = model.evaluate(x_test, y_test)
print(val_loss, val_acc)
model.save("epic_num_reader.keras")
predictions = model.predict([x_test])
print(predictions)
```
I've been testing AI code on different Python interpreters and their respective profilers across multiple platforms. Specifically, I've been working with a simple TensorFlow+Keras program that classifies number image inputs. Interestingly, I found that the code works well with IntelPython, which uses Python 3.9.19 as its latest version. When I tested the code on multiple versions of CPython, I noticed that the profiler works well and returns information for CPython versions earlier than 3.12.1. However, since CPython 3.12.1, the code crashes with an error.
```bash
2024-07-19 14:53:51.996897: I tensorflow/core/util/port.cc:153] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
2024-07-19 14:53:51.997294: I external/local_xla/xla/tsl/cuda/cudart_stub.cc:32] Could not find cuda drivers on your machine, GPU will not be used.
2024-07-19 14:53:51.999400: I external/local_xla/xla/tsl/cuda/cudart_stub.cc:32] Could not find cuda drivers on your machine, GPU will not be used.
2024-07-19 14:53:52.005233: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:485] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
2024-07-19 14:53:52.014843: E external/local_xla/xla/stream_executor/cuda/cuda_dnn.cc:8454] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
2024-07-19 14:53:52.017590: E external/local_xla/xla/stream_executor/cuda/cuda_blas.cc:1452] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2024-07-19 14:53:52.025009: I tensorflow/core/platform/cpu_feature_guard.cc:210] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 AVX512F AVX512_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
Traceback (most recent call last):
File "<frozen runpy>", line 198, in _run_module_as_main
File "<frozen runpy>", line 88, in _run_code
File "/home/user/.pyenv/versions/3.12.1/lib/python3.12/profile.py", line 615, in <module>
main()
File "/home/user/.pyenv/versions/3.12.1/lib/python3.12/profile.py", line 604, in main
runctx(code, globs, None, options.outfile, options.sort)
File "/home/user/.pyenv/versions/3.12.1/lib/python3.12/profile.py", line 101, in runctx
return _Utils(Profile).runctx(statement, globals, locals, filename, sort)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/.pyenv/versions/3.12.1/lib/python3.12/profile.py", line 64, in runctx
prof.runctx(statement, globals, locals)
File "/home/user/.pyenv/versions/3.12.1/lib/python3.12/profile.py", line 424, in runctx
exec(cmd, globals, locals)
File "mnist_number.py", line 1, in <module>
import tensorflow as tf
File "/home/user/.pyenv/versions/3.12.1/lib/python3.12/site-packages/tensorflow/__init__.py", line 47, in <module>
from tensorflow._api.v2 import __internal__
File "/home/user/.pyenv/versions/3.12.1/lib/python3.12/site-packages/tensorflow/_api/v2/__internal__/__init__.py", line 8, in <module>
from tensorflow._api.v2.__internal__ import autograph
File "/home/user/.pyenv/versions/3.12.1/lib/python3.12/site-packages/tensorflow/_api/v2/__internal__/autograph/__init__.py", line 8, in <module>
from tensorflow.python.autograph.core.ag_ctx import control_status_ctx # line: 34
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/.pyenv/versions/3.12.1/lib/python3.12/site-packages/tensorflow/python/autograph/core/ag_ctx.py", line 21, in <module>
from tensorflow.python.autograph.utils import ag_logging
File "/home/user/.pyenv/versions/3.12.1/lib/python3.12/site-packages/tensorflow/python/autograph/utils/__init__.py", line 17, in <module>
from tensorflow.python.autograph.utils.context_managers import control_dependency_on_returns
File "/home/user/.pyenv/versions/3.12.1/lib/python3.12/site-packages/tensorflow/python/autograph/utils/context_managers.py", line 19, in <module>
from tensorflow.python.framework import ops
File "/home/user/.pyenv/versions/3.12.1/lib/python3.12/site-packages/tensorflow/python/framework/ops.py", line 50, in <module>
from tensorflow.python.eager import context
File "/home/user/.pyenv/versions/3.12.1/lib/python3.12/site-packages/tensorflow/python/eager/context.py", line 37, in <module>
from tensorflow.python.eager import execute
File "/home/user/.pyenv/versions/3.12.1/lib/python3.12/site-packages/tensorflow/python/eager/execute.py", line 21, in <module>
from tensorflow.python.framework import dtypes
File "/home/user/.pyenv/versions/3.12.1/lib/python3.12/site-packages/tensorflow/python/framework/dtypes.py", line 308, in <module>
resource = DType(types_pb2.DT_RESOURCE)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/.pyenv/versions/3.12.1/lib/python3.12/site-packages/tensorflow/python/framework/dtypes.py", line 81, in __init__
self._handle_data = handle_data
^^^^^^^^^^^^^^^^^
File "/home/user/.pyenv/versions/3.12.1/lib/python3.12/profile.py", line 209, in trace_dispatch_i
if self.dispatch[event](self, frame, t):
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/.pyenv/versions/3.12.1/lib/python3.12/profile.py", line 293, in trace_dispatch_return
assert frame is self.cur[-2].f_back, ("Bad return", self.cur[-3])
AssertionError: ('Bad return', ('/home/user/.pyenv/versions/3.12.1/lib/python3.12/site-packages/tensorflow/python/framework/dtypes.py', 1, '<module>'))
```
I ran the code on my laptop, which has a Tiger Lake CPU and no NVIDIA GPU or Tensor Cores, and I didn't recompile the TensorFlow library, so the warning messages about missing AVX512 and GPU acceleration are expected.
Test environments:
WSL2 Ubuntu 20.04
- Python 3.12.4
Manjaro
- Python 3.9.19
- Python 3.10.14
- Python 3.11.9
- Python 3.12.0
- Python 3.12.1
- Python 3.12.2
- Python 3.12.3
- Python 3.12.4
Fedora
- Python 3.12.4
For Python versions prior to 3.12.1, I only received warning messages, and the profiler worked as expected. However, since upgrading to 3.12.1, I've started encountering AssertionError failures. Interestingly, I've compared the profile.py file between versions 3.12.0 and 3.12.1, and they appear to be identical, so the regression likely comes from the interpreter rather than the profiler module itself. It's possible that the introduction of PEP 695 in Python 3.12 is causing this occasional error.
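The failing line, `assert frame is self.cur[-2].f_back`, checks an invariant of trace-based profilers: every `'return'` event must come from the frame recorded at the matching `'call'`. Below is a hedged toy sketch (not CPython's actual profile.py code) of that invariant, using `sys.setprofile` with hypothetical helper functions `inner`/`outer`; the profiler's assertion fires when an event arrives that breaks this balanced-stack pairing.

```python
import sys

events = []

def tracer(frame, event, arg):
    # sys.setprofile also reports c_call/c_return/c_exception; like
    # profile.py's dispatch table, we only track Python-level events.
    if event in ("call", "return"):
        events.append((event, frame.f_code.co_name))

def inner():
    return 1

def outer():
    return inner()

sys.setprofile(tracer)
outer()
sys.setprofile(None)

# Each 'return' must pair with the most recent unmatched 'call';
# profile.Profile asserts an equivalent condition on its own frame stack.
stack = []
for event, name in events:
    if event == "call":
        stack.append(name)
    else:
        assert stack and stack.pop() == name, "unbalanced call/return"

print(events)
```

In the report above, the profiler apparently received a return-like event for a frame it never saw a matching call for, which is exactly the condition this pairing check rejects.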
While waiting for your response, I wish you a good day.
Aaron SU
### CPython versions tested on:
3.9, 3.10, 3.11, 3.12
### Operating systems tested on:
Linux, Windows
<!-- gh-linked-prs -->
### Linked PRs
* gh-122072
* gh-122177
* gh-122205
* gh-122206
* gh-130488
* gh-130898
<!-- /gh-linked-prs -->
| e91ef13861e88c27aed51a24e58d1dcc855a01dc | 41a91bd67f86c922f350894a797738038536e1c5 |
python/cpython | python__cpython-122028 | # Parser/lexer/lexer.c:1218: int tok_get_normal_mode(struct tok_state *, tokenizer_mode *, struct token *): Assertion `current_tok->curly_bracket_depth >= 0' failed.
# Crash report
### What happened?
```
~/p/cpython ❯❯❯ ./python.exe -c "import ast; ast.literal_eval(\"F'{[F'{:'}[F'{:'}]]]\")"
Assertion failed: (current_tok->curly_bracket_depth >= 0), function tok_get_normal_mode, file lexer.c, line 1218.
fish: Job 1, './python.exe -c "import ast; as…' terminated by signal SIGABRT (Abort)
```
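For comparison, on an interpreter with the tokenizer fix (or a release build, where C assertions are compiled out), the same malformed nested f-string is expected to surface as a regular `SyntaxError` from the parser instead of aborting the process. A hedged check, with the input string copied from the repro above:

```python
import ast

src = "F'{[F'{:'}[F'{:'}]]]"

try:
    ast.literal_eval(src)
    outcome = None
except SyntaxError:
    outcome = "SyntaxError"
except ValueError:
    # literal_eval can also reject input as a "malformed node or string"
    outcome = "ValueError"

print(outcome)
```

Either exception is an acceptable user-facing outcome; the bug here is that a debug build hit a C-level assertion in the lexer instead of raising.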
Full ASAN stack:
```
==1327==ERROR: AddressSanitizer: ABRT on unknown address 0x05390000052f (pc 0x7add17cb300b bp 0x7add17e28588 sp 0x7ffcd862dbf0 T0)
SCARINESS: 10 (signal)
#0 0x7add17cb300b in raise /build/glibc-SzIz7B/glibc-2.31/sysdeps/unix/sysv/linux/raise.c:51:1
#1 0x7add17c92858 in abort /build/glibc-SzIz7B/glibc-2.31/stdlib/abort.c:79:7
#2 0x7add17c92728 in __assert_fail_base /build/glibc-SzIz7B/glibc-2.31/assert/assert.c:92:3
#3 0x7add17ca3fd5 in __assert_fail /build/glibc-SzIz7B/glibc-2.31/assert/assert.c:101:3
#4 0x5ad0da7419ae in tok_get_normal_mode cpython3/Parser/lexer/lexer.c:1218:21
#5 0x5ad0da732ea1 in tok_get cpython3/Parser/lexer/lexer.c:1483:16
#6 0x5ad0da732ea1 in _PyTokenizer_Get cpython3/Parser/lexer/lexer.c:1492:18
#7 0x5ad0da67f20e in _PyPegen_tokenize_full_source_to_check_for_errors cpython3/Parser/pegen_errors.c:178:17
#8 0x5ad0da67b0ec in _PyPegen_run_parser cpython3/Parser/pegen.c:888:9
#9 0x5ad0da67ba95 in _PyPegen_run_parser_from_string cpython3/Parser/pegen.c:992:14
#10 0x5ad0da25b0d5 in Py_CompileStringObject cpython3/Python/pythonrun.c:1435:11
#11 0x5ad0da085182 in builtin_compile_impl cpython3/Python/bltinmodule.c:878:14
#12 0x5ad0da085182 in builtin_compile cpython3/Python/clinic/bltinmodule.c.h:361:20
#13 0x5ad0da57cef2 in cfunction_vectorcall_FASTCALL_KEYWORDS cpython3/Objects/methodobject.c:441:24
#14 0x5ad0d9d9c798 in _PyObject_VectorcallTstate cpython3/Include/internal/pycore_call.h:167:11
#15 0x5ad0da0b9479 in _PyEval_EvalFrameDefault cpython3/Python/generated_cases.c.h:1647:31
#16 0x5ad0d9d9c798 in _PyObject_VectorcallTstate cpython3/Include/internal/pycore_call.h:167:11
#17 0x5ad0d9d9fd65 in PyObject_CallOneArg cpython3/Objects/call.c:395:12
#18 0x5ad0d9d9bc37 in fuzz_ast_literal_eval cpython3/Modules/_xxtestfuzz/fuzzer.c:426:25
#19 0x5ad0d9d9bc37 in _run_fuzz cpython3/Modules/_xxtestfuzz/fuzzer.c:570:14
#20 0x5ad0d9d9bc37 in LLVMFuzzerTestOneInput cpython3/Modules/_xxtestfuzz/fuzzer.c:697:11
#21 0x5ad0d9c4e040 in fuzzer::Fuzzer::ExecuteCallback(unsigned char const*, unsigned long) /src/llvm-project/compiler-rt/lib/fuzzer/FuzzerLoop.cpp:614:13
#22 0x5ad0d9c387d4 in fuzzer::RunOneTest(fuzzer::Fuzzer*, char const*, unsigned long) /src/llvm-project/compiler-rt/lib/fuzzer/FuzzerDriver.cpp:327:6
#23 0x5ad0d9c3e26a in fuzzer::FuzzerDriver(int*, char***, int (*)(unsigned char const*, unsigned long)) /src/llvm-project/compiler-rt/lib/fuzzer/FuzzerDriver.cpp:862:9
#24 0x5ad0d9c6a662 in main /src/llvm-project/compiler-rt/lib/fuzzer/FuzzerMain.cpp:20:10
#25 0x7add17c94082 in __libc_start_main /build/glibc-SzIz7B/glibc-2.31/csu/libc-start.c:308:16
#26 0x5ad0d9c2f2ad in _start
```
Regression range in https://github.com/python/cpython/compare/ece20dba120a1a4745721c49f8d7389d4b1ee2a7...6be7aee18c5b8e639103df951d0d277f4b46f902
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
_No response_
### Output from running 'python -VV' on the command line:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-122028
* gh-122041
* gh-122062
<!-- /gh-linked-prs -->
| 2009e25e26040dca32696e70f91f13665350e7fd | 186b4d8ea2fdc91bf18e8be695244ead1722af18 |