| repo | instance_id | problem_statement | merge_commit | base_commit |
|---|---|---|---|---|
python/cpython | python__cpython-100740 | # Mock spec not respected for attributes prefixed with assert
# Bug report
Example:
```python
from unittest.mock import Mock
class Foo:
    def assert_something(self):
        pass
m = Mock(spec=Foo)
m.assert_something()
```
An exception is raised:
```pytb
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/cklein/github/cklein/cpython/Lib/unittest/mock.py", line 657, in __getattr__
raise AttributeError(
AttributeError: 'assert_something' is not a valid assertion. Use a spec for the mock if 'assert_something' is meant to be an attribute.
```
Python 3.9 and lower:
```pytb
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/opt/local/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/unittest/mock.py", line 635, in __getattr__
raise AttributeError("Attributes cannot start with 'assert' "
AttributeError: Attributes cannot start with 'assert' or 'assret'
```
The error message suggests that accessing attributes with prefix "assert_" should work when using a spec.
See https://github.com/cklein/cpython/commit/735ffc4afa02e832441f563f5c551f83c98bade8 for a possible fix
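Until a fix lands, one possible workaround (a sketch relying on the documented `unsafe` keyword of `Mock`, which disables the assert-prefix guard entirely) is:

```python
from unittest.mock import Mock

class Foo:
    def assert_something(self):
        pass

# unsafe=True turns off the assert-/assret-prefix check, so the
# spec'd attribute becomes reachable again on affected versions.
m = Mock(spec=Foo, unsafe=True)
m.assert_something()  # no AttributeError; returns a child Mock
```

Note that `unsafe=True` also suppresses the typo protection for genuinely misspelled assertion methods, so it trades one safety net for another.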
# Your environment
- CPython versions tested on:
- 3.12.0a3+
- 3.11.1
- 3.10.9
- 3.9.16
- 3.8.16
- 3.7.16
- Operating system and architecture:
- Darwin 21.6.0 on arm64
- Linux 5.4.0-1088-aws on x86_64 (only Python 3.7.5 and 3.8.0)
<!-- gh-linked-prs -->
### Linked PRs
* gh-100740
* gh-100760
* gh-100761
<!-- /gh-linked-prs -->
| 7f1eefc6f4843f0fca60308f557a71af11d18a53 | 52017dbe1681a7cd4fe0e8d6fbbf81fd711a0506 |
python/cpython | python__cpython-114657 | # `Doc/whatsnew/3.{9,10,11}.rst` are out of sync on various branches
# Documentation
There are various inconsistencies between the whatsnew documents for 3.9, 3.10, and 3.11 on the `3.9`, `3.10`, `3.11`, and `main` branches. Most of the inconsistencies are trivial, but some will require some research to determine which version is the canonical text that should be synced across branches.
<!-- gh-linked-prs -->
### Linked PRs
* gh-114657
* gh-114688
* gh-114689
* gh-115526
* gh-115527
<!-- /gh-linked-prs -->
| 3bb6912d8832e6e0a98c74de360dc1b23906c4b3 | d00fbed68ffcd5823acbb32a0e47e2e5f9732ff7 |
python/cpython | python__cpython-100722 | # Add _PyFrame_NumSlotsForCodeObject()
The calculation of how many slots need to be in the frame of a given code object is repeated in a number of places in the codebase, which makes it error prone to try and change the frame layout. It should be refactored into a single place.
(lessons learnt from the register machine experiment).
<!-- gh-linked-prs -->
### Linked PRs
* gh-100722
<!-- /gh-linked-prs -->
| c31e356a10aa60b5967b9aaf80b9984059e46461 | 5fb1c08e15b864d8ea9353a0e013166e2e0e2160 |
python/cpython | python__cpython-100721 | # Code objects, function objects and generator object contain quite a lot of redundant information
There is quite a lot of redundancy in code, function, and generator objects.
### Intra-object redundancy
1. Code objects have four fields: `co_nlocalsplus`, `co_nplaincellvars`, `co_nlocals`, and `co_nfreevars`. Any one of these can be computed from the other three.
2. Code objects have a qualified name and a name. The name is always the suffix of the qualified name. Changing this to qualifying prefix and name would save space and allow sharing.
3. The defaults and keyword defaults for a function are separate and the keyword defaults are a dict. They could be combined into a single array.
4. Generator objects have a `gi_code` field, which is redundant, as the frame contains a reference to the code object.
### Inter-object redundancy
1. Functions and generators have qualified name and name fields, which are almost always the same as those of the underlying code object. These should be lazily initialized.
<!-- gh-linked-prs -->
### Linked PRs
* gh-100721
* gh-100749
<!-- /gh-linked-prs -->
| 15aecf8dd70f82eb507d74fae9662072a377bdc8 | c31e356a10aa60b5967b9aaf80b9984059e46461 |
python/cpython | python__cpython-100713 | # Make it possible to disable specialization (for debugging).
<!-- gh-linked-prs -->
### Linked PRs
* gh-100713
<!-- /gh-linked-prs -->
| e9ccfe4a636d5fe33f65cea2605c3621ffa55f19 | a1e051a23736fdf3da812363bcaf32e53a294f03 |
python/cpython | python__cpython-100728 | # "What's New In Python X.YZ" Pages Show Incorrect Release Information
# Documentation
From this page https://docs.python.org/release/3.11.1/whatsnew/index.html, if you click on 3.11 or 3.10, you see the release information. It appears incorrect on the 3.10 and 3.9 pages, and earlier releases are inconsistent (missing that same header information section entirely).
On page https://docs.python.org/release/3.11.1/whatsnew/3.11.html
```
Release 3.11.1
Date December 06, 2022
Editor Pablo Galindo Salgado
```
On page https://docs.python.org/release/3.11.1/whatsnew/3.10.html
```
Release 3.11.1
Date December 06, 2022
Editor Pablo Galindo Salgado
```
For example, the 3.10 release should say:
```
Release 3.10
Date October 04, 2021
Editor X Y Z
```
<!-- gh-linked-prs -->
### Linked PRs
* gh-100728
* gh-100729
* gh-100730
* gh-106999
<!-- /gh-linked-prs -->
| e196d8c10a669e1996140a0e594489aa9421a38b | e6d44407827490a5345e8393fbdc78fd6c14f5b1 |
python/cpython | python__cpython-100691 | # Prevent prefix "called_" for methods on mock objects in safe mode
# Pitch
In safe mode, there's already support for catching typos for accessing the assertion methods:
> By default, accessing any attribute whose name starts with assert, assret, asert, aseert or assrt will raise an AttributeError.
Given you have a valid assertion to check whether a mocked function has been called:
```
assert mock_foo.called
```
If you now want to check the arguments, and do not pay full attention, you can end up with a tautology like
```
assert mock_foo.called_once_with(param="test")
```
The issue: `mock_foo.called_once_with` is not a valid (assertion) method and therefore an instance of mock.Mock is returned. Because instances of mock.Mock evaluate to true, the assertion is equivalent to `assert True`.
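A minimal sketch of the trap described above — the typo'd "assertion" passes no matter what arguments were used:

```python
from unittest.mock import Mock

mock_foo = Mock()
mock_foo(param="other")

# Typo: `called_once_with` is not an assertion method, so attribute
# access auto-creates a child Mock, and Mock instances are truthy.
assert mock_foo.called_once_with(param="test")  # passes silently!

# The intended assertion would have caught the mismatch:
# mock_foo.assert_called_once_with(param="test")  -> AssertionError
```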
As with preventing calls to methods that start with `assert` and `assret` ([issue 21238](https://bugs.python.org/issue21238)), and the later disallowing of the typos `asert`, `aseert`, and `assrt` (https://github.com/python/cpython/pull/23165), this error will not cause a test failure.
Analyzing public repositories on github.com, the Python standard library (thanks @terryjreedy for fixing it in https://github.com/python/cpython/pull/100647), and our internal code base revealed what seems to be a common source of errors. In our own code base, we found more than 500 of these issues. More than 50% of those failed after the assertion call was fixed, meaning the bad tests could have been masking real bugs.
# Previous discussion
https://discuss.python.org/t/include-prefix-called-in-list-of-forbidden-method-prefixes-for-mock-objects-in-unsafe-mode/22249/4
<!-- gh-linked-prs -->
### Linked PRs
* gh-100691
* gh-100819
<!-- /gh-linked-prs -->
| 1d4d677d1c90fcf4886ded0bf04b8f9d5b60b909 | 9ffbc58f5cb6d2b002f8785886588d646af517db |
python/cpython | python__cpython-100745 | # Crash in _elementtree.c after #24061
<!--
Use this template for hard crashes of the interpreter, segmentation faults, failed C-level assertions, and similar.
Do not submit this form if you encounter an exception being unexpectedly raised from a Python function.
Most of the time, these should be filed as bugs, rather than crashes.
The CPython interpreter is itself written in a different programming language, C.
For CPython, a "crash" is when Python itself fails, leading to a traceback in the C stack.
-->
# Crash report
Tell us what happened, ideally including a minimal, reproducible example (https://stackoverflow.com/help/minimal-reproducible-example).
Since updating LibreELEC master from Python 3.9.15 to 3.11/3.11.1 there are several reports of crashes in _elementtree module, see https://github.com/xbmc/xbmc/issues/22344.
It is hard to reproduce, you have to set up a minimal kodi addon like:
```python
import xbmcaddon
import xbmcgui
import xbmc
import xbmcvfs
import xml.etree.ElementTree as ET
addon = xbmcaddon.Addon()
addonname = addon.getAddonInfo('name')
gpath = xbmcvfs.translatePath("special://profile/guisettings.xml")
tree = ET.parse(gpath)
root = tree.getroot()
l = root.find('.//setting[@id="locale.language"]').text
```
... and start it a few hundred to thousand times.
# Error messages
Enter any relevant error message caused by the crash, including a core dump if there is one.
Typical stack trace is:
```
Core was generated by `/usr/lib/kodi/kodi.bin --standalone -fs'.
Program terminated with signal SIGSEGV, Segmentation fault.
#0 0x0000000000000000 in ?? ()
[Current thread is 1 (Thread 0x7f13220136c0 (LWP 58189))]
[...]
Thread 1 (Thread 0x7f13220136c0 (LWP 58189)):
#0 0x0000000000000000 in ?? ()
No symbol table info available.
#1 0x00007f13505150d6 in _elementtree_XMLParser___init___impl (self=self@entry=0x7f13242102b0, target=target@entry=0x1e1e4e0 <_Py_NoneStruct>, encoding=encoding@entry=0x0) at /home/docker/LibreELEC.tv/build.LibreELEC-x11.x86_64-11.0-devel-mg-debug/build/Python3-3.11.1/Modules/_elementtree.c:3647
No locals.
#2 0x00007f1350515555 in _elementtree_XMLParser___init__ (self=0x7f13242102b0, args=<optimized out>, kwargs=<optimized out>) at /home/docker/LibreELEC.tv/build.LibreELEC-x11.x86_64-11.0-devel-mg-debug/build/Python3-3.11.1/Modules/clinic/_elementtree.c.h:845
return_value = -1
_keywords = {0x7f135051a238 "target", 0x7f135051a23f "encoding", 0x0}
_parser = {format = 0x0, keywords = 0x7f135051dd10 <_keywords.23>, fname = 0x7f1350519fc6 "XMLParser", custom_msg = 0x0, pos = 0, min = 0, max = 0, kwtuple = 0x0, next = 0x0}
argsbuf = {0x7f13241557b8, 0x7f135051ede0 <XMLParser_Type>}
fastargs = <optimized out>
nargs = <optimized out>
noptargs = <optimized out>
target = 0x1e1e4e0 <_Py_NoneStruct>
encoding = 0x0
#3 0x00007f1381ed8afa in type_call (type=<optimized out>, args=0x7f138223c2d8 <_PyRuntime+58904>, kwds=0x0) at /home/docker/LibreELEC.tv/build.LibreELEC-x11.x86_64-11.0-devel-mg-debug/build/Python3-3.11.1/Objects/typeobject.c:1112
res = <optimized out>
obj = 0x7f13242102b0
tstate = <optimized out>
#4 0x00007f1381e8e304 in _PyObject_MakeTpCall (tstate=tstate@entry=0x7f1324146e40, callable=callable@entry=0x7f135051ede0 <XMLParser_Type>, args=args@entry=0x7f13241557b8, nargs=<optimized out>, keywords=0x0) at /home/docker/LibreELEC.tv/build.LibreELEC-x11.x86_64-11.0-devel-mg-debug/build/Python3-3.11.1/Objects/call.c:214
call = 0x7f1381ed8a70 <type_call>
argstuple = 0x7f138223c2d8 <_PyRuntime+58904>
kwdict = 0x0
result = 0x0
#5 0x00007f1381e8e3bd in _PyObject_VectorcallTstate (tstate=0x7f1324146e40, callable=callable@entry=0x7f135051ede0 <XMLParser_Type>, args=args@entry=0x7f13241557b8, nargsf=<optimized out>, kwnames=kwnames@entry=0x0) at /home/docker/LibreELEC.tv/build.LibreELEC-x11.x86_64-11.0-devel-mg-debug/build/Python3-3.11.1/Include/internal/pycore_call.h:90
nargs = <optimized out>
func = <optimized out>
res = <optimized out>
#6 0x00007f1381e8e422 in PyObject_Vectorcall (callable=callable@entry=0x7f135051ede0 <XMLParser_Type>, args=args@entry=0x7f13241557b8, nargsf=<optimized out>, kwnames=kwnames@entry=0x0) at /home/docker/LibreELEC.tv/build.LibreELEC-x11.x86_64-11.0-devel-mg-debug/build/Python3-3.11.1/Objects/call.c:299
tstate = <optimized out>
#7 0x00007f1381f3d170 in _PyEval_EvalFrameDefault (tstate=0x7f1324146e40, frame=0x7f1324155738, throwflag=<optimized out>) at /home/docker/LibreELEC.tv/build.LibreELEC-x11.x86_64-11.0-devel-mg-debug/build/Python3-3.11.1/Python/ceval.c:4772
is_meth = 0
total_args = 0
function = 0x7f135051ede0 <XMLParser_Type>
positional_args = <optimized out>
res = <optimized out>
__func__ = "_PyEval_EvalFrameDefault"
opcode = <optimized out>
oparg = 0
eval_breaker = 0x7f132412cb24
cframe = {use_tracing = 0 '\000', current_frame = 0x7f1324155738, previous = 0x7f1324146f90}
call_shape = <optimized out>
prev_cframe = <optimized out>
names = 0x7f132426a5b0
consts = 0x7f13240351b0
first_instr = 0x7f132426a898
next_instr = 0x7f132426a8fa
stack_pointer = 0x7f13241557b8
exception_unwind = <optimized out>
dying = <optimized out>
#8 0x00007f1381f3f624 in _PyEval_EvalFrame (tstate=tstate@entry=0x7f1324146e40, frame=frame@entry=0x7f1324155650, throwflag=throwflag@entry=0) at /home/docker/LibreELEC.tv/build.LibreELEC-x11.x86_64-11.0-devel-mg-debug/build/Python3-3.11.1/Include/internal/pycore_ceval.h:73
No locals.
#9 0x00007f1381f3f6ff in _PyEval_Vector (tstate=tstate@entry=0x7f1324146e40, func=func@entry=0x7f13240f8d00, locals=locals@entry=0x7f132403ffb0, args=args@entry=0x0, argcount=argcount@entry=0, kwnames=kwnames@entry=0x0) at /home/docker/LibreELEC.tv/build.LibreELEC-x11.x86_64-11.0-devel-mg-debug/build/Python3-3.11.1/Python/ceval.c:6435
frame = 0x7f1324155650
retval = <optimized out>
#10 0x00007f1381f3f7c3 in PyEval_EvalCode (co=co@entry=0x7f13240a2f00, globals=globals@entry=0x7f132403ffb0, locals=locals@entry=0x7f132403ffb0) at /home/docker/LibreELEC.tv/build.LibreELEC-x11.x86_64-11.0-devel-mg-debug/build/Python3-3.11.1/Python/ceval.c:1154
tstate = 0x7f1324146e40
builtins = 0x7f1324126100
desc = {fc_globals = 0x7f132403ffb0, fc_builtins = 0x7f1324126100, fc_name = 0x7f1382233150 <_PyRuntime+21648>, fc_qualname = 0x7f1382233150 <_PyRuntime+21648>, fc_code = 0x7f13240a2f00, fc_defaults = 0x0, fc_kwdefaults = 0x0, fc_closure = 0x0}
func = 0x7f13240f8d00
res = <optimized out>
#11 0x00007f1381f772f7 in run_eval_code_obj (tstate=tstate@entry=0x7f1324146e40, co=co@entry=0x7f13240a2f00, globals=globals@entry=0x7f132403ffb0, locals=locals@entry=0x7f132403ffb0) at /home/docker/LibreELEC.tv/build.LibreELEC-x11.x86_64-11.0-devel-mg-debug/build/Python3-3.11.1/Python/pythonrun.c:1714
v = <optimized out>
#12 0x00007f1381f773bd in run_mod (mod=mod@entry=0x7f132427ebb8, filename=filename@entry=0x7f1324197350, globals=globals@entry=0x7f132403ffb0, locals=locals@entry=0x7f132403ffb0, flags=flags@entry=0x0, arena=arena@entry=0x7f1324032d00) at /home/docker/LibreELEC.tv/build.LibreELEC-x11.x86_64-11.0-devel-mg-debug/build/Python3-3.11.1/Python/pythonrun.c:1735
tstate = 0x7f1324146e40
co = 0x7f13240a2f00
v = <optimized out>
#13 0x00007f1381f7746d in pyrun_file (fp=fp@entry=0x7f13240a2f00, filename=filename@entry=0x7f1324197350, start=start@entry=257, globals=globals@entry=0x7f132403ffb0, locals=locals@entry=0x7f132403ffb0, closeit=closeit@entry=1, flags=0x0) at /home/docker/LibreELEC.tv/build.LibreELEC-x11.x86_64-11.0-devel-mg-debug/build/Python3-3.11.1/Python/pythonrun.c:1630
arena = 0x7f1324032d00
mod = 0x7f132427ebb8
ret = <optimized out>
#14 0x00007f1381f79f73 in PyRun_FileExFlags (fp=0x7f13240a2f00, filename=<optimized out>, start=257, globals=0x7f132403ffb0, locals=0x7f132403ffb0, closeit=1, flags=0x0) at /home/docker/LibreELEC.tv/build.LibreELEC-x11.x86_64-11.0-devel-mg-debug/build/Python3-3.11.1/Python/pythonrun.c:1650
filename_obj = 0x7f1324197350
res = <optimized out>
#15 0x0000000000d5ac60 in CPythonInvoker::executeScript (this=<optimized out>, fp=<optimized out>, script=..., moduleDict=<optimized out>) at /home/docker/LibreELEC.tv/build.LibreELEC-x11.x86_64-11.0-devel-mg-debug/build/kodi-20.0rc2-Nexus/xbmc/interfaces/python/PythonInvoker.cpp:428
m_Py_file_input = 257
#16 0x0000000000d5bfb0 in CPythonInvoker::execute (this=this@entry=0x3bf31e0, script=..., arguments=...) at /home/docker/LibreELEC.tv/build.LibreELEC-x11.x86_64-11.0-devel-mg-debug/build/kodi-20.0rc2-Nexus/xbmc/interfaces/python/PythonInvoker.cpp:319
f = 0x7f1324197280
pycontext = <optimized out>
pyRealFilename = <optimized out>
fp = 0x7f13240a2f00
pythonPath = {_M_t = {_M_impl = {<std::allocator<std::_Rb_tree_node<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > >> = {<std::__new_allocator<std::_Rb_tree_node<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > >> = {<No data fields>}, <No data fields>}, <std::_Rb_tree_key_compare<std::less<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > >> = {_M_key_compare = {<std::binary_function<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, bool>> = {<No data fields>}, <No data fields>}}, <std::_Rb_tree_header> = {_M_header = {_M_color = std::_S_red, _M_parent = 0x7f1324197f80, _M_left = 0x7f1324197f80, _M_right = 0x7f1324197f80}, _M_node_count = 1}, <No data fields>}}}
realFilename = {static npos = 18446744073709551615, _M_dataplus = {<std::allocator<char>> = {<std::__new_allocator<char>> = {<No data fields>}, <No data fields>}, _M_p = 0x7f13242769f0 "/storage/.kodi/addons/script.hello.world/addon.py"}, _M_string_length = 49, {_M_local_buf = "1\000\000\000\000\000\000\000\374d\033\177\023\177\000", _M_allocated_capacity = 49}}
scriptDir = {static npos = 18446744073709551615, _M_dataplus = {<std::allocator<char>> = {<std::__new_allocator<char>> = {<No data fields>}, <No data fields>}, _M_p = 0x7f1324100e70 "/storage/.kodi/addons/script.hello.world"}, _M_string_length = 40, {_M_local_buf = ")\000\000\000\000\000\000\000P\250\233\003\000\000\000", _M_allocated_capacity = 41}}
l_threadState = <optimized out>
newInterp = <optimized out>
sysArgv = <optimized out>
module = <optimized out>
moduleDict = 0x7f132403ffb0
stopping = false
failed = false
exceptionType = {static npos = 18446744073709551615, _M_dataplus = {<std::allocator<char>> = {<std::__new_allocator<char>> = {<No data fields>}, <No data fields>}, _M_p = 0x7f13220125b0 ""}, _M_string_length = 0, {_M_local_buf = "\000(\001\"\023\177\000\000\b\000\000\000\000\000\000", _M_allocated_capacity = 139720151607296}}
exceptionValue = {static npos = 18446744073709551615, _M_dataplus = {<std::allocator<char>> = {<std::__new_allocator<char>> = {<No data fields>}, <No data fields>}, _M_p = 0x7f1322012590 ""}, _M_string_length = 0, {_M_local_buf = "\000\263\233\003\000\000\000\000$\000\000\000\000\000\000", _M_allocated_capacity = 60535552}}
exceptionTraceback = {static npos = 18446744073709551615, _M_dataplus = {<std::allocator<char>> = {<std::__new_allocator<char>> = {<No data fields>}, <No data fields>}, _M_p = 0x7f1322012570 ""}, _M_string_length = 0, {_M_local_buf = "\000\000\000$\023\177\000\000X\264\233\003\000\000\000", _M_allocated_capacity = 139720185085952}}
stateToSet = <optimized out>
lock = {_M_device = 0x0, _M_owns = 146}
__PRETTY_FUNCTION__ = "bool CPythonInvoker::execute(const std::string&, std::vector<std::__cxx11::basic_string<wchar_t> >&)"
#17 0x0000000000d5c64b in CPythonInvoker::execute (this=0x3bf31e0, script=..., arguments=...) at /home/docker/LibreELEC.tv/build.LibreELEC-x11.x86_64-11.0-devel-mg-debug/build/kodi-20.0rc2-Nexus/xbmc/interfaces/python/PythonInvoker.cpp:140
w_arguments = {<std::_Vector_base<std::__cxx11::basic_string<wchar_t, std::char_traits<wchar_t>, std::allocator<wchar_t> >, std::allocator<std::__cxx11::basic_string<wchar_t, std::char_traits<wchar_t>, std::allocator<wchar_t> > > >> = {_M_impl = {<std::allocator<std::__cxx11::basic_string<wchar_t, std::char_traits<wchar_t>, std::allocator<wchar_t> > >> = {<std::__new_allocator<std::__cxx11::basic_string<wchar_t, std::char_traits<wchar_t>, std::allocator<wchar_t> > >> = {<No data fields>}, <No data fields>}, <std::_Vector_base<std::__cxx11::basic_string<wchar_t, std::char_traits<wchar_t>, std::allocator<wchar_t> >, std::allocator<std::__cxx11::basic_string<wchar_t, std::char_traits<wchar_t>, std::allocator<wchar_t> > > >::_Vector_impl_data> = {_M_start = 0x7f1324047810, _M_finish = 0x7f1324047830, _M_end_of_storage = 0x7f1324047830}, <No data fields>}}, <No data fields>}
#18 0x00000000015aedf4 in ILanguageInvoker::Execute (this=this@entry=0x3bf31e0, script=..., arguments=...) at /home/docker/LibreELEC.tv/build.LibreELEC-x11.x86_64-11.0-devel-mg-debug/build/kodi-20.0rc2-Nexus/xbmc/interfaces/generic/ILanguageInvoker.cpp:29
No locals.
#19 0x0000000000d5ca9b in CPythonInvoker::Execute (this=0x3bf31e0, script=..., arguments=...) at /home/docker/LibreELEC.tv/build.LibreELEC-x11.x86_64-11.0-devel-mg-debug/build/kodi-20.0rc2-Nexus/xbmc/interfaces/python/PythonInvoker.cpp:128
No locals.
#20 0x00000000015af31c in CLanguageInvokerThread::Process (this=0x3ba5c10) at /home/docker/LibreELEC.tv/build.LibreELEC-x11.x86_64-11.0-devel-mg-debug/build/kodi-20.0rc2-Nexus/xbmc/interfaces/generic/LanguageInvokerThread.cpp:107
lckdl = {_M_device = 0x3ba5e70, _M_owns = true}
#21 0x000000000101d4ec in CThread::Action (this=this@entry=0x3ba5c38) at /home/docker/LibreELEC.tv/build.LibreELEC-x11.x86_64-11.0-devel-mg-debug/build/kodi-20.0rc2-Nexus/xbmc/threads/Thread.cpp:267
No locals.
#22 0x000000000101d807 in operator() (__closure=<optimized out>, pThread=0x3ba5c38, promise=...) at /home/docker/LibreELEC.tv/build.LibreELEC-x11.x86_64-11.0-devel-mg-debug/build/kodi-20.0rc2-Nexus/xbmc/threads/Thread.cpp:138
name = {static npos = 18446744073709551615, _M_dataplus = {<std::allocator<char>> = {<std::__new_allocator<char>> = {<No data fields>}, <No data fields>}, _M_p = 0x7f13220129d0 "LanguageInvoker"}, _M_string_length = 15, {_M_local_buf = "LanguageInvoker", _M_allocated_capacity = 7306916077306274124}}
autodelete = false
ss = <incomplete type>
id = {static npos = 18446744073709551615, _M_dataplus = {<std::allocator<char>> = {<std::__new_allocator<char>> = {<No data fields>}, <No data fields>}, _M_p = 0x7f13220129f0 "139720151611072"}, _M_string_length = 15, {_M_local_buf = "139720151611072", _M_allocated_capacity = 3832897750101996337}}
__FUNCTION__ = "operator()"
#23 0x000000000101d9f4 in std::__invoke_impl<void, CThread::Create(bool)::<lambda(CThread*, std::promise<bool>)>, CThread*, std::promise<bool> >(std::__invoke_other, struct {...} &&) (__f=...) at /home/docker/LibreELEC.tv/build.LibreELEC-x11.x86_64-11.0-devel-mg-debug/toolchain/x86_64-libreelec-linux-gnu/include/c++/12.2.0/bits/invoke.h:61
No locals.
#24 0x000000000101da2d in std::__invoke<CThread::Create(bool)::<lambda(CThread*, std::promise<bool>)>, CThread*, std::promise<bool> > (__fn=...) at /home/docker/LibreELEC.tv/build.LibreELEC-x11.x86_64-11.0-devel-mg-debug/toolchain/x86_64-libreelec-linux-gnu/include/c++/12.2.0/bits/invoke.h:96
No locals.
#25 std::thread::_Invoker<std::tuple<CThread::Create(bool)::<lambda(CThread*, std::promise<bool>)>, CThread*, std::promise<bool> > >::_M_invoke<0, 1, 2> (this=<optimized out>) at /home/docker/LibreELEC.tv/build.LibreELEC-x11.x86_64-11.0-devel-mg-debug/toolchain/x86_64-libreelec-linux-gnu/include/c++/12.2.0/bits/std_thread.h:252
No locals.
#26 std::thread::_Invoker<std::tuple<CThread::Create(bool)::<lambda(CThread*, std::promise<bool>)>, CThread*, std::promise<bool> > >::operator() (this=<optimized out>) at /home/docker/LibreELEC.tv/build.LibreELEC-x11.x86_64-11.0-devel-mg-debug/toolchain/x86_64-libreelec-linux-gnu/include/c++/12.2.0/bits/std_thread.h:259
No locals.
#27 std::thread::_State_impl<std::thread::_Invoker<std::tuple<CThread::Create(bool)::<lambda(CThread*, std::promise<bool>)>, CThread*, std::promise<bool> > > >::_M_run(void) (this=<optimized out>) at /home/docker/LibreELEC.tv/build.LibreELEC-x11.x86_64-11.0-devel-mg-debug/toolchain/x86_64-libreelec-linux-gnu/include/c++/12.2.0/bits/std_thread.h:210
No locals.
#28 0x00007f137f1e1403 in ?? () from /usr/lib/libstdc++.so.6
No symbol table info available.
#29 0x00007f137f3a92c0 in ?? () from /usr/lib/libc.so.6
No symbol table info available.
#30 0x00007f137f4227cc in ?? () from /usr/lib/libc.so.6
No symbol table info available.
rax 0x7f13242a8110 139720187871504
rbx 0x7f13242102b0 139720187249328
rcx 0x0 0
rdx 0x7f1350519f5a 139720928632666
rsi 0x7f135051edc0 139720928652736
rdi 0x0 0
rbp 0x1e1e4e0 0x1e1e4e0 <_Py_NoneStruct>
rsp 0x7f1322012108 0x7f1322012108
r8 0x0 0
r9 0x7f13242ac260 139720187888224
r10 0x200f422ed6206fb3 2310137902792732595
r11 0x202 514
r12 0x0 0
r13 0x0 0
r14 0x0 0
r15 0x7f1381ed8a70 139721760934512
rip 0x0 0x0
eflags 0x10202 [ IF RF ]
cs 0x33 51
ss 0x2b 43
ds 0x0 0
es 0x0 0
fs 0x0 0
gs 0x0 0
```
# Your environment
<!-- Include as many relevant details as possible about the environment you experienced the bug in -->
- CPython versions tested on: 3.11.1
- Operating system and architecture: LibreELEC 11 nightly x86_64
<!--
You can freely edit this text. Remove any lines you believe are unnecessary.
-->
# Conclusion
With #24061 the expat_CAPI is allocated on the heap, but _elementtree.c still uses a [static reference](https://github.com/python/cpython/blob/v3.11.1/Modules/_elementtree.c#L3039) to a possibly already-freed structure.
Reverting #24061 solves the issue for me. A proper fix from someone with more CPython experience should move `*expat_capi` to the heap too.
<!-- gh-linked-prs -->
### Linked PRs
* gh-100745
* gh-100846
* gh-100847
<!-- /gh-linked-prs -->
| b034fd3e5926c63a681a211087b4c666834c7525 | 53455a319f3f2e5609fca2a313ea356fba318665 |
python/cpython | python__cpython-100678 | # Improve description for `venv --upgrade-deps`
The current description comes out as `Upgrade core dependencies: pip setuptools to the latest version in PyPI`. The lack of comma between "pip" and "setuptools" reads slightly odd to me. I think it's because `" ".join()` is used instead of `", ".join()`.
https://github.com/python/cpython/blob/edfbf56f4ca6588dfd20b53f534a4465e43c82bd/Lib/venv/__init__.py#L526
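The difference is easy to see in isolation (the dependency names below are illustrative, mirroring the core deps mentioned in the message):

```python
deps = ("pip", "setuptools")  # illustrative core dependency names

# The current separator runs the names together:
current = "Upgrade core dependencies: %s to the latest version in PyPI" % " ".join(deps)
# Using ", ".join() reads naturally:
proposed = "Upgrade core dependencies: %s to the latest version in PyPI" % ", ".join(deps)

print(current)   # "... pip setuptools ..."
print(proposed)  # "... pip, setuptools ..."
```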
<!-- gh-linked-prs -->
### Linked PRs
* gh-100678
<!-- /gh-linked-prs -->
| 9dee9731663d670652586c929190f227ab56bd8f | 7feb6d2f85d69fbabfc0598d8947124883167f12 |
python/cpython | python__cpython-100701 | # typing.get_type_hints documentation claims it no longer includes base class type hints.
# Documentation
The `typing.get_type_hints` documentation currently says
> Changed in version 3.10: Calling get_type_hints() on a class no longer returns the annotations of its base classes.
This is incorrect. No such change was made in 3.10, and no such change should be made. The documentation was changed erroneously when someone mixed up the `typing.get_type_hints` behavior with an [unrelated `__annotations__` change](https://github.com/python/cpython/issues/99535).
This note should be removed from the `typing.get_type_hints` documentation.
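The actual (and unchanged) behavior is straightforward to verify — `get_type_hints()` on a class does include base-class hints, while the unrelated `__annotations__` attribute does not:

```python
from typing import get_type_hints

class Base:
    x: int

class Child(Base):
    y: str

# Base-class hints are merged in, contrary to the erroneous note.
assert get_type_hints(Child) == {"x": int, "y": str}
# __annotations__ only reflects the class's own annotations.
assert Child.__annotations__ == {"y": str}
```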
<!-- gh-linked-prs -->
### Linked PRs
* gh-100701
* gh-100823
* gh-100826
<!-- /gh-linked-prs -->
| deaf090699a7312cccb0637409f44de3f382389b | 7116030a25f7dd2140ef3e889f3f5471334d6d0b |
python/cpython | python__cpython-100960 | # Clarify how sqlite3 maps parameters onto placeholders
> agree; you might want to clarify though that the use of "qmark" or "named" is detected automatically on a per-statement basis (provided my understanding of that is correct).
Yes, we should definitely clarify how parameters are interpreted and mapped to the placeholders.
`sqlite3` does not check if you use the "qmark" or "named" style (or any other style FWIW[^1]); it only looks at the type of the params supplied:
1. If a dict or dict subclass is supplied, the named style is assumed and you'll get an error if a named parameter is not provided by the supplied dict.
2. If an exact tuple, an exact list, or a sequence (that is not a dict or dict subclass) is supplied, the qmark style[^2] is assumed. This means that `sqlite3` iterates over the params and blindly assigns placeholder 1[^3] the first item in the supplied sequence, and so on. This also happens if you use named placeholders and supply, for example, a list. Try it and be surprised. Now, that bug may be too old to be fixed; there's bound to be some code out there that depends on this exact bug. We might be able to introduce a warning and then change the behaviour after a few release cycles, but such a breaking change/bugfix will need a broader discussion.
[^1]: try for example `cx.execute("select ?2, ?1", ['first', 'second'])`; the SQLite numeric style, which is not PEP-249-compatible, is accepted and correctly applied
[^2]: called nameless in SQLite speak
[^3]: SQLite placeholders use one-based indices
_Originally posted by @erlend-aasland in https://github.com/python/cpython/pull/100630#discussion_r1059800444_
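A small sketch of the dispatch described above — a dict selects named-style binding, a sequence selects positional binding, and the (non-DB-API) numeric style from footnote 1 is accepted too:

```python
import sqlite3

cx = sqlite3.connect(":memory:")

# A dict (or dict subclass) -> named style.
row = cx.execute("SELECT :a, :b", {"a": 1, "b": 2}).fetchone()
assert row == (1, 2)

# A tuple, list, or other non-dict sequence -> qmark (positional) style.
row = cx.execute("SELECT ?, ?", (1, 2)).fetchone()
assert row == (1, 2)

# SQLite's numeric style, with one-based indices, is applied correctly.
row = cx.execute("SELECT ?2, ?1", ("first", "second")).fetchone()
assert row == ("second", "first")
```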
<!-- gh-linked-prs -->
### Linked PRs
* gh-100960
* gh-101044
* gh-101045
<!-- /gh-linked-prs -->
| 206f05a46b426eb374f724f8e7cd42f2f9643bb8 | 124af17b6e49f0f22fbe646fb57800393235d704 |
python/cpython | python__cpython-100650 | # The `native_thread_id` field of `PyThreadState` is not updated after fork
# Bug report
The `native_thread_id` field of the `PyThreadState` object is not updated after a fork on Linux (at least). This means that child processes spawned by the main thread of the parent process will have a main thread with the parent thread ID.
The `native_thread_id` is meant to be consumed by tools like [Austin](https://github.com/p403n1x87/austin) and therefore the behaviour is easily observed with these tools. One way to reproduce this is to profile this with Austin
~~~python
import multiprocessing
def fact(n):
    f = 1
    for i in range(1, n + 1):
        f *= i
    return f

def do(N):
    n = 1
    for _ in range(N):
        fact(n)
        n += 1

if __name__ == "__main__":
    import sys

    try:
        nproc = int(sys.argv[1])
    except Exception:
        nproc = 2

    processes = []
    for _ in range(nproc):
        process = multiprocessing.Process(target=do, args=(3000,))
        process.start()
        processes.append(process)

    for process in processes:
        process.join(timeout=5)
~~~
and observe that the reported thread IDs coincide with the parent's PID.
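Note that `threading.get_native_id()` queries the OS on every call, so it is not affected by the bug; the stale value lives only in the C-level `PyThreadState` field. A quick POSIX-only sanity check of that distinction (skipped where `os.fork` is unavailable):

```python
import os
import threading

if hasattr(os, "fork"):
    parent_tid = threading.get_native_id()
    pid = os.fork()
    if pid == 0:
        # Child: the live OS query returns the child's own thread id,
        # even while tstate->native_thread_id still holds the parent's.
        ok = threading.get_native_id() != parent_tid
        os._exit(0 if ok else 1)
    _, status = os.waitpid(pid, 0)
    assert os.waitstatus_to_exitcode(status) == 0
```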
# Your environment
- CPython versions tested on: 3.11.1
- Operating system and architecture: Ubuntu 22.04 (amd64)
<!-- gh-linked-prs -->
### Linked PRs
* gh-100650
* gh-100660
<!-- /gh-linked-prs -->
| d52d4942cfdd52a50f88b87b1ff2a67375dbcf47 | e83f88a455261ed53530a960f1514ab7af7d2e82 |
python/cpython | python__cpython-100663 | # `__sizeof__` incorrectly ignores the size 1 array in PyVarObjects (bool, int, tuple) when ob_size is 0
# Bug report
`tuple.__sizeof__`, `int.__sizeof__`, and `bool.__sizeof__` incorrectly return a size computed as if the `ob_item` / `ob_digit` array had size 0 when `ob_size` is 0, which is not true.
```py
print((0).__sizeof__())
>> 24
print((1).__sizeof__())
>> 28
```
This result of `__sizeof__` suggests that `int(0)` has an `ob_digit` array of size 0 and is thus 4 bytes smaller (given an array element type of c_uint32). However, this is not correct.
https://github.com/python/cpython/blob/3.11/Include/cpython/longintrepr.h#L79-L82
Code paths for the creation of `int(0)` and other PyVarObject types all initialize with an array of size 1. Thus the struct of `int(0)` holds a c_uint32 array of size 1, with element [0] equal to 0.
The result `24` of `(0).__sizeof__()` suggests that this array does not exist for sizing purposes. This seems to be misleading for performance calculations on memory size, and creates unsafe scenarios when moving memory.
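The discrepancy can be observed directly; the exact numbers are platform- and version-dependent (the comments below assume a 64-bit build with 4-byte digits), so treat this as an illustration rather than a specification:

```python
# If __sizeof__ counted the always-present size-1 ob_digit array,
# (0).__sizeof__() would equal (1).__sizeof__(); the reported bug
# makes it 4 bytes smaller instead on affected versions.
delta = (1).__sizeof__() - (0).__sizeof__()
assert delta in (0, 2, 4)  # 0 once fixed; 4 (or 2 on 15-bit digits) before

# Tuples grow by one PyObject* slot (8 bytes on 64-bit) per element.
tdelta = (1,).__sizeof__() - ().__sizeof__()
assert tdelta == 8
```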
## Implementations of `__sizeof__`
**(int)** (bool also inherits this)
https://github.com/python/cpython/blob/main/Objects/longobject.c#L5876-L5884
```c
res = offsetof(PyLongObject, ob_digit) + Py_ABS(Py_SIZE(self))*sizeof(digit);
```
**(tuple)**
https://github.com/python/cpython/blob/main/Objects/typeobject.c#L5929-L5942
# Your environment
- CPython versions tested on: 3.11.1 and main branch
- Operating system and architecture: Windows and macOS
<!-- gh-linked-prs -->
### Linked PRs
* gh-100663
* gh-100717
<!-- /gh-linked-prs -->
| d7e7f79ca7c2029e46a06d21a7a5abea631b5d13 | 9dee9731663d670652586c929190f227ab56bd8f |
python/cpython | python__cpython-100638 | # [tutorial/classes.html] odds and ends dataclasses sample code typo dataclasses
# Documentation
Open https://docs.python.org/3.12/tutorial/classes.html#odds-and-ends
There is a typo in the sample code: `from dataclasses import dataclasses` should be `from dataclasses import dataclass`.
```python
from dataclasses import dataclasses
@dataclass
class Employee:
name: str
dept: str
salary: int
```
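For reference, with the import fixed the sample runs as intended (example values below are illustrative):

```python
from dataclasses import dataclass

@dataclass
class Employee:
    name: str
    dept: str
    salary: int

# The generated __init__ and __repr__ now work as the tutorial intends.
e = Employee(name="ada", dept="eng", salary=1)
print(e)
```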
It was introduced in this [PR](https://github.com/python/cpython/pull/100498/files#diff-cce67fcd47f7f2eb0e7c82b019417844fa21306a87e3fef7719e6fd1bd1f3774R744).
<!-- gh-linked-prs -->
### Linked PRs
* gh-100638
* gh-100639
* gh-100640
<!-- /gh-linked-prs -->
| 98308dbeb110198ebe28bdb7720d3671b3e7f57b | 636e9dd23f88c701eecf91156835fe0fc8b1feb6 |
python/cpython | python__cpython-24961 | # Document 'attr' parameter for window.vline() in curses module
# Documentation
Function [window.vline](https://docs.python.org/3/library/curses.html#curses.window.vline) is missing the `attr` parameter, which defaults to `A_NORMAL` (this is already documented).
<!-- gh-linked-prs -->
### Linked PRs
* gh-24961
* gh-100625
* gh-100626
<!-- /gh-linked-prs -->
| 9ddc388527477afdae17252628d63138631072ba | 42f7a00ae8b6b3fa09115e24b9512216c6c8978e |
python/cpython | python__cpython-100601 | # Test `test_bpo_45813_2` in `test_coroutines` generates a "coroutine was never awaited" warning
While playing around with https://github.com/python/cpython/pull/100582 and looking at build/test logs, I've noticed that there's this warning:
```
» ./python.exe -m test -v test_coroutines -m test_bpo_45813_2
== CPython 3.12.0a3+ (heads/issue-100577-dirty:43e3659e33, Dec 28 2022, 14:38:55) [Clang 11.0.0 (clang-1100.0.33.16)]
== macOS-10.14.6-x86_64-i386-64bit little-endian
== Python build: debug
== cwd: /Users/sobolev/Desktop/cpython/build/test_python_45783æ
== CPU count: 4
== encodings: locale=UTF-8, FS=utf-8
0:00:00 load avg: 1.32 Run tests sequentially
0:00:00 load avg: 1.32 [1/1] test_coroutines
test_bpo_45813_2 (test.test_coroutines.CoroutineTest.test_bpo_45813_2)
This would crash the interpreter in 3.11a2 ... /Users/sobolev/Desktop/cpython/Lib/test/test_coroutines.py:2217: RuntimeWarning: coroutine 'CoroutineTest.test_bpo_45813_2.<locals>.f' was never awaited
if method() is not None:
RuntimeWarning: Enable tracemalloc to get the object allocation traceback
ok
```
I propose to fix it; there's no need for this warning. PR is incoming.
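For context, a minimal sketch (not the test in question) of how such a warning arises, and the usual fix of explicitly closing an intentionally unawaited coroutine:

```python
import gc
import warnings

async def f():
    pass

# Discarding the coroutine object triggers the warning when it is collected.
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    f()            # coroutine created, never awaited
    gc.collect()
assert any("never awaited" in str(w.message) for w in caught)

# The fix pattern: close the coroutine instead of discarding it.
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    f().close()    # no warning is emitted
    gc.collect()
assert not caught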
<!-- gh-linked-prs -->
### Linked PRs
* gh-100601
* gh-100605
<!-- /gh-linked-prs -->
| 76856366d3ece34c3e738f7167329e97bbf52b34 | f10f503b24a35a43910a632ee50c7568bedd6664 |
python/cpython | python__cpython-100586 | # importlib.resources.as_file is leaving temporary file pointers open
# Bug report
importlib.resources.as_file is leaving temporary file pointers open after writing their contents
see `_write_contents` function in importlib/resources/_common.py
Easy to repeat, just run the test case below with `-We`
`Lib.test.test_importlib.resources.test_resource.ResourceFromZipsTest01.test_as_file_directory`
# Your environment
- CPython versions tested on: Python 3.12.0a3+
- Operating system and architecture: Ubuntu 22.04.1 LTS
I think it just needs to use `Path.write_bytes` and keep the file pointer closed.
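A sketch of that suggestion (file name below is illustrative, not the actual `_write_contents` code): `Path.write_bytes` opens and closes the file internally, so no file object is left open:

```python
import tempfile
from pathlib import Path

with tempfile.TemporaryDirectory() as tmp:
    target = Path(tmp) / "resource.bin"   # illustrative name
    target.write_bytes(b"payload")        # opens, writes, and closes the file
    assert target.read_bytes() == b"payload"
```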
<!-- gh-linked-prs -->
### Linked PRs
* gh-100586
<!-- /gh-linked-prs -->
| f10f503b24a35a43910a632ee50c7568bedd6664 | cf1c09818032df3080c2cd9e7edb5f657213dc83 |
python/cpython | python__cpython-100590 | # replace pydoc with pydoc3
# Documentation
The [3.11 docs](https://docs.python.org/3.11/library/pydoc.html) and [3.10 docs](https://docs.python.org/3.10/library/pydoc.html) state that pydoc can be invoked as a script, giving as example`pydoc sys`. However, there's no such command after installing those versions on macOS: there's only `pydoc3`.
BTW, I also have Python 3.8 installed with Anaconda and it creates symlinks pydoc and pydoc3 both pointing to pydoc3.8.
It thus seems either the documentation should be updated to replace pydoc with pydoc3 or the 3.10 and 3.11 installers should create a pydoc symlink.
<!-- gh-linked-prs -->
### Linked PRs
* gh-100590
* gh-100606
* gh-100607
<!-- /gh-linked-prs -->
| 7223d50b9785bc7b0cd76dcc68d97dabcbade4b6 | 76856366d3ece34c3e738f7167329e97bbf52b34 |
python/cpython | python__cpython-100579 | # Replace `assert(0)` with `Py_UNREACHABLE` in `Python/symtable.c`
While looking at https://github.com/python/cpython/issues/87447 I've noticed that there are three places in `cpython` where `assert(0)` is used instead of `Py_UNREACHABLE` macro:
https://github.com/python/cpython/blob/984894a9a25c0f8298565b0c0c2e1f41917e4f88/Python/symtable.c#L1539-L1542
As the docs say:
> Use this when you have a code path that cannot be reached by design.
> For example, in the ``default:`` clause in a ``switch`` statement for which
> all possible values are covered in ``case`` statements. Use this in places
> where you might be tempted to put an ``assert(0)`` or ``abort()`` call.
The intent with `Py_UNREACHABLE` is clearer and error message is nicer.
<!-- gh-linked-prs -->
### Linked PRs
* gh-100579
<!-- /gh-linked-prs -->
| 5e1adb4f8861f2a5969952d24c8ad0ce8ec0a8ec | 457c1f4a19a096a52d6553687c7c4cee415818dc |
python/cpython | python__cpython-100575 | # Add `strptime`/`strftime` format code examples
Currently the docs for [`datetime.strptime`](https://docs.python.org/3/library/datetime.html#datetime.datetime.strptime) and [`datetime.strftime`](https://docs.python.org/3/library/datetime.html#datetime.date.strftime) don't include an example but have a link to the [table with the format codes](https://docs.python.org/3/library/datetime.html#strftime-and-strptime-format-codes), where I end up having to reconstruct the right combination every time. At the end of each class docs there are some examples, but there are no links pointing to them, so they aren't easily discoverable. In addition, the `.(from)isoformat` methods can often be used instead of `strftime`/`strptime`.
I therefore suggest the following changes:
1. add an example before the [table with the format codes](https://docs.python.org/3/library/datetime.html#strftime-and-strptime-format-codes), showing some of the most common codes;
2. add links to `(date|time|datetime).isoformat` in the docs for `(date|time|datetime).strptime`;
3. add links to `datetime.fromisoformat` in the docs for `datetime.strptime` (`date` and `time` have a `.fromisoformat` method, but no `.strptime` method);
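The kind of example suggested in point 1 might look like this (the format codes chosen here are just for illustration):

```python
from datetime import datetime

dt = datetime(2022, 12, 28, 14, 30)
text = dt.strftime("%Y-%m-%d %H:%M")
assert text == "2022-12-28 14:30"
assert datetime.strptime(text, "%Y-%m-%d %H:%M") == dt

# For ISO 8601 strings, fromisoformat avoids format codes entirely:
assert datetime.fromisoformat("2022-12-28 14:30") == dt
```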
<!-- gh-linked-prs -->
### Linked PRs
* gh-100575
* gh-103368
<!-- /gh-linked-prs -->
| 3310b94d3db2f477cf2b8789c30ac0f22f82d2dd | 1e9dfdacefa2c8c27762ba6491b0f570147ee355 |
python/cpython | python__cpython-100959 | # Calling os.stat() on a named pipe used by asyncio.ProactorEventLoop.start_serving_pipe() will raise OSError
# Bug report
On Windows, several calls to `os.stat()` on the named pipe used by `asyncio.ProactorEventLoop.start_serving_pipe()` will result in an `OSError`. At this time, the first call to `os.stat()` will cause a `BrokenPipeError` on the server side.
## Steps to reproduce
Run the pipe server in Command Prompt:
```
>type pipe_server.py
import asyncio

async def start_server():
    pipe = r'\\.\pipe\_test_pipe_server'
    loop = asyncio.get_event_loop()
    await loop.start_serving_pipe(asyncio.Protocol, pipe)
    await asyncio.sleep(3600)

asyncio.run(start_server())

>python pipe_server.py
```
Run the following script as a client in another Command Prompt:
```
>type pipe_stat.py
import os

pipe = r'\\.\pipe\_test_pipe_server'
os.stat(pipe)

>python pipe_stat.py
```
The following messages are displayed on the server side:
```
Pipe accept failed
pipe: <PipeHandle handle=368>
Traceback (most recent call last):
  File "d:\python_build\Python-3.9.13\lib\asyncio\windows_events.py", line 368, in loop_accept_pipe
    f = self._proactor.accept_pipe(pipe)
  File "d:\python_build\Python-3.9.13\lib\asyncio\windows_events.py", line 636, in accept_pipe
    connected = ov.ConnectNamedPipe(pipe.fileno())
BrokenPipeError: [WinError 232] The pipe is being closed
```
On the client side, run `pipe_stat.py` several times; then an `OSError` occurs:
```
Traceback (most recent call last):
  File "C:\Users\<user>\Desktop\pipe_stat.py", line 4, in <module>
    os.stat(pipe)
OSError: [WinError 231] All pipe instances are busy: '\\\\.\\pipe\\_test_pipe_server'
```
This problem seems to stem from commit 9eb3d54639 between 3.8.0b3 and 3.8.0b4.
# Your environment
- CPython versions tested on: 3.8.0b3+(9eb3d54639), 3.8.0, 3.9.13, 3.10.9, 3.11.1
- Operating system and architecture: Windows 10 x64
<!-- gh-linked-prs -->
### Linked PRs
* gh-100959
* gh-101019
* gh-101020
<!-- /gh-linked-prs -->
| 1bc7a736837272b15ad3a7aa472977bc720d1033 | c00eb1eae6af3ee5b7e314add4606da4521bb8c5 |
python/cpython | python__cpython-100563 | # Improve performance of pathlib.Path.absolute()
The current implementation of `pathlib.Path.absolute()` calls `self.cwd()` rather than `os.getcwd()`, and so constructs _two_ `Path` objects rather than one. As path objects are [slow to construct](https://github.com/faster-cpython/ideas/discussions/194), this has a performance impact.
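A sketch of the difference being described (names here are illustrative, not the patch itself):

```python
import os
import pathlib

# absolute() effectively does Path(Path.cwd(), self): two Path constructions.
# Building from os.getcwd() needs only one.
p = pathlib.Path("foo")
via_cwd = type(p)(pathlib.Path.cwd(), "foo")   # constructs an extra Path for cwd
via_getcwd = type(p)(os.getcwd(), "foo")       # constructs one Path in total
assert via_cwd == via_getcwd
assert via_getcwd.is_absolute()
```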
<!-- gh-linked-prs -->
### Linked PRs
* gh-100563
<!-- /gh-linked-prs -->
| 7fba99eadb3349a6d49d02f13b1fddf44c674393 | af5149f30b652737ef3b495b303819d985f439b1 |
python/cpython | python__cpython-100589 | # Better `or` doc wording
# Documentation
`x or y` is [documented](https://docs.python.org/3/library/stdtypes.html#boolean-operations-and-or-not) as:
> if x is false, then y, else x
That looks unnatural/backwards/negated to me. It's not how I think of it. I suggest:
> if x is true, then x, else y
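Both wordings describe the same short-circuit behaviour; the proposed phrasing reads directly off the examples:

```python
# "if x is true, then x, else y"
assert ("x" or "y") == "x"   # x is truthy -> x
assert (0 or "y") == "y"     # x is falsy  -> y
```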
<!-- gh-linked-prs -->
### Linked PRs
* gh-100589
* gh-102108
* gh-102109
<!-- /gh-linked-prs -->
| b40dd71241f092d90c192ebff5d58cbd7e84dc52 | 350ba7c07f8983537883e093c5c623287a2a22e5 |
python/cpython | python__cpython-123332 | # Support setting `tp_vectorcall` for heap types
# Feature or enhancement
The `tp_vectorcall` slot can be used with static types to define a more efficient implementation of `__new__` / `__init__`. This slot does not have a typedef in typeslots.h, so it cannot currently be set for `PyType_FromSpec` or read using `PyType_GetSlot`.
In 3.12 other vectorcall functionality looks set to stabilise, so please consider adding a `Py_tp_vectorcall` typedef and allow heap types to set / get this member.
# Pitch
Adding the ability for `tp_vectorcall` to be used in the limited API enables extension types to make use of the vectorcall optimisation in more functionality.
# Previous discussion
I see in #85784 that `tp_vectorcall` is deliberately inaccessible with `PyType_GetSlot` because it is not part of the limited API.
If there is support for this proposal, I am happy to have a first stab at implementation.
<!-- gh-linked-prs -->
### Linked PRs
* gh-123332
* gh-124066
<!-- /gh-linked-prs -->
| 74330d992be26829dba65ab83d698d42b2f2a2ee | bbb36c0934b7644a9f8b67d3cae78aa6240e005a |
python/cpython | python__cpython-100555 | # Improve `test_sqlite3.test_sqlite_row_iter`
While working on #100457 I've noticed that `test_sqlite_row_iter` can be improved. Right now it is defined as:
```python
def test_sqlite_row_iter(self):
"""Checks if the row object is iterable"""
self.con.row_factory = sqlite.Row
row = self.con.execute("select 1 as a, 2 as b").fetchone()
for col in row:
pass
```
Well, there are several issues:
1. We do not check what values it actually returns
2. We do not check whether it is iterable a second time; some iterables are implemented as generators and cannot be iterated twice
I will send a PR with the improved test.
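A sketch of what the improved test can assert (not necessarily the exact PR):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.row_factory = sqlite3.Row
row = con.execute("select 1 as a, 2 as b").fetchone()

assert list(row) == [1, 2]   # issue 1: check the actual values
assert list(row) == [1, 2]   # issue 2: a second iteration still works
con.close()
```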
<!-- gh-linked-prs -->
### Linked PRs
* gh-100555
* gh-100564
* gh-100565
<!-- /gh-linked-prs -->
| 3dc48dabd48864039951715816e07986a4828d80 | b0ea28913e3bf684ef847a71afcdfa8224bab63d |
python/cpython | python__cpython-100547 | # eval() documentation is wrong about keyword arguments
# Documentation
[The docs for `eval()`](https://docs.python.org/3/library/functions.html#eval) say it takes keyword arguments, which is incorrect:
```
eval(expression, /, globals=None, locals=None)
```
For example:
```
>>> eval('a', locals={'a': 0})
...
TypeError: eval() takes no keyword arguments
```
Meanwhile, `help(eval)` has the correct signature:
```
eval(source, globals=None, locals=None, /)
```
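Consistent with the `help()` signature, `globals` and `locals` must be passed positionally on the affected versions:

```python
# Positional arguments work:
assert eval("a", {}, {"a": 0}) == 0

# On affected versions, keywords raise "eval() takes no keyword arguments":
try:
    eval("a", locals={"a": 0})
except TypeError:
    pass
```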
I found a previous, similar issue, #69996, which was resolved in #15173 (which shipped in 3.7) but then the syntax was changed in #96579 and [the same sort of mistake was added back](https://github.com/python/cpython/commit/3c4cbd177f36777a04e78eb07ce20367560a66d3#diff-6a7a07ac473fdd76734669b1b70626ad2176011129902f6add017810f54d0439R515) (which shipped in 3.11).
This issue is also similar:
- #67926
<!-- gh-linked-prs -->
### Linked PRs
* gh-100547
* gh-100654
<!-- /gh-linked-prs -->
| 71159a8e078bda0c9a39c6cd0980b7ba238dc582 | 1f6c87ca7b9351b2e5c5363504796fce0554c9b8 |
python/cpython | python__cpython-100541 | # Clean up `_ctypes`/`libffi` build, particularly on macOS
bpo-28491/gh-72677 was closed after bpo-41100/gh-85272, but [`Modules/_ctypes/libffi_osx`](https://github.com/python/cpython/tree/3.11/Modules/_ctypes/libffi_osx) wasn't actually removed, just made inaccessible to any build. In the course of figuring out whether it is in fact unused now, I also stumbled across [`Modules/_ctypes/darwin`](https://github.com/python/cpython/tree/3.11/Modules/_ctypes/darwin), which also appears to be a `dlfcn.h` shim for Mac OS X 10.2 and earlier, and as far as I can tell has been unused for years. And, of course, with no local copy of `libffi` on any platform anymore, there's no need for `--with-system-ffi` to stick around, which also pointed me towards a needless `-DMACOSX` in the macOS build of `_ctypes`. I'll be attaching a series of PRs for 3.12 (only) removing all of these, each of which can be done independently.
<!-- gh-linked-prs -->
### Linked PRs
* gh-100541
* gh-100542
* gh-100543
* gh-100544
<!-- /gh-linked-prs -->
| 2df82db48506e5a2044a28f147fdb42f662d37b9 | 7223d50b9785bc7b0cd76dcc68d97dabcbade4b6 |
python/cpython | python__cpython-103576 | # Confusing error message when attempting to pass a newtype as a match pattern
```python
from typing import NewType
T = NewType('T', str)
match 'test':
case T(): ...
```
fails with:
```
Traceback (most recent call last):
File "/tmp/foo.py", line 6, in <module>
case T(): ...
^^^
TypeError: called match pattern must be a type
```
The error message is confusing because `T` is a type. The error message should probably say `class` instead of `type`, since [as per the specification](https://peps.python.org/pep-0634/#class-patterns) it's a class pattern.
<!-- gh-linked-prs -->
### Linked PRs
* gh-103576
<!-- /gh-linked-prs -->
| 07804ce24c3103ee1bb141af31b9a1a0f92f5e43 | 78cac520c34b133ba32665e601adbc794282f4b7 |
python/cpython | python__cpython-100523 | # Resolve TODO comments in `test_concurrent_futures.py`
# Feature or enhancement
There are 3 TODO comments that should be resolved in the `test_concurrent_futures.py` file[0]
1. https://github.com/python/cpython/blob/e16d4ed59072839b49bda4b447f260201aae7e39/Lib/test/test_concurrent_futures.py#L714
2. https://github.com/python/cpython/blob/e16d4ed59072839b49bda4b447f260201aae7e39/Lib/test/test_concurrent_futures.py#L1534
3. https://github.com/python/cpython/blob/e16d4ed59072839b49bda4b447f260201aae7e39/Lib/test/test_concurrent_futures.py#L1548
# Pitch
- ~~Resolving 1), i.e adding a test to cover the non-zero timeout case would simply make the tests more robust.~~ **resolved in:** gh-100523
- Resolving 2) would have two benefits. Firstly, it would make the tests run faster by ~1s (there is a 1s sleep). Secondly, sleeping in the tests can _occasionally_ cause flakiness[1] and so I would consider it good practice to remove them where practical.
- Resolving 3) Same reasons as for 2).
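The usual way to remove such sleeps is to synchronize on events rather than fixed delays; a sketch (not the exact tests in question):

```python
import threading
from concurrent.futures import ThreadPoolExecutor

started = threading.Event()
release = threading.Event()

def task():
    started.set()              # signal that the worker is running
    release.wait(timeout=60)   # block until the test releases us
    return "done"

with ThreadPoolExecutor(max_workers=1) as pool:
    fut = pool.submit(task)
    assert started.wait(timeout=60)   # deterministic: no time.sleep() needed
    release.set()
    assert fut.result(timeout=60) == "done"
```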
#### Is there a preferred way to tackle this?
--------------------------------------------
[0]. https://github.com/python/cpython/blob/e16d4ed59072839b49bda4b447f260201aae7e39/Lib/test/test_concurrent_futures.py
[1]. This seems to be acknowledged in the related timeouts that are used:
https://github.com/python/cpython/blob/745545b5bb847023f90505bf9caa983463413780/Lib/test/support/__init__.py#L96
CC: @brianquinlan
<!-- gh-linked-prs -->
### Linked PRs
* gh-100523
<!-- /gh-linked-prs -->
| a2262789abccb68a61bb4047743fbcbd9a64b13c | 73245d084e383b5bc3affedc9444e6b6c881c546 |
python/cpython | python__cpython-100524 | # Docstrings in configparser are invalid reStructuredText
In the `configparser` module, the docstrings are not valid rst. I stumbled onto this issue in the [backport](/jaraco/configparser) when I started validating documentation. Here's what the sphinx output looks like when trying to render the docs for that module:
```
docs: commands[0] /Users/jaraco/code/jaraco/configparser/docs> python -m sphinx -W --keep-going . /Users/jaraco/code/jaraco/configparser/build/html
Running Sphinx v5.3.0
loading pickled environment... done
building [mo]: targets for 0 po files that are out of date
building [html]: targets for 0 source files that are out of date
updating environment: [config changed ('project')] 2 added, 0 changed, 0 removed
reading sources... [100%] index
/Users/jaraco/code/jaraco/configparser/src/backports/configparser/__init__.py:docstring of backports.configparser:22: WARNING: Block quote ends without a blank line; unexpected unindent.
/Users/jaraco/code/jaraco/configparser/src/backports/configparser/__init__.py:docstring of backports.configparser:22: WARNING: Inline interpreted text or phrase reference start-string without end-string.
/Users/jaraco/code/jaraco/configparser/src/backports/configparser/__init__.py:docstring of backports.configparser:26: WARNING: Inline interpreted text or phrase reference start-string without end-string.
/Users/jaraco/code/jaraco/configparser/src/backports/configparser/__init__.py:docstring of backports.configparser:30: WARNING: Inline interpreted text or phrase reference start-string without end-string.
/Users/jaraco/code/jaraco/configparser/src/backports/configparser/__init__.py:docstring of backports.configparser:33: WARNING: Inline interpreted text or phrase reference start-string without end-string.
/Users/jaraco/code/jaraco/configparser/src/backports/configparser/__init__.py:docstring of backports.configparser:37: WARNING: Inline interpreted text or phrase reference start-string without end-string.
/Users/jaraco/code/jaraco/configparser/src/backports/configparser/__init__.py:docstring of backports.configparser:44: WARNING: Inline interpreted text or phrase reference start-string without end-string.
/Users/jaraco/code/jaraco/configparser/src/backports/configparser/__init__.py:docstring of backports.configparser:48: WARNING: Inline interpreted text or phrase reference start-string without end-string.
/Users/jaraco/code/jaraco/configparser/src/backports/configparser/__init__.py:docstring of backports.configparser:88: WARNING: Inline interpreted text or phrase reference start-string without end-string.
/Users/jaraco/code/jaraco/configparser/src/backports/configparser/__init__.py:docstring of backports.configparser:88: WARNING: Inline interpreted text or phrase reference start-string without end-string.
/Users/jaraco/code/jaraco/configparser/src/backports/configparser/__init__.py:docstring of backports.configparser:103: WARNING: Inline interpreted text or phrase reference start-string without end-string.
/Users/jaraco/code/jaraco/configparser/src/backports/configparser/__init__.py:docstring of backports.configparser:103: WARNING: Inline interpreted text or phrase reference start-string without end-string.
/Users/jaraco/code/jaraco/configparser/src/backports/configparser/__init__.py:docstring of backports.configparser:103: WARNING: Inline interpreted text or phrase reference start-string without end-string.
/Users/jaraco/code/jaraco/configparser/src/backports/configparser/__init__.py:docstring of backports.configparser:103: WARNING: Inline interpreted text or phrase reference start-string without end-string.
/Users/jaraco/code/jaraco/configparser/src/backports/configparser/__init__.py:docstring of backports.configparser:136: WARNING: Inline interpreted text or phrase reference start-string without end-string.
/Users/jaraco/code/jaraco/configparser/src/backports/configparser/__init__.py:docstring of backports.configparser:9: WARNING: Inline interpreted text or phrase reference start-string without end-string.
/Users/jaraco/code/jaraco/configparser/src/backports/configparser/__init__.py:docstring of backports.configparser:1: WARNING: Inline interpreted text or phrase reference start-string without end-string.
/Users/jaraco/code/jaraco/configparser/src/backports/configparser/__init__.py:docstring of backports.configparser:1: WARNING: Inline interpreted text or phrase reference start-string without end-string.
/Users/jaraco/code/jaraco/configparser/src/backports/configparser/__init__.py:docstring of backports.configparser:1: WARNING: Inline interpreted text or phrase reference start-string without end-string.
/Users/jaraco/code/jaraco/configparser/src/backports/configparser/__init__.py:docstring of backports.configparser:1: WARNING: Inline interpreted text or phrase reference start-string without end-string.
/Users/jaraco/code/jaraco/configparser/src/backports/configparser/__init__.py:docstring of backports.configparser:1: WARNING: Inline interpreted text or phrase reference start-string without end-string.
/Users/jaraco/code/jaraco/configparser/src/backports/configparser/__init__.py:docstring of backports.configparser:1: WARNING: Inline interpreted text or phrase reference start-string without end-string.
/Users/jaraco/code/jaraco/configparser/src/backports/configparser/__init__.py:docstring of backports.configparser:1: WARNING: Inline interpreted text or phrase reference start-string without end-string.
/Users/jaraco/code/jaraco/configparser/src/backports/configparser/__init__.py:docstring of backports.configparser:1: WARNING: Inline interpreted text or phrase reference start-string without end-string.
/Users/jaraco/code/jaraco/configparser/src/backports/configparser/__init__.py:docstring of backports.configparser:1: WARNING: Inline interpreted text or phrase reference start-string without end-string.
/Users/jaraco/code/jaraco/configparser/src/backports/configparser/__init__.py:docstring of backports.configparser:5: WARNING: Inline interpreted text or phrase reference start-string without end-string.
/Users/jaraco/code/jaraco/configparser/src/backports/configparser/__init__.py:docstring of backports.configparser:8: WARNING: Inline interpreted text or phrase reference start-string without end-string.
/Users/jaraco/code/jaraco/configparser/src/backports/configparser/__init__.py:docstring of backports.configparser:8: WARNING: Inline interpreted text or phrase reference start-string without end-string.
/Users/jaraco/code/jaraco/configparser/src/backports/configparser/__init__.py:docstring of backports.configparser:8: WARNING: Inline interpreted text or phrase reference start-string without end-string.
/Users/jaraco/code/jaraco/configparser/src/backports/configparser/__init__.py:docstring of backports.configparser:1: WARNING: Inline interpreted text or phrase reference start-string without end-string.
/Users/jaraco/code/jaraco/configparser/src/backports/configparser/__init__.py:docstring of backports.configparser:1: WARNING: Inline interpreted text or phrase reference start-string without end-string.
/Users/jaraco/code/jaraco/configparser/src/backports/configparser/__init__.py:docstring of backports.configparser:1: WARNING: Inline interpreted text or phrase reference start-string without end-string.
/Users/jaraco/code/jaraco/configparser/src/backports/configparser/__init__.py:docstring of backports.configparser:1: WARNING: Inline interpreted text or phrase reference start-string without end-string.
/Users/jaraco/code/jaraco/configparser/src/backports/configparser/__init__.py:docstring of backports.configparser:7: WARNING: Inline interpreted text or phrase reference start-string without end-string.
/Users/jaraco/code/jaraco/configparser/src/backports/configparser/__init__.py:docstring of backports.configparser:1: WARNING: Inline interpreted text or phrase reference start-string without end-string.
/Users/jaraco/code/jaraco/configparser/src/backports/configparser/__init__.py:docstring of backports.configparser:1: WARNING: Inline interpreted text or phrase reference start-string without end-string.
/Users/jaraco/code/jaraco/configparser/src/backports/configparser/__init__.py:docstring of backports.configparser:1: WARNING: Inline interpreted text or phrase reference start-string without end-string.
/Users/jaraco/code/jaraco/configparser/src/backports/configparser/__init__.py:docstring of backports.configparser:1: WARNING: Inline interpreted text or phrase reference start-string without end-string.
/Users/jaraco/code/jaraco/configparser/src/backports/configparser/__init__.py:docstring of backports.configparser:1: WARNING: Inline interpreted text or phrase reference start-string without end-string.
/Users/jaraco/code/jaraco/configparser/src/backports/configparser/__init__.py:docstring of backports.configparser:1: WARNING: Inline interpreted text or phrase reference start-string without end-string.
looking for now-outdated files... none found
pickling environment... done
checking consistency... done
preparing documents... done
writing output... [100%] index
/Users/jaraco/code/jaraco/configparser/src/backports/configparser/__init__.py:docstring of backports.configparser:1: WARNING: py:class reference target not found: backports.configparser.Error
/Users/jaraco/code/jaraco/configparser/src/backports/configparser/__init__.py:docstring of backports.configparser:1: WARNING: py:class reference target not found: backports.configparser.Error
/Users/jaraco/code/jaraco/configparser/src/backports/configparser/__init__.py:docstring of backports.configparser:1: WARNING: py:class reference target not found: backports.configparser.Error
/Users/jaraco/code/jaraco/configparser/src/backports/configparser/__init__.py:docstring of backports.configparser:1: WARNING: py:class reference target not found: backports.configparser.Error
/Users/jaraco/code/jaraco/configparser/src/backports/configparser/__init__.py:docstring of backports.configparser:1: WARNING: py:class reference target not found: backports.configparser.Error
/Users/jaraco/code/jaraco/configparser/src/backports/configparser/__init__.py:docstring of backports.configparser:1: WARNING: py:class reference target not found: backports.configparser.Error
generating indices... genindex py-modindex done
writing additional pages... search done
copying static files... done
copying extra files... done
dumping search index in English (code: en)... done
dumping object inventory... done
build finished with problems, 46 warnings.
```
It would be nice if the source documentation would compile nicely under sphinx without warnings.
<!-- gh-linked-prs -->
### Linked PRs
* gh-100524
* gh-100533
* gh-100534
<!-- /gh-linked-prs -->
| 199507b81a302ea19f93593965b1e5088195a6c5 | 3ccc98fc24b278e0f8195686f3651c7c9fabeb59 |
python/cpython | python__cpython-99588 | # Simplify eff_request_host in cookiejar.py
The existing code can be made more idiomatic and brought closer to the RFC wording.
Mainly creating this issue since I think https://github.com/python/cpython/pull/99588 should have a news entry
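For illustration, a sketch of the simplification (not the exact patch; the real `eff_request_host` also special-cases IPv4 addresses): the dots check `req_host.find(".") == -1` reads more idiomatically as `"." not in req_host`, matching the RFC 2965 wording "contains no dots":

```python
def effective_host_name(req_host: str) -> str:
    # RFC 2965: a host name containing no dots gets ".local" appended.
    # (The stdlib version additionally skips IPv4 address literals.)
    if "." not in req_host:
        return req_host + ".local"
    return req_host

assert effective_host_name("localhost") == "localhost.local"
assert effective_host_name("example.com") == "example.com"
```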
<!-- gh-linked-prs -->
### Linked PRs
* gh-99588
<!-- /gh-linked-prs -->
| b9aa14a484f653cb6a3a242776df9ac5fe161bfc | 046cbc2080360b0b0bbe6ea7554045a6bbbd94bd |
python/cpython | python__cpython-100521 | # `ast.NodeTransformer` is not tested
There are no tests for `ast.NodeTransformer`:
```
» ag NodeTransformer
Misc/NEWS.d/3.9.0a3.rst
817:In the :mod:`ast` module documentation, fix a misleading ``NodeTransformer``
Lib/inspect.py
2230: class RewriteSymbolics(ast.NodeTransformer):
Lib/ast.py
411: traversing. For this a special visitor exists (`NodeTransformer`) that
453:class NodeTransformer(NodeVisitor):
458: The `NodeTransformer` will walk the AST and use the return value of the
467: class RewriteName(NodeTransformer):
Doc/library/ast.rst
2122: (:class:`NodeTransformer`) that allows modifications.
2132:.. class:: NodeTransformer()
2137: The :class:`NodeTransformer` will walk the AST and use the return value of
2146: class RewriteName(NodeTransformer):
2163: If :class:`NodeTransformer` introduces new nodes (that weren't part of
Doc/whatsnew/2.6.rst
2740::class:`NodeTransformer` classes for traversing and modifying an AST,
```
I think we should add at least some high-level tests that will check the desired behaviour.
I will send a PR today.
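One possible high-level check (a sketch, not necessarily the PR): run a transformer over a parsed expression and verify the rewritten tree:

```python
import ast

# A transformer that rewrites every Name node to a fixed identifier.
class RewriteName(ast.NodeTransformer):
    def visit_Name(self, node):
        return ast.copy_location(ast.Name(id="data", ctx=node.ctx), node)

tree = ast.fix_missing_locations(RewriteName().visit(ast.parse("x + y")))
assert ast.unparse(tree) == "data + data"
```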
<!-- gh-linked-prs -->
### Linked PRs
* gh-100521
<!-- /gh-linked-prs -->
| c1c5882359a2899b74c1685a0d4e61d6e232161f | f63f525e161204970418ebc132efc542daaa24ed |
python/cpython | python__cpython-106533 | # Support for detecting the flavour of pathlib subclasses
# Feature or enhancement
Provide a way of checking whether a user subclass of `pathlib.PurePath` or `Path` uses POSIX or Windows semantics.
# The Problem
Now that #68320 is resolved, users can subclass `pathlib.PurePath` and `Path` directly:
```python
import pathlib
class MyPath(pathlib.Path):
pass
path = MyPath('foo')
```
However, there's no (public) procedure to detect whether an instance of `MyPath` uses POSIX or Windows semantics.
A non-public way of doing it:
```python
path._flavour is posixpath # Check whether the path object uses POSIX semantics
path._flavour is ntpath # Check whether the path uses Windows semantics
path._flavour is os.path # Check whether the path uses the current OS's semantics
```
Note that checking `isinstance(path, PurePosixPath)` (etc) won't work, as user subclasses of `PurePath` and `Path` do not have the POSIX- and Windows-specific subclasses in their class hierarchy.
# The Proposal
Make the `_flavour` attribute public; document it. Possible names:
- `flavour` (simply remove the underscore)
- `pathmod` (used internally in older pathlib versions)
- `module` (used in the longstanding thirdparty [`path` package](https://path.readthedocs.io/en/latest/api.html#path.Path.module))
# Alternatives
We could make the `PurePosixPath` and `PureWindowsPath` classes 'protocols', which support `isinstance()` checks even when the passed object isn't a conventional subclass. But:
1. The implementation will be more complex
2. It could open a can of worms about whether `PurePosixPath` and `PureWindowsPath` should be proper protocols, and not directly instantiable.
A further alternative would be to add an `is_posix()` method, which gels pretty nicely with the existing `is_*()` and `as_posix()` methods.
<!-- gh-linked-prs -->
### Linked PRs
* gh-106533
<!-- /gh-linked-prs -->
| c6c5665ee0c0a5ddc96da255c9a62daa332c32b3 | a1a3193990cd6658c1fe859b88a2bc03971a16df |
python/cpython | python__cpython-100489 | # Add `is_integer` to fractions.Fraction
Requested by @mdickinson in https://github.com/python/cpython/issues/100268#issuecomment-1363885807
The case for:
- Improves duck type compatibility between Fraction and float (and now int)
- Simple to implement; hopefully low maintenance cost
The case against:
- Not aware of much demand for it from users
- The case for int.is_integer is much stronger, since the PEP 484 type system conflates `float` and `int | float`
<!-- gh-linked-prs -->
### Linked PRs
* gh-100489
<!-- /gh-linked-prs -->
| e83f88a455261ed53530a960f1514ab7af7d2e82 | 71159a8e078bda0c9a39c6cd0980b7ba238dc582 |
python/cpython | python__cpython-100677 | # Consider adding sumproduct() or dotproduct() to the math module
I was reviewing the itertools recipes to see whether some were worth promoting to be builtin tools. The `dotproduct()` recipe was the best candidate. To non-matrix people this is known as [sumproduct()](https://support.microsoft.com/en-us/office/sumproduct-function-16753e75-9f68-4874-94ac-4d2145a2fd2e) and it comes up in many non-vector applications, possibly the most common being `sum([price * quantity for price, quantity in zip(prices, quantities)])` and the second most common being weighted averages.
The current version of the recipe is:
```
def dotproduct(vec1, vec2):
"Compute a sum of products."
return sum(starmap(operator.mul, zip(vec1, vec2, strict=True)))
```
If we offered this as part of the math module, we could make a higher quality implementation.
For float inputs or mixed int/float inputs, we could multiply and sum in [quad precision](http://csclub.uwaterloo.ca/~pbarfuss/dekker1971.pdf), making a single rounding at the end. This would make a robust building block to serve as a foundation for users to construct higher level tools. It is also something that is difficult for them to do on their own.
<!-- gh-linked-prs -->
### Linked PRs
* gh-100677
* gh-100836
* gh-100857
* gh-101383
* gh-101397
* gh-101567
<!-- /gh-linked-prs -->
| 47b9f83a83db288c652e43567c7b0f74d87a29be | deaf090699a7312cccb0637409f44de3f382389b |
python/cpython | python__cpython-100475 | # http.server directories named index.html break directory listings
<!--
If you're new to Python and you're not sure whether what you're experiencing is a bug, the CPython issue tracker is not
the right place to seek help. Consider the following options instead:
- reading the Python tutorial: https://docs.python.org/3/tutorial/
- posting in the "Users" category on discuss.python.org: https://discuss.python.org/c/users/7
- emailing the Python-list mailing list: https://mail.python.org/mailman/listinfo/python-list
- searching our issue tracker (https://github.com/python/cpython/issues) to see if
your problem has already been reported
-->
# Bug report
If you have a directory called `index.html` or `index.htm` (or any name in `SimpleHTTPRequestHandler.index_pages`) within a directory, http.server returns a 404 Not Found error instead of the directory listing. This comes about because the handler does not check that the index is a regular file when it checks for its presence; the 404 error comes from the call to open() the directory raising an OSError.
To reproduce, create a folder structure like the one below and run `python3 -m http.server -d foo`. You will get a 404 error rather than a directory listing.
```
foo/
├── bar
└── index.html/
    └── baz
```
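A minimal sketch of the kind of guard that would avoid this (hypothetical helper; the actual patch may differ): check that the candidate index is a regular file before committing to it, and otherwise fall back to the directory listing.

```python
import os

# Sketch of the index-page lookup with an os.path.isfile() guard added;
# SimpleHTTPRequestHandler.send_head does a similar lookup without the guard.
def find_index(directory, index_pages=("index.html", "index.htm")):
    for index in index_pages:
        candidate = os.path.join(directory, index)
        if os.path.isfile(candidate):  # skip directories named index.html
            return candidate
    return None  # caller falls back to the directory listing
```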
# Your environment
<!-- Include as many relevant details as possible about the environment you experienced the bug in -->
- CPython versions tested on: 3.11.0, Python 3.12.0a3 (c3c7848a)
- Operating system and architecture: Fedora Linux 37 (Workstation Edition) x86_64
<!--
You can freely edit this text. Remove any lines you believe are unnecessary.
-->
<!-- gh-linked-prs -->
### Linked PRs
* gh-100475
* gh-100504
* gh-100505
<!-- /gh-linked-prs -->
| 46e6a28308def2c3a71c679a6fa4ed7d520802b9 | 00afa5066bd45348ed82a38d3442763b2ed1a068 |
python/cpython | python__cpython-100473 | # compileall's stripdir, prependdir and limit_sl_dest cannot be bytes
This is a follow-up to https://github.com/python/cpython/issues/84627
The docs claim:
> The stripdir, prependdir and limit_sl_dest arguments correspond to the -s, -p and -e options described above. They may be specified as str, bytes or [os.PathLike](https://docs.python.org/3/library/os.html#os.PathLike).
However, none of these can be bytes.
I propose we just fix the documentation.
Also see:
- https://github.com/python/typeshed/pull/3956#issuecomment-633739042
- https://github.com/python/typeshed/pull/5172#issuecomment-817236036
<!-- gh-linked-prs -->
### Linked PRs
* gh-100473
* gh-100514
* gh-100515
<!-- /gh-linked-prs -->
| 046cbc2080360b0b0bbe6ea7554045a6bbbd94bd | efccd04b9efc1752a845b377399d2068b06d04e7 |
python/cpython | python__cpython-100460 | # Errors in specialisation stats for STORE_ATTR
There seem to be some copy-paste errors in specialize.c, which cause failures for STORE_ATTR to be attributed to LOAD_ATTR:
```
@@ -860,7 +860,7 @@ _Py_Specialize_StoreAttr(PyObject *owner, _Py_CODEUNIT *instr, PyObject *name)
// We *might* not really need this check, but we inherited it from
// PyObject_GenericSetAttr and friends... and this way we still do the
// right thing if someone forgets to call PyType_Ready(type):
- SPECIALIZATION_FAIL(LOAD_ATTR, SPEC_FAIL_OTHER);
+ SPECIALIZATION_FAIL(STORE_ATTR, SPEC_FAIL_OTHER);
goto fail;
}
if (PyModule_CheckExact(owner)) {
@@ -915,16 +915,16 @@ _Py_Specialize_StoreAttr(PyObject *owner, _Py_CODEUNIT *instr, PyObject *name)
SPECIALIZATION_FAIL(STORE_ATTR, SPEC_FAIL_OVERRIDDEN);
goto fail;
case BUILTIN_CLASSMETHOD:
- SPECIALIZATION_FAIL(LOAD_ATTR, SPEC_FAIL_ATTR_BUILTIN_CLASS_METHOD_OBJ);
+ SPECIALIZATION_FAIL(STORE_ATTR, SPEC_FAIL_ATTR_BUILTIN_CLASS_METHOD_OBJ);
goto fail;
case PYTHON_CLASSMETHOD:
- SPECIALIZATION_FAIL(LOAD_ATTR, SPEC_FAIL_ATTR_CLASS_METHOD_OBJ);
+ SPECIALIZATION_FAIL(STORE_ATTR, SPEC_FAIL_ATTR_CLASS_METHOD_OBJ);
goto fail;
case NON_OVERRIDING:
- SPECIALIZATION_FAIL(LOAD_ATTR, SPEC_FAIL_ATTR_CLASS_ATTR_DESCRIPTOR);
+ SPECIALIZATION_FAIL(STORE_ATTR, SPEC_FAIL_ATTR_CLASS_ATTR_DESCRIPTOR);
goto fail;
case NON_DESCRIPTOR:
- SPECIALIZATION_FAIL(LOAD_ATTR, SPEC_FAIL_ATTR_CLASS_ATTR_SIMPLE);
+ SPECIALIZATION_FAIL(STORE_ATTR, SPEC_FAIL_ATTR_CLASS_ATTR_SIMPLE);
goto fail;
case ABSENT:
if (specialize_dict_access(owner, instr, type, kind, name, STORE_ATTR,
```
<!-- gh-linked-prs -->
### Linked PRs
* gh-100460
<!-- /gh-linked-prs -->
| 2659036c757a11235c4abd21f02c3a548a344fe7 | 84bc6a4f25fcf467813ee12b74118f7b1b54e285 |
python/cpython | python__cpython-100387 | # Enum with `str` or `int` Mixin Breaking Change in Python 3.11
# Bug report
Looks like there was a breaking change with the way str and int mixins work with Enums in Python 3.11:
```python
from enum import Enum
class Foo(str, Enum):
    BAR = "bar"
# Python 3.10
f"{Foo.BAR}" # > bar
# Python 3.11
f"{Foo.BAR}" # > Foo.BAR
```
The same goes for Enum classes with the `int` mixin.
In my project we were relying on Foo.BAR to return the enum value, so this change broke our code. We fixed it by replacing str Enum mixin with the newly added StrEnum class (thanks for that, it's exactly what we needed!).
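For code that cannot move to `StrEnum` yet, one workaround that behaves identically on 3.10 and 3.11 is to interpolate the member's `.value` explicitly:

```python
from enum import Enum

class Foo(str, Enum):
    BAR = "bar"

# .value sidesteps the __str__ change between 3.10 and 3.11
print(f"{Foo.BAR.value}")  # bar on both versions
```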
I think reverting the breaking change would only introduce another breaking change, so that's probably not the way to go. But maybe updating the whatsnew page and calling out the change there could help people stumbling into this when doing the upgrade. I've found the existing point about this change in the release notes a little bit confusing, and I have already opened a PR to try and clear it up a bit: https://github.com/python/cpython/pull/100387
I've also written a longer blog post about it [here](https://blog.pecar.me/python-enum), and there has been some lively discussion in [r/python](https://www.reddit.com/r/Python/comments/zt4ot4/enum_with_str_or_int_mixin_breaking_change_in/).
# Your environment
- CPython versions tested on: 3.11.0, 3.11.1
- Operating system and architecture: MacOS, Ubuntu 22.04
<!-- gh-linked-prs -->
### Linked PRs
* gh-100387
* gh-104060
<!-- /gh-linked-prs -->
| e665563f8301d0db5cb0847d75fc879f074aa100 | 3ed8c882902a6982fd67e898a5b8a2d619fb5ddf |
python/cpython | python__cpython-100456 | # Start running SSL tests with OpenSSL 3.1.0-beta1
# Feature or enhancement
SSL tests are run with OpenSSL 1.1.1s and 3.0.7 at the moment.
https://github.com/python/cpython/blob/84bc6a4f25fcf467813ee12b74118f7b1b54e285/.github/workflows/build.yml#L238
Let's add a fresh [3.1.0-beta1](https://www.openssl.org/blog/blog/2022/12/21/OpenSSL3.1Beta/) to the matrix.
# Pitch
OpenSSL asks to build and test against this beta release.
We ran tests for alpha and beta versions of 3.0 in the past (commit 44fb55149934d8fb095edb6fc3f8167208035b96).
<!-- gh-linked-prs -->
### Linked PRs
* gh-100456
* gh-100486
* gh-118262
* gh-125186
<!-- /gh-linked-prs -->
| a23cb72ac82372fac05ba36ce08923840ca0de06 | 7ca45e5ddd493411e61706d07679ea54b954e41b |
python/cpython | python__cpython-100446 | # Improve error message for unterminated strings with escapes
Inspired by #55688 and #94768.
Currently, this code raises the following error:
```
>>> "asdf\"
  File "<stdin>", line 1
    "asdf\"
    ^
SyntaxError: unterminated string literal (detected at line 1)
```
Maybe there's room to make this error message more helpful, to make the source of the error more obvious for users unfamiliar with escapes. Could the following be better?
```
>>> "asdf\"
  File "<stdin>", line 1
    "asdf\"
    ^
SyntaxError: unterminated string literal (detected at line 1); perhaps you escaped the end quote?
```
(The raw string situation in the linked issues that inspired this is a little extra tricky, since the workaround is still not clear. But hopefully the message still makes it clearer to users what is going on)
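For completeness: if a literal backslash before the closing quote was what the user intended, doubling it yields a valid literal (a raw string cannot help here, since a raw string also cannot end in an odd number of backslashes):

```python
s = "asdf\\"   # a literal backslash, then the string ends
print(s)       # asdf\
print(len(s))  # 5
```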
<!-- gh-linked-prs -->
### Linked PRs
* gh-100446
<!-- /gh-linked-prs -->
| 3156d193b81f7fefbafa1a5299bc9588a6768956 | baefbb21d91db2d950706737a6ebee9b2eff5c2d |
python/cpython | python__cpython-100436 | # Float documentation doesn't allow int-like strings as valid arguments for float
[The documentation for `float`](https://docs.python.org/3/library/functions.html#float) doesn't explicitly allow int-like strings, although it does give one as an example. E.g. although `float('10')` works and returns a float, the documentation implies that it shouldn't.
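A quick demonstration that int-like strings are in fact accepted, along with the surrounding whitespace and underscore grouping the constructor allows:

```python
print(float('10'))     # 10.0  (an int-like string is fine)
print(float('  42 '))  # 42.0  (surrounding whitespace is ignored)
print(float('1_000'))  # 1000.0 (underscores allowed since Python 3.6)
```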
See [discussion on Discuss](https://discuss.python.org/t/is-it-legal-valid-to-use-float-to-convert-a-string-containing-integer-to-float/22114).
<!-- gh-linked-prs -->
### Linked PRs
* gh-100436
* gh-100437
* gh-100510
* gh-100511
* gh-100674
* gh-100675
<!-- /gh-linked-prs -->
| 69bc86cb1aed49db27afc0095e0f4bcd8f1f3983 | 74a2b79c6265c92ef381b5ff0dc63903bf0178ac |
python/cpython | python__cpython-100426 | # Improve accuracy of builtin sum() for float inputs
Currently `sum()` makes no effort to improve accuracy over a simple running total. We do have `math.fsum()` that makes extreme efforts to be almost perfect; however, that function isn't well known, it runs 10x slower than regular `sum()`, and it always coerces to a float.
I suggest switching the builtin `sum()` handling of float inputs to Arnold Neumaier's improved variation of compensated summation. Per his [paper](https://www.mat.univie.ac.at/~neum/scan/01.pdf), this algorithm has excellent error bounds (though not as perfect as `fsum()`):
```
|s - š| ≤ ε|s| + ε²(3/4n² + n)·Σ|aᵢ| (IV,12)
|s - š| ≤ ε|s| + ε²(1/4n³ + 5/2n² + n)·Max|aᵢ| (IV,13)
```
The compensation tracking runs in parallel to the main accumulation. And except for pathological cases, the branch is predictable, making the test essentially free. Thanks to the magic of instruction level parallelism and branch prediction, this improvement has zero cost on my Apple M1 build. Timed with:
`./python.exe -m timeit -s 'from random import expovariate as r' -s 'd=[r(1.0) for i in range(10_000)]' 'sum(d)'`
N.B. Numpy switched from a simple running total to [partial pairwise summation](https://numpy.org/doc/stable/reference/generated/numpy.sum.html). That isn't as accurate as what is being proposed here, but it made more sense for them because the extra work of Neumaier summation isn't masked by the overhead of fetching values from an iterator as we do here. Also with an iterator, we can't do pairwise summation without using auxiliary memory.
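For reference, Neumaier's variant can be sketched in pure Python (the proposal would do the equivalent in C inside the builtin `sum()` loop):

```python
def neumaier_sum(values):
    total = 0.0
    comp = 0.0  # running compensation for lost low-order bits
    for x in values:
        t = total + x
        if abs(total) >= abs(x):
            comp += (total - t) + x   # low-order bits of x were lost
        else:
            comp += (x - t) + total   # low-order bits of total were lost
        total = t
    return total + comp

print(neumaier_sum([1.0, 1e100, 1.0, -1e100]))  # 2.0; a naive running total loses the 1.0s
```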
<!-- gh-linked-prs -->
### Linked PRs
* gh-100426
* gh-100860
* gh-101854
* gh-107785
* gh-107787
<!-- /gh-linked-prs -->
| 5d84966cce6c596da22922a07f49bde959ff5201 | 1ecfd1ebf1f53ef6ac82085b25ed09952b470d4e |
python/cpython | python__cpython-114481 | # Add sqlite3 as another possible backing store for the dbm module
Right now we support `ndbm` and `gnu.dbm`, which might or might not be part of a given build. The fallback is the super slow `dumbdbm`. Now that `sqlite3` is part of the standard build, we can do better.
The module docstring says:
> Future versions may change the order in which implementations are
> tested for existence, and add interfaces to other dbm-like
> implementations.
The future is now. Let's provide a fast, stable, robust, always available alternative.
This can be done with pure Python calls to the existing `sqlite3` module, or there can be a C extension that calls the SQLite C API directly.
This would automatically be available to the `shelve` module, giving us a high quality, persistent key-value store.
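A pure-Python sketch of what such a backing store could look like (hypothetical class name and schema, for illustration only):

```python
import sqlite3

class SQLiteDBM:
    """Minimal dbm-style mapping backed by sqlite3 (hypothetical sketch)."""

    def __init__(self, path):
        self.con = sqlite3.connect(path)
        self.con.execute(
            "CREATE TABLE IF NOT EXISTS Dict (key BLOB UNIQUE NOT NULL, value BLOB NOT NULL)"
        )

    def __setitem__(self, key, value):
        with self.con:  # wraps the statement in a transaction
            self.con.execute(
                "INSERT OR REPLACE INTO Dict (key, value) VALUES (?, ?)", (key, value)
            )

    def __getitem__(self, key):
        row = self.con.execute("SELECT value FROM Dict WHERE key = ?", (key,)).fetchone()
        if row is None:
            raise KeyError(key)
        return row[0]

db = SQLiteDBM(":memory:")
db[b"spam"] = b"eggs"
print(db[b"spam"])  # b'eggs'
```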
<!-- gh-linked-prs -->
### Linked PRs
* gh-114481
* gh-115447
* gh-115448
* gh-115449
<!-- /gh-linked-prs -->
| dd5e4d90789b3a065290e264122629f31cb0b547 | 57e4c81ae1cd605efa173885574aedc3fded4b8b |
python/cpython | python__cpython-100409 | # Fix a traceback in multiprocessing example
The `multiprocessing` module documentation contains the following example:
```
>>> from multiprocessing import Pool
>>> p = Pool(5)
>>> def f(x):
...     return x*x
...
>>> with p:
...     p.map(f, [1,2,3])
Process PoolWorker-1:
Process PoolWorker-2:
Process PoolWorker-3:
Traceback (most recent call last):
Traceback (most recent call last):
Traceback (most recent call last):
AttributeError: 'module' object has no attribute 'f'
AttributeError: 'module' object has no attribute 'f'
AttributeError: 'module' object has no attribute 'f'
```
Actually, it is outdated. It was so in Python 2, but in Python 3 before 3.12 you get an AttributeError with a different message
```
AttributeError: Can't get attribute 'f' on <module '__main__' (built-in)>
```
and in 3.12 you get
```
AttributeError: Can't get attribute 'f' on <module '__main__' (<class '_frozen_importlib.BuiltinImporter'>)>
```
<!-- gh-linked-prs -->
### Linked PRs
* gh-100409
* gh-103275
* gh-106231
<!-- /gh-linked-prs -->
| a28d4edb23b7150942f1eceb9e97c6f53aa4de42 | 3246688918a428738b61c4adb5fbc6525eae96f9 |
python/cpython | python__cpython-128155 | # Consider adding `-Werror=unguarded-availability-new` to compiler flags for Apple platforms
# Feature or enhancement
Runtime crashes like #97897 could be prevented by turning clang's `unguarded-availability-new` warning into a fatal compile error. This can be done by adding `-Werror=unguarded-availability-new` to compiler flags.
Essentially, this warning cross-references undefined symbols against symbols defined in the targeted Apple SDK. If it sees a symbol introduced in a newer SDK version that isn't weakly referenced/linked, you get a warning. `-Werror` upconverts it to a fatal compiler error.
If you add this flag to release builds, the compiler prevents you from shipping binaries that ship unguarded symbol usage for targeted SDK versions. i.e. it prevents run-time crashes when binaries run on older Apple OS versions.
If you enable this setting today, you may find the 3.8 branch isn't properly gating use of `mkfifoat` and `mknodat`. (Although this may be an oddity from python-build-standalone and not a CPython bug.)
cc @ned-deily
---
The advantages of having this additional warning is to catch regressions such as:
- https://github.com/python/cpython/issues/123797
- https://github.com/python/cpython/issues/75782
<!-- gh-linked-prs -->
### Linked PRs
* gh-128155
<!-- /gh-linked-prs -->
| 9d3a8f494985e8bbef698c467099370e233fcbd4 | f420bdd29fbc1a97ad20d88075c38c937c1f8479 |
python/cpython | python__cpython-100375 | # Incorrect returns caused by improper address filter in socket.getfqdn
# Bug report
When getfqdn is called with the name "::", instead of returning gethostname(), it will call gethostbyaddr("::").
This raises the exception "socket.herror: [Errno 1] Unknown host", which causes a 30-second (timeout) delay and an incorrect result. (Tested only on macOS Ventura 13.0.0.1, Python 3.10.6)
The solution is to add a filter to the first if statement, at line 792 of socket.py, which will be included in my upcoming pull request.
```python
import socket
socket.getfqdn("::") # This will block 30 seconds and returns "::" instead of hostname
```
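The proposed guard can be sketched like this (hypothetical function name; the merged patch may differ in detail):

```python
import socket

def getfqdn_patched(name=''):
    """Sketch of the fix: also treat '0.0.0.0' and '::' as 'this host'."""
    name = name.strip()
    if not name or name in ('0.0.0.0', '::'):
        name = socket.gethostname()
    # ... the rest of socket.getfqdn() would continue unchanged ...
    return name

print(getfqdn_patched('::'))  # the local hostname, immediately, no reverse lookup
```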
# Your environment
- CPython versions tested on: Python 3.10.6 (main, Aug 11 2022, 13:36:31) [Clang 13.1.6 (clang-1316.0.21.2.5)] on Darwin
- Operating system and architecture: macOS Ventura 13.0.0.1, arm64
<!-- gh-linked-prs -->
### Linked PRs
* gh-100375
* gh-100401
* gh-100402
<!-- /gh-linked-prs -->
| 12be23cf3c1301be2c6b8fd4cb2cd35a567d2ea2 | a7715ccfba5b86ab09f86ec56ac3755c93b46b48 |
python/cpython | python__cpython-100373 | # SSLContext.load_verify_locations accepts some cases of trailing data in DER
# Bug report
`SSLContext.load_verify_locations` detects EOF by looking for `PEM_R_NO_START_LINE` and `ASN1_R_HEADER_TOO_LONG` in PEM and DER, respectively. The former is correct. PEM allows arbitrary trailing data before and after and the OpenSSL API involves [looking for that particular error code](https://www.openssl.org/docs/man1.1.1/man3/PEM_write_bio.html#RETURN-VALUES).
`ASN1_R_HEADER_TOO_LONG`, however doesn't appear anywhere in OpenSSL's documentation and isn't the right way to detect EOF for a sequence of DER elements. It's signaled whenever there weren't enough bytes to read a full ASN.1 header. That could be because of EOF, but it could also be there was one byte, or any other truncated ASN.1 header. (It could also happen inside a deeply nested ASN.1 structure, but OpenSSL happens to push `ERR_R_NESTED_ASN1_ERROR` in that case, so that case doesn't confuse CPython.)
To repro, add this test to `test_load_verify_cadata`. It should fail.
```
with self.assertRaises(ssl.SSLError):
    ctx.load_verify_locations(cadata=cacert_der + b"A")
```
The fix is instead to stop at `BIO_eof` for DER, as there's no need to skip surrounding data. I'll upload a PR shortly to do that.
# Your environment
- CPython versions tested on: main
- Operating system and architecture: Linux, x86_64
<!-- gh-linked-prs -->
### Linked PRs
* gh-100373
<!-- /gh-linked-prs -->
| acfe02f3b05436658d92add6b168538b30f357f0 | 6a1c49a7176f29435e71a326866d952b686bceb3 |
python/cpython | python__cpython-103902 | # sqlite3.Connection.blobopen() can fail with OverflowError on large rowids
# Bug report
`blobopen` internally uses `int` to hold the requested rowid ([[1]](https://github.com/python/cpython/blob/main/Modules/_sqlite/connection.c#L498), [[2]](https://github.com/python/cpython/blob/main/Modules/_sqlite/clinic/connection.c.h#L300)), but SQLite rowids are actually 64-bit integers:
```
int sqlite3_blob_open(
  sqlite3*,
  const char *zDb,
  const char *zTable,
  const char *zColumn,
  sqlite3_int64 iRow, // <-- the rowid parameter
  int flags,
  sqlite3_blob **ppBlob
);
```
This makes an attempt to open a blob with a large rowid raise an `OverflowError` when Python is compiled with 32-bit `int`, which is the case on Windows even when compiling as a 64-bit application.
This might seem like an edge case, but any `INTEGER PRIMARY KEY` in a rowid table [aliases rowid](https://www.sqlite.org/rowidtable.html), which means any application that uses non-autoincrement primary keys (e.g. timestamps, checksums) is likely to hit this very trivially -- I know I did on basically the first insert. You don't need to have more than 2**32 rows or anything like that for this to happen.
100% reproducible with:
```python
import sqlite3
con = sqlite3.connect(':memory:')
rowid = 2**32
con.execute("create table t(t blob)")
con.execute("insert into t(rowid, t) values (?, zeroblob(1))", (rowid,))
con.blobopen('t', 't', rowid)
```
Expected: nothing (i.e. successful call)
Instead:
```
Traceback (most recent call last):
  File "E:\Temp\blob.py", line 10, in <module>
    con.blobopen('t', 't', rowid)
OverflowError: Python int too large to convert to C int
```
# Your environment
- CPython versions tested on: 3.11.0 (main, Oct 24 2022, 18:26:48) [MSC v.1933 64 bit (AMD64)], 3.12.0a3+ (heads/main-dirty:cb60b6131b, Dec 20 2022, 14:37:41) [MSC v.1934 64 bit (AMD64)]
- Operating system and architecture: Windows x64
<!-- gh-linked-prs -->
### Linked PRs
* gh-103902
* gh-104285
<!-- /gh-linked-prs -->
| a05bad3254e2ae5fdf558dfdb65899a2298d8ded | cab1298a6022ddf12ddcdadd74bb8741650d8e9f |
python/cpython | python__cpython-100364 | # Optimize `asyncio.get_running_loop`
`asyncio.get_running_loop` is one of the most performance critical functions in `asyncio`. With https://github.com/python/cpython/issues/66285 the running loop can be reset in the fork handler thereby avoiding the need for the `getpid` checks. The whole running loop holder object is unnecessary now and adds up unnecessary dependent memory loads.
Benchmark:
```py
import asyncio
from pyperf import Runner
async def main():
    for i in range(10000):
        asyncio.get_running_loop()
runner = Runner()
runner.bench_async_func("main", main)
```
Result:
| Benchmark | base | patch |
|-----------|:-------:|:---------------------:|
| main | 5.03 ms | 328 us: 15.33x faster |
The numbers speak for themselves.
<!-- gh-linked-prs -->
### Linked PRs
* gh-100364
<!-- /gh-linked-prs -->
| 4994f2488f8a436ebda3510c779cbfe292bb21a0 | 79311cbfe718f17c89bab67d7f89da3931bfa2ac |
python/cpython | python__cpython-100358 | # Convert more functions in `bltinsmodule.c` to Argument Clinic
# Feature or enhancement
There are several functions that are still not converted to Argument Clinic (even though they are marked `AC: cannot convert yet, as needs PEP 457 group support in inspect`). I've found:
- `vars`
- `getattr`
- `dir`
- `next`
- `iter`
I will send a PR with these functions converted shortly.
<!-- gh-linked-prs -->
### Linked PRs
* gh-100358
<!-- /gh-linked-prs -->
| bdfb6943861431a79e63f0da2e6b3fe163c12bc7 | 0769f957514300a75be51fc6d1b963c8e359208b |
python/cpython | python__cpython-100349 | # asyncio._SelectorSocketTransport: help garbage collector
# Feature or enhancement
Debugging an asyncio application, I found that many `_SelectorSocketTransport` objects end up listed in gc.garbage instead of being released as soon as possible.
# Pitch
We could release resources as soon as possible and avoid leaving this task to the gc.
<!-- gh-linked-prs -->
### Linked PRs
* gh-100349
<!-- /gh-linked-prs -->
| a6331b605e8044a205a113e1db87d2b0a53d0222 | 39dfbb2d5dc47635f332bc13ca667293de6989ab |
python/cpython | python__cpython-100345 | # Provide C implementation for asyncio.current_task
# Feature or enhancement
By providing a C implementation for `asyncio.current_task`, its performance can be improved.
# Pitch
Performance improvement.
From Instagram profiling data, we've found that this function is called frequently, and a C implementation (in Cinder 3.8) showed more than 4x speedup in a microbenchmark.
# Previous discussion
N/A
<!-- gh-linked-prs -->
### Linked PRs
* gh-100345
<!-- /gh-linked-prs -->
| 4cc63e0d4e4cf3299dcc0ea81616ba072ae5589d | aa878f086b7ba8bdd7006d9d509c671167a5fb1e |
python/cpython | python__cpython-100343 | # Missing NULL check in AC *args tuple parsing
`__clinic_args` is currently not checked for NULL and can crash the interpreter. The fix is to check for NULL and exit early.
<!-- gh-linked-prs -->
### Linked PRs
* gh-100343
* gh-100568
<!-- /gh-linked-prs -->
| 7cf164ad5e3c8c6af5ae8813ad6a784448605418 | e97afefda54b818b140b3cc905642b69d9d65f0c |
python/cpython | python__cpython-100341 | # Build failure on wasm-sdk-17
# Bug report
The `wasi` build is failing on the latest [wasi-sdk-17](https://github.com/WebAssembly/wasi-sdk/releases/tag/wasi-sdk-17) due to a change in clang itself.
## How to verify the failure?
Install the latest (version 17) wasi-sdk from the link above.
Then:
```
./Tools/wasm/wasm_build.py wasi
...
<snipped output>
ccache /opt/wasi-sdk/bin/clang -Wsign-compare -Wunreachable-code -DNDEBUG -g -fwrapv -O3 -Wall -std=c11 -Wextra -Wno-unused-parameter -Wno-missing-field-initializers -Werror=implicit-function-declaration -fvisibility=hidden -I../../Include/internal -IObjects -IInclude -IPython -I. -I../../Include -DPy_BUILD_CORE_BUILTIN -c ../../Modules/_weakref.c -o Modules/_weakref.o
../../Modules/timemodule.c:1972:13: error: incompatible pointer to integer conversion passing 'const struct __clockid *' to parameter of type 'long' [-Wint-conversion]
if (PyModule_AddIntMacro(module, CLOCK_REALTIME) < 0) {
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
../../Include/modsupport.h:59:69: note: expanded from macro 'PyModule_AddIntMacro'
#define PyModule_AddIntMacro(m, c) PyModule_AddIntConstant((m), #c, (c))
^~~
../../Include/modsupport.h:51:71: note: passing argument to parameter here
PyAPI_FUNC(int) PyModule_AddIntConstant(PyObject *, const char *, long);
^
../../Modules/timemodule.c:1979:13: error: incompatible pointer to integer conversion passing 'const struct __clockid *' to parameter of type 'long' [-Wint-conversion]
if (PyModule_AddIntMacro(module, CLOCK_MONOTONIC) < 0) {
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
../../Include/modsupport.h:59:69: note: expanded from macro 'PyModule_AddIntMacro'
#define PyModule_AddIntMacro(m, c) PyModule_AddIntConstant((m), #c, (c))
^~~
../../Include/modsupport.h:51:71: note: passing argument to parameter here
PyAPI_FUNC(int) PyModule_AddIntConstant(PyObject *, const char *, long);
^
2 errors generated.
make: *** [Makefile:2872: Modules/timemodule.o] Error 1
make: *** Waiting for unfinished jobs....
```
<!-- gh-linked-prs -->
### Linked PRs
* gh-100341
* gh-106066
<!-- /gh-linked-prs -->
| 75c8133efec035ec1083ebd8e7d43ef340c2e581 | 1f0d0a432cf431882b432eeba8315f84f818da6b |
python/cpython | python__cpython-100947 | # Python 3.11.0 -> 3.11.1 : ...\Python311\DLLs not added to sys.path in embedded startup on Windows
# Bug report
When updating Python 3.11.0 -> 3.11.1 (or downgrading, which reverses the issue), the `...\Python311\DLLs` folder is suddenly no longer added to sys.path during embedded startup, due to a somehow impaired search path algorithm, e.g. in `Pythonwin.exe`. Thus things like ctypes, socket, hashlib etc. cannot be imported. But `...\Python311\Lib` and all the other paths are correctly there; just the DLLs path is missing.
The same was observed e.g. here:
https://github.com/mhammond/pywin32/issues/1995
The issue is also in current Py 3.12.0a3 at least.
The issue seems not to be with `python.exe` startup.
The issue also disappears when I monkey-copy the `Pythonwin.exe` next to `python.exe` and use that copy.
Note: `Pythonwin.exe` locates pythonNN.dll dynamically and does the usual Python init.
And extra confusing: in the registry there is a `PythonPath` key like `C:\Python312\Lib\;C:\Python312\DLLs\`.
I always thought that the DLLs path is taken from there. But when I edit-damage it to e.g. `C:\Python312\Lib\;C:\Python312\DLLsx\`, it has no effect :-)
The correct DLLs dir (only) is still in the sys.path in the above working cases, and `DLLsx` also does not appear on sys.path in the non-working cases.
Reproduce:
* Win10
* pip install pywin32; run Pythonwin.exe
* `import ctypes` and/or inspect sys.path after start.
<!-- gh-linked-prs -->
### Linked PRs
* gh-100947
* gh-101082
* gh-101080
<!-- /gh-linked-prs -->
| df10571a130c46d67756239e76ad69a3d730a990 | 7b14c2ef194b6eed79670aa9d7e29ab8e2256a56 |
python/cpython | python__cpython-102621 | # Clarification in the `__slots__` documentation
# Documentation
https://docs.python.org/3/reference/datamodel.html#notes-on-using-slots
One of the bulletpoints:
> * Nonempty `__slots__` does not work for classes derived from “variable-length” built-in types such as [int](https://docs.python.org/3/library/functions.html#int), [bytes](https://docs.python.org/3/library/stdtypes.html#bytes) and [tuple](https://docs.python.org/3/library/stdtypes.html#tuple).
Points to clarify:
* What does "does not work" mean?
* What's the set of "variable-length" built-in types? Is `str` one of them?
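To make the first question concrete: for `int`, `bytes`, and `tuple`, "does not work" means the class statement itself fails at creation time:

```python
try:
    class BadInt(int):       # same for bytes and tuple
        __slots__ = ('extra',)
except TypeError as exc:
    print(exc)  # nonempty __slots__ not supported for subtype of 'int'
```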
<!-- gh-linked-prs -->
### Linked PRs
* gh-102621
* gh-102687
* gh-102688
<!-- /gh-linked-prs -->
| 88c262c086077377b40dfae5e46f597e28ffe3c9 | 3d872a74c8c16d4a077c2223f678b1f8f7e0e988 |
python/cpython | python__cpython-100289 | # Finish up LOAD_ATTR specialisation
We should target the following specialisation failures:
- [X] has managed dict
- [ ] not in dict (how does this even happen??)
With these two we can hit >90% specialisation successes.
If we're feeling really ambitious, we could aim for "not managed dict" failure too. But we don't need that to achieve >90% successes.
I'm doing the first one. Is anyone interested in investigating the second specialisation failure?
<!-- gh-linked-prs -->
### Linked PRs
* gh-100289
* gh-100468
* gh-100492
* gh-100753
* gh-101354
* gh-101379
* gh-105990
* gh-106589
<!-- /gh-linked-prs -->
| 36d358348de8efad75ebcf55dad8ed4a4f6dcda9 | c3c7848a48b74a321632202e4bdcf2f465fb1cc6 |
python/cpython | python__cpython-100496 | # unittest.mock.seal doesn't work as expected with AsyncMock
I noticed this while reviewing https://github.com/python/cpython/pull/100252#discussion_r1050456588
I believe the following test case should pass, but it doesn't on main
```python
import unittest
from unittest.mock import Mock, seal
class AsyncClass:
    async def async_method(self): pass
    def normal_method(self): pass

class Case(unittest.TestCase):
    def test_spec_normal_methods_on_class_with_mock_seal(self):
        mock = Mock(AsyncClass)
        seal(mock)

        # test passes, aka this raises AttributeError
        with self.assertRaises(AttributeError):
            mock.normal_method

        # test fails, aka this does not raise AttributeError
        with self.assertRaises(AttributeError):
            mock.async_method

unittest.main()
```
It's easy to fix: we just need to move the clause that handles AsyncMock after the `if self._mock_sealed:` check.
cc @sobolevn who moved the `if self._mock_sealed:` check earlier in https://github.com/python/cpython/pull/28300/files , but not all the way
<!-- gh-linked-prs -->
### Linked PRs
* gh-100496
* gh-100506
* gh-100508
<!-- /gh-linked-prs -->
| e4b43ebb3afbd231a4e5630e7e358aa3093f8677 | 46e6a28308def2c3a71c679a6fa4ed7d520802b9 |
python/cpython | python__cpython-100273 | # JSON does not preserve the order of OrderedDict
#95385 caused a regression in JSON serialization of OrderedDict when using the C implementation.
```pycon
>>> import collections, json
>>> od = collections.OrderedDict(a=1, b=2)
>>> od.move_to_end('a')
>>> od
OrderedDict([('b', 2), ('a', 1)])
>>> json.dumps(od)
'{"a": 1, "b": 2}'
```
With the pure Python implementation, as well as in older Pythons, you get `'{"b": 2, "a": 1}'`.
<!-- gh-linked-prs -->
### Linked PRs
* gh-100273
<!-- /gh-linked-prs -->
| 0fe61d0838218b18bcff4bf434beba3e8dbcc42b | 2b38a9aa747785f6d540e95e6846d6510de6b306 |
python/cpython | python__cpython-100439 | # Improve duck type compatibility of int with float
# Feature or enhancement
Function arguments and variables annotated with `float` should not allow values of type `int`.
# Pitch
[PEP484](https://peps.python.org/pep-0484/) suggested that `when an argument is annotated as having type float, an argument of type int is acceptable;`. This allows this kind of typing to be valid:
`x: float = 2`
But `int` is not a subtype of `float` and doesn't provide the same interface. `float` provides methods that are not available on `int`:
- `is_integer`
- `fromhex`
- `hex`
This violates LSP and is problematic especially with `is_integer`:
```python
def method_requiring_whole_number_float(number: float):
    if number.is_integer():
        ...
```
This method clearly states that it requires `float` and as an author of such code, I would expect that `is_integer` would be available if my typing is correct.
There are workarounds (`if int(number) == number:`) but they render the `is_integer` useless as it can never be safely used.
Just adding the missing methods to `int` (or removing the extra methods from `float`) would not be a valid solution, as there are other problems stemming from the fact that `int` is not `float`. E.g.:
```python
def split_whole_and_decimal(number: float) -> tuple[str, str]:
    return str(number).split('.')  # it's reasonable to expect that the output contains a dot `.` as `float` does but `int` doesn't
```
I'm proposing an errata to PEP484 that will remove `when an argument is annotated as having type float, an argument of type int is acceptable;`.
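The str-splitting contract breaks quietly at runtime when an `int` is passed where `float` is annotated; a minimal demonstration:

```python
def split_whole_and_decimal(number: float):
    # Splits "2.5" into ("2", "5"); assumes str(number) contains a dot.
    whole, _, frac = str(number).partition('.')
    return whole, frac

print(split_whole_and_decimal(2.5))  # ('2', '5')
print(split_whole_and_decimal(2))    # ('2', '') -- the annotation accepted an int,
                                     # but the '.'-based contract is silently broken
```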
<!-- gh-linked-prs -->
### Linked PRs
* gh-100439
<!-- /gh-linked-prs -->
| 3e46f9fe05b40ee42009878620f448d3a4b44cb5 | a23cb72ac82372fac05ba36ce08923840ca0de06 |
python/cpython | python__cpython-122047 | # Ensurepip fails ungracefully when mimetype is missing from Windows registry
# Bug report
A "WinError 5 Access is denied" error occurs when ensurepip (for python 3.10 and above) is run and there are missing/inaccessible mimetypes in the Windows registry. Because ensurepip is used in the Python installer, it also fails silently there, causing pip to never be installed without alerting the user to the issue. The line in which the error occurs is here:
https://github.com/python/cpython/blob/024ac542d738f56b36bdeb3517a10e93da5acab9/Lib/mimetypes.py#L250
Bypassing that line and instead using `self._read_windows_registry(add_type)` (which is the fallback function I believe) fixes the problem. I believe that `_mimetypes_read_windows_registry(add_type)` needs to be able to fail more gracefully when it can't find a particular mimetype in the registry.
I found someone else describing exactly the issue [here](https://discuss.python.org/t/ensurepip-failing-with-access-is-denied-error-when-attempting-to-read-windows-registry/21482).
# Your environment
- CPython versions tested on: 3.10, 3.11
- Operating system and architecture: Windows 10 LTSC, x64
<!-- gh-linked-prs -->
### Linked PRs
* gh-122047
* gh-122786
* gh-122787
<!-- /gh-linked-prs -->
| 0bd93755f37e6b8beb597787fce39eb141179965 | c25898d51e4ec84319b7113d5bf453c6e6519d9c |
python/cpython | python__cpython-100249 | # Possibly missing parameters in the documentation of asyncio functions
The functions _asyncio_.**open_connection**/**start_server**/**open_unix_connection**/**start_unix_server** call their "counterparts" in *loop*.
_asyncio_.**open_connection** calls _loop_.**create_connection**.
_asyncio_.**start_server** calls _loop_.**create_server**.
_asyncio_.**open_unix_connection** calls _loop_.**create_unix_connection**.
_asyncio_.**start_unix_server** calls _loop_.**create_unix_server**.
Thus the signatures in the documentation for **open_\*** and **start_\*** almost copy the signatures of their counterparts.
But 3.11 introduced the *ssl_shutdown_timeout* parameter, which is missing from the documentation of the **open_\*** and **start_\*** functions. So I recommend adding it to the documentation of the aforementioned functions for consistency.
I've created the PR in case the issue is right.
<!-- gh-linked-prs -->
### Linked PRs
* gh-100249
* gh-100250
<!-- /gh-linked-prs -->
| 96638538002fc6e209755c06f613b0a59eb91166 | 5693f45b19de47f8cb2f08d3baf43e626d3fbdf3 |
python/cpython | python__cpython-100944 | # py.exe launcher ignores [commands] from py.ini
# Bug report
This is a regression between Python 3.11.0 and 3.11.1. Since 46a3cf4fe3380b5d4560589cce8f602ba949832d (88297e2a8a75898228360ee369628a4a6111e2ee, #98692), py.exe no longer observes the [“custom commands”](https://peps.python.org/pep-0397/#customized-commands) mapping under `[commands]` in _py.ini_ defining additional “virtual commands” (unless the virtual command starts with the same prefix as one of the four predefined virtual commands), but directly tries to launch the virtual command as an executable.
**Steps to reproduce:**
%WINDIR%\py.ini:
```ini
[commands]
/opt/something/bin/my-python2=C:\something\python27\python.exe
```
test.py:
```python
#!/opt/something/bin/my-python2
import sys
print('hello from', sys.executable)
```
```
%WINDIR%\py.exe test.py
```
**Expected result** (and actual up to 3.11.0):
```
('hello from', 'C:\\something\\python27\\python.exe')
```
**Actual result:**
```
Unable to create process using 'C:\opt\something\bin\my-python2 test.py': The system cannot find the file specified.
```
I seem to be able to fix this as follows, which satisfies the existing tests, however this code is such a complex tangle of special cases that I have no idea whether it is the right thing to do. (The idea is that the loop over `shebangTemplates` should always find exactly one match, which was previously (before the regression) ensured by the empty template, so that `_findCommand()` is always called. Checking for `tmpl != &shebangTemplates[0]` is needed to satisfy `test_recursive_search_path`, however it might exclude too much – maybe `searchPath()` should instead report explicitly that it skipped a recursive call.)
```diff
diff --git a/PC/launcher2.c b/PC/launcher2.c
index 9b3db04aa4..ad313c10f3 100644
--- a/PC/launcher2.c
+++ b/PC/launcher2.c
@@ -1001,19 +1001,13 @@ checkShebang(SearchInfo *search)
L"/usr/bin/env ",
L"/usr/bin/",
L"/usr/local/bin/",
- L"python",
+ L"",
NULL
};
for (const wchar_t **tmpl = shebangTemplates; *tmpl; ++tmpl) {
if (_shebangStartsWith(shebang, shebangLength, *tmpl, &command)) {
commandLength = 0;
- // Normally "python" is the start of the command, but we also need it
- // as a shebang prefix for back-compat. We move the command marker back
- // if we match on that one.
- if (0 == wcscmp(*tmpl, L"python")) {
- command -= 6;
- }
while (command[commandLength] && !isspace(command[commandLength])) {
commandLength += 1;
}
@@ -1052,18 +1046,20 @@ checkShebang(SearchInfo *search)
debug(L"# Treating shebang command '%.*s' as 'py'\n",
commandLength, command);
}
+ } else if (tmpl != &shebangTemplates[0]) {
+ // Unrecognised commands are joined to the script's directory and treated
+ // as the executable path
+ return _useShebangAsExecutable(search, shebang, shebangLength);
} else {
debug(L"# Found shebang command but could not execute it: %.*s\n",
commandLength, command);
}
// search is done by this point
- return 0;
+ break;
}
}
- // Unrecognised commands are joined to the script's directory and treated
- // as the executable path
- return _useShebangAsExecutable(search, shebang, shebangLength);
+ return 0;
}
```
# Your environment
- CPython versions tested on: 3.9, 3.10, 3.11.0 (good), 3.11.1 (bad)
- Operating system and architecture: Windows 10 Pro 10.0.19043.2006 AMD64
<!-- gh-linked-prs -->
### Linked PRs
* gh-100944
* gh-101012
* gh-101083
* gh-101084
<!-- /gh-linked-prs -->
| 468c3bf79890ef614764b4e7543608876c792794 | b5d4347950399800c6703736d716f08761b29245 |
python/cpython | python__cpython-100244 | # performance shortcut in functools.partial behaves differently in C and in Python version
## Bug
`functools.partial` is implemented in `functools.py` and in `_functoolsmodule.c`. The former is almost never used, so libraries come to depend on the quirks and corner cases of the C implementation. This is a [problem for PyPy](https://foss.heptapod.net/pypy/pypy/-/issues/3869), where the Python implementation is the only one as of the most recent PyPy version. Here's one such difference, which was uncovered by the `lxml` library. The following code leads to a `RecursionError`:
```python
import sys
sys.modules['_functools'] = None  # force use of pure python version, if this is commented out it works
from functools import partial

class Builder:
    def __call__(self, tag, *children, **attrib):
        return (tag, children, attrib)
    def __getattr__(self, tag):
        return partial(self, tag)

B = Builder()
m = B.m
```
this is the traceback:
```
Traceback (most recent call last):
  File "/home/cfbolz/projects/cpython/bug.py", line 14, in <module>
    m = B.m
        ^^^
  File "/home/cfbolz/projects/cpython/bug.py", line 11, in __getattr__
    return partial(self, tag)
           ^^^^^^^^^^^^^^^^^^
  File "/home/cfbolz/projects/cpython/Lib/functools.py", line 287, in __new__
    if hasattr(func, "func"):
       ^^^^^^^^^^^^^^^^^^^^^
  File "/home/cfbolz/projects/cpython/bug.py", line 11, in __getattr__
    return partial(self, tag)
           ^^^^^^^^^^^^^^^^^^
  File "/home/cfbolz/projects/cpython/Lib/functools.py", line 287, in __new__
    if hasattr(func, "func"):
       ^^^^^^^^^^^^^^^^^^^^^
... and repeated
```
The problem is the following performance shortcut in `partial.__new__`:
```python
class partial:
    ...
    def __new__(cls, func, /, *args, **keywords):
        if not callable(func):
            raise TypeError("the first argument must be callable")
        if hasattr(func, "func"):  # <------------------- problem
            args = func.args + args
            keywords = {**func.keywords, **keywords}
            func = func.func
```
Basically in this case `func` is an object where calling `hasattr(func, "func")` is not safe. The equivalent C code does this check:
```c
if (Py_TYPE(func)->tp_call == (ternaryfunc)partial_call) {
    // The type of "func" might not be exactly the same type object
    // as "type", but if it is called using partial_call, it must have the
    // same memory layout (fn, args and kw members).
    // We can use its underlying function directly and merge the arguments.
    partialobject *part = (partialobject *)func;
```
In particular, it does not simply call `hasattr` on `func`.
## Real World Version
This is not an artificial problem, we discovered this via the [class](https://github.com/lxml/lxml/blob/master/src/lxml/builder.py#L228) `lxml.builder.ElementMaker`. It has a `__call__` method implemented. It also has `__getattr__` that looks like this:
```python
def __getattr__(self, tag):
    return partial(self, tag)
```
Which yields the above `RecursionError` on PyPy.
## Solution ideas
One approach would be to file a bug with `lxml`, but it is likely that more libraries depend on this behaviour. So I would suggest to change the `__new__` Python code to add an `isinstance` check, to bring its behaviour closer to that of the C code:
```python
def __new__(cls, func, /, *args, **keywords):
    if not callable(func):
        raise TypeError("the first argument must be callable")
    if isinstance(func, partial) and hasattr(func, "func"):
        args = func.args + args
        ...
```
I'll open a PR with this approach soon. /cc @mgorny
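For reference, with a type-checked fast path (as in the C implementation, or with the proposed `isinstance` guard), the lxml-style pattern works; this is what current CPython with the C `_functools` does:

```python
from functools import partial

class Builder:
    def __call__(self, tag, *children, **attrib):
        return (tag, children, attrib)
    def __getattr__(self, tag):
        return partial(self, tag)

B = Builder()
m = B.m            # no RecursionError: the fast path only fires for real partials
print(m('child'))  # ('m', ('child',), {})
```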
<!-- gh-linked-prs -->
### Linked PRs
* gh-100244
<!-- /gh-linked-prs -->
| 5a0209fc23de113747058858a4d2e5fc8213711e | 8e36cb7bb2a6057445975d46169f23a719909917 |
python/cpython | python__cpython-100235 | # Set default lambd to 1.0 in random.expovariate()
Give this function a meaningful and useful default value.
Note that NumPy has already made the same decision.
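With `lambd=1.0` the distribution has mean `1/lambd == 1.0`, which is what makes it a sensible default; a quick sanity check (passing the value explicitly so it also runs on versions without the default):

```python
import random

random.seed(12345)
sample = [random.expovariate(1.0) for _ in range(100_000)]
mean = sum(sample) / len(sample)
print(round(mean, 2))  # close to 1.0, the mean of Exp(lambd=1.0)
```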
<!-- gh-linked-prs -->
### Linked PRs
* gh-100235
<!-- /gh-linked-prs -->
| b430399d41fa88e9040cd055e55cf9211bf63c61 | 8356c14b4f81f4d0010afb61610edacf4068b804 |
python/cpython | python__cpython-100229 | # raise a Warning when os.fork() is called and the process has multiple threads
os.fork() in a multi-threaded application is a likely source of deadlocks on many platforms. We should raise a warning when people call os.fork() from a process that we know has other threads running.
This came from discussion in https://discuss.python.org/t/switching-default-multiprocessing-context-to-spawn-on-posix-as-well/21868/, though I believe many of us have pondered doing it in the past.
<!-- gh-linked-prs -->
### Linked PRs
* gh-100229
* gh-109767
* gh-109773
<!-- /gh-linked-prs -->
| 894f2c3c161933bd820ad322b3b678d89bc2377c | 2df82db48506e5a2044a28f147fdb42f662d37b9 |
python/cpython | python__cpython-101807 | # `asyncio.StreamReader.read` does what exactly?
# Documentation
https://docs.python.org/3/library/asyncio-stream.html#asyncio.StreamReader.read
It remains unclear what exactly that method does. It will give me no more than *n* bytes from the stream. That's a pretty vague contract. Will it sit there until so many bytes are available? Or will it return each one separately? Will they be fragmented in some way? How long will it wait for more bytes to become available?
And what does "EOF" mean?! I know this from the old DOS days as "end of file" but it has two meanings (the byte \x1A inside a file, or the actual end of a file on disk) which I don't see apply here.
Similarly for the other read methods, will they wait and only return when so many bytes have been received? Are there any timeouts? What about cancellation? I'm not sure if I can expect the same level of features as in .NET, but at least I'd like to know what to expect.
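Empirically, `read(n)` returns as soon as up to *n* bytes are buffered, and `read()` with no argument waits until EOF, where "EOF" here means the peer closed its side of the stream; a self-contained sketch using the reader's feed API:

```python
import asyncio

async def demo():
    reader = asyncio.StreamReader()
    reader.feed_data(b'hello world')
    reader.feed_eof()              # simulate the peer closing the connection
    first = await reader.read(5)   # at most 5 bytes; returns without waiting
    rest = await reader.read()     # no argument: read everything until EOF
    return first, rest

first, rest = asyncio.run(demo())
print(first, rest)  # b'hello' b' world'
```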
<!-- gh-linked-prs -->
### Linked PRs
* gh-101807
* gh-102001
* gh-102002
<!-- /gh-linked-prs -->
| 77d95c83733722ada35eb1ef89ae5b84a51ddd32 | a1723caabfcdca5d675c4cb04554fb04c7edf601 |
python/cpython | python__cpython-100223 | # Use a correct and transparent definition of "code unit" in C code
We should define `_Py_CODEUNIT` properly without the need for type punning.
Currently `_Py_CODEUNIT` is defined as `typedef uint16_t _Py_CODEUNIT;`, but it is really an 8-bit opcode followed by an 8-bit operand, aligned to 16 bits. This means we need to resort to type punning to access the opcode and oparg individually.
E.g. https://github.com/python/cpython/blob/main/Include/cpython/code.h#L32
PEP 7 states that "Python 3.11 and newer versions use C11 without optional_features".
So let's use a union with an anonymous struct to define it properly:
```C
typedef union {
    int16_t align;
    struct {
        uint8_t opcode;
        uint8_t oparg;
    };
} _Py_CODEUNIT;
```
@iritkatriel thoughts?
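The layout can be modelled in Python with `ctypes` to sanity-check that the union stays a 16-bit unit with individually addressable bytes (class names here are illustrative, and a named inner struct stands in for the anonymous one):

```python
import ctypes

class Op(ctypes.Structure):
    _fields_ = [("opcode", ctypes.c_uint8), ("oparg", ctypes.c_uint8)]

class CodeUnit(ctypes.Union):
    _fields_ = [("align", ctypes.c_int16), ("op", Op)]

u = CodeUnit()
u.op.opcode = 0x64
u.op.oparg = 3
print(ctypes.sizeof(CodeUnit))  # 2 -- still one 16-bit code unit
print(u.op.opcode, u.op.oparg)  # 100 3 -- no type punning needed
```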
<!-- gh-linked-prs -->
### Linked PRs
* gh-100223
* gh-100259
<!-- /gh-linked-prs -->
| bdd86741bebd3efb51e540d5148e658cb34fd3ce | ae83c782155ffe86830c3255e338f366e331ad30 |
python/cpython | python__cpython-100329 | # `make sharedinstall` does not create lib-dynload in DESTDIR if lib-dynload exists in system already
# Bug report
The `Makefile.pre.in` rule for creating directories is specified as:
```
$(DESTSHARED):
	@for i in $(DESTDIRS); \
	do \
		if test ! -d $(DESTDIR)$$i; then \
			echo "Creating directory $$i"; \
			$(INSTALL) -d -m $(DIRMODE) $(DESTDIR)$$i; \
		else	true; \
		fi; \
	done
```
This means that if `$(DESTSHARED)` exists already (note: missing `$(DESTDIR)`!), i.e. Python 3.12 has been installed into the system, the directories in `$(DESTDIR)` won't be created and the successive `sharedinstall` rules fail due to the missing destination, e.g.:
```
/usr/bin/install -c -m 755 Modules/array.cpython-312-x86_64-linux-gnu.so /usr/lib/python3.12/lib-dynload/array.cpython-312-x86_64-linux-gnu.so
/usr/bin/install: cannot create regular file '/var/tmp/portage/dev-lang/python-3.12.0_alpha3/image/usr/lib/python3.12/lib-dynload/array.cpython-312-x86_64-linux-gnu.so': No such file or directory
/usr/bin/install -c -m 755 Modules/_asyncio.cpython-312-x86_64-linux-gnu.so /usr/lib/python3.12/lib-dynload/_asyncio.cpython-312-x86_64-linux-gnu.so
/usr/bin/install: cannot create regular file '/var/tmp/portage/dev-lang/python-3.12.0_alpha3/image/usr/lib/python3.12/lib-dynload/_asyncio.cpython-312-x86_64-linux-gnu.so': No such file or directory
/usr/bin/install -c -m 755 Modules/_bisect.cpython-312-x86_64-linux-gnu.so /usr/lib/python3.12/lib-dynload/_bisect.cpython-312-x86_64-linux-gnu.so
/usr/bin/install: cannot create regular file '/var/tmp/portage/dev-lang/python-3.12.0_alpha3/image/usr/lib/python3.12/lib-dynload/_bisect.cpython-312-x86_64-linux-gnu.so': No such file or directory
```
Full log: [dev-lang:python-3.12.0_alpha3:20221207-142002.log](https://github.com/python/cpython/files/10218597/dev-lang.python-3.12.0_alpha3.20221207-142002.log)
# Your environment
- CPython versions tested on: 3.12.0a3
- Operating system and architecture: Gentoo Linux/amd64
<!-- gh-linked-prs -->
### Linked PRs
* gh-100329
<!-- /gh-linked-prs -->
| 2a8bf2580441147f1a15e61229d669abc0ab86ee | 35dd55005ee9aea2843eff7f514ee689a0995df8 |
python/cpython | python__cpython-100328 | # `make sharedinstall` does not return failure if install commands fail
# Bug report
If `make sharedinstall` fails to install some Python extensions, the make target wrongly succeeds. For example, I'm seeing:
```
/usr/bin/install -c -m 755 Modules/array.cpython-312-x86_64-linux-gnu.so /usr/lib/python3.12/lib-dynload/array.cpython-312-x86_64-linux-gnu.so
/usr/bin/install: cannot create regular file '/var/tmp/portage/dev-lang/python-3.12.0_alpha3/image/usr/lib/python3.12/lib-dynload/array.cpython-312-x86_64-linux-gnu.so': No such file or directory
/usr/bin/install -c -m 755 Modules/_asyncio.cpython-312-x86_64-linux-gnu.so /usr/lib/python3.12/lib-dynload/_asyncio.cpython-312-x86_64-linux-gnu.so
/usr/bin/install: cannot create regular file '/var/tmp/portage/dev-lang/python-3.12.0_alpha3/image/usr/lib/python3.12/lib-dynload/_asyncio.cpython-312-x86_64-linux-gnu.so': No such file or directory
/usr/bin/install -c -m 755 Modules/_bisect.cpython-312-x86_64-linux-gnu.so /usr/lib/python3.12/lib-dynload/_bisect.cpython-312-x86_64-linux-gnu.so
[...]
```
Nevertheless, `make install` returns successfully in this case. This causes major problems for automated builds since they end up with broken Python installs when the make target should have failed.
I need to investigate why it's failing but that's a separate issue.
Complete build log (1.2M): [dev-lang:python-3.12.0_alpha3:20221207-142002.log](https://github.com/python/cpython/files/10218273/dev-lang.python-3.12.0_alpha3.20221207-142002.log)
The problem seems to be that the `sharedinstall` target runs a single shell command and make doesn't check the exit status until its very end.
I suspect the same problem may apply to other install rules.
# Your environment
- CPython versions tested on: 3.12.0a3
- Operating system and architecture: Gentoo Linux/amd64
<!-- gh-linked-prs -->
### Linked PRs
* gh-100328
<!-- /gh-linked-prs -->
| a90863c993157ae65e040476cf46abd73ae54b4a | 2667452945eb0a3b8993bb4298ca8da54dc0155a |
python/cpython | python__cpython-100212 | # Link in code comment no longer relevant for HTML unescaping
# Documentation
The link in https://github.com/python/cpython/blob/0e081a089ec969c9a34f5ff25886205616ef4dd3/Lib/html/__init__.py#L28 is no longer relevant and should be replaced:
- current link <http://www.w3.org/TR/html5/syntax.html#tokenizing-character-references>
- more accurate link <https://html.spec.whatwg.org/multipage/parsing.html#numeric-character-reference-end-state>
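For context, this is the table that drives how `html.unescape` rewrites invalid numeric character references; a few of its entries can be checked directly:

```python
import html

print(repr(html.unescape('&#0;')))   # '\ufffd' -- REPLACEMENT CHARACTER
print(repr(html.unescape('&#13;')))  # '\r'     -- CARRIAGE RETURN
print(html.unescape('&#128;'))       # '€'      -- 0x80 maps to U+20AC EURO SIGN
```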
The link should explain the source of the replacements table:
```python
# see http://www.w3.org/TR/html5/syntax.html#tokenizing-character-references
_invalid_charrefs = {
    0x00: '\ufffd',  # REPLACEMENT CHARACTER
    0x0d: '\r',      # CARRIAGE RETURN
    0x80: '\u20ac',  # EURO SIGN
    0x81: '\x81',    # <control>
    0x82: '\u201a',  # SINGLE LOW-9 QUOTATION MARK
    0x83: '\u0192',  # LATIN SMALL LETTER F WITH HOOK
    0x84: '\u201e',  # DOUBLE LOW-9 QUOTATION MARK
    0x85: '\u2026',  # HORIZONTAL ELLIPSIS
    0x86: '\u2020',  # DAGGER
    0x87: '\u2021',  # DOUBLE DAGGER
    0x88: '\u02c6',  # MODIFIER LETTER CIRCUMFLEX ACCENT
    0x89: '\u2030',  # PER MILLE SIGN
    0x8a: '\u0160',  # LATIN CAPITAL LETTER S WITH CARON
```
<!-- gh-linked-prs -->
### Linked PRs
* gh-100212
* gh-102044
* gh-102045
<!-- /gh-linked-prs -->
| 9a07eff628c1cd88b7cdda88a8fd0db3fe7ea552 | 072935951f7cd44b40ee37fe561478b2e431c2fb |
python/cpython | python__cpython-100207 | # `sysconfig.get_default_scheme` `versionadded` vs `versionchanged`
# Documentation
`get_default_scheme` used to be private and called `_get_default_scheme`, but in GH-24644, it was made public. That PR uses the `versionchanged` directive to document this, but this is confusing to users, I believe the correct directive should be `versionadded`.
<!-- gh-linked-prs -->
### Linked PRs
* gh-100207
* gh-100216
* gh-100217
<!-- /gh-linked-prs -->
| d3ea82aaf940167482df1e08d6482de8f2dd8526 | 0e081a089ec969c9a34f5ff25886205616ef4dd3 |
python/cpython | python__cpython-100416 | # Bare yield's behaviour undocumented
```
def g():
    yield

print(next(g()))
```
That prints `None`, but the [Yield expressions](https://docs.python.org/3/reference/expressions.html#yield-expressions) documentation doesn't say so. It only talks about *"returning the value of `expression_list` to the generator’s caller"*, but doesn't say that `None` gets returned if the optional `expression_list` isn't given.
Maybe it's stated in the PEPs referenced at the end, but I think one shouldn't have to look that far. It should be right there in the documentation.
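For completeness, the behaviour in question is that a bare `yield` acts exactly like `yield None`:

```python
def g():
    yield        # equivalent to: yield None
    yield None

gen = g()
print(next(gen))  # None
print(next(gen))  # None
```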
<!-- gh-linked-prs -->
### Linked PRs
* gh-100416
* gh-100664
* gh-100665
<!-- /gh-linked-prs -->
| 1aab269d4acbf0b29573ad0a21c54fddee233243 | 1d1480fefc6ae77d14d6eff007b180ff5d1cd5d4 |
python/cpython | python__cpython-100194 | # Add more tests for `asyncio` subprocess
As evident from https://github.com/python/cpython/issues/100133, we need more functional tests for subprocess. Currently there are only two tests for shell, and the others don't test all combinations.
<!-- gh-linked-prs -->
### Linked PRs
* gh-100194
* gh-100569
<!-- /gh-linked-prs -->
| e97afefda54b818b140b3cc905642b69d9d65f0c | 08e5594cf3d42391a48e0311f6b9393ec2e00e1e |
python/cpython | python__cpython-100189 | # Too many misses in BINARY_SUBSCR_LIST_INT
The expression `my_list[-1]` [gets specialized](https://github.com/python/cpython/blob/70be5e42f6e288de32e0df3c77ac22a9ddf1a74b/Python/specialize.c#L1263-L1267) to `BINARY_SUBSCR_LIST_INT` every time, then it [gets deoptimized every time](https://github.com/python/cpython/blob/70be5e42f6e288de32e0df3c77ac22a9ddf1a74b/Python/bytecodes.c#L396)
~~This leads to 31%ish of `BINARY_SUBSCR_LIST_INT` executions being misses (on some programs anyway).~~ Edit: this stat was probably not accurate.
Should be an easy fix.
<!-- gh-linked-prs -->
### Linked PRs
* gh-100189
<!-- /gh-linked-prs -->
| c18d83118881333b9a0afd0add83afb2ba7300f7 | 44892d45b038f919b0378590a776580a9d73b291 |
python/cpython | python__cpython-100901 | # Upgrade OpenSSL bundled with windows to 1.1.1s
# Enhancement
The version of OpenSSL used in [get_externals.bat](https://github.com/python/cpython/blob/c0859743d9ad3bbd4c021200f4162cfeadc0c17a/PCbuild/get_externals.bat#L80) is 1.1.1q, where 1.1.1s has been released. This in turn pulls from https://github.com/python/cpython-bin-deps/tree/openssl-bin, which was last updated by @zooba to 1.1.1q
# Pitch
It would be nice to get the latest updates. Looking at the [changelog](https://www.openssl.org/news/cl111.txt) I don't see anything critical for Windows, so maybe 1.1.1q is still OK, in which case feel free to close the issue.
<!-- gh-linked-prs -->
### Linked PRs
* gh-100901
* gh-100902
* gh-100903
* gh-100904
* gh-100908
* gh-100909
* gh-100910
* gh-101258
* gh-101259
<!-- /gh-linked-prs -->
| d7ab7149f83e4f194cf0e3a438fb6ca177832c99 | e098137cd3250af05f19380590b8dec79dc5942f |
python/cpython | python__cpython-100177 | # Meta issue: clean up redundant compat code
# Bug report
There's several places that have compatibility code for EOL and unsupported Python versions.
We can modernise the code by removing the obsolete bits.
I'll open separate PRs to keep the changes focused by area, and intend to use this meta issue for them all (but can also open separate issues if preferred).
<!-- gh-linked-prs -->
### Linked PRs
* gh-100177
* gh-100190
* gh-100197
* gh-100297
* gh-101853
<!-- /gh-linked-prs -->
| 3192c00a3cf136e06592d9a14d4d7b82412da4de | 6997e77bdf2297375962aaf82876da4e7ecdd61a |
python/cpython | python__cpython-100178 | # Enum Docs does MultiplesOfThree instead of PowersOfThree
# Documentation
Current:
The example includes a class `PowersOfThree`. Inside the function `_generate_next_value_()`, it executes `return (count + 1) * 3` (Line 5), which is multiples of three instead of powers of three.
Expected:
Line 5 should be updated to: `return 3 ** (count + 1)`, and the output on Line 9 should be updated to 9.
OR
Line 2 and Line 8 should be updated to `MultiplesOfThree`
https://docs.python.org/3/library/enum.html#enum.Enum._generate_next_value_
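Applying the first suggested fix, the corrected docs example would read (a sketch, with the output updated accordingly):

```python
from enum import Enum, auto

class PowersOfThree(Enum):
    def _generate_next_value_(name, start, count, last_values):
        return 3 ** (count + 1)   # was: (count + 1) * 3
    FIRST = auto()
    SECOND = auto()

print(PowersOfThree.SECOND.value)  # 9 -- now actually a power of three
```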
<!-- gh-linked-prs -->
### Linked PRs
* gh-100178
* gh-100181
<!-- /gh-linked-prs -->
| 868bab0fdc514cfa70ce97e484a689aee8cb5a36 | 2e279e85fece187b6058718ac7e82d1692461e26 |
python/cpython | python__cpython-100410 | # DeprecationWarning scope expanded in asyncio.events
As discovered in https://github.com/prompt-toolkit/python-prompt-toolkit/issues/1696, it appears that the DeprecationWarning introduced in Python 3.10 has expanded its scope, now with 3.11.1 and 3.10.9 emitting during get_event_loop_policy() where it did not before:
```
~ $ py -3.11 -W error
Python 3.11.0 (main, Oct 26 2022, 19:06:18) [Clang 14.0.0 (clang-1400.0.29.202)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import asyncio
>>> asyncio.get_event_loop_policy().get_event_loop()
<_UnixSelectorEventLoop running=False closed=False debug=False>
>>> ^D
~ $ docker run -it jaraco/multipy-tox py -3.11 -W error
Python 3.11.1 (main, Dec 7 2022, 01:11:34) [GCC 11.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import asyncio
>>> asyncio.get_event_loop_policy().get_event_loop()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/lib/python3.11/asyncio/events.py", line 687, in get_event_loop
    warnings.warn('There is no current event loop',
DeprecationWarning: There is no current event loop
```
It's not obvious if it was intentional to expand the scope of the DeprecationWarning, but it does come as a surprise as the calling code had previously attempted to address the deprecation.
I think there are two concerns to be addressed:
- Does this symptom indicate an unintentional but real additional path that was previously unprotected by the DeprecationWarning? And if so, does that imply that the behavior's first true deprecation is in Python 3.12 and thus will delay the removal?
- What is a user expected to do to properly address the deprecation? I read the [what's new for Python 3.10](https://docs.python.org/3/whatsnew/3.10.html) and it indicates that the call is deprecated, but it provides little guidance on how a user can adapt to the new behavior. Maybe there should be a note to the effect of "programs relying on a non-running event loop must ensure that there is a running event loop before attempting to get the event loop."
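On the second point, the pattern that avoids the warning entirely is to only ask for the loop while one is running, via `get_running_loop()` (or to create one explicitly with `new_event_loop()`); a minimal sketch:

```python
import asyncio

async def main():
    # Inside a coroutine a loop is guaranteed to be running,
    # so get_running_loop() never emits the DeprecationWarning.
    loop = asyncio.get_running_loop()
    return loop.is_running()

print(asyncio.run(main()))  # True
```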
<!-- gh-linked-prs -->
### Linked PRs
* gh-100410
* gh-100412
* gh-100969
* gh-100970
<!-- /gh-linked-prs -->
| e5bd5ad70d9e549eeb80aadb4f3ccb0f2f23266d | 468c3bf79890ef614764b4e7543608876c792794 |
python/cpython | python__cpython-100147 | # BUILD_LIST should steal references, similar to BUILD_TUPLE
Currently, the `BUILD_LIST` opcode adjusts the stack multiple times while items are popped one by one from the stack, rather than employing the more efficient method used by `BUILD_TUPLE` (that steals the references from the stack).
<!-- gh-linked-prs -->
### Linked PRs
* gh-100147
<!-- /gh-linked-prs -->
| e6d44407827490a5345e8393fbdc78fd6c14f5b1 | b3722ca058f6a6d6505cf2ea9ffabaf7fb6b6e19 |
python/cpython | python__cpython-100144 | # Make it possible to collect pystats of parts of runs
The `--enable-pystats` system has a couple of important shortcomings:
- When stats is turned off, stats aren't dumped (on quit or explicitly). Therefore, it's impossible to put `sys._stats_on`/`sys._stats_off` calls around tested code and get the results out.
- Since stats collection is on by default, it is very fiddly to exclude code for stats, especially for things that fire off subprocesses, e.g. pyperformance.
@markshannon's [suggestion](https://github.com/faster-cpython/ideas/issues/511) is to:
- Having stats off by default
- Add an `-Xstats` flag to turn it on at startup
- Always dump out stats
<!-- gh-linked-prs -->
### Linked PRs
* gh-100144
<!-- /gh-linked-prs -->
| 1583c6e326a8454d3c806763620e1329bf6b7cbe | e4ea33b17807d99ed737f800d9b0006957c008d2 |
python/cpython | python__cpython-100154 | # asyncio subprocess stdout occasionally lost (3.11.0 → 3.11.1 regression)
# Bug report
So, I've updated python from 3.11.0 to 3.11.1 and one of my utilities which runs a lot of external processes with `asyncio.create_subprocess_exec` started failing in different places in weird ways. It turned out that with some probability `asyncio.subprocess.Process.communicate()` would now return an empty stdout. Here's a repro:
```
import asyncio
async def main():
    attempt = 1
    while True:
        proc = await asyncio.create_subprocess_exec('/bin/echo', 'test', stdout=asyncio.subprocess.PIPE, stderr=asyncio.subprocess.PIPE)
        stdout, stderr = await proc.communicate()
        text = stdout.decode('utf-8').strip()
        if text != 'test':
            raise RuntimeError(f'FAIL on attempt {attempt}: output="{text}"')
        attempt += 1

asyncio.run(main())
```
You may have to wait somewhat for the problem to reproduce, but for me it fails in under 15 seconds more or less reliably. Possible output:
```
RuntimeError: FAIL on attempt 3823: output=""
```
# Your environment
- CPython versions tested on: 3.11.1
- Operating system and architecture: FreeBSD 13.1 amd64
<!-- gh-linked-prs -->
### Linked PRs
* gh-100154
* gh-100398
<!-- /gh-linked-prs -->
| a7715ccfba5b86ab09f86ec56ac3755c93b46b48 | 4994f2488f8a436ebda3510c779cbfe292bb21a0 |
python/cpython | python__cpython-100132 | # Allow `delete=False` for tempfile.TemporaryDirectory()
# Feature or enhancement
If you use `tempfile.TemporaryDirectory()`, the directory is automatically deleted, which is the whole purpose of this class. But sometimes you may want to take a look at it for debugging reasons, so it would be nice to have a `delete=False` option to prevent the deletion.
# Pitch
Code using this could look like this:
```python
import argparse
import tempfile

def main():
    parser = argparse.ArgumentParser()
    parser.add_argument("--keep-temp-files", action="store_true", help="Don't delete temporary files")
    args = parser.parse_args()

    with tempfile.TemporaryDirectory(delete=not args.keep_temp_files) as tempdir:
        # Do something
        if args.keep_temp_files:
            print(f"You can view your temporary files at {tempdir}")

if __name__ == "__main__":
    main()
```
I personally use `tempfile.TemporaryDirectory()` a lot, especially in build scripts. Sometimes I need to take a look at the contents of these directories, so having this option would be great.
## Other ways to achieve this
Of course, there would be other ways to achieve this.
**1. Use an extra function that takes the directory as argument:**
```python
if args.keep_temp_files:
    do_something("some/other/dir")
else:
    with tempfile.TemporaryDirectory() as tempdir:
        do_something(tempdir)
```
This would need some code rewrite.
**2. Using `input()`**
```python
with tempfile.TemporaryDirectory() as tempdir:
    # Do something
    if args.keep_temp_files:
        print(f"You can view your temporary files at {tempdir}. Press enter to delete the files.")
        input()
```
This could cause problems in situations where you can't easily provide keyboard input to the TTY.
**3. Using a custom function**
```python
def my_temp_dir(delete: bool = False):
    if delete:
        return tempfile.TemporaryDirectory()
    else:
        return my_custom_handler()
```
This will bloat your code when you just want to write a simple script. Python is also often used by beginners, who may not know that they can do this, and would have to start refactoring their code.
## Why you should implement this
The argument can be easily added to existing scripts if needed, without any additional work. A scripting language like Python should allow this to make debugging easier and more beginner friendly.
It is almost no work to implement this. If it were much work to implement and maintain, I would not suggest it. But in this case, it's just 3 steps:
1. Take an additional argument in `__init__()`
2. Save it as a class attribute
3. In `__exit__()`, check whether the directory should be deleted or not
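To illustrate the intended semantics, here is a minimal hand-rolled sketch; the class name and structure are illustrative only, not the proposed stdlib implementation:

```python
import os
import shutil
import tempfile

class TempDirSketch:
    """Hand-rolled sketch of the proposed behaviour: only remove the
    directory on exit when delete=True (the default)."""
    def __init__(self, delete=True):
        self.name = tempfile.mkdtemp()
        self._delete = delete
    def __enter__(self):
        return self.name
    def __exit__(self, *exc_info):
        if self._delete:
            shutil.rmtree(self.name, ignore_errors=True)

with TempDirSketch(delete=False) as kept:
    pass
survived = os.path.isdir(kept)  # the directory outlives the with-block
shutil.rmtree(kept)             # manual cleanup for this demo
```

With `delete=True` (the default), the same class behaves like `tempfile.TemporaryDirectory()` and removes the directory on exit.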
# Previous discussion
None
<!-- gh-linked-prs -->
### Linked PRs
* gh-100132
<!-- /gh-linked-prs -->
| 64cb1a4f0f0bc733a33ad7a6520e749ca1cdd43f | ded9a7fc194a1d5c0e38f475a45f8f77dbe9c6bc |
python/cpython | python__cpython-100613 | # Shim frames can be accessed during frame teardown (via `tstate->cframe->current_frame`)
# Crash report
In jaraco/jaraco.net#5, I've captured a failure that's emerged as Github updated Python 3.12 from a2 to a3 on Linux. The [calling code](https://github.com/jaraco/jaraco.net/blob/e0d1db9d0c8d91579e26f5bcae9c61270d6d6647/jaraco/net/devices/linux.py#L97-L104) is using ctypes to call libc functions.
I don't yet have a minimal reproducer. I'm unable to replicate the issue locally as I don't yet have Linux with a3.
- Operating system and architecture: Ubuntu Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-100613
<!-- /gh-linked-prs -->
| 61762b93871419b34f02d83cee5ca0d94d4a2903 | 2e80c2a976c13dcb69a654b386164dca362295a3 |
python/cpython | python__cpython-100447 | # code.co_positions behaviour does not match documentation
The [documentation of co_positions()](https://docs.python.org/3.12/reference/datamodel.html#codeobject.co_positions) says:
`The iterator returns tuples containing the (start_line, end_line, start_column, end_column). The i-th tuple corresponds to the position of the source code that compiled to the i-th instruction.`
I think this is incorrect, because the iterator returns tuples for cache entries as well:
```
>>> def f():
...     a.b = 1
...
>>> import dis
>>> dis.dis(f)
1 0 RESUME 0
2 2 LOAD_CONST 1 (1)
4 LOAD_GLOBAL 0 (a)
16 STORE_ATTR 1 (b)
26 LOAD_CONST 0 (None)
28 RETURN_VALUE
>>> len(list(f.__code__.co_positions()))
15
>>> from pprint import pprint as pp
>>> pp(list(f.__code__.co_positions()))
[(1, 1, 0, 0),
(2, 2, 8, 9),
(2, 2, 2, 3),
(2, 2, 2, 3),
(2, 2, 2, 3),
(2, 2, 2, 3),
(2, 2, 2, 3),
(2, 2, 2, 3),
(2, 2, 2, 5),
(2, 2, 2, 5),
(2, 2, 2, 5),
(2, 2, 2, 5),
(2, 2, 2, 5),
(2, 2, 2, 5),
(2, 2, 2, 5)]
>>>
```
<!-- gh-linked-prs -->
### Linked PRs
* gh-100447
* gh-119364
* gh-119869
* gh-119870
* gh-119871
<!-- /gh-linked-prs -->
| f07daaf4f7a637f9f9324e7c8bf78e8a3faae7e0 | b2f7b2ef0b5421e01efb8c7bee2ef95d3bab77eb |
python/cpython | python__cpython-100114 | # Remove `yield from` usage from asyncio tests
Support for `yield from` in asyncio coroutines was deprecated and removed, so its usage should be removed from the tests as well.
See https://github.com/python/cpython/blob/cd67c1bb30eccd0c6fd1386405df225aed4c91a9/Lib/test/test_asyncio/test_tasks.py#L2093-L2100
<!-- gh-linked-prs -->
### Linked PRs
* gh-100114
<!-- /gh-linked-prs -->
| 0448deac70be94792616c0fb0c9cb524de9a09b8 | 286e3c76a9cb8f1adc2a915f0d246a1e2e408733 |
python/cpython | python__cpython-100128 | # Avoid using iterable coroutines in `asyncio` internally
`asyncio` currently uses an iterable coroutine directly to wrap awaitables with `__await__` methods. This leads to unnecessary special casing and is confusing, as `asyncio` no longer supports using `yield from`, so it should not be used internally either. Removing it will avoid checking for generators everywhere in public APIs (TBD in a different issue).
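For context, this is the kind of object being wrapped: an awaitable that implements only `__await__` (the class below is illustrative, not asyncio internals):

```python
import asyncio

class FutureLike:
    """An awaitable implementing only __await__ — the kind of object the
    issue says asyncio wraps internally."""
    def __init__(self, result):
        self._result = result
    def __await__(self):
        yield  # suspend once, like a bare future
        return self._result

async def main():
    return await FutureLike(42)

print(asyncio.run(main()))  # 42
```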
<!-- gh-linked-prs -->
### Linked PRs
* gh-100128
<!-- /gh-linked-prs -->
| a44553ea9f7745a1119148082edb1fb0372ac0e2 | 1c9f3391b916939e5ad18213e553f8d6bfbec25e |
python/cpython | python__cpython-100109 | # Specialize FOR_ITER for tuples
Same as FOR_ITER_LIST.
<!-- gh-linked-prs -->
### Linked PRs
* gh-100109
<!-- /gh-linked-prs -->
| 748c6c0921ee02a19e01a35f03ce5f4d9cfde5a6 | 0448deac70be94792616c0fb0c9cb524de9a09b8 |
python/cpython | python__cpython-114358 | # Script containing a "shebang" would not start under Windows
# Bug report
This Python script **hello.pyw** does not start on my PC when I double click it in Windows Explorer.
```
#!/usr/bin/env python3
from tkinter import messagebox
messagebox.showinfo(message='hello, world')
```
Instead, a console window pops up and disappears instantly.
Now I modify the shebang as follows:
```
#!/usr/bin/env python
from tkinter import messagebox
messagebox.showinfo(message='hello, world')
```
On double-clicking **hello.pyw**, *a console opens*, then the message box appears.
Now I remove the shebang:
```
from tkinter import messagebox
messagebox.showinfo(message='hello, world')
```
On double-clicking **hello.pyw**, the message box appears as expected.
This misbehavior was observed with the introduction of Python 3.11. It was not like this until version 3.10.8.
# Your environment
- CPython versions tested on: 3.11.1
- Operating system and architecture: Windows 10 22H2 x64
I have implemented quite a few open source programs with Python, all of which have a shebang so that they work across platforms. On my download pages I had to generally discourage use with Python 3.11.
<!-- gh-linked-prs -->
### Linked PRs
* gh-114358
* gh-114541
* gh-114542
<!-- /gh-linked-prs -->
| d5c21c12c17b6e4db2378755af8e3699516da187 | 6888cccac0776d965cc38a7240e1bdbacb952b91 |
python/cpython | python__cpython-100103 | # Unclear documentation of `zip()`'s `strict` option
In version 3.10, `zip()` has gained the `strict` option, which causes a `ValueError` to be raised when the iterables have different lengths (cf. #84816). [The documentation](https://docs.python.org/3/library/functions.html#zip) describes this as follows:
> Unlike the default behavior, it checks that the lengths of iterables are identical, raising a [ValueError](https://docs.python.org/3/library/exceptions.html#ValueError) if they aren’t:
In my opinion, this is confusing at best. `zip()` in `strict` mode does *not* check the `len()` of the iterables, as that sentence might lead one to think. Rather – and exactly as I expected before reading the documentation – it checks that all iterables are exhausted at the same time (or more specifically, it checks that if `next()` on the iterator for one iterable raises a `StopIteration`, the others do as well). This distinction is important in the context of iterables that do not have a length, e.g. generator functions. It also makes it clear that the error is only raised when one of the iterables reaches exhaustion, which may be important e.g. in a `for ... in zip(...)` loop, since the loop body would be executed for the matching pairs before an error is raised. Depending on what the user is doing, they may therefore still want an explicit `len()` check before running `zip()` to avoid having to roll back later, for example.
Note that PEP-618 (which is not linked from the docs) does not contain this misleading language:
> When enabled, a ValueError is raised if one of the arguments is exhausted before the others.
And likewise in `zip()`'s docstring:
> If strict is true and one of the arguments is exhausted before the others, raise a ValueError.
I think this language of an 'exhausted iterable' should be used in the documentation as well. It is already used for other core functions such as `map()` and `next()`. Even the `zip()` documentation uses it, but only in the description of the default behaviour without `strict`. I feel like, compared to the two quotes above, the timing of the exception deserves an extra explanation though.
I will submit a PR with my proposed changes shortly.
<!-- gh-linked-prs -->
### Linked PRs
* gh-100103
* gh-100594
* gh-100595
<!-- /gh-linked-prs -->
| cf1c09818032df3080c2cd9e7edb5f657213dc83 | c4c5790120beabed83ce5855f18d209ab8324434 |
python/cpython | python__cpython-100099 | # 3.11.1 Regression: namedtuple Enum values are cast to tuple
# Bug report
Between 3.11.0 and 3.11.1, Enums whose values are namedtuple objects have their values converted to tuple, which drops the field names we expect to be able to use, causing AttributeErrors. Test cases below create a namedtuple and an enum whose values are instances of that tuple. In the 3.11.1 case, referencing the enum value like `NTEnum.NONE.value` produces a tuple and not a namedtuple. In both cases, `copy.copy` preserves the namedtuple type.
It is not clear whether any item in the changelog or release notes references this change, nor could I quickly tell whether this was related to changes to address #93910.
<details><summary>Python 3.11.0 (main, Oct 24 2022, 18:26:48) [MSC v.1933 64 bit (AMD64)] on win32</summary>
```py
>>> from enum import Enum
>>> from collections import namedtuple
>>> TTuple = namedtuple('TTuple', 'id a blist')
>>> class NTEnum(Enum):
...     NONE = TTuple(0, 0, [])
...     A = TTuple(1, 2, [4])
...     B = TTuple(2, 4, [0, 1, 2])
...
>>> NTEnum.NONE
<NTEnum.NONE: TTuple(id=0, a=0, blist=[])>
>>> NTEnum.NONE.value
TTuple(id=0, a=0, blist=[])
>>> [x.value for x in NTEnum]
[TTuple(id=0, a=0, blist=[]), TTuple(id=1, a=2, blist=[4]), TTuple(id=2, a=4, blist=[0, 1, 2])]
>>> import copy
>>> x = TTuple(0, 1, [7])
>>> x
TTuple(id=0, a=1, blist=[7])
>>> copy.copy(x)
TTuple(id=0, a=1, blist=[7])
>>> copy.deepcopy(x)
TTuple(id=0, a=1, blist=[7])
>>> NTEnum.NONE.value.blist
[]
```
</details>
<details>
<summary>Python 3.11.1 (tags/v3.11.1:a7a450f, Dec 6 2022, 19:58:39) [MSC v.1934 64 bit (AMD64)] on win32</summary>
```py
>>> from enum import Enum
>>> from collections import namedtuple
>>> TTuple = namedtuple('TTuple', 'id a blist')
>>> class NTEnum(Enum):
...     NONE = TTuple(0, 0, [])
...     A = TTuple(1, 2, [4])
...     B = TTuple(2, 4, [0, 1, 2])
...
>>> NTEnum.NONE
<NTEnum.NONE: (0, 0, [])>
>>> NTEnum.NONE.value
(0, 0, [])
>>> [x.value for x in NTEnum]
[(0, 0, []), (1, 2, [4]), (2, 4, [0, 1, 2])]
>>> import copy
>>> x = TTuple(0, 1, [7])
>>> x
TTuple(id=0, a=1, blist=[7])
>>> copy.copy(x)
TTuple(id=0, a=1, blist=[7])
>>> copy.deepcopy(x)
TTuple(id=0, a=1, blist=[7])
>>> NTEnum.NONE.value.blist
Traceback (most recent call last):
File "<pyshell#16>", line 1, in <module>
NTEnum.NONE.value.blist
AttributeError: 'tuple' object has no attribute 'blist'
```
</details>
# Your environment
- CPython versions tested on: 3.11.0, 3.11.1
- Operating system and architecture: win64 (amd64)
- 3.11.0 additionally tested on linux
- 3.11.0, 3.11.1 tested in IDLE
<!-- gh-linked-prs -->
### Linked PRs
* gh-100099
* gh-100102
<!-- /gh-linked-prs -->
| ded02ca54d7bfa32c8eab0871d56e4547cd356eb | cce836296016463032495c6ca739ab469ed13d3c |
python/cpython | python__cpython-100078 | # test_code.test_invalid_bytecode is a bit cryptic and flaky
This test checks what happens when we try to execute a function with invalid bytecode. It hand-crafts the function's code, which is hard to read and maintain as bytecode changes. It also assumes that 238 is an invalid opcode, which it should at least assert.
<!-- gh-linked-prs -->
### Linked PRs
* gh-100078
<!-- /gh-linked-prs -->
| f3e97c90ed6f82fce67b0e8757eec54908ba49ce | 68e41295b8611a990de68f15c89f1eb3dea51867 |
python/cpython | python__cpython-100074 | # Netlify builds happen for PRs that do not change docs
# Bug report
After https://github.com/python/cpython/pull/92852 all PRs get a notification from `netlify` about new doc builds. Example: https://github.com/python/cpython/pull/100070 (notice no `Doc/` changes made)
I don't think it is right:
1. We are wasting resources for no good reason
2. It is a noise for both contributors and core devs
But, `netlify` has `build.ignore` option that can work similarly to these lines: https://github.com/python/cpython/blob/7031275776f43c76231318c2158a7a2753bc1fba/.github/workflows/doc.yml#L23-L26
Docs: https://docs.netlify.com/configure-builds/ignore-builds/
I will send a PR to test it :)
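For reference, a hedged `netlify.toml` sketch of what this could look like (the `$CACHED_COMMIT_REF`/`$COMMIT_REF` variables come from Netlify's docs; the exact command is an assumption, not a verified configuration):

```toml
[build]
  # A zero exit status tells Netlify to skip the build, so this only
  # builds when something under Doc/ changed between the two refs.
  ignore = "git diff --quiet $CACHED_COMMIT_REF $COMMIT_REF -- Doc/"
```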
<!-- gh-linked-prs -->
### Linked PRs
* gh-100074
<!-- /gh-linked-prs -->
| d92407ed497e3fc5acacb0294ab6095013e600f4 | f3e97c90ed6f82fce67b0e8757eec54908ba49ce |
python/cpython | python__cpython-100063 | # Error code tables in _ssl and err_name_to_codes are no longer used
(Filing this for a PR I'll upload shortly.)
Prior to https://github.com/python/cpython/pull/25300, the make_ssl_data.py script used various tables, exposed in `_ssl`, to update the error list. After that PR, these are no longer used and we can trim that a bit. This gets them out of the way if, in the future, OpenSSL provides an API to do what the code here is doing directly. (https://github.com/openssl/openssl/issues/19848)
<!-- gh-linked-prs -->
### Linked PRs
* gh-100063
<!-- /gh-linked-prs -->
| 02f9920900551fd0281c8989d65521d4fce4ead1 | 5ffc1e5a21de9a30566095386236db44695d184a |
python/cpython | python__cpython-102612 | # Extra characters erroneously matched when using possessive quantifier with negative lookahead
# Bug report
Regular expressions that combine a possessive quantifier with a negative lookahead match extra erroneous characters in re module 2.2.1 of Python 3.11. (The test was run on Windows 10 using the official distribution of Python 3.11.0.)
For example, the following regular expression aims to match consecutive characters that are not 'C' in string 'ABC'. (There are simpler ways to do this, but this is just an example to illustrate the problem.)
```python
import re
text = 'ABC'
print('Possessive quantifier, negative lookahead:',
      re.findall('(((?!C).)++)', text))
```
Output:
```
Possessive quantifier, negative lookahead: [('ABC', 'B')]
```
The first subgroup of the match is the entire match, while the second subgroup is the last character that was matched. They should be 'AB' and 'B', respectively. While the last matched character is correctly identified as 'B', the complete match is erroneously set to 'ABC'.
Replacing the negative lookahead with a positive lookahead eliminates the problem:
```python
print('Possessive quantifier, positive lookahead:',
      re.findall('(((?=[^C]).)++)', text))
```
Output:
```
Possessive quantifier, positive lookahead: [('AB', 'B')]
```
Alternately, keeping the negative lookahead but replacing the possessive quantifier with a greedy quantifier also eliminates the problem:
```python
print('Greedy quantifier, negative lookahead:',
      re.findall('(((?!C).)+)', text))
```
Output:
```
Greedy quantifier, negative lookahead: [('AB', 'B')]
```
While this example uses the ++ quantifier, the *+ and ?+ quantifiers exhibit similar behaviour. Also, using a longer pattern in the negative lookahead leads to even more characters being erroneously matched.
Thank you for adding possessive quantifiers to the re module! It is a very useful feature!
# Environment
- re module 2.2.1 in standard library
- CPython versions tested on: 3.11.0
- Operating system and architecture: Windows 10
<!-- gh-linked-prs -->
### Linked PRs
* gh-102612
* gh-108003
* gh-108004
<!-- /gh-linked-prs -->
| abd9cc52d94b8e2835322b62c29f09bb0e6fcfe9 | a86df298df5b02e2d69ea6879e9ed10a7adb85d0 |
python/cpython | python__cpython-100065 | # C assertion error from the runtime while expecting a SyntaxError
# Crash report
In a `--with-pydebug` build, run the following code:
```
$ cat ~/tmp/t.py
import ast
ast.parse("""
func(
    a=["unclosed],  # Need a quote in this comment: "
    b=2,
)
""")
$ ./python ~/tmp/t.py
```
# Error messages
```
python: Objects/call.c:324: _PyObject_Call: Assertion `!_PyErr_Occurred(tstate)' failed.
fish: Job 1, './python ~/tmp/t.py' terminated by signal SIGABRT (Abort)
```
(How do I obtain a core dump?)
I think the error happens inside this call: https://github.com/python/cpython/blob/5c19050546e3e37a8889a0baa2954e1444e803d3/Parser/pegen_errors.c#L175
What happened is that the [`_PyTokenizer_Get(p->tok, &new_token)` call](https://github.com/python/cpython/blob/5c19050546e3e37a8889a0baa2954e1444e803d3/Parser/pegen_errors.c#L170) earlier also sets an error [here](https://github.com/python/cpython/blob/417206a05c4545bde96c2bbbea92b53e6cac0d48/Parser/tokenizer.c#L2137-L2148).
# Your environment
- CPython versions tested on: main branch at https://github.com/python/cpython/commit/5c19050546e3e37a8889a0baa2954e1444e803d3, but this also happens since Python 3.10
- Operating system and architecture: Linux x86_64
<!-- gh-linked-prs -->
### Linked PRs
* gh-100065
* gh-100067
* gh-100073
<!-- /gh-linked-prs -->
| 97e7004cfe48305bcd642c653b406dc7470e196d | abbe4482ab914d6da8b1678ad46174cb875ed7f6 |
python/cpython | python__cpython-100052 | # Dictionary view objects (dictview) - incorrect example
# Documentation
Link to the doc:
[Dictionary view objects](https://docs.python.org/3/library/stdtypes.html#dictionary-view-objects)
Look at the examples. The output for `values.mapping` (near the end) now is:
```
mappingproxy({'eggs': 2, 'sausage': 1, 'bacon': 1, 'spam': 500})
```
It seems that the correct output should be:
```
mappingproxy({'bacon': 1, 'spam': 500})
```
Belongs to:
- python 3.10
- python 3.11
<!-- gh-linked-prs -->
### Linked PRs
* gh-100052
* gh-100155
* gh-100156
<!-- /gh-linked-prs -->
| 7c0fb71fbfa8682f56c15832e2c793a6180f2ec0 | 7a0f3c1d92ef0768e082ace19d970b0ef12e7346 |
python/cpython | python__cpython-100168 | # [Enum] `__text_signature__` of `EnumType.__call__` and derivatives
`EnumType.__call__` is a dual-purpose method:
- create a new enum class (functional API, and only valid if the parent enum has no members)
- look up an existing member (only valid if enum has members)
Enhancement: Have the appropriate `__text_signature__` set for each enum/flag.
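A quick sketch of the two call forms (standard `enum` usage, nothing new):

```python
from enum import Enum

class Color(Enum):
    RED = 1
    GREEN = 2

# Member lookup — valid only because Color already has members:
assert Color(1) is Color.RED

# Functional API — creating a new enum by calling a memberless Enum:
Animal = Enum('Animal', ['CAT', 'DOG'])
assert Animal['DOG'].value == 2
```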
<!-- gh-linked-prs -->
### Linked PRs
* gh-100168
<!-- /gh-linked-prs -->
| eafd14fbe0fd464b9d700f6d00137415193aa143 | b2a7272408593355c4c8e1d2ce9018cf96691bea |
python/cpython | python__cpython-100027 | # Include number of raw stats files in summarize_stats.py output
Mostly for confirmation / debugging purposes, the output of `summarize_stats.py` should include the number of raw input files in `/tmp/py_stats`.
<!-- gh-linked-prs -->
### Linked PRs
* gh-100027
<!-- /gh-linked-prs -->
| 9dc787ea96916552695e79397588fdfa68f22024 | 5c19050546e3e37a8889a0baa2954e1444e803d3 |