| repo | instance_id | problem_statement | merge_commit | base_commit |
|---|---|---|---|---|
python/cpython | python__cpython-124401 | # Use the normal code path for breakpoint commands in pdb
# Feature or enhancement
### Proposal:
Currently the structure supporting breakpoint commands is too heavy. We keep `self.commands_doprompt` and `self.commands_silent`, which are not necessary. Also, `bp_commands` currently uses its own loop to execute commands - we should avoid this as much as possible, because a duplicate code path may have dark corners, for example if in the future we add commands that insert entries at the beginning of the `cmdqueue`.
We should avoid having many exceptions for this command and use the existing structure to run the command in breakpoint commands.
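The "existing structure" referred to here is the `cmdqueue` mechanism that `pdb.Pdb` inherits from `cmd.Cmd`: lines pushed onto the queue are consumed by the normal command loop before any prompt is shown. A minimal sketch of that mechanism, using a plain `cmd.Cmd` subclass rather than pdb itself:

```python
import cmd

class Shell(cmd.Cmd):
    prompt = ''  # no prompt needed; all input comes from the queue

    def __init__(self):
        super().__init__()
        self.log = []

    def do_greet(self, arg):
        # A normal command, dispatched through the regular onecmd() path
        self.log.append(f"greet {arg}")

    def do_quit(self, arg):
        return True  # returning True stops cmdloop()

sh = Shell()
# Preloaded commands are popped off cmdqueue by cmdloop() before it
# ever tries to read from stdin.
sh.cmdqueue = ['greet world', 'quit']
sh.cmdloop()
print(sh.log)  # → ['greet world']
```

Routing breakpoint commands through this queue reuses the normal dispatch path instead of a hand-rolled execution loop.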
### Has this already been discussed elsewhere?
This is a minor feature, which does not need previous discussion elsewhere
### Links to previous discussion of this feature:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-124401
<!-- /gh-linked-prs -->
| b5774603a0c877f19b33fb922e2fb967b1d50329 | 4b83c03ce964af7fb204144db9adaa524c113a82 |
python/cpython | python__cpython-124399 | # Pin LLVM 18.1.0 for JIT CI (Windows)
Looks like Chocolatey has added LLVM 19.1.0, so we need to pin the LLVM version used by the JIT CI to 18.1.0.
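A sketch of what such a pin might look like as a GitHub Actions step; the step name and workflow context are assumptions (only the 18.1.0 version comes from the issue title), and the `choco install` flags shown are standard Chocolatey options:

```yaml
# Hypothetical CI step; the real workflow file and step names may differ.
- name: Install pinned LLVM (Windows)
  run: choco install llvm --version=18.1.0 --allow-downgrade -y
```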
<!-- gh-linked-prs -->
### Linked PRs
* gh-124399
* gh-129380
<!-- /gh-linked-prs -->
| b4d0d7de0f6d938128bf525e119c18af5632b804 | 20ccda000b5f8365d5f864fd07876804157c2378 |
python/cpython | python__cpython-124386 | # Document PyLong_AS_LONG
`PyLong_AS_LONG` is undocumented. It should be documented, so people know how to replace it.
See also: https://github.com/capi-workgroup/decisions/issues/38
<!-- gh-linked-prs -->
### Linked PRs
* gh-124386
* gh-124719
* gh-130549
<!-- /gh-linked-prs -->
| 425587a110eb214a097c634d4b6d944ac478923e | 1ba35ea38562bfc0301bab4e098aa124e114b886 |
python/cpython | python__cpython-124542 | # New test_ttk failure on Mac: "bad screen distance"
# Bug report
### Bug description:
I've been running with some patches from @ronaldoussoren to the following:
```
modified: Lib/test/support/__init__.py
modified: Lib/test/test_ttk/test_style.py
modified: Lib/test/test_ttk/test_widgets.py
```
(See attached diff.)
[test_ttk.txt](https://github.com/user-attachments/files/17104079/test_ttk.txt)
Everything was working fine, but today I began to get a repeatable failure in `test_ttk`:
```
======================================================================
FAIL: test_configure_height (test.test_ttk.test_widgets.NotebookTest.test_configure_height)
----------------------------------------------------------------------
_tkinter.TclError: bad screen distance ""
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Users/skip/src/python/cpython/Lib/test/test_tkinter/widget_tests.py", line 539, in test_configure_height
self.checkIntegerParam(widget, 'height', 100, -100, 0)
~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/skip/src/python/cpython/Lib/test/test_tkinter/widget_tests.py", line 81, in checkIntegerParam
self.checkInvalidParam(widget, name, '', errmsg=errmsg)
~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/skip/src/python/cpython/Lib/test/test_tkinter/widget_tests.py", line 67, in checkInvalidParam
with self.assertRaisesRegex(tkinter.TclError, errmsg or ''):
~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AssertionError: "\Aexpected integer but got ""\Z" does not match "bad screen distance """
======================================================================
FAIL: test_configure_width (test.test_ttk.test_widgets.NotebookTest.test_configure_width)
----------------------------------------------------------------------
_tkinter.TclError: bad screen distance ""
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Users/skip/src/python/cpython/Lib/test/test_tkinter/widget_tests.py", line 543, in test_configure_width
self.checkIntegerParam(widget, 'width', 402, -402, 0)
~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/skip/src/python/cpython/Lib/test/test_tkinter/widget_tests.py", line 81, in checkIntegerParam
self.checkInvalidParam(widget, name, '', errmsg=errmsg)
~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/skip/src/python/cpython/Lib/test/test_tkinter/widget_tests.py", line 67, in checkInvalidParam
with self.assertRaisesRegex(tkinter.TclError, errmsg or ''):
~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AssertionError: "\Aexpected integer but got ""\Z" does not match "bad screen distance """
----------------------------------------------------------------------
Ran 342 tests in 6.364s
```
I'm putting this out there with no further investigation. I doubt it will be a release blocker for 3.13, but if so, maybe the sprinters can find and fix the problem quickly.
### CPython versions tested on:
3.12, 3.13, CPython main branch
### Operating systems tested on:
macOS
<!-- gh-linked-prs -->
### Linked PRs
* gh-124542
* gh-124544
* gh-124545
<!-- /gh-linked-prs -->
| fb6bd31cb74d2f7e7b525ee4fe9f45475fc94ce9 | d6954b6421aa34afd280df9c44ded21a2348a6ea |
python/cpython | python__cpython-124459 | # Crash running PyO3 tests with --test-threads=1000
# Crash report
### What happened?
Unfortunately I don't know how to make this more minimal than "run the PyO3 tests".
You'll need rust 1.81 installed along with a copy of the latest version of PyO3 from github and a free-threaded Python interpreter.
Repeatedly run the PyO3 tests with `UNSAFE_PYO3_BUILD_FREE_THREADED=1 cargo test --lib -- --test-threads=1000`, and eventually you'll see a seg fault. If you run the test executable under a debugger, you'll see a C traceback like this, crashing on an atomic load inside the GC internals:
<details>
```
* thread #2, name = 'buffer::tests::test_array_buffer', stop reason = EXC_BAD_ACCESS (code=1, address=0x8)
* frame #0: 0x0000000101a1d7d4 libpython3.13td.dylib`_Py_qbsr_goal_reached(qsbr=0x0000000000000000, goal=9) at pycore_qsbr.h:103:53
frame #1: 0x0000000101a1d714 libpython3.13td.dylib`_Py_qsbr_poll(qsbr=0x0000000000000000, goal=9) at qsbr.c:165:9
frame #2: 0x00000001017dd814 libpython3.13td.dylib`process_queue(head=0x0000000101d148d8, qsbr=0x0000000000000000, keep_empty=false) at obmalloc.c:1188:18
frame #3: 0x00000001017dd958 libpython3.13td.dylib`process_interp_queue(queue=0x0000000101d148d0, qsbr=0x0000000000000000) at obmalloc.c:1218:9
frame #4: 0x00000001017dd778 libpython3.13td.dylib`_PyMem_ProcessDelayed(tstate=0x000000013f91f610) at obmalloc.c:1237:5
frame #5: 0x00000001019a8a98 libpython3.13td.dylib`process_delayed_frees(interp=0x0000000101cff940) at gc_free_threading.c:351:9
frame #6: 0x00000001019a86f8 libpython3.13td.dylib`gc_collect_internal(interp=0x0000000101cff940, state=0x000000016fff6698, generation=0) at gc_free_threading.c:1109:5
frame #7: 0x00000001019a5d20 libpython3.13td.dylib`gc_collect_main(tstate=0x0000000101d2fe40, generation=0, reason=_Py_GC_REASON_HEAP) at gc_free_threading.c:1225:5
frame #8: 0x00000001019a6894 libpython3.13td.dylib`_Py_RunGC(tstate=0x0000000101d2fe40) at gc_free_threading.c:1684:5
frame #9: 0x00000001019b8944 libpython3.13td.dylib`_Py_HandlePending(tstate=0x0000000101d2fe40) at ceval_gil.c:1296:9
frame #10: 0x0000000101933f38 libpython3.13td.dylib`_PyEval_EvalFrameDefault(tstate=0x0000000101d2fe40, frame=0x0000000100cc0be8, throwflag=0) at generated_cases.c.h:846:13
frame #11: 0x000000010192fbbc libpython3.13td.dylib`_PyEval_EvalFrame(tstate=0x0000000101d2fe40, frame=0x0000000100cc0878, throwflag=0) at pycore_ceval.h:119:16
frame #12: 0x000000010192fb28 libpython3.13td.dylib`_PyEval_Vector(tstate=0x0000000101d2fe40, func=0x0000020000404a50, locals='0x16fff91a0', args='0x16fff9198', argcount=2, kwnames='0x16fff9188') at ceval.c:1806:12
frame #13: 0x000000010171a8a8 libpython3.13td.dylib`_PyFunction_Vectorcall(func='0x16fff9210', stack='0x16fff9208', nargsf=2, kwnames='0x16fff91f8') at call.c:413:16
frame #14: 0x0000000101718f4c libpython3.13td.dylib`_PyObject_VectorcallTstate(tstate=0x0000000101d2fe40, callable='0x16fff9268', args='0x16fff9260', nargsf=2, kwnames='0x16fff9250') at pycore_call.h:168:11
frame #15: 0x000000010171bb88 libpython3.13td.dylib`object_vacall(tstate=0x0000000101d2fe40, base='0x16fff92e8', callable='0x16fff92e0', vargs="") at call.c:819:14
frame #16: 0x000000010171b998 libpython3.13td.dylib`PyObject_CallMethodObjArgs(obj='0x16fff9380', name=0x0000000101cf2308) at call.c:880:24
frame #17: 0x00000001019c64a8 libpython3.13td.dylib`import_find_and_load(tstate=0x0000000101d2fe40, abs_name=0x0000020000a8b770) at import.c:3675:11
frame #18: 0x00000001019c578c libpython3.13td.dylib`PyImport_ImportModuleLevelObject(name=0x0000020000a8b770, globals=0x0000020000a922f0, locals=0x0000020000a922f0, fromlist=0x00000200000addf0, level=0) at import.c:3757:15
frame #19: 0x0000000101950ad8 libpython3.13td.dylib`import_name(tstate=0x0000000101d2fe40, frame=0x0000000100cc0808, name=0x0000020000a8b770, fromlist=0x00000200000addf0, level=0) at ceval.c:2693:16
frame #20: 0x000000010193f020 libpython3.13td.dylib`_PyEval_EvalFrameDefault(tstate=0x0000000101d2fe40, frame=0x0000000100cc0808, throwflag=0) at generated_cases.c.h:3201:19
frame #21: 0x000000010192fbbc libpython3.13td.dylib`_PyEval_EvalFrame(tstate=0x0000000101d2fe40, frame=0x0000000100cc0808, throwflag=0) at pycore_ceval.h:119:16
frame #22: 0x000000010192fb28 libpython3.13td.dylib`_PyEval_Vector(tstate=0x0000000101d2fe40, func=0x000002000079fcd0, locals=0x0000020000a922f0, args='0x16fffc048', argcount=0, kwnames='0x16fffc038') at ceval.c:1806:12
frame #23: 0x000000010192f99c libpython3.13td.dylib`PyEval_EvalCode(co='0x16fffc0f0', globals=0x0000020000a922f0, locals=0x0000020000a922f0) at ceval.c:596:21
frame #24: 0x000000010192a7bc libpython3.13td.dylib`builtin_exec_impl(module='0x16fffc1d0', source='0x16fffc1c8', globals=0x0000020000a922f0, locals=0x0000020000a922f0, closure='0x16fffc1b0') at bltinmodule.c:1145:17
frame #25: 0x0000000101927920 libpython3.13td.dylib`builtin_exec(module='0x16fffc260', args='0x16fffc258', nargs=2, kwnames='0x16fffc248') at bltinmodule.c.h:556:20
frame #26: 0x00000001017b3cd8 libpython3.13td.dylib`cfunction_vectorcall_FASTCALL_KEYWORDS(func='0x16fffc2e0', args='0x16fffc2d8', nargsf=2, kwnames='0x16fffc2c8') at methodobject.c:441:24
frame #27: 0x000000010171a284 libpython3.13td.dylib`_PyVectorcall_Call(tstate=0x0000000101d2fe40, func=(libpython3.13td.dylib`cfunction_vectorcall_FASTCALL_KEYWORDS at methodobject.c:433), callable='0x16fffc350', tuple=0x0000020000a96790, kwargs={}) at call.c:273:16
frame #28: 0x000000010171a52c libpython3.13td.dylib`_PyObject_Call(tstate=0x0000000101d2fe40, callable='0x16fffc3b8', args=0x0000020000a96790, kwargs={}) at call.c:348:16
frame #29: 0x000000010171a614 libpython3.13td.dylib`PyObject_Call(callable='0x16fffc3f8', args=0x0000020000a96790, kwargs={}) at call.c:373:12
frame #30: 0x0000000101936954 libpython3.13td.dylib`_PyEval_EvalFrameDefault(tstate=0x0000000101d2fe40, frame=0x0000000100cc0780, throwflag=0) at generated_cases.c.h:1355:26
frame #31: 0x000000010192fbbc libpython3.13td.dylib`_PyEval_EvalFrame(tstate=0x0000000101d2fe40, frame=0x0000000100cc0508, throwflag=0) at pycore_ceval.h:119:16
frame #32: 0x000000010192fb28 libpython3.13td.dylib`_PyEval_Vector(tstate=0x0000000101d2fe40, func=0x0000020000404a50, locals='0x16fffee20', args='0x16fffee18', argcount=2, kwnames='0x16fffee08') at ceval.c:1806:12
frame #33: 0x000000010171a8a8 libpython3.13td.dylib`_PyFunction_Vectorcall(func='0x16fffee90', stack='0x16fffee88', nargsf=2, kwnames='0x16fffee78') at call.c:413:16
frame #34: 0x0000000101718f4c libpython3.13td.dylib`_PyObject_VectorcallTstate(tstate=0x0000000101d2fe40, callable='0x16fffeee8', args='0x16fffeee0', nargsf=2, kwnames='0x16fffeed0') at pycore_call.h:168:11
frame #35: 0x000000010171bb88 libpython3.13td.dylib`object_vacall(tstate=0x0000000101d2fe40, base='0x16fffef68', callable='0x16fffef60', vargs="") at call.c:819:14
frame #36: 0x000000010171b998 libpython3.13td.dylib`PyObject_CallMethodObjArgs(obj='0x16ffff000', name=0x0000000101cf2308) at call.c:880:24
frame #37: 0x00000001019c64a8 libpython3.13td.dylib`import_find_and_load(tstate=0x0000000101d2fe40, abs_name=0x0000020000a76b40) at import.c:3675:11
frame #38: 0x00000001019c578c libpython3.13td.dylib`PyImport_ImportModuleLevelObject(name=0x0000020000a76b40, globals='0x16ffff200', locals='0x16ffff1f8', fromlist='0x16ffff1f0', level=0) at import.c:3757:15
frame #39: 0x0000000101929850 libpython3.13td.dylib`builtin___import___impl(module='0x16ffff248', name=0x0000020000a76b40, globals='0x16ffff238', locals='0x16ffff230', fromlist='0x16ffff228', level=0) at bltinmodule.c:277:12
frame #40: 0x0000000101926898 libpython3.13td.dylib`builtin___import__(module='0x16ffff2d8', args='0x16ffff2d0', nargs=1, kwnames='0x16ffff2c0') at bltinmodule.c.h:107:20
frame #41: 0x00000001017b3cd8 libpython3.13td.dylib`cfunction_vectorcall_FASTCALL_KEYWORDS(func='0x16ffff360', args='0x16ffff358', nargsf=1, kwnames='0x16ffff348') at methodobject.c:441:24
frame #42: 0x000000010171a284 libpython3.13td.dylib`_PyVectorcall_Call(tstate=0x0000000101d2fe40, func=(libpython3.13td.dylib`cfunction_vectorcall_FASTCALL_KEYWORDS at methodobject.c:433), callable='0x16ffff3d0', tuple=0x00000200000ad760, kwargs={}) at call.c:273:16
frame #43: 0x000000010171a52c libpython3.13td.dylib`_PyObject_Call(tstate=0x0000000101d2fe40, callable='0x16ffff438', args=0x00000200000ad760, kwargs={}) at call.c:348:16
frame #44: 0x000000010171a614 libpython3.13td.dylib`PyObject_Call(callable='0x16ffff478', args=0x00000200000ad760, kwargs={}) at call.c:373:12
frame #45: 0x0000000101936954 libpython3.13td.dylib`_PyEval_EvalFrameDefault(tstate=0x0000000101d2fe40, frame=0x0000000100cc0480, throwflag=0) at generated_cases.c.h:1355:26
frame #46: 0x000000010192fbbc libpython3.13td.dylib`_PyEval_EvalFrame(tstate=0x0000000101d2fe40, frame=0x0000000100cc0318, throwflag=0) at pycore_ceval.h:119:16
frame #47: 0x000000010192fb28 libpython3.13td.dylib`_PyEval_Vector(tstate=0x0000000101d2fe40, func=0x0000020000404a50, locals='0x170001ea0', args='0x170001e98', argcount=2, kwnames='0x170001e88') at ceval.c:1806:12
frame #48: 0x000000010171a8a8 libpython3.13td.dylib`_PyFunction_Vectorcall(func='0x170001f10', stack='0x170001f08', nargsf=2, kwnames='0x170001ef8') at call.c:413:16
frame #49: 0x0000000101718f4c libpython3.13td.dylib`_PyObject_VectorcallTstate(tstate=0x0000000101d2fe40, callable='0x170001f68', args='0x170001f60', nargsf=2, kwnames='0x170001f50') at pycore_call.h:168:11
frame #50: 0x000000010171bb88 libpython3.13td.dylib`object_vacall(tstate=0x0000000101d2fe40, base='0x170001fe8', callable='0x170001fe0', vargs="") at call.c:819:14
frame #51: 0x000000010171b998 libpython3.13td.dylib`PyObject_CallMethodObjArgs(obj='0x170002080', name=0x0000000101cf2308) at call.c:880:24
frame #52: 0x00000001019c64a8 libpython3.13td.dylib`import_find_and_load(tstate=0x0000000101d2fe40, abs_name=0x0000020000a76de0) at import.c:3675:11
frame #53: 0x00000001019c578c libpython3.13td.dylib`PyImport_ImportModuleLevelObject(name=0x0000020000a76de0, globals=0x0000020000370c70, locals=0x0000020000370c70, fromlist=[], level=0) at import.c:3757:15
frame #54: 0x0000000101929850 libpython3.13td.dylib`builtin___import___impl(module='0x1700022c8', name=0x0000020000a76de0, globals=0x0000020000370c70, locals=0x0000020000370c70, fromlist=[], level=0) at bltinmodule.c:277:12
frame #55: 0x0000000101926898 libpython3.13td.dylib`builtin___import__(module='0x170002358', args='0x170002350', nargs=5, kwnames='0x170002340') at bltinmodule.c.h:107:20
frame #56: 0x00000001017b3cd8 libpython3.13td.dylib`cfunction_vectorcall_FASTCALL_KEYWORDS(func='0x1700023e0', args='0x1700023d8', nargsf=5, kwnames='0x1700023c8') at methodobject.c:441:24
frame #57: 0x0000000101718f4c libpython3.13td.dylib`_PyObject_VectorcallTstate(tstate=0x0000000101d2fe40, callable='0x170002438', args='0x170002430', nargsf=5, kwnames='0x170002420') at pycore_call.h:168:11
frame #58: 0x000000010171afd4 libpython3.13td.dylib`_PyObject_CallFunctionVa(tstate=0x0000000101d2fe40, callable='0x1700024c8', format="OOOOi", va="\xe0m\xa7") at call.c:552:18
frame #59: 0x000000010171ae00 libpython3.13td.dylib`PyObject_CallFunction(callable='0x170002548', format="OOOOi") at call.c:574:14
frame #60: 0x00000001019c5418 libpython3.13td.dylib`PyImport_Import(module_name=0x0000020000a76de0) at import.c:3942:9
frame #61: 0x00000001019c72e8 libpython3.13td.dylib`_PyImport_GetModuleAttr(modname=0x0000020000a76de0, attrname=0x0000020000a76e50) at import.c:4173:21
frame #62: 0x00000001019c73cc libpython3.13td.dylib`_PyImport_GetModuleAttrString(modname="collections.abc", attrname="MutableSequence") at import.c:4194:24
frame #63: 0x000000010717f76c array.cpython-313td-darwin.so`array_modexec(m='0x1700026e8') at arraymodule.c:3189:33
frame #64: 0x00000001017b71c8 libpython3.13td.dylib`PyModule_ExecDef(module='0x170002770', def=0x000000010718c300) at moduleobject.c:489:23
frame #65: 0x00000001019c9e70 libpython3.13td.dylib`exec_builtin_or_dynamic(mod='0x1700027a0') at import.c:808:12
frame #66: 0x00000001019cd5f8 libpython3.13td.dylib`_imp_exec_dynamic_impl(module='0x1700027c8', mod='0x1700027c0') at import.c:4739:12
frame #67: 0x00000001019cbe8c libpython3.13td.dylib`_imp_exec_dynamic(module='0x1700027f8', mod='0x1700027f0') at import.c.h:513:21
frame #68: 0x00000001017b3f74 libpython3.13td.dylib`cfunction_vectorcall_O(func='0x170002870', args='0x170002868', nargsf=1, kwnames='0x170002858') at methodobject.c:512:24
frame #69: 0x000000010171a284 libpython3.13td.dylib`_PyVectorcall_Call(tstate=0x0000000101d2fe40, func=(libpython3.13td.dylib`cfunction_vectorcall_O at methodobject.c:493), callable='0x1700028e0', tuple=('0x200000a9868',), kwargs={}) at call.c:273:16
frame #70: 0x000000010171a52c libpython3.13td.dylib`_PyObject_Call(tstate=0x0000000101d2fe40, callable='0x170002948', args=('0x200000a9868',), kwargs={}) at call.c:348:16
frame #71: 0x000000010171a614 libpython3.13td.dylib`PyObject_Call(callable='0x170002988', args=('0x200000a9868',), kwargs={}) at call.c:373:12
frame #72: 0x0000000101936954 libpython3.13td.dylib`_PyEval_EvalFrameDefault(tstate=0x0000000101d2fe40, frame=0x0000000100cc0290, throwflag=0) at generated_cases.c.h:1355:26
frame #73: 0x000000010192fbbc libpython3.13td.dylib`_PyEval_EvalFrame(tstate=0x0000000101d2fe40, frame=0x0000000100cc0020, throwflag=0) at pycore_ceval.h:119:16
frame #74: 0x000000010192fb28 libpython3.13td.dylib`_PyEval_Vector(tstate=0x0000000101d2fe40, func=0x0000020000404a50, locals='0x1700053b0', args='0x1700053a8', argcount=2, kwnames='0x170005398') at ceval.c:1806:12
frame #75: 0x000000010171a8a8 libpython3.13td.dylib`_PyFunction_Vectorcall(func='0x170005420', stack='0x170005418', nargsf=2, kwnames='0x170005408') at call.c:413:16
frame #76: 0x0000000101718f4c libpython3.13td.dylib`_PyObject_VectorcallTstate(tstate=0x0000000101d2fe40, callable='0x170005478', args='0x170005470', nargsf=2, kwnames='0x170005460') at pycore_call.h:168:11
frame #77: 0x000000010171bb88 libpython3.13td.dylib`object_vacall(tstate=0x0000000101d2fe40, base='0x1700054f8', callable='0x1700054f0', vargs="") at call.c:819:14
frame #78: 0x000000010171b998 libpython3.13td.dylib`PyObject_CallMethodObjArgs(obj='0x170005590', name=0x0000000101cf2308) at call.c:880:24
frame #79: 0x00000001019c64a8 libpython3.13td.dylib`import_find_and_load(tstate=0x0000000101d2fe40, abs_name=0x0000020000a85530) at import.c:3675:11
frame #80: 0x00000001019c578c libpython3.13td.dylib`PyImport_ImportModuleLevelObject(name=0x0000020000a85530, globals=0x0000020000696df0, locals=0x0000020000696df0, fromlist=[], level=0) at import.c:3757:15
frame #81: 0x0000000101929850 libpython3.13td.dylib`builtin___import___impl(module='0x1700057d8', name=0x0000020000a85530, globals=0x0000020000696df0, locals=0x0000020000696df0, fromlist=[], level=0) at bltinmodule.c:277:12
frame #82: 0x0000000101926898 libpython3.13td.dylib`builtin___import__(module='0x170005868', args='0x170005860', nargs=5, kwnames='0x170005850') at bltinmodule.c.h:107:20
frame #83: 0x00000001017b3cd8 libpython3.13td.dylib`cfunction_vectorcall_FASTCALL_KEYWORDS(func='0x1700058f0', args='0x1700058e8', nargsf=5, kwnames='0x1700058d8') at methodobject.c:441:24
frame #84: 0x0000000101718f4c libpython3.13td.dylib`_PyObject_VectorcallTstate(tstate=0x0000000101d2fe40, callable='0x170005948', args='0x170005940', nargsf=5, kwnames='0x170005930') at pycore_call.h:168:11
frame #85: 0x000000010171afd4 libpython3.13td.dylib`_PyObject_CallFunctionVa(tstate=0x0000000101d2fe40, callable='0x1700059d8', format="OOOOi", va="0U\xa8") at call.c:552:18
frame #86: 0x000000010171ae00 libpython3.13td.dylib`PyObject_CallFunction(callable='0x170005a58', format="OOOOi") at call.c:574:14
frame #87: 0x00000001019c5418 libpython3.13td.dylib`PyImport_Import(module_name=0x0000020000a85530) at import.c:3942:9
frame #88: 0x00000001001d1720 pyo3-97f8571082c6d82b`pyo3::types::module::PyModule::import::h10020d9fbc325799((null)=(__0 = core::marker::PhantomData<(&pyo3::gil::GILGuard, pyo3::impl_::not_send::NotSend)> @ 0x0000000170005b4f), name=(data_ptr = "arrayf", length = 5)) at module.rs:91:13
frame #89: 0x0000000100082e9c pyo3-97f8571082c6d82b`pyo3::marker::Python::import::h76b7d162714cdbe5((null)=(__0 = core::marker::PhantomData<(&pyo3::gil::GILGuard, pyo3::impl_::not_send::NotSend)> @ 0x0000000170005b8f), name=(data_ptr = "arrayf", length = 5)) at marker.rs:712:9
frame #90: 0x000000010018110c pyo3-97f8571082c6d82b`pyo3::buffer::tests::test_array_buffer::_$u7b$$u7b$closure$u7d$$u7d$::h5080eb33d98ca1ff((null)={closure_env#0} @ 0x00000001700063be, (null)=(__0 = core::marker::PhantomData<(&pyo3::gil::GILGuard, pyo3::impl_::not_send::NotSend)> @ 0x00000001700063bf)) at buffer.rs:889:25
frame #91: 0x000000010006b074 pyo3-97f8571082c6d82b`pyo3::marker::Python::with_gil::h5467a51b0b9ab8e3(f={closure_env#0} @ 0x000000017000679f) at marker.rs:409:9
frame #92: 0x00000001000aacac pyo3-97f8571082c6d82b`pyo3::buffer::tests::test_array_buffer::h09873f4901cd5a78 at buffer.rs:888:9
frame #93: 0x00000001001810c4 pyo3-97f8571082c6d82b`pyo3::buffer::tests::test_array_buffer::_$u7b$$u7b$closure$u7d$$u7d$::hcd0c85096f6a20d2((null)=0x00000001700067fe) at buffer.rs:887:27
frame #94: 0x00000001001586bc pyo3-97f8571082c6d82b`core::ops::function::FnOnce::call_once::h518ac7d9c7bd346c((null)={closure_env#0} @ 0x00000001700067fe, (null)=<unavailable>) at function.rs:250:5
frame #95: 0x000000010029af48 pyo3-97f8571082c6d82b`test::__rust_begin_short_backtrace::hc730144174a2f2b8 [inlined] core::ops::function::FnOnce::call_once::h0ba6e3d0adcb0fb8 at function.rs:250:5 [opt]
frame #96: 0x000000010029af40 pyo3-97f8571082c6d82b`test::__rust_begin_short_backtrace::hc730144174a2f2b8 at lib.rs:624:18 [opt]
frame #97: 0x000000010029a89c pyo3-97f8571082c6d82b`test::run_test::_$u7b$$u7b$closure$u7d$$u7d$::h3dade545f948edf5 [inlined] test::run_test_in_process::_$u7b$$u7b$closure$u7d$$u7d$::h9e544e0587f41be2 at lib.rs:647:60 [opt]
frame #98: 0x000000010029a890 pyo3-97f8571082c6d82b`test::run_test::_$u7b$$u7b$closure$u7d$$u7d$::h3dade545f948edf5 [inlined] _$LT$core..panic..unwind_safe..AssertUnwindSafe$LT$F$GT$$u20$as$u20$core..ops..function..FnOnce$LT$$LP$$RP$$GT$$GT$::call_once::h8389036b3da9abb4 at unwind_safe.rs:272:9 [opt]
frame #99: 0x000000010029a890 pyo3-97f8571082c6d82b`test::run_test::_$u7b$$u7b$closure$u7d$$u7d$::h3dade545f948edf5 [inlined] std::panicking::try::do_call::h6c15d214f8b0efc9 at panicking.rs:557:40 [opt]
frame #100: 0x000000010029a88c pyo3-97f8571082c6d82b`test::run_test::_$u7b$$u7b$closure$u7d$$u7d$::h3dade545f948edf5 [inlined] std::panicking::try::h4dec6d151c5d7c52 at panicking.rs:521:19 [opt]
frame #101: 0x000000010029a88c pyo3-97f8571082c6d82b`test::run_test::_$u7b$$u7b$closure$u7d$$u7d$::h3dade545f948edf5 [inlined] std::panic::catch_unwind::h08246350d14b78e5 at panic.rs:350:14 [opt]
frame #102: 0x000000010029a88c pyo3-97f8571082c6d82b`test::run_test::_$u7b$$u7b$closure$u7d$$u7d$::h3dade545f948edf5 [inlined] test::run_test_in_process::h5176f0d7330017af at lib.rs:647:27 [opt]
frame #103: 0x000000010029a808 pyo3-97f8571082c6d82b`test::run_test::_$u7b$$u7b$closure$u7d$$u7d$::h3dade545f948edf5 at lib.rs:568:43 [opt]
frame #104: 0x000000010026c9c8 pyo3-97f8571082c6d82b`std::sys::backtrace::__rust_begin_short_backtrace::h927c17d76a6dfbc6 [inlined] test::run_test::_$u7b$$u7b$closure$u7d$$u7d$::h331e167c3eb94f21 at lib.rs:598:41 [opt]
frame #105: 0x000000010026c940 pyo3-97f8571082c6d82b`std::sys::backtrace::__rust_begin_short_backtrace::h927c17d76a6dfbc6 at backtrace.rs:152:18 [opt]
frame #106: 0x000000010026fb2c pyo3-97f8571082c6d82b`core::ops::function::FnOnce::call_once$u7b$$u7b$vtable.shim$u7d$$u7d$::h4b1e8564c6e52dc1 [inlined] std::thread::Builder::spawn_unchecked_::_$u7b$$u7b$closure$u7d$$u7d$::_$u7b$$u7b$closure$u7d$$u7d$::h9a08c487421e7042 at mod.rs:538:17 [opt]
frame #107: 0x000000010026fb24 pyo3-97f8571082c6d82b`core::ops::function::FnOnce::call_once$u7b$$u7b$vtable.shim$u7d$$u7d$::h4b1e8564c6e52dc1 [inlined] _$LT$core..panic..unwind_safe..AssertUnwindSafe$LT$F$GT$$u20$as$u20$core..ops..function..FnOnce$LT$$LP$$RP$$GT$$GT$::call_once::hf6a2344f0bd24956 at unwind_safe.rs:272:9 [opt]
frame #108: 0x000000010026fb24 pyo3-97f8571082c6d82b`core::ops::function::FnOnce::call_once$u7b$$u7b$vtable.shim$u7d$$u7d$::h4b1e8564c6e52dc1 [inlined] std::panicking::try::do_call::hfdb1a93845faf3ef at panicking.rs:557:40 [opt]
frame #109: 0x000000010026fb24 pyo3-97f8571082c6d82b`core::ops::function::FnOnce::call_once$u7b$$u7b$vtable.shim$u7d$$u7d$::h4b1e8564c6e52dc1 [inlined] std::panicking::try::h4b35ce5ad8a162fd at panicking.rs:521:19 [opt]
frame #110: 0x000000010026fb24 pyo3-97f8571082c6d82b`core::ops::function::FnOnce::call_once$u7b$$u7b$vtable.shim$u7d$$u7d$::h4b1e8564c6e52dc1 [inlined] std::panic::catch_unwind::h3bb4d3ee2986e761 at panic.rs:350:14 [opt]
frame #111: 0x000000010026fb24 pyo3-97f8571082c6d82b`core::ops::function::FnOnce::call_once$u7b$$u7b$vtable.shim$u7d$$u7d$::h4b1e8564c6e52dc1 [inlined] std::thread::Builder::spawn_unchecked_::_$u7b$$u7b$closure$u7d$$u7d$::h5e36dd32c0d26255 at mod.rs:537:30 [opt]
frame #112: 0x000000010026fab0 pyo3-97f8571082c6d82b`core::ops::function::FnOnce::call_once$u7b$$u7b$vtable.shim$u7d$$u7d$::h4b1e8564c6e52dc1 at function.rs:250:5 [opt]
frame #113: 0x0000000100310d10 pyo3-97f8571082c6d82b`std::sys::pal::unix::thread::Thread::new::thread_start::h1bd1b9c95010bf71 [inlined] _$LT$alloc..boxed..Box$LT$F$C$A$GT$$u20$as$u20$core..ops..function..FnOnce$LT$Args$GT$$GT$::call_once::h01276ebbe54a8110 at boxed.rs:2070:9 [opt]
frame #114: 0x0000000100310d04 pyo3-97f8571082c6d82b`std::sys::pal::unix::thread::Thread::new::thread_start::h1bd1b9c95010bf71 [inlined] _$LT$alloc..boxed..Box$LT$F$C$A$GT$$u20$as$u20$core..ops..function..FnOnce$LT$Args$GT$$GT$::call_once::habad1faa89d23086 at boxed.rs:2070:9 [opt]
frame #115: 0x0000000100310d00 pyo3-97f8571082c6d82b`std::sys::pal::unix::thread::Thread::new::thread_start::h1bd1b9c95010bf71 at thread.rs:108:17 [opt]
frame #116: 0x0000000193ba1f94 libsystem_pthread.dylib`_pthread_start + 136
```
</details>
Not sure if it will be helpful, but the tracebacks from all of the threads are here: https://gist.github.com/ngoldbaum/d3f5bceba9554ba8347c40773446b08d
Happy to help with reproducing this if anyone has trouble getting PyO3 set up. I'm on the CPython discord in the #free-threading channel if chatting is easier.
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Linux, macOS
### Output from running 'python -VV' on the command line:
Python 3.13.0rc2 experimental free-threading build (main, Sep 23 2024, 13:34:50) [Clang 15.0.0 (clang-1500.3.9.4)]
<!-- gh-linked-prs -->
### Linked PRs
* gh-124459
* gh-125540
<!-- /gh-linked-prs -->
| 54c6fcbefd33a8d8bf8c004cf1aad3be3d37b933 | e97910cdb76c1f1dadfc4721b828611e4f4b6449 |
python/cpython | python__cpython-124371 | # Python "HOWTO" guide for free-threading
We already have a ["HOWTO" guide](https://docs.python.org/3.13/howto/free-threading-extensions.html#freethreading-extensions-howto) for free-threading C API extension authors.
We should have a similar guide aimed at people writing Python code. The HOWTO should document the current limitations of the free-threaded build.
<!-- gh-linked-prs -->
### Linked PRs
* gh-124371
* gh-124860
<!-- /gh-linked-prs -->
| 68e384c2179fba41bc3be469e6ef34927a37f4a5 | 7d24ea9db3e8fdca52058629c9ba577aba3d8e5c |
python/cpython | python__cpython-128399 | # Expression before `=` in an f-string is interpreted like a normal string with escape sequences
# Bug report
### Bug description:
When an f-string contains an `=` specifier, the characters before the equals sign are interpreted as if they were within a normal string. In particular, a backslash is always treated as starting an escape sequence, even inside raw string literals where it does not, which causes unexpected output.
```pycon
>>> print(f'{br'\N{OX}'=}')
br'🐂'=b'\\N{OX}'
>>> print(f'{b'\N{OX}'=}')
<unknown>:1: SyntaxWarning: invalid escape sequence '\N'
b'🐂'=b'\\N{OX}'
>>> print(f'{r'\xff'=}')
r'ÿ'='\\xff'
>>> print(f'{r'\n'=}')
r'
'='\\n'
>>> print(f'{'\''=}')
'''="'"
```
Those results are misleading because the expressions printed before the equals signs are either syntax errors or do not match the values printed after the equals signs. I expected these results:
```pycon
>>> print(f'{br'\N{OX}'=}')
br'\N{OX}'=b'\\N{OX}'
>>> print(f'{b'\N{OX}'=}')
<unknown>:1: SyntaxWarning: invalid escape sequence '\N'
b'\N{OX}'=b'\\N{OX}'
>>> print(f'{r'\xff'=}')
r'\xff'='\\xff'
>>> print(f'{r'\n'=}')
r'\n'='\\n'
>>> print(f'{'\''=}')
'\''="'"
```
Even when the result is not so misleading, it would still be more helpful for debugging if the output kept the expression unmodified, instead of interpreting the escape sequences. For example, this:
```pycon
>>> print(f'{'\xc5'=}')
'Å'='Å'
```
would be better as this:
```pycon
>>> print(f'{'\xc5'=}')
'\xc5'='Å'
```
### CPython versions tested on:
3.12, 3.13
### Operating systems tested on:
macOS
<!-- gh-linked-prs -->
### Linked PRs
* gh-128399
* gh-129187
* gh-129190
<!-- /gh-linked-prs -->
| 60a3a0dd6fe140fdc87f6e769ee5bb17d92efe4e | ba9a4b621577b92f36d88388cc9f791c2dc7d7ba |
python/cpython | python__cpython-124428 | # argparse: abbreviated single-dash long options do not work with =
`argparse` supports short options, single-dash long options and double-dash long options. A value for an option can be specified as a separate argument (for all types of options):
```
--option value
-option value
-o value
```
A value can also be combined with the option in a single argument. In that case it is separated from the option name by `=` (for short options, it is simply concatenated):
```
--option=value
-option=value
-ovalue
```
`argparse` also supports abbreviation of long options (enabled by default):
```
--opt value
-opt value
```
Abbreviated double-dash long options can be used with `=`:
```
--opt=value
```
But this does not work with single-dash long options:
```
-opt=value
```
This is definitely an omission. There are other bugs related to abbreviation (#104860) and use with `=` (#124305). There were many other bugs related to this code. This code is just not well tested.
The code for `--opt=value` is not tested either -- breaking it does not break any tests.
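For contrast, the double-dash forms described above, including the abbreviated `=` form, can be exercised like this:

```python
import argparse

parser = argparse.ArgumentParser(allow_abbrev=True)  # abbreviation is on by default
parser.add_argument('--option')

# Separate-argument form, '=' form, and abbreviated '=' form all work
# for double-dash long options:
for argv in (['--option', 'value'], ['--option=value'], ['--opt=value']):
    print(parser.parse_args(argv).option)  # → value, three times
```

The single-dash analogue `-opt=value` is the case this issue reports as broken.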
<!-- gh-linked-prs -->
### Linked PRs
* gh-124428
* gh-124753
* gh-124754
<!-- /gh-linked-prs -->
| 61180446eee2aef07b042c7e8892c45afabd1499 | 9bcadf589ab6f7b9d309290de7a80156b6905d35 |
python/cpython | python__cpython-124349 | # `_PyObject_IS_GC()` should not use `PyType_HasFeature()`
# Feature or enhancement
### Proposal:
`_PyObject_IS_GC()` is one of the most frequently used functions, but it still uses `PyType_HasFeature()` rather than `_PyType_HasFeature()`.
It has not caused performance issues since it was introduced, but switching to `_PyType_HasFeature()` would make more sense:
* Include/internal/pycore_object.h
```c
// Fast inlined version of PyObject_IS_GC()
static inline int
_PyObject_IS_GC(PyObject *obj)
{
PyTypeObject *type = Py_TYPE(obj);
return (PyType_IS_GC(type)
&& (type->tp_is_gc == NULL || type->tp_is_gc(obj)));
}
...
// Fast inlined version of PyType_IS_GC()
#define _PyType_IS_GC(t) _PyType_HasFeature((t), Py_TPFLAGS_HAVE_GC)
```
* Include/objimpl.h
```c
/* Test if a type has a GC head */
#define PyType_IS_GC(t) PyType_HasFeature((t), Py_TPFLAGS_HAVE_GC)
```
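For intuition, the flag these macros test can also be observed from Python: `Py_TPFLAGS_HAVE_GC` is bit 14 of `tp_flags` (per `Include/object.h`), exposed as `type.__flags__`:

```python
# quick Python-level check of the same flag the C macros test
Py_TPFLAGS_HAVE_GC = 1 << 14

def has_gc(tp):
    return bool(tp.__flags__ & Py_TPFLAGS_HAVE_GC)

print(has_gc(list), has_gc(dict), has_gc(str))  # True True False
```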
cc @vstinner
<!-- gh-linked-prs -->
### Linked PRs
* gh-124349
<!-- /gh-linked-prs -->
| d9d5b3d2ef770224b39987dbce7bb5c616fe3477 | 2e0d445364f6d9832e81263c1c68440654924fc4 |
python/cpython | python__cpython-124322 | # Argparse negative number parsing does not capture -.5
https://github.com/python/cpython/pull/123970/files#r1770614169
<!-- gh-linked-prs -->
### Linked PRs
* gh-124322
* gh-124359
* gh-124361
<!-- /gh-linked-prs -->
| dc48312717142ec902197da504fad333f13c9937 | 2f6d4109b84d40b76e8814233ecfcc02291f71be |
python/cpython | python__cpython-124472 | # Remove `ma_version_tag` (PEP 699 / PEP 509)
# Feature or enhancement
The accepted [PEP 699](https://peps.python.org/pep-0699/) proposed removing the private [`ma_version_tag`](https://github.com/python/cpython/blob/342e654b8eda24c68da64cc21bc9583e480d9e8e/Include/cpython/dictobject.h#L25) field from `PyDictObject`. Note that PEP 699 supersedes [PEP 509](https://peps.python.org/pep-0509/), which originally proposed the field.
### Why now?
* The `ma_version_tag` field was deprecated in 3.12 and we are now working on 3.14, so I think this is in line with Python's [backward compatibility policy](https://peps.python.org/pep-0387/#basic-policy-for-backwards-compatibility) from PEP 387.
* [Cython](https://github.com/cython/cython/blob/29462efacef571913efa31fb3f2897aa99b6b149/Cython/Utility/ModuleSetupCode.c#L381-L384) and PyTorch ([dynamo](https://github.com/pytorch/pytorch/blob/d2455b99fb4b50731f2ac0e26ee351d9b2f7623f/torch/csrc/dynamo/guards.cpp#L671-L685)) and [Nuitka](https://github.com/Nuitka/Nuitka/blob/551166924fe58dfcdbce3a64dd53e474af876b1a/nuitka/build/include/nuitka/helper/dictionaries.h#L274-L276) have stopped using it for CPython 3.12+. I don't think `ma_version_tag` ever saw widespread usage. Cython was the major user mentioned in PEP 699. [^1]
* I think the `ma_version_tag` updates have a non-negligible cost in the free-threaded build, and it's easier and simpler to remove it (if we're planning to do that anyways) than to make it more efficient
* It would be convenient to use some of the version tag bits for per-thread refcounting of globals and builtins. (See Mark's comment in https://github.com/python/cpython/issues/124218#issuecomment-2363771308).
* If we are going to remove this in 3.14, I think doing so earlier in the development cycle is better.
### Dict Watchers
The `ma_version_tag` field is also used for dict watchers (8 bits) and the tier2 mutation counter (4 bits). We will still want that functionality.
cc @Fidget-Spinner @markshannon
[^1]: I searched the [top ~7500](https://dev.to/hugovk/how-to-search-5000-python-projects-31gk) sdists as well. The only other actual usage I saw was https://github.com/slezica/python-frozendict, which doesn't have a 3.11 or 3.12 C extension yet (but also functions as a pure-Python package).
<!-- gh-linked-prs -->
### Linked PRs
* gh-124472
<!-- /gh-linked-prs -->
| 5aa91c56bf14c38b4c7f5ccaaa3cd24fe3fb0f04 | 60ff67d010078eca15a74b1429caf779ac4f9c74 |
python/cpython | python__cpython-124803 | # Add translation tests for argparse
While #12711 was merged, it seems like there's a separate issue worth tracking around adding translation tests for argparse (see https://github.com/python/cpython/pull/12711#issuecomment-1964754284)
<!-- gh-linked-prs -->
### Linked PRs
* gh-124803
* gh-126046
* gh-126051
* gh-126054
<!-- /gh-linked-prs -->
| 0922a4ae0d2803e3a4e9f3d2ccd217364cfd700e | f819d4301d7c75f02be1187fda017f0e7b608816 |
python/cpython | python__cpython-124288 | # Reference counting stats do not account for immortal objects.
### Bug description:
The reference counting stats ([for example](https://github.com/faster-cpython/benchmarking-public/blob/main/results/bm-20240914-3.14.0a0-401fff7-PYTHON_UOPS/bm-20240914-azure-x86_64-python-401fff7423ca3c8bf1d0-3.14.0a0-401fff7-pystats.md#object-stats)) show stats for increfs and decrefs (both in the interpreter and in the rest of the executable).
One might assume that it is the number of `Py_INCREF` and `Py_DECREF` calls executed, but it is not. It is the number of `Py_INCREF` and `Py_DECREF` calls where the object in question is mortal. We should add stats for `Py_INCREF` and `Py_DECREF` of immortal objects as well.
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-124288
<!-- /gh-linked-prs -->
| c87b0e4a462f98c418f750c6c95d4d8715c38332 | 6203ef35dd4ee9dd59759ce83eace8eacac69685 |
python/cpython | python__cpython-130770 | # Rewrite `typing.Annotated` docs
As @JelleZijlstra said in https://github.com/python/cpython/pull/124125
> I don't like this long bulleted list of individual issues. We should rewrite it to a more organized form.
`typing.Annotated` indeed has a lot of bullet points in the docs: https://docs.python.org/3/library/typing.html#typing.Annotated
So, how can we be more compact about it while keeping all the details?
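For context, a brief sketch of the core behavior those bullet points describe (the function name here is illustrative):

```python
from typing import Annotated, get_type_hints

def set_gain(level: Annotated[int, "decibels"]) -> None:
    pass

# include_extras=True keeps the Annotated wrapper instead of stripping it
hints = get_type_hints(set_gain, include_extras=True)
print(hints["level"].__metadata__)  # ('decibels',)
print(hints["level"].__origin__)    # <class 'int'>
```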
<!-- gh-linked-prs -->
### Linked PRs
* gh-130770
* gh-131222
* gh-131223
<!-- /gh-linked-prs -->
| e4ac196aaaa9fd2f1bd0050797b58959671c549a | c497f83ad85260e6c1be633d5ab24b3dddd6f32c |
python/cpython | python__cpython-124279 | # Windows Installer update with "keep current options" uninstalled the free-threaded binaries
# Bug report
### Bug description:
I used the installer to upgrade from a Python 3.13.0rc1 install to a Python 3.13.0rc2 install. In the 3.13.0rc1 install, I had installed the free-threaded binaries.
When upgrading, I chose the first option, "Upgrade Now".
After the upgrade, the free-threaded binaries were no longer installed, and I had to rerun the installer and modify the install to add them.
Other non-default install options, such as installing the debug binaries, were not lost.
### CPython versions tested on:
3.13
### Operating systems tested on:
Windows
<!-- gh-linked-prs -->
### Linked PRs
* gh-124279
* gh-124347
<!-- /gh-linked-prs -->
| df7228ce140ecb005d44a0c171ba4d098b3fa67c | 6ab634840c662ae07d90655e5e50ca43421da4be |
python/cpython | python__cpython-124251 | # SystemError/Assertion failure when processing struct with '0p' field
# Crash report
### What happened?
Using `struct` to process zero-width Pascal strings (`"0p"`) can lead to an assertion failure or `SystemError`.
Specifically:
* `struct.pack("<0p", b"")` leads to an assertion failure and seg fault on debug builds (tested with current `main` and 3.13 tip)
* `struct.unpack("<0p", b"")` raises an unexpected `SystemError` (tested with current `main`, 3.13 tip, non-debug builds of 3.8-3.12)
On current `main` (`8f82d9aa219`):
```shell
$ make clean && ./configure --with-pydebug && make -j
$ ./python -VV
Python 3.14.0a0 (heads/main:8f82d9aa219, Sep 19 2024, 11:08:04) [GCC 11.4.0]
$ ./python -c 'import struct; struct.unpack("<0p", b"")'
Traceback (most recent call last):
File "<string>", line 1, in <module>
import struct; struct.unpack("<0p", b"")
~~~~~~~~~~~~~^^^^^^^^^^^^
SystemError: Negative size passed to PyBytes_FromStringAndSize
$ ./python -c 'import struct; struct.pack("<0p", b"")'
python: ./Modules/_struct.c:1991: s_pack_internal: Assertion `_Py_STATIC_CAST(Py_ssize_t, _Py_STATIC_CAST(unsigned char, (n))) == (n)' failed.
[1] 186971 IOT instruction (core dumped) ./python -c 'import struct; struct.pack("<0p", b"")'
```
The same behavior is reproducible on the current 3.13 tip (`112b1704fa6`), and likely previous versions.
PR forthcoming.
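For contrast, a sketch of the `p` format with a nonzero size, which behaves as documented (one length byte followed by the data):

```python
import struct

packed = struct.pack("<5p", b"abcd")    # 1 length byte + 4 data bytes
print(packed)                           # b'\x04abcd'
(value,) = struct.unpack("<5p", packed)
print(value)                            # b'abcd'
```

The `"0p"` case leaves no room for even the length byte, which is what trips the internal assertions above.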
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Linux
### Output from running 'python -VV' on the command line:
Python 3.14.0a0 (heads/main:8f82d9aa219, Sep 19 2024, 11:08:04) [GCC 11.4.0]
<!-- gh-linked-prs -->
### Linked PRs
* gh-124251
* gh-124277
* gh-124278
<!-- /gh-linked-prs -->
| 63f196090f90cbfe5f698824655f74dea5cb2b29 | baa3550bc3a119f41cc4eaed5373f9d695208e8e |
python/cpython | python__cpython-124246 | # UserWarning in test_argparse
Running test_argparse with option `-We` fails:
```pytb
$ ./python -We -m test -vuall test_argparse -m test_invalid_args
...
======================================================================
ERROR: test_invalid_args (test.test_argparse.TestIntermixedArgs.test_invalid_args)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/serhiy/py/cpython/Lib/test/test_argparse.py", line 5815, in test_invalid_args
parser.parse_intermixed_args(['hello', '--foo'])
~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^
File "/home/serhiy/py/cpython/Lib/argparse.py", line 2386, in parse_intermixed_args
args, argv = self.parse_known_intermixed_args(args, namespace)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^
File "/home/serhiy/py/cpython/Lib/argparse.py", line 2440, in parse_known_intermixed_args
warn('Do not expect %s in %s' % (action.dest, namespace))
~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
UserWarning: Do not expect foo in Namespace(foo=[])
----------------------------------------------------------------------
```
The test was added in gh-103558. cc @gaogaotiantian
<!-- gh-linked-prs -->
### Linked PRs
* gh-124246
* gh-124255
<!-- /gh-linked-prs -->
| 992e8f6102e317b4967a762fbefea82f9fcf9dfb | 7331d0f70bc9fbac177b76b6ec03486430383425 |
python/cpython | python__cpython-124237 | # Improve `mock.reset_mock` docs: clarify that `return_value` and `side_effects` are booleans
Requested here: https://github.com/python/cpython/pull/124038
<!-- gh-linked-prs -->
### Linked PRs
* gh-124237
* gh-124592
* gh-130408
<!-- /gh-linked-prs -->
| 19fed6cf6eb51044fd0c02c6338259e2dd7fd462 | f923605658a29ff9af5a62edc1fc10191977627b |
python/cpython | python__cpython-124229 | # AssertionError in test_uuid on NetBSD: Expected UUID version 1, but got version 4
# Bug report
### Bug description:
```console
home# ./python -m test test_uuid
Using random seed: 2311813199
0:00:00 load avg: 0.11 Run 1 test sequentially in a single process
0:00:00 load avg: 0.11 [1/1] test_uuid
test test_uuid failed -- Traceback (most recent call last):
File "/home/blue/cpython/Lib/test/test_uuid.py", line 496, in test_uuid1
equal(u.version, 1)
~~~~~^^^^^^^^^^^^^^
AssertionError: 4 != 1
test_uuid failed (1 failure)
== Tests result: FAILURE ==
1 test failed:
test_uuid
Total duration: 1.1 sec
Total tests: run=72 failures=1 skipped=13
Total test files: run=1/1 failed=1
Result: FAILURE
```
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Other
<!-- gh-linked-prs -->
### Linked PRs
* gh-124229
<!-- /gh-linked-prs -->
| 2e8c769481d5729d86be8c6cff5881c4c5fbb8d2 | d5f95ec07bb47a4d6554e04d13a979dbeac05f74 |
python/cpython | python__cpython-124844 | # Reference count contention with nested functions
Creating nested functions can be a source of reference count contention in the free-threaded build. Consider the following (contrived) example:
```python
from concurrent.futures import ThreadPoolExecutor
def func(n):
sum = 0
for i in range(n):
def square(x):
return x ** 2
sum += square(i)
with ThreadPoolExecutor(max_workers=8) as executor:
future = executor.submit(func, 100000)
print(future.result())
```
Creating many `square` functions concurrently causes reference count contention on the `func_code`, `func_globals`, and `func_builtins` fields:
https://github.com/python/cpython/blob/21d2a9ab2f4dcbf1be462d3b7f7a231a46bc1cb7/Include/cpython/funcobject.h#L11-L16
The code object and builtins and globals dictionaries are already configured to support deferred reference counting, but the references in `PyFunctionObject` are not `_PyStackRef`s -- they are normal "counted" references.
Note that this is an issue for nested functions (closures), but not module-level functions or methods because those are typically created infrequently.
I outline a few possible ways to address this below. My preference is for 2a below.
## Option 1: Use deferred reference counting in `PyFunctionObject`
### Variant 1a: Use `_PyStackRef` in `PyFunctionObject`
Instead of `PyObject *func_code` we have `_PyStackRef func_code`. We use this strategy effectively in a number of other places, including the frame object and generators.
The downside of this approach is that the fields of `PyFunctionObject` are exposed in public headers (`cpython/funcobject.h`), even though they are not documented. Changing the type of `func_code`, `func_globals`, and `func_builtins` risks breaking backwards compatibility with some C API extensions.
### Variant 1b: Use `PyObject*` and new bitfield
Instead of using `_PyStackRef`, we can keep the fields as `PyObject *` and store whether the field uses a deferred reference in a separate field. This was the [approach](https://github.com/python/cpython/blob/21d2a9ab2f4dcbf1be462d3b7f7a231a46bc1cb7/Include/cpython/funcobject.h#L11-L16) I took in the `nogil-3.9` fork.
This has fewer compatibility issues than using `_PyStackRef`, but there are still compatibility hazards. It would not be safe for extensions to change `func_code`/`func_globals`/`func_builtins` with something like `Py_SETREF(func->func_globals, new_globals)` because the reference counting semantics are different.
## Option 2: Use per-thread reference counting
We already use [per-thread reference counting](https://peps.python.org/pep-0703/#reference-counting-type-objects) for the references from instances to their types (i.e., `ob_type`), if the type is a heap type. Storing the reference counts per-thread avoids most of the reference count contention on the object. This also avoids the compatibility issues with option 1 because you can use a per-thread incref with a normal `Py_DECREF` -- the only risk is performance, not correctness.
The challenge with this approach is that we need some quick and reliable way to index the per-thread reference count array. For heap types, we added a new field [`unique_id`](https://github.com/python/cpython/blob/21d2a9ab2f4dcbf1be462d3b7f7a231a46bc1cb7/Include/cpython/object.h#L275) in the free-threaded build. We can do something similar for code objects, but the globals and builtins are just "normal" dictionaries and I don't think we want to add a new field for every dictionary.
### Variant 2a: Allocate space for an identifier when creating globals and builtins
When we create the globals and builtins dictionaries, we allocate space for an extra `Py_ssize_t` unique id at the end, after the `PyDictObject`. The type would still just be `PyDict_Type`, so Python and the rest of the C API would see a normal dictionary. We can identify these special dictionaries using a bit in `ob_gc_bits` or by stealing another bit from `ma_version_tag`.
If the globals or builtins dictionaries are replaced by user-defined dictionaries, things would still work; they just might have scaling bottlenecks.
### Variant 2b: Use a hash table for per-thread references
We can use a hash table to map `PyObject*` to their per-thread reference counts. This is less efficient than having a unique index into the per-thread [reference count array](https://github.com/python/cpython/blob/21d2a9ab2f4dcbf1be462d3b7f7a231a46bc1cb7/Include/internal/pycore_tstate.h#L36), but avoids the need for an extra field.
<!-- gh-linked-prs -->
### Linked PRs
* gh-124844
* gh-125216
* gh-125713
* gh-125847
<!-- /gh-linked-prs -->
| b48253852341c01309b0598852841cd89bc28afd | 5aa91c56bf14c38b4c7f5ccaaa3cd24fe3fb0f04 |
python/cpython | python__cpython-124240 | # RFC 9637 not implemented by ipaddress module
# Bug report
### Bug description:
```python
>>> import ipaddress
>>> ip_address = ipaddress.ip_address("3fff::")
>>> ip_address.is_global
True
```
[RFC 9637](https://www.rfc-editor.org/rfc/rfc9637.html) registered the address block `3fff::/20` as **NOT** globally reachable in the [IANA IPv6 Special-Purpose Address Registry](https://www.iana.org/assignments/iana-ipv6-special-registry/iana-ipv6-special-registry.xhtml), but the `ipaddress` module doesn't respect this.
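As a workaround sketch until `is_global` accounts for RFC 9637, membership in the registered block can be checked directly:

```python
import ipaddress

doc_net = ipaddress.ip_network("3fff::/20")        # RFC 9637 block
print(ipaddress.ip_address("3fff::") in doc_net)   # True
print(ipaddress.ip_address("4000::") in doc_net)   # False
```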
### CPython versions tested on:
3.11
### Operating systems tested on:
macOS
<!-- gh-linked-prs -->
### Linked PRs
* gh-124240
* gh-124282
* gh-124283
<!-- /gh-linked-prs -->
| db6eb3640a7d98db6fea17cf9da4cb14504e5571 | 622368d99c986ca1a9bdba951ac53f42d7ee6fca |
python/cpython | python__cpython-124215 | # Test failures inside `systemd-nspawn --suppress-sync=true` container
# Bug report
### Bug description:
When CPython's test suite is run inside a systemd-nspawn container with `--suppress-sync=true` specified, the following tests fail:
```pytb
======================================================================
FAIL: test_fdatasync (test.test_os.TestInvalidFD)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/var/tmp/portage/dev-lang/python-3.8.19_p2/work/Python-3.8.19/Lib/test/test_os.py", line 1850, in helper
self.check(getattr(os, f))
File "/var/tmp/portage/dev-lang/python-3.8.19_p2/work/Python-3.8.19/Lib/test/test_os.py", line 1861, in check
self.fail("%r didn't raise an OSError with a bad file descriptor"
AssertionError: <built-in function fdatasync> didn't raise an OSError with a bad file descriptor
======================================================================
FAIL: test_fsync (test.test_os.TestInvalidFD)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/var/tmp/portage/dev-lang/python-3.8.19_p2/work/Python-3.8.19/Lib/test/test_os.py", line 1850, in helper
self.check(getattr(os, f))
File "/var/tmp/portage/dev-lang/python-3.8.19_p2/work/Python-3.8.19/Lib/test/test_os.py", line 1861, in check
self.fail("%r didn't raise an OSError with a bad file descriptor"
AssertionError: <built-in function fsync> didn't raise an OSError with a bad file descriptor
----------------------------------------------------------------------
======================================================================
FAIL: test_flush_return_value (test.test_mmap.MmapTests)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/var/tmp/portage/dev-lang/python-3.8.19_p2/work/Python-3.8.19/Lib/test/test_mmap.py", line 746, in test_flush_return_value
self.assertRaises(OSError, mm.flush, 1, len(b'python'))
AssertionError: OSError not raised by flush
----------------------------------------------------------------------
```
This is because systemd-nspawn uses seccomp to stub out `fsync()`, `fdatasync()` and `msync()` calls, and this implies they always return success.
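For illustration, the behavior the failing tests rely on can be reproduced outside the test suite. On an unrestricted system the call raises `OSError` (EBADF); under the seccomp stub it silently "succeeds":

```python
import os
import tempfile

# obtain a file descriptor that is guaranteed to be closed
f = tempfile.TemporaryFile()
fd = f.fileno()
f.close()

try:
    os.fsync(fd)       # EBADF on an unrestricted system
    raised = False     # reached under the seccomp stub
except OSError:
    raised = True
```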
### CPython versions tested on:
3.13
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-124215
* gh-124892
<!-- /gh-linked-prs -->
| 342e654b8eda24c68da64cc21bc9583e480d9e8e | 1a577729e347714eb819fa3a3a00149406c24e5e |
python/cpython | python__cpython-124211 | # Invalid variable name in `venv` code for symlink failure handling on Windows
# Bug report
### Bug description:
The `dst` variable in https://github.com/python/cpython/blob/main/Lib/venv/__init__.py#L396 should be named `dest`. Otherwise it throws the `UnboundLocalError: cannot access local variable 'dst' where it is not associated with a value`.
I have a PR that fixes this and also adds a test covering this code path.
This is occurring on Python 3.13 and 3.14.
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Windows
<!-- gh-linked-prs -->
### Linked PRs
* gh-124211
* gh-124226
<!-- /gh-linked-prs -->
| ea7fe1fe2e162f2375562467ad834c6224a62daf | 36682c091407dc9c7e750c22fb71e62466952662 |
python/cpython | python__cpython-127046 | # threading module is missing basic/introductory usage example
# Documentation
threading module is missing basic/introductory usage example
The documentation starts with some general notes, which may have their place there, and then jumps straight into "This module defines the following functions: threading.active_count()".
Would it maybe be possible to have an example of basic standard usage? For example, for "my Python script is CPU-limited, I want to use more than one core".
The closest that I found is https://docs.python.org/3/library/threading.html#threading.Thread.run, which does not really explain how it is actually supposed to be used, even for a toy example.
(I will probably find a tutorial somewhere, and maybe ChatGPT/DeepSeek will produce something, but I would love to have an official example that I can assume to be a good idea, rather than finding something that merely seems to work.)
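For what it's worth, the kind of introductory example being requested might look something like this (a sketch, not official documentation):

```python
import threading

results = []
lock = threading.Lock()

def worker(n):
    # do some work, then record the result under a lock
    with lock:
        results.append(n * n)

threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(sorted(results))  # [0, 1, 4, 9]
```

(Note that on the default GIL-enabled build, threads mainly help with I/O-bound workloads; for the CPU-limited case mentioned above, `multiprocessing` or the free-threaded build is usually the better fit.)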
<!-- gh-linked-prs -->
### Linked PRs
* gh-127046
* gh-134090
* gh-134091
<!-- /gh-linked-prs -->
| 62f66caa8c963bdf45d1e22456aea985e74fa2d5 | 73d71a416fb05b64c2b43fade5d781a1fa0cb2cd |
python/cpython | python__cpython-124208 | # `get_type_hints(K, format=FORWARDREF)` raises when `K` is a dynamically created class
# Bug report
On `main` this raises:
```python
>>> from typing import get_type_hints
>>> K = type('K', (), {})
>>> get_type_hints(K)
{}
>>> get_type_hints(K, format=2)
Traceback (most recent call last):
File "<python-input-4>", line 1, in <module>
get_type_hints(K, format=2)
~~~~~~~~~~~~~~^^^^^^^^^^^^^
File "/Users/sobolev/Desktop/cpython2/Lib/typing.py", line 2393, in get_type_hints
ann = annotationlib.get_annotations(base, format=format)
File "/Users/sobolev/Desktop/cpython2/Lib/annotationlib.py", line 634, in get_annotations
annotate = get_annotate_function(obj)
File "/Users/sobolev/Desktop/cpython2/Lib/annotationlib.py", line 578, in get_annotate_function
return _BASE_GET_ANNOTATE(obj)
AttributeError: type object 'object' has no attribute '__annotate__'
```
However, this does not raise:
```python
>>> from inspect import get_annotations
>>> K = type('K', (), {})
>>> get_annotations(K)
{}
>>> get_annotations(K, format=2)
{}
```
I suppose that this is a bug.
<!-- gh-linked-prs -->
### Linked PRs
* gh-124208
<!-- /gh-linked-prs -->
| 96f619faa74a8a32c2c297833cdeb0393c0b6b13 | 3b6bfa77aa4da2ce1f3a15e39831f8b85882698c |
python/cpython | python__cpython-124195 | # Wrong link in "What's new in Python 3.8" (apparent typo in issue ID)
# Documentation
I noticed a typo while doing some curation on Stack Overflow and looking for a link to justify a claim.
At the start of https://docs.python.org/3/whatsnew/3.8.html#api-and-feature-removals :
> Starting with Python 3.3, importing ABCs from collections was deprecated, and importing should be done from collections.abc. Being able to import from collections was marked for removal in 3.8, but has been delayed to 3.9. (See bpo-36952.)
This links to https://bugs.python.org/issue?@action=redirect&bpo=36952 , which redirects to #81133, which is the wrong issue. The redirection system is working fine, but the BPO ID is wrong; it should link the immediately next issue, bpo-36953 (resp. #81134).
The removal of this functionality is about to become very relevant again as people are forced to upgrade from 3.8 to remain within the support window. Hopefully, people who encounter problems will find the "What's New" document for 3.9 instead (or whatever version they're updating to), or to more specific third-party advice. But this should still get fixed.
<!-- gh-linked-prs -->
### Linked PRs
* gh-124195
* gh-124197
* gh-124198
<!-- /gh-linked-prs -->
| d8c0fe1944ac41787e16fa60e608f56c8235e100 | a15a584bf3f94ea11ab9363548c8872251364000 |
python/cpython | python__cpython-124192 | # Remove -Wconversion from CFLAGS_NODIST in --enable-safety
# Bug report
### Bug description:
`-Wconversion` in CFLAGS_NODIST causes the Ubuntu and macOS build/test CI jobs to emit warnings on PRs whose changes are unrelated to those warnings. There is an issue in the actions toolkit expressing the same frustration: https://github.com/actions/toolkit/issues/457
We also need to discuss how we will address remediating `-Wconversion` related warnings if we decide to enable it again.
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-124192
<!-- /gh-linked-prs -->
| 29a1a6e3ed6f606939b4aaf8d6955f368c3be3fc | d8c0fe1944ac41787e16fa60e608f56c8235e100 |
python/cpython | python__cpython-124193 | # Ignore compiler warnings for entire file or directory in CI tooling
# Feature or enhancement
### Proposal:
The CI warning-check tooling needs to be able to ignore entire files or directories. Currently `Tools/build/.warningignore_ubuntu` and `Tools/build/.warningignore_macos` contain file paths and a count of warnings to ignore per file. `Tools/build/check_warnings.py` should also be able to ignore all warnings from a file or an entire directory.
For example:
```
Objects/longobject.c 46
Objects/memoryobject.c *
Objects/methodobject.c 1
Objects/mimalloc/ *
```
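A hypothetical sketch of how `check_warnings.py` might parse such entries (not the actual implementation), where `*` means "ignore all warnings under this path":

```python
def parse_entry(line):
    # each line is "<path> <count>", where <count> may be "*"
    path, _, count = line.strip().rpartition(" ")
    return path, None if count == "*" else int(count)

print(parse_entry("Objects/longobject.c 46"))  # ('Objects/longobject.c', 46)
print(parse_entry("Objects/mimalloc/ *"))      # ('Objects/mimalloc/', None)
```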
### Has this already been discussed elsewhere?
This is a minor feature, which does not need previous discussion elsewhere
### Links to previous discussion of this feature:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-124193
<!-- /gh-linked-prs -->
| 81480e6edb34774d783d018d1f0e61ab5c3f0a9a | 646f16bdeed6ebe1069e1d64886fbaa26edac75c |
python/cpython | python__cpython-124189 | # Take into account encoding of source file for syntax error
Currently most syntax errors raised in the compiler (except those raised in the parser) use `PyErr_ProgramTextObject()` to get the line of code. It does not know the encoding of the source file and interprets it as UTF-8 (failing if it contains non-UTF-8 sequences). The parser uses `_PyErr_ProgramDecodedTextObject()`.
There are two ways to solve this issue:
* Pass the source file encoding from the parser to the code generator. This may require changing some data structures. But this is more efficient.
* Detect the encoding in `PyErr_ProgramTextObject()`. Since the latter is in the public C API, this can also affect the third-party code.
There are other issues with `PyErr_ProgramTextObject()`:
* It leaves the BOM in the first line if the source contains one. This is not consistent with offsets.
* For very long lines, it returns the tail of the line beyond 1000 bytes. That tail can be short, can start with an invalid character, and is not consistent with offsets. If an incomplete line has to be returned, it is better to return the head.
This all applies to `PyErr_ProgramText()` as well.
<!-- gh-linked-prs -->
### Linked PRs
* gh-124189
* gh-124423
* gh-124426
<!-- /gh-linked-prs -->
| e2f710792b0418b8ca1ca3b8cdf39588c7268495 | 3c83f9958c14cd62ad8951c53536f7788745b0ba |
python/cpython | python__cpython-124335 | # "DeprecationWarning: builtin type has no __module__ attribute" when using PyStructSequence_NewType from a module method
# Bug report
### Bug description:
Below is an example producing the warning. This warning does not occur when the type is created in the module's init function.
When using `PyStructSequence_InitType2`, the resulting type gets `"builtin"` as the `__module__` attribute and there is no deprecation warning.
It's also possible to add a field named `__module__` to prevent the warning. Setting the `__module__` attribute after creating the type does not help; the warning is raised inside the `PyStructSequence_NewType` function itself.
I couldn't find anything indicating that I'm not supposed to use `PyStructSequence_NewType` in this way.
```c
#include <Python.h>
// the type is usable, but we get: <string>:1: DeprecationWarning: builtin type TypeName has no __module__ attribute
// python3 -c "import c_ext; c_ext.create_type()"
static PyObject* create_type(PyObject* self) {
PyStructSequence_Field fields[] = {
{ "FieldName", "docs" },
{ NULL, NULL },
};
PyStructSequence_Desc desc = {"TypeName", "docs", fields, 1};
return (PyObject*)PyStructSequence_NewType(&desc);
}
static struct PyMethodDef methods[] = {
{"create_type", (PyCFunction)create_type, METH_NOARGS},
{NULL, NULL}
};
static struct PyModuleDef module = {
PyModuleDef_HEAD_INIT, "c_ext", NULL, -1, methods
};
PyMODINIT_FUNC PyInit_c_ext(void) {
return PyModule_Create(&module);
}
```
### CPython versions tested on:
3.10, 3.11, 3.12, 3.13
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-124335
* gh-125056
* gh-125057
<!-- /gh-linked-prs -->
| 3287c834e5370294e310450115290979aac06efa | 51d426dc033ef9208c0244a569f3e816e4c328c9 |
python/cpython | python__cpython-124164 | # Segfault when trying to use PyRun_SimpleString() with some imports
# Crash report
### What happened?
I hit the segfault when doing the following thing:
```
$ docker run -ti fedora:41 bash
# dnf -y install gcc python-devel
# echo '#include <Python.h>
int main() {
Py_Initialize();
PyThreadState_Swap(Py_NewInterpreter());
PyRun_SimpleString("import readline");
}' > test.c
# gcc test.c -I/usr/include/python3.13 -lpython3.13
# ./a.out
Segmentation fault (core dumped)
```
```
Program received signal SIGSEGV, Segmentation fault.
0x00007ffff7a8c3cb in reload_singlephase_extension (tstate=tstate@entry=0x7ffff7e5a850, cached=cached@entry=0x0,
info=info@entry=0x7fffffff8c90) at /usr/src/debug/python3.13-3.13.0~rc2-1.fc41.x86_64/Python/import.c:1763
1763 PyModuleDef *def = cached->def;
```
The same code doesn't crash on 3.12.
### CPython versions tested on:
3.13
### Operating systems tested on:
Linux
### Output from running 'python -VV' on the command line:
Python 3.13.0rc2 (main, Sep 7 2024, 00:00:00) [GCC 14.2.1 20240801 (Red Hat 14.2.1-1)]
<!-- gh-linked-prs -->
### Linked PRs
* gh-124164
* gh-124250
<!-- /gh-linked-prs -->
| 7331d0f70bc9fbac177b76b6ec03486430383425 | 8f82d9aa2191db7826bb7a453fe06ce65f966cf8 |
python/cpython | python__cpython-124133 | # Regex \B doesn't match empty string
# Bug report
### Bug description:
```python
>>> import re
>>> list(re.finditer(r'\b', 'e'))
[<re.Match object; span=(0, 0), match=''>, <re.Match object; span=(1, 1), match=''>]
>>> list(re.finditer(r'\B', 'e'))
[]
>>> list(re.finditer(r'\b', '%'))
[]
>>> list(re.finditer(r'\B', '%'))
[<re.Match object; span=(0, 0), match=''>, <re.Match object; span=(1, 1), match=''>]
>>> list(re.finditer(r'\b', ''))
[]
>>> list(re.finditer(r'\B', ''))
[]
```
Apparently the empty string neither is nor isn't a word boundary. Is that supposed to happen? \B matches the empty string in every other language I can think of.
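The expectation follows from `\B` being the complement of `\b`: for a non-empty string, every position matches exactly one of the two, as a quick check shows:

```python
import re

s = 'e%'
b_pos = {m.start() for m in re.finditer(r'\b', s)}
B_pos = {m.start() for m in re.finditer(r'\B', s)}
print(b_pos, B_pos)  # {0, 1} {2}

# together they cover every position, with no overlap
assert b_pos | B_pos == {0, 1, 2} and not (b_pos & B_pos)
```

Under that definition, the empty string should give one `\B` match (at position 0), since `\b` does not match there.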
Online reproducer: https://godbolt.org/z/8q6fehss7
### CPython versions tested on:
3.11, 3.12
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-124133
* gh-124328
* gh-124329
* gh-124330
* gh-124413
* gh-124414
* gh-127007
<!-- /gh-linked-prs -->
| d3e79d75d164c338a64fd66edb26e69c501cee58 | 2e8c769481d5729d86be8c6cff5881c4c5fbb8d2 |
python/cpython | python__cpython-124128 | # [C API] Make Py_REFCNT() opaque in limited C API 3.14
In the limited C API 3.14 and newer, I propose to change the `Py_REFCNT()` implementation to an opaque function call to hide implementation details. I made a similar change for `Py_INCREF()` and `Py_DECREF()` in Python 3.12.
The problem is that with Free Threading (PEP 703), the implementation of this function becomes less trivial than just reading the object member `PyObject.ob_refcnt`:
```c
static inline Py_ssize_t _Py_REFCNT(PyObject *ob) {
uint32_t local = _Py_atomic_load_uint32_relaxed(&ob->ob_ref_local);
if (local == _Py_IMMORTAL_REFCNT_LOCAL) {
return _Py_IMMORTAL_REFCNT;
}
Py_ssize_t shared = _Py_atomic_load_ssize_relaxed(&ob->ob_ref_shared);
return _Py_STATIC_CAST(Py_ssize_t, local) +
Py_ARITHMETIC_RIGHT_SHIFT(Py_ssize_t, shared, _Py_REF_SHARED_SHIFT);
}
```
`_Py_atomic_load_uint32_relaxed()` and `_Py_atomic_load_ssize_relaxed()` must now be called. But I would prefer not to "leak" such implementation details into the limited C API.
cc @colesbury @Fidget-Spinner @encukou
<!-- gh-linked-prs -->
### Linked PRs
* gh-124128
<!-- /gh-linked-prs -->
| 9d344fafc4385cb2e17425b77b54660ca83c61ac | b82f07653e1e15a48ebaf8de324f52559e470254 |
python/cpython | python__cpython-124125 | # Document `Annotated.__origin__`
The `typing` module docs cover using `Annotated.__metadata__` to retrieve the annotations, but they do not currently cover the use of `Annotated.__origin__` to retrieve the underlying type hint that is being annotated.
```
>>> from typing import Annotated
>>> Annotated[int, ""].__metadata__
('',)
>>> Annotated[int, ""].__origin__
<class 'int'>
```
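A short sketch of how the two attributes relate to each other and to `typing.get_args()` (the `Seconds` alias is just an example name):

```python
from typing import Annotated, get_args

Seconds = Annotated[int, "unit: seconds"]

# __origin__ is the underlying type being annotated ...
assert Seconds.__origin__ is int
# ... while __metadata__ holds the annotation values
assert Seconds.__metadata__ == ("unit: seconds",)

# typing.get_args() returns both pieces together
assert get_args(Seconds) == (int, "unit: seconds")
```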
<!-- gh-linked-prs -->
### Linked PRs
* gh-124125
* gh-124416
* gh-124417
<!-- /gh-linked-prs -->
| faef3fa653f2901cc905f98eae0ddcd8dc334d33 | 9d344fafc4385cb2e17425b77b54660ca83c61ac |
python/cpython | python__cpython-124110 | # test_locale: test_strcoll_with_diacritic and test_strxfrm_with_diacritic tests failing on NetBSD
# Bug report
### Bug description:
```python
localhost$ ./python -m test test_locale -m test_strcoll_with_diacritic -m test_strxfrm_with_diacritic -v
== CPython 3.14.0a0 (heads/main:401fff7423c, Sep 15 2024, 21:20:00) [GCC 10.5.0]
== NetBSD-10.0-amd64-x86_64-64bit-ELF little-endian
== Python build: debug
== cwd: /home/blue/cpython/build/test_python_worker_28241æ
== CPU count: 6
== encodings: locale=UTF-8 FS=utf-8
== resources: all test resources are disabled, use -u option to unskip tests
Using random seed: 1742921980
0:00:00 load avg: 0.18 Run 1 test sequentially in a single process
0:00:00 load avg: 0.18 [1/1] test_locale
test_strcoll_with_diacritic (test.test_locale.TestEnUSCollation.test_strcoll_with_diacritic) ... testing with 'en_US.UTF-8'... FAIL
test_strxfrm_with_diacritic (test.test_locale.TestEnUSCollation.test_strxfrm_with_diacritic) ... testing with 'en_US.UTF-8'... FAIL
======================================================================
FAIL: test_strcoll_with_diacritic (test.test_locale.TestEnUSCollation.test_strcoll_with_diacritic)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/blue/cpython/Lib/test/test_locale.py", line 359, in test_strcoll_with_diacritic
self.assertLess(locale.strcoll('à', 'b'), 0)
~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AssertionError: 126 not less than 0
======================================================================
FAIL: test_strxfrm_with_diacritic (test.test_locale.TestEnUSCollation.test_strxfrm_with_diacritic)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/blue/cpython/Lib/test/test_locale.py", line 368, in test_strxfrm_with_diacritic
self.assertLess(locale.strxfrm('à'), locale.strxfrm('b'))
~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AssertionError: 'à' not less than 'b'
----------------------------------------------------------------------
Ran 2 tests in 0.013s
FAILED (failures=2)
test test_locale failed
test_locale failed (2 failures)
== Tests result: FAILURE ==
1 test failed:
test_locale
Total duration: 266 ms
Total tests: run=2 (filtered) failures=2
Total test files: run=1/1 (filtered) failed=1
Result: FAILURE
localhost$
```
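As background: collation results come from the platform's locale tables, so when the `en_US.UTF-8` tables are missing or broken the comparison can degrade to raw code-point order, which is exactly what the failing assertions show. A minimal sketch of the fallback behavior under the portable `"C"` locale:

```python
import locale

# Under the "C" locale, strcoll/strxfrm compare by code point.
locale.setlocale(locale.LC_COLLATE, "C")
assert locale.strcoll("a", "b") < 0
assert locale.strxfrm("a") < locale.strxfrm("b")

# 'à' is U+00E0, numerically greater than 'b' (U+0062), so in code-point
# order it sorts after 'b' on typical platforms:
print(locale.strcoll("à", "b"))
```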
OS: `NetBSD 10.0 amd64`
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Other
<!-- gh-linked-prs -->
### Linked PRs
* gh-124110
* gh-124146
* gh-124147
<!-- /gh-linked-prs -->
| 10de3600a908f96d1c43dac85ef867991d54708e | a9c2bc16349c2be3005f97249f3ae9699988f218 |
python/cpython | python__cpython-124103 | # Update Dependency Information for PCBuild to Correct Information
the description in `python.props` stated:
```
Use the latest available version of Visual Studio to build. To override
this and build with an earlier version, pass "/p:PlatformToolset=v100"
(for example) when building.
```
However, v100 corresponds to Visual Studio 2010, and now Visual Studio 2017 or later is required. v100 is no longer available. See [https://docs.python.org/3.14/using/configure.html](https://docs.python.org/3.14/using/configure.html).
Currently, the latest Visual Studio 2022 uses v143, not v140. Therefore, I have changed it to the officially supported Visual Studio 2017 (v140). See [https://github.com/python/cpython/blob/main/PCbuild/readme.txt#L62](https://github.com/python/cpython/blob/main/PCbuild/readme.txt#L62).
To correctly build CPython, the minimum required Python version is now 3.10; otherwise, one will be downloaded via NuGet. See [https://github.com/python/cpython/blob/main/PCbuild/find_python.bat#L45](https://github.com/python/cpython/blob/main/PCbuild/find_python.bat#L45).
However, descriptions in other files are outdated, mentioning versions like 3.6, 2.7, and 3.4. Due to the long time lapse, these descriptions have become inconsistent across files. I have now standardized them.
<!-- gh-linked-prs -->
### Linked PRs
* gh-124103
* gh-124784
<!-- /gh-linked-prs -->
| 27a62e7371f062a80704f6bf4d6e8f06568d37aa | 8a2baedc4bcb606da937e4e066b4b3a18961cace |
python/cpython | python__cpython-124119 | # Pasting a function definition does not work in 3.13 REPL with Windows Terminal
# Bug report
### Bug description:
I use Python downloaded from python.org without using a graphical environment such as IPython. Instead, I use the Windows Terminal with the REPL built into python.exe. In 3.12.6, I can paste a function definition copied from a text editor directly into the REPL and everything works fine. In 3.13.rc2 this does not work; the indentation is all messed up and I get an IndentationError.
```python
def letter_colors(word, guess):
'''Compute letter colors for Wordle guesses. B=black Y=yellow G=green'''
if (n := len(word)) != len(guess):
raise ValueError('Word and guess must be the same length.')
result = ['G' if wl == gl else 'B' for (wl, gl) in zip(word, guess)]
unused = [w for (w, r) in zip(word, result) if r == 'B']
for i, c in enumerate(guess):
if result[i] != 'G' and c in unused:
result[i] = 'Y'
unused.remove(c)
return ''.join(result)
```
### CPython versions tested on:
3.12, 3.13
### Operating systems tested on:
Windows
<!-- gh-linked-prs -->
### Linked PRs
* gh-124119
* gh-133457
<!-- /gh-linked-prs -->
| a65366ed879a3d9f27cbcc811ed2e05ad1a2af06 | 25a7ddf2efeaf77bcf94dbfca28ba3a6fe9ab57e |
python/cpython | python__cpython-124084 | # test_strsignal fails on NetBSD with TypeError
# Bug report
### Bug description:
```sh
-bash-5.2$ ./python -m test test_signal -m test_strsignal
Using random seed: 3137472754
0:00:00 load avg: 5.12 Run 1 test sequentially in a single process
0:00:00 load avg: 5.12 [1/1] test_signal
test test_signal failed -- Traceback (most recent call last):
File "/home/blue/cpython/Lib/test/test_signal.py", line 127, in test_strsignal
self.assertIn("Interrupt", signal.strsignal(signal.SIGINT))
~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/blue/cpython/Lib/unittest/case.py", line 1180, in assertIn
if member not in container:
^^^^^^^^^^^^^^^^^^^^^^^
TypeError: argument of type 'NoneType' is not a container or iterable
test_signal failed (1 error)
== Tests result: FAILURE ==
1 test failed:
test_signal
Total duration: 142 ms
Total tests: run=1 (filtered)
Total test files: run=1/1 (filtered) failed=1
Result: FAILURE
```
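For context, `strsignal()` is documented to return `None` when no description is available for a signal, so callers (and this test) need to guard for that case; a minimal defensive sketch:

```python
import signal

# strsignal() may return None on platforms with no description text
desc = signal.strsignal(signal.SIGINT)
if desc is None:
    print("no description for SIGINT on this platform")
else:
    assert isinstance(desc, str)
    print(desc)
```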
OS: `NetBSD 10.0 amd64`
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Other
<!-- gh-linked-prs -->
### Linked PRs
* gh-124084
* gh-124223
* gh-124224
<!-- /gh-linked-prs -->
| 36682c091407dc9c7e750c22fb71e62466952662 | 9a6e2336e4b54fc13064b77826a67b03b3b45133 |
python/cpython | python__cpython-124069 | # ``test_asyncio.test_base_events`` leaks references in free-threaded build
# Bug report
### Bug description:
```python
eclips4@nixos ~/p/p/cpython (main)> ./python -m test -R 3:3 test_asyncio.test_base_events
Using random seed: 315532800
0:00:00 load avg: 7.88 Run 1 test sequentially in a single process
0:00:00 load avg: 7.88 [1/1] test_asyncio.test_base_events
beginning 6 repetitions. Showing number of leaks (. for 0 or less, X for 10 or more)
123:456
XX3 333
test_asyncio.test_base_events leaked [3, 3, 3] references, sum=9
test_asyncio.test_base_events failed (reference leak)
== Tests result: FAILURE ==
1 test failed:
test_asyncio.test_base_events
Total duration: 18.4 sec
Total tests: run=111
Total test files: run=1/1 failed=1
Result: FAILURE
```
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-124069
<!-- /gh-linked-prs -->
| b02301fa5a543266ee310a6d98278d2b8e26d7b3 | 38809171b8768517824fb62d48abe2cb0aff8429 |
python/cpython | python__cpython-124070 | # Lots of new compiler warnings
```
In file included from ./Include/internal/pycore_global_objects.h:12:
./Include/internal/pycore_gc.h:230:21: warning: implicit conversion changes signedness: 'int' to 'uintptr_t' (aka 'unsigned long') [-Wsign-conversion]
gc->_gc_prev &= ~_PyGC_PREV_MASK_FINALIZED;
~~ ^~~~~~~~~~~~~~~~~~~~~~~~~~
```
```
In file included from ./Include/internal/pycore_code.h:13:
./Include/internal/pycore_backoff.h:78:66: warning: implicit conversion loses integer precision: 'int' to 'uint16_t' (aka 'unsigned short') [-Wimplicit-int-conversion]
return make_backoff_counter((1 << (counter.backoff + 1)) - 1, counter.backoff + 1);
~~~~~~~~~~~~~~~~~~~~ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^~~
```
```
In file included from ./Include/internal/pycore_mimalloc.h:45:
./Include/internal/mimalloc/mimalloc/internal.h:489:84: warning: implicit conversion changes signedness: 'int' to 'mi_thread_free_t' (aka 'unsigned long') [-Wsign-conversion]
return (mi_block_t*)(mi_atomic_load_relaxed(&((mi_page_t*)page)->xthread_free) & ~3);
~ ^~
./Include/internal/mimalloc/mimalloc/internal.h:508:29: warning: implicit conversion changes signedness: 'int' to 'mi_thread_free_t' (aka 'unsigned long') [-Wsign-conversion]
return (mi_block_t*)(tf & ~0x03);
~ ^~~~~
./Include/internal/mimalloc/mimalloc/internal.h:806:10: warning: implicit conversion changes signedness: 'int' to 'size_t' (aka 'unsigned long') [-Wsign-conversion]
return __builtin_clzl(x);
~~~~~~ ^~~~~~~~~~~~~~~~~
./Include/internal/mimalloc/mimalloc/internal.h:814:10: warning: implicit conversion changes signedness: 'int' to 'size_t' (aka 'unsigned long') [-Wsign-conversion]
return __builtin_ctzl(x);
~~~~~~ ^~~~~~~~~~~~~~~~~
./Modules/getpath.c:264:48: warning: implicit conversion changes signedness: 'Py_ssize_t' (aka 'long') to 'unsigned long' [-Wsign-conversion]
wchar_t **parts = (wchar_t **)PyMem_Malloc(n * sizeof(wchar_t *));
^ ~
./Modules/getpath.c:269:22: warning: implicit conversion changes signedness: 'Py_ssize_t' (aka 'long') to 'unsigned long' [-Wsign-conversion]
memset(parts, 0, n * sizeof(wchar_t *));
^ ~
/Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/secure/_string.h:77:33: note: expanded from macro 'memset'
__builtin___memset_chk (dest, __VA_ARGS__, __darwin_obsz0 (dest))
^~~~~~~~~~~
./Modules/getpath.c:295:61: warning: implicit conversion changes signedness: 'Py_ssize_t' (aka 'long') to 'unsigned long' [-Wsign-conversion]
wchar_t *final = cchFinal > 0 ? (wchar_t *)PyMem_Malloc(cchFinal * sizeof(wchar_t)) : NULL;
^~~~~~~~ ~
./Modules/getpath.c:318:57: warning: implicit conversion changes signedness: 'Py_ssize_t' (aka 'long') to 'size_t' (aka 'unsigned long') [-Wsign-conversion]
} else if (_Py_add_relfile(final, parts[i], cchFinal) < 0) {
~~~~~~~~~~~~~~~ ^~~~~~~~
./Modules/getpath.c:385:63: warning: implicit conversion changes signedness: 'size_t' (aka 'unsigned long') to 'Py_ssize_t' (aka 'long') [-Wsign-conversion]
wchar_t *wbuffer = _Py_DecodeUTF8_surrogateescape(buffer, cb, &len);
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ^~
./Modules/getpath.c:663:43: warning: implicit conversion changes signedness: 'size_t' (aka 'unsigned long') to 'Py_ssize_t' (aka 'long') [-Wsign-conversion]
u = PyUnicode_FromWideChar(w, len);
~~~~~~~~~~~~~~~~~~~~~~ ^~~
./Modules/getpath.c:707:43: warning: implicit conversion changes signedness: 'size_t' (aka 'unsigned long') to 'Py_ssize_t' (aka 'long') [-Wsign-conversion]
u = PyUnicode_FromWideChar(w, len);
~~~~~~~~~~~~~~~~~~~~~~ ^~~
13 warnings generated.
```
Related https://github.com/python/cpython/pull/123020
<!-- gh-linked-prs -->
### Linked PRs
* gh-124070
* gh-124174
* gh-124177
* gh-124181
* gh-124204
* gh-124216
<!-- /gh-linked-prs -->
| 44052b5f18c5d605d33bf3207b5c918127cf0e82 | 05235e3c16d755e292ebf6e2bd6c4903bb6849b9 |
python/cpython | python__cpython-124061 | # Remove `_PyCompile_IsNestedScope` roll it into `_PyCompile_IsInteractive`
Currently the `c_interactive` field is used in only one place - to special-case the code generation of interactive statements. We can move this special case from `codegen_stmt_expr` to `_PyCodegen_Body` (where there is the `is_interactive` arg) and then we no longer need `c_interactive`, its accessor function _PyCompile_IsInteractive, and the `_PyCompile_IsNestedScope` function.
<!-- gh-linked-prs -->
### Linked PRs
* gh-124061
<!-- /gh-linked-prs -->
| 9aa1f60e2dedd8a67c42fb4f4c56858b6ba5b947 | 453da532fee26dc4f83d4cab77eb9bdb17b941e6 |
python/cpython | python__cpython-124045 | # Tracker: protect macros expansions using `do { ... } while (0)` constructions or via `static inline` equivalents when possible
# Fortifying Macros Expansion
## The Problem
In C, using a macro helps reduce code duplication. However, depending on the macro's body, once it is expanded by the C preprocessor, the resulting C code may not be syntactically correct (or could have an unexpected execution flow).
In #123842, I saw that there are many files that have macros that could wrongly expand at compile time depending on how they are being used. Most of the time it's harmless but we never know how contributors use the macros themselves (they may not be aware of PEP-7 and could end up with dangling `else` in some cases). I will continue working on that draft PR but it's really hard to find a good regex for finding the macros to protect :') (well, I could do something with some analyzer but I'm too lazy).
Since I'm tired of creating separate issues for each module and macro to protect, I will use this single issue to organize and track the progress.
> [!IMPORTANT]
> We only touch **source files** (`.c`). We do **NOT** change the macros in header files (`.h`). Those files can be included by anyone and the macros could be used by some downstream projects. Since we do not know how they actually use them, we don't want to break them (and there is no way we can notify them of the change except via commits). We will have dedicated issues to fix those macros as a wider discussion could be needed, but this specific issue only focuses on source files.
## The Good News
Good news is that we have not one but **two** approaches for fixing this.
### Solution 1: converting the macro into a `static inline` function
When possible, prefer converting macros into `static inline` functions. Similar work has been achieved in PEP-670 for the C API and the overhead is not that much. Smart compilers would anyway inline the function, making it (almost) assembly-equivalent to a macro.
Not all macros can be converted into `static inline` functions, but many can. For instance, a macro that simply computes a value, say the addition of two `double`s as `#define ADD_DOUBLE(x, y) (x) + (y)`, should be rewritten as
```C
static inline double
add_double(double x, double y)
{
return x + y;
}
```
There are various pitfalls when using macros (those pitfalls are explained in detail in PEP-670) so converting macros into `static inline` functions should be preferred. On the other hand, macros that have a `return` or a `goto` instruction cannot be easily converted into `static inline` functions.
> [!IMPORTANT]
> I'd advise to carefully read PEP-670 to see when to convert a macro into a `static inline` function.
> [!NOTE]
> When converting macros into `static inline` functions, the case of the name can be changed. Most macros have uppercase names, but functions are lowercased. I'd suggest having a different commit for the rename so that it could easily be reverted if needed.
### Solution 2: using `do { ... } while (0)` constructions
In addition to the arguments exposed in https://www.quora.com/What-is-the-purpose-of-using-do-while-0-in-macros, the reason why we use `do { <body> } while (0)` is that the compiler is smart enough to optimize it as `{ <body> }` but after having expanded the macro (we can neither use `#define MACRO(...) if (1) { <body> }` nor `#define MACRO(...) { <body> }` because we could create dangling `else` if we were to use `MACRO();` instead of `MACRO()`). Now, I will lay down the convention I took for solving those issues.
When possible, apply [PEP 7](https://peps.python.org/pep-0007/), namely, align line continuations and put `do` on a separate column. For instance, we change:
```C
#define MACRO(...) \
<FIRST LINE> \
<SECOND LINE> \
<LAST LINE>
// OR
#define MACRO(...) { \
<FIRST LINE> \
<SECOND LINE> \
<LAST LINE> \
}
// OR
#define MACRO(...) \
{ \
<FIRST LINE> \
<SECOND LINE> \
<LAST LINE> \
}
```
into
```C
#define MACRO(...) \
do { \
<FIRST LINE> \
<SECOND LINE> \
<LAST LINE> \
} while (0)
```
By convention, I put `do` on a separate line and indent it so that we don't have `} while (0)` stuck to the left margin. If other macros in the file are already using the `do { ... } while (0)` construction, either I keep the same style (which is probably preferred) or I change them if they are in the minority (but using a separate commit so that I can easily revert it if a core dev does not want this kind of cosmetic change).[^1]
> [!TIP]
> To align line continuations easily, take the longest line after adding the `do { ... } while (0)` construction and press TAB. Most of the time, this will align the `\` on the next multiple of 4. Then either enter vertical mode to edit the line continuations or put tabs before every other `\` until getting aligned line continuations.
I personally avoid adding a semicolon after `while (0)`. One reason is that it conveys the intention that usage should behave like a C statement. This is also in line with what PEP 7 suggests (but does not enforce). We could add it, but then both `MACRO(...)` and `MACRO(...);` could be found in the code, and I think it's better to have something uniform (ideally `MACRO(...);`).
> [!NOTE]
> The modules that will be touched may be very old and they could have their own coding style. I don't have a strong opinion on whether we should PEP-7-ize the macro body if the rest of the code is not PEP-7 compliant, but I think that aligning the line continuations should be preferred because it makes the macro easier to read.
<!---
### The Even Better News
An even better news is that this task is a good introductory task for anyone interested in contributing to CPython (additional hands are always welcomed). Future contributors may inspect any commit in #123842 since each commit is dedicated to fix a single module (e.g., https://github.com/python/cpython/pull/123842/commits/5c90db8206a4dfeffdbdfdf5712df6d92995cad6) or a macro spread across multiple modules (e.g., https://github.com/python/cpython/pull/123842/commits/8fe006fc40e29a67e0ec9d2986612544185576e2).
--->
> [!IMPORTANT]
> Avoid touching other places of the code. If we believe there is a bug in the macro, we open a separate issue. In addition, each fix should only focus on a **single** file (or a single macro that is duplicated across modules). Ideally, we should only make PRs one by one because this could create conflicts on other branches and our work usually has a lower priority. For that reason, we also don't backport the changes.
[^1]: Consistency beats purity but sometimes consistency could be forced (if there are 50 macros for which we add `do { ... } while (0)` but there is only 1 macro that is putting the `do` at the same level as the `#define`, we simply change that single macro).
<!-- gh-linked-prs -->
### Linked PRs
* gh-124045
<!-- /gh-linked-prs -->
| e49d1b44d3ef2542c0ae165e14a7e5ffbc32b2d1 | 432bf31327c6b9647acb8bdb0eac2d392fd9f60a |
python/cpython | python__cpython-124078 | # Crash when free-threaded Python is built with `--with-trace-refs`
# Crash report
### What happened?
I'm using `ThreadPoolExecutor` to write tests for free-threading enabled C extensions.
- metaopt/optree#137
~When running concurrent threads larger than `num_workers`, CPython crashes.~ CPython crashes when `num_futures >= 2` if the test function is complex enough.
The Python is installed via the following configuration with PyDebug enabled:
```bash
./configure --prefix="${PWD}/pydev" \
--enable-ipv6 --with-openssl="$(brew --prefix openssl)" \
--with-system-expat --with-system-libmpdec \
--with-assertions --with-pydebug --with-trace-refs \
--disable-gil
```
```python
from concurrent.futures import ThreadPoolExecutor
NUM_WORKERS = 32
NUM_FUTURES = 1024
def concurrent_run(func):
with ThreadPoolExecutor(max_workers=NUM_WORKERS) as executor:
for _ in range(NUM_FUTURES):
executor.submit(func)
concurrent_run(lambda : None)
```
```python
Python 3.13.0rc2+ experimental free-threading build (heads/3.13:112b1704fa6, Sep 13 2024, 15:34:48) [Clang 15.0.0 (clang-1500.3.9.4)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> from concurrent.futures import ThreadPoolExecutor
>>> NUM_WORKERS = 32
... NUM_FUTURES = 1024
...
... def concurrent_run(func):
... with ThreadPoolExecutor(max_workers=NUM_WORKERS) as executor:
... for _ in range(NUM_FUTURES):
... executor.submit(func)
...
>>> concurrent_run(lambda : None)
Assertion failed: (value == REFCHAIN_VALUE), function _PyRefchain_Remove, file object.c, line 195.
Fatal Python error: Aborted
Thread 0x0000000173e2b000 (most recent call first):
File "/Users/PanXuehai/Projects/cpython/pydev/lib/python3.13t/threading.py", line 304 in __enter__
File "/Users/PanXuehai/Projects/cpython/pydev/lib/python3.13t/threading.py", line 528 in release
File "/Users/PanXuehai/Projects/cpython/pydev/lib/python3.13t/concurrent/futures/thread.py", line 87 in _worker
File "/Users/PanXuehai/Projects/cpython/pydev/lib/python3.13t/threading.py", line 992 in run
File "/Users/PanXuehai/Projects/cpython/pydev/lib/python3.13t/threading.py", line 1041 in _bootstrap_inner
File "/Users/PanXuehai/Projects/cpython/pydev/lib/python3.13t/threading.py", line 1012 in _bootstrap
Thread 0x0000000172e1f000 (most recent call first):
File "/Users/PanXuehai/Projects/cpython/pydev/lib/python3.13t/concurrent/futures/thread.py", line 89 in _worker
File "/Users/PanXuehai/Projects/cpython/pydev/lib/python3.13t/threading.py", line 992 in run
File "/Users/PanXuehai/Projects/cpython/pydev/lib/python3.13t/threading.py", line 1041 in _bootstrap_inner
File "/Users/PanXuehai/Projects/cpython/pydev/lib/python3.13t/threading.py", line 1012 in _bootstrap
Thread 0x0000000171e13000 (most recent call first):
File "/Users/PanXuehai/Projects/cpython/pydev/lib/python3.13t/concurrent/futures/thread.py", line 89 in _worker
File "/Users/PanXuehai/Projects/cpython/pydev/lib/python3.13t/threading.py", line 992 in run
File "/Users/PanXuehai/Projects/cpython/pydev/lib/python3.13t/threading.py", line 1041 in _bootstrap_inner
File "/Users/PanXuehai/Projects/cpython/pydev/lib/python3.13t/threading.py", line 1012 in _bootstrap
Current thread 0x0000000170e07000 (most recent call first):
Assertion failed: (PyCode_Check(f->f_executable)), function _PyFrame_GetCode, file pycore_frame.h, line 81.
```
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
macOS
### Output from running 'python -VV' on the command line:
Python 3.13.0rc2+ experimental free-threading build (heads/3.13:112b1704fa6, Sep 13 2024, 15:34:48) [Clang 15.0.0 (clang-1500.3.9.4)]
<!-- gh-linked-prs -->
### Linked PRs
* gh-124078
* gh-124138
<!-- /gh-linked-prs -->
| 3b45df03a4bd0e21edec43144b8d9bac689d23a0 | 44052b5f18c5d605d33bf3207b5c918127cf0e82 |
python/cpython | python__cpython-124042 | # testHypot in test_match uses assertEqual on floats
https://github.com/python/cpython/blob/main/Lib/test/test_math.py#L794-L871
We (SUSE) have the test `self.assertEqual(hypot(1, -1), math.sqrt(2))` failing on some distros and some architectures (namely `i586`), and I suspect the whole idea of `assertEqual` on two floating-point numbers is very unfortunate. Shouldn't there at least be `assertAlmostEqual`?
Cc: @mdickinson, @rhettinger, @danigm
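A tolerance-based comparison along these lines (the `rel_tol` value is arbitrary) avoids the platform-dependent exact-equality failure:

```python
import math

x = math.hypot(1, -1)
y = math.sqrt(2)

# unittest's assertAlmostEqual(x, y) is roughly round(x - y, 7) == 0 ...
assert round(x - y, 7) == 0
# ... while math.isclose() makes the relative tolerance explicit
assert math.isclose(x, y, rel_tol=1e-12)
```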
<!-- gh-linked-prs -->
### Linked PRs
* gh-124042
* gh-124235
* gh-124236
<!-- /gh-linked-prs -->
| 4420cf4dc9ef7bd3c1c9b5465fa9397304bf0110 | 7628f67d55cb65bad9c9266e0457e468cd7e3775 |
python/cpython | python__cpython-124031 | # test_termios.test_tcsendbreak fails with 'Inappropriate ioctl for device' error on NetBSD
# Bug report
### Bug description:
```
$ ./python -m test test_termios -m test_tcsendbreak -v
== CPython 3.14.0a0 (heads/main:3bd942f106a, Sep 13 2024, 03:20:13) [GCC 10.5.0]
== NetBSD-10.0-amd64-x86_64-64bit-ELF little-endian
== Python build: debug
== cwd: /home/blue/cpython/build/test_python_worker_6492æ
== CPU count: 6
== encodings: locale=UTF-8 FS=utf-8
== resources: all test resources are disabled, use -u option to unskip tests
Using random seed: 3084653998
0:00:00 load avg: 3.05 Run 1 test sequentially in a single process
0:00:00 load avg: 3.05 [1/1] test_termios
test_tcsendbreak (test.test_termios.TestFunctions.test_tcsendbreak) ... ERROR
======================================================================
ERROR: test_tcsendbreak (test.test_termios.TestFunctions.test_tcsendbreak)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/blue/cpython/Lib/test/test_termios.py", line 95, in test_tcsendbreak
termios.tcsendbreak(self.fd, 1)
~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^
termios.error: (25, 'Inappropriate ioctl for device')
----------------------------------------------------------------------
Ran 1 test in 0.008s
FAILED (errors=1)
test test_termios failed
test_termios failed (1 error)
== Tests result: FAILURE ==
1 test failed:
test_termios
Total duration: 154 ms
Total tests: run=1 (filtered)
Total test files: run=1/1 (filtered) failed=1
Result: FAILURE
```
OS: `NetBSD 10.0 amd64`
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Other
<!-- gh-linked-prs -->
### Linked PRs
* gh-124031
* gh-124062
* gh-124063
<!-- /gh-linked-prs -->
| 9f42b62db998131bb5cd555e2fa72ba7e06e3130 | 8810e286fa48876422d1b230208911decbead294 |
python/cpython | python__cpython-124028 | # Python 3.13+ REPL inserts ~ on [Del] [PgUp] [PgDn] with TERM=vt100
# Bug report
### Bug description:
I've noticed that the Python 3.13+ REPL misbehaves in a [mock](https://github.com/rpm-software-management/mock) shell environment. Whenever I press <kbd>Del</kbd>, <kbd>PgUp</kbd>, or <kbd>PgDn</kbd>, a literal `~` character is inserted.
tl;dr this happens when you run `TERM=vt100 python3.13`.
## Details:
I did not know what was special about the mock shell, but this did not happen with the old readline-based REPL.
When trying to figure out what's different in the mock shell I looked at `env` and it was:
```
SHELL=/bin/bash
HISTCONTROL=ignoredups
HISTSIZE=1000
HOSTNAME=2f26bd11c076
PWD=/builddir
LOGNAME=root
HOME=/builddir
LANG=C.UTF-8
LS_COLORS=...snip...
PROMPT_COMMAND=printf "\033]0;<mock-chroot>\007"
TERM=vt100
USER=root
SHLVL=1
PS1=<mock-chroot> \s-\v\$
DEBUGINFOD_URLS=https://debuginfod.fedoraproject.org/
PATH=/usr/local/sbin:/usr/bin:/bin:/usr/sbin:/sbin
MAIL=/var/spool/mail/root
_=/usr/bin/env
```
I found out that to reproduce, all I have to do is to set `TERM=vt100`. I don't know the actual meaning of that, but it changes the behavior of the new REPL, but not the old.
```
$ TERM=vt100 python3.13
Python 3.13.0rc2 (main, Sep 7 2024, 00:00:00) [GCC 13.3.1 20240522 (Red Hat 13.3.1-1)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>>
```
Pressing either of <kbd>Del</kbd>, <kbd>PgUp</kbd>, or <kbd>PgDn</kbd> (possibly other keys?) adds a literal `~` instead of doing any of the expected actions (history traverse up, down, deleting the next character if any).
```
$ TERM=vt100 python3.12
Python 3.12.6 (main, Sep 9 2024, 00:00:00) [GCC 13.3.1 20240522 (Red Hat 13.3.1-1)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>>
```
Pressing either of <kbd>Del</kbd>, <kbd>PgUp</kbd>, or <kbd>PgDn</kbd> does what it is supposed to do.
---
To reproduce my actual use case, first install mock. The easiest way is to use podman (or docker) with Fedora. Following https://rpm-software-management.github.io/mock/#mock-inside-podman-fedora-toolbox-or-docker-container
```
$ podman run --rm --privileged -ti fedora:40 bash # or docker run --cap-add=SYS_ADMIN ...
# dnf install -y mock
# useradd mockbuilder
# usermod -a -G mock mockbuilder
# su - mockbuilder
$ mock -r fedora-rawhide-x86_64 --no-bootstrap-image install python3.13 python3.12
...
$ mock -r fedora-rawhide-x86_64 shell
...
<mock-chroot> sh-5.2# python3.13
Python 3.13.0rc2 (main, Sep 7 2024, 00:00:00) [GCC 14.2.1 20240905 (Red Hat 14.2.1-2)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>>
```
### CPython versions tested on:
3.13
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-124028
* gh-124029
<!-- /gh-linked-prs -->
| f4e5643df64d0c2a009ed224560044b3409a47c0 | 6e06e01881dcffbeef5baac0c112ffb14cfa0b27 |
python/cpython | python__cpython-124023 | # class docstring is removed in interactive mode
```
>>> import dis
>>> src = """class C:
... "docstring"
... x = 42"""
>>> dis.dis(compile(src, "<string>", "single"))
0 RESUME 0
1 LOAD_BUILD_CLASS
PUSH_NULL
LOAD_CONST 0 (<code object C at 0x10c3b4f40, file "<string>", line 1>)
MAKE_FUNCTION
LOAD_CONST 1 ('C')
CALL 2
STORE_NAME 0 (C)
RETURN_CONST 2 (None)
Disassembly of <code object C at 0x10c3b4f40, file "<string>", line 1>:
1 RESUME 0
LOAD_NAME 0 (__name__)
STORE_NAME 1 (__module__)
LOAD_CONST 0 ('C')
STORE_NAME 2 (__qualname__)
LOAD_CONST 1 (1)
STORE_NAME 3 (__firstlineno__)
2 NOP <--- line 2 is where the docstring is. This NOP shouldn't be here
3 LOAD_CONST 2 (42)
STORE_NAME 4 (x)
LOAD_CONST 3 (())
STORE_NAME 5 (__static_attributes__)
RETURN_CONST 4 (None)
```
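A pure-Python way to observe the difference between the two compile modes (without reading bytecode) is to `exec` the compiled code and inspect `__doc__`:

```python
src = 'class C:\n    "docstring"\n    x = 42'

# In "exec" mode the class docstring survives compilation:
ns = {}
exec(compile(src, "<string>", "exec"), ns)
assert ns["C"].__doc__ == "docstring"
assert ns["C"].x == 42

# In "single" (interactive) mode, affected versions drop it:
ns_single = {}
exec(compile(src, "<string>", "single"), ns_single)
print(ns_single["C"].__doc__)  # None on affected versions, "docstring" once fixed
```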
<!-- gh-linked-prs -->
### Linked PRs
* gh-124023
* gh-124052
* gh-124075
<!-- /gh-linked-prs -->
| a9594a34c62487961be86c0925daaa43bb467bb9 | cfe6074d1fa81cf0684fbf8a623616441a1966e7 |
python/cpython | python__cpython-124020 | # codegen_annotations_in_scope is called when ste->ste_annotations_used is false
Currently `codegen_annotations_in_scope()` is called before the value of `ste->ste_annotations_used` is checked. As far as I can tell this is not necessary.
<!-- gh-linked-prs -->
### Linked PRs
* gh-124020
<!-- /gh-linked-prs -->
| 6e06e01881dcffbeef5baac0c112ffb14cfa0b27 | a53812df126b99bca25187441a123c7785ee82a0 |
python/cpython | python__cpython-124017 | # upgrade Unicode database to 16.0.0
https://blog.unicode.org/2024/09/announcing-unicode-standard-version-160.html
<!-- gh-linked-prs -->
### Linked PRs
* gh-124017
<!-- /gh-linked-prs -->
| bb904e063d0cbe4c7c83ebfa5fbed2d9c4980a64 | a9594a34c62487961be86c0925daaa43bb467bb9 |
python/cpython | python__cpython-124014 | # remove _PyCompile_IsTopLevelAwait
The assertions that need this function were added to convince ourselves that the refactor away from it is correct. We don't need them anymore, so we can remove this function.
<!-- gh-linked-prs -->
### Linked PRs
* gh-124014
<!-- /gh-linked-prs -->
| 8145ebea587284db3be3f152ee0298952977d6f4 | b2afe2aae487ebf89897e22c01d9095944fd334f |
python/cpython | python__cpython-124059 | # invaild assertion of _io__WindowConsoleIO_write_impl
# Feature or enhancement
### Proposal:
If `WriteConsoleW` inside `_io__WindowsConsoleIO_write_impl` does not write the expected output, the function recalculates the existing `len`.
At that point, `WideCharToMultiByte` computes the byte length of the UTF-8 data, while `MultiByteToWideChar` computes the character length of the UTF-16 data. Is it valid to compare the results of these two?
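As background for the question: the two lengths count different units, which a quick Python check makes concrete (the sample characters are arbitrary):

```python
# UTF-8 byte counts and UTF-16 code-unit counts are different quantities:
for ch in ("A", "é", "€", "🐍"):
    utf8_bytes = len(ch.encode("utf-8"))
    utf16_units = len(ch.encode("utf-16-le")) // 2
    print(ch, utf8_bytes, utf16_units)

# '€' (U+20AC) is 3 UTF-8 bytes but 1 UTF-16 code unit,
# while '🐍' (U+1F40D) is 4 UTF-8 bytes and 2 UTF-16 code units:
assert len("€".encode("utf-8")) == 3
assert len("€".encode("utf-16-le")) // 2 == 1
assert len("🐍".encode("utf-8")) == 4
assert len("🐍".encode("utf-16-le")) // 2 == 2
```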
### Has this already been discussed elsewhere?
I have already discussed this feature proposal on Discourse
### Links to previous discussion of this feature:
https://discuss.python.org/t/is-the-size-comparison-of-utf-8-and-utf-16-valid/63614
<!-- gh-linked-prs -->
### Linked PRs
* gh-124059
* gh-127325
* gh-127326
<!-- /gh-linked-prs -->
| 3cf83d91a5baf3600dd60f7aaaf4fb6d73c4b8a9 | 83926d3b4c7847394b5e2531e9566d7fc9fbea0f |
python/cpython | python__cpython-124003 | # Use array of size 1 for `self_or_null` and other instruction inputs that get merged into another array.
# Bug report
We have a common pattern for calls. The stack looks like this: `func` `self_or_null` `args[oparg]` and if `self_or_null` is not `NULL` we merge it into the `args` array. This is error prone however as the code generator thinks `self_or_null` is a discrete scalar, not part of an array.
We have specific code to workaround this, but I'm not sure that there are lurking bugs.
We should, instead, define `self_or_null` as an array of size 1. This tells the code generator that it is an array and must be in memory, not a register.
As well as causing issues now, this is necessary for https://github.com/python/cpython/issues/120024 and https://github.com/python/cpython/issues/121459.
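For illustration, a pure-Python sketch of the stack layout being described (hypothetical names; the real code operates on the interpreter's C value stack):

```python
def merge_self(self_or_null, args):
    """Mimic the call sequence: merge self into the front of args when present."""
    if self_or_null is not None:
        return [self_or_null] + args
    return args

# Plain function call: no self, args are passed through unchanged.
assert merge_self(None, ["a", "b"]) == ["a", "b"]
# Bound-method call: self is folded into the front of the args array.
assert merge_self("obj", ["a", "b"]) == ["obj", "a", "b"]
```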
<!-- gh-linked-prs -->
### Linked PRs
* gh-124003
<!-- /gh-linked-prs -->
| 4ed7d1d6acc22807bfb5983c98fd59f7cb5061db | 3ea51fa2e33797c772af6eaf6ede76d2dc6082ba |
python/cpython | python__cpython-123995 | # Buildbot failure in test_importlib UTF-16 tests
:warning::warning::warning: Buildbot failure :warning::warning::warning:
------------------------------------------------------------------------
Hi! The buildbot **s390x RHEL9 Refleaks 3.x** has failed when building commit ba687d9481c04fd160795ff8d8568f5c9f877128.
What do you need to do:
1. Don't panic.
2. Check [the buildbot page in the devguide](https://devguide.python.org/buildbots/) if you don't know what the buildbots are or how they work.
3. Go to the page of the buildbot that failed (https://buildbot.python.org/#/builders/1589/builds/145) and take a look at the build logs.
4. Check if the failure is related to this commit (ba687d9481c04fd160795ff8d8568f5c9f877128) or if it is a false positive.
5. If the failure is related to this commit, please, reflect that on the issue and make a new Pull Request with a fix.
You can take a look at the buildbot page here:
https://buildbot.python.org/#/builders/1589/builds/145
Failed tests:
- test_importlib
Failed subtests:
- test_open_text - test.test_importlib.resources.test_functional.FunctionalAPITest_StringAnchor.test_open_text
- test_read_text - test.test_importlib.resources.test_functional.FunctionalAPITest_StringAnchor.test_read_text
- test_read_text - test.test_importlib.resources.test_functional.FunctionalAPITest_ModuleAnchor.test_read_text
- test_open_text - test.test_importlib.resources.test_functional.FunctionalAPITest_ModuleAnchor.test_open_text
Summary of the results of the build (if available):
==
<details>
<summary>Click to see traceback logs</summary>
```python-traceback
Traceback (most recent call last):
File "/home/dje/cpython-buildarea/3.x.edelsohn-rhel-z.refleak/build/Lib/test/test_importlib/resources/test_functional.py", line 144, in test_open_text
self.assertEndsWith( # ignore the BOM
~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^
f.read(),
^^^^^^^^^
...<2 lines>...
),
^^
)
^
File "/home/dje/cpython-buildarea/3.x.edelsohn-rhel-z.refleak/build/Lib/test/test_importlib/resources/test_functional.py", line 50, in assertEndsWith
self.assertEqual(string[-len(suffix) :], suffix)
~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AssertionError: '\x00H\x00e\x00l\x00l\x00o\x00,\x00 \x00U\[61 chars]00\n' != 'H\x00e\x00l\x00l\x00o\x00,\x00 \x00U\x00T[61 chars]\x00'
- �H�e�l�l�o�,� �U�T�F�-�1�6� �w�o�r�l�d�!�
? -
+ H�e�l�l�o�,� �U�T�F�-�1�6� �w�o�r�l�d�!�
-
+ �
Traceback (most recent call last):
File "/home/dje/cpython-buildarea/3.x.edelsohn-rhel-z.refleak/build/Lib/test/test_importlib/resources/test_read.py", line 46, in test_read_text_with_errors
self.assertEqual(
~~~~~~~~~~~~~~~~^
result,
^^^^^^^
...<2 lines>...
'\x00w\x00o\x00r\x00l\x00d\x00!\x00\n\x00',
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
AssertionError: '\x00H\x00e\x00l\x00l\x00o\x00,\x00 \x00U\[61 chars]00\n' != 'H\x00e\x00l\x00l\x00o\x00,\x00 \x00U\x00T[61 chars]\x00'
- �H�e�l�l�o�,� �U�T�F�-�1�6� �w�o�r�l�d�!�
? -
+ H�e�l�l�o�,� �U�T�F�-�1�6� �w�o�r�l�d�!�
-
+ �
Traceback (most recent call last):
File "/home/dje/cpython-buildarea/3.x.edelsohn-rhel-z.refleak/build/Lib/test/test_importlib/resources/test_functional.py", line 92, in test_read_text
self.assertEndsWith( # ignore the BOM
~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^
resources.read_text(
^^^^^^^^^^^^^^^^^^^^
...<6 lines>...
),
^^
)
^
File "/home/dje/cpython-buildarea/3.x.edelsohn-rhel-z.refleak/build/Lib/test/test_importlib/resources/test_functional.py", line 50, in assertEndsWith
self.assertEqual(string[-len(suffix) :], suffix)
~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AssertionError: '\x00H\x00e\x00l\x00l\x00o\x00,\x00 \x00U\[61 chars]00\n' != 'H\x00e\x00l\x00l\x00o\x00,\x00 \x00U\x00T[61 chars]\x00'
- �H�e�l�l�o�,� �U�T�F�-�1�6� �w�o�r�l�d�!�
? -
+ H�e�l�l�o�,� �U�T�F�-�1�6� �w�o�r�l�d�!�
-
+ �
Traceback (most recent call last):
File "/home/dje/cpython-buildarea/3.x.edelsohn-rhel-z.refleak/build/Lib/test/test_importlib/resources/test_open.py", line 49, in test_open_text_with_errors
self.assertEqual(
~~~~~~~~~~~~~~~~^
result,
^^^^^^^
...<2 lines>...
'\x00w\x00o\x00r\x00l\x00d\x00!\x00\n\x00',
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
AssertionError: '\x00H\x00e\x00l\x00l\x00o\x00,\x00 \x00U\[61 chars]00\n' != 'H\x00e\x00l\x00l\x00o\x00,\x00 \x00U\x00T[61 chars]\x00'
- �H�e�l�l�o�,� �U�T�F�-�1�6� �w�o�r�l�d�!�
? -
+ H�e�l�l�o�,� �U�T�F�-�1�6� �w�o�r�l�d�!�
-
+ �
```
</details>
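The mismatch above is a byte-order effect: on a big-endian platform such as s390x, Python's native `"utf-16"` codec emits big-endian code units, so each ASCII character's NUL byte leads instead of trails. A minimal illustration:

```python
# Little-endian: the NUL byte trails each ASCII character.
assert "A".encode("utf-16-le") == b"A\x00"
# Big-endian: the NUL byte leads, which is what the s390x buildbot saw.
assert "A".encode("utf-16-be") == b"\x00A"
# The bare "utf-16" codec uses native order and prefixes a BOM.
assert "A".encode("utf-16")[:2] in (b"\xff\xfe", b"\xfe\xff")
```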
_Originally posted by @bedevere-bot in https://github.com/python/cpython/issues/123037#issuecomment-2345217335_
<!-- gh-linked-prs -->
### Linked PRs
* gh-123995
<!-- /gh-linked-prs -->
| 3ea51fa2e33797c772af6eaf6ede76d2dc6082ba | a362c41bc934fabe6bfef9be1962005b38396860 |
python/cpython | python__cpython-124358 | # Remove `WITH_FREELISTS` macro
As we discussed at https://discuss.python.org/t/should-we-remove-the-with-freelists-macro,
this macro was only added so we could **test** mimalloc's own feature without using our own free list implementation.
See: https://github.com/python/cpython/issues/89685
But since we already implemented `per thread freelist` at the https://github.com/python/cpython/issues/111968 for freethreading, we don't need this flag anymore.
I will remove this macro during this year's core sprint (only around two weeks left)
https://discuss.python.org/t/2024-core-dev-sprint-in-bellevue-wa-september-23-27/39227
cc @colesbury @vstinner @markshannon
<!-- gh-linked-prs -->
### Linked PRs
* gh-124358
<!-- /gh-linked-prs -->
| ad7c7785461fffba04f5a36cd6d062e92b0fda16 | be76e3f26e0b907f711497d006b8b83bff04c036 |
python/cpython | python__cpython-124018 | # Sentinel in namespace path breaks NamespaceReader
# Bug report
### Bug description:
As reported in https://github.com/python/importlib_resources/issues/311 and highlighted by https://github.com/python/importlib_resources/issues/318, when an editable install adds a sentinel value to a namespace path, it can break the NamespaceReader when that value doesn't resolve to a pathlib.Path or zipfile.Path.
This code was introduced in https://github.com/python/cpython/issues/106531 and fixed in [importlib_resources 6.4.5](https://importlib-resources.readthedocs.io/en/latest/history.html#v6-4-5).
### CPython versions tested on:
3.13
### Operating systems tested on:
macOS
<!-- gh-linked-prs -->
### Linked PRs
* gh-124018
* gh-129319
<!-- /gh-linked-prs -->
| b543b32eff78ce214e68e8c5fc15a8c843fa8dec | fccbfc40b546630fa7ee404c0949d52ab2921a90 |
python/cpython | python__cpython-123972 | # Add _PyErr_RaiseSyntaxError and _PyErr_EmitSyntaxWarning
The ``_PyCompile_Error`` and ``_PyCompile_Warn`` functions take a compiler struct as parameter, just for the filename. In order to be able to use these functions from areas of the codebase where this does not exist, we need versions that take just the filename.
I think they should live in Python/errors.c, with a `_PyErr_` prefix.
<!-- gh-linked-prs -->
### Linked PRs
* gh-123972
<!-- /gh-linked-prs -->
| aba42c0b547e6395c9c268cf98a298d0494cb9df | 9aa1f60e2dedd8a67c42fb4f4c56858b6ba5b947 |
python/cpython | python__cpython-123971 | # `python -m random --float N` should pick a float between 0 and N, not 1 and N
# Bug report
### Bug description:
In the specification for the new `python -m random` CLI for 3.13 (#118131), the proposed behavior was
> if it's a float, print a random float between 0 and the input (via [random.uniform](https://docs.python.org/3/library/random.html#random.uniform)).
However, what was actually implemented (#118132, cc @hugovk) was
```console
$ python -m random --help
[…]
-f, --float N print a random floating-point number between 1 and N inclusive
$ python -m random 2.0
1.883974829952927
$ python -m random 2.0
1.034610672623156
$ python -m random 2.0
1.3676261878147473
$ python -m random 2.0
1.1875340810783404
$ python -m random 2.0
1.6148479875565644
$ python -m random 1.0
1.0
```
This is surprising and not helpful. Everyone will expect a range of length `N` starting from `0.0`, not a range of length `N - 1.0` starting from `1.0`.
(Note that this is completely distinct from the debate about whether it’s more natural to select *integers* from `[1, …, N]` or `[0, …, N - 1]`, as at least those are both ranges of length `N`.)
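The proposed behavior corresponds to the first call below; what 3.13 shipped corresponds to the second (a sketch of the two ranges, not the CLI's actual code):

```python
import random

n = 2.0
expected = random.uniform(0.0, n)  # range of length n, as specified
shipped = random.uniform(1.0, n)   # range of length n - 1.0, as implemented

assert 0.0 <= expected <= n
assert 1.0 <= shipped <= n
```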
### CPython versions tested on:
3.13
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-123971
* gh-124009
* gh-124517
* gh-124523
<!-- /gh-linked-prs -->
| a362c41bc934fabe6bfef9be1962005b38396860 | 8e99495701737c9d9706622f59581213ef163b23 |
python/cpython | python__cpython-127329 | # faulthandler's `dump_traceback` doesn't handle case where top-most frame is `FRAME_OWNED_BY_CSTACK`
# Bug report
We properly skip trampoline frames when they are not the top-most frame:
https://github.com/python/cpython/blob/3bd942f106aa36c261a2d90104c027026b2a8fb6/Python/traceback.c#L979-L982
But if `tstate->current_frame` is a trampoline frame (i.e., `FRAME_OWNED_BY_CSTACK`) then `dump_traceback` will crash if faulthandler is triggered when executing a trampoline frame.
<!-- gh-linked-prs -->
### Linked PRs
* gh-127329
* gh-127362
* gh-127363
<!-- /gh-linked-prs -->
| 58e334e1431b2ed6b70ee42501ea73e08084e769 | 9328db7652677a23192cb51b0324a0fdbfa587c9 |
python/cpython | python__cpython-123962 | # Refactorization of `_cursesmodule.c` to fix reference leaks
# Feature or enhancement
### gh-123962: global dictionary removal
In `_cursesmodule.c`, there is a global reference to the module's dictionary. The reason is that we cannot add all constants when the module is being initialized because some constants require `curses.initscr()` to be called beforehand.
Instead of storing the module's dictionary in a global variable, we can access it in `curses.initscr()` using `PyModule_GetDict`.
### gh-124047: global variable name cleanup
Some global variables are modified in `_cursesmodule.c` by other functions and their names can be confused with a local variable name. I wanted to use uppercase names but Victor argued that this should be reserved for C macros (https://github.com/python/cpython/pull/123910#discussion_r1754335272) and suggested keeping prefixed lowercase names.
### gh-124729: use a global state object to hold global variables
Once we are done with the above changes, we introduce a global module's state object. This state will hold the error and window type (which needs to be converted into a heap type) that are currently global variables.
### gh-124907: improve C API cleanup
We improve the interface for creating and destroying the underlying C API and its associated capsule. Once we transform the Window type into a heap type, we'll also add the corresponding `traverse()` and `clear()` callbacks to the capsule object.
### gh-124934: transform window type into a heap type
In this task, we make curses window type a heap type instead of a static one. Since the module is still not yet a multi-phase initialization module, we cannot entirely clean-up all references.
### gh-125000: make the module ~~free-threaded friendly~~ process-wide
The module should not be allowed to be exec'ed() more than once per process. This is achieved by adding a global flag. ~~In addition, we should make the read/write operations on the global flags atomic so that free-threaded builds do not have race conditions.~~
### gh-124965: implement PEP-489 for `curses`
The last task takes care of changing the module into a PEP-489 compliant one. It will also take care of the long-standing reference leaks in curses.
---
Some extra and optional tasks at the end:
### Error messages improvements
Some functions say that "X() returned ERR" but the function X is not always the correct one (e.g., in `_curses_window_echochar_impl`, it says that `echochar` returned ERR but this depends on whether we used `echochar` or `wechochar`). I think we need to really use the name of the function that was called at runtime (some functions do this but not all).
The idea is to extend the exception type so that it also has two attributes, namely `called_funcname` and `curses_funcname` (subject to bikeshedding). The `called_funcname` value would contain the incriminating function (i.e. where the exception was raised) while `curses_funcname` would contain the runtime C function that was used (which may differ in many cases from the actual curses Python function).
To that end, we need the exception type to be a heap type, since a static type would be 1) an overkill 2) still leaks (and we don't like leaks).
### Cosmetic changes
I can't say I'm maintaining curses, but I can say that it's tiring to work with a file of more than 5k lines. Many lines are actually caused by the fact that we have both the implementation for the window object and for the module's methods. These two could probably be split into two files. I don't know if this is suitable, but it would probably help maintaining the code (both take roughly 2k lines of code). Here are the various cosmetic points I'd like to address:
- Make the macros more readable (they are packed and, honestly, hard to parse for a first-time contributor).
- ~Split the module into multiple `.c` files. This is not necessarily mandatory because we would probably need to export some functions that are shared or add some internal header so it might overcomplicate the task.~
- Simplify some code paths (just some if/else that could be cleaner or things like that).
- Put whitespaces in function calls (honestly, when you see `func(y,x, something)`, it's hard to read).
Those cosmetic changes could be helpful to newcomers, since no one is really interested in this module. But those are just things I had in mind while refactoring the module; none of them are really required (and I'm now quite used to the code as it is).
#### EDIT (03/10/24)
I'll just give up on splitting the files. There are too many inter-dependencies between functions that it makes any PR for that too hard to review. In addition, we would end up exporting too many un-necessary symbols.
### Related Work
* 2938c3dec99390087490124c2ef50e1592671e72 (tracked by [gh-123290](123290))
* e49d1b44d3ef2542c0ae165e14a7e5ffbc32b2d1 (tracked by [gh-124044](124044))
<!-- gh-linked-prs -->
### Linked PRs
* gh-123962
* gh-124047
* gh-124729
* gh-124907
* gh-124934
* gh-124965
* gh-125000
<!-- /gh-linked-prs -->
| 403f3ddedcab14f6c16ea78a93bb4acf49d06a07 | e5b0185e43c972ce98decd1493cd0b0c3a6b166b |
python/cpython | python__cpython-123959 | # codegen should ideally not need to know the optimization level
Optimization level is used for things that can be done in ast_opt. I'll try to move them there.
<!-- gh-linked-prs -->
### Linked PRs
* gh-123959
* gh-124143
<!-- /gh-linked-prs -->
| e07154fd1e3152a758cf9b476257a4ffdc48dfc6 | 2938c3dec99390087490124c2ef50e1592671e72 |
python/cpython | python__cpython-123970 | # Argparse does not identify negative numbers with underscores as a numerical value
# Bug report
### Bug description:
Simple example:
$ python test_arguments.py --Value -1_000.0
`error: argument -v/--Value: expected one argument`
The regular expression to identify negative numbers in the argparse.py module uses the following regular expression:
```python
#argparse.py:1353-1354
# determines whether an "option" looks like a negative number
self._negative_number_matcher = _re.compile(r'^-\d+$|^-\d*\.\d+$')
```
This does not identify negative numerical values with underscores as a numerical value
This could probably be resolved by changing the regex to:
```python
# determines whether an "option" looks like a negative number
self._negative_number_matcher = _re.compile(r'^-\d[\d_]*$|^-[\d_]*\.\d+$')
```
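The difference between the two patterns can be checked directly with `re`:

```python
import re

old = re.compile(r'^-\d+$|^-\d*\.\d+$')
new = re.compile(r'^-\d[\d_]*$|^-[\d_]*\.\d+$')

assert old.match('-1_000.0') is None      # current behavior: not seen as a number
assert new.match('-1_000.0') is not None  # proposed behavior: matches
assert new.match('-1000') is not None     # plain negative numbers still match
assert new.match('--flag') is None        # options are still rejected
```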
### CPython versions tested on:
3.9
### Operating systems tested on:
Windows
<!-- gh-linked-prs -->
### Linked PRs
* gh-123970
* gh-124158
* gh-124175
<!-- /gh-linked-prs -->
| 14e5bdceff45e6e789e1f838b96988946c75b0f4 | 0a32c6959c265d21d7c43fe8e4aefc8c0983e85e |
python/cpython | python__cpython-123943 | # Missing test for docstring-handling code in ast_opt.c
``astfold_body`` has a block of code that checks whether const folding created something that looks like a docstring which was not there before. If it did, then we have a string that came out of folding a const expression. Python does not treat such a string as a docstring, so in order to prevent the compiler from recognising it as one, ``astfold_body`` converts the AST node into a const f-string node (``_PyAST_JoinedStr``), which is also not recognised as a docstring.
This is not covered by tests, so it took me a while to figure out what's going on. I'll add the test.
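The behavior the missing test should pin down is observable from pure Python (assuming the folding behaves as described: the folded constant must not be promoted to a docstring):

```python
def folded():
    '-' * 3  # constant-folded to '---', but must NOT become a docstring

def real():
    """actual docstring"""

assert folded.__doc__ is None
assert real.__doc__ == "actual docstring"
```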
<!-- gh-linked-prs -->
### Linked PRs
* gh-123943
* gh-123955
<!-- /gh-linked-prs -->
| 6e23c89fcdd02b08fa6e9fa70d6e90763ddfc327 | c8d1dbef5b770b647aa7ff45fd5b269bc7629d0b |
python/cpython | python__cpython-124150 | # Daemonic threads not killed in some circumstances in python 3.13
# Bug report
### Bug description:
I got a [bug report](https://github.com/pycompression/python-zlib-ng/issues/53) on python-zlib-ng where, when using the threaded opener, the program would hang if an exception occurred and a context manager was not used.
I fixed it by changing the threads to be daemonic: https://github.com/pycompression/python-zlib-ng/pull/54.
This fixes the issue on python 3.9, 3.10, 3.11 and 3.12, but not 3.13-rc2. Also the latest main branch is affected. Unfortunately I could not create a minimal reproducer, other than running the test directly. Daemonic threads seem not to work in this particular case, where there are some BufferedIO streams and queues involved.
```
git clone -b issue53 https://github.com/pycompression/python-zlib-ng
# Build Cpython
./python -m ensurepip
./python -m pip install pytest pytest-timeout <path to python-zlib-ng repo>
./python -m pytest <path to python-zlib-ng repo>/tests/test_gzip_ng_threaded.py::test_threaded_program_can_exit_on_error
```
Since there are many code changes between 3.12 in 3.13 the underlying cause is surely there, but it is hard to pinpoint.
### CPython versions tested on:
3.13, CPython main branch
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-124150
<!-- /gh-linked-prs -->
| c9878e1b220b748788c3faa656257d5da4cd46c7 | 21c825417fc993d708c3ff57e2b8b97b09a20159 |
python/cpython | python__cpython-123941 | # Incorrect slot check: typo in `__dictoffset__`
# Bug report
I made a typo that made it into the final code: https://github.com/python/cpython/blob/00ffdf27367fb9aef247644a96f1a9ffb5be1efe/Lib/dataclasses.py#L1211-L1212
It should had been `__dictoffset__` not `__dictrefoffset__`.
Fixing plan:
- Add tests for C types with `__dictoffset__`, so it won't happen again :)
- Fix the typo
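For context, `__dictoffset__` is how a type advertises whether its instances carry a `__dict__`; a minimal sketch of the check the dataclasses code intends (hypothetical helper name):

```python
class WithDict:
    pass

class NoDict:
    __slots__ = ()

def has_instance_dict(cls):
    # A nonzero __dictoffset__ means instances have a __dict__ slot.
    return getattr(cls, "__dictoffset__", 0) != 0

assert has_instance_dict(WithDict)
assert not has_instance_dict(NoDict)
```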
<!-- gh-linked-prs -->
### Linked PRs
* gh-123941
* gh-123991
* gh-123992
<!-- /gh-linked-prs -->
| ac918ccad707ab2d7dbb78a4796a7b8a874f334c | 43303e362e3a7e2d96747d881021a14c7f7e3d0b |
python/cpython | python__cpython-124038 | # `reset_mock` resets `MagicMock`'s magic methods in an unexpected way
# Bug report
### Bug description:
The `reset_mock(return_value=True)` method behaves in a wrong/inconsistent way.
When used with `MagicMock`, `reset_mock(return_value=True)` does not reset the return values of the magic methods. Only once a magic method such as `__str__` has been called does `reset_mock` reset its return value, and even then not to the default value.
```python
from unittest import mock
mm = mock.MagicMock()
print(type(mm.__str__()))
mm.reset_mock(return_value=True)
print(type(mm.__str__()))
print(type(mm.__hash__()))
mm.reset_mock(return_value=True)
print(type(mm.__hash__()))
```
#### Output
```sh
<class 'str'>
<class 'unittest.mock.MagicMock'>
<class 'int'>
<class 'unittest.mock.MagicMock'>
```
Since Python 3.9 [PR](https://github.com/python/cpython/issues/83113) `reset_mock` now also resets child mocks. This explains the behaviour. Calling the `__str__` method creates a child `MagicMock` with a set return value. Since this child mock now exists, its return value is reset when reset_mock(return_value=True) is called.
Although this can be logically explained, it's counter-intuitive and annoying as I'm never sure which values are being reset.
I would expect the same behaviour as `Mock`. The return value of `__str__` and other magic methods should not be effected.
```python
from unittest import mock
m = mock.Mock()
print(type(m.__str__()))
m.reset_mock(return_value=True)
print(type(m.__str__()))
print(type(m.__hash__()))
m.reset_mock(return_value=True)
print(type(m.__hash__()))
```
#### Output
```sh
<class 'str'>
<class 'str'>
<class 'int'>
<class 'int'>
```
### CPython versions tested on:
3.10
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-124038
* gh-124231
* gh-124232
<!-- /gh-linked-prs -->
| 7628f67d55cb65bad9c9266e0457e468cd7e3775 | 43cd7aa8cd88624f7211e47b98bc1e8e63e7660f |
python/cpython | python__cpython-123929 | # Better error message for from imports when a script shadows a module
# Feature or enhancement
### Proposal:
This is a follow up to https://github.com/python/cpython/pull/113769
See https://github.com/python/cpython/pull/113769#issuecomment-2197994572
### Has this already been discussed elsewhere?
No response given
### Links to previous discussion of this feature:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-123929
* gh-125937
* gh-125960
<!-- /gh-linked-prs -->
| 3983527c3a6b389e373a233e514919555853ccb3 | 58c753827ac7aa3d7f1495ac206c28bf2f6c67e8 |
python/cpython | python__cpython-128405 | # configure doesn't disable ncurses extended_pair_content() if ncursesw is not available
# Bug report
### Bug description:
On a system I'm trying to build Python for, I have libncurses.so available but not libncursesw.so.
Unfortunately, although this is not documented directly anywhere I can find, the ncurses extended_pair_content() and extended_color_content() functions (and the `init_extended_*()` functions) are not available in libncurses; they are only available in libncursesw. This causes the compilation of Python to fail:
```
checking for curses.h... yes
checking for ncurses.h... yes
checking for ncursesw... no
checking for initscr in -lncursesw... no
checking for ncurses... no
checking for initscr in -lncurses... yes
checking curses module flags... ncurses (CFLAGS: , LIBS: -lncurses)
checking for panel.h... yes
checking for panel... no
checking for update_panels in -lpanel... yes
checking panel flags... panel (CFLAGS: , LIBS: -lpanel)
checking for term.h... yes
checking whether mvwdelch is an expression... yes
checking whether WINDOW has _flags... yes
checking for curses function is_pad... yes
checking for curses function is_term_resized... yes
checking for curses function resize_term... yes
checking for curses function resizeterm... yes
checking for curses function immedok... yes
checking for curses function syncok... yes
checking for curses function wchgat... yes
checking for curses function filter... yes
checking for curses function has_key... yes
checking for curses function typeahead... yes
checking for curses function use_env... yes
...
checking for stdlib extension module _curses... yes
checking for stdlib extension module _curses_panel... yes
...
86_64-rl84-linux-gnu-gcc -pthread -shared Modules/_cursesmodule.o -lncurses -o Modules/_curses.cpython-312-x86_64-linux-gnu.so
...
./python -E -c 'import sys ; from sysconfig import get_platform ; print("%s-%d.%d" % (get_platform(), *sys.version_info[:2]))' >platform
[ERROR] _curses failed to import: /data/src/python3/Linux-Release-make/bld.python3/build/lib.linux-x86_64-3.12/_curses.cpython-312-x86_64-linux-gnu.so: undefined symbol: extended_pair_content
```
If I try a simple program:
```
#include <ncurses.h>
#include <stdio.h>
int main(void)
{
initscr();
start_color();
{
int f, b;
int r = extended_pair_content(1, &f, &b);
printf("r=%d f=%d b=%d\n", r, f, b);
}
endwin();
return 0;
}
```
then it works if I link with `-lncursesw`:
```
$ gcc -o /tmp/foo /tmp/foo.c -lncursesw
$
```
But fails if I only link with `-lncurses`:
```
$ gcc -o /tmp/foo /tmp/foo.c -lncurses
/bin/ld: /tmp/cccHNZsN.o: in function `main':
foo.c:(.text+0x85): undefined reference to `extended_pair_content'
/bin/ld: foo.c:(.text+0x107): undefined reference to `extended_color_content'
collect2: error: ld returned 1 exit status
$
```
I believe this patch will fix it:
```
--- a/Modules/_cursesmodule.c 2024-09-06 15:03:47.000000000 -0400
+++ b/Modules/_cursesmodule.c 2024-09-10 17:41:55.124440110 -0400
@@ -139,7 +139,7 @@
#define STRICT_SYSV_CURSES
#endif
-#if NCURSES_EXT_FUNCS+0 >= 20170401 && NCURSES_EXT_COLORS+0 >= 20170401
+#if HAVE_NCURSESW && NCURSES_EXT_FUNCS+0 >= 20170401 && NCURSES_EXT_COLORS+0 >= 20170401
#define _NCURSES_EXTENDED_COLOR_FUNCS 1
#else
#define _NCURSES_EXTENDED_COLOR_FUNCS 0
```
### CPython versions tested on:
3.12
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-128405
* gh-128407
* gh-128408
<!-- /gh-linked-prs -->
| 8d16919a06a55a50756bf083221a6f6cab43de50 | e1baa778f602ede66831eb34b9ef17f21e4d4347 |
python/cpython | python__cpython-123924 | # Use deferred reference counting in some `_PyInterpreterFrame` fields
# Feature or enhancement
The `_PyInterpreterFrame` struct contains a strong reference to:
* `f_executable` - the currently executing code object
* `f_funcobj` - the function object
* `f_locals` - the locals object (often NULL)
* `frame_obj` - the frame object (often NULL)
We should use deferred references (in the free-threaded build) for `f_executable` and `f_funcobj` because they are a common source of reference count contention. The pointed to objects (code and function) already support deferred reference counting, but the references in the frame are not deferred.
I think we don't need to bother with changing `f_locals` for now. I don't think it's used frequently enough to be a bottleneck, but we can revisit it later if necessary.
The `frame_obj` are typically unique -- not shared across threads -- so we don't need to bother with deferred reference counting for `frame_obj`.
### Complications and hazards
`_PyInterpreterFrame` are also embedded in generators/coroutines and `PyFrameObject`, which are heap objects. We need to be careful that `_PyStackRef` fields are visible to the GC in order to be kept alive. Once an object is untracked, the `_PyStackRef` fields may no longer be valid: it's safe to call `PyStackRef_CLOSE/CLEAR` but not otherwise access or dereference those fields because the GC may have already collected them.
<!-- gh-linked-prs -->
### Linked PRs
* gh-123924
* gh-124026
<!-- /gh-linked-prs -->
| b2afe2aae487ebf89897e22c01d9095944fd334f | 4ed7d1d6acc22807bfb5983c98fd59f7cb5061db |
python/cpython | python__cpython-123920 | # `_freeze_module.c` has several unhandled nulls
# Bug report
https://github.com/python/cpython/blob/main/Programs/_freeze_module.c has several unhandled nulls.
I have a PR ready:
```diff
diff --git Programs/_freeze_module.c Programs/_freeze_module.c
index 2a462a42cda..891e4256e89 100644
--- Programs/_freeze_module.c
+++ Programs/_freeze_module.c
@@ -110,6 +110,9 @@ static PyObject *
compile_and_marshal(const char *name, const char *text)
{
char *filename = (char *) malloc(strlen(name) + 10);
+ if (filename == NULL) {
+ return PyErr_NoMemory();
+ }
sprintf(filename, "<frozen %s>", name);
PyObject *code = Py_CompileStringExFlags(text, filename,
Py_file_input, NULL, 0);
@@ -133,6 +136,9 @@ get_varname(const char *name, const char *prefix)
{
size_t n = strlen(prefix);
char *varname = (char *) malloc(strlen(name) + n + 1);
+ if (varname == NULL) {
+ return NULL;
+ }
(void)strcpy(varname, prefix);
for (size_t i = 0; name[i] != '\0'; i++) {
if (name[i] == '.') {
@@ -178,6 +184,11 @@ write_frozen(const char *outpath, const char *inpath, const char *name,
fprintf(outfile, "%s\n", header);
char *arrayname = get_varname(name, "_Py_M__");
+ if (arrayname == NULL) {
+ fprintf(stderr, "memory error: could not allocate varname\n");
+ fclose(outfile);
+ return -1;
+ }
write_code(outfile, marshalled, arrayname);
free(arrayname);
```
<!-- gh-linked-prs -->
### Linked PRs
* gh-123920
* gh-123948
* gh-123949
<!-- /gh-linked-prs -->
| c8d1dbef5b770b647aa7ff45fd5b269bc7629d0b | e9eedf19c99475b1940bbbbdc8816b51da3968e7 |
python/cpython | python__cpython-123918 | # buildrelease.bat uses same build directory for AMD64 and ARM64
# Bug report
### Bug description:
When I build Python 3.11.10 installers using `Tools\msi\buildrelease.bat -x64 -arm64`, the AMD64 files under `PCbuild\amd64\en-us` are deleted when the ARM64 build starts.
This is due to a typo in the build directory for ARM64 in `Tools\msi\buildrelease.bat`, which has `set BUILD=%Py_OutDir%amd64\`. This build directory conflicts with that for AMD64 and should instead be `set BUILD=%Py_OutDir%arm64\`. I'll create a PR.
### CPython versions tested on:
3.11
### Operating systems tested on:
Windows
<!-- gh-linked-prs -->
### Linked PRs
* gh-123918
* gh-123921
* gh-123922
<!-- /gh-linked-prs -->
| 00ffdf27367fb9aef247644a96f1a9ffb5be1efe | a2d0818c85c985a54f358e1d9ea7ce3eb0c5acca |
python/cpython | python__cpython-123947 | # PyType_From*: Disallow metaclasses with custom tp_new
Following up after #60074 and #103968:
In 3.14, the deprecation period is over and `PyType_From*` should fail if the metaclass has custom `tp_new`. That means the `tp_new` is no longer silently skipped.
The proper way to instantiate such a metaclass is to call it.
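A pure-Python analogue of the distinction: calling the metaclass runs its `__new__`, whereas the deprecated `PyType_From*` path effectively skipped that customization:

```python
class Meta(type):
    def __new__(mcls, name, bases, ns):
        cls = super().__new__(mcls, name, bases, ns)
        cls.tagged = True  # customization that must not be silently skipped
        return cls

# "Calling the metaclass": Meta.__new__ runs, so the tag is applied.
Made = Meta("Made", (), {})
assert Made.tagged is True
```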
<!-- gh-linked-prs -->
### Linked PRs
* gh-123947
* gh-131506
<!-- /gh-linked-prs -->
| 432bf31327c6b9647acb8bdb0eac2d392fd9f60a | d7e83398c188a0acd19a496ee2eeeeab52d64a11 |
python/cpython | python__cpython-123893 | # _wmi missing in sys.stdlib_module_names
# Bug report
### Bug description:
Given that `import _wmi` succeeds on a minimal Python distribution (embeddable amd64 downloaded from https://www.python.org/downloads/release/python-3130rc2/ for example), should `_wmi` not be included in `sys.stdlib_module_names`?
```
python-3.13.0rc2-embed-amd64 $ .\python.exe
Python 3.13.0rc2 (tags/v3.13.0rc2:ec61006, Sep 6 2024, 22:13:49) [MSC v.1940 64 bit (AMD64)] on win32
>>> import sys
>>> "_wmi" in sys.stdlib_module_names
False
>>> import _wmi
>>>
```
(My use case is that I am generating license exports automatically and any packages that are not resolvable to a distribution and not included in `stdlib_module_names` lead to an error or need special-casing.)
### CPython versions tested on:
3.13
### Operating systems tested on:
Windows
<!-- gh-linked-prs -->
### Linked PRs
* gh-123893
* gh-123896
* gh-123897
<!-- /gh-linked-prs -->
| fb1b51a58df4315f7ef3171a5abeb74f132b0971 | b52de7e02dba9e1f176d6d978d782fbd0509311e |
python/cpython | python__cpython-124490 | # Incorrect optimization in itertools.tee()
**Bug description**:
To save a memory allocation, the code path for a tee-in-a-tee incorrectly reuses the outer tee object as the first tee object in the result tuple. This is incorrect. All tee objects in the result tuple should have the same behavior. They are supposed to be "n independent iterators". However, the first one is not independent and it has different behaviors from the others. This is an unfortunate side-effect of an early incorrect optimization. I've now seen this affect real code. It is surprising, unhelpful, undocumented, and hard to debug.
**Demonstration**:
```python
from itertools import tee
def demo(i):
it = iter('abcdefghi')
[outer_tee] = tee(it, 1)
inner_tee = tee(outer_tee, 10)[i]
return next(inner_tee), next(outer_tee)
print('These should all give the same result:')
for i in range(10):
print(i, demo(i))
```
This outputs:
```
These should all give the same result:
0 ('a', 'b')
1 ('a', 'a')
2 ('a', 'a')
3 ('a', 'a')
4 ('a', 'a')
5 ('a', 'a')
6 ('a', 'a')
7 ('a', 'a')
8 ('a', 'a')
9 ('a', 'a')
```
There is a test for the optimization -- it wasn't an accident. However, the optimization itself is a bug against the published specification in the docs and against general expectations.
```
a, b = tee('abc')
c, d = tee(a)
self.assertTrue(a is c)
```
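For comparison, the roughly equivalent pure-Python tee (adapted from the recipe in the itertools documentation; `tee_sketch` is an illustrative name) produces fully independent iterators, so teeing a tee behaves the same as teeing the base iterator:

```python
from collections import deque

def tee_sketch(iterable, n=2):
    # Pure-Python tee: each returned generator has its own buffer and
    # is independent, even when one of them is itself teed.
    it = iter(iterable)
    deques = [deque() for _ in range(n)]

    def gen(mydeque):
        while True:
            if not mydeque:             # this consumer's buffer is empty
                try:
                    newval = next(it)   # fetch one value from the source
                except StopIteration:
                    return
                for d in deques:        # and load it into every buffer
                    d.append(newval)
            yield mydeque.popleft()

    return tuple(gen(d) for d in deques)

[outer_tee] = tee_sketch('abcdefghi', 1)
inner_tee = tee_sketch(outer_tee, 10)[0]
print(next(inner_tee), next(outer_tee))  # a b -- the expected result for every index
```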
<!-- gh-linked-prs -->
### Linked PRs
* gh-124490
* gh-125081
* gh-125153
<!-- /gh-linked-prs -->
| 909c6f718913e713c990d69e6d8a74c05f81e2c2 | fb6bd31cb74d2f7e7b525ee4fe9f45475fc94ce9 |
python/cpython | python__cpython-123883 | # Compiler should not need to know how to construct AST nodes
There is one place in the compiler where it copies a bit of AST and modifies it to add a base class to generic classes.
It would be nice if the compiler didn't need to know how to do this.
<!-- gh-linked-prs -->
### Linked PRs
* gh-123883
* gh-123886
* gh-123890
* gh-123891
<!-- /gh-linked-prs -->
| a2d0818c85c985a54f358e1d9ea7ce3eb0c5acca | 3597642ed57d184511ca2dbd1a382ffe8e280ac4 |
python/cpython | python__cpython-123950 | # Python 3.13 breaks circular imports during single phase init of extension module
Here's a bash script to reproduce:
```bash
printf '
#define PY_SSIZE_T_CLEAN
#include <Python.h>
static struct PyModuleDef nativemodule = {
PyModuleDef_HEAD_INIT,
.m_name = "native",
};
PyObject* module = NULL;
PyMODINIT_FUNC PyInit_native(void) {
if (module) {
Py_INCREF(module);
return module;
}
module = PyModule_Create(&nativemodule);
assert(module);
Py_XDECREF(PyImport_ImportModule("non_native"));
PySys_WriteStdout("hello from native\\n");
return module;
}
' > native.c
printf 'import native # circular import' > non_native.py
printf '
from setuptools import setup, Extension
setup(name="native", ext_modules=[Extension("native", sources=["native.c"])])
' > setup.py
python setup.py build_ext --inplace
python -c 'import native'
```
This produces:
```
Traceback (most recent call last):
File "<string>", line 1, in <module>
import native
File "<frozen importlib._bootstrap>", line 1360, in _find_and_load
File "<frozen importlib._bootstrap>", line 1331, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 921, in _load_unlocked
File "<frozen importlib._bootstrap>", line 819, in module_from_spec
File "<frozen importlib._bootstrap>", line 782, in _init_module_attrs
SystemError: extension module 'native' is already cached
```
and an assertion failure in debug builds.
The new error comes from https://github.com/python/cpython/pull/118532 cc @ericsnowcurrently
It's unclear whether this breaking change is intentional, given no mention in documentation and the assert.
This affects the mypyc transpiler, see https://github.com/python/mypy/issues/17748 for details and for an end-to-end repro. This means for instance that mypy and black cannot currently be compiled for Python 3.13. Changing mypyc to use multi-phase init is not an easy change because mypyc uses globals.
### CPython versions tested on:
3.13
### Operating systems tested on:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-123950
* gh-124273
<!-- /gh-linked-prs -->
| aee219f4558dda619bd86e4b0e028ce47a5e4b77 | 3e36e5aef18e326f5d1081d73ee8d8fefa1d82f8 |
python/cpython | python__cpython-126552 | # Support `wasm32-wasip1` as the default WASI target triple
# Bug report
### Bug description:
https://bytecodealliance.zulipchat.com/#narrow/stream/219900-wasi/topic/Is.20.60wasm32-wasi.60.20retired.20in.20preference.20for.20.60wasm32-wasip1.60.3F points out that `wasm32-wasi` is soft deprecated in favour of `wasm32-wasip1` so that `wasm32-wasi` can be claimed for WASI 1.0. As such, we should move over to making `-wasip1` the target triple.
This will require https://github.com/python/cpython/pull/123042 as well as a version of https://github.com/brettcannon/cpython/commit/2959f15d94d44e48a1efa3bd616c26ad5683a1ee .
### CPython versions tested on:
3.13
### Operating systems tested on:
Other
<!-- gh-linked-prs -->
### Linked PRs
* gh-126552
* gh-126561
<!-- /gh-linked-prs -->
| bbe9b21d06c192a616bc1720ec8f7d4ccc16cab8 | 1f777396f52a4cf7417f56097f10add8042295f4 |
python/cpython | python__cpython-124396 | # The new REPL fails on Ctrl-C after search
How to reproduce:
* Run the interactive interpreter.
* Press `<Ctrl-R>`. You should see the "```(r-search `')```" prompt.
* Press `<Right>`. You should see the normal REPL prompt "`>>>` ".
* Press `<Ctrl-C>`. The REPL quits with a traceback.
```
>>> Traceback (most recent call last):nes[2::2]))
File "/home/serhiy/py/cpython/Lib/_pyrepl/simple_interact.py", line 148, in run_multiline_interactive_console
statement = multiline_input(more_lines, ps1, ps2)
File "/home/serhiy/py/cpython/Lib/_pyrepl/readline.py", line 389, in multiline_input
return reader.readline()
~~~~~~~~~~~~~~~^^
File "/home/serhiy/py/cpython/Lib/_pyrepl/reader.py", line 800, in readline
self.handle1()
~~~~~~~~~~~~^^
File "/home/serhiy/py/cpython/Lib/_pyrepl/reader.py", line 755, in handle1
self.console.wait(100)
~~~~~~~~~~~~~~~~~^^^^^
File "/home/serhiy/py/cpython/Lib/_pyrepl/unix_console.py", line 426, in wait
or bool(self.pollob.poll(timeout))
~~~~~~~~~~~~~~~~^^^^^^^^^
KeyboardInterrupt
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/serhiy/py/cpython/Lib/runpy.py", line 198, in _run_module_as_main
return _run_code(code, main_globals, None,
"__main__", mod_spec)
File "/home/serhiy/py/cpython/Lib/runpy.py", line 88, in _run_code
exec(code, run_globals)
~~~~^^^^^^^^^^^^^^^^^^^
File "/home/serhiy/py/cpython/Lib/_pyrepl/__main__.py", line 6, in <module>
__pyrepl_interactive_console()
~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^
File "/home/serhiy/py/cpython/Lib/_pyrepl/main.py", line 59, in interactive_console
run_multiline_interactive_console(console)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^
File "/home/serhiy/py/cpython/Lib/_pyrepl/simple_interact.py", line 165, in run_multiline_interactive_console
r.pop_input_trans()
~~~~~~~~~~~~~~~~~^^
File "/home/serhiy/py/cpython/Lib/_pyrepl/reader.py", line 559, in pop_input_trans
self.input_trans = self.input_trans_stack.pop()
~~~~~~~~~~~~~~~~~~~~~~~~~~^^
IndexError: pop from empty list
```
<!-- gh-linked-prs -->
### Linked PRs
* gh-124396
* gh-124530
<!-- /gh-linked-prs -->
| c1600c78e4565b6bb558ade451abe2648ba4dd0a | 28efeefab7d577ea4fb6e3f6e82f903f2aee271d |
python/cpython | python__cpython-123859 | # test_sqlite3/test_dump.py Fails if SQLite Foreign Keys Enabled
# Bug report
### Bug description:
Just compiled 3.13.0rc2 today and ran into this issue with test_sqlite3. [A commit](https://github.com/python/cpython/commit/de777e490fb356d7bcc7c907141c20a5135d97df) added an intentional foreign key violation in test_dump.py, but the test doesn't consider that SQLite may have been compiled with foreign keys enabled by default. Just needs to have that turned off for the test.
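A minimal sketch of the fix: disable foreign-key enforcement on the test connection before creating the intentional violation (table names here are illustrative, not the ones used in test_dump.py):

```python
import sqlite3

cx = sqlite3.connect(":memory:")
# SQLite builds compiled with SQLITE_DEFAULT_FOREIGN_KEYS=1 enforce
# foreign keys by default; turn enforcement off for this connection
# so an intentional violation can be inserted.
cx.execute("PRAGMA foreign_keys = OFF")

cx.execute("CREATE TABLE parent (id INTEGER PRIMARY KEY)")
cx.execute("CREATE TABLE child (id INTEGER PRIMARY KEY, "
           "parent_id INTEGER REFERENCES parent(id))")
# With enforcement off, this succeeds even though parent 99 does not exist:
cx.execute("INSERT INTO child VALUES (1, 99)")
print(cx.execute("PRAGMA foreign_keys").fetchone())  # (0,)
```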
### CPython versions tested on:
3.13
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-123859
* gh-125163
<!-- /gh-linked-prs -->
| 14b44c58e195c4cdee6594a4aacf8bf95b19fcd7 | 37228bd16e3ef97d32da08848552f7ef016d68ab |
python/cpython | python__cpython-123854 | # Testing math_testcases.txt lacks support for signed zeros
# Feature or enhancement
### Proposal:
This file is tested using the result_check() helper:
https://github.com/python/cpython/blob/beee91cdcc0dbecab252f7c5c7c51e2adb8edc26/Lib/test/test_math.py#L175
which lacks a check for the sign of zero (while some entries in the data file have signed zeros). Cf. cmath_testcases.txt, which is tested with rAssertAlmostEqual(); that helper *does* have such a special case:
https://github.com/python/cpython/blob/beee91cdcc0dbecab252f7c5c7c51e2adb8edc26/Lib/test/test_cmath.py#L124-L131
I think the first helper should be adjusted to do the same. I'll work on a patch.
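A sketch of the kind of sign-aware zero comparison the cmath helper performs, built on math.copysign (the helper name is illustrative):

```python
import math

def assert_same_zero_sign(expected: float, actual: float) -> None:
    # The special case rAssertAlmostEqual() has and result_check()
    # lacks: when both values are zero, also require that their signs
    # agree, because 0.0 == -0.0 is True under ordinary comparison.
    if expected == 0.0 == actual:
        if math.copysign(1.0, expected) != math.copysign(1.0, actual):
            raise AssertionError(
                f"zero signs differ: {expected!r} vs {actual!r}")

assert_same_zero_sign(0.0, 0.0)    # passes
assert_same_zero_sign(-0.0, -0.0)  # passes
# assert_same_zero_sign(-0.0, 0.0) would raise AssertionError
```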
### Has this already been discussed elsewhere?
No response given
### Links to previous discussion of this feature:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-123854
* gh-124161
* gh-124162
* gh-124171
* gh-124186
* gh-124187
<!-- /gh-linked-prs -->
| 28aea5d07d163105b42acd81c1651397ef95ea57 | 14e5bdceff45e6e789e1f838b96988946c75b0f4 |
python/cpython | python__cpython-123835 | # Add `symtable` to the list of modules with a CLI
Documenting the `symtable` CLI was first not considered since it previously included self-tests. This is no more the case and the module now features a full-fledged CLI (only since 3.13).
As such, I suggest adding it to the list of modules documented in `cmdline.rst`.
---
* Related: https://github.com/python/cpython/pull/109112
* Specialization of: #109435
<!-- gh-linked-prs -->
### Linked PRs
* gh-123835
* gh-123862
<!-- /gh-linked-prs -->
| 32bc2d61411fb71bdc84eb29c6859517e7f25f36 | 05a401a5c3e385286a346df6f0b463b35df871b2 |
python/cpython | python__cpython-126182 | # `getaddrinfo` fails on Illumos, inconsistent elsewhere if port is, but type & protocol are not specified
# Bug report
### Bug description:
I have discovered that
1. On ~~Solaris~~ OpenIndiana, if you don't specify at least socket type **or** protocol for the service name, address resolution fails (see below).
2. Linux & MacOS happily ignore this and continue with the resolution anyway.
3. Specifying the socket family doesn't help.
I've filed a PR to fix the problem at the call site of the software that exposed it (miguelgrinberg/Flask-SocketIO#2088), but the maintainer argues that it's a bug in CPython and is unwilling to accept a patch that specifies the protocol.
His position that Python's high-level APIs should provide consistent behavior across platforms is understandable, although one could argue that `getaddrinfo("127.0.0.1", 5000)` is a borderline invalid use of the API. However, the documentation as it stands suggests that `None` can be passed, but doesn't explain why you would want to do so, and the paragraph below suggests that keyword arguments act as AND filters and can be safely omitted.
Unfortunately, I don't see any fantastic way to fix this in CPython that would provide a consistent interface across platforms. I think we could do a `configure` check to detect this `getaddrinfo` behavior (might also be the case on AIX, Tru64, etc., although it's just a speculation) and then call the C function twice at Python level with (`type=socket.SOCK_STREAM` and `type=socket.SOCK_DGRAM`) to merge the results (which is already done for another reason):
https://github.com/python/cpython/blob/beee91cdcc0dbecab252f7c5c7c51e2adb8edc26/Lib/socket.py#L964
Or, if you don't mind, we can just always do it where protocol AND socket type are unspecified, but port is specified and is numeric. This is very simple, the overhead (I think) is negligible, and it is consistent across platforms.
I would appreciate an opinion from Python developers and/or Solaris engineers before I put more work into this.
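A minimal sketch of the Python-level fallback described above: call getaddrinfo() once per socket type and merge the deduplicated results (the helper name `getaddrinfo_merged` is illustrative; a real patch would also gate this on the platform and on port being numeric):

```python
import socket

def getaddrinfo_merged(host, port):
    # Fallback for platforms whose getaddrinfo() rejects a numeric
    # port when neither type nor proto is specified: query once per
    # socket type and merge, preserving order and dropping duplicates.
    results = []
    for socktype in (socket.SOCK_STREAM, socket.SOCK_DGRAM):
        try:
            res = socket.getaddrinfo(host, port, type=socktype)
        except socket.gaierror:
            continue
        for entry in res:
            if entry not in results:
                results.append(entry)
    return results

for entry in getaddrinfo_merged("127.0.0.1", 5000):
    print(entry)
```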
### OpenIndiana
```pytb
$ python3
Python 3.9.19 (main, Mar 26 2024, 20:30:24) [GCC 13.2.0] on sunos5
Type "help", "copyright", "credits" or "license" for more information.
>>> socket.getaddrinfo("127.0.0.1", 5000)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/python3.9/socket.py", line 954, in getaddrinfo
for res in _socket.getaddrinfo(host, port, family, type, proto, flags):
socket.gaierror: [Errno 9] service name not available for the specified socket type
>>> socket.getaddrinfo("127.0.0.1", 5000, family=socket.AF_INET)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/python3.9/socket.py", line 954, in getaddrinfo
for res in _socket.getaddrinfo(host, port, family, type, proto, flags):
socket.gaierror: [Errno 9] service name not available for the specified socket type
>>> socket.getaddrinfo("127.0.0.1", None)
[(<AddressFamily.AF_INET: 2>, 0, 0, '', ('127.0.0.1', 0))]
>>> socket.getaddrinfo("127.0.0.1", 5000, type=socket.SOCK_STREAM)
[(<AddressFamily.AF_INET: 2>, <SocketKind.SOCK_STREAM: 2>, 6, '', ('127.0.0.1', 5000))]
>>> socket.getaddrinfo("127.0.0.1", 5000, proto=socket.IPPROTO_TCP)
[(<AddressFamily.AF_INET: 2>, <SocketKind.SOCK_STREAM: 2>, 6, '', ('127.0.0.1', 5000))]
```
### macOS
```pycon
% python3
Python 3.12.5 (main, Aug 6 2024, 19:08:49) [Clang 15.0.0 (clang-1500.3.9.4)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> socket.getaddrinfo("127.0.0.1", 5000)
[(<AddressFamily.AF_INET: 2>, <SocketKind.SOCK_DGRAM: 2>, 17, '', ('127.0.0.1', 5000)), (<AddressFamily.AF_INET: 2>, <SocketKind.SOCK_STREAM: 1>, 6, '', ('127.0.0.1', 5000))]
```
### AIX
```pycon
-bash-5.1$ python3
Python 3.9.19 (main, Apr 5 2024, 05:08:51)
[GCC 10.3.0] on aix
Type "help", "copyright", "credits" or "license" for more information.
>>> socket.getaddrinfo("127.0.0.1", 5000)
[(<AddressFamily.AF_INET: 2>, <SocketKind.SOCK_DGRAM: 2>, 17, '', ('127.0.0.1', 5000))]
>>> socket.getaddrinfo("127.0.0.1", 5000, type=socket.SOCK_STREAM)
[(<AddressFamily.AF_INET: 2>, <SocketKind.SOCK_STREAM: 1>, 6, '', ('127.0.0.1', 5000))]
```
### Solaris
```pycon
$ python3
Python 3.9.19 (main, Sep 23 2024, 16:24:38)
[GCC 13.2.0] on sunos5
Type "help", "copyright", "credits" or "license" for more information.
>>> socket.getaddrinfo("127.0.0.1", 5000)
[(<AddressFamily.AF_INET: 2>, 0, 0, '', ('127.0.0.1', 5000))]
>>> socket.getaddrinfo("127.0.0.1", 5000, family=socket.AF_INET)
[(<AddressFamily.AF_INET: 2>, 0, 0, '', ('127.0.0.1', 5000))]
>>> socket.getaddrinfo("127.0.0.1", None)
[(<AddressFamily.AF_INET: 2>, 0, 0, '', ('127.0.0.1', 0))]
>>> socket.getaddrinfo("127.0.0.1", 5000, type=socket.SOCK_STREAM)
[(<AddressFamily.AF_INET: 2>, <SocketKind.SOCK_STREAM: 2>, 6, '', ('127.0.0.1', 5000))]
>>> socket.getaddrinfo("127.0.0.1", 5000, proto=socket.IPPROTO_TCP)
[(<AddressFamily.AF_INET: 2>, <SocketKind.SOCK_STREAM: 2>, 6, '', ('127.0.0.1', 5000))]
```
### FreeBSD 14.1
```pycon
[zaytsev@freebsd ~]$ python3
Python 3.11.9 (main, Oct 5 2024, 11:59:33) [Clang 18.1.5 (https://github.com/llvm/llvm-project.git llvmorg-18.1.5-0-g617a15 on freebsd14
Type "help", "copyright", "credits" or "license" for more information.
>>> socket.getaddrinfo("127.0.0.1", 5000)
[(<AddressFamily.AF_INET: 2>, <SocketKind.SOCK_DGRAM: 2>, 17, '', ('127.0.0.1', 5000)), (<AddressFamily.AF_INET: 2>, <SocketKind.SOCK_STREAM: 1>, 6, '', ('127.0.0.1', 5000)), (<AddressFamily.AF_INET: 2>, <SocketKind.SOCK_SEQPACKET: 5>, 132, '', ('127.0.0.1', 5000))]
>>> socket.getaddrinfo("127.0.0.1", None)
[(<AddressFamily.AF_INET: 2>, <SocketKind.SOCK_DGRAM: 2>, 17, '', ('127.0.0.1', 0)), (<AddressFamily.AF_INET: 2>, <SocketKind.SOCK_STREAM: 1>, 6, '', ('127.0.0.1', 0)), (<AddressFamily.AF_INET: 2>, <SocketKind.SOCK_SEQPACKET: 5>, 132, '', ('127.0.0.1', 0))]
```
### Linux (Fedora)
```pycon
zaytsev@fedora:~$ python3
Python 3.12.4 (main, Jun 7 2024, 00:00:00) [GCC 14.1.1 20240607 (Red Hat 14.1.1-5)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> socket.getaddrinfo("127.0.0.1", 5000)
[(<AddressFamily.AF_INET: 2>, <SocketKind.SOCK_STREAM: 1>, 6, '', ('127.0.0.1', 5000)), (<AddressFamily.AF_INET: 2>, <SocketKind.SOCK_DGRAM: 2>, 17, '', ('127.0.0.1', 5000)), (<AddressFamily.AF_INET: 2>, <SocketKind.SOCK_RAW: 3>, 0, '', ('127.0.0.1', 5000))]
```
#### Summary
* Linux: STREAM, DGRAM, RAW
* macOS: STREAM, DGRAM
* OpenBSD: STREAM, DGRAM
* FreeBSD with Cheri: STREAM, DGRAM, SEQPACKET
* AIX: DGRAM (but STREAM supported, obviously!)
* Solaris/Illumos: exception
/cc @kulikjak
### CPython versions tested on:
3.9
### Operating systems tested on:
Other
<!-- gh-linked-prs -->
### Linked PRs
* gh-126182
* gh-126824
* gh-126825
<!-- /gh-linked-prs -->
| ff0ef0a54bef26fc507fbf9b7a6009eb7d3f17f5 | e0692f11650acb6c2eed940eb94650b4703c072e |
python/cpython | python__cpython-123827 | # Unused function warnings during mimalloc build on NetBSD
### Bug description:
```sh
gcc -pthread -c -fno-strict-overflow -Wsign-compare -g -Og -Wall -O2 -fstack-protector-strong -Wtrampolines -std=c11 -Wextra -Wno-unused-parameter -Wno-missing-field-initializers -Wstrict-prototypes -Werror=implicit-function-declaration -fvisibility=hidden -I./Include/internal -I./Include/internal/mimalloc -I. -I./Include -DPy_BUILD_CORE -o Objects/setobject.o Objects/setobject.c
--- Objects/obmalloc.o ---
In file included from Objects/mimalloc/prim/prim.c:22,
from Objects/mimalloc/static.c:37,
from Objects/obmalloc.c:22:
Objects/mimalloc/prim/unix/prim.c:76:12: warning: 'mi_prim_access' defined but not used [-Wunused-function]
76 | static int mi_prim_access(const char *fpath, int mode) {
| ^~~~~~~~~~~~~~
Objects/mimalloc/prim/unix/prim.c:73:12: warning: 'mi_prim_close' defined but not used [-Wunused-function]
73 | static int mi_prim_close(int fd) {
| ^~~~~~~~~~~~~
Objects/mimalloc/prim/unix/prim.c:70:16: warning: 'mi_prim_read' defined but not used [-Wunused-function]
70 | static ssize_t mi_prim_read(int fd, void* buf, size_t bufsize) {
| ^~~~~~~~~~~~
Objects/mimalloc/prim/unix/prim.c:67:12: warning: 'mi_prim_open' defined but not used [-Wunused-function]
67 | static int mi_prim_open(const char* fpath, int open_flags) {
| ^~~~~~~~~~~~
```
OS: `NetBSD 10.0 amd64`
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Other
<!-- gh-linked-prs -->
### Linked PRs
* gh-123827
* gh-123875
<!-- /gh-linked-prs -->
| 4a6b1f179667e2a8c6131718eb78a15f726e047b | 1a9d8917a38e3eb190506025b9444730ed821449 |
python/cpython | python__cpython-123824 | # test_posix fails on NetBSD: Operation Not Supported for posix_fallocate
### Bug description:
```sh
$ ./python -m test test_posix
Using random seed: 3412046247
0:00:00 load avg: 0.00 Run 1 test sequentially in a single process
0:00:00 load avg: 0.00 [1/1] test_posix
test test_posix failed -- Traceback (most recent call last):
File "/home/blue/cpython/Lib/test/test_posix.py", line 407, in test_posix_fallocate
posix.posix_fallocate(fd, 0, 10)
~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^
OSError: [Errno 45] Operation not supported
test_posix failed (1 error)
== Tests result: FAILURE ==
1 test failed:
test_posix
Total duration: 4.7 sec
Total tests: run=167 skipped=41
Total test files: run=1/1 failed=1
Result: FAILURE
```
OS: `NetBSD 10.0 amd64`
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Other
<!-- gh-linked-prs -->
### Linked PRs
* gh-123824
* gh-123864
* gh-123865
<!-- /gh-linked-prs -->
| df4f0cbfad8a1ed0146cabd30d01efd135d4d048 | b2a8c38bb20e0a201bbc60f66371ee4e406f6dae |
python/cpython | python__cpython-123829 | # Issue with Round Function in Python
# Bug report
### Bug description:
**Description of the Issue:**
The `round` function does not seem to handle certain floating-point numbers correctly. For example, when rounding the number -1357.1357 to negative 4 decimal places, the function returns -0.0 instead of the expected 0.0.
**Steps to Reproduce:**
1. Open a Python interpreter.
2. Execute the following code:
```
print(round(-1357.1357, -4))
```
3. Observe the output.
Expected Result: The output should be 0.0.
Actual Result: The output is -0.0.
**Environment Details Tested:**
--Python Version: 3.12.5, Operating System: Windows 11.
--Python Version: 3.12.3, Operating System: Ubuntu.
Additional Information: I have verified this issue on multiple machines and with different Python versions. The issue persists across all tested environments.
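For context, -0.0 compares equal to 0.0, so the difference is only visible in the sign of the repr; a small sketch showing this, plus how adding positive zero normalizes the sign for display:

```python
x = round(-1357.1357, -4)
print(x)            # -0.0 on the versions reported above
print(x == 0.0)     # True: -0.0 and 0.0 compare equal

# Adding positive zero normalizes the sign under IEEE 754 default
# rounding, turning -0.0 into 0.0 while leaving nonzero values alone:
print(x + 0.0)      # 0.0
print(-12.5 + 0.0)  # -12.5
```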
### CPython versions tested on:
3.12, 3.13
### Operating systems tested on:
Linux, Windows
<!-- gh-linked-prs -->
### Linked PRs
* gh-123829
* gh-123938
* gh-123939
* gh-124007
* gh-124048
* gh-124049
<!-- /gh-linked-prs -->
| d2b9b6f919e92184420c8e13d078e83447ce7917 | a1dbf2ea69acc6ccee6292709af1dadd55c068be |
python/cpython | python__cpython-123804 | # Support arbitrary code page encodings on Windows
# Feature or enhancement
Python supports encodings that correspond to some code pages on Windows, like cp437 or cp1252. But each such encoding currently has to be implemented individually, and there are code pages that have no corresponding codec implemented in Python.
But there are functions that allow to encode or decode using arbitrary code page: `codecs.code_page_encode()` and `codecs.code_page_decode()`. The only step left is to make them available as encodings, so they could be used in `str.encode()` and `bytes.decode()`.
Currently this is already used for the current Windows (ANSI) code page. If the cpXXX encoding is not implemented in Python and XXX matches the value returned by `GetACP()`, "cpXXX" will be made an alias to the "mbcs" codec.
I propose to add support for arbitrary cpXXX encodings on Windows. If such encoding is not implemented directly, fall back to use the Windows-specific API.
<!-- gh-linked-prs -->
### Linked PRs
* gh-123804
<!-- /gh-linked-prs -->
| f7ef0203d44acb21ab1c5ff0c3e15f9727862760 | 8fe1926164932f868e6e907ad72a74c2f2372b07 |
python/cpython | python__cpython-123806 | # Availability of `ptsname_r` is not checked at runtime on macOS
# Bug report
### Bug description:
Use of this was recently added in Python 3.13 but it is only available in macOS 10.13.4+ and there is not a proper runtime check for availability. This means that in `python-build-standalone` we need to ban the symbol instead of allowing proper weak linking.
See discussion at https://github.com/indygreg/python-build-standalone/pull/319#discussion_r1747174338
Similar to https://github.com/python/cpython/issues/75782
### CPython versions tested on:
3.13
### Operating systems tested on:
macOS
<!-- gh-linked-prs -->
### Linked PRs
* gh-123806
* gh-124270
<!-- /gh-linked-prs -->
| 3e36e5aef18e326f5d1081d73ee8d8fefa1d82f8 | aae126748ff3d442fdbcd07933855ffd7ae6f59c |
python/cpython | python__cpython-123801 | # `secrets.randbits` returns only nonnegative integers
# Documentation
The docstring of `secrets.randbits` reads, in its entirety, "Return an int with *k* random bits." In fact, it only generates nonnegative integers, like its sister function `random.getrandbits`.
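The documented behavior can be checked directly: randbits(k) always returns a nonnegative int in range(2**k), never a negative value.

```python
import secrets

# secrets.randbits(k) returns a nonnegative int with k random bits,
# i.e. a value in range(2**k), matching random.getrandbits().
for _ in range(1000):
    n = secrets.randbits(8)
    assert 0 <= n < 2 ** 8

print(secrets.randbits(128).bit_length() <= 128)  # True
```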
<!-- gh-linked-prs -->
### Linked PRs
* gh-123801
* gh-123830
* gh-123831
<!-- /gh-linked-prs -->
| beee91cdcc0dbecab252f7c5c7c51e2adb8edc26 | 11fa11987990eb7ed75b1597cf2e8237f5991c57 |
python/cpython | python__cpython-123036 | # `test_pkgutil` should clean up `spam` module
When running the tests with --randomize, as is done by the buildbots, @mhsmith came across this failure:
```
======================================================================
ERROR: test_find_class (test.test_pickle.CUnpicklerTests.test_find_class)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/Users/msmith/git/python/cpython/Lib/test/pickletester.py", line 1265, in test_find_class
self.assertRaises(ModuleNotFoundError, unpickler.find_class, 'spam', 'log')
~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/msmith/git/python/cpython/Lib/unittest/case.py", line 804, in assertRaises
return context.handle('assertRaises', args, kwargs)
~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/msmith/git/python/cpython/Lib/unittest/case.py", line 238, in handle
callable_obj(*args, **kwargs)
~~~~~~~~~~~~^^^^^^^^^^^^^^^^^
AttributeError: module 'spam' has no attribute 'log'
======================================================================
ERROR: test_find_class (test.test_pickle.PyUnpicklerTests.test_find_class)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/Users/msmith/git/python/cpython/Lib/test/pickletester.py", line 1265, in test_find_class
self.assertRaises(ModuleNotFoundError, unpickler.find_class, 'spam', 'log')
~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/msmith/git/python/cpython/Lib/unittest/case.py", line 804, in assertRaises
return context.handle('assertRaises', args, kwargs)
~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/msmith/git/python/cpython/Lib/unittest/case.py", line 238, in handle
callable_obj(*args, **kwargs)
~~~~~~~~~~~~^^^^^^^^^^^^^^^^^
File "/Users/msmith/git/python/cpython/Lib/pickle.py", line 1620, in find_class
return getattr(sys.modules[module], name)
AttributeError: module 'spam' has no attribute 'log'
----------------------------------------------------------------------
Ran 857 tests in 2.954s
FAILED (errors=2, skipped=31)
test test_pickle failed
```
This can be reproduced by running only test_pkgutil and test_pickle, in that order. test_pkgutil leaves behind a `spam` module in `sys.modules`, while test_pickle assumes there is no such module.
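A minimal sketch of the fix: any test that installs a fake module into sys.modules should register a cleanup so later test files (like test_pickle) see a pristine sys.modules. The test class below is illustrative, not the actual test_pkgutil code:

```python
import io
import sys
import types
import unittest

class FakeModuleTest(unittest.TestCase):
    def test_with_fake_module(self):
        spam = types.ModuleType("spam")
        sys.modules["spam"] = spam
        # Guarantee removal even if the test body fails:
        self.addCleanup(sys.modules.pop, "spam", None)
        self.assertIs(sys.modules["spam"], spam)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(FakeModuleTest)
result = unittest.TextTestRunner(stream=io.StringIO()).run(suite)
print(result.wasSuccessful())        # True
print("spam" in sys.modules)         # False: the cleanup removed it
```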
<!-- gh-linked-prs -->
### Linked PRs
* gh-123036
* gh-123781
* gh-123782
<!-- /gh-linked-prs -->
| eca3fe40c251d51964172dd4e6e9c7d0d85d7d4a | 782a076362ca1836c2052652b1a3a352dec45186 |
python/cpython | python__cpython-123757 | # Only support restart command in pdb when it's a command line usage
# Feature or enhancement
### Proposal:
Now when we have a `breakpoint()` in the source code and enter pdb, we can do `restart` and raise a `Restart` exception - which in most cases will just stop the program with an exception. In other, rarer cases, it might be even worse.
It simply does not make sense to allow a `restart` command if pdb is not brought up from command line - that exception is only caught in command line usage. So, we should only allow `restart` in command line usage and do nothing (except for an error) in other cases.
### Has this already been discussed elsewhere?
This is a minor feature, which does not need previous discussion elsewhere
### Links to previous discussion of this feature:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-123757
<!-- /gh-linked-prs -->
| 28efeefab7d577ea4fb6e3f6e82f903f2aee271d | da5855e99a8c2d6ef2bb20124d2ebb862dbb971f |
python/cpython | python__cpython-123779 | # `Include/internal/pycore_code.h` uses `static_assert()` but does not include `pymacro.h`
# Bug report
### Bug description:
`Include/internal/pycore_code.h` uses `static_assert()` but does not include `pymacro.h`; should it?
AFAIU `pymacro.h` makes sure that `static_assert()` is correctly defined for all supported compilers and platforms. Not including it in `Includes/internal/pycore_code.h` implicitly relies on `pymacro.h` being included before or via transitive includes.
I've found this while investigating a Cython extension module build failure. Cython includes the private header and thus requires `static_assert()` to be defined.
### CPython versions tested on:
3.13, CPython main branch
### Operating systems tested on:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-123779
* gh-123785
<!-- /gh-linked-prs -->
| ef4b69d2becf49daaea21eb04effee81328a0393 | e95984826eb3cdb3a3baedb2ccea35e11e9f8161 |
python/cpython | python__cpython-130537 | # Document caveats of `zipfile.Path` around name sanitization
> Advertise that `zipfile.Path` does not do any name sanitization and it's the responsibility of the caller to check the inputs, etc.
I think documenting the caveats would be good.
_Originally posted by @obfusk in https://github.com/python/cpython/issues/123270#issuecomment-2330381740_
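A sketch of the caller-side sanitization the documentation could point to: rejecting member names that are absolute or escape the root via "..". The helper `is_suspicious_name` is illustrative, not a zipfile API:

```python
import io
import posixpath
import zipfile

def is_suspicious_name(name: str) -> bool:
    # zipfile.Path performs no name sanitization, so callers handling
    # untrusted archives should reject absolute names, drive-letter
    # prefixes, and names that traverse out of the root with "..".
    if name.startswith(("/", "\\")) or ":" in name.split("/", 1)[0]:
        return True
    parts = posixpath.normpath(name).split("/")
    return ".." in parts

buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("docs/ok.txt", b"fine")
    zf.writestr("../evil.txt", b"escapes the root")  # writestr does not sanitize

with zipfile.ZipFile(buf) as zf:
    for name in zf.namelist():
        print(name, "->", "REJECT" if is_suspicious_name(name) else "ok")
```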
<!-- gh-linked-prs -->
### Linked PRs
* gh-130537
* gh-130986
* gh-130987
<!-- /gh-linked-prs -->
| a3990df6121880e8c67824a101bb1316de232898 | edd1eca336976b3431cf636aea87f08a40c94935 |
python/cpython | python__cpython-123719 | # Build error: implicit declaration of 'explicit_memset' in NetBSD
### Bug description:
```sh
-bash-5.2$ make
```
```sh
./Modules/_hacl/Lib_Memzero0.c: In function 'Lib_Memzero0_memzero0':
./Modules/_hacl/Lib_Memzero0.c:45:5: error: implicit declaration of function 'explicit_memset'; did you mean '__builtin_memset'? [-Werror=implicit-function-declaration]
45 | explicit_memset(dst, 0, len_);
| ^~~~~~~~~~~~~~~
cc1: some warnings being treated as errors
*** Error code 1
Stop.
make: stopped in /home/seth/cpython
```
OS: `NetBSD 10.0 amd64`
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Other
<!-- gh-linked-prs -->
### Linked PRs
* gh-123719
<!-- /gh-linked-prs -->
| f8f7500168c94330e094aebfa38798d949466328 | 84ad264ce602fb263a46a4536377bdc830eea81e |
python/cpython | python__cpython-123717 | # Configure script fails with 'Bad substitution' error on NetBSD
### Bug description:
```sh
-bash-5.2$ ./configure
```
```
checking for git... found
checking build system type... x86_64-unknown-netbsd10.0
checking host system type... x86_64-unknown-netbsd10.0
checking for Python interpreter freezing... ./_bootstrap_python
checking for python3.14... no
checking for python3.13... no
checking for python3.12... no
checking for python3.11... no
checking for python3.10... no
checking for python3... no
checking for python... no
checking Python for regen version... missing
checking for pkg-config... no
checking MACHDEP... "netbsd10"
checking for --enable-universalsdk... no
checking for --with-universal-archs... no
checking for --with-app-store-compliance... not patching for app store compliance
./configure: 4512: Syntax error: Bad substitution
```
OS: `NetBSD 10.0 amd64`
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Other
<!-- gh-linked-prs -->
### Linked PRs
* gh-123717
* gh-123752
<!-- /gh-linked-prs -->
| 42f52431e9961d5236b33a68af16cca07b74d02c | b5aa271f86229f126c21805ff2bd3b95526818a4 |
python/cpython | python__cpython-123701 | # Update tested OpenSSL branches in `Tools/ssl/multissltests.py` and CI
# Feature or enhancement
### Proposal:
OpenSSL 3.3 has been out long enough to be up to v3.3.2 already; we should start testing with it, though it will probably never be included in binary releases.
OpenSSL 1.1.1 has been out of support for quite a while as well; it should be moved to the "old" set or removed, and probably isn't worth testing in CI anymore.
### Has this already been discussed elsewhere?
This is a minor feature, which does not need previous discussion elsewhere
### Links to previous discussion of this feature:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-123701
* gh-123702
* gh-123704
<!-- /gh-linked-prs -->
| d83e30caddcbf9482273743d287577517ec735b7 | 56b00f4705634af2861a8aa9c2eb5769012220f0 |
python/cpython | python__cpython-123697 | # tsan: fix race:sock_recv_impl suppressions for free-thread building
# Feature or enhancement
### Proposal:
The root cause of this race is that we use the same socket on both the server and client side in our test. I think this is wrong.
FYI https://github.com/python/cpython/blob/main/Lib/test/test_socket.py#L4817-L4825
I think we just need to make a small patch for the test case.
### Has this already been discussed elsewhere?
No response given
### Links to previous discussion of this feature:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-123697
<!-- /gh-linked-prs -->
| 8a46a2ec5032c5eb1bc3c6bb0fc2422ac9b2cc53 | 5a4fb7ea1c96f67dbb3df5d4ccaf3f66a1e19731 |
python/cpython | python__cpython-123689 | # Please upgrade bundled Expat to 2.6.3 (e.g. for the fixes to CVE-2024-45490, CVE-2024-45491 and CVE-2024-45492)
# Bug report
### Bug description:
Hi! :wave:
Please upgrade bundled Expat to 2.6.3 (e.g. for the fixes to CVE-2024-45490, CVE-2024-45491 and CVE-2024-45492).
- GitHub release: https://github.com/libexpat/libexpat/releases/tag/R_2_6_3
- Change log: https://github.com/libexpat/libexpat/blob/R_2_6_3/expat/Changes
The CPython issue for previous 2.6.2 was #116741 and the related merged main pull request was #117296, in case you want to have a look. The Dockerfile from comment https://github.com/python/cpython/pull/117296#pullrequestreview-1964486079 could be of help with raising confidence in a bump pull request when going forward.
Thanks in advance!
### CPython versions tested on:
3.8, 3.9, 3.10, 3.11, 3.12, 3.13, CPython main branch
### Operating systems tested on:
Linux, macOS, Windows, Other
<!-- gh-linked-prs -->
### Linked PRs
* gh-123689
* gh-123705
* gh-123706
* gh-123707
* gh-123708
* gh-123709
* gh-123710
* gh-123711
* gh-123712
<!-- /gh-linked-prs -->
| 40bdb0deee746e51c71c56329df21e5172fd8aa0 | d83e30caddcbf9482273743d287577517ec735b7 |
python/cpython | python__cpython-123703 | # `decimal.getcontext` crashes with `--with-decimal-contextvar=no` and `-X showrefcount`
# Crash report
### What happened?
cc @godlygeek, who helped me find this
I was investigating a reference leak on the asyncio test suite, and I came across `decimal.getcontext` leaking some memory. Here's what I was doing:
```
$ ./python -Xshowrefcount -c "import decimal; decimal.getcontext()"
[207 refs, 0 blocks]
```
This is a problem in itself, since this is a leak, but after configuring with `--with-decimal-contextvar=no`, then the above code fails with a negative reference count:
```
Python/gc.c:92: gc_decref: Assertion "gc_get_refs(g) > 0" failed: refcount is too small
Enable tracemalloc to get the memory block allocation traceback
object address : 0x5d3d7d336490
object refcount : 1
object type : 0x5d3d7d4acd50
object type name: decimal.Context
object repr : Context(prec=28, rounding=ROUND_HALF_EVEN, Emin=-999999, Emax=999999, capitals=1, clamp=0, flags=[], traps=[InvalidOperation, DivisionByZero, Overflow])
```
It seems both of these problems were caused by #123244
(@encukou, this needs `extension-modules`, `3.14` and `3.13`)
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Linux
### Output from running 'python -VV' on the command line:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-123703
* gh-123774
<!-- /gh-linked-prs -->
| 853588e24c907be158b3a08601797ea5b47a0eba | 8311b11800509c975023e062e2c336f417c5e4c0 |
python/cpython | python__cpython-123648 | # Dictionary and set documentation mixup
# Documentation
https://docs.python.org/3/reference/datamodel.html
3.2.7.1. Dictionaries
The documentation says the following:
"Dictionaries are mutable; they can be created by the {...} notation."
However any Python interpreter shows the following result:
```
>>> print(type({}))
<class 'dict'>
>>> print(type({...}))
<class 'set'>
```
I suspect the documentation mixes up the empty set and empty dictionary creation.
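For reference, the unambiguous spellings are: `{}` is always an empty dict, while an empty set can only be written as `set()` (there is no literal for it).

```python
# Empty-container spellings: {} is a dict literal; set() is the
# only way to write an empty set.
d = {}
s = set()
print(type(d).__name__)  # dict
print(type(s).__name__)  # set
```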
<!-- gh-linked-prs -->
### Linked PRs
* gh-123648
* gh-123653
* gh-123654
<!-- /gh-linked-prs -->
| cfbc841ef3c27b3e65d1223bf8fedf1f652137bc | 68fe5758bf1900ffdcdf7cd9e40f5018555a39d4 |
python/cpython | python__cpython-123635 | # Instance method performance issue with free threading (3.13rc1)
# Bug report
### Bug description:
```python
from threading import Thread
import time
class Cache:
def get(self, i):
return i
def get(i):
return i
client = Cache()
def bench_run(runner):
s = time.monotonic_ns()
for i in range(500000):
runner(i)
d = time.monotonic_ns() - s
print(f"{d:,}")
def bench_run_parallel(count, runner):
tl = []
for i in range(count):
t = Thread(target=bench_run, args=[runner])
tl.append(t)
t.start()
for t in tl:
t.join()
if __name__ == '__main__':
print("no threading class")
bench_run(client.get)
print("\nthreading class")
bench_run_parallel(6, client.get)
print("\nno threading function")
bench_run(get)
print("\nthreading function")
bench_run_parallel(6, get)
```
Processor: 2.6 GHz 6-Core Intel Core i7
```
PYTHON_CONFIGURE_OPTS='--disable-gil' pyenv install 3.13.0rc1
PYTHON_GIL=0 python benchmarks/run.py
```
Python 3.13.0rc1 experimental free-threading build (main, Sep 3 2024, 09:59:18) [Clang 15.0.0 (clang-1500.0.40.1)] on darwin
```
no threading class
34,626,126
threading class
313,298,115
318,659,929
319,484,384
321,183,006
320,650,898
321,589,085
no threading function
29,890,615
threading function
31,336,844
31,927,592
32,087,732
33,612,093
32,611,520
33,897,321
```
When using 6 threads, the instance method becomes approximately 10 times slower, while the plain function shows comparable performance.
Python 3.12.5 (main, Sep 2 2024, 15:11:13) [Clang 15.0.0 (clang-1500.0.40.1)] on darwin
```
no threading class
27,588,484
threading class
79,383,431
113,182,075
111,067,075
107,679,440
63,825,808
92,679,231
no threading function
26,200,075
threading function
76,489,523
105,636,550
121,436,736
106,408,462
104,572,564
70,686,101
```
### CPython versions tested on:
3.13
### Operating systems tested on:
macOS
<!-- gh-linked-prs -->
### Linked PRs
* gh-123635
<!-- /gh-linked-prs -->
| d00878b06a05ea04790813dba70b09cc1d11bf45 | 29b5323c4567dc7772e1d30a7ba1cbad52fe10a9 |
python/cpython | python__cpython-123617 | # A save function in the Turtle module
# Feature or enhancement
### Proposal:
I've used `turtle.py` quite a bit in my teaching, and a common question is how learners can save their files. Now, the turtle library only supports saving the drawings as postscript files by calling the cryptic command `turtle.getscreen().getcanvas().postscript(file=filename)`. Whenever a learner asks “How can I print what I made?”, I’ll have to show them that extremely scary command with method chaining -- not the best snippet to show someone who’s still struggling with functions…
If we instead had a `turtle.save("filename.ps")`, then I could tell the learners to use that function plus an online PostScript-to-PDF converter, which should be understandable to most learners.
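A minimal sketch of such a helper, wrapping the existing cryptic call chain (the name `save` and its signature are assumptions taken from the proposal, not an existing turtle API):

```python
def save(filename):
    """Save the current turtle drawing as a PostScript file.

    Hypothetical helper: wraps turtle.getscreen().getcanvas().postscript(),
    which is the only export mechanism turtle currently offers.
    """
    import turtle  # imported lazily; requires a Tk display when called
    turtle.getscreen().getcanvas().postscript(file=filename)
```

Learners would then only need `save("drawing.ps")` instead of the full method chain.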
### Has this already been discussed elsewhere?
I have already discussed this feature proposal on Discourse
### Links to previous discussion of this feature:
https://discuss.python.org/t/improving-the-turtle-library/61840
<!-- gh-linked-prs -->
### Linked PRs
* gh-123617
<!-- /gh-linked-prs -->
| 584cdf8d4140406b3676515332a26c856c02618b | 1f9d163850c43ba85193ef853986c5e96b168c8c |