| repo | instance_id | problem_statement | merge_commit | base_commit |
|---|---|---|---|---|
python/cpython | python__cpython-106921 | # Forgot to add error check?
#106411 adds the `_PyCompile_CleanDoc` function.
There's a call to `PyMem_Malloc`, but the result is not checked for `NULL`:
https://github.com/python/cpython/blob/009e8f084c4cbb1f43d40b24b7f71fb189bbe36b/Python/compile.c#L8055-L8063
Is there a need to check for a `NULL` value?
If so, I can send a PR.
<!-- gh-linked-prs -->
### Linked PRs
* gh-106921
<!-- /gh-linked-prs -->
| 85ed1d24427bf3e000467aab7ee1b0322b0a9013 | a31dea1feb61793e48fa9aa5014f358352205c1d |
python/cpython | python__cpython-106910 | # Use role :const: for referencing module constants
While technically Python modules are mutable and every name can be overridden, some names are not intended to be changed. Sphinx has a special role for this: `:const:`. I think that it is better to use the semantically more specific `:const:` instead of the general `:data:` in these cases.
These are mostly all-uppercase names, though there are exceptions: `curses.LINES` and `curses.COLS` are changed when the terminal is resized. Some lowercase names, like `os.sep` and `string.ascii_letters`, are also clearly constants.
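A sketch of what the markup change amounts to (the sentence is illustrative; the roles are the stock Sphinx Python-domain ones):

```rst
.. Current markup:
The default separator is :data:`os.sep`.

.. Proposed markup:
The default separator is :const:`os.sep`.
```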
<!-- gh-linked-prs -->
### Linked PRs
* gh-106910
* gh-106956
* gh-106957
<!-- /gh-linked-prs -->
| 4b9948617f91175783609769aa6160e5b49b9ccc | d036db728ea3d54509cbad06df74e2d9a31fbec8 |
python/cpython | python__cpython-106906 | # Error in AST recursion depth tracking change of gh-95185
# Bug report
The change in https://github.com/python/cpython/commit/00474472944944b346d8409cfded84bb299f601a seems to miss the recursion depth adjustment in case of an error. As an example, for some of the generated code:
```
PyObject*
ast2obj_mod(struct ast_state *state, void* _o)
{
    mod_ty o = (mod_ty)_o;
    PyObject *result = NULL, *value = NULL;
    PyTypeObject *tp;
    if (!o) {
        Py_RETURN_NONE;
    }
    if (++state->recursion_depth > state->recursion_limit) {
        PyErr_SetString(PyExc_RecursionError,
                        "maximum recursion depth exceeded during ast construction");
        return 0;
    }
    switch (o->kind) {
    case Module_kind:
        tp = (PyTypeObject *)state->Module_type;
        result = PyType_GenericNew(tp, NULL, NULL);
        if (!result) goto failed;
        value = ast2obj_list(state, (asdl_seq*)o->v.Module.body, ast2obj_stmt);
        if (!value) goto failed;
        if (PyObject_SetAttr(result, state->body, value) == -1)
            goto failed;
        ...
    }
    state->recursion_depth--;
    return result;
failed:
    Py_XDECREF(value);
    Py_XDECREF(result);
    return NULL;
}
```
Note that the `failed` code path is missing the `state->recursion_depth--;` statement.
I found this as I'm trying to track down where spurious `SystemError: AST constructor recursion depth mismatch` errors in Python 3.11 are coming from. E.g.
```
  File "/env/lib/python3.11/site-packages/bloscpack/numpy_io.py", line 358, in unpack_ndarray_from_bytes
    return unpack_ndarray(source)
    ^^^^^^^^^^^^^^^^^
  File "/env/lib/python3.11/site-packages/bloscpack/numpy_io.py", line 305, in unpack_ndarray
    sink = PlainNumpySink(source.metadata)
    ^^^^^^^^^^^^^^^^^
  File "/env/lib/python3.11/site-packages/bloscpack/numpy_io.py", line 136, in __init__
    dtype_ = ast.literal_eval(metadata['dtype'])
    ^^^^^^^^^^^^^^^^^
  File "/env/lib/python3.11/ast.py", line 64, in literal_eval
    node_or_string = parse(node_or_string.lstrip(" \t"), mode='eval')
    ^^^^^^^^^^^^^^^^^
  File "/env/lib/python3.11/ast.py", line 50, in parse
    return compile(source, filename, mode, flags,
    ^^^^^^^^^^^^^^^^^
SystemError: AST constructor recursion depth mismatch (before=48, after=47)
```
# Your environment
Reproduced in Python 3.11 but the code in main looks the same.
<!-- gh-linked-prs -->
### Linked PRs
* gh-106906
* gh-112032
* gh-112849
* gh-113035
* gh-113472
* gh-113476
<!-- /gh-linked-prs -->
| 1447af797048e62049d00bbd96d8daee3929f527 | 1c7ed7e9ebc53290c831d7b610219fa737153a1b |
python/cpython | python__cpython-107347 | # Add mechanism for getting active exception in a sys.monitoring PY_UNWIND callback (3.12)
I couldn't figure out a way to get the active exception from a PY_UNWIND event callback -- I was hoping the exception would be passed to the callback, but it isn't. This was an issue with the old trace-function debugger/profiler support too, so it's not really a regression, but I was hoping to avoid the workarounds we used with it.
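For context, a minimal sketch of the kind of workaround used with the old `sys.settrace` machinery (not the `sys.monitoring` API; the names here are illustrative): remember the most recent `'exception'` trace event so the active exception is known by the time the frame unwinds.

```python
import sys

# Sketch of the old-style workaround: record the exception instance
# from 'exception' trace events so it is available during unwinding.
last_exc = None

def tracer(frame, event, arg):
    global last_exc
    if event == "exception":
        last_exc = arg[1]  # arg is (exc_type, exc_value, traceback)
    return tracer

def boom():
    raise ValueError("unwound")

sys.settrace(tracer)
try:
    boom()
except ValueError:
    pass
finally:
    sys.settrace(None)

print(type(last_exc).__name__)  # prints: ValueError
```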
I mentioned this in https://github.com/python/cpython/issues/103082 but was encouraged to create a separate bug for it.
<!-- gh-linked-prs -->
### Linked PRs
* gh-107347
* gh-107382
<!-- /gh-linked-prs -->
| ac7a0f858a8d0c6ca2e64bb880fca40e229d267a | 9a7204b86bdb3e26c2a62aeaafb875275500b9f7 |
python/cpython | python__cpython-107291 | # No sys.monitoring RAISE event emitted when exception is re-raised (3.12b4)
It looks like no RAISE event is emitted for a bare `raise` in an `except:` clause, or at the end of a `finally:` clause that implicitly re-raises an exception. I don't know whether this is a problem with the implementation or whether it should simply be documented as expected behavior.
I mentioned this in https://github.com/python/cpython/issues/103082 but was encouraged to create a separate bug for it.
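For reference, the two re-raise forms in question, shown purely at the language level (no monitoring involved):

```python
# A bare raise in an except block, and the implicit re-raise at the end
# of a finally block; in both cases the very same exception object
# continues to propagate.
def reraise_bare(exc):
    try:
        raise exc
    except type(exc):
        raise  # bare re-raise

def reraise_finally(exc):
    try:
        raise exc
    finally:
        pass  # falling off the end of finally re-raises implicitly

original = ValueError("x")
for fn in (reraise_bare, reraise_finally):
    try:
        fn(original)
    except ValueError as caught:
        assert caught is original
print("ok")  # prints: ok
```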
<!-- gh-linked-prs -->
### Linked PRs
* gh-107291
* gh-107346
<!-- /gh-linked-prs -->
| 766d2518ae8384c6bd7f82727defeb86847ccf64 | f84d77b4e07aeb6241c1ff9932627d3ba059efa8 |
python/cpython | python__cpython-107337 | # Returning DISABLE from a sys.monitoring callback can trigger an assert in a debug build (3.12b4)
Returning DISABLE from a sys.monitoring callback for an event >= PY_MONITORING_INSTRUMENTED_EVENTS triggers an assert in a debug build. It would probably be better to just ignore a DISABLE return value since anything else can be returned (I think) or mandate that None be returned. If the current behavior is retained, it should be documented.
I didn't test in a non-debug build.
<!-- gh-linked-prs -->
### Linked PRs
* gh-107337
* gh-107351
<!-- /gh-linked-prs -->
| c6539b36c163efff3d6ed02b938a6151325f4db7 | d77d973335835bd744be8106010061cb338b0ae1 |
python/cpython | python__cpython-106894 | # Use roles :data: and :const: for module-level variables and constants
Role `:data:` is intended for marking a module-level variable. [1] But role `:attr:` is often used instead in cases like `sys.modules`. While technically any module-level variable is accessible as an attribute, I think it would be better to use the more specific role `:data:`.
[1] https://www.sphinx-doc.org/en/master/usage/restructuredtext/domains.html#role-py-data
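A sketch of the markup change (the sentence is illustrative; the role names are from the Sphinx Python domain):

```rst
.. Current markup:
The :attr:`sys.modules` dictionary caches loaded modules.

.. Proposed markup:
The :data:`sys.modules` dictionary caches loaded modules.
```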
<!-- gh-linked-prs -->
### Linked PRs
* gh-106894
* gh-106954
* gh-106955
<!-- /gh-linked-prs -->
| d036db728ea3d54509cbad06df74e2d9a31fbec8 | 8d397ee8259fa0f81598a452438fc335267ca260 |
python/cpython | python__cpython-106901 | # asyncio.base_events.Server is not exposed as Asyncio.Server in early versions of 3.9 and 3.10 and all previous versions
# Documentation:
The Python documentation shows `asyncio.base_events.Server` as `asyncio.Server` in all Python versions ([offending page](https://docs.python.org/3.8/library/asyncio-eventloop.html#asyncio.Server)). However, it is not exposed as such until late patch releases of 3.9 and 3.10, and 3.11.0.
## Relevant Facts:
PR #31760 makes the change exposing it as `asyncio.Server`, but this only affects newer versions.
## My Comments:
I can make a pr adding a note to the docs, let me know what you would like me to do.
<!-- gh-linked-prs -->
### Linked PRs
* gh-106901
* gh-106902
* gh-106903
<!-- /gh-linked-prs -->
| 1e1f4e91a905bab3103250a3ceadac0693b926d9 | c6c5665ee0c0a5ddc96da255c9a62daa332c32b3 |
python/cpython | python__cpython-107397 | # Include of `linux/limits.h` breaks build on Linux <5.1
# Bug report
#101857 / #101858 added an `#include <linux/limits.h>` on Linux in `Modules/posixmodule.c`.
If my Linux Git history sleuthing is accurate (`git log -- include/linux/limits.h`), `linux/limits.h` only exists in Linux <3.10 and >5.1. (Yes, the file was removed then re-added.) Attempting to compile CPython against Linux headers 3.10-5.0 fails due to the missing file.
cc @thesamesam (commit author) @gpshead (reviewer)
<!-- gh-linked-prs -->
### Linked PRs
* gh-107397
* gh-107414
* gh-107415
<!-- /gh-linked-prs -->
| 11c055f5ff1a353de6d2e77f2af24aaa782878ba | cf63df88d38ec3e6ebd44ed184312df9f07f9782 |
python/cpython | python__cpython-106871 | # Upgrade CPython to the new PyMemberDef API: names starting with Py_ prefix (Py_T_INT instead of T_INT)
The CPython code base was written for the old "structmember.h" API with names like READONLY and T_INT. I propose to upgrade the code base to new public names like Py_READONLY and Py_T_INT.
``#include "structmember.h"`` should be removed, and ``#include <stddef.h>`` may need to be added in its place to get the standard ``offsetof()`` function.
<!-- gh-linked-prs -->
### Linked PRs
* gh-106871
<!-- /gh-linked-prs -->
| 1a3faba9f15e0c03c6cc0d225d377b8910b5379f | ed082383272c2c238e364e9cc83229234aee23cc |
python/cpython | python__cpython-108975 | # Unix-only pages should each say "Availability: Unix"
# Documentation
This section of the docs has a number of pages: https://docs.python.org/3/library/unix.html . Each is only available on Unix, but the pages themselves don't say this. For example, if you search for the `resource` module, you get to here: https://docs.python.org/3/library/resource.html, and nothing on that page says it's not available on Windows.
It would be a small change to add "Availability: Unix" to each of these pages to make them more self-contained and quicker to reference.
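Assuming the `availability` directive already used on individual function entries in the CPython docs, the per-page addition could be as small as:

```rst
.. availability:: Unix.
```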
<!-- gh-linked-prs -->
### Linked PRs
* gh-108975
* gh-111553
* gh-111554
<!-- /gh-linked-prs -->
| cf3dbe4c3df40e2d4d572c62623207ce29a9365b | 52a5b5d276abb3f3101c0b75b67c1c3f8ee483fe |
python/cpython | python__cpython-106854 | # The table of sys.flags doesn't have warn_default_encoding attribute.
https://docs.python.org/3/library/sys.html?highlight=sys%20flags#sys.flags
PEP 597 – Add optional EncodingWarning https://peps.python.org/pep-0597/

<!-- gh-linked-prs -->
### Linked PRs
* gh-106854
* gh-106958
* gh-107054
<!-- /gh-linked-prs -->
| fd84ac0ee0a8d5e34e0a106eed7e50539b61c5f8 | 4b9948617f91175783609769aa6160e5b49b9ccc |
python/cpython | python__cpython-106846 | # test_doctest leaks memory blocks
Tried on current main.
```
./python -m test -R 3:3 test_doctest
Running Debug|x64 interpreter...
0:00:00 Run tests sequentially
0:00:00 [1/1] test_doctest
beginning 6 repetitions
123456
<frozen importlib._bootstrap>:824: ImportWarning: TestImporter.exec_module() not found; falling back to load_module()
.<frozen importlib._bootstrap>:824: ImportWarning: TestImporter.exec_module() not found; falling back to load_module()
.<frozen importlib._bootstrap>:824: ImportWarning: TestImporter.exec_module() not found; falling back to load_module()
.<frozen importlib._bootstrap>:824: ImportWarning: TestImporter.exec_module() not found; falling back to load_module()
.<frozen importlib._bootstrap>:824: ImportWarning: TestImporter.exec_module() not found; falling back to load_module()
.<frozen importlib._bootstrap>:824: ImportWarning: TestImporter.exec_module() not found; falling back to load_module()
.
test_doctest leaked [34, 34, 34] memory blocks, sum=102
test_doctest failed (reference leak) in 34.8 sec
== Tests result: FAILURE ==
1 test failed:
test_doctest
Total duration: 34.9 sec
Tests result: FAILURE
```
<!-- gh-linked-prs -->
### Linked PRs
* gh-106846
<!-- /gh-linked-prs -->
| ece3b9d12a2f47da8b144f185dfba9b2b725fc82 | 1e36ca63f9f5e0399efe13a80499cef290314c2a |
python/cpython | python__cpython-106832 | # Unreachable code in `Modules/_ssl.c`
Looks like code in this check cannot ever be reached:
https://github.com/python/cpython/blob/2b94a05a0e45e4aae030a28b716a038ef529f8ef/Modules/_ssl.c#L2824-L2827
At this point `session` cannot be `NULL`, because it is checked right above: https://github.com/python/cpython/blob/2b94a05a0e45e4aae030a28b716a038ef529f8ef/Modules/_ssl.c#L2803-L2806
I guess it was intended to check the `newsession` variable instead.
Docs say: https://www.openssl.org/docs/man1.0.2/man3/d2i_SSL_SESSION.html
> d2i_SSL_SESSION() returns a pointer to the newly allocated SSL_SESSION object. In case of failure the NULL-pointer is returned and the error message can be retrieved from the error stack.
One more thing that bothers me here is that no error is set. We just return `NULL`, which can theoretically crash the interpreter.
So, my plan is to:
1. Check `newsession` instead
2. Add a `ValueError` there
Originally introduced in https://github.com/python/cpython/commit/99a6570295de5684bfac767b4d35c72f8f36612d
PR is on its way.
Found by Linux Verification Center ([portal.linuxtesting.ru](http://portal.linuxtesting.ru/)) with SVACE.
Author A. Voronin.
<!-- gh-linked-prs -->
### Linked PRs
* gh-106832
* gh-106835
* gh-106836
<!-- /gh-linked-prs -->
| ebf2c56b33553a448da8f60fcd89a622f071b5f4 | 8e9a1a032233f06ce0f1acdf5f983d614c8745a5 |
python/cpython | python__cpython-107564 | # Code generator: support variable stack effects in macros
Currently you can't split e.g.
```
inst(CALL_PY_EXACT_ARGS, (unused/1, func_version/2, method, callable, args[oparg] -- unused))
{ ... }
```
into two micro-ops
```
op(_CHECK_PY_FUNCTION, (func_version/2, method, callable, args[oparg] -- method, callable, args[oparg]))
{ ... }
op(_CALL_PY_EXACT_ARGS, (method, callable, args[oparg] -- unused))
{ ... }
macro(CALL_PY_EXACT_ARGS) = unused/1 + _CHECK_PY_FUNCTION + _CALL_PY_EXACT_ARGS;
```
We need at least to handle micro-ops that have the same variable-length input effect as output effect, and a final (action) op that has a variable-length stack input effect and a fixed output effect. This should handle all the CALL specializations. (For UNPACK_SEQUENCE we also need to support a fixed input effect and a variable output effect in the action op.)
<!-- gh-linked-prs -->
### Linked PRs
* gh-107564
* gh-107649
* gh-107759
<!-- /gh-linked-prs -->
| 85e5b1f5b806289744ef9a5a13dabfb23044f713 | 4e6fac7fcc31fc6198fcddc612688b0a05ff7ae4 |
python/cpython | python__cpython-106798 | # Python/generated_cases.c.h emits compiler warnings
````
In file included from Python/ceval.c:770:
Python/generated_cases.c.h:3463:51: warning: code will never be executed [-Wunreachable-code]
if (1) { stack_pointer[-(1 + (1 ? 1 : 0))] = res2; }
^
Python/generated_cases.c.h:3463:43: note: silence by adding parentheses to mark code as explicitly dead
if (1) { stack_pointer[-(1 + (1 ? 1 : 0))] = res2; }
^
/* DISABLES CODE */ ( )
Python/generated_cases.c.h:3437:58: warning: code will never be executed [-Wunreachable-code]
if (0) { stack_pointer[-(1 + (0 ? 1 : 0))] = res2; }
^~~~
Python/generated_cases.c.h:3437:17: note: silence by adding parentheses to mark code as explicitly dead
if (0) { stack_pointer[-(1 + (0 ? 1 : 0))] = res2; }
^
/* DISABLES CODE */ ( )
Python/generated_cases.c.h:3437:47: warning: code will never be executed [-Wunreachable-code]
if (0) { stack_pointer[-(1 + (0 ? 1 : 0))] = res2; }
^
Python/generated_cases.c.h:3437:43: note: silence by adding parentheses to mark code as explicitly dead
if (0) { stack_pointer[-(1 + (0 ? 1 : 0))] = res2; }
^
/* DISABLES CODE */ ( )
Python/generated_cases.c.h:3415:58: warning: code will never be executed [-Wunreachable-code]
if (0) { stack_pointer[-(1 + (0 ? 1 : 0))] = res2; }
^~~~
Python/generated_cases.c.h:3415:17: note: silence by adding parentheses to mark code as explicitly dead
if (0) { stack_pointer[-(1 + (0 ? 1 : 0))] = res2; }
^
/* DISABLES CODE */ ( )
Python/generated_cases.c.h:3415:47: warning: code will never be executed [-Wunreachable-code]
if (0) { stack_pointer[-(1 + (0 ? 1 : 0))] = res2; }
^
Python/generated_cases.c.h:3415:43: note: silence by adding parentheses to mark code as explicitly dead
if (0) { stack_pointer[-(1 + (0 ? 1 : 0))] = res2; }
^
/* DISABLES CODE */ ( )
Python/generated_cases.c.h:3387:51: warning: code will never be executed [-Wunreachable-code]
if (1) { stack_pointer[-(1 + (1 ? 1 : 0))] = res2; }
^
Python/generated_cases.c.h:3387:43: note: silence by adding parentheses to mark code as explicitly dead
if (1) { stack_pointer[-(1 + (1 ? 1 : 0))] = res2; }
^
/* DISABLES CODE */ ( )
Python/generated_cases.c.h:3365:51: warning: code will never be executed [-Wunreachable-code]
if (1) { stack_pointer[-(1 + (1 ? 1 : 0))] = res2; }
^
Python/generated_cases.c.h:3365:43: note: silence by adding parentheses to mark code as explicitly dead
if (1) { stack_pointer[-(1 + (1 ? 1 : 0))] = res2; }
^
/* DISABLES CODE */ ( )
````
Not a significant issue, but worth fixing.
<!-- gh-linked-prs -->
### Linked PRs
* gh-106798
* gh-107889
<!-- /gh-linked-prs -->
| 1e36ca63f9f5e0399efe13a80499cef290314c2a | 00e52acebd2beb2663202bfc4be0ce79ba77361e |
python/cpython | python__cpython-106790 | # sysconfig imports pprint which imports the world
sysconfig imports pprint in order to print the config dict to a file. But this dict contains only strings and ints, so pprint is overkill.
The problem with importing pprint is that it imports dataclasses, which imports inspect, which imports dis, which imports opcode, which tries to import `_opcode`, which does not exist in the early stages of the build when sysconfig is used. So opcode imports `_opcode` in a try-except and ignores import errors. Since sysconfig doesn't need anything from `_opcode`, the partially constructed opcode module suffices. But this is flaky and hard to reason about (it works because it works). It would be better if sysconfig didn't import so much.
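Since the config dict holds only strings and ints, plain repr-based formatting is enough. A sketch of the kind of minimal formatter that would suffice (the names here are illustrative, not sysconfig's actual helpers):

```python
# Hypothetical stand-in for the build-time config dict.
config_vars = {"prefix": "/usr/local", "py_version_nodot": "312", "SIZEOF_VOID_P": 8}

def dump_config(d):
    # repr() round-trips str and int, so no pprint import is needed.
    lines = ["{"]
    for key in sorted(d):
        lines.append(f"    {key!r}: {d[key]!r},")
    lines.append("}")
    return "\n".join(lines)

print(dump_config(config_vars))
```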
<!-- gh-linked-prs -->
### Linked PRs
* gh-106790
<!-- /gh-linked-prs -->
| 5ecedbd26692b9fbdd7aad81b991869bf650f929 | 7aa89e505d893cd5e6f33b84d66e5fa769089931 |
python/cpython | python__cpython-106784 | # docs.python.org | 4.6. match Statements
# Documentation
https://docs.python.org/3/tutorial/controlflow.html
I adjusted the code as follows, since `points` is not defined in the snippet, but it still does not work:
```python
class Point:
    def __init__(self, x, y):
        self.x = x
        self.y = y

points = [Point(0, 7), Point(0, 9)]
match points:
    case []:
        print("No points")
    case [Point(0, 0)]:
        print("The origin")
    case [Point(x, y)]:
        print(f"Single point {x}, {y}")
    case [Point(0, y1), Point(0, y2)]:
        print(f"Two on the Y axis at {y1}, {y2}")
    case _:
        print("Something else")
```
error:
```pytb
Traceback (most recent call last):
  File "c:\Users\moham\OneDrive\Education\UDST\gpaCal.py", line 15, in <module>
    case [Point(0, y1), Point(0, y2)]:
    ^^^^^^^^^^^^
TypeError: Point() accepts 0 positional sub-patterns (2 given)
```
<!-- gh-linked-prs -->
### Linked PRs
* gh-106784
* gh-106819
* gh-106820
<!-- /gh-linked-prs -->
| 7aa89e505d893cd5e6f33b84d66e5fa769089931 | babb22da5a25c18a2d203bf72ba35e7861ca60ee |
python/cpython | python__cpython-106775 | # Upgrade the bundled version of pip to 23.2.1
# Feature or enhancement
This is the latest pip release.
# Pitch
This ensures that users who install newest release of Python get the newest version of pip.
<!-- gh-linked-prs -->
### Linked PRs
* gh-106775
* gh-107222
* gh-107223
<!-- /gh-linked-prs -->
| f443b54a2f14e386a91fe4b09f41a265445008b8 | d9e34db993ebc4340459ac5cb3a66c1b83451a66 |
python/cpython | python__cpython-110566 | # [3.11] [3.12] Building python 3.11.4 installer FAILS, launcher.wixproj cannot detect v143 buildtools
# Bug report
I tried to build Python 3.11.4 with Visual Studio 2022 (v143) and I get the following error at the end of compilation.
The rest of the project binaries are built successfully using v143.
```
Project "D:\build\DE-Python\Python\Tools\msi\launcher\launcher.wixproj" (1) is building "D:\build\DE-Python\Python\PCbuild\pyshellext.vcxproj" (2) on node 1 (default targets).
C:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools\MSBuild\Microsoft\VC\v170\Microsoft.CppBuild.targets(456,5): error MSB8020: The build tools for v143 (Platform Toolset = 'v143') cannot be found. To build using the v143 bui
ld tools, please install v143 build tools. Alternatively, you may upgrade to the current Visual Studio tools by selecting the Project menu or right-click the solution, and then selecting "Retarget solution". [D:\build\DE-Python\Python\
PCbuild\pyshellext.vcxproj]
```
I used the following command to build: **Python\Tools\msi\build.bat -x64 --pack**
Note: this command successfully creates the installer for Python 3.10.12 on the same setup.
**The build steps from https://github.com/python/cpython/blob/main/Tools/msi/README.txt no longer work for the 3.11.x and 3.12.x series.**
# My environment
VCIDEInstallDir=C:\Program Files\Microsoft Visual Studio\2022\Professional\Common7\IDE\VC\
VCINSTALLDIR=C:\Program Files\Microsoft Visual Studio\2022\Professional\VC\
VCToolsInstallDir=C:\Program Files\Microsoft Visual Studio\2022\Professional\VC\Tools\MSVC\14.36.32532\
VCToolsRedistDir=C:\Program Files\Microsoft Visual Studio\2022\Professional\VC\Redist\MSVC\14.36.32532\
VCToolsVersion=14.36.32532
VisualStudioVersion=17.0
VS170COMNTOOLS=C:\Program Files\Microsoft Visual Studio\2022\Professional\Common7\Tools\
VS2022INSTALLDIR=C:\Program Files\Microsoft Visual Studio\2022\Professional
VSINSTALLDIR=C:\Program Files\Microsoft Visual Studio\2022\Professional\
- CPython versions tested on: Python 3.11.4
- Operating system and architecture:
Edition Windows 10 Enterprise
Version 22H2
Installed on 7/14/2023
OS build 19045.3031
Experience Windows Feature Experience Pack 1000.19041.1000.0
# Steps to reproduce
1. Enable .NET Framework 3.5 on Windows 10
2. Install Visual Studio 2022 (v143 toolset) on Windows 10 Enterprise (or any other flavor)
3. Install Python development tools (with Python native development tools)
4. Download the source code zip for Python 3.11.4
5. Execute: Tools\msi\build.bat -x64 --pack
6. Expected result: the installer should be created
<!-- gh-linked-prs -->
### Linked PRs
* gh-110566
<!-- /gh-linked-prs -->
| 0050670d76193ea529f51d0526256cb7a769d61b | ea7b53ff67764a2abf1f27d4c95d032d2dbb02f9 |
python/cpython | python__cpython-107466 | # [3.12] `EnumMeta.__getattr__` removed without deprecation
# Bug report
```python
>>> from enum import Enum
>>> class Color(Enum):
... RED = "red"
...
```
**3.11**
```python
>>> Color.__getattr__("RED")
<Color.RED: 'red'>
```
**3.12b4**
```py
>>> Color.__getattr__("RED")
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: type object 'Color' has no attribute '__getattr__'. Did you mean: '__getitem__'?
```
***
I see that `__getattr__` is [documented](https://docs.python.org/3.11/library/enum.html#enum.EnumType.__getattr__), so I would have expected either a deprecation notice or an entry in What's New for Python 3.12.
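For code that called the hook directly, the ordinary attribute and item protocols still work on 3.12 (shown here as a possible migration path, not an official recommendation):

```python
from enum import Enum

class Color(Enum):
    RED = "red"

# These go through the regular lookup machinery and do not rely on the
# removed EnumMeta.__getattr__ hook:
print(getattr(Color, "RED") is Color.RED)  # prints: True
print(Color["RED"] is Color.RED)           # prints: True
```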
<!-- gh-linked-prs -->
### Linked PRs
* gh-107466
* gh-107509
<!-- /gh-linked-prs -->
| de51dede5b48ef23d7d33d92f3616824e23fd205 | 5eb80a61f582802c3f1caa3bf4dc754847bf1e75 |
python/cpython | python__cpython-106753 | # Several bugfixes for zipfile.Path
The [3.16.2 release of zipp](https://zipp.readthedocs.io/en/latest/history.html#v3-16-2) saw several bug fixes. Let's apply those to CPython.
<!-- gh-linked-prs -->
### Linked PRs
* gh-106753
* gh-106755
* gh-106757
* gh-106777
* gh-106778
<!-- /gh-linked-prs -->
| 03185f0c150ebc52d41dd5ea6f369c7b5ba9fc16 | fb32f35c0585b1dbb87b6f254818e1f485a50f65 |
python/cpython | python__cpython-106754 | # Optimize XXXSelector for many iterations of the event loop
split out from https://github.com/python/cpython/pull/106555#issuecomment-1632025959
The current `EpollSelector` can be sped up a bit. This makes quite a difference when there are 100000+ iterations of the event loop per minute (the use case being receiving bluetooth data from multiple sources) since selectors have to run every iteration.
original: 11.831302762031555
new: 9.579423972172663
```python
import timeit
import math
import select
import os
from selectors import EpollSelector, EVENT_WRITE, EVENT_READ

class OriginalEpollSelector(EpollSelector):
    def select(self, timeout=None):
        if timeout is None:
            timeout = -1
        elif timeout <= 0:
            timeout = 0
        else:
            # epoll_wait() has a resolution of 1 millisecond, round away
            # from zero to wait *at least* timeout seconds.
            timeout = math.ceil(timeout * 1e3) * 1e-3
        # epoll_wait() expects `maxevents` to be greater than zero;
        # we want to make sure that `select()` can be called when no
        # FD is registered.
        max_ev = max(len(self._fd_to_key), 1)
        ready = []
        try:
            fd_event_list = self._selector.poll(timeout, max_ev)
        except InterruptedError:
            return ready
        for fd, event in fd_event_list:
            events = 0
            if event & ~select.EPOLLIN:
                events |= EVENT_WRITE
            if event & ~select.EPOLLOUT:
                events |= EVENT_READ
            key = self._key_from_fd(fd)
            if key:
                ready.append((key, events & key.events))
        return ready

NOT_EPOLLIN = ~select.EPOLLIN
NOT_EPOLLOUT = ~select.EPOLLOUT

class NewEpollSelector(EpollSelector):
    def select(self, timeout=None):
        if timeout is None:
            timeout = -1
        elif timeout <= 0:
            timeout = 0
        else:
            # epoll_wait() has a resolution of 1 millisecond, round away
            # from zero to wait *at least* timeout seconds.
            timeout = math.ceil(timeout * 1e3) * 1e-3
        # epoll_wait() expects `maxevents` to be greater than zero;
        # we want to make sure that `select()` can be called when no
        # FD is registered.
        max_ev = len(self._fd_to_key) or 1
        ready = []
        try:
            fd_event_list = self._selector.poll(timeout, max_ev)
        except InterruptedError:
            return ready
        fd_to_key = self._fd_to_key
        for fd, event in fd_event_list:
            key = fd_to_key.get(fd)
            if key:
                ready.append(
                    (
                        key,
                        (
                            (event & NOT_EPOLLIN and EVENT_WRITE)
                            | (event & NOT_EPOLLOUT and EVENT_READ)
                        )
                        & key.events,
                    )
                )
        return ready

original_epoll = OriginalEpollSelector()
new_epoll = NewEpollSelector()
for _ in range(512):
    r, w = os.pipe()
    os.write(w, b"a")
    original_epoll.register(r, EVENT_READ)
    new_epoll.register(r, EVENT_READ)

original_time = timeit.timeit(
    "selector.select()",
    number=100000,
    globals={"selector": original_epoll},
)
new_time = timeit.timeit(
    "selector.select()",
    number=100000,
    globals={"selector": new_epoll},
)
print("original: %s" % original_time)
print("new: %s" % new_time)
```
<!-- gh-linked-prs -->
### Linked PRs
* gh-106754
* gh-106864
* gh-106879
* gh-106884
<!-- /gh-linked-prs -->
| aecf6aca515a203a823a87c711f15cbb82097c8b | 4cb0b9c0a9f6a4154238c98013d2679229b1f794 |
python/cpython | python__cpython-106748 | # Use type for deprecated aliases of typing instead of removing them
# Enhancement
> This module (typing) defines several deprecated aliases to pre-existing standard library classes. ...
> the aliases became redundant in Python 3.9 when the corresponding pre-existing classes were enhanced to support []. ...
> The redundant types are deprecated as of Python 3.9 ...
> (&) will be removed from the typing module no sooner than the first Python version released 5 years after the release of Python 3.9.0.
> -- <cite>[The typing documentation](https://docs.python.org/3/library/typing.html#deprecated-aliases)</cite>
Instead of removing these aliases, it would be more appropriate to redefine them using the `type` keyword added in 3.12 which shouldn't cause unnecessary overhead:
```python
type Dict = dict
type List = list
...
```
And make type checkers warn users of these compatibility aliases that they should only be used when aiming for backwards compatibility with Python versions 3.5.4 - 3.8.
# Pitch
These aliases were the only way to parameterise built-in classes in Python version 3.5.4 - 3.8. Completely removing them would suddenly break backwards compatibility with these versions for no good reason. This is especially relevant to third party modules that backport features from newer Python versions and pure python modules trying to support as many Python versions as possible.
A work-around would be to define a compatibility module, but that isn't a great solution as it adds unnecessary extra code & still requires modifying the code which could be cumbersome for large projects:
```python
"""Typing compatibility module"""
from sys import version_info

__all__: list[str] = ['Dict', 'List', ...]

if version_info >= (3, 9):
    # TypeAlias & type can't be used here because they weren't available
    # in 3.9, making this a suboptimal solution
    Dict = dict
    List = list
    ...
else:
    from typing import Dict, List, ...
```
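For what it's worth, on 3.9+ a plain assignment already gives a subscriptable alias for annotation purposes (the `type` statement would instead create a lazy `TypeAliasType` object):

```python
# Runtime sketch: the builtin itself supports [] from Python 3.9
# onward, so a bare assignment works as an alias.
Dict = dict
cfg: Dict[str, int] = {"retries": 3}
print(Dict[str, int] == dict[str, int])  # prints: True
```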
What happens otherwise?
1. IDE's could give incorrect suggestions
2. Many older online tutorials become irrelevant.
3. Abandoned projects would become unusable
Examples:
- Python 3.5.4 - 3.8:
```python
from typing import List
lst1: List[int] = [3, 1, 4, 1, 5, ...]
lst2: list[int] = [3, 1, 4, 1, 5, ...] # TypeError: 'type' object is not subscriptable
```
- Python 3.9 - 3.X:
```python
from typing import List
lst1: List[int] = [3, 1, 4, 1, 5, ...]
lst2: list[int] = [3, 1, 4, 1, 5, ...]
```
- Python 3.Y - ∞:
```python
lst1: list[int] = [3, 1, 4, 1, 5, ...]
from typing import List # ImportError: cannot import name 'List' from 'typing'
lst2: List[int] = [3, 1, 4, 1, 5, ...]
```
No notation would be compatible with both 3.5.4 - 3.X & 3.Y - ∞
It's worth pointing out these aliases have been deprecated for quite some time. But as long as no irreversible decisions are made this remains open to discussion.
# Previous discussion
Not applicable, this is an enhancement.
<!-- gh-linked-prs -->
### Linked PRs
* gh-106748
* gh-106772
* gh-106773
<!-- /gh-linked-prs -->
| 89ec0e952965b6a1be40e26c3ddc4131599e5ee9 | aeef8591e41b68341af308e56a744396c66879cc |
python/cpython | python__cpython-106740 | # Add `rtype_cache` to `warnings.warn` message when leaked objects found
# Feature or enhancement
Add the `rtype_cache` (which is a `set` that includes all leaked objects of a particular type) to the `warnings.warn` message in `Lib/multiprocessing/resource_tracker.py` when any leaked objects are found to make debugging easier:
https://github.com/python/cpython/blob/490295d651d04ec3b3eff2a2cda7501191bad78a/Lib/multiprocessing/resource_tracker.py#L224-L226
# Pitch
When the `resource_tracker` module (`Lib/multiprocessing/resource_tracker.py`) finds leaked objects, the `finally` block in the `main` function includes the type of leaked objects and the number of leaked objects, but does not actually include the objects that are leaked. Adding the `set` of `rtype_cache` to the `warnings.warn` message will make the warning much more useful for debugging, as the names of the leaked objects could help more quickly identify what was leaked and/or why the leaked objects were not properly cleaned up.
The permalink attached above links directly to the relevant code as it is currently implemented, but I'm adding it again below with some surrounding code for reference here:
```
finally:
    # all processes have terminated; cleanup any remaining resources
    for rtype, rtype_cache in cache.items():
        if rtype_cache:
            try:
                warnings.warn('resource_tracker: There appear to be %d '
                              'leaked %s objects to clean up at shutdown' %
                              (len(rtype_cache), rtype))
            except Exception:
                pass
```
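A hedged sketch of the proposed message change, in isolation (the exact wording and the example resource name are hypothetical):

```python
import warnings

def warn_leaked(rtype, rtype_cache):
    # Proposed: include the cached names themselves, not just the count.
    warnings.warn('resource_tracker: There appear to be %d '
                  'leaked %s objects to clean up at shutdown: %s' %
                  (len(rtype_cache), rtype, rtype_cache))

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    warn_leaked("semaphore", {"/mp-abc123"})
print(str(caught[0].message))
```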
# Previous discussion
A recent example of an issue where the additional context would have been useful is https://github.com/python/cpython/issues/104090, in which the current warning message is `/workspace/cpython/Lib/multiprocessing/resource_tracker.py:224: UserWarning: resource_tracker: There appear to be 6 leaked semaphore objects to clean up at shutdown`.
<!-- gh-linked-prs -->
### Linked PRs
* gh-106740
<!-- /gh-linked-prs -->
| fabcbe9c12688eb9a902a5c89cb720ed373625c5 | 188000ae4b486cfffaeaa0a06902e4844d1b2bbc |
python/cpython | python__cpython-106735 | # Disable tab completion in multiline mode of pdb
# Bug report
`pdb` now has a multiline mode where you can input more than one line of code at the pdb prompt. However, when entering multiline mode, we should disable tab completion: when users hit tab in multiline mode, they mean "insert a tab here", not "complete a pdb command".
# Your environment
CPython:main
<!-- gh-linked-prs -->
### Linked PRs
* gh-106735
<!-- /gh-linked-prs -->
| 391f3e3ca904449a50b2dd5956684357fdce690b | b88d9e75f68f102aca45fa62e2b0e2e2ff46d810 |
python/cpython | python__cpython-106815 | # `inspect.getsource` (`findsource`) not working as expected when duplicate class is used
<!--
If you're new to Python and you're not sure whether what you're experiencing is a bug, the CPython issue tracker is not
the right place to seek help. Consider the following options instead:
- reading the Python tutorial: https://docs.python.org/3/tutorial/
- posting in the "Users" category on discuss.python.org: https://discuss.python.org/c/users/7
- emailing the Python-list mailing list: https://mail.python.org/mailman/listinfo/python-list
- searching our issue tracker (https://github.com/python/cpython/issues) to see if
your problem has already been reported
-->
# Bug report
Consider the following example:
```python
import inspect
class Model:
a = 1
class Model:
a = 2
print(Model.a)
#> 2
print(inspect.getsource(Model))
#> class Model:
#> a = 1
```
I think we should expect the last class to be used (it works as expected if we use functions instead of classes).
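A simplified sketch of why the first definition wins: the source lookup scans the file top-down for a matching definition line and stops at the first hit (this models the line scan, not the exact `findsource` code):

```python
import re

source_lines = [
    'class Model:\n', '    a = 1\n', '\n',
    'class Model:\n', '    a = 2\n',
]

def find_first_class(lines, name):
    # Walk the file top-down and stop at the first matching definition;
    # this is why the earlier (shadowed) class body is returned.
    pat = re.compile(r'^\s*class\s+' + name + r'\b')
    for i, line in enumerate(lines):
        if pat.match(line):
            return i
    raise OSError('could not find class definition')
```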
This issue was raised in an external project:
- https://github.com/octoml/synr/issues/13
And I'm facing this issue while working on:
- https://github.com/pydantic/pydantic/pull/6563
Let me know if this should be fixed, I'll take a deeper look at what was done in https://github.com/python/cpython/pull/10307 and see if I can make a PR
# Your environment
<!-- Include as many relevant details as possible about the environment you experienced the bug in -->
- CPython versions tested on: 3.11.4
- Operating system and architecture: Ubuntu
<!--
You can freely edit this text. Remove any lines you believe are unnecessary.
-->
<!-- gh-linked-prs -->
### Linked PRs
* gh-106815
* gh-106968
<!-- /gh-linked-prs -->
| 663854d73b35feeb004ae0970e45b53ca27774a1 | 505eede38d141d43e40e246319b157e3c77211d3 |
python/cpython | python__cpython-106724 | # Multiprocessing not propagating -Xfrozen_modules=off
# Bug report
After python3.11 changes around frozen imports, when using multiprocessing contexts other than `fork`, the newly added `-Xfrozen_modules=off` isn't passed to spawned process interpreters.
Simple snippet demonstrating the issue:
```python
"""
$ python -Xfrozen_modules=off test.py
main: {'frozen_modules': 'off'}
forkserver: {}
spawn: {}
fork: {'frozen_modules': 'off'}
"""
import sys
import multiprocessing
def xoptions():
return sys._xoptions
def main():
print('main:', xoptions())
for ctx in ('forkserver', 'spawn', 'fork'):
with multiprocessing.get_context(ctx).Pool(1) as pool:
print(f'{ctx}:', pool.apply(xoptions))
if __name__ == '__main__':
main()
```
The issue seems to be `subprocess._args_from_interpreter_flags` not honoring `frozen_modules` key from `sys._xoptions`.
```bash
$ python -Xfrozen_modules=off -c 'import subprocess;print(subprocess._args_from_interpreter_flags())'
[]
```
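A hedged sketch of what propagating `-X` options would involve: re-emitting entries from `sys._xoptions` as command-line flags for spawned interpreters (the `xoption_args` helper is hypothetical, not the stdlib function):

```python
def xoption_args(xoptions):
    # Re-emit -X options so spawned interpreters inherit them;
    # a bare flag (value True) has no "=value" suffix.
    args = []
    for name, value in xoptions.items():
        if value is True:
            args.append('-X' + name)
        else:
            args.append('-X%s=%s' % (name, value))
    return args
```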
# Your environment
python 3.11.4
<!-- gh-linked-prs -->
### Linked PRs
* gh-106724
* gh-107367
* gh-107368
<!-- /gh-linked-prs -->
| 3dcac785810df4d9db50abe90847eaf03bbdaaf4 | 76c26eaca4147ba7e3e8d7379c5a828f0b512a46 |
python/cpython | python__cpython-106720 | # Fix __annotations__ getters and setters in type and module
There are several issues in the code of `__annotations__` getters and setters in `type` and `module` types.
* `PyDict_Contains()` returns -1 on error. The code interprets it as a positive answer.
* Calling `PyDict_Contains()` is not needed in all these cases at first place. Just try to get or delete an attribute and analyze the result.
* All errors raised when accessing `module.__dict__` (including MemoryError and KeyboardInterrupt) are replaced with a TypeError.
<!-- gh-linked-prs -->
### Linked PRs
* gh-106720
* gh-106848
* gh-106850
<!-- /gh-linked-prs -->
| e1c295e3da9ff5a3eb6b009a1f821d80e564ac87 | ece3b9d12a2f47da8b144f185dfba9b2b725fc82 |
python/cpython | python__cpython-108730 | # `PyConfig.stdlib_dir` does not affect `sys._stdlib_dir`
In Python 3.11, PyConfig gained a `stdlib_dir` attribute which is, as far as I can tell, supposed to override `sys._stdlib_dir` (which is used by importlib). No matter what I set it to or what else I set, however, it seems to be ignored. For example:
```testprogram.c
#include <Python.h>
#include <stdio.h>
extern wchar_t *_Py_GetStdlibDir(void);
int main(int argc, char **argv)
{
PyStatus status;
PyConfig config;
PyConfig_InitPythonConfig(&config);
PyConfig_SetString(&config, &config.stdlib_dir, L"/my/stdlib/dir");
    status = Py_InitializeFromConfig(&config);
if (PyStatus_Exception(status)) {
Py_ExitStatusException(status);
}
PyConfig_Clear(&config);
fprintf(stderr, "stdlib_dir = '%ls'\n", _Py_GetStdlibDir());
Py_Finalize();
}
```
```sh
% gcc -Wall $(/python/3.12-opt/bin/python3-config --cflags) -o testprogram testprogram.c $(/python/3.12-opt/bin/python3-config --ldflags) -lpython3.12
# Look ma, no warnings.
% ./testprogram
stdlib_dir = '/python/3.12-opt/lib/python3.12'
```
It doesn't seem to matter whether `PyConfig.stdlib_dir` is set to an existing directory either (although for my use-case, it must be a non-existent one; we're getting the stdlib from embedded data, and the incorrect setting of `sys._stdlib_dir` means importlib's `FrozenImporter` sets the wrong `__file__` attribute on frozen/deepfrozen modules, like `os`.)
<!-- gh-linked-prs -->
### Linked PRs
* gh-108730
<!-- /gh-linked-prs -->
| 834b7c18d74da3b30fdca66cc7da6b9e1db3ce6c | 821a7ac493120b6d5065598cfa835ab3f25965cb |
python/cpython | python__cpython-107007 | # test_capi crash and produce core dumps on FreeBSD 13
Examples:
```
[vstinner@freebsd ~/python/main]$ ./python -m test test_capi -m test_no_FatalError_infinite_loop -v
...
Warning -- files was modified by test_capi
Warning -- Before: []
Warning -- After: ['python.core']
```
and
```
[vstinner@freebsd ~/python/main]$ ./python -m test -v test_capi -m test_get_set_optimizer
...
test_get_set_optimizer (test.test_capi.test_misc.TestOptimizerAPI.test_get_set_optimizer) ... Fatal Python error: Segmentation fault
Current thread 0x0000000827265000 (most recent call first):
File "/usr/home/vstinner/python/main/Lib/unittest/case.py", line 589 in _callTestMethod
File "/usr/home/vstinner/python/main/Lib/unittest/case.py", line 634 in run
File "/usr/home/vstinner/python/main/Lib/unittest/case.py", line 690 in __call__
File "/usr/home/vstinner/python/main/Lib/unittest/suite.py", line 122 in run
```
* FreeBSD 13.2-RELEASE-p1
* clang 14.0.5
* Python configured with: ``./configure --cache-file=../config.cache --with-pydebug CFLAGS="-O0"``
<!-- gh-linked-prs -->
### Linked PRs
* gh-107007
* gh-107009
<!-- /gh-linked-prs -->
| 4a1026d7647c084b0dc80dd49163d16ba12a2e55 | 6dbffaed17d59079d6a2788d686009f762a3278f |
python/cpython | python__cpython-106716 | # Streamline family syntax in Tools/cases_generator
The syntax to designate a family currently looks like this:
```
family(store_subscr, INLINE_CACHE_ENTRIES_STORE_SUBSCR) = {
STORE_SUBSCR,
STORE_SUBSCR_DICT,
STORE_SUBSCR_LIST_INT,
};
```
Here the `store_subscr` "family name" is redundant (and in fact we even had [one case](https://github.com/python/cpython/pull/104268) where it was incorrect).
I propose to change it to be more similar to the `pseudo` syntax, so it will become
```
family(STORE_SUBSCR, INLINE_CACHE_ENTRIES_STORE_SUBSCR) = {
STORE_SUBSCR_DICT,
STORE_SUBSCR_LIST_INT,
};
```
This should be a straightforward change to the parser and code generator in Tools/cases_generator.
<!-- gh-linked-prs -->
### Linked PRs
* gh-106716
<!-- /gh-linked-prs -->
| 1e36ca63f9f5e0399efe13a80499cef290314c2a | 00e52acebd2beb2663202bfc4be0ce79ba77361e |
python/cpython | python__cpython-106702 | # Move Tier 2 interpreter (`_PyUopExecute`) to a new file
It currently lives in ceval.c, but I'm noticing serious increases in compile time, possibly because the file is so large. Splitting it up should make this more bearable (esp. when using `make -j`).
It's not completely trivial, since there are a number of static functions (and some macros?) that are shared between the Tier 1 and Tier 2 interpreters. But those should be turned into private functions (`_Py` prefix, no `PyAPI_FUNC` macro). That will help the copy-and-patch tooling as well. (CC @brandtbucher)
<!-- gh-linked-prs -->
### Linked PRs
* gh-106702
* gh-106924
<!-- /gh-linked-prs -->
| e6e0ea0113748db1e9fe675be6db9041cd5cce1f | 2f3ee02c22c4b42bf6075a75104c3cfbb4eb4c86 |
python/cpython | python__cpython-8150 | # Add a .coveragerc file to the CPython repository
Check branches both ways and ignore code that should be excluded from coverage. This includes some special directives for IDLE.
This file is used by the coverage package. It can be used where it is, in the repository directory, or copied elsewhere, depending on how one runs coverage.
<!-- gh-linked-prs -->
### Linked PRs
* gh-8150
<!-- /gh-linked-prs -->
| 6ff8f82f92a8af363b2bdd8bbaba5845eef430fc | 39fc7ef4fe211e8f7d3b5a6e392e475ecdfbce72 |
python/cpython | python__cpython-106700 | # New warning in `ssl.c`: conversion from 'uint64_t' to 'long', possible loss of data
See https://github.com/python/cpython/pull/106678/files#diff-89879be484d86da4e77c90d5408b2e10190ee27cc86337cd0f8efc3520d60621
<img width="1064" alt="Screenshot 2023-07-12 at 21 12 39" src="https://github.com/python/cpython/assets/4660275/108ee61c-1352-465b-98a2-c429d9cc5834">
<!-- gh-linked-prs -->
### Linked PRs
* gh-106700
* gh-106827
* gh-116665
<!-- /gh-linked-prs -->
| ad95c7253a70e559e7d3f25d53f4772f28bb8b44 | 036bb7365607ab7e5cf901f1ac4256f9ae1be82c |
python/cpython | python__cpython-107650 | # asyncio: potential leak of TLS connections
# Bug report
## Synopsis
Forgetting to close TLS connections manually with `asyncio.open_connection()` leaks the TCP connection when the writer/reader go out of scope
Note: the reference is properly released when the remote side closes the connection
This seems to be counterintuitive relative to other Python APIs where the connection is closed when the handle goes out of scope
## Details
- open a TLS connection with `asyncio.open_connection(..., ssl=True)`
- do some read/writes
- exit the function, so the handles go out of scope (and are possibly gc-collected); this may happen due to an exception, for example
- do **not** call `writer.close()`
- the connection is now "unreachable" from a user point of view
- however the TCP connection is kept alive
When trying to debug this issue I found out that a `_SSLProtocolTransport` instance is kept in memory, probably linked to the eventloop
## Example script
```python
import os
import asyncio
import gc
import signal
HOST = "google.fr" # will keep the connection alive for a few minutes at least
async def query():
reader, writer = await asyncio.open_connection(HOST, 443, ssl=True)
# No connection: close, remote side will keep the connection open
writer.write(f"GET / HTTP/1.1\r\nHost: {HOST}\r\n\r\n".encode())
await writer.drain()
# only read the first header line
try:
return (await reader.readline()).decode()
finally:
# closing the writer will properly finalize the connection
# writer.close()
pass
# reader and writer are now unreachable
async def amain():
await query()
# The _SSLProtocolTransport object is kept in memory and the
# connection won't be released until the remote side closes the connection
for _ in range(200):
# Just be sure everything is freed, just in case
gc.collect()
await asyncio.sleep(1)
def main():
print(f"PID {os.getpid()}")
task = asyncio.ensure_future(amain())
loop = asyncio.get_event_loop()
loop.add_signal_handler(signal.SIGTERM, task.cancel)
loop.add_signal_handler(signal.SIGINT, task.cancel)
loop.run_until_complete(task)
if __name__ == "__main__":
main()
```
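A defensive pattern on the application side (not a fix in asyncio itself) is to wrap the connection so the writer is always closed, even when the body raises. The sketch below exercises the pattern with a dummy writer instead of a real network connection:

```python
import asyncio
import contextlib

@contextlib.asynccontextmanager
async def closing_connection(reader, writer):
    # Always close the writer, even if the body raises,
    # so the transport cannot leak.
    try:
        yield reader, writer
    finally:
        writer.close()
        await writer.wait_closed()

class DummyWriter:
    # Stand-in for asyncio.StreamWriter so the pattern can be exercised
    # without opening a socket.
    def __init__(self):
        self.closed = False
    def close(self):
        self.closed = True
    async def wait_closed(self):
        pass

async def demo():
    w = DummyWriter()
    with contextlib.suppress(RuntimeError):
        async with closing_connection(None, w):
            raise RuntimeError('simulated error')
    return w.closed

closed = asyncio.run(demo())
```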
# Your environment
- CPython versions tested on:
- 3.11.4
- 3.10.11
- Operating system and architecture: Debian Linux 5.19 x86_64
<!-- gh-linked-prs -->
### Linked PRs
* gh-107650
* gh-107656
* gh-107657
* gh-107836
<!-- /gh-linked-prs -->
| 41178e41995992bbe417f94bce158de93f9e3188 | 5e2746d6e2fb0da29225ead7135f078c5f087b57 |
python/cpython | python__cpython-106674 | # C API: Report ignored exception in C API functions which ignore all errors
Some C API functions, like `PyDict_GetItem()` and `PyObject_HasAttr()` (see capi-workgroup/problems#51 for the full list), clear all errors which occur during their execution. The user has no way to distinguish the normal negative result from an error.
There is no way to fix such C API. But at least we can make silently ignored errors more noticeable. The proposed PR calls `_PyErr_WriteUnraisableMsg()` when an unrelated error is ignored, the same way errors are reported during object destruction.
<!-- gh-linked-prs -->
### Linked PRs
* gh-106674
<!-- /gh-linked-prs -->
| f55cb44359821e71c29903f2152b4658509dac0d | a077b2fbb88f5192bb47e514334f760bf08d0295 |
python/cpython | python__cpython-106676 | # Support to jump between chained exception in Pdb
# Feature or enhancement
I believe it would be useful to allow to move between chained exception in Pdb.
That is to say if you have
```
try:
function_that_raises()
except Exception as e:
raise ... from e # Pdb says this is the bottom frame.
```
Having something like `down` go down the chained exception? This was requested on [IPython's ipdb](https://github.com/ipython/ipython/issues/13982), but I believe this could be brought to Pdb as well.
## Pitch
When moving up and down the stack in Pdb within a chained exception, hitting the bottom of the stack should make it possible to look at the `__cause__` or `__context__` and jump to the top of it; and vice versa when going up, if we store the various tracebacks we went through.
In post-mortem debugging this should allow to better understand the reason for an exception.
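The traversal a debugger would need is a walk along the chain links. A minimal sketch (assuming the usual precedence of `__cause__` over `__context__`):

```python
def exception_chain(exc):
    # Follow __cause__ (explicit "raise ... from") or, failing that,
    # __context__ (implicit chaining) until the chain ends.
    chain = []
    while exc is not None:
        chain.append(exc)
        exc = exc.__cause__ or exc.__context__
    return chain

try:
    try:
        raise ValueError('inner')
    except ValueError as e:
        raise RuntimeError('outer') from e
except RuntimeError as outer:
    chain = exception_chain(outer)
```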
# Previous discussion
See,
https://discuss.python.org/t/interested-in-support-to-jump-between-chained-exception-in-pdb/29470/3
<!-- gh-linked-prs -->
### Linked PRs
* gh-106676
* gh-108865
* gh-110493
* gh-132277
* gh-132279
<!-- /gh-linked-prs -->
| f75cefd402c4c830228d85ca3442377ebaf09454 | 242bef459bfbd0ec5e45e6d47df2709093cfefa7 |
python/cpython | python__cpython-106733 | # email.utils.getaddresses() rejects email addresses with "," in name
# Bug report
`email.utils.getaddresses()` returns `('', '')` for email addresses with `,` in a real name, e.g.
```pycon
>>> from email.utils import getaddresses
>>> getaddresses(('"Sürname, Firstname" <to@example.com>',))
[('', '')]
```
Regression in 18dfbd035775c15533d13a98e56b1d2bf5c65f00.
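The core difficulty is that splitting an address list on commas must ignore commas inside a quoted real name. A minimal, self-contained illustration of that rule (this is not the email-package parser):

```python
def split_addresses(field):
    # Split on commas, but treat commas inside double quotes as part of
    # the real name rather than as address separators.
    parts, buf, in_quotes = [], [], False
    for ch in field:
        if ch == '"':
            in_quotes = not in_quotes
        if ch == ',' and not in_quotes:
            parts.append(''.join(buf).strip())
            buf = []
        else:
            buf.append(ch)
    parts.append(''.join(buf).strip())
    return parts
```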
# Your environment
- CPython versions tested on: 3.12.0b4
- Operating system and architecture: x86_64 GNU/Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-106733
* gh-106941
<!-- /gh-linked-prs -->
| a31dea1feb61793e48fa9aa5014f358352205c1d | d81b4f8ff84311fa737e62f2883442ec06d7d5d8 |
python/cpython | python__cpython-106665 | # Adding selectors has two KeyError exceptions in the success path
Similar to https://github.com/python/cpython/issues/106527, adding a new asyncio reader has to hit `_SelectorMapping.__getitem__` which is expected to raise and catch KeyError twice since the reader will not yet be in the map.
When connections are constantly being added and removed because devices are being polled over http/websocket, the overhead of adding/removing readers adds up.
For a webserver with connections constantly being added/removed, the cost of adding and removing impacts how many clients can be handled
Another place I see this come up is with dbus connections which need to get torn down and created at a fast clip when dealing with bluetooth devices.
See https://github.com/python/cpython/issues/106527#issuecomment-1627468269 and https://github.com/python/cpython/issues/106527#issuecomment-1625923919 for where this was split from
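A sketch of the optimization direction: a single `dict.get` replaces the raise-and-catch `KeyError` round trip through `__getitem__` for the common case where the fd is not yet registered (illustrative code, not the selectors module itself):

```python
def register(fd_to_key, fd, data):
    # One dict.get avoids raising and catching KeyError when the fd
    # is not yet in the map, which is the common path when adding.
    key = fd_to_key.get(fd)
    if key is None:
        fd_to_key[fd] = data
        return data
    raise KeyError(f'{fd!r} is already registered')

m = {}
```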
<!-- gh-linked-prs -->
### Linked PRs
* gh-106665
<!-- /gh-linked-prs -->
| 8d2f3c36caf9ecdee1176314b18388aef6e7f2c2 | e6e0ea0113748db1e9fe675be6db9041cd5cce1f |
python/cpython | python__cpython-108010 | # Test `test_embed` fails on Windows 11, Python 3.13.0a0
# Bug report
The test `test_embed` fails on my Windows machine, using master branch (commit d0972c77aa1cd5fe27618e82c10141a2bf157476).
```
python.bat -m test test_embed
```
Full output -
```
C:\Code\cpython>python.bat -m test test_embed
Running Debug|x64 interpreter...
0:00:00 Run tests sequentially
0:00:00 [1/1] test_embed
test test_embed failed -- Traceback (most recent call last):
File "C:\Code\cpython\Lib\test\test_embed.py", line 259, in test_forced_io_encoding
self.assertEqual(out.strip(), expected_output)
AssertionError: '--- [403 chars]din: utf-8:surrogateescape\nstdout: iso8859-1:[221 chars]lace' != '--- [403 chars]din: iso8859-1:surrogateescape\nstdout: iso885[229 chars]lace'
--- Use defaults ---
Expected encoding: default
Expected errors: default
stdin: utf-8:surrogateescape
stdout: utf-8:surrogateescape
stderr: utf-8:backslashreplace
--- Set errors only ---
Expected encoding: default
Expected errors: ignore
stdin: utf-8:ignore
stdout: utf-8:ignore
stderr: utf-8:backslashreplace
--- Set encoding only ---
Expected encoding: iso8859-1
Expected errors: default
- stdin: utf-8:surrogateescape
? ^^^ ^
+ stdin: iso8859-1:surrogateescape
? ^^^^^^^ ^
stdout: iso8859-1:surrogateescape
stderr: iso8859-1:backslashreplace
--- Set encoding and errors ---
Expected encoding: iso8859-1
Expected errors: replace
- stdin: utf-8:replace
+ stdin: iso8859-1:replace
stdout: iso8859-1:replace
stderr: iso8859-1:backslashreplace
test_embed failed (1 failure) in 30.7 sec
== Tests result: FAILURE ==
1 test failed:
test_embed
Total duration: 30.7 sec
Tests result: FAILURE
```
# Your environment
- CPython versions tested on:
```
Running Debug|x64 interpreter...
Python 3.13.0a0
```
- Commit: d0972c77aa1cd5fe27618e82c10141a2bf157476
- Operating system and architecture:
```
OS Name: Microsoft Windows 11 Pro
OS Version: 10.0.22621 N/A Build 22621
```
<!-- gh-linked-prs -->
### Linked PRs
* gh-108010
<!-- /gh-linked-prs -->
| e35c722d22cae605b485e75a69238dc44aab4c96 | 57a20b0960f5c087a476b34c72f608580746cab5 |
python/cpython | python__cpython-106657 | # Let's not generate #line directives in generated_cases.c.h
The cases generator has the ability (`-l`, `--emit-line-directives`) to emit C preprocessor [`#line` directives](https://learn.microsoft.com/en-us/cpp/preprocessor/hash-line-directive-c-cpp?view=msvc-170) so that the generated code is attributed to the proper line in the input (Python/bytecodes.c). Makefile.pre.in enables this option by default, because it helps debugging bugs in instruction implementations (i.e., bytecodes.c).
However, this option also has downsides -- because the code generator also makes up a lot of code that isn't sourced from bytecodes.c, it emits several `#line` directives for each instruction. The problem is that almost every time you edit bytecodes.c to insert or remove a line in an instruction, the `#line` directive for *all* following instructions must be adjusted, producing a [humongous diff](https://github.com/python/cpython/pull/106638/files?diff=unified&w=0#diff-4ef46fa654f95502e49a24f7dc8ee31a4cac9b3433fe9cd2b2d4dd78cfbad448).
It is remarkable how often for various reasons I find myself looking at such a diff, and it's definitely a case of not seeing the forest for the trees.
I propose that we remove this flag from Makefile.pre.in. Diffs will be much smaller, and more useful. Devs who are faced with errors in the generated file that they would like to map back to the original file can turn on the option manually. The easiest way is probably to run the tool like this:
```
python3 Tools/cases_generator/generate_cases.py -l
```
The defaults all match the explicit output filenames mentioned in Makefile.pre.in.
(If someone really would like to type some variant of `make regen-cases` to generate the `#line` directives, maybe we can define a new variable in Makefile.pre.in that you can pass on the command line, e.g. `make regen-cases CASESFLAGS=-l`.)
CC. @markshannon, @brandtbucher.
<!-- gh-linked-prs -->
### Linked PRs
* gh-106657
<!-- /gh-linked-prs -->
| 7f55f58b6c97306da350f5b441d26f859e9d8f16 | b03755a2347325a89a48b08fc158419000513bcb |
python/cpython | python__cpython-106671 | # Minor documentation issues in asyncio BaseEventLoop
Firstly:
https://github.com/python/cpython/blob/a2d54d4e8ab12f967a220be88bde8ac8227c5ab3/Lib/asyncio/events.py#L620
I think the `AbstractEventLoopPolicy`'s `get_event_loop` function should not say:
> Returns an event loop object implementing the BaseEventLoop interface
But rather:
> Returns an event loop object implementing the **AbstractEventLoop** interface
`AbstractEventLoop` is more of a representation of the interface than `BaseEventLoop`, and `AbstractEventLoop` is the class that the docs instruct people to extend if they want to have a custom event loop, and specifically say to not extend `BaseEventLoop`. It's also easier to look at the interface via `AbstractEventLoop`, without all the implementation details getting in the way.
Secondly:
https://github.com/python/cpython/blob/64c0890b697783db9b3f67e3bb4dcee1165a0b9b/Lib/asyncio/base_events.py#L730
Grammar error:
> If two callbacks are scheduled for exactly the same time, it undefined which will be called first.
should read:
> If two callbacks are scheduled for exactly the same time, it **is** undefined which will be called first.
<!-- gh-linked-prs -->
### Linked PRs
* gh-106671
* gh-106711
* gh-106712
<!-- /gh-linked-prs -->
| 4b4a5b70aa8d47b1e2a0582b741c31b786da762a | 7e6ce48872fa3de98c986057764f35e1b2f4b936 |
python/cpython | python__cpython-106629 | # Possible performance improvement in email parsing
PyPy received the following performance bug today: https://foss.heptapod.net/pypy/pypy/-/issues/3961
Somebody who was trying to process a lot of emails from an mbox file was complaining about terrible performance on PyPy. The problem turned out to be the fact that `email.feedparser.FeedParser._parsegen` is compiling a new regular expression for every multipart message in the mbox file. On PyPy this is particularly bad, because those regular expressions are jitted and that costs even more time. However, even on CPython compiling these regular expressions takes a noticeable portion of the benchmark.
I [fixed this problem in PyPy](https://foss.heptapod.net/pypy/pypy/-/commit/ac270e3701e29024d4098cd0e674cd1fe30a751f) by simply using `str.startswith` with the multipart separator, followed by a generic regular expression that can be used for arbitrary boundaries. In PyPy this helps massively, but in CPython it's still a 20% performance improvement. Will open a PR for it.
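The idea of the fix can be sketched as follows: match the variable boundary with `str.startswith` and keep a single precompiled generic pattern for the fixed tail, so nothing is compiled per message (this mirrors the approach, not the exact stdlib code):

```python
import re

# One generic pattern reused for every message: optional closing "--",
# optional trailing whitespace, optional line ending.
_sep_rest = re.compile(r'(--)?[ \t]*(\r\n|\r|\n)?$')

def is_boundary_line(line, boundary):
    sep = '--' + boundary
    # The boundary itself is checked with startswith, so no per-boundary
    # regular expression compilation is needed.
    return line.startswith(sep) and _sep_rest.match(line, len(sep)) is not None
```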
<!-- gh-linked-prs -->
### Linked PRs
* gh-106629
<!-- /gh-linked-prs -->
| 7e6ce48872fa3de98c986057764f35e1b2f4b936 | af51bd7cda9c0cba149b882c1e501765595e5fc3 |
python/cpython | python__cpython-106623 | # Error when executing the sample in chapter 4.6
# Documentation
This is a pull request for the [sample code in section 4.6](https://docs.python.org/ja/3/tutorial/controlflow.html#match-statements).
I ran the following based on the sample code and got an error.
The sample code in the tutorial may be an excerpt of some code.
However, given that the tutorial tries `Point(1, var)` immediately afterwards, I believe the user would easily follow the example if the necessary `dataclass` import statement were included.
Executed code :
```
class Point:
x: int
y: int
def where_is(point):
match point:
case Point(x=0, y=0):
print("Origin")
case Point(x=0, y=y):
print(f"Y={y}")
case Point(x=x, y=0):
print(f"X={x}")
case Point():
print("Somewhere else")
case _:
print("Not a point")
## Added code
p = Point(1, 1)
```
Errors that occurred :
```
p = Point(1, 1)
^^^^^^^^^^^
TypeError: Point() takes no arguments
```
<!-- gh-linked-prs -->
### Linked PRs
* gh-106623
* gh-106636
* gh-106637
<!-- /gh-linked-prs -->
| d0b7e18262e69dd4b8252e804e4f98fc9533bcd6 | 945d3cbf2e8e756ed16c3ec51106e6157abb2698 |
python/cpython | python__cpython-106794 | # Change uop instruction format to (opcode, oparg, operand)
See https://github.com/python/cpython/issues/106581#issuecomment-1629381464.
<!-- gh-linked-prs -->
### Linked PRs
* gh-106794
<!-- /gh-linked-prs -->
| 8e9a1a032233f06ce0f1acdf5f983d614c8745a5 | 7e96370a946a2ca0f2f25af4ce5b3b59f020721b |
python/cpython | python__cpython-106666 | # [Enum] add __deepcopy__ method to Enum
<!-- gh-linked-prs -->
### Linked PRs
* gh-106666
* gh-106694
* gh-106695
<!-- /gh-linked-prs -->
| 357e9e9da3929cb9d55ea31896e66f488e44e8f2 | e4b88c1e4ac129b36f99a534387d64f7b8cda8ef |
python/cpython | python__cpython-106598 | # Add a collection of offsets to facilitate the work of out-of-process debuggers
Some of the relevant fields in the interpreter state and the frame state in 3.12 are very challenging to fetch from out-of-process tools because they are in offsets that depend on compilation or platform variables that are different in different platforms. Not only that but they require the tools to copy a huge amount of intermediate structures making the whole thing very verbose.
To allow out-of-process tools to get these offsets without having to copy the headers (which also doesn't really work as some of these fields may depend on compilation flags and other parameters), add a debugging struct to the runtime state (which is the entry point many of these tools need anyway) that contains a list of the offsets that are relevant for out-of-process tools.
This list will not be backward compatible between minor versions but is expected to reflect the relevant fields for all the life of a minor version.
We can add more fields to the struct in the future at our leisure.
See https://github.com/python/cpython/issues/106140 and https://github.com/python/cpython/issues/100987 and https://github.com/python/cpython/pull/105271 for more information.
<!-- gh-linked-prs -->
### Linked PRs
* gh-106598
* gh-121311
* gh-121312
* gh-121369
* gh-121370
<!-- /gh-linked-prs -->
| b444bfb0a325dea8c29f7b1828233b00fbf4a1cb | 579aa89e68a6607398317a50586af781981e89fb |
python/cpython | python__cpython-106588 | # Return the correct exit code if all tests have been skipped in `unittest`
# Bug report
In Python 3.12, a new exit code (`5`, instead of `0`) was [added](https://docs.python.org/3.12/library/unittest.html#unittest.main) if all tests were skipped in `unittest`. But in some cases it still returns `0`. See code examples.
Code example 1:
```python
import unittest
class TestSimple(unittest.TestCase):
@classmethod
def setUpClass(cls):
raise unittest.SkipTest("Skip whole Case")
def test_true(self):
self.assertTrue(True)
def test_false(self):
self.assertTrue(False, msg="Is not True")
```
Output:
```
Ran 0 tests in 0.000s
NO TESTS RAN (skipped=1)
```
Ok, we get exit code 5 here.
Code example 2:
```python
import unittest
class TestCase(unittest.TestCase):
@unittest.skip("something")
def test(self):
self.assertTrue(False)
```
Output:
```
Ran 1 test in 0.000s
OK (skipped=1)
```
Here we get exit code `0`: one test ran and was skipped. But I think this is incorrect, because all tests were skipped.
# Your environment
<!-- Include as many relevant details as possible about the environment you experienced the bug in -->
- CPython versions tested on: 3.12
- Operating system and architecture: I think it does not matter.
<!--
You can freely edit this text. Remove any lines you believe are unnecessary.
-->
<!-- gh-linked-prs -->
### Linked PRs
* gh-106588
* gh-109725
* gh-114470
<!-- /gh-linked-prs -->
| ecabff98c41453f15ecd26ac255d531b571b9bc1 | ca715e56a13feabc15c368898df6511613d18987 |
python/cpython | python__cpython-106707 | # Call design for Tier 2 (uops) interpreter
(Maybe this is tentative enough that it still belongs in the faster-cpython/ideas tracker, but I hope we're close enough that we can hash it out here. CC @markshannon, @brandtbucher)
(This is a WIP until I have looked a bit deeper into this.)
First order of business is splitting some of the CALL specializations into multiple ops satisfying the uop requirement: either use oparg and no cache entries, or don't use oparg and use at most one cache entry. For example, one of the more important ones, CALL_PY_EXACT_ARGS, uses both `oparg` (the number of arguments) and a cache entry (`func_version`). Splitting it into a guard and an action op is problematic: even discounting the possibility of encountering a bound method (i.e., assuming `method` is `NULL`), it contains the following `DEOPT` calls:
```
// PyObject *callable = stack_pointer[-1-oparg];
DEOPT_IF(tstate->interp->eval_frame, CALL);
int argcount = oparg;
DEOPT_IF(!PyFunction_Check(callable), CALL);
PyFunctionObject *func = (PyFunctionObject *)callable;
DEOPT_IF(func->func_version != func_version, CALL);
PyCodeObject *code = (PyCodeObject *)func->func_code;
DEOPT_IF(code->co_argcount != argcount, CALL);
DEOPT_IF(!_PyThreadState_HasStackSpace(tstate, code->co_framesize), CALL);
```
If we wanted to combine all this in a single guard op, that guard would require access to both `oparg` (to dig out `callable`) and `func_version`. The fundamental problem is that the callable, which needs to be prodded and poked for the guard to pass, is buried under the arguments, and we need to use `oparg` to know how deep it is buried.
What if we somehow reversed this so that the callable is _on top of the stack_, after the arguments? We could arrange for this by adding a `COPY n+1` opcode just before the `CALL` opcode (or its specializations). In fact, this could even be a blessing in disguise, since now we would no longer need to push a `NULL` before the callable to reserve space for `self` -- instead, if the callable is found to be a bound method, its `self` can overwrite the original callable (below the arguments) and the function extracted from the bound method can overwrite the copy of the callable _above_ the arguments. This has the advantage of no longer needing to have a "push `NULL`" bit in several other opcodes (the `LOAD_GLOBAL` and `LOAD_ATTR` families -- we'll have to review the logic in `LOAD_ATTR` a bit more to make sure this can work).
(Note that the key reason why the callable is buried below the arguments is a requirement about evaluation order in expressions -- the language reference requires that in the expression `F(X)` where `F` and `X` themselves are possibly complex expressions, `F` is evaluated before `X`.)
Comparing before and after, currently we have the following arrangement on the stack when `CALL n` or any of its specializations is reached:
```
NULL
callable
arg[0]
arg[1]
...
arg[n-1]
```
This is obtained by e.g.
```
PUSH_NULL
LOAD_FAST callable
<load n args>
CALL n
```
or
```
LOAD_GLOBAL (NULL + callable)
<load n args>
CALL n
```
or
```
LOAD_ATTR (NULL|self + callable)
<load n args>
CALL n
```
Under my proposal the arrangement would change to
```
callable
arg[0]
arg[1]
...
arg[n-1]
callable
```
and it would be obtained by
```
LOAD_FAST callable / LOAD_GLOBAL callable / LOAD_ATTR callable
<load n args>
COPY n+1
CALL n
```
It would (perhaps) even be permissible for the guard to overwrite both copies of the callable if a method is detected, since it would change from
```
self.func
<n args>
self.func
```
to
```
self
<n args>
func
```
where we would be assured that `func` has type `PyFunctionObject *`. (However, I think we ought to have separate specializations for the two cases, since the transformation would also require bumping `oparg`.)
The runtime cost would be an extra `COPY` instruction before each `CALL`; however I think this might actually be simpler than the dynamic check for bound methods, at least when using copy-and-patch.
Another cost would be requiring extra specializations for some cases that currently dynamically decide between function and method; but again I think that with copy-and-patch that is probably worth it, given that we expect that dynamic check to always go the same way for a specific location.
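The proposed stack manipulation can be modeled with a toy list-based stack (purely illustrative; the real interpreter works on the frame's value stack):

```python
def copy_and_call_layout(stack, n):
    # COPY n+1: duplicate the callable, which is buried under the
    # n arguments, onto the top of the stack.
    stack.append(stack[-1 - n])
    return stack

def specialize_bound_method(stack, n, self_obj, func):
    # If the guard detects a bound method, overwrite both copies:
    # self below the arguments, the unwrapped function on top.
    stack[-2 - n] = self_obj
    stack[-1] = func

stack = ['method', 'arg0', 'arg1']
copy_and_call_layout(stack, 2)            # -> method, arg0, arg1, method
specialize_bound_method(stack, 2, 'self', 'func')
```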
<!-- gh-linked-prs -->
### Linked PRs
* gh-106707
* gh-107760
* gh-107793
* gh-108067
* gh-108380
* gh-108462
* gh-108493
* gh-108895
* gh-109338
<!-- /gh-linked-prs -->
| 2b94a05a0e45e4aae030a28b716a038ef529f8ef | b2b261ab2a2d4ff000c6248dbc52247c78cfa5ab |
python/cpython | python__cpython-106567 | # Optimize (?!) in regular expressions
Some regular expression engines support `(*FAIL)` as a pattern which fails to match anything. `(?!)` is an idiomatic way to write this in engines which do not support `(*FAIL)`.
It works pretty well, but it can be optimized. Instead of compiling it as an ASSERT_NOT opcode
```
>>> re.compile(r'12(?!)|3', re.DEBUG)
BRANCH
LITERAL 49
LITERAL 50
ASSERT_NOT 1
OR
LITERAL 51
0. INFO 9 0b100 1 2 (to 10)
in
5. LITERAL 0x31 ('1')
7. LITERAL 0x33 ('3')
9. FAILURE
10: BRANCH 11 (to 22)
12. LITERAL 0x31 ('1')
14. LITERAL 0x32 ('2')
16. ASSERT_NOT 3 0 (to 20)
19. SUCCESS
20: JUMP 7 (to 28)
22: branch 5 (to 27)
23. LITERAL 0x33 ('3')
25. JUMP 2 (to 28)
27: FAILURE
28: SUCCESS
re.compile('12(?!)|3', re.DEBUG)
```
it can be compiled as a FAILURE opcode.
```
>>> re.compile(r'12(?!)|3', re.DEBUG)
BRANCH
LITERAL 49
LITERAL 50
FAILURE
OR
LITERAL 51
0. INFO 9 0b100 1 2 (to 10)
in
5. LITERAL 0x31 ('1')
7. LITERAL 0x33 ('3')
9. FAILURE
10: BRANCH 8 (to 19)
12. LITERAL 0x31 ('1')
14. LITERAL 0x32 ('2')
16. FAILURE
17. JUMP 7 (to 25)
19: branch 5 (to 24)
20. LITERAL 0x33 ('3')
22. JUMP 2 (to 25)
24: FAILURE
25: SUCCESS
re.compile('12(?!)|3', re.DEBUG)
```
Unfortunately I do not know good examples of using `(*FAIL)` in regular expressions (without using `(*SKIP)`) to include them in the documentation. Perhaps other patterns of using `(*FAIL)` could be optimized in the future, but I do not know what to optimize.
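For context, a small demonstration that `(?!)` always fails, so a branch containing it can never match; the proposed compilation change is an optimization only and would not alter this behavior:

```python
import re

# (?!) is an empty negative lookahead: it always fails, so the "12(?!)"
# branch can never match and only the "3" branch can succeed.
pat = re.compile(r"12(?!)|3")
assert pat.match("3")
assert pat.match("12") is None
assert pat.match("123") is None  # "12" dies at (?!); "3" must match at pos 0
```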
<!-- gh-linked-prs -->
### Linked PRs
* gh-106567
<!-- /gh-linked-prs -->
| ed64204716035db58c7e11b78182596aa2d97176 | 50e3cc9748eb2103eb7ed6cc5a74d177df3cfb13 |
python/cpython | python__cpython-112611 | # Python 3.12 introduces redundant decls warning
# Bug report
Python 3.12 introduces a `-Wredundant-decls` warning.
```c
// test.c
#include <Python.h>
int main(void) {}
```
```sh
$ gcc test.c -o test $(pkg-config --cflags python-3.12) -Wredundant-decls
In file included from /usr/include/python3.12/Python.h:52,
from test.c:1:
/usr/include/python3.12/longobject.h:10:26: warning: redundant redeclaration of ‘PyLong_Type’ [-Wredundant-decls]
10 | PyAPI_DATA(PyTypeObject) PyLong_Type;
| ^~~~~~~~~~~
In file included from /usr/include/python3.12/Python.h:44:
/usr/include/python3.12/object.h:210:26: note: previous declaration of ‘PyLong_Type’ with type ‘PyTypeObject’ {aka ‘struct _typeobject’}
210 | PyAPI_DATA(PyTypeObject) PyLong_Type;
| ^~~~~~~~~~~
In file included from /usr/include/python3.12/Python.h:54:
/usr/include/python3.12/boolobject.h:10:26: warning: redundant redeclaration of ‘PyBool_Type’ [-Wredundant-decls]
10 | PyAPI_DATA(PyTypeObject) PyBool_Type;
| ^~~~~~~~~~~
/usr/include/python3.12/object.h:211:26: note: previous declaration of ‘PyBool_Type’ with type ‘PyTypeObject’ {aka ‘struct _typeobject’}
211 | PyAPI_DATA(PyTypeObject) PyBool_Type;
|
```
- This was introduced in 7559f5fda94ab568a1a910b17683ed81dc3426fb
- This currently breaks the build/tests of util-linux/pylibmount (with -Werror)
# Your environment
<!-- Include as many relevant details as possible about the environment you experienced the bug in -->
- CPython versions tested on: 3.12.0b3 (should be the same on master)
- Operating system and architecture: ArchLinux, x86_64
<!-- gh-linked-prs -->
### Linked PRs
* gh-112611
* gh-112612
* gh-112650
* gh-112651
<!-- /gh-linked-prs -->
| 1f2a676785d48ed9ac01e60cc56a82e44b725474 | 29e6c7b68acac628b084a82670708008be262379 |
python/cpython | python__cpython-106559 | # multiprocessing Manager exceptions create cyclic references
<!--
If you're new to Python and you're not sure whether what you're experiencing is a bug, the CPython issue tracker is not
the right place to seek help. Consider the following options instead:
- reading the Python tutorial: https://docs.python.org/3/tutorial/
- posting in the "Users" category on discuss.python.org: https://discuss.python.org/c/users/7
- emailing the Python-list mailing list: https://mail.python.org/mailman/listinfo/python-list
- searching our issue tracker (https://github.com/python/cpython/issues) to see if
your problem has already been reported
-->
# Bug report
`multiprocessing.managers` uses `convert_to_error(kind, result)` to make a raisable exception out of `result` when a call has responded with some sort of error. If `kind == "#ERROR"`, then `result` is already an exception and the caller raises it directly—but because `result` was created in a frame at or under the caller, this creates a reference cycle: `result` → `result.__traceback__` → (some frame) → that frame's `f_locals['result']` → `result`.
In particular, every time I've used a manager queue I've expected frequent occurrences of `queue.Empty`, and the buildup of reference cycles sporadically wakes up the garbage collector and wrecks my hopes of consistent latency.
I'm including an example script below. PR coming in a moment, so please let me know if I should expand the example into a test and bundle that in. (Please also feel free to tell me if this is a misuse of queue.Empty and I should buzz off.)
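The cycle shape can also be reproduced without a manager at all. Here is a minimal sketch; the function name is invented, standing in for `convert_to_error`:

```python
def convert_and_raise():
    # Hypothetical stand-in for convert_to_error(): the exception is a
    # local here, so the raised traceback captures this frame.
    result = ValueError("boom")
    raise result

try:
    convert_and_raise()
except ValueError as e:
    err = e

inner_frame = err.__traceback__.tb_next.tb_frame
# err -> __traceback__ -> frame -> f_locals['result'] -> err : a cycle
assert inner_frame.f_locals["result"] is err
```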
# Your environment
<!-- Include as many relevant details as possible about the environment you experienced the bug in -->
- CPython versions tested on: 3.11.3
- Operating system and architecture: `uname -a` says `Linux delia 6.3.2-arch1-1 #1 SMP PREEMPT_DYNAMIC Thu, 11 May 2023 16:40:42 +0000 x86_64 GNU/Linux`
<!--
You can freely edit this text. Remove any lines you believe are unnecessary.
-->
# Minimal example
## Output
```
net allocations: 0
got from queue: [0, 1, 2, 3, 4]
net allocations: 23
garbage produced: Counter({<class 'traceback'>: 3, <class 'frame'>: 3, <class 'list'>: 1, <class '_queue.Empty'>: 1})
net allocations: 0
```
## Script
```py
#!/usr/bin/env python
import collections
import gc
import multiprocessing
import queue
import time
def sender(q):
for i in range(5):
q.put_nowait(i)
def get_all_available(q):
result = []
try:
while True:
result.append(q.get_nowait())
except queue.Empty:
...
return result
def main():
q = multiprocessing.Manager().Queue()
p = multiprocessing.Process(target=sender, args=(q,))
p.start()
# take control of gc
gc.disable()
gc.collect()
gc.set_debug(gc.DEBUG_SAVEALL)
time.sleep(0.1) # just in case the new process took a while to create
print('net allocations: ', gc.get_count()[0])
# trigger a queue.Empty
print('got from queue: ', get_all_available(q))
# check for collectable garbage and print it
print('net allocations: ', gc.get_count()[0])
gc.collect()
print('garbage produced:', collections.Counter(type(x) for x in gc.garbage))
gc.set_debug(0)
gc.garbage.clear()
gc.collect()
print('net allocations: ', gc.get_count()[0])
# clean up
p.join()
if __name__ == '__main__':
main()
```
<!-- gh-linked-prs -->
### Linked PRs
* gh-106559
<!-- /gh-linked-prs -->
| 5f7d4ecf301ef12eb1d1d347add054f4fcd8fc5c | caa41a4f1db0112690cf610bab7d9c6dce9ff1ce |
python/cpython | python__cpython-106555 | # selector `_BaseSelectorImpl._key_from_fd` reimplements `.get()`
https://github.com/python/cpython/blob/da98ed0aa040791ef08b24befab697038c8c9fd5/Lib/selectors.py#L275
This internal method isn't documented at https://docs.python.org/3/library/selectors.html, which means it can likely be replaced with `.get()`, reducing runtime in the select loops
current:
<img width="574" alt="Screenshot 2023-07-08 at 3 32 38 PM" src="https://github.com/python/cpython/assets/663432/675646f7-3cb6-42b7-900b-854701e374c2">
removed:
<img width="575" alt="Screenshot 2023-07-08 at 3 32 34 PM" src="https://github.com/python/cpython/assets/663432/8242318c-1c35-4db0-bdef-9a29500c779c">
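A minimal sketch of the equivalence being proposed, using a plain dict to stand in for `_fd_to_key`; the names here are illustrative, not the actual selectors internals:

```python
# The try/except KeyError lookup and dict.get() are equivalent here:
# both return the key if present and None for a missing fd.
fd_to_key = {4: "selector-key-for-fd-4"}

def key_from_fd_current(fd):
    try:
        return fd_to_key[fd]
    except KeyError:
        return None

def key_from_fd_proposed(fd):
    return fd_to_key.get(fd)

assert key_from_fd_current(4) == key_from_fd_proposed(4) == "selector-key-for-fd-4"
assert key_from_fd_current(99) is None and key_from_fd_proposed(99) is None
```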
<!-- gh-linked-prs -->
### Linked PRs
* gh-106555
<!-- /gh-linked-prs -->
| aeef8591e41b68341af308e56a744396c66879cc | 6a70edf24ca217c5ed4a556d0df5748fc775c762 |
python/cpython | python__cpython-112613 | # GCC conversion warnings from internal header files
# Bug report
Compiling Cython-generated code with GCC `-Wconversion` produces a bunch of warnings within inline functions defined in internal CPython header files.
```
/usr/include/python3.12/internal/pycore_code.h: In function ‘write_varint’:
/usr/include/python3.12/internal/pycore_code.h:362:12: warning: conversion from ‘unsigned int’ to ‘uint8_t’ {aka ‘unsigned char’} may change value [-Wconversion]
362 | *ptr = val;
| ^~~
/usr/include/python3.12/internal/pycore_code.h: In function ‘write_signed_varint’:
/usr/include/python3.12/internal/pycore_code.h:375:30: warning: conversion to ‘unsigned int’ from ‘int’ may change the sign of the result [-Wsign-conversion]
375 | return write_varint(ptr, val);
| ^~~
/usr/include/python3.12/internal/pycore_code.h: In function ‘write_location_entry_start’:
/usr/include/python3.12/internal/pycore_code.h:382:12: warning: conversion from ‘int’ to ‘uint8_t’ {aka ‘unsigned char’} may change value [-Wconversion]
382 | *ptr = 128 | (code << 3) | (length - 1);
| ^~~
/usr/include/python3.12/internal/pycore_code.h: In function ‘adaptive_counter_bits’:
/usr/include/python3.12/internal/pycore_code.h:423:45: warning: conversion from ‘int’ to ‘uint16_t’ {aka ‘short unsigned int’} may change value [-Wconversion]
423 | return (value << ADAPTIVE_BACKOFF_BITS) |
| ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^
424 | (backoff & ((1<<ADAPTIVE_BACKOFF_BITS)-1));
| ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/usr/include/python3.12/internal/pycore_code.h: In function ‘adaptive_counter_backoff’:
/usr/include/python3.12/internal/pycore_code.h:446:26: warning: conversion to ‘unsigned int’ from ‘int’ may change the sign of the result [-Wsign-conversion]
446 | unsigned int value = (1 << backoff) - 1;
| ^
/usr/include/python3.12/internal/pycore_code.h:447:34: warning: conversion to ‘int’ from ‘unsigned int’ may change the sign of the result [-Wsign-conversion]
447 | return adaptive_counter_bits(value, backoff);
| ^~~~~
/usr/include/python3.12/internal/pycore_code.h:447:41: warning: conversion to ‘int’ from ‘unsigned int’ may change the sign of the result [-Wsign-conversion]
447 | return adaptive_counter_bits(value, backoff);
```
A tentative (untested) patch would be the following:
```diff
diff --git a/Include/internal/pycore_code.h b/Include/internal/pycore_code.h
index d1829eb324..a7b22968ba 100644
--- a/Include/internal/pycore_code.h
+++ b/Include/internal/pycore_code.h
@@ -427,9 +427,9 @@ write_location_entry_start(uint8_t *ptr, int code, int length)
static inline uint16_t
-adaptive_counter_bits(int value, int backoff) {
- return (value << ADAPTIVE_BACKOFF_BITS) |
- (backoff & ((1<<ADAPTIVE_BACKOFF_BITS)-1));
+adaptive_counter_bits(unsigned int value, unsigned int backoff) {
+ return (uint16_t) ((value << ADAPTIVE_BACKOFF_BITS) |
+ (backoff & ((1U<<ADAPTIVE_BACKOFF_BITS)-1U)));
}
static inline uint16_t
@@ -446,12 +446,12 @@ adaptive_counter_cooldown(void) {
static inline uint16_t
adaptive_counter_backoff(uint16_t counter) {
- unsigned int backoff = counter & ((1<<ADAPTIVE_BACKOFF_BITS)-1);
+ unsigned int backoff = counter & ((1U<<ADAPTIVE_BACKOFF_BITS)-1U);
backoff++;
if (backoff > MAX_BACKOFF_VALUE) {
backoff = MAX_BACKOFF_VALUE;
}
- unsigned int value = (1 << backoff) - 1;
+ unsigned int value = (1U << backoff) - 1U;
return adaptive_counter_bits(value, backoff);
}
```
If this approach of adding an explicit `(uint16_t)` cast and adding `U` suffixes to integer literals where needed is OK with you, I can provide a full and tested patch or pull request.
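To illustrate why the explicit truncation matters, here is a Python model of the packing; `ADAPTIVE_BACKOFF_BITS = 4` is assumed for illustration and may not match the header exactly:

```python
ADAPTIVE_BACKOFF_BITS = 4  # assumed value, for illustration only

def adaptive_counter_bits(value, backoff):
    # Pack: high bits hold value, low ADAPTIVE_BACKOFF_BITS bits hold
    # backoff; mask to 16 bits like the uint16_t return in the C code.
    return ((value << ADAPTIVE_BACKOFF_BITS)
            | (backoff & ((1 << ADAPTIVE_BACKOFF_BITS) - 1))) & 0xFFFF

assert adaptive_counter_bits(1, 1) == 0b10001
# An oversized value is silently truncated -- this is the conversion
# the (uint16_t) cast makes explicit.
assert adaptive_counter_bits(0xFFFF, 0) == 0xFFF0
```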
# Your environment
- CPython versions tested on: Python 3.12.0b3
- Operating system and architecture: Linux Fedora 38, GCC 13.1.1
<!-- gh-linked-prs -->
### Linked PRs
* gh-112613
* gh-112696
<!-- /gh-linked-prs -->
| a74902a14cdc0952abf7bfabcf529c9b132c5cce | dee7beeb4f9d28fec945c8c495027cc22a512328 |
python/cpython | python__cpython-106536 | # Document PEP 387 Soft Deprecation
# Documentation
Document https://peps.python.org/pep-0387/#soft-deprecation
<!-- gh-linked-prs -->
### Linked PRs
* gh-106536
* gh-105735
* gh-106859
<!-- /gh-linked-prs -->
| d524b6f61f0b9fe4c359373185bf08bab423a218 | 1fb9bd222bfe96cdf8a82701a3192e45d0811555 |
python/cpython | python__cpython-106532 | # Refresh importlib.resources
This issue tracks the updates to importlib.resources from [importlib_resources](/python/importlib_resources) for Python 3.13.
<!-- gh-linked-prs -->
### Linked PRs
* gh-106532
* gh-116835
* gh-117054
* gh-120014
<!-- /gh-linked-prs -->
| 243fdcb40ebeb177ce723911c1f7fad8a1fdf6cb | 7c95345e4f93f4a2475418f17df5aae39dea861f |
python/cpython | python__cpython-106528 | # Reduce overhead to add/remove asyncio readers and writers
# Pitch
The current code path for adding and removing asyncio readers and writers always has to go through a `try`/`except KeyError` to add a new reader/writer
https://github.com/python/cpython/blob/b3648f036e502db7e7da951ec4eb1f205cb3d74e/Lib/asyncio/selector_events.py#L316
For use cases where readers are added and removed frequently (hard to change the design without breaking changes) this adds up quickly.
<img width="1321" alt="Screenshot 2023-07-07 at 8 22 45 AM" src="https://github.com/python/cpython/assets/663432/c0583d58-e342-40f0-ac3c-77795610f84f">
<!-- gh-linked-prs -->
### Linked PRs
* gh-106528
<!-- /gh-linked-prs -->
| b7dc795dfd175c0d25a479cfaf94a13c368a5a7b | ee15844db88fab3a282d14325662bcfd026272ac |
python/cpython | python__cpython-106525 | # `_sre.template` crashes in case of negative or non-integer group index
`_sre.template` crashes if the `template` argument contains a group index that is negative or not an `int` instance.
Examples:
```python3
>>> import _sre
>>> _sre.template("", ["", -1, ""])
Segmentation fault (core dumped)
```
```python3
>>> _sre.template("", ["", (), ""])
Segmentation fault (core dumped)
```
In `_sre_template_impl`, part of `self->items` remains uninitialized if the call to `PyLong_AsSsize_t` returns a negative value or fails with an exception. The subsequent attempt to clear `self->items[i].literal` in `template_clear` then dereferences an uninitialized pointer.
Not sure if this is worth fixing, since `_sre.template` is an internal implementation detail that is used only in the `_compile_template` function, where it accepts only (I guess) correct templates created by the `_parser.parse_template` function, and additional checks/initialization can affect its performance. But I'll submit a PR anyway.
<!-- gh-linked-prs -->
### Linked PRs
* gh-106525
* gh-106544
<!-- /gh-linked-prs -->
| 2ef1dc37f02b08536b677dd23ec51541a60effd7 | 1c9e4934621627fbbfeada8d9dd87ecba4e446b0 |
python/cpython | python__cpython-106522 | # C API: Add PyObject_GetOptionalAttr() function
It is a new name for the former `_PyObject_LookupAttr()` added in #76752.
```c
int PyObject_GetOptionalAttr(PyObject *obj, PyObject *attr_name, PyObject **result);
```
Discussion about making it public and naming: https://discuss.python.org/t/make-pyobject-lookupattr-public/29104.
See also other new functions with similar interface in #106307 and #106004.
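For illustration only, a rough Python-level analogy of the tri-state contract (1 = found, 0 = not found without an exception set, -1 = error; the error case is omitted here). This is not the actual C semantics, just a sketch of the idea:

```python
_MISSING = object()

def get_optional_attr(obj, name):
    # Analog of PyObject_GetOptionalAttr: distinguish "attribute absent"
    # from "attribute present" without raising AttributeError.
    value = getattr(obj, name, _MISSING)
    if value is _MISSING:
        return 0, None
    return 1, value

found, value = get_optional_attr("abc", "upper")
assert found == 1 and value() == "ABC"
assert get_optional_attr("abc", "no_such_attr") == (0, None)
```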
<!-- gh-linked-prs -->
### Linked PRs
* gh-106522
* gh-106642
<!-- /gh-linked-prs -->
| 579aa89e68a6607398317a50586af781981e89fb | cabd6e8a107127ff02f0b514148f648fb2472a58 |
python/cpython | python__cpython-106511 | # Fix DEBUG output for atomic group
It escaped my attention that DEBUG output is not specialized for atomic groups (added in #34627). For example:
```
>>> re.compile('(?>ab?)', re.DEBUG)
ATOMIC_GROUP [(LITERAL, 97), (MAX_REPEAT, (0, 1, [(LITERAL, 98)]))]
...
```
The correct output should show the decoded structure of the subpattern:
```
>>> re.compile('(?>ab?)', re.DEBUG)
ATOMIC_GROUP
LITERAL 97
MAX_REPEAT 0 1
LITERAL 98
...
```
<!-- gh-linked-prs -->
### Linked PRs
* gh-106511
* gh-106548
* gh-106549
<!-- /gh-linked-prs -->
| 74ec02e9490d8aa086aa9ad9d1d34d2ad999b5af | ec7180bd1b3c156d4484e8e6babc5ecb707420e3 |
python/cpython | python__cpython-106509 | # Improve debugging of the _sre module
If the macro VERBOSE is defined when compiling the `_sre` module, debugging prints are added that help debug the internals of the regular expression engine. Unfortunately, it is a global option which affects the evaluation of every regular expression. Since some Python code which uses regular expressions is executed during `make`, it makes building much slower. Also, some regular expressions are used at REPL startup, which adds a lot of noise. This makes the feature almost unusable.
The proposed PR allows enabling tracing on a per-pattern basis. If macro `VERBOSE == 1`, tracing is only enabled for patterns compiled with the DEBUG flag; if `VERBOSE == 2`, tracing is always enabled; if `VERBOSE == 0`, it is always disabled.
This feature is for core developers only.
<!-- gh-linked-prs -->
### Linked PRs
* gh-106509
<!-- /gh-linked-prs -->
| b305c69d1085c9e6c6875559109f73b827cb6fe0 | 74ec02e9490d8aa086aa9ad9d1d34d2ad999b5af |
python/cpython | python__cpython-106504 | # asyncio._SelectorSocketTransport cyclic ref leak is back as `._write_ready.__self__`
<!--
If you're new to Python and you're not sure whether what you're experiencing is a bug, the CPython issue tracker is not
the right place to seek help. Consider the following options instead:
- reading the Python tutorial: https://docs.python.org/3/tutorial/
- posting in the "Users" category on discuss.python.org: https://discuss.python.org/c/users/7
- emailing the Python-list mailing list: https://mail.python.org/mailman/listinfo/python-list
- searching our issue tracker (https://github.com/python/cpython/issues) to see if
your problem has already been reported
-->
# Bug report
#100348 aka #87745 (fix #100349 merged 2022-12-20) is back as a side effect of #31871 (merged 2022-12-24)—caching a bound method as `_write_ready` creates the reference cycle `._write_ready.__self__` on `_SelectorSocketTransport`.
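A minimal sketch of why caching a bound method on the instance forms a cycle; the class and attribute names here are illustrative, not the actual transport code:

```python
class FakeTransport:
    def _write_ready(self):
        pass

    def __init__(self):
        # Caching the bound method on the instance, as the selector
        # transport does, makes the instance reference itself:
        self._cached_write_ready = self._write_ready

t = FakeTransport()
# t -> _cached_write_ready -> __self__ -> t : a reference cycle
assert t._cached_write_ready.__self__ is t
```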
# Your environment
<!-- Include as many relevant details as possible about the environment you experienced the bug in -->
- CPython versions tested on: 3.9.12, 3.11.3
- attempted 3.12.0b3 but my repro case uses aiohttp and I can't get aiohttp to build on 3.12+, probably due to https://github.com/aio-libs/aiohttp/pull/7315 being still in progress
- Operating system and architecture: `uname -a` = `Linux delia 6.3.2-arch1-1 #1 SMP PREEMPT_DYNAMIC Thu, 11 May 2023 16:40:42 +0000 x86_64 GNU/Linux`
<!--
You can freely edit this text. Remove any lines you believe are unnecessary.
-->
<!-- gh-linked-prs -->
### Linked PRs
* gh-106504
* gh-106514
<!-- /gh-linked-prs -->
| 3e5ce7968f5ab715f649e296e1f6b499621b8091 | 24fb627ea7a4d57cf479b7516bafdb6c253a1645 |
python/cpython | python__cpython-106488 | # Allow `str.replace`'s 'count' to be a keyword argument
# Feature or enhancement
Allow the *count* argument of `str.replace` to be a keyword to better describe its use.
# Pitch
`str.replace` takes `old` and `new` strings as parameters, and an optional `count` parameter:
https://docs.python.org/3/library/stdtypes.html#str.replace
However, `count` cannot be a keyword argument:
```pycon
>>> "aaa".replace("a", "b", 2)
'bba'
>>> "aaa".replace("a", "b", count=2)
TypeError: str.replace() takes no keyword arguments
```
It would be more explicit if the `count` parameter could also be a keyword, so there's no doubt about its meaning.
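For reference, a sketch of the positional-only behavior; whether `count=` is accepted as a keyword depends on whether the interpreter includes the linked change, so only positional use is shown:

```python
# count limits the number of replacements; historically it could only be
# passed positionally (keyword support is what this issue proposes).
assert "aaa".replace("a", "b") == "bbb"
assert "aaa".replace("a", "b", 2) == "bba"
assert "aaa".replace("a", "b", 0) == "aaa"
```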
# Previous discussion
<!--
New features to Python should first be discussed elsewhere before creating issues on GitHub,
for example in the "ideas" category (https://discuss.python.org/c/ideas/6) of discuss.python.org,
or the python-ideas mailing list (https://mail.python.org/mailman3/lists/python-ideas.python.org/).
Use this space to post links to the places where you have already discussed this feature proposal:
-->
Suggested by @treyhunner at https://mastodon.social/@treyhunner/110664375381530126
<!--
You can freely edit this text. Remove any lines you believe are unnecessary.
-->
<!-- gh-linked-prs -->
### Linked PRs
* gh-106488
<!-- /gh-linked-prs -->
| 34c14147a2c52930b8b471905074509639e82d5b | dac1e364901d3668742e6eecc2ce63586330c11f |
python/cpython | python__cpython-106480 | # Typo in __cplusplus macro
There is a typo in `_testcppext.cpp` https://github.com/python/cpython/blob/99b00efd5edfd5b26bf9e2a35cbfc96277fdcbb1/Lib/test/_testcppext.cpp#L89-L93 causing the `if` to never evaluate as true.
<!-- gh-linked-prs -->
### Linked PRs
* gh-106480
* gh-106493
<!-- /gh-linked-prs -->
| 67a798888dcde13bbb1e17cfcc3c742c94e67a07 | 5548097925b9924ebf761376d632c5198d01ebd5 |
python/cpython | python__cpython-106462 | # Missing Deprecation Notice for typing.Callable Alias in the Deprecated Aliases section
# Documentation
typing.Callable is not listed as a deprecated alias under https://docs.python.org/3/library/typing.html#deprecated-aliases.
However, it IS listed here: https://docs.python.org/3/library/typing.html#typing.Callable and states "Deprecated alias to [collections.abc.Callable](https://docs.python.org/3/library/collections.abc.html#collections.abc.Callable)". But not having it anywhere in the [Deprecated Aliases section](https://docs.python.org/3/library/typing.html#deprecated-aliases) of the docs is confusing, as the only deprecation notice lies outside of this section.
It seems like it could fit under this section: https://docs.python.org/3/library/typing.html#aliases-to-container-abcs-in-collections-abc as the language used in this section ("Deprecated alias to collections.abc.[...]") closely matches the language used in the Callable deprecation notice, but I am not entirely sure if it fits the "container" description.
I suggest adding the deprecation notice to the [Deprecated Aliases section](https://docs.python.org/3/library/typing.html#deprecated-aliases).
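For reference, the replacement that the deprecation notice points to works as a drop-in in annotations; a small illustrative sketch:

```python
from collections.abc import Callable  # preferred over typing.Callable

def apply(func: Callable[[int], int], value: int) -> int:
    return func(value)

assert apply(lambda n: n + 1, 41) == 42
```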
That is all! Thanks!
<!-- gh-linked-prs -->
### Linked PRs
* gh-106462
* gh-106574
* gh-106575
<!-- /gh-linked-prs -->
| ca8b55c7f54b38e264056148075a8061a7082013 | ee46cb6aa959d891b0a480fea29f1eb991e0fad8 |
python/cpython | python__cpython-106366 | # `test_unittest` failing due to `test/test_unittest/testmock/testthreadingmock` assuming threading exists
https://github.com/python/cpython/commit/d65b783b6966d233467a48ef633afb4aff9d5df8 via https://github.com/python/cpython/pull/16094 is the culprit.
<!-- gh-linked-prs -->
### Linked PRs
* gh-106366
<!-- /gh-linked-prs -->
| 56353b10023ff12c7c8d6288ae4bf7bdcd5d4b6c | e7cd55753b00d6f7e5c8998bae73ebaa9e86398d |
python/cpython | python__cpython-106447 | # Doctest fails in `stdtypes.rst`
# Documentation
When code is modified, the docs and doctests are not always updated in sync.
**Local doctest:**
```shell
./python -m doctest ./Doc/library/stdtypes.rst
```
<details><summary>Details</summary>
<p>
```shell
**********************************************************************
File "./Doc/library/stdtypes.rst", line 3769, in stdtypes.rst
Failed example:
v[1:4]
Expected:
<memory at 0x7f3ddc9f4350>
Got:
<memory at 0x7f42c250f040>
**********************************************************************
File "./Doc/library/stdtypes.rst", line 3952, in stdtypes.rst
Failed example:
mm.tolist()
Expected:
[89, 98, 99]
Got:
[97, 98, 99]
**********************************************************************
File "./Doc/library/stdtypes.rst", line 4037, in stdtypes.rst
Failed example:
x[0] = b'a'
Expected:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ValueError: memoryview: invalid value for format "B"
Got:
Traceback (most recent call last):
File "/home/github/cpython/Lib/doctest.py", line 1357, in __run
exec(compile(example.source, filename, "single",
File "<doctest stdtypes.rst[234]>", line 1, in <module>
x[0] = b'a'
~^^^
TypeError: memoryview: invalid type for format 'B'
**********************************************************************
File "./Doc/library/stdtypes.rst", line 4789, in stdtypes.rst
Failed example:
keys ^ {'sausage', 'juice'}
Expected:
{'juice', 'sausage', 'bacon', 'spam'}
Got:
{'spam', 'sausage', 'bacon', 'juice'}
**********************************************************************
File "./Doc/library/stdtypes.rst", line 4791, in stdtypes.rst
Failed example:
keys | ['juice', 'juice', 'juice']
Expected:
{'juice', 'sausage', 'bacon', 'spam', 'eggs'}
Got:
{'spam', 'bacon', 'juice'}
**********************************************************************
File "./Doc/library/stdtypes.rst", line 4997, in stdtypes.rst
Failed example:
dict[str][str]
Expected:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: There are no type variables left in dict[str]
Got:
Traceback (most recent call last):
File "/home/github/cpython/Lib/doctest.py", line 1357, in __run
exec(compile(example.source, filename, "single",
File "<doctest stdtypes.rst[335]>", line 1, in <module>
dict[str][str]
~~~~~~~~~^^^^^
TypeError: dict[str] is not a generic class
**********************************************************************
File "./Doc/library/stdtypes.rst", line 5209, in stdtypes.rst
Failed example:
isinstance(1, int | list[int])
Expected:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: isinstance() argument 2 cannot contain a parameterized generic
Got:
True
**********************************************************************
File "./Doc/library/stdtypes.rst", line 5514, in stdtypes.rst
Failed example:
int.__subclasses__()
Expected:
[<class 'bool'>]
Got:
[<class 'bool'>, <enum 'IntEnum'>, <flag 'IntFlag'>, <class 're._constants._NamedIntConstant'>]
**********************************************************************
File "./Doc/library/stdtypes.rst", line 5548, in stdtypes.rst
Failed example:
_ = int('2' * 5432)
Expected:
Traceback (most recent call last):
...
ValueError: Exceeds the limit (4300 digits) for integer string conversion: value has 5432 digits; use sys.set_int_max_str_digits() to increase the limit.
Got:
Traceback (most recent call last):
File "/home/github/cpython/Lib/doctest.py", line 1357, in __run
exec(compile(example.source, filename, "single",
File "<doctest stdtypes.rst[361]>", line 1, in <module>
_ = int('2' * 5432)
^^^^^^^^^^^^^^^
ValueError: Exceeds the limit (4300 digits) for integer string conversion: value has 5432 digits; use sys.set_int_max_str_digits() to increase the limit
**********************************************************************
File "./Doc/library/stdtypes.rst", line 5556, in stdtypes.rst
Failed example:
len(str(i_squared))
Expected:
Traceback (most recent call last):
...
ValueError: Exceeds the limit (4300 digits) for integer string conversion: value has 8599 digits; use sys.set_int_max_str_digits() to increase the limit.
Got:
Traceback (most recent call last):
File "/home/github/cpython/Lib/doctest.py", line 1357, in __run
exec(compile(example.source, filename, "single",
File "<doctest stdtypes.rst[365]>", line 1, in <module>
len(str(i_squared))
^^^^^^^^^^^^^^
ValueError: Exceeds the limit (4300 digits) for integer string conversion; use sys.set_int_max_str_digits() to increase the limit
**********************************************************************
```
</p>
</details>
```shell
1 items had failures:
10 of 374 in stdtypes.rst
***Test Failed*** 10 failures.
```
BTW, it seems that all doctests pass in the CI pipeline; the failing cases above are not caught by the CI doctest run.
See: https://github.com/CharlieZhao95/cpython/actions/runs/5461321900/jobs/9939195566
```shell
...
Document: library/stdtypes
--------------------------
1 items passed all tests:
50 tests in default
50 tests in 1 items.
50 passed and 0 failed.
Test passed.
```
**Environment:**
CPython versions tested on: 3.13.0a0
Operating system and architecture: Ubuntu 22.04.1 LTS
<!-- gh-linked-prs -->
### Linked PRs
* gh-106447
* gh-106741
* gh-106742
<!-- /gh-linked-prs -->
| 89867d2491c0c3ef77bc237899b2f0762f43c03c | 21d98be42289369ccfbdcc38574cb9ab50ce1c02 |
python/cpython | python__cpython-106408 | # New warning: "'_Py_IsInterpreterFinalizing' undefined; assuming extern returning in"
<img width="993" alt="Снимок экрана 2023-07-04 в 13 49 29" src="https://github.com/python/cpython/assets/4660275/72849196-4368-4368-a77b-d8584c8444db">
Looks like the reason for this is https://github.com/python/cpython/commit/bc7eb1708452da59c22782c487ae7f05f1788970#diff-42415407f8d0ef2d42e29d13d979f633e3543770e62c3871e1101ad532d336a8
I am working on a fix. PR is coming :)
CC @vstinner
<!-- gh-linked-prs -->
### Linked PRs
* gh-106408
<!-- /gh-linked-prs -->
| 80f1c6c49b4cd2bf698eb2bc3d2f3da904880dd2 | dfe4de203881e8d068e6fc5b8e31075841a86d25 |
python/cpython | python__cpython-106418 | # 3.12 regression: cannot create weak reference to 'typing.TypeVar' object
This works in Python 3.11, but fails in 3.12b3:
```python
import weakref
import typing
weakref.ref(typing.TypeVar('T'))
```
I'm assuming it's an unintentional side effect of the PEP-695 implementation? Unfortunately I don't have time to investigate right now, let me know if I should.
Reported in https://github.com/cloudpipe/cloudpickle/issues/507
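A minimal check; on 3.11, and on interpreters that include the fix, this runs cleanly, while 3.12b3 raises `TypeError`:

```python
import typing
import weakref

T = typing.TypeVar("T")
ref = weakref.ref(T)  # TypeError on 3.12b3; works on 3.11 and after the fix
assert ref() is T
```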
<!-- gh-linked-prs -->
### Linked PRs
* gh-106418
* gh-106635
<!-- /gh-linked-prs -->
| 945d3cbf2e8e756ed16c3ec51106e6157abb2698 | a2d54d4e8ab12f967a220be88bde8ac8227c5ab3 |
python/cpython | python__cpython-106401 | # `f'"{var:}"'` adds redundant empty string literal in AST in 3.12
3.11:
```python
>>> import ast
>>> print(ast.dump(ast.parse("""f'"{var:}"'"""), indent=2))
Module(
body=[
Expr(
value=JoinedStr(
values=[
Constant(value='"'),
FormattedValue(
value=Name(id='var', ctx=Load()),
conversion=-1,
format_spec=JoinedStr(values=[])),
Constant(value='"')]))],
type_ignores=[])
```
3.12:
```python
>>> import ast
>>> print(ast.dump(ast.parse("""f'"{var:}"'"""), indent=2))
Module(
body=[
Expr(
value=JoinedStr(
values=[
Constant(value='"'),
FormattedValue(
value=Name(id='var', ctx=Load()),
conversion=-1,
format_spec=JoinedStr(
values=[
Constant(value='')])),
Constant(value='"')]))],
type_ignores=[])
```
This affects several linters:
1. https://github.com/python/mypy/pull/15577
2. https://github.com/PyCQA/flake8-bugbear/issues/393
Is this a bug? Or is it a feature?
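A version-tolerant check that passes under either behavior; the format spec is a `JoinedStr` with `values=[]` on 3.11, while early 3.12 additionally puts an empty `Constant` inside it:

```python
import ast

tree = ast.parse('f"{var:}"')
fv = tree.body[0].value.values[0]
assert isinstance(fv, ast.FormattedValue)
spec = fv.format_spec
# Either way the format spec is effectively empty.
assert isinstance(spec, ast.JoinedStr)
assert all(isinstance(v, ast.Constant) and v.value == "" for v in spec.values)
```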
<!-- gh-linked-prs -->
### Linked PRs
* gh-106401
* gh-106416
<!-- /gh-linked-prs -->
| dfe4de203881e8d068e6fc5b8e31075841a86d25 | 8f6df5e9cbc3a1689601714192aa6ecbb23e1927 |
python/cpython | python__cpython-106369 | # Increase Argument Clinic test coverage
The Argument Clinic test suite consists of three parts:
- Lib/test/test_clinic.py: mostly parser tests
- Lib/test/clinic.test: generated code, checked by test_clinic.py (`ClinicExternalTest`)
- Modules/_testclinic.c: mostly converter tests (thanks, @colorfulappl!)
Test coverage is currently just above 80%, which is too low. I'd say we try to reach 90-something percent coverage.
Reminder: PRs for increased test coverage should be backported to all bugfix branches.
<!-- gh-linked-prs -->
### Linked PRs
* gh-106369
* gh-106370
* gh-106373
* gh-106374
* gh-106379
* gh-106381
* gh-106384
* gh-106387
* gh-106388
* gh-106389
* gh-106390
* gh-106391
* gh-106407
* gh-106409
* gh-106410
* gh-106415
* gh-106438
* gh-106439
* gh-106728
* gh-106730
* gh-106731
* gh-106759
* gh-106769
* gh-106770
* gh-106813
* gh-106833
* gh-106838
* gh-106839
* gh-106933
* gh-106943
* gh-106944
* gh-106979
* gh-106994
* gh-107002
* gh-107156
* gh-107189
* gh-107190
* gh-107277
* gh-107282
* gh-107326
* gh-107364
* gh-107365
* gh-107366
* gh-107387
* gh-107496
* gh-107499
* gh-107500
* gh-107514
* gh-107582
* gh-107606
* gh-107611
* gh-107641
* gh-107693
* gh-107731
* gh-107977
* gh-107985
<!-- /gh-linked-prs -->
| 7f4c8121db62a9f72f00f2d9f73381e82f289581 | bf06c6825cadeda54a9c0848fa51463a0e0b2cf8 |
python/cpython | python__cpython-106422 | # Add basic introspection, like `str()` for executors.
We should (at least via the unstable C API) be able to get an executor and introspect it.
This will be useful for testing and debugging.
<!-- gh-linked-prs -->
### Linked PRs
* gh-106422
* gh-106622
<!-- /gh-linked-prs -->
| 318ea2c72e9aed7ac92457c28747eda9424c8327 | 80f1c6c49b4cd2bf698eb2bc3d2f3da904880dd2 |
python/cpython | python__cpython-106361 | # Argument Clinic: DSLParser.parse_converter() can return an incorrect kwargs dict
_Originally posted by @AlexWaygood in https://github.com/python/cpython/issues/106354#issuecomment-1617797726_:
```
Tools/clinic/clinic.py:5039: error: Incompatible return value type (got
"tuple[str, bool, dict[str | None, Any]]", expected
"tuple[str, bool, dict[str, Any]]") [return-value]
return name, False, kwargs
^~~~~~~~~~~~~~~~~~~
```
I think mypy is flagging a real bug in the code here. The issue is this block of code here:
https://github.com/python/cpython/blob/d694f043ae25308dda607fad4ea1d8d85d79982f/Tools/clinic/clinic.py#L5033-L5039
The function is annotated as return `tuple[str, bool, KwargDict]`, and `KwargDict` is a type alias for `dict[str, Any]`. It's important that the dictionary that's the third element of the tuple only has strings as keys. If it doesn't, then this will fail:
https://github.com/python/cpython/blob/d694f043ae25308dda607fad4ea1d8d85d79982f/Tools/clinic/clinic.py#L4617
The code as written, however, doesn't guarantee that all the keys in the `kwargs` dictionary will be strings. In the dictionary comprehension, we can see that `annotation` is an `ast.Call` instance. That means that `annotation.keywords` is of type `list[ast.keyword]` -- we can see this from typeshed's stubs for the `ast` module (which are an invaluable reference if you're working with ASTs in Python!): https://github.com/python/typeshed/blob/18d45d62aabe68fce78965c4920cbdeddb4b54db/stdlib/_ast.pyi#L324-L329. If `annotation.keywords` is of type `list[ast.keyword]`, that means that the `node` variable in the dictionary comprehension is of type `keyword`, which means (again using typeshed's `ast` stubs), that `node.arg` is of type `str | None`: https://github.com/python/typeshed/blob/18d45d62aabe68fce78965c4920cbdeddb4b54db/stdlib/_ast.pyi#L516-L520. AKA, the keys in this dictionary are not always guaranteed to be strings -- there's a latent bug in this code!
<!-- gh-linked-prs -->
### Linked PRs
* gh-106361
* gh-106364
<!-- /gh-linked-prs -->
| 0da4c883cf4185efe27b711c3e0a1e6e94397610 | 60e41a0cbbe1a43e504b50a454efe653aba78a1a |
python/cpython | python__cpython-106351 | # Tkinter: return value of `mp_init()` should not be ignored
An `mp_int` structure is usable only if `mp_init()` returns `MP_OKAY`; `mp_init()` may return `MP_MEM` if it fails to allocate memory (see https://github.com/libtom/libtommath/blob/v1.2.0/bn_mp_init.c#L12).
`mp_init()` must be checked to avoid a warning when compiling against standalone libtommath headers (rather than Tcl-bundled libtommath):
```
./Modules/_tkinter.c:908:5: warning: ignoring return value of function declared with 'warn_unused_result' attribute [-Wunused-result]
mp_init(&bigValue);
^~~~~~~ ~~~~~~~~~
1 warning generated.
```
<!-- gh-linked-prs -->
### Linked PRs
* gh-106351
* gh-107258
* gh-107259
<!-- /gh-linked-prs -->
| b5ae7c498438657a6ba0bf4cc216b9c2c93a06c7 | 2b1a81e2cfa740609d48ad82627ae251262e6470 |
python/cpython | python__cpython-106331 | # `pathlib.PurePath.match()` can incorrectly match empty path
```python
>>> from pathlib import PurePath
>>> PurePath().match('*')
True
```
This should be false, because a `*` segment always consumes exactly one path component, but empty paths have 0 components.
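The mismatch is easy to see by comparing component counts (a small sketch; the buggy call itself returns different results across versions, so only version-stable behaviour is shown):

```python
from pathlib import PurePath

# An empty path has zero components...
print(PurePath().parts)          # ()
# ...while a '*' pattern segment is meant to consume exactly one component:
print(PurePath('a').match('*'))  # True
```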
<!-- gh-linked-prs -->
### Linked PRs
* gh-106331
* gh-106372
<!-- /gh-linked-prs -->
| b4efdf8cda8fbbd0ca53b457d5f6e46a59348caf | e5862113dde7a66b08f1ece542a3cfaf0a3d9080 |
python/cpython | python__cpython-106321 | # C API: Remove private C API functions (move them to the internal C API)
Over the years, we accumulated many private functions as part of the public C API header files. I propose to remove them: move them to the internal C API.
If many C extensions are affected by these removals, it's a sign that we should consider promoting widely used private functions to public functions: provide a better API, add error handling, write documentation, write tests.
---
Summary: **[My plan to clarify private vs public functions in Python 3.13](https://discuss.python.org/t/c-api-my-plan-to-clarify-private-vs-public-functions-in-python-3-13/30131)**
* Private functions converted to public functions:
* PyLong_AsInt(): issue #108444
* Py_IsFinalizing(): PR #108032
* Discussions:
* [(pssst) Let’s treat all API in public headers as public](https://discuss.python.org/t/pssst-lets-treat-all-api-in-public-headers-as-public/28916)
* [list of PyPI affected projects](https://github.com/python/cpython/issues/106320#issuecomment-1620773057)
* [C API: How much private is the private _Py_IDENTIFIER() API?](https://discuss.python.org/t/c-api-how-much-private-is-the-private-py-identifier-api/29190) -- ``_Py_Identifier``
* [_Py_c_xxx complex functions](https://github.com/python/cpython/issues/106320#issuecomment-1633302147) (ex: ``_Py_c_abs()``)
* PR #107139: Add _PyTupleBuilder API to the internal C API
* Notes:
* While being private, [_PyBytes_Resize()](https://docs.python.org/dev/c-api/bytes.html#c._PyBytes_Resize) and [_PyTuple_Resize()](https://docs.python.org/dev/c-api/tuple.html#c._PyTuple_Resize) are documented
* The private _PyCrossInterpreterData API is used by 3rd party project on PyPI: https://github.com/python/cpython/pull/107068#issuecomment-1648594654. I closed my PR #107068.
---
<!-- gh-linked-prs -->
### Linked PRs
* gh-106321
* gh-106324
* gh-106325
* gh-106335
* gh-106336
* gh-106339
* gh-106341
* gh-106342
* gh-106355
* gh-106356
* gh-106382
* gh-106383
* gh-106385
* gh-106386
* gh-106398
* gh-106399
* gh-106400
* gh-106417
* gh-106425
* gh-106434
* gh-107021
* gh-107026
* gh-107027
* gh-107030
* gh-107032
* gh-107034
* gh-107036
* gh-107041
* gh-107053
* gh-107064
* gh-107068
* gh-107070
* gh-107071
* gh-107142
* gh-107143
* gh-107144
* gh-107145
* gh-107147
* gh-107159
* gh-107185
* gh-107187
* gh-108313
* gh-108429
* gh-108430
* gh-108431
* gh-108433
* gh-108434
* gh-108449
* gh-108451
* gh-108452
* gh-108453
* gh-108499
* gh-108503
* gh-108505
* gh-108593
* gh-108597
* gh-108599
* gh-108600
* gh-108601
* gh-108602
* gh-108603
* gh-108604
* gh-108605
* gh-108606
* gh-108607
* gh-108609
* gh-108664
* gh-108712
* gh-108713
* gh-108720
* gh-108742
* gh-108743
* gh-108863
* gh-111162
* gh-111939
* gh-128787
* gh-128788
* gh-128837
<!-- /gh-linked-prs -->
| 18b1fdebe0cd5e601aa341227c13ec9d89bdf32c | 0530f4f64629ff97f3feb7524da0833b9535e8b6 |
python/cpython | python__cpython-106317 | # Remove Include/cpython/pytime.h header file
Remove Include/cpython/pytime.h header file: it only contains private functions and so far no one asked to expose public functions for this API.
I propose to move these functions to the internal pycore_time.h header file.
<!-- gh-linked-prs -->
### Linked PRs
* gh-106317
<!-- /gh-linked-prs -->
| 46d77610fc77088bceac720a13d9f2df3a50f29e | 822db860eada721742f878653d7ac9364ed8df59 |
python/cpython | python__cpython-106311 | # __signature__ in the inspect module
# Documentation
The `__signature__` attribute is treated by inspect.signature (and inspect.Signature.from_callable) as an override which is returned instead of computing the signature through the ordinary process. `__text_signature__` serves a similar purpose but containing a string version of the signature.
That's for the implementation.
None of the two are mentioned in the signature or Signature sections of the inspect module's doc page. However, the [inspect.unwrap](https://docs.python.org/3/library/inspect.html#inspect.unwrap) function's doc entry mentions, in an example, that inspect.signature does recognize the `__signature__` attribute, without saying what it does with it.
This is the only mention of `__signature__` or `__text_signature__` in the doc, excluding changelogs, whatsnew, and the Argument clinic section which is overtly implementation-dependent.
However, it is also mentioned in the [PEP 362](https://peps.python.org/pep-0362/#implementation) which introduced Signature, albeit in the Implementation section.
It should be clarified whether what `__signature__` does is documented or not. It's a very useful feature to be able to cache or lie about a function's signature, and I think it should be supported.
If not, then inspect.unwrap should stop mentioning it and use another example.
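The override behaviour under discussion is easy to demonstrate (the function and parameter names below are illustrative, not from the stdlib docs):

```python
import inspect

def add(*args):
    return args[0] + args[1]

# inspect.signature() returns this object verbatim instead of
# introspecting the actual (*args) signature.
add.__signature__ = inspect.Signature([
    inspect.Parameter("x", inspect.Parameter.POSITIONAL_OR_KEYWORD),
    inspect.Parameter("y", inspect.Parameter.POSITIONAL_OR_KEYWORD),
])

print(inspect.signature(add))  # (x, y)
```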
<!-- gh-linked-prs -->
### Linked PRs
* gh-106311
* gh-111145
* gh-111146
<!-- /gh-linked-prs -->
| b07f23259d30e61fd7cc975b8b0e3b2e846fed8f | 5dfa71769f547fffa893a89b0b04d963a41b2441 |
python/cpython | python__cpython-106312 | # Deprecate typing.no_type_check_decorator
# Feature or enhancement
We should deprecate `typing.no_type_check_decorator`, with a view to removing it in Python 3.15.
# Pitch
The typing documentation describes `typing.no_type_check_decorator` like this:
> Decorator to give another decorator the [no_type_check()](https://docs.python.org/3/library/typing.html#typing.no_type_check) effect.
>
> This wraps the decorator with something that wraps the decorated function in [no_type_check()](https://docs.python.org/3/library/typing.html#typing.no_type_check).
In 2023, this is frankly misleading for users of the `typing` module. Unlike its twin `@no_type_check`, no major type checker has yet added support for `no_type_check_decorator`, despite the fact that it was added to the typing module in Python 3.5. If it's been 8 years and it still hasn't had support added by any major type checker, it's unlikely to ever be usable or useful. Meanwhile, the decorator is fairly obscure, and you can achieve the same effect using other tools such as `@typing.no_type_check` and `typing.Annotated`. There have been a few feature requests to type checkers over the years asking them to add support, but none of them have a significant number of upvotes.
We should simply deprecate `@no_type_check_decorator`, with a view to removing it in Python 3.15. (Note that I **do not** propose deprecating `@no_type_check`, which is supported by mypy and reasonably popular among mypy users.)
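For reference, the supported twin decorator works by setting a marker attribute that type checkers honour; a minimal sketch:

```python
from typing import no_type_check

@no_type_check
def f(x: int) -> "not a type":  # annotations are ignored by type checkers
    return x

print(f.__no_type_check__)  # True
print(f(1))                 # 1 -- runtime behaviour is unchanged
```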
# Previous discussion
- https://github.com/python/mypy/issues/6583
- https://github.com/google/pytype/issues/1452
- https://github.com/microsoft/pyright/issues/1448 (this issue is about `no_type_check` rather than `no_type_check_decorator`, but it shows that pyright is unlikely to ever support either decorator, unlike mypy which supports the former but not the latter)
<!-- gh-linked-prs -->
### Linked PRs
* gh-106312
<!-- /gh-linked-prs -->
| 32718f908cc92c474fd968912368b8a4500bd055 | 4b4a5b70aa8d47b1e2a0582b741c31b786da762a |
python/cpython | python__cpython-106308 | # Add PyMapping_GetOptionalItem()
`PyObject_GetItem()` raises a KeyError if the key is not found in a mapping. In some cases it should be treated as any other error, but in other cases it should be caught and suppressed. The repeating pattern of `PyObject_GetItem()` followed by `PyErr_ExceptionMatches(PyExc_KeyError)` and `PyErr_Clear()` occurs 7 times in `Python/bytecodes.c` and at least 5 times in other files.
I propose to add a private helper `_PyMapping_LookupItem()` which combines these three calls to make the code clearer. It also has a special case for exact dicts, so it eliminates even more repeated code. For example:
```diff
PyObject *m;
- if (PyDict_CheckExact(modules)) {
- m = Py_XNewRef(PyDict_GetItemWithError(modules, name));
- }
- else {
- m = PyObject_GetItem(modules, name);
- if (_PyErr_ExceptionMatches(tstate, PyExc_KeyError)) {
- _PyErr_Clear(tstate);
- }
- }
- if (_PyErr_Occurred(tstate)) {
+ if (_PyMapping_LookupItem(modules, name, &m) < 0) {
return NULL;
}
```
The interface of `_PyMapping_LookupItem()` is very similar to other private helper `_PyObject_LookupAttr()` (see #76752) and to a proposed new C API for PyDict (see #106004).
<!-- gh-linked-prs -->
### Linked PRs
* gh-106308
<!-- /gh-linked-prs -->
| 4bf43710d1e1f19cc46b116b5d8524f6c75dabfa | b444bfb0a325dea8c29f7b1828233b00fbf4a1cb |
python/cpython | python__cpython-106304 | # Use _PyObject_LookupAttr() instead of PyObject_GetAttr()
`PyObject_GetAttr()` followed by `PyErr_ExceptionMatches(PyExc_AttributeError)` and `PyErr_Clear()` can be replaced with `_PyObject_LookupAttr()`. It is faster, because it avoids raising the AttributeError exception that would later be caught and dropped in the first place. And it leads to simpler code. Most of the code already uses `_PyObject_LookupAttr()`, but some recently added code still uses the old idiom.
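A Python-level analogue of the same optimisation (not the C API itself) is replacing the raise-and-catch idiom with a defaulted lookup:

```python
class Obj:
    pass

o = Obj()

# Exception-based idiom: an AttributeError is raised, caught, and dropped.
try:
    value = o.missing
except AttributeError:
    value = None

# Lookup-based idiom: no exception is ever materialised.
value2 = getattr(o, "missing", None)

print(value, value2)  # None None
```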
<!-- gh-linked-prs -->
### Linked PRs
* gh-106304
* gh-106568
* gh-106569
<!-- /gh-linked-prs -->
| 93d292c2b3f8e85ef562c37f59678c639b9b8fcb | d137c2cae28b79555433079d917c3e0614bdcd61 |
python/cpython | python__cpython-106302 | # `assertRaises(Regex)?(Exception)` is problematic
`assertRaises(Exception)` is problematic, because it does not really test anything.
For example, here's the test code from `test_mailbox.py`:
```python
def raiser(*args, **kw):
raise Exception("a fake error")
support.patch(self, email.generator.BytesGenerator, 'flatten', raiser)
with self.assertRaises(Exception):
self._box.add(email.message_from_string("From: Alphöso"))
```
`self.assertRaises(Exception)` can happen for *any* other reason, we have zero confidence in the fact that we have the right exception. For example, a simple typo in `email.message_from_stringX` will also satisfy this check.
Sometimes it is a bit better when `assertRaisesRegex` is used: then we at least have a message to compare, and can be partially sure that the exception is correct. But it is still not quite good, because we don't know the type of error that happened, and it might accidentally change.
There are multiple places where this pattern is used:
```python
Lib/test/test_ordered_dict.py
189: with self.assertRaises(Exception):
Lib/test/test_importlib/test_main.py
71: with self.assertRaises(Exception):
Lib/test/test_code.py
182: with self.assertRaises(Exception):
Lib/test/test_shutil.py
2741: with self.assertRaises(Exception):
Lib/test/test_unittest/testmock/testasync.py
464: with self.assertRaises(Exception):
Lib/test/test_unittest/test_runner.py
349: with self.assertRaises(Exception) as cm:
372: with self.assertRaises(Exception) as cm:
382: with self.assertRaises(Exception) as cm:
402: with self.assertRaises(Exception) as e:
633: with self.assertRaises(Exception) as e:
858: with self.assertRaises(Exception) as cm:
891: with self.assertRaises(Exception) as cm:
902: with self.assertRaises(Exception) as cm:
Lib/test/test_concurrent_futures.py
1039: with self.assertRaises(Exception) as cm:
Lib/test/test_mailbox.py
122: with self.assertRaises(Exception):
Lib/test/test_yield_from.py
1121: with self.assertRaises(Exception) as caught:
1385: with self.assertRaises(Exception) as caught:
1397: with self.assertRaises(Exception) as caught:
1411: with self.assertRaises(Exception) as caught:
1423: with self.assertRaises(Exception) as caught:
1435: with self.assertRaises(Exception) as caught:
1507: with self.assertRaises(Exception) as caught:
Lib/test/test_email/test_message.py
706: with self.assertRaises(Exception) as ar:
Lib/test/test_codecs.py
2825: with self.assertRaises(Exception) as failure:
2832: with self.assertRaises(Exception) as failure:
Lib/test/test_mmap.py
703: with self.assertRaises(Exception) as exc:
Lib/test/test_tarfile.py
2801: with self.assertRaises(Exception) as exc:
```
I suggest to:
1. Use custom exception classes where possible `class MyExc(Exception): ...`, this way we will have 100% guarantee that the test is accurate
2. Use `assertRaisesRegex` more, where possible
3. Keep some of the `assertRaises(Exception)` where it is legit, for example `test_ordereddict` says: `Note, the exact exception raised is not guaranteed. The only guarantee that the next() will not succeed`
4. If `as exc` is used and there are `isinstance` checks - then we can keep it as is
5. Keep it, where literally any exception can happen (or different ones)
6. Keep it where the `Exception` class is tested explicitly, like in `test_yield_from` with `g.throw(Exception)`
I will send a PR with the fix.
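Suggestion 1 above can look like this (the test and exception names are illustrative):

```python
import unittest

class FakeFlattenError(Exception):
    """Sentinel raised only by the patched code path."""

class Demo(unittest.TestCase):
    def test_error_propagates(self):
        def raiser(*args, **kw):
            raise FakeFlattenError("a fake error")
        # Only our sentinel satisfies this assertion; a typo elsewhere would
        # raise NameError/AttributeError and correctly fail the test.
        with self.assertRaises(FakeFlattenError):
            raiser()

result = unittest.TestResult()
unittest.defaultTestLoader.loadTestsFromTestCase(Demo).run(result)
print(result.wasSuccessful())  # True
```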
<!-- gh-linked-prs -->
### Linked PRs
* gh-106302
* gh-106534
* gh-106545
* gh-106737
* gh-108006
* gh-108007
<!-- /gh-linked-prs -->
| 6e6a4cd52332017b10c8d88fbbbfe015948093f4 | 80b9b3a51757ebb1e3547afc349a229706eadfde |
python/cpython | python__cpython-106294 | # Typos, Grammar mistakes in object_layout.md
Found multiple typos, and grammar mistakes which can be improved in `Objects/object_layout.md`
<!-- gh-linked-prs -->
### Linked PRs
* gh-106294
* gh-114158
<!-- /gh-linked-prs -->
| 60ca37fdee52cc4ff318b6e9ddbb260e8583b33b | 45e527dfb553a5687aad828260cda85f02b9b6f8 |
python/cpython | python__cpython-106380 | # cached_property no longer works as a data descriptor in Python 3.12
# Bug report
In Python 3.11 and below, when `cached_property` was inherited from, the `__get__` method would always check for the attribute in the cache.
In Python 3.12.0b3 (since #101890 it seems) the `__get__` method no longer checks the cache.
This isn't an issue for typical use, but it might be an issue for subclasses.
For example here's a version of `cached_property` that inherits from the `functools` version but also allows for a `setter` (just as `@property` does).
Since this child class adds a `__set__` method, the `__get__` method in `functools.cached_property` will be called *before* the `__dict__` attribute is accessed.
```python
"""Demonstration of cached_property difference in Python 3.12.0b3."""
import functools
class settable_cached_property(functools.cached_property):
def __init__(self, func):
super().__init__(func)
self._setter = self._deleter = None
def setter(self, setter):
self._setter = setter
return self
def __set__(self, obj, value):
if self._setter:
self._setter(obj, value)
obj.__dict__.pop(self.attrname, None)
else:
obj.__dict__[self.attrname] = value
class Thing:
@settable_cached_property
def x(self):
return self.y
thing = Thing()
thing.y = 4
print(f"{thing.x = } (should be 4)")
thing.y = 5
print(f"{thing.x = } (should still be 4)")
```
This new behavior may be intended, but I wanted to make a note of it because it does break a previous (undocumented I believe?) assumption that `cached_property` could be inherited from and turned into a data descriptor.
# Your environment
Python 3.12.0b3 on Ubuntu Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-106380
* gh-106469
<!-- /gh-linked-prs -->
| 838406b4fc044c0b2f397c23275c69f16a76205b | 217f47d6e5e56bca78b8556e910cd00890f6f84a |
python/cpython | python__cpython-106319 | # Line numbers for errors off in traces
When a uop encounters an error, the line number is set to -1 or to the wrong place.
@markshannon
---
Repro (on latest main):
```
a = 1
def func():
global a
i = 20
while i > 0:
del a
i -= 1
if i >= 3:
a = 2
func()
```
Run this normally and you get this traceback:
```
Traceback (most recent call last):
File "/Users/guido/cpython/t.py", line 12, in <module>
func()
File "/Users/guido/cpython/t.py", line 7, in func
del a
^
NameError: name 'a' is not defined
```
But run with `-Xuops`, and you get:
```
Traceback (most recent call last):
File "/Users/guido/cpython/t.py", line 12, in <module>
func()
File "/Users/guido/cpython/t.py", line -1, in func
NameError: name 'a' is not defined
```
<!-- gh-linked-prs -->
### Linked PRs
* gh-106319
<!-- /gh-linked-prs -->
| 2028a4f6d996d2a46cbc33d0b65fdae284ee71fc | 58906213cc5d8f2be311664766b4923ef29dae1f |
python/cpython | python__cpython-106285 | # Python/ceval.c -Wunreachable-code warn from apple-clang
````
Python/ceval.c:2849:17: warning: code will never be executed [-Wunreachable-code]
for (;;) {}
^~~~~~~~~~~
Python/ceval.c:2848:17: warning: code will never be executed [-Wunreachable-code]
abort(); // Unreachable
````
The code was first introduced from https://github.com/python/cpython/commit/51fc72511733353de15bc633a3d7b6da366842e4
@gvanrossum is it really needed?
<!-- gh-linked-prs -->
### Linked PRs
* gh-106285
<!-- /gh-linked-prs -->
| 02ce3d56e6d230768853757109e7ca6425a6a600 | a8ae73965b02302b7661ea07a6e4f955a961aca9 |
python/cpython | python__cpython-106289 | # Warning in `Python/executor_cases.c.h`
```c
'initializing': conversion from 'uint64_t' to 'uint32_t', possible loss of data (compiling source file ..\Python\ceval.c)
[D:\a\cpython\cpython\PCbuild\_freeze_module.vcxproj]
```
Appears in #106262.
<!-- gh-linked-prs -->
### Linked PRs
* gh-106289
<!-- /gh-linked-prs -->
| 2062e115017d8c33e74ba14adef2a255c344f747 | 02ce3d56e6d230768853757109e7ca6425a6a600 |
python/cpython | python__cpython-106270 | # Segmentation fault when instantiating `decimal.SignalDictMixin` type
<!--
Use this template for hard crashes of the interpreter, segmentation faults, failed C-level assertions, and similar.
Do not submit this form if you encounter an exception being unexpectedly raised from a Python function.
Most of the time, these should be filed as bugs, rather than crashes.
The CPython interpreter is itself written in a different programming language, C.
For CPython, a "crash" is when Python itself fails, leading to a traceback in the C stack.
-->
# Crash report
The following code causes a segmentation fault:
```python
>>> import decimal
>>> tp = type(decimal.Context().flags) # SignalDict type
>>> tp() # Segmentation fault
```
This code instantiates an object of `SignalDict` type (inherited from the base class `SignalDictMixin`) and tries to print the contents of the object (via `__repr__`).
The problem is caused by the following C code, where the `signaldict_repr` function **accesses a null pointer**.
```c
static int
signaldict_init(PyObject *self, PyObject *args UNUSED, PyObject *kwds UNUSED)
{
SdFlagAddr(self) = NULL;
return 0;
}
...
static PyObject *
signaldict_repr(PyObject *self)
{
...
for (cm=signal_map, i=0; cm->name != NULL; cm++, i++) {
n[i] = cm->fqname;
// Access NULL pointer here
b[i] = SdFlags(self)&cm->flag ? "True" : "False";
}
...
}
```
# Your environment
<!-- Include as many relevant details as possible about the environment you experienced the bug in -->
- CPython versions tested on: 3.13.0.0a0, 3.10.2
- Operating system and architecture: Ubuntu 22.04.1 LTS, Windows 11
<!--
You can freely edit this text. Remove any lines you believe are unnecessary.
-->
<!-- gh-linked-prs -->
### Linked PRs
* gh-106270
* gh-107490
* gh-107491
<!-- /gh-linked-prs -->
| 3979150a0d406707f6d253d7c15fb32c1e005a77 | 5113ed7a2b92e8beabebe5fe2f6e856c52fbe1a0 |
python/cpython | python__cpython-106260 | # Add trivial "help" target to Makefile.pre.in
The main Makefile could use a "help" target akin to that in Doc/Makefile.
<!-- gh-linked-prs -->
### Linked PRs
* gh-106260
<!-- /gh-linked-prs -->
| d9ccde28c4321ffc0d3f8b18c6346d075b784c40 | 41457c7fdb04819d04a528b8dfa72c1aa5745cc9 |
python/cpython | python__cpython-106252 | # The `cases_generator` handles some instructions incorrectly
I triggered this `assert`...
```py
Traceback (most recent call last):
File "/home/brandtbucher/cpython/./Tools/cases_generator/generate_cases.py", line 1609, in <module>
main()
File "/home/brandtbucher/cpython/./Tools/cases_generator/generate_cases.py", line 1604, in main
a.write_metadata()
File "/home/brandtbucher/cpython/./Tools/cases_generator/generate_cases.py", line 1260, in write_metadata
assert not instr.active_caches, (instr.name, instr.cache_effects)
AssertionError: ('TO_BOOL_ALWAYS_TRUE', [CacheEffect(context=<Python/bytecodes.c: 1807-1810>, name='unused', size=1), CacheEffect(context=<Python/bytecodes.c: 1811-1814>, name='version', size=2)]
```
...while defining this new instruction...
```c
inst(TO_BOOL_ALWAYS_TRUE, (unused/1, version/2, value -- res)) {
// This one is a bit weird, because we expect *some* failures:
assert(version);
DEOPT_IF(Py_TYPE(value)->tp_version_tag != version, TO_BOOL);
STAT_INC(TO_BOOL, hit);
DECREF_INPUTS();
res = Py_True;
}
```
@gvanrossum thinks that the `assert` is incorrect: it appears that the current code doesn't know how to handle an instruction that uses caches but *doesn't* have an oparg.
<!-- gh-linked-prs -->
### Linked PRs
* gh-106252
<!-- /gh-linked-prs -->
| 6e9f83d9aee34192de5d0ef7285be23514911ccd | 8bff940ad69ce176dcd2b8e91d0b30ddd09945f1 |
python/cpython | python__cpython-106816 | # os.path.normpath truncates input on null bytes in 3.11, but not 3.10
# Bug report
Looks like `posix._path_normpath` has slightly different behaviour to the python implementation of `normpath` defined in `posixpath`, as such `os.path.normpath` behaves differently on Python 3.11 (where `posix._path_normpath` is used if it exists) vs 3.10 on posix systems:
Python 3.10:
```python
>>> import os.path
>>> os.path.normpath('hello\x00world')
'hello\x00world'
>>> os.path.normpath('\x00hello')
'\x00hello'
```
Python 3.11:
```python
>>> import os.path
>>> os.path.normpath('hello\x00world')
'hello'
>>> os.path.normpath('\x00hello')
'.'
```
Obviously filepaths shouldn't have nulls in them, but the above means invalid input to a program could result in the wrong files or directories being used, rather than an error about embedded nulls once the filepaths are actually used for a system call. And I'm guessing the inconsistency between Python3.10 and 3.11, or between the Python and C implementations of `normpath` was not intended in any case.
# Your environment
CPython 3.11.3, running on Arch Linux
Python 3.11.3 (main, Jun 5 2023, 09:32:32) [GCC 13.1.1 20230429] on linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-106816
* gh-107981
* gh-107982
* gh-107983
* gh-108248
* gh-108251
* gh-108252
<!-- /gh-linked-prs -->
| 09322724319d4c23195300b222a1c0ea720af56b | 607f18c89456cdc9064e27f86a7505e011209757 |
python/cpython | python__cpython-106239 | # Handle KeyboardInterrupt during logging._acquireLock()
We've come across a concurrency bug in `logging/__init__.py` which involves the handling of asynchronous exceptions, such as `KeyboardInterrupt`, during the execution of `logging._acquireLock()`.
In the current implementation, when `threading.RLock.acquire()` is executed, there is a possibility for an asynchronous exception to occur during the transition back from native code, even if the lock acquisition is successful.
The typical use of `_acquireLock()` in the logging library is as follows:
```
def _loggingMethod(handler):
"""
Add a handler to the internal cleanup list using a weak reference.
"""
_acquireLock()
try:
# doSomething
finally:
_releaseLock()
```
In this pattern, if a `KeyboardInterrupt` is raised during the lock acquisition, the lock ends up getting abandoned.
When can this happen? One example is during forks. `logging/__init__.py` registers an at-fork hook, with
```
os.register_at_fork(before=_acquireLock,
after_in_child=_after_at_fork_child_reinit_locks,
after_in_parent=_releaseLock)
```
A scenario occurring in our production environment is during a slow fork operation (when the server is under heavy load and performing a multitude of forks). The lock could be held for up to a minute. If this is happening in a secondary thread, and a SIGINT signal is received in the main thread while it is waiting to acquire the lock for logging, the lock will be abandoned. This will cause the process to hang during the next `_acquireLock()` call.
To address this issue, we provide a simple pull request to add a try-except block within _acquireLock(), e.g.:
```
def _acquireLock():
if _lock:
try:
_lock.acquire()
except BaseException:
_lock.release()
raise
```
This way, if an exception arises during the lock acquisition, the lock will be released, preventing the lock from being abandoned and the process from potentially hanging.
<!-- gh-linked-prs -->
### Linked PRs
* gh-106239
<!-- /gh-linked-prs -->
| 99b00efd5edfd5b26bf9e2a35cbfc96277fdcbb1 | 38aa89a52ed5194f70bbf07d699a2dd3720e2efd |
python/cpython | python__cpython-106237 | # `_DummyThread`s can be joined in `-OO` mode
Right now `_DummyThread` claims to not allow `join`: https://github.com/python/cpython/blob/fb0d9b9ac1ec3ea13fae8b8ef6a4f0a5a80482b3/Lib/threading.py#L1453-L1458
But, it can be easily changed with `python -OO` mode, which strips `assert` statements.
The easiest way to check this is:
```python
# ex.py
import threading, _thread
def f(mutex):
threading.current_thread()
mutex.release()
mutex = threading.Lock()
mutex.acquire()
tid = _thread.start_new_thread(f, (mutex,))
mutex.acquire()
threading._active[tid].join()
print('done')
```
`python ex.py` results in:
```
Traceback (most recent call last):
File "/Users/sobolev/Desktop/cpython/ex.py", line 12, in <module>
threading._active[tid].join()
File "/Users/sobolev/Desktop/cpython/Lib/threading.py", line 1458, in join
assert False, "cannot join a dummy thread"
AssertionError: cannot join a dummy thread
```
But, `python -OO ex.py` results in:
```
done
```
This looks like an important behavior change. I propose to use explicit `AssertionError` / `RuntimeError` instead.
I have a PR ready.
<!-- gh-linked-prs -->
### Linked PRs
* gh-106237
<!-- /gh-linked-prs -->
| e4b88c1e4ac129b36f99a534387d64f7b8cda8ef | dd1884dc5dc1a540c60e98ea1bc482a51d996564 |
python/cpython | python__cpython-126921 | # Argparse: improve parse_known_args() doc
# Bug report
When the `ArgumentParser.parse_known_args()` method is used to parse a parameter with `action='append'` set, the parameter will be parsed in the whole argument list, including arguments in the unknown section.
## Example code showing the issue
```
import argparse
p = argparse.ArgumentParser()
p.add_argument("--foo", action='append')
print(p.parse_known_args(["--foo", "a", "STOP", "--foo", "b"]))
```
## Expected output
Parsing stops at the first unknown arg (`STOP`) and all following arguments are left untouched, as they might have to be parsed by a different parser elsewhere.
```
(Namespace(foo=['a']), ['STOP', '--foo', 'b'])
```
## Actual output
All instances of `--foo` are parsed, including the ones following the first unknown arg.
```
(Namespace(foo=['a', 'b']), ['STOP'])
```
# Your environment
- CPython versions tested on: 3.11.3, 3.10.6, 3.9.16
- Operating system and architecture: x86_64-pc-linux-gnu
<!-- gh-linked-prs -->
### Linked PRs
* gh-126921
* gh-134913
* gh-134914
<!-- /gh-linked-prs -->
| a4251411a97304ab001721c6231d86ddf4eac3f0 | cb8a72b301f47e76d93a7fe5b259e9a5758792e1 |
python/cpython | python__cpython-106234 | # Improve `InvalidTZPathWarning` warning with a stacklevel
Given this code:
```python
# ex.py
import zoneinfo
print(zoneinfo.TZPATH)
```
And this command to run it: `PYTHONTZPATH=ex.py ./python.exe ex.py`
Output will be:
```
» PYTHONTZPATH=ex.py ./python.exe ex.py
/Users/sobolev/Desktop/cpython/Lib/zoneinfo/_tzpath.py:44: InvalidTZPathWarning: Invalid paths specified in PYTHONTZPATH environment variable. Paths should be absolute but found the following relative paths:
ex.py
warnings.warn(
()
```
### Setting `stacklevel`
From `1` to `5`:
```
/Users/sobolev/Desktop/cpython/Lib/zoneinfo/_tzpath.py:44: InvalidTZPathWarning: Invalid paths specified in PYTHONTZPATH environment variable. Paths should be absolute but found the following relative paths:
ex.py
warnings.warn(
```
```
/Users/sobolev/Desktop/cpython/Lib/zoneinfo/_tzpath.py:22: InvalidTZPathWarning: Invalid paths specified in PYTHONTZPATH environment variable. Paths should be absolute but found the following relative paths:
ex.py
base_tzpath = _parse_python_tzpath(env_var)
```
```
/Users/sobolev/Desktop/cpython/Lib/zoneinfo/_tzpath.py:176: InvalidTZPathWarning: Invalid paths specified in PYTHONTZPATH environment variable. Paths should be absolute but found the following relative paths:
ex.py
reset_tzpath()
```
```
/Users/sobolev/Desktop/cpython/Lib/zoneinfo/__init__.py:10: InvalidTZPathWarning: Invalid paths specified in PYTHONTZPATH environment variable. Paths should be absolute but found the following relative paths:
ex.py
from . import _tzpath
```
```
/Users/sobolev/Desktop/cpython/ex.py:1: InvalidTZPathWarning: Invalid paths specified in PYTHONTZPATH environment variable. Paths should be absolute but found the following relative paths:
ex.py
import zoneinfo
```
Looks like `5` is the most informative, as it points at where this warning actually comes from.
I will send a PR with the fix.
CC @pganssle
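The mechanism behind these numbers is the `stacklevel` argument of `warnings.warn()`; a standalone sketch (the function names are made up, not the real zoneinfo internals):

```python
import warnings

def _parse_tzpath(env_var):
    # stacklevel=2 attributes the warning to _parse_tzpath()'s caller,
    # not to this warnings.warn() line itself.
    warnings.warn("Invalid paths specified", UserWarning, stacklevel=2)

def reset_tzpath_demo():
    _parse_tzpath("ex.py")

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    reset_tzpath_demo()

print(caught[0].category.__name__)  # UserWarning
```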
<!-- gh-linked-prs -->
### Linked PRs
* gh-106234
* gh-115081
<!-- /gh-linked-prs -->
| d7334e2c2012defaf7aae920d6a56689464509d1 | 1a10437a14b13100bdf41cbdab819c33258deb65 |
python/cpython | python__cpython-106296 | # timeit basic examples are not compatible for Windows (CMD/ PowerShell)
# Documentation
on this page
https://docs.python.org/3/library/timeit.html
the example
`python3 -m timeit '"-".join(str(n) for n in range(100))'`
should be
`python3 -m timeit "'-'.join(str(n) for n in range(100))"`
because CMD and PowerShell do not support the first format
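For what it's worth, running the same statement through the `timeit` Python API sidesteps shell quoting entirely, since the statement is just an ordinary Python string (this is a sketch, not part of the proposed doc change):

```python
import timeit

# The same statement as in the docs example; no shell quoting involved.
t = timeit.timeit('"-".join(str(n) for n in range(100))', number=10000)
print(f"{t:.3f} seconds for 10000 loops")
```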
<!-- gh-linked-prs -->
### Linked PRs
* gh-106296
* gh-106298
<!-- /gh-linked-prs -->
| 04dfc6fa9018e92a5b51c29fc0ff45419c596bc3 | eb7d6e7ad844955f9af88707d296e003c7ce4394 |
python/cpython | python__cpython-106166 | # Fix `test_opcache` to skip threaded tests on WebAssembly platforms
https://buildbot.python.org/all/#/builders/1056/builds/2355
https://buildbot.python.org/all/#/builders/1046/builds/2338
/cc @brandtbucher
<!-- gh-linked-prs -->
### Linked PRs
* gh-106166
<!-- /gh-linked-prs -->
| 4bde89462a95e5962e1467cfc1af5a6094c0c858 | 11731434df2d7d29b4260e5ad65b993cea775c36 |
python/cpython | python__cpython-106219 | # Emscripten JS trampolines do not interact well with wasm stack switching
There is a WIP proposal to enable WebAssembly stack switching, which has been implemented in V8.
https://github.com/WebAssembly/js-promise-integration
It is not possible to switch stacks that contain JS frames, so the Emscripten JS trampolines that allow calling functions with the wrong number of arguments don't work in this case. However, the js-promise-integration proposal requires the [type reflection for Wasm/JS API](https://github.com/WebAssembly/js-types) proposal, which allows us to actually count the number of arguments a function expects. I propose that, for better compatibility with stack switching, we check whether type reflection is available and, if so, use a switch block to select the appropriate signature. If type reflection is unavailable, we should fall back to the current EM_JS trampoline.
I'm working on a patch for this.
<!-- gh-linked-prs -->
### Linked PRs
* gh-106219
* gh-133984
* gh-134376
<!-- /gh-linked-prs -->
| 3086b86cfda829e23a71569908edbfbcdc16327f | 04130b290b545e64625c07dc8fa2709d17e70880 |
python/cpython | python__cpython-110757 | # C-analyzer (make check-c-globals) only works with GCC?
From #106173 I understand that the C-analyzer (Tools/c-analyzer, our tool that looks for mutable static globals in C code) only works with GCC. When I run it on macOS (i.e. in clang land) I get a long message that doesn't explain this, plus a traceback. Maybe it should detect that you're using clang (even if it's aliased to `gcc`) and print a shorter message without a traceback?
<details>
```
~/cpython$ make check-c-globals
python3.12 ./Tools/c-analyzer/check-c-globals.py \
--format summary \
--traceback
analyzing files...
.# requested file: </Users/guido/cpython/Include/pystats.h>
-------------------------
Non-constant global variables are generally not supported
in the CPython repo. We use a tool to analyze the C code
and report if any unsupported globals are found. The tool
may be run manually with:
./python Tools/c-analyzer/check-c-globals.py --format summary [FILE]
Occasionally the tool is unable to parse updated code.
If this happens then add the file to the "EXCLUDED" list
in Tools/c-analyzer/cpython/_parser.py and create a new
issue for fixing the tool (and CC ericsnowcurrently
on the issue).
If the tool reports an unsupported global variable and
it is actually const (and thus supported) then first try
fixing the declaration appropriately in the code. If that
doesn't work then add the variable to the "should be const"
section of Tools/c-analyzer/cpython/ignored.tsv.
If the tool otherwise reports an unsupported global variable
then first try to make it non-global, possibly adding to
PyInterpreterState (for core code) or module state (for
extension modules). In an emergency, you can add the
variable to Tools/c-analyzer/cpython/globals-to-fix.tsv
to get CI passing, but doing so should be avoided. If
this course it taken, be sure to create an issue for
eliminating the global (and CC ericsnowcurrently).
Traceback (most recent call last):
File "/Users/guido/cpython/./Tools/c-analyzer/check-c-globals.py", line 39, in <module>
main(cmd, cmd_kwargs)
File "/Users/guido/cpython/Tools/c-analyzer/cpython/__main__.py", line 498, in main
run_cmd(**cmd_kwargs)
File "/Users/guido/cpython/Tools/c-analyzer/cpython/__main__.py", line 159, in cmd_check
c_analyzer.cmd_check(
File "/Users/guido/cpython/Tools/c-analyzer/c_analyzer/__main__.py", line 325, in cmd_check
analyzed = _analyze(filenames, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/guido/cpython/Tools/c-analyzer/cpython/_analyzer.py", line 149, in analyze
analysis = Analysis.from_results(results)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/guido/cpython/Tools/c-analyzer/c_analyzer/info.py", line 281, in from_results
for info, resolved in results:
File "/Users/guido/cpython/Tools/c-analyzer/c_analyzer/__init__.py", line 59, in analyze_decls
decls = list(decls)
^^^^^^^^^^^
File "/Users/guido/cpython/Tools/c-analyzer/cpython/_analyzer.py", line 162, in iter_decls
for decl in decls:
File "/Users/guido/cpython/Tools/c-analyzer/c_analyzer/__init__.py", line 43, in iter_decls
for item in parsed:
File "/Users/guido/cpython/Tools/c-analyzer/c_parser/match.py", line 136, in filter_by_kind
for item in items:
File "/Users/guido/cpython/Tools/c-analyzer/cpython/_parser.py", line 399, in parse_files
yield from _parse_files(
File "/Users/guido/cpython/Tools/c-analyzer/c_parser/__init__.py", line 26, in parse_files
yield from _parse_file(
File "/Users/guido/cpython/Tools/c-analyzer/c_parser/__init__.py", line 47, in _parse_file
for item in _parse(srclines, **srckwargs):
File "/Users/guido/cpython/Tools/c-analyzer/c_parser/parser/__init__.py", line 128, in parse
for result in _parse(srclines, anon_name, **srckwargs):
File "/Users/guido/cpython/Tools/c-analyzer/c_parser/parser/__init__.py", line 159, in _parse
for result in parse_globals(source, anon_name):
File "/Users/guido/cpython/Tools/c-analyzer/c_parser/parser/_global.py", line 38, in parse_globals
for srcinfo in source:
File "/Users/guido/cpython/Tools/c-analyzer/c_parser/parser/__init__.py", line 173, in _iter_source
for fileinfo, line in lines:
File "/Users/guido/cpython/Tools/c-analyzer/c_parser/__init__.py", line 46, in <genexpr>
srclines = ((l.file, l.data) for l in preprocessed if l.kind == 'source')
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/guido/cpython/Tools/c-analyzer/c_parser/preprocessor/gcc.py", line 94, in _iter_lines
raise NotImplementedError((line, expected))
NotImplementedError: ('# 1 "<built-in>" 1', '# 1 "<built-in>"')
make: *** [check-c-globals] Error 1
~/cpython$
```
</details>
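A sketch of the kind of detection being suggested — `is_clang` is a hypothetical helper, not an existing function in Tools/c-analyzer; it assumes the compiler identifies itself in its `--version` output, which both GCC and Clang do:

```python
import shutil
import subprocess

def is_clang(cc="gcc"):
    """Return True if `cc` is actually Clang (e.g. on macOS, where
    gcc is commonly aliased to clang), so the tool could print a short
    'unsupported compiler' message instead of a traceback."""
    path = shutil.which(cc)
    if path is None:
        return False
    out = subprocess.run([cc, "--version"], capture_output=True, text=True)
    return "clang" in out.stdout.lower()
```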
<!-- gh-linked-prs -->
### Linked PRs
* gh-110757
<!-- /gh-linked-prs -->
| 898f531996f2c5399b13811682c578c4fd08afaa | b7f9661bc12fdfec98684c89f03177ae5d3d74c1 |