| repo | instance_id | problem_statement | merge_commit | base_commit |
|---|---|---|---|---|
python/cpython | python__cpython-121278 | # Allow `.. versionadded:: next` in docs
# Feature or enhancement
### Proposal:
In a PR to CPython, the `versionadded`, `versionchanged`, `versionremoved`, `deprecated`, `deprecated-removed` directives in documentation should currently be set to the upcoming release.
This is inconvenient:
- the numbers need to be changed in backports
- if a PR misses a feature release, the number needs to be updated
It would be good to treat this more like News entries, which live in a `next/` directory until a release, when the release manager bundles them up and assigns a version.
Concrete proposal:
- [x] Teach `versionadded` & the others to expand the version argument `next` to `<version> (unreleased)` (e.g. `3.14.0b0 (unreleased)`).
- [x] Add a tool that replaces the `next` with a given string (e.g. `3.14`).
- [x] Modify the release manager tooling to run the tool on release.
- [x] Add a check to release manager tooling that *built* HTML documentation for a fresh release does not include the string `(unreleased)`. The RM should be able to skip this test, in case of a false positive.
- [x] Update the Devguide.
- [x] Announce in Discourse
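A minimal sketch of what the replacement tool (second checkbox) could look like; the helper name and the exact regex are illustrative, not the actual release-tools implementation:

```python
import re

# The directives covered by the proposal (mirrors the list above).
_DIRECTIVES = ("versionadded", "versionchanged", "versionremoved",
               "deprecated", "deprecated-removed")

_NEXT = re.compile(
    r"(?P<directive>\.\.\s+(?:%s)::\s+)next\b" % "|".join(_DIRECTIVES))

def expand_next(text: str, version: str) -> str:
    """Replace ``next`` in version directives with a concrete version."""
    return _NEXT.sub(lambda m: m.group("directive") + version, text)
```

At release time, the release manager tooling would run something like this over every file under `Doc/`, passing the release's version string.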
### Has this already been discussed elsewhere?
I have already discussed this feature proposal on Discourse
### Links to previous discussion of this feature:
https://discuss.python.org/t/automating-versionadded-changed-markers-in-docs-to-expedite-prs/38423
<!-- gh-linked-prs -->
### Linked PRs
* gh-121278
* gh-124623
* gh-124718
* gh-125980
* gh-127827
* gh-127867
* gh-128117
<!-- /gh-linked-prs -->
### Related PRs
* release-tools PR: https://github.com/python/release-tools/pull/164
* devguide PR: https://github.com/python/devguide/pull/1413
### Discourse announcement
* https://discuss.python.org/t/versionadded-next/65280 | 7d24ea9db3e8fdca52058629c9ba577aba3d8e5c | 1ff1b899ce13b195d978736b78cd75ac021e64b5 |
python/cpython | python__cpython-121276 | # Some tests in test_smtplib and test_logging failed when Python is configured with `--disable-ipv6`
# Bug report
### Bug description:
Here is a sample of the error:
```
ERROR: test_basic (test.test_logging.SMTPHandlerTest.test_basic)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/asaka/Codes/cpython/Lib/test/test_logging.py", line 1121, in test_basic
server = TestSMTPServer((socket_helper.HOST, 0), self.process_message, 0.001,
sockmap)
File "/home/asaka/Codes/cpython/Lib/test/test_logging.py", line 882, in __init__
smtpd.SMTPServer.__init__(self, addr, None, map=sockmap,
~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
decode_data=True)
^^^^^^^^^^^^^^^^^
File "/home/asaka/Codes/cpython/Lib/test/support/smtpd.py", line 641, in __init__
self.bind(localaddr)
~~~~~~~~~^^^^^^^^^^^
File "/home/asaka/Codes/cpython/Lib/test/support/asyncore.py", line 332, in bind
return self.socket.bind(addr)
~~~~~~~~~~~~~~~~^^^^^^
OSError: bind(): bad family
```
This is caused by `SMTPServer` using `socket.getaddrinfo` to get the socket family: the results include IPv6 entries even when IPv6 support is disabled, and the chosen family is then passed to `socket.bind`:
https://github.com/python/cpython/blob/6343486eb60ac5a9e15402a592298259c5afdee1/Lib/test/support/smtpd.py#L636-L638
On my machine (Arch with Clang 17), `--disable-ipv6` must be specified whenever `--with-thread-sanitizer` is, so I think this is worth fixing even though `--disable-ipv6` is not widely used.
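A minimal sketch of the kind of guard needed (the helper name is mine, not the actual fix): skip `AF_INET6` results from `getaddrinfo` when the interpreter was built without IPv6 support, which is detectable via `socket.has_ipv6`:

```python
import socket

def usable_addrinfo(host, port):
    """Return the first getaddrinfo() result this build can actually bind."""
    for family, type_, proto, canonname, sockaddr in socket.getaddrinfo(
            host, port, type=socket.SOCK_STREAM):
        if family == socket.AF_INET6 and not socket.has_ipv6:
            continue  # IPv6 entry on a --disable-ipv6 build: skip it
        return family, type_, proto, canonname, sockaddr
    raise OSError("getaddrinfo returned no usable address family")
```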
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-121276
<!-- /gh-linked-prs -->
| 3998554bb05f5ce18e8a66492d23d094a2299442 | 070f1e2e5b9b31ee3e7a1af2e30d7e3a66040b17 |
python/cpython | python__cpython-121273 | # Move a few bits from compiler to earlier stages to simplify the compiler
There are validations in compile.c that can move to earlier stages (AST validation or symtable construction).
And the compiler is modifying the symbol table (setting ``ste_coroutine``), which it really shouldn't be doing.
This will simplify the compiler, which is one of the largest code components of the interpreter.
<!-- gh-linked-prs -->
### Linked PRs
* gh-121273
* gh-121297
* gh-121361
<!-- /gh-linked-prs -->
| 1ac273224a85126c4356e355f7445206fadde7ec | 6343486eb60ac5a9e15402a592298259c5afdee1 |
python/cpython | python__cpython-122716 | # Remove remnants of support for non-IEEE 754 systems from cmathmodule.c
# Feature or enhancement
### Proposal:
Proposed patch:
```diff
diff --git a/Modules/cmathmodule.c b/Modules/cmathmodule.c
index bf86a211bc..591442334e 100644
--- a/Modules/cmathmodule.c
+++ b/Modules/cmathmodule.c
@@ -185,15 +185,8 @@ cmath_acos_impl(PyObject *module, Py_complex z)
if (fabs(z.real) > CM_LARGE_DOUBLE || fabs(z.imag) > CM_LARGE_DOUBLE) {
/* avoid unnecessary overflow for large arguments */
r.real = atan2(fabs(z.imag), z.real);
- /* split into cases to make sure that the branch cut has the
- correct continuity on systems with unsigned zeros */
- if (z.real < 0.) {
- r.imag = -copysign(log(hypot(z.real/2., z.imag/2.)) +
- M_LN2*2., z.imag);
- } else {
- r.imag = copysign(log(hypot(z.real/2., z.imag/2.)) +
- M_LN2*2., -z.imag);
- }
+ r.imag = -copysign(log(hypot(z.real/2., z.imag/2.)) +
+ M_LN2*2., z.imag);
} else {
s1.real = 1.-z.real;
s1.imag = -z.imag;
@@ -386,11 +379,7 @@ cmath_atanh_impl(PyObject *module, Py_complex z)
*/
h = hypot(z.real/2., z.imag/2.); /* safe from overflow */
r.real = z.real/4./h/h;
- /* the two negations in the next line cancel each other out
- except when working with unsigned zeros: they're there to
- ensure that the branch cut has the correct continuity on
- systems that don't support signed zeros */
- r.imag = -copysign(Py_MATH_PI/2., -z.imag);
+ r.imag = copysign(Py_MATH_PI/2., z.imag);
errno = 0;
} else if (z.real == 1. && ay < CM_SQRT_DBL_MIN) {
/* C99 standard says: atanh(1+/-0.) should be inf +/- 0i */
```
Removing this legacy code was [proposed](https://github.com/python/cpython/pull/102067#issuecomment-1437850117) in #102067, but that was *before* #31790 was merged. Maybe the change makes sense now?
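For context, the deleted comments are about signed zeros: on IEEE 754 systems `copysign` distinguishes `-0.0` from `+0.0`, which is exactly what keeps the branch cuts continuous. A quick demonstration from Python:

```python
import cmath
import math

# copysign sees the sign bit of zero, which the two negations in the
# deleted comments were compensating for on non-IEEE systems
assert math.copysign(1.0, -0.0) == -1.0
assert math.copysign(1.0, 0.0) == 1.0

# atanh's branch cut lies on the real axis for |x| > 1: the sign of the
# result's imaginary part follows the sign of the zero imaginary part
assert cmath.atanh(complex(2.0, 0.0)).imag > 0   # approaches +pi/2
assert cmath.atanh(complex(2.0, -0.0)).imag < 0  # approaches -pi/2
```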
### Has this already been discussed elsewhere?
No response given
### Links to previous discussion of this feature:
https://github.com/python/cpython/pull/102067#issuecomment-1437850117
<!-- gh-linked-prs -->
### Linked PRs
* gh-122716
<!-- /gh-linked-prs -->
| b6e745a27e9c98127acee436e4855066c58b7a3b | 2f5c3b09e45798a18d60841d04a165fb062be666 |
python/cpython | python__cpython-121269 | # Tarfile is unnecessarily slow
# Bug report
### Bug description:
There is room for improvement in tarfile write performance. In a simple benchmark I find that tarfile spends most of its time doing repeated user name/group name queries.
https://gist.github.com/jforberg/86af759c796199740c31547ae828aef2
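The idea behind the eventual fix is to cache the uid/gid name lookups instead of querying the system database for every archive member. A rough sketch of that idea (helper names are mine, and it is Unix-only since it relies on `pwd`/`grp`):

```python
import functools
import grp
import pwd

@functools.lru_cache(maxsize=None)
def uname_for(uid):
    """Cached user-name lookup; empty string if the uid is unknown."""
    try:
        return pwd.getpwuid(uid).pw_name
    except KeyError:
        return ""

@functools.lru_cache(maxsize=None)
def gname_for(gid):
    """Cached group-name lookup; empty string if the gid is unknown."""
    try:
        return grp.getgrgid(gid).gr_name
    except KeyError:
        return ""
```

Since a tar archive typically contains files owned by only a handful of users, nearly every lookup after the first is a cache hit.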
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-121269
<!-- /gh-linked-prs -->
| 2b2d607095335024e5e2bb358e3ef37650536839 | 616468b87bc5bcf5a4db688637ef748e1243db8a |
python/cpython | python__cpython-121493 | # `always_inline` makes build fail with `-Og` and `--without-pydebug`
# Bug report
### Bug description:
We're trying to build CPython to debug a crash in a third-party extension. To do that, I've set `CFLAGS="-Og -g"` and passed `--with-assertions`. However, we're not using `--with-pydebug`, since that has had side effects that broke multiple third-party packages in the past. Unfortunately, in this configuration CPython fails to build due to the use of `always_inline`:
```
gcc -c -fno-strict-overflow -fstack-protector-strong -Wtrampolines -Wsign-compare -g -O3 -Wall -Og -std=c11 -Wextra -Wno-unused-parameter -Wno-missing-field-initializers -Wstrict-prototypes -Werror=implicit-function-declaration -fvisibility=hidden -I./Include/internal -I./Include/internal/mimalloc -I. -I./Include -DPy_BUILD_CORE -o Objects/dictobject.o Objects/dictobject.c
In function ‘do_lookup’,
inlined from ‘unicodekeys_lookup_unicode’ at Objects/dictobject.c:1137:12:
Objects/dictobject.c:1120:1: error: inlining failed in call to ‘always_inline’ ‘compare_unicode_unicode’: function not considered for inlining
1120 | compare_unicode_unicode(PyDictObject *mp, PyDictKeysObject *dk,
| ^~~~~~~~~~~~~~~~~~~~~~~
Objects/dictobject.c:1052:30: note: called from here
1052 | Py_ssize_t cmp = check_lookup(mp, dk, ep0, ix, key, hash);
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Objects/dictobject.c:1120:1: error: inlining failed in call to ‘always_inline’ ‘compare_unicode_unicode’: function not considered for inlining
1120 | compare_unicode_unicode(PyDictObject *mp, PyDictKeysObject *dk,
| ^~~~~~~~~~~~~~~~~~~~~~~
Objects/dictobject.c:1068:30: note: called from here
1068 | Py_ssize_t cmp = check_lookup(mp, dk, ep0, ix, key, hash);
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In function ‘do_lookup’,
inlined from ‘unicodekeys_lookup_generic’ at Objects/dictobject.c:1116:12:
Objects/dictobject.c:1085:1: error: inlining failed in call to ‘always_inline’ ‘compare_unicode_generic’: function not considered for inlining
1085 | compare_unicode_generic(PyDictObject *mp, PyDictKeysObject *dk,
| ^~~~~~~~~~~~~~~~~~~~~~~
Objects/dictobject.c:1052:30: note: called from here
1052 | Py_ssize_t cmp = check_lookup(mp, dk, ep0, ix, key, hash);
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Objects/dictobject.c:1085:1: error: inlining failed in call to ‘always_inline’ ‘compare_unicode_generic’: function not considered for inlining
1085 | compare_unicode_generic(PyDictObject *mp, PyDictKeysObject *dk,
| ^~~~~~~~~~~~~~~~~~~~~~~
Objects/dictobject.c:1068:30: note: called from here
1068 | Py_ssize_t cmp = check_lookup(mp, dk, ep0, ix, key, hash);
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In function ‘do_lookup’,
inlined from ‘dictkeys_generic_lookup’ at Objects/dictobject.c:1171:12:
Objects/dictobject.c:1141:1: error: inlining failed in call to ‘always_inline’ ‘compare_generic’: function not considered for inlining
1141 | compare_generic(PyDictObject *mp, PyDictKeysObject *dk,
| ^~~~~~~~~~~~~~~
Objects/dictobject.c:1052:30: note: called from here
1052 | Py_ssize_t cmp = check_lookup(mp, dk, ep0, ix, key, hash);
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Objects/dictobject.c:1141:1: error: inlining failed in call to ‘always_inline’ ‘compare_generic’: function not considered for inlining
1141 | compare_generic(PyDictObject *mp, PyDictKeysObject *dk,
| ^~~~~~~~~~~~~~~
Objects/dictobject.c:1068:30: note: called from here
1068 | Py_ssize_t cmp = check_lookup(mp, dk, ep0, ix, key, hash);
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
make: *** [Makefile:3051: Objects/dictobject.o] Error 1
```
### CPython versions tested on:
3.13, CPython main branch
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-121493
* gh-121581
* gh-121949
* gh-122095
<!-- /gh-linked-prs -->
| c5a6b9afd82cad3f6abd9dc71cd5fdd5781a53f5 | 81fd625b5c30cc6f417c93bad404923676ad8ca3 |
python/cpython | python__cpython-121270 | # Stackrefs (#118450) introduced a large performance regression on Windows
### Bug description:
#118450 "Convert the evaluation stack to stack refs" (by @Fidget-Spinner) unfortunately seems to have introduced a [7% - 11% performance regression overall](https://github.com/faster-cpython/benchmarking-public/tree/main/results/bm-20240627-3.14.0a0-22b0de2) (though as much as 25% for some benchmarks) on Windows only. Linux (both x86_64 and aarch64) and macOS (arm) seem to have a change beneath the noise threshold. To be clear, @Fidget-Spinner's work is great and the lesson here is that for these sorts of sweeping low-level changes, we should have benchmarked on all of the Tier 1 platforms to be certain.
I don't know what the cause is -- it is probably MSVC not being able to "optimize through" something and introducing unnecessary overhead, but that's just a theory.
Cc: @brandtbucher, @markshannon
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Windows
<!-- gh-linked-prs -->
### Linked PRs
* gh-121270
<!-- /gh-linked-prs -->
| 722229e5dc1e499664966e50bb98065670033300 | 93156880efd14ad7adc7d3512552b434f5543890 |
python/cpython | python__cpython-121613 | # Add support for C99 complex type (_Complex) to the struct module
# Feature or enhancement
### Proposal:
The struct module has support for ``float`` and ``double`` types, so there should also be at least ``float _Complex`` and ``double _Complex``. I'll work on a patch.
<details>
<summary>Initial version</summary>
```diff
diff --git a/Lib/ctypes/__init__.py b/Lib/ctypes/__init__.py
index d2e6a8bfc8..d941036719 100644
--- a/Lib/ctypes/__init__.py
+++ b/Lib/ctypes/__init__.py
@@ -208,6 +208,7 @@ class c_longdouble(_SimpleCData):
try:
class c_double_complex(_SimpleCData):
_type_ = "C"
+ _check_size(c_double_complex)
except AttributeError:
pass
diff --git a/Modules/_struct.c b/Modules/_struct.c
index 6a68478dd4..caf4975413 100644
--- a/Modules/_struct.c
+++ b/Modules/_struct.c
@@ -12,6 +12,9 @@
#include "pycore_long.h" // _PyLong_AsByteArray()
#include "pycore_moduleobject.h" // _PyModule_GetState()
+#ifdef Py_HAVE_C_COMPLEX
+# include "_complex.h" // complex
+#endif
#include <stddef.h> // offsetof()
/*[clinic input]
@@ -80,6 +83,9 @@ typedef struct { char c; int x; } st_int;
typedef struct { char c; long x; } st_long;
typedef struct { char c; float x; } st_float;
typedef struct { char c; double x; } st_double;
+#ifdef Py_HAVE_C_COMPLEX
+typedef struct { char c; double complex x; } st_double_complex;
+#endif
typedef struct { char c; void *x; } st_void_p;
typedef struct { char c; size_t x; } st_size_t;
typedef struct { char c; _Bool x; } st_bool;
@@ -89,6 +95,9 @@ typedef struct { char c; _Bool x; } st_bool;
#define LONG_ALIGN (sizeof(st_long) - sizeof(long))
#define FLOAT_ALIGN (sizeof(st_float) - sizeof(float))
#define DOUBLE_ALIGN (sizeof(st_double) - sizeof(double))
+#ifdef Py_HAVE_C_COMPLEX
+#define DOUBLE_COMPLEX_ALIGN (sizeof(st_double_complex) - sizeof(double complex))
+#endif
#define VOID_P_ALIGN (sizeof(st_void_p) - sizeof(void *))
#define SIZE_T_ALIGN (sizeof(st_size_t) - sizeof(size_t))
#define BOOL_ALIGN (sizeof(st_bool) - sizeof(_Bool))
@@ -518,6 +527,17 @@ nu_double(_structmodulestate *state, const char *p, const formatdef *f)
return PyFloat_FromDouble(x);
}
+#ifdef Py_HAVE_C_COMPLEX
+static PyObject *
+nu_double_complex(_structmodulestate *state, const char *p, const formatdef *f)
+{
+ double complex x;
+
+ memcpy(&x, p, sizeof(x));
+ return PyComplex_FromDoubles(creal(x), cimag(x));
+}
+#endif
+
static PyObject *
nu_void_p(_structmodulestate *state, const char *p, const formatdef *f)
{
@@ -791,6 +811,24 @@ np_double(_structmodulestate *state, char *p, PyObject *v, const formatdef *f)
return 0;
}
+#ifdef Py_HAVE_C_COMPLEX
+static int
+np_double_complex(_structmodulestate *state, char *p, PyObject *v,
+ const formatdef *f)
+{
+ Py_complex c = PyComplex_AsCComplex(v);
+ double complex x = CMPLX(c.real, c.imag);
+
+ if (c.real == -1 && PyErr_Occurred()) {
+ PyErr_SetString(state->StructError,
+ "required argument is not a complex");
+ return -1;
+ }
+ memcpy(p, (char *)&x, sizeof(x));
+ return 0;
+}
+#endif
+
static int
np_void_p(_structmodulestate *state, char *p, PyObject *v, const formatdef *f)
{
@@ -829,6 +867,9 @@ static const formatdef native_table[] = {
{'e', sizeof(short), SHORT_ALIGN, nu_halffloat, np_halffloat},
{'f', sizeof(float), FLOAT_ALIGN, nu_float, np_float},
{'d', sizeof(double), DOUBLE_ALIGN, nu_double, np_double},
+#ifdef Py_HAVE_C_COMPLEX
+ {'C', sizeof(double complex), DOUBLE_COMPLEX_ALIGN, nu_double_complex, np_double_complex},
+#endif
{'P', sizeof(void *), VOID_P_ALIGN, nu_void_p, np_void_p},
{0}
};
```
</details>
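Until a dedicated format code is available, a complex value can already be packed portably as two consecutive doubles; a small sketch (the helper names are mine):

```python
import struct

def pack_complex(z: complex, byteorder: str = "<") -> bytes:
    # A complex number as two IEEE 754 doubles: real part, then imaginary.
    return struct.pack(byteorder + "2d", z.real, z.imag)

def unpack_complex(buf: bytes, byteorder: str = "<") -> complex:
    real, imag = struct.unpack(byteorder + "2d", buf)
    return complex(real, imag)
```

This matches the memory layout C99 mandates for ``double _Complex`` (an array of two doubles), which is why the ``memcpy``-based conversion in the patch above works.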
### Has this already been discussed elsewhere?
This is a minor feature, which does not need previous discussion elsewhere
### Links to previous discussion of this feature:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-121613
* gh-131867
* gh-132827
* gh-132863
* gh-132864
* gh-133249
<!-- /gh-linked-prs -->
| 7487db4c7af629f0a81b2127a3ee0000a288cefc | f55273b3b7124dc570911724107c2440f37905fc |
python/cpython | python__cpython-121255 | # Python 3.13.0b3 REPL empties ~/.python_history
# Bug report
### Bug description:
As discussed in https://discuss.python.org/t/python-3-13-0-beta-3-now-available/56847/7 and https://github.com/python/cpython/issues/120766#issuecomment-2200251720
When I updated from Python 3.13.0b2 to b3 and started the REPL for the first time, pressing arrow up showed the history.
Upon exiting and starting again, there is no more history when I press arrow up. I noticed it is also missing from my other Python REPLs. `~/.python_history` is empty.
To reproduce, I did:
1. run `python3.13`
2. press arrow up, hit enter
3. Ctrl+D
4. run `python3.13` again -- no history
### CPython versions tested on:
3.13
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-121255
* gh-121259
* gh-121261
* gh-121322
* gh-121659
* gh-121816
<!-- /gh-linked-prs -->
| 7a807c3efaa83f1e4fb9b791579b47a0a1fd47de | 7435f053b4a54372a2c43dee7a15c4b973f09209 |
python/cpython | python__cpython-121221 | # [weakref]: Consider marking ``test_threaded_weak_key_dict_copy`` and ``test_threaded_weak_value_dict_copy`` as cpu-heavy tests
# Feature or enhancement
### Proposal:
I'm wondering why the deepcopy versions of these tests are marked as ``requires_resource('cpu')`` while the ``copy`` tests are not, even though they take almost the same amount of time (``test_weakref`` takes about 13 seconds with the deepcopy tests enabled and the copy tests disabled, and about 11 seconds with the copy tests enabled and the deepcopy tests disabled).
Previously:
```shell
./python.exe -m test -q test_weakref
Using random seed: 1032227984
0:00:00 load avg: 1.24 Run 1 test sequentially in a single process
== Tests result: SUCCESS ==
Total duration: 11.7 sec
Total tests: run=137 skipped=2
Total test files: run=1/1
Result: SUCCESS
```
After marking these two tests as cpu-heavy:
```shell
./python.exe -m test -q test_weakref
Using random seed: 2063515471
0:00:00 load avg: 1.43 Run 1 test sequentially in a single process
== Tests result: SUCCESS ==
Total duration: 2.7 sec
Total tests: run=137 skipped=4
Total test files: run=1/1
Result: SUCCESS
```
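Marking a test as cpu-heavy is a one-line change; a sketch of what the proposal amounts to (class and test names mirror the real ones in `test_weakref`, the body is elided):

```python
import unittest
from test import support

class MappingTestCase(unittest.TestCase):
    @support.requires_resource('cpu')
    def test_threaded_weak_key_dict_copy(self):
        # Under regrtest this is skipped unless run with -u cpu (or -u all);
        # under a plain unittest run, all resources count as enabled.
        ...
```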
### Has this already been discussed elsewhere?
This is a minor feature, which does not need previous discussion elsewhere
### Links to previous discussion of this feature:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-121221
<!-- /gh-linked-prs -->
| c7991cc28788bbb086fd85d8fc55e20742f0de88 | 56a3ce2715509fc8e42ae40ec40ce6a590448da4 |
python/cpython | python__cpython-121211 | # `ast.compare` fails if fields or attributes are missing at runtime
# Bug report
### Bug description:
`ast.compare` does not handle the case where attributes or fields are missing at runtime:
```python
>>> import ast
>>> a = ast.parse('a').body[0].value
>>> del a.id
>>> ast.compare(a, a)
Traceback (most recent call last):
File "<python-input-5>", line 1, in <module>
ast.compare(a,a)
~~~~~~~~~~~^^^^^
File "/lib/python/cpython/Lib/ast.py", line 473, in compare
if not _compare_fields(a, b):
~~~~~~~~~~~~~~~^^^^^^
File "/lib/python/cpython/Lib/ast.py", line 452, in _compare_fields
a_field = getattr(a, field)
AttributeError: 'Name' object has no attribute 'id'
>>> a = ast.parse('a').body[0].value
>>> del a.lineno
>>> ast.compare(a, a, compare_attributes=True)
Traceback (most recent call last):
File "<python-input-8>", line 2, in <module>
ast.compare(a, a, compare_attributes=True)
~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/lib/python/cpython/Lib/ast.py", line 475, in compare
if compare_attributes and not _compare_attributes(a, b):
~~~~~~~~~~~~~~~~~~~^^^^^^
File "/lib/python/cpython/Lib/ast.py", line 464, in _compare_attributes
a_attr = getattr(a, attr)
AttributeError: 'Name' object has no attribute 'lineno'
```
I suggest making `ast.compare` ignore a field/attribute if it's missing on *both* operands (they *do* compare equal in the sense that neither has that specific field; note that even without that assumption, we still have an issue with `ast.compare(ast.Name('a'), ast.Name('a'), compare_attributes=True)`).
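A sketch of the suggested semantics using a sentinel default for `getattr` (the helper name is mine; the real `_compare_fields` in `Lib/ast.py` differs in detail):

```python
import ast

_MISSING = object()

def fields_match(a, b, field):
    a_val = getattr(a, field, _MISSING)
    b_val = getattr(b, field, _MISSING)
    if a_val is _MISSING or b_val is _MISSING:
        # Equal only when the field is absent on *both* operands.
        return a_val is b_val
    return a_val == b_val

node = ast.parse('a').body[0].value
del node.id                      # simulate a field missing at runtime
assert fields_match(node, node, 'id')
```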
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-121211
<!-- /gh-linked-prs -->
| 15232a0819a2f7e0f448f28f2e6081912d10e7cb | 7a807c3efaa83f1e4fb9b791579b47a0a1fd47de |
python/cpython | python__cpython-121207 | # test_posixpath.test_expanduser_pwd2() fails on s390x Fedora Rawhide 3.x
build: https://buildbot.python.org/all/#/builders/538/builds/4698
```
FAIL: test_expanduser_pwd2 (test.test_posixpath.PosixPathTest.test_expanduser_pwd2)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/dje/cpython-buildarea/3.x.edelsohn-fedora-rawhide-z/build/Lib/test/test_posixpath.py", line 366, in test_expanduser_pwd2
self.assertEqual(posixpath.expanduser('~' + name), home)
~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AssertionError: '/nonexisting' != '/'
- /nonexisting
+ /
```
<!-- gh-linked-prs -->
### Linked PRs
* gh-121207
* gh-121213
* gh-121214
* gh-121226
* gh-121228
* gh-121231
* gh-121232
<!-- /gh-linked-prs -->
| 05a6f8da6042cc87da1cd3824c1375d12753e5a1 | c766ad206ea60b1e0edcb625b99e7631954a984f |
python/cpython | python__cpython-121203 | # Python fails to build on s390x RHEL7 LTO + PGO 3.x on: #if defined(__APPLE__) && defined(__has_attribute) && __has_attribute(availability)
s390x RHEL7 LTO + PGO 3.x: https://buildbot.python.org/all/#/builders/244/builds/7626
```
./Modules/timemodule.c:1491:70: error: missing binary operator before token "("
#if defined(__APPLE__) && defined(__has_attribute) && __has_attribute(availability)
^
make[2]: *** [Modules/timemodule.o] Error 1
```
<!-- gh-linked-prs -->
### Linked PRs
* gh-121203
<!-- /gh-linked-prs -->
| a0b8b342c5d0b4722ad9cfe82f2630025d445f00 | af8c3d7a26d605099f5b3406a8d33ecddb77e8fb |
python/cpython | python__cpython-121197 | # `dict.fromkeys` must mark its parameters as pos-only
# Bug report
Here's how it looks now:
<img width="795" alt="Screenshot 2024-07-01 at 10:05:59" src="https://github.com/python/cpython/assets/4660275/676bfb53-1b8a-480f-a175-bfa0e6ed2bc6">
From this definition I understand that I can pass `value` as a named keyword argument, but I can't:
```python
>>> dict.fromkeys(x, value=0)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: dict.fromkeys() takes no keyword arguments
```
```python
>>> import inspect
>>> inspect.signature(dict.fromkeys)
<Signature (iterable, value=None, /)>
```
I just made this error in real code: https://github.com/wemake-services/wemake-python-styleguide/pull/2994
Many other definitions in this file use `/` to properly mark positional-only parameters, like:
<img width="437" alt="Screenshot 2024-07-01 at 10:08:24" src="https://github.com/python/cpython/assets/4660275/cc9da929-ef84-414e-aa65-a497903aff7c">
<img width="284" alt="Screenshot 2024-07-01 at 10:10:42" src="https://github.com/python/cpython/assets/4660275/7eeb40b3-877f-4931-a138-b2e27e7c2b45">
etc.
So, I will send a PR to add `/` to `dict.fromkeys`.
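For reference, the runtime signature already reports both parameters as positional-only; it is only the rendered docstring that was missing the `/`. A short check plus the correct call shape:

```python
import inspect

# inspect.signature already marks both parameters positional-only
params = inspect.signature(dict.fromkeys).parameters.values()
assert all(p.kind is inspect.Parameter.POSITIONAL_ONLY for p in params)

# so the default value must be passed positionally, never as value=...
d = dict.fromkeys(["a", "b"], 0)
assert d == {"a": 0, "b": 0}
```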
<!-- gh-linked-prs -->
### Linked PRs
* gh-121197
* gh-121242
* gh-121243
<!-- /gh-linked-prs -->
| 1dc9a4f6b20148fd4ef2eb2800a6c65224828181 | 33903c53dbdb768e1ef7c46d347869577f2173ce |
python/cpython | python__cpython-121195 | # Test suite interrupted by xml.etree.ElementTree.ParseError: not well-formed (invalid token)
It seems like the problem comes from `0x1b` bytes (e.g. displayed as `^[` in vim), i.e. ANSI escape sequences, which are not properly escaped.
Example: https://buildbot.python.org/all/#/builders/332/builds/1457
```
0:04:35 load avg: 8.27 [350/478] test_sys_setprofile passed -- running (1): test.test_multiprocessing_spawn.test_processes (30.2 sec)
0:04:36 load avg: 8.27 [351/478] test_named_expressions passed -- running (1): test.test_multiprocessing_spawn.test_processes (30.4 sec)
['<testsuite start="2024-06-30 10:05:12.412148" tests="117" errors="0" failures="1"><testcase name="test.test_pyrepl.test_input.KeymapTranslatorTests.test_empty" status="run" result="completed" time="0.000074" /><testcase name="test.test_pyrepl.test_input.KeymapTranslatorTests.test_push_character_key" status="run" result="completed" time="0.000064" /><testcase name="test.test_pyrepl.test_input.KeymapTranslatorTests.test_push_character_key_with_stack" status="run" result="completed" time="0.000071" /><testcase name="test.test_pyrepl.test_input.KeymapTranslatorTests.test_push_invalid_key" status="run" result="completed" time="0.000046" /><testcase name="test.test_pyrepl.test_input.KeymapTranslatorTests.test_push_invalid_key_with_stack" status="run" result="completed" time="0.000051" /><testcase name="test.test_pyrepl.test_input.KeymapTranslatorTests.test_push_invalid_key_with_unicode_category" status="run" result="completed" time="0.000045" /><testcase name="test.test_pyrepl.test_input.KeymapTranslatorTests.test_push_multiple_keys" status="run" result="completed" time="0.000048" /><testcase name="test.test_pyrepl.test_input.KeymapTranslatorTests.test_push_single_key" status="run" result="completed" time="0.000043" /><testcase name="test.test_pyrepl.test_input.KeymapTranslatorTests.test_push_transition_key" status="run" result="completed" time="0.000065" /><testcase name="test.test_pyrepl.test_input.KeymapTranslatorTests.test_push_transition_key_interrupted" status="run" result="completed" time="0.000060" /><testcase name="test.test_pyrepl.test_interact.TestSimpleInteract.test_empty" status="run" result="completed" time="0.000075" /><testcase name="test.test_pyrepl.test_interact.TestSimpleInteract.test_multiple_statements" status="run" result="completed" time="0.000976" /><testcase name="test.test_pyrepl.test_interact.TestSimpleInteract.test_multiple_statements_output" status="run" result="completed" time="0.000235" /><testcase 
name="test.test_pyrepl.test_interact.TestSimpleInteract.test_no_active_future" status="run" result="completed" time="0.000209" /><testcase name="test.test_pyrepl.test_interact.TestSimpleInteract.test_runsource_compiles_and_runs_code" status="run" result="completed" time="0.000468" /><testcase name="test.test_pyrepl.test_interact.TestSimpleInteract.test_runsource_returns_false_for_failed_compilation" status="run" result="completed" time="0.000162" /><testcase name="test.test_pyrepl.test_interact.TestSimpleInteract.test_runsource_returns_false_for_successful_compilation" status="run" result="completed" time="0.000115" /><testcase name="test.test_pyrepl.test_interact.TestSimpleInteract.test_runsource_shows_syntax_error_for_failed_compilation" status="run" result="completed" time="0.000879" /><testcase name="test.test_pyrepl.test_keymap.TestCompileKeymap.test_clashing_definitions" status="run" result="completed" time="0.000062" /><testcase name="test.test_pyrepl.test_keymap.TestCompileKeymap.test_empty_keymap" status="run" result="completed" time="0.000033" /><testcase name="test.test_pyrepl.test_keymap.TestCompileKeymap.test_empty_value" status="run" result="completed" time="0.000036" /><testcase name="test.test_pyrepl.test_keymap.TestCompileKeymap.test_multiple_empty_values" status="run" result="completed" time="0.000035" /><testcase name="test.test_pyrepl.test_keymap.TestCompileKeymap.test_multiple_keymaps" status="run" result="completed" time="0.000034" /><testcase name="test.test_pyrepl.test_keymap.TestCompileKeymap.test_nested_keymap" status="run" result="completed" time="0.000034" /><testcase name="test.test_pyrepl.test_keymap.TestCompileKeymap.test_nested_multiple_keymaps" status="run" result="completed" time="0.000034" /><testcase name="test.test_pyrepl.test_keymap.TestCompileKeymap.test_non_bytes_key" status="run" result="completed" time="0.000045" /><testcase name="test.test_pyrepl.test_keymap.TestCompileKeymap.test_single_keymap" status="run" 
result="completed" time="0.000034" /><testcase name="test.test_pyrepl.test_keymap.TestParseKeys.test_combinations" status="run" result="completed" time="0.000063" /><testcase name="test.test_pyrepl.test_keymap.TestParseKeys.test_control_sequences" status="run" result="completed" time="0.000030" /><testcase name="test.test_pyrepl.test_keymap.TestParseKeys.test_escape_sequences" status="run" result="completed" time="0.000296" /><testcase name="test.test_pyrepl.test_keymap.TestParseKeys.test_index_errors" status="run" result="completed" time="0.000069" /><testcase name="test.test_pyrepl.test_keymap.TestParseKeys.test_keynames" status="run" result="completed" time="0.000864" /><testcase name="test.test_pyrepl.test_keymap.TestParseKeys.test_keyspec_errors" status="run" result="completed" time="0.000334" /><testcase name="test.test_pyrepl.test_keymap.TestParseKeys.test_meta_sequences" status="run" result="completed" time="0.000044" /><testcase name="test.test_pyrepl.test_keymap.TestParseKeys.test_single_character" status="run" result="completed" time="0.001213" /><testcase name="test.test_pyrepl.test_pyrepl.TestCursorPosition.test_cursor_position_after_wrap_and_move_up" status="run" result="completed" time="0.009281" /><testcase name="test.test_pyrepl.test_pyrepl.TestCursorPosition.test_cursor_position_double_width_character" status="run" result="completed" time="0.004559" /><testcase name="test.test_pyrepl.test_pyrepl.TestCursorPosition.test_cursor_position_double_width_character_move_left" status="run" result="completed" time="0.004815" /><testcase name="test.test_pyrepl.test_pyrepl.TestCursorPosition.test_cursor_position_double_width_character_move_left_right" status="run" result="completed" time="0.005351" /><testcase name="test.test_pyrepl.test_pyrepl.TestCursorPosition.test_cursor_position_double_width_characters_move_up" status="run" result="completed" time="0.012739" /><testcase 
name="test.test_pyrepl.test_pyrepl.TestCursorPosition.test_cursor_position_double_width_characters_move_up_down" status="run" result="completed" time="0.011296" /><testcase name="test.test_pyrepl.test_pyrepl.TestCursorPosition.test_cursor_position_move_down_to_eol" status="run" result="completed" time="0.010642" /><testcase name="test.test_pyrepl.test_pyrepl.TestCursorPosition.test_cursor_position_move_up_to_eol" status="run" result="completed" time="0.010715" /><testcase name="test.test_pyrepl.test_pyrepl.TestCursorPosition.test_cursor_position_multiple_double_width_characters_move_left" status="run" result="completed" time="0.007659" /><testcase name="test.test_pyrepl.test_pyrepl.TestCursorPosition.test_cursor_position_multiple_mixed_lines_move_up" status="run" result="completed" time="0.018315" /><testcase name="test.test_pyrepl.test_pyrepl.TestCursorPosition.test_cursor_position_simple_character" status="run" result="completed" time="0.003981" /><testcase name="test.test_pyrepl.test_pyrepl.TestCursorPosition.test_down_arrow_end_of_input" status="run" result="completed" time="0.007747" /><testcase name="test.test_pyrepl.test_pyrepl.TestCursorPosition.test_left_arrow_simple" status="run" result="completed" time="0.006869" /><testcase name="test.test_pyrepl.test_pyrepl.TestCursorPosition.test_right_arrow_end_of_line" status="run" result="completed" time="0.005664" /><testcase name="test.test_pyrepl.test_pyrepl.TestCursorPosition.test_up_arrow_simple" status="run" result="completed" time="0.007966" /><testcase name="test.test_pyrepl.test_pyrepl.TestMain.test_dumb_terminal_exits_cleanly" status="run" result="completed" time="0.604765" /><testcase name="test.test_pyrepl.test_pyrepl.TestMain.test_exposed_globals_in_repl" status="run" result="completed" time="0.587335"><system-out /><system-err /><failure type="AssertionError" message="AssertionError: False is not true : sorted(dir()) exit Python 3.14.0a0 (heads/refs/pull/120894/merge:3ea9e550eef, Jun 30 2024, 
06:00:16) [GCC 7.5.0] on linux Type "help", "copyright", "credits" or "license" for more information. \x1b[?2004h \x1b[?1h\x1b=\x1b[?25l\x1b[1A \x1b[1;35m>>> \x1b[0m\x1b[4D \x1b[?12l\x1b[?25h\x1b[4C \x1b[?25l\x1b[4D\x1b[1;35m>>> \x1b[0ms\x1b[5D\x1b[?12l\x1b[?25h\x1b[5C \x1b[?25l\x1b[5D\x1b[1;35m>>> \x1b[0mso\x1b[6D\x1b[?12l\x1b[?25h\x1b[6C \x1b[?25l\x1b[6D\x1b[1;35m>>> \x1b[0msor\x1b[7D \x1b[?12l\x1b[?25h\x1b[7C \x1b[?25l\x1b[7D\x1b[1;35m>>> \x1b[0msort\x1b[8D \x1b[?12l\x1b[?25h\x1b[8C \x1b[?25l\x1b[8D\x1b[1;35m>>> \x1b[0msorte\x1b[9D \x1b[?12l\x1b[?25h\x1b[9C \x1b[?25l\x1b[9D\x1b[1;35m>>> \x1b[0msorted \x1b[10D \x1b[?12l\x1b[?25h\x1b[10C \x1b[?25l\x1b[10D\x1b[1;35m>>> \x1b[0msorted( \x1b[11D \x1b[?12l\x1b[?25h\x1b[11C \x1b[?25l\x1b[11D\x1b[1;35m>>> \x1b[0msorted(d\x1b[12D\x1b[?12l\x1b[?25h\x1b[12C \x1b[?25l\x1b[12D\x1b[1;35m>>> \x1b[0msorted(di\x1b[13D \x1b[?12l\x1b[?25h\x1b[13C \x1b[?25l\x1b[13D\x1b[1;35m>>> \x1b[0msorted(dir \x1b[14D \x1b[?12l\x1b[?25h\x1b[14C \x1b[?25l\x1b[14D\x1b[1;35m>>> \x1b[0msorted(dir(\x1b[15D\x1b[?12l\x1b[?25h\x1b[15C \x1b[?25l\x1b[15D\x1b[1;35m>>> \x1b[0msorted(dir() \x1b[16D \x1b[?12l\x1b[?25h\x1b[16C \x1b[?25l\x1b[16D\x1b[1;35m>>> \x1b[0msorted(dir())\x1b[17D \x1b[?12l\x1b[?25h\x1b[17C \x1b[17D \x1b[?2004l \x1b[?1l\x1b> [\'__annotations__\', \'__builtins__\', \'__cached__\', \'__doc__\', \'__file__\', \'__loader__\', \'__name__\', \'__package__\', \'__spec__\', \'sys\', \'ver\'] \x1b[?2004h \x1b[?1h\x1b=\x1b[?25l\x1b[1A \x1b[1;35m>>> \x1b[0m \x1b[4D \x1b[?12l\x1b[?25h\x1b[4C \x1b[?25l\x1b[4D\x1b[1;35m>>> \x1b[0me\x1b[5D\x1b[?12l\x1b[?25h\x1b[5C \x1b[?25l\x1b[5D\x1b[1;35m>>> \x1b[0mex\x1b[6D \x1b[?12l\x1b[?25h\x1b[6C \x1b[?25l\x1b[6D\x1b[1;35m>>> \x1b[0mexi \x1b[7D \x1b[?12l\x1b[?25h\x1b[7C \x1b[?25l\x1b[7D\x1b[1;35m>>> \x1b[0mexit\x1b[8D\x1b[?12l\x1b[?25h\x1b[8C\x1b[8D \x1b[?2004l\x1b[?1l\x1b> ">Traceback (most recent call last):\n File "/home/dje/cpython-buildarea/pull_request.edelsohn-sles-z/build/Lib/unittest/case.py", line 58, in 
testPartExecutor\n yield\n File "/home/dje/cpython-buildarea/pull_request.edelsohn-sles-z/build/Lib/unittest/case.py", line 660, in run\n self._callTestMethod(testMethod)\n ~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^\n File "/home/dje/cpython-buildarea/pull_request.edelsohn-sles-z/build/Lib/unittest/case.py", line 606, in _callTestMethod\n result = method()\n File "/home/dje/cpython-buildarea/pull_request.edelsohn-sles-z/build/Lib/test/support/__init__.py", line 2622, in wrapper\n return func(*args, **kwargs)\n File "/home/dje/cpython-buildarea/pull_request.edelsohn-sles-z/build/Lib/test/test_pyrepl/test_pyrepl.py", line 865, in test_exposed_globals_in_repl\n self.assertTrue(case1 or case2 or case3 or case4, output)\n ~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File "/home/dje/cpython-buildarea/pull_request.edelsohn-sles-z/build/Lib/unittest/case.py", line 753, in assertTrue\n raise self.failureException(msg)\nAssertionError: False is not true : sorted(dir())\r\nexit\r\n\nPython 3.14.0a0 (heads/refs/pull/120894/merge:3ea9e550eef, Jun 30 2024, 06:00:16) [GCC 7.5.0] on linux\r\nType "help", "copyright", "credits" or "license" for more information.\r\n\n\x1b[?2004h\n\x1b[?1h\x1b=\x1b[?25l\x1b[1A\n\x1b[1;35m>>> \x1b[0m\x1b[4D\n\x1b[?12l\x1b[?25h\x1b[4C\n\x1b[?25l\x1b[4D\x1b[1;35m>>> \x1b[0ms\x1b[5D\x1b[?12l\x1b[?25h\x1b[5C\n\x1b[?25l\x1b[5D\x1b[1;35m>>> \x1b[0mso\x1b[6D\x1b[?12l\x1b[?25h\x1b[6C\n\x1b[?25l\x1b[6D\x1b[1;35m>>> \x1b[0msor\x1b[7D\n\x1b[?12l\x1b[?25h\x1b[7C\n\x1b[?25l\x1b[7D\x1b[1;35m>>> \x1b[0msort\x1b[8D\n\x1b[?12l\x1b[?25h\x1b[8C\n\x1b[?25l\x1b[8D\x1b[1;35m>>> \x1b[0msorte\x1b[9D\n\x1b[?12l\x1b[?25h\x1b[9C\n\x1b[?25l\x1b[9D\x1b[1;35m>>> \x1b[0msorted\n\x1b[10D\n\x1b[?12l\x1b[?25h\x1b[10C\n\x1b[?25l\x1b[10D\x1b[1;35m>>> \x1b[0msorted(\n\x1b[11D\n\x1b[?12l\x1b[?25h\x1b[11C\n\x1b[?25l\x1b[11D\x1b[1;35m>>> \x1b[0msorted(d\x1b[12D\x1b[?12l\x1b[?25h\x1b[12C\n\x1b[?25l\x1b[12D\x1b[1;35m>>> 
\x1b[0msorted(di\x1b[13D\n\x1b[?12l\x1b[?25h\x1b[13C\n\x1b[?25l\x1b[13D\x1b[1;35m>>> \x1b[0msorted(dir\n\x1b[14D\n\x1b[?12l\x1b[?25h\x1b[14C\n\x1b[?25l\x1b[14D\x1b[1;35m>>> \x1b[0msorted(dir(\x1b[15D\x1b[?12l\x1b[?25h\x1b[15C\n\x1b[?25l\x1b[15D\x1b[1;35m>>> \x1b[0msorted(dir()\n\x1b[16D\n\x1b[?12l\x1b[?25h\x1b[16C\n\x1b[?25l\x1b[16D\x1b[1;35m>>> \x1b[0msorted(dir())\x1b[17D\n\x1b[?12l\x1b[?25h\x1b[17C\n\x1b[17D\n\r\n\x1b[?2004l\n\x1b[?1l\x1b>\n[\'__annotations__\', \'__builtins__\', \'__cached__\', \'__doc__\', \'__file__\', \'__loader__\', \'__name__\', \'__package__\', \'__spec__\', \'sys\', \'ver\']\r\n\n\x1b[?2004h\n\x1b[?1h\x1b=\x1b[?25l\x1b[1A\n\n\x1b[1;35m>>> \x1b[0m\n\x1b[4D\n\x1b[?12l\x1b[?25h\x1b[4C\n\x1b[?25l\x1b[4D\x1b[1;35m>>> \x1b[0me\x1b[5D\x1b[?12l\x1b[?25h\x1b[5C\n\x1b[?25l\x1b[5D\x1b[1;35m>>> \x1b[0mex\x1b[6D\n\x1b[?12l\x1b[?25h\x1b[6C\n\x1b[?25l\x1b[6D\x1b[1;35m>>> \x1b[0mexi\n\x1b[7D\n\x1b[?12l\x1b[?25h\x1b[7C\n\x1b[?25l\x1b[7D\x1b[1;35m>>> \x1b[0mexit\x1b[8D\x1b[?12l\x1b[?25h\x1b[8C\x1b[8D\n\r\x1b[?2004l\x1b[?1l\x1b>\n</failure></testcase><testcase name="test.test_pyrepl.test_pyrepl.TestMain.test_python_basic_repl" status="run" result="completed" time="1.247115" /><testcase name="test.test_pyrepl.test_pyrepl.TestPasteEvent.test_bracketed_paste" status="run" result="completed" time="0.004426" /><testcase name="test.test_pyrepl.test_pyrepl.TestPasteEvent.test_bracketed_paste_single_line" status="run" result="completed" time="0.002461" /><testcase name="test.test_pyrepl.test_pyrepl.TestPasteEvent.test_paste" status="run" result="completed" time="0.006187" /><testcase name="test.test_pyrepl.test_pyrepl.TestPasteEvent.test_paste_mid_newlines" status="run" result="completed" time="0.003866" /><testcase name="test.test_pyrepl.test_pyrepl.TestPasteEvent.test_paste_mid_newlines_not_in_paste_mode" status="run" result="completed" time="0.003064" /><testcase name="test.test_pyrepl.test_pyrepl.TestPasteEvent.test_paste_not_in_paste_mode" status="run" 
result="completed" time="0.005041" /><testcase name="test.test_pyrepl.test_pyrepl.TestPyReplAutoindent.test_auto_indent_continuation" status="run" result="completed" time="0.003406" /><testcase name="test.test_pyrepl.test_pyrepl.TestPyReplAutoindent.test_auto_indent_default" status="run" result="completed" time="0.000031" /><testcase name="test.test_pyrepl.test_pyrepl.TestPyReplAutoindent.test_auto_indent_ignore_comments" status="run" result="completed" time="0.003174" /><testcase name="test.test_pyrepl.test_pyrepl.TestPyReplAutoindent.test_auto_indent_multiline" status="run" result="completed" time="0.003716" /><testcase name="test.test_pyrepl.test_pyrepl.TestPyReplAutoindent.test_auto_indent_prev_block" status="run" result="completed" time="0.004236" /><testcase name="test.test_pyrepl.test_pyrepl.TestPyReplAutoindent.test_auto_indent_with_comment" status="run" result="completed" time="0.003255" /><testcase name="test.test_pyrepl.test_pyrepl.TestPyReplCompleter.test_completion_with_many_options" status="run" result="completed" time="0.074031" /><testcase name="test.test_pyrepl.test_pyrepl.TestPyReplCompleter.test_completion_with_warnings" status="run" result="completed" time="0.010621" /><testcase name="test.test_pyrepl.test_pyrepl.TestPyReplCompleter.test_empty_namespace_completion" status="run" result="completed" time="0.002656" /><testcase name="test.test_pyrepl.test_pyrepl.TestPyReplCompleter.test_global_namespace_completion" status="run" result="completed" time="0.002461" /><testcase name="test.test_pyrepl.test_pyrepl.TestPyReplCompleter.test_simple_completion" status="run" result="completed" time="0.004463" /><testcase name="test.test_pyrepl.test_pyrepl.TestPyReplCompleter.test_updown_arrow_with_completion_menu" status="run" result="completed" time="0.073284" /><testcase name="test.test_pyrepl.test_pyrepl.TestPyReplOutput.test_basic" status="run" result="completed" time="0.002418" /><testcase 
name="test.test_pyrepl.test_pyrepl.TestPyReplOutput.test_control_character" status="run" result="completed" time="0.002317" /><testcase name="test.test_pyrepl.test_pyrepl.TestPyReplOutput.test_history_navigation_with_down_arrow" status="run" result="completed" time="0.002359" /><testcase name="test.test_pyrepl.test_pyrepl.TestPyReplOutput.test_history_navigation_with_up_arrow" status="run" result="completed" time="0.004004" /><testcase name="test.test_pyrepl.test_pyrepl.TestPyReplOutput.test_history_search" status="run" result="completed" time="0.003447" /><testcase name="test.test_pyrepl.test_pyrepl.TestPyReplOutput.test_multiline_edit" status="run" result="completed" time="0.003851" /><testcase name="test.test_pyrepl.test_reader.TestReader.test_calc_screen_backspace" status="run" result="completed" time="0.008366" /><testcase name="test.test_pyrepl.test_reader.TestReader.test_calc_screen_backspace_in_second_line_after_wrap" status="run" result="completed" time="0.007194" /><testcase name="test.test_pyrepl.test_reader.TestReader.test_calc_screen_wrap_removes_after_backspace" status="run" result="completed" time="0.006498" /><testcase name="test.test_pyrepl.test_reader.TestReader.test_calc_screen_wrap_simple" status="run" result="completed" time="0.012438" /><testcase name="test.test_pyrepl.test_reader.TestReader.test_calc_screen_wrap_three_lines" status="run" result="completed" time="0.008426" /><testcase name="test.test_pyrepl.test_reader.TestReader.test_calc_screen_wrap_three_lines_mixed_character" status="run" result="completed" time="0.009177" /><testcase name="test.test_pyrepl.test_reader.TestReader.test_calc_screen_wrap_wide_characters" status="run" result="completed" time="0.005860" /><testcase name="test.test_pyrepl.test_reader.TestReader.test_completions_updated_on_key_press" status="run" result="completed" time="0.012732" /><testcase name="test.test_pyrepl.test_reader.TestReader.test_input_hook_is_called_if_set" status="run" result="completed" 
time="0.012378" /><testcase name="test.test_pyrepl.test_reader.TestReader.test_key_press_on_tab_press_once" status="run" result="completed" time="0.009018" /><testcase name="test.test_pyrepl.test_reader.TestReader.test_keyboard_interrupt_clears_screen" status="run" result="completed" time="0.025747" /><testcase name="test.test_pyrepl.test_reader.TestReader.test_newline_within_block_trailing_whitespace" status="run" result="completed" time="0.012264" /><testcase name="test.test_pyrepl.test_reader.TestReader.test_prompt_length" status="run" result="completed" time="0.000110" /><testcase name="test.test_pyrepl.test_reader.TestReader.test_setpos_for_xy_simple" status="run" result="completed" time="0.004700" /><testcase name="test.test_pyrepl.test_reader.TestReader.test_setpos_from_xy_after_wrap" status="run" result="completed" time="0.007858" /><testcase name="test.test_pyrepl.test_reader.TestReader.test_setpos_from_xy_multiple_lines" status="run" result="completed" time="0.008174" /><testcase name="test.test_pyrepl.test_reader.TestReader.test_setpos_fromxy_in_wrapped_line" status="run" result="completed" time="0.035877" /><testcase name="test.test_pyrepl.test_reader.TestReader.test_up_arrow_after_ctrl_r" status="run" result="completed" time="0.005097" /><testcase name="test.test_pyrepl.test_unix_console.TestConsole.test_cursor_back_write" status="run" result="completed" time="0.003763" /><testcase name="test.test_pyrepl.test_unix_console.TestConsole.test_cursor_left" status="run" result="completed" time="0.003459" /><testcase name="test.test_pyrepl.test_unix_console.TestConsole.test_cursor_left_right" status="run" result="completed" time="0.004322" /><testcase name="test.test_pyrepl.test_unix_console.TestConsole.test_cursor_up" status="run" result="completed" time="0.003993" /><testcase name="test.test_pyrepl.test_unix_console.TestConsole.test_cursor_up_down" status="run" result="completed" time="0.004148" /><testcase 
name="test.test_pyrepl.test_unix_console.TestConsole.test_multiline_function_move_up_down_short_terminal" status="run" result="completed" time="0.034868" /><testcase name="test.test_pyrepl.test_unix_console.TestConsole.test_multiline_function_move_up_short_terminal" status="run" result="completed" time="0.005552" /><testcase name="test.test_pyrepl.test_unix_console.TestConsole.test_resize_bigger_on_multiline_function" status="run" result="completed" time="0.006622" /><testcase name="test.test_pyrepl.test_unix_console.TestConsole.test_resize_smaller_on_multiline_function" status="run" result="completed" time="0.007210" /><testcase name="test.test_pyrepl.test_unix_console.TestConsole.test_simple_addition" status="run" result="completed" time="0.003960" /><testcase name="test.test_pyrepl.test_unix_console.TestConsole.test_wrap" status="run" result="completed" time="0.004245" /><testcase name="test.test_pyrepl.test_unix_eventqueue.TestUnixEventQueue.test_empty" status="run" result="completed" time="0.000385" /><testcase name="test.test_pyrepl.test_unix_eventqueue.TestUnixEventQueue.test_flush_buf" status="run" result="completed" time="0.000222" /><testcase name="test.test_pyrepl.test_unix_eventqueue.TestUnixEventQueue.test_get" status="run" result="completed" time="0.000213" /><testcase name="test.test_pyrepl.test_unix_eventqueue.TestUnixEventQueue.test_insert" status="run" result="completed" time="0.000205" /><testcase name="test.test_pyrepl.test_unix_eventqueue.TestUnixEventQueue.test_push_special_key" status="run" result="completed" time="0.000232" /><testcase name="test.test_pyrepl.test_unix_eventqueue.TestUnixEventQueue.test_push_unrecognized_escape_sequence" status="run" result="completed" time="0.000231" /><testcase name="test.test_pyrepl.test_unix_eventqueue.TestUnixEventQueue.test_push_with_key_in_keymap" status="run" result="completed" time="0.000604" /><testcase name="test.test_pyrepl.test_unix_eventqueue.TestUnixEventQueue.test_push_with_keymap_in_keymap" 
status="run" result="completed" time="0.000589" /><testcase name="test.test_pyrepl.test_unix_eventqueue.TestUnixEventQueue.test_push_with_keymap_in_keymap_and_escape" status="run" result="completed" time="0.001690" /><testcase name="test.test_pyrepl.test_unix_eventqueue.TestUnixEventQueue.test_push_without_key_in_keymap" status="run" result="completed" time="0.000566" /><testcase name="unittest.loader.ModuleSkipped.test.test_pyrepl.test_windows_console" status="run" result="completed" time="0.000009"><skipped>test only relevant on win32</skipped></testcase></testsuite>']
Kill <WorkerThread #1 running test=test_fork1 pid=17421 time=4.6 sec> process group
Kill <WorkerThread #2 running test=test_tools pid=13493 time=15.5 sec> process group
Kill <WorkerThread #3 running test=test.test_multiprocessing_spawn.test_misc pid=13075 time=23.5 sec> process group
Kill <WorkerThread #5 running test=test.test_concurrent_futures.test_shutdown pid=14246 time=13.1 sec> process group
Kill <WorkerThread #6 running test=test.test_multiprocessing_spawn.test_processes pid=12483 time=33.8 sec> process group
Kill <WorkerThread #4 running test=test_hashlib pid=19110 time=4 ms> process group
Traceback (most recent call last):
File "/home/dje/cpython-buildarea/pull_request.edelsohn-sles-z/build/Lib/runpy.py", line 198, in _run_module_as_main
return _run_code(code, main_globals, None,
"__main__", mod_spec)
File "/home/dje/cpython-buildarea/pull_request.edelsohn-sles-z/build/Lib/runpy.py", line 88, in _run_code
exec(code, run_globals)
~~~~^^^^^^^^^^^^^^^^^^^
File "/home/dje/cpython-buildarea/pull_request.edelsohn-sles-z/build/Lib/test/__main__.py", line 2, in <module>
main(_add_python_opts=True)
~~~~^^^^^^^^^^^^^^^^^^^^^^^
File "/home/dje/cpython-buildarea/pull_request.edelsohn-sles-z/build/Lib/test/libregrtest/main.py", line 747, in main
Regrtest(ns, _add_python_opts=_add_python_opts).main(tests=tests)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^
File "/home/dje/cpython-buildarea/pull_request.edelsohn-sles-z/build/Lib/test/libregrtest/main.py", line 739, in main
exitcode = self.run_tests(selected, tests)
File "/home/dje/cpython-buildarea/pull_request.edelsohn-sles-z/build/Lib/test/libregrtest/main.py", line 576, in run_tests
return self._run_tests(selected, tests)
~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^
File "/home/dje/cpython-buildarea/pull_request.edelsohn-sles-z/build/Lib/test/libregrtest/main.py", line 536, in _run_tests
self._run_tests_mp(runtests, self.num_workers)
~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/dje/cpython-buildarea/pull_request.edelsohn-sles-z/build/Lib/test/libregrtest/main.py", line 434, in _run_tests_mp
RunWorkers(num_workers, runtests, self.logger, self.results).run()
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^
File "/home/dje/cpython-buildarea/pull_request.edelsohn-sles-z/build/Lib/test/libregrtest/run_workers.py", line 606, in run
result = self._process_result(item)
File "/home/dje/cpython-buildarea/pull_request.edelsohn-sles-z/build/Lib/test/libregrtest/run_workers.py", line 577, in _process_result
self.results.accumulate_result(result, self.runtests)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^
File "/home/dje/cpython-buildarea/pull_request.edelsohn-sles-z/build/Lib/test/libregrtest/results.py", line 132, in accumulate_result
self.add_junit(xml_data)
~~~~~~~~~~~~~~^^^^^^^^^^
File "/home/dje/cpython-buildarea/pull_request.edelsohn-sles-z/build/Lib/test/libregrtest/results.py", line 165, in add_junit
self.testsuite_xml.append(ET.fromstring(e))
~~~~~~~~~~~~~^^^
File "/home/dje/cpython-buildarea/pull_request.edelsohn-sles-z/build/Lib/xml/etree/ElementTree.py", line 1342, in XML
parser.feed(text)
~~~~~~~~~~~^^^^^^
xml.etree.ElementTree.ParseError: not well-formed (invalid token): line 1, column 8035
make: *** [Makefile:2262: buildbottest] Error 1
```
<!-- gh-linked-prs -->
### Linked PRs
* gh-121195
* gh-121204
* gh-121205
<!-- /gh-linked-prs -->
| af8c3d7a26d605099f5b3406a8d33ecddb77e8fb | f80376b129ad947263a6b03a6c3a874e9f8706e6 |
python/cpython | python__cpython-121166 | # Use `do-while(0)` to protect `ADJUST_INDICES` macro
# Feature or enhancement
### Proposal:
Use the `do-while(0)` construction to protect the expansion of `ADJUST_INDICES` in:
- https://github.com/python/cpython/blob/d6d8707ff217f211f3a2e48084cc0ddfa41efc4d/Objects/unicodeobject.c#L9318-L9330
- https://github.com/python/cpython/blob/2cb84b107ad136eafb6e3d69145b7bdaefcca879/Objects/bytes_methods.c#L435-L447
### Has this already been discussed elsewhere?
No response given
### Links to previous discussion of this feature:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-121166
<!-- /gh-linked-prs -->
| 6343486eb60ac5a9e15402a592298259c5afdee1 | 15232a0819a2f7e0f448f28f2e6081912d10e7cb |
python/cpython | python__cpython-121164 | # Consider supporting "all" as a valid action for ``warnings.filterwarnings/simplefilter``.
# Feature or enhancement
### Proposal:
During review of #121102, I found that `"all"` is actually supported in some code paths, but this support is not complete. Since Python users are relying on it, I think that deprecating this behaviour would be incorrect. The best thing we can do here is to extend the support.
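For context, the documented filter actions are `"default"`, `"error"`, `"ignore"`, `"always"`, `"module"`, and `"once"`; where `"all"` happens to be accepted it appears to behave like `"always"` (my reading of the linked code, so treat that as an assumption). A quick sketch of the `"always"` behaviour that `"all"` would alias:

```python
import warnings

# "always" re-reports duplicate warnings, even from the same source line;
# "once" or "default" would suppress the repeat.
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    for _ in range(2):
        warnings.warn("demo", UserWarning)  # same source line both times

assert len(caught) == 2
```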
### Has this already been discussed elsewhere?
This is a minor feature, which does not need previous discussion elsewhere
### Links to previous discussion of this feature:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-121164
<!-- /gh-linked-prs -->
| c8669489d45f22a8c6de7e05b7625db10befb8db | b765e4adf858ff8a8646f38933a5a355b6d72760 |
python/cpython | python__cpython-121326 | # `readline.set_history_length` corrupts history files when used in a libedit build
# Bug report
### Bug description:
~~Since the libedit transition~~ ([correction](https://github.com/python/cpython/issues/121160#issuecomment-2200501386): If an end user switches from a GNU readline build of Python to an editline build of Python), several important functions related to history files are broken.
1. It is inconvenient enough that history files written by a GNU-readline-based `readline` module are unreadable and result in a rather confusing `OSError` with `exc.errno == errno.EINVAL` (without a corresponding operating system routine that returns such an error), but a workaround intercepting this `OSError` can be written that converts the old format to the new one. This has been covered before, e.g. in https://github.com/python/cpython/issues/120766.
2. More importantly, history files written by a libedit-based `readline` module are unreadable by that same module when `set_history_length` is used, as demonstrated by the code below.
3. While a workaround for that is also possible (also below), as far as I can tell, until 3.13, so until a year from now (?) (whenever https://github.com/python/cpython/issues/112510 becomes available in a release), your only option is `"libedit" in readline.__doc__`, which I expect will be what many people will leave in the codebase even once `readline.backend` becomes available.
Reproducer:
```python
import readline
readline.add_history("foo bar")
readline.add_history("nope nope")
readline.write_history_file("history-works")
# this works:
readline.read_history_file("history-works")
readline.set_history_length(2)
readline.write_history_file("history-breaks")
# this breaks:
readline.read_history_file("history-breaks")
```
Workaround:
```python
def _is_using_libedit():
if hasattr(readline, "backend"):
return readline.backend == "editline"
else:
return "libedit" in readline.__doc__
readline.set_history_length(1000)
if _is_using_libedit():
readline.replace_history_item(
max(0, readline.get_current_history_length() - readline.get_history_length()),
"_HiStOrY_V2_")
```
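The workaround's `_is_using_libedit` check can be made standalone and `None`-safe, preferring the `readline.backend` attribute (3.13+) and falling back to the docstring probe. The `Fake*` classes below are hypothetical stand-ins used only to exercise the check without importing the real module:

```python
def using_libedit(readline_mod):
    """Prefer readline.backend (3.13+); fall back to the docstring probe."""
    backend = getattr(readline_mod, "backend", None)
    if backend is not None:
        return backend == "editline"
    return "libedit" in (readline_mod.__doc__ or "")

# Hypothetical stand-ins for the two builds:
class FakeGNUReadline:
    """Importing this module enables command line editing using GNU readline."""

class FakeEditline:
    """Importing this module enables command line editing using libedit readline."""

class FakeNew:  # a 3.13+ module exposing the backend attribute
    backend = "editline"

assert not using_libedit(FakeGNUReadline)
assert using_libedit(FakeEditline)
assert using_libedit(FakeNew)
```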
### CPython versions tested on:
3.12
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-121326
* gh-121327
* gh-121856
* gh-121857
* gh-122030
* gh-122031
<!-- /gh-linked-prs -->
| 263c7e611bb24715e513d457a3477a61fff15162 | b4aedb23ae7954fb58084dda16cd41786819a8cf |
python/cpython | python__cpython-121154 | # Incorrect use of _PyLong_CompactValue()
# Bug report
There are several errors related to use of `_PyLong_CompactValue()` in `longobject.c`.
* The result has type `Py_ssize_t`, not `intptr_t`. Although on most supported platforms it is the same.
* Type cast from unsigned integer to signed integer and from signed integer to unsigned integer should be explicit.
* Downcasting should be explicit.
Some of the current code may have undefined behavior.
<!-- gh-linked-prs -->
### Linked PRs
* gh-121154
* gh-121536
* gh-121900
* gh-121901
<!-- /gh-linked-prs -->
| 18015451d0e3f4d155d56f70faf9b76ce5b7ad79 | 0759cecd9d945dfbac2226febaba51f41195555c |
python/cpython | python__cpython-121159 | # argparse: usage text of arguments in mutually exclusive groups no longer wraps in Python 3.13
# Bug report
### Bug description:
Prior to Python 3.13, long usage text of arguments in mutually exclusive groups would wrap to multiple lines. This is no longer the case in Python 3.13:
```python
import argparse
parser = argparse.ArgumentParser(
prog="PROG", formatter_class=lambda prog: argparse.HelpFormatter(prog, width=80)
)
meg = parser.add_mutually_exclusive_group()
meg.add_argument("--op1", metavar="MET", nargs="?")
meg.add_argument("--op2", metavar=("MET1", "MET2"), nargs="*")
meg.add_argument("--op3", nargs="*")
meg.add_argument("--op4", metavar=("MET1", "MET2"), nargs="+")
meg.add_argument("--op5", nargs="+")
meg.add_argument("--op6", nargs=3)
meg.add_argument("--op7", metavar=("MET1", "MET2", "MET3"), nargs=3)
parser.print_help()
```
Python 3.12 output:
```
usage: PROG [-h] [--op1 [MET] | --op2 [MET1 [MET2 ...]] | --op3 [OP3 ...] |
--op4 MET1 [MET2 ...] | --op5 OP5 [OP5 ...] | --op6 OP6 OP6 OP6 |
--op7 MET1 MET2 MET3]
```
Python 3.13 output:
```
usage: PROG [-h]
[--op1 [MET] | --op2 [MET1 [MET2 ...]] | --op3 [OP3 ...] | --op4 MET1 [MET2 ...] | --op5 OP5 [OP5 ...] | --op6 OP6 OP6 OP6 | --op7 MET1 MET2 MET3]
```
This is a regression I introduced in #105039. I am working on a fix for this.
/cc @encukou (sorry for the regression!)
### CPython versions tested on:
3.12, 3.13
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-121159
* gh-122777
<!-- /gh-linked-prs -->
| 013a0929750ed2b46ae990b59d02e3db84337474 | 9e551f9b351440ebae79e07a02d0e4a1b61d139e |
python/cpython | python__cpython-121176 | # Specialization for accurate complex summation in sum()?
# Feature or enhancement
### Proposal:
Currently, the sum() builtin lacks any specialization for complex numbers, yet it's usually faster than better pure-Python alternatives.
<details>
<summary>benchmark sum() wrt pure-python version</summary>
```python
# a.py
from random import random, seed
seed(1)
data = [complex(random(), random()) for _ in range(10)]
def msum(xs):
it = iter(xs)
res = next(it)
for z in it:
res += z
return res
def sum2(xs):
return complex(sum(_.real for _ in xs),
sum(_.imag for _ in xs))
```
```
$ ./python -m timeit -r11 -unsec -s 'from a import data, msum' 'sum(data)'
500000 loops, best of 11: 963 nsec per loop
$ ./python -m timeit -r11 -unsec -s 'from a import data, msum' 'msum(data)'
200000 loops, best of 11: 1.31e+03 nsec per loop
```
Using sum() component-wise is hardly an option:
```
$ ./python -m timeit -r11 -unsec -s 'from a import data, sum2' 'sum2(data)'
50000 loops, best of 11: 8.56e+03 nsec per loop
```
--------------------------
</details>
Unfortunately, using this builtin directly in numeric code doesn't make sense, as the results are (usually) inaccurate. It's not too hard to do the summation component-wise with math.fsum(), but it's slow and there might be a better way.
In #100425 a simple algorithm using compensated summation was implemented in sum() for floats. I propose to (1) add a specialization in sum() for complex numbers, and (2) reuse the #100425 code to implement accurate summation of complexes.
(1) is simple and straightforward, yet it will give a measurable performance boost
<details>
<summary>benchmark sum() in the main wrt added specialization for complex</summary>
```diff
diff --git a/Python/bltinmodule.c b/Python/bltinmodule.c
index 6e50623caf..da0eed584a 100644
--- a/Python/bltinmodule.c
+++ b/Python/bltinmodule.c
@@ -2691,6 +2691,59 @@ builtin_sum_impl(PyObject *module, PyObject *iterable, PyObject *start)
}
}
}
+
+ if (PyComplex_CheckExact(result)) {
+ Py_complex c_result = PyComplex_AsCComplex(result);
+ Py_SETREF(result, NULL);
+ while(result == NULL) {
+ item = PyIter_Next(iter);
+ if (item == NULL) {
+ Py_DECREF(iter);
+ if (PyErr_Occurred())
+ return NULL;
+ return PyComplex_FromCComplex(c_result);
+ }
+ if (PyComplex_CheckExact(item)) {
+ Py_complex x = PyComplex_AsCComplex(item);
+ c_result.real += x.real;
+ c_result.imag += x.imag;
+ _Py_DECREF_SPECIALIZED(item, _PyFloat_ExactDealloc);
+ continue;
+ }
+ if (PyLong_Check(item)) {
+ long value;
+ int overflow;
+ value = PyLong_AsLongAndOverflow(item, &overflow);
+ if (!overflow) {
+ c_result.real += (double)value;
+ c_result.imag += 0.0;
+ Py_DECREF(item);
+ continue;
+ }
+ }
+ if (PyFloat_Check(item)) {
+ double value = PyFloat_AS_DOUBLE(item);
+ c_result.real += value;
+ c_result.imag += 0.0;
+ Py_DECREF(item);
+ continue;
+ }
+ result = PyComplex_FromCComplex(c_result);
+ if (result == NULL) {
+ Py_DECREF(item);
+ Py_DECREF(iter);
+ return NULL;
+ }
+ temp = PyNumber_Add(result, item);
+ Py_DECREF(result);
+ Py_DECREF(item);
+ result = temp;
+ if (result == NULL) {
+ Py_DECREF(iter);
+ return NULL;
+ }
+ }
+ }
#endif
for(;;) {
```
main:
```
$ ./python -m timeit -r11 -unsec -s 'from a import data, msum' 'sum(data)'
500000 loops, best of 11: 963 nsec per loop
```
with specialization:
```
$ ./python -m timeit -r11 -unsec -s 'from a import data, msum' 'sum(data)'
500000 loops, best of 11: 606 nsec per loop
```
--------------------
</details>
(2) also seems to be a no-brainer: a simple refactoring of the PyFloat specialization should allow us to use the same core for the PyComplex specialization.
If there are no objections, I'll work on a patch.
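As an illustration of (2), Neumaier-style compensation applied independently to the real and imaginary parts recovers digits that a naive complex sum loses. This is a pure-Python sketch of the idea, not the actual #100425 C code:

```python
def csum(xs):
    """Compensated (Neumaier) summation of the real and imaginary
    parts independently -- a pure-Python sketch of proposal (2)."""
    re_s = im_s = 0.0   # running sums
    re_c = im_c = 0.0   # compensation terms
    for z in xs:
        z = complex(z)
        t = re_s + z.real
        if abs(re_s) >= abs(z.real):
            re_c += (re_s - t) + z.real   # low-order bits lost from re_s
        else:
            re_c += (z.real - t) + re_s   # low-order bits lost from z.real
        re_s = t
        t = im_s + z.imag
        if abs(im_s) >= abs(z.imag):
            im_c += (im_s - t) + z.imag
        else:
            im_c += (z.imag - t) + im_s
        im_s = t
    return complex(re_s + re_c, im_s + im_c)

data = [complex(1e16, 1.0), complex(1.0, -1e16), complex(-1e16, 1e16)]
assert csum(data) == complex(1.0, 1.0)
assert sum(data) != csum(data)  # plain sum() loses both of the 1.0 terms
```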
### Has this already been discussed elsewhere?
This is a minor feature, which does not need previous discussion elsewhere
### Links to previous discussion of this feature:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-121176
<!-- /gh-linked-prs -->
| 169e7138ab84db465b6bf28e6c1dc6c39dbf89f4 | bc93923a2dee00751e44da58b6967c63e3f5c392 |
python/cpython | python__cpython-121162 | # Support copy.replace() on AST nodes
# Feature or enhancement
### Proposal:
I want this to work:
```
>>> n=ast.Name(id="x")
>>> copy.replace(n, id="y")
Traceback (most recent call last):
File "<python-input-4>", line 1, in <module>
copy.replace(n, id="y")
~~~~~~~~~~~~^^^^^^^^^^^
File "/Users/jelle/py/cpython/Lib/copy.py", line 293, in replace
raise TypeError(f"replace() does not support {cls.__name__} objects")
TypeError: replace() does not support Name objects
```
### Has this already been discussed elsewhere?
This is a minor feature, which does not need previous discussion elsewhere
### Links to previous discussion of this feature:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-121162
<!-- /gh-linked-prs -->
| 9728ead36181fb3f0a4b2e8a7291a3e0a702b952 | 94f50f8ee6872007d46c385f7af253497273255a |
python/cpython | python__cpython-121136 | # Missing PyDECREF calls for ADDITEMS opcode of _pickle.c
# Bug report
### Bug description:
In the `load_additems` function of [`Modules/_pickle.c`](https://github.com/python/cpython/blob/3.11/Modules/_pickle.c#L6629) (which handles the `ADDITEMS` opcode), [PyObject_GetAttr](https://docs.python.org/3/c-api/object.html#c.PyObject_GetAttr) is called and returns `add_func` on [line 6660](https://github.com/python/cpython/blob/3.11/Modules/_pickle.c#L6660). PyObject_GetAttr returns a new reference, but this reference is never decremented using Py_DECREF, so two Py_DECREF calls need to be added (compare with the `do_append` function in the same file).
Pull request was made at https://github.com/python/cpython/pull/121136
### CPython versions tested on:
3.11
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-121136
* gh-121139
* gh-121140
<!-- /gh-linked-prs -->
| 92893fd8dc803ed7cdde55d29d25f84ccb5e3ef0 | e6543daf12051e9c660a5c0437683e8d2706a3c7 |
python/cpython | python__cpython-121132 | # Setting the line number is ignored in `INSTRUMENTED_YIELD_VALUE`
# Bug report
### Bug description:
There is a test for this: `test_jump_from_yield` in `test_sys_settrace`, but it expects the wrong value and the jump is ignored.
This is a fairly fringe behavior so I'm not sure if this is worth backporting.
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-121132
<!-- /gh-linked-prs -->
| afb0aa6ed20bd8e982ecb307f12923cf8dbccd8c | d9efa45d7457b0dfea467bb1c2d22c69056ffc73 |
python/cpython | python__cpython-121150 | # Self-documenting f-string in conversion specifier throws ValueError
# Bug report
### Bug description:
Since Python 3.12, the compiler throws a ValueError when compiling a string like `f"{x:{y=}}"`:
```python
$ ./python.exe
Python 3.14.0a0 (heads/main:81a654a342, Jun 28 2024, 07:45:17) [Clang 15.0.0 (clang-1500.3.9.4)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> f"{x:{y=}}"
ValueError: field 'value' is required for Constant
```
Note this is a compile-time error; you also see it if you call `compile()` or `ast.parse()`. I would not expect the compiler to ever throw ValueError.
On 3.11, this works as I'd expect:
```
$ python
Python 3.11.9 (main, May 7 2024, 09:02:19) [Clang 15.0.0 (clang-1500.3.9.4)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> f"{x:{y=}}"
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
NameError: name 'x' is not defined
>>> import ast
>>> ast.dump(ast.parse('f"{x:{y=}}"'))
"Module(body=[Expr(value=JoinedStr(values=[FormattedValue(value=Name(id='x', ctx=Load()), conversion=-1, format_spec=JoinedStr(values=[Constant(value='y='), FormattedValue(value=Name(id='y', ctx=Load()), conversion=114)]))]))], type_ignores=[])"
```
I don't have a use case for this, but came across it while exploring f-string syntax.
cc @ericvsmith for f-strings and @pablogsal because this feels related to PEP 701
### CPython versions tested on:
3.12, CPython main branch
### Operating systems tested on:
macOS
<!-- gh-linked-prs -->
### Linked PRs
* gh-121150
* gh-121868
* gh-122063
<!-- /gh-linked-prs -->
| c46d64e0ef8e92a6b4ab4805d813d7e4d6663380 | 69c68de43aef03dd52fabd21f99cb3b0f9329201 |
python/cpython | python__cpython-121118 | # Skip __index__ handling in PyLong_AsNativeBytes
As per the discussion in https://github.com/capi-workgroup/decisions/issues/32 (and some other bugs here that I haven't bothered to look up), having int conversion functions call back into Python to evaluate `__index__` can be problematic or unexpected (Python code may cause the GIL to be released, which could break threading invariants in the calling code).
We should disable `__index__` handling by default in `PyLong_AsNativeBytes` and provide a flag to reenable it. In general, calling `__index__` is only really useful on user-provided arguments, which are easily distinguishable from "private" variables in the caller's code.
This allows the caller to omit `PyLong_Check` before calling the conversion function (if they need to guarantee they retain the thread), or to provide a single flag if they will allow it. They'll discover they need the flag due to a `TypeError`, which is safe, rather than a crash or native threading related issue, which is unsafe.
<!-- gh-linked-prs -->
### Linked PRs
* gh-121118
* gh-121133
<!-- /gh-linked-prs -->
| 2894aa14f22430e9b6d4676afead6da7c79209ca | 81a654a3425eaa05a51342509089533c1f623f1b |
python/cpython | python__cpython-121236 | # test_basic_multiple_interpreters_reset_each: _PyRefchain_Remove: Assertion `value == REFCHAIN_VALUE' failed
# Crash report
### What happened?
### Configuration
```sh
./configure --with-trace-refs --with-pydebug
```
Test:
```sh
./python -m test -v -m test_basic_multiple_interpreters_reset_each test_import
```
Output:
```sh
== CPython 3.14.0a0 (heads/main:1a2e7a7475, Jun 28 2024, 02:37:08) [GCC 14.1.1 20240522]
== Linux-6.9.6-arch1-1-x86_64-with-glibc2.39 little-endian
== Python build: debug TraceRefs
== cwd: /home/arf/cpython/build/test_python_worker_246671
== CPU count: 16
== encodings: locale=UTF-8 FS=utf-8
== resources: all test resources are disabled, use -u option to unskip tests
Using random seed: 4201083195
0:00:00 load avg: 2.50 Run 1 test sequentially in a single process
0:00:00 load avg: 2.50 [1/1] test_import
test_basic_multiple_interpreters_reset_each (test.test_import.SinglephaseInitTests.test_basic_multiple_interpreters_reset_each) ... python: Objects/object.c:195: _PyRefchain_Remove: Assertion `value == REFCHAIN_VALUE' failed.
Fatal Python error: Aborted
Current thread 0x00007cde0f816740 (most recent call first):
File "<string>", line 54 in <module>
Extension modules: _testinternalcapi, _testmultiphase, _testcapi (total: 3)
[1] 246671 IOT instruction (core dumped) ./python -m test -v -m test_basic_multiple_interpreters_reset_each test_impor
```
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Linux
### Output from running 'python -VV' on the command line:
Python 3.14.0a0 (heads/main:1a2e7a7475, Jun 28 2024, 02:37:08) [GCC 14.1.1 20240522]
<!-- gh-linked-prs -->
### Linked PRs
* gh-121236
* gh-121238
* gh-121503
* gh-121517
<!-- /gh-linked-prs -->
| 9bcb7d8c6f8277c4e76145ec4c834213167e3387 | 91313afdb392d0d6105e9aaa57b5a50112b613e7 |
python/cpython | python__cpython-121293 | # Free-threaded libraries should be in `lib/python3.14t` (configure)
# Bug report
### Background
When using `configure` based Python installations, if the free-threaded and default builds are both installed to the same prefix, they will share the same `lib` directory. For example, when installing Python 3.14 the structure looks like:
```
include/python3.14/...
include/python3.14t/...
lib/libpython3.14.a
lib/libpython3.14t.a
lib/python3.14/... # shared!
```
The include directories are *not* shared, which is good because they have different `pyconfig.h` files.
However, the `lib/python3.14` directory is shared, which means that packages installed in the default build may be partially available in the free-threaded build and vice versa. This was unintended and can cause problems, such as confusing error messages and crashes.
For example, if I run:
* `python3.14 -m pip install numpy` (install in default build)
* `python3.14t -c "import numpy"`
I get a confusing error message:
<details>
<summary>Error importing numpy: you should not try to import numpy from its source directory...</summary>
```
Traceback (most recent call last):
File "/tmp/python-nogil/lib/python3.14/site-packages/numpy/_core/__init__.py", line 23, in <module>
from . import multiarray
File "/tmp/python-nogil/lib/python3.14/site-packages/numpy/_core/multiarray.py", line 10, in <module>
from . import overrides
File "/tmp/python-nogil/lib/python3.14/site-packages/numpy/_core/overrides.py", line 8, in <module>
from numpy._core._multiarray_umath import (
add_docstring, _get_implementing_args, _ArrayFunctionDispatcher)
ModuleNotFoundError: No module named 'numpy._core._multiarray_umath'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/tmp/python-nogil/lib/python3.14/site-packages/numpy/__init__.py", line 114, in <module>
from numpy.__config__ import show as show_config
File "/tmp/python-nogil/lib/python3.14/site-packages/numpy/__config__.py", line 4, in <module>
from numpy._core._multiarray_umath import (
...<3 lines>...
)
File "/tmp/python-nogil/lib/python3.14/site-packages/numpy/_core/__init__.py", line 49, in <module>
raise ImportError(msg)
ImportError:
IMPORTANT: PLEASE READ THIS FOR ADVICE ON HOW TO SOLVE THIS ISSUE!
Importing the numpy C-extensions failed. This error can happen for
many reasons, often due to issues with your setup or how NumPy was
installed.
We have compiled some common reasons and troubleshooting tips at:
https://numpy.org/devdocs/user/troubleshooting-importerror.html
Please note and check the following:
* The Python version is: Python3.14 from "/tmp/python-nogil/bin/python3.14t"
* The NumPy version is: "2.0.0"
and make sure that they are the versions you expect.
Please carefully study the documentation linked above for further help.
Original error was: No module named 'numpy._core._multiarray_umath'
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "<string>", line 1, in <module>
import numpy
File "/tmp/python-nogil/lib/python3.14/site-packages/numpy/__init__.py", line 119, in <module>
raise ImportError(msg) from e
ImportError: Error importing numpy: you should not try to import numpy from
its source directory; please exit the numpy source tree, and relaunch
your python interpreter from there.
```
</details>
It would be better if installing NumPy in the default build did not make it available in the free-threaded build and vice versa.
### Proposal
We should add the ABI suffix to the lib directory, like we do for the include directory. Specifically, we should use `python$(LDVERSION)` instead of `python$(VERSION)`. For example, the free-threaded build would use `lib/python3.14t` in Python 3.14.
Debug builds have `d` as part of their ABI suffix (and LDVERSION), so this would incidentally affect installations of the debug configuration of the non-free-threaded build.
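The install layout in question can be inspected from a running interpreter. This is a minimal sketch (not part of the proposal itself): it prints where pure-Python packages land, which under the proposal would include the ABI suffix on free-threaded builds because `LDVERSION` carries the `t` flag while `VERSION` does not.

```python
import sys
import sysconfig

# Where this interpreter installs pure-Python packages, e.g.
# /usr/lib/python3.14/site-packages today; under the proposal a
# free-threaded build would report .../lib/python3.14t/site-packages.
purelib = sysconfig.get_path("purelib")
print(purelib)

# abiflags is "t" on a free-threaded POSIX build, "" on a default build
# (the attribute may be absent on some platforms, hence the getattr).
print(repr(getattr(sys, "abiflags", "")))
```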
<!-- gh-linked-prs -->
### Linked PRs
* gh-121293
* gh-121631
* gh-122737
* gh-122750
<!-- /gh-linked-prs -->
| e8c91d90ba8fab410a27fad4f709cc73f6ffcbf4 | 5250a031332eb9499d5fc190d7287642e5a144b9 |
python/cpython | python__cpython-121102 | # python -Wall is undocumented
Many people are using it ["python3 -Wall"](https://www.google.com/search?q=%22python3+-Wall%22), but "all" is not listed as a valid choice at cmdoption [-W](https://docs.python.org/3/using/cmdline.html#cmdoption-W) docs, nor at env var [PYTHONWARNINGS](https://docs.python.org/3/using/cmdline.html#envvar-PYTHONWARNINGS) docs.
When I saw it used I wanted to check what this option does in the docs but I had to [RTFS](https://github.com/python/cpython/blob/e9b4ec614b66d11623b80471409c16a109f888d5/Lib/warnings.py#L251) to understand that it was just an alias for "always", so I think the alias should be documented.
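The alias can be confirmed empirically. This sketch (helper name is illustrative) runs the same warning twice in a child interpreter under each `-W` filter and counts how often it is reported; `all` behaves exactly like `always`.

```python
import subprocess
import sys

CODE = "import warnings\nfor _ in range(2): warnings.warn('x')"

def count_warnings(w_option):
    # Run a child interpreter with the given -W option and count how
    # many times the warning is reported on stderr.
    proc = subprocess.run(
        [sys.executable, "-W", w_option, "-c", CODE],
        capture_output=True, text=True,
    )
    return proc.stderr.count("UserWarning: x")
```

`always` reports every occurrence, so both it and its undocumented alias report the warning twice, while `once` reports it a single time.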
<!-- gh-linked-prs -->
### Linked PRs
* gh-121102
* gh-121146
* gh-121147
<!-- /gh-linked-prs -->
| 0a1e8ff9c15675fdc4d07fa6c59f83808bf00798 | 6d34938dc8163f4a4bcc68069a1645a7ab76e935 |
python/cpython | python__cpython-121097 | # Valgrind lists memory leaks: dlopen() called without dlclose()
Example on Python 3.12 when loading `_crypt` extension module:
```
$ PYTHONMALLOC=malloc valgrind --leak-check=full --show-leak-kinds=all --num-callers=50 ./python -c 'import _crypt'
==2444751== Memcheck, a memory error detector
==2444751== Copyright (C) 2002-2024, and GNU GPL'd, by Julian Seward et al.
==2444751== Using Valgrind-3.23.0 and LibVEX; rerun with -h for copyright info
==2444751== Command: ./python -c import\ _crypt
==2444751==
==2444751==
==2444751== HEAP SUMMARY:
==2444751== in use at exit: 3,253 bytes in 8 blocks
==2444751== total heap usage: 44,037 allocs, 44,029 frees, 6,464,804 bytes allocated
==2444751==
==2444751== 21 bytes in 1 blocks are still reachable in loss record 1 of 7
==2444751== at 0x484282F: malloc (vg_replace_malloc.c:446)
==2444751== by 0x402587F: malloc (rtld-malloc.h:56)
==2444751== by 0x402587F: strdup (strdup.c:42)
==2444751== by 0x4014B58: _dl_load_cache_lookup (dl-cache.c:515)
==2444751== by 0x4008F1F: _dl_map_object (dl-load.c:2116)
==2444751== by 0x400287C: openaux (dl-deps.c:64)
==2444751== by 0x4001522: _dl_catch_exception (dl-catch.c:237)
==2444751== by 0x4002CDF: _dl_map_object_deps (dl-deps.c:232)
==2444751== by 0x400CA34: dl_open_worker_begin (dl-open.c:638)
==2444751== by 0x4001522: _dl_catch_exception (dl-catch.c:237)
==2444751== by 0x400C10F: dl_open_worker (dl-open.c:803)
==2444751== by 0x4001522: _dl_catch_exception (dl-catch.c:237)
==2444751== by 0x400C563: _dl_open (dl-open.c:905)
==2444751== by 0x49E9E23: dlopen_doit (dlopen.c:56)
==2444751== by 0x4001522: _dl_catch_exception (dl-catch.c:237)
==2444751== by 0x4001678: _dl_catch_error (dl-catch.c:256)
==2444751== by 0x49E9912: _dlerror_run (dlerror.c:138)
==2444751== by 0x49E9EDE: dlopen_implementation (dlopen.c:71)
==2444751== by 0x49E9EDE: dlopen@@GLIBC_2.34 (dlopen.c:81)
==2444751== (...)
```
Valgrind reports leaks on memory allocated by `dlopen()` in `Python/dynload_shlib.c`, because Python doesn't call `dlclose()`.
I don't think it's a big deal to not call `dlclose()`: little memory is involved, and it's not common to unload a dynamic library at runtime. Also, I don't see where/how `dlclose()` could be called.
Instead, I propose to suppress these warnings in the Valgrind suppressions file, to help spotting more important leaks.
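A suppression entry along these lines could be added to Python's Valgrind suppressions file. The frame list below is illustrative only; real entries should be generated from Valgrind's `--gen-suppressions=all` output for the reported stacks.

```
{
   dlopen() allocations stay reachable because Python never calls dlclose()
   Memcheck:Leak
   match-leak-kinds: reachable
   fun:malloc
   ...
   fun:_dl_open
}
```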
<!-- gh-linked-prs -->
### Linked PRs
* gh-121097
* gh-121122
* gh-121123
<!-- /gh-linked-prs -->
| 6e63d84e43fdce3a5bdb899b024cf947d4e48900 | 58a3580836eca58c4a0c02cedc8a8d6080b8ab59 |
python/cpython | python__cpython-121191 | # test_typing leaked [1, 1, 1] memory blocks: fail randomly on Refleak buildbots
```
Re-running test_typing in verbose mode
test_typing leaked [1, 1, 1] memory blocks, sum=3
```
Examples of failure:
* AMD64 Windows11 Refleaks 3.x: https://buildbot.python.org/all/#/builders/920/builds/807
* AMD64 Windows11 Refleaks 3.13: https://buildbot.python.org/all/#/builders/1484/builds/52
<!-- gh-linked-prs -->
### Linked PRs
* gh-121191
* gh-121208
* gh-121209
* gh-121360
* gh-121372
* gh-121373
<!-- /gh-linked-prs -->
| c766ad206ea60b1e0edcb625b99e7631954a984f | 6988ff02a5741bcd04a8f46b7dd845e849557be0 |
python/cpython | python__cpython-121083 | # CPython build failed with `--enable-pystats` after #118450
# Bug report
### Bug description:
```text
Python/specialize.c: In function ‘_Py_Specialize_ForIter’:
Python/specialize.c:2392:60: error: incompatible type for argument 1 of ‘_PySpecialization_ClassifyIterator’
2392 | _PySpecialization_ClassifyIterator(iter));
| ^~~~
| |
| _PyStackRef
Python/specialize.c:409:70: note: in definition of macro ‘SPECIALIZATION_FAIL’
409 | _Py_stats->opcode_stats[opcode].specialization.failure_kinds[kind]++; \
| ^~~~
Python/specialize.c:2290:47: note: expected ‘PyObject *’ {aka ‘struct _object *’} but argument is of type ‘_PyStackRef’
2290 | _PySpecialization_ClassifyIterator(PyObject *iter)
| ~~~~~~~~~~^~~~
```
Introduced after #118450
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-121083
<!-- /gh-linked-prs -->
| 223c03a43c010cf4404f2a42efafe587646a0619 | b7a95dfee30aae171de47f98ed3b7d1cc08e5bd4 |
python/cpython | python__cpython-121343 | # Warn In PyThreadState_Clear() If the Thread State Still Has an Unhandled Exception
# Feature or enhancement
### Proposal:
When `PyThreadState_Clear()` is called, the thread state could still have an exception set. As a user, it would be helpful to know when that happens. This is similar to how we already warn the user if the thread state was still running Python code (we check `tstate->current_frame`).
For unhandled exceptions, I'm pretty sure we'd check `tstate->exc_state` (or maybe `tstate->exc_info`), along with `tstate->current_exception`. We could emit a warning, as well as print the traceback (or generally invoke `sys.excepthook()`).
It would probably make sense to separately warn about `tstate->async_exc` (and print that traceback).
### Has this already been discussed elsewhere?
No response given
### Links to previous discussion of this feature:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-121343
<!-- /gh-linked-prs -->
| 6c1a4fb6d400827155fd70e48d682e35397731a1 | aea0c586d181abb897511b6b46d28bfbe4858f79 |
python/cpython | python__cpython-121041 | # -Wimplicit-fallthrough generating warnings
# Bug report
### Bug description:
Due to the addition of `-Wimplicit-fallthrough` as a `BASEFLAG` new warnings are generated.
This should be reverted until tooling is created to track these new warnings per https://github.com/python/cpython/issues/112301
Warnings can be found in builds https://buildbot.python.org/all/#/builders/721/builds/1465/steps/3/logs/warnings__143_ from https://github.com/python/cpython/pull/121030
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-121041
* gh-121044
<!-- /gh-linked-prs -->
| ef28f6df42c916b058ed14275fb1ceba63ede28e | 90565972243f33dcd40d60a4f1474b97174fe304 |
python/cpython | python__cpython-121071 | # Add assertFloatsAreIdentical/assertComplexAreIdentical to unittest (or kwarg to assertEqual)?
# Feature or enhancement
### Proposal:
Clones of assertFloatsAreIdentical() are scattered across the CPython tests:
https://github.com/python/cpython/blob/d8f82432a36178a2376cc2d0984b02bb03f6d55f/Lib/test/test_complex.py#L74
https://github.com/python/cpython/blob/d8f82432a36178a2376cc2d0984b02bb03f6d55f/Lib/test/test_cmath.py#L68
https://github.com/python/cpython/blob/d8f82432a36178a2376cc2d0984b02bb03f6d55f/Lib/test/test_float.py#L1069
https://github.com/python/cpython/blob/d8f82432a36178a2376cc2d0984b02bb03f6d55f/Lib/test/test_capi/test_getargs.py#L440
Maybe it's worth having a dedicated check?
Or a special kwarg for the assertEqual method, to work around ``NAN`` and ``-0.0`` values for floats/complexes.
Edit:
Or at least some support from Lib/test/support... I was adding a similar helper in yet another test file and that looks odd.
Numpy has numpy.testing.assert_equal():
```pycon
>>> np.testing.assert_equal([0.0], [+0.0])
>>> np.testing.assert_equal([0.0], [-0.0])
Traceback (most recent call last):
...
AssertionError:
Items are not equal:
item=0
ACTUAL: 0.0
DESIRED: -0.0
>>> np.testing.assert_equal([np.nan], [np.nan])
>>> np.testing.assert_equal([0.0], [np.nan])
Traceback (most recent call last):
...
AssertionError:
Items are not equal:
item=0
ACTUAL: 0.0
DESIRED: nan
```
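For reference, a minimal sketch of what the scattered helpers check (assuming the semantics of the test-suite clones linked above: NaNs match each other, and zeros must agree in sign):

```python
import math

def assert_floats_are_identical(x, y):
    # NaN is "identical" only to NaN.
    if math.isnan(x) or math.isnan(y):
        if math.isnan(x) and math.isnan(y):
            return
        raise AssertionError(f"{x!r} and {y!r} are not identical")
    # Equal values must also agree on the sign of zero, so that
    # 0.0 and -0.0 are told apart.
    if x == y and math.copysign(1.0, x) == math.copysign(1.0, y):
        return
    raise AssertionError(f"{x!r} and {y!r} are not identical")
```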
### Has this already been discussed elsewhere?
This is a minor feature, which does not need previous discussion elsewhere
### Links to previous discussion of this feature:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-121071
* gh-123840
* gh-123841
<!-- /gh-linked-prs -->
| 8ef8354ef15e00d484ac2ded9442b789c24b11e0 | beee91cdcc0dbecab252f7c5c7c51e2adb8edc26 |
python/cpython | python__cpython-121036 | # Include lastResort in logging HOWTO flow chart
# Documentation
The [flow chart](https://docs.python.org/3/howto/logging.html#logging-flow) in the logging HOWTO is very helpful in understanding how the logging module works. However, the flow chart does not make clear why a logger with no configured handlers can still emit records.
The reason is `lastResort`, which is used if no handlers are found in the logger hierarchy as documented further down in the HOWTO: https://docs.python.org/3/howto/logging.html#what-happens-if-no-configuration-is-provided.
I suggest updating the flow chart to include it.
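The behavior the chart omits is easy to demonstrate: a logger with no configured handlers still emits records, via the module-level `lastResort` handler.

```python
import logging

# A fresh logger with no handlers anywhere in its hierarchy...
logger = logging.getLogger("demo.no_handlers")

# ...still emits WARNING-and-above records, because handler lookup falls
# back to logging.lastResort (a stderr handler at WARNING level) when no
# handler is found while walking up to the root logger.
logger.warning("delivered by lastResort")  # appears on stderr
```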
<!-- gh-linked-prs -->
### Linked PRs
* gh-121036
* gh-121105
* gh-121106
* gh-121254
* gh-121265
* gh-121316
* gh-121317
* gh-121320
* gh-121321
* gh-121323
* gh-121324
* gh-121325
<!-- /gh-linked-prs -->
| 237baf4d7a789deb153fbc1fc3863550949d5da2 | 4a62a331de1eeda7878960b0bd184a348908245e |
python/cpython | python__cpython-136463 | # Soft-deprecate `sys.api_version` and the C API's `PYTHON_API_VERSION`
`PYTHON_API_VERSION` and the related `sys.api_version` module attribute were last changed for Python 2.6.
At the moment they're not documented at all, so anyone stumbling across them (e.g. by running `dir(sys)`) may be legitimately confused as to what they're for.
They're not actually for anything, they're an old idea to help manage cross-version extension module compatibility checks that was superseded by the introduction of the stable ABI in PEP 384.
This ticket covers adding these values to the documentation specifically so they can be given an explicit soft deprecation notice (we don't have any plans to actually remove them, since their ongoing maintenance cost is essentially zero - prior to the Discourse thread linked below, I doubt anyone had even remembered these existed in the past decade)
(from https://discuss.python.org/t/should-we-document-that-python-api-version-sys-api-version-are-no-longer-updated/)
<!-- gh-linked-prs -->
### Linked PRs
* gh-136463
* gh-136928
<!-- /gh-linked-prs -->
| 658599c15d13ee3a5cb56c3d9fccaa195465d4b5 | 28153fec58a255a001c39235376a326ccb367188 |
python/cpython | python__cpython-121086 | # Add __get__ to the partial object
# Feature or enhancement
In https://github.com/python/cpython/pull/119827#issuecomment-2190108757, @rhettinger proposed to add the `__get__` method to the `partial` object in `functools`. This is a breaking change, although the impact may be much smaller than that of adding `__get__` to builtin functions. But we should follow the common procedure for such changes: first add a `__get__` that emits a FutureWarning with a suggestion to wrap the partial in `staticmethod` and returns the `partial` object unchanged, then change the behavior a few releases later.
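For context, the `staticmethod` wrapping mentioned as the migration path already works today. A sketch (names illustrative): attribute access returns the plain partial, with no implicit binding, regardless of whether `partial` ever grows a `__get__`.

```python
import functools

def log(prefix, message):
    return f"{prefix}: {message}"

class Logger:
    # staticmethod makes the intent explicit: attribute access returns
    # the partial itself, and no instance is bound implicitly.
    warn = functools.partial(log, "WARN")
    warn = staticmethod(warn)
```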
<!-- gh-linked-prs -->
### Linked PRs
* gh-121086
* gh-121089
* gh-121092
<!-- /gh-linked-prs -->
| db96edd6d1a58045196a71aff565743f493b5fbb | 223c03a43c010cf4404f2a42efafe587646a0619 |
python/cpython | python__cpython-121033 | # Improve the repr of partialmethod
# Bug report
The repr of `partialmethod` object contains redundant commas and spaces. Compare it with the repr of `partial`:
```pycon
>>> import functools
>>> def test(*args, **kwargs): pass
...
>>> functools.partial(test)
functools.partial(<function test at 0x7fcbe91819d0>)
>>> functools.partial(test, a=1)
functools.partial(<function test at 0x7fcbe91819d0>, a=1)
>>> functools.partialmethod(test)
functools.partialmethod(<function test at 0x7fcbe91819d0>, , )
>>> functools.partialmethod(test, a=1)
functools.partialmethod(<function test at 0x7fcbe91819d0>, , a=1)
```
cc @dg-pb @rhettinger
<!-- gh-linked-prs -->
### Linked PRs
* gh-121033
* gh-121037
* gh-121038
<!-- /gh-linked-prs -->
| d2646e3f45e3e4e831ee2ae84d55b161a361d592 | d8f82432a36178a2376cc2d0984b02bb03f6d55f |
python/cpython | python__cpython-121024 | # Improve `_xxtestfuzz/README.rst`
# Bug report
Right now there are several points I would like to see improved in this file:
1. We need to clarify that `$test_name` should be replaced with an actual test name: https://github.com/python/cpython/blob/82235449b85165add62c1b200299456a50a1d097/Modules/_xxtestfuzz/README.rst#L26-L29
2. `_Py_FUZZ_YES` is not actually used in our code, docs: https://github.com/python/cpython/blob/82235449b85165add62c1b200299456a50a1d097/Modules/_xxtestfuzz/README.rst#L34 code: https://github.com/python/cpython/blob/7fb32e02092922b0256d7be91bbf80767eb2ca46/Modules/_xxtestfuzz/fuzzer.c#L699
3. Why is `fuzz_builtin_float` used in this example? https://github.com/python/cpython/blob/82235449b85165add62c1b200299456a50a1d097/Modules/_xxtestfuzz/README.rst#L34C9-L34C41 It is better to use `$test_name`. Or even better, `$fuzz_test_name`, to have a common prefix.
<!-- gh-linked-prs -->
### Linked PRs
* gh-121024
* gh-124140
* gh-124141
<!-- /gh-linked-prs -->
| a9c2bc16349c2be3005f97249f3ae9699988f218 | 3b45df03a4bd0e21edec43144b8d9bac689d23a0 |
python/cpython | python__cpython-121019 | # argparse.ArgumentParser.parse_args does not honor exit_on_error=False when given unrecognized arguments
# Bug report
### Bug description:
As reported in [Discourse](https://discuss.python.org/t/about-the-exit-on-error-setting-in-argparse/56702), even though the documentation on the `exit_on_error` argument for `argparse.ArgumentParser` says:
> [`exit_on_error`](https://docs.python.org/3/library/argparse.html#exit-on-error) - Determines whether or not `ArgumentParser` exits with error info when an error occurs. (default: `True`)
The `parse_args` method would still exit with error info when there are unrecognized arguments:
```python
import argparse
parser = argparse.ArgumentParser(exit_on_error=False)
try:
parser.parse_args('invalid arguments'.split())
except argparse.ArgumentError:
print('ArgumentError caught.')
```
which outputs:
```
usage: test.py [-h]
test.py: error: unrecognized arguments: invalid arguments
```
### CPython versions tested on:
3.12, CPython main branch
### Operating systems tested on:
Linux, Windows
<!-- gh-linked-prs -->
### Linked PRs
* gh-121019
* gh-121031
* gh-121032
* gh-121056
* gh-121128
* gh-121129
* gh-121510
* gh-121516
<!-- /gh-linked-prs -->
| 0654336dd5138aec04e3017e15ccbb90a44e053d | 82235449b85165add62c1b200299456a50a1d097 |
python/cpython | python__cpython-121017 | # `PYTHON_BASIC_REPL` is not currently tested
# Bug report
### Bug description:
The environment variable `PYTHON_BASIC_REPL` is meant to make Python use the basic REPL, but this isn't currently tested to work.
### CPython versions tested on:
3.13, CPython main branch
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-121017
* gh-121064
<!-- /gh-linked-prs -->
| 9e45fd9858a059950f7387b4fda2b00df0e8e537 | ef28f6df42c916b058ed14275fb1ceba63ede28e |
python/cpython | python__cpython-121483 | # Incorrect optimization in JIT build
# Bug report
### Bug description:
Reproducer:
```python
def foo():
a = [1, 2, 3]
exhit = iter(a)
for _ in exhit:
pass
a.append("this should'be in exhit")
print(f"got {list(exhit)}, should be []")
foo()
foo()
foo()
foo()
foo()
foo()
```
Output:
```
got [], should be []
got [], should be []
got [], should be []
got [], should be []
got [], should be []
got ["this should'be in exhit"], should be []
```
Obviously, the last line is incorrect.
Output with a ``PYTHON_LLTRACE=2`` env:
```python
got [], should be []
got [], should be []
got [], should be []
got [], should be []
got [], should be []
Optimizing foo (/home/eclips4/programming-languages/cpython/example.py:1) at byte offset 42
1 ADD_TO_TRACE: _START_EXECUTOR (0, target=21, operand=0x7f4646e59832)
21: JUMP_BACKWARD(5)
2 ADD_TO_TRACE: _CHECK_VALIDITY_AND_SET_IP (0, target=21, operand=0x7f4646e59832)
3 ADD_TO_TRACE: _TIER2_RESUME_CHECK (0, target=21, operand=0)
18: FOR_ITER_LIST(3)
4 ADD_TO_TRACE: _CHECK_VALIDITY_AND_SET_IP (0, target=18, operand=0x7f4646e5982c)
5 ADD_TO_TRACE: _ITER_CHECK_LIST (3, target=18, operand=0)
6 ADD_TO_TRACE: _GUARD_NOT_EXHAUSTED_LIST (3, target=18, operand=0)
7 ADD_TO_TRACE: _ITER_NEXT_LIST (3, target=18, operand=0)
20: STORE_FAST(2)
8 ADD_TO_TRACE: _CHECK_VALIDITY_AND_SET_IP (0, target=20, operand=0x7f4646e59830)
9 ADD_TO_TRACE: _STORE_FAST (2, target=20, operand=0)
21: JUMP_BACKWARD(5)
10 ADD_TO_TRACE: _CHECK_VALIDITY_AND_SET_IP (0, target=21, operand=0x7f4646e59832)
11 ADD_TO_TRACE: _JUMP_TO_TOP (0, target=0, operand=0)
Created a proto-trace for foo (/home/eclips4/programming-languages/cpython/example.py:1) at byte offset 36 -- length 11
Optimized trace (length 10):
0 OPTIMIZED: _START_EXECUTOR (0, jump_target=7, operand=0x7f4646e59e80)
1 OPTIMIZED: _TIER2_RESUME_CHECK (0, jump_target=7, operand=0)
2 OPTIMIZED: _ITER_CHECK_LIST (3, jump_target=8, operand=0)
3 OPTIMIZED: _GUARD_NOT_EXHAUSTED_LIST (3, jump_target=9, operand=0)
4 OPTIMIZED: _ITER_NEXT_LIST (3, target=18, operand=0)
5 OPTIMIZED: _STORE_FAST_2 (2, target=20, operand=0)
6 OPTIMIZED: _JUMP_TO_TOP (0, target=0, operand=0)
7 OPTIMIZED: _DEOPT (0, target=21, operand=0)
8 OPTIMIZED: _EXIT_TRACE (0, exit_index=0, operand=0)
9 OPTIMIZED: _EXIT_TRACE (0, exit_index=1, operand=0)
got ["this should'be in exhit"], should be []
```
It's definitely related to this part of code:
https://github.com/python/cpython/blob/e4a97a7fb1c03d3b6ec6efbeff553a0230e003c7/Python/optimizer.c#L1027-L1033
I guess the culprit is there. If we remove the ``_GUARD_NOT_EXHAUSTED_LIST`` from ``is_for_iter_test``, the problem will go away (although it can still be reproduced in other ways using other (range, tuple) iterators).
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Linux, macOS
<!-- gh-linked-prs -->
### Linked PRs
* gh-121483
* gh-121494
<!-- /gh-linked-prs -->
| 8ad6067bd4556afddc86004f8e350aa672fda217 | d69529d31ccd1510843cfac1ab53bb8cb027541f |
python/cpython | python__cpython-121046 | # test_pyrepl: test_cursor_back_write() blocks on input_hook() when tests are run sequentially
Command:
```
$ ./python -m test -u all,-gui
(...)
0:24:10 load avg: 0.92 [326/478] test_pyclbr
0:24:13 load avg: 0.92 [327/478] test_pyexpat
0:24:13 load avg: 0.92 [328/478] test_pyrepl
```
gdb:
```
(gdb) where
#0 0x00007f3b0325ec37 in select () from /lib64/libc.so.6
#1 0x00007f3af40f4497 in Sleep (milli=<optimized out>) at ./Modules/_tkinter.c:371
#2 0x00007f3af40f4faa in EventHook () at ./Modules/_tkinter.c:3350
#3 0x0000000000666d86 in os__inputhook_impl (module=<optimized out>) at ./Modules/posixmodule.c:16795
#4 0x0000000000666dad in os__inputhook (module=<optimized out>, _unused_ignored=_unused_ignored@entry=0x0)
at ./Modules/clinic/posixmodule.c.h:12134
(...)
(gdb) py-bt
Traceback (most recent call first):
<built-in method _inputhook of module object at remote 0x7f3af53d9850>
File "/home/vstinner/python/main/Lib/_pyrepl/reader.py", line 719, in handle1
input_hook()
File "/home/vstinner/python/main/Lib/test/test_pyrepl/support.py", line 75, in handle_all_events
reader.handle1()
File "/home/vstinner/python/main/Lib/test/test_pyrepl/test_unix_console.py", line 197, in test_cursor_back_write
_, con = handle_events_unix_console(events)
File "/home/vstinner/python/main/Lib/unittest/mock.py", line 1423, in patched
return func(*newargs, **newkeywargs)
File "/home/vstinner/python/main/Lib/unittest/case.py", line 606, in _callTestMethod
result = method()
File "/home/vstinner/python/main/Lib/unittest/case.py", line 660, in run
self._callTestMethod(testMethod)
(...)
```
<!-- gh-linked-prs -->
### Linked PRs
* gh-121046
* gh-121049
<!-- /gh-linked-prs -->
| 44eafd66882589d4f4eb569d70c49724da3e9291 | c87876763e88ddbe1d465912aff74ee4c0ffd451 |
python/cpython | python__cpython-120992 | # Evaluation stack bounds are not checked in debug builds.
# Bug report
### Bug description:
Prior to the generating code for the interpreter(s), we had a set of macros for stack manipulation, `PUSH`, `POP`, etc.
The code generator does not use these any more, which improves readability, but we have lost the runtime checking of bounds.
Having those bounds checks would have identified the issue with #120793 almost immediately.
We should add bounds checking asserts in the generated code.
E.g.
`stack_pointer += 1` should be
```
stack_pointer += 1;
assert(WITH_STACK_BOUNDS(stack_pointer));
```
This bulks out the generated code a bit, but I think making the limit checks explicit is more readable than using stack adjustment macros.
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-120992
<!-- /gh-linked-prs -->
| 8f5a01707f27a015b52b7b55af058f8833f8f7db | 42b2c9d78da7ebd6bd5925a4d4c78aec3c9e78e6 |
python/cpython | python__cpython-122132 | # `asyncio` thread-safety issues in the free-threaded build
# Bug report
1. `fi_freelist` isn't thread-safe (move to `pycore_freelist.h` and follow that pattern)
2. `enter_task`, `leave_task`, and `swap_current_task` aren't thread-safe due to shared `state->current_tasks` and borrowed references.
3. `register_task` and `unregister_task` aren't thread-safe due to shared `state->asyncio_tasks` linked list
4. `_asyncio_all_tasks_impl` isn't thread-safe due to the `asyncio_tasks` linked list.
For 2, 3, and 4, we can consider using critical sections to protect the accesses to `state->current_tasks` and `state->asyncio_tasks`.
Longer term, moving this data to per-loop state will probably help with multi-threaded scaling.
<!-- gh-linked-prs -->
### Linked PRs
* gh-122132
* gh-122138
* gh-122139
* gh-122152
* gh-122186
* gh-122317
* gh-122612
* gh-122801
<!-- /gh-linked-prs -->
| c908d1f87d287a4b3ec58c85b692a7eb617fa6ea | 2c1b1e7a07eba0138b9858c6f2bea3cae9af0808 |
python/cpython | python__cpython-121655 | # `threading.local()` implementation is not thread-safe in the free-threaded build
# Bug report
The implementation of Python thread local variables (`threading.local()` or `_thread._local`) has some thread-safety issues. The issues are particularly relevant to the free-threaded build, but some can affect the default build too.
### local_clear loop over threads isn't safe
The `local_clear()` function is called when a thread local variable is destroyed. It loops over all threads in the interpreter and removes itself from their dictionaries.
https://github.com/python/cpython/blob/fd0f814ade43fa479bfbe76dc226424db14a9354/Modules/_threadmodule.c#L1568-L1581
This isn't thread-safe because after `HEAD_UNLOCK(runtime)`, the stored `tstate` might be deleted concurrently. This can happen even in the default build with the GIL enabled because `PyThreadState_Delete()` doesn't require the GIL to be held. However, it's less likely to occur in practice because `threading` module created threads hold onto the GIL until they're deleted.
### local_clear access to `tstate->dict` isn't thread-safe
In the free-threaded build, `local_clear()` may be run concurrently with some other thread's `PyThreadState_Clear()`. The access to another thread's `tstate->dict` isn't safe because it may be getting destroyed concurrently.
<!-- gh-linked-prs -->
### Linked PRs
* gh-121655
* gh-122042
<!-- /gh-linked-prs -->
| e059aa6b01030310125477c3ed1da0491020fe10 | 2009e25e26040dca32696e70f91f13665350e7fd |
python/cpython | python__cpython-120959 | # Parser compares `int` to `Py_ssize_t` poorly
I'm not sure how widespread this is, or how generated the code is, but within `_loop0_139_rule` in `parser.c` we find this:
```
Py_ssize_t _n = 0;
// some lines later
for (int i = 0; i < _n; i++) asdl_seq_SET_UNTYPED(_seq, i, _children[i]);
```
If `_n` can never be larger than `INT_MAX` (likely), there seems no reason it can't be an `int`. Alternatively, if `i` has to increment all the way up to `_n`, it should be a `Py_ssize_t`.
Otherwise, an infinite loop is _theoretically_ possible, and clever static analysers will take great pride in reminding us about this possibility until we fix it.
(I'd jump in and fix this myself but I've never touched this code before and am not sure where to start. Should be easy enough for someone who does know, though.)
<!-- gh-linked-prs -->
### Linked PRs
* gh-120959
<!-- /gh-linked-prs -->
| 348184845a72088368021d1f42e96ceea3eee88c | e7315543377322e4c6e0d8d2c4a4bb4626e43f4c |
python/cpython | python__cpython-120940 | # `__del__` documentation does not mention `weakref`
# Documentation
The `weakref` documentation mentions `__del__` and compares against it. However, the `__del__` documentation does not mention `weakref`, even though that is most probably what the vast majority of developers are looking for, feature-wise.
https://docs.python.org/3/library/weakref.html#comparing-finalizers-with-del-methods
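A minimal sketch of the comparison that the linked `weakref` page draws, using `weakref.finalize` as the safer alternative to defining `__del__` (the `Resource` class here is purely illustrative):

```python
import weakref

class Resource:
    pass

events = []
r = Resource()
# weakref.finalize registers a callback to run when the object is
# garbage-collected, without the pitfalls of defining __del__.
weakref.finalize(r, events.append, "finalized")
del r  # with CPython's refcounting, the object is collected immediately
print(events)  # ['finalized']
```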
<!-- gh-linked-prs -->
### Linked PRs
* gh-120940
* gh-121061
* gh-121062
<!-- /gh-linked-prs -->
| 1c13b29d54ad6d7c9e030227d575ad7d21b4054f | 22b0de2755ee2d0e2dd21cd8761f15421ed2da3d |
python/cpython | python__cpython-121747 | # email module generates wrong MIME header with quoted-printable encoded extra space with Python 3.12.4
# Bug report
### Bug description:
In Python 3.12.4, when using the `EmailMessage` class with a long subject containing non-ASCII characters, an encoded extra space `=?utf-8?q?_?=` is generated in the resulting header. The issue doesn't occur with Python 3.12.3 and 3.11.9.
#### Python 3.12.4
```pycon
Python 3.12.4 (main, Jun 20 2024, 23:12:11) [GCC 13.2.1 20240309] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from email import message_from_string
>>> from email.message import EmailMessage
>>> from email.header import decode_header, make_header
>>> msg = EmailMessage()
>>> msg.set_content(u'Body text.\n', cte='quoted-printable')
>>> subject = 'A_very' + ' long' * 23 + ' súmmäry'
>>> subject
'A_very long long long long long long long long long long long long long long long long long long long long long long long súmmäry'
>>> msg['Subject'] = subject
>>> print(msg.as_string())
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
Subject: A_very long long long long long long long long long long long long
long long long long long long long long long long long =?utf-8?q?s=C3=BAmm?=
=?utf-8?q?_?==?utf-8?q?=C3=A4ry?=
Body text.
>>> parsed_msg = message_from_string(msg.as_string())
>>> parsed_subject = str(make_header(decode_header(parsed_msg['Subject'])))
>>> parsed_subject
'A_very long long long long long long long long long long long long long long long long long long long long long long long súmm äry'
>>> subject == parsed_subject
False
>>>
```
#### Python 3.12.3
```pycon
Python 3.12.3 (main, May 23 2024, 00:56:56) [GCC 13.2.1 20240309] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from email import message_from_string
>>> from email.message import EmailMessage
>>> from email.header import decode_header, make_header
>>> msg = EmailMessage()
>>> msg.set_content(u'Body text.\n', cte='quoted-printable')
>>> subject = 'A_very' + ' long' * 23 + ' súmmäry'
>>> subject
'A_very long long long long long long long long long long long long long long long long long long long long long long long súmmäry'
>>> msg['Subject'] = subject
>>> print(msg.as_string())
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
Subject: A_very long long long long long long long long long long long long
long long long long long long long long long long long =?utf-8?q?s=C3=BAmm?=
=?utf-8?q?=C3=A4ry?=
Body text.
>>> parsed_msg = message_from_string(msg.as_string())
>>> parsed_subject = str(make_header(decode_header(parsed_msg['Subject'])))
>>> parsed_subject
'A_very long long long long long long long long long long long long long long long long long long long long long long long súmmäry'
>>> subject == parsed_subject
True
>>>
```
### CPython versions tested on:
3.12
### Operating systems tested on:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-121747
* gh-121963
* gh-121964
<!-- /gh-linked-prs -->
| cecaceea31f32f01b5617989e3dc8b2077f53f89 | 1056f2bc208bdfe562c79d2a5098723c50ae9c23 |
python/cpython | python__cpython-120911 | # ValueError in importlib.metadata for eggs with files installed outside the site packages
In https://github.com/python/importlib_metadata/issues/455, users reported issues with importlib metadata attempting to resolve the paths for packages installed outside the system site packages. This issue applies to importlib.metadata as well.
<!-- gh-linked-prs -->
### Linked PRs
* gh-120911
* gh-120917
* gh-120918
<!-- /gh-linked-prs -->
| 1ba0bb21ed4eb54023fdfccc9cb20be8fff946b1 | 0b918e81c1de909f753f1d02bcba0f831d63cfa8 |
python/cpython | python__cpython-122309 | # `FrameLocalsProxy` is stricter than `dict` about what constitutes a match
# Bug report
### Bug description:
```python
import inspect
class MyString(str):
pass
def f():
a = 1
locals = inspect.currentframe().f_locals
print(MyString("a") in locals)
f()
```
In Python 3.12 and below this prints `True`. In Python 3.13 it prints `False`. I think it comes down to the check for exact unicode: https://github.com/python/cpython/blob/f4ddaa396715855ffbd94590f89ab7d55feeec07/Objects/frameobject.c#L112
The change in behaviour isn't a huge problem, so if it's intended then I won't waste any time complaining about it, but I do think it's worth confirming that it is intended/desired.
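For comparison, this is how an ordinary `dict` handles the same `str` subclass, which is the behaviour `f_locals` matched before 3.13:

```python
class MyString(str):
    pass

d = {"a": 1}
# A plain dict matches keys by hash and equality, so a str subclass
# instance still finds the "a" key:
print(MyString("a") in d)  # True
```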
### CPython versions tested on:
3.13
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-122309
* gh-122488
<!-- /gh-linked-prs -->
| 5912487938ac4b517209082ab9e6d2d3d0fb4f4d | 1cac0908fb6866c30b7fe106bc8d6cd03c7977f9 |
python/cpython | python__cpython-120898 | # Typo in `urllib.parse` docs: Incorrect parameter name
The [“Changed in version 3.3”](https://github.com/python/cpython/blob/96ead91f0f0db59a942b8b34da9cc980c05588a2/Doc/library/urllib.parse.rst?plain=1#L176) note to `urllib.parse.urlparse()` refers to `allow_fragment`. The correct name for this parameter is `allow_fragments`, plural.
A minor typo in an out-of-the-way part of the documentation, but it still managed to trip me up when I used the name from there, not from the function signature! :facepalm:
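A quick demonstration of the correct keyword (the plural `allow_fragments`); passing the singular form is rejected with a `TypeError`:

```python
from urllib.parse import urlparse

# The keyword is allow_fragments (plural); allow_fragment raises TypeError.
r = urlparse("https://example.com/path#frag", allow_fragments=False)
print(r.fragment)  # ''
print(r.path)      # '/path#frag' -- the fragment stays attached to the path
```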
<!-- gh-linked-prs -->
### Linked PRs
* gh-120898
* gh-120902
* gh-120903
<!-- /gh-linked-prs -->
| b6fa8fe86a6f4d02c263682716a91285a94024fc | 96ead91f0f0db59a942b8b34da9cc980c05588a2 |
python/cpython | python__cpython-120889 | # Bump the bundled pip version to 24.1.x
This brings the latest pip release into CPython's bundled copies, notably adding support for free-threaded builds, which makes the bundle update worth backporting to 3.13.
<!-- gh-linked-prs -->
### Linked PRs
* gh-120889
* gh-121080
* gh-121348
<!-- /gh-linked-prs -->
| 4999e0bda091826fcdf303dd439364e1d303a5ce | 1167a9a30b4b2f327ed987e845e378990d1ae6bf |
python/cpython | python__cpython-120874 | # Add tests for new Tk widget options
"-state" was added for ttk::scale in Tk 8.6.9. More options will be added in Tk 8.7 and 9.0.
<!-- gh-linked-prs -->
### Linked PRs
* gh-120874
* gh-120875
* gh-120876
* gh-120877
* gh-120879
* gh-120880
<!-- /gh-linked-prs -->
| 974a978631bfbfa6f617e927d5eaa82b06694ae5 | 879d1f28bb97bcecddca0824276877aaf97f25b3 |
python/cpython | python__cpython-120872 | # 3.12.4 breaks `logging.config.dictConfig` with `logging.handlers.QueueHandler` on read-only file systems
# Bug report
### Bug description:
When using `logging.config.dictConfig` to configure a `logging.handlers.QueueHandler` on a read-only file system, it works fine in 3.12.3 but crashes in 3.12.4.
**Reproducing**
```python
# bug.py
import logging.config
logging.config.dictConfig(
{
"version": 1,
"handlers": {
"queue_listener": {
"class": "logging.handlers.QueueHandler",
"queue": {"()": "queue.Queue", "maxsize": -1},
},
},
}
)
```
```dockerfile
# Dockerfile
FROM python:3.12.4-slim-bookworm
COPY bug.py ./
CMD ["python", "bug.py"]
```
Run `docker run --rm --read-only -it $(docker build -q .)`
```
Process SyncManager-1:
Traceback (most recent call last):
File "/usr/local/lib/python3.12/multiprocessing/process.py", line 314, in _bootstrap
self.run()
File "/usr/local/lib/python3.12/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/usr/local/lib/python3.12/multiprocessing/managers.py", line 591, in _run_server
server = cls._Server(registry, address, authkey, serializer)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/multiprocessing/managers.py", line 156, in __init__
self.listener = Listener(address=address, backlog=128)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/multiprocessing/connection.py", line 458, in __init__
address = address or arbitrary_address(family)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/multiprocessing/connection.py", line 77, in arbitrary_address
return tempfile.mktemp(prefix='listener-', dir=util.get_temp_dir())
^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/multiprocessing/util.py", line 149, in get_temp_dir
tempdir = tempfile.mkdtemp(prefix='pymp-')
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/tempfile.py", line 373, in mkdtemp
prefix, suffix, dir, output_type = _sanitize_params(prefix, suffix, dir)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/tempfile.py", line 126, in _sanitize_params
dir = gettempdir()
^^^^^^^^^^^^
File "/usr/local/lib/python3.12/tempfile.py", line 315, in gettempdir
return _os.fsdecode(_gettempdir())
^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/tempfile.py", line 308, in _gettempdir
tempdir = _get_default_tempdir()
^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/tempfile.py", line 223, in _get_default_tempdir
raise FileNotFoundError(_errno.ENOENT,
FileNotFoundError: [Errno 2] No usable temporary directory found in ['/tmp', '/var/tmp', '/usr/tmp', '/']
Traceback (most recent call last):
File "/usr/local/lib/python3.12/logging/config.py", line 581, in configure
handler = self.configure_handler(handlers[name])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/logging/config.py", line 792, in configure_handler
proxy_queue = MM().Queue()
^^^^
File "/usr/local/lib/python3.12/multiprocessing/context.py", line 57, in Manager
m.start()
File "/usr/local/lib/python3.12/multiprocessing/managers.py", line 566, in start
self._address = reader.recv()
^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/multiprocessing/connection.py", line 250, in recv
buf = self._recv_bytes()
^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/multiprocessing/connection.py", line 430, in _recv_bytes
buf = self._recv(4)
^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/multiprocessing/connection.py", line 399, in _recv
raise EOFError
EOFError
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "//bug.py", line 3, in <module>
logging.config.dictConfig(
File "/usr/local/lib/python3.12/logging/config.py", line 920, in dictConfig
dictConfigClass(config).configure()
File "/usr/local/lib/python3.12/logging/config.py", line 588, in configure
raise ValueError('Unable to configure handler '
ValueError: Unable to configure handler 'queue_listener'
```
<hr>
From my understanding, this is related to #119819 et al., and the change that caused this was introduced in #120030: it creates ephemeral instances of `multiprocessing.Manager` which, on init, try to acquire a temporary directory.
https://github.com/python/cpython/blob/879d1f28bb97bcecddca0824276877aaf97f25b3/Lib/logging/config.py#L785
### CPython versions tested on:
3.12
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-120872
* gh-121077
* gh-121078
<!-- /gh-linked-prs -->
| 7d9c68513d112823a9a6cdc7453b998b2c24eb4c | 4be1f37b20bd51498d3adf8ad603095c0f38d6e5 |
python/cpython | python__cpython-120861 | # `type_setattro` error return paths contain bugs
# Bug report
The implementation of `type_setattro` has a few bugs relating to error return paths:
The lock is not released if `_PyDict_GetItemRef_Unicode_LockHeld` fails:
https://github.com/python/cpython/blob/8f17d69b7bc906e8407095317842cc0fd52cd84a/Objects/typeobject.c#L5732-L5736
`name` should be `Py_DECREF`'d
https://github.com/python/cpython/blob/8f17d69b7bc906e8407095317842cc0fd52cd84a/Objects/typeobject.c#L5721-L5723
<!-- gh-linked-prs -->
### Linked PRs
* gh-120861
* gh-120963
<!-- /gh-linked-prs -->
| dee63cb35971b87a09ddda5d6f29cd941f570720 | 0153fd094019b84e18b8e8451019694595f67f9e |
python/cpython | python__cpython-120859 | # `PyDict_Next` should not lock the dict
# Bug report
`PyDict_Next` currently wraps `_PyDict_Next` in a critical section. We shouldn't do this -- the locking needs to be external to the call.
1) It's not sufficient to lock the dict just for each `_PyDict_Next` call, because we return borrowed references and because `pos` becomes meaningless if the dictionary gets resized or rehashed.
2) It interferes with externally locking the dict, because the inner critical sections can suspend the outer ones. In other words, if the caller uses a critical section to lock the dict for multiple iterations, this will break that.
https://github.com/python/cpython/blob/8f17d69b7bc906e8407095317842cc0fd52cd84a/Objects/dictobject.c#L2883-L2890
cc @DinoV
<!-- gh-linked-prs -->
### Linked PRs
* gh-120859
* gh-120964
<!-- /gh-linked-prs -->
| 375b723d5873f948696c7e85a97f4778d9e00ff0 | dee63cb35971b87a09ddda5d6f29cd941f570720 |
python/cpython | python__cpython-121051 | # `faulthandler` itself crashes in free-threading build (in `_Py_DumpExtensionModules`)
# Bug report
The `faulthandler` module can dump Python tracebacks when a crash occurs. Unfortunately, the current implementation itself crashes in the free-threaded build. This is mostly undetected because our tests expect a crash, but faulthandler itself crashing is not desirable.
### Faulthandler may be called without a valid thread state (i.e., without holding GIL)
Faulthandler may be triggered when the thread doesn't have a valid thread state (i.e., doesn't hold the GIL in the default build and is not "attached" in the free-threaded build). Additionally, it's called from a signal handler, so we only want to call async-signal-safe functions (generally no locking).
Faulthandler calls `PyDict_Next` (via `_Py_DumpExtensionModules`) on the modules dictionary. This is not *entirely* safe in the default build (because we don't hold the GIL), but works well enough in practice.
However, it will consistently crash in the free-threaded build because [`PyDict_Next`](https://github.com/python/cpython/blob/6f1d448bc110633eda110310fd833bd46e7b30f2/Objects/dictobject.c#L2882-L2890) starts a critical section, which assumes there is a valid thread state.
Suggestion:
* we should use `_PyDict_Next()`, which doesn't internally lock the dict
* we should try to lock the dict around the `_PyDict_Next()` loop, with `_PyMutex_LockTimed` and `timeout=0`. If we can't immediately lock the dict, we should not dump modules. This is async-signal-safe because it's just a simple compare-exchange and doesn't block.
* we can't call `PyMutex_Unlock()` because it's not async-signal-safe (it internally acquires locks in order to wake up threads), so we should either use a simple atomic exchange to unlock the dict (without waking up waiters) or not bother unlocking the lock at all. We exit shortly after `_Py_DumpExtensionModules`, so it doesn't matter if we don't wake up other threads.
<!-- gh-linked-prs -->
### Linked PRs
* gh-121051
* gh-121107
<!-- /gh-linked-prs -->
| 1a2e7a747540f74414e7c50556bcb2cc127e9d1c | 237baf4d7a789deb153fbc1fc3863550949d5da2 |
python/cpython | python__cpython-120835 | # generator frame type should not be PyObject*[]
For historical reasons, the `*_iframe` field in `_PyGenObject_HEAD` is declared as ` PyObject*[]`, and later cast to `_PyInterpreterFrame *` (which is not a PyObject*).
<!-- gh-linked-prs -->
### Linked PRs
* gh-120835
* gh-120941
* gh-120976
<!-- /gh-linked-prs -->
| 65a12c559cbc13c2c5a4aa65c76310bd8d2051a7 | c38e2f64d012929168dfef7363c9e48bd1a6c731 |
python/cpython | python__cpython-121250 | # ios buildbot failure: `enclose 'sqlite3_create_window_function' in a __builtin_available check to silence this warning`
This failed in my PR: https://github.com/python/cpython/pull/120442
But it does not look related.
Link: https://buildbot.python.org/all/#builders/1380/builds/655
```
HEAD is now at 8334a1b55c gh-120384: Fix array-out-of-bounds crash in `list_ass_subscript` (#120442)
Switched to and reset branch 'main'
configure: WARNING: no system libmpdecimal found; falling back to bundled libmpdecimal (deprecated and scheduled for removal in Python 3.15)
configure: WARNING: pkg-config is missing. Some dependencies may not be detected correctly.
configure: WARNING: no system libmpdecimal found; falling back to bundled libmpdecimal (deprecated and scheduled for removal in Python 3.15)
configure: WARNING: pkg-config is missing. Some dependencies may not be detected correctly.
../../Modules/_sqlite/connection.c:1307:14: warning: 'sqlite3_create_window_function' is only available on iOS 13.0 or newer [-Wunguarded-availability-new]
rc = sqlite3_create_window_function(self->db, name, num_params, flags,
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/Applications/Xcode.app/Contents/Developer/Platforms/iPhoneSimulator.platform/Developer/SDKs/iPhoneSimulator17.5.sdk/usr/include/sqlite3.h:5533:16: note: 'sqlite3_create_window_function' has been marked as being introduced in iOS 13.0 here, but the deployment target is iOS 12.0.0
SQLITE_API int sqlite3_create_window_function(
^
../../Modules/_sqlite/connection.c:1307:14: note: enclose 'sqlite3_create_window_function' in a __builtin_available check to silence this warning
rc = sqlite3_create_window_function(self->db, name, num_params, flags,
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
../../Modules/_sqlite/connection.c:1315:14: warning: 'sqlite3_create_window_function' is only available on iOS 13.0 or newer [-Wunguarded-availability-new]
rc = sqlite3_create_window_function(self->db, name, num_params, flags,
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/Applications/Xcode.app/Contents/Developer/Platforms/iPhoneSimulator.platform/Developer/SDKs/iPhoneSimulator17.5.sdk/usr/include/sqlite3.h:5533:16: note: 'sqlite3_create_window_function' has been marked as being introduced in iOS 13.0 here, but the deployment target is iOS 12.0.0
SQLITE_API int sqlite3_create_window_function(
^
../../Modules/_sqlite/connection.c:1315:14: note: enclose 'sqlite3_create_window_function' in a __builtin_available check to silence this warning
rc = sqlite3_create_window_function(self->db, name, num_params, flags,
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2 warnings generated.
--- xcodebuild: WARNING: Using the first of multiple matching destinations:
{ platform:iOS Simulator, id:C6AEA11C-C34A-47B8-BD67-AF0403ECA353, OS:17.5, name:iPhone SE (3rd generation) }
{ platform:iOS Simulator, id:C6AEA11C-C34A-47B8-BD67-AF0403ECA353, OS:17.5, name:iPhone SE (3rd generation) }
2024-06-21 11:16:45.090 xcodebuild[33218:87319398] [MT] IDETestOperationsObserverDebug: 1075.762 elapsed -- Testing started completed.
2024-06-21 11:16:45.090 xcodebuild[33218:87319398] [MT] IDETestOperationsObserverDebug: 0.000 sec, +0.000 sec -- start
2024-06-21 11:16:45.090 xcodebuild[33218:87319398] [MT] IDETestOperationsObserverDebug: 1075.762 sec, +1075.762 sec -- end
Failing tests:
-[iOSTestbedTests testPython]
** TEST FAILED **
make: *** [testios] Error 1
```
<!-- gh-linked-prs -->
### Linked PRs
* gh-121250
* gh-121833
* gh-122339
* gh-122341
<!-- /gh-linked-prs -->
| 7e91e0dcfe2faab1e1a4630e6f745aa30ca87b3d | 2bac2b86b1486f15038fb246835e04bb1b213cd8 |
python/cpython | python__cpython-120812 | # Reference leak in `_contextvars.Context.run()`
# Bug report
### Bug description:
In `_contextvars.Context.run`, `call_result` is not `Py_DECREF`'d if `_PyContext_Exit` fails. See [here](https://github.com/python/cpython/blob/main/Python/context.c#L660).
### CPython versions tested on:
3.11
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-120812
* gh-120843
* gh-120844
<!-- /gh-linked-prs -->
| aed31beca9a54b85a1392631a48da80602210f18 | a81d434c06335b0989ba83666ec7076b9d9d4e1e |
python/cpython | python__cpython-120805 | # Rewrite asyncio subprocesses without child watchers
### Tasks
- [x] Remove child watchers (excludes threaded and pidfd watcher) #120805
- [x] Remove `get_child_watcher` and `set_child_watcher` https://github.com/python/cpython/pull/120818
- [x] Remove threaded and pidfd watcher https://github.com/python/cpython/pull/120893
- [x] Remove abc of it https://github.com/python/cpython/pull/120893
- [x] Add documentation regarding it and news entry https://github.com/python/cpython/pull/120895
Each task item will be done in a separate PR, and the news entry will be added when all of this is done; otherwise it would be confusing for users.
<!-- gh-linked-prs -->
### Linked PRs
* gh-120805
* gh-120818
* gh-120893
* gh-120895
* gh-121124
<!-- /gh-linked-prs -->
| 733dac01b0dc3047efc9027dba177d7116e47c50 | a2f6f7dd26128b834c6e66fe1ceac3ac751143f5 |
python/cpython | python__cpython-120803 | # importlib.metadata test fixtures should prefer test.support fixtures
In https://github.com/python/cpython/pull/116131, I learned that there are some fixtures in importlib.metadata that now appear to have substantial overlap with those in `support.os_helper`. There was a time when the backport was the dominant implementation and the fixtures needed to stand alone, but as we move to a more stable implementation in the stdlib, the tests should rely on fixtures from the stdlib wherever possible.
<!-- gh-linked-prs -->
### Linked PRs
* gh-120803
<!-- /gh-linked-prs -->
| 85d90b59e2a9185cad608c5047357be645b4d5c6 | c1553bc34a537e00d6513da7df1c427df3570574 |
python/cpython | python__cpython-122887 | # a bit better example in pathlib docs - maybe include example path that involves file in a folder(s)?
# Documentation
See https://docs.python.org/3/library/pathlib.html#concrete-paths
`Path('setup.py')`, `PosixPath('/etc')` and `WindowsPath('c:/Program Files/')` all have only a single level.
Would it be OK to have an example with a file inside a folder, say `PosixPath('/etc/hosts')` (if that is the correct way to pass more complex paths to the constructor)?
Would a PR changing this be welcome, and would it have a chance of being reviewed?
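Passing a multi-level path to the constructor works directly; a small sketch (using `PurePosixPath` so it behaves the same on any OS):

```python
from pathlib import PurePosixPath

# Multi-level paths can be passed to the constructor as a single string:
p = PurePosixPath("/etc/hosts")
print(p.parent)  # /etc
print(p.name)    # hosts
```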
<!-- gh-linked-prs -->
### Linked PRs
* gh-122887
* gh-122895
* gh-122896
<!-- /gh-linked-prs -->
| 363374cf69a7e2292fe3f1c6bedd199088958cc2 | 0959142e4defcf7a9fcbbb228d2e2b97a074f7ea |
python/cpython | python__cpython-120829 | # ``test_datetime`` fails with a ``--forever`` argument
# Bug report
### Bug description:
Output:
```python
...many lines
test_resolution_info (test.datetimetester.TestTime_Fast.test_resolution_info) ... FAIL
======================================================================
FAIL: test_resolution_info (test.datetimetester.TestTime_Fast.test_resolution_info)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/Users/admin/Projects/cpython/Lib/test/datetimetester.py", line 3685, in test_resolution_info
self.assertIsInstance(self.theclass.max, self.theclass)
~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AssertionError: -1224 is not an instance of <class 'datetime.time'>
```
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
macOS
<!-- gh-linked-prs -->
### Linked PRs
* gh-120829
* gh-120855
* gh-122800
<!-- /gh-linked-prs -->
| a81d434c06335b0989ba83666ec7076b9d9d4e1e | 6f1d448bc110633eda110310fd833bd46e7b30f2 |
python/cpython | python__cpython-120781 | # dis: LOAD_SPECIAL should mention the name of the attribute
# Feature or enhancement
### Proposal:
```python
>>> import dis
>>> dis.dis("with x: pass")
0 RESUME 0
1 LOAD_NAME 0 (x)
COPY 1
LOAD_SPECIAL 1
SWAP 2
SWAP 3
LOAD_SPECIAL 0
```
It should say "`LOAD_SPECIAL 1 (__exit__)`" or something similar.
I'll work on this.
cc @markshannon who added LOAD_SPECIAL in #120640
### Has this already been discussed elsewhere?
This is a minor feature, which does not need previous discussion elsewhere
### Links to previous discussion of this feature:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-120781
<!-- /gh-linked-prs -->
| e8e151d4715839f785ff853c77594d7302b40266 | 55596ae0446e40f47e2a28b8897fe9530c32a19a |
python/cpython | python__cpython-120778 | # Introspective attributes of an async generator object are undocumented
# Documentation
Though listed with details in [PEP-525](https://peps.python.org/pep-0525/#asynchronous-generator-object), introspective attributes of an async generator object, namely `__name__`, `__qualname__`, `ag_await`, `ag_frame`, `ag_running` and `ag_code`, are missing in the documentation of the [`inspect`](https://docs.python.org/3/library/inspect.html) module.
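These attributes are already observable on any async generator object; a minimal sketch (the `ticker` function is illustrative):

```python
import inspect

async def ticker():
    yield 1

ag = ticker()
# The PEP 525 introspection attributes are available even before the
# generator is started:
print(ag.__name__)            # 'ticker'
print(ag.ag_running)          # False
print(ag.ag_code.co_name)     # 'ticker'
print(inspect.isasyncgen(ag)) # True
```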
<!-- gh-linked-prs -->
### Linked PRs
* gh-120778
* gh-120827
* gh-120828
<!-- /gh-linked-prs -->
| 83d3d7aace32b8536f552f78dd29610344f13160 | 8334a1b55c93068f5d243852029baa83377ff6c9 |
python/cpython | python__cpython-120770 | # pdb repeats `w 0` on empty line when there are commands in `cmdqueue`
# Bug report
### Bug description:
If you do a `n ;; p 1` in pdb, the next time you enter an empty line (meant to repeat the last command), it will repeat `w 0` due to #119882. It's supposed to repeat `p 1` (arguably `n ;; p 1`, but we never do that). As the user never inputs `w 0` on their own, the current behavior might not be desired.
### CPython versions tested on:
3.12, 3.13, CPython main branch
### Operating systems tested on:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-120770
<!-- /gh-linked-prs -->
| 31ce5c05a489fa22f30c4afdec162e4e669af15a | 3af7263037de1d0ef63b070fc7bfc2cf042eaebe |
python/cpython | python__cpython-120755 | # Speed up open().read() pattern by reducing the number of system calls
# Feature or enhancement
### Proposal:
I came across some seemingly redundant `fstat()` and `lseek()` calls when working on a tool that scanned a directory of lots of small YAML files and loaded their contents as config. In tracing I found most execution time wasn't spent in the Python interpreter but in system calls (on top of NFS in that case, which made some I/O calls particularly slow).
I've been experimenting with a program that reads all `.rst` files in the Python `Docs` directory to try to remove some of those redundant system calls.
### Test Program
```python
from pathlib import Path
nlines = []
for filename in Path("cpython/Doc").glob("**/*.rst"):
nlines.append(len(filename.read_text()))
```
In my experimentation, some tweaks to fileio can remove over 10% of the system calls the test program makes when scanning the whole `Doc` folder for `.rst` files, on both macOS and Linux (I don't have a Windows machine to measure on).
### Current State (9 system calls)
Currently on my Linux machine to read a whole `.rst` file with the above code there is this series of system calls:
```python
openat(AT_FDCWD, "cpython/Doc/howto/clinic.rst", O_RDONLY|O_CLOEXEC) = 3
fstat(3, {st_mode=S_IFREG|0644, st_size=343, ...}) = 0
ioctl(3, TCGETS, 0x7ffe52525930) = -1 ENOTTY (Inappropriate ioctl for device)
lseek(3, 0, SEEK_CUR) = 0
lseek(3, 0, SEEK_CUR) = 0
fstat(3, {st_mode=S_IFREG|0644, st_size=343, ...}) = 0
read(3, ":orphan:\n\n.. This page is retain"..., 344) = 343
read(3, "", 1) = 0
close(3) = 0
```
### Target State (~~7~~ 5 system calls)
It would be nice to get it down to (for small files, large file caveat in PR / get an additional seek):
```python
# Open the file
openat(AT_FDCWD, "cpython/Doc/howto/clinic.rst", O_RDONLY|O_CLOEXEC) = 3
# Check if the open fd is a file or directory and early-exit on directories with a specialized error.
# With my changes we also stash the size information from this for later use as an estimate.
fstat(3, {st_mode=S_IFREG|0644, st_size=343, ...}) = 0
# Read the data directly into a PyBytes
read(3, ":orphan:\n\n.. This page is retain"..., 344) = 343
# Read the EOF marker
read(3, "", 1) = 0
# Close the file
close(3) = 0
```
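A hypothetical Python-level sketch of this target sequence (the real change lives in the C `fileio` implementation; the function name and fallback loop here are illustrative only):

```python
import os

def read_whole_file(path):
    # Sketch of the reduced sequence: open, one fstat for a size estimate,
    # one read sized to the whole file (+1 byte), one empty read at EOF.
    fd = os.open(path, os.O_RDONLY)
    try:
        size = os.fstat(fd).st_size       # doubles as the directory check
        chunks = [os.read(fd, size + 1)]  # whole small file in one call
        while chunks[-1]:                 # stop at the empty EOF read
            chunks.append(os.read(fd, max(size, 4096)))
        return b"".join(chunks)
    finally:
        os.close(fd)
```

For a small file that matches its `fstat` size, this is five system calls: `openat`, `fstat`, `read`, `read` (EOF), `close`.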
In a number of cases (ex. importing modules) there is often a `fstat` followed immediately by an open / read the file (which does another `fstat` typically), but that is an extension point and I want to keep that out of scope for now.
### Questions rattling around in my head around this
Some of these are likely better for Discourse / longer form discussion, happy to start threads there as appropriate.
1. Is there a way to add a test that certain system calls happen with certain arguments and/or a certain number of times? (I don't currently see a great way to write a test to make sure the number of system calls doesn't change unintentionally.)
2. Running a simple Python script (`python simple.py` containing `print("Hello, World!")`) currently reads `simple.py` in full at least 4 times and does over 5 seeks. I have been pulling on that thread, but it interacts with importlib as well as how the Python compiler currently works; I'm still trying to get my head around it. Would removing more of those overheads be of interest / should I keep working on it?
3. We could potentially save more
1. with readv (one readv call, two iovecs). I avoided this for now because _Py_read does quite a bit.
2. dispatching multiple calls in parallel using asynchronous I/O APIs to meet the python API guarantees; I am experimenting with this (backed by relatively new Linux I/O APIs but possibly for kqueue and epoll), but it's _very_ experimental and feeling a lot like "has to be a interpreter primitive" to me to work effectively which is complex to plumb through. Very early days though, many thoughts, not much prototype code.
4. The `_blksize` member of fileio was added in bpo-21679. It is not used much as far as I can tell, either via its Python-level reflection `_blksize` or in the C code. The only usage I can find is https://github.com/python/cpython/blob/main/Modules/_io/_iomodule.c#L365-L374, where we could just query for it when needed to save some storage on all `fileio` objects. The behavior of using the stat-returned st_blksize is part of the docs, so it doesn't feel like we can fully remove it.
### Has this already been discussed elsewhere?
This is a minor feature, which does not need previous discussion elsewhere
### Links to previous discussion of this feature:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-120755
* gh-121143
* gh-121357
* gh-121593
* gh-121633
* gh-122101
* gh-122103
* gh-122111
* gh-122215
* gh-122216
* gh-123303
* gh-123412
* gh-123413
* gh-124225
* gh-125166
* gh-126466
<!-- /gh-linked-prs -->
| 2f5f19e783385ec5312f7054827ccf1cdb6e14ef | 9728ead36181fb3f0a4b2e8a7291a3e0a702b952 |
python/cpython | python__cpython-120744 | # Soft deprecate os.spawn*(), os.popen() and os.system() functions
See the discussion for the rationale: [Is time to remove os module spawn* functions?](https://discuss.python.org/t/is-time-to-remove-os-module-spawn-functions/55829)
See also the discussion: [How to deal with unsafe/broken os.spawn* arg handling behavior on Windows](https://discuss.python.org/t/how-to-deal-with-unsafe-broken-os-spawn-arg-handling-behavior-on-windows/20829) (Nov 2022).
Advantages of the subprocess module:
* subprocess restores the signal handlers (**reliable behavior**).
* subprocess closes all file descriptors (**more secure**).
* subprocess doesn't use a shell by default, `shell=True` must be used explicitly.
* subprocess is the defacto standard way to spawn processes in Python.
Examples of os functions issues:
* os.popen() and os.system() always use `shell=True`, it cannot be disabled. (**higher risk of shell code injection**)
* os.popen().close() return value depends on the platform. (**not portable**)
* os.popen().close() return value is a "wait status", not an exit code: waitstatus_to_exitcode() must be used to get a return code.
* There are 8 os.spawn*() functions, it's more **error prone** and harder to use than subprocess APIs.
* os.spawn*() functions return exit code 1 if the program is not found, instead of raising an exception. This makes it harder to distinguish whether the program exists and exited with code 1, or doesn't exist.
* The os.spawn*() functions are not safe for use in multithreaded programs on POSIX systems as they use fork()+exec() from Python there.
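For illustration, a rough sketch of what the `subprocess` replacements look like (the program name in the failure case is invented):

```python
import subprocess
import sys

# Instead of os.system("..."), pass an explicit argument list -- no shell:
result = subprocess.run(
    [sys.executable, "-c", "print('hello')"],
    capture_output=True, text=True,
)
print(result.returncode)        # a real exit code, not a wait status
print(result.stdout.strip())    # hello

# A missing program raises an exception instead of returning exit code 1:
try:
    subprocess.run(["definitely-not-a-real-program-xyz"])
except FileNotFoundError:
    print("not found")
```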
<!-- gh-linked-prs -->
### Linked PRs
* gh-120744
<!-- /gh-linked-prs -->
| d44c550f7ebee7d33785142e6031a4621cf21573 | 02cb5fdee391670d63b2fc0a92ca9b36a32ac95a |
python/cpython | python__cpython-120734 | # some internal compiler functions don't follow the naming convention
compiler_visit_* is a prefix that interacts with the VISIT macros and the * should correspond to an AST node. There are some functions that are named like this but should not be.
<!-- gh-linked-prs -->
### Linked PRs
* gh-120734
<!-- /gh-linked-prs -->
| eaaf6995a883255e6d0e433591dc9fdc374b8f06 | 0f3e36454d754026d6c510053ff1e4b22ae80cd9 |
python/cpython | python__cpython-120737 | # Python 3.12.3 -> 3.12.4 breaks use of `create_autospec(X, spec_set=True, name="X")`
# Bug report
### Bug description:
Hello, thanks for the work you do!
My issue: Updating from 3.12.3 to 3.12.4 breaks many elements in my test suite. We use the `name=` argument to better understand log messages once a test with many objects fails.
This works with 3.12.3, but crashes with 3.12.4. The issue might have been introduced with this fix: https://github.com/python/cpython/commit/23ba96e2433d17e86f4770a64b94aaf9ad22a25b
### Code example
File `autospec_test.py`:
```python
import unittest
from unittest.mock import create_autospec
class X: ...
class TestX(unittest.TestCase):
def test_x(self):
mock = create_autospec(X, spec_set=True, name="X")
if __name__ == "__main__":
unittest.main()
```
### Observed behavior
#### python 3.12.3
```
python autospec_test.py
.
----------------------------------------------------------------------
Ran 1 test in 0.001s
OK
```
#### python 3.12.4
```pytb
python autospec_test.py
E
======================================================================
ERROR: test_x (__main__.TestX.test_x)
----------------------------------------------------------------------
Traceback (most recent call last):
File ".../autospec_test.py", line 10, in test_x
mock = create_autospec(X, spec_set=True, name="X")
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/unittest/mock.py", line 2823, in create_autospec
mock.configure_mock(**kwargs)
File "/usr/local/lib/python3.12/unittest/mock.py", line 650, in configure_mock
setattr(obj, final, val)
File "/usr/local/lib/python3.12/unittest/mock.py", line 774, in __setattr__
raise AttributeError("Mock object has no attribute '%s'" % name)
AttributeError: Mock object has no attribute 'name'
----------------------------------------------------------------------
Ran 1 test in 0.003s
FAILED (errors=1)
```
### CPython versions tested on:
3.12
### Operating systems tested on:
Linux (CI pipelines failed on Windows too; I didn't test this example there, though)
<!-- gh-linked-prs -->
### Linked PRs
* gh-120737
* gh-120760
* gh-120761
<!-- /gh-linked-prs -->
| 1e4815692f6c8a37a3974d0d7d2025494d026d76 | ed5ae6c4d76feaff06c2104c8ff864553b000253 |
python/cpython | python__cpython-120727 | # New warnings: ``warning: unused function 'is_core_module' [-Wunused-function]``
# Bug report
### Bug description:
Popped up during build in non-debug mode:
```
Python/import.c:1555:1: warning: unused function 'is_core_module' [-Wunused-function]
is_core_module(PyInterpreterState *interp, PyObject *name, PyObject *path)
^
```
and
```
Python/pystate.c:1135:1: warning: unused function 'check_interpreter_whence' [-Wunused-function]
check_interpreter_whence(long whence)
^
```
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
macOS
<!-- gh-linked-prs -->
### Linked PRs
* gh-120727
* gh-120729
<!-- /gh-linked-prs -->
| a816cd67f43d9adb27ccdb6331e08c835247d1df | 45d5cab533a607716b2b41134839a59facf309cd |
python/cpython | python__cpython-120820 | # `datetime.strftime("%Y")` is not padding correctly
# Bug report
### Bug description:
[docs](https://docs.python.org/3/library/datetime.html#strftime-strptime-behavior) say:
```text
%Y
Year with century as a decimal number.
0001, 0002, …, 2013, 2014, …, 9998, 9999
```
but this is what is happening
```python
from datetime import datetime
print(datetime(9, 6, 7).strftime("%Y-%m-%d")) # 9-06-07
print(datetime(99, 6, 7).strftime("%Y-%m-%d")) # 99-06-07
print(datetime(999, 6, 7).strftime("%Y-%m-%d")) # 999-06-07
```
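For what it's worth, the padded form can be produced without `%Y` by formatting the date components directly (a workaround sketch; the `strftime` results above are platform-dependent):

```python
from datetime import datetime

d = datetime(999, 6, 7)
# Zero-pad manually instead of relying on the platform's %Y behavior:
print(f"{d.year:04d}-{d.month:02d}-{d.day:02d}")  # 0999-06-07
```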
### CPython versions tested on:
3.10
### Operating systems tested on:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-120820
* gh-121144
* gh-121145
* gh-122408
* gh-122409
* gh-135933
* gh-136387
<!-- /gh-linked-prs -->
| 6d34938dc8163f4a4bcc68069a1645a7ab76e935 | 92893fd8dc803ed7cdde55d29d25f84ccb5e3ef0 |
python/cpython | python__cpython-120691 | # test_email fails on WASI: thread 'main' has overflowed its stack
wasm32-wasi 3.x: https://buildbot.python.org/all/#/builders/1046/builds/5459
```
0:00:28 load avg: 4.60 [ 25/478/1] test_email worker non-zero exit code (Exit code -6 (SIGABRT))
test_b_case_ignored (test.test_email.test__encoded_words.TestDecode.test_b_case_ignored) ... ok
test_b_invalid_bytes_ignored_with_defect (test.test_email.test__encoded_words.TestDecode.test_b_invalid_bytes_ignored_with_defect) ... ok
(...)
test_group_escaped_quoted_strings_in_local_part (test.test_email.test_headerregistry.TestAddressHeader.test_group_escaped_quoted_strings_in_local_part) ... ok
test_group_name_and_address (test.test_email.test_headerregistry.TestAddressHeader.test_group_name_and_address) ...
thread 'main' has overflowed its stack
fatal runtime error: stack overflow
```
<!-- gh-linked-prs -->
### Linked PRs
* gh-120691
<!-- /gh-linked-prs -->
| 49f51deeef901b677853f00e428cbaeb13ecd2f2 | c81a5e6b5b7749862d271e7a67f89976069ad2cd |
python/cpython | python__cpython-120687 | # Removed unused internal C API functions
The functions in Include/internal/pycore_identifier.h are not used:
```
extern PyObject* _PyType_LookupId(PyTypeObject *, _Py_Identifier *);
extern PyObject* _PyObject_LookupSpecialId(PyObject *, _Py_Identifier *);
extern int _PyObject_SetAttrId(PyObject *, _Py_Identifier *, PyObject *);
```
<!-- gh-linked-prs -->
### Linked PRs
* gh-120687
<!-- /gh-linked-prs -->
| 6f7acaab508edac3dff376912b85cf46a8671e72 | 12af8ec864225248c3d2916cb142a5e7ee36cbe2 |
python/cpython | python__cpython-120709 | # Python 3.13.0b2: rounding can cause incorrect log timestamps
# Bug report
### Bug description:
The change made in #102412 introduces floating-point issues that in corner cases can cause log timestamps to be wrong by up to 999ms. This is much worse than the bug it was trying to fix (#102402). For example:
```python
import logging
import time
from unittest import mock
def fake_time_ns():
return 1718710000_000_000_000 - 1
def fake_time():
return fake_time_ns() / 1e9
logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")
with mock.patch("time.time_ns", fake_time_ns), mock.patch("time.time", fake_time):
logging.info("The time is now")
```
when run with TZ=utc will output
```
2024-06-18 11:26:40,999 The time is now
```
The correct time (modulo any timezone shifts) is 11:26:40 (minus 1 nanosecond).
This occurs because when converting the `time_ns` result to a floating-point timestamp to store in `self.created`, the result is rounded, while the millisecond calculation truncates. This will affect roughly 1 in 8 million timestamps.
I don't think this can happen prior to #102412, although it depends on `Formatter.converter` taking the floor of the given value. A custom converter that went via `datetime.fromtimestamp` could go wrong, because that rounds to the *nearest* microsecond, but the default `time.localtime` converter takes the floor.
gh-102402 points out that representing time as floating-point seconds can't exactly represent decimal fractions, but it's not clear to me why that's considered an issue. In general `time_ns()` has two advantages:
1. It's higher precision (although not necessarily higher accuracy); but the overheads of processing and formatting a log message are higher than the precision of `time.time()` anyway, and the provided formatting code only handles millisecond precision.
2. It allows calculations to be performed in integer arithmetic, avoiding corner cases such as gh-89047. However, we're stuck with `LogRecord.created` being a float for compatibility reasons, and as shown above, mixing integer and float arithmetic leads to inconsistent results.
So my recommendation is to revert #102412. If that's not considered acceptable, then this corner case can be detected by checking for `int(self.created) != ct // 1_000_000_000` and adjusting `self.msecs` to compensate.
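The round-versus-truncate mismatch can be shown directly with the nanosecond value from the example above (a sketch of what `LogRecord` effectively computes):

```python
ct = 1718710000_000_000_000 - 1            # the fake time.time_ns() value

created = ct / 1e9                         # float conversion *rounds* up...
msecs = (ct % 1_000_000_000) // 1_000_000  # ...while this *truncates*

print(int(created))            # 1718710000 (seconds advanced by one)
print(ct // 1_000_000_000)     # 1718709999 (the true seconds value)
print(msecs)                   # 999 -> formatted as ...:40,999
print(int(created) != ct // 1_000_000_000)  # the proposed detection check
```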
### CPython versions tested on:
3.13
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-120709
* gh-120933
<!-- /gh-linked-prs -->
| 1500a23f33f5a6d052ff1ef6383d9839928b8ff1 | 02df6795743ee4ee26a07986edbb5e22ae9fec8b |
python/cpython | python__cpython-120904 | # New REPL does not include globals from executed module when used with `-i`
# Bug report
### Bug description:
If I have a module called `foo`.py with the following contents in the root of my CPython local clone:
```py
class Foo: pass
```
Then running `PYTHON_BASIC_REPL=1 ./python.exe -i foo.py`, I get the following behaviour: `foo.py` is executed, and the `Foo` class is available in the globals of the REPL:
```pycon
~/dev/cpython (main)⚡ % PYTHON_BASIC_REPL=1 ./python.exe -i foo.py
>>> Foo
<class '__main__.Foo'>
```
But with the new REPL, the `Foo` class isn't available!
```pycon
~/dev/cpython (main)⚡ % ./python.exe -i foo.py
>>> Foo
Traceback (most recent call last):
File "<python-input-0>", line 1, in <module>
Foo
NameError: name 'Foo' is not defined
>>>
```
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
macOS
<!-- gh-linked-prs -->
### Linked PRs
* gh-120904
* gh-121916
* gh-121924
* gh-121929
<!-- /gh-linked-prs -->
| ac07451116d52dd6a5545d27b6a2e3737ed27cf0 | 58753f33e47fe48906883dc010771f68c13b7e52 |
python/cpython | python__cpython-120675 | # Protect multi-line macros expansions in `_test*.c` files
# Feature or enhancement
### Proposal:
In `Modules/_testbuffer.c` and `Modules/_testcapimodule.c`, there are some expansions that are not protected, although their usage is correct.
This issue is a continuation of https://github.com/python/cpython/issues/119981#issuecomment-2145636560 and #120017. There are other places where macro expansions are not protected, so I'll work on those when I have some free time (those PRs are quite easy to review and implement, IMO).
### Has this already been discussed elsewhere?
No response given
### Links to previous discussion of this feature:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-120675
<!-- /gh-linked-prs -->
| 7c5da94b5d674e112dc77f6494463014b7137193 | 4bc27abdbee88efcf9ada83de6e9e9a0e439edaf |
python/cpython | python__cpython-120822 | # Failing configure tests due to missing space
# Bug report
### Bug description:
When building CPython from source, I noticed some suspicious errors that seem to be related to a missing space in the configure script. Most uses of `as_fn_append` correctly include a leading space before appending to a variable; however, there are several cases where this space is missing, which can lead to two arguments being concatenated:
- https://github.com/python/cpython/blob/main/configure#L9617
- https://github.com/python/cpython/blob/main/configure#L9735
- https://github.com/python/cpython/blob/main/configure#L9780
- https://github.com/python/cpython/blob/main/configure#L9821
- https://github.com/python/cpython/blob/main/configure#L9862
- https://github.com/python/cpython/blob/main/configure#L9903
- https://github.com/python/cpython/blob/main/configure#L9944
- https://github.com/python/cpython/blob/main/configure#L9996
When compiling to Emscripten in Bazel, for example, I end up with `-sRELOCATABLE=1-Wstrict-prototypes` in one of my tests:
```
configure:9880: checking if we can enable /home/developer/.cache/bazel/_bazel_developer/25e07d78077dfe1eca932359d50e41ef/sandbox/processwrapper-sandbox/29/execroot/_main/external/emsdk/emscripten_toolchain/emcc.sh strict-prototypes warning
configure:9900: /home/developer/.cache/bazel/_bazel_developer/25e07d78077dfe1eca932359d50e41ef/sandbox/processwrapper-sandbox/29/execroot/_main/external/emsdk/emscripten_toolchain/emcc.sh -c --sysroot=/home/developer/.cache/bazel/_bazel_developer/25e07d78077dfe1eca932359d50e41ef/sandbox/processwrapper-sandbox/29/execroot/_main/external/emscripten_bin_linux/emscripten/cache/sysroot -fdiagnostics-color -fno-strict-aliasing -funsigned-char -no-canonical-prefixes -Wall -iwithsysroot/include/c++/v1 -iwithsysroot/include/compat -iwithsysroot/include -isystem /home/developer/.cache/bazel/_bazel_developer/25e07d78077dfe1eca932359d50e41ef/sandbox/processwrapper-sandbox/29/execroot/_main/external/emscripten_bin_linux/lib/clang/19/include -Wno-builtin-macro-redefined -D__DATE__=redacted -D__TIMESTAMP__=redacted -D__TIME__=redacted -O2 -g0 -fwasm-exceptions -pthread -sRELOCATABLE=1-Wstrict-prototypes -Werror -I/home/developer/.cache/bazel/_bazel_developer/25e07d78077dfe1eca932359d50e41ef/sandbox/processwrapper-sandbox/29/execroot/_main/bazel-out/k8-fastbuild/bin/python/libpython.ext_build_deps/libffi/include conftest.c >&5
emcc: error: setting `RELOCATABLE` expects `bool` but got `str`
```
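The concatenation itself can be modeled in a few lines of Python (assuming autoconf's `as_fn_append` appends with no separator of its own):

```python
def as_fn_append(env, var, value):
    # Python model of autoconf's as_fn_append: plain concatenation, $1=$$1$2
    env[var] = env.get(var, "") + value

flags = {}
as_fn_append(flags, "CFLAGS", "-sRELOCATABLE=1")
as_fn_append(flags, "CFLAGS", "-Wstrict-prototypes")    # no leading space
print(flags["CFLAGS"])   # -sRELOCATABLE=1-Wstrict-prototypes

as_fn_append(flags, "LDFLAGS", "-sRELOCATABLE=1")
as_fn_append(flags, "LDFLAGS", " -Wstrict-prototypes")  # the fix: " -..."
print(flags["LDFLAGS"])  # -sRELOCATABLE=1 -Wstrict-prototypes
```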
### CPython versions tested on:
3.12
### Operating systems tested on:
Other
<!-- gh-linked-prs -->
### Linked PRs
* gh-120822
* gh-120985
* gh-120986
<!-- /gh-linked-prs -->
| 2106c9bef0c18ff35db7d6c083cb8f189507758e | fd0f814ade43fa479bfbe76dc226424db14a9354 |
python/cpython | python__cpython-120668 | # Bug in smtplib example
# Documentation
The example code is meant to demonstrate sending a multi-line message. It does not do that; the lines get contracted into one.
This can be fixed by adding a newline character to the end of the line when it is added to the message string:
```python
msg += line + '\n'
```
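The contraction is easy to see without a mail server, following the loop structure of the docs example (input lines are simulated):

```python
lines = ["first line", "second line"]

msg_buggy = ""
msg_fixed = ""
for line in lines:
    msg_buggy += line          # as in the current docs example
    msg_fixed += line + "\n"   # with the proposed fix

print(repr(msg_buggy))  # 'first linesecond line' -- contracted into one
print(repr(msg_fixed))  # 'first line\nsecond line\n'
```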
BTW, why not use PEP 8 compliant names, f-strings, and context managers? Why make things complicated with a .strip() call? Why make a blank line in the message act like pressing send?
```python
import smtplib
from_addr = input("From: ")
to_addrs = input("To: ").split()
print("Enter message, end with ^D (Unix) or ^Z (Windows):")
# Add the From: and To: headers at the start!
msg = f"From: {from_addr}\r\nTo: {', '.join(to_addrs)}\r\n\r\n"
while True:
try:
line = input()
except EOFError:
break
msg += line + '\n'
print("Message length is", len(msg))
with smtplib.SMTP('localhost') as server:
server.set_debuglevel(1)
server.sendmail(from_addr, to_addrs, msg)
```
<!-- gh-linked-prs -->
### Linked PRs
* gh-120668
* gh-120681
* gh-120682
<!-- /gh-linked-prs -->
| 4bc27abdbee88efcf9ada83de6e9e9a0e439edaf | 3044d3866e87bd236d8e7931fb4aa176ba483716 |
python/cpython | python__cpython-120934 | # Type error in the section on type hints
# Documentation
https://docs.python.org/3/library/typing.html
"This module provides runtime support for type hints.
Consider the function below:"
```
def moon_weight(earth_weight: float) -> str:
return f'On the moon, you would weigh {earth_weight * 0.166} kilograms.'
```
This is technically incorrect. Kilograms are not a measure of weight, they are a measure of mass. If you are 60 kilograms on earth, you are 60 kilograms on the moon, not 10 kilograms. You weigh six times less on the moon because the moon has six times less mass than the Earth, not because you have six times less mass on the moon.
On Earth, "kilogram" is a common shorthand for "the weight, on Earth, of a kilogram." On Earth, it is fine to drop the phrase "on Earth" because everybody understands we are talking about the weight of a kilogram on Earth. Once we are planet hopping, it no longer makes sense to use "kilogram" as shorthand for "weight of a kilogram" because the weight of a kilogram changes everywhere we go.
This is the kind of type error that some languages use type systems to prevent (https://learn.microsoft.com/en-us/dotnet/fsharp/language-reference/units-of-measure) so it makes sense to be careful about types here.
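Python's type system has no built-in units of measure, but `typing.NewType` can sketch the distinction the report is drawing (all names here are invented for illustration):

```python
from typing import NewType

# Distinct nominal types for the two quantities being conflated:
EarthWeightKg = NewType("EarthWeightKg", float)  # weight of N kg on Earth
MoonWeightKg = NewType("MoonWeightKg", float)    # "Earth-kilograms" on the Moon

def moon_weight(earth_weight: EarthWeightKg) -> MoonWeightKg:
    # mass is invariant; only the gravitational force changes
    return MoonWeightKg(earth_weight * 0.166)

# A type checker would flag moon_weight(moon_weight(...)) as mixing units.
print(moon_weight(EarthWeightKg(60.0)))
```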
<!-- gh-linked-prs -->
### Linked PRs
* gh-120934
* gh-120987
* gh-120988
<!-- /gh-linked-prs -->
| bb057ea1075e000ff3f0d6b27a2b7ca4117b4969 | 2106c9bef0c18ff35db7d6c083cb8f189507758e |
python/cpython | python__cpython-120660 | # Skip `test_free_threading` with GIL
# Feature or enhancement
### Proposal:
`test_free_threading` shouldn't run when the GIL is enabled. It takes 2 minutes on the default build, making it the slowest test in the suite.
### Has this already been discussed elsewhere?
This is a minor feature, which does not need previous discussion elsewhere
### Links to previous discussion of this feature:
- #119599
- #120565
<!-- gh-linked-prs -->
### Linked PRs
* gh-120660
* gh-120694
<!-- /gh-linked-prs -->
| 360f14a493d8461d42dc646be40b4b6fb20db57a | d2e423114cfb5028515c73e01b4955b39a1ee7db |
python/cpython | python__cpython-120643 | # [C API] Move private PyCode and PyOptimizer API to the internal C API
The private PyCode and PyOptimizer APIs use "unnamed structs/unions" which are not allowed in ISO C99: see https://github.com/python/cpython/issues/120293.
I propose to move private PyCode and PyOptimizer APIs to the internal C API.
6 PyUnstable optimizer functions are moved to the internal C API:
* PyUnstable_Replace_Executor()
* PyUnstable_SetOptimizer()
* PyUnstable_GetOptimizer()
* PyUnstable_GetExecutor()
* PyUnstable_Optimizer_NewCounter()
* PyUnstable_Optimizer_NewUOpOptimizer()
<!-- gh-linked-prs -->
### Linked PRs
* gh-120643
* gh-121043
* gh-121644
* gh-121729
<!-- /gh-linked-prs -->
| 9e4a81f00fef689c6e18a64245aa064eaadc7ac7 | 9e45fd9858a059950f7387b4fda2b00df0e8e537 |
python/cpython | python__cpython-120676 | # test_repl: Warning -- reap_children() reaped child process 3457256
On the "AMD64 Debian PGO 3.x" buildbot worker, test_pyrepl fails with "env changed" because a child process is not awaited explicitly.
build: https://buildbot.python.org/all/#/builders/249/builds/8909
```
1:17:07 load avg: 20.12 [453/478/1] test_pyrepl failed (env changed) (37.4 sec)
test_empty (test.test_pyrepl.test_input.KeymapTranslatorTests.test_empty) ... ok
test_push_character_key (test.test_pyrepl.test_input.KeymapTranslatorTests.test_push_character_key) ... ok
test_push_character_key_with_stack (test.test_pyrepl.test_input.KeymapTranslatorTests.test_push_character_key_with_stack) ... ok
(...)
test.test_pyrepl.test_windows_console (unittest.loader.ModuleSkipped.test.test_pyrepl.test_windows_console) ... skipped 'test only relevant on win32'
----------------------------------------------------------------------
Ran 116 tests in 35.634s
OK (skipped=2)
Warning -- reap_children() reaped child process 3457256
```
<!-- gh-linked-prs -->
### Linked PRs
* gh-120676
* gh-120741
<!-- /gh-linked-prs -->
| 0f3e36454d754026d6c510053ff1e4b22ae80cd9 | d8f27cb1141fd3575de816438ed80a916c0560ed |
python/cpython | python__cpython-120634 | # Remove tear-off menu feature in turtledemo
https://github.com/python/cpython/issues/58092 (BPO 13884) removed the tear-off menu in IDLE, but it's still there in the turtledemo window.
`tearoff=0` disables the tear-off vertical menu (which even uses Windows 8's close-window `x` style😨).
Lastly, I moved the scroll bar for previewing `py` files to the right; the original position on the left didn't fit the usage👀
There is no mention of tearoff in the Python docs; it seems like an outdated feature (but it is enabled by default)🤔
---
## Screenshots
### Before

### After

<!-- gh-linked-prs -->
### Linked PRs
* gh-120634
* gh-120725
<!-- /gh-linked-prs -->
| 89f7208f672be635e923f04c19a7480eb8eb414c | a0dce37895947a09f3ff97ae33bba703f6a6310c |
python/cpython | python__cpython-120607 | # Allow EOF to exit pdb commands definition
# Feature or enhancement
### Proposal:
In the `pdb` `commands` command, the user can use `exit` or a command that resumes execution to exit the commands definition. However, EOF seems like a very reasonable and intuitive way to exit as well. It's technically not a bug, because we never said it should work, but it's annoying and easy to fix.
### Has this already been discussed elsewhere?
This is a minor feature, which does not need previous discussion elsewhere
### Links to previous discussion of this feature:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-120607
<!-- /gh-linked-prs -->
| 4bbb0273f23c93ee82d7f60067775c558a7d1b1b | 1e4815692f6c8a37a3974d0d7d2025494d026d76 |
python/cpython | python__cpython-120604 | # Tools/jit/_llvm.py doesn't support LLVM_VERSION_SUFFIX
# Bug report
### Bug description:
I initially encountered [this build issue on Gentoo](https://bugs.gentoo.org/931838).
Currently the regex in `Tools/jit/_llvm.py` only allows strings of the form `version MAJOR.MINOR.PATCH`
```python
_LLVM_VERSION_PATTERN = re.compile(rf"version\s+{_LLVM_VERSION}\.\d+\.\d+\s+")
```
However if LLVM was built with e.g. `-DLLVM_VERSION_SUFFIX=+libcxx` ([as is the case on Gentoo](https://github.com/gentoo/gentoo/blob/11725d2f812440fecba12ef63172af4eadacd907/sys-devel/llvm/llvm-18.1.7.ebuild#L428)) `clang --version` would return `clang version 18.1.7+libcxx` which is a valid version but doesn't match.
My proposed fix would be to simply allow a non-whitespace string of arbitrary length after the version, i.e.:
```python
_LLVM_VERSION_PATTERN = re.compile(rf"version\s+{_LLVM_VERSION}\.\d+\.\d+\S*\s+")
```
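A quick check that the relaxed pattern accepts suffixed versions while the current one rejects them (a sketch, with the major version hard-coded to 18; `Tools/jit/_llvm.py` derives it elsewhere):

```python
import re

_LLVM_VERSION = 18
current = re.compile(rf"version\s+{_LLVM_VERSION}\.\d+\.\d+\s+")
proposed = re.compile(rf"version\s+{_LLVM_VERSION}\.\d+\.\d+\S*\s+")

output = "clang version 18.1.7+libcxx \n"
print(bool(current.search(output)))    # False: "+libcxx" breaks the match
print(bool(proposed.search(output)))   # True
```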
### CPython versions tested on:
3.13
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-120604
* gh-120768
<!-- /gh-linked-prs -->
| 285f42c850da0d8ca31850088eb7b9247cbbbc71 | 4bbb0273f23c93ee82d7f60067775c558a7d1b1b |
python/cpython | python__cpython-120601 | # [C API] Make Py_TYPE() opaque in limited C API 3.14
In the limited C API 3.14 and newer, I propose to change Py_TYPE() and Py_SET_TYPE() implementation to opaque function calls to hide implementation details. I made a similar change for Py_REFCNT() and Py_SET_REFCNT() in Python 3.12.
The problem is that with Free Threading (PEP 703), the implementation of these functions become less trivial than just getting/setting an object member:
```c
static inline PyTypeObject* Py_TYPE(PyObject *ob) {
return (PyTypeObject *)_Py_atomic_load_ptr_relaxed(&ob->ob_type);
}
static inline void Py_SET_TYPE(PyObject *ob, PyTypeObject *type) {
_Py_atomic_store_ptr(&ob->ob_type, type);
}
```
`_Py_atomic_load_ptr_relaxed()` and `_Py_atomic_store_ptr()` must now be called. But I would prefer to not "leak" such implementation detail into the limited C API.
cc @colesbury @Fidget-Spinner
<!-- gh-linked-prs -->
### Linked PRs
* gh-120601
<!-- /gh-linked-prs -->
| 16f8e22e7c681d8e8184048ed1bf927d33e11758 | e8752d7b80775ec2a348cd4bf38cbe26a4a07615 |
python/cpython | python__cpython-120615 | # ``test_pydoc.test_pydoc`` fails with a ``-R 3:3`` option
# Bug report
### Bug description:
```
./python.exe -m test -R 3:3 test_pydoc.test_pydoc
```
The output of the tests is in the attached file.
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
macOS
[output.txt](https://github.com/user-attachments/files/15858121/output.txt)
<!-- gh-linked-prs -->
### Linked PRs
* gh-120615
* gh-120669
* gh-120670
<!-- /gh-linked-prs -->
| 2cf47389e26cb591342d07dad98619916d5a1b15 | ac37a806018cc40fafebcd0fa90250c3e0261e0c |
python/cpython | python__cpython-120588 | # Several unused functions in `posixmodule.c` under WASI build
# Bug report
```
../../Modules/posixmodule.c:7883:1: warning: unused function 'warn_about_fork_with_threads' [-Wunused-function]
../../Modules/posixmodule.c:12545:1: warning: unused function 'major_minor_conv' [-Wunused-function]
../../Modules/posixmodule.c:12556:1: warning: unused function 'major_minor_check' [-Wunused-function]
```
Link: https://buildbot.python.org/all/#/builders/1046/builds/5439
The problem is that these helpers are defined without an `#ifdef` guard, while the functions that use them have conditional definitions.
I propose to add the same conditions to these functions as well.
<!-- gh-linked-prs -->
### Linked PRs
* gh-120588
* gh-120616
* gh-120617
<!-- /gh-linked-prs -->
| 3df2022931f77c5cadb3f51b371be6ae17587ede | bac4edad69bb20dd9460766e062637cae999e1e0 |
python/cpython | python__cpython-120585 | # `test_critical_sections.c`: unused function `thread_critical_sections`
# Bug report
On systems where `Py_CAN_START_THREADS` is not defined (like wasm32-wasi 3.x), `test_critical_sections_threads` is not created:
https://github.com/python/cpython/blob/192d17c3fd9945104bc0303cf248bb0d074d260e/Modules/_testinternalcapi/test_critical_sections.c#L173-L177
But, `thread_critical_sections` and `struct test_data` are always created. This leaves them unused in this case:
https://github.com/python/cpython/blob/192d17c3fd9945104bc0303cf248bb0d074d260e/Modules/_testinternalcapi/test_critical_sections.c#L132-L144
I propose to move the `#ifdef Py_CAN_START_THREADS` guard higher up so that it covers all the test utilities as well.
Buildbot link with a warning: https://buildbot.python.org/all/#/builders/1046/builds/5439
```
../../Modules/_testinternalcapi/test_critical_sections.c:142:1: warning: unused function 'thread_critical_sections' [-Wunused-function]
1 warning generated.
```
<!-- gh-linked-prs -->
### Linked PRs
* gh-120585
* gh-120592
<!-- /gh-linked-prs -->
| b337aefd3e44f5c8e38cd282273359d07cce6126 | b8484c6ad7fd14ca464e584b79821b4b906dd77a |
python/cpython | python__cpython-120580 | # `test_free_threading/test_dict.py` does not guard `_testcapi` import
# Bug report
This line is the source of this potential test failure: https://github.com/python/cpython/blob/cf49ef78f894e418bea7de23dde9b01d6235889d/Lib/test/test_free_threading/test_dict.py#L10-L12
```
» ./python.exe -m test test_free_threading -m test_dict_version -v
== CPython 3.14.0a0 (heads/main-dirty:cf49ef78f89, Jun 16 2024, 10:08:08) [Clang 15.0.0 (clang-1500.3.9.4)]
== macOS-14.4.1-arm64-arm-64bit-Mach-O little-endian
== Python build: debug
== cwd: /Users/sobolev/Desktop/cpython2/build/test_python_worker_59249
== CPU count: 12
== encodings: locale=UTF-8 FS=utf-8
== resources: all test resources are disabled, use -u option to unskip tests
Using random seed: 3294917118
0:00:00 load avg: 1.67 Run 1 test sequentially in a single process
0:00:00 load avg: 1.67 [1/1] test_free_threading
Failed to import test module: test.test_free_threading.test_dict
Traceback (most recent call last):
File "/Users/sobolev/Desktop/cpython2/Lib/unittest/loader.py", line 396, in _find_test_path
module = self._get_module_from_name(name)
File "/Users/sobolev/Desktop/cpython2/Lib/unittest/loader.py", line 339, in _get_module_from_name
__import__(name)
~~~~~~~~~~^^^^^^
File "/Users/sobolev/Desktop/cpython2/Lib/test/test_free_threading/test_dict.py", line 11, in <module>
from _testcapi import dict_version
ModuleNotFoundError: No module named '_testcapi'
test test_free_threading crashed -- Traceback (most recent call last):
File "/Users/sobolev/Desktop/cpython2/Lib/test/libregrtest/single.py", line 181, in _runtest_env_changed_exc
_load_run_test(result, runtests)
~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^
File "/Users/sobolev/Desktop/cpython2/Lib/test/libregrtest/single.py", line 138, in _load_run_test
regrtest_runner(result, test_func, runtests)
~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/sobolev/Desktop/cpython2/Lib/test/libregrtest/single.py", line 91, in regrtest_runner
test_result = test_func()
File "/Users/sobolev/Desktop/cpython2/Lib/test/libregrtest/single.py", line 135, in test_func
return run_unittest(test_mod)
File "/Users/sobolev/Desktop/cpython2/Lib/test/libregrtest/single.py", line 35, in run_unittest
raise Exception("errors while loading tests")
Exception: errors while loading tests
test_free_threading failed (uncaught exception)
== Tests result: FAILURE ==
1 test failed:
test_free_threading
Total duration: 44 ms
Total tests: run=0 (filtered)
Total test files: run=1/1 (filtered) failed=1
Result: FAILURE
```
If you build CPython without test modules:
```
The following modules are *disabled* in configure script:
_ctypes_test _testbuffer _testcapi
_testclinic _testclinic_limited _testexternalinspection
_testimportmultiple _testinternalcapi _testlimitedcapi
_testmultiphase _testsinglephase _xxtestfuzz
xxlimited xxlimited_35 xxsubtype
```
The proper way is to import this module conditionally and skip the tests that rely on it, just like other modules do.
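A minimal sketch of the conditional-import pattern (using `unittest.skipIf`; the real test suite may prefer `test.support.import_helper.import_module`):

```python
import unittest

try:
    from _testcapi import dict_version
except ImportError:  # test-only module; may be disabled in this build
    dict_version = None

@unittest.skipIf(dict_version is None, "requires the _testcapi module")
class TestDictVersion(unittest.TestCase):
    def test_importable(self):
        self.assertIsNotNone(dict_version)
```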
I have a PR ready.
<!-- gh-linked-prs -->
### Linked PRs
* gh-120580
* gh-120583
<!-- /gh-linked-prs -->
| 0c0348adbfca991f78b3aaa6790e5c26606a1c0f | cf49ef78f894e418bea7de23dde9b01d6235889d |
python/cpython | python__cpython-120573 | # [Docs] Missing parentheses in `TypeIs` and `TypeGuard` documentation
# Documentation
Current documentation regarding `TypeIs`'s behavior is inconsistent with that in the [typing spec](https://typing.readthedocs.io/en/latest/spec/narrowing.html#typeis).
```patch
--- a/Doc/library/typing.rst
+++ b/Doc/library/typing.rst
@@ -1454,8 +1454,8 @@ These can be used as types in annotations. They all support subscription using
to write such functions in a type-safe manner.
If a ``TypeIs`` function is a class or instance method, then the type in
- ``TypeIs`` maps to the type of the second parameter after ``cls`` or
- ``self``.
+ ``TypeIs`` maps to the type of the second parameter (after ``cls`` or
+ ``self``).
```
This issue is also present in documentation of `TypeGuard` from 3.10 to 3.12.
<!-- gh-linked-prs -->
### Linked PRs
* gh-120573
* gh-120575
* gh-120578
<!-- /gh-linked-prs -->
| 1fa595963ed512b055d2a4faddef5a9e544288ac | 08d09cf5ba041c9c5c3860200b56bab66fd44a23 |
python/cpython | python__cpython-120569 | # file leak in PyUnstable_CopyPerfMapFile
# Bug report
### Bug description:
This leak was found by the Clang Static Analyzer:

### CPython versions tested on:
CPython main branch
### Operating systems tested on:
macOS
<!-- gh-linked-prs -->
### Linked PRs
* gh-120569
<!-- /gh-linked-prs -->
| 92cebaa4911786683e87841bf7788351e7595ac2 | b337aefd3e44f5c8e38cd282273359d07cce6126 |
python/cpython | python__cpython-120570 | # Clarify weekday return in calendar.monthrange docstring
# Documentation
```python
def weekday(year, month, day):
"""Return weekday (0-6 ~ Mon-Sun) for year, month (1-12), day (1-31)."""
if not datetime.MINYEAR <= year <= datetime.MAXYEAR:
year = 2000 + year % 400
return Day(datetime.date(year, month, day).weekday())
def monthrange(year, month):
"""Return weekday (0-6 ~ Mon-Sun) and number of days (28-31) for
year, month."""
if not 1 <= month <= 12:
raise IllegalMonthError(month)
day1 = weekday(year, month, 1)
ndays = mdays[month] + (month == FEBRUARY and isleap(year))
return day1, ndays
```
In [docs](https://docs.python.org/3/library/calendar.html) it is defined as: `Returns weekday of first day of the month and number of days in month, for the specified year and month.` which is clearer than what is in the code.
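A quick example of the two return values, using February 2024 (the month starts on a Thursday, where 0 = Monday, and it is a leap February):

```python
import calendar

first_weekday, days_in_month = calendar.monthrange(2024, 2)
print(first_weekday)   # 3  -> Thursday
print(days_in_month)   # 29 -> 2024 is a leap year
```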
<!-- gh-linked-prs -->
### Linked PRs
* gh-120570
* gh-120597
* gh-120598
<!-- /gh-linked-prs -->
| bd4516d9efee109dd3b02a3d60845f9053fc6718 | 4f59f8638267aa64ad2daa0111d8b7fdc2499834 |
python/cpython | python__cpython-120564 | # Consider marking ``zip64`` tests in ``test_zipimport`` as cpu-heavy
# Feature or enhancement
### Proposal:
Without ``testZip64`` and ``testZip64CruftAndComment`` test_zipimport takes about ~300ms:
```shell
./python -m test -q test_zipimport
Using random seed: 4154114933
0:00:00 load avg: 0.02 Run 1 test sequentially in a single process
== Tests result: SUCCESS ==
Total duration: 270 ms
Total tests: run=81 skipped=6
Total test files: run=1/1
Result: SUCCESS
```
With them, about ~35 seconds:
```shell
./python -m test -q test_zipimport
Using random seed: 3529335933
0:00:00 load avg: 0.01 Run 1 test sequentially in a single process
test_zipimport passed in 35.0 sec
== Tests result: SUCCESS ==
Total duration: 35.0 sec
Total tests: run=81 skipped=2
Total test files: run=1/1
Result: SUCCESS
```
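CPython's test suite gates cpu-heavy tests on regrtest's `cpu` resource (via `test.support.requires_resource('cpu')`). A minimal self-contained sketch of the same idea, using a hypothetical `RUN_CPU_HEAVY` environment variable in place of the resource machinery:

```python
import os
import unittest

# Hypothetical gate imitating regrtest's -u cpu resource flag: the slow
# zip64 tests only run when explicitly requested.
CPU_HEAVY = os.environ.get("RUN_CPU_HEAVY") == "1"


class Zip64Tests(unittest.TestCase):
    @unittest.skipUnless(CPU_HEAVY, "cpu-heavy zip64 test; enable explicitly")
    def test_zip64(self):
        # Placeholder for building and importing from a multi-GB zip archive.
        self.assertTrue(True)
```

By default the test is reported as skipped, so the ~35 s cost is only paid when the resource is enabled.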
### Has this already been discussed elsewhere?
This is a minor feature, which does not need previous discussion elsewhere
### Links to previous discussion of this feature:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-120564
<!-- /gh-linked-prs -->
| ac37a806018cc40fafebcd0fa90250c3e0261e0c | 35b16795d11cb50768ffad5fe8e61bdebde9b66a |
python/cpython | python__cpython-120545 | # Add `else: fail()` to test cases where exception is always expected
# Bug report
Most `except self.failureException` cases have an `else` branch where we can ensure that the exception really did happen:
https://github.com/python/cpython/blob/d4039d3f6f8cb7738c5cd272dde04171446dfd2b/Lib/test/test_unittest/test_case.py#L831-L837
But, some rare ones do not have them:
https://github.com/python/cpython/blob/d4039d3f6f8cb7738c5cd272dde04171446dfd2b/Lib/test/test_unittest/test_case.py#L1148-L1154
So, in theory they can just succeed and the `except` branch might never be called. This is a theoretical problem, but it is still a problem.
I propose to add `else` branches to these tests, just to be safe.
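A minimal illustration of the proposed pattern (the `PatternDemo` class is a hypothetical example, not code from the PR): the `else` branch makes the test fail loudly if the expected assertion error never occurs.

```python
import unittest


class PatternDemo(unittest.TestCase):
    def test_failure_is_actually_raised(self):
        try:
            self.assertEqual(1, 2)
        except self.failureException:
            pass  # expected: the inner assertion failed
        else:
            self.fail("assertEqual(1, 2) unexpectedly passed")
```

Without the `else`, a regression that made the inner assertion pass would go unnoticed.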
<!-- gh-linked-prs -->
### Linked PRs
* gh-120545
* gh-120546
* gh-120547
<!-- /gh-linked-prs -->
| 42ebdd83bb194f054fe5a10b3caa0c3a95be3679 | d4039d3f6f8cb7738c5cd272dde04171446dfd2b |
python/cpython | python__cpython-120543 | # Improve the "less" prompt in pydoc
In #65824 the "Help on ..." prefix was added to the "less" prompt in pydoc. It works well in the CLI or when you call `help()` with a string, but unfortunately it is not so good for `help()` used in the REPL with a non-string argument. For example, `help(str)` has a prompt starting with "Help on type ", and `help(str.upper)` -- "Help on method_descriptor ". In the latter case, the help text itself starts with "Help on method_descriptor:", but this is a different issue.
The proposed PR improves the prompt. It now uses the object's `__qualname__` or `__name__` if available (this covers classes, functions, methods, and many others), and falls back to "{typename} object" otherwise.
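A rough sketch of the described fallback (`describe_prompt` is a hypothetical helper name, not the actual patch, which modifies `pydoc` internals):

```python
def describe_prompt(obj):
    # Prefer __qualname__, then __name__; fall back to "<typename> object".
    name = getattr(obj, "__qualname__", None) or getattr(obj, "__name__", None)
    if name is None:
        name = f"{type(obj).__name__} object"
    return f"Help on {name}"


print(describe_prompt(str))        # Help on str
print(describe_prompt(str.upper))  # Help on str.upper
print(describe_prompt(42))         # Help on int object
```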
<!-- gh-linked-prs -->
### Linked PRs
* gh-120543
* gh-120562
<!-- /gh-linked-prs -->
| 31d1d72d7e24e0427df70f7dd14b9baff28a4f89 | 9e0b11eb21930b7b8e4a396200a921e9985cfca4 |