| repo | instance_id | problem_statement | merge_commit | base_commit |
|---|---|---|---|---|
python/cpython | python__cpython-110919 | # regrtest: allow to intermix --match and --ignore options
# Feature or enhancement
Currently you can use `--match` and `--ignore` options for positive and negative filtering of test cases by name. But negative patterns always win. So you can select a class of tests and then exclude some tests from it, but you cannot exclude a class of tests except some tests.
Many programs that have similar options (sometimes named `--include`/`--exclude`) use a different algorithm: they apply the rules in order, and the last rule wins. I propose implementing this in Python's regrtest too. Examples:
Run `FileTests` tests in `test_os`, excluding `test_write`:
```
./python -m test test_os -m FileTests -i test_write
```
Run all tests in `test_os`, excluding `FileTests` tests, but including `test_write`:
```
./python -m test test_os -i FileTests -m test_write
```
And, of course, any combination is valid. This also applies to the options that read patterns from files: `--matchfile` and `--ignorefile`. The implementation is actually somewhat simpler than the current one.
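The proposed last-rule-wins semantics can be sketched in a few lines. This is a minimal illustration (the function name is hypothetical, and regrtest's real matching also handles test ids and substring matches, which are omitted here):

```python
from fnmatch import fnmatchcase

def make_test_filter(rules):
    """rules: list of (pattern, include) pairs in command-line order.

    The last matching rule wins. If no rule matches, the default is to
    include the test, unless the first rule is an include rule
    (mirroring how a leading -m narrows the selection).
    """
    default = not (rules and rules[0][1])

    def accept(test_name):
        result = default
        for pattern, include in rules:
            if fnmatchcase(test_name, pattern):
                result = include
        return result

    return accept

# `-i FileTests -m test_write`: exclude a class except some tests.
f = make_test_filter([("*FileTests*", False), ("*test_write*", True)])
assert not f("FileTests.test_read")
assert f("FileTests.test_write")
assert f("OtherTests.test_foo")
```

The same helper covers the opposite order (`-m FileTests -i test_write`), where the leading include rule flips the default to "exclude".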
<!-- gh-linked-prs -->
### Linked PRs
* gh-110919
* gh-111167
* gh-111168
* gh-116718
* gh-116726
* gh-116727
<!-- /gh-linked-prs -->
| 9a1fe09622cd0f1e24c2ba5335c94c5d70306fd0 | b578e51f026a45576930816d6784697192ed472e |
python/cpython | python__cpython-111007 | # Windows: WindowsConsoleIO produces mojibake replacement characters
# Bug report
### Bug description:
Hi! The following code reliably produces some unicode [replacement characters �](https://www.compart.com/en/unicode/U+FFFD), on Windows, always in the same location. Works fine on Linux.
This report is a follow-up to this other one: https://github.com/python/cpython/issues/82052
[A fix was already attempted](https://github.com/python/cpython/pull/101103/files#diff-463599eb2f67c89107a6fc2431d387fef7fdbb8c860b2c424477ca9b9794282cR999), but as you can see, there are still some cases uncovered.
### Example 1
```python
print('é'*20001)
```

### Example 2
This is an attempt at making a shorter example, and a bit of a stretch goal.
```sh
python -c "import sys;[sys.stdout.buffer.raw.write(b) for b in [b'\xc3', b'\xa9',b'\xc3\xa9']]"
```
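The mechanism can be illustrated in pure Python: if the UTF-8 byte stream is cut at a fixed-size chunk boundary (32767 below is only an assumed write-chunk size, not necessarily what the console layer uses), a two-byte sequence can be split in half, and decoding each chunk independently produces U+FFFD, while an incremental decoder does not:

```python
import codecs

data = ('é' * 20001).encode('utf-8')   # 40002 bytes, every char is 2 bytes
CHUNK = 32767                          # assumed chunk size; odd, so the cut
                                       # lands in the middle of a character
first, rest = data[:CHUNK], data[CHUNK:]

# Decoding each chunk on its own (as a naive console layer might) corrupts
# the character split across the boundary:
broken = first.decode('utf-8', errors='replace')
assert broken.endswith('\ufffd')

# An incremental decoder carries the dangling byte across the boundary:
dec = codecs.getincrementaldecoder('utf-8')()
ok = dec.decode(first) + dec.decode(rest, final=True)
assert ok == 'é' * 20001
```

This is why replacement characters show up "always in the same location": the corruption happens wherever a multibyte sequence straddles a chunk boundary.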

### CPython versions tested on:
3.10, 3.12
### Operating systems tested on:
Linux, Windows
<!-- gh-linked-prs -->
### Linked PRs
* gh-111007
* gh-111108
* gh-111109
<!-- /gh-linked-prs -->
| 11312eae6ec3acf51aacafce4cb6d1a5edfd5f2e | b60f05870816019cfd9b2f7d104364613e66fc78 |
python/cpython | python__cpython-110921 | # Regression in `MemoryError` displaying (`_PyErr_Display` fallback to C impl of `print_exception`)
# Bug report
### Bug description:
After e7331365b488382d906ce6733ab1349ded49c928:
```python
>>> raise MemoryError()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
object address : 0x7fd3b6037540
object refcount : 3
object type : 0x556d6e04cf40
object type name: MemoryError
object repr : MemoryError()
lost sys.stderr
```
cc @pablogsal
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-110921
<!-- /gh-linked-prs -->
| b3c9faf056e7d642785a8cfd53d1184b37a74a69 | bad7a35055dbe9e6297110eb8c72eb8edfefd42d |
python/cpython | python__cpython-110908 | # AC: it is allowed to use `*` after vararg definition
# Bug report
Right now this code is allowed:
```
/*[clinic input]
my_test_func
    pos_arg: object
    *args: object
    *
    kw_arg: object
[clinic start generated code]*/
```
The question is: why? We try to stick to the same syntax CPython has, so there's no need to allow code like this. Instead, we should raise something like: `'my_test_func' uses '*' more than once`.
Proper way:
```
/*[clinic input]
my_test_func
    pos_arg: object
    *args: object
    kw_arg: object
[clinic start generated code]*/
```
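The check itself is simple. Below is a sketch of the kind of validation the DSL parser could do; it is deliberately simplified (the real clinic parser is a state machine over parsed parameters, not raw lines, and this sketch also treats `* [from ...]` as a star, which the issue leaves for separate discussion):

```python
def check_star_usage(func_name, lines):
    """Reject a second '*' (bare star or vararg) in a clinic signature sketch."""
    seen_star = False
    for line in lines:
        stripped = line.strip()
        # Both a bare '*' and a '*args'-style vararg count as a star marker.
        is_star = stripped == '*' or (
            stripped.startswith('*') and not stripped.startswith('**')
        )
        if is_star:
            if seen_star:
                raise SyntaxError(f"{func_name!r} uses '*' more than once")
            seen_star = True

# The valid signature passes:
check_star_usage('my_test_func',
                 ['pos_arg: object', '*args: object', 'kw_arg: object'])
```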
One more important thing: we also have cases like `* [from ...]`:
```
/*[clinic input]
depr_star_pos2_len1_with_vararg
    a: object
    *vararg: object
    * [from 3.14]
    b: object
[clinic start generated code]*/
```
But, this needs additional discussion which we should do in a separate issue.
This one is just about the double `*` bug.
For now, I do not touch this branch of code.
Refs https://github.com/python/cpython/issues/110782
CC @JelleZijlstra
<!-- gh-linked-prs -->
### Linked PRs
* gh-110908
<!-- /gh-linked-prs -->
| bad7a35055dbe9e6297110eb8c72eb8edfefd42d | 14d2d1576d9301032a6a1f62caa347ff1685c872 |
python/cpython | python__cpython-110906 | # [Enum] minor fixes and cleanup
# Bug report
### Bug description:
The `_is_private` method has incorrect results on attributes starting with a triple underscore. Example:
```python
import enum
enum._is_private('X', '_X___test') # returns True
```
The relevant code is:
```python
def _is_private(cls_name, name):
    pattern = '_%s__' % (cls_name, )
    pat_len = len(pattern)
    if (
        len(name) > pat_len
        and name.startswith(pattern)
        and name[pat_len:pat_len+1] != ['_']
        and (name[-1] != '_' or name[-2] != '_')
    ):
        return True
    else:
        return False
```
The check `name[pat_len:pat_len+1] != ['_']` is always `True`, since `name[pat_len:pat_len+1]` is a `str` and `['_']` is a list, and a `str` never compares equal to a list.
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Windows
<!-- gh-linked-prs -->
### Linked PRs
* gh-110906
* gh-111042
* gh-111043
<!-- /gh-linked-prs -->
| a77180e6633a3aca4011b175c2ac65aa33795035 | f07ca27709855d4637b43bba23384cc795143ee3 |
python/cpython | python__cpython-111601 | # Asyncio stream doesn't handle exceptions in callback
# Bug report
### Bug description:
Consider the following server that always just crashes:
```python
import asyncio

async def handle_echo(reader, writer):
    raise Exception("I break everything")

async def main():
    server = await asyncio.start_server(
        handle_echo, '127.0.0.1', 8888)
    async with server:
        await server.serve_forever()

asyncio.run(main(), debug=True)
```
Together with the following client:
```python
import asyncio

async def tcp_echo_client(message):
    reader, writer = await asyncio.open_connection(
        '127.0.0.1', 8888)
    data = await reader.read(100)
    print(data)
    writer.close()
    await writer.wait_closed()
    print("closed")

asyncio.run(tcp_echo_client('Hello World!'), debug=True)
```
When running this in Python 3.8, the server prints a `Task exception was never retrieved` message with an exception. I guess this is acceptable. However, the connection is never closed, and the client is kept hanging. That should not happen in my opinion.
Furthermore, in newer Python versions, due to https://github.com/python/cpython/pull/96323, the internal task is being kept alive indefinitely. Therefore, the `Task exception was never retrieved` message is only printed when you kill the server. Otherwise, the failure is completely silent. This is extremely counter-intuitive behavior.
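Until the library closes the transport itself, a user-level workaround is to wrap the callback so the connection is closed even when the handler raises. `closing_handler` below is a hypothetical helper, not an asyncio API; with it, the client above gets EOF instead of hanging:

```python
import asyncio

def closing_handler(handler):
    """Wrap a client-connected callback so the writer is closed
    even when the handler raises."""
    async def wrapper(reader, writer):
        try:
            await handler(reader, writer)
        finally:
            writer.close()
            try:
                await writer.wait_closed()
            except Exception:
                pass  # the transport may already be gone
    return wrapper
```

The handler's exception still propagates (so it is still logged as an unretrieved task exception), but the peer is no longer left hanging.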
### CPython versions tested on:
3.11
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-111601
* gh-111632
* gh-111634
<!-- /gh-linked-prs -->
| 229f44d353c71185414a072017f46f125676bdd6 | 794dff2fb1d9efe73a724640192c34b25f4fae85 |
python/cpython | python__cpython-110909 | # Legacy tracing & PY_UNWIND: arg of legacy tracer call should be NULL
# Bug report
### Bug description:
When using a legacy tracer with `PyEval_SetTrace` and a function exits with an exception, the arg of the tracer event `PyTrace_RETURN` should be NULL [docs](https://docs.python.org/3/c-api/init.html#c.PyTrace_RETURN). Currently in 3.12 (& 3.13), the arg is set to the exception value because the PY_UNWIND handler is forwarding its arg to the legacy tracer function.
### CPython versions tested on:
3.12
### Operating systems tested on:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-110909
<!-- /gh-linked-prs -->
| f4b5588bde656d8ad048b66a0be4cb5131f0d83f | 0887b9ce8b5b4f9ecdef014b9329da78a46c9f42 |
python/cpython | python__cpython-110887 | # BNF grammar notation is mentioned but not defined
# Documentation
BNF grammar notation is mentioned twice in python documentation:
1. https://docs.python.org/3.13/reference/introduction.html#notation
2. https://docs.python.org/3.13/reference/expressions.html
We should add a link to the wikipedia [article](https://en.wikipedia.org/wiki/Backus%E2%80%93Naur_form) explaining what BNF means the first time it is mentioned.
<!-- gh-linked-prs -->
### Linked PRs
* gh-110887
* gh-110900
* gh-110901
<!-- /gh-linked-prs -->
| 42a5d21d465ffcac792685b6127a6e8e39dd6897 | 9608704cde4441c76c1b8b765e3aea072bca3b0d |
python/cpython | python__cpython-110943 | # The first attempt to handle the `format` property during logging.Formatter initialization removes the `.` dictionary from the `config`
# Bug report
### Bug description:
When the library tries to initialize a formatter and comes across the old `format` property, it falls back to an error handler; but before it does that, it pops the `.` dictionary from the `config`, making it impossible to process it later during the second call to `self.configure_custom(config)`.
https://github.com/python/cpython/blob/main/Lib/logging/config.py#L480
This is where `configure_custom` calls `props = config.pop('.', None)`, but it does that before `result = c(**kwargs)`, which throws an exception when it finds the `format` property.
```python
def configure_custom(self, config):
    """Configure an object with a user-supplied factory."""
    c = config.pop('()')
    if not callable(c):
        c = self.resolve(c)
    props = config.pop('.', None)  # <-- '.' gets removed on the first try and is gone on the second attempt
    # Check for valid identifiers
    kwargs = {k: config[k] for k in config if valid_ident(k)}
    result = c(**kwargs)  # <-- throws an error when the `format` property is used
    # props = config.pop('.', None)  # <-- this is where it probably needs to be called so that '.' remains in 'config'
    if props:
        for name, value in props.items():
            setattr(result, name, value)
    return result
```
The initialization then continues inside the `except` block, which calls `configure_custom` for the second time, but this time without the `.` in the `config`, so it is skipped.
https://github.com/python/cpython/blob/main/Lib/logging/config.py#L670
```python
def configure_formatter(self, config):
    """Configure a formatter from a dictionary."""
    if '()' in config:
        factory = config['()']  # for use in exception handler
        try:
            result = self.configure_custom(config)
        except TypeError as te:
            if "'format'" not in str(te):
                raise
            # Name of parameter changed from fmt to format.
            # Retry with old name.
            # This is so that code can be used with older Python versions
            # (e.g. by Django)
            config['fmt'] = config.pop('format')
            config['()'] = factory
            result = self.configure_custom(config)
```
I guess the function `configure_custom` should call `props = config.pop('.', None)` after `result = c(**kwargs)`, so that the `.` remains in the `config` for the second call in case an exception is thrown during the first try.
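A standalone sketch of that reordering is below. This is a stand-in for the `Lib/logging/config.py` code with the retry logic inlined and `valid_ident` simplified away; `FakeFormatter` in the usage note is a hypothetical factory used only to demonstrate the flow:

```python
def configure_custom_fixed(config):
    """Pop '.' only after construction succeeds, so a retry still sees it."""
    c = config.pop('()')
    kwargs = {k: v for k, v in config.items() if k != '.'}
    result = c(**kwargs)              # may raise TypeError on 'format'
    props = config.pop('.', None)     # moved: only consumed on success
    if props:
        for name, value in props.items():
            setattr(result, name, value)
    return result

def configure_formatter_fixed(config):
    factory = config['()']
    try:
        return configure_custom_fixed(config)
    except TypeError as te:
        if "'format'" not in str(te):
            raise
        # Parameter renamed from fmt to format; retry with the old name.
        config['fmt'] = config.pop('format')
        config['()'] = factory
        return configure_custom_fixed(config)
```

With this ordering, a config that triggers the `format` -> `fmt` retry still gets its `.` properties applied on the second attempt.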
## Example
This config won't initialize the `custom_property` of `MyFormatter`:
```python
"custom_formatter": {
    "()": MyFormatter,
    "style": "{",
    "datefmt": "%Y-%m-%d %H:%M:%S",
    "format": "<custom-format>",  # <-- when removed or changed to `fmt`, the '.' works
    ".": {
        "custom_property": "value"
    }
}
```
The formatter is implemented like this:
```python
class MyFormatter(logging.Formatter):
    custom_property: str = "."

    def format(self, record: logging.LogRecord) -> str:
        # ...
        return super().format(record)
```
### CPython versions tested on:
3.10
### Operating systems tested on:
Windows
<!-- gh-linked-prs -->
### Linked PRs
* gh-110943
* gh-111911
* gh-111914
<!-- /gh-linked-prs -->
| a5f29c9faf046b9ef3e498a0bc63dbc29017b5e3 | 7d21e3d5ee9858aee570aa6c5b6a6e87d776f4b5 |
python/cpython | python__cpython-110868 | # Argument Clinic: vararg + kw-only crash
# Bug report
Originally found in https://github.com/python/cpython/issues/110782 by @mxschmitt and @JelleZijlstra
This is the reproducer:
```c
/*[clinic input]
null_or_tuple_for_varargs
    name: object
    *constraints: object
    covariant: bool = False
[clinic start generated code]*/
static PyObject *
null_or_tuple_for_varargs_impl(PyObject *module, PyObject *name,
                               PyObject *constraints, int covariant)
/*[clinic end generated code: output=a785b35421358983 input=017dc120cb6e651b]*/
{
    assert(name != NULL);
    assert(constraints != NULL);
    assert(covariant == 0 || covariant == 1);
    Py_RETURN_NONE;
}
```
When called with `name=...`, it crashes:
```python
>>> import _testclinic
>>> _testclinic.null_or_tuple_for_varargs('a')
>>> _testclinic.null_or_tuple_for_varargs(name='a')
Assertion failed: (constraints != NULL), function null_or_tuple_for_varargs_impl, file _testclinic.c, line 1148.
[1] 30808 abort ./python.exe
```
I have a PR ready with the fix.
CC @erlend-aasland as AC maintainer.
<!-- gh-linked-prs -->
### Linked PRs
* gh-110868
* gh-110922
<!-- /gh-linked-prs -->
| c2192a2bee17e2ce80c5af34410ccd0c8b6e08aa | db656aebc659e5023d004053db44031176bbe9f5 |
python/cpython | python__cpython-110848 | # threading Thread.join should call the OS join API
# Feature or enhancement
### Proposal:
`threading.Thread.join()` only waits for the CPython internals to wash their hands of the underlying thread. It doesn't actually wait for the OS thread itself to exit. In theory that happens rapidly, as its final internal code completes quickly, but we have no good way to determine when it has.
Why finally do this now? Now that we're encouraging people to notice and avoid threads existing when `os.fork` is called in 3.12, a use case has come up for deterministically knowing when the thread is done at the OS level, so that the code can proceed with `os.fork`.
https://github.com/python/cpython/pull/110510 could use this in an atfork before fork handler for example.
POSIX has `pthread_join`; we should be able to expose and use it via `_thread`. Windows presumably has an equivalent API.
### Has this already been discussed elsewhere?
No response given
### Links to previous discussion of this feature:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-110848
<!-- /gh-linked-prs -->
| c1db9606081bdbe0207f83a861a3c70c356d3704 | e2c097ebdee447ded1109f99a235e65aa3533bf8 |
python/cpython | python__cpython-110962 | # AIX fails to build _testcapi extension: Undefined symbol: .__atomic_fetch_or_8
# Bug report
### Bug description:
```
Modules/ld_so_aix gcc -pthread -bI:Modules/python.exp Modules/_testcapimodule.o Modules/_testcapi/vectorcall.o Modules/_testcapi/vectorcall_limited.o Modules/_testcapi/heaptype.o Modules/_testcapi/abstract.o Modules/_testcapi/unicode.o Modules/_testcapi/dict.o Modules/_testcapi/set.o Modules/_testcapi/getargs.o Modules/_testcapi/datetime.o Modules/_testcapi/docstring.o Modules/_testcapi/mem.o Modules/_testcapi/watchers.o Modules/_testcapi/long.o Modules/_testcapi/float.o Modules/_testcapi/structmember.o Modules/_testcapi/exceptions.o Modules/_testcapi/code.o Modules/_testcapi/buffer.o Modules/_testcapi/pyatomic.o Modules/_testcapi/pyos.o Modules/_testcapi/immortal.o Modules/_testcapi/heaptype_relative.o Modules/_testcapi/gc.o -o Modules/_testcapi.cpython-313.so
ld: 0711-327 WARNING: Entry point not found: PyInit__testcapi.cpython-313
ld: 0711-317 ERROR: Undefined symbol: .__atomic_fetch_or_8
ld: 0711-317 ERROR: Undefined symbol: .__atomic_fetch_and_8
ld: 0711-317 ERROR: Undefined symbol: .__atomic_load_8
ld: 0711-317 ERROR: Undefined symbol: .__atomic_store_8
ld: 0711-317 ERROR: Undefined symbol: .__atomic_exchange_8
ld: 0711-317 ERROR: Undefined symbol: .__atomic_compare_exchange_8
ld: 0711-317 ERROR: Undefined symbol: .__atomic_fetch_add_8
ld: 0711-345 Use the -bloadmap or -bnoquiet option to obtain more information.
collect2: error: ld returned 8 exit status
gmake: *** [Makefile:3159: Modules/_testcapi.cpython-313.so] Error 1
```
This is a regression as this issue was fixed through #109101 but later changes made to configure through #109344 broke it. LIBATOMIC in configure is replaced with LIBS but LIBS is never passed during the _testcapi module creation.
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Other
<!-- gh-linked-prs -->
### Linked PRs
* gh-110962
<!-- /gh-linked-prs -->
| 88bac5d5044e577825db1f9367af908dc9a3ad82 | 767f416feb551f495bacfff1e9ba1e6672c2f24e |
python/cpython | python__cpython-113298 | # test_sysconfig test_library fails on macOS framework installs, like the python.org installer
# Bug report
With the python.org 3.13.0a1 installer for macOS, the new `test_library` of `test_sysconfig` fails:
```
$ python3.13
Python 3.13.0a1 (v3.13.0a1:ad056f03ae, Oct 13 2023, 06:35:05) [Clang 13.0.0 (clang-1300.0.29.30)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
$ python3.13 -m test -w test_sysconfig
Using random seed 1311857515
Raised RLIMIT_NOFILE: 256 -> 1024
0:00:00 load avg: 1.72 Run 1 test sequentially
0:00:00 load avg: 1.72 [1/1] test_sysconfig
test test_sysconfig failed -- multiple errors occurred; run in verbose mode for details
test_sysconfig failed (2 failures)
== Tests result: FAILURE ==
1 test failed:
test_sysconfig
0:00:00 load avg: 1.72 Re-running 1 failed tests in verbose mode in subprocesses
0:00:00 load avg: 1.72 Run 1 test in parallel using 1 worker process
0:00:00 load avg: 1.72 [1/1/1] test_sysconfig failed (2 failures)
Re-running test_sysconfig in verbose mode (matching: test_library, test_user_similar)
test_library (test.test_sysconfig.TestSysConfig.test_library) ... FAIL
test_user_similar (test.test_sysconfig.TestSysConfig.test_user_similar) ... FAIL
======================================================================
FAIL: test_library (test.test_sysconfig.TestSysConfig.test_library)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/3.13.0a1_11/lib/python3.13/test/test_sysconfig.py", line 416, in test_library
self.assertTrue(ldlibrary.startswith(f'libpython{major}.{minor}'))
AssertionError: False is not true
======================================================================
FAIL: test_user_similar (test.test_sysconfig.TestSysConfig.test_user_similar)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/3.13.0a1_11/lib/python3.13/test/test_sysconfig.py", line 385, in test_user_similar
self.assertEqual(user_path, expected)
AssertionError: '/Users/nad/Library/Python/3.13/lib/python3.13' != '/Library/Frameworks/Python.framework/Versions/3.13.0a1_11/lib/python3.13'
- /Users/nad/Library/Python/3.13/lib/python3.13
+ /Library/Frameworks/Python.framework/Versions/3.13.0a1_11/lib/python3.13
----------------------------------------------------------------------
Ran 2 tests in 0.001s
FAILED (failures=2)
```
<!-- gh-linked-prs -->
### Linked PRs
* gh-113298
<!-- /gh-linked-prs -->
| bab0758ea4a1d4666a973ae2d65f21a09e4478ba | 50b093f5c7060c0b44c264808411346cee7becf0 |
python/cpython | python__cpython-112828 | # Assertion failure in test_capi test_alignof_max_align_t possible with macOS universal2 (multi-arch) builds
macOS supports multi-architecture binaries (fat binaries) and CPython has long supported building them. Current releases of macOS support two system architectures: legacy x86_64 (for Intel Macs) and arm64 (for Apple Silicon Macs). The CPython ./configure supports building fat binaries containing both archs via the ``--with-universal-archs=universal2`` option. To do so, it makes use of the multi-arch support built into the Apple-supplied build tools (Xcode or Command Line Tools), which allows multi-arch compiles and links to happen via single calls (by adding `-arch` flags to the calls to compilers et al).
However, because configure's autoconf tests are only run in the architecture of the build machine, different results can be obtained depending on whether the ``universal2`` build machine is an Intel or an Apple Silicon Mac. In particular, there are differences in the autoconf generated `pyconfig.h` as shown here with modified file names for clarity; in either case, there is only one `pyconfig.h` file that will be generated and used regardless of which arch the fat binaries are run on:
```
$ diff pyconfig_intel.h pyconfig_arm.h
24c24
< #define ALIGNOF_MAX_ALIGN_T 16
---
> #define ALIGNOF_MAX_ALIGN_T 8
439c439
< #define HAVE_GCC_ASM_FOR_X64 1
---
> /* #undef HAVE_GCC_ASM_FOR_X64 */
443c443
< #define HAVE_GCC_ASM_FOR_X87 1
---
> /* #undef HAVE_GCC_ASM_FOR_X87 */
1607c1607
< #define PY_SUPPORT_TIER 1
---
> #define PY_SUPPORT_TIER 2
1656c1656
< #define SIZEOF_LONG_DOUBLE 16
---
> #define SIZEOF_LONG_DOUBLE 8
```
The difference in `ALIGNOF_MAX_ALIGN_T` between the two archs leads to the following test failure when a universal2 binary is built on an Apple Silicon Mac but executed on an Intel Mac:
```
test_alignof_max_align_t (test.test_capi.test_misc.Test_testcapi.test_alignof_max_align_t) ... Assertion failed: (ALIGNOF_MAX_ALIGN_T >= _Alignof(long double)), function test_alignof_max_align_t, file heaptype_relative.c, line 309.
Fatal Python error: Aborted
```
The test does not crash in the opposite case: when built on an Intel Mac but run on an Apple Silicon Mac, because in this case 16 > 8.
This issue was noticed while testing changes to the process we use to build python.org python for macOS installers. Up to now, we have been using Intel Macs to build all installers but, when testing building installers on Apple Silicon Macs, the crash of `test_capi` was seen when running the resultant `universal2` installer on Intel Macs.
It's not clear what is the best way to resolve this and, more importantly, whether there are other kinds of similar problems not yet discovered. Probably the safest approach would be to build each architecture separately on its native arch and ``lipo`` the resultant single-arch binaries into fat files. And/or it may be better to provide separate copies of arch-sensitive header files like pyconfig.h in separate directories and have the interpreter point to the correct one at run time for extension module builds. Or, in this case, perhaps adding some conditional code to force the sizes to be the larger value in the case of macOS universal2 builds *might* be enough this time?
<!-- gh-linked-prs -->
### Linked PRs
* gh-112828
* gh-112864
<!-- /gh-linked-prs -->
| 15a80b15af9a0b0ebe6bd538a1919712ce7d4ef9 | 4ac1e8fb25c5c0e1da61784281ab878db671761b |
python/cpython | python__cpython-110816 | # PyArg_ParseTupleAndKeywords() and non-ASCII keyword names
Most C strings in the C API are assumed to be UTF-8 encoded. PyArg_ParseTupleAndKeywords() mostly works with non-ASCII keyword names, as they are UTF-8 encoded. There is one exception: when you pass an argument by keyword with an invalid non-ASCII name to a function that has an optional parameter with a non-ASCII, UTF-8 encoded name. In this case you get a crash in the debug build.
It was caused by a combination of f4934ea77da38516731a75fbf9458b248d26dd81 and a83a6a3275f7dc748db3a237bbf4b05fcf76a85f (bpo-28701, #72887). Before these changes you simply got an inaccurate or even wrong error message.
Examples:
1. Parameters: "ä"; keyword arguments: "ë"
   * Old behavior: TypeError "'ë' is an invalid keyword argument for this function"
   * Current behavior: crash
   * Expected behavior: TypeError "'ë' is an invalid keyword argument for this function"
2. Parameters: "ä"; keyword arguments: "ä"
   * Old behavior: TypeError "invalid keyword argument for this function"
   * Current behavior: crash
   * Expected behavior: TypeError "'ä' is an invalid keyword argument for this function"
3. Parameters: "ä", "ë"; keyword arguments: "ä", "ë"
   * Old behavior: TypeError "'ë' is an invalid keyword argument for this function"
   * Current behavior: crash
   * Expected behavior: TypeError "'ä' is an invalid keyword argument for this function"
In case 1 the pre-bpo-28701 behavior was correct; in case 2 it was correct but not precise (it failed to find the name of the invalid keyword argument); in case 3 it was wrong (it found the wrong name). In all cases there is now a crash.
<!-- gh-linked-prs -->
### Linked PRs
* gh-110816
* gh-110817
* gh-110825
<!-- /gh-linked-prs -->
| 7284e0ef84e53f80b2e60c3f51e3467d67a275f3 | ce298a1c1566467e7fd459c8f61478a26f42833e |
python/cpython | python__cpython-110775 | # Allow the repl to show source code and complete tracebacks
Currently the REPL doesn't show traceback source lines or augmented error information for source that is typed in the REPL itself:
```
>>> def f(x):
...     1/x
...
>>> f(0)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "<stdin>", line 2, in f
ZeroDivisionError: division by zero
```
To improve the user experience, we want the REPL to be able to treat source lines that were typed directly like any other source and show proper tracebacks:
```
>>> def f(x):
...     1/x
...
>>> f(0)
Traceback (most recent call last):
  REPL, line 1, in <module>
    f(0)
  REPL, line 2, in f
    1/x
    ~^~
ZeroDivisionError: division by zero
```
<!-- gh-linked-prs -->
### Linked PRs
* gh-110775
* gh-110814
<!-- /gh-linked-prs -->
| e1d8c65e1df990ef8d61b8912742e1a021395e78 | 898f531996f2c5399b13811682c578c4fd08afaa |
python/cpython | python__cpython-110813 | # It's unclear how to set __weaklistoffset__ Python 3.11 and below
PyType_Slot can’t set tp_weaklistoffset; the 3.12 documentation suggests using Py_TPFLAGS_MANAGED_WEAKREF: [Type Objects](https://docs.python.org/3/c-api/type.html?highlight=pytype_slot#c.PyType_Slot). But this tp flag is only available in 3.12. So how would one use it in 3.12 while also using multiphase init with tp_weaklistoffset for older Pythons?
It's unclear in the current docs. IMO the [3.11](https://docs.python.org/3/c-api/type.html?c.PyType_Slot) version was clear, but that hint is now effectively soft-deprecated.
<!-- gh-linked-prs -->
### Linked PRs
* gh-110813
* gh-110823
<!-- /gh-linked-prs -->
| 2ab34f0e425d90d0a153104ef2f4343dce2a414d | 11bbe6c6e13980ef9fe2bc4c39b9642524062c4e |
python/cpython | python__cpython-110797 | # Intermittent failure in ` test_sys.SysModuleTest.test_current_exceptions`
# Bug report
### Bug description:
Example: https://github.com/python/cpython/actions/runs/6502915514/job/17662666558?pr=110794
The code in question:
https://github.com/python/cpython/blob/b883cad06b12443014d57dcebd42d55f559b18f4/Lib/test/test_sys.py#L507-L572
Originally added in 64366fa9b3ba71b8a503a8719eff433f4ea49eb9
I believe the failure is due to `sys._current_exceptions` being run before the exception is raised.
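The race can be avoided by having the worker signal only after the `except` block has been entered, and snapshotting only after that signal. A sketch of that synchronization (not the actual test code):

```python
import sys
import threading

entered = threading.Event()   # set once the except block is live
release = threading.Event()   # lets the worker finish afterwards

def worker():
    try:
        raise ValueError('boom')
    except ValueError:
        entered.set()
        release.wait()        # hold the exception live until we snapshot

t = threading.Thread(target=worker)
t.start()
entered.wait()                # only snapshot after the raise has happened
snapshot = sys._current_exceptions()
assert t.ident in snapshot    # the worker's exception is now visible
release.set()
t.join()
```

Note that the shape of the values in `sys._current_exceptions()` differs between versions (exc_info tuples before 3.12, exception instances after), so the sketch only checks for the thread's presence.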
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-110797
<!-- /gh-linked-prs -->
| df24153f650f39ca82db143cc4a58062412a0896 | 8ed338ab44daf3cd681d73156fd1d9b6bed53795 |
python/cpython | python__cpython-110791 | # Ignore `BrokenPipeError` when piping the output of the `sysconfig` CLI
# Feature or enhancement
### Proposal:
I very often pipe the output of the `sysconfig` CLI to the [head](https://man.archlinux.org/man/head.1) utility from [coreutils](https://www.gnu.org/software/coreutils/), so that I can check only the "paths" section of the output. This results in a `BrokenPipeError` exception, as we keep trying to write after the pipe closes.
```
$ python -m sysconfig | head -n 14
Platform: "linux-x86_64"
Python version: "3.11"
Current installation scheme: "posix_prefix"
Paths:
        data = "/usr"
        include = "/usr/include/python3.11"
        platinclude = "/usr/include/python3.11"
        platlib = "/usr/lib/python3.11/site-packages"
        platstdlib = "/usr/lib/python3.11"
        purelib = "/usr/lib/python3.11/site-packages"
        scripts = "/usr/bin"
        stdlib = "/usr/lib/python3.11"
Traceback (most recent call last):
  File "<frozen runpy>", line 198, in _run_module_as_main
  File "<frozen runpy>", line 88, in _run_code
  File "/usr/lib/python3.11/sysconfig.py", line 851, in <module>
    _main()
  File "/usr/lib/python3.11/sysconfig.py", line 847, in _main
    _print_dict('Variables', get_config_vars())
  File "/usr/lib/python3.11/sysconfig.py", line 833, in _print_dict
    print(f'\t{key} = "{value}"')
BrokenPipeError: [Errno 32] Broken pipe
```
It'd be nice if we could suppress this error.
### Has this already been discussed elsewhere?
This is a minor feature, which does not need previous discussion elsewhere
### Links to previous discussion of this feature:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-110791
<!-- /gh-linked-prs -->
| 6478dea3c8aca7147d013d6d7f5bf7805b300589 | b883cad06b12443014d57dcebd42d55f559b18f4 |
python/cpython | python__cpython-110784 | # [Regression] TypeVar crashes when name is specified via keyword parameters
# Crash report
### What happened?
```python
from typing import TypeVar
TypeVar(name="Handler")
```
Good: 3.11.X
Bad: 3.12.0
Downstream issue: https://github.com/jfhbrook/pyee/issues/134
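Passing the name positionally avoids the crash on 3.12 and works on older versions too, which is the straightforward downstream workaround:

```python
from typing import TypeVar

# Positional name: fine on 3.11 and 3.12; only name= as a keyword crashes 3.12.0.
Handler = TypeVar("Handler")
assert Handler.__name__ == "Handler"
```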
### CPython versions tested on:
3.12
### Operating systems tested on:
Linux, macOS
### Output from running 'python -VV' on the command line:
Python 3.12.0 (main, Oct 12 2023, 00:42:11) [GCC 12.2.0]
<!-- gh-linked-prs -->
### Linked PRs
* gh-110784
* gh-110787
<!-- /gh-linked-prs -->
| 6a4528d70c8435d4403e09937068a446f35a78ac | 02d26c4bef3ad0f9c97e47993a7fa67898842e5c |
python/cpython | python__cpython-110776 | # Support setting the loop_factory in IsolatedAsyncioTestCase
### Proposal:
I want to be able to create utility subclasses of `IsolatedAsyncioTestCase` that use either uvloop or a specific event loop via the `asyncio.Runner(..., loop_factory=...)` kwarg.
Another advantage of setting `loop_factory` is that it would allow using `IsolatedAsyncioTestCase` without `def tearDownModule(): asyncio.set_event_loop_policy(None)`, because `asyncio.Runner` does not call `asyncio.get_event_loop_policy()` when called with a `loop_factory`. Ideally it would look something like:
```python
class DefaultIsolatedAsyncioTestCase:
    """
    Use the most efficient event loop regardless of what has been configured
    with asyncio.set_event_loop_policy; does not require
    `def tearDownModule(): asyncio.set_event_loop_policy(None)`.
    """
    loop_factory = asyncio.EventLoop
```
or
```python
class UvloopIsolatedAsyncioTestCase:
    """
    Runs tests on uvloop.
    """
    loop_factory = uvloop.new_event_loop
```
### Has this already been discussed elsewhere?
I have already discussed this feature proposal on Discourse
### Links to previous discussion of this feature:
https://discuss.python.org/t/support-setting-the-loop-factory-in-isolatedasynciotestcase/36027
<!-- gh-linked-prs -->
### Linked PRs
* gh-110776
<!-- /gh-linked-prs -->
| 770530679e89b06f33655b34a8c466ed906842fe | f6a02327b5fcdc10df855985ca9d2d9dc2a0a46f |
python/cpython | python__cpython-110773 | # Decompose asyncio.run_forever() into parts to improve integration with other event loops
# Feature or enhancement
### Proposal:
This is an admittedly niche use case, but it's one that we've hit in the development of [Toga](https://github.com/beeware/toga).
GUI toolkits all have event loops. If you want to use Python `asyncio` calls in a GUI app, it is necessary to make the GUI's event loop and Python's asyncio event loop co-exist.
This usually means writing a subclass of EventLoop that overrides `run_forever()` in some way. Either the inner `while True: _run_once()` loop of `run_forever()` needs to be modified to integrate the equivalent "run once" of the GUI's event loop; or the `while True` loop needs to be subverted entirely and `_run_once()` needs to be invoked regularly on the GUI's event loop.
However, the implementation of `run_forever()` makes this difficult to do without duplicating significant portions of the implementation of `run_forever()`. For example, [Toga's Winforms asyncio integration](https://github.com/beeware/toga/blob/main/winforms/src/toga_winforms/libs/proactor.py#L12) involves duplicating the [startup](https://github.com/beeware/toga/blob/main/winforms/src/toga_winforms/libs/proactor.py#L38) and [shutdown](https://github.com/beeware/toga/blob/main/winforms/src/toga_winforms/libs/proactor.py#L95) of `run_forever()`
The fundamental structure of `run_forever()` is:
```python
def run_forever():
try:
run_forever_setup()
while True:
run_once()
finally:
run_forever_cleanup()
```
I propose that the implementation of `run_forever()` be decomposed into exactly these parts - an initial setup, the actual loop, and a post-loop cleanup. This does not affect the operation of asyncio itself, but it would allow Toga (and any other integrated asyncio loop) to automatically inherit any improvements that are made to the core event loop.
It is worth noting that CPython's own Winforms integration has a similar need. The implementation of [`run_forever()` in `ProactorEventLoop()`](https://github.com/python/cpython/blob/main/Lib/asyncio/windows_events.py#L317) needs to include additional setup and additional cleanup calls. Decomposing the base event loop would allow this additional setup and cleanup to be explicit.
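A toy sketch of what the decomposition enables (the hook names `_run_forever_setup`/`_run_forever_cleanup` come from the proposal above and do not exist in asyncio today): `run_forever()` keeps its current behaviour, while a GUI integration reuses the same pieces and lets the GUI's own timer drive `_run_once()`.

```python
# Hypothetical sketch of the proposed split; none of these hooks exist in
# asyncio today -- the names are taken from the proposal above.
class DecomposedLoop:
    def __init__(self):
        self.trace = []
        self._stopping = False

    def _run_forever_setup(self):
        self.trace.append("setup")     # thread checks, signal setup, ...

    def _run_once(self):
        self.trace.append("tick")      # one iteration of the event loop

    def _run_forever_cleanup(self):
        self.trace.append("cleanup")   # restore state, clear running loop

    def run_forever(self):
        # asyncio's own run_forever() would keep its current shape...
        try:
            self._run_forever_setup()
            while not self._stopping:
                self._run_once()
                self._stopping = True  # stand-in for a stop request
        finally:
            self._run_forever_cleanup()

# ...while a GUI integration calls the pieces directly and lets the GUI's
# event loop invoke _run_once() on a timer instead of the while-loop:
gui_loop = DecomposedLoop()
gui_loop._run_forever_setup()
for _ in range(3):                     # pretend: GUI timer fires 3 times
    gui_loop._run_once()
gui_loop._run_forever_cleanup()
assert gui_loop.trace == ["setup", "tick", "tick", "tick", "cleanup"]
```

The point is that the startup and shutdown logic stay in one place, so improvements to them are inherited by every integration for free.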
### Has this already been discussed elsewhere?
This is a minor feature, which does not need previous discussion elsewhere
### Links to previous discussion of this feature:
Informal in-person discussion at CPython core team sprint.
<!-- gh-linked-prs -->
### Linked PRs
* gh-110773
<!-- /gh-linked-prs -->
| a7e2a10a85bb597d3bb8f9303214bd0524fa54c3 | 0ed2329a1627fc8ae97b009114cd960c25567f75 |
python/cpython | python__cpython-110753 | # Assertion failure in instrumentation during interpreter finalization
# Bug report
### Bug description:
```python
import sys
def f(*args):
pass
def bar():
yield 42
sys.settrace(f)
g = bar()
next(g)
```
The code above will trigger an assertion added in #109846.
```
python: Python/instrumentation.c:1576: _Py_Instrument: Assertion `(interp->ceval.eval_breaker & ~_PY_EVAL_EVENTS_MASK) == 0 || instrumentation_cross_checks(interp, code)' failed.
Aborted
```
The reason is that during the interpreter finalization, the global event monitors are reset to 0. However, the garbage collection triggered a send to the generator `bar` which then triggered `_Py_Instrument`.
`is_version_up_to_date` is true because the code was instrumented the last time `eval_breaker` changed. However, `instrumentation_cross_checks` would fail, because `interp->monitors` is reset to `0`.
Notice that in the code example above, if the trace function `f` returns itself rather than `None`, the assertion failure won't trigger because the generator would not be garbage collected - this might be an instrumentation bug that made the generator uncollectable (I've confirmed that `_PyGen_Finalize` and `gen_close()` never executed). However, I did not connect the dots so I don't know why that did not work.
I've made a simple patch in the PR, where I reset the `interp->ceval.eval_breaker = 0` in `interpreter_clear`. It's kind of reasonable but I'm not knowledgeable enough to be confident that this is the/a correct fix.
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-110753
<!-- /gh-linked-prs -->
| 1e3460d9faaffb35b3c6175c666b1f45aea2c1d8 | b6000d287407cbccfbb1157dc1fc6128497badc7 |
python/cpython | python__cpython-110750 | # WASI build is broken on certain environments
# Bug report
### Bug description:
Building CPython main for WASI in `quay.io/tiran/cpythonbuild:emsdk3` results in:
```
../../Parser/tokenizer/file_tokenizer.c:156:9: error: implicit declaration of function 'lseek' is invalid in C99 [-Werror,-Wimplicit-function-declaration]
lseek(fd, (off_t)(pos > 0 ? pos - 1 : pos), SEEK_SET) == (off_t)-1) {
^
../../Parser/tokenizer/file_tokenizer.c:156:9: note: did you mean 'fseek'?
/opt/wasi-sdk/bin/../share/wasi-sysroot/include/stdio.h:95:5: note: 'fseek' declared here
int fseek(FILE *, long, int);
^
../../Parser/tokenizer/file_tokenizer.c:395:12: error: implicit declaration of function 'read' is invalid in C99 [-Werror,-Wimplicit-function-declaration]
return read(b.fd, (void *)buf, size);
^
../../Parser/tokenizer/file_tokenizer.c:395:12: note: did you mean 'fread'?
/opt/wasi-sdk/bin/../share/wasi-sysroot/include/stdio.h:102:8: note: 'fread' declared here
size_t fread(void *__restrict, size_t, size_t, FILE *__restrict);
^
2 errors generated.
```
This seems to be due to GH-110684, which moved the `unistd.h` import to the top during the refactoring. The import is now happening before `pyconfig.h` is included, so `HAVE_UNISTD_H` may be undefined.
Old code:
https://github.com/python/cpython/blob/eb50cd37eac47dd4dc71ab42d0582dfb6eac4515/Parser/tokenizer.c#L4-L12
New code:
https://github.com/python/cpython/blob/01481f2dc13341c84b64d6dffc08ffed022712a6/Parser/tokenizer/file_tokenizer.c#L1-L9
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-110750
<!-- /gh-linked-prs -->
| 23645420dcc4f3b7b2ec4045ef6ac126c37a98c2 | 5257ade0bc9b53471227d0e67999df7c8ad633a7 |
python/cpython | python__cpython-111236 | # Improve markup in `Doc/library/tkinter.ttk.rst`
# Documentation
In the `tkinter` docs there are a few methods that include sequences of valid options/values, e.g. https://docs.python.org/3/library/tkinter.ttk.html#tkinter.ttk.Treeview.column
These should use appropriate markup (like ` ``...`` ` or `*...*`).
<!-- gh-linked-prs -->
### Linked PRs
* gh-111236
* gh-113193
* gh-113194
<!-- /gh-linked-prs -->
| 00d2b6d1fca91e1a83f7f99a370685b095ed4928 | d07483292b115a5a0e9b9b09f3ec1000ce879986 |
python/cpython | python__cpython-110880 | # pathlib.Path.read_text should include a newline argument
# Feature or enhancement
### Proposal:
Support for a `newline` argument was added to the `write_text` method in Python 3.10 (issue #67894).
I've been using this method but I've found a need for a `newline` method for `read_text` as well.
Here's the scenario where this would be handy (namely reading and writing the original newlines):
```python
from pathlib import Path
path = Path("my_file.txt")
contents = path.read_text(newline="")
contents = perform_some_operation_on(contents)
path.write_text(contents, newline="")
```
### Has this already been discussed elsewhere?
This is a minor feature, which does not need previous discussion elsewhere
### Links to previous discussion of this feature:
I have not found any prior discussions or mentions of `newline` argument being supported for `read_text` outside of the initial pathlib API discussion (https://github.com/python/cpython/issues/64417#issuecomment-1093641934).
<!-- gh-linked-prs -->
### Linked PRs
* gh-110880
<!-- /gh-linked-prs -->
| 9d70831cb7127855a8bf83b585525f13cffb9f59 | d857d5331a3326c77f867d837014d774841017a9 |
python/cpython | python__cpython-110735 | # Reduce overhead to run one iteration of the asyncio event loop
# Feature or enhancement
### Proposal:
Consider the following use case:
An asyncio event loop where the majority of the handles being run have a very small run time because they are decoding Bluetooth or Zigbee packets.
In this case `_run_once` becomes a bit of a bottleneck. In production, the `min` and `max` calls to find the timeout represent a significant portion of the run time. We can increase the number of packets that can be processed per second by switching the `min` and `max` to a simple `>` and `<` check.
```python
import heapq
import timeit
from asyncio import events
from asyncio.base_events import (
_MIN_CANCELLED_TIMER_HANDLES_FRACTION,
_MIN_SCHEDULED_TIMER_HANDLES,
MAXIMUM_SELECT_TIMEOUT,
BaseEventLoop,
_format_handle,
logger,
)
from time import monotonic
class FakeSelector:
def select(self, timeout):
"""Wait for I/O events."""
fake_selector = FakeSelector()
original_loop = BaseEventLoop()
original_loop._selector = fake_selector
original_loop._process_events = lambda events: None
timer = events.TimerHandle(
monotonic() + 100000, lambda: None, ("any",), original_loop, None
)
timer._scheduled = True
heapq.heappush(original_loop._scheduled, timer)
class OptimizedBaseEventLoop(BaseEventLoop):
def _run_once(self):
"""Run one full iteration of the event loop.
This calls all currently ready callbacks, polls for I/O,
schedules the resulting callbacks, and finally schedules
'call_later' callbacks.
"""
sched_count = len(self._scheduled)
if (
sched_count > _MIN_SCHEDULED_TIMER_HANDLES
and self._timer_cancelled_count / sched_count
> _MIN_CANCELLED_TIMER_HANDLES_FRACTION
):
# Remove delayed calls that were cancelled if their number
# is too high
new_scheduled = []
for handle in self._scheduled:
if handle._cancelled:
handle._scheduled = False
else:
new_scheduled.append(handle)
heapq.heapify(new_scheduled)
self._scheduled = new_scheduled
self._timer_cancelled_count = 0
else:
# Remove delayed calls that were cancelled from head of queue.
while self._scheduled and self._scheduled[0]._cancelled:
self._timer_cancelled_count -= 1
handle = heapq.heappop(self._scheduled)
handle._scheduled = False
timeout = None
if self._ready or self._stopping:
timeout = 0
elif self._scheduled:
# Compute the desired timeout.
timeout = self._scheduled[0]._when - self.time()
if timeout > MAXIMUM_SELECT_TIMEOUT:
timeout = MAXIMUM_SELECT_TIMEOUT
elif timeout < 0:
timeout = 0
event_list = self._selector.select(timeout)
self._process_events(event_list)
# Needed to break cycles when an exception occurs.
event_list = None
# Handle 'later' callbacks that are ready.
end_time = self.time() + self._clock_resolution
while self._scheduled:
handle = self._scheduled[0]
if handle._when >= end_time:
break
handle = heapq.heappop(self._scheduled)
handle._scheduled = False
self._ready.append(handle)
# This is the only place where callbacks are actually *called*.
# All other places just add them to ready.
# Note: We run all currently scheduled callbacks, but not any
# callbacks scheduled by callbacks run this time around --
# they will be run the next time (after another I/O poll).
# Use an idiom that is thread-safe without using locks.
ntodo = len(self._ready)
for i in range(ntodo):
handle = self._ready.popleft()
if handle._cancelled:
continue
if self._debug:
try:
self._current_handle = handle
t0 = self.time()
handle._run()
dt = self.time() - t0
if dt >= self.slow_callback_duration:
logger.warning(
"Executing %s took %.3f seconds", _format_handle(handle), dt
)
finally:
self._current_handle = None
else:
handle._run()
handle = None # Needed to break cycles when an exception occurs.
new_loop = OptimizedBaseEventLoop()
new_loop._selector = fake_selector
new_loop._process_events = lambda events: None
timer = events.TimerHandle(monotonic() + 100000, lambda: None, ("any",), new_loop, None)
timer._scheduled = True
heapq.heappush(new_loop._scheduled, timer)
new_time = timeit.timeit(
"loop._run_once()",
number=1000000,
globals={"loop": new_loop},
)
print("new: %s" % new_time)
original_time = timeit.timeit(
"loop._run_once()",
number=1000000,
globals={"loop": original_loop},
)
print("original: %s" % original_time)
```
```
new: 0.36033158400096
original: 0.4667800000170246
```
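The clamp rewrite at the heart of the change can be checked for equivalence in isolation (an illustration of the rewrite, not the benchmark itself; `MAXIMUM_SELECT_TIMEOUT` is `24 * 3600` in `asyncio.base_events`):

```python
MAXIMUM_SELECT_TIMEOUT = 24 * 3600  # value borrowed from asyncio.base_events

def clamp_with_builtins(timeout):
    # what the original _run_once() effectively computes
    return min(max(timeout, 0), MAXIMUM_SELECT_TIMEOUT)

def clamp_with_comparisons(timeout):
    # the proposed rewrite: two branches instead of two function calls
    if timeout > MAXIMUM_SELECT_TIMEOUT:
        return MAXIMUM_SELECT_TIMEOUT
    if timeout < 0:
        return 0
    return timeout

for t in (-1.5, 0.0, 0.25, 10**9):
    assert clamp_with_builtins(t) == clamp_with_comparisons(t)
```

Both forms clamp the timeout to `[0, MAXIMUM_SELECT_TIMEOUT]`; the branch version just avoids two builtin calls on every loop iteration, which is where the measured speedup comes from.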
### Has this already been discussed elsewhere?
This is a minor feature, which does not need previous discussion elsewhere
### Links to previous discussion of this feature:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-110735
<!-- /gh-linked-prs -->
| 3ac8e6955fedc35e8646bee7e04be6f20205cc1e | 41d8ec5a1bae1e5d4452da0a1a0649ace4ecb7b0 |
python/cpython | python__cpython-110702 | # Use the traceback module for PyErr_Display() and fallback to the C implementation
This is the first step towards reducing the code duplication and complexity, so that adding features to the default traceback becomes easier.
<!-- gh-linked-prs -->
### Linked PRs
* gh-110702
* gh-111905
* gh-113712
<!-- /gh-linked-prs -->
| e7331365b488382d906ce6733ab1349ded49c928 | 8c6c14b91bf95e04018151c53bce6e27e0e22447 |
python/cpython | python__cpython-110822 | # name error in zipfile.ZipInfo._decodeExtra()
https://github.com/python/cpython/blob/f27b83090701b9c215e0d65f1f924fb9330cb649/Lib/zipfile/__init__.py#L548 uses the `warnings` module, but AFAICS it's never imported.
<!-- gh-linked-prs -->
### Linked PRs
* gh-110822
* gh-110861
<!-- /gh-linked-prs -->
| 4110cfec1233139b4e7c63459ba465ab80554e3e | e2b3d831fd2824d8a5713e3ed2a64aad0fb6b62d |
python/cpython | python__cpython-110818 | # asyncio.wait_for() doesn't raise builtin TimeoutError
# Documentation
The documentation page about [asyncio.wait_for()](https://docs.python.org/3/library/asyncio-task.html#asyncio.wait_for) function says that:
> If a timeout occurs, it cancels the task and raises [TimeoutError](https://docs.python.org/3/library/exceptions.html#TimeoutError).
The *TimeoutError* hyperlink leads to the built-in exception "TimeoutError", which would lead any sane user to think that *asyncio.wait_for()* raises builtin TimeoutError.
However, it doesn't:
```python3
#!/usr/bin/env python3
import asyncio
async def main():
try:
await asyncio.wait_for(asyncio.sleep(100), timeout=1)
except TimeoutError:
print('timeout error')
except Exception as exc:
print('not a timeout error:', type(exc))
asyncio.run(main())
```
The above program prints:
```
not a timeout error: <class 'asyncio.exceptions.TimeoutError'>
```
Documentation should be changed to reflect what *really* happens in Python.
<!-- gh-linked-prs -->
### Linked PRs
* gh-110818
* gh-110826
* gh-110827
<!-- /gh-linked-prs -->
| f81e36f700ac8c6766207fcf3bc2540692af868b | 548ce0923b9ef93b1c1df59f8febc4bb3daff28a |
python/cpython | python__cpython-110789 | # test_os.TimerfdTests fails on FreeBSD 14 and 15: missing select.epoll()
os.timerfd_create() is [documented to be available only on Linux](https://docs.python.org/dev/library/os.html#os.timerfd_create), but it's always available on FreeBSD 14.
Problem: the test uses epoll() which is not available on FreeBSD. The test should be redesigned with `selectors.DefaultSelector` to become portable.
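A sketch of the portable pattern (using a socket pair as the readable object here so it runs anywhere; the real test would register the timerfd instead):

```python
import selectors
import socket

# Portable readiness wait: selectors.DefaultSelector picks the best
# mechanism per platform (epoll on Linux, kqueue on *BSD, select on
# Windows), so the test would not hard-depend on select.epoll().
a, b = socket.socketpair()
sel = selectors.DefaultSelector()
sel.register(a, selectors.EVENT_READ)

b.send(b"x")                     # stand-in for the timerfd firing
events = sel.select(timeout=1.0)
ready = [key.fileobj for key, _ in events]
assert a in ready

sel.close()
a.close()
b.close()
```

Registering the timerfd with `EVENT_READ` in the same way would make the test pass on FreeBSD and any other platform where `os.timerfd_create()` is available.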
os.timerfd_create() was added recently to the main branch by PR gh-108382: see issue gh-108277.
cc @m-tmatma
Logs:
```
======================================================================
ERROR: test_timerfd_epoll (test.test_os.TimerfdTests.test_timerfd_epoll)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/buildbot/buildarea/3.x.opsec-fbsd14/build/Lib/test/test_os.py", line 4099, in test_timerfd_epoll
ep = select.epoll()
^^^^^^^^^^^^
AttributeError: module 'select' has no attribute 'epoll'. Did you mean: 'poll'?
======================================================================
ERROR: test_timerfd_ns_epoll (test.test_os.TimerfdTests.test_timerfd_ns_epoll)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/buildbot/buildarea/3.x.opsec-fbsd14/build/Lib/test/test_os.py", line 4255, in test_timerfd_ns_epoll
ep = select.epoll()
^^^^^^^^^^^^
AttributeError: module 'select' has no attribute 'epoll'. Did you mean: 'poll'?
```
FreeBSD 14 build: https://buildbot.python.org/all/#/builders/1232/builds/208
<!-- gh-linked-prs -->
### Linked PRs
* gh-110789
* gh-111529
<!-- /gh-linked-prs -->
| 8f07b6e4e3c5e5c549b8c2c7d14a89b2563e6b9e | a7e2a10a85bb597d3bb8f9303214bd0524fa54c3 |
python/cpython | python__cpython-110706 | # Incorrect syntax error message for incorrect argument unpacking
# Bug report
### Bug description:
I don't think `SyntaxError: iterable argument unpacking follows keyword argument unpacking` should be present in a case like this (minimal reproducible example):
```python
>>> func(t, *:)
File "<stdin>", line 1
func(t, *:)
^
SyntaxError: iterable argument unpacking follows keyword argument unpacking
```
So it seems like if `*` is followed by code that triggers a generic `SyntaxError: invalid syntax` error, the former error triggers and overshadows the generic error. This has somewhat made checking the error in the expression following that `*` a bit harder.
I have another question. `func(a=5, *b)` is apparently allowed but from the error `func(**{'a': 7}, *b)` isn't? I don't know what the PEP is for this so I'll try to search it up later.
### CPython versions tested on:
3.12
### Operating systems tested on:
Windows
<!-- gh-linked-prs -->
### Linked PRs
* gh-110706
* gh-110765
* gh-110766
<!-- /gh-linked-prs -->
| 3d180347ae73119bb51500efeeafdcd62bcc6f78 | ec5622d1977c03960883547f7b8f55c26ac85df2 |
python/cpython | python__cpython-110952 | # test_asyncio.test_base_events: test_time_and_call_at() failed on GHA Windows x64
Recently, I modified asyncio tests to add a CLOCK_RES constant. Example of test_time_and_call_at() change:
```
+CLOCK_RES = 0.020
(...)
- self.assertGreaterEqual(dt, delay - 0.050, dt)
+ self.assertGreaterEqual(dt, delay - test_utils.CLOCK_RES)
```
So the test now tolerates only 20 ms difference instead of 50 ms. Problem, the test failed on GHA Windows x64 CI :-(
asyncio uses `time.monotonic()` as `loop.time()`. test.pythoninfo reports a clock resolution of 15.6 ms.
call_at() creates a TimerHandle . `_run_once()` uses `self._scheduled[0]._when - self.time()` timeout to call `self._selector.select(timeout)`. asyncio uses `self._clock_resolution = time.get_clock_info('monotonic').resolution` to decide in `_run_once()` if an event should be scheduled or not.
```py
# Handle 'later' callbacks that are ready.
end_time = self.time() + self._clock_resolution
while self._scheduled:
handle = self._scheduled[0]
if handle._when >= end_time:
break
```
*Maybe* there is a rounding issue and sometimes a callback scheduled in 100 ms is actually run in 78 ms.
A timeout of 100 ms should wait 6 or 7 monotonic clock ticks, whereas 78 ms is around 5 clock ticks:
```py
>>> k=0.015625
>>> 0.07799999999997453 / k
4.99199999999837
>>> 0.100 / k
6.4
```
The test reads the clock twice, maybe *t0* is 1 tick ahead of *when*:
```
when = self.loop.time() + delay
self.loop.call_at(when, cb)
t0 = self.loop.time()
```
It seems like the callback was called 2 ticks before the expected timeout when taking rounding issues into account:
```
>>> (0.100//k-1)*k, (0.100//k+1)*k
(0.078125, 0.109375)
```
The callback should be called between 78.1 ms (2 ticks before) and 109.4 ms (1 tick after). It would be nice to only have an error of a maximum of 1 tick:
```
>>> (0.100//k)*k, (0.100//k+1)*k
(0.09375, 0.109375)
```
Between 93.8 ms (1 tick before) and 109.4 ms (1 tick after).
I'm not sure what's going on.
GHA Windows x64:
```
FAIL: test_time_and_call_at (test.test_asyncio.test_base_events.BaseEventLoopTests.test_time_and_call_at)
----------------------------------------------------------------------
Traceback (most recent call last):
File "D:\a\cpython\cpython\Lib\test\test_asyncio\test_base_events.py", line 285, in test_time_and_call_at
self.assertGreaterEqual(dt, delay - test_utils.CLOCK_RES)
AssertionError: 0.07799999999997453 not greater than or equal to 0.08
```
test.pythoninfo:
```
time.altzone: -3600
time.daylight: 0
time.get_clock_info(monotonic): namespace(implementation='GetTickCount64()', monotonic=True, adjustable=False, resolution=0.015625)
time.get_clock_info(perf_counter): namespace(implementation='QueryPerformanceCounter()', monotonic=True, adjustable=False, resolution=1e-07)
time.get_clock_info(process_time): namespace(implementation='GetProcessTimes()', monotonic=True, adjustable=False, resolution=1e-07)
time.get_clock_info(thread_time): namespace(implementation='GetThreadTimes()', monotonic=True, adjustable=False, resolution=1e-07)
time.get_clock_info(time): namespace(implementation='GetSystemTimeAsFileTime()', monotonic=False, adjustable=True, resolution=0.015625)
time.time: 1696993710.9453082
time.timezone: 0
time.tzname: ('Coordinated Universal Time', 'Coordinated Universal Time')
```
build: https://github.com/python/cpython/actions/runs/6477453109/job/17587684818?pr=110677
cc @sobolevn @kumaraditya303
<!-- gh-linked-prs -->
### Linked PRs
* gh-110952
* gh-110970
* gh-110971
<!-- /gh-linked-prs -->
| 9a9fba825f8aaee4ea9b3429875c6c6324d0dee0 | 78e4a6de48749f4f76987ca85abb0e18f586f9c4 |
python/cpython | python__cpython-110687 | # Pattern matching is not tested with `@runtime_checkable` protocols
# Bug report
There are different simple and corner cases that we need to tests.
Why is that important? Because `runtime_checkable` `Protocol` has its own `__instancecheck__` magic that must work correctly with `match` machinery.
```python
>>> import typing, dataclasses
>>> @typing.runtime_checkable
... class P(typing.Protocol):
... a: int
...
>>> @dataclasses.dataclass
... class D:
... a: int
...
>>> match D(1):
... case P() as p:
... print(p)
...
D(a=1)
```
```python
>>> @typing.runtime_checkable
... class P(typing.Protocol):
... a: int
... b: int
...
>>> @dataclasses.dataclass
... class D:
... a: int
... b: int
...
>>> match D(1, 2):
... case P(b=b1):
... print(b1)
...
2
```
I have a PR ready.
CC @AlexWaygood for `runtime_checkable` and @brandtbucher for `match`
Requires https://github.com/python/cpython/issues/110682 to test multiple cases with `Protocol` and `__match_args__`.
<!-- gh-linked-prs -->
### Linked PRs
* gh-110687
<!-- /gh-linked-prs -->
| 9d02d3451a61521c65db6f93596ece2f572f1f3e | 7595d47722ae359e6642506646640a3f86816cef |
python/cpython | python__cpython-110683 | # `__match_args__` + `runtime_checkable` protocol
# Bug report
Since `__match_args__` is not mentioned in `_SPECIAL_NAMES` right now, these two examples have different results:
```python
@runtime_checkable
class P(Protocol):
x: int
y: int
class A:
def __init__(self, x: int, y: int):
self.x = x
self.y = y
assert isinstance(A(1, 2), P) is True
```
And:
```python
@runtime_checkable
class P(Protocol):
__match_args__ = ('x', 'y')
x: int
y: int
class A:
def __init__(self, x: int, y: int):
self.x = x
self.y = y
assert isinstance(A(1, 2), P) is False
```
Why do I think that `__match_args__` is a special attribute that should not be checked in `isinstance`?
1. It might be used for the protocol itself in pattern matching (new issue is on its way about it):
```python
match A(1, 2):
case P(x, y):
print(x, y)
```
But this does not work right now if `A` does not have `__match_args__`, which is not really required for this case.
2. Several similar attributes, like `__slots__`, `__weakref__`, and `__annotations__`, are already ignored
Do others agree?
CC @AlexWaygood for `@runtime_checkable` protocols
I will send a PR with my proposed solution and tests :)
<!-- gh-linked-prs -->
### Linked PRs
* gh-110683
<!-- /gh-linked-prs -->
| 5257ade0bc9b53471227d0e67999df7c8ad633a7 | 88ecb190f3717f7f0d663d004fc4b63c7e7bce77 |
python/cpython | python__cpython-110747 | # Improve markup in `enum.rst`
# Documentation
For example in https://docs.python.org/3/library/enum.html#enum.IntFlag there are several names that use italic (`*...*`) instead of the correct role (`:class:`, `:func:`, etc.). There might be more throughout the page.
<!-- gh-linked-prs -->
### Linked PRs
* gh-110747
<!-- /gh-linked-prs -->
| 14f52e154814e4cde4959d04f3e18ac4006cab40 | 0db2517687efcf5ec0174a32398ec1564b3204f1 |
python/cpython | python__cpython-110677 | # test_pty: test_openpty() timed out after 3h 20min on aarch64 RHEL8 Refleaks 3.x:
Is os.write(fd, data) guaranteed to write **all** bytes of *data*? Or can it write only *some* (first) bytes of *data*?
Extract of the test:
```py
debug("Writing chunked output")
os.write(slave_fd, TEST_STRING_2[:5])
os.write(slave_fd, TEST_STRING_2[5:])
s2 = _readline(master_fd) # <==== HERE LINE 181 ===
self.assertEqual(b'For my pet fish, Eric.\n', normalize_output(s2))
```
with:
```py
def _readline(fd):
"""Read one line. May block forever if no newline is read."""
reader = io.FileIO(fd, mode='rb', closefd=False)
return reader.readline()
```
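On the underlying question: `os.write()` is not guaranteed to write all of *data* — it returns the number of bytes actually written, which may be less than `len(data)`, so guaranteed delivery needs a loop (a generic sketch, not what the test currently does):

```python
import os

def write_all(fd, data):
    # os.write() returns the number of bytes actually written, which may
    # be less than len(data); loop until the whole buffer is drained.
    view = memoryview(data)
    while view:
        n = os.write(fd, view)
        view = view[n:]

r, w = os.pipe()
write_all(w, b"For my pet fish, Eric.\n")
os.close(w)
received = os.read(r, 100)
os.close(r)
assert received == b"For my pet fish, Eric.\n"
```

If the pty slave side accepted only part of `TEST_STRING_2`, the unwritten remainder would never reach `_readline()`, which would block forever exactly as the buildbot traceback shows.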
aarch64 RHEL8 Refleaks 3.x:
```
3:22:48 load avg: 0.00 [467/467/1] test_pty worker non-zero exit code (Exit code 1)
beginning 6 repetitions
123456
/home/buildbot/buildarea/3.x.cstratak-RHEL8-aarch64.refleak/build/Lib/pty.py:95: DeprecationWarning: This process (pid=502171) is multi-threaded, use of forkpty() may lead to deadlocks in the child.
pid, fd = os.forkpty()
hi there
./home/buildbot/buildarea/3.x.cstratak-RHEL8-aarch64.refleak/build/Lib/pty.py:95: DeprecationWarning: This process (pid=502171) is multi-threaded, use of forkpty() may lead to deadlocks in the child.
pid, fd = os.forkpty()
Timeout (3:20:00)!
Thread 0x0000ffff9f154590 (most recent call first):
File "/home/buildbot/buildarea/3.x.cstratak-RHEL8-aarch64.refleak/build/Lib/test/test_pty.py", line 68 in _readline
File "/home/buildbot/buildarea/3.x.cstratak-RHEL8-aarch64.refleak/build/Lib/test/test_pty.py", line 181 in test_openpty
(...)
```
build: https://buildbot.python.org/all/#/builders/551/builds/883
cc @serhiy-storchaka
<!-- gh-linked-prs -->
### Linked PRs
* gh-110677
* gh-110742
* gh-110743
<!-- /gh-linked-prs -->
| b4e8049766a46a9e6548b18d7e9a0c9f573cd122 | 3ac8e6955fedc35e8646bee7e04be6f20205cc1e |
python/cpython | python__cpython-110667 | # Tests: Don't measure CI performance in tests, don't test maximum elapsed time
A test should not measure CI performance: don't test maximum elapsed time.
<!-- gh-linked-prs -->
### Linked PRs
* gh-110667
* gh-110668
* gh-110669
<!-- /gh-linked-prs -->
| 1556f426da3f2fb5842689999933c8038b65c034 | f901f56313610389027cb4eae80d1d4b071aef69 |
python/cpython | python__cpython-110663 | # test_multiprocessing_spawn.test_manager: test_async_timeout() failed on Windows (x64)
When the test failed in 2013, 10 years ago, the fix was to increase the timeout from 200 ms to 1 sec: issue #63798.
Windows (x64):
```
0:14:39 load avg: 13.67 [336/469/1] test.test_multiprocessing_spawn.test_manager failed (1 failure) (2 min 26 sec) -- running (1): test.test_multiprocessing_spawn.test_processes (2 min 36 sec)
(...)
FAIL: test_async_timeout (test.test_multiprocessing_spawn.test_manager.WithManagerTestPool.test_async_timeout)
----------------------------------------------------------------------
Traceback (most recent call last):
File "D:\a\cpython\cpython\Lib\test\_test_multiprocessing.py", line 2579, in test_async_timeout
self.assertRaises(multiprocessing.TimeoutError, get, timeout=TIMEOUT2)
AssertionError: TimeoutError not raised by <test._test_multiprocessing.TimingWrapper object at 0x000001BA18F3CA60>
```
<!-- gh-linked-prs -->
### Linked PRs
* gh-110663
* gh-110674
* gh-110675
<!-- /gh-linked-prs -->
| 790ecf6302e47b84da5d1c3b14dbdf070bce615b | 1556f426da3f2fb5842689999933c8038b65c034 |
python/cpython | python__cpython-110657 | # test_logging: test_post_fork_child_no_deadlock() failed on Address sanitizer
That's another victim of the libasan deadlock involving fork. See test_threading:
```py
# gh-89363: Skip fork() test if Python is built with Address Sanitizer (ASAN)
# to work around a libasan race condition, dead lock in pthread_create().
skip_if_asan_fork = support.skip_if_sanitizer(
"libasan has a pthread_create() dead lock",
address=True)
```
Address sanitizer:
```
FAIL: test_post_fork_child_no_deadlock (test.test_logging.HandlerTest.test_post_fork_child_no_deadlock)
Ensure child logging locks are not held; bpo-6721 & bpo-36533.
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/runner/work/cpython/cpython/Lib/test/test_logging.py", line 789, in test_post_fork_child_no_deadlock
support.wait_process(pid, exitcode=0)
File "/home/runner/work/cpython/cpython/Lib/test/support/__init__.py", line 2206, in wait_process
raise AssertionError(f"process {pid} is still running "
AssertionError: process 12448 is still running after 300.4 seconds
```
build: https://github.com/python/cpython/actions/runs/6475004361/job/17581154367?pr=110650
<!-- gh-linked-prs -->
### Linked PRs
* gh-110657
* gh-110664
* gh-110665
<!-- /gh-linked-prs -->
| f901f56313610389027cb4eae80d1d4b071aef69 | 7ca4aafa0ea94a4bb778b34e9c2e85b07f550c11 |
python/cpython | python__cpython-110650 | # test_signal: test_stress_modifying_handlers() failed on s390x SLES 3.x
See also:
* #75032
* #110083
s390x SLES 3.x:
```
== CPU count: 4
...
0:03:27 load avg: 5.22 [280/467/1] test_signal failed (1 failure) (1 min 10 sec) -- running (1): test.test_multiprocessing_spawn.test_processes (38.4 sec)
(...)
FAIL: test_stress_modifying_handlers (test.test_signal.StressTest.test_stress_modifying_handlers)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/dje/cpython-buildarea/3.x.edelsohn-sles-z/build/Lib/test/test_signal.py", line 1374, in test_stress_modifying_handlers
self.assertGreater(num_received_signals, 0)
AssertionError: 0 not greater than 0
```
The test passes when re-run in verbose mode.
build: https://buildbot.python.org/all/#/builders/540/builds/6801
<!-- gh-linked-prs -->
### Linked PRs
* gh-110650
* gh-110658
* gh-110659
<!-- /gh-linked-prs -->
| e07c37cd5212c9d13749b4d02a1d68e1efcba6cf | da0a68afc9d1487fad20c50f5133cda731d17a17 |
python/cpython | python__cpython-110632 | # Fix wrongly indented blocks in the documentation
# Bug report
In the documentation there are several blocks that are indented incorrectly but don't produce errors when the documentation is built. Two common mistakes wrap the misindented block into an additional blockquote or definition list, causing extra indentation in the rendered output but no other visible problems (at least with the current theme).
This is a follow-up of https://github.com/python/devguide/issues/1178 and the linked PRs that fixed a list of similar issues in the devguide repository.
I'm working on a few PRs that fix these errors in the docs.
<!-- gh-linked-prs -->
### Linked PRs
* gh-110632
* gh-110690
* gh-110691
* gh-110635
* gh-110637
* gh-110638
* gh-110685
* gh-110736
* gh-110737
* gh-110708
* gh-110740
* gh-110741
* gh-110724
* gh-110738
* gh-110739
* gh-110885
* gh-134419
* gh-134420
<!-- /gh-linked-prs -->
| 3dd593e2f2527e199ff7401308131e6888f0cf6c | b5f7777cb3ecae02d49e0b348968c1ff1ffe21f4 |
python/cpython | python__cpython-110629 | # Add tests for PyLong C API
Currently only the new PyLong_AsInt() is tested.
<!-- gh-linked-prs -->
### Linked PRs
* gh-110629
* gh-110854
<!-- /gh-linked-prs -->
| 9d40ebf1902812fad6aa85ede7b6f1fdff3c1291 | 84e2096fbdea880799f2fdb3f0992a8961106bed |
python/cpython | python__cpython-110591 | # `_sre.compile` overwrites `TypeError` with `OverflowError`
# Bug report
Reproduction:
```python
>>> import _sre
>>> _sre.compile('', 0, ['abc'], 0, {}, ())
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
OverflowError: regular expression code size limit exceeded
```
It should be:
```python
>>> import _sre
>>> _sre.compile('', 0, ['abc'], 0, {}, ())
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: an integer is required
```
Because the third arg is `code: object(subclass_of='&PyList_Type')`, the list is assumed to contain integers.
Problematic lines: https://github.com/python/cpython/blob/def7ea5cec41e8d3112641bb4af7572c0ac4f380/Modules/_sre/sre.c#L1510-L1515
They do not check for `PyLong_AsUnsignedLong` errors.
I have a PR ready :)
<!-- gh-linked-prs -->
### Linked PRs
* gh-110591
* gh-110613
* gh-110614
<!-- /gh-linked-prs -->
| 344d3a222a7864f8157773749bdd77d1c9dfc1e6 | 0362cbf908aff2b87298f8a9422e7b368f890071 |
python/cpython | python__cpython-110573 | # `test_*_code` functions in `_testcapi/getargs.c` have memory leaks
# Bug report
I don't think it is *very* important, since this is just test code, but why keep the leaks once they are spotted?
1. `test_k_code`: https://github.com/python/cpython/blob/326c6c4e07137b43c49b74bd5528619360080469/Modules/_testcapi/getargs.c#L331-L398
On errors `tuple` is not decrefed.
Also, note these lines: https://github.com/python/cpython/blob/326c6c4e07137b43c49b74bd5528619360080469/Modules/_testcapi/getargs.c#L358-L374
Here we leave the `tuple` in a semi-broken state: its 0th item has a reference count of 0.
We should also recreate the `tuple` here with the new items. `test_L_code` also has this problem.
2. https://github.com/python/cpython/blob/326c6c4e07137b43c49b74bd5528619360080469/Modules/_testcapi/getargs.c#L684-L732
On errors `tuple` is not decrefed. And `num` is re-assigned without `tuple` cleanup.
3. https://github.com/python/cpython/blob/326c6c4e07137b43c49b74bd5528619360080469/Modules/_testcapi/getargs.c#L734-L767
As well, `tuple` is leaked on errors.
I have a PR ready.
<!-- gh-linked-prs -->
### Linked PRs
* gh-110573
* gh-111161
* gh-111214
<!-- /gh-linked-prs -->
| f71cd5394efe154ba92228b2b67be910cc1ede95 | e136e2d640f4686b63ea05088d481115185fc305 |
python/cpython | python__cpython-110559 | # Run ruff on Argument Clinic in CI
# Feature or enhancement
### Proposal:
As @erlend-aasland and I have worked on adding type hints to Argument Clinic over the last few months, and applied various other modernisations to `Tools/clinic/`, running pyflakes on the code on a regular basis has caught numerous small bugs that periodically crept in due to a refactoring. Pyflakes has an _extremely_ low number of false positives, so it would be great to have this run as part of CI to catch these issues _before_ they're merged in.
Following #109161, we now run `ruff` in CI on the `Lib/test/` directory, and ruff has implementations of all the pyflakes error codes. I propose that we add a CI check that runs the full set of pyflakes checks (via ruff) on `Tools/clinic/`.
### Has this already been discussed elsewhere?
This is a minor feature, which does not need previous discussion elsewhere
### Links to previous discussion of this feature:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-110559
* gh-110598
* gh-110603
* gh-110609
<!-- /gh-linked-prs -->
| 7b2764e798e400b8f5fcc199739405e6fbd05c20 | 96fed66a65097eac2dc528ce29c9ba676bb07689 |
python/cpython | python__cpython-110552 | # Several CAPI tests include `<stddef.h>` for no reason
# Bug report
These files do not use `offsetof()` or `ptrdiff_t`, which some other files do use:
```diff
diff --git Modules/_testcapi/abstract.c Modules/_testcapi/abstract.c
index 81a3dea4c1d..a93477a7090 100644
--- Modules/_testcapi/abstract.c
+++ Modules/_testcapi/abstract.c
@@ -1,5 +1,3 @@
-#include <stddef.h> // ptrdiff_t
-
#include "parts.h"
#include "util.h"
diff --git Modules/_testcapi/dict.c Modules/_testcapi/dict.c
index 810989fbed8..5f6a1a037dc 100644
--- Modules/_testcapi/dict.c
+++ Modules/_testcapi/dict.c
@@ -1,5 +1,3 @@
-#include <stddef.h> // ptrdiff_t
-
#include "parts.h"
#include "util.h"
diff --git Modules/_testcapi/set.c Modules/_testcapi/set.c
index f68a1859698..35e686e1e29 100644
--- Modules/_testcapi/set.c
+++ Modules/_testcapi/set.c
@@ -1,5 +1,3 @@
-#include <stddef.h> // ptrdiff_t
-
#include "parts.h"
#include "util.h"
```
I suggest removing them.
<!-- gh-linked-prs -->
### Linked PRs
* gh-110552
* gh-110553
<!-- /gh-linked-prs -->
| 89df5b73d07872d554da60b455b46c98e01a022d | 48419a50b44a195ad7de958f479a924e7c2d3e1b |
python/cpython | python__cpython-110535 | # fix a URL redirect to wikipedia article on Fibonacci numbers
# Documentation
Section 3.2 of the tutorial: https://docs.python.org/3.13/tutorial/introduction.html#first-steps-towards-programming
links to https://en.wikipedia.org/w/index.php?title=Fibonacci_number which redirects to https://en.wikipedia.org/wiki/Fibonacci_sequence
<!-- gh-linked-prs -->
### Linked PRs
* gh-110535
* gh-110536
* gh-110537
<!-- /gh-linked-prs -->
| 892ee72b3622de30acd12576b59259fc69e2e40a | 7e30821b17b56bb5ed9799f62eb45e448cb52c8e |
python/cpython | python__cpython-110528 | # `PySet_Clear`'s docs are incomplete
# Bug report
Current docs:
https://github.com/python/cpython/blob/7e30821b17b56bb5ed9799f62eb45e448cb52c8e/Doc/c-api/set.rst#L164-L166
It does not mention:
- `-1` is returned on error, `0` on success
- It can raise `SystemError`, similar to other functions that mutate sets
https://github.com/python/cpython/blob/7e30821b17b56bb5ed9799f62eb45e448cb52c8e/Objects/setobject.c#L2291-L2298
I propose to improve this.
<!-- gh-linked-prs -->
### Linked PRs
* gh-110528
* gh-110927
* gh-110928
<!-- /gh-linked-prs -->
| bfc1cd8145db00df23fbbd2ed95324bb96c0b25b | c2192a2bee17e2ce80c5af34410ccd0c8b6e08aa |
python/cpython | python__cpython-110526 | # Improve CAPI tests of `set` and `frozenset`
# Feature or enhancement
Right now the CAPI tests of `set` and `frozenset` are quite outdated. They are defined as a `.test_c_api` method on `set` objects when `Py_DEBUG` is set:
https://github.com/python/cpython/blob/7e30821b17b56bb5ed9799f62eb45e448cb52c8e/Objects/setobject.c#L2371-L2511
There are several things that I can see as problematic:
- It is stored together with the production code
- These tests are mixing CAPI (`PySet_Check`), internal CAPI (`_PySet_Update`), and internal function calls (`set_clear_internal`)
- These tests are hard to parametrize since they are called as
https://github.com/python/cpython/blob/7e30821b17b56bb5ed9799f62eb45e448cb52c8e/Lib/test/test_set.py#L638-L641
- Multiple things are not tested: like `set` and `frozenset` subclasses
- Test failures are not very informative, because they rely on C `assert`
- This is a single huge test
- This test is only run under debug builds and not under "release" builds
I propose a plan to improve it:
- [x] Move CAPI tests to `Modules/_testcapi/set.c` and add `Lib/test/test_capi/test_set.py`
- [x] Move internal CAPI tests `Modules/_testinternalcapi/set.c` and make sure that they are executed in `test_capi.py`
- [x] Delete `test_c_api` from `setobject.c`
I plan to work on this and already have the PR for `1.` :)
<!-- gh-linked-prs -->
### Linked PRs
* gh-110526
* gh-110544
* gh-110547
* gh-110554
* gh-110630
* gh-110688
<!-- /gh-linked-prs -->
| c49edd7d9c5395a6a6696a4846f56bc8b2b22792 | dd4bb0529e44ac6f75a9ebbfcbf5d73dc251b7a7 |
python/cpython | python__cpython-110520 | # Deprecation warning for non-integer number in gettext is not always accurate
# Bug report
`gettext` functions and methods that consider plural forms (like `ngettext()`) and directly or indirectly use `GNUTranslations` emit a deprecation warning if the number is not an integer. But it only points to the line where the `GNUTranslations` methods `ngettext()` or `npgettext()` are used directly. Since module-level functions use it indirectly, and methods of other classes can use it indirectly as a fallback, the deprecation warning usually points to a line in the `gettext` module instead of the line in the user code that uses it. This makes deprecation warnings much less useful.
The following PR dynamically calculates the stacklevel for the warning, skipping any `gettext` code.
Also I have found that a lot of the code is not covered by tests (in particular `NullTranslations` and domain-aware functions like `dngettext()`). The PR extends the tests.
<!-- gh-linked-prs -->
### Linked PRs
* gh-110520
* gh-110563
* gh-110564
<!-- /gh-linked-prs -->
| 326c6c4e07137b43c49b74bd5528619360080469 | 7bd560ce8de41e62230975c44fd7fbd189e8e858 |
python/cpython | python__cpython-110524 | # `AssertionError` when running `profile` on code containing generator expression
# Bug report
### Bug description:
Bisected to 411b1692811b2ecac59cb0df0f920861c7cf179a.
Minimal repro (save it to file and run as `python -m profile script.py`):
```python
next(i for i in range(10))
```
Output:
```
Exception ignored in: <generator object <genexpr> at 0x7f48fac29010>
Traceback (most recent call last):
File "/home/radislav/projects/cpython/Lib/profile.py", line 209, in trace_dispatch_i
if self.dispatch[event](self, frame, t):
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/radislav/projects/cpython/Lib/profile.py", line 293, in trace_dispatch_return
assert frame is self.cur[-2].f_back, ("Bad return", self.cur[-3])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AssertionError: ('Bad return', ('script.py', 1, '<module>'))
5 function calls in 0.114 seconds
Ordered by: standard name
ncalls tottime percall cumtime percall filename:lineno(function)
1 0.000 0.000 0.114 0.114 :0(exec)
1 0.000 0.000 0.000 0.000 :0(next)
1 0.000 0.000 0.114 0.114 profile:0(<code object <module> at 0x7f48faba08c0, file "script.py", line 1>)
0 0.000 0.000 profile:0(profiler)
1 0.000 0.000 0.000 0.000 script.py:1(<genexpr>)
1 0.114 0.114 0.114 0.114 script.py:1(<module>)
```
### CPython versions tested on:
3.12, CPython main branch
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-110524
* gh-110541
<!-- /gh-linked-prs -->
| dd4bb0529e44ac6f75a9ebbfcbf5d73dc251b7a7 | 9f8282de6bdc3e1f976318821ff151ed45fedc56 |
python/cpython | python__cpython-110498 | # Better communicate that `IOError` and `WindowsError` are just aliases of `OSError` now
# Bug report
While working on some Windows-related typeshed PRs, I've noticed that some places in docs are very clear about this change. For example:
```rst
.. versionchanged:: 3.3
:exc:`IOError` used to be raised; it is now an alias of :exc:`OSError`.
```
It is clear that `OSError` is the same as `IOError` now.
But, there are several places where it is not clear. Example:
https://github.com/python/cpython/blob/92ca90b7629c070ebc3b08e6f03db0bb552634e3/Doc/library/gettext.rst#L165-L170
It might be confusing: people might think that right now `OSError` and `IOError` are different.
Let's add notes about explicit alias to several places that miss it.
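A quick interpreter check makes the relationship concrete: the names are not merely related classes, they are the same class object, so any `except IOError` clause also catches `OSError` (and its subclasses):

```python
# Since Python 3.3, IOError *is* OSError: one class, two names.
print(IOError is OSError)            # True
try:
    open("/this/path/does/not/exist")
except IOError as exc:               # identical to "except OSError"
    print(isinstance(exc, OSError))  # True
```

(`WindowsError` behaves the same way, but only exists on Windows.)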
<!-- gh-linked-prs -->
### Linked PRs
* gh-110498
* gh-110545
* gh-110546
<!-- /gh-linked-prs -->
| 5e7edac7717bfe5f3c533d83ddd0f564db8de40b | c49edd7d9c5395a6a6696a4846f56bc8b2b22792 |
python/cpython | python__cpython-108801 | # Optimise math.ceil for known exact float
Do the same optimisation done by rhettinger for math.floor in https://github.com/python/cpython/pull/21072
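The dispatch being optimised can be sketched in pure Python (a hypothetical model of what the C-level `math.ceil()` does, not the actual implementation): the fast path is that a known-exact `float` skips the `__ceil__` attribute lookup entirely.

```python
import math
import operator

def ceil_like(x):
    """Toy model of math.ceil() dispatch with the exact-float fast path."""
    if type(x) is float:                  # known exact float: fast path,
        return math.ceil(x)               # no __ceil__ lookup needed
    method = getattr(type(x), "__ceil__", None)
    if method is not None:                # e.g. int, Decimal, Fraction
        return method(x)
    return math.ceil(operator.index(x))   # fall back to __index__
```

The same `PyFloat_CheckExact()`-style shortcut is what the linked `math.floor` PR added: subclasses of `float` still go through the generic protocol, so overridden `__ceil__` methods keep working.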
<!-- gh-linked-prs -->
### Linked PRs
* gh-108801
<!-- /gh-linked-prs -->
| f013b475047b2e9d377feda9f2e16e5cdef824d7 | 201dc11aeb4699de3c5ebaea9798796c30087bcc |
python/cpython | python__cpython-110651 | # pathlib.PurePath.with_name() rejects names with NTFS alternate data streams; accepts '.'
# Bug report
This shouldn't raise an exception:
```python
>>> pathlib.PureWindowsPath('foo', 'a').with_name('a:b')
ValueError: Invalid name 'a:b'
```
(affects all versions of pathlib)
This _should_ raise an exception:
```python
>>> pathlib.PurePath('foo', 'a').with_name('.')
PurePosixPath('foo/.')
```
(affects 3.12 and up)
<!-- gh-linked-prs -->
### Linked PRs
* gh-110651
* gh-110678
<!-- /gh-linked-prs -->
| b5f7777cb3ecae02d49e0b348968c1ff1ffe21f4 | 790ecf6302e47b84da5d1c3b14dbdf070bce615b |
python/cpython | python__cpython-110764 | # Implement biased reference counting in `--disable-gil` builds
# Feature or enhancement
CPython's current reference counting implementation would not be thread-safe without the GIL. In `--disable-gil` builds, we should implement biased reference counting, which is thread-safe and has lower execution overhead compared to plain atomic reference counting. The design is described in the ["Biased Reference Counting"](https://peps.python.org/pep-0703/#biased-reference-counting) section of PEP 703.
This will require changing the `PyObject` struct in `--disable-gil` builds. I expect the `--disable-gil` struct to look like:
```c
struct _object {
uintptr_t ob_tid;
uint16_t _padding;
uint8_t ob_mutex;
uint8_t ob_gc_bits;
uint32_t ob_ref_local;
Py_ssize_t ob_ref_shared;
PyTypeObject *ob_type;
};
```
Not all the fields will be used immediately, but it will be easier to include them as part of implementing this change.
I intend to split the implementation across (at least) two PRs to make it easier to review.
1. The first PR will add the new `PyObject` fields and implement the core biased reference counting operations, but not the [inter-thread queue](https://github.com/colesbury/nogil-3.12/blob/nogil-3.12/Python/pyrefcnt.c) to reduce the initial complexity. (done)
2. A later PR will implement the inter-thread queue. (not yet implemented)
The inter-thread queue is needed eventually (before *actually* disabling the GIL), but since the `--disable-gil` builds still currently require the GIL, it doesn't have to be in the initial PR.
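The core idea of the split counters can be illustrated with a toy Python model (purely illustrative: real CPython uses atomic operations on the shared field, not a lock, and the field names below only mirror the struct sketch above):

```python
import threading

class BiasedRef:
    """Toy model of biased reference counting: the owning thread updates
    an unsynchronized local counter; other threads use a synchronized
    shared counter.  A Lock stands in for the real atomic operations."""

    def __init__(self):
        self.ob_tid = threading.get_ident()   # owning thread's id
        self.ob_ref_local = 1                 # owner-only, no sync needed
        self.ob_ref_shared = 0                # touched by other threads
        self._lock = threading.Lock()

    def incref(self):
        if threading.get_ident() == self.ob_tid:
            self.ob_ref_local += 1            # fast, uncontended path
        else:
            with self._lock:
                self.ob_ref_shared += 1       # slow, contended path

    def refcount(self):
        with self._lock:
            return self.ob_ref_local + self.ob_ref_shared
```

Because most objects are only ever touched by their creating thread, almost all refcount traffic takes the cheap local path; that is where the lower overhead relative to plain atomic refcounting comes from.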
<!-- gh-linked-prs -->
### Linked PRs
* gh-110764
* gh-111503
* gh-111505
* gh-111560
* gh-111974
* gh-112174
* gh-112180
* gh-112595
* gh-114824
* gh-117271
<!-- /gh-linked-prs -->
| 6dfb8fe0236718e9afc8136ff2b58dcfbc182022 | 05f2f0ac92afa560315eb66fd6576683c7f69e2d |
python/cpython | python__cpython-115273 | # EOF occurred in violation of protocol starting Python3.10 on large requests
# Bug report
### Bug description:
We have identified a regression starting with Python 3.10 when making a large request over TLS.
```python
import requests
requests.post('https://google.com', data=b'A'*1000000)
```
```python
urllib3.exceptions.SSLError: EOF occurred in violation of protocol (_ssl.c:2426)
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/usr/local/lib/python3.10/site-packages/requests/adapters.py", line 486, in send
resp = conn.urlopen(
File "/usr/local/lib/python3.10/site-packages/urllib3/connectionpool.py", line 844, in urlopen
retries = retries.increment(
File "/usr/local/lib/python3.10/site-packages/urllib3/util/retry.py", line 515, in increment
raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type]
urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='google.com', port=443): Max retries exceeded with url: / (Caused by SSLError(SSLEOFError(8, 'EOF occurred in violation of protocol (_ssl.c:2426)')))
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python3.10/site-packages/requests/api.py", line 115, in post
return request("post", url, data=data, json=json, **kwargs)
File "/usr/local/lib/python3.10/site-packages/requests/api.py", line 59, in request
return session.request(method=method, url=url, **kwargs)
File "/usr/local/lib/python3.10/site-packages/requests/sessions.py", line 589, in request
resp = self.send(prep, **send_kwargs)
File "/usr/local/lib/python3.10/site-packages/requests/sessions.py", line 703, in send
r = adapter.send(request, **kwargs)
File "/usr/local/lib/python3.10/site-packages/requests/adapters.py", line 517, in send
raise SSLError(e, request=request)
requests.exceptions.SSLError: HTTPSConnectionPool(host='google.com', port=443): Max retries exceeded with url: / (Caused by SSLError(SSLEOFError(8, 'EOF occurred in violation of protocol (_ssl.c:2426)')))
```
The same request works fine on Python 3.9.
### CPython versions tested on:
3.10, 3.11, 3.12
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-115273
<!-- /gh-linked-prs -->
| e0f9863a613b7e00794475bf4286c1dcf688ac26 | fce7fd6426519a2897330c03da7eb889232bf681 |
python/cpython | python__cpython-113441 | # configure --with-openssl-rpath=DIR generates invalid linker option on macOS
# Bug report
### Bug description:
On macOS, running configure with the option --with-openssl-rpath=/x/y/z causes the linker to be invoked with the option:
`-rpath=/x/y/z`
which causes a linker error when compiling python with Apple's compiler. The form of this option which the Apple linker expects is:
`-rpath /x/y/z`
The invalid option is generated on line 28077 of the configure script, which reads:
`rpath_arg="-Wl,-rpath="`
Changing that line to:
`rpath_arg="-rpath "`
corrects the problem. Presumably that only works with the Apple linker, and the current form is correct when using the GNU linker, rather than the emulation provided by Apple. So this fix should only be applied when using Apple's linker.
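A minimal sketch of the platform-conditional spelling (illustrative shell, not the actual configure.ac change): when the flag is passed through the compiler driver, GNU ld accepts `-Wl,-rpath=DIR` while Apple's ld wants the comma-separated `-Wl,-rpath,DIR` form.

```shell
# Pick the rpath spelling the platform's linker understands.
case "$(uname -s)" in
Darwin*) rpath_arg="-Wl,-rpath," ;;   # Apple ld: comma separator
*)       rpath_arg="-Wl,-rpath=" ;;   # GNU ld: '=' separator
esac
echo "linker flag: ${rpath_arg}/x/y/z"
```

In configure itself the equivalent test would key off `$ac_sys_system` rather than calling `uname` directly.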
### CPython versions tested on:
3.12
### Operating systems tested on:
macOS
<!-- gh-linked-prs -->
### Linked PRs
* gh-113441
* gh-113535
<!-- /gh-linked-prs -->
| cc13eabc7ce08accf49656e258ba500f74a1dae8 | bfee2f77e16f01a718c1044564ee624f1f2bc328 |
python/cpython | python__cpython-110487 | # Thread ID assertion in `pystate.c` failing under WASI
# Bug report
### Bug description:
The following assert fails under a debug build with WASI:
https://github.com/python/cpython/blob/a155f9f3427578ca5706d27e20bd0576f0395073/Python/pystate.c#L269
It's probably due to the pthread stubs always returning `0` as the thread ID:
https://github.com/python/cpython/blob/a155f9f3427578ca5706d27e20bd0576f0395073/Python/thread_pthread_stubs.h#L97-L100
It can probably be solved by making the assertion conditional on `HAVE_PTHREAD_STUBS` not being defined:
https://github.com/python/cpython/blob/a155f9f3427578ca5706d27e20bd0576f0395073/Include/cpython/pthread_stubs.h#L4
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Other
<!-- gh-linked-prs -->
### Linked PRs
* gh-110487
* gh-110491
<!-- /gh-linked-prs -->
| 5fd8821cf8eb1fe2e8575f8c7cc747cf78855a88 | f013b475047b2e9d377feda9f2e16e5cdef824d7 |
python/cpython | python__cpython-110470 | # Python 3.11.5 to 3.11.6 upgrade fails: vcruntime140.dll not found
# Bug report
### Bug description:
While updating 3.11.5 to 3.11.6 using python-3.11.6-amd64.exe, installation fails multiple times while precompiling. This is on a fairly clean, up-to-date Win10 system with no Visual Studio. On a system with Visual Studio, the upgrade works.
### CPython versions tested on:
3.11
### Operating systems tested on:
Windows
<!-- gh-linked-prs -->
### Linked PRs
* gh-110470
* gh-110555
* gh-110556
<!-- /gh-linked-prs -->
| 12cc6792d0ca1d0b72712d77c6efcb0aa0c7e7ba | ea39c877c0a8e7a717f2e4bf7d92a3a8780e67c0 |
python/cpython | python__cpython-110433 | # GHA "Check if generated files are up to date" job failed with: No such file or directory: ./Parser/parser.new.c
The GHA "Check if generated files are up to date" job failed with: `No such file or directory: './Parser/parser.new.c'`.
This job runs:
```
make regen-deepfreeze
make -j4 regen-all
make regen-stdlib-module-names
```
Logs:
```
The Makefile was updated, you may need to re-run make.
PYTHONPATH=./Tools/peg_generator python3.11 -m pegen -q c \
./Grammar/python.gram \
./Grammar/Tokens \
-o ./Parser/parser.new.c
python3.11 ./Tools/build/update_file.py ./Tools/peg_generator/pegen/grammar_parser.py \
./Tools/peg_generator/pegen/grammar_parser.py.new
gcc -c -fno-strict-overflow -Wsign-compare -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -g -Og -Wall -std=c11 -Wextra -Wno-unused-parameter -Wno-missing-field-initializers -Wstrict-prototypes -Werror=implicit-function-declaration -fvisibility=hidden -I./Include/internal -I. -I./Include -fPIC -DPy_BUILD_CORE -o Programs/python.o ./Programs/python.c
Read 186 instructions/ops, 5 supers, 1 macros, and 12 families from ./Python/bytecodes.c
Wrote 186 instructions, 5 supers, and 1 macros to ./Python/generated_cases.c.h.new
python3.11 ./Tools/build/update_file.py ./Python/generated_cases.c.h ./Python/generated_cases.c.h.new
python3.11 ./Tools/build/update_file.py ./Python/opcode_metadata.h ./Python/opcode_metadata.h.new
gcc -c -fno-strict-overflow -Wsign-compare -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -g -Og -Wall -std=c11 -Wextra -Wno-unused-parameter -Wno-missing-field-initializers -Wstrict-prototypes -Werror=implicit-function-declaration -fvisibility=hidden -I./Include/internal -I. -I./Include -fPIC -DPy_BUILD_CORE -o Python/deepfreeze/deepfreeze.o Python/deepfreeze/deepfreeze.c
gcc -c -fno-strict-overflow -Wsign-compare -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -g -Og -Wall -std=c11 -Wextra -Wno-unused-parameter -Wno-missing-field-initializers -Wstrict-prototypes -Werror=implicit-function-declaration -fvisibility=hidden -I./Include/internal -I. -I./Include -fPIC -DPy_BUILD_CORE -o Python/frozen.o Python/frozen.c
python3.11 ./Tools/build/update_file.py ./Lib/keyword.py ./Lib/keyword.py.new
# Regenerate Lib/test/levenshtein_examples.json
python3.11 ./Tools/build/generate_levenshtein_examples.py ./Lib/test/levenshtein_examples.json
/home/runner/work/cpython/cpython/Lib/test/levenshtein_examples.json already exists, skipping regeneration.
To force, add --overwrite to the invocation of this tool or delete the existing file.
python3.11 ./Tools/clinic/clinic.py --make --srcdir .
python3.11 ./Tools/build/update_file.py ./Parser/parser.c ./Parser/parser.new.c
Traceback (most recent call last):
File "/home/runner/work/cpython/cpython/./Tools/clinic/clinic.py", line 5562, in <module>
sys.exit(main(sys.argv[1:]))
^^^^^^^^^^^^^^^^^^
File "/home/runner/work/cpython/cpython/./Tools/clinic/clinic.py", line 5542, in main
parse_file(path, verify=not ns.force)
File "/home/runner/work/cpython/cpython/./Tools/clinic/clinic.py", line 2253, in parse_file
with open(filename, encoding="utf-8") as f:
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
FileNotFoundError: [Errno 2] No such file or directory: './Parser/parser.new.c'
```
build: https://github.com/python/cpython/actions/runs/6423320320/job/17441640730?pr=110427
<!-- gh-linked-prs -->
### Linked PRs
* gh-110433
* gh-110438
* gh-110439
<!-- /gh-linked-prs -->
| fb6c4ed2bbb2a867d5f0b9a94656e4714be5d9c2 | d257479c2f6cbf3b69ed90062f00635832e4bf91 |
python/cpython | python__cpython-110418 | # glob module docs in wrong order
The documentation of the `glob` module is in a slightly odd order:
1. A description of the globbing language (wildcards etc)
2. A "see also: pathlib" block
3. Function documentation:
a. `glob()`
b. `iglob()`
c. `escape()`
4. Two examples of using `glob()`
5. A "see also: fnmatch" block
In particular, the description of `glob()` and the examples of its usage are interrupted by descriptions of `iglob()` and `escape()`.
I think this might be better:
1. A description of the globbing language (wildcards etc)
5. A "see also: fnmatch" block
2. A "see also: pathlib" block
3. Function documentation:
a. `glob()`, including two examples of its usage
b. `iglob()`
c. `escape()`
<!-- gh-linked-prs -->
### Linked PRs
* gh-110418
<!-- /gh-linked-prs -->
| d5491a6eff516ad47906bd91a13d71cdde18f5ab | cf67ebfb315ce36175f3d425249d7c6560f6d0d5 |
python/cpython | python__cpython-110441 | # Proposal to add `Py_IsFinalizing()` to the limited API/stable ABI
# Feature or enhancement
### Proposal:
Bigger Python extension projects sometimes need to check whether the interpreter is in the process of shutting down to determine if certain operations may be safely executed (`PyEval_RestoreThread`, `Py_DECREF`, etc.). Making a mistake here would cause a segfault, and the function `Py_IsFinalizing()` (previously `_Py_IsFinalizing()`) is important to keep that from happening.
Just recently, this function was deleted, then re-added to the public API (https://github.com/python/cpython/issues/108014). However, it is not part of the *limited API* and therefore cannot be used in extension modules targeting the *stable ABI*.
The purpose of this ticket is to start a discussion on whether this function could be exposed as part of these longer-term stable interfaces.
### Has this already been discussed elsewhere?
This is a minor feature, which does not need previous discussion elsewhere
### Links to previous discussion of this feature:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-110441
<!-- /gh-linked-prs -->
| 64f158e7b09e67d0bf5c8603ff88c86ed4e8f8fd | b987fdb19b981ef6e7f71b41790b5ed4e2064646 |
python/cpython | python__cpython-110517 | # select.kqueue uses invalid fd after fork
# Bug report
### Bug description:
`select.kqueue` is not aware of forks, and will try to use its file descriptor number after fork. Since kqueues are not inherited[^inherited] in children, the fd will be invalid, or it can refer to a file opened by someone else.
[^inherited]: the queue itself is not available in children, and the OS will automatically close any fd referring to a kqueue after fork.
Reproducer for a case where `select.kqueue` will close, in its destructor, an fd opened with `open()`:
```python
import os
import select
def repro():
kq = select.kqueue()
pid = os.fork()
if pid == 0:
f = open("/dev/null", "wb")
print(f"{f.fileno()=} {kq.fileno()=}")
del kq
f.write(b"x")
f.close()
repro()
```
<details><summary>Reproducer with asyncio</summary>
<p>
```python
import asyncio
import gc
import os
def asyncio_repro():
loop = asyncio.new_event_loop()
loop.run_until_complete(asyncio.sleep(0))
if os.fork() == 0:
del loop
with open("/dev/null", "wb") as f:
gc.collect()
f.write(b"x")
asyncio_repro()
```
This will fail with `OSError: [Errno 9] Bad file descriptor` when operating on f, because its fd was coincidentally closed by the loop destructor. Dropping the reference after fork does not help; it actually makes the problem worse, because the loop becomes cyclic garbage and the kqueue can be closed at a later, less predictable time.
</p>
</details>
In the asyncio example I need a bit of setup: the first loop object needs to be open at fork time[^1]. The bug will be observable if, in the child, the loop's kqueue is closed/destructed *after* a different fd is opened.
[^1]: this can also happen if event loop policy holds a reference to a loop, and later is dropped (maybe to create a new loop in the child). Or a Runner context is still active at fork time and is exited in the child.
I encountered this because I got `test_asyncio.test_unix_events.TestFork` failures while working on something unrelated (and it's been a pain to debug), not in production code. I guess it can still happen in real-world code though, because there is no proper way to dispose of a select.kqueue object in a forked process, and it's hard to debug a random EBADF that triggers only on Mac/BSD in otherwise-correct code.
I'm willing to work on a fix, if one is desired, but I'll need some guidance on the strategy. I thought `select.kqueue` objects could be invalidated after fork, but that would add some (small) tracking overhead, so I don't know if it's acceptable.
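One possible invalidation strategy can be sketched in a few lines (a hypothetical illustration, not a proposed patch; `ForkAwareFd` is a made-up name): remember the pid that created the fd and refuse to touch it from a forked child, where the same fd number may now refer to someone else's file.

```python
import os

class ForkAwareFd:
    """Wrap an fd and treat it as invalid in forked children, instead of
    closing whatever file happens to occupy that fd number there."""

    def __init__(self, fd):
        self._fd = fd
        self._pid = os.getpid()   # pid of the creating process

    def valid(self):
        # In a forked child, os.getpid() differs and the fd is stale.
        return self._fd >= 0 and os.getpid() == self._pid

    def close(self):
        if self.valid():          # never close a stranger's fd
            os.close(self._fd)
        self._fd = -1
```

The tracking overhead mentioned above is exactly this extra pid field plus a `getpid()` comparison on each use; `os.register_at_fork()` hooks would be an alternative way to invalidate eagerly in the child.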
### CPython versions tested on:
3.11, 3.12, CPython main branch
### Operating systems tested on:
macOS, Other
<!-- gh-linked-prs -->
### Linked PRs
* gh-110517
* gh-111745
* gh-111816
<!-- /gh-linked-prs -->
| a6c1c04d4d2339f0094422974ae3f26f8c7c8565 | cd6b2ced7595fa69222bdd2042edc5a2576f3678 |
python/cpython | python__cpython-110400 | # test_builtin.PtyTests: test_input_no_stdout_fileno() failed on PPC64LE RHEL7 3.x
This buildbot is slow. Maybe it's just the `signal.alarm(2)` timeout of 2 seconds which is too short. I suggest simply removing it.
```py
# Child
try:
# Make sure we don't get stuck if there's a problem
signal.alarm(2)
os.close(r)
with open(w, "w") as wpipe:
child(wpipe)
except:
traceback.print_exc()
finally:
# We don't want to return to unittest...
os._exit(0)
```
PPC64LE RHEL7 3.x:
```
FAIL: test_input_no_stdout_fileno (test.test_builtin.PtyTests.test_input_no_stdout_fileno)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/buildbot/buildarea/3.x.cstratak-RHEL7-ppc64le/build/Lib/test/test_builtin.py", line 2316, in test_input_no_stdout_fileno
lines = self.run_child(child, b"quux\r")
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/buildbot/buildarea/3.x.cstratak-RHEL7-ppc64le/build/Lib/test/test_builtin.py", line 2189, in run_child
return self._run_child(child, terminal_input)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/buildbot/buildarea/3.x.cstratak-RHEL7-ppc64le/build/Lib/test/test_builtin.py", line 2246, in _run_child
self.fail("got %d lines in pipe but expected 2, child output was:\n%s"
AssertionError: got 0 lines in pipe but expected 2, child output was:
Current thread 0x00003fff9c244f40 (most recent call first):
File "/home/buildbot/buildarea/3.x.cstratak-RHEL7-ppc64le/build/Lib/test/test_builtin.py", line 2314 in child
File "/home/buildbot/buildarea/3.x.cstratak-RHEL7-ppc64le/build/Lib/test/test_builtin.py", line 2210 in _run_child
File "/home/buildbot/buildarea/3.x.cstratak-RHEL7-ppc64le/build/Lib/test/test_builtin.py", line 2189 in run_child
File "/home/buildbot/buildarea/3.x.cstratak-RHEL7-ppc64le/build/Lib/test/test_builtin.py", line 2316 in test_input_no_stdout_fileno
File "/home/buildbot/buildarea/3.x.cstratak-RHEL7-ppc64le/build/Lib/unittest/case.py", line 589 in _callTestMethod
File "/home/buildbot/buildarea/3.x.cstratak-RHEL7-ppc64le/build/Lib/unittest/case.py", line 636 in run
File "/home/buildbot/buildarea/3.x.cstratak-RHEL7-ppc64le/build/Lib/unittest/case.py", line 692 in __call__
File "/home/buildbot/buildarea/3.x.cstratak-RHEL7-ppc64le/build/Lib/unittest/suite.py", line 122 in run
File "/home/buildbot/buildarea/3.x.cstratak-RHEL7-ppc64le/build/Lib/unittest/suite.py", line 84 in __call__
File "/home/buildbot/buildarea/3.x.cstratak-RHEL7-ppc64le/build/Lib/unittest/suite.py", line 122 in run
File "/home/buildbot/buildarea/3.x.cstratak-RHEL7-ppc64le/build/Lib/unittest/suite.py", line 84 in __call__
File "/home/buildbot/buildarea/3.x.cstratak-RHEL7-ppc64le/build/Lib/unittest/suite.py", line 122 in run
File "/home/buildbot/buildarea/3.x.cstratak-RHEL7-ppc64le/build/Lib/unittest/suite.py", line 84 in __call__
File "/home/buildbot/buildarea/3.x.cstratak-RHEL7-ppc64le/build/Lib/unittest/runner.py", line 240 in run
File "/home/buildbot/buildarea/3.x.cstratak-RHEL7-ppc64le/build/Lib/test/support/__init__.py", line 1155 in _run_suite
File "/home/buildbot/buildarea/3.x.cstratak-RHEL7-ppc64le/build/Lib/test/support/__init__.py", line 1282 in run_unittest
File "/home/buildbot/buildarea/3.x.cstratak-RHEL7-ppc64le/build/Lib/test/libregrtest/single.py", line 36 in run_unittest
File "/home/buildbot/buildarea/3.x.cstratak-RHEL7-ppc64le/build/Lib/test/libregrtest/single.py", line 92 in test_func
File "/home/buildbot/buildarea/3.x.cstratak-RHEL7-ppc64le/build/Lib/test/libregrtest/single.py", line 48 in regrtest_runner
File "/home/buildbot/buildarea/3.x.cstratak-RHEL7-ppc64le/build/Lib/test/libregrtest/single.py", line 95 in _load_run_test
File "/home/buildbot/buildarea/3.x.cstratak-RHEL7-ppc64le/build/Lib/test/libregrtest/single.py", line 138 in _runtest_env_changed_exc
File "/home/buildbot/buildarea/3.x.cstratak-RHEL7-ppc64le/build/Lib/test/libregrtest/single.py", line 238 in _runtest
File "/home/buildbot/buildarea/3.x.cstratak-RHEL7-ppc64le/build/Lib/test/libregrtest/single.py", line 266 in run_single_test
File "/home/buildbot/buildarea/3.x.cstratak-RHEL7-ppc64le/build/Lib/test/libregrtest/worker.py", line 89 in worker_process
File "/home/buildbot/buildarea/3.x.cstratak-RHEL7-ppc64le/build/Lib/test/libregrtest/worker.py", line 112 in main
File "/home/buildbot/buildarea/3.x.cstratak-RHEL7-ppc64le/build/Lib/test/libregrtest/worker.py", line 116 in <module>
File "/home/buildbot/buildarea/3.x.cstratak-RHEL7-ppc64le/build/Lib/runpy.py", line 88 in _run_code
File "/home/buildbot/buildarea/3.x.cstratak-RHEL7-ppc64le/build/Lib/runpy.py", line 198 in _run_module_as_main
Stderr:
/home/buildbot/buildarea/3.x.cstratak-RHEL7-ppc64le/build/Lib/pty.py:95: DeprecationWarning: This process (pid=27024) is multi-threaded, use of forkpty() may lead to deadlocks in the child.
pid, fd = os.forkpty()
----------------------------------------------------------------------
Ran 119 tests in 26.882s
FAILED (failures=1, skipped=7)
test test_builtin failed
```
build: test_input_no_stdout_fileno
<!-- gh-linked-prs -->
### Linked PRs
* gh-110400
* gh-110444
* gh-110445
<!-- /gh-linked-prs -->
| 1328fa31fe9c72748fc6fd11d017c82aafd48a49 | d33aa18f15de482a01988aabc75907328e1f9c9f |
python/cpython | python__cpython-110642 | # tty.setraw() and tty.setcbreak() return partially modified original attributes
# Bug report
According to the documentation, they save the original result of `termios.tcgetattr()` and return it. But while making a copy of the attribute list before modifying it, they do not take into account that it contains a reference to the original `cc` list, and modify it in place. So these functions return a list which contains the original values but a modified `cc` list.
These functions started returning the attribute list in #85984 (486bc8e03019b8edc3cbfc6e64db96d65dbe13b6).
The question: who should make a copy of the internal list: `setraw()` before passing it to `cfmakeraw()`, or `cfmakeraw()` itself?
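The aliasing bug can be demonstrated with plain lists, no terminal required (a sketch of the mechanism, not the actual `tty.py` code; the values are made up):

```python
# The termios attribute list has the shape
# [iflag, oflag, cflag, lflag, ispeed, ospeed, cc] -- cc is itself a list.
original = [0, 1, 2, 3, 4, 5, [b'\x03', b'\x1c']]

mode = list(original)        # what setraw() effectively did: shallow copy
mode[6][0] = b'\x00'         # "modifying the copy"...
assert original[6][0] == b'\x00'   # ...also changed the saved original!

mode2 = list(original)
mode2[6] = list(mode2[6])    # one extra copy of cc is enough
mode2[6][0] = b'\x7f'
assert original[6][0] == b'\x00'   # original preserved this time
```

Whichever of `setraw()` or `cfmakeraw()` ends up responsible, the fix amounts to that one extra `list(mode[CC])` copy before the `cc` entries are modified.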
cc @gpshead
<!-- gh-linked-prs -->
### Linked PRs
* gh-110642
* gh-110853
<!-- /gh-linked-prs -->
| 84e2096fbdea880799f2fdb3f0992a8961106bed | 7284e0ef84e53f80b2e60c3f51e3467d67a275f3 |
python/cpython | python__cpython-110401 | # test_socket: NetworkConnectionAttributesTest.clientTearDown() raised AttributeError on AMD64 RHEL7 Refleaks 3.x
`NetworkConnectionAttributesTest.clientTearDown()` raised `AttributeError: 'NetworkConnectionAttributesTest' object has no attribute 'cli'`.
AMD64 RHEL7 Refleaks 3.x:
```
(...)
3:19:42 load avg: 0.00 running (1): test_socket (3 hour 17 min)
3:20:12 load avg: 0.00 running (1): test_socket (3 hour 18 min)
3:20:42 load avg: 0.00 running (1): test_socket (3 hour 18 min)
3:21:12 load avg: 0.00 running (1): test_socket (3 hour 19 min)
3:21:42 load avg: 0.00 running (1): test_socket (3 hour 19 min)
3:22:12 load avg: 0.00 [467/467/2] test_socket worker non-zero exit code (Exit code 1)
beginning 6 repetitions
123456
..Warning -- Unraisable exception
Exception ignored in thread started by: <bound method ThreadableTest.clientRun of <test.test_socket.NetworkConnectionAttributesTest testMethod=testSourceAddress>>
Traceback (most recent call last):
File "/home/buildbot/buildarea/3.x.cstratak-RHEL7-x86_64.refleak/build/Lib/test/test_socket.py", line 406, in clientRun
self.clientTearDown()
File "/home/buildbot/buildarea/3.x.cstratak-RHEL7-x86_64.refleak/build/Lib/test/test_socket.py", line 5368, in clientTearDown
self.cli.close()
^^^^^^^^
AttributeError: 'NetworkConnectionAttributesTest' object has no attribute 'cli'
Timeout (3:20:00)!
Thread 0x00007f268deaf740 (most recent call first):
File "/home/buildbot/buildarea/3.x.cstratak-RHEL7-x86_64.refleak/build/Lib/socket.py", line 295 in accept
File "/home/buildbot/buildarea/3.x.cstratak-RHEL7-x86_64.refleak/build/Lib/test/test_socket.py", line 5373 in _justAccept
File "/home/buildbot/buildarea/3.x.cstratak-RHEL7-x86_64.refleak/build/Lib/unittest/case.py", line 589 in _callTestMethod
File "/home/buildbot/buildarea/3.x.cstratak-RHEL7-x86_64.refleak/build/Lib/unittest/case.py", line 636 in run
File "/home/buildbot/buildarea/3.x.cstratak-RHEL7-x86_64.refleak/build/Lib/unittest/case.py", line 692 in __call__
File "/home/buildbot/buildarea/3.x.cstratak-RHEL7-x86_64.refleak/build/Lib/unittest/suite.py", line 122 in run
File "/home/buildbot/buildarea/3.x.cstratak-RHEL7-x86_64.refleak/build/Lib/unittest/suite.py", line 84 in __call__
File "/home/buildbot/buildarea/3.x.cstratak-RHEL7-x86_64.refleak/build/Lib/unittest/suite.py", line 122 in run
File "/home/buildbot/buildarea/3.x.cstratak-RHEL7-x86_64.refleak/build/Lib/unittest/suite.py", line 84 in __call__
File "/home/buildbot/buildarea/3.x.cstratak-RHEL7-x86_64.refleak/build/Lib/unittest/suite.py", line 122 in run
File "/home/buildbot/buildarea/3.x.cstratak-RHEL7-x86_64.refleak/build/Lib/unittest/suite.py", line 84 in __call__
File "/home/buildbot/buildarea/3.x.cstratak-RHEL7-x86_64.refleak/build/Lib/test/support/testresult.py", line 146 in run
File "/home/buildbot/buildarea/3.x.cstratak-RHEL7-x86_64.refleak/build/Lib/test/support/__init__.py", line 1155 in _run_suite
File "/home/buildbot/buildarea/3.x.cstratak-RHEL7-x86_64.refleak/build/Lib/test/support/__init__.py", line 1282 in run_unittest
```
build: https://buildbot.python.org/all/#/builders/562/builds/902
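One hypothetical way to make the teardown robust is to look the attribute up instead of assuming it was ever assigned; a sketch with a stand-in class (`Client` is illustrative, not the test suite's actual code):

```python
import socket

class Client:
    # clientSetUp() may fail before self.cli is ever assigned, so the
    # teardown must not assume the attribute exists
    def clientTearDown(self):
        cli = getattr(self, "cli", None)
        if cli is not None:
            cli.close()

c = Client()
c.clientTearDown()           # no AttributeError: cli was never created

c.cli = socket.socket()
c.clientTearDown()           # closes the socket when it does exist
print(c.cli.fileno())        # -1: the underlying descriptor is closed
```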
<!-- gh-linked-prs -->
### Linked PRs
* gh-110401
* gh-110405
* gh-110406
<!-- /gh-linked-prs -->
| e37d4557c3de0476e76ca4b8a1cc8d2566b86c79 | 6e97a9647ae028facb392d12fc24973503693bd6 |
python/cpython | python__cpython-110394 | # The tty module is not tested
It only contains 4 functions (2 before 3.12). `tty.setcbreak()` is only used in `pydoc` (and that case does not seem to be tested), and `tty.setraw()` is only used in `pty`, where tests mock it. So the `tty` module is not tested, neither directly nor indirectly.
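A direct test would not even need a real terminal; a hypothetical smoke-test sketch that patches the `termios` functions which `tty` star-imports (the fake attribute list below is only an assumption about its shape):

```python
import tty
from unittest import mock

# fake tcgetattr() result: six flag/speed words plus a 32-entry cc list
attrs = [0, 0, 0, 0, 0, 0, list(range(32))]

with mock.patch.object(tty, "tcgetattr", return_value=attrs), \
     mock.patch.object(tty, "tcsetattr") as tcsetattr:
    tty.setraw(0)

tcsetattr.assert_called_once()
_, _, mode = tcsetattr.call_args[0]
# raw mode reads one byte at a time, without a timeout
assert mode[6][tty.VMIN] == 1 and mode[6][tty.VTIME] == 0
```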
<!-- gh-linked-prs -->
### Linked PRs
* gh-110394
* gh-110621
* gh-110622
* gh-110634
<!-- /gh-linked-prs -->
| 7f702b26dbbf24ab5ef2be5444ae652300733b5b | 92a9e980245156bf75ede0869f8ba9512e04d2eb |
python/cpython | python__cpython-110407 | # Issues from docs@python.org
Users who don't have a github login are instructed to report issues to docs@python.org.
I meant to do some triaging before converting the mails to GH issues, but always had something more important to do. So I'm posting a rough list to get more eyes on these. Each has a link to the list post.
(These weren't triaged; they're not necessarily good ideas.)
```[tasklist]
### Tasks
- [x] typo in socket docs -- [Hank Knox](https://mail.python.org/archives/list/docs@python.org/thread/T7BAG3ROH3YT3YTAWCUE7CMZ2FI6VO72/)
- [x] asyncio shield typo -- [Dyussenov Nuraly](https://mail.python.org/archives/list/docs@python.org/thread/6EM4DHHS3RMWSJWV2ZGKWTJRD4Z2GNIC/) - (https://github.com/python/cpython/pull/108427)
- [x] array -- data type range printed wrong -- [Shivansh Verma](https://mail.python.org/archives/list/docs@python.org/thread/B463UYLY2RBIG66LXM3TIPZDZIYWCN6X/) (#113708)
- [ ] EPUB error -- e.g. [Glenn Street](https://mail.python.org/archives/list/docs@python.org/thread/HSIILYH7LVK4JU5U5GWHHIP4PU34UTNN/), [Ranjith B](https://mail.python.org/archives/list/docs@python.org/thread/Y3K22WC7QQVIECZBLFZPYYFWOCAOPZ2K/), [sobi](https://mail.python.org/archives/list/docs@python.org/message/JELSGFZ6HC2SKHPHTX3LSGH627QVGE5N/)
- [x] Howto/regex omits the simplest {n} case -- [Derek Mead](https://mail.python.org/archives/list/docs@python.org/thread/7D56UZ3VSWCQVQPIL44LWTO42EJUBRKI/) -- (https://github.com/python/cpython/pull/111110)
- [x] Problem in help for str().rsplit() -- [Alan Brogan](https://mail.python.org/archives/list/docs@python.org/thread/MCSKKTHBKX6AIK25YI2INEDVONJZYRLH/) - (https://github.com/python/cpython/pull/113355)
- [x] replacement for deprecation -- [SUN Guonian](https://mail.python.org/archives/list/docs@python.org/thread/4AYZNSW3RB3DNT5C7JEE6RCESBWIKZTD/) (https://github.com/python/cpython/pull/112783)
- [x] Potential typo in Profiler docs? -- [2dfkit](https://mail.python.org/archives/list/docs@python.org/thread/DJNXVVP2UW4R6OAPRA6TE5WMHDC4NKZ6/)
- [x] timeit -- [Arthur Goldberg](https://mail.python.org/archives/list/docs@python.org/thread/CQON3MGDEN6CY5KFXXBY4FTCG2FIIDTM/) -- (GH-110407)
- [x] list of errata: 1 -- [Stephan Stiller](https://mail.python.org/archives/list/docs@python.org/thread/ZEDPZHRVGLYR4ZFZ5I3NBWVMHMX4WPGW/)
- [x] list of errata: 2 -- [Stephan Stiller](https://mail.python.org/archives/list/docs@python.org/thread/ZEDPZHRVGLYR4ZFZ5I3NBWVMHMX4WPGW/) -- (https://github.com/python/cpython/pull/119330)
- [x] list of errata: 3 -- [Stephan Stiller](https://mail.python.org/archives/list/docs@python.org/thread/ZEDPZHRVGLYR4ZFZ5I3NBWVMHMX4WPGW/) -- (https://github.com/python/cpython/pull/112777)
- [x] list of errata: 4 -- [Stephan Stiller](https://mail.python.org/archives/list/docs@python.org/thread/ZEDPZHRVGLYR4ZFZ5I3NBWVMHMX4WPGW/) - WONTFIX
- [x] list of errata: 5 -- [Stephan Stiller](https://mail.python.org/archives/list/docs@python.org/thread/ZEDPZHRVGLYR4ZFZ5I3NBWVMHMX4WPGW/) - WONTFIX
- [x] list of errata: 6 -- [Stephan Stiller](https://mail.python.org/archives/list/docs@python.org/thread/ZEDPZHRVGLYR4ZFZ5I3NBWVMHMX4WPGW/) -- (#120219)
- [x] list of errata: 7 -- [Stephan Stiller](https://mail.python.org/archives/list/docs@python.org/thread/ZEDPZHRVGLYR4ZFZ5I3NBWVMHMX4WPGW/)
- [x] list of errata: 8 -- [Stephan Stiller](https://mail.python.org/archives/list/docs@python.org/thread/ZEDPZHRVGLYR4ZFZ5I3NBWVMHMX4WPGW/) -- (https://github.com/python/cpython/pull/111574)
- [x] list of errata: 9 -- [Stephan Stiller](https://mail.python.org/archives/list/docs@python.org/thread/ZEDPZHRVGLYR4ZFZ5I3NBWVMHMX4WPGW/)
- [x] profiler docs inaccuracy -- [2dfkit](https://mail.python.org/archives/list/docs@python.org/thread/DJNXVVP2UW4R6OAPRA6TE5WMHDC4NKZ6/#DJNXVVP2UW4R6OAPRA6TE5WMHDC4NKZ6) -- (https://github.com/python/cpython/pull/112221)
- [ ] https://github.com/python/cpython/issues/112146
- [x] Getting tkinter -- [Steve Philcox](https://mail.python.org/archives/list/docs@python.org/message/JOQ3RQVG3A7ZYOWEVU7TAYLY2WSZ6TST/): This complaint is about how to install dependencies when building CPython; this is already covered by the devguide.
- [ ] https://github.com/python/cpython/issues/120302
- [x] Problems in Python tutorial -- [Todd Hoatson](https://mail.python.org/archives/list/docs@python.org/message/REO4QRDIHU5ML4BMNXPDONCKONIH6RN3/)
- [x] link from UserList docs to the collections classes list of methods -- [Seebs](https://mail.python.org/archives/list/docs@python.org/message/37ZRBCHCB37Y2KW73EBHXFGRIZHFCNUU/) - WONTFIX
- [x] confusing formatting of “`field`s” -- [Роман Филимонов](https://mail.python.org/archives/list/docs@python.org/message/G5AYM2N4LLB65BTPQZBX5BIMBXRGUPVP/): this is a common way of marking up the plural variant of something that's marked up using a monospaced font; not a bug
- [x] socket.makefile -- [Michael Gold](https://mail.python.org/archives/list/docs@python.org/thread/3VGLA6Q6QDYB7XIAHFIRAVW4TQSEAZGA/)
- [x] `pow` - “integral” is confusing to non-natives -- [marc rovira](https://mail.python.org/archives/list/docs@python.org/thread/HD47WPHMLKUTQEWEQOHBAGULUHN4ICPQ/) -- (https://github.com/python/cpython/pull/119688)
```
Feel free to tick off ones that aren't appropriate, and convert larger ones to their own issues.
<!-- gh-linked-prs -->
### Linked PRs
* gh-110407
* gh-110409
* gh-110410
* gh-110434
* gh-110435
* gh-110436
* gh-111018
* gh-111110
* gh-111204
* gh-111205
* gh-111206
* gh-111207
* gh-111208
* gh-111247
* gh-112221
* gh-112262
* gh-112263
* gh-111574
* gh-112264
* gh-112265
* gh-112777
* gh-112783
* gh-113353
* gh-113355
* gh-113379
* gh-113380
* gh-113708
* gh-118970
* gh-119150
* gh-119230
* gh-119324
* gh-119325
* gh-119326
* gh-119327
* gh-119330
* gh-119370
* gh-119371
* gh-119688
* gh-120206
* gh-120207
* gh-120219
* gh-120229
* gh-120230
<!-- /gh-linked-prs -->
| a973bf0f97e55ace9eab100f9eb95d7eedcb28ac | e37d4557c3de0476e76ca4b8a1cc8d2566b86c79 |
python/cpython | python__cpython-110379 | # `test_contextlib_async` produces several `RuntimeWarning`s
# Bug report
```
» ./python.exe -m test test_contextlib_async
Using random seed 908291980
0:00:00 load avg: 4.08 Run 1 test sequentially
0:00:00 load avg: 4.08 [1/1] test_contextlib_async
/Users/sobolev/Desktop/cpython/Lib/contextlib.py:701: RuntimeWarning: coroutine method 'aclose' of 'AsyncContextManagerTestCase.test_contextmanager_trap_second_yield.<locals>.whoo' was never awaited
async def __aenter__(self):
RuntimeWarning: Enable tracemalloc to get the object allocation traceback
/Users/sobolev/Desktop/cpython/Lib/contextlib.py:701: RuntimeWarning: coroutine method 'aclose' of 'AsyncContextManagerTestCase.test_contextmanager_trap_yield_after_throw.<locals>.whoo' was never awaited
async def __aenter__(self):
RuntimeWarning: Enable tracemalloc to get the object allocation traceback
/Users/sobolev/Desktop/cpython/Lib/contextlib.py:701: RuntimeWarning: coroutine method 'aclose' of 'TestAbstractAsyncContextManager.test_async_gen_propagates_generator_exit.<locals>.gen' was never awaited
async def __aenter__(self):
RuntimeWarning: Enable tracemalloc to get the object allocation traceback
== Tests result: SUCCESS ==
1 test OK.
Total duration: 121 ms
Total tests: run=58
Total test files: run=1/1
Result: SUCCESS
```
This can be fixed by moving away from the legacy `@_async_test` decorator and using the stable `IsolatedAsyncioTestCase` instead.
https://github.com/python/cpython/blob/6592976061a6580fee2ade3564f6497eb685ab67/Lib/test/test_contextlib_async.py#L14-L20
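A minimal illustrative sketch of the suggested direction (the test body here is invented for illustration, not taken from the suite): `IsolatedAsyncioTestCase` runs each test coroutine in its own event loop and finalizes pending async generators when that loop is closed, which is what avoids the "was never awaited" warnings.

```python
import unittest

class AsyncCMTest(unittest.IsolatedAsyncioTestCase):
    async def test_async_context_manager(self):
        class CM:
            async def __aenter__(self):
                return 42
            async def __aexit__(self, *exc_info):
                return False

        async with CM() as value:
            self.assertEqual(value, 42)
```

Run it like any other test case, e.g. via `python -m unittest`.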
<!-- gh-linked-prs -->
### Linked PRs
* gh-110379
* gh-110499
* gh-110500
* gh-110588
* gh-110589
* gh-110610
* gh-110611
<!-- /gh-linked-prs -->
| 6780d63ae5ed5ec98782606491a30b3bdb2f32b4 | 8e56d551ceef37a307280bcc5303bf69ccc9f9c1 |
python/cpython | python__cpython-110368 | # regrtest: python -m test -j1 --verbose3 should not replace sys.stdout
Currently, when a test crashes, its output is lost. This leads to bug reports like issue gh-110364 which contain no output, just an exit code :-( The problem is that run_single_test() replaces sys.stdout when the --verbose3 option is used. If Python runs normally, the output is ignored on success, or displayed on error.
When we run a worker process, we can instead make this decision in the main process: ignore the output on success, or display the output on error.
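The in-process buffering policy itself is simple; a rough sketch of the capture-and-replay pattern (names are illustrative, not regrtest's actual code — and note that a hard crash of the process still loses a buffer held in that process, which is why the issue proposes letting the parent decide):

```python
import io
import sys
from contextlib import redirect_stdout

def run_quietly(test_func):
    # buffer everything the test prints; discard it on success,
    # replay it on failure so failure reports keep their output
    buffer = io.StringIO()
    try:
        with redirect_stdout(buffer):
            test_func()
    except Exception:
        sys.stdout.write(buffer.getvalue())
        raise

def noisy_success():
    print("chatty but passing")

def noisy_failure():
    print("debug output worth keeping")
    raise ValueError("boom")

run_quietly(noisy_success)       # prints nothing
try:
    run_quietly(noisy_failure)   # replays the captured output, then raises
except ValueError:
    pass
```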
<!-- gh-linked-prs -->
### Linked PRs
* gh-110368
* gh-110387
* gh-111577
* gh-111589
* gh-111590
<!-- /gh-linked-prs -->
| 6592976061a6580fee2ade3564f6497eb685ab67 | 313aa861ce23e83ca64284d97c1dac234c9def7c |
python/cpython | python__cpython-110366 | # `termios.tpsetattr` does rewrite errors
# Bug report
This code is problematic: https://github.com/python/cpython/blob/bf4bc36069ef1ed4be4be2ae70404f78bff056d9/Modules/termios.c#L215-L224
It overwrites errors that happened earlier, showing the last error instead of the first one.
This goes against Python's semantics.
Here's the reproducer:
```python
>>> import termios
>>> termios.tcsetattr(0, 0, [0, 1, 2, '3', 4, 5, 6])
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: 'str' object cannot be interpreted as an integer
>>> termios.tcsetattr(0, 0, [object(), 1, 2, '3', 4, 5, 6])
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: 'str' object cannot be interpreted as an integer
```
The second error should say:
```python
>>> termios.tcsetattr(0, 0, [object(), 1, 2, '3', 4, 5, 6])
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: 'object' object cannot be interpreted as an integer
```
And while we are there we can also change `PyList_GetItem` to `PyList_GET_ITEM`, because:
- `term` is known to be a `list`
- the size of `term` is known to be 7
See check https://github.com/python/cpython/blob/bf4bc36069ef1ed4be4be2ae70404f78bff056d9/Modules/termios.c#L197-L201 before these lines.
The only problem is tests. I think that `termios` is not tested that much in our suite. We don't even have `test_termios.py` file. Should I add one?
Refs https://github.com/python/cpython/issues/110260
<!-- gh-linked-prs -->
### Linked PRs
* gh-110366
* gh-110389
* gh-110390
<!-- /gh-linked-prs -->
| 2bbbab212fb10b3aeaded188fb5d6c001fb4bf74 | 6592976061a6580fee2ade3564f6497eb685ab67 |
python/cpython | python__cpython-110350 | # Tkinter demo: show Tcl/Tk patchlevel instead of version
# Feature or enhancement
### Proposal:
When dealing with issue reports involving Tcl/Tk, it is often much more helpful to know what Tcl/Tk calls the “patchlevel” (e.g. 8.6.13) than what Tcl/Tk calls the “version” (e.g. 8.6), because only the former usually corresponds to individual Tcl/Tk releases (and bugs tend to be fixed between these releases).
The test program in `tkinter._test()` (as invoked by `python3.y -m tkinter`) currently shows the Tcl/Tk “version”. So if Tkinter users are consulting this demo to know what Tcl/Tk is on their system when reporting issues, then I believe it would be better for this demo to show the “patchlevel” instead.
### Has this already been discussed elsewhere?
This is a minor feature, which does not need previous discussion elsewhere
### Links to previous discussion of this feature:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-110350
* gh-114252
* gh-114253
<!-- /gh-linked-prs -->
| b8f29b1293f55e12e86a2a039b49b6f9f73851b7 | 8e31cdc9450b3e644d48954865568e162edad514 |
python/cpython | python__cpython-110336 | # test_asyncio.test_unix_events leaked "pymp-xk9bwcca" temporary file on ARM Raspbian 3.x
"pymp" files are created by the multiprocessing module.
ARM Raspbian 3.x:
```
0:35:39 load avg: 6.50 [350/467/1] test.test_asyncio.test_unix_events failed (env changed) -- running (2): test.test_concurrent_futures.test_wait (40.9 sec), test_peg_generator (1 min 54 sec)
Warning -- test.test_asyncio.test_unix_events leaked temporary files (1): pymp-xk9bwcca
```
build: https://buildbot.python.org/all/#/builders/424/builds/5093
<!-- gh-linked-prs -->
### Linked PRs
* gh-110336
* gh-110338
<!-- /gh-linked-prs -->
| 1337765225d7d593169205672e004f97e15237ec | 1de9406f9136e3952b849487f0151be3c669a3ea |
python/cpython | python__cpython-110334 | # `test_zlib` uses some very old `random` API
# Bug report
Here's the problematic code: https://github.com/python/cpython/blob/1465386720cd532a378a5cc1e6de9d96dd8fcc81/Lib/test/test_zlib.py#L513-L525
It always fails with `AttributeError`, because `random.WichmannHill()` does not exist.
This is some old compatibility class that was removed a long time ago:
```
The attributes random.whseed and random.__whseed have no meaning for
the new generator. Code using these attributes should switch to a
new class, random.WichmannHill which is provided for backward
compatibility and to make an alternate generator available.
```
So, I propose to simplify this test.
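A simplified version only needs a seeded generator from the current `random` API; one possible sketch (`random.Random.randbytes()` exists since Python 3.9):

```python
import random
import zlib

# a dedicated seeded Random instance: reproducible data, no global state,
# and no dependence on the long-removed WichmannHill class
rng = random.Random(1234)
data = rng.randbytes(64 * 1024)

compressed = zlib.compress(data)
assert zlib.decompress(compressed) == data      # round-trip survives
```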
<!-- gh-linked-prs -->
### Linked PRs
* gh-110334
* gh-110348
* gh-110349
<!-- /gh-linked-prs -->
| e9f2352b7b7503519790ee6f51c2e298cf390e75 | efd8c7a7c97aa411098a4bee0ef1d428337deebb |
python/cpython | python__cpython-112226 | # In _GUARD_TYPE_VERSION, can type_version be 0?
Currently `_GUARD_TYPE_VERSION` looks like this in Python/bytecodes.c:
```
op(_GUARD_TYPE_VERSION, (type_version/2, owner -- owner)) {
PyTypeObject *tp = Py_TYPE(owner);
assert(type_version != 0);
DEOPT_IF(tp->tp_version_tag != type_version);
}
```
My question is about that `assert`. As I split more opcodes into uops, I find some variations of this -- a few opcodes didn't have such an `assert`, a few others have it *after* the `DEOPT_IF` call. Are there really places where the cache can contain a `type_version` field that is zero? And if so, should those match a zero `tp_version_tag` in the type?
@markshannon @brandtbucher
<!-- gh-linked-prs -->
### Linked PRs
* gh-112226
<!-- /gh-linked-prs -->
| eb3c94ea669561a0dfacaca715d4b2723bb2c6f4 | 43b1c33204d125e256f7a0c3086ba547b71a105e |
python/cpython | python__cpython-110311 | # Heap Types In the Cross-Interpreter Data Registry are Improperly Shared
This can cause a weakref for a class to be decref'ed in the wrong interpreter (or even when the GIL isn't held). The solution is to have a separate registry for each interpreter for heap types. Static types would stay global.
<!-- gh-linked-prs -->
### Linked PRs
* gh-110311
* gh-110712
* gh-110714
<!-- /gh-linked-prs -->
| 80dc39e1dc2abc809f448cba5d2c5b9c1c631e11 | e561e9805854980a61967d07869b4ec4205b32c8 |
python/cpython | python__cpython-110320 | # empty string constant in f-string format_spec
# Bug report
### Bug description:
The f-string `f'{1:{name}}'` has an extra `Constant(value='')` in the format_spec part in 3.12, which is not part of the AST in 3.11. This also has an effect on the generated bytecode.
```python
import ast
import dis
print(ast.dump(ast.parse("f'{1:{name}}'"),indent=2))
dis.dis(compile("f'{1:{name}}'","<string>","exec"))
```
output (Python 3.12.0):
```python
Module(
body=[
Expr(
value=JoinedStr(
values=[
FormattedValue(
value=Constant(value=1),
conversion=-1,
format_spec=JoinedStr(
values=[
FormattedValue(
value=Name(id='name', ctx=Load()),
conversion=-1),
Constant(value='')]))]))], # <-- extra Constant
type_ignores=[])
0 0 RESUME 0
1 2 LOAD_CONST 0 (1)
4 LOAD_NAME 0 (name)
6 FORMAT_VALUE 0
8 LOAD_CONST 1 ('') # <-- extra LOAD_CONST
10 BUILD_STRING 2 # <-- extra BUILD_STRING
12 FORMAT_VALUE 4 (with format)
14 POP_TOP
16 RETURN_CONST 2 (None)
```
output (Python 3.11.3):
```python
Module(
body=[
Expr(
value=JoinedStr(
values=[
FormattedValue(
value=Constant(value=1),
conversion=-1,
format_spec=JoinedStr(
values=[
FormattedValue(
value=Name(id='name', ctx=Load()),
conversion=-1)]))]))],
type_ignores=[])
0 0 RESUME 0
1 2 LOAD_CONST 0 (1)
4 LOAD_NAME 0 (name)
6 FORMAT_VALUE 0
8 FORMAT_VALUE 4 (with format)
10 POP_TOP
12 LOAD_CONST 1 (None)
14 RETURN_VALUE
```
### CPython versions tested on:
3.12
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-110320
<!-- /gh-linked-prs -->
| 2cb62c6437fa07e08b4778f7ab9baa5f16ac01f2 | cc389ef627b2a486ab89d9a11245bef48224efb1 |
python/cpython | python__cpython-110318 | # test__xxinterpchannels leaks [21, 21, 21] references
```
./python -m test test__xxinterpchannels -R 3:3
(...)
test__xxinterpchannels leaked [21, 21, 21] references, sum=63
```
Regression introduced by: commit a8f5dab58daca9f01ec3c6f8c85e53329251b05d
```
commit a8f5dab58daca9f01ec3c6f8c85e53329251b05d
Author: Eric Snow <ericsnowcurrently@gmail.com>
Date: Mon Oct 2 14:47:41 2023 -0600
gh-76785: Module-level Fixes for test.support.interpreters (gh-110236)
* add RecvChannel.close() and SendChannel.close()
* make RecvChannel and SendChannel shareable
* expose ChannelEmptyError and ChannelNotEmptyError
Lib/test/support/interpreters.py | 30 +++++--
Lib/test/test_interpreters.py | 16 ++++
Modules/_xxinterpchannelsmodule.c | 185 +++++++++++++++++++++++++++++++++-----
3 files changed, 206 insertions(+), 25 deletions(-)
```
Example of build: https://buildbot.python.org/all/#/builders/320/builds/855
cc @ericsnowcurrently
<!-- gh-linked-prs -->
### Linked PRs
* gh-110318
<!-- /gh-linked-prs -->
| d23a2f988771f4abd771ab4274529dcbf60dae37 | 6741d5af32101c27c3f930bfc575a7e567f9bf20 |
python/cpython | python__cpython-110297 | # C API: Add PyUnicode_EqualToUTF8() function
# Feature or enhancement
There is a public `PyUnicode_CompareWithASCIIString()` function. Despite its name, it compares a Python string object with an ISO-8859-1 encoded C string. It returns -1, 0 or 1 and never sets an error.
There is a private `_PyUnicode_EqualToASCIIString()` function. It only works with ASCII encoded C strings and crashes in debug builds if the string is not ASCII. It returns 0 or 1 and never sets an error.
`_PyUnicode_EqualToASCIIString()` is more efficient than `PyUnicode_CompareWithASCIIString()` because, if the arguments are not equal, it can simply return false instead of determining which is larger. That was the main reason for introducing it. It is also more convenient, because you do not need to add `== 0` or `!= 0` after the call (and if it is not added, the code is difficult to read).
I propose to add the latter function to the public C API, but also extend it to support UTF-8 encoded C strings. While most use cases are ASCII-only, formally almost all C strings in the C API are UTF-8 encoded. `PyUnicode_FromString()` and `PyUnicode_AsUTF8AndSize()`, used to convert between Python and C strings, use the UTF-8 encoding. `PyTypeObject.tp_name`, `PyMethodDef.ml_name` and `PyDescrObject.d_name` are all UTF-8 encoded. `PyUnicode_CompareWithASCIIString()` cannot be used to compare a Python string with such names.
For PyASCIIObject objects the new function will be as fast as `_PyUnicode_EqualToASCIIString()`.
<!-- gh-linked-prs -->
### Linked PRs
* gh-110297
<!-- /gh-linked-prs -->
| eb50cd37eac47dd4dc71ab42d0582dfb6eac4515 | d1f7fae424d51b0374c8204599583c4a26c1a992 |
python/cpython | python__cpython-110277 | # `test_unicode` always fails during a PGO-optimised build (because it does not exist)
# Bug report
### Bug description:
As part of a PGO-optimised build, a selection of tests are run in order for the PGO build to take place (if I understand correctly, these tests are used to gather data on which functions in Python are performance-critical and should be optimised). Exactly which tests are run is hardcoded here:
https://github.com/python/cpython/blob/bb2e96f6f4d6c397c4eb5775a09262a207675577/Lib/test/libregrtest/pgo.py#L1-L51
`test_unicode` is specified as one of the tests to be run as part of this selection. However, `test_unicode` always fails during a PGO build. Why? Because it was renamed as `test_str` in #13172 (by @asqui). This hasn't been noticed until now because the PGO build is still marked as a "success" even if some tests fail during the analysis phase.
The fix for the immediate issue is simple: don't try to run `test_unicode` as part of the PGO-optimised build (it doesn't exist anymore); run `test_str` instead. Longer term, though, we might want to look at making the PGO build fail if it tries to run nonexistent tests. It's probably correct for the build to continue even if some tests fail during the analysis phase, but perhaps we could make an exception for this _specific kind_ of failure (attempting to run nonexistent tests).
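The longer-term guard could be as simple as resolving every hardcoded name before the run starts; a hypothetical sketch (the module names used below are placeholders, not the real PGO list):

```python
import importlib.util

def missing_pgo_tests(names):
    # report every hardcoded test module that cannot be resolved,
    # instead of letting the PGO task phase skip it silently
    return [name for name in names if importlib.util.find_spec(name) is None]

print(missing_pgo_tests(["json", "unittest"]))        # [] -- all resolvable
print(missing_pgo_tests(["test_unicode_gone"]))       # the stale entry
```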
Cc. @hugovk as the primary reviewer for #13172, and @vstinner as a libregrtest expert.
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Windows
<!-- gh-linked-prs -->
### Linked PRs
* gh-110277
* gh-110295
* gh-111950
<!-- /gh-linked-prs -->
| dddc757303c6512915ef79b9029213415f1c6f1b | bb2e96f6f4d6c397c4eb5775a09262a207675577 |
python/cpython | python__cpython-110299 | # Named tuple's _replace() method should raise TypeError for unexpected keyword arguments
# Bug report
When you call a function with incorrect keyword arguments, you get a TypeError. But it is not always so with the `_replace()` method of a named tuple class created by `collections.namedtuple()`.
```pyshell
>>> from collections import namedtuple
>>> P = namedtuple('P', 'x y')
>>> p = P(1, 2)
>>> p._replace(z=3)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/serhiy/py/cpython/Lib/collections/__init__.py", line 460, in _replace
raise ValueError(f'Got unexpected field names: {list(kwds)!r}')
ValueError: Got unexpected field names: ['z']
```
It is not even consistent with the constructor, which raises TypeError:
```pyshell
>>> P(x=1, y=2, z=3)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: P.__new__() got an unexpected keyword argument 'z'
```
I think that `_replace()` also should raise TypeError for unexpected keyword arguments.
cc @rhettinger
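Until such a change lands, the proposed behaviour can be emulated in a subclass; a hypothetical sketch:

```python
from collections import namedtuple

class P(namedtuple("P", "x y")):
    __slots__ = ()

    def _replace(self, **kwds):
        # validate before delegating, so bad names raise TypeError
        unexpected = set(kwds) - set(self._fields)
        if unexpected:
            raise TypeError(
                f"got unexpected field names: {sorted(unexpected)!r}")
        return super()._replace(**kwds)

p = P(1, 2)
assert p._replace(x=10) == P(10, 2)
try:
    p._replace(z=3)
except TypeError as exc:
    print(exc)        # got unexpected field names: ['z']
```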
<!-- gh-linked-prs -->
### Linked PRs
* gh-110299
<!-- /gh-linked-prs -->
| c74e9fb18917ceb287c3ed5be5d0c2a16a646a99 | da6760bdf5ed8ede203618d5118f4ceb2cb1652d |
python/cpython | python__cpython-110274 | # dataclasses.replace() should raise TypeError for all invalid or missed required keyword arguments
# Bug report
When you call a function with incorrect keyword arguments, or miss a required argument, you get a TypeError. But it is not always so with `dataclasses.replace()`. It raises a ValueError if a keyword argument for an InitVar field is missing, or if a keyword argument is given for a field declared with init=False.
```pyshell
>>> from dataclasses import *
>>> @dataclass
... class C:
... x: int
... y: InitVar[int]
... z: int = field(init=False, default=100)
...
>>> c = C(x=11, y=22)
>>> replace(c, x=1)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/serhiy/py/cpython3.12/Lib/dataclasses.py", line 1570, in replace
raise ValueError(f"InitVar {f.name!r} "
ValueError: InitVar 'y' must be specified with replace()
>>> replace(c, x=1, y=2, z=3)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/serhiy/py/cpython3.12/Lib/dataclasses.py", line 1563, in replace
raise ValueError(f'field {f.name} is declared with '
ValueError: field z is declared with init=False, it cannot be specified with replace()
```
It is not even consistent with constructors:
```pyshell
>>> C(x=1)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: C.__init__() missing 1 required positional argument: 'y'
>>> C(x=1, y=2, z=3)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: C.__init__() got an unexpected keyword argument 'z'
```
And it raises a TypeError for unexpected keyword arguments.
```pyshell
>>> replace(c, x=1, y=2, t=3)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/serhiy/py/cpython3.12/Lib/dataclasses.py", line 1579, in replace
return obj.__class__(**changes)
^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: C.__init__() got an unexpected keyword argument 't'
```
I think that `dataclasses.replace()` should raise TypeError in all these cases.
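Until then, the proposed semantics can be approximated with a thin wrapper (a sketch; `strict_replace` is a hypothetical name, and on versions where `replace()` already raises TypeError the wrapper simply passes it through):

```python
import dataclasses
from dataclasses import dataclass, field, InitVar

def strict_replace(obj, /, **changes):
    # re-raise the field-validation ValueErrors as TypeError, matching
    # how the generated __init__ itself reports bad arguments
    try:
        return dataclasses.replace(obj, **changes)
    except ValueError as exc:
        raise TypeError(str(exc)) from None

@dataclass
class C:
    x: int
    y: InitVar[int]
    z: int = field(init=False, default=100)

c = C(x=11, y=22)
try:
    strict_replace(c, x=1)          # InitVar 'y' is missing
except TypeError as exc:
    print(type(exc).__name__)       # TypeError
```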
<!-- gh-linked-prs -->
### Linked PRs
* gh-110274
<!-- /gh-linked-prs -->
| 5b9a3fd6a0ce3c347463e6192a59c15f5fcb0043 | bfe7e72522565f828f43c2591fea84a7981ee048 |
python/cpython | python__cpython-110272 | # Add tests for pickling and copying PyStructSequence objects
PyStructSequence supports pickling and copying, but there are no tests for this. There is only one trivial test, which checks that `__reduce__()` does not raise an exception or crash.
Explicit tests are needed for `copy.copy()`, `copy.deepcopy()` (which is effectively the same, but still), and `pickle.dumps()`/`pickle.loads()`.
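The missing tests are short to sketch; using `time.struct_time` (a PyStructSequence) as the guinea pig:

```python
import copy
import pickle
import time

st = time.gmtime(0)                          # a PyStructSequence instance

assert copy.copy(st) == st
assert copy.deepcopy(st) == st
for proto in range(pickle.HIGHEST_PROTOCOL + 1):
    assert pickle.loads(pickle.dumps(st, proto)) == st
```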
cc @XuehaiPan
<!-- gh-linked-prs -->
### Linked PRs
* gh-110272
* gh-110284
* gh-110285
<!-- /gh-linked-prs -->
| 2d4865d775123e8889c7a79fc49b4bf627176c4b | f1663a492e14c80c30cb9741fdc36fa221d5e30a |
python/cpython | python__cpython-110261 | # Missing error checks in `termios` module
# Bug report
There are multiple cases in the `termios` module where the return value of `PyList_SetItem` is not checked for `-1`. We need to fix this, and there are two options:
- `< 0` check
- `PyList_SET_ITEM` usage
There are two possible errors that `PyList_SetItem` can produce:
```c
if (!PyList_Check(op)) {
Py_XDECREF(newitem);
PyErr_BadInternalCall();
return -1;
}
if (!valid_index(i, Py_SIZE(op))) {
Py_XDECREF(newitem);
PyErr_SetString(PyExc_IndexError,
"list assignment index out of range");
return -1;
}
```
Each case needs an explanation.
https://github.com/python/cpython/blob/8c071373f12f325c54591fe990ec026184e48f8f/Modules/termios.c#L116-L124
Here `cc` is always a `list` and `i` goes from `0` to some other `int` value by `i++`, so it is safe to use `PyList_SET_ITEM`.
https://github.com/python/cpython/blob/8c071373f12f325c54591fe990ec026184e48f8f/Modules/termios.c#L129-L137
Here `cc` is a list, but `VMIN` and `VTIME` are 3rd party constants from `termios.h`. I guess it is safer to use explicit error check.
https://github.com/python/cpython/blob/8c071373f12f325c54591fe990ec026184e48f8f/Modules/termios.c#L140-L153
Here we need to check for a possible `PyLong_FromLong` error, but we can use `PyList_SET_ITEM`.
<!-- gh-linked-prs -->
### Linked PRs
* gh-110261
<!-- /gh-linked-prs -->
| 43baddc2b9557e06ca4f3427c0717bc3688ed3e4 | 254e30c487908a52a7545cea205aeaef5fbfeea4 |
python/cpython | python__cpython-110271 | # Multiline expression brackets with format specifiers don't work in f-strings
# Bug report
### Bug description:
In accordance with [PEP 701](https://peps.python.org/pep-0701/), the following code works:
```pycon
>>> x = 1
>>> f"___{
... x
... }___"
'___1___'
>>> f"___{(
... x
... )}___"
'___1___'
```
But the following fails:
```python
f"__{
x:d
}__"
```
This gives:
```
File "<stdin>", line 1
x:d
SyntaxError: unterminated f-string literal (detected at line 2)
```
Is this intended behaviour? This is not clarified in the PEP.
---
Similarly,
```python
f"""__{
x:d
}__"""
```
Gives:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ValueError: Invalid format specifier 'd
' for object of type 'int'
```
### CPython versions tested on:
3.12
### Operating systems tested on:
macOS
<!-- gh-linked-prs -->
### Linked PRs
* gh-110271
* gh-110396
<!-- /gh-linked-prs -->
| 3d5df54cdc1e946bd953bc9906da5abf78a48357 | 74208ed0c440244fb809d8acc97cb9ef51e888e3 |
python/cpython | python__cpython-110242 | # Missing error check in `_testinternalcapi`
# Bug report
While looking at https://github.com/python/cpython/pull/110238 I've noticed that https://github.com/python/cpython/blob/014aacda6239f0e33b3ad5ece343df66701804b2/Modules/_testinternalcapi.c#L678 does not handle errors properly.
I have a PR ready :)
<!-- gh-linked-prs -->
### Linked PRs
* gh-110242
* gh-110244
<!-- /gh-linked-prs -->
| 4596c76d1a7650fd4650c814dc1d40d664cd8fb4 | a8f5dab58daca9f01ec3c6f8c85e53329251b05d |
python/cpython | python__cpython-110238 | # Missing error checks in `_PyEval_MatchClass`
Several places in `_PyEval_MatchClass` call `PyList_Append` without checking the return value (e.g., https://github.com/python/cpython/blob/fc2cb86d210555d509debaeefd370d5331cd9d93/Python/ceval.c#L509C13-L509C26). However, `PyList_Append` can fail. It will only fail if we're out of memory or if we passed a non-list, so it's unlikely to come up in practice, but we should still check for errors.
cc @brandtbucher for pattern matching
<!-- gh-linked-prs -->
### Linked PRs
* gh-110238
* gh-110511
* gh-110512
<!-- /gh-linked-prs -->
| dd9d781da30aa3740e54c063a40413c542d78c25 | de2a4036cbfd5e41a5bdd2b81122b7765729af83 |
python/cpython | python__cpython-110258 | # PyStructSequence constructor ignores unknown field names
# Bug report
The PyStructSequence constructor takes two arguments: a tuple and a dict. The tuple specifies values for "visible" fields (i.e. what you get when interpreting the PyStructSequence as a tuple); the dict specifies values for fields that are accessible only by name (similar to `__slots__` attributes).
```pyshell
>>> import time
>>> time.struct_time((2023, 10, 2, 17, 50, 53, 0, 275, 0), {'tm_zone': 'GMT', 'tm_gmtoff': 0})
time.struct_time(tm_year=2023, tm_mon=10, tm_mday=2, tm_hour=17, tm_min=50, tm_sec=53, tm_wday=0, tm_yday=275, tm_isdst=0)
```
The problem is that all invalid names are silently ignored. This includes names of "visible" fields as well as typos.
```pyshell
>>> time.struct_time((0,)*9, {'tm_zone': 'GMT', 'tm_gmtoff': 0, 'tm_year': 2023, 'invalid': 1})
time.struct_time(tm_year=0, tm_mon=0, tm_mday=0, tm_hour=0, tm_min=0, tm_sec=0, tm_wday=0, tm_yday=0, tm_isdst=0)
```
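For comparison, pure-Python named tuples reject unknown field names with a `TypeError` — arguably the behavior one would expect here as well. A hedged illustration using `collections.namedtuple` (not the proposed fix itself):

```python
from collections import namedtuple

Point = namedtuple("Point", ["x", "y"])
try:
    Point(x=1, y=2, invalid=3)  # unknown name is rejected, not ignored
except TypeError as e:
    print(e)
```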
<!-- gh-linked-prs -->
### Linked PRs
* gh-110258
<!-- /gh-linked-prs -->
| 9561648f4a5d8486b67ee4bbe24a239b2a93212c | bf4bc36069ef1ed4be4be2ae70404f78bff056d9 |
python/cpython | python__cpython-110223 | # Add support of PyStructSequence in copy.replace()
# Feature or enhancement
### Proposal:
This issue is to make the concept of ["named tuple"](https://docs.python.org/3/glossary.html#term-named-tuple) support the `__replace__` protocol.
`collections.namedtuple` and `typing.NamedTuple` already support the `__replace__` protocol in:
- #108752
[`PyStructSequence`](https://docs.python.org/3/c-api/tuple.html#struct-sequence-objects)s are also [named tuples](https://docs.python.org/3/glossary.html#term-named-tuple) but they do not support the `__replace__` protocol yet.
It would be convenient if `PyStructSequence` were supported in `copy.replace()`.
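A minimal sketch of the `__replace__` protocol that `copy.replace()` dispatches to (`copy.replace()` itself is new in 3.13; the `replace` helper and `Version` class below are illustrative, not the actual implementation):

```python
class Version:
    """Toy immutable record implementing the __replace__ protocol."""
    def __init__(self, major, minor, micro):
        self.major, self.minor, self.micro = major, minor, micro

    def __replace__(self, /, **changes):
        kwargs = {"major": self.major, "minor": self.minor, "micro": self.micro}
        kwargs.update(changes)
        return type(self)(**kwargs)

def replace(obj, /, **changes):
    # Fallback mimicking copy.replace(): delegate to __replace__ if present.
    func = getattr(type(obj), "__replace__", None)
    if func is None:
        raise TypeError(f"replace() does not support {type(obj).__name__} objects")
    return func(obj, **changes)

v = replace(Version(3, 12, 0), minor=13)
print(v.major, v.minor, v.micro)  # 3 13 0
```

This issue asks for `PyStructSequence` types to grow the same `__replace__` hook that `collections.namedtuple` and `typing.NamedTuple` already gained.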
### Has this already been discussed elsewhere?
I have already discussed this feature proposal on Discourse
### Links to previous discussion of this feature:
- #108751
- https://discuss.python.org/t/generalize-replace-function/28511
- #109956
- #109174
- #109175
<!-- gh-linked-prs -->
### Linked PRs
* gh-110223
<!-- /gh-linked-prs -->
| 3bbe3b7c822091caac90c00ee937848bc4de80eb | 9561648f4a5d8486b67ee4bbe24a239b2a93212c |
python/cpython | python__cpython-110212 | # Make types classes that are generic at type time subscriptable at runtime
# Feature or enhancement
### Proposal:
`coroutine` and `generator` objects should be subscriptable at runtime, as they already are at type-checking time:
```python
>>> import types
>>> types.CoroutineType[None, None, int]
TypeError: type 'coroutine' is not subscriptable
```
https://github.com/python/typeshed/blob/31916d1e0c4c213cac031e2a448e5531206e3ab4/stdlib/types.pyi#L402
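The runtime machinery this asks for already exists as `types.GenericAlias`; the feature would essentially wire it up as `__class_getitem__` on these types. A hedged sketch (not the proposed patch):

```python
import types

# Constructing the parameterized alias manually already works today:
alias = types.GenericAlias(types.CoroutineType, (type(None), type(None), int))
print(alias.__origin__ is types.CoroutineType)  # True
print(alias.__args__)
# The request is for types.CoroutineType[None, None, int] to do this directly.
```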
### Has this already been discussed elsewhere?
This is a minor feature, which does not need previous discussion elsewhere
### Links to previous discussion of this feature:
https://github.com/python/typing/issues/1480 is where this stemmed from
<!-- gh-linked-prs -->
### Linked PRs
* gh-110212
<!-- /gh-linked-prs -->
| e7dafdc2240a8e4e45f53782c47120eb3fe37712 | b4bdf83cc67434235d9630c92c84a5261992b235 |
python/cpython | python__cpython-130933 | # test_multiprocessing_spawn.test_manager: _TestCondition hung (20 min timeout) on AMD64 RHEL8 3.x
```
File "/home/buildbot/buildarea/3.x.cstratak-RHEL8-x86_64/build/Lib/test/_test_multiprocessing.py", line 1426, in f
woken.release()
```
This frame comes from ``_TestCondition``.
AMD64 RHEL8 3.x:
```
0:04:43 load avg: 7.07 [464/467] test_zipfile passed -- running (2): test.test_multiprocessing_spawn.test_manager (1 min 12 sec), test_math (1 min 22 sec)
0:04:52 load avg: 6.14 [465/467] test_xmlrpc passed -- running (2): test.test_multiprocessing_spawn.test_manager (1 min 21 sec), test_math (1 min 30 sec)
0:05:19 load avg: 4.38 [466/467] test_math passed (1 min 57 sec) -- running (1): test.test_multiprocessing_spawn.test_manager (1 min 48 sec)
0:05:49 load avg: 2.66 running (1): test.test_multiprocessing_spawn.test_manager (2 min 18 sec)
0:06:19 load avg: 1.61 running (1): test.test_multiprocessing_spawn.test_manager (2 min 48 sec)
0:06:49 load avg: 0.97 running (1): test.test_multiprocessing_spawn.test_manager (3 min 18 sec)
(...)
0:22:19 load avg: 0.00 running (1): test.test_multiprocessing_spawn.test_manager (18 min 48 sec)
0:22:49 load avg: 0.00 running (1): test.test_multiprocessing_spawn.test_manager (19 min 18 sec)
0:23:19 load avg: 0.00 running (1): test.test_multiprocessing_spawn.test_manager (19 min 48 sec)
0:23:31 load avg: 0.00 [467/467/1] test.test_multiprocessing_spawn.test_manager worker non-zero exit code (Exit code 1)
Process Process-44:
Traceback (most recent call last):
File "/home/buildbot/buildarea/3.x.cstratak-RHEL8-x86_64/build/Lib/multiprocessing/process.py", line 314, in _bootstrap
self.run()
File "/home/buildbot/buildarea/3.x.cstratak-RHEL8-x86_64/build/Lib/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/home/buildbot/buildarea/3.x.cstratak-RHEL8-x86_64/build/Lib/test/_test_multiprocessing.py", line 1426, in f
woken.release()
File "/home/buildbot/buildarea/3.x.cstratak-RHEL8-x86_64/build/Lib/multiprocessing/managers.py", line 1059, in release
return self._callmethod('release')
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/buildbot/buildarea/3.x.cstratak-RHEL8-x86_64/build/Lib/multiprocessing/managers.py", line 840, in _callmethod
raise convert_to_error(kind, result)
multiprocessing.managers.RemoteError:
---------------------------------------------------------------------------
Traceback (most recent call last):
File "/home/buildbot/buildarea/3.x.cstratak-RHEL8-x86_64/build/Lib/multiprocessing/managers.py", line 263, in serve_client
self.id_to_local_proxy_obj[ident]
~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^
KeyError: '7f528ad142e0'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/buildbot/buildarea/3.x.cstratak-RHEL8-x86_64/build/Lib/multiprocessing/managers.py", line 265, in serve_client
raise ke
File "/home/buildbot/buildarea/3.x.cstratak-RHEL8-x86_64/build/Lib/multiprocessing/managers.py", line 259, in serve_client
obj, exposed, gettypeid = id_to_obj[ident]
~~~~~~~~~^^^^^^^
KeyError: '7f528ad142e0'
---------------------------------------------------------------------------
Timeout (0:20:00)!
Thread 0x00007f3635005740 (most recent call first):
File "/home/buildbot/buildarea/3.x.cstratak-RHEL8-x86_64/build/Lib/multiprocessing/popen_fork.py", line 27 in poll
File "/home/buildbot/buildarea/3.x.cstratak-RHEL8-x86_64/build/Lib/multiprocessing/popen_fork.py", line 43 in wait
File "/home/buildbot/buildarea/3.x.cstratak-RHEL8-x86_64/build/Lib/multiprocessing/process.py", line 149 in join
File "/home/buildbot/buildarea/3.x.cstratak-RHEL8-x86_64/build/Lib/unittest/case.py", line 597 in _callCleanup
File "/home/buildbot/buildarea/3.x.cstratak-RHEL8-x86_64/build/Lib/unittest/case.py", line 673 in doCleanups
File "/home/buildbot/buildarea/3.x.cstratak-RHEL8-x86_64/build/Lib/unittest/case.py", line 640 in run
File "/home/buildbot/buildarea/3.x.cstratak-RHEL8-x86_64/build/Lib/unittest/case.py", line 692 in __call__
File "/home/buildbot/buildarea/3.x.cstratak-RHEL8-x86_64/build/Lib/unittest/suite.py", line 122 in run
File "/home/buildbot/buildarea/3.x.cstratak-RHEL8-x86_64/build/Lib/unittest/suite.py", line 84 in __call__
File "/home/buildbot/buildarea/3.x.cstratak-RHEL8-x86_64/build/Lib/unittest/suite.py", line 122 in run
File "/home/buildbot/buildarea/3.x.cstratak-RHEL8-x86_64/build/Lib/unittest/suite.py", line 84 in __call__
File "/home/buildbot/buildarea/3.x.cstratak-RHEL8-x86_64/build/Lib/unittest/suite.py", line 122 in run
File "/home/buildbot/buildarea/3.x.cstratak-RHEL8-x86_64/build/Lib/unittest/suite.py", line 84 in __call__
File "/home/buildbot/buildarea/3.x.cstratak-RHEL8-x86_64/build/Lib/unittest/runner.py", line 240 in run
File "/home/buildbot/buildarea/3.x.cstratak-RHEL8-x86_64/build/Lib/test/support/__init__.py", line 1155 in _run_suite
```
``test_multiprocessing_spawn.test_manager`` passed when re-run.
build: https://buildbot.python.org/all/#/builders/185/builds/5160
<!-- gh-linked-prs -->
### Linked PRs
* gh-130933
* gh-130950
* gh-130951
<!-- /gh-linked-prs -->
| edd1eca336976b3431cf636aea87f08a40c94935 | 72e5b25efb580fb1f0fdfade516be90d90822164 |
python/cpython | python__cpython-110198 | # copy and deepcopy fail to copy ipaddress scope_id
# Bug report
### Bug description:
copy and deepcopy fail to copy ipaddress scope_id
```python3
import ipaddress, copy
a = ipaddress.IPv6Address('fe80::abc%def')
a.scope_id  # shows 'def'
c = copy.copy(a)
c.scope_id  # shows None, should show 'def'
d = copy.deepcopy(a)
d.scope_id  # shows None, should show 'def'
```
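Until this is fixed, one possible workaround (illustrative, relying on `str()` including the zone identifier, which it has done since `scope_id` was added in 3.9) is to round-trip through the string form:

```python
import ipaddress

a = ipaddress.IPv6Address('fe80::abc%def')
# str() includes the '%def' zone id, so re-parsing preserves scope_id:
c = ipaddress.IPv6Address(str(a))
print(c.scope_id)  # def
```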
### CPython versions tested on:
3.9, 3.11
### Operating systems tested on:
Linux, Windows
<!-- gh-linked-prs -->
### Linked PRs
* gh-110198
* gh-111190
* gh-111191
<!-- /gh-linked-prs -->
| 767f416feb551f495bacfff1e9ba1e6672c2f24e | b845a9e145b2f133270aa07836c7e6a385066c00 |
python/cpython | python__cpython-112604 | # ctypes array cannot be returned by value on ARM64
# Bug report
### Bug description:
Dear,
I'm wrapping a C-library with the ctypes module on Darwin (ARM64). I have a function that simply initializes a C-struct and returns the struct by value. The C-struct contains one simple C-array type. When using ArrayTypes in Python, it seems the struct is not returned properly.
On Windows and Linux platforms the same code does work as expected.
The strange thing is that if I lay out the struct explicitly with scalar types in Python (the struct layout should be identical), it does work as expected on all platforms tried.
To reproduce, I first create a simple .c file with the struct and the initializing function. The Python script below compiles it (with gcc) and then wraps the struct and function in three different ways using ctypes. I believe the three ways should be equivalent (and they are on Windows and Linux), but they are not on Darwin (ARM64).
I saved this file as ``ctypesexample.c``
```c
struct example {
    double coords[3];
};

struct example set_defaults() {
    struct example ex;
    ex.coords[0] = 1.0;
    ex.coords[1] = 2.0;
    ex.coords[2] = 3.0;
    return ex;
}
```
I saved this file as ``reproduce.py``
```python
import os
import subprocess
import ctypes

def compile():
    """Compile the c file using gcc."""
    subprocess.call(["gcc", "-c", "-Wall", "-Werror", "-fpic", "ctypesexample.c"])
    subprocess.call(["gcc", "-shared", "-o", "libctypesexample", "ctypesexample.o"])
    assert os.path.isfile('libctypesexample')

# Three ways to wrap the struct:
class Example1(ctypes.Structure):
    _fields_ = [
        ('coords', ctypes.c_double * 3)
    ]

class CoordType(ctypes.Array):
    _type_ = ctypes.c_double
    _length_ = 3

class Example2(ctypes.Structure):
    _fields_ = [('coords', CoordType)]

class Example3(ctypes.Structure):
    _fields_ = [
        ('x', ctypes.c_double),
        ('y', ctypes.c_double),
        ('z', ctypes.c_double)
    ]

def run():
    # Load the shared library
    so_location = os.path.abspath("libctypesexample")
    _lib = ctypes.cdll.LoadLibrary(so_location)
    _lib.set_defaults.restype = Example1
    o = _lib.set_defaults()
    print(f"Example1: ({o.coords[0]}, {o.coords[1]}, {o.coords[2]})")
    _lib.set_defaults.restype = Example2
    o = _lib.set_defaults()
    print(f"Example2: ({o.coords[0]}, {o.coords[1]}, {o.coords[2]})")
    _lib.set_defaults.restype = Example3
    o = _lib.set_defaults()
    print(f"Example3: ({o.x}, {o.y}, {o.z})")

if __name__ == "__main__":
    if not os.path.isfile('libctypesexample'):
        compile()
    run()
```
I tried this code on Linux (debian) -- both in Docker and WSL. And this works perfectly. Equivalent code in Windows is also working fine. In the past, I have run similar code as above on many different versions of Python (3.5-3.11) on both platforms for years (and still) without problems. For example on WSL with the following platform info
`uname_result(system='Linux', release='5.15.90.1-microsoft-standard-WSL2', version='#1 SMP Fri Jan 27 02:56:13 UTC 2023', machine='x86_64')`
and Python:
Python 3.10.6 (main, Mar 10 2023, 10:55:28) [GCC 11.3.0] on linux
I get this correct output:
```
Example1: (1.0, 2.0, 3.0)
Example2: (1.0, 2.0, 3.0)
Example3: (1.0, 2.0, 3.0)
```
While on **Darwin (ARM)** with the following platform info:
`system='Darwin', release='22.1.0', version='Darwin Kernel Version 22.1.0: Sun Oct 9 20:14:30 PDT 2022; root:xnu-8792.41.9~2/RELEASE_ARM64_T8103', machine='arm64')`
and Python
Python 3.11.4 (main, Jun 12 2023, 11:39:45) [Clang 14.0.3 (clang-1403.0.22.14.1)] on darwin
I get the following output:
```
Example1: (3.025693809e-314, 3.0256938013e-314, 2.1481083817e-314)
Example2: (3.025693809e-314, 3.0256938013e-314, 2.1481083817e-314)
Example3: (1.0, 2.0, 3.0)
```
In summary, the struct doesn't seem to be properly returned by value when array types are used, though with a literal listing of the types there seems to be no problem. So there is a workaround, but as we're using ctypes for automatic wrapping through reflection, and the code works properly on Windows and Linux, it would be quite some work to implement the workaround.
I hope my bug report finds you without any errors, and many thanks for giving the opportunity to report.
Best regards
### CPython versions tested on:
3.10, 3.11
### Operating systems tested on:
Linux, macOS, Windows
<!-- gh-linked-prs -->
### Linked PRs
* gh-112604
* gh-112766
* gh-112767
* gh-112818
* gh-112829
* gh-112830
* gh-112959
* gh-113167
* gh-113170
* gh-114753
* gh-114774
* gh-114775
* gh-118233
<!-- /gh-linked-prs -->
| 6644ca45cde9ca1b80513a90dacccfeea2d98620 | 79dad03747fe17634136209f1bcaf346a8c10617 |
python/cpython | python__cpython-110465 | # test_subprocess: test_pipesize_default() failed on s390x Fedora Clang 3.x buildbot
s390x Fedora Clang 3.x buildbot:
```
FAIL: test_pipesize_default (test.test_subprocess.ProcessTestCase.test_pipesize_default)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/dje/cpython-buildarea/3.x.edelsohn-fedora-z.clang/build/Lib/test/test_subprocess.py", line 763, in test_pipesize_default
self.assertEqual(
AssertionError: 65536 != 8192
```
build: https://buildbot.python.org/all/#/builders/3/builds/4728
<!-- gh-linked-prs -->
### Linked PRs
* gh-110465
* gh-110471
* gh-110472
<!-- /gh-linked-prs -->
| d023d4166b255023dac448305270350030101481 | a4baa9e8ac62cac3ea6363b15ea585b1998ea1f9 |
python/cpython | python__cpython-110181 | # `typing`: `_PickleUsingNameMixin` is dead code
# Bug report
### Bug description:
The `typing._PickleUsingNameMixin` class is dead code, and should be removed. In Python 3.11, it was used as a mixin class for `typing.TypeVar`, `typing.TypeVarTuple`, and `typing.ParamSpec`. However, all three classes are now implemented in C on Python 3.12+, and no longer make use of this mixin.
<!-- gh-linked-prs -->
### Linked PRs
* gh-110181
<!-- /gh-linked-prs -->
| d642c5bbf58b42c90053dc553885445d53f247fe | 15de493395c3251b8b82063bbe22a379792b9404 |
python/cpython | python__cpython-110194 | # `test_typing`: `test_many_weakrefs` takes many seconds
# Bug report
### Bug description:
`TypeVarTests.test_many_weakrefs` in `test_typing` takes far longer than any other test in `test_typing` if you're using a debug build of CPython, slowing down `test_typing` considerably:
```
>python Lib/test/test_typing.py --durations 5
Running Debug|x64 interpreter...
.............................................................................................................................................................................................................................................s.....................................................................................................................................................................................................................................................................................................................................................................
Slowest test durations
----------------------------------------------------------------------
5.090s test_many_weakrefs (__main__.TypeVarTests.test_many_weakrefs)
0.206s test_variadic_parameters (__main__.GenericAliasSubstitutionTests.test_variadic_parameters)
0.082s test_two_parameters (__main__.GenericAliasSubstitutionTests.test_two_parameters)
0.057s test_special_attrs (__main__.SpecialAttrsTests.test_special_attrs)
0.044s test_var_substitution (__main__.TypeVarTupleTests.test_var_substitution)
----------------------------------------------------------------------
Ran 594 tests in 6.714s
OK (skipped=1)
```
The test was added in #108517, as a regression test for #108295.
The slowdown caused by this test is much less pronounced if you use a PGO-optimised non-debug build, but the test still takes much longer than any other test in `test_typing` on my machine:
```
>python Lib/test/test_typing.py --durations 5
Running PGUpdate|x64 interpreter...
.............................................................................................................................................................................................................................................s.....................................................................................................................................................................................................................................................................................................................................................................
Slowest test durations
----------------------------------------------------------------------
0.379s test_many_weakrefs (__main__.TypeVarTests.test_many_weakrefs)
0.026s test_variadic_parameters (__main__.GenericAliasSubstitutionTests.test_variadic_parameters)
0.014s test_two_parameters (__main__.GenericAliasSubstitutionTests.test_two_parameters)
0.007s test_etree (__main__.UnionTests.test_etree)
0.004s test_var_substitution (__main__.TypeVarTupleTests.test_var_substitution)
----------------------------------------------------------------------
Ran 594 tests in 0.610s
OK (skipped=1)
```
The test currently attempts to create 100,000 TypeVar weakrefs, 100,000 ParamSpec weakrefs and 100,000 TypeVarTuple weakrefs:
https://github.com/python/cpython/blob/038c3564fb4ac6439ff0484e6746b0790794f41b/Lib/test/test_typing.py#L547-L555
@JelleZijlstra, reckon we could maybe make that number a little smaller? :)
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Windows
<!-- gh-linked-prs -->
### Linked PRs
* gh-110194
* gh-110224
<!-- /gh-linked-prs -->
| 732ad44cec971be5255b1accbac6555d3615c2bf | 8d92b6eff3bac45e7d4871c46c4511218b9b685a |
python/cpython | python__cpython-110172 | # `libregrtest` should always set `random.seed`
# Feature or enhancement
While working on https://github.com/python/cpython/issues/110160 I've noticed that it is rather hard to reproduce random test failures. So, I want to propose a new feature / fix for that.
First of all, there's existing prior work of @vstinner who added `--randseed` flag.
Right now it is used together with `-r` to randomize the test order and to seed `random`.
I propose to:
- Split `--randseed` into two options: test order randomization and `random.seed` usage
- Let's keep `-r` as-is
- Let's add a `--no-use-randseed` flag to disable setting `random.seed`, and always seed random by default
- Always print the current random seed so it can be reused in later runs
Example:
```
» ./python.exe -m test test_regrtest
Using random seed 65906482
0:00:00 load avg: 1.55 Run 1 test sequentially
0:00:00 load avg: 1.55 [1/1] test_regrtest
== Tests result: SUCCESS ==
1 test OK.
Total duration: 12.7 sec
Total tests: run=102 skipped=2
Total test files: run=1/1
Result: SUCCESS
```
Basically, this is how https://github.com/pytest-dev/pytest-randomly works.
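A rough sketch of the proposed default behavior (the function name and seed range below are illustrative, not the actual libregrtest code):

```python
import random
import secrets

def setup_random_seed(seed=None):
    """Pick a fresh seed when none is given, announce it, and seed random."""
    if seed is None:
        seed = secrets.randbelow(2**32)
    print(f"Using random seed {seed}")
    random.seed(seed)
    return seed

# Printing the seed makes a failing run reproducible after the fact:
seed = setup_random_seed()
first = random.random()
random.seed(seed)          # re-seeding replays the same sequence
assert random.random() == first
```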
I have a PR ready.
<!-- gh-linked-prs -->
### Linked PRs
* gh-110172
<!-- /gh-linked-prs -->
| 1465386720cd532a378a5cc1e6de9d96dd8fcc81 | 5b9a3fd6a0ce3c347463e6192a59c15f5fcb0043 |
python/cpython | python__cpython-110413 | # test_socket: testCmsgTrunc0() deadlock on ARM Raspbian 3.x
test_socket adds a lock on addCleanup:
```py
class ThreadSafeCleanupTestCase:
    """Subclass of unittest.TestCase with thread-safe cleanup methods.

    This subclass protects the addCleanup() and doCleanups() methods
    with a recursive lock.
    """

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self._cleanup_lock = threading.RLock()

    def addCleanup(self, *args, **kwargs):
        with self._cleanup_lock:
            return super().addCleanup(*args, **kwargs)

    def doCleanups(self, *args, **kwargs):
        with self._cleanup_lock:
            return super().doCleanups(*args, **kwargs)
```
Problem: what happens if a thread calls addCleanup() while the main thread is calling doCleanups()? Well, **a deadlock**.
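The failure mode can be shown in a few lines: `threading.RLock` is reentrant only within a single thread, so a second thread (the one calling `addCleanup()`) blocks while the main thread holds the lock inside `doCleanups()`. A simplified sketch with a hypothetical worker, not the test itself:

```python
import threading

lock = threading.RLock()
lock.acquire()                  # main thread holds the lock, like doCleanups()

results = []
def worker():                   # plays the thread calling addCleanup()
    results.append(lock.acquire(timeout=0.5))

t = threading.Thread(target=worker)
t.start()
t.join()
print(results)  # [False] -- the worker could not take the lock
lock.release()
# If doCleanups() instead join()s the worker while still holding the lock,
# neither side can make progress: the deadlock seen on the buildbot.
```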
ARM Raspbian 3.x:
```
0:40:31 load avg: 3.05 [337/467/1] test_socket worker non-zero exit code (Exit code 1) -- running (1): test_math (1 min 4 sec)
Timeout (0:40:00)!
Thread 0xf5dea440 (most recent call first):
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/test/test_socket.py", line 230 in addCleanup
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/test/test_socket.py", line 3576 in newFDs
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/test/test_socket.py", line 3611 in createAndSendFDs
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/test/test_socket.py", line 3841 in _testCmsgTrunc0
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/test/test_socket.py", line 402 in clientRun
Thread 0xf7acb040 (most recent call first):
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/threading.py", line 348 in wait
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/threading.py", line 648 in wait
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/unittest/case.py", line 597 in _callCleanup
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/unittest/case.py", line 673 in doCleanups
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/test/test_socket.py", line 235 in doCleanups
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/unittest/case.py", line 640 in run
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/unittest/case.py", line 692 in __call__
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/unittest/suite.py", line 122 in run
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/unittest/suite.py", line 84 in __call__
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/unittest/suite.py", line 122 in run
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/unittest/suite.py", line 84 in __call__
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/unittest/suite.py", line 122 in run
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/unittest/suite.py", line 84 in __call__
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/unittest/runner.py", line 240 in run
File "/var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/Lib/test/support/__init__.py", line 1155 in _run_suite
...
```
build: https://buildbot.python.org/all/#/builders/424/builds/5065
<!-- gh-linked-prs -->
### Linked PRs
* gh-110413
* gh-110416
* gh-110423
* gh-110424
* gh-110427
* gh-110440
<!-- /gh-linked-prs -->
| 0db2f1475e6539e1954e1f8bd53e005c3ecd6a26 | 318f5df27109ff8d2519edefa771920a0ec62b92 |
python/cpython | python__cpython-110331 | # test_gdb: test_pycfunction_fastcall_keywords() failed on PPC64LE Fedora Stable Clang 3.x
PPC64LE Fedora Stable Clang 3.x:
```
======================================================================
FAIL: test_pycfunction_fastcall_keywords (test.test_gdb.test_cfunction_full.CFunctionFullTests.test_pycfunction_fastcall_keywords) [_testcapi.meth_fastcall_keywords]
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/buildbot/buildarea/3.x.cstratak-fedora-stable-ppc64le.clang/build/Lib/test/test_gdb/test_cfunction.py", line 67, in check_pycfunction
self.check(func_name, cmd)
File "/home/buildbot/buildarea/3.x.cstratak-fedora-stable-ppc64le.clang/build/Lib/test/test_gdb/test_cfunction_full.py", line 32, in check
self.assertRegex(gdb_output, regex)
AssertionError: Regex didn't match: '#(1|2)\\ <built\\-in\\ method\\ meth_fastcall_keywords' not found in 'Breakpoint 1 (meth_fastcall_keywords) pending.\n
[Thread debugging using libthread_db enabled]\n
Using host libthread_db library "/lib64/libthread_db.so.1".\n
\n
Breakpoint 1, meth_fastcall_keywords (self=<module at remote 0x7fffea3948f0>, args=, nargs=0, kwargs=0x0) at ./Modules/_testcapimodule.c:2114\n
2114\t PyObject *pyargs = _fastcall_to_tuple(args, nargs);\n
Unable to locate python frame\n
'
Stdout:
test call: _testcapi.meth_fastcall_keywords()
======================================================================
FAIL: test_pycfunction_varargs (test.test_gdb.test_cfunction_full.CFunctionFullTests.test_pycfunction_varargs) [_testcapi.meth_varargs]
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/buildbot/buildarea/3.x.cstratak-fedora-stable-ppc64le.clang/build/Lib/test/test_gdb/test_cfunction.py", line 67, in check_pycfunction
self.check(func_name, cmd)
File "/home/buildbot/buildarea/3.x.cstratak-fedora-stable-ppc64le.clang/build/Lib/test/test_gdb/test_cfunction_full.py", line 32, in check
self.assertRegex(gdb_output, regex)
AssertionError: Regex didn't match: '#(1|2)\\ <built\\-in\\ method\\ meth_varargs' not found in 'Breakpoint 1 (meth_varargs) pending.\n
[Thread debugging using libthread_db enabled]\n
Using host libthread_db library "/lib64/libthread_db.so.1".\n
\n
Breakpoint 1, meth_varargs (self=<module at remote 0x7fffea394890>, args=()) at ./Modules/_testcapimodule.c:2067\n
2067\t return Py_BuildValue("NO", _null_to_none(self), args);\n
Unable to locate python frame\n
'
Stdout:
test call: _testcapi.meth_varargs()
----------------------------------------------------------------------
```
configure:
```
./configure --prefix '$(PWD)/target' --with-pydebug
```
test.pythoninfo:
```
CC.version: gcc (GCC) 13.2.1 20230728 (Red Hat 13.2.1-1)
libregrtests.build_info: debug
sysconfig[PY_CFLAGS]: -fno-strict-overflow -Wsign-compare -g -Og -Wall
sysconfig[PY_CFLAGS_NODIST]: -std=c11 -Wextra -Wno-unused-parameter -Wno-missing-field-initializers -Wstrict-prototypes -Werror=implicit-function-declaration -fvisibility=hidden -I./Include/internal
sysconfig[PY_STDMODULE_CFLAGS]: -fno-strict-overflow -Wsign-compare -g -Og -Wall -std=c11 -Wextra -Wno-unused-parameter -Wno-missing-field-initializers -Wstrict-prototypes -Werror=implicit-function-declaration -fvisibility=hidden -I./Include/internal -I. -I./Include
```
build: https://buildbot.python.org/all/#/builders/435/builds/3683
<!-- gh-linked-prs -->
### Linked PRs
* gh-110331
<!-- /gh-linked-prs -->
| 1de9406f9136e3952b849487f0151be3c669a3ea | 1465386720cd532a378a5cc1e6de9d96dd8fcc81 |