| repo | instance_id | problem_statement | merge_commit | base_commit |
|---|---|---|---|---|
python/cpython | python__cpython-109307 | # broken URL in the "Using Python on Unix platforms" documentation page
# Documentation
There are two issues in the "Using Python on Unix platforms" documentation page: https://docs.python.org/3.13/using/unix.html
1. The link for Fedora users is broken. The correct URL is https://docs.fedoraproject.org/en-US/package-maintainers/Packaging_Tutorial_GNU_Hello/
2. The slackbook.org URL redirects from http to https. The www. prefix is unnecessary and has also been removed.
<!-- gh-linked-prs -->
### Linked PRs
* gh-109307
* gh-109477
* gh-109478
<!-- /gh-linked-prs -->
| 0b38ce440bd76b3d25b6d042ee9613841fb4a947 | e218e5022eef369573808a4f8dda9aeeab663750 |
python/cpython | python__cpython-109533 | # Compiler warnings on string comparisons in _testcapi
```
./Modules/_testcapimodule.c:226:18: warning: result of comparison against a string literal is unspecified (use an explicit string comparison function instead) [-Wstring-compare]
assert(v != UNINITIALIZED_PTR);
^ ~~~~~~~~~~~~~~~~~
/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/usr/include/assert.h:99:25: note: expanded from macro 'assert'
(__builtin_expect(!(e), 0) ? __assert_rtn(__func__, __ASSERT_FILE_NAME, __LINE__, #e) : (void)0)
^
./Modules/_testcapimodule.c:238:14: warning: result of comparison against a string literal is unspecified (use an explicit string comparison function instead) [-Wstring-compare]
assert(k == UNINITIALIZED_PTR);
^ ~~~~~~~~~~~~~~~~~
/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/usr/include/assert.h:99:25: note: expanded from macro 'assert'
(__builtin_expect(!(e), 0) ? __assert_rtn(__func__, __ASSERT_FILE_NAME, __LINE__, #e) : (void)0)
^
./Modules/_testcapimodule.c:239:14: warning: result of comparison against a string literal is unspecified (use an explicit string comparison function instead) [-Wstring-compare]
assert(v == UNINITIALIZED_PTR);
^ ~~~~~~~~~~~~~~~~~
/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/usr/include/assert.h:99:25: note: expanded from macro 'assert'
(__builtin_expect(!(e), 0) ? __assert_rtn(__func__, __ASSERT_FILE_NAME, __LINE__, #e) : (void)0)
^
4 warnings generated.
./Modules/_testcapi/dict.c:289:16: warning: result of comparison against a string literal is unspecified (use an explicit string comparison function instead) [-Wstring-compare]
assert(key == UNINITIALIZED_PTR);
^ ~~~~~~~~~~~~~~~~~
/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/usr/include/assert.h:99:25: note: expanded from macro 'assert'
(__builtin_expect(!(e), 0) ? __assert_rtn(__func__, __ASSERT_FILE_NAME, __LINE__, #e) : (void)0)
^
./Modules/_testcapi/dict.c:290:18: warning: result of comparison against a string literal is unspecified (use an explicit string comparison function instead) [-Wstring-compare]
assert(value == UNINITIALIZED_PTR);
^ ~~~~~~~~~~~~~~~~~
/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/usr/include/assert.h:99:25: note: expanded from macro 'assert'
(__builtin_expect(!(e), 0) ? __assert_rtn(__func__, __ASSERT_FILE_NAME, __LINE__, #e) : (void)0)
^
2 warnings generated.
./Modules/_testcapi/exceptions.c:129:17: warning: result of comparison against a string literal is unspecified (use an explicit string comparison function instead) [-Wstring-compare]
assert(type != UNINITIALIZED_PTR);
^ ~~~~~~~~~~~~~~~~~
/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/usr/include/assert.h:99:25: note: expanded from macro 'assert'
(__builtin_expect(!(e), 0) ? __assert_rtn(__func__, __ASSERT_FILE_NAME, __LINE__, #e) : (void)0)
^
./Modules/_testcapi/exceptions.c:130:18: warning: result of comparison against a string literal is unspecified (use an explicit string comparison function instead) [-Wstring-compare]
assert(value != UNINITIALIZED_PTR);
^ ~~~~~~~~~~~~~~~~~
/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/usr/include/assert.h:99:25: note: expanded from macro 'assert'
(__builtin_expect(!(e), 0) ? __assert_rtn(__func__, __ASSERT_FILE_NAME, __LINE__, #e) : (void)0)
^
./Modules/_testcapi/exceptions.c:131:15: warning: result of comparison against a string literal is unspecified (use an explicit string comparison function instead) [-Wstring-compare]
assert(tb != UNINITIALIZED_PTR);
^ ~~~~~~~~~~~~~~~~~
/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/usr/include/assert.h:99:25: note: expanded from macro 'assert'
(__builtin_expect(!(e), 0) ? __assert_rtn(__func__, __ASSERT_FILE_NAME, __LINE__, #e) : (void)0)
```
<!-- gh-linked-prs -->
### Linked PRs
* gh-109533
* gh-109558
<!-- /gh-linked-prs -->
| ed582a2ed980efba2d0da365ae37bff4a2b99873 | beb5ec5817b645562ebbdd59f25683a93061c32c |
python/cpython | python__cpython-109467 | # Add ipv6_mapped property to IPv4Address
# Feature or enhancement
### Proposal:
Proposing an `ipv6_mapped` property for `IPv4Address`. This is the other direction of the `ipv4_mapped` property that already exists on `IPv6Address`. It can be useful when you need an IPv6 representation of an IPv4 address.
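On versions without the proposed property, its result can already be computed from existing `ipaddress` pieces. A minimal sketch (the `ipv6_mapped` function name here mirrors the proposal but is a hypothetical stand-in, not stdlib API):

```python
from ipaddress import IPv4Address, IPv6Address

# Sketch of what the proposed property would return: the IPv4-mapped
# IPv6 address ::ffff:a.b.c.d (RFC 4291, section 2.5.5.2). The mapped
# form is the 32-bit IPv4 value OR-ed under a 0xFFFF prefix at bit 32.
def ipv6_mapped(v4: IPv4Address) -> IPv6Address:
    return IPv6Address(0xFFFF00000000 | int(v4))

mapped = ipv6_mapped(IPv4Address("192.168.0.1"))
print(mapped)
# The existing IPv6Address.ipv4_mapped property round-trips back:
print(mapped.ipv4_mapped)  # 192.168.0.1
```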
### Has this already been discussed elsewhere?
I have already discussed this feature proposal on Discourse
### Links to previous discussion of this feature:
https://discuss.python.org/t/ipv4-to-ipv6-mapped-address/31464
<!-- gh-linked-prs -->
### Linked PRs
* gh-109467
<!-- /gh-linked-prs -->
| ba8aa1fd3735aa3dd13a36ad8f059a422d25ff37 | 24b5cbd3dce3fe37cdc787ccedd1e73a4f8cfc3c |
python/cpython | python__cpython-109462 | # Update logging library module lock to use context manager to acquire/release lock.
# Feature or enhancement
### Proposal:
The current implementation relies on both `_acquireLock()` and `_releaseLock()` being called in pairs; otherwise a lock may never be released:
```python
def _acquireLock():
"""
Acquire the module-level lock for serializing access to shared data.
This should be released with _releaseLock().
"""
if _lock:
try:
_lock.acquire()
except BaseException:
_lock.release()
raise
def _releaseLock():
"""
Release the module-level lock acquired by calling _acquireLock().
"""
if _lock:
_lock.release()
```
The majority of `_acquireLock()` call sites manually add a try/finally block to ensure that the lock is released if an exception is raised; some call sites have no such protection at all.
The proposal is to turn `_acquireLock()` into a context manager that handles acquiring and releasing automatically, rather than requiring try/finally blocks everywhere the function is called.
For example,
usage before:
```python
_acquireLock()
try:
...
finally:
_releaseLock()
```
proposed usage:
```python
with _acquireLock():
...
```
### Has this already been discussed elsewhere?
This is a minor feature, which does not need previous discussion elsewhere
### Links to previous discussion of this feature:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-109462
<!-- /gh-linked-prs -->
| 74723e11109a320e628898817ab449b3dad9ee96 | cc54bcf17b5b5f7681f52baf3acef75b995fa1fd |
python/cpython | python__cpython-109436 | # Add a page to the documentation listing stdlib modules with a command-line interface (CLI)
While reviewing PR #109152, I discovered that the sqlite3 module has a command-line interface. Maybe I knew and then forgot.
Is there a documentation page listing all modules with a command-line interface (CLI)? I don't think so.
By searching for ``main(`` pattern and for ``__main__.py`` files, so far, I found:
* ast: https://docs.python.org/dev/library/ast.html#command-line-usage
* asyncio: https://docs.python.org/dev/library/asyncio.html just one sentence, "You can experiment with an asyncio concurrent context in the REPL".
* base64: shhh, it's a secret, it's not documented!
* calendar: https://docs.python.org/dev/library/calendar.html#command-line-usage
* compileall: https://docs.python.org/dev/library/compileall.html#command-line-use
* ensurepip: https://docs.python.org/dev/library/ensurepip.html#command-line-interface
* gzip: https://docs.python.org/dev/library/gzip.html#command-line-interface
* idlelib: not sure if it's the same as: https://docs.python.org/dev/library/idle.html#command-line-usage
* json.tool: https://docs.python.org/dev/library/json.html#module-json.tool
* pdb: https://docs.python.org/3/library/pdb.html
* profile: https://docs.python.org/dev/library/profile.html#instant-user-s-manual
* py_compile: https://docs.python.org/dev/library/py_compile.html#command-line-interface
* quopri: (missing)
* site: https://docs.python.org/dev/library/site.html#command-line-interface
* sqlite3 https://docs.python.org/3/library/sqlite3.html#command-line-interface
* tabnanny: https://docs.python.org/dev/library/tabnanny.html
* tarfile: https://docs.python.org/dev/library/tarfile.html#command-line-interface
* timeit: https://docs.python.org/dev/library/timeit.html#command-line-interface
* tokenize: https://docs.python.org/dev/library/tokenize.html#command-line-usage
* trace: https://docs.python.org/dev/library/trace.html#command-line-usage
* turtledemo: https://docs.python.org/3/library/turtle.html#module-turtledemo
* uuid: https://docs.python.org/dev/library/uuid.html#command-line-example
* unittest: https://docs.python.org/dev/library/unittest.html#command-line-interface
* venv: https://docs.python.org/dev/library/venv.html#creating-virtual-environments
* webbrowser: https://docs.python.org/3.14/library/webbrowser.html
* zipapp: https://docs.python.org/dev/library/zipapp.html#command-line-interface
* zipfile: https://docs.python.org/dev/library/zipfile.html#command-line-interface
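Each of these CLIs is reached by running the module as a script with `python -m <module>`. As a quick illustration of the pattern, here is one of the documented ones (`json.tool`) driven programmatically:

```python
import subprocess
import sys

# json.tool pretty-prints JSON read from stdin; running it through
# "python -m json.tool" is the same entry point the CLI docs describe.
proc = subprocess.run(
    [sys.executable, "-m", "json.tool"],
    input='{"b": 1, "a": 2}',
    capture_output=True,
    text=True,
    check=True,
)
print(proc.stdout)
```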
Some modules include self-tests in their main() function. I dislike that and would suggest removing them, or at least moving them into the test suite.
* ctypes.util: self-test
* curses.textpad: demo
* curses.has_key: self-test?
* ``dbm.__init__``: get the DB type... but this CLI is not easily discoverable, since it lives in the ``__init__`` module!
* getopt: self-test
* heapq: self-test
* shlex: print tokens. Is it really useful?
* imaplib: self-test, "test the IMAP4_stream class"
* modulefinder: self-test
* netrc: just parse ``~/.netrc`` if available, and then display nothing... not very useful
* random: benchmark
* smtplib: "Test the sendmail method", try to send an email to localhost SMTP server.
* symtable: self-test. parse itself (symtable.py)
* turtle: it's a demo. Looks more like an easter egg or advanced example than a useful CLI.
* textwrap: self-test
* tkinter: self-test
* wsgiref.simple_server: self-test, run a server, open itself in a browser, stop the server.
* xml.sax.expatreader: Shakespeare easter egg.
* xml.sax.xmlreader: self-test.
* xmlrpc.client + xmlrpc.server: simple demo.
<!-- gh-linked-prs -->
### Linked PRs
* gh-109436
<!-- /gh-linked-prs -->
| 59f32a785fbce76a0fa02fe6d671813057714a0a | afa7b0d743d9b7172d58d1e4c28da8324b9be2eb |
python/cpython | python__cpython-109428 | # regrtest: "Cannot read process stdout: 'utf-8' codec can't decode byte 0xdd" error
On a buildbot, a job failed because a test worker stdout cannot be decoded from UTF-8:
> Cannot read process stdout: 'utf-8' codec can't decode byte 0xdd in position 2360: invalid continuation byte
In February, @gpshead reported a similar issue: issue #101634. I fixed it by adding a try/except: commit https://github.com/python/cpython/commit/2ac3eec103cf450aaaebeb932e51155d2e7fb37b.
The try/except stops regrtest in a cleaner fashion, but it has a big drawback: the worker stdout is lost and cannot be displayed.
IMO regrtest should use any means available to display the stdout, especially if it's corrupted. It's too important, since some bugs only occur on some platforms. For example, Windows can log assertion errors in UTF-16 whereas regrtest uses an 8-bit encoding (issue #108989, closed as "WONTFIX").
```
(...)
0:02:59 load avg: 9.85 [200/463] test_tempfile passed -- running (1): test_pickle (42.6 sec)
0:03:02 load avg: 10.18 [201/463] test_codecs passed -- running (2): test_pickle (45.5 sec), test.test_multiprocessing_spawn.test_processes (32.2 sec)
0:03:02 load avg: 10.18 [202/463] test_linecache passed -- running (2): test_pickle (46.1 sec), test.test_multiprocessing_spawn.test_processes (32.9 sec)
0:03:03 load avg: 10.18 [203/463] test_set passed -- running (2): test_pickle (47.3 sec), test.test_multiprocessing_spawn.test_processes (34.1 sec)
0:03:04 load avg: 10.18 [204/463] test_shlex passed -- running (2): test_pickle (47.8 sec), test.test_multiprocessing_spawn.test_processes (34.6 sec)
0:03:05 load avg: 10.18 [205/463/1] test_interpreters process crashed (Cannot read process stdout: 'utf-8' codec can't decode byte 0xdd in position 2360: invalid continuation byte) -- running (2): test_pickle (48.7 sec), test.test_multiprocessing_spawn.test_processes (35.5 sec)
Kill <WorkerThread #1 running test=test_sys pid=1206013 time=2.6 sec> process group
Kill <WorkerThread #2 running test=test.test_asyncio.test_events pid=1205711 time=9.2 sec> process group
Kill <WorkerThread #3 running test=test_tools pid=1205910 time=6.2 sec> process group
Kill <WorkerThread #4 running test=test_float pid=1206093 time=885 ms> process group
Kill <WorkerThread #5 running test=test.test_multiprocessing_spawn.test_threads pid=1205413 time=16.3 sec> process group
(...)
```
build: https://buildbot.python.org/all/#/builders/435/builds/3612
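One way to keep the readable parts of a corrupted worker stream is a lenient error handler instead of strict UTF-8 decoding. A sketch of that approach (the sample bytes are made up, modeled on the 0xdd byte from the error above):

```python
# A lone 0xdd lead byte followed by a space is invalid UTF-8, so a
# strict decode of the whole stream fails; "backslashreplace" keeps
# everything readable and escapes only the bad bytes.
data = b"test_interpreters passed \xdd partial output"

try:
    data.decode("utf-8")
except UnicodeDecodeError as exc:
    print("strict decode failed:", exc.reason)

print(data.decode("utf-8", errors="backslashreplace"))
```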
<!-- gh-linked-prs -->
### Linked PRs
* gh-109428
<!-- /gh-linked-prs -->
| 74c72a2fc73941394839bd912c4814398b461446 | 68a6f21f47e779ddd70e33cf04d170a63f077fcd |
python/cpython | python__cpython-109419 | # test_b2a_roundtrip failure in test_binascii.py
# Bug report
### Bug description:
Seen in GitHub CI:
```pytb
======================================================================
ERROR: test_b2a_roundtrip (test.test_binascii.BytearrayBinASCIITest.test_b2a_roundtrip)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/runner/work/cpython/cpython-ro-srcdir/Lib/test/test_binascii.py", line 233, in test_b2a_roundtrip
binary=hypothesis.strategies.binary(),
^^^^^^^
File "/home/runner/work/cpython/cpython-builddir/hypovenv/lib/python3.13/site-packages/hypothesis/core.py", line 1386, in wrapped_test
raise the_error_hypothesis_found
File "/home/runner/work/cpython/cpython-ro-srcdir/Lib/test/test_binascii.py", line 237, in test_b2a_roundtrip
converted = binascii.b2a_uu(self.type2test(binary), backtick=backtick)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
binascii.Error: At most 45 bytes at once
Falsifying example: test_b2a_roundtrip(
self=<test.test_binascii.BytearrayBinASCIITest testMethod=test_b2a_roundtrip>,
binary=b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00',
backtick=False,
)
You can reproduce this example by temporarily adding @reproduce_failure('6.84.0', b'AXicY2RgYBwciIEBAA2TAC8=') as a decorator on your test case
```
(https://github.com/python/cpython/actions/runs/6189137441/job/16802618181?pr=109344)
I think we need a `max_size=45` in:
https://github.com/python/cpython/blob/3b9d10b0316cdc2679ccad80563b7c7da3951388/Lib/test/test_binascii.py#L232-L239
cc @sobolevn
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-109419
<!-- /gh-linked-prs -->
| d7dc3d9455de93310ccde13ceafe84d426790a5c | 3b9d10b0316cdc2679ccad80563b7c7da3951388 |
python/cpython | python__cpython-109440 | # Add a note to the `venv` documentation that users should **not** put their code _inside_ the virtual environment
# Documentation
The `venv` docs don't explicitly call out that the virtual environment directory should be viewed as a self-contained unit, not as a directory in which to place one's own code.
<!-- gh-linked-prs -->
### Linked PRs
* gh-109440
* gh-109480
* gh-109481
<!-- /gh-linked-prs -->
| a6846d45ff3c836bc859c40e7684b57df991dc05 | 0b38ce440bd76b3d25b6d042ee9613841fb4a947 |
python/cpython | python__cpython-109437 | # frozen dataclass inheritance is not strictly checked in multiple inheritance
# Bug report
### Bug description:
```python
import dataclasses
@dataclasses.dataclass
class NotFrozen:
pass
@dataclasses.dataclass(frozen=True)
class Frozen:
pass
@dataclasses.dataclass(frozen=True)
class Child(NotFrozen, Frozen):
pass
```
The dataclass inheritance hierarchy is supposed to require all classes to be either frozen or non-frozen. This works properly when checking that a non-frozen class does not inherit from any frozen classes, but it allows a frozen class to inherit from non-frozen ones as long as there is at least one frozen class among its multiple-inheritance bases.
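For contrast, the direction that *is* strictly checked can be demonstrated; defining a non-frozen dataclass with a frozen base raises at class-definition time:

```python
import dataclasses

@dataclasses.dataclass(frozen=True)
class Frozen:
    pass

# The single-inheritance check works: a non-frozen dataclass cannot
# inherit from a frozen one, and the decorator raises TypeError.
try:
    @dataclasses.dataclass
    class Bad(Frozen):
        pass
except TypeError as exc:
    print(exc)
```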
### CPython versions tested on:
3.10
### Operating systems tested on:
Windows
<!-- gh-linked-prs -->
### Linked PRs
* gh-109437
<!-- /gh-linked-prs -->
| b6000d287407cbccfbb1157dc1fc6128497badc7 | 7dd3c2b80064c39f1f0ebbc1f8486897b3148aa5 |
python/cpython | python__cpython-109452 | # Move Azure Pipelines CI to GitHub Actions
GitHub Actions is easier to use and maintain: it is more popular and widely used, and so more familiar, and it's easier to maintain a single system. Core devs can also restart failed GitHub Actions runs, which they can't do for Azure Pipelines. Both are owned by Microsoft, and I believe they run on much the same infrastructure.
There's a lot of duplication of tests between Azure Pipelines and GitHub Actions.
Let's remove the duplicated tests and move the unique things over to GitHub Actions. For example, `patchcheck` is only run on Azure Pipelines.
We might not necessarily need to port things directly over; it may be possible to replicate equivalent checks in another way, possibly with a pre-commit hook or a Ruff lint rule. Let's evaluate these separately.
Related issues/PRs:
* https://github.com/python/cpython/issues/109395
* https://github.com/python/cpython/pull/109400
* https://github.com/python/cpython/pull/109412
* https://github.com/python/cpython/pull/105823
* https://github.com/python/cpython/issues/84018
* https://github.com/python/cpython/pull/18818
<!-- gh-linked-prs -->
### Linked PRs
* gh-109452
* gh-109453
* gh-109459
* gh-109519
* gh-109520
* gh-109535
* gh-109536
* gh-109569
* gh-109623
* gh-109624
* gh-109854
* gh-109890
* gh-109891
* gh-109895
* gh-110594
* gh-110595
* gh-110633
* gh-110636
* gh-110640
* gh-110641
* gh-110726
* gh-110730
* gh-122333
* gh-122643
<!-- /gh-linked-prs -->
| a75daed7e004ee9a53b160307c4c072656176a02 | add16f1a5e4013f97d33cc677dc008e8199f5b11 |
python/cpython | python__cpython-109403 | # Bug in `regrtest` when name in `SPLITTESTDIRS` also matches a module name in tests
# Bug report
During https://github.com/python/cpython/pull/109368 I tried to create `Lib/test/test_future/` directory. It did not go well:
```
» ./python.exe -m test test_future
Traceback (most recent call last):
File "/Users/sobolev/Desktop/cpython/Lib/runpy.py", line 198, in _run_module_as_main
return _run_code(code, main_globals, None,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/sobolev/Desktop/cpython/Lib/runpy.py", line 88, in _run_code
exec(code, run_globals)
File "/Users/sobolev/Desktop/cpython/Lib/test/__main__.py", line 2, in <module>
main()
File "/Users/sobolev/Desktop/cpython/Lib/test/libregrtest/main.py", line 511, in main
Regrtest(ns).main(tests=tests)
File "/Users/sobolev/Desktop/cpython/Lib/test/libregrtest/main.py", line 492, in main
selected, tests = self.find_tests(tests)
^^^^^^^^^^^^^^^^^^^^^^
File "/Users/sobolev/Desktop/cpython/Lib/test/libregrtest/main.py", line 176, in find_tests
alltests = findtests(testdir=self.test_dir,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/sobolev/Desktop/cpython/Lib/test/libregrtest/findtests.py", line 45, in findtests
tests.extend(findtests(testdir=subdir, exclude=exclude,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/sobolev/Desktop/cpython/Lib/test/libregrtest/findtests.py", line 45, in findtests
tests.extend(findtests(testdir=subdir, exclude=exclude,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/sobolev/Desktop/cpython/Lib/test/libregrtest/findtests.py", line 38, in findtests
for name in os.listdir(testdir):
^^^^^^^^^^^^^^^^^^^
FileNotFoundError: [Errno 2] No such file or directory: '/Users/sobolev/Desktop/cpython/Lib/test/test_future/test_future'
```
I renamed `Lib/test/test_future/test_future.py` to `Lib/test/test_future/test_import_future.py`
Now I get a new error (but similar):
```
» ./python.exe -m test test_future
Traceback (most recent call last):
File "/Users/sobolev/Desktop/cpython/Lib/runpy.py", line 198, in _run_module_as_main
return _run_code(code, main_globals, None,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/sobolev/Desktop/cpython/Lib/runpy.py", line 88, in _run_code
exec(code, run_globals)
File "/Users/sobolev/Desktop/cpython/Lib/test/__main__.py", line 2, in <module>
main()
File "/Users/sobolev/Desktop/cpython/Lib/test/libregrtest/main.py", line 511, in main
Regrtest(ns).main(tests=tests)
File "/Users/sobolev/Desktop/cpython/Lib/test/libregrtest/main.py", line 492, in main
selected, tests = self.find_tests(tests)
^^^^^^^^^^^^^^^^^^^^^^
File "/Users/sobolev/Desktop/cpython/Lib/test/libregrtest/main.py", line 176, in find_tests
alltests = findtests(testdir=self.test_dir,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/sobolev/Desktop/cpython/Lib/test/libregrtest/findtests.py", line 45, in findtests
tests.extend(findtests(testdir=subdir, exclude=exclude,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/sobolev/Desktop/cpython/Lib/test/libregrtest/findtests.py", line 45, in findtests
tests.extend(findtests(testdir=subdir, exclude=exclude,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/sobolev/Desktop/cpython/Lib/test/libregrtest/findtests.py", line 38, in findtests
for name in os.listdir(testdir):
^^^^^^^^^^^^^^^^^^^
FileNotFoundError: [Errno 2] No such file or directory: '/Users/sobolev/Desktop/cpython/Lib/test/test_concurrent_futures/test_future'
```
This looks like a bug to me.
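A simplified model of the suspected logic reproduces the same failure: membership in `SPLITTESTDIRS` is checked by bare module name, without verifying that the entry is actually a directory, so a *file* named like a split directory triggers recursion into a path that does not exist. (This is a hypothetical sketch, not the real `findtests()` code.)

```python
import os
import tempfile

SPLITTESTDIRS = {"test_future"}

def findtests(testdir):
    tests = []
    for name in os.listdir(testdir):
        stem = name.removesuffix(".py")
        if stem in SPLITTESTDIRS:   # bug modeled here: no isdir() check
            tests.extend(findtests(os.path.join(testdir, stem)))
        else:
            tests.append(stem)
    return tests

with tempfile.TemporaryDirectory() as d:
    # A module file whose stem collides with a SPLITTESTDIRS entry.
    open(os.path.join(d, "test_future.py"), "w").close()
    try:
        findtests(d)
    except FileNotFoundError as exc:
        print(exc)  # same shape as the FileNotFoundError in the traceback
```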
<!-- gh-linked-prs -->
### Linked PRs
* gh-109403
* gh-109404
<!-- /gh-linked-prs -->
| 9ccd2e6aee173615012f7b8262ec890cfb1a7eb4 | d7a27e527d7e669d2e45cff80ad725978226477c |
python/cpython | python__cpython-109875 | # test_threading: test_default_timeout() fails randomly ands logs "Warning -- Unraisable exception"
Example from issue #108987:
```
test_default_timeout (test.test_threading.BarrierTests.test_default_timeout)
Test the barrier's default timeout ... Warning -- Unraisable exception
Exception ignored in thread started by: <function Bunch.__init__.<locals>.task at 0x053F5270>
Traceback (most recent call last):
File "D:\a\cpython\cpython\Lib\test\lock_tests.py", line 48, in task
f()
File "D:\a\cpython\cpython\Lib\test\lock_tests.py", line 1020, in f
i = barrier.wait()
^^^^^^^^^^^^^^
File "D:\a\cpython\cpython\Lib\threading.py", line 709, in wait
self._wait(timeout)
File "D:\a\cpython\cpython\Lib\threading.py", line 749, in _wait
raise BrokenBarrierError
threading.BrokenBarrierError:
ERROR
test_repr (test.test_threading.BarrierTests.test_repr) ... Warning -- Unraisable exception
Exception ignored in thread started by: <function Bunch.__init__.<locals>.task at 0x053F5270>
Traceback (most recent call last):
File "D:\a\cpython\cpython\Lib\test\lock_tests.py", line 48, in task
f()
File "D:\a\cpython\cpython\Lib\test\lock_tests.py", line 1020, in f
i = barrier.wait()
^^^^^^^^^^^^^^
File "D:\a\cpython\cpython\Lib\threading.py", line 709, in wait
self._wait(timeout)
File "D:\a\cpython\cpython\Lib\threading.py", line 749, in _wait
raise BrokenBarrierError
threading.BrokenBarrierError:
Warning -- Unraisable exception
Exception ignored in thread started by: <function Bunch.__init__.<locals>.task at 0x053F5270>
Traceback (most recent call last):
File "D:\a\cpython\cpython\Lib\test\lock_tests.py", line 48, in task
f()
File "D:\a\cpython\cpython\Lib\test\lock_tests.py", line 1020, in f
i = barrier.wait()
^^^^^^^^^^^^^^
File "D:\a\cpython\cpython\Lib\threading.py", line 709, in wait
self._wait(timeout)
File "D:\a\cpython\cpython\Lib\threading.py", line 749, in _wait
raise BrokenBarrierError
threading.BrokenBarrierError:
Warning -- Unraisable exception
Exception ignored in thread started by: <function Bunch.__init__.<locals>.task at 0x053F5270>
Traceback (most recent call last):
File "D:\a\cpython\cpython\Lib\test\lock_tests.py", line 48, in task
f()
File "D:\a\cpython\cpython\Lib\test\lock_tests.py", line 1020, in f
i = barrier.wait()
^^^^^^^^^^^^^^
File "D:\a\cpython\cpython\Lib\threading.py", line 700, in wait
self._enter() # Block while the barrier drains.
^^^^^^^^^^^^^
File "D:\a\cpython\cpython\Lib\threading.py", line 724, in _enter
raise BrokenBarrierError
threading.BrokenBarrierError:
ok
```
build: https://github.com/python/cpython/actions/runs/6094370280/job/16535778004#step:5:103
In issue #108987, I fixed a crash that occurred in this test when the test failed.
I am creating this issue to focus on the **test failure** and the ``Warning -- Unraisable exception`` messages.
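The mechanism behind the warnings can be shown in isolation: a barrier's default timeout applies to every `wait()` call, and when the other parties never arrive, the barrier breaks and every waiter gets `BrokenBarrierError`, which is what leaks out of the worker threads above.

```python
import threading

# Two parties expected, but only one ever calls wait(): after the
# default timeout expires the barrier breaks and wait() raises.
barrier = threading.Barrier(parties=2, timeout=0.05)
try:
    barrier.wait()
except threading.BrokenBarrierError:
    print("barrier broken after default timeout")
```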
<!-- gh-linked-prs -->
### Linked PRs
* gh-109875
* gh-109876
* gh-109877
<!-- /gh-linked-prs -->
| e5186c3de4194de3ea8c80edb182d786f5e20944 | e9791ba35175171170ff09094ea46b91fc18c654 |
python/cpython | python__cpython-109423 | # test_socket.LinuxKernelCryptoAPI.test_hmac_sha1() fails on "AMD64 RHEL8 FIPS Only Blake2 Builtin Hash 3.x" buildbot
# Bug report
When FIPS is enabled in Linux, LinuxKernelCryptoAPI.test_hmac_sha1() fails with ``OSError: [Errno 22] Invalid argument``.
test.pythoninfo:
```
fips.linux_crypto_fips_enabled: 1
fips.openssl_fips_mode: 1
```
Error:
```
ERROR: test_hmac_sha1 (test.test_socket.LinuxKernelCryptoAPI.test_hmac_sha1)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/buildbot/buildarea/3.x.cstratak-RHEL8-fips-x86_64.no-builtin-hashes-except-blake2/build/Lib/test/test_socket.py", line 6479, in test_hmac_sha1
algo.setsockopt(socket.SOL_ALG, socket.ALG_SET_KEY, b"Jefe")
OSError: [Errno 22] Invalid argument
```
build: https://buildbot.python.org/all/#/builders/469/builds/5995
<!-- gh-linked-prs -->
### Linked PRs
* gh-109423
* gh-109426
* gh-109427
* gh-125106
* gh-125107
<!-- /gh-linked-prs -->
| e091b9f20fa8e409003af79f3c468b8225e6dcd3 | d7dc3d9455de93310ccde13ceafe84d426790a5c |
python/cpython | python__cpython-109400 | # Azure Pipelines: `macOS CI Tests` and `Ubuntu CI Tests (coverage)` jobs are always skipped
I cannot find any CI runs where these two jobs completed or failed; they are always skipped.
Azure seems to store build results for only about a month, so I cannot trace it back further.
```
Started: Today at 09:12
Duration: 2h 3m 40s
Evaluating: False
Result: False
```
Examples:
- https://dev.azure.com/Python/cpython/_build/results?buildId=137735&view=logs&j=609bc2f2-68da-5434-7493-ff39ba61961e
- https://dev.azure.com/Python/cpython/_build/results?buildId=135386&view=logs&j=4fbfd83d-b707-5326-9192-5affa2b62e03
- https://dev.azure.com/Python/cpython/_build/results?buildId=137732&view=logs&j=4fbfd83d-b707-5326-9192-5affa2b62e03
Why does it happen?
<!-- gh-linked-prs -->
### Linked PRs
* gh-109400
* gh-109412
* gh-109433
* gh-109434
* gh-109441
* gh-109442
<!-- /gh-linked-prs -->
| 1ece084be3684e06101aa1efa82d3ed98c99c432 | 82505dc351b2f7e37aa395218709b432d83292cd |
python/cpython | python__cpython-109391 | # add C utility for debugging symbol tables, under `#if 0`
In `flowgraph.c` we have some C functions for dumping instructions and basicblocks as text. These are under `#if 0` by default, so effectively comments, but they can be very useful to uncomment and use when debugging a tricky compiler issue.
I recently had to debug a symbol table problem, and I wrote a similar function to (recursively) dump a symbol table and its children to a text representation. I propose that we also include this in the source tree, under `#if 0` just like the compiler equivalent, to save time for future debuggers.
The Python `symtable` module does not fill this need, since you can't use it if the symbol table bug is preventing compilation from even completing.
cc @JelleZijlstra
<!-- gh-linked-prs -->
### Linked PRs
* gh-109391
<!-- /gh-linked-prs -->
| 32ffe58c1298b0082ff6fe96ad45c4efe49f4338 | d41d2e69f621ce25e7719f5057a6be6776bc6783 |
python/cpython | python__cpython-109376 | # pdb: calling registered alias without command raises KeyError
# Bug report
### Bug description:
pdb allows registering an alias without a command.
Calling that alias then raises an unhandled exception.
```bash
$ /tmp/ cat bar.py
breakpoint()
$ /tmp/ python bar.py
--Return--
> /tmp/bar.py(1)<module>()->None
-> breakpoint()
(Pdb) alias foo
(Pdb) foo
Traceback (most recent call last):
File "/tmp/bar.py", line 1, in <module>
breakpoint()
File "/home/home/.pyenv/versions/3.11.1/lib/python3.11/bdb.py", line 94, in trace_dispatch
return self.dispatch_return(frame, arg)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/home/.pyenv/versions/3.11.1/lib/python3.11/bdb.py", line 153, in dispatch_return
self.user_return(frame, arg)
File "/home/home/.pyenv/versions/3.11.1/lib/python3.11/pdb.py", line 372, in user_return
self.interaction(frame, None)
File "/home/home/.pyenv/versions/3.11.1/lib/python3.11/pdb.py", line 435, in interaction
self._cmdloop()
File "/home/home/.pyenv/versions/3.11.1/lib/python3.11/pdb.py", line 400, in _cmdloop
self.cmdloop()
File "/home/home/.pyenv/versions/3.11.1/lib/python3.11/cmd.py", line 137, in cmdloop
line = self.precmd(line)
^^^^^^^^^^^^^^^^^
File "/home/home/.pyenv/versions/3.11.1/lib/python3.11/pdb.py", line 473, in precmd
while args[0] in self.aliases:
~~~~^^^
IndexError: list index out of range
```
The general syntax of `alias foo` is valid, as it prints the alias if one is defined. However, if no command is registered under that name, calling it fails.
I am unsure whether raising an exception is the intended behavior here. I think an error message indicating that the alias is unknown would be more appropriate, e.g.:
```bash
$ python /tmp/bar.py
--Return--
> /tmp/bar.py(1)<module>()->None
-> breakpoint()
(Pdb) alias foo
*** Unknown alias. To create an alias see 'help alias'
```
A solution could be to add a check that catches attempts to register an alias without a command.
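A minimal model of the failure in `precmd()`: an alias registered without a command expands to the empty string, the empty `split()` result has no element 0, and the lookup blows up. (Hypothetical names and simplified logic, not the real pdb code.)

```python
# "alias foo" with no command stores an empty command string.
aliases = {"foo": ""}

def expand(line, aliases):
    args = line.split()
    # Guarding on "args and" is the fix: without it, an empty args
    # list makes args[0] raise IndexError, as in the traceback above.
    while args and args[0] in aliases:
        line = aliases[args[0]] + " " + " ".join(args[1:])
        args = line.split()
    return line.strip()

print(expand("foo", aliases))       # expands to "" instead of raising
print(expand("step", aliases))      # non-alias lines pass through
```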
### CPython versions tested on:
3.10, 3.11, CPython main branch
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-109376
* gh-109429
* gh-109430
<!-- /gh-linked-prs -->
| 68a6f21f47e779ddd70e33cf04d170a63f077fcd | e091b9f20fa8e409003af79f3c468b8225e6dcd3 |
python/cpython | python__cpython-109374 | # pystats should save all metadata necessary to perform comparison in pystats .json output
# Bug report
### Bug description:
The `Tools/scripts/summarize_stats.py` script currently has the ability to save stats to a `.json` file so that collected stats can later be compared to another set of collected stats. Unfortunately, this comparison must be performed on the same version of Python that collected the stats, since information about specialized instructions and stat types is collected at runtime during the comparison (from the opcode module and by scraping the source code). Failure to do so may either break or, worse, silently produce incorrect results. If this information were instead stored in the `.json` file, the comparison could be performed on any recent enough build of Python. This is particularly important for the [benchmarking infrastructure](https://github.com/faster-cpython/bench_runner/), where we don't want to have to check out or rebuild specific versions of Python just to create derived comparative data.
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-109374
<!-- /gh-linked-prs -->
| 19f5effc27bc47d76b2f484a1fcb4ccdd7dc9d67 | 3d881453d32101855cd083ef79641da5e437df3d |
python/cpython | python__cpython-109385 | # Crash in test_sys_setprofile if it follows test_sys_settrace
# Crash report
`test_reentrancy` in `test_sys_setprofile` crashes if it follows `test_sys_settrace` run in the same process.
The fastest reproducer:
```
$ ./python -m test -v test_sys_settrace test_sys_setprofile -m test_sys_settrace -m test_reentrancy
...
test_reentrancy (test.test_sys_setprofile.TestEdgeCases.test_reentrancy) ... python: Python/instrumentation.c:662: instrument: Assertion `!is_instrumented(opcode)' failed.
Fatal Python error: Aborted
Current thread 0x00007f7bc76e9740 (most recent call first):
File "/home/serhiy/py/cpython/Lib/unittest/case.py", line 873 in _baseAssertEqual
File "/home/serhiy/py/cpython/Lib/unittest/case.py", line 885 in assertEqual
File "/home/serhiy/py/cpython/Lib/test/test_sys_setprofile.py", line 440 in test_reentrancy
File "/home/serhiy/py/cpython/Lib/unittest/case.py", line 589 in _callTestMethod
File "/home/serhiy/py/cpython/Lib/unittest/case.py", line 634 in run
File "/home/serhiy/py/cpython/Lib/unittest/case.py", line 690 in __call__
File "/home/serhiy/py/cpython/Lib/unittest/suite.py", line 122 in run
File "/home/serhiy/py/cpython/Lib/unittest/suite.py", line 84 in __call__
File "/home/serhiy/py/cpython/Lib/unittest/suite.py", line 122 in run
File "/home/serhiy/py/cpython/Lib/unittest/suite.py", line 84 in __call__
File "/home/serhiy/py/cpython/Lib/unittest/suite.py", line 122 in run
File "/home/serhiy/py/cpython/Lib/unittest/suite.py", line 84 in __call__
File "/home/serhiy/py/cpython/Lib/unittest/runner.py", line 240 in run
File "/home/serhiy/py/cpython/Lib/test/support/__init__.py", line 1137 in _run_suite
File "/home/serhiy/py/cpython/Lib/test/support/__init__.py", line 1264 in run_unittest
File "/home/serhiy/py/cpython/Lib/test/libregrtest/single.py", line 36 in run_unittest
File "/home/serhiy/py/cpython/Lib/test/libregrtest/single.py", line 90 in test_func
File "/home/serhiy/py/cpython/Lib/test/libregrtest/single.py", line 48 in regrtest_runner
File "/home/serhiy/py/cpython/Lib/test/libregrtest/single.py", line 93 in _load_run_test
File "/home/serhiy/py/cpython/Lib/test/libregrtest/single.py", line 136 in _runtest_env_changed_exc
File "/home/serhiy/py/cpython/Lib/test/libregrtest/single.py", line 236 in _runtest
File "/home/serhiy/py/cpython/Lib/test/libregrtest/single.py", line 264 in run_single_test
File "/home/serhiy/py/cpython/Lib/test/libregrtest/main.py", line 283 in run_test
File "/home/serhiy/py/cpython/Lib/test/libregrtest/main.py", line 318 in run_tests_sequentially
File "/home/serhiy/py/cpython/Lib/test/libregrtest/main.py", line 444 in _run_tests
File "/home/serhiy/py/cpython/Lib/test/libregrtest/main.py", line 474 in run_tests
File "/home/serhiy/py/cpython/Lib/test/libregrtest/main.py", line 503 in main
File "/home/serhiy/py/cpython/Lib/test/libregrtest/main.py", line 511 in main
File "/home/serhiy/py/cpython/Lib/test/__main__.py", line 2 in <module>
File "/home/serhiy/py/cpython/Lib/runpy.py", line 88 in _run_code
File "/home/serhiy/py/cpython/Lib/runpy.py", line 198 in _run_module_as_main
Extension modules: _testcapi (total: 1)
Aborted (core dumped)
```
Stacktrace:
```
#0 __pthread_kill_implementation (no_tid=0, signo=6, threadid=140737350489920) at ./nptl/pthread_kill.c:44
#1 __pthread_kill_internal (signo=6, threadid=140737350489920) at ./nptl/pthread_kill.c:78
#2 __GI___pthread_kill (threadid=140737350489920, signo=signo@entry=6) at ./nptl/pthread_kill.c:89
#3 0x00007ffff7cca476 in __GI_raise (sig=sig@entry=6) at ../sysdeps/posix/raise.c:26
#4 0x00007ffff7cb07f3 in __GI_abort () at ./stdlib/abort.c:79
#5 0x00007ffff7cb071b in __assert_fail_base (fmt=0x7ffff7e65150 "%s%s%s:%u: %s%sAssertion `%s' failed.\n%n", assertion=0x5555559ce388 "!is_instrumented(opcode)",
file=0x5555559ce260 "Python/instrumentation.c", line=662, function=<optimized out>) at ./assert/assert.c:92
#6 0x00007ffff7cc1e96 in __GI___assert_fail (assertion=assertion@entry=0x5555559ce388 "!is_instrumented(opcode)", file=file@entry=0x5555559ce260 "Python/instrumentation.c",
line=line@entry=662, function=function@entry=0x5555559cf120 <__PRETTY_FUNCTION__.9> "instrument") at ./assert/assert.c:101
#7 0x0000555555850f1d in instrument (code=code@entry=0x7ffff746f0d0, i=i@entry=54) at Python/instrumentation.c:662
#8 0x0000555555850fec in add_tools (code=code@entry=0x7ffff746f0d0, offset=offset@entry=54, event=event@entry=2, tools=tools@entry=64) at Python/instrumentation.c:793
#9 0x0000555555854473 in _Py_Instrument (code=0x7ffff746f0d0, interp=<optimized out>) at Python/instrumentation.c:1618
#10 0x00005555557d3899 in _PyEval_EvalFrameDefault (tstate=tstate@entry=0x555555c0e818 <_PyRuntime+508728>, frame=0x7ffff7b413c0, throwflag=throwflag@entry=0)
at Python/generated_cases.c.h:47
#11 0x00005555557f45df in _PyEval_EvalFrame (throwflag=0, frame=<optimized out>, tstate=0x555555c0e818 <_PyRuntime+508728>) at ./Include/internal/pycore_ceval.h:107
#12 _PyEval_Vector (tstate=0x555555c0e818 <_PyRuntime+508728>, func=0x7ffff7180dd0, locals=locals@entry=0x0, args=0x7fffffffb6b0, argcount=2, kwnames=0x0) at Python/ceval.c:1632
...
```
All other tests in `test_sys_setprofile` do not crash. Reproduced on 3.12+. 3.11 does not have `test_reentrancy`.
<!-- gh-linked-prs -->
### Linked PRs
* gh-109385
* gh-109542
<!-- /gh-linked-prs -->
| 412f5e85d6b9f2e90c57c54539d06c7a025a472a | 23f9f6f46454455bc6015e83ae5b5e946dae7698 |
python/cpython | python__cpython-109846 | # Executors might ignore instrumentation.
# Bug report
Code handed to the optimizer may not include instrumentation. If instrumentation is added later, the executor does not see it.
We remove all `ENTER_EXECUTORS` when instrumenting, but that doesn't fix the problem of executors that are still running.
### Example
A loop that calls `foo`:
```Py
while large_number > 0:
foo()
large_number -= 1
```
The loop gets turned into an executor, and sometime before `large_number` reaches zero, a debugger gets attached and in some callee of `foo` turns on monitoring of calls. We would expect all subsequent calls to `foo()` to be monitored, but they will not be, as the executor knows nothing about the call to `foo()` being instrumented.
### Let's not rely on the executor/optimizer to handle this.
We could add a complex de-optimization strategy, so that executors are invalidated when instrumentation occurs, but that is the sort of thing we want to be doing for maximum performance, not for correctness.
It is much safer, and ultimately no slower (once we implement fancy de-optimization) to add extra checks for instrumentation, so that unoptimized traces are correct.
### The solution
The solution is quite simple, add a check for instrumentation after every call.
We can make this less slow (and less simple) by combining the eval-breaker check and instrumentation check into one. This makes the check slightly more complex, but should speed up `RESUME` by less than it slows down every call as every call already has an eval-breaker check.
Combining the two checks reduces the number of bits available for versions from 64 to 24 (we need the other bits for the eval-breaker, gc, async exceptions, etc.). A 64-bit number never overflows (computers don't last long enough), but 24-bit numbers do.
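Schematically, packing the version into the high bits of the same word as the pending-event flags lets a single equality test serve both purposes. The field widths and names below are illustrative only, not the actual layout:

```python
FLAGS_MASK = 0xFF      # gc, signals, async exceptions, ... (assumed width)
VERSION_SHIFT = 8      # instrumentation version lives in the high bits


def must_break(eval_breaker: int, code_version: int) -> bool:
    # One equality test covers both checks: any pending flag makes the
    # low bits nonzero, and a stale instrumentation version changes the
    # high bits, so either condition makes the word differ.
    return eval_breaker != (code_version << VERSION_SHIFT)
```

A call site only needs the one comparison, which is why folding the instrumentation check into the existing eval-breaker check should be cheap.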
This will have three main effects, beyond the hopefully very small performance impact for all code:
* It removes the need for a complete call stack scan of all threads when instrumenting.
* Tools that set or change monitoring many times will see significant performance improvements as we no longer need to traverse all stacks to re-instrument the whole call stack whenever monitoring changes
* Tools that set or change monitoring many millions of times will break, as we run out of versions. It is likely that these tools were already broken or had performance so bad as to be totally unusable.
#### Making the check explicit in the bytecode.
We can simplify all the call instructions by removing the `CHECK_EVAL_BREAKER` check at the end and adding an explicit `RESUME` instruction after every `CALL`.
Although this has the disadvantage of making the bytecode larger and adding dispatch overhead, it does have the following advantages:
* Allows tier 1 to optimize the `RESUME` to `RESUME_CHECK`, which might cancel out the additional dispatch overhead.
* Makes the check explicit, making it feasible for tier 2 optimizations to remove it.
<!-- gh-linked-prs -->
### Linked PRs
* gh-109846
* gh-110358
* gh-110384
* gh-111642
* gh-111657
<!-- /gh-linked-prs -->
| bf4bc36069ef1ed4be4be2ae70404f78bff056d9 | 7c149a76b2bf4c66bb7c8650ffb71acce12f5ea2 |
python/cpython | python__cpython-109358 | # test_monitoring fails when run multiple times: ValueError: tool 0 is already in use
The issue broke all Refleaks buildbots :-(
Example:
```
$ ./python -m test test_monitoring -R 3:3
0:00:00 load avg: 3.50 Run 1 test sequentially
0:00:00 load avg: 3.50 [1/1] test_monitoring
beginning 6 repetitions
123456
.test test_monitoring failed -- Traceback (most recent call last):
File "/home/vstinner/python/main/Lib/test/test_monitoring.py", line 1723, in test_gh108976
sys.monitoring.use_tool_id(0, "test")
ValueError: tool 0 is already in use
test_monitoring failed (1 error)
== Tests result: FAILURE ==
1 test failed:
test_monitoring
Total duration: 454 ms
Total tests: run=63
Total test files: run=1/1 failed=1
Result: FAILURE
```
The regression was introduced by: commit 4a69301ea4539da172a00a80e78c07e9b41c1f8e of PR GH-109131. cc @markshannon
```
commit 4a69301ea4539da172a00a80e78c07e9b41c1f8e
Author: Mark Shannon <mark@hotpy.org>
Date: Mon Sep 11 14:37:09 2023 +0100
GH-108976. Keep monitoring data structures valid during de-optimization during callback. (GH-109131)
```
Python 3.12 is also affected, the change was [backported to 3.12](https://github.com/python/cpython/pull/109268): cc @Yhg1s
<!-- gh-linked-prs -->
### Linked PRs
* gh-109358
* gh-109359
<!-- /gh-linked-prs -->
| 388d91cd474de80355f5a8f6a26e8962813a3128 | b544c2b1355571a36fe0c212f92e9b163ceb16af |
python/cpython | python__cpython-109352 | # Crash on compilation of invalid AST involving walrus
Running this file:
```
import ast
m = ast.Module(
body=[
ast.Expr(
value=ast.ListComp(
elt=ast.NamedExpr(
target=ast.Constant(value=1),
value=ast.Constant(value=3),
),
generators=[
ast.comprehension(
target=ast.Name(id="x", ctx=ast.Store()),
iter=ast.Name(id="y", ctx=ast.Load()),
ifs=[],
is_async=0,
)
],
)
)
],
type_ignores=[],
)
compile(ast.fix_missing_locations(m), "<file>", "exec")
```
Causes:
```
% ./python.exe namedexpr.py
Assertion failed: (e->kind == Name_kind), function symtable_extend_namedexpr_scope, file symtable.c, line 1877.
zsh: abort ./python.exe namedexpr.py
```
This is on a debug build; on a release build presumably it will segfault or trigger UB somewhere.
I'll have a fix soon.
On 3.11 the reproducer instead fails with `TypeError: '<' not supported between instances of 'int' and 'str'` for me. Not sure what's up with that; the code that should trigger the crash is the same on 3.11 and main.
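The real fix belongs in the C symtable code, but the shape of the missing validation can be sketched at the Python level (a hypothetical helper, for illustration only):

```python
import ast


def check_namedexpr_targets(tree):
    # A walrus target must be a plain Name; anything else should raise
    # cleanly instead of tripping the symtable assertion (or worse, UB).
    for node in ast.walk(tree):
        if isinstance(node, ast.NamedExpr) and not isinstance(node.target, ast.Name):
            raise ValueError("named expression target must be a Name")
```

Run against the reproducer's AST, this raises `ValueError` instead of crashing.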
<!-- gh-linked-prs -->
### Linked PRs
* gh-109352
* gh-109379
* gh-109380
<!-- /gh-linked-prs -->
| 79101edb03b7381b514126c68acabfcbbba2f842 | d69805b38a1815e7aaadf49bdd019c7cca105ac6 |
python/cpython | python__cpython-109353 | # Captured output in unittest.mock documentation is outdated
Refer to
* [Doc/library/unittest.mock.rst](https://github.com/python/cpython/blob/5dcbbd8861e618488d95416dee8ea94577e3f4f0/Doc/library/unittest.mock.rst)
* [Doc/library/unittest.mock-examples.rst](https://github.com/python/cpython/blob/5dcbbd8861e618488d95416dee8ea94577e3f4f0/Doc/library/unittest.mock-examples.rst)
The documentation was not updated after changes that made exceptions more readable like https://github.com/python/cpython/pull/11804. This matters since documentation would be easier to follow & google with up-to-date examples.
One example is:
https://github.com/python/cpython/blob/5dcbbd8861e618488d95416dee8ea94577e3f4f0/Doc/library/unittest.mock-examples.rst?plain=1#L796-L801
When output as of @ Python 3.13.0a0:
```
AssertionError: Expected 'foo_bar' to be called once. Called 2 times.
Calls: [call('baz', spam='eggs'), call()].
```
There are multiple examples of this. The easy fix is to just run the examples and update with the latest exception messages.
I prepared a fix for this, coming up in PR unless you stop me 🌞
<!-- gh-linked-prs -->
### Linked PRs
* gh-109353
<!-- /gh-linked-prs -->
| 3d881453d32101855cd083ef79641da5e437df3d | 92ed7e4df1af84fb29e678d111e8561ffcd14581 |
python/cpython | python__cpython-109349 | # compiling an AST with an invalid TypeAlias causes a segmentation fault
# Bug report
### Bug description:
The code for the following AST is:
```python
type foo['x'] = Callable
```
compiling the AST for the given code results in a segmentation fault
```python
from ast import *
m = Module(
body=[
TypeAlias(
name=Subscript(
value=Name(id="foo", ctx=Load()),
slice=Constant(value="x"),
ctx=Store(),
),
type_params=[],
value=Name(id="Callable", ctx=Load()),
)
],
type_ignores=[],
)
compile(fix_missing_locations(m), "<file>", "exec")
```
output (Python 3.12.0rc2+):
```
fish: Job 1, 'venv3.12/bin/python bug.py' terminated by signal SIGSEGV (Adressbereichsfehler)
```
Compiling the code gives the correct syntax error.
```python
compile("type foo['x'] = Callable","<file>","exec")
```
output (Python 3.12.0rc2+):
```
Traceback (most recent call last):
File "/home/frank/projects/executing/bug.py", line 1, in <module>
compile("type foo['x'] = Callable","<file>","exec")
File "<file>", line 1
type foo['x'] = Callable
^^^
SyntaxError: invalid syntax
```
### CPython versions tested on:
3.12
### Operating systems tested on:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-109349
* gh-109381
<!-- /gh-linked-prs -->
| 987b4bc0870e1e29a88275dc3fa39bf2c3dcc763 | 79101edb03b7381b514126c68acabfcbbba2f842 |
python/cpython | python__cpython-109335 | # pystats should compare opcode counts by name, not number
# Bug report
### Bug description:
When `Tools/scripts/summarize_stats.py` is used to compare two stats .json files, it compares opcode counts by the opcode number, not the opcode name. This is a problem when comparing between two commits where the opcode numbers have changed (due to adding or removing opcodes).
If stats instead:
1) Emitted the opcode name rather than the number
2) summarize_stats.py used names rather than numbers
we should have something much more robust.
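The remapping itself is trivial once names are emitted; a sketch (input shapes assumed for illustration):

```python
def counts_by_name(counts_by_number, opname):
    # Key the counts by opcode name so two builds with different opcode
    # numberings can still be compared directly.
    return {opname[num]: count for num, count in counts_by_number.items()}


def compare(base, head):
    # Per-name delta between two runs; missing opcodes count as zero.
    names = set(base) | set(head)
    return {n: head.get(n, 0) - base.get(n, 0) for n in sorted(names)}
```

With name-keyed dicts, adding or removing an unrelated opcode between the two commits no longer skews the comparison.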
Cc @brandtbucher
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-109335
<!-- /gh-linked-prs -->
| 5dcbbd8861e618488d95416dee8ea94577e3f4f0 | 90cf345ed42ae4d17d2a073718985eb3432a7c20 |
python/cpython | python__cpython-109913 | # Adds stats for the tier 2 optimizer
Currently we have no stats for anything regarding the tier 2 optimizer.
Without them we are making too many guesses about what we should be doing.
The performance numbers tell us that things aren't working as well as they should, although not too badly either.
However, performance numbers tell us nothing about why that is, or what is happening.
For example, https://github.com/python/cpython/pull/109038 should have increased the important ratio of `(number of uops executed)/(traces started)` but we have no idea if it actually did.
We need the following stats soon:
* Total micro-ops executed
* Total number of traces started
* Total number of traces created
* Optimization attempts
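Given counters like the ones above, the `(number of uops executed)/(traces started)` ratio mentioned earlier falls out directly (the counter names here are assumed, not the actual pystats keys):

```python
def uops_per_trace(stats):
    # Micro-ops executed per trace started; 0.0 when no trace ever ran.
    started = stats.get("traces_started", 0)
    if not started:
        return 0.0
    return stats.get("uops_executed", 0) / started
```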
The following stats would also be nice, but are less urgent:
* Per uop execution counts, like we have for tier 1 instructions.
* Exit reason counts: polymorphism vs branch mis-prediction
* A histogram of uops executed per trace
<!-- gh-linked-prs -->
### Linked PRs
* gh-109913
* gh-110402
* gh-110561
<!-- /gh-linked-prs -->
| e561e9805854980a61967d07869b4ec4205b32c8 | f7860295b16a402621e209871c8eaeeea16f464e |
python/cpython | python__cpython-109320 | # Deprecate dis.HAVE_ARGUMENT
dis.HAVE_ARGUMENT is only correct for 'normal' opcodes (not specialised, instrumented or pseudo ops). We have the dis.hasarg alternative now, which is correct for all opcode types.
<!-- gh-linked-prs -->
### Linked PRs
* gh-109320
<!-- /gh-linked-prs -->
| b303d3ad3e80e1d9b3befe6650f61f38b72179a4 | 3cb9a8edca6e3fa0f0045b03a9a6444cf8f7affe |
python/cpython | python__cpython-109298 | # test_asyncio and test_compileall leak temporary files/directories
Using the fix from PR #109290, I found that the two following tests leak temporary files/directories.
test_compileall:
```
$ ./python -m test -j1 test_compileall --fail-env-changed -m test.test_compileall.CompileallTestsWithSourceEpoch.test_ddir_empty_multiple_workers
0:00:00 load avg: 0.81 Run tests in parallel using 1 child processes
0:00:01 load avg: 0.81 [1/1/1] test_compileall failed (env changed)
Warning -- files was modified by test_compileall
Warning -- Before: []
Warning -- After: ['pymp-kmfq1ivj/']
Warning -- files was modified by test_compileall
Warning -- Before: []
Warning -- After: ['pymp-kmfq1ivj/']
== Tests result: ENV CHANGED ==
1 test altered the execution environment:
test_compileall
Total duration: 1.2 sec
Total tests: run=1 (filtered)
Total test files: run=1/1 (filtered) env_changed=1
Result: ENV CHANGED
```
test.test_asyncio.test_events:
```
$ ./python -m test -j1 test.test_asyncio.test_events --fail-env-changed -m test.test_asyncio.test_events.TestPyGetEventLoop.test_get_event_loop_new_process
0:00:00 load avg: 0.65 Run tests in parallel using 1 child processes
0:00:01 load avg: 0.65 [1/1/1] test.test_asyncio.test_events failed (env changed)
Warning -- files was modified by test.test_asyncio.test_events
Warning -- Before: []
Warning -- After: ['pymp-tvmenicy/']
Warning -- files was modified by test.test_asyncio.test_events
Warning -- Before: []
Warning -- After: ['pymp-tvmenicy/']
== Tests result: ENV CHANGED ==
1 test altered the execution environment:
test.test_asyncio.test_events
Total duration: 1.4 sec
Total tests: run=1 (filtered)
Total test files: run=1/1 (filtered) env_changed=1
Result: ENV CHANGED
```
<!-- gh-linked-prs -->
### Linked PRs
* gh-109298
* gh-109299
* gh-109301
* gh-109302
* gh-109303
* gh-109304
* gh-109308
<!-- /gh-linked-prs -->
| 09ea4b8706165fd9474165090a0ba86509abd6c8 | 391f3e3ca904449a50b2dd5956684357fdce690b |
python/cpython | python__cpython-109293 | # 3.12 What's New should mention PEP 709 impact on symtable module results
# Documentation
PEP 709 changes the output of the `symtable` module for comprehensions; there is no longer a per-comprehension child symbol table. This should be mentioned in the What's New entry for PEP 709.
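For example, the same snippet yields a `listcomp` child symbol table before 3.12 but none afterwards (a small illustration of the difference, not text from the docs):

```python
import symtable

table = symtable.symtable("[x * 2 for x in data]", "<string>", "exec")
# On 3.11 and earlier this contains a 'listcomp' child table; under
# PEP 709 (3.12+) the comprehension is inlined and the list is empty.
children = [t.get_name() for t in table.get_children()]
```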
<!-- gh-linked-prs -->
### Linked PRs
* gh-109293
* gh-109296
<!-- /gh-linked-prs -->
| 2b1e2f1cd154e6df553eda7936715ea0622b4ecf | ceeb4173aee7b835f553a8286feaa48b98c16124 |
python/cpython | python__cpython-109294 | # Transform inst(X, ...) to op(X, ...) plus macro(X) = X in code generator
This should make it easier for all further passes in the code generator -- they can just look at macros, ignoring the singleton instructions. @markshannon tried this in https://github.com/python/cpython/pull/108997 but got stuck, I hope I can get it unstuck.
It's possible that the synthetic uop will be named `__X` instead of `_X`, since the latter would create some name conflicts. (I don't think that this causes issues with `__X` being treated as "private" in Python, since we won't have Python-level variables named after uops.) (**UPDATE:** The synthetic uop and macro will both be called `X`.)
<!-- gh-linked-prs -->
### Linked PRs
* gh-109294
* gh-110419
<!-- /gh-linked-prs -->
| a7a079798d1c3542ecbb041a868429d515639e35 | 47af18859385fd996bf7f8fa4b33600c59a6d626 |
python/cpython | python__cpython-109277 | # Enhance Python regrtest test runner (test.libregrtest)
In issue gh-109162, I deeply refactored the libregrtest code to ease its maintenance. While refactoring libregrtest, I saw many things that I would like to enhance. So I created this meta ticket to track enhancements :-)
<!-- gh-linked-prs -->
### Linked PRs
* gh-109277
* gh-109278
* gh-109279
* gh-109288
* gh-109290
* gh-109312
* gh-109313
* gh-109326
* gh-109337
* gh-109340
* gh-109355
* gh-109356
* gh-109828
* gh-109831
* gh-109903
* gh-110148
* gh-110326
<!-- /gh-linked-prs -->
| de5f8f7d13c0bbc723eaea83284dc78b37be54b4 | baa6dc8e388e71b2a00347143ecefb2ad3a8e53b |
python/cpython | python__cpython-109267 | # document of msvcrt.kbhit's return value is incorrect
`msvcrt.kbhit`'s [document](https://docs.python.org/3.11/library/msvcrt.html#msvcrt.kbhit) said:
> Return True if a keypress is waiting to be read.
But according to its [clinic function signature](https://github.com/python/cpython/blob/60b8341d07649194aa73108369a1e04ed1848794/PC/msvcrtmodule.c#L221), the return type is long. And as [Microsoft's documentation](https://learn.microsoft.com/en-us/cpp/c-runtime-library/reference/kbhit?view=msvc-170) says, it returns a nonzero value if a key has been pressed.
<!-- gh-linked-prs -->
### Linked PRs
* gh-109267
<!-- /gh-linked-prs -->
| e121fca33bfb03a2d2bc33ba9d6bb5616d09b033 | 42ab2cbd7b5e76e919b70883ae683e789dbd913d |
python/cpython | python__cpython-109269 | # Changes to specialized instructions requires a magic number change.
Until #107971, only changes to the set of instructions present in the pyc file required a magic number change.
Now, removing or adding *any* opcode requires a magic number bump.
There are a few problems with this.
* pyc files have to be regenerated for changes unrelated to the on disk format.
* This makes it much harder to add or remove specializations during the beta phase, as it will invalidate pyc files
* It consumes an unnecessary number of magic numbers. This is not a big deal, but we want to keep to a limit of 50 per release
* It reduces stability in layout and thus performance, making meaningful benchmarking harder.
* It is even easier to forget the magic number bump #109198
### Proposal
Allocate "external" opcodes to odd numbers.
Allocate instrumented opcodes to the top region (as we do now)
Allocate specialized and other internal opcodes to even numbers
It would also be nice to be able to specify the most common opcodes, so that tooling does a decent job of laying out the tables.
We might also consider generating a hash of the opcodes and checking it in `test_opcodes` to ensure that the magic number is updated when necessary.
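The fingerprint idea could look something like this (purely illustrative; `opmap` and the odd-number-means-external convention follow the proposal above):

```python
import hashlib


def opcode_fingerprint(opmap):
    # Hash only the externally visible (odd-numbered, per the proposal)
    # name/number pairs; a test could pin this value so that any change
    # that really needs a magic number bump fails loudly, while purely
    # internal renumbering leaves the fingerprint untouched.
    visible = sorted((name, num) for name, num in opmap.items() if num % 2 == 1)
    return hashlib.sha256(repr(visible).encode()).hexdigest()
```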
@iritkatriel
<!-- gh-linked-prs -->
### Linked PRs
* gh-109269
<!-- /gh-linked-prs -->
| 8b55adfa8ff05477b4be7def36db7b66c73f181d | 247ee1bf841524667f883ebba5e343101f609026 |
python/cpython | python__cpython-109238 | # test_site: test_underpth_basic() fails if the current directory is non-ASCII
Failure on GHA Windows x64 CI with PR #109229 which runs test_site in a non-ASCII working directory:
```
FAIL: test_underpth_basic (test.test_site._pthFileTests.test_underpth_basic)
----------------------------------------------------------------------
Traceback (most recent call last):
File "D:\a\cpython\cpython\Lib\test\test_site.py", line 621, in test_underpth_basic
self.assertEqual(
AssertionError: Lists differ: ['D:\[40 chars]_1460\udce6', 'D:\\a\\cpython\\cpython\\PCbuil[396 chars]18f'] != ['D:\[40 chars]_1460�', 'D:\\a\\cpython\\cpython\\PCbuild\\am[391 chars]18f']
First differing element 0:
'D:\\a\\cpython\\cpython\\build\\test_python_1460\udce6'
'D:\\a\\cpython\\cpython\\build\\test_python_1460�'
Diff is 690 characters long. Set self.maxDiff to None to see it. : sys.path is incorrect
```
build: https://github.com/python/cpython/actions/runs/6138224249/job/16654899364?pr=109229
<!-- gh-linked-prs -->
### Linked PRs
* gh-109238
* gh-109239
* gh-109240
<!-- /gh-linked-prs -->
| cbb3a6f8ada3d133c3ab9f9465b65067fce5bb42 | d6892c2b9263b39ea1c7905667942914b6a24b2c |
python/cpython | python__cpython-109322 | # Enhance `sqlite3` connection context management documentation with `contextlib.closing`
This issue covers an opportunity to enhance the existing [`sqlite3` docs on context management](https://docs.python.org/3/library/sqlite3.html#sqlite3-connection-context-manager) by adding a mention of [`contextlib.closing`](https://docs.python.org/3/library/contextlib.html#contextlib.closing) to help guide developers when performing singular transactions. Adding this section to the documentation would help guide developers in use cases where they'd like to close the `sqlite3` connection without an additional `sqlite3.connect().close()` call after a context manager closes.
For the default `sqlite3` context-management cases which don't use `contextlib.closing`, we could outline or strengthen the existing mention that the intention is to perform multiple transactions without fully closing the connection. This might help distinguish development options and decision-making for Python developers reading through the documentation.
A `contextlib.closing` example we could include:
```python
import sqlite3
from contextlib import closing
with closing(sqlite3.connect("workfile.sqlite")) as cx:
with cx:
cx.execute("CREATE TABLE lang(id INTEGER PRIMARY KEY, name VARCHAR UNIQUE)")
cx.execute("INSERT INTO lang(name) VALUES(?)", ("Python",))
# no need to close the connection as contextlib.closing closed it for us
```
Related to [a discussion from discuss.python.org](https://discuss.python.org/t/implicitly-close-sqlite3-connections-with-context-managers/33320). Many thanks to @erlend-aasland for suggesting the `contextlib.closing` approach and the others in the discussion thread for their help in discussing this topic.
<!-- gh-linked-prs -->
### Linked PRs
* gh-109322
* gh-110293
* gh-110294
<!-- /gh-linked-prs -->
| 4227bfa8b273207a2b882f7d69c8ac49c3d2b57d | 2d4865d775123e8889c7a79fc49b4bf627176c4b |
python/cpython | python__cpython-109233 | # test_pyexpat: test_exception() fails on Ubuntu and macOS jobs on GitHub Actions
```
FAIL: test_exception (test.test_pyexpat.HandlerExceptionTest.test_exception)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/runner/work/cpython/cpython-ro-srcdir/Lib/test/test_pyexpat.py", line 452, in test_exception
parser.Parse(b"<a><b><c/></b></a>", True)
File "../cpython-ro-srcdir/Modules/pyexpat.c", line 421, in StartElement
File "/home/runner/work/cpython/cpython-ro-srcdir/Lib/test/test_pyexpat.py", line 442, in StartElementHandler
raise RuntimeError(name)
RuntimeError: a
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/runner/work/cpython/cpython-ro-srcdir/Lib/test/test_pyexpat.py", line 472, in test_exception
self.assertIn('call_with_frame("StartElement"', entries[1][3])
AssertionError: 'call_with_frame("StartElement"' not found in ''
```
* Ubuntu logs: https://github.com/python/cpython/actions/runs/6138224249/job/16654899227?pr=109229
* macOS logs: https://github.com/python/cpython/actions/runs/6138224249/job/16654899199?pr=109229
Comparison of Ubuntu logs:
* before (ok): https://github.com/python/cpython/actions/runs/6134648048/job/16647654038
* commit 43039181bb7cef1041080fb8da1b8e116a3f75f3
* configure: ``checking for --with-system-expat... no``
* test.pythoninfo: ``expat.EXPAT_VERSION: expat_2.5.0``
* Tests: Result: SUCCESS
* after (error): https://github.com/python/cpython/actions/runs/6138224249/job/16654899227?pr=109229 -- PR
* commit 28ddef7ff24005e481ec5195f0c0d85e3d15579b -- PR #109229
* configure: ``checking for --with-system-expat... no``
* test.pythoninfo: ``expat.EXPAT_VERSION: expat_2.5.0``
* Test: 1 test failed: test_pyexpat
<!-- gh-linked-prs -->
### Linked PRs
* gh-109233
* gh-109241
* gh-109242
<!-- /gh-linked-prs -->
| e55aab95786e0e9fb36a9a1122d2d0fb3d2403cd | cbb3a6f8ada3d133c3ab9f9465b65067fce5bb42 |
python/cpython | python__cpython-109377 | # SystemError for pep695 type parameter with the same name as the inner class
# Bug report
### Bug description:
The following script shows the SystemError:
```python
class name_1[name_4]:
class name_4[name_2](name_4):
name_4
```
output (Python 3.12.0rc2+):
```python
SystemError: compiler_lookup_arg(name='name_4') with reftype=3 failed in <generic parameters of name_4>; freevars of code name_4: ('.type_params', 'name_4')
```
minimizing it further leads to some different crashes:
The probably most interesting one is this, which does not look special at all.
```python
class name_1[name_3]:
class name_4[name_2](name_5):
pass
```
output (Python 3.12.0rc2+):
```python
Traceback (most recent call last):
File "/home/frank/projects/pysource-codegen/bug3.py", line 1, in <module>
class name_1[name_3]:
File "/home/frank/projects/pysource-codegen/bug3.py", line 1, in <generic parameters of name_1>
class name_1[name_3]:
File "/home/frank/projects/pysource-codegen/bug3.py", line 2, in name_1
class name_4[name_2](name_5):
File "/home/frank/projects/pysource-codegen/bug3.py", line 2, in <generic parameters of name_4>
class name_4[name_2](name_5):
^^^^^^
NameError: name 'name_5' is not defined. Did you mean: 'name_2'?
Modules/gcmodule.c:113: gc_decref: Assertion "gc_get_refs(g) > 0" failed: refcount is too small
Enable tracemalloc to get the memory block allocation traceback
object address : 0x7f2c57a12930
object refcount : 1
object type : 0x557ad9ef1fe0
object type name: dict
object repr : {'__module__': '__main__', '__qualname__': 'name_1', '__type_params__': (name_3,)}
Fatal Python error: _PyObject_AssertFailed: _PyObject_AssertFailed
Python runtime state: finalizing (tstate=0x0000557ada04eba0)
Current thread 0x00007f2c57f21280 (most recent call first):
Garbage-collecting
<no Python frame>
```
@JelleZijlstra I think this is another one for you. Feel free to create a second issue if you think that they are unrelated.
### CPython versions tested on:
3.12
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-109377
* gh-109389
* gh-109410
<!-- /gh-linked-prs -->
| de1428f8c234a8731ced99cbfe5cd6c5c719e31d | 49258efada0cb0fc58ccffc018ff310b8f7f4570 |
python/cpython | python__cpython-119620 | # Invalid "equivalents" of the complex type constructor in docs
The sphinx docs says:
```
class complex(real=0, imag=0)
[...]
Return a complex number with the value real + imag*1j or convert a string or number to a complex number.
[...]
```
The docstring (btw it doesn't mention a string as an argument):
```
>>> print(complex.__doc__)
Create a complex number from a real part and an optional imaginary part.
This is equivalent to (real + imag*1j) where imag defaults to 0.
```
That's wrong, e.g.:
```pycon
>>> complex(0.0, -0.0)
-0j
>>> 0.0 + (-0.0)*1j
0j
>>> complex(-0.0, -0.0)
(-0-0j)
>>> -0.0 + (-0.0)*1j
(-0+0j)
>>> complex(-0.0, 0.0)
(-0+0j)
>>> -0.0 + 0.0*1j
0j
```
<details>
<summary>Here is an attempt (patch) to solve, let me know if this is worth a PR:</summary>
```diff
diff --git a/Doc/library/functions.rst b/Doc/library/functions.rst
index d9974c6350..78b85658ef 100644
--- a/Doc/library/functions.rst
+++ b/Doc/library/functions.rst
@@ -373,8 +373,8 @@ are always available. They are listed here in alphabetical order.
.. class:: complex(real=0, imag=0)
complex(string)
- Return a complex number with the value *real* + *imag*\*1j or convert a string
- or number to a complex number. If the first parameter is a string, it will
+ Create a complex number from a real part and an optional imaginary part
+ or convert a string to a complex number. If the first parameter is a string, it will
be interpreted as a complex number and the function must be called without a
second parameter. The second parameter can never be a string. Each argument
may be any numeric type (including complex). If *imag* is omitted, it
diff --git a/Objects/complexobject.c b/Objects/complexobject.c
index 0e96f54584..336b703233 100644
--- a/Objects/complexobject.c
+++ b/Objects/complexobject.c
@@ -886,9 +886,8 @@ complex.__new__ as complex_new
real as r: object(c_default="NULL") = 0
imag as i: object(c_default="NULL") = 0
-Create a complex number from a real part and an optional imaginary part.
-
-This is equivalent to (real + imag*1j) where imag defaults to 0.
+Create a complex number from a real part and an optional imaginary part
+or convert a string to a complex number.
[clinic start generated code]*/
static PyObject *
```
</details>
**Edit**:
Another instance of this issue is in the [cmath docs](https://docs.python.org/3/library/cmath.html#conversions-to-and-from-polar-coordinates):
```rst
A Python complex number ``z`` is stored internally using *rectangular*
or *Cartesian* coordinates. It is completely determined by its *real
part* ``z.real`` and its *imaginary part* ``z.imag``. In other
words::
z == z.real + z.imag*1j
```
E.g.:
```pycon
>>> from cmath import inf
>>> complex(0.0, inf)
infj
>>> 0.0 + inf*1j
(nan+infj)
```
<!-- gh-linked-prs -->
### Linked PRs
* gh-119620
* gh-119635
* gh-119687
* gh-119795
* gh-119796
* gh-119803
* gh-119805
<!-- /gh-linked-prs -->
| ef01e95ae3659015c2ebe4ecdc048aadcda89930 | deda85717b2557c6bad8b9a51719c60ac510818f |
python/cpython | python__cpython-109257 | # Reference count leak when there is an error in the BUILD_MAP opcode
# Bug report
### Bug description:
```c
inst(BUILD_MAP, (values[oparg*2] -- map)) {
map = _PyDict_FromItems(
values, 2,
values+1, 2,
oparg);
if (map == NULL) <--------------
goto error; <------------------
DECREF_INPUTS();
ERROR_IF(map == NULL, error);
}
```
If an error occurs, the reference counts of the input values are never decremented
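A pure-Python way to exercise this error path (the class name here is illustrative, not from the issue): a dict display with an unhashable key makes `_PyDict_FromItems` fail inside `BUILD_MAP`, and the leaked references can be observed with `sys.getrefcount` on a build with the bug:

```python
import sys

class Unhashable:
    __hash__ = None  # instances cannot be used as dict keys

v = object()
before = sys.getrefcount(v)
for _ in range(100):
    try:
        {Unhashable(): v}   # BUILD_MAP raises TypeError here
    except TypeError:
        pass
after = sys.getrefcount(v)
# On a fixed interpreter the counts match; on a leaking build
# `after` grows by one per iteration.
assert after == before
```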
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Windows
<!-- gh-linked-prs -->
### Linked PRs
* gh-109257
* gh-109323
* gh-109324
<!-- /gh-linked-prs -->
| 247ee1bf841524667f883ebba5e343101f609026 | 1110c5bc828218086f6397ec05a9312fb73ea30a |
python/cpython | python__cpython-109285 | # Combine SAVE_IP and SAVE_CURRENT_IP into a new uop
See https://github.com/faster-cpython/ideas/issues/621#issuecomment-1712697869:
> Now, there's an *explicit* `SAVE_CURRENT_IP` in some macros (before every `_POP_FRAME` and `_PUSH_FRAME`). That's because it also acts as a flag for the code generator, which, combined with the preceding `SAVE_IP`, makes it effectively save a pointer to the *next* instruction (which is what's needed by the frame push/pop uops). But there's so much special-casing here that we might as well either introduce a new uop for the combined special effects or special-case `_POP_FRAME` and `_PUSH_FRAME`.
Also rename `SAVE_IP` to `SET_IP` (from the [same issue](https://github.com/faster-cpython/ideas/issues/621#issuecomment-1709845175)). [**DONE**]
<!-- gh-linked-prs -->
### Linked PRs
* gh-109285
* gh-110755
* gh-111001
<!-- /gh-linked-prs -->
| fbaf77eb9bd1e6812ebf984d32b29b025cc037d6 | 1ee50e2a78f644d81d341a08562073ad169d8cc7 |
python/cpython | python__cpython-109210 | # Increase minimum supported Sphinx to 4.2
# Documentation
#108184 introduces a nice improvement to https://docs.python.org/dev/library/string.html by linking the tokens. However linking to other groups was only introduced in Sphinx 3.5 (https://github.com/sphinx-doc/sphinx/pull/8247).
Using the same survey as https://github.com/python/cpython/issues/86986#issuecomment-1140493331:
* Ubuntu 23.04: [Python 3.11](https://packages.ubuntu.com/lunar/python3), [Sphinx 5.3.0](https://packages.ubuntu.com/lunar/python3-sphinx)
* Debian stable: [Python 3.11](https://packages.debian.org/stable/python3), [Sphinx 5.3.0](https://packages.debian.org/stable/python3-sphinx)
* RHEL [doesn't seem to list versions](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/9/html/package_manifest/repositories#CodeReadyLinuxBuilder-repository)
* Fedora 38: [Python 3.11](https://packages.fedoraproject.org/pkgs/python3.11/python3/), [Sphinx 5.3.0](https://packages.fedoraproject.org/pkgs/python-sphinx/python3-sphinx/)
* OpenSUSE rolling: [Sphinx 7.2.4](http://download.opensuse.org/tumbleweed/repo/oss/noarch/python39-Sphinx-7.2.4-1.1.noarch.rpm)
**Updated survey**: https://github.com/python/cpython/issues/109209#issuecomment-1713859051
<!-- gh-linked-prs -->
### Linked PRs
* gh-109210
* gh-109636
* gh-109637
<!-- /gh-linked-prs -->
| 712cb173f8e1d02c625a40ae03bba57b0c1c032a | 9ccf0545efd5bc5af5aa51774030c471d49a972b |
python/cpython | python__cpython-109225 | # SystemError when printing symbol table entry
# Bug report
### Bug description:
Platform: Windows10
Tested version: 3.10, 3.11, 3.12rc2
```python
import symtable
script = "a=0"
symt = symtable.symtable(script, "test.py", "exec")
print(symt._table)
```
```
<symtable entry top(-1), line 0>OverflowError: Python int too large to convert to C long
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "x:\python\oneliner\__develop\symt.py", line 8, in <module>
print(symt._table)
SystemError: <built-in method write of _io.TextIOWrapper object at 0x000001508B5559A0> returned a result with an exception set
```
### CPython versions tested on:
3.12
### Operating systems tested on:
Windows
<!-- gh-linked-prs -->
### Linked PRs
* gh-109225
* gh-109227
* gh-109228
<!-- /gh-linked-prs -->
| 429749969621b149c1a7c3c004bd44f52bec8f44 | 2dd6a86c4ee604b331ed739c2508b0d0114993c6 |
python/cpython | python__cpython-109289 | # LOAD_GLOBAL super reports wrong source positions
# Bug report
### Bug description:
LOAD_GLOBAL super includes the parentheses of `super()` in the source positions.
script:
```python
import dis
source="""
class VerifierFailure:
def __init__(self):
super().__init__
"""
code=compile(source,"<file>","exec")
bc=code.co_consts[0].co_consts[1]
load_global=list(dis.Bytecode(bc))[2]
dis.dis(bc)
print(load_global)
assert load_global.positions.end_col_offset==13
```
output (Python 3.12.0rc2+):
```python
0 COPY_FREE_VARS 1
4 2 RESUME 0
5 4 LOAD_GLOBAL 0 (super)
14 LOAD_DEREF 1 (__class__)
16 LOAD_FAST 0 (self)
18 LOAD_SUPER_ATTR 4 (__init__)
22 POP_TOP
24 RETURN_CONST 0 (None)
Instruction(opname='LOAD_GLOBAL', opcode=116, arg=0, argval='super', argrepr='super', offset=4, starts_line=5, is_jump_target=False, positions=Positions(lineno=5, end_lineno=5, col_offset=8, end_col_offset=15))
Traceback (most recent call last):
File "/home/frank/projects/cpython/../executing/bug.py", line 21, in <module>
assert load_global.positions.end_col_offset==13
AssertionError
```
I bisected this problem down to 0dc8b50d33208e9ca4fc3d959c6798529731f020
I hope that it is possible to restore the old source positions. I am currently working on Python 3.12 support for [executing](https://github.com/alexmojaki/executing), which relies on correct source positions to perform a correct bytecode -> AST mapping.
### CPython versions tested on:
3.12
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-109289
* gh-109291
<!-- /gh-linked-prs -->
| ceeb4173aee7b835f553a8286feaa48b98c16124 | fbaf77eb9bd1e6812ebf984d32b29b025cc037d6 |
python/cpython | python__cpython-110239 | # Build fails with recent libedit versions
# Bug report
### Bug description:
`readline_set_completion_display_matches_hook_impl` will use either `VFunction` (if `_RL_FUNCTION_TYPEDEF` is defined) or `rl_compdisp_func_t`. The former is deprecated but available in current readline, while the latter has historically not been available in libedit. However, libedit recently added `rl_compdisp_func_t` and removed `VFunction` entirely: http://cvsweb.netbsd.org/bsdweb.cgi/src/lib/libedit/readline/readline.h.diff?r1=1.53&r2=1.54&sortby=date
We ship a recent libedit in MacPorts, so we see a compilation failure like this:
```
./Modules/readline.c:448:10: error: use of undeclared identifier 'VFunction'; did you mean 'function'?
(VFunction *)on_completion_display_matches_hook : 0;
^~~~~~~~~
```
It's easy enough for us to patch around this, but I guess if compatibility with both current libedit and the older version shipped by Apple is desired, there needs to be another check for the availability of this type.
### CPython versions tested on:
3.12
### Operating systems tested on:
macOS
<!-- gh-linked-prs -->
### Linked PRs
* gh-110239
* gh-110562
* gh-110575
<!-- /gh-linked-prs -->
| f4cb0d27cc08f490c42a22e646eb73cc7072d54a | 12cc6792d0ca1d0b72712d77c6efcb0aa0c7e7ba |
python/cpython | python__cpython-92555 | # What's New in Python 3.12 (copyediting)
# Documentation
This is a meta-issue to capture PRs for copyedits to What's New in Python 3.12.
<!-- gh-linked-prs -->
### Linked PRs
* gh-92555
* gh-92562
* gh-94762
* gh-105278
* gh-105282
* gh-105315
* gh-105321
* gh-106984
* gh-106986
* gh-107005
* gh-107106
* gh-108878
* gh-108890
* gh-109159
* gh-109273
* gh-109654
* gh-109655
* gh-109656
* gh-109657
* gh-109658
* gh-109659
* gh-109660
* gh-109661
* gh-109662
* gh-109663
* gh-109664
* gh-109665
* gh-109681
* gh-109684
* gh-109687
* gh-109689
* gh-109713
* gh-109715
* gh-109716
* gh-109728
* gh-109729
* gh-109730
* gh-109732
* gh-109733
* gh-109751
* gh-109753
* gh-109754
* gh-109755
* gh-109756
* gh-109760
* gh-109766
* gh-109768
* gh-109770
* gh-109806
* gh-109815
* gh-109816
* gh-109821
* gh-109825
* gh-109826
* gh-109827
* gh-109830
* gh-109836
* gh-109844
* gh-109880
* gh-109925
* gh-109927
* gh-109971
* gh-110047
* gh-110117
* gh-110215
<!-- /gh-linked-prs -->
| 11a608d2b1b9c10079a1fe2ebf815a638c640c79 | d8104d13cd80737f5efe1cd94aeec5979f912cd0
python/cpython | python__cpython-109192 | # pathlib.Path.resolve() mishandles symlink loops
# Bug report
### Bug description:
Two closely-related issues in [`pathlib.Path.resolve()`](https://docs.python.org/3/library/pathlib.html#pathlib.Path.resolve):
First, `resolve(strict=True)` raises `RuntimeError` rather than `OSError` when a symlink loop is encountered. This is done only for backwards compatibility, since #25264 made pathlib call `os.path.realpath()`. It should raise `OSError(ELOOP)`.
Second, `resolve(strict=False)` suppresses every kind of OS error _except_ symlink loop errors. Again this is only for backwards compatibility. It should suppress exceptions about symlink loops.
Relevant code:
https://github.com/python/cpython/blob/e21c89f9841488a1d4ca1024dc97c1aba44e0c87/Lib/pathlib.py#L1233-L1252
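A POSIX repro sketch for the first point (the temp-dir layout and names are made up): strictly resolving a self-referential symlink raises `RuntimeError` today, where `OSError` with `errno.ELOOP` would be expected:

```python
import errno
import os
import tempfile
from pathlib import Path

with tempfile.TemporaryDirectory() as d:
    loop = Path(d, "loop")
    os.symlink(loop, loop)          # "loop" points at itself
    try:
        loop.resolve(strict=True)
        raised = None
    except RuntimeError:            # current behaviour described above
        raised = "RuntimeError"
    except OSError as exc:          # behaviour this issue argues for
        raised = "OSError"
        assert exc.errno == errno.ELOOP
assert raised is not None           # the loop is always detected
```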
### CPython versions tested on:
3.11, 3.12, CPython main branch
### Operating systems tested on:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-109192
<!-- /gh-linked-prs -->
| ecd813f054e0dee890d484b8210e202175abd632 | 859618c8cd5de86a975e68d7e5d20c04bc5db2e5 |
python/cpython | python__cpython-109201 | # The docs of "traceback.format_exception_only" is incorrect when the Exception has notes
# Documentation
The documentation of [`traceback.format_exception_only`](https://docs.python.org/3.11/library/traceback.html#traceback.format_exception_only) states the following (emphasis mine):
> Format the exception part of a traceback using an exception value such as given by `sys.last_value`. The return value is a list of strings, each ending in a newline. Normally, the list contains a single string; however, for [SyntaxError](https://docs.python.org/3.11/library/exceptions.html#SyntaxError) exceptions, it contains several lines that (when printed) display detailed information about where the syntax error occurred. **The message indicating which exception occurred is the always last string in the list.**
The last string isn't always the formatted `Exception` instance. Actually, the last string can be any arbitrary string if notes are added to the `Exception`, as demonstrated by the following example:
```python
import traceback
e = ValueError("The error")
e.add_note("Notes:\n1. Part 1\n2. Part 2")
output = traceback.format_exception_only(e)
print(output)
```
The `output` is actually a list of 4 elements, with the message representing the raised `Exception` being in first position:
```python
['ValueError: The error\n', 'Notes:\n', '1. Part 1\n', '2. Part 2\n']
```
This also applies to the documentation of [`TracebackException.format_exception_only`](https://docs.python.org/3.11/library/traceback.html#traceback.TracebackException.format_exception_only).
<!-- gh-linked-prs -->
### Linked PRs
* gh-109201
* gh-109334
* gh-109336
<!-- /gh-linked-prs -->
| 0e76cc359ba5d5e29d7c75355d7c1bc7e817eecf | e121fca33bfb03a2d2bc33ba9d6bb5616d09b033 |
python/cpython | python__cpython-111548 | # Exceptions slow in 3.11, depending on location
# Bug report
### Bug description:
(From [Discourse](https://discuss.python.org/t/why-does-unreached-code-take-linear-time/33295?u=pochmann))
Consider these two functions:
```python
def short():
try:
if 0 == 1:
unreached
raise RuntimeError
except RuntimeError:
pass
def long():
try:
if 0 == 1:
unreached; unreached; unreached; unreached; unreached; unreached; unreached; unreached; unreached; unreached
unreached; unreached; unreached; unreached; unreached; unreached; unreached; unreached; unreached; unreached
unreached; unreached; unreached; unreached; unreached; unreached; unreached; unreached; unreached; unreached
unreached; unreached; unreached; unreached; unreached; unreached; unreached; unreached; unreached; unreached
unreached; unreached; unreached; unreached; unreached; unreached; unreached; unreached; unreached; unreached
unreached; unreached; unreached; unreached; unreached; unreached; unreached; unreached; unreached; unreached
unreached; unreached; unreached; unreached; unreached; unreached; unreached; unreached; unreached; unreached
unreached; unreached; unreached; unreached; unreached; unreached; unreached; unreached; unreached; unreached
unreached; unreached; unreached; unreached; unreached; unreached; unreached; unreached; unreached; unreached
unreached; unreached; unreached; unreached; unreached; unreached; unreached; unreached; unreached; unreached
raise RuntimeError
except RuntimeError:
pass
```
The only difference is that `long()` has 100 unreached statements instead of just one. But it takes much longer in Python 3.11 (and a bit longer in Python 3.10). Times from @jamestwebber ([here](https://discuss.python.org/t/why-does-unreached-code-take-linear-time/33295/19?u=pochmann)):
```
Python: 3.11.5 | packaged by conda-forge | (main, Aug 27 2023, 03:34:09) [GCC 12.3.0]
176.5 ± 0.4 ns short
644.7 ± 0.6 ns long
Python: 3.10.12 | packaged by conda-forge | (main, Jun 23 2023, 22:40:32) [GCC 12.3.0]
150.7 ± 0.1 ns short
167.0 ± 0.2 ns long
```
Why? Shouldn't it just jump over them all and be just as fast as `short()`?
<details><summary>Benchmark script</summary>
[Attempt This Online!](https://ato.pxeger.com/run?1=7VW9TsMwEB6R_BS3NalCaERBEJSRHbFWVRTac2tBnMi-VFRVn4SlC-y8Ag_BwNNgO0nTMILY6sn-7uf7zsN9L2_lmpaF3O1eK-KnV18nn1wVOZDIURCIvCwUNS-2j7R4iYqns6KShAoy7WJ1lqaMhCYx021ujpkMDD7HFWsgvdaMsTly0Evz9vyYgTmk1vXFHsFhBEkCUQfZU0mF2WyJ8z2qMqER7o0WI-JWqUK5ED7PsKQe3nUqM90qeCrk4lcCbv7_eiQ-Eh-Jj8R_J_7DkuSVNMs8qVd14PalAQsFKQhpGssFeuftAjVdbO6GxzCZgs3iNsv12LoUt_WNSWiP-x0X2aoJwRAivHZ1ZOu0ocS559pO-NSfxBfTaTcUUqVM88HGeoxH2o8vw4hv4eMdNs5wHDZ2mNQwcJU_pEej0YGOvuL-5icjsTZEjwcgq_wBVRKNhsOxD2dG97if3WgOs7JEaWbwWY-gGc3xBPCI68T9yoGWUglJXvtXAfAwTWWWY5rWreq4z1h9Gdw5Q48HgfXXcIVKi0L6tb03Lt-6_Tc)):
```python
from timeit import timeit
from time import perf_counter as time
from statistics import mean, stdev
import sys
def short():
try:
if 0 == 1:
unreached
raise RuntimeError
except RuntimeError:
pass
def long():
try:
if 0 == 1:
unreached; unreached; unreached; unreached; unreached; unreached; unreached; unreached; unreached; unreached
unreached; unreached; unreached; unreached; unreached; unreached; unreached; unreached; unreached; unreached
unreached; unreached; unreached; unreached; unreached; unreached; unreached; unreached; unreached; unreached
unreached; unreached; unreached; unreached; unreached; unreached; unreached; unreached; unreached; unreached
unreached; unreached; unreached; unreached; unreached; unreached; unreached; unreached; unreached; unreached
unreached; unreached; unreached; unreached; unreached; unreached; unreached; unreached; unreached; unreached
unreached; unreached; unreached; unreached; unreached; unreached; unreached; unreached; unreached; unreached
unreached; unreached; unreached; unreached; unreached; unreached; unreached; unreached; unreached; unreached
unreached; unreached; unreached; unreached; unreached; unreached; unreached; unreached; unreached; unreached
unreached; unreached; unreached; unreached; unreached; unreached; unreached; unreached; unreached; unreached
raise RuntimeError
except RuntimeError:
pass
funcs = short, long
for _ in range(3):
times = {f: [] for f in funcs}
def stats(f):
ts = [t * 1e9 for t in sorted(times[f])[:5]]
return f'{mean(ts):6.1f} ± {stdev(ts):4.1f} ns '
for _ in range(100):
for f in funcs:
t = timeit(f, number=10**4) / 1e4
times[f].append(t)
for f in sorted(funcs, key=stats):
print(stats(f), f.__name__)
print()
print('Python:', sys.version)
```
</details>
In fact it takes time linear in how many unreached statements there are. Times for 100 to 100000 unreached statements (on one line, before the `try`):
```
100 2.6 μs
1000 24.3 μs
10000 253.3 μs
100000 2786.2 μs
```
<details><summary>Benchmark script</summary>
```python
from time import perf_counter as time
from timeit import repeat
for e in range(2, 6):
n = 10 ** e
exec(f'''def f():
if 0 == 1:
{'unreached;' * n}
try:
raise RuntimeError
except RuntimeError:
pass''')
number = 10**6 // n
t = min(repeat(f, number=number)) / number
print(f'{n:6} {t * 1e6 :7.1f} μs')
```
[Attempt This Online!](https://ato.pxeger.com/run?1=fZJNTsMwEIXFNqd4OydRIQ2LgIKy5AJcAIV0TL2IY00cqVWUPXdg0w2cgjOwh9Pg_DRVRGE2tud9Iz0_-_Xd7O220ofDW2Pl5e33RSu5KmFVSVClqdjCEMvHomq0JUZeD5o3U8oeOSZDufU8WTHcsAbn-pn86xWSIPXgSiNDvEYYgoYz7ajwpRBiQxLSn6i-lMQamaNPrb5a0WimvNjS5k4ghO5m2fJ-yXKuasKDs-1c3jNXPMu0K8jYhbacNXldO1vB6Lopn9zNe-thmCCKoIe-da1SaX-8ty9XE5mNSxAgmjoDblhpR4lWp0mH1jr7MSVIb65i2eHzoxbB-AZfL4s4zkRxCuHczvsVxx9R_BtDH8Ho5_g3fgA)
</details>
The slowness happens when the unreached statements are anywhere before the `raise`, and not when they're anywhere after the `raise` ([demo](https://discuss.python.org/t/why-does-unreached-code-take-linear-time/33295/22?u=pochmann)). So it seems what matters is location of the `raise` in the function. Long code before it somehow makes it slow.
This has a noticeable impact on real code I wrote (assuming I pinpointed the issue correctly): two solutions for a task, and one was oddly slower (~760 vs ~660 ns) despite executing the exact same sequence of bytecode operations. Just one jump length differed, leading to a `raise` at a larger address.
<details><summary>Benchmark script with those two solutions and the relevant test case:</summary>
The functions shall return the one item from the iterable, or raise an exception if there are fewer or more than one. Testing with an empty iterable, both get the iterator, iterate it (nothing, since it's empty), then raise. The relevant difference appears to be that the slower one has the `raise` written at the bottom, whereas the faster one has it near the top.
Sample times:
```
664.4 ± 8.6 ns one_sequential
762.1 ± 28.8 ns one_nested
Python: 3.11.4 (main, Jun 24 2023, 10:18:04) [GCC 13.1.1 20230429]
```
Code:
```python
from timeit import timeit
from statistics import mean, stdev
from itertools import repeat, starmap, islice
import sys
def one_nested(iterable, too_short=None, too_long=None):
it = iter(iterable)
for first in it:
for second in it:
raise too_long or ValueError(
'Expected exactly one item in iterable, but',
f'got {first!r}, {second!r}, and perhaps more.'
)
return first
raise too_short or ValueError('too few items in iterable (expected 1)')
def one_sequential(iterable, too_short=None, too_long=None):
it = iter(iterable)
for first in it:
break
else:
raise too_short or ValueError('too few items in iterable (expected 1)')
for second in it:
raise too_long or ValueError(
'Expected exactly one item in iterable, but '
f'got {first!r}, {second!r}, and perhaps more.'
)
return first
funcs = one_nested, one_sequential
def empty(f):
iterable = iter(())
too_short = RuntimeError()
for _ in repeat(None, 10**4):
try:
f(iterable, too_short)
except RuntimeError:
pass
for case in empty,:
times = {f: [] for f in funcs}
def stats(f):
ts = [t * 1e9 for t in sorted(times[f])[:5]]
return f'{mean(ts):6.1f} ± {stdev(ts):4.1f} ns '
for _ in range(100):
for f in funcs:
t = timeit(lambda: case(f), number=1) / 1e4
times[f].append(t)
for f in sorted(funcs, key=stats):
print(stats(f), f.__name__)
print()
print('Python:', sys.version)
```
[Attempt This Online!](https://ato.pxeger.com/run?1=rVXNjtMwEL5xyFOYU5IqlEYqCCLluFeEkOBSVZGbjltrE9vYztKo6pNw2QvceQUeg6fBP_ntohWgzSn2zNjfN9_M-Ot30eojZ_f33xpNXrz59ewjkbxGmtZANaK14FJ3q8BZlMaaKk1L1VtrwCwx-3u48y5Ug9ScV4OHBAFYWx8saywSRFVFSwg6s2pVEAR7IIgzKBgoDfvIHoJ3FSTIHFWoo3HM3xm7X1ecHdwyzgJkPoM1d_cOcbHbJ1wiQqUyVJixe-feoKDkbH9tsZ_EVMFwETK-n3DVwI2UXEYzT_uFNycBpQGN4IRLXbWWh0VT-7N7IrtGh8mDaBIeuEZnh_K5vCTo7HG5f2zwCZBHLBSquYRlOIuPh5UE3UjmuQZzCi53VxxCY0AEvjiUagoTRdCzSeMwngij4HMDTFNcPYE4j6uzk4Bv3QoqBeP-k5F6vAT-Xv5_kB7Npftf2TvwM72DgDTMNGQ-aaDkSjOvI9RCtxEZhOnS08kTxb6gxgTn6EPDbPt77mNTFZaf7-vIK5-uFot1PCZRy3beVORPZTNWMJxKEHp23zxeYGXmhL28xEYec79jk2Q-IzbKpuBMMrTZ-tqyTi41F-diM2Dnlxoy4AJt1EajBUrhrYtz9agMOjOH3LEbso032avt9kG_hWc7_iKt4uz1MiUX9POHUdLOQre3dntMdeKPmcPsAFG6WsXzkTQinnO3SvgpHFW43u1x5pJgeCSINfUOZJ7G6KVhsJ7HdeiXWAhghs1kLE5IuhsTdAtt7vIzQSUkZTrqs5YgsiwKhmsoCn-Ut5uq9D_he_eeZGFix_ryDqSinMX-dekemf6x-Q0)
</details>
### CPython versions tested on:
3.10, 3.11
### Operating systems tested on:
Linux, macOS
<!-- gh-linked-prs -->
### Linked PRs
* gh-111548
* gh-111550
* gh-111551
* gh-111948
* gh-111951
<!-- /gh-linked-prs -->
| abb15420c11d9dda9c89f74eac8417240b321109 | ad6380bc340900e0977ce54928b0d3e166c7cf99 |
python/cpython | python__cpython-109197 | # Notes added to "SyntaxError" (and subclasses) are not displayed
# Bug report
### Bug description:
Hi.
I noticed that using [`add_note()`](https://docs.python.org/3/library/exceptions.html#BaseException.add_note) with an error of type `SyntaxError` (or its derived such as `IndentationError` and `TabError`) produces a traceback without the expected notes.
Here is a minimal reproducible example:
```python
try:
exec("a = 7 *")
except SyntaxError as e:
e.add_note("Note")
raise
```
Or alternatively:
```python
e = SyntaxError("invalid syntax", ("<string>", 1, 8, "a = 7 *\n", 1, 8))
e.add_note("Note")
raise e
```
The output of these examples is:
```python
Traceback (most recent call last):
File "/home/delgan/test.py", line 2, in <module>
exec("a = 7 *")
File "<string>", line 1
a = 7 *
^
SyntaxError: invalid syntax
```
The expected `"Note"` is missing.
Since neither PEP 678 nor the documentation mention that `SyntaxError` is not fully compatible with notes, I assume that this is probably a bug.
### CPython versions tested on:
3.11, 3.12
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-109197
* gh-109280
* gh-109283
<!-- /gh-linked-prs -->
| ecd21a629a2a30bcae89902f7cad5670e9441e2c | de5f8f7d13c0bbc723eaea83284dc78b37be54b4 |
python/cpython | python__cpython-119234 | # Replace _PyFrame_OpAlreadyRan by a check for incomplete frame
The ``_PyFrame_OpAlreadyRan`` function in frameobject.c is not covered by any tests.
For details see https://github.com/faster-cpython/ideas/issues/623.
<!-- gh-linked-prs -->
### Linked PRs
* gh-119234
<!-- /gh-linked-prs -->
| 77ff28bb6776d583e593937df554cbf572cb47b0 | f49df4f486e531ff2666eb22854117c564b3de3d |
python/cpython | python__cpython-109175 | # Add support of SimpleNamespace in copy.replace()
# Feature or enhancement
SimpleNamespace is used as a simple data record type, a simpler alternative to named tuples and dataclasses. Although it is easy to create a copy of a SimpleNamespace instance with modified attributes (`SimpleNamespace(**vars(ns), attr=newvalue)`), it would be convenient if SimpleNamespace were supported in `copy.replace()`, which already supports named tuples and dataclasses.
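For illustration, the manual pattern and the proposed one side by side (note the dict merge is needed when the attribute already exists, since duplicate keyword arguments raise `TypeError`):

```python
import copy
import sys
from types import SimpleNamespace

ns = SimpleNamespace(x=1, y=2)

# Manual copy-with-replacement: merge vars() into a new instance.
clone = SimpleNamespace(**{**vars(ns), "y": 99})
assert clone == SimpleNamespace(x=1, y=99)

# The proposed spelling, available once this issue's change landed (3.13+).
if sys.version_info >= (3, 13):
    assert copy.replace(ns, y=99) == SimpleNamespace(x=1, y=99)
```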
https://github.com/python/cpython/issues/108751
https://discuss.python.org/t/generalize-replace-function/28511
<!-- gh-linked-prs -->
### Linked PRs
* gh-109175
<!-- /gh-linked-prs -->
| 92578919a60ebe2b8d6d42377f1e27479c156d65 | 0eab2427b149cd46e0dee3efbb6b2cfca2a4f723 |
python/cpython | python__cpython-109165 | # Replace the old `getopt` with `argparse` in pdb CLI
# Feature or enhancement
### Proposal:
`pdb` is using `getopt` for argument parsing. `getopt` is pretty old and not well maintained. Some issues in #108791 could have been avoided with the modern `argparse` module. We should replace `getopt` with `argparse` when it's easy to do so. This might ease the load in the future when more arguments are added to the CLI.
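A rough sketch of what an `argparse`-based parser could look like; the option names mirror pdb's existing `-c`/`-m` flags, but everything else here is illustrative, not the actual implementation:

```python
import argparse

parser = argparse.ArgumentParser(prog="pdb")
parser.add_argument("-c", "--command", action="append", default=[],
                    help="pdb commands to execute, as if given in .pdbrc")
parser.add_argument("-m", dest="module",
                    help="debug a module instead of a script")
parser.add_argument("args", nargs=argparse.REMAINDER,
                    help="script (or module) and its arguments")

opts = parser.parse_args(["-c", "continue", "script.py", "arg1"])
assert opts.command == ["continue"]
assert opts.args == ["script.py", "arg1"]
```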
### Has this already been discussed elsewhere?
This is a minor feature, which does not need previous discussion elsewhere
### Links to previous discussion of this feature:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-109165
<!-- /gh-linked-prs -->
| 73ccfa28c5e6ff68de15fdbb1321d4773a688e61 | 3e8fcb7df74248530c4280915c77e69811f69c3f |
python/cpython | python__cpython-109163 | # Refactor test.libregrtest
I propose to refactor ``test.libregrtest`` to make it easier to maintain and to prepare adding type annotations.
The regrtest project has a long history. It was added in 1996 by commit 152494aea24669a3d74460fa460a4ed45696bc75. When it was created, it was 170 lines long and had 4 command line options: ``-v`` (verbose), ``-q`` (quiet), ``-g`` (generate) and ``-x`` (exclude). Slowly, it got more and more features:
* Better command line interface with ``argparse`` (it used ``getopt`` at the beginning)
* Run tests in parallel with multiple processes (this code caused me a lot of headaches!)
* Detect when the "environment" is altered: warnings filters, loggers, etc.
* Re-run failed tests in verbose mode (now they are run in fresh processes)
* Detect memory, reference and file descriptor leaks
* Detect leaked files by creating a temporary directory for each test worker process
* Best effort to restore the machine to its previous state: wait until threads and processes complete, remove temporary files, etc.
* etc.
Some of these features were implemented in ``test.support`` + ``test.libregrtest``.
A few years ago, I decided to split the giant mono ``regrtest.py`` file (for example, it was 2 200 lines of Python code in Python 2.7) into sub-files (!). To make it possible, I passed an ``ns`` argument which is a bag of "global variables" (technically, it's a ``Namespace`` class, see ``cmdline.py``).
The problem is that for type annotations, it's very unclear what a Namespace contains. It may or may not have attributes (see my commit message of this PR: ``Add missing attributes to Namespace: coverage, threshold, wait.``), attribute types are weakly defined, etc. Moreover, ``ns`` is not only used to "get" variables, but also to **set** variables! For example, find_tests() overrides ``ns.args``. How is it possible to know which ``ns`` attributes are used? Are they "read-only"? We don't know just by reading a function prototype.
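To illustrate the typing problem (the class below is hypothetical, not from the refactoring): a `Namespace` accepts and silently sets any attribute, so no checker can tell which attributes exist, while an explicit typed object documents exactly what a function may read or write:

```python
from argparse import Namespace
from dataclasses import dataclass

ns = Namespace(verbose=True)
ns.args = ["test_os"]          # silently *sets* a new attribute; a type
                               # checker cannot know which attributes exist

@dataclass(frozen=True)
class RunTests:                # hypothetical typed replacement
    tests: tuple[str, ...]
    verbose: bool = False

rt = RunTests(tests=("test_os",))
assert rt.tests == ("test_os",)
assert rt.verbose is False
# Assigning rt.verbose = True would raise FrozenInstanceError:
# the attributes are explicit and read-only.
```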
This large refactoring cleans up everything in a series of small changes to pass simple types like ``bool``, ``str`` or ``tuple[str]``. It's easier to guess the purpose of a function and its behavior just from its prototype.
I tried to create only short files, the longest is still sadly ``main.py`` with 891 lines.
```
$ wc -l *.py|sort -n
2 __init__.py
56 pgo.py
124 win_utils.py
159 setup.py
202 refleak.py
307 utils.py
329 save_env.py
451 cmdline.py
575 runtest.py
631 runtest_mp.py
891 main.py
3727 total
```
To understand where the ``ns`` magic bag of global variables comes from, look at the ``regrtest.py`` monster in Python 2.7. Its main() function defines not less than 34 functions inside the main() function! Variables are defined in the main() prototype!
```
def main(tests=None, testdir=None, verbose=0, quiet=False,
exclude=False, single=False, randomize=False, fromfile=None,
findleaks=False, use_resources=None, trace=False, coverdir='coverage',
runleaks=False, huntrleaks=False, verbose2=False, print_slow=False,
random_seed=None, use_mp=None, verbose3=False, forever=False,
header=False, pgo=False, failfast=False, match_tests=None):
```
I suppose that it was designed to allow using regrtest as an API: pass parameters to the main() function, without having to use the command line interface. In the main branch, this feature is still supported, via the magic ``**kwargs`` bag:
```py
def main(tests=None, **kwargs):
Regrtest().main(tests=tests, **kwargs)
```
<!-- gh-linked-prs -->
### Linked PRs
* gh-109163
* gh-109168
* gh-109170
* gh-109171
* gh-109172
* gh-109177
* gh-109202
* gh-109204
* gh-109205
* gh-109206
* gh-109208
* gh-109212
* gh-109229
* gh-109243
* gh-109246
* gh-109248
* gh-109250
* gh-109253
<!-- /gh-linked-prs -->
| 5b7303e2653a0723a3e4c767d03dd02681206ca8 | bcb2ab5ef8c646565b09c860fb14e415d7b374bd |
python/cpython | python__cpython-109157 | # Remove INSTRUCTION event monitoring will remove LINE events as well
# Bug report
### Bug description:
The current de-instrumentation code for instructions incorrectly de-instruments LINE too. So if INSTRUCTION events are removed, even if the LINE events are still set, the LINE event won't trigger.
```python
import sys
E = sys.monitoring.events
def line(*args):
print("Line event: ", args)
def inst(*args):
print("Instruction event: ", args)
sys.monitoring.use_tool_id(0, "test")
sys.monitoring.set_events(0, 0)
sys.monitoring.register_callback(0, E.LINE, line)
sys.monitoring.register_callback(0, E.INSTRUCTION, inst)
sys.monitoring.set_events(0, E.LINE | E.INSTRUCTION)
sys.monitoring.set_events(0, E.LINE)
a = 1
sys.monitoring.set_events(0, 0)
```
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-109157
* gh-109384
<!-- /gh-linked-prs -->
| d69805b38a1815e7aaadf49bdd019c7cca105ac6 | a0c06a4f933faccd7f8201701b2491d38464212c |
python/cpython | python__cpython-109152 | # Enable readline in the sqlite3 CLI
# Feature or enhancement
```
python -m code
```
runs a REPL which supports editing, moving with arrow keys, history, etc. But
```
python -m sqlite3
```
does not support this.
The `sqlite3` module should do the same as the `code` module does.
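A minimal sketch of the usual pattern (how exactly the sqlite3 CLI would wire this up is not specified here): importing `readline` for its side effects, when available, before starting the interactive loop:

```python
try:
    import readline  # noqa: F401 -- imported for its side effects only
except ImportError:
    readline = None  # e.g. Windows builds without a readline module

# Any interactive loop started afterwards (code.InteractiveConsole,
# plain input(), ...) automatically gets line editing and history.
assert readline is None or hasattr(readline, "parse_and_bind")
```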
<!-- gh-linked-prs -->
### Linked PRs
* gh-109152
* gh-110352
* gh-110542
<!-- /gh-linked-prs -->
| 254e30c487908a52a7545cea205aeaef5fbfeea4 | e9f2352b7b7503519790ee6f51c2e298cf390e75 |
python/cpython | python__cpython-109141 | # `test_binascii` has duplicated tests
# Bug report
```console
$ python -m pip install ruff
...
$ ruff --select F811 Lib/test/test_binascii.py
Lib/test/test_binascii.py:279:9: F811 Redefinition of unused `test_hex_roundtrip` from line 236
Found 1 error.
```
Source:
https://github.com/python/cpython/blob/f63d37877ad166041489a968233b57540f8456e8/Lib/test/test_binascii.py#L232-L239
https://github.com/python/cpython/blob/f63d37877ad166041489a968233b57540f8456e8/Lib/test/test_binascii.py#L278-L282
The first `test_hex_roundtrip` is doing a `hexlify`/`unhexlify` roundtrip and is correctly named.
The second `test_hex_roundtrip` is doing a `b2a_uu`/`a2b_uu` roundtrip, so can be renamed like `test_b2a_roundtrip`.
This was introduced after https://github.com/python/core-workflow/issues/505#issuecomment-1609495331.
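For reference, the two roundtrips the tests cover really are distinct (the sample data below is arbitrary):

```python
import binascii

data = b"hello"

# What the first (correctly named) test exercises: a hex roundtrip.
assert binascii.unhexlify(binascii.hexlify(data)) == data

# What the second test actually exercises -- a uuencode roundtrip,
# hence the suggested name test_b2a_roundtrip.
assert binascii.a2b_uu(binascii.b2a_uu(data)) == data
```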
<!-- gh-linked-prs -->
### Linked PRs
* gh-109141
<!-- /gh-linked-prs -->
| aa51182320f3c391195eb7d5bd970867e63bd978 | f63d37877ad166041489a968233b57540f8456e8 |
python/cpython | python__cpython-109137 | # Tools/scripts/summarize_stats.py fails following #108754
# Bug report
### Bug description:
After building with `./configure --enable-pystats` and collecting some pystats with this script:
```python
import sys
sys._stats_on()
for i in range(50):
pass
sys._stats_off()
```
running the `Tools/scripts/summarize_stats.py` script fails with the following:
```pytb
Traceback (most recent call last):
File "/home/mdboom/Work/builds/cpython/Lib/pdb.py", line 2110, in main
pdb._run(target)
File "/home/mdboom/Work/builds/cpython/Lib/pdb.py", line 1882, in _run
self.run(target.code)
File "/home/mdboom/Work/builds/cpython/Lib/bdb.py", line 600, in run
exec(cmd, globals, locals)
File "<string>", line 1, in <module>
File "/home/mdboom/Work/builds/cpython/Tools/scripts/summarize_stats.py", line 686, in <module>
main()
File "/home/mdboom/Work/builds/cpython/Tools/scripts/summarize_stats.py", line 683, in main
output_stats(args.inputs, json_output=args.json_output)
File "/home/mdboom/Work/builds/cpython/Tools/scripts/summarize_stats.py", line 639, in output_stats
output_single_stats(stats)
File "/home/mdboom/Work/builds/cpython/Tools/scripts/summarize_stats.py", line 607, in output_single_stats
emit_call_stats(stats)
File "/home/mdboom/Work/builds/cpython/Tools/scripts/summarize_stats.py", line 460, in emit_call_stats
rows = calculate_call_stats(stats)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/mdboom/Work/builds/cpython/Tools/scripts/summarize_stats.py", line 451, in calculate_call_stats label = name + " (" + pretty(defines[index][0]) + ")"
~~~~~~~~~~~~~~^^^
IndexError: list index out of range
```
This was broken by #108754 since it moved the stats definitions from `Include/pystats.h` to `Include/cpython/pystats.h`, but the path wasn't updated in `summarize_stats.py`.
I will have a fix posted shortly.
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-109137
<!-- /gh-linked-prs -->
| 52beebc856fedf507ac0eb9e45c2e2c9fed1e5b8 | 6275c67ea68645e5b296a80ea63b90707a0be792 |
python/cpython | python__cpython-109126 | # Run mypy on `Tools/wasm`
# Feature or enhancement
Right now we already have 3 fully typed and checked tools:
- https://github.com/python/cpython/issues/104050
- https://github.com/python/cpython/issues/108455
- https://github.com/python/cpython/issues/104504
I think we can also add `Tools/wasm` quite easily to this list.
`Tools/wasm` already has most of the type annotations, but there are some mypy errors to fix.
<!-- gh-linked-prs -->
### Linked PRs
* gh-109126
* gh-109561
<!-- /gh-linked-prs -->
| f65497fd252a4a4df960da04d68e8316b58624c0 | ed582a2ed980efba2d0da365ae37bff4a2b99873 |
python/cpython | python__cpython-117444 | # incorrect SyntaxError
# Bug report
### Bug description:
The following input will cause a `SyntaxError`. That's all well and good, but the message for the `SyntaxError` is incorrect.
```python
f(A,*)
```
```python
SyntaxError: iterable argument unpacking follows keyword argument unpacking
```
This is not correct. That error message is intended for cases like this `f(**kwargs, *args)`.
This bug is present on `3.9`, `3.10`, and `3.11`. I haven't tested `3.12`, but I suspect that it is present there as well. The bug is not present on `3.8`.
### CPython versions tested on:
3.8 3.9, 3.10, 3.11
### Operating systems tested on:
Linux, Windows
<!-- gh-linked-prs -->
### Linked PRs
* gh-117444
* gh-117464
* gh-117465
<!-- /gh-linked-prs -->
| c97d3af2391e62ef456ef2365d48ab9b8cdbe27b | 1d5479b236e9a66dd32a24eff6fb83e3242b999d |
python/cpython | python__cpython-109123 | # Annotation scopes containing nested scopes
# Bug report
### Bug description:
The following code causes a SystemError during compilation.
```python
class name_2[*name_5, name_3: int]:
(name_3 := name_4)
class name_4[name_5: name_5]((name_4 for name_5 in name_0 if name_3), name_2 if name_3 else name_0):
pass
```
output (Python 3.12.0rc2+):
```python
SystemError: compiler_lookup_arg(name='name_3') with reftype=3 failed in <generic parameters of name_4>; freevars of code <genexpr>: ('name_3',)
```
removing the named-expression makes it worse and causes a crash
```python
class name_2[*name_5, name_3: int]:
class name_4[name_5: name_5]((name_4 for name_5 in name_0 if name_3), name_2 if name_3 else name_0):
pass
```
output (Python 3.12.0rc2+):
```python
Traceback (most recent call last):
File "/home/frank/projects/pysource-codegen/bug2.py", line 2, in <module>
class name_2[*name_5, name_3: int]:
File "/home/frank/projects/pysource-codegen/bug2.py", line 2, in <generic parameters of name_2>
class name_2[*name_5, name_3: int]:
File "/home/frank/projects/pysource-codegen/bug2.py", line 4, in name_2
class name_4[name_5: name_5]((name_4 for name_5 in name_0 if name_3), name_2 if name_3 else name_0):
File "/home/frank/projects/pysource-codegen/bug2.py", line 4, in <generic parameters of name_4>
class name_4[name_5: name_5]((name_4 for name_5 in name_0 if name_3), name_2 if name_3 else name_0):
^^^^^^
NameError: name 'name_0' is not defined
Modules/gcmodule.c:113: gc_decref: Assertion "gc_get_refs(g) > 0" failed: refcount is too small
Enable tracemalloc to get the memory block allocation traceback
object address : 0x7f4c77059a30
object refcount : 1
object type : 0x560ea4b6cfe0
object type name: dict
object repr : {'__module__': '__main__', '__qualname__': 'name_2', '__type_params__': (name_5, name_3)}
Fatal Python error: _PyObject_AssertFailed: _PyObject_AssertFailed
Python runtime state: finalizing (tstate=0x0000560ea4cc9ba0)
Current thread 0x00007f4c7756c280 (most recent call first):
Garbage-collecting
<no Python frame>
```
side note:
The reason for the strange-looking language constructs and variable names is that I generated this code. I found it while working on my pysource-codegen library.
### CPython versions tested on:
3.12
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-109123
* gh-109173
* gh-109196
* gh-109297
* gh-118019
* gh-118160
<!-- /gh-linked-prs -->
| 17f994174de9211b2baaff217eeb1033343230fc | e9e2ca7a7b4b4320009cdf85c84ec5bd6c4923c3 |
python/cpython | python__cpython-109121 | # misleading SyntaxError (f-string and type alias related)
# Bug report
### Bug description:
The following code produces a misleading syntax error.
```python
lambda name_3=f'{name_4}': {name_3}
type {}[name_4] = {}
```
output (Python 3.12.0rc2+):
```python
File "/home/frank/projects/pysource-codegen/bug.py", line 1
lambda name_3=f'{name_4}': {name_3}
^^^^^^^^^^^^^^^^^^^^^^^^^^
SyntaxError: f-string: lambda expressions are not allowed without parentheses
```
The first line should be valid Python syntax, and I would expect a syntax error for the invalid type alias name instead.
side note:
This is generated code. I found this issue while working on Python 3.12 support for pysource-codegen.
### CPython versions tested on:
3.12
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-109121
* gh-109155
<!-- /gh-linked-prs -->
| 5bda2f637e1cfbca45a83aa6e22db25498064b27 | 87a7faf6b68c8076e640a9a1347a255f132d8382 |
python/cpython | python__cpython-109113 | # `ssl.SSLObject` and `ssl.SSLSocket` should expose method to get certificate chain
# Feature or enhancement
### Proposal:
Being able to get a certificate chain is needed to perform OCSP revocation checks.
Starting from Python 3.10 we can at least call the C-level API directly, but I think such crucial functionality should be documented and exposed in the Python API:
```python
ssl_socket._sslobj.get_unverified_chain()
```
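Until it is exposed publicly, a defensive helper can guard for the private attribute. This is a sketch, since `_sslobj` and its `get_unverified_chain()` method are CPython internals that may be absent or renamed on a given build:

```python
def get_unverified_chain(ssl_socket):
    # Reach through the private _sslobj, returning None when either the
    # attribute or the method is unavailable on this Python build.
    sslobj = getattr(ssl_socket, "_sslobj", None)
    method = getattr(sslobj, "get_unverified_chain", None)
    return method() if method is not None else None
```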
### Has this already been discussed elsewhere?
No response given
### Links to previous discussion of this feature:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-109113
<!-- /gh-linked-prs -->
| 5a740cd06ec1191767edcc6d3a7d5eca7873cb7b | ddf2e953c27d529b7e321c972ede2afce5dfb0b0 |
python/cpython | python__cpython-109911 | # test_xxtestfuzz emits DeprecationWarning
test_xxtestfuzz emits deprecation warnings about the modules sre_compile and sre_constants, which were always internal. It should use the public, non-deprecated API.
```
$ ./python -Wa -m test -v test_xxtestfuzz
...
test_sample_input_smoke_test (test.test_xxtestfuzz.TestFuzzer.test_sample_input_smoke_test)
This is only a regression test: Check that it doesn't crash. ... /home/serhiy/py/cpython/Lib/test/test_xxtestfuzz.py:13: DeprecationWarning: module 'sre_compile' is deprecated
_xxtestfuzz.run(b"")
/home/serhiy/py/cpython/Lib/test/test_xxtestfuzz.py:13: DeprecationWarning: module 'sre_constants' is deprecated
_xxtestfuzz.run(b"")
ok
...
```
cc @ssbr, @ammaraskar
<!-- gh-linked-prs -->
### Linked PRs
* gh-109911
* gh-109932
* gh-109933
<!-- /gh-linked-prs -->
| a829356f86d597e4dfe92e236a6d711c8a464f16 | 9dbfe2dc8e7bba25e52f9470ae6969821a365297 |
python/cpython | python__cpython-109387 | # test_httpservers causes os.fork DeprecationWarning, can we deprecate CGIHTTPRequestHandler?
# Bug report
Every test class in test_httpservers starts a thread to run a server. CGIHTTPServerTestCase tests trigger http.server.CGIHTTPRequestHandler.run_cgi(), which executes a CGI script in a separate process, using `os.fork()` if possible. But `os.fork()` now emits a DeprecationWarning if used in a multi-threaded environment, and in the future it will be an error.
```
$ ./python -Wa -m test -v test_httpservers -m CGIHTTPServerTestCase
...
test_accept (test.test_httpservers.CGIHTTPServerTestCase.test_accept) ... /home/serhiy/py/cpython/Lib/http/server.py:1172: DeprecationWarning: This process (pid=2768812) is multi-threaded, use of fork() may lead to deadlocks in the child.
pid = os.fork()
/home/serhiy/py/cpython/Lib/http/server.py:1172: DeprecationWarning: This process (pid=2768812) is multi-threaded, use of fork() may lead to deadlocks in the child.
pid = os.fork()
/home/serhiy/py/cpython/Lib/http/server.py:1172: DeprecationWarning: This process (pid=2768812) is multi-threaded, use of fork() may lead to deadlocks in the child.
pid = os.fork()
ok
test_authorization (test.test_httpservers.CGIHTTPServerTestCase.test_authorization) ... /home/serhiy/py/cpython/Lib/http/server.py:1172: DeprecationWarning: This process (pid=2768812) is multi-threaded, use of fork() may lead to deadlocks in the child.
pid = os.fork()
ok
test_cgi_path_in_sub_directories (test.test_httpservers.CGIHTTPServerTestCase.test_cgi_path_in_sub_directories) ... /home/serhiy/py/cpython/Lib/http/server.py:1172: DeprecationWarning: This process (pid=2768812) is multi-threaded, use of fork() may lead to deadlocks in the child.
pid = os.fork()
ok
...
```
We should do something about this. First, the tests are polluted with presumably false alerts. Second, these alerts may not be false. Third, these tests will break in the future, when the warning is turned into an error.
Also, what actions are expected from users of `http.server` when they see this warning? How can they solve the problem on their side? If `http.server` is deprecated, it should be documented. If it remains, it should not produce scary warnings.
cc @gpshead
<!-- gh-linked-prs -->
### Linked PRs
* gh-109387
* gh-109471
<!-- /gh-linked-prs -->
| 59073c9ab8eb8db9304f16c46086ccc525e82570 | 19f5effc27bc47d76b2f484a1fcb4ccdd7dc9d67 |
python/cpython | python__cpython-109095 | # Replace frame->prev_instr by frame->instr_ptr
``frame->prev_instr`` has confusing semantics; we will replace it with a new field ``instr_ptr`` which is
(1) the instruction currently executing (if any), or
(2) the next instruction to execute on the frame (otherwise).
<!-- gh-linked-prs -->
### Linked PRs
* gh-109095
* gh-109076
* gh-110759
<!-- /gh-linked-prs -->
| 67a91f78e4395148afcc33e5cd6f3f0a9623e63a | 573eff3e2ec36b5ec77c3601592a652e524abe21 |
python/cpython | python__cpython-109142 | # test_asyncgen fails randomly on Windows
```
FAIL: test_async_gen_asyncio_gc_aclose_09 (test.test_asyncgen.AsyncGenAsyncioTest.test_async_gen_asyncio_gc_aclose_09)
----------------------------------------------------------------------
Traceback (most recent call last):
File "D:\a\cpython\cpython\Lib\test\test_asyncgen.py", line 1075, in test_async_gen_asyncio_gc_aclose_09
self.assertEqual(DONE, 1)
AssertionError: 0 != 1
```
and then:
```
Re-running test_asyncgen in verbose mode (matching: test_async_gen_asyncio_gc_aclose_09)
test_async_gen_asyncio_gc_aclose_09 (test.test_asyncgen.AsyncGenAsyncioTest.test_async_gen_asyncio_gc_aclose_09) ... FAIL
Task was destroyed but it is pending!
task: <Task pending name='Task-2' coro=<<async_generator_athrow without __name__>()> wait_for=<Future finished result=None>>
======================================================================
FAIL: test_async_gen_asyncio_gc_aclose_09 (test.test_asyncgen.AsyncGenAsyncioTest.test_async_gen_asyncio_gc_aclose_09)
----------------------------------------------------------------------
Traceback (most recent call last):
File "D:\a\cpython\cpython\Lib\test\test_asyncgen.py", line 1075, in test_async_gen_asyncio_gc_aclose_09
self.assertEqual(DONE, 1)
AssertionError: 0 != 1
----------------------------------------------------------------------
Ran 1 test in 0.206s
FAILED (failures=1)
test test_asyncgen failed
```
build: https://github.com/python/cpython/actions/runs/6105856789/job/16569987356?pr=109060
<!-- gh-linked-prs -->
### Linked PRs
* gh-109142
* gh-109149
* gh-109150
<!-- /gh-linked-prs -->
| ccd48623d4860e730a16f3f252d67bfea8c1e905 | 52beebc856fedf507ac0eb9e45c2e2c9fed1e5b8 |
python/cpython | python__cpython-109101 | # ARM Raspbian 3.x fails to build _testcapi extension: undefined symbol: __atomic_fetch_or_8
I suspect that the build error is related to ``Modules/_testcapi/pyatomic.c`` and ``Include/cpython/pyatomic.h``.
"Raspbian is a free operating system based on Debian optimized for the Raspberry Pi hardware."
Buildbot worker says: "Raspberry Pi 4 B running Raspbian (Bullseye 11.x)."
test.pythoninfo:
* CC.version: gcc (Raspbian 10.2.1-6+rpi1) 10.2.1 20210110
* platform.architecture: 32bit ELF
* platform.libc_ver: glibc 2.31
* platform.platform: Linux-6.1.21-v8+-aarch64-with-glibc2.31
* sysconfig[PY_CFLAGS]: ``-fno-strict-overflow -DNDEBUG -g -O3 -Wall -UNDEBUG``
* sysconfig[PY_CFLAGS_NODIST]: ``-std=c11 -Werror=implicit-function-declaration -fvisibility=hidden -I./Include/internal``
* sysconfig[PY_STDMODULE_CFLAGS]: ``-fno-strict-overflow -DNDEBUG -g -O3 -Wall -UNDEBUG -std=c11 -Werror=implicit-function-declaration -fvisibility=hidden -I./Include/internal -I. -I./Include``
Logs:
```
gcc -pthread -shared Modules/_testcapimodule.o Modules/_testcapi/vectorcall.o Modules/_testcapi/vectorcall_limited.o Modules/_testcapi/heaptype.o Modules/_testcapi/abstract.o Modules/_testcapi/unicode.o Modules/_testcapi/dict.o Modules/_testcapi/getargs.o Modules/_testcapi/datetime.o Modules/_testcapi/docstring.o Modules/_testcapi/mem.o Modules/_testcapi/watchers.o Modules/_testcapi/long.o Modules/_testcapi/float.o Modules/_testcapi/structmember.o Modules/_testcapi/exceptions.o Modules/_testcapi/code.o Modules/_testcapi/buffer.o Modules/_testcapi/pyatomic.o Modules/_testcapi/pyos.o Modules/_testcapi/immortal.o Modules/_testcapi/heaptype_relative.o Modules/_testcapi/gc.o -o Modules/_testcapi.cpython-313-arm-linux-gnueabihf.so
[ERROR] _testcapi failed to import: /var/lib/buildbot/workers/3.x.gps-raspbian.nondebug/build/build/lib.linux-aarch64-3.13/_testcapi.cpython-313-arm-linux-gnueabihf.so: undefined symbol: __atomic_fetch_or_8
The necessary bits to build these optional modules were not found:
_tkinter _uuid
To find the necessary bits, look in configure.ac and config.log.
Following modules built successfully but were removed because they could not be imported:
_testcapi
Checked 107 modules (31 built-in, 72 shared, 1 n/a on linux-aarch64, 0 disabled, 2 missing, 1 failed on import)
```
logs: https://buildbot.python.org/all/#/builders/424/builds/4849
cc @colesbury @gpshead
<!-- gh-linked-prs -->
### Linked PRs
* gh-109101
* gh-109211
* gh-109224
<!-- /gh-linked-prs -->
| 1f7e42131d2800f0fbb89bfd91fafa8a073e066d | 697c9dcf8fc746636c6187e4f110e0e6e865b710 |
python/cpython | python__cpython-109107 | # test_sys_settrace -R 3:3 does crash
```
$ ./python -m test test_sys_settrace -R 3:3 -m test_line_event_raises_before_opcode_event -m test_no_jump_backwards_into_for_block
0:00:00 load avg: 0.77 Run tests sequentially
0:00:00 load avg: 0.77 [1/1] test_sys_settrace
beginning 6 repetitions
123456
.Fatal Python error: Segmentation fault
Current thread 0x00007f8a40b15740 (most recent call first):
File "/home/vstinner/python/main/Lib/test/test_sys_settrace.py", line 1883 in trace
File "/home/vstinner/python/main/Lib/test/test_sys_settrace.py", line 2433 in test_no_jump_backwards_into_for_block
(...)
```
Regression introduced by: commit 6f3c138dfa868b32d3288898923bbfa388f2fa5d
```
commit 6f3c138dfa868b32d3288898923bbfa388f2fa5d
Author: Serhiy Storchaka <storchaka@gmail.com>
Date: Wed Sep 6 23:55:42 2023 +0300
gh-108751: Add copy.replace() function (GH-108752)
It creates a modified copy of an object by calling the object's
__replace__() method.
It is a generalization of dataclasses.replace(), named tuple's _replace()
method and replace() methods in various classes, and supports all these
stdlib classes.
```
<!-- gh-linked-prs -->
### Linked PRs
* gh-109107
* gh-112329
<!-- /gh-linked-prs -->
| 057bc7249073066ed8087b548ee06f0eabfa9e7c | a56c92875699c2ba92ed49e72f6abbf363a5c537 |
python/cpython | python__cpython-109068 | # Tests: remove C API tests using @requires_legacy_unicode_capi()?
The ``@test.support.requires_legacy_unicode_capi()`` decorator uses ``from _testcapi import unicode_legacy_string``, but this module attribute has been removed in Python 3.12 by commit f9c9354a7a173eaca2aa19e667b5cf12167b7fed, and so all tests using the decorator are now always skipped.
Should we now remove tests using ``@requires_legacy_unicode_capi()`` decorator? Or are these tests kept on purpose?
For example, this test is now always skipped:
```
$ ./python -m test test_capi -m test.test_capi.test_getargs.String_TestCase.test_Z -v
test_Z (test.test_capi.test_getargs.String_TestCase.test_Z) ... skipped 'requires legacy Unicode C API'
```
See also issue #92536 and [PEP 623: Remove wstr from Unicode](https://peps.python.org/pep-0623/).
cc @methane @serhiy-storchaka
<!-- gh-linked-prs -->
### Linked PRs
* gh-109068
<!-- /gh-linked-prs -->
| b4131a13cb41d0f397776683c3b99500db9e2cfd | 17f994174de9211b2baaff217eeb1033343230fc |
python/cpython | python__cpython-109046 | # Limited API tests are now incorrectly skipped unconditionally
Commit 13a00078b81776b23b0b6add69b848382240d1f2 (#108663, cc @vstinner) made all Python builds compatible with the Limited API, and removed the `LIMITED_API_AVAILABLE` flag. However, some tests are still checking for that flag, so they are now being incorrectly skipped.
https://github.com/python/cpython/blob/3bfa24e29f286cbc1f42bdb4d2b1c0c9d643c8d6/Modules/_testcapimodule.c#L4000-L4010
https://github.com/python/cpython/blob/3bfa24e29f286cbc1f42bdb4d2b1c0c9d643c8d6/Lib/test/support/__init__.py#L1088-L1094
<!-- gh-linked-prs -->
### Linked PRs
* gh-109046
<!-- /gh-linked-prs -->
| f42edf1e7be5018a8988a219a168e231cbaa25e5 | 3bfa24e29f286cbc1f42bdb4d2b1c0c9d643c8d6 |
python/cpython | python__cpython-109038 | # Branch prediction design for Tier 2 (uops) interpreter
I'm splitting this topic off gh-106529, notably see this comment: https://github.com/python/cpython/issues/106529#issuecomment-1625942167.
The design we've arrived at adds a "counter" to all branch (== conditional jump) instructions in Tier 1, i.e., to `POP_JUMP_IF_{TRUE,FALSE,NONE,NOT_NONE}`. This counter is managed differently than most other counter cache entries. It should be initialized to a pattern of alternating ones and zeros. Whenever we execute a branch instruction, we shift the counter left by one position (losing the leftmost bit), and set the bottom bit to one if we jump, or zero if we don't.
When we get to the point where we're constructing a superblock, we look at the cache entry, and decide which is the more likely branch based on the number of bits in the counter (`_Py_popcount32()`). We then continue projecting along the more likely branch.
We can even get fancy and predict a percentage of correct predictions, and multiply the percentages together as we project through branches, and stop projecting altogether if the probability gets too low. E.g. after two branches with 50%, the probability would be 25%, which is probably too low to bother, so we stop. OTOH after one branch with 80% and one with 25%, we multiply together 0.8 and 0.75 (!), giving 0.6, which is still likely enough to keep going.
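The scheme above can be modeled in a few lines. This is an illustration, not the CPython implementation; the 16-bit width and the alternating init pattern follow the description:

```python
# 16-bit shift register: newest branch outcome goes into the low bit,
# the oldest falls off the top; popcount gives the recent taken ratio.
BITS = 16
MASK = (1 << BITS) - 1

def init_counter():
    # Alternating ones and zeros: 0b0101010101010101
    return 0x5555

def record_branch(counter, taken):
    # Shift left (dropping the leftmost bit), set the bottom bit to the
    # outcome: 1 if the branch jumped, 0 if it fell through.
    return ((counter << 1) | int(taken)) & MASK

def taken_ratio(counter):
    # Fraction of the last BITS executions that jumped.
    return bin(counter).count("1") / BITS

c = init_counter()
for taken in [True] * 12 + [False] * 4:
    c = record_branch(c, taken)
print(taken_ratio(c))  # 0.75 -> project along the "taken" branch
```

A fresh counter reads as 0.5, i.e. no preference, which matches the alternating initialization.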
<!-- gh-linked-prs -->
### Linked PRs
* gh-109038
<!-- /gh-linked-prs -->
| bcce5e271815c0bdbe894964e853210d2c75949b | ecd21a629a2a30bcae89902f7cad5670e9441e2c |
python/cpython | python__cpython-109034 | # os.utime raises exceptions without a filename.
# Bug report
### Bug description:
The builtin os.utime option raises exceptions without a filename. E.g. on Linux:
```python
import os
try:
os.utime('/root', None)
except PermissionError as e:
print(e.filename) # None
```
The source actually provides a justification for this omission:
```
/* Avoid putting the file name into the error here,
as that may confuse the user into believing that
something is wrong with the file, when it also
could be the time stamp that gives a problem. */
```
which really is no justification at all.
Uncaught exceptions are not usually considered a replacement for user-facing error messages, so omitting the filename here is at best guidance to the programmer, and this approach unnecessarily removes context that could be helpful in crafting a useful error message. On Linux at least, I don't even think that the exception traceback has the potential for confusion raised in this comment: EPERM could quite clearly mean that my user lacks the requisite permission to modify the file in whatever relevant way, including changing the timestamps.
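A user-side workaround (not the proposed CPython change) is to attach the path before re-raising, so later handlers can still report it:

```python
import os

def utime_with_filename(path, times=None):
    # Re-raise any OSError with the filename filled in when os.utime
    # left it as None.
    try:
        os.utime(path, times)
    except OSError as e:
        if e.filename is None:
            e.filename = path
        raise
```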
### CPython versions tested on:
3.11
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-109034
<!-- /gh-linked-prs -->
| ddf2e953c27d529b7e321c972ede2afce5dfb0b0 | fd7e08a6f35581e1189b9bf12feb51f7167a86c5 |
python/cpython | python__cpython-109048 | # Instantiation of an empty Enum with any value no longer throws `ValueError`
# Bug report
### Bug description:
On Python < 3.12, instantiation of an empty `Enum` with any value resulted in a `ValueError`, but on 3.12.0rc2 it returns `<enum 'bar'>` without errors. On 3.12.0rc1 it resulted in `TypeError: 'NoneType' object is not iterable`, which was addressed in this issue:
* https://github.com/python/cpython/issues/106928
I'm not sure if it's intended or not, but just in case I'll report it here, so that if it's unintended behaviour it can be fixed before the final release. To me, `ValueError` was more consistent, but I don't see a practical reason to use empty enums anyway, and I'm fine with this issue being closed.
```python
class Foo(Enum):
pass
print(Foo('bar'))
# 3.11: ValueError: 'bar' is not a valid Foo
# 3.12.0rc1: TypeError: 'NoneType' object is not iterable
# 3.12.0rc2: <enum 'bar'>
```
### CPython versions tested on:
3.12
### Operating systems tested on:
macOS
<!-- gh-linked-prs -->
### Linked PRs
* gh-109048
* gh-109122
<!-- /gh-linked-prs -->
| c74e440168fab9bf91346471087a394af13fa2db | b9831e5c98de280870b6d932033b868ef56fa2fa |
python/cpython | python__cpython-109251 | # Use Kyiv instead of Kiev in test_email
# Bug report
### Bug description:
As proposed in #108533 the time zone was renamed, but we can't simply rename it in the test because it fails on old(?) macOS systems. I would propose guarding it with:
```python
@unittest.skipUnless(os.path.exists('/usr/share/zoneinfo/Europe/Kyiv') or
os.path.exists('/usr/lib/zoneinfo/Europe/Kyiv'),
"Can't find the Olson's TZ database")
```
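An alternative probe (an assumption on my side, not necessarily what the linked PRs do) is to ask `zoneinfo` whether the key resolves, instead of hard-coding filesystem paths:

```python
import unittest

def has_zone(key):
    # True if the tzdata search path (or the tzdata package) knows the key.
    try:
        from zoneinfo import ZoneInfo
        ZoneInfo(key)
        return True
    except Exception:
        return False

requires_kyiv = unittest.skipUnless(
    has_zone("Europe/Kyiv"), "Can't find Europe/Kyiv in the TZ database")
```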
### CPython versions tested on:
3.11
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-109251
* gh-111279
* gh-111280
<!-- /gh-linked-prs -->
| 46407fe79ca78051cbf6c80e8b8e70a228f9fa50 | c7d68f907ad3e3aa17546df92a32bddb145a69bf |
python/cpython | python__cpython-109016 | # test_asyncio, test_imaplib and test_socket fail with FreeBSD TCP blackhole
test_asyncio, test_imaplib and test_socket fail with FreeBSD TCP blackhole. When a FreeBSD machine is configured with:
```
sudo sysctl net.inet.tcp.blackhole=2
sudo sysctl net.inet.udp.blackhole=1
```
That's the case by default in FreeBSD GCP image:
* https://github.com/cirruslabs/cirrus-ci-docs/issues/483
* https://reviews.freebsd.org/D41751
I'm working on a PR to skip the affected tests under this special configuration.
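A skip helper for this could look roughly like the following; the sysctl names come from the report above, while the subprocess-based probing is my assumption about how detection might work:

```python
import subprocess
import sys
import unittest

def tcp_blackhole_enabled():
    # Only FreeBSD has the net.inet.tcp.blackhole sysctl.
    if not sys.platform.startswith("freebsd"):
        return False
    try:
        out = subprocess.run(
            ["sysctl", "-n", "net.inet.tcp.blackhole"],
            capture_output=True, text=True, check=True,
        ).stdout
        return int(out.strip()) > 0
    except (OSError, ValueError, subprocess.CalledProcessError):
        return False

skip_if_tcp_blackhole = unittest.skipIf(
    tcp_blackhole_enabled(),
    "TCP blackhole is enabled (sysctl net.inet.tcp.blackhole)")
```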
<!-- gh-linked-prs -->
### Linked PRs
* gh-109016
* gh-109041
* gh-109042
<!-- /gh-linked-prs -->
| a52a3509770f29f940cda9307704908949912276 | 60a9eea3f56c002356998f5532b3ad870a1ffa8e |
python/cpython | python__cpython-109003 | # Ensure only one wheel for each vendored package in `verify_ensurepip_wheels.py`
We've had two cases of an old wheel not being removed when a new one was added. This led to CI failures on the affected branches, which then required a follow-up fix like gh-108998.
Let's fix it by making the tool complain if more than one wheel is present for a package.
Reported by @edmorley.
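The uniqueness check could be as simple as grouping the bundled wheels by distribution name; the directory layout and the `name-version-...whl` filename convention are assumptions here, not details from the tool:

```python
from collections import defaultdict
from pathlib import Path

def duplicate_wheels(wheel_dir):
    # Group *.whl files by the distribution-name field of the filename
    # and return every package that has more than one wheel present.
    by_name = defaultdict(list)
    for wheel in sorted(Path(wheel_dir).glob("*.whl")):
        by_name[wheel.name.partition("-")[0]].append(wheel.name)
    return {name: files for name, files in by_name.items() if len(files) > 1}
```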
<!-- gh-linked-prs -->
### Linked PRs
* gh-109003
* gh-109005
* gh-109006
* gh-109007
* gh-109008
* gh-109009
<!-- /gh-linked-prs -->
| f8a047941f2e4a1848700c21d58a08c9ec6a9c68 | fbce43a251488f666be9794c908a6613bf8ae260 |
python/cpython | python__cpython-109004 | # Add tests for msvcrt module
# Bug report
### Bug description:
[msvcrt](https://docs.python.org/3/library/msvcrt.html) is a public module, but it has no tests.
<!-- gh-linked-prs -->
### Linked PRs
* gh-109004
* gh-109169
* gh-109226
<!-- /gh-linked-prs -->
| bcb2ab5ef8c646565b09c860fb14e415d7b374bd | 1f7e42131d2800f0fbb89bfd91fafa8a073e066d |
python/cpython | python__cpython-108992 | # _PyFrame_GetState is misnamed and does more than it needs to
_PyFrame_GetState is a static function so can be renamed to reflect that.
More importantly, it calculates a lot more than what it is actually used for (it is called in two places, each of which checks for one of the states: CLEARED or SUSPENDED). The rest is neither used nor tested.
Some of the other states that this function checks for depend on the opcode at ``prev_instr``, which is going to be replaced soon by ``instr_ptr``. If we want to keep this function working we need to first cover all the states by tests. But there is no point doing this for code which is unused.
So I will replace this function by two simpler functions for the two existing use cases: checking if the frame is in CLEARED or SUSPENDED state.
<!-- gh-linked-prs -->
### Linked PRs
* gh-108992
<!-- /gh-linked-prs -->
| 39376cb93d40b8fe588be0c1987272b0f8c49e26 | 5f3433f210d25d366fcebfb40adf444c8f46cd59 |
python/cpython | python__cpython-109135 | # test_threading fails and crashes when rerun
# Crash report
First, `BarrierTests.test_default_timeout` in `test_threading` fails, also producing several unraisable exception messages, which continue to appear during a few of the following tests.
https://github.com/python/cpython/actions/runs/6094370280/job/16535778004#step:5:103
```
test_default_timeout (test.test_threading.BarrierTests.test_default_timeout)
Test the barrier's default timeout ... Warning -- Unraisable exception
Exception ignored in thread started by: <function Bunch.__init__.<locals>.task at 0x053F5270>
Traceback (most recent call last):
File "D:\a\cpython\cpython\Lib\test\lock_tests.py", line 48, in task
f()
File "D:\a\cpython\cpython\Lib\test\lock_tests.py", line 1020, in f
i = barrier.wait()
^^^^^^^^^^^^^^
File "D:\a\cpython\cpython\Lib\threading.py", line 709, in wait
self._wait(timeout)
File "D:\a\cpython\cpython\Lib\threading.py", line 749, in _wait
raise BrokenBarrierError
threading.BrokenBarrierError:
ERROR
test_repr (test.test_threading.BarrierTests.test_repr) ... Warning -- Unraisable exception
Exception ignored in thread started by: <function Bunch.__init__.<locals>.task at 0x053F5270>
Traceback (most recent call last):
File "D:\a\cpython\cpython\Lib\test\lock_tests.py", line 48, in task
f()
File "D:\a\cpython\cpython\Lib\test\lock_tests.py", line 1020, in f
i = barrier.wait()
^^^^^^^^^^^^^^
File "D:\a\cpython\cpython\Lib\threading.py", line 709, in wait
self._wait(timeout)
File "D:\a\cpython\cpython\Lib\threading.py", line 749, in _wait
raise BrokenBarrierError
threading.BrokenBarrierError:
Warning -- Unraisable exception
Exception ignored in thread started by: <function Bunch.__init__.<locals>.task at 0x053F5270>
Traceback (most recent call last):
File "D:\a\cpython\cpython\Lib\test\lock_tests.py", line 48, in task
f()
File "D:\a\cpython\cpython\Lib\test\lock_tests.py", line 1020, in f
i = barrier.wait()
^^^^^^^^^^^^^^
File "D:\a\cpython\cpython\Lib\threading.py", line 709, in wait
self._wait(timeout)
File "D:\a\cpython\cpython\Lib\threading.py", line 749, in _wait
raise BrokenBarrierError
threading.BrokenBarrierError:
Warning -- Unraisable exception
Exception ignored in thread started by: <function Bunch.__init__.<locals>.task at 0x053F5270>
Traceback (most recent call last):
File "D:\a\cpython\cpython\Lib\test\lock_tests.py", line 48, in task
f()
File "D:\a\cpython\cpython\Lib\test\lock_tests.py", line 1020, in f
i = barrier.wait()
^^^^^^^^^^^^^^
File "D:\a\cpython\cpython\Lib\threading.py", line 700, in wait
self._enter() # Block while the barrier drains.
^^^^^^^^^^^^^
File "D:\a\cpython\cpython\Lib\threading.py", line 724, in _enter
raise BrokenBarrierError
threading.BrokenBarrierError:
ok
```
After rerunning `test_threading` it crashes.
https://github.com/python/cpython/actions/runs/6094370280/job/16535778004#step:5:1067
```
0:17:28 Re-running 2 failed tests in verbose mode in subprocesses
0:17:28 Run tests in parallel using 4 child processes (timeout: 20 min, worker timeout: 25 min)
0:17:30 [1/2/1] test_threading process crashed (Exit code 3)
Re-running test_threading in verbose mode (matching: test_default_timeout)
test_default_timeout (test.test_threading.BarrierTests.test_default_timeout)
Test the barrier's default timeout ... Warning -- Unraisable exception
Exception ignored in thread started by: <function Bunch.__init__.<locals>.task at 0x052196F0>
Traceback (most recent call last):
File "D:\a\cpython\cpython\Lib\test\lock_tests.py", line 48, in task
f()
File "D:\a\cpython\cpython\Lib\test\lock_tests.py", line 1020, in f
i = barrier.wait()
^^^^^^^^^^^^^^
File "D:\a\cpython\cpython\Lib\threading.py", line 709, in wait
self._wait(timeout)
File "D:\a\cpython\cpython\Lib\threading.py", line 749, in _wait
raise BrokenBarrierError
threading.BrokenBarrierError:
ERROR
======================================================================
ERROR: test_default_timeout (test.test_threading.BarrierTests.test_default_timeout)
Test the barrier's default timeout
----------------------------------------------------------------------
Traceback (most recent call last):
File "D:\a\cpython\cpython\Lib\test\lock_tests.py", line 1025, in test_default_timeout
self.run_threads(f)
File "D:\a\cpython\cpython\Lib\test\lock_tests.py", line 854, in run_threads
f()
File "D:\a\cpython\cpython\Lib\test\lock_tests.py", line 1020, in f
i = barrier.wait()
^^^^^^^^^^^^^^
File "D:\a\cpython\cpython\Lib\threading.py", line 709, in wait
self._wait(timeout)
File "D:\a\cpython\cpython\Lib\threading.py", line 747, in _wait
raise BrokenBarrierError
threading.BrokenBarrierError
----------------------------------------------------------------------
Ran 1 test in 0.376s
FAILED (errors=1)
test test_threading failed
{"test_name": "test_threading", "state": "FAILED", "duration": 0.6816820999999891, "xml_data": null, "stats": {"tests_run": 1, "failures": 0, "skipped": 0}, "errors": [["test_default_timeout (test.test_threading.BarrierTests.test_default_timeout)", "Traceback (most recent call last):\n File \"D:\\a\\cpython\\cpython\\Lib\\test\\lock_tests.py\", line 1025, in test_default_timeout\n self.run_threads(f)\n File \"D:\\a\\cpython\\cpython\\Lib\\test\\lock_tests.py\", line 854, in run_threads\n f()\n File \"D:\\a\\cpython\\cpython\\Lib\\test\\lock_tests.py\", line 1020, in f\n i = barrier.wait()\n ^^^^^^^^^^^^^^\n File \"D:\\a\\cpython\\cpython\\Lib\\threading.py\", line 709, in wait\n self._wait(timeout)\n File \"D:\\a\\cpython\\cpython\\Lib\\threading.py\", line 747, in _wait\n raise BrokenBarrierError\nthreading.BrokenBarrierError\n"]], "failures": [], "__test_result__": "TestResult"}
Assertion failed: tstate_is_alive(tstate) && !tstate->_status.bound, file D:\a\cpython\cpython\Python\pystate.c, line 245
Fatal Python error: Aborted
Thread 0x0000186c (most recent call first):
<no Python frame>
1 test failed again:
test_threading
```
Note also the UTF-16 output in the logs, but that is a separate issue.
It only happened in the Windows (x86) run; the run on Windows (x64) passed successfully, as did the runs on other platforms. It may be 32-bit Windows specific or random.
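For reference, the failing test exercises `threading.Barrier`; a minimal sketch of the API the test drives (the names and values here are illustrative, not the test's own):

```python
import threading

# A barrier for two parties; wait() blocks until both arrive, then each
# caller receives a distinct index in range(parties).
barrier = threading.Barrier(2, timeout=5)
results = []

def worker():
    results.append(barrier.wait())

threads = [threading.Thread(target=worker) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
assert sorted(results) == [0, 1]
```

If fewer than `parties` threads arrive before the timeout, `wait()` raises `BrokenBarrierError`, which is the exception shown in the traceback above.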
<!-- gh-linked-prs -->
### Linked PRs
* gh-109135
* gh-109272
* gh-110342
<!-- /gh-linked-prs -->
| 517cd82ea7d01b344804413ef05610934a43a241 | c0f488b88f2a54d76256818e2841d868fecfd396 |
python/cpython | python__cpython-108984 | # `global x: int` is not tested
While reading https://peps.python.org/pep-0526/#where-annotations-aren-t-allowed I've noticed that not all corner-cases are covered: https://github.com/python/cpython/blob/6f8411cfd68134ccae01b0b4cb332578008a69e3/Lib/test/test_grammar.py#L347-L365
For example, explicit PEP's example:
```python
def f():
global x: int # SyntaxError
def g():
x: int # Also a SyntaxError
global x
```
Two problems:
1. `global x: int` is not tested, only `nonlocal x: int` is
2. `x: int; nonlocal x` is not tested
There's also an interesting corner-case from the rejected ideas:
```python
x: int = y = 1
z = w: int = 1
```
Since we have this test `check_syntax_error(self, "def f: int")`, I assume that we also test rejected ideas here. So, let's add this one as well.
I propose adding these three cases.
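A hedged sketch of the proposed checks, using `compile()` directly rather than test_grammar's `check_syntax_error` helper:

```python
def is_syntax_error(src):
    """Return True if compiling src raises SyntaxError."""
    try:
        compile(src, "<test>", "exec")
    except SyntaxError:
        return True
    return False

# The global counterpart of the already-tested `nonlocal x: int` case.
assert is_syntax_error("def f():\n    global x: int")

# Annotation followed by a nonlocal declaration of the same name.
assert is_syntax_error(
    "def f():\n"
    "    x = 1\n"
    "    def g():\n"
    "        x: int\n"
    "        nonlocal x\n"
)

# The rejected chained-assignment forms from PEP 526.
assert is_syntax_error("x: int = y = 1")
assert is_syntax_error("z = w: int = 1")
```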
<!-- gh-linked-prs -->
### Linked PRs
* gh-108984
* gh-109000
* gh-109001
<!-- /gh-linked-prs -->
| 1fb20d42c58924e2e941622b3539645c7b843e0e | 39376cb93d40b8fe588be0c1987272b0f8c49e26 |
python/cpython | python__cpython-109431 | # test_asyncio: test_subprocess_consistent_callbacks() fails randomly
The following test_asyncio test is unstable and fails randomly on buildbots. I saw failures on Linux and FreeBSD.
```
FAIL: test_subprocess_consistent_callbacks (test.test_asyncio.test_subprocess.SubprocessThreadedWatcherTests.test_subprocess_consistent_callbacks)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/dje/cpython-buildarea/3.x.edelsohn-rhel8-z.lto-pgo/build/Lib/test/test_asyncio/test_subprocess.py", line 788, in test_subprocess_consistent_callbacks
self.loop.run_until_complete(main())
File "/home/dje/cpython-buildarea/3.x.edelsohn-rhel8-z.lto-pgo/build/Lib/asyncio/base_events.py", line 664, in run_until_complete
return future.result()
^^^^^^^^^^^^^^^
File "/home/dje/cpython-buildarea/3.x.edelsohn-rhel8-z.lto-pgo/build/Lib/test/test_asyncio/test_subprocess.py", line 780, in main
self.assertEqual(events, [
AssertionError: Lists differ: ['process_exited', ('pipe_data_received', 1, b'stdout')] != [('pipe_data_received', 1, b'stdout'), ('p[95 chars]ted']
First differing element 0:
'process_exited'
('pipe_data_received', 1, b'stdout')
Second list contains 3 additional elements.
First extra element 2:
'pipe_connection_lost'
- ['process_exited', ('pipe_data_received', 1, b'stdout')]
? ------------------ ^
+ [('pipe_data_received', 1, b'stdout'),
? ^
+ ('pipe_data_received', 2, b'stderr'),
+ 'pipe_connection_lost',
+ 'pipe_connection_lost',
+ 'process_exited']
```
build: https://buildbot.python.org/all/#/builders/442/builds/4900
<!-- gh-linked-prs -->
### Linked PRs
* gh-109431
* gh-109609
* gh-109610
<!-- /gh-linked-prs -->
| ced6924630037f1e5b3d1dbef2b600152fb07fbb | 850cc8d0b1db0a912a6e458720e265e6a6e5c1ba |
python/cpython | python__cpython-109491 | # test_sys.test_intern() fails if run more than once in the same process
# Bug report
Example:
```
$ ./python -m test test_sys test_sys -m test_intern -v
(...)
0:00:00 load avg: 1.60 Run tests sequentially
0:00:00 load avg: 1.60 [1/2] test_sys
test_intern (test.test_sys.SysModuleTest.test_intern) ... ok
----------------------------------------------------------------------
Ran 1 test in 0.001s
OK
0:00:00 load avg: 1.60 [2/2] test_sys
test_intern (test.test_sys.SysModuleTest.test_intern) ... FAIL
======================================================================
FAIL: test_intern (test.test_sys.SysModuleTest.test_intern)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/vstinner/python/3.12/Lib/test/test_sys.py", line 687, in test_intern
self.assertTrue(sys.intern(s) is s)
AssertionError: False is not true
----------------------------------------------------------------------
(...)
Result: FAILURE
```
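For context, a sketch of why interning is sticky within one process (the exact string is illustrative):

```python
import sys

# A string built at runtime is a fresh, un-interned object.
s = "".join(["never", "interned", "before", "12345"])
t = sys.intern(s)
# The first interning of this value returns the very same object...
assert t is s
# ...and any later equal-but-distinct string maps back to it.
u = "".join(["never", "interned", "before", "12345"])
assert u is not t
assert sys.intern(u) is t
```

On a second run of `test_intern` in the same process, the test's string is already in the intern table from the first run, so `sys.intern(s) is s` is False for the newly built `s`.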
<!-- gh-linked-prs -->
### Linked PRs
* gh-109491
* gh-110216
<!-- /gh-linked-prs -->
| 44b1e4ea4842c6cdc1bedba7aaeb93f236b3ec08 | f16e81f368d08891e28dc1f038c1826ea80d7801 |
python/cpython | python__cpython-108964 | # test_tempfile.test_flags() fails on FreeBSD: Operation not supported
On FreeBSD 13.2-RELEASE:
```
ERROR: test_flags (test.test_tempfile.TestTemporaryDirectory.test_flags)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/buildbot/buildarea/3.x.ware-freebsd/build/Lib/test/test_tempfile.py", line 1845, in test_flags
os.chflags(os.path.join(root, name), flags)
OSError: [Errno 45] Operation not supported: '/tmp/test_python_s_sfxawg/8s2ad168/dir1/dir1/dir1/test0.txt'
```
logs: https://buildbot.python.org/all/#/builders/1223/builds/1
I'm working on a fix.
See also issue #108948.
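A hedged sketch of the kind of guard such a fix likely needs (the function name and exact errno handling are assumptions, not the actual patch):

```python
import errno
import os

def chflags_if_supported(path, flags):
    """Apply chflags, returning False where the OS or filesystem refuses."""
    if not hasattr(os, "chflags"):  # e.g. Linux has no chflags at all
        return False
    try:
        os.chflags(path, flags)
    except OSError as err:
        # Some FreeBSD filesystems reject flags with EOPNOTSUPP (errno 45),
        # which is exactly the "Operation not supported" error above.
        if err.errno == errno.EOPNOTSUPP:
            return False
        raise
    return True
```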
<!-- gh-linked-prs -->
### Linked PRs
* gh-108964
* gh-108967
* gh-108968
<!-- /gh-linked-prs -->
| cd2ef21b076b494224985e266c5f5f8b37c66618 | b87263be9b82f778317c2bdf45ecd2ff23d0ba1b |
python/cpython | python__cpython-108929 | # test_unittest failure triggered by test_import + test_importlib
# Bug report
### Bug description:
test_unittest fails if (and only if) test_import and test_importlib are run (in the same process) before it:
```
./python -m test test_import test_importlib test_unittest
0:00:00 load avg: 3.83 Run tests sequentially
0:00:00 load avg: 3.83 [1/3] test_import
0:00:02 load avg: 3.52 [2/3] test_importlib
0:00:06 load avg: 3.24 [3/3] test_unittest
test test_unittest failed -- Traceback (most recent call last):
File "/home/thomas/python/python/cpython/Lib/test/test_unittest/test_discovery.py", line 840, in test_discovery_failed_discovery
with test.test_importlib.util.uncache('package'):
^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: module 'test.test_importlib' has no attribute 'util'
test_unittest failed (1 error)
```
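The mechanism behind this failure: importing a package does not import its submodules, and a submodule only becomes an attribute of the package once it is imported itself. A neutral illustration with the stdlib `email` package:

```python
import importlib
import sys

pkg = importlib.import_module("email")
# 'email.mime' is not loaded by 'email/__init__.py'; importing it is what
# binds the 'mime' attribute on the parent package.
importlib.import_module("email.mime")
assert hasattr(pkg, "mime")
assert "email.mime" in sys.modules
```

`test.test_importlib.util` is in the same situation: the attribute error is consistent with an earlier test having unloaded the submodule, so plain attribute access fails until it is imported again.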
### CPython versions tested on:
3.12, CPython main branch
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-108929
* gh-108952
* gh-108954
* gh-110347
* gh-112711
* gh-112712
* gh-112765
* gh-112784
* gh-112785
<!-- /gh-linked-prs -->
| 3f89b257639dd817a32079da2ae2c4436b8e82eb | deea7c82682848b2a0db971a4dcc3a32c73a9f8c |
python/cpython | python__cpython-109044 | # duplicate backslashes in str.split docstring
```console
$ pydoc3 str.split | grep -F '\\'
character (including \\n \\r \\t \\f and spaces) and will discard
```
These should be single backslashes.
<!-- gh-linked-prs -->
### Linked PRs
* gh-109044
* gh-109061
* gh-109062
<!-- /gh-linked-prs -->
| e7d5433f944a5725aa82595f9251abfc8a63d333 | fd5989bda119f8c0f8571aad88d8e9be3be678a4 |
python/cpython | python__cpython-108904 | # `asyncio`: do not create unneeded `lambda`s
# Bug report
There are multiple places in `asyncio` code and docs where this pattern is used: `lambda: some_call()`
While this pattern can be used in some rare cases, generally it is not a good thing to have for several reasons:
1. It creates an extra frame for no reason
2. It spends more bytecode entries to do the same thing
```python
>>> import dis
>>> def a():
... return lambda: str()
...
>>> def b():
... return str
...
>>> dis.dis(a)
1 0 RESUME 0
2 2 LOAD_CONST 1 (<code object <lambda> at 0x10136d040, file "<stdin>", line 2>)
4 MAKE_FUNCTION
6 RETURN_VALUE
Disassembly of <code object <lambda> at 0x10136d040, file "<stdin>", line 2>:
2 0 RESUME 0
2 LOAD_GLOBAL 1 (str + NULL)
12 CALL 0
20 RETURN_VALUE
>>> dis.dis(b)
1 0 RESUME 0
2 2 LOAD_GLOBAL 0 (str)
12 RETURN_VALUE
```
I propose to remove this pattern from `asyncio`, because it is designed to be load-intensive and such micro-optimizations surely help.
```diff
diff --git Doc/library/asyncio-protocol.rst Doc/library/asyncio-protocol.rst
index 7bc906eaafc..9781bda8b27 100644
--- Doc/library/asyncio-protocol.rst
+++ Doc/library/asyncio-protocol.rst
@@ -746,7 +746,7 @@ received data, and close the connection::
loop = asyncio.get_running_loop()
server = await loop.create_server(
- lambda: EchoServerProtocol(),
+ EchoServerProtocol,
'127.0.0.1', 8888)
async with server:
@@ -850,7 +850,7 @@ method, sends back received data::
# One protocol instance will be created to serve all
# client requests.
transport, protocol = await loop.create_datagram_endpoint(
- lambda: EchoServerProtocol(),
+ EchoServerProtocol,
local_addr=('127.0.0.1', 9999))
try:
diff --git Lib/asyncio/sslproto.py Lib/asyncio/sslproto.py
index 488e17d8bcc..3eb65a8a08b 100644
--- Lib/asyncio/sslproto.py
+++ Lib/asyncio/sslproto.py
@@ -539,7 +539,7 @@ def _start_handshake(self):
# start handshake timeout count down
self._handshake_timeout_handle = \
self._loop.call_later(self._ssl_handshake_timeout,
- lambda: self._check_handshake_timeout())
+ self._check_handshake_timeout)
self._do_handshake()
@@ -619,7 +619,7 @@ def _start_shutdown(self):
self._set_state(SSLProtocolState.FLUSHING)
self._shutdown_timeout_handle = self._loop.call_later(
self._ssl_shutdown_timeout,
- lambda: self._check_shutdown_timeout()
+ self._check_shutdown_timeout
)
self._do_flush()
@@ -758,7 +758,7 @@ def _do_read__buffered(self):
else:
break
else:
- self._loop.call_soon(lambda: self._do_read())
+ self._loop.call_soon(self._do_read)
except SSLAgainErrors:
pass
if offset > 0:
```
I will send a PR.
<!-- gh-linked-prs -->
### Linked PRs
* gh-108904
<!-- /gh-linked-prs -->
| ad1d6a1c20c7bd3e880c9af7251e6f39ff0e62a9 | 1e0d62793a84001e92f1c80b511d3a212b435acc |
python/cpython | python__cpython-108870 | # C API: Rename _PyThreadState_GetUnchecked() to public PyThreadState_GetUnchecked
The _PyThreadState_UncheckedGet() function was added in Python 3.5 (2016) by issue #70342 with commit bfd316e750bc3040c08d1b5872e2de188e8c1e5f. I was asked by PyPy developers to have an API to get the current thread state, but don't call Py_FatalError() if it's NULL. See: https://mail.python.org/pipermail/python-dev/2016-January/142767.html
In Python 3.13, I'm trying to **remove private functions** from the public C API: issue #106320.
Since _PyThreadState_UncheckedGet() had at least one user and exposing this API is relevant, I propose renaming the private _PyThreadState_UncheckedGet() function to **PyThreadState_GetUnchecked**() and so making it public, but not adding it to the limited C API (not needed).
For the function name, see: https://github.com/capi-workgroup/problems/issues/52 I propose to use the ``Unsafe`` suffix for "unsafe" functions. Previously, the function used ``Unchecked``, but it wasn't a suffix!
**UPDATE**: I renamed PyThreadState_Get**Unsafe**() to PyThreadState_Get**Unchecked**().
<!-- gh-linked-prs -->
### Linked PRs
* gh-108870
<!-- /gh-linked-prs -->
| d73501602f863a54c872ce103cd3fa119e38bac9 | 6ab6040054e5ca2d3eb7833dc8bf4eb0bbaa0aac |
python/cpython | python__cpython-108862 | # Markup missing in docs for `inspect.Signature.replace()`
# Documentation
The word "replace" in the docs for `inspect.Signature.replace()` is missing appropriate markup to make it not appear as just the word "replace":
https://github.com/python/cpython/blob/44fc378b598eb675a847d1b7c46c8f6e81313c04/Doc/library/inspect.rst?plain=1#L733
Probably needs a `:meth:`, but at least some code literal markup.
<!-- gh-linked-prs -->
### Linked PRs
* gh-108862
* gh-108970
* gh-108971
<!-- /gh-linked-prs -->
| 6f8411cfd68134ccae01b0b4cb332578008a69e3 | cd2ef21b076b494224985e266c5f5f8b37c66618 |
python/cpython | python__cpython-108853 | # test_tomllib.test_inline_array_recursion_limit() failed on wasm32-wasi 3.11 buildbot
test_tomllib.test_inline_array_recursion_limit() failed on wasm32-wasi 3.11 buildbot:
```
ERROR: test_inline_array_recursion_limit (test.test_tomllib.test_misc.TestMiscellaneous.test_inline_array_recursion_limit)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/Lib/test/test_tomllib/test_misc.py", line 98, in test_inline_array_recursion_limit
tomllib.loads(recursive_array_toml)
File "/Lib/tomllib/_parser.py", line 102, in loads
pos = key_value_rule(src, pos, out, header, parse_float)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Lib/tomllib/_parser.py", line 326, in key_value_rule
pos, key, value = parse_key_value_pair(src, pos, parse_float)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Lib/tomllib/_parser.py", line 369, in parse_key_value_pair
pos, value = parse_value(src, pos, parse_float)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Lib/tomllib/_parser.py", line 616, in parse_value
return parse_array(src, pos, parse_float)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Lib/tomllib/_parser.py", line 420, in parse_array
pos, val = parse_value(src, pos, parse_float)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(...)
File "/Lib/tomllib/_parser.py", line 416, in parse_array
pos = skip_comments_and_array_ws(src, pos)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Lib/tomllib/_parser.py", line 278, in skip_comments_and_array_ws
pos = skip_chars(src, pos, TOML_WS_AND_NEWLINE)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RecursionError: maximum recursion depth exceeded
```
link: https://buildbot.python.org/all/#/builders/1047/builds/908
<!-- gh-linked-prs -->
### Linked PRs
* gh-108853
* gh-109012
* gh-109013
* gh-110127
<!-- /gh-linked-prs -->
| 8ff11425783806f8cb78e99f667546b1f7f3428e | 2cd170db40ffba357848672ff3d2f8c1e0e74f2c |
python/cpython | python__cpython-108841 | # Remove unused `TestEnumTypeSubclassing` in `test_enum`
# Bug report
Looks like this class was not ever used:
https://github.com/python/cpython/blob/f373c6b9483e12d7f6e03a631601149ed60ab883/Lib/test/test_enum.py#L4691-L4692
It was added as the part of https://github.com/python/cpython/commit/b775106d940e3d77c8af7967545bb9a5b7b162df#diff-8467f9fbbff81abf26d87a8dbbf0e0c866157971948010e48cc73539251a9e4c
I propose to remove it.
But, if there are actual tests that are missing and are required, I am happy to add them there.
CC @ethanfurman
<!-- gh-linked-prs -->
### Linked PRs
* gh-108841
<!-- /gh-linked-prs -->
| b4c8cce9a7a9f3953eedffa277a3fe071731856d | 230649f5383ac28793c499ea334b16ff671c3d02 |
python/cpython | python__cpython-108839 | # regrtest should re-run failed tests in subprocesses
# Feature or enhancement
### Has this already been discussed elsewhere?
No response given
### Links to previous discussion of this feature:
_No response_
### Proposal:
Buildbots and GitHub Actions run the Python test suite with the ``--verbose2`` option to re-run failed tests in verbose mode. The current regrtest implementation re-runs those tests in the main process, which is not reliable: if a re-run test crashes or times out, the main process itself is killed and cannot regain control. Usually, there is no final report after the test is re-run, and the output is just "truncated".
I propose to re-run failed tests in subprocesses to make them more reliable:
* Each test file is run in a fresh process and so does not inherit the state of previously executed tests. It runs in a known and reproducible state: threads, signals, etc.
* The main process can kill the worker process if it hangs.
* If faulthandler kills a worker process on a timeout, the main process detects it and continues its regular work.
* Same if a worker process is killed because of a fatal error: Python Fatal Error, segmentation fault, etc.
* Regular regrtest features can be re-used, like detection of leaking temporary files.
I'm working on an implementation.
<!-- gh-linked-prs -->
### Linked PRs
* gh-108839
* gh-108858
* gh-108896
* gh-108966
<!-- /gh-linked-prs -->
| 31c2945f143c6b80c837fcf09a5cfb85fea9ea4c | c2ec174d243da5d2607dbf06c4451d0093ac40ba |
python/cpython | python__cpython-108827 | # `dis` command line interface exists, but not documented
The `dis` CLI exists de facto and is useful for disassembling source code files or code from stdin, but it is not documented and is only mentioned as a "test program" in the source code.
There was a previous discussion #88571 and PR #26714, but the latter was closed for an unknown reason. I'll submit a new PR.
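The de-facto interface being documented, exercised via a subprocess (the file contents here are arbitrary):

```python
import os
import subprocess
import sys
import tempfile

with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
    f.write("x = 1 + 2\n")
    path = f.name
try:
    # `python -m dis <file>` compiles the file and prints its disassembly.
    proc = subprocess.run(
        [sys.executable, "-m", "dis", path],
        capture_output=True, text=True, check=True,
    )
finally:
    os.unlink(path)
assert "STORE_NAME" in proc.stdout
```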
<!-- gh-linked-prs -->
### Linked PRs
* gh-108827
* gh-110681
* gh-110689
<!-- /gh-linked-prs -->
| 0d805b998ded854840f029b7f0c9a02eb3efa251 | 732532b0af9d1b5c7ae4932526c8d20d86c15507 |
python/cpython | python__cpython-108820 | # regrtest: compute statistics on executed tests
# Feature or enhancement
### Has this already been discussed elsewhere?
No response given
### Links to previous discussion of this feature:
_No response_
### Proposal:
``regrtest`` should compute statistics on all executed tests: count successes, failures and skipped tests.
<!-- gh-linked-prs -->
### Linked PRs
* gh-108820
* gh-108821
* gh-108833
* gh-109432
<!-- /gh-linked-prs -->
| f964154b8cb74a05a3664cd34e938c0298f97680 | 7b4b788eaadb36f65b08a4a84e7096bb03dcc12b |
python/cpython | python__cpython-108795 | # doctest: count the number of skipped tests
# Feature or enhancement
### Has this already been discussed elsewhere?
No response given
### Links to previous discussion of this feature:
_No response_
### Proposal:
Currently, doctest only counts the number of failed and attempted tests, not the number of skipped tests.
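A sketch of the current behavior: a `+SKIP` example never runs, and the returned `TestResults` exposes only `failed` and `attempted`:

```python
import doctest

source = """
>>> 1 + 1
2
>>> raise SystemExit  # doctest: +SKIP
"""
parser = doctest.DocTestParser()
test = parser.get_doctest(source, {}, "example", "<example>", 0)
runner = doctest.DocTestRunner(verbose=False)
results = runner.run(test)
# The skipped example is never executed, so nothing fails; whether it is
# counted in `attempted` differs between versions, so we don't assert on it.
assert results.failed == 0
```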
<!-- gh-linked-prs -->
### Linked PRs
* gh-108795
<!-- /gh-linked-prs -->
| 4f9b706c6f5d4422a398146bfd011daedaef1851 | 4ba18099b70c9f20f69357bac94d74f7c3238d7f |
python/cpython | python__cpython-108816 | # `pdb` CLI doesn't handle incorrect arguments properly
# Bug report
### Checklist
- [X] I am confident this is a bug in CPython, not a bug in a third-party project
- [X] I have searched the [CPython issue tracker](https://github.com/python/cpython/issues?q=is%3Aissue+sort%3Acreated-desc),
and am confident this bug has not been reported before
### CPython versions tested on:
3.11, 3.12, CPython main branch
### Operating systems tested on:
Linux
### Output from running 'python -VV' on the command line:
Python 3.13.0a0 (heads/main:578ebc5d5f, Sep 1 2023, 20:48:35) [GCC 10.2.1 20210110]
### A clear and concise description of the bug:
The `pdb` module produces a large traceback instead of a short error message if invoked with an invalid command line option. This happens because it doesn't handle the exceptions that can occur in `getopt.getopt`, as is typically done.
```
$ ./python -m pdb -c
Traceback (most recent call last):
File "/home/radislav/projects/cpython/Lib/runpy.py", line 198, in _run_module_as_main
return _run_code(code, main_globals, None,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/radislav/projects/cpython/Lib/runpy.py", line 88, in _run_code
exec(code, run_globals)
File "/home/radislav/projects/cpython/Lib/pdb.py", line 2114, in <module>
pdb.main()
File "/home/radislav/projects/cpython/Lib/pdb.py", line 2060, in main
opts, args = getopt.getopt(sys.argv[1:], 'mhc:', ['help', 'command='])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/radislav/projects/cpython/Lib/getopt.py", line 95, in getopt
opts, args = do_shorts(opts, args[0][1:], shortopts, args[1:])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/radislav/projects/cpython/Lib/getopt.py", line 198, in do_shorts
raise GetoptError(_('option -%s requires argument') % opt,
getopt.GetoptError: option -c requires argument
```
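The conventional pattern the report alludes to ("as is typically done") looks roughly like this; the option string is copied from pdb, the rest is a sketch:

```python
import getopt
import sys

def parse_args(argv):
    """Return (opts, args), or print a short error and return None."""
    try:
        return getopt.getopt(argv, "mhc:", ["help", "command="])
    except getopt.GetoptError as exc:
        # A one-line message instead of the full traceback shown above.
        print(f"pdb: error: {exc}", file=sys.stderr)
        return None

assert parse_args(["-c"]) is None               # missing argument -> error
assert parse_args(["-c", "continue"]) is not None
```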
A similar situation arises with nonexistent modules and directory names. In the first case, the exception that occurs in `_ModuleTarget.check` is printed to stderr with its traceback. In the second case, the directory name is 'successfully' checked by the `_ScriptTarget.check` call, and the debugger is run on an invalid target.
```
$ ./python -m pdb -m spam
Traceback (most recent call last):
File "/home/radislav/projects/cpython/Lib/pdb.py", line 166, in check
self._details
File "/home/radislav/projects/cpython/Lib/functools.py", line 1014, in __get__
val = self.func(instance)
^^^^^^^^^^^^^^^^^^^
File "/home/radislav/projects/cpython/Lib/pdb.py", line 174, in _details
return runpy._get_module_details(self)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/radislav/projects/cpython/Lib/runpy.py", line 142, in _get_module_details
raise error("No module named %s" % mod_name)
ImportError: No module named spam
```
```
$ ./python -m pdb /
Traceback (most recent call last):
File "/home/radislav/projects/cpython/Lib/pdb.py", line 2088, in main
pdb._run(target)
File "/home/radislav/projects/cpython/Lib/pdb.py", line 1868, in _run
self.run(target.code)
^^^^^^^^^^^
File "/home/radislav/projects/cpython/Lib/pdb.py", line 159, in code
with io.open_code(self) as fp:
^^^^^^^^^^^^^^^^^^
IsADirectoryError: [Errno 21] Is a directory: '/'
Uncaught exception. Entering post mortem debugging
Running 'cont' or 'step' will restart the program
> /home/radislav/projects/cpython/Lib/pdb.py(159)code()
-> with io.open_code(self) as fp:
(Pdb) c
Traceback (most recent call last):
File "/home/radislav/projects/cpython/Lib/pdb.py", line 2088, in main
pdb._run(target)
File "/home/radislav/projects/cpython/Lib/pdb.py", line 1868, in _run
self.run(target.code)
^^^^^^^^^^^
File "/home/radislav/projects/cpython/Lib/pdb.py", line 159, in code
with io.open_code(self) as fp:
^^^^^^^^^^^^^^^^^^
IsADirectoryError: [Errno 21] Is a directory: '/'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/radislav/projects/cpython/Lib/runpy.py", line 198, in _run_module_as_main
return _run_code(code, main_globals, None,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/radislav/projects/cpython/Lib/runpy.py", line 88, in _run_code
exec(code, run_globals)
File "/home/radislav/projects/cpython/Lib/pdb.py", line 2114, in <module>
pdb.main()
File "/home/radislav/projects/cpython/Lib/pdb.py", line 2106, in main
pdb.interaction(None, e)
File "/home/radislav/projects/cpython/Lib/pdb.py", line 501, in interaction
self._cmdloop()
File "/home/radislav/projects/cpython/Lib/pdb.py", line 405, in _cmdloop
self.cmdloop()
File "/home/radislav/projects/cpython/Lib/cmd.py", line 138, in cmdloop
stop = self.onecmd(line)
^^^^^^^^^^^^^^^^^
File "/home/radislav/projects/cpython/Lib/pdb.py", line 592, in onecmd
return cmd.Cmd.onecmd(self, line)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/radislav/projects/cpython/Lib/cmd.py", line 217, in onecmd
return func(arg)
^^^^^^^^^
File "/home/radislav/projects/cpython/Lib/pdb.py", line 1329, in do_continue
self.set_continue()
File "/home/radislav/projects/cpython/Lib/bdb.py", line 344, in set_continue
self._set_stopinfo(self.botframe, None, -1)
^^^^^^^^^^^^^
AttributeError: 'Pdb' object has no attribute 'botframe'. Did you mean: 'curframe'?
```
I'm working on a fix.
<!-- gh-linked-prs -->
### Linked PRs
* gh-108816
* gh-110915
* gh-110916
* gh-111063
* gh-111064
<!-- /gh-linked-prs -->
| 162213f2db3835e1115178d38741544f4b4db416 | b75186f69edcf54615910a5cd707996144163ef7 |
python/cpython | python__cpython-108787 | # Split up _testinternalcapi.c
https://github.com/python/cpython/issues/93649 and associated PRs split up `_testcapimodule.c` because the file is large and hard to maintain. I'm proposing splitting the tests for the internal C API (`Modules/_testinternalcapi.c`) in the same manner for the same reasons. Additionally, I expect the PRs related to PEP 703 to require additional internal C API tests, and I'd like to avoid further cluttering `_testinternalcapi.c`.
cc @encukou @vstinner
<!-- gh-linked-prs -->
### Linked PRs
* gh-108787
<!-- /gh-linked-prs -->
| aa52888e6a0269f0c31a24bd0d1adb3238147261 | 76ce537fb1291f90c6dccbfea8653544de7902fd |
python/cpython | python__cpython-108764 | # C API: Cleanup header files
# Feature or enhancement
### Has this already been discussed elsewhere?
No response given
### Links to previous discussion of this feature:
_No response_
### Proposal:
Meta issue for a few changes related to C API header files.
<!-- gh-linked-prs -->
### Linked PRs
* gh-108764
* gh-108766
* gh-108769
* gh-108775
* gh-108776
* gh-108781
* gh-108782
* gh-108783
* gh-108784
* gh-108823
* gh-108825
* gh-108830
* gh-108831
* gh-108854
* gh-108855
* gh-108977
* gh-111402
* gh-111563
* gh-136027
* gh-136043
* gh-136044
<!-- /gh-linked-prs -->
| 03c5a685689fd4891d01caff8da01096fcf73d5a | 5948f562e031a4d345681e1b962da3e3083f85df |
python/cpython | python__cpython-108754 | # Enhance Py_STATS: statistics on Python performance
# Feature or enhancement
### Has this already been discussed elsewhere?
No response given
### Links to previous discussion of this feature:
_No response_
### Proposal:
When Python is built with ``./configure --enable-pystats``, it always dumps statistics at exit, even if statistics gathering was always off.
I propose adding ``PYTHONSTATS`` environment variable and only dump statistics at exit when ``python -X pystats`` or ``PYTHONSTATS=1`` is used.
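The plumbing already exists: `-X` options land in `sys._xoptions`, which is where a `pystats` flag would be read from. Demonstrated here with the existing `utf8` option, since `-X pystats` requires a `--enable-pystats` build:

```python
import subprocess
import sys

# Any -X name[=value] pair appears as a key in sys._xoptions.
proc = subprocess.run(
    [sys.executable, "-X", "utf8", "-c", "import sys; print(sys._xoptions)"],
    capture_output=True, text=True, check=True,
)
assert "utf8" in proc.stdout
```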
<!-- gh-linked-prs -->
### Linked PRs
* gh-108754
* gh-109040
<!-- /gh-linked-prs -->
| a0773b89dfe5cd2190d539905dd89e7f6455668e | 8ff11425783806f8cb78e99f667546b1f7f3428e |
python/cpython | python__cpython-108752 | # Add generalized replace() function
# Feature or enhancement
### Has this already been discussed elsewhere?
I have already discussed this feature proposal on Discourse
### Links to previous discussion of this feature:
https://discuss.python.org/t/generalize-replace-function/28511
### Proposal:
Some classes have a `replace()` method, which creates a modified copy of the object (modified values are provided as keyword arguments). Named tuples have the `_replace()` method for this (to avoid a conflict with a `replace` attribute). Dataclasses provide a global function for this.
I propose to generalize `dataclasses.replace()` to support all classes which need this feature. Per the result of the discussion on discuss.python.org, the new function will be added in the `copy` module as `copy.replace()`. `dataclasses.replace()` will continue to support only dataclasses. Dataclasses, named tuples and all classes which currently have a `replace()` method with suitable semantics will also get the `__replace__()` method. Now you can add such a feature to new classes without conflicting with the `replace` attribute, and use this feature in generic code without conflicting with `str.replace()` and the like.
For now, `copy.replace()` is more limited than `copy.copy()` and does not fall back to using the powerful pickle protocol.
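What exists today for dataclasses, which the proposal generalizes (in 3.13, `copy.replace()` delegates to the new `__replace__` protocol; this sketch sticks to the long-standing `dataclasses.replace()` so it also runs on older versions):

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Point:
    x: int
    y: int

p = Point(1, 2)
q = replace(p, y=5)   # modified copy; the original is untouched
assert q == Point(1, 5)
assert p == Point(1, 2)
```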
<!-- gh-linked-prs -->
### Linked PRs
* gh-108752
<!-- /gh-linked-prs -->
| 6f3c138dfa868b32d3288898923bbfa388f2fa5d | 9f0c0a46f00d687e921990ee83894b2f4ce8a6e7 |
python/cpython | python__cpython-109470 | # `site.{usercustomize,sitecustomize}` hooks are never tested
# Bug report
They are documented:
```rst
The Customization Modules
-------------------------
Python provides two hooks to let you customize it: :mod:`sitecustomize` and
:mod:`usercustomize`. To see how it works, you need first to find the location
of your user site-packages directory. Start Python and run this code::
>>> import site
>>> site.getusersitepackages()
'/home/user/.local/lib/python3.5/site-packages'
Now you can create a file named :file:`usercustomize.py` in that directory and
put anything you want in it. It will affect every invocation of Python, unless
it is started with the :option:`-s` option to disable the automatic import.
:mod:`sitecustomize` works in the same way, but is typically created by an
administrator of the computer in the global site-packages directory, and is
imported before :mod:`usercustomize`. See the documentation of the :mod:`site`
module for more details.
```
But, they are not tested anywhere in the CPython.
I think that adding these tests might be worth it, provided the setup can be made reliable.
<!-- gh-linked-prs -->
### Linked PRs
* gh-109470
<!-- /gh-linked-prs -->
| 738574fb21967a1f313f1542dd7b70ae0dcd9705 | 220bcc9e27c89bf3b3609b80a31b1398840f195e |
python/cpython | python__cpython-108741 | # make regen-all is not deterministic
# Bug report
### Checklist
- [X] I am confident this is a bug in CPython, not a bug in a third-party project
- [X] I have searched the [CPython issue tracker](https://github.com/python/cpython/issues?q=is%3Aissue+sort%3Acreated-desc),
and am confident this bug has not been reported before
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Linux
### Output from running 'python -VV' on the command line:
_No response_
### A clear and concise description of the bug:
The ``make regen-all`` command is not deterministic. Depending in the order in which the target dependencies are executed, the result change.
regen1.sh:
```
set -e -x
git checkout .
git checkout 79823c103b66030f10e07e04a5462f101674a4fc
# WARNING: it removes all untracked files!
git clean -fdx
./configure --with-pydebug
make regen-cases
make regen-typeslots
make regen-token
make regen-ast
make regen-keyword
make regen-sre
make regen-frozen
make regen-pegen-metaparser
make regen-pegen
make regen-test-frozenmain
make regen-test-levenshtein
make clinic
make regen-global-objects
echo
grep 'ID(traceback)' Python/deepfreeze/deepfreeze.c || true
echo
wc -l Python/deepfreeze/deepfreeze.c
```
regen2.sh:
```
set -e -x
git checkout .
git checkout 79823c103b66030f10e07e04a5462f101674a4fc
# WARNING: it removes all untracked files!
git clean -fdx
./configure --with-pydebug
make clinic
make regen-global-objects
make regen-cases
make regen-typeslots
make regen-token
make regen-ast
make regen-keyword
make regen-sre
make regen-frozen
make regen-pegen-metaparser
make regen-pegen
make regen-test-frozenmain
make regen-test-levenshtein
echo
grep 'ID(traceback)' Python/deepfreeze/deepfreeze.c || true
echo
wc -l Python/deepfreeze/deepfreeze.c
```
regen1.sh output:
```
(...)
&_Py_ID(traceback),
&_Py_ID(traceback),
146433 Python/deepfreeze/deepfreeze.c
```
regen2.sh output:
```
(...)
146543 Python/deepfreeze/deepfreeze.c
```
* Using ``regen1.sh``, there is a ``traceback`` identifier in ``Python/deepfreeze/deepfreeze.c``.
* Using ``regen2.sh``, there **is no** ``traceback`` identifier in ``Python/deepfreeze/deepfreeze.c``.
These commands are coming from ``regen-all`` in ``Makefile.pre.in``:
```
regen-all: regen-cases regen-typeslots \
regen-token regen-ast regen-keyword regen-sre regen-frozen clinic \
regen-pegen-metaparser regen-pegen regen-test-frozenmain \
regen-test-levenshtein regen-global-objects
```
<!-- gh-linked-prs -->
### Linked PRs
* gh-108741
* gh-109019
* gh-109021
<!-- /gh-linked-prs -->
| db1ee6a19ab62191c16ecb732cb4dcaede98a902 | a0773b89dfe5cd2190d539905dd89e7f6455668e |