| repo | instance_id | problem_statement | merge_commit | base_commit |
|---|---|---|---|---|
python/cpython | python__cpython-101523 | # Make it easier to override externals on Windows
Currently, it's very easy to build with the exact set of external dependencies that we use upstream. However, sometimes downstream builders need to use alternate builds of these dependencies, and possibly alternate versions (within compatibility allowances).
Make the build files easier to override without having to patch them.
<!-- gh-linked-prs -->
### Linked PRs
* gh-101523
* gh-101546
* gh-101547
<!-- /gh-linked-prs -->
| f6c53b80a16f63825479c5ca0f8a5e2829c3f505 | 433fb3ef08c71b97a0d08e522df56e0afaf3747a |
python/cpython | python__cpython-104508 | # Isolate the tracemalloc Module Between Interpreters
(See https://github.com/python/cpython/issues/100227.)
Currently the tracemalloc module has some state in `_PyRuntimeState`, including objects, which is shared by all interpreters. Interpreters should be isolated from each other, for a variety of reasons (including the possibility of a per-interpreter GIL). Isolating the module will involve moving some of the state to `PyInterpreterState` (and, under per-interpreter GIL, guarding other state with a global lock).
----
Analysis:
<details>
<summary>(expand)</summary>
## Allocators
The module installs a custom allocator using the PEP 445 API
([docs](https://docs.python.org/3.12/c-api/memory.html#customize-memory-allocators)).
<details>
<summary>(expand)</summary>
<BR/>
the allocator functions:
| domain | func | wraps | wraps (reentrant) | actually wraps |
| ---- | ---- | ---- | ---- | ---- |
| <p align="center">-</p> | `tracemalloc_alloc()` | \<original `malloc()` or `calloc()`\> | \<--- | |
| <p align="center">-</p> | `tracemalloc_realloc()` | \<original `realloc()` or `free()`\> | \<--- | |
| <p align="center">-</p> | `tracemalloc_raw_alloc()` | `tracemalloc_alloc()` **\*** | \<original `malloc()` or `calloc()`\> | |
| <p align="center">-</p> | `tracemalloc_alloc_gil()` | `tracemalloc_alloc()` | \<original `malloc()` or `calloc()`\> | |
| raw | | | | |
| | `tracemalloc_raw_malloc()` | `tracemalloc_alloc()` | \<original `malloc()`\> | `tracemalloc_raw_alloc()` |
| | `tracemalloc_raw_calloc()` | `tracemalloc_alloc()` | \<original `calloc()`\> | `tracemalloc_raw_alloc()` |
| | `tracemalloc_raw_realloc()` | `tracemalloc_realloc()` **\*** | \<original `realloc()`\> | |
| | `tracemalloc_free()` | \<original `free()`\> | \<--- | |
| mem | | | | |
| obj | | | | |
| | `tracemalloc_malloc_gil()` | `tracemalloc_alloc_gil()` | \<--- | |
| | `tracemalloc_calloc_gil()` | `tracemalloc_alloc_gil()` | \<--- | |
| | `tracemalloc_realloc_gil()` | `tracemalloc_realloc()` | \<original `realloc()`\> | |
| | `tracemalloc_free()` | \<original `free()`\> | \<--- | |
**\*** Note that `tracemalloc_raw_alloc()` wraps the `tracemalloc_alloc()` call
with `PyGILState_Ensure()`/`PyGILState_Release()`.
Likewise for `tracemalloc_raw_realloc()` where it calls `tracemalloc_realloc()`.
In no other case does an allocator function use the GILState API for any calls.
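The reentrancy guard referenced throughout these tables (`reentrant_key`, used via `get_reentrant()`/`set_reentrant()`) can be sketched with a Python analogue. This is an illustration of the pattern only, with hypothetical names; the real implementation stores the flag in C thread-specific storage (`Py_tss_t`):

```python
# Illustrative Python analogue of tracemalloc's reentrancy guard; the real
# code keeps the flag in thread-specific storage so that a tracing hook
# which itself allocates does not re-enter its own bookkeeping.
import threading

_local = threading.local()

def get_reentrant() -> bool:
    return getattr(_local, "reentrant", False)

def set_reentrant(flag: bool) -> None:
    _local.reentrant = flag

calls = []  # stands in for the trace tables

def traced_alloc(n: int) -> bytearray:
    if get_reentrant():
        # Re-entered from our own bookkeeping: allocate without tracing.
        return bytearray(n)
    set_reentrant(True)
    try:
        calls.append(n)     # bookkeeping that may itself allocate...
        traced_alloc(16)    # ...simulated here by a nested allocation
        return bytearray(n)
    finally:
        set_reentrant(False)

traced_alloc(8)
assert calls == [8]  # the nested allocation was not traced
```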
</details>
## State
### Fields
<details>
<summary>(expand)</summary>
<BR/>
https://github.com/python/cpython/blob/main/Include/internal/pycore_tracemalloc.h#L57-L107
https://github.com/python/cpython/blob/main/Include/internal/pycore_runtime.h#L147
<details>
<summary>raw</summary>
<BR/>
https://github.com/python/cpython/blob/0675b8f032c69d265468b31d5cadac6a7ce4bd9c/Include/internal/pycore_tracemalloc.h#L43-L55
https://github.com/python/cpython/blob/0675b8f032c69d265468b31d5cadac6a7ce4bd9c/Include/internal/pycore_tracemalloc.h#L57-L64
https://github.com/python/cpython/blob/0675b8f032c69d265468b31d5cadac6a7ce4bd9c/Include/internal/pycore_tracemalloc.h#L57-L107
https://github.com/python/cpython/blob/0675b8f032c69d265468b31d5cadac6a7ce4bd9c/Modules/_tracemalloc.c#L54-L55
https://github.com/python/cpython/blob/0675b8f032c69d265468b31d5cadac6a7ce4bd9c/Modules/_tracemalloc.c#L69-L76
</details>
| name | type | protected by | \#ifdef | notes |
| ---- | ---- | ---- | ---- | ---- |
| `config` | `struct _PyTraceMalloc_Config` | | | |
| . `initialized` | `enum {}` | GIL | | |
| . `tracing` | `bool` | GIL | | |
| . `max_nframe` | `int` | GIL | | |
| `allocators` | | | | see PEP 445 ([docs](https://docs.python.org/3.12/c-api/memory.html#customize-memory-allocators)) |
| . `mem` | `PyMemAllocatorEx` | GIL | | |
| . `raw` | `PyMemAllocatorEx` | GIL | | |
| . `obj` | `PyMemAllocatorEx` | GIL | | |
| `tables_lock` | `PyThread_type_lock` | GIL | `TRACE_RAW_MALLOC` | |
| `traced_memory` | `size_t` | `tables_lock` | | |
| `peak_traced_memory` | `size_t` | `tables_lock` | | |
| `filenames` | `_Py_hashtable_t *` | GIL | | interned; effectively a `set` of objects |
| `traceback` | `struct tracemalloc_traceback *` | GIL | | a temporary buffer |
| `tracebacks` | `_Py_hashtable_t *` | GIL | | interned; effectively a `set` of `traceback_t` |
| `traces` | `_Py_hashtable_t *` | `tables_lock` | | void-ptr -> `trace_t` |
| `domains` | `_Py_hashtable_t *` | `tables_lock` | | domain -> `_Py_hashtable_t *` (per-domain `traces`) |
| `empty_traceback` | `struct tracemalloc_traceback` | ??? | | |
| `reentrant_key` | `Py_tss_t` | ??? | | |
notes:
* each frame in `struct tracemalloc_traceback` holds a filename object
* `traceback_t` is a typedef for `struct tracemalloc_traceback`
* `frame_t` is a typedef for `struct tracemalloc_frame`
hold objects:
* `filenames`
* `traceback` (filename in each frame)
* `tracebacks` (filename in each frame of each traceback)
* `traces` (filename in each frame of each traceback)
* `domains` (filename in each frame of each traceback in each domain)
</details>
### Usage
<details>
<summary>(expand)</summary>
<BR/>
simple:
| name | context | get | set |
| ---- | ---- | ---- | ---- |
| `config` | | | |
| . `initialized` | lifecycle | `tracemalloc_init()`<BR/>`tracemalloc_deinit()` | `tracemalloc_init()`<BR/>`tracemalloc_deinit()` |
| . `tracing` | module | `_tracemalloc_is_tracing_impl()`<BR/>`_tracemalloc__get_traces_impl()`<BR/>`_tracemalloc_clear_traces_impl()`<BR/>`_tracemalloc_get_traceback_limit_impl()`<BR/>`_tracemalloc_get_traced_memory_impl()`<BR/>`_tracemalloc_reset_peak_impl()` | |
| | C-API | `PyTraceMalloc_Track()`<BR/>`PyTraceMalloc_Untrack()`<BR/>`_PyTraceMalloc_NewReference()`<BR/>`_PyMem_DumpTraceback()` | |
| | lifecycle | `tracemalloc_start()`<BR/>`tracemalloc_stop()` | `tracemalloc_start()`<BR/>`tracemalloc_stop()` |
| | internal | `tracemalloc_get_traceback()` | |
| . `max_nframe` | module | `_tracemalloc_get_traceback_limit_impl()` | |
| | lifecycle | | `tracemalloc_start()` |
| | internal | `traceback_get_frames()` | |
| `allocators` | | | | |
| . `mem` | lifecycle | `tracemalloc_start()` \+<BR/>`tracemalloc_stop()` | |
| . `raw` | lifecycle | `tracemalloc_init()` \+<BR/>`tracemalloc_start()` \+<BR/>`tracemalloc_stop()` | |
| | internal | `raw_malloc()`<BR/> `raw_free()` | |
| . `obj` | lifecycle | `tracemalloc_start()` \+<BR/>`tracemalloc_stop()` | |
| `tables_lock` | module | `_tracemalloc__get_traces_impl()` \+<BR/>`_tracemalloc_get_tracemalloc_memory_impl()` \+<BR/>`_tracemalloc_get_traced_memory_impl()` \+<BR/>`_tracemalloc_reset_peak_impl()` \+ | |
| | C-API | `PyTraceMalloc_Track()` \+<BR/>`PyTraceMalloc_Untrack()` \+<BR/>`_PyTraceMalloc_NewReference()` \+ | |
| | lifecycle | `tracemalloc_init()`<BR/>`tracemalloc_deinit()` | `tracemalloc_init()` \*<BR/>`tracemalloc_deinit()` \* |
| | allocator | `tracemalloc_alloc()` \+<BR/>`tracemalloc_realloc()` \+<BR/>`tracemalloc_free()` \+<BR/>`tracemalloc_realloc_gil()` \+<BR/>`tracemalloc_raw_realloc()` \+ | |
| | internal | `tracemalloc_get_traceback()` \+<BR/>`tracemalloc_clear_traces()` \+ | |
| `traced_memory` | module | `_tracemalloc_get_traced_memory_impl()`<BR/>`_tracemalloc_reset_peak_impl()` | |
| | internal | `tracemalloc_add_trace()` | `tracemalloc_add_trace()`<BR/>`tracemalloc_remove_trace()`<BR/>`tracemalloc_clear_traces()` |
| `peak_traced_memory` | module | `_tracemalloc_get_traced_memory_impl()` | `_tracemalloc_reset_peak_impl()` |
| | internal | `tracemalloc_add_trace()` | `tracemalloc_add_trace()`<BR/>`tracemalloc_clear_traces()` |
| `filenames` | module | `_tracemalloc_get_tracemalloc_memory_impl()` | |
| | lifecycle | | `tracemalloc_init()` \* |
| | internal | `tracemalloc_get_frame()`<BR/>`tracemalloc_clear_traces()` \+ | |
| `traceback` | lifecycle | `tracemalloc_stop()` | `tracemalloc_start()` \*<BR/>`tracemalloc_stop()` \* |
| | internal | `traceback_new()` \+ | |
| `tracebacks` | module | `_tracemalloc_get_tracemalloc_memory_impl()` | |
| | lifecycle | | `tracemalloc_init()` \*<BR/>`tracemalloc_deinit()` \+ |
| | internal | `traceback_new()` \+<BR/>`tracemalloc_clear_traces()` \+ | |
| `traces` | module | `_tracemalloc__get_traces_impl()`<BR/>`_tracemalloc_get_tracemalloc_memory_impl()` | |
| | C-API | `_PyTraceMalloc_NewReference()` | |
| | lifecycle | `tracemalloc_deinit()` \+ | `tracemalloc_init()` \* |
| | internal | `tracemalloc_get_traces_table()`<BR/>`tracemalloc_add_trace()` (indirect)<BR/>`tracemalloc_remove_trace()` (indirect)<BR/>`tracemalloc_get_traceback()` (indirect)<BR/>`tracemalloc_clear_traces()` \+ | |
| `domains` | module | `_tracemalloc__get_traces_impl()`<BR/>`_tracemalloc_get_tracemalloc_memory_impl()` | |
| | lifecycle | `tracemalloc_deinit()` \+ | `tracemalloc_init()` \* |
| | internal | `tracemalloc_get_traces_table()`<BR/>`tracemalloc_remove_trace()` (indirect)<BR/>`tracemalloc_get_traceback()` (indirect)<BR/>`tracemalloc_clear_traces()` \+<BR/>`tracemalloc_add_trace()` \+ | |
| `empty_traceback` | lifecycle | `tracemalloc_init()` \+ | |
| | internal | `traceback_new()` | |
| `reentrant_key` | lifecycle | `tracemalloc_init()` \+<BR/>`tracemalloc_deinit()` \+ | |
| | allocator | `tracemalloc_alloc_gil()` (indirect) \+<BR/>`tracemalloc_realloc_gil()` (indirect) \+<BR/>`tracemalloc_raw_alloc()` (indirect) \+<BR/>`tracemalloc_raw_realloc()` (indirect) \+ | |
| | internal | `get_reentrant()`<BR/>`set_reentrant()` \+ | |
\* the function allocates/deallocates the value (see below)
\+ the function mutates the value (see below)
simple (extraneous):
| name | context | allocate/deallocate | get (assert-only) |
| ---- | ---- | ---- | ---- |
| `config.tracing` | internal | | `tracemalloc_add_trace()`<BR/>`tracemalloc_remove_trace()` |
| `tables_lock` | lifecycle | `tracemalloc_init()`<BR/>`tracemalloc_deinit()` | |
| `traced_memory` | internal | | `tracemalloc_remove_trace()` |
| `filenames` | lifecycle | `tracemalloc_init()` | |
| `traceback` | lifecycle | `tracemalloc_start()`<BR/>`tracemalloc_stop()` | `tracemalloc_start()` |
| `tracebacks` | lifecycle | `tracemalloc_init()` | |
| `traces` | lifecycle | `tracemalloc_init()` | |
| `domains` | lifecycle | `tracemalloc_init()` | |
mutation of complex fields:
| name | context | initialize | finalize | clear | modify |
| ---- | ---- | ---- | ---- | ---- | ---- |
| `allocators.mem` | lifecycle | `tracemalloc_start()` | | | |
| `allocators.raw` | lifecycle | `tracemalloc_init()`<BR/>`tracemalloc_start()` | | | |
| `allocators.obj` | lifecycle | `tracemalloc_start()` | | | |
| `tables_lock` | module | | | | `_tracemalloc__get_traces_impl()`<BR/>`_tracemalloc_get_tracemalloc_memory_impl()`<BR/>`_tracemalloc_get_traced_memory_impl()`<BR/>`_tracemalloc_reset_peak_impl()` |
| | C-API | | | | `PyTraceMalloc_Track()`<BR/>`PyTraceMalloc_Untrack()`<BR/>`_PyTraceMalloc_NewReference()` |
| | allocator | `tracemalloc_alloc()` | | | `tracemalloc_alloc()`<BR/>`tracemalloc_realloc()`<BR/>`tracemalloc_free()`<BR/>`tracemalloc_realloc_gil()`<BR/>`tracemalloc_raw_realloc()` |
| | internal | | | | `tracemalloc_clear_traces()`<BR/>`tracemalloc_get_traceback()` |
| `filenames` | internal | | | `tracemalloc_clear_traces()` | `tracemalloc_get_frame()`<BR/>`tracemalloc_clear_traces()` |
| | lifecycle | | `tracemalloc_deinit()` | | |
| `traceback` | internal | `traceback_new()` | | | |
| `tracebacks` | lifecycle | | `tracemalloc_deinit()` | | |
| | internal | | | `tracemalloc_clear_traces()` | `traceback_new()` |
| `traces` | lifecycle | | `tracemalloc_deinit()` | | |
| | internal | | | `tracemalloc_clear_traces()` | |
| `domains` | lifecycle | | `tracemalloc_deinit()` | | |
| | internal | | | `tracemalloc_clear_traces()` | `tracemalloc_add_trace()` |
| `reentrant_key` | lifecycle | `tracemalloc_init()` | `tracemalloc_deinit()` | | |
| | internal | | | | `set_reentrant()` |
indirection:
| name | context | direct | indirect |
| ---- | ---- | ---- | ---- |
| `allocators.raw` | lifecycle | `tracemalloc_start()`<BR/>`tracemalloc_copy_trace()` | `raw_malloc()` |
| | | `tracemalloc_stop()` | `raw_free()` |
| | internal | `traceback_new()`<BR/>`tracemalloc_add_trace()`<BR/>`tracemalloc_copy_trace()` | `raw_malloc()` |
| | | `traceback_new()`<BR/>`tracemalloc_add_trace()`<BR/>`tracemalloc_remove_trace()` | `raw_free()` |
| `traces` | internal | `tracemalloc_add_trace()`<BR/>`tracemalloc_remove_trace()`<BR/>`tracemalloc_get_traceback()` | `tracemalloc_get_traces_table()` |
| `domains` | internal | `tracemalloc_add_trace()`<BR/>`tracemalloc_remove_trace()`<BR/>`tracemalloc_get_traceback()` | `tracemalloc_get_traces_table()` |
| `reentrant_key` | allocator | `tracemalloc_alloc_gil()`<BR/>`tracemalloc_realloc_gil()`<BR/>`tracemalloc_raw_alloc()`<BR/>`tracemalloc_raw_realloc()` | `get_reentrant()` |
| | | `tracemalloc_alloc_gil()`<BR/>`tracemalloc_realloc_gil()`<BR/>`tracemalloc_raw_alloc()`<BR/>`tracemalloc_raw_realloc()` | `set_reentrant()` |
</details>
</details>
<!-- gh-linked-prs -->
### Linked PRs
* gh-104508
<!-- /gh-linked-prs -->
| f7df17394906f2af51afef3c8ccaaab3847b059c | 26931944dd8abd6554249239344fa62b789b9028 |
python/cpython | python__cpython-101499 | # Improper call to `asyncio.Timeout.expired`
# Documentation
In [this example](https://docs.python.org/3/library/asyncio-task.html#asyncio.Timeout) for using `asyncio.Timeout`, `expired` is accessed as an attribute, but it should be called as a method.
I.e. the example should read as:
```python
async def main():
    try:
        # We do not know the timeout when starting, so we pass ``None``.
        async with asyncio.timeout(None) as cm:
            # We know the timeout now, so we reschedule it.
            new_deadline = get_running_loop().time() + 10
            cm.reschedule(new_deadline)
            await long_running_task()
    except TimeoutError:
        pass

    if cm.expired():  # <---- I CHANGED THIS TO BE A METHOD CALL
        print("Looks like we haven't finished on time.")
```
<!-- gh-linked-prs -->
### Linked PRs
* gh-101499
* gh-101501
<!-- /gh-linked-prs -->
| 95fb0e02582b5673eff49694eb0dce1d7df52301 | 62251c3da06eb4662502295697f39730565b1717 |
python/cpython | python__cpython-101477 | # Add internal API for fast module access from heap type methods
See [topic on Discourse](https://discuss.python.org/t/a-fast-variant-of-pytype-getmodule/23377?u=erlendaasland).
For CPython internal usage, we've got `_PyModule_GetState`, which is a fast variant of `PyModule_GetState`: the module check in the latter is simply an assert in the former.
For `PyType_GetModuleState`, there are three `if`s (two of them implicitly in `PyType_GetModule`):
1. check that the given type is a heap type
2. check that the given type has an associated module
3. check that the result of `PyType_GetModule` is not `NULL`
For stdlib core extension modules, all of these conditions are always true (AFAIK). With a static inline variant, for example `_PyType_GetModuleState`, that inlines a fast variant of `PyType_GetModule` and `assert()`s all three conditions, we can speed up heap type methods that need to access module state.
<!-- gh-linked-prs -->
### Linked PRs
* gh-101477
* gh-102188
<!-- /gh-linked-prs -->
| ccd98a3146d66343499d04a44e038223a1a09e80 | d43c2652d4b1ca4d0afa468e58c4f50052f4bfa2 |
python/cpython | python__cpython-101470 | # Optimise `get_io_state()`
The `_io` extension module is required to bootstrap Python, so we can safely use internal C APIs. We should use static inlined `_PyModule_GetState` instead of the slower `PyModule_GetState`.
<!-- gh-linked-prs -->
### Linked PRs
* gh-101470
<!-- /gh-linked-prs -->
| f80db6cef075186f888a85258ccf2164bf148921 | 20c11f2e600e1c0bf42de4d6f2fb3ce5ccc2587c |
python/cpython | python__cpython-101468 | # Python launcher does not filter when only one runtime is installed
If the launcher only finds a single runtime, it skips all filtering, which means if you request a mismatched version you will still get the installed one.
e.g. a user (let's call them "Guido") installs Python 3.11. They then run `py -3.12` and instead of suggesting to install 3.12, it launches 3.11!
Unrelated, but discovered at the same time: `-3.1` will match `-3.11`, because it's a blind prefix match. The match should require a separator character.
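The separator requirement can be sketched like this. The real launcher is written in C, so the function and names below are purely illustrative of the matching rule being asked for:

```python
# Hypothetical sketch of separator-aware version matching for the launcher:
# a requested tag only matches an installed tag when the remainder does not
# continue the numeric version (e.g. '3.1' must not match '3.11').
def tag_matches(requested: str, installed: str) -> bool:
    if not installed.startswith(requested):
        return False
    rest = installed[len(requested):]
    return rest == "" or not rest[0].isdigit()

assert tag_matches("3.1", "3.1")
assert tag_matches("3.1", "3.1-64")   # separator: still a match
assert not tag_matches("3.1", "3.11") # blind prefix match rejected
assert tag_matches("3.11", "3.11")
```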
<!-- gh-linked-prs -->
### Linked PRs
* gh-101468
* gh-101504
<!-- /gh-linked-prs -->
| eda60916bc88f8af736790ffd52381e8bb83ae83 | b91b42d236c81bd7cbe402b322c82bfcd0d883a1 |
python/cpython | python__cpython-101455 | # Documentation of END_ASYNC_FOR is out of date and incorrect
[Documentation of END_ASYNC_FOR](https://docs.python.org/3.12/library/dis.html#opcode-END_ASYNC_FOR) is both out of date and incorrect:
> Terminates an [async for](https://docs.python.org/3.12/reference/compound_stmts.html#async-for) loop. Handles an exception raised when awaiting a next item. If STACK[-1] is [StopAsyncIteration](https://docs.python.org/3.12/library/exceptions.html#StopAsyncIteration) pop 3 values from the stack and restore the exception state using the second of them. Otherwise re-raise the exception using the value from the stack. An exception handler block is removed from the block stack.
<!-- gh-linked-prs -->
### Linked PRs
* gh-101455
* gh-101493
<!-- /gh-linked-prs -->
| 62251c3da06eb4662502295697f39730565b1717 | 2b3e02a705907d0db2ce5266f06ad88a6b6160db |
python/cpython | python__cpython-101661 | # OrderedDict dict/list representation
I know that for historical purposes, `OrderedDict.__repr__` displays key-value pairs in the format `[(key, value), ...]`, since the `{key: value, ...}` representation did not respect the ordering that `OrderedDict` needs. However, that is a thing of the past: it has been a while since dictionaries became ordered, so I was wondering if changing `OrderedDict.__repr__` to display `OrderedDict({key: value, ...})` instead would be fine. The main rationale is that this is syntactically cleaner to read; I have often found the other representation a lot clunkier. Other than potentially breaking things like doctests, I don't see much issue with this. The only potential ambiguity is that one might not be immediately sure that the dictionary is ordered, but the fact that it is wrapped in `OrderedDict(...)` would probably dispel any doubts.
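For comparison, a small snippet showing the representation in question. The exact output depends on the Python version (the change proposed here landed later), so only version-independent properties are asserted:

```python
from collections import OrderedDict

od = OrderedDict([("a", 1), ("b", 2)])

# Historically this prints OrderedDict([('a', 1), ('b', 2)]);
# the proposal is OrderedDict({'a': 1, 'b': 2}) instead.
r = repr(od)
assert r.startswith("OrderedDict(")

# Either form round-trips through eval back to an equal object,
# since the OrderedDict constructor accepts both pair lists and dicts.
assert eval(r, {"OrderedDict": OrderedDict}) == od
```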
<!-- gh-linked-prs -->
### Linked PRs
* gh-101661
<!-- /gh-linked-prs -->
| 790ff6bc6a56b4bd6e403aa43a984b99f7171dd7 | b2b85b5db9cfdb24f966b61757536a898abc3830 |
python/cpython | python__cpython-101439 | # extra comma (,) in the json snippet in Doc/howto/logging-cookbook.rst
# Documentation
There is an extra comma (,) in the json snippet in Doc/howto/logging-cookbook.rst as follows.
```diff
 Suppose you configure logging with the following JSON:

 .. code-block:: json

     ...
             "class": "logging.StreamHandler",
             "level": "INFO",
             "formatter": "simple",
-            "stream": "ext://sys.stdout",
+            "stream": "ext://sys.stdout"
         },
         "stderr": {
```
<!-- gh-linked-prs -->
### Linked PRs
* gh-101439
* gh-101463
* gh-101464
<!-- /gh-linked-prs -->
| 20c11f2e600e1c0bf42de4d6f2fb3ce5ccc2587c | df0068ce4827471cc2962631ee64f6f38e818ec4 |
python/cpython | python__cpython-114269 | # ElementTree.iterparse "leaks" file descriptor when not exhausted
The PR https://github.com/python/cpython/pull/31696 attempts to fix the "leak" of file descriptors when the iterator is not exhausted. That PR fixes the warning, but not the underlying issue that the files aren't closed until the next tracing garbage collection cycle.
Note that there isn't truly a leak of file descriptors. The file descriptors are eventually closed when the file object is finalized (at cyclic garbage collection). The point of the `ResourceWarning` (in my understanding) is that waiting until the next garbage collection cycle means that you may temporarily have a lot of unwanted open file descriptors, which could exhaust the global limit or prevent successful writes to those files on Windows.
```python
# run with ulimit -Sn 1000
import xml.etree.ElementTree as ET
import tempfile
import gc
gc.disable()
def run():
    with tempfile.NamedTemporaryFile("w") as f:
        f.write("<document />junk")
        for i in range(10000):
            it = ET.iterparse(f.name)
            del it

run()
```
On my system, after lowering the file descriptor limit to 1000 (via `ulimit -Sn 1000`) I get:
```
OSError: [Errno 24] Too many open files: '/tmp/tmpwwmd9gp6'
```
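One way to avoid relying on garbage collection in the meantime (an illustrative workaround, not the fix from the linked PRs) is to hand `iterparse` a file-like object you own, since it only opens, and therefore owns, a file descriptor when given a path:

```python
import io
import xml.etree.ElementTree as ET

# With a file-like object, the caller controls the source's lifetime,
# so nothing is left for the GC to finalize.
src = io.StringIO("<document><child/></document>")
events = [(event, elem.tag) for event, elem in ET.iterparse(src)]
assert events == [("end", "child"), ("end", "document")]
```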
<!-- gh-linked-prs -->
### Linked PRs
* gh-114269
* gh-114499
* gh-114500
<!-- /gh-linked-prs -->
| 66f95ea6a65deff547cab0d312b8c8c8a4cf8beb | b7688ef71eddcaf14f71b1c22ff2f164f34b2c74 |
python/cpython | python__cpython-101745 | # _tracemalloc__get_object_traceback doesn't handle objects with pre-headers correctly
In the main branch, the function `_tracemalloc__get_object_traceback` doesn't account for objects with preheaders:
https://github.com/python/cpython/blob/ea232716d3de1675478db3a302629ba43194c967/Modules/_tracemalloc.c#L1406-L1414
This means that when running with tracemalloc some allocation tracebacks are missing:
```python
import warnings
class MyClass:
    def __del__(self):
        warnings.warn("Uh oh", ResourceWarning, source=self)

def func():
    m = MyClass()

func()
```
```
python3 -Wd -X tracemalloc=2 example.py
```
cc @markshannon
<!-- gh-linked-prs -->
### Linked PRs
* gh-101745
<!-- /gh-linked-prs -->
| 5b946d371979a772120e6ee7d37f9b735769d433 | f1f3af7b8245e61a2e0abef03b2c6c5902ed7df8 |
python/cpython | python__cpython-101424 | # tarfile.errorlevel is 1, not 0
# Documentation
[TarFile](https://docs.python.org/3/library/tarfile.html#tarfile.TarFile)'s docs say `errorlevel=0`, but the actual default is 1.
It is that way in Python 2 and I assume it hasn't changed in py3, so for docs purposes, it's always been that way. No need for a “changed in *version*” note.
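The actual default is easy to confirm from the class attribute (checked against a modern CPython; note that `errorlevel` can still be overridden per `TarFile`):

```python
import tarfile

# The class-level default, inherited by tarfile.open() / TarFile.open().
assert tarfile.TarFile.errorlevel == 1
```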
<!-- gh-linked-prs -->
### Linked PRs
* gh-101424
* gh-101425
* gh-101426
* gh-101427
* gh-101428
* gh-101429
<!-- /gh-linked-prs -->
| ea232716d3de1675478db3a302629ba43194c967 | c1b1f51cd1632f0b77dacd43092fb44ed5e053a9 |
python/cpython | python__cpython-101411 | # Argument clinic: improve generated code for self converter type checks
# Feature or enhancement
Currently, typically for `__new__` and `__init__` methods, Argument Clinic will spell out the `self` type check as such:
```python
if kind == METHOD_NEW:
    type_check = ('({0} == {1} ||\n '
                  ' {0}->tp_init == {2}tp_init)'
                  ).format(self.name, type_object, prefix)
else:
    type_check = ('(Py_IS_TYPE({0}, {1}) ||\n '
                  ' Py_TYPE({0})->tp_new == {2}tp_new)'
                  ).format(self.name, type_object, prefix)
```
`prefix` is a slightly modified variant of `type_object`, depending on whether the latter is a pointer. This works well in most cases, but with module state and heap types, the generated code is not optimal. First, let's just quote the AC docs on how to declare a class:
> When you declare a class, you must also specify two aspects of its type in C: the type declaration you’d use for a pointer to an instance of this class, and a pointer to the [PyTypeObject](https://docs.python.org/3.10/c-api/type.html#c.PyTypeObject) for this class.
For heap types with module state, you'd typically do something like this:
```c
typedef struct {
    ...
} myclass_object;

typedef struct {
    PyTypeObject *myclass_type;
} module_state;

static inline module_state *
find_state_by_type(PyTypeObject *tp)
{
    PyObject *mod = PyType_GetModuleByDef(tp, &moduledef);  // Potentially slow!
    void *state = PyModule_GetState(mod);
    return (module_state *)state;
}

#define clinic_state() (find_state_by_type(type))
#include "clinic/mymod.c.h"
#undef clinic_state

/*[clinic input]
module mymod
class mymod.myclass "myclass_object *" "clinic_state()->myclass_type"
[clinic start generated code]*/
```
Currently, this generates clinic code like this:
```c
static PyObject *
mymod_myclass_new(PyTypeObject *type, PyObject *args, PyObject *kwargs)
{
    ...
    if ((type == clinic_state()->myclass_type ||
         type->tp_init == clinic_state()->myclass_type->tp_init) &&
```
... _potentially_ calling `PyType_GetModuleByDef` twice in the self type check.
# Pitch
Suggesting to modify clinic to store the self type pointer in a local variable, and use that variable in the self type check. Proof-of-concept diff from the `_sqlite` extension module clinic code
```diff
diff --git a/Modules/_sqlite/clinic/cursor.c.h b/Modules/_sqlite/clinic/cursor.c.h
index 36b8d0051a..633ad2e73d 100644
--- a/Modules/_sqlite/clinic/cursor.c.h
+++ b/Modules/_sqlite/clinic/cursor.c.h
@@ -16,10 +16,11 @@ static int
 pysqlite_cursor_init(PyObject *self, PyObject *args, PyObject *kwargs)
 {
     int return_value = -1;
+    PyTypeObject *self_tp = clinic_state()->CursorType;
     pysqlite_Connection *connection;
-    if ((Py_IS_TYPE(self, clinic_state()->CursorType) ||
-         Py_TYPE(self)->tp_new == clinic_state()->CursorType->tp_new) &&
+    if ((Py_IS_TYPE(self, self_tp) ||
+         Py_TYPE(self)->tp_new == self_tp->tp_new) &&
         !_PyArg_NoKeywords("Cursor", kwargs)) {
         goto exit;
     }
@@ -318,4 +319,4 @@ pysqlite_cursor_close(pysqlite_Cursor *self, PyObject *Py_UNUSED(ignored))
 {
     return pysqlite_cursor_close_impl(self);
 }
-/*[clinic end generated code: output=e53e75a32a9d92bd input=a9049054013a1b77]*/
+/*[clinic end generated code: output=e5eac0cbe29c88ad input=a9049054013a1b77]*/
```
# Previous discussion
https://github.com/python/cpython/pull/101302#discussion_r1089924859
<!-- gh-linked-prs -->
### Linked PRs
* gh-101411
<!-- /gh-linked-prs -->
| 2753cf2ed6eb329bdc34b8f67228801182b82160 | 0062f538d937de55cf3b66b4a8d527b1fe9d5182 |
python/cpython | python__cpython-101741 | # PyObject_GC_Resize doesn't check preheader size
The `PyObject_GC_Resize` implementation in the main branch doesn't check if the object has a preheader:
https://github.com/python/cpython/blob/0ef92d979311ba82d4c41b22ef38e12e1b08b13d/Modules/gcmodule.c#L2357-L2358
The only internal use I see is by `_PyTuple_Resize`, which doesn't have a preheader, but the [`PyObject_GC_Resize`](https://docs.python.org/3/c-api/gcsupport.html#c.PyObject_GC_Resize) function is publicly documented.
<!-- gh-linked-prs -->
### Linked PRs
* gh-101741
<!-- /gh-linked-prs -->
| 9c3442c0938127788fa59e4ceb80ae78b86fad1d | 0056701aa370553636b676cc99e327137d1688c6 |
python/cpython | python__cpython-101413 | # Negative line number in stacktrace exception
The following code snippet generates a stacktrace exception pointing to a invalid line number (-1):
```python
with object() as obj:
    break
```
Example:
```bash
$ printf 'with object() as obj:\n\tbreak\n' > main.py
$ python main.py
  File "/home/kartz/main.py", line -1
SyntaxError: 'break' outside loop
```
This is reproducible with Python 3.10, 3.11, and the latest [master](https://github.com/python/cpython/commit/666c0840dcac9941fa41ec619fef8d45cd849a0b). I could not reproduce it on Python versions older than 3.10.
<!-- gh-linked-prs -->
### Linked PRs
* gh-101413
* gh-101447
* gh-101448
<!-- /gh-linked-prs -->
| e867c1b753424d8d3f9c9ba0b431d007fd958c80 | 28db978d7f134edf6c86f21c42e15003511e7e9b |
python/cpython | python__cpython-101391 | # Possible small mistake in the documentation of importlib.util.LazyLoader.factory
The documentation of _LazyLoader_.**factory** in the module _importlib.util_ says:
> **classmethod** factory(loader)
>
> A **static method** which returns a callable that creates a lazy loader. This is meant to be used in situations where the loader is passed by class instead of by instance.
The problem is that the terms **classmethod** and **static method** are mixed up. The [documentation](https://docs.python.org/3/reference/datamodel.html#types) defines a _static method_ as <q>... Static method objects are created by the built-in staticmethod() constructor.</q>
The source code for the method is:
```python
@classmethod
def factory(cls, loader):
"""Construct a callable which returns the eager loader made lazy."""
cls.__check_eager_loader(loader)
return lambda *args, **kwargs: cls(loader(*args, **kwargs))
```
So, in my opinion, "a static method" in the documentation should be corrected to "a class method".
I've created the PR in case the issue is right.
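The claim is easy to verify by inspecting the descriptor on the class directly:

```python
import importlib.util
import inspect

# getattr_static bypasses descriptor protocol, exposing the raw classmethod
# object that wraps factory().
attr = inspect.getattr_static(importlib.util.LazyLoader, "factory")
assert isinstance(attr, classmethod)
assert not isinstance(attr, staticmethod)
```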
<!-- gh-linked-prs -->
### Linked PRs
* gh-101391
* gh-101813
* gh-101814
<!-- /gh-linked-prs -->
| 17143e2c30ae5e51945e04eeaec7ebb0e1f07fb5 | 61f2be08661949e2f6dfc94143436297e60d47de |
python/cpython | python__cpython-101387 | # typos found by codespell
# Documentation
Found the following typos by codespell (https://github.com/codespell-project/codespell):
```
./c-api/init_config.rst:842: varable ==> variable
./c-api/init_config.rst:1585: calculatin ==> calculate
./howto/logging-cookbook.rst:3631: detault ==> default
./howto/logging-cookbook.rst:3822: mutiple ==> multiple
./library/sqlite3.rst:2210: Droping ==> Dropping
./library/struct.rst:465: consective ==> consecutive
```
<!-- gh-linked-prs -->
### Linked PRs
* gh-101387
<!-- /gh-linked-prs -->
| db757f0e440d2b3fd3a04dd771907838e0c2241c | 052f53d65d9f65c7c3223a383857ad07a182c2d7 |
python/cpython | python__cpython-101418 | # Unit test test.test_socket.ThreadedVSOCKSocketStreamTest.testStream hangs on WSL2 ubuntu
# Bug report
unit test test.test_socket.ThreadedVSOCKSocketStreamTest.testStream hangs on WSL2 ubuntu.
I added some prints, and it seems vsock does not work on WSL even though `/dev/vsock` exists and `HAVE_SOCKET_VSOCK` is true.
```
$ ls /dev/vsock
/dev/vsock
```
Modified test_socket.py snippet:
```python
@unittest.skipIf(fcntl is None, "need fcntl")
@unittest.skipUnless(HAVE_SOCKET_VSOCK,
                     'VSOCK sockets required for this test.')
@unittest.skipUnless(get_cid() != 2,
                     "This test can only be run on a virtual guest.")
class ThreadedVSOCKSocketStreamTest(unittest.TestCase, ThreadableTest):

    def __init__(self, methodName='runTest'):
        print('__init__')
        unittest.TestCase.__init__(self, methodName=methodName)
        ThreadableTest.__init__(self)

    def setUp(self):
        print('setUp')
        self.serv = socket.socket(socket.AF_VSOCK, socket.SOCK_STREAM)
        self.addCleanup(self.serv.close)
        self.serv.bind((socket.VMADDR_CID_ANY, VSOCKPORT))
        self.serv.listen()
        self.serverExplicitReady()
        print('Before serv.accept')
        self.conn, self.connaddr = self.serv.accept()
        print('After serv.accept')
        self.addCleanup(self.conn.close)
        print('end - setUp')

    def clientSetUp(self):
        print('clientSetup')
        time.sleep(0.1)
        self.cli = socket.socket(socket.AF_VSOCK, socket.SOCK_STREAM)
        self.addCleanup(self.cli.close)
        cid = get_cid()
        print('cid: ', cid)
        print('before cli.connect')
        self.cli.connect((cid, VSOCKPORT))
        print('end - clientSetup')

    def testStream(self):
        print('testStream')
        msg = self.conn.recv(1024)
        self.assertEqual(msg, MSG)

    def _testStream(self):
        self.cli.send(MSG)
        self.cli.close()
```
Test outputs:
```
./python -m unittest -v test.test_socket.ThreadedVSOCKSocketStreamTest.testStream
__init__
testStream (test.test_socket.ThreadedVSOCKSocketStreamTest.testStream) ... setUp
Before serv.accept
clientSetup
cid: 4294967295
before cli.connect
^CTraceback (most recent call last):
  File "/home/peter/repo/cpython/Lib/runpy.py", line 198, in _run_module_as_main
    return _run_code(code, main_globals, None,
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/peter/repo/cpython/Lib/runpy.py", line 88, in _run_code
    exec(code, run_globals)
  File "/home/peter/repo/cpython/Lib/unittest/__main__.py", line 18, in <module>
    main(module=None)
  File "/home/peter/repo/cpython/Lib/unittest/main.py", line 102, in __init__
    self.runTests()
  ...
  File "/home/peter/repo/cpython/Lib/unittest/case.py", line 576, in _callSetUp
    self.setUp()
  File "/home/peter/repo/cpython/Lib/test/test_socket.py", line 376, in _setUp
    self.__setUp()
  File "/home/peter/repo/cpython/Lib/test/test_socket.py", line 520, in setUp
    self.conn, self.connaddr = self.serv.accept()
                               ^^^^^^^^^^^^^^^^^^
  File "/home/peter/repo/cpython/Lib/socket.py", line 295, in accept
    fd, addr = self._accept()
               ^^^^^^^^^^^^^^
KeyboardInterrupt
```
# Your environment
- CPython versions tested on:
both Python 3.12.0a4+ and Python 3.10.9+
- Operating system and architecture:
Windows 10 home 19044.2486 + WSL2 Ubuntu 22.04
```
uname -r
5.15.79.1-microsoft-standard-WSL2
>>> platform.release()
'5.15.79.1-microsoft-standard-WSL2'
cat /etc/lsb-release
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=22.04
DISTRIB_CODENAME=jammy
DISTRIB_DESCRIPTION="Ubuntu 22.04 LTS"
```
# Suggestions
- Make it work and pass on WSL2
- Skip the test on WSL2
- Add a timeout so it does not hang at least
<!-- gh-linked-prs -->
### Linked PRs
* gh-101418
* gh-101419
* gh-115585
* gh-115586
<!-- /gh-linked-prs -->
| 31f5550fbecef45378a0cabf76ead1b39250d9bc | 7740a0109679b3dc0b258b2121f0fbfa554bd1ea |
python/cpython | python__cpython-101378 | # Improve test_locale_calendar_formatweekday of calendar
It seems that `test_locale_calendar_formatweekday` of calendar misses some important inputs:
https://github.com/python/cpython/blob/main/Lib/test/test_calendar.py#L567-L578
Specifically, it does not cover formatting a day under different lengths, i.e. the common cases where 1 <= length <= 10, which change how the weekday is formatted.
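A sketch of what fuller coverage could look like, using the locale-independent `TextCalendar` (the actual test exercises `LocaleTextCalendar`, which additionally needs an available locale):

```python
import calendar

# Hedged sketch: every width from 1 to 10 should yield a string of exactly
# that width, for every weekday. Widths below the abbreviation length (3)
# and at/above the full-name threshold (9) take different code paths, so
# they all deserve coverage.
cal = calendar.TextCalendar()
for day in range(7):
    for width in range(1, 11):
        formatted = cal.formatweekday(day, width)
        assert len(formatted) == width, (day, width, formatted)
print(cal.formatweekday(0, 1), "|", cal.formatweekday(0, 10))
```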
<!-- gh-linked-prs -->
### Linked PRs
* gh-101378
* gh-102713
* gh-102714
<!-- /gh-linked-prs -->
| 5e0865f22eed9f3f3f0e912c4ada196effbd8ce0 | e94edab727d07bef851e0e1a345e18a453be863d |
python/cpython | python__cpython-101399 | # Bytecode interpreter generator should support composing instructions from parts.
I.e. [this](https://github.com/python/cpython/commit/5a9af9c85c0cc9105472b435ec63acc6d4385719) should work as expected.
```
inst(BINARY_OP_ADD_FLOAT) = _BINARY_OP_ADD_FLOAT_CHECK + _BINARY_OP_ADD_FLOAT_ACTION;
```
The suggested solution of [using macro](https://github.com/faster-cpython/ideas/issues/541#issuecomment-1405242983) doesn't work.
I get the error:
```
./Python/bytecodes.c:171: Unknown instruction 'BINARY_OP_ADD_FLOAT' referenced in family 'binary_op'
./Python/bytecodes.c:171: Family 'binary_op' has unknown members: {'BINARY_OP_ADD_FLOAT'}
```
<!-- gh-linked-prs -->
### Linked PRs
* gh-101399
<!-- /gh-linked-prs -->
| 7a3752338a2bfea023fcb119c842750fe799262f | e11fc032a75d067d2167a21037722a770b9dfb51 |
python/cpython | python__cpython-101363 | # pathlib.PureWindowsPath.match() mishandles path anchors
`pathlib.PurePath.match()` doesn't use `fnmatch()` to match path anchors, contrary to the documentation. Instead, it considers the drive and root of the pattern separately; if either (or both) is specified in the pattern, it must *exactly* match the corresponding drive/root in the path.
This results in the following:
```python
>>> from pathlib import PureWindowsPath as P
>>> P('c:/b.py').match('*:/*.py')
False
>>> P('c:/b.py').match('c:*.py')
True
>>> P('c:/b.py').match('/*.py')
True
>>> P('//some/share/a.py').match('//*/*/*.py')
False
>>> P('//some/share/a.py').match('/*.py')
True
```
All of these results are wrong IMO.
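For contrast, matching the anchor components with `fnmatch`-style rules, as the documentation implies, would give the opposite answers. For the drive in the first example:

```python
import fnmatch

# The drive 'c:' does match the pattern component '*:' under fnmatch rules,
# so '*:/*.py' arguably should match 'c:/b.py'.
assert fnmatch.fnmatchcase('c:', '*:')

# And an empty pattern component matches nothing under fnmatch, rather than
# acting as a free pass for any drive.
assert not fnmatch.fnmatchcase('c:', '')
```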
<!-- gh-linked-prs -->
### Linked PRs
* gh-101363
<!-- /gh-linked-prs -->
| d401b20630965c0e1d2a5a0d60d5fc51aa5a8d80 | 775f8819e319127f9bfb046773b74bcc62c68b6a |
python/cpython | python__cpython-101342 | # Remove unnecessary enum._power_of_two function
https://github.com/python/cpython/blob/dfad678d7024ab86d265d84ed45999e031a03691/Lib/enum.py#L1568-L1571
This function looks like it was defined by mistake.
<!-- gh-linked-prs -->
### Linked PRs
* gh-101342
<!-- /gh-linked-prs -->
| 8cef9c0f92720f6810be1c74e00f611acb4b8b1e | b5c4d6064cc2ae7042d2dc7b533d74f6c4176a38 |
python/cpython | python__cpython-112485 | # Add TCP keepalive option for `asyncio.start_server()`
By default, `asyncio.start_server()` creates a list of socket objects, usually for both IPv4 and IPv6 (if possible), and does not set the keepalive option. I can create a socket object myself, set the keepalive option on it, and pass it to `start_server()`, but then I will not be able to start a server that listens on both IPv4 and IPv6 addresses.
I hope we can add a keepalive parameter to `asyncio.start_server()`, or do something else to make this easier.
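Until then, one per-connection workaround (a sketch, not a proposed API) is to flip `SO_KEEPALIVE` on each accepted socket inside the client-connected callback:

```python
import asyncio
import socket

async def handle(reader, writer):
    # Workaround sketch: enable TCP keepalive per accepted connection,
    # since start_server() itself offers no option for the listeners.
    sock = writer.get_extra_info("socket")
    if sock is not None:
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
    writer.close()
    await writer.wait_closed()

async def main():
    server = await asyncio.start_server(handle, "127.0.0.1", 0)
    port = server.sockets[0].getsockname()[1]
    reader, writer = await asyncio.open_connection("127.0.0.1", port)
    await asyncio.sleep(0.1)  # let the handler run
    writer.close()
    server.close()
    await server.wait_closed()

asyncio.run(main())
```

This still does not cover keepalive on the listening sockets themselves, which is what the requested parameter would address.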
<!-- gh-linked-prs -->
### Linked PRs
* gh-112485
<!-- /gh-linked-prs -->
| 3aea6c4823e90172c9bc36cd20dc51b295d8a3c4 | a3a1cb48456c809f7b1ab6a6ffe83e8b3f69be0f |
python/cpython | python__cpython-101572 | # tarfile cannot handle high UIDs such as 734_380_696 (or at least the test fails)
Run `test_tarfile` on a POSIX system as a user with a high user ID such as 734380696, and `test_add_dir_getmember` will fail with:
```
ERROR: test_add_dir_getmember (test.test_tarfile.Bz2UstarReadTest.test_add_dir_getmember)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/.../cpython/Lib/test/test_tarfile.py", line 225, in test_add_dir_getmember
self.add_dir_and_getmember('bar')
File "/.../cpython/Lib/test/test_tarfile.py", line 234, in add_dir_and_getmember
tar.add(name)
File "/.../cpython/Lib/tarfile.py", line 2001, in add
self.addfile(tarinfo)
File "/.../cpython/Lib/tarfile.py", line 2020, in addfile
buf = tarinfo.tobuf(self.format, self.encoding, self.errors)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/.../cpython/Lib/tarfile.py", line 823, in tobuf
return self.create_ustar_header(info, encoding, errors)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/.../cpython/Lib/tarfile.py", line 842, in create_ustar_header
return self._create_header(info, USTAR_FORMAT, encoding, errors)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/.../cpython/Lib/tarfile.py", line 954, in _create_header
itn(info.get("uid", 0), 8, format),
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/.../cpython/Lib/tarfile.py", line 214, in itn
raise ValueError("overflow in number field")
ValueError: overflow in number field
```
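The underlying limit is the ustar numeric field: 8 bytes, i.e. 7 octal digits plus a terminator, so the largest representable ID is 0o7777777 == 2097151. A minimal reproducer that doesn't touch the filesystem:

```python
import tarfile

MAX_USTAR_ID = 0o7777777  # 7 octal digits fit in the 8-byte ustar field
assert 734_380_696 > MAX_USTAR_ID

info = tarfile.TarInfo("bar")
info.uid = 734_380_696
try:
    info.tobuf(tarfile.USTAR_FORMAT)
except ValueError as exc:
    print(exc)  # overflow in number field
```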
<!-- gh-linked-prs -->
### Linked PRs
* gh-101572
* gh-101583
* gh-101584
<!-- /gh-linked-prs -->
| ffcb8220d7a8c8ca169b467d9e4a752874f68af2 | f7e9fbacb250ad9df95d0024161b40dfdc9cc39e |
python/cpython | python__cpython-101327 | # Regression in Python 3.11 when explicitly passing None to FutureIter.throw
Reported and commit identified by @alicederyn in https://github.com/python/typeshed/issues/9586
Regression caused by https://github.com/python/cpython/pull/31973. Doesn't contain a news entry, so probably not intentional.
```
λ cat xxx.py
import asyncio
async def f():
await asyncio.sleep(0)
async def g():
await asyncio.gather(f(), f())
h = g()
h.send(None)
print(type(h))
h.throw(ValueError("some error"), None, None)
λ python3.10 xxx.py
/Users/shantanu/dev/cpython/xxx.py:5: DeprecationWarning: There is no current event loop
await asyncio.gather(f(), f())
<class 'coroutine'>
Traceback (most recent call last):
File "/Users/shantanu/dev/cpython/xxx.py", line 10, in <module>
h.throw(ValueError("some error"), None, None)
File "/Users/shantanu/dev/cpython/xxx.py", line 5, in g
await asyncio.gather(f(), f())
ValueError: some error
λ python3.11 xxx.py
/Users/shantanu/dev/cpython/xxx.py:5: DeprecationWarning: There is no current event loop
await asyncio.gather(f(), f())
<class 'coroutine'>
Traceback (most recent call last):
File "/Users/shantanu/dev/cpython/xxx.py", line 10, in <module>
h.throw(ValueError("some error"), None, None)
File "/Users/shantanu/dev/cpython/xxx.py", line 5, in g
await asyncio.gather(f(), f())
TypeError: throw() third argument must be a traceback
```
<!-- gh-linked-prs -->
### Linked PRs
* gh-101327
* gh-101328
<!-- /gh-linked-prs -->
| a178ba82bfe2f2fb6f6ff0e67cb734fd7c4321e3 | 952a1d9cc970508b280af475c0be1809692f0c76 |
python/cpython | python__cpython-101323 | # ZlibDecompressor not tested
The test class for `ZlibDecompressor` does not inherit from `unittest.TestCase`, so it never runs. An oversight.
<!-- gh-linked-prs -->
### Linked PRs
* gh-101323
<!-- /gh-linked-prs -->
| a89e6713c4de99d4be5a1304b134e57a24ab10ac | 144aaa74bbd77aee822ee92344744dbb05aa2f30 |
python/cpython | python__cpython-101335 | # Please add the `ssl_shutdown_timeout` parameter for `StreamWriter.start_tls()` in asyncio
It seems that the `StreamWriter.start_tls()` method should have an `ssl_shutdown_timeout` parameter like the other relevant methods.
<!-- gh-linked-prs -->
### Linked PRs
* gh-101335
<!-- /gh-linked-prs -->
| cc407b9de645ab7c137df8ea2409a005369169a5 | 75227fba1dd1683289d90ada7abba237bff55d14 |
python/cpython | python__cpython-101374 | # webbrowser CLI has no --help?
```
python -m webbrowser --help
option --help not recognized
Usage: /usr/lib64/python3.11/webbrowser.py [-n | -t] url
-n: open new window
-t: open new tab
```
I would expect any command to provide a `-h / --help` option. OK, the effect is about the same, but "option --help not recognized" is a bit surprising.
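One way to get `-h`/`--help` for free would be to move the CLI to `argparse` (a sketch, not the module's actual code):

```python
import argparse

parser = argparse.ArgumentParser(prog="webbrowser")
group = parser.add_mutually_exclusive_group()
group.add_argument("-n", "--new-window", action="store_true",
                   help="open new window")
group.add_argument("-t", "--new-tab", action="store_true",
                   help="open new tab")
parser.add_argument("url", help="URL to open")

args = parser.parse_args(["-t", "https://example.com"])
print(args.new_tab, args.url)  # True https://example.com
```

With this, `python -m webbrowser --help` would print usage and exit instead of erroring out.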
<!-- gh-linked-prs -->
### Linked PRs
* gh-101374
<!-- /gh-linked-prs -->
| 3d7eb66c963c0c86021738271483bef27c425b17 | 420bbb783b43216cc897dc8914851899db37a31d |
python/cpython | python__cpython-101286 | # Undocumented risky behaviour in subprocess module
# Bug report - Undocumented risky behaviour in subprocess module
When using `subprocess.Popen` with `shell=True` on Windows and without a `COMSPEC` environment variable, a `cmd.exe` is launched. The problem is that the full path to `cmd.exe` is not specified, so Windows will search for the executable in the current directory and in the `PATH`. If an arbitrary executable file is written to the current directory or to a directory in the `PATH`, it can be run instead of the real cmd.exe.
See the code [here](https://github.com/python/cpython/blob/38cc24f119346e2947e316478b58e58f0dde307c/Lib/subprocess.py#L1480) and a POC [here](https://github.com/mauricelambert/PythonSubprocessVulnerabilityPOC).
- This risky behaviour can be patched by replacing the `cmd.exe` string with `C:\WINDOWS\system32\cmd.exe`.
- If this behavior was chosen by the Python developers, it should be documented.
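A mitigation available to applications today is to pin `COMSPEC` to an absolute path before spawning. This is a hedged sketch; the env tweak only has an effect on Windows, and the command is portable so the snippet runs anywhere:

```python
import os
import subprocess
import sys

env = dict(os.environ)
if sys.platform == "win32":
    # Pin the shell to an absolute, trusted path so no CWD/PATH search occurs.
    env["COMSPEC"] = os.path.join(
        os.environ.get("SystemRoot", r"C:\Windows"), "System32", "cmd.exe")

result = subprocess.run("echo ok", shell=True, env=env,
                        capture_output=True, text=True)
print(result.stdout.strip())  # ok
```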
<!-- gh-linked-prs -->
### Linked PRs
* gh-101286
* gh-101708
* gh-101709
* gh-101710
* gh-101711
* gh-101712
* gh-101713
* gh-101719
* gh-101721
* gh-101728
<!-- /gh-linked-prs -->
| 23751ed826ee63fb486e874ec25934ea87dd8519 | de3669ebcb33ca8e3373fbbaed646c5f287979b8 |
python/cpython | python__cpython-101557 | # Enhance the BOLT build process
From @indygreg 's https://github.com/python/cpython/pull/101093
TODO
- [x] Optional `-fno-reorder-blocks-and-partition`
- [x] Drop `-gdwarf-4`
- [x] Run the BOLT instrumented binary with `$(RUNSHARED)`
- [x] Enable BOLT optimization of libpython
<!-- gh-linked-prs -->
### Linked PRs
* gh-101557
* gh-104493
* gh-104709
* gh-104751
* gh-104752
* gh-104821
* gh-104853
<!-- /gh-linked-prs -->
| 144aaa74bbd77aee822ee92344744dbb05aa2f30 | cef9de62b8bf5e2d11d5a074012dfa81dc4ea935 |
python/cpython | python__cpython-101280 | # Drop -gdwarf-4 flag from the BOLT build.
Sidenote: BOLT now supports DWARF5, so there's no need to force `-gdwarf-4`
https://github.com/python/cpython/blob/main/configure.ac#L1950-1952
_Originally posted by @aaupov in https://github.com/python/cpython/issues/101093#issuecomment-1386296142_
<!-- gh-linked-prs -->
### Linked PRs
* gh-101280
<!-- /gh-linked-prs -->
| a958e7d35af191c354a3e89a1236e830b1e46029 | 8c183cddd3d0031c6d397738f73c20ee6bf61ce8 |
python/cpython | python__cpython-101302 | # Isolate the itertools extension module
# Feature or enhancement
Following https://github.com/ericsnowcurrently/multi-core-python/wiki/0-The-Plan we need to convert the `itertools` extension module to use module state.
There are multiple static global type objects:
https://github.com/python/cpython/blob/e244401ce508ad391295beb636e499fcc6797a2a/Tools/c-analyzer/cpython/globals-to-fix.tsv#L338-L359
We need to convert these to heap types, add module state, and implement multi-phase init.
See also PEP-687.
I've got an old PR (#24065) that I'm planning on resurrecting. I'll re-submit it as multiple PRs; this is going to be a large change.
For this particular module, it could make sense to add a module state pointer to each type context, for easy and cheap state access.
<!-- gh-linked-prs -->
### Linked PRs
* gh-101302
* gh-101303
* gh-101304
* gh-101305
<!-- /gh-linked-prs -->
| 2b3e02a705907d0db2ce5266f06ad88a6b6160db | cc407b9de645ab7c137df8ea2409a005369169a5 |
python/cpython | python__cpython-101274 | # Expand on enum limitations
# Documentation
Encountered the python/mypy#14494 issue, and it was suggested that we cover it in the docs. This issue is an attempt to expand the documentation for this particular limitation.
<!-- gh-linked-prs -->
### Linked PRs
* gh-101274
<!-- /gh-linked-prs -->
| 037b3ee44e7de00b4653d73d4808c0f679a909a7 | 7109b287cf84cebdfa99b2b0a657d55f6e481be7 |
python/cpython | python__cpython-101394 | # `int` and `set` subclass `__sizeof__` under-reports the instance dictionary pointer
# Bug report
Most built-in types, when subclassed, will have a `__sizeof__` that is 8 (pointer size) larger than the parent object, due to having an instance dictionary.
```python
class UserFloat(float): pass
assert UserFloat().__sizeof__() - (0.0).__sizeof__() == 8
class UserTuple(tuple): pass
assert UserTuple().__sizeof__() - ().__sizeof__() == 8
class UserList(list): pass
assert UserList().__sizeof__() - [].__sizeof__() == 8
class UserDict(dict): pass
assert UserDict().__sizeof__() - {}.__sizeof__() == 8
```
This is, unexpectedly, not the case for `int` and `set`, which exactly match the original `__sizeof__`:
```python
class UserSet(set): pass
print(UserSet().__sizeof__(), set().__sizeof__()) # 200 200
class UserInt(int): pass
print(UserInt().__sizeof__(), (0).__sizeof__()) # 24 24
```
As a result, this makes `__dictoffset__` usage incorrect as well:
```python
from ctypes import py_object
class UserTuple(tuple): pass
x = UserTuple()
x.a = 5
print(x.__dict__)
>> {'a': 5}
py_object.from_address(id(x) + x.__sizeof__() + UserTuple.__dictoffset__)
>> py_object({'a': 5})
```
```python
from ctypes import py_object
class UserInt(int): pass
x = UserInt()
x.a = 5
print(x.__dict__)
>> {'a': 5}
py_object.from_address(id(x) + x.__sizeof__() + UserInt.__dictoffset__)
>> py_object(<NULL>)
```
# Your environment
- CPython versions tested on: 3.11, 3.12.0a3
<!-- gh-linked-prs -->
### Linked PRs
* gh-101394
* gh-101579
* gh-101638
* gh-101847
<!-- /gh-linked-prs -->
| 39017e04b55d4c110787551dc9a0cb753f27d700 | 9b60ee976a6b66fe96c2d39051612999c26561e5 |
python/cpython | python__cpython-101262 | # Add test for function with > 255 args
I made a change that broke functions with > 255 args, and the only test that failed was a namedtuple test (with a namedtuple with many fields). We also need a test for a plain function call (it makes it easier to see what broke).
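A sketch of such a test: generate a plain function with 300 parameters and a literal call site with 300 arguments, so the compiler has to emit the extended-arg call path:

```python
N = 300  # anything above 255 exercises the extended-arg path

params = ", ".join(f"a{i}" for i in range(N))
ns = {}
exec(f"def many({params}):\n    return a0 + a{N - 1}\n", ns)

# A literal call with 300 positional arguments (not *-unpacking, which
# compiles differently and would not cover the same code path):
call = "many(" + ", ".join(str(i) for i in range(N)) + ")"
assert eval(call, ns) == 0 + (N - 1)
print("ok")
```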
<!-- gh-linked-prs -->
### Linked PRs
* gh-101262
<!-- /gh-linked-prs -->
| bd7903967cd2a19ebc842dd1cce75f60a18aef02 | 7b20a0f55a16b3e2d274cc478e4d04bd8a836a9f |
python/cpython | python__cpython-101230 | # Aliases for imported names is not tested in `test_ast.py`
Such cases as `from bar import y as z` and `import bar as foo` are not tested.
I think they should be tested =)
<!-- gh-linked-prs -->
### Linked PRs
* gh-101230
* gh-101433
* gh-101434
<!-- /gh-linked-prs -->
| 28db978d7f134edf6c86f21c42e15003511e7e9b | 7a3752338a2bfea023fcb119c842750fe799262f |
python/cpython | python__cpython-101227 | # There is a typo in docstring for read method of _io.TextIOWrapper class
# Documentation
The docstring of the read method of the `_io.TextIOWrapper` class has a typo.
The argument name `size` is not used in the explanation, which refers to `n` instead.
```
>>> file = open("/tmp/junk")
>>> help(file.read)
read(size=-1, /) method of _io.TextIOWrapper instance
Read at most n characters from stream.
Read from underlying buffer until we have n characters or we hit EOF.
If n is negative or omitted, read until EOF.
```
<!-- gh-linked-prs -->
### Linked PRs
* gh-101227
<!-- /gh-linked-prs -->
| f1f3af7b8245e61a2e0abef03b2c6c5902ed7df8 | 272da55affe6f2b3b73ff5474e1246089517d051 |
python/cpython | python__cpython-113567 | # Bug in multiprocessing + Pipes on macOS
# Bug report
I believe I've found a bug in how the `multiprocessing` package passes the `Connection`s that `Pipe` creates down to the child worker process, but only on macOS.
The following minimal example demonstrates the problem:
```
def _mp_job(nth, child):
print("Nth is", nth)
if __name__ == "__main__":
from multiprocessing import Pool, Pipe, set_start_method, log_to_stderr
import logging, time
set_start_method("spawn")
logger = log_to_stderr()
logger.setLevel(logging.DEBUG)
with Pool(processes = 10) as mp_pool:
jobs = []
for i in range(20):
parent, child = Pipe()
# child = None
r = mp_pool.apply_async(_mp_job, args = (i, child))
jobs.append(r)
while jobs:
new_jobs = []
for job in jobs:
if not job.ready():
new_jobs.append(job)
jobs = new_jobs
print("%d jobs remaining" % len(jobs))
time.sleep(1)
```
On Linux, this script prints `Nth is 0`, etc., 20 times and exits. On macOS, it does the same if the line `child = None` is not commented out. If that line is commented out - i.e., if the child `Connection` is passed in the `args` of `apply_async()` - not all the jobs are done, and the script will frequently (if not always) loop forever, reporting some number of jobs remaining.
The logging shows approximately what's happening: the output will have a number of lines of this form:
```
[DEBUG/SpawnPoolWorker-10] worker got EOFError or OSError -- exiting
```
and the number of log records of that type is exactly the number of jobs reported remaining. This debug message is reported by the `worker()` function in `multiprocessing/pool.py`, as it dequeues a task:
```
try:
task = get()
except (EOFError, OSError):
util.debug('worker got EOFError or OSError -- exiting')
break
```
When I insert a traceback printout before the debug statement, I find that it's reporting `ConnectionRefusedError`, presumably as it attempts to unpickle the `Connection` object in the worker:
```
Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/multiprocessing/pool.py", line 112, in worker
task = get()
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/multiprocessing/queues.py", line 354, in get
return _ForkingPickler.loads(res)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/multiprocessing/connection.py", line 961, in rebuild_connection
fd = df.detach()
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/multiprocessing/resource_sharer.py", line 57, in detach
with _resource_sharer.get_connection(self._id) as conn:
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/multiprocessing/resource_sharer.py", line 87, in get_connection
c = Client(address, authkey=process.current_process().authkey)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/multiprocessing/connection.py", line 492, in Client
c = SocketClient(address)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/multiprocessing/connection.py", line 620, in SocketClient
s.connect(address)
ConnectionRefusedError: [Errno 61] Connection refused
```
The error is caught and the worker exits, but it's already dequeued the task, so the task never gets done.
Note that this *has* to be due to the `Connection` object being passed; if I uncomment `child = None`, the code works fine. Note that it also has nothing to do with anything passed through the `Pipe`, since the code passes nothing through the pipe. It also has nothing to do with the connection objects being garbage collected because there's no reference to them in the parent process; if I save them in a global list, I get the same error.
I don't understand how this could possibly happen; the `Pipe` is created with `socket.socketpair()`, and I was under the impression that sockets created that way don't require any other initialization to communicate. I do know that it's a race condition; if I insert a short sleep after I create the `Pipe`, say, .1 second, the code works fine. I've also observed that this is much more likely to happen with large numbers of workers; if the number of workers is 2, I almost never observe the problem.
# Your environment
Breaks:
- CPython versions tested on: 3.7.9, 3.11
- Operating system and architecture: macOS 12.6.2, Intel 6-core i7
- CPython versions tested on: 3.8.2
- Operating system and architecture: macOS 10.15.7, Intel quad core i7
Works:
- CPython versions tested on: 3.8.10
- Operating system and architecture: Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-113567
* gh-114018
* gh-114019
<!-- /gh-linked-prs -->
| c7d59bd8cfa053e77ae3446c82afff1fd38a4886 | 21f83efd106a19f1d26e049c06678a6729a721f0 |
python/cpython | python__cpython-101222 | # Possibly missing options in the documentation of timeit
The "Command-Line Interface" part of the documentation of _timeit_ says:
> python -m timeit [-n N] [-r N] [-u U] [-s S] [-h] [statement ...]
But the command can actually take two additional options, **-p** and **-v**, as the next part of the documentation describes.
So, for consistency, I suggest adding them to the usage part.
I've created the PR in case the issue is correct.
<!-- gh-linked-prs -->
### Linked PRs
* gh-101222
<!-- /gh-linked-prs -->
| 9b60ee976a6b66fe96c2d39051612999c26561e5 | 6e4a521c2ab84c082ad71e540b045699f0dbbc11 |
python/cpython | python__cpython-101324 | # Add optimized versions of isdir / isfile on Windows
I went down this rabbit hole when someone mentioned that `isfile`/`isdir`/`exists` all make a rather expensive `os.stat` call on Windows (which is actually a long wrapper around a number of system calls on Windows), rather than the simpler and more direct call to `GetFileAttributesW`.
I noticed that at one point there was a [version of `isdir`](https://github.com/python/cpython/commits/9c669ccc77c85eac245d460bab510a38b20d9a08) that does exactly this. At the time, this claimed a 2x speedup.
However, this C implementation of `isdir` was removed as part of a large set of changes in df2d4a6f3, and as a result, `isdir` got faster.
With the following benchmark:
<details>
<summary>isdir benchmark</summary>
```
import os.path
import timeit
for i in range(100):
os.makedirs(f"exists{i}", exist_ok=True)
def test_exists():
for i in range(100):
os.path.isdir(f"exists{i}")
def test_extinct():
for i in range(100):
os.path.isdir(f"extinct{i}")
print(timeit.timeit(test_exists, number=100))
print(timeit.timeit(test_extinct, number=100))
for i in range(100):
os.rmdir(f"exists{i}")
```
</details>
I get the following with df2d4a6f3:
```
exists: 0.18694799999957468
doesn't exist: 0.08418370000072173
```
and with the prior commit:
```
exists: 0.25393609999991895
doesn't exist: 0.08511730000009265
```
So, from this, I'd conclude that the idea of replacing calls to `os.stat` with calls to `GetFileAttributesW` would not bear fruit, but @zooba should probably confirm I'm benchmarking the right thing and making sense.
In any event, we should probably remove the [little vestige](https://github.com/python/cpython/blob/main/Lib/ntpath.py#LL854-L862C9) that imports this fast path that was removed:
```python
try:
# The genericpath.isdir implementation uses os.stat and checks the mode
# attribute to tell whether or not the path is a directory.
# This is overkill on Windows - just pass the path to GetFileAttributes
# and check the attribute from there.
from nt import _isdir as isdir
except ImportError:
# Use genericpath.isdir as imported above.
pass
```
<!-- gh-linked-prs -->
### Linked PRs
* gh-101324
<!-- /gh-linked-prs -->
| 86ebd5c3fa9ac0fba3b651f1d4abfca79614af5f | 3a88de7a0af00872d9d57e1d98bc2f035cb15a1c |
python/cpython | python__cpython-101188 | # New warning on GitHub: "unused variable ‘runtime’ in `pystate.c`"
All PRs right now show this warning:
<img width="919" alt="Screenshot 2023-01-20 at 10.55.11" src="https://user-images.githubusercontent.com/4660275/213645037-f66d6b42-eca2-4b56-953f-30b514bbc5d7.png">
However, it looks like a false-positive, because `runtime` is clearly used later on:
https://github.com/python/cpython/blob/3fa8fe7177bb941aa60ecaf14d1fa1750a26f674/Python/pystate.c#L1881-L1885
Looks like it has started after https://github.com/python/cpython/commit/6036c3e856f033bf13e929536e7bf127fdd921c9
cc @ericsnowcurrently
<!-- gh-linked-prs -->
### Linked PRs
* gh-101188
<!-- /gh-linked-prs -->
| 8be6992620db18bea31c7f75a33c7dcc3782e95a | 3847a6c64b96bb2cb93be394a590d4df2c35e876 |
python/cpython | python__cpython-101170 | # minor optimization in the implementation of except*
The except* bytecode was implemented before we switched to SWAP/COPY, and the move to them resulted in bytecode that contains one more instruction than necessary:
```
COPY 1
BUILD_LIST
SWAP 2
```
can be written as
```
BUILD_LIST
COPY 2
```
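The equivalence is easy to check with a toy stack model (hypothetical helper names; only the stack effects matter):

```python
def copy(stack, i):
    # COPY i: push a reference to the i-th item from the top
    stack.append(stack[-i])

def swap(stack, i):
    # SWAP i: exchange the top of stack with the i-th item from the top
    stack[-1], stack[-i] = stack[-i], stack[-1]

def build_list(stack):
    # BUILD_LIST 0: push a fresh empty list
    stack.append([])

before = ["exc"]
copy(before, 1); build_list(before); swap(before, 2)

after = ["exc"]
build_list(after); copy(after, 2)

assert before == after == ["exc", [], "exc"]
```

Both sequences leave the exception below a fresh list and a second reference to the exception on top, so the shorter one is a drop-in replacement.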
<!-- gh-linked-prs -->
### Linked PRs
* gh-101170
<!-- /gh-linked-prs -->
| 9ec9b203eaa219ed13ad9deb21740924bd22b10d | 6036c3e856f033bf13e929536e7bf127fdd921c9 |
python/cpython | python__cpython-101171 | # New deprecation warning in some tests due to a new decorator in test.support
DeprecationWarning: It is deprecated to return a value that is not None from a test case (<bound method _id of <test.test_dis.DisTests testMethod=test_binary_specialize>>)
Shows up in test_dis, test_compile and test_embed, for tests that use the new @requires_specialization which was added in https://github.com/python/cpython/pull/100713.
I don't know why this decorator does this while requires_limited_api doesn't.
<!-- gh-linked-prs -->
### Linked PRs
* gh-101171
<!-- /gh-linked-prs -->
| 0c5db2a60701a939288eb4c7704382631a598398 | 9ec9b203eaa219ed13ad9deb21740924bd22b10d |
python/cpython | python__cpython-103369 | # issubclass() is inconsistent with generic aliases
Instances of `types.GenericAlias` are accepted as the first argument of `issubclass()`:
```pycon
>>> issubclass(list[int], object)
True
>>> issubclass(list[int], type)
False
```
while instances of `typing.GenericAlias` are not:
```pycon
>>> issubclass(typing.List[int], object)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: issubclass() arg 1 must be a class
>>> issubclass(typing.List[int], type)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: issubclass() arg 1 must be a class
```
Although both are rejected if the second argument is an abstract class:
```pycon
>>> issubclass(list[int], collections.abc.Sequence)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/serhiy/py/cpython/Lib/abc.py", line 123, in __subclasscheck__
return _abc_subclasscheck(cls, subclass)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: issubclass() arg 1 must be a class
>>> issubclass(typing.List[int], collections.abc.Sequence)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/serhiy/py/cpython/Lib/abc.py", line 123, in __subclasscheck__
return _abc_subclasscheck(cls, subclass)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: issubclass() arg 1 must be a class
```
Usually `issubclass(x, y)` is preceded by the check `isinstance(x, type)`, so the final result will always be false since 3.11, but if that check is omitted, you can see a difference.
<!-- gh-linked-prs -->
### Linked PRs
* gh-103369
<!-- /gh-linked-prs -->
| d93b4ac2ff7bce07fb1c8805f43838818598191c | 666b68e8f252e3c6238d6eed1fc82937a774316f |
python/cpython | python__cpython-101193 | # PEP 699: Implementation
We can either follow what was outlined here https://github.com/python/peps/pull/2927.
Or https://discuss.python.org/t/pep-699-remove-private-dict-version-field-added-in-pep-509/19724/14
<!-- gh-linked-prs -->
### Linked PRs
* gh-101193
<!-- /gh-linked-prs -->
| 7f95ec3e7405ea5f44adc3584e297a3191118f32 | e244401ce508ad391295beb636e499fcc6797a2a |
python/cpython | python__cpython-101145 | # `zipfile.Path.read_text` & `.open` methods with a positional `encoding` arg causes a TypeError
This is a regression from 3.9 behavior seen in Python 3.10.
zipfile.Path.read_text passes *args and **kwargs to its own `open()` method, but in 3.10+ it also blindly sets `kwargs["encoding"]` to a value. So code that was previously passing an encoding as a positional argument now sees:
```
File "/<stdlib>/zipfile.py", line 2362, in read_text
with self.open('r', *args, **kwargs) as strm:
File "/<stdlib>/zipfile.py", line 2350, in open
return io.TextIOWrapper(stream, *args, **kwargs)
TypeError: Failed to construct dataset imagenet2012: argument for TextIOWrapper() given by name ('encoding') and position (2)
```
3.10's Lib/zipfile.py (and main's Lib/zipfile/_path.py) contain:
```
def read_text(self, *args, **kwargs):
kwargs["encoding"] = io.text_encoding(kwargs.get("encoding")) # <-- new in 3.10, source of bug
with self.open('r', *args, **kwargs) as strm:
return strm.read()
```
As this is a regression, we should avoid injecting `encoding` into `kwargs` when the caller already supplied one (positionally or by keyword), to match 3.9 and earlier behavior.
TODO list:
- [x] @jaraco will port the change to [zipp](/jaraco/zipp).
<!-- gh-linked-prs -->
### Linked PRs
* gh-101145
* gh-101179
* gh-101182
<!-- /gh-linked-prs -->
| 5927013e47a8c63b70e104152351f3447baa819c | 9e025d305f159aebf01775ad1dc2817679f01aa9 |
python/cpython | python__cpython-101197 | # asyncio.base_events.BaseEventLoop._add_callback references TimerHandle but never uses them
# Bug report
After 592ada9b4b08ad57037e365b9c462d71c96e4453 it appears this function no longer cares about `self._scheduled` or `TimerHandle`, but the docstring seems to imply otherwise.
# Your environment
- CPython versions tested on: 3.12.0a4
- Operating system and architecture: MacOS
I only noticed because it was unexpectedly more expensive

I think its dead code and can be simplified to:
```diff
diff --git a/Lib/asyncio/base_events.py b/Lib/asyncio/base_events.py
index cbabb43ae0..070ea1044d 100644
--- a/Lib/asyncio/base_events.py
+++ b/Lib/asyncio/base_events.py
@@ -1857,11 +1857,9 @@ def call_exception_handler(self, context):
exc_info=True)
def _add_callback(self, handle):
- """Add a Handle to _scheduled (TimerHandle) or _ready."""
+ """Add a Handle to _ready."""
assert isinstance(handle, events.Handle), 'A Handle is required here'
- if handle._cancelled:
- return
- assert not isinstance(handle, events.TimerHandle)
+ if not handle._cancelled:
self._ready.append(handle)
def _add_callback_signalsafe(self, handle):
```
<!-- gh-linked-prs -->
### Linked PRs
* gh-101197
* gh-101216
* gh-101217
<!-- /gh-linked-prs -->
| 9e947675ae3dc32f5863e5ed3022301cf7fd79b4 | c22a55c8b4f142ff679880ec954691d5920b7845 |
python/cpython | python__cpython-118593 | # Add .rst to mimetypes
``` python
>>> import mimetypes
>>> mimetypes.guess_type("README.rst", strict=False)
(None, None)
```
In the `python:3.11` Docker image, we get [a vanity type](https://www.iana.org/assignments/media-types/text/prs.fallenstein.rst):
``` python
>>> mimetypes.guess_type("README.rst", strict=False)
('text/prs.fallenstein.rst', None)
```
I think most people would expect [`text/x-rst`](https://docutils.sourceforge.io/FAQ.html#what-s-the-official-mime-type-for-restructuredtext-data). Even better would be to get it registered!
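Until the stdlib default changes, a per-process workaround is to register the mapping yourself with `mimetypes.add_type` (a real API; `text/x-rst` is the de-facto type from the docutils FAQ, not an IANA registration):

```python
import mimetypes

# Register the de-facto reStructuredText type for this process only.
mimetypes.add_type("text/x-rst", ".rst")

assert mimetypes.guess_type("README.rst", strict=False) == ("text/x-rst", None)
```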
<!-- gh-linked-prs -->
### Linked PRs
* gh-118593
* gh-118599
<!-- /gh-linked-prs -->
| 1511bc95c4bc95bd35599dc9c88111c9aac44c0d | b6f0ab5b1cb6d779efe4867d83a60e8d66c48dee |
python/cpython | python__cpython-101138 | # Windows py launcher: older 32-bit versions not supported in some cases
# Bug report
On 64-bit Windows, if both Python 2.7 32-bit and Python 2.7 64-bit are installed, these two installs are treated as equivalent and only the 64-bit version is considered, both when running `py -2-32` (which fails) and `py -0` (which only displays the 64-bit version).
This is also the case for Python 3.4 and lower, as the "-32" suffix is only part of the tag in the registry in 3.5+.
Under `C:\`:

`py -0p` output:

While these versions are no longer officially supported since 2020, [PEP-514](https://peps.python.org/pep-0514/) does have a section on backwards compatibility, mentioning:
> To ensure backwards compatibility, applications should treat environments listed under the following two registry keys as distinct, even when the Tag matches:
> ```
> HKEY_LOCAL_MACHINE\Software\Python\PythonCore\<Tag>
> HKEY_LOCAL_MACHINE\Software\Wow6432Node\Python\PythonCore\<Tag>
> ```
# Your environment
- CPython versions tested on: 3.11.1
- Operating system and architecture: Windows 10 x64
<!-- gh-linked-prs -->
### Linked PRs
* gh-101138
* gh-101290
<!-- /gh-linked-prs -->
| daec3a463c747c852d7ee91e82770fb1763d7d31 | fee7a995a18589001a432cea365fd04bf8efb7c1 |
python/cpython | python__cpython-101124 | # invalid signature for math.hypot
```python
>>> import math
>>> import inspect
>>> inspect.signature(math.hypot)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/sk/src/cpython/Lib/inspect.py", line 3295, in signature
return Signature.from_callable(obj, follow_wrapped=follow_wrapped,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/sk/src/cpython/Lib/inspect.py", line 3039, in from_callable
return _signature_from_callable(obj, sigcls=cls,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/sk/src/cpython/Lib/inspect.py", line 2531, in _signature_from_callable
return _signature_from_builtin(sigcls, obj,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/sk/src/cpython/Lib/inspect.py", line 2330, in _signature_from_builtin
raise ValueError("no signature found for builtin {!r}".format(func))
ValueError: no signature found for builtin <built-in function hypot>
```
This patch works:
```diff
diff --git a/Modules/mathmodule.c b/Modules/mathmodule.c
index 1342162fa7..0e610eb9cb 100644
--- a/Modules/mathmodule.c
+++ b/Modules/mathmodule.c
@@ -2805,7 +2805,9 @@ math_hypot(PyObject *self, PyObject *const *args, Py_ssize_t nargs)
#undef NUM_STACK_ELEMS
PyDoc_STRVAR(math_hypot_doc,
- "hypot(*coordinates) -> value\n\n\
+ "hypot($module, *coordinates)\n\
+--\n\
+\n\
Multidimensional Euclidean distance from the origin to a point.\n\
\n\
Roughly equivalent to:\n\
```
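For context, the `--` line added by the patch is the marker CPython's docstring convention uses to expose a `__text_signature__` for builtins; `inspect` only succeeds when it is present. A quick illustration with a builtin that already follows the convention:

```python
import inspect

# Builtins whose docstring starts with "name(args)\n--\n\n..." expose
# __text_signature__, which inspect parses; math.hypot lacked the marker,
# hence the ValueError above.
assert len.__text_signature__  # "($module, obj, /)"
assert str(inspect.signature(len)) == "(obj, /)"
```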
<!-- gh-linked-prs -->
### Linked PRs
* gh-101124
* gh-126235
<!-- /gh-linked-prs -->
| 9f904398179bcbab4bf21b2500aa14aec01fbdb3 | b562bc672bff106f1cbf2aad7770b654bfe374ec |
python/cpython | python__cpython-104287 | # SQLite rowcount is corrupted when combining UPDATE RETURNING w/ id IN (?)
# Bug report
The rowcount of an SQLite `UPDATE ... RETURNING` query will be 0 when the `WHERE` clause uses `id IN (?)` rather than `id == ?`.
The reproduction case from #93421 runs successfully. The modified reproduction case below errors with:
```
Traceback (most recent call last):
File "C:\Data\R\git\electrumsv\x.py", line 41, in <module>
go()
File "C:\Data\R\git\electrumsv\x.py", line 35, in go
assert rowcount == 1, (rowcount, f"was updated: {updated_value=='v2'}")
AssertionError: (0, 'was updated: True')
```
Reproduction case.
```python
import os
import sqlite3
def go():
# create table
cursor = conn.cursor()
cursor.execute(
"""CREATE TABLE some_table (
id INTEGER NOT NULL,
flags INTEGER NOT NULL,
value VARCHAR(40) NOT NULL,
PRIMARY KEY (id)
)
"""
)
cursor.close()
conn.commit()
# run operation
cursor = conn.cursor()
cursor.execute(
"INSERT INTO some_table (id, flags, value) VALUES (1, 4, 'v1')"
)
ident = 1
cursor.execute(
"UPDATE some_table SET value='v2' "
"WHERE id IN (?) RETURNING id",
(ident,),
)
rowcount = cursor.rowcount
updated_value = cursor.execute("SELECT value FROM some_table WHERE id=1").fetchone()[0]
assert rowcount == 1, (rowcount, f"was updated: {updated_value=='v2'}")
if os.path.exists("file.db"):
os.unlink("file.db")
conn = sqlite3.connect("file.db")
go()
# never reaches here
```
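Until the rowcount bug is fixed, a workaround is to count the rows that `RETURNING` actually yields instead of trusting `cursor.rowcount` (sketch using an in-memory database; requires SQLite 3.35+ for `RETURNING`):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE some_table (id INTEGER PRIMARY KEY, value TEXT NOT NULL)")
conn.execute("INSERT INTO some_table (id, value) VALUES (1, 'v1')")

cur = conn.execute(
    "UPDATE some_table SET value='v2' WHERE id IN (?) RETURNING id", (1,)
)
returned = cur.fetchall()  # count these rows instead of reading cur.rowcount
assert len(returned) == 1
assert conn.execute("SELECT value FROM some_table WHERE id=1").fetchone()[0] == "v2"
```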
# Your environment
This is the 3.10.9 windows install from python.org on my computer. My coworker installed the version on his Windows computer and could reproduce it as well.
```
Python 3.10.9 (tags/v3.10.9:1dd9be6, Dec 6 2022, 20:01:21) [MSC v.1934 64 bit (AMD64)] on win32
```
My coworker has also had it happen on MacOS with 3.10.8
```
3.10.8 (main, Oct 13 2022, 10:17:43) [Clang 14.0.0 (clang-1400.0.29.102)]
```
3.2 GHz 8-Core Intel Xeon W
<!-- gh-linked-prs -->
### Linked PRs
* gh-104287
* gh-104381
<!-- /gh-linked-prs -->
| 7470321f8171dce96a604ba2c24176837f999ca0 | 4abfe6a14b5be5decbaa3142d9e2549cf2d86c34 |
python/cpython | python__cpython-101132 | # `Path.rglob` -> documentation does not specify what `pattern` is
# Documentation
Today I came across `Path.rglob` in our codebase.
I went to https://docs.python.org/3/library/pathlib.html#pathlib.Path.rglob and tried to find out what `pattern` actually is. While the current documentation links to `Path.glob()`, I would like to add a note similar to the one you can find at https://docs.python.org/3/library/pathlib.html#pathlib.Path.glob, i.e.
> Patterns are the same as for [fnmatch](https://docs.python.org/3/library/fnmatch.html#module-fnmatch)
Would a PR be welcome to add this kind of information?
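To make the fnmatch-style semantics concrete, a short example (hypothetical file names in a temporary directory):

```python
import tempfile
from pathlib import Path

with tempfile.TemporaryDirectory() as tmp:
    root = Path(tmp)
    (root / "pkg").mkdir()
    (root / "pkg" / "mod.py").touch()
    (root / "README.rst").touch()

    # rglob("*.py") is glob("**/*.py"): an fnmatch-style pattern, recursive.
    hits = [p.name for p in root.rglob("*.py")]
    assert hits == ["mod.py"]
```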
<!-- gh-linked-prs -->
### Linked PRs
* gh-101132
* gh-101223
* gh-114030
<!-- /gh-linked-prs -->
| 61f338a005aa9f36b2a0a8d6924857e703bb6140 | f1d0711dd33bdd6485082116ddaca004c225b62f |
python/cpython | python__cpython-101121 | # `optparse` -> information about `store_const` is a bit vague
# Documentation
For context - new `optparse` user here.
After seeing `store_const` in our code base, I had a hard time figuring out the difference between `store` and `store_const`.
The [Other actions](https://docs.python.org/3/library/optparse.html#other-actions) section just mentions:
> store const -> store a constant value
I would like to add a link to https://docs.python.org/3/library/optparse.html#optparse.Option.const and mention that the constant value is not provided by the user, but needs to be set beforehand.
Would a pull request with these changes be accepted?
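A minimal example of the distinction (hypothetical option names): with `store` the value comes from the command line, while with `store_const` the value is the `const` set when the option is declared, and the flag takes no argument:

```python
from optparse import OptionParser

parser = OptionParser()
# The user supplies the value on the command line:
parser.add_option("--level", action="store", type="int", dest="level")
# The value is fixed beforehand via const=; the flag takes no argument:
parser.add_option("--debug", action="store_const", const=3, dest="level")

opts, _ = parser.parse_args(["--debug"])
assert opts.level == 3
opts, _ = parser.parse_args(["--level", "1"])
assert opts.level == 1
```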
<!-- gh-linked-prs -->
### Linked PRs
* gh-101121
* gh-101203
* gh-101204
<!-- /gh-linked-prs -->
| 9a155138c58cad409e28e34359ba87ec0025b6b7 | 61f338a005aa9f36b2a0a8d6924857e703bb6140 |
python/cpython | python__cpython-101119 | # Typo in python random.random documentation
There is a typo in documentation [here](https://docs.python.org/3/library/random.html#random.random) regarding random.random()
`the next random floating point number in the range [0.0, 1.0).`
It can be corrected by replacing ")" with "]"
<!-- gh-linked-prs -->
### Linked PRs
* gh-101119
* gh-101246
<!-- /gh-linked-prs -->
| 8bcd4a6ec7f951d952c174c6a8d937cc62444738 | 3e09f3152e518cdc8779b52943b86812114ce071 |
python/cpython | python__cpython-101127 | # 3.11 regression: default arguments in PyEval_EvalCodeEx
# Bug report
3.11 started using `_PyFunction_FromConstructor` inside `PyEval_EvalCodeEx`. #92173 reported that this caused issues when closures were involved, because `_PyFunction_FromConstructor` discarded the closure found in `PyFrameConstructor`. This was fixed in #92175.
It turns out the same holds true for default arguments and default kw arguments as can be seen at https://github.com/python/cpython/blob/main/Objects/funcobject.c#L90-L91 and is a regression compared to 3.10.
The fix appears fairly simple but I would love to have some guidance as to where to add a test for it.
# Your environment
- CPython versions tested on: 3.11.0
- Operating system and architecture: Windows x86-64
<!-- gh-linked-prs -->
### Linked PRs
* gh-101127
* gh-101636
<!-- /gh-linked-prs -->
| 79903240480429a6e545177416a7b782b0e5b9bd | ae62bddaf81176a7e55f95f18d55621c9c46c23d |
python/cpython | python__cpython-101061 | # Configure should conditionally add `-fno-reorder-blocks-and-partition`
# Bug report
`--enable-bolt` currently unconditionally adds `-fno-reorder-blocks-and-partition`. This flag isn't implemented on Clang and breaks the build in this configuration.
PR forthcoming.
<!-- gh-linked-prs -->
### Linked PRs
* gh-101061
<!-- /gh-linked-prs -->
| 7589d713a14a1efbb0cbb7caa9fdbad2b081cf18 | 7f95ec3e7405ea5f44adc3584e297a3191118f32 |
python/cpython | python__cpython-101057 | # Potential memory leak in Objects/bytesobject.c
# Bug report
https://github.com/python/cpython/blob/206f05a46b426eb374f724f8e7cd42f2f9643bb8/Objects/bytesobject.c#L429-L443
At line 438, object `p` won't be freed if `_PyBytesWriter_Prepare` fails (returns NULL) and thus become a memory leak.
<!-- gh-linked-prs -->
### Linked PRs
* gh-101057
* gh-101076
* gh-101077
<!-- /gh-linked-prs -->
| b1a74a182d8762bda51838401ac92b6ebad9632a | b82049993f74185da71adf2eb8d6c8f15db063e1 |
python/cpython | python__cpython-101048 | # Remove LIBTOOL_CRUFT from configure
# Feature or enhancement
Per summary.
<!-- gh-linked-prs -->
### Linked PRs
* gh-101048
<!-- /gh-linked-prs -->
| 79af40a40384ff57238185017970c3f60e351ee1 | 4db64529aea775aa23b24f35d08611f427ec2f6f |
python/cpython | python__cpython-101042 | # `utcfromtimetuple` is mentioned in documentation, which isn't a thing. The 'from' part is erroneous
# Documentation
In datetime module documentation [here](https://docs.python.org/3/library/datetime.html#datetime.datetime.utctimetuple)
in "Warning" block instead of `utctimetuple` there's `utcfromtimetuple`.
<!-- gh-linked-prs -->
### Linked PRs
* gh-101042
* gh-101175
* gh-101176
<!-- /gh-linked-prs -->
| 8e9d08b062bbabfe439bc73f82e3d7bb3800189e | 5e9df888dd5ab0e59d1cebc30c998a17aa65a3e2 |
python/cpython | python__cpython-101038 | # long_subtype_new underallocates for zero
When creating `int` instances, we [always allocate](https://github.com/python/cpython/blob/ef633e5000222a3dba74473c49d6a81fca0a44ec/Objects/longobject.c#L157) space for and initialise at least one digit, since the `medium_value` function that's used for fast paths [assumes the existence](https://github.com/python/cpython/blob/ef633e5000222a3dba74473c49d6a81fca0a44ec/Objects/longobject.c#L33) of that digit. This change was introduced for Python 3.11 in PR #27832.
However, the corresponding change in `long_subtype_new` was missed; this results in us asking for zero digits in the case of creating an instance of a subclass of `int` with value zero - e.g.,
```python
class MyInt(int):
pass
myint = MyInt() # problem
```
That's then a problem if `myint` is used in a context where `medium_value` might be called on it, for example as in `int(myint)`.
We've got away with this so far because of a compensating _overallocation_ issue: see #81381; this bug blocked an attempt to address #81381 in #100855. But I believe this should still be fixed.
I'll make a PR shortly.
<!-- gh-linked-prs -->
### Linked PRs
* gh-101038
* gh-101219
<!-- /gh-linked-prs -->
| 401fdf9c851eb61229250ebffa942adca99b36d1 | 9e947675ae3dc32f5863e5ed3022301cf7fd79b4 |
python/cpython | python__cpython-101024 | # The docs for email.mime types incorrectly identify bytes as strings
# Documentation
For example:
> _imagedata is a string containing the raw image data. ([image.py](https://github.com/python/cpython/blob/main/Lib/email/mime/image.py#L20))
or
> _audiodata is a string containing the raw audio data. ([audio.py](https://github.com/python/cpython/blob/main/Lib/email/mime/audio.py#L21))
So the programmer thinks, "how can it be a 'string' _and_ 'raw' at the same time?" Then the programmer notices the parameter for encoding and the language saying "the default encoding is Base64" and thinks, "ah, ok, that's how it could be a string!" (though it's still a little bit squirrelly to use the word "raw" to describe encoded data).
But then the reader sees the future tense in "... which will perform the actual encoding..." and thinks, "no, the argument must really be for binary data."
And then the programmer notices the parallel language in [text.py](https://github.com/python/cpython/blob/main/Lib/email/mime/text.py#L20):
> _text is the string for this message object.
and starts to wonder: "huh, maybe _all_ of these are expecting bytes." But then the programmer's code blows up when it tries to go with that theory (still no help from the documentation).
So in desperation, the programmer starts to dig into the actual code to find out what's done for each of these subclasses. And that journey leads to the conclusion that "string" in this package means one thing for some of the classes and something different for others.
Can't we just say "bytes" when we mean bytes, and reserve the use of the word "string" to mean what it's supposed to mean in the only versions of Python which are actually officially supported? 😆
<!-- gh-linked-prs -->
### Linked PRs
* gh-101024
* gh-101043
* gh-101052
<!-- /gh-linked-prs -->
| 49cae39ef020eaf242607bb2d2d193760b9855a6 | ef633e5000222a3dba74473c49d6a81fca0a44ec |
python/cpython | python__cpython-101031 | # [typing: PEP 646]: `*tuple[int, int]` is improperly evaluated by `get_type_hints`
# Bug report
Unpack information of `typing.GenericAlias` is not transferred from string annotations to interpreted annotations. `typing.Unpacked` works as expected.
A fix could be to add `if getattr(t, "__unpacked__", False): return next(iter(GenericAlias(t.__origin__, ev_args)))` clause here:
https://github.com/python/cpython/blob/6492492ce7844bf1193ff748be8c09e301b12cee/Lib/typing.py#L373-L374
NOTE: This bug is in typing's internal API and I don't think there are issues in the public API (but I haven't looked hard). However, this issue pops up quickly if you try to do anything with PEP 646 typing at runtime.
## Minimal example
```py
from typing import ForwardRef
from typing import TypeVarTuple
Ts = TypeVarTuple("Ts")
typ = ForwardRef("*Ts")._evaluate({}, {"Ts": Ts}, frozenset())
assert typ == next(iter(Ts)) # <-- PASSES AS EXPECTED
typ = ForwardRef("*tuple[int]")._evaluate({}, {}, frozenset())
assert typ == tuple[int] # <-- PASSES BUT SHOULDN'T
assert typ == next(iter(tuple[int])) # <-- SHOULD PASS BUT DOESN'T
```
# Your environment
Python 3.11
<!-- gh-linked-prs -->
### Linked PRs
* gh-101031
* gh-101254
<!-- /gh-linked-prs -->
| 807d6b576fa37f3ab7eb951297cb365c0c198595 | d717be04dc7876696cb21ce7901bda0214c4b2e0 |
python/cpython | python__cpython-101007 | # Improve error handling when read marshal data
There are several issues in error handling in the marshal module:
1. EOFError can override other errors such as MemoryError or OSError at the start of the object.
2. When the NULL object occurs as a code object component, the error message can be misleading (like "NULL object in marshal data for list") or even ignored (if the code object is a dict key) and reading will continue.
Also, it is a common idiom to only call PyErr_Occurred() if the return code denotes a possible error. This is not always used in that code. While the performance gain may be tiny, it is better to get rid of the overhead; it can pay for itself after further optimizations.
<!-- gh-linked-prs -->
### Linked PRs
* gh-101007
* gh-106226
* gh-106227
<!-- /gh-linked-prs -->
| 8bf6904b229583033035d91a3800da5604dcaad4 | 3fb7c608e5764559a718ce8cb81350d7a3df0356 |
python/cpython | python__cpython-101002 | # Add os.path.splitroot() function
# Feature or enhancement
Add a function that splits a path into a `(drive, root, tail)` triad:
1. The _drive_ part has the same meaning as in `splitdrive()`
2. The _root_ part is one of: the empty string, a forward slash, a backward slash (Windows only), or two forward slashes (POSIX only)
3. The _tail_ part is everything following the root.
Similarly to `splitdrive()`, a `splitroot()` function would ensure that `drive + root + tail` is the same as the input path.
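A POSIX-only sketch of the proposed behavior (an illustration of the rules above, not the eventual stdlib implementation): exactly two leading slashes are preserved as an implementation-defined root, while one or three-plus collapse to `'/'`:

```python
def splitroot_posix(p):
    # Sketch only: POSIX says exactly two leading slashes may be special.
    if not p.startswith("/"):
        return "", "", p
    if p.startswith("//") and not p.startswith("///"):
        return "", "//", p[2:]
    return "", "/", p[1:]

for path in ("foo/bar", "/foo/bar", "//foo/bar", "///foo/bar"):
    drive, root, tail = splitroot_posix(path)
    assert drive + root + tail == path  # the same invariant as splitdrive()
```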
# Pitch
The extra level of detail reflects an extra step in the Windows 'current path' hierarchy -- Windows has both a 'current drive', and a 'current directory' for one or more drives, which results in several kinds of non-absolute paths, e.g. 'foo/bar', '/foo/bar', 'X:foo/bar'
This three-part model is used successfully by pathlib, which exposes _root_ as an attribute, and combines `drive + root` as an attribute called _anchor_. The anchor has useful properties, e.g. comparing two paths anchors can tell us whether a `relative_to()` operation is possible.
Pathlib has [its own implementation of `splitroot()`](https://github.com/python/cpython/blob/08e5594cf3d42391a48e0311f6b9393ec2e00e1e/Lib/pathlib.py#L274-L285), but its performance is hamstrung by its need for OS-agnosticism. By moving the implementation into `ntpath` and `posixpath` we can take advantage of OS-specific rules to improve pathlib performance.
# Previous discussion
- https://discuss.python.org/t/add-os-path-splitroot/22243
- https://github.com/python/cpython/pull/31691/files#r944987609
- https://github.com/python/cpython/pull/100351/files/5b67a7489e5518975d0c11033cd35bf60fad0ddf#r1065219273
<!-- gh-linked-prs -->
### Linked PRs
* gh-101002
<!-- /gh-linked-prs -->
| e5b08ddddf1099f04bf65e63017de840bd4b5980 | 37f15a5efab847b8aca47981ab596e9c36445bf7 |
python/cpython | python__cpython-100998 | # Implement Multi-Phase Init for the _testinternalcapi Module
See PEP 630 and PEP 687. This is an internal test module so the bar isn't as high as for regular stdlib modules.
<!-- gh-linked-prs -->
### Linked PRs
* gh-100998
* gh-101058
<!-- /gh-linked-prs -->
| b511d3512ba334475201baf4d9db7c4d28f0a9ad | 005e69403d638f9ff8f71e59960c600016e101a4 |
python/cpython | python__cpython-100990 | # Accommodate Sphinx option by changing docstring return type of "integer" to "int".
# Bug report
When `collections.deque` is subclassed and the resulting class is documented with Sphinx using the option `:inherited-members:`, the following error appears:
```
WARNING: py:class reference target not found: integer -- return number of occurrences of value
```
I assume this is because the `PyDoc_STRVAR` docstring of the C implementation is not compliant with [PEP-7](https://peps.python.org/pep-0007/#documentation-strings) as required in [`PyDoc_STRVAR`](https://docs.python.org/3/c-api/intro.html#c.PyDoc_STRVAR):
https://github.com/python/cpython/blob/d9dff4c8b5ab41c47af002ad7fb083c953e75f31/Modules/_collectionsmodule.c#L992-L993
And everything after `->` is interpreted as type hint.
This can be reproduced with e.g. a python module `sub_deque.py`:
```
class SubDeque(deque):
pass
```
and a documenation file `docs/src/sub_deque.rst`:
```
.. automodule:: sub_deque
:inherited-members:
```
which is then built with Sphinx: `sphinx-build -W docs/src docs/dist`.
I'd be happy to provide a patch for this myself if you feel this issue should be fixed.
# Your environment
<!-- Include as many relevant details as possible about the environment you experienced the bug in -->
- CPython versions tested on: 3.7 - 3.11
- Operating system and architecture: Arch Linux & Ubuntu
<!-- gh-linked-prs -->
### Linked PRs
* gh-100990
* gh-102910
* gh-102911
* gh-102949
* gh-102979
* gh-102984
* gh-102985
* gh-102991
* gh-102992
<!-- /gh-linked-prs -->
| c74073657e32b8872f91b3bbe1efa9af20adbea9 | c24f1f1e874c283bb11d8b9fbd661536ade19fe9 |
python/cpython | python__cpython-100986 | # HTTPSConnection double wrapping IPv6 address during CONNECT
# Bug report
When http.client.HTTPSConnection is used with IPv6 addresses through a proxy, the "Host:" header ends up
wrapped in square brackets twice (e.g. Host: [[fd2e:6f44:5dd8:c956::15]]:5050).
```
$ cat /tmp/proxy_test.py
import http.client, ssl
conn = http.client.HTTPSConnection("fd2e:6f44:5dd8:c956::1", 8215, context=ssl._create_unverified_context())
conn.set_tunnel("[fd2e:6f44:5dd8:c956::15]", 5050)
conn.request("GET", "/", "")
response = conn.getresponse()
print(response.status, response.reason)
$ python3 /tmp/proxy_test.py
400 Bad Request
=== tcp dump showing extra square brackets in "Host:" header ==
CONNECT [fd2e:6f44:5dd8:c956::15]:5050 HTTP/1.0
HTTP/1.1 200 Connection established
GET / HTTP/1.1
Host: [[fd2e:6f44:5dd8:c956::15]]:5050
Accept-Encoding: identity
Content-Length: 0
HTTP/1.1 400 Bad Request
Date: Thu, 12 Jan 2023 11:47:38 GMT
Server: Apache
Content-Length: 226
Connection: close
Content-Type: text/html; charset=iso-8859-1
```
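The underlying problem is presumably that the tunnel code re-wraps an already-bracketed IPv6 literal. A sketch of the fix idea (not the actual `http.client` code; `normalize_ipv6_host` is a hypothetical helper): strip any existing brackets before deciding whether to add them, so the operation is idempotent:

```python
def normalize_ipv6_host(host):
    # Sketch: idempotent bracketing for a Host header / CONNECT target.
    if host.startswith("[") and host.endswith("]"):
        host = host[1:-1]
    return f"[{host}]" if ":" in host else host

assert normalize_ipv6_host("fd2e:6f44:5dd8:c956::15") == "[fd2e:6f44:5dd8:c956::15]"
assert normalize_ipv6_host("[fd2e:6f44:5dd8:c956::15]") == "[fd2e:6f44:5dd8:c956::15]"
assert normalize_ipv6_host("example.org") == "example.org"
```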
# Your environment
$ cat /etc/redhat-release
Red Hat Enterprise Linux CoreOS release 4.12
$ rpm -qf /usr/bin/python3
python3-3.9.10-4.el9_0.x86_64
<!-- gh-linked-prs -->
### Linked PRs
* gh-100986
* gh-115591
* gh-115606
<!-- /gh-linked-prs -->
| 465db27cb983084e718a1fd9519b2726c96935cb | 4dff48d1f454096efa2e1e7b4596bc56c6f68c20 |
python/cpython | python__cpython-100983 | # Specialization of some instructions does not conform to PEP 659, and prevents PEP 669
As written, PEP 659 says that individual specializations are restricted to a single instruction.
PEP 669 relies on this, as it also wants to replace instructions at runtime, and it would break if specialization occurs across multiple instructions.
Currently there are two places where we break this design by specializing pairs of instructions together:
* `COMPARE_OP` `POP_JUMP_IF_` pairs are specialized together
* `FOR_ITER` `STORE_FAST` are specialized together
The second will go away with the register VM, and doesn't seem to be an issue in practice.
It is the `COMPARE_OP` `POP_JUMP_IF_` specialization that is problematic, as PEP 669 wants to instrument branches.
Instrumenting the `POP_JUMP_IF_` doesn't work if the `COMPARE_OP` specialization jumps over it.
The solution is to replace the `COMPARE_OP` `POP_JUMP_IF_` pair with a single `COMPARE_AND_BRANCH` instruction that can be specialized or instrumented atomically.
<!-- gh-linked-prs -->
### Linked PRs
* gh-100983
* gh-101985
* gh-102801
<!-- /gh-linked-prs -->
| 7b14c2ef194b6eed79670aa9d7e29ab8e2256a56 | b1a74a182d8762bda51838401ac92b6ebad9632a |
python/cpython | python__cpython-124292 | # ctypes Stucture._fields_ cannot be set when Structure has been subclassed
# Bug report
Currently, the method for defining Structure fields that cannot be declared within the class body is to define the class with `pass` and then set `_fields_` outside it. But `ctypes.Structure` incorrectly raises `AttributeError`, even when no class in the hierarchy has ever set `_fields_`.
```py
from ctypes import *
class PyObject(Structure):
pass
class PyVarObject(PyObject):
pass
class PyTypeObject(PyVarObject):
pass
PyObject._fields_ = [
("ob_refcnt", c_long),
("ob_type", POINTER(PyTypeObject)),
]
```
```py
PyObject._fields_ = [
^^^^^^^^^^^^^^^^^
AttributeError: _fields_ is final
```
Strangely, suppressing this `AttributeError` reveals that the `_fields_` attribute *does* actually get set:
```py
with suppress(AttributeError):
PyObject._fields_ = [
("ob_refcnt", c_long),
("ob_type", POINTER(PyTypeObject)),
]
print(PyObject._fields_)
>> [('ob_refcnt', <class 'ctypes.c_long'>), ('ob_type', <class '__main__.LP_PyTypeObject'>)]
```
# Your environment
- CPython versions tested on: 3.11.1, main
- Operating system and architecture: macOS ARM, ubuntu x64
<!-- gh-linked-prs -->
### Linked PRs
* gh-124292
<!-- /gh-linked-prs -->
| be76e3f26e0b907f711497d006b8b83bff04c036 | e256a7590a0149feadfef161ed000991376dc0e8 |
python/cpython | python__cpython-100936 | # Change "char *str" to "const char *str" in KeywordToken because it is an immutable string.
#100936
<!-- gh-linked-prs -->
### Linked PRs
* gh-100936
<!-- /gh-linked-prs -->
| a1e051a23736fdf3da812363bcaf32e53a294f03 | 75c8133efec035ec1083ebd8e7d43ef340c2e581 |
python/cpython | python__cpython-100934 | # Remove `check_string` and `check_mapping` from `test_xml_etree`
Right now one test in `test_xml_etree` contains two helpers called `check_mapping` and `check_string`:
https://github.com/python/cpython/blob/8dd2766d99f8f51ad62dc0fde8282483590c6cd0/Lib/test/test_xml_etree.py#L206-L241
It originates from very old code: https://github.com/python/cpython/blob/d9a550b5e14db2748f4fe089c2637c3462ad7d8d/Lib/test/test_xml_etree.py#L140-L169
It is half-baked with lots of obvious things to be improved.
I think that it is safe just to replace them with:
- `check_string` to `assertIsInstance(str)`
- `check_mapping` to `assertIsInstance(dict)`
It is more correct, because both the Python and C implementations only use `str` and `dict` for the checked attributes. And in this case we can skip re-inventing tests for mapping and string.
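For illustration, the proposed replacement amounts to the standard unittest type checks (sketch, not the actual test-suite diff):

```python
import unittest
import xml.etree.ElementTree as ET

elem = ET.fromstring('<root a="1"/>')
tc = unittest.TestCase()
# Replaces check_string / check_mapping: both the Python and C
# implementations use plain str and dict for these attributes.
tc.assertIsInstance(elem.tag, str)
tc.assertIsInstance(elem.attrib, dict)
```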
<!-- gh-linked-prs -->
### Linked PRs
* gh-100934
* gh-101686
* gh-101687
<!-- /gh-linked-prs -->
| eb49d32b9af0b3b01a5588626179187f11d145c9 | feec49c40736fc05626a183a8d14c4ebbea5ae28 |
python/cpython | python__cpython-100932 | # Improve `pickle` protocols coverage in `test_slice`
While working on https://github.com/python/cpython/issues/100817 I've noticed that `slice` is only tested with 3 `pickle` protocols. See https://github.com/python/cpython/blob/8dd2766d99f8f51ad62dc0fde8282483590c6cd0/Lib/test/test_slice.py#L237-L243 I don't think we need to limit the supported protocols. Let's test all protocols!
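The suggested loop over every available protocol looks like:

```python
import pickle

s = slice(10, 20, 3)
# range(pickle.HIGHEST_PROTOCOL + 1) covers every protocol the running
# interpreter supports, instead of hard-coding range(3).
for proto in range(pickle.HIGHEST_PROTOCOL + 1):
    round_tripped = pickle.loads(pickle.dumps(s, proto))
    assert round_tripped == s, proto
```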
<!-- gh-linked-prs -->
### Linked PRs
* gh-100932
* gh-100978
* gh-100979
<!-- /gh-linked-prs -->
| 8795ad1bd0d6ee031543fcaf5a86a60b37950714 | 8dd2766d99f8f51ad62dc0fde8282483590c6cd0 |
python/cpython | python__cpython-92650 | # Reuse code objects for similar `dataclass` definitions
A little over a year ago, @dabeaz came up with a cool way of speeding up `dataclass` creation by avoiding unnecessary `exec` calls. Essentially, his proof-of-concept `dataklasses` module caches code objects for methods of "similarly-shaped" dataclasses, and patches them with the correct names:
https://github.com/dabeaz/dataklasses
I have a working prototype of a similar idea for the stdlib `dataclasses` module over in #92650. It basically doubles the speed of `dataclass` definitions.
CC @ericvsmith
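A much-simplified sketch of the caching idea (an assumption for illustration: real dataklasses patches cached code objects with the correct names, while this sketch only shares the generated `__init__` per field *count*; `make_init` is a hypothetical helper):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def make_init(nfields):
    # One exec per *shape* (field count); the result is shared by every
    # dataclass-like type with that many fields.
    args = ", ".join(f"a{i}" for i in range(nfields))
    body = "".join(f"    self.f{i} = a{i}\n" for i in range(nfields))
    namespace = {}
    exec(f"def __init__(self, {args}):\n{body}", namespace)
    return namespace["__init__"]

class Point2:  # two hypothetical consumers with the same shape
    __init__ = make_init(2)

class Pair2:
    __init__ = make_init(2)

assert Point2.__init__ is Pair2.__init__  # compiled once, reused
p = Point2(1, 2)
assert (p.f0, p.f1) == (1, 2)
```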
<!-- gh-linked-prs -->
### Linked PRs
* gh-92650
<!-- /gh-linked-prs -->
| 3dbca81c9b7903e8d808089a6a76dc97807b3df3 | 4f6f34fb0b7c1048b238062015f3fef7372a911f |
python/cpython | python__cpython-131282 | # ctypes infinite pointer cache
# Bug(ish?) report
The following function has a cache. If you are using a factory to call ctypes.POINTER in a loop, the memory usage is unbounded and unable to be reclaimed.
https://docs.python.org/3/library/ctypes.html#ctypes.POINTER
The documentation should mention that this is unbounded and should not be called in a loop, or the cache should be changed to a configurable, bounded LRU cache.
Example of a variable length type factory used by windows, and a bad func that cannot be called in a loop:
```python
def shitemid_factory(size: int) -> Type[ctypes.Structure]:
class SHITEMID_Var(ctypes.Structure):
_fields_ = (
("cb", USHORT),
("abID", BYTE * size),
)
return SHITEMID_Var
def bad_func():
SHITEMID_Var = shitemid_factory(sz - ctypes.sizeof(USHORT))
item_var = ctypes.cast(item_ptr, ctypes.POINTER(SHITEMID_Var))
```
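The cache can be observed directly: `POINTER` returns the identical type object on repeated calls for the *same* class, but every distinct class produced by a factory adds a fresh, never-evicted entry (sketch with a simplified factory):

```python
import ctypes

class S(ctypes.Structure):
    _fields_ = [("x", ctypes.c_int)]

# Same input class -> cached, identical pointer type each time...
assert ctypes.POINTER(S) is ctypes.POINTER(S)

# ...but a factory creating new classes in a loop grows the cache forever,
# because each new class object is a new cache key.
def factory(size):
    class Var(ctypes.Structure):
        _fields_ = [("cb", ctypes.c_ushort), ("abID", ctypes.c_byte * size)]
    return Var

pointer_types = [ctypes.POINTER(factory(n)) for n in range(3)]
assert len(set(map(id, pointer_types))) == 3  # three distinct cached types
```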
<!-- gh-linked-prs -->
### Linked PRs
* gh-131282
* gh-133843
* gh-133867
<!-- /gh-linked-prs -->
| a0bc0c462ff55b4112be49d1839635ac3a4a9878 | 7e7e49be78e26d0a3b861a04bbec1635aabb71b9 |
python/cpython | python__cpython-100924 | # The "mask" stored in the cache of `COMPARE_OP` could be stored in the oparg.
The "when to branch mask" used by specializations of `COMPARE_OP` takes 4 bits, the comparison operator take 3 bits.
Since the mask can be computed at compile time, it should be stored in the oparg along with the comparison operator.
This would free up a cache entry.
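A sketch of the packing (hypothetical bit layout, matching the sizes quoted above): the 3-bit operator goes in the low bits and the 4-bit mask above it, so the interpreter can recover both from the oparg without touching the cache:

```python
OP_BITS = 3  # comparison operator width; the mask gets the next 4 bits

def pack_oparg(cmp_op, branch_mask):
    assert cmp_op < (1 << OP_BITS) and branch_mask < (1 << 4)
    return (branch_mask << OP_BITS) | cmp_op

def unpack_oparg(oparg):
    return oparg & ((1 << OP_BITS) - 1), oparg >> OP_BITS

oparg = pack_oparg(5, 0b1010)
assert unpack_oparg(oparg) == (5, 0b1010)
```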
<!-- gh-linked-prs -->
### Linked PRs
* gh-100924
<!-- /gh-linked-prs -->
| 6e4e14d98fe0868981f29701496d57a8223c5407 | 61f12b8ff7073064040ff0e6220150408d24829b |
python/cpython | python__cpython-101514 | # asyncio Streams wait_closed
# Documentation
https://docs.python.org/3/library/asyncio-stream.html - assuming that `await stream.wait_closed()` should be called after all uses of `.close()` - which the documentation for both `close()` and `wait_closed()` heavily suggests should be the case - then all examples on this page should also demonstrate this practice. At present only the first one (at the top) does: the examples at the bottom don't use `wait_closed()`.
The documentation for wait_closed also doesn't really say *why* it should always be used beyond "to wait until the underlying connection is closed" which doesn't really explain anything.
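For reference, the pattern the examples should demonstrate (self-contained sketch with an ephemeral local server): `close()` only schedules the teardown, and `wait_closed()` is what actually waits for buffered data to be flushed and the underlying transport to finish closing:

```python
import asyncio

async def handler(reader, writer):
    await reader.readline()
    writer.close()
    await writer.wait_closed()

async def main():
    server = await asyncio.start_server(handler, "127.0.0.1", 0)
    port = server.sockets[0].getsockname()[1]
    reader, writer = await asyncio.open_connection("127.0.0.1", port)
    writer.write(b"ping\n")
    await writer.drain()
    writer.close()
    await writer.wait_closed()  # without this, buffered data may be lost
    server.close()
    await server.wait_closed()
    return writer.is_closing()

closed = asyncio.run(main())
assert closed is True
```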
<!-- gh-linked-prs -->
### Linked PRs
* gh-101514
* gh-101532
* gh-101533
<!-- /gh-linked-prs -->
| 5c39daf50b7f388f9b24bb2d6ef415955440bebf | 5dcae3f0c3e9072251217e814a9438670e5f1e40 |
python/cpython | python__cpython-100917 | # Convert argument to appropriate type
# Documentation
https://docs.python.org/3/howto/logging-cookbook.html#use-of-contextvars
The argument `count` should be converted to an integer when the user passes a value to `main.py`.
```bash
$ python3 main.py --count 100
Traceback (most recent call last):
File "/home/tom/main.py", line 138, in <module>
main()
File "/home/tom/main.py", line 117, in main
for i in range(options.count):
TypeError: 'str' object cannot be interpreted as an integer
```
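The fix, assuming the cookbook example parses its options with `argparse` (as the traceback's `options.count` suggests), is to pass `type=int` so the value arrives already converted:

```python
import argparse

parser = argparse.ArgumentParser()
# type=int converts "--count 100" from the string "100" to the int 100.
parser.add_argument("--count", type=int, default=100)

options = parser.parse_args(["--count", "100"])
assert options.count == 100
for i in range(options.count):  # no longer raises TypeError
    pass
```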
<!-- gh-linked-prs -->
### Linked PRs
* gh-100917
* gh-100918
* gh-100919
<!-- /gh-linked-prs -->
| b2f7b2ef0b5421e01efb8c7bee2ef95d3bab77eb | 35650f25383efce83b62d5273110ab8dcdbcc254 |
python/cpython | python__cpython-100913 | # Update documentation for `sys.winver` to not say it's "normally the first three characters of version"
# Documentation
https://docs.python.org/3/library/sys.html#sys.winver says:
> The value is normally the first three characters of [version](https://docs.python.org/3/library/sys.html#sys.version).
That has not been true since Python 3.11. It should probably be updated to say something like, "The value is normally the major and minor versions of the running Python interpreter," or something along those lines.
<!-- gh-linked-prs -->
### Linked PRs
* gh-100913
* gh-100962
* gh-100963
<!-- /gh-linked-prs -->
| d9dff4c8b5ab41c47af002ad7fb083c953e75f31 | 02a72f080dc89b037c304a85a0f96509de9ae688 |
python/cpython | python__cpython-100905 | # Mac Readme doc mentions bpo
# Documentation
In: https://github.com/python/cpython/blob/main/Mac/README.rst#L351
It says
```
configure: WARNING: ## Report this to https://bugs.python.org/ ##
```
Since we no longer use b.p.o, this link is not correct.
<!-- gh-linked-prs -->
### Linked PRs
* gh-100905
* gh-100906
* gh-100907
<!-- /gh-linked-prs -->
| be23a202426385ad99dcb2611152783780c7bc42 | 3f3c78e32fc67766232ee4ddf17b560c9836a2d1 |
python/cpython | python__cpython-100922 | # Possible race condition between `threading.local()` and GIL acquisition on Linux
# Bug report
TSAN reports a race condition from a Python program that both uses `threading.local()` and also a native extension that attempts to acquire the GIL in a separate, native thread.
To reproduce, build both the Python interpreter and the native module with TSAN. An example native module is like this, but note that all that matters is that it spawns a native thread that attempts to acquire the GIL:
**thread_haver.cc:**
```c++
#define PY_SSIZE_T_CLEAN
#include <Python.h>
#include <pthread.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
extern "C" {
static pthread_t t;
static void* DoWork(void* arg) {
printf("Thread trying to acquire GIL\n");
fflush(stdout);
PyGILState_STATE py_threadstate = PyGILState_Ensure(); // Race here!!
printf("Thread called with arg %p\n", arg);
PyGILState_Release(py_threadstate);
printf("Thread has released GIL\n");
return arg;
}
static PyObject* SomeNumber(PyObject* module, PyObject* object) {
if (pthread_create(&t, nullptr, DoWork, nullptr) != 0) {
fprintf(stderr, "pthread_create failed\n");
abort();
}
return PyLong_FromLong(reinterpret_cast<uintptr_t>(object));
}
static PyObject* AnotherNumber(PyObject* module, PyObject* object) {
if (pthread_join(t, nullptr) != 0) {
fprintf(stderr, "pthread_join failed\n");
abort();
}
return PyLong_FromLong(reinterpret_cast<uintptr_t>(object));
}
} // extern "C"
static PyMethodDef thbmod_methods[] = {
{"some_number", SomeNumber, METH_O, "Makes a number."},
{"another_number", AnotherNumber, METH_O, "Makes a number."},
{nullptr, nullptr, 0, nullptr} /* Sentinel */
};
static struct PyModuleDef thbmod = {
PyModuleDef_HEAD_INIT,
"thread_haver", /* name of module */
nullptr, /* module documentation, may be NULL */
1024, /* size of per-interpreter state of the module,
or -1 if the module keeps state in global variables. */
thbmod_methods,
};
PyMODINIT_FUNC PyInit_thread_haver() {
return PyModule_Create(&thbmod);
}
```
Compile this into a shared object and using TSAN with, say, `${CC} -fPIC -shared -O2 -fsanitize=thread -o thread_haver.so thread_haver.cc`.
Now the data race happens if we create and destroy a `threading.local()` object in Python:
**demo.py:**
```py
import threading
import thread_haver
print("Number: {}".format(thread_haver.some_number(thread_haver))) # starts a thread
for _ in range(10000):
_ = threading.local() # race here (?)
print("Number: {}".format(thread_haver.another_number(threading))) # joins the thread
```
Concretely, here is the TSAN output, for Python 3.9:
```
Thread trying to acquire GIL
Number: 135325830047024
==================
WARNING: ThreadSanitizer: data race (pid=[...])
Read of size 8 at 0x7b4400019f98 by main thread:
#0 local_clear [...]/Modules/_threadmodule.c:819:25 (python+0xcbc14e)
#1 local_dealloc [...]/Modules/_threadmodule.c:838:5 (python+0xcbbd1d)
#2 _Py_DECREF [...]/Include/object.h:447:9 (python+0x104efaa)
#3 _Py_XDECREF [...]/Include/object.h:514:9 (python+0x104efaa)
#4 insertdict [...]/Objects/dictobject.c:1123:5 (python+0x104efaa)
Previous write of size 8 at 0x7b4400019f98 by thread T1:
#0 malloc [...]/tsan/rtl/tsan_interceptors_posix.cpp:683:5 (python+0xbd28f1)
#1 _PyMem_RawMalloc [...]/Objects/obmalloc.c:116:11 (python+0x1083956)
Location is heap block of size 264 at 0x7b4400019f00 allocated by thread T1:
#0 malloc [...]/tsan/rtl/tsan_interceptors_posix.cpp:683:5 (python+0xbd28f1)
#1 _PyMem_RawMalloc [...]/Objects/obmalloc.c:116:11 (python+0x1083956)
Thread T1 (tid=3039745, running) created by main thread at:
#0 pthread_create [...]/tsan/rtl/tsan_interceptors_posix.cpp:1038:3 (python+0xbd4679)
#1 SomeNumber(_object*, _object*) thread_haver.cc:23:7 (thread_haver.so+0xb58)
#2 cfunction_vectorcall_O [...]/Objects/methodobject.c:516:24 (python+0x107cb3d)
SUMMARY: ThreadSanitizer: data race [...]/Modules/_threadmodule.c:819:25 in local_clear
==================
Thread called with arg (nil)
Thread has released GIL
Number: 135325829912464
ThreadSanitizer: reported 1 warnings
```
Unfortunately, the backtrace does not go into the details of `PyGILState_Ensure()`, but the race seems to be on `tstate->dict` in https://github.com/python/cpython/blob/5ef90eebfd90147c7e774cd5335ad4fe69a7227c/Modules/_threadmodule.c#L819 from the `threading.local()` deallocation function, and on the `tstate` struct being allocated by `malloc` (by the GIL acquisition?).
# Your environment
- CPython versions tested on: 3.9 and 3.10
- Operating system and architecture: Linux (Debian-derived)
I suspect that there may be some TLS access that both the `threading.local()` deallocation function and the GIL acquisition perform and that may not be sufficiently synchronised. It could also be a bug in TSAN that it does not track TLS access correctly.
<!-- gh-linked-prs -->
### Linked PRs
* gh-100922
* gh-100937
* gh-100938
* gh-100939
* gh-100953
<!-- /gh-linked-prs -->
| 762745a124cbc297cf2fe6f3ec9ca1840bb2e873 | 847d7708ba8739a5d5d31f22d71497527a7d8241 |
python/cpython | python__cpython-100885 | # email: address list folding encodes list-separator comma
# Bug report
When, during address list folding, a separating comma ends up on a folded line that is to be unicode-encoded, the separator itself is also unicode-encoded. Instead the comma should remain a plain comma.
This is rejected by some mail servers:
```
ZZZ: host YYY said:
550 Invalid address in message header. Consult RFC2822. (in reply to end of
DATA command)
```
Minimal example:
```python
>>> import email
>>> m = email.message.EmailMessage()
>>> t = '01234567890123456789012345678901234567890123456789012345678901@example.org, ä <foo@example.org>'
>>> m['to'] = t
>>> m.as_string()
'to: 01234567890123456789012345678901234567890123456789012345678901@example.org\n =?utf-8?q?=2C?= =?utf-8?q?=C3=A4?= <foo@example.org>\n\n'
```
# Your environment
<!-- Include as many relevant details as possible about the environment you experienced the bug in -->
- CPython versions tested on: 3.10.9, latest main (e47b13934b2eb50914e4dbae91f1dc59f8325e30)
- Operating system and architecture: ArchLinux x86_64
<!-- gh-linked-prs -->
### Linked PRs
* gh-100885
* gh-115592
* gh-115593
<!-- /gh-linked-prs -->
| 09fab93c3d857496c0bd162797fab816c311ee48 | 465db27cb983084e718a1fd9519b2726c96935cb |
python/cpython | python__cpython-100883 | # Improve `test_pickling` in `test_ast`
Here's how it is defined right now: https://github.com/python/cpython/blob/837ba052672d1a5f85a46c1b6d4b6e7d192af6f3/Lib/test/test_ast.py#L640-L655
I see two major problems here:
1. `cPickle` does not exist anymore (since 3.0), so it always fails with `ImportError`. If we want to be sure that both Python and C versions behave the same - we can surely do that with `import_helper.import_fresh_module` or we can drop this part and only keep `pickle` module direct usage (as other modules do). I prefer the second option, because of how other things are tested. But, in the end it is up to maintainers to decide :)
2. Not all `pickle` protocols are tested: we need to use `pickle.HIGHEST_PROTOCOL` instead of just three items
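A sketch of what point 2 amounts to: loop over every protocol up to `pickle.HIGHEST_PROTOCOL` instead of hard-coding three of them (illustrative, not the exact test that landed):

```python
import ast
import pickle

tree = ast.parse("x = 1 + 2")
results = []
for proto in range(pickle.HIGHEST_PROTOCOL + 1):
    # Round-trip the AST through each protocol and compare dumps.
    restored = pickle.loads(pickle.dumps(tree, proto))
    results.append(ast.dump(restored) == ast.dump(tree))
```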
PR is incoming!
<!-- gh-linked-prs -->
### Linked PRs
* gh-100883
<!-- /gh-linked-prs -->
| 2e80c2a976c13dcb69a654b386164dca362295a3 | f08209874e58d0adbb08bd1dba4f58ba63f571c5 |
python/cpython | python__cpython-100881 | # Warning: "‘lo’ may be used uninitialized in this function" in `math_fsum` in `mathmodule.c`
Looks like that after https://github.com/python/cpython/commit/87d3bd0e02cddc415a42573052110eb9301d2c3d we got a new warning:
<img width="918" alt="Снимок экрана 2023-01-09 в 10 40 53" src="https://user-images.githubusercontent.com/4660275/211260435-f6ce1e4a-0e18-4065-a15f-31a53b8d7aec.png">
<!-- gh-linked-prs -->
### Linked PRs
* gh-100881
<!-- /gh-linked-prs -->
| 36f2329367f3608d15562f1c9e89c50a1bd07b0b | 837ba052672d1a5f85a46c1b6d4b6e7d192af6f3 |
python/cpython | python__cpython-100872 | # Improve tests of `copy` module
After working on https://github.com/python/cpython/issues/100817 I've noticed that there are multiple things that can be improved in terms of `copy` module tests:
1. First of all, I had submitted some broken code in https://github.com/python/cpython/pull/100818 but, our test cases were not able to detect it. Solution: add a new test case in `test_slice.py` with `copy` and `deepcopy` calls. I think it should be in `test_slice` and not in `test_copy`, because there's nothing special about it: `copy` does not change its behaviour or special case it.
2. This test ensures that after modifying `copyreg` we can now copy an object, but does not assert the result: https://github.com/python/cpython/blob/e47b13934b2eb50914e4dbae91f1dc59f8325e30/Lib/test/test_copy.py#L42-L55 Right now the result could be anything. The same happens in https://github.com/python/cpython/blob/e47b13934b2eb50914e4dbae91f1dc59f8325e30/Lib/test/test_copy.py#L302-L315 Solution: add required assertions
3. `test_deepcopy_atomic` misses several important types: `bytes`, `types.EllipsisType`, `NotImplementedType`. https://github.com/python/cpython/blob/e47b13934b2eb50914e4dbae91f1dc59f8325e30/Lib/test/test_copy.py#L350-L359
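The kind of regression check point 1 asks for can be sketched in a few lines — `slice` objects should round-trip through both `copy()` and `deepcopy()`:

```python
import copy

s = slice(1, 10, 2)
shallow = copy.copy(s)
deep = copy.deepcopy(s)
```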
PR is incoming.
<!-- gh-linked-prs -->
### Linked PRs
* gh-100872
* gh-100975
* gh-100976
<!-- /gh-linked-prs -->
| 729ab9b622957fef0e9b494af9a71ab02986c741 | 762745a124cbc297cf2fe6f3ec9ca1840bb2e873 |
python/cpython | python__cpython-100845 | # Has the volatile declaration in fsum() outlived its usefulness?
This bit of defensive programming is costly. Removing the `volatile` declaration reduces the running time of `fsum()` for a hundred floats from `2.26 usec per loop` to `1.42 usec per loop`.
I'm thinking the x87 issues have mostly faded. If we do need to keep this, can it be wrapped in an `#ifdef` so that we don't have everyone paying for a problem that almost nobody has?
<!-- gh-linked-prs -->
### Linked PRs
* gh-100845
<!-- /gh-linked-prs -->
| 87d3bd0e02cddc415a42573052110eb9301d2c3d | b139bcd8922b47bf77c75d5f6704cc9147852546 |
python/cpython | python__cpython-100825 | # Possible typo in the documentation of unittest.TestLoader.testNamePatterns
The documentation for _TestLoader_.**testNamePatterns** in the module _unittest_ says:
> List of Unix shell-style wildcard test name patterns that test methods have to match to be included in test suites (see **-v** option).
> ...
> Note that matches are always performed using fnmatch.fnmatchcase(), so unlike patterns passed to the **-v** option, simple substring patterns will have to be converted using * wildcards.
But the option **-v** only increases verbosity, does not accept an argument, and so doesn't work with test name patterns.
In my opinion, it should be the option **-k**, because **-k** is the one used to pass a pattern, as the output of `unittest --help` shows:
> -k TESTNAMEPATTERNS Only run tests which match the given substring
Also checking the source code, **-k** is bound to _testNamePatterns_
<pre><code>parser.add_argument('-k', dest='testNamePatterns',
action='append', type=_convert_select_pattern,
help='Only run tests which match the given substring')
</code></pre>
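For reference, the `fnmatch.fnmatchcase()` behavior the docs note describes can be checked directly — matching is against the whole name, so a bare substring fails where a `*`-wrapped pattern succeeds:

```python
from fnmatch import fnmatchcase

# A substring-style pattern passed to -k is converted to *substring*;
# as a raw testNamePatterns entry it must match the whole name.
wild = fnmatchcase('test_parse_url', '*parse*')
bare = fnmatchcase('test_parse_url', 'parse')
```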
I've created the PR in case the issue is correct.
<!-- gh-linked-prs -->
### Linked PRs
* gh-100825
* gh-100838
* gh-100839
<!-- /gh-linked-prs -->
| 6d3bc4a795e7a60f665e41b2d4b6803f3844fc48 | a2141882f259e21bb09fa0b7cba8142147b9e3d7 |
python/cpython | python__cpython-100816 | # Normalize `types` module usage in `copy` module
Right now there are two minor problems with it:
1. Here `CodeType` is assumed to be missing for some reason: https://github.com/python/cpython/blob/26ff43625ed7bf09542ad8f149cb6af710b41e15/Lib/copy.py#L109 But here, it is just used as it should be: https://github.com/python/cpython/blob/26ff43625ed7bf09542ad8f149cb6af710b41e15/Lib/copy.py#L185
2. We can also modernize `type(None)` and etc to use existing `types.` aliases for clarity
<!-- gh-linked-prs -->
### Linked PRs
* gh-100816
<!-- /gh-linked-prs -->
| 951303fd855838d47765dcd05471e14311dc9fdd | 6746135b0722a5359ce6346554c847afba603b5a |
python/cpython | python__cpython-107692 | # Missing method tkinter.Image._register
# Bug report
When `tkinter` gets a keyword argument, it tries to convert it to a string that tcl/tk can handle. The problem is that when a callable object is passed in as the value for a keyword argument, `tkinter` usually calls an internal `_register` method, but it's missing from the `Image` class. [Here](https://github.com/python/cpython/blob/26ff43625ed7bf09542ad8f149cb6af710b41e15/Lib/tkinter/__init__.py#L4073) in `tkinter/__init__.py`, `tkinter` tries to call `self._register(v)` but there is no method named `_register` in the `Image` class. There is a `_register` method for `Misc` and `Variable` but `Image` doesn't inherit from either of those. A minimal reproducible example:
```python
import tkinter
root = tkinter.Tk()
tkinter.PhotoImage(height=print)
```
The expected error is: `_tkinter.TclError: expected integer but got "140140827421696print"` but it's throwing `AttributeError: 'PhotoImage' object has no attribute '_register'`.
The only time this bug would appear is when someone makes a mistake and calls `PhotoImage` or `BitmapImage` with a callable keyword parameter. So the only difference is the error message.
# Your environment
The issue isn't environment dependant.
- CPython versions tested on: 3.10 but from the source code, it should be present in all 3.7+ versions of cpython.
<!-- gh-linked-prs -->
### Linked PRs
* gh-107692
* gh-107722
* gh-107723
<!-- /gh-linked-prs -->
| 50e3cc9748eb2103eb7ed6cc5a74d177df3cfb13 | 835e38891535649321230f94121410244c583966 |
python/cpython | python__cpython-10294 | # Add IPv4 socket option constant IP_PKTINFO as socket module attribute.
# Feature or enhancement
Add IPv4 socket option constant IP_PKTINFO as socket module attribute.
# Pitch
The socket option IP_PKTINFO is defined on Linux and OSX.
I would think we would want all the socket option constants defined in system headers, specifically `<in.h>`, available as socket module level attributes. The most common ones are already included.
There are many other constants I would be willing to add, if desired.
In fact I'm curious if all constants should be pulled from headers dynamically.
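A sketch of what exposing the constant enables — guarded with `hasattr` because `IP_PKTINFO` is platform-dependent and only present on builds that expose it:

```python
import socket

if hasattr(socket, 'IP_PKTINFO'):
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        # Ask the kernel to deliver packet-info ancillary data on recvmsg().
        s.setsockopt(socket.IPPROTO_IP, socket.IP_PKTINFO, 1)
        enabled = s.getsockopt(socket.IPPROTO_IP, socket.IP_PKTINFO)
else:
    enabled = None
```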
<!-- gh-linked-prs -->
### Linked PRs
* gh-10294
<!-- /gh-linked-prs -->
| 2cdc5189a6bc3157fddd814662bde99ecfd77529 | 30a306c2ade6c2af3c0b1d988a17dae3916e0d27 |
python/cpython | python__cpython-100812 | # pathlib.Path.absolute() mishandles drive-relative Windows paths
Windows has one current directory *per drive*, and supports drive-relative paths like 'X:' and 'X:foo.txt'. This makes a conversion from relative to absolute paths more complicated than simply prepending a (single) current directory.
It's correctly handled in `ntpath.abspath()` by calling NT's [`GetFullPathNameW()`](https://learn.microsoft.com/en-us/windows/win32/api/fileapi/nf-fileapi-getfullpathnamew) function. But in pathlib we simply prepend `os.getcwd()`, leading to incorrect results:
```python
>>> import os, nt, pathlib
>>> os.chdir('Z:/build')
>>> os.chdir('C:/')
>>> os.path.abspath('Z:')
'Z:\\build'
>>> nt._getfullpathname('Z:')
'Z:\\build'
>>> pathlib.Path('Z:').absolute()
WindowsPath('Z:')
```
This bug is present in all versions of CPython pathlib. We can't fix it by calling `abspath()` because it will also normalize the path, eliding '..' parts.
<!-- gh-linked-prs -->
### Linked PRs
* gh-100812
* gh-101993
<!-- /gh-linked-prs -->
| 072011b3c38f871cdc3ab62630ea2234d09456d1 | d401b20630965c0e1d2a5a0d60d5fc51aa5a8d80 |
python/cpython | python__cpython-100830 | # random.choice fails on numpy arrays
# Bug report
In a project of mine I use random.choice on a numpy array (at multiple point in the code) which worked fine until recently. After upgrading to Python 3.11, my code crashes with:
`ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()`
The value error is caused by an `if not seq` check in random.choice (line 369) that was introduced in commit 3fee7776e6ed8ab023a0220da1daf3160fda868b. The check is clearly intended to test for emptiness of the sequence and would raise an error if the sequence is empty. With numpy arrays (even non-empty ones) it now causes the ValueError when trying to cast the array to a boolean.
The issue is easily reproducible with the following code:
```py
import random
import numpy as np
a = np.array([1, 2, 3])
random.choice(a)
```
The code worked on earlier versions, but fails in Python 3.11.
The problem could be avoided by checking for emptiness using `if len(seq) == 0`. I understand that `if not seq` checks are relatively common and many people prefer the conciseness, but numpy compatibility and backwards compatibility seem important to me in this context.
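The suggested workaround can be sketched as follows (`safe_choice` is a hypothetical helper for illustration, not the proposed patch to `random.choice` itself):

```python
import random

def safe_choice(seq):
    # len()-based emptiness check: works for any sized sequence,
    # including types whose __bool__ is ambiguous (e.g. numpy arrays).
    if len(seq) == 0:
        raise IndexError('Cannot choose from an empty sequence')
    return seq[random.randrange(len(seq))]

picked = safe_choice([1, 2, 3])
```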
**Tested on:**
- CPython versions tested on: 3.11.0
- Operating system and architecture: Ubuntu 22.04 64-bit
<!-- gh-linked-prs -->
### Linked PRs
* gh-100830
* gh-100858
<!-- /gh-linked-prs -->
| 9a68ff12c3e647a4f8dd935919ae296593770a6b | 87d3bd0e02cddc415a42573052110eb9301d2c3d |
python/cpython | python__cpython-101010 | # segfault in a docker container on host with ipv6 disabled when using getaddrinfo from socket.py
# Bug report
I did open an issue with fail2ban, but apparently the issue is in python itself.
https://github.com/fail2ban/fail2ban/issues/3438
fail2ban-python -c 'from fail2ban.server.ipdns import DNSUtils; print(DNSUtils.dnsToIp("fail2ban_01"))'
fail2ban_01 is the container name and its hostname
I am sorry, I am not a python dev, so at this moment the best I can provide as a minimal test is the backtrace.
It will show exactly which function was called, with which parameter, and from which file the function was taken.
# Your environment
- CPython versions tested on: python3.9-3.9.16-1.el9.x86_64
- Operating system and architecture: x86_64 vm on a dell server
- Container inside that vm, run with docker 20.10.22
OS of the vm centos stream 9
OS of the container centos stream 9
ipv6 disabled with net.ipv6.conf.all.disable_ipv6 = 1
<!-- gh-linked-prs -->
### Linked PRs
* gh-101010
* gh-101220
* gh-101236
* gh-101237
* gh-101238
* gh-101252
* gh-101271
* gh-101272
<!-- /gh-linked-prs -->
| f267b623f3634d8ef246de55970132d28854b22f | ea7c7af10b72ec4f3c5ad2bb6beb1d3667ff978e |
python/cpython | python__cpython-100793 | # Make `email.message.Message.__contains__` faster
Right now the implementation of `Message.__contains__` looks like this: https://github.com/python/cpython/blob/2f2fa03ff3d566b675020787e23de8fb4ca78e99/Lib/email/message.py#L450-L451
There are several problems here:
1. We build intermediate structure (`list` in this case)
2. We use `list` for `in` operation, which is slow
The fastest way to do check if actually have this item is simply by:
```python
def __contains__(self, name):
name_lower = name.lower()
for k, v in self._headers:
if name_lower == k.lower():
return True
return False
```
We do not create any intermediate lists / sets. And we even don't iterate longer than needed.
This change makes `in` check twice as fast.
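The replacement can be exercised standalone against a minimal stand-in for the `_headers` list (`FakeMessage` is a hypothetical class for illustration only):

```python
class FakeMessage:
    def __init__(self, headers):
        self._headers = headers

    def __contains__(self, name):
        # Case-insensitive membership test with no intermediate list.
        name_lower = name.lower()
        for k, v in self._headers:
            if name_lower == k.lower():
                return True
        return False

m = FakeMessage([('From', 'a@example.org'), ('To', 'b@example.org')])
```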
## Microbenchmark
### Before
```
» pyperf timeit --setup 'import email; m = email.message_from_file(open("Lib/test/test_email/data/msg_01.txt"))' '"from" in m'
.....................
Mean +- std dev: 1.40 us +- 0.14 us
```
```
pyperf timeit --setup 'import email; m = email.message_from_file(open("Lib/test/test_email/data/msg_01.txt"))' '"missing" in m'
.....................
Mean +- std dev: 1.42 us +- 0.06 us
```
### After
```
» pyperf timeit --setup 'import email; m = email.message_from_file(open("Lib/test/test_email/data/msg_01.txt"))' '"missing" in m'
.....................
Mean +- std dev: 904 ns +- 55 ns
```
```
» pyperf timeit --setup 'import email; m = email.message_from_file(open("Lib/test/test_email/data/msg_01.txt"))' '"from" in m'
.....................
Mean +- std dev: 715 ns +- 24 ns
```
The second case is now twice as fast.
It probably also consumes less memory now, but I don't think it is very significant.
## Importance
Since `EmailMessage` (a subclass of `Message`) is quite widely used by users and 3rd party libs, I think it is important to be included.
And since the patch is quite simple and pure-python, I think the risks are very low.
<!-- gh-linked-prs -->
### Linked PRs
* gh-100793
<!-- /gh-linked-prs -->
| 6746135b0722a5359ce6346554c847afba603b5a | 47b9f83a83db288c652e43567c7b0f74d87a29be |
python/cpython | python__cpython-100811 | # Clarify os.path.join documentation
https://docs.python.org/3/library/os.path.html#os.path.join
> On Windows, the drive letter is not reset when an absolute path component (e.g., r'\foo') is encountered. If a component contains a drive letter, all previous components are thrown away and the drive letter is reset. Note that since there is a current directory for each drive, os.path.join("c:", "foo") represents a path relative to the current directory on drive C: (c:foo), not c:\foo.
There are a couple potential issues here.
First, it should be "drive", not "drive letter":
```
>>> ntpath.join("thrownaway", "//host/computer/dir", "/asdf") # unc is not a letter
'//host/computer/asdf'
```
Second, it should be "if a component is from a different drive or an absolute path, all previous components are thrown away and the drive is reset":
```
>>> ntpath.join("C:", "foo", "C:", "bar") # previous components are not thrown away
'C:foo\\bar'
>>> ntpath.join("C:", "foo", "D:", "bar")
'D:bar'
```
Third, as a nit, maybe "component" should be replaced with "segment", since arguments can contain path separators. This would improve consistency with the pathlib docs.
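All three behaviors above can be verified on any platform, since `ntpath` is importable everywhere:

```python
import ntpath

# UNC share acts as the "drive", even though it is not a letter.
unc = ntpath.join('thrownaway', '//host/computer/dir', '/asdf')
# Same drive: earlier segments are kept, not thrown away.
same_drive = ntpath.join('C:', 'foo', 'C:', 'bar')
# Different drive: earlier segments are discarded.
new_drive = ntpath.join('C:', 'foo', 'D:', 'bar')
```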
cc @barneygale @JelleZijlstra
These came up in https://github.com/python/cpython/pull/100782
<!-- gh-linked-prs -->
### Linked PRs
* gh-100811
* gh-100843
* gh-100844
<!-- /gh-linked-prs -->
| 909a6746939ea1d09fab21f26b558cfd7e3e29a0 | ef09bf63d22b2efe5c0e9a2b9f25a9bec2ba1db0 |
python/cpython | python__cpython-100788 | # Incorrect documentation for `input` parameter
The documentation for `input` when using `help(...)` shows this:
```python
>>> help(input)
Help on built-in function input in module builtins:
input(prompt=None, /)
Read a string from standard input. The trailing newline is stripped.
The prompt string, if given, is printed to standard output without a
trailing newline before reading input.
If the user hits EOF (*nix: Ctrl-D, Windows: Ctrl-Z+Return), raise EOFError.
On *nix systems, readline is used if available.
```
This implies `input()` does the same as `input(None)`, but using `input(None)` actually does `input("None")`. It should probably be changed to show `input(prompt="", /)` as the default argument instead.
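The mismatch is observable with redirected streams — passing `None` explicitly really does print the string `None` as the prompt (sketch for illustration; redirection is only to capture the prompt):

```python
import io
import sys

sys.stdin = io.StringIO('hello\n')
captured = io.StringIO()
old_stdout = sys.stdout
sys.stdout = captured
try:
    # Explicit None is not treated like the omitted default.
    entered = input(None)
finally:
    sys.stdout = old_stdout
    sys.stdin = sys.__stdin__

prompt_shown = captured.getvalue()
```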
<!-- gh-linked-prs -->
### Linked PRs
* gh-100788
* gh-100841
* gh-100842
<!-- /gh-linked-prs -->
| a2141882f259e21bb09fa0b7cba8142147b9e3d7 | df3851fe4ab8c9013aeb5dfd75d6e8edf2ece9a8 |
python/cpython | python__cpython-100765 | # Include/internal/pycore_frame.h is missing from PYTHON_HEADERS in makefile
This means that in development, changes to `pycore_frame.h` won't result in re-compilation of any compilation units.
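A sketch of the remedy: list the header under `PYTHON_HEADERS` in `Makefile.pre.in` so dependency tracking picks it up (surrounding entries elided; exact placement in the list is an assumption):

```make
# Excerpt sketch -- other PYTHON_HEADERS entries elided.
PYTHON_HEADERS= \
		$(srcdir)/Include/internal/pycore_frame.h
```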
<!-- gh-linked-prs -->
### Linked PRs
* gh-100765
<!-- /gh-linked-prs -->
| 0cd597fef10d30a100fa4d5e132b3d385a5ac0a4 | 7a50d6b5b09a88e915891757fdd6d371310d2e96 |
python/cpython | python__cpython-100759 | # Refactor initialisation of frame headers into a single function
There are 3 places in the code where we prepare a frame for execution:
1. _PyFrame_PushUnchecked in pycore_frame.h
2. init_frame in frameobject.c
3. _PyEvalFramePushAndInit in ceval.c
All of these call the function _PyFrame_InitializeSpecials to set values from the code object to various fields on the frame. Two of them (2 and 3 above) also NULL the localsplus array.
When we change the frame layout for the register machine, we currently need to update all three places. But if we add NULLing the locals to _PyFrame_InitializeSpecials (and rename it to something clearer) then we can just make our changes there.
There isn't really a problem with NULLing the localsplus in _PyFrame_PushUnchecked - all the call sites follow up by setting some fields and NULLing the rest. So now we will NULL all fields and overwrite a few after returning from _PyFrame_InitializeSpecials. Will benchmark to make sure.
(Lesson learned from the register machine experiment).
<!-- gh-linked-prs -->
### Linked PRs
* gh-100759
* gh-100775
<!-- /gh-linked-prs -->
| 15c44789bb125b93e96815a336ec73423c47508e | 78068126a1f2172ff61a0871ba43d8530bc73905 |
python/cpython | python__cpython-100751 | # PEP 597: platform from_subprocess missing encoding kwarg
<!--
If you're new to Python and you're not sure whether what you're experiencing is a bug, the CPython issue tracker is not
the right place to seek help. Consider the following options instead:
- reading the Python tutorial: https://docs.python.org/3/tutorial/
- posting in the "Users" category on discuss.python.org: https://discuss.python.org/c/users/7
- emailing the Python-list mailing list: https://mail.python.org/mailman/listinfo/python-list
- searching our issue tracker (https://github.com/python/cpython/issues) to see if
your problem has already been reported
-->
# Bug report
```pytb
python3.11 -W error -X warn_default_encoding -c "import platform; print(platform.uname().processor)"
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/usr/lib/python3.11/functools.py", line 1001, in __get__
val = self.func(instance)
^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/platform.py", line 792, in processor
return _unknown_as_blank(_Processor.get())
^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/platform.py", line 738, in get
return func() or ''
^^^^^^
File "/usr/lib/python3.11/platform.py", line 761, in from_subprocess
return subprocess.check_output(
^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/subprocess.py", line 466, in check_output
return run(*popenargs, stdout=PIPE, timeout=timeout, check=True,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/subprocess.py", line 548, in run
with Popen(*popenargs, **kwargs) as process:
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/subprocess.py", line 908, in __init__
self.encoding = encoding = _text_encoding()
^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/subprocess.py", line 372, in _text_encoding
warnings.warn("'encoding' argument not specified.",
EncodingWarning: 'encoding' argument not specified.
```
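A sketch of the remedy: name the encoding explicitly when calling `subprocess.check_output()`, which opts out of the locale-default path that triggers `EncodingWarning` under `-X warn_default_encoding` (utf-8 and the `echo` command are assumptions for illustration; the fix that landed in `platform.py` may pick a different policy):

```python
import subprocess

# Explicit encoding= means no implicit locale lookup, so no warning.
out = subprocess.check_output(['echo', 'hi'], encoding='utf-8')
```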
# Your environment
<!-- Include as many relevant details as possible about the environment you experienced the bug in -->
- CPython versions tested on:
- Operating system and architecture:
<!--
You can freely edit this text. Remove any lines you believe are unnecessary.
-->
<!-- gh-linked-prs -->
### Linked PRs
* gh-100751
* gh-101207
* gh-101644
<!-- /gh-linked-prs -->
| 6b3993c556eb3bb36d1754a17643cddd3f6ade92 | 01093b82037fbae83623581294a0f1cf5b4a44b0 |
python/cpython | python__cpython-100748 | # Some compiler macros use c instead of C to access the compiler
This is "harmless" at the moment because in all usage sites c is passed in for C, but it's wrong.
```diff
diff --git a/Python/compile.c b/Python/compile.c
index cbbdfb9e94..f2deada47a 100644
--- a/Python/compile.c
+++ b/Python/compile.c
@@ -1626,7 +1626,7 @@ cfg_builder_addop_j(cfg_builder *g, location loc,
#define ADDOP_IN_SCOPE(C, LOC, OP) { \
if (cfg_builder_addop_noarg(CFG_BUILDER(C), (OP), (LOC)) < 0) { \
- compiler_exit_scope(c); \
+ compiler_exit_scope(C); \
return -1; \
} \
}
@@ -1692,7 +1692,7 @@ cfg_builder_addop_j(cfg_builder *g, location loc,
#define VISIT_IN_SCOPE(C, TYPE, V) {\
if (compiler_visit_ ## TYPE((C), (V)) < 0) { \
- compiler_exit_scope(c); \
+ compiler_exit_scope(C); \
return ERROR; \
} \
}
@@ -1713,7 +1713,7 @@ cfg_builder_addop_j(cfg_builder *g, location loc,
for (_i = 0; _i < asdl_seq_LEN(seq); _i++) { \
TYPE ## _ty elt = (TYPE ## _ty)asdl_seq_GET(seq, _i); \
if (compiler_visit_ ## TYPE((C), elt) < 0) { \
- compiler_exit_scope(c); \
+ compiler_exit_scope(C); \
return ERROR; \
} \
} \
```
<!-- gh-linked-prs -->
### Linked PRs
* gh-100748
<!-- /gh-linked-prs -->
| 52017dbe1681a7cd4fe0e8d6fbbf81fd711a0506 | 15aecf8dd70f82eb507d74fae9662072a377bdc8 |
python/cpython | python__cpython-116713 | # Dict comprehensions are not tested at all in `test_named_expressions.py`
While working on https://github.com/python/cpython/pull/100581 I've noticed that we do not test `dict` comprehensions at all there.
I think that this needs to be improved.
I will add test case for them soon, PR is in the works.
<!-- gh-linked-prs -->
### Linked PRs
* gh-116713
* gh-116747
* gh-116748
<!-- /gh-linked-prs -->
| 25684e71310642ffd20b45eea9b5226a1fa809a5 | 7f418fb111dec325b5c9fe6f6e96076049322f02 |