| repo | instance_id | problem_statement | merge_commit | base_commit |
|---|---|---|---|---|
python/cpython | python__cpython-103939 | # stdlib sqlite3 executemany() does not support RETURNING statement
# Problem
`sqlite3.Connection.executemany()` does not support a `RETURNING` statement. All requests containing it fail with an exception:
`sqlite3.ProgrammingError: executemany() can only execute DML statements.`
A working shell example:
```
#!/bin/sh
rm -f ./test.db
sqlite3 ./test.db <<EOF
CREATE TABLE releases (
    component VARCHAR(64) NOT NULL,
    version VARCHAR(64) NOT NULL,
    os VARCHAR(64) NOT NULL,
    PRIMARY KEY (component, version, os));
INSERT INTO releases VALUES('server', '1.0.0', 'Unix'), ('server', '1.0.0', 'NT') RETURNING *;
EOF
```
Produces the expected output:
```
server|1.0.0|Unix
server|1.0.0|NT
```
However, a Python example doing a similar thing:
```
#!/bin/env python3.10
from pathlib import Path
import sqlite3
if __name__ == '__main__':
    test_file = Path.cwd() / 'test.db'
    test_file.unlink(missing_ok=True)
    connection = sqlite3.connect(test_file)
    connection.execute(
        'CREATE TABLE releases ('
        'component VARCHAR(64) NOT NULL, '
        'version VARCHAR(64) NOT NULL, '
        'os VARCHAR(64) NOT NULL, '
        'PRIMARY KEY (component, version, os));',
    )
    values = [
        ('server', '1.0.0', 'Unix'),
        ('server', '1.0.0', 'NT'),
    ]
    cursor = connection.executemany('INSERT INTO releases VALUES(?, ?, ?) RETURNING *;', values)
    print(cursor.fetchall())
```
generates an exception:
```
Traceback (most recent call last):
  File "/home/zentarim/py/test/./sql_many.py", line 20, in <module>
    cursor = connection.executemany('INSERT INTO releases VALUES(?, ?, ?) RETURNING *;', values)
sqlite3.ProgrammingError: executemany() can only execute DML statements.
```
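Until `executemany()` supports `RETURNING`, a workaround sketch is to issue one `execute()` per parameter set and collect the returned rows yourself (this assumes SQLite 3.35+, the first release with `RETURNING`):

```python
import sqlite3

values = [("server", "1.0.0", "Unix"), ("server", "1.0.0", "NT")]
rows = []
if sqlite3.sqlite_version_info >= (3, 35):  # RETURNING needs SQLite 3.35+
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE releases (component TEXT, version TEXT, os TEXT)")
    for params in values:
        # execute() accepts RETURNING, so loop instead of executemany()
        cur = conn.execute("INSERT INTO releases VALUES(?, ?, ?) RETURNING *", params)
        rows.extend(cur.fetchall())
    print(rows)
```

This is slower than a single `executemany()` call, but preserves the per-row results the `RETURNING` clause is meant to provide.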
# Environment
Ubuntu **22.04.1 LTS**
Kernel **6.0.0-1006-oem**
Python **3.10.6**
sqlite3 module version: **2.6.0**
sqlite3 package version: **3.37.2**
Not sure if it is a bug or an unimplemented feature.
Thanks in advance!
<!-- gh-linked-prs -->
### Linked PRs
* gh-103939
* gh-103966
<!-- /gh-linked-prs -->
| 30216b69a2fc716c7cfab842364a379cd6ffe458 | 52cedc5c10336f0bc199d28524491e7de05bd047 |
python/cpython | python__cpython-100009 | # Drop support for platforms without two's complement integer representation: require two's complement to build Python
Require [Two's complement](https://en.wikipedia.org/wiki/Two%27s_complement) to build Python.
Only very old machines (built in the 1960s?) like the [UNIVAC 1100/2200 series](https://en.wikipedia.org/wiki/UNIVAC_1100/2200_series) use [signed number representations](https://en.wikipedia.org/wiki/Signed_number_representations) other than two's complement. Nowadays, all CPUs use the two's complement representation, and the CPython code base already relies on that in many places (especially ``Objects/longobject.c``). For example, Python adds the [-fwrapv compiler flag](https://gcc.gnu.org/onlinedocs/gcc/Code-Gen-Options.html) for GCC and clang.
The *signed* parameter of ``int.from_bytes()`` and ``int.to_bytes()`` indicates whether two's complement is used to represent the integer.
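The connection is easy to demonstrate: with ``signed=True`` the byte representation is exactly the two's complement encoding.

```python
# int.to_bytes()/from_bytes() with signed=True use the two's-complement
# representation: -2 in 16 bits is 0xFFFE.
n = -2
b = n.to_bytes(2, "big", signed=True)
print(b.hex())                                # fffe
print(int.from_bytes(b, "big", signed=True))  # -2
print(int.from_bytes(b, "big", signed=False)) # 65534, the unsigned reading
```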
I propose to be more explicit on build requirements for integers:
* Two's complement
* No trap representation
* No padding bits
Mark Dickinson @mdickinson likes to repeat that CPython has these requirements :-)
* https://github.com/python/cpython/pull/27832#pullrequestreview-734243048
> I think the `-(x + 1)` pattern came from days when we worried too much about alternative integer representations allowed by the C spec **(sign-magnitude, ones' complement, two's complement with a trap representation, etc.)**. But since then other parts of the codebase have been happy to assume that all integer types are **two's complement, no padding bits, no trap representation**, so I think it's safe to do so here. (And your changes to the other bitwise operations already assume two's complement and no trap representation.)
* https://github.com/python/cpython/pull/17933#discussion_r674992923
> (...) But making all our usual assumptions about integer representation **(two's complement, no trap representation, no padding bits, etc.)**, `-INT_MIN-1` is the same as `INT_MAX` (...)
I created this issue while reviewing PR #99762, which proposes to generalize a micro-optimization relying on a cast from a signed integer to an unsigned integer: replace ``0 <= index && index < limit`` (``Py_ssize_t``) with ``(size_t)index < (size_t)limit``. I'm not sure that two's complement is strictly required for this micro-optimization, but it reminded me of this :-)
cc @mdickinson
<!-- gh-linked-prs -->
### Linked PRs
* gh-100009
* gh-100014
* gh-100045
<!-- /gh-linked-prs -->
| 5ea052bb0c8fa76867751046c89f69db5661ed4f | 038b151963d9d4a5f4c852544fb5b0402ffcb218 |
python/cpython | python__cpython-100006 | # Newly added test_cmd_line_script.test_script_as_dev_fd() fails on FreeBSD: /dev/fd/3 doesn't exist
I propose to skip test_cmd_line_script.test_script_as_dev_fd() on FreeBSD. Here is why.
cc @Jehops @emaste @koobs
The test added by PR #99768 fails on FreeBSD. test_cmd_line_script.test_script_as_dev_fd() fails with:
```
FAIL: test_script_as_dev_fd (test.test_cmd_line_script.CmdLineTest.test_script_as_dev_fd)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/home/buildbot/python/3.x.koobs-freebsd-9e36.nondebug/build/Lib/test/test_cmd_line_script.py", line 766, in test_script_as_dev_fd
    self.assertEqual(out, b"12345678912345678912345\n")
AssertionError: b"/usr/home/buildbot/python/3.x.koobs-free[94 chars]ry\n" != b'12345678912345678912345\n'
Stderr:
/usr/home/buildbot/python/3.x.koobs-freebsd-9e36.nondebug/build/Lib/subprocess.py:849: RuntimeWarning: pass_fds overriding close_fds.
warnings.warn("pass_fds overriding close_fds.", RuntimeWarning)
```
Note: Failure first reported in issue #99985.
FreeBSD /dev/fd/ behaves differently from Linux /dev/fd/. I'm not sure why. If the parent opens a file and the file descriptor is inherited, the child process can still use the inherited descriptor, but /dev/fd/ only contains entries for file descriptors 0, 1, and 2. Example:
parent.py:
```
import subprocess
import sys
script = 'print("Hello")'
script_name = 'script.py'
with open(script_name, 'w') as fp:
    fp.write(script)
with open(script_name, "r") as fp:
    fd = fp.fileno()
    print("FD", fd)
    cmd = [sys.executable]
    p = subprocess.Popen(cmd, close_fds=False, pass_fds=(0, 1, 2, fd))
    p.wait()
```
Child process:
```
$ ./python y.py
FD 3
/usr/home/vstinner/python/main/Lib/subprocess.py:849: RuntimeWarning: pass_fds overriding close_fds.
warnings.warn("pass_fds overriding close_fds.", RuntimeWarning)
Python 3.12.0a2+ (heads/main:e3a3863cb9, Dec 5 2022, 12:09:39) [Clang 13.0.0 (git@github.com:llvm/llvm-project.git llvmorg-13.0.0-0-gd7b669b3a3 on freebsd13
Type "help", "copyright", "credits" or "license" for more information.
>>> import os
# The process has 4 file descriptors:
>>> fd=os.dup(0); os.close(fd)
>>> fd=os.dup(1); os.close(fd)
>>> fd=os.dup(2); os.close(fd)
>>> fd=os.dup(3); os.close(fd)
# But /dev/fd/3 doesn't exist
>>> os.listdir("/dev/fd")
['0', '1', '2']
>>> f=open("/dev/fd/3", "rb")
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
FileNotFoundError: [Errno 2] No such file or directory: '/dev/fd/3'
# Using directly file descriptor 3 works as expected:
>>> f=open(3, "rb")
>>> f.close()
```
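A minimal sketch of the proposed skip, using the usual `sys.platform` guard (the actual patch may phrase the condition or the reason differently):

```python
import sys
import unittest

class CmdLineTest(unittest.TestCase):
    # FreeBSD's /dev/fd only exposes descriptors 0-2 by default,
    # so a script passed as /dev/fd/3 cannot be opened there.
    @unittest.skipIf(sys.platform.startswith("freebsd"),
                     "/dev/fd is not functional on FreeBSD by default")
    def test_script_as_dev_fd(self):
        self.assertTrue(True)  # placeholder for the real test body
```

On FreeBSD the test is reported as skipped with the given reason; elsewhere it runs normally.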
<!-- gh-linked-prs -->
### Linked PRs
* gh-100006
* gh-100007
* gh-125109
<!-- /gh-linked-prs -->
| 038b151963d9d4a5f4c852544fb5b0402ffcb218 | e3a3863cb9561705d3dd59a9367427ed45dfb5ea |
python/cpython | python__cpython-100002 | # `python -m http.server` log messages to stderr can emit raw data
Problem: The `http.server` module lets some control characters from the request through; when they are emitted as-is in a log message to a terminal, they can be used to control the terminal or otherwise generate misleading output. `python -m http.server` is typically run within such a terminal.
Fix: The `http.server` default `log_message()` method needs to prevent printing of control characters.
Reported by David Leadbeater, G-Research on 2022-12-04
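A hypothetical sanitizer sketch (this is not the actual patch; the helper name and the escaping style are illustrative): replace C0/C1 control characters with their `\xNN` escapes before the message reaches the terminal.

```python
import re

# Match C0 controls, DEL, and C1 controls.
_control_char_re = re.compile(r"[\x00-\x1f\x7f-\x9f]")

def sanitize_log_message(message: str) -> str:
    """Escape control characters so they cannot drive the terminal."""
    return _control_char_re.sub(lambda m: "\\x%02x" % ord(m.group()), message)

# An ESC sequence that would clear the screen is rendered harmlessly:
print(sanitize_log_message("GET /\x1b[2J HTTP/1.1"))
```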
<!-- gh-linked-prs -->
### Linked PRs
* gh-100002
* gh-100031
* gh-100032
* gh-100033
* gh-100034
* gh-100035
* gh-100038
* gh-100040
* gh-100041
* gh-100042
* gh-100043
* gh-100044
<!-- /gh-linked-prs -->
| d8ab0a4dfa48f881b4ac9ab857d2e9de42f72828 | 530cc9dbb61df55b83f0219d2282980c9cb1cbd8 |
python/cpython | python__cpython-100198 | # Python 3.9.14: grammar/clarity improvements for str.encode, str.decode error-checking documentation
# Documentation
There are a couple of paragraphs about error-checking behaviour for `str.encode` and `str.decode` that appear in a few versions of Python and could potentially be improved in future versions.
As found in the [Python 3.9.14 documentation for `str.encode`](https://docs.python.org/3.9/library/stdtypes.html#str.encode):
> By default, the _errors_ argument is not checked for best performances, but only used at the first encoding error. Enable the [Python Development Mode](https://docs.python.org/3.9/library/devmode.html#devmode), or use a debug build to check _errors_.
The paragraph before that is fairly dense - there could be an opportunity to improve both of them.
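The behaviour those paragraphs describe can be shown concretely (on a release build without Python Development Mode; a debug build or `-X dev` validates the handler eagerly):

```python
import sys

# On a release build, a bogus error handler goes unnoticed as long as
# no encoding error actually occurs:
if not sys.flags.dev_mode:
    assert "abc".encode("utf-8", errors="no-such-handler") == b"abc"

# The handler is only looked up at the first error:
try:
    "caf\xe9".encode("ascii", errors="no-such-handler")
except LookupError as exc:
    print(exc)  # unknown error handler name
```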
<!-- gh-linked-prs -->
### Linked PRs
* gh-100198
* gh-100382
* gh-100383
<!-- /gh-linked-prs -->
| a2bb3b7f9d8d15c81b724726454d68357fb31d1c | c18d83118881333b9a0afd0add83afb2ba7300f7 |
python/cpython | python__cpython-100036 | # New warnings: `'function': conversion from 'int64_t' to 'int', possible loss of data`
# Bug report
GitHub is showing new warnings in `Modules/_xxsubinterpretersmodule.c`:
<img width="911" alt="Screenshot 2022-12-04 at 12.07.02" src="https://user-images.githubusercontent.com/4660275/205482677-87e8c7f3-0ffd-4a4e-ba27-4329e946421c.png">
> 'function': conversion from 'int64_t' to 'int', possible loss of data [D:\a\cpython\cpython\PCbuild\pythoncore.vcxproj]
Looks like https://github.com/python/cpython/pull/99940 is the cause.
Link: https://github.com/python/cpython/blame/bf26bdf6ac04878fc720e78422991aaedb9808a1/Modules/_xxsubinterpretersmodule.c#L2530
CC @ericsnowcurrently
<!-- gh-linked-prs -->
### Linked PRs
* gh-100036
<!-- /gh-linked-prs -->
| e9e63ad8653296c199446d6f7cdad889e492a34e | f49c735e525cf031ddbfc19161aafac4fb18837b |
python/cpython | python__cpython-99971 | # Possibly a missing parameter in the documentation of DocTestSuite in doctest
The documentation for **DocTestSuite** in **doctest** has the following signature.
> DocTestSuite(module=None, globs=None, extraglobs=None, test_finder=None, setUp=None, tearDown=None, checker=None)
But the documentation also talks about **optionflags**.
> Optional arguments setUp, tearDown, and **optionflags** are the same as for function DocFileSuite() above.
In my opinion, when _setUp_ and _tearDown_ are mentioned in the signature, **optionflags** should be in the signature too.
I've created a PR in case the issue is correct.
<!-- gh-linked-prs -->
### Linked PRs
* gh-99971
<!-- /gh-linked-prs -->
| e477348f36eff3f63ba509582622ea620fa9ae5b | a9bad4d28413666edc57551dd439bca6a6a59dd9 |
python/cpython | python__cpython-99958 | # Add `frozen_default` parameter on `dataclass_transform`
# Feature or enhancement
Add a new `frozen_default` parameter to [`dataclass_transform`](https://peps.python.org/pep-0681/#dataclass-transform-parameters) similar to the existing `eq_default` and `order_default` parameters.
# Pitch
- Frozen dataclasses are very popular when working with Jax (`flax.struct.dataclass` and `tjax.dataclass`), so being able to indicate that they are frozen by default would improve the user experience.
- `dataclass_transform` is currently supported by Pyright/Pylance and Pyre. Both teams are in favor of implementing this new parameter. Since `dataclass_transform` supports kwargs for experimentation/extensibility, Pyright was already able to [add support](https://github.com/microsoft/pyright/commit/bedc124c394f01bbdea79809953ceb0299d42a7b) for `frozen_default`.
- All feedback on typing-sig has been in favor of this enhancement.
# Previous discussion
Discussed in [typing-sig](https://mail.python.org/archives/list/typing-sig@python.org/thread/IKZULRE5UZVIN7B6IMFR2CXIF6RYIJ2O/).
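A sketch of the proposed parameter as it later shipped in Python 3.12 (on older versions, `typing_extensions` provides `dataclass_transform`; the decorator below is a runtime no-op whose signal is consumed by static type checkers):

```python
import sys

if sys.version_info >= (3, 12):
    from typing import dataclass_transform

    # Tell type checkers that classes produced by this decorator are
    # frozen unless they explicitly opt out.
    @dataclass_transform(frozen_default=True)
    def frozen_model(cls):
        return cls  # runtime no-op; type checkers read the metadata

    @frozen_model
    class Point:
        x: int
        y: int
```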
<!-- gh-linked-prs -->
### Linked PRs
* gh-99958
<!-- /gh-linked-prs -->
| 5c19050546e3e37a8889a0baa2954e1444e803d3 | bed15f87eadc726122185cf41efcdda289f4a7b1 |
python/cpython | python__cpython-99956 | # error handling in the compiler is inconsistent
In compile.c, some functions return 0 for failure and 1 for success, and some return -1 for failure and 0 for success.
They should consistently use -1 for failure and 0 for success.
<!-- gh-linked-prs -->
### Linked PRs
* gh-99956
* gh-100010
* gh-100215
* gh-101412
<!-- /gh-linked-prs -->
| ab02262cd0385a2fb5eb8a6ee3cedd4b4bb969f3 | b4f35055496d918b42ff305b5d09ebd333204a69 |
python/cpython | python__cpython-100630 | # sqlite3's support for "numeric" paramstyle does not appear to honor the actual numbers with positional parameters
We're attempting to get some test support for the "numeric" paramstyle, which, while unnecessary for sqlite3, is similar to the paramstyle used by the very widely used non-PEP-249 library asyncpg.
Anyway, I don't think sqlite3 is interpreting "numeric" correctly when the numbers are not ordered. If we consider numbers like ":3, :4, :2, :1" to just be more interesting-looking question marks (like "?, ?, ?, ?"), that's certainly easy, but it seems to defeat the purpose of "numeric" parameters, where we would assume the number refers to the position of an entry in the parameter list.
If this is indeed wrong and it's a bug (I'm going to ping the DB-API SIG list with this, to get their notion of intent), I fully expect that sqlite3 probably can't change things at this point, but I just want to understand what the intent of the "numeric" paramstyle is.
Demo below:
```py
import sqlite3
conn = sqlite3.connect(":memory:")
cursor = conn.cursor()
cursor.execute(
    """
    create table my_table(
        a varchar,
        b varchar,
        c varchar,
        d varchar,
        e varchar
    )
    """
)
cursor.execute(
    """
    insert into my_table(a, b, c, d, e) values ('a', 'b', 'c', 'd', 'e')
    """
)
cursor.execute(
    """
    select count(*) from my_table where a=? and b=? and c=? and d=? and e=?
    """,
    ("a", "b", "c", "d", "e"),
)
assert cursor.fetchone() == (1, )
cursor.execute(
    """
    select count(*) from my_table where a=:1 and b=:2 and c=:3 and d=:4 and e=:5
    """,
    ("a", "b", "c", "d", "e"),
)
assert cursor.fetchone() == (1, )
cursor.execute(
    """
    select count(*) from my_table where a=:3 and b=:4 and c=:1 and d=:5 and e=:2
    """,
    ("c", "e", "a", "b", "d")  # <--- fails
    # ("a", "b", "c", "d", "e"),  # <--- succeeds, which is wrong
    # {"3": "a", "4": "b", "1": "c", "2": "e", "5": "d"}  # <--- succeeds, but this is not "numeric" paramstyle
)
assert cursor.fetchone() == (1, )
```
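Whatever the intended semantics, supplying the parameters as a mapping keyed by the digits is unambiguous under either reading (a workaround sketch, not an endorsement of either interpretation):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# With a dict, each :N placeholder is bound by name, so the order in
# which placeholders appear in the statement no longer matters.
row = conn.execute(
    "select :3, :1, :2",
    {"1": "a", "2": "b", "3": "c"},
).fetchone()
print(row)  # ('c', 'a', 'b')
```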
<!-- gh-linked-prs -->
### Linked PRs
* gh-100630
* gh-100669
* gh-100670
<!-- /gh-linked-prs -->
| b7a68ab824249ebf053b8149ebb83cd8578781c9 | 2366b27565ab57f053b4a551ade0e71796a1896c |
python/cpython | python__cpython-100169 | # "a foreign function that will call a COM method" generated by `ctypes.WINFUNCTYPE` works in Python3.7, does not work as same in newer Python
# Bug report
I'm contributing on [enthought/comtypes](https://github.com/enthought/comtypes).
In `comtypes`, there is a [test for the behavior of Excel](https://github.com/enthought/comtypes/blob/0f3cf2b6d309d887eed92dd2b6d4393883ccbc69/comtypes/test/test_excel.py) that is currently skipped. If I comment out [the `unittest.skip` marker in that test](https://github.com/enthought/comtypes/blob/0f3cf2b6d309d887eed92dd2b6d4393883ccbc69/comtypes/test/test_excel.py#L118-L119), it works in Python 3.7 and fails in Python 3.11.
```
PS ...\comtypes> py -3.7 -m unittest comtypes.test.test_excel -vv
test (comtypes.test.test_excel.Test_EarlyBind) ... ok
test (comtypes.test.test_excel.Test_LateBind) ... ok
----------------------------------------------------------------------
Ran 2 tests in 10.576s
OK
PS ...\comtypes> py -3.7 -m clear_comtypes_cache -y # <- clear caches, required!
Removed directory "...\comtypes\comtypes\gen"
Removed directory "...\AppData\Roaming\Python\Python37\comtypes_cache"
PS ...\comtypes> py -3.11 -m unittest comtypes.test.test_excel -vv
test (comtypes.test.test_excel.Test_EarlyBind.test) ... FAIL
test (comtypes.test.test_excel.Test_LateBind.test) ... ok
======================================================================
FAIL: test (comtypes.test.test_excel.Test_EarlyBind.test)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "...\comtypes\comtypes\test\test_excel.py", line 62, in test
    self.assertEqual(xl.Range["A1:C3"].Value(),
AssertionError: Tuples differ: ((None, None, 10.0), ('x', 'y', 'z'), (3.0, 2.0, 1.0)) != ((10.0, 20.0, 31.4), ('x', 'y', 'z'), (3.0, 2.0, 1.0))
First differing element 0:
(None, None, 10.0)
(10.0, 20.0, 31.4)
- ((None, None, 10.0), ('x', 'y', 'z'), (3.0, 2.0, 1.0))
? ------------
+ ((10.0, 20.0, 31.4), ('x', 'y', 'z'), (3.0, 2.0, 1.0))
? ++++++++++++
----------------------------------------------------------------------
Ran 2 tests in 9.640s
FAILED (failures=1)
```
`xl.Range[...` calls [a prototype-function generated by `ctypes.WinFunctionType` and `ctypes.WINFUNCTYPE`](https://docs.python.org/3/library/ctypes.html#ctypes.WINFUNCTYPE).
This is also reported in enthought/comtypes#212 and the test fails in Python 3.8 as well.
[A strange callback behavior also occurs with simple COM libraries](https://github.com/enthought/comtypes/issues/212#issue-643130022).
Therefore, I think that this may not be caused by the Excel specification.
There may be other regressions in `ctypes` callbacks that have also been reported in #82929.
Also, is #97513 a possible solution to this problem?
Any opinions would be appreciated.
# Your environment
- CPython versions tested on: Python 3.7.1 and Python 3.11.0
- Windows 10
<!-- gh-linked-prs -->
### Linked PRs
* gh-100169
* gh-101339
* gh-101340
<!-- /gh-linked-prs -->
| dfad678d7024ab86d265d84ed45999e031a03691 | f5ad63f79af3a5876f90b409d0c8402fa54e878a |
python/cpython | python__cpython-99946 | # Chain import SystemError on unexpected exceptions
# Feature or enhancement
For functions, the `SystemError` raised for an unexpected in-flight exception chains the original exception; for some import errors this is not the case.
# Pitch
Chaining seems easy to add and helps debugging: it is otherwise tricky to get back the original exception, and knowing the exception is often sufficient.
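The requested behaviour is ordinary `raise ... from ...` chaining, sketched here in pure Python terms (the actual change lands in the import machinery's C code; the helper names below are illustrative):

```python
def load_thing(loader):
    try:
        return loader()
    except Exception as exc:
        # Chain the unexpected exception so it appears as __cause__
        # ("The above exception was the direct cause of ...").
        raise SystemError("loader failed unexpectedly") from exc

def bad_loader():
    raise KeyError("missing key")

try:
    load_thing(bad_loader)
except SystemError as err:
    print(repr(err.__cause__))  # KeyError('missing key')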
<!-- gh-linked-prs -->
### Linked PRs
* gh-99946
<!-- /gh-linked-prs -->
| 474220e3a58d739acc5154eb3e000461d2222d62 | a68e585c8b7b27323f67905868467ce0588a1dae |
python/cpython | python__cpython-100053 | # asyncio tcp transport on Windows reads bytearray instead of bytes
# Bug report
`asyncio.Protocol.data_received` prototype not respected: when using `create_connection` to create a tcp transport `data_received` is being called with a `bytearray` object instead of `bytes`.
If this is the expected behaviour, libraries like httpx should be warned or the prototype modified; however, I doubt it's intended, because it would assume the receiver does not keep a reference to the data, since otherwise the data could change while it is held in a buffer somewhere.
```python
import sys
print(f'Python: {sys.version}\n')
import asyncio
class MyProto(asyncio.Protocol):
    def data_received(self, data: bytes) -> None:
        print('@@@@@@@@@@ ', data)

    def eof_received(self):
        print('##########')

async def main():
    t, proto = await asyncio.get_running_loop().create_connection(MyProto, 'example.com', 80)
    t.write(b'SITE BE MAD\n\n')
    await asyncio.sleep(1)
    t.close()

asyncio.run(main())
```
Correct output: `@@@@@@@@@@ b'HTTP/1.0 ...WHATEVER THE SERVER ANSWERS...'`
Faulty output: `@@@@@@@@@@ bytearray(b'HTTP/1.0 ...WHATEVER THE SERVER ANSWERS...')`
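Until the prototype question is settled, a protocol can defend itself by normalizing the argument (a workaround sketch with an illustrative class name, not the eventual fix):

```python
class SafeProto:
    """Accept bytes or bytearray and always work on an immutable copy."""

    def data_received(self, data) -> None:
        data = bytes(data)  # no-op for bytes, copies a bytearray
        self._handle(data)

    def _handle(self, data: bytes) -> None:
        self.last = data

proto = SafeProto()
proto.data_received(bytearray(b"HTTP/1.0 200 OK"))
print(type(proto.last))  # <class 'bytes'>
```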
# Your environment
On Windows 11 x64
Tested on:
- 3.11.0 (main, Oct 24 2022, 18:26:48) [MSC v.1933 64 bit (AMD64)]
- 3.10.1 (tags/v3.10.1:2cd268a, Dec 6 2021, 19:10:37) [MSC v.1929 64 bit (AMD64)]
Working as expected in:
- 3.9.9 (tags/v3.9.9:ccb0e6a, Nov 15 2021, 18:08:50) [MSC v.1929 64 bit (AMD64)]
# Related problems
Link to the thread where I found a conflicting thing (it's being solved there as a workaround)
https://github.com/encode/httpx/discussions/2305#discussioncomment-4288553
**EDIT:** simplified the minimal example
<!-- gh-linked-prs -->
### Linked PRs
* gh-100053
<!-- /gh-linked-prs -->
| 1bb68ba6d9de6bb7f00aee11d135123163f15887 | d5f8a2b6ad408368e728a389da918cead3ef7ee9 |
python/cpython | python__cpython-99935 | # test_deterministic_sets fails on x86 (32 bit)
I am observing:
```
======================================================================
FAIL: test_deterministic_sets (test.test_marshal.BugsTestCase.test_deterministic_sets) [set([('Spam', 0), ('Spam', 1), ('Spam', 2)])]
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib/python3.11/test/test_marshal.py", line 368, in test_deterministic_sets
    self.assertNotEqual(repr_0, repr_1)
AssertionError: b"{('Spam', 1), ('Spam', 0), ('Spam', 2)}\n" == b"{('Spam', 1), ('Spam', 0), ('Spam', 2)}\n"
======================================================================
FAIL: test_deterministic_sets (test.test_marshal.BugsTestCase.test_deterministic_sets) [frozenset([('Spam', 0), ('Spam', 1), ('Spam', 2)])]
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib/python3.11/test/test_marshal.py", line 368, in test_deterministic_sets
    self.assertNotEqual(repr_0, repr_1)
AssertionError: b"frozenset({('Spam', 1), ('Spam', 0), ('Spam', 2)})\n" == b"frozenset({('Spam', 1), ('Spam', 0), ('Spam', 2)})\n"
```
<!-- gh-linked-prs -->
### Linked PRs
* gh-99935
* gh-99972
<!-- /gh-linked-prs -->
| c68573b339320409b038501fdd7d4f8a56766275 | ee6015650017ca145a48c345311a9c481949de71 |
python/cpython | python__cpython-99926 | # Inconsistent JSON serialization error messages
# Bug report
When you try to serialize a `NaN`, `inf` or `-inf` with `json.dumps(..., allow_nan=False)`, the error messages are inconsistent, depending on whether you use the `indent` argument or not.
```python
>>> json.dumps(float('nan'), allow_nan=False)
ValueError: Out of range float values are not JSON compliant
>>> json.dumps(float('nan'), allow_nan=False, indent=4)
ValueError: Out of range float values are not JSON compliant: nan
```
That is because if you don't use `indent`, the encoding is done in C code [here](https://github.com/python/cpython/blob/0563be23a557917228a8b48cbb31bda285a3a815/Modules/_json.c#L1321-L1327),
but if you use `indent`, the encoding is done in pure Python code [here](https://github.com/python/cpython/blob/0563be23a557917228a8b48cbb31bda285a3a815/Lib/json/encoder.py#L239-L242), and the error messages are different between the two.
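Both paths raise `ValueError`; only the wording differs (as of the reported version — a later fix may unify the messages, so this sketch only checks the exception type):

```python
import json

# With and without indent, a NaN under allow_nan=False must raise.
for kwargs in ({}, {"indent": 4}):
    try:
        json.dumps(float("nan"), allow_nan=False, **kwargs)
    except ValueError as exc:
        print(kwargs, "->", exc)
```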
# Your environment
- CPython versions tested on: 3.11.0
- Operating system and architecture: MacOS 12.6, Apple Silicon
<!-- gh-linked-prs -->
### Linked PRs
* gh-99926
<!-- /gh-linked-prs -->
| d98ca8172c39326bb200308a5191ceeb4a262d53 | a6331b605e8044a205a113e1db87d2b0a53d0222 |
python/cpython | python__cpython-99906 | # summarize_stats.py doesn't display misses in execution counts
# Bug report
The `summarize_stats.py` script doesn't display misses in execution counts.
You can see the example broken output here: https://github.com/faster-cpython/ideas/blob/main/stats/pystats-052bc12-2022-11-29.md
<!-- gh-linked-prs -->
### Linked PRs
* gh-99906
<!-- /gh-linked-prs -->
| bf94c653f4291ba2db506453e0e00a82fe06b70a | 131801d14dfc4f0b2b79103612c88e2e282ff158 |
python/cpython | python__cpython-99895 | # test_traceback: test_import_from_error_bad_suggestions_do_not_trigger_for_small_names() fails randomly on "wasm32-emscripten node (dynamic linking)" buildbot
test_traceback: test_import_from_error_bad_suggestions_do_not_trigger_for_small_names() fails randomly on the "wasm32-emscripten node (dynamic linking)" buildbot worker.
Example: https://buildbot.python.org/all/#/builders/1056/builds/928
```
FAIL: test_import_from_error_bad_suggestions_do_not_trigger_for_small_names (test.test_traceback.CPythonSuggestionFormattingTests.test_import_from_error_bad_suggestions_do_not_trigger_for_small_names) (name='b')
----------------------------------------------------------------------
Traceback (most recent call last):
File "/opt/buildbot/bcannon-wasm/3.x.bcannon-wasm.emscripten-node-dl/build/Lib/test/test_traceback.py", line 3173, in test_import_from_error_bad_suggestions_do_not_trigger_for_small_names
self.assertNotIn("mom", actual)
AssertionError: 'mom' unexpectedly found in "ImportError: cannot import name 'b' from 'IMwPxQRPpcEcmomu' (/tmp/tmpanzezi5a/IMwPxQRPpcEcmomu.py)"
```
cc @brettcannon @tiran @pablogsal
<!-- gh-linked-prs -->
### Linked PRs
* gh-99895
<!-- /gh-linked-prs -->
| 0563be23a557917228a8b48cbb31bda285a3a815 | f08e52ccb027f6f703302b8c1a82db9fd3934270 |
python/cpython | python__cpython-100011 | # test_unicodedata: test_normalization() fails randomly with IncompleteRead on PPC64LE Fedora buildbots
For a few weeks, test_unicodedata.test_normalization() fails randomly with IncompleteRead on PPC64LE Fedora buildbots.
IMO the test should be skipped on download error, rather than treating a download error as a test failure.
Example: https://buildbot.python.org/all/#/builders/33/builds/3031
```
FAIL: test_normalization (test.test_unicodedata.NormalizationTest.test_normalization)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/buildbot/buildarea/3.x.cstratak-fedora-rawhide-ppc64le.clang/build/Lib/http/client.py", line 591, in _read_chunked
    value.append(self._safe_read(chunk_left))
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/buildbot/buildarea/3.x.cstratak-fedora-rawhide-ppc64le.clang/build/Lib/http/client.py", line 632, in _safe_read
    raise IncompleteRead(data, amt-len(data))
http.client.IncompleteRead: IncompleteRead(8113 bytes read, 2337 more expected)

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/home/buildbot/buildarea/3.x.cstratak-fedora-rawhide-ppc64le.clang/build/Lib/test/test_unicodedata.py", line 362, in test_normalization
    testdata = open_urlresource(TESTDATAURL, encoding="utf-8",
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/buildbot/buildarea/3.x.cstratak-fedora-rawhide-ppc64le.clang/build/Lib/test/support/__init__.py", line 671, in open_urlresource
    s = f.read()
        ^^^^^^^^
  File "/home/buildbot/buildarea/3.x.cstratak-fedora-rawhide-ppc64le.clang/build/Lib/gzip.py", line 295, in read
    return self._buffer.read(size)
           ^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/buildbot/buildarea/3.x.cstratak-fedora-rawhide-ppc64le.clang/build/Lib/_compression.py", line 118, in readall
    while data := self.read(sys.maxsize):
                  ^^^^^^^^^^^^^^^^^^^^^^
  File "/home/buildbot/buildarea/3.x.cstratak-fedora-rawhide-ppc64le.clang/build/Lib/gzip.py", line 500, in read
    buf = self._fp.read(READ_BUFFER_SIZE)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/buildbot/buildarea/3.x.cstratak-fedora-rawhide-ppc64le.clang/build/Lib/gzip.py", line 90, in read
    return self.file.read(size)
           ^^^^^^^^^^^^^^^^^^^^
  File "/home/buildbot/buildarea/3.x.cstratak-fedora-rawhide-ppc64le.clang/build/Lib/http/client.py", line 459, in read
    return self._read_chunked(amt)
           ^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/buildbot/buildarea/3.x.cstratak-fedora-rawhide-ppc64le.clang/build/Lib/http/client.py", line 597, in _read_chunked
    raise IncompleteRead(b''.join(value)) from exc
http.client.IncompleteRead: IncompleteRead(106486 bytes read)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/buildbot/buildarea/3.x.cstratak-fedora-rawhide-ppc64le.clang/build/Lib/test/test_unicodedata.py", line 368, in test_normalization
    self.fail(f"Could not retrieve {TESTDATAURL}")
AssertionError: Could not retrieve http://www.pythontest.net/unicode/15.0.0/NormalizationTest.txt
```
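The proposed skip-on-download-error could look like this hypothetical helper (the helper name and `opener` callback are illustrative; note that `http.client.IncompleteRead` is an `HTTPException`, not an `OSError`, so both must be caught):

```python
import unittest
from http.client import HTTPException, IncompleteRead

def fetch_or_skip(url, opener):
    """Fetch test data, turning any download error into a skip
    instead of a test failure."""
    try:
        return opener(url)
    except (OSError, HTTPException) as exc:
        raise unittest.SkipTest(f"could not retrieve {url}: {exc}") from exc

def flaky_opener(url):
    # Simulate the buildbot failure: the transfer breaks mid-stream.
    raise IncompleteRead(b"", 2337)

try:
    fetch_or_skip("http://www.pythontest.net/unicode/15.0.0/NormalizationTest.txt",
                  flaky_opener)
except unittest.SkipTest as skip:
    print("skipped:", skip)
```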
<!-- gh-linked-prs -->
### Linked PRs
* gh-100011
* gh-100012
* gh-100013
<!-- /gh-linked-prs -->
| 2488c1e1b66366a3a933ff248eff080fabd2351c | 5ea052bb0c8fa76867751046c89f69db5661ed4f |
python/cpython | python__cpython-99893 | # Infinite recursion in the tokeniser when showing warnings
It turns out that showing warnings in the tokenizer is quite tricky: in the process of showing the warning we need to fetch the source encoding, which requires tokenizing the first two lines, and if the warning is triggered there, that leads to an infinite loop. Check for instance with a file containing:
```
0b1and x
```
Notice that this only happens when tokenizing files.
<!-- gh-linked-prs -->
### Linked PRs
* gh-99893
* gh-99896
<!-- /gh-linked-prs -->
| 417206a05c4545bde96c2bbbea92b53e6cac0d48 | 19c38801ba2f42a220adece5d5f12e833b41822a |
python/cpython | python__cpython-104096 | # Directory traversal in uu module / uu.decode
# Bug report
The function uu.decode is vulnerable to trivial directory traversal if no output filename is given. An uu-encoded file with a path starting with a repetition of ../../ or a / allows writing a file to an arbitrary location on the filesystem.
I reported this to security@python.org and was asked to report it publicly as the function is rarely used and removal is planned anyway for Python 3.13.
# Your environment
CPython versions tested on: 3.10.8
Operating system and architecture: Linux
# example files
Case 1:
```
begin 644 ../../../../../../../../tmp/test1
$86)C"@``
`
end
```
Case 2:
```
begin 644 /tmp/test2
$86)C"@``
`
end
```
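A hardening sketch (hypothetical helper, not the actual patch): strip any directory components from the embedded filename before joining it to a destination directory, so both case 1 and case 2 are confined.

```python
import os

def safe_decode_name(name: str, dest_dir: str) -> str:
    """Confine a uu-header filename to dest_dir by dropping any path."""
    base = os.path.basename(name.replace("\\", "/"))
    if not base or base in (".", ".."):
        raise ValueError(f"unsafe filename in uu header: {name!r}")
    return os.path.join(dest_dir, base)

print(safe_decode_name("../../../../../../../../tmp/test1", "/srv/out"))
print(safe_decode_name("/tmp/test2", "/srv/out"))
```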
<!-- gh-linked-prs -->
### Linked PRs
* gh-104096
* gh-104329
* gh-104330
* gh-104331
* gh-104332
* gh-104333
<!-- /gh-linked-prs -->
| 0aeda297931820436a50b78f4f7f0597274b5df4 | afe7703744f813adb15719642444b5fd35888d86 |
python/cpython | python__cpython-121481 | # Document (undefined) rounding behaviour of new-style formatting
# Documentation
Current [documentation of new-style formatting](https://docs.python.org/3/reference/lexical_analysis.html#f-strings) does not talk about rounding, nor about the fact that the rounding behaviour is undefined / platform-dependent. Popular platforms seem to round to the nearest number of the selected precision, with ties rounded to the number with an even last digit, e.g. `'{:.2f}'.format(1/8)` produces `0.12`. This choice, as well as the dependence on the platform, will be unexpected to many users.
While ties are rare for random numbers, ties can be frequent in some applications, e.g. when reporting a percentage where the total is a fairly small multiple (greater than 1) of a power of 10 and the number of digits selected for printing covers increments of 1 divided by that power of 10, e.g. `n = 80000` and `'{:.2f}%'.format(100*count/n)`.
The documentation should draw attention to the undefined rounding behaviour and to that there are a number of competing popular choices. A link to the list of [rounding behaviours supported by the decimal module](https://docs.python.org/3/library/decimal.html#rounding-modes) may also help.
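The behaviour in question, plus the explicit alternative via `decimal` (0.125 is exactly representable in binary, so it really is a tie; `decimal` lets the caller pick the rounding mode):

```python
from decimal import Decimal, ROUND_HALF_UP

# Float formatting on CPython rounds ties to the digit with an even
# last digit ("banker's rounding"):
print(f"{0.125:.2f}")  # 0.12, not 0.13

# The decimal module makes the rounding mode an explicit choice:
print(Decimal("0.125").quantize(Decimal("0.01"), rounding=ROUND_HALF_UP))  # 0.13
```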
Based on discussion in issue #99534.
Similar: issue #99875
<!-- gh-linked-prs -->
### Linked PRs
* gh-121481
* gh-126334
* gh-126335
<!-- /gh-linked-prs -->
| 7d7d56d8b1147a6b85e1c09d01b164df7c5c4942 | 868bfcc02ed42a1042851830b79c6877b7f1c7a8 |
python/cpython | python__cpython-99877 | # compiler optimizations violate oparg invariants
During code-gen, the compiler makes sure that any opcode that does not have HAVE_ARG gets an oparg value of 0.
Then, during optimizations some instructions become NOPs but their oparg is not set to 0, so when we come to emit code we need to check HAS_ARG again. It would be better to preserve the invariant in the optimizer instead.
<!-- gh-linked-prs -->
### Linked PRs
* gh-99877
<!-- /gh-linked-prs -->
| 18a6967544795cdcce45b45700b7a9ed3994b8fb | a694b8222e8b0683682958222699953379fd2d48 |
python/cpython | python__cpython-99846 | # PEP 670: Convert _PyObject_SIZE() and _PyObject_VAR_SIZE() macros to functions
The _PyObject_SIZE() and _PyObject_VAR_SIZE() macros should be converted to functions: see [PEP 670](https://peps.python.org/pep-0670/) for the rationale.
My problem is that I don't know if the return type should be signed (Py_ssize_t) or unsigned (size_t).
CPython usage of _PyObject_SIZE():
* Signed: 18. Implementation of ``__sizeof__()`` methods.
* Unsigned: 0
* Implicit cast to unsigned: 2. Calls ``PyObject_Malloc(_PyObject_SIZE(tp))`` and ``gc_alloc(_PyObject_SIZE(tp), presize)`` where the first argument type is ``size_t``.
CPython usage of _PyObject_VAR_SIZE():
* Signed: 5
* Unsigned: 1
* Implicit cast to unsigned: 1. Call ``_PyDebugAllocatorStats(..., _PyObject_VAR_SIZE(&PyTuple_Type, len))`` where the argument type is ``size_t``.
To get a **container length**, the C API uses signed type (``Py_ssize_t``): PyList_Size(), PyDict_Size(), Py_SIZE(), etc.
To **allocate memory**, the C API prefers unsigned type (``size_t``): PyMem_Malloc(), PyObject_Realloc(), etc.
Python allocator functions reject size greater than ``PY_SSIZE_T_MAX``:
```
void *
PyMem_RawMalloc(size_t size)
{
/*
* Limit ourselves to PY_SSIZE_T_MAX bytes to prevent security holes.
* Most python internals blindly use a signed Py_ssize_t to track
* things without checking for overflows or negatives.
* As size_t is unsigned, checking for size < 0 is not required.
*/
if (size > (size_t)PY_SSIZE_T_MAX)
return NULL;
return _PyMem_Raw.malloc(_PyMem_Raw.ctx, size);
}
```
Some "sizeof" functions freely mix signed and unsigned types. Example:
```
static PyObject *
deque_sizeof(dequeobject *deque, void *unused)
{
Py_ssize_t res;
Py_ssize_t blocks;
res = _PyObject_SIZE(Py_TYPE(deque));
blocks = (size_t)(deque->leftindex + Py_SIZE(deque) + BLOCKLEN - 1) / BLOCKLEN;
assert(deque->leftindex + Py_SIZE(deque) - 1 ==
(blocks - 1) * BLOCKLEN + deque->rightindex);
res += blocks * sizeof(block);
return PyLong_FromSsize_t(res);
}
```
``blocks`` and ``sizeof(block)`` are unsigned, but ``res`` is signed.
---
Another problem is that _PyObject_VAR_SIZE() has an undefined behavior on integer overflow. Maybe it should return ``SIZE_MAX`` on overflow, to make sure that Python allocator functions fail (return ``NULL``)?
<!-- gh-linked-prs -->
### Linked PRs
* gh-99846
* gh-99847
* gh-99848
* gh-99850
* gh-99903
* gh-99922
* gh-99924
<!-- /gh-linked-prs -->
| 85dd6cb6df996b1197266d1a50ecc9187a91e481 | 18a6967544795cdcce45b45700b7a9ed3994b8fb |
python/cpython | python__cpython-104444 | # idlelib/NEWS.txt for 3.12.0 and backports
Main became 3.12 as of 3.11.0 beta 1: 2022-05-08
However, idlelib/NEWS.txt items continued going under
What's New in IDLE 3.11.0 on both main and 3.11
until 3.11.0rc1, 2022-08-08.
Subsequent news items go under
What's New in IDLE 3.12.0 (new header) on main branch
What's New in IDLE 3.11.z (new header) on 3.11 branch
What's New in IDLE 3.10.z (old header) on 3.10 branch
In other words, idlelib News is handled as if main were branched off as of .0rc1.
This is different from the changelog attached to What's New in 3.x.
Release peps -- needed for proposed and actual release dates.
3.10 PEP-619 https://peps.python.org/pep-0619/
3.11 PEP-664 https://peps.python.org/pep-0664/
3.12 PEP-693 https://peps.python.org/pep-0693/
<!-- gh-linked-prs -->
### Linked PRs
* gh-104444
* gh-104445
<!-- /gh-linked-prs -->
| 57139a6b5f0cfa04156d5c650026012a7c5a7aad | 563c7dcba0ea1070698b77129628e9e1c86d34e2 |
python/cpython | python__cpython-101307 | # Consider upgrading bundled Tk to 8.6.13
Tcl/Tk 8.6.13 with many bugfixes was released a week ago. I think it's worth trying it out in the next Python 3.12 alphas and betas.
<!-- gh-linked-prs -->
### Linked PRs
* gh-101307
* gh-104738
* gh-110710
<!-- /gh-linked-prs -->
| 8d18d1ffd52eb3917c4566b09596d596116a5532 | 9f2c479eaf7d922746ef2f3c85b5c781757686b1 |
python/cpython | python__cpython-100302 | # asyncio: Document return values of AbstractEventLoop.remove_{reader,writer}
# Documentation
According to https://github.com/python/typeshed/pull/7042, `AbstractEventLoop.remove_{reader,writer}` return a bool, but there is no indication of this in the [documentation for these methods](https://docs.python.org/3/library/asyncio-eventloop.html#watching-file-descriptors). These return values need to be documented so that subclass implementors can know how to implement them appropriately.
<!-- gh-linked-prs -->
### Linked PRs
* gh-100302
* gh-100303
* gh-100304
<!-- /gh-linked-prs -->
| 5234e1cbea686e38392f113707db322ad8405048 | f23236a92d8796ae91772adaf27c3485fda963e8 |
python/cpython | python__cpython-99825 | # Document sqlite3.connect() as implicitly opening transactions in the new PEP-249 manual commit mode
# Documentation
In `sqlite3`, @malemburg [specified](https://github.com/python/cpython/issues/83638#issuecomment-1093853924) that `connect()`, `commit()`, and `rollback()` implicitly open transactions in the new PEP-249 manual commit mode implemented in PR #93823:
> If this is set to False, the module could then implement the
> correct way of handling transactions, which means:
>
> a) start a new transaction when the connection is opened
> b) start a new transaction after .commit() and .rollback()
> c) don't start new transactions anywhere else
> d) run an implicit .rollback() when the connection closes
So I expect to see that information clearly documented.
Yet the PR documented it only for `commit()` and `rollback()` (item b), not for `connect()` (item a):
```
* :mod:`!sqlite3` ensures that a transaction is always open,
so :meth:`Connection.commit` and :meth:`Connection.rollback`
will implicitly open a new transaction immediately after closing
the pending one.
```
To me it is important to document this also for `connect()` rather than relying on the user's deduction, which is not obvious at all: for instance, in *legacy* manual commit mode, `sqlite3` does *not* implicitly open a transaction when `connect()` is called, but when `execute()` is called on a DML statement.
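The legacy behaviour described above can be observed directly via `Connection.in_transaction`. A minimal sketch, assuming an in-memory database and the default (legacy) transaction control:

```python
import sqlite3

conn = sqlite3.connect(":memory:")   # default: legacy transaction control
states = [conn.in_transaction]       # False: connect() opens no transaction
conn.execute("CREATE TABLE t (x INTEGER)")
states.append(conn.in_transaction)   # False: DDL is autocommitted in legacy mode
conn.execute("INSERT INTO t VALUES (1)")
states.append(conn.in_transaction)   # True: the DML statement opened one implicitly
print(states)  # [False, False, True]
conn.commit()
```

This is exactly the deduction gap: nothing here hints that the new manual commit mode would instead open the transaction at `connect()` time.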
<!-- gh-linked-prs -->
### Linked PRs
* gh-99825
<!-- /gh-linked-prs -->
| 19c38801ba2f42a220adece5d5f12e833b41822a | fe17d353134748dc772f8743ceadc2dd9e0db187 |
python/cpython | python__cpython-101589 | # zipfile.Path is not Path-like
This is about [`zipfile.Path`](https://github.com/python/cpython/blob/8bb7fdaee8c19f0311f15dbea7f8eee80a67a50f/Lib/zipfile.py#L2290). The doc says it is compatible with `pathlib.Path`. But it seems that is not 100% true, because it doesn't derive from `pathlib.PurePath` and cannot be treated as _Path-like_ in all situations.
I was redirected from `pandas` where I opened an [issue](https://github.com/pandas-dev/pandas/issues/49906) about the fact that `pandas.read_excel()` does accept path-like objects but not `zipfile.Path`. It was [explained to me](https://github.com/pandas-dev/pandas/issues/49906#issuecomment-1328072136) that `zipfile.Path` doesn't implement `__fspath__` and that is the problem.
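A self-contained reproduction of the mismatch (the archive and member names are made up for illustration):

```python
import os
import tempfile
import zipfile

# Build a tiny archive with one member.
d = tempfile.mkdtemp()
archive = os.path.join(d, "t.zip")
with zipfile.ZipFile(archive, "w") as zf:
    zf.writestr("a.txt", "data")

p = zipfile.Path(archive, "a.txt")
print(p.read_text())               # the pathlib-like API works: 'data'
print(isinstance(p, os.PathLike))  # False at the time of writing: no __fspath__
```

Since `os.PathLike` recognises any class implementing `__fspath__`, the missing method is what makes libraries like pandas reject the object.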
<!-- gh-linked-prs -->
### Linked PRs
* gh-101589
* gh-102266
* gh-102267
<!-- /gh-linked-prs -->
| 84181c14040ed2befe7f1a55b4f560c80fa61154 | 59e86caca812fc993c5eb7dc8ccd1508ffccba86 |
python/cpython | python__cpython-21104 | # inspect._signature_fromstr has unused code
The signature parsing in inspect has some code that checks whether parse_name returns the sentinel "invalid", but parse_name never returns this value. So this is dead code.
<!-- gh-linked-prs -->
### Linked PRs
* gh-21104
<!-- /gh-linked-prs -->
| ac115b51e71c24374682e2a9e6663f99d2faf000 | d08fb257698e3475d6f69bb808211d39e344e5b2 |
python/cpython | python__cpython-99812 | # logging.StringTemplateStyle's usesTime method using wrong variable to search for asctime
<!--
If you're new to Python and you're not sure whether what you're experiencing is a bug, the CPython issue tracker is not
the right place to seek help. Consider the following options instead:
- reading the Python tutorial: https://docs.python.org/3/tutorial/
- posting in the "Users" category on discuss.python.org: https://discuss.python.org/c/users/7
- emailing the Python-list mailing list: https://mail.python.org/mailman/listinfo/python-list
- searching our issue tracker (https://github.com/python/cpython/issues) to see if
your problem has already been reported
-->
# Bug report
See [`usesTime` method of `StringTemplateStyle` class in `logging` module](https://github.com/python/cpython/blob/22860dbbc8b53954055847d2bb036af68b4ea409/Lib/logging/__init__.py#L514). It's passing **`self.asctime_format`** to `fmt.find` but I believe it should be passing **`self.asctime_search`**. The two variables have the same value so it doesn't currently result in broken behavior, but this is a latent bug that could reveal itself if either variable's value changes. I've got a PR ready, I'll submit it shortly after this.
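The latent nature of the bug can be demonstrated: both attributes currently hold the same value, so `usesTime()` gives the right answer either way (the format strings here are just examples):

```python
import logging

style = logging.StringTemplateStyle("${asctime} ${message}")
# The two attributes currently hold the same value, which masks the bug:
print(style.asctime_format)  # '${asctime}'
print(style.asctime_search)  # '${asctime}'
print(style.usesTime())      # True -- correct today with either attribute
print(logging.StringTemplateStyle("${message}").usesTime())  # False
```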
<!-- gh-linked-prs -->
### Linked PRs
* gh-99812
* gh-99851
* gh-99852
<!-- /gh-linked-prs -->
| 1d1bb95abdcafe92c771fb3dc4722351b032cc24 | ca3e611b1f620eabb657ef08a95d5f5f554ea773 |
python/cpython | python__cpython-99796 | # Possible typo in the documentation of importlib.resources.abc
The documentation for importlib.resources.abc.TraversableResources at the end of the _importlib.resources.abc_ documentation says:
> ... Therefore, any loader supplying importlib.abc.**TraversableReader** also supplies ResourceReader.
But importlib.abc.**TraversableReader** isn't exposed in the module. In my opinion, the author of the documentation meant **TraversableResources**, considering the fact that the preceding sentence talks about **TraversableResources** subclassing ResourceReader.
> Subclasses importlib.resources.abc.ResourceReader and provides concrete implementations of the importlib.resources.abc.ResourceReader’s abstract methods
I've created the PR in case the issue is correct.
<!-- gh-linked-prs -->
### Linked PRs
* gh-99796
* gh-99799
* gh-99800
<!-- /gh-linked-prs -->
| 5f8898216e7b67b7de6b0b1aad9277e88bcebfdb | 003f341e99234cf6088341e746ffef15e12ccda2 |
python/cpython | python__cpython-99771 | # Make the correct `call` specialization fail kind show up
The `SPEC_FAIL_KIND` is not displayed correctly due to not being adequately maintained.
e.g
https://github.com/python/cpython/blob/b1dcdefc3abf496a3e37e12b85dd9959f5b70341/Python/specialize.c#L1471-L1482
According to the context, the `METH_FASTCALL | METH_KEYWORDS` flag does not cause specialization failure.
However, the `method descr` fail kind will also be shown as `SPEC_FAIL_CALL_PYCFUNCTION`, because the `builtin_call_fail_kind` function is called incorrectly in the `specialize_method_descriptor` function.
I'd like to submit a PR to fix them and make them display correctly.
<!-- gh-linked-prs -->
### Linked PRs
* gh-99771
<!-- /gh-linked-prs -->
| a02161286a67692758cac38d9dbe42625c211605 | 2b82c36f17ada471e734c3ad93e6eff8b36a5ad9 |
python/cpython | python__cpython-99928 | # DOC: tp_watch was added to PyTypeObject but is not documented
# Documentation
PR #97875 extended the `PyTypeObject` structure with a new `tp_watch` field. It should be documented in the [tp slots](https://docs.python.org/3.12/c-api/typeobj.html#tp-slots) section, the [PyTypeObject struct](https://docs.python.org/3.12/c-api/typeobj.html#pytypeobject-definition) definition, and the [PyTypeObject slots](https://docs.python.org/3.12/c-api/typeobj.html#pytypeobject-slots) sections of typeobject.rst
<!-- gh-linked-prs -->
### Linked PRs
* gh-99928
* gh-100271
<!-- /gh-linked-prs -->
| b7e4f1d97c6e784d2dee182d2b81541ddcff5751 | 48e352a2410b6e962d40359939a0d43aaba5ece9 |
python/cpython | python__cpython-99742 | # Implement Multi-Phase Init for _xxsubinterpreters
See PEP 630 and PEP 687. This is an internal test module so the bar isn't as high as for regular stdlib modules.
<!-- gh-linked-prs -->
### Linked PRs
* gh-99742
* gh-99939
* gh-99940
* gh-100036
<!-- /gh-linked-prs -->
| 530cc9dbb61df55b83f0219d2282980c9cb1cbd8 | 51ee0a29e9b20c3e4a94a675e73a894ee2fe447b |
python/cpython | python__cpython-99736 | # Handle no arguments when using sub-commands in argparse
# Documentation
The example that shows the use of sub-commands and set_defaults to dispatch a function for each sub-command fails when there are no command-line arguments:
`AttributeError: 'Namespace' object has no attribute 'func'`
A pull request is provided that adds one line to handle that case and an explanation in the documentation.
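A minimal reproduction (the parser and sub-command names here are illustrative, and the `hasattr` guard is one possible way to handle the case, not necessarily the wording of the linked PR):

```python
import argparse

parser = argparse.ArgumentParser()
subparsers = parser.add_subparsers()
foo = subparsers.add_parser("foo")
foo.set_defaults(func=lambda args: print("running foo"))

args = parser.parse_args([])  # no sub-command on the command line
print(hasattr(args, "func"))  # False: set_defaults never ran

# One possible guard before dispatching:
if hasattr(args, "func"):
    args.func(args)
else:
    parser.print_usage()
```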
<!-- gh-linked-prs -->
### Linked PRs
* gh-99736
* gh-100927
* gh-102036
* gh-102037
<!-- /gh-linked-prs -->
| e8bedeb45b134e7ae033560353ba064738170cd3 | 2c178253bd1f78545d412670c59060dc7c676f8c |
python/cpython | python__cpython-99731 | # HEAD requests should be HEAD requests upon redirect
# Bug report
Currently the following is `False`
```python
from urllib.request import Request, urlopen
len(urlopen(Request("http://google.com", method="HEAD")).read()) == 0 # False
```
But this is `True`
```python
len(urlopen(Request("http://www.google.com", method="HEAD")).read()) == 0 # True
```
This is because `http://google.com` redirects with 302 to `http://www.google.com`.
This means that checking for existence of some file by URL will actually download the file when the URL responds with a redirect. This makes no sense. Also the HTTP spec says nothing about changing HEAD requests into GET requests; it just says that everything but GET and HEAD requests should require user interaction on redirect, which Python violates, but there's a comment on that explaining it's an active choice to violate the spec there.
To me it seems like this is an oversight. Note that `curl -LI http://google.com` also sticks to HEAD requests.
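The method change can be observed without any network traffic by calling the redirect handler directly (the example.com URLs are placeholders):

```python
from urllib.request import HTTPRedirectHandler, Request

orig = Request("http://example.com", method="HEAD")
handler = HTTPRedirectHandler()
new = handler.redirect_request(
    orig, None, 302, "Found", {}, "http://www.example.com")
# Before the fix this prints 'GET'; afterwards the redirected
# request keeps the original 'HEAD' method.
print(new.get_method())
```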
<!-- gh-linked-prs -->
### Linked PRs
* gh-99731
<!-- /gh-linked-prs -->
| 759e8e7ab83848c527a53d7b2051bc14ac7b7c76 | 49baa656cb994122869bc807a88ea2f3f0d7751b |
python/cpython | python__cpython-100030 | # Frame teardown can create frame objects
<!--
Use this template for hard crashes of the interpreter, segmentation faults, failed C-level assertions, and similar.
Do not submit this form if you encounter an exception being unexpectedly raised from a Python function.
Most of the time, these should be filed as bugs, rather than crashes.
The CPython interpreter is itself written in a different programming language, C.
For CPython, a "crash" is when Python itself fails, leading to a traceback in the C stack.
-->
# Crash report
using https://github.com/graingert/segfault-repro running `pytest` yields a segfault in about 1 in 3 runs
```
================================================================================================================================== test session starts ===================================================================================================================================
platform linux -- Python 3.11.0, pytest-7.2.0, pluggy-1.0.0
rootdir: /home/graingert/projects/segfault-repro, configfile: pyproject.toml
collected 1 item
test_ssltransport.py Fatal Python error: Segmentation fault
Current thread 0x00007f874b2df000 (most recent call first):
File "/home/graingert/.virtualenvs/segfault-repro/lib/python3.11/site-packages/_pytest/unraisableexception.py", line 43 in _hook
File "/home/graingert/.virtualenvs/segfault-repro/lib/python3.11/site-packages/_pytest/python.py", line 195 in pytest_pyfunc_call
File "/home/graingert/.virtualenvs/segfault-repro/lib/python3.11/site-packages/pluggy/_callers.py", line 39 in _multicall
File "/home/graingert/.virtualenvs/segfault-repro/lib/python3.11/site-packages/pluggy/_manager.py", line 80 in _hookexec
File "/home/graingert/.virtualenvs/segfault-repro/lib/python3.11/site-packages/pluggy/_hooks.py", line 265 in __call__
File "/home/graingert/.virtualenvs/segfault-repro/lib/python3.11/site-packages/_pytest/python.py", line 1789 in runtest
File "/home/graingert/.virtualenvs/segfault-repro/lib/python3.11/site-packages/_pytest/runner.py", line 167 in pytest_runtest_call
File "/home/graingert/.virtualenvs/segfault-repro/lib/python3.11/site-packages/pluggy/_callers.py", line 39 in _multicall
File "/home/graingert/.virtualenvs/segfault-repro/lib/python3.11/site-packages/pluggy/_manager.py", line 80 in _hookexec
File "/home/graingert/.virtualenvs/segfault-repro/lib/python3.11/site-packages/pluggy/_hooks.py", line 265 in __call__
File "/home/graingert/.virtualenvs/segfault-repro/lib/python3.11/site-packages/_pytest/runner.py", line 260 in <lambda>
File "/home/graingert/.virtualenvs/segfault-repro/lib/python3.11/site-packages/_pytest/runner.py", line 339 in from_call
File "/home/graingert/.virtualenvs/segfault-repro/lib/python3.11/site-packages/_pytest/runner.py", line 259 in call_runtest_hook
File "/home/graingert/.virtualenvs/segfault-repro/lib/python3.11/site-packages/_pytest/runner.py", line 220 in call_and_report
File "/home/graingert/.virtualenvs/segfault-repro/lib/python3.11/site-packages/_pytest/runner.py", line 131 in runtestprotocol
File "/home/graingert/.virtualenvs/segfault-repro/lib/python3.11/site-packages/_pytest/runner.py", line 112 in pytest_runtest_protocol
File "/home/graingert/.virtualenvs/segfault-repro/lib/python3.11/site-packages/pluggy/_callers.py", line 39 in _multicall
File "/home/graingert/.virtualenvs/segfault-repro/lib/python3.11/site-packages/pluggy/_manager.py", line 80 in _hookexec
File "/home/graingert/.virtualenvs/segfault-repro/lib/python3.11/site-packages/pluggy/_hooks.py", line 265 in __call__
File "/home/graingert/.virtualenvs/segfault-repro/lib/python3.11/site-packages/_pytest/main.py", line 349 in pytest_runtestloop
File "/home/graingert/.virtualenvs/segfault-repro/lib/python3.11/site-packages/pluggy/_callers.py", line 39 in _multicall
File "/home/graingert/.virtualenvs/segfault-repro/lib/python3.11/site-packages/pluggy/_manager.py", line 80 in _hookexec
File "/home/graingert/.virtualenvs/segfault-repro/lib/python3.11/site-packages/pluggy/_hooks.py", line 265 in __call__
File "/home/graingert/.virtualenvs/segfault-repro/lib/python3.11/site-packages/_pytest/main.py", line 324 in _main
File "/home/graingert/.virtualenvs/segfault-repro/lib/python3.11/site-packages/_pytest/main.py", line 270 in wrap_session
File "/home/graingert/.virtualenvs/segfault-repro/lib/python3.11/site-packages/_pytest/main.py", line 317 in pytest_cmdline_main
File "/home/graingert/.virtualenvs/segfault-repro/lib/python3.11/site-packages/pluggy/_callers.py", line 39 in _multicall
File "/home/graingert/.virtualenvs/segfault-repro/lib/python3.11/site-packages/pluggy/_manager.py", line 80 in _hookexec
File "/home/graingert/.virtualenvs/segfault-repro/lib/python3.11/site-packages/pluggy/_hooks.py", line 265 in __call__
File "/home/graingert/.virtualenvs/segfault-repro/lib/python3.11/site-packages/_pytest/config/__init__.py", line 167 in main
File "/home/graingert/.virtualenvs/segfault-repro/lib/python3.11/site-packages/_pytest/config/__init__.py", line 190 in console_main
File "/home/graingert/.virtualenvs/segfault-repro/bin/pytest", line 8 in <module>
[1] 50315 segmentation fault (core dumped) pytest
```
# Error messages
Enter any relevant error message caused by the crash, including a core dump if there is one.
# Your environment
<!-- Include as many relevant details as possible about the environment you experienced the bug in -->
- CPython versions tested on: `Python 3.11.0 (main, Oct 24 2022, 19:56:13) [GCC 11.2.0] on linux`
- Operating system and architecture: `5.15.0-53-generic #59-Ubuntu SMP Mon Oct 17 18:53:30 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux`
<!--
You can freely edit this text. Remove any lines you believe are unnecessary.
-->
<!-- gh-linked-prs -->
### Linked PRs
* gh-100030
* gh-100047
<!-- /gh-linked-prs -->
| b72014c783e5698beb18ee1249597e510b8bcb5a | 85d5a7e8ef472a4a64e5de883cf313c111a8ec77 |
python/cpython | python__cpython-99750 | # Possible typo in the documentation of datetime
At the bottom of the page, in the section Technical Detail, the point 9 in the notes says:
> When used with the strptime() method, the leading zero is optional for formats %d, %m, %H, %I, %M, %S, **%J**, %U, %W, and %V.
But **%J** (the uppercase J) does not exist. Maybe **%j** (the lowercase J) was meant.
If it is the case, I'll be glad to create a PR.
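For reference, `%j` (lowercase) parses as described in the note: it is the day of the year, and the leading zeros are indeed optional with `strptime()`:

```python
from datetime import datetime

# %j is the day of the year; '5' works without the leading zeros.
d = datetime.strptime("2022 5", "%Y %j")
print(d.month, d.day)  # 1 5  (day 5 of the year is January 5)
```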
<!-- gh-linked-prs -->
### Linked PRs
* gh-99750
* gh-100158
* gh-100159
<!-- /gh-linked-prs -->
| d5f8a2b6ad408368e728a389da918cead3ef7ee9 | e477348f36eff3f63ba509582622ea620fa9ae5b |
python/cpython | python__cpython-99732 | # 3.12: segmentation fault from compile() builtin
# Crash report
I can trigger a crash of the 3.12 interpreter using the following Python instruction:
```python
compile("assert (False if 1 else True)", "<string>", "exec")
```
# Error messages
The full output when running locally-built cpython with debug assertions avoids a segfault, triggering an assertion instead:
```python
$ ./python
Python 3.12.0a2+ (heads/main:f1a4a6a587, Nov 22 2022, 22:12:33) [GCC 9.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> compile("assert (False if 1 else True)", "<string>", "exec")
python: Python/compile.c:8703: remove_redundant_jumps: Assertion `no_empty_basic_blocks(g)' failed.
Aborted
```
# Your environment
I originally encountered the segmentation fault on macOS with 3.12.0a1.
I have reproduced it above with f1a4a6a587 on ubuntu (inside WSL).
<!-- gh-linked-prs -->
### Linked PRs
* gh-99732
<!-- /gh-linked-prs -->
| ae185fdcca9d48aef425468de8a8a31300280932 | 5f4ae86a639fb84260d622e31468da21dc468265 |
python/cpython | python__cpython-99707 | # 3.12 - PyASCIIObject state only 31 bits in size, should be 32
Looking at https://github.com/python/cpython/blob/d4cf192826b4c3bc91ac0de573a3a2d85760f1dd/Include/cpython/unicodeobject.h#L136-L138
I believe the bitfield is intended to have 32 bits; however, summing the fields gives 1 + 3 + 1 + 1 + 25 = 31.
This only appears to affect 3.12, as the `interned` field has reduced from 2 bits to 1 bit, and the `ready` bit has been removed, but the padding was only increased from 24 to 25.
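The arithmetic, spelled out (the field names are abbreviated from the linked header; the last entry stands for the unnamed padding bits):

```python
# Bit widths of the 3.12 PyASCIIObject state bitfield at the linked commit.
widths = {"interned": 1, "kind": 3, "compact": 1, "ascii": 1, "padding": 25}
print(sum(widths.values()))  # 31 -- one bit short of the intended 32
```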
<!-- gh-linked-prs -->
### Linked PRs
* gh-99707
<!-- /gh-linked-prs -->
| b4d54a332ed593c9fcd0da25684c622a251d03ce | c24397a1080fa496d4e860e3054592ecb3685052 |
python/cpython | python__cpython-99712 | # case error in test_unary.py
Hello all,
I was looking through the test code in CPython to use as test cases for my own Python interpreter.
While looking at the code, I got curious.
First is this function.
> https://github.com/python/cpython/blob/7e3f09cad9b783d8968aa79ff6a8ee57beb8b83e/Lib/test/test_unary.py#L23-L27
I think this function is not testing the invert operator.
Instead it is just copying the test cases of the negative operator.
Next is these functions.
> https://github.com/python/cpython/blob/7e3f09cad9b783d8968aa79ff6a8ee57beb8b83e/Lib/test/test_unary.py#L7-L13
> https://github.com/python/cpython/blob/7e3f09cad9b783d8968aa79ff6a8ee57beb8b83e/Lib/test/test_unary.py#L15-L21
I think these functions have duplicated test cases.
`self.assertTrue(-2 == 0 - 2)` tested twice in test_negative,
`self.assertEqual(+2, 2)` tested twice as well in test_positive.
I want to know if there is an intention that I'm not aware of.
Or if there is a problem, I think we need to fix the test code.
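For illustration only (this is a sketch of what a real invert test might look like, not the actual patch), a `test_invert` that exercises `~` rather than repeating the negation cases:

```python
import unittest

class UnaryOpTestCase(unittest.TestCase):
    def test_invert(self):
        # Exercise the ~ operator itself: ~x == -(x + 1) for ints.
        self.assertEqual(~2, -3)
        self.assertEqual(~0, -1)
        self.assertEqual(~-3, 2)
        self.assertEqual(~~2, 2)
```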
<!-- gh-linked-prs -->
### Linked PRs
* gh-99712
<!-- /gh-linked-prs -->
| 54289f85b2af1ecf046089ddf535dda1bdf6af24 | 868bab0fdc514cfa70ce97e484a689aee8cb5a36 |
python/cpython | python__cpython-99678 | # `inspect._getmembers` duplicates self type in `mro` for no good reason
This line https://github.com/python/cpython/blame/4d82f628c44490d6fbc3f6998d2473d1304d891f/Lib/inspect.py#L540 has this logic `mro = (object,) + getmro(object)`
I don't think this is correct:
1. `getmro` returns MRO including self type
2. We waste time on doing this
3. It might backfire at some moment, now it works because of how MRO is used later
Introduced in https://github.com/python/cpython/commit/86a8a9ae983b66ea218ccbb57d3e3a5cdf918e97
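The duplication is easy to see interactively, since `getmro()` already includes the class itself:

```python
from inspect import getmro

class A:
    pass

print(getmro(A))                   # (<class 'A'>, <class 'object'>): A is included
# so the expression from inspect.py duplicates object:
print((object,) + getmro(object))  # (<class 'object'>, <class 'object'>)
```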
<!-- gh-linked-prs -->
### Linked PRs
* gh-99678
<!-- /gh-linked-prs -->
| 2653b82c1a44371ad0da6b5a1101abbda4acd2d3 | ac115b51e71c24374682e2a9e6663f99d2faf000 |
python/cpython | python__cpython-99672 | # Possible typo in typing.TypeVarTuple docs
# Documentation
In the [section about the TypeVarTuple type](https://docs.python.org/3.11/library/typing.html#typing.TypeVarTuple), there is a bit of text that I find diffcult making sense of. My guess is that the author made a typo, but of course I could be wrong and maybe I am not getting the idea.
Where the text states:
> Type variable tuples must always be unpacked. This helps distinguish type variable **types** from normal type variables
I think it meant:
> Type variable tuples must always be unpacked. This helps distinguish type variable **tuples** from normal type variables
In case it is indeed a typo I'd be happy to open a PR fixing the issue.
<!-- gh-linked-prs -->
### Linked PRs
* gh-99672
* gh-99674
<!-- /gh-linked-prs -->
| 1bf983ce7eb8bfd17dc18102b61dfbdafe0deda2 | 2781ec9b0e41a62cecc189c22dfc849f9a56927c |
python/cpython | python__cpython-99660 | # sqlite3 bigmem test catches wrong exception
`Lib/test/test_sqlite3/test_types.py` has two bigmem tests:
- `test_too_large_string`; and
- `test_too_large_blob`.
Those are skipped unless `-M` is passed to the test runner so nobody was running those tests until [I set up a bigmem buildbot](https://buildbot.python.org/all/#/builders/1079). Running tests on the buildbot revealed two failures:
```
======================================================================
ERROR: test_too_large_blob (test.test_sqlite3.test_types.SqliteTypeTests.test_too_large_blob)
----------------------------------------------------------------------
Traceback (most recent call last):
File "R:\buildarea\3.x.ambv-bb-win11.bigmem\build\Lib\test\support\__init__.py", line 967, in wrapper
return f(self, maxsize)
^^^^^^^^^^^^^^^^
File "R:\buildarea\3.x.ambv-bb-win11.bigmem\build\Lib\test\test_sqlite3\test_types.py", line 121, in test_too_large_blob
self.cur.execute("insert into test(s) values (?)", (b'x'*(2**31-1),))
sqlite3.DataError: string or blob too big
Stdout:
... expected peak memory use: 6.0G
Stderr:
R:\buildarea\3.x.ambv-bb-win11.bigmem\build\Lib\test\support\__init__.py:910: RuntimeWarning: /proc not available for stats: [Errno 2] No such file or directory: '/proc/10708/statm'
warnings.warn('/proc not available for stats: {}'.format(e),
======================================================================
ERROR: test_too_large_string (test.test_sqlite3.test_types.SqliteTypeTests.test_too_large_string)
----------------------------------------------------------------------
Traceback (most recent call last):
File "R:\buildarea\3.x.ambv-bb-win11.bigmem\build\Lib\test\support\__init__.py", line 967, in wrapper
return f(self, maxsize)
^^^^^^^^^^^^^^^^
File "R:\buildarea\3.x.ambv-bb-win11.bigmem\build\Lib\test\test_sqlite3\test_types.py", line 110, in test_too_large_string
self.cur.execute("insert into test(s) values (?)", ('x'*(2**31-1),))
sqlite3.DataError: string or blob too big
Stdout:
... expected peak memory use: 8.0G
----------------------------------------------------------------------
```
The `with self.assertRaises()` in those tests should catch `sqlite.DataError` instead of the exceptions currently listed.
<!-- gh-linked-prs -->
### Linked PRs
* gh-99660
* gh-99666
<!-- /gh-linked-prs -->
| 2781ec9b0e41a62cecc189c22dfc849f9a56927c | 49e554dbafc87245c1364ae00ad064a96f5cb995 |
python/cpython | python__cpython-99653 | # argparse docs: "optional arguments" instead of "options"
Docs for [ArgumentParser.add_argument_group](https://docs.python.org/3/library/argparse.html#argparse.ArgumentParser.add_argument_group) say:
> By default, ArgumentParser groups command-line arguments into “positional arguments” and “optional arguments” when displaying help messages.
But "optional arguments" was renamed to just "options" in Python 3.10 (https://github.com/python/cpython/issues/53903).
<!-- gh-linked-prs -->
### Linked PRs
* gh-99653
* gh-99705
<!-- /gh-linked-prs -->
| f5fea2288620cb2fda24f3eecc9d623331fe4401 | d4cf192826b4c3bc91ac0de573a3a2d85760f1dd |
python/cpython | python__cpython-99646 | # All TestCase classes use a shared stack for class cleanup
`TestCase` class methods `addClassCleanup()` and `doClassCleanups()` are similar to instance methods `addCleanup()` and `doCleanups()`. `add*Cleanup()` adds a callback into a list, and `do*Cleanups()` pops them from a list and calls them. The main difference is that the instance methods use a list stored as an instance attribute, while the class methods use a list stored as a class attribute.
The problem is that the class attribute `TestCase._class_cleanups` is shared between all `TestCase` subclasses. It usually does not cause the problem, because tests in different test classes are run sequentially, but if you run a new test suite while running a test, and the outer test class use `addClassCleanup()`, the callback registered for the outer test class will be called when cleaning up the inner test class. It can happen when you test `unittest` itself. It really happened, and was unnoticed only because the outer class did not use `addClassCleanup()`.
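A sketch of the mechanics (the class names are made up; before the fix, the callback registered on one subclass sat on the shared `TestCase._class_cleanups` list, so running another class's cleanups could consume it):

```python
import unittest

calls = []

class Outer(unittest.TestCase):
    pass

class Inner(unittest.TestCase):
    pass

Outer.addClassCleanup(calls.append, "outer")
# Pre-fix, this pops from the shared TestCase list and runs Outer's
# callback even though we are cleaning up Inner:
Inner.doClassCleanups()
Outer.doClassCleanups()
print(calls)  # ['outer'] either way, but pre-fix the *Inner* run consumed it
```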
<!-- gh-linked-prs -->
### Linked PRs
* gh-99646
* gh-99698
* gh-99699
<!-- /gh-linked-prs -->
| c2102136be569e6fc8ed90181f229b46d07142f8 | d15b9f19ac0ffb29b646735d69b29f48a71c247f |
python/cpython | python__cpython-99621 | # BaseExceptionGroup.derive doesn't preserve __cause__ etc.
The [documentation](https://docs.python.org/3/library/exceptions.html#BaseExceptionGroup.derive) for `BaseExceptionGroup.derive` says "Returns an exception group with the same [message](https://docs.python.org/3/library/exceptions.html#BaseExceptionGroup.message), `__traceback__`, `__cause__`, `__context__` and `__notes__` but which wraps the exceptions in excs."
But it doesn't; it only preserves the message:
```
Python 3.11.0 (v3.11.0:deaf509e8f, Oct 24 2022, 14:43:23) [Clang 13.0.0 (clang-1300.0.29.30)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> eg = BaseExceptionGroup("", [ValueError("included")])
>>> eg.add_note("note")
>>> eg.__cause__ = ValueError("cause")
>>> eg.__context__ = ValueError("context")
>>> derived = eg.derive([ValueError("derive")])
>>> derived.__notes__
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: 'ExceptionGroup' object has no attribute '__notes__'. Did you mean: '__ne__'?
>>> derived.__cause__
>>> derived.__context__
>>> derived.message
''
```
The code for `BaseExceptionGroup.derive` only uses `self->msg` from the original object: https://github.com/python/cpython/blob/b0e1f9c241cd8f8c864d51059217f997d3b792bf/Objects/exceptions.c#L860
cc @iritkatriel @Zac-HD for exceptiongroups and notes.
<!-- gh-linked-prs -->
### Linked PRs
* gh-99621
* gh-99720
<!-- /gh-linked-prs -->
| 5d9183c7ad68eb9ddb53d54a3f9a27e29dbabf31 | 8f024a02d7d63315ecc3479f0715e927f48fc91b |
python/cpython | python__cpython-99613 | # PyUnicode_DecodeUTF8Stateful() does not set *consumed for ASCII-only string
`PyUnicode_DecodeUTF8Stateful()` should save the number of successfully decoded bytes in `*consumed`. But if all bytes are in the ASCII range, it uses a fast path and does not set `*consumed`.
It was found during writing coverage tests for Unicode C API (#99593).
<!-- gh-linked-prs -->
### Linked PRs
* gh-99613
* gh-107224
* gh-107230
* gh-107231
<!-- /gh-linked-prs -->
| f08e52ccb027f6f703302b8c1a82db9fd3934270 | d460c8ec52716a37080d31fdc0f673edcc98bee8 |
python/cpython | python__cpython-99583 | # Freezing zipimport into _bootstrap_python
# Feature or enhancement
Freezing the `zipimport` module into `_bootstrap_python`.
# Pitch
Currently, `_bootstrap_python` is used to freeze modules during the Python build. When running `_bootstrap_python`, stdlib files can be found in the source directory. However, when embedding CPython into another program, other build systems may be used to build CPython, possibly out of the source tree or even in a sandbox directory. In that situation, for `_bootstrap_python` to function, we need to distribute the stdlib into the build directory, and a zip file of the stdlib is the preferable way to do it. Unfortunately, `zipimport` is not frozen into `_bootstrap_python`, so we can't use a zipped stdlib.
If we freeze `zipimport` into `_bootstrap_python`, it will ease the build process a lot. It's a minor patch; we have tested it on Windows, Linux, and macOS. Admittedly, this change has no additional benefit for normal Python builds, but it doesn't seem to do any harm.
# Previous discussion
https://discuss.python.org/t/how-about-freezing-zipimport-into-bootstrap-python/21203
<!-- gh-linked-prs -->
### Linked PRs
* gh-99583
<!-- /gh-linked-prs -->
| 228c92eb5c126130316a32b44a0ce8f28cc5d544 | 7c0fb71fbfa8682f56c15832e2c793a6180f2ec0 |
python/cpython | python__cpython-99605 | # heap corruption while parsing huge comment
# Crash report
A very large comment in [heapcrpt.py](https://github.com/python/cpython/files/10042900/heapcrpt.zip) causes `tokenizer.c` to perform an illegal write, leading to heap corruption and crashing the interpreter
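A hypothetical minimal reproducer sketch (the exact trigger size is an assumption; the attached file is the authoritative one). On affected versions this corrupts the heap; on fixed versions it compiles cleanly:

```python
# Compile a source file whose single comment line is far larger than the
# tokenizer's buffer.
src = "# " + "A" * (1 << 21) + "\nx = 1\n"
code = compile(src, "<huge-comment>", "exec")
print(type(code).__name__)
```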
# Error messages
Linux/glibc: `double free or corruption (!prev)`
Windows: `0xc0000374` in event viewer
# Your environment
Reproduced on cpython 3.10.0, 3.10.8, 3.12.0a2
Reproduced on fedora 35 (x64), windows 10 (x64, 17763.316)
Not reproduced on cpython 3.9.15
Not visibly reproduced on macos 10.13
<!-- gh-linked-prs -->
### Linked PRs
* gh-99605
* gh-99627
* gh-99628
* gh-99630
<!-- /gh-linked-prs -->
| e13d1d9dda8c27691180bc618bd5e9bf43dfa89f | abf5b6ff43c5e238e2d577c95ed27bc8ff01afd5 |
python/cpython | python__cpython-99642 | # gc_decref: Assertion "gc_get_refs(g) > 0" failed
# Bug report
Error while testing the main branch. May be similar to:
- https://github.com/python/cpython/issues/94215
# The problem
When running `./python -m test`:
```
0:18:59 load avg: 0.72 [187/433] test_imp
Modules/gcmodule.c:113: gc_decref: Assertion "gc_get_refs(g) > 0" failed: refcount is too small
Enable tracemalloc to get the memory block allocation traceback
```
# Your environment
- CPython versions tested on: "main" branch from git repo.
- Operating system and architecture: Void Linux, x86_64, gcc 10.2.1.
# Message log
```
0:18:59 load avg: 0.72 [187/433] test_imp
Modules/gcmodule.c:113: gc_decref: Assertion "gc_get_refs(g) > 0" failed: refcount is too small
Enable tracemalloc to get the memory block allocation traceback
object address : 0x7f3dcdf877d0
object refcount : 96
object type : 0x5573e22fc880
object type name: module
object repr : <module 'builtins' (built-in)>
Fatal Python error: _PyObject_AssertFailed: _PyObject_AssertFailed
Python runtime state: initialized
Current thread 0x00007f3dce3c32c0 (most recent call first):
Garbage-collecting
File "/usr/local/src/cpython/Lib/test/support/__init__.py", line 738 in gc_collect
File "/usr/local/src/cpython/Lib/test/libregrtest/save_env.py", line 314 in __exit__
File "/usr/local/src/cpython/Lib/test/libregrtest/runtest.py", line 312 in _runtest_inner2
File "/usr/local/src/cpython/Lib/test/libregrtest/runtest.py", line 360 in _runtest_inner
File "/usr/local/src/cpython/Lib/test/libregrtest/runtest.py", line 235 in _runtest
File "/usr/local/src/cpython/Lib/test/libregrtest/runtest.py", line 265 in runtest
File "/usr/local/src/cpython/Lib/test/libregrtest/main.py", line 455 in run_tests_sequential
File "/usr/local/src/cpython/Lib/test/libregrtest/main.py", line 572 in run_tests
File "/usr/local/src/cpython/Lib/test/libregrtest/main.py", line 750 in _main
File "/usr/local/src/cpython/Lib/test/libregrtest/main.py", line 709 in main
File "/usr/local/src/cpython/Lib/test/libregrtest/main.py", line 773 in main
File "/usr/local/src/cpython/Lib/test/__main__.py", line 2 in <module>
File "/usr/local/src/cpython/Lib/runpy.py", line 88 in _run_code
File "/usr/local/src/cpython/Lib/runpy.py", line 198 in _run_module_as_main
Extension modules: _testcapi, _xxsubinterpreters, _testinternalcapi, _testbuffer, _testmultiphase, _ctypes_test, xxsubtype (total: 7)
```
<!-- gh-linked-prs -->
### Linked PRs
* gh-99642
* gh-99643
* gh-99644
<!-- /gh-linked-prs -->
| cb2ef8b2acbb231c207207d3375b2f8b0077a6ee | 1cae31d26ba621f6b1f0656ad3d69a0236338bad |
python/cpython | python__cpython-99616 | # LWPCookieJar.save() gives unexpected results in 3.11.0
# Bug report
`LWPCookieJar.save()` doesn't truncate the file.
So removing cookies from an existing jar file and then saving it gives unexpected results:
```python3
from http.cookiejar import LWPCookieJar
from urllib.request import Request, urlopen
lwp = LWPCookieJar("cookies.lwp")
# get some cookies & save
request = Request("https://www.scoopmeacookie.com/give-me-more/")
with urlopen(request) as response:
lwp.extract_cookies(response, request)
# here's some cookies
print(f"Extracted : {lwp}")
lwp.save()
# clear the jar & save
lwp.clear()
# the jar is now empty
print(f"Clear : {lwp}")
lwp.save()
# are those cookies really gone ?
lwp.load()
# they're back ! that would be great IRL
# LWPCookieJar.save() didn't truncate the file (os.O_TRUNC is missing)
print(f"Reload : {lwp}")
```
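A hedged workaround until `save()` truncates: remove (or truncate) the file yourself before saving. A self-contained sketch, using a temporary directory instead of a real request:

```python
import os
import tempfile
from http.cookiejar import LWPCookieJar

with tempfile.TemporaryDirectory() as tmp:
    path = os.path.join(tmp, "cookies.lwp")
    jar = LWPCookieJar(path)
    jar.save()                 # writes the LWP header (plus any cookies)
    jar.clear()
    # workaround: make sure stale bytes can't survive a shorter rewrite
    if os.path.exists(path):
        os.remove(path)
    jar.save()
    jar.load()
    print(len(jar))            # 0: the cleared cookies stay gone
```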
# Environment
- Python 3.11.0 (main, Oct 24 2022, 18:26:48) [MSC v.1933 64 bit (AMD64)]
- Windows 10 64 bits 22H2
<!-- gh-linked-prs -->
### Linked PRs
* gh-99616
* gh-100377
<!-- /gh-linked-prs -->
| 44892d45b038f919b0378590a776580a9d73b291 | cb60b6131bc2bb11c48a15f808914d8b242b9fc5 |
python/cpython | python__cpython-99555 | # `.pyc` files are larger than they need to be
Python 3.11 made `.pyc` files almost twice as large. There are two main reasons for this:
- [PEP 659](https://peps.python.org/pep-0659/) made the bytecode stream ~3x as large as 3.10.
- [PEP 657](https://peps.python.org/pep-0657/) made the location tables ~9x as large as 3.10.
(Note that these effects compound each other, since longer bytecode means more location entries.)
However, there is low-hanging fruit for improving this situation in 3.12:
- Bytecode can be compressed using a fairly simple scheme (one byte for instructions without an oparg, two bytes for instructions with an oparg, and zero bytes for `CACHE` entries). **This results in serialized bytecode that is ~66% smaller than 3.11.**
- The location table format already has a mechanism for compressing multiple code units into a single entry. Currently it's only used for `EXTENDED_ARG`s and `CACHE`s corresponding to a single instruction, but with slight changes the compiler can use the same mechanism to share location table entries between adjacent instructions. This is a double-win, since it not only makes `.pyc` files smaller, but also shrinks the memory footprint of all code objects in the process. **Experiments show that this makes location tables ~33% smaller than 3.11.**
When both of these optimizations are applied, `.pyc` files become ~33% smaller than 3.11. This is only ~33% larger than 3.10, despite all of the rich new debugging information present.
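The bytecode half of the first bullet can be sketched in pure Python. The encoding below is an illustrative assumption for sizing purposes, not the format the linked PRs implement:

```python
import dis

def compact_size(code):
    """Size of *code* under the sketched scheme: one byte for argument-less
    instructions, two bytes for instructions with an oparg, and zero bytes
    for CACHE entries (dis skips those by default)."""
    size = 0
    for instr in dis.get_instructions(code):
        size += 1 if instr.arg is None else 2
    return size

def f(x):
    return x * 2 + 1

raw = len(f.__code__.co_code)  # includes inline CACHE entries on 3.11+
print(compact_size(f.__code__), "vs", raw)
```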
<!-- gh-linked-prs -->
### Linked PRs
* gh-99555
* gh-99556
* gh-100435
<!-- /gh-linked-prs -->
| 426569eb8ca1edaa68026aa2bab6b8d1c9105f93 | 4420cf4dc9ef7bd3c1c9b5465fa9397304bf0110 |
python/cpython | python__cpython-99572 | # Subclasses of `ExceptionGroup` can wrap `BaseException`s
```python
class MyEG(ExceptionGroup):
"""Holds BaseExceptions without itself being a BaseException."""
oops = MyEG("oops", [KeyboardInterrupt()])
assert isinstance(oops, Exception)
assert not isinstance(oops.exceptions[0], Exception)
```
I believe that this is a bug tracing to the period when PEP-654 did not intend `(Base)ExceptionGroup` to be usable as a parent class; and I think a sufficient fix would be to replace the type-equality check with an isinstance check in:
https://github.com/python/cpython/blob/bc390dd93574c3c6773958c6a7e68adc83d0bf3f/Objects/exceptions.c#L740-L744
cc @iritkatriel; raised via https://github.com/agronholm/exceptiongroup/pull/40#discussion_r1020961107
<!-- gh-linked-prs -->
### Linked PRs
* gh-99572
* gh-99580
* gh-99615
* gh-103435
<!-- /gh-linked-prs -->
| c8c6113398ee9a7867fe9b08bc539cceb61e2aaa | a220c6d1ee3053895f502b43b47dc3a9c55fa6a3 |
python/cpython | python__cpython-99548 | # isjunction for checking if a given path is a junction
# Feature or enhancement
# Pitch
I’m proposing we add an `isjunction` function to `os.path` and an `is_junction` method to `pathlib.Path`. Both would return True if the given path is a junction. On POSIX the API would exist but always return False, as junctions tend to be a Windows-only thing.
We have similar logic to check for concepts like `ismount`, `islink`, etc., so I figure this would fit in.
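A rough pure-Python sketch of the proposed check. The reparse-tag constant is taken from the Windows SDK, and `st_reparse_tag` only exists on Windows, so elsewhere this always returns False:

```python
import os

IO_REPARSE_TAG_MOUNT_POINT = 0xA0000003  # Windows SDK value; junctions use this tag

def isjunction(path):
    """True if *path* is a Windows junction; always False elsewhere."""
    try:
        st = os.lstat(path)
    except OSError:
        return False
    return getattr(st, "st_reparse_tag", 0) == IO_REPARSE_TAG_MOUNT_POINT

print(isjunction("."))          # False: a plain directory is not a junction
print(isjunction("/no/such"))   # False: missing paths are not junctions
```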
# Previous discussion
See https://discuss.python.org/t/add-mechanism-to-check-if-a-path-is-a-junction/20952 and then lower down https://mail.python.org/archives/list/python-ideas@python.org/thread/KQ7AELTQRLYOXD434GQ2AHNDD23C4CYG/
There didn't seem to be any hard -1s to this proposal.
<!-- gh-linked-prs -->
### Linked PRs
* gh-99548
<!-- /gh-linked-prs -->
| 1b2de89bce7eee3c63ce2286f071db57cd2cfa22 | c2102136be569e6fc8ed90181f229b46d07142f8 |
python/cpython | python__cpython-99541 | # Constant hash value for None to aid reproducibility
# Feature or enhancement
Fix `hash(None)` to a constant value.
# Pitch
(Updated 2022.11.18)
- Under current behavior, the runtime leaks the ASLR offset, since the original address of the `None` singleton is fixed and `_Py_HashPointerRaw` is reversible. Admittedly, there are other similar objects, like `NotImplemented` or `Ellipsis` that also have this problem, and need to be similarly fixed.
- Because of ASLR, `hash(None)` changes every run; that consequently means the hash of many useful "key" types changes every run, particularly tuples, NamedTuples and frozen dataclasses that have `Optional` fields.
- The other source of hash value instability across runs in common "key" types like str or Enum, can be fixed using the `PYTHONHASHSEED` environment var.
- other singletons commonly used as (or as part of) mapping keys, `True` and `False` already have fixed hash values.
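The contrast in the bullets above can be observed directly. A sketch; whether the two fresh-process hashes agree depends on ASLR and on whether the interpreter carries this change:

```python
import subprocess
import sys

def hash_none_fresh():
    # hash(None) as computed by a brand-new interpreter process
    out = subprocess.check_output([sys.executable, "-c", "print(hash(None))"])
    return int(out)

# True and False already have fixed hash values:
assert hash(True) == 1 and hash(False) == 0

# Before the change these often differ between runs; after it they always match.
print(hash_none_fresh() == hash_none_fresh())
```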
CPython's builtin set classes, like all other non-concurrent hash tables (open or closed addressing, AFAIK), grant the user a certain stability property. Given a specific sequence of initialization and subsequent mutation (if any), and given specific inputs with certain hash values, if one were to "replay" it, the resulting set will be in the same observable state every time: not only will it have the same items (correctness), but they will also be retrieved in the same order when iterated.
This property means that code that starts out with identical data, performs computations and makes decisions based on the results will behave identically between runs. For example, if based on some mathematical properties of the input, we have computed a set of N valid choices, they are given integer scores, then we pick the first choice that has maximal score. If the set guarantees the property described above, we are also guaranteed that the exact same choice will be made every time this code runs, even in case of ties. This is very helpful for reproducibility, especially in complex algorithmic code that makes a lot of combinatorial decisions of that kind.
There is a counterargument that we should simply just offer `StableSet` and `StableFrozenSet` that guarantee a specific order, the same way that `dict` does.
A few things to note about that:
- I've written such set classes as an adapter over `dict[T, None]`, there is a substantial perf overhead to that
- Is it worth the extra "weight" in code inside the core? That's doubtful; why hasn't it been added in all those years?
- In a large codebase, it requires automated code inspection and editing tools to enforce this. It's all too easy, and natural, to add a seemingly harmless set comprehension somewhere and defeat the whole effort
- The insertion-order-as-iteration-order guarantee is stronger than what we actually require in order to have the "reproducibility" property I've described, so we're paying extra for something we don't really need.
My PR makes a small change to CPython, in `objects.c`, that sets the `tp_hash` descriptor of `NoneType` to a function that simply returns a constant value.
Admittedly, determinism between runs isn't a concern that most users/programs care about. It is rather niche. Still, I argue that there is no externalized cost to this change.
# Previous discussion
https://discuss.python.org/t/constant-hash-for-none/21110
<!-- gh-linked-prs -->
### Linked PRs
* gh-99541
<!-- /gh-linked-prs -->
| 432117cd1f59c76d97da2eaff55a7d758301dbc7 | a5a7cea202d34ca699d9594d428ba527ef7ff2ce |
python/cpython | python__cpython-99990 | # `__annotations__` are not inherited in 3.10 while they are in 3.8
# Bug report
In Python 3.8.13, a subclass inherits annotations from its base class:
```bash
Python 3.8.13 (default, Oct 19 2022, 17:54:22)
[Clang 12.0.0 ] :: Anaconda, Inc. on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> class A:
... x: int
...
>>> class B(A):
... pass
...
>>> print(A.__annotations__)
{'x': <class 'int'>}
>>> print(B.__annotations__)
{'x': <class 'int'>}
>>> print(B.__dict__)
{'__module__': '__main__', '__doc__': None}
```
Python 3.10.8 does not:
```bash
Python 3.10.8 (main, Nov 4 2022, 08:45:18) [Clang 12.0.0 ] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> class A:
... x: int
...
>>> class B(A):
... pass
...
>>> print(A.__annotations__)
{'x': <class 'int'>}
>>> print(B.__annotations__)
{}
>>> print(B.__dict__)
{'__module__': '__main__', '__doc__': None, '__annotations__': {}}
```
Can't find anything related to this in the changelogs, release notes, Stack Overflow, etc. Is this known/expected behaviour?
<!-- gh-linked-prs -->
### Linked PRs
* gh-99990
* gh-100507
* gh-100509
<!-- /gh-linked-prs -->
| f5b7b19bf10724d831285fb04e00f763838bd555 | e4b43ebb3afbd231a4e5630e7e358aa3093f8677 |
python/cpython | python__cpython-99519 | # Buildbot failure: unhandled warning in `test_enum.py`
Full error text:
```
AssertionError: unhandled warning {message : SyntaxWarning("invalid escape sequence '\\('"), category : 'SyntaxWarning', filename : '/Users/sobolev/Desktop/cpython/Lib/test/test_enum.py', lineno : 1481, line : None}
----------------------------------------------------------------------
Ran 1 test in 6.852s
FAILED (failures=1)
test test___all__ failed
test___all__ failed (1 failure)
== Tests result: FAILURE ==
1 test failed:
test___all__
Total duration: 7.1 sec
Tests result: FAILURE
```
Repro: `./python.exe -m test -v test___all__`
Originally found in: https://github.com/python/cpython/pull/99461#issuecomment-1315706879
The fix is on its way.
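For context, the warning comes from a non-raw string containing `\(`; a minimal sketch of the trigger and the fix (hypothetical snippet, not the actual `test_enum.py` code):

```python
import warnings

bad = "pattern = '\\('"     # compiled source contains '\(' in a non-raw string
good = "pattern = r'\\('"   # fix: raw string, no escape processing

with warnings.catch_warnings(record=True) as caught_bad:
    warnings.simplefilter("always")
    compile(bad, "<demo>", "exec")

with warnings.catch_warnings(record=True) as caught_good:
    warnings.simplefilter("always")
    compile(good, "<demo>", "exec")

print(caught_bad[0].message)   # invalid escape sequence warning
print(len(caught_good))        # 0
```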
<!-- gh-linked-prs -->
### Linked PRs
* gh-99519
<!-- /gh-linked-prs -->
| 5cfb7d19f5242c9b8ffd2fe30a24569e85a99e1d | 00437ad30454005bc82fca75dfbabf6c95f3ea6a |
python/cpython | python__cpython-99511 | # multiprocessing classes SimpleQueue and Queue don't support typing in 3.11.0
# Bug report
The `SimpleQueue` and `Queue` classes from the `multiprocessing` module in Python 3.11.0 do not support the `[str]` subscript in type annotations.
Minimal, reproducible example:
```python3
from multiprocessing import Queue
multiprocessing_queue: Queue[str] = Queue()
```
or
```python3
from multiprocessing import SimpleQueue
multiprocessing_queue: SimpleQueue[str] = SimpleQueue()
```
Result - error:
```console
multiprocessing_queue: SimpleQueue[str] = SimpleQueue()
~~~~~~~~~~~^^^^^
TypeError: 'method' object is not subscriptable
```
How it should work:
It should work like `Queue` from the `queue` module:
```python3
from queue import Queue
standard_queue: Queue[str] = Queue()
```
Result - no error.
Why do I need this?
I want my IDE to know that `queue.get()` returns a `str` object.
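Until this is fixed, a workaround sketch is to quote the annotation (or use `from __future__ import annotations`) so the subscript is never evaluated at runtime:

```python
from multiprocessing import SimpleQueue

# The string annotation is never executed, so no TypeError on 3.11.0,
# while type checkers and IDEs still see SimpleQueue[str].
q: "SimpleQueue[str]" = SimpleQueue()
q.put("hello")
print(q.get())  # hello
```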
# Your environment
Python 3.11.0 arm64
Python 3.11.0 (main, Nov 4 2022, 17:22:54) [Clang 14.0.0 (clang-1400.0.29.202)] on darwin
MacBook M1 Pro macOS Ventura 13.0.1.
<!-- gh-linked-prs -->
### Linked PRs
* gh-99511
<!-- /gh-linked-prs -->
| ce39aaffeef9aa8af54a8554fe7a5609a6bba471 | 199507b81a302ea19f93593965b1e5088195a6c5 |
python/cpython | python__cpython-99635 | # Call to _imp.source_hash with incorrect arguments in (unreachable?) part of SourceLoader.get_code (importlib._bootstrap_external)
# Bug report
As far as I can tell this line:
https://github.com/python/cpython/blob/4e4b13e8f6211abbc0d53056da11357756daa314/Lib/importlib/_bootstrap_external.py#L1147-L1147
will raise a TypeError; it should use the two-argument form seen earlier in the same method.
https://github.com/python/cpython/blob/4e4b13e8f6211abbc0d53056da11357756daa314/Lib/importlib/_bootstrap_external.py#L1117-L1120
However, as either `hash_based` is False, `source_hash` has already been generated, or the code returns early, I don't think this line can be reached. (Maybe if you have a flaky version of `get_data` that sometimes raises an ImportError/EOFError??)
Example:
```python3
>>> _imp.source_hash(_RAW_MAGIC_NUMBER, b'')
b's\x8d\x9c\xd5\xd5\xe8\x7fs'
>>> _imp.source_hash(b'')
Traceback (most recent call last):
File "/opt/local/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/code.py", line 90, in runcode
exec(code, self.locals)
File "<input>", line 1, in <module>
TypeError: source_hash() missing required argument 'source' (pos 2)
```
Edit: Sorry, as I opened the issue from within the source line via 'reference in new issue' I think I accidentally skipped the bug report format that would have given this a label.
<!-- gh-linked-prs -->
### Linked PRs
* gh-99635
<!-- /gh-linked-prs -->
| c69cfcdb116c4907b306e2bd0e263d5ceba48bd5 | 57dfb1c4c8d5298494f121d1686a77a11612fd64 |
python/cpython | python__cpython-99512 | # secrets.compare_digest raises TypeError if one of its arguments is None
# Bug report
When using `secrets.compare_digest()` with one or both of its arguments being None, the function explodes. This is [not explicitly documented](https://docs.python.org/3/library/secrets.html#secrets.compare_digest), even though the documentation mentions that it compares strings.
```python
❯ python3.11
Python 3.11.0 (main, Oct 25 2022, 13:57:33) [Clang 14.0.0 (clang-1400.0.29.202)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import secrets
>>> secrets.compare_digest('', '')
True
>>> secrets.compare_digest('', None)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: unsupported operand types(s) or combination of types: 'str' and 'NoneType'
>>> secrets.compare_digest(None, '')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: unsupported operand types(s) or combination of types: 'NoneType' and 'str'
>>> secrets.compare_digest(None, None)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: unsupported operand types(s) or combination of types: 'NoneType' and 'NoneType'
```
I would argue that a function that advertises comparing its two arguments in constant time should not explode and thus reveal that one of the arguments has a different "shape" (in this case someone had set the password hash in the database to `NULL`). If this is not wanted, the documentation should include some guidance on how to handle this case without creating a timing attack that reveals something about the password in the database. Not sure that is reasonable, but at least some hints about `None` not being handled would have been really helpful.
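One hedged way to keep the comparison total is to map `None` to a guaranteed mismatch while still performing a comparison. This is an illustrative sketch, not vetted cryptographic advice:

```python
import secrets

def compare_or_reject(a, b):
    """Like secrets.compare_digest, but treats None as a mismatch instead of
    raising, and still performs a comparison so timing stays similar."""
    if a is None or b is None:
        secrets.compare_digest("never", "match")  # burn comparable time
        return False
    return secrets.compare_digest(a, b)

print(compare_or_reject("s3cret", "s3cret"))  # True
print(compare_or_reject("s3cret", None))      # False
```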
# Your environment
- CPython versions tested on: 3.6, 3.9, 3.10, 3.11
- Operating system and architecture: Linux x86, Mac ARM
<!-- gh-linked-prs -->
### Linked PRs
* gh-99512
* gh-99790
* gh-99791
<!-- /gh-linked-prs -->
| 47d673d81fc315069c14f9438ebe61fb70ef1ccc | ec2b76aa8b7c6313293ff9c6814e8bc31e08fcaf |
python/cpython | python__cpython-99484 | # Remove compatibility Jython code
There are several places where some Jython-specific hacks are used. For example, there are imports from `com.python.core` which is a third-party library.
They are untested and undocumented.
Right now Jython is stuck with 2.7
I think we don't have to keep it: eventually Jython can patch it on their side, since all parts are pure-Python code.
Original issue on Jython's repo: https://github.com/jython/jython/issues/217
I think we should leave this open for some time (1 month?) to let Jython's team comment on this.
CC @Yhg1s
PR to review and comment is incoming :)
<!-- gh-linked-prs -->
### Linked PRs
* gh-99484
<!-- /gh-linked-prs -->
| 745545b5bb847023f90505bf9caa983463413780 | c5726b727e26b81a267933654cf26b760a90d9aa |
python/cpython | python__cpython-99444 | # `descr_set_trampoline_call` return type should be `int` not `PyObject*`
`getset_set`'s return type is `int`, so `descr_set_trampoline_call`'s return type needs to be the same. I think this was a copy-paste error introduced when applying my patch, where `descr_set_trampoline_call` returns `int`:
https://github.com/pyodide/pyodide/blob/main/cpython/patches/0001-Patch-in-call-trampolines-to-handle-fpcast-troubles.patch
<!-- gh-linked-prs -->
### Linked PRs
* gh-99444
* gh-99552
<!-- /gh-linked-prs -->
| bc390dd93574c3c6773958c6a7e68adc83d0bf3f | aa8b58cb33826bd2b1a1de631ebcd6a5353eecb5 |
python/cpython | python__cpython-111762 | # pdb mangles sys.path when run with -P or ._pth
# Bug report
When running a script, `pdb` indiscriminately replaces `sys.path[0]`, which it assumes to be the path where `pdb` itself was found, with the path where the script was found. That assumption may not be correct: it is not when the interpreter runs in “safe path” mode due to a [`-P` command line option](https://docs.python.org/3/using/cmdline.html#cmdoption-P) or the presence of a `._pth` file. In that case `sys.path[0]` may point to the standard library, and overwriting it may break the script’s ability to import standard library modules.
This is easily reproduced in the _embeddable package_ for Windows, which has the standard library in python311.zip, refers to it in `sys.path[0]`, and has a *python311._pth* file, but should also be reproducible (using `-P`) on other platforms in any other installation that finds the standard library in `sys.path[0]`. (This is not the case in a standard installation for Windows, which has `'…\\python311.zip'` in `sys.path[0]`, but that file doesn’t actually exist and the standard library is located in files in `Lib/` and is found through later entries in `sys.path`.)
**Steps to reproduce**
script.py:
```python
import sys
print(sys.path)
import random
print(random.random())
```
**Actual result**
```
C:\Users\Walther\pdbtest>python-3.11.0-embed-amd64\python.exe -m pdb script.py
> c:\users\walther\pdbtest\script.py(1)<module>()
-> import sys
(Pdb) c
['C:\\Users\\Walther\\pdbtest', 'C:\\Users\\Walther\\pdbtest\\python-3.11.0-embed-amd64']
Traceback (most recent call last):
File "pdb.py", line 1768, in main
File "pdb.py", line 1646, in _run
File "bdb.py", line 597, in run
File "<string>", line 1, in <module>
File "C:\Users\Walther\pdbtest\script.py", line 3, in <module>
import random
ModuleNotFoundError: No module named 'random'
Uncaught exception. Entering post mortem debugging
Running 'cont' or 'step' will restart the program
> c:\users\walther\pdbtest\script.py(3)<module>()
-> import random
(Pdb) q
Post mortem debugger finished. The C:\Users\Walther\pdbtest\script.py will be restarted
> c:\users\walther\pdbtest\script.py(1)<module>()
-> import sys
(Pdb) q
```
**Expected result**
```
C:\Users\Walther\pdbtest>python-3.11.0-embed-amd64\python.exe -m pdb script.py
> c:\users\walther\pdbtest\script.py(1)<module>()
-> import sys
(Pdb) c
['C:\\Users\\Walther\\pdbtest\\python-3.11.0-embed-amd64\\python311.zip', 'C:\\Users\\Walther\\pdbtest\\python-3.11.0-embed-amd64']
0.6351821708383135
The program finished and will be restarted
> c:\users\walther\pdbtest\script.py(1)<module>()
-> import sys
(Pdb) q
```
It seems to me this could be fixed as follows (which is what I used to get the “expected result”), or maybe by comparing `sys.path[0]` with `__file__` to check the assumption directly:
```diff
--- pdb-orig.py 2022-11-11 10:51:02.717413700 +0100
+++ pdb.py 2022-11-11 10:10:19.737092700 +0100
@@ -147,7 +147,8 @@
sys.exit(1)
# Replace pdb's dir with script's dir in front of module search path.
- sys.path[0] = os.path.dirname(self)
+ if not (hasattr(sys.flags, 'safe_path') and sys.flags.safe_path):
+ sys.path[0] = os.path.dirname(self)
@property
def filename(self):
```
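For reference, the flag the patch consults is observable like this (3.11+ only; a sketch):

```python
import subprocess
import sys

# -P (3.11+) sets sys.flags.safe_path, which pdb could check before
# overwriting sys.path[0].
out = subprocess.check_output(
    [sys.executable, "-P", "-c", "import sys; print(int(sys.flags.safe_path))"]
)
print(out.decode().strip())  # 1
```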
Any opinions, should I make a PR with that?
# Your environment
- CPython versions tested on: 3.11.0
- Operating system and architecture: Windows 10+11 AMD64+ARM64 (but shouldn’t matter)
<!-- gh-linked-prs -->
### Linked PRs
* gh-111762
<!-- /gh-linked-prs -->
| b90a5cf11cdb69e60aed7be732e80113bca7bbe4 | 8f71b349de1ff2b11223ff7a8241c62a5a932339 |
python/cpython | python__cpython-99470 | # ./configure and make failed on macOS Monterey
# Bug report
Running `./configure` failed with an internal error asking me to file a bug report:
```
./configure ✔ │ 23:31:36
checking build system type... x86_64-apple-darwin21.4.0
checking host system type... x86_64-apple-darwin21.4.0
checking for python3.10... python3.10
checking for --enable-universalsdk... no
checking for --with-universal-archs... no
checking MACHDEP... "darwin"
checking for gcc... gcc
checking whether the C compiler works... yes
checking for C compiler default output file name... a.out
checking for suffix of executables...
checking whether we are cross compiling... no
checking for suffix of object files... o
checking whether we are using the GNU C compiler... yes
checking whether gcc accepts -g... yes
checking for gcc option to accept ISO C89... none needed
checking how to run the C preprocessor... gcc -E
checking for grep that handles long lines and -e... /usr/bin/grep
checking for a sed that does not truncate output... /usr/bin/sed
checking for --with-cxx-main=<compiler>... no
checking for g++... no
configure:
By default, distutils will build C++ extension modules with "g++".
If this is not intended, then set CXX on the configure command line.
checking for the platform triplet based on compiler characteristics... darwin
configure: error: internal configure error for the platform triplet, please file a bug report
```
When I used `brew install gcc-11 && CC=gcc-11 ./configure`, the configure step passed but `make` failed.
I believe this is because GNU gcc is missing the `HAVE_BUILTIN_AVAILABLE` macro?
```
make ✔ │ 1m 20s │ 23:35:40
gcc-11 -c -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -std=c99 -Wextra -Wno-unused-result -Wno-unused-parameter -Wno-missing-field-initializers -Werror=implicit-function-declaration -fvisibility=hidden -I./Include/internal -I. -I./Include -DPy_BUILD_CORE -o Programs/python.o ./Programs/python.c
gcc-11 -c -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -std=c99 -Wextra -Wno-unused-result -Wno-unused-parameter -Wno-missing-field-initializers -Werror=implicit-function-declaration -fvisibility=hidden -I./Include/internal -I. -I./Include -DPy_BUILD_CORE -o Parser/token.o Parser/token.c
....
--- a lot of normal logs ---
....
gcc-11 -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -std=c99 -Wextra -Wno-unused-result -Wno-unused-parameter -Wno-missing-field-initializers -Werror=implicit-function-declaration -fvisibility=hidden -I./Include/internal -I. -I./Include -DPy_BUILD_CORE_BUILTIN -DPy_BUILD_CORE_BUILTIN -I./Include/internal -c ./Modules/posixmodule.c -o Modules/posixmodule.o
./Modules/posixmodule.c: In function 'utime_dir_fd':
./Modules/posixmodule.c:5165:9: error: 'HAVE_UTIMENSAT_RUNTIME' undeclared (first use in this function); did you mean 'HAVE_FACCESSAT_RUNTIME'?
5165 | if (HAVE_UTIMENSAT_RUNTIME) {
| ^~~~~~~~~~~~~~~~~~~~~~
| HAVE_FACCESSAT_RUNTIME
./Modules/posixmodule.c:5165:9: note: each undeclared identifier is reported only once for each function it appears in
./Modules/posixmodule.c: In function 'utime_fd':
./Modules/posixmodule.c:5201:9: error: 'HAVE_FUTIMENS_RUNTIME' undeclared (first use in this function); did you mean 'HAVE_RENAMEAT_RUNTIME'?
5201 | if (HAVE_FUTIMENS_RUNTIME) {
| ^~~~~~~~~~~~~~~~~~~~~
| HAVE_RENAMEAT_RUNTIME
./Modules/posixmodule.c: In function 'utime_nofollow_symlinks':
./Modules/posixmodule.c:5242:9: error: 'HAVE_UTIMENSAT_RUNTIME' undeclared (first use in this function); did you mean 'HAVE_FACCESSAT_RUNTIME'?
5242 | if (HAVE_UTIMENSAT_RUNTIME) {
| ^~~~~~~~~~~~~~~~~~~~~~
| HAVE_FACCESSAT_RUNTIME
./Modules/posixmodule.c: In function 'utime_default':
./Modules/posixmodule.c:5274:9: error: 'HAVE_UTIMENSAT_RUNTIME' undeclared (first use in this function); did you mean 'HAVE_FACCESSAT_RUNTIME'?
5274 | if (HAVE_UTIMENSAT_RUNTIME) {
| ^~~~~~~~~~~~~~~~~~~~~~
| HAVE_FACCESSAT_RUNTIME
./Modules/posixmodule.c: In function 'os_preadv_impl':
./Modules/posixmodule.c:9706: warning: ignoring '#pragma clang diagnostic' [-Wunknown-pragmas]
9706 | #pragma clang diagnostic push
|
./Modules/posixmodule.c:9707: warning: ignoring '#pragma clang diagnostic' [-Wunknown-pragmas]
9707 | #pragma clang diagnostic ignored "-Wunguarded-availability"
|
./Modules/posixmodule.c:9708: warning: ignoring '#pragma clang diagnostic' [-Wunknown-pragmas]
9708 | #pragma clang diagnostic ignored "-Wunguarded-availability-new"
|
./Modules/posixmodule.c:9718: warning: ignoring '#pragma clang diagnostic' [-Wunknown-pragmas]
9718 | #pragma clang diagnostic pop
|
./Modules/posixmodule.c: In function 'os_pwritev_impl':
./Modules/posixmodule.c:10346: warning: ignoring '#pragma clang diagnostic' [-Wunknown-pragmas]
10346 | #pragma clang diagnostic push
|
./Modules/posixmodule.c:10347: warning: ignoring '#pragma clang diagnostic' [-Wunknown-pragmas]
10347 | #pragma clang diagnostic ignored "-Wunguarded-availability"
|
./Modules/posixmodule.c:10348: warning: ignoring '#pragma clang diagnostic' [-Wunknown-pragmas]
10348 | #pragma clang diagnostic ignored "-Wunguarded-availability-new"
|
./Modules/posixmodule.c:10359: warning: ignoring '#pragma clang diagnostic' [-Wunknown-pragmas]
10359 | #pragma clang diagnostic pop
|
./Modules/posixmodule.c: In function 'probe_futimens':
./Modules/posixmodule.c:15466:23: error: 'HAVE_FUTIMENS_RUNTIME' undeclared (first use in this function); did you mean 'HAVE_RENAMEAT_RUNTIME'?
15466 | PROBE(probe_futimens, HAVE_FUTIMENS_RUNTIME)
| ^~~~~~~~~~~~~~~~~~~~~
./Modules/posixmodule.c:15410:11: note: in definition of macro 'PROBE'
15410 | if (test) { \
| ^~~~
./Modules/posixmodule.c: In function 'probe_utimensat':
./Modules/posixmodule.c:15470:24: error: 'HAVE_UTIMENSAT_RUNTIME' undeclared (first use in this function); did you mean 'HAVE_FACCESSAT_RUNTIME'?
15470 | PROBE(probe_utimensat, HAVE_UTIMENSAT_RUNTIME)
| ^~~~~~~~~~~~~~~~~~~~~~
./Modules/posixmodule.c:15410:11: note: in definition of macro 'PROBE'
15410 | if (test) { \
| ^~~~
./Modules/posixmodule.c: In function 'posixmodule_exec':
./Modules/posixmodule.c:15623:9: error: 'HAVE_PWRITEV_RUNTIME' undeclared (first use in this function); did you mean 'HAVE_OPENAT_RUNTIME'?
15623 | if (HAVE_PWRITEV_RUNTIME) {} else {
| ^~~~~~~~~~~~~~~~~~~~
| HAVE_OPENAT_RUNTIME
./Modules/posixmodule.c: In function 'probe_utimensat':
./Modules/posixmodule.c:15415:4: warning: control reaches end of non-void function [-Wreturn-type]
15415 | }
| ^
./Modules/posixmodule.c:15470:1: note: in expansion of macro 'PROBE'
15470 | PROBE(probe_utimensat, HAVE_UTIMENSAT_RUNTIME)
| ^~~~~
./Modules/posixmodule.c: In function 'probe_futimens':
./Modules/posixmodule.c:15415:4: warning: control reaches end of non-void function [-Wreturn-type]
15415 | }
| ^
./Modules/posixmodule.c:15466:1: note: in expansion of macro 'PROBE'
15466 | PROBE(probe_futimens, HAVE_FUTIMENS_RUNTIME)
| ^~~~~
./Modules/posixmodule.c: In function 'utime_nofollow_symlinks':
./Modules/posixmodule.c:5264:1: warning: control reaches end of non-void function [-Wreturn-type]
5264 | }
| ^
./Modules/posixmodule.c: In function 'utime_dir_fd':
./Modules/posixmodule.c:5187:1: warning: control reaches end of non-void function [-Wreturn-type]
5187 | }
| ^
./Modules/posixmodule.c: In function 'utime_fd':
./Modules/posixmodule.c:5225:1: warning: control reaches end of non-void function [-Wreturn-type]
5225 | }
| ^
./Modules/posixmodule.c: In function 'utime_default':
./Modules/posixmodule.c:5294:1: warning: control reaches end of non-void function [-Wreturn-type]
5294 | }
| ^
make: *** [Modules/posixmodule.o] Error 1
```
# Your environment
- CPython versions tested on: 3.10.0
- Operating system and architecture: Darwin 21.4.0 Darwin Kernel Version 21.4.0: Fri Mar 18 00:45:05 PDT 2022; root:xnu-8020.101.4~15/RELEASE_X86_64 x86_64
<img width="379" alt="image" src="https://user-images.githubusercontent.com/4531670/201140238-9957941d-3ea1-456c-99d7-9b3e3aa274c3.png">
```
gcc -v ✔ │ 3m 14s │ 23:50:52
Apple clang version 13.1.6 (clang-1316.0.21.2.5)
Target: x86_64-apple-darwin21.4.0
Thread model: posix
InstalledDir: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin
gcc-11 -v ✔ │ 23:50:54
Using built-in specs.
COLLECT_GCC=gcc-11
COLLECT_LTO_WRAPPER=/usr/local/Cellar/gcc@11/11.3.0/bin/../libexec/gcc/x86_64-apple-darwin21/11/lto-wrapper
Target: x86_64-apple-darwin21
Configured with: ../configure --prefix=/usr/local/opt/gcc@11 --libdir=/usr/local/opt/gcc@11/lib/gcc/11 --disable-nls --enable-checking=release --with-gcc-major-version-only --enable-languages=c,c++,objc,obj-c++,fortran,d --program-suffix=-11 --with-gmp=/usr/local/opt/gmp --with-mpfr=/usr/local/opt/mpfr --with-mpc=/usr/local/opt/libmpc --with-isl=/usr/local/opt/isl --with-zstd=/usr/local/opt/zstd --with-pkgversion='Homebrew GCC 11.3.0' --with-bugurl=https://github.com/Homebrew/homebrew-core/issues --enable-libphobos --build=x86_64-apple-darwin21 --with-system-zlib --with-sysroot=/Library/Developer/CommandLineTools/SDKs/MacOSX12.sdk
Thread model: posix
Supported LTO compression algorithms: zlib zstd
gcc version 11.3.0 (Homebrew GCC 11.3.0)
```
<!-- gh-linked-prs -->
### Linked PRs
* gh-99470
* gh-99638
<!-- /gh-linked-prs -->
| cdde29dde90947df9bac39c1d19479914fb3db09 | 6d8da238ccdf946dc90e20821652d8caa25b76ba |
python/cpython | python__cpython-114031 | # Clarify the documentation of pathlib.Path.is_relative_to()
Hi,
Currently (python 3.10.6 & 3.11.0):
```python
>>> from pathlib import Path
>>> p = Path('/var/log/../../opt')
>>> p.is_relative_to('/var/log')
True
>>> p = p.resolve()
>>> p.is_relative_to('/var/log')
False
```
Once you know `is_relative_to` uses `relative_to`, this makes more sense, but it's not obvious from the documentation or the examples given. It can also easily lead to code that looks secure but isn't. Case in point: I was tasked with reviewing this code today (simplified for illustration purposes):
```python
path = Path(ROOT_PATH, user_input_rel_path)
if path.is_relative_to(ROOT_PATH):
path.unlink()
else:
raise PermissionError('Nope!')
```
I was unsure if I should open a bug or not because one could easily argue it isn't a bug. I do believe however that a warning in the documentation could save a few devs from making a mistake.
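A pattern that avoids the trap is to resolve both sides before checking. A hedged sketch (the root path and the user input are invented for illustration, not the reviewed code):

```python
from pathlib import Path

# Hypothetical hardening of the reviewed snippet: resolve() both sides so
# lexical ".." segments cannot make the purely textual check pass.
ROOT_PATH = Path('/var/log')
user_input_rel_path = '../../opt/secret'   # attacker-controlled in the review

path = (ROOT_PATH / user_input_rel_path).resolve()
if path.is_relative_to(ROOT_PATH.resolve()):
    print('inside root, safe to operate on', path)
else:
    print('traversal detected:', path)
```

This only relies on documented behavior (`resolve()` normalizes `..` and symlinks), but whether it is sufficient for a given threat model is for the reviewer to decide.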
<!-- gh-linked-prs -->
### Linked PRs
* gh-114031
* gh-114460
* gh-114461
<!-- /gh-linked-prs -->
| 3a61d24062aaa1e13ba794360b6c765d9a1f2b06 | 9af9ac153acb4198878ad81ef438aca2b808e45d |
python/cpython | python__cpython-101396 | # Add link to the first article about Python
# Documentation
Add link to the first article about Python (https://ir.cwi.nl/pub/18204) to part about Python articles (https://docs.python.org/3/faq/general.html#are-there-any-published-articles-about-python-that-i-can-reference)
<!-- gh-linked-prs -->
### Linked PRs
* gh-101396
* gh-101461
* gh-101462
<!-- /gh-linked-prs -->
| df0068ce4827471cc2962631ee64f6f38e818ec4 | 1a62ae84c687791bc1dfb54d1eb75e1c7277bb04 |
python/cpython | python__cpython-99802 | # Bug report: shutil.make_archive() makes empty archive file even when root_dir does not exists
# Bug report
In Python 3.10+, shutil.make_archive() creates an empty archive file and does not raise any error even when root_dir does not exist.
In Python 3.9 and earlier, FileNotFoundError is raised with the message `[Errno 2] No such file or directory: 'xxxxxxx'`.
```
import shutil
shutil.make_archive(base_name='aaa_archive', root_dir="not_existing_dir", format="zip")
# This raises FileNotFoundError in Python <= 3.9, but silently succeeds in 3.10+
```
I thought making an empty archive file is unnatural, so fixing it may be good for backward compatibility.
I think this problem is caused by the line below, where `os.chdir(root_dir)` is no longer called.
In the previous code, `os.chdir(root_dir)` raised FileNotFoundError when root_dir did not exist.
https://github.com/python/cpython/pull/93160/files#diff-db8ac59326160713929e0e1973aef54f0280fe9f154ef24d14244909a0e0689bL1084
Checking for the existence of root_dir and raising FileNotFoundError when it is not found might be a good way to fix this problem.
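A hedged sketch of that suggestion (not the actual CPython patch; `make_archive_checked` is an invented wrapper name):

```python
import errno
import os
import shutil

# Check root_dir up front so the call fails loudly instead of quietly
# writing an empty archive when the directory is missing.
def make_archive_checked(base_name, format, root_dir=None, **kwargs):
    if root_dir is not None and not os.path.isdir(root_dir):
        raise FileNotFoundError(errno.ENOENT, os.strerror(errno.ENOENT), root_dir)
    return shutil.make_archive(base_name, format, root_dir=root_dir, **kwargs)

try:
    make_archive_checked('aaa_archive', 'zip', root_dir='not_existing_dir')
except FileNotFoundError as exc:
    print('refused:', exc)
```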
<!-- gh-linked-prs -->
### Linked PRs
* gh-99802
* gh-107998
* gh-107999
<!-- /gh-linked-prs -->
| a86df298df5b02e2d69ea6879e9ed10a7adb85d0 | a794ebeb028f7ef287c780d3890f816db9c21c51 |
python/cpython | python__cpython-102518 | # Extension type from documentation doesn't compile in C++20 mode
# Bug report
C++20 added support for designated initializers and fails to compile if you mix named and unnamed initializers.
For demonstration I'll use the example from the documentation section "[Defining Extension Types: Tutorial](https://docs.python.org/3/extending/newtypes_tutorial.html)" with `.tp_doc` removed from `CustomType` so it builds on Python 3.10.
When building with C++ compiler in C++20 mode using
`g++ $(python3-config --cflags) -std=c++20 -Wall -Wextra -Wno-missing-field-initializers -c pymod.cc`
it fails with the following message:
```
pymod.cc:11:5: error: either all initializer clauses should be designated or none of them should be
11 | .tp_name = "custom.Custom",
| ^
pymod.cc:20:5: error: either all initializer clauses should be designated or none of them should be
20 | .m_name = "custom",
| ^
```
The problem is the macros that produce non-designated initializers here.
# Additional information
The following compilations work flawlessly:
GNU C compiler:
`gcc $(python3-config --cflags) -Wall -Wextra -Wno-missing-field-initializers -c pymod.cc`
GNU C++ compiler with no C++ version specified:
`g++ $(python3-config --cflags) -Wall -Wextra -Wno-missing-field-initializers -c pymod.cc`
GNU C++ Compiler in C++17 mode:
`g++ $(python3-config --cflags) -std=c++17 -Wall -Wextra -Wno-missing-field-initializers -c pymod.cc`
# Your environment
- Fedora 36 container with `python-devel`, `gcc` and `gcc-c++` installed
- CPython: 3.10.7
- GCC: 12.2.1 20220819 (Red Hat 12.2.1-2)
The issue leading up to this was our build failing in C++20 mode on at least RHEL 7/8/9, Fedora 34 to 37, SLES 12, SLES 15, Debian 10/11, Ubuntu 16 to 22.
So I guess building with pretty much every GNU C++ compiler will break if you turn on C++20.
pymod.cc:
```
#define PY_SSIZE_T_CLEAN
#include <Python.h>
typedef struct {
PyObject_HEAD
/* Type-specific fields go here. */
} CustomObject;
static PyTypeObject CustomType = {
PyVarObject_HEAD_INIT(NULL, 0)
.tp_name = "custom.Custom",
.tp_basicsize = sizeof(CustomObject),
.tp_itemsize = 0,
.tp_flags = Py_TPFLAGS_DEFAULT,
.tp_new = PyType_GenericNew,
};
static PyModuleDef custommodule = {
PyModuleDef_HEAD_INIT,
.m_name = "custom",
.m_doc = "Example module that creates an extension type.",
.m_size = -1,
};
PyMODINIT_FUNC
PyInit_custom(void)
{
PyObject *m;
if (PyType_Ready(&CustomType) < 0)
return NULL;
m = PyModule_Create(&custommodule);
if (m == NULL)
return NULL;
Py_INCREF(&CustomType);
if (PyModule_AddObject(m, "Custom", (PyObject *) &CustomType) < 0) {
Py_DECREF(&CustomType);
Py_DECREF(m);
return NULL;
}
return m;
}
```
<!-- gh-linked-prs -->
### Linked PRs
* gh-102518
<!-- /gh-linked-prs -->
| 23cf1e20a6470588fbc64483031ceeec7614dc56 | 52bc2e7b9d451821513a580a9b73c20cfdcf2b21 |
python/cpython | python__cpython-100381 | # /std:c++20 instead of /std:c++17 used for _wmimodule.cpp, but seems unnecessary
@python/windows-team
I noticed the VS project file `_wmi.cxproj` for `_wmimodule.cpp` added for 3.12 to support using WMI on Windows to get `platform` data in issue #89545 / PR #96289 specifies C++20 mode via the compiler flag `/std:c++20`. The flag (and partially-complete C++20 support) was only added in VS 2019 16.11, and produces a compiler warning in VS 2017 15.9 stating the flag was ignored.
This is the only file that requires it, and there were no relevant hints in the issue, PR, or commit history as to why it was required. Despite the flag being ignored, there were no compiler errors or other warnings; both the full test suite (minus a couple of clearly unrelated issues) and running `test_wmi` and `test_platform` with `-u all` passed. The same was true when I recompiled it with `/std:c++17` (added in VS 2017 15.8 and used by a couple of other files), which naturally avoids the warning.
Therefore, it would seem that specifying C++20 is unnecessary, and it can be changed to `/std:c++17` to avoid compiler warnings and to use a C++ standard version consistent with the other files, unless there's something I'm missing here (entirely possible, of course). Any reason this was added?
<!-- gh-linked-prs -->
### Linked PRs
* gh-100381
<!-- /gh-linked-prs -->
| f08209874e58d0adbb08bd1dba4f58ba63f571c5 | 36f2329367f3608d15562f1c9e89c50a1bd07b0b |
python/cpython | python__cpython-112670 | # Elide uninformative traceback indicators in `return` and simple assignment statements
The new traceback indicators can be *really* nice, though at times also quite verbose. #93883/#93994 by @belm0 improved this situation by skipping the indicators for lines where the *entire* line was indicated, which helps substantially. I'd like to propose that we take this a small step further:
- Skip indicators for lines with a `return` statement, where every part except the `return` keyword is indicated; e.g. `return foo()`
- Skip indicators for lines with a simple assignment statement, where the entire rhs is indicated and the lhs consists of a simple name; e.g. `name = some.more(complicated, call=here)`
These heuristics are slightly more complicated than "don't indicate the entire line", but I argue that in each of these cases the indicators add little to no information, while compressing the traceback makes it easier to navigate and draws attention to the remaining more-specific indicators.
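For concreteness, a tiny runnable example of the first case (names invented): the failing frame in `outer` is a bare `return inner()`, so any carets could only ever cover the call, that is, the whole line minus `return`:

```python
import traceback

# In `outer`, the failing line is `return inner()`; column indicators on
# that line would add no information beyond the line itself.
def inner():
    raise ValueError("boom")

def outer():
    return inner()

try:
    outer()
except ValueError:
    tb = traceback.format_exc()

print(tb)
```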
My motivating example is the traceback reported in https://github.com/pytest-dev/pytest/issues/10466, where I count seven already-elided indicator lines, twelve that would be elided by the return heuristic, seven by the simple-assignment heuristic, and four other lines where the indicators would not be affected by this proposal. I'd even argue that dropping uninformative indicators from a majority (19/30) of traceback lines could be considered a bugfix - indicating on 4/30 lines is quite different to 23/30!
<!-- gh-linked-prs -->
### Linked PRs
* gh-112670
* gh-119554
* gh-119556
<!-- /gh-linked-prs -->
| 4a08a75cf4c490f7c43ede69bdf6e5a79c6a3af3 | c1bf4874c1e9db2beda1d62c8c241229783c789b |
python/cpython | python__cpython-100182 | # Segfault on frame.f_back when frame is created with PyFrame_New()
Python segfaults when `frame.f_back` is accessed on a frame created with the `PyFrame_New()` C API. Calling the `PyFrame_GetBack()` C API also segfaults, at least in debug builds and on win32 (it depends on the contents of uninitialized memory). Tested with 3.11.0 and the git 3.11 branch as of Nov 4, 2022.
The cause is that the `->previous` field of the `_PyInterpreterFrame` is never set to NULL, so when `PyFrame_GetBack()` runs, it tries to dereference the pointer value of `->previous` and segfaults. A test case using ctypes is attached.
Adding a `frame->previous = NULL;` line to `init_frame()` in frameobject.c fixes this, though I don't know if it's the best place for it.
[f_back_segfault.py.txt](https://github.com/python/cpython/files/9942212/f_back_segfault.py.txt)
<!-- gh-linked-prs -->
### Linked PRs
* gh-100182
* gh-100478
<!-- /gh-linked-prs -->
| 88d565f32a709140664444c6dea20ecd35a10e94 | 2659036c757a11235c4abd21f02c3a548a344fe7 |
python/cpython | python__cpython-98993 | # Some missing newlines for prompts
For example, in the [enum howto](https://docs.python.org/dev/howto/enum.html#intflag) ([source](https://github.com/python/cpython/blame/main/Doc/howto/enum.rst#L675-L680)) becomes
```
class Perm(IntFlag):
R = 4
W = 2
X = 1
RWX = 7
Perm.RWX
~Perm.RWX
Perm(7)
```
but when pasted into the prompt, gives
```
>>> class Perm(IntFlag):
... R = 4
... W = 2
... X = 1
... RWX = 7
... Perm.RWX
File "<stdin>", line 6
Perm.RWX
^^^^
SyntaxError: invalid syntax
```
There are a number of such examples that could use a newline with `...` for ease of pasting.
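The fix is a bare `...` continuation line (and a fresh `>>>` prompt) before the first statement after the class body, so the same snippet pastes cleanly, e.g.:

```
>>> class Perm(IntFlag):
...     R = 4
...     W = 2
...     X = 1
...     RWX = 7
...
>>> Perm.RWX
```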
https://github.com/python/cpython/pull/98993
<!-- gh-linked-prs -->
### Linked PRs
* gh-98993
<!-- /gh-linked-prs -->
| 286e3c76a9cb8f1adc2a915f0d246a1e2e408733 | 3e06b5030b18ca9d9d507423b582d13f38d393f2 |
python/cpython | python__cpython-106649 | # Use OpenSSL 3.0.x in our binary builds
# Feature or enhancement
We currently use OpenSSL 1.1.1 series in our Windows and macOS binary builds.
Per https://www.openssl.org/source/, that is only supported through September of 2023.
Thus we need to switch to a supported version of OpenSSL before 3.12 is released. _(And likely consider moving 3.11 to use it if deemed feasible)_
There are a pile of bugs related to OpenSSL 3 that may or may not be blockers:
* https://github.com/python/cpython/issues/90728
* https://github.com/python/cpython/issues/101401
* https://github.com/python/cpython/issues/90307
* https://github.com/python/cpython/issues/95494
* ... edit this list to link to others ...
We have a longer term desire to not be so beholden to OpenSSL at all. But this issue is being filed as a practical response to untangling that not being likely feasible before 3.12beta1.
<!-- gh-linked-prs -->
### Linked PRs
* gh-106649
* gh-106680
* gh-106761
* gh-107472
* gh-107474
* gh-107476
* gh-107481
* gh-107482
<!-- /gh-linked-prs -->
| e2d7366fb3df44e7434132636d49f22d6d25cc9f | 2ca008e2b738b8c08b4bf46b2b23f315d6510d92 |
python/cpython | python__cpython-123857 | # "builtins" module should link to Built-in Types and Built-in Exceptions
The [`builtins`](https://docs.python.org/library/builtins.html) module documentation links to [Built-in Functions](https://docs.python.org/library/functions.html#built-in-funcs) and [Built-in Constants](https://docs.python.org/library/constants.html#built-in-consts), but not [Built-in Types](https://docs.python.org/library/stdtypes.html) and [Built-in Exceptions](https://docs.python.org/library/exceptions.html). Those should be included, shouldn't they?
---
I'd be happy to submit a PR for this, though I'm not very good with RST. I believe the links would be `` :ref:`bltin-types` `` and `` :ref:`bltin-exceptions` ``.
I'd also want to change the wording to make it less clunky with more links, from `See ... for documentation.` to `For documentation, see ...`
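Assuming the label names guessed above are right, the reworded sentence might look something like this in RST (a sketch, not verified against the actual labels in the docs source):

```rst
For documentation, see :ref:`bltin-types` and :ref:`bltin-exceptions`,
together with :ref:`built-in functions <built-in-funcs>` and
:ref:`built-in constants <built-in-consts>`.
```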
<!-- gh-linked-prs -->
### Linked PRs
* gh-123857
* gh-125764
* gh-125765
<!-- /gh-linked-prs -->
| 9256be7ff0ab035cfd262127d893c9bc88b3c84c | b3c6b2c9e19ea84f617c13399c411044afbc3813 |
python/cpython | python__cpython-100798 | # Minor doc issue: dataclasses.KW_ONLY not documented as inspected by dataclass()
# [dataclasses](https://docs.python.org/3.11/library/dataclasses.html) documentation
According to [dataclasses](https://docs.python.org/3.11/library/dataclasses.html) documentation, the only places "where [dataclass()](https://docs.python.org/3.11/library/dataclasses.html#dataclasses.dataclass) inspects a type annotation" are:
* [Class variables](https://docs.python.org/3.11/library/dataclasses.html#class-variables) "One of two places"
* and [Init-only variables](https://docs.python.org/3.11/library/dataclasses.html#init-only-variables) "The other place"
However, [dataclasses.KW_ONLY](https://docs.python.org/3.11/library/dataclasses.html#dataclasses.KW_ONLY) logically must be (and [actually is](https://github.com/python/cpython/blob/3.11/Lib/dataclasses.py#L945)) the third place.
It's only a minor inconsistency, but it can be easily remedied (in a future-proof way, if ever a fourth, etc. place would be added) by changing:
* [Class variables](https://docs.python.org/3.11/library/dataclasses.html#class-variables) to "One of *few* places"
* and [Init-only variables](https://docs.python.org/3.11/library/dataclasses.html#init-only-variables) to "*An*other place"
(*italics* only to show the difference, not intended in the change)
<!-- gh-linked-prs -->
### Linked PRs
* gh-100798
* gh-100799
* gh-100800
<!-- /gh-linked-prs -->
| 659c2607f5b44a8a18a0840d1ac39df8a3219dd5 | 2f2fa03ff3d566b675020787e23de8fb4ca78e99 |
python/cpython | python__cpython-100771 | # Add `CALL_INTRINSIC` instruction.
We have a number of instructions that are complicated and executed fairly rarely. For example `MATCH_KEYS`, `CHECK_EG_MATCH`, `CLEANUP_THROW`.
These bulk out the interpreter, possibly slowing things down.
We should move code from these into helper functions, which can be called through a table from a `CALL_INTRINSIC` instruction.
The `CALL_INTRINSIC` instruction also provides a means for contributors to add new functionality without a deep understanding of the compiler.
Candidates for moving into `CALL_INTRINSIC` are:
* SETUP_ANNOTATIONS
* LOAD_BUILD_CLASS
* MATCH_KEYS
* CHECK_EG_MATCH
* CLEANUP_THROW
<!-- gh-linked-prs -->
### Linked PRs
* gh-100771
* gh-100774
<!-- /gh-linked-prs -->
| 28187141cc34063ef857976ddbca87ba09a882c2 | f20c553a458659f247fac1fb829f8172aa32f69a |
python/cpython | python__cpython-102032 | # Fields with single underscore names can mess up dataclasses
A similar issue to https://github.com/python/cpython/issues/96151. ericvsmith mentioned this is worth opening an issue for in https://github.com/python/cpython/pull/98143#issuecomment-1280306360
dataclasses uses variables with single underscore names as part of its implementation. This can cause interesting errors, for example:
```python
from dataclasses import dataclass, field
@dataclass
class X:
x: int = field(default_factory=lambda: 111)
_dflt_x: int = field(default_factory=lambda: 222)
X() # TypeError: '_HAS_DEFAULT_FACTORY_CLASS' object is not callable
```
The fix is simple: prefix all of these things with `__dataclass_`, to make name collisions more obviously the user's fault. We already do this for e.g. `__dataclass_self__` in the implementation.
<!-- gh-linked-prs -->
### Linked PRs
* gh-102032
<!-- /gh-linked-prs -->
| 718e86671fe62a706c460b7f049b196e434cb5b3 | 027223db96b0464c49a74513f82a1bf25aa510bd |
python/cpython | python__cpython-103942 | # TESTSUBDIRS missing some test directories
# Bug report
I noticed this when `test_sqlite3` was missing from the installed tests
# Your environment
3.11+ though I suspect some other versions might have inaccurate testdir listings too
here's the missing ones for the current primary branch for example:
```diff
diff --git a/Makefile.pre.in b/Makefile.pre.in
index 9961a864cb..9a19b5c147 100644
--- a/Makefile.pre.in
+++ b/Makefile.pre.in
@@ -1963,6 +1963,7 @@ LIBSUBDIRS= asyncio \
xmlrpc \
zoneinfo \
__phello__
+# find Lib/test/ -type d | cut -d/ -f2- | sort | grep -Ev '(crashers|leakers)' | xargs --replace echo $'\t\t{} \\'
TESTSUBDIRS= distutils/tests \
idlelib/idle_test \
test \
@@ -1972,7 +1973,6 @@ TESTSUBDIRS= distutils/tests \
test/data \
test/decimaltestdata \
test/dtracedata \
- test/eintrdata \
test/encoded_modules \
test/imghdrdata \
test/libregrtest \
@@ -2039,7 +2039,26 @@ TESTSUBDIRS= distutils/tests \
test/test_lib2to3/data/fixers \
test/test_lib2to3/data/fixers/myfixes \
test/test_peg_generator \
+ test/test_sqlite3 \
test/test_tkinter \
+ test/test_tomllib \
+ test/test_tomllib/data \
+ test/test_tomllib/data/invalid \
+ test/test_tomllib/data/invalid/array \
+ test/test_tomllib/data/invalid/array-of-tables \
+ test/test_tomllib/data/invalid/boolean \
+ test/test_tomllib/data/invalid/dates-and-times \
+ test/test_tomllib/data/invalid/dotted-keys \
+ test/test_tomllib/data/invalid/inline-table \
+ test/test_tomllib/data/invalid/keys-and-vals \
+ test/test_tomllib/data/invalid/literal-str \
+ test/test_tomllib/data/invalid/multiline-basic-str \
+ test/test_tomllib/data/invalid/multiline-literal-str \
+ test/test_tomllib/data/invalid/table \
+ test/test_tomllib/data/valid \
+ test/test_tomllib/data/valid/array \
+ test/test_tomllib/data/valid/dates-and-times \
+ test/test_tomllib/data/valid/multiline-basic-str \
test/test_tools \
test/test_ttk \
test/test_unittest \
```
let me know if you'd like me to send patch(es) or if there's a more programmatic way to handle this
<!-- gh-linked-prs -->
### Linked PRs
* gh-103942
* gh-103946
* gh-103970
<!-- /gh-linked-prs -->
| bf0b8a9f8d647515170cbdf3b6a8c0f44e0f37b3 | 72adaba6dd2aa1a9aeb9a992db7d854c89202e27 |
python/cpython | python__cpython-99966 | # `urllib.error.HTTPError(..., fp=None)` raises a `KeyError` instead of an `AttributeError` on attribute access
# Bug report
The exception `urllib.error.HTTPError(..., fp=None)` raises a `KeyError` instead of an `AttributeError` when accessing an attribute that does not exist.
```python
>>> from urllib.error import HTTPError
>>> x = HTTPError("url", 405, "METHOD NOT ALLOWED", None, None)
>>> assert getattr(x, "__notes__", ()) == ()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/vinmic/repos/cpython/Lib/tempfile.py", line 477, in __getattr__
file = self.__dict__['file']
~~~~~~~~~~~~~^^^^^^^^
KeyError: 'file'
```
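The root cause can be reproduced without `urllib` or `tempfile` at all; a minimal stand-in (the class name is invented) for the `__getattr__` seen in the traceback:

```python
# The default in getattr(obj, name, default) only swallows AttributeError,
# so a __getattr__ that leaks KeyError propagates it to the caller.
class Leaky:
    def __getattr__(self, name):
        return self.__dict__['file']   # KeyError when 'file' was never set

obj = Leaky()
try:
    getattr(obj, '__notes__', None)
except KeyError as exc:
    print('KeyError escaped getattr:', exc)
```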
# Your environment
```
Python 3.12.0a1+ (heads/main:bded5edd9a, Oct 27 2022, 19:23:58) [GCC 11.3.0] on linux
```
This bug should be reproducible for all python3 versions on all systems.
# Context
I found this error while running a code similar to this:
```python
from logging import getLogger
from urllib.error import HTTPError
try:
raise HTTPError("url", 405, "METHOD NOT ALLOWED", None, None)
except Exception:
getLogger().exception("Ooops")
```
Instead of having the exception logged, I ended up with the following trace:
```
--- Logging error ---
Traceback (most recent call last):
File "/home/vinmic/repos/cpython/../test_trio/test_trio.py", line 6, in <module>
raise HTTPError("url", 405, "METHOD NOT ALLOWED", None, None)
urllib.error.HTTPError: HTTP Error 405: METHOD NOT ALLOWED
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/vinmic/repos/cpython/Lib/logging/__init__.py", line 1160, in emit
msg = self.format(record)
^^^^^^^^^^^^^^^^^^^
File "/home/vinmic/repos/cpython/Lib/logging/__init__.py", line 999, in format
return fmt.format(record)
^^^^^^^^^^^^^^^^^^
File "/home/vinmic/repos/cpython/Lib/logging/__init__.py", line 711, in format
record.exc_text = self.formatException(record.exc_info)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/vinmic/repos/cpython/Lib/logging/__init__.py", line 661, in formatException
traceback.print_exception(ei[0], ei[1], tb, None, sio)
File "/home/vinmic/repos/cpython/Lib/traceback.py", line 124, in print_exception
te = TracebackException(type(value), value, tb, limit=limit, compact=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/vinmic/repos/cpython/Lib/traceback.py", line 697, in __init__
self.__notes__ = getattr(exc_value, '__notes__', None)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/vinmic/repos/cpython/Lib/tempfile.py", line 477, in __getattr__
file = self.__dict__['file']
~~~~~~~~~~~~~^^^^^^^^
KeyError: 'file'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/vinmic/repos/cpython/../test_trio/test_trio.py", line 8, in <module>
getLogger().exception("Ooops")
File "/home/vinmic/repos/cpython/Lib/logging/__init__.py", line 1574, in exception
self.error(msg, *args, exc_info=exc_info, **kwargs)
File "/home/vinmic/repos/cpython/Lib/logging/__init__.py", line 1568, in error
self._log(ERROR, msg, args, **kwargs)
File "/home/vinmic/repos/cpython/Lib/logging/__init__.py", line 1684, in _log
self.handle(record)
File "/home/vinmic/repos/cpython/Lib/logging/__init__.py", line 1700, in handle
self.callHandlers(record)
File "/home/vinmic/repos/cpython/Lib/logging/__init__.py", line 1770, in callHandlers
lastResort.handle(record)
File "/home/vinmic/repos/cpython/Lib/logging/__init__.py", line 1028, in handle
self.emit(record)
File "/home/vinmic/repos/cpython/Lib/logging/__init__.py", line 1168, in emit
self.handleError(record)
File "/home/vinmic/repos/cpython/Lib/logging/__init__.py", line 1082, in handleError
traceback.print_exception(t, v, tb, None, sys.stderr)
File "/home/vinmic/repos/cpython/Lib/traceback.py", line 124, in print_exception
te = TracebackException(type(value), value, tb, limit=limit, compact=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/vinmic/repos/cpython/Lib/traceback.py", line 763, in __init__
context = TracebackException(
^^^^^^^^^^^^^^^^^^^
File "/home/vinmic/repos/cpython/Lib/traceback.py", line 697, in __init__
self.__notes__ = getattr(exc_value, '__notes__', None)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/vinmic/repos/cpython/Lib/tempfile.py", line 477, in __getattr__
file = self.__dict__['file']
~~~~~~~~~~~~~^^^^^^^^
KeyError: 'file'
```
Note however that the exception is logged properly on python 3.9 (without the `exceptiongroup` module imported). So maybe the following patch should be applied on top of `HTTPError` being fixed in order to make tracebacks more robust:
```diff
diff --git a/Lib/traceback.py b/Lib/traceback.py
index 6270100348..a9c15d59be 100644
--- a/Lib/traceback.py
+++ b/Lib/traceback.py
@@ -694,7 +694,10 @@ def __init__(self, exc_type, exc_value, exc_traceback, *, limit=None,
# Capture now to permit freeing resources: only complication is in the
# unofficial API _format_final_exc_line
self._str = _safe_string(exc_value, 'exception')
- self.__notes__ = getattr(exc_value, '__notes__', None)
+ try:
+ self.__notes__ = getattr(exc_value, '__notes__', None)
+ except Exception:
+ self.__notes__ = None
if exc_type and issubclass(exc_type, SyntaxError):
# Handle SyntaxError's specially
```
<!-- gh-linked-prs -->
### Linked PRs
* gh-99966
* gh-100096
* gh-100097
<!-- /gh-linked-prs -->
| dc8a86893df37e137cfe992e95e7d66cd68e9eaf | 3c892022472eb975360fb3f0caa6f6fcc6fbf220 |
python/cpython | python__cpython-98761 | # Prefer "python" over "python3" for command line examples in docs.
# Documentation
Currently the docs are not consistent in using `python` vs. `python3` for command-line examples. As far as I'm aware, we should prefer `python`; see https://peps.python.org/pep-0394/#for-end-users-of-python
<!-- gh-linked-prs -->
### Linked PRs
* gh-98761
<!-- /gh-linked-prs -->
| 847d7708ba8739a5d5d31f22d71497527a7d8241 | 8795ad1bd0d6ee031543fcaf5a86a60b37950714 |
python/cpython | python__cpython-101618 | # logging documentation is tough for beginners
Something of a perennial issue that I've seen is that folks:
a) complain that logging is hard to use, hard to configure. Somebody called logging an "advanced" module recently.
b) a lot of real world code gets written not following what most people with even moderate experience consider the most basic of best practices. Namely:
- using `logging.warning` instead of `logger = logging.getLogger(__name__); logger.warning..`
- library code behaving like it "owns" the root logger: making changes on the root logger, calling logging.basicConfig, heck even calling logging.warning falls into this category, even though it seems totally innocuous, since it can call basicConfig
Now, I think the logging module is fantastic, and that the documentation is generally excellent, and I don't think it's hard to use. However, when idiomatic usage of `logging` is not really that difficult, I struggle to understand why there isn't an example of idiomatic, correct usage, right on the very first page, as the first code example that people see (instead of showing `logging.warning`, which is something that should almost never be used).
It's somewhat hard to tell someone that logging *isn't* an advanced module, when I say "it's easy, just do logger = logging.getLogger(name) at the top of each file, and then..." but such code is only shown as a block in the "Advanced" section of the tutorial, which itself is a small link off the main page.
I am not suggesting any dramatic change to the overall documentation; but it is a bit dramatic I suppose insofar as I think it's important to change the very *first* thing that people see when they look at logging. Namely, on that very first screen, I would like to see code blocks like this:
```python
# myapp.py
import logging
import mylib
logger = logging.getLogger(__name__)
def main():
logging.basicConfig(filename='myapp.log', level=logging.INFO)
logger.info('Started')
mylib.do_something()
logger.info('Finished')
if __name__ == '__main__':
main()
```
```python
# mylib.py
import logging
logger = logging.getLogger(__name__)
def do_something():
logger.debug('Debug message')
logger.warning('Warning message')
```
Along with a few accompanying sentences, before or after, explaining the basic ideas here (and yes, it should only take a few sentences), namely:
- most files should only need to create a `logger`, and then use logger methods, and not have any other interactions.
- main should be the place where logger configuration is handled (generally, and for simple use cases)
- encourage the user to play around with the log levels
In my view, this is all of the starting information you need to start using logging well, at the very start. I use logging quite a lot, but 95% of the time at least, my real world production code still uses logging in this basic way, and it works fantastically. People new to logging often are not going to stick around for 30 minutes and read most of the documentation; let's make sure that if they only read one screen worth of docs on their first visit, it leaves them writing code they'll be happy with later.
I'm happy to submit a PR with a draft of the new first section (and slight removal of duplicated material in the tutorial) if there is some receptiveness to the ideas here.
<!-- gh-linked-prs -->
### Linked PRs
* gh-101618
* gh-116733
* gh-116734
<!-- /gh-linked-prs -->
| 7f418fb111dec325b5c9fe6f6e96076049322f02 | 8e2aab7ad5e1c8b3360c1e1b80ddadc0845eaa3e |
python/cpython | python__cpython-98768 | # AIX build fails with the main branch
The Python main branch fails to build on AIX with the error below.
```
gcc -pthread -c -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -std=c11 -Wextra -Wno-unused-parameter -Wno-missing-field-initializers -Werror=implicit-function-declaration -fvisibility=hidden -I./Include/internal -I. -I./Include -DPy_BUILD_CORE -o Objects/typeobject.o Objects/typeobject.c
In file included from ./Include/internal/pycore_runtime.h:13,
from ./Include/internal/pycore_pystate.h:11,
from ./Include/internal/pycore_call.h:11,
from Objects/typeobject.c:4:
./Include/internal/pycore_global_strings.h:674:41: error: 'struct `<anonymous>`' has no member named '__Bool'; did you mean '_hook'?
(_Py_SINGLETON(strings.identifiers._ ## NAME._ascii.ob_base))
^
./Include/internal/pycore_global_objects.h:24:31: note: in definition of macro '_Py_GLOBAL_OBJECT'
_PyRuntime.global_objects.NAME
^~~~
./Include/internal/pycore_global_strings.h:674:7: note: in expansion of macro '_Py_SINGLETON'
(_Py_SINGLETON(strings.identifiers._ ## NAME._ascii.ob_base))
^~~~~~~~~~~~~
Objects/typeobject.c:8569:38: note: in expansion of macro '_Py_ID'
PyDoc_STR(DOC), .name_strobj = &_Py_ID(NAME) }
^~~~~~
Objects/typeobject.c:8579:5: note: in expansion of macro 'ETSLOT'
ETSLOT(NAME, as_number.SLOT, FUNCTION, WRAPPER, \
^~~~~~
Objects/typeobject.c:8686:5: note: in expansion of macro 'UNSLOT'
UNSLOT(__bool__, nb_bool, slot_nb_bool, wrap_inquirypred,
^~~~~~
gmake: *** [Makefile:2525: Objects/typeobject.o] Error 1
```
<!-- gh-linked-prs -->
### Linked PRs
* gh-98768
<!-- /gh-linked-prs -->
| 618b7a8260bb40290d6551f24885931077309590 | ba4731d149185894c77d201bc5804da90ff45eee |
python/cpython | python__cpython-100018 | # PyMemoryView_FromMemory is part of stable ABI but the flag constants (PyBUF_READ, etc.) are not
# Feature or enhancement
I'd rather not write:
```c
#ifndef PyBUF_READ
#define PyBUF_READ 0x100
#endif
```
and would instead like to rely on these constants even in stable-ABI mode.
Relevant error message:
```
whatever.c:1:30: error: ‘PyBUF_READ’ undeclared (first use in this function)
```
<!-- gh-linked-prs -->
### Linked PRs
* gh-100018
* gh-100020
<!-- /gh-linked-prs -->
| f24738742cc5d3e00409d55ced789cd544b346b5 | 922a6cf6c265e2763a003291885ff74d46203fc3 |
python/cpython | python__cpython-98643 | # configure: `--with-dbmliborder=gdbm` no longer satisfies `_dbm`
# Bug report
It seems that the `configure` script is no longer able to build `_dbm` module from `gdbm_compat`.
Excerpts from configure log (full log: [configure.txt](https://github.com/python/cpython/files/9857724/configure.txt)):
```
$ ./configure -C --with-dbmliborder=gdbm
[...]
checking gdbm.h usability... yes
checking gdbm.h presence... yes
checking for gdbm.h... yes
checking for gdbm_open in -lgdbm... yes
checking ndbm.h usability... no
checking ndbm.h presence... no
checking for ndbm.h... no
checking for ndbm presence and linker args... ()
checking gdbm/ndbm.h usability... yes
checking gdbm/ndbm.h presence... yes
checking for gdbm/ndbm.h... yes
checking gdbm-ndbm.h usability... no
checking gdbm-ndbm.h presence... no
checking for gdbm-ndbm.h... no
checking for library containing dbm_open... -lgdbm_compat
checking db.h usability... no
checking db.h presence... no
checking for db.h... no
checking for --with-dbmliborder... gdbm
checking for _dbm module CFLAGS and LIBS...
[...]
checking for stdlib extension module _dbm... missing
checking for stdlib extension module _gdbm... yes
[...]
```
It seems that the problem is in the following snippet:
```sh
for db in $with_dbmliborder; do
case "$db" in
ndbm)
if test "$have_ndbm" = yes; then
DBM_CFLAGS="-DUSE_NDBM"
DBM_LIBS="$dbm_ndbm"
have_dbm=yes
break
fi
;;
gdbm)
if test "$have_gdbm_compat" = yes; then
DBM_CFLAGS="-DUSE_GDBM_COMPAT"
DBM_LIBS="-lgdbm_compat"
have_dbm=yes
break
fi
;;
```
However, `have_gdbm_compat` is not declared anymore, probably because of:
```sh
AC_MSG_CHECKING([for ndbm presence and linker args])
AS_CASE([$ac_cv_search_dbm_open],
[*ndbm*|*gdbm_compat*], [
dbm_ndbm="$ac_cv_search_dbm_open"
have_ndbm=yes
],
```
declaring `have_ndbm=yes` in this case.
Seems to have been introduced in ec5e253556875640b1ac514e85c545346ac3f1e0 by @tiran.
# Your environment
- CPython versions tested on: 3.12.0a1
- Operating system and architecture: Gentoo/amd64
<!-- gh-linked-prs -->
### Linked PRs
* gh-98643
<!-- /gh-linked-prs -->
| 02a72f080dc89b037c304a85a0f96509de9ae688 | 07a87f74faf31cdd755ac7de6d44531139899d1b |
python/cpython | python__cpython-99664 | # `sys._git` is empty on Windows
Compare/contrast these two official releases:
```
C:\> python3.11
Python 3.11.0 (main, Oct 24 2022, 18:26:48) [MSC v.1933 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import sys
>>> sys._git
('CPython', '', '')
>>> ^Z
C:\> python3.10
Python 3.10.8 (tags/v3.10.8:aaaf517, Oct 11 2022, 16:50:30) [MSC v.1933 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import sys
>>> sys._git
('CPython', 'tags/v3.10.8', 'aaaf517')
>>> ^Z
```
Something broke in the build to not capture git information correctly.
<!-- gh-linked-prs -->
### Linked PRs
* gh-99664
* gh-99665
<!-- /gh-linked-prs -->
| 49e554dbafc87245c1364ae00ad064a96f5cb995 | c450c8c9ed6e420025f39d0e4850a79f8160cdcd |
python/cpython | python__cpython-102657 | # Expose _Py_NewInterpreter() as Py_NewInterpreterFromConfig()
A while back I added `_Py_NewInterpreter()` (a "private" API) to support configuring the new interpreter. Ultimately, I'd like to adjust
the signature a little and then make the function part of the public API (as `Py_NewInterpreterFromConfig()`).
My plan:
1. change the argument to a new `_PyInterpreterConfig` struct
2. rename the function to `Py_NewInterpreterFromConfig()`, inspired by `Py_InitializeFromConfig()` (takes a `PyInterpreterConfig` instead of `isolated_subinterpreter`)
3. split up the boolean `isolated_subinterpreter` into the corresponding multiple granular settings
* allow_fork
* allow_subprocess
* allow_threads
4. drop `PyConfig._isolated_interpreter`
Note that the current default (`Py_NewInterpreter()` and `Py_Initialize*()`) allows fork, subprocess, and threads, and the optional "isolated" interpreter disables all three. I'm not planning on changing any of that here.
My main objective here is to expose the existing API in a way that we can do the following afterward:
* stop giving the option to disallow subprocess (i.e. drop `PyInterpreterConfig.allow_subprocess`)
* add an option to disallow just "exec" instead
* stop disallowing threads as a default behavior for an "isolated" interpreter (we'd still keep the option though)
* add the option to disallow daemon threads
* add an option to check if each extension supports running in multiple interpreters
* add other options for PEP 684 (per-interpreter GIL)
<!-- gh-linked-prs -->
### Linked PRs
* gh-102657
* gh-102658
* gh-102882
* gh-102883
* gh-107191
* gh-107198
<!-- /gh-linked-prs -->
| 3bb475662ba998e1b2d26fa370794d0804e33927 | 910a64e3013bce821bfac75377cbe88bedf265de |
python/cpython | python__cpython-104474 | # `_SSLProtocolTransport` keeps reference to protocol after close
`_SSLProtocolTransport` keeps a reference to the protocol after close. This leads to reference cycles between the transport and the protocol, which is bad for GC. Clearing the reference is better, as it frees the memory immediately without waiting for the garbage collector. In some cases this causes memory leaks: if an exception occurs, deallocation is delayed even further, since the traceback keeps the frame alive and the frame keeps its locals alive.
https://github.com/python/cpython/blob/8d574234d49acf3472f7151ee4296da0f297d6f2/Lib/asyncio/sslproto.py#L102-L111
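A minimal sketch of why this matters (`Protocol`/`Transport` here are illustrative stand-ins, not the asyncio classes): once the transport drops its protocol reference on close, plain refcounting frees the protocol immediately, with no gc collection pass.

```python
import weakref

class Protocol:
    def __init__(self):
        self.transport = None

class Transport:
    def __init__(self, protocol):
        self._protocol = protocol
        protocol.transport = self  # transport <-> protocol reference cycle

    def close(self):
        self._protocol = None  # drop the reference; refcounting alone can now free it

proto = Protocol()
tr = Transport(proto)
ref = weakref.ref(proto)
tr.close()
del proto
assert ref() is None  # freed immediately, no gc pass required
```

Without the `self._protocol = None` line, the pair would survive until the cyclic garbage collector runs.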
<!-- gh-linked-prs -->
### Linked PRs
* gh-104474
* gh-104485
<!-- /gh-linked-prs -->
| fb8739f0b6291fb048a94d6312f59ba4d10a20ca | 88c5c586708dcff369c49edae947d487a80f0346 |
python/cpython | python__cpython-98459 | # Unittest: self-referencing explicit exception cause results in infinite loop
# Bug report
If an exception is raised with a self-referencing \_\_cause__ or \_\_context__ then TestResult._clean_tracebacks() in result.py enters an infinite loop.
Minimal example 1:
```python
try:
raise Exception()
except Exception as e:
raise e from e
```
Minimal example 2:
```python
try:
e = Exception()
raise e from e
except Exception as e:
raise e
```
# Identified Cause
Self-references are not checked while unwinding the chained exception [on line 216 of result.py](https://github.com/python/cpython/blob/3.9/Lib/unittest/result.py#L216)
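A hedged sketch of the cycle-safe unwinding (names are illustrative, not the actual `result.py` code): track visited exceptions by identity while following `__cause__`/`__context__`, so a self-reference terminates the walk.

```python
def clean_chain(exc):
    """Walk a chained exception, stopping on self-references and cycles."""
    seen = set()
    while exc is not None and id(exc) not in seen:
        seen.add(id(exc))
        # ... _clean_tracebacks() would clear exc.__traceback__ here ...
        exc = exc.__cause__ if exc.__cause__ is not None else exc.__context__
    return len(seen)

try:
    raise Exception()
except Exception as e:
    try:
        raise e from e  # self-referencing __cause__, as in minimal example 1
    except Exception as cyclic:
        visited = clean_chain(cyclic)  # terminates instead of looping forever

assert visited == 1
```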
# Your environment
<!-- Include as many relevant details as possible about the environment you experienced the bug in -->
- CPython versions tested on: 3.9, 3.10
- Operating system and architecture: macOS
<!-- gh-linked-prs -->
### Linked PRs
* gh-98459
* gh-99995
* gh-99996
<!-- /gh-linked-prs -->
| 72ec518203c3f3577a5e888b12f10bb49060e6c2 | 1012dc1b4367e05b92d67ea6925a39d50dce31b7 |
python/cpython | python__cpython-120763 | # Source location of return instruction in a with block is incorrect
```
def f():
with x:
return 42
import dis
from pprint import pprint as pp
def pos(p):
return (p.lineno, p.end_lineno, p.col_offset, p.end_col_offset)
pp([(pos(x.positions), x.opname, x.argval) for x in dis.get_instructions(f)])
```
Output:
```
[((1, 1, 0, 0), 'RESUME', 0),
((2, 2, 6, 7), 'LOAD_GLOBAL', 'x'),
((2, 3, 1, 12), 'BEFORE_WITH', None),
((2, 3, 1, 12), 'POP_TOP', None),
((3, 3, 10, 12), 'NOP', None),
((2, 3, 1, 12), 'LOAD_CONST', None),
((2, 3, 1, 12), 'LOAD_CONST', None),
((2, 3, 1, 12), 'LOAD_CONST', None),
((2, 3, 1, 12), 'CALL', 2),
((2, 3, 1, 12), 'POP_TOP', None),
((2, 3, 1, 12), 'LOAD_CONST', 42), <-- incorrect
((2, 3, 1, 12), 'RETURN_VALUE', None), <-- incorrect
((2, 3, 1, 12), 'PUSH_EXC_INFO', None),
((2, 3, 1, 12), 'WITH_EXCEPT_START', None),
((2, 3, 1, 12), 'POP_JUMP_IF_TRUE', 50),
((2, 3, 1, 12), 'RERAISE', 2),
((2, 3, 1, 12), 'POP_TOP', None),
((2, 3, 1, 12), 'POP_EXCEPT', None),
((2, 3, 1, 12), 'POP_TOP', None),
((2, 3, 1, 12), 'POP_TOP', None),
((2, 3, 1, 12), 'LOAD_CONST', None),
((2, 3, 1, 12), 'RETURN_VALUE', None),
((None, None, None, None), 'COPY', 3),
((None, None, None, None), 'POP_EXCEPT', None),
((None, None, None, None), 'RERAISE', 1)]
```
<!-- gh-linked-prs -->
### Linked PRs
* gh-120763
* gh-120786
* gh-120787
<!-- /gh-linked-prs -->
| 55596ae0446e40f47e2a28b8897fe9530c32a19a | 8bc76ae45f48bede7ce3191db08cf36d879e6e8d |
python/cpython | python__cpython-100118 | # Add itertools.batched()
This was requested on python-ideas:
```python
from itertools import islice

def batched(iterable, n):
    "Batch data into lists of length n. The last batch may be shorter."
    # batched('ABCDEFG', 3) --> ABC DEF G
    if n < 1:
        raise ValueError('n must be >= 1')
    it = iter(iterable)
    while (batch := list(islice(it, n))):
        yield batch
```
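A quick self-contained check of the recipe's behavior (the definition is repeated so the snippet runs on its own):

```python
from itertools import islice

def batched(iterable, n):
    "Batch data into lists of length n. The last batch may be shorter."
    if n < 1:
        raise ValueError('n must be >= 1')
    it = iter(iterable)
    while (batch := list(islice(it, n))):
        yield batch

assert list(batched('ABCDEFG', 3)) == [['A', 'B', 'C'], ['D', 'E', 'F'], ['G']]
assert list(batched([], 3)) == []  # empty input yields no batches
```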
<!-- gh-linked-prs -->
### Linked PRs
* gh-100118
* gh-100138
* gh-100323
<!-- /gh-linked-prs -->
| 35cc0ea736a323119157117d93e5d68d8247e89f | 41d4ac9da348ca33056e271d71588b2dc3a6d48d |
python/cpython | python__cpython-98637 | # Give python-isal a mention in the zlib/gzip documentation
# Documentation
The documentation mentions several PyPI packages such as numpy and requests as an alternative for standard library packages.
I would like to propose that [python-isal](https://github.com/pycompression/python-isal) gets a mention in the "see also" section of the zlib and gzip documentation. Simply: "python-isal, faster zlib and gzip decompression and compression".
Since the documentation cannot just recommend any random project out there here follows the argumentation why python-isal should get a mention.
1. Python-isal uses stdlib code and uses a PSF-2.0 license.
2. As a result the following improvements could be made to the stdlib code:
- https://github.com/python/cpython/pull/24645
- https://github.com/python/cpython/pull/24647
- https://github.com/python/cpython/pull/25011
- https://github.com/python/cpython/pull/27941
- https://github.com/python/cpython/pull/97664
- https://github.com/python/cpython/pull/22408
- https://github.com/python/cpython/pull/29028
Python-isal is a "good citizen" of the python ecosystem. All the improvements have been ported back to CPython. The useful thing about python-isal is that it allows the gzip and zlib code of CPython to evolve and get tested by a smaller group of users before it lands in CPython itself. The PRs above were all suggested only after the changes were found to be stable in releases of python-isal.
Therefore more python-isal users is also beneficial to CPython itself. It is also beneficial for the users to be able to install a library that offers 2x faster decompression and 5x(!) faster compression. Hence a small one-liner in "see also" is warranted in my opinion.
The next thing for python-isal to tackle is this: https://github.com/python/cpython/issues/89550 . When a working solution is found this will be backported to CPython.
Disclosure: I am the python-isal maintainer.
<!-- gh-linked-prs -->
### Linked PRs
* gh-98637
* gh-132894
<!-- /gh-linked-prs -->
| b1fc8b69ec4c29026cd8786fc5da0c498c7dcd57 | 99b71efe8e9d59ce04b6d59ed166b57dff3e84d8 |
python/cpython | python__cpython-101508 | # Update bundled pip to 22.3
# Feature or enhancement
Routine update of the bundled pip and setuptools wheels in `ensurepip` following a pip release.
<!-- gh-linked-prs -->
### Linked PRs
* gh-101508
<!-- /gh-linked-prs -->
| 616aec1ff148ba4570aa2d4b8ea420c715c206e4 | d9de0792482d2ded364b0c7d2867b97a5da41b12 |
python/cpython | python__cpython-103163 | # Document new 3.11 enum APIs (ReprEnum, global_* and/or show_flag_values)
As discovered in #98295 , there are several undocumented new APIs in the `enum` module:
* `ReprEnum` is documented in [What's New](https://docs.python.org/3.11/whatsnew/3.11.html#enum) (and exported in `__all__`), but not anywhere in the [enum library module documentation](https://docs.python.org/3.11/library/enum.html#enum.StrEnum), which seems like an oversight.
* Likewise, `global_enum` is documented in What's New (and exported in `__all__`) and was previously documented in the library docs, but that documentation was reverted in #30637 , and not restored, so I'm unsure on the current status.
* `global_str`, `global_enum_repr` and `global_flag_repr` were added (to replace the `__str__()` and `__repr__()` of the appropriate classes) and are not marked as private (`_`) and are exported by `__all__`, but aren't documented either. Should they be?
* `show_flag_values` was added and is not in `__all__`, but its use is referred to in an error message (cited in the docs) and it is not marked private (`_`). Should this be documented as well?
@ethanfurman your guidance here would be much appreciated, thanks. I'm happy to review/copyedit a PR, or if you prefer, I can draft something and you review it. Ideally, we should get this in before the 3.11 release in a week or so...
Related: #95913
<!-- gh-linked-prs -->
### Linked PRs
* gh-103163
* gh-103227
<!-- /gh-linked-prs -->
| 5ffc1e5a21de9a30566095386236db44695d184a | d3a7732dd54c27ae523bef73efbb0c580ce2fbc0 |
python/cpython | python__cpython-98252 | # struct.pack error messages are misleading and inconsistent
<!--
If you're new to Python and you're not sure whether what you're experiencing is a bug, the CPython issue tracker is not
the right place to seek help. Consider the following options instead:
- reading the Python tutorial: https://docs.python.org/3/tutorial/
- posting in the "Users" category on discuss.python.org: https://discuss.python.org/c/users/7
- emailing the Python-list mailing list: https://mail.python.org/mailman/listinfo/python-list
- searching our issue tracker (https://github.com/python/cpython/issues) to see if
your problem has already been reported
-->
# Bug report
## 1. Misleading error message
```
>>> import struct
>>> struct.pack(">Q", -1)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
struct.error: int too large to convert
```
`-1` is not "too large" to convert to ulonglong, so the error message is wrong: the real problem is that `-1` is not in the range of ulonglong. The current message is not helpful for users trying to debug.
Compared to other error messages:
Code
```
import struct
for endianness in "<>":
for size in "BHILQ":
try:
fmt = endianness + size
struct.pack(fmt, -1)
except struct.error as e:
print("Error msg of " + fmt + ":", e)
```
stdout
```
Error msg of <B: ubyte format requires 0 <= number <= 255
Error msg of <H: ushort format requires 0 <= number <= 65535
Error msg of <I: argument out of range
Error msg of <L: argument out of range
Error msg of <Q: argument out of range
Error msg of >B: ubyte format requires 0 <= number <= 255
Error msg of >H: argument out of range
Error msg of >I: argument out of range
Error msg of >L: argument out of range
Error msg of >Q: int too large to convert
```
## 2. Inconsistent error messages when packing into different integral types
See the output above.
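One way to get a uniform, informative message today is to validate before packing. A workaround sketch (`pack_checked` is a hypothetical helper, not stdlib; it computes the valid range from the format's size and case):

```python
import struct

def pack_checked(fmt, value):
    """Pack one integer with a consistent out-of-range message."""
    size = struct.calcsize(fmt)
    signed = fmt[-1].islower()  # b/h/i/l/q are signed; B/H/I/L/Q unsigned
    lo = -(1 << (8 * size - 1)) if signed else 0
    hi = (1 << (8 * size - 1)) - 1 if signed else (1 << (8 * size)) - 1
    if not lo <= value <= hi:
        raise struct.error(
            f"'{fmt}' format requires {lo} <= number <= {hi}, got {value}")
    return struct.pack(fmt, value)

assert pack_checked('>Q', 1) == b'\x00\x00\x00\x00\x00\x00\x00\x01'
try:
    pack_checked('>Q', -1)
except struct.error as e:
    assert 'requires 0 <= number <= 18446744073709551615' in str(e)
```

The stdlib fix would presumably produce messages in the same `format requires lo <= number <= hi` shape for every integer code.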
## A possible solution
I can create a PR to fix the 1st problem. For the 2nd problem, https://github.com/python/cpython/pull/28178#issuecomment-914146646 and https://github.com/python/cpython/issues/89197#issuecomment-1093927199 said that the inconsistency can be fixed, so I can probably fix this in the same PR.
# Your environment
<!-- Include as many relevant details as possible about the environment you experienced the bug in -->
- CPython versions tested on: Python 3.12.0a0 (heads/main:ccab67b, Oct 12 2022, 15:25:20) [GCC 12.2.0] on linux
- Operating system and architecture: Arch Linux 5.19.13.arch1-1, x86-64
- Native endianness: Little-endian
<!--
You can freely edit this text. Remove any lines you believe are unnecessary.
-->
<!-- gh-linked-prs -->
### Linked PRs
* gh-98252
<!-- /gh-linked-prs -->
| 854a878e4f09cd961ba5135567f7a5b5f86d7be9 | 2ae894b6d1995a3b9f95f4a82eec6dedd3ba5298 |
python/cpython | python__cpython-101689 | # inspect.getsource() on sourceless dataclass raises undocumented exception
<!--
If you're new to Python and you're not sure whether what you're experiencing is a bug, the CPython issue tracker is not
the right place to seek help. Consider the following options instead:
- reading the Python tutorial: https://docs.python.org/3/tutorial/
- posting in the "Users" category on discuss.python.org: https://discuss.python.org/c/users/7
- emailing the Python-list mailing list: https://mail.python.org/mailman/listinfo/python-list
- searching our issue tracker (https://github.com/python/cpython/issues) to see if
your problem has already been reported
-->
# Bug report
If I run the following program in Python 3.10:
```py
from dataclasses import dataclass
from inspect import getsource
defs = {}
exec(
"""
@dataclass
class C:
"The source for this class cannot be located."
""",
{"dataclass": dataclass},
defs,
)
try:
getsource(defs["C"])
except OSError:
print("Got the documented exception.")
```
The output is:
```
$ python sourceless_dataclass.py
Traceback (most recent call last):
File "<path>/sourceless_dataclass.py", line 16, in <module>
getsource(defs["C"])
File "/usr/lib/python3.10/inspect.py", line 1147, in getsource
lines, lnum = getsourcelines(object)
File "/usr/lib/python3.10/inspect.py", line 1129, in getsourcelines
lines, lnum = findsource(object)
File "/usr/lib/python3.10/inspect.py", line 940, in findsource
file = getsourcefile(object)
File "/usr/lib/python3.10/inspect.py", line 817, in getsourcefile
filename = getfile(object)
File "/usr/lib/python3.10/inspect.py", line 786, in getfile
raise TypeError('{!r} is a built-in class'.format(object))
TypeError: <class 'C'> is a built-in class
```
The [documentation](https://docs.python.org/3.10/library/inspect.html#inspect.getsource) states that `OSError` can be raised but does not mention `TypeError`.
The implementation of `inspect.getsource()` assumes that if a class has no `__module__` attribute, it must be a built-in class, but a sourceless dataclass doesn't have a `__module__` attribute either. I don't know whether this is a bug in `getsource()` or whether the generation of the dataclass should set `__module__` to `'__main__'`, but in any case the behavior is not as documented.
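Until the behavior matches the docs, callers can defend against both exceptions. A minimal sketch (`safe_getsource` is a hypothetical helper):

```python
import inspect

def safe_getsource(obj):
    """Return the source text, or None if it cannot be located."""
    try:
        return inspect.getsource(obj)
    except (OSError, TypeError):  # TypeError is the undocumented case above
        return None

assert safe_getsource(int) is None                   # built-in: TypeError internally
assert "def " in safe_getsource(inspect.getsource)   # normal function still works
```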
# Your environment
- CPython versions tested on: Python 3.10.6
- Operating system and architecture: Ubuntu Linux 18.04
<!-- gh-linked-prs -->
### Linked PRs
* gh-101689
* gh-102969
* gh-102970
<!-- /gh-linked-prs -->
| b6132085ca5418f714eff6e31d1d03369d3fd1d9 | 58d2b30c012c3a9fe5ab747ae47c96af09e0fd15 |
python/cpython | python__cpython-127547 | # email: get_payload(decode=True) doesn't handle Content-Transfer-Encoding with trailing white space
If the Content-Transfer-Encoding header field of a message part has trailing whitespace, for example "base64 ", get_payload(decode=True) does not return the properly decoded payload.
Here is a minimal code example. Sample message file attached.
```
import email
from email import policy
with open('msg.txt', 'rb') as f:
msg = email.message_from_binary_file(f, policy=policy.default)
parts = list(msg.walk())
parts[1].get_payload(decode=True)
> b'SGVsbG8uIFRlc3Rpbmc=\n'
```
The parsed content-transfer-encoding header's `cte` value is stripped of the whitespace, but its string value is not.
```
>>> header = parts[1].get('content-transfer-encoding')
>>> header.cte
'base64'
>>> str(header)
'base64 '
```
Which is what appears to be used in the decode attempt
https://github.com/python/cpython/blob/main/Lib/email/message.py#L289
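Until this is fixed, one workaround sketch is to normalize the header before decoding (a small message is constructed inline here for illustration instead of the attached `msg.txt`):

```python
import email
from email import policy

raw = (b"MIME-Version: 1.0\r\n"
       b"Content-Type: text/plain\r\n"
       b"Content-Transfer-Encoding: base64 \r\n"  # note the trailing space
       b"\r\n"
       b"SGVsbG8uIFRlc3Rpbmc=\r\n")
msg = email.message_from_bytes(raw, policy=policy.default)

# Replace the header with its stripped string value, then decode.
cte = str(msg.get('Content-Transfer-Encoding', '')).strip()
msg.replace_header('Content-Transfer-Encoding', cte)
assert msg.get_payload(decode=True) == b'Hello. Testing'
```

Stripping is idempotent, so the workaround stays harmless once the parser itself strips the value.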
- CPython versions tested on: 3.9.13, 3.10.7
- Operating system and architecture: macOS 12.6 Intel
[msg.txt](https://github.com/python/cpython/files/9757951/msg.txt)
<!-- gh-linked-prs -->
### Linked PRs
* gh-127547
* gh-128528
* gh-128529
<!-- /gh-linked-prs -->
| a62ba52f1439c1f878a3ff9b8544caf9aeef9b90 | 3b231be8f000ae59faa04d5a2f1af11beafee866 |
python/cpython | python__cpython-98170 | # `dataclasses.astuple` breaks on `DefaultDict`
# Bug report
This is very similar to https://github.com/python/cpython/issues/79721
```python
from dataclasses import dataclass, astuple
from typing import DefaultDict, List
from collections import defaultdict
@dataclass
class C:
mp: DefaultDict[str, List]
dd = defaultdict(list)
dd["x"].append(12)
c = C(mp=dd)
d = astuple(c) # throws "TypeError: first argument must be callable or None"
assert d == ({"x": [12]},)
assert d[0] is not c.mp # make sure defaultdict is copied
```
Basically applying the same fix for `asdict` from https://github.com/python/cpython/pull/32056 to `astuple`.
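Until `astuple` gets the same treatment, a workaround sketch (`shallow_astuple` is a hypothetical helper) that sidesteps the recursive rebuild by shallow-copying each field:

```python
import copy
from collections import defaultdict
from dataclasses import dataclass, fields
from typing import DefaultDict, List

@dataclass
class C:
    mp: DefaultDict[str, List]

def shallow_astuple(obj):
    """Tuple of shallow field copies; avoids type(obj)(generator) reconstruction."""
    return tuple(copy.copy(getattr(obj, f.name)) for f in fields(obj))

dd = defaultdict(list)
dd["x"].append(12)
c = C(mp=dd)
d = shallow_astuple(c)
assert d == ({"x": [12]},)
assert d[0] is not c.mp  # the defaultdict was copied
```

`copy.copy` preserves the `default_factory`, unlike `astuple`'s `type(obj)(...)` call, which misinterprets the generator as the factory argument.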
# Your environment
<!-- Include as many relevant details as possible about the environment you experienced the bug in -->
- CPython versions tested on: 3.10.7
- Operating system and architecture: macOS, arm64
<!-- gh-linked-prs -->
### Linked PRs
* gh-98170
<!-- /gh-linked-prs -->
| 71e37d907905b0504c5bb7b25681adeea2157492 | 85ba8a3e03707092800cbf2a29d95e0b495e3cb7 |
python/cpython | python__cpython-107552 | # Update Refcount-related Docs
# Documentation
PEP 683 includes [some docs changes](https://peps.python.org/pep-0683/#documentation) that should help narrow expectations about refcount semantics. Those changes shouldn't need to wait for the PEP.
I'd be interested in backporting these changes as far back as possible.
<!-- gh-linked-prs -->
### Linked PRs
* gh-107552
* gh-107752
* gh-107753
* gh-107754
<!-- /gh-linked-prs -->
| 5dc825d504ad08d64c9d1ce578f9deebbe012604 | 0191af97a6bf3f720cd0ae69a0bdb14c97351679 |
python/cpython | python__cpython-98109 | # Add pickleability to zipfile.Path
In [zipp 3.9.1](https://zipp.readthedocs.io/en/latest/history.html#v3-9-0), zipp.Path added support for pickleability. Let's sync with that version and incorporate that behavior.
<!-- gh-linked-prs -->
### Linked PRs
* gh-98109
<!-- /gh-linked-prs -->
| 93f22d30eb7bf579d511b1866674bc1c2513dde9 | 5f8898216e7b67b7de6b0b1aad9277e88bcebfdb |