id int64 | number int64 | title string | state string | created_at timestamp[s] | updated_at timestamp[s] | closed_at timestamp[s] | html_url string | is_pull_request bool | pull_request_url string | pull_request_html_url string | user_login string | comments_count int64 | body string | labels list | reactions_plus1 int64 | reactions_minus1 int64 | reactions_laugh int64 | reactions_hooray int64 | reactions_confused int64 | reactions_heart int64 | reactions_rocket int64 | reactions_eyes int64 | comments list |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
3,122,782,208 | 61,578 | DOC: Validate versions.json before building docs #61573 | closed | 2025-06-05T21:48:19 | 2025-06-07T10:00:46 | 2025-06-07T10:00:46 | https://github.com/pandas-dev/pandas/pull/61578 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61578 | https://github.com/pandas-dev/pandas/pull/61578 | iabhi4 | 5 | Adds a JSON validity check for `versions.json` directly inside `pandas_web` during context generation. This ensures malformed JSON (e.g., trailing commas) is caught early, preventing issues like the broken version dropdown in #61572
- [x] Closes #61573 | [
"CI",
"Web"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"> Can you move this check to main, and just open the file and load it as json, with a comment about what's for.\r\n\r\nThanks for the review @datapythonista, just to confirm: when you say \"move this check to main\", do you mean the `main()` function itself, or the` __main__` block. My assumption is the `__main__`... |
3,122,680,868 | 61,577 | TST: Remove match= in test_setitem_invalid to avoid PytestWarning | closed | 2025-06-05T21:09:39 | 2025-06-13T17:44:38 | 2025-06-13T17:44:31 | https://github.com/pandas-dev/pandas/pull/61577 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61577 | https://github.com/pandas-dev/pandas/pull/61577 | iabhi4 | 3 | test only change - This PR removes the use of `match=""` in `test_setitem_invalid` within `base/setitem.py`.
Using an empty string as a match pattern triggers a `PytestWarning` in newer versions of pytest, so the `match` argument has been removed since the message was not being validated
- [x] closes #61557
- [x] Ran pre-commit check | [
"Testing"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"> Personally I'd just remove the match parameter (unless we have a check to make sure it exists).\r\n> \r\n> But either way this looks good, thanks for taking care of it @iabhi4.\r\n\r\nI tend to agree with @datapythonista, unless there is a specific check that we are looking for and in this case it does not appea... |
3,122,405,296 | 61,576 | DataFrames Class update | closed | 2025-06-05T19:32:17 | 2025-06-13T09:55:40 | 2025-06-13T09:55:40 | https://github.com/pandas-dev/pandas/pull/61576 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61576 | https://github.com/pandas-dev/pandas/pull/61576 | SaraTammame | 1 | - [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| [] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Thanks @SaraTammame for the contribution, but this seems to be some personal tests, so I guess we can close this.\r\n\r\nIf there is a contribution you want to make, please try to link an issue or provide a description, so we have more context when reviewing."
] |
3,122,398,272 | 61,575 | DataFrame class update | closed | 2025-06-05T19:30:08 | 2025-06-05T19:31:04 | 2025-06-05T19:31:04 | https://github.com/pandas-dev/pandas/pull/61575 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61575 | https://github.com/pandas-dev/pandas/pull/61575 | SaraTammame | 0 | - [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| [] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] |
3,122,363,438 | 61,574 | BUG: 2.3.0 didn't publish wheels for musl-aarch64 (arm) | closed | 2025-06-05T19:18:15 | 2025-06-06T16:39:30 | 2025-06-06T16:36:53 | https://github.com/pandas-dev/pandas/issues/61574 | true | null | null | virajmehta | 3 | ### Pandas version checks
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [x] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
https://pypi.org/project/pandas/2.3.0/#files
https://pypi.org/project/pandas/2.2.3/#files
```
### Issue Description
ctrl-f for musl on the first page gives 15 results
for the second page gives 36 results.
ctrl on the 2.3.0 page for "-cp313-cp313-musllinux_1_2_aarch64.whl" gives no results but there are results on the 2.2.3 page.j
### Expected Behavior
would like a wheel for musl / arm64
### Installed Versions
not relevant | [
"Bug",
"Needs Triage"
] | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Thanks for the report. I just uploaded musl-aarch64 wheels for pandas 2.3 https://pypi.org/project/pandas/2.3.0/#files",
"Closing as these wheels are on PyPI now",
"Thank you!"
] |
3,122,306,508 | 61,573 | WEB: Test that our versions JSON is valid | closed | 2025-06-05T18:59:22 | 2025-06-07T10:00:47 | 2025-06-07T10:00:47 | https://github.com/pandas-dev/pandas/issues/61573 | true | null | null | datapythonista | 0 | See #61572
I think this can be as simple as loading the file with `json.load` when calling `pandas_web.py`. This way, if the file is not valid JSON the CI should break. But we need to double check that `json.load` fails if an extra comma is present. | [
"good first issue",
"Web"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] |
3,122,295,987 | 61,572 | BUG: Fix JSON typo that breaks javascript in the docs | closed | 2025-06-05T18:55:44 | 2025-06-05T19:36:34 | 2025-06-05T19:25:19 | https://github.com/pandas-dev/pandas/pull/61572 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61572 | https://github.com/pandas-dev/pandas/pull/61572 | datapythonista | 4 | Closes #61571
As opposed to Python, JSON doesn't accept commas after the last element of a dict, and it's strict about it. We added this as a typo (I think I did the same during a release, I'll create an issue to validate this JSON in the CI) when updating the JSON that provides the versions for the documentation drop down. This seems to be breaking all the javascript in our docs.
@mroeschke I would merge this before waiting for the CI, but up to you. | [
"Bug",
"Docs"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Is this failure related? https://github.com/pandas-dev/pandas/actions/runs/15474985199/job/43567905639?pr=61572\r\n\r\n```bash\r\nWriting evaluated template result to /home/runner/work/pandas/pandas/doc/build/html/_static/nbsphinx-code-cells.css\r\n\r\nExtension error (pydata_sphinx_theme):\r\nHandler <function up... |
3,122,246,064 | 61,571 | DOC: Version dropdown not working | closed | 2025-06-05T18:40:11 | 2025-06-05T19:25:20 | 2025-06-05T19:25:20 | https://github.com/pandas-dev/pandas/issues/61571 | true | null | null | datapythonista | 0 | Seems like the version dropdown is not working, at least for me, after the release: https://pandas.pydata.org/docs/
Can others confirm please?
Edit: Also the search. I guess there is a javascript error making all javascript code to not run | [
"Docs",
"good first issue"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] |
3,122,126,379 | 61,570 | BUG: comparing strings of different dtypes errors in 2.3 | closed | 2025-06-05T17:57:17 | 2025-07-02T16:35:31 | 2025-07-02T16:35:31 | https://github.com/pandas-dev/pandas/issues/61570 | true | null | null | a-reich | 4 | ### Pandas version checks
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [ ] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
import pandas as pd, numpy as np
arr1 = pd.array([],pd.StringDtype("pyarrow", na_value=pd.NA))
arr2 = pd.array([], pd.StringDtype("python", na_value=np.nan))
arr1 == arr2 # NotImplementedError: eq not implemented for <class 'pandas.core.arrays.string_.StringArrayNumpySemantics'>
```
### Issue Description
This appears to be the type of issue discussed in https://github.com/pandas-dev/pandas/issues/60639. That issue was closed, but I got an error when I tried running the above reproducer on the example given in the [whatsnew](https://pandas.pydata.org/pandas-docs/version/2.3.0/whatsnew/v2.3.0.html#notable-bug-fix1) for release 2.3.
My understanding was that the issue was closed when https://github.com/pandas-dev/pandas/pull/61138 was merged to main, but it's unclear if the fix was successfully backported to the 2.3.x branch. I haven't had the time yet to try when building pandas myself from main.
### Expected Behavior
Comparisons of string arrays/series with different dtypes should not error and the return dtype should follow the behavior laid out in #60639 .
### Installed Versions
<details>
INSTALLED VERSIONS
------------------
commit : 2cc37625532045f4ac55b27176454bbbc9baf213
python : 3.12.2
python-bits : 64
OS : Linux
OS-release : 6.6.87.1-microsoft-standard-WSL2
Version : #1 SMP PREEMPT_DYNAMIC Mon Apr 21 17:08:54 UTC 2025
machine : x86_64
processor : x86_64
byteorder : little
LC_ALL : None
LANG : C.UTF-8
LOCALE : C.UTF-8
pandas : 2.3.0
numpy : 2.2.6
pytz : 2025.2
dateutil : 2.9.0.post0
pip : None
Cython : None
sphinx : None
IPython : None
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : None
blosc : None
bottleneck : None
dataframe-api-compat : None
fastparquet : None
fsspec : None
html5lib : None
hypothesis : None
gcsfs : None
jinja2 : None
lxml.etree : None
matplotlib : None
numba : None
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
psycopg2 : None
pymysql : None
pyarrow : 20.0.0
pyreadstat : None
pytest : None
python-calamine : None
pyxlsb : None
s3fs : None
scipy : None
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : None
xlsxwriter : None
zstandard : None
tzdata : 2025.2
qtpy : None
pyqt5 : None
</details>
| [
"Bug",
"Strings"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"I’m also fine with re-opening #60639 if that’s better.",
"@a-reich Thanks for raising the issue — it has been resolved on the main branch:\n\n```python\n>>> import pandas as pd, numpy as np\n>>> import pandas as pd, numpy as np\n>>> arr1 = pd.array([],pd.StringDtype(\"pyarrow\", na_value=pd.NA))\n>>> arr2 = pd.a... |
3,122,042,354 | 61,569 | BLD: Build wheels for 3.9 and musllinux-aarch64 for pandas 2.3 | closed | 2025-06-05T17:29:26 | 2025-07-03T15:55:37 | 2025-07-02T17:08:01 | https://github.com/pandas-dev/pandas/pull/61569 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61569 | https://github.com/pandas-dev/pandas/pull/61569 | mroeschke | 6 | @lithomas1 would I need to re-tag the 2.3.x branch if/when we merge this?
xref https://github.com/pandas-dev/pandas/issues/61563 https://github.com/pandas-dev/pandas/issues/61574 | [
"Build"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"I guess the issue is the tag would have to be named something else (at this point, I don't think it's a good idea to delete and re-tag), and that point it's probably better to make a new version.\r\n\r\nIt might be easier to download wheel artifacts from this PR and upload them by hand using twine, though.",
"Th... |
3,121,700,295 | 61,568 | BUG: Inconsistent behavior surrounding pd.fillna | open | 2025-06-05T15:33:32 | 2025-08-18T21:27:25 | null | https://github.com/pandas-dev/pandas/issues/61568 | true | null | null | anna-intellegens | 3 | ### Pandas version checks
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [ ] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
import pandas as pd
import numpy as np
empty = pd.DataFrame([[None, None, None, None], [None, None, None, None], [None, None, None, None], [None, None, None, None]], columns=list("ABCD"), dtype=np.float64)
print(empty.dtypes)
# A float64
# B float64
# C float64
# D float64
# dtype: object
full_a = pd.DataFrame([[1.0, 2.0, "3.0", 4.0],[5.0,6.0,"7.0",8.0], [9.0,10.0,"11.0",12.0], [13.0,14.0,"15.0",16.0]], columns=list("ABCD"))
print(full_a.dtypes)
# A float64
# B float64
# C object
# D float64
# dtype: object
full_b = pd.DataFrame([[1.5, 2.0, "3.0", 4.0], [5.0,6.5,"7.0",8.0], [9.0,10.0,"11.0",12.0], [13.0,14.0,"15.0",16.5]], columns=list("ABCD"))
print(full_b.dtypes)
# A float64
# B float64
# C object
# D float64
# dtype: object
combined_1 = empty.fillna(full_a)
print(combined_1.dtypes)
# A int64
# B int64
# C object
# D int64
# dtype: object
combined_2 = empty.fillna(full_b)
print(combined_2.dtypes)
# A object
# B object
# C object
# D object
# dtype: object
```
### Issue Description
The returned types of pandas dataframe fillna method gives inconsistent resulting types between a column that contains integral float values, and ones that don't. This leads to very confusing behavior, where the exact values of the input data (even if it was correctly starting as float64s in both dataframes) can affect the output types. In particular, if both the starting column and the merging column have the float64 dtype, as a user I would expect the output column to have a float64 dtype, but instead I get an int64 if all the values happen to be integral, otherwise I get an object dtype?! This behavior is further only observed if one of the other columns happen to be (correctly) an object dtype, when again, I expected the types of unrelated columns not to affect each other.
I know there are currently changes undergoing surrounding casting of types, but here as all types are being inputted correctly I didn't expect any casting to be being performed as part of this operation?
### Expected Behavior
In the above example, I expected both `combined_1` and `combined_2` to have the same dtypes as each other.
I also expected both of them to actually have dtypes of `float64` for cols `A`, `B` and `D`, given the input types are `float64`. The `object` type for those columns of `combined_2` is particularly confusing in this case
### Installed Versions
<details>
INSTALLED VERSIONS
------------------
commit : 0691c5cf90477d3503834d983f69350f250a6ff7
python : 3.12.8
python-bits : 64
OS : Linux
OS-release : 6.11.0-26-generic
Version : #26~24.04.1-Ubuntu SMP PREEMPT_DYNAMIC Thu Apr 17 19:20:47 UTC 2
machine : x86_64
processor : x86_64
byteorder : little
LC_ALL : None
LANG : en_GB.UTF-8
LOCALE : en_GB.UTF-8
pandas : 2.2.3
numpy : 2.2.6
pytz : 2025.2
dateutil : 2.9.0.post0
pip : None
Cython : None
sphinx : None
IPython : None
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : None
blosc : None
bottleneck : None
dataframe-api-compat : None
fastparquet : None
fsspec : None
html5lib : None
hypothesis : None
gcsfs : None
jinja2 : 3.1.6
lxml.etree : None
matplotlib : None
numba : None
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
psycopg2 : 2.9.10
pymysql : None
pyarrow : None
pyreadstat : None
pytest : 8.3.5
python-calamine : None
pyxlsb : None
s3fs : None
scipy : 1.15.3
sqlalchemy : 2.0.41
tables : None
tabulate : None
xarray : None
xlrd : None
xlsxwriter : None
zstandard : None
tzdata : 2025.2
qtpy : None
pyqt5 : None
</details>
Also tested on 2.3.0 (sorry, website still says 2.2.3 is latest):
<details>
INSTALLED VERSIONS
------------------
commit : 2cc37625532045f4ac55b27176454bbbc9baf213
python : 3.12.8
python-bits : 64
OS : Linux
OS-release : 6.11.0-26-generic
Version : #26~24.04.1-Ubuntu SMP PREEMPT_DYNAMIC Thu Apr 17 19:20:47 UTC 2
machine : x86_64
processor : x86_64
byteorder : little
LC_ALL : None
LANG : en_GB.UTF-8
LOCALE : en_GB.UTF-8
pandas : 2.3.0
numpy : 2.2.6
pytz : 2025.2
dateutil : 2.9.0.post0
pip : None
Cython : None
sphinx : None
IPython : None
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : None
blosc : None
bottleneck : None
dataframe-api-compat : None
fastparquet : None
fsspec : None
html5lib : None
hypothesis : None
gcsfs : None
jinja2 : 3.1.6
lxml.etree : None
matplotlib : None
numba : None
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
psycopg2 : 2.9.10
pymysql : None
pyarrow : None
pyreadstat : None
pytest : 8.3.5
python-calamine : None
pyxlsb : None
s3fs : None
scipy : 1.15.3
sqlalchemy : 2.0.41
tables : None
tabulate : None
xarray : None
xlrd : None
xlsxwriter : None
zstandard : None
tzdata : 2025.2
qtpy : None
pyqt5 : None
</details> | [
"Bug",
"Missing-data",
"Dtype Conversions"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Thanks for raising this, I investigated the dtype inconsistency and traced it to how `fillna(DataFrame)` calls `where(self.notna(), other)`. When one column is `object`, it triggers coercion of all columns to `object`, even if others are `float64`. Replacing this with a column-wise `np.where(notna, lhs, rhs)` pres... |
3,121,077,669 | 61,567 | BUILD: Add wheels for musllinux_aarch64 | closed | 2025-06-05T12:41:27 | 2025-06-06T16:43:58 | 2025-06-06T16:43:24 | https://github.com/pandas-dev/pandas/pull/61567 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61567 | https://github.com/pandas-dev/pandas/pull/61567 | cdce8p | 5 | It seems the `musllinux_aarch64` wheels got accidentally removed in the transition from circleci to GHA.
Would be great if this could be backported to `2.3.x` as well.
- [x] closes https://github.com/pandas-dev/pandas/issues/55645#issuecomment-2943818815
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| [
"Build"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Thanks for the PR. I'll incorporate this into https://github.com/pandas-dev/pandas/pull/61569 so I can upload 3.9 musllinux-aarch wheels as well",
"Going to close this PR (since the wheel building might be different than in the 2.3.x branch).\r\n\r\nI will update https://github.com/pandas-dev/pandas/issues/61574... |
3,120,672,132 | 61,566 | BUILD: Installation issue on Mac with M1 Pro arm64 processor. pandas_parser.cpython-311-darwin.so is using x86_64 arch | closed | 2025-06-05T10:26:33 | 2025-08-14T02:02:00 | 2025-08-14T02:02:00 | https://github.com/pandas-dev/pandas/issues/61566 | true | null | null | Leviann | 2 | ### Installation check
- [x] I have read the [installation guide](https://pandas.pydata.org/pandas-docs/stable/getting_started/install.html#installing-pandas).
### Platform
macOS-15.5-arm64-arm-64bit
### Installation Method
Built from source
### pandas Version
pandas-3.0.0.dev0+2147.g1f6f42ac55
### Python Version
3.11.13
### Installation Logs
I can't use pandas installed either using pip or from source, on my macbook with M1 Pro arm64 processor.
I am not using rosetta in terminal and my python install is also amd64.
Please help. It looks like one library file is built with x86-x64 arch.
<details>
import pandas as pd
../../Library/Python/3.11/lib/python/site-packages/pandas/__init__.py:45: in <module>
from pandas.core.api import (
../../Library/Python/3.11/lib/python/site-packages/pandas/core/api.py:1: in <module>
from pandas._libs import (
../../Library/Python/3.11/lib/python/site-packages/pandas/_libs/__init__.py:16: in <module>
import pandas._libs.pandas_parser # isort: skip # type: ignore[reportUnusedImport]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
E ImportError: dlopen(/Users/leviann/Library/Python/3.11/lib/python/site-packages/pandas/_libs/pandas_parser.cpython-311-darwin.so, 0x0002): tried: '/Users/leviann/Library/Python/3.11/lib/python/site-packages/pandas/_libs/pandas_parser.cpython-311-darwin.so' (mach-o file, but is an incompatible architecture (have 'x86_64', need 'arm64e' or 'arm64')), '/System/Volumes/Preboot/Cryptexes/OS/Users/leviann/Library/Python/3.11/lib/python/site-packages/pandas/_libs/pandas_parser.cpython-311-darwin.so' (no such file), '/Users/leviann/Library/Python/3.11/lib/python/site-packages/pandas/_libs/pandas_parser.cpython-311-darwin.so' (mach-o file, but is an incompatible architecture (have 'x86_64', need 'arm64e' or 'arm64'))
</details>
| [
"Build",
"Needs Info",
"OS X"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Thanks for the report. Can you include the build command you are using to build from source along with the full log.",
"Closing for lack of info. If you’d like to answer @rhshadrach question we can reopen and try to help"
] |
3,120,501,218 | 61,565 | BUG: RecursionError when apply generic alias as a func | closed | 2025-06-05T09:30:30 | 2025-06-16T23:13:26 | 2025-06-16T23:13:26 | https://github.com/pandas-dev/pandas/issues/61565 | true | null | null | MacroBull | 1 | ### Pandas version checks
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [ ] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
import pandas as pd
pd.DataFrame({'x': [1], 'y': [2]}).apply(list, axis="columns")
pd.DataFrame({'x': [1], 'y': [2]}).apply(list[int], axis="columns")
```
### Issue Description
Traceback:
```python
[... skipping similar frames: Series.apply at line 4935 (593 times), NDFrameApply.agg_or_apply_list_like at line 744 (592 times), SeriesApply.apply at line 1412 (592 times), Apply.apply_list_or_dict_like at line 630 (592 times), Apply.compute_list_like at line 369 (592 times)]
File /opt/homebrew/envs/pandera/lib/python3.12/site-packages/pandas/core/apply.py:1412, in SeriesApply.apply(self)
...
File /opt/homebrew/envs/pandera/lib/python3.12/site-packages/pandas/core/apply.py:630, in Apply.apply_list_or_dict_like(self)
...
File /opt/homebrew/envs/pandera/lib/python3.12/site-packages/pandas/core/apply.py:744, in NDFrameApply.agg_or_apply_list_like(self, op_name)
...
File /opt/homebrew/envs/pandera/lib/python3.12/site-packages/pandas/core/apply.py:369, in Apply.compute_list_like(self, op_name, selected_obj, kwargs)
...
File /opt/homebrew/envs/pandera/lib/python3.12/site-packages/pandas/core/series.py:4935, in Series.apply(self, func, convert_dtype, args, by_row, **kwargs)
...
File /opt/homebrew/envs/pandera/lib/python3.12/site-packages/pandas/core/apply.py:1407, in SeriesApply.apply(self)
1404 def apply(self) -> DataFrame | Series:
1405 obj = self.obj
-> 1407 if len(obj) == 0:
1408 return self.apply_empty_result()
1410 # dispatch to handle list-like or dict-like
File /opt/homebrew/envs/pandera/lib/python3.12/site-packages/pandas/core/series.py:918, in Series.__len__(self)
914 def __len__(self) -> int:
915 """
916 Return the length of the Series.
917 """
--> 918 return len(self._mgr)
File /opt/homebrew/envs/pandera/lib/python3.12/site-packages/pandas/core/internals/base.py:76, in DataManager.__len__(self)
74 @final
75 def __len__(self) -> int:
---> 76 return len(self.items)
RecursionError: maximum recursion depth exceeded
> /opt/homebrew/envs/pandera/lib/python3.12/site-packages/pandas/core/internals/base.py(76)__len__()
74 @final
75 def __len__(self) -> int:
---> 76 return len(self.items)
77
78 @property
ipdb>
```
where `self.func` is regarded as list of funcs, which lead to the bug
```python
> /opt/homebrew/envs/pandera/lib/python3.12/site-packages/pandas/core/apply.py(1412)apply()
1410 # dispatch to handle list-like or dict-like
1411 if is_list_like(self.func):
-> 1412 return self.apply_list_or_dict_like()
1413
1414 if isinstance(self.func, str):
ipdb> p self.func
*list[int]
ipdb> p is_list_like(self.func)
True
ipdb>
```
### Expected Behavior
Expected output:
> 0 [1, 2]
> dtype: object
### Installed Versions
<details>
INSTALLED VERSIONS
------------------
commit : 2cc37625532045f4ac55b27176454bbbc9baf213
python : 3.12.9
python-bits : 64
OS : Darwin
OS-release : 24.5.0
Version : Darwin Kernel Version 24.5.0: Tue Apr 22 19:54:33 PDT 2025; root:xnu-11417.121.6~2/RELEASE_ARM64_T8122
machine : arm64
processor : arm
byteorder : little
LC_ALL : None
LANG : zh_CN.UTF-8
LOCALE : zh_CN.UTF-8
pandas : 2.3.0
numpy : 2.2.6
pytz : 2025.2
dateutil : 2.9.0.post0
pip : 25.0.1
Cython : None
sphinx : None
IPython : 8.31.0
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : None
blosc : None
bottleneck : None
dataframe-api-compat : None
fastparquet : None
fsspec : None
html5lib : None
hypothesis : None
gcsfs : None
jinja2 : None
lxml.etree : None
matplotlib : None
numba : None
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
psycopg2 : None
pymysql : None
pyarrow : None
pyreadstat : None
pytest : 8.3.4
python-calamine : None
pyxlsb : None
s3fs : None
scipy : None
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : None
xlsxwriter : None
zstandard : 0.23.0
tzdata : 2025.2
qtpy : None
pyqt5 : None
</details>
| [
"Bug",
"Needs Triage"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"take"
] |
3,119,816,321 | 61,564 | Failed to install pandas BUILD: 2.3.0 Windows | closed | 2025-06-05T04:41:38 | 2025-06-06T09:12:02 | 2025-06-05T16:58:11 | https://github.com/pandas-dev/pandas/issues/61564 | true | null | null | tanishqv | 5 | ### Installation check
- [x] I have read the [installation guide](https://pandas.pydata.org/pandas-docs/stable/getting_started/install.html#installing-pandas).
### Platform
Windows-10-10.0.22631-SP0
### Installation Method
pip install
### pandas Version
2.3.0
### Python Version
3.9.13
### Installation Logs
<details>
C:\Users\tanishq>python -V
Python 3.9.13
C:\Users\tanishq>python -c "import platform; print(platform.platform())"
Windows-10-10.0.22631-SP0
Collecting pandas
Downloading pandas-2.3.0.tar.gz (4.5 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 4.5/4.5 MB 4.9 MB/s eta 0:00:00
Installing build dependencies ... done
Getting requirements to build wheel ... done
Installing backend dependencies ... done
Preparing metadata (pyproject.toml) ... error
error: subprocess-exited-with-error
× Preparing metadata (pyproject.toml) did not run successfully.
│ exit code: 1
╰─> [10 lines of output]
+ meson setup C:\Users\tanishq\AppData\Local\Temp\pip-install-9lil68je\pandas_8d418a38654d41a19bef484e6372f854 C:\Users\tanishq\AppData\Local\Temp\pip-install-9lil68je\pandas_8d418a38654d41a19bef484e6372f854\.mesonpy-3s9nxbqg -Dbuildtype=release -Db_ndebug=if-release -Db_vscrt=md --vsenv --native-file=C:\Users\tanishq\AppData\Local\Temp\pip-install-9lil68je\pandas_8d418a38654d41a19bef484e6372f854\.mesonpy-3s9nxbqg\meson-python-native-file.ini
The Meson build system
Version: 1.8.1
Source dir: C:\Users\tanishq\AppData\Local\Temp\pip-install-9lil68je\pandas_8d418a38654d41a19bef484e6372f854
Build dir: C:\Users\tanishq\AppData\Local\Temp\pip-install-9lil68je\pandas_8d418a38654d41a19bef484e6372f854\.mesonpy-3s9nxbqg
Build type: native build
..\meson.build:2:0: ERROR: Could not find C:\Program Files (x86)\Microsoft Visual Studio\Installer\vswhere.exe
A full log can be found at C:\Users\tanishq\AppData\Local\Temp\pip-install-9lil68je\pandas_8d418a38654d41a19bef484e6372f854\.mesonpy-3s9nxbqg\meson-logs\meson-log.txt
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed
× Encountered error while generating package metadata.
╰─> See above for output.
note: This is an issue with the package mentioned above, not pip.
hint: See above for details.
</details>
| [
"Build",
"Needs Triage"
] | 5 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Ran into the same issue. `requires-python` is still `>= 3.9`.\n\nhttps://github.com/pandas-dev/pandas/blob/2cc37625532045f4ac55b27176454bbbc9baf213/pyproject.toml#L28\n\nhttps://github.com/pandas-dev/pandas/releases/tag/v2.3.0 says 2.3.0 only supports >= 3.10.",
"Ran into the same issue in **Mac OSX (Sillicon),*... |
3,119,772,338 | 61,563 | Failed to install pandas==2.3.0 with Python 3.9 | closed | 2025-06-05T04:11:02 | 2025-06-06T16:18:28 | 2025-06-06T12:10:15 | https://github.com/pandas-dev/pandas/issues/61563 | true | null | null | ryanchao2012 | 22 | ### Installation check
- [x] I have read the [installation guide](https://pandas.pydata.org/pandas-docs/stable/getting_started/install.html#installing-pandas).
### Platform
Linux-5.10.195-1.20230921.el7.x86_64-x86_64-with-glibc2.17
### Installation Method
pip install
### pandas Version
2.3.0
### Python Version
3.9.15
### Installation Logs
<details>
(base) [root@64bf929a621d7dafeb18b348 ~]# python -c 'import platform; print(platform.platform())'
Linux-5.10.195-1.20230921.el7.x86_64-x86_64-with-glibc2.17
(base) [root@64bf929a621d7dafeb18b348 ~]# pip install pandas -U
Requirement already satisfied: pandas in /opt/conda/lib/python3.9/site-packages (1.5.0)
Collecting pandas
Downloading pandas-2.3.0.tar.gz (4.5 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 4.5/4.5 MB 90.9 MB/s eta 0:00:00
Installing build dependencies ... done
Getting requirements to build wheel ... done
Installing backend dependencies ... done
Preparing metadata (pyproject.toml) ... error
error: subprocess-exited-with-error
× Preparing metadata (pyproject.toml) did not run successfully.
│ exit code: 1
╰─> [152 lines of output]
+ meson setup /tmp/pip-install-gp_gpioe/pandas_8608342ddb164d0e8725d2463640de86 /tmp/pip-install-gp_gpioe/pandas_8608342ddb164d0e8725d2463640de86/.mesonpy-lsu89q1a -Dbuildtype=release -Db_ndebug=if-release -Db_vscrt=md --vsenv --native-file=/tmp/pip-install-gp_gpioe/pandas_8608342ddb164d0e8725d2463640de86/.mesonpy-lsu89q1a/meson-python-native-file.ini
The Meson build system
Version: 1.8.1
Source dir: /tmp/pip-install-gp_gpioe/pandas_8608342ddb164d0e8725d2463640de86
Build dir: /tmp/pip-install-gp_gpioe/pandas_8608342ddb164d0e8725d2463640de86/.mesonpy-lsu89q1a
Build type: native build
Project name: pandas
Project version: 2.3.0
C compiler for the host machine: cc (gcc 4.8.5 "cc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-44)")
C linker for the host machine: cc ld.bfd 2.27-44
C++ compiler for the host machine: c++ (gcc 4.8.5 "c++ (GCC) 4.8.5 20150623 (Red Hat 4.8.5-44)")
C++ linker for the host machine: c++ ld.bfd 2.27-44
Cython compiler for the host machine: cython (cython 3.1.1)
Host machine cpu family: x86_64
Host machine cpu: x86_64
Program python found: YES (/opt/conda/bin/python)
Found pkg-config: YES (/usr/bin/pkg-config) 0.27.1
Run-time dependency python found: YES 3.9
Build targets in project: 53
pandas 2.3.0
User defined options
Native files: /tmp/pip-install-gp_gpioe/pandas_8608342ddb164d0e8725d2463640de86/.mesonpy-lsu89q1a/meson-python-native-file.ini
b_ndebug : if-release
b_vscrt : md
buildtype : release
vsenv : true
Found ninja-1.11.1.git.kitware.jobserver-1 at /tmp/pip-build-env-ltjugjgm/normal/bin/ninja
Visual Studio environment is needed to run Ninja. It is recommended to use Meson wrapper:
/tmp/pip-build-env-ltjugjgm/overlay/bin/meson compile -C .
+ /tmp/pip-build-env-ltjugjgm/normal/bin/ninja
[1/151] Generating pandas/_libs/intervaltree_helper_pxi with a custom command
[2/151] Generating pandas/_libs/hashtable_func_helper_pxi with a custom command
[3/151] Generating pandas/_libs/algos_common_helper_pxi with a custom command
[4/151] Generating pandas/_libs/khash_primitive_helper_pxi with a custom command
[5/151] Generating pandas/_libs/index_class_helper_pxi with a custom command
[6/151] Generating pandas/_libs/algos_take_helper_pxi with a custom command
[7/151] Generating pandas/_libs/hashtable_class_helper_pxi with a custom command
[8/151] Copying file pandas/__init__.py
[9/151] Generating pandas/_libs/sparse_op_helper_pxi with a custom command
[10/151] Compiling C object pandas/_libs/json.cpython-39-x86_64-linux-gnu.so.p/src_vendored_ujson_lib_ultrajsonenc.c.o
FAILED: pandas/_libs/json.cpython-39-x86_64-linux-gnu.so.p/src_vendored_ujson_lib_ultrajsonenc.c.o
cc -Ipandas/_libs/json.cpython-39-x86_64-linux-gnu.so.p -Ipandas/_libs -I../pandas/_libs -I../../../pip-build-env-ltjugjgm/overlay/lib/python3.9/site-packages/numpy/_core/include -I../pandas/_libs/include -I/opt/conda/include/python3.9 -fvisibility=hidden -DNDEBUG -D_FILE_OFFSET_BITS=64 -Wall -Winvalid-pch -Wextra -std=c11 -O3 -DNPY_NO_DEPRECATED_API=0 -DNPY_TARGET_VERSION=NPY_1_21_API_VERSION -fPIC -MD -MQ pandas/_libs/json.cpython-39-x86_64-linux-gnu.so.p/src_vendored_ujson_lib_ultrajsonenc.c.o -MF pandas/_libs/json.cpython-39-x86_64-linux-gnu.so.p/src_vendored_ujson_lib_ultrajsonenc.c.o.d -o pandas/_libs/json.cpython-39-x86_64-linux-gnu.so.p/src_vendored_ujson_lib_ultrajsonenc.c.o -c ../pandas/_libs/src/vendored/ujson/lib/ultrajsonenc.c
In file included from ../pandas/_libs/src/vendored/ujson/lib/ultrajsonenc.c:43:0:
../pandas/_libs/include/pandas/portable.h:31:22: error: missing binary operator before token "("
#elif __has_attribute(__fallthrough__)
^
[11/151] Compiling C object pandas/_libs/pandas_parser.cpython-39-x86_64-linux-gnu.so.p/src_parser_io.c.o
[12/151] Compiling C object pandas/_libs/pandas_datetime.cpython-39-x86_64-linux-gnu.so.p/src_vendored_numpy_datetime_np_datetime.c.o
FAILED: pandas/_libs/pandas_datetime.cpython-39-x86_64-linux-gnu.so.p/src_vendored_numpy_datetime_np_datetime.c.o
cc -Ipandas/_libs/pandas_datetime.cpython-39-x86_64-linux-gnu.so.p -Ipandas/_libs -I../pandas/_libs -I../../../pip-build-env-ltjugjgm/overlay/lib/python3.9/site-packages/numpy/_core/include -I../pandas/_libs/include -I/opt/conda/include/python3.9 -fvisibility=hidden -DNDEBUG -D_FILE_OFFSET_BITS=64 -Wall -Winvalid-pch -Wextra -std=c11 -O3 -DNPY_NO_DEPRECATED_API=0 -DNPY_TARGET_VERSION=NPY_1_21_API_VERSION -fPIC -MD -MQ pandas/_libs/pandas_datetime.cpython-39-x86_64-linux-gnu.so.p/src_vendored_numpy_datetime_np_datetime.c.o -MF pandas/_libs/pandas_datetime.cpython-39-x86_64-linux-gnu.so.p/src_vendored_numpy_datetime_np_datetime.c.o.d -o pandas/_libs/pandas_datetime.cpython-39-x86_64-linux-gnu.so.p/src_vendored_numpy_datetime_np_datetime.c.o -c ../pandas/_libs/src/vendored/numpy/datetime/np_datetime.c
../pandas/_libs/src/vendored/numpy/datetime/np_datetime.c:57:1: error: static assertion failed: "__has_builtin not detected; please try a newer compiler"
_Static_assert(0, "__has_builtin not detected; please try a newer compiler");
^
../pandas/_libs/src/vendored/numpy/datetime/np_datetime.c: In function ‘scaleYearToEpoch’:
../pandas/_libs/src/vendored/numpy/datetime/np_datetime.c:343:3: warning: implicit declaration of function ‘checked_int64_sub’ [-Wimplicit-function-declaration]
return checked_int64_sub(year, 1970, result);
^
../pandas/_libs/src/vendored/numpy/datetime/np_datetime.c: In function ‘scaleYearsToMonths’:
../pandas/_libs/src/vendored/numpy/datetime/np_datetime.c:347:3: warning: implicit declaration of function ‘checked_int64_mul’ [-Wimplicit-function-declaration]
return checked_int64_mul(years, 12, result);
^
../pandas/_libs/src/vendored/numpy/datetime/np_datetime.c: In function ‘npy_datetimestruct_to_datetime’:
../pandas/_libs/src/vendored/numpy/datetime/np_datetime.c:425:5: warning: implicit declaration of function ‘checked_int64_add’ [-Wimplicit-function-declaration]
PD_CHECK_OVERFLOW(checked_int64_add(months, months_adder, &months));
^
[13/151] Compiling C object pandas/_libs/pandas_parser.cpython-39-x86_64-linux-gnu.so.p/src_parser_pd_parser.c.o
[14/151] Compiling C object pandas/_libs/pandas_datetime.cpython-39-x86_64-linux-gnu.so.p/src_datetime_date_conversions.c.o
[15/151] Compiling C object pandas/_libs/json.cpython-39-x86_64-linux-gnu.so.p/src_vendored_ujson_python_JSONtoObj.c.o
[16/151] Compiling C object pandas/_libs/pandas_datetime.cpython-39-x86_64-linux-gnu.so.p/src_datetime_pd_datetime.c.o
[17/151] Compiling C object pandas/_libs/json.cpython-39-x86_64-linux-gnu.so.p/src_vendored_ujson_python_ujson.c.o
[18/151] Compiling C object pandas/_libs/pandas_datetime.cpython-39-x86_64-linux-gnu.so.p/src_vendored_numpy_datetime_np_datetime_strings.c.o
[19/151] Compiling C object pandas/_libs/json.cpython-39-x86_64-linux-gnu.so.p/src_vendored_ujson_python_objToJSON.c.o
[20/151] Compiling C object pandas/_libs/tslibs/parsing.cpython-39-x86_64-linux-gnu.so.p/.._src_parser_tokenizer.c.o
[21/151] Compiling Cython source /tmp/pip-install-gp_gpioe/pandas_8608342ddb164d0e8725d2463640de86/pandas/_libs/indexing.pyx
[22/151] Compiling C object pandas/_libs/pandas_parser.cpython-39-x86_64-linux-gnu.so.p/src_parser_tokenizer.c.o
[23/151] Compiling C object pandas/_libs/lib.cpython-39-x86_64-linux-gnu.so.p/src_parser_tokenizer.c.o
[24/151] Compiling Cython source /tmp/pip-install-gp_gpioe/pandas_8608342ddb164d0e8725d2463640de86/pandas/_libs/tslibs/ccalendar.pyx
[25/151] Compiling Cython source /tmp/pip-install-gp_gpioe/pandas_8608342ddb164d0e8725d2463640de86/pandas/_libs/tslibs/base.pyx
[26/151] Compiling Cython source /tmp/pip-install-gp_gpioe/pandas_8608342ddb164d0e8725d2463640de86/pandas/_libs/tslibs/np_datetime.pyx
warning: /tmp/pip-install-gp_gpioe/pandas_8608342ddb164d0e8725d2463640de86/pandas/_libs/tslibs/util.pxd:188:38: noexcept clause is ignored for function returning Python object
warning: /tmp/pip-install-gp_gpioe/pandas_8608342ddb164d0e8725d2463640de86/pandas/_libs/tslibs/util.pxd:193:40: noexcept clause is ignored for function returning Python object
[27/151] Compiling Cython source /tmp/pip-install-gp_gpioe/pandas_8608342ddb164d0e8725d2463640de86/pandas/_libs/missing.pyx
warning: /tmp/pip-install-gp_gpioe/pandas_8608342ddb164d0e8725d2463640de86/pandas/_libs/tslibs/util.pxd:188:38: noexcept clause is ignored for function returning Python object
warning: /tmp/pip-install-gp_gpioe/pandas_8608342ddb164d0e8725d2463640de86/pandas/_libs/tslibs/util.pxd:193:40: noexcept clause is ignored for function returning Python object
[28/151] Compiling Cython source /tmp/pip-install-gp_gpioe/pandas_8608342ddb164d0e8725d2463640de86/pandas/_libs/tslibs/dtypes.pyx
[29/151] Compiling Cython source /tmp/pip-install-gp_gpioe/pandas_8608342ddb164d0e8725d2463640de86/pandas/_libs/arrays.pyx
[30/151] Compiling Cython source /tmp/pip-install-gp_gpioe/pandas_8608342ddb164d0e8725d2463640de86/pandas/_libs/hashing.pyx
warning: /tmp/pip-install-gp_gpioe/pandas_8608342ddb164d0e8725d2463640de86/pandas/_libs/tslibs/util.pxd:188:38: noexcept clause is ignored for function returning Python object
warning: /tmp/pip-install-gp_gpioe/pandas_8608342ddb164d0e8725d2463640de86/pandas/_libs/tslibs/util.pxd:193:40: noexcept clause is ignored for function returning Python object
[31/151] Compiling Cython source /tmp/pip-install-gp_gpioe/pandas_8608342ddb164d0e8725d2463640de86/pandas/_libs/tslibs/nattype.pyx
warning: /tmp/pip-install-gp_gpioe/pandas_8608342ddb164d0e8725d2463640de86/pandas/_libs/tslibs/util.pxd:188:38: noexcept clause is ignored for function returning Python object
warning: /tmp/pip-install-gp_gpioe/pandas_8608342ddb164d0e8725d2463640de86/pandas/_libs/tslibs/util.pxd:193:40: noexcept clause is ignored for function returning Python object
warning: /tmp/pip-install-gp_gpioe/pandas_8608342ddb164d0e8725d2463640de86/pandas/_libs/tslibs/nattype.pyx:79:0: Global name __nat_unpickle matched from within class scope in contradiction to to Python 'class private name' rules. This may change in a future release.
warning: /tmp/pip-install-gp_gpioe/pandas_8608342ddb164d0e8725d2463640de86/pandas/_libs/tslibs/nattype.pyx:79:0: Global name __nat_unpickle matched from within class scope in contradiction to to Python 'class private name' rules. This may change in a future release.
[32/151] Compiling Cython source /tmp/pip-install-gp_gpioe/pandas_8608342ddb164d0e8725d2463640de86/pandas/_libs/tslibs/vectorized.pyx
[33/151] Compiling Cython source /tmp/pip-install-gp_gpioe/pandas_8608342ddb164d0e8725d2463640de86/pandas/_libs/tslibs/fields.pyx
[34/151] Compiling Cython source /tmp/pip-install-gp_gpioe/pandas_8608342ddb164d0e8725d2463640de86/pandas/_libs/internals.pyx
warning: /tmp/pip-install-gp_gpioe/pandas_8608342ddb164d0e8725d2463640de86/pandas/_libs/tslibs/util.pxd:188:38: noexcept clause is ignored for function returning Python object
warning: /tmp/pip-install-gp_gpioe/pandas_8608342ddb164d0e8725d2463640de86/pandas/_libs/tslibs/util.pxd:193:40: noexcept clause is ignored for function returning Python object
[35/151] Compiling Cython source /tmp/pip-install-gp_gpioe/pandas_8608342ddb164d0e8725d2463640de86/pandas/_libs/tslibs/conversion.pyx
warning: /tmp/pip-install-gp_gpioe/pandas_8608342ddb164d0e8725d2463640de86/pandas/_libs/tslibs/util.pxd:188:38: noexcept clause is ignored for function returning Python object
warning: /tmp/pip-install-gp_gpioe/pandas_8608342ddb164d0e8725d2463640de86/pandas/_libs/tslibs/util.pxd:193:40: noexcept clause is ignored for function returning Python object
[36/151] Compiling Cython source /tmp/pip-install-gp_gpioe/pandas_8608342ddb164d0e8725d2463640de86/pandas/_libs/tslibs/parsing.pyx
warning: /tmp/pip-install-gp_gpioe/pandas_8608342ddb164d0e8725d2463640de86/pandas/_libs/tslibs/util.pxd:188:38: noexcept clause is ignored for function returning Python object
warning: /tmp/pip-install-gp_gpioe/pandas_8608342ddb164d0e8725d2463640de86/pandas/_libs/tslibs/util.pxd:193:40: noexcept clause is ignored for function returning Python object
[37/151] Compiling Cython source /tmp/pip-install-gp_gpioe/pandas_8608342ddb164d0e8725d2463640de86/pandas/_libs/tslibs/timezones.pyx
warning: /tmp/pip-install-gp_gpioe/pandas_8608342ddb164d0e8725d2463640de86/pandas/_libs/tslibs/util.pxd:188:38: noexcept clause is ignored for function returning Python object
warning: /tmp/pip-install-gp_gpioe/pandas_8608342ddb164d0e8725d2463640de86/pandas/_libs/tslibs/util.pxd:193:40: noexcept clause is ignored for function returning Python object
[38/151] Compiling Cython source /tmp/pip-install-gp_gpioe/pandas_8608342ddb164d0e8725d2463640de86/pandas/_libs/tslibs/tzconversion.pyx
[39/151] Compiling Cython source /tmp/pip-install-gp_gpioe/pandas_8608342ddb164d0e8725d2463640de86/pandas/_libs/tslibs/strptime.pyx
warning: /tmp/pip-install-gp_gpioe/pandas_8608342ddb164d0e8725d2463640de86/pandas/_libs/tslibs/util.pxd:188:38: noexcept clause is ignored for function returning Python object
warning: /tmp/pip-install-gp_gpioe/pandas_8608342ddb164d0e8725d2463640de86/pandas/_libs/tslibs/util.pxd:193:40: noexcept clause is ignored for function returning Python object
[40/151] Compiling Cython source /tmp/pip-install-gp_gpioe/pandas_8608342ddb164d0e8725d2463640de86/pandas/_libs/parsers.pyx
warning: /tmp/pip-install-gp_gpioe/pandas_8608342ddb164d0e8725d2463640de86/pandas/_libs/tslibs/util.pxd:188:38: noexcept clause is ignored for function returning Python object
warning: /tmp/pip-install-gp_gpioe/pandas_8608342ddb164d0e8725d2463640de86/pandas/_libs/tslibs/util.pxd:193:40: noexcept clause is ignored for function returning Python object
warning: /tmp/pip-install-gp_gpioe/pandas_8608342ddb164d0e8725d2463640de86/pandas/_libs/parsers.pyx:1605:18: noexcept clause is ignored for function returning Python object
[41/151] Compiling Cython source /tmp/pip-install-gp_gpioe/pandas_8608342ddb164d0e8725d2463640de86/pandas/_libs/tslibs/timestamps.pyx
warning: /tmp/pip-install-gp_gpioe/pandas_8608342ddb164d0e8725d2463640de86/pandas/_libs/tslibs/util.pxd:188:38: noexcept clause is ignored for function returning Python object
warning: /tmp/pip-install-gp_gpioe/pandas_8608342ddb164d0e8725d2463640de86/pandas/_libs/tslibs/util.pxd:193:40: noexcept clause is ignored for function returning Python object
[42/151] Compiling Cython source /tmp/pip-install-gp_gpioe/pandas_8608342ddb164d0e8725d2463640de86/pandas/_libs/tslibs/period.pyx
warning: /tmp/pip-install-gp_gpioe/pandas_8608342ddb164d0e8725d2463640de86/pandas/_libs/tslibs/util.pxd:188:38: noexcept clause is ignored for function returning Python object
warning: /tmp/pip-install-gp_gpioe/pandas_8608342ddb164d0e8725d2463640de86/pandas/_libs/tslibs/util.pxd:193:40: noexcept clause is ignored for function returning Python object
[43/151] Compiling Cython source /tmp/pip-install-gp_gpioe/pandas_8608342ddb164d0e8725d2463640de86/pandas/_libs/tslibs/timedeltas.pyx
warning: /tmp/pip-install-gp_gpioe/pandas_8608342ddb164d0e8725d2463640de86/pandas/_libs/tslibs/util.pxd:188:38: noexcept clause is ignored for function returning Python object
warning: /tmp/pip-install-gp_gpioe/pandas_8608342ddb164d0e8725d2463640de86/pandas/_libs/tslibs/util.pxd:193:40: noexcept clause is ignored for function returning Python object
[44/151] Compiling Cython source /tmp/pip-install-gp_gpioe/pandas_8608342ddb164d0e8725d2463640de86/pandas/_libs/tslibs/offsets.pyx
warning: /tmp/pip-install-gp_gpioe/pandas_8608342ddb164d0e8725d2463640de86/pandas/_libs/tslibs/util.pxd:188:38: noexcept clause is ignored for function returning Python object
warning: /tmp/pip-install-gp_gpioe/pandas_8608342ddb164d0e8725d2463640de86/pandas/_libs/tslibs/util.pxd:193:40: noexcept clause is ignored for function returning Python object
[45/151] Compiling Cython source /tmp/pip-install-gp_gpioe/pandas_8608342ddb164d0e8725d2463640de86/pandas/_libs/lib.pyx
warning: /tmp/pip-install-gp_gpioe/pandas_8608342ddb164d0e8725d2463640de86/pandas/_libs/tslibs/util.pxd:188:38: noexcept clause is ignored for function returning Python object
warning: /tmp/pip-install-gp_gpioe/pandas_8608342ddb164d0e8725d2463640de86/pandas/_libs/tslibs/util.pxd:193:40: noexcept clause is ignored for function returning Python object
[46/151] Compiling Cython source /tmp/pip-install-gp_gpioe/pandas_8608342ddb164d0e8725d2463640de86/pandas/_libs/index.pyx
warning: /tmp/pip-install-gp_gpioe/pandas_8608342ddb164d0e8725d2463640de86/pandas/_libs/tslibs/util.pxd:188:38: noexcept clause is ignored for function returning Python object
warning: /tmp/pip-install-gp_gpioe/pandas_8608342ddb164d0e8725d2463640de86/pandas/_libs/tslibs/util.pxd:193:40: noexcept clause is ignored for function returning Python object
[47/151] Compiling Cython source /tmp/pip-install-gp_gpioe/pandas_8608342ddb164d0e8725d2463640de86/pandas/_libs/interval.pyx
warning: /tmp/pip-install-gp_gpioe/pandas_8608342ddb164d0e8725d2463640de86/pandas/_libs/tslibs/util.pxd:188:38: noexcept clause is ignored for function returning Python object
warning: /tmp/pip-install-gp_gpioe/pandas_8608342ddb164d0e8725d2463640de86/pandas/_libs/tslibs/util.pxd:193:40: noexcept clause is ignored for function returning Python object
[48/151] Compiling Cython source /tmp/pip-install-gp_gpioe/pandas_8608342ddb164d0e8725d2463640de86/pandas/_libs/join.pyx
[49/151] Compiling Cython source /tmp/pip-install-gp_gpioe/pandas_8608342ddb164d0e8725d2463640de86/pandas/_libs/hashtable.pyx
warning: /tmp/pip-install-gp_gpioe/pandas_8608342ddb164d0e8725d2463640de86/pandas/_libs/tslibs/util.pxd:188:38: noexcept clause is ignored for function returning Python object
warning: /tmp/pip-install-gp_gpioe/pandas_8608342ddb164d0e8725d2463640de86/pandas/_libs/tslibs/util.pxd:193:40: noexcept clause is ignored for function returning Python object
[50/151] Compiling Cython source /tmp/pip-install-gp_gpioe/pandas_8608342ddb164d0e8725d2463640de86/pandas/_libs/algos.pyx
warning: /tmp/pip-install-gp_gpioe/pandas_8608342ddb164d0e8725d2463640de86/pandas/_libs/tslibs/util.pxd:188:38: noexcept clause is ignored for function returning Python object
warning: /tmp/pip-install-gp_gpioe/pandas_8608342ddb164d0e8725d2463640de86/pandas/_libs/tslibs/util.pxd:193:40: noexcept clause is ignored for function returning Python object
[51/151] Compiling Cython source /tmp/pip-install-gp_gpioe/pandas_8608342ddb164d0e8725d2463640de86/pandas/_libs/groupby.pyx
warning: /tmp/pip-install-gp_gpioe/pandas_8608342ddb164d0e8725d2463640de86/pandas/_libs/tslibs/util.pxd:188:38: noexcept clause is ignored for function returning Python object
warning: /tmp/pip-install-gp_gpioe/pandas_8608342ddb164d0e8725d2463640de86/pandas/_libs/tslibs/util.pxd:193:40: noexcept clause is ignored for function returning Python object
ninja: build stopped: subcommand failed.
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed
× Encountered error while generating package metadata.
╰─> See above for output.
note: This is an issue with the package mentioned above, not pip.
hint: See above for details.
</details>
| [
"Build",
"Needs Triage"
] | 13 | 0 | 0 | 0 | 0 | 0 | 0 | 2 | [
"Ran into the same issue in **Mac OSX (Sillicon),** using uv pip install. Error is in Cython there is a missing \"ios\" file.\n(didn't want to open another issue, but I can do it if it is different)\n\nPython: 3.12.7\nEnv: virtualenv (uv)\nPlatform: Mac OSX (sillicon)\n\nMost relevant log:\n```\n pandas/_libs/... |
3,119,734,585 | 61,562 | WEB: Update versions.json for 2.3 | closed | 2025-06-05T03:41:48 | 2025-06-05T17:31:53 | 2025-06-05T17:31:50 | https://github.com/pandas-dev/pandas/pull/61562 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61562 | https://github.com/pandas-dev/pandas/pull/61562 | mroeschke | 0 | null | [
"Web"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] |
3,119,292,227 | 61,561 | Backport PR #61560: DOC: Set date for v2.3.0.rst whatsnew | closed | 2025-06-04T22:32:29 | 2025-06-04T23:05:57 | 2025-06-04T23:05:43 | https://github.com/pandas-dev/pandas/pull/61561 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61561 | https://github.com/pandas-dev/pandas/pull/61561 | mroeschke | 0 | null | [
"Docs"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] |
3,119,110,834 | 61,560 | DOC: Set date for v2.3.0.rst whatsnew | closed | 2025-06-04T21:03:56 | 2025-06-04T22:33:07 | 2025-06-04T22:28:13 | https://github.com/pandas-dev/pandas/pull/61560 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61560 | https://github.com/pandas-dev/pandas/pull/61560 | mroeschke | 5 | null | [
"Docs"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"/preview",
"Website preview of this PR available at: https://pandas.pydata.org/preview/pandas-dev/pandas/61560/",
"/preview",
"Website preview of this PR available at: https://pandas.pydata.org/preview/pandas-dev/pandas/61560/",
"Owee, I'm MrMeeseeks, Look at me.\n\nThere seem to be a conflict, please back... |
3,118,769,967 | 61,559 | Pandas DataFrame.query Code Injection (Unpatched) | closed | 2025-06-04T18:45:13 | 2025-06-05T00:53:47 | 2025-06-05T00:53:43 | https://github.com/pandas-dev/pandas/issues/61559 | true | null | null | clyormz | 2 | Python pandas version 2.2.3 has a vulnerability on Pandas DataFrame.query
In order to fix the function query on DataFrame python class what are the elements to review to resolve the vulnerability CVE-2024-9880.
Regards | [
"expressions",
"Closing Candidate"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Hi, this CVE is rejected. Also discussed at #60602",
"Agreed @asishm, closing."
] |
3,118,724,429 | 61,558 | Backport PR #61519: BUILD: Bump Cython to 3.1 | closed | 2025-06-04T18:28:57 | 2025-06-04T20:45:50 | 2025-06-04T20:45:43 | https://github.com/pandas-dev/pandas/pull/61558 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61558 | https://github.com/pandas-dev/pandas/pull/61558 | mroeschke | 0 | https://github.com/pandas-dev/pandas/pull/61519 | [
"Build",
"Dependencies"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] |
3,118,522,709 | 61,557 | BUG: regex match in compliance tests no longer match pytest expected inputs | closed | 2025-06-04T17:11:35 | 2025-06-13T17:44:32 | 2025-06-13T17:44:32 | https://github.com/pandas-dev/pandas/issues/61557 | true | null | null | chalmerlowe | 2 | ### Pandas version checks
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [ ] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
N/A
```
### Issue Description
When I run compliance tests in python-db-dtype-pandas (a support file used by python-bigquery) I am getting multiple warnings (which cause test failures) due to a recent update in how pytest handles regex matches.
In pandas release 2.2.3 there is a snippet of code:
```
def test_take_pandas_style_negative_raises(self, data, na_value):
with pytest.raises(ValueError, match=""):
```
Pytest returns this Warning:
```
pytest.PytestWarning: matching against an empty string will *always* pass. If you want to check for an empty message you need to pass '^$'. If you don't want to match you should pass `None` or leave out the parameter.
```
This warning shows up in association with each of these pandas tests (it may occur with other tests, but these are the only ones that my tests revealed.):
```
FAILED ...::TestGetitem::test_take_pandas_style_negative_raises
FAILED ...::TestMethods::test_argmax_argmin_no_skipna_notimplemented
FAILED ...::TestSetitem::test_setitem_invalid
FAILED ...::TestJSONArrayGetitem::test_take_pandas_style_negative_raises
FAILED ...::TestJSONArrayMethods::test_argmax_argmin_no_skipna_notimplemented
FAILED ...::TestJSONArraySetitem::test_setitem_invalid
```
### Expected Behavior
N/A
### Installed Versions
N/A | [
"Testing",
"Error Reporting",
"good first issue"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Thanks for the report.\n\n> pandas release 2.2.3\n\nTo be sure, we'd only fix these on main. So it's not too important for us if this is in the 2.2.3 release. Does that cause you any issues?\n\nFor `test_take_pandas_style_negative_raises`, this is now fixed on main, but at least some of the others you listed are n... |
3,117,730,207 | 61,556 | Backport PR #61549 on branch 2.3.x (TST: Add error message for test_groupby_raises_category_on_category for quantile) | closed | 2025-06-04T12:52:13 | 2025-06-04T16:31:56 | 2025-06-04T16:31:56 | https://github.com/pandas-dev/pandas/pull/61556 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61556 | https://github.com/pandas-dev/pandas/pull/61556 | meeseeksmachine | 0 | Backport PR #61549: TST: Add error message for test_groupby_raises_category_on_category for quantile | [
"Testing"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] |
3,117,071,656 | 61,555 | DOC: Fix typo in to_html and to_string docs | closed | 2025-06-04T09:04:21 | 2025-06-06T14:54:00 | 2025-06-06T14:53:59 | https://github.com/pandas-dev/pandas/pull/61555 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61555 | https://github.com/pandas-dev/pandas/pull/61555 | datapythonista | 5 | **UPDATE:** This started as an upgrade of Python in environment.yml, now it's just about fixing an issue with a period is duplicated in both the template and the variables.
I guess the only reason we require Python 3.10 for the development environments and run the docs is because it wasn't updated for a long time. Not sure if it'd be better to unpin, which I assume would get us the latest version of Python. Happy to do that here too. | [
"Docs"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"+1 to unpin Python here",
"Just FYI, I think there are some configurations like pre-commit/mypy/etc. that have a \"Python version\" configuration that may be useful to (un)sync since they are dependent on the Python version installed by `environment.yml`",
"I'll leave this PR just for the typo in the docs (whi... |
3,117,048,901 | 61,554 | BUG: duplicated() raises error with singlton set as subset | closed | 2025-06-04T08:57:11 | 2025-06-11T17:52:35 | 2025-06-11T17:52:35 | https://github.com/pandas-dev/pandas/issues/61554 | true | null | null | camold | 7 | ### Pandas version checks
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [ ] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
import pandas as pd
df = pd.DataFrame([{"a": "foo", "b": "bar"}])
df.duplicated(subset={"a"}) # raises error
df.duplicated(subset=["a"]) # works
df.duplicated(subset=("a",)) # works
df.duplicates(subset={"a","b"}) # works
```
### Issue Description
Providing a singleton set to the subset parameter raises an error.
### Expected Behavior
Should work normally without having to convert the input to list or tuple.
### Installed Versions
<details>
INSTALLED VERSIONS
------------------
commit : 0691c5cf90477d3503834d983f69350f250a6ff7
python : 3.12.3
python-bits : 64
OS : Linux
OS-release : 6.11.0-26-generic
Version : #26~24.04.1-Ubuntu SMP PREEMPT_DYNAMIC Thu Apr 17 19:20:47 UTC 2
machine : x86_64
processor : x86_64
byteorder : little
LC_ALL : None
LANG : de_DE.UTF-8
LOCALE : de_DE.UTF-8
pandas : 2.2.3
numpy : 2.2.0
pytz : 2024.2
dateutil : 2.9.0.post0
pip : 24.0
Cython : None
sphinx : None
IPython : 8.29.0
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : 4.12.3
blosc : None
bottleneck : None
dataframe-api-compat : None
fastparquet : None
fsspec : None
html5lib : None
hypothesis : None
gcsfs : None
jinja2 : 3.1.4
lxml.etree : 5.3.0
matplotlib : 3.9.2
numba : None
numexpr : None
odfpy : None
openpyxl : 3.1.5
pandas_gbq : None
psycopg2 : None
pymysql : None
pyarrow : None
pyreadstat : None
pytest : 8.3.3
python-calamine : None
pyxlsb : None
s3fs : None
scipy : None
sqlalchemy : 2.0.36
tables : None
tabulate : None
xarray : None
xlrd : None
xlsxwriter : None
zstandard : None
tzdata : 2024.2
qtpy : None
pyqt5 : None
</details>
| [
"Bug",
"Needs Triage"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"take\n",
"I verified it in the release build but not on the main branch, try running it on the main branch, the root issue might have been fixed already",
"I think this issue should be able to be closed",
"I have tried to set up a clone of the main branch but it does not build on my machine locally. So I can... |
3,116,984,839 | 61,553 | DOC: Move PyCapsule whatsnew note from v3.0.0 to v2.3.0 | closed | 2025-06-04T08:36:01 | 2025-06-04T11:31:48 | 2025-06-04T11:31:38 | https://github.com/pandas-dev/pandas/pull/61553 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61553 | https://github.com/pandas-dev/pandas/pull/61553 | MarcoGorelli | 1 | follow-up from https://github.com/pandas-dev/pandas/pull/61488#pullrequestreview-2871730860
- [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| [
"Docs"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Thanks @MarcoGorelli "
] |
3,116,474,516 | 61,552 | DOC: Add note on inference behavior of apply with result_type='expand' | closed | 2025-06-04T05:20:11 | 2025-07-15T17:17:59 | 2025-07-15T17:17:59 | https://github.com/pandas-dev/pandas/pull/61552 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61552 | https://github.com/pandas-dev/pandas/pull/61552 | pinkgradient | 5 | Closes #61057
- This PR adds a docstring note about the inference behavior of apply with result_type='expand' when function returns NaN-like values, as discussed in issue #61057.
- Added docstring to distinguish behavior for 'apply' method. | [
"Docs",
"Apply",
"Stale"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"pre-commit.ci autofix",
"pre-commit.ci autofix",
"Personally I think we should deprecate that parameter. Since I guess you're using it, do you mind sharing an example of how you are using it?\r\n\r\nYou'll have to remove the blank lines for the PR. Also, did you check if what you are detailing is only affected... |
3,116,401,342 | 61,551 | Parallelize test_sql.py - Issue #60378 | open | 2025-06-04T04:47:40 | 2025-08-10T00:10:06 | null | https://github.com/pandas-dev/pandas/pull/61551 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61551 | https://github.com/pandas-dev/pandas/pull/61551 | dShcherbakov1 | 5 | - [ ] Closes #60378
- [ ] Builds off of https://github.com/pandas-dev/pandas/pull/60595
- [ ] Extends @UmbertoFasci's per-table UUID work
- [ ] Solves https://github.com/pandas-dev/pandas/pull/60595 concurrency issue
~~- [x] Parallelizes test_sql.py via per-worker_DBs (approach superseded due to complexity)~~
| [
"Testing",
"IO SQL",
"Stale"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Thanks for the feedback @mroeschke!\r\n\r\nHaving now run this through CI, I agree! I come from a production-oriented environment, so I initially underestimated the architecture/OS complexity. \r\n\r\nI looked into per-table UUIDs as per @UmbertoFasci's PR, and was concerned it might impact future test writing, an... |
3,116,382,404 | 61,550 | DOC: Remove and Update out of date Docker Image issue with #61511 | closed | 2025-06-04T04:40:39 | 2025-07-28T17:20:48 | 2025-07-28T17:20:48 | https://github.com/pandas-dev/pandas/pull/61550 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61550 | https://github.com/pandas-dev/pandas/pull/61550 | jacksnnn | 3 | - [ ] Addresses & closes [DOC: Docker image provided on "Debugging C extensions" is out of date #61511](https://github.com/pandas-dev/pandas/issues/61511)
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
| [
"Docs",
"Stale"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"@WillAyd do you mind having a look?\r\n\r\nI think we should also remove the container if it's not maintained, not only the docs.",
"This pull request is stale because it has been open for thirty days with no activity. Please [update](https://pandas.pydata.org/pandas-docs/stable/development/contributing.html#upd... |
3,115,932,756 | 61,549 | TST: Add error message for test_groupby_raises_category_on_category for quantile | closed | 2025-06-04T01:25:16 | 2025-06-04T16:24:09 | 2025-06-04T12:51:50 | https://github.com/pandas-dev/pandas/pull/61549 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61549 | https://github.com/pandas-dev/pandas/pull/61549 | mroeschke | 0 | e.g. https://github.com/pandas-dev/pandas/actions/runs/15426735055/job/43415431095?pr=61519 | [
"Testing"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] |
3,115,932,737 | 61,548 | changed the pydata-sphinx-theme dependency to the git version | closed | 2025-06-04T01:25:15 | 2025-06-06T14:48:33 | 2025-06-04T11:43:25 | https://github.com/pandas-dev/pandas/pull/61548 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61548 | https://github.com/pandas-dev/pandas/pull/61548 | louisjh14 | 3 | - [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| [
"Dependencies"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"This seems a bad idea. And if you think we should be doing this, please open an issue with the motivation, or at least write a description in the PR, so this can be discussed. But we don't want changes to a sphinx theme breaking our CI, so I don't think we'll get this merged.",
"This was related to issue #51536.... |
3,115,882,897 | 61,547 | CLN: Format comments in frame.py to comply with PEP8 line width | closed | 2025-06-04T00:53:34 | 2025-06-04T11:46:55 | 2025-06-04T11:46:55 | https://github.com/pandas-dev/pandas/pull/61547 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61547 | https://github.com/pandas-dev/pandas/pull/61547 | TheCheerfulCoder | 1 | Hello! While becoming acquainted with the code, I noticed that the frame.py file had a lot of comments with a length over PEP8's 79 character limit.
I reformatted all comments to the 79 character limit, unless they involve command-line related code examples.
Let me know if there is anything I can do to help.
Thank you! | [] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Thanks for working on the proposed changes @TheCheerfulCoder. We validate in our CI that all rules from PEP-8 we care about are enforced. You can check the code checks jobs to get an idea all the things we validate.\r\n\r\nWe're happy with these longer lines, and merging this PR means we lose the value that `git b... |
3,115,813,278 | 61,546 | ENH: Adding DataFrame plotting benchmarks for large datasets | closed | 2025-06-04T00:21:17 | 2025-07-16T16:36:23 | 2025-07-16T16:36:22 | https://github.com/pandas-dev/pandas/pull/61546 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61546 | https://github.com/pandas-dev/pandas/pull/61546 | shadnikn | 3 | - [ ] related to #61532 - Adding in performance benchmarks for DataFrame plotting with large datasets.
- [ ] Description: Added 'DataFramePlottingLarge' benchmark class to track performance issues related to bottlenecks in #61398 and #61532. Tests multiple DataFrame sizes with/w/o DatetimeIndex & provides a baseline single-column comparison.
- [ ] Intended to cover:
- DataFrame sizes: (1000,10) to (10000,10)
- DatetimeIndex vs. regular index comparison
- Multi-column vs. single-column plotting.
| [
"Benchmark"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"@rhshadrach do you have an opinion about this?",
"@shadnikn - what is the runtime of these benchmarks on your machine?",
"Thanks for the pull request, but it appears to have gone stale. If interested in continuing, please merge in the main branch, address any review comments and/or failing tests, and we can re... |
3,115,629,990 | 61,545 | Fix WeekOfMonth offset constructor offsets.pyx | closed | 2025-06-03T22:40:30 | 2025-06-04T14:52:42 | 2025-06-04T14:52:42 | https://github.com/pandas-dev/pandas/pull/61545 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61545 | https://github.com/pandas-dev/pandas/pull/61545 | Cadensmith1123 | 1 | Related to issue #52431, went and scanned the checklist commented previously by a user on Aug 4th, 2024. Noticed WeekOfMonth has incorrect offset constructors, and updated according to an earlier comment by @Dr-Irv
| [] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"This is not the right way to do this. Will close this, because we need to figure out what is the right way."
] |
3,115,609,052 | 61,544 | DOC: Fix WeekOfMonth offset constructor offsets.pyx | closed | 2025-06-03T22:30:19 | 2025-06-03T22:36:06 | 2025-06-03T22:36:06 | https://github.com/pandas-dev/pandas/pull/61544 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61544 | https://github.com/pandas-dev/pandas/pull/61544 | Cadensmith1123 | 0 | Related to #52431 | [] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] |
3,115,248,818 | 61,543 | DOC: Fix docs for BusinessDay constructor #52431 | closed | 2025-06-03T19:50:59 | 2025-07-16T16:35:50 | 2025-07-16T16:35:50 | https://github.com/pandas-dev/pandas/pull/61543 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61543 | https://github.com/pandas-dev/pandas/pull/61543 | camacluc | 1 | fix constructor for offset BusinessDay
- [ ] addresses #52431
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
| [
"Docs",
"Frequency"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Thanks for the pull request, but it appears to have gone stale. If interested in continuing, please merge in the main branch, address any review comments and/or failing tests, and we can reopen."
] |
3,114,631,618 | 61,542 | CI: Debug slow environment solve times | closed | 2025-06-03T16:19:34 | 2025-07-28T19:22:25 | 2025-06-06T16:45:18 | https://github.com/pandas-dev/pandas/pull/61542 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61542 | https://github.com/pandas-dev/pandas/pull/61542 | mroeschke | 2 | - [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| [] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Not sure if you've seen the discussion in #61531",
"Ah yes thanks. Yeah I suspect this is a mamba solver issue. Closing this PR out"
] |
3,114,329,620 | 61,541 | BUG: Fix Index.equals between object and string | closed | 2025-06-03T14:54:11 | 2025-07-10T20:58:51 | 2025-07-10T20:58:43 | https://github.com/pandas-dev/pandas/pull/61541 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61541 | https://github.com/pandas-dev/pandas/pull/61541 | sanggon6107 | 1 | - [X] closes #61099
- [X] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [X] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [X] Added an entry in the latest `doc/source/whatsnew/v3.0.0.rst` file if fixing a bug or adding a new feature.
## Description of the code change on `Index.equals`
On the main branch, `Index.equals` casts `self` to `object` only when `self.dtype.na_value` is `np.nan`. The comparison actually succeeds when `self.dtype.na_value` is `np.nan` as below.
```python
>>> import pandas as pd
>>> import numpy as np
>>> s1 = pd.Series([1, 2, 3], index=['a', 'b', 'c'])
>>> s2 = pd.Series([4, 5, 6], index=['a', 'b', 'c'])
>>> s2.index = s2.index.astype(pd.StringDtype(storage="pyarrow", na_value=np.nan))
>>> print(s1 < s2)
a True
b True
c True
dtype: bool
```
However, since doc stated that `dtype` is not compared, `self` should be casted regardless of `self.dtype.na_value` so that `self` could be compared with other dtypes as desired.
## Description of the code change on `test_mixed_col_index_dtype`
`using_infer_string` has been removed since I think that `result` should be `string` regardless of `using_infer_string`. This is becaus of the code change made on `Index.equals` - since `Index.equals` consider `df1.columns` is equal to `df2.colums`, `Index.intersection` returns `self`(which is `string`). You could see the result becomes `object`(which is the dtype of `df2`) in case of `result = df2 + df1`. On the main branch, on the other hand, `Index.intersection` returns `object` because `Index.equals` returns `False`, and then both `self` and `other` are cast to `object` by `_find_common_type_compat`. (see `L3287` at pandas/core/indexes/base.py)
https://github.com/pandas-dev/pandas/blob/25e64629ec317fba2fc1c2834b20362fa6c1fd89/pandas/core/indexes/base.py#L3286-L3290
* I created this pull request since @MayurKishorKumar doesn't seem to work on this issue anymore, but please let me know if there is going to be further actions on the previous PR and I am supposed to close this one. | [
"Bug",
"Strings",
"Index"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Thanks @sanggon6107!"
] |
3,113,693,938 | 61,540 | ENH pandas-dev#60693: shift operations for Series and DataFrames | closed | 2025-06-03T12:06:30 | 2025-06-30T18:17:58 | 2025-06-30T18:17:58 | https://github.com/pandas-dev/pandas/pull/61540 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61540 | https://github.com/pandas-dev/pandas/pull/61540 | David-msggc | 2 | This commit introduces the rshift and lshift method for both Series and DataFrames in pandas. It also adds the corresponding in-place methods.
These methdos don't work between a Series and a Dataframe, or if the two Series or DataFrames differ in size.
- [x] closes #60693
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [x] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [x] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| [
"Enhancement",
"Numeric Operations"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"@mroeschke Any chance this would be worth adding to pandas?",
"Going to close this PR until a decision has been made on the original issue"
] |
3,113,587,665 | 61,539 | ENH: Columns formatted as "Text" in Excel are read as numbers | open | 2025-06-03T11:31:43 | 2025-06-23T16:00:42 | null | https://github.com/pandas-dev/pandas/issues/61539 | true | null | null | pranay-sa | 0 | ### Feature Type
- [x] Adding new functionality to pandas
- [ ] Changing existing functionality in pandas
- [ ] Removing existing functionality in pandas
### Problem Description
When reading Excel files, pandas ignores Excel's "Text" cell formatting and converts text-formatted numbers (e.g., IDs, codes) to numeric types (int/float).
This requires manual conversion back to strings, which can be inefficient , for a huge dataset and prone to errors.

### Feature Description
Add an option in pd.read_excel() to respect Excel's cell formatting
(e.g., dtype_from_format=True), or set it to true by default , preserving text-formatted columns as strings.
### Alternative Solutions
OpenPyXL/Xlrd Engine + Format Detection
Read cell formats directly (requires manual parsing):
from openpyxl import load_workbook
wb = load_workbook("data.xlsx", data_only=False)
sheet = wb.active
text_columns = [col for col in sheet.columns if sheet.cell(row=1, column=col[0].column).number_format == "@"]
### Additional Context
_No response_ | [
"Enhancement",
"IO Excel",
"Needs Triage"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] |
3,111,502,879 | 61,538 | usecols investigation for various I/O functions | closed | 2025-06-02T20:16:32 | 2025-06-03T16:04:25 | 2025-06-03T16:04:24 | https://github.com/pandas-dev/pandas/issues/61538 | true | null | null | eicchen | 1 | pasting my comment from #61386 for visibility with relevant decisionmakers
> As promised during the sync meeting today, I went and compiled how various read functions handle columns being specified. Functions that take usecols (read_csv, read_clipboard, read_excel, and read_hdf(undocumented)) don't take into account input order, whereas functions that ask for columns instead do (hdf, feather, parquet, orc, starata, sql).
>
> Finally, there are also some that straight up don't take column specifiers.
>
> I'd expect functions that use usecols to be using the same function in the backend, but I'd have to verify it if we're planning to standardize the parameter.
>
> CSV attached below of functions tested (those with a read and write function in pandas)
> [does_it_use_order.csv](https://github.com/user-attachments/files/20558438/does_it_use_order.csv) | [
"IO CSV",
"Needs Discussion"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Thanks, but let's centralize discussion in the original issue (https://github.com/pandas-dev/pandas/issues/61386) so closing this one"
] |
3,111,287,144 | 61,537 | BUG: use of iloc with heterogeneous DataFrame sometimes performs undocumented conversions | open | 2025-06-02T18:59:10 | 2025-06-23T16:03:56 | null | https://github.com/pandas-dev/pandas/issues/61537 | true | null | null | illbebach | 0 | ### Pandas version checks
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [ ] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
import pandas as pd
d = {
'Name': ['Bob', 'John', 'Alice'],
'Age': [25, 41, 30],
'Result': [1.2, 0.5, 0.3],
'Ok': [True, False, True],
}
df = pd.DataFrame(data=d)
print()
print('------ test 1 ------')
print(df)
print()
print('first row')
print(df.iloc[0])
d = {
'Age': [25, 41, 30],
'Result': [1.2, 0.5, 0.3],
'Ok': [True, False, True],
}
df = pd.DataFrame(data=d)
print()
print('------ test 2 ------')
print(df)
print()
print('first row')
print(df.iloc[0])
d = {
'Age': [25, 41, 30],
'Result': [1.2, 0.5, 0.3],
}
df = pd.DataFrame(data=d)
print()
print('------ test 3 ------')
print(df)
print()
print('first row')
print(df.iloc[0])
```
### Issue Description
Pandas sometimes performs type conversions on returned data from the `.iloc` function.
In the first two cases, a Series of object is returned. In the last case pandas decided to promote all values to float, and returns a Series of float, which is not what I want.
Program output
```
------ test 1 ------
Name Age Result Ok
0 Bob 25 1.2 True
1 John 41 0.5 False
2 Alice 30 0.3 True
first row
Name Bob
Age 25
Result 1.2
Ok True
Name: 0, dtype: object
------ test 2 ------
Age Result Ok
0 25 1.2 True
1 41 0.5 False
2 30 0.3 True
first row
Age 25
Result 1.2
Ok True
Name: 0, dtype: object
------ test 3 ------
Age Result
0 25 1.2
1 41 0.5
2 30 0.3
first row
Age 25.0
Result 1.2
Name: 0, dtype: float64
```
1. To me, it is undesirable, in any circumstance, for pandas to apply type conversions on returned data. I realize this conversion is probably there for historical or performance reasons, and may not be changed.
3. At a minimum, the documentation should mention exactly what circumstances a type conversion will occur. The [.iloc documentation ](https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.iloc.html#pandas.DataFrame.iloc) makes no mention of what will occur when a DataFrame containing heterogeneous data is indexed via `.iloc`
For completeness, there is a similar bug logged long ago. #5256
### Expected Behavior
1. My preference is to never perform type conversions. I realize changing this behavior could break some existing code that depends on such a conversion.
2. My second recommendation is to update the documentation to describe exactly when and what data conversions will be performed by pandas. At a minimum there should be a 'warning' or 'note' about the type conversions.
### Installed Versions
<details>
INSTALLED VERSIONS
------------------
commit : 0691c5cf90477d3503834d983f69350f250a6ff7
python : 3.12.10
python-bits : 64
OS : Windows
OS-release : 10
Version : 10.0.19045
machine : AMD64
processor : Intel64 Family 6 Model 154 Stepping 3, GenuineIntel
byteorder : little
LC_ALL : None
LANG : None
LOCALE : English_United States.1252
pandas : 2.2.3
numpy : 2.2.6
pytz : 2025.2
dateutil : 2.9.0.post0
pip : 25.1.1
Cython : None
sphinx : None
IPython : None
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : None
blosc : None
bottleneck : None
dataframe-api-compat : None
fastparquet : None
fsspec : None
html5lib : None
hypothesis : None
gcsfs : None
jinja2 : None
lxml.etree : None
matplotlib : 3.10.3
numba : None
numexpr : None
odfpy : None
openpyxl : 3.1.5
pandas_gbq : None
psycopg2 : None
pymysql : None
pyarrow : None
pyreadstat : None
pytest : None
python-calamine : None
pyxlsb : None
s3fs : None
scipy : 1.15.3
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : None
xlsxwriter : None
zstandard : None
tzdata : 2025.2
qtpy : None
pyqt5 : None
</details>
| [
"Bug",
"Indexing",
"Dtype Conversions",
"Needs Triage"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] |
3,110,854,668 | 61,536 | [pre-commit.ci] pre-commit autoupdate | closed | 2025-06-02T16:29:49 | 2025-06-02T17:02:26 | 2025-06-02T17:02:23 | https://github.com/pandas-dev/pandas/pull/61536 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61536 | https://github.com/pandas-dev/pandas/pull/61536 | pre-commit-ci[bot] | 0 | <!--pre-commit.ci start-->
updates:
- [github.com/astral-sh/ruff-pre-commit: v0.11.8 → v0.11.12](https://github.com/astral-sh/ruff-pre-commit/compare/v0.11.8...v0.11.12)
- [github.com/asottile/pyupgrade: v3.19.1 → v3.20.0](https://github.com/asottile/pyupgrade/compare/v3.19.1...v3.20.0)
- [github.com/pre-commit/mirrors-clang-format: v20.1.3 → v20.1.5](https://github.com/pre-commit/mirrors-clang-format/compare/v20.1.3...v20.1.5)
- [github.com/trim21/pre-commit-mirror-meson: v1.8.0 → v1.8.1](https://github.com/trim21/pre-commit-mirror-meson/compare/v1.8.0...v1.8.1)
<!--pre-commit.ci end--> | [
"Code Style"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] |
3,108,703,108 | 61,535 | ENH: read_csv tz option | closed | 2025-06-02T05:57:05 | 2025-08-05T16:29:10 | 2025-08-05T16:29:10 | https://github.com/pandas-dev/pandas/issues/61535 | true | null | null | hasandiwan | 2 | ### Feature Type
- [x] Adding new functionality to pandas
- [ ] Changing existing functionality in pandas
- [ ] Removing existing functionality in pandas
### Problem Description
I use pd.read_csv to grab a series of timestamp'd links interactively from a remote website.
### Feature Description
I would like it to convert the columns specified by parse_dates to the timezone specified by wherever the /etc/localtime link points to by default in a non-deprecated manner:
> [frame.loc[:,c].dt.tz_convert('/'.join([os.getenv('TZ', os.path.realpath('/etc/localtime').split('/')[-2:])][0])) for c in frame.select_dtypes('datetime64[ns, UTC]')]`
I'd like to propose this functionality as the tz parameter to read_csv. I suspect the implementation is not python, and can't find it in my git checkout of pandas.
### Alternative Solutions
Covered above
### Additional Context | [
"Enhancement",
"IO CSV",
"Needs Triage",
"Closing Candidate"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"I don't think we'll implement this. `read_csv` has too way many parameters already, and this can be done after the CSV is read, if I'm not missing anything. But let's see what others think.",
"Agreed, closing"
] |
3,108,065,455 | 61,534 | Add version constraints to reduce micromamba CI environment resolution time | closed | 2025-06-01T23:59:29 | 2025-06-02T08:33:13 | 2025-06-02T08:32:55 | https://github.com/pandas-dev/pandas/pull/61534 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61534 | https://github.com/pandas-dev/pandas/pull/61534 | Ishubhammohole | 2 | This PR updates the `environment.yml` file to add version constraints to several frequently resolved dependencies. These changes help reduce environment resolution time in CI workflows using micromamba, which was previously leading to timeouts (see #61531).
Updated packages:
- ipywidgets>=8.1.2
- nbformat>=5.9.2
- notebook>=7.0.6,<7.2.0
- dask-core>=2024.4.2
- seaborn-base>=0.13.2
Fixes: #61531 | [
"CI",
"Dependencies"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"pre-commit.ci autofix",
"Closing, as discussed in https://github.com/pandas-dev/pandas/issues/61531#issuecomment-2929429803"
] |
3,107,929,504 | 61,533 | BUG: Fix Series.str.zfill for ArrowDtype string arrays #61485 | open | 2025-06-01T22:09:25 | 2025-07-17T00:09:10 | null | https://github.com/pandas-dev/pandas/pull/61533 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61533 | https://github.com/pandas-dev/pandas/pull/61533 | iabhi4 | 1 | Implemented `_str_zfill` for `ArrowExtensionArray` to support `Series.str.zfill` on Arrow-backed string arrays (`ArrowDtype(pa.string())`). This fixes an AttributeError due to the method relying on `_str_map`, which wasn't implemented. Used `_apply_elementwise` to match the approach of other string methods. Added tests under `test_string_array.py` and confirmed they pass. Also confirmed no other relevant test files are broken and the change aligns with how other string accessors are handled.
- [x] closes #61485
- [x] Tests added and passed
- [x] All code checks passed with pre-commit
- [x] Added a changelog entry in v3.0.0.rst | [
"Strings",
"Stale"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"This pull request is stale because it has been open for thirty days with no activity. Please [update](https://pandas.pydata.org/pandas-docs/stable/development/contributing.html#updating-your-pull-request) and respond to this comment if you're still interested in working on this."
] |
3,107,872,632 | 61,532 | ENH: speed up `DataFrame.plot` using `LineCollection` | open | 2025-06-01T21:25:38 | 2025-06-04T00:23:14 | null | https://github.com/pandas-dev/pandas/issues/61532 | true | null | null | Abdelgha-4 | 5 | **Description:**
When plotting line charts with many columns or rows, DataFrame.plot() currently adds one Line2D object per column. This incurs significant overhead in large datasets.
Replacing this with a single LineCollection (from matplotlib.collections) can yield substantial speedups. In my benchmarks, plotting via LineCollection was ~2.5× faster on large DataFrames with many columns.
**Minimal example:**
```python
# Imports and data generation
import itertools
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from matplotlib.collections import LineCollection
num_rows = 500
num_cols = 2000
test_df = pd.DataFrame(np.random.randn(num_rows, num_cols).cumsum(axis=0))
# Simply using DataFrame.plot, (5.6 secs)
test_df.plot(legend=False, figsize=(12, 8))
plt.show()
# Optimized version using LineCollection (2.2 secs)
x = np.arange(len(test_df.index))
lines = [np.column_stack([x, test_df[col].values]) for col in test_df.columns]
default_colors = plt.rcParams["axes.prop_cycle"].by_key()["color"]
color_cycle = list(itertools.islice(itertools.cycle(default_colors), len(lines)))
line_collection = LineCollection(lines, colors=color_cycle)
fig, ax = plt.subplots(figsize=(12, 8))
ax.add_collection(line_collection)
ax.margins(0.05)
plt.show()
```
**Note:** the ~2.5x speed improvement is specific to dataframes with integer index. For dataframes with `DatetimeIndex` the actual speed improvement is ~27x when combined with the workaround here: #61398
Thank you for considering this suggestion! | [
"Visualization",
"Performance"
] | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Hello, I'd like to take an opportunity to resolve this issue. Could I get assigned to it?",
"take",
"Confirmed on main and in my testing as well. I am aware that this is relatively linked to #61398, but I do think that this should be kept open as a separate issue since they tackle different performance bottlen... |
3,107,305,231 | 61,531 | CI: Micromamba taking too long to resolve the environments in the CI | closed | 2025-06-01T14:08:04 | 2025-06-13T14:34:13 | 2025-06-13T14:34:13 | https://github.com/pandas-dev/pandas/issues/61531 | true | null | null | datapythonista | 6 | Our CI jobs are frequently failing now as they timeout after 90 minutes of execution. Of those 90 minutes, 25 are spent on micromamba resolving the environment.
In the past we have fixed this by limiting the number of packages to be considered. For example, if the environment just says `numpy`, maybe there are 200 versions that will be considered. While if we say `numpy >= 2` the number can be limited to few.
I'm not sure which packages have lots of options, and we don't want to filter out the versions that make sense to install. But we should have a look and see if by adding few constraints we can get a reasonable time to solve the environment. | [
"CI",
"Dependencies",
"good first issue"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"take",
"The failing CI jobs is exactly what we are trying to fix. I'll close the PR for now, as it doesn't really help as it is. The dependencies that need to be fixed are not our development environment (not sure how long it takes, but not 25 minutes the last time I installed it). It's the dependencies for the ... |
3,107,129,953 | 61,530 | API: Replace na_action parameter in Series/DataFrame/Index.map by the standard skipna | open | 2025-06-01T11:47:23 | 2025-07-25T00:09:11 | null | https://github.com/pandas-dev/pandas/pull/61530 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61530 | https://github.com/pandas-dev/pandas/pull/61530 | datapythonista | 2 | - [X] xref #61128
- [X] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [X] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [X] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [X] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
We've been consistently using a boolean `skipna` parameter to let users decide whether missing values should be ignored or not in different methods. The `.map` method has an awkward `na_action=None | "igonre"` for the same purpose. I add the standard `skipna` parameter to the methods, and start the deprecation of the `na_action` one. | [
"API Design",
"Apply",
"Stale"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Updated this so the `na_action` parameter is also not removed from the extension arrays, and it warns. The base array since as you said @mroeschke is technically public. `map_array` since it's used by pandas-pint, even if it's not supposed to be public, but better not break their code. And the specific implementat... |
3,106,434,267 | 61,529 | raise TypeError when function returns object-like results | open | 2025-06-01T02:41:40 | 2025-06-23T16:06:20 | null | https://github.com/pandas-dev/pandas/issues/61529 | true | null | null | michael2015tse | 0 | This roll_apply raises `TypeError: must be real number, not str` when my customized function returns str / list or other object-like results. Because here it is: `ndarray[float64_t] output`
https://github.com/pandas-dev/pandas/blob/0691c5cf90477d3503834d983f69350f250a6ff7/pandas/_libs/window/aggregations.pyx#L1396 | [
"Window"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] |
3,105,890,881 | 61,528 | DOC: Typo fix for .astype() in cheatsheet | closed | 2025-05-31T18:35:29 | 2025-06-02T16:29:02 | 2025-06-02T16:28:55 | https://github.com/pandas-dev/pandas/pull/61528 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61528 | https://github.com/pandas-dev/pandas/pull/61528 | brchristian | 3 | - [x] Closes #61523
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| [
"Docs"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"@datapythonista You're very welcome! I noticed that too about the PDF size. I'm not too familiar with the MS Office suite and how PowerPoint exports exactly; I just saved it directly from PowerPoint Version 16.97.2 (25052611) for Mac by following what looked like the standard flow through either \"Save\" or \"Expo... |
3,105,461,098 | 61,527 | ENH: Implement DataFrame.select | closed | 2025-05-31T13:10:03 | 2025-06-20T21:15:00 | 2025-06-20T20:20:32 | https://github.com/pandas-dev/pandas/pull/61527 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61527 | https://github.com/pandas-dev/pandas/pull/61527 | datapythonista | 22 | - [X] closes #61522
- [X] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [X] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [X] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [X] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
Based on the feedback in #61522 and on the last devs call, I implemented `DataFrame.select` in the most simple way. It does work with `MultiIndex`, but it does not support equivalents to `filter(regex=)` or `filter(like=`) directly. I added examples in the docs, so users can do that easily in Python (I can add one for regex if people think it's worth it).
The examples in the docs and the tests should make quite clear what's the behavior, feedback welcome.
For context, this is added so we can make `DataFrame.filter` focus on filtering rows, for example:
```python
df = df.select("name", "age")
df = df.filter(df.age >= 18)
```
or
```python
(df.select("name", "age")
.filter(lambda df: df.age >= 18))
```
CC: @pandas-dev/pandas-core | [
"Enhancement",
"Indexing",
"API Design"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Slight preference for (arg) over (*arg), strong preference for supporting one, not both.",
"For reference, [PySpark](https://spark.apache.org/docs/latest/api/python/reference/pyspark.sql/api/pyspark.sql.DataFrame.select.html) uses `*cols`, and [Polars](https://docs.pola.rs/api/python/stable/reference/dataframe/a... |
3,105,055,505 | 61,526 | DOC: fix ES01 for pandas.plotting.autocorrelation_plot | closed | 2025-05-31T07:08:00 | 2025-06-08T09:07:28 | 2025-06-02T16:37:15 | https://github.com/pandas-dev/pandas/pull/61526 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61526 | https://github.com/pandas-dev/pandas/pull/61526 | tuhinsharma121 | 2 | fixes
```
pandas.plotting.autocorrelation_plot ES01
```
This includes a crisp extended summary of the method which talks about what it does, how it does it and whats the value of it.
| [
"Docs",
"Visualization"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Thanks @tuhinsharma121 ",
"> lgtm, but I'm not so familiar with autocorrelation plots, I'll let someone else merge. @Dr-Irv maybe you know about this more?\r\n\r\nNot my area at all!"
] |
3,104,977,346 | 61,525 | BUG: Fix assert_frame_equal dtype handling when check_dtype=False (#61473) | open | 2025-05-31T06:09:26 | 2025-07-04T23:17:54 | null | https://github.com/pandas-dev/pandas/pull/61525 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61525 | https://github.com/pandas-dev/pandas/pull/61525 | iabhi4 | 2 | Fixes a bug in `assert_frame_equal` where `DataFrames` with logically equal values but different dtypes (nullable vs non-nullable) would raise errors even with check_dtype=False. Now, when `check_dtype=False`, dtype mismatches won’t cause failures.
- [x] closes #61473
- [x] Tests added and passing
- [x] Pre-commit checks passing
- [x] Added entry to v3.0.0.rst
| [
"Testing",
"Stale"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"This pull request is stale because it has been open for thirty days with no activity. Please [update](https://pandas.pydata.org/pandas-docs/stable/development/contributing.html#updating-your-pull-request) and respond to this comment if you're still interested in working on this.",
"hey @mroeschke, addressed the ... |
3,104,716,590 | 61,524 | BUG: Fix pivot_table margins to include NaN groups when dropna=False | closed | 2025-05-31T02:52:43 | 2025-07-13T12:13:46 | 2025-07-13T12:10:06 | https://github.com/pandas-dev/pandas/pull/61524 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61524 | https://github.com/pandas-dev/pandas/pull/61524 | iabhi4 | 5 | Fix incorrect margin computation in `pivot_table` when index or columns contain NA values
This PR fixes an issue where the `"All"` row or column (i.e., `margins=True`) in `pd.pivot_table` does not account for rows that contain `NA` values in the index or column dimensions. These rows were incorrectly excluded from the overall aggregation used to compute the margin, leading to incorrect totals.
The fix modifies the margin calculation to ensure that rows with `NA` values are included in the aggregation, consistent with how the data is treated in the main table when `dropna=False`.
- [x] closes #61509
- [x] Test added and passed
- [x] Ran pre-commit check
- [x] Added entry in `doc/source/whatsnew/v3.0.0.rst` under `Reshaping` | [
"Bug",
"Missing-data",
"Reshaping"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"This change fixes the margin behavior in `pivot_table` when `dropna=False`, but as expected, it also affects related logic, a few tests are now failing, especially in `test_crosstab`, where the old margin behavior is still being asserted.\r\n\r\nBefore I go ahead and update those tests to match the new behavior, I... |
3,104,347,859 | 61,523 | DOC: Official "Cheat Sheet" shows `as_type()` method, correct signature is `astype()` | closed | 2025-05-30T21:51:59 | 2025-06-02T16:28:56 | 2025-06-02T16:28:56 | https://github.com/pandas-dev/pandas/issues/61523 | true | null | null | brchristian | 3 | ### Pandas version checks
- [x] I have checked that the issue still exists on the latest versions of the docs on `main` [here](https://pandas.pydata.org/docs/dev/)
### Location of the documentation
https://github.com/pandas-dev/pandas/blob/main/doc/cheatsheet/Pandas_Cheat_Sheet.pdf
### Documentation problem
The third page of the official "Cheat Sheet" at https://github.com/pandas-dev/pandas/blob/main/doc/cheatsheet/Pandas_Cheat_Sheet.pdf has a section called "Changing Type". It lists `df.as_type(type)`, however no such method exists; the correct method should be `df.astype(type)`.
### Suggested fix for documentation
The fix is straightforward: replace `as_type` with `astype`. | [
"Docs"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"I’m very happy to take this and submit a PR that updates the PPTX/PDF. 👍",
"Thanks for the report. Agreed we should have `astype`. PR to fix is welcome - when doing so, please update all files in the `cheatsheet` directory.",
"PR #61528 submitted!\n\nNote that the cheatsheets offered in Farsi and Japanese are... |
3,104,104,614 | 61,522 | ENH: Implement DataFrame.select to select columns | open | 2025-05-30T19:40:31 | 2025-06-03T18:25:57 | null | https://github.com/pandas-dev/pandas/issues/61522 | true | null | null | datapythonista | 3 | Add a new method `DataFrame.select` to select columns from a DataFrame. The exact specs are still open to discussion, here I write a draft of what the method could look like.
Basic case, select columns. Personally both as a list, or as multiple parameters with `*args` should be supported for convenience:
```python
df.select("column1", "column2")
df.select(["column1", "column2"])
```
Cases to consider.
**What if a provided column doesn't exist?** I assume we want to raise a `ValueError`.
**What if a column is duplicated?** I assume we want to return the column twice.
**How to select with a wildcard or regex?** Some options:
1. Not support them (users can do anything fancy with `df.columns` themselves.
2. Assume the column is a regex if name starts by `^` and ends with `$`. For wildcards, I guess it could be ok if `column*` is provided, to first check if the column with the star exists, if it does return it, otherwise assume the star is a wildcard
3. Accept callables, so users can do `df.select(lambda col: col.startswith("column"))`
4. Have extra parameters `regex` like `df.select(regex="column\d")`
5. Same as 2 by make users enable if explicitly with a flag `df.select("column\d", regex=True)`
Personally, I'd start by 1, not supporting anything fancy, and decide later. It's way easier to add, than to remove something we don't like once released.
**What to do with MultiIndex?** I guess if a list of strings is provided, they should select from the first level of the MultiIndex. Should we support the elements being tuples to select multiple levels at once? I haven't worked much with MultiIndex myself for a while, @Dr-Irv maybe you have an idea on what the expectation should be.
Can anyone think of anything else not trivial for implementing this? | [
"Enhancement",
"Indexing",
"Needs Discussion"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Related issues\n\nhttps://github.com/pandas-dev/pandas/issues/40322\nhttps://github.com/pandas-dev/pandas/issues/55289\nhttps://github.com/pandas-dev/pandas/issues/61317\n\n> What if...\n\nWhen just `*args` are provided, this should have the same behavior as `__getitem__` when a row is not provided. Doing anything... |
3,104,037,445 | 61,521 | Misleading error message when PyTables is not installed | closed | 2025-05-30T19:08:09 | 2025-06-30T18:16:15 | 2025-06-30T18:16:15 | https://github.com/pandas-dev/pandas/issues/61521 | true | null | null | user27182 | 7 | I tried reading an hd5 file with the latest pandas and got this import error:
`ImportError: Missing optional dependency 'pytables'. Use pip or conda to install pytables.`
So I tried `pip install pytables` and got this error:
```
ERROR: Could not find a version that satisfies the requirement pytables (from versions: none)
ERROR: No matching distribution found for pytables
```
So then I went searching on PyPI and apparently there are no packages named `pytables`: https://pypi.org/search/?q=pytables
I did find the [PyTables](https://github.com/PyTables/PyTables) project on GitHub though, which says that we need to use `pip install tables` to install it. After installing `tables`, the hd5 read operation worked.
So, we need to install _tables_, not _pytables_, which is definitely confusing and not obvious. I think it would be very helpful if the error message indicated this to avoid having to go through the search process above. | [
"Error Reporting"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Replacing these lines:\nhttps://github.com/pandas-dev/pandas/blob/c708e152c42f81b17cf6e47f7939a86e4c3fc77f/pandas/compat/_optional.py#L151-L157\nwith:\n``` python\n package_name = INSTALL_MAPPING.get(name, name)\n\n msg = (\n f\"Missing optional dependency {package_name}. {extra} \"\n f\"Use pi... |
3,103,956,084 | 61,520 | Backport PR #61518 on branch 2.3.x (TST: Use external_error_raised for numpy-raised test_error_invalid_values) | closed | 2025-05-30T18:34:23 | 2025-05-30T21:06:07 | 2025-05-30T21:06:07 | https://github.com/pandas-dev/pandas/pull/61520 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61520 | https://github.com/pandas-dev/pandas/pull/61520 | meeseeksmachine | 0 | Backport PR #61518: TST: Use external_error_raised for numpy-raised test_error_invalid_values | [
"Testing"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] |
3,103,936,508 | 61,519 | BUILD: Bump Cython to 3.1 | closed | 2025-05-30T18:26:12 | 2025-06-05T18:50:32 | 2025-06-04T18:16:55 | https://github.com/pandas-dev/pandas/pull/61519 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61519 | https://github.com/pandas-dev/pandas/pull/61519 | rhshadrach | 8 | - [x] closes #60972 (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
ASVs: https://github.com/pandas-dev/pandas/issues/60972#issuecomment-2906286143 | [
"Build",
"Dependencies"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"> Could you also bump cython in the ci/deps files?\r\n\r\nBumped. The pin there was `cython>=0.29.33` (except freethreading which doesn't have any pin), but I do think having some upper bound is preferable. \r\n\r\nFrom the 2 to 3 transition, I was cautious about accepting new versions and I think now overly so. W... |
3,103,699,390 | 61,518 | TST: Use external_error_raised for numpy-raised test_error_invalid_values | closed | 2025-05-30T16:34:21 | 2025-05-30T18:33:52 | 2025-05-30T18:33:49 | https://github.com/pandas-dev/pandas/pull/61518 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61518 | https://github.com/pandas-dev/pandas/pull/61518 | mroeschke | 1 | Looks like numpy extended an error message that is failing `test_error_invalid_values` e.g https://github.com/pandas-dev/pandas/actions/runs/15331067593/job/43137707952 | [
"Testing"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Merging to fix the failure on main"
] |
3,101,810,788 | 61,517 | BUG: Fix sorting by column named None in DataFrame.sort_values (GH#61512) | closed | 2025-05-30T00:57:45 | 2025-06-02T16:45:50 | 2025-06-02T16:45:44 | https://github.com/pandas-dev/pandas/pull/61517 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61517 | https://github.com/pandas-dev/pandas/pull/61517 | iabhi4 | 1 | Fixes a bug in `DataFrame.sort_values` where sorting by a column explicitly named None raised a `KeyError`. Added a conditional check to correctly retrieve and sort the column when `None` is used as a label.
- [x] closes #61512
- [x] Tests added and passed
- [x] All code checks passed via pre-commit
- [x] Added an entry to the latest whatsnew file | [
"Algos"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Thanks @iabhi4 "
] |
3,101,600,757 | 61,516 | BUG: DataFrame.sample weights not required to sum to less than 1 | closed | 2025-05-29T22:14:58 | 2025-07-11T02:27:17 | 2025-07-11T02:27:16 | https://github.com/pandas-dev/pandas/issues/61516 | true | null | null | dougj892 | 16 | ### Pandas version checks
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [x] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
import pandas as pd
data = {'w': [100, 1, 1]}
df = pd.DataFrame(data)
df.sample(n=2, weights=df.w, replace=False)
```
### Issue Description
In order for PPS sampling without replacement to be feasible, the selection probabilities must be less than 1, i.e.
$ \frac{n \cdot w_i}{\sum w_i}< 1$
where w is the weight and n is the total number of units to be sampled. This is often not the case if you are selecting a decent proportion of all units and there is wide variance in unit size. For example, suppose you want to select 2 units with PPS without replacement from a sampling frame of 3 units with sizes 100, 1, and 1. There is no way to make the probability of selection of the first unit 100x the probability of selection of the other two units (since the max prob for the first unit is 1 and at least one of the other units must have prob >= .5).
Unfortunately, pandas df.sampling function doesn't throw an error in this case.
### Expected Behavior
The code above should throw some sort of error like "Some unit probabilities are larger than 1 and thus PPS sampling without replacement cannot be performed"
### Installed Versions
<details>
Replace this line with the output of pd.show_versions()
</details>
| [
"Bug",
"Algos"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Thanks for the report. As named and documented, these are weights and not probabilities. pandas will normalize the sum to 1 so that they are used a probabilities. E.g. in your example, the corresponding probabilities are:\n\n 100 / (100 + 1 + 1), 1 / (100 + 1 + 1), 1 / (100 + 1 + 1)\n\nMarking as a closing cand... |
3,100,485,724 | 61,515 | ENH: Ability to name columns/index levels when using `.str.split(..., expand=True)` on `Index`/`Series` | open | 2025-05-29T14:11:53 | 2025-05-30T09:23:05 | null | https://github.com/pandas-dev/pandas/issues/61515 | true | null | null | nachomaiz | 2 | ### Feature Type
- [x] Adding new functionality to pandas
- [x] Changing existing functionality in pandas
- [ ] Removing existing functionality in pandas
### Problem Description
When using `.str.split(..., expand=True)`:
- On a `Series` the resulting dataframe columns are labeled with numbers by default
- On an `Index` the resulting levels are not labeled
It would be great if we could specify the names that the new columns or levels will take once the split is performed.
### Feature Description
I think it would be helpful if the method had a `names` parameter that would at a minimum accept a sequence of labels for the newly created columns/levels, similarly to how `MultiIndex` is initialized.
It could work like so:
```py
>>> index = pd.Index(["a_b"])
>>> index.str.split("_", expand=True, names=["A", "B"])
MultiIndex([('a', 'b')], names=["A", "B"], length=1)
>>> series = pd.Series(["a_b"])
>>> series.str.split("_", expand=True, names=["A", "B"])
| | A | B |
|---|---|---|
| 0 | a | b |
```
The length of the `names` sequence should match the number of expanded columns/levels, otherwise it should throw a `ValueError`.
### Alternative Solutions
For `Index`, this works almost exactly the same:
```py
>>> index.str.split("_", expand=True).rename(["A", "B"])
```
So I think it's not as impactful for `Index`.
But for `Series`, this becomes more cumbersome, and the need to specify the renaming via a dictionary makes it feel disjointed vs the easier index renaming and `MultiIndex` instantiation:
```py
>>> series.str.split("_", expand=True).rename(columns={0: "A", 1: "B"})
```
So my proposal would provide a similar interface for using the `split` method of the `str` accessor across pandas sequences.
### Additional Context
_No response_ | [
"Enhancement",
"Needs Triage"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"What about `series.str.split(\"_\", expand=True).set_axis([\"A\", \"B\"], axis=1)`?",
"Ah, good point! Yes that would work as well and is less cumbersome than the `rename` method.\n\nI'd still think it would be worth having this functionality in `str.split`, but I agree the need-gap isn't as large as I thought.\... |
3,100,115,943 | 61,514 | BUGFIX: escape `quotechar` when `escapechar` is not None (even if quoting=csv.QUOTE_NONE) | closed | 2025-05-29T11:53:41 | 2025-05-30T16:36:50 | 2025-05-30T16:36:42 | https://github.com/pandas-dev/pandas/pull/61514 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61514 | https://github.com/pandas-dev/pandas/pull/61514 | KevsterAmp | 2 | - [x] closes #61407 (Replace xxxx with the GitHub issue number)
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [x] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [x] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
---
Found the issue on `CSVFormatter._initialize_quotechar`, wherein it only returns quotechar when `self.quoting` is not `csvlib.QUOTE_NONE`:
```python
if self.quoting != csvlib.QUOTE_NONE:
# prevents crash in _csv
return quotechar
return None
```
to follow the same behavior of `csv.writer` (escape when `escapechar is not None`), I improved the if function to **return quotechar when `self.escapechar is not None`**
in the `CSVFormatter.__init__`, moved the initialization of `self.escapechar` higher than `self.quotechar` since the initial solution was returning errors from calling `self.escapechar` before its initialization
initially marked as draft to let the CIs run & check if there are tests affected by this change | [
"IO CSV"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"CI errors looks unrelated",
"Thanks @KevsterAmp "
] |
3,099,904,854 | 61,513 | ENH: Adding pd.from_pydantic | open | 2025-05-29T10:34:29 | 2025-07-15T21:07:14 | null | https://github.com/pandas-dev/pandas/issues/61513 | true | null | null | JavierLopezT | 1 | ### Feature Type
- [x] Adding new functionality to pandas
- [ ] Changing existing functionality in pandas
- [ ] Removing existing functionality in pandas
### Problem Description
There are a few methods to get a pandas DataFrame from a Pydantic model, but it will be just easier and more readable to not have any intermediate step.
### Feature Description
Ideally, I could do something like:
from pydantic import BaseModel
class Item(BaseModel):
item_category: str
item_name: str
purchase_price: float
suggested_retail_price: float
item_number: int
margin: float
note: Optional[str] = None
response = : list[Item]
df = pd.from_pydantic(response)
And I will get a pandas df with columns `item_category, item_name, purchase_price, suggested_retail_price, item_number, margin, note` and on each row, one element of the list in response
### Alternative Solutions
https://stackoverflow.com/questions/61814887/how-to-convert-a-list-of-pydantic-basemodels-to-pandas-dataframe
### Additional Context
_No response_ | [
"Enhancement",
"IO Data",
"Needs Triage"
] | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"take"
] |
3,098,361,410 | 61,512 | BUG: Cannot sort by columns named None | closed | 2025-05-28T19:30:52 | 2025-06-02T16:45:45 | 2025-06-02T16:45:45 | https://github.com/pandas-dev/pandas/issues/61512 | true | null | null | zvun | 1 | ### Pandas version checks
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [x] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
import pandas as pd
df = pd.DataFrame([[1, 2], [3, 4]], columns=['C1', None])
df.sort_values(None) # KeyError: None
```
### Issue Description
Sorting a DataFrame by a column named `None` results in the error `KeyError: None`. This breaks e.g. plugins that depend on Pandas for viewing and sorting DataFrames (see a related DataWrangler issue [here](https://github.com/microsoft/vscode-data-wrangler/issues/496), where inconsistent behavior with columns named None has also been reported).
### Expected Behavior
A column named None should not result in inconsistent behavior where some operations work but some others don't.
### Installed Versions
INSTALLED VERSIONS
------------------
commit : 0691c5cf90477d3503834d983f69350f250a6ff7
python : 3.11.2
python-bits : 64
OS : Linux
OS-release : 6.1.0-32-amd64
Version : #1 SMP PREEMPT_DYNAMIC Debian 6.1.129-1 (2025-03-06)
machine : x86_64
processor :
byteorder : little
LC_ALL : None
LANG : None
LOCALE : en_US.UTF-8
pandas : 2.2.3
numpy : 2.2.6
pytz : 2025.2
dateutil : 2.9.0.post0
pip : 25.0
Cython : None
sphinx : None
IPython : 9.2.0
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : None
blosc : None
bottleneck : None
dataframe-api-compat : None
fastparquet : None
fsspec : None
html5lib : None
hypothesis : None
gcsfs : None
jinja2 : None
lxml.etree : None
matplotlib : 3.10.3
numba : None
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
psycopg2 : None
pymysql : None
pyarrow : None
pyreadstat : None
pytest : 8.3.5
python-calamine : None
pyxlsb : None
s3fs : None
scipy : None
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : None
xlsxwriter : None
zstandard : None
tzdata : 2025.2
qtpy : None
pyqt5 : None | [
"Bug",
"Indexing",
"Sorting"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Thanks for the report. In general I suspect you will have a bad time trying to use `None` in an index or columns. However as long as pandas allows it, we should strive to improve this behavior. Further investigations and PRs to fix are welcome.\n\nWith `future.infer_string = True`, this can now at least work. As p... |
3,098,321,760 | 61,511 | DOC: Docker image provided on "Debugging C extensions" is out of date | open | 2025-05-28T19:12:36 | 2025-06-02T22:26:24 | null | https://github.com/pandas-dev/pandas/issues/61511 | true | null | null | eicchen | 6 | ### Pandas version checks
- [x] I have checked that the issue still exists on the latest versions of the docs on `main` [here](https://pandas.pydata.org/docs/dev/)
### Location of the documentation
https://pandas.pydata.org/docs/development/debugging_extensions.html
### Documentation problem
Docker image provided on linked page includes out of date pip and meson which causes build errors without manual updates to the installed docker image.
Additionally, when brought up in the bi-weekly meeting, it was mentioned that the preferred method is to locally install cygdb so the documentation should be updated to reflect this knowledge.
### Suggested fix for documentation
Either update or remove provided docker image and add that local installations are preferred.
| [
"Docs"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"cc @WillAyd ",
"I think we get very little usage of the docker containers, so unless someone cares to maintain we can just remove that documentation",
"I definitely think if we don't use the docker we should streamline/link to instructions for locally instaling cygdb, I think the article that @WillAyd wrote al... |
3,097,701,978 | 61,510 | BUG: VSCode go to definition doesn't work with pandas.api.extensions.register_dataframe_accessor | closed | 2025-05-28T15:04:19 | 2025-06-06T20:40:04 | 2025-06-02T16:27:31 | https://github.com/pandas-dev/pandas/issues/61510 | true | null | null | arnoldjames98 | 4 | ### Pandas version checks
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [ ] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
Create a pandas.api.extensions.register_dataframe_accessor, then when using try to command + click in VSCode to jump to the definition. Either no definition is found or it jumps to a file like series.pyi
Create a pandas.api.extensions.register_dataframe_accessor, then when using try to command + click in VSCode to jump to the definition. Either no definition is found or it jumps to a file like series.pyi
`my_utils/my_accessor.py`
```python
import pandas as pd
@pd.api.extensions.register_dataframe_accessor("demo")
class DemoAccessor:
def __init__(self, pandas_obj):
self._obj = pandas_obj
def say_hello(self):
print("Hello from accessor!")
```
`main.py`
```python
import pandas as pd
import sys
import importlib
# Ensure the path to the module is in sys.path
sys.path.append("my_utils") # Adjust this path as needed
import my_accessor
importlib.reload(my_accessor)
# Create DataFrame and use accessor
df = pd.DataFrame({"A": [1, 2, 3]})
df.demo.say_hello() # This runs fine, but "jump to definition" doesn't work
```
### Issue Description
VSCode go to definition doesn't work with pandas.api.extensions.register_dataframe_accessor
### Expected Behavior
I can jump to the definition when command clicking and see the documentation in VSCode
### Installed Versions
<details>
INSTALLED VERSIONS
------------------
commit : d9cdd2ee5a58015ef6f4d15c7226110c9aab8140
python : 3.11.6.final.0
python-bits : 64
OS : Darwin
OS-release : 24.3.0
Version : Darwin Kernel Version 24.3.0: Thu Jan 2 20:24:16 PST 2025; root:xnu-11215.81.4~3/RELEASE_ARM64_T6000
machine : x86_64
processor : i386
byteorder : little
LC_ALL : None
LANG : en_US.UTF-8
LOCALE : en_US.UTF-8
pandas : 2.2.2
numpy : 2.0.2
pytz : 2025.2
dateutil : 2.9.0.post0
setuptools : 80.7.1
pip : 25.1.1
Cython : None
pytest : None
hypothesis : None
sphinx : None
blosc : None
feather : None
xlsxwriter : None
lxml.etree : None
html5lib : None
pymysql : None
psycopg2 : None
jinja2 : 3.1.6
IPython : 7.34.0
pandas_datareader : None
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : 4.13.4
bottleneck : None
dataframe-api-compat : None
fastparquet : None
fsspec : None
gcsfs : None
matplotlib : 3.10.0
numba : None
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
pyarrow : None
pyreadstat : None
python-calamine : None
pyxlsb : None
s3fs : None
scipy : 1.15.3
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : None
zstandard : None
tzdata : 2025.2
qtpy : None
pyqt5 : None
</details>
| [
"Bug",
"Closing Candidate",
"Accessors"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Thanks for the report. Further investigations are certainly welcome. If there is an adjustment pandas can make to improve this functionality and it isn't a heavy code change, then a PR is welcome as well. However I do not think we should leave issues for supporting various IDEs on the queue for the long term if th... |
3,097,395,248 | 61,509 | BUG: margin for pivot_table is incorrect with NA column/index values | closed | 2025-05-28T13:26:06 | 2025-07-13T12:10:07 | 2025-07-13T12:10:06 | https://github.com/pandas-dev/pandas/issues/61509 | true | null | null | vtraag | 2 | ### Pandas version checks
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [x] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
#%%
import pandas as pd
df = pd.DataFrame({"i": [1, 2, 3],
"g1": ["a", "b", "b"],
"g2": ["x", None, None],
})
df.pivot_table(index="g1",
columns="g2",
values="i",
aggfunc="count",
dropna=False, margins=True)
```
### Issue Description
The margins of a `pivot_table` are incorrect when the `index` or `columns` variables contains missing variables. In particular, for the variable with the missing value, the total is missing.
| g1 | x | nan | Total
|-----|-----|-----|------
| a | 1.0 | | 1
| b | | 2.0 | 2
| All | 1.0 | | 3
### Expected Behavior
Margins should also be included for columns/indices with missing values.
In particular, the table should look like this
| g1 | x | nan | Total
|-----|-----|-----|------
| a | 1.0 | | 1
| b | | 2.0 | 2
| All | 1.0 | 2.0 | 3
### Installed Versions
<details>
INSTALLED VERSIONS
------------------
commit : f538741432edf55c6b9fb5d0d496d2dd1d7c2457
python : 3.12.8.final.0
python-bits : 64
OS : Linux
OS-release : 6.11.0-26-generic
Version : #26~24.04.1-Ubuntu SMP PREEMPT_DYNAMIC Thu Apr 17 19:20:47 UTC 2
machine : x86_64
processor : x86_64
byteorder : little
LC_ALL : None
LANG : en_US.UTF-8
LOCALE : en_US.UTF-8
pandas : 2.2.0
numpy : 1.26.4
pytz : 2025.1
dateutil : 2.9.0.post0
setuptools : 75.8.0
pip : 25.0
Cython : None
pytest : None
hypothesis : None
sphinx : 8.1.3
blosc : None
feather : None
xlsxwriter : None
lxml.etree : None
html5lib : None
pymysql : None
psycopg2 : None
jinja2 : 3.1.5
IPython : 8.22.0
pandas_datareader : None
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : 4.13.0
bottleneck : None
dataframe-api-compat : None
fastparquet : None
fsspec : None
gcsfs : None
matplotlib : 3.9.4
numba : None
numexpr : None
odfpy : None
openpyxl : 3.1.4
pandas_gbq : None
pyarrow : 16.1.0
pyreadstat : None
python-calamine : None
pyxlsb : None
s3fs : None
scipy : 1.14.0
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : None
zstandard : 0.23.0
tzdata : 2025.1
qtpy : 2.4.2
pyqt5 : None
</details>
| [
"Bug",
"Missing-data",
"Reshaping"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Thanks for the report. Confirmed on main, further investigations and a PR to fix is welcome!\n\n Related: #53521 ",
"> Thanks for the report. Confirmed on main, further investigations and a PR to fix is welcome!\n\nHi @rhshadrach just raised a PR for this\n\nIt ensures `pivot_table(..., margins=True, dropna=Fals... |
3,096,897,913 | 61,508 | BUG: Fix inconsistent returned objects when applying groupby aggregations | closed | 2025-05-28T10:42:19 | 2025-06-25T12:02:17 | 2025-05-30T16:40:42 | https://github.com/pandas-dev/pandas/pull/61508 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61508 | https://github.com/pandas-dev/pandas/pull/61508 | arthurlw | 3 | - [x] closes #61503
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] ~Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.~
- [ ] ~Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.~
| [
"Bug",
"Apply"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Thanks @arthurlw, can you add a test that checks that the bug is indeed fixed. And a release note as well. Thank you! ",
"Thanks @arthurlw ",
"Thanks @arthurlw\r\n@mroeschke : is there any chance this could be added to an earlier release?\r\nI see there is a 2.3.1 milestone for example."
] |
3,095,297,276 | 61,507 | ENH: Implement to_iceberg | closed | 2025-05-27T21:35:59 | 2025-06-30T15:00:39 | 2025-06-09T18:18:25 | https://github.com/pandas-dev/pandas/pull/61507 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61507 | https://github.com/pandas-dev/pandas/pull/61507 | datapythonista | 1 | - [X] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [X] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [X] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [X] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| [
"IO Data"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Added the `append` parameter. I think it's a great addition, thanks for the feedback @IsaacWarren.\r\n\r\nI was thinking that for the parameters that receive PyIceberg objects, one option is to use a generic `**kwargs` like `to_parquet` does, that are sent to the engine (only PyIceberg so far). This wouldn't direc... |
3,095,115,520 | 61,506 | CLN Replace direct import of closing with qualified contextlib usage | closed | 2025-05-27T20:25:03 | 2025-05-28T00:22:44 | 2025-05-27T21:24:33 | https://github.com/pandas-dev/pandas/pull/61506 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61506 | https://github.com/pandas-dev/pandas/pull/61506 | mheguy | 1 | There were 4 uses of `contextlib.closing` and 2 of `closing`.
This PR converts the use of `closing` to `contextlib.closing` and removes the extra import statement. | [
"Testing"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Thanks @mheguy "
] |
3,094,496,975 | 61,505 | BUG: Mask changing value despite of no True return | closed | 2025-05-27T16:19:19 | 2025-05-27T16:42:12 | 2025-05-27T16:42:11 | https://github.com/pandas-dev/pandas/issues/61505 | true | null | null | frbelotto | 1 | ### Pandas version checks
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [x] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
import pandas as pd
data = {'cd_tip_fma_pgto': [pd.NA]}
df = pd.DataFrame(data, dtype='double[pyarrow]')
df['payment'] = np.nan
df['payment'] = df['payment'].mask(cond=(df['cd_tip_fma_pgto'] == 6), other='Livelo')
```
### Issue Description
Evaluating the mask method under a pd.NA, when using pyarrow, is changing the target value!

### Expected Behavior
The results of the compasison should not cause any change of the target value, as you see in this example :
```
data = {'cd_tip_fma_pgto': [pd.NA]}
df = pd.DataFrame(data)
df['payment'] = np.nan
df['payment'] = df['payment'].mask(cond=(df['cd_tip_fma_pgto'] == 6), other='Livelo')
```

### Installed Versions
<details>
INSTALLED VERSIONS
------------------
commit : 0691c5cf90477d3503834d983f69350f250a6ff7
python : 3.11.0
python-bits : 64
OS : Linux
OS-release : 3.10.0-1127.19.1.el7.x86_64
Version : #1 SMP Tue Aug 25 17:23:54 UTC 2020
machine : x86_64
processor : x86_64
byteorder : little
LC_ALL : None
LANG : None
LOCALE : en_US.UTF-8
pandas : 2.2.3
numpy : 1.24.2
pytz : 2022.7.1
dateutil : 2.8.2
pip : 25.1.1
Cython : 0.29.33
sphinx : None
IPython : 8.10.0
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : 4.11.2
blosc : None
bottleneck : None
dataframe-api-compat : None
fastparquet : None
fsspec : 2024.5.0
html5lib : None
hypothesis : None
gcsfs : None
jinja2 : 3.1.2
lxml.etree : 4.9.2
matplotlib : None
numba : None
numexpr : None
odfpy : None
openpyxl : 3.1.3
pandas_gbq : None
psycopg2 : None
pymysql : None
pyarrow : 17.0.0
pyreadstat : None
pytest : 7.4.3
python-calamine : None
pyxlsb : None
s3fs : None
scipy : 1.10.1
sqlalchemy : 2.0.30
tables : None
tabulate : None
xarray : None
xlrd : None
xlsxwriter : None
zstandard : None
tzdata : 2024.1
qtpy : N/A
pyqt5 : None
</details>
| [
"Bug",
"Needs Triage"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"I think that my issue is similar to the [Issue 60729](https://github.com/pandas-dev/pandas/issues/60729)"
] |
3,094,178,259 | 61,504 | Use `TypeAlias` in code where types are declared | closed | 2025-05-27T14:36:54 | 2025-07-01T10:36:35 | 2025-07-01T10:35:58 | https://github.com/pandas-dev/pandas/pull/61504 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61504 | https://github.com/pandas-dev/pandas/pull/61504 | Dr-Irv | 2 | After introducing `TypeAlias` in `_typing.py`, goal of this PR is to use it in any other source files that are creating types in this way. ~~Also, make all of those types private.~~
I believe that I caught them all via some searching, but may have missed a few. Couldn't find a rule that enforces use of `TypeAlias`. Note that in 3.12, the recommendation is to do a `type` declaration, which is probably why there isn't such a rule.
| [
"Clean",
"Typing"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"@rhshadrach are we good to merge this?",
"Thanks @Dr-Irv "
] |
3,093,692,853 | 61,503 | BUG: Inconsistent returned objects when applying groupby aggregations | closed | 2025-05-27T12:05:42 | 2025-05-30T16:40:43 | 2025-05-30T16:40:43 | https://github.com/pandas-dev/pandas/issues/61503 | true | null | null | sylvainmouretfico | 2 | ### Pandas version checks
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [ ] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
import pandas as pd
df = pd.DataFrame(columns=['Group', 'Data'])
df.groupby(['Group'], as_index=False)['Data'].agg('sum')
# Returns:
# Empty DataFrame
# Columns: [Group, Data]
# Index: []
def mysum(x):
return sum(x)
df.groupby(['Group'], as_index=False)['Data'].agg(mysum)
# Returns:
# Series([], Name: Data, dtype: object)
```
### Issue Description
When performing groupby aggregations on empty dataframe (with labeled columns), the outcome differs whether we use an internal aggregator or a custom function.
The difference of behaviour is problematic because when using internal aggregators (like 'sum'), the returned object is a dataframe with proper columns that we can select. However with custom functions, the returned object with an empty Series from which we cannot select columns.
This forces developers in this situation to check emptiness of the dataframe first.
This is not desirable from code conciseness point of view, but more importantly, we easily forget to check it and can therefore lead to errors.
### Expected Behavior
Both approaches to apply groupby aggregations should return the same object, preferably a dataframe form which we can select columns.
### Installed Versions
<details>
INSTALLED VERSIONS
------------------
commit : 0691c5cf90477d3503834d983f69350f250a6ff7
python : 3.12.10
python-bits : 64
OS : Windows
OS-release : 11
Version : 10.0.26100
machine : AMD64
processor : Intel64 Family 6 Model 170 Stepping 4, GenuineIntel
byteorder : little
LC_ALL : None
LANG : None
LOCALE : English_United Kingdom.1252
pandas : 2.2.3
numpy : 1.26.4
pytz : 2024.2
dateutil : 2.9.0.post0
pip : 25.1.1
Cython : None
sphinx : None
IPython : 9.2.0
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : None
blosc : None
bottleneck : None
dataframe-api-compat : None
fastparquet : None
fsspec : None
html5lib : None
hypothesis : None
gcsfs : None
jinja2 : 3.1.6
lxml.etree : None
matplotlib : None
numba : None
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
psycopg2 : None
pymysql : None
pyarrow : 18.1.0
pyreadstat : None
pytest : None
python-calamine : None
pyxlsb : None
s3fs : None
scipy : None
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : None
xlsxwriter : None
zstandard : 0.23.0
tzdata : 2024.2
qtpy : None
pyqt5 : None
</details>
| [
"Bug",
"Apply"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"I have explored the code a bit, and from what I can see, it would look like it would be required to update the following code in pandas/core/groupby/generic.py/SeriesGroupBy/aggregate:\n> if self.ngroups == 0:\n> # e.g. test_evaluate_with_empty_groups without any groups to\n> ... |
3,092,205,641 | 61,502 | BUG: Print alignement problem with some unicode characters | open | 2025-05-26T23:38:32 | 2025-05-29T00:08:58 | null | https://github.com/pandas-dev/pandas/issues/61502 | true | null | null | mhooreman | 4 | ### Pandas version checks
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [ ] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
import pandas as pd
print(pd.DataFrame({'a': 'FooXXXXX,BarXXXXX,BazXXXXX,💾,🤓🤘'.split(','), 'b': 1}))
```
### Issue Description
The example prints the following output:
```
a b
0 FooXXXXX 1
1 BarXXXXX 1
2 BazXXXXX 1
3 💾 1
4 🤓🤘 1
```
It seems that some unicode characters are shifting the position to the right.
I have tried with different ranges (number of used bytes), and I can't find where the issue comes from.
### Expected Behavior
```
a b
0 FooXXXXX 1
1 BarXXXXX 1
2 BazXXXXX 1
3 💾 1
4 🤓🤘 1
```
(well, there's also an alignement problem on github, but the "1" shall be aligned)
### Installed Versions
```
INSTALLED VERSIONS
------------------
commit : 0691c5cf90477d3503834d983f69350f250a6ff7
python : 3.13.2
python-bits : 64
OS : Darwin
OS-release : 22.6.0
Version : Darwin Kernel Version 22.6.0: Thu Apr 24 20:25:14 PDT 2025; root:xnu-8796.141.3.712.2~1/RELEASE_X86_64
machine : x86_64
processor : i386
byteorder : little
LC_ALL : None
LANG : en_US.UTF-8
LOCALE : en_US.UTF-8
pandas : 2.2.3
numpy : 2.2.6
pytz : 2025.2
dateutil : 2.9.0.post0
pip : None
Cython : None
sphinx : None
IPython : None
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : None
blosc : None
bottleneck : None
dataframe-api-compat : None
fastparquet : None
fsspec : None
html5lib : None
hypothesis : None
gcsfs : None
jinja2 : None
lxml.etree : None
matplotlib : None
numba : None
numexpr : None
odfpy : None
openpyxl : 3.1.5
pandas_gbq : None
psycopg2 : None
pymysql : None
pyarrow : None
pyreadstat : None
pytest : None
python-calamine : None
pyxlsb : None
s3fs : None
scipy : None
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : None
xlsxwriter : None
zstandard : None
tzdata : 2025.2
qtpy : None
pyqt5 : None
``` | [
"Bug",
"Unicode",
"Needs Discussion"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"The misalignment seems to be happening because pandas calculates column widths using `len()` or `east_asian_width()`, which doesn’t really work well for emojis or other wide Unicode characters, they take up more space visually than you'd expect.\n\nI tried patching this locally by using [`wcwidth.wcswidth()`](http... |
3,092,028,044 | 61,501 | DOC: Fixes dangling parenthesis in `.rst` files | closed | 2025-05-26T20:42:44 | 2025-05-27T15:23:45 | 2025-05-27T15:23:03 | https://github.com/pandas-dev/pandas/pull/61501 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61501 | https://github.com/pandas-dev/pandas/pull/61501 | mattpopovich | 0 | - [ ] ~closes #xxxx (Replace xxxx with the GitHub issue number)~
- [ ] ~[Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature~
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] ~Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.~
- [ ] ~Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.~
I initially found a missing closing parenthesis in the documentation, then I decided to write a script to find others. This MR fixes them to the best of my ability.
| [
"Docs"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] |
3,092,014,708 | 61,500 | DOC: use Hashable instead of label | closed | 2025-05-26T20:31:45 | 2025-05-27T17:35:07 | 2025-05-27T15:59:46 | https://github.com/pandas-dev/pandas/pull/61500 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61500 | https://github.com/pandas-dev/pandas/pull/61500 | cmp0xff | 1 | In https://github.com/pandas-dev/pandas/pull/61455#discussion_r2096069007:
> we should actually be using `Hashable` everywhere as since it's an actual Python type unlike `label`.
- <del>[ ] closes #xxxx (Replace xxxx with the GitHub issue number)</del>
- <del>[ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- <del>[ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.</del>
- <del>[ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.</del>
| [
"Docs"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Thanks @cmp0xff "
] |
3,091,911,733 | 61,499 | ENH: Support Plugin Accessors Via Entry Points | open | 2025-05-26T19:12:20 | 2025-08-20T09:42:09 | null | https://github.com/pandas-dev/pandas/pull/61499 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61499 | https://github.com/pandas-dev/pandas/pull/61499 | PedroM4rques | 18 | TLDR: Allows external libraries to register accessors for pandas objects (DataFrame, Series, Index) using the 'pandas.<pd_objs>.accessor' entry point group. This enables plugins to be automatically used without explicit import.
I'm working on this PR collaboratively with @afonso-antunes .
- [X] closes #29076
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [x] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [x] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
# Proposal
We propose implementing an entrypoint system similar to Vaex (#29076) to allow easy access to the functionalities of any installed plugin without requiring explicit imports. The idea is to make all installed packages available for use, only being "imported" when they are needed in the program, in a seamless manner.
## Current Behavior
Currently, each plugin must be explicitly imported:
```python
import pandas as pd
import vaex.graphql # required to enable .graphql (.graphql is compatible with pd.DataFrames)
df = pd.DataFrame(...)
df.graphql.query(...) # only works after the import
```
## Proposed Behavior
With our feature implemented, the code would be simplified to:
```python
import pandas as pd
df = pd.DataFrame(...)
df.graphql.query(...) # works directly if the plugin is installed via pip
```
| [
"API Design"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Most of the errors are from:\r\n```from importlib_metadata import entry_points```",
"I'm personally happy to add this, but it's kind of a big change in terms of the code users can write. @pandas-dev/pandas-core, thoughts here?\r\n\r\nIf this moves forward, you'll want to fix the CI and add documentation for this... |
3,091,789,032 | 61,498 | ENH: Add `force_suffixes` boolean argument to `pd.merge` | open | 2025-05-26T17:59:35 | 2025-08-02T00:09:10 | null | https://github.com/pandas-dev/pandas/pull/61498 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61498 | https://github.com/pandas-dev/pandas/pull/61498 | kopytjuk | 13 | ## Motivation
Often, when working with wide (i.e. multiple columns) dataframes in exploratory, merging them leads to an even wider dataframe. Currently, the `suffixes` mechanism is only applied on *equally named* columns from both dataframes.
However, often developers alter the column names beforehand, or use solutions similar to the one suggested [here](https://github.com/pandas-dev/pandas/issues/17834#issuecomment-1242794050).
## Changes
This PR adds a `force_suffixes` boolean argument to `pd.merge` which applies the suffixes on all columns, no matter if they equally named or not.
The goal is to have the following:
```python
df1 = pd.DataFrame({
'ID': [1, 2, 3],
'Value': ['A', 'B', 'C']
})
df2 = pd.DataFrame({
'ID': [2, 3, 4],
'Value': ['D', 'E', 'F']
})
merged_df = pd.merge(df1, df2, on='ID', how="inner", suffixes=('_left', '_right'), force_suffixes=True)
# Goal:
expected = DataFrame([[2, 2, "B", "D"], [3, 3, "C", "E"]],
columns=["ID_left", "Value_left", "ID_right", "Value_right"])
```
- [x] addresses #17834
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| [
"Enhancement",
"Reshaping",
"Stale"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Hey @mroeschke, can you please take a look at my if the direction is right for you (i.e. you are OK with an additional argument) before I will fix the failing tests, linting errors and adjust the documentation. Ty in advance!",
"`merge` already has a quite complex signature, and what you are trying to solve here... |
3,091,724,576 | 61,497 | DOC: Typo in shared_docs | closed | 2025-05-26T17:15:34 | 2025-05-27T17:11:12 | 2025-05-27T16:00:16 | https://github.com/pandas-dev/pandas/pull/61497 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61497 | https://github.com/pandas-dev/pandas/pull/61497 | wjandrea | 1 | "regexs" → "regexes" | [
"Docs"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Thanks @wjandrea "
] |
3,091,373,898 | 61,496 | BUG: Passing string[pyarrow] to the dtype parameter of e.g. csv_read() does produce a string type Series | open | 2025-05-26T14:21:07 | 2025-06-04T01:18:19 | null | https://github.com/pandas-dev/pandas/issues/61496 | true | null | null | ClauPet | 3 | ### Pandas version checks
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [x] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
import pandas as pd
df = pd.read_csv("test.csv", dtype_backend = "pyarrow", dtype={"col2":"string[pyarrow]"})
df
col1 col2
0 abc 1
1 dfg 2
df.dtypes
col1 string[pyarrow]
col2 string[pyarrow]
dtype: object
# Series of col1 shows string[pyarrow] dtype
df["col1"]
0 abc
1 dfg
Name: col1, dtype: string[pyarrow]
# Seris of col2 does NOT show string[pyarrow], but string dtype
df["col2"]
0 1
1 2
Name: col2, dtype: string
# Using ArrowDtype instead of string alias with the dtype parameter of read_csv() correctly shows string[pyarrow] as the dtype of the Series consisting of col2
df1 = pd.read_csv("test.csv", dtype_backend = "pyarrow", dtype={"col2": pd.ArrowDtype(pyarrow.string())})
df1["col2"]
0 1
1 2
Name: col2, dtype: string[pyarrow]
```
### Issue Description
When reading a CSV with dtype_backend="pyarrow" and specifying a column as "string[pyarrow]" via the dtype parameter, the resulting Series displays as dtype: string instead of the expected string[pyarrow], even though df.dtypes shows string[pyarrow]. This inconsistency only occurs when using the string alias — using pd.ArrowDtype(pyarrow.string()) correctly preserves and displays the Arrow-backed string[pyarrow] dtype in the Series.
### Expected Behavior
df1["col2"]
0 1
1 2
Name: col2, dtype: string[pyarrow]
### Installed Versions
INSTALLED VERSIONS
------------------
commit : 0691c5cf90477d3503834d983f69350f250a6ff7
python : 3.11.10
python-bits : 64
OS : Linux
OS-release : 5.15.167.4-microsoft-standard-WSL2
Version : #1 SMP Tue Nov 5 00:21:55 UTC 2024
machine : x86_64
processor :
byteorder : little
LC_ALL : None
LANG : C.UTF-8
LOCALE : en_US.UTF-8
pandas : 2.2.3
numpy : 2.2.6
pytz : 2025.2
dateutil : 2.9.0.post0
pip : 25.0.1
Cython : None
sphinx : None
IPython : 9.2.0
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : None
blosc : None
bottleneck : None
dataframe-api-compat : None
fastparquet : None
fsspec : None
html5lib : None
hypothesis : None
gcsfs : None
jinja2 : None
lxml.etree : None
matplotlib : None
numba : None
numexpr : None
odfpy : None
openpyxl : 3.1.5
pandas_gbq : None
psycopg2 : None
pymysql : None
pyarrow : 20.0.0
pyreadstat : None
pytest : None
python-calamine : None
pyxlsb : None
s3fs : None
scipy : None
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : None
xlsxwriter : None
zstandard : None
tzdata : 2025.2
qtpy : None
pyqt5 : None
| [
"Dtype Conversions",
"IO CSV",
"Strings",
"Needs Discussion",
"Arrow"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"take",
"Looking at this again, I am sure this is not a bug. Sorry for causing confusion. What I got confused about boils down this example:\n\n```\nser = pd.Series([\"a\", \"b\"], dtype = \"string[pyarrow]\")\n\nser.dtypes\n# string[pyarrow]\n\nser\n# 0 a\n# 1 b\n# dtype: string\n```\n\nI had expected the ... |
3,090,615,206 | 61,495 | DOC: Fix sparse and dense array memory usage comparison. | closed | 2025-05-26T09:48:10 | 2025-05-26T17:03:31 | 2025-05-26T17:03:31 | https://github.com/pandas-dev/pandas/pull/61495 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61495 | https://github.com/pandas-dev/pandas/pull/61495 | JMRaczynski | 2 | This MR fixes misleading part of sparse user guide, which suggests that dataframes consume far less memory than they really do, due to wrong units calculation.
Also changed usage of `str.format()` to more modern f-string syntax in part of guide which was mentioned above. | [
"Docs"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"/preview",
"Website preview of this PR available at: https://pandas.pydata.org/preview/pandas-dev/pandas/61495/"
] |
3,089,517,701 | 61,494 | DOC: kwargs naming in pd.Series.interpolate | closed | 2025-05-25T19:20:56 | 2025-05-25T19:23:36 | 2025-05-25T19:23:35 | https://github.com/pandas-dev/pandas/issues/61494 | true | null | null | loicdiridollou | 1 | ### Pandas version checks
- [x] I have checked that the issue still exists on the latest versions of the docs on `main` [here](https://pandas.pydata.org/docs/dev/)
### Location of the documentation
https://pandas.pydata.org/docs/dev/reference/api/pandas.DataFrame.interpolate.html
### Documentation problem
Display of the `**kwargs` is looking like `''**kwargs''`
### Suggested fix for documentation
Remove the backquotes. | [
"Docs",
"Needs Triage"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Sorry looked at the wrong tag and not `main`. "
] |
3,089,252,154 | 61,493 | ENH: Supporting a `mapper` function as the 1st argument in `DataFrame.set_axis` | closed | 2025-05-25T12:02:15 | 2025-05-27T06:37:45 | 2025-05-27T06:37:44 | https://github.com/pandas-dev/pandas/issues/61493 | true | null | null | aallahyar | 3 | ### Feature Type
- [x] Adding new functionality to pandas
- [ ] Changing existing functionality in pandas
- [ ] Removing existing functionality in pandas
### Problem Description
At the moment, `DataFrame.set_axis()` only accepts `labels`:
```python
df = (
pd.DataFrame({'A': range(3), 'B': range(10, 13)})
.set_axis(['a', 'b'], axis=1)
)
print(df)
# a b
# 0 0 10
# 1 1 11
# 2 2 12
```
Which makes it difficult (or more precisely verbose, using workarounds) to use during method chaining (where available columns could be dynamic and unknown at the begining of the chain).
### Feature Description
I suggest to allow `.set_axis` method to accept a "mapper" (either a `function`, `dict` or `series`) that could be used to convert an axis to another preferred axis.
The proposed enhancement could get inspiration from how `.rename_axis` works. For example, `.set_axis` could support receiving a function to apply on the current axis of the `DataFrame` (either its `index` or `columns`, depending on the `axis` argument) and set the axis to the labels that are returned by the function (see below for an example).
Example:
```python
df = (
pd.DataFrame({'A': range(3), 'B': range(10, 13)})
.set_axis(lambda df: 'col' + df.columns, axis=1)
# or an alternative signature to support
.set_axis({'A': 'colA', 'B': 'colB'}, axis='columns')
)
# colA colB
# 0 0 10
# 1 1 11
# 2 2 12
```
### Alternative Solutions
There is of course a workaround for this but, it is slightly verbose to use it during method chaining:
```python
df = (
pd.DataFrame({'A': range(3), 'B': range(10, 13)})
.pipe(lambda df: df.set_axis('col' + df.columns, axis=1))
)
print(df)
# colA colB
# 0 0 10
# 1 1 11
# 2 2 12
```
### Additional Context
I think `.set_axis` in general needs a bit of API consistency update.
For example, in `DataFrame.rename_axis` arguments can be provided in two ways:
```python
df.rename_axis(index=index_mapper, columns=columns_mapper)
df.rename_axis(mapper, axis='index')
```
But, `.set_axis` does not support such calling signatures. I propose to additionally support `index=` and `columns=` calling arguments to clarify the intent and increase readability. | [
"Enhancement",
"Needs Triage",
"Closing Candidate"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Hi @pandas-dev, I’d like to work on this enhancement to support a mapper in DataFrame.set_axis. Could you please assign #61493 to me?",
"@aallahyar Thanks for the request. This functionality already exists in [`df.rename`](https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.rename.html) if I understand... |
3,088,895,131 | 61,492 | DOC: Fix incorrect reST markups in What's new | closed | 2025-05-24T23:38:01 | 2025-05-26T23:46:43 | 2025-05-26T16:53:01 | https://github.com/pandas-dev/pandas/pull/61492 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61492 | https://github.com/pandas-dev/pandas/pull/61492 | koyuki7w | 4 | Fixed some markups so as to render HTML correctly. | [
"Docs"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"/preview",
"Website preview of this PR available at: https://pandas.pydata.org/preview/pandas-dev/pandas/61492/",
"@datapythonista Thanks, I checked the document now renders correctly.",
"Thanks for the fixes @koyuki7w "
] |
3,088,592,217 | 61,491 | BUG: If you add _metadata to a custom subclass of Series, the sequence name is lost when indexing | open | 2025-05-24T16:39:14 | 2025-08-14T02:02:14 | null | https://github.com/pandas-dev/pandas/issues/61491 | true | null | null | vitalizzare | 1 | ### Pandas version checks
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [x] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
import pandas as pd
class MySeq(pd.Series):
_metadata = ['property']
@property
def _constructor(self):
return MySeq
seq = MySeq([*'abc'], name='data')
assert seq.name == 'data'
assert seq[1:2].name == 'data'
assert seq[[1, 2]].name is None
assert seq.drop_duplicates().name is None
```
### Issue Description
<kbd>pandas 2.2.3</kbd>
Let’s consider two variants of defining a custom subtype of `pandas.Series`. In the first one, no [custom properties][1] are added, while in the second one, custom metadata is included:
```python
import pandas as pd
class MySeries(pd.Series):
@property
def _constructor(self):
return MySeries
seq = MySeries([*'abc'], name='data')
print(f'''Case without _metadata:
{isinstance(seq[0:1], MySeries) = }
{isinstance(seq[[0, 1]], MySeries) = }
{seq[0:1].name = }
{seq[[0, 1]].name = }
''')
class MySeries(pd.Series):
_metadata = ['property']
@property
def _constructor(self):
return MySeries
seq = MySeries([*'abc'], name='data')
seq.property = 'MyProperty'
print(f'''Case with defined _metadata:
{isinstance(seq[0:1], MySeries) = }
{isinstance(seq[[0, 1]], MySeries) = }
{seq[0:1].name = }
{seq[[0, 1]].name = }
{getattr(seq[0:1], 'property', 'NA') = }
{getattr(seq[[0, 1]], 'property', 'NA') = }
''')
```
The output of the code above will be:
```none
Case without _metadata:
isinstance(seq[0:1], MySeries) = True
isinstance(seq[[0, 1]], MySeries) = True
seq[0:1].name = 'data'
seq[[0, 1]].name = 'data'
Case with defined _metadata:
isinstance(seq[0:1], MySeries) = True
isinstance(seq[[0, 1]], MySeries) = True
seq[0:1].name = 'data'
seq[[0, 1]].name = None <<< Problematic result of indexing
getattr(seq[0:1], 'property', 'NA') = 'MyProperty'
getattr(seq[[0, 1]], 'property', 'NA') = 'MyProperty'
```
So, if `_metadata` is defined, the sequence name is preserved when slicing, but **lost when indexing with a list**, whereas without `_metadata` the name is preserved in both cases.
As a workaround we can add `'name'` to `_metadata`:
```python
class MySeries(pd.Series):
_metadata = ['property', 'name']
@property
def _constructor(self):
return MySeries
seq = MySeries([*'abc'], name='data')
assert seq[0:1].name == 'data'
assert seq[[0, 1]].name == 'data'
```
However, I'm not sure if there's no deferred issues caused by treating `name` as a metadata attribute.
The problem arose when applying PyJanitor methods to user-defined DataFrames with `_metadata`. Specifically, `drop_duplicates` was applied to a separate column, followed by an attempt to access its `name` in order to combine the result into a new DataFrame.
[1]: https://pandas.pydata.org/pandas-docs/stable/development/extending.html#define-original-properties
### Expected Behavior
```python
import pandas as pd
class MySeq(pd.Series):
_metadata = ['property']
@property
def _constructor(self):
return MySeq
seq = MySeq([*'abc'], name='data')
assert seq[[1, 2]].name == 'data'
assert seq.drop_duplicates().name == 'data'
```
### Installed Versions
<details>
INSTALLED VERSIONS
------------------
commit : cfe54bd5da48095f4c599a58a1ce8ccc0906b668
python : 3.13.2
python-bits : 64
OS : Linux
OS-release : 4.15.0-213-generic
Version : #224-Ubuntu SMP Mon Jun 19 13:30:12 UTC 2023
machine : x86_64
processor : x86_64
byteorder : little
LC_ALL : None
LANG : en_US.UTF-8
LOCALE : en_US.UTF-8
pandas : 3.0.0.dev0+2124.gcfe54bd5da
numpy : 2.3.0.dev0+git20250304.6611d55
dateutil : 2.9.0.post0
pip : 24.3.1
Cython : None
sphinx : None
IPython : None
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : None
bottleneck : None
fastparquet : None
fsspec : None
html5lib : None
hypothesis : None
gcsfs : None
jinja2 : None
lxml.etree : None
matplotlib : None
numba : None
numexpr : None
odfpy : None
openpyxl : None
psycopg2 : None
pymysql : None
pyarrow : None
pyiceberg : None
pyreadstat : None
pytest : None
python-calamine : None
pytz : None
pyxlsb : None
s3fs : None
scipy : None
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : None
xlsxwriter : None
zstandard : None
tzdata : 2025.2
qtpy : None
pyqt5 : None
</details>
| [
"Bug",
"metadata",
"Needs Triage",
"Subclassing"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"I've found that a `Series` object has `_metadata = ['_name']` by default. This means that when manually defining `_metadata` in a custom `Series` subclass, we need to explicitly add `'_name'` to it as well. I couldn't find this information in the documentation. Maybe it should be mentioned here: https://pandas.pyd... |
3,088,468,341 | 61,490 | Fix GH-61477: Prevent spurious sort warning in concat with unorderable MultiIndex | closed | 2025-05-24T13:36:24 | 2025-06-02T16:53:43 | 2025-06-02T16:53:43 | https://github.com/pandas-dev/pandas/pull/61490 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61490 | https://github.com/pandas-dev/pandas/pull/61490 | Neer-Pathak | 1 | # Fix GH-61477: Stop Spurious Warning When `concat(..., sort=False)` on Mixed-Type `MultiIndex`
## Overview
When you do something like:
```python
pd.concat([df1, df2], axis=1, sort=False)
```
and your two DataFrames have MultiIndex columns that mix tuples and integers, pandas used to try to sort those labels under the hood. Since Python cannot compare tuple < int, you’d see:
```
RuntimeWarning: '<' not supported between instances of 'int' and 'tuple'; sort order is undefined for incomparable objects with multilevel columns
```
This warning is confusing, and worse, you explicitly asked not to sort (sort=False), so pandas should never even try.
# What Changed
1. Short-circuit Index.union when sort=False
Before: Even with sort=False, pandas would call its normal union logic, which might attempt to compare labels.
Now: If you pass sort=False, we simply concatenate the two index arrays with:
```
np.concatenate([self._values, other._values])
```
and wrap that in a new Index. No comparisons, no warnings, and your original order is preserved.
2. Guard sorting in MultiIndex._union
Before: pandas would call ```result.sort_values()``` when sort wasn’t False, and if labels were unorderable it would warn you.
Now: We only call ```sort_values()``` when sort is truthy (True), and we wrap it in a ```try/except``` TypeError that silently falls back to the existing order on failure. No warning is emitted.
3. New Regression Test
A pytest test reproduces the original bug scenario, concatenating two small DataFrames with mixed-type MultiIndex columns and ```sort=False.``` The test asserts:
No RuntimeWarning is raised
Column order is exactly “first DataFrame’s columns, then second DataFrame’s columns”
Respects sort=False: If a user explicitly disables sorting, pandas won’t try.
Silences spurious warnings: No more confusing messages about comparing tuples to ints.
Keeps existing behavior for sort=True: You still get a sort or a real error if the labels truly can’t be ordered.
For testing we can try
```
import numpy as np, pandas as pd
left = pd.DataFrame(
np.random.rand(5, 2),
columns=pd.MultiIndex.from_tuples([("A", 1), ("B", (2, 3))])
)
right = pd.DataFrame(
np.random.rand(5, 1),
columns=pd.MultiIndex.from_tuples([("C", 4)])
)
# No warning, order preserved:
out = pd.concat([left, right], axis=1, sort=False)
print(out.columns) # [("A", 1), ("B", (2, 3)), ("C", 4)]
# Sorting still works if requested:
sorted_out = pd.concat([left, right], axis=1, sort=True)
print(sorted_out.columns) # sorted order or TypeError if impossible
```
| [] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Thanks for the pull request, but it appears to have gone stale. If interested in continuing, please merge in the main branch, address any review comments and/or failing tests, and we can reopen."
] |
3,088,304,844 | 61,489 | BUG: Raise on coercion of ambiguous datetime strings to datetime64 | closed | 2025-05-24T09:28:07 | 2025-05-25T06:46:11 | 2025-05-25T06:43:54 | https://github.com/pandas-dev/pandas/pull/61489 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61489 | https://github.com/pandas-dev/pandas/pull/61489 | iabhi4 | 2 | This PR addresses a bug where object-dtype arrays containing ambiguous datetime strings (e.g., `"12/01/2020"`, `"13/01/2020"`) were being silently coerced to `datetime64[ns]`, potentially resulting in inconsistent or unintended parsing.
- Introduced stricter input validation during coercion to detect and raise a `ValueError` when an ambiguous format is inferred but cannot be consistently parsed.
- Added tests to cover both direct assignment and constructor-based coercion scenarios.
- [x] closes 61353
- [x] Tests added and passed
- [ ] All code checks passed
- [x] Added an entry in the latest `doc/source/whatsnew/v3.0.0.rst` file | [] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"The `pre-commit` is currently failing due to the `unwanted-patterns-private-import-across-module` hook\r\n\r\nThis PR imports `_guess_datetime_format_for_array` from `pandas.core.tools.datetimes`, which is a private function.\r\nThere’s currently no public alternative that offers this functionality. specifically, ... |
3,088,219,656 | 61,488 | Backport PR #60739 on branch 2.3.x (ENH: pandas.api.interchange.from_dataframe now uses the Arrow PyCapsule Interface if available, only falling back to the Dataframe Interchange Protocol if that fails) | closed | 2025-05-24T07:37:43 | 2025-05-27T16:14:11 | 2025-05-27T16:10:36 | https://github.com/pandas-dev/pandas/pull/61488 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61488 | https://github.com/pandas-dev/pandas/pull/61488 | MarcoGorelli | 2 | OK to backport https://github.com/pandas-dev/pandas/pull/60739/files?
One thing to check is:
- #60739 has the release note in 3.0
- but if it gets backported, then the change will have happened in 2.3
So, not sure how to do this. Is it OK to backport, and then change the whatsnew note on `main`? | [
"Interchange"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Thanks @MarcoGorelli ",
"thanks! yup, will do soon"
] |
3,087,737,413 | 61,487 | Implemented NumbaExecutionEngine | open | 2025-05-23T23:51:56 | 2025-07-31T03:31:07 | null | https://github.com/pandas-dev/pandas/pull/61487 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61487 | https://github.com/pandas-dev/pandas/pull/61487 | arthurlw | 12 | - [ ] ~closes #xxxx (Replace xxxx with the GitHub issue number)~
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [x] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [x] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
Implements NumbaExecutionEngine for #61458
Docstring is currently a placeholder.
| [
"Apply"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Hi @datapythonista, I was seeing a CI error because Numba isn’t installed in the test environment, so I tried to guard against it using a try and catch method, but it seems not to work. Do you have any advice on how to move forward?",
"This is what we use for optional dependencies in tests: https://github.com/pa... |
3,087,635,011 | 61,486 | pandas logo license question | open | 2025-05-23T22:08:56 | 2025-05-26T18:01:58 | null | https://github.com/pandas-dev/pandas/issues/61486 | true | null | null | JoOkuma | 1 | Hi pandas developers,
I'm the process of the publishing a [paper](https://www.biorxiv.org/content/10.1101/2024.09.02.610652v1.full.pdf) which displays the pandas logo (Fig 1)
The journal requires a permission from the copyright owners or a permissive license to use the logo.
Is there anyone who could help us with that?
Is there a license for the logo?
Thanks in advance, | [] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"I'm not a member of the project or a licensing expert, but FWIW, the project license is [BSD 3-Clause](https://github.com/pandas-dev/pandas/blob/main/LICENSE) and the logo is included in the project at [web/pandas/static/img](https://github.com/pandas-dev/pandas/tree/main/web/pandas/static/img) under `pandas.svg` ... |
3,087,447,487 | 61,485 | BUG: zfill with pyarrow string | open | 2025-05-23T20:18:19 | 2025-06-03T00:39:19 | null | https://github.com/pandas-dev/pandas/issues/61485 | true | null | null | williambdean | 3 | ### Pandas version checks
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [ ] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
import pandas as pd
import pyarrow as pa
pd.Series(["A", "AB", "ABC"], dtype=pd.ArrowDtype(pa.string())).str.zfill(3)
```
### Issue Description
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/will/GitHub/various/narwhals/.venv/lib/python3.12/site-packages/pandas/core/strings/accessor.py", line 137, in wrapper
return func(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/will/GitHub/various/narwhals/.venv/lib/python3.12/site-packages/pandas/core/strings/accessor.py", line 1818, in zfill
result = self._data.array._str_map(f)
^^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: 'ArrowExtensionArray' object has no attribute '_str_map'. Did you mean: '_str_pad'?
```
### Expected Behavior
Same as other string dtypes
### Installed Versions
<details>
INSTALLED VERSIONS
------------------
commit : 0691c5cf90477d3503834d983f69350f250a6ff7
python : 3.12.9
python-bits : 64
OS : Darwin
OS-release : 23.6.0
Version : Darwin Kernel Version 23.6.0: Fri Jul 5 17:56:39 PDT 2024; root:xnu-10063.141.1~2/RELEASE_ARM64_T8122
machine : arm64
processor : arm
byteorder : little
LC_ALL : None
LANG : en_US.UTF-8
LOCALE : en_US.UTF-8
pandas : 2.2.3
numpy : 2.2.6
pytz : 2025.2
dateutil : 2.9.0.post0
pip : None
Cython : None
sphinx : None
IPython : 9.2.0
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : None
blosc : None
bottleneck : None
dataframe-api-compat : None
fastparquet : None
fsspec : None
html5lib : None
hypothesis : 6.131.23
gcsfs : None
jinja2 : 3.1.6
lxml.etree : None
matplotlib : None
numba : None
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
psycopg2 : None
pymysql : None
pyarrow : 20.0.0
pyreadstat : None
pytest : 8.3.5
python-calamine : None
pyxlsb : None
s3fs : None
scipy : None
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : None
xlsxwriter : None
zstandard : None
tzdata : 2025.2
qtpy : None
pyqt5 : None
</details>
| [
"Bug",
"Arrow"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Confirmed on main. `ArrowExtensionArray` doesn’t implement `_str_map`, so `.str.zfill` fails. PRs to add this support are welcome.\n\nThanks for raising this!",
"take",
"Just raised an upstream feature request in Arrow to support `utf8_zfill`, which would allow us to use a native compute kernel instead of fall... |
3,087,240,792 | 61,484 | BUG: Raise clear error for duplicate id_vars in melt (GH61475) | closed | 2025-05-23T18:46:55 | 2025-05-30T16:49:22 | 2025-05-30T16:43:55 | https://github.com/pandas-dev/pandas/pull/61484 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61484 | https://github.com/pandas-dev/pandas/pull/61484 | ZanirP | 1 | - [x] closes #61475
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [x] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| [
"Reshaping",
"Error Reporting"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Thanks @ZanirP "
] |
3,086,991,137 | 61,483 | BUG: date_range behaviour is inconsistent when using inclusive=right | open | 2025-05-23T17:03:09 | 2025-05-28T10:20:51 | null | https://github.com/pandas-dev/pandas/issues/61483 | true | null | null | sebastian-east | 1 | ### Pandas version checks
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [x] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
import pandas as pd
# demonstration of (default) behaviour with inclusive = both
print(pd.date_range("2025-01-01 00:00:00", "2025-01-01 00:10:00", freq="10min", inclusive="both")) # 10min gap
print(pd.date_range("2025-01-01 00:05:00", "2025-01-01 00:10:00", freq="10min", inclusive="both")) # 5min gap
print(pd.date_range("2025-01-01 00:10:00", "2025-01-01 00:10:00", freq="10min", inclusive="both")) # 0min gap
# demonstration of behaviour with inclusive = right
print(pd.date_range("2025-01-01 00:00:00", "2025-01-01 00:10:00", freq="10min", inclusive="right")) # 10min gap
print(pd.date_range("2025-01-01 00:05:00", "2025-01-01 00:10:00", freq="10min", inclusive="right")) # 5min gap
print(pd.date_range("2025-01-01 00:10:00", "2025-01-01 00:10:00", freq="10min", inclusive="right")) # 0min gap
```
### Issue Description
The behaviour of `date_range` is inconsistent when using the argument `inclusive="right"`.
When the time delta between the start and end point of a `date_range` with `inclusive="right"` is shorter than the `freq` argument then an empty `DatetimeIndex` is returned. However, when the start and end points are the same (i.e. the time delta is zero), then a `DatetimeIndex` including the 'rightmost' argument is returned (which is, obviously, also the start time). This appears to be inconsistent: if an empty `DatetimeIndex` is returned when the time delta between the arguments is shorter than the specified frequency, then it should also return an empty `DatetimeIndex` when the time delta is zero (or, equally, it should return the end time in both cases).
### Expected Behavior
For consistency in the above example, either
```
print(pd.date_range("2025-01-01 00:05:00", "2025-01-01 00:10:00", freq="10min", inclusive="right")) # 5min gap
```
should return `DatetimeIndex(['2025-01-01 00:10:00'], dtype='datetime64[ns]', freq='10min')`, or
```
print(pd.date_range("2025-01-01 00:10:00", "2025-01-01 00:10:00", freq="10min", inclusive="right")) # 0min gap
```
should return `DatetimeIndex([], dtype='datetime64[ns]', freq='10min')`.
### Installed Versions
<details>
INSTALLED VERSIONS
------------------
commit : e2bd8e60f000b46bb632d9ed78939264c55629ef
python : 3.13.2
python-bits : 64
OS : Linux
OS-release : 5.14.0-162.6.1.el9_1.x86_64
Version : #1 SMP PREEMPT_DYNAMIC Fri Nov 18 02:06:38 UTC 2022
machine : x86_64
processor : x86_64
byteorder : little
LC_ALL : None
LANG : en_GB.UTF-8
LOCALE : en_GB.UTF-8
pandas : 3.0.0.dev0+2123.ge2bd8e60f0
numpy : 2.3.0.dev0+git20250520.2a7a0d0
dateutil : 2.9.0.post0
pip : 25.1
Cython : None
sphinx : None
IPython : None
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : None
bottleneck : None
fastparquet : None
fsspec : None
html5lib : None
hypothesis : None
gcsfs : None
jinja2 : None
lxml.etree : None
matplotlib : None
numba : None
numexpr : None
odfpy : None
openpyxl : None
psycopg2 : None
pymysql : None
pyarrow : None
pyiceberg : None
pyreadstat : None
pytest : None
python-calamine : None
pytz : None
pyxlsb : None
s3fs : None
scipy : None
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : None
xlsxwriter : None
zstandard : None
tzdata : 2025.2
qtpy : None
pyqt5 : None
</details>
| [
"Bug",
"Datetime",
"Needs Discussion"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"For your expected behavior regarding\n\n```python\nprint(pd.date_range(\"2025-01-01 00:05:00\", \"2025-01-01 00:10:00\", freq=\"10min\", inclusive=\"right\")) # 5min gap\n```\n\nIt should not return `DatetimeIndex(['2025-01-01 00:10:00'], dtype='datetime64[ns]', freq='10min')` because `00:10:00` is not generated (... |
3,086,721,221 | 61,482 | BUG: Raise error when filtering HDF5 with tz-aware index (GH#61479) | closed | 2025-05-23T15:13:54 | 2025-06-16T19:44:05 | 2025-06-16T19:44:04 | https://github.com/pandas-dev/pandas/pull/61482 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61482 | https://github.com/pandas-dev/pandas/pull/61482 | AnushaUKumar | 2 | This PR addresses a bug where applying a .select(where=...) query on an HDF5 store with a timezone-aware DatetimeIndex raised a confusing or incorrect error. Since tz-aware filtering isn't currently supported, we now raise a clear ValueError when such filtering is attempted.
What’s included:
-Adds a specific check in select() to detect and error on tz-aware index queries
-Includes a minimal test case reproducing the issue in test_timezone_bug.py
Closes: #61479 | [
"IO HDF5",
"Stale"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"@AnushaUKumar - the linked issue is not about timezones, did you perhaps get the number wrong?",
"Thanks for the pull request, but it appears to have gone stale. If interested in continuing, please merge in the main branch, address any review comments and/or failing tests, and we can reopen."
] |
3,084,708,724 | 61,481 | BUG: Fixed issue where rolling.kurt() calculations would be effected by values outside of scope | open | 2025-05-22T22:45:46 | 2025-07-26T00:09:01 | null | https://github.com/pandas-dev/pandas/pull/61481 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61481 | https://github.com/pandas-dev/pandas/pull/61481 | eicchen | 2 | - [x] closes #61416
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [x] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [x] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
Might have found an unrelated issue when calculating kurtosis for numbers >1e6, but I'll have to look into it more and open an issue if that is the case.
| [
"Bug",
"Window",
"Stale"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"@mroeschke my PR hasn't been reviewed for a while now, just checking if it will be reviewed or if I should just close it.\r\n\r\n(sorry if it's a bother, I know you guys probably all have a lot on your plates and I didn't know who to ping)",
"This pull request is stale because it has been open for thirty days wi... |
3,083,980,753 | 61,480 | DOC: Fix formatting in indexing.rst | closed | 2025-05-22T16:45:47 | 2025-05-23T20:22:58 | 2025-05-22T17:40:26 | https://github.com/pandas-dev/pandas/pull/61480 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61480 | https://github.com/pandas-dev/pandas/pull/61480 | wjandrea | 1 | Don't use code formatting for non-code. To mention a term, use italics. | [
"Docs"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Thanks @wjandrea "
] |
3,083,228,094 | 61,479 | BUG: read_hdf() doesn't handle datetime64[ms] properly | closed | 2025-05-22T12:34:28 | 2025-05-23T17:26:53 | 2025-05-23T17:26:53 | https://github.com/pandas-dev/pandas/issues/61479 | true | null | null | a-ma72 | 2 | ### Pandas version checks
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [x] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
import pandas as pd
df = pd.DataFrame(
{
"dates": [
pd.to_datetime("2025-05-21 18:44:22"),
pd.to_datetime("2025-05-21 19:12:42"),
],
"tags": [
12,
45,
]
},
)
df["dates"] = df["dates"].astype("datetime64[ms]")
print(df.dtypes)
print(df)
df.to_hdf("dates.h5", key="dates")
df2 = pd.read_hdf("dates.h5", key="dates")
print(df2)
df2["corrected"] = df2["dates"].astype("i8").astype("datetime64[ms]")
print(df2)
```
### Issue Description
Dataframes containing dtype of "datetime64[ms]" seem to be correctly written in hdf format, but the readback is misinterpreted as “datetime64[ns]”.
The output of the code above is:
```dates datetime64[ms]
tags int64
dtype: object
dates tags
0 2025-05-21 18:44:22 12
1 2025-05-21 19:12:42 45
dates tags
0 1970-01-01 00:29:07.853062 12
1 1970-01-01 00:29:07.854762 45
dates tags corrected
0 1970-01-01 00:29:07.853062 12 2025-05-21 18:44:22
1 1970-01-01 00:29:07.854762 45 2025-05-21 19:12:42```
### Expected Behavior
Correct dates when read back.
### Installed Versions
```INSTALLED VERSIONS
------------------
commit : d9cdd2ee5a58015ef6f4d15c7226110c9aab8140
python : 3.11.9.final.0
python-bits : 64
OS : Windows
OS-release : 10
Version : 10.0.22631
machine : AMD64
processor : Intel64 Family 6 Model 183 Stepping 1, GenuineIntel
byteorder : little
LC_ALL : None
LANG : en_US.UTF-8
LOCALE : de_DE.cp1252
pandas : 2.2.2
numpy : 1.26.4
pytz : 2024.1
dateutil : 2.9.0.post0
setuptools : 80.8.0
pip : 25.1.1
Cython : 3.1.1
pytest : 8.3.5
hypothesis : 6.131.20
sphinx : 8.2.3
blosc : None
feather : None
xlsxwriter : 3.2.3
lxml.etree : 5.4.0
html5lib : 1.1
pymysql : None
psycopg2 : None
jinja2 : 3.1.6
IPython : 8.36.0
pandas_datareader : None
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : 4.13.4
bottleneck : 1.5.0
dataframe-api-compat : None
fastparquet : None
fsspec : 2025.5.0
gcsfs : None
matplotlib : 3.8.4
numba : 0.61.2
numexpr : 2.10.2
odfpy : None
openpyxl : 3.1.2
pandas_gbq : None
pyarrow : 17.0.0
pyreadstat : None
python-calamine : None
pyxlsb : None
s3fs : None
scipy : 1.15.3
sqlalchemy : 2.0.30
tables : 3.10.2
tabulate : None
xarray : 2025.4.0
xlrd : 2.0.1
zstandard : 0.23.0
tzdata : 2025.2
qtpy : 2.4.3
pyqt5 : None```
| [
"Bug",
"IO HDF5",
"Needs Info",
"Closing Candidate"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Thanks for the report. On main, I'm seeing that the correct output.\n\n```python\ndf = pd.DataFrame(\n {\n \"dates\": [\n pd.to_datetime(\"2025-05-21 18:44:22\"),\n pd.to_datetime(\"2025-05-21 19:12:42\"),\n ],\n \"tags\": [\n 12,\n 45,\n ]... |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.