id int64 | number int64 | title string | state string | created_at timestamp[s] | updated_at timestamp[s] | closed_at timestamp[s] | html_url string | is_pull_request bool | pull_request_url string | pull_request_html_url string | user_login string | comments_count int64 | body string | labels list | reactions_plus1 int64 | reactions_minus1 int64 | reactions_laugh int64 | reactions_hooray int64 | reactions_confused int64 | reactions_heart int64 | reactions_rocket int64 | reactions_eyes int64 | comments list |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
3,083,137,230 | 61,478 | BUG: to_latex does not escape % with percent formatter | closed | 2025-05-22T12:02:17 | 2025-05-30T18:24:36 | 2025-05-30T18:24:32 | https://github.com/pandas-dev/pandas/issues/61478 | true | null | null | stertingen | 6 | ### Pandas version checks
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [ ] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
print(pd.DataFrame({"x": [0.1, 0.5, 1.0]}).to_latex(formatters={"x": "{:.0%}"}, escape=True))
print(pd.DataFrame({"x": [0.1, 0.5, 1.0]}).style.format("{:.0%}", escape="latex").to_latex())
```
### Issue Description
When using `"{:.0%}"` to format floating point values as percentages, the percent signs are not correctly escaped even if explicitly specified. This applies to `DataFrame.to_latex` and `Styler.to_latex`.
Output:
```latex
\begin{tabular}{lr}
\toprule
& x \\
\midrule
0 & 10% \\
1 & 50% \\
2 & 100% \\
\bottomrule
\end{tabular}
\begin{tabular}{lr}
& x \\
0 & 10% \\
1 & 50% \\
2 & 100% \\
\end{tabular}
```
### Expected Behavior
```latex
\begin{tabular}{lr}
\toprule
& x \\
\midrule
0 & 10\% \\
1 & 50\% \\
2 & 100\% \\
\bottomrule
\end{tabular}
\begin{tabular}{lr}
& x \\
0 & 10\% \\
1 & 50\% \\
2 & 100\% \\
\end{tabular}
```
### Installed Versions
<details>
INSTALLED VERSIONS
------------------
commit : 0691c5cf90477d3503834d983f69350f250a6ff7
python : 3.12.10
python-bits : 64
OS : Windows
OS-release : 11
Version : 10.0.26100
machine : AMD64
processor : Intel64 Family 6 Model 165 Stepping 2, GenuineIntel
byteorder : little
LC_ALL : None
LANG : None
LOCALE : de_DE.cp1252
pandas : 2.2.3
numpy : 2.0.2
pytz : 2024.2
dateutil : 2.9.0.post0
pip : 25.0.1
Cython : 3.0.11
sphinx : None
IPython : 8.30.0
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : 4.12.3
blosc : None
bottleneck : 1.4.2
dataframe-api-compat : None
fastparquet : None
fsspec : 2024.10.0
html5lib : None
hypothesis : None
gcsfs : None
jinja2 : 3.1.4
lxml.etree : None
matplotlib : 3.10.0
numba : 0.60.0
numexpr : 2.10.2
odfpy : None
openpyxl : 3.1.5
pandas_gbq : None
psycopg2 : 2.9.10
pymysql : None
pyarrow : 18.1.0
pyreadstat : None
pytest : None
python-calamine : None
pyxlsb : None
s3fs : None
scipy : 1.14.1
sqlalchemy : 2.0.36
tables : None
tabulate : 0.9.0
xarray : None
xlrd : None
xlsxwriter : None
zstandard : 0.23.0
tzdata : 2024.2
qtpy : None
pyqt5 : None
</details>
| [
"Bug",
"IO LaTeX"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Boiled it down to the implementation of `_maybe_wrap_formatter`.\nIt applies the escape function before the formatting function (adding the % sign).\nTechnically, the formatter, the decimals, the thousands and the na_rep could introduce symbols to the string which should probably be escaped.\nNot sure why the functions are applied in that precise order; there might be a good reason I overlooked, though.",
"Thanks for the report. I believe the intention is that if the user is introducing their own symbols, they can decide to escape them if they desire.\n\ncc @attack68 ",
"This is not a bug. It is as documented. \"Escaping is done before formatter\".\nThe rationale is that there is not one direction that suits all purposes. But, under the current design all cases can be covered. \nThe solution to the above is to apply an adjusted formatter: `f\"{x * 100: .0f}\\%\"`, which is relatively simple for a user to do.\nFor cases where applying escaping first is needed, there would be no easy, or no solution at all, if the design is implemented the other way round.\n\nSee the documentation example: Using a formatter with HTML escape and Na rep for a case which requires escaping first.",
"Yes, it is indeed documented for `Styler.format`. So I guess this is intended behavior.\nThe adjusted formatter needs to be `lambda x: f\"{x* 100: .0f}\\\\%\"` in my case, but @attack68 nudged me in the right direction, thanks!\n\nFor `DataFrame.to_latex`, this behavior is not documented. In fact, it only states:\n> By default, the value will be read from the pandas config module and set to _True_ if the option `styler.format.escape` is _“latex”_. When set to False prevents from escaping latex special characters in column names.\n\nIt does not explicitly say what happens if it is set to _True_ and only mentions its impact on the column names.\nHowever, this setting does control the escaping of cell contents:\n```python\nprint(pd.DataFrame({\"x\": [\"%\"]}).to_latex(escape=False))\n```\n```latex\n\\begin{tabular}{ll}\n\\toprule\n & x \\\\\n\\midrule\n0 & % \\\\\n\\bottomrule\n\\end{tabular}\n```\nvs.\n```python\nprint(pd.DataFrame({\"x\": [\"%\"]}).to_latex(escape=True))\n```\n```latex\n\\begin{tabular}{ll}\n\\toprule\n & x \\\\\n\\midrule\n0 & \\% \\\\\n\\bottomrule\n\\end{tabular}\n```\nSo, for `DataFrame.to_latex` it is not entirely clear what the `escape` parameter is supposed to do and what not; I would suggest refining the docs here.\nHowever, its behavior is consistent with `Styler.format`, so technically we could be fine with referring to the `Styler` implementation as well.",
"DataFrame to_latex was re-engineered for version 2.0.0 to use the Styler mechanics (this was to reduce maintenance burden and avoid dual implementations of the same feature, where one was much more out of date). Really it shouldn't exist at all, and all its arguments were monkey-patched to suit Styler.\nThe docs do state this and advise users to use Styler instead",
"Thanks @attack68 - nothing more to do here I think; closing."
] |
3,082,527,781 | 61,477 | BUG: Pandas concat raises RuntimeWarning: '<' not supported between instances of 'int' and 'tuple', sort order is undefined for incomparable objects with multilevel columns | open | 2025-05-22T08:33:16 | 2025-05-22T08:33:16 | null | https://github.com/pandas-dev/pandas/issues/61477 | true | null | null | ButteryPaws | 0 | ### Pandas version checks
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [x] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
import pandas as pd
import numpy as np
left_data = np.random.rand(1000, 5)
left_index = pd.date_range(start='20240101 09:00:00', periods=1000, freq='min')
left_columns = pd.MultiIndex.from_tuples([
('price', 'A'), # Tuple[str, str]
('price', 'B'), # Tuple[str, str]
('price', 'C'), # Tuple[str, str]
('diff', ('high', 'low')), # Tuple[str, Tuple[str, str]]
('diff', ('open', 'close')) # Tuple[str, Tuple[str, str]]
])
left_df = pd.DataFrame(data=left_data, index=left_index, columns=left_columns)
right_data = np.random.rand(990, 3)
right_index = pd.date_range(start='20240101 12:00:00', periods=990, freq='min')
right_columns = pd.MultiIndex.from_tuples([
('X', 1),
('X', 2),
('X', 3),
])
right_df = pd.DataFrame(data=right_data, columns=right_columns, index=right_index)
df1 = pd.concat([left_df, right_df], axis=1, sort=False)
print(df1)
# df2 = pd.merge(left_df, right_df, left_index=True, right_index=True)
# print(df2)
# df3 = left_df.join(right_df)
# print(df3)
```
### Issue Description
Let's say we have two dataframes, `left_df` and `right_df`, both of which have multilevel columns, i.e. columns of type `pandas.MultiIndex`. The columns are of different types; in particular, one of the dataframes has a column which is of type `Tuple[str, Tuple[str, str]]` and the other has a column of type `Tuple[str, int]`. My goal is to concatenate these two dataframes along the columns. To do this, I tried using `pd.concat` with the `axis=1` argument and experimented with some other arguments as well.
### Further experiments
1. To avoid this warning, I tried using `pd.DataFrame.join` and `pd.merge` as well, but they all raise the same warning (given in the code).
2. I tried the same thing with a `RangeIndex` instead of a `DatetimeIndex` and I get the same warning. It doesn't seem to depend on the indices of the dataframes.
3. I tried the same thing with single-level columns, or using `left_columns=pd.Index([('A', 'A'), 'B', 'high', 'low'])` and `right_columns=pd.Index([('X', 'X'), 1, 'a'])` to see if this happens only when we have multilevel columns. **I do not get any warnings in this case, which confirms that this is an issue only with `MultiIndex` columns and not regular columns**.
4. I tried various combinations of types in the `MultiIndex` columns and this issue doesn't arise when columns are just of type `Tuple[str, str]` and `Tuple[str, int]` or `Tuple[str, Tuple[str, str]]`. It might happen with other pairs of data types as well, I haven't checked all combinations.
### Expected Behavior
It is hard to understand why any kind of sorting operation is performed in this case. There should not be any instance of comparison. Some common low-level function called by all 3 of `concat`, `merge` and `join` is comparing the column values. It is expected that no warning be thrown in this case. To reproduce the output one gets (handling the correct type of join) without a warning, the following code can be run:
```python
new_index = pd.MultiIndex.from_tuples(np.concatenate([left_df.keys(), right_df.keys()]))
df = pd.concat([left_df, right_df], axis=1, sort=False, ignore_index=True).reindex(new_index, axis=1)
```
[Source](https://stackoverflow.com/a/79632096/22823405)
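Another way to sidestep the warning (my assumption, not an official fix): normalize the column labels so each level holds mutually comparable values, e.g. by stringifying the second level before concatenating:

```python
import numpy as np
import pandas as pd

left = pd.DataFrame(
    np.ones((2, 2)),
    columns=pd.MultiIndex.from_tuples(
        [("price", "A"), ("diff", ("high", "low"))]
    ),
)
right = pd.DataFrame(
    np.ones((2, 1)),
    columns=pd.MultiIndex.from_tuples([("X", 1)]),
)

# Make every second-level value a string so all labels compare cleanly;
# the incomparable int-vs-tuple sort is what triggers the warning.
for frame in (left, right):
    frame.columns = pd.MultiIndex.from_tuples(
        [(a, str(b)) for a, b in frame.columns]
    )

result = pd.concat([left, right], axis=1)
```

This trades the original level values for their string forms, so it is only appropriate when the labels are not used programmatically afterwards.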
### Installed Versions
<details>
INSTALLED VERSIONS
------------------
commit : 0691c5cf90477d3503834d983f69350f250a6ff7
python : 3.12.3
python-bits : 64
OS : Linux
OS-release : 6.8.0-1026-aws
Version : #28-Ubuntu SMP Mon Mar 24 19:32:19 UTC 2025
machine : x86_64
processor : x86_64
byteorder : little
LC_ALL : None
LANG : C.UTF-8
LOCALE : C.UTF-8
pandas : 2.2.3
numpy : 2.1.2
pytz : 2024.2
dateutil : 2.9.0.post0
pip : 25.1.1
Cython : None
sphinx : None
IPython : 8.29.0
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : None
blosc : None
bottleneck : None
dataframe-api-compat : None
fastparquet : None
fsspec : None
html5lib : None
hypothesis : None
gcsfs : None
jinja2 : None
lxml.etree : None
matplotlib : 3.9.2
numba : 0.61.0
numexpr : None
odfpy : None
openpyxl : 3.1.5
pandas_gbq : None
psycopg2 : None
pymysql : 1.4.6
pyarrow : 19.0.0
pyreadstat : None
pytest : None
python-calamine : None
pyxlsb : None
s3fs : None
scipy : 1.14.1
sqlalchemy : 2.0.36
tables : None
tabulate : 0.9.0
xarray : None
xlrd : None
xlsxwriter : None
zstandard : None
tzdata : 2024.2
qtpy : None
pyqt5 : None
</details>
| [
"Bug",
"Needs Triage"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] |
3,082,247,813 | 61,476 | DOC: Improve DateOffset docstring with constructor and examples (#52431) | closed | 2025-05-22T06:45:30 | 2025-06-30T18:18:47 | 2025-06-30T18:18:47 | https://github.com/pandas-dev/pandas/pull/61476 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61476 | https://github.com/pandas-dev/pandas/pull/61476 | ericzhihuang | 3 | Updates the docstring for the `DateOffset` class in `offsets.pyx`.
Changes:
- Added a constructor signature and parameter documentation
This addresses part of issue #52431 (documentation improvements for offset constructors). | [
"Docs",
"Frequency",
"Stale"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Thanks for the contribution @ericzhihuang, but I don't understand the changes here. This docstring already has examples and see also sections; you are adding new ones in the middle of the description.\r\n\r\nCan you have a look, make sure that the docstring makes sense as a whole, and also check that the CI is happy. I guess it's red because of the errors mentioned.",
"This pull request is stale because it has been open for thirty days with no activity. Please [update](https://pandas.pydata.org/pandas-docs/stable/development/contributing.html#updating-your-pull-request) and respond to this comment if you're still interested in working on this.",
"Thanks for the pull request, but it appears to have gone stale. If interested in continuing, please merge in the main branch, address any review comments and/or failing tests, and we can reopen."
] |
3,081,509,955 | 61,475 | BUG: More Indicative Error when pd.melt with duplicate columns | closed | 2025-05-21T22:01:23 | 2025-05-30T16:43:56 | 2025-05-30T16:43:56 | https://github.com/pandas-dev/pandas/issues/61475 | true | null | null | James-Lee-998 | 2 | ### Pandas version checks
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [x] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
import pandas as pd
x = pd.DataFrame([[1, 2, 3], [3, 4, 5]], columns=["A", "A", "B"])
pd.melt(x, id_vars=["A"], value_vars=["B"])
```
### Issue Description
Error raised when melting on DataFrame with duplicate column headers
```
import pandas as pd
x = pd.DataFrame([[1, 2, 3], [3, 4, 5]], columns=["A", "A", "B"])
pd.melt(x, id_vars=["A"], value_vars=["B"])
```
Above raises:
```
File "pandas\core\reshape\melt.py", line 110, in melt
if not isinstance(id_data.dtype, np.dtype):
^^^^^^^^^^^^^
File "pandas\core\generic.py", line 6286, in __getattr__
return object.__getattribute__(self, name)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: 'DataFrame' object has no attribute 'dtype'. Did you mean: 'dtypes'?
```
In `pandas\core\reshape\melt.py:108`, the `pop` method causes `id_data` to be assigned a DataFrame object rather than a Series, causing the above AttributeError:
```
for col in id_vars:
id_data = frame.pop(col)
if not isinstance(id_data.dtype, np.dtype):
```
When a DataFrame has duplicate column headers, melting raises an AttributeError. Should the error instead hint that the duplicated column headers are the problem? Possibly by implementing a check before `.dtype` is called?
### Expected Behavior
Error raised should be indicative of duplicated column headers.
```
for col in id_vars:
id_data = frame.pop(col)
if isinstance(id_data, pd.DataFrame):
raise Exception(f"{col} is a duplicate column header")
if not isinstance(id_data.dtype, np.dtype):
```
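Until a more descriptive error lands in pandas, a caller-side guard (just a sketch) makes the failure obvious before `melt` is reached:

```python
import pandas as pd

x = pd.DataFrame([[1, 2, 3], [3, 4, 5]], columns=["A", "A", "B"])

# Detect duplicate labels up front instead of letting melt fail with
# the confusing AttributeError on `.dtype`.
dupes = x.columns[x.columns.duplicated()].unique().tolist()
print(dupes)  # the labels that would break melt
```

If `dupes` is non-empty, the caller can raise its own error or de-duplicate the columns before melting.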
### Installed Versions
<details>
INSTALLED VERSIONS
------------------
commit : 0691c5cf90477d3503834d983f69350f250a6ff7
python : 3.11.4
python-bits : 64
OS : Windows
OS-release : 10
Version : 10.0.19045
machine : AMD64
processor : Intel64 Family 6 Model 154 Stepping 3, GenuineIntel
byteorder : little
LC_ALL : None
LANG : None
LOCALE : English_United Kingdom.1252
pandas : 2.2.3
numpy : 1.26.4
pytz : 2024.2
dateutil : 2.9.0.post0
pip : 25.1.1
Cython : None
sphinx : None
IPython : None
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : 4.13.4
blosc : None
bottleneck : None
dataframe-api-compat : None
fastparquet : None
fsspec : 2024.9.0
html5lib : None
hypothesis : None
gcsfs : None
jinja2 : 3.1.6
lxml.etree : 5.3.1
matplotlib : None
numba : None
numexpr : None
odfpy : None
openpyxl : 3.1.5
pandas_gbq : None
psycopg2 : None
pymysql : None
pyarrow : 17.0.0
pyreadstat : None
pytest : 8.3.3
python-calamine : None
pyxlsb : None
s3fs : None
scipy : None
sqlalchemy : None
tables : None
tabulate : 0.9.0
xarray : None
xlrd : 2.0.1
xlsxwriter : 3.2.0
zstandard : None
tzdata : 2024.2
qtpy : None
pyqt5 : None
</details>
| [
"Bug",
"Reshaping",
"Error Reporting",
"good first issue"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Thanks for the report, confirmed on main. Agreed we should check and raise a more understandable error when there are duplicate columns. PR to fix is welcome!",
"take"
] |
3,081,300,105 | 61,474 | BUG: dataframe.to_csv calls default numpy to_string function, resulting in | closed | 2025-05-21T20:09:11 | 2025-05-23T17:28:19 | 2025-05-23T17:28:18 | https://github.com/pandas-dev/pandas/issues/61474 | true | null | null | mtscott321 | 3 | ### Pandas version checks
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [x] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
import pandas as pd
import numpy as np
array = np.ones((1000, 1000))
df = pd.DataFrame({"Data": [array]})
df.to_csv('test.csv', index=False)
```
### Issue Description
The output file will then have the following in the cell:
```
array([[1., 1., 1., ..., 1., 1., 1.],
       [1., 1., 1., ..., 1., 1., 1.],
       [1., 1., 1., ..., 1., 1., 1.],
       ...,
       [1., 1., 1., ..., 1., 1., 1.],
       [1., 1., 1., ..., 1., 1., 1.],
       [1., 1., 1., ..., 1., 1., 1.]])
```
### Expected Behavior
I would expect the code to run like the following code does:
```python
import pandas as pd
import numpy as np

array = np.ones((1000, 1000))
df = pd.DataFrame({"Data": [array]})
with np.printoptions(linewidth=1000000, threshold=np.inf):
    df.to_csv('corrected_test.csv', index=False)
```
Where the df.to_csv function does not call the default numpy print statement.
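A caller-side sketch of the same idea that avoids touching numpy's global print options (the list serialization here is my own choice, not a pandas convention): convert the nested arrays to plain lists before writing, so the cell no longer depends on numpy's truncating repr.

```python
import io

import numpy as np
import pandas as pd

array = np.ones((50, 50))
df = pd.DataFrame({"Data": [array]})

# Serialize nested arrays explicitly; Python list reprs never truncate,
# so the "..." placeholders from numpy's repr cannot appear.
df["Data"] = df["Data"].map(lambda a: a.tolist())

buf = io.StringIO()
df.to_csv(buf, index=False)
csv_text = buf.getvalue()
```

Note that storing large arrays inside single CSV cells is discouraged either way; a dedicated format (e.g. `.npy` or Parquet) round-trips such data far more reliably.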
### Installed Versions
<details>
INSTALLED VERSIONS
------------------
commit : d9cdd2ee5a58015ef6f4d15c7226110c9aab8140
python : 3.9.20.final.0
python-bits : 64
OS : Windows
OS-release : 10
Version : 10.0.26100
machine : AMD64
processor : Intel64 Family 6 Model 165 Stepping 5, GenuineIntel
byteorder : little
LC_ALL : None
LANG : None
LOCALE : English_United States.1252
pandas : 2.2.2
numpy : 1.26.4
pytz : 2024.1
dateutil : 2.9.0.post0
setuptools : 75.1.0
pip : 24.2
Cython : None
pytest : None
hypothesis : None
sphinx : None
blosc : None
feather : None
xlsxwriter : None
lxml.etree : None
html5lib : None
pymysql : None
psycopg2 : None
jinja2 : 3.1.4
IPython : 8.15.0
pandas_datareader : None
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : 4.12.3
bottleneck : 1.4.2
dataframe-api-compat : None
fastparquet : None
fsspec : None
gcsfs : None
matplotlib : 3.9.2
numba : None
numexpr : 2.10.1
odfpy : None
openpyxl : None
pandas_gbq : None
pyarrow : None
pyreadstat : None
python-calamine : None
pyxlsb : None
s3fs : None
scipy : 1.13.1
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : None
zstandard : None
tzdata : 2023.3
qtpy : 2.4.1
pyqt5 : None
</details>
| [
"Bug",
"IO CSV",
"Closing Candidate"
] | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Thanks for the report. I think you're asking for pandas to change the `__str__` behavior of a Python object. I'm negative on this, it is up to the user to control.",
"take",
"Agreed, especially since `repr`/IO writer support for nested values is not well supported generally and discouraged. Closing"
] |
3,081,103,616 | 61,473 | BUG: assert_frame_equal(check_dtype=False) fails when comparing two DFs containing pd.NA that only differ in dtype (object vs Int32) | open | 2025-05-21T18:35:56 | 2025-08-11T11:45:12 | null | https://github.com/pandas-dev/pandas/issues/61473 | true | null | null | michiel-de-muynck | 14 | ### Pandas version checks
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [ ] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
import pandas as pd
from pandas.testing import assert_frame_equal
df1 = pd.DataFrame(
{
"x": pd.Series([pd.NA], dtype="Int32"),
}
)
df2 = pd.DataFrame(
{
"x": pd.Series([pd.NA], dtype="object"),
}
)
assert_frame_equal(df1, df2, check_dtype=False) # fails, but should succeed
```
### Issue Description
Output of the above example:
```
AssertionError: DataFrame.iloc[:, 0] (column name="x") are different
DataFrame.iloc[:, 0] (column name="x") values are different (100.0 %)
[index]: [0]
[left]: [nan]
[right]: [<NA>]
```
When comparing DataFrames containing `pd.NA` using `check_dtype=False`, the test incorrectly fails despite the only difference being the dtype (Int32 vs object).
Note that the values in the dataframe really are the same:
```
print(type(df1["x"][0])) # prints <class 'pandas._libs.missing.NAType'>
print(type(df2["x"][0])) # prints <class 'pandas._libs.missing.NAType'>
```
Related issues:
- https://github.com/pandas-dev/pandas/issues/18463: Similar but "opposite": here the dataframes contain different values (nan vs None) which are incorrectly treated as equal. In this issue, the dataframes contain equal values which are incorrectly treated as different.
### Expected Behavior
The test should succeed, since the only difference is the dtypes, and `check_dtype=False`.
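One possible direction (a sketch of the idea, not pandas' internal comparison logic): cast both sides to object dtype before comparing, which keeps `pd.NA` intact on the masked side.

```python
import pandas as pd

s1 = pd.Series([pd.NA], dtype="Int32")
s2 = pd.Series([pd.NA], dtype="object")

# With an explicit object dtype, pd.NA survives the conversion on both
# sides, so the two missing values are the very same singleton.
left = s1.to_numpy(dtype="object")
right = s2.to_numpy(dtype="object")
print(left[0] is pd.NA, right[0] is pd.NA)
```

Doing the equivalent inside `assert_frame_equal` when `check_dtype=False` is what the maintainers discuss in the comments below.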
### Installed Versions
<details>
pandas : 2.2.3
numpy : 1.26.4
pytz : 2025.2
dateutil : 2.9.0.post0
pip : 24.0
Cython : None
sphinx : None
IPython : None
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : None
blosc : None
bottleneck : None
dataframe-api-compat : None
fastparquet : None
fsspec : None
html5lib : None
hypothesis : None
gcsfs : None
jinja2 : 3.1.6
lxml.etree : None
matplotlib : None
numba : None
numexpr : None
odfpy : None
openpyxl : 3.1.5
pandas_gbq : None
psycopg2 : None
pymysql : None
pyarrow : 19.0.1
pyreadstat : None
pytest : 8.3.5
python-calamine : None
pyxlsb : None
s3fs : None
scipy : None
sqlalchemy : 2.0.41
tables : None
tabulate : None
xarray : None
xlrd : None
xlsxwriter : None
zstandard : 0.23.0
tzdata : 2025.2
qtpy : None
pyqt5 : None
</details>
| [
"Bug",
"Testing",
"good first issue",
"ExtensionArray"
] | 2 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Thanks for the report, this would pass if when converting the EA to a NumPy array we cast to object dtype. I haven't looked to see if this might cause issues in other cases. Since this is aimed at tests, I'm wondering if changing to object dtype is okay here.\n\ncc @jbrockmendel @mroeschke for any thoughts.",
"> this would pass if when converting the EA to a NumPy array we cast to object dtype\n\nYah I'm pretty sure that the behavior of `df1['x'].to_numpy()` casting to a float dtype was a much-discussed intentional decision. Changing that would be a can of worms.\n\nI'm inclined to just discourage the use of a) check_dtype=False and b) using pd.NA in an object dtype column (note that `df1 == df2` raises)\n",
"@jbrockmendel - sorry, I wasn't clear. I meant just inside `assert_frame_equal` to use `.to_numpy(dtype=\"object\")` when `check_dtype=False` rather than just `.to_numpy()`. Agreed changing the behavior of `.to_numpy()` is off the table.",
"Gotcha, fine by me",
"how can i make contribution to solve this, can you please give advice to me? @iabhi4 @rhshadrach ",
"Hi @venturero \n\nI already raised a PR for this based on the above discussion. You can checkout other issues from the `Issues` tab and follow the [contribution guide](https://pandas.pydata.org/docs/development/contributing.html) to submit a clean fix for the issue you're tackling.",
"@srilasya02 May I kindly ask why you wrote 'take' here and what happened when you wrote it?",
"Hi! I'm a new contributor and would love to work on this issue. Could you please assign it to me? 😊\n\n",
"Can I try it? I'm new. I'd love to learn it.",
"take",
"Hey, is this issue still open? if yes can I contribute to it as well?\n",
"Hi everyone! I’m Arun and I’d love to contribute to this issue.\nI’m new here but eager to learn. Could anyone guide me on where to start or if you have suggestions for first steps? \nThank you! 🐼✨\n",
"> Hi everyone! I’m Arun and I’d love to contribute to this issue. I’m new here but eager to learn. Could anyone guide me on where to start or if you have suggestions for first steps? Thank you! 🐼✨\n\nFixed.",
"take"
] |
3,080,730,686 | 61,472 | Backport PR #61399: BUG: round on object columns no longer raises a TypeError | closed | 2025-05-21T15:56:04 | 2025-05-21T17:06:19 | 2025-05-21T17:06:16 | https://github.com/pandas-dev/pandas/pull/61472 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61472 | https://github.com/pandas-dev/pandas/pull/61472 | mroeschke | 0 | xref https://github.com/pandas-dev/pandas/pull/61399 | [
"Error Reporting"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] |
3,080,554,537 | 61,471 | DOC: Improve lookup documentation | closed | 2025-05-21T15:04:30 | 2025-06-02T16:57:16 | 2025-06-02T16:57:09 | https://github.com/pandas-dev/pandas/pull/61471 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61471 | https://github.com/pandas-dev/pandas/pull/61471 | stevenae | 2 | - [ ] closes #40140
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
Follows from #61185
Examples available at https://colab.research.google.com/drive/1MGWX6JVJL5yHyK7BeEBPQAW4tLM3TZL9#scrollTo=DjWfk4i1SiOY | [
"Docs",
"Indexing"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Addressed your concerns! If you have time for a review.\n\n(Quoting the review comments being addressed:)\n> Can you add `df = df.loc[:, sorted(set(col_labels))]` here.\n> Nit: NumPy\n> NumPy again.",
"Thanks @stevenae "
] |
3,079,915,886 | 61,470 | DOC: Restructure and expand UDF page | closed | 2025-05-21T11:34:50 | 2025-05-27T15:15:22 | 2025-05-27T15:15:22 | https://github.com/pandas-dev/pandas/pull/61470 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61470 | https://github.com/pandas-dev/pandas/pull/61470 | datapythonista | 3 | I changed the order in which the methods are presented,both in the table and in the sections, to be:
- map
- apply
- pipe
- filter
- agg
- transform
I find it easier to explain them in this order.
And I expanded the method sections with examples and a bit more of information.
I removed the most complex example in the intro, as I think the examples in the sections will make a better job now at explaining the most complex cases.
@arthurlw @rhshadrach do you mind having a look? | [
"Docs",
"Apply"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Looks good to me! I think the example under vectorized operations should be changed to fit with the Fahrenheit example, but that can be added in a follow-up PR.",
"Thanks @arthurlw, great feedback. I'll leave the example on the vectorized section for now, as it may make sense to also expand that section as we make progress with the IT engines. Feel free to update it now if you want, but I'm unsure at this point how to add the JIT engines to that section, and how to better present all the performance related topics. Maybe we can just add a section for it, but maybe we can find a way to present it so one topic expands on the previous, as I tried to do with the different methods. ",
"Merging here, as I want to add few more things to this page. Please let me know if any comment, happy to address feedback in a follow up PR."
] |
3,079,474,716 | 61,469 | BUG: pandas.pivot_table margins, dropna and observed parameters not producing expected result | closed | 2025-05-21T08:58:05 | 2025-06-21T14:38:16 | 2025-06-21T14:38:09 | https://github.com/pandas-dev/pandas/issues/61469 | true | null | null | hugotomasf | 3 | ### Pandas version checks
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [ ] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
import pandas as pd
data = {
'column_A_1': ['A', 'B', 'A', None, 'D', 'B', 'A'],
'column_A_2': ['G', 'F', 'J', 'J', 'J', 'F', 'G'],
'column_A_3': ['6602', '7059', '9805', '3080', '8625', '5741', '9685'],
'column_A_4': ['A', 'B', 'A', None, 'A', None, 'B'],
'column_A_4': ['X', None, 'Y', None, 'Z', 'X', 'Y'],
'column_B_1': ['1', '2', '3', '4', '5', '6', '7'],
'column_C_1': [0, 2, 5, 9, 8, 3, 7],
'column_C_2': [12, 75, None, 93, 89, 23, 97],
'column_C_3': [789, 102, 425, 895, None, 795, None],
'column_C_3': [15886, 49828, None, 9898, 8085, 9707, 8049]
}
df = pd.DataFrame(data)
pd.pivot_table(df, index=['column_A_1', 'column_A_2', 'column_A_3', 'column_A_4'], columns=['column_B_1'], values=['column_C_1', 'column_C_2', 'column_C_3'], aggfunc={'column_C_1': 'max', 'column_C_2': 'min', 'column_C_3': 'count'}, dropna=False, margins=False, observed=True)
```
### Issue Description
I have a huge dataset with similar structure to the example. I want to pivot the table grouping using the columns A as the index, the values of the columns B as the new columns and aggregate the values of the columns C. I want all columns B values to appear as columns, even if the entire column is NaN. This is because I want to coalesce values from multiple columns into one. Therefore, the parameter dropna should be equal to False. But the DataFrame I get has 336 rows with impossible combinations. For example, the first row A, F, 3080, X has the entire row filled with NaNs since this combination does not exist.

This is a problem because with a small dataset I wouldn't mind. But with a fairly large dataset, numpy returns an error because it has reached the maximum list size. While reading the documentation, I noticed the `margins` parameter:

I thought this parameter fixed this issue. Playing around with this parameter, it does not affect the result, it only adds a row. Here is a result of combining these two parameters.
dropna=False, margins=False (Too many rows)

dropna=True, margins=False (Missing Column B values)

dropna=False, margins=True (Same as dropna=False, margins=False?)

dropna=True, margins=True (Same as dropna=True, margins=False?)

I also noticed the `observed` parameter:

But it is deprecated, and the default value of True seems to be the value that I need. Forcing this parameter to True does not change the result.

### Expected Behavior
I expect that, with the parameter combination dropna=False, margins=False and observed=True, I would get all the rows with plausible combinations (as if I were grouping by) and all the columns with column B values and columns C values.
I don't know if this is a bug or if it is the intended way for the pivot table to work and this is an enhancement.
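The cartesian expansion described above can be reproduced with a much smaller frame (a sketch; the column names here are illustrative, not from the original dataset):

```python
import pandas as pd

df = pd.DataFrame({"a": ["x", "y"], "b": ["p", "q"], "v": [1, 2]})

# dropna=True keeps only the observed key combinations:
# ("x", "p") and ("y", "q") -> 2 rows.
observed = pd.pivot_table(df, index=["a", "b"], values="v", aggfunc="max")

# dropna=False expands the index to the full cartesian product of the
# level values: 2 * 2 = 4 rows, half of them entirely NaN.
full = pd.pivot_table(
    df, index=["a", "b"], values="v", aggfunc="max", dropna=False
)
```

On a large dataset that cartesian product is what exhausts memory, even though most of the extra rows are all-NaN.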
### Installed Versions
<details>
INSTALLED VERSIONS
------------------
commit : 0691c5cf90477d3503834d983f69350f250a6ff7
python : 3.10.6
python-bits : 64
OS : Linux
OS-release : 5.10.235-227.919.amzn2.x86_64
Version : #1 SMP Sat Apr 5 16:59:05 UTC 2025
machine : x86_64
processor : x86_64
byteorder : little
LC_ALL : None
LANG : None
LOCALE : en_US.UTF-8
pandas : 2.2.3
numpy : 1.26.4
pytz : 2024.1
dateutil : 2.9.0
pip : 24.0
Cython : None
sphinx : 7.2.6
IPython : 8.23.0
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : 4.12.3
blosc : None
bottleneck : None
dataframe-api-compat : None
fastparquet : None
fsspec : 2024.3.1
html5lib : None
hypothesis : None
gcsfs : None
jinja2 : 3.1.3
lxml.etree : 5.1.0
matplotlib : 3.8.4
numba : 0.59.1
numexpr : 2.9.0
odfpy : None
openpyxl : 3.1.2
pandas_gbq : None
psycopg2 : None
pymysql : None
pyarrow : 15.0.2
pyreadstat : None
pytest : 8.1.1
python-calamine : None
pyxlsb : None
s3fs : None
scipy : 1.13.0
sqlalchemy : 2.0.29
tables : None
tabulate : 0.9.0
xarray : None
xlrd : 2.0.1
xlsxwriter : None
zstandard : 0.22.0
tzdata : 2024.1
qtpy : 2.4.1
pyqt5 : None
</details>
| [
"Bug",
"Reshaping",
"Duplicate Report"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Thanks for the report! While the default value of `observed` is deprecated, that option is not planned for removal. It wasn't too clear to me if this was part of your concern.\n\n> I expect with the parameter's combination dropna=False, margins=False and observed=True to get all the rows with plausible combinations (like if I was grouping by) and all the columns with column B values and columns C values.\n\nRunning your example with these values, I get a result with columns involving \n\n    'column_B_1', 'column_C_1', 'column_C_2', 'column_C_3'\n\nand 336 rows. This appears to have all columns and rows that are expected, can you detail how this differs from your desired result?\n\nOne thing I'll mention is that you have duplicate keys in the `data` dictionary provided. I assume that wasn't intentional.",
"Thank you for your response.\n\nThe problem I have is that I don't want to remove the null values from the columns by which I group. That's why I have to keep the parameter dropna=False. The problem is that when this parameter is set to False, all combinations are returned, even if they are not possible. In the example shown, the NaN are taken into account for grouping, but non-existent combinations are added as the first one. There is no row with A in column_A_1 nor F in column_A_2, yet it appears (obviously with all null values). For a small dataset, this is not a problem, but for a massive dataset it is easy to get some memory or size error.\n\n\n\nThe solution I have found is to replace the null values of 'column_A_1', 'column_A_2', 'column_A_3', 'column_A_4', reset the index and replace again with null. It is a workaround, but it seems like unnecessary steps to me. I don't see the point of returning all combinations when 90% of the rows will have all values set to null.",
"I see - thanks. In the future, it would be appreciated to make the example minimal, e.g.\n\n```python\ndata = {\n    'column_A_1': ['A', 'B'],\n    'column_A_2': ['C', 'D'],\n    'column_B_1': ['1', '2'],\n    'column_C_1': [3, 4],\n}\ndf = pd.DataFrame(data)\n```\n\nThis is bullet 1 in https://github.com/pandas-dev/pandas/issues/53521; closing as a duplicate.\n"
] |
3,078,357,968 | 61,468 | ENH: Implement PDEP-17 | closed | 2025-05-20T22:22:11 | 2025-08-12T23:10:18 | 2025-08-12T23:10:10 | https://github.com/pandas-dev/pandas/pull/61468 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61468 | https://github.com/pandas-dev/pandas/pull/61468 | rhshadrach | 11 | - [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
Continuation of #58169
Still needs tests and some cleanup with the docs and decorator arguments, but I'd like to get a first look.
cc @Dr-Irv @Aloqeely | [
"Enhancement",
"Warnings"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Seems like the label `PDEP` can only be used for the PDEP themselves. Otherwise building the website fails, as it expects the PR to be a PDEP that needs to be rendered in the roadmap.",
"@Dr-Irv - Thanks for the review! I'm negative on the name `PandasWarning` for a class that has to do with deprecations. pandas emits many warnings, not all are about deprecations.",
"> @Dr-Irv - Thanks for the review! I'm negative on the name `PandasWarning` for a class that has to do with deprecations. pandas emits many warnings, not all are about deprecations.\r\n\r\nMaybe a different name would solve that? Maybe `PandasDeprecationOrFutureWarning` or something like that.\r\n",
"> Maybe a different name would solve that? Maybe `PandasDeprecationOrFutureWarning` or something like that.\r\n\r\nCurrently using `PandasChangeWarning`, which has my personal preference so far. Always open to other options or opinions!",
"> I think there should be separate docs about the deprecation policy and these classes, aside from what is in the whatsnew document.\r\n\r\nI've added some implementation details to PDEP-17, and more details to the whatsnew. If you'd like to see this documented somewhere else, can you detail where.",
"@bashtage - pandas_datareader is using the `deprecate_kwarg` decorator. I'm modifying it here, is this an issue?",
"@mroeschke - this should be ready.",
"> Can you think of a linting rule that could remind us to use these new warning classes instead of FutureWarning and DeprecationWarning?\r\n\r\nI think we can raise in `tm.assert_produces_warning` whenever a straight Future/Deprecation/PendingDeprecationWarning is passed. However we cannot do this at the moment as there are a bunch of FutureWarnings that need to be enforced for pandas 3.0.\r\n\r\nMy suggestion would be to move forward here, enforce pandas 3.0 deprecations, and revisit adding such logic to `tm.assert_produces_warning`. I don't doubt there will be adding warnings that need to be converted at that point, happy to take up that work.",
"I can only build the docs in this PR if I specify `--num-jobs=1` and cannot reproduce this locally.",
"@mroeschke - okay to merge?",
"Thanks @rhshadrach "
] |
3,078,337,779 | 61,467 | ENH: Support third-party execution engines in Series.map | closed | 2025-05-20T22:07:21 | 2025-05-27T21:37:34 | 2025-05-27T21:37:34 | https://github.com/pandas-dev/pandas/pull/61467 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61467 | https://github.com/pandas-dev/pandas/pull/61467 | datapythonista | 0 | - [X] xref #61125
- [X] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [X] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [X] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [X] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| [
"Apply"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] |
3,078,154,773 | 61,466 | BUG: Series.str.isdigit with pyarrow dtype doesn't honor unicode superscripts | open | 2025-05-20T20:25:32 | 2025-08-21T07:19:46 | null | https://github.com/pandas-dev/pandas/issues/61466 | true | null | null | GarrettWu | 10 | ### Pandas version checks
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [ ] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
import pandas as pd
s = pd.Series(['23', '³', '⅕', ''], dtype=pd.StringDtype(storage="pyarrow"))
s.str.isdigit()
# 0
# 0     True
# 1    False
# 2    False
# 3    False
# dtype: boolean
```
### Issue Description
Series.str.isdigit() with pyarrow string dtype doesn't honor unicode superscript/subscript. Which diverges with the public doc. https://pandas.pydata.org/docs/reference/api/pandas.Series.str.isdigit.html#pandas.Series.str.isdigit
The bug only happens in Pyarrow string dtype, Python string dtype behavior is correct.
### Expected Behavior
```
import pandas as pd
s = pd.Series(['23', '³', '⅕', ''], dtype=pd.StringDtype(storage="pyarrow"))
s.str.isdigit()
```
```
0
0 True
1 True
2 False
3 False
dtype: boolean
```
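For reference, Python's own `str.isdigit` (which the `python` string storage delegates to) is driven by the Unicode `Numeric_Type` property: superscripts like `'³'` have `Numeric_Type=Digit` and pass, while vulgar fractions like `'⅕'` have `Numeric_Type=Numeric` and fail. A minimal sketch of the semantics the pyarrow backend is expected to match:

```python
# Expected results per Python's str.isdigit (and the pandas docs).
cases = {"23": True, "³": True, "⅕": False, "": False}
results = {text: text.isdigit() for text in cases}
assert results == cases
```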
### Installed Versions
<details>
INSTALLED VERSIONS
------------------
commit : 0691c5cf90477d3503834d983f69350f250a6ff7
python : 3.11.12
python-bits : 64
OS : Linux
OS-release : 6.1.123+
Version : #1 SMP PREEMPT_DYNAMIC Sun Mar 30 16:01:29 UTC 2025
machine : x86_64
processor : x86_64
byteorder : little
LC_ALL : en_US.UTF-8
LANG : en_US.UTF-8
LOCALE : en_US.UTF-8
pandas : 2.2.3
numpy : 2.0.2
pytz : 2025.2
dateutil : 2.9.0.post0
pip : 24.1.2
Cython : 3.0.12
sphinx : 8.2.3
IPython : 7.34.0
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : 4.13.4
blosc : None
bottleneck : 1.4.2
dataframe-api-compat : None
fastparquet : None
fsspec : 2025.3.2
html5lib : 1.1
hypothesis : None
gcsfs : 2025.3.2
jinja2 : 3.1.6
lxml.etree : 5.4.0
matplotlib : 3.10.0
numba : 0.60.0
numexpr : 2.10.2
odfpy : None
openpyxl : 3.1.5
pandas_gbq : 0.28.1
psycopg2 : 2.9.10
pymysql : None
pyarrow : 18.1.0
pyreadstat : None
pytest : 8.3.5
python-calamine : None
pyxlsb : None
s3fs : None
scipy : 1.15.3
sqlalchemy : 2.0.40
tables : 3.10.2
tabulate : 0.9.0
xarray : 2025.3.1
xlrd : 2.0.1
xlsxwriter : None
zstandard : 0.23.0
tzdata : 2025.2
qtpy : None
pyqt5 : None
</details>
| [
"Bug",
"Strings",
"Needs Discussion",
"Upstream issue",
"Arrow"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Thanks for the report, confirmed on main. Further investigations and PRs to fix are welcome!",
"@rhshadrach The issue stems from `pyarrow.compute.utf8_is_digit` not recognizing non-ASCII Unicode digits (e.g., `'³'`). To align with `str.isdigit()`'s behavior and pandas docs, I propose replacing the Arrow compute call in `_str_isdigit()` with\n```\ndef _str_isdigit(self):\n values = self.to_numpy(na_value=None)\n data = []\n mask = []\n\n for val in values:\n if val is None:\n data.append(False)\n mask.append(True)\n else:\n data.append(val.isdigit())\n mask.append(False)\n\n from pandas.core.arrays.boolean import BooleanArray\n return BooleanArray(np.array(data, dtype=bool), np.array(mask, dtype=bool))\n```\n\nWhile this isn’t vectorized, it correctly honors all Unicode digit categories, which aligns with user expectations. Let me know if this workaround is acceptable for now, or if you’d prefer keeping the current Arrow-based behavior and instead clarifying the limitation in the documentation.\n\nRelated upstream issue: I’ve confirmed that this is a `pyarrow` limitation and have raised an [enhancement request](https://github.com/apache/arrow/issues/46589) in the Arrow repo to bring `utf8_is_digit` in line with `str.isdigit()`.\n\nOptionally, we could also explore reimplementing this in Cython using `PyUnicode_READ` and `Py_UNICODE_ISDIGIT` for performance while maintaining Unicode correctness.\n\nLet me know what direction you'd prefer, happy to work on a patch either way",
"Looks like this is getting fixed upstream (thanks!). Assuming that to be the case, my preference would be to leave pandas as-is.\n\ncc @WillAyd @jorisvandenbossche for any thoughts.",
"Yes I agree - let's keep it as an upstream fix. Thanks for the thorough investigation and solution @iabhi4 ",
"Thanks @iabhi4 for the upstream fix https://github.com/apache/arrow/issues/46589. It solves the superscripts issue, but introduces another discrepancy:\n```\n// '¾' (vulgar fraction) is treated as a digit by utf8proc 'No'\n```\n\nAny chance we can fix it too? Otherwise str.isdigit is still different on python string and pyarrow string types.",
"> Looks like this is getting fixed upstream (thanks!). Assuming that to be the case, my preference would be to leave pandas as-is.\n\nIf this comes in the next pyarrow version, I think we could still add a fallback based on the version. I.e. use pyarrow for recent versions, otherwise still fallback to the python implemenation. \nPotentially, assuming the cases that behave differently are all in unicode, we could also first do a check if all elements are ascii, and if so always use the pyarrow version (I _think_ the overhead is worth it)",
"I am doing a PR for what I mentioned above (still fallback to python for pyarrow<21) at https://github.com/pandas-dev/pandas/pull/61962, but that also shows that this now introduces a new inconsistency ..\n\nThe superscript is now seen as a digit, but the ⅕ as well, while that is not the case for python:\n\n```python\n>>> '⅕'.isdigit()\nFalse\n```\n\nvs\n\n```\n>>> import pyarrow.compute as pc\n>>> pa.__version__\n'21.0.0'\n>>> pc.utf8_is_digit(['⅕'])\n<pyarrow.lib.BooleanArray object at 0x7f28ca52da80>\n[\n true\n]\n```\n",
"What do we want to do here? \nStay as close as possible to the Python behaviour? (in that case we could fast-check if all values are ascii, and in that case still using the faster pyarrow algorithm, but otherwise fall back to Python) \nOr accept that this is one of the differences between both engines and document it as such?\n\n(it's also not clear to me if one of both behaviours is \"more correct\" than the other ..)",
"I think that there are endless possibilities for differences and ambiguities on the compute interpretation of unicode sequences, so I'd rather just document that we are using Arrow/utf8proc. I think the only real alternatives are playing \"whack-a-mole\" as people report issues, or always pushing everything to a PyObject and continuing to use Python's interpretation. That would definitely be the \"pure\" backwards-compat approach, but I'm not sure it is all that practical",
"Yes, I think I agree for differences that are unicode corner cases like this one. I updated the PR to test the different behaviour, and then we should also document this as one of the known differences (in addition to the upper case of ß)"
] |
3,077,824,147 | 61,465 | BUG: Raise ValueError on integer indexers containing NA; skip test for unsupported EAs(#56727) | open | 2025-05-20T18:02:28 | 2025-07-26T00:09:03 | null | https://github.com/pandas-dev/pandas/pull/61465 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61465 | https://github.com/pandas-dev/pandas/pull/61465 | pelagiavlas | 2 | - [x] closes #56727
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [x] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [x] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
What this PR changes:
- Adds a check to _setitem_with_indexer to raise a ValueError when NA is present in the indexer.
- Updates test_setitem_integer_with_missing_raises to skip test cases for known ExtensionArray types (PeriodArray, DatetimeArray, IntervalArray) that do not support indexing with NA in integer indexers. | [
"Bug",
"Testing",
"ExtensionArray",
"Stale"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Greetings! I’m following up on this PR.\r\nIf there’s any input or revisions required, please feel free to let me know.\r\nThank you!",
"This pull request is stale because it has been open for thirty days with no activity. Please [update](https://pandas.pydata.org/pandas-docs/stable/development/contributing.html#updating-your-pull-request) and respond to this comment if you're still interested in working on this."
] |
3,075,200,403 | 61,464 | BUG: Decimal and float-to-int conversion issues with pyarrow ≥18.0.0 in parquet and Arrow dtype tests | open | 2025-05-19T23:07:27 | 2025-07-24T20:08:27 | null | https://github.com/pandas-dev/pandas/issues/61464 | true | null | null | bhavya2109sharma | 4 | ### Pandas version checks
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [x] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
# Issue 1
import pyarrow as pa

array = pa.array([1.5, 2.5], type=pa.float64())
array.to_pandas(types_mapper={pa.float64(): pa.int64()}.get)
# ArrowInvalid: Float value 1.5 was truncated converting to int64

# Issue 2
import pandas as pd
import pyarrow as pa
from decimal import Decimal

df = pd.DataFrame({"a": [Decimal("123.00")]}, dtype="string[pyarrow]")
df.to_parquet("decimal.pq", schema=pa.schema([("a", pa.decimal128(5))]))
result = pd.read_parquet("decimal.pq")
expected = pd.DataFrame({"a": ["123"]}, dtype="string[python]")
pd.testing.assert_frame_equal(result, expected)
# AssertionError: Attributes of DataFrame.iloc[:, 0] (column name="a") are different
# Attribute "dtype" are different
# [left]:  object
# [right]: string[python]
```
### Issue Description
Two issues have been observed when using pandas 2.2.3 with pyarrow >= 18.0.0:
- Failing test cases: pandas/tests/extension/test_arrow.py::test_from_arrow_respecting_given_dtype_unsafe and pandas/tests/io/test_parquet.py::TestParquetPyArrow::test_roundtrip_decimal
- Stricter float-to-int casting causes ArrowInvalid in tests like test_from_arrow_respecting_given_dtype_unsafe.
- Decimal roundtrip mismatch: test_roundtrip_decimal fails due to dtype mismatches (object vs. string[python]) when reading back a decimal column written with a specified pyarrow schema.
These issues were not present with pyarrow==17.x.
### Expected Behavior
- Float to int casting should either handle truncation more gracefully (as in older versions) or tests should be updated to skip/adjust.
- Decimal roundtrips to parquet should maintain the same pandas dtype or document clearly if type coercion is expected.
### Installed Versions
<details>
python : 3.11.11
pandas : 2.2.3
pyarrow : 19.0.1
</details>
| [
"Bug",
"Closing Candidate",
"Arrow"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"In newer versions of PyArrow, type identity is stricter which is why this code is now causing errors.\n\n**Issue 1:**\nHi, the issue is that `types_mapper={pa.float64(): pa.int64()}.get` is not reliable in newer versions of PyArrow. This is because each call to `pa.float64()` creates a new object, so the key in your dictionary does not match the instance passed internally by PyArrow. I fixed the issue by converting the `float` to pandas and then casting the `float` to an `int` using truncation.\n\n```python\narray = pa.array([1.5, 2.5], type=pa.float64())\ns = array.to_pandas() \ns_int = s.astype(int) \n```\n\n**Issue 2:**\nThe issue is that both the values and types of `result` and `expected` are different. The result column is an object with a decimal value of 123.00, while the expected column is a string with a value of \"123\". I fixed this by converting the result column to a string and removing the trailing decimal places so that it matched the expected column.\n\n```python\ndf = pd.DataFrame({\"a\": [Decimal(\"123.00\")]}, dtype=\"object\")\ndf.to_parquet(\"decimal.pq\", schema=pa.schema([(\"a\", pa.decimal128(5))]))\nresult = pd.read_parquet(\"decimal.pq\")\nresult[\"a\"] = result[\"a\"].apply(lambda x: str(x).split(\".\")[0] if isinstance(x, Decimal) else str(x))\nresult = result.astype({\"a\": \"string\"})\n\nexpected = pd.DataFrame({\"a\": [\"123\"]}, dtype=\"string[python]\")\npd.testing.assert_frame_equal(result, expected)\n```\n",
"Thanks @phoebecd for the suggestions but I am running test cases implemented by pandas. AFAIK, pandas needs to fix these test cases in newer version so that pyarrow stricter identity type errors get resolved with a fix made by pandas. ",
"~@bhavya2109sharma~ @phoebecd - Unfortunately your response does not help with this issue and only adds noise that maintainers spend time going through. I suspect it was generated by AI. If that is the case, please do not merely post the AI response to an issue. Using it as an aid when crafting a response is okay, but you should first feel confident that what you post is likely to be helpful.",
"Haven't checked if these failures are still happening on 2.2.x, but I believe they are not in 2.3.0. Can you confirm @bhavya2109sharma? If that's the case, we can close as the 2.2 series will not see anymore patches."
] |
3,075,048,545 | 61,463 | Wheels for win_arm64 | closed | 2025-05-19T21:22:13 | 2025-05-21T20:31:36 | 2025-05-21T20:25:01 | https://github.com/pandas-dev/pandas/pull/61463 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61463 | https://github.com/pandas-dev/pandas/pull/61463 | khmyznikov | 4 | - [ ] closes #61462
- [x] no new code was added to pandas | [
"Build"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Regular win_amd also required the `pip install delvewheel` for some reason, despite the fact I didn't touch it. Added that for all win plats to fix the build.",
"Thanks @khmyznikov ",
"@mroeschke any ETA when the new version will be available on PyPi?",
"> any ETA when the new version will be available on PyPi?\r\n\r\nProbably Q4 2025 when pandas 3.0 is anticipated to be released"
] |
3,075,047,965 | 61,462 | BUILD: Provide wheel for Windows ARM64 | closed | 2025-05-19T21:21:52 | 2025-05-21T20:25:02 | 2025-05-21T20:25:02 | https://github.com/pandas-dev/pandas/issues/61462 | true | null | null | khmyznikov | 0 | ### What is the current behavior?
Installation via pip requires a local build.
### What is the desired behavior?
To have [native wheel for WoA](https://blogs.windows.com/windowsdeveloper/2025/04/14/github-actions-now-supports-windows-on-arm-runners-for-all-public-repos/). GitHub Actions now supports win-arm64 for free.
### How would this improve `pandas`?
Due to the library's popularity, a native version for the growing number of Windows on ARM (WoA) devices offers a better user experience. | [
"Build",
"Windows",
"ARM"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] |
3,074,786,808 | 61,461 | DOC: fix two mistakes in missing_data.rst | closed | 2025-05-19T19:17:16 | 2025-05-20T15:02:21 | 2025-05-19T19:45:53 | https://github.com/pandas-dev/pandas/pull/61461 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61461 | https://github.com/pandas-dev/pandas/pull/61461 | gogowitsch | 0 | Small documentation improvements on `missing_data.rst`:
- The referenced example no longer exists
- Add space after full stop. | [
"Docs",
"Missing-data"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] |
3,073,889,307 | 61,460 | PERF: Slow Windows / Ubuntu Unit Tests during Status Checks | closed | 2025-05-19T13:37:43 | 2025-06-02T16:26:14 | 2025-06-02T16:26:11 | https://github.com/pandas-dev/pandas/issues/61460 | true | null | null | MartinBraquet | 3 | ### Pandas version checks
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this issue exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [x] I have confirmed this issue exists on the main branch of pandas.
### Reproducible Example
The Windows Unit tests are dangerously close to timing out when running the checks that validate a PR.
The last unit test from a merged PR took 83 minutes, out of the 90 minutes before timeout:
https://github.com/pandas-dev/pandas/actions/runs/15019196064/job/42204122221
Furthermore, the checks in the open PR below are failing due to timeout in one of the Windows Unit tests.
https://github.com/pandas-dev/pandas/pull/61457/checks?check_run_id=42474035590
As there is only one unit test failing among all the PR checks and the Ubuntu Unit test is taking the same time in this PR as in the merged PR above, it strongly suggests that there is no issue intrinsic to the code change in the PR and that the way forward is:
- To increase the 90-min timeout in the unit test config yaml
- Or, and maybe better, to reduce the total time to run unit tests; this obviously might require a lot of work, unless some low-hanging fruits are still up for grab.
~If this issue appears in all new PRs triggering the core unit tests, this requires immediate attention.~
### Installed Versions
<details>
Version independent
</details>
### Prior Performance
_No response_ | [
"Performance",
"CI",
"Windows",
"Needs Discussion"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Update: \nAfter a rerun of the checks in PR #61457, they all passed in 45 min.\nThis is a significant difference from the 90 min that it took for the preceding check that ran just an hour before it.\nIt would be interesting to determine the cause for that twofold increase in runtime.\nIt ran at the same time as another check from a different PR. So one hypothesis is that compute time increases if multiple checks run in parallel. If all checks share the same compute resources and each check uses most of them, then parallel runs may severely impair performance",
"The root issue is that creating the environment is taking significantly longer for some reason (+30 minutes) for Windows and Ubuntu (x86) environments recently. Maybe the dependency solver got updated that triggered a performance regression",
"Closing in favor of https://github.com/pandas-dev/pandas/issues/61531"
] |
3,073,813,106 | 61,459 | DOC: change `pandas.DataFrame.unstack`'s `fill_value` param to scalar | closed | 2025-05-19T13:13:36 | 2025-05-19T16:00:47 | 2025-05-19T16:00:40 | https://github.com/pandas-dev/pandas/pull/61459 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61459 | https://github.com/pandas-dev/pandas/pull/61459 | KevsterAmp | 1 | - [x] closes #61445 (Replace xxxx with the GitHub issue number)
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [x] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [x] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
<img width="796" alt="image" src="https://github.com/user-attachments/assets/2540c012-dc39-47bd-81f9-263855a2c69d" />
| [
"Docs"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Thanks @KevsterAmp "
] |
3,073,785,764 | 61,458 | Use BaseExecutionEngine for Python and Numba engines | open | 2025-05-19T13:04:52 | 2025-06-15T15:25:23 | null | https://github.com/pandas-dev/pandas/issues/61458 | true | null | null | datapythonista | 6 | In #61032 we have created a new base class `BaseExecutionEngine` that engines can subclass to handle `apply` and `map` operations. The base class has been initially created to allow third-party engines to be passed to `DataFrame.apply(..., engine=third_party_engine)`. But our core engines Python and Numba can also be implemented as instances of this base class. This will make the code cleaner, more maintainable, and it may allow to move the Numba engine outside of the pandas code base easily.
The whole migration to the new interface is quite a big change, so it's recommended to make the transition step by step, in small pull requests. | [
"Apply"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Thanks for assigning me this @datapythonista ! This looks interesting to work on and I'll start looking into it.",
"Thanks @arthurlw. A possible approach could be starting with numba only. The numba engine is only implemented for `DataFrame.apply` for now, and only for certain types of the parameters. For example, it doesn't work with ufuncs.\n\nI think all the numba engine has been introduced in two PRs, https://github.com/pandas-dev/pandas/pull/54666 and https://github.com/pandas-dev/pandas/pull/55104, and hasn't changed much. So it should be easy to see all the changes implemented for the engine.\n\nThe main logic is implemented here: https://github.com/pandas-dev/pandas/blob/main/pandas/core/apply.py#L1096\n\nI think having all the numba engine as a subclass of the base executor would already be quite valuable, and much easier than refactoring all the Python engine code.\n\nFor reference, you have an implementation of a third-party executor engine in this PR: https://github.com/bodo-ai/Bodo/pull/410/files",
"Hey @datapythonista I’ve been thinking about how to best organize the engine subclasses and avoid circular imports. One option is to move the base class and all engine implementations into a new `pandas/core/engines/` sub-package:\n```\npandas/core/\n├─ apply.py\n└─ engines/\n ├─ base.py # BaseExecutionEngine\n ├─ python_engine.py # PythonExecutionEngine\n └─ numba_engine.py # NumbaExecutionEngine\n```\nThis keeps each engine in its own file and provides a clear plugin point for third-party engines. What do you think?",
"This looks reasonable. I'd probably start creating the `NumbaExecutionEngine` class in `apply.py` for now, as I think it'll be somehow small. And being in the same file you'll also avoid circular imports. But as we properly split the Python and the Numba engines, I think it makes sense to split this way. Maybe it'd be more clear to name the directory/module `apply`, since `engine` can mean different things in pandas.",
"Hey @datapythonista, I’m working on the PythonExecutionEngine and wanted to propose a plan for splitting the work into PRs:\n\n1. Add support for third-party execution engines for `DataFrame.map`, similar to what's done in #61467\n\n2. Implement PythonExecutionEngine in `apply.py`\n\n3. (Optional) Split engines into submodules proposed [here](https://github.com/pandas-dev/pandas/issues/61458#issuecomment-2901134361)\n\nOne question I had: for PythonExecutionEngine.apply, should we follow the approach used for NumbaExecutionEngine and lift logic from `apply_raw`, or should we call back into the logic defined in `frame.py`?",
"Thanks @arthurlw for working on this, good questions.\n\nWhat you propose sounds good to me. What makes sense to me is that we add the engine keyword with the existing behavior not only to `DataFrame.map`, but also to `Series.apply` and `.pipe` of both methods.\n\nBefore starting with the implementation of `PythonExecutionEngine` I think we should have the `NumbaExecutionEngine` merged. I think using the new interface for Numba is way easier, and also I think it should make it easier to implement the python engine. But `PythonExecutionEngine` should follow the same API as the Numba one. It's not only about `apply_way`, but the whole `apply.py`. The `raw` in `apply_raw` means Numpy arrays as opposed of pandas Series. Meaning that when you apply a function to data, if you apply it to the Series is \"normal\", if you apply it to the \"raw\" data, it means the underlying Numpy array. Numba only understands Numpy, not pandas, that's why most of the logic of the numba engine lives in `apply_raw`. But when dealing with the Python engine, then it's not only that method, but the rest of the class where things happen.\n\nFor the Numba engine, the idea is that `DataFrame.apply`, instead of always calling `Apply.apply`, it will call `NumbaExecutionEngine.apply`. This method should do all the checks of things unsupported by Numba that now live in `Apply.apply`. For example:\n\n```python\nclass Apply\n def apply(self):\n if is_list_like(self.func):\n if self.engine == \"numba\":\n raise NotImplementedError(\n \"the 'numba' engine doesn't support lists of callables yet\"\n )\n```\n\nwill be something like:\n\n```python\nclass NumbaExecutionEngine:\n def apply(...):\n if is_list_like(func):\n raise NotImplementedError(...)\n```\n\nso, the class `Apply` shouldn't receive the engine, but be called directly only for the default engine. 
And for the things that the numba engine does support, from `NumbaExecutionEngine` you can call the methods in `apply` (you may need to make them functions outside the class, and call them from both `Apply` and `NumbaExecutionEngine` instead).\n\nFor what I know, all the `if self.engine == \"numba\":` in `apply.py` were implemented for `DataFrame.apply`. But the `apply` of group by and window operations also supports the numba engine. I think this code lives elsewhere, but I'm not too sure. You may one to have a look."
] |
3,073,461,741 | 61,457 | ENH: Added `DataFrame.nsorted` to select top ``n`` rows according to column-dependent order | open | 2025-05-19T11:11:18 | 2025-07-29T18:19:32 | null | https://github.com/pandas-dev/pandas/pull/61457 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61457 | https://github.com/pandas-dev/pandas/pull/61457 | MartinBraquet | 4 | - [x] closes #61166
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [x] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [x] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
PR is ready for review.
| [
"Stale"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"This pull request is stale because it has been open for thirty days with no activity. Please [update](https://pandas.pydata.org/pandas-docs/stable/development/contributing.html#updating-your-pull-request) and respond to this comment if you're still interested in working on this.",
"The PR has been waiting for reviewers since then.",
"@rhshadrach @snitish @Dr-Irv ",
"I applied the comments and the checks passed, so the PR should be ready for re-review."
] |
3,073,419,317 | 61,456 | PERF: Setting an item of incompatible dtype | closed | 2025-05-19T10:55:32 | 2025-08-05T17:04:22 | 2025-08-05T17:04:22 | https://github.com/pandas-dev/pandas/issues/61456 | true | null | null | muhannad125 | 2 | ### Pandas version checks
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this issue exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [ ] I have confirmed this issue exists on the main branch of pandas.
### Reproducible Example
df["feature"] = np.nan
for cluster in df["cluster"].unique():
df.loc[df["cluster"] == cluster, "feature"] = "string"
### Installed Versions
<details>
INSTALLED VERSIONS
------------------
commit : 0691c5cf90477d3503834d983f69350f250a6ff7
python : 3.10.12
python-bits : 64
OS : Linux
OS-release : 5.15.0-138-generic
Version : #148-Ubuntu SMP Fri Mar 14 19:05:48 UTC 2025
machine : x86_64
processor : x86_64
byteorder : little
LC_ALL : None
LANG : en_GB.utf8
LOCALE : en_GB.UTF-8
pandas : 2.2.3
numpy : 2.2.4
pytz : 2025.1
dateutil : 2.9.0.post0
pip : 25.0.1
Cython : None
sphinx : None
IPython : 8.34.0
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : None
blosc : None
bottleneck : None
dataframe-api-compat : None
fastparquet : None
fsspec : 2023.6.0
html5lib : None
hypothesis : None
gcsfs : None
jinja2 : 3.1.6
lxml.etree : 4.9.4
matplotlib : 3.10.1
numba : None
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
psycopg2 : None
pymysql : None
pyarrow : 19.0.1
pyreadstat : None
pytest : None
python-calamine : None
pyxlsb : None
s3fs : 2023.6.0
scipy : 1.15.2
sqlalchemy : None
tables : None
tabulate : None
xarray : 2025.1.2
xlrd : None
xlsxwriter : None
zstandard : None
tzdata : 2025.1
qtpy : None
pyqt5 : None
</details>
### Prior Performance
Setup
Dataset: df with 148,858 rows
Task: Assign "string" to a new column "feature" based on unique values in the "cluster" column.
Environment: Running on LSF
Test 1: Initialize with np.nan
import numpy as np
df["feature"] = np.nan
for cluster in df["cluster"].unique():
df.loc[df["cluster"] == cluster, "feature"] = "string"
Runtime: ~52.5 seconds
Warning:
FutureWarning: Setting an item of incompatible dtype is deprecated and will raise an error in a future version of pandas.
Value 'string' has dtype incompatible with float64, please explicitly cast to a compatible dtype first.
Test 2: Initialize with "None"
df["feature"] = "None"
for cluster in df["cluster"].unique():
df.loc[df["cluster"] == cluster, "feature"] = "string"
Runtime: ~1 minute 35 seconds
No warnings
Observation: Slower performance despite avoiding the dtype mismatch warning. | [
"Indexing",
"Performance",
"Needs Info"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"@muhannad125 - please provide a reproducible example. Setup an example `df` with synthetic data.\n\nYou would might be interested in using [map](https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.map.html) if performance is a concern.",
"Closing as needing more information"
] |
3,072,836,884 | 61,455 | fix(doc): #61432 typing | closed | 2025-05-19T07:34:54 | 2025-05-20T21:42:25 | 2025-05-20T21:40:31 | https://github.com/pandas-dev/pandas/pull/61455 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61455 | https://github.com/pandas-dev/pandas/pull/61455 | cmp0xff | 3 | - [x] closes #61432
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| [
"Docs",
"Reshaping"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"@mroeschke thank you for the approval. Wouldn't the PR fit for 2.2.4 or 2.3.0? It seems to be merely a fix to me.",
"This PR will go into 3.0. No whatsnew entry is needed for this PR.",
"Thanks @cmp0xff "
] |
3,072,117,089 | 61,454 | BUG: Raise TypeError when joining with non-DataFrame using 'on=' (GH#61434) | open | 2025-05-18T23:00:01 | 2025-06-25T01:51:32 | null | https://github.com/pandas-dev/pandas/pull/61454 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61454 | https://github.com/pandas-dev/pandas/pull/61454 | iabhi4 | 3 | Closes GH#61434
### What does this PR change?
When using `DataFrame.join()` with the `on` parameter, passing an invalid object like a `dict`, `int`, or third-party DataFrame previously resulted in unclear internal errors.
This PR adds a minimal type check that raises a clear `TypeError` when `other` is not a `DataFrame`, `Series`, or a list of such objects. Valid list-based joins without `on` remain unaffected.
### Checklist
- [x] Closes #61434
- [x] Tests added and passed
- [x] All code checks passed via `pre-commit run --all-files`
- [x] Entry added in `doc/source/whatsnew/v3.0.0.rst` | [
"Reshaping",
"Error Reporting",
"Stale"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"All checks passed except the Pyodide build, which failed due to a rate-limit (HTTP 429). The failure seems unrelated to this PR. A rerun should resolve it",
"This pull request is stale because it has been open for thirty days with no activity. Please [update](https://pandas.pydata.org/pandas-docs/stable/development/contributing.html#updating-your-pull-request) and respond to this comment if you're still interested in working on this.",
"@mroeschke @rhshadrach can you please take a look at this?"
] |
3,072,076,580 | 61,453 | TYP: Update typing for 3.10 | closed | 2025-05-18T21:38:17 | 2025-05-19T16:11:30 | 2025-05-19T16:11:23 | https://github.com/pandas-dev/pandas/pull/61453 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61453 | https://github.com/pandas-dev/pandas/pull/61453 | Dr-Irv | 1 | - Ran `pyupgrade` so that typing is updated to python 3.10 - that modified files in `pandas/_libs`
- Updated `pandas/_typing.py` to have the following:
- Use `TypeAlias` to declare all types
- Change `Type` to use `builtins.type`
- Remove `Optional`
- Remove use of `Union` *except* for when it is used on a pandas type
Also fixed some other typing issues as well as an issue with running `stubtest` locally.
| [
"Typing"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Thanks @Dr-Irv "
] |
3,071,183,637 | 61,452 | BUG: Compiler Flag Drift May Affect Pandas ABI Stability via Memory Assumptions | closed | 2025-05-18T00:17:59 | 2025-05-18T10:21:22 | 2025-05-18T10:21:00 | https://github.com/pandas-dev/pandas/issues/61452 | true | null | null | BryteLite | 3 | ### Pandas version checks
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [x] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
import pandas as pd
import numpy as np
# Create a structured array with alignment-sensitive types
dtype = np.dtype([('x', np.int64), ('y', np.float64)])
arr = np.zeros(10, dtype=dtype)
# Wrap into DataFrame
df = pd.DataFrame(arr)
# Trigger complex alignment path
try:
# Operation that depends on consistent field layout
df_sum = df.sum(numeric_only=True)
print("Sum result:", df_sum)
except Exception as e:
print("Failure during structured alignment test:", e)
```
### Issue Description
### Summary
Pandas may be vulnerable to ABI and memory alignment issues caused by C23 default behaviors in GCC 15.1. Silent adoption of padding behavior changes — particularly in union or struct definitions used in NumPy or Pandas C extensions — may lead to unpredictable runtime behavior.
This issue was originally identified in NumPy and Cython. As Pandas both includes compiled Cython code and relies on NumPy for internal memory layout, it is vulnerable downstream.
These compiled pieces are sensitive to pointer alignment, ABI expectations, or padding behaviors — especially across environments.
### Reproducible Example
Please see section below
Possibly related to:
- [BUG: DataFrame constructor not compatible with array-like classes that have a 'name' attribute](https://github.com/pandas-dev/pandas/issues/61443)
- [BUG: Confusing Behavior When Assigning DataFrame Columns Using omegaconf.ListConfig](https://github.com/pandas-dev/pandas/issues/61439)
- [BUG: Some ExtensionArrays can return 0-d Elements](https://github.com/pandas-dev/pandas/issues/61433)
- [BUG: Joining Pandas with Polars dataframe produces fuzzy errormessage](https://github.com/pandas-dev/pandas/issues/61434)
- [BUG: documented usage of of str.split(...).str.get fails on dtype large_string[pyarrow]](https://github.com/pandas-dev/pandas/issues/61431)
Report for more context:
[Report](https://brytelite.github.io/BryteLite/supply-chain-report)
### Expected Behavior
Recompile NumPy and Pandas with mismatched flags.
Then run the reproducible example above. If padding bits are not cleared correctly in C structs, or if a layout mismatch occurs due to vendor/flag drift, crashes or incorrect math results may emerge.
`CFLAGS="-std=c23" pip install numpy pandas --force-reinstall --no-cache-dir` # when building
### Installed Versions
NumPy latest 3.13 release, Pandas latest 3.13 release are suitable. | [
"Bug",
"Build"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"NumPy just released 2.2.6 fixes concerning types today. Might just mask the issue however.",
"https://sethmlarson.dev/slop-security-reports",
"Thanks @eli-schwartz - closing along with https://github.com/matplotlib/matplotlib/issues/30064 and https://github.com/numpy/numpy/issues/28953"
] |
3,071,115,712 | 61,451 | BUG: Fix DataFrame constructor misclassification of array-like with 'name' attribute (#61443) | closed | 2025-05-17T22:30:43 | 2025-05-19T15:54:56 | 2025-05-19T15:54:49 | https://github.com/pandas-dev/pandas/pull/61451 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61451 | https://github.com/pandas-dev/pandas/pull/61451 | iabhi4 | 3 | BUG: Fix DataFrame constructor misclassification of array-like with 'name' attribute
Previously, any object with a `.name` attribute (like some `vtkArray`-like objects) was assumed to be a `Series` or `Index`, causing the DataFrame constructor to misinterpret the input and raise errors when passed valid 2D array-likes.
This fix ensures we only apply the named-Series/Index logic when the input is **actually** an instance of `ABCSeries` or `ABCIndex`, and the `name` is not `None`.
A new test was added to ensure array-like subclasses with `.name` are handled correctly.
---
- [x] Closes #61443
- [x] Tests added and passing
- [x] Code passes all checks via `pre-commit`
- [x] Behavior verified with array-like + `.name` case
- [x] Entry added in `doc/source/whatsnew/v3.0.0.rst` | [
"Bug",
"Constructors"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"**Note:** One unrelated pre-commit check failed:\r\n\r\n ```\r\nCheck for strings with misplaced spaces.................................................................Failed\r\n - hook id: unwanted-patterns-strings-with-misplaced-whitespace\r\n\r\n pandas/_libs/tslibs/offsets.pyx:5112: String has a space at the beginning instead of the end of the previous string.\r\n pandas/_libs/tslibs/offsets.pyx:5126: String has a space at the beginning instead of the end of the previous string.\r\n ```\r\n\r\n One of the flagged lines includes:\r\n\r\n ```python\r\n f\" instead.\",\r\n ```\r\n\r\n This has a space at the *start* of the string rather than the end of the previous one, which violates the `unwanted-patterns-strings-with-misplaced-whitespace` pre-commit rule.\r\n\r\n Since this file wasn't touched in this PR and is unrelated to the fix, I’ve left it as-is. Let me know if you'd like me to patch it.\r\n",
"> Thanks for the PR! Can you add a line in the whatsnew for 3.0 under the `Other` section in bugfixes.\r\n> \r\n> >\r\n\r\nThanks for the review @rhshadrach!\r\nAdded the whatsnew entry and the GH#61443 comment in the test",
"Thanks @iabhi4 "
] |
3,070,159,397 | 61,450 | BUG: Fix Dataframe handling of scalar Timestamp #61444 | closed | 2025-05-17T01:42:15 | 2025-06-30T18:26:21 | 2025-06-30T18:26:20 | https://github.com/pandas-dev/pandas/pull/61450 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61450 | https://github.com/pandas-dev/pandas/pull/61450 | Farsidetfs | 3 | closes #61444
Tests developed, but still validating against the test suite to ensure full compliance. Will update the pull request with submission notes and a unit test after validating that this doesn't disrupt other functions.
TODO:
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| [
"Bug",
"Stale",
"Timestamp"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Existing tests seem to be happy, what's needed is to add a test that fails now, and passes with the code changes here. As well as the release note.",
"This pull request is stale because it has been open for thirty days with no activity. Please [update](https://pandas.pydata.org/pandas-docs/stable/development/contributing.html#updating-your-pull-request) and respond to this comment if you're still interested in working on this.",
"Thanks for the pull request, but it appears to have gone stale. If interested in continuing, please merge in the main branch, address any review comments and/or failing tests, and we can reopen."
] |
3,070,072,229 | 61,449 | DOC: fix typo in merging.rst | closed | 2025-05-16T23:56:46 | 2025-05-18T17:25:15 | 2025-05-17T00:21:58 | https://github.com/pandas-dev/pandas/pull/61449 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61449 | https://github.com/pandas-dev/pandas/pull/61449 | wjandrea | 1 | "order data"
- ~~[ ] closes #xxxx (Replace xxxx with the GitHub issue number)~~
- ~~[ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature~~
- ~~[ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).~~
- ~~[ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.~~
- ~~[ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.~~ | [
"Docs"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Thanks @wjandrea "
] |
3,069,696,569 | 61,448 | DOC: Skip parallel_coordinates, andrews_curves doctests | closed | 2025-05-16T19:00:05 | 2025-05-16T20:59:18 | 2025-05-16T20:59:16 | https://github.com/pandas-dev/pandas/pull/61448 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61448 | https://github.com/pandas-dev/pandas/pull/61448 | mroeschke | 0 | The method calls are skipped in these doctests, so we should skip the `DataFrame` setup that makes a network call
e.g. where this can fail https://github.com/pandas-dev/pandas/actions/runs/15072461295/job/42371956167 | [
"Docs"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] |
3,068,360,501 | 61,447 | BUG: read_csv silently ignores out of bounds errors when parsing date columns | closed | 2025-05-16T08:42:56 | 2025-05-17T21:06:07 | 2025-05-17T21:05:50 | https://github.com/pandas-dev/pandas/issues/61447 | true | null | null | ssuhre | 3 | ### Pandas version checks
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [ ] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
import pandas as pd
import tempfile as tmp
with tmp.TemporaryFile(mode='r+') as csv_file:
pd.DataFrame({
'over_and_under': [
'2262-04-12',
'1677-09-20',
]
}).to_csv(csv_file, index=False)
csv_file.seek(0)
df = pd.read_csv(csv_file, parse_dates=['over_and_under'], date_format='%Y-%m-%d')
print(df.info())
pd.to_datetime(df['over_and_under'], format='%Y-%m-%d')
```
### Issue Description
pandas 2.2.3 `read_csv` does not raise an Exception when parsing a date column with specified _date_format_ if values are out of bounds and silently keeps the column as object dtype.
An explicit call of `to_datetime` on the column reveals the out of bounds problem which I expected to get from `read_csv`
### Expected Behavior
`read_csv` should propagate or raise an OutOfBoundsDatetime exception like `to_datetime`.
### Installed Versions
<details>
INSTALLED VERSIONS
------------------
commit : 0691c5cf90477d3503834d983f69350f250a6ff7
python : 3.12.9
python-bits : 64
OS : Darwin
OS-release : 24.4.0
Version : Darwin Kernel Version 24.4.0: Fri Apr 11 18:33:47 PDT 2025; root:xnu-11417.101.15~117/RELEASE_ARM64_T6000
machine : arm64
processor : arm
byteorder : little
LC_ALL : en_US.UTF-8
LANG : en_US.UTF-8
LOCALE : en_US.UTF-8
pandas : 2.2.3
numpy : 2.2.5
pytz : 2025.2
dateutil : 2.9.0.post0
pip : 24.3.1
Cython : None
sphinx : None
IPython : 9.2.0
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : None
blosc : None
bottleneck : None
dataframe-api-compat : None
fastparquet : None
fsspec : None
html5lib : None
hypothesis : None
gcsfs : None
jinja2 : None
lxml.etree : None
matplotlib : None
numba : None
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
psycopg2 : None
pymysql : None
pyarrow : None
pyreadstat : None
pytest : None
python-calamine : None
pyxlsb : None
s3fs : None
scipy : None
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : None
xlsxwriter : None
zstandard : None
tzdata : 2025.2
qtpy : None
pyqt5 : None
</details>
| [
"Bug",
"Datetime",
"IO CSV",
"Non-Nano"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"take",
"This is fixed on main. The dtype of the column is now `datetime64[s]`",
"Thanks for the report, and thanks @asishm. Closing."
] |
3,067,920,070 | 61,446 | CI: clean up wheel build workarounds now that Cython 3.1.0 is out | closed | 2025-05-16T04:44:14 | 2025-07-01T08:27:58 | 2025-05-16T15:54:59 | https://github.com/pandas-dev/pandas/pull/61446 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61446 | https://github.com/pandas-dev/pandas/pull/61446 | rgommers | 3 | - [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
This simplifies wheel builds for free-threaded CPython, which is useful in itself and I thought was a good idea before backporting the Windows cp313t wheel support to the 2.3.x branch, as just discussed at https://github.com/pandas-dev/pandas/pull/61249#issuecomment-2885514360.
This should have no changes in behavior, and mostly reverts the workarounds for unreleased Cython added initially in gh-60146. Other notes:
- Changes the `free-threaded-support` cibuildwheel setting to `enable`, because the former is now deprecated.
- This leaves the license concatenation behavior unchanged. Note that it's skipped on Windows, and happens on other platforms. This was added without any discussion in gh-60146. It looks inconsistent, but the bash invocation doesn't work on Windows so I'd like to leave it unchanged in this PR. A useful follow-up PR may be to remove the ad-hoc concatenation in favor of starting to use [PEP 639](https://peps.python.org/pep-0639/).
I ran the wheel builds on my fork before opening this PR, they're passing ([CI logs](https://github.com/rgommers/pandas/actions/runs/15060502913)). | [
"Build"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Thanks @rgommers ",
"Owee, I'm MrMeeseeks, Look at me.\n\nThere seem to be a conflict, please backport manually. Here are approximate instructions:\n\n1. Checkout backport branch and update it.\n\n```\ngit checkout 2.3.x\ngit pull\n```\n\n2. Cherry pick the first parent branch of the this PR on top of the older branch:\n```\ngit cherry-pick -x -m1 17f0dd6233a881702b36a301f4b8dd82f7d9f9a8\n```\n\n3. You will likely have some merge/cherry-pick conflict here, fix them and commit:\n\n```\ngit commit -am 'Backport PR #61446: CI: clean up wheel build workarounds now that Cython 3.1.0 is out'\n```\n\n4. Push to a named branch:\n\n```\ngit push YOURFORK 2.3.x:auto-backport-of-pr-61446-on-2.3.x\n```\n\n5. Create a PR against branch 2.3.x, I would have named this PR:\n\n> \"Backport PR #61446 on branch 2.3.x (CI: clean up wheel build workarounds now that Cython 3.1.0 is out)\"\n\nAnd apply the correct labels and milestones.\n\nCongratulations — you did some good work! Hopefully your backport PR will be tested by the continuous integration and merged soon!\n\nRemember to remove the `Still Needs Manual Backport` label once the PR gets merged.\n\nIf these instructions are inaccurate, feel free to [suggest an improvement](https://github.com/MeeseeksBox/MeeseeksDev).\n ",
"Manual backport -> https://github.com/pandas-dev/pandas/pull/61752"
] |
3,067,627,843 | 61,445 | DOC: DataFrame.unstack should accept fill_value with more types than just int/str/dict | closed | 2025-05-16T00:12:57 | 2025-05-19T16:00:41 | 2025-05-19T16:00:41 | https://github.com/pandas-dev/pandas/issues/61445 | true | null | null | loicdiridollou | 3 | ### Pandas version checks
- [x] I have checked that the issue still exists on the latest versions of the docs on `main` [here](https://pandas.pydata.org/docs/dev/)
### Location of the documentation
https://pandas.pydata.org/docs/dev/reference/api/pandas.DataFrame.unstack.html#pandas.DataFrame.unstack
### Documentation problem
Currently the docs stipulate that only `int`, `str` and `dict` are allowed for the `fill_value`, yet all the types that could be used when creating a `DataFrame` seem to pass at runtime. I have not tried them all yet, but int, float, complex, and timestamp work fine.
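For example, a float `fill_value` (outside the documented int/str/dict set) is accepted at runtime; the frame below is made up for illustration:

```python
import pandas as pd

df = pd.DataFrame(
    [["a", "x", 1.5], ["b", "y", 2.5]], columns=["i", "j", "val"]
).set_index(["i", "j"])

# A float fill_value, not covered by the documented int/str/dict set,
# fills the holes created by unstacking without error.
wide = df["val"].unstack("j", fill_value=0.0)
```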
### Suggested fix for documentation
Add all allowed types for dataframe elements for the `fill_value` field.
Happy to create the PR if this is agreed by the maintainers. I will raise the issue in the pandas-stubs repo. | [
"Docs",
"Reshaping"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Also I realized that the `dict` option is quite confusing, I am not able to make it work, the following code will fail:\n\n```python\npd.DataFrame([\n [\"a\", \"b\", pd.Timestamp(2021, 3, 2)],\n [\"a\", \"a\", pd.Timestamp(2023, 4, 2)],\n [\"b\", \"b\", pd.Timestamp(2024, 3, 2)]\n]).set_index([0, 1]).unstack(1, fill_value={2: pd.Timestamp(2023, 4, 5)})\n```\n\n```\n File \"/Users/loic/Documents/Code/pandas-stubs/gh1214_unstack/.venv/lib/python3.13/site-packages/pandas/core/reshape/reshape.py\", line 238, in get_result\n values, _ = self.get_new_values(values, fill_value)\n ~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^\n File \"/Users/loic/Documents/Code/pandas-stubs/gh1214_unstack/.venv/lib/python3.13/site-packages/pandas/core/reshape/reshape.py\", line 288, in get_new_values\n dtype, fill_value = maybe_promote(dtype, fill_value)\n ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^\n File \"/Users/loic/Documents/Code/pandas-stubs/gh1214_unstack/.venv/lib/python3.13/site-packages/pandas/core/dtypes/cast.py\", line 595, in maybe_promote\n dtype, fill_value = _maybe_promote(dtype, fill_value)\n ~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^\n File \"/Users/loic/Documents/Code/pandas-stubs/gh1214_unstack/.venv/lib/python3.13/site-packages/pandas/core/dtypes/cast.py\", line 622, in _maybe_promote\n raise ValueError(\"fill_value must be a scalar\")\nValueError: fill_value must be a scalar\n```",
"FWIW, I didn't see any tests for `fill_value` being a `dict`, so I think this is erroneous docs.\n\nSeems like the doc change was part of this PR https://github.com/pandas-dev/pandas/pull/28655 and it was just erroneous to document the `fill_value` that way.\n\n",
"take"
] |
3,065,324,221 | 61,444 | BUG: DataFrame column assignment with pd.Timestamp leads to unexpected dtype and incorrect JSON output | closed | 2025-05-15T07:58:40 | 2025-05-19T02:46:08 | 2025-05-17T21:15:26 | https://github.com/pandas-dev/pandas/issues/61444 | true | null | null | tanjt107 | 4 | ### Pandas version checks
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [x] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
import pandas as pd
date = pd.Timestamp("2025-01-01")
df = pd.DataFrame(columns=["date"], index=["a", "b", "c"])
df["date"] = date
print(df["date"].dtype) # Output: datetime64[s] Expected: datetime64[ns]
print(df.to_json()) # Output: {"date":{"a":1696,"b":1696,"c":1696}}
# Expected: {"date":{"a":1735689600000,"b":1735689600000,"c":1735689600000}}
```
### Issue Description
When assigning a pd.Timestamp to a column in a DataFrame, the resulting dtype of the column is not as expected, and the output of to_json() is incorrect.
### Expected Behavior
The dtype of the date column should default to datetime64[ns] after assignment.
The output of df.to_json() should correctly represent the timestamp in milliseconds since the epoch.
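A workaround sketch (assuming nanosecond precision is what is wanted): since `pd.Timestamp("2025-01-01")` infers a seconds unit from the string, cast the column explicitly after assignment.

```python
import pandas as pd

date = pd.Timestamp("2025-01-01")
df = pd.DataFrame(columns=["date"], index=["a", "b", "c"])
df["date"] = date
# Cast explicitly so the column does not keep the inferred "s" unit.
df["date"] = df["date"].astype("datetime64[ns]")
```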
### Installed Versions
<details>
INSTALLED VERSIONS
------------------
commit : 0691c5cf90477d3503834d983f69350f250a6ff7
python : 3.12.4
python-bits : 64
OS : Darwin
OS-release : 24.4.0
Version : Darwin Kernel Version 24.4.0: Fri Apr 11 18:33:47 PDT 2025; root:xnu-11417.101.15~117/RELEASE_ARM64_T6000
machine : arm64
processor : arm
byteorder : little
LC_ALL : None
LANG : en_US.UTF-8
LOCALE : en_US.UTF-8
pandas : 2.2.3
numpy : 2.2.4
pytz : 2025.2
dateutil : 2.9.0.post0
pip : 25.0.1
Cython : None
sphinx : None
IPython : None
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : None
blosc : None
bottleneck : None
dataframe-api-compat : None
fastparquet : None
fsspec : None
html5lib : None
hypothesis : None
gcsfs : None
jinja2 : 3.1.6
lxml.etree : None
matplotlib : None
numba : None
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
psycopg2 : None
pymysql : None
pyarrow : None
pyreadstat : None
pytest : None
python-calamine : None
pyxlsb : None
s3fs : None
scipy : None
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : None
xlsxwriter : None
zstandard : None
tzdata : 2025.2
qtpy : None
pyqt5 : None
None
</details>
| [
"Bug",
"Non-Nano",
"Timestamp"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"take",
"I was able to duplicate this and believe my PR will resolve the issue. If confirmed as a bug worth fixing, I will update tests, update the documentation, and submit a PR for approval.",
"Thanks for the report, it seems to me the issue lies with Timestamp creation rather than assignment to a DataFrame.\n\n```python\ndate = pd.Timestamp(\"2025-01-01\")\nprint(date.unit)\n# s\ndate2 = pd.Timestamp(year=2025, month=1, day=1)\nprint(date2.unit)\n# us\n```\n\nAs such, closing as a duplicate of https://github.com/pandas-dev/pandas/issues/58989",
"@rhshadrach \n\nI think this is a separate issue as this version fails as well.\n\n#this test fails\n date2 = Timestamp(year=2025, month=1, day=1)\n df4 = DataFrame(index=['a', 'b', 'c'], columns=[\"date\"], dtype='datetime64[ns]')\n df4[\"date\"] = date2 \n assert df4[\"date\"].dtype == \"datetime64[ns]\"\n print(\"df4 assertion passed\")\n\n\n#this test passes\n date = Timestamp(\"2025-01-01\")\n df2 = DataFrame(index=['a', 'b', 'c'], columns=[\"date\"], dtype='datetime64[ns]')\n df2[\"date\"] = [date]*len(df2)\n assert df2[\"date\"].dtype == \"datetime64[ns]\"\n print(\"df2 assertion passed\")\n\n"
] |
3,064,367,936 | 61,443 | BUG: `DataFrame` constructor not compatible with array-like classes that have a `'name'` attribute | closed | 2025-05-14T22:02:04 | 2025-05-19T15:54:50 | 2025-05-19T15:54:50 | https://github.com/pandas-dev/pandas/issues/61443 | true | null | null | user27182 | 5 | ### Pandas version checks
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [ ] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
import numpy as np
import pandas as pd
import vtk
poly = vtk.vtkPolyData(points=np.eye(3))
pd.DataFrame(poly.points)
```
``` python
ValueError: Per-column arrays must each be 1-dimensional
```
Originally posted in https://github.com/pyvista/pyvista/issues/7519
### Issue Description
Constructing a `DataFrame` from the array-like object above results in an unexpected `ValueError` being raised. The cause is this line, which assumes that the input object must be a `Series` or `Index` type based on having a `'name'` attribute.
https://github.com/pandas-dev/pandas/blob/41968a550a159ec0e5ef541a610b7007003bab5b/pandas/core/frame.py#L798-L799
This assumption fails for the `VTKArray` `poly.points`, which also has a `'name'` attribute.
### Expected Behavior
No error should be raised, and the array-like input should be wrapped correctly by `DataFrame`.
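A minimal stand-in with no VTK dependency (the `NamedArray` name is hypothetical) shows the same trigger: any 2D array-like carrying a non-None `name` attribute hits the same code path.

```python
import numpy as np
import pandas as pd

class NamedArray(np.ndarray):
    """Minimal 2D ndarray subclass with a 'name' attribute, like VTKArray."""
    name = "points"

arr = np.eye(3).view(NamedArray)

try:
    # once an isinstance check replaces the 'name' heuristic, this succeeds
    print(pd.DataFrame(arr).shape)
except ValueError as exc:
    # affected versions misread the 2D array as a single named column
    print("ValueError:", exc)
```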
### Installed Versions
<details>
INSTALLED VERSIONS
------------------
commit : 0691c5cf90477d3503834d983f69350f250a6ff7
python : 3.12.2
python-bits : 64
OS : Darwin
OS-release : 23.4.0
Version : Darwin Kernel Version 23.4.0: Fri Mar 15 00:19:22 PDT 2024; root:xnu-10063.101.17~1/RELEASE_ARM64_T8112
machine : arm64
processor : arm
byteorder : little
LC_ALL : None
LANG : None
LOCALE : en_CA.UTF-8
pandas : 2.2.3
numpy : 1.26.4
pytz : 2025.2
dateutil : 2.9.0.post0
pip : 25.1.1
Cython : None
sphinx : 8.1.3
IPython : 8.36.0
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : 4.13.4
blosc : None
bottleneck : None
dataframe-api-compat : None
fastparquet : None
fsspec : None
html5lib : None
hypothesis : 6.131.9
gcsfs : None
jinja2 : 3.1.6
lxml.etree : None
matplotlib : 3.10.1
numba : None
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
psycopg2 : None
pymysql : None
pyarrow : None
pyreadstat : None
pytest : 8.3.5
python-calamine : None
pyxlsb : None
s3fs : None
scipy : 1.14.1
sqlalchemy : None
tables : None
tabulate : 0.9.0
xarray : None
xlrd : None
xlsxwriter : None
zstandard : None
tzdata : 2025.2
qtpy : None
pyqt5 : None
</details>
| [
"Bug",
"Constructors"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Hi!\n\nI'm a beginner contributor and spent some time digging into this — here's what I found:\n\nWhen an array-like object (like `vtkArray` or similar) is passed to `pd.DataFrame()` and it has a `.name` attribute, the constructor currently assumes it's a `Series` or `Index` and wraps it into a `{name: data}` dict. This then routes to `dict_to_mgr()` → `_extract_index()` which attempts to treat the 2D array-like as a 1D column and eventually raises the error\n\nThis behavior is unexpected because the array-like input is valid (2D, convertible to DataFrame), but it's being misinterpreted solely due to the presence of `.name`.\n\nI'd like to work on this issue. I'm happy to follow any guidance or suggestions!",
"Yes exactly - the check for a `'name'` attribute as a proxy for the input being `Series` or `Index` type is the issue. The fix could be as simple as doing a proper `isinstance` check instead, e.g.:\n\n``` diff\n- elif getattr(data, \"name\", None) is not None: \n+ elif isinstance(data, (Series, Index)):\n```\n\nthough there may be some historical or other reason for why `'name'` is used here. But this is what I would try first.",
"@user27182 Thanks for the confirmation! I looked into the historical context of the `.name` check.\n\nIt appears to have been introduced in [this commit](https://github.com/pandas-dev/pandas/commit/7f31567f8f125bb51ae7a3097c8bc24fef6f4d58) from 2013 by @jreback as part of a broader set of fixes (GH4204, GH4463) related to Series/indexing and type coercion — notably when `Series` still subclassed `ndarray`.\n\nBack then, checking for `.name` may have been a lightweight proxy to identify Series-like objects without introducing tight coupling, but now that we have proper ABCs (`ABCSeries`, `ABCIndexClass`), it seems more robust to explicitly check types via `isinstance`.\n\nSo unless there's some subtle compatibility case that still requires relying on `.name`, I’d proceed with:\n\n```python\nelif isinstance(data, (ABCSeries, ABCIndexClass)):",
"Thanks for the report @user27182 and investigation @iabhi4. I agree it seems like we should switch to an isinstance check here. PRs are welcome!",
"> Thanks for the report [@user27182](https://github.com/user27182) and investigation [@iabhi4](https://github.com/iabhi4). I agree it seems like we should switch to an isinstance check here. PRs are welcome!\n\nI’ve submitted a PR implementing the `isinstance` check as discussed, along with a test to validate the fix. Let me know if any further adjustments are needed, happy to iterate!"
] |
3,063,692,554 | 61,442 | ENH: add option to save json without escaping forward slashes | open | 2025-05-14T16:35:35 | 2025-07-01T16:12:55 | null | https://github.com/pandas-dev/pandas/issues/61442 | true | null | null | ellisbrown | 0 | ### Feature Type
- [x] Adding new functionality to pandas
- [ ] Changing existing functionality in pandas
- [ ] Removing existing functionality in pandas
### Problem Description
I love pandas and use it extensively. One very common use case for me is saving large json / jsonl files to describe ML training datasets. Unfortunately, pandas uses ujson under the hood, which automatically escapes forward slashes---and forward slashes appear constantly in my dataset files as filepaths to images/videos/etc.
The escaped filepaths hit issues with some (non-pandas) downstream libs that ingest my json/jsonl dataset files. So instead of using the native pandas `.to_json()` function, I have to import the `json` package and manually write the file myself, which can be much slower for very large files.
I am ok living with this inconvenience, but it seems to me to be a gap in the pandas api. Perhaps adding an option to prevent the escaping would be a good enhancement.
### Feature Description
add a new parameter `escape_forward_slashes` to `pandas.DataFrame.to_json()`
```python
def to_json(self, ..., escape_forward_slashes=True) -> str | None:
...
```
or even a `ujson_options` dict
```python
def to_json(self, ..., ujson_options={}) -> str | None:
...
```
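For reference, a minimal sketch of the current behavior next to the stdlib workaround (the `path` column and filename are illustrative):

```python
import json

import pandas as pd

df = pd.DataFrame({"path": ["images/cat.png"]})

# ujson-backed to_json escapes forward slashes:
escaped = df.to_json(orient="records")
print(escaped)  # [{"path":"images\/cat.png"}]

# the stdlib round-trip keeps them intact, at the cost of speed:
plain = json.dumps(df.to_dict(orient="records"))
print(plain)
```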
### Alternative Solutions
instead of
```python
df.to_json(path)
```
you have to manually use the `json` package
```python
import json
with open(path, "w") as f:
json.dump(df.to_dict(orient="records"), f)
```
### Additional Context
also note that the `ujson` project [explicitly states](https://github.com/ultrajson/ultrajson?tab=readme-ov-file#project-status)
> this library has been put into a **_maintenance-only mode_**... Users are encouraged to migrate to [orjson](https://pypi.org/project/orjson/) which is both much faster and less likely to introduce a surprise buffer overflow vulnerability in the future.
so it might be worth migrating to `orjson` during this development effort | [
"Enhancement",
"IO JSON",
"Needs Triage"
] | 2 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] |
3,062,286,222 | 61,441 | BUG: Raise ValueError on integer indexers containing NA; skip test for unsupported EAs | closed | 2025-05-14T08:30:32 | 2025-05-20T18:02:28 | 2025-05-20T18:02:28 | https://github.com/pandas-dev/pandas/pull/61441 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61441 | https://github.com/pandas-dev/pandas/pull/61441 | pelagiavlas | 0 | - [ ] closes #56727
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
This PR addresses missing validation in `_setitem_with_indexer`, where indexing with integer indexers containing `pd.NA` (e.g., `Int64` arrays with missing values) would silently fail or misbehave.
### What this PR changes:
- Adds a check to `_setitem_with_indexer` to raise a `ValueError` when NA is present in the indexer.
- Updates `test_setitem_integer_with_missing_raises` to skip test cases for known `ExtensionArray` types (`PeriodArray`, `DatetimeArray`, `IntervalArray`) that do not support indexing with NA in integer indexers.
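For reference, `__getitem__` already raises for this case; a sketch of the established getitem behavior that this PR mirrors on the setitem path:

```python
import pandas as pd

arr = pd.array([1, 2, 3], dtype="Int64")
indexer = pd.array([0, pd.NA], dtype="Int64")

raised = False
try:
    arr[indexer]  # getitem already rejects NA-containing integer indexers
except ValueError:
    raised = True
print(raised)  # True
```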
| [] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] |
3,062,281,472 | 61,440 | ENH: Broaden `dict` to `Mapping` as replace argument | closed | 2025-05-14T08:29:11 | 2025-05-27T15:26:34 | 2025-05-27T15:26:33 | https://github.com/pandas-dev/pandas/issues/61440 | true | null | null | DavideCanton | 9 | ### Feature Type
- [ ] Adding new functionality to pandas
- [x] Changing existing functionality in pandas
- [ ] Removing existing functionality in pandas
### Problem Description
Currently the `replace` method of `Series` allows only `dict`, but not `Mapping` inputs, as the `DataFrame` one does.
For example:
```py
from collections.abc import Mapping
import pandas as pd
df = pd.DataFrame({"A": [1, 2, 3], "B": [4, 5, 6]})
d: Mapping[int, str] = {1: "a", 2: "b", 3: "c"}
d2: Mapping[str, Mapping[int, str]] = {"A": d}
print(df.replace(d2)) # typechecks
print(df["A"].replace(d)) # works but doesn't typecheck
```
### Feature Description
I guess it's enough to change from `dict` to `Mapping` in the type signature, since it seems to work even if the argument is not a dict (for example if it's a `MappingProxyType` instance).
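At runtime this already works, as a quick sketch with `MappingProxyType` (a `Mapping` that is not a `dict`) shows:

```python
from types import MappingProxyType

import pandas as pd

s = pd.Series([1, 2, 3])
# a read-only Mapping that is not a dict instance
mapping = MappingProxyType({1: "a", 2: "b", 3: "c"})
print(s.replace(mapping).tolist())  # ['a', 'b', 'c']
```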
### Alternative Solutions
I guess an alternative solution is just to type ignore the replace invocation.
### Additional Context
_No response_ | [
"Enhancement",
"Needs Triage"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Hey @DavideCanton,\n\nHave you installed the `pandas-stubs` package? It is an extension of the pandas repo for types.\nI have tried locally in an environment where pandas-stubs is installed and no warning are raised (pyright/mypy).\nCan you provide how you typecheck this code?\n\nSee below, the type is `Mapping` as you are suggesting:\nhttps://github.com/pandas-dev/pandas-stubs/blob/93a9e11f4e4656a935853d3814ac9073adf4c9cc/pandas-stubs/core/frame.pyi#L863-L880",
"This is what I do\n\n```\n$ uv pip install pandas pandas-stubs mypy\nResolved 11 packages in 17ms\nInstalled 11 packages in 4.57s\n + mypy==1.15.0\n + mypy-extensions==1.1.0\n + numpy==2.2.6\n + pandas==2.2.3\n + pandas-stubs==2.2.3.250308\n + python-dateutil==2.9.0.post0\n + pytz==2025.2\n + six==1.17.0\n + types-pytz==2025.2.0.20250516\n + typing-extensions==4.13.2\n + tzdata==2025.2\n$ mypy .\nfoo.py:10: error: Argument 1 to \"replace\" of \"DataFrame\" has incompatible type \"Mapping[str, Mapping[int, str]]\"; expected \"str | bytes | date | datetime | timedelta | <14 more items> | None\" [arg-type]\nFound 2 errors in 2 files (checked 2 source files)\n```",
"I see why you are seeing the issue and I don't, can you try running with `pandas-stubs` on main:\n`uv pip install \"git+https://github.com/pandas-dev/pandas-stubs.git\"`\nWith your setup I saw the error, with the stubs repo on main I don't meaning that it was fixed in: https://github.com/pandas-dev/pandas-stubs/pull/1164\nPlease let me know if when pulling from main you still see the issue, we should be able to ask for a new release of the stubs.",
"Seems fixed if using the main version, so probably as you said it's just a stubs problem.\n\nThanks!",
"@Dr-Irv do you have a timeline for the next release of the stubs? The question here was related to something that was fixed since last release. Thanks!",
"Thanks, probably this can be closed since it's not an actual pandas issue?",
"You will be the only one able to close it and we can see when is the next pandas-stubs release.",
"> [@Dr-Irv](https://github.com/Dr-Irv) do you have a timeline for the next release of the stubs? The question here was related to something that was fixed since last release. Thanks!\n\nThanks for the reminder. I just did release 2.2.3.250527",
"closing since I just released `pandas-stubs` 2.2.3.250527"
] |
3,062,080,513 | 61,439 | BUG: Confusing Behavior When Assigning DataFrame Columns Using `omegaconf.ListConfig` | open | 2025-05-14T07:15:34 | 2025-05-17T21:30:20 | null | https://github.com/pandas-dev/pandas/issues/61439 | true | null | null | Trezorro | 2 | ### Pandas version checks
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [ ] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
import pandas as pd
from omegaconf import OmegaConf
df = pd.DataFrame({"a": [1, 2], "b": [3, 4]})
cfg = OmegaConf.create({"cols": ["a", "b"]})
cols = cfg.cols # This is a ListConfig
df[cols] = df[cols] * 2 # Raises ValueError
```
**Error message:**
`ValueError: Cannot set a DataFrame with multiple columns to the single column ['a', 'b']`
### Issue Description
When using an `omegaconf.ListConfig` object to select columns in a Pandas DataFrame, the assignment operation fails with a `ValueError`, even though the shapes, columns, and indices of the left-hand side (LHS) and right-hand side (RHS) match perfectly. This behavior is unexpected and confusing, as it is not immediately clear that the issue is caused by the type of the column selector.
## Expected Behavior:
The assignment should succeed, as the shapes, columns, and indices of the LHS and RHS match.
## Likely Context of Encountering This:
This issue is likely to occur in workflows where omegaconf.ListConfig is used to manage configurations, such as specifying column names for normalization or other data processing tasks. For example:
```python
# Compute min and max for normalization
min_vals = data[target_cols].min()
max_vals = data[target_cols].max()
# Attempt to normalize using ListConfig as column selector
data[target_cols] = (data[target_cols] - min_vals) / (max_vals - min_vals) # This raises the same ValueError
```
## Workaround:
Convert the ListConfig object to a standard Python list before using it in Pandas operations:
```python
data[list(target_cols)] = (data[list(target_cols)] - min_vals) / (max_vals - min_vals)
```
## Why This Is Confusing:
- The error message suggests that a single column is being assigned multiple columns, which is misleading.
- Shapes, columns, and even indexes match. There are no notes to be found online about this edge case.
- The actual issue is the type of the column selector (ListConfig), which behaves like a list in many other contexts.
## Proposed Solution:
- Improve the error message to indicate that the column selector type might be incompatible.
- Consider adding support for omegaconf.ListConfig as a valid column selector, since `isinstance(cols, Sequence)` is True.
### Expected Behavior
The assignment should succeed, as the shapes, columns, and indices of the LHS and RHS match.
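A minimal list-like class with no omegaconf dependency reproduces the same error (`MyCols` is a hypothetical name), which may help with triage:

```python
from collections.abc import Sequence

import pandas as pd

class MyCols(Sequence):
    """Minimal list-like column selector that is not a list subclass."""
    def __init__(self, items):
        self._items = list(items)
    def __getitem__(self, i):
        return self._items[i]
    def __len__(self):
        return len(self._items)

df = pd.DataFrame({"a": [1, 2], "b": [3, 4]})
cols = MyCols(["a", "b"])
raised = False
try:
    df[cols] = df[list(cols)] * 2
except ValueError:
    raised = True  # "Cannot set a DataFrame with multiple columns ..."
print(raised)
```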
### Installed Versions
<details>
INSTALLED VERSIONS
------------------
commit : 0691c5cf90477d3503834d983f69350f250a6ff7
python : 3.11.7
python-bits : 64
OS : Darwin
OS-release : 23.6.0
Version : Darwin Kernel Version 23.6.0: Fri Nov 15 15:13:28 PST 2024; root:xnu-10063.141.1.702.7~1/RELEASE_X86_64
machine : x86_64
processor : i386
byteorder : little
LC_ALL : None
LANG : None
LOCALE : None.UTF-8
pandas : 2.2.3
numpy : 1.26.4
pytz : 2025.2
dateutil : 2.9.0.post0
pip : 24.0
Cython : None
sphinx : None
IPython : 9.1.0
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : 4.13.3
blosc : None
bottleneck : None
dataframe-api-compat : None
fastparquet : None
fsspec : 2025.3.2
html5lib : None
hypothesis : None
gcsfs : None
jinja2 : 3.1.6
lxml.etree : 4.9.4
matplotlib : 3.10.1
numba : 0.58.1
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
psycopg2 : None
pymysql : None
pyarrow : 19.0.1
pyreadstat : None
pytest : 8.3.5
python-calamine : None
pyxlsb : None
s3fs : None
scipy : 1.15.2
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : None
xlsxwriter : None
zstandard : None
tzdata : 2025.2
qtpy : None
pyqt5 : None
</details>
| [
"Bug",
"Indexing",
"Needs Discussion"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Thanks for the report. Is it possible to construct a minimal class for `cfg.cols` demonstrating the issue? This isn't necessary, but would be helpful.",
"This issue stems from \n\nhttps://github.com/pandas-dev/pandas/blob/5aa78c019649a291456788bc3a808452a387884b/pandas/core/frame.py#L4172\n\nHere we have an isinstance check. We can't merely use `is_list_like` here because pandas treats tuples differently from a list to support MultiIndexes. I wonder if using `is_list_like` and `not isinstance(key, tuple)` might be a good way forward. However I think we need to be cautious here; I worry about unintended side-effects. Further investigations are welcome!"
] |
3,061,658,902 | 61,438 | BUG: ImportError: cannot import name 'NaN' from 'numpy' in squeeze_pro.py | closed | 2025-05-14T02:50:31 | 2025-05-14T16:19:37 | 2025-05-14T16:19:36 | https://github.com/pandas-dev/pandas/issues/61438 | true | null | null | heidongqilin | 1 | ### Pandas version checks
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [x] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
import pandas_ta as ta
```
### Issue Description
D:\t70\duanxian>python duanxian_TDI_TSI_DIV.py
Traceback (most recent call last):
File "D:\t70\duanxian\duanxian_TDI_TSI_DIV.py", line 11, in <module>
import pandas_ta as ta # 新增:用于 ALMA 等指标
^^^^^^^^^^^^^^^^^^^^^^
File "D:\veighna_studio\Lib\site-packages\pandas_ta\__init__.py", line 116, in <module>
from pandas_ta.core import *
File "D:\veighna_studio\Lib\site-packages\pandas_ta\core.py", line 18, in <module>
from pandas_ta.momentum import *
File "D:\veighna_studio\Lib\site-packages\pandas_ta\momentum\__init__.py", line 34, in <module>
from .squeeze_pro import squeeze_pro
File "D:\veighna_studio\Lib\site-packages\pandas_ta\momentum\squeeze_pro.py", line 2, in <module>
from numpy import NaN as npNaN
ImportError: cannot import name 'NaN' from 'numpy' (D:\veighna_studio\Lib\site-packages\numpy\__init__.py). Did you mean: 'nan'?
### Expected Behavior
import pandas_ta success
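For context (this is not a pandas bug): NumPy 2.0 removed the uppercase `NaN` alias, which is what breaks the `from numpy import NaN` line in pandas_ta. A quick check:

```python
import numpy as np

print(hasattr(np, "nan"))  # True on every NumPy version
print(hasattr(np, "NaN"))  # False on NumPy >= 2.0, True on 1.x
```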
### Installed Versions
Operating System: [Your OS, e.g., Windows 10/11]
Python Version: [Your Python version, e.g., 3.9, 3.10, 3.11 - based on your traceback, it seems like Python 3.13 based on "cp313" in numpy download, please confirm]
pandas_ta Version: 0.3.14b0 (Confirmed via pip show pandas_ta)
numpy Version: 2.2.5 (Confirmed via pip show numpy)
pandas Version: 2.2.3 (Confirmed via pip show pandas) | [
"Bug",
"Needs Triage"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Thanks for the report. `pandas_ta` is not maintained by this repository so I would suggest pinging the owners of that repository. Closing"
] |
3,061,433,244 | 61,437 | Backport PR #61423: CI: Fix test failures in 32-bit environment | closed | 2025-05-13T23:25:43 | 2025-05-13T23:57:41 | 2025-05-13T23:57:38 | https://github.com/pandas-dev/pandas/pull/61437 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61437 | https://github.com/pandas-dev/pandas/pull/61437 | mroeschke | 0 | null | [
"CI",
"Dependencies"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] |
3,059,177,316 | 61,436 | change readme | closed | 2025-05-13T08:18:06 | 2025-05-13T08:20:01 | 2025-05-13T08:20:01 | https://github.com/pandas-dev/pandas/pull/61436 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61436 | https://github.com/pandas-dev/pandas/pull/61436 | uzairahmad03 | 0 | - [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| [] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] |
3,057,668,111 | 61,435 | ENH: Implemented MultiIndex.searchsorted method ( GH14833) | open | 2025-05-12T17:35:52 | 2025-07-20T02:34:53 | null | https://github.com/pandas-dev/pandas/pull/61435 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61435 | https://github.com/pandas-dev/pandas/pull/61435 | GSAUC3 | 15 | - [X] closes #14833
- [X] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [X] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [X] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [x] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| [
"Enhancement",
"MultiIndex",
"Stale"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Thanks @GSAUC3 for the PR. Is there and issue, or has there been any discussion about this elsewhere?",
"Hi @datapythonista, this was the issue https://github.com/pandas-dev/pandas/issues/14833 , against which i made the pull request.",
"Hi, I have run pytest and pre-commit locally; is it possible to run all these tests locally?",
"@GSAUC3 Add a return statement in your function after the `except` block. \r\n- Check the docstring and proper typing hints; see the `IndexOpsMixin` class, where searchsorted has a properly defined parameter structure. ",
"Hi @datapythonista, I am having trouble running all these tests locally before committing my code. So far I have only run pytest locally, and that worked. Could you please guide me on how to set up the testing environment locally before each commit?\r\n\r\nWill running\r\n`pre-commit run --all-files`\r\nsuffice?\r\n",
"pre-commit should run automatically if it's set up to work as intended. You have all the information on how to set up the development environment, run tests... in the development documentation: https://pandas.pydata.org/docs/development/index.html",
"Hi, @datapythonista, this part of the error messages tells us that the searchsorted method should fail, but it is passing, am i correct?\r\n```\r\n=================================== FAILURES ===================================\r\n__________________________ test_searchsorted[tuples] ___________________________\r\n[gw0] darwin -- Python 3.10.17 /Users/runner/micromamba/envs/test/bin/python3.10\r\n[XPASS(strict)] np.searchsorted doesn't work on pd.MultiIndex: GH 14833\r\n___________________ test_searchsorted[mi-with-dt64tz-level] ____________________\r\n[gw0] darwin -- Python 3.10.17 /Users/runner/micromamba/envs/test/bin/python3.10\r\n[XPASS(strict)] np.searchsorted doesn't work on pd.MultiIndex: GH 14833\r\n___________________________ test_searchsorted[multi] ___________________________\r\n```",
"> Hi, @datapythonista, this part of the error messages tells, us that searchsorted method should fail, but it is passing, am i correct?\r\n\r\nYes, that's correct. I guess we have an xfail for the test that should be removed.",
"Hi @datapythonista . Thank you for your suggestions, I've addressed the feedback from earlier and the CI checks are now passing. This PR should be ready for review whenever you get a chance. Please let me know if any changes are required. Thanks again!\r\n",
"Hi @datapythonista,\r\nI hope you're doing well. Apologies if this is a basic question—I'm still relatively new to open source, and I noticed that the pull request now shows an “outdated” tag on some of the files I contributed to.\r\nI'm not entirely sure what that means. Should I be concerned about it? Should i update the branch?\r\nThanks in advance for your guidance!",
"Hi @mroeschke, thank you for the review and helpful feedback.\r\n\r\nI understand that ExtensionArray currently only supports 1D data, and making it work with 2D inputs would likely take some deeper changes.\r\n\r\nIf the long-term goal is to update algorithms.searchsorted to support 2D inputs and dispatch to the array — so that ExtensionArray can benefit automatically — I’d be happy to help with that.\r\n\r\nPlease let me know how you’d like to move forward. I’d be glad to contribute to any changes or help explore what’s needed.",
"Hi @mroeschke, I've made the required changes.\r\nI had a question; would it be appropriate to implement this using binary search?\r\nI already have a working implementation ready, and I'm happy to push it if that's the recommended approach.\r\nLet me know what you think!",
"Hi @datapythonista and @mroeschke 👋,\r\n\r\nI hope you're both doing well! Just a gentle reminder regarding [PR #61435](https://github.com/pandas-dev/pandas/pull/61435). I've addressed the requested changes and would appreciate it if you could take a look when you have a moment. Please let me know if any further modifications are needed.\r\n\r\nThank you very much for your time and guidance!",
"This pull request is stale because it has been open for thirty days with no activity. Please [update](https://pandas.pydata.org/pandas-docs/stable/development/contributing.html#updating-your-pull-request) and respond to this comment if you're still interested in working on this.",
"@datapythonista Hi, hope you are doing well, would you mind reviewing this pull request please?"
] |
3,057,649,370 | 61,434 | BUG: Joining Pandas with Polars dataframe produces fuzzy errormessage | open | 2025-05-12T17:28:02 | 2025-05-18T23:07:58 | null | https://github.com/pandas-dev/pandas/issues/61434 | true | null | null | Juan-132 | 7 | ### Pandas version checks
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [x] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
See below.
```
### Issue Description
### Reproducible example
```python
import pandas as pd
data = {
"Column2": [10, 20, 30],
"Column3": ["A", "B", "C"],
"Column4": ["Lala", "YesYes", "NoNo"],
}
df1 = pd.DataFrame(data)
```
```python
import polars as pl
data = {
"Column1": ["Text1", "Text2", "Text3"],
"Column2": [10, 20, 30],
"Column3": ["A", "B", "C"]
}
df2 = pl.DataFrame(data)
```
```python
result = df1.join(df2, on=["Column2", "Column3"], how="inner")
```
### Log output
```shell
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
~\AppData\Local\Temp\ipykernel_11612\367032622.py in ?()
----> 1 result = df1.join(df2, on=["Column2", "Column3"], how="inner")
c:\Users\name\AppData\Local\Programs\Python\Python312\Lib\site-packages\pandas\core\frame.py in ?(self, other, on, how, lsuffix, rsuffix, sort, validate)
10766 validate=validate,
10767 )
10768 else:
10769 if on is not None:
> 10770 raise ValueError(
10771 "Joining multiple DataFrames only supported for joining on index"
10772 )
10773
ValueError: Joining multiple DataFrames only supported for joining on index
```
### Expected Behavior
**Expected Result**
Error message is not correct.
It should say that joining pandas dataframe with polars dataframe is not supported.
This is how Polars formulates the error when joining the other way around:
`TypeError: expected `other` join table to be a DataFrame, not 'pandas.core.frame.DataFrame'`
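One possible shape for the fix, sketched as a hypothetical wrapper (`join_with_check` is not a real pandas API, just an illustration of the proposed runtime type check):

```python
import pandas as pd

def join_with_check(df, other, **kwargs):
    """Hypothetical sketch of an explicit runtime type check for join()."""
    if not isinstance(other, (pd.DataFrame, pd.Series, list)):
        raise TypeError(
            "other must be a pandas DataFrame, Series, or a list of them, "
            f"not {type(other).__name__}"
        )
    return df.join(other, **kwargs)

left = pd.DataFrame({"v": [1, 2]}, index=["x", "y"])
right = pd.DataFrame({"w": [10, 20]}, index=["x", "y"])
print(join_with_check(left, right))  # joins on index as usual

bad_input_rejected = False
try:
    join_with_check(left, {"not": "a DataFrame"})
except TypeError as exc:
    bad_input_rejected = True
    print(exc)
```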
### Installed Versions
<details>
INSTALLED VERSIONS
------------------
commit : 0691c5cf90477d3503834d983f69350f250a6ff7
python : 3.12.9
python-bits : 64
OS : Windows
OS-release : 10
Version : 10.0.19045
machine : AMD64
processor : Intel64 Family 6 Model 140 Stepping 1, GenuineIntel
byteorder : little
LC_ALL : None
LANG : None
LOCALE : Dutch_Netherlands.1252
pandas : 2.2.3
numpy : 2.2.5
pytz : 2025.2
dateutil : 2.9.0.post0
pip : 25.1.1
Cython : None
sphinx : None
IPython : 9.2.0
adbc-driver-postgresql: None
...
zstandard : None
tzdata : 2025.2
qtpy : None
pyqt5 : None
</details>
| [
"Bug",
"Error Reporting"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"I have confirmed the bug on pandas version 2.2.3.\nThe error message when attempting to join a pandas DataFrame with a Polars DataFrame is misleading. I intend to work on a fix to provide a more appropriate error message that clearly indicates the incompatibility between pandas and Polars for such join operations.\nI will submit a pull request with the proposed changes shortly.",
"I'm somewhat negative here. The API docs for `DataFrame.join` say `other` can be\n\n> DataFrame, Series, or a list containing any combination of them\n\nand I think it is reasonable to expect readers to know we mean \"pandas DataFrame\" whenever our docs say \"DataFrame\".\n\nSimilar situations have been discussed, and I believe the conclusion was that when we think it's likely a user could make an error that we can support improving the error message. In my opinion, this crosses the line and should not be supported. To support something like this across the pandas API would be a lot of code, a lot of runtime checks, all to support what I think is an unreasonable case.\n\ncc @pandas-dev/pandas-core ",
"I think doing an instance check on the type we expect, with an appropriate error message, is worthwhile. I think we can fix these as they come up. This isn't about passing a polars DataFrame versus pandas DataFrame. It's about that we aren't checking the type of the argument at runtime. For example, here is something that fails where an attempt is made to join a DataFrame with a list of ints, but the error message isn't saying \"you didn't pass a DataFrame, Series, or list of such\":\n```python\n>>> df = pd.DataFrame({\"x\":[1,2,3], \"y\":[\"a\", \"b\", \"c\"]})\n>>> df.join([1,2])\nTraceback (most recent call last):\n File \"<stdin>\", line 1, in <module>\n File \"C:\\Condadirs\\envs\\pandasstubs\\lib\\site-packages\\pandas\\core\\frame.py\", line 10785, in join\n can_concat = all(df.index.is_unique for df in frames)\n File \"C:\\Condadirs\\envs\\pandasstubs\\lib\\site-packages\\pandas\\core\\frame.py\", line 10785, in <genexpr>\n can_concat = all(df.index.is_unique for df in frames)\nAttributeError: 'int' object has no attribute 'index'\n```\n",
"Thanks @Dr-Irv. I think the benefits to the user are clear. But I do not see those benefits as being anywhere near the cost. We will be spending time on triaging issues, reviewing PRs, running tests, and maintaining more code. These checks also come with a runtime penalty. It's likely not all that significant, but it's also not zero. And all of this for making sure the user is using our API the way it's documented, which I think one can argue is the user's responsibility.",
"> But I do not see those benefits as being anywhere near the cost. We will be spending time on triaging issues, reviewing PRs, running tests, and maintaining more code.\n\nWe're inconsistent in pandas as to whether we do these runtime checks. I think checking if the passed parameters are the proper types is reasonable. I think we should handle these via a whack-a-mole approach - fix them as they are reported. So we fix `join()` here and not worry about other places. For something like `join()`, the added check costs nothing in comparison to the overall join operation.\n",
"I do not think doing runtime checks are unreasonable, I think they are not worth the cost. But I do not wish to argue this further, I suspect it won't get much in the way of attention.\n\nI've removed the Discussion Needed label. Contributions here are welcome.",
"Opened a PR based on the conversation to address this specific case. It adds a clear `TypeError` when a non-pandas object is passed to `DataFrame.join()` with `on=`. Happy to make any further adjustments if needed."
] |
3,057,477,464 | 61,433 | BUG: Some `ExtensionArray`s can return 0-d Elements | open | 2025-05-12T16:12:18 | 2025-05-18T22:01:27 | null | https://github.com/pandas-dev/pandas/issues/61433 | true | null | null | ilan-gold | 4 | ### Pandas version checks
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [ ] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
import pandas as pd
for arr in [pd.arrays.PeriodArray(pd.PeriodIndex(['2023-01-01','2023-01-02'], freq='D')), pd.Categorical(["a", "b"])]:
subset = arr[(0, Ellipsis)]
assert isinstance(subset, type(arr))
assert subset.shape == ()
```
### Issue Description
Given what is stated on https://pandas.pydata.org/docs/reference/api/pandas.api.extensions.ExtensionArray.html, I would expect this not to be possible at all.
### Expected Behavior
The reason I care is that arrow arrays do not have a 0d version, which makes it tough to develop over all `ExtensionArray` classes:
```
pd.array([1, 2], dtype="int64[pyarrow]")[(0, Ellipsis)]
```
gives simply the number 1.
### Installed Versions
<details>
INSTALLED VERSIONS
------------------
commit : 0691c5cf90477d3503834d983f69350f250a6ff7
python : 3.11.11
python-bits : 64
OS : Darwin
OS-release : 24.1.0
Version : Darwin Kernel Version 24.1.0: Thu Oct 10 21:03:15 PDT 2024; root:xnu-11215.41.3~2/RELEASE_ARM64_T6000
machine : arm64
processor : arm
byteorder : little
LC_ALL : None
LANG : en_US.UTF-8
LOCALE : en_US.UTF-8
pandas : 2.2.3
numpy : 2.2.5
pytz : 2024.1
dateutil : 2.9.0.post0
pip : 25.0.1
Cython : None
sphinx : None
IPython : 8.32.0
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : 4.12.3
blosc : None
bottleneck : 1.4.2
dataframe-api-compat : None
fastparquet : None
fsspec : 2025.3.2
html5lib : None
hypothesis : 6.131.6
gcsfs : None
jinja2 : 3.1.5
lxml.etree : 5.3.2
matplotlib : 3.10.1
numba : 0.61.2
numexpr : 2.10.2
odfpy : None
openpyxl : None
pandas_gbq : None
psycopg2 : None
pymysql : None
pyarrow : 19.0.1
pyreadstat : None
pytest : 8.3.5
python-calamine : None
pyxlsb : None
s3fs : None
scipy : 1.15.2
sqlalchemy : None
tables : None
tabulate : None
xarray : 2025.4.1.dev3+gd998eac1.d20250509
xlrd : None
xlsxwriter : None
zstandard : None
tzdata : 2025.1
qtpy : None
pyqt5 : None
</details>
| [
"Bug",
"Indexing",
"Needs Discussion",
"ExtensionArray"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Thanks for the report!\n\n> The reason I care is that arrow arrays do not have a 0d version, which makes it tough to develop over all ExtensionArray classes\n\nCan you give a little more detail here as to why it makes it tough? In particular, does \n\n pd.array([1, 2], dtype=\"int64[pyarrow]\")[0]\n\nalso cause difficulties?",
"Ah - I see it now. It's that with just `[0]`, you get the type of the scalar and not the ExtensionArray type.",
"Thanks for the report. NumPy appears to have decided the user is asking to get back a 0-dim ndarray rather than a scalar.\n\n```python\nprint(type(np.array([1, 2, 3])[1, ...]))\n# <class 'numpy.ndarray'>\nprint(type(np.array([1, 2, 3])[1]))\n# <class 'numpy.int64'>\n```\n\npandas would be consistent to agree that the user is asking for a 0-dim ExtensionArray here, and hence to raise as these are not supported.\n\n@jbrockmendel - do you have any thoughts here?",
"raising would make sense, but im a bit concerned about the performance hit of adding that check (that `__getitem__` method was optimized pretty hard IIRC). are there user-facing methods of reaching this?"
] |
3,057,409,687 | 61,432 | DOC: Series.name is just Hashable, but many column arguments require str | closed | 2025-05-12T15:45:47 | 2025-05-20T21:40:33 | 2025-05-20T21:40:33 | https://github.com/pandas-dev/pandas/issues/61432 | true | null | null | cmp0xff | 1 | ### Pandas version checks
- [x] I have checked that the issue still exists on the latest versions of the docs on `main` [here](https://pandas.pydata.org/docs/dev/)
### Location of the documentation
* https://pandas.pydata.org/docs/dev/reference/api/pandas.Series.name.html
* https://pandas.pydata.org/docs/dev/reference/api/pandas.DataFrame.pivot.html
### Documentation problem
In the documentation, `Series.name` is [just required to be](https://pandas.pydata.org/docs/dev/reference/api/pandas.Series.name.html) a `Hashable`. When `pandas` functions ask for a column label, however, they often ask for a `str`, e.g. in [DataFrame.pivot](https://pandas.pydata.org/docs/dev/reference/api/pandas.DataFrame.pivot.html), where it says
> **columns**: *str or object or a list of str*
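A minimal illustration of the mismatch (a sketch of current behavior; the integer labels here are made up for demonstration, not taken from the linked docs):

```python
import pandas as pd

# Integer column labels are Hashable but not str, yet pivot accepts them,
# so the "str" wording in the docs is narrower than the actual behavior.
df = pd.DataFrame({0: ["x", "x", "y"], 1: [1, 2, 3], 2: [4.0, 5.0, 6.0]})
out = df.pivot(index=0, columns=1, values=2)
print(out.shape)  # (2, 3)
```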
### Suggested fix for documentation
Use `Hashable` everywhere for column labels passed as function arguments | [
"Docs",
"Reshaping",
"good first issue"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Thanks for the report. Hashable is correct, but the docs often use `label` instead, e.g.\n\nhttps://pandas.pydata.org/docs/reference/api/pandas.DataFrame.set_index.html\n\nI think we should use `label` for consistency."
] |
3,057,389,539 | 61,431 | BUG: documented usage of of `str.split(...).str.get` fails on dtype `large_string[pyarrow]` | open | 2025-05-12T15:38:24 | 2025-06-03T11:41:18 | null | https://github.com/pandas-dev/pandas/issues/61431 | true | null | null | SandroCasagrande | 9 | ### Pandas version checks
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [x] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
import pandas as pd
pd.Series(["abc"], dtype="large_string[pyarrow]").str.split("b").str
```

Traceback:

```
Traceback (most recent call last):
File "<python-input-7>", line 1, in <module>
a = pd.Series(["abc"], dtype="large_string[pyarrow]").str.split("b").str[0]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/Caskroom/miniconda/base/envs/pandas-main-string-test/lib/python3.13/site-packages/pandas/core/generic.py", line 6127, in __getattr__
return object.__getattribute__(self, name)
~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^
File "/opt/homebrew/Caskroom/miniconda/base/envs/pandas-main-string-test/lib/python3.13/site-packages/pandas/core/accessor.py", line 228, in __get__
return self._accessor(obj)
~~~~~~~~~~~~~~^^^^^
File "/opt/homebrew/Caskroom/miniconda/base/envs/pandas-main-string-test/lib/python3.13/site-packages/pandas/core/strings/accessor.py", line 208, in __init__
self._inferred_dtype = self._validate(data)
~~~~~~~~~~~~~~^^^^^^
File "/opt/homebrew/Caskroom/miniconda/base/envs/pandas-main-string-test/lib/python3.13/site-packages/pandas/core/strings/accessor.py", line 262, in _validate
raise AttributeError(
f"Can only use .str accessor with string values, not {inferred_dtype}"
)
AttributeError: Can only use .str accessor with string values, not unknown-array. Did you mean: 'std'?
```
### Issue Description
The return dtype of `split` is very different when acting on `large_string` (results in pyarrow list) and `string` (results in object).
Interestingly, using the `list` accessor works **only** on `large_string` dtype
```python
>>> pd.Series(["abc"], dtype="large_string[pyarrow]").str.split("b").list[0]
0 a
dtype: large_string[pyarrow]
```
but **not** on `string` dtype
```
>>> pd.Series(["abc"], dtype="string[pyarrow]").str.split("b").list[0]
Traceback (most recent call last):
File "<python-input-15>", line 1, in <module>
pd.Series(["abc"], dtype="string[pyarrow]").str.split("b").list[0]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/Caskroom/miniconda/base/envs/pandas-main-string-test/lib/python3.13/site-packages/pandas/core/generic.py", line 6127, in __getattr__
return object.__getattribute__(self, name)
~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^
File "/opt/homebrew/Caskroom/miniconda/base/envs/pandas-main-string-test/lib/python3.13/site-packages/pandas/core/accessor.py", line 228, in __get__
return self._accessor(obj)
~~~~~~~~~~~~~~^^^^^
File "/opt/homebrew/Caskroom/miniconda/base/envs/pandas-main-string-test/lib/python3.13/site-packages/pandas/core/arrays/arrow/accessors.py", line 73, in __init__
super().__init__(
~~~~~~~~~~~~~~~~^
data,
^^^^^
validation_msg="Can only use the '.list' accessor with "
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
"'list[pyarrow]' dtype, not {dtype}.",
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "/opt/homebrew/Caskroom/miniconda/base/envs/pandas-main-string-test/lib/python3.13/site-packages/pandas/core/arrays/arrow/accessors.py", line 41, in __init__
self._validate(data)
~~~~~~~~~~~~~~^^^^^^
File "/opt/homebrew/Caskroom/miniconda/base/envs/pandas-main-string-test/lib/python3.13/site-packages/pandas/core/arrays/arrow/accessors.py", line 51, in _validate
raise AttributeError(self._validation_msg.format(dtype=dtype))
AttributeError: Can only use the '.list' accessor with 'list[pyarrow]' dtype, not object.. Did you mean: 'hist'?
```
From a user's perspective this is unfortunate, as I have to know the underlying dtype in order to choose the correct accessor (or cast).
### Expected Behavior
Should work similar to
```python
>>> pd.Series(["abc"], dtype="string[pyarrow]").str.split("b").str[0]
0 a
dtype: object
```
since it is documented behavior https://github.com/pandas-dev/pandas/blob/f496acffccfc08f30f8392894a8e0c56d404ef87/doc/source/user_guide/text.rst?plain=1#L229 (dtype is debatable).
### Installed Versions
<details>
INSTALLED VERSIONS
------------------
commit : f496acffccfc08f30f8392894a8e0c56d404ef87
python : 3.13.2
python-bits : 64
OS : Darwin
OS-release : 24.4.0
Version : Darwin Kernel Version 24.4.0: Fri Apr 11 18:33:47 PDT 2025; root:xnu-11417.101.15~117/RELEASE_ARM64_T6000
machine : arm64
processor : arm
byteorder : little
LC_ALL : None
LANG : en_US.UTF-8
LOCALE : en_US.UTF-8
pandas : 3.0.0.dev0+2100.gf496acffcc
numpy : 2.2.5
dateutil : 2.9.0.post0
pip : 25.1
Cython : 3.0.11
sphinx : None
IPython : None
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : None
bottleneck : None
fastparquet : None
fsspec : None
html5lib : None
hypothesis : None
gcsfs : None
jinja2 : None
lxml.etree : None
matplotlib : None
numba : None
numexpr : None
odfpy : None
openpyxl : None
psycopg2 : None
pymysql : None
pyarrow : 20.0.0
pyreadstat : None
pytest : None
python-calamine : None
pytz : None
pyxlsb : None
s3fs : None
scipy : None
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : None
xlsxwriter : None
zstandard : None
tzdata : 2025.2
qtpy : None
pyqt5 : None
</details>
| [
"Bug",
"Strings",
"Needs Discussion"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Thanks for the report! Agreed on the inconsistency here. \n\n```python\nprint(pd.Series([\"abc\"], dtype=\"large_string[pyarrow]\").str.split(\"b\"))\n# 0 ['a' 'c']\n# dtype: list<item: large_string>[pyarrow]\nprint(pd.Series([\"abc\"], dtype=\"string[pyarrow]\").str.split(\"b\"))\n# 0 [a, c]\n# dtype: object\n```\n\nThe behavior on `string[pyarrow]` was introduced in http://github.com/pandas-dev/pandas/pull/40708. cc @simonjayhawkins @jorisvandenbossche \n\nWhile the current behavior of returning ArrowExtensionArray list dtype on `large_string[pyarrow]` seems preferable to object dtype in isolation, one benefit of returning object dtype on `string[pyarrow]` is that it does smooth the transition from object strings to PyArrow strings. But if we were to decide one day we do in fact want ArrowExtensionArray, this is a hard behavior to deprecate.\n\ncc @WillAyd @mroeschke for any thoughts as well.",
"This is a general issue that I was hoping the logical type system proposal would clarify, as it gets pretty tough to cherry pick different code paths for different data types.\n\nI think the best solution would return a list data type as a result of this operation. It is more inline with the intent of the user code, and more performant",
"> I think the best solution would return a list data type as a result of this operation.\n\n@WillAyd - which operation?",
"str.split",
"On both `string[pyarrow]` and `large_string[pyarrow]`? Certainly not `object` dtype I assume, nor Python-backed strings.",
"👍",
"Thanks @SandroCasagrande for the report. I completely understand the confusion around pandas dtypes and why one could expect the behavior to be different or even lead one to expect consistency here.\n\nLet's start by introducing a quirk of pandas and then expanding on that.\n\nThere's a dtype in pandas core called `ArrowDtype`. This is an **experimental** `ExtensionDtype` for **ALL** PyArrow data types. But one can easily create a Series backed by, say, an Arrow string array.\n\n```python\npd.Series([\"abc\"], dtype=pd.ArrowDtype(pa.string()))\n# 0 abc\n# dtype: string[pyarrow]\n``` \n\nwe see this gives `dtype: string[pyarrow]`. This is the dtype string alias which is also accepted as input to the `dtype` parameter of the `Series` constructor. So let's do that instead.\n\n```python\npd.Series([\"abc\"], dtype=\"string[pyarrow]\")\n# 0 abc\n# dtype: string\n```\n\nOh. The string alias of the dtype is now just string! So let's do that instead.\n\n```python\npd.Series([\"abc\"], dtype=\"string\")\n# 0 abc\n# dtype: string\n```\n\nThe quirk is that all these Series are different! The last one is not even backed by PyArrow!\n\nSo what's going on?\n\n```python\npd.Series([\"abc\"], dtype=pd.ArrowDtype(pa.string())).dtype # string[pyarrow]\n\ntype(_) # pandas.core.dtypes.dtypes.ArrowDtype\n\npd.Series([\"abc\"], dtype=\"string[pyarrow]\").dtype # string[pyarrow]\n\ntype(_) # pandas.core.arrays.string_.StringDtype\n\npd.Series([\"abc\"], dtype=\"string\").dtype # string[python]\n\ntype(_) # pandas.core.arrays.string_.StringDtype\n```\n\nBasically there's overlap in the dtype string aliases for the `ArrowDtype` and the `StringDtype`\n\n`ArrowDtype` is an experimental dtype and being an extension array follows the EA API but there is no restriction on the return type of this EA and hence follow the documented usage of the pandas dtypes. 
(being an EA it could have been shipped separately and personally I don't know why this experimental EA was included in pandas core in the first place)\n\nso the basic problem here is that `dtype=\"large_string[pyarrow]\")` and `dtype=\"string[pyarrow]\")` are significantly different dtypes and associated with different extension array types, one that is experimental and always returns Arrow dtypes and the other that conforms to the documented pandas api.\n\nHopefully this background will help the discussion in determining if this is indeed a bug and whether there should be consistency here.\n",
"> This is a general issue that I was hoping the logical type system proposal would clarify, as it gets pretty tough to cherry pick different code paths for different data types.\n\nIndeed ..\n\n> I think the best solution would return a list data type as a result of this operation. It is more inline with the intent of the user code, and more performant\n\nWe should, eventually, indeed return a list data type, once we have a dedicated list data type. But again my position is that we should only do this for the default dtypes once we have a default list dtype. And so until we have a better logical dtype system, I think the default behaviour for the default string dtype for `str.split()` being `object` dtype is \"correct\".\n\n(if the default string dtype, which uses NaN as missing value indicator, would return a ArrowDtype(list) type, that would introduce NA-variants of dtypes in existing workflows of people that did not opt in into using pyarrow-NA-dtypes)\n",
"> We should, eventually, indeed return a list data type, once we have a dedicated list data type. But again my position is that we should only do this for the default dtypes once we have a default list dtype.\n\nAnd to be clear, we have approval to implement this in PDEP-10. So no blockers here.\n\n> (if the default string dtype, which uses NaN as missing value indicator, would return a ArrowDtype(list) type, that would introduce NA-variants of dtypes in existing workflows of people that did not opt in into using pyarrow-NA-dtypes)\n\nJust like PDEP-14 introduced a numpy semantics nan-variant, we also require a numpy semantics variant of the nested dtype. (this perhaps requires a PDEP to mirror PDEP-14 but specific for the nested dtypes)"
] |
3,055,081,134 | 61,430 | BLD: Decrease size of docker image | closed | 2025-05-11T16:25:28 | 2025-05-12T19:10:45 | 2025-05-12T19:10:38 | https://github.com/pandas-dev/pandas/pull/61430 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61430 | https://github.com/pandas-dev/pandas/pull/61430 | huisman | 1 | This PR reduces the size of the docker image by:
- combining RUN commands to minimise the number of layers
- removing the apt list files to reduce total size
- using `--no-cache-dir` when installing with pip
In my tests this reduced the size of the final image by approximately 0.47 GB (most of it due to `--no-cache-dir`).
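As an illustration of the pattern (not the actual Dockerfile contents of this PR; the package and file names are placeholders):

```dockerfile
# Single RUN layer: update, install, and clean apt lists together so the
# intermediate package index never persists in an image layer.
RUN apt-get update \
    && apt-get install -y --no-install-recommends build-essential \
    && rm -rf /var/lib/apt/lists/*

# Skip pip's download cache inside the image.
RUN pip install --no-cache-dir -r requirements-dev.txt
```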
- [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| [] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Thanks @huisman "
] |
3,055,017,753 | 61,429 | DOC: Updates to documentation - io.rst | closed | 2025-05-11T14:17:41 | 2025-05-12T16:55:22 | 2025-05-12T16:55:16 | https://github.com/pandas-dev/pandas/pull/61429 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61429 | https://github.com/pandas-dev/pandas/pull/61429 | ConnorWallace15 | 1 | updating hdf5 data description link due to 404 error
- [x] closes #61428
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| [
"Docs"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Thanks @ConnorWallace15 "
] |
3,054,604,088 | 61,428 | DOC: Broken Link in IO Tools - HDF5 Data Description | closed | 2025-05-11T00:23:23 | 2025-05-12T16:55:17 | 2025-05-12T16:55:17 | https://github.com/pandas-dev/pandas/issues/61428 | true | null | null | ConnorWallace15 | 2 | ### Pandas version checks
- [x] I have checked that the issue still exists on the latest versions of the docs on `main` [here](https://pandas.pydata.org/docs/dev/)
### Location of the documentation
https://pandas.pydata.org/docs/user_guide/io.html
### Documentation problem
The link for HDF5 data description is broken and leads to a 404 error.
Current [HDF5 link](https://support.hdfgroup.org/HDF5/whatishdf5.html#gsc.tab=0)
### Suggested fix for documentation
I believe a good replacement link would be to this [Introduction to HDF5](https://support.hdfgroup.org/documentation/hdf5/latest/_intro_h_d_f5.html).
I would like to update the documentation with this link and create a pull request. | [
"Docs",
"IO HDF5"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"take",
"Thanks for raising this! PRs are welcome."
] |
3,054,507,099 | 61,427 | ENH: access arrow-backed map as a python dictionary | open | 2025-05-10T20:29:32 | 2025-07-15T21:07:24 | null | https://github.com/pandas-dev/pandas/issues/61427 | true | null | null | mikelui | 0 | ### Feature Type
- [x] Adding new functionality to pandas
- [x] Changing existing functionality in pandas
- [ ] Removing existing functionality in pandas
### Problem Description
Users should be able to access a dataframe element–that is, an Arrow-backed map–with normal python dict semantics.
Today, accessing an *Arrow-backed* map element will return a list of tuples per [`as_py()`](https://github.com/pandas-dev/pandas/blob/3832e85779b143d882ce501c24ee51df95799e2c/pandas/core/arrays/arrow/array.py#L639) from [`MapScalar`](https://arrow.apache.org/docs/python/generated/pyarrow.MapScalar.html) type–thus list semantics and not dictionary access semantics. Historically, this is because Arrow allows multiple keys, and ordering is not enforced. So converting to a python dictionary removes those two behaviors. (1) multiple keys *will* be removed and (2) the ordering *may* be changed. In practice, this is not the common case, and so it makes the common case hard.
The common case is that users want to interact with a map with traditional key/value access semantics. It's often a burden and source of confusion when users need to manually convert, a la
```
# pseudocode
df = table.to_pandas(types_mapper=pd.ArrowDtype)
my_dict = df["col_a"].iloc[0]
val = my_dict["key"] # error, no key/value access semantics
val = dict(my_dict)["key"] # users need to manually convert to a dict on each access
```
This behavior should also be available when using imperative iteration-based methods like `.iterrows()`, which is another common pattern for accessing element-by-element.
### Feature Description
We can have a configuration for this in `ArrowExtensionArray`.
Arrow already has a `maps_as_pydicts` flag: [`.to_pandas(maps_as_pydicts=True)`](https://arrow.apache.org/docs/python/generated/pyarrow.RecordBatch.html#pyarrow.RecordBatch.to_pandas) which controls this behavior *only* when *not* using pyarrow backed data frames (when using numpy backed data frames). This feature is already widely used in at least one large company.
The flag will generate a [native python dictionary](https://github.com/apache/arrow/blob/598938711a8376cbfdceaf5c77ab0fd5057e6c02/python/pyarrow/src/arrow/python/arrow_to_pandas.cc#L1026) instead of a python list of `(key, value)` tuples. This flag has also made its way to [lower-level apis](https://github.com/apache/arrow/pull/45471) and come up with [competing dataframe libraries](https://github.com/pola-rs/polars/issues/21745).
There's not an obvious place to put this in the `types_mapper` API. But, we can already see *unexpected* behavior when combining `maps_as_pydicts=True` with the `types_mapper=pd.ArrowDtype`
```
# pseudocode
df = table.to_pandas(types_mapper=pd.ArrowDtype, maps_as_pydicts=True)
# my_dict is still a `MapScalar`!!
my_dict = df["col_a"].iloc[0]
```
When combined, `maps_as_pydicts` is effectively ignored, because the code path taken for `types_mapper=pd.ArrowDtype` makes no use of the flag.
So, this is all to say, when we see both of those flags, we should *propagate the configuration* to Pandas, so that it will use it during element access [1](https://github.com/pandas-dev/pandas/blob/3832e85779b143d882ce501c24ee51df95799e2c/pandas/core/arrays/arrow/array.py#L634), [2](https://github.com/pandas-dev/pandas/blob/3832e85779b143d882ce501c24ee51df95799e2c/pandas/core/arrays/arrow/array.py#L639)
Such a change requires changes in both Arrow and Pandas.
### Alternative Solutions
Alternatively, we can save some state in the underlying pyarrow array, so that calling [`as_py()`](https://github.com/apache/arrow/blob/598938711a8376cbfdceaf5c77ab0fd5057e6c02/python/pyarrow/scalar.pxi#L1085) on the `MapScalar` will automatically do the right thing.
Some breadcrumbs for context:
* a `MapScalar` is generated when accessing a pyarrow MapArray [1](https://github.com/apache/arrow/blob/598938711a8376cbfdceaf5c77ab0fd5057e6c02/python/pyarrow/array.pxi#L1530C16-L1530C27), [2](https://github.com/apache/arrow/blob/598938711a8376cbfdceaf5c77ab0fd5057e6c02/python/pyarrow/scalar.pxi#L36)
* this is accessed when retrieving an element from an `ArrowExtensionArray` [1](https://github.com/pandas-dev/pandas/blob/3832e85779b143d882ce501c24ee51df95799e2c/pandas/core/arrays/arrow/array.py#L634), [2](https://github.com/pandas-dev/pandas/blob/3832e85779b143d882ce501c24ee51df95799e2c/pandas/core/arrays/arrow/array.py#L639)
So, one can imagine that this information is saved in the `MapArray`/`Table` itself. However, that also introduces action at a distance when converting a table to a dataframe, and then performing element access. It would be more straightforward to configure this during the conversion to Pandas and holding that configuration state in the dataframe.
----
Another partial alternative is making a `.map` [accessor](https://github.com/pandas-dev/pandas/blob/3832e85779b143d882ce501c24ee51df95799e2c/pandas/core/series.py#L5852). I lack context on these accessors and don't know if they are an obvious solution, or a ham-fisted one.
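For reference, a bare-bones sketch of what such an accessor could look like via the public registration API (the accessor name `maps` and the implementation are hypothetical; a real implementation would push the work into Arrow rather than per-element Python):

```python
import pandas as pd

@pd.api.extensions.register_series_accessor("maps")
class MapAccessor:
    """Hypothetical accessor: dict-style key lookup on map-like elements."""

    def __init__(self, obj: pd.Series) -> None:
        self._obj = obj

    def __getitem__(self, key):
        # Each element is a list of (key, value) tuples today; convert each
        # element to a dict and index it.
        return self._obj.map(lambda kv: dict(kv)[key])

s = pd.Series([[("a", 1)], [("a", 2)]])
print(list(s.maps["a"]))  # [1, 2]
```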
### Additional Context
Performance can be a consideration. When doing an element access, we'd be doing a conversion from the native `Arrow` array to a Python dictionary.
However, *this is already the case*. Element access on a `MapScalar` already traverses the underlying `MapArray` and converts it to a python list [1](https://github.com/apache/arrow/blob/598938711a8376cbfdceaf5c77ab0fd5057e6c02/python/pyarrow/scalar.pxi#L1112C30-L1113C1), [2](https://github.com/apache/arrow/blob/598938711a8376cbfdceaf5c77ab0fd5057e6c02/python/pyarrow/scalar.pxi#L1082) | [
"Enhancement",
"Needs Triage",
"Arrow"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] |
3,054,347,369 | 61,426 | BUG: Fix memory leak when slicing Series and assigning to self | closed | 2025-05-10T16:36:04 | 2025-06-02T16:55:00 | 2025-06-02T16:54:59 | https://github.com/pandas-dev/pandas/pull/61426 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61426 | https://github.com/pandas-dev/pandas/pull/61426 | niranjanorkat | 1 | This PR fixes a memory leak that occurs when a Series is sliced and reassigned to itself, e.g., a = a[-1:].
The underlying BlockManager retained references to the original data, preventing garbage collection. This is resolved by ensuring the sliced result copies the backing data.
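The leak pattern being addressed looks roughly like this (a sketch of the report in #60640; the array size is arbitrary):

```python
import numpy as np
import pandas as pd

a = pd.Series(np.zeros(1_000_000))
a = a[-1:]  # before this change, the full 1M-element buffer could stay
            # reachable because the 1-row slice is a view into it
print(len(a))  # 1
```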
Closes #60640. | [] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Thanks for the PR, but this solution would be a non-starter as it introduces extra copies.\r\n\r\nSince there's continuing discussion in the issue going to close this PR"
] |
3,054,212,451 | 61,425 | BUG(string dtype): Arithmetic operations between Series with string dtype index | open | 2025-05-10T14:43:33 | 2025-05-11T23:55:04 | null | https://github.com/pandas-dev/pandas/issues/61425 | true | null | null | rhshadrach | 2 | Similar to #61099, but concerning `lhs + rhs`. Alignment in general is heavily involved here as well. One thing to note is that unlike in comparisons operations, in arithmetic operations the `lhs.index` dtype is favored, assuming no coercion is necessary.
```python
dtypes = [
np.dtype(object),
pd.StringDtype("pyarrow", na_value=np.nan),
pd.StringDtype("python", na_value=np.nan),
pd.StringDtype("pyarrow", na_value=pd.NA),
pd.StringDtype("python", na_value=pd.NA),
pd.ArrowDtype(pa.string())
]
idx1 = pd.Series(["a", np.nan, "b"], dtype=dtypes[1])
idx2 = pd.Series(["a", np.nan, "b"], dtype=dtypes[3])
df1 = pd.DataFrame({"idx": idx1, "value": [1, 2, 3]}).set_index("idx")
df2 = pd.DataFrame({"idx": idx2, "value": [1, 2, 3]}).set_index("idx")
print(df1["value"] + df2["value"])
print(df2["value"] + df1["value"])
```
When concerning string dtypes, I've observed the following:
- NaN vs NA generally aligns, the value propagated is always NA
- NaN vs NA does not align when the NA arises from ArrowExtensionArray
- NaN vs None (object) aligns, the value propagated is from `lhs`
- NA vs None does not align
- PyArrow-NA + ArrowExtensionArray results in object dtype (NAs do align)
- Python-NA + PyArrow-NA results in PyArrow-NA; contrary to the left being preferred
- Python-NA + PyArrow-NA results in object type (NAs do align)
- When `lhs` and `rhs` have indices that are both object dtype:
- NaN vs None aligns and propagates the `lhs` value.
- NA vs None does not align
- NA vs NaN does not align
I think the main two things we need to decide are:
1. How should NA vs NaN vs None align.
2. When they do align, which value should be propagated.
A few properties I think are crucial:
- Alignment should only depend on value and left-vs-right operand, not storage.
- Alignment should be transitive.
If we do decide on aligning between different values, a natural order is `None < NaN < NA`. However, the most backwards compatible would be to have None vs NaN be operand dependent with NA always propagating when present. | [
"Bug",
"Strings",
"Needs Discussion",
"API - Consistency"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"take",
"Hi @rhshadrach \nI’ve dug into this issue on pandas 2.2.2 and here’s what I’ve confirmed:\nimport numpy as np\nimport pandas as pd\nimport pyarrow as pa\n# Check pandas version\nprint(pd.__version__)\n\ndtypes = [\n np.dtype(object),\n pd.StringDtype(\"pyarrow\"), # Remove na_value for older pandas versions\n pd.StringDtype(\"python\"), # Remove na_value for older pandas versions\n pd.StringDtype(\"pyarrow\"), # Remove na_value for older pandas versions\n pd.StringDtype(\"python\"), # Remove na_value for older pandas versions\n pd.ArrowDtype(pa.string())\n]\nidx1 = pd.Series([\"a\", np.nan, \"b\"], dtype=dtypes[1])\nidx2 = pd.Series([\"a\", np.nan, \"b\"], dtype=dtypes[3])\ndf1 = pd.DataFrame({\"idx\": idx1, \"value\": [1, 2, 3]}).set_index(\"idx\")\ndf2 = pd.DataFrame({\"idx\": idx2, \"value\": [1, 2, 3]}).set_index(\"idx\")\nprint(df1[\"value\"] + df2[\"value\"])\nprint(df2[\"value\"] + df1[\"value\"])\n\noutput \n2.2.2\n\nidx\na 2\n NA 4\nb 6\nName: value, dtype: int64\nidx\na 2\nNA 4\nb 6\nName: value, dtype: int64\n\nWhile the arithmetic operations are working in my environment, I noticed that the index dtypes for `df1` and `df2` are slightly different despite using `pd.StringDtype(\"pyarrow\")` for both, which might contribute to the potential inconsistencies when using the 'pyarrow' storage backend.\n\nI'll share any additional findings or reproducible examples I come across. Looking forward to contributing to a resolution for this issue.\n\n\n"
] |
3,054,181,753 | 61,424 | i want to develop one feature in pandas | closed | 2025-05-10T14:14:16 | 2025-05-10T14:31:48 | 2025-05-10T14:31:47 | https://github.com/pandas-dev/pandas/issues/61424 | true | null | null | Sunil5411 | 2 | ### Research
- [x] I have searched the [[pandas] tag](https://stackoverflow.com/questions/tagged/pandas) on StackOverflow for similar questions.
- [x] I have asked my usage related question on [StackOverflow](https://stackoverflow.com).
### Link to question on StackOverflow
i want to develop one feature in pandas
### Question about pandas
_No response_ | [
"Usage Question",
"Needs Triage"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Adds a native .explain() method to pandas DataFrames that provides an instant, human-readable summary of data — including column types, missing values, distributions, and key insights — all in a single command to accelerate EDA.",
"Thanks for the suggestion but I would be -1 to include this in pandas. This sounds similar to `describe`. For this feature I would suggest implementing this as a 3rd party library. Closing"
] |
3,053,798,669 | 61,423 | CI: Fix test failures in 32-bit environment | closed | 2025-05-10T06:43:45 | 2025-05-13T23:25:56 | 2025-05-13T23:09:01 | https://github.com/pandas-dev/pandas/pull/61423 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61423 | https://github.com/pandas-dev/pandas/pull/61423 | chilin0525 | 4 | I noticed that some CI failures are due to the same test errors appearing in several recent PRs.
After comparing multiple failed and successful CI runs, it seems that the unit tests fail when using `cython==3.1.0` in the Linux 32-bit environment.
* Failing unit test cases:
```python
FAILED pandas/tests/window/test_rolling.py::test_rolling_var_numerical_issues[var-1-values0] - AssertionError: Series are different
Series values are different (42.85714 %)
[index]: [0, 1, 2, 3, 4, 5, 6]
[left]: [nan, 5e+33, 0.0, -1.7226268574692147e+17, -1.7226268574692147e+17, -1.7226268574692147e+17, 0.0]
[right]: [nan, 5e+33, 0.0, 0.5, 0.5, 2.0, 0.0]
At positional index 3, first diff: -1.7226268574692147e+17 != 0.5
FAILED pandas/tests/window/test_rolling.py::test_rolling_var_numerical_issues[std-1-values1] - AssertionError: Series are different
Series values are different (42.85714 %)
[index]: [0, 1, 2, 3, 4, 5, 6]
[left]: [nan, 7.071067811865475e+16, 0.0, 0.0, 0.0, 0.0, 0.0]
[right]: [nan, 7.071068e+16, 0.0, 0.7071068, 0.7071068, 1.414214, 0.0]
At positional index 3, first diff: 0.0 != 0.7071068
FAILED pandas/tests/window/test_rolling.py::test_rolling_var_numerical_issues[var-2-values2] - AssertionError: Series are different
Series values are different (42.85714 %)
[index]: [0, 1, 2, 3, 4, 5, 6]
[left]: [nan, 5e+33, -1.7226268574692147e+17, 0.0, -1.7226268574692147e+17, -1.7226268574692147e+17, 0.0]
[right]: [nan, 5e+33, 0.5, 0.0, 0.5, 2.0, 0.0]
At positional index 2, first diff: -1.7226268574692147e+17 != 0.5
FAILED pandas/tests/window/test_rolling.py::test_rolling_var_numerical_issues[std-2-values3] - AssertionError: Series are different
Series values are different (42.85714 %)
[index]: [0, 1, 2, 3, 4, 5, 6]
[left]: [nan, 7.071067811865475e+16, 0.0, 0.0, 0.0, 0.0, 0.0]
[right]: [nan, 7.071068e+16, 0.7071068, 0.0, 0.7071068, 1.414214, 0.0]
At positional index 2, first diff: 0.0 != 0.7071068
= 4 failed, 166839 passed, 24850 skipped, 5388 deselected, 795 xfailed, 92 xpassed, 2 warnings in 554.17s (0:09:14) =
```
* using `cython==3.0.10`:
```python
>>> import numpy as np
... from pandas import Series
...
... def debug_rolling_var():
... ds = Series([99999999999999999, 1, 1, 2, 3, 1, 1])
... print("Rolling(2).var():\n", ds.rolling(2).var())
... print("Numpy var:", np.var([99999999999999999, 1], ddof=0))
...
>>> debug_rolling_var()
Rolling(2).var():
0 NaN
1 5.000000e+33
2 0.000000e+00
3 5.000000e-01
4 5.000000e-01
5 2.000000e+00
6 0.000000e+00
dtype: float64
Numpy var: 2.5e+33
```
* using `cython==3.1.0`:
```python
>>> import numpy as np
... from pandas import Series
...
... def debug_rolling_var():
... ds = Series([99999999999999999, 1, 1, 2, 3, 1, 1])
... print("Rolling(2).var():\n", ds.rolling(2).var())
... print("Numpy var:", np.var([99999999999999999, 1], ddof=0))
...
>>> debug_rolling_var()
Rolling(2).var():
0 NaN
1 5.000000e+33
2 0.000000e+00
3 -1.722627e+17
4 -1.722627e+17
5 -1.722627e+17
6 0.000000e+00
dtype: float64
Numpy var: 2.5e+33
``` | [
"CI",
"Dependencies",
"32bit"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"@mroeschke Hi, this error seems to be an upstream issue. I created this MR just to evaluate whether it's caused by Cython.\r\nFor issues like this, what's the usual approach to ensure the unit tests pass?",
"Thanks for starting investigation. If this is caused by Cython then we probably need to pin it like you did. If you could create a minimal example for the Cython folks (that doesn't use pandas), that'd be appreciated",
"Thanks @chilin0525 ",
"Owee, I'm MrMeeseeks, Look at me.\n\nThere seem to be a conflict, please backport manually. Here are approximate instructions:\n\n1. Checkout backport branch and update it.\n\n```\ngit checkout 2.3.x\ngit pull\n```\n\n2. Cherry pick the first parent branch of the this PR on top of the older branch:\n```\ngit cherry-pick -x -m1 41968a550a159ec0e5ef541a610b7007003bab5b\n```\n\n3. You will likely have some merge/cherry-pick conflict here, fix them and commit:\n\n```\ngit commit -am 'Backport PR #61423: CI: Fix test failures in 32-bit environment'\n```\n\n4. Push to a named branch:\n\n```\ngit push YOURFORK 2.3.x:auto-backport-of-pr-61423-on-2.3.x\n```\n\n5. Create a PR against branch 2.3.x, I would have named this PR:\n\n> \"Backport PR #61423 on branch 2.3.x (CI: Fix test failures in 32-bit environment)\"\n\nAnd apply the correct labels and milestones.\n\nCongratulations — you did some good work! Hopefully your backport PR will be tested by the continuous integration and merged soon!\n\nRemember to remove the `Still Needs Manual Backport` label once the PR gets merged.\n\nIf these instructions are inaccurate, feel free to [suggest an improvement](https://github.com/MeeseeksBox/MeeseeksDev).\n "
] |
3,053,377,590 | 61,422 | BUG: Raise MergeError when suffixes result in duplicate column names … | closed | 2025-05-09T23:21:41 | 2025-06-06T14:12:42 | 2025-06-06T14:12:20 | https://github.com/pandas-dev/pandas/pull/61422 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61422 | https://github.com/pandas-dev/pandas/pull/61422 | Farsidetfs | 20 | closes #61402
All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
Added an entry in the latest doc/source/whatsnew/vX.X.X.rst file if fixing a bug or adding a new feature.
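For context, a minimal sketch of the collision this guards against (column names are illustrative; depending on the pandas version the merge either silently produces duplicate column names or raises `MergeError`):

```python
import pandas as pd

left = pd.DataFrame({"a": [1], "a_x": [2]})
right = pd.DataFrame({"a": [3]})

# The overlapping column "a" receives the "_x" suffix and collides with the
# pre-existing "a_x" column.
try:
    out = left.merge(right, left_index=True, right_index=True, suffixes=("_x", "_y"))
    duplicated = bool(out.columns.duplicated().any())
except pd.errors.MergeError:
    duplicated = False
```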
| [
"Bug",
"Reshaping"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"pre-commit.ci autofix",
"Thanks for your contribution!\r\n\r\nJust a quick note: you don't need to write `GH#61402` in the PR description — simply using `#61402` in PR description, is enough, GitHub will automatically link it 😀. \r\nAlso, since this PR addresses a bug, please make sure to:\r\n* Add a unit test that covers this case\r\n* Include an entry in the `doc/source/whatsnew/vx.y.z.rst` file to document your fix\r\n\r\nFor reference, you can check the contributing guidelines here: https://pandas.pydata.org/docs/development/contributing_codebase.html#documenting-your-code",
"Thanks for the pointers. I'll get those added in here soon. Trying to track down why the Unit Tests / Linux-32-bit(pull_request) is failing. I didn't change anything that should have effected Series, so it's kinda weird. \r\n\r\nI also can't get the pytest to run normally on my dev yet either, so I haven't been able to fully replicate the failure locally yet. So, still a little more work to do here.",
"@Farsidetfs I believe the CI failure is not related to your changes. It appears to be caused by the cython version — pandas unit tests fail with `cython==3.1.0`. You may notice that the same test failures have occurred in several recent PRs as well. I already address the issue in https://github.com/pandas-dev/pandas/pull/61423. ",
"pre-commit.ci autofix",
"pre-commit.ci autofix",
"pre-commit.ci autofix",
"pre-commit.ci autofix",
"@nikaltipar I think this should be ready now. Please let me know if I've missed anything. I took your advice and combined the two with slight modifications to improve efficiency using sets throughout rather than just convert at the end.",
"> @nikaltipar I think this should be ready now. Please let me know if I've missed anything. I took your advice and combined the two with slight modifications to improve efficiency using sets throughout rather than just convert at the end.\r\n\r\nThanks for taking care of that, @Farsidetfs ! It looks good to me, no other comments from my side. Thanks for adding the unit-tests, too!",
"@nikaltipar Could you rebase main branch to trigger CI again?",
"> @nikaltipar Could you rebase main branch to trigger CI again?\r\n\r\nI am not able to, I'll have to wait for @Farsidetfs ",
"Rebase complete. Thanks. Let me know if there's anything else needed.",
"#61402 ",
"@rhshadrach All requested changes are complete, so it should be ready for your review to unblock merge. Thanks",
"@Farsidetfs - just a conflict that needs resolved in the whatsnew.",
"pre-commit.ci autofix",
"pre-commit.ci autofix",
"@rhshadrach Merge conflicts resolved, ready for merge",
"Thanks @Farsidetfs very nice"
] |
3,052,849,935 | 61,421 | DOC: Updated titanic.rst survived description | closed | 2025-05-09T18:01:36 | 2025-05-09T18:36:15 | 2025-05-09T18:36:07 | https://github.com/pandas-dev/pandas/pull/61421 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61421 | https://github.com/pandas-dev/pandas/pull/61421 | arthurlw | 1 | - [x] closes #61412
- [ ] ~[Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature~
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] ~Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.~
- [ ] ~Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.~
| [
"Docs"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Thanks @arthurlw "
] |
3,052,183,501 | 61,420 | ENH: Add smart_groupby() method for automatic grouping by categorical columns and aggregating numerics | closed | 2025-05-09T13:27:40 | 2025-05-14T18:59:18 | 2025-05-14T18:59:15 | https://github.com/pandas-dev/pandas/issues/61420 | true | null | null | rit4rosa | 3 | ### Feature Type
- [x] Adding new functionality to pandas
- [ ] Changing existing functionality in pandas
- [ ] Removing existing functionality in pandas
### Problem Description
Currently, pandas.DataFrame.groupby() requires users to explicitly specify both the grouping columns and the aggregation functions. This can be repetitive and inefficient, especially during exploratory data analysis on large DataFrames with many columns. A common use case like “group by all categorical columns and compute the mean of numeric columns” requires verbose, manual setup.
### Feature Description
Add a new method to DataFrame called smart_groupby(), which intelligently infers grouping and aggregation behavior based on the column types of the DataFrame.
Proposed behavior:
- If no parameters are passed:
- Group by all columns of type object, category, or bool
- Aggregate all remaining numeric columns using the mean
- Optional keyword parameters:
- by: specify grouping columns explicitly
- agg: specify aggregation function(s) (default is "mean")
- exclude: exclude specific columns from grouping or aggregation
### Alternative Solutions
Currently, users must write verbose code to accomplish the same:
```
group_cols = [col for col in df.columns if df[col].dtype == 'category']
agg_cols = [col for col in df.columns if pd.api.types.is_numeric_dtype(df[col])]
df.groupby(group_cols)[agg_cols].mean()
```
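The verbose pattern above can be wrapped in a small helper (a sketch only; `smart_groupby` here is a hypothetical user-defined function, not a pandas API):

```python
import pandas as pd

def smart_groupby(df, agg="mean"):
    # Hypothetical helper (not a pandas API): group by all object/category/
    # bool columns and aggregate the remaining numeric columns.
    group_cols = df.select_dtypes(include=["object", "category", "bool"]).columns.tolist()
    agg_cols = df.select_dtypes(include="number").columns.tolist()
    return df.groupby(group_cols)[agg_cols].agg(agg)

df = pd.DataFrame({"g": ["a", "a", "b"], "x": [1.0, 3.0, 5.0]})
result = smart_groupby(df)
```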
### Additional Context
_No response_ | [
"Enhancement",
"Groupby",
"Closing Candidate"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"I would like to work on this feature if you agree.",
"Thanks for the suggestion but I would be -1 including this in pandas. pandas is moving toward explicit and less automatic behaviors, and the snippet you posted is short enough to be wrapped in a custom helper function",
"Agreed @mroeschke. Closing."
] |
3,052,051,865 | 61,419 | BUILD: Missing Windows free-threading wheel | closed | 2025-05-09T12:40:24 | 2025-05-10T15:07:23 | 2025-05-10T14:36:30 | https://github.com/pandas-dev/pandas/issues/61419 | true | null | null | blink1073 | 2 | ### Installation check
- [x] I have read the [installation guide](https://pandas.pydata.org/pandas-docs/stable/getting_started/install.html#installing-pandas).
### Platform
Windows-2022Server-10.0.20348-SP0
### Installation Method
pip install
### pandas Version
2.2.3
### Python Version
3.13.3 free-threading
### Installation Logs
<details>
$ which pip
/home/Administrator/venv/Scripts/pip
$ pip install pandas
Collecting pandas
Downloading pandas-2.2.3.tar.gz (4.4 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 4.4/4.4 MB 144.5 MB/s eta 0:00:00
Installing build dependencies ... done
Getting requirements to build wheel ... done
Preparing metadata (pyproject.toml) ... error
error: subprocess-exited-with-error
× Preparing metadata (pyproject.toml) did not run successfully.
│ exit code: 1
╰─> [12 lines of output]
+ meson setup Z:\data\tmp\pip-install-niaom8mt\pandas_620816291b0449be8d128c83a9a99222 Z:\data\tmp\pip-install-niaom8mt\pandas_620816291b0449be8d128c83a9a99222\.mesonpy-ulxgqp76\build -Dbuildtype=release -Db_ndebug=if-release -Db_vscrt=md --vsenv --native-file=Z:\data\tmp\pip-install-niaom8mt\pandas_620816291b0449be8d128c83a9a99222\.mesonpy-ulxgqp76\build\meson-python-native-file.ini
The Meson build system
Version: 1.2.1
Source dir: Z:\data\tmp\pip-install-niaom8mt\pandas_620816291b0449be8d128c83a9a99222
Build dir: Z:\data\tmp\pip-install-niaom8mt\pandas_620816291b0449be8d128c83a9a99222\.mesonpy-ulxgqp76\build
Build type: native build
Project name: pandas
Project version: 2.2.3
..\..\meson.build:2:0: ERROR: Could not find C:\Program Files\Microsoft Visual Studio\Installer\vswhere.exe
A full log can be found at Z:\data\tmp\pip-install-niaom8mt\pandas_620816291b0449be8d128c83a9a99222\.mesonpy-ulxgqp76\build\meson-logs\meson-log.txt
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
[notice] A new release of pip is available: 25.0.1 -> 25.1.1
[notice] To update, run: python.exe -m pip install --upgrade pip
error: metadata-generation-failed
× Encountered error while generating package metadata.
╰─> See above for output.
note: This is an issue with the package mentioned above, not pip.
hint: See above for details.
</details>
I don't see a wheel for Windows cp313t in the list of release files https://pypi.org/project/pandas/2.2.3/#files.
I see a job is running that should produce the wheel: https://github.com/pandas-dev/pandas/actions/runs/14920899116/job/41915964757
Perhaps the wheel was accidentally omitted in the release process?
| [
"Build",
"Needs Triage"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Thanks for the issue, but our 3.13t wheels will be released in pandas 3.0. Our nightly wheels have 3.13t wheels though https://anaconda.org/scientific-python-nightly-wheels/pandas/files. Closing as we won't be supporting free threading in pandas 2.2.3",
"Understood, thanks!"
] |
3,051,633,779 | 61,418 | BUG/FEATURE REQUEST: DataFrame.to_sql() tries to create table when it exists | open | 2025-05-09T09:47:10 | 2025-06-03T19:39:28 | null | https://github.com/pandas-dev/pandas/issues/61418 | true | null | null | vladidobro | 5 | ### Pandas version checks
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [ ] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
# This example requires an Oracle 19c database
import pandas as pd
import sqlalchemy
from sqlalchemy import text

engine = sqlalchemy.create_engine('oracle+oracledb://...', echo=True)
con = engine.connect()
con.execute(text('''
CREATE PRIVATE TEMPORARY TABLE ORA$PTT_TEMP (
    a INT
) ON COMMIT DROP DEFINITION
'''))
pd.DataFrame({'a': [1]}).to_sql('ORA$PTT_TEMP', engine)
2025-05-09 11:10:00,967 INFO sqlalchemy.engine.Engine SELECT tables_and_views.table_name
FROM (SELECT a_tables.table_name AS table_name, a_tables.owner AS owner
FROM all_tables a_tables UNION ALL SELECT a_views.view_name AS table_name, a_views.owner AS owner
FROM all_views a_views) tables_and_views
WHERE tables_and_views.table_name = :table_name AND tables_and_views.owner = :owner
2025-05-09 11:10:00,967 INFO sqlalchemy.engine.Engine [cached since 533.2s ago] {'table_name': 'ORA$PTT_TEMP', 'owner': '...'}
2025-05-09 11:10:00,993 INFO sqlalchemy.engine.Engine
CREATE TABLE ORA$PTT_TEMP (
curve_id INT
)
DatabaseError: (oracledb.exceptions.DatabaseError) ORA-32463: cannot create an object with a name matching private temporary table prefix
```
### Issue Description
Hello Pandas!
I am trying to use DataFrame.to_sql with Oracle "PRIVATE TEMPORARY" tables.
The catch is that these tables for whatever reason cannot be detected with the inspector.has_table() method, so pandas is trying to create the table, and then fails.
The issue is quite annoying, because the error is in the `pandas.SQLDatabase.prep_table()` method, which is called unconditionally in the `pandas.SQLDatabase.to_sql()`, and there is no way to override it with a custom "method: callable" parameter to `pandas.DataFrame.to_sql()`.
Though one could argue that this is a bug in the SQLAlchemy Oracle dialect, rather than Pandas. But IMHO it should be possible to skip the table check and creation altogether in the `pandas.DataFrame.to_sql()` call.
It looks like it would be easy to add a `skip_table_creation: bool = False` argument to the `to_sql()` method, that would just skip the prep_table call in SQLDatabase.to_sql().
The downside would be that pandas would not have the reflected information about target database types, but this could potentially be solved by passing a custom `sqlalchemy.Table` object?
What do you think about this? Is this a direction that Pandas would like to go in, or do you think about the `.to_sql()` method more as a handy feature for ad-hoc operations, that should not be used much in production? Do you think it is better to write my own insert methods and not rely on `.to_sql()` for production use?
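As an illustration of the proposed escape hatch, here is a minimal sketch (using SQLite for portability; `insert_existing` is a hypothetical helper, not a pandas API) that inserts into a pre-existing table without any reflection or CREATE TABLE attempt:

```python
import sqlite3
import pandas as pd

def insert_existing(df, con, table):
    # Hypothetical helper (not a pandas API): insert into a pre-existing
    # table, skipping reflection and table creation entirely.
    cols = ", ".join(df.columns)
    placeholders = ", ".join("?" for _ in df.columns)
    con.executemany(
        f"INSERT INTO {table} ({cols}) VALUES ({placeholders})",
        df.to_numpy().tolist(),  # native Python scalars for the DBAPI driver
    )

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (a INT)")
insert_existing(pd.DataFrame({"a": [1, 2]}), con, "t")
```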
### Expected Behavior
I expect that it will not try to create a table if it exists, or an option to skip table creation if I know that it does not exist.
### Installed Versions
INSTALLED VERSIONS
------------------
commit : 0691c5cf90477d3503834d983f69350f250a6ff7
python : 3.11.3
python-bits : 64
OS : Darwin
OS-release : 24.4.0
Version : Darwin Kernel Version 24.4.0: Fri Apr 11 18:33:47 PDT 2025; root:xnu-11417.101.15~117/RELEASE_ARM64_T6000
machine : arm64
processor : arm
byteorder : little
LC_ALL : None
LANG : None
LOCALE : None.UTF-8
pandas : 2.2.3
numpy : 1.26.4
pytz : 2024.2
dateutil : 2.8.2
pip : 24.0
Cython : None
sphinx : None
IPython : 8.21.0
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : 4.12.3
blosc : None
bottleneck : None
dataframe-api-compat : None
fastparquet : None
fsspec : 2024.3.1
html5lib : 1.1
hypothesis : None
gcsfs : None
jinja2 : 3.1.3
lxml.etree : 5.1.0
matplotlib : 3.10.1
numba : None
numexpr : None
odfpy : None
openpyxl : 3.1.2
pandas_gbq : None
psycopg2 : 2.9.9
pymysql : 1.4.6
pyarrow : 15.0.0
pyreadstat : None
pytest : 8.3.3
python-calamine : None
pyxlsb : None
s3fs : None
scipy : None
sqlalchemy : 2.0.40
tables : None
tabulate : 0.9.0
xarray : None
xlrd : None
xlsxwriter : None
zstandard : None
tzdata : 2024.1
qtpy : None
pyqt5 : None
| [
"Bug",
"IO SQL",
"Needs Discussion",
"Needs Info"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Thanks for the request!\n\n> Though one could argue that this is a bug in the SQLAlchemy Oracle dialect, rather than Pandas.\n\nHas this been reported to SQLAlchemy?",
"No, I believe it is not reported to SQLAlchemy, I guess I will do that.\n\nThough I feel a little bit like there should still be an escape hatch in pandas that will not rely on sqlalchemy being able to reflect all special table types. There will probably not be many of them, but even then I think it would be good to skip the table reflection if I know the table upfront, for performance reasons.",
"What do you think about making it possible to pass as argument a sqlalchemy.Table instead of table name?\nIt seems like it would be minimal changes, the sql backend would just check if the table is a Table object and if yes, don't do the reflection.\nThat looks on first sight like a really simple change, I would maybe try it as my first PR in pandas, if pandas would like this.\n@rhshadrach ",
"@vladidobro - I am not very familiar with the SQL layer in pandas, if you're willing, I'd suggest putting up a proof-of-concept PR and we can discuss further.",
"Hi, I have reported to sqlalchemy and it seems that there is no possibility to reflect PRIVATE TEMPORARY tables.\nhttps://github.com/sqlalchemy/sqlalchemy/discussions/12633\n\nThat seems like the only way to insert to them via pandas would be to change pandas' to_sql()."
] |
3,051,556,738 | 61,417 | ENH: The prompt message in the error does not bring any valid bug prompts | open | 2025-05-09T09:16:39 | 2025-05-20T14:28:36 | null | https://github.com/pandas-dev/pandas/issues/61417 | true | null | null | pengjunfeng11 | 1 | ### Feature Type
- [ ] Adding new functionality to pandas
- [ ] Changing existing functionality in pandas
- [ ] Removing existing functionality in pandas
### Problem Description
<img width="1348" alt="Image" src="https://github.com/user-attachments/assets/c4104157-1e97-4455-853b-371e9bbea1bf" />
Since the bool method has been deprecated, there should be no prompt here. I can start writing a PR to fix this minor issue.
This issue does not cause a serious bug, so I consider it a functional improvement and have submitted it here.
### Feature Description
Fix the problem shown below
<img width="1348" alt="Image" src="https://github.com/user-attachments/assets/c4104157-1e97-4455-853b-371e9bbea1bf" />
### Alternative Solutions
Modify the code here
<img width="1348" alt="Image" src="https://github.com/user-attachments/assets/c4104157-1e97-4455-853b-371e9bbea1bf" />
### Additional Context
_No response_ | [
"Enhancement",
"Needs Triage"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Could you please explain what you mean by \"no prompt\"? A deprecation means that the functionality will soon be removed, not that it is already removed. So at this stage `.bool()` will execute, and either return successfully or fail (like in your example, as your dataframe is inappropriate for that method)."
] |
3,051,460,314 | 61,416 | BUG: df.rolling.{std, skew, kurt} gives unexpected value | open | 2025-05-09T08:42:24 | 2025-05-17T21:59:05 | null | https://github.com/pandas-dev/pandas/issues/61416 | true | null | null | Jie-Lei | 8 | ### Pandas version checks
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [x] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
import pandas as pd
df = pd.DataFrame(index=range(100))
df = df.assign(val = df.index)
df = df/1e3
df.loc[0,"val"] = 1e6
df.loc[5,"val"] = -1e6
res1 = df.rolling(20,min_periods=1).kurt()
res2 = df.iloc[1:].rolling(20,min_periods=1).kurt()
>>>res1.tail(5)
val
95 722.329422
96 730.791755
97 739.254087
98 747.716420
99 756.178752
>>>res2.tail(5)
val
95 -1.2
96 -1.2
97 -1.2
98 -1.2
99 -1.2
```
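One way to check whether the streaming implementation, rather than the window contents, is responsible is to recompute each window from scratch with `rolling().apply` (a slow diagnostic sketch, not a fix):

```python
import pandas as pd

df = pd.DataFrame({"val": range(100)}) / 1e3
df.loc[0, "val"] = 1e6
df.loc[5, "val"] = -1e6

# Recompute kurtosis per window from scratch: each window is independent,
# so the index-0 outlier cannot leak into later windows.
exact = df["val"].rolling(20, min_periods=1).apply(
    lambda x: pd.Series(x).kurt(), raw=True
)
```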
### Issue Description
In one of my experiments, the results of rolling calculations of higher-order moments differed greatly depending on whether I excluded or retained the first data point. I used this case to reproduce the behavior. The operators I tested include df.rolling.std, df.rolling.skew, and df.rolling.kurt. I don't know the cause, but I believe this is a bug in the df.rolling operators.
### Expected Behavior
The result of a rolling calculation should not be affected by data outside the window: regardless of what the first value is, the last few windows should not depend on the initial data.
### Installed Versions
<details>
INSTALLED VERSIONS
------------------
commit : 0691c5cf90477d3503834d983f69350f250a6ff7
python : 3.13.0
python-bits : 64
OS : Windows
OS-release : 10
Version : 10.0.19044
machine : AMD64
processor : Intel64 Family 6 Model 106 Stepping 6, GenuineIntel
byteorder : little
LC_ALL : None
LANG : en_US.UTF-8
LOCALE : Chinese (Simplified)_China.936
pandas : 2.2.3
numpy : 2.2.5
pytz : 2025.2
dateutil : 2.9.0.post0
pip : 25.1.1
Cython : None
sphinx : None
IPython : None
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : None
blosc : None
bottleneck : None
dataframe-api-compat : None
fastparquet : None
fsspec : None
html5lib : None
hypothesis : None
gcsfs : None
jinja2 : None
lxml.etree : None
matplotlib : None
numba : None
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
psycopg2 : None
pymysql : None
pyarrow : None
pyreadstat : None
pytest : None
python-calamine : None
pyxlsb : None
s3fs : None
scipy : None
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : None
xlsxwriter : None
zstandard : None
tzdata : 2025.2
qtpy : None
pyqt5 : None
None
</details>
| [
"Bug",
"Window"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Hey OP, thanks for raising this! Rolling operations include not just the current row, but also previous rows within the window. This means including or excluding the first row can impact the entire calculation, even for later rows. This is expected behavior, not a bug.\n\nLet me know if this makes sense or if you’re seeing something different.",
"@arthurlw Thank you for your reply! Regarding the function of df.rolling, I believe it builds a sliding window on the data and applies a calculation function. Each calculation only uses the data within the window. If following this logic, in the case I provided, when calculating the kurtosis, the following results will be my expectation:\n\n```\nres1 = df.rolling(20,min_periods=1).kurt()\nres2 = df.iloc[1:].rolling(20,min_periods=1).kurt()\n\npd.testing.assert_frame_equal(res1.loc[21:], res2.loc[21:])\n```\n\nHowever, from the case I presented, it can be seen that the results are not like this. In my case, I constructed a special dataframe, which has a maximum value and a minimum value. The maximum value is located at index 0. Whether to include this value in the rolling calculation will lead to different results. \n\nThis is my understanding of df.rolling. Finally, once again, thank you for your reply.",
"Thanks for the explanation! You’re right that whether the first data point (index 0) is included will lead to different results. This is actually expected behavior because `df.iloc[1:]` explicitly removes the first row, which means all window calculations in res2 will start at index 1 and will exclude the value at index 0. Thus, res1 and res2 will provide different results. ",
"Thank you for your reply. What surprises me is that the sliding window calculation shouldn't be affected by data outside the window. Then, including or excluding the first piece of data not affect the calculation result at the last index position. Since the data window is 20 and there are 100 data samples, why excluding the first data entry would cause the calculation result at the last index position to be different? This is the point that raises my doubts.",
"I see now what you mean and thanks for the catch! This definitely shouldn’t happen. It looks like the huge outlier is influencing values outside of its window with `.std`, `.skew`, and `.kurt`. PRs and contributions are welcome.",
"take",
"Rolling algos in pandas uses online methods, see: https://github.com/pandas-dev/pandas/issues/60053#issuecomment-2415885452",
"Just an update, it seems like the current implementation is running into difficulty due to the value contrast between numbers being too large. There is a compensation number which works as a fallback to catch the changes in the small numbers, but because the data set here includes two equally large magnitudes (1e6, -1e6), the algorithm overwrites the compensation number which is what causes a knock-on effect for the running summation of x^4."
] |
3,051,304,892 | 61,415 | BUG: ImportError: cannot import name 'NaN' from 'numpy' | closed | 2025-05-09T07:40:07 | 2025-05-09T17:37:23 | 2025-05-09T17:37:22 | https://github.com/pandas-dev/pandas/issues/61415 | true | null | null | Bl4ckVo1d | 3 | ### Pandas version checks
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [x] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
ImportError: cannot import name 'NaN' from 'numpy'
```
### Issue Description
ImportError: cannot import name 'NaN' from 'numpy'
### Expected Behavior
ImportError: cannot import name 'NaN' from 'numpy'
### Installed Versions
<details>
ImportError: cannot import name 'NaN' from 'numpy'
</details>
| [
"Bug",
"Needs Triage"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"ImportError: cannot import name 'NaN' from 'numpy' \n\n",
"ImportError: cannot import name 'NaN' from 'numpy' \n\n",
"Hey OP, thanks for raising this issue! Just a heads up, NaN is not a directly importable attribute from numpy. You can use np.nan instead:\n```python\nimport numpy as np\nprint(np.nan)\n```\nor \n```python\nfrom numpy import nan\nprint(nan)\n```\nThis isn’t a bug, so I’ll close this. If you’re still running into issues, feel free to reopen or provide more context. "
] |
3,050,669,700 | 61,414 | Bug fix slow plot with datetimeindex | closed | 2025-05-09T03:55:17 | 2025-06-02T16:55:28 | 2025-06-02T16:55:28 | https://github.com/pandas-dev/pandas/pull/61414 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61414 | https://github.com/pandas-dev/pandas/pull/61414 | thehalvo | 1 | - [x] closes [#61398](https://github.com/pandas-dev/pandas/issues/61398)
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests)
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [x] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions. (N/A - no new methods/functions added)
- [x] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature. | [] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Thanks for the pull request, but it appears to have gone stale. If interested in continuing, please merge in the main branch, address any review comments and/or failing tests, and we can reopen."
] |
3,050,627,050 | 61,413 | CLN: Expose arguments in DataFrame.query | closed | 2025-05-09T03:18:15 | 2025-05-20T02:35:42 | 2025-05-20T02:35:31 | https://github.com/pandas-dev/pandas/pull/61413 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61413 | https://github.com/pandas-dev/pandas/pull/61413 | loicdiridollou | 1 | - [x] closes #61405
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [x] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| [
"Clean",
"expressions"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Thanks @loicdiridollou - very nice!"
] |
3,050,183,256 | 61,412 | DOC: Error in Getting started tutorials > How do I read and write tabular data? | closed | 2025-05-08T21:59:48 | 2025-05-09T18:36:09 | 2025-05-09T18:36:08 | https://github.com/pandas-dev/pandas/issues/61412 | true | null | null | paintdog | 1 | ### Pandas version checks
- [x] I have checked that the issue still exists on the latest versions of the docs on `main` [here](https://pandas.pydata.org/docs/dev/)
### Location of the documentation
https://pandas.pydata.org/docs/getting_started/intro_tutorials/02_read_write.html
### Documentation problem
In the documentation for the Titanic dataset on this page:
https://pandas.pydata.org/docs/getting_started/intro_tutorials/02_read_write.html
It currently says:
> "Survived: Indication whether passenger survived. 0 for yes and 1 for no."
This appears to be incorrect. The correct meaning is:
> 0 = did not survive
> 1 = survived
You can verify this, for example, with the entry for "McCarthy, Mr. Timothy J.", who is listed with a 0 in the dataset and was confirmed deceased (source: https://de.wikipedia.org/wiki/Passagiere_der_Titanic).
Thanks for your great work and for maintaining the documentation!
### Suggested fix for documentation
Survived: Indication whether passenger survived. 0 for no and 1 for yes. | [
"Docs"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Thanks for catching this! The correct mapping is 0 = did not survive and 1 = survived."
] |
3,049,800,285 | 61,411 | DOC: removed none from docstring | closed | 2025-05-08T18:38:30 | 2025-05-08T22:27:29 | 2025-05-08T22:27:22 | https://github.com/pandas-dev/pandas/pull/61411 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61411 | https://github.com/pandas-dev/pandas/pull/61411 | arthurlw | 1 | - [x] closes #61408
- [ ] ~[Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature~
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] ~Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.~
- [ ] ~Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.~
| [
"Docs",
"Algos"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Thanks @arthurlw "
] |
3,049,575,679 | 61,410 | CI: Upgrade to ubuntu-24.04, install Python free threading from conda-forge | closed | 2025-05-08T16:50:51 | 2025-05-16T00:58:56 | 2025-05-15T18:59:46 | https://github.com/pandas-dev/pandas/pull/61410 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61410 | https://github.com/pandas-dev/pandas/pull/61410 | mroeschke | 6 | null | [
"CI"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Seems like that the python-freethreading from conda-forge is not recognized as a valid Python, no idea why. But other than that this looks great, much cleaner.",
"Probably needs a bump of meson/meson-python.",
"submitted https://github.com/cython/cython/issues/6870 for the warning.",
"thanks @mroeschke ",
"Owee, I'm MrMeeseeks, Look at me.\n\nThere seem to be a conflict, please backport manually. Here are approximate instructions:\n\n1. Checkout backport branch and update it.\n\n```\ngit checkout 2.3.x\ngit pull\n```\n\n2. Cherry pick the first parent branch of the this PR on top of the older branch:\n```\ngit cherry-pick -x -m1 b2b2d04e419e44932d51017ececb5c3a86b15925\n```\n\n3. You will likely have some merge/cherry-pick conflict here, fix them and commit:\n\n```\ngit commit -am 'Backport PR #61410: CI: Upgrade to ubuntu-24.04, install Python free threading from conda-forge'\n```\n\n4. Push to a named branch:\n\n```\ngit push YOURFORK 2.3.x:auto-backport-of-pr-61410-on-2.3.x\n```\n\n5. Create a PR against branch 2.3.x, I would have named this PR:\n\n> \"Backport PR #61410 on branch 2.3.x (CI: Upgrade to ubuntu-24.04, install Python free threading from conda-forge)\"\n\nAnd apply the correct labels and milestones.\n\nCongratulations — you did some good work! Hopefully your backport PR will be tested by the continuous integration and merged soon!\n\nRemember to remove the `Still Needs Manual Backport` label once the PR gets merged.\n\nIf these instructions are inaccurate, feel free to [suggest an improvement](https://github.com/MeeseeksBox/MeeseeksDev).\n ",
"The free threading stuff isn't on 2.3 so no need for backport"
] |
3,049,397,172 | 61,409 | BUG: CVE-2020-13091 | closed | 2025-05-08T15:40:20 | 2025-05-08T15:59:46 | 2025-05-08T15:59:45 | https://github.com/pandas-dev/pandas/issues/61409 | true | null | null | mrw56410 | 1 | ### Pandas version checks
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [x] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
When will this bug be fixed?
```
### Issue Description
Bug since 2020
### Expected Behavior
No Bug
### Installed Versions
<details>
Replace this line with the output of pd.show_versions()
</details>
| [
"Bug",
"Needs Triage"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Please see the discussion in https://github.com/pandas-dev/pandas/issues/49810, https://github.com/pandas-dev/pandas/issues/36256, https://github.com/pandas-dev/pandas/issues/48049. This is a won't fix from the pandas side"
] |
3,049,087,156 | 61,408 | DOC: axis argument for take says `None` is acceptable, but that is incorrect. | closed | 2025-05-08T13:54:54 | 2025-05-08T22:27:23 | 2025-05-08T22:27:23 | https://github.com/pandas-dev/pandas/issues/61408 | true | null | null | Dr-Irv | 1 | ### Pandas version checks
- [x] I have checked that the issue still exists on the latest versions of the docs on `main` [here](https://pandas.pydata.org/docs/dev/)
### Location of the documentation
https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.take.html#pandas.DataFrame.take
### Documentation problem
The `axis` argument is documented as: "axis {0 or ‘index’, 1 or ‘columns’, None}, default 0" . But `None` is not accepted. So it should be removed from the docs.
See https://github.com/pandas-dev/pandas-stubs/pull/1209#discussion_r2079740441 for an example.
### Suggested fix for documentation
Remove `None` from that sentence.
| [
"Docs",
"Algos"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"This was changed in https://github.com/pandas-dev/pandas/pull/20179 and seems to be erroneous, even at that time."
] |
3,049,045,130 | 61,407 | BUG: to_csv() quotechar/escapechar behavior differs from csv module | closed | 2025-05-08T13:40:42 | 2025-05-30T16:36:43 | 2025-05-30T16:36:43 | https://github.com/pandas-dev/pandas/issues/61407 | true | null | null | johnrtian | 5 | ### Pandas version checks
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [ ] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
import pandas as pd
import csv
import sys
data = [['a', 'b"c', 'def"'], ['a2', None, '"c']]
# no escaping
df = pd.DataFrame(data)
print(df.to_csv(sep='\t', index=False, header=False, quotechar='"', escapechar='\\', quoting=csv.QUOTE_NONE))
print(df.to_csv(sep='\t', index=False, header=False, quotechar='"', escapechar='\\', quoting=csv.QUOTE_NONE, doublequote=False))
# escaping
csv_writer = csv.writer(sys.stdout, delimiter='\t', quotechar='"', escapechar='\\', quoting=csv.QUOTE_NONE)
for r in data:
_ = csv_writer.writerow(r)
```
### Issue Description
`to_csv()` doesn't escape `quotechar` when `quoting=csv.QUOTE_NONE`.
````
a b"c def"
a2 "c
````
### Expected Behavior
`quotechar` gets escaped using `escapechar` even when `quoting=csv.QUOTE_NONE`.
This is the behavior of the csv module.
````
a b\"c def\"
a2 \"c
````
### Installed Versions
<details>
INSTALLED VERSIONS
------------------
commit : 0691c5cf90477d3503834d983f69350f250a6ff7
python : 3.11.2
python-bits : 64
OS : Windows
OS-release : 10
Version : 10.0.22621
machine : AMD64
processor : Intel64 Family 6 Model 140 Stepping 1, GenuineIntel
byteorder : little
LC_ALL : None
LANG : en_US.UTF-8
LOCALE : English_United States.1252
pandas : 2.2.3
numpy : 1.26.2
pytz : 2023.3.post1
dateutil : 2.8.2
pip : 25.0.1
Cython : None
sphinx : None
IPython : 8.24.0
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : 4.12.3
blosc : None
bottleneck : None
dataframe-api-compat : None
fastparquet : None
fsspec : None
html5lib : None
hypothesis : None
gcsfs : None
jinja2 : None
lxml.etree : None
matplotlib : None
numba : None
numexpr : None
odfpy : None
openpyxl : 3.1.5
pandas_gbq : None
psycopg2 : 2.9.9
pymysql : None
pyarrow : None
pyreadstat : None
pytest : None
python-calamine : None
pyxlsb : None
s3fs : None
scipy : None
sqlalchemy : 2.0.23
tables : None
tabulate : None
xarray : None
xlrd : None
xlsxwriter : None
zstandard : None
tzdata : 2023.3
qtpy : None
pyqt5 : None
</details>
| [
"Bug",
"IO CSV"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Confirmed on main. Agreed with the expected behavior, further investigations and PRs to fix are welcome!",
"take",
"@omarraf are you still working on this? can i take over this issue thanks",
"@KevsterAmp yes go for it ",
"Take"
] |
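The escaping the report expects can be reproduced with the stdlib `csv` module alone; this is a minimal sketch of the behavior `to_csv()` is asked to match (the `None` from the original example is replaced by an empty string, which is how `csv.writer` renders it anyway):

```python
import csv
import io

# With QUOTE_NONE and an escapechar, the stdlib csv writer escapes the
# quotechar itself, which is the behavior the report expects from to_csv().
data = [['a', 'b"c', 'def"'], ['a2', '', '"c']]
buf = io.StringIO()
writer = csv.writer(buf, delimiter='\t', quotechar='"',
                    escapechar='\\', quoting=csv.QUOTE_NONE)
for row in data:
    writer.writerow(row)
out = buf.getvalue()
print(out)  # quote characters appear as \" in the output
```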
3,049,036,809 | 61,406 | BUG: way to include all columns within a groupby apply | open | 2025-05-08T13:37:47 | 2025-05-08T16:47:11 | null | https://github.com/pandas-dev/pandas/issues/61406 | true | null | null | madelavar12 | 4 | ### Pandas version checks
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [x] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
import pandas as pd
# Sample DataFrame
df = pd.DataFrame({
"group": ["A", "A", "B", "B"],
"value": [1, 2, 3, 4],
})
# Function that operates on the whole group (e.g., adds a new column)
def process_group(group_df):
group_df["value_doubled"] = group_df["value"] * 2
return group_df
# Trigger the deprecation warning
result = df.groupby("group").apply(process_group)
print(result)
group value value_doubled
group
A 0 A 1 2
1 A 2 4
B 2 B 3 6
3 B 4 8
C:\Users\e361154\AppData\Local\Temp\1\ipykernel_15728\2443901964.py:15: DeprecationWarning: DataFrameGroupBy.apply operated on the grouping columns. This behavior is deprecated, and in a future version of pandas the grouping columns will be excluded from the operation. Either pass `include_groups=False` to exclude the groupings or explicitly select the grouping columns after groupby to silence this warning.
result = df.groupby("group").apply(process_group)
```
### Issue Description
When using groupby().apply() with a function that modifies and returns the entire group DataFrame, a DeprecationWarning is raised in pandas >= 2.2. This warning notifies users that in pandas 3.0 the default behavior will change: the grouping columns will be excluded from the data passed to the function. To silence the warning, users must either pass include_groups=False (which excludes the grouping columns) or explicitly select the grouping columns after the groupby.
This affects workflows where the function operates on the full DataFrame per group and expects the group keys to be included in the data automatically, as was the case in earlier pandas versions.
### Expected Behavior
The expected behavior is still what I want from the above example. I just don't want that functionality to be lost in pandas 3.0.
### Installed Versions
<details>
INSTALLED VERSIONS
------------------
commit : 0691c5cf90477d3503834d983f69350f250a6ff7
python : 3.10.7
python-bits : 64
OS : Windows
OS-release : 10
Version : 10.0.22631
machine : AMD64
processor : Intel64 Family 6 Model 140 Stepping 1, GenuineIntel
byteorder : little
LC_ALL : None
LANG : None
LOCALE : English_United States.1252
pandas : 2.2.3
numpy : 2.2.4
pytz : 2025.2
dateutil : 2.9.0.post0
pip : 25.0.1
Cython : None
sphinx : None
IPython : 8.35.0
adbc-driver-postgresql: None
...
zstandard : 0.23.0
tzdata : 2025.2
qtpy : None
pyqt5 : None
</details>
| [
"Bug",
"Groupby",
"Apply",
"Closing Candidate"
] | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Thanks for the report.\n\n1. You can do `df.groupby(...)[df.columns]`.\n2. You can access the groups from the index of the result.\n3. You can do `df.groupby(..., as_index=False)` to have the groups be columns instead of index.\n\nIn the event that none of these work for you, you can use `DataFrameGroupBy.pipe` to have your own helper function.\n\n```python\ndef include_all_columns(gb, *args, **kwargs):\n return gb[gb.obj.columns]\n\nresult = df.groupby(\"group\").pipe(include_all_columns).apply(process_group)\n```\n\nMore longer term, pandas core developers are positive on adding expressions, similar to those in PySpark and Polars. If that were to happen, then you could do `df.groupby(...)[pd.all()].apply(...)`.",
"I'll also add, the example in the OP mutates the provided `group_df`. This is [explicitly not supported](https://pandas.pydata.org/pandas-docs/dev/user_guide/gotchas.html#mutating-with-user-defined-function-udf-methods). If it works in your use-case, great, but there are various ways you can mutate the argument that will break pandas. You should instead make a copy.\n\n```python\ndef process_group(group_df):\n group_df = group_df.copy()\n group_df[\"value_doubled\"] = group_df[\"value\"] * 2\n return group_df\n```",
"I agree with your sentiment about the mutation in a general sense, but I also see great use cases for the mutation if adding a column without having to make a copy within every group since that could be computationally intense depending on the DataFrame.\n\nOption 1 is what I have currently been doing `df.groupby(...)[df.columns]` but that just seems clunky. I am more just wondering why this was a reduction in functionality? Why not just keep the option to include groups? I guess ultimately it ends up being the same thing but I am just not sure why it was just fully deprecated.\n\nThe issue isn't with the original groupby, it is within the apply that is causing the issue so options 2 and 3 don't seem to work as you are saying they would unless I am just misunderstanding.\n\nOption 2 with a reset_index specifying the level would work but that also seems clunky especially for a multicolumn groupby:\n```\nresult = df.groupby(\"group\").apply(process_group, include_groups=False).reset_index(level=\"group\")\nprint(result)\n```\n\nOption 3 just doesn't return the group anymore:\n```\nresult = df.groupby(\"group\", as_index=False).apply(process_group, include_groups=False)\nprint(result)\n```",
"> I am more just wondering why this was a reduction in functionality? Why not just keep the option to include groups? I guess ultimately it ends up being the same thing but I am just not sure why it was just fully deprecated.\n\napply was the only function that operated on the groups (filters include the groups, but don't operate on them), and even then only in certain cases. It was an inconsistency in the API. Supporting this option in just `apply` means the groupby internals need to track whether the groups are in the supplied DataFrame itself or outside of it, and what do to in each case. This complicates the internals by adding a whole additional state that needs to be tracked.\n\n> Option 3 just doesn't return the group anymore:\n\nAh, indeed. I think what's going on here is that `apply` infers that your operation is a transform, and so does not include the groups. I've argument that we should enable the behavior of `as_index=False` in such cases (https://github.com/pandas-dev/pandas/issues/49543), but it seems like it won't gain much traction.\n"
] |
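The copying-transform advice from the comments can be pictured without pandas at all; this plain-Python sketch (stdlib only, hypothetical record layout — not the pandas API) groups rows by key and applies a non-mutating transform, keeping the group key in each output record:

```python
from itertools import groupby
from operator import itemgetter

rows = [{"group": "A", "value": 1}, {"group": "A", "value": 2},
        {"group": "B", "value": 3}, {"group": "B", "value": 4}]

def process_group(records):
    # Build new records instead of mutating the input, per the advice above.
    return [{**r, "value_doubled": r["value"] * 2} for r in records]

result = []
for key, grp in groupby(sorted(rows, key=itemgetter("group")),
                        key=itemgetter("group")):
    result.extend(process_group(list(grp)))
print(result)
```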
3,047,535,649 | 61,405 | DOC/ENH: Add full list of argument for DataFrame.query | closed | 2025-05-08T01:16:47 | 2025-07-18T04:46:05 | 2025-05-20T02:35:33 | https://github.com/pandas-dev/pandas/issues/61405 | true | null | null | loicdiridollou | 2 | ### Pandas version checks
- [x] I have checked that the issue still exists on the latest versions of the docs on `main` [here](https://pandas.pydata.org/docs/dev/)
### Location of the documentation
https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.query.html#pandas.DataFrame.query
### Documentation problem
This question arises when @MarcoGorelli wanted to fully type `DataFrame.query` in the stubs repo https://github.com/pandas-dev/pandas-stubs/issues/1173. Right now the extra arguments are passed through `**kwargs` but when we go through the code we see that they are the same as the ones in `pd.eval` (https://pandas.pydata.org/docs/reference/api/pandas.eval.html#pandas.eval).
### Suggested fix for documentation
Considering that this would help to expand the typehinting in that area and that the number of arguments is limited, would it be conceivable to expose all the arguments instead of relying on `**kwargs`?
For information this is the list of arguments that would need to be added:
```python
parser: Literal["pandas", "python"] = ...,
engine: Literal["python", "numexpr"] | None = ...,
local_dict: dict[_str, Any] | None = ...,
global_dict: dict[_str, Any] | None = ...,
resolvers: list[Mapping] | None = ...,
level: int = ...,
target: object | None = ...,
```
See https://github.com/pandas-dev/pandas-stubs/pull/1193 for the potential typehinting. | [
"Docs"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Thanks for raising! I agree that the docs could be clearer and that `**kwargs` should be replaced with the arguments you listed. ",
"Great thanks Arthur! I’ll prepare a PR to improve function signature and\r\ndocs, it is helpful for the stubs in particular.\r\n\r\nOn Thu, May 8, 2025 at 3:20 AM Arthur Laureus Wigo ***@***.***>\r\nwrote:\r\n\r\n> *arthurlw* left a comment (pandas-dev/pandas#61405)\r\n> <https://github.com/pandas-dev/pandas/issues/61405#issuecomment-2862037343>\r\n>\r\n> Thanks for raising! I agree that the docs could be clearer and that\r\n> **kwargs should be replaced with the arguments you listed.\r\n>\r\n> —\r\n> Reply to this email directly, view it on GitHub\r\n> <https://github.com/pandas-dev/pandas/issues/61405#issuecomment-2862037343>,\r\n> or unsubscribe\r\n> <https://github.com/notifications/unsubscribe-auth/AMQNSQEQVYRYHMEHRB6VDOD25MASRAVCNFSM6AAAAAB4VHSL36VHI2DSMVQWIX3LMV43OSLTON2WKQ3PNVWWK3TUHMZDQNRSGAZTOMZUGM>\r\n> .\r\n> You are receiving this because you authored the thread.Message ID:\r\n> ***@***.***>\r\n>\r\n"
] |
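A hypothetical sketch (not the actual pandas signature) of what replacing `**kwargs` with explicit keyword-only arguments could look like, using the argument list from the report; the stand-in body just echoes the resolved arguments:

```python
from __future__ import annotations

from typing import Any, Literal

# Hypothetical signature; names and defaults mirror the report's list,
# not pandas' actual implementation.
def query(
    expr: str,
    *,
    parser: Literal["pandas", "python"] = "pandas",
    engine: Literal["python", "numexpr"] | None = None,
    local_dict: dict[str, Any] | None = None,
    global_dict: dict[str, Any] | None = None,
    resolvers: list[dict] | None = None,
    level: int = 0,
    target: object | None = None,
) -> dict[str, Any]:
    # Stand-in body: echo the resolved arguments so the defaults are visible.
    return {"expr": expr, "parser": parser, "engine": engine, "level": level}

print(query("a > b", engine="python"))
```

Making the arguments keyword-only would also let type checkers and the stubs validate calls that previously disappeared into `**kwargs`.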
3,047,207,237 | 61,404 | BLD: allow to build with non-MSVC compilers on Windows | closed | 2025-05-07T21:25:47 | 2025-06-17T04:30:04 | 2025-06-16T19:44:46 | https://github.com/pandas-dev/pandas/pull/61404 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61404 | https://github.com/pandas-dev/pandas/pull/61404 | lazka | 5 | Always passing --vsenv to meson means pandas can't be built with gcc/clang
on Windows.
Instead add it to the cibuildwheel config so MSVC is still forced in CI
when building wheels, and in various places where it is built via pip. | [
"Stale"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"I understand that this is an edge case on Windows, and that mingw is not officially supported, but numpy/scipy also don't force msvc, so I though it's worth a try proposing this.\r\n\r\nFeedback welcome.",
"Thanks for the PR. This makes sense, although I think the challenge is more comprehensively how we document the process for users that want to build on Windows. \r\n\r\nI have pretty limited Windows knowledge, but AFAIU we only document currently for the MSVC approach?\r\nhttps://pandas.pydata.org/pandas-docs/stable/development/contributing_environment.html#step-1-install-a-c-compiler\r\n\r\nSo if we want to go this route, I think just need to make it clearer to contributors how they can opt in or out of the the different toolchains on Windows",
"This pull request is stale because it has been open for thirty days with no activity. Please [update](https://pandas.pydata.org/pandas-docs/stable/development/contributing.html#updating-your-pull-request) and respond to this comment if you're still interested in working on this.",
"Thanks for the pull request, but it appears to have gone stale. If interested in continuing, please merge in the main branch, address any review comments and/or failing tests, and we can reopen.",
"Nothing I can decide or fix (there is no review or failing tests). If someone is interested in this again, please ping me.\r\n\r\nWe will continue patching pandas downstream in the meantime."
] |
3,045,379,589 | 61,403 | BUG: guess_datetime_format cannot infer iso 8601 format | closed | 2025-05-07T09:41:27 | 2025-05-08T11:01:15 | 2025-05-07T17:04:01 | https://github.com/pandas-dev/pandas/issues/61403 | true | null | null | Thomath | 1 | ### Pandas version checks
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [ ] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
import pandas as pd
# UserWarning: Could not infer format, so each element will be parsed individually, falling back to `dateutil`. To ensure parsing is consistent and as-expected, please specify a format.
pd.to_datetime(
pd.Series(['2025-05-05 20:25:22+00:00', '2025-05-05 12:04:52+00:00'])
)
# no warning
pd.to_datetime(
pd.Series(['2025-05-05 20:25:22+00:00'])
)
# No warning
pd.to_datetime(
pd.Series(['2025-05-05 12:03:08+00:00', '2025-05-05 12:04:52+00:00']),
)
```
### Issue Description
When running `pd.to_datetime(pd.Series(['2025-05-05 20:25:22+00:00', '2025-05-05 12:04:52+00:00']))` the following warning is raised:
> UserWarning: Could not infer format, so each element will be parsed individually, falling back to `dateutil`. To ensure parsing is consistent and as-expected, please specify a format.
This is because `guess_datetime_format` cannot infer a format for the first given timestamp '2025-05-05 20:25:22+00:00'.
### Expected Behavior
No warning is raised.
### Installed Versions
<details>
INSTALLED VERSIONS
------------------
commit : 0691c5cf90477d3503834d983f69350f250a6ff7
python : 3.12.10
python-bits : 64
OS : Linux
OS-release : 6.8.0-58-generic
Version : #60~22.04.1-Ubuntu SMP PREEMPT_DYNAMIC Fri Mar 28 16:09:21 UTC 2
machine : x86_64
processor : x86_64
byteorder : little
LC_ALL : None
LANG : en_US.UTF-8
LOCALE : en_US.UTF-8
pandas : 2.2.3
numpy : 1.26.4
pytz : 2025.2
dateutil : 2.9.0.post0
pip : 25.0.1
Cython : None
sphinx : None
IPython : None
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : None
blosc : None
bottleneck : None
dataframe-api-compat : None
fastparquet : None
fsspec : None
html5lib : None
hypothesis : None
gcsfs : None
jinja2 : 3.1.6
lxml.etree : None
matplotlib : 3.10.1
numba : None
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
psycopg2 : 2.9.10
pymysql : None
pyarrow : None
pyreadstat : None
pytest : 8.3.5
python-calamine : None
pyxlsb : None
s3fs : None
scipy : 1.15.2
sqlalchemy : 2.0.40
tables : None
tabulate : None
xarray : None
xlrd : None
xlsxwriter : None
zstandard : None
tzdata : 2025.2
qtpy : None
pyqt5 : None
</details>
| [
"Bug",
"Datetime",
"Warnings"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Thanks for the report. I can reproduce on 2.2.x, but this warning does not appear on main. This will be fixed in pandas 3.0. Closing."
] |
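Both strings in the report are valid ISO 8601, which the stdlib parses consistently; a minimal check, independent of pandas' `guess_datetime_format`:

```python
from datetime import datetime, timedelta

stamps = ['2025-05-05 20:25:22+00:00', '2025-05-05 12:04:52+00:00']
# Both parse to timezone-aware datetimes with a +00:00 offset.
parsed = [datetime.fromisoformat(s) for s in stamps]
print(parsed)
```

On the pandas side, passing an explicit `format="%Y-%m-%d %H:%M:%S%z"` to `to_datetime` sidesteps the inference (and the warning) entirely on affected versions.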
3,045,217,167 | 61,402 | BUG: Duplicate columns allowed on `merge` if originating from separate dataframes | closed | 2025-05-07T08:50:02 | 2025-06-06T14:12:22 | 2025-06-06T14:12:22 | https://github.com/pandas-dev/pandas/issues/61402 | true | null | null | nikaltipar | 8 | ### Pandas version checks
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [x] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
import pandas as pd
df1 = pd.DataFrame({"col1":[1], "col2":[2]})
df2 = pd.DataFrame({"col1":[1], "col2":[2], "col2_dup":[3]})
pd.merge(df1, df2, on="col1", suffixes=("_dup", ""))
# Observe (1)
pd.merge(df1, df2, on="col1", suffixes=("", "_dup"))
# Observe (2)
```
### Issue Description
Case 1 provides the following result:
```
col1 col2_dup col2 col2_dup
0 1 2 2 3
```
Case 2 results in an exception:
```
pandas.errors.MergeError: Passing 'suffixes' which cause duplicate columns {'col2_dup'} is not allowed.
```
While the MergeError in this case does make sense (ideally duplicate columns should not be allowed as they might cause confusion), the same issue is observed in the first case and no exception is raised.
### Expected Behavior
Since this bug is about consistency, one of the following two should happen:
- An error is raised in both cases.
- No error is raised in either case, and the duplicate column is allowed.
### Installed Versions
<details>
INSTALLED VERSIONS
------------------
commit : 0691c5cf90477d3503834d983f69350f250a6ff7
python : 3.11.7
python-bits : 64
OS : Windows
OS-release : 10
Version : 10.0.22631
machine : AMD64
processor : Intel64 Family 6 Model 170 Stepping 4, GenuineIntel
byteorder : little
LC_ALL : None
LANG : None
LOCALE : English_United States.1252
pandas : 2.2.3
numpy : 2.2.5
pytz : 2025.2
dateutil : 2.9.0.post0
pip : 23.2.1
Cython : None
sphinx : None
IPython : None
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : None
blosc : None
bottleneck : None
dataframe-api-compat : None
fastparquet : None
fsspec : None
html5lib : None
hypothesis : None
gcsfs : None
jinja2 : None
lxml.etree : None
matplotlib : None
numba : None
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
psycopg2 : None
pymysql : None
pyarrow : None
pyreadstat : None
pytest : None
python-calamine : None
pyxlsb : None
s3fs : None
scipy : None
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : None
xlsxwriter : None
zstandard : None
tzdata : 2025.2
qtpy : None
pyqt5 : None
</details>
| [
"Bug",
"Reshaping"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Possibly related #13659",
"Thanks for the report! Agreed both should raise. PRs to fix are welcome!",
"Ok, I'll see if I can squeeze this in for this month. It should easy enough to fix. Of course, if anyone else wants to take it up, feel free to!",
"take",
"Hii I'm new to open source and pandas internals, but I'd love to try fixing this. I might ask a few beginner questions as I go — hope that's okay!",
"@nikaltipar @samruddhibaviskar11 Just a reminder: the issue already has a PR to address it — https://github.com/pandas-dev/pandas/pull/61422",
"Sorry I didn't make a note here I was working on this. Apologies @samruddhibaviskar11. I'm just finalizing documentation and finished revisions as recommended. Do I need to ask to \"take\" here? ",
"take"
] |
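A hypothetical sketch (plain Python, not pandas internals) of the consistency check the report asks for: after applying suffixes to overlapping columns, any resulting duplicate name raises, regardless of which suffix produced the collision:

```python
def apply_suffixes(left_cols, right_cols, suffixes, on):
    """Rename overlapping columns and reject any resulting duplicates."""
    lsuf, rsuf = suffixes
    overlap = (set(left_cols) & set(right_cols)) - set(on)
    left_out = [c + lsuf if c in overlap else c for c in left_cols]
    right_out = [c + rsuf if c in overlap else c
                 for c in right_cols if c not in on]
    merged = left_out + right_out
    dups = {c for c in merged if merged.count(c) > 1}
    if dups:
        raise ValueError(f"suffixes cause duplicate columns {dups}")
    return merged

# Mirrors case (1) from the report: 'col2' + '_dup' collides with the
# pre-existing 'col2_dup' column of the right frame.
try:
    apply_suffixes(['col1', 'col2'], ['col1', 'col2', 'col2_dup'],
                   ('_dup', ''), on=['col1'])
except ValueError as e:
    print(e)
```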
3,043,719,829 | 61,401 | ENH: access sliced dataframe from rolling.cov | open | 2025-05-06T18:51:17 | 2025-07-18T20:46:55 | null | https://github.com/pandas-dev/pandas/issues/61401 | true | null | null | srkunze | 1 | ### Feature Type
- [x] Adding new functionality to pandas
- [x] Changing existing functionality in pandas
- [ ] Removing existing functionality in pandas
### Problem Description
In a current project, I iterate over `df.rolling(window).cov(pairwise=True)`. Currently, I back-calculate what I suspect to be the start of each window from the index value of the cov() result and the window offset. Then I slice the original df again to recover the window.
It would be great to iterate efficiently over the original df simultaneously with the cov values (and possibly with all the other window functions).
### Feature Description
An idea off the top of my head:
```
for window, cov in df.rolling(window).roll("window", "cov_pairwise"):
...
# window equals df.loc[start:end]
# cov equals df.loc[start:end].cov()
# start equals window.index[0]
# end equals window.index[-1]
...
```
### Alternative Solutions
I don't know any. Maybe there is already a way to do this.
Additionally, `roll` could allow efficient slicing to avoid useless calculations
```
for window, cov in df.rolling(window).roll("window", "cov_pairwise")[-1000:]:
...
```
### Additional Context
_No response_ | [
"Enhancement",
"Window",
"Needs Triage"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"For increased clarity, could you please add a minimal and fully reproducible example of your current methodology and explain how your proposed feature would improve efficiency (in terms of time complexity)?"
] |
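A plain-Python sketch of the requested iteration (assumed semantics, not pandas internals): each window is yielded alongside its covariance, so the caller never has to back-calculate window boundaries:

```python
def rolling_windows(values, window):
    # Yield each fully-formed trailing window, matching rolling() semantics
    # for complete windows.
    for end in range(window, len(values) + 1):
        yield values[end - window:end]

def cov(xs, ys):
    # Sample covariance with the n-1 denominator, as pandas uses by default.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (n - 1)

x = [1.0, 2.0, 4.0, 7.0, 11.0]
y = [2.0, 1.0, 5.0, 6.0, 12.0]
for wx, wy in zip(rolling_windows(x, 3), rolling_windows(y, 3)):
    print(wx, wy, cov(wx, wy))
```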
3,043,109,140 | 61,400 | BUG: Fix naive timestamps inheriting timezone from previous timestamps in to_datetime with ISO8601 format | closed | 2025-05-06T14:50:19 | 2025-05-06T18:29:26 | 2025-05-06T18:29:19 | https://github.com/pandas-dev/pandas/pull/61400 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61400 | https://github.com/pandas-dev/pandas/pull/61400 | myenugula | 1 | - [x] closes #61389 (Replace xxxx with the GitHub issue number)
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [x] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [x] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| [
"Timezones"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Thanks @myenugula "
] |
3,042,845,468 | 61,399 | BUG: round on object columns no longer raises a TypeError | closed | 2025-05-06T13:24:43 | 2025-05-21T15:56:25 | 2025-05-21T00:33:33 | https://github.com/pandas-dev/pandas/pull/61399 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61399 | https://github.com/pandas-dev/pandas/pull/61399 | KevsterAmp | 6 | - [x] closes #61206 (Replace xxxx with the GitHub issue number)
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [x] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [x] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| [
"Bug",
"Regression",
"Numeric Operations"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"`pre-commit.ci` is timing out and the 2 failed unit tests are canceled operations. ",
"@KevsterAmp could you target this PR to `main`? We have a process to backport this PR to the `2.3.x` branch after it is merged to main",
"Rebased to main and forced push the branch.\r\n\r\n@mroeschke since we're targetting to main. Should I add to `whatsnew/v3.0.0.rst` and remove the current `whatsnew/v2.3.0.rst`?",
"> Should I add to whatsnew/v3.0.0.rst and remove the current whatsnew/v2.3.0.rst\r\n\r\nNo need. It's implied that all fixes in 2.3 apply to 3.0",
"Thanks @KevsterAmp ",
"Owee, I'm MrMeeseeks, Look at me.\n\nThere seem to be a conflict, please backport manually. Here are approximate instructions:\n\n1. Checkout backport branch and update it.\n\n```\ngit checkout 2.3.x\ngit pull\n```\n\n2. Cherry pick the first parent branch of the this PR on top of the older branch:\n```\ngit cherry-pick -x -m1 6a2da7ad16cff82f0eadbec04e921baf6c0ae8fb\n```\n\n3. You will likely have some merge/cherry-pick conflict here, fix them and commit:\n\n```\ngit commit -am 'Backport PR #61399: BUG: round on object columns no longer raises a TypeError'\n```\n\n4. Push to a named branch:\n\n```\ngit push YOURFORK 2.3.x:auto-backport-of-pr-61399-on-2.3.x\n```\n\n5. Create a PR against branch 2.3.x, I would have named this PR:\n\n> \"Backport PR #61399 on branch 2.3.x (BUG: round on object columns no longer raises a TypeError)\"\n\nAnd apply the correct labels and milestones.\n\nCongratulations — you did some good work! Hopefully your backport PR will be tested by the continuous integration and merged soon!\n\nRemember to remove the `Still Needs Manual Backport` label once the PR gets merged.\n\nIf these instructions are inaccurate, feel free to [suggest an improvement](https://github.com/MeeseeksBox/MeeseeksDev).\n "
] |
3,041,395,327 | 61,398 | BUG: Slower `DataFrame.plot` with `DatetimeIndex` | open | 2025-05-06T03:15:39 | 2025-06-03T04:08:25 | null | https://github.com/pandas-dev/pandas/issues/61398 | true | null | null | Abdelgha-4 | 7 | ### Pandas version checks
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [x] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
# Imports & data generation
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
num_rows = 500
num_cols = 2000
index = pd.date_range(start="2020-01-01", periods=num_rows, freq="D")
test_df = pd.DataFrame(np.random.randn(num_rows, num_cols).cumsum(axis=0), index=index)
# Very Slow plot (1m 11.6s)
test_df.plot(legend=False, figsize=(12, 8))
plt.show()
# Much faster Plot using this workaround: (6.1s)
# 1. Plot a single column with dates to copy the right ticks
ax1 = test_df.iloc[:, 0].plot(figsize=(12, 6), legend=False)
xticks = ax1.get_xticks()
xticklabels = [label.get_text() for label in ax1.get_xticklabels()]
plt.close(ax1.figure)
# 2. Faster plot with no date index
ax2 = test_df.reset_index(drop=True).plot(legend=False, figsize=(12, 8))
# 3. Inject the date X axis info
num_ticks = len(xticks)
new_xticks = np.linspace(0, num_rows - 1, num_ticks)
ax2.set_xlim(0, num_rows - 1)
ax2.set_xticks(new_xticks)
ax2.set_xticklabels(xticklabels)
plt.show()
```
### Issue Description
Plotting a large DataFrame with a `DatetimeIndex` and many rows and columns results in extremely slow rendering times. Surprisingly, this can be mitigated by first plotting a single column to generate the correct ticks and labels, then resetting the index and copying those ticks over before plotting the full DataFrame, yielding an ~11x speedup. This suggests that similar logic could be applied internally (if found to be consistent) to improve performance.
### Expected Behavior
No large difference in plotting time depending on the index type, especially since it is avoidable with the trick above.
### Installed Versions
<details>
INSTALLED VERSIONS
------------------
commit : d9cdd2ee5a58015ef6f4d15c7226110c9aab8140
python : 3.12.4.final.0
python-bits : 64
OS : Windows
OS-release : 10
Version : 10.0.19045
machine : AMD64
processor : Intel64 Family 6 Model 142 Stepping 9, GenuineIntel
byteorder : little
LC_ALL : None
LANG : None
LOCALE : fr_FR.cp1252
pandas : 2.2.2
numpy : 2.0.1
pytz : 2024.1
dateutil : 2.9.0.post0
setuptools : 75.3.0
pip : 25.0.1
Cython : None
pytest : 8.3.3
hypothesis : None
sphinx : None
blosc : None
feather : None
xlsxwriter : None
lxml.etree : 5.3.0
html5lib : None
pymysql : None
psycopg2 : None
jinja2 : 3.1.4
IPython : 8.26.0
pandas_datareader : None
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : 4.12.3
bottleneck : None
dataframe-api-compat : None
fastparquet : None
fsspec : None
gcsfs : None
matplotlib : 3.9.2
numba : None
numexpr : 2.10.1
odfpy : None
openpyxl : 3.1.5
pandas_gbq : None
pyarrow : 17.0.0
pyreadstat : None
python-calamine : None
pyxlsb : None
s3fs : None
scipy : 1.14.1
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : None
zstandard : None
tzdata : 2024.1
qtpy : None
pyqt5 : None
</details>
| [
"Datetime",
"Visualization",
"Performance"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Thanks for the report, confirmed on main. Further investigations and PRs to fix are welcome!",
"take",
"For anyone interested, I've found an even faster way to produce the same plot:\n```python\n# Additional imports\nimport itertools\nfrom matplotlib.collections import LineCollection\n\n# 1. Same as above, plot a single column to copy the ticks\nax1 = test_df.iloc[:, 0].plot(figsize=(12, 6), legend=False)\nxticks = ax1.get_xticks()\nxticklabels = [label.get_text() for label in ax1.get_xticklabels()]\nplt.close(ax1.figure)\n\n# 2. This time using LineCollection\nx = np.arange(len(test_df.index))\nlines = [np.column_stack([x, test_df[col].values]) for col in test_df.columns]\ndefault_colors = plt.rcParams[\"axes.prop_cycle\"].by_key()[\"color\"]\ncolor_cycle = list(itertools.islice(itertools.cycle(default_colors), len(lines)))\n\nline_collection = LineCollection(lines, colors=color_cycle)\nfig, ax2 = plt.subplots(figsize=(10, 5))\nax2.add_collection(line_collection)\nax2.set_xlim(0, num_rows - 1)\nax2.margins(y=0.05)\n\n# Injecting ticks, same as above\nax2.set_xticks(np.linspace(0, num_rows - 1, len(xticks)))\nax2.set_xticklabels(xticklabels)\n\nplt.tight_layout()\nplt.show()\n```\n\nThis is 2.5x faster than my proposed workaround and 27x faster than `DataFrame.plot`.\n\n@rhshadrach Please let me know if this is worth being a separate issue, or maybe out of scope.",
"Thanks for the work here @Abdelgha-4 - at a glance that looks good, but it'd be more informative to see what this would look like in the pandas code itself. Would you be willing to put up a PR?",
"Unfortunately I won't be able to work on a PR for the moment, but I've created a separate issue regarding this here: #61532, so that it's tracked and can be picked up by interested contributors.",
"@Abdelgha-4 - can you help me understand why we need a 2nd issue for this?",
"@rhshadrach IMO each issue adresses a different type of performance bottleneck:\n\n- The inefficiency in plotting `DatetimeIndex`, fully solvable using existing pandas functionality alone. So the focus there is on optimizing how pandas handles datetime axes internally.\n\n- New proposed structural change: using LineCollection instead of many Line2D objects. This involves integrating a Matplotlib feature that pandas plotting doesn't currently use, and could unlock consistent speedups for all large DataFrames — even when the index type isn't the bottleneck.\n\n\nI judged that you can work on either of them without having to know about the other one, hence the separation. You can ofc disagree with the rationale here, in which case please feel free to close it."
] |
3,040,183,733 | 61,397 | [pre-commit.ci] pre-commit autoupdate | closed | 2025-05-05T16:29:15 | 2025-05-05T17:24:34 | 2025-05-05T17:24:05 | https://github.com/pandas-dev/pandas/pull/61397 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61397 | https://github.com/pandas-dev/pandas/pull/61397 | pre-commit-ci[bot] | 0 | <!--pre-commit.ci start-->
updates:
- [github.com/astral-sh/ruff-pre-commit: v0.11.4 → v0.11.8](https://github.com/astral-sh/ruff-pre-commit/compare/v0.11.4...v0.11.8)
- [github.com/pre-commit/mirrors-clang-format: v20.1.0 → v20.1.3](https://github.com/pre-commit/mirrors-clang-format/compare/v20.1.0...v20.1.3)
- [github.com/trim21/pre-commit-mirror-meson: v1.7.2 → v1.8.0](https://github.com/trim21/pre-commit-mirror-meson/compare/v1.7.2...v1.8.0)
<!--pre-commit.ci end--> | [
"Code Style"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] |
3,037,631,952 | 61,396 | Fix #60766: .map/.apply would convert element type for extension array | open | 2025-05-03T21:54:02 | 2025-08-20T17:10:45 | null | https://github.com/pandas-dev/pandas/pull/61396 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61396 | https://github.com/pandas-dev/pandas/pull/61396 | pedromfdiogo | 1 | - [x] closes #60766
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [x] Added an entry in the latest `doc/source/whatsnew/v3.0.0.rst` file if fixing a bug or adding a new feature.
`Int32Dtype` represents integers with support for null values (`pd.NA`). However, when using `.map(f)` or `.apply(f)`, the elements passed to `f` are converted to `float64`, and `pd.NA` is transformed into `np.nan`.
This happens because `.map()` and `.apply()` internally use numpy, which automatically converts the data to `float64`, even when the original dtype is `Int32Dtype`.
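As an illustration only (not part of the patch itself), the coercion described above can be sidestepped today by casting to object dtype before mapping, which keeps the Python integers and `pd.NA` intact. A hedged sketch:

```python
import pandas as pd

s = pd.Series([1, 2, pd.NA], dtype="Int32")

# Workaround sketch: casting to object before .map() preserves the
# original integer values and pd.NA instead of coercing to float64/NaN.
doubled = s.astype(object).map(lambda x: x * 2, na_action="ignore")
print(doubled.tolist())  # [2, 4, <NA>]
```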
The fix (removing the `to_numpy()` call) ensures that when using `.map()` or `.apply()`, the elements in the series retain their original type (`Int32`, `Float64`, `boolean`, etc.), preventing unnecessary conversions to `float64` and ensuring that `pd.NA` remains correctly handled. | [
"Bug",
"Apply",
"pyarrow dtype retention"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"i suspect the correct thing to do for map involves the just-implemented EA._cast_pointwise_result"
] |
3,037,427,843 | 61,395 | BUG: pd.to_datetime failing to parse with exception error 01-Jun-2025 in sequence with 31-May-2025 | closed | 2025-05-03T14:04:02 | 2025-05-04T23:00:29 | 2025-05-03T15:12:00 | https://github.com/pandas-dev/pandas/issues/61395 | true | null | null | johndrummond | 4 | ### Pandas version checks
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [x] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
import pandas as pd
import sys
print(f"Pandas version: {pd.__version__}")
print(f"Python version: {sys.version}")
df = pd.DataFrame({'day': ["31-May-2025","01-Jun-2025","02-Jun-2025"]})
pd.to_datetime(df['day'])
```
### Issue Description
gives
'Pandas version: 2.2.3'
'Python version: 3.11.11 (main, Dec 4 2024, 08:55:07) [GCC 11.4.0]'
ValueError: time data "01-Jun-2025" doesn't match format "%d-%B-%Y", at position 1. You might want to try:
- passing `format` if your strings have a consistent format;
- passing `format='ISO8601'` if your strings are all ISO8601 but not necessarily in exactly the same format;
- passing `format='mixed'`, and the format will be inferred for each element individually. You might want to use `dayfirst` alongside this.
File <command-6844361422137531>, line 2
1 df = pd.DataFrame({'day': ["31-May-2025","01-Jun-2025","02-Jun-2025"]})
----> 2 pd.to_datetime(df['day'])
File /local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.11/site-packages/pandas/core/tools/datetimes.py:1067, in to_datetime(arg, errors, dayfirst, yearfirst, utc, format, exact, unit, infer_datetime_format, origin, cache)
1065 result = arg.map(cache_array)
1066 else:
-> 1067 values = convert_listlike(arg._values, format)
1068 result = arg._constructor(values, index=arg.index, name=arg.name)
1069 elif isinstance(arg, (ABCDataFrame, abc.MutableMapping)):
File /local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.11/site-packages/pandas/core/tools/datetimes.py:433, in _convert_listlike_datetimes(arg, format, name, utc, unit, errors, dayfirst, yearfirst, exact)
431 # `format` could be inferred, or user didn't ask for mixed-format parsing.
432 if format is not None and format != "mixed":
--> 433 return _array_strptime_with_fallback(arg, name, utc, format, exact, errors)
435 result, tz_parsed = objects_to_datetime64(
436 arg,
437 dayfirst=dayfirst,
(...)
441 allow_object=True,
### Expected Behavior
It should parse correctly with no exception.
Interestingly, the failure happens at the transition from the end of May to the start of June: a series starting with 01-Jun-2025 works, and one ending with 31-May-2025 works.
dateparser.parse handles these values without issue.
I'm guessing pandas infers a full month name from the "May" in the first value, when in fact it is a three-character abbreviation.
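For reference, passing the explicit format suggested in the error message avoids the inference step entirely; a minimal sketch:

```python
import pandas as pd

days = pd.Series(["31-May-2025", "01-Jun-2025", "02-Jun-2025"])

# Explicit %b (abbreviated month name) removes the ambiguity that format
# inference hits when the first value is "May" (abbreviation == full name).
parsed = pd.to_datetime(days, format="%d-%b-%Y")
print(parsed.dt.strftime("%Y-%m-%d").tolist())
# ['2025-05-31', '2025-06-01', '2025-06-02']
```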
### Installed Versions
<details>
running in databricks notebook - checked in a separate version of python locally, with pandas 2.2.1
'Pandas version: 2.2.3'
'Python version: 3.11.11 (main, Dec 4 2024, 08:55:07) [GCC 11.4.0]' for the notebook.
pd.show_versions() doesn't return anything
locally
Pandas version: 2.2.1
Python version: 3.12.2 (main, Mar 25 2024, 11:48:28) [Clang 15.0.0 (clang-1500.3.9.4)]
and pd.show_versions() gives.
FileNotFoundError Traceback (most recent call last)
File /Users/J.Drummond/Documents/wip/python/truth_soc_[1](https://file+.vscode-resource.vscode-cdn.net/Users/J.Drummond/Documents/wip/python/truth_soc_1.py:1).py:2
1 # %%
----> [2](https://file+.vscode-resource.vscode-cdn.net/Users/J.Drummond/Documents/wip/python/truth_soc_1.py:2) pd.show_versions()
File ~/Documents/wip/python/.venv/lib/python3.12/site-packages/pandas/util/_print_versions.py:141, in show_versions(as_json)
[104](https://file+.vscode-resource.vscode-cdn.net/Users/J.Drummond/Documents/wip/python/~/Documents/wip/python/.venv/lib/python3.12/site-packages/pandas/util/_print_versions.py:104) """
[105](https://file+.vscode-resource.vscode-cdn.net/Users/J.Drummond/Documents/wip/python/~/Documents/wip/python/.venv/lib/python3.12/site-packages/pandas/util/_print_versions.py:105) Provide useful information, important for bug reports.
[106](https://file+.vscode-resource.vscode-cdn.net/Users/J.Drummond/Documents/wip/python/~/Documents/wip/python/.venv/lib/python3.12/site-packages/pandas/util/_print_versions.py:106)
(...)
[138](https://file+.vscode-resource.vscode-cdn.net/Users/J.Drummond/Documents/wip/python/~/Documents/wip/python/.venv/lib/python3.12/site-packages/pandas/util/_print_versions.py:138) ...
[139](https://file+.vscode-resource.vscode-cdn.net/Users/J.Drummond/Documents/wip/python/~/Documents/wip/python/.venv/lib/python3.12/site-packages/pandas/util/_print_versions.py:139) """
[140](https://file+.vscode-resource.vscode-cdn.net/Users/J.Drummond/Documents/wip/python/~/Documents/wip/python/.venv/lib/python3.12/site-packages/pandas/util/_print_versions.py:140) sys_info = _get_sys_info()
--> [141](https://file+.vscode-resource.vscode-cdn.net/Users/J.Drummond/Documents/wip/python/~/Documents/wip/python/.venv/lib/python3.12/site-packages/pandas/util/_print_versions.py:141) deps = _get_dependency_info()
[143](https://file+.vscode-resource.vscode-cdn.net/Users/J.Drummond/Documents/wip/python/~/Documents/wip/python/.venv/lib/python3.12/site-packages/pandas/util/_print_versions.py:143) if as_json:
[144](https://file+.vscode-resource.vscode-cdn.net/Users/J.Drummond/Documents/wip/python/~/Documents/wip/python/.venv/lib/python3.12/site-packages/pandas/util/_print_versions.py:144) j = {"system": sys_info, "dependencies": deps}
File ~/Documents/wip/python/.venv/lib/python3.12/site-packages/pandas/util/_print_versions.py:98, in _get_dependency_info()
[96](https://file+.vscode-resource.vscode-cdn.net/Users/J.Drummond/Documents/wip/python/~/Documents/wip/python/.venv/lib/python3.12/site-packages/pandas/util/_print_versions.py:96) result: dict[str, JSONSerializable] = {}
[97](https://file+.vscode-resource.vscode-cdn.net/Users/J.Drummond/Documents/wip/python/~/Documents/wip/python/.venv/lib/python3.12/site-packages/pandas/util/_print_versions.py:97) for modname in deps:
---> [98](https://file+.vscode-resource.vscode-cdn.net/Users/J.Drummond/Documents/wip/python/~/Documents/wip/python/.venv/lib/python3.12/site-packages/pandas/util/_print_versions.py:98) mod = import_optional_dependency(modname, errors="ignore")
[99](https://file+.vscode-resource.vscode-cdn.net/Users/J.Drummond/Documents/wip/python/~/Documents/wip/python/.venv/lib/python3.12/site-packages/pandas/util/_print_versions.py:99) result[modname] = get_version(mod) if mod else None
[100](https://file+.vscode-resource.vscode-cdn.net/Users/J.Drummond/Documents/wip/python/~/Documents/wip/python/.venv/lib/python3.12/site-packages/pandas/util/_print_versions.py:100) return result
...
</details>
| [
"Bug",
"Datetime"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Thanks for the report.\n\n> interestingly it's having the transition end of may. start of June. Starting with 01-Jun-2025 works, ending with 31-May-2025 works\n\nWhen given no other information, pandas needs to infer the format from the first value. Starting with `May`, the short-form and the long-form of the month are the same. Thus pandas needs to guess. Regardless of how pandas guesses, some guesses will be wrong.\n\nThe resolution is provided in the error message: pass a format string. In this case, it's `format=\"%d-%b-%Y\"`.\n\nClosing.",
"see https://github.com/pandas-dev/pandas/issues/58328 for additional context - duplicate of this",
"Sorry to have missed the previous discussion. Interesting if one starts with any month aside from May it works fine. Which is what happened for us. And then when one gets to starting in May it throws an exception. But that's not a bug :)",
"Just wondering on the guesses it could guess from more than the first value\r\nif ambiguous\r\n\r\nOn Sat, 3 May 2025, 16:12 Richard Shadrach, ***@***.***>\r\nwrote:\r\n\r\n> *rhshadrach* left a comment (pandas-dev/pandas#61395)\r\n> <https://github.com/pandas-dev/pandas/issues/61395#issuecomment-2848667773>\r\n>\r\n> Thanks for the report.\r\n>\r\n> interestingly it's having the transition end of may. start of June.\r\n> Starting with 01-Jun-2025 works, ending with 31-May-2025 works\r\n>\r\n> When given no other information, pandas needs to infer the format from the\r\n> first value. Starting with May, the short-form and the long-form of the\r\n> month are the same. Thus pandas needs to guess. Regardless of how pandas\r\n> guesses, some guesses will be wrong.\r\n>\r\n> The resolution is provided in the error message: pass a format string. In\r\n> this case, it's format=\"%d-%b-%Y\".\r\n>\r\n> Closing.\r\n>\r\n> —\r\n> Reply to this email directly, view it on GitHub\r\n> <https://github.com/pandas-dev/pandas/issues/61395#issuecomment-2848667773>,\r\n> or unsubscribe\r\n> <https://github.com/notifications/unsubscribe-auth/AAK223ZNX42XMUSWO4COI7L24TMFNAVCNFSM6AAAAAB4LYTH26VHI2DSMVQWIX3LMV43OSLTON2WKQ3PNVWWK3TUHMZDQNBYGY3DONZXGM>\r\n> .\r\n> You are receiving this because you authored the thread.Message ID:\r\n> ***@***.***>\r\n>\r\n"
] |
3,037,228,054 | 61,394 | DOC: add `api.types.is_dtype_equal` into document | closed | 2025-05-03T07:20:19 | 2025-05-03T20:00:54 | 2025-05-03T20:00:53 | https://github.com/pandas-dev/pandas/pull/61394 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61394 | https://github.com/pandas-dev/pandas/pull/61394 | chilin0525 | 1 | - [x] closes #60905
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| [
"Docs",
"Dtype Conversions"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"@datapythonista Thank you for the review and for pointing out how to fix the CI error! 🙏 "
] |
3,036,899,526 | 61,393 | Subplot title count fix + fix for issue introduced in earlier PR | closed | 2025-05-02T22:00:03 | 2025-05-07T16:11:38 | 2025-05-07T16:11:19 | https://github.com/pandas-dev/pandas/pull/61393 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61393 | https://github.com/pandas-dev/pandas/pull/61393 | eicchen | 1 | - [x] closes #61019
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [x] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [x] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
Adds a check on the number of subplots as an alternative to the default title check, and produces a clearer error message when the number of subplots does not match the number of titles.
Additionally, includes a fix for issues introduced in PR #61340 and mentioned in #61018 | [
"Visualization"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Thanks again @eicchen "
] |
3,036,402,091 | 61,392 | DOC: Issue with the general expressiveness of the docs | closed | 2025-05-02T16:28:12 | 2025-08-05T17:09:58 | 2025-08-05T17:09:58 | https://github.com/pandas-dev/pandas/issues/61392 | true | null | null | epigramx | 4 | ### Pandas version checks
- [x] I have checked that the issue still exists on the latest versions of the docs on `main` [here](https://pandas.pydata.org/docs/dev/)
### Location of the documentation
Example: https://pandas.pydata.org/docs/reference/api/pandas.Series.dt.floor.html
### Documentation problem
Throughout the docs, the explanation of a function is often limited to a circular sentence that repeats the verb naming the function and nothing else. E.g., for `pandas.Series.dt.floor` it basically says "it does floor", and the details of the docs are restricted to the individual options and outcomes after that.
### Suggested fix for documentation
In the example of `floor`, the docs should first say in a richer sentence what floor actually does. It doesn't have to be anything big. I won't write an example of that because the docs didn't tell me what floor does. | [
"Docs",
"Needs Info"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Thanks for the report.\n\n> is often limited only to a circular sentence that repeats the verb that names the function and nothing else\n\nWhat should pandas say about `sum`? Should `sum` be explained as if the reader was unfamiliar with the mathematical operation? My answer here is no - this is okay to assume, and similarly for all standard operations. Of course one needs to decide what is \"standard\", and here there can be disagreements, but I would say the docs are okay to assume the reader is familiar with the floor operation. Descriptions of this are readily provided on searches for \"floor operation\" if users are not familiar. \n\nThat said, while I don't find this problematic I'm still open to improving the description if suggestions are provided. Marking this as Needs Info until that's provided.",
"Ah, also I see now that this just used floor as an example. I think \"docs can be improved by making them more expressive\" is not a particularly useful issue to have open - it has no good closing criterion. Making it more focused on a function or a collection of related function all with similar issues would be more helpful I think.",
"> no - this is okay to assume, and similarly for all standard operations\n\nNumpy documentation defines floor mathematically.\n",
"Thanks but agreed this issue is too nebulous to be actionable so closing. If you can identify specific parts of the documentation that need improvement with said improvement than feel free to open another issue "
] |
3,035,882,425 | 61,391 | fix MultiIndex.difference not working with PyArrow timestamps (#61382), and some ruff formatting fixes | closed | 2025-05-02T12:10:52 | 2025-05-20T16:04:42 | 2025-05-20T16:04:41 | https://github.com/pandas-dev/pandas/pull/61391 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61391 | https://github.com/pandas-dev/pandas/pull/61391 | NEREUScode | 1 | ## Problem
The `MultiIndex.difference` method fails to remove entries when the index contains PyArrow-backed timestamps (`timestamp[ns][pyarrow]`). This occurs because direct tuple comparisons with PyArrow scalar types are unreliable during membership checks, causing entries to remain unexpectedly.
**Example**:
```python
# PyArrow timestamp index
df = DataFrame(...).astype({"date": "timestamp[ns][pyarrow]"}).set_index(["id", "date"])
idx_val = df.index[0]
new_index = df.index.difference([idx_val]) # Fails to remove idx_val
```
## Solution
- **Code conversion**: map `other` values to integer codes compatible with the original index's levels.
- **Engine validation**: use the MultiIndex's internal engine for membership checks, ensuring accurate handling of PyArrow types.
- **Mask-based exclusion**: create a boolean mask to filter out matched entries, then reconstruct the index.
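Illustrative only (the actual patch works on the MultiIndex internals, and this sketch omits PyArrow): the mask-based exclusion idea on a plain MultiIndex looks like:

```python
import numpy as np
import pandas as pd

mi = pd.MultiIndex.from_tuples(
    [("a", 1), ("b", 2), ("c", 3)], names=["id", "n"]
)
other = pd.MultiIndex.from_tuples([("b", 2)], names=["id", "n"])

# Locate `other` inside `mi` via the index engine, then mask those rows out.
locs = mi.get_indexer(other)
mask = np.ones(len(mi), dtype=bool)
mask[locs[locs >= 0]] = False  # ignore entries not found (-1)
result = mi[mask]
print(list(result))  # [('a', 1), ('c', 3)]
```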
## Testing
Added a test in `pandas/tests/indexes/multi/test_setops.py` that:
- creates a MultiIndex with PyArrow timestamps;
- validates `difference` correctly removes entries;
- skips the test if PyArrow is not installed.

## Use case impact
Fixes scenarios where users filter hierarchical datasets with PyArrow timestamps, such as:
```python
# Remove specific timestamps from a time-series index
clean_index = raw_index.difference(unwanted_timestamps)
```
Closes #61382. | [] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Thanks for the pull request, but it appears to have gone stale. If interested in continuing, please merge in the main branch, address any review comments and/or failing tests, and we can reopen."
] |
3,035,805,085 | 61,390 | fix MultiIndex.difference not working with PyArrow timestamps (#61382), and some ruff formatting fixes | closed | 2025-05-02T11:30:06 | 2025-05-02T12:09:22 | 2025-05-02T12:09:22 | https://github.com/pandas-dev/pandas/pull/61390 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61390 | https://github.com/pandas-dev/pandas/pull/61390 | NEREUScode | 0 | ## Problem
The `MultiIndex.difference` method fails to remove entries when the index contains PyArrow-backed timestamps (`timestamp[ns][pyarrow]`). This occurs because direct tuple comparisons with PyArrow scalar types are unreliable during membership checks, causing entries to remain unexpectedly.
**Example**:
```python
# PyArrow timestamp index
df = DataFrame(...).astype({"date": "timestamp[ns][pyarrow]"}).set_index(["id", "date"])
idx_val = df.index[0]
new_index = df.index.difference([idx_val]) # Fails to remove idx_val
```
## Solution
- **Code conversion**: map `other` values to integer codes compatible with the original index's levels.
- **Engine validation**: use the MultiIndex's internal engine for membership checks, ensuring accurate handling of PyArrow types.
- **Mask-based exclusion**: create a boolean mask to filter out matched entries, then reconstruct the index.

## Testing
Added a test in `pandas/tests/indexes/multi/test_setops.py` that:
- creates a MultiIndex with PyArrow timestamps;
- validates `difference` correctly removes entries;
- skips the test if PyArrow is not installed.

## Use case impact
Fixes scenarios where users filter hierarchical datasets with PyArrow timestamps, such as:
```python
# Remove specific timestamps from a time-series index
clean_index = raw_index.difference(unwanted_timestamps)
```
Closes #61382. | [] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] |
3,035,796,151 | 61,389 | BUG: Incorrect Parsing of Timestamps in pd.to_datetime with Series with format="ISO8601" and UTC=True | closed | 2025-05-02T11:24:44 | 2025-05-06T18:29:21 | 2025-05-06T18:29:20 | https://github.com/pandas-dev/pandas/issues/61389 | true | null | null | PaulCalot | 3 | ### Pandas version checks
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [ ] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
import pandas as pd
# Single timestamp
raw = "2023-10-15T14:30:00"
single = pd.to_datetime(raw, utc=True, format="ISO8601")
print(single)
# Output: 2023-10-15 14:30:00+00:00 (correct)
# Series of timestamps
series = pd.Series([0, 0], index=["2023-10-15T10:30:00-12:00", raw])
converted = pd.to_datetime(series.index, utc=True, format="ISO8601")
print(converted)
# Output: 2023-10-16 02:30:00+00:00 for the second one (incorrect)
# error depends on the previous one timezone
```
### Issue Description
When using pd.to_datetime to parse a Series of timestamps with format="ISO8601" and utc=True, the parsing of a timestamp without an explicit timezone offset is incorrect and appears to depend on the timezone offset of the previous timestamp in the Series. This behavior does not occur when parsing a single timestamp.
### Expected Behavior
In this configuration, the behavior should not depend on the previous timestamp's timezone. The result should be the same as when each value is parsed individually.
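Until this is fixed, a minimal workaround sketch is to parse each string separately, so no offset state can leak between elements (accepting the cost of a Python-level loop):

```python
import pandas as pd

raw = ["2023-10-15T10:30:00-12:00", "2023-10-15T14:30:00"]

# Parsing element by element: each naive value is localized to UTC on its
# own, independent of its neighbors' offsets.
parsed = pd.DatetimeIndex(
    [pd.to_datetime(v, utc=True, format="ISO8601") for v in raw]
)
print(parsed[1])  # 2023-10-15 14:30:00+00:00
```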
### Installed Versions
<details>
INSTALLED VERSIONS
------------------
commit : 0691c5cf90477d3503834d983f69350f250a6ff7
python : 3.10.0
python-bits : 64
OS : Linux
OS-release : 5.10.0-34-amd64
Version : #1 SMP Debian 5.10.234-1 (2025-02-24)
machine : x86_64
processor :
byteorder : little
LC_ALL : None
LANG : en_US.UTF-8
LOCALE : en_US.UTF-8
pandas : 2.2.3
numpy : 1.26.4
pytz : 2024.2
dateutil : 2.9.0.post0
pip : 24.2
Cython : None
sphinx : None
IPython : None
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : None
blosc : None
bottleneck : None
dataframe-api-compat : None
fastparquet : None
fsspec : None
html5lib : None
hypothesis : None
gcsfs : None
jinja2 : 3.1.6
lxml.etree : None
matplotlib : 3.10.1
numba : None
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
psycopg2 : None
pymysql : None
pyarrow : 15.0.2
pyreadstat : None
pytest : 8.3.5
python-calamine : None
pyxlsb : None
s3fs : None
scipy : 1.15.2
sqlalchemy : 2.0.40
tables : None
tabulate : None
xarray : None
xlrd : None
xlsxwriter : None
zstandard : None
tzdata : 2025.2
qtpy : None
pyqt5 : None
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
NameError: name 'version' is not defined
</details> | [
"Bug",
"Datetime",
"Timezones"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Thanks for the clear issue! Confirmed on main. \n\nWhen you batch‐parse `[\"2023-10-15T10:30:00-12:00\", \"2023-10-15T14:30:00\"]` with `format=\"ISO8601\", utc=True`, the second (naive) timestamp wrongly reuses the `“–12:00”` offset and becomes `2023-10-16T02:30:00+00:00` instead of `2023-10-15T14:30:00+00:00`. The ISO8601 parser is retaining its last‐seen offset between parses. \n\nI believe when `utc=True`, naive timestamps should be treated as UTC.",
"An interesting case indeed. If this inconsistency can be fixed without hurting performance, I'm certainly positive on it. However I think the performance of what I believe is the common case (consistent timezone data) should weigh in here as well.\n\nFurther investigations are welcome.",
"take"
] |
3,035,790,674 | 61,388 | fix MultiIndex.difference not working with PyArrow timestamps (#61382), and some formatting fixes | closed | 2025-05-02T11:21:31 | 2025-05-02T11:26:59 | 2025-05-02T11:26:59 | https://github.com/pandas-dev/pandas/pull/61388 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61388 | https://github.com/pandas-dev/pandas/pull/61388 | NEREUScode | 0 | ## Problem
The `MultiIndex.difference` method fails to remove entries when the index contains PyArrow-backed timestamps (`timestamp[ns][pyarrow]`). This occurs because direct tuple comparisons with PyArrow scalar types are unreliable during membership checks, causing entries to remain unexpectedly.
**Example**:
```python
# PyArrow timestamp index
df = DataFrame(...).astype({"date": "timestamp[ns][pyarrow]"}).set_index(["id", "date"])
idx_val = df.index[0]
new_index = df.index.difference([idx_val]) # Fails to remove idx_val
```
## Solution
- **Code Conversion**: Map `other` values to integer codes compatible with the original index's levels.
- **Engine Validation**: Use the MultiIndex's internal engine for membership checks, ensuring accurate handling of PyArrow types.
- **Mask-Based Exclusion**: Create a boolean mask to filter out matched entries, then reconstruct the index.
## Testing
Added a test in `pandas/tests/indexes/multi/test_setops.py` that:
- Creates a MultiIndex with PyArrow timestamps.
- Validates `difference` correctly removes entries.
- Skips the test if PyArrow is not installed.
## Use Case Impact
Fixes scenarios where users filter hierarchical datasets with PyArrow timestamps, such as:
```python
# Remove specific timestamps from a time-series index
clean_index = raw_index.difference(unwanted_timestamps)
```
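The mask-based exclusion described in the solution can be sketched roughly as follows (a simplified illustration with plain dtypes, not the actual patch):

```python
import numpy as np
import pandas as pd

mi = pd.MultiIndex.from_tuples([(1, "a"), (2, "b"), (3, "c")])
other = pd.MultiIndex.from_tuples([(2, "b")])

# Engine-backed membership: integer positions of `other` in `mi` (-1 if absent)
positions = mi.get_indexer(other)

# Boolean mask that filters out matched entries, then reconstruct the index
mask = np.ones(len(mi), dtype=bool)
mask[positions[positions >= 0]] = False
result = mi[mask]  # keeps (1, "a") and (3, "c")
```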
Closes #61382. | [] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] |
3,034,492,987 | 61,387 | TYP: `npt._ArrayLikeInt_co` does not exist | open | 2025-05-01T19:02:57 | 2025-05-01T23:10:27 | null | https://github.com/pandas-dev/pandas/issues/61387 | true | null | null | jorenham | 0 | https://github.com/pandas-dev/pandas/blob/e55d90783bac30b75e7288380b15a62ab6e43f78/pandas/_typing.py#L91
I wouldn't recommend using these private internal type-aliases at all, but if you must, then you probably should import it from `numpy._typing`, because it is not exported by `numpy.typing`. | [
"Typing"
] | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] |
3,034,147,446 | 61,386 | ENH: read_csv with usecols shouldn't change column order | open | 2025-05-01T15:50:33 | 2025-06-24T16:39:46 | null | https://github.com/pandas-dev/pandas/issues/61386 | true | null | null | amarvin | 10 | ### Feature Type
- [ ] Adding new functionality to pandas
- [x] Changing existing functionality in pandas
- [ ] Removing existing functionality in pandas
### Problem Description
The documentation for `pandas.read_csv(usecols=[...])` says that it treats the iterable list of columns like an unordered set (updated in https://github.com/pandas-dev/pandas/issues/18673 and #53763), so the returned dataframe won't necessarily have the same column order. This is different behaviour from other pandas data reading methods (e.g., `pandas.read_parquet(columns=[...])`). I think the order should be preserved. If `usecols` is converted to a `set`, I think it should instead be converted to `OrderedSet` or keys of `collections.OrderedDict` (or just `dict` in Python >3.6).
### Feature Description
```py
import pandas as pd
# Example CSV file (replace with your actual file)
csv_data = """
col1,col2,col3,col4
A,1,X,10
B,2,Y,20
C,3,Z,30
"""
with open("example.csv", "w") as f:
f.write(csv_data)
# Desired column order
desired_order = ['col3', 'col1', 'col4']
# Read CSV with usecols (selects columns but doesn't order)
df = pd.read_csv("example.csv", usecols=desired_order)
print(df) # incorrect column order
# Reindex DataFrame to enforce desired order (a popular workaround that I think shouldn't be required)
# One solution is to include this line in `read_csv`, when using `usecols` kwarg
df = df[desired_order]
print(df) # correct column order
```
### Alternative Solutions
Instead of converting `usecols` to a `set`, convert it to `dict.keys()`, which preserves order in Python >3.6
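A quick illustration of the alternative (plain Python, no pandas needed): `dict.fromkeys` de-duplicates like a set while keeping insertion order.

```python
desired_order = ["col3", "col1", "col4", "col1"]  # duplicate on purpose

# set() loses the caller's ordering; dict.fromkeys keeps it while de-duplicating
as_ordered = list(dict.fromkeys(desired_order))
# as_ordered == ["col3", "col1", "col4"]
```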
### Additional Context
_No response_ | [
"Enhancement",
"IO CSV",
"Needs Triage"
] | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"This is also an issue for `pandas.read_excel(usecols=[...])`.",
"Others are confused by the current feature too and have to do a workaround: https://stackoverflow.com/a/40024462/6068036",
"take",
"Replace \n\nif usecols:\n usecols = set(usecols)\n\nWith\n\nif usecols:\n usecols = dict.fromkeys(usecols) # preserves order",
"@mroeschke could I get your opinion on this before I dig deeper into it? You were the last person to work with the function (_validate_usecols_arg) and I'm mainly worried about backwards compatibility rather than feasibility. But considering that pandas is having a major version update, it *could* be justifiable.",
"IMHO, we shouldn't make this change, but I could be convinced otherwise. There are 2 reasons:\n\n1. We do document how to preserve the order (this was introduced in https://github.com/pandas-dev/pandas/pull/19746 )\n2. The \"order\" isn't clear if the argument is a callable.\n\n",
"As promised during the sync meeting today, I went and compiled how various read functions handle columns being specified. Functions that take usecols (read_csv, read_clipboard, read_excel, and read_hdf(undocumented)) don't take into account input order, whereas functions that ask for columns instead do (hdf, feather, parquet, orc, starata, sql). \n\nFinally, there are also some that straight up don't take column specifiers. \n\nI'd expect functions that use usecols to be using the same function in the backend, but I'd have to verify it if we're planning to standardize the parameter.\n\nCSV attached below of functions tested (those with a read and write function in pandas)\n[does_it_use_order.csv](https://github.com/user-attachments/files/20499345/does_it_use_order.csv)",
"@Dr-Irv Do you think that it would still warrant further discussion? Or should I just go ahead and implement it?\n\nI think adding an optional param in read_csv would solve this issue as all the other import functions which use \"usecols\" instead of \"columns\" seem to link back to read_csv in some way. ",
"> [@Dr-Irv](https://github.com/Dr-Irv) Do you think that it would still warrant further discussion? Or should I just go ahead and implement it?\n> \n> I think adding an optional param in read_csv would solve this issue as all the other import functions which use \"usecols\" instead of \"columns\" seem to link back to read_csv in some way.\n\nCan you attend the dev meeting tomorrow (June 10) so we can discuss it there?\n",
"I do think that that would be the best option but unfortunately I have a flight during that time, so I can either hijack the end of the new contributor meeting next Wednesday or discuss is during the one after"
] |
3,034,147,251 | 61,385 | BUG: to_sql works only for strings | open | 2025-05-01T15:50:27 | 2025-05-06T02:23:31 | null | https://github.com/pandas-dev/pandas/issues/61385 | true | null | null | pranav-ds | 4 | ### Pandas version checks
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [ ] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
import pandas as pd
from sqlalchemy.types import DOUBLE
data = ...  # pandas DataFrame with timestamp, double, and string columns, along with other column types
column_types_filtered_data = {col: DOUBLE() for col in data.columns}
data.to_sql(..., dtype=column_types_filtered_data)
```
### Issue Description
For any type other than `str`, this block in `pandas.io.sql` will fail.
```
for col, my_type in dtype.items():
if not isinstance(my_type, str):
raise ValueError(f"{col} ({my_type}) not a string")
```
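For reference, the sqlite3 fallback path does work when the dtype values are SQL type strings, as that check requires (a minimal sketch with an in-memory database):

```python
import sqlite3

import pandas as pd

conn = sqlite3.connect(":memory:")
df = pd.DataFrame({"value": [1.0, 2.5]})

# With a raw DB-API connection, dtype values must be SQL type *strings*;
# SQLAlchemy type objects such as DOUBLE() require a SQLAlchemy engine instead.
df.to_sql("t", con=conn, index=False, dtype={"value": "REAL"})
out = pd.read_sql("SELECT * FROM t", con=conn)
```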
### Expected Behavior
Different datatypes should be supported.
### Installed Versions
pandas==2.2.3
| [
"Bug",
"IO SQL",
"Needs Info"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"The block in question should only be hit when pandas falls back to `sqlite3`. What kind of connection are you supplying?",
"IG, There is no problem at all you can try this solution make you will Dtype correctly \n\n`import pandas as pd\nfrom sqlalchemy import create_engine\nfrom sqlalchemy.types import DOUBLE, String, DateTime\n\n\ndata = pd.DataFrame({\n 'timestamp': pd.date_range(start='2023-01-01', periods=3, freq='D'),\n 'value': [1.23, 4.56, 5.567],\n 'category': ['A', 'B', 'C']\n})\n\n\ndata_check_DateTime = pd.DataFrame({\n 'timestamp': pd.date_range(start='2023-01-01', periods=3, freq='D')\n})\n\n\ndata_double = pd.DataFrame({\n 'value': [1.23, 4.56, 5.567]\n})\n\n# Create a dummy SQLite engine\nengine = create_engine('sqlite:///:memory:')\n\n\ncolumn_types_filtered_data = {\n 'timestamp': DateTime(),\n 'value': DOUBLE(),\n 'category': String()\n}\n\n# Writing DataFrame to SQL\ndata.to_sql('my_table', con=engine, if_exists='replace', index=False, dtype=column_types_filtered_data)\n\n\nresult_data = pd.read_sql('SELECT * FROM my_table', con=engine)\nprint(result_data)\n\n\ncolumn_types_filtered_data_datetime = {'timestamp': DateTime()}\ndata_check_DateTime.to_sql('my_table_datetime', con=engine, if_exists='replace', index=False, dtype=column_types_filtered_data_datetime)\n\n# Verify the DateTime DataFrame\nresult_datetime = pd.read_sql('SELECT * FROM my_table_datetime', con=engine)\nprint(result_datetime)\n\n# Writing the DOUBLE-only DataFrame to SQL\ncolumn_types_filtered_data_double = {'value': DOUBLE()}\ndata_double.to_sql('my_table_double', con=engine, if_exists='replace', index=False, dtype=column_types_filtered_data_double)\n\n# Verify the DOUBLE DataFrame\nresult_double = pd.read_sql('SELECT * FROM my_table_double', con=engine)\nprint(result_double)\n`\n\n",
"Pandas version checks\n\t•\tI have checked that this issue has not already been reported.\n\t•\tI have confirmed this bug exists on the latest version of pandas.\n\t•\tI have confirmed this bug exists on the main branch of pandas.\nimport pandas as pd\nfrom sqlalchemy.types import DOUBLE\n\n# Example DataFrame with timestamp, float, and string columns\ndata = pd.DataFrame({\n 'timestamp': pd.date_range(start='2023-01-01', periods=3, freq='D'),\n 'value': [1.23, 4.56, 5.67],\n 'category': ['A', 'B', 'C']\n})\n\n# Using SQLAlchemy data types\ncolumn_types_filtered_data = {col: DOUBLE() for col in data.columns}\n\n# This will raise an error if using a raw sqlite3 connection\ndata.to_sql('my_table', con='sqlite3_connection_here', dtype=column_types_filtered_data)\n\nIssue Description\n\nWhen using a raw DB-API connection (e.g., sqlite3.connect()), passing a non-string type like sqlalchemy.types.DOUBLE() to to_sql(dtype=...) raises this error:\n\nValueError: column_name (<sqlalchemy.sql.sqltypes.DOUBLE object>) not a string\n\nThis happens due to the following code in pandas.io.sql:\n\nfor col, my_type in dtype.items():\n if not isinstance(my_type, str):\n raise ValueError(f\"{col} ({my_type}) not a string\")\n\nThis block is only executed when pandas falls back to sqlite3 (or other DB-API connections without SQLAlchemy).\n\nExpected Behavior\n\nIf SQLAlchemy types are unsupported with raw DB-API connections, this should be:\n\t•\tDocumented clearly in the to_sql API docs.\n\t•\tPossibly provide a more informative error message suggesting the use of SQLAlchemy.\n\nAlternatively, a fallback mechanism could convert known SQLAlchemy types into SQL strings automatically.\n\n\nUpdate the documentation for to_sql() to specify:\n\t•\tIf using a SQLAlchemy engine, dtype can include SQLAlchemy types (DOUBLE(), String(), etc.).\n\t•\tIf using a DB-API connection, dtype must contain SQL string types (\"TEXT\", \"FLOAT\").",
"> Documented clearly in the to_sql API docs.\n\nThe API docs page says that the argument must be strings for sqlite3 legacy mode\n\n> Possibly provide a more informative error message suggesting the use of SQLAlchemy.\n\nThe error message says the argument must be strings.\n\n> Alternatively, a fallback mechanism could convert known SQLAlchemy types into SQL strings automatically.\n\nI'm negative on this - it is best that such logic is not owned by pandas."
] |
3,032,828,511 | 61,384 | BLD: Try using shared memory utilities in Cython to reduce wheel sizes | closed | 2025-05-01T00:01:38 | 2025-05-16T02:07:29 | 2025-05-16T01:14:44 | https://github.com/pandas-dev/pandas/pull/61384 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61384 | https://github.com/pandas-dev/pandas/pull/61384 | lithomas1 | 5 | - [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| [
"Build"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"around 10% savings in both compressed/uncompressed size\r\n(1MB compressed, 4MB uncompressed)\r\n\r\nscikit-learn seems to have gotten a lot more mileage out of this optimization...\r\n(they report around 25% savings)",
"This looks pretty good as is, and will definitely be worthwhile to do - it's a major gain in binary size for only 20 lines of code or so. It should work now, maybe retrigger CI after dropping the Cython pin?",
"pre-commit.ci autofix",
"Owee, I'm MrMeeseeks, Look at me.\n\nThere seem to be a conflict, please backport manually. Here are approximate instructions:\n\n1. Checkout backport branch and update it.\n\n```\ngit checkout 2.3.x\ngit pull\n```\n\n2. Cherry pick the first parent branch of the this PR on top of the older branch:\n```\ngit cherry-pick -x -m1 341f1612a984e6d990ffa1d3302a220deeadad92\n```\n\n3. You will likely have some merge/cherry-pick conflict here, fix them and commit:\n\n```\ngit commit -am 'Backport PR #61384: BLD: Try using shared memory utilities in Cython to reduce wheel sizes'\n```\n\n4. Push to a named branch:\n\n```\ngit push YOURFORK 2.3.x:auto-backport-of-pr-61384-on-2.3.x\n```\n\n5. Create a PR against branch 2.3.x, I would have named this PR:\n\n> \"Backport PR #61384 on branch 2.3.x (BLD: Try using shared memory utilities in Cython to reduce wheel sizes)\"\n\nAnd apply the correct labels and milestones.\n\nCongratulations — you did some good work! Hopefully your backport PR will be tested by the continuous integration and merged soon!\n\nRemember to remove the `Still Needs Manual Backport` label once the PR gets merged.\n\nIf these instructions are inaccurate, feel free to [suggest an improvement](https://github.com/MeeseeksBox/MeeseeksDev).\n ",
"Probably ok to punt this to 3.0 since that's happening soon and to minimize risk for 2.3."
] |
3,032,500,896 | 61,383 | ENH: Implement pandas.read_iceberg | closed | 2025-04-30T21:00:23 | 2025-05-14T20:29:30 | 2025-05-14T20:29:30 | https://github.com/pandas-dev/pandas/pull/61383 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61383 | https://github.com/pandas-dev/pandas/pull/61383 | datapythonista | 11 | - [X] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [X] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [X] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [X] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| [
"IO Data"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"There is a test failure, for some reason PyIceberg is able to find the namespace and the table when using:\r\n\r\n```python\r\npyiceberg.catalog.load_catalog(**{\"uri\": f\"sqlite:///{path}/catalog.sqlite\"})\r\n```\r\n\r\nbut not when using:\r\n\r\n```python\r\npyiceberg.catalog.load_catalog(\"pandas_tests_catalog\")\r\n```\r\n\r\nwith config file `~/.pyiceberg`:\r\n\r\n```yaml\r\ncatalog:\r\n pandas_tests_catalog:\r\n type: sql\r\n uri: sqlite:///path/catalog.sqlite\r\n```\r\n\r\nI need to research more on what's the problem, since this should work. Other than that this should be ready to get merged.",
"Hi @datapythonista, I took a look at the failing tests and from what I can tell it looks like writing the .pyiceberg.yaml file causes subsequent tests to fail. I'm not really sure why though and I can't reproduce it locally with pyiceberg 0.9. I have a ~/.pyiceberg.yaml file containing\r\n```\r\n catalog:\r\n pandas_tests_catalog:\r\n uri: sqlite:////tmp/iceberg_catalog/catalog.sqlite\r\n```\r\nand a test file running\r\n```python\r\nfrom pyiceberg import catalog\r\n\r\ncatalog = catalog.load_catalog(None, **{\"uri\": \"sqlite:////tmp/iceberg_catalog/catalog.sqlite\"})\r\nprint(catalog.list_namespaces())\r\n```\r\njust fine. I did see that the line removing the ~/.pyiceberg.yaml was commented out, was there a reason for that?",
"Thanks for giving it a try. I just commented that line to make sure locally that it contained what I expected after running the test, I forgot to uncomment.\r\n\r\nPassing the uri to load_catalog worked fine locally. I just had problems when passing a catalog name to it (which requires the config file with the uri). I'll have another look, I thought I could be missing something obvious, but I guess it's something more complicated.",
"Oh ok, I just tried by passing the catalog name with a config file \r\n```python\r\nfrom pyiceberg import catalog\r\nfrom pyiceberg.schema import Schema\r\n\r\ncatalog = catalog.load_catalog(\"pandas_tests_catalog\")\r\ncatalog.create_namespace_if_not_exists(\"default\")\r\ncatalog.create_table_if_not_exists(\"default.test_table\", schema=Schema())\r\n```\r\nand it gave me an error about there not being a default path because warehouse isn't set, maybe try setting `warehouse` for pandas_tests_catalog in ~/.pyiceberg.yaml? When I added that locally it started working.",
"I tried providing warehouse, but that didn't work.\r\n\r\nI've been debugging, and seems like the problem is that whether the catalog is loaded by a URI or by a name, the query that retrieve namespaces is filtering by a catalog name. Which will be `default` when the catalog is loaded with a URI with no name. I'm still having an issue, after making the name consistent for our tests, but I hope I can get a fix soon.\r\n\r\nThank for the help!",
"Tests should be fixed now. It was a bit tricky, since pyiceberg loads the config when importing the module, not when loading the catalog, which made the config file to not always be loaded.",
"For some reason pyiceberg used to have an upper version for dependencies. They stopped doing it now, but for the minimum pyiceberg version it was not possible to be compatible with all our minimum version dependencies. I relaxed the minimum version of fsspec, s3fs and gcsfs to be able to resolve the minimum versions environment with pyiceberg.",
"Curious if there is an open issue discussing including this new feature.\r\n\r\n(FWIW I did like your prior IO registration PDEP that would have made this easier to externally implement)",
"> Curious if there is an open issue discussing including this new feature.\r\n\r\nI think only the discussions you are aware of, https://github.com/bodo-ai/Bodo-Pandas-Collaboration/issues/9 and the discussions in the calls that I know.\r\n\r\nI had a look at using `read_sql` instead of `read_iceberg`, and to me it feels like the API would be too difficult to use. Considering the popularity of Iceberg, and that we already have specific connectors for much less popular formats such as feather, SPSS... I found this the most reasonable implementation. But happy to give a try at using the code in this PR with `read_sql` instead, if there is interest. But it also feels that our code will be more complex, so personally I don't see an advantage.\r\n\r\nI'm also happy to revisit PDEP-9. There weren't objections to the general idea that I remember, the main blocker was that some people weren't happy that connectors could register with the name they wanted. And I don't think there is a good solution to this. To me it's not a problem, since at the end is the user who decides which Python dependencies are installed. In any case, to me it makes sense to move forward with this Iceberg, and surely this would be a good candidate to move as a third party with many others if we ever implement PDEP-9.",
"I didn't know pytest fixtures could have tear down natively. Thanks for the feedback @mroeschke, I implemented the Iceberg catalog as a fixture now, and also added the experimental warning as suggested. Please let me know if you have any other feedback, otherwise this should be ready.",
"> Could you also add a whatsnew entry in `v3.0.0.rst` under `Other enhancements`?\r\n\r\nAbsolutely, I thought I added it. Added it now"
] |
3,031,402,515 | 61,382 | BUG: Multindex difference not working on columns with type Timestamp[ns][pyarrow] | open | 2025-04-30T14:06:06 | 2025-08-15T14:34:22 | null | https://github.com/pandas-dev/pandas/issues/61382 | true | null | null | bmaisonn | 3 | ### Pandas version checks
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [ ] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
import pandas as pd
df = pd.DataFrame(
[
(1, "1900-01-01", "a"),
(2, "1900-01-01", "b")
],
columns=["id", "date", "val"]
).astype({"id": "int64[pyarrow]", "date": "timestamp[ns][pyarrow]", "val":"string[pyarrow]"})
df = df.set_index(["id", "date"])
idx_val = df.index[0]
idx_val in df.index # will show True
df.index.difference([idx_val]) # The two elements are still present in the dataframe
```
### Issue Description
Note that the code works if we use datetime64[ns] instead of the timestamp[ns][pyarrow] type.
The code also works fine if we convert the index to a regular (non-Multi) Index.
### Expected Behavior
We expect the same behavior with timestamp[ns][pyarrow] as with other types. The element passed to the difference should be removed from the index.
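For comparison, the same sequence with a NumPy-backed datetime level behaves as expected (this sketch intentionally avoids the PyArrow dtypes, so it does not reproduce the bug):

```python
import pandas as pd

df = pd.DataFrame(
    [(1, "1900-01-01", "a"), (2, "1900-01-01", "b")],
    columns=["id", "date", "val"],
)
df["date"] = pd.to_datetime(df["date"])  # datetime64[ns], not pyarrow-backed
df = df.set_index(["id", "date"])

result = df.index.difference([df.index[0]])  # only the second entry remains
```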
### Installed Versions
<details>
INSTALLED VERSIONS
------------------
commit : 0691c5cf90477d3503834d983f69350f250a6ff7
python : 3.10.12
python-bits : 64
OS : Linux
OS-release : 5.15.167.4-microsoft-standard-WSL2
Version : #1 SMP Tue Nov 5 00:21:55 UTC 2024
machine : x86_64
processor : x86_64
byteorder : little
LC_ALL : None
LANG : C.UTF-8
LOCALE : en_US.UTF-8
pandas : 2.2.3
numpy : 1.26.4
pytz : 2024.2
dateutil : 2.9.0.post0
pip : 22.0.2
Cython : None
sphinx : 8.1.3
IPython : 8.30.0
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : None
blosc : None
bottleneck : None
dataframe-api-compat : None
fastparquet : None
fsspec : None
html5lib : None
hypothesis : None
gcsfs : None
jinja2 : 3.1.4
lxml.etree : None
matplotlib : None
numba : None
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
psycopg2 : None
pymysql : None
pyarrow : 18.1.0
pyreadstat : None
pytest : 8.3.4
python-calamine : None
pyxlsb : None
s3fs : None
scipy : None
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : None
xlsxwriter : None
zstandard : None
tzdata : 2024.2
qtpy : None
pyqt5 : None
</details>
| [
"Bug",
"setops",
"Arrow"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Thanks for raising this! Confirmed on main. \n\n`MultiIndex.difference` doesn't exclude the matching row when the index includes a `timestamp[ns][pyarrow]` column. ",
"`MultiIndex._convert_can_do_setop` creates a MultiIndex internally from the provided list which results in a DatetimeIndex. This then doesn't compare against PyArrow. It seems to me we should enable comparisons between the two.",
"Take"
] |
3,030,977,902 | 61,381 | Fix alignment in Series subtraction with MultiIndex, Index and NaN values (#60908) | open | 2025-04-30T11:21:39 | 2025-07-27T00:10:20 | null | https://github.com/pandas-dev/pandas/pull/61381 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61381 | https://github.com/pandas-dev/pandas/pull/61381 | rit4rosa | 2 | This pull request fixes #60908 , where subtracting a Series with a MultiIndex containing NaN values from a Series with a regular Index could lead to incorrect results or unexpected behavior.
The issue was caused by the _align_for_op method not properly handling cases where the left-hand Series had a MultiIndex and the right-hand side had a flat Index, especially when NaN values were present. This could lead to misalignment during arithmetic operations.
To fix this, the _align_for_op method was updated to:
- Ensure that when the left Series has a MultiIndex and the right Series has a regular Index, the right Series is properly reindexed based on the first level of the left-hand MultiIndex, even when NaN values are involved.
- Correctly handle cases where either Series is empty.
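The intended alignment can be sketched with made-up data (this illustrates the semantics only, without the NaN edge case the patch handles, and is not the patch itself):

```python
import pandas as pd

left = pd.Series(
    [1.0, 2.0],
    index=pd.MultiIndex.from_tuples([("a", 1), ("b", 2)], names=["k", "n"]),
)
right = pd.Series([0.5, 1.5], index=pd.Index(["a", "b"], name="k"))

# Reindex the flat-Index side on the first level of the MultiIndex, then
# subtract with matching labels
aligned = right.reindex(left.index.get_level_values("k"))
aligned.index = left.index
result = left - aligned  # values [0.5, 0.5]
```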
Additionally, a new test test_series_subtraction_with_nan_and_levels (added in test_subtraction_nanindex) was introduced to verify that:
- Subtracting a Series with a MultiIndex (including NaNs) from a regular Index works correctly.
- The result maintains the correct alignment and expected output values.
Tested on x86-64 Linux; the issue does not reproduce on 32-bit Linux.
"Bug",
"Indexing",
"Missing-data",
"MultiIndex",
"Stale"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Hi, just following up on this PR. Let me know if you have any feedback or if anything is needed from my side. Thank you",
"This pull request is stale because it has been open for thirty days with no activity. Please [update](https://pandas.pydata.org/pandas-docs/stable/development/contributing.html#updating-your-pull-request) and respond to this comment if you're still interested in working on this."
] |
3,030,002,590 | 61,380 | ENH: Implement translations infrastructure | open | 2025-04-30T03:24:20 | 2025-08-06T15:48:46 | null | https://github.com/pandas-dev/pandas/pull/61380 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61380 | https://github.com/pandas-dev/pandas/pull/61380 | goanpeca | 39 | Hello team!
This PR is a proposal for adding the translations infrastructure to the pandas web page.
Following the discussion in https://github.com/pandas-dev/pandas/issues/56301, we (a group of folks working on the Scientific Python grant) have been working to set up infrastructure and translate the contents of the pandas website. As of this moment, we have 100% translations for the pandas website into Spanish and Brazilian Portuguese, with other languages available for translation (depending on volunteer translators).
To build, the command remains the same:
```bash
python pandas_web.py pandas/content --target-path build
```
If you want to check out other related work, please take a look at https://github.com/scipy/scipy.org/pull/617
You can read more about how the translation process works at https://scientific-python-translations.github.io/docs/
## What this PR does
- Download and extract the latest available translations (over 90% completion) from https://github.com/Scientific-Python-Translations/pandas-translations. The setting can be changed [here](https://github.com/Scientific-Python-Translations/pandas-translations/blob/main/.github/workflows/sync_translations.yml#L21)
- Adds a Language switcher (Thanks @melissawm ❤️ 🚀 ).
- Added a new section to the config to store additional translations information.
- Handles site generation for each language.
- Left everything in the same script.
Supersedes https://github.com/pandas-dev/pandas/pull/61220
## Demo

---
cc @mroeschke @datapythonista | [
"Docs",
"Stale"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"/preview",
"Website preview of this PR available at: https://pandas.pydata.org/preview/pandas-dev/pandas/61380/",
"Thanks @goanpeca for this. Do you mind adding some more context here? I can't see in https://github.com/Scientific-Python-Translations/pandas-translations much information, like what languages are available, or how to fix a bad translation, which would be useful to know.\r\n\r\nAlso, in the docs generated from this PR I can see any language dropdown or anything different from our current docs. What are we expecting?",
"Hi @datapythonista - this is a follow up to #61220, a proof-of-concept CI job to build the website with translations that don't live in this repo. This PR and #61220 are meant to work together and I'm happy to incorporate one into the other once we agree on the general direction and workflow for this.\r\n\r\nLet us know if we can answer any other questions. Unfortunately I'm not sure how to get the preview for the other PR, I relied on building locally to test that things were working.",
"Sorry, I missed https://github.com/pandas-dev/pandas/pull/61220 and the issue discussion.\r\n\r\nI don't fully understand what you're doing here, but I describe next how to add translations without adding too much complexity in this repo, which I don't think any core dev would be onboard with.\r\n\r\n1. You decide on how to generate translations and manage it independently from this repo, and end up with a structure like this with the translated documents:\r\n\r\n```\r\n+ es/\r\n - index.md\r\n + about/\r\n - team.md\r\n - ...\r\n - ...\r\n+ pt/\r\n - index.md\r\n + about/\r\n - team.md\r\n - ...\r\n - ...\r\n```\r\n2. In our CI, before calling `pandas_web.py` you download this directory structure to the `web/` directory. No other changes needed, this will create all translated pages.\r\n3. We add a dropdown with the languages to the website (you can add the language list to `web/pandas/config.yml`)\r\n\r\nI think this makes everyone's life easy, and we get the expected result.",
"Thanks @datapythonista !\r\n\r\nCan you clarify what is missing from #61220 to match your description? That is pretty much what is done in that PR. Maybe this is confusing because we chose to do it in two parts exactly because we wanted to decouple the reorganization of the repo + switcher (in #61220) from the actual translations (this PR). \r\n\r\nHappy to follow up with any feedback in the other PR as well. Cheers!",
"In #61220 you are moving all the current website pages, that should be undone. You are adding the translated pages to this repo, we don't want it. You are making changes to pandas_web.py, this is not needed based on what I described above.\r\n\r\nOnly changes in a PR to this pandas repo should be adding a CI step as per step 2, editing the website template with the language dropdown as per step 3.",
"I see! I will rework what I have there to match your proposal. Thanks!",
">Great improvement to the PR, this makes a lot of sense to me.\r\n\r\nThanks for the review @datapythonista.\r\n\r\n>First, if I understand correctly, you download the translations of the web, and the percentage of the translated content, amd then you check for each language if it's translated enough to be published. Personally, I think you should better take care of this logic in your repo when generating the tar, not here. First to simplify the code here, and second to avoid downloading translations that are not going to be used.\r\n\r\nFixed!\r\n\r\n>Just as a suggestion, I wouldn't use this approach, even in your repo. Imagine you publish translations that are at least 90%, and we have Spanish at 100%. Then I add new content that is 11% of the website. And automatically the Spanish translations that are already indexed by search engines, in user bookmarks, in links in blog posts... are deleted from our website. Not great in my opinion, much better to simply get the new content in English and hope it will eventually be translated.\r\n\r\nThis is also fixed!\r\n\r\n>Another thing I would do is to extract the tar file as it is downloaded. So the tar file is passed to gzip/tarfile in memory, with the io module, not as a path in disk. With this you can get all the code here in a single short function. Or we ciuld even create a github action in your repo with this, as it's generic, and just use it here. So only the CI step would live in this PR.\r\n\r\nDid not follow this one as I think things are now simpler and in a single script.\r\n\r\n>Finally, we already have a configuration file for the website. We could save the url of the translations tar there. Also good in the script, just a question of preference. Or if you go for the github action approach, it could simply be a parameter in the CI step. 
Then you would need another one for the target dir in this repo.\r\n\r\nAdded the information to the config file as requested and updated the scripts to handle site generation for languages. Moved all logic to the existing script.\r\n\r\n>In any case, the approach here is also very reasonable, all above are suggestions that personally I think would make things simpler.\r\n\r\nPlease let me know what you think about the current changes.\r\n\r\n---\r\n\r\nThis PR now supersedes https://github.com/pandas-dev/pandas/pull/61220\r\n\r\nThanks @melissawm!",
"/preview",
"Website preview of this PR available at: https://pandas.pydata.org/preview/pandas-dev/pandas/61380/",
"Thanks @goanpeca, a couple of comments.\r\n\r\nFeels like we are repeating in this PR logic that we already have implemented. If a markdown file is added to the web directory before the rendering of the website, it will be rendered in the same way as the existing pages. There is no need to make changes to pandas_web.py for the translations, just copy the content of the translations tar into the web directory, and all the translated content will be in the rendered web.\r\n\r\nIn the preview, links to translations are broken. This is likely to be caused by assuming the website is always going to be hosted at the root of the domain, not a subdirectory. This can't be assumed.\r\n\r\nFinally, I think it'd be good to handle the downloading and uncompressing of the translation file in a github action. Or less neat, in the step on the CI config, but out of pandas_web.py. This will keep things simple in pandas_web.py, and it'll be easier to maintain.",
"Hi again, thanks for the comments @datapythonista!\r\n\r\n>There is no need to make changes to pandas_web.py for the translations, just copy the content of the translations tar in the web directory, and all the translated content will be in the rendered web.\r\n\r\n1. Yes, **there is a need**, as the navbar is different for each language and this comes from the config file which contains translatable content. So each language has its own config and each language is treated as a separate site/build.\r\n\r\n>In the preview links to translations are broken. This is likely to be caused by assuming the website is always going to be hosted at the root of the domain, not a subdirectory. This can't be assumed.\r\n\r\n2. Fixed!\r\n\r\n>Finally, I think it'd be good to handle the downloading and uncompressing of the translation file in a github action. Or less neat, in the step on the CI config, but out of pandas_web.py. This will keep things simple in pandas_web.py, and it'll be easier to maintain.\r\n\r\n3. AFAICT it is not possible (with the current way the website is built) to make the translations work without changes to the `pandas_web` script. I tried my best to keep the changes to that file to a minimum, and having the downloads there \"is needed\" to know the languages that are available and process things accordingly, as pointed out in 1. \r\n\r\nIf this request stems from not wanting to have this happen locally, I could add a flag to process translations, and only handle English if it is not set.\r\n\r\n```\r\n# Download, extract and copy translations\r\npython pandas_web.py pandas/content --target-path build --translations\r\n\r\n# Will just process English, with no translations download, extract or copy\r\npython pandas_web.py pandas/content --target-path build\r\n\r\n```\r\n",
"Thanks for the clarification @goanpeca, I understand now.\r\n\r\nThe idea is that pandas maintainers shouldn't have to spend time maintaining the script to build the website, so I want to keep it as simple as possible. Ideally we'd like to use Hugo or another static site generator. The problem is that the pandas website has a decent amount of content that is dynamically fetched from different sources. So, what I did is an almost single-function, extremely simple static site generator that simply renders markdown files into html using a template and leaves them in the same structure as they are found. And it uses a context which is the yaml config file parsed as a python object. There are a couple of tiny helper functions for this, and also a Preprocessors class which enriches the context with the external sources.\r\n\r\nMy concern here is that the 120 lines of code that our static site generator needs (excluding the preprocessors, which are independent small functions) grow significantly, in size and complexity.\r\n\r\nI still think what I said about creating a github action in the repo managing the translation is the way to go. 90% of what's needed is to download a tar file and uncompress it in a directory. If we don't translate the navbar, I guess this is fine and super simple, and the only thing needed in pandas_web.py is adding to the context the language of the page being translated. Does this make sense?\r\n\r\nIf we get to this point, then only the translation of the navigation bar would be missing while keeping things very simple. And for those, we could just add a json file to each language directory, fetch a different file with all the translations from the server in the navbar preprocessor, or something very simple like that.",
"Hi @datapythonista, thanks for the comments.\r\n\r\n>The idea is that pandas maintainers shouldn't have to spend time maintaining the script to build the website, so I want to keep it as simple as possible. Ideally we'd like to use Hugo or another static site generator. The problem is that the pandas website has a decent amount of content that is dynamically fetched from different sources. So, what I did is to do an almost single function extremely simple static site generation that simply renders markdown files into html using a template and leaving them in the same structure as they are found. And it uses a context which is the yaml config file parsed as a python object. There are a couple of tiny helper functions to this, and also a Preprocessors class which enriches the context with the external sources.\r\n\r\nI moved all the translations logic to a separate file `pandas_translations.py`.\r\n\r\n>My concern here is that the 120 lines of code that our static site generator needs (excluding the preprocessors, which are independent small functions) grow singnificantly, in size and complexity.\r\n\r\nI kept the changes to the bare minimum in the `pandas_web.py` script to be able to handle translations (~40 LOC).\r\n- The context is generated once\r\n- Created a separate `navbar.yml` file so that it can be translated separately and removed this section from the `config.yml`\r\n- Added a utility function to update the context for the navbar only per language\r\n- Maintained the `for` to iterate over languages, but is simpler now. 
Having it process all translation folders as normal content and trying to inject the context there just makes things more difficult to reason about.\r\n- Added some typing that was missing\r\n- Added an extra flag to process translations so:\r\n\r\n```bash\r\n# This will only build English as it normally did; it still downloads translations\r\n# but does not include them in the process\r\npython web/pandas_web.py web/pandas --target-path=web/build\r\n\r\n# This will build all languages, download and process translations\r\npython web/pandas_web.py web/pandas --target-path=web/build --translations\r\n```\r\n\r\n>I still think what I said about creating a github action in the repo managing the translation is the way to go. 90% of what's needed is to download a tar file and uncompress it in a directory. If we don't translate the navbar, I guess this is fine and super simple, and the only thing needed in pandas_web.py is adding to the context the language of the page being translated. Does this make sense?\r\n\r\nA GitHub action doesn't seem too helpful, because it isn't applicable anywhere other than this project. This is why I separated the logic into `pandas_translations.py` and imported this in `pandas_web.py` to be able to process language information.\r\n\r\n>If we get to this point, then only the translation of the navigation bar would be missing while keeping things very simple. And for those, we could just add a json file to each language directory, fetch a different file with all the translations from the server in the navbar preprocessor, or something very simple like that.\r\n\r\nThis is now solved by splitting the navbar content into a separate `navbar.yml` file. I wholeheartedly disagree with not having the navbar items translated, as that would hamper accessibility of the translated website.\r\n\r\n>In the preview links to translations are broken. 
This is likely to be caused by assuming the website is always going to be hosted at the root of the domain, not a subdirectory. This can't be assumed.\r\n\r\nThis has been fixed.\r\n\r\n>Ideally we'd like to use Hugo or another static site generator. \r\n\r\nGreat! I believe that is definitely a move in the right direction 😄 \r\n\r\nIn the meantime I believe this is the best I can come up with to keep things simple, but functional.\r\n\r\nCheers!\r\n",
"Thanks @goanpeca, this looks better.\r\n\r\nI think we can still make things significantly simpler. In the pandas_web script we iterate over every file in the `web/pandas/` directory, and we render it with the context: https://github.com/pandas-dev/pandas/blob/main/web/pandas_web.py#L477\r\n\r\nMy point was that if you copy the translated markdowns there, inside their language directory (e.g. `web/pandas/es/`), the files will be automatically rendered without doing anything else. And to render the navbar translated, you'll just need the navbar translated in the context (same as now, but for example wrapped in a dict `{\"en\": ..., \"es\": ...}`). And then you'll need to add to the context the language of the file being rendered. You can simply consider all directories with two-character names to be translation directories.",
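The per-file language lookup described in the comment above could be sketched roughly as follows. This is a minimal illustration, not the actual pandas_web.py code; the helper names (`detect_language`, `context_for`) are hypothetical, and it assumes the convention suggested in the review that any two-character top-level directory is a translation directory:

```python
from pathlib import Path


def detect_language(relative_path: str, default: str = "en") -> str:
    # Hypothetical helper: treat a two-character top-level directory
    # (e.g. "es/index.md") as a translation directory, per the
    # convention suggested in the review.
    parts = Path(relative_path).parts
    first = parts[0] if parts else ""
    return first if len(first) == 2 else default


def context_for(relative_path: str, navbar_by_lang: dict) -> dict:
    # Pick the translated navbar for the page being rendered,
    # falling back to English when no translation exists.
    lang = detect_language(relative_path)
    return {
        "language": lang,
        "navbar": navbar_by_lang.get(lang, navbar_by_lang["en"]),
    }


navbars = {"en": ["Home", "About"], "es": ["Inicio", "Acerca de"]}
print(context_for("es/index.md", navbars)["language"])        # es
print(context_for("community/blog.md", navbars)["navbar"][0])  # Home
```

Under this scheme, pages outside a two-character directory keep the default English context, so untranslated content renders exactly as before.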
"Hi @datapythonista, thanks for the explanation.\r\n\r\nI will try it.",
"@datapythonista, implemented the suggestions. ",
">Thanks for the updates. Added some more comments, but looking better.\r\n\r\nThanks for the comments. I just replied to all of them and will post some updates later today. ",
"Hi @datapythonista, I implemented the suggestions.",
"Hi @datapythonista, I implemented most of the additional suggestions. Please see comments.",
"> I'd split the preprocessors in a slightly different way as suggested in the comments, but I think the way the code is now is very simple and easy to understand and maintain. Thanks a lot for all the updates here.\r\n\r\nMade some new changes based on your suggestions @datapythonista.",
"/preview",
"Website preview of this PR available at: https://pandas.pydata.org/preview/pandas-dev/pandas/61380/",
"The sponsor logos in the home page don't render correctly. I guess the problem is not in this PR, but in the translation of the html file, no?",
">The sponsor logos in the home page don't render correctly. I guess the problem is not in this PR, but in the translation of the html file, no?\r\n\r\nWould it be ok to use absolute URLs, since the English pages live in the root but the translated pages live in `es/something`? It would not work on preview, though.\r\n\r\nI could use `/static/img...` instead of `../static` as is currently used. Frameworks rely on filters like `relative_url / absolute_url` to handle these cases and append the appropriate base_folder or base_url to the link.\r\n\r\nEither that, or copying the assets folder into each language.\r\n",
"The images of the books should be implemented in the same exact way, and those seem to be working fine in the translated pages. Doesn't seem like we need to change the links, feels more like a problem in the translated content for those images, no?",
">The images of the books should be implemented in the same exact way, and those seem to be working fine in the translated pages. Doesn't seem like we need to change the links, feels more like a problem in the translated content for those images, no?\r\n\r\nI will look into the content.",
">I guess the problem is not in this PR, but in the translation of the html file, no?\r\n\r\nCorrect. This is an issue in the translations. (Working on those fixes.)\r\n\r\nBesides that, is there anything else you consider needs a revision?",
"/preview"
] |
3,029,701,921 | 61,379 | DOC: Fix dark mode text visibility in Getting Started accordion (#60024) | closed | 2025-04-29T23:01:49 | 2025-04-30T16:21:16 | 2025-04-30T16:21:09 | https://github.com/pandas-dev/pandas/pull/61379 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61379 | https://github.com/pandas-dev/pandas/pull/61379 | danielpintosalazar | 3 | - [x] closes [#61377](https://github.com/pandas-dev/pandas/issues/61377) (duplicate of [#60024](https://github.com/pandas-dev/pandas/issues/60024))
### 📄 **Description**
This pull request fixes the issue described in [#61377](https://github.com/pandas-dev/pandas/issues/61377), where the text in the accordion content of the *Getting Started* tutorial section was not visible in dark mode.
Although #61377 is a duplicate, the underlying problem was originally reported in [#60024](https://github.com/pandas-dev/pandas/issues/60024), and later duplicated in [#60041](https://github.com/pandas-dev/pandas/issues/60041) and [#60921](https://github.com/pandas-dev/pandas/issues/60921). This PR addresses the root cause described in those issues.
### ✅ **Changes Made**
Added CSS variables to ensure that text and background colors are consistent with the active theme:
```css
.tutorial-card .card-header {
--bs-card-cap-color: var(--pst-color-text-base);
color: var(--pst-color-text-base);
cursor: pointer;
background-color: var(--pst-color-surface);
border: 1px solid var(--pst-color-border);
}
.tutorial-card .card-body {
background-color: var(--pst-color-on-background);
color: var(--pst-color-text-base);
}
```
| [
"Docs"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"/preview",
"Website preview of this PR available at: https://pandas.pydata.org/preview/pandas-dev/pandas/61379/",
"Thanks @danielpintosalazar "
] |