id int64 | number int64 | title string | state string | created_at timestamp[s] | updated_at timestamp[s] | closed_at timestamp[s] | html_url string | is_pull_request bool | pull_request_url string | pull_request_html_url string | user_login string | comments_count int64 | body string | labels list | reactions_plus1 int64 | reactions_minus1 int64 | reactions_laugh int64 | reactions_hooray int64 | reactions_confused int64 | reactions_heart int64 | reactions_rocket int64 | reactions_eyes int64 | comments list |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
3,028,820,512 | 61,378 | DOC: update pandas cheat sheet with a third page (fixes #40680) | closed | 2025-04-29T15:47:54 | 2025-05-11T17:29:57 | 2025-05-11T17:29:57 | https://github.com/pandas-dev/pandas/pull/61378 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61378 | https://github.com/pandas-dev/pandas/pull/61378 | Shoestring42 | 4 | - [x] closes #40680
- [x] added 3rd page to the cheat sheet including more details on plotting, frequently used options, input/output formats and more.
- [x] added `.info()`, `.memory_usage()`, and `.dtypes()` to 'Summarize Data' on page 2 and rearranged the page to fill the gap left by the old plotting section.
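For reference, the three inspection helpers mentioned above behave as follows (note that `dtypes` is an attribute, not a method; the `DataFrame` contents here are illustrative):

```python
import pandas as pd

df = pd.DataFrame({"a": [1, 2, 3], "b": ["x", "y", "z"]})

df.info()                                # dtypes, non-null counts, memory footprint
per_column = df.memory_usage(deep=True)  # bytes per column, including the index
dtypes = df.dtypes                       # dtype of each column; an attribute, not a method
print(dtypes)
```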
| [
"Docs"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Thanks for the feedback, this is my first time contributing to anything so thanks for being so constructive. I think I've addressed all the issues raised. Changes have been pushed so let me know if anything else needs updating!",
"Thanks again, I've updated the powerpoint according to the feedback, made the pdf version, and editted the README.md file as well.",
"@Shoestring42 looking forward to having you address my last comment here: https://github.com/pandas-dev/pandas/pull/61378#pullrequestreview-2809938830\r\n",
"changes have been pushed. Sorry it took so long, exam period has just started at uni so i've been busy revising. Thanks for your patience."
] |
3,028,562,270 | 61,377 | not able to see the content in the dark mode | closed | 2025-04-29T14:24:33 | 2025-04-30T16:21:59 | 2025-04-30T16:21:58 | https://github.com/pandas-dev/pandas/issues/61377 | true | null | null | preetlakra | 3 | <img width="1470" alt="Image" src="https://github.com/user-attachments/assets/1f676b75-6720-4a8a-9bc3-103ebe55e205" />
## Issue in styling of the content line when dark mode is turned on. | [
"Docs",
"Duplicate Report"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"take",
"Thanks for the report, this is a duplicate of https://github.com/pandas-dev/pandas/issues/60024",
"Closed by https://github.com/pandas-dev/pandas/pull/61379"
] |
3,028,532,570 | 61,376 | BUG: Series.dot for arrow and nullable dtypes returns object-dtyped series | closed | 2025-04-29T14:16:14 | 2025-04-29T16:21:11 | 2025-04-29T16:20:40 | https://github.com/pandas-dev/pandas/pull/61376 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61376 | https://github.com/pandas-dev/pandas/pull/61376 | theavey | 1 | fixes #61375 by porting DataFrame fix (from #54025 as reported in #53979)
- [x] closes #61375
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [x] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [x] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| [
"Numeric Operations",
"Arrow"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Thanks @theavey "
] |
3,028,515,989 | 61,375 | BUG: dot on Arrow Series produces a Numpy object result | closed | 2025-04-29T14:11:39 | 2025-04-29T16:20:41 | 2025-04-29T16:20:41 | https://github.com/pandas-dev/pandas/issues/61375 | true | null | null | theavey | 0 | ### Pandas version checks
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [x] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
import pandas as pd
series_result = pd.Series({"a": 1.0}, dtype="Float64").dot(
pd.DataFrame({"col1": {"a": 2.0}, "col2": {"a": 3.0}}, dtype="Float64")
)
series_result.dtype # is dtype('O')
series_result_2 = pd.Series({"a": 1.0}, dtype="float[pyarrow]").dot(
pd.DataFrame({"col1": {"a": 2.0}, "col2": {"a": 3.0}}, dtype="float[pyarrow]")
)
series_result_2.dtype # same, is dtype('O')
# `DataFrame.dot` was already fixed
df_result = pd.DataFrame({"col1": {"a": 2.0}, "col2": {"a": 3.0}}, dtype="Float64").T.dot(
pd.Series({"a": 1.0}, dtype="Float64")
)
df_result.dtype # is Float64Dtype()
```
### Issue Description
`Series.dot` with Arrow or nullable dtypes returns a Series with NumPy object dtype. This was reported in #53979 and fixed for DataFrames in #54025.
Possible side note: I believe the "real" issue here is that the implementation uses `.values`, which returns a `dtype=object` array for the DataFrame. This seems directly related to #60038 and at least somewhat related to #60301 (which is also referenced in a comment on the former).
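Until this is fixed, one interim workaround (my own suggestion, not part of the referenced PRs) is to cast the result back explicitly:

```python
import pandas as pd

s = pd.Series({"a": 1.0}, dtype="Float64")
df = pd.DataFrame({"col1": {"a": 2.0}, "col2": {"a": 3.0}}, dtype="Float64")

# Series.dot currently falls back to object dtype here, so restore
# the nullable dtype with an explicit cast (a no-op once fixed).
result = s.dot(df).astype("Float64")
print(result.dtype)  # Float64
```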
### Expected Behavior
I would expect `Series.dot` to return the "best" common dtype for the input dtypes (in these examples, the appropriate float dtype):
```python
import pandas as pd
series_result = pd.Series({"a": 1.0}, dtype="Float64").dot(
pd.DataFrame({"col1": {"a": 2.0}, "col2": {"a": 3.0}}, dtype="Float64")
)
series_result.dtype # would expect Float64Dtype()
series_result_2 = pd.Series({"a": 1.0}, dtype="float[pyarrow]").dot(
pd.DataFrame({"col1": {"a": 2.0}, "col2": {"a": 3.0}}, dtype="float[pyarrow]")
)
series_result_2.dtype # would expect float[pyarrow]
```
### Installed Versions
<details>
INSTALLED VERSIONS
------------------
commit : 0691c5cf90477d3503834d983f69350f250a6ff7
python : 3.11.12
python-bits : 64
OS : Windows
OS-release : 10
Version : 10.0.19045
machine : AMD64
processor : Intel64 Family 6 Model 85 Stepping 7, GenuineIntel
byteorder : little
LC_ALL : None
LANG : None
LOCALE : English_United States.1252
pandas : 2.2.3
numpy : 2.2.5
pytz : 2025.2
dateutil : 2.9.0.post0
pip : 25.1
Cython : None
sphinx : 8.2.3
IPython : 9.2.0
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : 4.13.4
blosc : None
bottleneck : None
dataframe-api-compat : None
fastparquet : None
fsspec : 2025.3.2
html5lib : None
hypothesis : None
gcsfs : None
jinja2 : 3.1.6
lxml.etree : None
matplotlib : 3.10.1
numba : 0.61.2
numexpr : 2.10.2
odfpy : None
openpyxl : 3.1.5
pandas_gbq : None
psycopg2 : 2.9.9
pymysql : None
pyarrow : 19.0.1
pyreadstat : None
pytest : 8.3.5
python-calamine : None
pyxlsb : None
s3fs : None
scipy : 1.15.2
sqlalchemy : 2.0.40
tables : 3.10.2
tabulate : None
xarray : None
xlrd : 2.0.1
xlsxwriter : None
zstandard : 0.23.0
tzdata : 2025.2
qtpy : None
pyqt5 : None
</details>
| [
"Bug",
"Needs Triage"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] |
3,027,960,213 | 61,374 | Percentile Scaling Data Transformation | closed | 2025-04-29T11:23:42 | 2025-04-29T16:23:41 | 2025-04-29T16:23:41 | https://github.com/pandas-dev/pandas/pull/61374 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61374 | https://github.com/pandas-dev/pandas/pull/61374 | sujan099 | 1 | This pull request adds a new data transformation utility called `percentile_scaling` to the `pandas/io/` module, which scales numerical data to a percentile-based range from 0 to 100. This transformation is useful for standardizing features in data preprocessing workflows, especially for ML pipelines or percentile-based visual analytics.
Implementation Details
- Introduced a new function `percentile_scaling(data: List[float]) -> List[float]` that:
- Accepts a list or NumPy array of numerical values.
- Returns values scaled to a [0, 100] percentile scale.
- Raises appropriate errors for invalid input (e.g., zero variance or empty input).
Tests
- Added unit tests in `pandas/tests/io/test_percentile_scaling.py`:
- Validates correct scaling behavior.
- Handles edge cases such as identical values and empty inputs.
- All tests pass successfully using `unittest`.
Compliance
- [x] Follows Pandas contribution guidelines
- [x] All tests pass successfully
- [x] Function is self-contained and does not introduce dependencies
- [x] Code is PEP8-compliant and cleanly documented
Notes
This contribution is part of a university-level data engineering course project (DATA 226). The goal is to implement practical transformation logic for real-world data pipeline use cases while following standard open-source contribution workflows.
- [x] Tests added and passed
- [x] Code passes style checks and pre-commit hooks
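For readers curious what the described transformation might look like, here is a minimal rank-based sketch reconstructed from the description above (not the PR's actual code; ties are not averaged, and the error handling mirrors the stated edge cases):

```python
import numpy as np

def percentile_scaling(data):
    """Scale values to their percentile rank on a [0, 100] scale."""
    arr = np.asarray(data, dtype=float)
    if arr.size == 0:
        raise ValueError("input must be non-empty")
    if np.all(arr == arr[0]):
        raise ValueError("input has zero variance")
    # Double argsort yields each element's rank (0 = smallest).
    ranks = arr.argsort().argsort().astype(float)
    return (ranks / (arr.size - 1) * 100.0).tolist()

print(percentile_scaling([3.0, 1.0, 2.0]))  # [100.0, 0.0, 50.0]
```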
| [] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Thanks for the PR, but since there is no open issue associated with this PR that has also been triaged and accepted by the core team, we will not be moving forward with this feature so closing."
] |
3,027,838,985 | 61,373 | this is testing only | closed | 2025-04-29T10:35:03 | 2025-04-29T16:21:29 | 2025-04-29T16:21:29 | https://github.com/pandas-dev/pandas/pull/61373 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61373 | https://github.com/pandas-dev/pandas/pull/61373 | ManasasivaVasireedy | 0 | - [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| [] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] |
3,026,305,712 | 61,372 | Fix pyarrow comparison issue in array.py | closed | 2025-04-28T21:41:17 | 2025-05-19T16:17:14 | 2025-05-19T16:17:13 | https://github.com/pandas-dev/pandas/pull/61372 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61372 | https://github.com/pandas-dev/pandas/pull/61372 | AshleySonny | 1 | - [x] closes #60937
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [x] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| [] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Thanks for the pull request, but it appears to have gone stale. If interested in continuing, please merge in the main branch, address any review comments and/or failing tests, and we can reopen."
] |
3,025,592,843 | 61,371 | CI: Use Cython nightlies for Windows wheel builds again | closed | 2025-04-28T17:05:20 | 2025-04-28T18:17:52 | 2025-04-28T18:17:50 | https://github.com/pandas-dev/pandas/pull/61371 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61371 | https://github.com/pandas-dev/pandas/pull/61371 | mroeschke | 1 | Validated that the wheel tests should pass now from https://github.com/pandas-dev/pandas/pull/61354 | [
"Build"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"The relevant wheel builds have passed again so merging"
] |
3,025,323,572 | 61,370 | ENH: Adding hint to to_sql | open | 2025-04-28T15:23:10 | 2025-07-15T21:06:30 | null | https://github.com/pandas-dev/pandas/issues/61370 | true | null | null | AliKayhanAtay | 0 | ### Feature Type
- [ ] Adding new functionality to pandas
- [x] Changing existing functionality in pandas
- [ ] Removing existing functionality in pandas
### Problem Description
It would be great if we were able to use hints with `to_sql`. In particular, `/*APPEND PARALLEL*/` hints greatly improve insert time in Oracle.
### Feature Description
```python
with db().connect() as connection:
    df.to_sql('TEST_TABLE', connection, hints={'ORACLE': ['APPEND', 'PARALLEL']})
```
### Alternative Solutions
```python
with db().connect() as connection:
    df.to_sql('TEST_TABLE', connection, hints={'ORACLE': ['APPEND', 'PARALLEL']})
```
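Until something like this lands, one way to inject a hint today is `to_sql`'s documented `method` callable, which receives `(table, conn, keys, data_iter)`. A sketch follows; the hint text and the `build_hinted_insert` helper are my own illustrations, not pandas API:

```python
def build_hinted_insert(table_name, keys, hint="/*+ APPEND PARALLEL */"):
    """Build an INSERT statement carrying an Oracle optimizer hint."""
    columns = ", ".join(keys)
    placeholders = ", ".join(f":{k}" for k in keys)
    return f"INSERT {hint} INTO {table_name} ({columns}) VALUES ({placeholders})"

def insert_with_hint(table, conn, keys, data_iter):
    """Custom insert callable for DataFrame.to_sql(method=...)."""
    from sqlalchemy import text  # conn is a SQLAlchemy connection
    params = [dict(zip(keys, row)) for row in data_iter]
    conn.execute(text(build_hinted_insert(table.name, keys)), params)

# Usage: df.to_sql('TEST_TABLE', connection, method=insert_with_hint)
```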
### Additional Context
_No response_ | [
"Enhancement",
"IO SQL",
"Needs Triage"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] |
3,024,500,784 | 61,369 | Bump pypa/cibuildwheel from 2.23.2 to 2.23.3 | closed | 2025-04-28T10:35:52 | 2025-04-28T17:41:39 | 2025-04-28T17:41:35 | https://github.com/pandas-dev/pandas/pull/61369 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61369 | https://github.com/pandas-dev/pandas/pull/61369 | dependabot[bot] | 0 | Bumps [pypa/cibuildwheel](https://github.com/pypa/cibuildwheel) from 2.23.2 to 2.23.3.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/pypa/cibuildwheel/releases">pypa/cibuildwheel's releases</a>.</em></p>
<blockquote>
<h2>v2.23.3</h2>
<ul>
<li>🛠 Dependency updates, including Python 3.13.3 (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2371">#2371</a>)</li>
</ul>
</blockquote>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a href="https://github.com/pypa/cibuildwheel/blob/main/docs/changelog.md">pypa/cibuildwheel's changelog</a>.</em></p>
<blockquote>
<h3>v2.23.3</h3>
<p><em>26 April 2025</em></p>
<ul>
<li>🛠 Dependency updates, including Python 3.13.3 (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2371">#2371</a>)</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="https://github.com/pypa/cibuildwheel/commit/faf86a6ed7efa889faf6996aa23820831055001a"><code>faf86a6</code></a> Bump version: v2.23.3</li>
<li><a href="https://github.com/pypa/cibuildwheel/commit/4241f37b2c5be7f7ed96214b83f8cfbe1496cc28"><code>4241f37</code></a> [2.x] Update dependencies (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2371">#2371</a>)</li>
<li>See full diff in <a href="https://github.com/pypa/cibuildwheel/compare/v2.23.2...v2.23.3">compare view</a></li>
</ul>
</details>
<br />
[](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
</details> | [
"Build",
"CI",
"Dependencies"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] |
3,023,470,553 | 61,368 | BUG: Python 3.14 may not increment refcount | open | 2025-04-28T01:23:56 | 2025-08-16T08:38:34 | null | https://github.com/pandas-dev/pandas/issues/61368 | true | null | null | tacaswell | 17 | ### Pandas version checks
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [x] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
import pandas as pd
import warnings
warnings.simplefilter('error')
df = pd.DataFrame(
{'year': [2018, 2018, 2018],
'month': [1, 1, 1],
'day': [1, 2, 3],
'value': [1, 2, 3]})
df['date'] = pd.to_datetime(df[['year', 'month', 'day']])
```
### Issue Description
With Python 3.14 and the pandas main branch (or 2.2.3 with `pd.options.mode.copy_on_write = "warn"`), the above fails with:
```python
Python 3.14.0a7+ (heads/main:276252565cc, Apr 27 2025, 16:05:04) [Clang 19.1.7 ]
Type 'copyright', 'credits' or 'license' for more information
IPython 9.3.0.dev -- An enhanced Interactive Python. Type '?' for help.
Tip: You can use LaTeX or Unicode completion, `\alpha<tab>` will insert the α symbol.
In [1]: import pandas as pd
In [2]: df = pd.DataFrame(
...: {'year': [2018, 2018, 2018],
...: 'month': [1, 1, 1],
...: 'day': [1, 2, 3],
...: 'value': [1, 2, 3]})
...: df['date'] = pd.to_datetime(df[['year', 'month', 'day']])
<ipython-input-2-a8566e79621c>:6: ChainedAssignmentError: A value is trying to be set on a copy of a DataFrame or Series through chained assignment.
When using the Copy-on-Write mode, such chained assignment never works to update the original DataFrame or Series, because the intermediate object on which we are setting values always behaves as a copy.
Try using '.loc[row_indexer, col_indexer] = value' instead, to perform the assignment in a single step.
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/copy_on_write.html
df['date'] = pd.to_datetime(df[['year', 'month', 'day']])
In [3]: import warnings
In [4]: warnings.simplefilter('error')
In [5]: df = pd.DataFrame(
...: {'year': [2018, 2018, 2018],
...: 'month': [1, 1, 1],
...: 'day': [1, 2, 3],
...: 'value': [1, 2, 3]})
...: df['date'] = pd.to_datetime(df[['year', 'month', 'day']])
---------------------------------------------------------------------------
ChainedAssignmentError Traceback (most recent call last)
<ipython-input-5-a8566e79621c> in ?()
2 {'year': [2018, 2018, 2018],
3 'month': [1, 1, 1],
4 'day': [1, 2, 3],
5 'value': [1, 2, 3]})
----> 6 df['date'] = pd.to_datetime(df[['year', 'month', 'day']])
~/.virtualenvs/cp314-clang/lib/python3.14/site-packages/pandas/core/frame.py in ?(self, key, value)
4156 def __setitem__(self, key, value) -> None:
4157 if not PYPY:
4158 if sys.getrefcount(self) <= 3:
-> 4159 warnings.warn(
4160 _chained_assignment_msg, ChainedAssignmentError, stacklevel=2
4161 )
4162
ChainedAssignmentError: A value is trying to be set on a copy of a DataFrame or Series through chained assignment.
When using the Copy-on-Write mode, such chained assignment never works to update the original DataFrame or Series, because the intermediate object on which we are setting values always behaves as a copy.
Try using '.loc[row_indexer, col_indexer] = value' instead, to perform the assignment in a single step.
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/copy_on_write.html
In [6]: pd.__version__
Out[6]: '3.0.0.dev0+2080.g44c5613568'
```
With Python 3.14 there will be an optimization where the reference count is not incremented if Python can be sure that something above the calling scope will hold a reference for the lifetime of a scope. This is causing a number of failures in test suites where reference counts are checked. In this case I think it is erroneously triggering the logic that flags the object as an intermediary.
Found this because it was failing the Matplotlib test suite (this snippet is extracted from one of our tests).
With Python 3.13 I do not get this failure.
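The effect is easy to observe directly; the exact counts differ between 3.13 and 3.14, which is precisely what breaks pandas' `sys.getrefcount(self) <= 3` heuristic. A minimal, version-dependent sketch:

```python
import sys

def refcount_in_callee(obj):
    # sys.getrefcount reports the count at call time. Before 3.14, calling
    # f(x) also pushed a counted stack reference to x; on 3.14 that
    # reference can be elided when the caller's binding outlives the call,
    # so the number reported here may be lower than on 3.13.
    return sys.getrefcount(obj)

x = object()
print(refcount_in_callee(x))  # typically 3 on 3.13; can be lower on 3.14
```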
### Expected Behavior
No warning.
### Installed Versions
It is mostly development versions of things, this same env with pd main also fails.
<details>
INSTALLED VERSIONS
------------------
commit : 0691c5cf90477d3503834d983f69350f250a6ff7
python : 3.14.0a7+
python-bits : 64
OS : Linux
OS-release : 6.14.2-arch1-1
Version : #1 SMP PREEMPT_DYNAMIC Thu, 10 Apr 2025 18:43:59 +0000
machine : x86_64
processor :
byteorder : little
LC_ALL : None
LANG : en_US.UTF-8
LOCALE : en_US.UTF-8
pandas : 2.2.3
numpy : 2.3.0.dev0+git20250427.4961a14
pytz : 2025.2
dateutil : 2.9.0.post1.dev6+g35ed87a.d20250427
pip : 25.0.dev0
Cython : 3.1.0b1
sphinx : None
IPython : 9.3.0.dev
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : 4.13.4
blosc : None
bottleneck : None
dataframe-api-compat : None
fastparquet : None
fsspec : 2025.3.2
html5lib : None
hypothesis : None
gcsfs : None
jinja2 : 3.1.6
lxml.etree : 6.0.0.alpha0
matplotlib : 3.11.0.dev732+g8fedcea7fc
numba : None
numexpr : 2.10.3.dev0
odfpy : None
openpyxl : 3.1.5
pandas_gbq : None
psycopg2 : None
pymysql : None
pyarrow : None
pyreadstat : None
pytest : 8.3.0.dev32+g7ef189757
python-calamine : None
pyxlsb : None
s3fs : None
scipy : 1.16.0.dev0+git20250427.55cae81
sqlalchemy : None
tables : None
tabulate : 0.9.0
xarray : 2025.3.1
xlrd : 2.0.1
xlsxwriter : None
zstandard : None
tzdata : 2025.2
qtpy : None
pyqt5 : None
</details>
| [
"Bug",
"Needs Discussion",
"Warnings",
"Copy / view semantics"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Thanks for the report! It sounds like we may need to disable these warnings for Python 3.14+ if the refcount cannot be relied upon.\n\ncc @jorisvandenbossche @phofl ",
"Since CoW is implemented using refcount, could there also be cases where we believe data is not being shared but it really is?",
">Since CoW is implemented using refcount\n\nThe actual Copy-on-Write mechanism itself is implement using weakrefs, and does not rely on refcounting, I think. \n\nThe refcounts are used for the warning about chained assignments. While not essential for ensure correct behaviour (correctly copying when needed), those warnings are quite important towards the users for migrating / generally avoiding mistakes in the future (giving how widely spread chained assignment is).\n\nSo ideally we would be able to keep this warning working. \n\n> With Python 3.14 there will be an optimization where the reference count is not incremented if Python can be sure that something above the calling scope will hold a reference for the life time of a scope.\n\nDo you know if there is a technical explanation of this somewhere? (or the PR implementing it? Didn't directly find anything mentioned in the 3.14 whatsnew page) \nI'll have to look a bit more into this change and the specific example if there is anything on our side that we can do detect when this happens or to otherwise deal with it.",
"Hi! Sorry for the random comment, but @ngoldbaum pointed out this issue to me. I'm the author of the [optimization](https://github.com/python/cpython/pull/130708). Happy to answer any questions or help brainstorm a solution with you.",
"I just tried with both 3.14.0 RC1 and 3.14.0t RC1 and this is still an issue on current `main`.\n\nLet me try to dig in to understand under exactly what circumstances this happens - maybe we can just change the check to `< 3` instead of `<= 3`, because the stackref is always going to be missing on 3.14 and newer.",
"Unfortunately no, it's not that easy, there are cases where none of the three references are stackrefs.",
"@jorisvandenbossche I have a PR open that disables the warning in #61950 but before working on it more I want to confirm that approach is OK with you.",
"@ngoldbaum thanks for looking into this!\n\nTo illustrate the issue with a small pure python example (what I originally used to explore the implementation), consider the following class that wraps some underlying data and allows to get/set data:\n\n```python\nimport sys\n\nclass Object:\n \"\"\"Small class that wraps some data, and can get/set this data\"\"\"\n \n def __init__(self, data):\n self.data = data\n \n def __getitem__(self, key):\n return Object(self.data[key])\n \n def __setitem__(self, key, value):\n print(\"Refcount self: \", sys.getrefcount(self))\n self.data[key] = value\n \n def __repr__(self):\n return f\"<Object {self.data}>\"\n \n def copy(self):\n return Object(self.data.copy())\n```\n\nand then setting some data with Python 3.13:\n\n```python\n>>> obj = Object(list(range(10)))\n# direct setitem -> this modifies the underlying data\n>>> obj[5] = 100\nRefcount self: 4\n>>> obj\n<Object [0, 1, 2, 3, 4, 100, 6, 7, 8, 9]>\n\n# chained setitem -> this does NOT modify the underlying data\n# (in this toy example because the slice of a list gives a new list, not a view)\n>>> obj[1:4][1] = 1000\nRefcount self: 3\n>>> obj\n<Object [0, 1, 2, 3, 4, 100, 6, 7, 8, 9]>\n```\n\nRunning that with Python 3.14:\n\n```python\n>>> obj = Object(list(range(10)))\n>>> obj[5] = 100\nRefcount self: 4\n>>> obj[1:4][1] = 1000\nRefcount self: 2\n```\n\nSo that already illustrates that there is a difference in this basic example. \nAlthough the fact that it is _lower_ in the case of chained assignment, that is not really a problem given we test for `<=3`, but the problem I suppose is that there are also other cases where the refcount becomes lower, giving false positive warnings.\n\nTesting that with code that is not run top-level in the interactive interpreter, but is code in a function that is called, we can already see this. When running the below example with a non-chained assignment in a test with Python 3.13, that gives a refcount of 4 as well, i.e. 
the same regardless of whether it is in a function or not. But with Python 3.14, the below code no longer gives a refcount of 4, but only of 2:\n\n```python\n>>> def test():\n... obj = Object(list(range(10)))\n... obj[5] = 100\n... \n>>> test()\nRefcount self: 2\n```\n\nAnd so if the above `obj` would be a pandas DataFrame, and we are doing a plain setitem operation in the function, that currently triggers a false positive warning, unfortunately.",
"Based on the above, I assume the simple conclusion is that the current implementation for the warning check using `sys.getrefcount(self)` will no longer work, and there is not really any other alternative than to disable the warning for Python 3.14+ ..\n\n@ngoldbaum in the other PR you mentioned \"Unfortunately we're probably past the time when we can get C API changes merged into CPython to support this use-case, so it may not be easily feasible to detect what you're looking for just based on refcounts in 3.14 and newer.\", but I am also not sure what kind of C API could make this possible? \nI see the link from the cpython issue adding `PyUnstable_Object_IsUniqueReferencedTemporary` that can do this at the C level? But so that is for cases in C where such temporary objects had a refcount of 1 before Python 3.14. We are doing this check from Python (and in a method on the object in question), so that always already added some references (i.e. the reason we are checking for `<=3` and not for `==1`). So I am not sure a C API method like that would help us?\n\nCould that work if we would create a C extension base class for `pd.DataFrame` in C that would implement `__setitem__` (and just do this check and then defer to another python method on the subclass that has the actual setitem implementation)? \n(but the fact that this is for a `self` reference in a method on the object itself might complicate this?)",
"The C API idea was for there to somehow be a C function you could call via Cython bindings in Python which would do the correct thing.\n\n@mpage has a lot more context about how exactly reference count semantics change with stackrefs. Maybe there is a clever way to do this in 3.14.\n\nAlso while I was working on this, I noticed a few spots that used hard-coded reference counts and a few spots that used a pandas-wide constant. It's not clear to me if the places where the reference count threshold is hard-coded do it that way on purpose.\n\nRefactoring all the reference count checks into a single function would probably make it easier to experiment with different approaches to detecting this condition in 3.14.",
"@ngoldbaum - I'm not sure there's a way to perform this check at runtime in pure Python without making some changes to CPython. As you said, it's probably too late for 3.14.0, but we *might* be able to get it into 3.14.1. [This branch](https://github.com/mpage/cpython/tree/py-unique-temporary) contains one possible approach and [this gist](https://gist.github.com/mpage/dd98df3cb7e842db1aed623171063b36) demonstrates its use.\n\nThe suggestion to implement `__setitem__` in C and have it call `PyUnstable_Object_IsUniqueReferencedTemporary` might work, too.\n\nHowever, I wonder if a better solution would be to perform these checks statically. [This gist](https://gist.github.com/mpage/b62cfd405e66763c00fd522b45c23ff3) is a simple example of how this might work. Running it against this source\n\n```py\nobj = Object(list(range(10)))\n\n# This is fine\nobj[5] = 100\n\n# This is a no-no\nobj[1:4][1] = 1000\n\n# Part of this is a no-no\nobj[5], obj[1:4][1] = 100, 1000\n```\n\nproduces\n\n```\nChained assignment detected at line 7, col 0:\n\nobj[1:4][1] = 1000\n^--- here\n\nChained assignment detected at line 10, col 8:\n\nobj[5], obj[1:4][1] = 100, 1000\n ^--- here\n```",
"@mpage the problem is that in this case, it's not a uniquely referenced temporary - there are a few references, just one less in 3.14 than in 3.13, and only sometimes.\n\nIf I try to run one test file in the Pandas test suite that is sensitive to this change on 3.14 and go into a debugger, I see:\n\n<details>\n\n```\ngoldbaum at Nathans-MBP in ~/Documents/pandas on 3.14-ci!\n± pytest pandas/tests/indexing/test_chaining_and_caching.py --pdb\n================================================================== test session starts ===================================================================\nplatform darwin -- Python 3.14.0rc1, pytest-8.4.1, pluggy-1.6.0\nrootdir: /Users/goldbaum/Documents/pandas\nconfigfile: pyproject.toml\nplugins: xdist-3.8.0, hypothesis-6.136.4, cov-6.2.1, run-parallel-0.5.1.dev0\ncollected 25 items\nCollected 0 items to run in parallel\n\npandas/tests/indexing/test_chaining_and_caching.py .........F\n>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> traceback >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>\n\nself = <pandas.tests.indexing.test_chaining_and_caching.TestChaining object at 0x1084adbf0>\ntemp_file = PosixPath('/private/var/folders/nk/yds4mlh97kg9qdq745g715rw0000gn/T/pytest-of-goldbaum/pytest-2/test_detect_chained_assignment0/ecb9dae3-4d3a-4010-a680-e2510beb72db')\n\n @pytest.mark.arm_slow\n def test_detect_chained_assignment_is_copy_pickle(self, temp_file):\n # gh-5475: Make sure that is_copy is picked up reconstruction\n df = DataFrame({\"A\": [1, 2]})\n\n path = str(temp_file)\n df.to_pickle(path)\n df2 = pd.read_pickle(path)\n> df2[\"B\"] = df2[\"A\"]\n ^^^^^^^^\n\npandas/tests/indexing/test_chaining_and_caching.py:193:\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _\n\nself = A\n0 1\n1 2, key = 'B', value = 0 1\n1 2\nName: A, dtype: int64\n\n def __setitem__(self, key, 
value) -> None:\n \"\"\"\n Set item(s) in DataFrame by key.\n\n This method allows you to set the values of one or more columns in the\n DataFrame using a key. If the key does not exist, a new\n column will be created.\n\n Parameters\n ----------\n key : The object(s) in the index which are to be assigned to\n Column label(s) to set. Can be a single column name, list of column names,\n or tuple for MultiIndex columns.\n value : scalar, array-like, Series, or DataFrame\n Value(s) to set for the specified key(s).\n\n Returns\n -------\n None\n This method does not return a value.\n\n See Also\n --------\n DataFrame.loc : Access and set values by label-based indexing.\n DataFrame.iloc : Access and set values by position-based indexing.\n DataFrame.assign : Assign new columns to a DataFrame.\n\n Notes\n -----\n When assigning a Series to a DataFrame column, pandas aligns the Series\n by index labels, not by position. This means:\n\n * Values from the Series are matched to DataFrame rows by index label\n * If a Series index label doesn't exist in the DataFrame index, it's ignored\n * If a DataFrame index label doesn't exist in the Series index, NaN is assigned\n * The order of values in the Series doesn't matter; only the index labels matter\n\n Examples\n --------\n Basic column assignment:\n\n >>> df = pd.DataFrame({\"A\": [1, 2, 3]})\n >>> df[\"B\"] = [4, 5, 6] # Assigns by position\n >>> df\n A B\n 0 1 4\n 1 2 5\n 2 3 6\n\n Series assignment with index alignment:\n\n >>> df = pd.DataFrame({\"A\": [1, 2, 3]}, index=[0, 1, 2])\n >>> s = pd.Series([10, 20], index=[1, 3]) # Note: index 3 doesn't exist in df\n >>> df[\"B\"] = s # Assigns by index label, not position\n >>> df\n A B\n 0 1 NaN\n 1 2 10\n 2 3 NaN\n\n Series assignment with partial index match:\n\n >>> df = pd.DataFrame({\"A\": [1, 2, 3, 4]}, index=[\"a\", \"b\", \"c\", \"d\"])\n >>> s = pd.Series([100, 200], index=[\"b\", \"d\"])\n >>> df[\"B\"] = s\n >>> df\n A B\n a 1 NaN\n b 2 100\n c 3 NaN\n d 4 200\n\n 
Series index labels NOT in DataFrame, ignored:\n\n >>> df = pd.DataFrame({\"A\": [1, 2, 3]}, index=[\"x\", \"y\", \"z\"])\n >>> s = pd.Series([10, 20, 30, 40, 50], index=[\"x\", \"y\", \"a\", \"b\", \"z\"])\n >>> df[\"B\"] = s\n >>> df\n A B\n x 1 10\n y 2 20\n z 3 50\n # Values for 'a' and 'b' are completely ignored!\n \"\"\"\n if not PYPY:\n if sys.getrefcount(self) <= REF_COUNT + 1:\n> warnings.warn(\n _chained_assignment_msg, ChainedAssignmentError, stacklevel=2\n )\nE pandas.errors.ChainedAssignmentError: A value is trying to be set on a copy of a DataFrame or Series through chained assignment.\nE Such chained assignment never works to update the original DataFrame or Series, because the intermediate object on which we are setting values always behaves as a copy (due to Copy-on-Write).\nE\nE Try using '.loc[row_indexer, col_indexer] = value' instead, to perform the assignment in a single step.\nE\nE See the documentation for a more detailed explanation: https://pandas.pydata.org/pandas-docs/stable/user_guide/copy_on_write.html#chained-assignment\n\npandas/core/frame.py:4301: ChainedAssignmentError\n>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> entering PDB >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>\n\n>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> PDB post_mortem >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>\n> /Users/goldbaum/Documents/pandas/pandas/core/frame.py(4301)__setitem__()\n-> warnings.warn(\n(Pdb) p sys.getrefcount(self)\n3\n(Pdb) p REF_COUNT\n2\n(Pdb) import gc\n(Pdb) len(gc.get_referrers(self))\n2\n[<frame at 0x1086e0d40, file '/Users/goldbaum/Documents/pandas/pandas/tests/indexing/test_chaining_and_caching.py', line 193, code test_detect_chained_assignment_is_copy_pickle>, <frame at 0x1086e0840, file '/Users/goldbaum/Documents/pandas/pandas/core/frame.py', line 4301, code __setitem__>]\n```\n\n</details>\n\nIf you expand the details block and look 
at the very end, `gc.get_referrers()` seems to point to two frame objects as holding references. Not sure if that helps narrow down what's happening.\n\nIn this case, the warning is triggering on this expression:\n\nhttps://github.com/pandas-dev/pandas/blob/4257ad67b1c056699b54d03142ebb25fb14faf46/pandas/tests/indexing/test_chaining_and_caching.py#L193\n\nNote that I'm using a slightly patched copy of pandas here, let me know if you want to try to reproduce this and I'll set up something nicer.",
"(started exploring a cython solution using the unstable C API, see https://github.com/pandas-dev/pandas/pull/62070, will answer above comments later!)",
"Thanks @mpage and @ngoldbaum for the input! \n\n> The suggestion to implement `__setitem__` in C and have it call `PyUnstable_Object_IsUniqueReferencedTemporary` might work, too.\n\nI have been trying that, see https://github.com/pandas-dev/pandas/pull/62070, and it _seems_ to be working regardless of it being called on a method. I even got it working with cython instead of writing a small extension type in c, although this gives a bit of a hassle to figure out the correct MRO and other impacts (on pickling, on object instantiation) from now having a c type as base class.\n\nThis seems to address the case of `__setitem__` (that I illustrated above). We also have a few inplace methods where we do a similar check (for example `df.update(..)`), which are not (yet) covered by this. But already having the check working for setitem is a big improvement, and I suppose we could just take a similar route for those inplace methods.\n\n> I'm not sure there's a way to perform this check at runtime in pure Python without making some changes to CPython. As you said, it's probably too late for 3.14.0, but we _might_ be able to get it into 3.14.1. [This branch](https://github.com/mpage/cpython/tree/py-unique-temporary) contains one possible approach and [this gist](https://gist.github.com/mpage/dd98df3cb7e842db1aed623171063b36) demonstrates its use.\n\nWhile I might have something working, personally I think it would still be nice to have such a python-level function as well. \n\n_If_ such a helper would get into Python 3.14.1 (or 3.15), I assume we could essentially vendor the `_PyObject_IsUniqueReferencedTemporary` / `sys__is_unique_referenced_temporary_impl` from https://github.com/mpage/cpython/commit/94bff2d5757aceb968a4aedb6cea75ca363ddd72 in our code to cover current 3.14.0? 
(I know it uses some private C APIs, but if we only include it for a narrow python range, then that might be OK)\n\n\n> However, I wonder if a better solution would be to perform these checks statically. [This gist](https://gist.github.com/mpage/b62cfd405e66763c00fd522b45c23ff3) is a simple example of how this might work. Running it against this source\n\nYes, I think static analysis could also definitely help (and I hope that some of the linters / type checkers could implement such checks to give early warnings to the user, although it might need to be a tool that is a combination of both, because it would need to be able to detect that the root object is a pandas DataFrame or Series). \nBut not everyone is using static analysis, so I think the runtime checks are still important to have.\n\n\n\n\n> the problem is that in this case, it's not a uniquely referenced temporary - there are a few references, just one less in 3.14 than in 3.13, and only sometimes.\n\n@ngoldbaum I do think it actually is a uniquely referenced temporary for a case like `df[..][..] = ..` (it is `df[..]` that is the temporary object in the call chain), at least depending on how references are considered for methods. \nThe example you show from the tests, `df2[\"B\"] = df2[\"A\"]`, is indeed not such a case (here `df2` is not a temporary). But so here we don't want to raise a warning, and the failure is because it is incorrectly raising a warning (because the reference count can be lower on Python 3.14)",
"Thanks so much for looking into this!!\n\nI agree - the uses in NumPy and Pandas probably justify adding some kind of public API for this upstream.",
"@jorisvandenbossche @ngoldbaum - I think I figured out a work around for 3.14: temporary objects that result from chained assignment should have a refcount of 1 (for method calls like `update`) or 2 (for `__setitem__`) and will not be a local in the caller's frame. I no longer see any test failures due to unexpected or missing `ChainedAssignmentError`s when I run the pandas test suite against 3.14 using this approach.\n\n* [pandas changes](https://github.com/mpage/pandas/tree/gh-61368-chained-assign)\n* [3.14 changes](https://github.com/mpage/cpython/tree/pandas-61368-314)",
"@mpage thanks for further looking into that! That seems like a nice approach as well (although I think I am not yet entirely understanding how a `sys._is_unique_referenced_temporary` helper versus `sys._is_local_in_caller_frame` + refcount check exactly compare, or why one approach would be better than the other)\n\nNow, practically speaking for moving forward here: my understanding is that it is too late to have some helper function like that in Python 3.14.0? Or would you still try to propose that? \nAnd looking at the `sys__is_local_in_caller_frame_impl`, that seems to be using mostly private APIs, so not something we can safely vendor? \n"
] |
3,023,304,057 | 61,367 | DOC: Add missing period in sample docstring | closed | 2025-04-27T20:25:31 | 2025-04-28T16:45:56 | 2025-04-28T16:45:43 | https://github.com/pandas-dev/pandas/pull/61367 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61367 | https://github.com/pandas-dev/pandas/pull/61367 | yanamis | 1 | - Minor documentation fix.
- Adds a missing period at the end of the "random_state" description in the `sample` function docstring.
- No functional changes.
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
| [
"Docs"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Thanks @yanamis "
] |
3,023,219,572 | 61,366 | [minor edit] edit definitions of some parameters with correct idiomatic English for better legibility | closed | 2025-04-27T17:32:54 | 2025-04-28T16:39:33 | 2025-04-28T16:39:26 | https://github.com/pandas-dev/pandas/pull/61366 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61366 | https://github.com/pandas-dev/pandas/pull/61366 | kirisakow | 1 | <!--
- [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
//--> | [
"Docs"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Thanks @kirisakow "
] |
3,023,173,793 | 61,365 | BUG: Constructing series with Timedelta object results in datetime series | open | 2025-04-27T16:06:33 | 2025-05-04T16:02:19 | null | https://github.com/pandas-dev/pandas/issues/61365 | true | null | null | Casper-Guo | 3 | ### Pandas version checks
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [x] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
import pandas as pd
test = pd.Series([pd.Timedelta("NaT")])
print(test)
```
### Issue Description
`test` is initialized to a series of `datetime64` type. This gotcha is not documented anywhere and the result is counter-intuitive. Opening the issue in case this is unintended.
### Expected Behavior
`test` is initialized to a series of `timedelta64` type
### Installed Versions
<details>
INSTALLED VERSIONS
------------------
commit : 0691c5cf90477d3503834d983f69350f250a6ff7
python : 3.12.3
python-bits : 64
OS : Linux
OS-release : 5.15.167.4-microsoft-standard-WSL2
Version : #1 SMP Tue Nov 5 00:21:55 UTC 2024
machine : x86_64
processor : x86_64
byteorder : little
LC_ALL : None
LANG : C.UTF-8
LOCALE : C.UTF-8
pandas : 2.2.3
numpy : 2.2.3
pytz : 2025.1
dateutil : 2.9.0.post0
pip : 25.0.1
Cython : None
sphinx : 8.2.3
IPython : None
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : 4.13.3
blosc : None
bottleneck : None
dataframe-api-compat : None
fastparquet : None
fsspec : None
html5lib : None
hypothesis : None
gcsfs : None
jinja2 : 3.1.5
lxml.etree : None
matplotlib : 3.10.1
numba : None
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
psycopg2 : None
pymysql : None
pyarrow : None
pyreadstat : None
pytest : 8.3.5
python-calamine : None
pyxlsb : None
s3fs : None
scipy : 1.15.2
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : None
xlsxwriter : None
zstandard : None
tzdata : 2025.1
qtpy : None
pyqt5 : None
</details>
| [
"Bug",
"Timedelta",
"Constructors"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | [
"Thanks for opening this! Confirmed on main. I agree that the behavior feels a bit unintuitive. ",
"I did some investigating and found that for datetime-related types (datetime64, timedelta64, etc) with the value `pd.NaT`, pandas stores them all as `<class 'pandas._libs.tslibs.nattype.NaTType'>`. This makes it impossible to differentiate between a `Timestamp(\"NaT\")` and a `Timedelta(\"NaT\")` during Series construction.",
"I also traced the source code and it looks like the `dtype_if_all_nat` is assigned by `dtype_if_all_nat` in this case. As @arthurlw mentioned, there's no reliable way to determine whether a `NaTType` originates from a `Timestamp` or a `Timedelta`. 😢\n\nhttps://github.com/pandas-dev/pandas/blob/337d40e5d55f7787e48f029486f47fd5a053bc80/pandas/core/dtypes/cast.py#L1195-L1206\n\n\nhttps://github.com/pandas-dev/pandas/blob/337d40e5d55f7787e48f029486f47fd5a053bc80/pandas/_libs/lib.pyx#L2808-L2831"
] |
3,023,111,781 | 61,364 | BUG: groupby.groups with NA categories fails | closed | 2025-04-27T14:19:18 | 2025-04-28T20:30:58 | 2025-04-28T16:47:10 | https://github.com/pandas-dev/pandas/pull/61364 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61364 | https://github.com/pandas-dev/pandas/pull/61364 | rhshadrach | 1 | - [x] closes #61356 (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
There is a slight code duplication here, but we don't need to rely on Categorical's codes because we can just directly use groupby's. We also can't use `groupby` to implement `Index.groupby` because the former only works in the case where the `values` are exhaustive. | [
"Bug",
"Groupby",
"Missing-data",
"Categorical"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Thanks @rhshadrach "
] |
3,022,586,635 | 61,363 | DOC: Added constructor parameters to DateOffset docstring for API consistency #52431 | closed | 2025-04-27T02:11:58 | 2025-05-03T13:24:42 | 2025-05-03T13:24:42 | https://github.com/pandas-dev/pandas/pull/61363 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61363 | https://github.com/pandas-dev/pandas/pull/61363 | sainivas-99 | 1 | - Added the constructor signature for DateOffset.
- No functional changes were made, only documentation improvements.
- Part of the issue #52431 is addressed by this. | [] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"I'm a bit confused here, as the description of the PR doesn't match the changes. I'll be closing this, as the change here doesn't seem very useful. The issue you refer to is about documenting the parameters of the constructors of the mentioned classes, not adding an example of constructing the class. Also, the examples should go into their section and format."
] |
3,022,360,085 | 61,362 | QST: best way to extend/subclass pandas.DataFrame | closed | 2025-04-26T22:17:25 | 2025-08-05T03:04:19 | 2025-08-05T03:04:18 | https://github.com/pandas-dev/pandas/issues/61362 | true | null | null | rwijtvliet | 2 | ### Research
- [x] I have searched the [[pandas] tag](https://stackoverflow.com/questions/tagged/pandas) on StackOverflow for similar questions.
- [x] I have asked my usage related question on [StackOverflow](https://stackoverflow.com).
### Link to question on StackOverflow
https://stackoverflow.com/questions/79594258/best-way-to-extend-subclass-pandas-dataframe
### Question about pandas
I've written a [package](https://www.github.com/rwijtvliet/portfolyo) to work with energy-related timeseries. At its center is a class ([`PfLine`](https://portfolyo.readthedocs.io/en/latest/core/pfline.html)) that is essentially a wrapper around pandas.DataFrame, and it implements various methods and properties that are also available on DataFrames - like `.loc`, `.asfreq()`, `.index`, etc.
I am currently in the middle of a rewrite of this package, and think it would be a good idea to have closer integration with pandas. [This page](https://pandas.pydata.org/docs/development/extending.html) lays out several possibilities, and I am unsure which route to take - and was hoping to find some sparring here.
Let me describe a bit what I'm trying to accomplish with the `PfLine` class:
* Behaves like a DataFrame, with specific column names allowed and some data conversion (and validation) on initialisation.
* Is immutable, to prevent the data from becoming inconsistent.
* Has additional methods.
The methods could be directly under `PfLine.method()` or under e.g. `df.pfl.method()`.
What is probably important: the user needs a way to specify a (still under development) configuration object (`commodity`) when initialising the PfLine. This object contains information used in coercing the data, e.g. which units are correct and which timezones are allowed for the index. | [
"Usage Question",
"Closing Candidate"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"> Behaves like a DataFrame... Is immutable\n\nThese two are in conflict, pandas is not designed to be immutable. At the very least, you'd have to workaround:\n\n- `__setitem__`, `.loc`, .`iloc`, `.at`, `.iat`\n- `.to_numpy()`, `.values`\n- Any method with an inplace argument\n- Any method which acts inplace (e.g. `update`, `insert`)\n\nBut with these requirements, I believe the only feasible option would be subclass DataFrame / Series.",
"Closing as addressed. Can reopen if there are further questions."
] |
3,022,163,177 | 61,361 | REGR: Fix signature of GroupBy.expanding | closed | 2025-04-26T17:38:41 | 2025-04-27T11:29:30 | 2025-04-27T11:29:29 | https://github.com/pandas-dev/pandas/issues/61361 | true | null | null | rhshadrach | 4 | Ref: https://github.com/pandas-dev/pandas/pull/61352#discussion_r2060726723
#61352 replaced `*args` and `**kwargs` in the signature of `GroupBy.expanding`. However I believe further arguments need to be added. We could also revert the PR instead. | [
"Bug",
"Groupby",
"Regression",
"Blocker",
"Window"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Thanks for pointing this out! Just to confirm, which additional arguments do you think should be included in the signature/documentation?",
"From the old docstrings:\n\n> Arguments are the same as `:meth:DataFrame.rolling` except that ``step`` cannot be specified.\n\nI believe that's accurate, but we should confirm.",
"When I was writing the documentation, I referred to the Expanding class and the DataFrame.expanding signature. Both expect min_periods, axis, and method as parameters. \n\nDataFrame.rolling accepts several additional arguments, such as window, center, and win_type, which don't apply to expanding windows.\n\nAlso the docstring above was added by me in #61274, though after reviewing this more carefully, I believe that was a mistake.",
"> Also the docstring above was added by me in [#61274](https://github.com/pandas-dev/pandas/pull/61274), though after reviewing this more carefully, I believe that was a mistake.\n\nAt my request!\n\nI missed that `Expanding` is part of the MRO for ExpandingGroupby. The current signature looks correct to me. Closing."
] |
3,021,572,049 | 61,360 | ENH: magic_case() | closed | 2025-04-26T07:05:56 | 2025-04-26T18:11:40 | 2025-04-26T18:11:39 | https://github.com/pandas-dev/pandas/issues/61360 | true | null | null | VamsiAkella | 2 | ### Feature Type
- [x] Adding new functionality to pandas
- [ ] Changing existing functionality in pandas
- [ ] Removing existing functionality in pandas
### Problem Description
We often run into the issue of not knowing the exact case of a column name. We can print and view the columns, but to make life a little easier I created magic_case().
We pass the DataFrame and the column name we know (ignoring case), and the result can be assigned to a variable:
mc=magic_case(df_2,'jack')
print(mc) # JaCK
and if there are multiple columns whose names differ only in case, it throws a ValueError with the list of names
# ValueError: Multiple columns with the same name but different cases found: ['JaCK', 'JACk']
### Feature Description
def magic_case(df, column_name, new_name=None, inplace=False):
"""
Find the exact case-sensitive column name in a DataFrame and optionally rename it.
Parameters:
-----------
df : pandas.DataFrame
The DataFrame to search in
column_name : str
The case-insensitive column name to search for
new_name : str, optional
If provided, the column will be renamed to this value
inplace : bool, default False
If True and new_name is provided, modifies the DataFrame in-place and returns None.
If False and new_name is provided, returns a copy of the DataFrame with renamed column.
If new_name is None, this parameter has no effect.
Returns:
--------
str or pandas.DataFrame or None
- If new_name is None: returns the exact case-sensitive column name
- If new_name is provided and inplace=False: returns the DataFrame with renamed column
- If new_name is provided and inplace=True: returns None
Raises:
-------
ValueError
If no matching column is found or if multiple matches are found
"""
# Check if the dataframe is empty or has no columns
if df.empty or len(df.columns) == 0:
raise ValueError("DataFrame is empty or has no columns")
# Strip whitespace from column names for comparison
clean_columns = {col.lower().strip(): col for col in df.columns}
# Clean and lowercase the search term
search_term = column_name.lower().strip()
# Check if the lowercase version of the input exists
if search_term not in clean_columns:
matches = []
# Check for partial matches (e.g., "jack" might match "jackson")
for col_lower, col_original in clean_columns.items():
if search_term in col_lower or col_lower in search_term:
matches.append(col_original)
if matches:
original_column_name = matches[0] # Get the first partial match
else:
raise ValueError(f"No column matching '{column_name}' was found in the DataFrame")
else:
# Check for multiple exact matches with the same spelling but different cases
exact_matches = [col for col in df.columns if col.lower().strip() == search_term]
if len(exact_matches) > 1:
raise ValueError(f"Multiple columns with the same name but different cases found: {exact_matches}")
# Get the exact case-sensitive column name
original_column_name = clean_columns[search_term]
# If new_name is not provided, just return the original column name
if new_name is None:
return original_column_name
# If new_name is provided, rename the column
if inplace:
df.rename(columns={original_column_name: new_name}, inplace=True)
return None
else:
return df.rename(columns={original_column_name: new_name})
### Alternative Solutions
# nothing
### Additional Context
if you have anything to say, please drop a mail to akvamsikrishna@outlook.com with subject: magic_case() 😅 just so I can identify it easily and prioritize your response over others. | [
"Enhancement",
"Indexing",
"Closing Candidate"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Thanks for the request. Your example implementation does not handle non-string columns as well as MultiIndex columns. In general, I do not think this rises to the level required for inclusion in the pandas API. Labeling as a closing candidate.",
"Agreed -1 for adding this in pandas. Thanks for the suggestion but closing"
] |
3,021,313,607 | 61,359 | BUG: Raise ValueError for non-string columns in read_json orient='table' (GH19129) | closed | 2025-04-26T01:10:10 | 2025-06-02T16:59:22 | 2025-06-02T16:59:22 | https://github.com/pandas-dev/pandas/pull/61359 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61359 | https://github.com/pandas-dev/pandas/pull/61359 | amoitra1 | 4 | - Closes #19129
- Adds validation to ensure all column names are strings when using orient='table' in read_json
- Raises a clear ValueError if invalid column names are found
- Adds a new unit test to pandas/tests/io/json/test_json_table_schema.py to cover the invalid input case
- Ran pytest and pre-commit hooks successfully
Looking forward to feedback. Thanks! | [
"Error Reporting",
"IO JSON",
"Stale"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"pre-commit.ci autofix",
"Hi, it seems there is already a PR #60945 addressing this issue.",
"This pull request is stale because it has been open for thirty days with no activity. Please [update](https://pandas.pydata.org/pandas-docs/stable/development/contributing.html#updating-your-pull-request) and respond to this comment if you're still interested in working on this.",
"As mentioned, it appears we already have https://github.com/pandas-dev/pandas/pull/60945 addressing this issue so closing in favor of that PR since it was opened first "
] |
3,021,283,667 | 61,358 | Improve documentation for MonthEnd and YearBegin offsets | closed | 2025-04-26T00:37:39 | 2025-05-03T13:20:16 | 2025-05-03T13:20:15 | https://github.com/pandas-dev/pandas/pull/61358 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61358 | https://github.com/pandas-dev/pandas/pull/61358 | SoumitAddanki | 1 | This pull request improves the documentation for two commonly used offset constructors in Pandas: MonthEnd and YearBegin.
Changes include:
Clarified the purpose and behavior of each offset class
Added runnable examples (doctest-compliant) to demonstrate usage
Improved parameter descriptions where necessary
These changes aim to make the documentation more accessible and clear for both new and experienced users of Pandas’ time series functionality.
| [] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Looks like something went wrong with this PR, it contains lots of unrelated changes. I think better to close this, as I think it'll be much simpler to open a new PR with the documentation changes than fixing this PR. Please, feel free to reopen for the intended work. I recommend using a new branch (not `main`) to work on pull requests, that should help avoid the problems here."
] |
3,021,192,275 | 61,357 | DOC: change `tuples` param for MultiIndex.from_tuples from sequence to iterable | open | 2025-04-25T23:07:36 | 2025-04-27T19:44:45 | null | https://github.com/pandas-dev/pandas/issues/61357 | true | null | null | yangdanny97 | 2 | ### Pandas version checks
- [x] I have checked that the issue still exists on the latest versions of the docs on `main` [here](https://pandas.pydata.org/docs/dev/)
### Location of the documentation
https://pandas.pydata.org/docs/reference/api/pandas.MultiIndex.from_tuples.html#pandas.MultiIndex.from_tuples
The docs currently say this for the `tuples` parameter:
> list / sequence of tuple-likes
### Documentation problem
Pandas-stubs annotates the parameter as sequence: https://github.com/pandas-dev/pandas-stubs/blob/main/pandas-stubs/core/indexes/multi.pyi#L49
Pandas source code annotates the parameter as iterable: https://github.com/pandas-dev/pandas/blob/main/pandas/core/indexes/multi.py#L521
Typing the parameter as sequence prevents this pattern from typechecking, even if it works at runtime:
```
MultiIndex.from_tuples(zip(['a'], ['b']))
```
This was raised in https://github.com/pandas-dev/pandas-stubs/issues/1158
### Suggested fix for documentation
Could we loosen the type annotation in the docs to say iterable? Then I can update pandas-stubs to match. | [
"Docs",
"MultiIndex",
"Needs Discussion"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"> Could we loosen the type annotation in the docs to say iterable?\n\nI think no - the code raises on inputs that aren't list-like (pandas considers generators as list-like). Currently the code does not raise on sets, but I believe it should. I think the docs should be something like `list-like excluding sets` and the behavior of pandas changed to match.\n\nAlso this is the only occurrence in pandas of `tuple-likes` and I'm not sure what that means. `MultiIndex.from_tuples([[1, 2]])` does not raise, is a list \"tuple-like\"?\n\ncc @Dr-Irv",
"This relates to a comment I made here: https://github.com/pandas-dev/pandas/issues/55425#issuecomment-1967184148\n\nI agree with the idea of removing `tuple-like` and using `list-like` in the docs, which means we could include a generator.\n"
] |
3,021,002,869 | 61,356 | BUG: `DataFrameGroupBy.groups` fails when Categorical indexer contains NaNs and `dropna=False` | closed | 2025-04-25T20:46:45 | 2025-04-28T16:47:11 | 2025-04-28T16:47:11 | https://github.com/pandas-dev/pandas/issues/61356 | true | null | null | tehunter | 1 | ### Pandas version checks
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [x] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
>>> df = DataFrame(
... {
... "cat": Categorical(["a", np.nan, "a"], categories=["a", "b", "d"]),
... "vals": [1, 2, 3],
... }
... )
>>> g = df.groupby("cat", observed=True, dropna=False)
>>> result = g.groups
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/workspaces/pandas/pandas/core/groupby/groupby.py", line 569, in groups
return self._grouper.groups
File "properties.pyx", line 36, in pandas._libs.properties.CachedProperty.__get__
File "/workspaces/pandas/pandas/core/groupby/ops.py", line 710, in groups
return self.groupings[0].groups
File "properties.pyx", line 36, in pandas._libs.properties.CachedProperty.__get__
File "/workspaces/pandas/pandas/core/groupby/grouper.py", line 711, in groups
return codes, uniques
File "/workspaces/pandas/pandas/core/arrays/categorical.py", line 745, in from_codes
dtype = CategoricalDtype._from_values_or_dtype(
File "/workspaces/pandas/pandas/core/dtypes/dtypes.py", line 347, in _from_values_or_dtype
dtype = CategoricalDtype(categories, ordered)
File "/workspaces/pandas/pandas/core/dtypes/dtypes.py", line 230, in __init__
self._finalize(categories, ordered, fastpath=False)
File "/workspaces/pandas/pandas/core/dtypes/dtypes.py", line 387, in _finalize
categories = self.validate_categories(categories, fastpath=fastpath)
File "/workspaces/pandas/pandas/core/dtypes/dtypes.py", line 585, in validate_categories
raise ValueError("Categorical categories cannot be null")
ValueError: Categorical categories cannot be null
>>>
```
### Issue Description
When using `df.groupby(cat, dropna=False).groups`, we encounter a `ValueError`. This is counter-intuitive, as grouping operations work without an issue.
```python
>>> df = DataFrame(
... {
... "cat": Categorical(["a", np.nan, "a"], categories=["a", "b", "d"]),
... "vals": [1, 2, 3],
... }
... )
>>> g = df.groupby("cat", observed=True, dropna=False)
>>> g.sum()
vals
cat
a 4
NaN 2
>>> g.sum().index
CategoricalIndex(['a', nan], categories=['a', 'b', 'd'], ordered=False, dtype='category', name='cat')
```
### Expected Behavior
`.groups` should return a dictionary which includes the NaN as the last entry.
### Installed Versions
<details>
INSTALLED VERSIONS
------------------
commit : 41131a14324ababc5c81f194de3d9a239d120f27
python : 3.10.8
python-bits : 64
OS : Linux
OS-release : 5.15.167.4-microsoft-standard-WSL2
Version : #1 SMP Tue Nov 5 00:21:55 UTC 2024
machine : x86_64
processor :
byteorder : little
LC_ALL : None
LANG : C.UTF-8
LOCALE : en_US.UTF-8
pandas : 3.0.0.dev0+2085.g41131a1432
numpy : 2.2.5
dateutil : 2.9.0.post0
pip : 25.0.1
Cython : 3.0.12
sphinx : 8.1.3
IPython : 8.35.0
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : 4.13.4
blosc : None
bottleneck : 1.4.2
fastparquet : 2024.11.0
fsspec : 2025.3.2
html5lib : 1.1
hypothesis : 6.131.8
gcsfs : 2025.3.2
jinja2 : 3.1.6
lxml.etree : 5.4.0
matplotlib : 3.10.1
numba : 0.61.2
numexpr : 2.10.2
odfpy : None
openpyxl : 3.1.5
psycopg2 : 2.9.10
pymysql : 1.4.6
pyarrow : 19.0.1
pyreadstat : 1.2.8
pytest : 8.3.5
python-calamine : None
pytz : 2025.2
pyxlsb : 1.0.10
s3fs : 2025.3.2
scipy : 1.15.2
sqlalchemy : 2.0.40
tables : 3.10.1
tabulate : 0.9.0
xarray : 2024.9.0
xlrd : 2.0.1
xlsxwriter : 3.2.3
zstandard : 0.23.0
tzdata : 2025.2
qtpy : None
pyqt5 : None
</details>
| [
"Bug",
"Groupby",
"Missing-data",
"Categorical"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Thanks for the report. Confirmed on main. PR to fix is up."
] |
3,020,794,342 | 61,355 | DOC: Removed self-reference to `DataFrame.resample` in the "See also" section. | closed | 2025-04-25T18:51:16 | 2025-04-25T18:54:57 | 2025-04-25T18:54:56 | https://github.com/pandas-dev/pandas/pull/61355 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61355 | https://github.com/pandas-dev/pandas/pull/61355 | arthurlw | 1 | - [ ] ~closes #xxxx (Replace xxxx with the GitHub issue number)~
- [ ] ~[Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature~
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] ~Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.~
- [ ] ~Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.~
| [] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Closing this. I realized that `DataFrame.resample` and `Series.resample` share the same docstring, so listing both in \"See also\" makes sense."
] |
3,020,557,314 | 61,354 | Test Cython divmod fix for Windows | closed | 2025-04-25T16:50:01 | 2025-04-25T17:16:37 | 2025-04-25T17:16:32 | https://github.com/pandas-dev/pandas/pull/61354 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61354 | https://github.com/pandas-dev/pandas/pull/61354 | mroeschke | 1 | https://github.com/cython/cython/pull/6801 should fix the issues we were seeing in https://github.com/pandas-dev/pandas/pull/61261, but this commit is not apart of the Cython nightly wheels yet. | [
"Build"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Looks like that Cython PR fixes the Windows build. Closing until we the Cython nighties are updated"
] |
3,020,552,781 | 61,353 | BUG: inserting list of strings into Series auto-infers them as datetimes with mixed formats | open | 2025-04-25T16:47:35 | 2025-05-24T10:23:40 | null | https://github.com/pandas-dev/pandas/issues/61353 | true | null | null | MarcoGorelli | 7 | ### Pandas version checks
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [x] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
In [3]: df = pd.DataFrame({'a': pd.date_range('2000', freq='D', periods=2)})
In [4]: df.loc[:, 'a'] = ['12/01/2020', '13/01/2020']
In [5]: df
Out[5]:
a
0 2020-12-01
1 2020-01-13
```
### Issue Description
Similar to https://pandas.pydata.org/pdeps/0004-consistent-to-datetime-parsing.html
### Expected Behavior
I think that inferring strings as datetimes is fine so long as they're parsed in a consistent format
### Installed Versions
<details>
INSTALLED VERSIONS
------------------
commit : 57fd50221ea3d5de63d909e168f10ad9fc0eee9b
python : 3.10.12
python-bits : 64
OS : Linux
OS-release : 5.15.167.4-microsoft-standard-WSL2
Version : #1 SMP Tue Nov 5 00:21:55 UTC 2024
machine : x86_64
processor : x86_64
byteorder : little
LC_ALL : None
LANG : C.UTF-8
LOCALE : en_US.UTF-8
pandas : 3.0.0.dev0+1979.g57fd50221e
numpy : 1.26.4
dateutil : 2.9.0.post0
pip : 25.0.1
Cython : 3.0.12
sphinx : 8.1.3
IPython : 8.33.0
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : 4.13.3
blosc : None
bottleneck : 1.4.2
fastparquet : 2024.11.0
fsspec : 2025.2.0
html5lib : 1.1
hypothesis : 6.127.5
gcsfs : 2025.2.0
jinja2 : 3.1.5
lxml.etree : 5.3.1
matplotlib : 3.10.1
numba : 0.61.0
numexpr : 2.10.2
odfpy : None
openpyxl : 3.1.5
psycopg2 : 2.9.10
pymysql : 1.4.6
pyarrow : 19.0.1
pyreadstat : 1.2.8
pytest : 8.3.5
python-calamine : None
pytz : 2025.1
pyxlsb : 1.0.10
s3fs : 2025.2.0
scipy : 1.15.2
sqlalchemy : 2.0.38
tables : 3.10.1
tabulate : 0.9.0
xarray : 2024.9.0
xlrd : 2.0.1
xlsxwriter : 3.2.2
zstandard : 0.23.0
tzdata : 2025.1
qtpy : None
pyqt5 : None
</details>
| [
"Bug",
"Datetime"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"sorry how's that related?",
"It's not just with loc. \nsame results:\n\ndf = DataFrame({'a': ['12/01/2020', '13/01/2020']}, dtype='datetime64[ns]')\n\n0 2020-12-01\n1 2020-01-13",
"take",
"@MarcoGorelli Hey, I looked into this and the root issue seems to be that ambiguous datetime strings like `\"12/01/2020\"` and `\"13/01/2020\"` are being silently parsed inconsistently — one as MM/DD/YYYY and the other as DD/MM/YYYY — depending on what's valid. This happens both when assigning with `.loc` and during DataFrame construction with `dtype='datetime64[ns]'`, since NumPy ends up handling the coercion without format checks.\n\nI added a small check inside `maybe_coerce_values()` that kicks in when we’re dealing with a 1D object array of strings. It tries to infer a consistent datetime format, and raises a `ValueError` if the format is ambiguous or inconsistent. This is based on the idea mentioned in the issue that inferring is okay as long as the format is consistent.\n\nWould this kind of fix be reasonable? If so, I can open a PR. Let me know!",
"yeah that might be fine",
"> yeah that might be fine\n\njust opened a PR. When you have a moment, would you mind taking a look? Appreciate your input!",
"i don't really have much capacity for reviews, but i'll note that your solution probably adds more complexity than it warranted, if you want it to be merged it should be simpler and cleaner"
] |
3,018,727,940 | 61,352 | DOC: Updated `groupby.expanding` arguments | closed | 2025-04-25T00:46:30 | 2025-04-27T11:30:09 | 2025-04-25T16:34:50 | https://github.com/pandas-dev/pandas/pull/61352 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61352 | https://github.com/pandas-dev/pandas/pull/61352 | arthurlw | 1 | - [ ] ~closes #xxxx (Replace xxxx with the GitHub issue number)~
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] ~Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.~
- [ ] ~Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.~
| [
"Window"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Thanks @arthurlw "
] |
3,018,442,108 | 61,351 | Add warning to `.groupby` when null keys would be dropped due to default `dropna` | closed | 2025-04-24T21:01:55 | 2025-05-27T16:18:07 | 2025-05-27T16:18:07 | https://github.com/pandas-dev/pandas/pull/61351 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61351 | https://github.com/pandas-dev/pandas/pull/61351 | tehunter | 2 | - [X] closes #61339
- [X] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [X] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [X] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
TODO:
- [X] Check performance for `codes` check approaches (`codes.min()` was about 3x faster)
- [ ] Run full test suite to ensure nothing broke
- [ ] Add tests/implementation for `.pivot_table`/`.stack`/etc. (possibly in a follow-up PR?) | [
"Stale"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"This pull request is stale because it has been open for thirty days with no activity. Please [update](https://pandas.pydata.org/pandas-docs/stable/development/contributing.html#updating-your-pull-request) and respond to this comment if you're still interested in working on this.",
"Thanks for the pull request, but it appears to have gone stale. If interested in continuing, please merge in the main branch, address any review comments and/or failing tests, and we can reopen."
] |
3,018,111,045 | 61,350 | ENH: th elements from Styler need the row scope | closed | 2025-04-24T18:23:37 | 2025-04-24T19:03:45 | 2025-04-24T19:03:21 | https://github.com/pandas-dev/pandas/issues/61350 | true | null | null | reteps | 1 | ### Feature Type
- [ ] Adding new functionality to pandas
- [x] Changing existing functionality in pandas
- [ ] Removing existing functionality in pandas
### Problem Description
Currently, the pandas [Styler](https://pandas.pydata.org/docs/reference/api/pandas.io.formats.style.Styler.html) API can be used to create a HTML table from a dataframe. However, the tables it generates are not accessible: it fails [WCAG/H63](https://www.w3.org/WAI/WCAG21/Techniques/html/H63).
### Feature Description
Ensure the output generated by Styler is accessible.
- `th` with class `row_heading` needs the `row` scope
I use the current workaround to add this rule myself:
```
html_root = lxml.html.fromstring(frame_style.to_html())
for th in html_root.xpath("//th[contains(@class, 'row_heading')]"):
th.set("scope", "row")
```
### Alternative Solutions
- Make the styler API more flexible for adding attributes. Currently, [set_td_classes](https://pandas.pydata.org/docs/reference/api/pandas.io.formats.style.Styler.set_td_classes.html#pandas.io.formats.style.Styler.set_td_classes) and [set_table_styles](https://pandas.pydata.org/docs/reference/api/pandas.io.formats.style.Styler.set_table_styles.html#pandas.io.formats.style.Styler.set_table_styles) aren't flexible enough for this, and [set_table_attributes](https://pandas.pydata.org/docs/reference/api/pandas.io.formats.style.Styler.set_table_attributes.html#pandas.io.formats.style.Styler.set_table_attributes) can't set attributes on `th` elements themselves.
### Additional Context
_No response_ | [
"Enhancement",
"Needs Triage"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Closing in favor of https://gitlab.com/html-validate/html-validate/-/issues/303."
] |
3,017,715,895 | 61,349 | TST: Testing for mixed int/str Index | closed | 2025-04-24T15:48:53 | 2025-06-30T18:20:47 | 2025-06-30T18:20:46 | https://github.com/pandas-dev/pandas/pull/61349 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61349 | https://github.com/pandas-dev/pandas/pull/61349 | xaris96 | 11 | - [x] closes #54072
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [x] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [x] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| [
"Testing",
"Index"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"@xaris96 Can you fix the broken tests and remove all your debug files from this PR please?",
"@datapythonista okay. no problem. i would like to ask about the pr. Do we need to fix the failed tests that already existed to pass the pr? we are new to this and need some guidance :)",
"I think the main idea of the issue is to add the mixed index to the tests, see what it fails, fix the bugs of things that don't work with an index with mix types, and if there are tests that need to be updated, also fix that. Since I think several things will be broken, maybe you can open a first PR with just adding the index with mixed types, then write the failures in a comment to the issue, and then address them in individual PRs. But up to you, whatever you consider it makes things easier. @jbrockmendel anything you'd like to add?",
"@datapythonista thanks for the advise. it is really helpful!",
"@datapythonista also one more question. is there any problem or do we create a mess if we making changes and update this pr? or this is the common tactic?",
"> @datapythonista also one more question. is there any problem or do we create a mess if we making changes and update this pr? or this is the common tactic?\r\n\r\nA PR is mainly a UI for your fork's branch. The usual way of working is to just keep updating your fork's branch, until reviewers are happy and it's merged into the project's main branch.",
"> anything you'd like to add?\r\n\r\nseems like a reasonable approach.",
"@mroeschke okay ",
"@datapythonista Hi, I’m encountering an issue with test_rolling_var_numerical_issues, even though I haven't changed anything related to it. It was passing before, but now it's failing. Could you help me understand what might be causing this?",
"@mroeschke Hi.. i also commented above that I’m encountering an issue with test_rolling_var_numerical_issues, even though I haven't changed anything related to it. It was passing before, but now it's failing. Could you help me understand what might be causing this?",
"Thanks for the pull request, but it appears to have gone stale. If interested in continuing, please merge in the main branch, address any review comments and/or failing tests, and we can reopen."
] |
3,017,657,606 | 61,348 | Mixed int string | closed | 2025-04-24T15:25:38 | 2025-05-04T21:04:42 | 2025-05-03T13:16:39 | https://github.com/pandas-dev/pandas/pull/61348 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61348 | https://github.com/pandas-dev/pandas/pull/61348 | xaris96 | 2 | - [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| [] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Feels like this was superseded by #61349. Please let me know if I missed something and there is a reason to reopen this PR.",
"@datapythonista yeah you are right. sorry for the mess its our first time!"
] |
3,017,123,734 | 61,347 | Request for guidance on issues for upcoming PyData Yerevan pandas sprint | closed | 2025-04-24T12:34:24 | 2025-08-24T13:12:29 | 2025-08-24T13:12:29 | https://github.com/pandas-dev/pandas/issues/61347 | true | null | null | surenpoghosian | 6 | Dear Pandas Team,
I am Suren Poghosyan, an Organizational Committee Member at PyData Yerevan. As you may already know, last summer we hosted an open-source contribution sprint focused on **pandas** library, in collaboration with Patrick Höfler, at the American University of Armenia.
We are currently planning to run a follow-up sprint independently and would appreciate your guidance on which issues in the **pandas** GitHub repository are the most appropriate for a 2-3 hour contribution session. Furthermore, feel free to share any relevant issue which you would like to proceed with, despite our previous specification and time span.
On top of that, we are reaching out to make sure we’re following the proper contribution guidelines and not creating unnecessary noise or inconveniences in the issue tracker. We aim to contribute meaningfully and respectfully.
Looking forward to contributing again and strengthening our local culture of open-source collaboration.
In addition, here are the articles about our previous sprint:
[AUA to Host Inaugural PyData Yerevan Open Source pandas Sprint ](https://newsroom.aua.am/2024/06/10/aua-to-host-inaugural-pydata-yerevan-open-source-pandas-sprint/)
[PyData Yerevan Open Source pandas Sprint](https://newsroom.aua.am/event/pydata-yerevan-open-source-pandas-sprint/)
[PyData Yerevan hosted the inaugural Open Source pandas Sprint with Patrick Höfler - Linkedin](https://www.linkedin.com/posts/pydata-yerevan_yesterday-pydata-yerevan-hosted-the-inaugural-activity-7211723009479376896-nkZw?utm_source=share&utm_medium=member_desktop&rcm=ACoAADUCpscBtHmkZbcvJGJVB6J3UtccLYAcVPM)
Best regards,
Suren Poghosyan
Organizational Committee Member
PyData Yerevan
@jorisvandenbossche @TomAugspurger @jreback @WillAyd @mroeschke @jbrockmendel @datapythonista | [
"Community"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"@phofl can you take point on this?\n\nWhen is the sprint? i.e. when it is passed this becomes closable. ",
"@jbrockmendel we're aiming for August 15.",
"I've gathered the [issues](https://github.com/surenpoghosian/pydata-yerevan-sprint-2025/issues/1) in the following [repo](https://github.com/surenpoghosian/pydata-yerevan-sprint-2025) \n\nPlease have a look, I would love to hear your feedback...",
">@phofl can you take point on this?\n\nI can't unfortunately and also don't have time.\n\n@surenpoghosian I don't think that there are many resources on the pandas side to help with this. The general guidance I shared a few months ago via email is still valid though",
"@surenpoghosian looking at the issues you listed, it really depends on how experienced your participants are. We generally recommend looking at issues with the Good First Issue label.",
"Hey guys, thank you for dedicating time to this matter. The event took place on August 15, and I am closing the issue as there is no more need for further guidance."
] |
3,014,981,266 | 61,346 | BUG: assignment via loc silently fails with differing dtypes | closed | 2025-04-23T18:48:39 | 2025-04-26T16:51:19 | 2025-04-26T12:21:38 | https://github.com/pandas-dev/pandas/issues/61346 | true | null | null | zbs | 13 | ### Pandas version checks
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [x] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
import pandas as pd
print(pd.__version__)
df = pd.DataFrame({'foo': ['2025-04-23', '2025-04-22']})
df['bar'] = pd.to_datetime(df['foo'], format='%Y-%m-%d')
df.loc[:, 'bar'] = df.loc[:, 'bar'].dt.strftime('%Y%m%d')
print(df)
# Yields
# 2.2.3
# foo bar
# 0 2025-04-23 2025-04-23
# 1 2025-04-22 2025-04-22
```
### Issue Description
I expect `bar` to look like
```
20250423
20250422
```
instead of
```
2025-04-23
2025-04-22
```
### Expected Behavior
`bar` should look like
```
20250423
20250422
```
### Installed Versions
<details>
```
[ins] In [2]: pd.show_versions()
INSTALLED VERSIONS
------------------
commit : 0691c5cf90477d3503834d983f69350f250a6ff7
python : 3.12.10
python-bits : 64
OS : Linux
OS-release : 4.18.0-372.32.1.el8_6.x86_64
Version : #1 SMP Fri Oct 7 12:35:10 EDT 2022
machine : x86_64
processor : x86_64
byteorder : little
LC_ALL : None
LANG : en_US.UTF-8
LOCALE : en_US.UTF-8
pandas : 2.2.3
numpy : 1.26.4
pytz : 2025.2
dateutil : 2.9.0.post0
pip : 25.0.1
Cython : 3.0.12
sphinx : None
IPython : 8.35.0
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : 4.13.3
blosc : None
bottleneck : 1.4.2
dataframe-api-compat : None
fastparquet : None
fsspec : 2024.9.0
html5lib : None
hypothesis : None
gcsfs : None
jinja2 : 3.1.6
lxml.etree : 5.3.2
matplotlib : 3.10.1
numba : 0.61.2
numexpr : 2.10.2
odfpy : None
openpyxl : 3.1.5
pandas_gbq : None
psycopg2 : None
pymysql : None
pyarrow : 15.0.2
pyreadstat : None
pytest : 8.3.5
python-calamine : None
pyxlsb : None
s3fs : None
scipy : 1.15.2
sqlalchemy : 2.0.39
tables : 3.9.2
tabulate : 0.9.0
xarray : 2025.3.1
xlrd : 2.0.1
xlsxwriter : 3.2.2
zstandard : 0.23.0
tzdata : 2025.2
qtpy : None
pyqt5 : None
```
</details>
| [
"Bug",
"Dtype Conversions",
"Closing Candidate"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Confirmed on main. Still silently works when assigning differing dtypes to columns via `.loc` (in the example above, assigning strings to a `datetime64` column). \n\nIt seems to me that this should be raising an error, consistent with the behavior introduced for other dtype mismatches (e.g., int64 ← str, which now raises a `LossySetitemError` when assigning with `.loc`).",
"This may be well known, but just in case, `df['bar'] = df.loc[:, 'bar'].dt.strftime('%Y%m%d')` gives the desired behavior for the OP and is what should be used when you want to overwrite a column with a (possibly) different dtype.\n\n> It seems to me that this should be raising an error, consistent with the behavior introduced for other dtype mismatches\n\nThis isn't so clear to me, e.g.\n\n```python\ndf = pd.DataFrame({\"a\": [1.0, 2.5, 3.0]})\ndf.loc[:, \"a\"] = 5\nprint(df)\n# a\n# 0 5.0\n# 1 5.0\n# 2 5.0\n```\n\nShould this raise? I personally think the answer there is no. But I'm not sure we ever made any decisions on which implicit conversions should and should not be allowed. This is somewhat related to [PDEP-6](https://pandas.pydata.org/pdeps/0006-ban-upcasting.html).",
"cc @pandas-dev/pandas-core ",
"I don't think that is what is going on here. It's not about incompatible types not being recognized. It's about the automatic conversion that is done with strings that are formatted datetime objects being assigned to a series that has `datetime64` dtype. With these statements:\n```python\ndf['bar'] = pd.to_datetime(df['foo'], format='%Y-%m-%d')\ndf.loc[:, 'bar'] = df.loc[:, 'bar'].dt.strftime('%Y%m%d')\n```\nthe first statement sets the dtype of `\"bar\"` to be `datetime64`. In the second statement, the expression `df.loc[:, 'bar'].dt.strftime('%Y%m%d')` has object dtype - it is a set of strings. But because it is being assigned to a column with `datetime64` dtype, we first try to parse the strings to see if it is a valid date. So then we keep the dtype as `datetime64`.\nFor example:\n```python\n>>> df.loc[:, \"bar\"] = [\"290102\", \"300304\"]\n>>> df\n foo bar\n0 2025-04-23 2002-01-29\n1 2025-04-22 2004-03-30\n```\n\nI'm not sure if we want to change the behavior in this case. If `.loc` is used to change values in a column with `datetime64` dtype, the ability to parse a string is useful as it lets you fix individual values (or selected rows) without having to parse the strings into dates.\n\nOn the other hand, as shown in the example, if a user did something like that, it is unclear whether they wanted the dates parsed as YYMMDD or DDMMYY. So maybe we should be warning if things are ambiguous??\n\n\n\n",
"Yup, looks like it's going down the mixed formats path (🙀 )\n```python\nIn [8]: df = pd.DataFrame({'foo': ['2025-04-23', '2025-04-22']}); df['bar'] = pd.to_datetime(df['foo'], format='%Y-%m-%d\n ...: ')\n\nIn [9]: df.loc[:, 'bar'] = ['12/01/2020', '13/01/2020']\n\nIn [10]: df\nOut[10]:\n foo bar\n0 2025-04-23 2020-12-01\n1 2025-04-22 2020-01-13\n```",
"@Dr-Irv, thanks for the detailed explanation. From what you’ve described, it appears that the automatic conversion of string-formatted datetime values when assigned via .loc is intentional and, in many cases, desirable for allowing flexible value updates in datetime columns.\n\nGiven this understanding, it seems the current behavior isn’t a bug per se but a design choice. If the consensus among maintainers and the community is that this behavior should remain as is, then it might make sense to close this issue. \n\nHowever, if there’s broader interest in re-evaluating the behavior—perhaps to introduce warnings or alternative handling for ambiguous string formats—it could be worthwhile to change the title with a view to moving the discussion towards a new enhancement proposal.",
"> Given this understanding, it seems the current behavior isn’t a bug per se but a design choice\n\nSure, but it would probably be in line with pdep4 for the parsing to be consistent rather than changing format mid column? ",
"@MarcoGorelli Is your sample in https://github.com/pandas-dev/pandas/issues/61346#issuecomment-2828868867 a bug not already covered by other open issues?",
"I couldn't find so, I've made a new one: https://github.com/pandas-dev/pandas/issues/61353\n\nAt which I agree that what's described here is intentional and not a bug 👍 ",
"> I've made a new one: [#61353](https://github.com/pandas-dev/pandas/issues/61353)\n\nthanks @MarcoGorelli ",
"This discussion has been quite illuminating: thanks for all your responses. In the past I’ve avoided using `df[col]` because pandas will complain if `df` is a view; instead I use `df.loc` wherever possible. Given that the above suggests using `df[col]`, is that warning no longer valid?",
"@zbs - there are two behaviors I think you may desire to perform: replace some rows in a column or replace the entire column. If you are replacing some rows in a column, you cannot also change the dtype of that column simultaneously.\n\nSome rows in a column: `df.loc[rows, column] = ...`\nReplace the entire column: `df[column] = ...`\n\nWhile it is possible to replace all rows with the first (e.g. `rows = :`), pandas will still treat this the same as if you are doing a partial replacement. That is, the dtype of the column cannot change.",
"Great, thanks for the explanation!"
] |
3,014,950,760 | 61,345 | Update groupby().first() documentation to clarify behavior with missing data (#27578) | closed | 2025-04-23T18:39:31 | 2025-05-19T16:16:48 | 2025-05-19T16:16:47 | https://github.com/pandas-dev/pandas/pull/61345 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61345 | https://github.com/pandas-dev/pandas/pull/61345 | ericcht | 1 | This PR enhances the docstring for `GroupBy.first()` to clarify:
- It returns the first *non-null* value per column
- It differs from `.nth(0)` and `.head(1)` in how it treats missing values
- Includes comparative examples for better understanding
Fixes part of issue #27578
Ready for review. | [
"Docs",
"Groupby"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Thanks for the pull request, but it appears to have gone stale. If interested in continuing, please merge in the main branch, address any review comments and/or failing tests, and we can reopen."
] |
3,014,914,579 | 61,344 | BUG: Series of bools with length mismatch does not raise when used with `.iloc` | closed | 2025-04-23T18:24:12 | 2025-04-24T20:20:23 | 2025-04-24T20:20:13 | https://github.com/pandas-dev/pandas/issues/61344 | true | null | null | arthurlw | 1 | ### Pandas version checks
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [x] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
import pandas as pd
import numpy as np
s = pd.Series([1, 2, 3])
mask_series = pd.Series([True, False, True, True])
result = s[mask_series]
print(result)
# Output:
# 0 1
# 2 3
# dtype: int64
mask_array = np.array([True, False, True, True])
print(s[mask_array])
# IndexError: Boolean index has wrong length: 4 instead of 3
```
### Issue Description
When using `.iloc` with a boolean Series mask whose length exceeds the target, pandas does not raise an error. This is inconsistent with numpy bool indexing, which raises an IndexError.
### Expected Behavior
`.iloc` should raise if the boolean Series mask length doesn’t match the target Series length.
### Installed Versions
<details>
INSTALLED VERSIONS
------------------
commit : 25e57c34158158de2cd5d2c0843f3e5babbeb3e5
python : 3.12.9
python-bits : 64
OS : Darwin
OS-release : 24.0.0
Version : Darwin Kernel Version 24.0.0: Mon Aug 12 20:49:48 PDT 2024; root:xnu-11215.1.10~2/RELEASE_ARM64_T8103
machine : arm64
processor : arm
byteorder : little
LC_ALL : None
LANG : en_US.UTF-8
LOCALE : en_US.UTF-8
pandas : 3.0.0.dev0+2080.g25e57c3415
numpy : 1.26.4
dateutil : 2.9.0.post0
pip : 25.0
Cython : 3.0.12
sphinx : 8.1.3
IPython : 9.0.2
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : 4.13.3
blosc : None
bottleneck : 1.4.2
fastparquet : 2024.11.0
fsspec : 2025.3.0
html5lib : 1.1
hypothesis : 6.130.4
gcsfs : 2025.3.0
jinja2 : 3.1.6
lxml.etree : 5.3.1
matplotlib : 3.10.1
numba : 0.61.0
numexpr : 2.10.2
odfpy : None
openpyxl : 3.1.5
psycopg2 : 2.9.6
pymysql : 1.4.6
pyarrow : 19.0.1
pyreadstat : 1.2.8
pytest : 8.3.5
python-calamine : None
pytz : 2025.2
pyxlsb : 1.0.10
s3fs : 2025.3.0
scipy : 1.15.2
sqlalchemy : 2.0.10
tables : 3.10.2
tabulate : 0.9.0
xarray : 2024.9.0
xlrd : 2.0.1
xlsxwriter : 3.2.2
zstandard : 0.23.0
tzdata : 2025.1
qtpy : None
pyqt5 : None
</details>
| [
"Bug",
"Indexing"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"pandas will align on the index when you provide a Series. This is expected behavior. Closing."
] |
3,013,699,081 | 61,343 | Fix #61072: inconsistent fullmatch results with regex alternation | closed | 2025-04-23T11:33:57 | 2025-06-30T18:29:35 | 2025-06-30T18:29:34 | https://github.com/pandas-dev/pandas/pull/61343 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61343 | https://github.com/pandas-dev/pandas/pull/61343 | Pedro-Santos04 | 2 | in PyArrow strings
Fixes an issue where regex patterns with alternation (|) produce different results between str dtype and string[pyarrow] dtype. When using patterns like "(as)|(as)", PyArrow implementation would incorrectly match "asdf" while Python's implementation correctly rejects it. The fix adds special handling to ensure alternation patterns are properly parenthesized when using PyArrow-backed strings
- [ ] closes #61072 | [
"Bug",
"Strings"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"> I think this ends up using pyarrow's `match_substring_regex` function here; therefore, this would be a limitation of the pyarrow implementaiton and would need fixing there instead of pandas\r\n\r\nThanks for the feedback! I agree the issue stems from PyArrow’s match_substring_regex not enforcing full-string matching. To ensure consistent Series.str.fullmatch behavior, I propose a pandas workaround: wrap patterns in ^...$ for PyArrow arrays (e.g., pat = f\"^{pat}$\") while keeping the | grouping logic. I’ll:\r\n\r\nImplement the workaround.\r\nAdd tests for foo|bar, empty strings, and re.IGNORECASE.\r\nDocument the PyArrow limitation in the docstring.\r\nFile a PyArrow bug report and link it here.\r\nI’ll update the PR shortly. Please let me know if this approach works or if you suggest an alternative",
"Thanks but I prefer not to add temporary workarounds in pandas until it's properly implemented in PyArrow. If interested in still working on this, I would suggest working on this on the Arrow repository so closing as an upstream issue"
] |
3,013,322,994 | 61,342 | BUG: Concatenating data frames with `MultiIndex` with `datetime64[ms]` dtype introduces `NaT` values to the index | closed | 2025-04-23T09:29:06 | 2025-04-24T20:18:53 | 2025-04-24T20:18:43 | https://github.com/pandas-dev/pandas/issues/61342 | true | null | null | shchur | 1 | ### Pandas version checks
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [ ] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
import pandas as pd
def resample_each_item(dtype) -> pd.DataFrame:
df = pd.DataFrame(
[
["A", "2023-01-15", 42],
["A", "2023-01-17", 33],
["B", "2023-02-20", 78],
["B", "2023-02-23", 91],
],
columns=["item_id", "timestamp", "target"],
)
df["timestamp"] = pd.to_datetime(df["timestamp"]).astype(dtype)
df = df.set_index(["item_id", "timestamp"])
resampled = []
for item_id in ["A", "B"]:
resampled.append(pd.concat({item_id: df.loc[item_id].resample("D", level="timestamp").mean()}))
return pd.concat(resampled)
print(resample_each_item("datetime64[ns]"))
# For datetime64[ns] all timestamps are valid
# target
# timestamp
# A 2023-01-15 42.0
# 2023-01-16 NaN
# 2023-01-17 33.0
# B 2023-02-20 78.0
# 2023-02-21 NaN
# 2023-02-22 NaN
# 2023-02-23 91.0
print(resample_each_item("datetime64[ms]"))
# For datetime64[ms] or datetime64[s] dtypes, NaT values are introduced
# target
# timestamp
# A 2023-01-15 42.0
# NaT NaN
# NaT 33.0
# B 2023-02-20 78.0
# NaT NaN
# NaT NaN
# NaT 91.0
```
### Issue Description
When concatenating data frames with `MultiIndex`, where one level is of type `datetime64[ms]` or `datetime64[s]`, some timestamps are replaced with `NaT`. If the timestamps are of dtype `datetime64[ns]`, no `NaT` values are introduced.
### Expected Behavior
No `NaT` values are introduced, regardless of whether the timestamp dtype is `datetime64[ms]`, `datetime64[s]` or `datetime64[ns]`.
### Installed Versions
<details>
```
INSTALLED VERSIONS
------------------
commit : 0691c5cf90477d3503834d983f69350f250a6ff7
python : 3.12.9
python-bits : 64
OS : Linux
OS-release : 6.1.132-147.221.amzn2023.x86_64
Version : #1 SMP PREEMPT_DYNAMIC Tue Apr 8 13:14:54 UTC 2025
machine : x86_64
processor : x86_64
byteorder : little
LC_ALL : None
LANG : C.UTF-8
LOCALE : C.UTF-8
pandas : 2.2.3
numpy : 1.26.4
pytz : 2025.2
dateutil : 2.9.0.post0
pip : 25.0.1
Cython : None
sphinx : None
IPython : 8.12.3
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : 4.13.3
blosc : None
bottleneck : None
dataframe-api-compat : None
fastparquet : None
fsspec : 2024.12.0
html5lib : None
hypothesis : None
gcsfs : None
jinja2 : 3.1.6
lxml.etree : None
matplotlib : 3.10.1
numba : 0.61.2
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
psycopg2 : None
pymysql : None
pyarrow : 19.0.1
pyreadstat : None
pytest : 8.3.5
python-calamine : None
pyxlsb : None
s3fs : None
scipy : 1.15.2
sqlalchemy : 2.0.40
tables : None
tabulate : 0.9.0
xarray : None
xlrd : None
xlsxwriter : None
zstandard : None
tzdata : 2025.2
qtpy : None
pyqt5 : None
```
</details>
| [
"Bug",
"Datetime",
"MultiIndex"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Thanks for the report. I'm seeing this is fixed on main. There are a number of improvements to datetimes coming in 3.0, so not surprising.\n\n```python\nprint(resample_each_item(\"datetime64[ms]\"))\n# target\n# timestamp \n# A 2023-01-15 42.0\n# 2023-01-16 NaN\n# 2023-01-17 33.0\n# B 2023-02-20 78.0\n# 2023-02-21 NaN\n# 2023-02-22 NaN\n# 2023-02-23 91.0\n```\n\nClosing."
] |
3,012,466,982 | 61,341 | DOC Update link to "The Grammar of Graphics" book | closed | 2025-04-23T01:32:17 | 2025-04-23T01:47:49 | 2025-04-23T01:47:42 | https://github.com/pandas-dev/pandas/pull/61341 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61341 | https://github.com/pandas-dev/pandas/pull/61341 | star1327p | 1 | Update link to "The Grammar of Graphics" book.
https://doi.org/10.1007/0-387-28695-0
Original link does not work:
https://www.cs.uic.edu/~wilkinson/TheGrammarOfGraphics/GOG.html
- [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| [
"Docs"
] | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Thanks @star1327p "
] |
3,012,434,545 | 61,340 | BUG: Fixed issue with bar plots not stacking correctly when 'stacked' and 'subplots' are used together | closed | 2025-04-23T00:58:23 | 2025-04-28T20:10:37 | 2025-04-28T20:10:28 | https://github.com/pandas-dev/pandas/pull/61340 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61340 | https://github.com/pandas-dev/pandas/pull/61340 | eicchen | 1 | - [x] closes #61018
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [x] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [x] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
Added a check for when `stacked` and `subplots` are used in conjunction for bar plots, along with logic dictating offsets for individual subplots to account for plots where non-concurrent columns are graphed.
Currently, this does not take into account column order in the subplot entry (e.g. (A, B) vs. (B, A)) when stacking | [
"Visualization"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Thanks @eicchen02 "
] |
3,011,459,524 | 61,339 | ENH: Add warning when `DataFrame.groupby` drops NA keys | open | 2025-04-22T15:40:59 | 2025-04-22T21:11:58 | null | https://github.com/pandas-dev/pandas/issues/61339 | true | null | null | tehunter | 2 | ### Feature Type
- [x] Adding new functionality to pandas
- [x] Changing existing functionality in pandas
- [ ] Removing existing functionality in pandas
### Problem Description
Currently, pandas `DataFrame.groupby` defaults to dropping NA values in any of the keys. In v1.1, a `dropna` argument was added that allows users to retain NA values, but its default is `True` (#3729). In that discussion, there were several requests to add a warning when NA values are dropped (https://github.com/pandas-dev/pandas/issues/3729#issuecomment-2494715257).
This issue raises that request to provide additional visibility and a single place for discussion.
### Feature Description
Add a warning to the user when a groupby contains NA keys and `dropna` is not explicitly passed. This warning should also be emitted by other aggregation functions that drop missing keys (e.g., `pivot`).
```
>>> df = pd.DataFrame({"key1": ["a", "a"], "key2": ["b", None], "value": [1, 2]})
>>> df.groupby(["key1", "key2"])["value"].sum()
MissingKeyWarning: `groupby` encountered missing keys which will be dropped from the result. Pass `dropna=True` to hide this warning and retain the default behavior, or `dropna=False` to include missing values.
key1 key2
a b 1
Name: value, dtype: int64
```
I think this is the best option as it warns the user in multiple scenarios:
1) User is unaware of pandas default behavior.
2) User is aware of pandas default behavior, but forgot to include the argument.
3) User is aware of pandas default behavior, but is unaware that their data contains missing values (prompting a bug fix or data quality check upstream).
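The silent dropping that the proposed warning would surface can be sketched with current pandas behavior:

```python
import pandas as pd

df = pd.DataFrame({"key1": ["a", "a"], "key2": ["b", None], "value": [1, 2]})

# Default (dropna=True): the row whose key2 is NA vanishes silently,
# leaving a single group ("a", "b").
dropped = df.groupby(["key1", "key2"])["value"].sum()
assert len(dropped) == 1

# dropna=False keeps the NA group, matching SQL/Polars semantics:
# two groups, ("a", "b") and ("a", NaN).
kept = df.groupby(["key1", "key2"], dropna=False)["value"].sum()
assert len(kept) == 2
```

Nothing in the default path tells the user that a row was discarded, which is the gap the warning is meant to close.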
### Alternative Solutions
Here are some other ideas for discussion, but I think the downsides of these all outweigh the benefits.
#### Alternative 1: Set default `dropna` value to be user-configurable via pandas settings
This would allow the user to decide if they prefer "SQL-style" grouping globally. This could work in conjunction with the user warning above. Cons: Still requires user to remember to specify the option in their code. Options would affect the results, which complicates debugging and collaboration and goes against good code guidelines.
#### Alternative 2: Change the default value of `dropna`
This would bring pandas in line with SQL and Polars, but would likely break user code. This doesn't preclude the warning above, as it would be required as part of a deprecation plan. Cons: Would need to be rolled out very slowly.
#### Alternative 3: Change the default value of `dropna` for multi-key groupings only.
Assumes users doing multi-key grouping are more likely to want to retain missing values. Cons: Would add confusion and still break user code.
### Additional Context
This has been a known source of confusion and a difference from SQL and Polars (See [1](https://stackoverflow.com/questions/18429491/pandas-groupby-columns-with-nan-missing-values) [2](https://github.com/pola-rs/polars/issues/11030#issuecomment-1712964207) [3](https://pbpython.com/groupby-warning.html)).
Even for experienced Pandas users, it's easy to forget to add `dropna=False` or not realize there are missing values in your grouping keys. With the current behavior, we're adding an additional mental overhead on developers, increasing the learning curve (especially coming from SQL), and introducing a source of bugs. | [
"Enhancement",
"Groupby",
"Missing-data",
"Needs Discussion"
] | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Thanks for the request. I'm negative on:\n\n - Adding a warning when `dropna=True`. This is noisy, as far as pandas can tell the user is telling pandas to drop NA values, it should not warn when that happens.\n - Adding a global underride for `dropna`. That makes the behavior of pandas non-local: you can not look at a piece of code and know what it does as it depend on the global state. \n - Having the default value of `dropna` depend on other argument or the data itself. \n\nHowever I'm positive on changing the default of `dropna` to False, and then even deprecating the parameter entirely. I am planning to start the deprecation after 3.0 is released (as long as there are no objections from the core team).\n\nRelated: https://github.com/pandas-dev/pandas/pull/53094. This PDEP is stalled currently because there are many improvements that need to be done to `pivot_table` first.",
"Ah, I did not see \n\n> when a groupby contains NA keys _and `dropna` is not explicitly passed_.\n\nWhile I'd be positive on this, I think we should just deprecate `dropna=True` as the default. This will also give a warning when `dropna` is not passed and the groupby keys contain an NA value, so it's much the same."
] |
3,011,329,646 | 61,338 | BUG: Period datatype data gets mangled up in pivoting operation | closed | 2025-04-22T14:49:35 | 2025-04-22T16:01:10 | 2025-04-22T16:01:08 | https://github.com/pandas-dev/pandas/issues/61338 | true | null | null | RobertasA | 1 | ### Pandas version checks
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [ ] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
import pandas as pd
df = pd.DataFrame({
"id1": [1, 2],
"id2": [10, 20],
"id3": [100, 200],
"period":[pd.Period("2021-01"), pd.Period("2021-03")]
}).set_index(['id1','id2'])
result = df.unstack().stack(future_stack=True)
#fails - unexpected
assert (result.loc[df.index]==df).all().all()
```
### Issue Description
The data in the "period" column gets mangled up, the value associated with the first record shows up twice and the value of the second record disappears.
The problem appears with both `future_stack=True` and `future_stack=False`.
The problem does not appear when stacking a "period" series, only when stacking a dataframe (so, following unstack(), the columns are a MultiIndex).
### Expected Behavior
It is expected that `df.unstack().stack()` would return the original records unchanged.
Changing period dtype to 'str' behaves as expected:
```python
import pandas as pd
df = pd.DataFrame({
"id1": [1, 2],
"id2": [10, 20],
"id3": [100, 200],
"period":[pd.Period("2021-01"), pd.Period("2021-03")]
}).set_index(['id1','id2'])
#succeeds - expected
df2 = df.astype({"period": "str"})
result = df2.unstack().stack(future_stack=True)
assert (result.loc[df2.index]==df2).all().all()
```
### Installed Versions
<details>
INSTALLED VERSIONS
------------------
commit : 0691c5cf90477d3503834d983f69350f250a6ff7
python : 3.11.11
python-bits : 64
OS : Linux
OS-release : 5.10.226-214.880.amzn2.x86_64
Version : #1 SMP Tue Oct 8 16:18:15 UTC 2024
machine : x86_64
processor : x86_64
byteorder : little
LC_ALL : en_US.UTF-8
LANG : en_US.UTF-8
LOCALE : en_US.UTF-8
pandas : 2.2.3
numpy : 1.26.4
pytz : 2025.1
dateutil : 2.9.0.post0
pip : 25.0.1
Cython : None
sphinx : None
IPython : 9.0.2
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : 4.13.3
blosc : None
bottleneck : None
dataframe-api-compat : None
fastparquet : None
fsspec : 2025.3.0
html5lib : None
hypothesis : None
gcsfs : None
jinja2 : None
lxml.etree : None
matplotlib : 3.10.1
numba : None
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
psycopg2 : None
pymysql : None
pyarrow : 19.0.1
pyreadstat : None
pytest : None
python-calamine : None
pyxlsb : None
s3fs : 2025.3.0
scipy : 1.15.2
sqlalchemy : 2.0.39
tables : None
tabulate : 0.9.0
xarray : 2025.1.2
xlrd : None
xlsxwriter : None
zstandard : None
tzdata : 2025.1
qtpy : None
pyqt5 : None
</details>
| [
"Bug",
"Needs Triage"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"After further investigation I can see that the bug is actually in unstack and is duplicate of https://github.com/pandas-dev/pandas/issues/60980, already fixed in `main` but not released yet."
] |
3,010,682,898 | 61,337 | BUG: DataFrame.to_markdown exception when a cell has numpy.array type | open | 2025-04-22T10:40:03 | 2025-04-25T18:54:21 | null | https://github.com/pandas-dev/pandas/issues/61337 | true | null | null | omarsaad98 | 6 | ### Pandas version checks
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [x] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
import pandas as pd
import numpy as np
results = pd.DataFrame({"col1": [np.array(["hello","world"], dtype=object), "world"]})
results.to_markdown()
```
### Issue Description
When a dataframe contains a cell of type numpy.array, the to_markdown function will fail. The exact issue is that numpy arrays behave differently from any other value under equality comparison, but the reason I think **this is a pandas issue** is that an assumption is made about the value:
```python
def _is_separating_line(row):
row_type = type(row)
is_sl = (row_type == list or row_type == str) and (
(len(row) >= 1 and row[0] == SEPARATING_LINE) # <- compares row[0] to a string
or (len(row) >= 2 and row[1] == SEPARATING_LINE)
)
return is_sl
```
Anything that isn't a string will normally result in this comparison resolving to `False`, but this **should be explicit** to avoid strange datatypes causing undefined behavior.
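The failure mode can be shown in isolation (a minimal sketch; `SEP` below is a placeholder standing in for tabulate's `SEPARATING_LINE` sentinel, not its actual value):

```python
import numpy as np

SEP = "sentinel"  # placeholder for tabulate's SEPARATING_LINE string
row = [np.array(["hello", "world"], dtype=object), "world"]

# For ordinary values `row[0] == SEP` yields a plain bool, but for a numpy
# array it broadcasts to an elementwise boolean array...
cmp = row[0] == SEP
assert isinstance(cmp, np.ndarray)

# ...whose truth value is ambiguous, which is exactly the ValueError seen
# in the traceback.
try:
    bool(cmp)
except ValueError:
    pass
else:
    raise AssertionError("expected ValueError")
```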
Stack trace:
```
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
Cell In[69], line 3
1 import numpy as np
2 results = pd.DataFrame({"col1": [np.array(["hello","world"], dtype=object), "world"]})
----> 3 results.to_markdown()
File [...]\.venv\Lib\site-packages\pandas\util\_decorators.py:333, in deprecate_nonkeyword_arguments.<locals>.decorate.<locals>.wrapper(*args, **kwargs)
327 if len(args) > num_allow_args:
328 warnings.warn(
329 msg.format(arguments=_format_argument_list(allow_args)),
330 FutureWarning,
331 stacklevel=find_stack_level(),
332 )
--> 333 return func(*args, **kwargs)
File [...]\.venv\Lib\site-packages\pandas\core\frame.py:2984, in DataFrame.to_markdown(self, buf, mode, index, storage_options, **kwargs)
2982 kwargs.setdefault("showindex", index)
2983 tabulate = import_optional_dependency("tabulate")
-> 2984 result = tabulate.tabulate(self, **kwargs)
2985 if buf is None:
2986 return result
File [...]\.venv\Lib\site-packages\tabulate\__init__.py:2048, in tabulate(tabular_data, headers, tablefmt, floatfmt, intfmt, numalign, stralign, missingval, showindex, disable_numparse, colalign, maxcolwidths, rowalign, maxheadercolwidths)
2045 if tabular_data is None:
2046 tabular_data = []
-> 2048 list_of_lists, headers = _normalize_tabular_data(
2049 tabular_data, headers, showindex=showindex
2050 )
2051 list_of_lists, separating_lines = _remove_separating_lines(list_of_lists)
2053 if maxcolwidths is not None:
File [...]\.venv\Lib\site-packages\tabulate\__init__.py:1471, in _normalize_tabular_data(tabular_data, headers, showindex)
1469 headers = list(map(str, headers))
1470 # rows = list(map(list, rows))
-> 1471 rows = list(map(lambda r: r if _is_separating_line(r) else list(r), rows))
1473 # add or remove an index column
1474 showindex_is_a_str = type(showindex) in [str, bytes]
File [...]\.venv\Lib\site-packages\tabulate\__init__.py:1471, in _normalize_tabular_data.<locals>.<lambda>(r)
1469 headers = list(map(str, headers))
1470 # rows = list(map(list, rows))
-> 1471 rows = list(map(lambda r: r if _is_separating_line(r) else list(r), rows))
1473 # add or remove an index column
1474 showindex_is_a_str = type(showindex) in [str, bytes]
File [...]\.venv\Lib\site-packages\tabulate\__init__.py:107, in _is_separating_line(row)
104 def _is_separating_line(row):
105 row_type = type(row)
106 is_sl = (row_type == list or row_type == str) and (
--> 107 (len(row) >= 1 and row[0] == SEPARATING_LINE)
108 or (len(row) >= 2 and row[1] == SEPARATING_LINE)
109 )
110 return is_sl
ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()
```
### Expected Behavior
to_markdown should convert this numpy array to a string and output markdown normally. Instead, an exception is raised.
### Installed Versions
<details>
INSTALLED VERSIONS
------------------
commit : 0691c5cf90477d3503834d983f69350f250a6ff7
python : 3.13.1
python-bits : 64
OS : Windows
OS-release : 11
Version : 10.0.26100
machine : AMD64
processor : Intel64 Family 6 Model 140 Stepping 1, GenuineIntel
byteorder : little
LC_ALL : None
LANG : None
LOCALE : English_United States.1252
pandas : 2.2.3
numpy : 2.2.4
pytz : 2025.2
dateutil : 2.9.0.post0
pip : 25.0.1
Cython : None
sphinx : None
IPython : 9.0.2
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : 4.13.3
blosc : None
bottleneck : None
dataframe-api-compat : None
fastparquet : None
fsspec : None
html5lib : None
hypothesis : None
gcsfs : None
jinja2 : 3.1.6
lxml.etree : None
matplotlib : 3.10.1
numba : None
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
psycopg2 : None
pymysql : None
pyarrow : 19.0.1
pyreadstat : None
pytest : None
python-calamine : None
pyxlsb : None
s3fs : None
scipy : None
sqlalchemy : None
tables : None
tabulate : 0.9.0
xarray : None
xlrd : None
xlsxwriter : None
zstandard : None
tzdata : 2025.2
qtpy : None
pyqt5 : None
</details>
| [
"Bug",
"IO Data",
"Needs Discussion",
"Dependencies",
"Nested Data"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
    "Thanks for the report, confirmed on main. In general I think you will find little support in pandas for nested objects, and I do not think pandas should necessarily support such things in all operations. But if the fix here is simple, I'm positive on it. Further investigations and (simple :smile:) PRs to fix are welcome!",
    "take",
    "looks like a duplicate of #59588\n\n> I think this is a pandas issue\n\n@mroeschke wrote https://github.com/pandas-dev/pandas/issues/59588#issuecomment-2308946866\n\n> Seems like something pandas shouldn't work around so closing\n\nissue opened at https://github.com/astanin/python-tabulate/issues/339",
    "> looks like a duplicate of #59588\n\nYou're right. Actually I just noticed the specific code snippet I referenced was from tabulate, not pandas. This is a tabulate issue",
    "> This is a tabulate issue\n\nThat may be true, but unfortunately it appears that [python-tabulate](https://github.com/astanin/python-tabulate) may no longer be actively maintained. As pandas relies on this library, it may be that we need to vendor it in the future.",
"Thanks @simonjayhawkins. I'd be good with finding an alternative to tabulate, but somewhat against vendoring and having to maintain the code."
] |
3,010,189,907 | 61,336 | ENH: IDEA Introduce axis→0/axis→1 arrow aliases to disambiguate direction vs. label operations | closed | 2025-04-22T07:31:10 | 2025-05-18T06:21:52 | 2025-04-23T16:19:48 | https://github.com/pandas-dev/pandas/issues/61336 | true | null | null | withlionbuddha | 5 | ### Feature Type
- [x] Adding new functionality to pandas
- [ ] Changing existing functionality in pandas
- [ ] Removing existing functionality in pandas
### Problem Description
The `axis` parameter currently serves two distinct purposes:
1. *Along‑axis* operations that reduce or transform values (e.g. `apply`, `sum`)
2. *Label‑targeting* operations that modify or drop index / column labels (e.g. `drop`, `rename`)
Because both use the same syntax (`axis=0` or `axis=1`), many users mis‑interpret which dimension is affected.
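The two existing meanings of `axis` can be seen side by side (a minimal sketch using today's syntax):

```python
import pandas as pd

df = pd.DataFrame({"x": [1, 2], "y": [3, 4]}, index=["a", "b"])

# Along-axis: axis=0 reduces *down* the index, producing one value per column.
sums = df.sum(axis=0)
assert sums.tolist() == [3, 7]

# Label-targeting: the same axis=0 instead drops the *index label* "a".
dropped = df.drop("a", axis=0)
assert list(dropped.index) == ["b"]
```

The identical `axis=0` spelling for both operations is the source of the confusion this proposal tries to address.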
### Feature Description
Proposed API
Keep existing syntax and add an **arrow alias** that makes the “direction” explicit:
| Syntax | Meaning |
|--------|---------|
| `axis=0` *(unchanged)* | target **index labels** (delete / rename) |
| `axis=1` *(unchanged)* | target **column labels** |
| `axis→0` *(new)* | operate **along index** – treat each **column vector** |
| `axis→1` *(new)* | operate **along columns** – treat each **row vector** |
Arrow aliases are optional; existing code keeps working unchanged.
### Alternative Solutions
| Alias idea | Interpretation |
|------------|----------------|
| **`axis→0`** | operate **along index** (column‑wise) |
| **`axis→1`** | operate **along columns** (row‑wise) |
**Benefits**
* Greatly reduces beginner confusion around `axis`.
* Preserves full NumPy compatibility.
* Requires minimal code changes (add alias mapping in `axis_aliases`).
### Additional Context
See repeated questions on Stack Overflow:<br>
<https://stackoverflow.com/q/26716616>. | [
"Enhancement",
"Needs Discussion",
"Closing Candidate"
] | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | [
    "Speaking as a research developer who relies on pandas daily:\n\nMost “axis confusion” cases we see in internal code reviews stem from the dual meaning of `axis=0 / 1`.\nA symbolic alias such as `axis→0 / axis→1` instantly separates **direction** from **label**, benefitting not only experienced users but also **students who are just learning pandas for the first time**.\nThe change is fully backward‑compatible, so it poses no risk to existing codebases.\n\n+1 to this proposal — happy to help with docs or a small PR if the core team agrees.",
"Thanks, it's an interesting thought but this does not seem implementable as it would not be valid python.\n\n\n```python\nIn [21]: def f(axis->1):\n ...: pass\n Cell In[21], line 1\n def f(axis->1):\n ^\nSyntaxError: invalid syntax\n```\n",
"I am also negative on any non-ASCII functions / arguments in pandas.",
"Agreed with the response so far. Additionally I am not fond of having multiple ways to specify things.\n\nSince there's not a lot of positive reception for this feature. Closing",
"I’ve realized that modifying the syntax to distinguish the two meanings of axis=0, along-axis and label-targeting operations, would require changes to the Python language itself."
] |
3,009,689,867 | 61,335 | ENH/TST: unset_index method #60869 | closed | 2025-04-22T02:34:40 | 2025-04-22T15:59:38 | 2025-04-22T15:59:38 | https://github.com/pandas-dev/pandas/pull/61335 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61335 | https://github.com/pandas-dev/pandas/pull/61335 | HoqueUM | 1 | - [x] closes #60869 (Replace xxxx with the GitHub issue number)
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [x] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [x] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| [] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Thanks for the PR, but the linked issue has not been triaged yet and therefore a PR would be too early. Closing this PR, but I would suggest tackling issue that do not have the `needs triage` or `needs discussion` labels."
] |
3,009,494,674 | 61,334 | DOC: Updated `groupby.ewm` arguments | closed | 2025-04-21T23:35:26 | 2025-04-23T18:37:47 | 2025-04-22T15:56:52 | https://github.com/pandas-dev/pandas/pull/61334 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61334 | https://github.com/pandas-dev/pandas/pull/61334 | arthurlw | 1 | - [ ] ~closes #xxxx (Replace xxxx with the GitHub issue number)~
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] ~Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.~
- [ ] ~Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.~
| [
"Groupby",
"Window"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Thanks @arthurlw "
] |
3,009,380,832 | 61,333 | CI: Have dedicated Python 3.13 job instead of using Python dev | closed | 2025-04-21T21:58:26 | 2025-04-30T16:09:44 | 2025-04-30T16:09:41 | https://github.com/pandas-dev/pandas/pull/61333 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61333 | https://github.com/pandas-dev/pandas/pull/61333 | mroeschke | 1 | We've been testing Python 3.13 using the `Python dev` job. Since Python 3.13 has been available since last October, we should be able to test this version with a dedicated job with all of our optional dependencies.
Additionally, this new job caught an unclosed `sqlite3` engine, so modified some `test_sql.py` tests (and removed unnecessary parametrizations) | [
"CI",
"Python 3.13"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Going to merge since green. Happy to follow up if needed"
] |
3,009,364,139 | 61,332 | CLN: Use newer numpy random Generator methods in plotting colors | closed | 2025-04-21T21:45:31 | 2025-04-30T16:08:51 | 2025-04-30T16:08:48 | https://github.com/pandas-dev/pandas/pull/61332 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61332 | https://github.com/pandas-dev/pandas/pull/61332 | mroeschke | 1 | Noticed in https://github.com/pandas-dev/pandas/pull/61330, replace a numpy legacy random method with a newer random Generator method (in addition to a cleanup) | [
"Visualization",
"Clean"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Going to merge since green. Happy to follow up if needed"
] |
3,008,950,460 | 61,331 | DEPS: Clean unused dependencies | closed | 2025-04-21T17:49:35 | 2025-05-08T16:24:30 | 2025-05-08T16:24:26 | https://github.com/pandas-dev/pandas/pull/61331 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61331 | https://github.com/pandas-dev/pandas/pull/61331 | mroeschke | 2 | * When installing pytables, it appears `blosc2` is generally a dependency now https://github.com/PyTables/PyTables/blob/9afcc380d93192460e7badd1baf568592cadad26/pyproject.toml#L78
* I don't think we use `google-auth` or `gitdb` anywhere now | [
"Build",
"Dependencies"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Looks good, I guess xarray was pinned to not allow the latest versions because of the test failures in the CI.\r\n\r\n@shoyer @FrancescAlted if you have any comment here...",
"Since this PR is green, going to merge. Happy to follow up if needed"
] |
3,008,889,780 | 61,330 | TYP: Remove unused mypy ignores | closed | 2025-04-21T17:15:37 | 2025-04-21T20:47:15 | 2025-04-21T20:47:12 | https://github.com/pandas-dev/pandas/pull/61330 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61330 | https://github.com/pandas-dev/pandas/pull/61330 | mroeschke | 1 | Failing on main. Maybe unneeded due to a numpy update. | [
"Typing"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Seems like the timeouts are unrelated here, so going to merge to get to green"
] |
3,008,586,493 | 61,329 | Remove WillAyd from CODEOWNERS | closed | 2025-04-21T14:48:02 | 2025-04-21T15:46:38 | 2025-04-21T15:45:50 | https://github.com/pandas-dev/pandas/pull/61329 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61329 | https://github.com/pandas-dev/pandas/pull/61329 | WillAyd | 1 | This was well intentioned but I do not follow every change to _libs, and this ends up creating more notifications than I follow | [
"Admin"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Thanks @WillAyd "
] |
3,008,583,143 | 61,328 | BUG: pre-commit version 4.0.0 is required | open | 2025-04-21T14:46:27 | 2025-06-29T15:59:08 | null | https://github.com/pandas-dev/pandas/issues/61328 | true | null | null | WillAyd | 12 | ### Pandas version checks
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [x] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
Try to run pre-commit on main
```
### Issue Description
It looks like the minimum pre-commit version was bumped to 4.0.0 in https://github.com/pandas-dev/pandas/pull/61246
However, the version that gets installed by apt on Ubuntu 24.04 is only 3.6.2, so you would have to mess with the system provided version to develop on pandas, or set up a more complicated environment
### Expected Behavior
The pre-commit minimum version pin should probably be a little looser @mroeschke
### Installed Versions
main
| [
"Needs Discussion"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"We don't really advertise/support creating development environments from apt packages so I'm a little hesitant wanting to support that use case. Is there a reason you need to use pre-commit from apt?",
"Do you think it should be pip installed by the system installed Python instead? On Debian you are discouraged from doing so:\n\n```\npip install pre-commit\nerror: externally-managed-environment\n\n× This environment is externally managed\n╰─> To install Python packages system-wide, try apt install\n python3-xyz, where xyz is the package you are trying to\n install.\n \n If you wish to install a non-Debian-packaged Python package,\n create a virtual environment using python3 -m venv path/to/venv.\n Then use path/to/venv/bin/python and path/to/venv/bin/pip. Make\n sure you have python3-full installed.\n \n If you wish to install a non-Debian packaged Python application,\n it may be easiest to use pipx install xyz, which will manage a\n virtual environment for you. Make sure you have pipx installed.\n \n See /usr/share/doc/python3.12/README.venv for more information.\n\nnote: If you believe this is a mistake, please contact your Python installation or OS distribution provider. You can override this, at the risk of breaking your Python installation or OS, by passing --break-system-packages.\nhint: See PEP 668 for the detailed specification.\n```\n\nFor sure can go through the alternatives listed above, but it would be nicer to avoid. Is there a particular feature we need the 4.0.0 min for?",
"> Do you think it should be pip installed by the system installed Python instead?\n\nWe do also encourage development environments to be in a virtual environment, as describe in that traceback.\n\n> Is there a particular feature we need the 4.0.0 min for?\n\nNot in particular, but I am also just not very fond on being bound to limitations of the apt ecosystem because in my experience Python library authors are usually not the ones releasing/maintaining their apt equivalents.",
"> We do also encourage development environments to be in a virtual environment, as describe in that traceback.\n\nJust to clarify the use case, I think this mostly will affect IDEs and editors that are not attached to a given virtual environment. In my use case, I have my IDE set up to the system pre-commit, so that I don't need to putz around with every single project to make a commit, and can work multiple projects from a single session. \n\npre-commit will create its own virtual environment separate from the virtual environment you use for development anyway, so it doesnt affect the functionality of the checks by using a system installed package",
"Yeah your convenience use case is understandable, but would still be -0 to change (but won't die on that hill) due to apt not being in lockstep with pypi/conda generally.",
"> In my use case, I have my IDE set up to the system pre-commit, so that I don't need to putz around with every single project to make a commit\n\nI'm a little confused here. Are you not using a virtual environment when working on pandas, and if you are, are you needing to do anything more than `pip install pre-commit` in that virtual environment?",
"I use a virtual environment for working on pandas, but for committing files to git and running git hooks I just use my IDE directly, which doesn't need to be launched from or activate the virtual environment",
"I see - thanks. Though I do agree with @mroeschke - I don't think we should be holding back dev tools because a developer wants to do dev-related activities with the system's installation of Python.",
"I suppose it depends how you look at it. I don't consider anything from my setup to be using system Python for development. I am just using system libraries to commit to the VCS (namely git and its pre-commit ecosystem, within which there is a popular Python package of the same name)",
"Ah, indeed - apologies. I'd be opposed to the policy of holding back dev tools for any contributor, but I think perhaps core-devs deserve more consideration. If there is a feature we want to use or bug we're impacted by, then I'd also be opposed. But barring that, I'm okay to hold pre-commit back.",
"I am also using a (user-)global pre-commit installation for simplicity. However, you can just remove the older version from apt and install pre-commit via `uv tool install pre-commit` instead. I do the same for `ruff`. Solves all problems.\n\nGlobal installs of pre-commit and tools like ruff are useful especially when working in multiple different projects of varying quality.",
"Seems like reasonable comments all around. What I don't see addressed is \"how viable is reverting the pin to >=3.6.2?\" Any ideas?"
] |
3,008,507,297 | 61,327 | ENH/TST: grep-like select columns of a DataFrame by a part of their names (fixes #61319) | closed | 2025-04-21T14:13:39 | 2025-04-21T14:50:04 | 2025-04-21T14:50:04 | https://github.com/pandas-dev/pandas/pull/61327 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61327 | https://github.com/pandas-dev/pandas/pull/61327 | HoqueUM | 0 | - [ ] closes #61319
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| [] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] |
3,008,415,286 | 61,326 | ENH/TST: grep-like select columns of a DataFrame by a part of their names (fixes #61319) | closed | 2025-04-21T13:26:21 | 2025-04-21T13:29:54 | 2025-04-21T13:29:53 | https://github.com/pandas-dev/pandas/pull/61326 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61326 | https://github.com/pandas-dev/pandas/pull/61326 | HoqueUM | 0 | - [ ] closes #61319
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| [] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] |
3,008,414,891 | 61,325 | add test case for mixed string and int | closed | 2025-04-21T13:26:08 | 2025-05-09T16:08:27 | 2025-05-09T16:08:27 | https://github.com/pandas-dev/pandas/pull/61325 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61325 | https://github.com/pandas-dev/pandas/pull/61325 | spd123 | 1 | - [ ] closes #54072
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| [] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Thanks for the pull request, but it appears to have gone stale. If interested in continuing, please merge in the main branch, address any review comments and/or failing tests, and we can reopen."
] |
3,008,401,200 | 61,324 | ENH/TST: grep-like select columns of a DataFrame by a part of their names (fixes #61319) | closed | 2025-04-21T13:18:23 | 2025-04-21T13:24:23 | 2025-04-21T13:24:23 | https://github.com/pandas-dev/pandas/pull/61324 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61324 | https://github.com/pandas-dev/pandas/pull/61324 | HoqueUM | 0 | - [x] closes #61319
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [x] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [x] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| [] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] |
3,007,590,629 | 61,323 | BUG: to_dict(orient='dict') does not convert np.nan to None in Pandas 2.2.3 | open | 2025-04-21T05:21:40 | 2025-04-24T20:16:13 | null | https://github.com/pandas-dev/pandas/issues/61323 | true | null | null | uyauhkk01 | 1 | ### Pandas version checks
- [ ] I have checked that this issue has not already been reported.
- [ ] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [x] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
import pandas as pd
import numpy as np
df = pd.DataFrame({
'A': [1, np.nan, 3],
'B': ['x', 'y', np.nan]
})
for k,v in df.to_dict()['A'].items():
print(f"v={v},type(v)={type(v)}")
print("pd-v:",pd.__version__)
print( df.to_dict()['A'][1] is None)
print( df.to_dict()['A'][1] is np.nan)
print(df.to_dict()['A'][1])
print(type(df.to_dict()['A'][1]))
print(np.isnan(df.to_dict()['A'][1]))
#--------------
# v=1.0,type(v)=<class 'float'>
# v=nan,type(v)=<class 'float'>
# v=3.0,type(v)=<class 'float'>
# pd-v: 2.2.3
# False
# False
# nan
# <class 'float'>
# True
```
### Issue Description
BUG: to_dict(orient='dict') does not convert np.nan to None in Pandas 2.2.3
### Expected Behavior
NaN --> None
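For anyone needing this today, a commonly used workaround is to cast to object dtype and substitute `None` explicitly before calling `to_dict()` (a sketch; the object cast keeps `None` from being coerced back to NaN):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"A": [1, np.nan, 3]})

# Object dtype preserves None; where() swaps NaN for None
clean = df.astype(object).where(df.notna(), None).to_dict()
print(clean["A"][1] is None)  # True
```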
### Installed Versions
<details>
Replace this line with the output of pd.show_versions()
</details>
| [
"Bug",
"IO Data",
"Needs Info"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Thanks for the report. Why would you expect pandas to convert `np.nan` to `None` here?"
] |
3,007,155,383 | 61,322 | BUG: memory issues with `string[pyarrow]` after sorted `pd.merge` | open | 2025-04-20T18:16:32 | 2025-04-22T21:35:01 | null | https://github.com/pandas-dev/pandas/issues/61322 | true | null | null | noahblakesmith | 5 | ### Pandas version checks
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [x] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
import random
import string
import pandas as pd
import pyarrow as pa
# Gen random data ----------------------------------------------------------------------
random.seed(42)
txt = "".join(random.choices(string.printable, k=int(1e4)))
num = random.choices(range(int(1e6)), k=int(125e3))
# Gen dataframes -----------------------------------------------------------------------
a = pd.Series(num, dtype="Int64")
b = pd.Series([txt] * int(125e3), dtype="string[pyarrow]")
lhs = pd.DataFrame({"a": a, "b": b})
# Concatenation is necessary to reproduce bug (not sure why)
lhs = pd.concat([lhs, lhs], ignore_index=True, verify_integrity=True)
rhs = pd.DataFrame({"a": a}).drop_duplicates()
# Merge with and without sorting -------------------------------------------------------
df_nosort = pd.merge(left=lhs, right=rhs, on="a", sort=False)
print(df_nosort.memory_usage(deep=True))
df_sort = pd.merge(left=lhs, right=rhs, on="a", sort=True)
print(df_sort.memory_usage(deep=True))
# `b` cols are equal despite memory usage difference
print(df_nosort["b"].equals(df_sort["b"]))
# Write to parquet files ---------------------------------------------------------------
schema = pa.schema([pa.field("a", pa.int64()), pa.field("b", pa.string())])
df_nosort.to_parquet("df_nosort.parquet", schema=schema)
df_sort.to_parquet("df_sort.parquet", schema=schema)
```
### Issue Description
Issues only occur when series `b` has `dtype` of `string[pyarrow]` (not `string[python]`).
1. `.to_parquet` fails for `df_sort` but succeeds for `df_nosort`.
2. The memory usage of `b` is greater in `df_sort` than in `df_nosort`.
3. Despite differences in memory usage, `df_sort["b"]` is equal to `df_nosort["b"]`.
### Expected Behavior
1. I would expect `.to_parquet` to succeed for both dataframes.
2. I would expect `df_sort["b"]` to have the same memory usage as `df_nosort["b"]`. (I should note, however, that I lack a sophisticated understanding of memory management, so I may be mistaken.)
3. I would expect `df_nosort["b"].equals(df_sort["b"])` to return `False` if the series differ in memory usage. (Same caveat applies.)
### Installed Versions
<details>
INSTALLED VERSIONS
------------------
commit : 0691c5cf90477d3503834d983f69350f250a6ff7
python : 3.10.16
python-bits : 64
OS : Darwin
OS-release : 24.4.0
Version : Darwin Kernel Version 24.4.0: Wed Mar 19 21:16:34 PDT 2025; root:xnu-11417.101.15~1/RELEASE_ARM64_T6000
machine : arm64
processor : arm
byteorder : little
LC_ALL : None
LANG : en_US.UTF-8
LOCALE : en_US.UTF-8
pandas : 2.2.3
numpy : 2.2.2
pytz : 2025.2
dateutil : 2.9.0.post0
pip : 25.0
Cython : None
sphinx : None
IPython : 8.35.0
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : 4.13.3
blosc : None
bottleneck : None
dataframe-api-compat : None
fastparquet : None
fsspec : 2025.3.2
html5lib : None
hypothesis : None
gcsfs : None
jinja2 : 3.1.6
lxml.etree : 5.3.1
matplotlib : None
numba : None
numexpr : None
odfpy : None
openpyxl : 3.1.5
pandas_gbq : None
psycopg2 : None
pymysql : None
pyarrow : 19.0.1
pyreadstat : None
pytest : None
python-calamine : None
pyxlsb : None
s3fs : None
scipy : None
sqlalchemy : 2.0.39
tables : None
tabulate : None
xarray : None
xlrd : 2.0.1
xlsxwriter : None
zstandard : None
tzdata : 2025.2
qtpy : None
pyqt5 : None
</details>
| [
"Bug",
"Closing Candidate",
"Upstream issue",
"Arrow"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"take",
"take",
"take",
"take",
"I'm seeing similar behavior as the other issue: https://github.com/pandas-dev/pandas/issues/61316#issuecomment-2822528900"
] |
3,007,129,581 | 61,321 | Fix: AttributeError when using .iloc with pyarrow-backed Series in Pandas #61311 | closed | 2025-04-20T17:15:13 | 2025-07-08T18:32:42 | 2025-05-09T16:07:23 | https://github.com/pandas-dev/pandas/pull/61321 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61321 | https://github.com/pandas-dev/pandas/pull/61321 | Shyanil | 1 | fix : #61311
### Solution:
The problem arises when the code tries to access `.max()` and `.min()` on `ArrowExtensionArray`. To fix this, we can replace `.max()` and `.min()` with `np.max()` and `np.min()`, which can handle these arrays correctly.
Here is the modification to be made:
```python
# check that the key does not exceed the maximum size of the index
if np.max(arr) >= len_axis or np.min(arr) < -len_axis:
raise IndexError("positional indexers are out-of-bounds")
```
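To illustrate why the free functions are safer here, a minimal sketch (hypothetical values; `np.max`/`np.min` accept any array-like, including objects that lack `.max()`/`.min()` methods):

```python
import numpy as np

# A plain list stands in for an indexer without .max()/.min() methods
arr = [1, -2, 3]
len_axis = 5

# arr.max() would raise AttributeError; np.max/np.min work on any array-like
out_of_bounds = np.max(arr) >= len_axis or np.min(arr) < -len_axis
print(out_of_bounds)  # False: all positions are within bounds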
--- | [] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Thanks for the pull request, but it appears to have gone stale. If interested in continuing, please merge in the main branch, address any review comments and/or failing tests, and we can reopen."
] |
3,007,048,652 | 61,320 | PERF: Restore old performances with .isin() on columns typed as np.ui… | closed | 2025-04-20T14:24:36 | 2025-05-19T16:14:54 | 2025-05-19T16:14:47 | https://github.com/pandas-dev/pandas/pull/61320 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61320 | https://github.com/pandas-dev/pandas/pull/61320 | pbrochart | 3 | …nt64
- [ ] closes #60098
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
Only if dtypes are equal (e.g. uint64 vs uint64, uint32 vs uint32...)
%timeit data["uints"].isin([np.uint64(1), np.uint64(2)]) # 17ms (!)
The last line, with older numpy==1.26.4 (last version <2.0), is even worse: ~200ms. | [
"Performance",
"Regression",
"isin"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"pre-commit.ci autofix",
"Implicit conversion to float64 happens only with uint64/int64.\r\nI reverted the PR #46693 to provide an example based on initial issue #46485:\r\n\r\n```\r\nimport pandas as pd\r\nimport numpy as np\r\ntest_df = pd.DataFrame([{'a': 1378774140726870442}], dtype=np.uint64)\r\n\r\nprint(1378774140726870442 == 1378774140726870528) \r\n#False\r\n\r\nprint(test_df['a'].isin([1378774140726870528])[0])\r\n#True\r\n\r\nprint(test_df['a'].isin([1])[0])\r\n#False\r\n```\r\n\r\nThe second test must be False and was handled by the PR #46693\r\nbecause there is implicit conversion to float64.\r\nBut if we change it to:\r\n\r\n```\r\nprint(test_df['a'].isin([np.uint64(1378774140726870528)])[0])\r\n#False\r\n```\r\n\r\nThe result is correct because in this case there is no implicit conversion, so it's not necessary to use object.\r\nRegarding the performance, it partially resolves issue #60098:\r\n\r\nBefore:\r\n\r\n```\r\nimport pandas as pd, numpy as np\r\ndata = pd.DataFrame({\r\n    \"uints\": np.random.randint(10000, size=300000, dtype=np.uint64),\r\n    \"ints\": np.random.randint(10000, size=300000, dtype=np.int64),\r\n})\r\n\r\n%timeit data[\"uints\"].isin([np.uint64(1), np.uint64(2)]) # 239ms\r\n```\r\n\r\nAfter:\r\n```\r\nimport pandas as pd, numpy as np\r\ndata = pd.DataFrame({\r\n    \"uints\": np.random.randint(10000, size=300000, dtype=np.uint64),\r\n    \"ints\": np.random.randint(10000, size=300000, dtype=np.int64),\r\n})\r\n\r\n%timeit data[\"uints\"].isin([np.uint64(1), np.uint64(2)]) # 4ms\r\n```",
"Thanks @pbrochart "
] |
3,007,041,632 | 61,319 | ENH: grep-like select columns of a DataFrame by a part of their names | closed | 2025-04-20T14:10:10 | 2025-04-21T15:49:28 | 2025-04-21T15:49:27 | https://github.com/pandas-dev/pandas/issues/61319 | true | null | null | kirisakow | 3 | ### Feature Type
- [x] Adding new functionality to pandas
- [ ] Changing existing functionality in pandas
- [ ] Removing existing functionality in pandas
### Problem Description
I wish I could grep-like select columns of a DataFrame by a part of their names, and return a subset of the original DataFrame containing only columns that match the substring.
### Feature Description
```py
from typing import List, Union
import pandas as pd
class ExtendedDF(pd.DataFrame):
@property
def _constructor(self):
return ExtendedDF
def select_by_substr(self, substr: Union[str, List[str]], *, ignore_case: bool = True) -> Union[pd.DataFrame, 'ExtendedDF']:
"""grep-like select columns of a DataFrame by a part of their names.
Args:
substr (Union[str, List[str]]): a string or a list of strings to be used as search patterns
ignore_case (bool): if True (default), ignore search pattern case
Returns:
pd.DataFrame: a subset of the original DataFrame containing only columns that match the substring
Usage:
Consider two DataFrame objects extracted from two different sources, and thus varying in their column names:
```py
df1 = pd.DataFrame({
'Distance': [105.0, 0.0, 4.0, 1.0, 1241.0],
'Distance_percent': [0.2, 0.0, 5.2, 11.1, 92.8],
'Mixed': [921.0, 0.0, 52.0, 5.0, 0.0],
'Mixed_percent': [1.9, 0.0, 67.5, 55.6, 0.0],
'avg_diff': [121146.9, 293246.3, 212169.9, 41299.8, 29438.3],
'med_diff': [17544.0, 1657.0, 55205.0, 95750.0, 2577.0],
})
df2 = pd.DataFrame({
'distance': [105.0, 0.0, 4.0, 1.0, 1241.0],
'distance_percent': [0.2, 0.0, 5.2, 11.1, 92.8],
'mixed': [921.0, 0.0, 52.0, 5.0, 0.0],
'mixed_percent': [1.9, 0.0, 67.5, 55.6, 0.0],
'diff_avg': [121146.9, 293246.3, 212169.9, 41299.8, 29438.3],
'diff_med': [17544.0, 1657.0, 55205.0, 95750.0, 2577.0],
})
df1 = ExtendedDF(df1)
df2 = ExtendedDF(df2)
```
```
df1
Distance Distance_percent Mixed Mixed_percent avg_diff med_diff
0 105.0 0.2 921.0 1.9 121146.9 17544.0
1 0.0 0.0 0.0 0.0 293246.3 1657.0
2 4.0 5.2 52.0 67.5 212169.9 55205.0
3 1.0 11.1 5.0 55.6 41299.8 95750.0
4 1241.0 92.8 0.0 0.0 29438.3 2577.0
df2
distance distance_percent mixed mixed_percent diff_avg diff_med
0 105.0 0.2 921.0 1.9 121146.9 17544.0
1 0.0 0.0 0.0 0.0 293246.3 1657.0
2 4.0 5.2 52.0 67.5 212169.9 55205.0
3 1.0 11.1 5.0 55.6 41299.8 95750.0
4 1241.0 92.8 0.0 0.0 29438.3 2577.0
```
As an analyst, I need to inspect which column is which between the two datasets:
(a) either by defining a single string search pattern (`ignore_case=True` by default):
```py
cols_to_select = 'diff'
print('df1:')
print(df1.select_by_substr(cols_to_select).T) # transposed for a better legibility
print()
print('df2:')
print(df2.select_by_substr(cols_to_select).T) # transposed for a better legibility
```
```
df1:
0 1 2 3 4
avg_diff 121146.9 293246.3 212169.9 41299.8 29438.3
med_diff 17544.0 1657.0 55205.0 95750.0 2577.0
df2:
0 1 2 3 4
diff_avg 121146.9 293246.3 212169.9 41299.8 29438.3
diff_med 17544.0 1657.0 55205.0 95750.0 2577.0
```
(b) or by defining a list of string search patterns (`ignore_case=True` by default):
```py
cols_to_select = ['dist', 'Mix']
print('df1:')
print(df1.select_by_substr(cols_to_select).T) # transposed for a better legibility
print()
print('df2:')
print(df2.select_by_substr(cols_to_select).T) # transposed for a better legibility
```
```
df1:
0 1 2 3 4
Mixed 921.0 0.0 52.0 5.0 0.0
Distance 105.0 0.0 4.0 1.0 1241.0
Mixed_percent 1.9 0.0 67.5 55.6 0.0
Distance_percent 0.2 0.0 5.2 11.1 92.8
df2:
0 1 2 3 4
mixed_percent 1.9 0.0 67.5 55.6 0.0
mixed 921.0 0.0 52.0 5.0 0.0
distance 105.0 0.0 4.0 1.0 1241.0
distance_percent 0.2 0.0 5.2 11.1 92.8
```
(c) or, same as (b) but with an explicit `ignore_case=False`:
```py
cols_to_select = ['dist', 'Mix']
print('df1:')
print(df1.select_by_substr(cols_to_select, ignore_case=False).T) # transposed for a better legibility
print()
print('df2:')
print(df2.select_by_substr(cols_to_select, ignore_case=False).T) # transposed for a better legibility
```
```
df1:
0 1 2 3 4
Mixed_percent 1.9 0.0 67.5 55.6 0.0
Mixed 921.0 0.0 52.0 5.0 0.0
df2:
0 1 2 3 4
distance_percent 0.2 0.0 5.2 11.1 92.8
distance 105.0 0.0 4.0 1.0 1241.0
```
"""
substr = [substr] if isinstance(substr, str) else substr
if ignore_case:
selected_cols = [col_name for col_name in self.columns for s in substr if s.casefold() in col_name.casefold()]
else:
selected_cols = [col_name for col_name in self.columns for s in substr if s in col_name]
selected_cols = list(set(selected_cols))
return self[selected_cols]
```
### Alternative Solutions
Idk
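For reference, much of this can already be composed from the existing `DataFrame.filter`, which matches column labels by substring (`like`) or pattern (`regex`); case-insensitive matching can be had with an inline regex flag (a sketch using the df1 columns from above):

```python
import pandas as pd

df1 = pd.DataFrame({
    "Distance": [105.0], "Distance_percent": [0.2],
    "Mixed": [921.0], "Mixed_percent": [1.9],
    "avg_diff": [121146.9], "med_diff": [17544.0],
})

# Case-sensitive substring match on column labels
print(df1.filter(like="diff").columns.tolist())       # ['avg_diff', 'med_diff']

# Case-insensitive match via an inline (?i) regex flag
print(df1.filter(regex="(?i)dist").columns.tolist())  # ['Distance', 'Distance_percent']
```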
### Additional Context
_No response_ | [
"Enhancement",
"Indexing",
"Needs Info"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"@kirisakow does [DataFrame.filter](https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.filter.html) satisfy your use case?\n```\ndf_new = df.filter(like=substr)\n```",
"Filter is one way to do this. You could also pass a callable that takes in a dataframe, and returns the columns you care about, to `__getitem__`.",
"Thanks for the suggestion but per the 2 suggestions above, there are more primitive APIs available that would allow you to compose the functionality in this request, so I don't think this would require a dedicated API for this. Thanks but closing"
] |
3,006,986,859 | 61,318 | ENH: inspect duplicate rows for columns that vary | closed | 2025-04-20T12:11:06 | 2025-04-21T15:50:37 | 2025-04-21T15:50:36 | https://github.com/pandas-dev/pandas/issues/61318 | true | null | null | kirisakow | 2 | ### Feature Type
- [x] Adding new functionality to pandas
- [ ] Changing existing functionality in pandas
- [ ] Removing existing functionality in pandas
### Problem Description
I wish I had a function that would inspect a DataFrame that has duplicate values and yield, for each group of rows that share a duplicate value, a subset of the input DataFrame featuring only the columns that vary.
### Feature Description
```py
from typing import Union
import pandas as pd
class ExtendedDF(pd.DataFrame):
@property
def _constructor(self):
return ExtendedDF
def inspect_duplicates(self, key_col: str) -> Union[pd.DataFrame, 'ExtendedDF']:
"""Inspects a DataFrame that has duplicate values in the `key_col` column,
and yields, per each group of rows that have same `key_col` value, a subset
of the input DataFrame featuring only the columns that vary.
Args:
key_col (str): name of the column with duplicate values
Yields:
pd.DataFrame: per each group of rows that have same `key_col` value,
yields a subset of the input DataFrame featuring only the columns that
vary.
Examples:
Consider a dataset containing ramen ratings with duplicates:
```py
df = pd.DataFrame({
'brand': ['Yum Yum', 'Yum Yum', 'Indomie', 'Indomie', 'Indomie'],
'style': ['cup', 'pack', 'cup', 'cup', 'pack'],
'rating': [4, 3.5, 4, 15, 5],
'col_that_doesnt_change': ['so yummy' for _ in range(5)],
'another_col_that_doesnt_change': ['mmm love it' for _ in range(5)],
})
df = ExtendedDF(df)
```
```
df
brand style rating col_that_doesnt_change another_col_that_doesnt_change
0 Yum Yum cup 4.0 so yummy mmm love it
1 Yum Yum pack 3.5 so yummy mmm love it
2 Indomie cup 4.0 so yummy mmm love it
3 Indomie cup 15.0 so yummy mmm love it
4 Indomie pack 5.0 so yummy mmm love it
```
Inspect the duplicates using 'brand' column as the key:
```py
print(
*df.inspect_duplicates('brand')
)
```
```
brand rating
2 Indomie 4.0
3 Indomie 15.0
4 Indomie 5.0
brand style rating
0 Yum Yum cup 4.0
1 Yum Yum pack 3.5
```
Inspect the duplicates using 'style' column as the key:
```py
print(
*df.inspect_duplicates('style')
)
```
```
style brand
0 cup Yum Yum
2 cup Indomie
3 cup Indomie
style brand rating
1 pack Yum Yum 3.5
4 pack Indomie 5.0
```
Inspect the duplicates using 'rating' column as the key:
```py
print(
*df.inspect_duplicates('rating')
)
```
```
rating brand
0 4.0 Yum Yum
2 4.0 Indomie
```
You can also concatenate everything that is yielded into a single DataFrame:
```py
print(
pd.concat([
*df.inspect_duplicates('brand')
])
)
```
```
brand style rating
0 Yum Yum cup 4.0
1 Yum Yum pack 3.5
2 Indomie NaN 4.0
3 Indomie NaN 15.0
4 Indomie NaN 5.0
```
"""
mark_all_dupl_mask = self.duplicated(key_col, keep=False)
df_dupl = self.loc[mark_all_dupl_mask]
for k in set(df_dupl[key_col].values):
sub_df = self.loc[self[key_col] == k]
mask_eq = sub_df.iloc[0] != sub_df.iloc[1]
diff_cols = mask_eq.loc[mask_eq].index.values
yield sub_df.loc[:, [key_col] + list(diff_cols)]
```
### Alternative Solutions
None
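For reference, similar output can be composed today with `duplicated` plus `groupby` and a small helper, as the discussion suggests (a sketch; `varying_cols` is a hypothetical name):

```python
import pandas as pd

df = pd.DataFrame({
    "brand": ["Yum Yum", "Yum Yum", "Indomie", "Indomie", "Indomie"],
    "style": ["cup", "pack", "cup", "cup", "pack"],
    "rating": [4, 3.5, 4, 15, 5],
})

def varying_cols(g: pd.DataFrame) -> pd.DataFrame:
    # Keep only the columns that take more than one value within the group
    return g[[c for c in g.columns if g[c].nunique() > 1]]

# Restrict to rows whose 'brand' is duplicated, then inspect each group
dupes = df[df.duplicated("brand", keep=False)]
for _, grp in dupes.groupby("brand"):
    print(varying_cols(grp))
```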
### Additional Context
Authors:
- @miraaitsaada
- @kirisakow | [
"Enhancement",
"Groupby",
"Closing Candidate"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"This doesn't look like a general enough use case to add it to the pandas API. As a result, I'm -1 on this.\nYou can achieve something like this using `.groupby().apply()` and a custom lambda function.",
"Agreed, there are primitive APIs, as you demonstrate, that allows this functionality without having to maintain it in pandas so closing"
] |
3,006,956,714 | 61,317 | ENH: Make DataFrame.filter accept filters in new formats | open | 2025-04-20T11:05:24 | 2025-05-27T21:56:41 | null | https://github.com/pandas-dev/pandas/issues/61317 | true | null | null | datapythonista | 20 | I think it'd be very nice for users to have this working with [filter](https://pandas.pydata.org/docs/dev/reference/api/pandas.DataFrame.filter.html):
```python
df.filter(df["age"] > 18) # same as `df[df["age"] > 18]`
df.filter("age > 18") # same as `df.query("age > 18")`, I think `.query` should be deprecated if this is implemented in `.filter`
df.filter(lambda df: df["age"] > 18) # same as `df[df['age'].apply(lambda x: x > 18)]`, useful for method chaining
```
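For comparison, each of the proposed forms can already be expressed with today's API; the proposal would mainly unify them under one method (a sketch with made-up data):

```python
import pandas as pd

df = pd.DataFrame({"age": [10, 25, 40], "name": ["a", "b", "c"]})

# Boolean-mask form: what df.filter(df["age"] > 18) would do
by_mask = df[df["age"] > 18]

# String-expression form: what df.filter("age > 18") would do
by_query = df.query("age > 18")

# Method-chaining form: what df.filter(lambda df: ...) would do
by_pipe = df.pipe(lambda d: d[d["age"] > 18])

assert by_mask.equals(by_query) and by_query.equals(by_pipe)
```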
I think implementing this is reasonably simple. I think the main challenge is how to design the API in a way that filter can be intuitive and still work with the current parameters, in particular keeping backward compatibility. But personally, I think this would be so useful that it is worth finding a solution.
CC: @rhshadrach | [
"Needs Discussion",
"Filters"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Partial proposal:\n\n - Accept `items` (will maybe want to rename this argument?) of type:\n - Series (will align on index)\n - Non-Series list-likes (must be same length as df)\n - strings a la `query`\n - UDFs to be discussed.\n - Deprecate `like`, `regex`; offer no alternatives.\n - Deprecate `axis=1` but add `DataFrame.select` (somewhat talked about in https://github.com/pandas-dev/pandas/issues/55289).\n\nFor UDFs, it seems to me that the usage in the OP can be readily handled by `pipe`. I would more expect passing a UDF to `filter` would operate row-wise similar to `apply(..., by_row=True)`.\n\nAnother question is how strict we are on the values that will be filtered. Do we require these to be `bool/np.bool_`, or do we allow any value an internally pandas will evaluate the truthyness of it. I would lean toward the latter.\n\n",
"I like the idea.\n\nIf I understand correctly, the main use of `df.filter(cond)` where cond is a `Series` will be equivalent to now use `df[cond]`. I think implementing the `like` and `regex` behaviors would be trivial with `df.filter(df[\"col\"].str.contains(\"xxx\"))` and same for `regex`, right? It does feel we're offering a very reasonable alternative.\n\nI see your point for using `.pipe` to filter, and in a way kind of agree. But it feels like `df.filter(lambda x: x[\"age\"] > 18)` will be together with `df.filter(df[\"age\"] > 18)` the most common used case by far. While `df.pipe(lambda x: x[x[\"age\"] > 18)` may seem a reasonable alternative, I think it will really make users' life easier to support the former, as I think it's way more intuitive.\n\nIn any case, what you propose seems like a great improvement.",
"> I think implementing the `like` and `regex` behaviors would be trivial with `df.filter(df[\"col\"].str.contains(\"xxx\"))` and same for `regex`, right? It does feel we're offering a very reasonable alternative.\n\nAgreed - I should have said no _new_ alternatives. :laughing:\n\nFor UDFs, one reason not to have `df.filter(lambda x: x[\"age\"] > 18)` operate by row is that it is effectively a transpose (`x` being a Series means it can only have one dtype), one of the behaviors I would love to remove from pandas across the board. Another is that `agg`, `apply`, `transform` all pass columns (vertical) objects into the UDF. While it doesn't make sense for `filter` to act column-by-column, passing the entire DataFrame seems closer in behavior than operating horizontally.\n\nHowever I do not find it intuitive that in `df.filter(lambda x: x[\"age\"] > 18)` the `x` is the same as `df`. I agree in the utility of having this for method chaining, but I immediately think `x` as being a component (element / column / row) of `df` instead of the entire thing. Perhaps that's just me?\n\nA bit of restatement of my previous post, but it seems like `df.filter(lambda x: ...)` acting by row provides new functionality otherwise not readily available (I think?) where as having `x` be all of `df` is very close to duplicating `pipe`.\n\nFinally, if we are to have `x` be the same as `df` in this case, what is the validation on the result? Must it be a Series with the same index as `df`, or are we going to allow alignment. Can users returns list-likes of the same length?\n\nOverall, I lean toward operate by-row here, but not strongly.\n\n> I think `.query` should be deprecated if this is implemented in `.filter`\n\nI agree, but desire the deprecation would be slow. That is, first introduce filter and change the docs to discourage the use of `query`. Then after 1 or 2 years, start the deprecation process.\n\ncc @pandas-dev/pandas-core for any thoughts.",
"> Overall, I lean toward operate by-row here, but not strongly.\n\n`DataFrame.filter` does not filter a Dataframe on its contents, the filter is applied to the labels of the index. The suggestion in the OP is to essentially add value based conditional filtering to this method.\n\nIf you operate by row, (or by column if the axis argument is retained), then if you passed a Series with the Series.name set to the index label then it would be easier to filter based on the index label and thereby potentially justify the removal of `like` and `regex` and offer no alternatives?\n\n",
"I see your point @rhshadrach, and I think what you propose is very reasonable and maybe even thr best option in theory.\n\nIn practice, I would be very surprised if most users don't find the pyspark-like API of the function receiving the whole dataframe more intuitive. See [this example](https://spark.apache.org/docs/latest/api/python/reference/pyspark.sql/api/pyspark.sql.DataFrame.filter.html) in their docs:\n\n```python\ndf.filter(df.age > 3).show()\n```\n\nWe can't compare directly with a lazy API, but I think what I propose is quite similar to this.\n\nAlso, it was discussed before about adding `pandas.col(\"my_col\")` to avoid the lambda. I guess that would look like:\n\n```python\ndf.filter(pd.col('age') > 3)\n```\n\nPersonally if filter will accept both this expression and a lambda, I think it's way more clear and intuitive that the lambda works the way I described.\n\nLet's see what other people think, maybe what's clear and intuitive to me it's not to others.\n\n",
"Maybe I'm missing something, but why deprecate `query()`. I have LOTS of code that uses that.\n\nWhy not leave `filter` as is - it operates on labels - and maybe expand `query()` to take expressions as proposed here.\n\nSo that `df.query(df[\"age\"] > 18)` and `df.query(\"age > 18\")` would do the same thing\n",
"That's a reasonable option. I think filter is more clear, and is what everybody else is using. If we were to implement the API from scratch now, I think it would be the obvious choice. For backward compatibility query may be better, and we can surely consider it. But I would rather have a very long deprecation timeline, than keep the API IMHO wrong because of a choice we did that now is not ideal.",
"> Why not leave filter as is - it operates on labels\n\nBecause it's at odds with other DataFrame libraries.\n\n - PySpark: https://spark.apache.org/docs/latest/api/python/reference/pyspark.sql/api/pyspark.sql.DataFrame.filter.html\n - Polars: https://docs.pola.rs/api/python/stable/reference/dataframe/api/polars.DataFrame.filter.html\n - ibis: https://ibis-project.org/reference/expression-tables.html#ibis.expr.types.relations.Table.filter\n \nThe exceptions are Modin and dask, but I think they were designed to model the pandas API.\n\nIn addition, I would argue `query` is an odd choice of a name for a filtering method.\n\nI'm fine with leaving query as it is for a long time, including indefinitely.",
"> > Why not leave filter as is - it operates on labels\n> \n> Because it's at odds with other DataFrame libraries.\n\nBut those libraries were introduced _after_ `pandas` ! So shouldn't THEY be modifying _their_ API's? (I say this somewhat facetiously)\n\n> In addition, I would argue query is an odd choice of a name for a filtering method.\n\nThen maybe you think that SQL (Structured Query Language) should be called SFL (Structured Filtering Language) ? (definitely said facetitously)\n\nTo me `query` is appropriate if you think of it from the perspective of how SQL works.\n\n",
"For a method that returns a subset of rows based on a condition, I think the standard terminology is filter. Query seems more appropriate for a more complex expression that can get data doing operations that not only involve a filter. I think SQL is consistent with this, since it allows to do more than the WHERE clause. And I think query feels inappropriate. `df.where` would be consistent with SQL, but to me filter is clearly the right choice.",
"> To me query is appropriate if you think of it from the perspective of how SQL works.\n\nQueries in SQL can do so much more than filter. `DataFrame.query` can only filter. I think this is supporting my contention that `query` is an odd name.",
"> Why not leave `filter` as is - it operates on labels - and maybe expand `query()` to take expressions as proposed here.\n\nThat makes sense to me as we would not be mixing label based \"filtering\" with value based \"filtering\"\n\n> I think filter is more clear, and is what everybody else is using. If we were to implement the API from scratch now, I think it would be the obvious choice.\n\nThis also makes sense to me.\n\nSo the issue is how to make this transition. If we don't mix the label based and value based \"filtering\" this surely makes the transition path more difficult.\n\nI'm still not clear how we keep the current label based filtering functionality if we \"Deprecate like, regex; offer no alternatives.\" When we deprecate we say something like \"x is deprecated. use ... instead\". @rhshadrach can you clarify what the ... would be?",
"> When we deprecate we say something like \"x is deprecated. use ... instead\". [@rhshadrach](https://github.com/rhshadrach) can you clarify what the ... would be?\n\n`df.filter(df[\"col\"].index.str.contains(\"xxx\"))` (or same with square brackets)... this was discussed above in one of the many replies, Richard meant no new specific alternative, and all the existing filter method funcionality is already possible (and personally I'd bet that the alternative using a boolean mask based on the index attribute may already be more popular than the filter method.",
"> and personally I'd bet that the alternative using a boolean mask based on the index attribute may already be more popular than the filter method.\n\nYes, boolean indexing is one of the core strengths of pandas and remains the best practice for filtering data. Its clarity and explicit nature make it ideal for developers who want to see exactly which rows or columns are being selected—for example, using expressions like `df[df['age'] > 18]` directly leaves little room for ambiguity.\n\nIn contrast, convenience methods like `DataFrame.filter` and `DataFrame.query` were originally designed to offer syntactic sugar for specific filtering operations that might be less straightforward with boolean indexing. However, extending these methods to incorporate functionality already achievable through boolean indexing creates a duplicate API. This duplication tends to blur the clear separation of concerns: boolean indexing for explicit condition-based filtering, and the convenience methods for more specialized use cases such as label-based filtering or evaluating query expressions.\n\nFrom the perspective of user-friendliness and maintainability, especially for newcomers, it is perhaps more intuitive to keep these methods unchanged. Retaining their original, focused design helps avoid confusion. New users won't have to decide between multiple approaches for the same operation, and experienced users can continue to leverage boolean indexing as a robust tool for data selection. Moreover, a stable and clear API encourages better code clarity and consistency, both of which are essential for long-term maintainability.\n\nIn summary, while extending these convenience methods might seem like a way to offer more flexibility, doing so risks introducing unnecessary redundancy and potential confusion. 
Maintaining the current API allows developers to choose the most appropriate filtering method—whether it's the explicit power of boolean indexing or the specialized convenience of `filter` and `query`—without overlapping functionality?\n",
"I think the existing API is already duplicated, as you mention, filter is syntactic sugar for 3 very particular use cases (I personally never used).\n\nI don't think the square brackets is a good API for method chaining, so I'm happy with the duplication after the changes proposed here.\n\nAlso, after having used both pyspark and polars, I find the filter method with a condition one of the essential functionality of a dataframe library. If we manage to implement the syntax below, I think it'll be the most important and convenient API change to pandas since I started contributing:\n\n```python\ndf.filter(pd.col('age') > 3)\n```\n\nOf course other will have different points of view, but for a large amount of our user base I think this would be a huge improvement. And as a first step it needs the changes proposed in this issue.",
"> And as a first step it needs the changes proposed in this issue.\n\nIf DataFrame.filter did not already exist and do something different it would definitely be more straightforward to implement this.\n\n> I think the existing API is already duplicated, as you mention, filter is syntactic sugar for 3 very particular use cases (I personally never used).\n\nI don't disagree. Let me think on this some more.\n\n> I don't think the square brackets is a good API for method chaining, so I'm happy with the duplication after the changes proposed here.\n\nnoted. \n",
"for some additional context, it seems we are covering some of the same ground as #12401",
"pinging @jorisvandenbossche for input as participant https://github.com/pandas-dev/pandas/issues/12401 and the follow up open issue #26642.\n\nI'm guessing from https://github.com/pandas-dev/pandas/issues/26642#issuecomment-511080719 that @jorisvandenbossche may want to retain the syntatic sugar that .filter offers but is not adverse to renaming the method.",
"This is an interesting discussion, and I'm glad some of the old issues were linked in to give a historical context. Here is how I see things (and feel free to disagree):\n\n1. Way back when, `pandas` had a `DataFrame.select()` method and `DataFrame.filter()` method, both which operated on labels. `DataFrame.select()` was deprecated and removed because of the functionality of `DataFrame.loc` and `DataFrame.filter()`.\n2. There has been debate over time regarding the various ways people can filter data based on labels as well as data. Right now, there are 3 ways: `DataFrame.filter()` allows filtering on labels, `DataFrame.loc` allows including an expression that creates a boolean mask, and `DataFrame.query()` allows arbitrary expressions that allow filtering on _both_ labels and data.\n3. In the meantime, libraries like `polars` and `pyspark` have introduced a `filter()` method that allows filtering on data.\n\nSo a question here is whether we change `DataFrame.filter()` to have functionality similar to what is used in `polars` and `pyspark` , and how to create a transition path for those using `filter()` for filtering on labels. \n\nOne thing that I want to point out about `query()`, which I find very useful, is that it allows filtering on a combination of labels and data. \n\nIn his comment at https://github.com/pandas-dev/pandas/issues/61317#issuecomment-2831973882, @datapythonista suggested this kind of syntax: `df.filter(df[\"col\"].index.str.contains(\"xxx\"))`. If the index were named \"foo\", I would use `df.query(\"foo.str.contains('xxx')\")`, which, at least for me, is a lot cleaner. When I have a `DataFrame`, typically with a `MultiIndex`, I think of the labels in the `MultiIndex` as data so I can do queries that combine both expressions on labels in the index with expressions on data in the columns.\n",
"> One thing that I want to point out about `query()`, which I find very useful, is that it allows filtering on a combination of labels and data.\n\n`.filter` will behave exactly the same on string input. And I am fine with keeping `query`. I have a bit of a preference to have a long period of discouraging it and then deprecating, but I'm fine with it remaining an alias for `filter` (on string inputs) indefinitely.\n\n> how to create a transition path for those using `filter()` for filtering on labels.\n\nI would propose to add a new argument `cond`. Users can either specify some nonempty subset of `{items, like, regex}` or `cond`, but not both. We can then introduce the deprecation on specifying the former."
] |
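The thread above compares several filtering idioms. A minimal runnable sketch of what is possible in pandas today, with the proposed syntax (which does not exist yet, including `pd.col`) shown only as comments:

```python
import pandas as pd

df = pd.DataFrame({"age": [10, 25, 30], "name": ["a", "b", "c"]})

# Today: explicit boolean indexing, the idiom discussed in the thread
adults = df[df["age"] > 18]

# Today: the same filter inside a method chain, via pipe
adults_chained = df.pipe(lambda d: d[d["age"] > 18])

# Proposed (not in pandas): df.filter(df["age"] > 18)
# Proposed (not in pandas): df.filter(pd.col("age") > 18)
```

Both lines produce the same two-row result; the proposal is about ergonomics in method chains, not new capability.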
3,006,803,729 | 61,316 | BUG: `.to_parquet` fails with `schema` for `string[pyarrow]` but not `string[python]` | open | 2025-04-20T04:47:56 | 2025-04-22T21:32:11 | null | https://github.com/pandas-dev/pandas/issues/61316 | true | null | null | noahblakesmith | 1 | ### Pandas version checks
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [x] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
import random
import string
import pandas as pd
import pyarrow as pa
# Gen ~1m bytes of text
txt = "".join(random.choices(string.printable, k=int(1e6)))
# Gen dataframes
data = {"c": [txt] * int(5e3)}
df_v0 = pd.DataFrame(data, dtype="string[python]")
df_v1 = pd.DataFrame(data, dtype="string[pyarrow]")
# Write to parquets using schema
schema = pa.schema([pa.field("c", pa.string())])
df_v0.to_parquet(path="df_v0.parquet", schema=schema)
df_v1.to_parquet(path="df_v1.parquet", schema=schema)
```
### Issue Description
Writing to a parquet file fails when `dtype` is `string[pyarrow]` but not for `string[python]`.
### Expected Behavior
I believe `df_v1` should write to a parquet file like `df_v0` does.
### Installed Versions
<details>
INSTALLED VERSIONS
------------------
commit : 0691c5cf90477d3503834d983f69350f250a6ff7
python : 3.10.16
python-bits : 64
OS : Linux
OS-release : 6.11.0-1012-azure
Version : #12~24.04.1-Ubuntu SMP Mon Mar 10 19:00:39 UTC 2025
machine : x86_64
processor : x86_64
byteorder : little
LC_ALL : None
LANG : C.UTF-8
LOCALE : en_US.UTF-8
pandas : 2.2.3
numpy : 2.2.2
pytz : 2025.2
dateutil : 2.9.0.post0
pip : 25.0
Cython : None
sphinx : None
IPython : 8.35.0
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : 4.13.4
blosc : None
bottleneck : None
dataframe-api-compat : None
fastparquet : None
fsspec : 2025.3.2
html5lib : None
hypothesis : None
gcsfs : None
jinja2 : 3.1.6
lxml.etree : 5.3.1
matplotlib : None
numba : None
numexpr : None
odfpy : None
openpyxl : 3.1.5
pandas_gbq : None
psycopg2 : None
pymysql : None
pyarrow : 19.0.1
pyreadstat : None
pytest : None
python-calamine : None
pyxlsb : None
s3fs : None
scipy : None
sqlalchemy : 2.0.39
tables : None
tabulate : None
xarray : None
xlrd : 2.0.1
xlsxwriter : None
zstandard : None
tzdata : 2025.2
qtpy : None
pyqt5 : None
</details>
| [
"Bug",
"Closing Candidate",
"Upstream issue",
"Arrow"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Thanks for the report. It seems to me this might be a pyarrow issue.\n\n```python\ntxt = \"\".join(random.choices(string.printable, k=int(1e6)))\n\n# Gen dataframes\ndata = {\"c\": [txt] * int(5e3)}\ndf_v0 = pd.DataFrame(data, dtype=\"string[python]\")\ndf_v1 = pd.DataFrame(data, dtype=\"string[pyarrow]\")\n\ntype_ = pa.string()\npa.array(df_v0[\"c\"], type=type_, from_pandas=True, safe=True) # Success\npa.array(df_v1[\"c\"], type=type_, from_pandas=True, safe=True) # Fails\n```"
] |
3,006,767,854 | 61,315 | DOC: Add missing punctuation to merging.rst | closed | 2025-04-20T02:51:59 | 2025-04-21T15:46:38 | 2025-04-21T15:46:30 | https://github.com/pandas-dev/pandas/pull/61315 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61315 | https://github.com/pandas-dev/pandas/pull/61315 | star1327p | 3 | Add missing punctuation to ``merging.rst``.
- [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| [
"Docs",
"Reshaping"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"/preview",
"Website preview of this PR available at: https://pandas.pydata.org/preview/pandas-dev/pandas/61315/",
"Thanks @star1327p "
] |
3,006,704,163 | 61,314 | [minor edit] fix typo: psudocode -> pseudocode | closed | 2025-04-19T23:28:30 | 2025-04-21T15:47:05 | 2025-04-21T15:46:59 | https://github.com/pandas-dev/pandas/pull/61314 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61314 | https://github.com/pandas-dev/pandas/pull/61314 | kirisakow | 1 | this PR fixes this typo spotted here:
https://github.com/pandas-dev/pandas/blob/a811388727bb0640528962191b0f4e50d8235cfd/.github/ISSUE_TEMPLATE/feature_request.yaml#L34
<!--
- [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
//--> | [
"CI"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Thanks @kirisakow "
] |
3,006,698,990 | 61,313 | [ canceled PR ] | closed | 2025-04-19T23:11:04 | 2025-04-19T23:22:51 | 2025-04-19T23:18:47 | https://github.com/pandas-dev/pandas/pull/61313 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61313 | https://github.com/pandas-dev/pandas/pull/61313 | kirisakow | 0 | this PR fixes this typo spotted here:
https://github.com/pandas-dev/pandas/blob/a811388727bb0640528962191b0f4e50d8235cfd/.github/ISSUE_TEMPLATE/feature_request.yaml?plain=1#L34
<!--
- [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
//--> | [] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] |
3,006,638,852 | 61,312 | BUG: duplicated() is reporting rows as duplicates when they aren't upon visual inspection. | closed | 2025-04-19T20:18:24 | 2025-04-19T21:02:04 | 2025-04-19T21:02:03 | https://github.com/pandas-dev/pandas/issues/61312 | true | null | null | nvd2291 | 2 | ### Pandas version checks
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [x] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
# Data: creditcard.csv from Kaggle: https://www.kaggle.com/datasets/mlg-ulb/creditcardfraud
import pandas as pd
import numpy as np
credit_card_df = pd.read_csv('creditcard.csv')
duplicated_df = credit_card_df[credit_card_df.duplicated()]
duplicated_df
```
### Issue Description
If you look at the output of the *duplicated_df* you can see rows 33 and 35 reported as duplicates when they aren't. The values are close but not exact duplicates.
### Expected Behavior
Would expect these rows to not be reported as duplicates because the values in the columns that aren't named 'Time' are not identical to each other.
### Installed Versions

| [
"Bug",
"Needs Triage",
"Closing Candidate"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Thanks for the report. I think you are misunderstanding what `df.duplicated()` / `df.duplicated(keep='first')` does.\n\nIf rows 2, 3 and 4 are identical, then `df.duplicated()` will return `False` for row 2 and `True` for rows 3 and 4. So using a boolean index on the original dataframe would only contain rows 3 and 4. You could use `df.duplicated(keep=False)` which will return `True` for all duplicates including the first occurence.\n\nYou can see in the kaggle csv that rows 32,33 are identical and 34,35 are identical, therefore the duplicated method returns True for rows 33 and 35.",
"Thanks for the clairifcation. I didn't realize this"
] |
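The maintainer's explanation above (default `keep='first'` versus `keep=False`) can be sketched in a few lines:

```python
import pandas as pd

df = pd.DataFrame({"a": [1, 1, 2, 2]})

# keep='first' (the default) marks only the later occurrences as duplicates
default_mask = df.duplicated()

# keep=False marks every member of each duplicate group, including the first
full_mask = df.duplicated(keep=False)

print(default_mask.tolist())  # [False, True, False, True]
print(full_mask.tolist())     # [True, True, True, True]
```

So `df[df.duplicated()]` intentionally omits the first copy of each duplicated row, which is what the reporter observed.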
3,006,611,118 | 61,311 | BUG: ``'ArrowExtensionArray' object has no attribute 'max'`` when passing pyarrow-backed series to `.iloc` | closed | 2025-04-19T19:08:44 | 2025-08-05T17:21:49 | 2025-08-05T17:21:49 | https://github.com/pandas-dev/pandas/issues/61311 | true | null | null | MarcoGorelli | 1 | ### Pandas version checks
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [x] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
In [1]: import pandas as pd
In [2]: df = pd.DataFrame({"a": [1, 2], "c": [0, 2], "d": ["c", "a"]})
In [3]: df.iloc[:, df['c']] # works fine
Out[3]:
a d
0 1 c
1 2 a
In [4]: df = pd.DataFrame({"a": [1, 2], "c": [0, 2], "d": ["c", "a"]}).convert_dtypes(dtype_backend='pyarrow')
In [5]: df.iloc[:, df['c']] # now, it raises
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
Cell In[5], line 1
----> 1 df.iloc[:, df['c']]
File ~/pandas-dev/pandas/core/indexing.py:1189, in _LocationIndexer.__getitem__(self, key)
1187 if self._is_scalar_access(key):
1188 return self.obj._get_value(*key, takeable=self._takeable)
-> 1189 return self._getitem_tuple(key)
1190 else:
1191 # we by definition only have the 0th axis
1192 axis = self.axis or 0
File ~/pandas-dev/pandas/core/indexing.py:1692, in _iLocIndexer._getitem_tuple(self, tup)
1691 def _getitem_tuple(self, tup: tuple):
-> 1692 tup = self._validate_tuple_indexer(tup)
1693 with suppress(IndexingError):
1694 return self._getitem_lowerdim(tup)
File ~/pandas-dev/pandas/core/indexing.py:975, in _LocationIndexer._validate_tuple_indexer(self, key)
973 for i, k in enumerate(key):
974 try:
--> 975 self._validate_key(k, i)
976 except ValueError as err:
977 raise ValueError(
978 f"Location based indexing can only have [{self._valid_types}] types"
979 ) from err
File ~/pandas-dev/pandas/core/indexing.py:1613, in _iLocIndexer._validate_key(self, key, axis)
1610 raise IndexError(f".iloc requires numeric indexers, got {arr}")
1612 # check that the key does not exceed the maximum size of the index
-> 1613 if len(arr) and (arr.max() >= len_axis or arr.min() < -len_axis):
1614 raise IndexError("positional indexers are out-of-bounds")
1615 else:
AttributeError: 'ArrowExtensionArray' object has no attribute 'max'
```
### Issue Description
`df.iloc[:, df['c']]` works for regular pandas dataframes but raises for pyarrow-backed ones
spotted in [narwhals](https://github.com/narwhals-dev/narwhals)
### Expected Behavior
```
a d
0 1 c
1 2 a
```
### Installed Versions
<details>
INSTALLED VERSIONS
------------------
commit : 57fd50221ea3d5de63d909e168f10ad9fc0eee9b
python : 3.10.12
python-bits : 64
OS : Linux
OS-release : 5.15.167.4-microsoft-standard-WSL2
Version : #1 SMP Tue Nov 5 00:21:55 UTC 2024
machine : x86_64
processor : x86_64
byteorder : little
LC_ALL : None
LANG : C.UTF-8
LOCALE : en_US.UTF-8
pandas : 3.0.0.dev0+1979.g57fd50221e
numpy : 1.26.4
dateutil : 2.9.0.post0
pip : 25.0.1
Cython : 3.0.12
sphinx : 8.1.3
IPython : 8.33.0
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : 4.13.3
blosc : None
bottleneck : 1.4.2
fastparquet : 2024.11.0
fsspec : 2025.2.0
html5lib : 1.1
hypothesis : 6.127.5
gcsfs : 2025.2.0
jinja2 : 3.1.5
lxml.etree : 5.3.1
matplotlib : 3.10.1
numba : 0.61.0
numexpr : 2.10.2
odfpy : None
openpyxl : 3.1.5
psycopg2 : 2.9.10
pymysql : 1.4.6
pyarrow : 19.0.1
pyreadstat : 1.2.8
pytest : 8.3.5
python-calamine : None
pytz : 2025.1
pyxlsb : 1.0.10
s3fs : 2025.2.0
scipy : 1.15.2
sqlalchemy : 2.0.38
tables : 3.10.1
tabulate : 0.9.0
xarray : 2024.9.0
xlrd : 2.0.1
xlsxwriter : 3.2.2
zstandard : 0.23.0
tzdata : 2025.1
qtpy : None
pyqt5 : None
</details>
| [
"Bug",
"Indexing",
"Arrow"
] | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"take"
] |
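A possible workaround for the report above (an assumption on my part, not an official fix): pass a NumPy array rather than a pandas Series as the positional indexer, which avoids the `ArrowExtensionArray` bounds-check code path. Sketched here with the default backend so it runs without pyarrow installed; the same `.to_numpy()` call applies to a pyarrow-backed frame:

```python
import pandas as pd

df = pd.DataFrame({"a": [1, 2], "c": [0, 2], "d": ["c", "a"]})

# Converting the indexer with .to_numpy() sidesteps the missing
# .max()/.min() on ArrowExtensionArray reported in this issue.
result = df.iloc[:, df["c"].to_numpy()]
print(result.columns.tolist())  # ['a', 'd']
```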
3,006,338,577 | 61,310 | BUG: no last function in window rolling | closed | 2025-04-19T10:00:37 | 2025-04-21T02:52:27 | 2025-04-21T02:52:26 | https://github.com/pandas-dev/pandas/issues/61310 | true | null | null | algonell | 3 | ### Pandas version checks
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [x] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
s = pd.Series(range(5))
s.rolling(3).last()
```
### Issue Description
The reproducible example is right from the [docs](https://pandas.pydata.org/docs/dev/reference/api/pandas.core.window.rolling.Rolling.last.html).
Same goes for agg invocations.
<ins>Typical error messages:</ins>
* AttributeError: 'last' is not a valid function for 'Rolling' object
* AttributeError: 'Rolling' object has no attribute 'last'
### Expected Behavior
Last if available.
### Installed Versions
commit : 0691c5cf90477d3503834d983f69350f250a6ff7
python : 3.13.3
python-bits : 64
OS : Linux
OS-release : 6.14.2-arch1-1
Version : #1 SMP PREEMPT_DYNAMIC Thu, 10 Apr 2025 18:43:59 +0000
machine : x86_64
processor :
byteorder : little
LC_ALL : None
LANG : en_US.UTF-8
LOCALE : en_US.UTF-8
pandas : 2.2.3
numpy : 2.1.3
pytz : 2025.2
dateutil : 2.9.0.post0
pip : 25.0.1
Cython : None
sphinx : None
IPython : 9.1.0
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : 4.13.4
blosc : None
bottleneck : 1.4.2
dataframe-api-compat : None
fastparquet : None
fsspec : None
html5lib : 1.1
hypothesis : None
gcsfs : None
jinja2 : 3.1.5
lxml.etree : 5.3.2
matplotlib : 3.10.1
numba : 0.61.0
numexpr : 2.10.2
odfpy : None
openpyxl : 3.1.5
pandas_gbq : None
psycopg2 : None
pymysql : None
pyarrow : 19.0.1
pyreadstat : None
pytest : None
python-calamine : None
pyxlsb : None
s3fs : None
scipy : 1.15.2
sqlalchemy : None
tables : None
tabulate : 0.9.0
xarray : None
xlrd : 2.0.1
xlsxwriter : None
zstandard : None
tzdata : 2025.1
qtpy : None
pyqt5 : None
| [
"Bug",
"Needs Info",
"Window",
"Closing Candidate"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | [
"@algonell Are you testing this on the main branch? I can reproduce the code snippet from the documentation and obtain the correct result on the main branch.",
"@chilin0525 My bad, I am not testing main/latest.",
"@algonell The `first` and `last` methods were recently added to rolling (#60579). They will be available in pandas 3.0.0"
] |
3,006,331,148 | 61,309 | BUG: to_latex, when escaped=True, doesn't escape columns name | closed | 2025-04-19T09:43:14 | 2025-04-19T12:37:19 | 2025-04-19T12:37:13 | https://github.com/pandas-dev/pandas/issues/61309 | true | null | null | lucian-student | 4 | ### Pandas version checks
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [x] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
import pandas as pd
df = pd.DataFrame(data=[{"hello":"world"}])
df.columns.name = "hello_world"
df.to_latex("table.tex",escape=True)
```
### Issue Description
Contents of df.columns.name aren't escaped.
### Expected Behavior
df.columns.name should be escaped.
### Installed Versions
INSTALLED VERSIONS
------------------
commit : 0f437949513225922d851e9581723d82120684a6
python : 3.8.10.final.0
python-bits : 64
OS : Linux
OS-release : 5.15.167.4-microsoft-standard-WSL2
Version : #1 SMP Tue Nov 5 00:21:55 UTC 2024
machine : x86_64
processor : x86_64
byteorder : little
LC_ALL : None
LANG : C.UTF-8
LOCALE : en_US.UTF-8
pandas : 2.0.3
numpy : 1.24.4
pytz : 2025.2
dateutil : 2.9.0.post0
setuptools : 44.0.0
pip : 20.0.2
Cython : 3.0.12
pytest : None
hypothesis : None
sphinx : None
blosc : None
feather : None
xlsxwriter : None
lxml.etree : None
html5lib : None
pymysql : None
psycopg2 : None
jinja2 : 3.1.6
IPython : 8.12.3
pandas_datareader: None
bs4 : None
bottleneck : None
brotli : None
fastparquet : None
fsspec : None
gcsfs : None
matplotlib : 3.7.5
numba : None
numexpr : 2.8.6
odfpy : None
openpyxl : None
pandas_gbq : None
pyarrow : None
pyreadstat : None
pyxlsb : None
s3fs : None
scipy : None
snappy : None
sqlalchemy : None
tables : 3.8.0
tabulate : None
xarray : None
xlrd : None
zstandard : None
tzdata : 2025.2
qtpy : None
pyqt5 : None
| [
"Bug",
"Duplicate Report",
"Styler"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Providing a reproducible result on the main branch for both `escape=True` and `escape=False`:\n* `escape=True`:\n ```\n \\begin{tabular}{ll}\n \\toprule\n hello_world & hello \\\\\n \\midrule\n 0 & world \\\\\n \\bottomrule\n \\end{tabular}\n ```\n* `escape=False`:\n ```\n \\begin{tabular}{ll}\n \\toprule\n hello_world & hello \\\\\n \\midrule\n 0 & world \\\\\n \\bottomrule\n \\end{tabular}\n ```",
"Thanks for the report! Agreed that column labels should also be escaped. I've also confirmed that index labels are escaped.\n\ncc @attack68 ",
"@rhshadrach Is this issue a duplicate of #57362? I noticed that PR #61307 has already been created to address it.",
"Indeed, thanks! Closing."
] |
3,006,317,345 | 61,308 | ENH: Add tzdata to hard dependencies | closed | 2025-04-19T09:10:48 | 2025-04-22T16:04:29 | 2025-04-22T16:04:22 | https://github.com/pandas-dev/pandas/pull/61308 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61308 | https://github.com/pandas-dev/pandas/pull/61308 | chilin0525 | 1 | - [x] closes #61273
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [x] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [x] Added an entry in the latest `doc/source/whatsnew/v3.0.0.rst` file if fixing a bug or adding a new feature.
| [
"Error Reporting"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Thanks @chilin0525 "
] |
3,006,307,360 | 61,307 | ENH: `df.to_latex(escape=True)` also escape index names | closed | 2025-04-19T08:50:21 | 2025-04-28T16:58:49 | 2025-04-28T16:58:41 | https://github.com/pandas-dev/pandas/pull/61307 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61307 | https://github.com/pandas-dev/pandas/pull/61307 | quangngd | 2 | - [ ] closes #57362 (Replace xxxx with the GitHub issue number)
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
Stage 3. Part of a multi-stage effort: https://github.com/pandas-dev/pandas/pull/57880#issuecomment-2003636401
Part 2 has an issue at https://github.com/pandas-dev/pandas/issues/59324
But in the process of implementing, I realized the current implementation of `df.to_latex(escape)` does not go through `styler.to_latex` but instead calls `styler.format_index`. This implementation follows the same flow, so maybe part 2 is not really relevant.
| [
"Bug",
"IO LaTeX",
"Styler"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"cc @attack68 as the codeowner of styler",
"Thanks @quangngd "
] |
3,003,688,955 | 61,306 | ENH: Add read_dbf method | open | 2025-04-18T00:56:40 | 2025-07-15T20:11:01 | null | https://github.com/pandas-dev/pandas/issues/61306 | true | null | null | cgarciga | 6 | ### Feature Type
- [x] Adding new functionality to pandas
- [ ] Changing existing functionality in pandas
- [ ] Removing existing functionality in pandas
### Problem Description
See: https://stackoverflow.com/questions/41898561/pandas-transform-a-dbf-table-into-a-dataframe
### Feature Description
Add `read_dbf` method to read `.DBF` files.
### Alternative Solutions
Current best solution: https://pypi.org/project/dbf/
### Additional Context
_No response_ | [
"Enhancement",
"IO Data",
"Needs Triage"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Could I be assigned to this issue? Thanks",
"@Allan0820 You can assign to yourself by leaving a `take` comment.",
"take",
"take",
"hey @sajmaru im already working on it"
] |
3,003,688,791 | 61,305 | 50000 | closed | 2025-04-18T00:56:25 | 2025-04-18T16:10:01 | 2025-04-18T16:10:01 | https://github.com/pandas-dev/pandas/issues/61305 | true | null | null | hhhh3113 | 0 | null | [] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] |
3,003,375,577 | 61,304 | ENH: Add Optional Schema Definitions to Enable IDE Autocompletion | closed | 2025-04-17T20:23:14 | 2025-04-17T20:56:32 | 2025-04-17T20:56:31 | https://github.com/pandas-dev/pandas/issues/61304 | true | null | null | YoniChechik | 1 | ### Feature Type
- [x] Adding new functionality to pandas
- [ ] Changing existing functionality in pandas
- [ ] Removing existing functionality in pandas
### Problem Description
Pandas is widely used in data-heavy workflows, and in many cases, the structure of a DataFrame is known in advance — especially when loading from sources like CSVs, databases, or APIs.
However, pandas DataFrames are fully dynamic, so IDEs and static type checkers cannot infer the structure. This limits productivity, especially in large codebases, because column names don't autocomplete.
We’re not asking for runtime schema enforcement or data validation — we’re already familiar with Pandera and similar tools. What’s missing is a mechanism for IDEs and static tools (like Pylance and MyPy) to recognize DataFrame schemas for better code intelligence.
### Feature Description
Introduce an optional way to define column names and types for a DataFrame that tools like VS Code + Pylance can use for autocompletion and type hints.
Example syntax (suggested API):
```python
import pandas as pd
from pandas.typing import Schema # hypothetical
class OrderSchema(Schema):
OrderID: int
CustomerName: str
OrderDate: str
Product: str
Quantity: int
Price: float
Country: str
df: pd.DataFrame[OrderSchema] = pd.read_csv("orders.csv")
# IDE should support:
df.Country # autocomplete & type: str
```
This would behave similarly to how TypedDict or Pydantic models enable structure-aware development, but focused on DataFrame-level constructs.
It does not need to affect runtime at all — just serve as a static hint for tooling.
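For comparison, this is roughly how `TypedDict` (the row-level analogue mentioned above, not a pandas API) gives static tools structure with no runtime cost — at runtime the object is a plain dict; the class and field names here are illustrative:

```python
from typing import TypedDict

class Order(TypedDict):
    OrderID: int
    CustomerName: str
    Price: float

order: Order = {"OrderID": 1, "CustomerName": "Ada", "Price": 9.99}

# at runtime this is just a plain dict; only static checkers see the schema
print(type(order) is dict, order["OrderID"])  # True 1
```

A DataFrame-level equivalent would similarly only need to exist in the type stubs.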
### Alternative Solutions
No
### Additional Context
_No response_ | [
"Enhancement",
"Needs Triage"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Thanks for the suggestion. I think this feature request is more appropriate for pandas stubs so I would suggest opening an issue in that repo https://github.com/pandas-dev/pandas-stubs"
] |
3,003,117,809 | 61,303 | BUG: NA are coerced in to NaN by concat | open | 2025-04-17T17:51:31 | 2025-04-19T12:29:06 | null | https://github.com/pandas-dev/pandas/issues/61303 | true | null | null | davetapley | 4 | ### Pandas version checks
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [x] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
from pandas import NA, DataFrame, concat
a = DataFrame.from_dict({'val': [1], 'na': [NA]})
b = DataFrame.from_dict({'val': [2], 'na': [NA]})
print("a and b:")
print(a)
print(b)
print("concat:")
ab = concat([a, b])
print(ab)
```
### Issue Description
`NA` are coerced in to `NaN` by `concat`:
```
a and b:
val na
0 1 <NA>
val na
0 2 <NA>
concat:
val na
0 1 NaN
0 2 NaN
```
### Expected Behavior
The `NA` values stay as `NA`.
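As a workaround (this does not fix the coercion itself), casting the column to a nullable extension dtype before concatenating keeps `pd.NA`:

```python
import pandas as pd

# explicit nullable Int64 dtype instead of the default object dtype
a = pd.DataFrame({"val": [1], "na": pd.array([pd.NA], dtype="Int64")})
b = pd.DataFrame({"val": [2], "na": pd.array([pd.NA], dtype="Int64")})
ab = pd.concat([a, b])

print(ab["na"].dtype)  # Int64 — values stay <NA> instead of NaN
```

Equivalently, `a.convert_dtypes()` can be used to reach the nullable dtypes after construction.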
### Installed Versions
<details>
INSTALLED VERSIONS
------------------
commit : 0691c5cf90477d3503834d983f69350f250a6ff7
python : 3.12.2
python-bits : 64
OS : Linux
OS-release : 6.8.0-1021-azure
Version : #25~22.04.1-Ubuntu SMP Thu Jan 16 21:37:09 UTC 2025
machine : x86_64
processor : x86_64
byteorder : little
LC_ALL : None
LANG : C.UTF-8
LOCALE : C.UTF-8
pandas : 2.2.3
numpy : 1.26.4
pytz : 2025.2
dateutil : 2.9.0.post0
pip : 24.0
Cython : None
sphinx : None
IPython : None
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : None
blosc : None
bottleneck : None
dataframe-api-compat : None
fastparquet : None
fsspec : None
html5lib : None
hypothesis : None
gcsfs : None
jinja2 : 3.1.6
lxml.etree : 5.2.2
matplotlib : None
numba : None
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
psycopg2 : None
pymysql : None
pyarrow : 17.0.0
pyreadstat : None
pytest : None
python-calamine : None
pyxlsb : None
s3fs : None
scipy : 1.11.2
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : None
xlsxwriter : None
zstandard : None
tzdata : 2025.2
qtpy : None
pyqt5 : None
</details>
| [
"Bug",
"Missing-data",
"Reshaping",
"Dtype Conversions",
"PDEP missing values"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"I found these related, but none address this specific case:\n\n- https://github.com/pandas-dev/pandas/issues/45637\n- https://github.com/pandas-dev/pandas/issues/46922\n- https://github.com/pandas-dev/pandas/pull/47372",
"Thanks for raising this! Confirmed on main. This happens because `DataFrame.from_dict({'na': [NA]})` gives the `na` column an `object` dtype, so `concat` falls back to NumPy’s object‐array logic, which doesn’t know about `pd.NA` and yields `NaN` instead. To preserve `pd.NA` you can first cast to a true nullable dtype. E.g.\n\n```python\na = pd.DataFrame.from_dict({'val': [1], 'na': pd.array([pd.NA], dtype='Int64')})\nb = pd.DataFrame.from_dict({'val': [2], 'na': pd.array([pd.NA], dtype='Int64')})\n```\n\nThis seems related to PDEP-16.",
"Note that in [documentation](https://pandas.pydata.org/docs/dev/user_guide/missing_data.html#na-semantics): \n\n> Currently, pandas does not use those data types using [NA](https://pandas.pydata.org/docs/dev/reference/api/pandas.NA.html#pandas.NA) by default in a [DataFrame](https://pandas.pydata.org/docs/dev/reference/api/pandas.DataFrame.html#pandas.DataFrame) or [Series](https://pandas.pydata.org/docs/dev/reference/api/pandas.Series.html#pandas.Series), so you need to specify the dtype explicitly. An easy way to convert to those dtypes is explained in the [conversion section](https://pandas.pydata.org/docs/dev/user_guide/missing_data.html#missing-data-na-conversion).\n\nSo you can convert the df like `a = a.convert_dtypes()`.",
"Agreed with @arthurlw and @yuanx749 with the current workaround, but this coercion from NA to np.nan should also not happen. However work on this should wait for [PDEP-16](https://github.com/pandas-dev/pandas/pull/58988)."
] |
3,002,720,511 | 61,302 | Fix #59772: tz_aware NaT raises exception on to_numpy | closed | 2025-04-17T14:46:13 | 2025-05-09T16:06:16 | 2025-05-09T16:06:15 | https://github.com/pandas-dev/pandas/pull/61302 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61302 | https://github.com/pandas-dev/pandas/pull/61302 | tomasmacieira | 1 | Fix error when converting tz-aware Series with NaT to NumPy array
- [x] closes #59772
- [x] [Tests added and passed] if fixing a bug or adding a new feature
- [ ] All [code checks passed]
- [ ] Added [type annotations] to new arguments/methods/functions.
- [x] Added an entry in the latest doc/source/whatsnew/vX.X.X.rst file if fixing a bug or adding a new feature.
Previously, converting a Series of timezone-aware pd.NaT values to a NumPy array using
`.to_numpy("datetime64[ns]")` would raise an exception.
This happened because a tz-aware NaT value could not be converted to datetime64.
This is now fixed by removing the timezone localization from NaT values before the conversion.
| [] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Thanks for the pull request, but it appears to have gone stale. If interested in continuing, please merge in the main branch, address any review comments and/or failing tests, and we can reopen."
] |
3,001,170,908 | 61,301 | DOC: Fix documentation for DataFrameGroupBy.filter and SeriesGroupBy.filter | closed | 2025-04-17T01:57:12 | 2025-04-17T04:32:42 | 2025-04-17T04:32:42 | https://github.com/pandas-dev/pandas/pull/61301 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61301 | https://github.com/pandas-dev/pandas/pull/61301 | adamreeve | 0 | - [x] closes #61300
- [ ] ~~[Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature~~
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] ~~Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.~~
- [ ] ~~Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.~~
| [] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] |
3,001,147,839 | 61,300 | DOC: DataFrameGroupBy.filter documentation is misleading | closed | 2025-04-17T01:35:00 | 2025-04-17T04:34:39 | 2025-04-17T04:34:38 | https://github.com/pandas-dev/pandas/issues/61300 | true | null | null | adamreeve | 1 | ### Pandas version checks
- [x] I have checked that the issue still exists on the latest versions of the docs on `main` [here](https://pandas.pydata.org/docs/dev/)
### Location of the documentation
https://pandas.pydata.org/docs/reference/api/pandas.core.groupby.DataFrameGroupBy.filter.html and https://pandas.pydata.org/docs/reference/api/pandas.core.groupby.SeriesGroupBy.filter.html
### Documentation problem
Both `DataFrameGroupBy.filter` and `SeriesGroupBy.filter` state that they "filter *elements* from groups".
This is not true; these methods filter whole groups. If you attempt to filter individual elements within a group by returning a Series of booleans, you get an error:
```
df = pd.DataFrame({'A' : ['foo', 'bar', 'foo', 'bar',
'foo', 'bar'],
'B' : [1, 2, 3, 4, 5, 6],
'C' : [2.0, 5., 8., 1., 2., 9.]})
df.groupby("A").filter(lambda x: x['B'] > 1).sum()
```
```
TypeError: filter function returned a Series, but expected a scalar bool
```
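For contrast, a sketch of a call that does satisfy the scalar-bool contract — the function returns one bool per group, and whole groups are kept or dropped (the threshold here is made up for illustration):

```python
import pandas as pd

df = pd.DataFrame({"A": ["foo", "bar", "foo", "bar", "foo", "bar"],
                   "B": [1, 2, 3, 4, 5, 6]})

# keep only groups whose mean of B exceeds 3: 'foo' has mean 3, 'bar' has mean 4
out = df.groupby("A").filter(lambda g: g["B"].mean() > 3)
print(out)  # only the 'bar' rows survive, in their original order
```

Note that the rows of the surviving group are returned unchanged, which is the behavior the suggested wording below describes.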
### Suggested fix for documentation
Suggested documentation:
```
Filter groups that don’t satisfy a criterion.
Groups are filtered if they do not satisfy the boolean criterion specified by func.
``` | [
"Docs",
"Needs Triage"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"After looking at this again, I think the documentation does make sense but I was misinterpreting it. This does filter out rows/elements rather than whole groups, but it filters rows based on a criterion that applies to the whole group rather than to rows within the group."
] |
3,000,184,974 | 61,299 | DOC: copyedit _base.py | closed | 2025-04-16T16:37:48 | 2025-04-17T18:07:23 | 2025-04-16T20:41:28 | https://github.com/pandas-dev/pandas/pull/61299 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61299 | https://github.com/pandas-dev/pandas/pull/61299 | wjandrea | 1 | No need to restate the library name
- ~~[ ] closes #xxxx (Replace xxxx with the GitHub issue number)~~
- ~~[ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature~~
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- ~~[ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.~~
- ~~[ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.~~ | [
"Docs"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Thanks @wjandrea "
] |
2,998,393,374 | 61,298 | I would like to join the pandas community, but the Slack link is broken and I cannot join. | closed | 2025-04-16T05:05:35 | 2025-04-17T16:11:22 | 2025-04-17T16:11:22 | https://github.com/pandas-dev/pandas/issues/61298 | true | null | null | atsushi196323 | 3 | I would like to join the pandas community on Slack, so I would appreciate if you could provide me with a new invitation link. | [] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Thanks @atsushi196323 for your interest.\n\nsomeone from the Contributor Community Team should be able to help with your request.\n\ncc @rhshadrach @jorisvandenbossche @noatamir ",
"@atsushi196323 The Slack link is available [here](https://join.slack.com/t/pandas-dev-community/shared_invite/zt-2blg6u9k3-K6_XvMRDZWeH7Id274UeIg), and you can also find it in the [documentation](https://pandas.pydata.org/docs/dev/development/community.html#community-slack). Could you let me know where you encountered the broken link?",
"@chilin0525 The link that was displayed when I searched with Claude was broken, but it has been resolved now. Thank you for providing me with the link!"
] |
2,998,204,470 | 61,297 | BUG: Raise clear error when assign is used with non-string column keys | closed | 2025-04-16T03:02:15 | 2025-04-16T16:08:03 | 2025-04-16T16:08:03 | https://github.com/pandas-dev/pandas/pull/61297 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61297 | https://github.com/pandas-dev/pandas/pull/61297 | AnushaUKumar | 1 | ### What does this PR do?
Fixes confusing behavior when `.assign()` is used with non-string keys like tuples. Python raises a generic `TypeError: keywords must be strings`, which confuses users.
This PR adds a more helpful message:
> assign() only supports string column names. Use df[('C', 'one')] = ... to assign non-string column names like tuples.
### Why?
This improves the developer experience and avoids unnecessary debugging time.
### How was this fixed?
- Added `__kwargs_dict__` as a keyword-only argument for testing with tuple keys
- Added validation in `assign()` for string-only keys
- Updated existing test for compatibility
- Added new test: `test_assign_with_tuple_column_key_raises_typeerror`
### Test added
`test_assign_with_tuple_column_key_raises_typeerror`
| [] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Thanks for the PR, but it appears there are a lot of changes in unrelated files. Going to close this PR, but ensure that pull requests are made from the latest commit on main"
] |
2,997,926,068 | 61,296 | BUG: Underscores aren't escaped in LaTeX outputs | closed | 2025-04-15T23:52:45 | 2025-04-15T23:55:11 | 2025-04-15T23:55:10 | https://github.com/pandas-dev/pandas/issues/61296 | true | null | null | gtkacz | 1 | ### Pandas version checks
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [x] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
import pandas as pd
df = pd.DataFrame([{"header_1": 1, "header_2": 2}, {"header_1": 3, "header_2": 4}])
print(df.to_latex(index=False))
```
### Issue Description
Underscores (`_`) in strings should be escaped in LaTeX outputs, given that underscores in LaTeX introduce subscripts.
### Expected Behavior
The expected output of the provided code example _should_ be:
```latex
\begin{tabular}{rr}
\toprule
header\_1 & header\_2 \\
\midrule
1 & 2 \\
3 & 4 \\
\bottomrule
\end{tabular}
```
But instead is:
```latex
\begin{tabular}{rr}
\toprule
header_1 & header_2 \\
\midrule
1 & 2 \\
3 & 4 \\
\bottomrule
\end{tabular}
```
### Installed Versions
<details>
INSTALLED VERSIONS
------------------
commit : 0691c5cf90477d3503834d983f69350f250a6ff7
python : 3.12.7
python-bits : 64
OS : Linux
OS-release : 5.15.167.4-microsoft-standard-WSL2
Version : #1 SMP Tue Nov 5 00:21:55 UTC 2024
machine : x86_64
processor : x86_64
byteorder : little
LC_ALL : None
LANG : C.UTF-8
LOCALE : C.UTF-8
pandas : 2.2.3
numpy : 2.2.4
pytz : 2025.2
dateutil : 2.9.0.post0
pip : None
Cython : None
sphinx : None
IPython : 9.1.0
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : 4.13.4
blosc : None
bottleneck : None
dataframe-api-compat : None
fastparquet : None
fsspec : 2024.12.0
html5lib : None
hypothesis : None
gcsfs : None
jinja2 : 3.1.6
lxml.etree : None
matplotlib : 3.10.1
numba : None
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
psycopg2 : None
pymysql : None
pyarrow : 19.0.1
pyreadstat : None
pytest : None
python-calamine : None
pyxlsb : None
s3fs : None
scipy : 1.15.2
sqlalchemy : None
tables : None
tabulate : 0.9.0
xarray : None
xlrd : None
xlsxwriter : None
zstandard : None
tzdata : 2025.2
qtpy : None
pyqt5 : None
</details>
| [
"Bug",
"Needs Triage"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"I found the `escape` bool keyword on the `to_latex` method, which fixes this. This should be enabled by default though."
] |
2,997,798,963 | 61,295 | BUG: df.assign no longer works with multilevel columns | closed | 2025-04-15T22:38:25 | 2025-04-16T13:40:52 | 2025-04-16T13:40:29 | https://github.com/pandas-dev/pandas/issues/61295 | true | null | null | Zendaug | 4 | ### Pandas version checks
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [x] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
import pandas as pd
import numpy as np
# Creating a DataFrame with multilevel columns
arrays = [['A', 'A', 'B', 'B'], ['one', 'two', 'one', 'two']]
tuples = list(zip(*arrays))
index = pd.MultiIndex.from_tuples(tuples, names=['first', 'second'])
df = pd.DataFrame(np.random.randn(3, 4), columns=index)
# Try to create a new column using "assign"
df = df.assign(**{('C', 'one'): [1, 2, 3]})
```
### Issue Description
The final line raises `TypeError: keywords must be strings`.
### Expected Behavior
It should create a new column ("C", 'one') with 1,2,3 in it.
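Since `assign` takes `**kwargs` and Python keyword names must be strings, a sketch of the direct item-assignment workaround, which does accept tuple labels:

```python
import numpy as np
import pandas as pd

arrays = [["A", "A", "B", "B"], ["one", "two", "one", "two"]]
index = pd.MultiIndex.from_tuples(list(zip(*arrays)), names=["first", "second"])
df = pd.DataFrame(np.random.randn(3, 4), columns=index)

# item assignment accepts tuple labels, unlike **kwargs in assign()
df[("C", "one")] = [1, 2, 3]
print(df[("C", "one")].tolist())  # [1, 2, 3]
```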
### Installed Versions
<details>
INSTALLED VERSIONS
------------------
commit : 0691c5cf90477d3503834d983f69350f250a6ff7
python : 3.12.3
python-bits : 64
OS : Windows
OS-release : 11
Version : 10.0.26100
machine : AMD64
processor : Intel64 Family 6 Model 154 Stepping 4, GenuineIntel
byteorder : little
LC_ALL : None
LANG : en
LOCALE : English_Australia.1252
pandas : 2.2.3
numpy : 1.26.4
pytz : 2024.1
dateutil : 2.9.0.post0
pip : 25.0.1
Cython : None
sphinx : 7.3.7
IPython : 8.30.0
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : 4.12.3
blosc : None
bottleneck : 1.4.2
dataframe-api-compat : None
fastparquet : None
fsspec : None
html5lib : None
hypothesis : None
gcsfs : None
jinja2 : 3.1.6
lxml.etree : 5.3.0
matplotlib : 3.10.0
numba : None
numexpr : 2.10.1
odfpy : None
openpyxl : 3.1.5
pandas_gbq : None
psycopg2 : None
pymysql : None
pyarrow : 19.0.0
pyreadstat : 1.2.7
pytest : None
python-calamine : None
pyxlsb : None
s3fs : None
scipy : 1.15.2
sqlalchemy : None
tables : None
tabulate : 0.9.0
xarray : None
xlrd : None
xlsxwriter : 3.1.1
zstandard : None
tzdata : 2023.3
qtpy : 2.4.1
pyqt5 : None
</details>
| [
"Bug"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"This is a python limitation. You're using tuple unpacking to pass in arguments to a function. Function argument names have to be valid variable names. https://stackoverflow.com/questions/65392503/keyword-error-generated-when-passing-a-dictionary-to-a-function-with-tuples-as-t",
"Thanks for raising this! The `assign` function accepts `**kwargs` which only allows string keys, so tuple-based MultiIndex labels can’t be passed this way. \n\nhttps://pandas.pydata.org/docs/reference/api/pandas.DataFrame.assign.html\n\nTechnically speaking this isn't a pandas bug, but maybe pandas could consider supporting tuple keys?",
"> pandas could improve the error message\n\nthis is not possible (with the current signature), the error is thrown directly by the python interpreter.\n\n```py\nIn [1]: def foo(**kwargs):\n ...: print(kwargs)\n ...:\n\nIn [2]: foo(**{('C', 'one'): 2})\n---------------------------------------------------------------------------\nTypeError Traceback (most recent call last)\nCell In[1], line 1\n----> 1 foo(**{('C', 'one'): 2})\n\nTypeError: keywords must be strings\n```\n\n",
"Agreed @asishm - nothing can be done here. Closing."
] |
2,997,386,095 | 61,294 | BUG: join unexpectedly created extra column start with "key_" | open | 2025-04-15T19:25:06 | 2025-08-22T00:40:01 | null | https://github.com/pandas-dev/pandas/issues/61294 | true | null | null | albb318 | 13 | ### Pandas version checks
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [x] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
import pandas as pd
a = pd.DataFrame({0:[1,2,3]})
b = pd.DataFrame(index=[1,2,3],data={0:[4,5,6]})
a.join(b, on=0, rsuffix='_test')
```
### Issue Description
When DataFrames a and b have the same column name, a `key_0` column is created unexpectedly after the join operation.
```
   key_0  0  0_test
0      1  1       4
1      2  2       5
2      3  3       6
```
### Expected Behavior
Expecting a result without the `key_0` column.
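A later comment in this thread traces the behavior to a truthiness check on the column name in `pandas/core/reshape/merge.py`; the pitfall can be reproduced in plain Python (the variable names here are illustrative):

```python
# `0` is a perfectly valid column label, but it is falsy,
# so a `name or default` pattern silently replaces it with the default
i = 0
name = 0  # integer column label used as the join key

buggy = name or f"key_{i}"                         # "key_0": the label is lost
fixed = name if name is not None else f"key_{i}"   # 0: the label is preserved

print(buggy, fixed)  # key_0 0
```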
### Installed Versions
<details>
pandas : 2.2.3
numpy : 1.26.4
</details>
| [
"Bug",
"Reshaping",
"Deprecate"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"This seems to be the expected behavior if you look at the [documentation examples](https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.join.html). Moreover, it seems that is why we pass lsuffix and rsuffix. Please let me know if I am missing something.",
"In this case the \"key_0\" column seems redundant, as the logic is straightforward: DataFrame a's column 0 is the target column for the join operation; although DataFrame b also has column 0, it can just be renamed to 0_test (as specified by rsuffix) and joined by its index. The expected result is the following:\n\n 0 0_test\n0 1 4\n1 2 5\n2 3 6\n\nInterestingly, the behavior is different with string column names; see the following example:\n\n```\na = pd.DataFrame({'a':[1,2,3]})\nb = pd.DataFrame(index=[1,2,3],data={'a':[4,5,6]})\na.join(b, on='a', rsuffix='_test')\n```\n\nwhich gives:\n\n a a_test\n0 1 4\n1 2 5\n2 3 6\n",
"take",
"Thanks for clarifying! I think I have found the issue. The join function is basically a wrapper for the merge function, so downstream in `pandas/core/reshape/merge.py` there is a line of code that does\n```python\nresult.insert(i, name or f\"key_{i}\", key_col)\n```\nThis inserts the key column into the DataFrame under either the passed name or \"key_{i}\". The intent was to catch the case where name is None. But because `0` is falsy, while every other integer is truthy, columns named `0` fall through to the \"key_{i}\" branch. This seems like an oversight from the person who implemented this.\n\nThe fix would be\n```python\nresult.insert(i, name if name is not None else f\"key_{i}\", key_col)\n```\n\nHowever, this is not the actual issue. The issue comes a bit earlier and is specifically due to how join passes the suffix \"\" to merge instead of None. As a result, if any of the column names is an integer it does not join like its string counterpart; instead you get two columns: one an integer and one a string.\n\nThere are two solutions to this. One would be to define `lsuffix` and `rsuffix` as `None` instead of `\"\"`. The other is changing the `renamer` function to not touch the column if the suffix string is empty.\n\nBoth have some minor backward compatibility implications: the suffix change would be localized to just the join function, while the latter would affect both merge and join. However, it seems pretty uncommon to add an empty suffix to a merge operation and even less common to use integer columns. And changing join's default to None would be more in line with users' expectations of the .join function.\n\nThus, changing the join signature would impact backwards compatibility the least, ensuring developers using merge with empty strings keep their functionality while aligning join with what happens when a string column name is used.",
"> One would be to instead of defining `lsuffix` and `rsuffix` to be `\"\"` it should be `None`.\n\n+1. In addition we should document in these args that when either is specified, any non-string columns will be converted to strings before applying the suffix. ",
"@rhshadrach I'm not sure how I should go about this. Should I raise a `FutureWarning` and leave the default value as is. Or, should I change the default value to `None` and put up a `DepreciationWarning`? ",
"@ShayanG9 - it should start as a `DeprecationWarning`. You will likely need to change the default value `lib.no_default` to be able to tell if the user is passing `None` or not. You can search the code for other uses of `lib.no_default` as examples.",
"@ShayanG9 can I work on this issue?",
"@KevsterAmp Go right ahead it should be an easy fix. I've been really busy lately. ",
"Thanks",
"take",
"take",
"I tried fixing it, but seeing many failure in docs due to the new warning, I was not able to get them fixed"
] |
2,997,194,301 | 61,293 | BUG: pivot_table with overlapping values | closed | 2025-04-15T17:56:43 | 2025-04-23T22:58:54 | 2025-04-23T21:47:17 | https://github.com/pandas-dev/pandas/pull/61293 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61293 | https://github.com/pandas-dev/pandas/pull/61293 | it176131 | 6 | - [x] closes #57876
- [x] closes #61292
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [x] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| [
"Bug",
"Reshaping"
] | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Should I update the `doc/source/whatsnew/v2.3.0.rst` or `doc/source/whatsnew/v3.0.0.rst` file?",
"@it176131 - update v3.0.0. Also due to an erroneous commit with a large file, we've had to force-push to the main branch recently. You'll need to fix this up by removing the erroneous commit from your history as well. See the unexpected changes and file in the diff - those need to be reverted.\r\n\r\n@datapythonista - if we're going to force-push to a contributors branch, I think a comment detailing what's going on would be helpful.",
"@rhshadrach I believe I've reverted the changes as requested. Please lmk if I did it incorrectly.\r\n",
"Oh, so GitHub added the image we removed to the open PRs? That explains why it was in that PR...",
"@rhshadrach [this check](https://github.com/pandas-dev/pandas/actions/runs/14518139891/job/40732136143?pr=61293) appears to have failed due to a timeout—is there a way to restart it?",
"Thanks @it176131 "
] |
2,996,968,254 | 61,292 | BUG: `values` argument ignored when also supplied to `index`/`columns` in `pivot_table` | closed | 2025-04-15T16:18:14 | 2025-04-23T21:47:19 | 2025-04-23T21:47:19 | https://github.com/pandas-dev/pandas/issues/61292 | true | null | null | it176131 | 0 | ### Pandas version checks
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [x] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
import numpy as np
import pandas as pd
from pandas import Index, MultiIndex
import pandas.testing as tm
def test_pivot_table_values_in_columns():
"""``values`` arg is shared between ``values`` and ``columns``."""
data = [
["A", 1, 50, -1],
["B", 1, 100, -2],
["A", 2, 100, -2],
["B", 2, 200, -4],
]
df = pd.DataFrame(data=data, columns=["index", "col", "value", "extra"])
result = df.pivot_table(values="value", index="index", columns=["col", "value"])
nan = np.nan
e_data = [
[50.0, nan, 100.0, nan],
[nan, 100.0, nan, 200.0],
]
e_index = Index(data=["A", "B"], name="index")
e_cols = MultiIndex.from_arrays(
arrays=[[1, 1, 2, 2], [50, 100, 100, 200]], names=["col", "value"]
)
expected = pd.DataFrame(data=e_data, index=e_index, columns=e_cols)
tm.assert_frame_equal(left=result, right=expected)
def test_pivot_table_values_in_index():
"""``values`` arg is shared between ``values`` and ``index``."""
data = [
["A", 1, 50, -1],
["B", 1, 100, -2],
["A", 2, 100, -2],
["B", 2, 200, -4],
]
df = pd.DataFrame(data=data, columns=["index", "col", "value", "extra"])
result = df.pivot_table(values="value", index=["index", "value"], columns="col")
nan = np.nan
e_data = [
[50.0, nan],
[nan, 100.0],
[100.0, nan],
[nan, 200.0],
]
e_index = MultiIndex.from_arrays(
arrays=[["A", "A", "B", "B"], [50, 100, 100, 200]], names=["index", "value"]
)
e_cols = Index(data=[1, 2], name="col")
expected = pd.DataFrame(data=e_data, index=e_index, columns=e_cols)
tm.assert_frame_equal(left=result, right=expected)
test_pivot_table_values_in_columns() # Fails.
test_pivot_table_values_in_index() # Fails.
```
### Issue Description
When the column supplied to `values` in [`pandas.DataFrame.pivot_table`](https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.pivot_table.html) is also supplied to `index` or `columns`, the resulting `DataFrame` does not contain the aggregations of the `values` argument. If any extra column(s) are present, those columns are aggregated instead of those supplied to `values`. This is similar to issue #57876, but the additional columns result in a non-empty `DataFrame`.
### Expected Behavior
I would expect the two tests above to pass, i.e., the `values` arg is aggregated instead of the non-supplied "extra" column.
```python
# Expected output of ``test_pivot_table_values_in_columns``:
col 1 2
value 50 100 100 200
index
A 50.0 NaN 100.0 NaN
B NaN 100.0 NaN 200.0
```
```python
# Expected output of ``test_pivot_table_values_in_index``:
col 1 2
index value
A 50 50.0 NaN
100 NaN 100.0
B 100 100.0 NaN
200 NaN 200.0
```
### Installed Versions
<details>
INSTALLED VERSIONS
------------------
commit : 0691c5cf90477d3503834d983f69350f250a6ff7
python : 3.13.3
python-bits : 64
OS : Windows
OS-release : 11
Version : 10.0.22631
machine : AMD64
processor : AMD64 Family 25 Model 116 Stepping 1, AuthenticAMD
byteorder : little
LC_ALL : None
LANG : None
LOCALE : English_United States.1252
pandas : 2.2.3
numpy : 2.2.4
pytz : 2025.2
dateutil : 2.9.0.post0
pip : 25.0.1
Cython : None
sphinx : None
IPython : 9.1.0
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : 4.13.3
blosc : None
bottleneck : None
dataframe-api-compat : None
fastparquet : None
fsspec : None
html5lib : None
hypothesis : None
gcsfs : None
jinja2 : 3.1.6
lxml.etree : None
matplotlib : None
numba : None
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
psycopg2 : None
pymysql : None
pyarrow : None
pyreadstat : None
pytest : 8.3.5
python-calamine : None
pyxlsb : None
s3fs : None
scipy : 1.15.2
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : None
xlsxwriter : None
zstandard : None
tzdata : 2025.2
qtpy : None
pyqt5 : None
</details>
| [
"Bug",
"Reshaping"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] |
2,995,166,299 | 61,291 | BUG: reindex (and atleast several other methods) do not respect fill_value=None | open | 2025-04-15T05:24:25 | 2025-04-16T13:27:14 | null | https://github.com/pandas-dev/pandas/issues/61291 | true | null | null | TimAllen576 | 1 | ### Pandas version checks
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [x] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
import pandas as pd
df = pd.DataFrame({'1'})
new_df = df.reindex(range(2), fill_value=None)
print(new_df,'\n')
data = [None, complex(1)]
arr2 = pd.core.construction.array(data)
indexer = [0, -1]
new_arr = arr2.take(indexer, allow_fill=True, fill_value=None)
print(arr2)
print(new_arr)
```
### Issue Description
The example shows the user-facing bug (placing `NaN` into the df instead of `None`) and the underlying `ExtensionArray.take` method (with a more contrived example to ensure the types are valid), which is a significant cause of the behaviour.
This is less of a bug and more a discussion on default behaviour in pandas. It is impossible to "fill" a value of `None` in an already existing dataframe/array as far as I can see in pandas, although `None` is often the default value used to fill during dataframe creation[^1]. The reason for this behaviour seems to run very deeply and shows active choices by implementors to have this behaviour, see: example with `arr.take` (with an implementors note which I did not completely understand), also `BaseArrayManager._make_na_array` and the optimised code for `NDFrame._reindex_multi`.
This problem seems to have been almost seen but brushed past in #20640 where `fill_value` is recognised to not always be honored due to compatibility with the underlying arrays. However, as shown, there are also cases where `None` is a valid entry in the array which are blanket ignored (even though I'd argue that the array type should possibly be adjusted to fit the fill value). Some complexity also comes from there not being a `None` in C; however, I strongly dislike this argument as there seem to be stand-ins/ways to work with them throughout the rest of the library. Note I have not dug deeply into the C/Cython code myself.
In closing, I think `None` should be able to be used as a `fill_value` when reindexing, but it is currently intentionally and unnecessarily mangled into `np.nan` at many levels. Discussion is strongly encouraged as any actual changes WOULD subtly change existing behaviour, and at some point I just want to understand why these choices were made.
[^1]: Which is where I ran into this problem, creating a dataframe of strings with holes, then trying to consistently reshape it. Of course there are plenty of ways for me to circumvent this problem but I want to demonstrate this is not arbitrary/irrelevant.
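For discussion's sake, the semantics argued for above can be sketched as a plain-Python `take`: `-1` marks a hole, and the hole receives `fill_value` verbatim, including `None`. This is a hypothetical helper for illustration only, not the pandas `ExtensionArray` API.

```python
# Sketch (not pandas API): honor fill_value=None instead of coercing it to NaN.
_SENTINEL = object()  # so fill_value=None is an explicit choice, not "unset"

def take_with_fill(values, indexer, fill_value=_SENTINEL):
    """Take positions from ``values``; -1 positions get ``fill_value`` verbatim."""
    out = []
    for i in indexer:
        if i == -1:
            if fill_value is _SENTINEL:
                raise ValueError("no fill_value given for missing position")
            out.append(fill_value)  # None stays None here, no NaN substitution
        else:
            out.append(values[i])
    return out

# Mirrors the arr2.take(indexer, allow_fill=True, fill_value=None) call above.
filled = take_with_fill([None, complex(1)], [0, -1], fill_value=None)
```

Under these semantics the result would be `[None, None]`, matching the expected behavior below.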
### Expected Behavior
\
0
0 1
1 None
\<NumpyExtensionArray\>
[None, (1+0j)]
Length: 2, dtype: object
\<NumpyExtensionArray\>
[None, None]
Length: 2, dtype: object
### Installed Versions
<details>
INSTALLED VERSIONS
------------------
commit : 0691c5cf90477d3503834d983f69350f250a6ff7
python : 3.12.9
python-bits : 64
OS : Windows
OS-release : 11
Version : 10.0.26100
machine : AMD64
processor : AMD64 Family 25 Model 97 Stepping 2, AuthenticAMD
byteorder : little
LC_ALL : None
LANG : None
LOCALE : English_New Zealand.1252
pandas : 2.2.3
numpy : 2.1.0
pytz : 2024.1
dateutil : 2.9.0.post0
pip : 25.0.1
Cython : None
sphinx : None
IPython : None
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : 4.13.3
blosc : None
bottleneck : None
dataframe-api-compat : None
fastparquet : None
fsspec : None
html5lib : None
hypothesis : None
gcsfs : None
jinja2 : None
lxml.etree : None
matplotlib : 3.9.2
numba : None
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
psycopg2 : None
pymysql : None
pyarrow : None
pyreadstat : None
pytest : None
python-calamine : None
pyxlsb : None
s3fs : None
scipy : 1.14.1
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : None
xlsxwriter : None
zstandard : None
tzdata : 2024.1
qtpy : None
pyqt5 : None
</details>
| [
"Bug",
"Missing-data",
"Needs Discussion",
"PDEP missing values"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Thanks for the report. I expect this to be a part of [PDEP-16](https://github.com/pandas-dev/pandas/pull/58988) when the proposal is finished. Namely, I think with switching over to `pd.NA` as the NA value across dtypes, we can then treat `None` as just the Python object it is and not an NA value for non-object dtypes (whether it will be regarded as an NA value in object dtype seems less clear to me). However the core team is reluctant on making any changes without the full proposal.\n\nLooking over issues tagged with PDEP-16, I'm not seeing any mentioning `None` behavior so leaving this open."
] |
2,994,511,655 | 61,290 | ENH: read_xml() does not allow to specify huge_tree=True for the 'lxml' parser. | open | 2025-04-14T23:25:47 | 2025-07-06T15:52:30 | null | https://github.com/pandas-dev/pandas/issues/61290 | true | null | null | sergiykhan | 1 | ### Feature Type
- [x] Adding new functionality to pandas
- [ ] Changing existing functionality in pandas
- [ ] Removing existing functionality in pandas
### Problem Description
`read_xml()` fails with the error message `XMLSyntaxError: xmlSAX2Characters: huge text node`.
A similar problem can be overcome when manually parsing the tree like so:
```
from lxml import etree
with open(filename) as f:
tree = etree.parse(f, etree.XMLParser(huge_tree=True))
```
### Feature Description
I am not sure what the best way to supply options to the parser would be.
### Alternative Solutions
Right now, I have to read the file using the 'etree' parser like so
```
df = pd.read_xml(
filename,
parser='etree',
)
```
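The maintainer suggestion of parsing directly with the upstream library and then building the DataFrame can be sketched like this. The example below uses the stdlib parser so it is self-contained; with lxml you would pass `etree.XMLParser(huge_tree=True)` (and optionally `recover=True`) instead. The `<row>`/field tag names are made up for illustration.

```python
# Stream rows out of a (potentially huge) flat XML document, one dict per row.
# With lxml, swap in: lxml.etree.iterparse(source, huge_tree=True, recover=True)
import io
import xml.etree.ElementTree as ET

def rows_from_xml(source, row_tag="row"):
    """Collect {tag: text} dicts for each <row>, freeing subtrees as we go."""
    records = []
    for _event, elem in ET.iterparse(source, events=("end",)):
        if elem.tag == row_tag:
            records.append({child.tag: child.text for child in elem})
            elem.clear()  # drop the parsed subtree so memory stays bounded
    return records

xml = b"<data><row><a>1</a><b>x</b></row><row><a>2</a><b>y</b></row></data>"
records = rows_from_xml(io.BytesIO(xml))
# pd.DataFrame(records) would then give the two-column frame.
```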
### Additional Context
Similarly, the following option could be passed to the parser `recover=True`. | [
"Enhancement",
"IO XML"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Thank you for your report. My thought is that `read_xml` provides a convenience method to parse shallow, flatter XML to DataFrames. And since XML is an open-ended type that can range in dimensions and DataFrames require the two-dimension types, `read_xml` is not meant to cover exceptional cases like you point out. Also, as you show `lxml` provides much more functionality to parse any kind of XML.\n\nPossibly, we can incorporate a `kwargs` implementation for users to pass arguments to third party connectors? But this may detract from the practice with other IO tools. Then maintenance and testing can be a concern since `kwargs` will allow open-ended number of arguments.\n\nFor special cases, consider directly using the upstream (`lxml`) package to parse XML. Then, retrieve content in lists, dicts. etc. to pass on to DataFrames."
] |
2,993,032,883 | 61,289 | WEB: Update benchmarks page | closed | 2025-04-14T13:10:24 | 2025-04-19T12:12:34 | 2025-04-19T11:47:47 | https://github.com/pandas-dev/pandas/pull/61289 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61289 | https://github.com/pandas-dev/pandas/pull/61289 | rhshadrach | 1 | - [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| [
"Benchmark",
"Web"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"@datapythonista This is ready for another look."
] |
2,992,890,289 | 61,288 | BUG: Fix #46726; wrong result with varying window size min/max rolling calc. | closed | 2025-04-14T12:15:42 | 2025-04-25T20:28:31 | 2025-04-25T20:28:19 | https://github.com/pandas-dev/pandas/pull/61288 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61288 | https://github.com/pandas-dev/pandas/pull/61288 | viable-alternative | 3 | - [x] closes #46726
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests)
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [x] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [x] Speed improved by ~10% as measured by ASV.
- [x] added an entry in the latest `doc/source/whatsnew/v3.0.0.rst`.
### Summary:
- Fixes a 3-year-old bug with incorrect min/max rolling calculation for custom window sizes. Adds an error check for invalid inputs.
- Speed improvement of ~10% by not using additional queue for handling NaNs.
- For complex cases, incurs additional multiplicative log(k) complexity (where k is the max window size), but this cost is only incurred in cases that produced an invalid result before. For example, for a constant window size this cost is not incurred.
### Changed behavior:
Adds an additional validity check, which will raise a ValueError if the function detects a condition it cannot work with, namely improper ordering of start/end bounds. The existing method would happily consume such input and would produce a wrong result. There is a new unit test to check for the raised ValueError.
### Note on invalid inputs:
It is possible to make the method work for an arbitrary stream of start/end window bounds, but it will require sorting. It is very unlikely that such work is worth the effort, and it is estimated to have extremely low need, if any. Let someone create an enhancement request first.
If sorting is to be implemented: it can be done while only incurring a performance hit in the case of unsorted input: copy and sort the start/end arrays, producing a permutation, run the main method on the copy, and then extract the result back using the permutation. Detecting whether the start/end array pair is properly sorted will only take O(N). (Sorting is N*log(N), does not have to be stable, but the input array is extremely likely to be “almost” sorted, and you have to pick your poison of a sorting method that works well with nearly sorted arrays, or use efficient sorting methods, most of which do not offer additional speed on nearly sorted arrays.) Working such an intermediate step (without copying and pasting) into 3 different implementations will require some less than straightforward work in the “apply” family of methods used by other rolling functions, and therefore will bear risk. If this is decided to be done, it is recommended to have an additional parameter to optionally skip the “sorted” check. (The user may already know that the arrays are properly sorted).
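For intuition, the classic monotonic-deque idea that sliding min/max kernels build on, written here for arbitrary (but properly ordered) start/end bounds, looks like this. This is a pure-Python illustration only, not the Cython/numba code in this PR, and it skips the NaN handling discussed above.

```python
# Monotonic-deque sliding max over per-position [start, end) window bounds.
from collections import deque

def sliding_max(values, starts, ends):
    out, q, i = [], deque(), 0  # q holds candidate indices; their values decrease
    for s, e in zip(starts, ends):
        while i < e:  # admit new elements entering the window
            while q and values[q[-1]] <= values[i]:
                q.pop()  # drop candidates dominated by the newcomer
            q.append(i)
            i += 1
        while q and q[0] < s:
            q.popleft()  # evict indices that fell out of the window
        out.append(values[q[0]] if q else None)
    return out

result = sliding_max([3, 1, 4, 1, 5], starts=[0, 0, 1, 2, 3], ends=[1, 2, 3, 4, 5])
# -> [3, 3, 4, 4, 5]
```

Each index enters and leaves the deque at most once, giving amortized O(1) per output, which is why improperly ordered bounds (handled by the new ValueError) would silently break the invariant.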
### How to Debug numba
You can temporarily change 2 lines of code in order to Python-debug numba implementation with VS Code or another Python debugger:
- Comment out the `numba.jit` decorator on the function (`sliding_min_max()` in `min_max_.py`).
- Do the same with the `column_looper()` function defined inside the `generate_apply_looper()` function in **executor.py**.
- Your breakpoint inside the function will now hit!
### Misc Notes
The speed improvement of ~10% was confirmed in two ways:
- As measured by pandas’ supplied asv benchmark suite (0.80-0.91 coefficient (depending on particular test) on my hardware).
- With a custom load test over a 2MM-long rolling window on a 300MM-long data set. (See the supplied [bench.py.txt](https://github.com/user-attachments/files/19735182/bench.py.txt).) A single run of the test takes approx. 6-8 seconds and consumes ~15GB of RAM on a 32-GB RAM PC.
| [
"Bug",
"Window"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"@mroeschke, all done!\r\nEdited my original comment to reflect what is actually being done. Thx!\r\n(CI is still in progress. I will fix it up if anything fails.)",
"Thank you for the thoughtful review. Give me some time (hopefully under a week) to read up on PEP8 and address other comments. Thanks for the helpful naming suggestions: only takes a minute, and saves me hours of time digging too deep into PEP8.",
"Thanks @viable-alternative!"
] |
2,992,569,954 | 61,287 | PERF: Restore old performances with .isin() on columns typed as np.ui… | closed | 2025-04-14T10:07:07 | 2025-04-14T18:44:21 | 2025-04-14T18:44:21 | https://github.com/pandas-dev/pandas/pull/61287 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61287 | https://github.com/pandas-dev/pandas/pull/61287 | pbrochart | 0 | …nt64
- [ ] closes #60098 (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| [] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] |
2,992,400,724 | 61,286 | ENH: Update DataFrame.to_stata to handle pd.NA and None values in strL columns | closed | 2025-04-14T09:08:20 | 2025-04-23T08:17:34 | 2025-04-22T16:02:29 | https://github.com/pandas-dev/pandas/pull/61286 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61286 | https://github.com/pandas-dev/pandas/pull/61286 | Danferno | 1 | - [x] closes #23633 (Replace xxxx with the GitHub issue number)
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [x] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| [
"IO Stata"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Thanks @Danferno "
] |
2,992,092,293 | 61,285 | DOC: Improve clarity and beginner-friendly tone in table tutorial | closed | 2025-04-14T07:04:32 | 2025-04-14T19:02:13 | 2025-04-14T16:53:24 | https://github.com/pandas-dev/pandas/pull/61285 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61285 | https://github.com/pandas-dev/pandas/pull/61285 | VenkataSiriPriya | 1 | This PR makes small improvements to the 01_table_oriented.rst tutorial in the getting_started/intro_tutorials section. The changes include:
Simplified explanations for importing pandas and creating DataFrames.
Improved grammar and sentence clarity.
Reworded the Series and describe() method descriptions to be more beginner-friendly.
These edits aim to enhance readability and help first-time users better understand pandas concepts.
| [] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Thanks for the PR, but I do not think these changes substantially improve the existing text so going to close this PR. It's best to open PRs that related to an open issue that has been triaged"
] |
2,992,070,475 | 61,284 | Doc-groupby-ewm | closed | 2025-04-14T06:53:34 | 2025-04-14T16:55:00 | 2025-04-14T16:54:59 | https://github.com/pandas-dev/pandas/pull/61284 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61284 | https://github.com/pandas-dev/pandas/pull/61284 | ShauryaDusht | 1 | - [x] closes #61268 (Adding documentation for `groupby.ewm()`)
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [x] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature. | [] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Thanks for the PR but this is already being addressed in https://github.com/pandas-dev/pandas/pull/61283 so closing as already being worked on"
] |
2,991,915,085 | 61,283 | DOC: Add documentation for `groupby.ewm()` | closed | 2025-04-14T05:24:05 | 2025-04-14T20:05:42 | 2025-04-14T20:05:35 | https://github.com/pandas-dev/pandas/pull/61283 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61283 | https://github.com/pandas-dev/pandas/pull/61283 | arthurlw | 1 | - [x] closes #61268
- [ ] ~[Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature~
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] ~Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.~
- [ ] ~Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.~
| [
"Docs",
"Window"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Thanks @arthurlw "
] |
2,991,767,360 | 61,282 | ENH: pd.DataFrame.from_dict() should support loading columns of varying lengths | open | 2025-04-14T03:17:54 | 2025-04-15T05:56:41 | null | https://github.com/pandas-dev/pandas/issues/61282 | true | null | null | nikhilweee | 9 | ### Feature Type
- [ ] Adding new functionality to pandas
- [x] Changing existing functionality in pandas
- [ ] Removing existing functionality in pandas
### Problem Description
Creating a dataframe from a dictionary with columns of varying lengths is not supported.
As of pandas 2.2.3, the following snippet results in `ValueError: All arrays must be of the same length`
```py
df = pd.DataFrame.from_dict({"col1": [1, 2, 3], "col2": [4, 5]})
```
### Feature Description
Pandas should automatically pad columns as necessary to make sure they are the same length. Especially because that's the behavior when the `orient` argument is set to `index`. The following works perfectly fine.
```py
df = pd.DataFrame.from_dict({"col1": [1, 2, 3], "col2": [4, 5]}, orient="index")
```
### Alternative Solutions
Since pandas already supports rows of varying lengths when the `orient` argument is set to `index`, to load a dictionary where not all columns are the same length, an alternative solution would be to set `orient` to `index` and transpose the resulting dataframe.
```py
df = pd.DataFrame.from_dict({"col1": [1, 2, 3], "col2": [4, 5]}, orient='index').T
```
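Another manual workaround today is to pad the shorter columns by hand before constructing the frame, which is essentially what the requested behaviour would do internally. A minimal sketch (hypothetical helper name):

```python
# Pad every column to the longest length so the default orient="columns" works.
def pad_columns(data, fill_value=None):
    """Return a copy of ``data`` whose lists are padded to the longest column."""
    width = max(len(v) for v in data.values())
    return {k: list(v) + [fill_value] * (width - len(v)) for k, v in data.items()}

padded = pad_columns({"col1": [1, 2, 3], "col2": [4, 5]})
# pd.DataFrame.from_dict(padded) now succeeds with the default orient="columns".
```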
### Additional Context
Since there is a discrepancy in the way pandas handles loading dictionaries based on the value of the `orient` argument, it would be great to have parity between the two. | [
"Enhancement",
"Needs Triage"
] | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Hi! I’d like to work on this.\nI propose adding an optional `autopad` parameter to `from_dict()` that pads shorter columns \nwith a `fill_value` (default `np.nan`). This keeps existing behavior unchanged unless explicitly enabled.\n ```python\n df = pd.DataFrame.from_dict(\n {\"col1\": [1, 2, 3], \"col2\": [4, 5]},\n autopad=True,\n fill_value=np.nan\n )\n ```\nLet me know if this approach sounds good!\n\n",
"take",
"@ShauryaDusht I wonder if it makes sense to have parity between the two (orient='index' and orient='columns') and avoid introducing the new `autopad` argument? Especially because this new parameter would only affect orient='columns' and would not have any effect (or essentially be always True) when orient='index'",
"@nikhilweee \nThanks for the feedback! To maintain parity between orientations, adding three options — `index`, `columns`, and `all` — could be a more consistent solution.\n`index`: pads rows\n`columns`: pads columns\n`all`: pads both\nThis would offer flexibility while keeping existing behaviour intact. Let me know if this approach works.",
"@ShauryaDusht Are you suggesting we add a new option to the `orient` argument? Or are you suggesting that these options would be applicable to the new `autopad` argument? \n\nEither way I still think it makes sense to just update the behaviour of `pd.DataFrame.from_dict()` to auto pad when `orient` is set to the default value of `columns`. ",
"@nikhilweee I was referring to your approach — adding a new option to the orient argument itself. That felt like a cleaner and more sensible, rather than introducing a separate `autopad` argument.\nSo yeah, adding `columns` and `all` to `orient` as part of the overall solution sounds good.",
"@ShauryaDusht Sorry if I was unclear but I am not suggesting that we add any arguments at all. My suggestion is to merely update the behavior of `pd.DataFrame.from_dict(data, orient='columns')` to match `pd.DataFrame.from_dict(data, orient='index')` such that `pd.DataFrame.from_dict(data, orient='columns')` does not complain when the dict values are of varying lengths.",
"@nikhilweee Got it — looks like the second `orient='columns'` was meant to be `orient='index'` (just a typo).\nNow I understand everything. The idea is just to make `orient='columns'` work like `orient='index'` does.\n\nSo should I wait for the maintainers' review before starting(it is still in triage), or would it be okay to begin working on it now?",
"@ShauryaDusht Yes, that's exactly what I meant (I fixed the typo). I think it's a good idea to wait for what the maintainers have to say."
] |
2,991,281,653 | 61,281 | ENH: preview_csv(***.csv) for Fast First-N-Line Preview on Large Plus Size (>100GB) | closed | 2025-04-13T14:08:49 | 2025-08-18T01:05:06 | 2025-08-18T01:05:06 | https://github.com/pandas-dev/pandas/issues/61281 | true | null | null | visheshrwl | 4 | ### Feature Type
- [x] Adding new functionality to pandas
- [ ] Changing existing functionality in pandas
- [ ] Removing existing functionality in pandas
### Problem Description
The current `pandas.read_csv()` implementation is designed for robust and complete CSV parsing. However, even when users request only a few lines using `nrows=X`, the function:
- **Initializes the full parsing engine**
- Performs **column-wise type inference**
- Scans for **delimiter/header consistency**
- May **read a large portion or all of the file**, even for small previews
For **large datasets** (10–100GB CSVs), this results in significant I/O, CPU, and memory overhead — all when the user likely just wants a **quick preview** of the data.
This is a common pattern in:
- Exploratory Data Analysis (EDA)
- Data cataloging and profiling
- Schema validation or column sniffing
- Dashboards and notebook tooling
Currently, users resort to workarounds like:
```python
pd.read_csv(..., chunksize=5)
next(...)
```
or shell-level hacks like:
```bash
head -n 5 large_file.csv
```
These are non-intuitive, unstructured, or outside the pandas ecosystem.
### Feature Description
## Introduces a new Function
```python
pandas.preview_csv(filepath_or_buffer, nrows=5, ...)
```
### Goals
- Read only the first n rows + header lines
- Avoid loading or inferring types from the full dataset
- No full column validation
- Fallback to `object` dtype unless `dtype_infer=True`
- Support basic options like delimiter, encoding, header presence.
### Proposed API:
```python
def preview_csv(
filepath_or_buffer,
nrows: int = 5,
delimiter: str = ",",
encoding: str = "utf-8",
has_header: bool = True,
dtype_infer: bool = False,
as_generator: bool = False
) -> pd.DataFrame:
...
```
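The intended semantics can be sketched with the stdlib `csv` module: read the header plus the first `nrows` records and stop, with no type inference and no full-file scan. This is illustrative only; `preview_rows` is a made-up name, and the real function would wrap the rows in an object-dtype `DataFrame`.

```python
# Grab header + first nrows records, stopping early; no dtype inference.
import csv
import io
from itertools import islice

def preview_rows(buf, nrows=5, delimiter=",", has_header=True):
    reader = csv.reader(buf, delimiter=delimiter)
    header = next(reader) if has_header else None
    return header, list(islice(reader, nrows))  # islice stops after nrows lines

buf = io.StringIO("a,b\n1,2\n3,4\n5,6\n")
header, rows = preview_rows(buf, nrows=2)
# pd.DataFrame(rows, columns=header) would be the returned preview frame.
```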
### Alternative Solutions
| **Tool / Method** | **Behavior** | **Limitation** |
|-----------------------------------|-------------------------------------------------------------------------------|--------------------------------------------------------------------------------|
| `pd.read_csv(nrows=X)` | Reads entire file into memory, performs dtype inference and column validation | Not optimized for quick previews; incurs overhead even for small `nrows` |
| `pd.read_csv(chunksize=X)` | Returns an iterator of chunks (DataFrames of size `X`) | Requires non-intuitive iterator handling; users often want `DataFrame` directly |
| `csv.reader + slicing` | Python’s built-in CSV reader is lightweight and fast | Returns raw lists, not a DataFrame; lacks header handling and column inference |
| `subprocess.run(["head", "-n"])` | OS-level utility that returns first N lines | Not portable across platforms, doesn't integrate with DataFrame workflow |
| `Polars: pl.read_csv(..., n_rows)`| Rust-based, blazing fast CSV reader | Requires installing a new library; pandas users might not want to switch ecosystems |
| `Dask: dd.read_csv(...).head()` | Lazy, out-of-core loading with chunked processing | Overhead of distributed engine is unnecessary for simple previews |
| `open(...).readlines(N)` | Naive Python read of first N lines | Doesn’t handle parsing, delimiters, or schema properly |
| `pyarrow.csv.read_csv(...)[0:X]` | Efficient Arrow-based preview | Requires using Apache Arrow APIs; returns Arrow tables unless converted |
While workarounds exist, none provide a **clean, idiomatic, native pandas function** to:
- Efficiently load the first N rows
- Return a `DataFrame` immediately
- Avoid dtype inference
- Skip full file validation
- Avoid requiring third-party dependencies
A dedicated `pandas.preview_csv()` would fill this gap and offer an elegant, performant solution for quick data previews.
### Additional Context
_No response_ | [
"Enhancement",
"IO CSV",
"Needs Discussion",
"Closing Candidate"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Thanks for the request. Having to maintain an entirely different code path that does very similar things to `read_csv` seems to me to be a non-starter. I would like to understand why `read_csv` could not be improved to fit this purpose.",
"**Thank you for the thoughtful feedback @rhshadrach !**\n\nI completely understand the reluctance to maintain a separate code path - especially in a core function like `read_csv()`, which already carries significant complexity.\n\n`read_csv()` is designed for full-fidelity, schema-validated and optionally type-inferred ingestion. Introducing conditional short circuits for preview-style use cases pollutes that logic and increases branching inside a hot, complex code path. \n\nOn the other hand, a dedicated `preview_csv()` function:\n- Defines a minimal contract: \"_Read the top `N` rows quickly with minimal parsing_\"\n- Requires no inference or post-processing logic\n- Makes the behaviour **explicit, predictable, and easy to optimize separately**.\n\nFrom a user intent perspective:\n- `read_csv(nrows=X)` implies: \"_I want a truncated but fully parsed and inferred subset of the data_\"\n- `preview_csv(nrows=X)` would mean: \"_I just want to see the first X lines, as fast as possible - even if it's untyped or partially parsed._\"\n\nThis distinction matters - especially in workflows where previewing is decoupled from actual analysis, such as:\n- Data cataloging\n- EDA profiling\n- Schema sniffing\n- Logging pipelines\n\nAny performance optimization embedded in `read_csv()` must:\n- Preserve dozens of edge cases\n- Remain compatible with all backends (C, python, Arrow-based readers)\n- Honor ~50+ keyword arguments (`dtype`, `parse_dates`, `converters`, `skiprows`, etc.)\n\nThis would introduce non-trivial complexity and testing burden to a critical code path and create surface area for subtle regressions.\n\nBoth `polars.read_csv(..., n_rows=X)` and `vaex.open(...).head(X)` implement optimized preview semantics using fast readers with early stopping. These tools don't override their full `read_csv()` equivalents - they recognize the preview use case is distinct.\n\nPandas could adopt a similar design without breaking the existing contract of `read_csv()`.\n\nIf approved, I'm happy to:\n- Own the implementation of `preview_csv()`\n- Benchmark it vs `read_csv()` under real workloads (10GB+)\n- Keep it behind a dedicated namespace (e.g. `pandas.io.preview`)\n- Ensure full test coverage and documentation.\n\nWould love your thoughts - and if there's a preferred entry point you'd recommend for this to remain modular and maintainable long-term.\n\nThanks again!\n",
"Can you post sample data and benchmarks demonstrating the performance issue with specifying `nrows=N`.",
"I don't think this merits a new function in pandas."
] |
2,990,561,056 | 61,280 | DOC: to_json for stream object | closed | 2025-04-12T16:35:16 | 2025-04-14T14:03:03 | 2025-04-14T14:03:02 | https://github.com/pandas-dev/pandas/issues/61280 | true | null | null | loicdiridollou | 2 | ### Pandas version checks
- [x] I have checked that the issue still exists on the latest versions of the docs on `main` [here](https://pandas.pydata.org/docs/dev/)
### Location of the documentation
https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.to_json.html
### Documentation problem
Currently the docs for the `to_json` method only mention file-like objects, yet we can pass buffers which are more like stream objects. This was raised on the stubs repo (https://github.com/pandas-dev/pandas-stubs/issues/1179).
Should the docs reflect the ability of not just file-like but also stream-like? It seems to be supported at run time for sure.
Thanks!
### Suggested fix for documentation
Add mention of stream-like object for the `path_or_buf` argument. | [
"Docs",
"IO JSON",
"Closing Candidate"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"From https://docs.python.org/3/library/io.html\n\n> The [io](https://docs.python.org/3/library/io.html#module-io) module provides Python’s main facilities for dealing with various types of I/O. There are three main types of I/O: text I/O, binary I/O and raw I/O. These are generic categories, and various backing stores can be used for each of them. A concrete object belonging to any of these categories is called a [file object](https://docs.python.org/3/glossary.html#term-file-object). **Other common terms are stream and file-like object.**\n\npandas consistently uses `file-like` and I am seeing no occurrences of `stream-like`. I am negative on this change.\n\ncc @Dr-Irv ",
"@rhshadrach I was unaware that we define \"file-like\" to include streams, so I'm in agreement with you. Thanks for pointing that out. I'll close this.\n"
] |
2,990,553,000 | 61,279 | WEB: Add pandas cookbook 3 to home page | closed | 2025-04-12T16:17:16 | 2025-04-12T20:13:57 | 2025-04-12T18:51:17 | https://github.com/pandas-dev/pandas/pull/61279 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61279 | https://github.com/pandas-dev/pandas/pull/61279 | datapythonista | 2 | #61271 was hard reverted, as it contained a 1.4 Mb image which we didn't want in our git history.
Same PR here, but with the image now using 9Kb.
@WillAyd, if you check in detail, you and Matt look a bit like cartoons, hahaha. This is because I'm just using 24 colors in the image, as this makes the file size much smaller than using 256. I don't think anyone will notice, and even when paying attention to me it looks kind of cool, more than anything bad. But I'm surely happy to improve the quality of the image and have an image some Kb bigger if you prefer. Just let me know. | [
"Web"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"/preview",
"Website preview of this PR available at: https://pandas.pydata.org/preview/pandas-dev/pandas/61279/"
] |