id
int64
number
int64
title
string
state
string
created_at
timestamp[s]
updated_at
timestamp[s]
closed_at
timestamp[s]
html_url
string
is_pull_request
bool
pull_request_url
string
pull_request_html_url
string
user_login
string
comments_count
int64
body
string
labels
list
reactions_plus1
int64
reactions_minus1
int64
reactions_laugh
int64
reactions_hooray
int64
reactions_confused
int64
reactions_heart
int64
reactions_rocket
int64
reactions_eyes
int64
comments
list
3,160,653,803
61,678
ENH #61033: Add coalesce_keys option to DataFrame.join for preserving join keys
closed
2025-06-19T15:06:26
2025-07-28T17:25:52
2025-07-28T17:25:52
https://github.com/pandas-dev/pandas/pull/61678
true
https://api.github.com/repos/pandas-dev/pandas/pulls/61678
https://github.com/pandas-dev/pandas/pull/61678
rit4rosa
3
This adds a `coalesce_keys` keyword to `DataFrame.join` that preserves both join key columns (`id` and `id_right`) instead of automatically coalescing them into a single column. This is especially useful in full outer joins, where retaining information about unmatched keys from both sides matters. Example: `df1.join(df2, on="id", coalesce_keys=False)` keeps both the `id` and `id_right` columns rather than merging them into a single `id`. Includes: - Modifications to join internals (core/reshape/merge.py) - A dedicated test file (test_merge_coalesce.py) covering: - Preservation of join keys when coalesce_keys=False - Comparison with default behavior (coalesce_keys=True) - Full outer joins with asymmetric key presence
[ "Enhancement", "Reshaping", "Stale" ]
0
0
0
0
0
0
0
0
[ "closes #61033", "This pull request is stale because it has been open for thirty days with no activity. Please [update](https://pandas.pydata.org/pandas-docs/stable/development/contributing.html#updating-your-pull-request) and respond to this comment if you're still interested in working on this.", "Thanks for the pull request, but it appears to have gone stale. If interested in continuing, please merge in the main branch, address any review comments and/or failing tests, and we can reopen." ]
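The `coalesce_keys` keyword described in the record above is only proposed in that PR and does not exist in released pandas. A rough sketch of the current workaround: rename the right-hand key so an outer `merge` keeps both key columns (frame and column names here are illustrative):

```python
import pandas as pd

df1 = pd.DataFrame({"id": [1, 2], "a": ["x", "y"]})
df2 = pd.DataFrame({"id": [2, 3], "b": ["u", "v"]})

# Rename the right-hand key so merge cannot coalesce it away;
# an outer merge then preserves unmatched keys from both sides.
out = df1.merge(
    df2.rename(columns={"id": "id_right"}),
    left_on="id",
    right_on="id_right",
    how="outer",
)
```

The result has one row per matched pair plus one row for each unmatched key on either side, with NaN in the key column of the missing side.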
3,157,190,784
61,677
Incorrect rolling std() results on very large DataFrames
open
2025-06-18T14:50:29
2025-07-18T20:45:55
null
https://github.com/pandas-dev/pandas/issues/61677
false
null
null
ZuotaoZhang
0
I apologize that I cannot provide a reproducible example, as this issue was discovered while working with an extremely large DataFrame. I've observed that when performing rolling(window).std() on a DataFrame with tens of millions of rows, the calculations produce incorrect results. For example: - With window=40, the calculated std at index = x is approximately 0.5 in the full DataFrame; - However, if I take a subset of the DataFrame (rows x-45 to x) and perform the same rolling(40).std() operation, the result at index x becomes approximately 0.2. This inconsistency suggests there may be numerical precision issues or algorithmic differences when handling very large DataFrames.
[ "Window", "Reduction Operations" ]
0
0
0
0
0
0
0
0
[]
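The reporter's hypothesis of numerical precision issues in the record above is plausible: rolling aggregations use online accumulator updates, and variance formulas that subtract large, nearly equal sums lose precision when values carry a large common offset. A minimal numpy-only sketch of the effect (not pandas' actual rolling kernel):

```python
import numpy as np

# Three values with a large common offset; the exact sample variance is 1.0.
x = np.array([1e8, 1e8 + 1, 1e8 + 2])

# Two-pass formula: subtract the mean first, then square. Exact here.
two_pass = np.var(x, ddof=1)

# Naive one-pass formula: sum of squares minus n * mean**2.
# Both terms are ~3e16, so the subtraction cancels nearly all
# significant digits and the result is wrong.
n = len(x)
naive = (np.sum(x**2) - n * np.mean(x) ** 2) / (n - 1)
```

A window-based accumulator that adds and removes values incrementally over tens of millions of rows can accumulate the same kind of cancellation error, which would explain the full-frame vs. subset discrepancy.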
3,157,064,231
61,676
BUG: Implicit conversion to float64 with isin()
open
2025-06-18T14:15:38
2025-07-21T21:47:29
null
https://github.com/pandas-dev/pandas/issues/61676
false
null
null
pbrochart
1
### Pandas version checks - [x] I have checked that this issue has not already been reported. - [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas. - [x] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas. ### Reproducible Example ```python import pandas as pd import numpy as np test_df = pd.DataFrame([{'a': 1378774140726870442}], dtype=np.int64) print(test_df['a'].isin([np.uint64(1378774140726870528)])[0]) #True ``` ### Issue Description The latest version of Pandas fixes the implicit conversion to float64 only when dtypes are uint64 vs int64. Int64 vs uint64 needs also to be fixed. ### Expected Behavior import pandas as pd import numpy as np test_df = pd.DataFrame([{'a': 1378774140726870442}], dtype=np.int64) print(test_df['a'].isin([np.uint64(1378774140726870528)])[0]) #False ### Installed Versions <details> INSTALLED VERSIONS ------------------ commit : 0691c5cf90477d3503834d983f69350f250a6ff7 python : 3.11.2 python-bits : 64 OS : Linux OS-release : 6.1.0-31-amd64 Version : #1 SMP PREEMPT_DYNAMIC Debian 6.1.128-1 (2025-02-07) machine : x86_64 processor : byteorder : little LC_ALL : None LANG : fr_FR.UTF-8 LOCALE : fr_FR.UTF-8 pandas : 2.2.3 numpy : 1.26.0 pytz : 2023.3.post1 dateutil : 2.8.2 pip : 23.0.1 Cython : 3.0.12 sphinx : None IPython : 8.17.2 adbc-driver-postgresql: None adbc-driver-sqlite : None bs4 : 4.12.2 blosc : None bottleneck : None dataframe-api-compat : None fastparquet : None fsspec : 2023.10.0 html5lib : None hypothesis : 6.131.0 gcsfs : None jinja2 : 3.1.2 lxml.etree : None matplotlib : 3.8.2 numba : None numexpr : None odfpy : None openpyxl : 3.1.2 pandas_gbq : None psycopg2 : None pymysql : None pyarrow : 14.0.1 pyreadstat : None pytest : 8.3.5 python-calamine : None pyxlsb : None s3fs : 2023.10.0 scipy : 1.11.4 sqlalchemy : None tables : None 
tabulate : None xarray : None xlrd : None xlsxwriter : None zstandard : None tzdata : 2023.3 qtpy : None pyqt5 : None </details>
[ "Bug", "Dtype Conversions", "isin" ]
0
0
0
0
0
0
0
0
[ "take" ]
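The false positive in the record above comes from both values rounding to the same float64: near 1.4e18, adjacent float64 values are 256 apart, so integers that differ by less than half that spacing collapse together. A numpy-only sketch of the collapse (avoiding the `isin` call itself, whose result depends on the pandas version):

```python
import numpy as np

a = np.int64(1378774140726870442)   # value stored in the frame
b = np.uint64(1378774140726870528)  # value passed to isin; differs by 86

# float64 has a 53-bit significand; at this magnitude the gap between
# representable floats is 256, so both integers round to the same float
# and compare equal after the implicit cast.
collapsed = np.float64(a) == np.float64(b)
```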
3,155,881,737
61,675
BUG: DataFrame.join(other) raises InvalidIndexError if column index is CategoricalIndex
open
2025-06-18T07:59:14
2025-08-21T07:19:46
null
https://github.com/pandas-dev/pandas/issues/61675
false
null
null
tvoipio
3
### Pandas version checks - [x] I have checked that this issue has not already been reported. - [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas. - [ ] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas. ### Reproducible Example ```python import pandas as pd cat_data = pd.Categorical([15, 16, 17, 18], categories=pd.Series(list(range(3, 24)), dtype="Int64"), ordered=True) df1 = pd.DataFrame({"hr": cat_data, "values1": "a b c d".split()}).set_index("hr") df2 = pd.DataFrame({"hr": cat_data, "values2": "xyzzy foo bar ...".split()}).set_index("hr") df1.columns = pd.CategoricalIndex([4], dtype=cat_data.dtype, name="other_hr") df2.columns = pd.CategoricalIndex([3], dtype=cat_data.dtype, name="other_hr") print(pd.__version__) # Clunky, but works df_joined_1 = df1.reset_index(level="hr").merge(df2.reset_index(level="hr"), on="hr").set_index("hr") # Works on 1.4.4 and nightly (3.0.0.dev0+2177.g8a1d5a06f9), not 2.2.3 or 2.3.0 df_joined_2 = df1.join(df2) # returns True... assuming we got this far df_joined_1.equals(df_joined_2) ``` ### Issue Description `join`ing two DataFrames which have a `CategoricalIndex` as columns (for example, due to having pivoted on a categorical column) results in an InvalidIndexError (see below) on Pandas versions 2.2.3 and 2.3.0. The same code works with 1.4.4 (from which I am trying to migrate to 2.x) and nightly. While the issue does not manifest in nightly, I am still reporting it in the hopes of getting it fixed in future 2.x releases. 
### Expected Behavior Code executes successfully and the last statement returns `True` ### Installed Versions Pandas 1.4.4 (works) <details> INSTALLED VERSIONS ------------------ commit : ca60aab7340d9989d9428e11a51467658190bb6b python : 3.10.16.final.0 python-bits : 64 OS : Darwin OS-release : 24.5.0 Version : Darwin Kernel Version 24.5.0: Tue Apr 22 19:53:27 PDT 2025; root:xnu-11417.121.6~2/RELEASE_ARM64_T6041 machine : arm64 processor : arm byteorder : little LC_ALL : None LANG : en_US.UTF-8 LOCALE : en_US.UTF-8 pandas : 1.4.4 numpy : 1.23.5 pytz : 2025.2 dateutil : 2.9.0.post0 setuptools : 80.7.1 pip : 25.1.1 Cython : None pytest : None hypothesis : None sphinx : None blosc : None feather : None xlsxwriter : None lxml.etree : None html5lib : None pymysql : None psycopg2 : None jinja2 : None IPython : 8.36.0 pandas_datareader: None bs4 : None bottleneck : None brotli : None fastparquet : None fsspec : None gcsfs : None markupsafe : None matplotlib : 3.10.3 numba : None numexpr : None odfpy : None openpyxl : 3.1.2 pandas_gbq : None pyarrow : 8.0.0 pyreadstat : None pyxlsb : None s3fs : None scipy : None snappy : None sqlalchemy : 2.0.41 tables : None tabulate : None xarray : None xlrd : None xlwt : None zstandard : None </details> Pandas 2.2.3 (does not work) <details> INSTALLED VERSIONS ------------------ commit : 0691c5cf90477d3503834d983f69350f250a6ff7 python : 3.12.9 python-bits : 64 OS : Darwin OS-release : 24.5.0 Version : Darwin Kernel Version 24.5.0: Tue Apr 22 19:53:27 PDT 2025; root:xnu-11417.121.6~2/RELEASE_ARM64_T6041 machine : arm64 processor : arm byteorder : little LC_ALL : None LANG : en_US.UTF-8 LOCALE : en_US.UTF-8 pandas : 2.2.3 numpy : 2.2.4 pytz : 2025.1 dateutil : 2.9.0.post0 pip : 24.3.1 Cython : None sphinx : None IPython : 9.0.2 adbc-driver-postgresql: None adbc-driver-sqlite : None bs4 : None blosc : None bottleneck : None dataframe-api-compat : None fastparquet : None fsspec : None html5lib : None hypothesis : None gcsfs : None jinja2 : 
3.1.6 lxml.etree : None matplotlib : 3.10.1 numba : None numexpr : None odfpy : None openpyxl : 3.1.5 pandas_gbq : None psycopg2 : None pymysql : None pyarrow : 19.0.1 pyreadstat : None pytest : None python-calamine : None pyxlsb : None s3fs : None scipy : 1.15.2 sqlalchemy : 2.0.39 tables : None tabulate : None xarray : None xlrd : None xlsxwriter : None zstandard : None tzdata : 2025.1 qtpy : None pyqt5 : None </details> Pandas 2.3.0 (does not work) <details> INSTALLED VERSIONS ------------------ commit : 2cc37625532045f4ac55b27176454bbbc9baf213 python : 3.12.9 python-bits : 64 OS : Darwin OS-release : 24.5.0 Version : Darwin Kernel Version 24.5.0: Tue Apr 22 19:53:27 PDT 2025; root:xnu-11417.121.6~2/RELEASE_ARM64_T6041 machine : arm64 processor : arm byteorder : little LC_ALL : None LANG : en_US.UTF-8 LOCALE : en_US.UTF-8 pandas : 2.3.0 numpy : 2.3.0 pytz : 2025.2 dateutil : 2.9.0.post0 pip : 25.1.1 Cython : None sphinx : None IPython : None adbc-driver-postgresql: None adbc-driver-sqlite : None bs4 : None blosc : None bottleneck : None dataframe-api-compat : None fastparquet : None fsspec : None html5lib : None hypothesis : None gcsfs : None jinja2 : None lxml.etree : None matplotlib : None numba : None numexpr : None odfpy : None openpyxl : None pandas_gbq : None psycopg2 : None pymysql : None pyarrow : 20.0.0 pyreadstat : None pytest : None python-calamine : None pyxlsb : None s3fs : None scipy : None sqlalchemy : None tables : None tabulate : None xarray : None xlrd : None xlsxwriter : None zstandard : None tzdata : 2025.2 qtpy : None pyqt5 : None </details> Pandas nightly (works) <details> INSTALLED VERSIONS ------------------ commit : 8a1d5a06f9fb3c232249e3ed301932053efb06d8 python : 3.12.9 python-bits : 64 OS : Darwin OS-release : 24.5.0 Version : Darwin Kernel Version 24.5.0: Tue Apr 22 19:53:27 PDT 2025; root:xnu-11417.121.6~2/RELEASE_ARM64_T6041 machine : arm64 processor : arm byteorder : little LC_ALL : None LANG : en_US.UTF-8 LOCALE : en_US.UTF-8 
pandas : 3.0.0.dev0+2177.g8a1d5a06f9 numpy : 2.4.0.dev0+git20250617.32f4afa dateutil : 2.9.0.post0 pip : 25.1.1 Cython : None sphinx : None IPython : None adbc-driver-postgresql: None adbc-driver-sqlite : None bs4 : None bottleneck : None fastparquet : None fsspec : None html5lib : None hypothesis : None gcsfs : None jinja2 : None lxml.etree : None matplotlib : None numba : None numexpr : None odfpy : None openpyxl : None psycopg2 : None pymysql : None pyarrow : None pyiceberg : None pyreadstat : None pytest : None python-calamine : None pytz : None pyxlsb : None s3fs : None scipy : None sqlalchemy : None tables : None tabulate : None xarray : None xlrd : None xlsxwriter : None zstandard : None tzdata : 2025.2 qtpy : None pyqt5 : None </details> ### Error traceback On Pandas 2.3.3: ``` Traceback (most recent call last): File "/Users/voipiti/.pyenv/versions/pd23-312/lib/python3.12/site-packages/pandas/core/indexes/base.py", line 3812, in get_loc return self._engine.get_loc(casted_key) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "pandas/_libs/index.pyx", line 167, in pandas._libs.index.IndexEngine.get_loc File "pandas/_libs/index.pyx", line 175, in pandas._libs.index.IndexEngine.get_loc File "pandas/_libs/index_class_helper.pxi", line 86, in pandas._libs.index.MaskedInt64Engine._check_type KeyError: slice(None, None, None) During handling of the above exception, another exception occurred: Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/Users/voipiti/.pyenv/versions/pd23-312/lib/python3.12/site-packages/pandas/core/frame.py", line 10764, in join return merge( ^^^^^^ File "/Users/voipiti/.pyenv/versions/pd23-312/lib/python3.12/site-packages/pandas/core/reshape/merge.py", line 184, in merge return op.get_result(copy=copy) ^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/voipiti/.pyenv/versions/pd23-312/lib/python3.12/site-packages/pandas/core/reshape/merge.py", line 888, in get_result result = self._reindex_and_concat( ^^^^^^^^^^^^^^^^^^^^^^^^^ File 
"/Users/voipiti/.pyenv/versions/pd23-312/lib/python3.12/site-packages/pandas/core/reshape/merge.py", line 837, in _reindex_and_concat left = self.left[:] ~~~~~~~~~^^^ File "/Users/voipiti/.pyenv/versions/pd23-312/lib/python3.12/site-packages/pandas/core/frame.py", line 4080, in __getitem__ and key in self.columns ^^^^^^^^^^^^^^^^^^^ File "/Users/voipiti/.pyenv/versions/pd23-312/lib/python3.12/site-packages/pandas/core/indexes/category.py", line 368, in __contains__ return contains(self, key, container=self._engine) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/voipiti/.pyenv/versions/pd23-312/lib/python3.12/site-packages/pandas/core/arrays/categorical.py", line 230, in contains loc = cat.categories.get_loc(key) ^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/voipiti/.pyenv/versions/pd23-312/lib/python3.12/site-packages/pandas/core/indexes/base.py", line 3818, in get_loc raise InvalidIndexError(key) pandas.errors.InvalidIndexError: slice(None, None, None) ```
[ "Bug", "Reshaping", "Regression", "Categorical" ]
0
0
0
0
0
0
0
0
[ "Thanks for the report; I'll mark as a regression for now, but it seems likely the 2.x series will not see any more releases.", "i can't reproduce this in 2.3.0", "I can reproduce on 2.3.x." ]
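The merge-based workaround in the record above follows the pattern below. A minimal sketch with plain columns (which sidesteps the CategoricalIndex column labels that trigger the bug), showing that the two paths agree:

```python
import pandas as pd

df1 = pd.DataFrame({"hr": [15, 16], "values1": ["a", "b"]}).set_index("hr")
df2 = pd.DataFrame({"hr": [15, 16], "values2": ["x", "y"]}).set_index("hr")

# join on the shared index...
joined = df1.join(df2)

# ...is equivalent to the reporter's reset_index/merge/set_index workaround.
merged = (
    df1.reset_index()
    .merge(df2.reset_index(), on="hr")
    .set_index("hr")
)
```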
3,154,076,715
61,674
BUG: fix: `list` as index item does not raise
closed
2025-06-17T16:21:37
2025-07-28T17:25:29
2025-07-28T17:25:29
https://github.com/pandas-dev/pandas/pull/61674
true
https://api.github.com/repos/pandas-dev/pandas/pulls/61674
https://github.com/pandas-dev/pandas/pull/61674
Andre-Andreati
2
- [X] closes #60925 - [X] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature - [X] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit). - [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions. - [X] Added an entry in the latest `doc/source/whatsnew/v3.0.0.rst` file if fixing a bug or adding a new feature. ### Problem: Index constructor was allowing creation of indexes where one of the index's item is a list (unhashable) while others are not lists. - If all items are lists - like columns=[ ['a', 'b'], ['b', 'c'], ['b', 'c'] ], it will try to create a MultiIndex. **Correct**. - If _any_ item is a list, but _NOT all_ - like the initial example, columns=[ 'a', ['b', 'c'], ['b', 'c'] ], there's no check for this condition, and the creation will follow as if all items, including the lists, are valid column names. ### Solution: Added a check in the Index constructor for this case. Raise ValueError. Added test to check if is raising correctly. Changed a test that should raise.
[ "Bug", "Index", "Stale" ]
0
0
0
0
0
0
0
0
[ "This pull request is stale because it has been open for thirty days with no activity. Please [update](https://pandas.pydata.org/pandas-docs/stable/development/contributing.html#updating-your-pull-request) and respond to this comment if you're still interested in working on this.", "Thanks for the pull request, but it appears to have gone stale. If interested in continuing, please merge in the main branch, address any review comments and/or failing tests, and we can reopen." ]
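The all-lists case that the PR description above calls correct can be sketched as follows (the mixed case the PR fixes is left out, since whether it raises depends on the pandas version):

```python
import pandas as pd

# When every column label is a list, pandas interprets the lists as
# level arrays and builds a MultiIndex for the columns.
df = pd.DataFrame([[1, 2], [3, 4]], columns=[["a", "b"], ["x", "y"]])
```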
3,153,132,080
61,673
DOC: Document two-issue limit for `take` command in contributing guide #61626
closed
2025-06-17T11:22:47
2025-06-22T17:34:35
2025-06-22T11:23:04
https://github.com/pandas-dev/pandas/pull/61673
true
https://api.github.com/repos/pandas-dev/pandas/pulls/61673
https://github.com/pandas-dev/pandas/pull/61673
SnehaDeshmukh28
3
- [ ] closes #61626 - [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature - [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit). - [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions. - [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
[]
0
0
0
0
0
0
0
0
[ "Hi team! 👋\r\n\r\nJust wanted to kindly follow up — it's been a few days, and I wanted to check if there's anything I should improve or update in this PR.\r\n\r\nWould really love your feedback and guidance. Thanks for the incredible work you all do maintaining pandas! 🙏\r\n\r\ncc: @mroeschke @rhshadrach @noatamir", "Thanks for the PR, but there is no such consideration when assigning due to a `take` comment. See the linked issue for more details.\r\n\r\nWhen looking for an issue to work on, I recommend first verifying the issue reported is correct. It is quite often that this reporter is mistaken for one reason or another.", "@rhshadrach Thank you so much for the feedback! Definitely will work on it!" ]
3,151,625,198
61,672
BUG: Index allows one item to be `list` among others that are not
closed
2025-06-16T23:52:06
2025-06-17T02:36:44
2025-06-17T02:36:43
https://github.com/pandas-dev/pandas/pull/61672
true
https://api.github.com/repos/pandas-dev/pandas/pulls/61672
https://github.com/pandas-dev/pandas/pull/61672
Andre-Andreati
0
- [X] closes #60925 - [X] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature - [X] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit). - [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions. - [X] Added an entry in the latest `doc/source/whatsnew/v3.0.0.rst` file if fixing a bug. Corrected test pandas/tests/frame/test_repr.py::[test_assign_index_sequences] that was passing when should be raising. Added test to check if raises correctly.
[]
0
0
0
0
0
0
0
0
[]
3,150,470,239
61,671
BUG: np.nan to datetime assertionerror when too large datetime given
open
2025-06-16T15:43:04
2025-06-24T00:45:15
null
https://github.com/pandas-dev/pandas/issues/61671
false
null
null
dnallicus
4
### Pandas version checks - [x] I have checked that this issue has not already been reported. - [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas. - [ ] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas. ### Reproducible Example ```python import pandas as pd import numpy as np import datetime pd.DataFrame([np.nan], dtype='datetime64[ns]').replace(np.nan, datetime.datetime(3000,1,1)) ``` ### Issue Description code above gives this error, likely because the datetime is too big for datetime64[ns]: pd.DataFrame([np.nan], dtype='datetime64[ns]').replace(np.nan, datetime.datetime(3000,1,1)) File "C:\Users\mdarnall\mdarnall-local-dev\tma-venv-prod\.venv\lib\site-packages\pandas\core\generic.py", line 8141, in replace new_data = self._mgr.replace( File "C:\Users\mdarnall\mdarnall-local-dev\tma-venv-prod\.venv\lib\site-packages\pandas\core\internals\base.py", line 249, in replace return self.apply_with_block( File "C:\Users\mdarnall\mdarnall-local-dev\tma-venv-prod\.venv\lib\site-packages\pandas\core\internals\managers.py", line 363, in apply applied = getattr(b, f)(**kwargs) File "C:\Users\mdarnall\mdarnall-local-dev\tma-venv-prod\.venv\lib\site-packages\pandas\core\internals\blocks.py", line 924, in replace blk = self.coerce_to_target_dtype(value) File "C:\Users\mdarnall\mdarnall-local-dev\tma-venv-prod\.venv\lib\site-packages\pandas\core\internals\blocks.py", line 490, in coerce_to_target_dtype raise AssertionError( AssertionError: Something has gone wrong, please report a bug at https://github.com/pandas-dev/pandas/issues ### Expected Behavior different error message or change column type ### Installed Versions <details> INSTALLED VERSIONS ------------------ commit : 0691c5cf90477d3503834d983f69350f250a6ff7 python : 3.9.13 python-bits : 64 OS : Windows OS-release : 10 Version 
: 10.0.19045 machine : AMD64 processor : Intel64 Family 6 Model 151 Stepping 2, GenuineIntel byteorder : little LC_ALL : None LANG : None LOCALE : English_United States.1252 pandas : 2.2.3 numpy : 2.0.2 pytz : 2025.2 dateutil : 2.9.0.post0 pip : None Cython : None sphinx : None IPython : 8.18.1 adbc-driver-postgresql: None adbc-driver-sqlite : None bs4 : 4.13.4 blosc : None bottleneck : None dataframe-api-compat : None fastparquet : None fsspec : None html5lib : None hypothesis : None gcsfs : None jinja2 : 3.1.6 lxml.etree : 5.4.0 matplotlib : 3.9.4 numba : None numexpr : None odfpy : None openpyxl : 3.1.5 pandas_gbq : None psycopg2 : None pymysql : None pyarrow : 20.0.0 pyreadstat : None pytest : None python-calamine : None pyxlsb : None s3fs : None scipy : 1.13.1 sqlalchemy : 2.0.41 tables : None tabulate : None xarray : None xlrd : None xlsxwriter : None zstandard : None tzdata : 2025.2 qtpy : None pyqt5 : None </details>
[ "Bug", "Datetime", "Needs Triage" ]
0
0
0
0
0
0
0
0
[ "take", "I looked into this and confirmed that `.to_datetime64()` on `Timestamp(datetime(3000, 1, 1))` returns a `datetime64[us]`, and casting that to `datetime64[ns]` silently overflows, resulting in a corrupted timestamp like `1830-11-23T00:50:52`. Since no error is raised, `infer_dtype_from_scalar` infers an invalid dtype, and we later hit an `AssertionError` when `new_dtype == self.dtype`.\n\nWould adding a bounds check after `.to_datetime64()` in `infer_dtype_from_scalar` make sense here to catch this early?\n\n", "Looks like this needs special handling in dtypes.cast.find_result_type. At the end of the function we call find_common_type with dt64[ns] and dt64[us] which gives dt64[ns]. For arithmetic-like use cases that is the right thing to do for find_common_type, but not here.", "That makes sense. I’ll move the bounds check into `find_result_type`, right after `find_common_type`, and special-case the datetime64[ns]/[us] scenario. Thanks for the clarification!" ]
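The contributor's diagnosis in the record above (a silent wraparound when casting datetime64[us] to datetime64[ns]) can be reproduced with numpy alone; a sketch, assuming numpy's unchecked datetime unit-conversion arithmetic:

```python
import numpy as np

# 3000-01-01 fits in datetime64[us] but lies past the datetime64[ns]
# upper bound (2262-04-11), so the unit cast wraps around 2**63 silently
# and the result lands in the 1800s instead of raising.
ts_us = np.datetime64("3000-01-01", "us")
ts_ns = ts_us.astype("datetime64[ns]")
```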
3,150,092,764
61,670
CLN: Use dedup_names for column name mangling in Python parser (#50371)
open
2025-06-16T13:45:36
2025-08-21T02:22:55
null
https://github.com/pandas-dev/pandas/pull/61670
true
https://api.github.com/repos/pandas-dev/pandas/pulls/61670
https://github.com/pandas-dev/pandas/pull/61670
Veneel77
5
### What does this PR do? Replaces manual deduplication logic in the Python parser with the `dedup_names` utility from `pandas.io.common`. ### Why is this important? - Improves consistency and maintainability - Simplifies internal logic - Addresses issue #50371 --- ### Checklist - [x] closes #50371 - [x] Tests added and passed (existing tests cover the change) - [x] All code checks passed (CI will run static checks) - [ ] Added type annotations (n/a) - [ ] Added whatsnew entry (not needed for refactor)
[ "Clean", "Stale" ]
0
0
0
0
0
0
0
0
[ "Hi maintainers 👋\r\n\r\nThis PR replaces manual deduplication logic with the shared `dedup_names` utility. Parser-related tests pass as expected ✅\r\n\r\nSome CI checks failed due to network-based tests (`test_network.py`, `test_url`, etc.), which appear unrelated to my change and are known to be flaky or environment-dependent.\r\n\r\nPlease let me know if you’d like anything else adjusted. Thanks!\r\n\r\n", "can you merge main and see if the CI passes", "Yeah sir, I will try to merge and see if the CI passes\r\n\r\nOn Tue, 15 Jul, 2025, 1:34 am jbrockmendel, ***@***.***>\r\nwrote:\r\n\r\n> ***@***.**** commented on this pull request.\r\n> ------------------------------\r\n>\r\n> In pandas/io/parsers/python_parser.py\r\n> <https://github.com/pandas-dev/pandas/pull/61670#discussion_r2205712915>:\r\n>\r\n> > - # TODO: Use pandas.io.common.dedup_names instead (see #50371)\r\n> - for i in col_loop_order:\r\n> - col = this_columns[i]\r\n> - old_col = col\r\n> - cur_count = counts[col]\r\n> -\r\n> - if cur_count > 0:\r\n> - while cur_count > 0:\r\n> - counts[old_col] = cur_count + 1\r\n> - col = f\"{old_col}.{cur_count}\"\r\n> - if col in this_columns:\r\n> - cur_count += 1\r\n> - else:\r\n> - cur_count = counts[col]\r\n> -\r\n> - if (\r\n>\r\n> looks like this chunk isn't present in dedup_names?\r\n>\r\n> —\r\n> Reply to this email directly, view it on GitHub\r\n> <https://github.com/pandas-dev/pandas/pull/61670#pullrequestreview-3017613477>,\r\n> or unsubscribe\r\n> <https://github.com/notifications/unsubscribe-auth/BCH5XYIIYOXP5ISXTVC73533IQEOLAVCNFSM6AAAAAB7NLXPE2VHI2DSMVQWIX3LMV43YUDVNRWFEZLROVSXG5CSMV3GSZLXHMZTAMJXGYYTGNBXG4>\r\n> .\r\n> You are receiving this because you authored the thread.Message ID:\r\n> ***@***.***>\r\n>\r\n", "This pull request is stale because it has been open for thirty days with no activity. 
Please [update](https://pandas.pydata.org/pandas-docs/stable/development/contributing.html#updating-your-pull-request) and respond to this comment if you're still interested in working on this.", "I'm working on it and I will able to finish it in week\r\nThank you\r\n\r\nOn Sat, 16 Aug, 2025, 5:38 am github-actions[bot], ***@***.***>\r\nwrote:\r\n\r\n> *github-actions[bot]* left a comment (pandas-dev/pandas#61670)\r\n> <https://github.com/pandas-dev/pandas/pull/61670#issuecomment-3193035105>\r\n>\r\n> This pull request is stale because it has been open for thirty days with\r\n> no activity. Please update\r\n> <https://pandas.pydata.org/pandas-docs/stable/development/contributing.html#updating-your-pull-request>\r\n> and respond to this comment if you're still interested in working on this.\r\n>\r\n> —\r\n> Reply to this email directly, view it on GitHub\r\n> <https://github.com/pandas-dev/pandas/pull/61670#issuecomment-3193035105>,\r\n> or unsubscribe\r\n> <https://github.com/notifications/unsubscribe-auth/BCH5XYOBUM2JAAZECJUQDH33NZZABAVCNFSM6AAAAAB7NLXPE2VHI2DSMVQWIX3LMV43OSLTON2WKQ3PNVWWK3TUHMZTCOJTGAZTKMJQGU>\r\n> .\r\n> You are receiving this because you authored the thread.Message ID:\r\n> ***@***.***>\r\n>\r\n" ]
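The column-name mangling that the PR above consolidates is observable through `read_csv`, which deduplicates repeated headers by appending `.1`, `.2`, and so on; a quick sketch of that stable public behavior:

```python
import io

import pandas as pd

# Duplicate header names are mangled to a, a.1, b on read.
df = pd.read_csv(io.StringIO("a,a,b\n1,2,3"))
```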
3,149,917,098
61,669
ENH: Switch to trusted publishing for package upload to PyPI in CI
open
2025-06-16T12:56:12
2025-07-19T09:53:57
null
https://github.com/pandas-dev/pandas/issues/61669
false
null
null
EpicWink
1
### Feature Type - [ ] Adding new functionality to pandas - [ ] Changing existing functionality in pandas - [ ] Removing existing functionality in pandas ### Problem Description I would like to audit the `pandas` wheel easily. ### Feature Description Trusted publishing (with attestations) means I can know for certain that what I download from PyPI is the same artefact which was generated in GitHub CI, meaning that what I see in GitHub is the same as what is installed - handy for auditing (rather than having to manually review all of the installed files on each release). See [the Python packaging documentation](https://packaging.python.org/en/latest/guides/publishing-package-distribution-releases-using-github-actions-ci-cd-workflows/#configuring-trusted-publishing), [the PyPI documentation](https://docs.pypi.org/trusted-publishers/), and [the official pypi-publish GitHub action documentation](https://github.com/pypa/gh-action-pypi-publish?tab=readme-ov-file#trusted-publishing) on trusted publishing - you'll need to configure an environment in PyPI and GitHub. ### Alternative Solutions Manually review all of the installed files on each release ### Additional Context _No response_
[ "Enhancement", "Build", "CI" ]
1
0
0
0
0
0
0
0
[ "Hi all - if nobody is on this yet, I’d like to take it. I’ll open a PR shortly." ]
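The setup described by the documentation linked in the record above boils down to granting the upload job an OIDC token and invoking `pypa/gh-action-pypi-publish` without any credentials; a hedged GitHub Actions sketch (the job and environment names are illustrative, and the environment must first be configured as a trusted publisher on PyPI):

```yaml
jobs:
  publish:
    runs-on: ubuntu-latest
    environment: pypi          # must match the environment configured on PyPI
    permissions:
      id-token: write          # required for OIDC-based trusted publishing
    steps:
      - uses: actions/download-artifact@v4
        with:
          path: dist/
          merge-multiple: true
      # No password or API token: the action exchanges the OIDC token
      # for a short-lived PyPI credential and uploads with attestations.
      - uses: pypa/gh-action-pypi-publish@release/v1
```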
3,149,162,296
61,668
Bump pypa/cibuildwheel from 2.23.3 to 3.0.0
closed
2025-06-16T09:05:45
2025-07-07T10:46:20
2025-07-07T10:46:18
https://github.com/pandas-dev/pandas/pull/61668
true
https://api.github.com/repos/pandas-dev/pandas/pulls/61668
https://github.com/pandas-dev/pandas/pull/61668
dependabot[bot]
1
Bumps [pypa/cibuildwheel](https://github.com/pypa/cibuildwheel) from 2.23.3 to 3.0.0. <details> <summary>Release notes</summary> <p><em>Sourced from <a href="https://github.com/pypa/cibuildwheel/releases">pypa/cibuildwheel's releases</a>.</em></p> <blockquote> <h2>v3.0.0</h2> <p>See <a href="https://github.com/henryiii"><code>@​henryiii</code></a>'s <a href="https://iscinumpy.dev/post/cibuildwheel-3-0-0/">release post</a> for more info on new features!</p> <ul> <li> <p>🌟 Adds the ability to <a href="https://cibuildwheel.pypa.io/en/stable/platforms/#ios">build wheels for iOS</a>! Set the <a href="https://cibuildwheel.pypa.io/en/stable/options/#platform"><code>platform</code> option</a> to <code>ios</code> on a Mac with the iOS toolchain to try it out! (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2286">#2286</a>, <a href="https://redirect.github.com/pypa/cibuildwheel/issues/2363">#2363</a>, <a href="https://redirect.github.com/pypa/cibuildwheel/issues/2432">#2432</a>)</p> </li> <li> <p>🌟 Adds support for the GraalPy interpreter! Enable for your project using the <a href="https://cibuildwheel.pypa.io/en/stable/options/#enable"><code>enable</code> option</a>. (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/1538">#1538</a>, <a href="https://redirect.github.com/pypa/cibuildwheel/issues/2411">#2411</a>, <a href="https://redirect.github.com/pypa/cibuildwheel/issues/2414">#2414</a>)</p> </li> <li> <p>✨ Adds CPython 3.14 support, under the <a href="https://cibuildwheel.pypa.io/en/stable/options/#enable"><code>enable</code> option</a> <code>cpython-prerelease</code>. This version of cibuildwheel uses 3.14.0b2. (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2390">#2390</a>)</p> <p><em>While CPython is in beta, the ABI can change, so your wheels might not be compatible with the final release. 
For this reason, we don't recommend distributing wheels until RC1, at which point 3.14 will be available in cibuildwheel without the flag.</em> (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2390">#2390</a>)</p> </li> <li> <p>✨ Adds the <a href="https://cibuildwheel.pypa.io/en/stable/options/#test-sources">test-sources option</a>, and changes the working directory for tests. (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2062">#2062</a>, <a href="https://redirect.github.com/pypa/cibuildwheel/issues/2284">#2284</a>, <a href="https://redirect.github.com/pypa/cibuildwheel/issues/2437">#2437</a>)</p> <ul> <li>If this option is set, cibuildwheel will copy the files and folders specified in <code>test-sources</code> into the temporary directory we run from. This is required for iOS builds, but also useful for other platforms, as it allows you to avoid placeholders.</li> <li>If this option is not set, behaviour matches v2.x - cibuildwheel will run the tests from a temporary directory, and you can use the <code>{project}</code> placeholder in the <code>test-command</code> to refer to the project directory. (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2420">#2420</a>)</li> </ul> </li> <li> <p>✨ Adds <a href="https://cibuildwheel.pypa.io/en/stable/options/#dependency-versions"><code>dependency-versions</code></a> inline syntax (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2122">#2122</a>)</p> </li> <li> <p>✨ Improves support for Pyodide builds and adds the experimental <a href="https://cibuildwheel.pypa.io/en/stable/options/#pyodide-version"><code>pyodide-version</code></a> option, which allows you to specify the version of Pyodide to use for builds. (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2002">#2002</a>)</p> </li> <li> <p>✨ Add <code>pyodide-prerelease</code> <a href="https://cibuildwheel.pypa.io/en/stable/options/#enable">enable</a> option, with an early build of 0.28 (Python 3.13). 
(<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2431">#2431</a>)</p> </li> <li> <p>✨ Adds the <a href="https://cibuildwheel.pypa.io/en/stable/options/#test-environment"><code>test-environment</code></a> option, which allows you to set environment variables for the test command. (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2388">#2388</a>)</p> </li> <li> <p>✨ Adds the <a href="https://cibuildwheel.pypa.io/en/stable/options/#xbuild-tools"><code>xbuild-tools</code></a> option, which allows you to specify tools safe for cross-compilation. Currently only used on iOS; will be useful for Android in the future. (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2317">#2317</a>)</p> </li> <li> <p>🛠 The default <a href="https://cibuildwheel.pypa.io/en/stable/options/#linux-image">manylinux image</a> has changed from <code>manylinux2014</code> to <code>manylinux_2_28</code>. (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2330">#2330</a>)</p> </li> <li> <p>🛠 EOL images <code>manylinux1</code>, <code>manylinux2010</code>, <code>manylinux_2_24</code> and <code>musllinux_1_1</code> can no longer be specified by their shortname. The full OCI name can still be used for these images, if you wish. (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2316">#2316</a>)</p> </li> <li> <p>🛠 Invokes <code>build</code> rather than <code>pip wheel</code> to build wheels by default. You can control this via the <a href="https://cibuildwheel.pypa.io/en/stable/options/#build-frontend"><code>build-frontend</code></a> option. You might notice that you can see your build log output now! (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2321">#2321</a>)</p> </li> <li> <p>🛠 Build verbosity settings have been reworked to have consistent meanings between build backends when non-zero. 
(<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2339">#2339</a>)</p> </li> <li> <p>🛠 Removed the <code>CIBW_PRERELEASE_PYTHONS</code> and <code>CIBW_FREE_THREADED_SUPPORT</code> options - these have been folded into the <a href="https://cibuildwheel.pypa.io/en/stable/options/#enable"><code>enable</code></a> option instead. (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2095">#2095</a>)</p> </li> <li> <p>🛠 Build environments no longer have setuptools and wheel preinstalled. (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2329">#2329</a>)</p> </li> <li> <p>🛠 Use the standard Schema line for the integrated JSONSchema. (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2433">#2433</a>)</p> </li> <li> <p>⚠️ Dropped support for building Python 3.6 and 3.7 wheels. If you need to build wheels for these versions, use cibuildwheel v2.23.3 or earlier. (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2282">#2282</a>)</p> </li> <li> <p>⚠️ The minimum Python version required to run cibuildwheel is now Python 3.11. You can still build wheels for Python 3.8 and newer. (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/1912">#1912</a>)</p> </li> <li> <p>⚠️ 32-bit Linux wheels no longer built by default - the <a href="https://cibuildwheel.pypa.io/en/stable/options/#archs">arch</a> was removed from <code>&quot;auto&quot;</code>. It now requires explicit <code>&quot;auto32&quot;</code>. Note that modern manylinux images (like the new default, <code>manylinux_2_28</code>) do not have 32-bit versions. (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2458">#2458</a>)</p> </li> <li> <p>⚠️ PyPy wheels no longer built by default, due to a change to our options system. To continue building PyPy wheels, you'll now need to set the <a href="https://cibuildwheel.pypa.io/en/stable/options/#enable"><code>enable</code> option</a> to <code>pypy</code> or <code>pypy-eol</code>. 
(<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2095">#2095</a>)</p> </li> <li> <p>⚠️ Dropped official support for Appveyor. If it was working for you before, it will probably continue to do so, but we can't be sure, because our CI doesn't run there anymore. (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2386">#2386</a>)</p> </li> <li> <p>📚 A reorganisation of the docs, and numerous updates. (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2280">#2280</a>)</p> </li> <li> <p>📚 Use Python 3.14 color output in docs CLI output. (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2407">#2407</a>)</p> </li> <li> <p>📚 Docs now primarily use the pyproject.toml name of options, rather than the environment variable name. (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2389">#2389</a>)</p> </li> <li> <p>📚 README table now matches docs and auto-updates. (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2427">#2427</a>, <a href="https://redirect.github.com/pypa/cibuildwheel/issues/2428">#2428</a>)</p> </li> </ul> <h2>v3.0.0rc3</h2> <p>Not yet released, but available for testing.</p> <p>Note - when using a beta version, be sure to check the <a href="https://cibuildwheel.pypa.io/en/latest/">latest docs</a>, rather than the stable version, which is still on v2.X.</p> <!-- raw HTML omitted --> <p>If you've used previous versions of the beta:</p> <ul> <li>⚠️ Previous betas of v3.0 changed the working directory for tests. This has been rolled back to the v2.x behaviour, so you might need to change configs if you adapted to the beta 1 or 2 behaviour. 
See [issue <a href="https://redirect.github.com/pypa/cibuildwheel/issues/2406">#2406</a>](<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2406">pypa/cibuildwheel#2406</a>) for more information.</li> <li>⚠️ GraalPy shipped with the identifier <code>gp242-*</code> in previous betas, this has been changed to <code>gp311_242-*</code> to be consistent with other interpreters, and to fix a bug with GraalPy and project requires-python detection. If you were using GraalPy, you might need to update your config to use the new identifier.</li> <li>⚠️ <code>test-sources</code> now uses <code>project</code> directory instead of the <code>package</code> directory (matching the docs).</li> <li>⚠️ 32-bit linux builds were removed from <code>&quot;auto&quot;</code> (the default), now require <code>&quot;auto32&quot;</code> or explicit archs, as modern manylinux images (including our new default) do not support them.</li> </ul> <!-- raw HTML omitted --> </blockquote> <p>... (truncated)</p> </details> <details> <summary>Changelog</summary> <p><em>Sourced from <a href="https://github.com/pypa/cibuildwheel/blob/main/docs/changelog.md">pypa/cibuildwheel's changelog</a>.</em></p> <blockquote> <h3>v3.0.0</h3> <p><em>11 June 2025</em></p> <p>See <a href="https://github.com/henryiii"><code>@​henryiii</code></a>'s <a href="https://iscinumpy.dev/post/cibuildwheel-3-0-0/">release post</a> for more info on new features!</p> <ul> <li> <p>🌟 Adds the ability to <a href="https://cibuildwheel.pypa.io/en/stable/platforms/#ios">build wheels for iOS</a>! Set the <a href="https://cibuildwheel.pypa.io/en/stable/options/#platform"><code>platform</code> option</a> to <code>ios</code> on a Mac with the iOS toolchain to try it out! 
(<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2286">#2286</a>, <a href="https://redirect.github.com/pypa/cibuildwheel/issues/2363">#2363</a>, <a href="https://redirect.github.com/pypa/cibuildwheel/issues/2432">#2432</a>)</p> </li> <li> <p>🌟 Adds support for the GraalPy interpreter! Enable for your project using the <a href="https://cibuildwheel.pypa.io/en/stable/options/#enable"><code>enable</code> option</a>. (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/1538">#1538</a>, <a href="https://redirect.github.com/pypa/cibuildwheel/issues/2411">#2411</a>, <a href="https://redirect.github.com/pypa/cibuildwheel/issues/2414">#2414</a>)</p> </li> <li> <p>✨ Adds CPython 3.14 support, under the <a href="https://cibuildwheel.pypa.io/en/stable/options/#enable"><code>enable</code> option</a> <code>cpython-prerelease</code>. This version of cibuildwheel uses 3.14.0b2. (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2390">#2390</a>)</p> <p><em>While CPython is in beta, the ABI can change, so your wheels might not be compatible with the final release. For this reason, we don't recommend distributing wheels until RC1, at which point 3.14 will be available in cibuildwheel without the flag.</em> (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2390">#2390</a>)</p> </li> <li> <p>✨ Adds the <a href="https://cibuildwheel.pypa.io/en/stable/options/#test-sources">test-sources option</a>, and changes the working directory for tests. (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2062">#2062</a>, <a href="https://redirect.github.com/pypa/cibuildwheel/issues/2284">#2284</a>, <a href="https://redirect.github.com/pypa/cibuildwheel/issues/2437">#2437</a>)</p> <ul> <li>If this option is set, cibuildwheel will copy the files and folders specified in <code>test-sources</code> into the temporary directory we run from. 
This is required for iOS builds, but also useful for other platforms, as it allows you to avoid placeholders.</li> <li>If this option is not set, behaviour matches v2.x - cibuildwheel will run the tests from a temporary directory, and you can use the <code>{project}</code> placeholder in the <code>test-command</code> to refer to the project directory. (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2420">#2420</a>)</li> </ul> </li> <li> <p>✨ Adds <a href="https://cibuildwheel.pypa.io/en/stable/options/#dependency-versions"><code>dependency-versions</code></a> inline syntax (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2122">#2122</a>)</p> </li> <li> <p>✨ Improves support for Pyodide builds and adds the experimental <a href="https://cibuildwheel.pypa.io/en/stable/options/#pyodide-version"><code>pyodide-version</code></a> option, which allows you to specify the version of Pyodide to use for builds. (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2002">#2002</a>)</p> </li> <li> <p>✨ Add <code>pyodide-prerelease</code> <a href="https://cibuildwheel.pypa.io/en/stable/options/#enable">enable</a> option, with an early build of 0.28 (Python 3.13). (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2431">#2431</a>)</p> </li> <li> <p>✨ Adds the <a href="https://cibuildwheel.pypa.io/en/stable/options/#test-environment"><code>test-environment</code></a> option, which allows you to set environment variables for the test command. (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2388">#2388</a>)</p> </li> <li> <p>✨ Adds the <a href="https://cibuildwheel.pypa.io/en/stable/options/#xbuild-tools"><code>xbuild-tools</code></a> option, which allows you to specify tools safe for cross-compilation. Currently only used on iOS; will be useful for Android in the future. 
(<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2317">#2317</a>)</p> </li> <li> <p>🛠 The default <a href="https://cibuildwheel.pypa.io/en/stable/options/#linux-image">manylinux image</a> has changed from <code>manylinux2014</code> to <code>manylinux_2_28</code>. (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2330">#2330</a>)</p> </li> <li> <p>🛠 EOL images <code>manylinux1</code>, <code>manylinux2010</code>, <code>manylinux_2_24</code> and <code>musllinux_1_1</code> can no longer be specified by their shortname. The full OCI name can still be used for these images, if you wish. (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2316">#2316</a>)</p> </li> <li> <p>🛠 Invokes <code>build</code> rather than <code>pip wheel</code> to build wheels by default. You can control this via the <a href="https://cibuildwheel.pypa.io/en/stable/options/#build-frontend"><code>build-frontend</code></a> option. You might notice that you can see your build log output now! (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2321">#2321</a>)</p> </li> <li> <p>🛠 Build verbosity settings have been reworked to have consistent meanings between build backends when non-zero. (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2339">#2339</a>)</p> </li> <li> <p>🛠 Removed the <code>CIBW_PRERELEASE_PYTHONS</code> and <code>CIBW_FREE_THREADED_SUPPORT</code> options - these have been folded into the <a href="https://cibuildwheel.pypa.io/en/stable/options/#enable"><code>enable</code></a> option instead. (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2095">#2095</a>)</p> </li> <li> <p>🛠 Build environments no longer have setuptools and wheel preinstalled. (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2329">#2329</a>)</p> </li> <li> <p>🛠 Use the standard Schema line for the integrated JSONSchema. 
(<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2433">#2433</a>)</p> </li> <li> <p>⚠️ Dropped support for building Python 3.6 and 3.7 wheels. If you need to build wheels for these versions, use cibuildwheel v2.23.3 or earlier. (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2282">#2282</a>)</p> </li> <li> <p>⚠️ The minimum Python version required to run cibuildwheel is now Python 3.11. You can still build wheels for Python 3.8 and newer. (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/1912">#1912</a>)</p> </li> <li> <p>⚠️ 32-bit Linux wheels no longer built by default - the <a href="https://cibuildwheel.pypa.io/en/stable/options/#archs">arch</a> was removed from <code>&quot;auto&quot;</code>. It now requires explicit <code>&quot;auto32&quot;</code>. Note that modern manylinux images (like the new default, <code>manylinux_2_28</code>) do not have 32-bit versions. (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2458">#2458</a>)</p> </li> <li> <p>⚠️ PyPy wheels no longer built by default, due to a change to our options system. To continue building PyPy wheels, you'll now need to set the <a href="https://cibuildwheel.pypa.io/en/stable/options/#enable"><code>enable</code> option</a> to <code>pypy</code> or <code>pypy-eol</code>. (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2095">#2095</a>)</p> </li> <li> <p>⚠️ Dropped official support for Appveyor. If it was working for you before, it will probably continue to do so, but we can't be sure, because our CI doesn't run there anymore. (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2386">#2386</a>)</p> </li> <li> <p>📚 A reorganisation of the docs, and numerous updates. (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2280">#2280</a>)</p> </li> <li> <p>📚 Use Python 3.14 color output in docs CLI output. 
(<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2407">#2407</a>)</p> </li> <li> <p>📚 Docs now primarily use the pyproject.toml name of options, rather than the environment variable name. (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2389">#2389</a>)</p> </li> <li> <p>📚 README table now matches docs and auto-updates. (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2427">#2427</a>, <a href="https://redirect.github.com/pypa/cibuildwheel/issues/2428">#2428</a>)</p> </li> </ul> </blockquote> </details> <details> <summary>Commits</summary> <ul> <li><a href="https://github.com/pypa/cibuildwheel/commit/5f22145df44122af0f5a201f93cf0207171beca7"><code>5f22145</code></a> Bump version: v3.0.0</li> <li><a href="https://github.com/pypa/cibuildwheel/commit/a73177515a438c947d6e6e7a7356dfe67991d740"><code>a731775</code></a> Docs: mobile layout fix (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2466">#2466</a>)</li> <li><a href="https://github.com/pypa/cibuildwheel/commit/ff86a6457781e53a6edbb60d3c2677c64be4f282"><code>ff86a64</code></a> docs: add tips for numpy (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2465">#2465</a>)</li> <li><a href="https://github.com/pypa/cibuildwheel/commit/6f5e480fec0d367f9230ee0be4bcb56136eeec43"><code>6f5e480</code></a> chore: use pip's groups in CI (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2463">#2463</a>)</li> <li><a href="https://github.com/pypa/cibuildwheel/commit/3c5ff0988806752c5a6502c845f9edc2d98095d6"><code>3c5ff09</code></a> Bump version: v3.0.0rc3</li> <li><a href="https://github.com/pypa/cibuildwheel/commit/1b9a56e01487f7fd9146e622505ea22d4d35e954"><code>1b9a56e</code></a> [Bot] Update dependencies (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2455">#2455</a>)</li> <li><a href="https://github.com/pypa/cibuildwheel/commit/aa9fe2a24edd67db40cbd394e4621a479e9e69f1"><code>aa9fe2a</code></a> ci: use uv python for docs (binary b1) 
(<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2462">#2462</a>)</li> <li><a href="https://github.com/pypa/cibuildwheel/commit/e188d9e26007031c475bb5293b90c5f386ecb439"><code>e188d9e</code></a> feat: remove 32-bit linux from auto arch, fix auto32 on linux aarch64 (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2458">#2458</a>)</li> <li><a href="https://github.com/pypa/cibuildwheel/commit/3fa7bd1e72c565f4efc61363db6b2f14dbdbb198"><code>3fa7bd1</code></a> ci: fix cirrus and reduce rebuilds (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2460">#2460</a>)</li> <li><a href="https://github.com/pypa/cibuildwheel/commit/c6368be701a99f8315656f925236dbcec5b9b9c2"><code>c6368be</code></a> Move to the <code>OS-latest</code> image tags on Azure Pipelines (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2461">#2461</a>)</li> <li>Additional commits viewable in <a href="https://github.com/pypa/cibuildwheel/compare/v2.23.3...v3.0.0">compare view</a></li> </ul> </details> <br /> [![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=pypa/cibuildwheel&package-manager=github_actions&previous-version=2.23.3&new-version=3.0.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores) Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`. 
[//]: # (dependabot-automerge-start) [//]: # (dependabot-automerge-end) --- <details> <summary>Dependabot commands and options</summary> <br /> You can trigger Dependabot actions by commenting on this PR: - `@dependabot rebase` will rebase this PR - `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it - `@dependabot merge` will merge this PR after your CI passes on it - `@dependabot squash and merge` will squash and merge this PR after your CI passes on it - `@dependabot cancel merge` will cancel a previously requested merge and block automerging - `@dependabot reopen` will reopen this PR if it is closed - `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually - `@dependabot show <dependency name> ignore conditions` will show all of the ignore conditions of the specified dependency - `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself) </details>
[ "Build", "CI", "Dependencies" ]
0
0
0
0
0
1
0
0
[ "Superseded by #61796." ]
3,149,140,633
61,667
BUG: pd.read_sql is incorrectly reading long int when connecting to Teradata
open
2025-06-16T08:59:08
2025-07-16T02:40:11
null
https://github.com/pandas-dev/pandas/issues/61667
true
null
null
IzidoroBaltazar
0
### Pandas version checks - [x] I have checked that this issue has not already been reported. - [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas. - [ ] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas. ### Reproducible Example ```python import pandas as pd import numpy as np import teradatasql query = """ SELECT longID FROM teradata_table WHERE longID = 305184080441754059; """ dtype = { "QueryID": np.uint64, } with teradatasql.connect( host=HOST, user=USERNAME, password=PASSWORD, logmech="LDAP" ) as connect: df = pd.read_sql(query, connect, dtype=dtype) df.head() # pandas returns longID 305184080441754048 - close but not quite 305184080441754059 ``` ### Issue Description We run into trouble when pulling a long `longID` with 18 digits: pandas reads the Teradata value incorrectly. I also tried using `cast(longID as decimal(18, 0))` to help pandas understand the type of `longID`. So far I haven't found a way to fix the problem of the incorrect value being read. We are using `teradatasql` version `20.0.0.24`, and we can confirm that `teradatasql` itself works correctly, as it gives us the value below when running the query specified above: `Decimal('305184080441754059')` ```python with teradatasql.connect( host=HOST, user=USERNAME, password=PASSWORD, logmech="LDAP" ) as con: with con.cursor() as cur: cur.execute(query) for row in cur: print(f"{row}") ``` ^ This works as expected, so we assume that the Teradata SQL library is working correctly. ### Expected Behavior We expect to see `305184080441754059` as `longID` in the pandas dataframe. 
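The exact drift reported above (`…059` becoming `…048`) is consistent with the value passing through float64 somewhere in the read path. A minimal sketch, independent of Teradata, showing the same snap-to-nearest-double behaviour for integers above 2**53:

```python
import numpy as np

# 305184080441754059 exceeds 2**53, the largest magnitude at which
# float64 can represent every integer exactly. In the range of this
# value, adjacent doubles are 64 apart, so a round-trip through
# float64 snaps it to the nearest multiple of 64.
exact = 305184080441754059
via_float = int(np.float64(exact))

print(via_float)  # 305184080441754048 - the exact value the issue reports
assert exact > 2**53
assert via_float == 305184080441754048
```

This doesn't identify where in `read_sql` the conversion happens, but it does match the observed symptom bit-for-bit, which points at an intermediate float64 rather than a driver bug.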
### Installed Versions <details> INSTALLED VERSIONS ------------------ commit : 2cc37625532045f4ac55b27176454bbbc9baf213 python : 3.12.9 python-bits : 64 OS : Linux OS-release : 4.18.0-477.15.1.el8_8.x86_64 Version : #1 SMP Thu Jul 20 11:31:48 PDT 2023 machine : x86_64 processor : x86_64 byteorder : little LC_ALL : en_US.UTF-8 LANG : en_US.UTF-8 LOCALE : en_US.UTF-8 pandas : 2.3.0 numpy : 1.26.4 pytz : 2025.1 dateutil : 2.9.0.post0 pip : 25.0.1 Cython : None sphinx : None IPython : 8.32.0 adbc-driver-postgresql: None adbc-driver-sqlite : None bs4 : 4.13.3 blosc : None bottleneck : None dataframe-api-compat : None fastparquet : 2024.11.0 fsspec : 2025.5.1 html5lib : None hypothesis : None gcsfs : None jinja2 : 3.1.5 lxml.etree : 5.4.0 matplotlib : None numba : None numexpr : None odfpy : None openpyxl : 3.1.5 pandas_gbq : None psycopg2 : 2.9.10 pymysql : None pyarrow : 20.0.0 pyreadstat : None pytest : None python-calamine : None pyxlsb : None s3fs : None scipy : 1.15.3 sqlalchemy : 2.0.38 tables : None tabulate : None xarray : None xlrd : None xlsxwriter : None zstandard : 0.23.0 tzdata : 2025.2 qtpy : None pyqt5 : None </details>
[ "Bug", "IO SQL", "Needs Triage" ]
0
0
0
0
0
0
0
0
[]
3,148,581,354
61,666
ENH: Support for Orthodox Easter
closed
2025-06-16T05:19:20
2025-06-16T19:39:33
2025-06-16T19:39:27
https://github.com/pandas-dev/pandas/pull/61666
true
https://api.github.com/repos/pandas-dev/pandas/pulls/61666
https://github.com/pandas-dev/pandas/pull/61666
w3stling
1
- [X] closes #61665 - [X] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature - [X] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit). - [X] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions. - [X] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
[ "Frequency" ]
0
0
0
0
0
0
0
0
[ "Thanks @w3stling " ]
3,148,578,801
61,665
ENH: Support for Orthodox Easter
closed
2025-06-16T05:17:43
2025-06-16T19:39:28
2025-06-16T19:39:28
https://github.com/pandas-dev/pandas/issues/61665
true
null
null
w3stling
0
### Feature Type - [x] Adding new functionality to pandas - [ ] Changing existing functionality in pandas - [ ] Removing existing functionality in pandas ### Problem Description The [pandas.tseries.offsets.Easter](https://github.com/pandas-dev/pandas/blob/c067bcd701e6cb4125e869a2802ef867d8395800/pandas/_libs/tslibs/offsets.pyx#L4511) class currently calculates the date of Western Easter only. However, it does not support the calculation of Orthodox Easter. This limitation makes it more difficult to work with holidays that are relative to Orthodox Easter, such as Orthodox Good Friday and Orthodox Easter Monday. ### Feature Description With a small and fully backwards-compatible change to the [pandas.tseries.offsets.Easter](https://github.com/pandas-dev/pandas/blob/c067bcd701e6cb4125e869a2802ef867d8395800/pandas/_libs/tslibs/offsets.pyx#L4511) class, support for Orthodox Easter (and Julian Easter) can be added by introducing an optional `method` parameter to the `Easter` constructor. This `method` parameter specifies the method to use for calculating easter and would then be passed to [dateutil.easter](https://dateutil.readthedocs.io/en/stable/easter.html), which is used internally by the Easter class. Usage example: ```python from dateutil.easter import EASTER_ORTHODOX OrthodoxGoodFriday = Holiday("Good Friday", month=1, day=1, offset=[Easter(method=EASTER_ORTHODOX), Day(-2)]) OrthodoxEasterMonday = Holiday("Easter Monday", month=1, day=1, offset=[Easter(method=EASTER_ORTHODOX), Day(1)]) ``` This is similar to how the [GoodFriday](https://github.com/pandas-dev/pandas/blob/c067bcd701e6cb4125e869a2802ef867d8395800/pandas/tseries/holiday.py#L609) and [EasterMonday](https://github.com/pandas-dev/pandas/blob/c067bcd701e6cb4125e869a2802ef867d8395800/pandas/tseries/holiday.py#L611) holidays for Western Easter are implemented in the `pandas.tseries.holiday` module. 
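For context, the `dateutil.easter` function the proposal would delegate to already supports both calculations; a minimal sketch (the 2024 dates shown were checked against published calendars):

```python
from dateutil.easter import easter, EASTER_ORTHODOX, EASTER_WESTERN

# Western and Orthodox Easter usually fall on different dates.
print(easter(2024, method=EASTER_WESTERN))   # 2024-03-31
print(easter(2024, method=EASTER_ORTHODOX))  # 2024-05-05
```

Since `pandas.tseries.offsets.Easter` already calls into `dateutil.easter` internally, threading a `method` argument through is a small surface change, as the issue argues.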
### Alternative Solutions An alternative solution, without modifying the Easter class as suggested, is to use the observance parameter. ```python def calculate_orthodox_good_friday(dt): offset = easter(dt.year, method=EASTER_ORTHODOX) - timedelta(days=2) - dt.date() return dt + offset OrthodoxGoodFriday = Holiday( "Good Friday", month=1, day=1, observance=calculate_orthodox_good_friday) ```
[ "Enhancement", "Needs Triage" ]
0
0
0
0
0
0
0
0
[]
3,148,398,505
61,664
Fix some incorrect indents in development documentation
closed
2025-06-16T03:13:09
2025-06-17T23:14:10
2025-06-17T16:52:30
https://github.com/pandas-dev/pandas/pull/61664
true
https://api.github.com/repos/pandas-dev/pandas/pulls/61664
https://github.com/pandas-dev/pandas/pull/61664
koyuki7w
4
Incorrect indentations cause some texts to be misinterpreted as quoteblocks.
[ "Docs" ]
0
0
0
0
0
0
0
0
[ "/preview", "/preview", "Website preview of this PR available at: https://pandas.pydata.org/preview/pandas-dev/pandas/61664/", "Thanks @koyuki7w " ]
3,148,053,110
61,663
BUG: Incorrect guess_datetime_format response
closed
2025-06-15T20:46:06
2025-06-18T19:33:38
2025-06-16T18:53:51
https://github.com/pandas-dev/pandas/issues/61663
true
null
null
logan-dunbar
5
### Pandas version checks - [x] I have checked that this issue has not already been reported. - [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas. - [ ] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas. ### Reproducible Example ```python import pandas as pd print(pd.tseries.api.guess_datetime_format('2025-06-15T21:25:00.000000Z')) print(pd.tseries.api.guess_datetime_format('2025-06-15T20:24:00.000000Z')) print(pd.tseries.api.guess_datetime_format('2025-06-15T20:25:00.000000Z')) # %Y-%m-%dT%H:%M:%S.%f%z # %Y-%m-%dT%H:%M:%S.%f%z # None ``` ### Issue Description I'm receiving a strange `None` from `guess_datetime_format` for a very particular string combination. I can change the hours and minutes separately and it works fine, but when I set the time to exactly `20:25` it produces a None result. ### Expected Behavior It should produce the same `'%Y-%m-%dT%H:%M:%S.%f%z'` format as the other two examples. 
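As an aside not part of the original report: on affected versions, parsing does not depend on format guessing if the format is supplied explicitly, which sidesteps the `None` result entirely:

```python
import pandas as pd

# Supplying the format avoids guess_datetime_format altogether.
ts = pd.to_datetime('2025-06-15T20:25:00.000000Z',
                    format='%Y-%m-%dT%H:%M:%S.%f%z')
print(ts)  # 2025-06-15 20:25:00+00:00
```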
### Installed Versions <details> INSTALLED VERSIONS ------------------ commit : 2cc37625532045f4ac55b27176454bbbc9baf213 python : 3.10.12 python-bits : 64 OS : Linux OS-release : 6.15.1-061501-generic Version : #202506041425 SMP PREEMPT_DYNAMIC Wed Jun 4 18:01:32 UTC 2025 machine : x86_64 processor : x86_64 byteorder : little LC_ALL : C.UTF-8 LANG : C.UTF-8 LOCALE : en_US.UTF-8 pandas : 2.3.0 numpy : 1.26.4 pytz : 2025.2 dateutil : 2.9.0.post0 pip : 25.0.1 Cython : 0.29.37 sphinx : None IPython : None adbc-driver-postgresql: None adbc-driver-sqlite : None bs4 : None blosc : None bottleneck : None dataframe-api-compat : None fastparquet : None fsspec : 2024.6.1 html5lib : None hypothesis : None gcsfs : None jinja2 : 3.1.4 lxml.etree : None matplotlib : 3.10.3 numba : None numexpr : None odfpy : None openpyxl : None pandas_gbq : None psycopg2 : None pymysql : None pyarrow : None pyreadstat : None pytest : None python-calamine : None pyxlsb : None s3fs : None scipy : 1.11.1 sqlalchemy : None tables : None tabulate : None xarray : None xlrd : None xlsxwriter : None zstandard : None tzdata : 2025.2 qtpy : None pyqt5 : None </details>
[ "Bug", "Datetime", "Duplicate Report", "Needs Triage" ]
0
0
0
0
0
0
0
0
[ "* Confirmed on `2.3.0`:\n\n ```python\n >>> import pandas as pd\n >>> print(pd.tseries.api.guess_datetime_format('2025-06-15T21:25:00.000000Z'))\n %Y-%m-%dT%H:%M:%S.%f%z\n >>> print(pd.tseries.api.guess_datetime_format('2025-06-15T20:24:00.000000Z'))\n %Y-%m-%dT%H:%M:%S.%f%z\n >>> print(pd.tseries.api.guess_datetime_format('2025-06-15T20:25:00.000000Z'))\n None\n ```\n* Works on the main branch, further investigation is required:\n ```python\n >>> import pandas as pd\n >>> print(pd.tseries.api.guess_datetime_format('2025-06-15T21:25:00.000000Z'))\n %Y-%m-%dT%H:%M:%S.%f%z\n >>> print(pd.tseries.api.guess_datetime_format('2025-06-15T20:24:00.000000Z'))\n %Y-%m-%dT%H:%M:%S.%f%z\n >>> print(pd.tseries.api.guess_datetime_format('2025-06-15T20:25:00.000000Z'))\n %Y-%m-%dT%H:%M:%S.%f%z\n ```", "Look like the issue resolved in https://github.com/pandas-dev/pandas/pull/57471, this issue should be able to be closed.\n", "Thanks for the investigation @chilin0525. Closing.", "That pull request was completed over a year ago, will it make it into the next versioned release?", "It is currently slated for release with pandas 3.0\nunsure if this will get backported to 2.3.x\n\ncc @mroeschke " ]
3,147,665,359
61,662
DOC: Improve documentation for DataFrame.__setitem__ and .loc assignment from Series
closed
2025-06-15T15:06:12
2025-08-01T15:31:05
2025-08-01T15:31:05
https://github.com/pandas-dev/pandas/issues/61662
true
null
null
cxder-77
1
### Pandas version checks - [x] I have checked that the issue still exists on the latest versions of the docs on `main` [here](https://pandas.pydata.org/docs/dev/) ### Location of the documentation pandas.DataFrame.__setitem__ https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.__setitem__.html pandas.core.indexing.IndexingMixin.loc https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.loc.html User Guide: Indexing and Selecting Data https://pandas.pydata.org/docs/user_guide/indexing.html ### Documentation problem **Documentation Enhancement** The following behavior is not clearly explained in the documentation: ```python import pandas as pd df = pd.DataFrame({'a': [1, 2, 3]}) df['b'] = pd.Series({1: 'b'}) print(df) # Output: # a b # 0 1 NaN # 1 2 b # 2 3 NaN ``` - The Series is **reindexed** to match the DataFrame index. - Values are inserted **by index label**, not by position. - Missing labels yield **NaN**, and the order is adjusted accordingly. This behavior is: - Not explained in the `__setitem__` documentation (which is missing entirely). - Only mentioned vaguely in `.loc` docs, with no example. - Absent from the "Indexing and Selecting Data" user guide when assigning Series with unordered or partial index. ### Suggested fix for documentation 1. **Add docstring for `DataFrame.__setitem__`** with clear explanation that: > When assigning a Series, pandas aligns on index. Values in the Series that don't match an index label will result in `NaN`. 2. **Update `.loc` documentation**: Include a note that when assigning a Series to `.loc[row_labels, col]`, pandas aligns the Series by index and **not by order**. 3. **Add example in the User Guide** under: [Indexing and Selecting Data](https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html) > Assigning a Series with unordered/missing index keys to a DataFrame column. 
**Suggested example:** ```python df = pd.DataFrame({'a': [1, 2, 3]}) s = pd.Series({2: 'zero', 1: 'one', 0: 'two'}) df['d'] = s # Output: # a d # 0 1 two # 1 2 one # 2 3 zero ``` ### 📈 Why this is better: The current documentation is incomplete and vague about how Series alignment works in assignments. This fix: - Makes `__setitem__` behavior explicit and discoverable. - Improves `.loc` docs with better clarity and practical context. - Adds real-world examples to the user guide to reduce silent bugs and confusion. These improvements help all users—especially beginners—understand how pandas handles Series assignment internally.
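A minimal standalone sketch of the alignment behavior this issue describes (an illustrative example, not part of the original report):

```python
import pandas as pd

# Assigning a Series aligns on index labels, not on position.
df = pd.DataFrame({"a": [1, 2, 3]})
df["b"] = pd.Series({1: "b"})  # only label 1 matches the DataFrame index

# Labels 0 and 2 have no match in the Series, so those rows become NaN.
assert df.loc[1, "b"] == "b"
assert df["b"].isna().sum() == 2
```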
[ "Docs", "Needs Triage" ]
0
0
0
0
0
1
0
0
[ "take" ]
3,147,021,128
61,661
DOC: Make the benchmarks URL clickable
closed
2025-06-15T02:58:47
2025-06-16T16:57:51
2025-06-16T16:57:28
https://github.com/pandas-dev/pandas/pull/61661
true
https://api.github.com/repos/pandas-dev/pandas/pulls/61661
https://github.com/pandas-dev/pandas/pull/61661
star1327p
1
Make the benchmarks URL clickable. - [ ] closes #xxxx (Replace xxxx with the GitHub issue number) - [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature - [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit). - [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions. - [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
[ "Docs" ]
0
0
0
0
0
0
0
0
[ "Thanks @star1327p " ]
3,146,495,650
61,660
BUG: Type error supplying SQLAlchemy NVARCHAR length in to_sql()
open
2025-06-14T18:25:32
2025-07-23T13:43:16
null
https://github.com/pandas-dev/pandas/issues/61660
true
null
null
philipnye
3
### Pandas version checks - [x] I have checked that this issue has not already been reported. - [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas. - [x] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas. ### Reproducible Example ```python import os import uuid import pandas as pd from sqlalchemy import create_engine from sqlalchemy.types import NVARCHAR from sqlalchemy.dialects.mssql import UNIQUEIDENTIFIER import urllib.parse connection = create_engine( f"mssql+pyodbc:///?odbc_connect={ urllib.parse.quote_plus( 'DRIVER=' + os.environ['ODBC_DRIVER'] + ';' 'SERVER=' + os.environ['ODBC_SERVER'] + ';' 'DATABASE=' + os.environ['ODBC_DATABASE'] + ';' 'UID=' + os.environ['AZURE_CLIENT_ID'] + ';' 'PWD=' + os.environ['AZURE_CLIENT_SECRET'] + ';' 'Authentication=' + os.environ['ODBC_AUTHENTICATION'] ) }" ) df = pd.DataFrame( columns=['id', 'name'], data=[ [str(uuid.uuid4()), 'hello'], [str(uuid.uuid4()), 'world'], [str(uuid.uuid4()), 'foo'], [str(uuid.uuid4()), 'bar'], [str(uuid.uuid4()), 'baz'], ] ) df.to_sql( 'test', con=connection, dtype={ 'id': UNIQUEIDENTIFIER, 'name': NVARCHAR(length=1024), }, index=False, ) ``` ### Issue Description This raises `Argument of type "dict[str, type[UNIQUEIDENTIFIER[_UUID_RETURN@UNIQUEIDENTIFIER]] | NVARCHAR]" cannot be assigned to parameter "dtype" of type "DtypeArg | None" in function "to_sql".` when using a type checker (Pylance/Pyright). 
### Expected Behavior No type error is raised ### Installed Versions INSTALLED VERSIONS ------------------ commit : 2cc37625532045f4ac55b27176454bbbc9baf213 python : 3.13.0 python-bits : 64 OS : Windows OS-release : 11 Version : 10.0.26100 machine : AMD64 processor : Intel64 Family 6 Model 154 Stepping 4, GenuineIntel byteorder : little LC_ALL : None LANG : None LOCALE : English_United Kingdom.1252 pandas : 2.3.0 numpy : 2.1.3 pytz : 2024.2 dateutil : 2.9.0.post0 pip : 25.0 Cython : None sphinx : None IPython : 8.30.0 adbc-driver-postgresql: None adbc-driver-sqlite : None bs4 : 4.12.3 blosc : None bottleneck : None dataframe-api-compat : None fastparquet : None fsspec : None html5lib : None hypothesis : None gcsfs : None jinja2 : 3.1.4 lxml.etree : None matplotlib : 3.10.0 numba : None numexpr : None odfpy : None openpyxl : 3.1.5 pandas_gbq : None psycopg2 : None pymysql : None pyarrow : 18.1.0 pyreadstat : None pytest : None python-calamine : None pyxlsb : None s3fs : None scipy : 1.14.1 sqlalchemy : 2.0.36 tables : None tabulate : 0.9.0 xarray : None xlrd : None xlsxwriter : None zstandard : None tzdata : 2024.2 qtpy : None pyqt5 : None
[ "Bug", "IO SQL", "Typing", "Needs Triage" ]
1
0
0
0
0
0
0
0
[ "As far as I can tell, the typing of the dtype argument expects a _type class_ (e.g. `NVARCHAR`) or the name of it, not an actual _instance_ of a type class, which is instantiated using the `NVARCHAR(lenght=1024)` _constructor_.\n\n_(I am not sure if I am using the python typing terminology correctly, just trying to use my basic OOP knowledge)_", "Shouldn't this issue be [raised at pandas-dev/pandas-stubs](https://github.com/pandas-dev/pandas-stubs/issues/) instead? 🤔", "See also #9138 which made this notation possible" ]
3,146,486,878
61,659
BUG: to_numeric fails to convert a Pyarrow Decimal series containing NA values
open
2025-06-14T18:21:40
2025-08-19T00:08:27
null
https://github.com/pandas-dev/pandas/pull/61659
true
https://api.github.com/repos/pandas-dev/pandas/pulls/61659
https://github.com/pandas-dev/pandas/pull/61659
chilin0525
1
- [x] closes #61641 - [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature - [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit). - [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions. - [x] Added an entry in the latest `doc/source/whatsnew/v3.0.0.rst` file if fixing a bug or adding a new feature.
[ "Bug", "Dtype Conversions", "Stale", "Arrow" ]
0
0
0
0
0
0
0
0
[ "This pull request is stale because it has been open for thirty days with no activity. Please [update](https://pandas.pydata.org/pandas-docs/stable/development/contributing.html#updating-your-pull-request) and respond to this comment if you're still interested in working on this." ]
3,146,121,756
61,658
DOC/ENH: Holiday days_of_week value error
closed
2025-06-14T12:50:04
2025-06-18T18:29:20
2025-06-17T16:53:08
https://github.com/pandas-dev/pandas/pull/61658
true
https://api.github.com/repos/pandas-dev/pandas/pulls/61658
https://github.com/pandas-dev/pandas/pull/61658
sharkipelago
2
Changed `Holiday` constructor argument `days_of_week` to raise a `ValueError` on input of the incorrect type as discussed in #61600 - [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature - [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit). - [x] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
[ "Frequency" ]
0
0
0
0
0
0
0
0
[ "I also edited the docstring a little bit as I was a bit confused what days_of_week was supposed to do from the docstring alone. \r\n\r\nHappy to delete the docstring edits if they don't make sense though.", "Thanks @sharkipelago " ]
3,144,911,668
61,657
WEB: Reorganization of the Ecosystem page
closed
2025-06-13T22:04:43
2025-06-16T21:16:35
2025-06-16T21:16:30
https://github.com/pandas-dev/pandas/pull/61657
true
https://api.github.com/repos/pandas-dev/pandas/pulls/61657
https://github.com/pandas-dev/pandas/pull/61657
datapythonista
3
I changed the sections of the Ecosystem page so that the pandas extensions are grouped together first, and the other sections emphasize how packages are related to pandas. I merged the two separate IO sections, and the IDEs with the development tools. I added line breaks to very long lines and improved the styles a bit. It's probably better to check the preview when ready than the diff, as the diff will be long and difficult to follow since mostly every library is moved.
[ "Web" ]
1
0
0
0
0
0
0
0
[ "/preview", "Website preview of this PR available at: https://pandas.pydata.org/preview/pandas-dev/pandas/61657/community/ecosystem.html", "Thanks @datapythonista " ]
3,144,296,766
61,656
WEB: Clean up Ecosystem page
closed
2025-06-13T17:46:33
2025-06-13T20:38:27
2025-06-13T20:38:21
https://github.com/pandas-dev/pandas/pull/61656
true
https://api.github.com/repos/pandas-dev/pandas/pulls/61656
https://github.com/pandas-dev/pandas/pull/61656
datapythonista
1
Removing projects from our Ecosystem page that don't seem maintained or much used (2+ years of inactivity on their GitHub). Also, ArcticDB had a whole user guide in the page; I'm leaving only the overview, as other projects have, with a link to their docs for users who are interested.
[ "Web" ]
0
0
0
0
0
0
0
0
[ "Thanks @datapythonista " ]
3,144,201,465
61,655
WEB: Add table of contents to the Ecosystem
closed
2025-06-13T17:09:07
2025-06-13T20:37:48
2025-06-13T20:37:43
https://github.com/pandas-dev/pandas/pull/61655
true
https://api.github.com/repos/pandas-dev/pandas/pulls/61655
https://github.com/pandas-dev/pandas/pull/61655
datapythonista
1
Supersedes #61595 Changes to make the tables of contents in the website the same in all pages (we had customized the PDEP ones before). Adding the ToC to Ecosystem. It renders like this: ![Screenshot at 2025-06-13 19-02-56](https://github.com/user-attachments/assets/a3471680-b9f9-46e6-b02d-0d379d75efec) I can make it just one level if preferred. I think one level looks nicer, but two levels, while uglier, are more practical (and simpler, since it's the same for all pages). But no strong preference from my side
[ "Web" ]
0
0
0
0
0
0
0
0
[ "Thanks @datapythonista " ]
3,143,997,746
61,654
DOC: Add release notes template for 2.3.1
closed
2025-06-13T15:47:14
2025-06-19T20:12:41
2025-06-19T20:12:26
https://github.com/pandas-dev/pandas/pull/61654
true
https://api.github.com/repos/pandas-dev/pandas/pulls/61654
https://github.com/pandas-dev/pandas/pull/61654
jorisvandenbossche
2
Starting the release notes for 2.3.1, so we can start adding things (for new PRs, but will also follow-up with moving some things from 2.3.0 to 2.3.1)
[ "Docs" ]
0
0
0
0
0
0
0
0
[ "@rhshadrach I remember you were adding those in a PR with a bug fix. I think we should better merge this, but since your PR will conflict, I'll leave it to you.", "Thanks @jorisvandenbossche " ]
3,143,958,531
61,653
[backport 2.3.x] CI: Fix slow mamba solver issue by limiting boto3 version (#61594)
closed
2025-06-13T15:33:41
2025-06-24T13:48:07
2025-06-24T13:48:01
https://github.com/pandas-dev/pandas/pull/61653
true
https://api.github.com/repos/pandas-dev/pandas/pulls/61653
https://github.com/pandas-dev/pandas/pull/61653
jorisvandenbossche
0
Backport of https://github.com/pandas-dev/pandas/pull/61594
[]
0
0
0
0
0
0
0
0
[]
3,143,946,403
61,652
[backport 2.3.x] TST: update xfail xarray version check in to_xarray test (#61648)
closed
2025-06-13T15:28:54
2025-06-13T16:51:11
2025-06-13T16:51:07
https://github.com/pandas-dev/pandas/pull/61652
true
https://api.github.com/repos/pandas-dev/pandas/pulls/61652
https://github.com/pandas-dev/pandas/pull/61652
jorisvandenbossche
3
Backport of #61648 Probably not needed to backport because we might not get the latest xarray versions in the envs of the 2.3.x branch, but still useful in case someone runs the tests with more up to date xarray
[]
0
0
0
0
0
0
0
0
[ "Not sure why we get the failures in the CI, I don't think they happen in `main`. And I don't remember any recent PR fixing those.", "> Not sure why we get the failures in the CI, I don't think they happen in `main`.\r\n\r\nCurrently it is the python-dev windows one that is failing. It has been failing consistently the last few days while I have been doing backports. \r\nQuickly checking, this was not yet failing last week when the 2.3.0 release was done. Comparing that build with the first failing one of this week, there are various things that updated. It's a newer version of the image, python updated from 3.13.3 to 3.13.4, cython updated from 3.1.1 to 3.1.2.\r\n\r\nWhat seems fishy is that the error message is saying \"LINK : fatal error LNK1104: cannot open file 'python313t.lib'\", but this is not for a free-threaded build.\r\n\r\nNow, it seems that Python 3.13.4 has several unintended regressions, which are fixed in 3.13.5 (https://www.python.org/downloads/release/python-3135/, one related to building on windows), so it is probably just a matter of waiting until we get 3.13.5 through the setup action.", "And the reason it is not failing on main is because there 3.13 is tested through a conda env (https://github.com/pandas-dev/pandas/pull/61333), and conda-forge directly updated from 3.13.3 to 3.13.5, avoiding the issue" ]
3,143,767,689
61,651
feature #58141: Consistent naming conventions for string dtype aliases
open
2025-06-13T14:27:41
2025-07-27T00:10:14
null
https://github.com/pandas-dev/pandas/pull/61651
true
https://api.github.com/repos/pandas-dev/pandas/pulls/61651
https://github.com/pandas-dev/pandas/pull/61651
pedromfdiogo
2
Key implementation steps: - Created factory functions (string, datetime, integer, floating, decimal, boolean, list, categorical, interval, period, sparse, date, duration, map, struct) to generate pandas dtypes (e.g., StringDtype, Int64Dtype, ArrowDtype) based on parameters like backend, bits, unit, and precision. - Added support for Pandas, NumPy and PyArrow backends, enabling seamless switching (e.g., integer() returns Int64Dtype for Pandas or ArrowDtype(pa.int64()) for PyArrow). - Implemented parameter validation to ensure correct usage (e.g., validating mode in string() to be "string" or "binary", and unit in datetime() for NumPy). - Integrated PyArrow types for advanced dtypes (e.g., pa.float64(), pa.list_(), pa.map_()), supporting modern data processing frameworks. - Implemented comprehensive tests in test_factory.py to validate dtype creation across all functions, ensuring correct behavior for different backends, verifying string representations (e.g., "double[pyarrow]" for pa.float64()), and confirming proper error handling (e.g., raising ValueError for invalid inputs). - Addressed PyArrow compatibility by implementing correct method calls, such as using pa.bool_() for boolean dtypes, ensuring proper integration. This change simplifies dtype creation, reduces duplication, and ensures compatibility across backends, making it easier to extend support for new dtypes in the future. - [x] closes #58141 - [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature - [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit). - [x] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions. - [ ] Added an entry in the latest `doc/source/whatsnew/v3.0.0.rst` file if fixing a bug or adding a new feature.
[ "Enhancement", "API Design", "Stale" ]
0
0
0
0
0
0
0
0
[ "@simonjayhawkins Hi! Just wanted to check if this PR needs anything else from my side. Thanks in advance for reviewing", "This pull request is stale because it has been open for thirty days with no activity. Please [update](https://pandas.pydata.org/pandas-docs/stable/development/contributing.html#updating-your-pull-request) and respond to this comment if you're still interested in working on this." ]
3,143,609,348
61,650
feature #49580: support new-style float_format string in to_csv
closed
2025-06-13T13:35:01
2025-07-08T15:48:28
2025-07-08T15:48:22
https://github.com/pandas-dev/pandas/pull/61650
true
https://api.github.com/repos/pandas-dev/pandas/pulls/61650
https://github.com/pandas-dev/pandas/pull/61650
pedromfdiogo
5
feat(to_csv): support new-style float_format strings using str.format Detect and process new-style format strings (e.g., "{:,.2f}") in the float_format parameter of to_csv. - Check if float_format is a string and matches new-style pattern - Convert it to a callable (e.g., lambda x: float_format.format(x)) - Ensure compatibility with NaN values and mixed data types - Improves formatting output for floats when exporting to CSV Example: df = pd.DataFrame([1234.56789, 9876.54321]) df.to_csv(float_format="{:,.2f}") # now outputs formatted values like 1,234.57 and support new-style without .format - [x] closes #49580 - [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature - [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit). - [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions. - [x] Added an entry in the latest `doc/source/whatsnew/v3.0.0.rst` file if fixing a bug or adding a new feature.
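The string-to-callable conversion this PR describes can be sketched in plain Python (a hypothetical illustration of the approach, not the actual pandas implementation; `normalize_float_format` is an invented name):

```python
import re

def normalize_float_format(float_format):
    """Turn a new-style format string like "{:,.2f}" into a callable,
    mirroring the approach described in the PR (illustrative only)."""
    if isinstance(float_format, str) and re.fullmatch(r"\{.*\}", float_format):
        return lambda x: float_format.format(x)
    return float_format

fmt = normalize_float_format("{:,.2f}")
assert fmt(1234.56789) == "1,234.57"
assert fmt(9876.54321) == "9,876.54"
```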
[ "Enhancement", "IO CSV" ]
0
0
0
0
0
0
0
0
[ "@simonjayhawkins Hi! Just wanted to check if this PR needs anything else from my side. Thanks in advance for reviewing", "Could you add a whatsnew entry in `v3.0.0.rst`?", "@mroeschke What do you think of the updates?", "I failed 1 test, but I don't know why because it doesn't seem to be related to the changes", "Thanks @pedromfdiogo " ]
3,143,422,035
61,649
[backport 2.3.x] API (string dtype): implement hierarchy (NA > NaN, pyarrow > python) for consistent comparisons between different string dtypes (#61138)
closed
2025-06-13T12:29:23
2025-06-13T15:34:28
2025-06-13T15:34:25
https://github.com/pandas-dev/pandas/pull/61649
true
https://api.github.com/repos/pandas-dev/pandas/pulls/61649
https://github.com/pandas-dev/pandas/pull/61649
jorisvandenbossche
0
Backport of https://github.com/pandas-dev/pandas/pull/61138
[]
0
0
0
0
0
0
0
0
[]
3,143,383,541
61,648
TST: update xfail xarray version check in to_xarray test
closed
2025-06-13T12:13:46
2025-06-13T15:29:21
2025-06-13T13:31:19
https://github.com/pandas-dev/pandas/pull/61648
true
https://api.github.com/repos/pandas-dev/pandas/pulls/61648
https://github.com/pandas-dev/pandas/pull/61648
jorisvandenbossche
2
Started to see xpass in https://github.com/pandas-dev/pandas/pull/61594, so updating the version check here. Not sure this is entirely covered by CI, but tested this locally with a few different xarray versions.
[ "Testing" ]
0
0
0
0
0
0
0
0
[ "Owee, I'm MrMeeseeks, Look at me.\n\nThere seem to be a conflict, please backport manually. Here are approximate instructions:\n\n1. Checkout backport branch and update it.\n\n```\ngit checkout 2.3.x\ngit pull\n```\n\n2. Cherry pick the first parent branch of the this PR on top of the older branch:\n```\ngit cherry-pick -x -m1 7def76a79361c8f6aa2893d9a559da03c6cb6d3a\n```\n\n3. You will likely have some merge/cherry-pick conflict here, fix them and commit:\n\n```\ngit commit -am 'Backport PR #61648: TST: update xfail xarray version check in to_xarray test'\n```\n\n4. Push to a named branch:\n\n```\ngit push YOURFORK 2.3.x:auto-backport-of-pr-61648-on-2.3.x\n```\n\n5. Create a PR against branch 2.3.x, I would have named this PR:\n\n> \"Backport PR #61648 on branch 2.3.x (TST: update xfail xarray version check in to_xarray test)\"\n\nAnd apply the correct labels and milestones.\n\nCongratulations — you did some good work! Hopefully your backport PR will be tested by the continuous integration and merged soon!\n\nRemember to remove the `Still Needs Manual Backport` label once the PR gets merged.\n\nIf these instructions are inaccurate, feel free to [suggest an improvement](https://github.com/MeeseeksBox/MeeseeksDev).\n ", "Backport -> https://github.com/pandas-dev/pandas/pull/61652" ]
3,143,286,276
61,647
WEB: Moving maintainers to inactive (no answer from them)
closed
2025-06-13T11:32:13
2025-06-25T17:05:05
2025-06-20T20:30:45
https://github.com/pandas-dev/pandas/pull/61647
true
https://api.github.com/repos/pandas-dev/pandas/pulls/61647
https://github.com/pandas-dev/pandas/pull/61647
datapythonista
4
I couldn't get an answer from @jreback @topper-123 @alimcmaster1 regarding being active or inactive for some time. I'll leave this open for a few days, in case they see it and can confirm. But in the past we've moved people to inactive if they didn't seem active for a long time and we couldn't get confirmation from them on whether they want to continue to be active.
[ "Web" ]
0
0
0
0
0
0
0
0
[ "well i do follow things and in particular not too happy about what is going on\n\nwill reserve the right to vote if needed", "Thanks Jeff for confirming, great news that you are still following the project and interested in voting if needed. Very happy to revert the changes and leaving you as active.\r\n\r\nAnd if you want to share the feedback on why you aren't happy on what's going on, I'm surely very interested to know your opinion.", "I didn't hear from Terji, so I guess he could be moved to inactive, but I think I prefer to just close this PR, since I'm not sure what the governance says in this case.", "> I didn't hear from Terji, so I guess he could be moved to inactive, but I think I prefer to just close this PR, since I'm not sure what the governance says in this case.\r\n\r\n@datapythonista discussed this at the steering committee meeting. The core team decides its membership, so it really should be a core team decision on how to manage the non-response of any individual." ]
3,143,094,907
61,646
fix std/var with complex array
closed
2025-06-13T10:28:07
2025-06-16T17:13:32
2025-06-16T17:13:26
https://github.com/pandas-dev/pandas/pull/61646
true
https://api.github.com/repos/pandas-dev/pandas/pulls/61646
https://github.com/pandas-dev/pandas/pull/61646
randolf-scholz
2
- [x] closes #61645 - [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature - [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit). - [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions. - [x] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
[ "Reduction Operations" ]
0
0
0
0
0
0
0
0
[ "These failures seem unrelated, is the CI broken?", "Thanks @randolf-scholz " ]
3,142,900,511
61,645
BUG: `Series.std` and `Series.var` give incorrect results for complex values.
closed
2025-06-13T09:27:37
2025-06-16T17:13:28
2025-06-16T17:13:27
https://github.com/pandas-dev/pandas/issues/61645
true
null
null
randolf-scholz
0
### Pandas version checks - [x] I have checked that this issue has not already been reported. - [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas. - [ ] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas. ### Reproducible Example ```python import numpy as np import pandas as pd arr = np.array([-1j, 0j, 1j], dtype=complex) s = pd.Series(arr, dtype=complex) print(arr.std(ddof=0)) # 0.816496580927726 print(s.std(ddof=0)) # nan print(arr.var(ddof=0)) # 0.666 print(s.var(ddof=0)) # -0.666 ``` ### Issue Description 1. The results diverge from numpy. 2. pandas yields nonsensical results like negative floats. ### Expected Behavior For complex variables, `std` and `var` should give non-negative floating results. Recall that $` σ ≔ \sqrt{𝔼|x-μ|^2 } `$. Often, authors that only use real-valued variables leave out the absolute value, which I guess is what happened here. 
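The definition cited in the report, σ ≔ sqrt(E|x−μ|²), can be checked directly against NumPy's own result (an illustrative sketch using the reproducer's data):

```python
import numpy as np

arr = np.array([-1j, 0j, 1j], dtype=complex)
mu = arr.mean()                        # 0j for this symmetric sample
var = np.mean(np.abs(arr - mu) ** 2)   # E|x - mu|^2 = (1 + 0 + 1) / 3
std = np.sqrt(var)

assert np.isclose(var, 2 / 3)              # non-negative, as expected
assert np.isclose(std, arr.std(ddof=0))    # matches numpy's std for complex input
```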
### Installed Versions <details> INSTALLED VERSIONS ------------------ commit : 2cc37625532045f4ac55b27176454bbbc9baf213 python : 3.13.4 python-bits : 64 OS : Linux OS-release : 6.11.0-26-generic Version : #26~24.04.1-Ubuntu SMP PREEMPT_DYNAMIC Thu Apr 17 19:20:47 UTC 2 machine : x86_64 processor : x86_64 byteorder : little LC_ALL : None LANG : en_US.UTF-8 LOCALE : en_US.UTF-8 pandas : 2.3.0 numpy : 2.3.0 pytz : 2025.2 dateutil : 2.9.0.post0 pip : 25.1.1 Cython : None sphinx : 8.2.3 IPython : 9.3.0 adbc-driver-postgresql: None adbc-driver-sqlite : None bs4 : 4.13.4 blosc : None bottleneck : None dataframe-api-compat : None fastparquet : None fsspec : 2025.5.1 html5lib : None hypothesis : 6.135.9 gcsfs : None jinja2 : 3.1.6 lxml.etree : None matplotlib : 3.10.3 numba : None numexpr : None odfpy : None openpyxl : None pandas_gbq : None psycopg2 : None pymysql : None pyarrow : 20.0.0 pyreadstat : None pytest : 8.4.0 python-calamine : None pyxlsb : None s3fs : None scipy : 1.15.3 sqlalchemy : None tables : None tabulate : None xarray : None xlrd : None xlsxwriter : None zstandard : None tzdata : 2025.2 qtpy : None pyqt5 : None </details>
[ "Bug", "Needs Triage" ]
0
0
0
0
0
0
0
0
[]
3,142,883,091
61,644
BUG: Add PyArrow datelike type support for `map()`
open
2025-06-13T09:23:38
2025-08-14T21:00:00
null
https://github.com/pandas-dev/pandas/pull/61644
true
https://api.github.com/repos/pandas-dev/pandas/pulls/61644
https://github.com/pandas-dev/pandas/pull/61644
KevsterAmp
6
- [x] closes #61231 (Replace xxxx with the GitHub issue number) - [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature - [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit). - [x] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions. - [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
[ "Bug", "Apply", "Arrow" ]
0
0
0
0
0
0
0
0
[ "Can you add tests based on the original issue", "Looks like some tests are failing in the CI. Can you address those?", "@jbrockmendel sorry I was inactive last 2 weeks. Will take a look at this issue atm", "I now used `self.dtype.kind in \"mM\"` instead. \r\n\r\nI saw that the test function (`test_map()`) uses it to test out datetime likes. So I realized its best to use it to filter out datetime likes on the function as well", "@jbrockmendel Is the fix for the existing test_map enough? or should I add a test similar to the issue as well? Thanks!", "Can you add a test associated with the original issue. it isn't clear from the edited test that it is fixed" ]
3,142,102,084
61,643
BUG: replace value failed
closed
2025-06-13T03:08:09
2025-08-05T17:10:36
2025-08-05T17:10:35
https://github.com/pandas-dev/pandas/issues/61643
true
null
null
peng256
5
### Pandas version checks - [x] I have checked that this issue has not already been reported. - [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas. - [ ] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas. ### Reproducible Example ```python import numpy as np import pandas as pd start_time = '2025-06-06' end_time = '2025-06-09' sig1 = pd.read_parquet('data1.par') sig1 = sig1[(sig1.tradeDate >= start_time) & (sig1.tradeDate <= end_time)] sig1 = sig1.pivot(index='tradeDate', columns='ticker', values='signal_value').fillna(0) sig2 = pd.read_parquet('data2.par') sig2 = sig2[(sig2.tradeDate >= start_time) & (sig2.tradeDate <= end_time)] sig2 = sig2.pivot(index='tradeDate', columns='ticker', values='signal_value').fillna(0) sig = sig1 + sig2 filt = pd.read_feather('filter.fea').set_index('tradeDate') filt.index = pd.to_datetime(filt.index) filt = filt.reindex(sig.index, columns=sig.columns) # method 1: make a copy then filter s1 = sig.copy() s1.values[:] = np.where(filt == 1, s1, np.nan) print(s1.count(axis=1)) # method 2: directly filter sig.values[:] = np.where(filt == 1, sig, np.nan) print(sig.count(axis=1)) ``` ### Issue Description Why not work: sig.values[:] = np.where(filt == 1, sig, np.nan) If using a copy, the sentence above works: s1 = sig.copy() s1.values[:] = np.where(filt == 1, s1, np.nan) ### Expected Behavior Both methods should work. 
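As suggested in the comments, `DataFrame.where` avoids writing into the (possibly read-only) `.values` array. A minimal sketch with synthetic data standing in for the reporter's parquet/feather files:

```python
import numpy as np
import pandas as pd

sig = pd.DataFrame({"A": [1.0, 2.0], "B": [3.0, 4.0]})
filt = pd.DataFrame({"A": [1, 0], "B": [0, 1]}, index=sig.index)

# Keep values where the filter is 1, NaN elsewhere -- no in-place
# mutation of sig.values is needed, so copy-on-write read-only
# backing arrays are not an issue.
masked = sig.where(filt == 1)

assert masked.loc[0, "A"] == 1.0
assert np.isnan(masked.loc[1, "A"])
```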
### Installed Versions [data.zip](https://github.com/user-attachments/files/20718931/data.zip) <details> INSTALLED VERSIONS ------------------ commit : d9cdd2ee5a58015ef6f4d15c7226110c9aab8140 python : 3.12.7.final.0 python-bits : 64 OS : Darwin OS-release : 24.5.0 Version : Darwin Kernel Version 24.5.0: Tue Apr 22 19:48:46 PDT 2025; root:xnu-11417.121.6~2/RELEASE_ARM64_T8103 machine : arm64 processor : i386 byteorder : little LC_ALL : None LANG : None LOCALE : None.UTF-8 pandas : 2.2.2 numpy : 1.26.4 pytz : 2024.1 dateutil : 2.9.0.post0 setuptools : 75.1.0 pip : 24.2 Cython : None pytest : 7.4.4 hypothesis : None sphinx : 7.3.7 blosc : None feather : None xlsxwriter : None lxml.etree : 5.2.1 html5lib : None pymysql : None psycopg2 : None jinja2 : 3.1.4 IPython : 8.27.0 pandas_datareader : None adbc-driver-postgresql: None adbc-driver-sqlite : None bs4 : 4.12.3 bottleneck : 1.3.7 dataframe-api-compat : None fastparquet : None fsspec : 2024.6.1 gcsfs : None matplotlib : 3.9.2 numba : 0.60.0 numexpr : 2.8.7 odfpy : None openpyxl : 3.1.5 pandas_gbq : None pyarrow : 16.1.0 pyreadstat : None python-calamine : None pyxlsb : None s3fs : 2024.6.1 scipy : 1.13.1 sqlalchemy : 2.0.34 tables : 3.10.1 tabulate : 0.9.0 xarray : 2023.6.0 xlrd : None zstandard : 0.23.0 tzdata : 2023.3 qtpy : 2.4.1 pyqt5 : None </details>
[ "Bug", "Needs Info", "replace" ]
0
0
0
0
0
0
0
0
[ "take\n", "I think the underlying values array is read-only, see https://pandas.pydata.org/docs/dev/user_guide/copy_on_write.html#read-only-numpy-arrays\n\nCould you use `pd.where` instead?", "Does anyone working on this? any improvements?", "Can you post an example that is copy-pastable without having to load an unknown zip file? See https://matthewrocklin.com/minimal-bug-reports.html", "Closing as this issue needs more information to be actionable" ]
3,141,658,639
61,642
ENH: Allow third-party packages to register IO engines
closed
2025-06-12T21:58:17
2025-07-03T21:33:32
2025-07-03T17:02:50
https://github.com/pandas-dev/pandas/pull/61642
true
https://api.github.com/repos/pandas-dev/pandas/pulls/61642
https://github.com/pandas-dev/pandas/pull/61642
datapythonista
50
- [X] xref #61584 - [X] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature - [X] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit). - [X] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions. - [X] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature. Added the new system to the Iceberg connection only to keep this smaller. The idea is to add the decorator to all other connectors, happy to do it here or in a follow up PR.
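The kind of engine registration this PR proposes can be illustrated with a simple decorator-based registry (a hypothetical sketch; the actual PR wires engines through package entry points and pandas internals, and every name below is invented for illustration):

```python
# Registry mapping a format name to its available engines.
_io_engines: dict = {}

def register_engine(format_name, engine_name):
    """Register a reader callable for (format, engine) -- illustrative only."""
    def decorator(func):
        _io_engines.setdefault(format_name, {})[engine_name] = func
        return func
    return decorator

# A third-party package would decorate its reader like this.
@register_engine("iceberg", "pyiceberg")
def read_iceberg_pyiceberg(path):
    return f"read {path} with pyiceberg"

def read(format_name, path, engine):
    """Dispatch to the registered engine for the given format."""
    return _io_engines[format_name][engine](path)
```

With entry points, the registry would instead be populated lazily from installed packages, but the dispatch step looks the same.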
[ "IO Data", "API Design" ]
0
0
0
0
0
0
0
0
[ "/preview", "Website preview of this PR available at: https://pandas.pydata.org/preview/pandas-dev/pandas/61642/", "@mroeschke I addressed your comments and I think this should be ready when you've got time. Thanks!", "Fully agree, I just didn't want to make this PR too big by allowing the engines and adding`**kwargs` everywhere here. I'll do it in a follow up.\r\n\r\n@pandas-dev/pandas-core any opinion or comment before merging this?", "Thanks for the ping. I haven't been involved enough to really block, but I'm curious what the advantage of this is over leveraging the Arrow PyCapsule interface for data exchange; I feel like the latter would be a better thing to build against, given it is a rather well adopted standard in the ecosystem", "Good point, thanks for the feedback. I think this is a higher interface that still allows for using the Arrow pycapsule internally. Surely an option would be to get rid of I/O in pandas, and have an ecosystem of readers that can be used via a single `pd.read(engine)` or something similar. But I think it's better to use the current API, allow third party engines as implemented here, and let the engines decide how the data is exchanged. Or what is your idea?", "It would depend on a resolution to https://github.com/pandas-dev/pandas/issues/59631, but for demo purposes let's assume we decide to implement a new `DataFrame.from_pycapsule` class method.\r\n\r\nSo instead of an I/O method like:\r\n\r\n```python\r\ndf = pd.read_iceberg(..., kwargs)\r\n```\r\n\r\nYou could construct a dataframe like:\r\n\r\n```python\r\ndf = pd.DataFrame.from_pycapsule(\r\n # whatever the iceberg API is here to create a table\r\n pyiceberg.table(..., kwargs)\r\n)\r\n```\r\n\r\nThe main downside is verbosity, but the upsides are:\r\n\r\n 1. Generic approach to read any datasource\r\n 2. pandas itself has to do very little development (third parties are responsible for integration)\r\n 3. polars, cudf, etc... 
all benefit in kind\r\n\r\nSo yea this could be extended to say even Delta if they decided to implement the PyCapsule interface (whether they do currently or not, I don't know):\r\n\r\n```python\r\ndf = pd.DataFrame.from_pycapsule(\r\n # whatever the delta API is here to create a table\r\n DeltaLake.table(..., kwargs)\r\n)\r\n```\r\n\r\nand if polars decided on the same API you could create that dataframe as well:\r\n\r\n```python\r\ndf = pl.DataFrame.from_pycapsule(\r\n # whatever the delta API is here to create a table\r\n DeltaLake.table(..., kwargs)\r\n)\r\n```", "As I said though, I haven't been involved enough to really block, so if this PR has some support happy to roll with it and can clean up later if it comes to it. Thanks for giving it consideration @datapythonista ", "i don't know anything about pycapsules. is that effectively a pd.from_arrow_table method that isn't pyarrow-specific?", "Yea kind of. In a generic sense, a PyCapsule is just a Python construct that can wrap generic memory. The Arrow PyCapsule interface further defines lifetime semantics for passing Arrow data across a PyCapsule:\n\nhttps://arrow.apache.org/docs/format/CDataInterface/PyCapsuleInterface.html\n\nSomewhat separately there is the question of how you would read and write the data in a capsule. For pandas that is pyarrow, but other libraries may choose a tool like nanoarrow for a smaller dependency", "@WillAyd what you propose seems reasonable, but I guess we aren't planning to remove all pandas IO anytime soon. And if we keep our readers and writers with multiengine support, I think this interface is going to be useful, even if long term we move into the pyarrow capsule reader you propose. 
Also, since pandas is not arrow based yet, this PR could be used to move the xarray connectors to the xarray package, while using pycapsule wouldn't be ideal for pandas/xarray interchange, as they are numpy based.", "> And if we keep our readers and writers with multiengine support\r\n\r\nATM that's just csv and parquet? And the parquet one plausibly is not needed?\r\n\r\nAFAICT this adds a bunch of new tests/code/docs, complicates the import with entrypoints, and lets 3rd parties hijack our namespace. All when there's a perfectly good option of using their own namespace.\r\n\r\nAlso if we ever did change defaults like #61618 or PDEP16, that would Break The World for any 3rd party readers that we are implicitly committed to supporting.\r\n\r\n-1.", "> Also, since pandas is not arrow based yet, this PR could be used to move the xarray connectors to the xarray package, while using pycapsule wouldn't be ideal for pandas/xarray interchange, as they are numpy based.\r\n\r\nDefinitely not an expert, but I want to point out that DLPack also offers a PyCapsule for data exchange. See https://dmlc.github.io/dlpack/latest/python_spec.html\r\n\r\nSo depending on how generic we want things to be, PyCapsule support doesn't _just_ mean consuming Arrow data, but _could_ mean dlpack data as well (which I assume xarray can do, if it doesn't already)", "> ATM that's just csv and parquet? And the parquet one plausibly is not needed?\r\n\r\nAny reader could have an engine option, this would allow having xarray, sas... as other packages.\r\n\r\n> AFAICT this adds a bunch of new tests/code/docs, complicates the import with entrypoints, and lets 3rd parties hijack our namespace. All when there's a perfectly good option of using their own namespace.\r\n\r\nThis adds minimal tests, code or docs, any other PR adds as many as this. 
We already allow entrypoints in pandas, and 3rd parties cannot use this to change the namespace directly, just to allow `engine=\"foo\"`.\r\n\r\n> Also if we ever did change defaults like https://github.com/pandas-dev/pandas/issues/61618 or PDEP16, that would Break The World for any 3rd party readers that we are implicitly committed to supporting.\r\n\r\nWe could add the value of the flag setting the types to use to the interface, so third parties can transition in the same way as us.\r\n\r\n> -1.\r\n\r\nBeing honest I think you'll block anything that is Bodo related. I think it was best to be -1 to accept their money when they offered it. In any case, I'll close this, and I'll open a separate issue to discuss returning the remaining funds, as I think it can make sense at this point.", "I'm not blocking, just voting against. If everyone else disagrees, I'll make my peace with it.\r\n\r\nI'm not against everything bodo-related, but I do default pretty hard against letting 3rd parties piggyback on our namespace.", "it might be nice to sequester all third-party integrations into an `external` (or `thirdparty`) namespace, like `pd.external.read_iceberg` (and something like `pd.DataFrame.external.to_iceberg`).", "> I'm not blocking, just voting against. If everyone else disagrees, I'll make my peace with it.\r\n\r\nI apologize @jbrockmendel as I think I overreacted. While I don't understand or share your concerns, I think it's good to have your feedback.\r\n\r\nThe problem is that I think the new pandas governance, instead of fixing any of our decision making problems, made them even worse. While your -1 vote is fair and just a point of view that I appreciate having, in practice it means that this PR is dead unless I try the PDEP path and wait out the 3-month time window, which I won't do.\r\n\r\nI'll leave this open for a bit longer, just in case there is enough support on this to make you be ok with it. 
And otherwise I'll close later when it becomes stale.", "> While I don't understand or share your concerns\r\n\r\nNot sharing them is fine, but if you don't understand that means I've done a poor job explaining them, so I'll try again:\r\n\r\nZero users have asked for this. There are zero use cases in which `bodo.read_iceberg` is not a fully-functional (and clearer!) alternative. To my mind, that means there are zero upsides.\r\n\r\nOn the other hand, this complicates our API, which users frequently _do_ complain about. Sure it doesn't complicate it _much_, but these things add up. Similarly more docs and more code and more tests increase burdens, and these things add up. These are very real downsides.\r\n\r\nBut the part that I actually think will come to bite us is that it muddles responsibility. Users won't know what we do and don't maintain, and will complain to us. If we want to change something that breaks a 3rd party reader, people will argue that needs a deprecation process. There will be crappy poorly-implemented engines out there with _our_ name associated with them.\r\n\r\nAll downsides, no upsides.\r\n\r\n> instead of fixing any of our decision making problems, made them even worse\r\n\r\nNo argument there. FWIW I'm pretty comfortable being the \"that doesn't need a full PDEP\" guy.", "> FWIW I'm pretty comfortable being the \"that doesn't need a full PDEP\" guy.\r\n\r\nAs the person usually asking for a PDEP, I don't think one is needed here.\r\n\r\nBut @jbrockmendel brings up some valid concerns, and I leave it to others to determine whether those concerns mean we don't accept this addition to pandas.\r\n", "Thanks @jbrockmendel, this is helpful, I understand better your concerns, and I actually agree with two of your points.\r\n\r\nWhile I think this is actually a very small change, it wouldn't be worth it if the goal was to allow Bodo (or others) to write a reader. 
We got rid of `read_gbq` long ago, because maintenance of just the wrapper was annoying, and also not an open source format, but a paid infra. They implemented `pandas_gbq.read_gbq`, and no big deal. That was a whole format, not an engine, but we could do the same with anything.\r\n\r\nI just find this has some advantages:\r\n\r\n- For writers, what you propose doesn't support method chaining\r\n- With your proposal, if I want to get rid of the xarray IO code in pandas, it's a breaking change. With this proposal I can move the wrapper to xarray, add the entrypoint, and users with xarray installed won't see a difference. Users without xarray may see a different error message if using xarray. And our CI wouldn't need to test for xarray, and more importantly, our environment wouldn't have xarray. If we can do this with few engines, I probably won't have to continue spending countless hours fighting with the problems of the conda solver.\r\n- If our CSV readers (or Excel readers) were moved out to other packages, with your proposal their signatures would become independent. And changing from one engine to another would require changing the module, possibly the function name, and the parameters. With this PR it'd be just the engine name, and parameters only if engine specific parameters exist. For context, I'd like to have pandas CSV engines for Polars and DuckDB, and I'd like that the code lives in those projects. This PR (and the needed follow up to support all formats) would allow it from the pandas side.\r\n\r\nThe second point I agree with is that users may not immediately understand that the engines don't live in pandas. I don't have a solution, but two things to consider.\r\n\r\n- For most connectors the code already lives in an external package, we just maintain a thin wrapper. 
So, in a way, we already take responsibility for other packages' code.\r\n- The docs won't mention external engines, but the API docs can mention external engines are supported, as well as the user guide. For new engines, I think users will have to learn them from the other packages' docs, or our ecosystem page. So, except for the users that read code written by someone else, I think most users can become familiar with the idea that engines are not part of pandas.", "> For writers, what you propose doesn't support method chaining\r\n\r\nA user who is heroin-level obsessed with method chaining can use `.pipe(bodo.to_iceberg)`\r\n\r\n> if I want to get rid of the xarray IO code in pandas\r\n\r\nIf that is the real goal here, please just make an issue for that. In that scenario, I would _also_ say that the relevant reader/writer belongs in its own namespace.", "@twoertwein we discussed supporting what you mention in PDEP-9. And if people aren't convinced to have this for engines only, I don't think there can be consensus for supporting arbitrary formats in the pandas namespace.", "> If that is the real goal here, please just make an issue for that.\r\n\r\nI wouldn't say it's the real goal, but surely one of the main reasons. I wrote PDEP-9 to go into the details on why I think pandas IO should work as Python modules. \r\n\r\nI think the main reason why people use Python is because code is readable, and it's batteries included via pip/conda. I'd say people use pandas because it's a Swiss knife, also with everything included. If a user in Python is asked to implement code in a C extension, it is like telling them it's not possible, because they are used to pip install + import. In a similar way (in my opinion), telling users to import another module to read, and pipe to write, is telling them the reader is not supported by pandas. 
Surely the difficulty is not comparable to writing a C extension, but the feeling that it is not supported and that a hack is needed are probably the same.\r\n\r\nThe real goal here is to reduce the gap between a pandas core IO connector, and an external IO connector. To the same as a standard library package and a cheeseshop package. And one of the main motivations is that moving IO connectors into and out of pandas would become trivial, both technically and in terms of backward compatibility. PDEP-9 tried that fully, this is just for engines of pandas supported formats. But same idea, just that this PR is trivial, both in code and conceptually, and PDEP-9 came with problems of naming conflicts, pollution of the pandas namespace.\r\n\r\nBut I personally don't think discussing PDEP-9 again is needed. I think it's mostly whether the advantages here are worth the added complexity. To me that's an absolute yes. I guess you don't see the advantages as significant as you think it's fine to just use pandas modules and pipe. I disagree with that, but it's surely a valid point of view. In practice we won't move any IO connector out of pandas with this PR. But it's surely not clear if that was going to happen anyway, so not an immediate advantage.", "@Dr-Irv you are still blocking this PR. Is it that you want it to be blocked, or that you forgot to remove the requested change flag? From your last comment I can't tell which of Brock's comments you share, and if they are a blocker. But if you just forgot to remove the flag, I don't think it's very nice to block someone's work without being clear what change is expected, or why this shouldn't be merged in any form.", "> is telling them the reader is not supported by pandas\r\n\r\nCorrect. It's implemented and maintained by a 3rd party. 
That is the correct message to send.\r\n\r\n> To the same as a standard library package and a cheeseshop package\r\n\r\nUsing a 3rd party engine via their own namespace is _literally_ using a cheeseshop package. \r\n\r\n> or why this shouldn't be merged in any form\r\n\r\nRegardless of whether Irv removes the block, please do not self-merge.", "> @Dr-Irv you are still blocking this PR. Is it that you want it to be blocked, or that you forgot to remove the requested change flag? From your last comment I can't tell which of Brock's comments you share, and if they are a blocker. But if you just forgot to remove the flag, I don't think it's very nice to block someone's work without being clear what change is expected, or why this shouldn't be merged in any form.\r\n\r\nI have comments above that still should be addressed:\r\n\r\n- https://github.com/pandas-dev/pandas/pull/61642#discussion_r2173484073 which is suggesting that `pyproject.toml` is required to use this feature. \r\n- https://github.com/pandas-dev/pandas/pull/61642#discussion_r2173486073 which is that I think the `whatsnew` is unneeded (or should be changed)\r\n- https://github.com/pandas-dev/pandas/pull/61642#discussion_r2173157243 which is about moving the `load` operation to when the entry point is actually needed.\r\n", "> Thanks @Dr-Irv for your comment, and thanks for reminding me that you'll continue to block other people's work based on zero impact and extremely opinionated details that I'd bet only make sense to you.\r\n\r\nI'm sorry that you see my comments in that light. That's certainly not my intent. \r\n\r\n> \r\n> I guess my options are letting you blackmail me so your ego is happy, report your behaviours to the CoC or the steering committee you are part of. Or stop wasting my time in the toxic environment pandas has become. I think the choice is clear, so take this as a good bye.\r\n\r\nI am not trying to blackmail you. 
I am providing what I believe to be constructive comments to make the work you do understandable by others. \r\n\r\n", "I wanted to add my perspective on this PR. I work at Bodo, but before that I was a user of Pandas myself.\r\n\r\nI think that having the engine argument in Pandas as well as the corresponding docs page offers an easy way for users to solve their problems without researching other packages or workarounds. For example, reading multiple csv files from a directory. A Pandas user might write code that looks like this:\r\n``` py\r\nimport glob\r\nimport pandas as pd\r\n\r\npath = \"my_dir\"\r\nfilenames = glob.glob(path + \"/*.csv\")\r\ndfs = []\r\n\r\nfor file in glob.glob(path + \"/*.csv\"):\r\n dfs.append(pd.read_csv(file))\r\n\r\npd.concat(dfs, ignore_index=True)\r\n```\r\nBut with the engine param and corresponding docs, they could discover changing the engine might simplify this workflow:\r\n``` py\r\nimport pandas as pd\r\n\r\npd.read_csv(\"my_dir/*.csv\", engine=\"some_engine\")\r\n```\r\nOf course if they truly were bothered by this they could do some googling or ask AI and find another package that fits their use case a bit better, but I feel like Pandas should lead users in the right direction. ", "> I have comments above that still should be addressed:\r\n\r\n@datapythonista Just to clarify, I'm not for or against adding this particular feature. The comments that I was hoping you would respond to were about documentation and performance. With respect to the documentation, I think it can be improved (and would be helpful if this were to be reopened and merged in). With respect to performance, you acknowledged that delayed loading is a good idea, but you didn't want to do it because you didn't think the PR would be accepted.\r\n\r\nI'm not making the decision of accepting and merging the PR if you were to reopen it. 
I will leave that to others.", "> But with the engine param and corresponding docs, they could discover changing the engine might simplify this workflow:\r\n\r\n1) I don't think we're putting the docs for third party engines in our docs.\r\n2) A third party engine with behavior significantly different from the pandas API doubly doesn't belong in the pandas namespace." ]
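The Arrow PyCapsule exchange discussed in these comments reduces to a duck-typed protocol: a producer exposes `__arrow_c_stream__`, and a consumer (such as the hypothetical `DataFrame.from_pycapsule`) probes for it. A minimal stand-in, with no pyarrow dependency; the class and function names are purely illustrative:

```python
# Illustrative stand-in for the Arrow PyCapsule interface: producers
# expose __arrow_c_stream__, consumers duck-type on that attribute.
class DemoArrowSource:
    def __arrow_c_stream__(self, requested_schema=None):
        # A real producer returns a PyCapsule wrapping an ArrowArrayStream;
        # this demo only shows the hook that consumers look for.
        raise NotImplementedError("demo producer only")

def supports_arrow_stream(obj) -> bool:
    """The check a consumer could perform before ingesting Arrow data."""
    return hasattr(obj, "__arrow_c_stream__")
```

The appeal noted in the thread is exactly this decoupling: neither side needs to import the other, only to agree on the protocol.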
3,141,579,514
61,641
BUG: `to_numeric` fails to convert a Pyarrow Decimal series containing NA values.
open
2025-06-12T21:16:02
2025-07-17T15:56:07
null
https://github.com/pandas-dev/pandas/issues/61641
true
null
null
kzvezdarov
9
### Pandas version checks - [x] I have checked that this issue has not already been reported. - [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas. - [x] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas. ### Reproducible Example ```python import pandas as pd import pyarrow as pa decimal_type = pd.ArrowDtype(pa.decimal128(3, scale=2)) series = pd.Series([1, None], dtype=decimal_type) pd.to_numeric(series, errors="coerce") ``` ### Issue Description [`pandas.to_numeric`](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.to_numeric.html) fails to coerce Pyarrow Decimal series that contain NA values due to those NA values getting dropped, leading to an index mismatch: ```python import pandas as pd import pyarrow as pa decimal_type = pd.ArrowDtype(pa.decimal128(3, scale=2)) series = pd.Series([1, None], dtype=decimal_type) pd.to_numeric(series, errors="coerce") --------------------------------------------------------------------------- ValueError Traceback (most recent call last) Cell In[13], line 8 4 decimal_type = pd.ArrowDtype(pa.decimal128(3, scale=2)) 6 series = pd.Series([1, None], dtype=decimal_type) ----> 8 pd.to_numeric(series, errors="coerce") File /opt/homebrew/lib/python3.13/site-packages/pandas/core/tools/numeric.py:319, in to_numeric(arg, errors, downcast, dtype_backend) 316 values = ArrowExtensionArray(values.__arrow_array__()) 318 if is_series: --> 319 return arg._constructor(values, index=arg.index, name=arg.name) 320 elif is_index: 321 # because we want to coerce to numeric if possible, 322 # do not use _shallow_copy 323 from pandas import Index File /opt/homebrew/lib/python3.13/site-packages/pandas/core/series.py:575, in Series.__init__(self, data, index, dtype, name, copy, fastpath) 573 index = default_index(len(data)) 574 elif 
is_list_like(data): --> 575 com.require_length_match(data, index) 577 # create/copy the manager 578 if isinstance(data, (SingleBlockManager, SingleArrayManager)): File /opt/homebrew/lib/python3.13/site-packages/pandas/core/common.py:573, in require_length_match(data, index) 569 """ 570 Check the length of data matches the length of the index. 571 """ 572 if len(data) != len(index): --> 573 raise ValueError( 574 "Length of values " 575 f"({len(data)}) " 576 "does not match length of index " 577 f"({len(index)})" 578 ) ValueError: Length of values (1) does not match length of index (2) ``` This seems to be due to [this conversion to a numpy type](https://github.com/pandas-dev/pandas/blob/c5457f61d92b9428a56c619a6c420b122a41a347/pandas/core/tools/numeric.py#L215) setting the dtype to `object`, which causes [this condition to be false](https://github.com/pandas-dev/pandas/blob/c5457f61d92b9428a56c619a6c420b122a41a347/pandas/core/tools/numeric.py#L276), which skips re-adding the NA values, leading to a final `values` array shorter than the original index. ### Expected Behavior I'd expect the series to get converted (to values of `decimal.Decimal` type, with dtype=object) without raising an exception, preserving the null elements. 
### Installed Versions <details> INSTALLED VERSIONS ------------------ commit : 0691c5cf90477d3503834d983f69350f250a6ff7 python : 3.13.2 python-bits : 64 OS : Darwin OS-release : 24.5.0 Version : Darwin Kernel Version 24.5.0: Tue Apr 22 19:53:27 PDT 2025; root:xnu-11417.121.6~2/RELEASE_ARM64_T6041 machine : arm64 processor : arm byteorder : little LC_ALL : en_CA.UTF-8 LANG : None LOCALE : en_CA.UTF-8 pandas : 2.2.3 numpy : 2.2.2 pytz : 2025.1 dateutil : 2.9.0.post0 pip : 25.0 Cython : None sphinx : None IPython : 8.32.0 adbc-driver-postgresql: None adbc-driver-sqlite : None bs4 : 4.13.4 blosc : None bottleneck : None dataframe-api-compat : None fastparquet : None fsspec : 2025.2.0 html5lib : None hypothesis : 6.125.2 gcsfs : None jinja2 : 3.1.5 lxml.etree : None matplotlib : 3.10.3 numba : None numexpr : None odfpy : None openpyxl : None pandas_gbq : None psycopg2 : None pymysql : None pyarrow : 19.0.0 pyreadstat : None pytest : None python-calamine : None pyxlsb : None s3fs : None scipy : 1.15.2 sqlalchemy : 2.0.38 tables : None tabulate : None xarray : 2025.1.2 xlrd : None xlsxwriter : None zstandard : 0.23.0 tzdata : 2025.1 qtpy : None pyqt5 : None </details>
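The alignment bug described above amounts to dropping NA positions before conversion and not re-inserting them afterwards, so the result is shorter than the index. A pure-Python sketch of the intended behaviour (the real code path is in `pandas/core/tools/numeric.py`; the helper name is made up):

```python
from decimal import Decimal

def convert_preserving_na(values, convert=float, na=None):
    """Convert non-null values, re-inserting nulls at their original positions."""
    mask = [v is None for v in values]                       # remember NA slots
    converted = iter(convert(v) for v in values if v is not None)
    return [na if is_na else next(converted) for is_na in mask]
```

With the issue's data, `convert_preserving_na([Decimal("1.00"), None])` keeps length 2 instead of silently shrinking to 1.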
[ "Bug", "Dtype Conversions", "Arrow" ]
0
0
0
0
0
0
0
0
[ "Confirmed on main. PRs and investigations are welcome. From a quick look I do think that `.dropna()` from your link above does cause this issue. \n\nThanks for raising this!", "take", "> ### Expected Behavior\n> I'd expect the series to get converted (to values of `decimal.Decimal` type, with dtype=object) without raising an exception, preserving the null elements.\n\nthe docs for `pandas.to_numeric` state that \"The default return dtype is float64 or int64 depending on the data supplied. Use the downcast parameter to obtain other dtypes.\"\n\nthe whole point of `pandas.to_numeric` is to \"Convert argument to a numeric type.\" and the return is \"Numeric if parsing succeeded.\"\n\nSo returning an object array does not seem appropriate?\n\nAlso note that an traditional object array does not properly support null values #32931, so i'm not so sure that putting pd.NA values in an object array is ideal? ", "> > ### Expected Behavior\n> > I'd expect the series to get converted (to values of `decimal.Decimal` type, with dtype=object) without raising an exception, preserving the null elements.\n> \n> the docs for `pandas.to_numeric` state that \"The default return dtype is float64 or int64 depending on the data supplied. 
Use the downcast parameter to obtain other dtypes.\"\n> \n> the whole point of `pandas.to_numeric` is to \"Convert argument to a numeric type.\" and the return is \"Numeric if parsing succeeded.\"\n> \n> So returning an object array does not seem appropriate?\n> \n> Also note that an traditional object array does not properly support null values [#32931](https://github.com/pandas-dev/pandas/issues/32931), so i'm not so sure that putting pd.NA values in an object array is ideal?\n\nMakes sense; to be honest that was just my best guess after inspecting the partially constructed output with a debugger.", "@mroeschke @jorisvandenbossche \n \nMatt, interested on your views on how this should behave today with the \"arrow dtypes\" and Joris on the future of Decimal types (or other new numeric-like types) in general.", "IMO if a `ExtensionDtype._is_numeric is True`, I think `to_numeric` should no-op with data passed with that type, including the arrow dtypes. So alternatively, I think the `float64 or int64` noted in the documentation should be expanded with respect to all types that claim they are \"numeric\".", "Hi @simonjayhawkins @mroeschke , I’ve opened a PR for this issue and implemented the corresponding [test case](https://github.com/pandas-dev/pandas/pull/61659/files#diff-bb3d61035fca39bcb24aa49c1b004838e8dedda4afe29d555c41b55e7c1d7935). I’d like to ask if the test result looks correct to you? Thanks 🙏", "take @mroeschke ", "@Vernon-codes there is already a PR open to address this issue #61659" ]
3,141,425,028
61,640
BUG: Fix GroupBy aggregate coercion of outputs inconsistency for pyarrow dtypes
open
2025-06-12T20:01:14
2025-08-02T00:09:06
null
https://github.com/pandas-dev/pandas/pull/61640
true
https://api.github.com/repos/pandas-dev/pandas/pulls/61640
https://github.com/pandas-dev/pandas/pull/61640
heoh
1
- [x] closes #61636 - [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature - [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit). - [x] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions. - [x] Added an entry in the latest `doc/source/whatsnew/v3.0.0.rst` file if fixing a bug or adding a new feature. ## Description Fix a bug in `DataFrameGroupBy.agg` when applied to columns with `ArrowDtype`, where pandas attempted to cast the result back to the original dtype. ## Cause 1. `ExtensionArray._from_scalars()` should raise only `ValueError` or `TypeError`, instead of `pa.ArrowNotImplementedError`. 2. `ArrowExtensionArray._from_sequence()` may attempt an invalid cast, and if it succeeds, it can unintentionally alter the actual data type.
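The first cause listed — interface methods like `_from_scalars` are expected to signal failure with `ValueError`/`TypeError` only — suggests translating Arrow errors at the boundary. A hedged sketch; the error class is a stand-in for `pyarrow.lib.ArrowNotImplementedError` so the snippet has no pyarrow dependency, and the wrapper name is illustrative:

```python
# Sketch of the exception-translation idea: Arrow-specific errors are
# re-raised as TypeError, the type the EA interface contract allows.
class ArrowNotImplementedError(Exception):
    """Stand-in for pyarrow.lib.ArrowNotImplementedError."""

def from_scalars_wrapped(build, scalars):
    try:
        return build(scalars)
    except ArrowNotImplementedError as err:
        # callers of _from_scalars catch ValueError/TypeError, not Arrow errors
        raise TypeError(str(err)) from err
```

Chaining with `from err` keeps the original Arrow error visible in the traceback for debugging.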
[ "Bug", "Groupby", "Stale", "Arrow" ]
0
0
0
0
0
0
0
0
[ "This pull request is stale because it has been open for thirty days with no activity. Please [update](https://pandas.pydata.org/pandas-docs/stable/development/contributing.html#updating-your-pull-request) and respond to this comment if you're still interested in working on this." ]
3,141,322,038
61,639
[backport 2.3.x] ENH(string dtype): fallback for HDF5 with UTF-8 surrogates (#60993)
closed
2025-06-12T19:18:32
2025-06-13T12:25:01
2025-06-13T12:25:01
https://github.com/pandas-dev/pandas/pull/61639
true
https://api.github.com/repos/pandas-dev/pandas/pulls/61639
https://github.com/pandas-dev/pandas/pull/61639
jorisvandenbossche
0
Backport of #60993
[]
0
0
0
0
0
0
0
0
[]
3,140,441,450
61,638
[2.3.x] CI: temporarily pin numpy to 2.2 until latest numexpr is available
closed
2025-06-12T14:03:20
2025-06-12T16:51:41
2025-06-12T16:51:37
https://github.com/pandas-dev/pandas/pull/61638
true
https://api.github.com/repos/pandas-dev/pandas/pulls/61638
https://github.com/pandas-dev/pandas/pull/61638
jorisvandenbossche
0
Some tests involving numexpr are failing in the CI of the backport PRs, which has been happening since the release of numpy 2.3, I think. On the main branch this is not happening (yet), because I think numpy gets restricted there to 2.2 because of the presence of numba in the env. In the 2.3.x branch however, mamba ends up getting numpy 2.3. I reported the wrong results from numexpr upstream (https://github.com/pydata/numexpr/issues/515), although it seems it might be solved with the latest numexpr release.
[ "CI" ]
0
0
0
0
0
0
0
0
[]
3,140,199,907
61,637
Fix void dtype handling
open
2025-06-12T12:59:53
2025-08-20T06:38:10
null
https://github.com/pandas-dev/pandas/pull/61637
true
https://api.github.com/repos/pandas-dev/pandas/pulls/61637
https://github.com/pandas-dev/pandas/pull/61637
flying-sheep
3
- [x] closes #54810 - [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature - [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit). - [x] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions. - [x] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
[ "Bug", "Constructors", "Stale" ]
0
0
0
0
0
0
0
0
[ "which `doc/source/whatsnew/vX.X.X.rst` am I supposed to edit?", "This pull request is stale because it has been open for thirty days with no activity. Please [update](https://pandas.pydata.org/pandas-docs/stable/development/contributing.html#updating-your-pull-request) and respond to this comment if you're still interested in working on this.", "yeh, I’d love to get a review." ]
3,139,941,881
61,636
BUG: Groupby aggregate coercion of outputs inconsistency for pyarrow dtypes
closed
2025-06-12T11:37:40
2025-07-29T16:13:54
2025-07-29T16:13:54
https://github.com/pandas-dev/pandas/issues/61636
true
null
null
AndrejIring
2
### Pandas version checks - [x] I have checked that this issue has not already been reported. - [ ] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas. - [ ] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas. ### Reproducible Example ```python import pandas as pd from pyarrow import string df = pd.DataFrame([ [0,"X","A"], [1,"X","A"], [2,"X","A"], [3,"X","B"], [4,"X","B"], [5,"X","B"],], columns = ["a","b","c"]).astype({"a":int, "b":str,"c":pd.ArrowDtype(string())}) df.set_index("b").groupby("a").agg(lambda df: df.to_dict()) ``` ### Issue Description When applying a groupby aggregation to a column whose type is defined using `pd.ArrowDtype()`, pandas tries to cast the output back to the original type, which can raise an error (e.g. `pyarrow.lib.ArrowNotImplementedError: Unsupported cast from struct<location_abbreviation: string> to utf8 using function cast_string` for the example provided). 
For example, if `string[pyarrow]` is used, then this behaviour doesn't occur: ```python import pandas as pd df = pd.DataFrame([ [0,"X","A"], [1,"X","A"], [2,"X","A"], [3,"X","B"], [4,"X","B"], [5,"X","B"],], columns = ["a","b","c"]).astype({"a":int, "b":str,"c":"string[pyarrow]"}) df.set_index("b").groupby("a").agg(lambda df: df.to_dict()) ``` Or if the user-defined function also has `*args` or `**kwargs`, this coercion is not applied: ```python import pandas as pd from pyarrow import string df = pd.DataFrame([ [0,"X","A"], [1,"X","A"], [2,"X","A"], [3,"X","B"], [4,"X","B"], [5,"X","B"],], columns = ["a","b","c"]).astype({"a":int, "b":str,"c":pd.ArrowDtype(string())}) df.set_index("b").groupby("a").agg(lambda df, _: df.to_dict(), []) ``` both return: | a | c | |----:|:-----------| | 0 | {'X': 'A'} | | 1 | {'X': 'A'} | | 2 | {'X': 'A'} | | 3 | {'X': 'B'} | | 4 | {'X': 'B'} | | 5 | {'X': 'B'} | ### Expected Behavior I would expect the code from the example to return: | a | c | |----:|:-----------| | 0 | {'X': 'A'} | | 1 | {'X': 'A'} | | 2 | {'X': 'A'} | | 3 | {'X': 'B'} | | 4 | {'X': 'B'} | | 5 | {'X': 'B'} | ### Installed Versions <details> INSTALLED VERSIONS ------------------ commit : 2cc37625532045f4ac55b27176454bbbc9baf213 python : 3.11.6 python-bits : 64 OS : Linux OS-release : 5.10.223-211.872.amzn2.x86_64 Version : #1 SMP Mon Jul 29 19:52:29 UTC 2024 machine : x86_64 processor : x86_64 byteorder : little LC_ALL : None LANG : en_US.UTF-8 LOCALE : en_US.UTF-8 pandas : 2.3.0 numpy : 1.26.4 pytz : 2025.2 dateutil : 2.9.0.post0 pip : 24.3.1 Cython : None sphinx : None IPython : 9.3.0 adbc-driver-postgresql: None adbc-driver-sqlite : None bs4 : 4.13.3 blosc : None bottleneck : None dataframe-api-compat : None fastparquet : None fsspec : 2025.5.1 html5lib : None hypothesis : 6.135.0 gcsfs : None jinja2 : 3.1.6 lxml.etree : 5.4.0 matplotlib : 3.10.3 numba : None numexpr : None odfpy : None openpyxl : 3.1.5 pandas_gbq : None psycopg2 : None pymysql : None pyarrow : 18.1.0 pyreadstat : None pytest : 
7.4.4 python-calamine : None pyxlsb : None s3fs : None scipy : 1.14.1 sqlalchemy : None tables : None tabulate : 0.9.0 xarray : None xlrd : None xlsxwriter : None zstandard : None tzdata : 2025.2 qtpy : None pyqt5 : None </details>
[ "Bug", "Groupby", "Arrow" ]
0
0
0
0
0
0
0
0
[ "Thanks for describing the issue. I'd like to try work on it.", "take" ]
3,137,863,533
61,635
Description of pandas_datetime_exec function.
closed
2025-06-11T19:15:00
2025-07-01T01:33:35
2025-06-30T18:12:42
https://github.com/pandas-dev/pandas/pull/61635
true
https://api.github.com/repos/pandas-dev/pandas/pulls/61635
https://github.com/pandas-dev/pandas/pull/61635
kunaljani1100
3
Added the function description documentation for `pandas_datetime_exec` in the following location. https://github.com/pandas-dev/pandas/blob/main/pandas/_libs/src/datetime/pd_datetime.c - [x] closes #61631 (Replace xxxx with the GitHub issue number) - [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature - [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit). - [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions. - [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
[ "Docs" ]
0
0
0
0
0
0
0
0
[ "@mroeschke Can you please review this change whenever you get a chance?", "Thanks @kunaljani1100 ", "Thanks you very much @mroeschke for taking a look into this." ]
3,136,989,603
61,634
ENH: ExcelWriter append or create mode
open
2025-06-11T14:10:56
2025-07-29T06:19:17
null
https://github.com/pandas-dev/pandas/issues/61634
true
null
null
flori-ko
6
### Feature Type - [x] Adding new functionality to pandas - [ ] Changing existing functionality in pandas - [ ] Removing existing functionality in pandas ### Problem Description I work with scripts that write a lot of dataframes to different sheets in excel files. For my use case, the excel files I want to write to may or may not exist yet. Since ``dataframe.to_excel("path/file.xlsx")`` always creates a new excel file I am using the ``ExcelWriter`` with ``mode="a"`` and ``if_sheet_exists="replace"`` for updating the data on the specific sheet. However, when the excel file doesn't exist, I need to use ``mode="w"`` and also use ``None`` for ``if_sheet_exists``. I wish it would be easier to append data to excel files, regardless of whether they exist or not. What I want is a behaviour that is actually similar to the ``if_sheet_exists`` parameter. There, I don't need to know if the sheet already exists or not, I simply specify what I want to happen if it exists. ### Feature Description Add a new Option to the ``mode`` parameter of the "ExcelWriter". ``` mode : {{'w', 'a', 'append_or_create'}}, default 'w' File mode to use (write, append, append to the file and create the file if it doesn't exist yet). Append does not work with fsspec URLs. ``` The value for this option can be any other value that fits. The mode ``"append_or_create"`` would check if the excel file exists; if it does, it appends the sheet to it, using the value of ``if_sheet_exists``. If the excel file doesn't exist it is created. 
### Alternative Solutions ``` import os from typing import Literal import pandas as pd def write_excel_append_or_create( dataframe: pd.DataFrame, path: str, sheet_name: str = "Sheet1", if_sheet_exists: Literal["error", "new", "replace", "overlay"] = "replace", **kwargs, ) -> None: excel_file_exists = os.path.exists(path) mode = "a" if excel_file_exists else "w" replace = if_sheet_exists if mode == "a" else None with pd.ExcelWriter(path=path, mode=mode, engine="openpyxl", if_sheet_exists=replace) as writer: dataframe.to_excel(excel_writer=writer, sheet_name=sheet_name, **kwargs) ``` ### Additional Context _No response_
[ "Enhancement", "IO Excel", "Needs Triage" ]
0
0
0
0
0
0
0
0
[ "I think this is a valid and useful use case. \n\nI think it might be easier just to use `os.path.exists()` to determine if the file exists, and then just update engine_kwargs accordingly.", "take", "I agree this is an unconventional case where append mode doesn't create a file automatically.\nWhereas in both standard [Python File Handling](https://docs.python.org/3/library/functions.html#open)\nand [Pandas to_csv methods](https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.to_csv.html)\nAppend mode automatically creates the file if it doesn't exist.\n\nI shall add the functionality to 'a' mode itself by checking for existence and switching the modes internally like @rmhowe425 mentioned, and raise a PR soon after running tests.\n", "@DevastatingRPG I recommend holding off until repo maintainers remove the `Needs Triage` label.", "Thanks for the heads up. Although it was only a 3 liner code, I'll wait till there's an update from the maintainers regarding this before raising the PR", "This issue is quite old but still has the ``Needs Triage`` label. Will this get reviewed eventually?" ]
3,136,949,969
61,633
[backport 2.3.x] BUG(string dtype): groupby/resampler.min/max returns float on all NA strings (#60985)
closed
2025-06-11T13:59:17
2025-06-12T19:11:06
2025-06-12T19:11:01
https://github.com/pandas-dev/pandas/pull/61633
true
https://api.github.com/repos/pandas-dev/pandas/pulls/61633
https://github.com/pandas-dev/pandas/pull/61633
jorisvandenbossche
0
Backport of https://github.com/pandas-dev/pandas/pull/60985
[]
0
0
0
0
0
0
0
0
[]
3,136,664,134
61,632
DOC: warn about apply with raw=True, if function returns Optional[int]
open
2025-06-11T12:33:33
2025-08-09T12:16:32
null
https://github.com/pandas-dev/pandas/issues/61632
true
null
null
wrschneider
2
### Pandas version checks - [x] I have checked that the issue still exists on the latest versions of the docs on `main` [here](https://pandas.pydata.org/docs/dev/) ### Location of the documentation https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.apply.html ### Documentation problem When you use `df.apply` with `raw=True` you can get an error if the applied function returns None for some elements, because of the way underlying numpy infers the array type from the first element. Example: ```python import pandas as pd from typing import Optional def func(a: int) -> Optional[int]: if a % 3 == 0: return 1 if a % 3 == 1: return 0 else: return None df = pd.DataFrame([[1], [2], [3], [4], [5], [6]]) print(df.apply(lambda row: func(row[0]), axis=1, raw=True)) ``` This will raise an error ``` TypeError: int() argument must be a string, a bytes-like object or a real number, not 'NoneType' ``` On the other hand, if the _first_ returned value is None, numpy creates an array of `object` which can hold either int or None: ``` df = pd.DataFrame([[2], [3], [4], [5], [6]]) print(df.apply(lambda row: func(row[0]), axis=1, raw=True)) ``` will return ``` 0 None 1 1 2 0 3 None 4 1 dtype: object ``` ### Suggested fix for documentation Explain that the function must not return None if `raw=True` _or_ treat as a bug fix (i.e. allow specifying type of result ndarray explicitly)
[ "Docs", "Apply" ]
0
0
0
0
0
0
0
0
[ "Thanks for the report. I'm positive on adding something to the effect of \"When `raw=True`, the dtype will be inferred from the first result\" (is this true when using numba? Need to check). \n\nAnd while I'm positive on accepting a `dtype_result` or similar argument, that will need a solid proposal on how it'd work in related methods.", "> Thanks for the report. I'm positive on adding something to the effect of \"When `raw=True`, the dtype will be inferred from the first result\" (is this true when using numba? Need to check).\n> \n> And while I'm positive on accepting a `dtype_result` or similar argument, that will need a solid proposal on how it'd work in related methods.\n\nI'd be OK with documentation change to warn about behavior especially around `None` returns\n\nA change to accept `dtype` and pass along to underlying numpy code would be ideal, and shouldn't affect anything that does not pass in that argument." ]
3,135,439,373
61,631
DOC: Description of pandas_datetime_exec function
closed
2025-06-11T04:32:00
2025-06-30T18:12:43
2025-06-30T18:12:43
https://github.com/pandas-dev/pandas/issues/61631
true
null
null
kunaljani1100
2
### Pandas version checks - [x] I have checked that the issue still exists on the latest versions of the docs on `main` [here](https://pandas.pydata.org/docs/dev/) ### Location of the documentation https://github.com/pandas-dev/pandas/blob/main/pandas/_libs/src/datetime/pd_datetime.c ### Documentation problem The file pd_datetime.c has missing documentation on line 195 for the function static int pandas_datetime_exec(PyObject *Py_UNUSED(module)). We need to add documentation for what the role of this function is. ### Suggested fix for documentation The suggested fix is to add documentation for the function that has been defined on line 195. The function initializes and exposes a custom datetime C-API from the Pandas library by creating a PyCapsule that stores function pointers, which can be accessed later by other C code (or Cython code) that imports the capsule.
[ "Docs", "Needs Triage" ]
0
0
0
0
0
0
0
0
[ "I'm curious if this is actually bothering anybody? I don't mind \"fixing\" it, just have the vibe that this is AI-derived rather than a real issue.", "This is not bothering anybody, it is just additional documentation for existing code that already exists in this repo so that new developers can understand the code better." ]
3,135,421,091
61,630
DOC: Title Capitalization and Grammar Fix
closed
2025-06-11T04:17:22
2025-06-11T19:00:26
2025-06-11T16:00:08
https://github.com/pandas-dev/pandas/pull/61630
true
https://api.github.com/repos/pandas-dev/pandas/pulls/61630
https://github.com/pandas-dev/pandas/pull/61630
kunaljani1100
1
Currently the title of the repository in the README.md reads the following text: pandas: powerful Python data analysis toolkit Since the title is grammatically incorrect and also has incorrect capitalization, this pull request was opened to ensure that the title is capitalized properly and there are no grammatical errors in the title.
[ "Docs" ]
0
0
0
0
0
0
0
0
[ "Sure, thanks @kunaljani1100 " ]
3,134,989,864
61,629
BUG: to_stata erroring when encoded text and normal text have mismatched length
closed
2025-06-10T22:35:36
2025-06-30T18:14:34
2025-06-30T18:14:29
https://github.com/pandas-dev/pandas/pull/61629
true
https://api.github.com/repos/pandas-dev/pandas/pulls/61629
https://github.com/pandas-dev/pandas/pull/61629
eicchen
1
- [x] closes #61583 (Replace xxxx with the GitHub issue number) - [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature - [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit). - [x] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions. - [x] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature. I removed the encoded check in stata.py and replaced it with a normal check. If the encoded check was there for any particular reason, I can standardize them the other way.
[ "Bug", "IO Stata" ]
0
0
0
0
0
0
0
0
[ "Thanks @eicchen " ]
3,134,848,986
61,628
BUG: PerformanceWarning when agg with pd.NamedAgg and as_index=False
open
2025-06-10T21:09:03
2025-06-23T15:35:19
null
https://github.com/pandas-dev/pandas/issues/61628
true
null
null
xma08
1
### Pandas version checks - [x] I have checked that this issue has not already been reported. - [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas. - [x] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas. ### Reproducible Example ```python import numpy as np import pandas as pd n_rows = 1_000 group_size = 10 n_random_cols = 200 data = {"id": np.repeat(np.arange(n_rows // group_size), group_size)} for i in range(n_random_cols): data[f"col_{i}"] = np.random.randn(n_rows) df = pd.DataFrame(data) # PerformanceWarning when as_index is False named_agg_without_index_warning_df = ( df .groupby('id', as_index=False) .agg(**{ column: pd.NamedAgg(column=column, aggfunc="mean") for column in df.columns if column != "id" }) ) # no warnings when as_index is True named_agg_with_index_ok_df = ( df .groupby('id', as_index=True) .agg(**{ column: pd.NamedAgg(column=column, aggfunc="mean") for column in df.columns if column != "id" }) ) # no warnings when using dict agg no matter what as_index is dict_agg_ok_df = ( df .groupby('id', as_index=False) .agg({ column: "mean" for column in df.columns if column != "id" }) ) ``` ### Issue Description there is an inconsistent behavior (PerformanceWarning) of agg when `as_index` is True/False. Please refer to the example above. ### Expected Behavior No `PerformanceWarning` is raised when `as_index=False` ### Installed Versions <details> v2.3.0 </details>
[ "Bug", "Needs Info", "Apply", "Warnings" ]
0
0
0
0
0
0
0
0
[ "I did a little bit of testing (using time) and found that using dict agg with `as_index=false` is actually faster than named agg, so I am not sure if a Performance Warning is necessary for this case. " ]
3,134,706,506
61,627
BUG: the behavior of DataFrameGroupBy.apply(..., include_groups=True) breaks post-mortem debugging
closed
2025-06-10T20:00:05
2025-07-31T02:07:26
2025-07-31T02:07:16
https://github.com/pandas-dev/pandas/issues/61627
true
null
null
pulkin
2
### Pandas version checks - [x] I have checked that this issue has not already been reported. - [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas. - [ ] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas. ### Reproducible Example ```python import pandas as pd def f(df): df["group"] raise TypeError("a very subtle bug") pd.DataFrame({"group": ["a", "a", "b", "b"], "data": [0, 1, 2, 3]}).groupby("group").apply(f) ``` ### Issue Description The argument in the title and the corresponding behavior is described like this: ``` When True, will attempt to apply func to the groupings in the case that they are columns of the DataFrame. If this raises a TypeError, the result will be computed with the groupings excluded. When False, the groupings will be excluded when applying func. ``` https://pandas.pydata.org/docs/reference/api/pandas.core.groupby.DataFrameGroupBy.apply.html I think the described behavior is problematic and it makes post-mortem debugging of the `TypeError("a very subtle bug")` close to impossible. pandas should not swallow `TypeError` hoping that developers will figure it out in a pile of logs. ### Expected Behavior There should be no "attempts" from the docs and pandas should not catch and swallow any exceptions from the payload. 
### Installed Versions <details> INSTALLED VERSIONS ------------------ commit : 2cc37625532045f4ac55b27176454bbbc9baf213 python : 3.13.3 python-bits : 64 OS : Linux OS-release : 6.14.9-300.fc42.x86_64 Version : #1 SMP PREEMPT_DYNAMIC Thu May 29 14:27:53 UTC 2025 machine : x86_64 processor : byteorder : little LC_ALL : None LANG : en_US.UTF-8 LOCALE : en_US.UTF-8 pandas : 2.3.0 numpy : 2.3.0 pytz : 2025.2 dateutil : 2.9.0.post0 pip : 24.3.1 Cython : None sphinx : None IPython : None adbc-driver-postgresql: None adbc-driver-sqlite : None bs4 : None blosc : None bottleneck : None dataframe-api-compat : None fastparquet : None fsspec : None html5lib : None hypothesis : None gcsfs : None jinja2 : None lxml.etree : None matplotlib : None numba : None numexpr : None odfpy : None openpyxl : 3.1.5 pandas_gbq : None psycopg2 : None pymysql : None pyarrow : 20.0.0 pyreadstat : None pytest : None python-calamine : None pyxlsb : None s3fs : None scipy : None sqlalchemy : None tables : None tabulate : None xarray : None xlrd : None xlsxwriter : None zstandard : None tzdata : 2025.2 qtpy : None pyqt5 : None </details>
[ "Bug", "Groupby", "Apply" ]
0
0
0
0
0
0
0
0
[ "Pandas 2.3.0 version only allows `include_groups=False`\nDocumentation: https://pandas.pydata.org/docs/reference/api/pandas.core.groupby.DataFrameGroupBy.apply.html\n\nChecked on main branch:\n```\n>>> import pandas as pd\n+ /opt/homebrew/Caskroom/miniforge/base/envs/pandas-dev/bin/ninja\n[1/1] Generating write_version_file with a custom command\n>>> pd.__version__\n'3.0.0.dev0+2179.g1da0d02205'\n>>> def f(df):\n... return df.sum()\n... \n>>> pd.DataFrame({\"group\": [\"a\", \"a\", \"b\", \"b\"], \"data\": [0, 1, 2, 3]}).groupby(\"group\").apply(f, include_groups=True)\nTraceback (most recent call last):\n File \"<stdin>\", line 1, in <module>\n File \"/Users/niruta.talwekar/Documents/GitHub/pandas/pandas/core/groupby/groupby.py\", line 1602, in apply\n raise ValueError(\"include_groups=True is no longer allowed.\")\nValueError: include_groups=True is no longer allowed.\n>>> pd.DataFrame({\"group\": [\"a\", \"a\", \"b\", \"b\"], \"data\": [0, 1, 2, 3]}).groupby(\"group\").apply(f, include_groups=False)\n data\ngroup \na 1\nb 5\n```\n\nWhen using `include_groups=True`, it throws error while executed successfully on `include_groups=False`.\n\nRecommendation:\nChange Documentation \n a. To reflect DataFrameGroupBy.apply(func, *args, include_groups=False, **kwargs), https://pandas.pydata.org/docs/reference/api/pandas.core.groupby.DataFrameGroupBy.apply.html#pandas.core.groupby.DataFrameGroupBy.apply\n b. Update `While True` documentation to reflect above error.\n", "Thanks for the report!\n\n> Pandas 2.3.0 version only allows `include_groups=False`\n\npandas 3.0 only allows `include_groups=False`, not 2.3.\n\nAgreed with the OP that we should not be catching exceptions from a user-defined function. That's exactly what 3.0 will do when released. Closing.\n" ]
3,134,661,051
61,626
DOC: Pandas contributor take limit
closed
2025-06-10T19:38:27
2025-06-22T11:21:20
2025-06-22T11:21:17
https://github.com/pandas-dev/pandas/issues/61626
true
null
null
eicchen
6
### Pandas version checks - [x] I have checked that the issue still exists on the latest versions of the docs on `main` [here](https://pandas.pydata.org/docs/dev/) ### Location of the documentation https://pandas.pydata.org/docs/development/contributing.html#id2 ### Documentation problem I don't believe it is documented anywhere but I think there is a two assigned task limit for issues. Currently, whenever I type "take" the bot doesn't auto assign me a task. I think this should be documented as I have one completed issue which is still waiting for a PR review, and another which needs further discussion during a meeting. It's not like I'm just randomly taking tasks. Other people could run into something similar. A maintainer should verify that there is a limit before we edit the documentation first though. Example: https://github.com/pandas-dev/pandas/issues/61583#issuecomment-2960369848 https://github.com/pandas-dev/pandas/issues/61511#issuecomment-2932438936 ### Suggested fix for documentation Just add that there is a limit to the number of issues you can concurrently take and to contact a maintainer if you run into issues and need more (up to maintainer discretion on the last part)
[ "Docs" ]
0
0
0
0
0
0
0
0
[ "Hi! I’m a new contributor interested in making my first PR and this seems like a great starter issue. May I please take this up? I’ll wait for confirmation before starting. Thanks! 😊", "@SnehaDeshmukh28 Absolutely! In the meantime you could also check what other issues are open to contribute to, usually documentation stuff is pretty simple, otherwise sometimes issues are tagged as \"good first issue\" which you can also consider looking at", "@eicchen Thank you so much for the quick response and approval! 😊\nI'll start working on this issue and also explore other open and \"good first issue\" tagged ones for potential contributions.", "take", "take", "I had 4 issues assigned to me and take works. In addition, here is the job that makes the assignment, there is no consideration of how many issues are assigned.\n\nhttps://github.com/pandas-dev/pandas/blob/592a41a1a504af38f235557b2a1b16228d8eda0a/.github/workflows/comment-commands.yml#L12-L20" ]
3,133,949,972
61,625
[backport 2.3.x] BUG(string dtype): Empty sum produces incorrect result (#60936)
closed
2025-06-10T15:01:04
2025-06-12T14:05:19
2025-06-12T14:05:15
https://github.com/pandas-dev/pandas/pull/61625
true
https://api.github.com/repos/pandas-dev/pandas/pulls/61625
https://github.com/pandas-dev/pandas/pull/61625
jorisvandenbossche
1
Backport of https://github.com/pandas-dev/pandas/pull/60936
[]
0
0
0
0
0
0
0
0
[ "Remaining failures are handled in https://github.com/pandas-dev/pandas/pull/61638" ]
3,133,857,521
61,624
BUG: Fix infer_dtype result for float with embedded pd.NA
closed
2025-06-10T14:37:42
2025-07-29T13:00:15
2025-07-11T19:08:56
https://github.com/pandas-dev/pandas/pull/61624
true
https://api.github.com/repos/pandas-dev/pandas/pulls/61624
https://github.com/pandas-dev/pandas/pull/61624
heoh
3
- [x] closes #61621 - [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature - [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit). - [x] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions. - [x] Added an entry in the latest `doc/source/whatsnew/v3.0.0.rst` file if fixing a bug or adding a new feature. ## Description Fix a bug in :func:`api.types.infer_dtype` returning "mixed-integer-float" for float and ``pd.NA`` mix. ## Cause This problem occurred because the existing `is_float_array` internally fixed `skipna=False`. It was solved by adding the `skipna` argument.
[ "Bug", "Missing-data", "Dtype Conversions" ]
0
0
0
0
0
0
0
1
[ "Is there anything else I can do about this? I would appreciate it if you could review it when you have time.", "Thanks @heoh!", "@rhshadrach I could really use this fix. As this is not much of a breaking change, can't we change the milestone to 2.3.2?\r\nI would highly appreciate it. <3" ]
3,133,177,853
61,623
BUG: DataFrame.explode fails with str dtype
closed
2025-06-10T11:13:09
2025-06-24T08:51:14
2025-06-24T07:35:49
https://github.com/pandas-dev/pandas/pull/61623
true
https://api.github.com/repos/pandas-dev/pandas/pulls/61623
https://github.com/pandas-dev/pandas/pull/61623
rhshadrach
4
- [ ] closes #xxxx (Replace xxxx with the GitHub issue number) - [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature - [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit). - [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions. - [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature. This operation works on all other dtypes, e.g. ```python df = pd.DataFrame({"a": [1, 2]}) print(df.explode(column="a")) # a # 0 1 # 1 2 ```
[ "Bug", "Reshaping", "Strings" ]
0
0
0
0
0
0
0
0
[ "@datapythonista \r\n\r\n> Btw, did we have problems before to have the `result is not df` assert? I haven't used it before, it'd be good to understand in which tests it make sense. Thanks!\r\n\r\nI think it makes sense to check this when we have a no-op that should return a copy. If the operation modifies the data (which is the case for the vast majority of tests), I don't think it needs to be checked.", "@datapythonista - fixed up the docs build, this should be ready.", "Owee, I'm MrMeeseeks, Look at me.\n\nThere seem to be a conflict, please backport manually. Here are approximate instructions:\n\n1. Checkout backport branch and update it.\n\n```\ngit checkout 2.3.x\ngit pull\n```\n\n2. Cherry pick the first parent branch of the this PR on top of the older branch:\n```\ngit cherry-pick -x -m1 73db25d585a12e587beffef83449dbdd5d16d0f6\n```\n\n3. You will likely have some merge/cherry-pick conflict here, fix them and commit:\n\n```\ngit commit -am 'Backport PR #61623: BUG: DataFrame.explode fails with str dtype'\n```\n\n4. Push to a named branch:\n\n```\ngit push YOURFORK 2.3.x:auto-backport-of-pr-61623-on-2.3.x\n```\n\n5. Create a PR against branch 2.3.x, I would have named this PR:\n\n> \"Backport PR #61623 on branch 2.3.x (BUG: DataFrame.explode fails with str dtype)\"\n\nAnd apply the correct labels and milestones.\n\nCongratulations — you did some good work! Hopefully your backport PR will be tested by the continuous integration and merged soon!\n\nRemember to remove the `Still Needs Manual Backport` label once the PR gets merged.\n\nIf these instructions are inaccurate, feel free to [suggest an improvement](https://github.com/MeeseeksBox/MeeseeksDev).\n ", "Backport -> https://github.com/pandas-dev/pandas/pull/61699" ]
3,133,144,774
61,622
BUG: CoW - eq not implemented for <class 'pandas.core.internals.blocks.ExtensionBlock'>
closed
2025-06-10T11:01:53
2025-07-28T16:24:09
2025-07-28T16:24:08
https://github.com/pandas-dev/pandas/issues/61622
true
null
null
mhabets
2
### Pandas version checks - [x] I have checked that this issue has not already been reported. - [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas. - [ ] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas. ### Reproducible Example ```python import pandas as pd pd.options.mode.copy_on_write = True idx = pd.Index(['a', 'b', 'c'], dtype="string[pyarrow]") pd.Series(idx).replace({"z": "b", "a": "d"}) ``` ### Issue Description The above code raises the following issue: `NotImplementedError: eq not implemented for <class 'pandas.core.internals.blocks.ExtensionBlock'>` ### Expected Behavior The code should run without raising any error as it does without the CoW clause, shouldn't it? ### Installed Versions <details> INSTALLED VERSIONS ------------------ commit : 2cc37625532045f4ac55b27176454bbbc9baf213 python : 3.12.5 python-bits : 64 OS : Windows OS-release : 11 Version : 10.0.26100 machine : AMD64 processor : Intel64 Family 6 Model 154 Stepping 3, GenuineIntel byteorder : little LC_ALL : None LANG : en LOCALE : English_United Kingdom.1252 pandas : 2.3.0 numpy : 2.1.3 pytz : 2024.1 dateutil : 2.9.0 pip : 24.2 Cython : None sphinx : None IPython : 8.32.0 adbc-driver-postgresql: None adbc-driver-sqlite : None bs4 : 4.12.3 blosc : None bottleneck : None dataframe-api-compat : None fastparquet : None fsspec : None html5lib : 1.1 hypothesis : None gcsfs : None jinja2 : 3.1.4 lxml.etree : 5.3.0 matplotlib : 3.9.2 numba : None numexpr : None odfpy : None openpyxl : 3.1.5 pandas_gbq : None psycopg2 : 2.9.9 pymysql : None pyarrow : 17.0.0 pyreadstat : None pytest : None python-calamine : None pyxlsb : None s3fs : None scipy : 1.14.0 sqlalchemy : 2.0.35 tables : None tabulate : None xarray : None xlrd : 2.0.1 xlsxwriter : 3.2.0 zstandard : 0.23.0 tzdata : 2024.1 qtpy : None pyqt5 : None 
</details>
[ "Bug", "replace", "Copy / view semantics" ]
0
0
0
0
0
0
0
0
[ "cc @phofl ", "I haven't run a bisection, but this looks likely to be due to https://github.com/pandas-dev/pandas/pull/52008. Having the weakref to an Index looks like it could be a footgun due to equality comparisons, but we can at least work around here by replacing\n\nhttps://github.com/pandas-dev/pandas/blob/e72c8a1e0ad421c1b8a7b918d995f24bed595cc3/pandas/core/internals/blocks.py#L871-L873\n\nwith something like\n\n```python\nidx = next(iter(k for k, e in enumerate(b.refs.referenced_blocks) if e() is b))\nb.refs.referenced_blocks.pop(idx)\n```\n\nNot familiar with how refs to index objects are handled elsewhere." ]
3,132,794,266
61,621
BUG: infer_dtype result for float with embedded pd.NA
closed
2025-06-10T09:15:31
2025-07-11T19:08:58
2025-07-11T19:08:58
https://github.com/pandas-dev/pandas/issues/61621
true
null
null
MarkusZimmerDLR
5
### Pandas version checks - [x] I have checked that this issue has not already been reported. - [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas. - [ ] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas. ### Reproducible Example ```python from pandas.api.types import infer_dtype assert infer_dtype(pd.Series([1.,2.,.3,pd.NA], dtype=object)) == infer_dtype(pd.Series([1.,2.,.3,np.nan], dtype=object)) ``` ### Issue Description Dear pandas-folks, This was checked for pandas V 2.3.0 and 2.2.X When using pandas' `infer_dtype` on an object array consisting of floats with embedded `pd.NA`, the result will be `mixed-integer-float` though `skipna` is `True` as a default. The same test for embedded `np.nan` returns `floating`. ```python >>> from pandas.api.types import infer_dtype >>> infer_dtype(pd.Series([1,2,3,pd.NA], dtype=object)) 'integer' >>> infer_dtype(pd.Series([1,2,3,np.nan], dtype=object)) 'integer' >>> infer_dtype(pd.Series([1.,2.,.3,pd.NA], dtype=object)) 'mixed-integer-float' v <<< should be `floating` >>> infer_dtype(pd.Series([1.,2.,.3,np.nan], dtype=object)) 'floating' >>> infer_dtype(pd.Series(['1.0', np.nan],dtype=object)) 'string' >>> infer_dtype(pd.Series(['1.0', pd.NA],dtype=object)) 'string' ``` In case of other types, like integer or strings, the function does not produce a false / different output w.r.t. the na-type. For context, I am maintaining a small project which assures integers in columns to stay integers - a common known issue. If you know of a well established extension for this purpose, feel free to point me towards it. 
### Expected Behavior `>>> infer_dtype(pd.Series([1.,2.,.3,pd.NA], dtype=object))` should return `floating` ### Installed Versions <details> INSTALLED VERSIONS ------------------ commit : 2cc37625532045f4ac55b27176454bbbc9baf213 python : 3.13.3 python-bits : 64 OS : Linux OS-release : 4.18.0-553.51.1.el8_10.x86_64 Version : #1 SMP Fri Apr 25 00:55:37 EDT 2025 machine : x86_64 processor : x86_64 byteorder : little LC_ALL : None LANG : en_US.UTF-8 LOCALE : en_US.UTF-8 pandas : 2.3.0 numpy : 2.2.6 pytz : 2025.2 dateutil : 2.9.0.post0 pip : 25.1.1 Cython : None sphinx : None IPython : 9.2.0 adbc-driver-postgresql: None adbc-driver-sqlite : None bs4 : None blosc : None bottleneck : None dataframe-api-compat : None fastparquet : None fsspec : None html5lib : None hypothesis : None gcsfs : None jinja2 : None lxml.etree : None matplotlib : None numba : None numexpr : None odfpy : None openpyxl : None pandas_gbq : None psycopg2 : None pymysql : None pyarrow : 20.0.0 pyreadstat : None pytest : None python-calamine : None pyxlsb : None s3fs : None scipy : None sqlalchemy : None tables : None tabulate : 0.9.0 xarray : None xlrd : None xlsxwriter : None zstandard : None tzdata : 2025.2 qtpy : None pyqt5 : None </details>
[ "Bug", "Missing-data", "Dtype Conversions" ]
0
0
0
0
0
0
0
0
[ "Confirmed on main! Investigations and PRs are welcome.\n\nThanks for raising this!", "I want to contribute to this. Thank you for explaining the issue.", "take", "Since this seems to be a very simple and minor fix, is it possible to not wait for the 3.0 release? Or is the release imminent?", "xref #32931\n" ]
3,132,603,595
61,620
API: New global option to set the default dtypes to use
open
2025-06-10T08:15:11
2025-06-10T11:13:16
null
https://github.com/pandas-dev/pandas/issues/61620
true
null
null
datapythonista
3
This was already implemented before 2.0 in #50748, but then removed before the release in #51853, as in too many cases the option wasn't being respected. The idea is to have a global option to let pandas know which dtype kind to use when data is created (the exact option name needs to be discussed, but I'll use `use_arrow` to illustrate): ```python pandas.options.mode.use_arrow = True df = pandas.read_csv(...) # The returned DataFrame will use pyarrow dtypes df["foo"] = 1 # The added column will use pyarrow dtypes df = pandas.DataFrame(...) # The returned DataFrame will use pyarrow dtypes ... ``` I don't think adding the option is controversial, as it has no impact on users unless set, and it was already implemented without objections in the past. I think the implementation requires a bit of discussion, as the exact behavior to implement is not immediately obvious, at least to me. Main points I can see: 1. Should we have an option to set pyarrow as the default (since those should be the types we expect people to use in the future), or a more generic option to set `dtype_backend` to `numpy|nullable|pyarrow`? 2. I think at least initially it makes sense that if a user is specific about the dtype they want to use (e.g. `Series([1, 2], dtype="Int32")`) we let them do it. But could it make sense to have a second option `force_arrow` or `force_dtype_backend` so any operation that would use another dtype kind would fail? I think this could be helpful for users that only want to live in the pyarrow world, and it would also be helpful to identify undesired casts for us. 3. The exact namespace (`mode` vs `future` vs others) and name of the option, which clearly will depend on the previous points
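For context on the status quo this proposal builds on (an illustration, not part of the original issue): today the dtype backend has to be chosen per call through the existing `dtype_backend` keyword on readers such as `read_csv` (accepting `"numpy_nullable"` or `"pyarrow"`); the proposed global option would make one backend the default everywhere. The sketch below uses `"numpy_nullable"` so it runs without pyarrow installed.

```python
import io
import pandas as pd

# Today the backend is selected per call; a global option would remove
# the need to pass dtype_backend= to every reader and constructor.
csv = io.StringIO("a,b\n1,x\n2,y\n")
df = pd.read_csv(csv, dtype_backend="numpy_nullable")
print(df.dtypes)  # a becomes Int64; b becomes the nullable string dtype
```

Passing `dtype_backend="pyarrow"` instead would yield `int64[pyarrow]` and `string[pyarrow]` columns, which is the behavior the proposed `use_arrow`-style option would make the default.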
[ "API Design", "Needs Discussion" ]
0
0
0
0
0
0
0
0
[ "> 2\\. I think at least initially it makes sense that if a user is specific about the dtype they want to use (e.g. `Series([1, 2], dtype=\"Int32\")`) we let them do it. But could it make sense to have a second option `force_arrow` or `force_dtype_backend` so any operation that would use another dtype kind would fail? I think this could be helpful for users that only want to live in the pyarrow world, and it would also be helpful to identify undesired casts for us.\n\nIt would seem logical that if we have a global option that there is a mapping of dtypes to Arrow types silently. The purpose of the global option is to work with only Arrow types.\n\na secondary option, for control of that would perhaps be desirable for some users.\n\nBut definitely we would not want to require any code changes. The idea of the option would be to allow users to use PyArrow on existing code without any code changes.\n\nWe could perhaps give consideration to logical types, as per PDEP-13 #58455, as a future direction so that these silent dtypes mappings do not occur but that is definitely not a blocker to what you are proposing.", "> 1. Should we have an option to set pyarrow as the default (since those should be the types we expect people to use in the future), or a more generic option to set `dtype_backend` to `numpy|nullable|pyarrow`?\n\nNot a maintainer, but personally I would prefer the latter: it feels more future-proof and flexible, especially if other backends are considered later on.", "Thanks @arthurlw, this is good feedback. I agree, and I prefer the first option, because I see the dtype backends not as a feature, but as something we had to do because we didn't get the backend we wanted initially.\n\nLong term I think users should just think about float, int... and not how they are stored internally. In that sense maybe `pandas.options.mode.use_legacy_dtypes = True/False` can even be clearer, if others share my point of view." ]
3,132,010,456
61,619
DOC: Clarify that 'names' is only used when constructing a MultiIndex
closed
2025-06-10T03:14:48
2025-06-10T12:53:18
2025-06-10T12:53:18
https://github.com/pandas-dev/pandas/pull/61619
true
https://api.github.com/repos/pandas-dev/pandas/pulls/61619
https://github.com/pandas-dev/pandas/pull/61619
EdwardPunzalan
1
- [x] closes #19082 - [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature **(Not applicable — this is a documentation-only change)** - [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit) - [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions **(Not applicable — no new functions or arguments were added)** - [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature **(Optional — this change may be too minor, but can be added under "Documentation" if desired)** ### Description Clarifies in the `Index` class docstring that the `names` parameter is only relevant when constructing a `MultiIndex`. This prevents confusion where users expect `names=('a',)` to behave like `name='a'` for regular Index objects. No changes were made to the behavior of `Index`, only to the documentation for better clarity.
[]
0
0
0
0
0
0
0
0
[ "I'll close this PR as it doesn't feel useful. Feel free to comment if I'm missing something, and I can surely reopen." ]
3,131,572,969
61,618
Moving to PyArrow dtypes by default
open
2025-06-09T21:51:57
2025-07-01T22:23:59
null
https://github.com/pandas-dev/pandas/issues/61618
true
null
null
datapythonista
103
There have been some discussions about this before, but as far as I know there is no plan or any decision made. My understanding is (feel free to disagree): 1. If we were starting pandas today, we would **only** use Arrow as storage for DataFrame columns / Series 2. After all the great work that has been done on building new pandas types based on PyArrow, we are not considering other Arrow implementations 3. Based on 1 and 2, we are moving towards pandas based on PyArrow, and the main question is what's the transition path @jbrockmendel [commented](https://github.com/pandas-dev/pandas/pull/61599#issuecomment-2957034897) this, and I think many others share this point of view, based on past interactions: ``` There's a path to making it feasible to use PyArrow types by default, but that path probably takes multiple major release cycles. Doing it for 3.0 would be, frankly, insane. ``` It would be interesting to know why exactly, I guess mainly for two reasons: - Finishing the PyArrow types and making operations with them as reliable and fast as the original pandas types - Giving users time to adapt I don't know the exact state of the PyArrow types, and how often users will face problems if using them instead of the original ones. From my perception, there aren't any major efforts to make them better at this point. So, I'm unsure if the situation in that regard will be very different if we make the PyArrow types the default ones tomorrow, or if we make them the default ones in two years. My understanding is that the only person who is paid consistently to work on pandas is Matt, and he's doing an amazing job at keeping the project going, reviewing most of the PRs, keeping the CI in good shape... But I don't think he or anyone else is able to put hours into developing new things as used to be the case. 
For reference, this is the GitHub chart of pandas activity (commits) since pandas 2.0: ![Image](https://github.com/user-attachments/assets/3c766972-49b1-4fff-ba21-ef52c71ade89) So, in my opinion, the existing problems with PyArrow will only start to be addressed significantly whenever the PyArrow types become the default ones. So, in my opinion our two main options are: - We move forward with the PyArrow transition. pandas 3.0 will surely not be the best pandas version ever if we start using PyArrow types, but pandas 3.1 will be much better, and pandas 3.2 may be as good as pandas 2 in reliability and speed, but much closer to what we would like pandas to be. Of course not all users are ready for pandas 3.0 with Arrow types. They can surely pin to `pandas=2` until pandas 3 is more mature and they have made the required changes to their code. We can surely add a flag `pandas.options.mode.use_arrow = False` that reverts the new default to the old status quo. So users can actually move to pandas 3.0 but stay with the old types until we (pandas) and they are ready to get into the new default types. The transition from Python 2 to 3 (which is surely an example of what not to do) took more than 10 years. I don't think in our case we need as much. And if there is interest (aka money) we can also support the pandas 2 series for as long as needed. - The other option is to continue with the transition to the new nullable types, which my understanding is that we implemented because PyArrow didn't exist at that time. Continue to put our little resources on them, making users adapt their code to a temporary status quo rather than the final one we envision, staying in this transition period and delaying the move to PyArrow by, I assume, around 6 years (Brock mentioned `multiple major release cycles`, so I assume something like 3 at a rate of one major release every 2 years). It would be great to know other people's thoughts and ideal plans, and see what makes more sense. 
But to me personally, based on the above information, it doesn't sound more insane to move to PyArrow in pandas 3 than in pandas 6.
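To make the "legacy vs. new type system" split discussed above concrete (an illustration, not part of the original issue): the legacy system stores columns as plain NumPy dtypes, while the nullable and Arrow-backed systems use `ExtensionDtype` subclasses. The helper name below is hypothetical.

```python
import numpy as np
import pandas as pd

# Hypothetical helper: a column backed by a plain np.dtype belongs to the
# legacy NumPy type system; nullable/Arrow columns use ExtensionDtypes.
def uses_legacy_dtypes(df: pd.DataFrame) -> bool:
    return any(isinstance(dtype, np.dtype) for dtype in df.dtypes)

legacy = pd.DataFrame({"a": [1, 2, 3]})                          # plain int64
nullable = pd.DataFrame({"a": pd.array([1, 2, 3], dtype="Int64")})
print(uses_legacy_dtypes(legacy), uses_legacy_dtypes(nullable))
```

A check like this is one way users could audit existing code for columns that would change behavior under a new default type system.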
[ "Needs Discussion", "Arrow" ]
0
0
0
0
0
0
0
0
[ "Moving users to pyarrow types by default in 3.0 would be insane because #32265 has not been addressed. Getting that out of the way was the point of PDEP 16, which you and Simon are oddly hostile toward.", "Since I helped draft PDEP-10, I would like a world where the Arrow type system with NA semantics would be the only pandas type system.\n\nSecondarily, I would like a world where pandas has the Arrow type system with NA semantics and the legacy (NaN) Numpy type system which is completely independent from the Arrow type system (e.g. a user cannot mix the two in any way)\n\n---\nI agree with Brock that null semantics (NA vs NaN) must inevitably be discussed with adopting a new type system.\n\nI've also generally been concerned about the growing complexity of PDEP-14 with the configurability of \"storage\" and NA semantics (like having to define a comparison hierarchy https://github.com/pandas-dev/pandas/issues/60639). While I understand that we've been very cautious about compatibility with existing types, I don't think this is maintainable or clearer for users in the long run.\n\n---\nMy ideal roadplan would be:\n\n1. In pandas 3.x\n * Deprecate the NumPy nullable types\n * Deprecate NaN as a missing value for Arrow types\n * Deprecate mixing Arrow types and Numpy types in any operation\n * Add some global configuration API, `set_option(\"type_system\", \"legacy\" | \"pyarrow\")`, that configures the \"default\" type system as either NaN semantics with NumPy or NA semantics with Arrow (with the default being `\"legacy\"`)\n2. In pandas 4.x\n * Enforce above deprecations\n * Deprecate `\"legacy\"` as the \"default\" type system\n3. In pandas 5.x\n * Enforce `\"pyarrow\"` as the new \"default\" type system\n", "> Deprecate NaN as a missing value for Arrow types\n\nSo with this deprecation enforced, NaN in a constructor, setitem, or csv would be treated as distinct from pd.NA? 
If so, I’m on board but I expect that to be a painful deprecation.", "To be clear, I'm not hostile towards PDEP-16 at all. I think it's important for pandas to have clear and simple missing value handling, and while incomplete, I think the PDEP and the discussions have been very useful and insightful. Amd I really appreciate that work.\n\nI just don't see PDEP-16 as a blocker for moving to pyarrow, even less if implemented at the same time. And also, I wouldn't spend time with our own nullable dtypes, I would implement PDEP-16 only for pyarrow types.\n\nI couldn't agree more on the points Matt mentions for pandas 3.x. Personally I would change the default earlier. Sounds like pandas 4.x is mostly about showing a warning to users until they manually change the default, which I personally wouldn't do. But that's a minor point, I like the general idea.", ">> Deprecate NaN as a missing value for Arrow types\n\n> So with this deprecation enforced, NaN in a constructor, setitem, or csv would be treated as distinct from pd.NA?\n\nCorrect. Yeah I am optimistic that most of the deprecation would hopefully go into `ArrowExtensionArray`, but of course there are probably a lot of one-off places that need addressing.\n\n> I just don't see PDEP-16 as a blocker for moving to pyarrow\n\nThere is a lot of references about type systems in that PDEP that I think would warrant some re-imagining given which type systems are favored. As mentioned before (unfortunately) I think type systems and missing value semantics need to be discussed together ", "I created a separate issue #61620 for the option mentioned in the description and in Matt's roadmap, since I think that's somehow independent, and no blocked by PDEP-15, by this issue, or by nothing else that I know.\n\n> There is a lot of references about type systems in that PDEP that I think would warrant some re-imagining given which type systems are favored. 
As mentioned before (unfortunately) I think type systems and missing value semantics need to be discussed together\n\nI fully agree with this. But I'm not sure I fully understand why PDEP-16 must be a blocker for defaulting to PyArrow types.\n\nFor users already using PyArrow, they'll have to follow the deprecation transition if they are using `NaN` as missing. To me, this can be started in 3.0, 4.0 or whenever. And probably the earlier the better, so no more code is written with the undesired behavior.\n\nFor users not yet using PyArrow, I do understand that it's better to force the move when the PyArrow dtypes behave as we think they should behave. I'm not convinced this should be a blocker, and even less if the deprecation of the special treatment of `NaN` is also implemented in 3.0 or in the version when PyArrow types become the default. Maybe I'm wrong, but if you are getting data from parquet, csv... you are not immediately affected by this missing value semantics problems. You need to create data manually (rare for most professional use cases imho), or you need to be explicitly setting `NaN` in existing data (also rare in my personal experience). Am I missing something that makes this point important enough to be a blocker for moving to PyArrow and stop investing time in types that we plan to deprecate? If you have an example of code commonly used that is problematic for what I'm proposing, that can surely convince me, and help identify the optimal transition path.", "> Correct. Yeah I am optimistic that most of the deprecation would hopefully go into `ArrowExtensionArray`, but of course there are probably a lot of one-off places that need addressing.\n\n@mroeschke a couple of questions:\n\n1. could the ArrowExtensionArray be described as a true extension array, i.e. using the extension array interface for 3rd party EAs.?\n\n2. 
does the ArrowExtensionArray rely on any Cython code for the implementation to work?\n", "> > Deprecate NaN as a missing value for Arrow types\n> \n> So with this deprecation enforced, NaN in a constructor, setitem, or csv would be treated as distinct from pd.NA? If so, I’m on board but I expect that to be a painful deprecation.\n\nI have always understood the ArrowExtensionArray and ArrowDtype to be experimental, no PDEP and no roadmap item, for the purpose of evaluating PyArrow types in pandas to potentially eventually use as a backend for pandas nullable dtypes.\n\nSo I can sort of understand why the ArrowDtypes are no longer pure and have allowed pandas semantics to creep into the API.\n\nAs experimental dtypes why do they need any deprecation at all? Where do we promote these types as pandas recommended types?", "just to be clear about my previous statement, there is a roadmap item\n\n> Apache Arrow interoperability\n[Apache Arrow](https://arrow.apache.org/) is a cross-language development platform for in-memory data. The Arrow logical types are closely aligned with typical pandas use cases.\n>\n> We'd like to provide better-integrated support for Arrow memory and data types within pandas. This will let us take advantage of its I/O capabilities and provide for better interoperability with other languages and libraries using Arrow.\n\nbut I've never interpreted this to cover adopting the current ArrowDtype system throughout pandas", "> My ideal roadplan would be:\n\n@mroeschke given the current level of funding and interest for the development of the pandas nullable dtypes and the proportion of core devs that now appear to favor embracing the ArrowDtype instead, I fear that this may be, at this time, the most pragmatic approach in keeping pandas development moving forward as it does seem to have slowed of late. 
I'm not necessarily comfortable deprecating so much prior effort but then the same could have been said about Panel many years ago and I'm not sure anyone misses it today. If the community wants nullable dtypes by default, they may be less interested in the implementation details or even to some extent the performance. If the situation changes and there is more funding and contributions in the future and we have released a Windows 8 in the meantime then we could perhaps bring back the pandas nullable types.", "The goal of this and PDEP-13 were pretty aligned; prefer PyArrow to build out our default type system where applicable, and fill in the gaps using whatever we have as a fallback. That PDEP conversation stalled; not sure if its worth reviving or if this issue is going to tackle a smaller subset of the problem, but in any case I definitely support this", "> But I'm not sure I fully understand why PDEP-16 must be a blocker for defaulting to PyArrow types.\n\n@datapythonista I just would like some agreement that that defaulting to PyArrow types also matches PDEP-16's proposal to (only) NA semantics for this type as well when making the change for a consistent story, but I suppose they don't need to be done at the same time\n\n> could the ArrowExtensionArray be described as a true extension array, i.e. using the extension array interface for 3rd party EAs.?\n\n@simonjayhawkins yes, it purely uses `ExtensionArray` and `ExtensionDtype` to implement functionality.\n\n> does the ArrowExtensionArray rely on any Cython code for the implementation to work?\n\nIt does not, and ideally it won't. When interacting with other parts of Cython in pandas, e.g. `groupby`, we've created hooks to convert to numpy first.\n\n> As experimental dtypes why do they need any deprecation at all? 
Where do we promote these types as pandas recommended types?\n\nThis is a valid point; technically it shouldn't require deprecation.\n\nWhile our docs don't necessarily state a recommended type, anecdotally, it has felt like in past year or two there's been quite a number of conference talks, blogs posts, books that have \"celebrated\" the newer Arrow types in pandas. Although attention != usage, it may still warrant some care if changing behavior IMO.\n\n> If the situation changes and there is more funding and contributions in the future and we have released a Windows 8 in the meantime then we could perhaps bring back the pandas nullable types.\n\nAn alternative to completely deprecating and removing the pandas NumPy nullable types is to spin them off into their own repository & package and treat them like any other 3rd party `ExtensionArray` library for users that still want to use them.", "> > If the situation changes and there is more funding and contributions in the future and we have released a Windows 8 in the meantime then we could perhaps bring back the pandas nullable types.\n> \n> An alternative to completely deprecating and removing the pandas NumPy nullable types is to spin them off into their own repository & package and treat them like any other 3rd party `ExtensionArray` library for users that still want to use them.\n\nwe have a roadmap item: \n\n> Extensibility\nPandas extending.extension-types allow for extending NumPy types with custom data types and array storage. Pandas uses extension types internally, and provides an interface for 3rd-party libraries to define their own custom data types.\n\n>Many parts of pandas still unintentionally convert data to a NumPy array. These problems are especially pronounced for nested data.\n\n>We'd like to improve the handling of extension arrays throughout the library, making their behavior more consistent with the handling of NumPy arrays. 
We'll do this by cleaning up pandas' internals and adding new methods to the extension array interface.\n\nthe roadmap also states \n\n> pandas is in the process of moving roadmap points to PDEPs\n\nso we could perhaps have a separate issue on this (perhaps culminating in a PDEP to clear this off that list) to address any points to make the spin off easier?", "> > But I'm not sure I fully understand why PDEP-16 must be a blocker for defaulting to PyArrow types.\n> \n> [@datapythonista](https://github.com/datapythonista) I just would like some agreement that that defaulting to PyArrow types also matches PDEP-16's proposal to (only) NA semantics for this type as well when making the change for a consistent story, but I suppose they don't need to be done at the same time\n\nIn the absence of a competing proposal, I think you should proceed assuming that PDEP-16 is going to be accepted someday. A stale PDEP cannot be allowed to hold up development in other areas, especially if you have the bandwidth and resources to move forward on this.", "@mroeschke \n\n> * Deprecate the NumPy nullable types\n\njust to be clear, Int, Float, Boolean and pd.NA variants of StringArray only?\n\n> Deprecate mixing Arrow types and Numpy types in any operation\n\ndoes that include numpy object type?", "I'm going to throw out another (possibly crazy) idea based on these comments:\n> If we were starting pandas today, we would only use Arrow as storage for DataFrame columns / Series\n\n> An alternative to completely deprecating and removing the pandas NumPy nullable types is to spin them off into their own repository & package and treat them like any other 3rd party ExtensionArray library for users that still want to use them.\n\nThe idea is as follows:\n1. We create a new repository corresponding to what is in `pandas` 2.3 - let's call it `npandas` (\"n\" indicating \"numpy-based\")\n2. 
`pandas` 3.0 goes fully PyArrow - removes the numpy storage and removes the nullable extension types\n\nWe only do bug fixes to `npandas` - so if people want to use the numpy types or nullable types, they use that version.\n\nIf they want \"modern\" pandas, they fully buy into `pandas` 3.0 and `PyArrow` support - requiring `pyarrow` and start working with the future.\n\nThen we don't have to worry about a transition path that WE maintain. The transition is up to the users. They can take working code with `npandas` 2.3 and try `pandas` 3.0, and report issues in converting from `npandas` 2.3 to `pandas` 3.0. But we don't have to keep all these code paths working - we have 2 separate projects.\n\n", "Generally curious but why do we feel the need to remove the extension types? If anything, couldn't we just update them to use PyArrow behind the scenes? \n\nI realize we use the term \"numpy_nullable\" throughout, but conceptually there's nothing different between what they and their corresponding Arrow types do. The Arrow implementation is just going to be more lightweight, so why not update those in place and not require users to change their code?", "All proposals and discussion points so far seem very interesting. Personally I think it's great we are having this conversation.\n\nBased on the feedback above, what makes most sense to me is:\n\n- Require PyArrow in 3.0\n- We deprecate the NaN behavior in Arrow types. Correct me if I'm wrong, but I think it's not a huge effort, Bodo will probably fund the development, and Brock and Simon have availability to do it\n- We default to PyArrow dtypes starting in 3.0, but we provide a global flag (e.g. 
`pandas.options.mode.use_legacy_typing`) for users who aren't ready to use the Arrow types.\n\nThis wouldn't take us to a perfect situation, but it makes the transition path very easy for both users and us, and we can start focussing on pandas backed by Arrow, while gathering feedback from users quite fast without causing them any inconvenience (other than adding a line of code setting the option). After getting the feedback we will be in a better position to decide on deprecating the legacy types, moving them to a separate repo, maintaining them forever...", "@simonjayhawkins \n\n> just to be clear, Int, Float, Boolean and pd.NA variants of StringArray only?\n\nYes and I'd say all variants of `StringArray` (`ArrowExtensionArray` supports all functionality of `StringArray`)\n\n> does that include numpy object type?\n\nI would say yes.", "> We default to PyArrow dtypes starting in 3.0, but we provide a global flag\n\n3.0 is already huge (CoW, Strings) and way, way late. Let's not risk adding another year to that lateness.", "> We deprecate the NaN behavior in Arrow types. Correct me if I'm wrong, but I think it's not a huge effort, Bodo will probably fund the development, and Brock and Simon have availability to do it\n\nDeprecating that behavior is relatively easy (though I kind of expect @jorisvandenbossche to chime in that we're missing something important). The hard part is the step after that telling users they have to change all their existing `ser[1] = np.nan` to use `pd.NA` instead (and constructor calls, and csvs).\n\nAs to \"huge effort\", off the top of my head, some tasks that would be required in order to make pyarrow dtypes the default include:\n\na) making object, category, interval, period dtypes Just Work\nb) changing the non-numeric Index subclasses to use pyarrow dtypes\nc) updating all the tests to work with both/all backend global options. 
Even if all the non-test code is bug-free, this is months of work.\nd) (longer-term) try to ameliorate the performance hit for axis=1 operations, arithmetic, etc", "Thanks @jbrockmendel, this is great feedback.\n\nDo you think implementing the global option (without defaulting to arrow or fixing the above points) is reasonable for 3.0 (releasing it soon)?\n\nFor numpy nullable types, does that same list apply? Or just the work on missing value semantics is needed to consider them ready to be the default?", "Everything except the performance would need to be done regardless of the default.", "Trying to catch up with all discussions .. but already two quick points of feedback:\n\n1) Timing: \n\n> > We default to PyArrow dtypes starting in 3.0\n> \n> 3.0 is already huge (CoW, Strings) and way, way late. Let's not risk adding another year to that lateness.\n\nI agree with @jbrockmendel here. Personally I would certainly not consider any of this for 3.0. There is still _lots_ of work to do before we can even consider making the pyarrow dtypes the default (right now users cannot even try to this out fully), which I think will easily take at least a year. \n(and saying that as the person who caused a year of delay for 3.0 thinking we could \"quickly\" include the string dtype feature in pandas 3.0 at a point where we were otherwise already ready to release ...)\n\n2) General approach of the \"PyArrow dtypes\": \n\nPersonally, I would prefer that we go for the model chosen for the `StringDtype`, i.e. not necessarily the fact that there are multiple backends (or the `na_value`), but I mean the fact that we went for a `pd.StringDtype()` as the default dtype and not for `pd.ArrowDtype(pa.large_string())`. \nWe should make good use of the Arrow type system, but IMO we 1) should not necessarily expose all of its details to our users (e.g. 
string vs large_string vs string_view, but have a default \"string\" dtype (with potentially options for power users than want to customized it)), and so I think we should hide some implementation details of (Py)Arrow under a nice \"pandas logical dtype\" system (i.e. @WillAyd's PDEP), and 2) by having separate classes per \"type\" of dtype, we can also provide a better user experience (e.g. a datetime/timestamp dtype will have a timezone attribute, but numeric dtypes don't need this).\n\nAs another example of this, I think we should create a variant of `pd.CategoricalDtype` that is nullable and uses pyarrow under the hood, and not have people rely on the fact that this translates to `pd.ArrowDtype(pa.dictionary(...))` \n(in the line of what Brock said above about \"making object, category, interval, period dtypes Just Work\")", "Thanks @jorisvandenbossche \n\n> 2\\. General approach of the \"PyArrow dtypes\":\n\nThis aligns with my expectation of the future direction of pandas. Do we have any umbrella discussion anywhere that could be used as the basis of a PDEP so this is on the roadmap. (note: I'm not in any way volunteering to take that on)\n\nI don't think PDEP-16 covers the using of pyarrow as a backend only and hiding the functionality behind our pandas nullable types as it is focused on the missing value semantics. Now, I appreciate that is part of that plan, but is not the plan as a roadmap item.\n\nBut to be fair, ArrowDType was introduced without a PDEP or IMO a roadmap item that covered what was implemented. As said above, these types have been promoted at conferences etc and are potentially being actively used. This is now IMO pulling the development team in two different directions. For instance, I am very frustrated at the ArrowDType for string being available to users as it has caused so much confusion even within the team. IMO we don't need this as \"experimental\" since we have the pandas nullable dtype available. 
However, I also appreciate that for evaluating an experimental dtype system (not just individual dtypes) we should perhaps include it. So I am conflicted.\n\nNow we are where we are. And we probably now have as many (or more) core-devs that appear to favor the ArrowDtype system. So managing that is now likely as difficult as the technical discussions themselves. \n\n@mroeschke please don't take this as criticism as the work you have done is great (for the whole project, the maintenance and not just the Arrow work) and any issues that have arisen are systematic failures. As @WillAyd pointed out the PDEP process needs to be able to allow to make decisions without every minute detail ironed out before the PDEP is approved.\n", "Thanks all for the feedback. Personally feels like while opening so many discussions have been a bit chaotic (sorry about that as I started or reopened most), I think it's been very productive.\n\nI'd like to know how do you feel about trying to unify all the type system topics into a single PDEP that broadly defines a roadmap for the next steps and final goal. Maybe I'm too optimistic, but I think there is agreement in most points, and it's just the details that need to be decided. 
Some of the things that I think could go into the PDEP and there is mostly agreement:\n\n- \"Final\" pandas dtypes backed by arrow (probably also agreement on not exposing it to the user)\n- Eventually making PyArrow a required dependency (unclear when)\n- Implementing a global option to at least change between the status quo and the final dtype system (maybe other options)\n- The PyArrow dtypes still need work before being the default/final:\n - NA semantics\n - Functionality not yet implemented\n - Bugs / undesired behavior\n - Performance\n- We should be testing the final PyArrow type system via our test suite with the global option (this can lead the work on what needs to be fixed)\n- There should be a transition path from the status quo to the final types system that needs to be defined\n- We need to take into consideration that we have around 500 work hours of funding, and that if things don't change, it doesn't seem likely that anyone will put the huge amount of time this needs to finish the roadmap\n\nDoes people feel like it make sense to write a PDEP with the general picture of this roadmap (that should superseed this issue, PDEP-15 and probably others)? Does anyone want to lead the effort to get this PDEP done?", "> please don't take this as criticism as the work you have done is great \n\nNone taken :) . Yes, the development of these types preceded our agreed-upon approval process for larger pandas enhancements.\n\n> Personally, I would prefer that we go for the model chosen for the StringDtype\n\nFWIW, the early model of `ArrowDtype` almost followed a variant of the `StringDtype` model (https://github.com/pandas-dev/pandas/pull/46972). I have come to like the benefits of the current `ArrowDtype` (probably with some bias), but I'll argue my case in a different forum.", "> FWIW, the early model of `ArrowDtype` almost followed a variant of the `StringDtype` model ([#46972](https://github.com/pandas-dev/pandas/pull/46972)). 
I have come to like the benefits of the current `ArrowDtype` (probably with some bias), but I'll argue my case in a different forum.\n\nSurely, this is that forum? (From the title of the discussion)\n\nI think that we should have another discussion for pandas nullable dtypes by default, which arguably is already opened #58243?\n\n@mroeschke you have come up with a proposed (maybe seen as alternative) roadmap and I for one am more than happy to give it serious consideration. To do this IMO we must keep the discussion about pandas nullable types in a different thread. Discussion is good. With respect to bias, not a problem in this thread. To actively participate in this discussion I need to avoid any bias that I may have towards the pandas nullable types arising from my involvement in the pd.NA variant of the string array.\n\nIn my head, I can see a path that allows both to progress in parallel avoiding conflict. I'll elaborate later when I can present it as a coherent plan. (it involves separation of type systems as you have suggested, some deprecations as you have suggested but not removals, instead moving things behind flags as suggested) - A pure arrow approach and a cuddly wrapped one. Now that users have already been exposed to the more \"raw\" types, users may not needed to be shielded from this to the same extent and that could indeed be a path to the pandas nullable types. 
We may need to recognize the realities on the ground to come to a resolution with respect to the natural evolution and adoption of the ArrowDtype, the current level of funding and more importantly what we would have approval to use the funds for and lastly the decline in contributions of late and therefor the realistic speed of progress/interest on the nullable types.\n\n", "> > does that include numpy object type?\n> \n> I would say yes.\n\n```python\ndf = pd.DataFrame(\n {\n \"int\": pd.Series([1, 2, 3], dtype=pd.ArrowDtype(pa.int64())),\n \"str\": pd.Series([\"a\", \"b\", pd.NA], dtype=pd.ArrowDtype(pa.string())),\n }\n)\nprint(df)\n# int str\n# 0 1 a\n# 1 2 b\n# 2 3 <NA>\n\nres = df.iloc[2]\nres\n# int 3\n# str <NA>\n# Name: 2, dtype: object\n\nres.dtype\n# dtype('O')\n\nres[\"int\"]\n# 3\n\ntype(res[\"int\"])\n\nprint(df.T)\n# 0 1 2\n# int 1 2 3\n# str a b <NA>\n\ndf.T.dtypes\n# 0 object\n# 1 object\n# 2 object\n# dtype: object\n```\n\nSo `loc`/`iloc` and transpose, and along with other axis=1 operations, on a heterogeneous dataframe of arrow only types, currently return a numpy array of Python objects. Numpy `object` dtype with pd.NA doesn't really work so well IMO.\n\nHow would we avoid numpy object dtype in a arrow only dtype system? Breaking API changes to the return types of iloc/loc? A nullable object dtype? having pyarrow scalars in the numpy array instead? .... ", "> So `loc`/`iloc` and transpose, and along with other axis=1 operations, on a heterogeneous dataframe of arrow only types, currently return a numpy array of Python objects. 
Numpy `object` dtype with pd.NA doesn't really work so well IMO.\n\nThat happens with the nullable types as well:\n\n```python\n>>> sa = pd.Series([1,2,pd.NA], dtype=\"Int64\")\n>>> sb = pd.Series([1.1, pd.NA, 2.2], dtype=\"Float64\")\n>>> sc = pd.Series([pd.NA, \"v2\", \"v3\"], dtype=\"string\")\n>>> df = pd.DataFrame({\"a\":sa, \"b\":sb, \"c\": sc})\n>>> df\n a b c\n0 1 1.1 <NA>\n1 2 <NA> v2\n2 <NA> 2.2 v3\n>>> df.T\n 0 1 2\na 1 2 <NA>\nb 1.1 <NA> 2.2\nc <NA> v2 v3\n>>> df.T.dtypes\n0 object\n1 object\n2 object\ndtype: object\n```\n\nAnd even with the numpy types:\n```python\n>>> san = pd.Series([1,2,3], dtype=\"int\")\n>>> sbn = pd.Series([1.1, np.nan, 2.2], dtype=\"float\")\n>>> scn = pd.Series([np.nan, \"v2\", \"v3\"])\n>>> dfn = pd.DataFrame({\"a\":san, \"b\":sbn, \"c\": scn})\n>>> dfn.dtypes\na int64\nb float64\nc object\ndtype: object\n>>> dfn.T\n 0 1 2\na 1 2 3\nb 1.1 NaN 2.2\nc NaN v2 v3\n>>> dfn.T.dtypes\n0 object\n1 object\n2 object\ndtype: object\n>>> dfn.iloc[0]\na 1\nb 1.1\nc NaN\nName: 0, dtype: object\n```\n\nSo here in `dfn.iloc[0]`, we have an integer, a float and `np.nan`, and we infer the dtype as `object`\n" ]
3,131,536,267
61,617
BUG: numpy_nullable NaNs do not round-trip through CSV
open
2025-06-09T21:30:14
2025-07-10T15:15:11
null
https://github.com/pandas-dev/pandas/issues/61617
true
null
null
jbrockmendel
2
### Pandas version checks - [x] I have checked that this issue has not already been reported. - [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas. - [x] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas. ### Reproducible Example ```python arr = pd.array([0, pd.NA]) ser = pd.Series(arr) / 0 ser[2] = 3 df = ser.to_frame("A") text = df.to_csv() rt = pd.read_csv(StringIO(text), dtype_backend="numpy_nullable")[["A"]] >>> df.loc[0, "A"] np.float64(nan) >>> rt.loc[0, "A"] <NA> ``` ### Issue Description This is a consequence of the constructor casting NaNs to pd.NA, xref #32265 ### Expected Behavior NA ### Installed Versions <details> Replace this line with the output of pd.show_versions() </details>
[ "Bug", "IO CSV", "PDEP missing values" ]
0
0
0
0
0
0
0
0
[ "take @jbrockmendel i can resolve this issue?", "This is not actionable at the moment. We suggest looking for an issue with the \"good first issue\" label." ]
3,131,243,281
61,616
CI: New NumPy release breaking Numba in our CI
open
2025-06-09T19:28:13
2025-06-09T19:56:30
null
https://github.com/pandas-dev/pandas/issues/61616
true
null
null
datapythonista
2
``` pandas/tests/groupby/aggregate/test_numba.py:19: in <module> numba = pytest.importorskip("numba") ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ E pytest.PytestDeprecationWarning: E Module 'numba' was found, but when imported by pytest it raised: E ImportError('Numba needs NumPy 2.2 or less. Got NumPy 2.3.') E In pytest 9.1 this warning will become an error by default. E You can fix the underlying problem, or alternatively overwrite this behavior and silence this warning by passing exc_type=ImportError explicitly. E See https://docs.pytest.org/en/stable/deprecations.html#pytest-importorskip-default-behavior-regarding-importerror ``` Source: https://github.com/pandas-dev/pandas/actions/runs/15541539524/job/43753280349#step:9:76 Somehow related: https://github.com/numba/numba/issues/10105 @pandas-dev/pandas-core I guess it's not the first time this happens, since seems that Numba has been raising `ImportError` instead of pining the dependencies for at least one more version. Doesn't seem like we should be pinning NumPy ourselves. Any idea how this was fixed before if it already happened, or what should we do to fix the CI?
[ "CI", "Dependencies" ]
0
0
0
0
0
0
0
0
[ "This was probably fixed by https://github.com/conda-forge/numba-feedstock/pull/157.\n\nI think https://github.com/conda-forge/conda-forge-repodata-patches-feedstock/pull/1039 will fix this with no change needed on our part, or you could temporarily add that pin until that repo data fix is in.", "Ah, amazing, thanks a lot for the info!" ]
3,131,181,248
61,615
BUG: Fix RecursionError when apply native container types as a func
closed
2025-06-09T19:03:33
2025-06-16T23:13:30
2025-06-16T23:13:25
https://github.com/pandas-dev/pandas/pull/61615
true
https://api.github.com/repos/pandas-dev/pandas/pulls/61615
https://github.com/pandas-dev/pandas/pull/61615
heoh
1
- [x] closes #61565 - [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature - [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit). - [x] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions. - Nothing new added. - [x] Added an entry in the latest `doc/source/whatsnew/v3.0.0.rst` file if fixing a bug or adding a new feature. ## Description Fixes a bug in `DataFrame.apply` raising `RecursionError` when passing `func=list[int]`. ## Cause The existing code handled parameterized container types, but not native container types, yielding false positives. I also added a check for `types.GenericAlias` to handle native container types. #### Reference: https://github.com/python/cpython/blob/3.12/Lib/typing.py#L1251 > ```py > class _GenericAlias(_BaseGenericAlias, _root=True): > ... > # Objects which are instances of this class include: > # * Parameterized container types, e.g. `Tuple[int]`, `List[int]`. > # * Note that native container types, e.g. `tuple`, `list`, use > # `types.GenericAlias` instead. > ```
[ "Dtype Conversions" ]
0
0
0
0
0
0
0
0
[ "Thanks @heoh " ]
3,130,644,380
61,614
feat(1.5.x): Add support for python 3.12
closed
2025-06-09T15:25:57
2025-06-09T16:53:46
2025-06-09T16:53:46
https://github.com/pandas-dev/pandas/pull/61614
true
https://api.github.com/repos/pandas-dev/pandas/pulls/61614
https://github.com/pandas-dev/pandas/pull/61614
piotrplenik
2
- [ ] closes #xxxx (Replace xxxx with the GitHub issue number) - [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature - [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit). - [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions. - [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
[]
0
0
0
0
0
0
0
0
[ "I don't think we have plans to release any new 1.5 version, or provide support of any version older than 2.3. It would also be useful to reference an issue or provide a description, so we have context on your proposal.", "Thanks for the PR, but as mentioned the community no longer plans to supporting pandas 1.5 anymore, so further support needs to be accomplished in forks of pandas. Closing as no action." ]
3,129,988,361
61,613
Fix type annotation issues in pandas/core/frame.py to resolve self-re…
closed
2025-06-09T11:14:56
2025-06-09T11:36:15
2025-06-09T11:36:15
https://github.com/pandas-dev/pandas/pull/61613
true
https://api.github.com/repos/pandas-dev/pandas/pulls/61613
https://github.com/pandas-dev/pandas/pull/61613
mdawoud27
0
…ferences and pipe operator syntax - [x] closes #xxxx (Replace xxxx with the GitHub issue number) - [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature - [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit). - [x] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions. - [x] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
[]
0
0
0
0
0
0
0
0
[]
3,128,709,220
61,612
Fix explode to preserve datetime unit in Series and DataFrame; update…
closed
2025-06-08T21:27:32
2025-06-12T09:09:05
2025-06-12T09:09:05
https://github.com/pandas-dev/pandas/pull/61612
true
https://api.github.com/repos/pandas-dev/pandas/pulls/61612
https://github.com/pandas-dev/pandas/pull/61612
Narendersingh007
0
This PR fixes an issue where the explode method in both Series and DataFrame does not preserve the datetime unit information (such as milliseconds or microseconds) of DatetimeIndex or datetime-like data. Previously, exploding a datetime-like Series or DataFrame column would convert timestamps to nanosecond resolution, losing the original unit precision. The fix ensures that after exploding, the resulting Series or DataFrame retains the original datetime unit dtype, maintaining consistency and avoiding unwanted dtype changes. Additionally, relevant tests have been updated and extended to verify that the datetime unit is preserved through explode operations in both single- and multi-column cases. - [x] closes #61610 - [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature - [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit). - [x] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions. - [x] Added an entry in the latest `doc/source/whatsnew/v3.0.0.rst` file if fixing a bug or adding a new feature.
[]
0
0
0
0
0
0
0
0
[]
3,128,685,347
61,611
Fix/devcontainer qt deps
closed
2025-06-08T20:43:57
2025-07-28T17:19:52
2025-07-28T17:19:51
https://github.com/pandas-dev/pandas/pull/61611
true
https://api.github.com/repos/pandas-dev/pandas/pulls/61611
https://github.com/pandas-dev/pandas/pull/61611
Drei-E3
1
- [x] closes #61037 - [x] [Tests not added and not passed]Test: Manually ran pytest inside the devcontainer to confirm PyQt5 and pytest-qt are functional. - Adding platform: linux/amd64 to the docker-compose.yml dev service to work around image compatibility issues on Apple Silicon. - Switching from direct Dockerfile builds to docker-compose.yml via updates in .devcontainer/devcontainer.json: Removed "dockerFile" setting Added "service", "workspaceFolder", and "dockerComposeFile" - Installing qt5-qmake and qtbase5-dev via apt-get to support pytest-qt, which is required for the test suite. However this is commented in order to avoid redundant tools installed on not arm/Arch64 plattform just like the original file. The user should uncomment it. These changes resolve build failures seen on Apple Silicon when using the VS Code Remote - Containers extension. Why this matters: Apple Silicon machines often encounter architecture compatibility issues when building development containers, especially when Python packages need to compile C/C++ or Qt-based code. These changes ensure a smooth devcontainer build experience on both ARM64 and x86_64 environments. ## Introduction If you use mac Silicon, you should uncomment "# platform: linux/amd64" of docker-compose.yml file and #-y qt5-qmake qtbase5-dev\ of Dockerfile. If you use another plattform, just build dev container or docker container as usual, nothing was changed.
[ "Build" ]
0
0
0
0
0
0
0
0
[ "Thanks for the pull request, but it appears to have gone stale. If interested in continuing, please merge in the main branch, address any review comments and/or failing tests, and we can reopen." ]
3,128,583,320
61,610
BUG: `explode()` converts timestamps at millisecond resolution in DatetimeIndex to nanosecond resolution
closed
2025-06-08T18:02:53
2025-06-11T22:40:10
2025-06-11T22:40:10
https://github.com/pandas-dev/pandas/issues/61610
true
null
null
void-rooster
2
### Pandas version checks - [x] I have checked that this issue has not already been reported. - [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas. - [ ] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas. ### Reproducible Example ```python import pandas as pd test = pd.Series([pd.date_range("2020-01-01T00:00:00Z", "2020-01-01T02:00:00Z", freq="1h", unit="ms")]) test.explode().dtype ``` ### Issue Description The docs for `pd.date_range` state that the `unit` keyword argument is the resolution of timestamps in the returned DatetimeIndex, which is true---and counter to the usage of `unit` elsewhere, e.g. in `pd.to_datetime`. Regardless of this discrepancy, `explode` does not respect the millisecond resolution of timestamps in a DatetimeIndex, converting them to nanosecond resolution in the returned Series or DataFrame. ### Expected Behavior dtypes should not be changed by `explode`. 
### Installed Versions INSTALLED VERSIONS ------------------ commit : 2cc37625532045f4ac55b27176454bbbc9baf213 python : 3.11.13 python-bits : 64 OS : Linux OS-release : 5.15.0-139-generic Version : #149~20.04.1-Ubuntu SMP Wed Apr 16 08:29:56 UTC 2025 machine : x86_64 processor : byteorder : little LC_ALL : None LANG : C.UTF-8 LOCALE : en_US.UTF-8 pandas : 2.3.0 numpy : 1.26.4 pytz : 2025.2 dateutil : 2.9.0.post0 pip : 25.1.1 Cython : None sphinx : 8.2.3 IPython : 9.3.0 adbc-driver-postgresql: None adbc-driver-sqlite : None bs4 : 4.13.4 blosc : None bottleneck : None dataframe-api-compat : None fastparquet : None fsspec : 2025.5.1 html5lib : None hypothesis : None gcsfs : None jinja2 : 3.1.6 lxml.etree : None matplotlib : 3.10.3 numba : None numexpr : None odfpy : None openpyxl : None pandas_gbq : None psycopg2 : None pymysql : None pyarrow : 20.0.0 pyreadstat : None pytest : 8.4.0 python-calamine : None pyxlsb : None s3fs : 2025.5.1 scipy : 1.15.3 sqlalchemy : 2.0.41 tables : None tabulate : None xarray : None xlrd : None xlsxwriter : None zstandard : None tzdata : 2025.2 qtpy : None pyqt5 : None
[ "Bug", "Needs Triage" ]
0
0
0
0
0
0
0
0
[ "Hello, I’m interested in contributing a fix for this issue.", "Thanks for the report. This seems to be fixed in main." ]
3,128,489,154
61,609
CLN: Fix code formatting to address pre-commit and build failures
closed
2025-06-08T16:07:29
2025-06-09T10:14:53
2025-06-09T08:10:03
https://github.com/pandas-dev/pandas/pull/61609
true
https://api.github.com/repos/pandas-dev/pandas/pulls/61609
https://github.com/pandas-dev/pandas/pull/61609
Neer-Pathak
7
Run ruff --fix and ruff format to fix style issues flagged by pre-commit. This resolves common RUF003 errors (e.g., ambiguous hyphens) seen across multiple PRs.
[]
0
0
0
0
0
0
0
0
[ "I think as per the documentation I should rename my commit to CLN: Code cleanup.", "Not sure where you saw pre-commit flagging issues, but the CI is green for main, I think all is good (based on our validations).\r\n\r\nMaybe you run things locally with different versions of the validation tools?", "> Not sure where you saw pre-commit flagging issues, but the CI is green for main, I think all is good (based on our validations).\n> \n> Maybe you run things locally with different versions of the validation tools?\n\nYeah makes sense — but even on a clean main, pre-commit fails for me unless I run Ruff. Even adding just a comment triggers it, so I don’t think it’s a version mismatch. Might be that pre-commit is catching things CI isn’t?\n\nAlso seeing the docstring validation failing — happy to fix it if needed, or we can just drop the PR if it’s not worth changing right now. Let me know what you prefer!\n\n", "We do check pre-commit in the CI, with main and a clean install of the pre-commit tools if I'm not wrong. And it's green for main, so it seems like you've got something different than our CI locally.\r\n\r\nI'll run pre-commit to this PR to see how it looks like.", "pre-commit.ci autofix", "Ahh got it — makes sense now. I’ll revert those changes to the test docstrings. Thanks for pointing that out!", "The problem was that after making the code change, I was manually running ruff format and ruff check --fix, which applied to the entire directory. After running pre-commit, I realized that everything outside the pandas/ directory is skipped, along with pandas/tests/, when using pre-commit run --all-files." ]
3,128,436,139
61,608
BUG: Pandas concat raises RuntimeWarning: '<' not supported between i…
closed
2025-06-08T15:15:25
2025-06-08T16:01:47
2025-06-08T16:01:47
https://github.com/pandas-dev/pandas/pull/61608
true
https://api.github.com/repos/pandas-dev/pandas/pulls/61608
https://github.com/pandas-dev/pandas/pull/61608
Neer-Pathak
0
# Fix GH-61477: Stop Spurious Warning When `concat(..., sort=False)` on Mixed-Type `MultiIndex` ## Overview When you do something like: ```python pd.concat([df1, df2], axis=1, sort=False) ``` and your two DataFrames have MultiIndex columns that mix tuples and integers, pandas used to try to sort those labels under the hood. Since Python cannot compare tuple < int, you’d see: ``` RuntimeWarning: '<' not supported between instances of 'int' and 'tuple'; sort order is undefined for incomparable objects with multilevel columns ``` This warning is confusing, and worse, you explicitly asked not to sort (sort=False), so pandas should never even try. # What Changed 1. Short-circuit Index.union when sort=False Before: Even with sort=False, pandas would call its normal union logic, which might attempt to compare labels. Now: If you pass sort=False, we simply concatenate the two index arrays with: ``` np.concatenate([self._values, other._values]) ``` and wrap that in a new Index. No comparisons, no warnings, and your original order is preserved. 2. Guard sorting in MultiIndex._union Before: pandas would call ```result.sort_values()``` when sort wasn’t False, and if labels were unorderable it would warn you. Now: We only call ```sort_values()``` when sort is truthy (True), and we wrap it in a ```try/except``` TypeError that silently falls back to the existing order on failure. No warning is emitted. 3. New Regression Test A pytest test reproduces the original bug scenario, concatenating two small DataFrames with mixed-type MultiIndex columns and ```sort=False.``` The test asserts: No RuntimeWarning is raised Column order is exactly “first DataFrame’s columns, then second DataFrame’s columns” Respects sort=False: If a user explicitly disables sorting, pandas won’t try. Silences spurious warnings: No more confusing messages about comparing tuples to ints. Keeps existing behavior for sort=True: You still get a sort or a real error if the labels truly can’t be ordered. 
For testing we can try ``` import numpy as np, pandas as pd left = pd.DataFrame( np.random.rand(5, 2), columns=pd.MultiIndex.from_tuples([("A", 1), ("B", (2, 3))]) ) right = pd.DataFrame( np.random.rand(5, 1), columns=pd.MultiIndex.from_tuples([("C", 4)]) ) # No warning, order preserved: out = pd.concat([left, right], axis=1, sort=False) print(out.columns) # [("A", 1), ("B", (2, 3)), ("C", 4)] # Sorting still works if requested: sorted_out = pd.concat([left, right], axis=1, sort=True) print(sorted_out.columns) # sorted order or TypeError if impossible ``` Implemented a new approach for concatenating indices with mixed data types using the 'union' method to resolve the previous failing test cases. This ensures correct merging of indices with different types, addressing the issue reported in the original pull request.
[]
0
0
0
0
0
0
0
0
[]
3,128,119,804
61,607
QST: Subject: User Experience Issue - NumPy Types in DataFrame Results Breaking Readability
open
2025-06-08T09:05:24
2025-06-09T16:51:25
null
https://github.com/pandas-dev/pandas/issues/61607
true
null
null
COderHop
2
### Research - [x] I have searched the [[pandas] tag](https://stackoverflow.com/questions/tagged/pandas) on StackOverflow for similar questions. - [x] I have asked my usage related question on [StackOverflow](https://stackoverflow.com). ### Link to question on StackOverflow None ### Question about pandas ssue Description TL;DR: Since pandas 2.0+, .tolist() and similar methods return NumPy types instead of native Python types, severely impacting user experience and data readability. Problem Example Before (pandas 1.x): pythondf.index.tolist() # Returns: [0, 1, 2, 3, 4] # Clean, readable Now (pandas 2.x): pythondf.index.tolist() # Returns: [np.int64(0), np.int64(1), np.int64(2), np.int64(3), np.int64(4)] # Verbose, confusing Impact on User Experience Poor Readability: Results are cluttered with np.int64(), np.float64() wrappers Debugging Nightmare: Harder to quickly scan and understand data Display Issues: When printing or logging, output is unnecessarily verbose User Confusion: Many users don't understand why they're seeing NumPy types Breaking Change: Existing code expectations broken without clear migration path Current Workarounds Are Painful Users now need to write additional code for basic operations: python# Instead of simple: indices = df.index.tolist() # We need: indices = [int(x) for x in df.index.tolist()] The Core Problem DataFrames are meant for data analysis and exploration. The primary use case is human-readable data inspection, not performance-critical numerical computation at the .tolist() level. Suggested Solutions Add a parameter: .tolist(native_types=True) (default True for user-facing methods) Separate methods: Keep .tolist() for NumPy types, add .tolist_clean() for Python types Configuration option: Allow users to set pandas behavior globally Revert the change: Prioritize user experience over marginal performance gains Why This Matters Pandas' strength has always been its ease of use and intuitive behavior. 
This change sacrifices user experience for performance gains that most users don't need when calling .tolist(). The goal of data analysis is insight, not fighting with data types. Request Please consider reverting this behavior or providing a simple, built-in solution. The current situation forces every pandas user to write boilerplate code for basic data inspection. Thank you for maintaining this incredible library. I hope we can find a solution that balances performance with the user-friendly experience that makes pandas great. Environment: pandas: 2.2.3 numpy: 1.26.4 Impact: All DataFrame operations returning lists
[ "Usage Question", "Needs Triage" ]
0
0
0
0
0
0
0
0
[ "Thanks @COderHop for the report.\n\nIt appears that you have a good grasp of the issue. IIRC this has been reported/discussed before but I can't find it at this time.\n\n> Breaking Change: Existing code expectations broken without clear migration path\n\nI do not agree that from the pandas perspective this is true. Numpy made a change to their repr and pandas continues to return Numpy types as before, only the repr has changed and that should not really be considered a pandas issue.\n\nHowever, to be fair, many users were probably unaware before that their lists contained numpy types and not Python types which would have perhaps been a more logical design choice. If pandas had however changed the return type this would have been a breaking change.\n\n> Please consider reverting this behavior or providing a simple, built-in solution.\n\nIIRC other discussions have suggested making this breaking change in a future release in the the return type of some operations for which a return of standard Python objects would be appropriate. This seems reasonable to me.\n\nEven though I'm sure this is a duplicate issue, I'll leave it open until I can find the other issues or until someone else point us in the right direction.\n\n@mroeschke IIRC you did some PRs at some point related to this to fix ci?\n", "Since this issue is focusing on `tolist` conversion, there has been discussion that generally conversion to Python collections should also convert scalars to Python scalars, but the pandas API is inconsistent with this conversion in other APIs as well (e.g. `__iter__`)" ]
3,128,016,694
61,606
DEP: update python-calamine to 0.3.2
closed
2025-06-08T07:05:47
2025-06-19T20:50:35
2025-06-19T20:50:35
https://github.com/pandas-dev/pandas/pull/61606
true
https://api.github.com/repos/pandas-dev/pandas/pulls/61606
https://github.com/pandas-dev/pandas/pull/61606
chilin0525
5
- [x] closes https://github.com/pandas-dev/pandas/issues/61186 - [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature - [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit). - [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions. - [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature. This PR updates the calamine engine dependency to version `0.3.2`, which includes the upstream fix for leading-zero truncation in VLOOKUP-derived Excel cells.
[ "Dependencies" ]
0
0
0
0
0
0
0
0
[ "It'd be good to either link an existing issue, or to write a description, so we can understand why we should merge this.", "@datapythonista Thanks for the reminder! I’ve now added the issue link for clarity. Since this is my first PR involving a package version bump, I wanted to ask:\r\n1. Does the pandas community support this upgrade to calamine >= 0.3.2? If it’s not aligned with project goals, I’m happy to close the PR.\r\n2. If the upgrade is approved, would you like me to add a unit test covering the original issue scenario?\r\n\r\nThanks! ", "Thanks for the context. This looks reasonable. In general we support dependencies that are two years old. Given there is a bug, I guess it can make sense to require a version that is newer than that. @mroeschke @rhshadrach do you have an opinion on whether it's more important to be compatible with older versions one of the libraries to read Excel, or to prevent a bug it has?", "> Given there is a bug, I guess it can make sense to require a version that is newer than [2 years].\r\n> ...\r\n> do you have an opinion on whether it's more important to be compatible with older versions one of the libraries to read Excel, or to prevent a bug it has?\r\n\r\nEvery optional dependency has bugfixes on just about every release, I think a bugfix would only justify bumping the minimum version if it is of an serious nature. Security issues as well. I do not think this is the case here.", "Let's close this then. I assume many users create a fresh environment and they'll get a new version anyway. For the people who has environments with the old dependency, hopefully they can find information that leads them to upgrading the buggy version. " ]
3,127,822,197
61,605
DOC: Show constructor arguments for some classes in `pd.series.offsets`
closed
2025-06-08T02:20:34
2025-08-20T20:30:50
2025-08-20T20:30:48
https://github.com/pandas-dev/pandas/pull/61605
true
https://api.github.com/repos/pandas-dev/pandas/pulls/61605
https://github.com/pandas-dev/pandas/pull/61605
Dr-Irv
16
This contributes to https://github.com/pandas-dev/pandas/issues/52431 In that issue, the goal is to show the arguments for the constructors for the various offsets, which are all in `pyx` files. The problem is that `sphinx` doesn't pick up the arguments for `__init__()` in those files. It seems that if you have `class` that is a not a `cdef class`, you have to have a `__new__()` method so the docs show up. This PR does this for `YearEnd`, by creating a `cdef _YearEnd` class. and then `YearEnd` is a regular class with `__new__()` calling `__new__()` of `_YearEnd`. So, if you preview the docs from this MR, you will see that `YearEnd` now shows its constructor. If this PR is accepted, then we can get the community to do the work of separating the classes into "public" and "private" ones and adding adding the `__new__()` methods to the other documented offset classes and they can also add a `Parameters` section to those classes. Note that for `YearEnd`, I had to change the docs from `Attributes` to `Parameters` to make it so that the docstrings are validated.
[ "Docs", "Frequency" ]
0
0
0
0
0
0
0
0
[ "/preview\r\n", "No strong opinions about this, but if the examples need the okexcept on windows sounds like this change is breaking things on windows, no? I guess I'm missing something, if you can clarify. But it would be better to fix the underlying problem, I don't think we should add okexcept unless we document exceptions.", "> No strong opinions about this, but if the examples need the okexcept on windows sounds like this change is breaking things on windows, no? I guess I'm missing something, if you can clarify. But it would be better to fix the underlying problem, I don't think we should add okexcept unless we document exceptions.\r\n\r\nNo, the change is needed to just build the regular documentation (without this change) on Windows. It has something to do with how sphinx calls cython and that doesn't seem to work right on Windows. Not sure exactly what the problem is.\r\n", "> No strong opinions about this, but if the examples need the okexcept on windows sounds like this change is breaking things on windows, no? I guess I'm missing something, if you can clarify. But it would be better to fix the underlying problem, I don't think we should add okexcept unless we document exceptions.\r\n\r\nI found a better fix in #61686 . 
So the `okexcept` is not there any more.", "/preview\r\n", "/preview", "Website preview of this PR available at: https://pandas.pydata.org/preview/pandas-dev/pandas/61605/", "> If this PR is accepted, then we can get the community to do the work of separating the classes into \"public\" and \"private\" ones and adding adding the `__new__()` methods to the other documented offset classes and they can also add a `Parameters` section to those classes.\r\n\r\nThis seems like a very heavy handed code change for docs.", "> From the linked issue:\r\n> \r\n> > It turns out that making the constructor arguments appear from PYX files that are imported 3 levels in the hierarchy doesn't work right with Sphinx.\r\n> \r\n> Has this been reported?\r\n\r\nIt was a problem in our repo. It is fixed in this PR in `doc/source/index.rst.template`\r\n", "> > If this PR is accepted, then we can get the community to do the work of separating the classes into \"public\" and \"private\" ones and adding adding the `__new__()` methods to the other documented offset classes and they can also add a `Parameters` section to those classes.\r\n> \r\n> This seems like a very heavy handed code change for docs.\r\n\r\nNo disagreement there. But I don't see another way of getting these constructors documented unless we work out something with the sphinx developers.\r\n\r\nNote that there is a similar problem for `Interval` (https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Interval.html#pandas.Interval) . For `Timestamp`, `Timedelta`, and `Period`, the same pattern is used so that the constructors show up in the docs, i.e., there is a private `cdef` class and the public class is a subclass of the private one, and the public one has all the docs.\r\n\r\n", "I believe this is https://github.com/cython/cython/issues/3873.\r\n\r\n@jbrockmendel - any thoughts here?", "This came up in the last dev call. I think Irv's solution was to make a non-cdef class with the docstring. 
The perf impact was about 100ns on construction which we agreed was not a big deal (just kind of janky).\r\n\r\n> For Timestamp, Timedelta, and Period, the same pattern is used so that the constructors show up in the docs\r\n\r\nFWIW im pretty sure that pattern is used because we implement a `__new__` method, which I don't think you can do in a cdef class.", "Is this something that can be reverted once sphinx/cython fixes something upstream? if so, can there be a `# TODO(sphinx3.14.159): ...` attached to these", "> Is this something that can be reverted once sphinx/cython fixes something upstream? if so, can there be a `# TODO(sphinx3.14.159): ...` attached to these\r\n\r\nMaybe. I don't know how easy the reversion would be. But if we make this change, we can also better document all the offset methods/properties and avoid the exceptions that appear in https://github.com/pandas-dev/pandas/blob/faf3bbb1d7831f7db8fc72b36f3e83e7179bb3f9/ci/code_checks.sh#L81\r\n\r\n", "@rhshadrach at the dev meeting on 8/13, @jbrockmendel said he was fine with this solution. Just need you to agree....", "Thanks @Dr-Irv " ]
3,127,606,594
61,604
WEB: Replace os.path with pathlib.Path in pandas_web.py
closed
2025-06-07T22:27:57
2025-06-13T16:22:08
2025-06-13T16:21:39
https://github.com/pandas-dev/pandas/pull/61604
true
https://api.github.com/repos/pandas-dev/pandas/pulls/61604
https://github.com/pandas-dev/pandas/pull/61604
iabhi4
5
Replaces `os.path` with `pathlib.Path` in `pandas_web.py`, as suggested by @datapythonista in [61578](https://github.com/pandas-dev/pandas/pull/61578). No functional changes, verified site generation remains correct - [x] Ran pre-commit check
[ "Web" ]
0
0
0
0
0
0
0
0
[ "Thanks for the review! updated to make sure all paths are created as `Path` early and used consistently. Removed redundant casts too.", "Addressed all the latest comments, switched to `Path.rglob`, cleaned up `base_url`, and ensured consistent `Path` usage. Let me know if anything else looks off, I also retested on local after the changes", "/preview", "Website preview of this PR available at: https://pandas.pydata.org/preview/pandas-dev/pandas/61604/", "Thanks for the clean up here @iabhi4 , very nice PR" ]
3,127,589,818
61,603
REF: Replace os.path with pathlib.Path in pandas_web.py
closed
2025-06-07T22:00:23
2025-06-07T22:18:40
2025-06-07T22:18:40
https://github.com/pandas-dev/pandas/pull/61603
true
https://api.github.com/repos/pandas-dev/pandas/pulls/61603
https://github.com/pandas-dev/pandas/pull/61603
iabhi4
1
Replaces `os.path` with `pathlib.Path` in `pandas_web.py`, as suggested by @datapythonista in [61578](https://github.com/pandas-dev/pandas/pull/61578). No functional changes; verified that site generation remains correct - [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
[]
0
0
0
0
0
0
0
0
[ "I'm resolving the conflicts, it's my previous PR which just got merged" ]
3,127,557,589
61,602
BUG: Writing UUIDs fail
closed
2025-06-07T21:30:22
2025-07-31T01:44:12
2025-07-31T01:43:59
https://github.com/pandas-dev/pandas/issues/61602
true
null
null
torchss
3
### Pandas version checks - [x] I have checked that this issue has not already been reported. - [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas. - [x] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas. ### Reproducible Example ```python >>> df = pd.DataFrame({'id': [uuid.uuid4(), uuid.uuid4(), uuid.uuid4()]}) >>> df id 0 6f6303cd-516d-4a27-9165-bb703f9e2240 1 c250ba7f-31db-47de-b02b-54296ac6a4df 2 c523257a-51ab-4160-957b-619ce55c78f9 >>> df.to_parquet('sample_pandas_pa.parquet', engine='pyarrow') Traceback (most recent call last): File "<stdin>", line 1, in <module> File ".venv/lib/python3.12/site-packages/pandas/util/_decorators.py", line 333, in wrapper return func(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^ File ".venv/lib/python3.12/site-packages/pandas/core/frame.py", line 3113, in to_parquet return to_parquet( ^^^^^^^^^^^ File ".venv/lib/python3.12/site-packages/pandas/io/parquet.py", line 480, in to_parquet impl.write( File ".venv/lib/python3.12/site-packages/pandas/io/parquet.py", line 190, in write table = self.api.Table.from_pandas(df, **from_pandas_kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "pyarrow/table.pxi", line 4793, in pyarrow.lib.Table.from_pandas File ".venv/lib/python3.12/site-packages/pyarrow/pandas_compat.py", line 639, in dataframe_to_arrays arrays = [convert_column(c, f) ^^^^^^^^^^^^^^^^^^^^ File ".venv/lib/python3.12/site-packages/pyarrow/pandas_compat.py", line 626, in convert_column raise e File ".venv/lib/python3.12/site-packages/pyarrow/pandas_compat.py", line 620, in convert_column result = pa.array(col, type=type_, from_pandas=True, safe=safe) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "pyarrow/array.pxi", line 365, in pyarrow.lib.array File "pyarrow/array.pxi", line 90, in pyarrow.lib._ndarray_to_array 
File "pyarrow/error.pxi", line 92, in pyarrow.lib.check_status pyarrow.lib.ArrowInvalid: ("Could not convert UUID('6f6303cd-516d-4a27-9165-bb703f9e2240') with type UUID: did not recognize Python value type when inferring an Arrow data type", 'Conversion failed for column id with type object') ``` ### Issue Description Writing UUIDs fail. pyarrow supports writing UUIDs ### Expected Behavior Writing UUIDs pass ### Installed Versions <details> INSTALLED VERSIONS ------------------ commit : 0691c5cf90477d3503834d983f69350f250a6ff7 python : 3.12.9 python-bits : 64 OS : Linux OS-release : 6.8.0-57-generic Version : #59~22.04.1-Ubuntu SMP PREEMPT_DYNAMIC Wed Mar 19 17:07:41 UTC 2 machine : x86_64 processor : x86_64 byteorder : little LC_ALL : None LANG : en_US.UTF-8 LOCALE : en_US.UTF-8 pandas : 2.2.3 numpy : 2.2.6 pytz : 2025.2 dateutil : 2.9.0.post0 pip : None Cython : None sphinx : None IPython : None adbc-driver-postgresql: None adbc-driver-sqlite : None bs4 : None blosc : None bottleneck : None dataframe-api-compat : None fastparquet : None fsspec : 2025.5.1 html5lib : None hypothesis : None gcsfs : None jinja2 : None lxml.etree : None matplotlib : None numba : None numexpr : None odfpy : None openpyxl : None pandas_gbq : None psycopg2 : None pymysql : None pyarrow : 20.0.0 pyreadstat : None pytest : None python-calamine : None pyxlsb : None s3fs : None scipy : 1.15.3 sqlalchemy : None tables : None tabulate : None xarray : None xlrd : None xlsxwriter : None zstandard : None tzdata : 2025.2 qtpy : None pyqt5 : None </details>
[ "Bug", "IO Parquet", "Needs Triage", "Upstream issue", "Arrow" ]
0
0
0
0
0
0
0
0
[ "#After review, this seems to be an issue with PyArrow not Pandas\nSpecifically the convert_columns in from_pandas function of PyArrow as it is not able to work with the UUID library's UUID datatype.\n\n```python\nimport uuid\nimport pandas as pd\nimport pyarrow as pa\n\ndf = pd.DataFrame({'id': [uuid.uuid4(), uuid.uuid4(), uuid.uuid4()]})\n\nnew_df = pa.Table.from_pandas(df)\n```\n\nEven the above code gives the same error\n\nA temporary workaround is to use the bytes data type before saving to parquet\n\n```python\nimport uuid\nimport pandas as pd\n\ndf = pd.DataFrame({'id': [uuid.uuid4(), uuid.uuid4(), uuid.uuid4()]})\n\n# Convert UUIDs to bytes for Arrow compatibility\ndf['id'] = df['id'].apply(lambda x: x.bytes)\n\ndf.to_parquet('sample_pandas_pa.parquet', engine='pyarrow')\n```\n\nThe above code produces no error and saves the file successfully\n\nOf course we can implement it on our side in to_parquet function but seems like a fix is needed on their side. Can we get a comment from the Maintainers to see how to proceed?", "Thanks @DevastatingRPG , found related issue in PyArrow: https://github.com/apache/arrow/issues/44224", "Thanks for the report and investigation. Closing as an upstream issue." ]
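The bytes workaround discussed in the comments relies on a standard-library property of `uuid.UUID`: every UUID has a lossless 16-byte representation, which Arrow can store as a binary column. A minimal stdlib-only sketch of the round trip:

```python
# A uuid.UUID can be stored as its 16-byte form (Arrow/Parquet-friendly)
# and reconstructed on read without loss.
import uuid

u = uuid.uuid4()
raw = u.bytes                      # 16 bytes, suitable for a binary column
roundtrip = uuid.UUID(bytes=raw)   # reconstruct the original UUID
```

On read, applying `uuid.UUID(bytes=...)` to the binary column recovers the original values, so the workaround loses no information.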
3,127,419,533
61,601
WEB: remove "String data type" from "Roadmap points pending a PDEP" section.
closed
2025-06-07T19:35:18
2025-06-08T16:54:58
2025-06-08T16:00:49
https://github.com/pandas-dev/pandas/pull/61601
true
https://api.github.com/repos/pandas-dev/pandas/pulls/61601
https://github.com/pandas-dev/pandas/pull/61601
simonjayhawkins
1
> pandas is in the process of moving roadmap points to PDEPs (implemented in August 2022). During the transition, some roadmap points will exist as PDEPs, while others will exist as sections below. This one is now covered by PDEP-14 which has been accepted and therefore no longer pending a PDEP.
[ "Web" ]
0
0
0
0
0
0
0
0
[ "I think that point was obsolete for a long time before PDEP-14. Thanks for taking care of this @simonjayhawkins " ]
3,127,271,808
61,600
DOC/ENH: Holiday exclusion argument
closed
2025-06-07T17:07:21
2025-06-14T11:55:33
2025-06-13T16:11:42
https://github.com/pandas-dev/pandas/pull/61600
true
https://api.github.com/repos/pandas-dev/pandas/pulls/61600
https://github.com/pandas-dev/pandas/pull/61600
sharkipelago
4
- [x] closes #54382 (Replace xxxx with the GitHub issue number) - [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature - [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit). - [x] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions. - [x] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
[ "Frequency" ]
0
0
0
0
0
0
0
0
[ "I'm trying to figure out what the Pyodide test failing means, but struggling a bit. @mroeschke any insight in where I should look to resolve the failed check? \r\n\r\nAlso trying to just rerun the checks, but couldn't figure out how to do that without making an empty commit ", "> I'm trying to figure out what the Pyodide test failing means\r\n\r\nI think that's unrelated to your PR, so no need to worry about it", "Thanks for all your help on the PR so far! I really appreciate it. Are there any other changes I should make?", "Thanks @sharkipelago " ]
3,127,184,241
61,599
PDEP-18: Nullable Object Dtype
open
2025-06-07T15:39:36
2025-07-28T00:09:55
null
https://github.com/pandas-dev/pandas/pull/61599
true
https://api.github.com/repos/pandas-dev/pandas/pulls/61599
https://github.com/pandas-dev/pandas/pull/61599
simonjayhawkins
61
as per PDEP-1 > The initial status of a PDEP will be Status: Draft. This will be changed to Status: Under discussion by the author(s), when they are ready to proceed with the decision making process. but comments are surely welcome in the meantime.
[ "Stale", "PDEP" ]
0
0
0
0
0
0
0
0
[ "“Object” analogous to “Float64”?", "At least to me the PDEP will be easier to read (and comment on) if you limit the line width, to 80 or similar.\r\n\r\nThe idea sounds good, it'd be good if you can provide information on how using a boolean mask compares to having `pandas.NA` inside the main array.", "> At least to me the PDEP will be easier to read (and comment on) if you limit the line width, to 80 or similar.\r\n\r\nsure.\r\n\r\n> it'd be good if you can provide information on how using a boolean mask compares to having `pandas.NA` inside the main array.\r\n\r\nusing a sentinel as opposed to a mask is an implementation detail that I can expand on. I'm assuming that this would still be a separate dtype from the traditional numpy dtype, a pandas nullable dtype? We have the string array backed by a masked object array that I was effectively proposing reusing/refactoring as a base class.\r\n\r\nThere is also another option which is maybe what you are proposing: making a breaking change to the exisiting numpy object dtype to handle pd.NA differently? This is perhaps what is in the rejected ideas section and that needs clarification?", "> “Object” analogous to “Float64”?\r\n\r\nThat's the obvious choice but IIRC the capitalization was considered as confusing/non intuitive by some when discussed with respect to the string dtype.\r\n", "> > “Object” analogous to “Float64”?\r\n> \r\n> That's the obvious choice but IIRC the capitalization was considered as confusing/non intuitive by some when discussed with respect to the string dtype.\r\n\r\nnaming the nullable object dtype \"Object\" aligns well with pandas’ approach to evolving its dtypes (like \"Int64\" for integers, \"Float64\" for floats and \"Boolean\" for the nullable `bool` dtype). It creates a clear visual and semantic cue for users that a nullable, extension-based implementation is being used. 
As long as the design and documentation explicitly address the differences between the legacy object dtype and \"Object\", this approach could indeed enhance clarity and usability even though it is less explicit than \"object_nullable\".\r\n\r\nAlso bear in mind that, being effectively a tweak to the pd.NA variant of the python backed string dtype, the repr could be just \"object\", just as \"string[pyarrow]\" is shown as just \"string\". This effectively indicates the logical type and that instead of using the dtype string alias we could recommend constructing the nullable array for testing/evaluation using the Dtype object using the patterns that there seemed to be some consensus on in PDEP-13. There may be an advantage to using \"object\" as the repr instead of \"Object\" as this could potentially simplify the transition to nullable types by default in future. So then I would think that using the more explicit \"object_nullable\" for now could maybe be better than introducing the capitalized form if it was agreed that the repr is just \"object\" without the subtleties of capitalization.\r\n", "Also note that in the PDEP it was written \"tentatively named `\"object_nullable\"`\" based on the passage of PDEP-14 which needed a sub discussion to address. The words were chosen purposely not to set the dtype string alias in stone, allowing the discussion to potentially avoid this debate in the main discussion thread.", "> > it'd be good if you can provide information on how using a boolean mask compares to having `pandas.NA` inside the main array.\r\n> \r\n> using a sentinel as opposed to a mask is an implementation detail that I can expand on. I'm assuming that this would still be a separate dtype from the traditional numpy dtype, a pandas nullable dtype? 
We have the string array backed by a masked object array that I was effectively proposing reusing/refactoring as a base class.\r\n> \r\n> There is also another option which is maybe what you are proposing: making a breaking change to the exisiting numpy object dtype to handle pd.NA differently? This is perhaps what is in the rejected ideas section and that needs clarification?\r\n\r\nNo matter which of the two approaches above is considered, I think the arguments for using a mask as opposed to a sentinel are probably the same:\r\n\r\nUsing a Boolean mask is generally seen as the preferred design in pandas for extension arrays. It provides uniform missing-value handling, clearer data semantics, and is more in line with how extension types like \"Int64\" and \"boolean\" have been developed, not to mention the nullable string array which it is intended to re-use for the nullable object implementation.\r\n\r\nThis separation is one reason why the design of a nullable object dtype would benefit from a dedicated missingness mask rather than trying to “magically” interpret pd.NA embedded in an otherwise generic Python object array. IIUC pd.NA was designed as the **representation** of missing values in nullable arrays and was never intended to be an explicit sentinel value. I think many of the issues with pd.NA in object arrays arise from users thinking that the pd.NA object itself is a missing value and not a representation of a missing value. Of course a python object array can hold any object and so we can't, or maybe don't, stop users putting the pd.NA object in the traditional numpy object array.\r\n\r\nTo compare the two approaches:\r\n\r\nEmbedded pd.NA:\r\n\r\n- Maybe a simpler conceptual model for small or homogeneous arrays where overhead is minimal.\r\n- Checking each element at runtime might involve extra comparisons\r\n- When missing values are stored in the same array as valid data, ensuring that operations treat pd.NA consistently can be tricky. 
\r\n\r\nSeparate Boolean Mask:\r\n\r\n- Offers a clear and robust way to denote missing data while preserving clean data arrays; aligns with the design of other extension types in pandas; facilitates efficient and consistent missing data handling across operations\r\n- Introduces additional complexity in data structure design. This is not really an issue as there is no POC needed as the nullable object array shares so much code with the tried and tested pd.NA variant of the string array, available for a long time now.\r\n- Separating the missingness information from the actual data often leads to more readable, more performant and more maintainable code. When operations are performed, pandas can first consult the mask to identify missing elements and then process only the valid ones, or process only the missing values with operations such as `fillna`. In vectorized operations, many operations can short-circuit based on the mask.\r\n- The extra Boolean mask does add memory overhead, though future implementations could optimize the mask as a bitmask rather than a full Boolean array. This is applicable to all pandas nullable types and so that discussion would be outside the scope of this PDEP.\r\n \r\n", "A few API question it'd be helpful to see addressed explicitly:\r\n\r\n1) `ser[0] = np.nan` Does this assign NaN or does this silently replace with pd.NA?\r\n2) Same question with NaN in a list passed to the constructor.\r\n3) Same questions but with None or pd.NaT\r\n4) `ser = pd.Series([pd.NA, None, pd.NaT, np.nan], dtype=\"object_nullable\")`. 
(Assuming we don't silently replace) What do isna/fillna/skipna do?\r\n", "I think @jbrockmendel questions are very good and worth having explicit in the proposal.\r\n\r\nPersonally I think `pd.NA` should be used for setting values in the boolean mask, everything else should go into the values and not have special treatment.\r\n\r\nWhile it may be counterintuitive at first based on past behavior, I think it's the simplest to implement, to explain, and to understand.", "Just posting high level concerns:\r\n\r\n1. While pandas still `object` a \"string dtype\", would `\"object_nullable\"` be yet-another-string-dtype with an alternative NA semantics? I would be more comfortable with this type if pandas just considers `object` as purely PyObjects\r\n2. Maybe a meta comment about this topic, I do think discussing a \"nullable object type\" would be better suited in a PDEP that discusses all \"nullable types\" (system) to avoid fragmentation/diversion of terminology, null semantics, etc. as inevitability the other types will be discussed as well.", "Sorry if I miss it, but is the plan to implement the nullable object backed by numpy arrays only? I don't think we have an object dtype based on Arrow as Polars do, right?\r\n\r\nBack to the discussion about naming, I think `pyobject[pyarrow]` and I guess `pyobject[numpy]` would be my preference. I think pyobject is more explicit and way more clear than object.", "> A few API question it'd be helpful to see addressed explicitly:\r\n> \r\n> 1. `ser[0] = np.nan` Does this assign NaN or does this silently replace with pd.NA?\r\n> 2. Same question with NaN in a list passed to the constructor.\r\n> 3. Same questions but with None or pd.NaT\r\n> 4. `ser = pd.Series([pd.NA, None, pd.NaT, np.nan], dtype=\"object_nullable\")`. 
(Assuming we don't silently replace) What do isna/fillna/skipna do?\r\n\r\nwhat i've said in the initial draft is\r\n\r\n> The proposed nullable object array will be\r\nunable to hold `np.nan`, `None` or `pd.NaT` as these will be\r\nconsidered missing in the constructors and other conversions\r\nwhen following the existing API for the other nullable\r\ntypes. Users will not be able to round-trip between the\r\nlegacy and nullable object dtypes.\r\n\r\nSo I'm assuming that to ease the transition that all these will be treated as missing, updating the mask appropriately and represented as pd.NA. So my assumption is that we do silently replace. Do you disagree with this or perhaps prefer warnings for the assignment?", "> Personally I think `pd.NA` should be used for setting values in the boolean mask, everything else should go into the values and not have special treatment.\r\n> \r\n> While it may be counterintuitive at first based on past behavior, I think it's the simplest to implement, to explain, and to understand.\r\n\r\nseems reasonable. I assumed we would match the behavior of other nullable extension arrays for consistency. I'll audit this before opening the official discussion period.\r\n\r\nThanks for highlighting this. ", "Oh one more: \r\n\r\n5) `ser = pd.Series([2, pd.Timedelta(1)], dtype=\"nullable_object\")` # what happens with `ser / 0`?", "> 2\\. Maybe a meta comment about this topic, I do think discussing a \"nullable object type\" would be better suited in a PDEP that discusses all \"nullable types\" (system) to avoid fragmentation/diversion of terminology, null semantics, etc. as inevitability the other types will be discussed as well.\r\n\r\nyes I wanted to discuss this in PDEP-16 as the type mapping in the first commit showed that the traditional numpy object dtype would be retained. A few days away from a full year since that was opened and the PDEP in incomplete with no discussion. sitting on a draft PDEP is harming the project. 
If we can get PDEP-16 moving then I would probably not have opened this.\r\n\r\nI think being part of the bigger discussion is crucial if the object_nullable was to become a default in the future. I did not explicitly state that in this initial draft even though I made the comment that could be interpreted as the motivation for this PDEP. As I said in https://github.com/pandas-dev/pandas/pull/61599#discussion_r2136185318 i'm happy to remove that.\r\n\r\nThe motivation is just to create a dtype consistent with the other pandas nullable dtypes. We have Int, Float, Boolean etc but do not have one for object. ", "> Oh one more:\r\n> \r\n> 5. `ser = pd.Series([2, pd.Timedelta(1)], dtype=\"nullable_object\")` # what happens with `ser / 0`?\r\n\r\nunder the current proposal these would be missing in the mask (represented as the pd.NA scalar). if we instead do what @datapythonista suggests in https://github.com/pandas-dev/pandas/pull/61599#issuecomment-2956329138 then this would yield np.nan and would be a sentinel value.\r\n\r\n", "> 1. While pandas still `object` a \"string dtype\", would `\"object_nullable\"` be yet-another-string-dtype with an alternative NA semantics? I would be more comfortable with this type if pandas just considers `object` as purely PyObjects\r\n\r\nto be compatible with pyArrow and polars?", ">> While pandas still object a \"string dtype\", would \"object_nullable\" be yet-another-string-dtype with an alternative NA semantics? I would be more comfortable with this type if pandas just considers object as purely PyObjects\r\n\r\n> to be compatible with pyArrow and polars?\r\n\r\nSorry had some typos in that statement. I meant to say \"While pandas still considers `object` a \"string dtype\"\".\r\n\r\nWell there no compatibility with those two since they don't have a type to store arbitrary PyObjects. I think it's a potential \"strength\" that pandas can provide a \"nullable object dtype\" to store PyObjects with null semantics. 
My concern is pandas still conflating `object` with \"string\".\r\n\r\nSo for example, `str` APIs still work with `object` type. Would you expect `str` APIs to work with `object_nullable`?", "> The motivation is just to create a dtype consistent with the other pandas nullable dtypes.\r\n\r\nBehind a flag. Not necessarily making any commitment here to changes to the existing pandas api. To allow evaluation of the concept as we did with the other pandas nullable dtypes at first. The StringDtype has been available for a long time now before being made a pandas default type. I expect object_nullable to be experimental for a long time and to iron out any consistency issues with the other pd.NA variants of the pandas nullable dtypes and consistency issues with the legacy object dtype with the handling of values that are considered missing as @jbrockmendel highlighted.", "> > > While pandas still object a \"string dtype\", would \"object_nullable\" be yet-another-string-dtype with an alternative NA semantics? I would be more comfortable with this type if pandas just considers object as purely PyObjects\r\n> \r\n> > to be compatible with pyArrow and polars?\r\n> \r\n> Sorry had some typos in that statement. I meant to say \"While pandas still considers `object` a \"string dtype\"\".\r\n> \r\n> Well there no compatibility with those two since they don't have a type to store arbitrary PyObjects. I think it's a potential \"strength\" that pandas can provide a \"nullable object dtype\" to store PyObjects with null semantics. My concern is pandas still conflating `object` with \"string\".\r\n> \r\n\r\nAh i see, I interpreted that as we could have an arrow array containing the pointers to the python objects.\r\n\r\nthe nullable string array is an nullable object array with constraints. The nullable object array would not be a string array. 
the constraints are it's strengths as we don't have the np.nan, None, pd.NAT issue?\r\n\r\n> So for example, `str` APIs still work with `object` type. Would you expect `str` APIs to work with `object_nullable`?\r\n\r\nonly if `object` dtype retains the str accessor. If we remove it from `object` then it would not be needed from \"object_nullable\"\r\n\r\n\r\n ", "> The nullable object array would not be a string array.\r\n\r\n+1, and hopefully that can give us inspiration to decouple the base `object` dtype from meaning \"string\" in the future (not for this PDEP)\r\n\r\n> the constraints are it's strengths as we don't have the np.nan, None, pd.NAT issue?\r\n\r\nThat and a general observation that users continue to store arbitrary objects (numpy arrays, pandas objects, custom classes) in pandas objects, so providing nullability to that I guess is a plus", "> While pandas still object a \"string dtype\", would \"object_nullable\" be yet-another-string-dtype with an alternative NA semantics?\r\n\r\nhopefully not.\r\n\r\nThe intention was an array with exactly the same NA semantics of the other pd.NA variants of the pandas nullable types, i.e. the original nullable string dtype, Int, Float, Bool etc. Should be **no** differences in my opinion. The issues regarding handing of np.nan are perhaps similar to the array of issue for the pandas nullable float type if we allow it. If we don't it's not an issue but the behavior of the object and object_nullable would be different. In an ideal world we would perhaps want as much backwards compatibility as possible.\r\n\r\nIf we don't have pyarrow as a required dependency, then I would expect object backed variants of the new dtypes. These would presumably be based on an nullable object array with constraints. For these cases the constraints would be their strength. I've not included that as a motivation either at this point. 
If the discussion here exceeds the PDEP-10/PDEP-15 discussion which I expect it to following the timescales in PDEP-1 that could be added depending on the outcome. I assuming PDEP-10 is rejected. If PyArrow is made a required dependency for 3.0 then maybe we could shift focus to a nullable object array backed by pyarrow pointers to python objects instead.", "> > So for example, `str` APIs still work with `object` type. Would you expect `str` APIs to work with `object_nullable`?\r\n> \r\n> only if `object` dtype retains the str accessor. If we remove it from `object` then it would not be needed from \"object_nullable\"\r\n\r\nI see that discussing accessors is a glaring omission from my draft.", "The main use case I have for PyObject types (there can surely be others) is for loading data that can't be casted to other types. For example, loading from a nested JSON containing heterogeneous types (thinking on Arrow structs, otherwise even with homogenous types no other type can hold it in a reasonable way).\r\n\r\nIn that sense, what it makes sense to me is the the PyObject type provides the functionality to \"fix\" the data, so it can be transformed and converted to another type with the specific functionality. Feels like `.map()`, `.astype()` and not much more should be enough if I'm not missing anything. I don't think `.map()` to run arbitrary Python code on Python objects is an unresonable choice, and since they are PyObjects anyway, I don't think it should be significantly slower than the accessor methods.\r\n\r\nThere can surely be other considerations, but so far +1 on not supporting `.str` (or other accessors) in the new type.", "> There can surely be other considerations, but so far +1 on not supporting `.str` (or other accessors) in the new type.\r\n\r\nInteresting idea. 
The proposed nullable object dtype with a list accessor would be well suited to the return type of str.split(expand=False) of the pd.NA variant of the pandas nullable string dtype.\r\n", "> In that sense, what it makes sense to me is the the PyObject type provides the functionality to \"fix\" the data, so it can be transformed and converted to another type with the specific functionality. Feels like `.map()`, `.astype()` and not much more should be enough if I'm not missing anything. I don't think `.map()` to run arbitrary Python code on Python objects is an unresonable choice, and since they are PyObjects anyway, I don't think it should be significantly slower than the accessor methods.\r\n\r\nThat's also an interesting idea. Designing a nullable object array from scratch instead of trying to match all the functionality of the current numpy object array!", "> That's also an interesting idea. Designing a nullable object array from scratch instead of trying to match all the functionality of the current numpy object array!\r\n\r\nFor context, I do think we're requiring PyArrow in pandas 3.0, and using PyArrow types by default. My comments here are based on it. Unfortunately we need to wait an extra month to find out. But spending the next two years reinventing the nullable types that PyArrow already give us for free seems a very poor investment of our time. And even if we do that, I'd do it by continuing to release 2.x versions. We'll continue this discussion in the appropriate threads, just wanted to clarify that we are probably discussing this new PyObject type from very different points of view, as PDEP-15 is keeping the direction of the project very uncertain.", "> For context, I do think we're requiring PyArrow in pandas 3.0, and using PyArrow types by default.\r\n\r\nWhere has this been discussed? 
My understanding is that the PyArrow types are just experimental and the pandas nullable types will be the default in the future (as per PDEP-16 when it's done)", "This can't be discussed until there is clarity about PDEP-10 and PDEP-15 in my opinion. And it's been like 3 years of PyArrow types being experimental anyway, so I think it's something worth discussing and considering if we do require PyArrow, no?", "I read the above as using PyArrow types by default in 3.0. That can't happen unless we postpone the 3.0 release and, as per our deprecation policy, have at least 2 minor releases in 2.x with the appropriate warnings for the breaking changes.\r\n\r\n" ]
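The `.map()` + `.astype()` workflow suggested in the comments above can be sketched with today's numpy-backed object dtype (a hypothetical illustration only; the proposed nullable PyObject dtype does not exist, and the sample values are invented):

```python
import pandas as pd

# Heterogeneous values, e.g. parsed from nested JSON: the object
# dtype is the only reasonable container for them. (Sample data
# invented for illustration.)
raw = pd.Series(["3", 4, 4.5, None], dtype=object)

# Use .map() to "fix" the data with arbitrary Python code;
# na_action="ignore" propagates missing values untouched.
cleaned = raw.map(float, na_action="ignore")

# ...then hand off to a typed, nullable array for the real functionality.
typed = cleaned.astype("Float64")

assert typed.dropna().tolist() == [3.0, 4.0, 4.5]
assert typed.isna().tolist() == [False, False, False, True]
```

This mirrors the idea in the discussion that a PyObject dtype mainly needs `.map()` and `.astype()` as escape hatches, rather than full `.str`/`.dt` accessor support.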
3,127,031,047
61,598
BUG: Dangerous inconsistency: `~` operator changes behavior based on context outside a target.
closed
2025-06-07T13:43:53
2025-06-16T01:24:50
2025-06-13T08:35:04
https://github.com/pandas-dev/pandas/issues/61598
true
null
null
monagai
4
### Pandas version checks - [x] I have checked that this issue has not already been reported. - [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas. - [x] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas. ### Reproducible Example ```python import pandas as pd df = pd.DataFrame({ ...: 'A': [1, 9, 6, 2, 7], ...: 'B': [6, 1, 3, 6, 3], ...: 'C': [2, 8, 4, 4, 4] ...: }, index=list('abcde')) df.apply(lambda x: ~((x['B'] > 3) & (x['C'] < 8)), axis=1) df['vals'] = df.apply(lambda x: ~((x['B'] > 3) & (x['C'] < 8)), axis=1) df.apply(lambda x: ~((x['B'] > 3) & (x['C'] < 8)), axis=1) ``` ### Issue Description This is a report about the `~` operator in pandas DataFrames. Here is the example on python=3.10.12, pandas=2.2.3. ``` python 3.10.12 (main, Feb 4 2025, 14:57:36) [GCC 11.4.0] Type 'copyright', 'credits' or 'license' for more information IPython 8.34.0 -- An enhanced Interactive Python. Type '?' for help. In [1]: import pandas as pd In [2]: df = pd.DataFrame({ ...: 'A': [1, 9, 6, 2, 7], ...: 'B': [6, 1, 3, 6, 3], ...: 'C': [2, 8, 4, 4, 4] ...: }, index=list('abcde')) In [3]: df Out[3]: A B C a 1 6 2 b 9 1 8 c 6 3 4 d 2 6 4 e 7 3 4 In [4]: df.apply(lambda x: ~((x['B'] > 3) & (x['C'] < 8)), axis=1) Out[4]: a False b True c True d False e True dtype: bool In [5]: df['vals'] = df.apply(lambda x: ~((x['B'] > 3) & (x['C'] < 8)), axis=1) In [6]: df Out[6]: A B C vals a 1 6 2 False b 9 1 8 True c 6 3 4 True d 2 6 4 False e 7 3 4 True In [7]: df.apply(lambda x: ~((x['B'] > 3) & (x['C'] < 8)), axis=1) Out[7]: a -2 b -1 c -1 d -2 e -1 dtype: int64 ``` In the above example, the same `df.apply(lambda x: ~((x['B'] > 3) & (x['C'] < 8)), axis=1)` is executed in steps 4, 5, and 7. However, the result of step 7 is ridiculous. 
Unlike `~`, the `not` operator returns the correct answer. It seems that the `~` operator in pandas DataFrames is quite dangerous and unreliable. In the environment of python 3.13.3, pandas=2.2.3, **only for step 7**, Python returns the warning `<ipython-input-7-7d5677ff0f59>:1: DeprecationWarning: Bitwise inversion '~' on bool is deprecated and will be removed in Python 3.16. This returns the bitwise inversion of the underlying int object and is usually not what you expect from negating a bool. Use the 'not' operator for boolean negation or ~int(x) if you really want the bitwise inversion of the underlying int.`. However, I think this is a warning by Python (not by pandas) from a different point of view. ### Expected Behavior The result of step 7 should be the same as in steps 4 and 5. ### Installed Versions python = 3.10.12 pandas = 2.2.3 </details>
[ "Usage Question" ]
0
0
0
0
0
0
0
0
[ "I believe the way to do what you want is simply\n\n```python3\n~((df['B'] > 3) & (df['C'] < 8))\n```\n\nThis keeps all the math within pandas and returns a boolean Series.\n\nWhen you call\n\n```python3\ndf.apply(lambda x: ~((x['B'] > 3) & (x['C'] < 8)), axis=1)\n```\n\nyou are expressly telling the system that you want to take each row of df, cast it to a Series, look up individual elements of the Series, and then apply ~ to the elements. In this case you are taking all the math away from pandas and sending it to Python, which is doing something you don't want, which is why you get the Python warning. \n\nOn line 4 I think you happen to get away with it because the whole DataFrame is dtyped as np.int64 and so you're staying with numpy scalars (e.g. np.int64, rather than python int), and so your comparison operators return `numpy.bool_`s, and numpy is handling this the way you want. When you add the additional column you're getting an object dtyped Series on the cross-section (because you now have columns of different dtypes), and so it's going all the way to python, so you're getting python ints, and thus python bools, which give you a different answer, i.e.\n\n```python3\nIn[1]: ~np.bool_(False)\nOut[1]: True\n\nIn [2]: ~False\nOut[2]: -1\n```", "Sorry, your comment is unacceptable to me.\nI am talking about the inconsistent behavior of the `~` operator depending on DataFrame columns.\nIt seems that you are leading the discussion to another point.\n\n`df.apply(lambda x: ~((x['B'] > 3) & (x['C'] < 8)), axis=1)` targets only int columns.\nWhy is it correct that this command depends on off-target columns?\n\n```\nPython 3.13.3 (main, Apr 8 2025, 13:54:08) [Clang 16.0.0 (clang-1600.0.26.6)]\nType 'copyright', 'credits' or 'license' for more information\nIPython 9.2.0 -- An enhanced Interactive Python. Type '?' 
for help.\nTip: `?` alone on a line will bring up IPython's help\n…\n\nIn [4]: df\nOut[4]: \n A B C vals\na 1 6 2 False\nb 9 1 8 True\nc 6 3 4 True\nd 2 6 4 False\ne 7 3 4 True\n\nIn [5]: df.dtypes\nOut[5]: \nA int64\nB int64\nC int64\nvals bool\ndtype: object\n\nIn [6]: tmp_df = df[['B','C']]\n\nIn [7]: tmp_df.apply(lambda x: ~((x['B'] > 3) & (x['C'] < 8)), axis=1)\nOut[7]: \na False\nb True\nc True\nd False\ne True\ndtype: bool\n\nIn [8]: df.apply(lambda x: ~((x['B'] > 3) & (x['C'] < 8)), axis=1)\n<ipython-input-8-f0949cd95cf2>:1: DeprecationWarning: Bitwise inversion '~' on bool is deprecated and will be removed in Python 3.16. This returns the bitwise inversion of the underlying int object and is usually not what you expect from negating a bool. Use the 'not' operator for boolean negation or ~int(x) if you really want the bitwise inversion of the underlying int.\n df.apply(lambda x: ~((x['B'] > 3) & (x['C'] < 8)), axis=1)\nOut[8]: \na -2\nb -1\nc -1\nd -2\ne -1\ndtype: int64\n```\n\nIf such a curious behaviour is a 'correct specification' in pandas, pandas should never recommend the `~` operator as a method to get a part of a Series or DataFrame.\n", "I think @Liam3851 is correct, and this part of his answer\n\n> When you add the additional column you're getting an object dtyped Series on the cross-section (because you now have columns of different dtypes), and so it's going all the way to python, so you're getting python ints, and thus python bools, which give you a different answer\n\nanswers your question about why you get different answers depending on which dataframe you apply the operation to:\n- in the first case, all your columns have the same type\n- in the second case, you have mixed type columns\n\nI don't think there's anything dangerous about `~` - the dangerous part is using `apply(..., axis=1)` on a dataframe with mixed datatypes, which is what you have in the second case\n\n---\n\nI'd suggest taking a kinder approach here. 
@Liam3851 took the time to write a nice answer to you, and you downvote the reply and comment \"Sorry, your comment is unacceptable for me\". Please ask yourself whether this constitutes friendly behaviour 🙏 ", "Thank you for your comment.\n\nHowever, \n\n> took the time to write a nice answer to you, \n\nSorry, but this is meaningless.\nI submitted this issue not for my own benefit, but **for all pandas users**, because I already know this problem and how to avoid it. \n\nIn the pandas documentation here:\nhttps://github.com/pandas-dev/pandas/blob/main/doc/source/user_guide/indexing.rst#L1315-L1347\nit states: `You can negate boolean expressions with the word not or the ~ operator.`\n\nHowever, this is not true, is it? \nHere is the evidence:\n```\nIn [5]: df\nOut[5]:\n A B C vals\na 1 6 2 False\nb 9 1 8 True\nc 6 3 4 True\nd 2 6 4 False\ne 7 3 4 True\n\nIn [6]: df.apply(lambda x: ~((x['B'] > 3) & (x['C'] < 8)), axis=1)\nOut[6]:\na -2\nb -1\nc -1\nd -2\ne -1\ndtype: int64\n\nIn [7]: df.apply(lambda x: not ((x['B'] > 3) & (x['C'] < 8)), axis=1)\nOut[7]:\na False\nb True\nc True\nd False\ne True\ndtype: bool\n```\nPlease answer yes or no to this question.\n\nFurthermore, your comment suggests that:\n- All pandas users must understand the deep layers of pandas implementation to use it effectively.\n- All pandas users must have complete knowledge of the data types in their DataFrames, regardless of size.\n\nIs this correct? 
Please answer yes or no to this question.\n\nIn reality, this appears to be a fundamental specification defect, not merely a bug.\nThe behavior is as specified, but the specification itself is problematic and leads to unpredictable results.\nAs bitwise operations and logical operations represent fundamentally distinct computational concepts, this conflation of semantics inevitably results in confusion and unpredictable behavior.\n\nI understand that perhaps pandas cannot revert to a world without the `~` operator now.\nIn that case, shouldn't there at least be a prominent warning regarding this issue?\n\n**Closing this issue serves only to suppress the facts.**\nI believe this violates the spirit of open source, even though I may be relatively new to Python. \n\n**Please reopen this issue if you are fair developers.**\n\n" ]
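The mechanism described in this exchange can be condensed into a runnable sketch (an illustration added here, not part of the original thread; the column values mirror the report):

```python
import numpy as np
import pandas as pd

# At the scalar level, the two boolean types negate differently:
assert ~np.bool_(False)  # numpy bool: logical negation -> True
assert ~False == -1      # Python bool is an int subclass, so ~0 == -1

df = pd.DataFrame({"B": [6, 1], "C": [2, 8]}, index=list("ab"))

# All-int64 frame: each row cross-section keeps numpy scalars, the
# lambda returns np.bool_, and the apply result has bool dtype.
out = df.apply(lambda x: ~((x["B"] > 3) & (x["C"] < 8)), axis=1)
assert out.dtype == bool
assert out.tolist() == [False, True]

# Adding a bool column makes each row cross-section object-dtyped;
# per the report above, on pandas 2.2.3 the same lambda then falls
# back to Python bools, where ~ is bitwise, and the result is int64.
df["vals"] = out
assert df.loc["a"].dtype == object
```

The whole-frame expression `~((df['B'] > 3) & (df['C'] < 8))` avoids the row-wise cross-section entirely, which is why it behaves consistently regardless of the other columns' dtypes.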
3,126,754,876
61,597
ENH: improve optional import error message
closed
2025-06-07T09:16:15
2025-06-30T18:16:22
2025-06-30T18:16:14
https://github.com/pandas-dev/pandas/pull/61597
true
https://api.github.com/repos/pandas-dev/pandas/pulls/61597
https://github.com/pandas-dev/pandas/pull/61597
KevsterAmp
5
- [x] closes #61521 (Replace xxxx with the GitHub issue number) - [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature - [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit). - [x] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions. - [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
[ "Error Reporting" ]
0
0
0
0
0
0
0
0
[ "I'm not sure how to fix the CI errors: \r\n```\r\nError: Failed to CreateArtifact: Received non-retryable error: Failed request: (409) Conflict: an artifact with this name already exists on the workflow run\r\n```", "One change is missing: https://github.com/pandas-dev/pandas/blob/675c81c9dc82ff9be797c94f3fc8dddb5dc44894/pandas/tests/io/test_parquet.py#L337\r\nI expect the CI will pass once that’s fixed.", "thanks @yuanx749, confirmed on the previous CI errors that the fastparquet was the issue", "@rhshadrach - can you kindly review this? thanks!", "Thanks @KevsterAmp " ]
3,126,739,587
61,596
VOTE: Voting issue for PDEP-15: Reject adding PyArrow as a required dependency
closed
2025-06-07T09:02:40
2025-07-17T10:40:25
2025-07-17T10:40:25
https://github.com/pandas-dev/pandas/issues/61596
true
null
null
datapythonista
21
### Locked issue - [x] I locked this voting issue so that only voting members are able to cast their votes or comment on this issue. ### PDEP number and title PDEP-15: Reject PDEP-10 ### Pull request with discussion https://github.com/pandas-dev/pandas/pull/58623 ### Rendered PDEP for easy reading https://github.com/pandas-dev/pandas/blob/c159851cc0762625f9e51f9d9bd1d18011b79aa7/web/pandas/pdeps/0015-do-not-require-pyarrow.md ### Discussion participants 5 voting members (active maintainers) participated in the discussion ### Voting will close in 15 days. 2025-06-22 ### Vote Cast your vote in a comment below. * +1: approve. * 0: abstain. * Reason: A one sentence reason is required. * -1: disapprove * Reason: A one sentence reason is required. A disapprove vote requires prior participation in the linked discussion PR. @pandas-dev/pandas-core
[ "Vote" ]
0
0
0
0
0
0
0
0
[ "-1\n\nWhile there are surely some challenges and costs of requiring PyArrow, I think the benefits of already requiring PyArrow in pandas 3.0 are enough to justify the requirement. If pandas becomes simpler, faster and better for most users, and for the development of the project, and the cost is few edge cases having to stay in pandas 2.0 or find an alternative lightweight technology, that's fine with me. Also, the sooner we require PyArrow, the sooner the issues with PyArrow will be addressed, and most of the original concerns have been addressed anyway (including unsupported architectures, reducing installation size, support for wasm...).", "-1\n\nrejecting PDEP-10 is like throwing the baby out with the bathwater. The issue appears to be just the timing of making PyArrow the required dependency. Of course if others want to reject PDEP-10 and propose keeping PyArrow an optional dependency indefinitely then that's a different issue and I would potentially vote differently on that.", "A call for vote is in violation of PDEP-1.\n\n> - After 30 days, with a note that there is at most 30 days remaining for discussion, and that a vote will be called for if no discussion occurs in the next 15 days.\n> - After 45 days, with a note that there is at most 15 days remaining for discussion, and that a vote will be called for in 15 days.\n> ...\n> After 30 discussion days, in case 15 days passed without any new unaddressed comments, the authors may close the discussion period preemptively, by sending an early reminder of 15 days remaining until the voting period starts.\n\nThe timeline requires a 15 day announcement to commence with voting. This has not occurred.", "Thanks for pointing that out @rhshadrach.\n\nThis PDEP is stalled, Thomas is not an active maintainer anymore, and I don't think there are technical details to discuss here. In my opinion it's just a formal document to reject PDEP-10, with a summary of the reasons. 
While it's nice to have the summary, I don't think a PDEP was necessary in the first place; we could also have voted again on PDEP-10 instead, or just vote via email as we used to do. In that sense it feels like a special PDEP.\n\nAlso, I don't think it makes sense to only allow people who commented in the PDEP to vote. This PDEP didn't have a technical discussion in the sense of defining an API or something. People could be totally happy with what was said here, and vote against it.\n\nIn any case, you're technically right. If you think the 15 days announcement is useful, let's do it. But after one year I don't think there is much to discuss in the content of this PDEP. I think it'd be better to understand what the team wants as soon as possible, and have the discussions on the technical details, not on this formalism. In general we've been using common sense over bureaucracy, at least with the previous governance, which was very disconnected from reality. But it'll be good to know others' opinions.\n\nDo you want to have the 15 day announcement? Will you update this PDEP with the feedback from the new discussions, or should we just wait the 15 days to be strict with our own policies?", "> Do you want to have the 15 day announcement?\n\nI do think the announcement should happen, as PDEP-1 calls for, on both GitHub and email, as this is not my decision to make.", "I was just reading, and the policy also says `A PDEP discussion will remain open for up to 60 days.`. Not sure what exactly it implies, as it's not mentioned, but technically the PDEP is not under discussion anymore. I surely don't want to do it, but it feels like technically this should be closed as rejected or as invalid since it violated the PDEP process in the first place.\n\nI'll send the announcement, but personally, I think the governance should be there to make our lives easier. It's not working so well, since this and other PDEPs are being an impediment to decision making, not facilitating it as was intended. 
And we should probably put our efforts into updating the process when it's not useful. Just blindly following what is written, just because it's written, doesn't seem the right approach to me. From your comment I don't think you consider the announcement and waiting useful, but you think we should respect the process. It's a valid point (which I disagree with), as it feels like we'll just waste 15 days for no reason, but no big deal.\n\nClosing here (I'll reopen in 15 days), and will send the announcement.", "> > Do you want to have the 15 day announcement?\n> \n> I do think the announcement should happen, as PDEP-1 calls for, on both GitHub and email, as this is not my decision to make.\n\nI do want to point out that here: https://github.com/pandas-dev/pandas/pull/58623#pullrequestreview-2264483517 , which was on August 27, 2024, I did call for the remaining 15 days of discussion, but never followed up to start the vote.\n\nNevertheless, things have changed in the 9+ months since, so having another 15 days of discussion is probably a good thing to do.\n\n", "@datapythonista I think it would be better if we opened a NEW voting issue on this topic so that there isn't confusion with the earlier discussion above.\n\nAlso, it would be good to clarify that a +1 vote means we are not requiring pyarrow in 3.0, and a -1 vote means we are requiring pyarrow in 3.0.", "-1\n\nPDEP-14 already passed and says that pyarrow will NOT be a hard dependency for version 3.0", "-1; same reason as @Dr-Irv ", "-1; same as @rhshadrach ", "-1; same reasons as the others", "-1; ditto", "Shoot, I got confused. I vote +1, i.e. to confirm the vote in PDEP14 to _not_ require pyarrow in 3.0.", "I believe we can close this vote. We have 7 negative votes. If all 15 core team members vote, that would require 11 positive votes, which is impossible to obtain. 
If we receive the minimum quorum required (11 votes), it would still be impossible for PDEP-15 to pass.\n\nSo the interpretation of this outcome is as follows:\n\n- PDEP-15 is rejected\n- However, because in PDEP-14 there was an agreement that `pyarrow` will NOT be a hard dependency for version 3.0, PDEP-10 remains accepted, except for the specific details regarding requiring `pyarrow` in version 3.0. In other words, the plan is that `pyarrow` will be required in the future, but the timeline for that remains uncertain. \n\nPlease let the group know if you disagree with the interpretation of this outcome.\n\nThere is open discussion about possible ways forward in #61618 ,\n", "0.\nDue to lack of participation in the underlying discussion my options are in {0, 1} I believe.", "> I believe we can close this vote.\n\nIIUC voters are allowed to change their vote at any time during the voting period. I recall @jorisvandenbossche explicitly saying that at some point but can't find the discussion now. IIRC I was somewhere saying that if I had voted on PDEP-8 and then seen @jreback's comment I would have wanted to change my vote.\n\nNow IIUC we don't have any procedure for rescinding or updating a vote once cast, so it is probably allowed? Indeed, @jbrockmendel has changed his vote here.\n\nSo I would just let the vote run its course for now to avoid further claims of procedural irregularities.\n\nIt probably won't change the outcome. What would be the benefit of closing the vote early?", "(I am fine with keeping it open to follow procedure, as indeed people can change their vote, although not sure that changes much in this case .. , see below)\n\nI would personally rather not vote because:\n\n- The PDEP text is outdated, so even if I would support the main proposition (i.e. not require pyarrow for 3.0), I can't really approve the current PDEP\n- It is not clear what either an approval or a rejection actually means in practice. 
See Irv's question [above](https://github.com/pandas-dev/pandas/issues/61596#issuecomment-2996813645) about this at the start of the vote, or Brock's confusion switching his vote (and the confusion other people have expressed on the feedback issue at https://github.com/pandas-dev/pandas/issues/54466#issuecomment-3009044920). We should ideally have clarified that _before_ voting.\n\n\nBut if I have to vote, I'll abstain (0)\n\nI will add some more context to the PDEP discussion in https://github.com/pandas-dev/pandas/pull/58623, to keep most of the discussion there. EDIT: see https://github.com/pandas-dev/pandas/pull/58623#issuecomment-3023587822", "I think I am locked into {0, 1}, so voting 0 here \n\nI agree with the others who voted -1", "> Please let the group know if you disagree with the interpretation of this outcome.\n\nYour summary seems accurate @Dr-Irv. From my side, I'm happy to finish the vote early; regardless of the -1/0/1 result, I think it's clear what the team wants, which was my reason to call the vote. I'd probably close this PDEP, as I don't feel having it as a rejected PDEP is very useful, but I'm ok with that too.\n\nThe main takeaway for me here, besides confirming and formalizing what many assumed about PDEP-14, is that the governance needs improvements, as it's not being helpful. I'm probably biased as I was one of the very few people who voted against the new governance. But I think what we need is a governance that focuses on solving the very specific problems we have regarding governance, and does not focus on corporation-like processes and bureaucracy that quite often we won't want to follow. Like waiting for this vote to close, have a formal 90 days PDEP process to remove the \"PyArrow will be required warning\"... 
If I had to decide the governance myself I'd define the basics of how a PDEP is approved and when a PR can be merged, and I'd appoint a BDFL that can behave as a referee/leader when having a flat organization prevents us from taking action. Not that the BDFL needs to make any technical decision, but he can check with the relevant person, propose a solution that all the interested parties can agree on... Regardless of the details of the voting, who the BDFL is, or what the final outcome of the discussions is, I think a governance of this kind would make things way more efficient, and enjoyable.", "***The voting period has ended***\n\n---\n\n### 🗳️ Vote Summary\n\n| Voter | Vote | Reason Summary |\n|---------------------|------|---------------------------------------------------------------------------------|\n| datapythonista | -1 | Believes requiring PyArrow simplifies and improves pandas overall |\n| simonjayhawkins | -1 | Rejecting PDEP-10 is premature; timing is the real issue |\n| rhshadrach | -1 | Procedural concerns; aligns with Dr-Irv’s reasoning |\n| Dr-Irv | -1 | PDEP-14 already passed; PyArrow not required for 3.0 |\n| jbrockmendel | -1 → +1 | Initially voted -1, then clarified +1 to confirm PyArrow not required in 3.0 |\n| mroeschke | -1 | Agrees with others; no new reasoning provided |\n| WillAyd | -1 | Ditto previous votes |\n| attack68 | 0 | Abstained due to lack of participation in discussion |\n| jorisvandenbossche | 0 | Abstained; unclear implications and outdated PDEP text |\n| phofl | 0 | Abstained; agrees with -1 votes but locked into {0, 1} |\n\n---\n\n### 📊 Tally\n\n- **-1 (Disapprove)**: 6 votes\n- **0 (Abstain)**: 3 votes\n- **+1 (Approve)**: 1 vote\n\n---\n\nhttps://github.com/orgs/pandas-dev/teams/pandas-core shows that there are currently 15 voting members.\n\nPDEP-1 states that the quorum is computed as the lower of these two values:\n\n- 11 voting members.\n- 50% of voting members.\n\nWith 10 votes cast we have exceeded the 50% of 
voting members threshold.\n\n--- \n\n> Once the voting period ends, any voter may tally the votes in a comment, using the format: w-x-y-z, where w stands for the total of approving, x of abstaining, y of disapproving votes cast, and z of number of voting members who did not respond to the VOTE issue. The tally of the votes will state if a quorum has been reached or not.\n\n1-3-6-5 Quorum reached.\n\n**PDEP officially rejected.**" ]
3,126,190,691
61,595
Add table of contents support to ecosystem page
closed
2025-06-07T00:46:04
2025-06-13T17:11:49
2025-06-13T17:11:48
https://github.com/pandas-dev/pandas/pull/61595
true
https://api.github.com/repos/pandas-dev/pandas/pulls/61595
https://github.com/pandas-dev/pandas/pull/61595
akafle01
5
This PR adds a table of contents to the Ecosystem page using the Markdown `TocExtension`. - Inserted `[TOC]` placeholder in `ecosystem.md` - Enabled `toc` extension in `web.py` - Configured `toc_depth` for h2 and h3 headers - Closes #61587
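The PR description corresponds to roughly the following use of the Python-Markdown `toc` extension (a standalone sketch with invented input, not the actual `web.py` change):

```python
import markdown
from markdown.extensions.toc import TocExtension

src = "[TOC]\n\n# Title\n\n## Section\n\n### Subsection\n\nBody text.\n"

# toc_depth="2-3" limits the generated table of contents to h2/h3
# headers, matching the configuration described in the PR.
html = markdown.markdown(src, extensions=[TocExtension(toc_depth="2-3")])

assert '<div class="toc">' in html  # the [TOC] placeholder was expanded
assert 'href="#section"' in html    # h2 entries appear in the ToC
```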
[ "Web" ]
0
0
0
0
0
0
0
0
[ "@akafle01 do you have time to continue the work on this?", "> @akafle01 do you have time to continue the work on this?\r\n\r\nYes I will work on it today", "/preview", "Website preview of this PR available at: https://pandas.pydata.org/preview/pandas-dev/pandas/61595/", "Thanks @akafle01 for the contribution. I was checking this, as the table of contents looked different than in the other pages, and realized besides adding the ToC we needed to do some refactoring. I better add the ToC to Ecosystem in #61655 where I've implemented the refactor, so we can test both things together.\r\n\r\nI'll close this PR, sorry about it, as the work you've been doing is great, just more practical to add this in the other PR." ]
3,125,854,805
61,594
CI: Fix slow mamba solver issue by limiting boto3 versions
closed
2025-06-06T21:01:41
2025-06-13T15:34:03
2025-06-13T14:34:12
https://github.com/pandas-dev/pandas/pull/61594
true
https://api.github.com/repos/pandas-dev/pandas/pulls/61594
https://github.com/pandas-dev/pandas/pull/61594
datapythonista
12
Closes #61531 Probably better to rerun the CI 3 or 4 times to be sure this is the problem and the solution. But based on local tests, it seems like boto3 has a huge number of versions (they release almost every day), and that's the problem with the mamba solver. Limiting the number of versions provided to the solver should help. 1.27 is from 2 years ago, consistent with other packages. Why does it only fail for 3.13? No idea
[ "CI", "Dependencies" ]
0
0
0
0
0
0
0
0
[ "This doesn't seem to work. Removing `boto3` from the environment seems to fix the problem locally, I thought limiting the options would also fix it. But seems like even pinning to a specific version doesn't work. I guess the problem may be with a `boto3` dependency, but not too sure.", "The dependencies for `boto3` are here: https://github.com/conda-forge/boto3-feedstock/blob/main/recipe/meta.yaml#L23\r\n\r\nNot too sure if the problem is really with `boto3`, maybe it's more complex but removing `boto3` makes it work. But doesn't seem like mamba really checks their issues, and I don't think we want to get into the mamba solver internals ourselves, so I guess we need to just keep trying something that makes the solver for 3.13 take a reasonable time.", "Based on my tests, this is what makes the solver happy. Relaxing the conditions causes the very slow mamba install times. This breaks the check for minimum versions, so it can't be merged as is (and I'll add a comment when we decide what to do).\r\n\r\n@mroeschke not sure if you have an opinion on what to do to avoid the mamba problems based on this. Not too sure what's going on.", "I'm curious if conda's libmamba solver has the same issue. Potentially based on https://github.com/conda/conda-libmamba-solver/issues/668 it may. (It's on my radar to maybe switch to https://github.com/conda-incubator/setup-miniconda given that mamba activity is kinda slow as you mentioned)\r\n\r\nIt's a shame we have to pin, but I'm OK with it to speed up solve times ", "I had the impression that conda, mamba, micromamba and pixi all use libmamba, so not sure if changing tool would be helpful. But I'm not so sure about the exact details.", "I don't know what conda is doing with its solver, but I've seen this issue with boto3 and the standard pip dependency resolver, which downloads a copy of every possible boto version and inspects its metadata to find out what constraints it might have to solve for. 
The sheer number of boto versions to download makes that a really slow process", "I'll leave out the `s3fs>=2025.5.1` for now, and just pin `boto3` in all the dependency files. I see mamba being very slow in other versions of pandas now. Since `boto3` is not a pandas dependency, but something we use for testing, pinning shouldn't be a problem. Just pinning `boto3` didn't make mamba resolve immediately in my tests, but with some luck it makes resolving faster, and we can avoid the cancelled jobs.", "The first run of the CI after just pinning `boto3` didn't have long mamba solver times. I'll try a couple more times and see if it continues to be the case. And if it is, I think we can merge this and leave the dependencies like this until we have new problems with the solver. The other options I can think of require changing the versions of `s3fs` to what we'd like to have, which is not great.", "This looks good to me. The one failing test is an xpass, probably because xarray got updated to 2025.6.0 and they might have fixed something. Will do a PR updating this test.", "> Will do a PR updating this test.\r\n\r\n-> https://github.com/pandas-dev/pandas/pull/61648", "Owee, I'm MrMeeseeks, Look at me.\n\nThere seem to be a conflict, please backport manually. Here are approximate instructions:\n\n1. Checkout backport branch and update it.\n\n```\ngit checkout 2.3.x\ngit pull\n```\n\n2. Cherry pick the first parent branch of the this PR on top of the older branch:\n```\ngit cherry-pick -x -m1 2bf3dc9850bf085242711553982e75f019c31d1d\n```\n\n3. You will likely have some merge/cherry-pick conflict here, fix them and commit:\n\n```\ngit commit -am 'Backport PR #61594: CI: Fix slow mamba solver issue by limiting boto3 versions'\n```\n\n4. Push to a named branch:\n\n```\ngit push YOURFORK 2.3.x:auto-backport-of-pr-61594-on-2.3.x\n```\n\n5. 
Create a PR against branch 2.3.x, I would have named this PR:\n\n> \"Backport PR #61594 on branch 2.3.x (CI: Fix slow mamba solver issue by limiting boto3 versions)\"\n\nAnd apply the correct labels and milestones.\n\nCongratulations — you did some good work! Hopefully your backport PR will be tested by the continuous integration and merged soon!\n\nRemember to remove the `Still Needs Manual Backport` label once the PR gets merged.\n\nIf these instructions are inaccurate, feel free to [suggest an improvement](https://github.com/MeeseeksBox/MeeseeksDev).\n ", "Backport -> https://github.com/pandas-dev/pandas/pull/61653" ]
3,125,680,830
61,593
WEB: Restore website width and improve table of contents style
closed
2025-06-06T19:28:04
2025-06-08T12:47:57
2025-06-08T12:47:50
https://github.com/pandas-dev/pandas/pull/61593
true
https://api.github.com/repos/pandas-dev/pandas/pulls/61593
https://github.com/pandas-dev/pandas/pull/61593
datapythonista
1
Follow up of #58791 This restores the website width (happy to make the content of PDEPs narrower, but it's a bit trickier so I'll leave it to another PR). I also styled the table of contents more similarly to the code blocks, which personally I think looks a bit nicer. Before: ![Screenshot at 2025-06-06 23-23-22](https://github.com/user-attachments/assets/7a025b7c-c5b6-4e6a-b921-627f4e3439cf) After: ![Screenshot at 2025-06-06 23-22-37](https://github.com/user-attachments/assets/178642b6-8376-4992-bef0-224fddb2c97d) CC @rhshadrach
[ "Web" ]
0
0
0
0
0
0
0
0
[ "Thanks @datapythonista " ]
3,125,627,528
61,592
CI: Update Python version to 3.11 in environment.yml
closed
2025-06-06T19:01:57
2025-06-06T20:57:23
2025-06-06T20:57:16
https://github.com/pandas-dev/pandas/pull/61592
true
https://api.github.com/repos/pandas-dev/pandas/pulls/61592
https://github.com/pandas-dev/pandas/pull/61592
datapythonista
1
xref #61585 Let's pin to 3.11 for now (let's see if this doesn't break the CI), and we can continue the discussion on unpinning if needed.
[ "CI", "Dependencies" ]
0
0
0
0
0
0
0
0
[ "Thanks @datapythonista " ]
3,125,543,454
61,591
WEB: Moving maintainers to inactive status
closed
2025-06-06T18:27:15
2025-06-06T22:09:23
2025-06-06T22:09:23
https://github.com/pandas-dev/pandas/pull/61591
true
https://api.github.com/repos/pandas-dev/pandas/pulls/61591
https://github.com/pandas-dev/pandas/pull/61591
datapythonista
2
A sad PR, but after checking with some maintainers, they confirmed that they became inactive and wish to be changed status. I think 3 more people will also be moved unfortunately, but still waiting for confirmation from them.
[ "Admin", "Web" ]
0
0
0
0
0
0
0
0
[ "/preview", "Website preview of this PR available at: https://pandas.pydata.org/preview/pandas-dev/pandas/61591/" ]
3,125,346,433
61,590
RLS: 2.3.1
closed
2025-06-06T17:01:06
2025-07-28T16:12:17
2025-07-28T16:12:16
https://github.com/pandas-dev/pandas/issues/61590
true
null
null
mroeschke
13
Placeholder issue _if_ we decide to release 2.3.1. At the time of writing this issue, it's expected that pandas 3.0 would be the next version https://github.com/pandas-dev/pandas/issues/57064 Notable tasks for 2.3.1: - [x] Re-enable Python 3.9 support (https://github.com/pandas-dev/pandas/issues/61563, https://github.com/pandas-dev/pandas/issues/61579) - [x] Revert https://github.com/pandas-dev/pandas/pull/60792 - [x] Merge https://github.com/pandas-dev/pandas/pull/61569 (without hardcoding version) - [x] Re-enable musl-aarch64 wheels - [x] Merge https://github.com/pandas-dev/pandas/pull/61569 (without hardcoding version)
[ "Release" ]
3
0
0
0
0
0
2
0
[ "Thanks @mroeschke for opening this issue.\n\n> Placeholder issue _if_ we decide to release 2.3.1.\n\nRLS 2.3 was driven by the need to release to the community for evaluation/experimentation the new np.nan variant of the nullable string dtype (both object fallback and pyarrow backed).\n\nSo it would be, to me, a reasonable assumption that any issues reported regarding this new dtype would be fixed in the 2.3.x branch. Not providing these fixes to the community until the 3.0 release candidate would not allow time for proper evaluation/feedback?\n\nThe new string dtype is behind a feature flag and experimental so changes to behavior could be added in a patch release? and not need to wait for a minor/major release?\n\nPDEP-14 states \"The 2.3.0 release would then have all future string functionality available\". Did all the issues related to the new dtype get into 2.3.0? I can see there are a few stale PRs for the string dtype that presumably still need to be resolved. These could be included in a patch release?\n\nSo at this time i think it may be appropriate to assume that we will need to actively maintain the 2.3.x branch and that patch releases are more likely than not?\n\n> At the time of writing this issue, it's expected that pandas 3.0 would be the next version https://github.com/pandas-dev/pandas/issues/57064\n\nreleasing 3.0 before an reasonable amount of time has elapsed would defeat the whole purpose and exercise of creating a 2.3 release. Some contributors have suggested a release cadence of 4 months between minor releases, but 6 months is probably more realistic?\n\nThere is a deprecation in 2.3 related to `str.contains`. 
It may not be relevant or too significant but our deprecation policy, PDEP-17 states:\n\n> Deprecated functionality should remain unchanged in at least 2 minor releases before being changed or removed\n> Deprecations should initially use DeprecationWarning, and then be switched to FutureWarning in the last minor release before the major release they are planned to be removed in\n\n#59615 used a FutureWarning so we don't need to consider the 2nd clause. However, to honor the first clause may strictly require a 2.4 release before the behavior is then changed in 3.0? A pragmatic approach may be needed but also consideration to not making a mockery of the deprecation policy. Or just postpone enforcing that deprecation until 4.0.\n\n#59328 gives an overview of breaking changes", "> [#59328](https://github.com/pandas-dev/pandas/issues/59328) gives an overview of breaking changes\n\nThere was consideration to including the breaking changes in the 2.3 release notes. This wasn't done in time for the 2.3 an hence the changes not yet communicated to the community.", "@mroeschke ive installed 2.3 and then installed pyarrow using mamba. The pyarrow version installed is 20.0.0.\n\nIt seems this combo doesn't work? Is there any fixes that should be backported or changes to conda-forge instead? ", "> It seems this combo doesn't work? Is there any fixes that should be backported or changes to conda-forge instead?\n\nWhat error shows up they are installed together? There's a possibly the conda-forge recipe is kinda out of sync with our dependencies in pyproject.toml", "so i've just switched kernels back after using pyArrow 19.0.1 and the notebook is running fine now with 20.0.0. 
So it looks like it was an issue on my end.", "On the technical side, I think I have backported all PRs that we forgot to backport for 2.3.0 or that were merged since 2.3.0 to main and should target 2.3.1 (based on our [\"Still Needs Manual Backport\" label](https://github.com/pandas-dev/pandas/issues?q=label%3A%22Still%20Needs%20Manual%20Backport%22)). \nI think that mostly leaves the CI / packaging (3.9 wheels) related changes that we should include, listed in the top post (@mroeschke do you have some time to look at those?)\n\nOn the communication side, I would still like to update the 2.3.0 release notes to include more details about the upcoming changes, because right now we didn't actually really communicate about this when releasing 2.3.0. I am currently first creating this content for the 3.0.0 release notes (https://github.com/pandas-dev/pandas/pull/61724), and when that is done I would also add a distilled version of that to the 2.3.0 and 2.3.1 release notes, mentioning the opt-in feature flags. \nAnd that content also refers to the migration guide I started at https://github.com/pandas-dev/pandas/pull/61705. Feedback and proofreading of those documentation proposals is very welcome.\n\nRegarding a timeline for 2.3.1, I think we already have enough changes to warrant a release (once the packaging/CI issues are resolved), and personally I would have time for this first half of next week.", "> Regarding a timeline for 2.3.1, I think we already have enough changes to warrant a release (once the packaging/CI issues are resolved), and personally I would have time for this first half of next week.\n\n+1", "> On the communication side, I would still like to update the 2.3.0 release notes to include more details about the upcoming changes, because right now we didn't actually really communicate about this when releasing 2.3.0. 
I am currently first creating this content for the 3.0.0 release notes ([#61724](https://github.com/pandas-dev/pandas/pull/61724)), and when that is done I would also add a distilled version of that to the 2.3.0 and 2.3.1 release notes, mentioning the opt-in feature flags.\n> And that content also refers to the migration guide I started at [#61705](https://github.com/pandas-dev/pandas/pull/61705). Feedback and proofreading of those documentation proposals is very welcome.\n\n@jorisvandenbossche it would be good to get that into 2.3.1 but maybe not a blocker. Hopefully I will be able to help and to go over your docs if not this week, maybe at the weekend. But yes, if these can be included in 2.3.1 also, that'll be a bonus.", "@jorisvandenbossche If you doing a release today, what about #61771? Ready to merge? Will it be backported? ", "Released at https://github.com/pandas-dev/pandas/releases/tag/v2.3.1, and wheels are up at https://pypi.org/project/pandas/2.3.1 (I think all wheels are uploaded now, downloaded a bunch of them manually because our download script does not fetch all of them). Will wait on the conda-forge builds for announcing", "FWIW the conda-forge release is currently blocked by some failing CI in https://github.com/conda-forge/pandas-feedstock/pull/228", "closable?", "I don't remember if there was an email sent out about the release, but everything else was done so closing" ]
3,125,298,226
61,589
Avoid re-enabling the GIL at runtime
closed
2025-06-06T16:43:49
2025-06-06T18:36:16
2025-06-06T18:36:09
https://github.com/pandas-dev/pandas/pull/61589
true
https://api.github.com/repos/pandas-dev/pandas/pulls/61589
https://github.com/pandas-dev/pandas/pull/61589
lysnikolaou
1
The GIL gets dynamically reenabled because the shared utility Cython build does not include the `freethreading_compatible` directive. - [X] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit). - [X] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions. - [X] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
[ "Build", "Internals" ]
0
0
0
0
0
0
0
0
[ "Thanks @lysnikolaou " ]
3,124,922,046
61,588
CI: Test pandas with numpy 1.26
closed
2025-06-06T14:17:38
2025-07-08T15:44:44
2025-07-08T15:44:44
https://github.com/pandas-dev/pandas/issues/61588
true
null
null
datapythonista
5
See #60154 We should add the build and fix the existing errors
[ "Testing", "CI", "Dependencies", "good first issue" ]
0
0
0
0
0
0
0
0
[ "take", "I did some deep testing across numpy versions and here’s what I’m seeing: The existing `xfail` condition for `np.float32(1.1)` was correct in spirit, this behavior starts changing already in numpy `1.25`, not just `1.26`.\n\nIf we only add `np_version_gte1p26` to `xfail`, tests still fail under numpy `1.25.x` (confirmed locally with 1.25.2).\n\nThe previous condition using `NPY_PROMOTION_STATE` was guarding this for `<1.24` or promotion changes but now it looks like we should generalize to:\n>or np_version_gte1p25\n\nThat makes the test robust across current and future versions, Should I proceed with this change and send a PR?\nAlso regarding the CI build, I saw the earlier discussion in #60154 , would you like me to add that CI job (`actions-310-numpy-126.yaml`) in this PR itself, or would you prefer it as a followup PR?\n", "I think the idea is to test for users who can't upgrade to numpy 2, which are expected to use the latest numpy 1 version.\n\nProbably better to add the CI job in the first PR, so it's seen what fails.\n\nCC @jorisvandenbossche ", "> would you like me to add that CI job (`actions-310-numpy-126.yaml`) in this PR itself, or would you prefer it as a followup PR?\n\nIt's probably easiest to do both in a single PR, because we need to add the extra CI build to verify if the fixes are working, but we will also need those fixes to get a green CI to be able to merge it.", "Hi! I’m new to open source and would love to work on this issue. Please assign it to me 🙂\n" ]
3,124,863,602
61,587
WEB: Add table of content for the Ecosystem page
closed
2025-06-06T13:56:03
2025-06-16T09:09:07
2025-06-16T09:09:07
https://github.com/pandas-dev/pandas/issues/61587
true
null
null
datapythonista
4
We did it for the PDEP pages here: #58791 I don't think it should be difficult to also add a table of contents for the ecosystem page, which is quite large and not so easy to find things
[ "good first issue", "Web" ]
0
0
0
0
0
0
0
0
[ "Hi, I would like to work on this.", "Hi, priya here\nI would like to work on this issue", "take\n", "This is done now" ]
3,124,712,062
61,586
Update whatsnew for issue #53115
closed
2025-06-06T12:56:51
2025-06-09T17:14:13
2025-06-09T17:14:06
https://github.com/pandas-dev/pandas/pull/61586
true
https://api.github.com/repos/pandas-dev/pandas/pulls/61586
https://github.com/pandas-dev/pandas/pull/61586
fbourgey
1
https://github.com/pandas-dev/pandas/pull/60898
[ "Docs" ]
0
0
0
0
0
0
0
0
[ "Thanks @fbourgey " ]
3,124,580,897
61,585
Unpin Python version in the development environment
open
2025-06-06T12:03:21
2025-06-06T18:57:42
null
https://github.com/pandas-dev/pandas/issues/61585
true
null
null
datapythonista
5
xref #61555 Our base conda environment `environment.yml` is used locally by pandas developers (contributors and maintainers), and for some CI jobs, like building the documentation. For stability, we pinned the version of Python to a specific version (currently `python=3.10`). As new versions of Python are released, this is becoming outdated, and for no particular reason we're now going building our docs and creating our development environments in a Python version soon to stop being supported. This issue is to remove the pin, leave `python` open to any version, which in general it should be the latest available, and fix all the problems in the CI caused by upgrading the version here. The errors detected when upgrading can be found here: - https://github.com/pandas-dev/pandas/actions/runs/15452252618/job/43496928413?pr=61555 - https://github.com/pandas-dev/pandas/actions/runs/15452252597/job/43496928225?pr=61555
[ "CI", "Needs Discussion", "Dependencies" ]
0
0
0
0
0
0
0
0
[ "If we don't pin to the minimum supported version, I fear contributors will submit code that is only supported on higher versions and then have to debug-via-CI. That is not a good experience. \n\nI think we should continue to push contributors to use the minimum Python version by pinning it rather than removing the pin.", "Thanks for the feedback @rhshadrach. Personally I think the CI will prevent any issue, and I'm not sure how often people code things that only work in the latest versions of Python. At the same time, it's not a big deal to keep updating the Python version in `environment.yml`, and probably not so important to run sphinx, the website, pre-commit, and not sure if something else in a newer Python version either. I'd still remove the pin, but I'm ok to just upgrade to 3.11 if you think it does make a difference regarding the contributors.", "I've lost track - when are we dropping 3.10 support? Do we have discussion on this?", "#60059 and Matt mentioned it in the last dev call too", "Thanks! I'm good with pinning the environment to 3.11 in that case even before we drop 3.10 fully on main. But I do think we should prefer this over no pin." ]
3,124,513,901
61,584
Proposal to allow third-party engines for readers and writers
open
2025-06-06T11:33:20
2025-06-06T11:33:43
null
https://github.com/pandas-dev/pandas/issues/61584
true
null
null
datapythonista
0
In [PDEP-9](https://pandas.pydata.org/pdeps/0009-io-extensions.html) it was discussed the possibility of allowing third-party packages to automatically add `pandas.read_<format>` functions and `DataFrame.to_<format>` methods. There was a main challenges that made the proposal not move forward: the complexity of managing multiple packages for the same format (conflicting names, differences in signatures...). What I propose here is similar, but not to register the readers/writers for whole formats, but engines of the existing formats instead. This is less ambitious, since it doesn't allow adding new formats to pandas (e.g. `pandas.read_dicom`, a format for medical data), but it still have the rest of the advantages of PDEP-9: - It still allows third-party packages to provide the code for pandas readers/writers (e.g. a faster csv reader, a new excel reader wrapping another excel library...) - It opens the door to removing from our code base connectors that can be better maintained elsewhere. As an example, engines like fastparquet for parquet, as well as others, are basically a mapping between our functions signature and their functions signatures, with a bit of extra logic. I think the engines are way more likely to need changes because changes in the wrapped library, than in our function signature, so to me it makes things simpler and easier to maintain if the engine was part of the fastparquet and pyarrow libraries. Moving engines out of pandas is something for the future, and it can be discussed individually, since it probably makes sense to keep many, and move out some - There would be no need to deal with optional dependencies for the engines using this system. Dealing with optional dependencies adds complexity that we can avoid - It would simplify our dependencies significantly (if moving engines out of pandas happens), as well as our tests. We had problems in the past because we skip tests depending on whether a library can be imported or not. 
And we were for a while not running many pandas tests. Having less optional dependencies would help prevent this sort of problems. - Conflicts in this case seem unlikely. Most of the engines are names after the library they wrap, as opposed to libraries "fighting" to register a format name. There could still be in some cases, but only for users with both the conflicting packages installed, and we can warn in this case. - We will continue to control the signature for all readers and writers, which for the users means that the formats are fixed, and every format has a unique signature which is documented in our docs - In some cases we already use `**kwargs` for engine specific parameters. This provides extra flexibility while keeping most of the signature unified Implementing this would have no impact to users unless they call a reader/writer with an engine value that is unknown. At that point instead of raising as now, we would first check for registered entry points, and if one exist for the format (e.g. "csv") and the provided engine name (e.g. "arrow-rs", a possible new reader based in Rust's Arrow implementation, if someone implements that), then the function provided by the entry point would handle the request. Only small drawback I can find is that since engines would be generic, the API pages of the documentation won't be able to provide engine specific information for the engines not in pandas itself. I think this is very reasonable, and we can keep a registry of known connectors in the Ecosystem page with links to their docs, as we usually do.
[ "Enhancement", "IO Data", "API Design" ]
2
0
0
0
0
0
0
0
[]
3,124,326,875
61,583
BUG: StataWriter returns ascii error when length of string is < 2045, but encoded length is > 2045
closed
2025-06-06T10:14:19
2025-06-30T18:14:30
2025-06-30T18:14:30
https://github.com/pandas-dev/pandas/issues/61583
true
null
null
Danferno
1
### Pandas version checks - [x] I have checked that this issue has not already been reported. - [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas. - [x] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas. ### Reproducible Example ```python import pandas as pd df = pd.DataFrame({'doubleByteCol': ['§'*1500]}) df.to_stata('temp.dta', version=118) len_encoded = df['doubleByteCol'].str.encode('utf-8').str.len() # _encode_strings() count = 3000 -> no byte encoding because assumed will become strL (stata.py:2694) len_typlist = df['doubleByteCol'].str.len() # _dtype_to_stata_type() = 1500 -> typ 1500 (stata.py:2193) len_typlist < 2045 # True -> Tries to convert to np dtype S1500, but fails because unicode characters are not supported (normally no issue because encoded to bytes first) (stata.py:2945,2956) ``` ### Issue Description The StataWriter uses two different versions of the string column to check the same thing. During _encode_strings() it checks the length of the byte-encoded column `max_len_string_array(ensure_object(encoded._values))` but when assigning numpy types it checks the (potentially) unencoded version `itemsize = max_len_string_array(ensure_object(column._values))`. This then trips up the _prepare_data() section, which expects short columns to be byte-encoded already `typ <= self._max_string_length` based on the reported type, which is not true if the encoded column > 2045 due to unicode characters such as `§` taking up two bytes. ### Expected Behavior I don't know the internal workings of stata.py well enough to be sure, but I think the easiest fix is using the actual values when checking str length in _encode_strings(). 
That is, replace ```max_len_string_array(ensure_object(encoded._values))``` by ```max_len_string_array(ensure_object(self.data[col]._values))``` ### Installed Versions <details> INSTALLED VERSIONS ------------------ commit : d9cdd2ee5a58015ef6f4d15c7226110c9aab8140 python : 3.12.1.final.0 python-bits : 64 OS : Windows OS-release : 11 Version : 10.0.26100 machine : AMD64 processor : AMD64 Family 25 Model 33 Stepping 0, AuthenticAMD byteorder : little LC_ALL : None LANG : en_US.UTF-8 LOCALE : English_Belgium.1252 pandas : 2.2.2 numpy : 1.26.4 pytz : 2024.1 dateutil : 2.9.0.post0 setuptools : None pip : 23.2.1 Cython : None pytest : 8.3.5 hypothesis : None sphinx : None blosc : None feather : None xlsxwriter : None lxml.etree : 4.9.4 html5lib : None pymysql : None psycopg2 : None jinja2 : None IPython : None pandas_datareader : None adbc-driver-postgresql: None adbc-driver-sqlite : None bs4 : None bottleneck : None dataframe-api-compat : None fastparquet : None fsspec : None gcsfs : None matplotlib : None numba : None numexpr : None odfpy : None openpyxl : None pandas_gbq : None pyarrow : 15.0.2 pyreadstat : None python-calamine : None pyxlsb : None s3fs : None scipy : None sqlalchemy : None tables : None tabulate : None xarray : None xlrd : None zstandard : None tzdata : 2024.1 qtpy : None pyqt5 : None </details> ### Temporary fix For users finding this topic, this refers to the following Exception ``` Exception has occurred: UnicodeEncodeError (note: full exception trace is shown but execution is paused at: <module>) 'ascii' codec can't encode characters in position 0-1499: ordinal not in range(128) File "F:\datatog\junkyard\adhoc-scripts\mwes\pandas_asciiencoding.py", line 4, in <module> (Current frame) df.to_stata('temp.dta', version=118) UnicodeEncodeError: 'ascii' codec can't encode characters in position 0-1499: ordinal not in range(128) ``` You can workaround this issue by explicitly specifying the offending columns in the `convert_strL` option.
[ "Bug", "IO Stata", "Needs Triage" ]
0
0
0
0
0
0
0
0
[ "take\n" ]
3,124,305,189
61,582
BUG: Require sample weights to sum to less than 1 when replace = True
closed
2025-06-06T10:05:48
2025-07-11T02:27:23
2025-07-11T02:27:15
https://github.com/pandas-dev/pandas/pull/61582
true
https://api.github.com/repos/pandas-dev/pandas/pulls/61582
https://github.com/pandas-dev/pandas/pull/61582
microslaw
2
- [x] closes #61516 - [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature - [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit). - [x] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
[ "Bug", "Algos" ]
0
0
0
0
0
0
0
0
[ " 1. I've added this as an exception, but I've also considered doing this as a warning, since the issue is\r\n\r\n - easy to encounter - one case occured in user_guide\r\n - hard to explain within a short error message - requires specific equation and some level of understanding of probability\r\n - has minimal effect on majority of use cases - as long as the variance isn't huge, the bias this error introduces is minimal, and for very specific cases e.g. the village sampling from the op warning should suffice\r\n\r\n 2. I also deviated a bit from original issue in that i'm allowing `max(w) * n / sum (w) == 1`. As far as I understand, it doesn't introduce any bias, and it allows some specific uses, e.g. `df.sample(n=1,weights=[1,0])`\r\n\r\n 3. If this is implemented as an exception, I believe that there is no way for the numpy exception `\"Fewer non-zero entries in p than size\"` to occur, as:\r\n\r\n - `w` - list of weights\r\n - `n` - number of non-zero weights in `w`\r\n - `size` - size of the sample to be drawn\r\n - `len` - length of `w`\r\n\r\n I'm assuming that\r\n\r\n - the weights are normalized, so `sum(w) = 1`, as it doesn't change anything and makes the math easier\r\n - the weights are equal, since if any of them is larger than the others, `max(w)` and `max(w) * size` will increase\r\n - therefore `w = [1/n, 1/n, ..., 1/n]` or any combination like `w = [1/n, 0, 1/n, 0, ..., 1/n]` where `n` is the number of non-zero weights\r\n - for the `\"Fewer non-zero entries in p than size\"` to occur, `size > n`\r\n - for the `\"Invalid weights: If replace=False, \"` to occur, `size * max(w) / sum(w) >= 1`\r\n - `size * max(w) / sum(w) >= 1`\r\n - `size * max(w) >= 1`, as `sum(w) = 1`\r\n - `size * 1/n >= 1`, as `max(w) = 1/n`\r\n - `size >= n`\r\n", "Thanks @microslaw!" ]
3,123,767,543
61,581
BUG: pd.DataFrame.mul has not support fill_value?
open
2025-06-06T06:15:20
2025-08-19T21:47:23
null
https://github.com/pandas-dev/pandas/issues/61581
true
null
null
TungPhaSanh
12
### Pandas version checks - [x] I have checked that this issue has not already been reported. - [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas. - [x] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas. ### Reproducible Example ```python import pandas as pd df = pd.DataFrame(np.arange(12).reshape(3,4)) s = [1,pd.NA,3] df.mul(s, axis="columns", fill_value=0) ``` ### Issue Description It raise error: fill_value=0 is not supported ### Expected Behavior It should return the result with filled NA value of 0 ### Installed Versions <details> 2.3.0 </details>
[ "Bug" ]
0
0
0
0
0
0
0
0
[ "Hi @TungPhaSanh ,\n\nIt seems the example code doesn't work because `s` only has 3 elements - I think it needs 4.\n\nI have tried this on the main branch, but `mul` seems to work fine when `fill_value=0`.\n\n```python\n>>> import pandas as pd\n>>> import numpy as np\n\n>>> df = pd.DataFrame(np.arange(12).reshape(3,4))\n>>> df\n 0 1 2 3\n0 0 1 2 3\n1 4 5 6 7\n2 8 9 10 11\n>>> s = [1,pd.NA,3, 4]\n>>> s\n[1, <NA>, 3, 4]\n>>> df.mul(s, axis=\"columns\", fill_value=0)\n 0 1 2 3\n0 0 0 6 12\n1 4 0 18 28\n2 8 0 30 44\n\n```", "Thanks @sanggon6107 \n\nMy bad. Sorry with my example, the true one is:\n\n> import pandas as pd\n> df = pd.DataFrame(np.arange(12).reshape(3,4))\n> s = [1,pd.NA,3]\n> df.mul(s, axis=\"index\", fill_value=0) # index instead of columns\n\nI have try it, it is okay, but sometimes in my project it raise error fill_value=0 is not supported. I have to fillna first before doing multiplication.\n\n", "Thanks for the report. Your example doesn't reproduce for me on latest release (2.3.0) with `axis='index'`.\n\nCan you confirm your example provided currently fails for you and if so, please provide the full traceback. If not, please update your post with a reproducible example. Thanks!\n\n```py\n>>> df\n 0 1 2 3\n0 0 1 2 3\n1 4 5 6 7\n2 8 9 10 11\n>>> s = [1,pd.NA,3]\n>>> df.mul(s, axis=\"index\", fill_value=0)\n 0 1 2 3\n0 0 1 2 3\n1 0 0 0 0\n2 24 27 30 33\n>>> pd.__version__\n'2.3.0'\n```", "Hi @asishm ,\n\nI have face this error in my project, when I try it. 
Here is my code:\n\n``` python\nadjusted_data[[\"Open\", \"High\", \"Low\", \"Close\"]] = adjusted_data[\n [\"Open\", \"High\", \"Low\", \"Close\"]\n].mul(adjusted_data[\"Multiplier\"], axis=0, fill_value=1)\n\n---------------------------------------------------------------------------\nNotImplementedError Traceback (most recent call last)\n~\\AppData\\Local\\Temp\\ipykernel_9980\\1777838207.py in ?()\n 2 listed_data, adjustment_multiplier, left_on=\"Ticker\", right_index=True, how=\"left\"\n 3 )\n 4 adjusted_data[[\"Open\", \"High\", \"Low\", \"Close\"]] = adjusted_data[\n 5 [\"Open\", \"High\", \"Low\", \"Close\"]\n----> 6 ].mul(adjusted_data[\"Multiplier\"], axis=0, fill_value=1)\n 7 adjusted_data\n\n~\\AppData\\Local\\Packages\\PythonSoftwareFoundation.Python.3.13_qbz5n2kfra8p0\\LocalCache\\local-packages\\Python313\\site-packages\\pandas\\core\\frame.py in ?(self, other, axis, level, fill_value)\n 8386 @Appender(ops.make_flex_doc(\"mul\", \"dataframe\"))\n 8387 def mul(\n 8388 self, other, axis: Axis = \"columns\", level=None, fill_value=None\n 8389 ) -> DataFrame:\n-> 8390 return self._flex_arith_method(\n 8391 other, operator.mul, level=level, fill_value=fill_value, axis=axis\n 8392 )\n\n~\\AppData\\Local\\Packages\\PythonSoftwareFoundation.Python.3.13_qbz5n2kfra8p0\\LocalCache\\local-packages\\Python313\\site-packages\\pandas\\core\\frame.py in ?(self, other, op, axis, level, fill_value)\n 8264 \n 8265 if isinstance(other, Series) and fill_value is not None:\n 8266 # TODO: We could allow this in cases where we end up going\n 8267 # through the DataFrame path\n-> 8268 raise NotImplementedError(f\"fill_value {fill_value} not supported.\")\n 8269 \n 8270 other = ops.maybe_prepare_scalar_for_op(other, self.shape)\n 8271 self, other = self._align_for_op(other, axis, flex=True, level=level)\n\nNotImplementedError: fill_value 1 not supported.\n```\n\nFor more information: `adjusted_data` is a DataFrame of 9 columns * 4,000,000 rows. 
\n\nBut when I try the following:\n\n``` python\nadjusted_data[[\"Open\", \"High\", \"Low\", \"Close\"]] = adjusted_data[\n [\"Open\", \"High\", \"Low\", \"Close\"]\n].mul(adjusted_data[\"Multiplier\"].fillna(1), axis=0)\n```\n\nit is okay.", "I was able to replicate the issue and can takeover the ticket\n", "@jbrockmendel Do you remember if there was a specific reason fill_value was initially excluded from working on series for DataFrame.mul? Because the functionality seems to already be there barring the check that's throwing the error above.", "It was like this when I got here. I don't think anyone would mind adding support.", "@jbrockmendel Do you know what the TODO comment there could mean then? Not sure that I fully understand what the original reasoning for it could be ", "i suspect the idea was that depending on the axis we could do `other = DataFrame({x: other for x in self.columns})` (not exactly that bc of non-unique columns)", "@jbrockmendel after tinkering around, I think that the fill_value problem was fixed in the PR that I ended up closing. Operations not supporting 1-D EAs feels like a separate issue to me. My proposal is that I put in a PR just tackling the fill_value change and then put a separate one in for 1-D EA operations. What do you think?\n\n", "IIRC that PR was almost right and could get there with a little work. you're welcome to reopen it and we can talk through a fix.", "Oop, I got completely sidetracked and end up working on getting 1D EAs to work with operations. Ill reopen the PR and we can move the discussion over there" ]
3,123,743,755
61,580
DOC: Fix typos
closed
2025-06-06T06:01:54
2025-06-06T13:08:36
2025-06-06T13:08:35
https://github.com/pandas-dev/pandas/pull/61580
true
https://api.github.com/repos/pandas-dev/pandas/pulls/61580
https://github.com/pandas-dev/pandas/pull/61580
omahs
1
- [ ] closes #xxxx (Replace xxxx with the GitHub issue number) - [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature - [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit). - [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions. - [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
[ "Docs" ]
0
0
0
0
0
0
0
0
[ "pre-commit.ci autofix" ]
3,123,461,220
61,579
`pandas.__version__` is `2.3.0+4.g1dfc98e16a` in pandas 2.3.0 and python 3.9, not `2.3.0`
closed
2025-06-06T02:49:46
2025-07-07T19:28:56
2025-07-07T19:28:56
https://github.com/pandas-dev/pandas/issues/61579
true
null
null
harupy
9
## How to reproduce: ``` docker run --rm python:3.9 bash -c "pip install pandas && python -c 'import pandas; print(pandas.__version__)'" ``` ## Output: ```sh % docker run --rm python:3.9 bash -c "pip install pandas && python -c 'import pandas; print(pandas.__version__)'" ... # pip install logs 2.3.0+4.g1dfc98e16a ``` Seems related to https://github.com/pandas-dev/pandas/issues/61563#issuecomment-2947099734
[ "Bug" ]
0
0
0
0
3
0
0
0
[ "@mroeschke Is this expected?", "I can confirm this. \nThis breaks our pandas version check function in our sdk\nhttps://github.com/shinnytech/tqsdk-ci/actions/runs/15481907073/job/43589180749\n\n![Image](https://github.com/user-attachments/assets/f981017f-65f0-4713-8e34-87aa2c2b62f8)", "@shinny-taojiachun `importlib.metadata.version` should work as a workaround.", "@harupy Thank you for your assistance. We are aware that what we used in the past may not be the best practice. \nNewer releases of our sdk should change to a better version check. But releasing a newer version takes a lot of time.\n\nMeanwhile, this issue is blocking our users from using our sdk, especially new installations and upgrades of our sdk.", "I can confirm this happens to me, only for 3.9 and pandas installed via pip (installed via conda-forge has the right version)", "Same problem.", "Broken `ydata-profiling` integration due to this error.\n```\n../../../venv3.9/lib/python3.9/site-packages/ydata_profiling/__init__.py:10: in <module>\n from ydata_profiling.compare_reports import compare # isort:skip # noqa\n../../../venv3.9/lib/python3.9/site-packages/ydata_profiling/compare_reports.py:12: in <module>\n from ydata_profiling.profile_report import ProfileReport\n../../../venv3.9/lib/python3.9/site-packages/ydata_profiling/profile_report.py:26: in <module>\n from visions import VisionsTypeset\n../../../venv3.9/lib/python3.9/site-packages/visions/__init__.py:4: in <module>\n from visions.backends import *\n../../../venv3.9/lib/python3.9/site-packages/visions/backends/__init__.py:9: in <module>\n import visions.backends.pandas\n../../../venv3.9/lib/python3.9/site-packages/visions/backends/pandas/__init__.py:2: in <module>\n import visions.backends.pandas.types\n../../../venv3.9/lib/python3.9/site-packages/visions/backends/pandas/types/__init__.py:1: in <module>\n import visions.backends.pandas.types.boolean\n../../../venv3.9/lib/python3.9/site-packages/visions/backends/pandas/types/boolean.py:11: in 
<module>\n from visions.backends.pandas.test_utils import (\n<frozen importlib._bootstrap>:1007: in _find_and_load\n ???\n<frozen importlib._bootstrap>:986: in _find_and_load_unlocked\n ???\n<frozen importlib._bootstrap>:680: in _load_unlocked\n ???\n../../../venv3.9/lib/python3.9/site-packages/_pytest/assertion/rewrite.py:184: in exec_module\n exec(co, module.__dict__)\n../../../venv3.9/lib/python3.9/site-packages/visions/backends/pandas/test_utils.py:12: in <module>\n pandas_version = tuple(int(i) for i in pd.__version__.split(\".\"))\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \n\n.0 = <list_iterator object at 0x10eb67400>\n\n> pandas_version = tuple(int(i) for i in pd.__version__.split(\".\"))\nE ValueError: invalid literal for int() with base 10: '0+4'\n\n../../../venv3.9/lib/python3.9/site-packages/visions/backends/pandas/test_utils.py:12: ValueError\n```", "Apologies, yes this is unexpected. This was due to having to release 3.9 wheels in an unorthodox way https://github.com/pandas-dev/pandas/pull/61569 with context in https://github.com/pandas-dev/pandas/issues/61563#issuecomment-2945331441\n\n(Removing the `good first issue` since the fix just involves releasing pandas through our normal mechanisms)\n", "We have released pandas 2.3.1, and this version should now have a proper version number also for Python 3.9" ]