id int64 | number int64 | title string | state string | created_at timestamp[s] | updated_at timestamp[s] | closed_at timestamp[s] | html_url string | is_pull_request bool | pull_request_url string | pull_request_html_url string | user_login string | comments_count int64 | body string | labels list | reactions_plus1 int64 | reactions_minus1 int64 | reactions_laugh int64 | reactions_hooray int64 | reactions_confused int64 | reactions_heart int64 | reactions_rocket int64 | reactions_eyes int64 | comments list |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2,990,517,180 | 61,278 | WEB: Removed Coiled as a sponsor, and update past sponsors list | closed | 2025-04-12T15:00:33 | 2025-04-13T15:37:19 | 2025-04-13T15:37:19 | https://github.com/pandas-dev/pandas/pull/61278 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61278 | https://github.com/pandas-dev/pandas/pull/61278 | datapythonista | 0 | - [X] xref #61277
Removing only Coiled for now (NumFOCUS pending discussion)
In #61121 I forgot to move removed sponsors to the past sponsors list. Doing it here. | [
"Web"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] |
2,990,463,851 | 61,277 | WEB: Remove NumFOCUS and Coiled as sponsors | closed | 2025-04-12T13:03:38 | 2025-04-14T14:20:03 | 2025-04-13T15:39:45 | https://github.com/pandas-dev/pandas/issues/61277 | true | null | null | datapythonista | 12 | pandas is not really well funded these days, and we've been updating the sponsors list recently to be more accurate. I'd still like to remove NumFOCUS and Coiled from the list. Of course I do appreciate the support that both NumFOCUS and Coiled provided and still provide to pandas, but I think having them as sponsors is at this point misleading and creates the false impression that pandas is better funded than it is.
For NumFOCUS, we pay them 15% of the pandas income, mostly for financial support. We get a few other things like legal support or small development grants. But I wouldn't list them as "pandas has the support of NumFOCUS" in periods of very low funding like now, as we have probably supported NumFOCUS more than NumFOCUS supported us. For reference, OpenCollective US would charge us 10%, and OpenCollective EU 8%.
For Coiled, I saw Patrick made two commits in the last 7 months, and if I'm not wrong he has not been very active in PR review or other maintenance tasks recently. I'm truly thankful to them, as Patrick was able to do lots of quality work for pandas sponsored by them in the past. But as of today, having them as a sponsor is again misleading visitors of our website into thinking pandas is in as healthy a financial state as it was a couple of years ago, when in practice it's not.
I hope having a more accurate list can help us find new sponsors, help when we ask for new grants, and maybe even help make our current sponsors aware of how key their support is at this point.
@pandas-dev/pandas-core @phofl please let me know if there are any objections, otherwise I'll move forward with this in the next few days. | [
"Web"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"I don't think NumFOCUS should be removed - even if not financially they still help us with legal and administrative tasks as an organization.\n\nI have no opinion on Coiled",
"> I don't think NumFOCUS should be removed - even if not financially they still help us with legal and administrative tasks as an organization.\n\nThey help us because we paid them more than $100k+ for it. Many people I spoke with believe pandas receives funds from NumFOCUS, when the money goes in the opposite direction. I just don't think creating the false impression that NumFOCUS is funding our development at a time when pandas is mostly a single maintainer project is very helpful. That's my point, but if people like to have their logo in the list of sponsors, no big deal.",
"> > I don't think NumFOCUS should be removed - even if not financially they still help us with legal and administrative tasks as an organization.\n> \n> They help us because we paid them more than $100k+ for it. Many people I spoke with believe pandas receives funds from NumFOCUS, when the money goes in the opposite direction. I just don't think creating the false impression that NumFOCUS is funding our development at a time when pandas is mostly a single maintainer project is very helpful. That's my point, but if people like to have their logo in the list of sponsors, no big deal.\n\nI had exactly this impression until I read your post. I agree with you",
"> Many people I spoke with believe pandas receives funds from NumFOCUS, when the money goes in the opposite direction.\n\nSponsors support a project. I think the error here is thinking that support only consists of funding (including indirectly by allowing employees to maintain the project on paid time). I don't have a good sense of how much support NumFOCUS provides, but it seems to me removing them as a sponsor _just because_ people think that means they are funding pandas is wrong.\n\n> They help us because we paid them more than $100k+ for it.\n\nDo we have a sense that they have not provided support that makes this worth it?\n\n@datapythonista - I also removed the good first issue tag. Agreed this will be a good first issue once the direction forward is decided, but since new contributors use this to find issues to work on now we should wait until then.",
"Sound good to me re coiled (I actually left coiled a few weeks ago anyway but didn’t get around to do that yet)",
"Thanks all for the feedback.\n\n> Do we have a sense that they have not provided support that makes this worth it?\n\nI think this opens a separate discussion, and I don't have enough information to be strongly opinionated about it. Based on my experience, I don't think the service we're getting is as good as it should be. As an example, for the last contract one of us had to sign, it took more than 2 months of waiting on NumFOCUS before it was ready. I also was quite pissed off when a few of us (pandas maintainers) considered teaching pandas in a somewhat official way to help the project raise funds (and be paid for it), and this was blocked by NumFOCUS. I also had a quite bad experience when I volunteered to help NumFOCUS with their infrastructure and most of my work was boycotted and thousands of dollars were wasted for no reason I could understand.\n\nSo, for now, since our income is almost nothing, we aren't paying much to NumFOCUS, so I guess it's worth it. If at some point we get an annual income of $200k, I would personally save $10k on fees and move to OpenCollective US. But I don't think there'll be consensus, so I won't lead the effort.\n\nBack to the discussion here on whether NumFOCUS supports us. It's to me as if we listed AWS as a pandas supporter because we spend $30k a year on AWS credits, and they support us with some virtual machines because of it. I think the only difference between the two cases is that we are the ones who are happy to support NumFOCUS. I was personally happy with it when we had 10 sponsors, a $200k budget, and more money for maintenance than we could spend. But right now we have Matt's hours from NVIDIA, $500 a month from Tidelift, and the ongoing Bodo grant (for features; we finished the maintenance hours they granted us).\n\nAll non-volunteer work for pandas I'm aware of is paid by NVIDIA, Tidelift and Bodo. I'd like the community, organizations considering giving us a grant, companies considering supporting us... to see that on our website, so they can make an informed decision.\n",
"Sorry to hear about all the negative experience with NumFOCUS. I know that they too have undergone a lot of change within recent memory, so there's definitely a lot to be critical of. \n\nOn the flip side I think NumFOCUS has been recently helpful with trademark enforcement (pandasAI) and with the legal agreement to get a royalty donation from Packt from the sales of my book. I don't want to discredit entirely where they help, and since we haven't gotten many funds within the past year or so, they in turn have not received much financial support through us.\n\nIf we collectively wanted to reassess our relationship with NumFOCUS, I suggest we try and restart that conversation with someone there like Nicole. If that doesn't yield whatever it is we are looking for, then I think a PDEP to explore alternatives would be warranted (similar to what Jupyter did when they left NumFOCUS for the Linux Foundation https://jupyter.org/governance/linux-proposal.html)\n\nFor now though, unless someone is willing to lead that charge, I think we should still consider them a sponsor and partner",
"I won't consider them a pandas sponsor since they are not. ;) But agree with all you said.\n\nI'll close here, since Coiled has been removed, and it seems there is no agreement to remove NumFOCUS as a sponsor. Thanks all for the feedback.",
"One thing to consider is that we might want to have \"sponsors\" and \"supporters\". Or maybe \"financial sponsors\" and \"administrative supporters\". Then we separate NumFocus into the second category.",
"I'm positive on this. No strong opinion, but \"Financial sponsors\" vs \"Supporters\" sounds best to me. And that we include entities that supply maintainers or compute resources as financial sponsors.",
"If we create a category specific for NumFOCUS, I'd use `Fiscal host`. NumFOCUS uses the more American version `Fiscal sponsor` which I think it's misleading for anyone not familiar with the term (most people). But something like `Financial sponsors: ...` and `With the support of: NumFOCUS` sounds like an improvement to what we have now, so I'd settle for that even if I still find it misleading.",
"Keep in mind that we do receive financial sponsorship from NumFOCUS through small development grants. I don't know our exact financial situation last year, but I would expect that that was one of the largest income sources for the project (Natalia and I both received $2500 grants)"
] |
2,990,436,119 | 61,276 | BUG: FutureWarning for palette parameter without hue in faceted distributions | closed | 2025-04-12T12:05:08 | 2025-04-26T12:04:05 | 2025-04-26T12:04:01 | https://github.com/pandas-dev/pandas/issues/61276 | true | null | null | lavaeagle2 | 2 | ### Pandas version checks
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [x] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
import seaborn as sns
import matplotlib.pyplot as plt
# Sample data
tips = sns.load_dataset("tips")
# Faceted distribution plot
sns.displot(data=tips, x="total_bill", palette="viridis")
plt.show()
```
### Issue Description
When using faceted distributions with Seaborn and passing the `palette` parameter without assigning `hue`, a FutureWarning is raised. The warning suggests assigning `hue` and setting `legend=False` to avoid deprecation in future versions (v0.14.0). This behavior needs clarification or adjustment in Pandas' integration with Seaborn plotting functions.
Observed behavior:
FutureWarning: Passing `palette` without assigning `hue` is deprecated and will be removed in v0.14.0. Assign the `y` variable to `hue` and set `legend=False` for the same effect.
### Expected Behavior
The warning should either be suppressed or handled gracefully within Pandas' plotting functions when interfacing with Seaborn.
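As the maintainer reply in the comments notes, the warning originates in Seaborn, not pandas, so pandas cannot intercept it. Until the call site is migrated to pass `hue` explicitly, it can be silenced locally with the standard library. A minimal sketch, in which the hypothetical `noisy_displot` stands in for the Seaborn call that emits the notice:

```python
import warnings

def noisy_displot():
    # stand-in for sns.displot(..., palette=...) emitting the deprecation notice
    warnings.warn(
        "Passing `palette` without assigning `hue` is deprecated",
        FutureWarning,
    )
    return "plotted"

# scope the filter tightly so unrelated FutureWarnings stay visible
with warnings.catch_warnings():
    warnings.filterwarnings("ignore", message=".*palette.*", category=FutureWarning)
    result = noisy_displot()

print(result)  # "plotted", with the warning suppressed inside the block only
```

Outside the `with` block the filter is restored, so this does not hide the eventual hard error when Seaborn removes the behavior in v0.14.0.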
### Installed Versions
<details>
/usr/local/lib/python3.11/dist-packages/_distutils_hack/__init__.py:31: UserWarning: Setuptools is replacing distutils. Support for replacing an already imported distutils is deprecated. In the future, this condition will fail. Register concerns at https://github.com/pypa/setuptools/issues/new?template=distutils-deprecation.yml
warnings.warn(
INSTALLED VERSIONS
------------------
commit : d9cdd2ee5a58015ef6f4d15c7226110c9aab8140
python : 3.11.12.final.0
python-bits : 64
OS : Linux
OS-release : 6.1.85+
Version : #1 SMP PREEMPT_DYNAMIC Thu Jun 27 21:05:47 UTC 2024
machine : x86_64
processor : x86_64
byteorder : little
LC_ALL : en_US.UTF-8
LANG : en_US.UTF-8
LOCALE : en_US.UTF-8
pandas : 2.2.2
numpy : 2.0.2
pytz : 2025.2
dateutil : 2.8.2
setuptools : 75.2.0
pip : 24.1.2
Cython : 3.0.12
pytest : 8.3.5
hypothesis : None
sphinx : 8.2.3
blosc : None
feather : None
xlsxwriter : None
lxml.etree : 5.3.1
html5lib : 1.1
pymysql : None
psycopg2 : 2.9.10
jinja2 : 3.1.6
IPython : 7.34.0
pandas_datareader : 0.10.0
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : 4.13.3
bottleneck : 1.4.2
dataframe-api-compat : None
fastparquet : None
fsspec : 2025.3.2
gcsfs : 2025.3.2
matplotlib : 3.10.0
numba : 0.60.0
numexpr : 2.10.2
odfpy : None
openpyxl : 3.1.5
pandas_gbq : 0.28.0
pyarrow : 18.1.0
pyreadstat : None
python-calamine : None
pyxlsb : None
s3fs : None
scipy : 1.14.1
sqlalchemy : 2.0.40
tables : 3.10.2
tabulate : 0.9.0
xarray : 2025.1.2
xlrd : 2.0.1
zstandard : 0.23.0
tzdata : 2025.2
qtpy : None
pyqt5 : None
</details>

| [
"Bug",
"Visualization"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"> This behavior needs clarification or adjustment in Pandas' integration with Seaborn plotting functions.\n\nCan you clarify why you think this is a pandas issue?\n\n> The warning should either be suppressed or handled gracefully within Pandas' plotting functions when interfacing with Seaborn.\n\nAs far as I can tell there is no point in the callstack where seaborn is passing over control to pandas, so this is not technically possible.\n\nI think you need to raise an issue with Seaborn, although I don't understand why you think the current warning is not appropriate.",
"Thank you for the clarification.\n\nYou're right — after reviewing the call stack and the behavior more carefully, I can see that this warning originates directly from Seaborn, not from Pandas.\nI appreciate your explanation, and I’ll move this issue over to the Seaborn repository instead.\n\nThanks again for the quick response and guidance!\n"
] |
2,990,297,299 | 61,275 | ENH: support reading directory in read_csv | open | 2025-04-12T07:09:32 | 2025-08-21T20:51:49 | null | https://github.com/pandas-dev/pandas/pull/61275 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61275 | https://github.com/pandas-dev/pandas/pull/61275 | fangchenli | 6 | - [x] closes https://github.com/bodo-ai/Bodo-Pandas-Collaboration/issues/2
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [x] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [x] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| [
"IO CSV"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"FWIW I recall the team being negative in the past about supporting reading directories of files, and we document just concatting DataFrames read from a directory: https://pandas.pydata.org/docs/user_guide/cookbook.html#reading-multiple-files-to-create-a-single-dataframe. Are we sure we want to include this?",
"> FWIW I recall the team being negative in the past about supporting reading directories of files\r\n\r\nDo you remember the reason? This seems like a useful thing, as I think it's common for some datasets to be split in different files with the same schema. And there is some added complexity to this, but it seems consistent with other syntactic sugar we have in IO operations such as decompressing, downloading, etc.\r\n\r\n",
"Note that you've got the image from Will's book in this PR, this happened when we had to hard revert it from git history.",
"The remaining test failures are related to S3. Not sure what the root cause is. Trying to cleanup S3-related tests a bit in https://github.com/pandas-dev/pandas/pull/61703.",
"i think an unrelated file got added?",
"> i think an unrelated file got added?\r\n\r\nRemoved."
] |
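For context, the cookbook recipe that the first comment links to handles multi-file datasets by reading each file and concatenating; a minimal sketch of that documented workaround (the directory and file names are made up):

```python
from pathlib import Path
import tempfile

import pandas as pd

# build a throwaway directory holding two CSV parts with the same schema
tmp = Path(tempfile.mkdtemp())
(tmp / "part1.csv").write_text("a,b\n1,2\n3,4\n")
(tmp / "part2.csv").write_text("a,b\n5,6\n")

# cookbook approach: read every file in the directory, then concat
frames = [pd.read_csv(p) for p in sorted(tmp.glob("*.csv"))]
df = pd.concat(frames, ignore_index=True)
print(df.shape)  # (3, 2)
```

The PR under discussion would fold this loop into `read_csv` itself when given a directory path; the sketch above is only the status-quo equivalent.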
2,989,968,950 | 61,274 | DOC: Add documentation for `groupby.expanding()` | closed | 2025-04-12T00:15:43 | 2025-04-15T12:27:07 | 2025-04-14T22:41:29 | https://github.com/pandas-dev/pandas/pull/61274 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61274 | https://github.com/pandas-dev/pandas/pull/61274 | arthurlw | 3 | - [x] closes #61254
- [ ] ~[Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature~
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] ~Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.~
- [ ] ~Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.~
| [
"Docs",
"Groupby",
"Window"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Thanks @arthurlw - If you're interested I also think it would be good to replace `*args` and `**kwargs` throughout the public functions in the pandas API with the exact arguments that the method takes. This is more user-friendly not only for docs, but for linting and auto-completion.",
"Yeah I would be happy to work on replacing `*args` and `**kwargs` in public functions. I'll draft an issue and start listing down some of the public functions that need updating. Let me know if there’s a preferred place to start or if you’d prefer separate issues/PRs per module or function.",
"I would recommend \"one\" method per PR (where e.g. Series.sum and DataFrame.sum counts as 1) but no issues need to be made unless it seems uncertain whether we want to replace *args / **kwargs. The one case where we don't want to replace them is when they are passed through to a third-party (e.g. matplotlib or xlsxwriter)."
] |
2,989,028,515 | 61,273 | ENH: Add `tzdata` to the `_hard_dependencies` | closed | 2025-04-11T15:43:38 | 2025-04-22T16:04:23 | 2025-04-22T16:04:23 | https://github.com/pandas-dev/pandas/issues/61273 | true | null | null | chilin0525 | 2 | ### Feature Type
- [ ] Adding new functionality to pandas
- [x] Changing existing functionality in pandas
- [ ] Removing existing functionality in pandas
### Problem Description
In https://github.com/pandas-dev/pandas/pull/61084#pullrequestreview-2669422152, it was suggested that `tzdata` be added to the `_hard_dependencies` list.
### Feature Description
Extend current `_hard_dependencies` from `("numpy", "dateutil")` to `("numpy", "dateutil", "tzdata")`.
### Alternative Solutions
No
### Additional Context
_No response_ | [
"Enhancement",
"Dependencies"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"I'm working on a related issue in https://github.com/pandas-dev/pandas/pull/61084. I'd be happy to work on this after #61084 is merged!",
"take"
] |
2,988,760,690 | 61,272 | BUILD: Error installing pandas 2.2.3 on AIX 7.3 system (error: conflicting types for lockf64, lseek64, ftruncate64..) | closed | 2025-04-11T14:00:12 | 2025-04-17T08:40:33 | 2025-04-17T08:40:30 | https://github.com/pandas-dev/pandas/issues/61272 | true | null | null | jose1711 | 1 | ### Installation check
- [x] I have read the [installation guide](https://pandas.pydata.org/pandas-docs/stable/getting_started/install.html#installing-pandas).
### Platform
AIX-3-CENSORED-powerpc-64bit
### Installation Method
pip install
### pandas Version
2.2.3
### Python Version
3.11
### Installation Logs
```
pip3.11 install --no-build-isolation pandas -vvv
```
<details>
Running command Preparing metadata (pyproject.toml)
+ meson setup /tmp/pip-install-m6bze4df/pandas_9acde4f69c3542ff9312ac4b80e89b4d /tmp/pip-install-m6bze4df/pandas_9acde4f69c3542ff9312ac4b80e89b4d/.mesonpy-j4blhphr -Dbuildtype=release -Db_ndebug=if-release -Db_vscrt=md --vsenv --native-file=/tmp/pip-install-m6bze4df/pandas_9acde4f69c3542ff9312ac4b80e89b4d/.mesonpy-j4blhphr/meson-python-native-file.ini
The Meson build system
Version: 1.6.1
Source dir: /tmp/pip-install-m6bze4df/pandas_9acde4f69c3542ff9312ac4b80e89b4d
Build dir: /tmp/pip-install-m6bze4df/pandas_9acde4f69c3542ff9312ac4b80e89b4d/.mesonpy-j4blhphr
Build type: native build
Project name: pandas
Project version: 2.2.3
C compiler for the host machine: gcc (gcc 10.3.0 "gcc (GCC) 10.3.0")
C linker for the host machine: gcc ld.aix 7.3.2
C++ compiler for the host machine: c++ (gcc 10.3.0 "c++ (GCC) 10.3.0")
C++ linker for the host machine: c++ ld.aix 7.3.2
Cython compiler for the host machine: cython (cython 3.0.8)
Host machine cpu family: ppc
Host machine cpu: powerpc
Program python found: YES (/opt/freeware/bin/python3.11)
Found pkg-config: YES (/opt/freeware/bin/pkg-config) 0.29.2
Run-time dependency python found: YES 3.11
Build targets in project: 53
pandas 2.2.3
User defined options
Native files: /tmp/pip-install-m6bze4df/pandas_9acde4f69c3542ff9312ac4b80e89b4d/.mesonpy-j4blhphr/meson-python-native-file.ini
b_ndebug : if-release
b_vscrt : md
buildtype : release
vsenv : true
Found ninja-1.12.1 at /opt/freeware/bin/ninja
Visual Studio environment is needed to run Ninja. It is recommended to use Meson wrapper:
/opt/freeware/bin/meson compile -C .
+ /opt/freeware/bin/ninja
[1/151] Generating pandas/_libs/algos_take_helper_pxi with a custom command
[2/151] Generating pandas/_libs/algos_common_helper_pxi with a custom command
[3/151] Generating pandas/_libs/khash_primitive_helper_pxi with a custom command
[4/151] Generating pandas/_libs/hashtable_class_helper_pxi with a custom command
[5/151] Generating pandas/_libs/hashtable_func_helper_pxi with a custom command
[6/151] Generating pandas/_libs/index_class_helper_pxi with a custom command
[7/151] Generating pandas/_libs/intervaltree_helper_pxi with a custom command
[8/151] Generating pandas/_libs/sparse_op_helper_pxi with a custom command
[9/151] Compiling Cython source /tmp/pip-install-m6bze4df/pandas_9acde4f69c3542ff9312ac4b80e89b4d/pandas/_libs/tslibs/base.pyx
[10/151] Compiling Cython source /tmp/pip-install-m6bze4df/pandas_9acde4f69c3542ff9312ac4b80e89b4d/pandas/_libs/tslibs/dtypes.pyx
[11/151] Compiling Cython source /tmp/pip-install-m6bze4df/pandas_9acde4f69c3542ff9312ac4b80e89b4d/pandas/_libs/tslibs/np_datetime.pyx
[12/151] Compiling Cython source /tmp/pip-install-m6bze4df/pandas_9acde4f69c3542ff9312ac4b80e89b4d/pandas/_libs/tslibs/ccalendar.pyx
[13/151] Compiling Cython source /tmp/pip-install-m6bze4df/pandas_9acde4f69c3542ff9312ac4b80e89b4d/pandas/_libs/tslibs/nattype.pyx
warning: /tmp/pip-install-m6bze4df/pandas_9acde4f69c3542ff9312ac4b80e89b4d/pandas/_libs/tslibs/nattype.pyx:79:0: Global name __nat_unpickle matched from within class scope in contradiction to to Python 'class private name' rules. This may change in a future release.
warning: /tmp/pip-install-m6bze4df/pandas_9acde4f69c3542ff9312ac4b80e89b4d/pandas/_libs/tslibs/nattype.pyx:79:0: Global name __nat_unpickle matched from within class scope in contradiction to to Python 'class private name' rules. This may change in a future release.
[14/151] Compiling Cython source /tmp/pip-install-m6bze4df/pandas_9acde4f69c3542ff9312ac4b80e89b4d/pandas/_libs/tslibs/conversion.pyx
[15/151] Compiling Cython source /tmp/pip-install-m6bze4df/pandas_9acde4f69c3542ff9312ac4b80e89b4d/pandas/_libs/tslibs/parsing.pyx
[16/151] Compiling Cython source /tmp/pip-install-m6bze4df/pandas_9acde4f69c3542ff9312ac4b80e89b4d/pandas/_libs/tslibs/fields.pyx
[17/151] Compiling Cython source /tmp/pip-install-m6bze4df/pandas_9acde4f69c3542ff9312ac4b80e89b4d/pandas/_libs/tslibs/offsets.pyx
[18/151] Compiling Cython source /tmp/pip-install-m6bze4df/pandas_9acde4f69c3542ff9312ac4b80e89b4d/pandas/_libs/tslibs/period.pyx
[19/151] Compiling Cython source /tmp/pip-install-m6bze4df/pandas_9acde4f69c3542ff9312ac4b80e89b4d/pandas/_libs/tslibs/strptime.pyx
[20/151] Compiling Cython source /tmp/pip-install-m6bze4df/pandas_9acde4f69c3542ff9312ac4b80e89b4d/pandas/_libs/tslibs/vectorized.pyx
[21/151] Compiling Cython source /tmp/pip-install-m6bze4df/pandas_9acde4f69c3542ff9312ac4b80e89b4d/pandas/_libs/tslibs/timezones.pyx
[22/151] Compiling Cython source /tmp/pip-install-m6bze4df/pandas_9acde4f69c3542ff9312ac4b80e89b4d/pandas/_libs/arrays.pyx
[23/151] Compiling Cython source /tmp/pip-install-m6bze4df/pandas_9acde4f69c3542ff9312ac4b80e89b4d/pandas/_libs/tslibs/tzconversion.pyx
[24/151] Compiling Cython source /tmp/pip-install-m6bze4df/pandas_9acde4f69c3542ff9312ac4b80e89b4d/pandas/_libs/tslibs/timedeltas.pyx
[25/151] Compiling Cython source /tmp/pip-install-m6bze4df/pandas_9acde4f69c3542ff9312ac4b80e89b4d/pandas/_libs/tslibs/timestamps.pyx
[26/151] Compiling Cython source /tmp/pip-install-m6bze4df/pandas_9acde4f69c3542ff9312ac4b80e89b4d/pandas/_libs/indexing.pyx
[27/151] Compiling Cython source /tmp/pip-install-m6bze4df/pandas_9acde4f69c3542ff9312ac4b80e89b4d/pandas/_libs/hashing.pyx
[28/151] Compiling Cython source /tmp/pip-install-m6bze4df/pandas_9acde4f69c3542ff9312ac4b80e89b4d/pandas/_libs/internals.pyx
[29/151] Compiling Cython source /tmp/pip-install-m6bze4df/pandas_9acde4f69c3542ff9312ac4b80e89b4d/pandas/_libs/missing.pyx
[30/151] Compiling Cython source /tmp/pip-install-m6bze4df/pandas_9acde4f69c3542ff9312ac4b80e89b4d/pandas/_libs/ops_dispatch.pyx
[31/151] Compiling Cython source /tmp/pip-install-m6bze4df/pandas_9acde4f69c3542ff9312ac4b80e89b4d/pandas/_libs/index.pyx
[32/151] Compiling Cython source /tmp/pip-install-m6bze4df/pandas_9acde4f69c3542ff9312ac4b80e89b4d/pandas/_libs/interval.pyx
[33/151] Compiling Cython source /tmp/pip-install-m6bze4df/pandas_9acde4f69c3542ff9312ac4b80e89b4d/pandas/_libs/parsers.pyx
[34/151] Compiling Cython source /tmp/pip-install-m6bze4df/pandas_9acde4f69c3542ff9312ac4b80e89b4d/pandas/_libs/lib.pyx
[35/151] Compiling Cython source /tmp/pip-install-m6bze4df/pandas_9acde4f69c3542ff9312ac4b80e89b4d/pandas/_libs/ops.pyx
[36/151] Compiling Cython source /tmp/pip-install-m6bze4df/pandas_9acde4f69c3542ff9312ac4b80e89b4d/pandas/_libs/properties.pyx
[37/151] Compiling Cython source /tmp/pip-install-m6bze4df/pandas_9acde4f69c3542ff9312ac4b80e89b4d/pandas/_libs/join.pyx
[38/151] Compiling Cython source /tmp/pip-install-m6bze4df/pandas_9acde4f69c3542ff9312ac4b80e89b4d/pandas/_libs/byteswap.pyx
[39/151] Compiling Cython source /tmp/pip-install-m6bze4df/pandas_9acde4f69c3542ff9312ac4b80e89b4d/pandas/_libs/algos.pyx
[40/151] Compiling Cython source /tmp/pip-install-m6bze4df/pandas_9acde4f69c3542ff9312ac4b80e89b4d/pandas/_libs/hashtable.pyx
[41/151] Compiling Cython source /tmp/pip-install-m6bze4df/pandas_9acde4f69c3542ff9312ac4b80e89b4d/pandas/_libs/sas.pyx
[42/151] Compiling Cython source /tmp/pip-install-m6bze4df/pandas_9acde4f69c3542ff9312ac4b80e89b4d/pandas/_libs/testing.pyx
[43/151] Compiling Cython source /tmp/pip-install-m6bze4df/pandas_9acde4f69c3542ff9312ac4b80e89b4d/pandas/_libs/groupby.pyx
[44/151] Compiling C object pandas/_libs/tslibs/base.cpython-311.so.p/meson-generated_pandas__libs_tslibs_base.pyx.c.o
[45/151] Compiling Cython source /tmp/pip-install-m6bze4df/pandas_9acde4f69c3542ff9312ac4b80e89b4d/pandas/_libs/window/indexers.pyx
[46/151] Compiling Cython source /tmp/pip-install-m6bze4df/pandas_9acde4f69c3542ff9312ac4b80e89b4d/pandas/_libs/reshape.pyx
[47/151] Compiling Cython source /tmp/pip-install-m6bze4df/pandas_9acde4f69c3542ff9312ac4b80e89b4d/pandas/_libs/tslib.pyx
[48/151] Compiling Cython source /tmp/pip-install-m6bze4df/pandas_9acde4f69c3542ff9312ac4b80e89b4d/pandas/_libs/window/aggregations.pyx
[49/151] Compiling C object pandas/_libs/tslibs/ccalendar.cpython-311.so.p/meson-generated_pandas__libs_tslibs_ccalendar.pyx.c.o
[50/151] Compiling Cython source /tmp/pip-install-m6bze4df/pandas_9acde4f69c3542ff9312ac4b80e89b4d/pandas/_libs/sparse.pyx
[51/151] Compiling Cython source /tmp/pip-install-m6bze4df/pandas_9acde4f69c3542ff9312ac4b80e89b4d/pandas/_libs/writers.pyx
[52/151] Compiling C object pandas/_libs/tslibs/parsing.cpython-311.so.p/.._src_parser_tokenizer.c.o
[53/151] Compiling C object pandas/_libs/tslibs/np_datetime.cpython-311.so.p/meson-generated_pandas__libs_tslibs_np_datetime.pyx.c.o
[54/151] Compiling C object pandas/_libs/tslibs/dtypes.cpython-311.so.p/meson-generated_pandas__libs_tslibs_dtypes.pyx.c.o
[55/151] Compiling C object pandas/_libs/tslibs/nattype.cpython-311.so.p/meson-generated_pandas__libs_tslibs_nattype.pyx.c.o
[56/151] Compiling C object pandas/_libs/tslibs/conversion.cpython-311.so.p/meson-generated_pandas__libs_tslibs_conversion.pyx.c.o
pandas/_libs/tslibs/conversion.cpython-311.so.p/pandas/_libs/tslibs/conversion.pyx.c: In function '__pyx_pf_6pandas_5_libs_6tslibs_10conversion_cast_from_unit_vectorized.constprop':
pandas/_libs/tslibs/conversion.cpython-311.so.p/pandas/_libs/tslibs/conversion.pyx.c:3064:79: warning: '__pyx_v_i' may be used uninitialized in this function [-Wmaybe-uninitialized]
3064 | __Pyx_GetItemInt_Fast(o, (Py_ssize_t)i, is_list, wraparound, boundscheck) :\
| ^~
3065 | (is_list ? (PyErr_SetString(PyExc_IndexError, "list index out of range"), (PyObject*)NULL) :\
|
pandas/_libs/tslibs/conversion.cpython-311.so.p/pandas/_libs/tslibs/conversion.pyx.c:23754:14: note: '__pyx_v_i' was declared here
23754 | Py_ssize_t __pyx_v_i;
| ^~~~~~~~~
[57/151] Compiling C object pandas/_libs/tslibs/fields.cpython-311.so.p/meson-generated_pandas__libs_tslibs_fields.pyx.c.o
[58/151] Compiling C object pandas/_libs/tslibs/timezones.cpython-311.so.p/meson-generated_pandas__libs_tslibs_timezones.pyx.c.o
[59/151] Compiling C object pandas/_libs/tslibs/strptime.cpython-311.so.p/meson-generated_pandas__libs_tslibs_strptime.pyx.c.o
[60/151] Compiling C object pandas/_libs/tslibs/vectorized.cpython-311.so.p/meson-generated_pandas__libs_tslibs_vectorized.pyx.c.o
[61/151] Compiling C object pandas/_libs/tslibs/period.cpython-311.so.p/meson-generated_pandas__libs_tslibs_period.pyx.c.o
[62/151] Compiling C object pandas/_libs/arrays.cpython-311.so.p/meson-generated_pandas__libs_arrays.pyx.c.o
[63/151] Compiling C object pandas/_libs/tslibs/tzconversion.cpython-311.so.p/meson-generated_pandas__libs_tslibs_tzconversion.pyx.c.o
[64/151] Compiling C object pandas/_libs/tslibs/parsing.cpython-311.so.p/meson-generated_pandas__libs_tslibs_parsing.pyx.c.o
[65/151] Compiling C object pandas/_libs/indexing.cpython-311.so.p/meson-generated_pandas__libs_indexing.pyx.c.o
[66/151] Compiling C object pandas/_libs/tslibs/timestamps.cpython-311.so.p/meson-generated_pandas__libs_tslibs_timestamps.pyx.c.o
[67/151] Compiling C object pandas/_libs/tslibs/timedeltas.cpython-311.so.p/meson-generated_pandas__libs_tslibs_timedeltas.pyx.c.o
[68/151] Compiling C object pandas/_libs/hashing.cpython-311.so.p/meson-generated_pandas__libs_hashing.pyx.c.o
[69/151] Compiling C object pandas/_libs/lib.cpython-311.so.p/src_parser_tokenizer.c.o
[70/151] Compiling C object pandas/_libs/internals.cpython-311.so.p/meson-generated_pandas__libs_internals.pyx.c.o
[71/151] Compiling C object pandas/_libs/pandas_datetime.cpython-311.so.p/src_vendored_numpy_datetime_np_datetime.c.o
[72/151] Compiling C object pandas/_libs/pandas_datetime.cpython-311.so.p/src_vendored_numpy_datetime_np_datetime_strings.c.o
[73/151] Compiling C object pandas/_libs/pandas_datetime.cpython-311.so.p/src_datetime_date_conversions.c.o
[74/151] Compiling C object pandas/_libs/pandas_datetime.cpython-311.so.p/src_datetime_pd_datetime.c.o
[75/151] Compiling C object pandas/_libs/pandas_parser.cpython-311.so.p/src_parser_tokenizer.c.o
[76/151] Compiling C object pandas/_libs/pandas_parser.cpython-311.so.p/src_parser_io.c.o
[77/151] Compiling C object pandas/_libs/pandas_parser.cpython-311.so.p/src_parser_pd_parser.c.o
[78/151] Compiling C object pandas/_libs/missing.cpython-311.so.p/meson-generated_pandas__libs_missing.pyx.c.o
[79/151] Compiling C object pandas/_libs/parsers.cpython-311.so.p/src_parser_tokenizer.c.o
[80/151] Compiling C object pandas/_libs/parsers.cpython-311.so.p/src_parser_io.c.o
[81/151] Compiling C object pandas/_libs/json.cpython-311.so.p/src_vendored_ujson_python_ujson.c.o
[82/151] Compiling C object pandas/_libs/tslibs/offsets.cpython-311.so.p/meson-generated_pandas__libs_tslibs_offsets.pyx.c.o
[83/151] Compiling C object pandas/_libs/json.cpython-311.so.p/src_vendored_ujson_python_JSONtoObj.c.o
FAILED: pandas/_libs/json.cpython-311.so.p/src_vendored_ujson_python_JSONtoObj.c.o
gcc -Ipandas/_libs/json.cpython-311.so.p -Ipandas/_libs -I../pandas/_libs -I../../../../home/USERNAME/.local/lib/python3.11/site-packages/numpy/core/include -I../pandas/_libs/include -I/opt/freeware/include/python3.11 -fvisibility=hidden -fdiagnostics-color=always -DNDEBUG -D_FILE_OFFSET_BITS=64 -Wall -Winvalid-pch -Wextra -std=c11 -O3 -DNPY_NO_DEPRECATED_API=0 -DNPY_TARGET_VERSION=NPY_1_21_API_VERSION -fPIC -MD -MQ pandas/_libs/json.cpython-311.so.p/src_vendored_ujson_python_JSONtoObj.c.o -MF pandas/_libs/json.cpython-311.so.p/src_vendored_ujson_python_JSONtoObj.c.o.d -o pandas/_libs/json.cpython-311.so.p/src_vendored_ujson_python_JSONtoObj.c.o -c ../pandas/_libs/src/vendored/ujson/python/JSONtoObj.c
In file included from /opt/freeware/lib/gcc/powerpc-ibm-aix7.3.0.0/10/include-fixed/string.h:52,
from ../pandas/_libs/include/pandas/portable.h:12,
from ../pandas/_libs/include/pandas/vendored/ujson/lib/ultrajson.h:55,
from ../pandas/_libs/src/vendored/ujson/python/JSONtoObj.c:41:
/opt/freeware/lib/gcc/powerpc-ibm-aix7.3.0.0/10/include-fixed/unistd.h:207:16: error: conflicting types for 'lseek64'
207 | extern off64_t _NOTHROW(lseek64, (int, off64_t, int));
| ^~~~~~~~
/opt/freeware/lib/gcc/powerpc-ibm-aix7.3.0.0/10/include-fixed/unistd.h:205:23: note: previous declaration of 'lseek64' was here
205 | extern off_t _NOTHROW(lseek, (int, off_t, int));
| ^~~~~
In file included from /opt/freeware/lib/gcc/powerpc-ibm-aix7.3.0.0/10/include-fixed/unistd.h:864,
from /opt/freeware/include/python3.11/Python.h:29,
from ../pandas/_libs/src/vendored/ujson/python/JSONtoObj.c:43:
/usr/include/sys/lockf.h:64:13: error: conflicting types for 'lockf64'
64 | extern int lockf64 (int, int, off64_t);
| ^~~~~~~
/usr/include/sys/lockf.h:62:13: note: previous declaration of 'lockf64' was here
62 | extern int lockf (int, int, off_t);
| ^~~~~
In file included from /opt/freeware/lib/gcc/powerpc-ibm-aix7.3.0.0/10/include-fixed/string.h:52,
from ../pandas/_libs/include/pandas/portable.h:12,
from ../pandas/_libs/include/pandas/vendored/ujson/lib/ultrajson.h:55,
from ../pandas/_libs/src/vendored/ujson/python/JSONtoObj.c:41:
/opt/freeware/lib/gcc/powerpc-ibm-aix7.3.0.0/10/include-fixed/unistd.h:937:14: error: conflicting types for 'ftruncate64'
937 | extern int _NOTHROW(ftruncate64, (int, off64_t));
| ^~~~~~~~
/opt/freeware/lib/gcc/powerpc-ibm-aix7.3.0.0/10/include-fixed/unistd.h:935:23: note: previous declaration of 'ftruncate64' was here
935 | extern int _NOTHROW(ftruncate, (int, off_t));
| ^~~~~~~~~
/opt/freeware/lib/gcc/powerpc-ibm-aix7.3.0.0/10/include-fixed/unistd.h:994:14: error: conflicting types for 'truncate64'
994 | extern int _NOTHROW(truncate64, (const char *, off64_t));
| ^~~~~~~~
/opt/freeware/lib/gcc/powerpc-ibm-aix7.3.0.0/10/include-fixed/unistd.h:992:23: note: previous declaration of 'truncate64' was here
992 | extern int _NOTHROW(truncate, (const char *, off_t));
| ^~~~~~~~
In file included from /opt/freeware/include/python3.11/Python.h:29,
from ../pandas/_libs/src/vendored/ujson/python/JSONtoObj.c:43:
/opt/freeware/lib/gcc/powerpc-ibm-aix7.3.0.0/10/include-fixed/unistd.h:1013:18: error: conflicting types for 'pread64'
1013 | extern ssize_t pread64(int, void *, size_t, off64_t);
| ^~~~~~~
/opt/freeware/lib/gcc/powerpc-ibm-aix7.3.0.0/10/include-fixed/unistd.h:1010:18: note: previous declaration of 'pread64' was here
1010 | extern ssize_t pread(int, void *, size_t, off_t);
| ^~~~~
/opt/freeware/lib/gcc/powerpc-ibm-aix7.3.0.0/10/include-fixed/unistd.h:1014:18: error: conflicting types for 'pwrite64'
1014 | extern ssize_t pwrite64(int, const void *, size_t, off64_t);
| ^~~~~~~~
/opt/freeware/lib/gcc/powerpc-ibm-aix7.3.0.0/10/include-fixed/unistd.h:1011:18: note: previous declaration of 'pwrite64' was here
1011 | extern ssize_t pwrite(int, const void *, size_t, off_t);
| ^~~~~~
/opt/freeware/lib/gcc/powerpc-ibm-aix7.3.0.0/10/include-fixed/unistd.h:1110:17: error: conflicting types for 'fclear64'
1110 | extern off64_t fclear64(int, off64_t);
| ^~~~~~~~
/opt/freeware/lib/gcc/powerpc-ibm-aix7.3.0.0/10/include-fixed/unistd.h:1107:15: note: previous declaration of 'fclear64' was here
1107 | extern off_t fclear(int, off_t);
| ^~~~~~
/opt/freeware/lib/gcc/powerpc-ibm-aix7.3.0.0/10/include-fixed/unistd.h:1111:13: error: conflicting types for 'fsync_range64'
1111 | extern int fsync_range64(int, int, off64_t, off64_t);
| ^~~~~~~~~~~~~
/opt/freeware/lib/gcc/powerpc-ibm-aix7.3.0.0/10/include-fixed/unistd.h:1108:13: note: previous declaration of 'fsync_range64' was here
1108 | extern int fsync_range(int, int, off_t, off_t);
| ^~~~~~~~~~~
[84/151] Compiling C object pandas/_libs/json.cpython-311.so.p/src_vendored_ujson_python_objToJSON.c.o
[85/151] Compiling C object pandas/_libs/index.cpython-311.so.p/meson-generated_pandas__libs_index.pyx.c.o
[86/151] Compiling C object pandas/_libs/parsers.cpython-311.so.p/meson-generated_pandas__libs_parsers.pyx.c.o
[87/151] Compiling C object pandas/_libs/lib.cpython-311.so.p/meson-generated_pandas__libs_lib.pyx.c.o
pandas/_libs/lib.cpython-311.so.p/pandas/_libs/lib.pyx.c:91529:12: warning: '__pyx_memview_set_object' defined but not used [-Wunused-function]
91529 | static int __pyx_memview_set_object(const char *itemp, PyObject *obj) {
| ^~~~~~~~~~~~~~~~~~~~~~~~
pandas/_libs/lib.cpython-311.so.p/pandas/_libs/lib.pyx.c:91524:20: warning: '__pyx_memview_get_object' defined but not used [-Wunused-function]
91524 | static PyObject *__pyx_memview_get_object(const char *itemp) {
| ^~~~~~~~~~~~~~~~~~~~~~~~
[88/151] Compiling C object pandas/_libs/interval.cpython-311.so.p/meson-generated_pandas__libs_interval.pyx.c.o
[89/151] Compiling C object pandas/_libs/join.cpython-311.so.p/meson-generated_pandas__libs_join.pyx.c.o
[90/151] Compiling C object pandas/_libs/algos.cpython-311.so.p/meson-generated_pandas__libs_algos.pyx.c.o
[91/151] Compiling C object pandas/_libs/groupby.cpython-311.so.p/meson-generated_pandas__libs_groupby.pyx.c.o
[92/151] Compiling C object pandas/_libs/hashtable.cpython-311.so.p/meson-generated_pandas__libs_hashtable.pyx.c.o
pandas/_libs/hashtable.cpython-311.so.p/pandas/_libs/hashtable.pyx.c: In function '__pyx_f_6pandas_5_libs_9hashtable_value_count_complex128':
pandas/_libs/hashtable.cpython-311.so.p/pandas/_libs/hashtable.pyx.c:134615:33: warning: comparison of integer expressions of different signedness: 'Py_ssize_t' {aka 'long int'} and 'khuint_t' {aka 'unsigned int'} [-Wsign-compare]
134615 | for (__pyx_t_1 = 0; __pyx_t_1 < __pyx_t_15; __pyx_t_1+=1) {
| ^
pandas/_libs/hashtable.cpython-311.so.p/pandas/_libs/hashtable.pyx.c: In function '__pyx_f_6pandas_5_libs_9hashtable_value_count_complex64':
pandas/_libs/hashtable.cpython-311.so.p/pandas/_libs/hashtable.pyx.c:136475:33: warning: comparison of integer expressions of different signedness: 'Py_ssize_t' {aka 'long int'} and 'khuint_t' {aka 'unsigned int'} [-Wsign-compare]
136475 | for (__pyx_t_1 = 0; __pyx_t_1 < __pyx_t_15; __pyx_t_1+=1) {
| ^
pandas/_libs/hashtable.cpython-311.so.p/pandas/_libs/hashtable.pyx.c: In function '__pyx_f_6pandas_5_libs_9hashtable_value_count_float64':
pandas/_libs/hashtable.cpython-311.so.p/pandas/_libs/hashtable.pyx.c:138335:33: warning: comparison of integer expressions of different signedness: 'Py_ssize_t' {aka 'long int'} and 'khuint_t' {aka 'unsigned int'} [-Wsign-compare]
138335 | for (__pyx_t_1 = 0; __pyx_t_1 < __pyx_t_15; __pyx_t_1+=1) {
| ^
pandas/_libs/hashtable.cpython-311.so.p/pandas/_libs/hashtable.pyx.c: In function '__pyx_f_6pandas_5_libs_9hashtable_value_count_float32':
pandas/_libs/hashtable.cpython-311.so.p/pandas/_libs/hashtable.pyx.c:140195:33: warning: comparison of integer expressions of different signedness: 'Py_ssize_t' {aka 'long int'} and 'khuint_t' {aka 'unsigned int'} [-Wsign-compare]
140195 | for (__pyx_t_1 = 0; __pyx_t_1 < __pyx_t_15; __pyx_t_1+=1) {
| ^
pandas/_libs/hashtable.cpython-311.so.p/pandas/_libs/hashtable.pyx.c: In function '__pyx_f_6pandas_5_libs_9hashtable_value_count_uint64':
pandas/_libs/hashtable.cpython-311.so.p/pandas/_libs/hashtable.pyx.c:142055:33: warning: comparison of integer expressions of different signedness: 'Py_ssize_t' {aka 'long int'} and 'khuint_t' {aka 'unsigned int'} [-Wsign-compare]
142055 | for (__pyx_t_1 = 0; __pyx_t_1 < __pyx_t_15; __pyx_t_1+=1) {
| ^
pandas/_libs/hashtable.cpython-311.so.p/pandas/_libs/hashtable.pyx.c: In function '__pyx_f_6pandas_5_libs_9hashtable_value_count_uint32':
pandas/_libs/hashtable.cpython-311.so.p/pandas/_libs/hashtable.pyx.c:143915:33: warning: comparison of integer expressions of different signedness: 'Py_ssize_t' {aka 'long int'} and 'khuint_t' {aka 'unsigned int'} [-Wsign-compare]
143915 | for (__pyx_t_1 = 0; __pyx_t_1 < __pyx_t_15; __pyx_t_1+=1) {
| ^
pandas/_libs/hashtable.cpython-311.so.p/pandas/_libs/hashtable.pyx.c: In function '__pyx_f_6pandas_5_libs_9hashtable_value_count_uint16':
pandas/_libs/hashtable.cpython-311.so.p/pandas/_libs/hashtable.pyx.c:145775:33: warning: comparison of integer expressions of different signedness: 'Py_ssize_t' {aka 'long int'} and 'khuint_t' {aka 'unsigned int'} [-Wsign-compare]
145775 | for (__pyx_t_1 = 0; __pyx_t_1 < __pyx_t_15; __pyx_t_1+=1) {
| ^
pandas/_libs/hashtable.cpython-311.so.p/pandas/_libs/hashtable.pyx.c: In function '__pyx_f_6pandas_5_libs_9hashtable_value_count_uint8':
pandas/_libs/hashtable.cpython-311.so.p/pandas/_libs/hashtable.pyx.c:147635:33: warning: comparison of integer expressions of different signedness: 'Py_ssize_t' {aka 'long int'} and 'khuint_t' {aka 'unsigned int'} [-Wsign-compare]
147635 | for (__pyx_t_1 = 0; __pyx_t_1 < __pyx_t_15; __pyx_t_1+=1) {
| ^
pandas/_libs/hashtable.cpython-311.so.p/pandas/_libs/hashtable.pyx.c: In function '__pyx_f_6pandas_5_libs_9hashtable_value_count_object':
pandas/_libs/hashtable.cpython-311.so.p/pandas/_libs/hashtable.pyx.c:149433:33: warning: comparison of integer expressions of different signedness: 'Py_ssize_t' {aka 'long int'} and 'khuint_t' {aka 'unsigned int'} [-Wsign-compare]
149433 | for (__pyx_t_1 = 0; __pyx_t_1 < __pyx_t_16; __pyx_t_1+=1) {
| ^
pandas/_libs/hashtable.cpython-311.so.p/pandas/_libs/hashtable.pyx.c: In function '__pyx_f_6pandas_5_libs_9hashtable_value_count_int64':
pandas/_libs/hashtable.cpython-311.so.p/pandas/_libs/hashtable.pyx.c:151193:33: warning: comparison of integer expressions of different signedness: 'Py_ssize_t' {aka 'long int'} and 'khuint_t' {aka 'unsigned int'} [-Wsign-compare]
151193 | for (__pyx_t_1 = 0; __pyx_t_1 < __pyx_t_15; __pyx_t_1+=1) {
| ^
pandas/_libs/hashtable.cpython-311.so.p/pandas/_libs/hashtable.pyx.c: In function '__pyx_f_6pandas_5_libs_9hashtable_value_count_int32':
pandas/_libs/hashtable.cpython-311.so.p/pandas/_libs/hashtable.pyx.c:153053:33: warning: comparison of integer expressions of different signedness: 'Py_ssize_t' {aka 'long int'} and 'khuint_t' {aka 'unsigned int'} [-Wsign-compare]
153053 | for (__pyx_t_1 = 0; __pyx_t_1 < __pyx_t_15; __pyx_t_1+=1) {
| ^
pandas/_libs/hashtable.cpython-311.so.p/pandas/_libs/hashtable.pyx.c: In function '__pyx_f_6pandas_5_libs_9hashtable_value_count_int16':
pandas/_libs/hashtable.cpython-311.so.p/pandas/_libs/hashtable.pyx.c:154913:33: warning: comparison of integer expressions of different signedness: 'Py_ssize_t' {aka 'long int'} and 'khuint_t' {aka 'unsigned int'} [-Wsign-compare]
154913 | for (__pyx_t_1 = 0; __pyx_t_1 < __pyx_t_15; __pyx_t_1+=1) {
| ^
pandas/_libs/hashtable.cpython-311.so.p/pandas/_libs/hashtable.pyx.c: In function '__pyx_f_6pandas_5_libs_9hashtable_value_count_int8':
pandas/_libs/hashtable.cpython-311.so.p/pandas/_libs/hashtable.pyx.c:156773:33: warning: comparison of integer expressions of different signedness: 'Py_ssize_t' {aka 'long int'} and 'khuint_t' {aka 'unsigned int'} [-Wsign-compare]
156773 | for (__pyx_t_1 = 0; __pyx_t_1 < __pyx_t_15; __pyx_t_1+=1) {
| ^
pandas/_libs/hashtable.cpython-311.so.p/pandas/_libs/hashtable.pyx.c: In function '__pyx_fuse_4__pyx_f_6pandas_5_libs_9hashtable_value_count.constprop':
pandas/_libs/hashtable.cpython-311.so.p/pandas/_libs/hashtable.pyx.c:147685:6: warning: '__pyx_v_val' may be used uninitialized in this function [-Wmaybe-uninitialized]
147685 | ((struct __pyx_vtabstruct_6pandas_5_libs_9hashtable_UInt8Vector *)__pyx_v_result_keys->__pyx_vtab)->append(__pyx_v_result_keys, __pyx_v_val);
| ~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
pandas/_libs/hashtable.cpython-311.so.p/pandas/_libs/hashtable.pyx.c:147216:26: note: '__pyx_v_val' was declared here
147216 | __pyx_t_5numpy_uint8_t __pyx_v_val;
| ^~~~~~~~~~~
pandas/_libs/hashtable.cpython-311.so.p/pandas/_libs/hashtable.pyx.c: In function '__pyx_fuse_0__pyx_f_6pandas_5_libs_9hashtable_value_count.constprop':
pandas/_libs/hashtable.cpython-311.so.p/pandas/_libs/hashtable.pyx.c:156823:6: warning: '__pyx_v_val' may be used uninitialized in this function [-Wmaybe-uninitialized]
156823 | ((struct __pyx_vtabstruct_6pandas_5_libs_9hashtable_Int8Vector *)__pyx_v_result_keys->__pyx_vtab)->append(__pyx_v_result_keys, __pyx_v_val);
| ~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
pandas/_libs/hashtable.cpython-311.so.p/pandas/_libs/hashtable.pyx.c:156354:25: note: '__pyx_v_val' was declared here
156354 | __pyx_t_5numpy_int8_t __pyx_v_val;
| ^~~~~~~~~~~
pandas/_libs/hashtable.cpython-311.so.p/pandas/_libs/hashtable.pyx.c: In function '__pyx_fuse_6__pyx_f_6pandas_5_libs_9hashtable_value_count.constprop':
pandas/_libs/hashtable.cpython-311.so.p/pandas/_libs/hashtable.pyx.c:143965:6: warning: '__pyx_v_val' may be used uninitialized in this function [-Wmaybe-uninitialized]
143965 | ((struct __pyx_vtabstruct_6pandas_5_libs_9hashtable_UInt32Vector *)__pyx_v_result_keys->__pyx_vtab)->append(__pyx_v_result_keys, __pyx_v_val);
| ~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
pandas/_libs/hashtable.cpython-311.so.p/pandas/_libs/hashtable.pyx.c:143496:27: note: '__pyx_v_val' was declared here
143496 | __pyx_t_5numpy_uint32_t __pyx_v_val;
| ^~~~~~~~~~~
pandas/_libs/hashtable.cpython-311.so.p/pandas/_libs/hashtable.pyx.c: In function '__pyx_fuse_1__pyx_f_6pandas_5_libs_9hashtable_value_count.constprop':
pandas/_libs/hashtable.cpython-311.so.p/pandas/_libs/hashtable.pyx.c:154963:6: warning: '__pyx_v_val' may be used uninitialized in this function [-Wmaybe-uninitialized]
154963 | ((struct __pyx_vtabstruct_6pandas_5_libs_9hashtable_Int16Vector *)__pyx_v_result_keys->__pyx_vtab)->append(__pyx_v_result_keys, __pyx_v_val);
| ~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
pandas/_libs/hashtable.cpython-311.so.p/pandas/_libs/hashtable.pyx.c:154494:26: note: '__pyx_v_val' was declared here
154494 | __pyx_t_5numpy_int16_t __pyx_v_val;
| ^~~~~~~~~~~
pandas/_libs/hashtable.cpython-311.so.p/pandas/_libs/hashtable.pyx.c: In function '__pyx_fuse_5__pyx_f_6pandas_5_libs_9hashtable_value_count.constprop':
pandas/_libs/hashtable.cpython-311.so.p/pandas/_libs/hashtable.pyx.c:145825:6: warning: '__pyx_v_val' may be used uninitialized in this function [-Wmaybe-uninitialized]
145825 | ((struct __pyx_vtabstruct_6pandas_5_libs_9hashtable_UInt16Vector *)__pyx_v_result_keys->__pyx_vtab)->append(__pyx_v_result_keys, __pyx_v_val);
| ~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
pandas/_libs/hashtable.cpython-311.so.p/pandas/_libs/hashtable.pyx.c:145356:27: note: '__pyx_v_val' was declared here
145356 | __pyx_t_5numpy_uint16_t __pyx_v_val;
| ^~~~~~~~~~~
pandas/_libs/hashtable.cpython-311.so.p/pandas/_libs/hashtable.pyx.c: In function '__pyx_fuse_8__pyx_f_6pandas_5_libs_9hashtable_value_count.constprop':
pandas/_libs/hashtable.cpython-311.so.p/pandas/_libs/hashtable.pyx.c:140245:6: warning: '__pyx_v_val' may be used uninitialized in this function [-Wmaybe-uninitialized]
140245 | ((struct __pyx_vtabstruct_6pandas_5_libs_9hashtable_Float32Vector *)__pyx_v_result_keys->__pyx_vtab)->append(__pyx_v_result_keys, __pyx_v_val);
| ~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
pandas/_libs/hashtable.cpython-311.so.p/pandas/_libs/hashtable.pyx.c:139776:28: note: '__pyx_v_val' was declared here
139776 | __pyx_t_5numpy_float32_t __pyx_v_val;
| ^~~~~~~~~~~
pandas/_libs/hashtable.cpython-311.so.p/pandas/_libs/hashtable.pyx.c: In function '__pyx_fuse_7__pyx_f_6pandas_5_libs_9hashtable_value_count.constprop':
pandas/_libs/hashtable.cpython-311.so.p/pandas/_libs/hashtable.pyx.c:142105:6: warning: '__pyx_v_val' may be used uninitialized in this function [-Wmaybe-uninitialized]
142105 | ((struct __pyx_vtabstruct_6pandas_5_libs_9hashtable_UInt64Vector *)__pyx_v_result_keys->__pyx_vtab)->append(__pyx_v_result_keys, __pyx_v_val);
| ~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
pandas/_libs/hashtable.cpython-311.so.p/pandas/_libs/hashtable.pyx.c:141636:27: note: '__pyx_v_val' was declared here
141636 | __pyx_t_5numpy_uint64_t __pyx_v_val;
| ^~~~~~~~~~~
pandas/_libs/hashtable.cpython-311.so.p/pandas/_libs/hashtable.pyx.c: In function '__pyx_fuse_9__pyx_f_6pandas_5_libs_9hashtable_value_count.constprop':
pandas/_libs/hashtable.cpython-311.so.p/pandas/_libs/hashtable.pyx.c:138385:6: warning: '__pyx_v_val' may be used uninitialized in this function [-Wmaybe-uninitialized]
138385 | ((struct __pyx_vtabstruct_6pandas_5_libs_9hashtable_Float64Vector *)__pyx_v_result_keys->__pyx_vtab)->append(__pyx_v_result_keys, __pyx_v_val);
| ~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
pandas/_libs/hashtable.cpython-311.so.p/pandas/_libs/hashtable.pyx.c:137916:28: note: '__pyx_v_val' was declared here
137916 | __pyx_t_5numpy_float64_t __pyx_v_val;
| ^~~~~~~~~~~
pandas/_libs/hashtable.cpython-311.so.p/pandas/_libs/hashtable.pyx.c: In function '__pyx_fuse_2__pyx_f_6pandas_5_libs_9hashtable_value_count.constprop':
pandas/_libs/hashtable.cpython-311.so.p/pandas/_libs/hashtable.pyx.c:153103:6: warning: '__pyx_v_val' may be used uninitialized in this function [-Wmaybe-uninitialized]
153103 | ((struct __pyx_vtabstruct_6pandas_5_libs_9hashtable_Int32Vector *)__pyx_v_result_keys->__pyx_vtab)->append(__pyx_v_result_keys, __pyx_v_val);
| ~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
pandas/_libs/hashtable.cpython-311.so.p/pandas/_libs/hashtable.pyx.c:152634:26: note: '__pyx_v_val' was declared here
152634 | __pyx_t_5numpy_int32_t __pyx_v_val;
| ^~~~~~~~~~~
pandas/_libs/hashtable.cpython-311.so.p/pandas/_libs/hashtable.pyx.c: In function '__pyx_fuse_3__pyx_f_6pandas_5_libs_9hashtable_value_count.constprop':
pandas/_libs/hashtable.cpython-311.so.p/pandas/_libs/hashtable.pyx.c:151243:6: warning: '__pyx_v_val' may be used uninitialized in this function [-Wmaybe-uninitialized]
151243 | ((struct __pyx_vtabstruct_6pandas_5_libs_9hashtable_Int64Vector *)__pyx_v_result_keys->__pyx_vtab)->append(__pyx_v_result_keys, __pyx_v_val);
| ~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
pandas/_libs/hashtable.cpython-311.so.p/pandas/_libs/hashtable.pyx.c:150774:26: note: '__pyx_v_val' was declared here
150774 | __pyx_t_5numpy_int64_t __pyx_v_val;
| ^~~~~~~~~~~
ninja: build stopped: subcommand failed.
error: subprocess-exited-with-error
Preparing metadata (pyproject.toml) did not run successfully.
exit code: 1
See above for output.
note: This error originates from a subprocess, and is likely not a problem with pip.
full command: /opt/freeware/bin/python3.11 /opt/freeware/lib/python3.11/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py prepare_metadata_for_build_wheel /tmp/tmpwrjwf5p1
cwd: /tmp/pip-install-m6bze4df/pandas_9acde4f69c3542ff9312ac4b80e89b4d
Preparing metadata (pyproject.toml) ... error
error: metadata-generation-failed
Encountered error while generating package metadata.
See above for output.
note: This is an issue with the package mentioned above, not pip.
hint: See above for details.
Exception information:
Traceback (most recent call last):
File "/opt/freeware/lib/python3.11/site-packages/pip/_internal/operations/build/metadata.py", line 35, in generate_metadata
distinfo_dir = backend.prepare_metadata_for_build_wheel(metadata_dir)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/freeware/lib/python3.11/site-packages/pip/_internal/utils/misc.py", line 772, in prepare_metadata_for_build_wheel
return super().prepare_metadata_for_build_wheel(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/freeware/lib/python3.11/site-packages/pip/_vendor/pyproject_hooks/_impl.py", line 186, in prepare_metadata_for_build_wheel
return self._call_hook('prepare_metadata_for_build_wheel', {
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/freeware/lib/python3.11/site-packages/pip/_vendor/pyproject_hooks/_impl.py", line 311, in _call_hook
self._subprocess_runner(
File "/opt/freeware/lib/python3.11/site-packages/pip/_internal/utils/subprocess.py", line 252, in runner
call_subprocess(
File "/opt/freeware/lib/python3.11/site-packages/pip/_internal/utils/subprocess.py", line 224, in call_subprocess
raise error
pip._internal.exceptions.InstallationSubprocessError: Preparing metadata (pyproject.toml) exited with 1
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/opt/freeware/lib/python3.11/site-packages/pip/_internal/cli/base_command.py", line 180, in exc_logging_wrapper
status = run_func(*args)
^^^^^^^^^^^^^^^
File "/opt/freeware/lib/python3.11/site-packages/pip/_internal/cli/req_command.py", line 245, in wrapper
return func(self, options, args)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/freeware/lib/python3.11/site-packages/pip/_internal/commands/install.py", line 377, in run
requirement_set = resolver.resolve(
^^^^^^^^^^^^^^^^^
File "/opt/freeware/lib/python3.11/site-packages/pip/_internal/resolution/resolvelib/resolver.py", line 95, in resolve
result = self._result = resolver.resolve(
^^^^^^^^^^^^^^^^^
File "/opt/freeware/lib/python3.11/site-packages/pip/_vendor/resolvelib/resolvers.py", line 546, in resolve
state = resolution.resolve(requirements, max_rounds=max_rounds)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/freeware/lib/python3.11/site-packages/pip/_vendor/resolvelib/resolvers.py", line 397, in resolve
self._add_to_criteria(self.state.criteria, r, parent=None)
File "/opt/freeware/lib/python3.11/site-packages/pip/_vendor/resolvelib/resolvers.py", line 173, in _add_to_criteria
if not criterion.candidates:
File "/opt/freeware/lib/python3.11/site-packages/pip/_vendor/resolvelib/structs.py", line 156, in __bool__
return bool(self._sequence)
^^^^^^^^^^^^^^^^^^^^
File "/opt/freeware/lib/python3.11/site-packages/pip/_internal/resolution/resolvelib/found_candidates.py", line 155, in __bool__
return any(self)
^^^^^^^^^
File "/opt/freeware/lib/python3.11/site-packages/pip/_internal/resolution/resolvelib/found_candidates.py", line 143, in <genexpr>
return (c for c in iterator if id(c) not in self._incompatible_ids)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/freeware/lib/python3.11/site-packages/pip/_internal/resolution/resolvelib/found_candidates.py", line 47, in _iter_built
candidate = func()
^^^^^^
File "/opt/freeware/lib/python3.11/site-packages/pip/_internal/resolution/resolvelib/factory.py", line 211, in _make_candidate_from_link
self._link_candidate_cache[link] = LinkCandidate(
^^^^^^^^^^^^^^
File "/opt/freeware/lib/python3.11/site-packages/pip/_internal/resolution/resolvelib/candidates.py", line 293, in __init__
super().__init__(
File "/opt/freeware/lib/python3.11/site-packages/pip/_internal/resolution/resolvelib/candidates.py", line 156, in __init__
self.dist = self._prepare()
^^^^^^^^^^^^^^^
File "/opt/freeware/lib/python3.11/site-packages/pip/_internal/resolution/resolvelib/candidates.py", line 225, in _prepare
dist = self._prepare_distribution()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/freeware/lib/python3.11/site-packages/pip/_internal/resolution/resolvelib/candidates.py", line 304, in _prepare_distribution
return preparer.prepare_linked_requirement(self._ireq, parallel_builds=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/freeware/lib/python3.11/site-packages/pip/_internal/operations/prepare.py", line 525, in prepare_linked_requirement
return self._prepare_linked_requirement(req, parallel_builds)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/freeware/lib/python3.11/site-packages/pip/_internal/operations/prepare.py", line 640, in _prepare_linked_requirement
dist = _get_prepared_distribution(
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/freeware/lib/python3.11/site-packages/pip/_internal/operations/prepare.py", line 71, in _get_prepared_distribution
abstract_dist.prepare_distribution_metadata(
File "/opt/freeware/lib/python3.11/site-packages/pip/_internal/distributions/sdist.py", line 67, in prepare_distribution_metadata
self.req.prepare_metadata()
File "/opt/freeware/lib/python3.11/site-packages/pip/_internal/req/req_install.py", line 577, in prepare_metadata
self.metadata_directory = generate_metadata(
^^^^^^^^^^^^^^^^^^
File "/opt/freeware/lib/python3.11/site-packages/pip/_internal/operations/build/metadata.py", line 37, in generate_metadata
raise MetadataGenerationFailed(package_details=details) from error
pip._internal.exceptions.MetadataGenerationFailed: metadata generation failed
</details>
| [
"Build"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Apparently AIX needs some extra care. I did compile it with a patched version by IBM. Steps for reference:\n\nInstall `numpy`\n```\nexport CXX=\"g++ -pthread\"\nexport CXXFLAGS=-maix64\nexport OBJECT_MODE=64\nexport LDFLAGS=\"-maix64 -lm\"\nexport CC=\"gcc -pthread\"\nexport CFLAGS=-maix64\n \npip install --no-cache-dir --ignore-installed --no-binary numpy numpy==1.26.4 -v\n```\n\nInstall `pandas`\n```\nexport CXX=\"g++ -pthread\"\nexport CXXFLAGS=-maix64\nexport OBJECT_MODE=64\nexport CC=\"gcc -pthread\"\nexport CFLAGS=-maix64\nexport LDFLAGS=\"-lm -Wl,-blibpath:/opt/freeware/lib/pthread:/opt/freeware/lib64:/opt/freeware/lib:/usr/lib:/lib\"\n \n# download src.rpm from https://public.dhe.ibm.com/aix/freeSoftware/aixtoolbox/SRPMS/python3-pandas/python3.9-pandas-2.2.3-1.src.rpm\n# place it into /tmp/\nsudo rpm -Uvh /tmp/python3.9-pandas-2.2.3-1.src.rpm\nmkdir ~/build\ncd ~/build\ngunzip -c /opt/freeware/src/packages/SOURCES/pandas-2.2.3.tar.gz | tar xvf -\ncd pandas-2.2.3\npip install . -I --no-deps --no-build-isolation -v\n```"
] |
2,988,653,395 | 61,271 | Add Pandas Cookbook to Book Recommendations | closed | 2025-04-11T13:16:27 | 2025-04-12T16:04:22 | 2025-04-11T15:48:57 | https://github.com/pandas-dev/pandas/pull/61271 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61271 | https://github.com/pandas-dev/pandas/pull/61271 | WillAyd | 9 | The link here is a special link used to track sales of the Pandas Cookbook through the pandas website. NumFOCUS and Packt (the publisher) have agreed that the latter will donate part of the proceeds through this link directly back to NumFOCUS | [
"Docs"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Thanks @WillAyd ",
"@WillAyd @mroeschke @pandas-dev/pandas-core, great to have the book added (and published), but would it make sense to revert this commit? It's still the last one, and it increases the repo size by 1.3Mb for the book cover. We can optimize it like the other book covers, and leave it at around 10Kb with reasonable quality.\r\n\r\nAlso, the image url in the html has the wrong extension (gif), the correct one is jpeg, so the home page shows the image broken right now.\r\n\r\nIt'll surely make things a bit tricky for people who cloned the code since this was merged, but personally I think it's worth it.",
"Ah, sorry about the image size - yes, I can replace it with an optimized one. \r\n\r\n> Also, the image url in the html has the wrong extension (gif), the correct one is jpeg, so the home page shows the image broken right now.\r\n\r\nHmm, so do I need to save the file as a gif in the git repo? The other two books have that extension",
"The extension is irrelevant, it's just the mismatch that's the problem. gif allows smaller color palettes, which can help reduce the image size; that's why the others are gif.\r\n\r\nThe main question here is whether we just fix this and let the 1.3Mb stay in our git history, or we hard undo this in the commit history, avoiding the 1.3Mb but creating a bit of trouble for a few users because of rewriting git history. If people are not ok with this, then better to leave the image as it is and just fix the extension in the html so it renders correctly.",
"I would be fine with a hard undo since its the HEAD of main. I don't foresee that causing too much trouble",
"For people who cloned or pulled the reverted commit git will error, and I'm not sure if it just requires the `-f` flag to get things back to normal, or causes some other trouble. But for me, it's still worth it.",
"Alternative: https://dalibornasevic.com/posts/2-permanently-remove-files-and-folders-from-a-git-repository\r\n\r\nNot sure how intrusive that would be as opposed to just removing the last commit.",
"I would also support hard undoing the last commit",
"Done. I'll open a new PR with the changes here"
] |
2,988,313,708 | 61,270 | BUG: Unexpected behavior change in DataFrame.min(axis=1) with numpy.array elements in pandas 2.2 vs pandas 1.1 | closed | 2025-04-11T10:55:24 | 2025-04-13T16:25:41 | 2025-04-13T16:25:38 | https://github.com/pandas-dev/pandas/issues/61270 | true | null | null | tanjt107 | 8 | ### Pandas version checks
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [x] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
import pandas as pd
import numpy as np
df = pd.DataFrame(
{
"A": [
np.array([1]),
np.array([2]),
np.array([3]),
np.array([4]),
np.array([5]),
],
"B": [10, 20, 30, 40, 50],
}
)
df = df[["A", "B"]].min(axis=1)
print(df)
```
### Issue Description
The behavior of DataFrame.min(axis=1) when a column contains numpy.array elements changed between pandas 1.1 and pandas 2.2. I did not find any mention of this change in the changelog, and it is unclear whether this is a regression or an intentional change.
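A quick way to see where the difference comes from is to inspect the dtypes: a column holding NumPy arrays is stored with `object` dtype, which appears to be why the row-wise min now returns the arrays unchanged instead of coerced floats. A minimal check (hypothetical two-row frame mirroring the example above):

```python
import numpy as np
import pandas as pd

# Hypothetical minimal frame: "A" holds NumPy arrays, "B" holds plain ints.
df = pd.DataFrame(
    {
        "A": [np.array([1]), np.array([2])],
        "B": [10, 20],
    }
)
print(df.dtypes)  # "A" is object, "B" is int64
```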
### Expected Behavior
Output on pandas 1.1
```
0 1
1 2
2 3
3 4
4 5
dtype: float64
```
Output on pandas 2.2
```
0 [1]
1 [2]
2 [3]
3 [4]
4 [5]
dtype: object
```
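One workaround sketch, under the assumption that every cell in "A" holds a single-element NumPy array: unwrap the arrays to scalars before reducing, which restores the pandas 1.1-style result (the `.item()` unwrapping step is ours, not part of the original report):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame(
    {
        "A": [
            np.array([1]),
            np.array([2]),
            np.array([3]),
            np.array([4]),
            np.array([5]),
        ],
        "B": [10, 20, 30, 40, 50],
    }
)

# Unwrap each one-element array to a plain scalar so the column becomes int64,
# then the row-wise min compares numbers instead of objects.
df["A"] = df["A"].apply(lambda a: a.item())
result = df[["A", "B"]].min(axis=1)
print(result.tolist())  # [1, 2, 3, 4, 5]
```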
### Installed Versions
<details>
INSTALLED VERSIONS
------------------
commit : 0691c5cf90477d3503834d983f69350f250a6ff7
python : 3.12.4
python-bits : 64
OS : Darwin
OS-release : 24.3.0
Version : Darwin Kernel Version 24.3.0: Thu Jan 2 20:24:16 PST 2025; root:xnu-11215.81.4~3/RELEASE_ARM64_T6000
machine : arm64
processor : arm
byteorder : little
LC_ALL : None
LANG : en_US.UTF-8
LOCALE : en_US.UTF-8
pandas : 2.2.3
numpy : 2.2.4
pytz : 2025.2
dateutil : 2.9.0.post0
pip : 25.0.1
Cython : None
sphinx : None
IPython : None
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : None
blosc : None
bottleneck : None
dataframe-api-compat : None
fastparquet : None
fsspec : None
html5lib : None
hypothesis : None
gcsfs : None
jinja2 : 3.1.6
lxml.etree : None
matplotlib : None
numba : None
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
psycopg2 : None
pymysql : None
pyarrow : None
pyreadstat : None
pytest : None
python-calamine : None
pyxlsb : None
s3fs : None
scipy : None
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : None
xlsxwriter : None
zstandard : None
tzdata : 2025.2
qtpy : None
pyqt5 : None
</details>
| [
"Bug",
"Nested Data"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | [
"Haven't validated, but this might have changed with #51335",
"I think the current behavior is more logical. However, should a bug be raised instead to avoid confusion?",
"@tanjt107 \n\n> should a bug be raised instead to avoid confusion?\n\nWhat does it mean to \"raise a bug\"?",
"Sorry I meant raising an error in the code itself.",
"Under what conditions would you suggest raising?",
"When comparing different dtypes (e.g., `np.ndarray` vs. `int`/`float64`), an error should be raised to avoid unpredictable or inconsistent behavior.",
"The dtype is not `np.ndarray`, it is `object`. As far as pandas is aware any Python object can be in such a column, including a mix of NumPy arrays, scalars, and other class instances. \n\nI am negative on adding logic to special case a column of all NumPy arrays (we would need to inspect to see that they are indeed all NumPy arrays, which is highly inefficient) and also raising on object comparisions.",
"While the change in behavior was perhaps unintentional and the new behavior is perhaps somewhat surprising, it is correct as far as pandas can tell. I say perhaps surprising because `bool(np.array([1]) < 10)` is True, which is used to compute the `min` between values. However, as far as pandas is concerned the `np.array([1])` is just some Python object and it would not be maintainable to special case certain types of Python objects in an object dtype column.\n\nClosing.\n"
] |
2,987,906,651 | 61,269 | BUG: pandas change in style overrides defaults format for other columns | closed | 2025-04-11T08:11:14 | 2025-04-13T20:21:26 | 2025-04-13T20:21:25 | https://github.com/pandas-dev/pandas/issues/61269 | true | null | null | lcrmorin | 4 | ### Pandas version checks
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [ ] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
import pandas as pd
pd.set_option('display.float_format', '{:.2f}'.format)
pd.DataFrame(15.22345676543234567,columns=[1,2,3,4,5,6],index=['A','Z','R','T'])#.style.format({1:'{:.2%}'})
pd.DataFrame(15.22345676543234567,columns=[1,2,3,4,5,6],index=['A','Z','R','T']).style.format({1:'{:.2%}'})
```
### Issue Description
After setting the default number of decimal places displayed for floats to 2, the first example works, showing 2 decimals for all columns. However, when adding a custom style to only the first column, all other columns are formatted with the default 6 decimal places. This is quite counterproductive, as it means that if we want to set one format we need to define all of them. It would be nice to be able to change only a few formats while the unchanged ones keep the user-specified default.
### Expected Behavior
When specifying a format for a given column, the Styler should use the user-specified default format for the other columns.
### Installed Versions
INSTALLED VERSIONS
------------------
commit : 0691c5cf90477d3503834d983f69350f250a6ff7
python : 3.13.0
python-bits : 64
OS : Windows
OS-release : 11
Version : 10.0.22621
machine : AMD64
processor : Intel64 Family 6 Model 142 Stepping 12, GenuineIntel
byteorder : little
LC_ALL : None
LANG : None
LOCALE : fr_FR.cp1252
pandas : 2.2.3
numpy : 2.2.1
pytz : 2024.2
dateutil : 2.9.0.post0
pip : 24.2
Cython : None
sphinx : None
IPython : 8.31.0
adbc-driver-postgresql: None
...
zstandard : None
tzdata : 2024.2
qtpy : None
pyqt5 : None
| [
"Styler",
"Closing Candidate"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Confirmed this still happens on the latest main. Applying `.style.format()` to just one column causes the others to fall back to full precision. I’m guessing `.style.format()` overrides `display.float_format` entirely, so other columns fall back to default formatting.",
"Please read the **notes** section of https://pandas.pydata.org/docs/reference/api/pandas.io.formats.style.Styler.format.html\n\nThis is behaving as intended and as described.\n\nDisplaying a Dataframe using the Dataframe printer is entirely different to displaying a Styler using the Styler's output methods.\n\nWhat you should do is this:\n\n```python\nimport pandas as pd\npd.set_option('styler.format.precision', 3)\npd.DataFrame(15.22345676543234567,columns=[1,2,3,4,5,6],index=['A','Z','R','T']).style.format({1:'{:.2%}'})\n```",
"Thanks for the clarification! I missed that `Styler` uses its own formatting logic via `styler.format.precision` instead of `display.float_format`. Appreciate the doc pointer!",
"Indeed, behaviour is quite confusing. But I can confirm the proposed code works as expected. "
] |
2,987,709,592 | 61,268 | DOC: Add documentation for `groupby.ewm()` | closed | 2025-04-11T06:42:06 | 2025-04-14T20:05:36 | 2025-04-14T20:05:36 | https://github.com/pandas-dev/pandas/issues/61268 | true | null | null | arthurlw | 1 | ### Pandas version checks
- [x] I have checked that the issue still exists on the latest versions of the docs on `main` [here](https://pandas.pydata.org/docs/dev/)
### Location of the documentation
https://pandas.pydata.org/docs/dev/reference/groupby.html
### Documentation problem
There is no reference for `DataFrameGroupBy.ewm()`, even though it exists in the API, and its docstring can be greatly improved.
Similar to: #61254
E.g. consider working example:
```
>>> import pandas as pd
>>> pd.__version__
'2.2.3'
>>> data = {"Class": ["A", "A", "A", "B", "B", "B"],"Value": [10, 20, 30, 40, 50, 60],}
>>> df = pd.DataFrame(data)
>>> df
Class Value
0 A 10
1 A 20
2 A 30
3 B 40
4 B 50
5 B 60
>>> ewm_mean = (df.groupby("Class").ewm(span=2).mean().reset_index(drop=True))
>>> ewm_mean
Value
0 10.000000
1 17.500000
2 26.153846
3 40.000000
4 47.500000
5 56.153846
```
### Suggested fix for documentation
Include reference of DataFrameGroupBy.ewm and SeriesGroupBy.ewm, like for [DataFrameGroupBy.rolling](https://pandas.pydata.org/docs/dev/reference/api/pandas.core.groupby.DataFrameGroupBy.rolling.html#pandas.core.groupby.DataFrameGroupBy.rolling)
and [SeriesGroupBy.rolling](https://pandas.pydata.org/docs/dev/reference/api/pandas.core.groupby.SeriesGroupBy.rolling.html#pandas.core.groupby.SeriesGroupBy.rolling)
Improve the `groupby.ewm()` docstring | [
"Docs",
"Needs Triage"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"take"
] |
2,987,073,547 | 61,267 | BUG: Inconsistent date resolution | closed | 2025-04-10T22:29:16 | 2025-04-13T12:31:35 | 2025-04-13T12:31:31 | https://github.com/pandas-dev/pandas/issues/61267 | true | null | null | super-ibby | 2 | ### Pandas version checks
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [x] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
import datetime
import pandas as pd
pd.__version__ # ‘2.2.2’
d = datetime.datetime(2025, 4, 10)
pd.Series([d]).dtype # dtype('<M8[ns]')
pd.DataFrame([{'date': d}])['date'].dtype # dtype('<M8[ns]')
df = pd.DataFrame([{'x': 0}])
df['date'] = d # <— broadcast scalar
df['date'].dtype # dtype('<M8[us]') <— different date resolution!
```
### Issue Description
Datetime resolution differs between Series/DataFrame construction and assignment via a scalar broadcast.
### Expected Behavior
I’d expect the same date resolution across the examples I provided.
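Until the resolutions agree, a workaround sketch (assuming a fixed nanosecond resolution is wanted) is to cast explicitly after the scalar assignment:

```python
import datetime

import pandas as pd

d = datetime.datetime(2025, 4, 10)
df = pd.DataFrame([{"x": 0}])
df["date"] = d  # broadcast scalar; may land as datetime64[us]
# Normalize the unit explicitly so downstream code always sees one resolution
df["date"] = df["date"].astype("datetime64[ns]")
print(df["date"].dtype)
```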
### Installed Versions
<details>
commit : d9cdd2ee5a58015e
python : 3.11.9.final.0
python-bits : 64
OS : Windows
OS-release : 10
Version : AMD64
processor : Intel64 Family 6 Model 85 Stepping 7, GenuineIntel
byteorder : little
pandas : 2.2.2
numpy : 1.26.4
Cython : 3.0.10
</details>
| [
"Bug",
"Datetime"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Thanks for the report. This seems to be fixed in main. Can you confirm you can reproduce this with the main branch (check box 3)?",
"I'm seeing the same @asishm - closing."
] |
2,986,796,920 | 61,266 | Backport PR #61265: TYP: Add ignores for numpy 2.2 updates | closed | 2025-04-10T20:22:58 | 2025-04-11T01:14:52 | 2025-04-11T01:14:48 | https://github.com/pandas-dev/pandas/pull/61266 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61266 | https://github.com/pandas-dev/pandas/pull/61266 | mroeschke | 1 | xref https://github.com/pandas-dev/pandas/pull/61265 | [
"Code Style",
"Typing"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Ah numpy is still pinned < 2 on 2.3 so we don't need this backport"
] |
2,986,409,258 | 61,265 | TYP: Add ignores for numpy 2.2 updates | closed | 2025-04-10T17:36:13 | 2025-04-11T01:15:01 | 2025-04-10T20:17:42 | https://github.com/pandas-dev/pandas/pull/61265 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61265 | https://github.com/pandas-dev/pandas/pull/61265 | mroeschke | 2 | - [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| [
"Code Style",
"Typing"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Merging to get CI to green. The type ignore can be addressed in the future",
"Owee, I'm MrMeeseeks, Look at me.\n\nThere seem to be a conflict, please backport manually. Here are approximate instructions:\n\n1. Checkout backport branch and update it.\n\n```\ngit checkout 2.3.x\ngit pull\n```\n\n2. Cherry pick the first parent branch of the this PR on top of the older branch:\n```\ngit cherry-pick -x -m1 adec21f3fa896684cad04ecf1878a9f2492370ea\n```\n\n3. You will likely have some merge/cherry-pick conflict here, fix them and commit:\n\n```\ngit commit -am 'Backport PR #61265: TYP: Add ignores for numpy 2.2 updates'\n```\n\n4. Push to a named branch:\n\n```\ngit push YOURFORK 2.3.x:auto-backport-of-pr-61265-on-2.3.x\n```\n\n5. Create a PR against branch 2.3.x, I would have named this PR:\n\n> \"Backport PR #61265 on branch 2.3.x (TYP: Add ignores for numpy 2.2 updates)\"\n\nAnd apply the correct labels and milestones.\n\nCongratulations — you did some good work! Hopefully your backport PR will be tested by the continuous integration and merged soon!\n\nRemember to remove the `Still Needs Manual Backport` label once the PR gets merged.\n\nIf these instructions are inaccurate, feel free to [suggest an improvement](https://github.com/MeeseeksBox/MeeseeksDev).\n "
] |
2,985,510,148 | 61,264 | API: Rename `arg` to `func` in `Series.map` | closed | 2025-04-10T12:30:14 | 2025-08-13T20:53:05 | 2025-04-14T13:14:29 | https://github.com/pandas-dev/pandas/pull/61264 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61264 | https://github.com/pandas-dev/pandas/pull/61264 | datapythonista | 1 | - [X] closes #61260
- [X] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [X] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [X] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [X] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
CC: @rhshadrach | [
"Deprecate",
"Apply",
"API - Consistency"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Thanks @datapythonista "
] |
2,983,703,172 | 61,263 | BUG: Impossible creation of array with dtype=string | closed | 2025-04-09T19:22:02 | 2025-05-15T16:13:30 | 2025-05-15T16:13:21 | https://github.com/pandas-dev/pandas/pull/61263 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61263 | https://github.com/pandas-dev/pandas/pull/61263 | Manju080 | 12 | closes #61155
- [X] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [X] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
Hello @rhshadrach ,
I’ve created a fix that raises a ValueError when trying to create a StringArray from a list of lists with inconsistent lengths or non-character elements. This aligns the behavior for both consistent and inconsistent input formats, and tests have been added.
I would like to hear opinions on raising an error when a list of lists is passed for `dtype=StringDtype`, to avoid ambiguous behavior. If preferred, we could instead join the inner lists into strings automatically; happy to adjust based on guidance.
Example case : `pd.array([["t", "e", "s", "t"], ["w", "o", "r", "d"]], dtype="string") `
`output : <StringArray>
['test', 'word']
Length: 2, dtype: string`
Thanks | [
"Bug",
"Strings"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Also, please add a test for this.",
"pre-commit.ci autofix",
"> We use pytest for testing, you'll need to add a test using that format. See here:\r\n> \r\n> https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#using-pytest\r\n> \r\n> The general pytest introduction may also be useful:\r\n> \r\n> https://docs.pytest.org/en/7.1.x/getting-started.html\r\n\r\nThank you for the details, will work on it",
"@rhshadrach I’ve been testing the following case in `test_lib.py` \r\n\r\n`def test_ensure_string_array_list_of_lists():`\r\n ` # GH#61155: ensure list of lists doesn't get converted to string`\r\n `arr = [['t', 'e', 's', 't'], ['w', 'o', 'r', 'd']]`\r\n `result = lib.ensure_string_array(arr)`\r\n\r\n ` # Each item in result should still be a list, not a stringified version`\r\n `assert isinstance(result[0], list)`\r\n `assert isinstance(result[1], list)`\r\n `assert result[0] == ['t', 'e', 's', 't']`\r\n `assert result[1] == ['w', 'o', 'r', 'd']`\r\n\r\nHowever, the test fails with \r\n `FAILED pandas/tests/libs/test_lib.py::test_ensure_string_array_list_of_lists - AssertionError`\r\n DEBUG RESULT: `[\"['t', 'e', 's', 't']\" \"['w', 'o', 'r', 'd']\"] <class 'numpy.ndarray'> <class 'str'>`\r\n\r\nSo currently, the list of lists gets converted into a 1D NumPy array of strings.\r\nWith the current implementation, `arr` becomes a 1D `object` array of lists (as intended), but it seems that downstream processing stringifies each list.\r\nDo you want me to guard against this case inside `ensure_string_array` to preserve the list structure? Or is the stringification expected behavior in this context?\r\n\r\nThanks!",
"I believe converting to a 1-dimesional ndarray of strings is the expected behavior of `enusure_string_array`. Perhaps I'm misunderstanding; what is the alternative?",
"Thanks for the clarification!\r\n\r\nYou're right — the behavior of `ensure_string_array` producing a 1D `ndarray` of stringified inner lists (when given a list of lists like `[list(\"test\"), list(\"word\")])` is consistent with the current expectations of the function.\r\n\r\n`def test_ensure_string_array_list_of_lists():`\r\n `arr = [list(\"test\"), list(\"word\")]`\r\n `result = lib.ensure_string_array(arr)`\r\n `assert isinstance(result, np.ndarray)`\r\n `assert result.dtype == object`\r\n `assert result[0] == \"['t', 'e', 's', 't']\"`\r\n `assert result[1] == \"['w', 'o', 'r', 'd']\"`\r\n `print(\"DEBUG RESULT:\", result)`\r\n\r\nMy initial assumption was that it should preserve the list structure instead of converting to strings, but after re-evaluating and running the test, I see that the 1D array of strings is indeed the intended behavior. The test has now been updated and passes successfully and got the below output \r\n`[1/1] Generating write_version_file with a custom command`\r\n`================================================= test session starts`\r\n`==================================================`\r\n`platform linux -- Python 3.12.3, pytest-8.3.5, pluggy-1.5.0`\r\n`rootdir: /mnt/c/Users/HP/Documents/Python_pandas_op/pandas`\r\n`configfile: pyproject.toml`\r\n`plugins: hypothesis-6.131.6`\r\n`collected 84 items`\r\npandas/tests/libs/test_lib.py ...................................................................................DEBUG RESULT: [\"['t', 'e', 's', 't']\" \"['w', 'o', 'r', 'd']\"]`\r\n\r\n`----------------- generated xml file: /mnt/c/Users/HP/Documents/Python_pandas_op/pandas/test-data.xml ------------------`\r\n`================================================= slowest 30 durations`\r\n`=================================================`\r\n`0.09s setup pandas/tests/libs/test_lib.py::TestMisc::test_max_len_string_array`\r\n\r\n`(29 durations < 0.005s hidden. 
Use -vv to show these durations.)`\r\n\r\nPlease let me know if I need to change anything\r\n",
"@Manju080 - the last change I'm seeing is from 3 weeks ago. Perhaps you need to push some commits?",
"That's right, I just wanna make sure before committing the changes.",
"Apologies for the causing confusion, I will work this to fix.",
"@rhshadrach Thank you very much, required changes are done.\r\nLet me know if there is anything ",
"pre-commit.ci autofix",
"Thanks @Manju080 "
] |
2,983,689,327 | 61,262 | DEBUG: Cython failures | closed | 2025-04-09T19:15:21 | 2025-04-09T21:27:50 | 2025-04-09T21:27:45 | https://github.com/pandas-dev/pandas/pull/61262 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61262 | https://github.com/pandas-dev/pandas/pull/61262 | mroeschke | 2 | Trying to get a Cython reproducer for https://github.com/pandas-dev/pandas/pull/61249 | [
"Build"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"First piece of repro from [8837a71](https://github.com/pandas-dev/pandas/pull/61262/commits/8837a71a9797cdc82f331c1205759e473e5fa219)\r\n\r\n```cython\r\ncpdef timedelta debug_2():\r\n cdef int64_t val = -420000000000\r\n us, remainder = divmod(val, 1000)\r\n if remainder >= 500:\r\n us += 1\r\n return timedelta(microseconds=us)\r\n```\r\n\r\nWindows 3.13t: `datetime.timedelta(microseconds=906795)`\r\nOther Platforms: `datetime.timedelta(days=-1, seconds=85980)`\r\n",
"Opened https://github.com/cython/cython/issues/6786 as a result of this investigation so closing"
] |
2,983,553,901 | 61,261 | CI: Pin Cython to a specific commit Window PY3.13t builds | closed | 2025-04-09T18:21:08 | 2025-04-09T19:48:27 | 2025-04-09T19:48:24 | https://github.com/pandas-dev/pandas/pull/61261 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61261 | https://github.com/pandas-dev/pandas/pull/61261 | mroeschke | 0 | Manual backport of https://github.com/pandas-dev/pandas/pull/61249 | [
"CI"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] |
2,983,486,796 | 61,260 | API: Rename arg to func in Series.map for consistency | closed | 2025-04-09T17:47:36 | 2025-04-14T13:41:22 | 2025-04-14T13:14:30 | https://github.com/pandas-dev/pandas/issues/61260 | true | null | null | datapythonista | 2 | The API of methods taking udf follow certain patterns that make them consistent and easier to learn and use. There are some small differences, which have been listed in #40112 and #61128.
This issue is to rename the `arg` parameter of `Series.map` to `func`, which is the name consistently used in almost all methods. In the case of `Series.map`, the argument is slightly different than others, given that `arg` or `func` can also be a `dict` or a `Series`, which will make `map` replace values from these mappings, instead of executing an elementwise udf.
This issue is only for the renaming of the parameter; making the parameter consistent with other methods such as `DataFrame.apply` can be considered in another issue. But there are some cases to consider, given that the behavior of `map` differs slightly when providing a mapping rather than a function that maps. In particular, `map` will use `NaN` when the mapping returns `None`, but it will use `None` when the function returns `None`. Also, if we stop supporting dictionaries, users should in general just replace their code from `Series.map(my_dict)` to `Series.map(my_dict.get)`. But there are some special cases: for example, when the dictionary is a `defaultdict`, `.get` will return `None`, while the current `map` implementation with a `defaultdict` will use the default value. | [
"API Design",
"Apply",
"API - Consistency"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"@pandas-dev/pandas-core I thought that it was a good idea when renaming `arg` in `Series.map` to `func` to make it only accept a function, for consistency with other functions and simplicity with the name. I thought for users it'd be as simply as using `my_dict.get` instead of `my_dict` as the argument.\n\nBut seems like there is some more complexity. `defaultdict` for example doesn't work with `.get` as expected for this case, since `.get` will still return `None` and not the default value. So users should use `my_series.map(lambda x: my_defaultdict[x])` instead of `my_series.my_defaultdict.get)`.\n\nAlso, when `Series.map` receives a dictionary, `None` will be return as `NaN`, while when it receives a function, `None` will be returned as `None`.\n\nIf we were designing the API from zero I'd still support the consistency of just accepting functions and one behaviour. But not too sure if it's worth given that the expected user code changes, while not too complex, are not as immediate as making all dictionaries a function with `.get`. Thoughts?",
"It seems like a very common use case to use a dict with `Series.map`, I don't think we should be making it more inconvenient. Also I'd expect `Series.map` to accept a dict from the name alone.\n\n+1 on `arg` -> `func`."
] |
2,981,501,667 | 61,259 | Add to_snake_case and to_camel_case for index label conversion using … | closed | 2025-04-09T04:31:33 | 2025-04-10T00:58:47 | 2025-04-09T16:44:42 | https://github.com/pandas-dev/pandas/pull/61259 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61259 | https://github.com/pandas-dev/pandas/pull/61259 | ek-ok | 2 | **Add `to_snake_case` and `to_camel_case` methods for Index label conversion**
This PR adds two new string transformation methods to the `pandas.Index` class:
### 🚀 New methods
- `to_snake_case()`: Converts string index labels to `snake_case` using `inflection.underscore`
- `to_camel_case()`: Converts string index labels to `camelCase` using `inflection.camelize`
Both methods:
- Leave non-string values (e.g. integers, `None`) unchanged
- Are chainable and return a new `Index` instance
### 🧪 Tests
Added unit tests for both methods in `test_base.py`:
- Covers strings with mixed case and spaces
- Verifies behavior with mixed-type labels
### 📌 Example
```python
import pandas as pd
df = pd.DataFrame({"first name": [1], "another_column": [1]})
df.columns = df.columns.to_camel_case()
print(df.columns)
# Index(['firstName', 'anotherColumn'], dtype='object')
df.columns = df.columns.to_snake_case()
print(df.columns)
# Index(['first_name', 'another_column'], dtype='object')
```
| [] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Thanks for the PR, but it appears this feature is not tied to an open issue with core developer support. For new APIs, we would need core developer support before adding new API. (Note given the simplicity of this feature, it's unlikely that this would be added to pandas.)\r\n\r\nThanks for the suggestion but closing ",
"Thanks for your comment and understand your point. This is actually something I've always wanted when dealing with messy column names. Without modifying pandas itself, the code below is the alternative I can think of. Do you have any recommendations?\r\n\r\n```\r\nimport pandas as pd\r\nimport inflection\r\n\r\n@pd.api.extensions.register_index_accessor(\"clean\")\r\nclass ColumnIndexAccessor:\r\n def __init__(self, pandas_obj):\r\n self._obj = pandas_obj\r\n\r\n def to_snake_case(self):\r\n return self._obj.to_series().apply(\r\n lambda x: inflection.underscore(x).replace(\" \", \"_\")\r\n ).values\r\n\r\n def to_camel_case(self):\r\n return self._obj.to_series().apply(\r\n lambda x: inflection.camelize(x, uppercase_first_letter=False)\r\n ).values\r\n```\r\n"
] |
2,981,294,469 | 61,258 | DOC: Update the last ArcticDB link in ecosystem.md | closed | 2025-04-09T01:24:17 | 2025-04-09T16:45:55 | 2025-04-09T16:45:48 | https://github.com/pandas-dev/pandas/pull/61258 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61258 | https://github.com/pandas-dev/pandas/pull/61258 | star1327p | 1 | Update the last ArcticDB link in ecosystem.md.
Correct link:
https://docs.arcticdb.io/latest/api/processing/#arcticdb.QueryBuilder
The old link does not work:
https://docs.arcticdb.io/latest/api/query_builder/
- [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| [
"Docs"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Thanks again @star1327p "
] |
2,980,801,799 | 61,257 | Changed term non-null to NA | closed | 2025-04-08T19:51:26 | 2025-04-10T16:04:14 | 2025-04-10T16:04:03 | https://github.com/pandas-dev/pandas/pull/61257 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61257 | https://github.com/pandas-dev/pandas/pull/61257 | DarthKitten2130 | 1 | - [ ] closes #60802
Changed the term non-null to NA, to reflect pandas' docs standard
| [
"Docs"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Thanks @DarthKitten2130 "
] |
2,980,335,275 | 61,256 | ENH: Utility to return "feasible" dtype for `infer_dtype` output. | open | 2025-04-08T16:32:10 | 2025-06-29T16:06:55 | null | https://github.com/pandas-dev/pandas/issues/61256 | true | null | null | mroeschke | 2 | `pandas.api.types.infer_dtype` returns a string label of the inferred type of data. (While these should probably be an `enum`), these string labels do not map cleanly to what supported pandas data type represents that data e.g `"mixed"` would probably map to `"object"`
It would be nice to
1. Add a parameter `as_pandas_type: bool = False` that would return an enum representing the supported pandas type (e.g. `"mixed"` and `"unknown-array"` map to `PandasType.OBJECT`)
2. Add a separate function to do this
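A rough sketch of what such a helper could look like; the mapping table and the `feasible_dtype` name are purely illustrative assumptions, not part of any pandas API:

```python
from pandas.api.types import infer_dtype

# Hypothetical label-to-dtype table; only a subset of infer_dtype's labels shown
_INFERRED_TO_PANDAS = {
    "integer": "int64",
    "floating": "float64",
    "boolean": "bool",
    "string": "object",
    "mixed": "object",
    "mixed-integer": "object",
    "unknown-array": "object",
}

def feasible_dtype(values) -> str:
    """Map an infer_dtype label to a dtype pandas can actually hold."""
    return _INFERRED_TO_PANDAS.get(infer_dtype(values, skipna=True), "object")

print(feasible_dtype([1, 2, 3]))    # int64
print(feasible_dtype(["a", None]))  # object
```

Either the new parameter or the standalone function could wrap logic like this, with an enum replacing the plain dtype strings.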
| [
"Enhancement",
"Needs Discussion"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"If we add the `as_pandas_type` parameter in `infer_dtype`, is a separate function still necessary?\n\nI think having a separate function makes the mapping logic reusable outside of `infer_dtype` (like for testing or downstream libraries), but I’m curious if others see it as necessary if we already have the param and the enum.",
"> While these should probably be an enum\n\n+1\n\nI guess this is still used in the wild, but I've been trying to wean off internal uses for a while because there's almost always a better alternative."
] |
2,980,222,415 | 61,255 | DOC: Add real-world aggregation example to GroupBy user guide | closed | 2025-04-08T15:48:24 | 2025-04-08T16:44:40 | 2025-04-08T16:44:39 | https://github.com/pandas-dev/pandas/pull/61255 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61255 | https://github.com/pandas-dev/pandas/pull/61255 | udayanand22 | 3 | This PR adds a real-world example using sales data to the GroupBy Aggregation section in the user guide (groupby.rst). This enhances understanding for new users by supplementing the existing animals DataFrame example with a business-style case. | [] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"The failing doc build seems unrelated to the change introduced in this PR, which only adds a real-world GroupBy aggregation example. Please let me know if any adjustments are needed—happy to update.",
"@pandas-dev/mentors – Submitted my GSOC 2025 proposal to revamp Pandas docs! \r\nWould love your feedback. Ready to start work from Day 1!",
"Thanks for the PR, but I don't think this example adds anything additional to these doc so closing. I recommend searching the issue tracker for documentation issues that have been triaged"
] |
2,979,660,728 | 61,254 | DOC: Add documentation for groupby.expanding() | closed | 2025-04-08T12:36:06 | 2025-04-14T22:41:30 | 2025-04-14T22:41:30 | https://github.com/pandas-dev/pandas/issues/61254 | true | null | null | olek-osikowicz | 2 | ### Pandas version checks
- [x] I have checked that the issue still exists on the latest versions of the docs on `main` [here](https://pandas.pydata.org/docs/dev/)
### Location of the documentation
"https://pandas.pydata.org/docs/dev/reference/groupby.html"
### Documentation problem
There is no reference for `DataFrameGroupBy.expanding()`, even though it exists in the API
E.g. consider working example:
```
>>> import pandas as pd
>>> pd.__version__
'2.2.3'
>>> data = {"Class": ["A", "A", "A", "B", "B", "B"],"Value": [10, 20, 30, 40, 50, 60],}
>>> df = pd.DataFrame(data)
>>> df
Class Value
0 A 10
1 A 20
2 A 30
3 B 40
4 B 50
5 B 60
>>> expanding_mean = df.groupby("Class").expanding().mean().reset_index(drop=True)
>>> expanding_mean
Value
0 10.0
1 15.0
2 20.0
3 40.0
4 45.0
5 50.0
```
It's undocumented behaviour
### Suggested fix for documentation
Include reference of `DataFrameGroupBy.expanding` and `SeriesGroupBy.expanding`, like for [DataFrameGroupBy.rolling](https://pandas.pydata.org/docs/dev/reference/api/pandas.core.groupby.DataFrameGroupBy.rolling.html#pandas.core.groupby.DataFrameGroupBy.rolling)
and [SeriesGroupBy.rolling](https://pandas.pydata.org/docs/dev/reference/api/pandas.core.groupby.SeriesGroupBy.rolling.html#pandas.core.groupby.SeriesGroupBy.rolling) | [
"Docs",
"Groupby",
"Window"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Thanks for the report. Agreed this should be added. While adding it to the API docs will make it show up, it looks like the current docstring should also be greatly improved as well.\n\nPRs to improve are welcome!",
"take"
] |
2,979,650,869 | 61,253 | BUG: Selecting the wrong first column | closed | 2025-04-08T12:32:18 | 2025-04-08T21:07:43 | 2025-04-08T21:07:37 | https://github.com/pandas-dev/pandas/issues/61253 | true | null | null | morzen | 1 | ### Pandas version checks
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [ ] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
TTP_CWE_mappingDF = pd.read_csv('./658.csv', sep=',')
columns_list = TTP_CWE_mappingDF.columns.tolist() # Returns column names as a list
print(columns_list)
print("\n")
TEST = TTP_CWE_mappingDF.iloc[:, [0]]
columns_list2 = TEST.columns.tolist() # Returns column names as a list
print(columns_list2)
print(TEST)
# first_column = TTP_CWE_mappingDF.iloc[:, 0] # All rows (:) + first column (0)
# print(first_column)
```
### Issue Description
Hi,
When downloading the MITRE CAPEC cwe .csv I tried to import it on Python to play with it a bit.
Surprisingly, when selecting the first column, the data is from the second column, and this applies to the whole dataframe; all columns are off by one. The key is correct, but the data is for the next key.
This is rather problematic, as you can imagine.
I added the .csv file I am using as well.
[658.csv](https://github.com/user-attachments/files/19648994/658.csv)
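For illustration, the off-by-one symptom usually comes from `read_csv` promoting the first field to the index when data rows carry one more field than the header (e.g. trailing delimiters); `index_col=False` disables that inference. A minimal sketch with made-up sample data (not the attached 658.csv):

```python
import io
import pandas as pd

# Hypothetical sample data: each data row ends with a stray delimiter,
# so rows have one more field than the two-name header.
raw = "ID,Name\n1,Alpha,\n11,Beta,\n"

inferred = pd.read_csv(io.StringIO(raw))
print(inferred.index.tolist())  # the first field was promoted to the index

# index_col=False forces pandas not to use the first column as the index,
# the documented handling for malformed files with trailing delimiters.
fixed = pd.read_csv(io.StringIO(raw), index_col=False)
print(fixed["ID"].tolist())   # [1, 11] -- columns stay aligned
```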
### Expected Behavior
This is the result I get:
["'ID", 'Name', 'Abstraction', 'Status', 'Description', 'Alternate Terms', 'Likelihood Of Attack', 'Typical Severity', 'Related Attack Patterns', 'Execution Flow', 'Prerequisites', 'Skills Required', 'Resources Required', 'Indicators', 'Consequences', 'Mitigations', 'Example Instances', 'Related Weaknesses', 'Taxonomy Mappings', 'Notes']
["'ID"]
'ID
1 Accessing Functionality Not Properly Constrain...
11 Cause Web Server Misclassification
112 Brute Force
114 Authentication Abuse
115 Authentication Bypass
.. ...
698 Install Malicious Extension
70 Try Common or Default Usernames and Passwords
700 Network Boundary Bridging
94 Adversary in the Middle (AiTM)
98 Phishing
[177 rows x 1 columns]
The expected result should be:
["'ID", 'Name', 'Abstraction', 'Status', 'Description', 'Alternate Terms', 'Likelihood Of Attack', 'Typical Severity', 'Related Attack Patterns', 'Execution Flow', 'Prerequisites', 'Skills Required', 'Resources Required', 'Indicators', 'Consequences', 'Mitigations', 'Example Instances', 'Related Weaknesses', 'Taxonomy Mappings', 'Notes']
["'ID"]
'ID
1
11
112
114
115
..
698
70
700
94
98
[177 rows x 1 columns]
### Installed Versions
<details>
INSTALLED VERSIONS
------------------
commit : 0691c5cf90477d3503834d983f69350f250a6ff7
python : 3.12.7
python-bits : 64
OS : Linux
OS-release : 6.8.11-arm64
Version : #1 SMP Kali 6.8.11-1kali2 (2024-05-30)
machine : aarch64
processor :
byteorder : little
LC_ALL : None
LANG : C.UTF-8
LOCALE : C.UTF-8
pandas : 2.2.3
numpy : 2.2.4
pytz : 2025.2
dateutil : 2.9.0.post0
pip : 24.1.1
Cython : None
sphinx : None
IPython : 9.0.2
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : 4.13.3
blosc : None
bottleneck : None
dataframe-api-compat : None
fastparquet : None
fsspec : None
html5lib : None
hypothesis : None
gcsfs : None
jinja2 : 3.1.6
lxml.etree : None
matplotlib : None
numba : None
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
psycopg2 : None
pymysql : None
pyarrow : None
pyreadstat : None
pytest : None
python-calamine : None
pyxlsb : None
s3fs : None
scipy : None
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : None
xlsxwriter : None
zstandard : None
tzdata : 2025.2
qtpy : None
pyqt5 : None
</details>
| [
"Bug",
"IO CSV"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Thanks for the report. If you would like pandas to not infer the first values as an index, you can pass `index_col=False` to `read_csv`.\n\nClosing for now. If this doesn't resolve your issue, reply here as to why and we can reopen."
] |
2,979,291,332 | 61,252 | BUG: AttributeError: 'SparseArray' object has no attribute 'round' | closed | 2025-04-08T10:10:36 | 2025-04-08T21:14:44 | 2025-04-08T21:14:41 | https://github.com/pandas-dev/pandas/issues/61252 | true | null | null | ShuyangXu | 2 | ### Pandas version checks
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [ ] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
import pandas as pd
df = pd.DataFrame([1.1,2.5,3,4.7], dtype = pd.SparseDtype())
hasattr(df, 'round')
# True
df.round()
# AttributeError: 'SparseArray' object has no attribute 'round'
```
### Issue Description
the sparse `df` does have a `round` method, but cannot execute it
### Expected Behavior
`df` can execute the `round` method
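Until this is fixed, one possible workaround (a sketch, not the eventual pandas fix) is to densify, round, and restore the sparse dtype:

```python
import pandas as pd

df = pd.DataFrame([1.1, 2.5, 3.0, 4.7], dtype=pd.SparseDtype())

# Densify to plain float64, round, then convert back to a sparse dtype.
# Note numpy rounds half to even, so 2.5 -> 2.0.
rounded = df.astype("float64").round().astype(pd.SparseDtype("float64"))
print(rounded[0].astype("float64").tolist())
```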
### Installed Versions
<details>
INSTALLED VERSIONS
------------------
commit : 0691c5cf90477d3503834d983f69350f250a6ff7
python : 3.13.2
python-bits : 64
OS : Linux
OS-release : 3.10.0-1160.102.1.0.1.an7.x86_64
Version : #1 SMP Sun Oct 29 06:40:18 CST 2023
machine : x86_64
processor : x86_64
byteorder : little
LC_ALL : en_US.utf8
LANG : en_US.UTF-8
LOCALE : en_US.UTF-8
pandas : 2.2.3
numpy : 2.2.4
pytz : 2025.2
dateutil : 2.9.0.post0
pip : 25.0.1
Cython : None
sphinx : None
IPython : None
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : None
blosc : None
bottleneck : None
dataframe-api-compat : None
fastparquet : None
fsspec : None
html5lib : None
hypothesis : None
gcsfs : None
jinja2 : None
lxml.etree : None
matplotlib : None
numba : None
numexpr : None
odfpy : None
openpyxl : 3.1.5
pandas_gbq : None
psycopg2 : None
pymysql : None
pyarrow : None
pyreadstat : None
pytest : None
python-calamine : None
pyxlsb : None
s3fs : None
scipy : 1.15.2
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : 2.0.1
xlsxwriter : None
zstandard : None
tzdata : 2025.2
qtpy : None
pyqt5 : None
</details>
| [
"Bug",
"Sparse"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Hi! 👋 I’d like to work on this issue. I can reproduce the bug as described and will look into adding support for .round() on SparseArray. Let me know if this is okay to proceed.",
"Closing as a duplicate of #49387. That issue has a PR linked to it, and that PR does indeed resolve this issue."
] |
2,979,048,343 | 61,251 | PERF: future_stack is too slow | closed | 2025-04-08T08:39:36 | 2025-04-08T16:18:08 | 2025-04-08T16:18:07 | https://github.com/pandas-dev/pandas/issues/61251 | true | null | null | auderson | 1 | ### Pandas version checks
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this issue exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [ ] I have confirmed this issue exists on the main branch of pandas.
### Reproducible Example
`future_stack` is very slow compared to the previous implementation:
```python
import pandas as pd, numpy as np
df = pd.DataFrame(np.random.randn(5000, 5000))
%%time
df.stack(dropna=False)
# CPU times: user 49.4 ms, sys: 49.7 ms, total: 99.1 ms
# Wall time: 96 ms
%%time
df.stack(future_stack=True)
# CPU times: user 1.96 s, sys: 122 ms, total: 2.08 s
# Wall time: 2.08 s
```
### Installed Versions
<details>
INSTALLED VERSIONS
------------------
commit : 0691c5cf90477d3503834d983f69350f250a6ff7
python : 3.10.16
python-bits : 64
OS : Linux
OS-release : 5.15.0-122-generic
Version : #132-Ubuntu SMP Thu Aug 29 13:45:52 UTC 2024
machine : x86_64
processor : x86_64
byteorder : little
LC_ALL : None
LANG : en_US.UTF-8
LOCALE : en_US.UTF-8
pandas : 2.2.3
numpy : 1.26.4
pytz : 2024.1
dateutil : 2.9.0
pip : 24.0
Cython : 3.0.7
sphinx : 7.3.7
IPython : 8.25.0
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : 4.12.3
blosc : None
bottleneck : None
dataframe-api-compat : None
fastparquet : None
fsspec : 2024.6.0
html5lib : None
hypothesis : 6.129.3
gcsfs : None
jinja2 : 3.1.4
lxml.etree : None
matplotlib : 3.9.2
numba : 0.60.0
numexpr : 2.10.0
odfpy : None
openpyxl : 3.1.5
pandas_gbq : None
psycopg2 : 2.9.9
pymysql : 1.4.6
pyarrow : 16.1.0
pyreadstat : None
pytest : 8.2.2
python-calamine : None
pyxlsb : None
s3fs : None
scipy : 1.14.0
sqlalchemy : 2.0.31
tables : 3.9.2
tabulate : None
xarray : None
xlrd : None
xlsxwriter : None
zstandard : 0.22.0
tzdata : 2024.1
qtpy : 2.4.1
pyqt5 : None
</details>
### Prior Performance
_No response_ | [
"Performance",
"Needs Triage"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Thanks, but closing as a duplicate of https://github.com/pandas-dev/pandas/issues/58391"
] |
2,978,363,392 | 61,250 | BUG: Raise error if not busdaycalendar | closed | 2025-04-08T01:34:52 | 2025-05-19T16:16:29 | 2025-05-19T16:16:29 | https://github.com/pandas-dev/pandas/pull/61250 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61250 | https://github.com/pandas-dev/pandas/pull/61250 | j-hendricks | 2 | Closes #60647. Raises TypeError if anything other than `None` or `np.busdaycalendar` is passed to `calendar`. Also added test for this exception as well as a note in whatsnew
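A minimal sketch of the kind of validation this PR describes (the function name and message are illustrative, not the actual pandas implementation):

```python
import numpy as np

def validate_business_calendar(calendar):
    # Accept only None or a numpy busdaycalendar, as the PR describes.
    if calendar is not None and not isinstance(calendar, np.busdaycalendar):
        raise TypeError(
            f"calendar must be a numpy.busdaycalendar, got {type(calendar).__name__}"
        )
    return calendar

validate_business_calendar(None)                 # ok
validate_business_calendar(np.busdaycalendar())  # ok
```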
- [x] closes #60647 (Replace xxxx with the GitHub issue number)
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [x] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| [
"Stale"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"This pull request is stale because it has been open for thirty days with no activity. Please [update](https://pandas.pydata.org/pandas-docs/stable/development/contributing.html#updating-your-pull-request) and respond to this comment if you're still interested in working on this.",
"Thanks for the pull request, but it appears to have gone stale. If interested in continuing, please merge in the main branch, address any review comments and/or failing tests, and we can reopen."
] |
2,978,085,469 | 61,249 | BLD: Try installing older Cython for windows free threading build | closed | 2025-04-07T21:57:16 | 2025-05-16T03:09:16 | 2025-04-09T17:56:30 | https://github.com/pandas-dev/pandas/pull/61249 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61249 | https://github.com/pandas-dev/pandas/pull/61249 | mroeschke | 17 | - [ ] closes #61242 (Replace xxxx with the GitHub issue number)
| [
"Build"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Note: Older ninja or numpy failures versions are exhibiting the same failures. I suspect a Cython change might be the culprit ",
"The traceback showing up in the test suite is suspicious:\r\n\r\n```sh\r\n2025-04-08T19:28:28.2539512Z Traceback (most recent call last):\r\n2025-04-08T19:28:28.2554584Z File \"pandas/_libs/tslibs/tzconversion.pyx\", line 128, in pandas._libs.tslibs.tzconversion.Localizer.utc_val_to_local_val\r\n2025-04-08T19:28:28.2555644Z File \"pandas/_libs/tslibs/tzconversion.pyx\", line 759, in pandas._libs.tslibs.tzconversion._tz_localize_using_tzinfo_api\r\n2025-04-08T19:28:28.2556397Z File \"pandas/_libs/tslibs/tzconversion.pyx\", line 791, in pandas._libs.tslibs.tzconversion._astimezone\r\n2025-04-08T19:28:28.2557288Z File \"C:\\Users\\runneradmin\\AppData\\Local\\Temp\\cibw-run-t5af6k12\\cp313t-win_amd64\\venv-test\\Lib\\site-packages\\dateutil\\tz\\_common.py\", line 144, in fromutc\r\n2025-04-08T19:28:28.2557899Z return f(self, dt)\r\n2025-04-08T19:28:28.2558668Z File \"C:\\Users\\runneradmin\\AppData\\Local\\Temp\\cibw-run-t5af6k12\\cp313t-win_amd64\\venv-test\\Lib\\site-packages\\dateutil\\tz\\_common.py\", line 261, in fromutc\r\n2025-04-08T19:28:28.2559658Z _fold = self._fold_status(dt, dt_wall)\r\n2025-04-08T19:28:28.2560385Z File \"C:\\Users\\runneradmin\\AppData\\Local\\Temp\\cibw-run-t5af6k12\\cp313t-win_amd64\\venv-test\\Lib\\site-packages\\dateutil\\tz\\_common.py\", line 196, in _fold_status\r\n2025-04-08T19:28:28.2561049Z if self.is_ambiguous(dt_wall):\r\n2025-04-08T19:28:28.2561335Z ~~~~~~~~~~~~~~~~~^^^^^^^^^\r\n2025-04-08T19:28:28.2561988Z File \"C:\\Users\\runneradmin\\AppData\\Local\\Temp\\cibw-run-t5af6k12\\cp313t-win_amd64\\venv-test\\Lib\\site-packages\\dateutil\\tz\\tz.py\", line 254, in is_ambiguous\r\n2025-04-08T19:28:28.2562646Z naive_dst = self._naive_is_dst(dt)\r\n2025-04-08T19:28:28.2563346Z File \"C:\\Users\\runneradmin\\AppData\\Local\\Temp\\cibw-run-t5af6k12\\cp313t-win_amd64\\venv-test\\Lib\\site-packages\\dateutil\\tz\\tz.py\", line 260, in _naive_is_dst\r\n2025-04-08T19:28:28.2564061Z return time.localtime(timestamp + 
time.timezone).tm_isdst\r\n2025-04-08T19:28:28.2564432Z ~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n2025-04-08T19:28:28.2564754Z OSError: [Errno 22] Invalid argument\r\n```\r\n\r\nI wonder if this is an issue with the container itself and not necessarily any of the Python packages",
"> The traceback showing up in the test suite is suspicious:\r\n\r\nThat traceback goes back to this dateutil issue: https://github.com/dateutil/dateutil/issues/197\r\n\r\n",
"Hmm possibly - but wouldn't that show up on the other Windows builds?\r\n\r\nMy other guess would be a bug in CPython - the datetime modules have historically used a global singleton for storing datetime state. On the free-threaded build that may be a likely issue area (assuming it hasn't changed upstream - I have not looked at all)",
"> Hmm possibly - but wouldn't that show up on the other Windows builds?\r\n\r\nYup, we do e.g. https://github.com/pandas-dev/pandas/actions/runs/14319739950/job/40133986072 but they get swallowed somewhere\r\n\r\nMy poor-man's git bisect here I think is pointing to a Cython bug (https://github.com/pandas-dev/pandas/pull/61249/commits/5016bf7d2ba64381f597356f0b3d1b39cb14ace2 shows that an earlier Cython commit does not fail the Python 3.13t Windows tests). My uneducated guess might be due to https://github.com/cython/cython/pull/6726",
"I also found this note in CPython about needing to define `Py_GIL_DISABLED=1` for Windows to work with free threaded builds:\r\n\r\nhttps://github.com/python/cpython/blob/main/Doc/howto/free-threading-extensions.rst#windows\r\n\r\nCertainly possible I am overlooking, but I don't see that anywhere in our current setup. Might be worth adding:\r\n\r\n```python\r\nadd_project_arguments('-DPy_GIL_DISABLED', language : 'c')\r\n```\r\n\r\nTo our current Meson configuration",
"> My poor-man's git bisect here I think is pointing to a Cython bug ([5016bf7](https://github.com/pandas-dev/pandas/commit/5016bf7d2ba64381f597356f0b3d1b39cb14ace2) shows that an earlier Cython commit does not fail the Python 3.13t Windows tests). My uneducated guess might be due to [cython/cython#6726](https://github.com/cython/cython/pull/6726)\r\n\r\nAh OK cool. Ignore what I said then - nice find",
"> I also found this note in CPython about needing to define Py_GIL_DISABLED=1 for Windows to work with free threaded builds:\r\n\r\nAh that would be good to add. I think I saw some warnings in the logs about some free threading option not being set, not sure if it was Windows specific",
"It is possible that Meson already sets it for us in Windows. One way to check would be to add `-Ccompile-args=\"-v\"` to get the verbose output of each compilation step",
"I think I've narrowed it down to this Cython change https://github.com/cython/cython/pull/6717",
"Wow nice find. From an initial glance that seems really tangential - I wonder how we can form an MRE out of that for a bug report (just thinking out loud - haven't looked deeply)",
"Going to merge as the Windows Python 3.13t builds are passing now.",
"Owee, I'm MrMeeseeks, Look at me.\n\nThere seem to be a conflict, please backport manually. Here are approximate instructions:\n\n1. Checkout backport branch and update it.\n\n```\ngit checkout 2.3.x\ngit pull\n```\n\n2. Cherry pick the first parent branch of the this PR on top of the older branch:\n```\ngit cherry-pick -x -m1 d1c64045921d7f5b4fe0609b5bc428219c279e5e\n```\n\n3. You will likely have some merge/cherry-pick conflict here, fix them and commit:\n\n```\ngit commit -am 'Backport PR #61249: BLD: Try installing older Cython for windows free threading build'\n```\n\n4. Push to a named branch:\n\n```\ngit push YOURFORK 2.3.x:auto-backport-of-pr-61249-on-2.3.x\n```\n\n5. Create a PR against branch 2.3.x, I would have named this PR:\n\n> \"Backport PR #61249 on branch 2.3.x (BLD: Try installing older Cython for windows free threading build)\"\n\nAnd apply the correct labels and milestones.\n\nCongratulations — you did some good work! Hopefully your backport PR will be tested by the continuous integration and merged soon!\n\nRemember to remove the `Still Needs Manual Backport` label once the PR gets merged.\n\nIf these instructions are inaccurate, feel free to [suggest an improvement](https://github.com/MeeseeksBox/MeeseeksDev).\n ",
"I think it might be easier if I manually do this backport as opposed to cherry picking this commit",
"Wow, really nice debugging, thanks!",
"> I also found this note in CPython about needing to define `Py_GIL_DISABLED=1` for Windows to work with free threaded builds:\r\n>\r\n> It is possible that Meson already sets it for us in Windows. One way to check would be to add `-Ccompile-args=\"-v\"` to get the verbose output of each compilation step\r\n\r\nJust to confirm: Meson does set this define on Windows builds, no need to do anything else.\r\n\r\nNow why I arrived here: I was looking for `cp313t` wheels for Windows, since they're not present on PyPI yet. I see this issue fixed nightlies, however those are for 3.0-dev only. From this I deduce that the upcoming 2.3.0 won't have `cp313t` wheels unless we change that:\r\n\r\nhttps://github.com/pandas-dev/pandas/blob/5bbd98bd616f0d7811c58e3e2d88473d94652d3a/.github/workflows/wheels.yml#L109\r\n\r\nMind if I open a PR to add them on the 2.3.x branch?\r\n\r\n",
"Sure I think that is ok. Thanks @rgommers !"
] |
2,977,882,677 | 61,248 | CI Use released numpy for Windows wheels testing | closed | 2025-04-07T20:07:10 | 2025-04-10T17:55:03 | 2025-04-10T16:46:26 | https://github.com/pandas-dev/pandas/pull/61248 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61248 | https://github.com/pandas-dev/pandas/pull/61248 | lesteve | 3 | Following #61249 this also used released numpy for testing on Windows free-threaded wheels. | [
"Build"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"I guess this doesn't fix the nightly wheels failure, since the issue seems related to cython dev according to https://github.com/pandas-dev/pandas/pull/61249.\r\n\r\nIt could still be worth to merge this PR to use released numpy.\r\n\r\nI'll move it ouf of draft, when the wheels are back to green on `main`.",
"OK I guess this is ready to be reviewed, this simplifies the Windows free-threaded further (on top of #61249) by using released numpy for testing. cc @mroeschke.\r\n\r\nThe Wheels testing seems to be fine, see [build log](https://github.com/pandas-dev/pandas/actions/runs/14374817755/job/40304732103?pr=61248).\r\n\r\nNot quite sure why the mypy error is about since it does not seem related to my PR, see [build log](https://github.com/pandas-dev/pandas/actions/runs/14374817701/job/40304672507?pr=61248).",
"Thanks @lesteve (and for your initial stab at debugging our wheel failures)"
] |
2,977,840,944 | 61,247 | ENH: Functionality to aid with Database Imports | open | 2025-04-07T19:46:37 | 2025-08-12T18:51:53 | null | https://github.com/pandas-dev/pandas/issues/61247 | true | null | null | mwiles217 | 4 | ### Feature Type
- [x] Adding new functionality to pandas
- [x] Changing existing functionality in pandas
- [ ] Removing existing functionality in pandas
### Problem Description
I would like the following features to aid with database imports:
1) Unicode/non-Unicode identification for columns.
2) Max length of each column, accounting for the inaccurate length reported when a multi-value cell (like a list of states) is saved by the dataframe.
3) Creation of the CREATE TABLE statements and supporting statements.
4) Creation of the BCP file (tab-delimited, with some caveats), plus its supporting FMT file and command-line execution.
5) Replacement of certain characters that prevent import (namely \r, \n, \t) in the data-load step, where it may be faster than running a regex later.
6) Renaming of columns by stripping out or replacing certain characters, similar to R's rename-all-columns functionality.
I have code written that does most of this. For context, I used dataframes as a key step for importing data into a SQL Server database at a rate of about 2.5 GB per hour, with the philosophy of loading all columns as strings and converting them once in the database, since otherwise important leading zeroes (like routing numbers or other custom indicators) could be dropped. Also note that the methodology was to import into SQL Server using bcp, which essentially consumes a tab-delimited file. The code provided is not directly the code I used, as I lost access to it, but it is a recreation and a major refactoring that makes it simpler.
1. Perhaps build into `describe` whether or not a column contains Unicode characters, so as to know whether to make fields varchar or nvarchar.
2. Perhaps build into `describe` getting the max length of each column. NOTE: there is a discrepancy between the max length reported for a dataframe and the length once the dataframe is written back to a file, in edge conditions with multi-value columns like a list of US states. The work-around that works 99% of the time was to multiply the length by 1.3 and then round up to the nearest 100 (using math ceiling after dividing by 100 and then multiplying by 100).
3. Numbers 1 and 2 should be in the same spot so as to be easily consumable for scripting the rest of your own solution.
4. Perhaps when loading the dataframe, have an option to replace certain characters to get it ready for import. It could probably be optimized there and run faster than running a regex after the fact. I'm talking about replacing \r, \n, and \t with a space, or perhaps {n} and {t} respectively, so they can easily be put back after import.
5. A built-in column-rename functionality that strips or replaces bad characters from column headers while ensuring uniqueness. For example, ( could be stripped, but @ you may want to replace with at, and Unicode may want to be stripped. Essentially, make it so you don't need to use [] around field names in SQL Server scripts for those columns. Perhaps this function could have options, as many may want renaming done differently. Expose this replace functionality so it can be used stand-alone as needed, e.g. for naming a table from its filename.
6. The above, I think, could provide good building blocks for people to script the rest themselves while having the grunt work completed.
7. Perhaps auto-add the filename without extension, as well as the row number, when opening or saving, as sometimes you need them for debugging or you literally need to reference the previous or next row.
8. Have the ability to draft a CREATE TABLE statement from the information above and save it to a .sql file, optionally adding an ID column, and easily output it to a file. Also add in renaming the table if it already exists, by appending its timestamp (including milliseconds) and then transferring it to a different schema. Extra credit for creating the schema if it doesn't already exist. That part is to assist with auto-complete tools and with tracking changes to the data, which was helpful for some disputes. Also make sure to strip naughty characters from the table name, but use the filename without extension as the default table name.
9. Have the ability to create a bcp fmt file which is a mapping between the table and the file
10. Save the appropriate commands to a bat file for executing a bcp file as well as running the create table statement.
11. Perhaps a `to_bcp` option that, in addition to the above several points, also creates the tab-delimited, no-header file, ensuring that things like tabs, newlines, and form feeds are replaced.
12. Per the last several points, perhaps the `to_bcp` function can auto-create the other needed files when called.
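The length work-around described in point 2 above can be sketched as follows (the helper name is hypothetical, chosen for illustration):

```python
import math

def padded_column_length(observed_max: int) -> int:
    # Pad the observed max length by 30%, then round up to the nearest
    # 100, with a floor of 100 -- the heuristic described above.
    padded = int(observed_max * 1.3)
    return 100 if padded <= 100 else int(math.ceil(padded / 100) * 100)

print(padded_column_length(50), padded_column_length(120), padded_column_length(995))
```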
### Feature Description
The below is working code that does 95% of what I requested above, so this request ultimately isn't for me, but for the community.
Things requested from above that are omitted from the code below are:
1) Folding into the outputted .sql file the creation of a backup schema if it doesn't exist, then renaming the same object if found by appending the creation date of the table (including milliseconds) and then transferring the table to that schema.
2) folding in an auto ID column into the create table and then the appropriate changes to the FMT file.
3) folding in the addition of 2 helpful columns a) the filename without extension, and b) the row number within file
4) Expansion of the column rename to do smarter replacements vs just stripping the characters (like replacing @ with at), and then checking that all column names are still unique.
Also, this code generates what appears to be acceptable output, but I haven't tested it within an actual database import into SQL Server.
```python
import pandas as pd
import numpy as np
import os, re, uuid
import math
from typing import List, Dict

rx_unicode_str: str = "[^\x00-\x7F]"
rx_unicode = re.compile(rx_unicode_str, re.IGNORECASE)
rx_space_str: str = r"\s"
rx_space = re.compile(rx_space_str, re.IGNORECASE)
rx_underscore_str: str = "_{2,}"
rx_underscore = re.compile(rx_underscore_str, re.IGNORECASE)
# Raw string so the backslash escapes reach the regex engine unchanged.
rx_strip_chars_str = r"!|@|#|\$|%|\^|&|\*|\(|\)|\{|\}|\[|\]|\.|\||;|:|'|\"|,|<|>|\?|=\+"
rx_strip_chars = re.compile(rx_strip_chars_str, re.IGNORECASE)


class column_info:
    def __init__(self, arg_column_name):
        self.guid: str = str(uuid.uuid4())
        self.column_name: str = arg_column_name
        self.column_name_orig: str = arg_column_name
        self.max_length: int = 0
        self.max_length_fixed: int = 0
        self.has_unicode: bool = False
        self.sql_max: bool = False
        self.ColumnIndex_1Based: int = 0
        self.last_column: bool = False

    def as_create_table(self):
        datatype: str = "NVARCHAR" if self.has_unicode else "VARCHAR"
        comma: str = "," if self.ColumnIndex_1Based > 1 else ""
        data_length: str = "MAX" if self.sql_max else str(self.max_length_fixed)
        return "{c}[{name}] {t}({l})".format(c=comma, name=self.column_name, t=datatype, l=data_length)

    def as_fmt_file(self):
        datatype: str = "SQLNCHAR" if self.has_unicode else "SQLCHAR"
        data_length: str = str(self.max_length_fixed)
        data_length = "4000" if self.sql_max and self.has_unicode else data_length
        data_length = "8000" if self.sql_max and not self.has_unicode else data_length
        # Example fmt row: 1 SQLINT 0 4 "\t" 1 "ID" ""
        idx: str = str(self.ColumnIndex_1Based).ljust(6)
        type: str = datatype.ljust(20)
        datalen: str = data_length.ljust(10)
        name = str("\"" + self.column_name + "\"").ljust(75)
        sep: str = "\\t" if not self.last_column else "\\r\\n"
        sep = str("\"" + sep + "\"").ljust(8)
        return "{idx}{type} 0 {datalen} {sep} {name} \"\"".format(idx=idx, type=type, datalen=datalen, sep=sep, name=name)

    def as_dict(self):
        return {k: v for k, v in self.__dict__.items() if k not in ["exclude_me"]}


class csv_info:
    def __init__(self, arg_filename: str):
        self.Database: str = "myDB"
        self.Server: str = "myServer"
        self.UserName: str = "myUser"
        self.Password: str = "myPass"
        self.filename: str = arg_filename
        self.output_directory: str = ""
        self.bcp_filename: str = ""
        self.fmt_filename: str = ""
        self.table_name: str = ""
        self.parent_directory: str = ""
        self.filename_with_extension: str = ""
        self.filename_wo_extension: str = ""
        self.file_extension: str = ""
        self.parent_directory, self.filename_with_extension = os.path.split(arg_filename)
        self.filename_wo_extension, self.file_extension = os.path.splitext(self.filename_with_extension)
        self.table_name = self.fix_name(self.filename_wo_extension)
        self.change_output_directory(self.parent_directory)
        self.df: pd.DataFrame = pd.read_csv(arg_filename, dtype=str)
        self.Columns: List[column_info] = []
        max_lengths = self.df.apply(lambda x: x.astype(str).str.len().max())
        column_index: int = -1
        for col in self.df.columns:
            column_index += 1
            new_col = column_info(col)
            new_col.max_length = int(max_lengths.iloc[column_index])
            # Pad the observed max length by 30%, then round up to the nearest 100.
            new_col.max_length_fixed = int(new_col.max_length * 1.3)
            new_col.max_length_fixed = 100 if new_col.max_length_fixed <= 100 else int(math.ceil(new_col.max_length_fixed / 100) * 100)
            new_col.has_unicode = self.df[col].str.contains(rx_unicode, regex=True).any()
            new_col.sql_max = new_col.max_length_fixed >= 8000 or (new_col.max_length_fixed >= 4000 and new_col.has_unicode)
            new_col.ColumnIndex_1Based = column_index + 1
            new_col.last_column = len(self.df.columns) == (column_index + 1)
            self.Columns.append(new_col)
        self.fix_column_names()
        self.to_bcp()

    def change_output_directory(self, arg_output_directory: str):
        self.output_directory = arg_output_directory
        self.bcp_filename = os.path.join(self.output_directory, self.filename_wo_extension + ".bcp")
        self.fmt_filename = os.path.join(self.output_directory, self.filename_wo_extension + ".fmt")

    def as_create_table(self):
        create_table: str = "CREATE TABLE [{t}](\n".format(t=self.table_name)
        col: column_info
        for col in self.Columns:
            create_table += col.as_create_table() + "\n"
        of = os.path.join(self.output_directory, self.filename_wo_extension + ".sql")
        with open(of, "w", encoding="utf-8") as f:
            f.writelines(create_table + ")")

    def as_fmt_file(self):
        fmt_file: str = "14.0\n{l}\n".format(l=str(len(self.df.columns)))
        col: column_info
        for col in self.Columns:
            fmt_file += col.as_fmt_file() + "\n"
        with open(self.fmt_filename, "w") as f:
            f.writelines(fmt_file)

    def to_bcp(self):
        self.as_fmt_file()
        of: str = os.path.join(self.output_directory, self.table_name + ".bat")
        with open(of, "w", encoding="utf-8") as f:
            f.writelines(self.sql_cmd() + "\n")
            f.writelines(self.bcp_import_cmd() + "\n")
        # Replace tab/newline/carriage-return so the tab-delimited BCP file stays aligned.
        self.df.replace(to_replace=r"\t|\n|\r", value=" ", regex=True, inplace=True)
        self.df.to_csv(path_or_buf=self.bcp_filename, sep="\t", index=None, header=False)

    def fix_name(self, val: str):
        ret: str = val.strip().replace("-", "_")
        ret = rx_unicode.sub("", ret)
        ret = rx_space.sub("_", ret)
        ret = rx_underscore.sub("_", ret)
        ret = rx_strip_chars.sub("", ret)
        return ret

    def fix_column_names(self):
        col: column_info
        for col in self.Columns:
            col.column_name = self.fix_name(col.column_name)

    def bcp_import_cmd(self):
        f_error = os.path.join(self.output_directory, self.filename_wo_extension + "_import_errors.txt")
        bcp_str: str = "bcp {db}.dbo.{t} in \"{f_bcp}\" -f {f_fmt} -T -C 65001 -S {server_name} -U {username} -P {password} -e\"{f_error}\"".format(
            db=self.Database, t=self.table_name, f_bcp=self.bcp_filename, f_fmt=self.fmt_filename, f_error=f_error,
            server_name=self.Server, username=self.UserName, password=self.Password)
        return bcp_str

    def sql_cmd(self):
        f = os.path.join(self.output_directory, self.table_name + ".sql")
        return "sqlcmd -S {s} -U {u} -P {p} -i \"{f}\"".format(s=self.Server, u=self.UserName, p=self.Password, f=f)


input_file: str = r"C:\data\LargeCSVFile\customers-2000000.csv"
cv = csv_info(input_file)
```
### Alternative Solutions
See the previous section for the alternative solution of custom-written code.
### Additional Context
_No response_ | [
"Enhancement",
"IO SQL",
"Needs Triage",
"Closing Candidate"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Pandas core typically stays database-agnostic, calling through SQLAlchemy, and the logic above is very tightly coupled to Microsoft SQL Server. However the existing [bcpandas](https://pypi.org/project/bcpandas/) project exists to speed SQL Server-specific (particularly BCP-based) functionality in pandas and is part of the [ecosystem](https://pandas.pydata.org/community/ecosystem.html) page. If you have additional SQL-Server-specific functionality that is not currently covered by that project, they may be interested in contributions.",
"Understood. Thank You.\n\nWhat about the 2 higher level requests that may help many with additional processing and could be classified as database agnostic?\n1) Easily be able to tell which columns have Unicode characters in them.\n2) Getting the true max length of a column when the column can contain a list of items like a list of states? When a data frame is then saved to a file with all the special characters needed, the length in the file can then be greater than the reported length through data frame operations. By having that length discrepancy caused an import error which required that work around I came up with which was multiply by 1.3 and then rounding up to the nearest 100 with a minimum of a 100. An import into any system would potentially have a similar issue.",
"Re 1, `pd.Series.str.is_ascii` was added in #60532, this feature would presumably be the negation.\n\nRe 2, if bcpandas does not already have functionality to set the column sizes of a new table correctly for non-ASCII characters, I imagine that may be a welcome improvement. That said also I do believe MSFT SQL Server [now supports UTF-8](https://techcommunity.microsoft.com/blog/sqlserver/introducing-utf-8-support-for-sql-server/734928) and the code for UTF-8 in SQL Server is varchar not nvarchar (though it does seem like you need the true byte length for the field). \n",
"This does not seem like it belongs in pandas."
] |
2,977,566,483 | 61,246 | STY: Bump pre-commit checks | closed | 2025-04-07T17:46:30 | 2025-04-07T21:21:35 | 2025-04-07T21:21:32 | https://github.com/pandas-dev/pandas/pull/61246 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61246 | https://github.com/pandas-dev/pandas/pull/61246 | mroeschke | 0 | Supersedes https://github.com/pandas-dev/pandas/pull/61243 | [
"Code Style"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] |
2,977,538,258 | 61,245 | BUG: remove_unused_levels does not keep index levels order | open | 2025-04-07T17:32:22 | 2025-04-08T21:19:10 | null | https://github.com/pandas-dev/pandas/issues/61245 | true | null | null | mathman79 | 2 | ### Pandas version checks
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [x] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
import pandas as pd
df = pd.DataFrame(
[
("aap", "1991-01-02", 100.0000),
("aap", "2024-12-24", 75.7575),
("noot", "1960-01-04", 11.111),
("noot", "2024-12-24", 123.45),
("noot", "2024-12-30", 321.54),
],
columns=["name", "date", "value"],
).set_index(["name", "date"])["value"]
index = df.iloc[:-1].copy().index
assert all(index.levels[-1] == sorted(index.levels[-1]))
index2 = index.remove_unused_levels()
assert all(index2.levels[-1] == sorted(index2.levels[-1]))
```
### Issue Description
Order of the MultiIndex level is not kept. This causes issues with downstream code like `unstack` producing mis-ordered output:
```
import pandas as pd
df = pd.DataFrame(
[
("aap", "1991-01-02", 100.0000),
("aap", "2024-12-24", 75.7575),
("noot", "1960-01-04", 11.111),
("noot", "2024-12-24", 123.45),
("noot", "2024-12-30", 321.54),
],
columns=["name", "date", "value"],
).set_index(["name", "date"])["value"]
df.iloc[:-1].unstack(level=0)
```
### Expected Behavior
I expect that the current order of the MultiIndex level is kept.
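Until this is resolved, the idea sketched in the comments (remap codes onto the used positions) can be packaged as a workaround. `remove_unused_levels_sorted` is a hypothetical helper name, and the sketch assumes the index has no missing values (no `-1` codes):

```python
import numpy as np
import pandas as pd

def remove_unused_levels_sorted(index: pd.MultiIndex) -> pd.MultiIndex:
    """Drop unused level values while preserving each level's original order."""
    new_levels, new_codes = [], []
    for code, level in zip(index.codes, index.levels):
        used = np.unique(np.asarray(code))             # positions actually used, sorted
        new_codes.append(np.searchsorted(used, code))  # remap codes onto the kept values
        new_levels.append(level.take(used))            # keep level values in original order
    return pd.MultiIndex(levels=new_levels, codes=new_codes, names=index.names)
```

Applied to the reproducer above, the last date level stays sorted and the index values are unchanged.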
### Installed Versions
<details>
INSTALLED VERSIONS
------------------
commit : f538741432edf55c6b9fb5d0d496d2dd1d7c2457
python : 3.9.13.final.0
python-bits : 64
OS : Windows
OS-release : 10
Version : 10.0.19045
machine : AMD64
processor : Intel64 Family 6 Model 85 Stepping 7, GenuineIntel
byteorder : little
LC_ALL : None
LANG : en_US.UTF-8
LOCALE : English_United States.1252
pandas : 2.2.0
numpy : 1.26.4
pytz : 2023.3
dateutil : 2.8.2
setuptools : 63.4.1
pip : 24.3.1
Cython : None
pytest : None
hypothesis : None
sphinx : None
blosc : None
feather : None
xlsxwriter : None
lxml.etree : 4.8.0
html5lib : 1.1
pymysql : None
psycopg2 : None
jinja2 : None
IPython : 8.7.0
pandas_datareader : 0.10.0
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : 4.12.2
bottleneck : 1.3.7
dataframe-api-compat : None
fastparquet : None
fsspec : 2024.10.0
gcsfs : None
matplotlib : None
numba : None
numexpr : 2.8.8
odfpy : None
openpyxl : None
pandas_gbq : None
pyarrow : None
pyreadstat : None
python-calamine : None
pyxlsb : None
s3fs : 2024.10.0
scipy : 1.13.1
sqlalchemy : None
tables : None
tabulate : 0.9.0
xarray : None
xlrd : 2.0.1
zstandard : None
tzdata : 2023.3
qtpy : None
pyqt5 : None
</details>
| [
"Bug",
"MultiIndex",
"Needs Discussion",
"Sorting"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Naively I would expected `remove_unused_index_levels` do something like below (assuming we always want the levels to be sorted):\n\n```\ndef remove_unused_index_levels(index: pd.MultiIndex) -> pd.MultiIndex:\n \"\"\"Remove unused index levels, keeping levels ordered.\"\"\"\n codes, levels, names = index_codes_levels_names(index)\n for i, (code, level) in enumerate(zip(codes, levels)):\n uniq_code = np.unique(code)\n codes[i] = np.searchsorted(uniq_code, code)\n levels[i] = level[uniq_code]\n return pd.MultiIndex(levels, codes, names=names)\n```",
"Thanks for the report. Agreed with the expected behavior that removing unused index levels should not modify the output of other operations down the line. However it's not clear to me if the order of the index levels should be an implementation detail of MultiIndex (and thus, the issue is with unstack), or if the index levels should have an influence on things like sorting. Further investigations are welcome, marking this as Needs Discussion for now."
] |
2,977,449,661 | 61,244 | BUG: Handle overlapping line and scatter on the same plot | closed | 2025-04-07T16:50:39 | 2025-04-09T16:44:13 | 2025-04-09T16:28:51 | https://github.com/pandas-dev/pandas/pull/61244 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61244 | https://github.com/pandas-dev/pandas/pull/61244 | MartinBraquet | 2 | - [x] closes #61005
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [x] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [x] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
Running the code from the issue above now shows a correct plot.
```python
import datetime
import matplotlib.pyplot as plt
import pandas as pd
datetime_list = [datetime.datetime(year=2025, month=1, day=1, hour=n) for n in range(23)]
y = [n for n in range(23)]
df = pd.DataFrame(columns=['datetime', 'y'])
for i, n in enumerate(datetime_list):
df.loc[len(df)] = [n, y[i]]
fig, ax = plt.subplots(2, sharex=True)
df.plot.scatter(x='datetime', y='y', ax=ax[0])
df.plot(x='datetime', y='y', ax=ax[1])
```

| [
"Visualization"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"@mroeschke Thanks for the feedback; I applied your comments.",
"Thanks @MartinBraquet "
] |
2,977,402,605 | 61,243 | [pre-commit.ci] pre-commit autoupdate | closed | 2025-04-07T16:29:50 | 2025-04-07T17:47:10 | 2025-04-07T17:46:45 | https://github.com/pandas-dev/pandas/pull/61243 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61243 | https://github.com/pandas-dev/pandas/pull/61243 | pre-commit-ci[bot] | 0 | <!--pre-commit.ci start-->
updates:
- [github.com/astral-sh/ruff-pre-commit: v0.9.9 → v0.11.4](https://github.com/astral-sh/ruff-pre-commit/compare/v0.9.9...v0.11.4)
- [github.com/pre-commit/mirrors-clang-format: v19.1.7 → v20.1.0](https://github.com/pre-commit/mirrors-clang-format/compare/v19.1.7...v20.1.0)
- [github.com/trim21/pre-commit-mirror-meson: v1.7.0 → v1.7.2](https://github.com/trim21/pre-commit-mirror-meson/compare/v1.7.0...v1.7.2)
<!--pre-commit.ci end--> | [] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] |
2,977,179,313 | 61,242 | No Windows free-threaded wheel available in scientific-python-nightly-wheels | closed | 2025-04-07T15:06:16 | 2025-04-09T17:56:31 | 2025-04-09T17:56:31 | https://github.com/pandas-dev/pandas/issues/61242 | true | null | null | lesteve | 2 | In scikit-learn we noticed there are no Windows free-threaded development wheel in [scientific-python-nightly-wheels](https://anaconda.org/scientific-python-nightly-wheels/pandas/files).
The reason seems to be that your Wheels builder has failed consistently for more than a week, see [build logs](https://github.com/pandas-dev/pandas/actions/workflows/wheels.yml?query=event%3Aschedule).
The failures only happen for Windows free-threaded, i.e. `cp313t-win_amd64`. I had a quick look at one of the logs; there are almost 300 failures, and plenty of them seem to be related to indexes with timestamps ...
One thing I did notice is that you are still using numpy development wheel for Windows free-threaded and I think using a released numpy may be good enough since numpy 2.2.4 (and probably a few earlier versions as well) has a free-threaded wheel for Windows, see [PyPI numpy info](https://pypi.org/project/numpy/#files). | [] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Thanks for the ping. We've started an attempt to try to root cause these failures in https://github.com/pandas-dev/pandas/pull/61240 but not much progress yet ",
"In Arrow we have temporarily disabled testing with pandas for Windows free-threaded to unblock our 20.0.0 release. We will re-enable once this issue is fixed. Thanks!"
] |
2,975,301,216 | 61,241 | DOC Update the awkward-pandas GitHub link | closed | 2025-04-06T23:18:27 | 2025-04-07T16:35:37 | 2025-04-07T16:35:30 | https://github.com/pandas-dev/pandas/pull/61241 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61241 | https://github.com/pandas-dev/pandas/pull/61241 | star1327p | 1 | In ecosystems.md, the awkward-pandas link should be:
https://github.com/scikit-hep/awkward
The old link does not work:
https://awkward-pandas.readthedocs.io/
- [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| [
"Docs"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Thanks @star1327p "
] |
2,975,022,712 | 61,240 | BLD/CI: Try to fix the Windows Python 3.13t wheel build | closed | 2025-04-06T15:27:16 | 2025-04-10T12:26:57 | 2025-04-10T12:26:52 | https://github.com/pandas-dev/pandas/pull/61240 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61240 | https://github.com/pandas-dev/pandas/pull/61240 | lithomas1 | 3 | - [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| [
"Build",
"Windows",
"Python 3.13"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"FYI for anyone interested:\r\nThe Windows wheel builder CI (just the free-threaded builds) has been failing for the past 2 weeks. It seems like there's something wrong with the datetime code. I already took a look at numpy/tzdata versions, but those don't seem to be the problem (and I'm not sure what to look at next).",
"Something you may want to try (in particular to rule out a numpy dev change) is to try to use a released numpy instead of numpy dev.\r\n\r\nAs I mentioned in https://github.com/pandas-dev/pandas/issues/61242 using numpy dev was needed historically for free-threaded, but there has been a few numpy releases with free-threaded wheels (including Windows).\r\n",
"I opened a draft PR with the changes to use numpy release https://github.com/pandas-dev/pandas/pull/61248.\r\n\r\nNot familiar with pandas setup, but probably someone needs to add the label \"Build\" to my PR or try to push similar changes in this PR branch, whatever seems easier :wink:."
] |
2,974,932,003 | 61,239 | made changes | closed | 2025-04-06T12:42:28 | 2025-04-06T12:42:37 | 2025-04-06T12:42:37 | https://github.com/pandas-dev/pandas/pull/61239 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61239 | https://github.com/pandas-dev/pandas/pull/61239 | Vaishnav-raj-vp | 0 | - [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| [] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] |
2,974,899,214 | 61,238 | DOC: Added docstrings to min, max, and reso | closed | 2025-04-06T11:42:54 | 2025-04-07T16:58:00 | 2025-04-07T16:57:53 | https://github.com/pandas-dev/pandas/pull/61238 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61238 | https://github.com/pandas-dev/pandas/pull/61238 | j-hendricks | 1 | Closes #59458
Added docstrings to `min`, `max`, and `resolution` for class `Timestamp`. Used same approach as seen in PR #61119
- [x] closes #59458 (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| [
"Docs"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Thanks @j-hendricks "
] |
2,974,876,714 | 61,237 | ENH: Add dropna parameter to Series.unique() (fixes #61209) | closed | 2025-04-06T11:00:59 | 2025-05-15T16:10:28 | 2025-05-15T16:10:28 | https://github.com/pandas-dev/pandas/pull/61237 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61237 | https://github.com/pandas-dev/pandas/pull/61237 | sahermuhamed1 | 1 | - [x] closes #xxxx (Replace xxxx with the GitHub issue number)
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
### Changes:
- Added `dropna` parameter to `Series.unique()` (default=True)
- Ensured backward compatibility
- Added comprehensive test coverage
### Notes:
- Changes split into logical commits:
1. Core functionality (ENH)
2. Test coverage (TST) | [] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Thanks for the pull request, but it appears to have gone stale. If interested in continuing, please merge in the main branch, address any review comments and/or failing tests, and we can reopen."
] |
2,974,721,242 | 61,236 | BUG: Pyarrow timestamp support for map() function | closed | 2025-04-06T06:10:47 | 2025-05-21T16:13:33 | 2025-05-21T16:13:33 | https://github.com/pandas-dev/pandas/pull/61236 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61236 | https://github.com/pandas-dev/pandas/pull/61236 | arthurlw | 2 | - [x] closes #61231 (Replace xxxx with the GitHub issue number)
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] ~Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.~
- [x] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| [
"Stale",
"Arrow"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"This pull request is stale because it has been open for thirty days with no activity. Please [update](https://pandas.pydata.org/pandas-docs/stable/development/contributing.html#updating-your-pull-request) and respond to this comment if you're still interested in working on this.",
"Thanks for the pull request, but it appears to have gone stale. If interested in continuing, please merge in the main branch, address any review comments and/or failing tests, and we can reopen."
] |
2,974,486,431 | 61,235 | ENH: Add dropna parameter to Series.unique() (fixes #61209) | closed | 2025-04-05T20:30:17 | 2025-05-15T16:10:56 | 2025-05-15T16:10:56 | https://github.com/pandas-dev/pandas/pull/61235 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61235 | https://github.com/pandas-dev/pandas/pull/61235 | sahermuhamed1 | 1 | - [x] closes #xxxx (Replace xxxx with the GitHub issue number)
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [x] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| [] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Thanks for the pull request, but it appears to have gone stale. If interested in continuing, please merge in the main branch, address any review comments and/or failing tests, and we can reopen."
] |
2,974,366,339 | 61,234 | BUG: Fix DatetimeIndex timezone preservation when joining indexes with same timezone but different units | closed | 2025-04-05T17:08:57 | 2025-05-30T18:22:01 | 2025-05-30T18:21:53 | https://github.com/pandas-dev/pandas/pull/61234 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61234 | https://github.com/pandas-dev/pandas/pull/61234 | myenugula | 5 | - [x] closes #60080
- [x] [Tests added and passed]
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [x] Added [type annotations]
- [x] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| [
"Bug",
"Dtype Conversions",
"Timezones"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Hi @rhshadrach, could you please check this one out?",
"I agree with @rhshadrach's comment on splitting/parametrizing the test, otherwise this LGTM",
"Hi @rhshadrach, I see that you've requested changes. Could you please clarify what exactly needs to be changed? as I've already made the changes you've requested about `assert result.tz == idx1.tz`",
"> Hi @rhshadrach, I see that you've requested changes. Could you please clarify what exactly needs to be changed? as I've already made the changes you've requested about `assert result.tz == idx1.tz`\r\n\r\nOnce changes are requested, the state doesn't update until another review is submitted.",
"Thanks @myenugula "
] |
2,974,321,187 | 61,233 | BUG: Fix scatter plot colors in groupby context to match line plot behavior (#59846) | closed | 2025-04-05T16:16:14 | 2025-07-28T17:19:06 | 2025-07-28T17:19:06 | https://github.com/pandas-dev/pandas/pull/61233 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61233 | https://github.com/pandas-dev/pandas/pull/61233 | myenugula | 5 | - [x] closes #59846
- [x] [Tests added and passed]
- [x] All [code checks passed]
- [x] Added [type annotations]
- [x] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| [
"Bug",
"Groupby",
"Visualization",
"Stale"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"I removed all unnecessary comments in the test function",
"This pull request is stale because it has been open for thirty days with no activity. Please [update](https://pandas.pydata.org/pandas-docs/stable/development/contributing.html#updating-your-pull-request) and respond to this comment if you're still interested in working on this.",
"Thanks for the pull request, but it appears to have gone stale. If interested in continuing, please merge in the main branch, address any review comments and/or failing tests, and we can reopen.",
"@mroeschke I've merged in the main branch. Could you please reopen this PR so I can run it through the GitHub Actions and do further changes if needed? ",
"Thanks for the pull request, but it appears to have gone stale. If interested in continuing, please merge in the main branch, address any review comments and/or failing tests, and we can reopen."
] |
2,972,876,056 | 61,232 | ENH: The method of obtaining a certain cell or slice of the dataframe is confusing and unclear | closed | 2025-04-04T16:27:25 | 2025-08-05T16:30:20 | 2025-08-05T16:30:20 | https://github.com/pandas-dev/pandas/issues/61232 | true | null | null | zyy37 | 3 | ### Feature Type
- [ ] Adding new functionality to pandas
- [x] Changing existing functionality in pandas
- [ ] Removing existing functionality in pandas
### Problem Description
The methods for obtaining a single cell or a slice of a DataFrame are confusing and unclear: `loc`, `iloc`, `at`, `iat`, and the `[]` operator all overlap. For example, `df.loc[row_label, col_label]` and `df.iloc[row_index, col_index]`. Since `loc` takes parameters and behaves like a member function rather than a plain attribute, one would expect the call operator `()`; instead the subscript operator `[]` is used, whose object is usually a container instance, which is one confusing aspect.
Likewise, when taking a row and a column at the same time, as in `value = df.loc[1, 'B']`, the `[]` operator carries both the horizontal and vertical coordinates; but when taking one row and two columns, as in `row_data = df.loc['row_label', ['col1', 'col2']]`, the inner `[]` holds only the column coordinates and behaves like a `list` or `tuple` rather than a coordinate pair, which is another confusing aspect.
In mathematics, coordinates such as (3, 4) denote the horizontal and vertical components and use the `()` operator. I hope the operation rules you define conform to common conventions and benchmark against comparable libraries such as NumPy. Thank you.
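For readers weighing this complaint, a minimal illustration of the label-versus-position distinction the accessors encode; the frame below is made up, with labels deliberately out of order so the two accessors disagree:

```python
import pandas as pd

# An index whose labels do not match positions makes the difference visible.
df = pd.DataFrame({"B": [10, 20, 30]}, index=[2, 1, 0])

assert df.loc[0, "B"] == 30    # .loc: the row whose *label* is 0 (the last row)
assert df.iloc[0]["B"] == 10   # .iloc: the row at *position* 0 (the first row)
assert df.at[1, "B"] == 20     # .at: fast scalar access by label
assert df.iat[1, 0] == 20      # .iat: fast scalar access by position
```

The square brackets mirror Python's own `dict[key]` / `list[index]` subscription rather than mathematical function application, which is the design choice the report objects to.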
### Feature Description
n/a
### Alternative Solutions
n/a
### Additional Context
_No response_ | [
"Enhancement",
"Needs Triage",
"Closing Candidate"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Eh, this is reaching at best. If it ain't broke don't fix it",
"This is the sort of stuff that would have been discussed in the earlier stages of a project but pandas in far too deep. Adding call syntax to `_Loc` is probably not gonna happen as it's a common complaint that the API is already way too verbose with choices.",
"Thanks for the suggestion but this is long standing behavior in pandas and won't change in the future. Closing"
] |
2,972,750,429 | 61,231 | BUG: PyArrow timestamp type does not work with map() function | open | 2025-04-04T15:34:54 | 2025-06-07T11:41:43 | null | https://github.com/pandas-dev/pandas/issues/61231 | true | null | null | dbalabka | 3 | ### Pandas version checks
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [x] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
# Reproduction steps that do not work with the PyArrow type:
df = pd.DataFrame({"a": pd.date_range("2018-01-01 00:00:00", "2018-01-07 00:00:00")}).astype({"a": "timestamp[ns][pyarrow]"})
date2pos = {date: i for i, date in enumerate(df['a'])}
df["a"].map(date2pos)
0 NaN
1 NaN
2 NaN
3 NaN
4 NaN
5 NaN
6 NaN
Name: a, dtype: float64
```
### Issue Description
For some reason `pd.DataFrame.map()` function does not work with PyArrow `timestamp[ns][pyarrow]` type and does not map values.
### Expected Behavior
Here is an expected behavior that works with the default pandas type `datetime64[ns]`:
```
df = pd.DataFrame({"a": pd.date_range("2018-01-01 00:00:00", "2018-01-07 00:00:00")})
date2pos = {date: i for i, date in enumerate(df['a'])}
df["a"].map(date2pos)
```
```
0 0
1 1
2 2
3 3
4 4
5 5
6 6
Name: a, dtype: int64
```
### Installed Versions
<details>
INSTALLED VERSIONS
------------------
commit : 0691c5cf90477d3503834d983f69350f250a6ff7
python : 3.11.7
python-bits : 64
OS : Linux
OS-release : 5.15.167.4-microsoft-standard-WSL2
Version : #1 SMP Tue Nov 5 00:21:55 UTC 2024
machine : x86_64
processor : x86_64
byteorder : little
LC_ALL : None
LANG : C.UTF-8
LOCALE : en_US.UTF-8
pandas : 2.2.3
numpy : 1.26.3
pytz : 2025.1
dateutil : 2.8.2
pip : 23.2.1
Cython : None
sphinx : None
IPython : 8.20.0
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : 4.12.3
blosc : None
bottleneck : None
dataframe-api-compat : None
fastparquet : None
fsspec : 2023.12.2
html5lib : None
hypothesis : None
gcsfs : 2023.12.2post1
jinja2 : 3.1.3
lxml.etree : None
matplotlib : 3.8.2
numba : 0.60.0
numexpr : None
odfpy : None
openpyxl : 3.1.2
pandas_gbq : None
psycopg2 : None
pymysql : None
pyarrow : 14.0.2
pyreadstat : None
pytest : 7.4.4
python-calamine : None
pyxlsb : None
s3fs : 2023.12.2
scipy : 1.12.0
sqlalchemy : 2.0.29
tables : None
tabulate : 0.9.0
xarray : None
xlrd : None
xlsxwriter : None
zstandard : None
tzdata : 2023.4
qtpy : None
pyqt5 : None
</details>
| [
"Bug",
"Apply",
"Arrow"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Thanks for reporting. Confirmed on main. Investigations and PRs to fix are welcome.",
"take",
"take"
] |
2,972,742,661 | 61,230 | ENH: The row and column indexing mechanism of your dataframe is inefficient, leading to errors and unnecessary time consumption | open | 2025-04-04T15:32:05 | 2025-06-29T16:01:27 | null | https://github.com/pandas-dev/pandas/issues/61230 | true | null | null | zyy37 | 3 | ### Feature Type
- [ ] Adding new functionality to pandas
- [x] Changing existing functionality in pandas
- [ ] Removing existing functionality in pandas
### Problem Description
The row and column indexing mechanism of your dataframe is inefficient, leading to errors and unnecessary time consumption for users. When two dataframes are merged or concatenated horizontally or vertically, duplicate index labels can result. If the index is then iterated in a `for` loop, an operation can be repeated twice in one iteration, which is a typical scenario that leads to calculation errors. For example,
```Python
df = pd.concat([df1, df2]).drop_duplicates('title')
df.reset_index(drop=True, inplace=True) # this expression must be included every time, otherwise duplicate indexes will cause loop iteration errors.
df['name'] = None
for idx, row in df.iterrows():
name_list = ['mike', 'jake', 'cook']
df.at[idx, 'name'] = ",".join(name_list)
```
If the expression `df.reset_index(drop=True, inplace=True)` is omitted, this cell ends up with two copies of `name_list` instead of the one written in the code: `(Pdb) p df.at[idx, 'name'].index Index([1, 1], dtype='int64')`.
So I hope that when the rows or columns of the dataframe change, the index can be maintained automatically as an internal mechanism, just like C++ vectors or arrays: after deletion and removal, the index or iterator is automatically kept as a contiguous sequence, and users do not need to manage it. This is also a matter of competitor analysis and benchmarking. I hope for improvement. Thank you.
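As a note on the workflow described above, the duplicate-label pitfall can be avoided at the `concat` step itself, and the per-row loop replaced by one vectorized assignment; the small frames below are made up for illustration:

```python
import pandas as pd

df1 = pd.DataFrame({"title": ["a", "b"]})
df2 = pd.DataFrame({"title": ["b", "c"]})

# ignore_index=True hands the result a fresh RangeIndex, so no
# duplicate labels survive the concatenation.
df = pd.concat([df1, df2], ignore_index=True).drop_duplicates("title")

# One vectorized assignment replaces the iterrows() loop entirely.
df["name"] = ",".join(["mike", "jake", "cook"])
```

With `ignore_index=True` there is no need for the follow-up `reset_index` call, and the assignment writes each row exactly once.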
### Feature Description
n/a
### Alternative Solutions
n/a
### Additional Context
_No response_ | [
"Enhancement",
"Needs Triage",
"Closing Candidate"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"You should avoid `inplace=True` and `iterrows`. Calling `reset_index` is also often questionable (do it only when needed). If you don't have a meaningful index, just do `pd.concat(..., ignore_index=True)`\n\nIterating the DF in a for loop via `iterrows` is the worst thing you can do performance-wise. Best forget that `iterrows` even exists. You don't need it! Your loop can be reduced to `df[\"name\"] = \",\".join(name_list)` which is fast.\n\nI fail to see a Pandas problem here, it looks to me like a \"don't know how to use Pandas efficiently\" problem. So at best it is a documentation issue. In addition, I think `iterrows` should be deprecated or removed entirely.",
"How do you explain that the appearance of two duplicate indexes would lead to duplicate iterations, for example, `(Pdb) p df.at[idx, 'name'].index Index([1, 1], dtype='int64')`. Why don't you automatically clear duplicate indexes internally? This is a potential error. Does the developer know that there are duplicate indexes in each calculation result? If you provide a function, make sure there are no fatal loopholes. The `row` scenario of `iterrows` is what users need, for example, `row` can be computed in parallel. You're right. I just started using pandas, and what I wasted a lot of time doing during the development process was testing the performance of different methods.",
"> Why don't you automatically clear duplicate indexes internally?\n\nHow would you do that? Consider a datetime index. You might have two data points on the same date. I think your misconception is that the index is a unique integral value. It can be but it doesn't have to be. \n```\nx = pd.DataFrame(\n {\"some_measurement\": [10.5, 11.1, 9.2, 9.7]}, \n index=pd.to_datetime([\"2025-05-01\",\"2025-05-01\", \"2025-05-02\", \"2025-05-03\"])\n)\n```\nwould you drop the 10.5 or the 11.1 measurement and why? That would be very error prone! On the 1st of may you have 2 measurements. One is as important as the other.\n\nOf course, some operations may require a unique index.\n\n> The row scenario of iterrows is what users need\n\nNo. Working with single row data is precisely what you need to avoid. What you need to do is to perform the calculation for all rows at the same time as in `df[new_column] = df[column1] + df[column2]`. This performs the addition for all rows. Try not to do operations on individual cells.\n\nThis is certainly not the right place to discuss this further."
] |
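The vectorized pattern the commenters recommend can be shown in a self-contained sketch (illustrative only, not code from the thread):

```python
import pandas as pd

df = pd.DataFrame({"a": range(5), "b": range(5, 10)})

# Vectorized: the addition is applied to all rows at once.
df["sum"] = df["a"] + df["b"]

# Anti-pattern: iterating row by row with iterrows is far slower and,
# as the thread shows, easy to get wrong with duplicate index labels.
slow = [row["a"] + row["b"] for _, row in df.iterrows()]

assert df["sum"].tolist() == slow  # same result, very different speed
```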
2,970,740,046 | 61,229 | BUG: Fix #61222: Keep index name when resampling with pyarrow dtype | closed | 2025-04-03T21:39:09 | 2025-04-07T16:55:16 | 2025-04-07T16:55:09 | https://github.com/pandas-dev/pandas/pull/61229 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61229 | https://github.com/pandas-dev/pandas/pull/61229 | mthiboust | 2 | - [x] closes #61222
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [x] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [x] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| [
"Resample",
"Arrow"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"@mroeschke Thanks for your review. I addressed your comments.",
"Thanks @mthiboust "
] |
2,970,419,833 | 61,228 | Fix false friends in implicit string concatenation in tests | closed | 2025-04-03T18:52:35 | 2025-04-03T21:53:19 | 2025-04-03T21:53:11 | https://github.com/pandas-dev/pandas/pull/61228 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61228 | https://github.com/pandas-dev/pandas/pull/61228 | jbdyn | 3 | [In a PR](https://github.com/nvim-treesitter/nvim-treesitter/pull/7788) about how to syntax-highlight Python docstrings, @rmuir and I discovered that, instead of implicit concatenation of two strings, a string followed by a docstring had been written.
Since this is a subtle one, I want to briefly show the differences:
The false friend of implicit string concatenation
```python
# var == "foo"
var = "foo" # <-- no implicit string concatenation
"bar" # <-- docstring, legal for the bytecode compiler, against PEP 257
```
can be fixed for example with surrounding brackets:
```python
# var == "foobar"
var = (
"foo" # <-- gets implicitly concatenated
"bar"
)
```
I took the liberty of fixing the ones I found right away so that the tests pass.
I searched with [`ripgrep`](https://github.com/BurntSushi/ripgrep) like so:
```sh
# in path/to/cloned/pandas
rg -A 1 -B 2 -U ' = f?"[^"]*"\s+f?"[^"]+"\s*' pandas/
```
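An AST-based check (a sketch, not something this PR adds) can flag the same pattern, since the accidental "docstring" parses as a bare string expression statement:

```python
import ast

source = '''
var = "foo"
"bar"
'''

# A stray string after an assignment appears in the AST as an Expr node
# wrapping a string constant, rather than as part of the assignment.
tree = ast.parse(source)
stray = [
    node.lineno
    for node in tree.body
    if isinstance(node, ast.Expr)
    and isinstance(node.value, ast.Constant)
    and isinstance(node.value.value, str)
]
print(stray)  # → [3]
```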
I am pretty confident, but not entirely sure, that I have caught all cases. :thinking: | [
"Testing"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"As a side note:\r\n\r\nThe tests passed before because they called [`assert_produces_warning`](https://github.com/pandas-dev/pandas/blob/main/pandas/_testing/_warnings.py#L29), which in turn calls [`_assert_caught_expected_warnings`](https://github.com/pandas-dev/pandas/blob/main/pandas/_testing/_warnings.py#L157), where `re.search` is used:\r\n\r\nhttps://github.com/pandas-dev/pandas/blob/04356be7d385dcb99d3040e37a85e1030afb259b/pandas/_testing/_warnings.py#L181-L183\r\n\r\nThe false friends only gave the first part of the full string to match (the rest was discarded as it was detected as docstring), but `re.search` still gives a positive match in that case.\r\n\r\nMaybe it should be `re.fullmatch` instead? :shrug: ",
"> I am pretty confident, but not entirely sure, to have catched all cases. 🤔\r\n\r\nI rechecked the logfile from the script I ran yesterday: these are the same files and lines with highlight differences, so I think you found them all.",
"Thanks @jbdyn "
] |
2,970,350,606 | 61,227 | DOC Removed excessive Plotly links in ecosystem.md | closed | 2025-04-03T18:20:10 | 2025-04-03T18:41:29 | 2025-04-03T18:41:21 | https://github.com/pandas-dev/pandas/pull/61227 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61227 | https://github.com/pandas-dev/pandas/pull/61227 | star1327p | 1 | Removed excessive Plotly links in `ecosystem.md`.
- [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| [
"Docs"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Thanks @star1327p "
] |
2,970,075,919 | 61,226 | BUG: Fix #61221: Exception with unstack(sort=False) and NA in index. | closed | 2025-04-03T16:05:18 | 2025-07-28T17:18:14 | 2025-07-28T17:18:14 | https://github.com/pandas-dev/pandas/pull/61226 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61226 | https://github.com/pandas-dev/pandas/pull/61226 | gsmll | 2 | - [ ✔️ ] closes #61221
- [✔️ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [✔️ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [✔️ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ✔️] Added an entry in the latest `doc/source/whatsnew/v3.0.0.rst` file if fixing a bug or adding a new feature.
| [
"Stale"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"This pull request is stale because it has been open for thirty days with no activity. Please [update](https://pandas.pydata.org/pandas-docs/stable/development/contributing.html#updating-your-pull-request) and respond to this comment if you're still interested in working on this.",
"Thanks for the pull request, but it appears to have gone stale. Additionally it appears there's a commit for an image we do not want in our history.\r\n\r\nIf interested in continuing, I'd recommend opening a new PR"
] |
2,969,751,841 | 61,225 | BUG: Fix #57608: queries on categorical string columns in HDFStore.select() return unexpected results. | closed | 2025-04-03T14:15:29 | 2025-05-20T15:57:41 | 2025-05-20T15:57:34 | https://github.com/pandas-dev/pandas/pull/61225 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61225 | https://github.com/pandas-dev/pandas/pull/61225 | SofiaSM45 | 4 | In function __init__() of class Selection (pandas/core/io/pytables.py), the method self.terms.evaluate() was not returning the correct value for the where condition. The issue stemmed from the function convert_value() of class BinOp (pandas/core/computation/pytables.py), where the function searchsorted() did not return the correct index when matching the where condition in the metadata (categories table). Replacing searchsorted() with np.where() resolves this issue.
- [x] closes #57608
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [x] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| [
"IO HDF5"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"This pull request is stale because it has been open for thirty days with no activity. Please [update](https://pandas.pydata.org/pandas-docs/stable/development/contributing.html#updating-your-pull-request) and respond to this comment if you're still interested in working on this.",
"pre-commit.ci autofix",
"Rebased to include recent upstream changes. I apologize for the unused import in my earlier commit; thank you very much for the quick fix!",
"Thanks @SofiaSM45 "
] |
2,969,275,509 | 61,224 | ENH: Implement loading and dumping to and from YAML | closed | 2025-04-03T11:30:41 | 2025-04-03T15:12:32 | 2025-04-03T15:12:31 | https://github.com/pandas-dev/pandas/issues/61224 | true | null | null | acampove | 2 | ### Feature Type
- [x] Adding new functionality to pandas
- [ ] Changing existing functionality in pandas
- [ ] Removing existing functionality in pandas
### Problem Description
Hi,
I am nowadays mostly working with YAML and have moved away from JSON, given that it has proven far superior in terms of user-friendliness (e.g. readability). However, I am missing a quick way to load and dump data between YAML and pandas.
### Feature Description
```python
df = pd.from_yaml('/path/to/my/yaml/file.yml')
df.to_yaml('/path/to/my/yaml/file.yml')
```
### Alternative Solutions
I can write my wrapper to do all the dirty work myself for now.
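For reference, such a wrapper can be quite small (a sketch assuming PyYAML is installed; `df_to_yaml`/`df_from_yaml` are hypothetical names, not pandas API, and record-based round-tripping loses dtype information):

```python
import pandas as pd
import yaml  # PyYAML


def df_to_yaml(df: pd.DataFrame, path: str) -> None:
    # Dump the frame as a list of row records; simple, but lossy for dtypes.
    with open(path, "w") as fh:
        yaml.safe_dump(df.to_dict(orient="records"), fh)


def df_from_yaml(path: str) -> pd.DataFrame:
    with open(path) as fh:
        return pd.DataFrame(yaml.safe_load(fh))
```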
### Additional Context
_No response_ | [
"Enhancement",
"Needs Triage"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"For your reference, I implemented it in my utilities library [here](https://github.com/acampove/dmu/tree/master?tab=readme-ov-file#dataframe-to-and-from-yaml) but this should be integrated into pandas itself.",
"Thanks for the suggestion, but this was suggested and rejected in https://github.com/pandas-dev/pandas/issues/35421 so I don't think this will change so closing"
] |
2,969,235,408 | 61,223 | BUG: setting item to iterable with .at fails when column doesn't exist or has wrong dtype | open | 2025-04-03T11:15:33 | 2025-04-10T18:44:28 | null | https://github.com/pandas-dev/pandas/issues/61223 | true | null | null | jbogar | 6 | ### Pandas version checks
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [ ] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
import pandas as pd
df=pd.DataFrame(data=[[1,2],[3,4]],index=["a","b"],columns=["A","B"])
df.at["a","C"]=[1,2,3]
```
### Issue Description
When using `.at` to set a cell value to an iterable, it fails if it has to create a new column.
It works fine if setting the cell value to a scalar (like `df.at["a","C"]=1`).
It also fails if the column exists but is of the wrong dtype.
This fails:
```df.at["a","A"]=[1,2,3]```
But this works:
```
df.loc[:,"A"]=df.A.astype(object)
df.at["a","A"]=[1,2,3]
```
The error trace:
```
KeyError Traceback (most recent call last)
File ~/miniconda3/lib/python3.12/site-packages/pandas/core/indexes/base.py:3805, in Index.get_loc(self, key)
3804 try:
-> 3805 return self._engine.get_loc(casted_key)
3806 except KeyError as err:
File index.pyx:167, in pandas._libs.index.IndexEngine.get_loc()
File index.pyx:196, in pandas._libs.index.IndexEngine.get_loc()
File pandas/_libs/hashtable_class_helper.pxi:7081, in pandas._libs.hashtable.PyObjectHashTable.get_item()
File pandas/_libs/hashtable_class_helper.pxi:7089, in pandas._libs.hashtable.PyObjectHashTable.get_item()
KeyError: 'C'
The above exception was the direct cause of the following exception:
KeyError Traceback (most recent call last)
File ~/miniconda3/lib/python3.12/site-packages/pandas/core/frame.py:4561, in DataFrame._set_value(self, index, col, value, takeable)
4560 else:
-> 4561 icol = self.columns.get_loc(col)
4562 iindex = self.index.get_loc(index)
File ~/miniconda3/lib/python3.12/site-packages/pandas/core/indexes/base.py:3812, in Index.get_loc(self, key)
3811 raise InvalidIndexError(key)
-> 3812 raise KeyError(key) from err
3813 except TypeError:
3814 # If we have a listlike key, _check_indexing_error will raise
3815 # InvalidIndexError. Otherwise we fall through and re-raise
3816 # the TypeError.
KeyError: 'C'
During handling of the above exception, another exception occurred:
ValueError Traceback (most recent call last)
Cell In[1], line 4
1 import pandas as pd
3 df=pd.DataFrame(data=[[1,2],[3,4]],index=["a","b"],columns=["A","B"])
----> 4 df.at["a","C"]=[1,2,3]
File ~/miniconda3/lib/python3.12/site-packages/pandas/core/indexing.py:2586, in _AtIndexer.__setitem__(self, key, value)
2583 self.obj.loc[key] = value
2584 return
-> 2586 return super().__setitem__(key, value)
File ~/miniconda3/lib/python3.12/site-packages/pandas/core/indexing.py:2542, in _ScalarAccessIndexer.__setitem__(self, key, value)
2539 if len(key) != self.ndim:
2540 raise ValueError("Not enough indexers for scalar access (setting)!")
-> 2542 self.obj._set_value(*key, value=value, takeable=self._takeable)
File ~/miniconda3/lib/python3.12/site-packages/pandas/core/frame.py:4575, in DataFrame._set_value(self, index, col, value, takeable)
4573 self.iloc[index, col] = value
4574 else:
-> 4575 self.loc[index, col] = value
4576 self._item_cache.pop(col, None)
4578 except InvalidIndexError as ii_err:
4579 # GH48729: Seems like you are trying to assign a value to a
4580 # row when only scalar options are permitted
File ~/miniconda3/lib/python3.12/site-packages/pandas/core/indexing.py:911, in _LocationIndexer.__setitem__(self, key, value)
908 self._has_valid_setitem_indexer(key)
910 iloc = self if self.name == "iloc" else self.obj.iloc
--> 911 iloc._setitem_with_indexer(indexer, value, self.name)
File ~/miniconda3/lib/python3.12/site-packages/pandas/core/indexing.py:1890, in _iLocIndexer._setitem_with_indexer(self, indexer, value, name)
1885 self.obj[key] = infer_fill_value(value)
1887 new_indexer = convert_from_missing_indexer_tuple(
1888 indexer, self.obj.axes
1889 )
-> 1890 self._setitem_with_indexer(new_indexer, value, name)
1892 return
1894 # reindex the axis
1895 # make sure to clear the cache because we are
1896 # just replacing the block manager here
1897 # so the object is the same
File ~/miniconda3/lib/python3.12/site-packages/pandas/core/indexing.py:1942, in _iLocIndexer._setitem_with_indexer(self, indexer, value, name)
1939 # align and set the values
1940 if take_split_path:
1941 # We have to operate column-wise
-> 1942 self._setitem_with_indexer_split_path(indexer, value, name)
1943 else:
1944 self._setitem_single_block(indexer, value, name)
File ~/miniconda3/lib/python3.12/site-packages/pandas/core/indexing.py:1998, in _iLocIndexer._setitem_with_indexer_split_path(self, indexer, value, name)
1993 if len(value) == 1 and not is_integer(info_axis):
1994 # This is a case like df.iloc[:3, [1]] = [0]
1995 # where we treat as df.iloc[:3, 1] = 0
1996 return self._setitem_with_indexer((pi, info_axis[0]), value[0])
-> 1998 raise ValueError(
1999 "Must have equal len keys and value "
2000 "when setting with an iterable"
2001 )
2003 elif lplane_indexer == 0 and len(value) == len(self.obj.index):
2004 # We get here in one case via .loc with a all-False mask
2005 pass
ValueError: Must have equal len keys and value when setting with an iterable
```
### Expected Behavior
The cell value is set without errors.
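Until the behavior is settled, the workaround mentioned in the comments (create the column with object dtype first) works as a self-contained sketch:

```python
import pandas as pd

df = pd.DataFrame(data=[[1, 2], [3, 4]], index=["a", "b"], columns=["A", "B"])

# Create the target column with object dtype first; .at can then store a
# list in a single cell without dtype inference getting in the way.
df["C"] = pd.Series(dtype=object)
df.at["a", "C"] = [1, 2, 3]
assert df.at["a", "C"] == [1, 2, 3]
```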
### Installed Versions
<details>
INSTALLED VERSIONS
------------------
commit : 0691c5cf90477d3503834d983f69350f250a6ff7
python : 3.9.21
python-bits : 64
OS : Darwin
OS-release : 24.3.0
Version : Darwin Kernel Version 24.3.0: Thu Jan 2 20:24:23 PST 2025; root:xnu-11215.81.4~3/RELEASE_ARM64_T6020
machine : arm64
processor : arm
byteorder : little
LC_ALL : None
LANG : None
LOCALE : None.UTF-8
pandas : 2.2.3
numpy : 2.0.2
pytz : 2025.1
dateutil : 2.8.2
pip : 25.0.1
Cython : None
sphinx : None
IPython : 8.18.0
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : 4.13.3
blosc : None
bottleneck : None
dataframe-api-compat : None
fastparquet : None
fsspec : 2023.6.0
html5lib : None
hypothesis : None
gcsfs : None
jinja2 : 3.1.6
lxml.etree : 5.3.1
matplotlib : 3.9.4
numba : None
numexpr : 2.10.2
odfpy : None
openpyxl : None
pandas_gbq : None
psycopg2 : None
pymysql : None
pyarrow : 18.1.0
pyreadstat : None
pytest : None
python-calamine : None
pyxlsb : None
s3fs : 2023.6.0
scipy : 1.13.1
sqlalchemy : None
tables : N/A
tabulate : 0.9.0
xarray : None
xlrd : None
xlsxwriter : None
zstandard : None
tzdata : 2023.3
qtpy : None
pyqt5 : None
</details>
| [
"Bug",
"Indexing",
"Needs Discussion"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"The same errors happen using `.loc`.\n\nA workaround is specifying the column first:\n`df[\"C\"] = pd.Series()`",
"Looking at the [documentation ](https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.at.html) it seems like this example does not show a bug, since it should throw a KeyError given there is no key \"C\" already in the dataframe. Moreover, the .at function is specifically used \"if you only need to get or set a single value in a DataFrame or Series.\" This is why it works when we convert the dataframe to a object dtype. The real bug here seems to be that it works when providing a scalar: it should throw a KeyError.\n\nIf there is a need to fix .at to throw an error I would be happy to work on it as my first issue. \n\n",
"@ShayanG9 The documentation says it should throw KeyError when _getting_, not setting. [Userguide](https://pandas.pydata.org/docs/user_guide/indexing.html) specificaly states that it will inflate the dataframe inplace if the key does not exist.\n\nAll documentation says it should work the same as .loc, just access only one cell of the dataframe.\n\n@yuanx749 That's the issue, if you look at the trace, it falls back to .loc, which throws this error. But it shouldn't fall back to .loc, it should access a single cell and put a list to it.",
"It seems like you are right about the KeyError. However, given it should change only one _cell_ then it doesn't make sense to exchange a cell populated by an integer type with a list, like you mention in the line `df.at[\"a\",\"A\"]=[1,2,3]` Especially when the series is of type int64. \n\n```python\nimport pandas as pd\n\ndf=pd.DataFrame(data=[[1,2],[3,4]],index=[\"a\",\"b\"],columns=[\"A\",\"B\"])\nprint(df.dtypes)\n```\n\nreturns \n\n```\nA int64\nB int64\ndtype: object\n```\n\nHowever, if we do \n```\nimport pandas as pd\n\ndf=pd.DataFrame(data=[[1,2],[3,4]],index=[\"a\",\"b\"],columns=[\"A\",\"B\"])\ndf.loc[:,\"A\"]=df.A.astype(object)\nprint(df.dtypes)\n```\n\nWe would get\n```\nFutureWarning: Setting an item of incompatible dtype is deprecated and will raise in a future error of pandas. Value '[1 3]' has dtype incompatible with int64, please explicitly cast to a compatible dtype first.\n df.loc[:,\"A\"]=df.A.astype(object)\nA object\nB int64\ndtype: object\n```\nNow that the series is of dtype object we can replace the int64 object with a list object, since they are interchangeable. I'm not sure if there is something I might be missing, but the implementation seems to make sense. Perhaps a note about this behavior in the documentation would be good?",
"What you propose would be incompatible with .loc\n\nWhen you assign incompatible value with .loc, it will throw a future warning, but it will change the dtype to the compatible one. `.loc` and `.at` should have consistent behavior.\n\n```\nIn [16]: df=pd.DataFrame(data=[[1]], columns=[\"A\"])\n\nIn [17]: print(df.dtypes)\nA int64\ndtype: object\n\nIn [18]: df.loc[0,\"A\"]=\"this is string\"\n<ipython-input-18-dcd67bec8a93>:1: FutureWarning: Setting an item of incompatible dtype is deprecated and will raise an error in a future version of pandas. Value 'this is string' has dtype incompatible with int64, please explicitly cast to a compatible dtype first.\n df.loc[0,\"A\"]=\"this is string\"\n\nIn [19]: df.dtypes\nOut[19]: \nA object\ndtype: object\n```\n\nBtw. if you do this example with `.at` it will also work.",
"Thanks for clarifying, and sorry for the misunderstanding. I see what you mean now. Looking at this [pull request](https://github.com/pandas-dev/pandas/pull/57265/files) it seems like this behavior is intentional. If you look it seems like it was previously in the code that a more descriptive error would be thrown: `\"Must have equal len keys and value when setting with an iterable\"`. However, if this should be the behavior I do not know. I did some digging and it seems to be about compatibility with numpy something about this [pull request](https://github.com/numpy/numpy/pull/10615). It might be good to have someone from the pandas team look over this?"
] |
2,969,126,559 | 61,222 | BUG: Index name lost when using "resample" with pyarrow dtypes | closed | 2025-04-03T10:37:17 | 2025-04-07T16:55:10 | 2025-04-07T16:55:10 | https://github.com/pandas-dev/pandas/issues/61222 | true | null | null | mthiboust | 2 | ### Pandas version checks
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [ ] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
import pandas as pd
# Create a df with DatetimeIndex called "timestamp" and native pandas dtype
native_df = pd.DataFrame(
{'value': [23.5, 24.1, 22.8, 25.3, 23.9]},
index=pd.date_range(start='2025-01-01 00:00:00', end='2025-01-01 04:00:00', freq='h'),
)
native_df.index.name = "timestamp"
# Create a similar df with pyarrow dtypes
pyarrow_df = native_df.copy()
pyarrow_df.index = pyarrow_df.index.astype('timestamp[ns][pyarrow]')
pyarrow_df["value"] = pyarrow_df["value"].astype('float64[pyarrow]')
native_df.resample("2h").mean().reset_index()["timestamp"] # OK
pyarrow_df.resample("2h").mean().reset_index()["timestamp"] # KeyError: 'timestamp'
```
### Issue Description
The `resample` method forgets the name of the index when using `pyarrow` dtypes.
By the way, I notice that `DatetimeIndex` is converted to `Index` when using `pyarrow` dtypes. Maybe it is related?
See concrete example in screenshot

### Expected Behavior
The `resample` method is expected to behave in the same way for `pyarrow` and `native` dtypes.
### Installed Versions
<details>
INSTALLED VERSIONS
------------------
python : 3.12.8
python-bits : 64
OS : Linux
OS-release : 5.10.234-225.910.amzn2.x86_64
Version : #1 SMP Fri Feb 14 16:52:40 UTC 2025
machine : x86_64
processor : x86_64
byteorder : little
LC_ALL : C.UTF-8
LANG : C.UTF-8
LOCALE : C.UTF-8
pandas : 2.2.3
numpy : 1.26.4
pytz : 2025.1
dateutil : 2.9.0.post0
pip : 25.0.1
Cython : None
sphinx : None
IPython : 8.32.0
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : 4.13.3
blosc : None
bottleneck : None
dataframe-api-compat : None
fastparquet : None
fsspec : 2025.3.2
html5lib : None
hypothesis : None
gcsfs : None
jinja2 : 3.1.5
lxml.etree : None
matplotlib : 3.10.1
numba : None
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
psycopg2 : None
pymysql : None
pyarrow : 19.0.1
pyreadstat : None
pytest : None
python-calamine : None
pyxlsb : None
s3fs : 2025.3.2
scipy : 1.15.2
sqlalchemy : 2.0.38
tables : None
tabulate : None
xarray : None
xlrd : None
xlsxwriter : None
zstandard : 0.23.0
tzdata : 2025.2
qtpy : None
pyqt5 : None
</details>
| [
"Bug",
"Resample",
"Arrow"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"If it helps, I currently use this `safe_resample()` function as a temporary workaround:\n\n```python\nT = TypeVar(\"T\", pd.DataFrame, pd.Series)\n\n\nclass IndexPreservingResampler(Generic[T]):\n \"\"\"A resampler that preserves the index name of the input DataFrame or Series.\"\"\"\n\n def __init__(self, resampler: pd.core.resample.Resampler, idx_name: str | None) -> None:\n self._resampler = resampler\n self._index_name = idx_name\n\n def __getattr__(self, name: str) -> Any:\n method = getattr(self._resampler, name)\n\n if not callable(method):\n return method\n\n def wrapped(*args: Any, **kwargs: Any) -> T:\n result = method(*args, **kwargs)\n if hasattr(result, \"index\"):\n result.index.name = self._index_name\n return result\n\n return wrapped\n\n\ndef safe_resample(\n df: T,\n freq: str,\n **kwargs: Any,\n) -> IndexPreservingResampler[T]:\n \"\"\"Resample a DataFrame or Series while preserving the index name.\n\n When using pyarrow dtypes, the index name is lost after resampling.\n This is a temporary fix to preserve the index name.\n See https://github.com/pandas-dev/pandas/issues/61222\n\n Args:\n df: The DataFrame or Series to resample\n freq: The frequency to resample to\n **kwargs: Additional arguments to pass to pandas resample method\n\n Returns:\n A Resampler object that will preserve the index name after aggregation\n \"\"\"\n index_name = df.index.name\n return IndexPreservingResampler(df.resample(freq, **kwargs), index_name)\n```\n\nUsing it in practice:\n```python\npyarrow_df.resample(\"2h\").mean().reset_index()[\"timestamp\"] # KeyError: 'timestamp'\nsafe_resample(pyarrow_df, \"2h\").mean().reset_index()[\"timestamp\"] # OK\n```",
"Thanks for reporting. Confirmed on main. PRs to fix are welcome."
] |
2,967,880,291 | 61,221 | BUG: Exception with `unstack(sort=False)` and NA in index | open | 2025-04-03T00:54:08 | 2025-08-15T14:31:53 | null | https://github.com/pandas-dev/pandas/issues/61221 | true | null | null | jlumpe | 3 | ### Pandas version checks
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [ ] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
import pandas as pd
levels1 = ['b', 'a']
levels2 = pd.Index([1, 2, 3, pd.NA], dtype=pd.Int64Dtype())
index = pd.MultiIndex.from_product([levels1, levels2], names=['level1', 'level2'])
df = pd.DataFrame(dict(value=range(len(index))), index=index)
print(df)
print(df.unstack(level='level2'))
print(df.unstack(level='level2', sort=False))
```
```
value
level1 level2
b 1 0
2 1
3 2
<NA> 3
a 1 4
2 5
3 6
<NA> 7
```
```
value
level2 <NA> 1 2 3
level1
a 3 0 1 2
b 7 4 5 6
```
```
Traceback (most recent call last):
File "/home/jared/tmp/./250402-pd-test.py", line 15, in <module>
print(df.unstack(level='level2', sort=False))
~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/jared/opt/mambaforge/envs/pd-test/lib/python3.13/site-packages/pandas/core/frame.py", line 9928, in unstack
result = unstack(self, level, fill_value, sort)
File "/home/jared/opt/mambaforge/envs/pd-test/lib/python3.13/site-packages/pandas/core/reshape/reshape.py", line 504, in unstack
return _unstack_frame(obj, level, fill_value=fill_value, sort=sort)
File "/home/jared/opt/mambaforge/envs/pd-test/lib/python3.13/site-packages/pandas/core/reshape/reshape.py", line 537, in _unstack_frame
return unstacker.get_result(
~~~~~~~~~~~~~~~~~~~~^
obj._values, value_columns=obj.columns, fill_value=fill_value
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "/home/jared/opt/mambaforge/envs/pd-test/lib/python3.13/site-packages/pandas/core/reshape/reshape.py", line 242, in get_result
return self.constructor(
~~~~~~~~~~~~~~~~^
values, index=index, columns=columns, dtype=values.dtype
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "/home/jared/opt/mambaforge/envs/pd-test/lib/python3.13/site-packages/pandas/core/frame.py", line 827, in __init__
mgr = ndarray_to_mgr(
data,
...<4 lines>...
typ=manager,
)
File "/home/jared/opt/mambaforge/envs/pd-test/lib/python3.13/site-packages/pandas/core/internals/construction.py", line 336, in ndarray_to_mgr
_check_values_indices_shape_match(values, index, columns)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/jared/opt/mambaforge/envs/pd-test/lib/python3.13/site-packages/pandas/core/internals/construction.py", line 420, in _check_values_indices_shape_match
raise ValueError(f"Shape of passed values is {passed}, indices imply {implied}")
ValueError: Shape of passed values is (2, 4), indices imply (2, 5)
```
### Issue Description
With a `MultiIndex` level of `Int64Dtype()` containing NA values, `DataFrame.unstack()` produces the expected result with `sort=True` (default) but causes an exception with `sort=False`. This does not occur if the NA value is removed from the index.
### Expected Behavior
No exception; the returned `DataFrame` has `level1` in the original order (`['b', 'a']`).
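Until this is fixed, one workaround is to unstack with the default `sort=True` (which works) and then restore the original order of `level1` by reindexing (a sketch based on the example above):

```python
import pandas as pd

levels1 = ['b', 'a']
levels2 = pd.Index([1, 2, 3, pd.NA], dtype=pd.Int64Dtype())
index = pd.MultiIndex.from_product([levels1, levels2], names=['level1', 'level2'])
df = pd.DataFrame(dict(value=range(len(index))), index=index)

# sort=True (the default) does not raise; reindex puts 'level1' back
# in the original ['b', 'a'] order.
out = df.unstack(level='level2').reindex(levels1)
print(list(out.index))  # → ['b', 'a']
```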
### Installed Versions
<details>
INSTALLED VERSIONS
------------------
commit : 0691c5cf90477d3503834d983f69350f250a6ff7
python : 3.13.2
python-bits : 64
OS : Linux
OS-release : 5.15.167.4-microsoft-standard-WSL2
Version : #1 SMP Tue Nov 5 00:21:55 UTC 2024
machine : x86_64
processor : x86_64
byteorder : little
LC_ALL : None
LANG : C.UTF-8
LOCALE : C.UTF-8
pandas : 2.2.3
numpy : 2.2.4
pytz : 2024.1
dateutil : 2.9.0.post0
pip : 25.0.1
Cython : None
sphinx : None
IPython : 9.0.2
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : None
blosc : None
bottleneck : None
dataframe-api-compat : None
fastparquet : None
fsspec : None
html5lib : None
hypothesis : None
gcsfs : None
jinja2 : None
lxml.etree : None
matplotlib : None
numba : None
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
psycopg2 : None
pymysql : None
pyarrow : None
pyreadstat : None
pytest : None
python-calamine : None
pyxlsb : None
s3fs : None
scipy : None
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : None
xlsxwriter : None
zstandard : None
tzdata : 2025.2
qtpy : None
pyqt5 : None
</details>
| [
"Bug",
"Reshaping",
"ExtensionArray"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"take\n",
"Thanks for reporting. Confirmed on main. PRs to fix are welcome.",
"take"
] |
2,967,331,184 | 61,220 | ENH: Create infrastructure for translations | closed | 2025-04-02T19:19:57 | 2025-05-12T07:46:13 | 2025-05-12T07:46:13 | https://github.com/pandas-dev/pandas/pull/61220 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61220 | https://github.com/pandas-dev/pandas/pull/61220 | melissawm | 9 | Hi all,
This PR is a proposal for adding the translations infrastructure to the pandas web page.
Following the discussion in #56301, we (a group of folks working on the Scientific Python grant) have been working to set up infrastructure and translate the contents of the pandas web site. As of this moment, we have 100% translations for the pandas website into Spanish and Brazilian Portuguese, with other languages available for translation (depending on volunteer translators).
What this PR **does**:
* Reorganizes web site sources file structure for multilanguage support, with a new "pt" folder which, in the future, can hold Brazilian Portuguese translations pulled in from Crowdin.
* Adds a language switcher to the top of the page
* Adds language option to web pages command line builder
What this PR **does not** do:
* Add actual translations for the full contents of the website. This needs to be done in a follow-up.
This PR is a draft, as we are looking for feedback on the approach and appetite for this change. We would love to have more languages added, and we firmly believe having the translations infrastructure may help recruit new translators who will then see their work published on the actual website. We can also work on adding a "Translations team" to the pandas website if desired, with data pulled in automatically from Crowdin.
To build, this will require the following command:
```
python pandas_web.py pandas/content --target-path build --languages en pt
```
If you want to check out other related work, please take a look at https://github.com/scipy/scipy.org/pull/617
Some of this is still work in progress, and @goanpeca is working on automations to make synchronizing and updating the translations easier- he can also help answer questions on the overall integration with Crowdin.
Any feedback is appreciated, and we are happy to answer questions and discuss more if needed.
Cheers!
- [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| [
"Web"
] | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | [
"Thanks for starting this @melissawm \r\n\r\n1. Reviewing the feedback in the original issue https://github.com/pandas-dev/pandas/issues/56301, it appears the pandas core devs (including myself) would prefer translations to live outside the core repo. Am I understanding correctly, that the `pt` directory, or any other, new abbreviated language directory, would mean the translation would live in this repo?\r\n2. If docs in the `en` folder get modified, the `--languages` flag will automatically update the changed docs to the target language?",
"Hi @mroeschke !\r\n\r\n1. I think we could devise a way to build the website pulling in the translations from the https://github.com/Scientific-Python-Translations/pandas-translations repo, although that may complicate your CI set up. It's your call though, happy to explore that.\r\n2. As far as I understand, no - changes to the en folder are propagated to the https://github.com/Scientific-Python-Translations/pandas-translations repo, which in turn is passed over to translators, and that will update the other languages. Maybe @goanpeca can help me with this one.",
"Hi @melissawm \r\n\r\nregarding \r\n\r\n>If docs in the en folder get modified, the --languages flag will automatically update the changed docs to the target language?\r\n\r\nNo, currently a github action is set to run daily (could be modified as needed) to check if the content has changed, and if it does we copy the changes over at https://github.com/Scientific-Python-Translations/pandas-translations where the crowidn integration is set. \r\n\r\nA different action that runs once per week (can be modified as needed) checks if the translations are over a certain threshold of completion (by default 95%) and if there are new strings available a PR will be merged automatically over at https://github.com/Scientific-Python-Translations/pandas-translations with the translated content.\r\n\r\n>I think we could devise a way to build the website pulling in the translations from the https://github.com/Scientific-Python-Translations/pandas-translations repo, although that may complicate your CI set up. It's your call though, happy to explore that.\r\n\r\nRegarding this, we could indeed pull the translations from the repo on build time to avoid having that content on this repo. And as @melissawm it would be a bit more involved for CI on this side, but we can make that work.",
"Hi @mroeschke, I created a small PR on how we could implement bringing the translations from the Scientific Python Translations Organization\r\n\r\nhttps://github.com/pandas-dev/pandas/pull/61380\r\n\r\nIt would need to be scheduled to run every (week? or so) and at that momento it would pull any translations that are available.\r\n\r\nThis would of course rely on the work @melissawm is doing here, minus the files in the pt folder.",
"I guess we want to add the dropdown for languages to #61380 and close this PR?",
"Thanks @datapythonista - I'm working on it and will update as soon as possible. Since this PR relies on having one config file per language, I'll have to rethink how to make this work. I can ping you once it's ready, or if you prefer to close this PR I can certainly open a fresh one. ",
"My point is that this PR seems to be doing the same as #61380, and since that seems to be closer to get merged, feels like we will merge that one and dismiss this PR. But for what you say I'm not sure if I'm missing something.",
"PR https://github.com/pandas-dev/pandas/pull/61380 supersedes this one.\r\n\r\nThanks @melissawm ! 🚀 \r\n",
"Thank you for the work here @melissawm, I'll close this PR as seems superseeded, please let me know if I'm misunderstanding or if we finally prefer to move forward with this one."
] |
2,967,254,781 | 61,219 | Fix #58421: Index[timestamp[pyarrow]].union with itself return object type | closed | 2025-04-02T18:45:22 | 2025-05-15T16:11:15 | 2025-05-15T16:11:15 | https://github.com/pandas-dev/pandas/pull/61219 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61219 | https://github.com/pandas-dev/pandas/pull/61219 | afonso-antunes | 2 | - [X] closes #58421
- [X] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [X] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [x] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
### Fix Summary:
Previously, the `_make_concat_multiindex` method could silently downgrade extension dtypes (e.g., to object) when creating levels. This PR ensures that the `_concat_indexes` helper uses the correct dtype-aware construction (`array(..., dtype=...)`) to preserve the original dtype of the first index.
### Test added:
Added a test in `pandas/tests/frame/methods/test_concat_arrow_index.py` that covers the preservation of extension dtypes when using `pd.concat` with `keys=` that triggers MultiIndex creation.
The test creates two DataFrames with `timestamp[pyarrow]` indices, then concatenates them with `pd.concat(..., keys=...)` and asserts that:
- The resulting index is a `MultiIndex`
- The second level (`levels[1]`) retains the `ArrowDtype('timestamp[us][pyarrow]')` instead of being downgraded to `object`.
This ensures the dtype-preservation fix is validated and guarded against future regressions.
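The pyarrow-backed case from the issue requires pyarrow to be installed; the same contract being tested — that set operations on an extension-dtype index should not fall back to `object` — can be sketched with the nullable `Int64` dtype instead (an illustrative stand-in, not the exact test in this PR):

```python
import pandas as pd

idx = pd.Index([1, 2, 3], dtype="Int64")
result = idx.union(pd.Index([3, 4], dtype="Int64"))

# The union keeps the extension dtype rather than degrading to object.
assert str(result.dtype) == "Int64"
```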
| [] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"### Note on test failures\r\nSome tests are failing because they expect the old behavior where ``pd.concat(..., keys=...)`` would return an ``Index`` of tuples with ``dtype=object``.\r\n\r\nThis PR intentionally changes that behavior to **preserve the dtype of the original index** (e.g., ArrowDtype) and produce a proper ``MultiIndex`` with names and levels — which is more consistent and solves the issue.\r\n\r\nErrors such as:\r\n- AttributeError: 'Index' object has no attribute 'levels'\r\n- AssertionError due to mismatched Index vs MultiIndex\r\n\r\n...are a direct result of this behavior change.\r\nThese test failures are expected and reflect outdated assumptions.\r\nIf needed, I'm happy to follow up with updates to the relevant tests to align with the new behavior.",
"Thanks for the pull request, but it appears to have gone stale. If interested in continuing, please merge in the main branch, address any review comments and/or failing tests, and we can reopen."
] |
2,966,874,023 | 61,218 | QST: Should the absence of tzdata package affect the performance in any way ? | closed | 2025-04-02T15:53:10 | 2025-08-05T17:04:53 | 2025-08-05T17:04:53 | https://github.com/pandas-dev/pandas/issues/61218 | true | null | null | sdg002 | 6 | ### Research
- [x] I have searched the [[pandas] tag](https://stackoverflow.com/questions/tagged/pandas) on StackOverflow for similar questions.
- [x] I have asked my usage related question on [StackOverflow](https://stackoverflow.com).
### Link to question on StackOverflow
https://stackoverflow.com/search?page=3&tab=Relevance&pagesize=30&q=pandas%20AND%20tzdata%20&searchOn=3
### Question about pandas
We are on pandas 1.5.3. We are investigating some performance bottlenecks. At this point, we are not sure where the problem lies. However, we have a consistent pattern of observations.
We noticed that when `tzdata==2025.2` was uninstalled, there was a severe degradation in performance (> 10x).
Upon further investigation and elimination, we arrived at the following matrix:
## Good perf-1
```
pandas==2.2.3
tzdata==2025.2
```
## Good perf-2
```
pandas==1.5.3
tzdata==2025.2
```
## Bad perf
No `tzdata`
```
pandas==1.5.3
```
Any suggestions ?
Is there any logic in any part of Pandas that relies on `tzdata` ?
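One stdlib-only check that may help correlate the slow environments with the missing package (this only detects whether `tzdata` is importable, not how pandas uses it):

```python
import importlib.util

# True if the tzdata wheel is installed in this environment
has_tzdata = importlib.util.find_spec("tzdata") is not None
print("tzdata importable:", has_tzdata)
```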
Thanks,
Sau
| [
"Usage Question",
"Needs Info"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Sorry. SFO will not allow me to ask any more questions. ",
"`tzdata` is used as an alternative timezone library compared to `pytz`. It's hard to answer performance implications without knowing what they are.\n\nNote that pandas 1.5 is no longer supported and `tzdata` is a required dependency as of pandas 2.0",
"Hello @mroeschke ,\n\nThanks for the quick reply. Based on your response, We have narrowed down our scenarios to the following:\n\n## Good - pandas 2.2.3 and tzdata\nWhen `pandas 2.2.3` is installed, `tzdata` gets installed too (tallies with what you have commented). This is our best case and everything works fine. \n\n## Bad - pandas 2.2.3 only\nHowever, what surprised us that when `tzdata` is uninstalled using `pip uninstall` , the code continues to run without any errors. But, the performance is 20X slower.\n\nWe were expecting an error to be thrown. \n\nIt would be very helpful, if you can explain what pandas 2.2.3 does internally when the `tzdata` package is missing ? \n\nThanks,\nSau\n",
"> explain what pandas 2.2.3 does internally when the tzdata package is missing\n\nWell, it _should_ raise at import-time since there's a check for it in pandas/\\_\\_init\\_\\_.py\n\n@mroeschke i see it is also listed in compat/_optional. Maybe should be removed there?",
"> Maybe should be removed there?\n\nYes definitely",
"Closing as I don't think there is anything actionable in this issue"
] |
2,966,525,309 | 61,217 | BUG: unstack incorrectly reshuffles data when sort=False | closed | 2025-04-02T14:12:08 | 2025-05-15T15:18:34 | 2025-04-03T02:51:31 | https://github.com/pandas-dev/pandas/issues/61217 | true | null | null | wahsmail | 4 | ### Pandas version checks
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [x] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
a = pd.Series([f'a{i}' for i in range(10)])
b = pd.Series([f'b{i}' for i in range(10)])
ser = pd.concat({'a': a, 'b': b}) # multi-indexed series e.g. ('a', 0) = 'a0'
df = ser.unstack(0, sort=False) # dataframe with integer index and columns=['a','b'], some a values end up in b column and vice-versa
```
### Issue Description
When unstacking a multi-indexed series (or dataframe), passing sort=False fails to preserve the original mapping of multi-index keys to values. In other words, the resulting "a" column has a mix of "a" and "b"-prefixed values.
### Expected Behavior
I would expect sort=False to prevent a sort of the newly produced column names but preserve the mapping of multi-index keys to values based on the following from the documentation: "sort: Sort the level(s) in the resulting MultiIndex columns."
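As a quick cross-check (assuming pandas is installed), the reporter's example with the default `sort=True` keeps the key-to-value mapping intact, which is what `sort=False` should also do:

```python
import pandas as pd

a = pd.Series([f"a{i}" for i in range(10)])
b = pd.Series([f"b{i}" for i in range(10)])
ser = pd.concat({"a": a, "b": b})

df = ser.unstack(0)  # default sort=True: mapping is preserved
assert df["a"].str.startswith("a").all()
assert df["b"].str.startswith("b").all()
```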
### Installed Versions
<details>
INSTALLED VERSIONS
------------------
commit : 0691c5cf90477d3503834d983f69350f250a6ff7
python : 3.11.9
python-bits : 64
OS : Windows
OS-release : 10
Version : 10.0.19045
machine : AMD64
processor : AMD64 Family 25 Model 97 Stepping 2, AuthenticAMD
byteorder : little
LC_ALL : None
LANG : None
LOCALE : English_United States.1252
pandas : 2.2.3
numpy : 2.2.2
pytz : 2025.1
dateutil : 2.9.0.post0
pip : 25.0.1
Cython : None
sphinx : None
IPython : None
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : None
blosc : None
bottleneck : None
dataframe-api-compat : None
fastparquet : None
fsspec : None
html5lib : None
hypothesis : None
gcsfs : None
jinja2 : None
lxml.etree : None
matplotlib : None
numba : None
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
psycopg2 : None
pymysql : None
pyarrow : None
pyreadstat : None
pytest : None
python-calamine : None
pyxlsb : None
s3fs : None
scipy : None
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : None
xlsxwriter : None
zstandard : None
tzdata : 2025.1
qtpy : None
pyqt5 : None
</details>
| [
"Bug",
"Needs Triage"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"take",
"This seems to work properly in the main branch of pandas. ",
"Confirmed that this bug no longer exists on main. Closing.",
"I came across the same issue on the latest version of pandas, but could also confirm it works fine on main, which is encouraging. It will be good to see the fix released. It is the sort of issue that could lead to incorrect results that go unnoticed for a while.\n\nJust in case someone else is having similar issues, this is a minimal example I built to understand the issue on my side. The results in `time_series_1` are clearly wrong, as it is leading to data being associated with the wrong device. The resampling was redundant in this dummy example, but I needed that to create the conditions for the bug to appear.\n\n```python\nimport pandas as pd\n\ntime_series = (\n pd.DataFrame(\n data={\n \"device_id\": [1, 0, 1, 0],\n \"temperature\": [10.0, 20.0, 11.0, 21.0],\n },\n index=pd.DatetimeIndex(\n data=[\n pd.Timestamp(\"2025-05-15 00:00:00\"),\n pd.Timestamp(\"2025-05-15 00:00:00\"),\n pd.Timestamp(\"2025-05-15 00:01:00\"),\n pd.Timestamp(\"2025-05-15 00:01:00\"),\n ],\n name=\"datetime\",\n ),\n )\n .rename_axis(columns=[\"metric\"])\n .set_index(\"device_id\", append=True)\n .groupby(level=\"device_id\", sort=False)\n .resample(rule=\"1min\", level=\"datetime\")\n .mean()\n)\n\ntime_series_1 = time_series.unstack(level=\"device_id\", sort=False)\ntime_series_2 = time_series.unstack(level=\"device_id\")\n```\n\n```\n>>> time_series_1 \nmetric temperature \ndevice_id 1 0\ndatetime\n2025-05-15 00:00:00 10.0 11.0\n2025-05-15 00:01:00 20.0 21.0\n```\n\n```\n>>> time_series_2 \nmetric temperature \ndevice_id 1 0\ndatetime\n2025-05-15 00:00:00 10.0 20.0\n2025-05-15 00:01:00 11.0 21.0\n```"
] |
2,965,915,343 | 61,216 | BUG: OverflowError when fillna on DataFrame with a pd.Timestamp (#61208) | closed | 2025-04-02T10:16:22 | 2025-04-14T16:59:05 | 2025-04-14T16:58:58 | https://github.com/pandas-dev/pandas/pull/61216 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61216 | https://github.com/pandas-dev/pandas/pull/61216 | PedroM4rques | 1 | - Now correctly raises OutOfBoundsDatetime
- Added test_fillna_out_of_bounds_datetime()
- [x] closes #61208
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [x] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions. (does not apply)
- [x] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
### Fix for `fillna` with Out-of-Bounds Datetime Values
**Issue**: Using `fillna` on a `datetime64[ns]` column with an out-of-bounds timestamp (e.g., `'0001-01-01'`) raised an `AssertionError` instead of the expected `OutOfBoundsDatetime`.
**Fix**: Modified the `where` method in `pandas/core/internals/blocks.py` to catch and re-raise `OutOfBoundsDatetime` directly, preventing the `AssertionError`.
**Fix (`inplace=True`)**: Modified the `putmask` method in `pandas/core/internals/blocks.py` to catch and re-raise `OutOfBoundsDatetime` directly, preventing the `AssertionError`.
**Test Added**:
- Created `test_fillna_out_of_bounds_datetime` in `pandas/tests/frame/methods/test_fillna.py`.
- The test:
- Sets up a DataFrame with a `datetime64[ns]` column containing `NaT`.
- Attempts to fill `NaT` with `'0001-01-01'`.
- Expects `OutOfBoundsDatetime`.
| [
"Bug",
"Missing-data",
"Error Reporting",
"Timestamp"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Thanks @PedroM4rques "
] |
2,965,310,744 | 61,215 | DOC: Fix ES01 for pandas.api.extensions.ExtensionDtype | closed | 2025-04-02T06:23:50 | 2025-04-02T16:12:41 | 2025-04-02T16:12:35 | https://github.com/pandas-dev/pandas/pull/61215 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61215 | https://github.com/pandas-dev/pandas/pull/61215 | tuhinsharma121 | 1 | fixes
```
pandas.api.extensions.ExtensionDtype ES01
``` | [
"Docs"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Thanks @tuhinsharma121 "
] |
2,965,010,194 | 61,214 | Restrict clipping of DataFrame.corr only when cov=False | closed | 2025-04-02T02:37:04 | 2025-04-03T21:55:32 | 2025-04-03T21:55:24 | https://github.com/pandas-dev/pandas/pull/61214 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61214 | https://github.com/pandas-dev/pandas/pull/61214 | j-hendricks | 3 | Closes #61154 `DataFrame.corr` was clipped between `-1` and `1` to handle numerical precision errors. However, this was done regardless of whether `cov` equals `True` or `False`, and should instead only be done when `cov=False`.
- [x] closes #61154 (Replace xxxx with the GitHub issue number)
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [x] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| [
"cov/corr"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"@mroeschke here is my pull request for fixing the `dataframe.corr` issue. Thanks!",
"Thanks! Would you be able to add new unit test that covers this? It seems like we didn't have one that hit this edge case previously.\r\n\r\nI think no release note is necessary, since the original one made clear it's only for `corr`.",
"Thanks @j-hendricks "
] |
2,964,980,317 | 61,213 | BUG: DataFrame.corr clips values when cov=True | closed | 2025-04-02T02:20:50 | 2025-04-02T02:38:39 | 2025-04-02T02:38:39 | https://github.com/pandas-dev/pandas/issues/61213 | true | null | null | j-hendricks | 0 | ### Pandas version checks
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [x] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
import pandas as pd
In [50]: x = pd.DataFrame({"A": [1, 2, None, 4], "B": [2, 4, None, 9]})
In [51]: x.cov()
Out[51]:
A B
A 1.0 1.0
B 1.0 1.0
In [52]: x.dropna().cov()
Out[52]:
A B
A 2.333333 5.5
B 5.500000 13.0
```
### Issue Description
Stemming from #61154. `DataFrame.corr` was clipped between `-1` and `1` to handle numerical precision errors. However, this was done regardless of whether `cov` equals `True` or `False`, and should instead only be done when `cov=False`.
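The intended behaviour can be sketched with a small stdlib-only helper (the names here are illustrative, not pandas internals): clip only when a correlation matrix is requested, and leave covariance values untouched.

```python
def finalize_matrix(mat, cov):
    # Covariance values are unbounded, so return them untouched.
    if cov:
        return mat
    # Correlations are mathematically bounded by [-1, 1]; clipping only
    # removes floating-point noise such as 1.0000000000000002.
    return [[max(-1.0, min(1.0, v)) for v in row] for row in mat]

corr = finalize_matrix([[1.0000000000000002, 0.5], [0.5, 1.0]], cov=False)
cov_mat = finalize_matrix([[2.3333, 5.5], [5.5, 13.0]], cov=True)
```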
### Expected Behavior
```python
import pandas as pd

In [50]: x = pd.DataFrame({"A": [1, 2, None, 4], "B": [2, 4, None, 9]})

In [51]: x.cov()
Out[51]:
     A    B
A  1.0  1.0
B  1.0  1.0

In [52]: x.dropna().cov()
Out[52]:
     A    B
A  1.0  1.0
B  1.0  1.0
```
### Installed Versions
<details>
commit : cdc9e952f139746c2e6816997d82b389f605ec58
python : 3.10.16
python-bits : 64
OS : Darwin
OS-release : 24.3.0
Version : Darwin Kernel Version 24.3.0: Thu Jan 2 20:24:22 PST 2025; root:xnu-11215.81.4~3/RELEASE_ARM64_T6041
machine : x86_64
processor : i386
byteorder : little
LC_ALL : None
LANG : en_US.UTF-8
LOCALE : en_US.UTF-8
pandas : 3.0.0.dev0+2006.gcdc9e952f1
numpy : 1.26.4
dateutil : 2.9.0.post0
pip : 25.0.1
Cython : 3.0.12
sphinx : 8.1.3
IPython : 8.34.0
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : 4.13.3
blosc : None
bottleneck : 1.4.2
fastparquet : 2024.11.0
fsspec : 2025.3.0
html5lib : 1.1
hypothesis : 6.130.4
gcsfs : 2025.3.0
jinja2 : 3.1.6
lxml.etree : 5.3.1
matplotlib : 3.10.1
numba : 0.61.0
numexpr : 2.10.2
odfpy : None
openpyxl : 3.1.5
psycopg2 : 2.9.9
pymysql : 1.4.6
pyarrow : 19.0.1
pyreadstat : 1.2.8
pytest : 8.3.5
python-calamine : None
pytz : 2025.2
pyxlsb : 1.0.10
s3fs : 2025.3.0
scipy : 1.15.2
sqlalchemy : 2.0.40
tables : 3.10.1
tabulate : 0.9.0
xarray : 2024.9.0
xlrd : 2.0.1
xlsxwriter : 3.2.2
zstandard : 0.23.0
tzdata : 2025.2
qtpy : None
pyqt5 : None
</details>
| [
"Bug",
"Needs Triage"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] |
2,964,817,507 | 61,212 | BUG: OverflowError when fillna on DataFrame with a pd.Timestamp (#61208) | closed | 2025-04-02T00:08:59 | 2025-04-02T09:07:20 | 2025-04-02T09:07:19 | https://github.com/pandas-dev/pandas/pull/61212 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61212 | https://github.com/pandas-dev/pandas/pull/61212 | PedroM4rques | 1 | - [x] closes #61208
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [x] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions. (Does not apply)
- [x] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
### Fix for `fillna` with Out-of-Bounds Datetime Values
**Issue**: Using `fillna` on a `datetime64[ns]` column with an out-of-bounds timestamp (e.g., `'0001-01-01'`) raised an `AssertionError` instead of the expected `OutOfBoundsDatetime`.
**Fix**: Modified the `putmask` method in `pandas/core/internals/blocks.py` to catch and re-raise `OutOfBoundsDatetime` directly, preventing the `AssertionError`.
**Test Added**:
- Created `test_fillna_out_of_bounds_datetime` in `pandas/tests/frame/methods/test_fillna.py`.
- The test:
- Sets up a DataFrame with a `datetime64[ns]` column containing `NaT`.
- Attempts to fill `NaT` with `'0001-01-01'`.
- Expects `OutOfBoundsDatetime`. | [] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"I did not open this PR correctly, I'll open another ASAP"
] |
2,964,763,684 | 61,211 | BUG: Preserve extension dtypes in MultiIndex during concat (#58421) | closed | 2025-04-01T23:19:47 | 2025-04-02T18:26:52 | 2025-04-02T18:26:52 | https://github.com/pandas-dev/pandas/pull/61211 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61211 | https://github.com/pandas-dev/pandas/pull/61211 | afonso-antunes | 0 | - [X] closes #58421
- [X] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [X] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [X] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
### Fix Summary:
Previously, the `_make_concat_multiindex` method could silently downgrade extension dtypes (e.g., to object) when creating levels. This PR ensures that the `_concat_indexes` helper uses the correct dtype-aware construction (`array(..., dtype=...)`) to preserve the original dtype of the first index.
### Test added:
Added a test in `pandas/tests/frame/methods/test_concat_arrow_index.py` that covers the preservation of extension dtypes when using `pd.concat` with `keys=` that triggers MultiIndex creation.
The test creates two DataFrames with `timestamp[pyarrow]` indices, then concatenates them with `pd.concat(..., keys=...)` and asserts that:
- The resulting index is a `MultiIndex`
- The second level (`levels[1]`) retains the `ArrowDtype('timestamp[us][pyarrow]')` instead of being downgraded to `object`.
This ensures the dtype-preservation fix is validated and guarded against future regressions. | [] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] |
2,963,918,215 | 61,210 | ENH: Add ignore_empty and ignore_all_na arguments to pd.concat | open | 2025-04-01T16:11:39 | 2025-07-15T20:11:54 | null | https://github.com/pandas-dev/pandas/issues/61210 | true | null | null | sergei3000 | 3 | ### Feature Type
- [ ] Adding new functionality to pandas
- [x] Changing existing functionality in pandas
- [ ] Removing existing functionality in pandas
### Problem Description
I'd like this warning of `pd.concat()` be solved with an argument
```
FutureWarning: The behavior of DataFrame concatenation with empty or all-NA entries is deprecated. In a future version, this will no longer exclude empty or all-NA columns when determining the result dtypes. To retain the old behavior, exclude the relevant entries before the concat operation.
return pd.concat(orders_for_plotting)
```
### Feature Description
I mean instead of changing my code to something like
```python
result = pd.concat([df for df in [df1, df2] if not df.empty])
```
I think it'd be cool to have arguments like `ignore_empty: bool = True` and `ignore_all_na: bool = True` (which would turn into `= False` in the future) in `pd.concat`, so I'd be all good by just adding one argument in my codebase to deal with the future behavior.
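A rough sketch of what such a helper could look like (the flag names and this wrapper are hypothetical, mirroring the proposal rather than any existing pandas API):

```python
import pandas as pd

def concat_ignoring(frames, ignore_empty=True, ignore_all_na=True):
    # Hypothetical wrapper: drop empty and/or all-NA frames before concat,
    # which is the manual workaround the FutureWarning asks for today.
    kept = []
    for df in frames:
        if ignore_empty and df.empty:
            continue
        if ignore_all_na and not df.empty and df.isna().all().all():
            continue
        kept.append(df)
    return pd.concat(kept)

df1 = pd.DataFrame({"x": [1.0, 2.0]})
df2 = pd.DataFrame({"x": pd.Series([], dtype="float64")})  # empty
df3 = pd.DataFrame({"x": [None, None]})                    # all-NA
result = concat_ignoring([df1, df2, df3])
```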
### Alternative Solutions
An alternative solution would be not doing this and making people change their code to keep their legacy stuff intact
### Additional Context
_No response_ | [
"Enhancement",
"Needs Triage",
"Closing Candidate"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"take",
"In main this is already depreciated, someone else can take over if wanted.",
"Any updates on this issue? If possible, I'd like to work on it if it is approved."
] |
2,960,860,396 | 61,209 | ENH: Consistent NA handling in `unique()`, and `nunique()` | open | 2025-03-31T15:36:49 | 2025-04-08T21:31:40 | null | https://github.com/pandas-dev/pandas/issues/61209 | true | null | null | olek-osikowicz | 3 | ### Feature Type
- [ ] Adding new functionality to pandas
- [x] Changing existing functionality in pandas
- [ ] Removing existing functionality in pandas
### Problem Description
Currently `Series.nunique` has a default parameter `dropna=True`.
However, `Series.unique` does not accept a `dropna` parameter.
This can cause unexpected behaviour: `s.nunique()` is not necessarily equal to `len(s.unique())`.
See example below:
```
>>> import pandas as pd
>>> s = pd.Series([pd.NA, 1, pd.NA])
>>> s.unique()
array([<NA>, 1], dtype=object)
>>> len(s.unique())
2
>>> s.nunique()
1
```
I believe it should be addressed to avoid implicit behaviour.
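The inconsistency, and the manual workaround needed today, can be made explicit (assuming pandas with `pd.NA` support):

```python
import pandas as pd

s = pd.Series([pd.NA, 1, pd.NA])

# Today, matching counts require dropping NA by hand on the unique() side:
assert s.nunique() == len(s.dropna().unique()) == 1
assert s.nunique(dropna=False) == len(s.unique()) == 2
```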
### Feature Description
The simplest way to address it would be to change the default parameter of `Series.nunique` to `dropna=False`.
Analogously, the same default would apply to `DataFrame.nunique`.
This would be consistent with [current summary](https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.nunique.html) of the method:
> Count number of distinct elements in specified axis.
> Return Series with number of distinct elements. Can ignore NaN values.
"Can ignore NaN values.", hints that should be optional parameter not enabled by default.
### Alternative Solutions
Another approach to force consistent NaN handling by default would be to adapt `Series.unique` to accept `dropna` and set it to `True` by default.
Although possible, this is a more laborious change with a bigger impact on the pandas API.
### Additional Context
_No response_
EDIT: Typos | [
"Enhancement",
"Algos",
"Needs Discussion"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"take",
"I think it should be `dropna=True` by default, so your alternative solution, i.e. add `dropna` to `Series.unique` (with default set to `True`) makes more sense to me. cc: @rhshadrach ",
"Related: https://github.com/pandas-dev/pandas/pull/53094\n\nWhile I would prefer pandas not dropping NA values by default, that isn't the case today. However if we are going to eventually change the default of `dropna` to `False`, then I would be hesitant of changing the default behavior of `unique` just to then change it back.\n\nIn this particular case I think we should wait for `dropna` to default to False, and then decide if we really want a dropna argument in this method. The main blocker for this is work on `pivot_table` behaviors, which I plan to take up after 3.0 is released."
] |
2,960,818,034 | 61,208 | BUG: OverflowError when fillna on DataFrame with a pd.Timestamp | closed | 2025-03-31T15:19:29 | 2025-04-14T16:59:00 | 2025-04-14T16:59:00 | https://github.com/pandas-dev/pandas/issues/61208 | true | null | null | thecurve8 | 3 | ### Pandas version checks
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [x] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
import pandas as pd
df = pd.DataFrame({
'datetime' : pd.date_range('1/1/2011', periods=3, freq='h'),
'value' : [1,2,3]
})
df.iloc[0,0] = None
df.fillna(pd.Timestamp('0001-01-01'), inplace=True)
```
### Issue Description
Issue is similar to [this closed issue without a reproducible example](https://github.com/pandas-dev/pandas/issues/56502 ).
A DataFrame that has a column with datetime64[ns] with NaT gets an error if trying to fill null values with a pd.Timestamp that lies outside the range of the given precision.
### Expected Behavior
The null values in the DataFrame should be replaced with the provided TimeStamp or an error should be provided to the user that the Timestamps have incompatible precisions and ranges.
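As a possible workaround on pandas 2.x (a sketch relying on non-nanosecond datetime support; this is not taken from the report), widening the column's resolution first avoids the overflow:

```python
import pandas as pd

df = pd.DataFrame({
    "datetime": pd.date_range("1/1/2011", periods=3, freq="h"),
    "value": [1, 2, 3],
})
df.iloc[0, 0] = None

# Year 1 is out of range for datetime64[ns] but fits in datetime64[s].
df["datetime"] = df["datetime"].astype("datetime64[s]")
df["datetime"] = df["datetime"].fillna(pd.Timestamp("0001-01-01"))
```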
### Installed Versions
<details>
INSTALLED VERSIONS
------------------------
commit : 0691c5cf90477d3503834d983f69350f250a6ff7
python : 3.10.8
python-bits : 64
OS : Windows
OS-release : 10
Version : 10.0.17763
machine : AMD64
processor : Intel64 Family 6 Model 143 Stepping 8, GenuineIntel
byteorder : little
LC_ALL : None
LANG : None
LOCALE : German_Switzerland.1252
pandas : 2.2.3
numpy : 2.2.4
pytz : 2025.2
dateutil : 2.9.0.post0
pip : 25.0.1
Cython : None
sphinx : None
IPython : None
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : None
blosc : None
bottleneck : None
dataframe-api-compat : None
fastparquet : None
fsspec : None
html5lib : None
hypothesis : None
gcsfs : None
jinja2 : None
lxml.etree : None
matplotlib : None
numba : None
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
psycopg2 : None
pymysql : None
pyarrow : None
pyreadstat : None
pytest : None
python-calamine : None
pyxlsb : None
s3fs : None
scipy : None
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : None
xlsxwriter : None
zstandard : None
tzdata : 2025.2
qtpy : None
pyqt5 : None
</details>
| [
"Bug",
"Error Reporting",
"Timestamp"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Thanks for the report! Confirmed on main, part of the stack has the error:\n\n> pandas._libs.tslibs.np_datetime.OutOfBoundsDatetime: Cannot cast 0001-01-01 00:00:00 to unit='ns' without overflow.\n\nwhich I believe should be what is raised here. Further investigations and PRs to fix are welcome!",
"take",
"Hi @rhshadrach ,\nI've opened a PR that should close this issue. When you have a moment, could you please take a look? \n\nThank you!"
] |
2,959,825,885 | 61,207 | Fix #60494: query doesn't work on DataFrame integer column names | closed | 2025-03-31T08:36:24 | 2025-04-01T16:50:23 | 2025-04-01T16:50:16 | https://github.com/pandas-dev/pandas/pull/61207 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61207 | https://github.com/pandas-dev/pandas/pull/61207 | David-msggc | 1 | - [x] closes #60494
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [x] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [x] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
Some io code checks failed but they were already failing before the bugfix.
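On released pandas versions affected by #60494, a possible workaround (a hypothetical sketch, not the fix in this PR) is to rename integer column labels to strings before querying:

```python
import pandas as pd

# Frame with integer column labels, which the query resolver cannot see.
df = pd.DataFrame({0: [1, 2, 3], 1: [4, 5, 6]})

# Renaming the labels to strings makes them resolvable in query/eval.
result = df.rename(columns={0: "a", 1: "b"}).query("a > 1")
print(len(result))  # 2
```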
The function `_get_cleaned_column_resolvers` ignores integer column names, so when it is called from eval it returns empty column resolvers even though an integer column exists. Converting the integer columns to strings before `_get_cleaned_column_resolvers` is called in eval fixes this. | [
"expressions"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Thanks @David-msggc "
] |
2,959,787,102 | 61,206 | BUG: round on object columns no longer raises a TypeError | closed | 2025-03-31T08:18:12 | 2025-05-21T00:33:35 | 2025-05-21T00:33:34 | https://github.com/pandas-dev/pandas/issues/61206 | true | null | null | MT407 | 4 | ### Pandas version checks
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [ ] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
import pandas as pd
df=pd.DataFrame(data=['foo'],columns=['bar'])
df.loc[0,'bar']=0.2
df['bar'].round()
Out[4]:
0 0.2
```
### Issue Description
pd.Series.round() appears to have changed behaviour in 2.2.3 compared to 2.1.4.
In previous versions, attempting to round a column with "object" dtype would raise a TypeError. In 2.2.3, round now silently returns the same column, without applying any rounding.
I'm not sure if there is some underlying change that causes this behaviour, but together with the removal of downcasting from a variety of methods (ffill, replace, fillna,...) this change in behaviour seems dangerous without any warnings.
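Until the regression is addressed, a defensive workaround (a sketch; it assumes the column's values are all meant to be numeric) is to convert to a numeric dtype first, so that `round()` genuinely applies instead of silently returning the input:

```python
import pandas as pd

# Object-dtype column like df["bar"] after the .loc assignment in the example
s = pd.Series([0.2], dtype=object)

# Converting to a numeric dtype first makes round() apply, or fail loudly
# on non-numeric values rather than returning the column unchanged.
rounded = pd.to_numeric(s).round()
print(rounded.iloc[0])  # 0.0
```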
### Expected Behavior
```
import pandas as pd
df=pd.DataFrame(data=['foo'],columns=['bar'])
df.loc[0,'bar']=0.2
df['bar'].round()
TypeError: loop of ufunc does not support argument 0 of type float which has no callable rint method
```
### Installed Versions
<details>
INSTALLED VERSIONS
------------------
commit : 0691c5cf90477d3503834d983f69350f250a6ff7
python : 3.9.18
python-bits : 64
OS : Windows
OS-release : 10
Version : 10.0.19045
machine : AMD64
processor : Intel64 Family 6 Model 142 Stepping 12, GenuineIntel
byteorder : little
LC_ALL : None
LANG : None
LOCALE : English_United Kingdom.1252
pandas : 2.2.3
numpy : 1.26.3
pytz : 2023.3.post1
dateutil : 2.8.2
pip : 23.3.1
Cython : None
sphinx : None
IPython : 8.15.0
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : 4.12.2
blosc : None
bottleneck : 1.3.6
dataframe-api-compat : None
fastparquet : None
fsspec : None
html5lib : None
hypothesis : None
gcsfs : None
jinja2 : 3.1.2
lxml.etree : 4.9.3
matplotlib : 3.8.0
numba : 0.60.0
numexpr : 2.8.7
odfpy : None
openpyxl : 3.1.0
pandas_gbq : None
psycopg2 : None
pymysql : None
pyarrow : None
pyreadstat : None
pytest : 7.4.0
python-calamine : None
pyxlsb : None
s3fs : None
scipy : 1.11.4
sqlalchemy : None
tables : None
tabulate : None
xarray : 2023.12.0
xlrd : 2.0.1
xlsxwriter : 3.1.1
zstandard : 0.19.0
tzdata : 2023.3
qtpy : None
pyqt5 : None
</details>
| [
"Bug",
"Regression",
"Error Reporting",
"Numeric Operations"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Result of a git bisect points at https://github.com/pandas-dev/pandas/pull/56767; cc @phofl",
"take\n",
"take"
] |
2,959,776,688 | 61,205 | Bump pypa/cibuildwheel from 2.23.1 to 2.23.2 | closed | 2025-03-31T08:13:04 | 2025-03-31T17:22:58 | 2025-03-31T17:22:55 | https://github.com/pandas-dev/pandas/pull/61205 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61205 | https://github.com/pandas-dev/pandas/pull/61205 | dependabot[bot] | 0 | Bumps [pypa/cibuildwheel](https://github.com/pypa/cibuildwheel) from 2.23.1 to 2.23.2.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/pypa/cibuildwheel/releases">pypa/cibuildwheel's releases</a>.</em></p>
<blockquote>
<h2>v2.23.2</h2>
<ul>
<li>🐛 Workaround an issue with pyodide builds when running cibuildwheel with a Python that was installed via UV (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2328">#2328</a> via <a href="https://redirect.github.com/pypa/cibuildwheel/issues/2331">#2331</a>)</li>
<li>🛠 Dependency updates, including a manylinux update that fixes an <a href="https://redirect.github.com/pypa/manylinux/issues/1760">'undefined symbol' error</a> in gcc-toolset (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2334">#2334</a>)</li>
</ul>
</blockquote>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a href="https://github.com/pypa/cibuildwheel/blob/main/docs/changelog.md">pypa/cibuildwheel's changelog</a>.</em></p>
<blockquote>
<h3>v2.23.2</h3>
<p><em>24 March 2025</em></p>
<ul>
<li>🐛 Workaround an issue with pyodide builds when running cibuildwheel with a Python that was installed via UV (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2328">#2328</a> via <a href="https://redirect.github.com/pypa/cibuildwheel/issues/2331">#2331</a>)</li>
<li>🛠 Dependency updates, including a manylinux update that fixes an <a href="https://redirect.github.com/pypa/manylinux/issues/1760">'undefined symbol' error</a> in gcc-toolset (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2334">#2334</a>)</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="https://github.com/pypa/cibuildwheel/commit/d04cacbc9866d432033b1d09142936e6a0e2121a"><code>d04cacb</code></a> Bump version: v2.23.2</li>
<li><a href="https://github.com/pypa/cibuildwheel/commit/5f4e019684661085adb6558969c7fd389a532174"><code>5f4e019</code></a> [2.x] Update dependencies (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2334">#2334</a>)</li>
<li><a href="https://github.com/pypa/cibuildwheel/commit/2efa648f38e83a421aae82bc80002f8cabf92be7"><code>2efa648</code></a> fix: always resolve --python argument (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2328">#2328</a>) (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2331">#2331</a>)</li>
<li>See full diff in <a href="https://github.com/pypa/cibuildwheel/compare/v2.23.1...v2.23.2">compare view</a></li>
</ul>
</details>
<br />
[](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
</details> | [
"Build",
"CI",
"Dependencies"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] |
2,959,329,293 | 61,204 | BUG: DataFrame.min with skipna=True raises TypeError when column contains np.nan and datetime.date | open | 2025-03-31T02:39:29 | 2025-04-27T12:00:06 | null | https://github.com/pandas-dev/pandas/issues/61204 | true | null | null | tanjt107 | 4 | ### Pandas version checks
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [x] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
import pandas as pd
import numpy as np
import datetime
data = {
"dates": [
np.nan,
np.nan,
datetime.date(2025, 1, 3),
datetime.date(2025, 1, 4),
],
}
df = pd.DataFrame(data)
df.min(axis=0)
```
### Issue Description
The issue arises when calling DataFrame.min(axis=0) with skipna=True (default) on a column containing a mix of np.nan and datetime.date. This results in a TypeError because np.nan (a float) cannot be compared with datetime.date.
```Traceback (most recent call last):
File "C:\Users\45217950\Downloads\GitHub\irr-cloud\test.py", line 29, in <module>
df.min(axis=0)
File "C:\Users\45217950\Downloads\GitHub\irr-cloud\.venv\Lib\site-packages\pandas\core\frame.py", line 11643, in min
result = super().min(axis, skipna, numeric_only, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\45217950\Downloads\GitHub\irr-cloud\.venv\Lib\site-packages\pandas\core\generic.py", line 12388, in min
return self._stat_function(
^^^^^^^^^^^^^^^^^^^^
File "C:\Users\45217950\Downloads\GitHub\irr-cloud\.venv\Lib\site-packages\pandas\core\generic.py", line 12377, in _stat_function
return self._reduce(
^^^^^^^^^^^^^
File "C:\Users\45217950\Downloads\GitHub\irr-cloud\.venv\Lib\site-packages\pandas\core\frame.py", line 11562, in _reduce
res = df._mgr.reduce(blk_func)
^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\45217950\Downloads\GitHub\irr-cloud\.venv\Lib\site-packages\pandas\core\internals\managers.py", line 1500, in reduce
nbs = blk.reduce(func)
^^^^^^^^^^^^^^^^
File "C:\Users\45217950\Downloads\GitHub\irr-cloud\.venv\Lib\site-packages\pandas\core\internals\blocks.py", line 404, in reduce
result = func(self.values)
^^^^^^^^^^^^^^^^^
File "C:\Users\45217950\Downloads\GitHub\irr-cloud\.venv\Lib\site-packages\pandas\core\frame.py", line 11481, in blk_func
return op(values, axis=axis, skipna=skipna, **kwds)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\45217950\Downloads\GitHub\irr-cloud\.venv\Lib\site-packages\pandas\core\nanops.py", line 147, in f
result = alt(values, axis=axis, skipna=skipna, **kwds)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\45217950\Downloads\GitHub\irr-cloud\.venv\Lib\site-packages\pandas\core\nanops.py", line 404, in new_func
result = func(values, axis=axis, skipna=skipna, mask=mask, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\45217950\Downloads\GitHub\irr-cloud\.venv\Lib\site-packages\pandas\core\nanops.py", line 1098, in reduction
result = getattr(values, meth)(axis)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\45217950\Downloads\GitHub\irr-cloud\.venv\Lib\site-packages\numpy\_core\_methods.py", line 48, in _amin
return umr_minimum(a, axis, None, out, keepdims, initial, where)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: '<=' not supported between instances of 'float' and 'datetime.date'
```
This issue is related to issue [#61187](https://github.com/pandas-dev/pandas/issues/61187), but the specific case here involves datetime.date (not datetime.datetime), which behaves differently in pandas.
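As a workaround on affected versions (a sketch, not a fix), dropping the missing values explicitly avoids the internal fill value that ends up being compared against `datetime.date`:

```python
import datetime

import numpy as np
import pandas as pd

s = pd.Series(
    [np.nan, np.nan, datetime.date(2025, 1, 3), datetime.date(2025, 1, 4)]
)

# Explicitly dropping NaN sidesteps skipna's internal fill on object dtype.
result = s.dropna().min()
print(result)  # 2025-01-03
```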
### Expected Behavior
```
dates 2025-01-03
dtype: object
```
### Installed Versions
<details>
INSTALLED VERSIONS
------------------
commit : 0691c5cf90477d3503834d983f69350f250a6ff7
python : 3.12.7
python-bits : 64
OS : Windows
OS-release : 10
Version : 10.0.19045
machine : AMD64
processor : Intel64 Family 6 Model 85 Stepping 7, GenuineIntel
byteorder : little
LC_ALL : None
LANG : en_US.UTF-8
LOCALE : English_United States.1252
pandas : 2.2.3
numpy : 2.2.3
pytz : 2025.1
dateutil : 2.9.0
pip : 24.2
Cython : None
sphinx : None
IPython : None
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : None
blosc : None
bottleneck : None
dataframe-api-compat : None
fastparquet : None
fsspec : None
html5lib : None
hypothesis : None
gcsfs : None
jinja2 : None
lxml.etree : None
matplotlib : None
numba : None
numexpr : None
odfpy : None
openpyxl : 3.1.5
pandas_gbq : 0.28.0
psycopg2 : None
pymysql : None
pyarrow : 19.0.1
pyreadstat : None
pytest : None
python-calamine : None
pyxlsb : None
s3fs : None
scipy : 1.15.2
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : None
xlsxwriter : 3.2.2
zstandard : None
tzdata : 2025.1
qtpy : None
pyqt5 : None
</details>
| [
"Bug",
"Missing-data",
"Reduction Operations"
] | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Thanks for the report. This happens on any `object` dtype data; I am wondering if pandas should handle object blocks specially where we filter instead of fillna values. Further investigations and PRs to fix are welcome!",
"take",
"Confirmed on main, it looks like https://github.com/pandas-dev/pandas/blob/0e0bafb39d080a05ca44c501d5af3c553ef89b14/pandas/core/nanops.py#L1100 comparing `inf` with `datetime.date` after `_get_values` on the example provided in the issue.\n\nHowever, I'm curious whether Pandas is expected to support this kind of operation.\nFor example, in NumPy, a similar case raises a TypeError:\n\n```python3\nimport datetime\nimport numpy as np\n\nvalues = np.array([\n [np.nan, np.nan],\n [datetime.date(2020, 1, 1), datetime.date(2020, 1, 2)],\n])\nprint(np.nanmin(values, axis=0))\nTraceback (most recent call last):\n File \"/home/pandas/test3.py\", line 8, in <module>\n print(np.nanmin(values, axis=0))\n File \"/usr/local/lib/python3.10/site-packages/numpy/lib/nanfunctions.py\", line 350, in nanmin\n res = np.amin(a, axis=axis, out=out, **kwargs)\n File \"/usr/local/lib/python3.10/site-packages/numpy/core/fromnumeric.py\", line 2970, in amin\n return _wrapreduction(a, np.minimum, 'min', axis, None, out,\n File \"/usr/local/lib/python3.10/site-packages/numpy/core/fromnumeric.py\", line 88, in _wrapreduction\n return ufunc.reduce(obj, axis, dtype, out, **passkwargs)\nTypeError: '<=' not supported between instances of 'float' and 'datetime.date'\n```\n\ncc @rhshadrach ",
"Thanks @chilin0525 - it's not clear to me whether NumPy would regard this as a bug on their end. I've opened https://github.com/numpy/numpy/issues/28839."
] |
2,959,000,442 | 61,203 | BUG: fix to_json on period | closed | 2025-03-30T17:35:03 | 2025-07-28T17:18:48 | 2025-07-28T17:18:48 | https://github.com/pandas-dev/pandas/pull/61203 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61203 | https://github.com/pandas-dev/pandas/pull/61203 | xiaohuanlin | 4 | - [x] closes #55490 (Replace xxxx with the GitHub issue number)
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [x] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [x] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| [
"Stale"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Does anyone else want to review it?",
"This pull request is stale because it has been open for thirty days with no activity. Please [update](https://pandas.pydata.org/pandas-docs/stable/development/contributing.html#updating-your-pull-request) and respond to this comment if you're still interested in working on this.",
"Can anyone take a look?",
"Thanks for the pull request, but it appears to have gone stale. If interested in continuing, please merge in the main branch, address any review comments and/or failing tests, and we can reopen."
] |
2,958,672,916 | 61,202 | DOC: Simplify pandas theme footer by removing social buttons and stre… | closed | 2025-03-30T05:21:34 | 2025-03-30T22:03:33 | 2025-03-30T22:03:33 | https://github.com/pandas-dev/pandas/pull/61202 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61202 | https://github.com/pandas-dev/pandas/pull/61202 | ascender1729 | 0 | I'll help you complete the pull request form with the appropriate information. Here's what you should fill in:
Title:
```
DOC: Simplify pandas theme footer
```
Description:
```markdown
This PR simplifies the pandas theme footer by:
- Removing social media buttons (which are already present in the navigation bar)
- Streamlining the copyright text to be more concise
- Adding proper CSS styling for better visual appearance
The changes make the footer cleaner and more focused while maintaining essential information. The social media links remain accessible through the navigation bar.
- [x] closes #51536
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
```
Notes about the checkboxes:
1. ✅ `closes #51536` - Checked because this PR addresses issue #51536
2. ✅ `All code checks passed` - Checked because the changes are purely documentation/CSS and don't require tests
3. ❌ `Added type annotations` - Unchecked because we only modified HTML and CSS files
4. ❌ `Added an entry in whatsnew` - Unchecked because this is a documentation-only change
The changes look good in the diff view:
1. Removed the social media buttons from the footer
2. Simplified the copyright text
3. Added proper CSS styling for the footer
The PR follows pandas' contribution guidelines by:
- Using the correct prefix (DOC:)
- Keeping changes focused and minimal
- Including proper CSS styling
- Maintaining accessibility
- Following the existing code style
| [
"Docs",
"Web"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] |
2,958,643,966 | 61,201 | Fix missing blank line in DataFrame.round docstring (PEP 257 style) | closed | 2025-03-30T03:48:16 | 2025-03-30T17:11:29 | 2025-03-30T17:11:29 | https://github.com/pandas-dev/pandas/pull/61201 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61201 | https://github.com/pandas-dev/pandas/pull/61201 | cureprotocols | 1 | This PR fixes a minor style issue in the docstring of `DataFrame.round()`.
### What Changed:
- Adds a missing blank line before the closing triple quotes (`"""`)
- Ensures compliance with PEP 257 and pandas' internal docstring style guidelines
- Helps maintain clean parsing for automated doc tools and improves readability
This is a **non-functional, formatting-only change** intended to improve internal consistency across the API documentation.
---
### ✅ Checklist
- [ ] closes #xxxx (No issue to close — docstring formatting only)
- [ ] Tests added and passed (Not applicable)
- [x] All code checks passed (pre-commit and linting compliant)
- [ ] Added type annotations (Not applicable)
- [ ] Added an entry in `doc/source/whatsnew/` (Not applicable for style-only fix)
Author: Michael Alexander Montoya (@cureprotocols)
| [
"Docs"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Indeed, pre-commit identifies this as an issue. Closing."
] |
2,958,584,423 | 61,200 | BUG: date comparison fails when series is all pd.NaT values | closed | 2025-03-30T00:45:01 | 2025-04-18T06:46:18 | 2025-04-15T12:42:00 | https://github.com/pandas-dev/pandas/pull/61200 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61200 | https://github.com/pandas-dev/pandas/pull/61200 | Mohit-Kundu | 9 | - [x] closes #61188 (Replace xxxx with the GitHub issue number)
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [x] Added an entry in the latest `doc/source/whatsnew/v3.0.0.rst` file if fixing a bug or adding a new feature.
| [
"Bug",
"Datetime",
"Missing-data"
] | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | [
"pre-commit.ci autofix",
"@rhshadrach I just incorporated the changes! Please let me know if everything looks okay.",
"> lgtm\r\n\r\nGood to hear! Thank you for guidance :)",
"Looks like this PR re-added the 1.3MB Pandas Cookbook jpeg into main. Looks like we should hard revert this commit? cc @datapythonista ",
"I'm travelling and I may not be able to do it for several hours or tomorrow. Do you have permissions to access the project settings and enable the force push option? If you can do that, it's aa easy as clone, git reset --soft head~ and git push -f.\r\n\r\nIf you don't have access, I can see if I can give you permissons, but I don't think I can. Otherwise let's try npt to merge until then, which will make things significantly more complicated.",
"@pandas-dev/pandas-core I hard reverted this, as it reintroduced the big image we wanted to avoid having in our history.\r\n\r\n@Mohit-Kundu sorry about this. We had to rewrite pandas git history, which likely caused that unrelated image ending up in your PR. Sorry we didn't see you before merging, but now we had to hard undo the resulting commit of this PR to avoid having that image in our history and increasing even more the repository size (it's 433Mb even being extra careful of not adding big files). Given this, can I ask you to please open this PR again with the same changes, but without the image? Thank you!\r\n\r\n@mroeschke I see you have the same permissions as me already. I can't add people, but all good for now I think.",
"Thanks @mroeschke @datapythonista ",
"@datapythonista I'd be happy to do that! \r\n\r\nHowever, I'm not exactly sure how to reopen the PR, since on my end it shows that the pull request has been successfully merged and closed. I'd really appreciate your guidance on the best way to proceed; should I open a new PR with the same changes but without the image, or is there another approach you would recommend?\r\n\r\nThanks!\r\n\r\n",
"> should I open a new PR with the same changes but without the image\r\n\r\nYes, I think this is the best. Thank you!\r\n\r\n"
] |
2,958,559,547 | 61,199 | BUG: Fix Series comparison fails when index dtypes differ (object vs string) (#61099) | closed | 2025-03-29T23:50:02 | 2025-04-01T23:28:12 | 2025-04-01T23:28:12 | https://github.com/pandas-dev/pandas/pull/61199 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61199 | https://github.com/pandas-dev/pandas/pull/61199 | MayurKishorKumar | 4 | - [x] closes #61099
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [x] Added an entry in the latest `doc/source/whatsnew/v3.0.0.rst` file if fixing a bug or adding a new feature. | [] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"@MayurKishorKumar - looks like your environment got added to your commit.",
"Oh shoot, my bad! I’ll fix that right away. Thanks for catching it!",
"Hi @rhshadrach 👋\r\n\r\nI’m working on fixing [[#61099](https://github.com/pandas-dev/pandas/issues/61099)] and ran into a failure in `test_mixed_col_index_dtype`.\r\n\r\nMy fix updates `Index.equals` so that `StringDtype` and `object` dtypes are treated as equivalent when comparing column indexes. As a result, this test now fails because `result.columns.dtype` becomes `\"string\"` while `expected.columns.dtype` remains `object`.\r\n\r\nThere are two options I’m considering:\r\n\r\n1. **Update the test** to explicitly cast `expected.columns` to `\"string\"` when `using_infer_string=True`, so it reflects the result.\r\n2. **Adjust internal logic** so the result stays `object`, but that might go against the spirit of treating string/object as equal.\r\n\r\nWould updating the test be acceptable in this case?\r\n\r\nThanks!\r\n",
"pre-commit.ci autofix"
] |
2,958,390,687 | 61,198 | BUG: Fix AttributeError in pd.eval for method calls on binary operations | closed | 2025-03-29T18:36:26 | 2025-03-31T16:52:42 | 2025-03-31T16:52:36 | https://github.com/pandas-dev/pandas/pull/61198 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61198 | https://github.com/pandas-dev/pandas/pull/61198 | myenugula | 1 | - [x] closes #61175
- [x] [Tests added and passed] if fixing a bug or adding a new feature.
- [x] All [code checks passed]
- [x] Added [type annotations] to new arguments/methods/functions.
- [x] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature. | [
"Bug",
"expressions"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Thanks @myenugula "
] |
2,958,215,361 | 61,197 | ENH: Add a new parameter to pandas.read_csv #61172 | closed | 2025-03-29T16:29:35 | 2025-03-29T18:34:44 | 2025-03-29T18:34:44 | https://github.com/pandas-dev/pandas/pull/61197 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61197 | https://github.com/pandas-dev/pandas/pull/61197 | BahramF73 | 1 | - [x] closes #61172
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
### Notes:
I tried to fix this issue by adding a new parameter named `return_empty` to `pandas.read_csv`, which is _**False**_ by default and, if set to _**True**_, returns an empty DataFrame instead of raising.
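For context, this is the current behavior the proposed parameter targets (a sketch of the status quo, not of the new flag):

```python
import io

import pandas as pd

# Today, read_csv raises EmptyDataError on an empty input; the proposed
# return_empty=True flag would return an empty DataFrame instead.
try:
    pd.read_csv(io.StringIO(""))
    outcome = "parsed"
except pd.errors.EmptyDataError:
    outcome = "raised EmptyDataError"
print(outcome)  # raised EmptyDataError
```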
I am not familiar with tests and didn't test it, but it should not break anything. | [
"Thanks for the PR, but as discussed in https://github.com/pandas-dev/pandas/issues/61172#issuecomment-2749363849 there probably isn't much appetite for this feature so closing this PR as won't implement"
] |
2,957,335,175 | 61,196 | BUG: `to_datetime()` warns unnecessarily that format cannot be inferred | closed | 2025-03-28T21:49:48 | 2025-03-31T01:00:52 | 2025-03-29T12:30:42 | https://github.com/pandas-dev/pandas/issues/61196 | true | null | null | metazoic | 3 | ### Pandas version checks
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [ ] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
pd.to_datetime(['2020-01-01T20:20:20', '2020-01-01T20:21:20'])
```
### Issue Description
This produces the following warning even though the format is inferable:
```
UserWarning: Could not infer format, so each element will be parsed individually, falling back to `dateutil`. To ensure parsing is consistent and as-expected, please specify a format.
```
Digging deeper, the warning occurs only when, in the first element of the list, the year equals the concatenated hour and minute, e.g. year = 2020, hour = 20, minute = 20.
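Passing the format explicitly sidesteps inference entirely, so the fallback warning cannot be emitted regardless of what the first element looks like:

```python
import pandas as pd

values = ["2020-01-01T20:20:20", "2020-01-01T20:21:20"]

# Explicit format: no inference, no dateutil fallback, no UserWarning.
result = pd.to_datetime(values, format="%Y-%m-%dT%H:%M:%S")
print(result[0])  # 2020-01-01 20:20:20
```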
### Expected Behavior
`to_datetime()` should behave just as it does when this unique condition does not hold.
### Installed Versions
<details>
INSTALLED VERSIONS
------------------
commit : 0691c5cf90477d3503834d983f69350f250a6ff7
python : 3.12.9
python-bits : 64
OS : Darwin
OS-release : 24.3.0
Version : Darwin Kernel Version 24.3.0: Thu Jan 2 20:24:24 PST 2025; root:xnu-11215.81.4~3/RELEASE_ARM64_T6030
machine : arm64
processor : arm
byteorder : little
LC_ALL : None
LANG : en_US.UTF-8
LOCALE : en_US.UTF-8
pandas : 2.2.3
numpy : 2.1.3
pytz : 2024.1
dateutil : 2.9.0.post0
pip : 25.0
Cython : None
sphinx : 7.3.7
IPython : 8.30.0
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : 4.12.3
blosc : None
bottleneck : 1.4.2
dataframe-api-compat : None
fastparquet : None
fsspec : 2024.12.0
html5lib : None
hypothesis : None
gcsfs : None
jinja2 : 3.1.6
lxml.etree : 5.3.0
matplotlib : 3.10.0
numba : 0.61.0
numexpr : 2.10.1
odfpy : None
openpyxl : 3.1.5
pandas_gbq : None
psycopg2 : None
pymysql : None
pyarrow : 19.0.0
pyreadstat : None
pytest : 8.3.4
python-calamine : None
pyxlsb : None
s3fs : 2024.12.0
scipy : 1.15.1
sqlalchemy : 2.0.37
tables : 3.10.2
tabulate : 0.9.0
xarray : 2024.11.0
xlrd : None
xlsxwriter : None
zstandard : 0.23.0
tzdata : 2023.3
qtpy : 2.4.1
pyqt5 : None
</details>
| [
"Bug",
"Datetime",
"Warnings"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"This seems to be fixed in main",
"I've run your reproducible locally and it didn't give me any user warning.\n\n<details>\n <summary>Installed Versions</summary>\n\n```python\n\nINSTALLED VERSIONS\n------------------\ncommit : 543680dcd9af5e4a9443d54204ec21e801652252\npython : 3.11.9\npython-bits : 64\nOS : Darwin\nOS-release : 24.3.0\nVersion : Darwin Kernel Version 24.3.0: Thu Jan 2 20:24:23 PST 2025; root:xnu-11215.81.4~3/RELEASE_ARM64_T6031\nmachine : arm64\nprocessor : arm\nbyteorder : little\nLC_ALL : None\nLANG : None\nLOCALE : en_US.UTF-8\n\npandas : 0+untagged.36056.g543680d\nnumpy : 2.2.4\ndateutil : 2.9.0.post0\npip : 25.0.1\nCython : 3.0.12\nsphinx : None\nIPython : None\nadbc-driver-postgresql: None\nadbc-driver-sqlite : None\nbs4 : None\nblosc : None\nbottleneck : None\nfastparquet : None\nfsspec : None\nhtml5lib : None\nhypothesis : None\ngcsfs : None\njinja2 : None\nlxml.etree : None\nmatplotlib : None\nnumba : None\nnumexpr : None\nodfpy : None\nopenpyxl : None\npsycopg2 : None\npymysql : None\npyarrow : None\npyreadstat : None\npytest : None\npython-calamine : None\npytz : 2025.2\npyxlsb : None\ns3fs : None\nscipy : None\nsqlalchemy : None\ntables : None\ntabulate : None\nxarray : None\nxlrd : None\nxlsxwriter : None\nzstandard : None\ntzdata : 2025.2\nqtpy : None\npyqt5 : None\n\n```\n</details>\n",
"Thanks @asishm and @myenugula - closing."
] |
2,956,990,529 | 61,195 | DOC: User Guide Page on user-defined functions | closed | 2025-03-28T19:15:48 | 2025-05-19T13:11:32 | 2025-05-18T19:31:46 | https://github.com/pandas-dev/pandas/pull/61195 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61195 | https://github.com/pandas-dev/pandas/pull/61195 | arthurlw | 10 | - [x] closes #61126
- [ ] ~[Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature~
- [ ] ~All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).~
- [ ] ~Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.~
- [ ] ~Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.~
| [
"Docs",
"Apply"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Currently writing this, so I would appreciate any feedback on it!",
"Hi @rhshadrach thanks for the feedback! I agree with you and will push updates soon",
"/preview",
"Website preview of this PR available at: https://pandas.pydata.org/preview/pandas-dev/pandas/61195/",
"@arthurlw I generated a preview of the rendered docs in this PR. If you want to have a look to see if everything looks as expected: https://pandas.pydata.org/preview/pandas-dev/pandas/61195/docs/user_guide/user_defined_functions.html\r\n\r\n@rhshadrach can you have a look and see if this can be merged? Even if we want to make some improvements and extend this in the future, I think this PR is already a great first version. So, whenever there is no blocker, probably easier to get this merged and iterate in follow up PRs if needed.",
"Thanks for the preview! I think adding an example that combines groupby and filter (which takes a UDF) could be beneficial. That said, I do think it might duplicate some existing docs, so not sure if it's worth including here.",
"Just noting this is still on my radar, should be able to get to it in the next 3 days.",
"Thanks @arthurlw for all the work here!",
"Thanks for the guidance on this UDF User Guide @datapythonista @rhshadrach!\r\n\r\nI'm interested in diving deeper into this area of pandas. Are there any related issues, features, or improvements you think could be tackled next?\r\n\r\nThanks in advance!",
"@arthurlw I created #61458 and assigned you to it. I think it's a good one to work in pandas udf. Good to learn more about the status quo, with reasonable complexity, and very useful to the project since it will make the code much clearer for future changes. Also, if you work on this issue, I think you'll realize of inconsistencies, missing documentation, and other related tasks that may be good to work on too. For example, it could be useful to document the executor interface in the documentation about extending pandas. So, third-party library authors can easily learn how to create an execution engine for pandas map/apply.\r\n\r\nPlease let me know if this is not what you're looking for, I can try to think of something else. Or if you have any question (better to ask them in the other issue)."
] |
2,956,720,128 | 61,194 | ENH: adding a filter (and bold) to header when writing to excel | open | 2025-03-28T17:13:59 | 2025-05-11T09:01:06 | null | https://github.com/pandas-dev/pandas/issues/61194 | true | null | null | simonaubertbd | 11 | ### Feature Type
- [x] Adding new functionality to pandas
- [ ] Changing existing functionality in pandas
- [ ] Removing existing functionality in pandas
### Problem Description
Hello,
One of the things I always do when opening an Excel file is adding a filter to the header row, like this:

### Feature Description
I'm not exactly sure
### Alternative Solutions
I see this as an option to the `to_excel` function or to the `ExcelWriter` class; I'm not sure which would be best.
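In the meantime, a hedged sketch of the manual workaround: write with `ExcelWriter`, then reach into the underlying openpyxl worksheet to add an auto-filter and bold the header (the openpyxl engine and the output name `out.xlsx` are just assumptions for illustration):

```python
# Workaround sketch (openpyxl engine assumed): add an auto-filter over the
# used range and bold the header row after pandas has written the data.
import pandas as pd
from openpyxl.styles import Font

df = pd.DataFrame({"name": ["a", "b"], "value": [1, 2]})

with pd.ExcelWriter("out.xlsx", engine="openpyxl") as writer:
    df.to_excel(writer, sheet_name="Sheet1", index=False)
    ws = writer.sheets["Sheet1"]
    ws.auto_filter.ref = ws.dimensions  # filter over the whole used range
    for cell in ws[1]:                  # row 1 holds the header
        cell.font = Font(bold=True)
```

A built-in `autofilter` argument on `to_excel` would make this boilerplate unnecessary.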
### Additional Context
_No response_ | [
"Enhancement",
"IO Excel"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Thanks for the request. Assuming all of our current engines can support this, this seems like a common desire; I'm positive here. I would recommend the argument:\n\n autofilter: bool | list-like = False\n\nallowing the user to control which index/columns get filters. However, we'd need to workout how this interacts with other arguments, namely `startrow` and `mergecells`.",
"take \n\n",
"take",
"@rhshadrach @janeq6 The most common usage, I think, is the bold format with what is called auto-filter on excel\nhttps://xlsxwriter.readthedocs.io/working_with_autofilters.html\nhttps://openpyxl.readthedocs.io/en/stable/api/openpyxl.worksheet.filters.html#module-openpyxl.worksheet.filters",
"In excel, since a filter can only be applied over a continuous range of cells, I think the argument\n```\nautofilter: bool | range = False\n```\nwould be more optimal as it's more in line with how it's applied in excel and makes input validation far easier. As for merge cells, Excel isn't really designed to filter columns with merged cells.\n\nI would recommend to not apply the autofilter if there are any merged cells if the argument is simply True, but when provided a range, to check if there any merged cells in that range of columns, if no, then apply it to that range, otherwise don't apply it and raise a warning.",
"Unfortunately `range(0, 5, 2)` is not contiguous either. I wonder if we should just accept `bool` here.",
"How about if it raises a warning and doesn't apply the filter if the step does not equal 1 and have it mentioned in the docs?",
"I would be more for raising instead of warning - the user provided input which broken the API contract. Rethinking this, if we are going to accept more than `bool`, I think it should be a `list-like` (note: range is list-like) that we validate is contiguous. Requiring users provide a `range` is not ergonomic in my opinion, can validating it is contiguous is not a heavy lift.",
"I raised an opinion in other threads (unfortunately I searched but could not find) that `DataFrame.to_excel` should yield an unformatted raw excel file (no bold, underlines, cell formatting) etc, and `Styler.to_excel` should provide functionality to add Excel elements. \n\n`Styler.to_excel` already converts the two pseudo-CSS styles \"border-style\" and \"number-format\" on a per-cell basis. It would be possible to add another pseudo-CSS style that would give per column control for adding these filters. It would also not impinge on the underlying `DataFrame.to_excel`",
"@janeq6 Are you still working on this? I would like to contribute to this issue or some part of it as I have already implemented some of the functionality for this as it was in high demand at my previous company.",
"take"
] |
2,955,473,231 | 61,193 | BUG: Fix pyarrow categoricals not working for pivot and multiindex | closed | 2025-03-28T09:10:56 | 2025-04-14T17:00:11 | 2025-04-14T17:00:04 | https://github.com/pandas-dev/pandas/pull/61193 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61193 | https://github.com/pandas-dev/pandas/pull/61193 | robin-mader-bis | 3 | - [X] closes #53051
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [x] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [x] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
<details>
<summary>Disclaimer</summary>
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
</details>
| [
"Reshaping",
"Arrow"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Hey @mroeschke,\r\nThanks for the Review! All comments should be adressed. Could you have another look if everything looks good?",
"Hey @mroeschke,\r\nThanks again for the review. All feedback should be addressed. Can you have another look?",
"Thanks @robin-mader-bis "
] |
2,954,866,743 | 61,192 | Feature/guepard pandas | closed | 2025-03-28T02:43:32 | 2025-03-28T02:44:07 | 2025-03-28T02:44:07 | https://github.com/pandas-dev/pandas/pull/61192 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61192 | https://github.com/pandas-dev/pandas/pull/61192 | kobbinour13 | 0 | - [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| [] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] |
2,954,549,901 | 61,191 | BUG: Boolean selection edge case. | open | 2025-03-27T22:27:30 | 2025-04-08T23:28:22 | null | https://github.com/pandas-dev/pandas/issues/61191 | true | null | null | ptth222 | 2 | ### Pandas version checks
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [ ] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
import pandas
df1 = pandas.DataFrame()
boolean_series = pandas.Series(dtype=bool)
df1[boolean_series]
# Empty DataFrame
# Columns: []
# Index: []
df2 = pandas.DataFrame(index=[0, 1])
df2[boolean_series]
# IndexingError: Unalignable boolean Series provided as indexer (index of the boolean Series and of the indexed object do not match).
```
### Issue Description
Trying to use an empty boolean Series to select on an empty DataFrame that has an index results in an error.
### Expected Behavior
I would expect to return an empty DataFrame. The expectation might make more sense with an example.
```
import pandas
df1 = pandas.DataFrame(['a', 'b'], index = [0, 1])
df1[df1.duplicated()]
# Empty DataFrame
# Columns: [0]
# Index: []
df2 = pandas.DataFrame(index = [0, 1])
df2[df2.duplicated()]
# IndexingError: Unalignable boolean Series provided as indexer (index of the boolean Series and of the indexed object do not match).
```
Both of these DataFrames have no duplicate values, but only one results in an error. It would be nice not to require a test for this special case and just get an empty DataFrame as the result since an empty DataFrame does not contain any duplicates.
I looked into this a little bit because I thought maybe the `.duplicated` method just needed to have the empty Series also return the index, but as far as I can tell it is not possible to create a Series with an index but no values, as you can with a DataFrame. If you try, the values are set to some default; in the case of bool it is True. I think the selection code would have to check for an empty Series before trying to use the index and return an empty DataFrame. If I am reading this correctly, in pandas/core/frame.py in the `._getitem_bool_array` method you could add a case to the if chain at the top, something like:
```
if isinstance(key, Series) and key.empty:
return self._take_with_is_copy(Index([]), axis=0)
```
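Until something like that lands, a hedged workaround is to align the mask to the frame's index before indexing, filling missing positions with False:

```python
# Workaround sketch: reindex the (possibly empty) boolean mask against the
# DataFrame's index so the lengths always line up before selecting.
import pandas as pd

df = pd.DataFrame(index=[0, 1])
mask = df.duplicated()  # empty boolean Series for a column-less frame
aligned = mask.reindex(df.index, fill_value=False)
result = df[aligned]    # empty DataFrame instead of an IndexingError
```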
### Installed Versions
<details>
INSTALLED VERSIONS
------------------
commit : 0691c5cf90477d3503834d983f69350f250a6ff7
python : 3.10.5
python-bits : 64
OS : Windows
OS-release : 10
Version : 10.0.19045
machine : AMD64
processor : Intel64 Family 6 Model 158 Stepping 12, GenuineIntel
byteorder : little
LC_ALL : None
LANG : en
LOCALE : English_United States.1252
pandas : 2.2.3
numpy : 1.24.4
pytz : 2022.1
dateutil : 2.8.2
pip : 25.0.1
Cython : 3.0.11
sphinx : 5.1.1
IPython : 8.21.0
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : None
blosc : None
bottleneck : None
dataframe-api-compat : None
fastparquet : None
fsspec : None
html5lib : 1.1
hypothesis : None
gcsfs : None
jinja2 : None
lxml.etree : 4.9.1
matplotlib : None
numba : None
numexpr : None
odfpy : None
openpyxl : 3.1.4
pandas_gbq : None
psycopg2 : None
pymysql : None
pyarrow : 19.0.1
pyreadstat : None
pytest : None
python-calamine : None
pyxlsb : None
s3fs : None
scipy : None
sqlalchemy : 2.0.9
tables : None
tabulate : 0.9.0
xarray : None
xlrd : None
xlsxwriter : 3.2.0
zstandard : None
tzdata : 2024.1
qtpy : 2.4.1
pyqt5 : None
</details>
| [
"Bug",
"Indexing",
"Needs Discussion"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Thanks for the report. First a case where we do not deal with empty objects:\n\n```python\ndf1 = pd.DataFrame({\"a\": [1, 1, 2], \"b\": [3, 4, 5]})\nmask = pd.Series({\"a\": True})\ndf1[mask]\n# pandas.errors.IndexingError: Unalignable boolean Series provided as indexer\n```\n\nI do not think there is an appetite for changing this behavior. I agree it would be great if\n\n```python\ndf2 = pandas.DataFrame(index=[0, 1])\ndf2[df2.duplicated()]\n```\n\ncould always work, but I do not see a way to change `duplicated` nor `__getitem__` to make this so. E.g. if we were to follow your proposal:\n\n> I think the selection code would have to check for an empty Series before trying to use the index and return an empty DataFrame.\n\nthis would introduce an edge case that goes against the general rule (unalignable Series will raise, except when they are empty). As such, I'm opposed to this way forward.",
"> this would introduce an edge case that goes against the general rule (unalignable Series will raise, except when they are empty). As such, I'm opposed to this way forward.\n\nYou are just picking your edge case though. You either introduce the one I suggested or leave in the one that's there. The one that's there is much more onerous in my opinion. Is anyone really relying on an empty Series mask to result in an error? How many people would rather rely on the DataFrame's own built in methods to always work with itself? Do you want to keep the current edge case so some arbitrary general rule doesn't have an exception, resulting in a less functional class? Or do you want the edge case that improves functionality, but breaks a general rule in a scenario where it basically doesn't matter (IMO)? Note that these are questions to actually consider, not me just trying to win an argument or something.\n\nSome alternatives:\nYou could change Series so it could have an index without values, or DataFrame so it can't. It's weird that these don't have the same behavior in that respect. That seems a bit more involved though.\n\nAnother solution is to have every method that returns a boolean Series check if it is going to return an empty Series and instead return one with all False values the same length as the DataFrame. This would also be a pain tracking all of these down. Having boolean Series default value be False instead of True and always passing the DataFrame index into the Series constructor would also work, but it's likely people rely on the default True somewhere.\n\n\nUltimately this is a structural issue of Series not being able to have an index without values while DataFrames can, and choosing to return boolean Series for methods.\n\nThe way I see it the options are:\n1. Deal with the structural issue.\n2. Apply one of the patches I outlined.\n3. Ignore the problem and leave it because 1. is too cumbersome and 2. 
violates some rule in a way that's unlikely to matter (IMO).\n\n\nNote that there is a similar error with unaligned sizes:\n```\nimport pandas\n\ndf1 = pandas.DataFrame(['a', 'b'], index = [0, 1])\ndf1[df1.duplicated().values]\n# Empty DataFrame\n# Columns: [0]\n# Index: []\n\ndf = pandas.DataFrame(index=[0, 1])\ndf[df.duplicated().values]\n# ValueError: Item wrong length 0 instead of 2.\n```\nTo me this suggests having methods always return a Series of the same size as the DataFrame might be a better overall fix. At least when the Series dtype is bool. What's more logical, returning False for every row when there are no columns in a DataFrame, or returning an empty Series? I don't really know, but for the scenario that I started this Issue for the all False Series doesn't result in an error.\n\nEdit: To do this in .duplicated it looks like you could change:\n```\nif self.empty:\n return self._constructor_sliced(dtype=bool)\n```\nTo:\n```\nif self.empty:\n return self._constructor_sliced(False, dtype=bool, index = self.index)\n```\nI'm not sure how many methods do something similar, but it would be a good idea to change those too if this is acceptable.\n\nEdit 2: Note that both the \"all\" and \"any\" methods return values in this same situation.\n```\nimport pandas\n\npandas.DataFrame(index = [0, 1]).all(axis=1)\n# 0 True\n# 1 True\n# dtype: bool\n\npandas.DataFrame(index = [0, 1]).any(axis=1)\n# 0 False\n# 1 False\n# dtype: bool\n\n\npandas.DataFrame().all(axis=1)\n# Series([], dtype: bool)\n\npandas.DataFrame().any(axis=1)\n# Series([], dtype: bool)\n```\nThere is more historical precedent for these methods returning True or False for empty sets, but I think the reasoning could also be applied to other boolean methods. 
Making it so that a Series can be empty with an index I think is the best fix, but having reasonable default values for boolean methods is also okay.\n\nEdit 3: So due to how any() and all() work, boolean methods cannot return default boolean values for empty DataFrames. Since all() must return True for an empty set, methods like isna have to return an empty set when used on an empty DataFrame. To return anything else will result in an incorrect result when chained with any() or all(). This basically nullifies everything I said in regards to that possible solution. I think the only real choices are allowing an empty Series to have an index or modifying the __getitem__ to ignore the size of the input if the DataFrame is empty and just return empty."
] |
2,954,061,898 | 61,190 | Update guidance on CFLAGS | closed | 2025-03-27T18:25:45 | 2025-03-28T20:12:33 | 2025-03-28T20:12:26 | https://github.com/pandas-dev/pandas/pull/61190 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61190 | https://github.com/pandas-dev/pandas/pull/61190 | WillAyd | 1 | I recently discovered that setting the flags like this also interferes with Meson's ability to look up a caching tool like ccache or sccache. Rather than appending to these, I think its best to just unset them entirely while developing | [
"Docs"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Thanks @WillAyd "
] |
2,953,485,694 | 61,189 | BUG: \0 null bytes in `str` not preserved in `pandas.CategoricalIndex` or `pandas.MultiIndex` | closed | 2025-03-27T15:38:25 | 2025-03-28T20:44:48 | 2025-03-28T20:44:37 | https://github.com/pandas-dev/pandas/issues/61189 | true | null | null | dutc | 3 | ### Pandas version checks
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [x] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
from sys import version_info as py_version_info
from pandas import __version__ as pd_version
assert py_version_info[:3] == (3, 13, 2)
assert pd_version == '2.2.3' or pd_version == '3.0.0.dev0+2028.gb64f438cc8'
from pandas import CategoricalIndex, MultiIndex
entities = [b'abc', b'abc\0']
# CORRECT
cat = CategoricalIndex(entities)
assert cat.tolist() == entities
assert len({*cat.tolist()}) == len({*entities})
# CORRECT
idx = MultiIndex.from_product([entities])
assert idx.get_level_values(0).tolist() == entities
assert len({*idx.get_level_values(0).tolist()}) == len({*entities})
entities = ['abc', 'abc\0']
# INCORRECT
cat = CategoricalIndex(entities)
assert cat.tolist() != entities
assert len({*cat.tolist()}) < len({*entities})
# INCORRECT
idx = MultiIndex.from_product([entities])
assert idx.get_level_values(0).tolist() != entities
assert len({*idx.get_level_values(0).tolist()}) < len({*entities})
entities = ['abc', 'abc\0def']
# INCORRECT
cat = CategoricalIndex(entities)
assert cat.tolist() != entities
assert len({*cat.tolist()}) < len({*entities})
# INCORRECT
idx = MultiIndex.from_product([entities])
assert idx.get_level_values(0).tolist() != entities
assert len({*idx.get_level_values(0).tolist()}) < len({*entities})
```
### Issue Description
When constructing a `pandas.CategoricalIndex` or `pandas.MultiIndex` from Python `str` values, any code points following a '\0' are discarded. This does not occur with `bytes` inputs.
### Expected Behavior
The null bytes should be preserved exactly.
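A hedged workaround until the underlying string hashing is fixed: passing an explicit `CategoricalDtype` avoids the implicit factorize step that truncates at the null byte (this mirrors what was verified in the discussion, not a guaranteed API contract):

```python
# Workaround sketch: with explicit categories, the embedded null byte
# round-trips intact; the implicit-categories path loses it.
import pandas as pd

entities = ["abc", "abc\0"]
dtype = pd.CategoricalDtype(categories=entities)
cat = pd.Categorical(entities, dtype=dtype)
```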
### Installed Versions
<details>
>>> from pandas import show_versions
>>> show_versions() # trimmed
INSTALLED VERSIONS
------------------
commit : b64f438cc8079d441331396fbac1e2dc61b26af9
python : 3.13.2
python-bits : 64
OS : Linux
OS-release : 6.12.20-1-lts
Version : #1 SMP PREEMPT_DYNAMIC Sun, 23 Mar 2025 08:02:10 +0000
machine : x86_64
processor :
byteorder : little
LC_ALL : None
LANG : en_US.UTF-8
LOCALE : en_US.UTF-8
pandas : 3.0.0.dev0+2028.gb64f438cc8
numpy : 2.3.0.dev0+git20250325.2a6f4f0
dateutil : 2.9.0.post0
pip : 24.3.1
tzdata : 2025.2
</details> | [
"Bug",
"Algos",
"Strings"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"To help further pinpoint, it seems that the deeper issue may lie within `pandas.factorize`\n\n```python\nimport pandas as pd\nprint(f'{pd.__version__ = }') # 2.2.3\n\nentities = ['abc', 'abc\\0']\ns = pd.Series(entities)\n\nassert s.tolist() == entities\n\nprint(\n # incorrect unique value detection\n s.factorize(), # (array([0, 0]), Index(['abc'], dtype='object'))\n pd.factorize(s), # (array([0, 0]), Index(['abc'], dtype='object'))\n sep='\\n'\n)\n```\n\nBuilding more confidence, if we explicitly create a `CategoricalDtype` we can round trip safely\n\n```python\nimport pandas as pd\nprint(f'{pd.__version__ = }') # 2.2.3\n\nentities = ['abc', 'abc\\0']\ndtype = pd.CategoricalDtype(categories=entities)\n\nassert pd.Categorical(entities, dtype=dtype).tolist() == entities\nassert pd.Series(entities, dtype=dtype).tolist() == entities\n```\n\n",
"Yes, there's a few related open issues on this https://github.com/pandas-dev/pandas/issues/34551 https://github.com/pandas-dev/pandas/issues/53720",
"Thanks for the report, and @asishm for tracking down those issues. Closing as a duplicate of #34551."
] |
2,952,212,798 | 61,188 | BUG: date comparison fails when series is all pd.NaT values | closed | 2025-03-27T09:24:05 | 2025-04-15T12:42:01 | 2025-04-15T12:42:01 | https://github.com/pandas-dev/pandas/issues/61188 | true | null | null | imrehg | 5 | ### Pandas version checks
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [x] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
import pandas as pd
from datetime import datetime
s = pd.Series([pd.NaT, "1/1/2020 10:00:00"])
s = pd.to_datetime(s)
print(s.dt.date.le(datetime.now().date()))
# 0 False
# 1 True
# dtype: bool
s = pd.Series([pd.NaT, pd.NaT])
s = pd.to_datetime(s)
print(s.dt.date.le(datetime.now().date()))
# TypeError: Invalid comparison between dtype=datetime64[ns] and date
```
### Issue Description
When comparing a `datetime64[ns]` or similar series where all the values turn out to be `pd.NaT`, the comparison just breaks. This is problematic, as the input series cannot necessarily be controlled beforehand, and if there is any actual non-NaT value, the comparison works. The Series `dtype` is the same in both cases, which would make me expect the rest of the behaviour to be the same too.
### Expected Behavior
In the above code, I would expect it to return:
```python
0 False
1 False
dtype: bool
```
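A hedged workaround, pending a fix: compare against a `pd.Timestamp` instead of a `datetime.date`, which keeps the series in datetime64 dtype (avoiding the `.dt.date` object path) and evaluates NaT comparisons to False:

```python
# Workaround sketch: normalize to midnight and compare timestamps;
# NaT <= anything is False, so the all-NaT case no longer raises.
import pandas as pd
from datetime import datetime

s = pd.to_datetime(pd.Series([pd.NaT, pd.NaT]))
result = s.dt.normalize().le(pd.Timestamp(datetime.now().date()))
```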
### Installed Versions
<details>
INSTALLED VERSIONS
------------------
commit : 0691c5cf90477d3503834d983f69350f250a6ff7
python : 3.10.16
python-bits : 64
OS : Darwin
OS-release : 24.3.0
Version : Darwin Kernel Version 24.3.0: Thu Jan 2 20:24:16 PST 2025; root:xnu-11215.81.4~3/RELEASE_ARM64_T6000
machine : arm64
processor : arm
byteorder : little
LC_ALL : None
LANG : en_GB.UTF-8
LOCALE : en_GB.UTF-8
pandas : 2.2.3
numpy : 1.26.4
pytz : 2025.2
dateutil : 2.9.0.post0
pip : None
Cython : None
sphinx : None
IPython : 8.34.0
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : None
blosc : None
bottleneck : None
dataframe-api-compat : None
fastparquet : None
fsspec : 2025.3.0
html5lib : None
hypothesis : 6.130.4
gcsfs : None
jinja2 : 3.1.6
lxml.etree : None
matplotlib : 3.10.1
numba : None
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : 0.28.0
psycopg2 : None
pymysql : None
pyarrow : 15.0.2
pyreadstat : None
pytest : 8.3.5
python-calamine : None
pyxlsb : None
s3fs : 2025.3.0
scipy : None
sqlalchemy : 2.0.39
tables : None
tabulate : None
xarray : None
xlrd : None
xlsxwriter : None
zstandard : None
tzdata : 2025.2
qtpy : None
pyqt5 : None
</details>
| [
"Bug",
"Datetime",
"good first issue"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Thanks for the report! It looks like in this case `s.dt.date` should be returning object dtype, but it is instead returning `datetime64[s]`. PRs to fix are welcome!",
"take",
"Hi @rhshadrach ,\n\nCould you please take a look at my latest commit? I’d appreciate your feedback to ensure everything looks good.\n\nThanks!",
"@Mohit-Kundu - are you referring to #61200? I'd suggest reopening that PR if so. Otherwise, can you point me to the PR you're referring to.",
"@rhshadrach yes, that's the one! I just reopened it and updated the branch before running the tests."
] |
2,951,368,673 | 61,187 | BUG: DataFrame.min raises TypeError when column contains mixed types (e.g., np.nan and datetime) | closed | 2025-03-27T03:16:47 | 2025-04-01T01:56:35 | 2025-03-31T02:16:58 | https://github.com/pandas-dev/pandas/issues/61187 | true | null | null | tanjt107 | 5 | ### Pandas version checks
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [x] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
import pandas as pd
import numpy as np
import datetime
data = {
"dates": [
np.nan,
np.nan,
datetime.datetime(2025, 1, 3),
datetime.datetime(2025, 1, 4),
],
}
df = pd.DataFrame(data)
df.min(axis=0)
```
### Issue Description
When calling DataFrame.min(axis=0) on a DataFrame with columns containing mixed types (np.nan and datetime), a TypeError is raised due to the comparison of float (from np.nan) and datetime.date. The default behavior of min should skip np.nan values when skipna=True (default), but this does not happen.
```Traceback (most recent call last):
File "C:\Users\45217950\Downloads\GitHub\irr-cloud\test.py", line 29, in <module>
df.min(axis=0)
File "C:\Users\45217950\Downloads\GitHub\irr-cloud\.venv\Lib\site-packages\pandas\core\frame.py", line 11643, in min
result = super().min(axis, skipna, numeric_only, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\45217950\Downloads\GitHub\irr-cloud\.venv\Lib\site-packages\pandas\core\generic.py", line 12388, in min
return self._stat_function(
^^^^^^^^^^^^^^^^^^^^
File "C:\Users\45217950\Downloads\GitHub\irr-cloud\.venv\Lib\site-packages\pandas\core\generic.py", line 12377, in _stat_function
return self._reduce(
^^^^^^^^^^^^^
File "C:\Users\45217950\Downloads\GitHub\irr-cloud\.venv\Lib\site-packages\pandas\core\frame.py", line 11562, in _reduce
res = df._mgr.reduce(blk_func)
^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\45217950\Downloads\GitHub\irr-cloud\.venv\Lib\site-packages\pandas\core\internals\managers.py", line 1500, in reduce
nbs = blk.reduce(func)
^^^^^^^^^^^^^^^^
File "C:\Users\45217950\Downloads\GitHub\irr-cloud\.venv\Lib\site-packages\pandas\core\internals\blocks.py", line 404, in reduce
result = func(self.values)
^^^^^^^^^^^^^^^^^
File "C:\Users\45217950\Downloads\GitHub\irr-cloud\.venv\Lib\site-packages\pandas\core\frame.py", line 11481, in blk_func
return op(values, axis=axis, skipna=skipna, **kwds)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\45217950\Downloads\GitHub\irr-cloud\.venv\Lib\site-packages\pandas\core\nanops.py", line 147, in f
result = alt(values, axis=axis, skipna=skipna, **kwds)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\45217950\Downloads\GitHub\irr-cloud\.venv\Lib\site-packages\pandas\core\nanops.py", line 404, in new_func
result = func(values, axis=axis, skipna=skipna, mask=mask, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\45217950\Downloads\GitHub\irr-cloud\.venv\Lib\site-packages\pandas\core\nanops.py", line 1098, in reduction
result = getattr(values, meth)(axis)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\45217950\Downloads\GitHub\irr-cloud\.venv\Lib\site-packages\numpy\_core\_methods.py", line 48, in _amin
return umr_minimum(a, axis, None, out, keepdims, initial, where)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: '<=' not supported between instances of 'float' and 'datetime.date'
```
### Expected Behavior
The min function should skip np.nan values when skipna=True (default) and return the minimum datetime value:
```
dates 2025-01-03
dtype: datetime64[ns]
```
### Installed Versions
<details>
INSTALLED VERSIONS
------------------
commit : 0691c5cf90477d3503834d983f69350f250a6ff7
python : 3.12.7
python-bits : 64
OS : Windows
OS-release : 10
Version : 10.0.19045
machine : AMD64
processor : Intel64 Family 6 Model 85 Stepping 7, GenuineIntel
byteorder : little
LC_ALL : None
LANG : en_US.UTF-8
LOCALE : English_United States.1252
pandas : 2.2.3
numpy : 2.2.3
pytz : 2025.1
dateutil : 2.9.0
pip : 24.2
Cython : None
sphinx : None
IPython : None
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : None
blosc : None
bottleneck : None
dataframe-api-compat : None
fastparquet : None
fsspec : None
html5lib : None
hypothesis : None
gcsfs : None
jinja2 : None
lxml.etree : None
matplotlib : None
numba : None
numexpr : None
odfpy : None
openpyxl : 3.1.5
pandas_gbq : 0.28.0
psycopg2 : None
pymysql : None
pyarrow : 19.0.1
pyreadstat : None
pytest : None
python-calamine : None
pyxlsb : None
s3fs : None
scipy : 1.15.2
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : None
xlsxwriter : 3.2.2
zstandard : None
tzdata : 2025.1
qtpy : None
pyqt5 : None
</details>
| [
"Bug",
"Missing-data",
"Needs Info",
"Reduction Operations"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Thanks for the report. I am not seeing this issue on the main branch of pandas. You've checked the box that you've confirmed this bug exists there, is that the case?",
"This issue no longer occurs after restarting my computer. Therefore, I am closing the issue.",
"@rhshadrach I’ve fixed the example and reopened issue [#61204](https://github.com/pandas-dev/pandas/issues/61204).",
"In the future, please fix the existing issue rather than opening new ones.",
"Sure, my apologies. I only realized I had the ability to reopen the issue after creating the new one."
] |
2,951,251,444 | 61,186 | BUG: engine calamine lost 0 when read_excel from vlookup cell | closed | 2025-03-27T01:57:03 | 2025-06-19T20:51:54 | 2025-06-19T20:51:54 | https://github.com/pandas-dev/pandas/issues/61186 | true | null | null | ryjfgjl | 6 | ### Pandas version checks
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [ ] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
df = pd.read_excel(r'C:\Users\ryjfgjl\Desktop\汽车计提1-2月明细(1).xlsx', sheet_name=3, na_filter=False, engine='calamine', dtype=object)
print(df)
```
### Issue Description
Excel data

df:

### Expected Behavior
Changing the engine to openpyxl gives the correct result:

### Installed Versions
<details>
2.2.3
</details>
| [
"Bug",
"IO Excel",
"Closing Candidate",
"Upstream issue"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Thanks for the report. Are you able to share the excel file (preferably without the vlookups) or code to generate the excel file?",
"import pandas as pd\n\nfile = r'C:\\Users\\ryjfgjl\\Desktop\\test.xlsx'\ndf_without_vlookup = pd.read_excel(file, engine='calamine', sheet_name='Sheet1', dtype=object)\nprint(df_without_vlookup)\ndf_with_vlookup = pd.read_excel(file, engine='calamine', sheet_name='Sheet2', dtype=object)\nprint(df_with_vlookup)\n\n[test.xlsx](https://github.com/user-attachments/files/19528442/test.xlsx)",
"Hi. It has been fixed in upstream [tafia/calamine#472](https://github.com/tafia/calamine/pull/472), but it hasn't been released yet.\n\nUPD: Should be fixed in https://github.com/dimastbk/python-calamine/releases/tag/v0.3.2.",
"@dimastbk You're right — the bug was resolved in version 0.3.2.\n\n* `0.3.1`:\n ```python\n id code\n 0 1 05291912\n 1 2 05291913\n 2 3 05291914\n 3 4 05291915\n id code\n 0 1 5291912\n 1 2 5291913\n 2 3 5291914\n ```\n* `0.3.2`\n ```python\n id code\n 0 1 05291912\n 1 2 05291913\n 2 3 05291914\n 3 4 05291915\n id code\n 0 1 05291912\n 1 2 05291913\n 2 3 05291914\n ```",
"take",
"Sorry we didn't have this discussion earlier @chilin0525, but seems like we don't want to fix this, as it involves not allowing users to install a version we wish to support. "
] |
2,950,800,166 | 61,185 | ENH: Reimplement DataFrame.lookup | closed | 2025-03-26T21:13:46 | 2025-06-02T17:00:29 | 2025-06-02T17:00:28 | https://github.com/pandas-dev/pandas/pull/61185 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61185 | https://github.com/pandas-dev/pandas/pull/61185 | stevenae | 19 | - [x] closes #40140
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [x] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [x] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
Optimization notes:
The most important change is the removal of:
`if not self._is_mixed_type or n > thresh`
The old implementation slowed down when `n < thresh`, with or without mixed types. Cases with `n < thresh` are now 10x faster.
The logic can be followed via Python operator precedence:
https://docs.python.org/3/reference/expressions.html#operator-precedence
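For reference, a pure-pandas sketch of the lookup semantics (one value per row/column label pair), adapted from the `pd_lookup` variant benchmarked in the review comments:

```python
import pandas as pd

def lookup(df, row_labels, col_labels):
    # positional equivalent of DataFrame.lookup: translate labels to
    # positions, then take one value per (row, column) pair from the
    # materialized numpy array
    rows = df.index.get_indexer(row_labels)
    cols = df.columns.get_indexer(col_labels)
    return df.to_numpy()[rows, cols]

df = pd.DataFrame({"a": [1, 2], "b": [3, 4]})
print(lookup(df, [0, 1], ["b", "a"]))  # [3 2]
```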
Test notes:
I am unfamiliar with pytest and did not add parametrization. | [
"Enhancement",
"Indexing",
"Stale"
] | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | [
"I tested out three variants of subsetting the dataframe before converting to numpy:\r\n- subset column and row\r\n- subset only column\r\n- subset column, then subset row if types are mixed\r\n\r\nOptimization testing script:\r\n\r\n```python\r\nimport pandas as pd\r\nimport numpy as np\r\nimport timeit\r\nnp.random.seed(43)\r\nfor n in [100,100_000]:\r\n\tfor k in range(2,6):\r\n\t\tprint(k,n)\r\n\t\tcols = list('abcdef')\r\n\t\tdf = pd.DataFrame(np.random.randint(0, 10, size=(n,len(cols))), columns=cols)\r\n\t\tdf['col'] = np.random.choice(cols, n)\r\n\t\tsample_n = n//10\r\n\t\tidx = np.random.choice(df['col'].index.to_numpy(),sample_n)\r\n\t\tcols = np.random.choice(df['col'].to_numpy(),sample_n)\r\n\t\ttimeit.timeit(lambda: df.drop(columns='col').lookup(idx, cols),number=1000)\r\n\t\tstr_col = cols[0]\r\n\t\tdf[str_col] = df[str_col].astype(str)\r\n\t\tdf[str_col] = str_col\r\n\t\ttimeit.timeit(lambda: df.drop(columns='col').lookup(idx, cols),number=1000)\r\n```\r\n\r\n| | col+row | col-only | col+mixed row |\r\n|:---|:---|:---|:---|\r\n| | 2 100 | 2 100| 2 100 |\r\n| numeric | 0.19170337496325374 | 0.2384615419432521| 0.19463533395901322 |\r\n| mixed | 0.1781897919718176 | 0.23713816609233618| 0.27453291695564985 |\r\n| | 3 100 | 3 100| 3 100 |\r\n| numeric | 0.15338195790536702 | 0.20400249981321394| 0.1500512920320034 |\r\n| mixed | 0.18086445797234774 | 0.2427495000883937| 0.2795307501219213 |\r\n| | 4 100 | 4 100| 4 100 |\r\n| numeric | 0.1565960831940174 | 0.2095870419871062| 0.15431487490423024 |\r\n| mixed | 0.17770141689106822 | 0.23276254208758473| 0.26711999997496605 |\r\n| | 5 100 | 5 100| 5 100 |\r\n| numeric | 0.1558396250475198 | 0.2023254157975316| 0.15394329093396664 |\r\n| mixed | 0.17938704183325171 | 0.2375077500473708| 0.274615041911602 |\r\n| | 2 100000 | 2 100000| 2 100000 |\r\n| numeric | 0.6304021249525249 | 1.2773219170048833| 0.855312000028789 |\r\n| mixed | 4.435680666938424 | 1.679579583927989| 1.979861208004877 |\r\n| | 3 100000 
| 3 100000| 3 100000 |\r\n| numeric | 0.6471724167931825 | 1.248306917026639| 0.843553707934916 |\r\n| mixed | 4.393679084023461 | 1.7129242909140885| 1.955484125064686 |\r\n| | 4 100000 | 4 100000| 4 100000 |\r\n| numeric | 0.6682121250778437 | 1.2452070831786841| 0.8302506660111248 |\r\n| mixed | 4.390174541156739 | 1.6384193329140544| 1.9620799159165472 |\r\n| | 5 100000 | 5 100000| 5 100000 |\r\n| numeric | 0.6654676250182092 | 1.2772445830050856| 0.865516958059743 |\r\n| mixed | 4.451537624932826 | 1.742541000014171| 2.0112057079095393 |\r\n\r\nAs a result of this testing I settled on the third option.",
"> Is the implementation in [#40140 (comment)](https://github.com/pandas-dev/pandas/issues/40140#issuecomment-1796623981) not sufficient?\r\n> \r\n> ```python\r\n> size = 100_000\r\n> df = pd.DataFrame({'a': np.random.randint(0, 100, size), 'b': np.random.random(size), 'c': 'x'})\r\n> row_labels = np.repeat(np.arange(size), 2)\r\n> col_labels = np.tile(['a', 'b'], size)\r\n> %timeit df.lookup(row_labels, col_labels)\r\n> # 22.3 ms ± 391 μs per loop (mean ± std. dev. of 7 runs, 10 loops each) <--- this PR\r\n> # 13.4 ms ± 17 μs per loop (mean ± std. dev. of 7 runs, 100 loops each) <--- proposed implementation\r\n> ```\r\n\r\nThe implementation `df[['a', 'b']].sum().sum()` pulls up the entire column, and does not support lookup of individual values by row/column. Is that what you are referring to?",
"> If we are to move forward, this looks good, should get a whatsnew in enhancements for 3.0.\r\n\r\nDone",
"@stevenae - sorry, linked to the wrong comment. I've fixed my comment above.\r\n\r\nAh, but I think I see. This avoids a large copy when only certain columns are used.",
"cc @pandas-dev/pandas-core\r\n\r\nMy take: this provides an implementation for what I think is a natural operation that is not straightforward for most users. It provides performance benefits that take into account columnar-based storage (subsetting columns prior to calling `.to_numpy()`). This seems like a worthy addition in my opinion, especially given the user feedback when the previous version was removed.",
"> @stevenae - sorry, linked to the wrong comment. I've fixed my comment above.\r\n> \r\n> Ah, but I think I see. This avoids a large copy when only certain columns are used.\r\n\r\nYes -- I ran a comparison (script at end) and found this PR implementation beats the comment you referenced on large mixed-type lookups.\r\n\r\nMetrics\r\n\r\n| PR | 40140 |\r\n| :-- | :-- |\r\n| 2 100 | |\r\n| 0.1964133749715984 | 0.0907377500552684 | \r\n| 0.274302874924615 | 0.11014608410187066 |\r\n| 3 100 | |\r\n| 0.15044220816344023 | 0.08912291703745723 | \r\n| 0.2768622918520123 | 0.11031254194676876 |\r\n| 4 100 | |\r\n| 0.15489325020462275 | 0.09032529196701944 | \r\n| 0.26732829213142395 | 0.10644491598941386 |\r\n| 5 100 | |\r\n| 0.1546538749244064 | 0.08968612505123019 | \r\n| 0.2721201251260936 | 0.11162270791828632 |\r\n| 2 100000 | |\r\n| 0.8096102089621127 | 0.40509104216471314 | \r\n| 1.9508202918805182 | 4.064577874960378 |\r\n| 3 100000 | |\r\n| 0.8242515418678522 | 0.4148290839511901 | \r\n| 1.9535491249989718 | 4.241159915924072 |\r\n| 4 100000 | |\r\n| 0.8302762501407415 | 0.42497566691599786 | \r\n| 1.9240409170743078 | 4.146159041905776 |\r\n| 5 100000 | |\r\n| 0.8654224998317659 | 0.44505883287638426 | \r\n| 2.0630989999044687 | 4.4090170410927385 |\r\n\r\nScript \r\n\r\n```python\r\nimport pandas as pd\r\nimport numpy as np\r\nimport timeit\r\nnp.random.seed(43)\r\n\r\ndef pd_lookup(df, row_labels, col_labels):\r\n rows = df.index.get_indexer(row_labels)\r\n cols = df.columns.get_indexer(col_labels)\r\n result = df.to_numpy()[rows, cols]\r\n return result\r\n\r\nfor n in [100,100_000]:\r\n\tfor k in range(2,6):\r\n\t\tprint(k,n)\r\n\t\tcols = list('abcdef')\r\n\t\tdf = pd.DataFrame(np.random.randint(0, 10, size=(n,len(cols))), columns=cols)\r\n\t\tdf['col'] = np.random.choice(cols, n)\r\n\t\tsample_n = n//10\r\n\t\tidx = np.random.choice(df['col'].index.to_numpy(),sample_n)\r\n\t\tcols = 
np.random.choice(df['col'].to_numpy(),sample_n)\r\n\t\ttimeit.timeit(lambda: df.drop(columns='col').lookup(idx, cols),number=1000)\r\n\t\ttimeit.timeit(lambda: pd_lookup(df.drop(columns='col'),idx,cols),number=1000)\r\n\t\tstr_col = cols[0]\r\n\t\tdf[str_col] = df[str_col].astype(str)\r\n\t\tdf[str_col] = str_col\r\n\t\ttimeit.timeit(lambda: df.drop(columns='col').lookup(idx, cols),number=1000)\r\n\t\ttimeit.timeit(lambda: pd_lookup(df.drop(columns='col'),idx,cols),number=1000)\r\n```",
"Trying to make sure i understand correctly: this seems equivalent to `df.loc[rows, cols].to_numpy().ravel()`? (or maybe `df.loc[rows, cols].stack().values` might be better for preserving EAs?) And the main motivation is that this is more performant than those options?",
"> Trying to make sure i understand correctly: this seems equivalent to `df.loc[rows, cols].to_numpy().ravel()`? (or maybe `df.loc[rows, cols].stack().values` might be better for preserving EAs?) And the main motivation is that this is more performant than those options?\r\n\r\nHi @jbrockmendel -- df.loc[rows, cols] returns all columns for all rows. Lookup only returns the values at paired columns and rows.",
"That makes sense, thanks. So more of a `df[rows, cols].diag()` (which doesnt exist)?",
"> That makes sense, thanks. So more of a `df[rows, cols].diag()` (which doesnt exist)?\n\nI think the best analogue from within pandas is is a for loop of .at[].",
"Overall I am -1 adding this back in. I think the utility of this function is limited in the general case of non-homogenous dataframes. ",
"This pull request is stale because it has been open for thirty days with no activity. Please [update](https://pandas.pydata.org/pandas-docs/stable/development/contributing.html#updating-your-pull-request) and respond to this comment if you're still interested in working on this.",
"I am still interested. @rhshadrach what's the right next step?\r\n\r\nOn Fri, May 16, 2025, 8:08 PM github-actions[bot] ***@***.***>\r\nwrote:\r\n\r\n> *github-actions[bot]* left a comment (pandas-dev/pandas#61185)\r\n> <https://github.com/pandas-dev/pandas/pull/61185#issuecomment-2887869881>\r\n>\r\n> This pull request is stale because it has been open for thirty days with\r\n> no activity. Please update\r\n> <https://pandas.pydata.org/pandas-docs/stable/development/contributing.html#updating-your-pull-request>\r\n> and respond to this comment if you're still interested in working on this.\r\n>\r\n> —\r\n> Reply to this email directly, view it on GitHub\r\n> <https://github.com/pandas-dev/pandas/pull/61185#issuecomment-2887869881>,\r\n> or unsubscribe\r\n> <https://github.com/notifications/unsubscribe-auth/AAFZOHPXV5NLEHJQ3RVANL326Z4YPAVCNFSM6AAAAABZ3L3IL6VHI2DSMVQWIX3LMV43OSLTON2WKQ3PNVWWK3TUHMZDQOBXHA3DSOBYGE>\r\n> .\r\n> You are receiving this because you were mentioned.Message ID:\r\n> ***@***.***>\r\n>\r\n",
"@stevenae - I think we need agreement among core devs on whether this should be supported. While I'm sympathetic to users who found this useful prior to it's removal from pandas, there are a few arguments against which I find compelling.\r\n\r\n - Other DataFrame libraries do not offer such a method (to my knowledge).\r\n - The implementation can be achieved using existing functionality with what I measure (see below) as a 30% decrease in performance in non-homogeneous cases, and a 400% increase in performance in the homogeneous case.\r\n - Methods that coerce to object dtype when used on non-homogeneous DataFrames is something that I would like to see less in the built-in methods of pandas, not more. Here it's my opinion _not_ that user's shouldn't be able to do it, but that it we should avoid it being built-in to pandas.\r\n\r\nFor the benchmark in bullet 2, I ran the code in https://github.com/pandas-dev/pandas/pull/61185#issuecomment-2762366593 with the following modification of `pd_lookup`:\r\n\r\n```python\r\ndef pd_lookup(df, row_labels, col_labels):\r\n df = df.loc[:, sorted(set(col_labels))]\r\n rows = df.index.get_indexer(row_labels)\r\n cols = df.columns.get_indexer(col_labels)\r\n result = df.to_numpy()[rows, cols]\r\n return result\r\n```\r\n",
"Understood! Should I put together a recipe for the documentation then?\r\nSince it seems there's indeed a 30% performance improvement to be had when\r\nindexing heterogeneous columns.\r\n\r\nOn Sun, May 18, 2025, 9:31 AM Richard Shadrach ***@***.***>\r\nwrote:\r\n\r\n> *rhshadrach* left a comment (pandas-dev/pandas#61185)\r\n> <https://github.com/pandas-dev/pandas/pull/61185#issuecomment-2888992264>\r\n>\r\n> @stevenae <https://github.com/stevenae> - I think we need agreement among\r\n> core devs on whether this should be supported. While I'm sympathetic to\r\n> users who found this useful prior to it's removal from pandas, there are a\r\n> few arguments against which I find compelling.\r\n>\r\n> - Other DataFrame libraries do not offer such a method (to my\r\n> knowledge).\r\n> - The implementation can be achieved using existing functionality with\r\n> what I measure (see below) as a 30% decrease in performance in\r\n> non-homogeneous cases, and a 400% increase in performance in the\r\n> homogeneous case.\r\n> - Methods that coerce to object dtype when used on non-homogeneous\r\n> DataFrames is something that I would like to see less in the built-in\r\n> methods of pandas, not more. 
Here it's my opinion *not* that user's\r\n> shouldn't be able to do it, but that it we should avoid it being built-in\r\n> to pandas.\r\n>\r\n> For the benchmark in bullet 2, I ran the code in #61185 (comment)\r\n> <https://github.com/pandas-dev/pandas/pull/61185#issuecomment-2762366593>\r\n> with the following modification of pd_lookup:\r\n>\r\n> def pd_lookup(df, row_labels, col_labels):\r\n> df = df.loc[:, sorted(set(col_labels))]\r\n> rows = df.index.get_indexer(row_labels)\r\n> cols = df.columns.get_indexer(col_labels)\r\n> result = df.to_numpy()[rows, cols]\r\n> return result\r\n>\r\n> —\r\n> Reply to this email directly, view it on GitHub\r\n> <https://github.com/pandas-dev/pandas/pull/61185#issuecomment-2888992264>,\r\n> or unsubscribe\r\n> <https://github.com/notifications/unsubscribe-auth/AAFZOHOBP6HQLM3EJVA6FQD27CDTHAVCNFSM6AAAAABZ3L3IL6VHI2DSMVQWIX3LMV43OSLTON2WKQ3PNVWWK3TUHMZDQOBYHE4TEMRWGQ>\r\n> .\r\n> You are receiving this because you were mentioned.Message ID:\r\n> ***@***.***>\r\n>\r\n",
"@stevenae - yes, I think that would be uncontroversial. ",
"> @stevenae - yes, I think that would be uncontroversial. \n\nOkay! Will do in the next couple weeks",
"@rhshadrach doc update is at #61471 ",
"Since we just merged https://github.com/pandas-dev/pandas/pull/61471 adding documentation, closing this PR "
] |
2,950,256,151 | 61,184 | DOC: Add details of dropna in DataFrame.pivot_table | closed | 2025-03-26T16:55:56 | 2025-04-03T01:01:37 | 2025-04-02T21:28:16 | https://github.com/pandas-dev/pandas/pull/61184 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61184 | https://github.com/pandas-dev/pandas/pull/61184 | it176131 | 4 | - [x] closes #61113
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [x] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| [
"Docs",
"Missing-data",
"Reshaping"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Which `doc/source/whatsnew/vX.X.X.rst` file should I edit? `v2.3.0.rst`?",
"~@rhshadrach it looks like two warnings are raised because of the bulleted list formatting: ([1](https://github.com/pandas-dev/pandas/actions/runs/14161907429/job/39668656803?pr=61184#step:8:61)), ([2](https://github.com/pandas-dev/pandas/actions/runs/14161907429/job/39668656803?pr=61184#step:8:62)). Is there another way you recommend formatting the docstring? Or do you have an example with a bulleted list that I can work from?~\r\n\r\nThink I figured it out in 1a82d2a.",
"@rhshadrach ready for review",
"Thanks @it176131 - nice work!"
] |
2,950,220,946 | 61,183 | REGR: Interpolate with method=index | closed | 2025-03-26T16:40:24 | 2025-03-29T19:32:13 | 2025-03-29T18:01:32 | https://github.com/pandas-dev/pandas/pull/61183 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61183 | https://github.com/pandas-dev/pandas/pull/61183 | rhshadrach | 2 | - [x] closes #61122 (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
Partial revert of #56515. Regression is only on main; hasn't been released yet so no whatsnew. | [
"Bug",
"Missing-data",
"Regression"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Looks like the resample interpolate doctests are failing https://github.com/pandas-dev/pandas/actions/runs/14088933670/job/39460360510?pr=61183",
"Thanks @rhshadrach "
] |
2,950,210,565 | 61,182 | BUG: Negation of `.str.isnumeric()` changes `dtype` when `pd.NA` is present | closed | 2025-03-26T16:35:55 | 2025-04-04T09:59:28 | 2025-04-04T09:59:28 | https://github.com/pandas-dev/pandas/issues/61182 | true | null | null | noahblakesmith | 9 | ### Pandas version checks
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [x] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
import pandas as pd
s = pd.Series(["", "0", "123", " 123", pd.NA])
print(s.str.isnumeric())
print(~s.str.isnumeric())
t = pd.Series(["", "0", "123", " 123"])
print(t.str.isnumeric())
print(~t.str.isnumeric())
```
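As noted in the discussion, opting into the nullable string dtype gives the expected Kleene-logic behavior; a minimal sketch:

```python
import pandas as pd

# with the nullable string dtype, isnumeric() returns the nullable boolean
# dtype, so negation follows Kleene logic and <NA> is preserved
s = pd.Series(["", "0", "123", " 123", pd.NA], dtype="string")
mask = s.str.isnumeric()
print(~mask)
```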
### Issue Description
When `pd.NA` is present in a `Series` object, negating the `.str.isnumeric()` method changes `bool` values to `int` values.
### Expected Behavior
Negation should adhere to the [Kleene logic](https://pandas.pydata.org/docs/user_guide/boolean.html) implemented elsewhere in `pandas`.
### Installed Versions
<details>
INSTALLED VERSIONS
------------------
commit : 0691c5cf90477d3503834d983f69350f250a6ff7
python : 3.10.16
python-bits : 64
OS : Linux
OS-release : 6.8.0-1021-azure
Version : #25-Ubuntu SMP Wed Jan 15 20:45:09 UTC 2025
machine : x86_64
processor : x86_64
byteorder : little
LC_ALL : None
LANG : C.UTF-8
LOCALE : en_US.UTF-8
pandas : 2.2.3
numpy : 2.2.2
pytz : 2025.1
dateutil : 2.9.0.post0
pip : 25.0
Cython : None
sphinx : None
IPython : 8.34.0
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : 4.13.3
blosc : None
bottleneck : None
dataframe-api-compat : None
fastparquet : None
fsspec : 2025.3.0
html5lib : None
hypothesis : None
gcsfs : None
jinja2 : 3.1.6
lxml.etree : 5.3.1
matplotlib : None
numba : None
numexpr : None
odfpy : None
openpyxl : 3.1.5
pandas_gbq : None
psycopg2 : None
pymysql : None
pyarrow : 19.0.1
pyreadstat : None
pytest : None
python-calamine : None
pyxlsb : None
s3fs : None
scipy : None
sqlalchemy : 2.0.39
tables : None
tabulate : None
xarray : None
xlrd : 2.0.1
xlsxwriter : None
zstandard : None
tzdata : 2025.1
qtpy : None
pyqt5 : None
</details>
| [
"Bug",
"Missing-data",
"Strings",
"Needs Discussion",
"Closing Candidate"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Thanks for the report! Your input to `isnumeric` is object dtype, and so you get object dtype back. Thus the results are Python's `True` and `False`. The behavior pandas displays here is then consistent with the operations on the Python objects:\n\n```python\nprint(~True)\n# -2\n```\n\nIf you would like Kleene-logic, then you should likely specify `dtype=pd.StringDtype()` or `dtype=\"string\"`. \n\nEdit: The remainder of this comment is misleading, see below.\n\n~When doing this, I'm seeing `isnumeric` come out at `False` for `pd.NA`. It's not clear to me whether or not that is the proper result.~\n\n```python\npd.set_option(\"infer_string\", True)\ns = pd.Series([\"\", pd.NA])\nprint(s.str.isnumeric())\n# 0 False\n# 1 False\n# dtype: bool\n```\n\ncc @jorisvandenbossche @WillAyd @mroeschke ",
"Thanks for the response, @rhshadrach. What you said makes sense.\n\nHowever, using `pd.set_option(\"infer_string\", True)` raises an error upon negation when `pd.NA` is present:\n\n```python\npd.set_option(\"infer_string\", True)\n\ns = pd.Series([\"\", pd.NA])\nprint(~s.str.isnumeric())\n# TypeError: bad operand type for unary ~: 'float'\n\nt = pd.Series([\"\"])\nprint(~t.str.isnumeric())\n# 0 False\n# dtype: bool\n# 0 True\n# dtype: bool\n```\n",
"> Negation should adhere to the [Kleene logic](https://pandas.pydata.org/docs/user_guide/boolean.html) implemented elsewhere in `pandas`.\n\nNot sure that should be the case here with the legacy object array. pd.NA, although a python object that can be held by an object array, since it can hold any Python object is not the internal representation of a missing value.\n\nI would suspect that `s.str.isnumeric()` should probably return `False` for pd.NA in the legacy object array since the Kleene logic is appropriate to nullable arrays only. The return value would then be a `bool` array which give the expected result for the logical negation.\n\nIf we wanted to propagate the pd.NA value for the legacy object array we would need to return a pandas nullable boolean array which would mean the return type of `s.str.isnumeric()` is value dependent which is something we try to avoid. \n\nSo it appears that the return type being a object array when pd.NA is present is a bug. Thanks for the report.",
"I think there are a few bugs wrapped up in this discussion. To clarify, the behavior you are looking for is achievable when you use the `pd.StringDtype()`, which is naturally backed by pd.NA and follows Kleene logic:\n\n```python\n>>> ser = pd.Series([\"\", pd.NA], dtype=pd.StringDtype())\n>>> ~ser.str.isnumeric()\n0 True\n1 <NA>\ndtype: boolean\n```\n\nThe \"infer_string\" option uses that same `pd.StringDtype` but with np.nan as the missing value indicator (i.e. `dtype=pd.StringDtype(na_value=np.nan)`), which does not follow Kleene logic. ",
"Thanks @WillAyd - I've edited my comment above. So I think we're good here; my example above should have been:\n\n```python\npd.set_option(\"infer_string\", True)\ns = pd.Series([\"\", np.nan])\nprint(s.str.isnumeric())\n# 0 False\n# 1 False\n# dtype: bool\n```\n\nWith NaN, I _think_ we want this to come out as `False` as it does today instead of propagating the `nan` value. Does that sound right?\n",
"Hmm that's tricky. I think if you were to just evaluate the result of an inversion with np.nan, it would be strange to get `False` back, especially since that is a lossy inversion. However, if the idea is that the inversion is strictly going to be used as an indexer, then it would be helpful to do that in line with work like https://github.com/pandas-dev/pandas/pull/59616\n\nGenerally there isn't a universal solution to a problem like this using `np.nan` as a missing value indicator, so if the OP is looking for Kleene logic I would advise staying away from `np.nan` altogether",
"not sure why this is closed.\n\nthe docs https://pandas.pydata.org/docs/reference/api/pandas.Series.str.isnumeric.html state that the return type is \"Series or Index of boolean values with the same length as the original Series/Index.\". All the examples show `dtype: bool`\n\nhere we have `dtype: object` Series returned when `pd.NA` is present in the Series of object dtype.\n\n```python\ns = pd.Series([\"\", \"0\", \"123\", \" 123\", pd.NA])\nprint(s)\nprint(s.str.isnumeric())\n# 0 \n# 1 0\n# 2 123\n# 3 123\n# 4 <NA>\n# dtype: object\n# 0 False\n# 1 True\n# 2 True\n# 3 False\n# 4 <NA>\n# dtype: object\n```\n\nSurely this is a bug?\n\nThe expected result of `s.str.isnumeric()` for an object dtype would be...\n\n```\n# 0 False\n# 1 True\n# 2 True\n# 3 False\n# 4 False\n# dtype: bool\n```\n\nso that the result could be used as an indexer and that logical negation would work as expected?\n\nI would not expect `<NA>` to be propagated in an object dtype otherwise the return type would have to be a nullable pandas Boolean dtype (and not the object type) and this would result in non default pandas dtypes being presented to the user?",
"Thanks @simonjayhawkins - I missed your previous comment. Reopening.\n\nHowever it's not clear to me that the propagation of the NA value in object dtype is a bug. I'd guess it's likely that the documentation of `isnumeric` was written without specifically thinking of NA, but could be wrong here. Has NA behavior in methods like these been discussed in the past?\n\nNo strong opinion on my side.\n\n> I would not expect `<NA>` to be propagated in an object dtype otherwise the return type would have to be a nullable pandas Boolean dtype (and not the object type)\n\nWhy not object?",
"> Why not object?\n\nSo it appears that object dtype **is** returned using the string accessor on object series containing non-string values...\n\n```python\ns = pd.Series([\"\", \"0\", \"123\", \" 123\", 123])\nprint(s)\nprint(s.str.isnumeric())\n# 0 \n# 1 0\n# 2 123\n# 3 123\n# 4 123\n# dtype: object\n# 0 False\n# 1 True\n# 2 True\n# 3 False\n# 4 NaN\n# dtype: object\n```\n\nIn this case I would have expected the numeric 123 value to also be False and a boolean array returned since the str accessor should only be operating on the string values IMO.\n\nSo this does not look like an inconsistency arising specifically from having the pd.NA object as a value in a object dtype Series.\n\n> No strong opinion on my side.\n\nAgree. If returning an object array when using the str accessor on object arrays that have some non-string values is long standing behavior then I'm inclined to not consider the pd.NA being persevered a bug at this time.\n"
] |
2,949,306,759 | 61,181 | update offsets.pyx to fix #60647 | closed | 2025-03-26T11:51:20 | 2025-04-28T18:23:34 | 2025-04-28T18:23:34 | https://github.com/pandas-dev/pandas/pull/61181 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/61181 | https://github.com/pandas-dev/pandas/pull/61181 | kangqiwang | 4 | - [x ] closes #60647 (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| [] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Sorry for not adding a test case for this. I'll set it up once I get home and can work on my PC—my current laptop is too slow to handle the full testing environment.\r\n",
"Also, I noticed the unit tests failed. Again, I will fix them once I get home.",
"Thanks for working on this fix for #60647!\r\n\r\nThis change successfully removes the code path containing the specific line (`holidays = holidays + calendar.holidays().tolist()`) identified in the issue and the comments.\r\n\r\nHowever, there are a few points that probably deserve some attention:\r\n\r\n**Removed Functionality:** This fix works by removing the `if-else` block. Was the functionality within that block (handling a non-numpy `calendar` object passed with `holidays`) intended or ever valid? Removing it is a potentially breaking change that should ideally be documented.\r\n\r\n**Exception Type:** Raising `ApplyTypeError` here is a bit unusual, as it's typically for arithmetic operations returning `NotImplemented`. Would a `ValueError` or `TypeError` be more appropriate to signal invalid arguments during the setup phase?\r\n",
"Thanks for the pull request, but it appears to have gone stale. If interested in continuing, please merge in the main branch, address any review comments and/or failing tests, and we can reopen."
] |
2,949,189,167 | 61,180 | BUG: Performance issue with fillna() after merging DataFrames | open | 2025-03-26T11:02:00 | 2025-03-26T17:17:56 | null | https://github.com/pandas-dev/pandas/issues/61180 | true | null | null | sjfakharian | 2 | ### Pandas version checks
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [ ] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
import pandas as pd
import numpy as np
import time
# Create two large DataFrames with missing data
np.random.seed(0)
size = 1_000_000
df1 = pd.DataFrame({
'ID': range(size),
'Name': np.random.choice(['Alice', 'Bob', 'Charlie', 'David', 'Eve', None], size)
})
df2 = pd.DataFrame({
'ID': range(size // 2, size * 3 // 2), # Overlapping and new IDs
'Age': np.random.choice([None, 20, 30, 40, 50, 60], size)
})
# Measure time for merge operation
start_time = time.time()
merged_df = pd.merge(df1, df2, on='ID', how='outer')
merge_time = time.time() - start_time
print(f"Merge time: {merge_time:.2f} seconds")
# Measure time for fillna operation
start_time = time.time()
merged_df['Name'].fillna('Unknown', inplace=True)
merged_df['Age'].fillna(0, inplace=True)
fillna_time = time.time() - start_time
print(f"Fillna time: {fillna_time:.2f} seconds")
# Print some statistics
print(f"Total rows after merge: {len(merged_df)}")
print(f"Null values in 'Name' after fillna: {merged_df['Name'].isnull().sum()}")
print(f"Null values in 'Age' after fillna: {merged_df['Age'].isnull().sum()}")
```
### Issue Description
When using `fillna()` after merging DataFrames, unexpected behavior and performance issues occur.
### Expected Behavior
The `fillna()` operation should efficiently fill missing values after merging, without unexpected behavior or significant performance degradation.
## Actual Behavior
The `fillna()` operation may exhibit unexpected behavior or poor performance, especially with larger datasets.
## Additional Context
This issue becomes more apparent when working with larger datasets and complex merge operations. Improving the performance and reliability of `fillna()` after merging would greatly benefit data processing workflows.
## Environment
- pandas version: 3.0.0
- Python version: 3.13.2
- Operating System: Linux
### Installed Versions
<details>
INSTALLED VERSIONS
------------------
commit : None
python : 3.13.2.final.0
python-bits : 64
OS : Linux
OS-release : 5.10.102.1-microsoft-standard-WSL2
Version : #1 SMP Wed Mar 2 00:30:59 UTC 2022
machine : x86_64
processor : x86_64
byteorder : little
LC_ALL : None
LANG : en_US.UTF-8
LOCALE : en_US.UTF-8
pandas : 3.0.0
numpy : 1.26.3
pytz : 2024.1
dateutil : 2.8.2
pip : 24.0
setuptools : 69.0.2
Cython : 3.0.8
pytest : 8.0.0
hypothesis : 6.98.3
sphinx : 7.2.6
blosc : None
feather : None
xlsxwriter : 3.1.9
lxml.etree : 5.1.0
html5lib : 1.1
pymysql : None
psycopg2 : None
jinja2 : 3.1.3
IPython : 8.21.0
pandas_datareader: None
[other dependencies ...]
</details>
| [
"Missing-data",
"Performance",
"Needs Info"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"> When using `fillna()` after merging DataFrames, unexpected behavior and performance issues occur.\n\nThanks for the report. You should be seeing a `ChainedAssignmentError` due to the use of `inplace=True`. Changing the code to not use this:\n\n```python\nmerged_df['Name'] = merged_df['Name'].fillna('Unknown')\nmerged_df['Age'] = merged_df['Age'].fillna(0)\n```\n\ngives me the proper behavior.\n\nIf you believe there are performance issues, can you detail why it is you think that?",
"Thank you for addressing the performance issue with fillna() after merging large DataFrames.\n\nUpon testing, I observed that the slowdown occurs when applying fillna() immediately after a merge operation that results in a DataFrame with a non-standard index and scattered missing data. It seems that merging might alter the DataFrame’s internal structure, leading to inefficiencies in how fillna() processes and locates missing values. Additionally, the performance hit could be related to the following factors:\n\nIndex Misalignment: The merge operation may produce a DataFrame with an irregular index, causing additional overhead in the alignment process during fillna().\n\nMemory Layout Changes: Merging can lead to non-contiguous memory blocks, which might result in less efficient operations when fillna() is applied.\n\nData Type Conversions: There could be implicit type conversions after a merge that delay processing or require extra computations.\n\nCaching Effects: The reorganization of data post-merge might impact cache locality, slowing down subsequent operations like fillna().\n\nWould it be helpful if I provided more detailed benchmarks or a profiling summary of these operations? I am happy to contribute further to diagnosing the root cause and exploring potential optimizations.\n\n"
] |
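The fix suggested in the maintainer comment above can be made concrete with a small runnable sketch. The frame sizes and values here are illustrative stand-ins, not the reporter's million-row benchmark; the point is only that assigning the result back avoids the chained `inplace=True` pattern that fails under Copy-on-Write:

```python
import numpy as np
import pandas as pd

# Small stand-ins for the merged frame from the report above
# (names and sizes are illustrative, not the original benchmark data).
df1 = pd.DataFrame({"ID": range(6), "Name": ["Alice", None, "Bob", None, "Eve", None]})
df2 = pd.DataFrame({"ID": range(3, 9), "Age": [None, 20, None, 40, None, 60]})
merged_df = pd.merge(df1, df2, on="ID", how="outer")

# Assign the result back instead of chaining fillna(..., inplace=True):
# under Copy-on-Write the chained form operates on a temporary copy.
merged_df["Name"] = merged_df["Name"].fillna("Unknown")
merged_df["Age"] = merged_df["Age"].fillna(0)
```

With Copy-on-Write enabled, `merged_df['Name'].fillna(..., inplace=True)` modifies a temporary object, so assigning back is both correct and typically no slower.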
2,948,702,201 | 61,179 | BUG: replace with np.nan unexpectedly converts pd.Timestamp to pd.NaT | closed | 2025-03-26T07:57:42 | 2025-03-26T19:27:17 | 2025-03-26T19:26:41 | https://github.com/pandas-dev/pandas/issues/61179 | true | null | null | tanjt107 | 1 | ### Pandas version checks
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [x] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
import pandas as pd
import numpy as np
data = {
"date": [
pd.Timestamp("2025-01-01"),
pd.Timestamp("2025-01-02"),
pd.Timestamp("2025-01-03"),
],
}
df = pd.DataFrame(data)
df.replace([pd.Timestamp("2025-01-01"), pd.Timestamp("2025-01-02")], np.nan)
```
### Issue Description
When using `DataFrame.replace()` to replace specific `pd.Timestamp` values with `np.nan`, the resulting values become `pd.NaT` instead of `np.nan`. This behavior differs from `pandas 1.1.5`, where the replaced values were `np.nan` as expected.
Output
```
date
0 NaT
1 NaT
2 2025-01-03
```
### Expected Behavior
```
date
0 NaN
1 NaN
2 2025-01-03
```
### Installed Versions
<details>
INSTALLED VERSIONS
------------------
commit : 0691c5cf90477d3503834d983f69350f250a6ff7
python : 3.11.5
python-bits : 64
OS : Darwin
OS-release : 24.3.0
Version : Darwin Kernel Version 24.3.0: Thu Jan 2 20:24:16 PST 2025; root:xnu-11215.81.4~3/RELEASE_ARM64_T6000
machine : arm64
processor : arm
byteorder : little
LC_ALL : None
LANG : en_US.UTF-8
LOCALE : en_US.UTF-8
pandas : 2.2.3
numpy : 2.0.0
pytz : 2024.1
dateutil : 2.9.0.post0
pip : 23.2.1
Cython : None
sphinx : None
IPython : None
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : None
blosc : None
bottleneck : None
dataframe-api-compat : None
fastparquet : None
fsspec : None
html5lib : None
hypothesis : None
gcsfs : None
jinja2 : 3.1.4
lxml.etree : None
matplotlib : None
numba : None
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
psycopg2 : None
pymysql : None
pyarrow : None
pyreadstat : None
pytest : None
python-calamine : None
pyxlsb : None
s3fs : None
scipy : None
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : None
xlsxwriter : None
zstandard : None
tzdata : 2025.2
qtpy : None
pyqt5 : None
</details>
| [
"Bug",
"Missing-data",
"Timestamp"
] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [
"Thanks for the report!\n\n> This behavior differs from pandas 1.1.5, where the replaced values were np.nan as expected.\n\nIf we were to store `np.nan`, I think that the column dtype would have to be object. Assuming that is the case, this would be less performant and not support the `.dt` namespace that provides much of the functionality associated with timestamps. \n\nIf you do really want this behavior, then you have to opt into object dtype explicitly:\n\n```python\ndf.astype(object).replace([pd.Timestamp(\"2025-01-01\"), pd.Timestamp(\"2025-01-02\")], np.nan)\n```\n\nThat will give you `np.nan` as expected.\n\nIt seems to me the current behavior is preferred. Closing."
] |