Columns:

- content: string, length 1 to 103k
- path: string, length 8 to 216
- filename: string, length 2 to 179
- language: string, 15 distinct values
- size_bytes: int64, 2 to 189k
- quality_score: float64, 0.5 to 0.95
- complexity: float64, 0 to 1
- documentation_ratio: float64, 0 to 1
- repository: string, 5 distinct values
- stars: int64, 0 to 1k
- created_date: date, 2023-07-10 19:21:08 to 2025-07-09 19:11:45
- license: string, 4 distinct values
- is_test: bool, 2 classes
- file_hash: string, length 32
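
The schema above only describes the columns; as a minimal sketch of how rows with this layout could be loaded and filtered, the snippet below uses pandas on a hypothetical Parquet export (the file name "code_files.parquet" is an assumption, not part of the preview).

    import pandas as pd

    # Load the preview rows (file name is hypothetical; any Parquet export
    # with the columns listed above would work the same way).
    df = pd.read_parquet("code_files.parquet")

    # Keep higher-quality, non-test files and summarise them per repository.
    subset = df[(df["quality_score"] >= 0.9) & (~df["is_test"])]
    summary = (
        subset.groupby("repository")
        .agg(files=("path", "count"), avg_size_bytes=("size_bytes", "mean"))
        .sort_values("files", ascending=False)
    )
    print(summary)
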
\n\n
.venv\Lib\site-packages\packaging\licenses\__pycache__\__init__.cpython-313.pyc
__init__.cpython-313.pyc
Other
4,290
0.8
0
0
awesome-app
823
2023-11-13T14:36:44.349040
MIT
false
a76d6c4f4b9f3ab250906f3f9f84338f
\n\n
.venv\Lib\site-packages\packaging\__pycache__\markers.cpython-313.pyc
markers.cpython-313.pyc
Other
13,091
0.95
0.027027
0
react-lib
395
2024-09-02T06:16:07.988083
MIT
false
d52477c22a2f10a1fc955020c7262017
\n\n
.venv\Lib\site-packages\packaging\__pycache__\metadata.cpython-313.pyc
metadata.cpython-313.pyc
Other
27,360
0.95
0.061224
0.0131
react-lib
473
2025-03-30T17:34:26.683602
Apache-2.0
false
a6784de21c99d2eec6821ad2f6157aab
\n\n
.venv\Lib\site-packages\packaging\__pycache__\requirements.cpython-313.pyc
requirements.cpython-313.pyc
Other
4,625
0.95
0
0
node-utils
317
2024-09-26T18:18:04.527317
Apache-2.0
false
25f0183e5b16f25f505963c014af6eea
\n\n
.venv\Lib\site-packages\packaging\__pycache__\specifiers.cpython-313.pyc
specifiers.cpython-313.pyc
Other
37,633
0.95
0.062615
0.055901
vue-tools
463
2024-08-17T01:12:06.372195
MIT
false
b933ae25d0073d5f2def5a4bbe30e63e
\n\n
.venv\Lib\site-packages\packaging\__pycache__\tags.cpython-313.pyc
tags.cpython-313.pyc
Other
24,965
0.95
0.064626
0
awesome-app
248
2024-05-01T06:44:26.780781
BSD-3-Clause
false
bc9a942a7b8b48d19550397d73d1bce3
\n\n
.venv\Lib\site-packages\packaging\__pycache__\utils.cpython-313.pyc
utils.cpython-313.pyc
Other
6,749
0.8
0
0
react-lib
379
2024-05-19T02:43:51.350587
Apache-2.0
false
52253fce3078522023250ac0fd80bb65
\n\n
.venv\Lib\site-packages\packaging\__pycache__\version.cpython-313.pyc
version.cpython-313.pyc
Other
19,962
0.95
0.016026
0.003534
python-kit
139
2024-09-30T03:32:40.320180
GPL-3.0
false
841718e176066bc44bc6a5b9907eedfb
\n\n
.venv\Lib\site-packages\packaging\__pycache__\_elffile.cpython-313.pyc
_elffile.cpython-313.pyc
Other
5,208
0.95
0.014286
0
node-utils
638
2025-04-29T05:37:02.970941
GPL-3.0
false
9ee4a6601a34ccdc579f253fbaf20c9b
\n\n
.venv\Lib\site-packages\packaging\__pycache__\_manylinux.cpython-313.pyc
_manylinux.cpython-313.pyc
Other
9,995
0.8
0.014706
0
python-kit
823
2025-01-24T06:16:39.990590
BSD-3-Clause
false
b6d42a68939c2a7684a61ec4aac7df04
\n\n
.venv\Lib\site-packages\packaging\__pycache__\_musllinux.cpython-313.pyc
_musllinux.cpython-313.pyc
Other
4,614
0.8
0.039474
0.028986
react-lib
487
2023-09-10T22:48:29.253144
BSD-3-Clause
false
a6470f9cd430745772e8469045dd926e
\n\n
.venv\Lib\site-packages\packaging\__pycache__\_parser.cpython-313.pyc
_parser.cpython-313.pyc
Other
14,178
0.95
0.010582
0.022099
awesome-app
883
2024-01-16T07:53:31.397640
GPL-3.0
false
fe2c74d8b31b32344129421869c30015
\n\n
.venv\Lib\site-packages\packaging\__pycache__\_structures.cpython-313.pyc
_structures.cpython-313.pyc
Other
3,344
0.8
0
0
python-kit
213
2025-06-05T07:10:40.026109
BSD-3-Clause
false
8bac5f3cdfe1763ed38317e6934c8d3c
\n\n
.venv\Lib\site-packages\packaging\__pycache__\_tokenizer.cpython-313.pyc
_tokenizer.cpython-313.pyc
Other
8,101
0.8
0.029412
0
react-lib
852
2024-06-11T23:16:57.735976
MIT
false
e58644f0b89a95a1e70653efa50e99a8
\n\n
.venv\Lib\site-packages\packaging\__pycache__\__init__.cpython-313.pyc
__init__.cpython-313.pyc
Other
549
0.8
0.090909
0
awesome-app
317
2024-05-03T14:10:04.292521
BSD-3-Clause
false
e2761f26f15dcdec4b768f74c2cab5bb
pip\n
.venv\Lib\site-packages\packaging-25.0.dist-info\INSTALLER
INSTALLER
Other
4
0.5
0
0
react-lib
682
2024-01-24T05:26:35.652595
MIT
false
365c9bfeb7d89244f2ce01c1de44cb85
Metadata-Version: 2.4\nName: packaging\nVersion: 25.0\nSummary: Core utilities for Python packages\nAuthor-email: Donald Stufft <donald@stufft.io>\nRequires-Python: >=3.8\nDescription-Content-Type: text/x-rst\nClassifier: Development Status :: 5 - Production/Stable\nClassifier: Intended Audience :: Developers\nClassifier: License :: OSI Approved :: Apache Software License\nClassifier: License :: OSI Approved :: BSD License\nClassifier: Programming Language :: Python\nClassifier: Programming Language :: Python :: 3\nClassifier: Programming Language :: Python :: 3 :: Only\nClassifier: Programming Language :: Python :: 3.8\nClassifier: Programming Language :: Python :: 3.9\nClassifier: Programming Language :: Python :: 3.10\nClassifier: Programming Language :: Python :: 3.11\nClassifier: Programming Language :: Python :: 3.12\nClassifier: Programming Language :: Python :: 3.13\nClassifier: Programming Language :: Python :: Implementation :: CPython\nClassifier: Programming Language :: Python :: Implementation :: PyPy\nClassifier: Typing :: Typed\nLicense-File: LICENSE\nLicense-File: LICENSE.APACHE\nLicense-File: LICENSE.BSD\nProject-URL: Documentation, https://packaging.pypa.io/\nProject-URL: Source, https://github.com/pypa/packaging\n\npackaging\n=========\n\n.. start-intro\n\nReusable core utilities for various Python Packaging\n`interoperability specifications <https://packaging.python.org/specifications/>`_.\n\nThis library provides utilities that implement the interoperability\nspecifications which have clearly one correct behaviour (eg: :pep:`440`)\nor benefit greatly from having a single shared implementation (eg: :pep:`425`).\n\n.. end-intro\n\nThe ``packaging`` project includes the following: version handling, specifiers,\nmarkers, requirements, tags, utilities.\n\nDocumentation\n-------------\n\nThe `documentation`_ provides information and the API for the following:\n\n- Version Handling\n- Specifiers\n- Markers\n- Requirements\n- Tags\n- Utilities\n\nInstallation\n------------\n\nUse ``pip`` to install these utilities::\n\n pip install packaging\n\nThe ``packaging`` library uses calendar-based versioning (``YY.N``).\n\nDiscussion\n----------\n\nIf you run into bugs, you can file them in our `issue tracker`_.\n\nYou can also join ``#pypa`` on Freenode to ask questions or get involved.\n\n\n.. _`documentation`: https://packaging.pypa.io/\n.. _`issue tracker`: https://github.com/pypa/packaging/issues\n\n\nCode of Conduct\n---------------\n\nEveryone interacting in the packaging project's codebases, issue trackers, chat\nrooms, and mailing lists is expected to follow the `PSF Code of Conduct`_.\n\n.. _PSF Code of Conduct: https://github.com/pypa/.github/blob/main/CODE_OF_CONDUCT.md\n\nContributing\n------------\n\nThe ``CONTRIBUTING.rst`` file outlines how to contribute to this project as\nwell as how to report a potential security issue. The documentation for this\nproject also covers information about `project development`_ and `security`_.\n\n.. _`project development`: https://packaging.pypa.io/en/latest/development/\n.. _`security`: https://packaging.pypa.io/en/latest/security/\n\nProject History\n---------------\n\nPlease review the ``CHANGELOG.rst`` file or the `Changelog documentation`_ for\nrecent changes and project history.\n\n.. _`Changelog documentation`: https://packaging.pypa.io/en/latest/changelog/\n\n
.venv\Lib\site-packages\packaging-25.0.dist-info\METADATA
METADATA
Other
3,281
0.95
0.047619
0
react-lib
230
2024-05-31T04:59:58.857388
MIT
false
ee4c1c51623635c64b2d4c769ba16674
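
The METADATA row above embeds the packaging project's README, which lists version handling, specifiers, markers, and requirements among its utilities. As a brief illustration of that public API (an added sketch, not part of the dataset rows), the snippet below exercises Version, SpecifierSet, and Requirement:

    from packaging.version import Version
    from packaging.specifiers import SpecifierSet
    from packaging.requirements import Requirement

    # Version handling (PEP 440): parse and inspect a version.
    v = Version("25.0")
    print(v.major, v.is_prerelease)      # 25 False

    # Specifiers: test whether a version satisfies a constraint set.
    spec = SpecifierSet(">=3.8,<4")
    print(Version("3.13") in spec)       # True

    # Requirements: parse a dependency string with an environment marker.
    req = Requirement('packaging>=25.0; python_version >= "3.8"')
    print(req.name, req.specifier, req.marker)
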
packaging-25.0.dist-info/INSTALLER,sha256=zuuue4knoyJ-UwPPXg8fezS7VCrXJQrAP7zeNuwvFQg,4\npackaging-25.0.dist-info/METADATA,sha256=W2EaYJw4_vw9YWv0XSCuyY-31T8kXayp4sMPyFx6woI,3281\npackaging-25.0.dist-info/RECORD,,\npackaging-25.0.dist-info/WHEEL,sha256=G2gURzTEtmeR8nrdXUJfNiB3VYVxigPQ-bEQujpNiNs,82\npackaging-25.0.dist-info/licenses/LICENSE,sha256=ytHvW9NA1z4HS6YU0m996spceUDD2MNIUuZcSQlobEg,197\npackaging-25.0.dist-info/licenses/LICENSE.APACHE,sha256=DVQuDIgE45qn836wDaWnYhSdxoLXgpRRKH4RuTjpRZQ,10174\npackaging-25.0.dist-info/licenses/LICENSE.BSD,sha256=tw5-m3QvHMb5SLNMFqo5_-zpQZY2S8iP8NIYDwAo-sU,1344\npackaging/__init__.py,sha256=_0cDiPVf2S-bNfVmZguxxzmrIYWlyASxpqph4qsJWUc,494\npackaging/__pycache__/__init__.cpython-313.pyc,,\npackaging/__pycache__/_elffile.cpython-313.pyc,,\npackaging/__pycache__/_manylinux.cpython-313.pyc,,\npackaging/__pycache__/_musllinux.cpython-313.pyc,,\npackaging/__pycache__/_parser.cpython-313.pyc,,\npackaging/__pycache__/_structures.cpython-313.pyc,,\npackaging/__pycache__/_tokenizer.cpython-313.pyc,,\npackaging/__pycache__/markers.cpython-313.pyc,,\npackaging/__pycache__/metadata.cpython-313.pyc,,\npackaging/__pycache__/requirements.cpython-313.pyc,,\npackaging/__pycache__/specifiers.cpython-313.pyc,,\npackaging/__pycache__/tags.cpython-313.pyc,,\npackaging/__pycache__/utils.cpython-313.pyc,,\npackaging/__pycache__/version.cpython-313.pyc,,\npackaging/_elffile.py,sha256=UkrbDtW7aeq3qqoAfU16ojyHZ1xsTvGke_WqMTKAKd0,3286\npackaging/_manylinux.py,sha256=t4y_-dTOcfr36gLY-ztiOpxxJFGO2ikC11HgfysGxiM,9596\npackaging/_musllinux.py,sha256=p9ZqNYiOItGee8KcZFeHF_YcdhVwGHdK6r-8lgixvGQ,2694\npackaging/_parser.py,sha256=gYfnj0pRHflVc4RHZit13KNTyN9iiVcU2RUCGi22BwM,10221\npackaging/_structures.py,sha256=q3eVNmbWJGG_S0Dit_S3Ao8qQqz_5PYTXFAKBZe5yr4,1431\npackaging/_tokenizer.py,sha256=OYzt7qKxylOAJ-q0XyK1qAycyPRYLfMPdGQKRXkZWyI,5310\npackaging/licenses/__init__.py,sha256=VsK4o27CJXWfTi8r2ybJmsBoCdhpnBWuNrskaCVKP7U,5715\npackaging/licenses/__pycache__/__init__.cpython-313.pyc,,\npackaging/licenses/__pycache__/_spdx.cpython-313.pyc,,\npackaging/licenses/_spdx.py,sha256=oAm1ztPFwlsmCKe7lAAsv_OIOfS1cWDu9bNBkeu-2ns,48398\npackaging/markers.py,sha256=P0we27jm1xUzgGMJxBjtUFCIWeBxTsMeJTOJ6chZmAY,12049\npackaging/metadata.py,sha256=8IZErqQQnNm53dZZuYq4FGU4_dpyinMeH1QFBIWIkfE,34739\npackaging/py.typed,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0\npackaging/requirements.py,sha256=gYyRSAdbrIyKDY66ugIDUQjRMvxkH2ALioTmX3tnL6o,2947\npackaging/specifiers.py,sha256=gtPu5DTc-F9baLq3FTGEK6dPhHGCuwwZetaY0PSV2gs,40055\npackaging/tags.py,sha256=41s97W9Zatrq2Ed7Rc3qeBDaHe8pKKvYq2mGjwahfXk,22745\npackaging/utils.py,sha256=0F3Hh9OFuRgrhTgGZUl5K22Fv1YP2tZl1z_2gO6kJiA,5050\npackaging/version.py,sha256=olfyuk_DPbflNkJ4wBWetXQ17c74x3DB501degUv7DY,16676\n
.venv\Lib\site-packages\packaging-25.0.dist-info\RECORD
RECORD
Other
2,792
0.85
0
0
python-kit
480
2024-08-15T15:39:52.093172
MIT
false
e26763d16bb4b978c8f4fe6f89a46e0c
Wheel-Version: 1.0\nGenerator: flit 3.12.0\nRoot-Is-Purelib: true\nTag: py3-none-any\n
.venv\Lib\site-packages\packaging-25.0.dist-info\WHEEL
WHEEL
Other
82
0.5
0
0
react-lib
367
2024-10-19T08:38:35.121442
Apache-2.0
false
eca1d2e32987c5c9fd85f21a0c92d672
This software is made available under the terms of *either* of the licenses\nfound in LICENSE.APACHE or LICENSE.BSD. Contributions to this software is made\nunder the terms of *both* these licenses.\n
.venv\Lib\site-packages\packaging-25.0.dist-info\licenses\LICENSE
LICENSE
Other
197
0.7
0
0
python-kit
81
2023-08-21T10:05:47.348829
MIT
false
faadaedca9251a90b205c9167578ce91
\n Apache License\n Version 2.0, January 2004\n http://www.apache.org/licenses/\n\n TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION\n\n 1. Definitions.\n\n "License" shall mean the terms and conditions for use, reproduction,\n and distribution as defined by Sections 1 through 9 of this document.\n\n "Licensor" shall mean the copyright owner or entity authorized by\n the copyright owner that is granting the License.\n\n "Legal Entity" shall mean the union of the acting entity and all\n other entities that control, are controlled by, or are under common\n control with that entity. For the purposes of this definition,\n "control" means (i) the power, direct or indirect, to cause the\n direction or management of such entity, whether by contract or\n otherwise, or (ii) ownership of fifty percent (50%) or more of the\n outstanding shares, or (iii) beneficial ownership of such entity.\n\n "You" (or "Your") shall mean an individual or Legal Entity\n exercising permissions granted by this License.\n\n "Source" form shall mean the preferred form for making modifications,\n including but not limited to software source code, documentation\n source, and configuration files.\n\n "Object" form shall mean any form resulting from mechanical\n transformation or translation of a Source form, including but\n not limited to compiled object code, generated documentation,\n and conversions to other media types.\n\n "Work" shall mean the work of authorship, whether in Source or\n Object form, made available under the License, as indicated by a\n copyright notice that is included in or attached to the work\n (an example is provided in the Appendix below).\n\n "Derivative Works" shall mean any work, whether in Source or Object\n form, that is based on (or derived from) the Work and for which the\n editorial revisions, annotations, elaborations, or other modifications\n represent, as a whole, an original work of authorship. For the purposes\n of this License, Derivative Works shall not include works that remain\n separable from, or merely link (or bind by name) to the interfaces of,\n the Work and Derivative Works thereof.\n\n "Contribution" shall mean any work of authorship, including\n the original version of the Work and any modifications or additions\n to that Work or Derivative Works thereof, that is intentionally\n submitted to Licensor for inclusion in the Work by the copyright owner\n or by an individual or Legal Entity authorized to submit on behalf of\n the copyright owner. For the purposes of this definition, "submitted"\n means any form of electronic, verbal, or written communication sent\n to the Licensor or its representatives, including but not limited to\n communication on electronic mailing lists, source code control systems,\n and issue tracking systems that are managed by, or on behalf of, the\n Licensor for the purpose of discussing and improving the Work, but\n excluding communication that is conspicuously marked or otherwise\n designated in writing by the copyright owner as "Not a Contribution."\n\n "Contributor" shall mean Licensor and any individual or Legal Entity\n on behalf of whom a Contribution has been received by Licensor and\n subsequently incorporated within the Work.\n\n 2. Grant of Copyright License. 
Subject to the terms and conditions of\n this License, each Contributor hereby grants to You a perpetual,\n worldwide, non-exclusive, no-charge, royalty-free, irrevocable\n copyright license to reproduce, prepare Derivative Works of,\n publicly display, publicly perform, sublicense, and distribute the\n Work and such Derivative Works in Source or Object form.\n\n 3. Grant of Patent License. Subject to the terms and conditions of\n this License, each Contributor hereby grants to You a perpetual,\n worldwide, non-exclusive, no-charge, royalty-free, irrevocable\n (except as stated in this section) patent license to make, have made,\n use, offer to sell, sell, import, and otherwise transfer the Work,\n where such license applies only to those patent claims licensable\n by such Contributor that are necessarily infringed by their\n Contribution(s) alone or by combination of their Contribution(s)\n with the Work to which such Contribution(s) was submitted. If You\n institute patent litigation against any entity (including a\n cross-claim or counterclaim in a lawsuit) alleging that the Work\n or a Contribution incorporated within the Work constitutes direct\n or contributory patent infringement, then any patent licenses\n granted to You under this License for that Work shall terminate\n as of the date such litigation is filed.\n\n 4. Redistribution. You may reproduce and distribute copies of the\n Work or Derivative Works thereof in any medium, with or without\n modifications, and in Source or Object form, provided that You\n meet the following conditions:\n\n (a) You must give any other recipients of the Work or\n Derivative Works a copy of this License; and\n\n (b) You must cause any modified files to carry prominent notices\n stating that You changed the files; and\n\n (c) You must retain, in the Source form of any Derivative Works\n that You distribute, all copyright, patent, trademark, and\n attribution notices from the Source form of the Work,\n excluding those notices that do not pertain to any part of\n the Derivative Works; and\n\n (d) If the Work includes a "NOTICE" text file as part of its\n distribution, then any Derivative Works that You distribute must\n include a readable copy of the attribution notices contained\n within such NOTICE file, excluding those notices that do not\n pertain to any part of the Derivative Works, in at least one\n of the following places: within a NOTICE text file distributed\n as part of the Derivative Works; within the Source form or\n documentation, if provided along with the Derivative Works; or,\n within a display generated by the Derivative Works, if and\n wherever such third-party notices normally appear. The contents\n of the NOTICE file are for informational purposes only and\n do not modify the License. You may add Your own attribution\n notices within Derivative Works that You distribute, alongside\n or as an addendum to the NOTICE text from the Work, provided\n that such additional attribution notices cannot be construed\n as modifying the License.\n\n You may add Your own copyright statement to Your modifications and\n may provide additional or different license terms and conditions\n for use, reproduction, or distribution of Your modifications, or\n for any such Derivative Works as a whole, provided Your use,\n reproduction, and distribution of the Work otherwise complies with\n the conditions stated in this License.\n\n 5. Submission of Contributions. 
Unless You explicitly state otherwise,\n any Contribution intentionally submitted for inclusion in the Work\n by You to the Licensor shall be under the terms and conditions of\n this License, without any additional terms or conditions.\n Notwithstanding the above, nothing herein shall supersede or modify\n the terms of any separate license agreement you may have executed\n with Licensor regarding such Contributions.\n\n 6. Trademarks. This License does not grant permission to use the trade\n names, trademarks, service marks, or product names of the Licensor,\n except as required for reasonable and customary use in describing the\n origin of the Work and reproducing the content of the NOTICE file.\n\n 7. Disclaimer of Warranty. Unless required by applicable law or\n agreed to in writing, Licensor provides the Work (and each\n Contributor provides its Contributions) on an "AS IS" BASIS,\n WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n implied, including, without limitation, any warranties or conditions\n of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A\n PARTICULAR PURPOSE. You are solely responsible for determining the\n appropriateness of using or redistributing the Work and assume any\n risks associated with Your exercise of permissions under this License.\n\n 8. Limitation of Liability. In no event and under no legal theory,\n whether in tort (including negligence), contract, or otherwise,\n unless required by applicable law (such as deliberate and grossly\n negligent acts) or agreed to in writing, shall any Contributor be\n liable to You for damages, including any direct, indirect, special,\n incidental, or consequential damages of any character arising as a\n result of this License or out of the use or inability to use the\n Work (including but not limited to damages for loss of goodwill,\n work stoppage, computer failure or malfunction, or any and all\n other commercial damages or losses), even if such Contributor\n has been advised of the possibility of such damages.\n\n 9. Accepting Warranty or Additional Liability. While redistributing\n the Work or Derivative Works thereof, You may choose to offer,\n and charge a fee for, acceptance of support, warranty, indemnity,\n or other liability obligations and/or rights consistent with this\n License. However, in accepting such obligations, You may act only\n on Your own behalf and on Your sole responsibility, not on behalf\n of any other Contributor, and only if You agree to indemnify,\n defend, and hold each Contributor harmless for any liability\n incurred by, or claims asserted against, such Contributor by reason\n of your accepting any such warranty or additional liability.\n\n END OF TERMS AND CONDITIONS\n
.venv\Lib\site-packages\packaging-25.0.dist-info\licenses\LICENSE.APACHE
LICENSE.APACHE
Other
10,174
0.95
0.112994
0
react-lib
916
2023-08-12T08:21:49.603492
BSD-3-Clause
false
2ee41112a44fe7014dce33e26468ba93
Copyright (c) Donald Stufft and individual contributors.\nAll rights reserved.\n\nRedistribution and use in source and binary forms, with or without\nmodification, are permitted provided that the following conditions are met:\n\n 1. Redistributions of source code must retain the above copyright notice,\n this list of conditions and the following disclaimer.\n\n 2. Redistributions in binary form must reproduce the above copyright\n notice, this list of conditions and the following disclaimer in the\n documentation and/or other materials provided with the distribution.\n\nTHIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND\nANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED\nWARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE\nDISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE\nFOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL\nDAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR\nSERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER\nCAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,\nOR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE\nOF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n
.venv\Lib\site-packages\packaging-25.0.dist-info\licenses\LICENSE.BSD
LICENSE.BSD
Other
1,344
0.7
0
0
node-utils
284
2024-02-11T05:12:36.765177
MIT
false
7bef9bf4a8e4263634d0597e7ba100b8
"""\nThis file is very long and growing, but it was decided to not split it yet, as\nit's still manageable (2020-03-17, ~1.1k LoC). See gh-31989\n\nInstead of splitting it was decided to define sections here:\n- Configuration / Settings\n- Autouse fixtures\n- Common arguments\n- Missing values & co.\n- Classes\n- Indices\n- Series'\n- DataFrames\n- Operators & Operations\n- Data sets/files\n- Time zones\n- Dtypes\n- Misc\n"""\nfrom __future__ import annotations\n\nfrom collections import abc\nfrom datetime import (\n date,\n datetime,\n time,\n timedelta,\n timezone,\n)\nfrom decimal import Decimal\nimport operator\nimport os\nfrom typing import (\n TYPE_CHECKING,\n Callable,\n)\n\nfrom dateutil.tz import (\n tzlocal,\n tzutc,\n)\nimport hypothesis\nfrom hypothesis import strategies as st\nimport numpy as np\nimport pytest\nfrom pytz import (\n FixedOffset,\n utc,\n)\n\nfrom pandas._config.config import _get_option\n\nimport pandas.util._test_decorators as td\n\nfrom pandas.core.dtypes.dtypes import (\n DatetimeTZDtype,\n IntervalDtype,\n)\n\nimport pandas as pd\nfrom pandas import (\n CategoricalIndex,\n DataFrame,\n Interval,\n IntervalIndex,\n Period,\n RangeIndex,\n Series,\n Timedelta,\n Timestamp,\n date_range,\n period_range,\n timedelta_range,\n)\nimport pandas._testing as tm\nfrom pandas.core import ops\nfrom pandas.core.indexes.api import (\n Index,\n MultiIndex,\n)\nfrom pandas.util.version import Version\n\nif TYPE_CHECKING:\n from collections.abc import (\n Hashable,\n Iterator,\n )\n\ntry:\n import pyarrow as pa\nexcept ImportError:\n has_pyarrow = False\nelse:\n del pa\n has_pyarrow = True\n\nimport zoneinfo\n\ntry:\n zoneinfo.ZoneInfo("UTC")\nexcept zoneinfo.ZoneInfoNotFoundError:\n zoneinfo = None # type: ignore[assignment]\n\n\n# ----------------------------------------------------------------\n# Configuration / Settings\n# ----------------------------------------------------------------\n# pytest\n\n\ndef pytest_addoption(parser) -> None:\n parser.addoption(\n "--no-strict-data-files",\n action="store_false",\n help="Don't fail if a test is skipped for missing data file.",\n )\n\n\ndef ignore_doctest_warning(item: pytest.Item, path: str, message: str) -> None:\n """Ignore doctest warning.\n\n Parameters\n ----------\n item : pytest.Item\n pytest test item.\n path : str\n Module path to Python object, e.g. "pandas.core.frame.DataFrame.append". A\n warning will be filtered when item.name ends with in given path. So it is\n sufficient to specify e.g. 
"DataFrame.append".\n message : str\n Message to be filtered.\n """\n if item.name.endswith(path):\n item.add_marker(pytest.mark.filterwarnings(f"ignore:{message}"))\n\n\ndef pytest_collection_modifyitems(items, config) -> None:\n is_doctest = config.getoption("--doctest-modules") or config.getoption(\n "--doctest-cython", default=False\n )\n\n # Warnings from doctests that can be ignored; place reason in comment above.\n # Each entry specifies (path, message) - see the ignore_doctest_warning function\n ignored_doctest_warnings = [\n ("is_int64_dtype", "is_int64_dtype is deprecated"),\n ("is_interval_dtype", "is_interval_dtype is deprecated"),\n ("is_period_dtype", "is_period_dtype is deprecated"),\n ("is_datetime64tz_dtype", "is_datetime64tz_dtype is deprecated"),\n ("is_categorical_dtype", "is_categorical_dtype is deprecated"),\n ("is_sparse", "is_sparse is deprecated"),\n ("DataFrameGroupBy.fillna", "DataFrameGroupBy.fillna is deprecated"),\n ("NDFrame.replace", "The 'method' keyword"),\n ("NDFrame.replace", "Series.replace without 'value'"),\n ("NDFrame.clip", "Downcasting behavior in Series and DataFrame methods"),\n ("Series.idxmin", "The behavior of Series.idxmin"),\n ("Series.idxmax", "The behavior of Series.idxmax"),\n ("SeriesGroupBy.fillna", "SeriesGroupBy.fillna is deprecated"),\n ("SeriesGroupBy.idxmin", "The behavior of Series.idxmin"),\n ("SeriesGroupBy.idxmax", "The behavior of Series.idxmax"),\n # Docstring divides by zero to show behavior difference\n ("missing.mask_zero_div_zero", "divide by zero encountered"),\n (\n "to_pydatetime",\n "The behavior of DatetimeProperties.to_pydatetime is deprecated",\n ),\n (\n "pandas.core.generic.NDFrame.bool",\n "(Series|DataFrame).bool is now deprecated and will be removed "\n "in future version of pandas",\n ),\n (\n "pandas.core.generic.NDFrame.first",\n "first is deprecated and will be removed in a future version. "\n "Please create a mask and filter using `.loc` instead",\n ),\n (\n "Resampler.fillna",\n "DatetimeIndexResampler.fillna is deprecated",\n ),\n (\n "DataFrameGroupBy.fillna",\n "DataFrameGroupBy.fillna with 'method' is deprecated",\n ),\n (\n "DataFrameGroupBy.fillna",\n "DataFrame.fillna with 'method' is deprecated",\n ),\n ("read_parquet", "Passing a BlockManager to DataFrame is deprecated"),\n ]\n\n if is_doctest:\n for item in items:\n for path, message in ignored_doctest_warnings:\n ignore_doctest_warning(item, path, message)\n\n\nhypothesis_health_checks = [hypothesis.HealthCheck.too_slow]\nif Version(hypothesis.__version__) >= Version("6.83.2"):\n hypothesis_health_checks.append(hypothesis.HealthCheck.differing_executors)\n\n# Hypothesis\nhypothesis.settings.register_profile(\n "ci",\n # Hypothesis timing checks are tuned for scalars by default, so we bump\n # them from 200ms to 500ms per test case as the global default. If this\n # is too short for a specific test, (a) try to make it faster, and (b)\n # if it really is slow add `@settings(deadline=...)` with a working value,\n # or `deadline=None` to entirely disable timeouts for that test.\n # 2022-02-09: Changed deadline from 500 -> None. 
Deadline leads to\n # non-actionable, flaky CI failures (# GH 24641, 44969, 45118, 44969)\n deadline=None,\n suppress_health_check=tuple(hypothesis_health_checks),\n)\nhypothesis.settings.load_profile("ci")\n\n# Registering these strategies makes them globally available via st.from_type,\n# which is use for offsets in tests/tseries/offsets/test_offsets_properties.py\nfor name in "MonthBegin MonthEnd BMonthBegin BMonthEnd".split():\n cls = getattr(pd.tseries.offsets, name)\n st.register_type_strategy(\n cls, st.builds(cls, n=st.integers(-99, 99), normalize=st.booleans())\n )\n\nfor name in "YearBegin YearEnd BYearBegin BYearEnd".split():\n cls = getattr(pd.tseries.offsets, name)\n st.register_type_strategy(\n cls,\n st.builds(\n cls,\n n=st.integers(-5, 5),\n normalize=st.booleans(),\n month=st.integers(min_value=1, max_value=12),\n ),\n )\n\nfor name in "QuarterBegin QuarterEnd BQuarterBegin BQuarterEnd".split():\n cls = getattr(pd.tseries.offsets, name)\n st.register_type_strategy(\n cls,\n st.builds(\n cls,\n n=st.integers(-24, 24),\n normalize=st.booleans(),\n startingMonth=st.integers(min_value=1, max_value=12),\n ),\n )\n\n\n# ----------------------------------------------------------------\n# Autouse fixtures\n# ----------------------------------------------------------------\n\n\n# https://github.com/pytest-dev/pytest/issues/11873\n# Would like to avoid autouse=True, but cannot as of pytest 8.0.0\n@pytest.fixture(autouse=True)\ndef add_doctest_imports(doctest_namespace) -> None:\n """\n Make `np` and `pd` names available for doctests.\n """\n doctest_namespace["np"] = np\n doctest_namespace["pd"] = pd\n\n\n@pytest.fixture(autouse=True)\ndef configure_tests() -> None:\n """\n Configure settings for all tests and test modules.\n """\n pd.set_option("chained_assignment", "raise")\n\n\n# ----------------------------------------------------------------\n# Common arguments\n# ----------------------------------------------------------------\n@pytest.fixture(params=[0, 1, "index", "columns"], ids=lambda x: f"axis={repr(x)}")\ndef axis(request):\n """\n Fixture for returning the axis numbers of a DataFrame.\n """\n return request.param\n\n\naxis_frame = axis\n\n\n@pytest.fixture(params=[1, "columns"], ids=lambda x: f"axis={repr(x)}")\ndef axis_1(request):\n """\n Fixture for returning aliases of axis 1 of a DataFrame.\n """\n return request.param\n\n\n@pytest.fixture(params=[True, False, None])\ndef observed(request):\n """\n Pass in the observed keyword to groupby for [True, False]\n This indicates whether categoricals should return values for\n values which are not in the grouper [False / None], or only values which\n appear in the grouper [True]. 
[None] is supported for future compatibility\n if we decide to change the default (and would need to warn if this\n parameter is not passed).\n """\n return request.param\n\n\n@pytest.fixture(params=[True, False, None])\ndef ordered(request):\n """\n Boolean 'ordered' parameter for Categorical.\n """\n return request.param\n\n\n@pytest.fixture(params=[True, False])\ndef skipna(request):\n """\n Boolean 'skipna' parameter.\n """\n return request.param\n\n\n@pytest.fixture(params=["first", "last", False])\ndef keep(request):\n """\n Valid values for the 'keep' parameter used in\n .duplicated or .drop_duplicates\n """\n return request.param\n\n\n@pytest.fixture(params=["both", "neither", "left", "right"])\ndef inclusive_endpoints_fixture(request):\n """\n Fixture for trying all interval 'inclusive' parameters.\n """\n return request.param\n\n\n@pytest.fixture(params=["left", "right", "both", "neither"])\ndef closed(request):\n """\n Fixture for trying all interval closed parameters.\n """\n return request.param\n\n\n@pytest.fixture(params=["left", "right", "both", "neither"])\ndef other_closed(request):\n """\n Secondary closed fixture to allow parametrizing over all pairs of closed.\n """\n return request.param\n\n\n@pytest.fixture(\n params=[\n None,\n "gzip",\n "bz2",\n "zip",\n "xz",\n "tar",\n pytest.param("zstd", marks=td.skip_if_no("zstandard")),\n ]\n)\ndef compression(request):\n """\n Fixture for trying common compression types in compression tests.\n """\n return request.param\n\n\n@pytest.fixture(\n params=[\n "gzip",\n "bz2",\n "zip",\n "xz",\n "tar",\n pytest.param("zstd", marks=td.skip_if_no("zstandard")),\n ]\n)\ndef compression_only(request):\n """\n Fixture for trying common compression types in compression tests excluding\n uncompressed case.\n """\n return request.param\n\n\n@pytest.fixture(params=[True, False])\ndef writable(request):\n """\n Fixture that an array is writable.\n """\n return request.param\n\n\n@pytest.fixture(params=["inner", "outer", "left", "right"])\ndef join_type(request):\n """\n Fixture for trying all types of join operations.\n """\n return request.param\n\n\n@pytest.fixture(params=["nlargest", "nsmallest"])\ndef nselect_method(request):\n """\n Fixture for trying all nselect methods.\n """\n return request.param\n\n\n# ----------------------------------------------------------------\n# Missing values & co.\n# ----------------------------------------------------------------\n@pytest.fixture(params=tm.NULL_OBJECTS, ids=lambda x: type(x).__name__)\ndef nulls_fixture(request):\n """\n Fixture for each null type in pandas.\n """\n return request.param\n\n\nnulls_fixture2 = nulls_fixture # Generate cartesian product of nulls_fixture\n\n\n@pytest.fixture(params=[None, np.nan, pd.NaT])\ndef unique_nulls_fixture(request):\n """\n Fixture for each null type in pandas, each null type exactly once.\n """\n return request.param\n\n\n# Generate cartesian product of unique_nulls_fixture:\nunique_nulls_fixture2 = unique_nulls_fixture\n\n\n@pytest.fixture(params=tm.NP_NAT_OBJECTS, ids=lambda x: type(x).__name__)\ndef np_nat_fixture(request):\n """\n Fixture for each NaT type in numpy.\n """\n return request.param\n\n\n# Generate cartesian product of np_nat_fixture:\nnp_nat_fixture2 = np_nat_fixture\n\n\n# ----------------------------------------------------------------\n# Classes\n# ----------------------------------------------------------------\n\n\n@pytest.fixture(params=[DataFrame, Series])\ndef frame_or_series(request):\n """\n Fixture to parametrize over 
DataFrame and Series.\n """\n return request.param\n\n\n@pytest.fixture(params=[Index, Series], ids=["index", "series"])\ndef index_or_series(request):\n """\n Fixture to parametrize over Index and Series, made necessary by a mypy\n bug, giving an error:\n\n List item 0 has incompatible type "Type[Series]"; expected "Type[PandasObject]"\n\n See GH#29725\n """\n return request.param\n\n\n# Generate cartesian product of index_or_series fixture:\nindex_or_series2 = index_or_series\n\n\n@pytest.fixture(params=[Index, Series, pd.array], ids=["index", "series", "array"])\ndef index_or_series_or_array(request):\n """\n Fixture to parametrize over Index, Series, and ExtensionArray\n """\n return request.param\n\n\n@pytest.fixture(params=[Index, Series, DataFrame, pd.array], ids=lambda x: x.__name__)\ndef box_with_array(request):\n """\n Fixture to test behavior for Index, Series, DataFrame, and pandas Array\n classes\n """\n return request.param\n\n\nbox_with_array2 = box_with_array\n\n\n@pytest.fixture\ndef dict_subclass() -> type[dict]:\n """\n Fixture for a dictionary subclass.\n """\n\n class TestSubDict(dict):\n def __init__(self, *args, **kwargs) -> None:\n dict.__init__(self, *args, **kwargs)\n\n return TestSubDict\n\n\n@pytest.fixture\ndef non_dict_mapping_subclass() -> type[abc.Mapping]:\n """\n Fixture for a non-mapping dictionary subclass.\n """\n\n class TestNonDictMapping(abc.Mapping):\n def __init__(self, underlying_dict) -> None:\n self._data = underlying_dict\n\n def __getitem__(self, key):\n return self._data.__getitem__(key)\n\n def __iter__(self) -> Iterator:\n return self._data.__iter__()\n\n def __len__(self) -> int:\n return self._data.__len__()\n\n return TestNonDictMapping\n\n\n# ----------------------------------------------------------------\n# Indices\n# ----------------------------------------------------------------\n@pytest.fixture\ndef multiindex_year_month_day_dataframe_random_data():\n """\n DataFrame with 3 level MultiIndex (year, month, day) covering\n first 100 business days from 2000-01-01 with random data\n """\n tdf = DataFrame(\n np.random.default_rng(2).standard_normal((100, 4)),\n columns=Index(list("ABCD")),\n index=date_range("2000-01-01", periods=100, freq="B"),\n )\n ymd = tdf.groupby([lambda x: x.year, lambda x: x.month, lambda x: x.day]).sum()\n # use int64 Index, to make sure things work\n ymd.index = ymd.index.set_levels([lev.astype("i8") for lev in ymd.index.levels])\n ymd.index.set_names(["year", "month", "day"], inplace=True)\n return ymd\n\n\n@pytest.fixture\ndef lexsorted_two_level_string_multiindex() -> MultiIndex:\n """\n 2-level MultiIndex, lexsorted, with string names.\n """\n return MultiIndex(\n levels=[["foo", "bar", "baz", "qux"], ["one", "two", "three"]],\n codes=[[0, 0, 0, 1, 1, 2, 2, 3, 3, 3], [0, 1, 2, 0, 1, 1, 2, 0, 1, 2]],\n names=["first", "second"],\n )\n\n\n@pytest.fixture\ndef multiindex_dataframe_random_data(\n lexsorted_two_level_string_multiindex,\n) -> DataFrame:\n """DataFrame with 2 level MultiIndex with random data"""\n index = lexsorted_two_level_string_multiindex\n return DataFrame(\n np.random.default_rng(2).standard_normal((10, 3)),\n index=index,\n columns=Index(["A", "B", "C"], name="exp"),\n )\n\n\ndef _create_multiindex():\n """\n MultiIndex used to test the general functionality of this object\n """\n\n # See Also: tests.multi.conftest.idx\n major_axis = Index(["foo", "bar", "baz", "qux"])\n minor_axis = Index(["one", "two"])\n\n major_codes = np.array([0, 0, 1, 2, 3, 3])\n minor_codes = np.array([0, 1, 0, 1, 
0, 1])\n index_names = ["first", "second"]\n return MultiIndex(\n levels=[major_axis, minor_axis],\n codes=[major_codes, minor_codes],\n names=index_names,\n verify_integrity=False,\n )\n\n\ndef _create_mi_with_dt64tz_level():\n """\n MultiIndex with a level that is a tzaware DatetimeIndex.\n """\n # GH#8367 round trip with pickle\n return MultiIndex.from_product(\n [[1, 2], ["a", "b"], date_range("20130101", periods=3, tz="US/Eastern")],\n names=["one", "two", "three"],\n )\n\n\nindices_dict = {\n "object": Index([f"pandas_{i}" for i in range(100)], dtype=object),\n "string": Index([f"pandas_{i}" for i in range(100)], dtype="str"),\n "datetime": date_range("2020-01-01", periods=100),\n "datetime-tz": date_range("2020-01-01", periods=100, tz="US/Pacific"),\n "period": period_range("2020-01-01", periods=100, freq="D"),\n "timedelta": timedelta_range(start="1 day", periods=100, freq="D"),\n "range": RangeIndex(100),\n "int8": Index(np.arange(100), dtype="int8"),\n "int16": Index(np.arange(100), dtype="int16"),\n "int32": Index(np.arange(100), dtype="int32"),\n "int64": Index(np.arange(100), dtype="int64"),\n "uint8": Index(np.arange(100), dtype="uint8"),\n "uint16": Index(np.arange(100), dtype="uint16"),\n "uint32": Index(np.arange(100), dtype="uint32"),\n "uint64": Index(np.arange(100), dtype="uint64"),\n "float32": Index(np.arange(100), dtype="float32"),\n "float64": Index(np.arange(100), dtype="float64"),\n "bool-object": Index([True, False] * 5, dtype=object),\n "bool-dtype": Index([True, False] * 5, dtype=bool),\n "complex64": Index(\n np.arange(100, dtype="complex64") + 1.0j * np.arange(100, dtype="complex64")\n ),\n "complex128": Index(\n np.arange(100, dtype="complex128") + 1.0j * np.arange(100, dtype="complex128")\n ),\n "categorical": CategoricalIndex(list("abcd") * 25),\n "interval": IntervalIndex.from_breaks(np.linspace(0, 100, num=101)),\n "empty": Index([]),\n "tuples": MultiIndex.from_tuples(zip(["foo", "bar", "baz"], [1, 2, 3])),\n "mi-with-dt64tz-level": _create_mi_with_dt64tz_level(),\n "multi": _create_multiindex(),\n "repeats": Index([0, 0, 1, 1, 2, 2]),\n "nullable_int": Index(np.arange(100), dtype="Int64"),\n "nullable_uint": Index(np.arange(100), dtype="UInt16"),\n "nullable_float": Index(np.arange(100), dtype="Float32"),\n "nullable_bool": Index(np.arange(100).astype(bool), dtype="boolean"),\n "string-python": Index(\n pd.array([f"pandas_{i}" for i in range(100)], dtype="string[python]")\n ),\n}\nif has_pyarrow:\n idx = Index(pd.array([f"pandas_{i}" for i in range(100)], dtype="string[pyarrow]"))\n indices_dict["string-pyarrow"] = idx\n\n\n@pytest.fixture(params=indices_dict.keys())\ndef index(request):\n """\n Fixture for many "simple" kinds of indices.\n\n These indices are unlikely to cover corner cases, e.g.\n - no names\n - no NaTs/NaNs\n - no values near implementation bounds\n - ...\n """\n # copy to avoid mutation, e.g. 
setting .name\n return indices_dict[request.param].copy()\n\n\n# Needed to generate cartesian product of indices\nindex_fixture2 = index\n\n\n@pytest.fixture(\n params=[\n key for key, value in indices_dict.items() if not isinstance(value, MultiIndex)\n ]\n)\ndef index_flat(request):\n """\n index fixture, but excluding MultiIndex cases.\n """\n key = request.param\n return indices_dict[key].copy()\n\n\n# Alias so we can test with cartesian product of index_flat\nindex_flat2 = index_flat\n\n\n@pytest.fixture(\n params=[\n key\n for key, value in indices_dict.items()\n if not (\n key.startswith(("int", "uint", "float"))\n or key in ["range", "empty", "repeats", "bool-dtype"]\n )\n and not isinstance(value, MultiIndex)\n ]\n)\ndef index_with_missing(request):\n """\n Fixture for indices with missing values.\n\n Integer-dtype and empty cases are excluded because they cannot hold missing\n values.\n\n MultiIndex is excluded because isna() is not defined for MultiIndex.\n """\n\n # GH 35538. Use deep copy to avoid illusive bug on np-dev\n # GHA pipeline that writes into indices_dict despite copy\n ind = indices_dict[request.param].copy(deep=True)\n vals = ind.values.copy()\n if request.param in ["tuples", "mi-with-dt64tz-level", "multi"]:\n # For setting missing values in the top level of MultiIndex\n vals = ind.tolist()\n vals[0] = (None,) + vals[0][1:]\n vals[-1] = (None,) + vals[-1][1:]\n return MultiIndex.from_tuples(vals)\n else:\n vals[0] = None\n vals[-1] = None\n return type(ind)(vals)\n\n\n# ----------------------------------------------------------------\n# Series'\n# ----------------------------------------------------------------\n@pytest.fixture\ndef string_series() -> Series:\n """\n Fixture for Series of floats with Index of unique strings\n """\n return Series(\n np.arange(30, dtype=np.float64) * 1.1,\n index=Index([f"i_{i}" for i in range(30)]),\n name="series",\n )\n\n\n@pytest.fixture\ndef object_series() -> Series:\n """\n Fixture for Series of dtype object with Index of unique strings\n """\n data = [f"foo_{i}" for i in range(30)]\n index = Index([f"bar_{i}" for i in range(30)])\n return Series(data, index=index, name="objects", dtype=object)\n\n\n@pytest.fixture\ndef datetime_series() -> Series:\n """\n Fixture for Series of floats with DatetimeIndex\n """\n return Series(\n np.random.default_rng(2).standard_normal(30),\n index=date_range("2000-01-01", periods=30, freq="B"),\n name="ts",\n )\n\n\ndef _create_series(index):\n """Helper for the _series dict"""\n size = len(index)\n data = np.random.default_rng(2).standard_normal(size)\n return Series(data, index=index, name="a", copy=False)\n\n\n_series = {\n f"series-with-{index_id}-index": _create_series(index)\n for index_id, index in indices_dict.items()\n}\n\n\n@pytest.fixture\ndef series_with_simple_index(index) -> Series:\n """\n Fixture for tests on series with changing types of indices.\n """\n return _create_series(index)\n\n\n_narrow_series = {\n f"{dtype.__name__}-series": Series(\n range(30), index=[f"i-{i}" for i in range(30)], name="a", dtype=dtype\n )\n for dtype in tm.NARROW_NP_DTYPES\n}\n\n\n_index_or_series_objs = {**indices_dict, **_series, **_narrow_series}\n\n\n@pytest.fixture(params=_index_or_series_objs.keys())\ndef index_or_series_obj(request):\n """\n Fixture for tests on indexes, series and series with a narrow dtype\n copy to avoid mutation, e.g. 
setting .name\n """\n return _index_or_series_objs[request.param].copy(deep=True)\n\n\n_typ_objects_series = {\n f"{dtype.__name__}-series": Series(dtype) for dtype in tm.PYTHON_DATA_TYPES\n}\n\n\n_index_or_series_memory_objs = {\n **indices_dict,\n **_series,\n **_narrow_series,\n **_typ_objects_series,\n}\n\n\n@pytest.fixture(params=_index_or_series_memory_objs.keys())\ndef index_or_series_memory_obj(request):\n """\n Fixture for tests on indexes, series, series with a narrow dtype and\n series with empty objects type\n copy to avoid mutation, e.g. setting .name\n """\n return _index_or_series_memory_objs[request.param].copy(deep=True)\n\n\n# ----------------------------------------------------------------\n# DataFrames\n# ----------------------------------------------------------------\n@pytest.fixture\ndef int_frame() -> DataFrame:\n """\n Fixture for DataFrame of ints with index of unique strings\n\n Columns are ['A', 'B', 'C', 'D']\n """\n return DataFrame(\n np.ones((30, 4), dtype=np.int64),\n index=Index([f"foo_{i}" for i in range(30)]),\n columns=Index(list("ABCD")),\n )\n\n\n@pytest.fixture\ndef float_frame() -> DataFrame:\n """\n Fixture for DataFrame of floats with index of unique strings\n\n Columns are ['A', 'B', 'C', 'D'].\n """\n return DataFrame(\n np.random.default_rng(2).standard_normal((30, 4)),\n index=Index([f"foo_{i}" for i in range(30)]),\n columns=Index(list("ABCD")),\n )\n\n\n@pytest.fixture\ndef rand_series_with_duplicate_datetimeindex() -> Series:\n """\n Fixture for Series with a DatetimeIndex that has duplicates.\n """\n dates = [\n datetime(2000, 1, 2),\n datetime(2000, 1, 2),\n datetime(2000, 1, 2),\n datetime(2000, 1, 3),\n datetime(2000, 1, 3),\n datetime(2000, 1, 3),\n datetime(2000, 1, 4),\n datetime(2000, 1, 4),\n datetime(2000, 1, 4),\n datetime(2000, 1, 5),\n ]\n\n return Series(np.random.default_rng(2).standard_normal(len(dates)), index=dates)\n\n\n# ----------------------------------------------------------------\n# Scalars\n# ----------------------------------------------------------------\n@pytest.fixture(\n params=[\n (Interval(left=0, right=5), IntervalDtype("int64", "right")),\n (Interval(left=0.1, right=0.5), IntervalDtype("float64", "right")),\n (Period("2012-01", freq="M"), "period[M]"),\n (Period("2012-02-01", freq="D"), "period[D]"),\n (\n Timestamp("2011-01-01", tz="US/Eastern"),\n DatetimeTZDtype(unit="s", tz="US/Eastern"),\n ),\n (Timedelta(seconds=500), "timedelta64[ns]"),\n ]\n)\ndef ea_scalar_and_dtype(request):\n return request.param\n\n\n# ----------------------------------------------------------------\n# Operators & Operations\n# ----------------------------------------------------------------\n\n\n@pytest.fixture(params=tm.arithmetic_dunder_methods)\ndef all_arithmetic_operators(request):\n """\n Fixture for dunder names for common arithmetic operations.\n """\n return request.param\n\n\n@pytest.fixture(\n params=[\n operator.add,\n ops.radd,\n operator.sub,\n ops.rsub,\n operator.mul,\n ops.rmul,\n operator.truediv,\n ops.rtruediv,\n operator.floordiv,\n ops.rfloordiv,\n operator.mod,\n ops.rmod,\n operator.pow,\n ops.rpow,\n operator.eq,\n operator.ne,\n operator.lt,\n operator.le,\n operator.gt,\n operator.ge,\n operator.and_,\n ops.rand_,\n operator.xor,\n ops.rxor,\n operator.or_,\n ops.ror_,\n ]\n)\ndef all_binary_operators(request):\n """\n Fixture for operator and roperator arithmetic, comparison, and logical ops.\n """\n return request.param\n\n\n@pytest.fixture(\n params=[\n operator.add,\n ops.radd,\n operator.sub,\n 
ops.rsub,\n operator.mul,\n ops.rmul,\n operator.truediv,\n ops.rtruediv,\n operator.floordiv,\n ops.rfloordiv,\n operator.mod,\n ops.rmod,\n operator.pow,\n ops.rpow,\n ]\n)\ndef all_arithmetic_functions(request):\n """\n Fixture for operator and roperator arithmetic functions.\n\n Notes\n -----\n This includes divmod and rdivmod, whereas all_arithmetic_operators\n does not.\n """\n return request.param\n\n\n_all_numeric_reductions = [\n "count",\n "sum",\n "max",\n "min",\n "mean",\n "prod",\n "std",\n "var",\n "median",\n "kurt",\n "skew",\n "sem",\n]\n\n\n@pytest.fixture(params=_all_numeric_reductions)\ndef all_numeric_reductions(request):\n """\n Fixture for numeric reduction names.\n """\n return request.param\n\n\n_all_boolean_reductions = ["all", "any"]\n\n\n@pytest.fixture(params=_all_boolean_reductions)\ndef all_boolean_reductions(request):\n """\n Fixture for boolean reduction names.\n """\n return request.param\n\n\n_all_reductions = _all_numeric_reductions + _all_boolean_reductions\n\n\n@pytest.fixture(params=_all_reductions)\ndef all_reductions(request):\n """\n Fixture for all (boolean + numeric) reduction names.\n """\n return request.param\n\n\n@pytest.fixture(\n params=[\n operator.eq,\n operator.ne,\n operator.gt,\n operator.ge,\n operator.lt,\n operator.le,\n ]\n)\ndef comparison_op(request):\n """\n Fixture for operator module comparison functions.\n """\n return request.param\n\n\n@pytest.fixture(params=["__le__", "__lt__", "__ge__", "__gt__"])\ndef compare_operators_no_eq_ne(request):\n """\n Fixture for dunder names for compare operations except == and !=\n\n * >=\n * >\n * <\n * <=\n """\n return request.param\n\n\n@pytest.fixture(\n params=["__and__", "__rand__", "__or__", "__ror__", "__xor__", "__rxor__"]\n)\ndef all_logical_operators(request):\n """\n Fixture for dunder names for common logical operations\n\n * |\n * &\n * ^\n """\n return request.param\n\n\n_all_numeric_accumulations = ["cumsum", "cumprod", "cummin", "cummax"]\n\n\n@pytest.fixture(params=_all_numeric_accumulations)\ndef all_numeric_accumulations(request):\n """\n Fixture for numeric accumulation names\n """\n return request.param\n\n\n# ----------------------------------------------------------------\n# Data sets/files\n# ----------------------------------------------------------------\n@pytest.fixture\ndef strict_data_files(pytestconfig):\n """\n Returns the configuration for the test setting `--no-strict-data-files`.\n """\n return pytestconfig.getoption("--no-strict-data-files")\n\n\n@pytest.fixture\ndef datapath(strict_data_files: str) -> Callable[..., str]:\n """\n Get the path to a data file.\n\n Parameters\n ----------\n path : str\n Path to the file, relative to ``pandas/tests/``\n\n Returns\n -------\n path including ``pandas/tests``.\n\n Raises\n ------\n ValueError\n If the path doesn't exist and the --no-strict-data-files option is not set.\n """\n BASE_PATH = os.path.join(os.path.dirname(__file__), "tests")\n\n def deco(*args):\n path = os.path.join(BASE_PATH, *args)\n if not os.path.exists(path):\n if strict_data_files:\n raise ValueError(\n f"Could not find file {path} and --no-strict-data-files is not set."\n )\n pytest.skip(f"Could not find {path}.")\n return path\n\n return deco\n\n\n# ----------------------------------------------------------------\n# Time zones\n# ----------------------------------------------------------------\nTIMEZONES = [\n None,\n "UTC",\n "US/Eastern",\n "Asia/Tokyo",\n "dateutil/US/Pacific",\n "dateutil/Asia/Singapore",\n "+01:15",\n "-02:15",\n 
"UTC+01:15",\n "UTC-02:15",\n tzutc(),\n tzlocal(),\n FixedOffset(300),\n FixedOffset(0),\n FixedOffset(-300),\n timezone.utc,\n timezone(timedelta(hours=1)),\n timezone(timedelta(hours=-1), name="foo"),\n]\nif zoneinfo is not None:\n TIMEZONES.extend(\n [\n zoneinfo.ZoneInfo("US/Pacific"), # type: ignore[list-item]\n zoneinfo.ZoneInfo("UTC"), # type: ignore[list-item]\n ]\n )\nTIMEZONE_IDS = [repr(i) for i in TIMEZONES]\n\n\n@td.parametrize_fixture_doc(str(TIMEZONE_IDS))\n@pytest.fixture(params=TIMEZONES, ids=TIMEZONE_IDS)\ndef tz_naive_fixture(request):\n """\n Fixture for trying timezones including default (None): {0}\n """\n return request.param\n\n\n@td.parametrize_fixture_doc(str(TIMEZONE_IDS[1:]))\n@pytest.fixture(params=TIMEZONES[1:], ids=TIMEZONE_IDS[1:])\ndef tz_aware_fixture(request):\n """\n Fixture for trying explicit timezones: {0}\n """\n return request.param\n\n\n# Generate cartesian product of tz_aware_fixture:\ntz_aware_fixture2 = tz_aware_fixture\n\n\n_UTCS = ["utc", "dateutil/UTC", utc, tzutc(), timezone.utc]\nif zoneinfo is not None:\n _UTCS.append(zoneinfo.ZoneInfo("UTC"))\n\n\n@pytest.fixture(params=_UTCS)\ndef utc_fixture(request):\n """\n Fixture to provide variants of UTC timezone strings and tzinfo objects.\n """\n return request.param\n\n\nutc_fixture2 = utc_fixture\n\n\n@pytest.fixture(params=["s", "ms", "us", "ns"])\ndef unit(request):\n """\n datetime64 units we support.\n """\n return request.param\n\n\nunit2 = unit\n\n\n# ----------------------------------------------------------------\n# Dtypes\n# ----------------------------------------------------------------\n@pytest.fixture(params=tm.STRING_DTYPES)\ndef string_dtype(request):\n """\n Parametrized fixture for string dtypes.\n\n * str\n * 'str'\n * 'U'\n """\n return request.param\n\n\n@pytest.fixture(\n params=[\n ("python", pd.NA),\n pytest.param(("pyarrow", pd.NA), marks=td.skip_if_no("pyarrow")),\n pytest.param(("pyarrow", np.nan), marks=td.skip_if_no("pyarrow")),\n ("python", np.nan),\n ],\n ids=[\n "string=string[python]",\n "string=string[pyarrow]",\n "string=str[pyarrow]",\n "string=str[python]",\n ],\n)\ndef string_dtype_no_object(request):\n """\n Parametrized fixture for string dtypes.\n * 'string[python]' (NA variant)\n * 'string[pyarrow]' (NA variant)\n * 'str' (NaN variant, with pyarrow)\n * 'str' (NaN variant, without pyarrow)\n """\n # need to instantiate the StringDtype here instead of in the params\n # to avoid importing pyarrow during test collection\n storage, na_value = request.param\n return pd.StringDtype(storage, na_value)\n\n\n@pytest.fixture(\n params=[\n "string[python]",\n pytest.param("string[pyarrow]", marks=td.skip_if_no("pyarrow")),\n ]\n)\ndef nullable_string_dtype(request):\n """\n Parametrized fixture for string dtypes.\n\n * 'string[python]'\n * 'string[pyarrow]'\n """\n return request.param\n\n\n@pytest.fixture(\n params=[\n pytest.param(("pyarrow", np.nan), marks=td.skip_if_no("pyarrow")),\n pytest.param(("pyarrow", pd.NA), marks=td.skip_if_no("pyarrow")),\n ]\n)\ndef pyarrow_string_dtype(request):\n """\n Parametrized fixture for string dtypes backed by Pyarrow.\n\n * 'str[pyarrow]'\n * 'string[pyarrow]'\n """\n return pd.StringDtype(*request.param)\n\n\n@pytest.fixture(\n params=[\n "python",\n pytest.param("pyarrow", marks=td.skip_if_no("pyarrow")),\n ]\n)\ndef string_storage(request):\n """\n Parametrized fixture for pd.options.mode.string_storage.\n\n * 'python'\n * 'pyarrow'\n """\n return request.param\n\n\n@pytest.fixture(\n params=[\n ("python", pd.NA),\n 
pytest.param(("pyarrow", pd.NA), marks=td.skip_if_no("pyarrow")),\n pytest.param(("pyarrow", np.nan), marks=td.skip_if_no("pyarrow")),\n ("python", np.nan),\n ],\n ids=[\n "string=string[python]",\n "string=string[pyarrow]",\n "string=str[pyarrow]",\n "string=str[python]",\n ],\n)\ndef string_dtype_arguments(request):\n """\n Parametrized fixture for StringDtype storage and na_value.\n\n * 'python' + pd.NA\n * 'pyarrow' + pd.NA\n * 'pyarrow' + np.nan\n """\n return request.param\n\n\n@pytest.fixture(\n params=[\n "numpy_nullable",\n pytest.param("pyarrow", marks=td.skip_if_no("pyarrow")),\n ]\n)\ndef dtype_backend(request):\n """\n Parametrized fixture for pd.options.mode.string_storage.\n\n * 'python'\n * 'pyarrow'\n """\n return request.param\n\n\n# Alias so we can test with cartesian product of string_storage\nstring_storage2 = string_storage\nstring_dtype_arguments2 = string_dtype_arguments\n\n\n@pytest.fixture(params=tm.BYTES_DTYPES)\ndef bytes_dtype(request):\n """\n Parametrized fixture for bytes dtypes.\n\n * bytes\n * 'bytes'\n """\n return request.param\n\n\n@pytest.fixture(params=tm.OBJECT_DTYPES)\ndef object_dtype(request):\n """\n Parametrized fixture for object dtypes.\n\n * object\n * 'object'\n """\n return request.param\n\n\n@pytest.fixture(\n params=[\n np.dtype("object"),\n ("python", pd.NA),\n pytest.param(("pyarrow", pd.NA), marks=td.skip_if_no("pyarrow")),\n pytest.param(("pyarrow", np.nan), marks=td.skip_if_no("pyarrow")),\n ("python", np.nan),\n ],\n ids=[\n "string=object",\n "string=string[python]",\n "string=string[pyarrow]",\n "string=str[pyarrow]",\n "string=str[python]",\n ],\n)\ndef any_string_dtype(request):\n """\n Parametrized fixture for string dtypes.\n * 'object'\n * 'string[python]' (NA variant)\n * 'string[pyarrow]' (NA variant)\n * 'str' (NaN variant, with pyarrow)\n * 'str' (NaN variant, without pyarrow)\n """\n if isinstance(request.param, np.dtype):\n return request.param\n else:\n # need to instantiate the StringDtype here instead of in the params\n # to avoid importing pyarrow during test collection\n storage, na_value = request.param\n return pd.StringDtype(storage, na_value)\n\n\n@pytest.fixture(params=tm.DATETIME64_DTYPES)\ndef datetime64_dtype(request):\n """\n Parametrized fixture for datetime64 dtypes.\n\n * 'datetime64[ns]'\n * 'M8[ns]'\n """\n return request.param\n\n\n@pytest.fixture(params=tm.TIMEDELTA64_DTYPES)\ndef timedelta64_dtype(request):\n """\n Parametrized fixture for timedelta64 dtypes.\n\n * 'timedelta64[ns]'\n * 'm8[ns]'\n """\n return request.param\n\n\n@pytest.fixture\ndef fixed_now_ts() -> Timestamp:\n """\n Fixture emits fixed Timestamp.now()\n """\n return Timestamp( # pyright: ignore[reportGeneralTypeIssues]\n year=2021, month=1, day=1, hour=12, minute=4, second=13, microsecond=22\n )\n\n\n@pytest.fixture(params=tm.FLOAT_NUMPY_DTYPES)\ndef float_numpy_dtype(request):\n """\n Parameterized fixture for float dtypes.\n\n * float\n * 'float32'\n * 'float64'\n """\n return request.param\n\n\n@pytest.fixture(params=tm.FLOAT_EA_DTYPES)\ndef float_ea_dtype(request):\n """\n Parameterized fixture for float dtypes.\n\n * 'Float32'\n * 'Float64'\n """\n return request.param\n\n\n@pytest.fixture(params=tm.ALL_FLOAT_DTYPES)\ndef any_float_dtype(request):\n """\n Parameterized fixture for float dtypes.\n\n * float\n * 'float32'\n * 'float64'\n * 'Float32'\n * 'Float64'\n """\n return request.param\n\n\n@pytest.fixture(params=tm.COMPLEX_DTYPES)\ndef complex_dtype(request):\n """\n Parameterized fixture for complex dtypes.\n\n * 
complex\n * 'complex64'\n * 'complex128'\n """\n return request.param\n\n\n@pytest.fixture(params=tm.COMPLEX_FLOAT_DTYPES)\ndef complex_or_float_dtype(request):\n """\n Parameterized fixture for complex and numpy float dtypes.\n\n * complex\n * 'complex64'\n * 'complex128'\n * float\n * 'float32'\n * 'float64'\n """\n return request.param\n\n\n@pytest.fixture(params=tm.SIGNED_INT_NUMPY_DTYPES)\ndef any_signed_int_numpy_dtype(request):\n """\n Parameterized fixture for signed integer dtypes.\n\n * int\n * 'int8'\n * 'int16'\n * 'int32'\n * 'int64'\n """\n return request.param\n\n\n@pytest.fixture(params=tm.UNSIGNED_INT_NUMPY_DTYPES)\ndef any_unsigned_int_numpy_dtype(request):\n """\n Parameterized fixture for unsigned integer dtypes.\n\n * 'uint8'\n * 'uint16'\n * 'uint32'\n * 'uint64'\n """\n return request.param\n\n\n@pytest.fixture(params=tm.ALL_INT_NUMPY_DTYPES)\ndef any_int_numpy_dtype(request):\n """\n Parameterized fixture for any integer dtype.\n\n * int\n * 'int8'\n * 'uint8'\n * 'int16'\n * 'uint16'\n * 'int32'\n * 'uint32'\n * 'int64'\n * 'uint64'\n """\n return request.param\n\n\n@pytest.fixture(params=tm.ALL_INT_EA_DTYPES)\ndef any_int_ea_dtype(request):\n """\n Parameterized fixture for any nullable integer dtype.\n\n * 'UInt8'\n * 'Int8'\n * 'UInt16'\n * 'Int16'\n * 'UInt32'\n * 'Int32'\n * 'UInt64'\n * 'Int64'\n """\n return request.param\n\n\n@pytest.fixture(params=tm.ALL_INT_DTYPES)\ndef any_int_dtype(request):\n """\n Parameterized fixture for any nullable integer dtype.\n\n * int\n * 'int8'\n * 'uint8'\n * 'int16'\n * 'uint16'\n * 'int32'\n * 'uint32'\n * 'int64'\n * 'uint64'\n * 'UInt8'\n * 'Int8'\n * 'UInt16'\n * 'Int16'\n * 'UInt32'\n * 'Int32'\n * 'UInt64'\n * 'Int64'\n """\n return request.param\n\n\n@pytest.fixture(params=tm.ALL_INT_EA_DTYPES + tm.FLOAT_EA_DTYPES)\ndef any_numeric_ea_dtype(request):\n """\n Parameterized fixture for any nullable integer dtype and\n any float ea dtypes.\n\n * 'UInt8'\n * 'Int8'\n * 'UInt16'\n * 'Int16'\n * 'UInt32'\n * 'Int32'\n * 'UInt64'\n * 'Int64'\n * 'Float32'\n * 'Float64'\n """\n return request.param\n\n\n# Unsupported operand types for + ("List[Union[str, ExtensionDtype, dtype[Any],\n# Type[object]]]" and "List[str]")\n@pytest.fixture(\n params=tm.ALL_INT_EA_DTYPES\n + tm.FLOAT_EA_DTYPES\n + tm.ALL_INT_PYARROW_DTYPES_STR_REPR\n + tm.FLOAT_PYARROW_DTYPES_STR_REPR # type: ignore[operator]\n)\ndef any_numeric_ea_and_arrow_dtype(request):\n """\n Parameterized fixture for any nullable integer dtype and\n any float ea dtypes.\n\n * 'UInt8'\n * 'Int8'\n * 'UInt16'\n * 'Int16'\n * 'UInt32'\n * 'Int32'\n * 'UInt64'\n * 'Int64'\n * 'Float32'\n * 'Float64'\n * 'uint8[pyarrow]'\n * 'int8[pyarrow]'\n * 'uint16[pyarrow]'\n * 'int16[pyarrow]'\n * 'uint32[pyarrow]'\n * 'int32[pyarrow]'\n * 'uint64[pyarrow]'\n * 'int64[pyarrow]'\n * 'float32[pyarrow]'\n * 'float64[pyarrow]'\n """\n return request.param\n\n\n@pytest.fixture(params=tm.SIGNED_INT_EA_DTYPES)\ndef any_signed_int_ea_dtype(request):\n """\n Parameterized fixture for any signed nullable integer dtype.\n\n * 'Int8'\n * 'Int16'\n * 'Int32'\n * 'Int64'\n """\n return request.param\n\n\n@pytest.fixture(params=tm.ALL_REAL_NUMPY_DTYPES)\ndef any_real_numpy_dtype(request):\n """\n Parameterized fixture for any (purely) real numeric dtype.\n\n * int\n * 'int8'\n * 'uint8'\n * 'int16'\n * 'uint16'\n * 'int32'\n * 'uint32'\n * 'int64'\n * 'uint64'\n * float\n * 'float32'\n * 'float64'\n """\n return request.param\n\n\n@pytest.fixture(params=tm.ALL_REAL_DTYPES)\ndef 
any_real_numeric_dtype(request):\n """\n Parameterized fixture for any (purely) real numeric dtype.\n\n * int\n * 'int8'\n * 'uint8'\n * 'int16'\n * 'uint16'\n * 'int32'\n * 'uint32'\n * 'int64'\n * 'uint64'\n * float\n * 'float32'\n * 'float64'\n\n and associated ea dtypes.\n """\n return request.param\n\n\n@pytest.fixture(params=tm.ALL_NUMPY_DTYPES)\ndef any_numpy_dtype(request):\n """\n Parameterized fixture for all numpy dtypes.\n\n * bool\n * 'bool'\n * int\n * 'int8'\n * 'uint8'\n * 'int16'\n * 'uint16'\n * 'int32'\n * 'uint32'\n * 'int64'\n * 'uint64'\n * float\n * 'float32'\n * 'float64'\n * complex\n * 'complex64'\n * 'complex128'\n * str\n * 'str'\n * 'U'\n * bytes\n * 'bytes'\n * 'datetime64[ns]'\n * 'M8[ns]'\n * 'timedelta64[ns]'\n * 'm8[ns]'\n * object\n * 'object'\n """\n return request.param\n\n\n@pytest.fixture(params=tm.ALL_REAL_NULLABLE_DTYPES)\ndef any_real_nullable_dtype(request):\n """\n Parameterized fixture for all real dtypes that can hold NA.\n\n * float\n * 'float32'\n * 'float64'\n * 'Float32'\n * 'Float64'\n * 'UInt8'\n * 'UInt16'\n * 'UInt32'\n * 'UInt64'\n * 'Int8'\n * 'Int16'\n * 'Int32'\n * 'Int64'\n * 'uint8[pyarrow]'\n * 'uint16[pyarrow]'\n * 'uint32[pyarrow]'\n * 'uint64[pyarrow]'\n * 'int8[pyarrow]'\n * 'int16[pyarrow]'\n * 'int32[pyarrow]'\n * 'int64[pyarrow]'\n * 'float[pyarrow]'\n * 'double[pyarrow]'\n """\n return request.param\n\n\n@pytest.fixture(params=tm.ALL_NUMERIC_DTYPES)\ndef any_numeric_dtype(request):\n """\n Parameterized fixture for all numeric dtypes.\n\n * int\n * 'int8'\n * 'uint8'\n * 'int16'\n * 'uint16'\n * 'int32'\n * 'uint32'\n * 'int64'\n * 'uint64'\n * float\n * 'float32'\n * 'float64'\n * complex\n * 'complex64'\n * 'complex128'\n * 'UInt8'\n * 'Int8'\n * 'UInt16'\n * 'Int16'\n * 'UInt32'\n * 'Int32'\n * 'UInt64'\n * 'Int64'\n * 'Float32'\n * 'Float64'\n """\n return request.param\n\n\n# categoricals are handled separately\n_any_skipna_inferred_dtype = [\n ("string", ["a", np.nan, "c"]),\n ("string", ["a", pd.NA, "c"]),\n ("mixed", ["a", pd.NaT, "c"]), # pd.NaT not considered valid by is_string_array\n ("bytes", [b"a", np.nan, b"c"]),\n ("empty", [np.nan, np.nan, np.nan]),\n ("empty", []),\n ("mixed-integer", ["a", np.nan, 2]),\n ("mixed", ["a", np.nan, 2.0]),\n ("floating", [1.0, np.nan, 2.0]),\n ("integer", [1, np.nan, 2]),\n ("mixed-integer-float", [1, np.nan, 2.0]),\n ("decimal", [Decimal(1), np.nan, Decimal(2)]),\n ("boolean", [True, np.nan, False]),\n ("boolean", [True, pd.NA, False]),\n ("datetime64", [np.datetime64("2013-01-01"), np.nan, np.datetime64("2018-01-01")]),\n ("datetime", [Timestamp("20130101"), np.nan, Timestamp("20180101")]),\n ("date", [date(2013, 1, 1), np.nan, date(2018, 1, 1)]),\n ("complex", [1 + 1j, np.nan, 2 + 2j]),\n # The following dtype is commented out due to GH 23554\n # ('timedelta64', [np.timedelta64(1, 'D'),\n # np.nan, np.timedelta64(2, 'D')]),\n ("timedelta", [timedelta(1), np.nan, timedelta(2)]),\n ("time", [time(1), np.nan, time(2)]),\n ("period", [Period(2013), pd.NaT, Period(2018)]),\n ("interval", [Interval(0, 1), np.nan, Interval(0, 2)]),\n]\nids, _ = zip(*_any_skipna_inferred_dtype) # use inferred type as fixture-id\n\n\n@pytest.fixture(params=_any_skipna_inferred_dtype, ids=ids)\ndef any_skipna_inferred_dtype(request):\n """\n Fixture for all inferred dtypes from _libs.lib.infer_dtype\n\n The covered (inferred) types are:\n * 'string'\n * 'empty'\n * 'bytes'\n * 'mixed'\n * 'mixed-integer'\n * 'mixed-integer-float'\n * 'floating'\n * 'integer'\n * 'decimal'\n * 'boolean'\n * 
'datetime64'\n * 'datetime'\n * 'date'\n * 'timedelta'\n * 'time'\n * 'period'\n * 'interval'\n\n Returns\n -------\n inferred_dtype : str\n The string for the inferred dtype from _libs.lib.infer_dtype\n values : np.ndarray\n An array of object dtype that will be inferred to have\n `inferred_dtype`\n\n Examples\n --------\n >>> from pandas._libs import lib\n >>>\n >>> def test_something(any_skipna_inferred_dtype):\n ... inferred_dtype, values = any_skipna_inferred_dtype\n ... # will pass\n ... assert lib.infer_dtype(values, skipna=True) == inferred_dtype\n """\n inferred_dtype, values = request.param\n values = np.array(values, dtype=object) # object dtype to avoid casting\n\n # correctness of inference tested in tests/dtypes/test_inference.py\n return inferred_dtype, values\n\n\n# ----------------------------------------------------------------\n# Misc\n# ----------------------------------------------------------------\n@pytest.fixture\ndef ip():\n """\n Get an instance of IPython.InteractiveShell.\n\n Will raise a skip if IPython is not installed.\n """\n pytest.importorskip("IPython", minversion="6.0.0")\n from IPython.core.interactiveshell import InteractiveShell\n\n # GH#35711 make sure sqlite history file handle is not leaked\n from traitlets.config import Config # isort:skip\n\n c = Config()\n c.HistoryManager.hist_file = ":memory:"\n\n return InteractiveShell(config=c)\n\n\n@pytest.fixture(params=["bsr", "coo", "csc", "csr", "dia", "dok", "lil"])\ndef spmatrix(request):\n """\n Yields scipy sparse matrix classes.\n """\n sparse = pytest.importorskip("scipy.sparse")\n\n return getattr(sparse, request.param + "_matrix")\n\n\n@pytest.fixture(\n params=[\n getattr(pd.offsets, o)\n for o in pd.offsets.__all__\n if issubclass(getattr(pd.offsets, o), pd.offsets.Tick) and o != "Tick"\n ]\n)\ndef tick_classes(request):\n """\n Fixture for Tick based datetime offsets available for a time series.\n """\n return request.param\n\n\n@pytest.fixture(params=[None, lambda x: x])\ndef sort_by_key(request):\n """\n Simple fixture for testing keys in sorting methods.\n Tests None (no key) and the identity key.\n """\n return request.param\n\n\n@pytest.fixture(\n params=[\n ("foo", None, None),\n ("Egon", "Venkman", None),\n ("NCC1701D", "NCC1701D", "NCC1701D"),\n # possibly-matching NAs\n (np.nan, np.nan, np.nan),\n (np.nan, pd.NaT, None),\n (np.nan, pd.NA, None),\n (pd.NA, pd.NA, pd.NA),\n ]\n)\ndef names(request) -> tuple[Hashable, Hashable, Hashable]:\n """\n A 3-tuple of names, the first two for operands, the last for a result.\n """\n return request.param\n\n\n@pytest.fixture(params=[tm.setitem, tm.loc, tm.iloc])\ndef indexer_sli(request):\n """\n Parametrize over __setitem__, loc.__setitem__, iloc.__setitem__\n """\n return request.param\n\n\n@pytest.fixture(params=[tm.loc, tm.iloc])\ndef indexer_li(request):\n """\n Parametrize over loc.__getitem__, iloc.__getitem__\n """\n return request.param\n\n\n@pytest.fixture(params=[tm.setitem, tm.iloc])\ndef indexer_si(request):\n """\n Parametrize over __setitem__, iloc.__setitem__\n """\n return request.param\n\n\n@pytest.fixture(params=[tm.setitem, tm.loc])\ndef indexer_sl(request):\n """\n Parametrize over __setitem__, loc.__setitem__\n """\n return request.param\n\n\n@pytest.fixture(params=[tm.at, tm.loc])\ndef indexer_al(request):\n """\n Parametrize over at.__setitem__, loc.__setitem__\n """\n return request.param\n\n\n@pytest.fixture(params=[tm.iat, tm.iloc])\ndef indexer_ial(request):\n """\n Parametrize over iat.__setitem__, iloc.__setitem__\n 
"""\n return request.param\n\n\n@pytest.fixture\ndef using_array_manager() -> bool:\n """\n Fixture to check if the array manager is being used.\n """\n return _get_option("mode.data_manager", silent=True) == "array"\n\n\n@pytest.fixture\ndef using_copy_on_write() -> bool:\n """\n Fixture to check if Copy-on-Write is enabled.\n """\n return (\n pd.options.mode.copy_on_write is True\n and _get_option("mode.data_manager", silent=True) == "block"\n )\n\n\n@pytest.fixture\ndef warn_copy_on_write() -> bool:\n """\n Fixture to check if Copy-on-Write is in warning mode.\n """\n return (\n pd.options.mode.copy_on_write == "warn"\n and _get_option("mode.data_manager", silent=True) == "block"\n )\n\n\n@pytest.fixture\ndef using_infer_string() -> bool:\n """\n Fixture to check if infer string option is enabled.\n """\n return pd.options.future.infer_string is True\n\n\nwarsaws = ["Europe/Warsaw", "dateutil/Europe/Warsaw"]\nif zoneinfo is not None:\n warsaws.append(zoneinfo.ZoneInfo("Europe/Warsaw")) # type: ignore[arg-type]\n\n\n@pytest.fixture(params=warsaws)\ndef warsaw(request) -> str:\n """\n tzinfo for Europe/Warsaw using pytz, dateutil, or zoneinfo.\n """\n return request.param\n\n\n@pytest.fixture()\ndef arrow_string_storage():\n return ("pyarrow", "pyarrow_numpy")\n
.venv\Lib\site-packages\pandas\conftest.py
conftest.py
Python
51,045
0.75
0.128814
0.200354
python-kit
342
2024-05-18T19:31:40.963683
GPL-3.0
true
54752745b68dc133cb9ae23d82847366
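The fixtures in the conftest.py record above are consumed by naming them as test arguments; pytest then re-runs the test once per parameter. A minimal sketch of that pattern, assuming the fixture names defined in this conftest.py (the test body itself is illustrative and not taken from the pandas test suite):

import numpy as np
import pandas as pd


def test_construct_with_any_int_numpy_dtype(any_int_numpy_dtype):
    # pytest injects one dtype per run: int, 'int8', 'uint8', ..., 'uint64'
    ser = pd.Series([1, 2, 3], dtype=any_int_numpy_dtype)
    assert ser.dtype == np.dtype(any_int_numpy_dtype)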
[build-system]\n# Minimum requirements for the build system to execute.\n# See https://github.com/scipy/scipy/pull/12940 for the AIX issue.\nrequires = [\n "meson-python>=0.13.1",\n "meson>=1.2.1,<2",\n "wheel",\n "Cython<4.0.0a0", # Note: sync with setup.py, environment.yml and asv.conf.json\n # Force numpy higher than 2.0rc1, so that built wheels are compatible\n # with both numpy 1 and 2\n "numpy>=2.0",\n "versioneer[toml]"\n]\n\nbuild-backend = "mesonpy"\n\n[project]\nname = 'pandas'\ndynamic = [\n 'version'\n]\ndescription = 'Powerful data structures for data analysis, time series, and statistics'\nreadme = 'README.md'\nauthors = [\n { name = 'The Pandas Development Team', email='pandas-dev@python.org' },\n]\nlicense = {file = 'LICENSE'}\nrequires-python = '>=3.9'\ndependencies = [\n "numpy>=1.22.4; python_version<'3.11'",\n "numpy>=1.23.2; python_version=='3.11'",\n "numpy>=1.26.0; python_version>='3.12'",\n "python-dateutil>=2.8.2",\n "pytz>=2020.1",\n "tzdata>=2022.7"\n]\nclassifiers = [\n 'Development Status :: 5 - Production/Stable',\n 'Environment :: Console',\n 'Intended Audience :: Science/Research',\n 'License :: OSI Approved :: BSD License',\n 'Operating System :: OS Independent',\n 'Programming Language :: Cython',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3 :: Only',\n 'Programming Language :: Python :: 3.9',\n 'Programming Language :: Python :: 3.10',\n 'Programming Language :: Python :: 3.11',\n 'Programming Language :: Python :: 3.12',\n 'Programming Language :: Python :: 3.13',\n 'Topic :: Scientific/Engineering'\n]\n\n[project.urls]\nhomepage = 'https://pandas.pydata.org'\ndocumentation = 'https://pandas.pydata.org/docs/'\nrepository = 'https://github.com/pandas-dev/pandas'\n\n[project.entry-points."pandas_plotting_backends"]\nmatplotlib = "pandas:plotting._matplotlib"\n\n[project.optional-dependencies]\ntest = ['hypothesis>=6.46.1', 'pytest>=7.3.2', 'pytest-xdist>=2.2.0']\npyarrow = ['pyarrow>=10.0.1']\nperformance = ['bottleneck>=1.3.6', 'numba>=0.56.4', 'numexpr>=2.8.4']\ncomputation = ['scipy>=1.10.0', 'xarray>=2022.12.0']\nfss = ['fsspec>=2022.11.0']\naws = ['s3fs>=2022.11.0']\ngcp = ['gcsfs>=2022.11.0', 'pandas-gbq>=0.19.0']\nexcel = ['odfpy>=1.4.1', 'openpyxl>=3.1.0', 'python-calamine>=0.1.7', 'pyxlsb>=1.0.10', 'xlrd>=2.0.1', 'xlsxwriter>=3.0.5']\nparquet = ['pyarrow>=10.0.1']\nfeather = ['pyarrow>=10.0.1']\nhdf5 = [# blosc only available on conda (https://github.com/Blosc/python-blosc/issues/297)\n #'blosc>=1.20.1',\n 'tables>=3.8.0']\nspss = ['pyreadstat>=1.2.0']\npostgresql = ['SQLAlchemy>=2.0.0', 'psycopg2>=2.9.6', 'adbc-driver-postgresql>=0.8.0']\nmysql = ['SQLAlchemy>=2.0.0', 'pymysql>=1.0.2']\nsql-other = ['SQLAlchemy>=2.0.0', 'adbc-driver-postgresql>=0.8.0', 'adbc-driver-sqlite>=0.8.0']\nhtml = ['beautifulsoup4>=4.11.2', 'html5lib>=1.1', 'lxml>=4.9.2']\nxml = ['lxml>=4.9.2']\nplot = ['matplotlib>=3.6.3']\noutput-formatting = ['jinja2>=3.1.2', 'tabulate>=0.9.0']\nclipboard = ['PyQt5>=5.15.9', 'qtpy>=2.3.0']\ncompression = ['zstandard>=0.19.0']\nconsortium-standard = ['dataframe-api-compat>=0.1.7']\nall = ['adbc-driver-postgresql>=0.8.0',\n 'adbc-driver-sqlite>=0.8.0',\n 'beautifulsoup4>=4.11.2',\n # blosc only available on conda (https://github.com/Blosc/python-blosc/issues/297)\n #'blosc>=1.21.3',\n 'bottleneck>=1.3.6',\n 'dataframe-api-compat>=0.1.7',\n 'fastparquet>=2022.12.0',\n 'fsspec>=2022.11.0',\n 'gcsfs>=2022.11.0',\n 'html5lib>=1.1',\n 'hypothesis>=6.46.1',\n 
'jinja2>=3.1.2',\n 'lxml>=4.9.2',\n 'matplotlib>=3.6.3',\n 'numba>=0.56.4',\n 'numexpr>=2.8.4',\n 'odfpy>=1.4.1',\n 'openpyxl>=3.1.0',\n 'pandas-gbq>=0.19.0',\n 'psycopg2>=2.9.6',\n 'pyarrow>=10.0.1',\n 'pymysql>=1.0.2',\n 'PyQt5>=5.15.9',\n 'pyreadstat>=1.2.0',\n 'pytest>=7.3.2',\n 'pytest-xdist>=2.2.0',\n 'python-calamine>=0.1.7',\n 'pyxlsb>=1.0.10',\n 'qtpy>=2.3.0',\n 'scipy>=1.10.0',\n 's3fs>=2022.11.0',\n 'SQLAlchemy>=2.0.0',\n 'tables>=3.8.0',\n 'tabulate>=0.9.0',\n 'xarray>=2022.12.0',\n 'xlrd>=2.0.1',\n 'xlsxwriter>=3.0.5',\n 'zstandard>=0.19.0']\n\n# TODO: Remove after setuptools support is dropped.\n[tool.setuptools]\ninclude-package-data = true\n\n[tool.setuptools.packages.find]\ninclude = ["pandas", "pandas.*"]\nnamespaces = false\n\n[tool.setuptools.exclude-package-data]\n"*" = ["*.c", "*.h"]\n\n# See the docstring in versioneer.py for instructions. Note that you must\n# re-run 'versioneer.py setup' after changing this section, and commit the\n# resulting files.\n[tool.versioneer]\nVCS = "git"\nstyle = "pep440"\nversionfile_source = "pandas/_version.py"\nversionfile_build = "pandas/_version.py"\ntag_prefix = "v"\nparentdir_prefix = "pandas-"\n\n[tool.meson-python.args]\nsetup = ['--vsenv'] # For Windows\n\n[tool.cibuildwheel]\nskip = "cp36-* cp37-* cp38-* pp* *_i686 *_ppc64le *_s390x"\nbuild-verbosity = "3"\nenvironment = {LDFLAGS="-Wl,--strip-all"}\n# pytz 2024.2 causing some failures\ntest-requires = "hypothesis>=6.46.1 pytest>=7.3.2 pytest-xdist>=2.2.0 pytz<2024.2"\ntest-command = """\n PANDAS_CI='1' python -c 'import pandas as pd; \\n pd.test(extra_args=["-m not clipboard and not single_cpu and not slow and not network and not db", "-n 2", "--no-strict-data-files"]); \\n pd.test(extra_args=["-m not clipboard and single_cpu and not slow and not network and not db", "--no-strict-data-files"]);' \\n """\nfree-threaded-support = true\nbefore-build = "PACKAGE_DIR={package} bash {package}/scripts/cibw_before_build.sh"\n\n[tool.cibuildwheel.windows]\nbefore-build = "pip install delvewheel && bash {package}/scripts/cibw_before_build.sh"\nrepair-wheel-command = "delvewheel repair -w {dest_dir} {wheel}"\n\n[[tool.cibuildwheel.overrides]]\nselect = "*-manylinux_aarch64*"\ntest-command = """\n PANDAS_CI='1' python -c 'import pandas as pd; \\n pd.test(extra_args=["-m not clipboard and not single_cpu and not slow and not network and not db and not fails_arm_wheels", "-n 2", "--no-strict-data-files"]); \\n pd.test(extra_args=["-m not clipboard and single_cpu and not slow and not network and not db", "--no-strict-data-files"]);' \\n """\n\n[[tool.cibuildwheel.overrides]]\nselect = "*-musllinux*"\nbefore-test = "apk update && apk add musl-locales"\n\n[[tool.cibuildwheel.overrides]]\nselect = "*-win*"\n# We test separately for Windows, since we use\n# the windowsservercore docker image to check if any dlls are\n# missing from the wheel\ntest-command = ""\n\n[[tool.cibuildwheel.overrides]]\n# Don't strip wheels on macOS.\n# macOS doesn't support stripping wheels with linker\n# https://github.com/MacPython/numpy-wheels/pull/87#issuecomment-624878264\nselect = "*-macosx*"\nenvironment = {CFLAGS="-g0"}\n\n[tool.black]\ntarget-version = ['py39', 'py310']\nrequired-version = '23.11.0'\nexclude = '''\n(\n asv_bench/env\n | \.egg\n | \.git\n | \.hg\n | \.mypy_cache\n | \.nox\n | \.tox\n | \.venv\n | _build\n | buck-out\n | build\n | dist\n | setup.py\n)\n'''\n\n[tool.ruff]\nline-length = 88\ntarget-version = "py310"\nfix = true\nunfixable = []\ntyping-modules = ["pandas._typing"]\n\nselect = [\n # 
pyflakes\n "F",\n # pycodestyle\n "E", "W",\n # flake8-2020\n "YTT",\n # flake8-bugbear\n "B",\n # flake8-quotes\n "Q",\n # flake8-debugger\n "T10",\n # flake8-gettext\n "INT",\n # pylint\n "PL",\n # misc lints\n "PIE",\n # flake8-pyi\n "PYI",\n # tidy imports\n "TID",\n # implicit string concatenation\n "ISC",\n # type-checking imports\n "TCH",\n # comprehensions\n "C4",\n # pygrep-hooks\n "PGH",\n # Ruff-specific rules\n "RUF",\n # flake8-bandit: exec-builtin\n "S102",\n # numpy-legacy-random\n "NPY002",\n # Perflint\n "PERF",\n # flynt\n "FLY",\n # flake8-logging-format\n "G",\n # flake8-future-annotations\n "FA",\n]\n\nignore = [\n ### Intentionally disabled\n # space before : (needed for how black formats slicing)\n "E203",\n # module level import not at top of file\n "E402",\n # do not assign a lambda expression, use a def\n "E731",\n # line break before binary operator\n # "W503", # not yet implemented\n # line break after binary operator\n # "W504", # not yet implemented\n # controversial\n "B006",\n # controversial\n "B007",\n # controversial\n "B008",\n # setattr is used to side-step mypy\n "B009",\n # getattr is used to side-step mypy\n "B010",\n # tests use assert False\n "B011",\n # tests use comparisons but not their returned value\n "B015",\n # false positives\n "B019",\n # Loop control variable overrides iterable it iterates\n "B020",\n # Function definition does not bind loop variable\n "B023",\n # Functions defined inside a loop must not use variables redefined in the loop\n # "B301", # not yet implemented\n # Only works with python >=3.10\n "B905",\n # Too many arguments to function call\n "PLR0913",\n # Too many returns\n "PLR0911",\n # Too many branches\n "PLR0912",\n # Too many statements\n "PLR0915",\n # Redefined loop name\n "PLW2901",\n # Global statements are discouraged\n "PLW0603",\n # Docstrings should not be included in stubs\n "PYI021",\n # Use `typing.NamedTuple` instead of `collections.namedtuple`\n "PYI024",\n # No builtin `eval()` allowed\n "PGH001",\n # compare-to-empty-string\n "PLC1901",\n # while int | float can be shortened to float, the former is more explicit\n "PYI041",\n # incorrect-dict-iterator, flags valid Series.items usage\n "PERF102",\n # try-except-in-loop, becomes useless in Python 3.11\n "PERF203",\n\n\n ### TODO: Enable gradually\n # Useless statement\n "B018",\n # Within an except clause, raise exceptions with ...\n "B904",\n # Magic number\n "PLR2004",\n # comparison-with-itself\n "PLR0124",\n # Consider `elif` instead of `else` then `if` to remove indentation level\n "PLR5501",\n # collection-literal-concatenation\n "RUF005",\n # pairwise-over-zipped (>=PY310 only)\n "RUF007",\n # explicit-f-string-type-conversion\n "RUF010",\n # mutable-class-default\n "RUF012"\n]\n\nexclude = [\n "doc/sphinxext/*.py",\n "doc/build/*.py",\n "doc/temp/*.py",\n ".eggs/*.py",\n # vendored files\n "pandas/util/version/*",\n "pandas/io/clipboard/__init__.py",\n # exclude asv benchmark environments from linting\n "env",\n]\n\n[tool.ruff.per-file-ignores]\n# relative imports allowed for asv_bench\n"asv_bench/*" = ["TID", "NPY002"]\n# to be enabled gradually\n"pandas/core/*" = ["PLR5501"]\n"pandas/tests/*" = ["B028", "FLY"]\n"scripts/*" = ["B028"]\n# Keep this one enabled\n"pandas/_typing.py" = ["TCH"]\n\n[tool.pylint.messages_control]\nmax-line-length = 88\ndisable = [\n # intentionally turned off\n "bad-mcs-classmethod-argument",\n "broad-except",\n "c-extension-no-member",\n "comparison-with-itself",\n "consider-using-enumerate",\n "import-error",\n 
"import-outside-toplevel",\n "invalid-name",\n "invalid-unary-operand-type",\n "line-too-long",\n "no-else-continue",\n "no-else-raise",\n "no-else-return",\n "no-member",\n "no-name-in-module",\n "not-an-iterable",\n "overridden-final-method",\n "pointless-statement",\n "redundant-keyword-arg",\n "singleton-comparison",\n "too-many-ancestors",\n "too-many-arguments",\n "too-many-boolean-expressions",\n "too-many-branches",\n "too-many-function-args",\n "too-many-instance-attributes",\n "too-many-locals",\n "too-many-nested-blocks",\n "too-many-public-methods",\n "too-many-return-statements",\n "too-many-statements",\n "unexpected-keyword-arg",\n "ungrouped-imports",\n "unsubscriptable-object",\n "unsupported-assignment-operation",\n "unsupported-membership-test",\n "unused-import",\n "use-dict-literal",\n "use-implicit-booleaness-not-comparison",\n "use-implicit-booleaness-not-len",\n "wrong-import-order",\n "wrong-import-position",\n "redefined-loop-name",\n\n # misc\n "abstract-class-instantiated",\n "no-value-for-parameter",\n "undefined-variable",\n "unpacking-non-sequence",\n "used-before-assignment",\n\n # pylint type "C": convention, for programming standard violation\n "missing-class-docstring",\n "missing-function-docstring",\n "missing-module-docstring",\n "superfluous-parens",\n "too-many-lines",\n "unidiomatic-typecheck",\n "unnecessary-dunder-call",\n "unnecessary-lambda-assignment",\n\n # pylint type "R": refactor, for bad code smell\n "consider-using-with",\n "cyclic-import",\n "duplicate-code",\n "inconsistent-return-statements",\n "redefined-argument-from-local",\n "too-few-public-methods",\n\n # pylint type "W": warning, for python specific problems\n "abstract-method",\n "arguments-differ",\n "arguments-out-of-order",\n "arguments-renamed",\n "attribute-defined-outside-init",\n "broad-exception-raised",\n "comparison-with-callable",\n "dangerous-default-value",\n "deprecated-module",\n "eval-used",\n "expression-not-assigned",\n "fixme",\n "global-statement",\n "invalid-overridden-method",\n "keyword-arg-before-vararg",\n "possibly-unused-variable",\n "protected-access",\n "raise-missing-from",\n "redefined-builtin",\n "redefined-outer-name",\n "self-cls-assignment",\n "signature-differs",\n "super-init-not-called",\n "try-except-raise",\n "unnecessary-lambda",\n "unused-argument",\n "unused-variable",\n "using-constant-test",\n\n # disabled on 2.3.x branch\n "consider-using-in",\n "simplifiable-if-expression",\n]\n\n[tool.pytest.ini_options]\n# sync minversion with pyproject.toml & install.rst\nminversion = "7.3.2"\naddopts = "--strict-markers --strict-config --capture=no --durations=30 --junitxml=test-data.xml"\nempty_parameter_set_mark = "fail_at_collect"\nxfail_strict = true\ntestpaths = "pandas"\ndoctest_optionflags = [\n "NORMALIZE_WHITESPACE",\n "IGNORE_EXCEPTION_DETAIL",\n "ELLIPSIS",\n]\nfilterwarnings = [\n "error:::pandas",\n "error::ResourceWarning",\n "error::pytest.PytestUnraisableExceptionWarning",\n # TODO(PY311-minimum): Specify EncodingWarning\n # Ignore 3rd party EncodingWarning but raise on pandas'\n "ignore:.*encoding.* argument not specified",\n "error:.*encoding.* argument not specified::pandas",\n "ignore:.*ssl.SSLSocket:pytest.PytestUnraisableExceptionWarning",\n "ignore:.*ssl.SSLSocket:ResourceWarning",\n # GH 44844: Can remove once minimum matplotlib version >= 3.7\n "ignore:.*FileIO:pytest.PytestUnraisableExceptionWarning",\n "ignore:.*BufferedRandom:ResourceWarning",\n "ignore::ResourceWarning:asyncio",\n # From plotting doctests\n 
"ignore:More than 20 figures have been opened:RuntimeWarning",\n # Will be fixed in numba 0.56: https://github.com/numba/numba/issues/7758\n "ignore:`np.MachAr` is deprecated:DeprecationWarning:numba",\n "ignore:.*urllib3:DeprecationWarning:botocore",\n "ignore:Setuptools is replacing distutils.:UserWarning:_distutils_hack",\n # https://github.com/PyTables/PyTables/issues/822\n "ignore:a closed node found in the registry:UserWarning:tables",\n "ignore:`np.object` is a deprecated:DeprecationWarning:tables",\n "ignore:tostring:DeprecationWarning:tables",\n "ignore:distutils Version classes are deprecated:DeprecationWarning:pandas_datareader",\n "ignore:distutils Version classes are deprecated:DeprecationWarning:numexpr",\n "ignore:distutils Version classes are deprecated:DeprecationWarning:fastparquet",\n "ignore:distutils Version classes are deprecated:DeprecationWarning:fsspec",\n # Can be removed once https://github.com/numpy/numpy/pull/24794 is merged\n "ignore:.*In the future `np.long` will be defined as.*:FutureWarning",\n]\njunit_family = "xunit2"\nmarkers = [\n "single_cpu: tests that should run on a single cpu only",\n "slow: mark a test as slow",\n "network: mark a test as network",\n "db: tests requiring a database (mysql or postgres)",\n "clipboard: mark a pd.read_clipboard test",\n "arm_slow: mark a test as slow for arm64 architecture",\n "skip_ubsan: Tests known to fail UBSAN check",\n # TODO: someone should investigate this ...\n # these tests only fail in the wheel builder and don't fail in regular\n # ARM CI\n "fails_arm_wheels: Tests that fail in the ARM wheel build only",\n]\n\n[tool.mypy]\n# Import discovery\nmypy_path = "typings"\nfiles = ["pandas", "typings"]\nnamespace_packages = false\nexplicit_package_bases = false\nignore_missing_imports = true\nfollow_imports = "normal"\nfollow_imports_for_stubs = false\nno_site_packages = false\nno_silence_site_packages = false\n# Platform configuration\npython_version = "3.11"\nplatform = "linux-64"\n# Disallow dynamic typing\ndisallow_any_unimported = false # TODO\ndisallow_any_expr = false # TODO\ndisallow_any_decorated = false # TODO\ndisallow_any_explicit = false # TODO\ndisallow_any_generics = false # TODO\ndisallow_subclassing_any = false # TODO\n# Untyped definitions and calls\ndisallow_untyped_calls = true\ndisallow_untyped_defs = true\ndisallow_incomplete_defs = true\ncheck_untyped_defs = true\ndisallow_untyped_decorators = true\n# None and Optional handling\nno_implicit_optional = true\nstrict_optional = true\n# Configuring warnings\nwarn_redundant_casts = true\nwarn_unused_ignores = true\nwarn_no_return = true\nwarn_return_any = false # TODO\nwarn_unreachable = false # GH#27396\n# Suppressing errors\nignore_errors = false\nenable_error_code = "ignore-without-code"\n# Miscellaneous strictness flags\nallow_untyped_globals = false\nallow_redefinition = false\nlocal_partial_types = false\nimplicit_reexport = true\nstrict_equality = true\n# Configuring error messages\nshow_error_context = false\nshow_column_numbers = false\nshow_error_codes = true\n\n[[tool.mypy.overrides]]\nmodule = [\n "pandas._config.config", # TODO\n "pandas._libs.*",\n "pandas._testing.*", # TODO\n "pandas.arrays", # TODO\n "pandas.compat.numpy.function", # TODO\n "pandas.compat._optional", # TODO\n "pandas.compat.compressors", # TODO\n "pandas.compat.pickle_compat", # TODO\n "pandas.core._numba.executor", # TODO\n "pandas.core.array_algos.datetimelike_accumulations", # TODO\n "pandas.core.array_algos.masked_accumulations", # TODO\n 
"pandas.core.array_algos.masked_reductions", # TODO\n "pandas.core.array_algos.putmask", # TODO\n "pandas.core.array_algos.quantile", # TODO\n "pandas.core.array_algos.replace", # TODO\n "pandas.core.array_algos.take", # TODO\n "pandas.core.arrays.*", # TODO\n "pandas.core.computation.*", # TODO\n "pandas.core.dtypes.astype", # TODO\n "pandas.core.dtypes.cast", # TODO\n "pandas.core.dtypes.common", # TODO\n "pandas.core.dtypes.concat", # TODO\n "pandas.core.dtypes.dtypes", # TODO\n "pandas.core.dtypes.generic", # TODO\n "pandas.core.dtypes.inference", # TODO\n "pandas.core.dtypes.missing", # TODO\n "pandas.core.groupby.categorical", # TODO\n "pandas.core.groupby.generic", # TODO\n "pandas.core.groupby.grouper", # TODO\n "pandas.core.groupby.groupby", # TODO\n "pandas.core.groupby.ops", # TODO\n "pandas.core.indexers.*", # TODO\n "pandas.core.indexes.*", # TODO\n "pandas.core.interchange.column", # TODO\n "pandas.core.interchange.dataframe_protocol", # TODO\n "pandas.core.interchange.from_dataframe", # TODO\n "pandas.core.internals.*", # TODO\n "pandas.core.methods.*", # TODO\n "pandas.core.ops.array_ops", # TODO\n "pandas.core.ops.common", # TODO\n "pandas.core.ops.invalid", # TODO\n "pandas.core.ops.mask_ops", # TODO\n "pandas.core.ops.missing", # TODO\n "pandas.core.reshape.*", # TODO\n "pandas.core.strings.*", # TODO\n "pandas.core.tools.*", # TODO\n "pandas.core.window.common", # TODO\n "pandas.core.window.ewm", # TODO\n "pandas.core.window.expanding", # TODO\n "pandas.core.window.numba_", # TODO\n "pandas.core.window.online", # TODO\n "pandas.core.window.rolling", # TODO\n "pandas.core.accessor", # TODO\n "pandas.core.algorithms", # TODO\n "pandas.core.apply", # TODO\n "pandas.core.arraylike", # TODO\n "pandas.core.base", # TODO\n "pandas.core.common", # TODO\n "pandas.core.config_init", # TODO\n "pandas.core.construction", # TODO\n "pandas.core.flags", # TODO\n "pandas.core.frame", # TODO\n "pandas.core.generic", # TODO\n "pandas.core.indexing", # TODO\n "pandas.core.missing", # TODO\n "pandas.core.nanops", # TODO\n "pandas.core.resample", # TODO\n "pandas.core.roperator", # TODO\n "pandas.core.sample", # TODO\n "pandas.core.series", # TODO\n "pandas.core.sorting", # TODO\n "pandas.errors", # TODO\n "pandas.io.clipboard", # TODO\n "pandas.io.excel._base", # TODO\n "pandas.io.excel._odfreader", # TODO\n "pandas.io.excel._odswriter", # TODO\n "pandas.io.excel._openpyxl", # TODO\n "pandas.io.excel._pyxlsb", # TODO\n "pandas.io.excel._xlrd", # TODO\n "pandas.io.excel._xlsxwriter", # TODO\n "pandas.io.formats.console", # TODO\n "pandas.io.formats.css", # TODO\n "pandas.io.formats.excel", # TODO\n "pandas.io.formats.format", # TODO\n "pandas.io.formats.info", # TODO\n "pandas.io.formats.printing", # TODO\n "pandas.io.formats.style", # TODO\n "pandas.io.formats.style_render", # TODO\n "pandas.io.formats.xml", # TODO\n "pandas.io.json.*", # TODO\n "pandas.io.parsers.*", # TODO\n "pandas.io.sas.sas_xport", # TODO\n "pandas.io.sas.sas7bdat", # TODO\n "pandas.io.clipboards", # TODO\n "pandas.io.common", # TODO\n "pandas.io.gbq", # TODO\n "pandas.io.html", # TODO\n "pandas.io.gbq", # TODO\n "pandas.io.parquet", # TODO\n "pandas.io.pytables", # TODO\n "pandas.io.sql", # TODO\n "pandas.io.stata", # TODO\n "pandas.io.xml", # TODO\n "pandas.plotting.*", # TODO\n "pandas.tests.*",\n "pandas.tseries.frequencies", # TODO\n "pandas.tseries.holiday", # TODO\n "pandas.util._decorators", # TODO\n "pandas.util._doctools", # TODO\n "pandas.util._print_versions", # TODO\n "pandas.util._test_decorators", # 
TODO\n "pandas.util._validators", # TODO\n "pandas.util", # TODO\n "pandas._version",\n "pandas.conftest",\n "pandas"\n]\ndisallow_untyped_calls = false\ndisallow_untyped_defs = false\ndisallow_incomplete_defs = false\n\n[[tool.mypy.overrides]]\nmodule = [\n "pandas.tests.*",\n "pandas._version",\n "pandas.io.clipboard",\n]\ncheck_untyped_defs = false\n\n[[tool.mypy.overrides]]\nmodule = [\n "pandas.tests.apply.test_series_apply",\n "pandas.tests.arithmetic.conftest",\n "pandas.tests.arrays.sparse.test_combine_concat",\n "pandas.tests.dtypes.test_common",\n "pandas.tests.frame.methods.test_to_records",\n "pandas.tests.groupby.test_rank",\n "pandas.tests.groupby.transform.test_transform",\n "pandas.tests.indexes.interval.test_interval",\n "pandas.tests.indexing.test_categorical",\n "pandas.tests.io.excel.test_writers",\n "pandas.tests.reductions.test_reductions",\n "pandas.tests.test_expressions",\n]\nignore_errors = true\n\n# To be kept consistent with "Import Formatting" section in contributing.rst\n[tool.isort]\nknown_pre_libs = "pandas._config"\nknown_pre_core = ["pandas._libs", "pandas._typing", "pandas.util._*", "pandas.compat", "pandas.errors"]\nknown_dtypes = "pandas.core.dtypes"\nknown_post_core = ["pandas.tseries", "pandas.io", "pandas.plotting"]\nsections = ["FUTURE", "STDLIB", "THIRDPARTY" ,"PRE_LIBS" , "PRE_CORE", "DTYPES", "FIRSTPARTY", "POST_CORE", "LOCALFOLDER"]\nprofile = "black"\ncombine_as_imports = true\nforce_grid_wrap = 2\nforce_sort_within_sections = true\nskip_glob = "env"\nskip = "pandas/__init__.py"\n\n[tool.pyright]\npythonVersion = "3.11"\ntypeCheckingMode = "basic"\nuseLibraryCodeForTypes = false\ninclude = ["pandas", "typings"]\nexclude = ["pandas/tests", "pandas/io/clipboard", "pandas/util/version", "pandas/core/_numba/extensions.py"]\n# enable subset of "strict"\nreportDuplicateImport = true\nreportInconsistentConstructor = true\nreportInvalidStubStatement = true\nreportOverlappingOverload = true\nreportPropertyTypeMismatch = true\nreportUntypedClassDecorator = true\nreportUntypedFunctionDecorator = true\nreportUntypedNamedTuple = true\nreportUnusedImport = true\ndisableBytesTypePromotions = true\n# disable subset of "basic"\nreportGeneralTypeIssues = false\nreportMissingModuleSource = false\nreportOptionalCall = false\nreportOptionalIterable = false\nreportOptionalMemberAccess = false\nreportOptionalOperand = false\nreportOptionalSubscript = false\nreportPrivateImportUsage = false\nreportUnboundVariable = false\n\n[tool.coverage.run]\nbranch = true\nomit = ["pandas/_typing.py", "pandas/_version.py"]\nplugins = ["Cython.Coverage"]\nsource = ["pandas"]\n\n[tool.coverage.report]\nignore_errors = false\nshow_missing = true\nomit = ["pandas/_version.py"]\nexclude_lines = [\n # Have to re-enable the standard pragma\n "pragma: no cover",\n # Don't complain about missing debug-only code:s\n "def __repr__",\n "if self.debug",\n # Don't complain if tests don't hit defensive assertion code:\n "raise AssertionError",\n "raise NotImplementedError",\n "AbstractMethodError",\n # Don't complain if non-runnable code isn't run:\n "if 0:",\n "if __name__ == .__main__.:",\n "if TYPE_CHECKING:",\n]\n\n[tool.coverage.html]\ndirectory = "coverage_html_report"\n\n[tool.codespell]\nignore-words-list = "blocs, coo, hist, nd, sav, ser, recuse, nin, timere, expec, expecs"\nignore-regex = 'https://([\w/\.])+'\n
.venv\Lib\site-packages\pandas\pyproject.toml
pyproject.toml
Other
24,595
0.95
0.040441
0.157419
awesome-app
812
2024-02-14T00:17:47.006415
BSD-3-Clause
false
3fc8866c8351852752c9e44fac1c717b
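The [project.optional-dependencies] table in the pyproject.toml record above groups extras such as 'excel', 'parquet', and 'all'. A minimal sketch of listing those groups with the standard-library TOML parser; the file path is an assumption and tomllib requires Python 3.11+:

import tomllib  # standard library on Python 3.11+
from pathlib import Path

# Assumed location of the pyproject.toml shown above.
with Path("pyproject.toml").open("rb") as fh:
    config = tomllib.load(fh)

extras = config["project"]["optional-dependencies"]
for name, requirements in sorted(extras.items()):
    print(f"{name}: {len(requirements)} requirement(s)")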
"""\nPublic testing utility functions.\n"""\n\n\nfrom pandas._testing import (\n assert_extension_array_equal,\n assert_frame_equal,\n assert_index_equal,\n assert_series_equal,\n)\n\n__all__ = [\n "assert_extension_array_equal",\n "assert_frame_equal",\n "assert_series_equal",\n "assert_index_equal",\n]\n
.venv\Lib\site-packages\pandas\testing.py
testing.py
Python
313
0.85
0
0
awesome-app
886
2023-07-22T18:52:45.386999
MIT
true
65ad0fd5a572b703478ff4600dc0c870
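The testing.py record above only re-exports four assertion helpers from pandas._testing. A small usage sketch with assert_frame_equal, whose check_dtype flag controls whether frames with equal values but different dtypes compare as equal:

import pandas as pd
import pandas.testing as tm

left = pd.DataFrame({"a": [1, 2, 3]})          # int64 column
right = pd.DataFrame({"a": [1.0, 2.0, 3.0]})   # float64 column

try:
    tm.assert_frame_equal(left, right)          # strict: dtypes differ, raises
except AssertionError:
    print("strict comparison failed on dtype")

tm.assert_frame_equal(left, right, check_dtype=False)  # values match, passes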
from __future__ import annotations\n\nfrom collections.abc import (\n Hashable,\n Iterator,\n Mapping,\n MutableMapping,\n Sequence,\n)\nfrom datetime import (\n date,\n datetime,\n timedelta,\n tzinfo,\n)\nfrom os import PathLike\nimport sys\nfrom typing import (\n TYPE_CHECKING,\n Any,\n Callable,\n Literal,\n Optional,\n Protocol,\n Type as type_t,\n TypeVar,\n Union,\n overload,\n)\n\nimport numpy as np\n\n# To prevent import cycles place any internal imports in the branch below\n# and use a string literal forward reference to it in subsequent types\n# https://mypy.readthedocs.io/en/latest/common_issues.html#import-cycles\nif TYPE_CHECKING:\n import numpy.typing as npt\n\n from pandas._libs import (\n NaTType,\n Period,\n Timedelta,\n Timestamp,\n )\n from pandas._libs.tslibs import BaseOffset\n\n from pandas.core.dtypes.dtypes import ExtensionDtype\n\n from pandas import Interval\n from pandas.arrays import (\n DatetimeArray,\n TimedeltaArray,\n )\n from pandas.core.arrays.base import ExtensionArray\n from pandas.core.frame import DataFrame\n from pandas.core.generic import NDFrame\n from pandas.core.groupby.generic import (\n DataFrameGroupBy,\n GroupBy,\n SeriesGroupBy,\n )\n from pandas.core.indexes.base import Index\n from pandas.core.internals import (\n ArrayManager,\n BlockManager,\n SingleArrayManager,\n SingleBlockManager,\n )\n from pandas.core.resample import Resampler\n from pandas.core.series import Series\n from pandas.core.window.rolling import BaseWindow\n\n from pandas.io.formats.format import EngFormatter\n from pandas.tseries.holiday import AbstractHolidayCalendar\n\n ScalarLike_co = Union[\n int,\n float,\n complex,\n str,\n bytes,\n np.generic,\n ]\n\n # numpy compatible types\n NumpyValueArrayLike = Union[ScalarLike_co, npt.ArrayLike]\n # Name "npt._ArrayLikeInt_co" is not defined [name-defined]\n NumpySorter = Optional[npt._ArrayLikeInt_co] # type: ignore[name-defined]\n\n from typing import SupportsIndex\n\n if sys.version_info >= (3, 10):\n from typing import TypeGuard # pyright: ignore[reportUnusedImport]\n else:\n from typing_extensions import TypeGuard # pyright: ignore[reportUnusedImport]\n\n if sys.version_info >= (3, 11):\n from typing import Self # pyright: ignore[reportUnusedImport]\n else:\n from typing_extensions import Self # pyright: ignore[reportUnusedImport]\nelse:\n npt: Any = None\n Self: Any = None\n TypeGuard: Any = None\n\nHashableT = TypeVar("HashableT", bound=Hashable)\nMutableMappingT = TypeVar("MutableMappingT", bound=MutableMapping)\n\n# array-like\n\nArrayLike = Union["ExtensionArray", np.ndarray]\nAnyArrayLike = Union[ArrayLike, "Index", "Series"]\nTimeArrayLike = Union["DatetimeArray", "TimedeltaArray"]\n\n# list-like\n\n# from https://github.com/hauntsaninja/useful_types\n# includes Sequence-like objects but excludes str and bytes\n_T_co = TypeVar("_T_co", covariant=True)\n\n\nclass SequenceNotStr(Protocol[_T_co]):\n @overload\n def __getitem__(self, index: SupportsIndex, /) -> _T_co:\n ...\n\n @overload\n def __getitem__(self, index: slice, /) -> Sequence[_T_co]:\n ...\n\n def __contains__(self, value: object, /) -> bool:\n ...\n\n def __len__(self) -> int:\n ...\n\n def __iter__(self) -> Iterator[_T_co]:\n ...\n\n def index(self, value: Any, /, start: int = 0, stop: int = ...) 
-> int:\n ...\n\n def count(self, value: Any, /) -> int:\n ...\n\n def __reversed__(self) -> Iterator[_T_co]:\n ...\n\n\nListLike = Union[AnyArrayLike, SequenceNotStr, range]\n\n# scalars\n\nPythonScalar = Union[str, float, bool]\nDatetimeLikeScalar = Union["Period", "Timestamp", "Timedelta"]\nPandasScalar = Union["Period", "Timestamp", "Timedelta", "Interval"]\nScalar = Union[PythonScalar, PandasScalar, np.datetime64, np.timedelta64, date]\nIntStrT = TypeVar("IntStrT", bound=Union[int, str])\n\n\n# timestamp and timedelta convertible types\n\nTimestampConvertibleTypes = Union[\n "Timestamp", date, np.datetime64, np.int64, float, str\n]\nTimestampNonexistent = Union[\n Literal["shift_forward", "shift_backward", "NaT", "raise"], timedelta\n]\nTimedeltaConvertibleTypes = Union[\n "Timedelta", timedelta, np.timedelta64, np.int64, float, str\n]\nTimezone = Union[str, tzinfo]\n\nToTimestampHow = Literal["s", "e", "start", "end"]\n\n# NDFrameT is stricter and ensures that the same subclass of NDFrame always is\n# used. E.g. `def func(a: NDFrameT) -> NDFrameT: ...` means that if a\n# Series is passed into a function, a Series is always returned and if a DataFrame is\n# passed in, a DataFrame is always returned.\nNDFrameT = TypeVar("NDFrameT", bound="NDFrame")\n\nNumpyIndexT = TypeVar("NumpyIndexT", np.ndarray, "Index")\n\nAxisInt = int\nAxis = Union[AxisInt, Literal["index", "columns", "rows"]]\nIndexLabel = Union[Hashable, Sequence[Hashable]]\nLevel = Hashable\nShape = tuple[int, ...]\nSuffixes = tuple[Optional[str], Optional[str]]\nOrdered = Optional[bool]\nJSONSerializable = Optional[Union[PythonScalar, list, dict]]\nFrequency = Union[str, "BaseOffset"]\nAxes = ListLike\n\nRandomState = Union[\n int,\n np.ndarray,\n np.random.Generator,\n np.random.BitGenerator,\n np.random.RandomState,\n]\n\n# dtypes\nNpDtype = Union[str, np.dtype, type_t[Union[str, complex, bool, object]]]\nDtype = Union["ExtensionDtype", NpDtype]\nAstypeArg = Union["ExtensionDtype", "npt.DTypeLike"]\n# DtypeArg specifies all allowable dtypes in a functions its dtype argument\nDtypeArg = Union[Dtype, dict[Hashable, Dtype]]\nDtypeObj = Union[np.dtype, "ExtensionDtype"]\n\n# converters\nConvertersArg = dict[Hashable, Callable[[Dtype], Dtype]]\n\n# parse_dates\nParseDatesArg = Union[\n bool, list[Hashable], list[list[Hashable]], dict[Hashable, list[Hashable]]\n]\n\n# For functions like rename that convert one label to another\nRenamer = Union[Mapping[Any, Hashable], Callable[[Any], Hashable]]\n\n# to maintain type information across generic functions and parametrization\nT = TypeVar("T")\n\n# used in decorators to preserve the signature of the function it decorates\n# see https://mypy.readthedocs.io/en/stable/generics.html#declaring-decorators\nFuncType = Callable[..., Any]\nF = TypeVar("F", bound=FuncType)\n\n# types of vectorized key functions for DataFrame::sort_values and\n# DataFrame::sort_index, among others\nValueKeyFunc = Optional[Callable[["Series"], Union["Series", AnyArrayLike]]]\nIndexKeyFunc = Optional[Callable[["Index"], Union["Index", AnyArrayLike]]]\n\n# types of `func` kwarg for DataFrame.aggregate and Series.aggregate\nAggFuncTypeBase = Union[Callable, str]\nAggFuncTypeDict = MutableMapping[\n Hashable, Union[AggFuncTypeBase, list[AggFuncTypeBase]]\n]\nAggFuncType = Union[\n AggFuncTypeBase,\n list[AggFuncTypeBase],\n AggFuncTypeDict,\n]\nAggObjType = Union[\n "Series",\n "DataFrame",\n "GroupBy",\n "SeriesGroupBy",\n "DataFrameGroupBy",\n "BaseWindow",\n "Resampler",\n]\n\nPythonFuncType = Callable[[Any], 
Any]\n\n# filenames and file-like-objects\nAnyStr_co = TypeVar("AnyStr_co", str, bytes, covariant=True)\nAnyStr_contra = TypeVar("AnyStr_contra", str, bytes, contravariant=True)\n\n\nclass BaseBuffer(Protocol):\n @property\n def mode(self) -> str:\n # for _get_filepath_or_buffer\n ...\n\n def seek(self, __offset: int, __whence: int = ...) -> int:\n # with one argument: gzip.GzipFile, bz2.BZ2File\n # with two arguments: zip.ZipFile, read_sas\n ...\n\n def seekable(self) -> bool:\n # for bz2.BZ2File\n ...\n\n def tell(self) -> int:\n # for zip.ZipFile, read_stata, to_stata\n ...\n\n\nclass ReadBuffer(BaseBuffer, Protocol[AnyStr_co]):\n def read(self, __n: int = ...) -> AnyStr_co:\n # for BytesIOWrapper, gzip.GzipFile, bz2.BZ2File\n ...\n\n\nclass WriteBuffer(BaseBuffer, Protocol[AnyStr_contra]):\n def write(self, __b: AnyStr_contra) -> Any:\n # for gzip.GzipFile, bz2.BZ2File\n ...\n\n def flush(self) -> Any:\n # for gzip.GzipFile, bz2.BZ2File\n ...\n\n\nclass ReadPickleBuffer(ReadBuffer[bytes], Protocol):\n def readline(self) -> bytes:\n ...\n\n\nclass WriteExcelBuffer(WriteBuffer[bytes], Protocol):\n def truncate(self, size: int | None = ...) -> int:\n ...\n\n\nclass ReadCsvBuffer(ReadBuffer[AnyStr_co], Protocol):\n def __iter__(self) -> Iterator[AnyStr_co]:\n # for engine=python\n ...\n\n def fileno(self) -> int:\n # for _MMapWrapper\n ...\n\n def readline(self) -> AnyStr_co:\n # for engine=python\n ...\n\n @property\n def closed(self) -> bool:\n # for enine=pyarrow\n ...\n\n\nFilePath = Union[str, "PathLike[str]"]\n\n# for arbitrary kwargs passed during reading/writing files\nStorageOptions = Optional[dict[str, Any]]\n\n\n# compression keywords and compression\nCompressionDict = dict[str, Any]\nCompressionOptions = Optional[\n Union[Literal["infer", "gzip", "bz2", "zip", "xz", "zstd", "tar"], CompressionDict]\n]\n\n# types in DataFrameFormatter\nFormattersType = Union[\n list[Callable], tuple[Callable, ...], Mapping[Union[str, int], Callable]\n]\nColspaceType = Mapping[Hashable, Union[str, int]]\nFloatFormatType = Union[str, Callable, "EngFormatter"]\nColspaceArgType = Union[\n str, int, Sequence[Union[str, int]], Mapping[Hashable, Union[str, int]]\n]\n\n# Arguments for fillna()\nFillnaOptions = Literal["backfill", "bfill", "ffill", "pad"]\nInterpolateOptions = Literal[\n "linear",\n "time",\n "index",\n "values",\n "nearest",\n "zero",\n "slinear",\n "quadratic",\n "cubic",\n "barycentric",\n "polynomial",\n "krogh",\n "piecewise_polynomial",\n "spline",\n "pchip",\n "akima",\n "cubicspline",\n "from_derivatives",\n]\n\n# internals\nManager = Union[\n "ArrayManager", "SingleArrayManager", "BlockManager", "SingleBlockManager"\n]\nSingleManager = Union["SingleArrayManager", "SingleBlockManager"]\nManager2D = Union["ArrayManager", "BlockManager"]\n\n# indexing\n# PositionalIndexer -> valid 1D positional indexer, e.g. 
can pass\n# to ndarray.__getitem__\n# ScalarIndexer is for a single value as the index\n# SequenceIndexer is for list like or slices (but not tuples)\n# PositionalIndexerTuple is extends the PositionalIndexer for 2D arrays\n# These are used in various __getitem__ overloads\n# TODO(typing#684): add Ellipsis, see\n# https://github.com/python/typing/issues/684#issuecomment-548203158\n# https://bugs.python.org/issue41810\n# Using List[int] here rather than Sequence[int] to disallow tuples.\nScalarIndexer = Union[int, np.integer]\nSequenceIndexer = Union[slice, list[int], np.ndarray]\nPositionalIndexer = Union[ScalarIndexer, SequenceIndexer]\nPositionalIndexerTuple = tuple[PositionalIndexer, PositionalIndexer]\nPositionalIndexer2D = Union[PositionalIndexer, PositionalIndexerTuple]\nif TYPE_CHECKING:\n TakeIndexer = Union[Sequence[int], Sequence[np.integer], npt.NDArray[np.integer]]\nelse:\n TakeIndexer = Any\n\n# Shared by functions such as drop and astype\nIgnoreRaise = Literal["ignore", "raise"]\n\n# Windowing rank methods\nWindowingRankType = Literal["average", "min", "max"]\n\n# read_csv engines\nCSVEngine = Literal["c", "python", "pyarrow", "python-fwf"]\n\n# read_json engines\nJSONEngine = Literal["ujson", "pyarrow"]\n\n# read_xml parsers\nXMLParsers = Literal["lxml", "etree"]\n\n# read_html flavors\nHTMLFlavors = Literal["lxml", "html5lib", "bs4"]\n\n# Interval closed type\nIntervalLeftRight = Literal["left", "right"]\nIntervalClosedType = Union[IntervalLeftRight, Literal["both", "neither"]]\n\n# datetime and NaTType\nDatetimeNaTType = Union[datetime, "NaTType"]\nDateTimeErrorChoices = Union[IgnoreRaise, Literal["coerce"]]\n\n# sort_index\nSortKind = Literal["quicksort", "mergesort", "heapsort", "stable"]\nNaPosition = Literal["first", "last"]\n\n# Arguments for nsmalles and n_largest\nNsmallestNlargestKeep = Literal["first", "last", "all"]\n\n# quantile interpolation\nQuantileInterpolation = Literal["linear", "lower", "higher", "midpoint", "nearest"]\n\n# plotting\nPlottingOrientation = Literal["horizontal", "vertical"]\n\n# dropna\nAnyAll = Literal["any", "all"]\n\n# merge\nMergeHow = Literal["left", "right", "inner", "outer", "cross"]\nMergeValidate = Literal[\n "one_to_one",\n "1:1",\n "one_to_many",\n "1:m",\n "many_to_one",\n "m:1",\n "many_to_many",\n "m:m",\n]\n\n# join\nJoinHow = Literal["left", "right", "inner", "outer"]\nJoinValidate = Literal[\n "one_to_one",\n "1:1",\n "one_to_many",\n "1:m",\n "many_to_one",\n "m:1",\n "many_to_many",\n "m:m",\n]\n\n# reindex\nReindexMethod = Union[FillnaOptions, Literal["nearest"]]\n\nMatplotlibColor = Union[str, Sequence[float]]\nTimeGrouperOrigin = Union[\n "Timestamp", Literal["epoch", "start", "start_day", "end", "end_day"]\n]\nTimeAmbiguous = Union[Literal["infer", "NaT", "raise"], "npt.NDArray[np.bool_]"]\nTimeNonexistent = Union[\n Literal["shift_forward", "shift_backward", "NaT", "raise"], timedelta\n]\nDropKeep = Literal["first", "last", False]\nCorrelationMethod = Union[\n Literal["pearson", "kendall", "spearman"], Callable[[np.ndarray, np.ndarray], float]\n]\nAlignJoin = Literal["outer", "inner", "left", "right"]\nDtypeBackend = Literal["pyarrow", "numpy_nullable"]\n\nTimeUnit = Literal["s", "ms", "us", "ns"]\nOpenFileErrors = Literal[\n "strict",\n "ignore",\n "replace",\n "surrogateescape",\n "xmlcharrefreplace",\n "backslashreplace",\n "namereplace",\n]\n\n# update\nUpdateJoin = Literal["left"]\n\n# applymap\nNaAction = Literal["ignore"]\n\n# from_dict\nFromDictOrient = Literal["columns", "index", "tight"]\n\n# 
to_gbc\nToGbqIfexist = Literal["fail", "replace", "append"]\n\n# to_stata\nToStataByteorder = Literal[">", "<", "little", "big"]\n\n# ExcelWriter\nExcelWriterIfSheetExists = Literal["error", "new", "replace", "overlay"]\n\n# Offsets\nOffsetCalendar = Union[np.busdaycalendar, "AbstractHolidayCalendar"]\n\n# read_csv: usecols\nUsecolsArgType = Union[\n SequenceNotStr[Hashable],\n range,\n AnyArrayLike,\n Callable[[HashableT], bool],\n None,\n]\n
.venv\Lib\site-packages\pandas\_typing.py
_typing.py
Python
14,037
0.95
0.104762
0.186761
react-lib
324
2024-06-02T07:58:45.799366
BSD-3-Clause
false
63efc1ec26eef89468facc8c716f7319
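NDFrameT in the _typing.py record above is a TypeVar bound to NDFrame, so an annotated helper is understood by a type checker to return the same subclass it receives (Series in, Series out; DataFrame in, DataFrame out). A sketch of that behaviour; pandas._typing is a private module, so importing it outside the pandas codebase is shown for illustration only:

from __future__ import annotations

from typing import TYPE_CHECKING

import pandas as pd

if TYPE_CHECKING:
    from pandas._typing import NDFrameT  # private module, illustration only


def head_two(obj: NDFrameT) -> NDFrameT:
    # iloc works on both Series and DataFrame; the TypeVar tells the
    # type checker that the return type matches the argument type.
    return obj.iloc[:2]


print(head_two(pd.Series([10, 20, 30])))
print(head_two(pd.DataFrame({"a": [1, 2, 3], "b": [4, 5, 6]})))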
# This file helps to compute a version number in source trees obtained from\n# git-archive tarball (such as those provided by githubs download-from-tag\n# feature). Distribution tarballs (built by setup.py sdist) and build\n# directories (produced by setup.py build) will contain a much shorter file\n# that just contains the computed version number.\n\n# This file is released into the public domain.\n# Generated by versioneer-0.28\n# https://github.com/python-versioneer/python-versioneer\n\n"""Git implementation of _version.py."""\n\nimport errno\nimport functools\nimport os\nimport re\nimport subprocess\nimport sys\nfrom typing import Callable\n\n\ndef get_keywords():\n """Get the keywords needed to look up the version information."""\n # these strings will be replaced by git during git-archive.\n # setup.py/versioneer.py will grep for the variable names, so they must\n # each be defined on a line of their own. _version.py will just call\n # get_keywords().\n git_refnames = " (HEAD, tag: v2.3.0, origin/2.3.x)"\n git_full = "2cc37625532045f4ac55b27176454bbbc9baf213"\n git_date = "2025-06-04 19:07:38 -0700"\n keywords = {"refnames": git_refnames, "full": git_full, "date": git_date}\n return keywords\n\n\nclass VersioneerConfig:\n """Container for Versioneer configuration parameters."""\n\n\ndef get_config():\n """Create, populate and return the VersioneerConfig() object."""\n # these strings are filled in when 'setup.py versioneer' creates\n # _version.py\n cfg = VersioneerConfig()\n cfg.VCS = "git"\n cfg.style = "pep440"\n cfg.tag_prefix = "v"\n cfg.parentdir_prefix = "pandas-"\n cfg.versionfile_source = "pandas/_version.py"\n cfg.verbose = False\n return cfg\n\n\nclass NotThisMethod(Exception):\n """Exception raised if a method is not valid for the current scenario."""\n\n\nLONG_VERSION_PY: dict[str, str] = {}\nHANDLERS: dict[str, dict[str, Callable]] = {}\n\n\ndef register_vcs_handler(vcs, method): # decorator\n """Create decorator to mark a method as the handler of a VCS."""\n\n def decorate(f):\n """Store f in HANDLERS[vcs][method]."""\n if vcs not in HANDLERS:\n HANDLERS[vcs] = {}\n HANDLERS[vcs][method] = f\n return f\n\n return decorate\n\n\ndef run_command(commands, args, cwd=None, verbose=False, hide_stderr=False, env=None):\n """Call the given command(s)."""\n assert isinstance(commands, list)\n process = None\n\n popen_kwargs = {}\n if sys.platform == "win32":\n # This hides the console window if pythonw.exe is used\n startupinfo = subprocess.STARTUPINFO()\n startupinfo.dwFlags |= subprocess.STARTF_USESHOWWINDOW\n popen_kwargs["startupinfo"] = startupinfo\n\n for command in commands:\n dispcmd = str([command] + args)\n try:\n # remember shell=False, so use git.cmd on windows, not just git\n process = subprocess.Popen(\n [command] + args,\n cwd=cwd,\n env=env,\n stdout=subprocess.PIPE,\n stderr=(subprocess.PIPE if hide_stderr else None),\n **popen_kwargs,\n )\n break\n except OSError:\n e = sys.exc_info()[1]\n if e.errno == errno.ENOENT:\n continue\n if verbose:\n print(f"unable to run {dispcmd}")\n print(e)\n return None, None\n else:\n if verbose:\n print(f"unable to find command, tried {commands}")\n return None, None\n stdout = process.communicate()[0].strip().decode()\n if process.returncode != 0:\n if verbose:\n print(f"unable to run {dispcmd} (error)")\n print(f"stdout was {stdout}")\n return None, process.returncode\n return stdout, process.returncode\n\n\ndef versions_from_parentdir(parentdir_prefix, root, verbose):\n """Try to determine the version from the parent 
directory name.\n\n Source tarballs conventionally unpack into a directory that includes both\n the project name and a version string. We will also support searching up\n two directory levels for an appropriately named parent directory\n """\n rootdirs = []\n\n for _ in range(3):\n dirname = os.path.basename(root)\n if dirname.startswith(parentdir_prefix):\n return {\n "version": dirname[len(parentdir_prefix) :],\n "full-revisionid": None,\n "dirty": False,\n "error": None,\n "date": None,\n }\n rootdirs.append(root)\n root = os.path.dirname(root) # up a level\n\n if verbose:\n print(\n f"Tried directories {str(rootdirs)} \\n but none started with prefix {parentdir_prefix}"\n )\n raise NotThisMethod("rootdir doesn't start with parentdir_prefix")\n\n\n@register_vcs_handler("git", "get_keywords")\ndef git_get_keywords(versionfile_abs):\n """Extract version information from the given file."""\n # the code embedded in _version.py can just fetch the value of these\n # keywords. When used from setup.py, we don't want to import _version.py,\n # so we do it with a regexp instead. This function is not used from\n # _version.py.\n keywords = {}\n try:\n with open(versionfile_abs, encoding="utf-8") as fobj:\n for line in fobj:\n if line.strip().startswith("git_refnames ="):\n mo = re.search(r'=\s*"(.*)"', line)\n if mo:\n keywords["refnames"] = mo.group(1)\n if line.strip().startswith("git_full ="):\n mo = re.search(r'=\s*"(.*)"', line)\n if mo:\n keywords["full"] = mo.group(1)\n if line.strip().startswith("git_date ="):\n mo = re.search(r'=\s*"(.*)"', line)\n if mo:\n keywords["date"] = mo.group(1)\n except OSError:\n pass\n return keywords\n\n\n@register_vcs_handler("git", "keywords")\ndef git_versions_from_keywords(keywords, tag_prefix, verbose):\n """Get version information from git keywords."""\n if "refnames" not in keywords:\n raise NotThisMethod("Short version file found")\n date = keywords.get("date")\n if date is not None:\n # Use only the last line. Previous lines may contain GPG signature\n # information.\n date = date.splitlines()[-1]\n\n # git-2.2.0 added "%cI", which expands to an ISO-8601 -compliant\n # datestamp. However we prefer "%ci" (which expands to an "ISO-8601\n # -like" string, which we must then edit to make compliant), because\n # it's been around since git-1.5.3, and it's too difficult to\n # discover which version we're using, or to work around using an\n # older one.\n date = date.strip().replace(" ", "T", 1).replace(" ", "", 1)\n refnames = keywords["refnames"].strip()\n if refnames.startswith("$Format"):\n if verbose:\n print("keywords are unexpanded, not using")\n raise NotThisMethod("unexpanded keywords, not a git-archive tarball")\n refs = {r.strip() for r in refnames.strip("()").split(",")}\n # starting in git-1.8.3, tags are listed as "tag: foo-1.0" instead of\n # just "foo-1.0". If we see a "tag: " prefix, prefer those.\n TAG = "tag: "\n tags = {r[len(TAG) :] for r in refs if r.startswith(TAG)}\n if not tags:\n # Either we're using git < 1.8.3, or there really are no tags. We use\n # a heuristic: assume all version tags have a digit. The old git %d\n # expansion behaves like git log --decorate=short and strips out the\n # refs/heads/ and refs/tags/ prefixes that would let us distinguish\n # between branches and tags. 
By ignoring refnames without digits, we\n # filter out many common branch names like "release" and\n # "stabilization", as well as "HEAD" and "master".\n tags = {r for r in refs if re.search(r"\d", r)}\n if verbose:\n print(f"discarding '{','.join(refs - tags)}', no digits")\n if verbose:\n print(f"likely tags: {','.join(sorted(tags))}")\n for ref in sorted(tags):\n # sorting will prefer e.g. "2.0" over "2.0rc1"\n if ref.startswith(tag_prefix):\n r = ref[len(tag_prefix) :]\n # Filter out refs that exactly match prefix or that don't start\n # with a number once the prefix is stripped (mostly a concern\n # when prefix is '')\n if not re.match(r"\d", r):\n continue\n if verbose:\n print(f"picking {r}")\n return {\n "version": r,\n "full-revisionid": keywords["full"].strip(),\n "dirty": False,\n "error": None,\n "date": date,\n }\n # no suitable tags, so version is "0+unknown", but full hex is still there\n if verbose:\n print("no suitable tags, using unknown + full revision id")\n return {\n "version": "0+unknown",\n "full-revisionid": keywords["full"].strip(),\n "dirty": False,\n "error": "no suitable tags",\n "date": None,\n }\n\n\n@register_vcs_handler("git", "pieces_from_vcs")\ndef git_pieces_from_vcs(tag_prefix, root, verbose, runner=run_command):\n """Get version from 'git describe' in the root of the source tree.\n\n This only gets called if the git-archive 'subst' keywords were *not*\n expanded, and _version.py hasn't already been rewritten with a short\n version string, meaning we're inside a checked out source tree.\n """\n GITS = ["git"]\n if sys.platform == "win32":\n GITS = ["git.cmd", "git.exe"]\n\n # GIT_DIR can interfere with correct operation of Versioneer.\n # It may be intended to be passed to the Versioneer-versioned project,\n # but that should not change where we get our version from.\n env = os.environ.copy()\n env.pop("GIT_DIR", None)\n runner = functools.partial(runner, env=env)\n\n _, rc = runner(GITS, ["rev-parse", "--git-dir"], cwd=root, hide_stderr=not verbose)\n if rc != 0:\n if verbose:\n print(f"Directory {root} not under git control")\n raise NotThisMethod("'git rev-parse --git-dir' returned error")\n\n # if there is a tag matching tag_prefix, this yields TAG-NUM-gHEX[-dirty]\n # if there isn't one, this yields HEX[-dirty] (no NUM)\n describe_out, rc = runner(\n GITS,\n [\n "describe",\n "--tags",\n "--dirty",\n "--always",\n "--long",\n "--match",\n f"{tag_prefix}[[:digit:]]*",\n ],\n cwd=root,\n )\n # --long was added in git-1.5.5\n if describe_out is None:\n raise NotThisMethod("'git describe' failed")\n describe_out = describe_out.strip()\n full_out, rc = runner(GITS, ["rev-parse", "HEAD"], cwd=root)\n if full_out is None:\n raise NotThisMethod("'git rev-parse' failed")\n full_out = full_out.strip()\n\n pieces = {}\n pieces["long"] = full_out\n pieces["short"] = full_out[:7] # maybe improved later\n pieces["error"] = None\n\n branch_name, rc = runner(GITS, ["rev-parse", "--abbrev-ref", "HEAD"], cwd=root)\n # --abbrev-ref was added in git-1.6.3\n if rc != 0 or branch_name is None:\n raise NotThisMethod("'git rev-parse --abbrev-ref' returned error")\n branch_name = branch_name.strip()\n\n if branch_name == "HEAD":\n # If we aren't exactly on a branch, pick a branch which represents\n # the current commit. 
If all else fails, we are on a branchless\n # commit.\n branches, rc = runner(GITS, ["branch", "--contains"], cwd=root)\n # --contains was added in git-1.5.4\n if rc != 0 or branches is None:\n raise NotThisMethod("'git branch --contains' returned error")\n branches = branches.split("\n")\n\n # Remove the first line if we're running detached\n if "(" in branches[0]:\n branches.pop(0)\n\n # Strip off the leading "* " from the list of branches.\n branches = [branch[2:] for branch in branches]\n if "master" in branches:\n branch_name = "master"\n elif not branches:\n branch_name = None\n else:\n # Pick the first branch that is returned. Good or bad.\n branch_name = branches[0]\n\n pieces["branch"] = branch_name\n\n # parse describe_out. It will be like TAG-NUM-gHEX[-dirty] or HEX[-dirty]\n # TAG might have hyphens.\n git_describe = describe_out\n\n # look for -dirty suffix\n dirty = git_describe.endswith("-dirty")\n pieces["dirty"] = dirty\n if dirty:\n git_describe = git_describe[: git_describe.rindex("-dirty")]\n\n # now we have TAG-NUM-gHEX or HEX\n\n if "-" in git_describe:\n # TAG-NUM-gHEX\n mo = re.search(r"^(.+)-(\d+)-g([0-9a-f]+)$", git_describe)\n if not mo:\n # unparsable. Maybe git-describe is misbehaving?\n pieces["error"] = f"unable to parse git-describe output: '{describe_out}'"\n return pieces\n\n # tag\n full_tag = mo.group(1)\n if not full_tag.startswith(tag_prefix):\n if verbose:\n fmt = "tag '%s' doesn't start with prefix '%s'"\n print(fmt % (full_tag, tag_prefix))\n pieces[\n "error"\n ] = f"tag '{full_tag}' doesn't start with prefix '{tag_prefix}'"\n return pieces\n pieces["closest-tag"] = full_tag[len(tag_prefix) :]\n\n # distance: number of commits since tag\n pieces["distance"] = int(mo.group(2))\n\n # commit: short hex revision ID\n pieces["short"] = mo.group(3)\n\n else:\n # HEX: no tags\n pieces["closest-tag"] = None\n out, rc = runner(GITS, ["rev-list", "HEAD", "--left-right"], cwd=root)\n pieces["distance"] = len(out.split()) # total number of commits\n\n # commit date: see ISO-8601 comment in git_versions_from_keywords()\n date = runner(GITS, ["show", "-s", "--format=%ci", "HEAD"], cwd=root)[0].strip()\n # Use only the last line. Previous lines may contain GPG signature\n # information.\n date = date.splitlines()[-1]\n pieces["date"] = date.strip().replace(" ", "T", 1).replace(" ", "", 1)\n\n return pieces\n\n\ndef plus_or_dot(pieces) -> str:\n """Return a + if we don't already have one, else return a ."""\n if "+" in pieces.get("closest-tag", ""):\n return "."\n return "+"\n\n\ndef render_pep440(pieces):\n """Build up version string, with post-release "local version identifier".\n\n Our goal: TAG[+DISTANCE.gHEX[.dirty]] . Note that if you\n get a tagged build and then dirty it, you'll get TAG+0.gHEX.dirty\n\n Exceptions:\n 1: no tags. git_describe was just HEX. 0+untagged.DISTANCE.gHEX[.dirty]\n """\n if pieces["closest-tag"]:\n rendered = pieces["closest-tag"]\n if pieces["distance"] or pieces["dirty"]:\n rendered += plus_or_dot(pieces)\n rendered += f"{pieces['distance']}.g{pieces['short']}"\n if pieces["dirty"]:\n rendered += ".dirty"\n else:\n # exception #1\n rendered = f"0+untagged.{pieces['distance']}.g{pieces['short']}"\n if pieces["dirty"]:\n rendered += ".dirty"\n return rendered\n\n\ndef render_pep440_branch(pieces):\n """TAG[[.dev0]+DISTANCE.gHEX[.dirty]] .\n\n The ".dev0" means not master branch. Note that .dev0 sorts backwards\n (a feature branch will appear "older" than the master branch).\n\n Exceptions:\n 1: no tags. 
0[.dev0]+untagged.DISTANCE.gHEX[.dirty]\n """\n if pieces["closest-tag"]:\n rendered = pieces["closest-tag"]\n if pieces["distance"] or pieces["dirty"]:\n if pieces["branch"] != "master":\n rendered += ".dev0"\n rendered += plus_or_dot(pieces)\n rendered += f"{pieces['distance']}.g{pieces['short']}"\n if pieces["dirty"]:\n rendered += ".dirty"\n else:\n # exception #1\n rendered = "0"\n if pieces["branch"] != "master":\n rendered += ".dev0"\n rendered += f"+untagged.{pieces['distance']}.g{pieces['short']}"\n if pieces["dirty"]:\n rendered += ".dirty"\n return rendered\n\n\ndef pep440_split_post(ver):\n """Split pep440 version string at the post-release segment.\n\n Returns the release segments before the post-release and the\n post-release version number (or -1 if no post-release segment is present).\n """\n vc = str.split(ver, ".post")\n return vc[0], int(vc[1] or 0) if len(vc) == 2 else None\n\n\ndef render_pep440_pre(pieces):\n """TAG[.postN.devDISTANCE] -- No -dirty.\n\n Exceptions:\n 1: no tags. 0.post0.devDISTANCE\n """\n if pieces["closest-tag"]:\n if pieces["distance"]:\n # update the post release segment\n tag_version, post_version = pep440_split_post(pieces["closest-tag"])\n rendered = tag_version\n if post_version is not None:\n rendered += f".post{post_version + 1}.dev{pieces['distance']}"\n else:\n rendered += f".post0.dev{pieces['distance']}"\n else:\n # no commits, use the tag as the version\n rendered = pieces["closest-tag"]\n else:\n # exception #1\n rendered = f"0.post0.dev{pieces['distance']}"\n return rendered\n\n\ndef render_pep440_post(pieces):\n """TAG[.postDISTANCE[.dev0]+gHEX] .\n\n The ".dev0" means dirty. Note that .dev0 sorts backwards\n (a dirty tree will appear "older" than the corresponding clean one),\n but you shouldn't be releasing software with -dirty anyways.\n\n Exceptions:\n 1: no tags. 0.postDISTANCE[.dev0]\n """\n if pieces["closest-tag"]:\n rendered = pieces["closest-tag"]\n if pieces["distance"] or pieces["dirty"]:\n rendered += f".post{pieces['distance']}"\n if pieces["dirty"]:\n rendered += ".dev0"\n rendered += plus_or_dot(pieces)\n rendered += f"g{pieces['short']}"\n else:\n # exception #1\n rendered = f"0.post{pieces['distance']}"\n if pieces["dirty"]:\n rendered += ".dev0"\n rendered += f"+g{pieces['short']}"\n return rendered\n\n\ndef render_pep440_post_branch(pieces):\n """TAG[.postDISTANCE[.dev0]+gHEX[.dirty]] .\n\n The ".dev0" means not master branch.\n\n Exceptions:\n 1: no tags. 0.postDISTANCE[.dev0]+gHEX[.dirty]\n """\n if pieces["closest-tag"]:\n rendered = pieces["closest-tag"]\n if pieces["distance"] or pieces["dirty"]:\n rendered += f".post{pieces['distance']}"\n if pieces["branch"] != "master":\n rendered += ".dev0"\n rendered += plus_or_dot(pieces)\n rendered += f"g{pieces['short']}"\n if pieces["dirty"]:\n rendered += ".dirty"\n else:\n # exception #1\n rendered = f"0.post{pieces['distance']}"\n if pieces["branch"] != "master":\n rendered += ".dev0"\n rendered += f"+g{pieces['short']}"\n if pieces["dirty"]:\n rendered += ".dirty"\n return rendered\n\n\ndef render_pep440_old(pieces):\n """TAG[.postDISTANCE[.dev0]] .\n\n The ".dev0" means dirty.\n\n Exceptions:\n 1: no tags. 
0.postDISTANCE[.dev0]\n """\n if pieces["closest-tag"]:\n rendered = pieces["closest-tag"]\n if pieces["distance"] or pieces["dirty"]:\n rendered += f"0.post{pieces['distance']}"\n if pieces["dirty"]:\n rendered += ".dev0"\n else:\n # exception #1\n rendered = f"0.post{pieces['distance']}"\n if pieces["dirty"]:\n rendered += ".dev0"\n return rendered\n\n\ndef render_git_describe(pieces):\n """TAG[-DISTANCE-gHEX][-dirty].\n\n Like 'git describe --tags --dirty --always'.\n\n Exceptions:\n 1: no tags. HEX[-dirty] (note: no 'g' prefix)\n """\n if pieces["closest-tag"]:\n rendered = pieces["closest-tag"]\n if pieces["distance"]:\n rendered += f"-{pieces['distance']}-g{pieces['short']}"\n else:\n # exception #1\n rendered = pieces["short"]\n if pieces["dirty"]:\n rendered += "-dirty"\n return rendered\n\n\ndef render_git_describe_long(pieces):\n """TAG-DISTANCE-gHEX[-dirty].\n\n Like 'git describe --tags --dirty --always -long'.\n The distance/hash is unconditional.\n\n Exceptions:\n 1: no tags. HEX[-dirty] (note: no 'g' prefix)\n """\n if pieces["closest-tag"]:\n rendered = pieces["closest-tag"]\n rendered += f"-{pieces['distance']}-g{pieces['short']}"\n else:\n # exception #1\n rendered = pieces["short"]\n if pieces["dirty"]:\n rendered += "-dirty"\n return rendered\n\n\ndef render(pieces, style):\n """Render the given version pieces into the requested style."""\n if pieces["error"]:\n return {\n "version": "unknown",\n "full-revisionid": pieces.get("long"),\n "dirty": None,\n "error": pieces["error"],\n "date": None,\n }\n\n if not style or style == "default":\n style = "pep440" # the default\n\n if style == "pep440":\n rendered = render_pep440(pieces)\n elif style == "pep440-branch":\n rendered = render_pep440_branch(pieces)\n elif style == "pep440-pre":\n rendered = render_pep440_pre(pieces)\n elif style == "pep440-post":\n rendered = render_pep440_post(pieces)\n elif style == "pep440-post-branch":\n rendered = render_pep440_post_branch(pieces)\n elif style == "pep440-old":\n rendered = render_pep440_old(pieces)\n elif style == "git-describe":\n rendered = render_git_describe(pieces)\n elif style == "git-describe-long":\n rendered = render_git_describe_long(pieces)\n else:\n raise ValueError(f"unknown style '{style}'")\n\n return {\n "version": rendered,\n "full-revisionid": pieces["long"],\n "dirty": pieces["dirty"],\n "error": None,\n "date": pieces.get("date"),\n }\n\n\ndef get_versions():\n """Get version information or return default if unable to do so."""\n # I am in _version.py, which lives at ROOT/VERSIONFILE_SOURCE. If we have\n # __file__, we can work backwards from there to the root. Some\n # py2exe/bbfreeze/non-CPython implementations don't do __file__, in which\n # case we can only use expanded keywords.\n\n cfg = get_config()\n verbose = cfg.verbose\n\n try:\n return git_versions_from_keywords(get_keywords(), cfg.tag_prefix, verbose)\n except NotThisMethod:\n pass\n\n try:\n root = os.path.realpath(__file__)\n # versionfile_source is the relative path from the top of the source\n # tree (where the .git directory might live) to this file. 
Invert\n # this to find the root from __file__.\n for _ in cfg.versionfile_source.split("/"):\n root = os.path.dirname(root)\n except NameError:\n return {\n "version": "0+unknown",\n "full-revisionid": None,\n "dirty": None,\n "error": "unable to find root of source tree",\n "date": None,\n }\n\n try:\n pieces = git_pieces_from_vcs(cfg.tag_prefix, root, verbose)\n return render(pieces, cfg.style)\n except NotThisMethod:\n pass\n\n try:\n if cfg.parentdir_prefix:\n return versions_from_parentdir(cfg.parentdir_prefix, root, verbose)\n except NotThisMethod:\n pass\n\n return {\n "version": "0+unknown",\n "full-revisionid": None,\n "dirty": None,\n "error": "unable to compute version",\n "date": None,\n }\n
.venv\Lib\site-packages\pandas\_version.py
_version.py
Python
23,677
0.95
0.196532
0.147458
python-kit
291
2024-12-14T20:42:10.133394
BSD-3-Clause
false
d7cd6e521fe7bd67d574715a8c59383b
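A minimal sketch of how the rendering helpers in the _version.py record above turn a git-describe result into a PEP 440 string. The pieces values below are hypothetical (a dirty tree five commits past the v2.3.0 tag) and assume the module is importable as pandas._version, as its path above indicates.

from pandas._version import render

pieces = {
    "closest-tag": "2.3.0",   # tag with the "v" prefix already stripped
    "distance": 5,            # commits since that tag
    "short": "2cc3762",       # abbreviated commit hash
    "long": "2cc37625532045f4ac55b27176454bbbc9baf213",
    "dirty": True,            # uncommitted changes present
    "branch": "main",         # hypothetical branch name
    "error": None,
    "date": "2025-06-04T19:07:38-0700",
}
print(render(pieces, "pep440")["version"])       # 2.3.0+5.g2cc3762.dirty
print(render(pieces, "pep440-post")["version"])  # 2.3.0.post5.dev0+g2cc3762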
__version__="2.3.0"\n__git_version__="2cc37625532045f4ac55b27176454bbbc9baf213"\n
.venv\Lib\site-packages\pandas\_version_meson.py
_version_meson.py
Python
79
0.5
0
0
react-lib
279
2024-10-23T17:10:11.152970
BSD-3-Clause
false
9ffee1dcd957478fa4b071e96314f9dc
from __future__ import annotations\n\n\n# start delvewheel patch\ndef _delvewheel_patch_1_10_1():\n import os\n if os.path.isdir(libs_dir := os.path.abspath(os.path.join(os.path.dirname(__file__), os.pardir, 'pandas.libs'))):\n os.add_dll_directory(libs_dir)\n\n\n_delvewheel_patch_1_10_1()\ndel _delvewheel_patch_1_10_1\n# end delvewheel patch\n\nimport os\nimport warnings\n\n__docformat__ = "restructuredtext"\n\n# Let users know if they're missing any of our hard dependencies\n_hard_dependencies = ("numpy", "pytz", "dateutil")\n_missing_dependencies = []\n\nfor _dependency in _hard_dependencies:\n try:\n __import__(_dependency)\n except ImportError as _e: # pragma: no cover\n _missing_dependencies.append(f"{_dependency}: {_e}")\n\nif _missing_dependencies: # pragma: no cover\n raise ImportError(\n "Unable to import required dependencies:\n" + "\n".join(_missing_dependencies)\n )\ndel _hard_dependencies, _dependency, _missing_dependencies\n\ntry:\n # numpy compat\n from pandas.compat import (\n is_numpy_dev as _is_numpy_dev, # pyright: ignore[reportUnusedImport] # noqa: F401\n )\nexcept ImportError as _err: # pragma: no cover\n _module = _err.name\n raise ImportError(\n f"C extension: {_module} not built. If you want to import "\n "pandas from the source directory, you may need to run "\n "'python setup.py build_ext' to build the C extensions first."\n ) from _err\n\nfrom pandas._config import (\n get_option,\n set_option,\n reset_option,\n describe_option,\n option_context,\n options,\n)\n\n# let init-time option registration happen\nimport pandas.core.config_init # pyright: ignore[reportUnusedImport] # noqa: F401\n\nfrom pandas.core.api import (\n # dtype\n ArrowDtype,\n Int8Dtype,\n Int16Dtype,\n Int32Dtype,\n Int64Dtype,\n UInt8Dtype,\n UInt16Dtype,\n UInt32Dtype,\n UInt64Dtype,\n Float32Dtype,\n Float64Dtype,\n CategoricalDtype,\n PeriodDtype,\n IntervalDtype,\n DatetimeTZDtype,\n StringDtype,\n BooleanDtype,\n # missing\n NA,\n isna,\n isnull,\n notna,\n notnull,\n # indexes\n Index,\n CategoricalIndex,\n RangeIndex,\n MultiIndex,\n IntervalIndex,\n TimedeltaIndex,\n DatetimeIndex,\n PeriodIndex,\n IndexSlice,\n # tseries\n NaT,\n Period,\n period_range,\n Timedelta,\n timedelta_range,\n Timestamp,\n date_range,\n bdate_range,\n Interval,\n interval_range,\n DateOffset,\n # conversion\n to_numeric,\n to_datetime,\n to_timedelta,\n # misc\n Flags,\n Grouper,\n factorize,\n unique,\n value_counts,\n NamedAgg,\n array,\n Categorical,\n set_eng_float_format,\n Series,\n DataFrame,\n)\n\nfrom pandas.core.dtypes.dtypes import SparseDtype\n\nfrom pandas.tseries.api import infer_freq\nfrom pandas.tseries import offsets\n\nfrom pandas.core.computation.api import eval\n\nfrom pandas.core.reshape.api import (\n concat,\n lreshape,\n melt,\n wide_to_long,\n merge,\n merge_asof,\n merge_ordered,\n crosstab,\n pivot,\n pivot_table,\n get_dummies,\n from_dummies,\n cut,\n qcut,\n)\n\nfrom pandas import api, arrays, errors, io, plotting, tseries\nfrom pandas import testing\nfrom pandas.util._print_versions import show_versions\n\nfrom pandas.io.api import (\n # excel\n ExcelFile,\n ExcelWriter,\n read_excel,\n # parsers\n read_csv,\n read_fwf,\n read_table,\n # pickle\n read_pickle,\n to_pickle,\n # pytables\n HDFStore,\n read_hdf,\n # sql\n read_sql,\n read_sql_query,\n read_sql_table,\n # misc\n read_clipboard,\n read_parquet,\n read_orc,\n read_feather,\n read_gbq,\n read_html,\n read_xml,\n read_json,\n read_stata,\n read_sas,\n read_spss,\n)\n\nfrom pandas.io.json._normalize import 
json_normalize\n\nfrom pandas.util._tester import test\n\n# use the closest tagged version if possible\n_built_with_meson = False\ntry:\n from pandas._version_meson import ( # pyright: ignore [reportMissingImports]\n __version__,\n __git_version__,\n )\n\n _built_with_meson = True\nexcept ImportError:\n from pandas._version import get_versions\n\n v = get_versions()\n __version__ = v.get("closest-tag", v["version"])\n __git_version__ = v.get("full-revisionid")\n del get_versions, v\n\n# GH#55043 - deprecation of the data_manager option\nif "PANDAS_DATA_MANAGER" in os.environ:\n warnings.warn(\n "The env variable PANDAS_DATA_MANAGER is set. The data_manager option is "\n "deprecated and will be removed in a future version. Only the BlockManager "\n "will be available. Unset this environment variable to silence this warning.",\n FutureWarning,\n stacklevel=2,\n )\n\ndel warnings, os\n\n# module level doc-string\n__doc__ = """\npandas - a powerful data analysis and manipulation library for Python\n=====================================================================\n\n**pandas** is a Python package providing fast, flexible, and expressive data\nstructures designed to make working with "relational" or "labeled" data both\neasy and intuitive. It aims to be the fundamental high-level building block for\ndoing practical, **real world** data analysis in Python. Additionally, it has\nthe broader goal of becoming **the most powerful and flexible open source data\nanalysis / manipulation tool available in any language**. It is already well on\nits way toward this goal.\n\nMain Features\n-------------\nHere are just a few of the things that pandas does well:\n\n - Easy handling of missing data in floating point as well as non-floating\n point data.\n - Size mutability: columns can be inserted and deleted from DataFrame and\n higher dimensional objects\n - Automatic and explicit data alignment: objects can be explicitly aligned\n to a set of labels, or the user can simply ignore the labels and let\n `Series`, `DataFrame`, etc. 
automatically align the data for you in\n computations.\n - Powerful, flexible group by functionality to perform split-apply-combine\n operations on data sets, for both aggregating and transforming data.\n - Make it easy to convert ragged, differently-indexed data in other Python\n and NumPy data structures into DataFrame objects.\n - Intelligent label-based slicing, fancy indexing, and subsetting of large\n data sets.\n - Intuitive merging and joining data sets.\n - Flexible reshaping and pivoting of data sets.\n - Hierarchical labeling of axes (possible to have multiple labels per tick).\n - Robust IO tools for loading data from flat files (CSV and delimited),\n Excel files, databases, and saving/loading data from the ultrafast HDF5\n format.\n - Time series-specific functionality: date range generation and frequency\n conversion, moving window statistics, date shifting and lagging.\n"""\n\n# Use __all__ to let type checkers know what is part of the public API.\n# Pandas is not (yet) a py.typed library: the public API is determined\n# based on the documentation.\n__all__ = [\n "ArrowDtype",\n "BooleanDtype",\n "Categorical",\n "CategoricalDtype",\n "CategoricalIndex",\n "DataFrame",\n "DateOffset",\n "DatetimeIndex",\n "DatetimeTZDtype",\n "ExcelFile",\n "ExcelWriter",\n "Flags",\n "Float32Dtype",\n "Float64Dtype",\n "Grouper",\n "HDFStore",\n "Index",\n "IndexSlice",\n "Int16Dtype",\n "Int32Dtype",\n "Int64Dtype",\n "Int8Dtype",\n "Interval",\n "IntervalDtype",\n "IntervalIndex",\n "MultiIndex",\n "NA",\n "NaT",\n "NamedAgg",\n "Period",\n "PeriodDtype",\n "PeriodIndex",\n "RangeIndex",\n "Series",\n "SparseDtype",\n "StringDtype",\n "Timedelta",\n "TimedeltaIndex",\n "Timestamp",\n "UInt16Dtype",\n "UInt32Dtype",\n "UInt64Dtype",\n "UInt8Dtype",\n "api",\n "array",\n "arrays",\n "bdate_range",\n "concat",\n "crosstab",\n "cut",\n "date_range",\n "describe_option",\n "errors",\n "eval",\n "factorize",\n "get_dummies",\n "from_dummies",\n "get_option",\n "infer_freq",\n "interval_range",\n "io",\n "isna",\n "isnull",\n "json_normalize",\n "lreshape",\n "melt",\n "merge",\n "merge_asof",\n "merge_ordered",\n "notna",\n "notnull",\n "offsets",\n "option_context",\n "options",\n "period_range",\n "pivot",\n "pivot_table",\n "plotting",\n "qcut",\n "read_clipboard",\n "read_csv",\n "read_excel",\n "read_feather",\n "read_fwf",\n "read_gbq",\n "read_hdf",\n "read_html",\n "read_json",\n "read_orc",\n "read_parquet",\n "read_pickle",\n "read_sas",\n "read_spss",\n "read_sql",\n "read_sql_query",\n "read_sql_table",\n "read_stata",\n "read_table",\n "read_xml",\n "reset_option",\n "set_eng_float_format",\n "set_option",\n "show_versions",\n "test",\n "testing",\n "timedelta_range",\n "to_datetime",\n "to_numeric",\n "to_pickle",\n "to_timedelta",\n "tseries",\n "unique",\n "value_counts",\n "wide_to_long",\n]\n
.venv\Lib\site-packages\pandas\__init__.py
__init__.py
Python
8,969
0.95
0.039578
0.068966
vue-tools
919
2024-01-20T18:26:59.703113
MIT
false
c9eae48eed61bf5bc44f2a00206d9c76
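A short usage sketch of the package entry point above: the version attributes come from the meson/versioneer fallback near the end of the file, and __all__ advertises the re-exported public names. The printed version string is only an example.

import pandas as pd

print(pd.__version__)              # e.g. "2.3.0" when _version_meson is present
print(pd.__git_version__)          # full commit hash recorded at build time
print("DataFrame" in pd.__all__)   # True: part of the advertised public API
df = pd.DataFrame({"a": [1, 2, 3]})
print(df["a"].sum())               # 6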
""" public toolkit API """\nfrom pandas.api import (\n extensions,\n indexers,\n interchange,\n types,\n typing,\n)\n\n__all__ = [\n "interchange",\n "extensions",\n "indexers",\n "types",\n "typing",\n]\n
.venv\Lib\site-packages\pandas\api\__init__.py
__init__.py
Python
219
0.85
0
0
node-utils
651
2023-11-28T18:49:32.297022
Apache-2.0
false
567dbfd14b39edff0f0b487e16a1af67
"""\nPublic API for extending pandas objects.\n"""\n\nfrom pandas._libs.lib import no_default\n\nfrom pandas.core.dtypes.base import (\n ExtensionDtype,\n register_extension_dtype,\n)\n\nfrom pandas.core.accessor import (\n register_dataframe_accessor,\n register_index_accessor,\n register_series_accessor,\n)\nfrom pandas.core.algorithms import take\nfrom pandas.core.arrays import (\n ExtensionArray,\n ExtensionScalarOpsMixin,\n)\n\n__all__ = [\n "no_default",\n "ExtensionDtype",\n "register_extension_dtype",\n "register_dataframe_accessor",\n "register_index_accessor",\n "register_series_accessor",\n "take",\n "ExtensionArray",\n "ExtensionScalarOpsMixin",\n]\n
.venv\Lib\site-packages\pandas\api\extensions\__init__.py
__init__.py
Python
685
0.85
0.030303
0
awesome-app
762
2024-06-01T14:19:20.113902
MIT
false
c6b13278b712eabb188c66d02c6138ce
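A hedged example of the accessor-registration hooks re-exported by pandas.api.extensions above; the accessor name "geo" and the lat/lon columns are made up for illustration.

import pandas as pd
from pandas.api.extensions import register_dataframe_accessor

@register_dataframe_accessor("geo")        # hypothetical accessor name
class GeoAccessor:
    def __init__(self, pandas_obj):
        self._obj = pandas_obj

    @property
    def center(self):
        # midpoint of the hypothetical lat/lon columns
        return float(self._obj["lat"].mean()), float(self._obj["lon"].mean())

df = pd.DataFrame({"lat": [0.0, 10.0], "lon": [0.0, 20.0]})
print(df.geo.center)                       # (5.0, 10.0)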
\n\n
.venv\Lib\site-packages\pandas\api\extensions\__pycache__\__init__.cpython-313.pyc
__init__.cpython-313.pyc
Other
782
0.7
0.071429
0
vue-tools
842
2023-10-26T22:31:11.465568
MIT
false
63a9e618e928ae063606e01eab5ca566
"""\nPublic API for Rolling Window Indexers.\n"""\n\nfrom pandas.core.indexers import check_array_indexer\nfrom pandas.core.indexers.objects import (\n BaseIndexer,\n FixedForwardWindowIndexer,\n VariableOffsetWindowIndexer,\n)\n\n__all__ = [\n "check_array_indexer",\n "BaseIndexer",\n "FixedForwardWindowIndexer",\n "VariableOffsetWindowIndexer",\n]\n
.venv\Lib\site-packages\pandas\api\indexers\__init__.py
__init__.py
Python
357
0.85
0.058824
0
react-lib
407
2024-12-27T09:22:15.775104
BSD-3-Clause
false
b3dc789e5fabe4325917ed068c048cda
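A small sketch of the rolling-window indexer API exported above: FixedForwardWindowIndexer builds look-ahead windows that Series.rolling accepts in place of an integer window.

import pandas as pd
from pandas.api.indexers import FixedForwardWindowIndexer

s = pd.Series([1, 2, 3, 4, 5])
indexer = FixedForwardWindowIndexer(window_size=2)        # window looks forward
print(s.rolling(indexer, min_periods=1).sum().tolist())   # [3.0, 5.0, 7.0, 9.0, 5.0]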
\n\n
.venv\Lib\site-packages\pandas\api\indexers\__pycache__\__init__.cpython-313.pyc
__init__.cpython-313.pyc
Other
516
0.7
0.25
0
node-utils
212
2025-06-23T06:11:46.235511
BSD-3-Clause
false
e1d5c1096283444dc01aff291b737acb
"""\nPublic API for DataFrame interchange protocol.\n"""\n\nfrom pandas.core.interchange.dataframe_protocol import DataFrame\nfrom pandas.core.interchange.from_dataframe import from_dataframe\n\n__all__ = ["from_dataframe", "DataFrame"]\n
.venv\Lib\site-packages\pandas\api\interchange\__init__.py
__init__.py
Python
230
0.85
0.125
0
react-lib
922
2025-01-17T15:00:50.528419
MIT
false
52ac5a1b30effc509cdfc2fa05845c23
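A hedged sketch of the interchange entry point above: from_dataframe accepts any object implementing the DataFrame interchange protocol; here pandas' own protocol object is round-tripped purely for illustration.

import pandas as pd
from pandas.api.interchange import from_dataframe

df = pd.DataFrame({"x": [1, 2], "y": ["a", "b"]})
roundtrip = from_dataframe(df.__dataframe__())   # convert a protocol object back
print(type(roundtrip), list(roundtrip.columns))  # <class '...DataFrame'> ['x', 'y']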
\n\n
.venv\Lib\site-packages\pandas\api\interchange\__pycache__\__init__.cpython-313.pyc
__init__.cpython-313.pyc
Other
462
0.7
0.166667
0
awesome-app
152
2024-12-12T23:23:45.604000
GPL-3.0
false
51fe2a1d78bd91bdae61e460ddd36cee
"""\nPublic toolkit API.\n"""\n\nfrom pandas._libs.lib import infer_dtype\n\nfrom pandas.core.dtypes.api import * # noqa: F403\nfrom pandas.core.dtypes.concat import union_categoricals\nfrom pandas.core.dtypes.dtypes import (\n CategoricalDtype,\n DatetimeTZDtype,\n IntervalDtype,\n PeriodDtype,\n)\n\n__all__ = [\n "infer_dtype",\n "union_categoricals",\n "CategoricalDtype",\n "DatetimeTZDtype",\n "IntervalDtype",\n "PeriodDtype",\n]\n
.venv\Lib\site-packages\pandas\api\types\__init__.py
__init__.py
Python
447
0.95
0
0
react-lib
115
2024-10-26T21:19:34.040845
GPL-3.0
false
ddcf8d495308f0ce1d131e227dfa3389
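A quick sketch of two names re-exported by pandas.api.types above: infer_dtype reports the inferred scalar kind, and union_categoricals merges categoricals while combining their categories in order of appearance.

import pandas as pd
from pandas.api.types import infer_dtype, union_categoricals

print(infer_dtype(pd.Series([1, 2, 3])))               # "integer"
a = pd.Categorical(["b", "c"])
b = pd.Categorical(["a", "b"])
print(union_categoricals([a, b]).categories.tolist())  # ['b', 'c', 'a']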
\n\n
.venv\Lib\site-packages\pandas\api\types\__pycache__\__init__.cpython-313.pyc
__init__.cpython-313.pyc
Other
602
0.7
0
0
react-lib
283
2025-02-02T14:24:55.174157
Apache-2.0
false
71827eedfe33b22ee524be7f6038fd88
"""\nPublic API classes that store intermediate results useful for type-hinting.\n"""\n\nfrom pandas._libs import NaTType\nfrom pandas._libs.missing import NAType\n\nfrom pandas.core.groupby import (\n DataFrameGroupBy,\n SeriesGroupBy,\n)\nfrom pandas.core.resample import (\n DatetimeIndexResamplerGroupby,\n PeriodIndexResamplerGroupby,\n Resampler,\n TimedeltaIndexResamplerGroupby,\n TimeGrouper,\n)\nfrom pandas.core.window import (\n Expanding,\n ExpandingGroupby,\n ExponentialMovingWindow,\n ExponentialMovingWindowGroupby,\n Rolling,\n RollingGroupby,\n Window,\n)\n\n# TODO: Can't import Styler without importing jinja2\n# from pandas.io.formats.style import Styler\nfrom pandas.io.json._json import JsonReader\nfrom pandas.io.stata import StataReader\n\n__all__ = [\n "DataFrameGroupBy",\n "DatetimeIndexResamplerGroupby",\n "Expanding",\n "ExpandingGroupby",\n "ExponentialMovingWindow",\n "ExponentialMovingWindowGroupby",\n "JsonReader",\n "NaTType",\n "NAType",\n "PeriodIndexResamplerGroupby",\n "Resampler",\n "Rolling",\n "RollingGroupby",\n "SeriesGroupBy",\n "StataReader",\n # See TODO above\n # "Styler",\n "TimedeltaIndexResamplerGroupby",\n "TimeGrouper",\n "Window",\n]\n
.venv\Lib\site-packages\pandas\api\typing\__init__.py
__init__.py
Python
1,244
0.95
0.018182
0.078431
react-lib
441
2024-09-29T03:07:15.385153
Apache-2.0
false
91435470c986678388e305977c2bf91a
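A small sketch of the type-hint-only classes exported above; top_rows is a hypothetical helper showing how intermediate groupby objects can be annotated without importing private modules.

from __future__ import annotations

import pandas as pd
from pandas.api.typing import DataFrameGroupBy

def top_rows(gb: DataFrameGroupBy, n: int = 1) -> pd.DataFrame:
    # annotate the intermediate groupby result via the public typing module
    return gb.head(n)

df = pd.DataFrame({"key": ["a", "a", "b"], "val": [1, 2, 3]})
print(top_rows(df.groupby("key")))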
\n\n
.venv\Lib\site-packages\pandas\api\typing\__pycache__\__init__.cpython-313.pyc
__init__.cpython-313.pyc
Other
1,133
0.7
0.066667
0
python-kit
925
2024-12-13T19:52:25.824643
GPL-3.0
false
8ba1bcad523209c2f0c3f10c04434835
\n\n
.venv\Lib\site-packages\pandas\api\__pycache__\__init__.cpython-313.pyc
__init__.cpython-313.pyc
Other
402
0.7
0
0
awesome-app
585
2024-04-28T14:54:37.636841
Apache-2.0
false
f61faf7e1172c50849b73f4ac26964be
"""\nAll of pandas' ExtensionArrays.\n\nSee :ref:`extending.extension-types` for more.\n"""\nfrom pandas.core.arrays import (\n ArrowExtensionArray,\n ArrowStringArray,\n BooleanArray,\n Categorical,\n DatetimeArray,\n FloatingArray,\n IntegerArray,\n IntervalArray,\n NumpyExtensionArray,\n PeriodArray,\n SparseArray,\n StringArray,\n TimedeltaArray,\n)\n\n__all__ = [\n "ArrowExtensionArray",\n "ArrowStringArray",\n "BooleanArray",\n "Categorical",\n "DatetimeArray",\n "FloatingArray",\n "IntegerArray",\n "IntervalArray",\n "NumpyExtensionArray",\n "PeriodArray",\n "SparseArray",\n "StringArray",\n "TimedeltaArray",\n]\n\n\ndef __getattr__(name: str) -> type[NumpyExtensionArray]:\n if name == "PandasArray":\n # GH#53694\n import warnings\n\n from pandas.util._exceptions import find_stack_level\n\n warnings.warn(\n "PandasArray has been renamed NumpyExtensionArray. Use that "\n "instead. This alias will be removed in a future version.",\n FutureWarning,\n stacklevel=find_stack_level(),\n )\n return NumpyExtensionArray\n raise AttributeError(f"module 'pandas.arrays' has no attribute '{name}'")\n
.venv\Lib\site-packages\pandas\arrays\__init__.py
__init__.py
Python
1,227
0.95
0.056604
0.021277
python-kit
615
2023-09-10T02:37:57.040431
Apache-2.0
false
2b7c7ea2eb664a3cbfd994e2b2d02ad3
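The module-level __getattr__ above keeps the old PandasArray name importable while emitting a FutureWarning; a quick check of that behaviour (valid only while the alias still exists):

import warnings
import pandas.arrays

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    cls = pandas.arrays.PandasArray               # resolved through __getattr__
print(cls is pandas.arrays.NumpyExtensionArray)   # True
print(caught[0].category)                         # <class 'FutureWarning'>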
\n\n
.venv\Lib\site-packages\pandas\arrays\__pycache__\__init__.cpython-313.pyc
__init__.cpython-313.pyc
Other
1,360
0.8
0.035714
0
awesome-app
585
2025-03-08T15:20:51.265213
Apache-2.0
false
6a9f97923cacd6d4889a8d6e025def31
"""\nPatched ``BZ2File`` and ``LZMAFile`` to handle pickle protocol 5.\n"""\n\nfrom __future__ import annotations\n\nfrom pickle import PickleBuffer\n\nfrom pandas.compat._constants import PY310\n\ntry:\n import bz2\n\n has_bz2 = True\nexcept ImportError:\n has_bz2 = False\n\ntry:\n import lzma\n\n has_lzma = True\nexcept ImportError:\n has_lzma = False\n\n\ndef flatten_buffer(\n b: bytes | bytearray | memoryview | PickleBuffer,\n) -> bytes | bytearray | memoryview:\n """\n Return some 1-D `uint8` typed buffer.\n\n Coerces anything that does not match that description to one that does\n without copying if possible (otherwise will copy).\n """\n\n if isinstance(b, (bytes, bytearray)):\n return b\n\n if not isinstance(b, PickleBuffer):\n b = PickleBuffer(b)\n\n try:\n # coerce to 1-D `uint8` C-contiguous `memoryview` zero-copy\n return b.raw()\n except BufferError:\n # perform in-memory copy if buffer is not contiguous\n return memoryview(b).tobytes("A")\n\n\nif has_bz2:\n\n class BZ2File(bz2.BZ2File):\n if not PY310:\n\n def write(self, b) -> int:\n # Workaround issue where `bz2.BZ2File` expects `len`\n # to return the number of bytes in `b` by converting\n # `b` into something that meets that constraint with\n # minimal copying.\n #\n # Note: This is fixed in Python 3.10.\n return super().write(flatten_buffer(b))\n\n\nif has_lzma:\n\n class LZMAFile(lzma.LZMAFile):\n if not PY310:\n\n def write(self, b) -> int:\n # Workaround issue where `lzma.LZMAFile` expects `len`\n # to return the number of bytes in `b` by converting\n # `b` into something that meets that constraint with\n # minimal copying.\n #\n # Note: This is fixed in Python 3.10.\n return super().write(flatten_buffer(b))\n
.venv\Lib\site-packages\pandas\compat\compressors.py
compressors.py
Python
1,975
0.95
0.207792
0.25
awesome-app
954
2024-03-27T22:12:59.620512
GPL-3.0
false
525f96c94218429fd6423572cf8d5576
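A quick sketch of flatten_buffer from the compressors module above: contiguous bytes pass through unchanged, while a non-contiguous view is copied into a flat bytes object.

from pandas.compat.compressors import flatten_buffer

data = bytes(range(8))
print(flatten_buffer(data) is data)     # True: bytes are returned as-is
mv = memoryview(bytes(range(16)))[::2]  # 1-D but non-contiguous view
flat = flatten_buffer(mv)
print(type(flat), bytes(flat))          # <class 'bytes'> b'\x00\x02\x04\x06\x08\n\x0c\x0e'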
"""\nSupport pre-0.12 series pickle compatibility.\n"""\nfrom __future__ import annotations\n\nimport contextlib\nimport copy\nimport io\nimport pickle as pkl\nfrom typing import TYPE_CHECKING\n\nimport numpy as np\n\nfrom pandas._libs.arrays import NDArrayBacked\nfrom pandas._libs.tslibs import BaseOffset\n\nfrom pandas import Index\nfrom pandas.core.arrays import (\n DatetimeArray,\n PeriodArray,\n TimedeltaArray,\n)\nfrom pandas.core.internals import BlockManager\n\nif TYPE_CHECKING:\n from collections.abc import Generator\n\n\ndef load_reduce(self) -> None:\n stack = self.stack\n args = stack.pop()\n func = stack[-1]\n\n try:\n stack[-1] = func(*args)\n return\n except TypeError as err:\n # If we have a deprecated function,\n # try to replace and try again.\n\n msg = "_reconstruct: First argument must be a sub-type of ndarray"\n\n if msg in str(err):\n try:\n cls = args[0]\n stack[-1] = object.__new__(cls)\n return\n except TypeError:\n pass\n elif args and isinstance(args[0], type) and issubclass(args[0], BaseOffset):\n # TypeError: object.__new__(Day) is not safe, use Day.__new__()\n cls = args[0]\n stack[-1] = cls.__new__(*args)\n return\n elif args and issubclass(args[0], PeriodArray):\n cls = args[0]\n stack[-1] = NDArrayBacked.__new__(*args)\n return\n\n raise\n\n\n# If classes are moved, provide compat here.\n_class_locations_map = {\n ("pandas.core.sparse.array", "SparseArray"): ("pandas.core.arrays", "SparseArray"),\n # 15477\n ("pandas.core.base", "FrozenNDArray"): ("numpy", "ndarray"),\n # Re-routing unpickle block logic to go through _unpickle_block instead\n # for pandas <= 1.3.5\n ("pandas.core.internals.blocks", "new_block"): (\n "pandas._libs.internals",\n "_unpickle_block",\n ),\n ("pandas.core.indexes.frozen", "FrozenNDArray"): ("numpy", "ndarray"),\n ("pandas.core.base", "FrozenList"): ("pandas.core.indexes.frozen", "FrozenList"),\n # 10890\n ("pandas.core.series", "TimeSeries"): ("pandas.core.series", "Series"),\n ("pandas.sparse.series", "SparseTimeSeries"): (\n "pandas.core.sparse.series",\n "SparseSeries",\n ),\n # 12588, extensions moving\n ("pandas._sparse", "BlockIndex"): ("pandas._libs.sparse", "BlockIndex"),\n ("pandas.tslib", "Timestamp"): ("pandas._libs.tslib", "Timestamp"),\n # 18543 moving period\n ("pandas._period", "Period"): ("pandas._libs.tslibs.period", "Period"),\n ("pandas._libs.period", "Period"): ("pandas._libs.tslibs.period", "Period"),\n # 18014 moved __nat_unpickle from _libs.tslib-->_libs.tslibs.nattype\n ("pandas.tslib", "__nat_unpickle"): (\n "pandas._libs.tslibs.nattype",\n "__nat_unpickle",\n ),\n ("pandas._libs.tslib", "__nat_unpickle"): (\n "pandas._libs.tslibs.nattype",\n "__nat_unpickle",\n ),\n # 15998 top-level dirs moving\n ("pandas.sparse.array", "SparseArray"): (\n "pandas.core.arrays.sparse",\n "SparseArray",\n ),\n ("pandas.indexes.base", "_new_Index"): ("pandas.core.indexes.base", "_new_Index"),\n ("pandas.indexes.base", "Index"): ("pandas.core.indexes.base", "Index"),\n ("pandas.indexes.numeric", "Int64Index"): (\n "pandas.core.indexes.base",\n "Index", # updated in 50775\n ),\n ("pandas.indexes.range", "RangeIndex"): ("pandas.core.indexes.range", "RangeIndex"),\n ("pandas.indexes.multi", "MultiIndex"): ("pandas.core.indexes.multi", "MultiIndex"),\n ("pandas.tseries.index", "_new_DatetimeIndex"): (\n "pandas.core.indexes.datetimes",\n "_new_DatetimeIndex",\n ),\n ("pandas.tseries.index", "DatetimeIndex"): (\n "pandas.core.indexes.datetimes",\n "DatetimeIndex",\n ),\n ("pandas.tseries.period", "PeriodIndex"): (\n 
"pandas.core.indexes.period",\n "PeriodIndex",\n ),\n # 19269, arrays moving\n ("pandas.core.categorical", "Categorical"): ("pandas.core.arrays", "Categorical"),\n # 19939, add timedeltaindex, float64index compat from 15998 move\n ("pandas.tseries.tdi", "TimedeltaIndex"): (\n "pandas.core.indexes.timedeltas",\n "TimedeltaIndex",\n ),\n ("pandas.indexes.numeric", "Float64Index"): (\n "pandas.core.indexes.base",\n "Index", # updated in 50775\n ),\n # 50775, remove Int64Index, UInt64Index & Float64Index from codabase\n ("pandas.core.indexes.numeric", "Int64Index"): (\n "pandas.core.indexes.base",\n "Index",\n ),\n ("pandas.core.indexes.numeric", "UInt64Index"): (\n "pandas.core.indexes.base",\n "Index",\n ),\n ("pandas.core.indexes.numeric", "Float64Index"): (\n "pandas.core.indexes.base",\n "Index",\n ),\n ("pandas.core.arrays.sparse.dtype", "SparseDtype"): (\n "pandas.core.dtypes.dtypes",\n "SparseDtype",\n ),\n}\n\n\n# our Unpickler sub-class to override methods and some dispatcher\n# functions for compat and uses a non-public class of the pickle module.\n\n\nclass Unpickler(pkl._Unpickler):\n def find_class(self, module, name):\n # override superclass\n key = (module, name)\n module, name = _class_locations_map.get(key, key)\n return super().find_class(module, name)\n\n\nUnpickler.dispatch = copy.copy(Unpickler.dispatch)\nUnpickler.dispatch[pkl.REDUCE[0]] = load_reduce\n\n\ndef load_newobj(self) -> None:\n args = self.stack.pop()\n cls = self.stack[-1]\n\n # compat\n if issubclass(cls, Index):\n obj = object.__new__(cls)\n elif issubclass(cls, DatetimeArray) and not args:\n arr = np.array([], dtype="M8[ns]")\n obj = cls.__new__(cls, arr, arr.dtype)\n elif issubclass(cls, TimedeltaArray) and not args:\n arr = np.array([], dtype="m8[ns]")\n obj = cls.__new__(cls, arr, arr.dtype)\n elif cls is BlockManager and not args:\n obj = cls.__new__(cls, (), [], False)\n else:\n obj = cls.__new__(cls, *args)\n\n self.stack[-1] = obj\n\n\nUnpickler.dispatch[pkl.NEWOBJ[0]] = load_newobj\n\n\ndef load_newobj_ex(self) -> None:\n kwargs = self.stack.pop()\n args = self.stack.pop()\n cls = self.stack.pop()\n\n # compat\n if issubclass(cls, Index):\n obj = object.__new__(cls)\n else:\n obj = cls.__new__(cls, *args, **kwargs)\n self.append(obj)\n\n\ntry:\n Unpickler.dispatch[pkl.NEWOBJ_EX[0]] = load_newobj_ex\nexcept (AttributeError, KeyError):\n pass\n\n\ndef load(fh, encoding: str | None = None, is_verbose: bool = False):\n """\n Load a pickle, with a provided encoding,\n\n Parameters\n ----------\n fh : a filelike object\n encoding : an optional encoding\n is_verbose : show exception output\n """\n try:\n fh.seek(0)\n if encoding is not None:\n up = Unpickler(fh, encoding=encoding)\n else:\n up = Unpickler(fh)\n # "Unpickler" has no attribute "is_verbose" [attr-defined]\n up.is_verbose = is_verbose # type: ignore[attr-defined]\n\n return up.load()\n except (ValueError, TypeError):\n raise\n\n\ndef loads(\n bytes_object: bytes,\n *,\n fix_imports: bool = True,\n encoding: str = "ASCII",\n errors: str = "strict",\n):\n """\n Analogous to pickle._loads.\n """\n fd = io.BytesIO(bytes_object)\n return Unpickler(\n fd, fix_imports=fix_imports, encoding=encoding, errors=errors\n ).load()\n\n\n@contextlib.contextmanager\ndef patch_pickle() -> Generator[None, None, None]:\n """\n Temporarily patch pickle to use our unpickler.\n """\n orig_loads = pkl.loads\n try:\n setattr(pkl, "loads", loads)\n yield\n finally:\n setattr(pkl, "loads", orig_loads)\n
.venv\Lib\site-packages\pandas\compat\pickle_compat.py
pickle_compat.py
Python
7,723
0.95
0.09542
0.098214
react-lib
441
2025-01-12T17:15:13.439112
BSD-3-Clause
false
c0ba4921f44a56dc0bd9c631fbeb91e0
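A hedged usage sketch of the patch_pickle context manager above: inside the block, pickle.loads is temporarily routed through the compatibility Unpickler, so pickles referencing relocated pandas classes can still load; a freshly created pickle is used here only to keep the example self-contained.

import pickle

import pandas as pd
from pandas.compat.pickle_compat import patch_pickle

payload = pickle.dumps(pd.Series([1, 2, 3]))
with patch_pickle():             # pickle.loads now uses the compat Unpickler
    s = pickle.loads(payload)
print(s.sum())                   # 6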
""" support pyarrow compatibility across versions """\n\nfrom __future__ import annotations\n\nfrom pandas.util.version import Version\n\ntry:\n import pyarrow as pa\n\n _palv = Version(Version(pa.__version__).base_version)\n pa_version_under10p1 = _palv < Version("10.0.1")\n pa_version_under11p0 = _palv < Version("11.0.0")\n pa_version_under12p0 = _palv < Version("12.0.0")\n pa_version_under13p0 = _palv < Version("13.0.0")\n pa_version_under14p0 = _palv < Version("14.0.0")\n pa_version_under14p1 = _palv < Version("14.0.1")\n pa_version_under15p0 = _palv < Version("15.0.0")\n pa_version_under16p0 = _palv < Version("16.0.0")\n pa_version_under17p0 = _palv < Version("17.0.0")\n pa_version_under18p0 = _palv < Version("18.0.0")\n pa_version_under19p0 = _palv < Version("19.0.0")\n pa_version_under20p0 = _palv < Version("20.0.0")\n HAS_PYARROW = True\nexcept ImportError:\n pa_version_under10p1 = True\n pa_version_under11p0 = True\n pa_version_under12p0 = True\n pa_version_under13p0 = True\n pa_version_under14p0 = True\n pa_version_under14p1 = True\n pa_version_under15p0 = True\n pa_version_under16p0 = True\n pa_version_under17p0 = True\n pa_version_under18p0 = True\n pa_version_under19p0 = True\n pa_version_under20p0 = True\n HAS_PYARROW = False\n
.venv\Lib\site-packages\pandas\compat\pyarrow.py
pyarrow.py
Python
1,308
0.85
0.027027
0
react-lib
183
2024-07-16T09:01:46.749865
BSD-3-Clause
false
978d0d6d256fc28e5ba5dd717cd83baf
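A short sketch of how the version flags above are meant to be consumed: gate optional pyarrow code paths on HAS_PYARROW plus the relevant pa_version_under* flag.

from pandas.compat.pyarrow import HAS_PYARROW, pa_version_under14p0

if HAS_PYARROW and not pa_version_under14p0:
    import pyarrow as pa
    print("pyarrow", pa.__version__, "- features gated on >= 14.0 are safe to use")
else:
    print("taking the non-pyarrow fallback path")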
"""\n_constants\n======\n\nConstants relevant for the Python implementation.\n"""\n\nfrom __future__ import annotations\n\nimport platform\nimport sys\nimport sysconfig\n\nIS64 = sys.maxsize > 2**32\n\nPY310 = sys.version_info >= (3, 10)\nPY311 = sys.version_info >= (3, 11)\nPY312 = sys.version_info >= (3, 12)\nPYPY = platform.python_implementation() == "PyPy"\nISMUSL = "musl" in (sysconfig.get_config_var("HOST_GNU_TYPE") or "")\nREF_COUNT = 2 if PY311 else 3\n\n__all__ = [\n "IS64",\n "ISMUSL",\n "PY310",\n "PY311",\n "PY312",\n "PYPY",\n]\n
.venv\Lib\site-packages\pandas\compat\_constants.py
_constants.py
Python
536
0.85
0.066667
0
vue-tools
245
2024-09-24T01:44:33.332571
BSD-3-Clause
false
3fc51697cc2964be6ed0cf86143fefbc
from __future__ import annotations\n\nimport importlib\nimport sys\nfrom typing import TYPE_CHECKING\nimport warnings\n\nfrom pandas.util._exceptions import find_stack_level\n\nfrom pandas.util.version import Version\n\nif TYPE_CHECKING:\n import types\n\n# Update install.rst & setup.cfg when updating versions!\n\nVERSIONS = {\n "adbc-driver-postgresql": "0.8.0",\n "adbc-driver-sqlite": "0.8.0",\n "bs4": "4.11.2",\n "blosc": "1.21.3",\n "bottleneck": "1.3.6",\n "dataframe-api-compat": "0.1.7",\n "fastparquet": "2022.12.0",\n "fsspec": "2022.11.0",\n "html5lib": "1.1",\n "hypothesis": "6.46.1",\n "gcsfs": "2022.11.0",\n "jinja2": "3.1.2",\n "lxml.etree": "4.9.2",\n "matplotlib": "3.6.3",\n "numba": "0.56.4",\n "numexpr": "2.8.4",\n "odfpy": "1.4.1",\n "openpyxl": "3.1.0",\n "pandas_gbq": "0.19.0",\n "psycopg2": "2.9.6", # (dt dec pq3 ext lo64)\n "pymysql": "1.0.2",\n "pyarrow": "10.0.1",\n "pyreadstat": "1.2.0",\n "pytest": "7.3.2",\n "python-calamine": "0.1.7",\n "pyxlsb": "1.0.10",\n "s3fs": "2022.11.0",\n "scipy": "1.10.0",\n "sqlalchemy": "2.0.0",\n "tables": "3.8.0",\n "tabulate": "0.9.0",\n "xarray": "2022.12.0",\n "xlrd": "2.0.1",\n "xlsxwriter": "3.0.5",\n "zstandard": "0.19.0",\n "tzdata": "2022.7",\n "qtpy": "2.3.0",\n "pyqt5": "5.15.9",\n}\n\n# A mapping from import name to package name (on PyPI) for packages where\n# these two names are different.\n\nINSTALL_MAPPING = {\n "bs4": "beautifulsoup4",\n "bottleneck": "Bottleneck",\n "jinja2": "Jinja2",\n "lxml.etree": "lxml",\n "odf": "odfpy",\n "pandas_gbq": "pandas-gbq",\n "python_calamine": "python-calamine",\n "sqlalchemy": "SQLAlchemy",\n "tables": "pytables",\n}\n\n\ndef get_version(module: types.ModuleType) -> str:\n version = getattr(module, "__version__", None)\n\n if version is None:\n raise ImportError(f"Can't determine version for {module.__name__}")\n if module.__name__ == "psycopg2":\n # psycopg2 appends " (dt dec pq3 ext lo64)" to it's version\n version = version.split()[0]\n return version\n\n\ndef import_optional_dependency(\n name: str,\n extra: str = "",\n errors: str = "raise",\n min_version: str | None = None,\n):\n """\n Import an optional dependency.\n\n By default, if a dependency is missing an ImportError with a nice\n message will be raised. If a dependency is present, but too old,\n we raise.\n\n Parameters\n ----------\n name : str\n The module name.\n extra : str\n Additional text to include in the ImportError message.\n errors : str {'raise', 'warn', 'ignore'}\n What to do when a dependency is not found or its version is too old.\n\n * raise : Raise an ImportError\n * warn : Only applicable when a module's version is to old.\n Warns that the version is too old and returns None\n * ignore: If the module is not installed, return None, otherwise,\n return the module, even if the version is too old.\n It's expected that users validate the version locally when\n using ``errors="ignore"`` (see. 
``io/html.py``)\n min_version : str, default None\n Specify a minimum version that is different from the global pandas\n minimum version required.\n Returns\n -------\n maybe_module : Optional[ModuleType]\n The imported module, when found and the version is correct.\n None is returned when the package is not found and `errors`\n is False, or when the package's version is too old and `errors`\n is ``'warn'`` or ``'ignore'``.\n """\n assert errors in {"warn", "raise", "ignore"}\n\n package_name = INSTALL_MAPPING.get(name)\n install_name = package_name if package_name is not None else name\n\n msg = (\n f"Missing optional dependency '{install_name}'. {extra} "\n f"Use pip or conda to install {install_name}."\n )\n try:\n module = importlib.import_module(name)\n except ImportError:\n if errors == "raise":\n raise ImportError(msg)\n return None\n\n # Handle submodules: if we have submodule, grab parent module from sys.modules\n parent = name.split(".")[0]\n if parent != name:\n install_name = parent\n module_to_get = sys.modules[install_name]\n else:\n module_to_get = module\n minimum_version = min_version if min_version is not None else VERSIONS.get(parent)\n if minimum_version:\n version = get_version(module_to_get)\n if version and Version(version) < Version(minimum_version):\n msg = (\n f"Pandas requires version '{minimum_version}' or newer of '{parent}' "\n f"(version '{version}' currently installed)."\n )\n if errors == "warn":\n warnings.warn(\n msg,\n UserWarning,\n stacklevel=find_stack_level(),\n )\n return None\n elif errors == "raise":\n raise ImportError(msg)\n else:\n return None\n\n return module\n
.venv\Lib\site-packages\pandas\compat\_optional.py
_optional.py
Python
5,089
0.95
0.107143
0.054054
node-utils
717
2024-03-30T22:22:36.826062
BSD-3-Clause
false
2a8ab9319ffe2aeb97e35b1638ad23ec
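A minimal usage sketch of import_optional_dependency above; openpyxl is just one of the packages listed in VERSIONS.

from pandas.compat._optional import import_optional_dependency

# errors="ignore": a missing package yields None instead of raising ImportError;
# an installed-but-too-old package is still returned (callers validate locally).
openpyxl = import_optional_dependency("openpyxl", errors="ignore")
if openpyxl is None:
    print("openpyxl not installed; Excel writing unavailable")
else:
    print("openpyxl", openpyxl.__version__)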
"""\ncompat\n======\n\nCross-compatible functions for different versions of Python.\n\nOther items:\n* platform checker\n"""\nfrom __future__ import annotations\n\nimport os\nimport platform\nimport sys\nfrom typing import TYPE_CHECKING\n\nfrom pandas.compat._constants import (\n IS64,\n ISMUSL,\n PY310,\n PY311,\n PY312,\n PYPY,\n)\nimport pandas.compat.compressors\nfrom pandas.compat.numpy import is_numpy_dev\nfrom pandas.compat.pyarrow import (\n HAS_PYARROW,\n pa_version_under10p1,\n pa_version_under11p0,\n pa_version_under13p0,\n pa_version_under14p0,\n pa_version_under14p1,\n pa_version_under16p0,\n pa_version_under17p0,\n pa_version_under18p0,\n pa_version_under19p0,\n pa_version_under20p0,\n)\n\nif TYPE_CHECKING:\n from pandas._typing import F\n\n\ndef set_function_name(f: F, name: str, cls: type) -> F:\n """\n Bind the name/qualname attributes of the function.\n """\n f.__name__ = name\n f.__qualname__ = f"{cls.__name__}.{name}"\n f.__module__ = cls.__module__\n return f\n\n\ndef is_platform_little_endian() -> bool:\n """\n Checking if the running platform is little endian.\n\n Returns\n -------\n bool\n True if the running platform is little endian.\n """\n return sys.byteorder == "little"\n\n\ndef is_platform_windows() -> bool:\n """\n Checking if the running platform is windows.\n\n Returns\n -------\n bool\n True if the running platform is windows.\n """\n return sys.platform in ["win32", "cygwin"]\n\n\ndef is_platform_linux() -> bool:\n """\n Checking if the running platform is linux.\n\n Returns\n -------\n bool\n True if the running platform is linux.\n """\n return sys.platform == "linux"\n\n\ndef is_platform_mac() -> bool:\n """\n Checking if the running platform is mac.\n\n Returns\n -------\n bool\n True if the running platform is mac.\n """\n return sys.platform == "darwin"\n\n\ndef is_platform_arm() -> bool:\n """\n Checking if the running platform use ARM architecture.\n\n Returns\n -------\n bool\n True if the running platform uses ARM architecture.\n """\n return platform.machine() in ("arm64", "aarch64") or platform.machine().startswith(\n "armv"\n )\n\n\ndef is_platform_power() -> bool:\n """\n Checking if the running platform use Power architecture.\n\n Returns\n -------\n bool\n True if the running platform uses ARM architecture.\n """\n return platform.machine() in ("ppc64", "ppc64le")\n\n\ndef is_ci_environment() -> bool:\n """\n Checking if running in a continuous integration environment by checking\n the PANDAS_CI environment variable.\n\n Returns\n -------\n bool\n True if the running in a continuous integration environment.\n """\n return os.environ.get("PANDAS_CI", "0") == "1"\n\n\ndef get_lzma_file() -> type[pandas.compat.compressors.LZMAFile]:\n """\n Importing the `LZMAFile` class from the `lzma` module.\n\n Returns\n -------\n class\n The `LZMAFile` class from the `lzma` module.\n\n Raises\n ------\n RuntimeError\n If the `lzma` module was not imported correctly, or didn't exist.\n """\n if not pandas.compat.compressors.has_lzma:\n raise RuntimeError(\n "lzma module not available. 
"\n "A Python re-install with the proper dependencies, "\n "might be required to solve this issue."\n )\n return pandas.compat.compressors.LZMAFile\n\n\ndef get_bz2_file() -> type[pandas.compat.compressors.BZ2File]:\n """\n Importing the `BZ2File` class from the `bz2` module.\n\n Returns\n -------\n class\n The `BZ2File` class from the `bz2` module.\n\n Raises\n ------\n RuntimeError\n If the `bz2` module was not imported correctly, or didn't exist.\n """\n if not pandas.compat.compressors.has_bz2:\n raise RuntimeError(\n "bz2 module not available. "\n "A Python re-install with the proper dependencies, "\n "might be required to solve this issue."\n )\n return pandas.compat.compressors.BZ2File\n\n\n__all__ = [\n "is_numpy_dev",\n "pa_version_under10p1",\n "pa_version_under11p0",\n "pa_version_under13p0",\n "pa_version_under14p0",\n "pa_version_under14p1",\n "pa_version_under16p0",\n "pa_version_under17p0",\n "pa_version_under18p0",\n "pa_version_under19p0",\n "pa_version_under20p0",\n "HAS_PYARROW",\n "IS64",\n "ISMUSL",\n "PY310",\n "PY311",\n "PY312",\n "PYPY",\n]\n
.venv\Lib\site-packages\pandas\compat\__init__.py
__init__.py
Python
4,478
0.85
0.169082
0.005917
node-utils
786
2025-07-03T09:40:34.044942
GPL-3.0
false
03099046c2a97d5c982233dbf52317a3
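A quick sketch of the cross-version/platform helpers defined in pandas.compat above; the printed values naturally depend on the interpreter and machine running the snippet.

from pandas.compat import IS64, PY311, PYPY
from pandas.compat import is_platform_arm, is_platform_windows

print("64-bit interpreter:", IS64)
print("Python >= 3.11:", PY311, " PyPy:", PYPY)
print("Windows:", is_platform_windows(), " ARM:", is_platform_arm())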
"""\nFor compatibility with numpy libraries, pandas functions or methods have to\naccept '*args' and '**kwargs' parameters to accommodate numpy arguments that\nare not actually used or respected in the pandas implementation.\n\nTo ensure that users do not abuse these parameters, validation is performed in\n'validators.py' to make sure that any extra parameters passed correspond ONLY\nto those in the numpy signature. Part of that validation includes whether or\nnot the user attempted to pass in non-default values for these extraneous\nparameters. As we want to discourage users from relying on these parameters\nwhen calling the pandas implementation, we want them only to pass in the\ndefault values for these parameters.\n\nThis module provides a set of commonly used default arguments for functions and\nmethods that are spread throughout the codebase. This module will make it\neasier to adjust to future upstream changes in the analogous numpy signatures.\n"""\nfrom __future__ import annotations\n\nfrom typing import (\n TYPE_CHECKING,\n Any,\n TypeVar,\n cast,\n overload,\n)\n\nimport numpy as np\nfrom numpy import ndarray\n\nfrom pandas._libs.lib import (\n is_bool,\n is_integer,\n)\nfrom pandas.errors import UnsupportedFunctionCall\nfrom pandas.util._validators import (\n validate_args,\n validate_args_and_kwargs,\n validate_kwargs,\n)\n\nif TYPE_CHECKING:\n from pandas._typing import (\n Axis,\n AxisInt,\n )\n\n AxisNoneT = TypeVar("AxisNoneT", Axis, None)\n\n\nclass CompatValidator:\n def __init__(\n self,\n defaults,\n fname=None,\n method: str | None = None,\n max_fname_arg_count=None,\n ) -> None:\n self.fname = fname\n self.method = method\n self.defaults = defaults\n self.max_fname_arg_count = max_fname_arg_count\n\n def __call__(\n self,\n args,\n kwargs,\n fname=None,\n max_fname_arg_count=None,\n method: str | None = None,\n ) -> None:\n if not args and not kwargs:\n return None\n\n fname = self.fname if fname is None else fname\n max_fname_arg_count = (\n self.max_fname_arg_count\n if max_fname_arg_count is None\n else max_fname_arg_count\n )\n method = self.method if method is None else method\n\n if method == "args":\n validate_args(fname, args, max_fname_arg_count, self.defaults)\n elif method == "kwargs":\n validate_kwargs(fname, kwargs, self.defaults)\n elif method == "both":\n validate_args_and_kwargs(\n fname, args, kwargs, max_fname_arg_count, self.defaults\n )\n else:\n raise ValueError(f"invalid validation method '{method}'")\n\n\nARGMINMAX_DEFAULTS = {"out": None}\nvalidate_argmin = CompatValidator(\n ARGMINMAX_DEFAULTS, fname="argmin", method="both", max_fname_arg_count=1\n)\nvalidate_argmax = CompatValidator(\n ARGMINMAX_DEFAULTS, fname="argmax", method="both", max_fname_arg_count=1\n)\n\n\ndef process_skipna(skipna: bool | ndarray | None, args) -> tuple[bool, Any]:\n if isinstance(skipna, ndarray) or skipna is None:\n args = (skipna,) + args\n skipna = True\n\n return skipna, args\n\n\ndef validate_argmin_with_skipna(skipna: bool | ndarray | None, args, kwargs) -> bool:\n """\n If 'Series.argmin' is called via the 'numpy' library, the third parameter\n in its signature is 'out', which takes either an ndarray or 'None', so\n check if the 'skipna' parameter is either an instance of ndarray or is\n None, since 'skipna' itself should be a boolean\n """\n skipna, args = process_skipna(skipna, args)\n validate_argmin(args, kwargs)\n return skipna\n\n\ndef validate_argmax_with_skipna(skipna: bool | ndarray | None, args, kwargs) -> bool:\n """\n If 'Series.argmax' is called 
via the 'numpy' library, the third parameter\n in its signature is 'out', which takes either an ndarray or 'None', so\n check if the 'skipna' parameter is either an instance of ndarray or is\n None, since 'skipna' itself should be a boolean\n """\n skipna, args = process_skipna(skipna, args)\n validate_argmax(args, kwargs)\n return skipna\n\n\nARGSORT_DEFAULTS: dict[str, int | str | None] = {}\nARGSORT_DEFAULTS["axis"] = -1\nARGSORT_DEFAULTS["kind"] = "quicksort"\nARGSORT_DEFAULTS["order"] = None\nARGSORT_DEFAULTS["kind"] = None\nARGSORT_DEFAULTS["stable"] = None\n\n\nvalidate_argsort = CompatValidator(\n ARGSORT_DEFAULTS, fname="argsort", max_fname_arg_count=0, method="both"\n)\n\n# two different signatures of argsort, this second validation for when the\n# `kind` param is supported\nARGSORT_DEFAULTS_KIND: dict[str, int | None] = {}\nARGSORT_DEFAULTS_KIND["axis"] = -1\nARGSORT_DEFAULTS_KIND["order"] = None\nARGSORT_DEFAULTS_KIND["stable"] = None\nvalidate_argsort_kind = CompatValidator(\n ARGSORT_DEFAULTS_KIND, fname="argsort", max_fname_arg_count=0, method="both"\n)\n\n\ndef validate_argsort_with_ascending(ascending: bool | int | None, args, kwargs) -> bool:\n """\n If 'Categorical.argsort' is called via the 'numpy' library, the first\n parameter in its signature is 'axis', which takes either an integer or\n 'None', so check if the 'ascending' parameter has either integer type or is\n None, since 'ascending' itself should be a boolean\n """\n if is_integer(ascending) or ascending is None:\n args = (ascending,) + args\n ascending = True\n\n validate_argsort_kind(args, kwargs, max_fname_arg_count=3)\n ascending = cast(bool, ascending)\n return ascending\n\n\nCLIP_DEFAULTS: dict[str, Any] = {"out": None}\nvalidate_clip = CompatValidator(\n CLIP_DEFAULTS, fname="clip", method="both", max_fname_arg_count=3\n)\n\n\n@overload\ndef validate_clip_with_axis(axis: ndarray, args, kwargs) -> None:\n ...\n\n\n@overload\ndef validate_clip_with_axis(axis: AxisNoneT, args, kwargs) -> AxisNoneT:\n ...\n\n\ndef validate_clip_with_axis(\n axis: ndarray | AxisNoneT, args, kwargs\n) -> AxisNoneT | None:\n """\n If 'NDFrame.clip' is called via the numpy library, the third parameter in\n its signature is 'out', which can takes an ndarray, so check if the 'axis'\n parameter is an instance of ndarray, since 'axis' itself should either be\n an integer or None\n """\n if isinstance(axis, ndarray):\n args = (axis,) + args\n # error: Incompatible types in assignment (expression has type "None",\n # variable has type "Union[ndarray[Any, Any], str, int]")\n axis = None # type: ignore[assignment]\n\n validate_clip(args, kwargs)\n # error: Incompatible return value type (got "Union[ndarray[Any, Any],\n # str, int]", expected "Union[str, int, None]")\n return axis # type: ignore[return-value]\n\n\nCUM_FUNC_DEFAULTS: dict[str, Any] = {}\nCUM_FUNC_DEFAULTS["dtype"] = None\nCUM_FUNC_DEFAULTS["out"] = None\nvalidate_cum_func = CompatValidator(\n CUM_FUNC_DEFAULTS, method="both", max_fname_arg_count=1\n)\nvalidate_cumsum = CompatValidator(\n CUM_FUNC_DEFAULTS, fname="cumsum", method="both", max_fname_arg_count=1\n)\n\n\ndef validate_cum_func_with_skipna(skipna: bool, args, kwargs, name) -> bool:\n """\n If this function is called via the 'numpy' library, the third parameter in\n its signature is 'dtype', which takes either a 'numpy' dtype or 'None', so\n check if the 'skipna' parameter is a boolean or not\n """\n if not is_bool(skipna):\n args = (skipna,) + args\n skipna = True\n elif isinstance(skipna, np.bool_):\n skipna = 
bool(skipna)\n\n validate_cum_func(args, kwargs, fname=name)\n return skipna\n\n\nALLANY_DEFAULTS: dict[str, bool | None] = {}\nALLANY_DEFAULTS["dtype"] = None\nALLANY_DEFAULTS["out"] = None\nALLANY_DEFAULTS["keepdims"] = False\nALLANY_DEFAULTS["axis"] = None\nvalidate_all = CompatValidator(\n ALLANY_DEFAULTS, fname="all", method="both", max_fname_arg_count=1\n)\nvalidate_any = CompatValidator(\n ALLANY_DEFAULTS, fname="any", method="both", max_fname_arg_count=1\n)\n\nLOGICAL_FUNC_DEFAULTS = {"out": None, "keepdims": False}\nvalidate_logical_func = CompatValidator(LOGICAL_FUNC_DEFAULTS, method="kwargs")\n\nMINMAX_DEFAULTS = {"axis": None, "dtype": None, "out": None, "keepdims": False}\nvalidate_min = CompatValidator(\n MINMAX_DEFAULTS, fname="min", method="both", max_fname_arg_count=1\n)\nvalidate_max = CompatValidator(\n MINMAX_DEFAULTS, fname="max", method="both", max_fname_arg_count=1\n)\n\nRESHAPE_DEFAULTS: dict[str, str] = {"order": "C"}\nvalidate_reshape = CompatValidator(\n RESHAPE_DEFAULTS, fname="reshape", method="both", max_fname_arg_count=1\n)\n\nREPEAT_DEFAULTS: dict[str, Any] = {"axis": None}\nvalidate_repeat = CompatValidator(\n REPEAT_DEFAULTS, fname="repeat", method="both", max_fname_arg_count=1\n)\n\nROUND_DEFAULTS: dict[str, Any] = {"out": None}\nvalidate_round = CompatValidator(\n ROUND_DEFAULTS, fname="round", method="both", max_fname_arg_count=1\n)\n\nSORT_DEFAULTS: dict[str, int | str | None] = {}\nSORT_DEFAULTS["axis"] = -1\nSORT_DEFAULTS["kind"] = "quicksort"\nSORT_DEFAULTS["order"] = None\nvalidate_sort = CompatValidator(SORT_DEFAULTS, fname="sort", method="kwargs")\n\nSTAT_FUNC_DEFAULTS: dict[str, Any | None] = {}\nSTAT_FUNC_DEFAULTS["dtype"] = None\nSTAT_FUNC_DEFAULTS["out"] = None\n\nSUM_DEFAULTS = STAT_FUNC_DEFAULTS.copy()\nSUM_DEFAULTS["axis"] = None\nSUM_DEFAULTS["keepdims"] = False\nSUM_DEFAULTS["initial"] = None\n\nPROD_DEFAULTS = SUM_DEFAULTS.copy()\n\nMEAN_DEFAULTS = SUM_DEFAULTS.copy()\n\nMEDIAN_DEFAULTS = STAT_FUNC_DEFAULTS.copy()\nMEDIAN_DEFAULTS["overwrite_input"] = False\nMEDIAN_DEFAULTS["keepdims"] = False\n\nSTAT_FUNC_DEFAULTS["keepdims"] = False\n\nvalidate_stat_func = CompatValidator(STAT_FUNC_DEFAULTS, method="kwargs")\nvalidate_sum = CompatValidator(\n SUM_DEFAULTS, fname="sum", method="both", max_fname_arg_count=1\n)\nvalidate_prod = CompatValidator(\n PROD_DEFAULTS, fname="prod", method="both", max_fname_arg_count=1\n)\nvalidate_mean = CompatValidator(\n MEAN_DEFAULTS, fname="mean", method="both", max_fname_arg_count=1\n)\nvalidate_median = CompatValidator(\n MEDIAN_DEFAULTS, fname="median", method="both", max_fname_arg_count=1\n)\n\nSTAT_DDOF_FUNC_DEFAULTS: dict[str, bool | None] = {}\nSTAT_DDOF_FUNC_DEFAULTS["dtype"] = None\nSTAT_DDOF_FUNC_DEFAULTS["out"] = None\nSTAT_DDOF_FUNC_DEFAULTS["keepdims"] = False\nvalidate_stat_ddof_func = CompatValidator(STAT_DDOF_FUNC_DEFAULTS, method="kwargs")\n\nTAKE_DEFAULTS: dict[str, str | None] = {}\nTAKE_DEFAULTS["out"] = None\nTAKE_DEFAULTS["mode"] = "raise"\nvalidate_take = CompatValidator(TAKE_DEFAULTS, fname="take", method="kwargs")\n\n\ndef validate_take_with_convert(convert: ndarray | bool | None, args, kwargs) -> bool:\n """\n If this function is called via the 'numpy' library, the third parameter in\n its signature is 'axis', which takes either an ndarray or 'None', so check\n if the 'convert' parameter is either an instance of ndarray or is None\n """\n if isinstance(convert, ndarray) or convert is None:\n args = (convert,) + args\n convert = True\n\n validate_take(args, kwargs, 
max_fname_arg_count=3, method="both")\n return convert\n\n\nTRANSPOSE_DEFAULTS = {"axes": None}\nvalidate_transpose = CompatValidator(\n TRANSPOSE_DEFAULTS, fname="transpose", method="both", max_fname_arg_count=0\n)\n\n\ndef validate_groupby_func(name: str, args, kwargs, allowed=None) -> None:\n """\n 'args' and 'kwargs' should be empty, except for allowed kwargs because all\n of their necessary parameters are explicitly listed in the function\n signature\n """\n if allowed is None:\n allowed = []\n\n kwargs = set(kwargs) - set(allowed)\n\n if len(args) + len(kwargs) > 0:\n raise UnsupportedFunctionCall(\n "numpy operations are not valid with groupby. "\n f"Use .groupby(...).{name}() instead"\n )\n\n\nRESAMPLER_NUMPY_OPS = ("min", "max", "sum", "prod", "mean", "std", "var")\n\n\ndef validate_resampler_func(method: str, args, kwargs) -> None:\n """\n 'args' and 'kwargs' should be empty because all of their necessary\n parameters are explicitly listed in the function signature\n """\n if len(args) + len(kwargs) > 0:\n if method in RESAMPLER_NUMPY_OPS:\n raise UnsupportedFunctionCall(\n "numpy operations are not valid with resample. "\n f"Use .resample(...).{method}() instead"\n )\n raise TypeError("too many arguments passed in")\n\n\ndef validate_minmax_axis(axis: AxisInt | None, ndim: int = 1) -> None:\n """\n Ensure that the axis argument passed to min, max, argmin, or argmax is zero\n or None, as otherwise it will be incorrectly ignored.\n\n Parameters\n ----------\n axis : int or None\n ndim : int, default 1\n\n Raises\n ------\n ValueError\n """\n if axis is None:\n return\n if axis >= ndim or (axis < 0 and ndim + axis < 0):\n raise ValueError(f"`axis` must be fewer than the number of dimensions ({ndim})")\n\n\n_validation_funcs = {\n "median": validate_median,\n "mean": validate_mean,\n "min": validate_min,\n "max": validate_max,\n "sum": validate_sum,\n "prod": validate_prod,\n}\n\n\ndef validate_func(fname, args, kwargs) -> None:\n if fname not in _validation_funcs:\n return validate_stat_func(args, kwargs, fname=fname)\n\n validation_func = _validation_funcs[fname]\n return validation_func(args, kwargs)\n
.venv\Lib\site-packages\pandas\compat\numpy\function.py
function.py
Python
13,291
0.95
0.117225
0.017857
node-utils
406
2023-12-15T19:35:09.209507
MIT
false
4b62919e2735e66d1dd63b588ab4e50f
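The validators defined in function.py above all reduce to one rule: a numpy-compat keyword may be passed, but only with its default value. A minimal standalone sketch of that rule follows; the helper name check_numpy_compat_kwargs is hypothetical and not part of pandas, and the error wording only mirrors the style of the messages in the source above.

def check_numpy_compat_kwargs(fname, kwargs, defaults):
    """Reject any numpy-compat keyword whose value differs from its default."""
    for key, value in kwargs.items():
        if key not in defaults:
            raise TypeError(f"{fname}() got an unexpected keyword argument '{key}'")
        if not (value is defaults[key] or value == defaults[key]):
            raise ValueError(
                f"the '{key}' parameter is not supported in "
                f"the pandas implementation of {fname}()"
            )

MINMAX_DEFAULTS = {"axis": None, "dtype": None, "out": None, "keepdims": False}

check_numpy_compat_kwargs("min", {"out": None}, MINMAX_DEFAULTS)  # accepted: default value
try:
    check_numpy_compat_kwargs("min", {"keepdims": True}, MINMAX_DEFAULTS)
except ValueError as exc:
    print(exc)  # rejected: non-default value for a numpy-only argument

CompatValidator layers the fname, method, and max_fname_arg_count bookkeeping on top of this idea, dispatching to validate_args, validate_kwargs, or validate_args_and_kwargs depending on the configured method.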
""" support numpy compatibility across versions """\nimport warnings\n\nimport numpy as np\n\nfrom pandas.util.version import Version\n\n# numpy versioning\n_np_version = np.__version__\n_nlv = Version(_np_version)\nnp_version_lt1p23 = _nlv < Version("1.23")\nnp_version_gte1p24 = _nlv >= Version("1.24")\nnp_version_gte1p24p3 = _nlv >= Version("1.24.3")\nnp_version_gte1p25 = _nlv >= Version("1.25")\nnp_version_gt2 = _nlv >= Version("2.0.0")\nis_numpy_dev = _nlv.dev is not None\n_min_numpy_ver = "1.22.4"\n\n\nif _nlv < Version(_min_numpy_ver):\n raise ImportError(\n f"this version of pandas is incompatible with numpy < {_min_numpy_ver}\n"\n f"your numpy version is {_np_version}.\n"\n f"Please upgrade numpy to >= {_min_numpy_ver} to use this pandas version"\n )\n\n\nnp_long: type\nnp_ulong: type\n\nif np_version_gt2:\n try:\n with warnings.catch_warnings():\n warnings.filterwarnings(\n "ignore",\n r".*In the future `np\.long` will be defined as.*",\n FutureWarning,\n )\n np_long = np.long # type: ignore[attr-defined]\n np_ulong = np.ulong # type: ignore[attr-defined]\n except AttributeError:\n np_long = np.int_\n np_ulong = np.uint\nelse:\n np_long = np.int_\n np_ulong = np.uint\n\n\n__all__ = [\n "np",\n "_np_version",\n "is_numpy_dev",\n]\n
.venv\Lib\site-packages\pandas\compat\numpy\__init__.py
__init__.py
Python
1,366
0.95
0.056604
0.023256
vue-tools
896
2024-02-21T22:55:55.421890
MIT
false
d5cbdd1e7db0b71d56df79b42f2af742
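The flags defined in this compat module are plain booleans computed once at import time, so downstream code can branch on them without re-parsing np.__version__. A small usage sketch, assuming only that numpy and this pandas module are importable; the arange call is purely illustrative.

import numpy as np
from pandas.compat.numpy import np_long, np_version_gt2

# np_long resolves to np.long on NumPy >= 2.0 and to np.int_ otherwise,
# so the same spelling works across both major versions.
values = np.arange(3, dtype=np_long)
print(values.dtype, "running on NumPy >= 2.0:", np_version_gt2)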
\n\n
.venv\Lib\site-packages\pandas\compat\numpy\__pycache__\function.cpython-313.pyc
function.cpython-313.pyc
Other
13,041
0.95
0.088757
0.00625
react-lib
910
2024-01-27T19:27:07.698114
GPL-3.0
false
fcb25a3a9c370cab6b77d47a0c34e972
\n\n
.venv\Lib\site-packages\pandas\compat\numpy\__pycache__\__init__.cpython-313.pyc
__init__.cpython-313.pyc
Other
1,901
0.8
0
0
react-lib
545
2024-12-19T17:57:32.017857
BSD-3-Clause
false
3e365ceba00f7d271281ad2a310bcac8
\n\n
.venv\Lib\site-packages\pandas\compat\__pycache__\compressors.cpython-313.pyc
compressors.cpython-313.pyc
Other
2,464
0.8
0.033333
0
python-kit
354
2024-06-18T11:43:08.528425
GPL-3.0
false
d1d87a9e529c4a60c0346d4dd14f0788
\n\n
.venv\Lib\site-packages\pandas\compat\__pycache__\pickle_compat.cpython-313.pyc
pickle_compat.cpython-313.pyc
Other
8,459
0.95
0
0
node-utils
582
2025-03-03T19:44:43.811945
BSD-3-Clause
false
727178b5ac541cd3bb41135bf6e82d3c
\n\n
.venv\Lib\site-packages\pandas\compat\__pycache__\pyarrow.cpython-313.pyc
pyarrow.cpython-313.pyc
Other
1,549
0.8
0
0
react-lib
559
2023-09-13T21:34:28.206115
BSD-3-Clause
false
ce5d1f37821d6c7e517fe1627365ebd6
\n\n
.venv\Lib\site-packages\pandas\compat\__pycache__\_constants.cpython-313.pyc
_constants.cpython-313.pyc
Other
1,004
0.8
0.055556
0
vue-tools
909
2023-11-07T04:03:13.170896
Apache-2.0
false
798143632ae010e68b3850cae820da3d
\n\n
.venv\Lib\site-packages\pandas\compat\__pycache__\_optional.cpython-313.pyc
_optional.cpython-313.pyc
Other
5,384
0.95
0.033333
0.035714
vue-tools
731
2023-12-28T15:56:34.540564
BSD-3-Clause
false
c8e3e7ebea31ba861bb68f760a7c1a40
\n\n
.venv\Lib\site-packages\pandas\compat\__pycache__\__init__.cpython-313.pyc
__init__.cpython-313.pyc
Other
5,699
0.95
0.192982
0.010204
awesome-app
735
2024-06-11T02:06:23.534353
GPL-3.0
false
f1d1b275a17d0cf31d6243d6f9c10361
"""\n\naccessor.py contains base classes for implementing accessor properties\nthat can be mixed into or pinned onto other pandas classes.\n\n"""\nfrom __future__ import annotations\n\nfrom typing import (\n Callable,\n final,\n)\nimport warnings\n\nfrom pandas.util._decorators import doc\nfrom pandas.util._exceptions import find_stack_level\n\n\nclass DirNamesMixin:\n _accessors: set[str] = set()\n _hidden_attrs: frozenset[str] = frozenset()\n\n @final\n def _dir_deletions(self) -> set[str]:\n """\n Delete unwanted __dir__ for this object.\n """\n return self._accessors | self._hidden_attrs\n\n def _dir_additions(self) -> set[str]:\n """\n Add additional __dir__ for this object.\n """\n return {accessor for accessor in self._accessors if hasattr(self, accessor)}\n\n def __dir__(self) -> list[str]:\n """\n Provide method name lookup and completion.\n\n Notes\n -----\n Only provide 'public' methods.\n """\n rv = set(super().__dir__())\n rv = (rv - self._dir_deletions()) | self._dir_additions()\n return sorted(rv)\n\n\nclass PandasDelegate:\n """\n Abstract base class for delegating methods/properties.\n """\n\n def _delegate_property_get(self, name: str, *args, **kwargs):\n raise TypeError(f"You cannot access the property {name}")\n\n def _delegate_property_set(self, name: str, value, *args, **kwargs):\n raise TypeError(f"The property {name} cannot be set")\n\n def _delegate_method(self, name: str, *args, **kwargs):\n raise TypeError(f"You cannot call method {name}")\n\n @classmethod\n def _add_delegate_accessors(\n cls,\n delegate,\n accessors: list[str],\n typ: str,\n overwrite: bool = False,\n accessor_mapping: Callable[[str], str] = lambda x: x,\n raise_on_missing: bool = True,\n ) -> None:\n """\n Add accessors to cls from the delegate class.\n\n Parameters\n ----------\n cls\n Class to add the methods/properties to.\n delegate\n Class to get methods/properties and doc-strings.\n accessors : list of str\n List of accessors to add.\n typ : {'property', 'method'}\n overwrite : bool, default False\n Overwrite the method/property in the target class if it exists.\n accessor_mapping: Callable, default lambda x: x\n Callable to map the delegate's function to the cls' function.\n raise_on_missing: bool, default True\n Raise if an accessor does not exist on delegate.\n False skips the missing accessor.\n """\n\n def _create_delegator_property(name: str):\n def _getter(self):\n return self._delegate_property_get(name)\n\n def _setter(self, new_values):\n return self._delegate_property_set(name, new_values)\n\n _getter.__name__ = name\n _setter.__name__ = name\n\n return property(\n fget=_getter,\n fset=_setter,\n doc=getattr(delegate, accessor_mapping(name)).__doc__,\n )\n\n def _create_delegator_method(name: str):\n def f(self, *args, **kwargs):\n return self._delegate_method(name, *args, **kwargs)\n\n f.__name__ = name\n f.__doc__ = getattr(delegate, accessor_mapping(name)).__doc__\n\n return f\n\n for name in accessors:\n if (\n not raise_on_missing\n and getattr(delegate, accessor_mapping(name), None) is None\n ):\n continue\n\n if typ == "property":\n f = _create_delegator_property(name)\n else:\n f = _create_delegator_method(name)\n\n # don't overwrite existing methods/properties\n if overwrite or not hasattr(cls, name):\n setattr(cls, name, f)\n\n\ndef delegate_names(\n delegate,\n accessors: list[str],\n typ: str,\n overwrite: bool = False,\n accessor_mapping: Callable[[str], str] = lambda x: x,\n raise_on_missing: bool = True,\n):\n """\n Add delegated names to a class using a class 
decorator. This provides\n an alternative usage to directly calling `_add_delegate_accessors`\n below a class definition.\n\n Parameters\n ----------\n delegate : object\n The class to get methods/properties & doc-strings.\n accessors : Sequence[str]\n List of accessor to add.\n typ : {'property', 'method'}\n overwrite : bool, default False\n Overwrite the method/property in the target class if it exists.\n accessor_mapping: Callable, default lambda x: x\n Callable to map the delegate's function to the cls' function.\n raise_on_missing: bool, default True\n Raise if an accessor does not exist on delegate.\n False skips the missing accessor.\n\n Returns\n -------\n callable\n A class decorator.\n\n Examples\n --------\n @delegate_names(Categorical, ["categories", "ordered"], "property")\n class CategoricalAccessor(PandasDelegate):\n [...]\n """\n\n def add_delegate_accessors(cls):\n cls._add_delegate_accessors(\n delegate,\n accessors,\n typ,\n overwrite=overwrite,\n accessor_mapping=accessor_mapping,\n raise_on_missing=raise_on_missing,\n )\n return cls\n\n return add_delegate_accessors\n\n\n# Ported with modifications from xarray; licence at LICENSES/XARRAY_LICENSE\n# https://github.com/pydata/xarray/blob/master/xarray/core/extensions.py\n# 1. We don't need to catch and re-raise AttributeErrors as RuntimeErrors\n# 2. We use a UserWarning instead of a custom Warning\n\n\nclass CachedAccessor:\n """\n Custom property-like object.\n\n A descriptor for caching accessors.\n\n Parameters\n ----------\n name : str\n Namespace that will be accessed under, e.g. ``df.foo``.\n accessor : cls\n Class with the extension methods.\n\n Notes\n -----\n For accessor, The class's __init__ method assumes that one of\n ``Series``, ``DataFrame`` or ``Index`` as the\n single argument ``data``.\n """\n\n def __init__(self, name: str, accessor) -> None:\n self._name = name\n self._accessor = accessor\n\n def __get__(self, obj, cls):\n if obj is None:\n # we're accessing the attribute of the class, i.e., Dataset.geo\n return self._accessor\n accessor_obj = self._accessor(obj)\n # Replace the property with the accessor object. Inspired by:\n # https://www.pydanny.com/cached-property.html\n # We need to use object.__setattr__ because we overwrite __setattr__ on\n # NDFrame\n object.__setattr__(obj, self._name, accessor_obj)\n return accessor_obj\n\n\n@doc(klass="", others="")\ndef _register_accessor(name: str, cls):\n """\n Register a custom accessor on {klass} objects.\n\n Parameters\n ----------\n name : str\n Name under which the accessor should be registered. A warning is issued\n if this name conflicts with a preexisting attribute.\n\n Returns\n -------\n callable\n A class decorator.\n\n See Also\n --------\n register_dataframe_accessor : Register a custom accessor on DataFrame objects.\n register_series_accessor : Register a custom accessor on Series objects.\n register_index_accessor : Register a custom accessor on Index objects.\n\n Notes\n -----\n When accessed, your accessor will be initialized with the pandas object\n the user is interacting with. So the signature must be\n\n .. 
code-block:: python\n\n def __init__(self, pandas_object): # noqa: E999\n ...\n\n For consistency with pandas methods, you should raise an ``AttributeError``\n if the data passed to your accessor has an incorrect dtype.\n\n >>> pd.Series(['a', 'b']).dt\n Traceback (most recent call last):\n ...\n AttributeError: Can only use .dt accessor with datetimelike values\n\n Examples\n --------\n In your library code::\n\n import pandas as pd\n\n @pd.api.extensions.register_dataframe_accessor("geo")\n class GeoAccessor:\n def __init__(self, pandas_obj):\n self._obj = pandas_obj\n\n @property\n def center(self):\n # return the geographic center point of this DataFrame\n lat = self._obj.latitude\n lon = self._obj.longitude\n return (float(lon.mean()), float(lat.mean()))\n\n def plot(self):\n # plot this array's data on a map, e.g., using Cartopy\n pass\n\n Back in an interactive IPython session:\n\n .. code-block:: ipython\n\n In [1]: ds = pd.DataFrame({{"longitude": np.linspace(0, 10),\n ...: "latitude": np.linspace(0, 20)}})\n In [2]: ds.geo.center\n Out[2]: (5.0, 10.0)\n In [3]: ds.geo.plot() # plots data on a map\n """\n\n def decorator(accessor):\n if hasattr(cls, name):\n warnings.warn(\n f"registration of accessor {repr(accessor)} under name "\n f"{repr(name)} for type {repr(cls)} is overriding a preexisting "\n f"attribute with the same name.",\n UserWarning,\n stacklevel=find_stack_level(),\n )\n setattr(cls, name, CachedAccessor(name, accessor))\n cls._accessors.add(name)\n return accessor\n\n return decorator\n\n\n@doc(_register_accessor, klass="DataFrame")\ndef register_dataframe_accessor(name: str):\n from pandas import DataFrame\n\n return _register_accessor(name, DataFrame)\n\n\n@doc(_register_accessor, klass="Series")\ndef register_series_accessor(name: str):\n from pandas import Series\n\n return _register_accessor(name, Series)\n\n\n@doc(_register_accessor, klass="Index")\ndef register_index_accessor(name: str):\n from pandas import Index\n\n return _register_accessor(name, Index)\n
.venv\Lib\site-packages\pandas\core\accessor.py
accessor.py
Python
10,044
0.95
0.197059
0.044776
react-lib
92
2023-08-17T05:27:08.158145
Apache-2.0
false
7911741e875f171cdfe463e720a1608b
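Putting the registration machinery from accessor.py to work is short: the decorator attaches a CachedAccessor descriptor to the target class, and the first attribute access instantiates the accessor and caches it on the instance. A minimal sketch using the public entry point follows; the accessor name "stats" and its spread method are hypothetical.

import pandas as pd

@pd.api.extensions.register_series_accessor("stats")
class StatsAccessor:
    def __init__(self, pandas_obj):
        self._obj = pandas_obj

    def spread(self):
        # Range of the wrapped Series: max minus min.
        return self._obj.max() - self._obj.min()

s = pd.Series([3, 7, 1, 9])
print(s.stats.spread())    # 8
print(s.stats is s.stats)  # True: CachedAccessor stores the instance on `s` after first use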
"""\nGeneric data algorithms. This module is experimental at the moment and not\nintended for public consumption\n"""\nfrom __future__ import annotations\n\nimport decimal\nimport operator\nfrom textwrap import dedent\nfrom typing import (\n TYPE_CHECKING,\n Literal,\n cast,\n)\nimport warnings\n\nimport numpy as np\n\nfrom pandas._libs import (\n algos,\n hashtable as htable,\n iNaT,\n lib,\n)\nfrom pandas._typing import (\n AnyArrayLike,\n ArrayLike,\n AxisInt,\n DtypeObj,\n TakeIndexer,\n npt,\n)\nfrom pandas.util._decorators import doc\nfrom pandas.util._exceptions import find_stack_level\n\nfrom pandas.core.dtypes.cast import (\n construct_1d_object_array_from_listlike,\n np_find_common_type,\n)\nfrom pandas.core.dtypes.common import (\n ensure_float64,\n ensure_object,\n ensure_platform_int,\n is_array_like,\n is_bool_dtype,\n is_complex_dtype,\n is_dict_like,\n is_extension_array_dtype,\n is_float_dtype,\n is_integer,\n is_integer_dtype,\n is_list_like,\n is_object_dtype,\n is_signed_integer_dtype,\n needs_i8_conversion,\n)\nfrom pandas.core.dtypes.concat import concat_compat\nfrom pandas.core.dtypes.dtypes import (\n BaseMaskedDtype,\n CategoricalDtype,\n ExtensionDtype,\n NumpyEADtype,\n)\nfrom pandas.core.dtypes.generic import (\n ABCDatetimeArray,\n ABCExtensionArray,\n ABCIndex,\n ABCMultiIndex,\n ABCSeries,\n ABCTimedeltaArray,\n)\nfrom pandas.core.dtypes.missing import (\n isna,\n na_value_for_dtype,\n)\n\nfrom pandas.core.array_algos.take import take_nd\nfrom pandas.core.construction import (\n array as pd_array,\n ensure_wrapped_if_datetimelike,\n extract_array,\n)\nfrom pandas.core.indexers import validate_indices\n\nif TYPE_CHECKING:\n from pandas._typing import (\n ListLike,\n NumpySorter,\n NumpyValueArrayLike,\n )\n\n from pandas import (\n Categorical,\n Index,\n Series,\n )\n from pandas.core.arrays import (\n BaseMaskedArray,\n ExtensionArray,\n )\n\n\n# --------------- #\n# dtype access #\n# --------------- #\ndef _ensure_data(values: ArrayLike) -> np.ndarray:\n """\n routine to ensure that our data is of the correct\n input dtype for lower-level routines\n\n This will coerce:\n - ints -> int64\n - uint -> uint64\n - bool -> uint8\n - datetimelike -> i8\n - datetime64tz -> i8 (in local tz)\n - categorical -> codes\n\n Parameters\n ----------\n values : np.ndarray or ExtensionArray\n\n Returns\n -------\n np.ndarray\n """\n\n if not isinstance(values, ABCMultiIndex):\n # extract_array would raise\n values = extract_array(values, extract_numpy=True)\n\n if is_object_dtype(values.dtype):\n return ensure_object(np.asarray(values))\n\n elif isinstance(values.dtype, BaseMaskedDtype):\n # i.e. BooleanArray, FloatingArray, IntegerArray\n values = cast("BaseMaskedArray", values)\n if not values._hasna:\n # No pd.NAs -> We can avoid an object-dtype cast (and copy) GH#41816\n # recurse to avoid re-implementing logic for eg bool->uint8\n return _ensure_data(values._data)\n return np.asarray(values)\n\n elif isinstance(values.dtype, CategoricalDtype):\n # NB: cases that go through here should NOT be using _reconstruct_data\n # on the back-end.\n values = cast("Categorical", values)\n return values.codes\n\n elif is_bool_dtype(values.dtype):\n if isinstance(values, np.ndarray):\n # i.e. actually dtype == np.dtype("bool")\n return np.asarray(values).view("uint8")\n else:\n # e.g. 
Sparse[bool, False] # TODO: no test cases get here\n return np.asarray(values).astype("uint8", copy=False)\n\n elif is_integer_dtype(values.dtype):\n return np.asarray(values)\n\n elif is_float_dtype(values.dtype):\n # Note: checking `values.dtype == "float128"` raises on Windows and 32bit\n # error: Item "ExtensionDtype" of "Union[Any, ExtensionDtype, dtype[Any]]"\n # has no attribute "itemsize"\n if values.dtype.itemsize in [2, 12, 16]: # type: ignore[union-attr]\n # we dont (yet) have float128 hashtable support\n return ensure_float64(values)\n return np.asarray(values)\n\n elif is_complex_dtype(values.dtype):\n return cast(np.ndarray, values)\n\n # datetimelike\n elif needs_i8_conversion(values.dtype):\n npvalues = values.view("i8")\n npvalues = cast(np.ndarray, npvalues)\n return npvalues\n\n # we have failed, return object\n values = np.asarray(values, dtype=object)\n return ensure_object(values)\n\n\ndef _reconstruct_data(\n values: ArrayLike, dtype: DtypeObj, original: AnyArrayLike\n) -> ArrayLike:\n """\n reverse of _ensure_data\n\n Parameters\n ----------\n values : np.ndarray or ExtensionArray\n dtype : np.dtype or ExtensionDtype\n original : AnyArrayLike\n\n Returns\n -------\n ExtensionArray or np.ndarray\n """\n if isinstance(values, ABCExtensionArray) and values.dtype == dtype:\n # Catch DatetimeArray/TimedeltaArray\n return values\n\n if not isinstance(dtype, np.dtype):\n # i.e. ExtensionDtype; note we have ruled out above the possibility\n # that values.dtype == dtype\n cls = dtype.construct_array_type()\n\n values = cls._from_sequence(values, dtype=dtype)\n\n else:\n values = values.astype(dtype, copy=False)\n\n return values\n\n\ndef _ensure_arraylike(values, func_name: str) -> ArrayLike:\n """\n ensure that we are arraylike if not already\n """\n if not isinstance(values, (ABCIndex, ABCSeries, ABCExtensionArray, np.ndarray)):\n # GH#52986\n if func_name != "isin-targets":\n # Make an exception for the comps argument in isin.\n warnings.warn(\n f"{func_name} with argument that is not not a Series, Index, "\n "ExtensionArray, or np.ndarray is deprecated and will raise in a "\n "future version.",\n FutureWarning,\n stacklevel=find_stack_level(),\n )\n\n inferred = lib.infer_dtype(values, skipna=False)\n if inferred in ["mixed", "string", "mixed-integer"]:\n # "mixed-integer" to ensure we do not cast ["ss", 42] to str GH#22160\n if isinstance(values, tuple):\n values = list(values)\n values = construct_1d_object_array_from_listlike(values)\n else:\n values = np.asarray(values)\n return values\n\n\n_hashtables = {\n "complex128": htable.Complex128HashTable,\n "complex64": htable.Complex64HashTable,\n "float64": htable.Float64HashTable,\n "float32": htable.Float32HashTable,\n "uint64": htable.UInt64HashTable,\n "uint32": htable.UInt32HashTable,\n "uint16": htable.UInt16HashTable,\n "uint8": htable.UInt8HashTable,\n "int64": htable.Int64HashTable,\n "int32": htable.Int32HashTable,\n "int16": htable.Int16HashTable,\n "int8": htable.Int8HashTable,\n "string": htable.StringHashTable,\n "object": htable.PyObjectHashTable,\n}\n\n\ndef _get_hashtable_algo(values: np.ndarray):\n """\n Parameters\n ----------\n values : np.ndarray\n\n Returns\n -------\n htable : HashTable subclass\n values : ndarray\n """\n values = _ensure_data(values)\n\n ndtype = _check_object_for_strings(values)\n hashtable = _hashtables[ndtype]\n return hashtable, values\n\n\ndef _check_object_for_strings(values: np.ndarray) -> str:\n """\n Check if we can use string hashtable instead of object hashtable.\n\n 
Parameters\n ----------\n values : ndarray\n\n Returns\n -------\n str\n """\n ndtype = values.dtype.name\n if ndtype == "object":\n # it's cheaper to use a String Hash Table than Object; we infer\n # including nulls because that is the only difference between\n # StringHashTable and ObjectHashtable\n if lib.is_string_array(values, skipna=False):\n ndtype = "string"\n return ndtype\n\n\n# --------------- #\n# top-level algos #\n# --------------- #\n\n\ndef unique(values):\n """\n Return unique values based on a hash table.\n\n Uniques are returned in order of appearance. This does NOT sort.\n\n Significantly faster than numpy.unique for long enough sequences.\n Includes NA values.\n\n Parameters\n ----------\n values : 1d array-like\n\n Returns\n -------\n numpy.ndarray or ExtensionArray\n\n The return can be:\n\n * Index : when the input is an Index\n * Categorical : when the input is a Categorical dtype\n * ndarray : when the input is a Series/ndarray\n\n Return numpy.ndarray or ExtensionArray.\n\n See Also\n --------\n Index.unique : Return unique values from an Index.\n Series.unique : Return unique values of Series object.\n\n Examples\n --------\n >>> pd.unique(pd.Series([2, 1, 3, 3]))\n array([2, 1, 3])\n\n >>> pd.unique(pd.Series([2] + [1] * 5))\n array([2, 1])\n\n >>> pd.unique(pd.Series([pd.Timestamp("20160101"), pd.Timestamp("20160101")]))\n array(['2016-01-01T00:00:00.000000000'], dtype='datetime64[ns]')\n\n >>> pd.unique(\n ... pd.Series(\n ... [\n ... pd.Timestamp("20160101", tz="US/Eastern"),\n ... pd.Timestamp("20160101", tz="US/Eastern"),\n ... ]\n ... )\n ... )\n <DatetimeArray>\n ['2016-01-01 00:00:00-05:00']\n Length: 1, dtype: datetime64[ns, US/Eastern]\n\n >>> pd.unique(\n ... pd.Index(\n ... [\n ... pd.Timestamp("20160101", tz="US/Eastern"),\n ... pd.Timestamp("20160101", tz="US/Eastern"),\n ... ]\n ... )\n ... )\n DatetimeIndex(['2016-01-01 00:00:00-05:00'],\n dtype='datetime64[ns, US/Eastern]',\n freq=None)\n\n >>> pd.unique(np.array(list("baabc"), dtype="O"))\n array(['b', 'a', 'c'], dtype=object)\n\n An unordered Categorical will return categories in the\n order of appearance.\n\n >>> pd.unique(pd.Series(pd.Categorical(list("baabc"))))\n ['b', 'a', 'c']\n Categories (3, object): ['a', 'b', 'c']\n\n >>> pd.unique(pd.Series(pd.Categorical(list("baabc"), categories=list("abc"))))\n ['b', 'a', 'c']\n Categories (3, object): ['a', 'b', 'c']\n\n An ordered Categorical preserves the category ordering.\n\n >>> pd.unique(\n ... pd.Series(\n ... pd.Categorical(list("baabc"), categories=list("abc"), ordered=True)\n ... )\n ... )\n ['b', 'a', 'c']\n Categories (3, object): ['a' < 'b' < 'c']\n\n An array of tuples\n\n >>> pd.unique(pd.Series([("a", "b"), ("b", "a"), ("a", "c"), ("b", "a")]).values)\n array([('a', 'b'), ('b', 'a'), ('a', 'c')], dtype=object)\n """\n return unique_with_mask(values)\n\n\ndef nunique_ints(values: ArrayLike) -> int:\n """\n Return the number of unique values for integer array-likes.\n\n Significantly faster than pandas.unique for long enough sequences.\n No checks are done to ensure input is integral.\n\n Parameters\n ----------\n values : 1d array-like\n\n Returns\n -------\n int : The number of unique values in ``values``\n """\n if len(values) == 0:\n return 0\n values = _ensure_data(values)\n # bincount requires intp\n result = (np.bincount(values.ravel().astype("intp")) != 0).sum()\n return result\n\n\ndef unique_with_mask(values, mask: npt.NDArray[np.bool_] | None = None):\n """See algorithms.unique for docs. 
Takes a mask for masked arrays."""\n values = _ensure_arraylike(values, func_name="unique")\n\n if isinstance(values.dtype, ExtensionDtype):\n # Dispatch to extension dtype's unique.\n return values.unique()\n\n original = values\n hashtable, values = _get_hashtable_algo(values)\n\n table = hashtable(len(values))\n if mask is None:\n uniques = table.unique(values)\n uniques = _reconstruct_data(uniques, original.dtype, original)\n return uniques\n\n else:\n uniques, mask = table.unique(values, mask=mask)\n uniques = _reconstruct_data(uniques, original.dtype, original)\n assert mask is not None # for mypy\n return uniques, mask.astype("bool")\n\n\nunique1d = unique\n\n\n_MINIMUM_COMP_ARR_LEN = 1_000_000\n\n\ndef isin(comps: ListLike, values: ListLike) -> npt.NDArray[np.bool_]:\n """\n Compute the isin boolean array.\n\n Parameters\n ----------\n comps : list-like\n values : list-like\n\n Returns\n -------\n ndarray[bool]\n Same length as `comps`.\n """\n if not is_list_like(comps):\n raise TypeError(\n "only list-like objects are allowed to be passed "\n f"to isin(), you passed a `{type(comps).__name__}`"\n )\n if not is_list_like(values):\n raise TypeError(\n "only list-like objects are allowed to be passed "\n f"to isin(), you passed a `{type(values).__name__}`"\n )\n\n if not isinstance(values, (ABCIndex, ABCSeries, ABCExtensionArray, np.ndarray)):\n orig_values = list(values)\n values = _ensure_arraylike(orig_values, func_name="isin-targets")\n\n if (\n len(values) > 0\n and values.dtype.kind in "iufcb"\n and not is_signed_integer_dtype(comps)\n ):\n # GH#46485 Use object to avoid upcast to float64 later\n # TODO: Share with _find_common_type_compat\n values = construct_1d_object_array_from_listlike(orig_values)\n\n elif isinstance(values, ABCMultiIndex):\n # Avoid raising in extract_array\n values = np.array(values)\n else:\n values = extract_array(values, extract_numpy=True, extract_range=True)\n\n comps_array = _ensure_arraylike(comps, func_name="isin")\n comps_array = extract_array(comps_array, extract_numpy=True)\n if not isinstance(comps_array, np.ndarray):\n # i.e. Extension Array\n return comps_array.isin(values)\n\n elif needs_i8_conversion(comps_array.dtype):\n # Dispatch to DatetimeLikeArrayMixin.isin\n return pd_array(comps_array).isin(values)\n elif needs_i8_conversion(values.dtype) and not is_object_dtype(comps_array.dtype):\n # e.g. comps_array are integers and values are datetime64s\n return np.zeros(comps_array.shape, dtype=bool)\n # TODO: not quite right ... Sparse/Categorical\n elif needs_i8_conversion(values.dtype):\n return isin(comps_array, values.astype(object))\n\n elif isinstance(values.dtype, ExtensionDtype):\n return isin(np.asarray(comps_array), np.asarray(values))\n\n # GH16012\n # Ensure np.isin doesn't get object types or it *may* throw an exception\n # Albeit hashmap has O(1) look-up (vs. 
O(logn) in sorted array),\n # isin is faster for small sizes\n if (\n len(comps_array) > _MINIMUM_COMP_ARR_LEN\n and len(values) <= 26\n and comps_array.dtype != object\n ):\n # If the values include nan we need to check for nan explicitly\n # since np.nan it not equal to np.nan\n if isna(values).any():\n\n def f(c, v):\n return np.logical_or(np.isin(c, v).ravel(), np.isnan(c))\n\n else:\n f = lambda a, b: np.isin(a, b).ravel()\n\n else:\n common = np_find_common_type(values.dtype, comps_array.dtype)\n values = values.astype(common, copy=False)\n comps_array = comps_array.astype(common, copy=False)\n f = htable.ismember\n\n return f(comps_array, values)\n\n\ndef factorize_array(\n values: np.ndarray,\n use_na_sentinel: bool = True,\n size_hint: int | None = None,\n na_value: object = None,\n mask: npt.NDArray[np.bool_] | None = None,\n) -> tuple[npt.NDArray[np.intp], np.ndarray]:\n """\n Factorize a numpy array to codes and uniques.\n\n This doesn't do any coercion of types or unboxing before factorization.\n\n Parameters\n ----------\n values : ndarray\n use_na_sentinel : bool, default True\n If True, the sentinel -1 will be used for NaN values. If False,\n NaN values will be encoded as non-negative integers and will not drop the\n NaN from the uniques of the values.\n size_hint : int, optional\n Passed through to the hashtable's 'get_labels' method\n na_value : object, optional\n A value in `values` to consider missing. Note: only use this\n parameter when you know that you don't have any values pandas would\n consider missing in the array (NaN for float data, iNaT for\n datetimes, etc.).\n mask : ndarray[bool], optional\n If not None, the mask is used as indicator for missing values\n (True = missing, False = valid) instead of `na_value` or\n condition "val != val".\n\n Returns\n -------\n codes : ndarray[np.intp]\n uniques : ndarray\n """\n original = values\n if values.dtype.kind in "mM":\n # _get_hashtable_algo will cast dt64/td64 to i8 via _ensure_data, so we\n # need to do the same to na_value. We are assuming here that the passed\n # na_value is an appropriately-typed NaT.\n # e.g. test_where_datetimelike_categorical\n na_value = iNaT\n\n hash_klass, values = _get_hashtable_algo(values)\n\n table = hash_klass(size_hint or len(values))\n uniques, codes = table.factorize(\n values,\n na_sentinel=-1,\n na_value=na_value,\n mask=mask,\n ignore_na=use_na_sentinel,\n )\n\n # re-cast e.g. i8->dt64/td64, uint8->bool\n uniques = _reconstruct_data(uniques, original.dtype, original)\n\n codes = ensure_platform_int(codes)\n return codes, uniques\n\n\n@doc(\n values=dedent(\n """\\n values : sequence\n A 1-D sequence. Sequences that aren't pandas objects are\n coerced to ndarrays before factorization.\n """\n ),\n sort=dedent(\n """\\n sort : bool, default False\n Sort `uniques` and shuffle `codes` to maintain the\n relationship.\n """\n ),\n size_hint=dedent(\n """\\n size_hint : int, optional\n Hint to the hashtable sizer.\n """\n ),\n)\ndef factorize(\n values,\n sort: bool = False,\n use_na_sentinel: bool = True,\n size_hint: int | None = None,\n) -> tuple[np.ndarray, np.ndarray | Index]:\n """\n Encode the object as an enumerated type or categorical variable.\n\n This method is useful for obtaining a numeric representation of an\n array when all that matters is identifying distinct values. 
`factorize`\n is available as both a top-level function :func:`pandas.factorize`,\n and as a method :meth:`Series.factorize` and :meth:`Index.factorize`.\n\n Parameters\n ----------\n {values}{sort}\n use_na_sentinel : bool, default True\n If True, the sentinel -1 will be used for NaN values. If False,\n NaN values will be encoded as non-negative integers and will not drop the\n NaN from the uniques of the values.\n\n .. versionadded:: 1.5.0\n {size_hint}\\n\n Returns\n -------\n codes : ndarray\n An integer ndarray that's an indexer into `uniques`.\n ``uniques.take(codes)`` will have the same values as `values`.\n uniques : ndarray, Index, or Categorical\n The unique valid values. When `values` is Categorical, `uniques`\n is a Categorical. When `values` is some other pandas object, an\n `Index` is returned. Otherwise, a 1-D ndarray is returned.\n\n .. note::\n\n Even if there's a missing value in `values`, `uniques` will\n *not* contain an entry for it.\n\n See Also\n --------\n cut : Discretize continuous-valued array.\n unique : Find the unique value in an array.\n\n Notes\n -----\n Reference :ref:`the user guide <reshaping.factorize>` for more examples.\n\n Examples\n --------\n These examples all show factorize as a top-level method like\n ``pd.factorize(values)``. The results are identical for methods like\n :meth:`Series.factorize`.\n\n >>> codes, uniques = pd.factorize(np.array(['b', 'b', 'a', 'c', 'b'], dtype="O"))\n >>> codes\n array([0, 0, 1, 2, 0])\n >>> uniques\n array(['b', 'a', 'c'], dtype=object)\n\n With ``sort=True``, the `uniques` will be sorted, and `codes` will be\n shuffled so that the relationship is the maintained.\n\n >>> codes, uniques = pd.factorize(np.array(['b', 'b', 'a', 'c', 'b'], dtype="O"),\n ... sort=True)\n >>> codes\n array([1, 1, 0, 2, 1])\n >>> uniques\n array(['a', 'b', 'c'], dtype=object)\n\n When ``use_na_sentinel=True`` (the default), missing values are indicated in\n the `codes` with the sentinel value ``-1`` and missing values are not\n included in `uniques`.\n\n >>> codes, uniques = pd.factorize(np.array(['b', None, 'a', 'c', 'b'], dtype="O"))\n >>> codes\n array([ 0, -1, 1, 2, 0])\n >>> uniques\n array(['b', 'a', 'c'], dtype=object)\n\n Thus far, we've only factorized lists (which are internally coerced to\n NumPy arrays). When factorizing pandas objects, the type of `uniques`\n will differ. For Categoricals, a `Categorical` is returned.\n\n >>> cat = pd.Categorical(['a', 'a', 'c'], categories=['a', 'b', 'c'])\n >>> codes, uniques = pd.factorize(cat)\n >>> codes\n array([0, 0, 1])\n >>> uniques\n ['a', 'c']\n Categories (3, object): ['a', 'b', 'c']\n\n Notice that ``'b'`` is in ``uniques.categories``, despite not being\n present in ``cat.values``.\n\n For all other pandas objects, an Index of the appropriate type is\n returned.\n\n >>> cat = pd.Series(['a', 'a', 'c'])\n >>> codes, uniques = pd.factorize(cat)\n >>> codes\n array([0, 0, 1])\n >>> uniques\n Index(['a', 'c'], dtype='object')\n\n If NaN is in the values, and we want to include NaN in the uniques of the\n values, it can be achieved by setting ``use_na_sentinel=False``.\n\n >>> values = np.array([1, 2, 1, np.nan])\n >>> codes, uniques = pd.factorize(values) # default: use_na_sentinel=True\n >>> codes\n array([ 0, 1, 0, -1])\n >>> uniques\n array([1., 2.])\n\n >>> codes, uniques = pd.factorize(values, use_na_sentinel=False)\n >>> codes\n array([0, 1, 0, 2])\n >>> uniques\n array([ 1., 2., nan])\n """\n # Implementation notes: This method is responsible for 3 things\n # 1.) 
coercing data to array-like (ndarray, Index, extension array)\n # 2.) factorizing codes and uniques\n # 3.) Maybe boxing the uniques in an Index\n #\n # Step 2 is dispatched to extension types (like Categorical). They are\n # responsible only for factorization. All data coercion, sorting and boxing\n # should happen here.\n if isinstance(values, (ABCIndex, ABCSeries)):\n return values.factorize(sort=sort, use_na_sentinel=use_na_sentinel)\n\n values = _ensure_arraylike(values, func_name="factorize")\n original = values\n\n if (\n isinstance(values, (ABCDatetimeArray, ABCTimedeltaArray))\n and values.freq is not None\n ):\n # The presence of 'freq' means we can fast-path sorting and know there\n # aren't NAs\n codes, uniques = values.factorize(sort=sort)\n return codes, uniques\n\n elif not isinstance(values, np.ndarray):\n # i.e. ExtensionArray\n codes, uniques = values.factorize(use_na_sentinel=use_na_sentinel)\n\n else:\n values = np.asarray(values) # convert DTA/TDA/MultiIndex\n\n if not use_na_sentinel and values.dtype == object:\n # factorize can now handle differentiating various types of null values.\n # These can only occur when the array has object dtype.\n # However, for backwards compatibility we only use the null for the\n # provided dtype. This may be revisited in the future, see GH#48476.\n null_mask = isna(values)\n if null_mask.any():\n na_value = na_value_for_dtype(values.dtype, compat=False)\n # Don't modify (potentially user-provided) array\n values = np.where(null_mask, na_value, values)\n\n codes, uniques = factorize_array(\n values,\n use_na_sentinel=use_na_sentinel,\n size_hint=size_hint,\n )\n\n if sort and len(uniques) > 0:\n uniques, codes = safe_sort(\n uniques,\n codes,\n use_na_sentinel=use_na_sentinel,\n assume_unique=True,\n verify=False,\n )\n\n uniques = _reconstruct_data(uniques, original.dtype, original)\n\n return codes, uniques\n\n\ndef value_counts(\n values,\n sort: bool = True,\n ascending: bool = False,\n normalize: bool = False,\n bins=None,\n dropna: bool = True,\n) -> Series:\n """\n Compute a histogram of the counts of non-null values.\n\n Parameters\n ----------\n values : ndarray (1-d)\n sort : bool, default True\n Sort by values\n ascending : bool, default False\n Sort in ascending order\n normalize: bool, default False\n If True then compute a relative histogram\n bins : integer, optional\n Rather than count values, group them into half-open bins,\n convenience for pd.cut, only works with numeric data\n dropna : bool, default True\n Don't include counts of NaN\n\n Returns\n -------\n Series\n """\n warnings.warn(\n # GH#53493\n "pandas.value_counts is deprecated and will be removed in a "\n "future version. 
Use pd.Series(obj).value_counts() instead.",\n FutureWarning,\n stacklevel=find_stack_level(),\n )\n return value_counts_internal(\n values,\n sort=sort,\n ascending=ascending,\n normalize=normalize,\n bins=bins,\n dropna=dropna,\n )\n\n\ndef value_counts_internal(\n values,\n sort: bool = True,\n ascending: bool = False,\n normalize: bool = False,\n bins=None,\n dropna: bool = True,\n) -> Series:\n from pandas import (\n Index,\n Series,\n )\n\n index_name = getattr(values, "name", None)\n name = "proportion" if normalize else "count"\n\n if bins is not None:\n from pandas.core.reshape.tile import cut\n\n if isinstance(values, Series):\n values = values._values\n\n try:\n ii = cut(values, bins, include_lowest=True)\n except TypeError as err:\n raise TypeError("bins argument only works with numeric data.") from err\n\n # count, remove nulls (from the index), and but the bins\n result = ii.value_counts(dropna=dropna)\n result.name = name\n result = result[result.index.notna()]\n result.index = result.index.astype("interval")\n result = result.sort_index()\n\n # if we are dropna and we have NO values\n if dropna and (result._values == 0).all():\n result = result.iloc[0:0]\n\n # normalizing is by len of all (regardless of dropna)\n counts = np.array([len(ii)])\n\n else:\n if is_extension_array_dtype(values):\n # handle Categorical and sparse,\n result = Series(values, copy=False)._values.value_counts(dropna=dropna)\n result.name = name\n result.index.name = index_name\n counts = result._values\n if not isinstance(counts, np.ndarray):\n # e.g. ArrowExtensionArray\n counts = np.asarray(counts)\n\n elif isinstance(values, ABCMultiIndex):\n # GH49558\n levels = list(range(values.nlevels))\n result = (\n Series(index=values, name=name)\n .groupby(level=levels, dropna=dropna)\n .size()\n )\n result.index.names = values.names\n counts = result._values\n\n else:\n values = _ensure_arraylike(values, func_name="value_counts")\n keys, counts, _ = value_counts_arraylike(values, dropna)\n if keys.dtype == np.float16:\n keys = keys.astype(np.float32)\n\n # For backwards compatibility, we let Index do its normal type\n # inference, _except_ for if if infers from object to bool.\n idx = Index(keys)\n if idx.dtype in [bool, "string"] and keys.dtype == object:\n idx = idx.astype(object)\n elif (\n idx.dtype != keys.dtype # noqa: PLR1714 # # pylint: disable=R1714\n and idx.dtype != "string"\n ):\n warnings.warn(\n # GH#56161\n "The behavior of value_counts with object-dtype is deprecated. "\n "In a future version, this will *not* perform dtype inference "\n "on the resulting index. 
To retain the old behavior, use "\n "`result.index = result.index.infer_objects()`",\n FutureWarning,\n stacklevel=find_stack_level(),\n )\n idx.name = index_name\n\n result = Series(counts, index=idx, name=name, copy=False)\n\n if sort:\n result = result.sort_values(ascending=ascending)\n\n if normalize:\n result = result / counts.sum()\n\n return result\n\n\n# Called once from SparseArray, otherwise could be private\ndef value_counts_arraylike(\n values: np.ndarray, dropna: bool, mask: npt.NDArray[np.bool_] | None = None\n) -> tuple[ArrayLike, npt.NDArray[np.int64], int]:\n """\n Parameters\n ----------\n values : np.ndarray\n dropna : bool\n mask : np.ndarray[bool] or None, default None\n\n Returns\n -------\n uniques : np.ndarray\n counts : np.ndarray[np.int64]\n """\n original = values\n values = _ensure_data(values)\n\n keys, counts, na_counter = htable.value_count(values, dropna, mask=mask)\n\n if needs_i8_conversion(original.dtype):\n # datetime, timedelta, or period\n\n if dropna:\n mask = keys != iNaT\n keys, counts = keys[mask], counts[mask]\n\n res_keys = _reconstruct_data(keys, original.dtype, original)\n return res_keys, counts, na_counter\n\n\ndef duplicated(\n values: ArrayLike,\n keep: Literal["first", "last", False] = "first",\n mask: npt.NDArray[np.bool_] | None = None,\n) -> npt.NDArray[np.bool_]:\n """\n Return boolean ndarray denoting duplicate values.\n\n Parameters\n ----------\n values : np.ndarray or ExtensionArray\n Array over which to check for duplicate values.\n keep : {'first', 'last', False}, default 'first'\n - ``first`` : Mark duplicates as ``True`` except for the first\n occurrence.\n - ``last`` : Mark duplicates as ``True`` except for the last\n occurrence.\n - False : Mark all duplicates as ``True``.\n mask : ndarray[bool], optional\n array indicating which elements to exclude from checking\n\n Returns\n -------\n duplicated : ndarray[bool]\n """\n values = _ensure_data(values)\n return htable.duplicated(values, keep=keep, mask=mask)\n\n\ndef mode(\n values: ArrayLike, dropna: bool = True, mask: npt.NDArray[np.bool_] | None = None\n) -> ArrayLike:\n """\n Returns the mode(s) of an array.\n\n Parameters\n ----------\n values : array-like\n Array over which to check for duplicate values.\n dropna : bool, default True\n Don't consider counts of NaN/NaT.\n\n Returns\n -------\n np.ndarray or ExtensionArray\n """\n values = _ensure_arraylike(values, func_name="mode")\n original = values\n\n if needs_i8_conversion(values.dtype):\n # Got here with ndarray; dispatch to DatetimeArray/TimedeltaArray.\n values = ensure_wrapped_if_datetimelike(values)\n values = cast("ExtensionArray", values)\n return values._mode(dropna=dropna)\n\n values = _ensure_data(values)\n\n npresult, res_mask = htable.mode(values, dropna=dropna, mask=mask)\n if res_mask is not None:\n return npresult, res_mask # type: ignore[return-value]\n\n try:\n npresult = safe_sort(npresult)\n except TypeError as err:\n warnings.warn(\n f"Unable to sort modes: {err}",\n stacklevel=find_stack_level(),\n )\n\n result = _reconstruct_data(npresult, original.dtype, original)\n return result\n\n\ndef rank(\n values: ArrayLike,\n axis: AxisInt = 0,\n method: str = "average",\n na_option: str = "keep",\n ascending: bool = True,\n pct: bool = False,\n) -> npt.NDArray[np.float64]:\n """\n Rank the values along a given axis.\n\n Parameters\n ----------\n values : np.ndarray or ExtensionArray\n Array whose values will be ranked. 
The number of dimensions in this\n array must not exceed 2.\n axis : int, default 0\n Axis over which to perform rankings.\n method : {'average', 'min', 'max', 'first', 'dense'}, default 'average'\n The method by which tiebreaks are broken during the ranking.\n na_option : {'keep', 'top'}, default 'keep'\n The method by which NaNs are placed in the ranking.\n - ``keep``: rank each NaN value with a NaN ranking\n - ``top``: replace each NaN with either +/- inf so that they\n there are ranked at the top\n ascending : bool, default True\n Whether or not the elements should be ranked in ascending order.\n pct : bool, default False\n Whether or not to the display the returned rankings in integer form\n (e.g. 1, 2, 3) or in percentile form (e.g. 0.333..., 0.666..., 1).\n """\n is_datetimelike = needs_i8_conversion(values.dtype)\n values = _ensure_data(values)\n\n if values.ndim == 1:\n ranks = algos.rank_1d(\n values,\n is_datetimelike=is_datetimelike,\n ties_method=method,\n ascending=ascending,\n na_option=na_option,\n pct=pct,\n )\n elif values.ndim == 2:\n ranks = algos.rank_2d(\n values,\n axis=axis,\n is_datetimelike=is_datetimelike,\n ties_method=method,\n ascending=ascending,\n na_option=na_option,\n pct=pct,\n )\n else:\n raise TypeError("Array with ndim > 2 are not supported.")\n\n return ranks\n\n\n# ---- #\n# take #\n# ---- #\n\n\ndef take(\n arr,\n indices: TakeIndexer,\n axis: AxisInt = 0,\n allow_fill: bool = False,\n fill_value=None,\n):\n """\n Take elements from an array.\n\n Parameters\n ----------\n arr : array-like or scalar value\n Non array-likes (sequences/scalars without a dtype) are coerced\n to an ndarray.\n\n .. deprecated:: 2.1.0\n Passing an argument other than a numpy.ndarray, ExtensionArray,\n Index, or Series is deprecated.\n\n indices : sequence of int or one-dimensional np.ndarray of int\n Indices to be taken.\n axis : int, default 0\n The axis over which to select values.\n allow_fill : bool, default False\n How to handle negative values in `indices`.\n\n * False: negative values in `indices` indicate positional indices\n from the right (the default). This is similar to :func:`numpy.take`.\n\n * True: negative values in `indices` indicate\n missing values. These values are set to `fill_value`. 
Any other\n negative values raise a ``ValueError``.\n\n fill_value : any, optional\n Fill value to use for NA-indices when `allow_fill` is True.\n This may be ``None``, in which case the default NA value for\n the type (``self.dtype.na_value``) is used.\n\n For multi-dimensional `arr`, each *element* is filled with\n `fill_value`.\n\n Returns\n -------\n ndarray or ExtensionArray\n Same type as the input.\n\n Raises\n ------\n IndexError\n When `indices` is out of bounds for the array.\n ValueError\n When the indexer contains negative values other than ``-1``\n and `allow_fill` is True.\n\n Notes\n -----\n When `allow_fill` is False, `indices` may be whatever dimensionality\n is accepted by NumPy for `arr`.\n\n When `allow_fill` is True, `indices` should be 1-D.\n\n See Also\n --------\n numpy.take : Take elements from an array along an axis.\n\n Examples\n --------\n >>> import pandas as pd\n\n With the default ``allow_fill=False``, negative numbers indicate\n positional indices from the right.\n\n >>> pd.api.extensions.take(np.array([10, 20, 30]), [0, 0, -1])\n array([10, 10, 30])\n\n Setting ``allow_fill=True`` will place `fill_value` in those positions.\n\n >>> pd.api.extensions.take(np.array([10, 20, 30]), [0, 0, -1], allow_fill=True)\n array([10., 10., nan])\n\n >>> pd.api.extensions.take(np.array([10, 20, 30]), [0, 0, -1], allow_fill=True,\n ... fill_value=-10)\n array([ 10, 10, -10])\n """\n if not isinstance(arr, (np.ndarray, ABCExtensionArray, ABCIndex, ABCSeries)):\n # GH#52981\n warnings.warn(\n "pd.api.extensions.take accepting non-standard inputs is deprecated "\n "and will raise in a future version. Pass either a numpy.ndarray, "\n "ExtensionArray, Index, or Series instead.",\n FutureWarning,\n stacklevel=find_stack_level(),\n )\n\n if not is_array_like(arr):\n arr = np.asarray(arr)\n\n indices = ensure_platform_int(indices)\n\n if allow_fill:\n # Pandas style, -1 means NA\n validate_indices(indices, arr.shape[axis])\n result = take_nd(\n arr, indices, axis=axis, allow_fill=True, fill_value=fill_value\n )\n else:\n # NumPy style\n result = arr.take(indices, axis=axis)\n return result\n\n\n# ------------ #\n# searchsorted #\n# ------------ #\n\n\ndef searchsorted(\n arr: ArrayLike,\n value: NumpyValueArrayLike | ExtensionArray,\n side: Literal["left", "right"] = "left",\n sorter: NumpySorter | None = None,\n) -> npt.NDArray[np.intp] | np.intp:\n """\n Find indices where elements should be inserted to maintain order.\n\n Find the indices into a sorted array `arr` (a) such that, if the\n corresponding elements in `value` were inserted before the indices,\n the order of `arr` would be preserved.\n\n Assuming that `arr` is sorted:\n\n ====== ================================\n `side` returned index `i` satisfies\n ====== ================================\n left ``arr[i-1] < value <= self[i]``\n right ``arr[i-1] <= value < self[i]``\n ====== ================================\n\n Parameters\n ----------\n arr: np.ndarray, ExtensionArray, Series\n Input array. If `sorter` is None, then it must be sorted in\n ascending order, otherwise `sorter` must be an array of indices\n that sort it.\n value : array-like or scalar\n Values to insert into `arr`.\n side : {'left', 'right'}, optional\n If 'left', the index of the first suitable location found is given.\n If 'right', return the last such index. 
If there is no suitable\n index, return either 0 or N (where N is the length of `self`).\n sorter : 1-D array-like, optional\n Optional array of integer indices that sort array a into ascending\n order. They are typically the result of argsort.\n\n Returns\n -------\n array of ints or int\n If value is array-like, array of insertion points.\n If value is scalar, a single integer.\n\n See Also\n --------\n numpy.searchsorted : Similar method from NumPy.\n """\n if sorter is not None:\n sorter = ensure_platform_int(sorter)\n\n if (\n isinstance(arr, np.ndarray)\n and arr.dtype.kind in "iu"\n and (is_integer(value) or is_integer_dtype(value))\n ):\n # if `arr` and `value` have different dtypes, `arr` would be\n # recast by numpy, causing a slow search.\n # Before searching below, we therefore try to give `value` the\n # same dtype as `arr`, while guarding against integer overflows.\n iinfo = np.iinfo(arr.dtype.type)\n value_arr = np.array([value]) if is_integer(value) else np.array(value)\n if (value_arr >= iinfo.min).all() and (value_arr <= iinfo.max).all():\n # value within bounds, so no overflow, so can convert value dtype\n # to dtype of arr\n dtype = arr.dtype\n else:\n dtype = value_arr.dtype\n\n if is_integer(value):\n # We know that value is int\n value = cast(int, dtype.type(value))\n else:\n value = pd_array(cast(ArrayLike, value), dtype=dtype)\n else:\n # E.g. if `arr` is an array with dtype='datetime64[ns]'\n # and `value` is a pd.Timestamp, we may need to convert value\n arr = ensure_wrapped_if_datetimelike(arr)\n\n # Argument 1 to "searchsorted" of "ndarray" has incompatible type\n # "Union[NumpyValueArrayLike, ExtensionArray]"; expected "NumpyValueArrayLike"\n return arr.searchsorted(value, side=side, sorter=sorter) # type: ignore[arg-type]\n\n\n# ---- #\n# diff #\n# ---- #\n\n_diff_special = {"float64", "float32", "int64", "int32", "int16", "int8"}\n\n\ndef diff(arr, n: int, axis: AxisInt = 0):\n """\n difference of n between self,\n analogous to s-s.shift(n)\n\n Parameters\n ----------\n arr : ndarray or ExtensionArray\n n : int\n number of periods\n axis : {0, 1}\n axis to shift on\n stacklevel : int, default 3\n The stacklevel for the lost dtype warning.\n\n Returns\n -------\n shifted\n """\n\n n = int(n)\n na = np.nan\n dtype = arr.dtype\n\n is_bool = is_bool_dtype(dtype)\n if is_bool:\n op = operator.xor\n else:\n op = operator.sub\n\n if isinstance(dtype, NumpyEADtype):\n # NumpyExtensionArray cannot necessarily hold shifted versions of itself.\n arr = arr.to_numpy()\n dtype = arr.dtype\n\n if not isinstance(arr, np.ndarray):\n # i.e ExtensionArray\n if hasattr(arr, f"__{op.__name__}__"):\n if axis != 0:\n raise ValueError(f"cannot diff {type(arr).__name__} on axis={axis}")\n return op(arr, arr.shift(n))\n else:\n raise TypeError(\n f"{type(arr).__name__} has no 'diff' method. 
"\n "Convert to a suitable dtype prior to calling 'diff'."\n )\n\n is_timedelta = False\n if arr.dtype.kind in "mM":\n dtype = np.int64\n arr = arr.view("i8")\n na = iNaT\n is_timedelta = True\n\n elif is_bool:\n # We have to cast in order to be able to hold np.nan\n dtype = np.object_\n\n elif dtype.kind in "iu":\n # We have to cast in order to be able to hold np.nan\n\n # int8, int16 are incompatible with float64,\n # see https://github.com/cython/cython/issues/2646\n if arr.dtype.name in ["int8", "int16"]:\n dtype = np.float32\n else:\n dtype = np.float64\n\n orig_ndim = arr.ndim\n if orig_ndim == 1:\n # reshape so we can always use algos.diff_2d\n arr = arr.reshape(-1, 1)\n # TODO: require axis == 0\n\n dtype = np.dtype(dtype)\n out_arr = np.empty(arr.shape, dtype=dtype)\n\n na_indexer = [slice(None)] * 2\n na_indexer[axis] = slice(None, n) if n >= 0 else slice(n, None)\n out_arr[tuple(na_indexer)] = na\n\n if arr.dtype.name in _diff_special:\n # TODO: can diff_2d dtype specialization troubles be fixed by defining\n # out_arr inside diff_2d?\n algos.diff_2d(arr, out_arr, n, axis, datetimelike=is_timedelta)\n else:\n # To keep mypy happy, _res_indexer is a list while res_indexer is\n # a tuple, ditto for lag_indexer.\n _res_indexer = [slice(None)] * 2\n _res_indexer[axis] = slice(n, None) if n >= 0 else slice(None, n)\n res_indexer = tuple(_res_indexer)\n\n _lag_indexer = [slice(None)] * 2\n _lag_indexer[axis] = slice(None, -n) if n > 0 else slice(-n, None)\n lag_indexer = tuple(_lag_indexer)\n\n out_arr[res_indexer] = op(arr[res_indexer], arr[lag_indexer])\n\n if is_timedelta:\n out_arr = out_arr.view("timedelta64[ns]")\n\n if orig_ndim == 1:\n out_arr = out_arr[:, 0]\n return out_arr\n\n\n# --------------------------------------------------------------------\n# Helper functions\n\n\n# Note: safe_sort is in algorithms.py instead of sorting.py because it is\n# low-dependency, is used in this module, and used private methods from\n# this module.\ndef safe_sort(\n values: Index | ArrayLike,\n codes: npt.NDArray[np.intp] | None = None,\n use_na_sentinel: bool = True,\n assume_unique: bool = False,\n verify: bool = True,\n) -> AnyArrayLike | tuple[AnyArrayLike, np.ndarray]:\n """\n Sort ``values`` and reorder corresponding ``codes``.\n\n ``values`` should be unique if ``codes`` is not None.\n Safe for use with mixed types (int, str), orders ints before strs.\n\n Parameters\n ----------\n values : list-like\n Sequence; must be unique if ``codes`` is not None.\n codes : np.ndarray[intp] or None, default None\n Indices to ``values``. All out of bound indices are treated as\n "not found" and will be masked with ``-1``.\n use_na_sentinel : bool, default True\n If True, the sentinel -1 will be used for NaN values. If False,\n NaN values will be encoded as non-negative integers and will not drop the\n NaN from the uniques of the values.\n assume_unique : bool, default False\n When True, ``values`` are assumed to be unique, which can speed up\n the calculation. Ignored when ``codes`` is None.\n verify : bool, default True\n Check if codes are out of bound for the values and put out of bound\n codes equal to ``-1``. If ``verify=False``, it is assumed there\n are no out of bound codes. 
Ignored when ``codes`` is None.\n\n Returns\n -------\n ordered : AnyArrayLike\n Sorted ``values``\n new_codes : ndarray\n Reordered ``codes``; returned when ``codes`` is not None.\n\n Raises\n ------\n TypeError\n * If ``values`` is not list-like or if ``codes`` is neither None\n nor list-like\n * If ``values`` cannot be sorted\n ValueError\n * If ``codes`` is not None and ``values`` contain duplicates.\n """\n if not isinstance(values, (np.ndarray, ABCExtensionArray, ABCIndex)):\n raise TypeError(\n "Only np.ndarray, ExtensionArray, and Index objects are allowed to "\n "be passed to safe_sort as values"\n )\n\n sorter = None\n ordered: AnyArrayLike\n\n if (\n not isinstance(values.dtype, ExtensionDtype)\n and lib.infer_dtype(values, skipna=False) == "mixed-integer"\n ):\n ordered = _sort_mixed(values)\n else:\n try:\n sorter = values.argsort()\n ordered = values.take(sorter)\n except (TypeError, decimal.InvalidOperation):\n # Previous sorters failed or were not applicable, try `_sort_mixed`\n # which would work, but which fails for special case of 1d arrays\n # with tuples.\n if values.size and isinstance(values[0], tuple):\n # error: Argument 1 to "_sort_tuples" has incompatible type\n # "Union[Index, ExtensionArray, ndarray[Any, Any]]"; expected\n # "ndarray[Any, Any]"\n ordered = _sort_tuples(values) # type: ignore[arg-type]\n else:\n ordered = _sort_mixed(values)\n\n # codes:\n\n if codes is None:\n return ordered\n\n if not is_list_like(codes):\n raise TypeError(\n "Only list-like objects or None are allowed to "\n "be passed to safe_sort as codes"\n )\n codes = ensure_platform_int(np.asarray(codes))\n\n if not assume_unique and not len(unique(values)) == len(values):\n raise ValueError("values should be unique if codes is not None")\n\n if sorter is None:\n # mixed types\n # error: Argument 1 to "_get_hashtable_algo" has incompatible type\n # "Union[Index, ExtensionArray, ndarray[Any, Any]]"; expected\n # "ndarray[Any, Any]"\n hash_klass, values = _get_hashtable_algo(values) # type: ignore[arg-type]\n t = hash_klass(len(values))\n t.map_locations(values)\n sorter = ensure_platform_int(t.lookup(ordered))\n\n if use_na_sentinel:\n # take_nd is faster, but only works for na_sentinels of -1\n order2 = sorter.argsort()\n if verify:\n mask = (codes < -len(values)) | (codes >= len(values))\n codes[mask] = 0\n else:\n mask = None\n new_codes = take_nd(order2, codes, fill_value=-1)\n else:\n reverse_indexer = np.empty(len(sorter), dtype=int)\n reverse_indexer.put(sorter, np.arange(len(sorter)))\n # Out of bound indices will be masked with `-1` next, so we\n # may deal with them here without performance loss using `mode='wrap'`\n new_codes = reverse_indexer.take(codes, mode="wrap")\n\n if use_na_sentinel:\n mask = codes == -1\n if verify:\n mask = mask | (codes < -len(values)) | (codes >= len(values))\n\n if use_na_sentinel and mask is not None:\n np.putmask(new_codes, mask, -1)\n\n return ordered, ensure_platform_int(new_codes)\n\n\ndef _sort_mixed(values) -> AnyArrayLike:\n """order ints before strings before nulls in 1d arrays"""\n str_pos = np.array([isinstance(x, str) for x in values], dtype=bool)\n null_pos = np.array([isna(x) for x in values], dtype=bool)\n num_pos = ~str_pos & ~null_pos\n str_argsort = np.argsort(values[str_pos])\n num_argsort = np.argsort(values[num_pos])\n # convert boolean arrays to positional indices, then order by underlying values\n str_locs = str_pos.nonzero()[0].take(str_argsort)\n num_locs = num_pos.nonzero()[0].take(num_argsort)\n null_locs = 
null_pos.nonzero()[0]\n locs = np.concatenate([num_locs, str_locs, null_locs])\n return values.take(locs)\n\n\ndef _sort_tuples(values: np.ndarray) -> np.ndarray:\n """\n Convert array of tuples (1d) to array of arrays (2d).\n We need to keep the columns separately as they contain different types and\n nans (can't use `np.sort` as it may fail when str and nan are mixed in a\n column as types cannot be compared).\n """\n from pandas.core.internals.construction import to_arrays\n from pandas.core.sorting import lexsort_indexer\n\n arrays, _ = to_arrays(values, None)\n indexer = lexsort_indexer(arrays, orders=True)\n return values[indexer]\n\n\ndef union_with_duplicates(\n lvals: ArrayLike | Index, rvals: ArrayLike | Index\n) -> ArrayLike | Index:\n """\n Extracts the union from lvals and rvals with respect to duplicates and nans in\n both arrays.\n\n Parameters\n ----------\n lvals: np.ndarray or ExtensionArray\n left values which is ordered in front.\n rvals: np.ndarray or ExtensionArray\n right values ordered after lvals.\n\n Returns\n -------\n np.ndarray or ExtensionArray\n Containing the unsorted union of both arrays.\n\n Notes\n -----\n Caller is responsible for ensuring lvals.dtype == rvals.dtype.\n """\n from pandas import Series\n\n with warnings.catch_warnings():\n # filter warning from object dtype inference; we will end up discarding\n # the index here, so the deprecation does not affect the end result here.\n warnings.filterwarnings(\n "ignore",\n "The behavior of value_counts with object-dtype is deprecated",\n category=FutureWarning,\n )\n l_count = value_counts_internal(lvals, dropna=False)\n r_count = value_counts_internal(rvals, dropna=False)\n l_count, r_count = l_count.align(r_count, fill_value=0)\n final_count = np.maximum(l_count.values, r_count.values)\n final_count = Series(final_count, index=l_count.index, dtype="int", copy=False)\n if isinstance(lvals, ABCMultiIndex) and isinstance(rvals, ABCMultiIndex):\n unique_vals = lvals.append(rvals).unique()\n else:\n if isinstance(lvals, ABCIndex):\n lvals = lvals._values\n if isinstance(rvals, ABCIndex):\n rvals = rvals._values\n # error: List item 0 has incompatible type "Union[ExtensionArray,\n # ndarray[Any, Any], Index]"; expected "Union[ExtensionArray,\n # ndarray[Any, Any]]"\n combined = concat_compat([lvals, rvals]) # type: ignore[list-item]\n unique_vals = unique(combined)\n unique_vals = ensure_wrapped_if_datetimelike(unique_vals)\n repeats = final_count.reindex(unique_vals).values\n return np.repeat(unique_vals, repeats)\n\n\ndef map_array(\n arr: ArrayLike,\n mapper,\n na_action: Literal["ignore"] | None = None,\n convert: bool = True,\n) -> np.ndarray | ExtensionArray | Index:\n """\n Map values using an input mapping or function.\n\n Parameters\n ----------\n mapper : function, dict, or Series\n Mapping correspondence.\n na_action : {None, 'ignore'}, default None\n If 'ignore', propagate NA values, without passing them to the\n mapping correspondence.\n convert : bool, default True\n Try to find better dtype for elementwise function results. 
If\n False, leave as dtype=object.\n\n Returns\n -------\n Union[ndarray, Index, ExtensionArray]\n The output of the mapping function applied to the array.\n If the function returns a tuple with more than one element\n a MultiIndex will be returned.\n """\n if na_action not in (None, "ignore"):\n msg = f"na_action must either be 'ignore' or None, {na_action} was passed"\n raise ValueError(msg)\n\n # we can fastpath dict/Series to an efficient map\n # as we know that we are not going to have to yield\n # python types\n if is_dict_like(mapper):\n if isinstance(mapper, dict) and hasattr(mapper, "__missing__"):\n # If a dictionary subclass defines a default value method,\n # convert mapper to a lookup function (GH #15999).\n dict_with_default = mapper\n mapper = lambda x: dict_with_default[\n np.nan if isinstance(x, float) and np.isnan(x) else x\n ]\n else:\n # Dictionary does not have a default. Thus it's safe to\n # convert to an Series for efficiency.\n # we specify the keys here to handle the\n # possibility that they are tuples\n\n # The return value of mapping with an empty mapper is\n # expected to be pd.Series(np.nan, ...). As np.nan is\n # of dtype float64 the return value of this method should\n # be float64 as well\n from pandas import Series\n\n if len(mapper) == 0:\n mapper = Series(mapper, dtype=np.float64)\n else:\n mapper = Series(mapper)\n\n if isinstance(mapper, ABCSeries):\n if na_action == "ignore":\n mapper = mapper[mapper.index.notna()]\n\n # Since values were input this means we came from either\n # a dict or a series and mapper should be an index\n indexer = mapper.index.get_indexer(arr)\n new_values = take_nd(mapper._values, indexer)\n\n return new_values\n\n if not len(arr):\n return arr.copy()\n\n # we must convert to python types\n values = arr.astype(object, copy=False)\n if na_action is None:\n return lib.map_infer(values, mapper, convert=convert)\n else:\n return lib.map_infer_mask(\n values, mapper, mask=isna(values).view(np.uint8), convert=convert\n )\n
.venv\Lib\site-packages\pandas\core\algorithms.py
algorithms.py
Python
55,180
0.75
0.109903
0.111797
awesome-app
331
2025-04-28T04:45:33.971138
Apache-2.0
false
a720c921a2a6eaf30e7e9581864ca38b
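A minimal usage sketch (not part of the dumped algorithms.py record above; it assumes only that pandas and NumPy are installed): the internal helpers in that file — searchsorted, diff, and map_array — are not called directly by users, but the public Series methods below are built on top of them, so this is a hedged illustration of the behaviors those helpers describe rather than of the private functions themselves.

import numpy as np
import pandas as pd

s = pd.Series([3, 1, 2, 2, np.nan])

# Series.searchsorted mirrors the searchsorted helper above; the Series is assumed sorted.
print(s.dropna().sort_values().searchsorted(2))

# Series.diff follows the diff helper's semantics (operator.sub here, operator.xor for bool dtypes).
print(s.diff(1).tolist())          # [nan, -2.0, 1.0, 0.0, nan]

# Series.map is layered on map_array; na_action="ignore" propagates NaN without calling the mapper.
print(s.map(lambda x: x * 10, na_action="ignore").tolist())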
from pandas._libs import (\n NaT,\n Period,\n Timedelta,\n Timestamp,\n)\nfrom pandas._libs.missing import NA\n\nfrom pandas.core.dtypes.dtypes import (\n ArrowDtype,\n CategoricalDtype,\n DatetimeTZDtype,\n IntervalDtype,\n PeriodDtype,\n)\nfrom pandas.core.dtypes.missing import (\n isna,\n isnull,\n notna,\n notnull,\n)\n\nfrom pandas.core.algorithms import (\n factorize,\n unique,\n value_counts,\n)\nfrom pandas.core.arrays import Categorical\nfrom pandas.core.arrays.boolean import BooleanDtype\nfrom pandas.core.arrays.floating import (\n Float32Dtype,\n Float64Dtype,\n)\nfrom pandas.core.arrays.integer import (\n Int8Dtype,\n Int16Dtype,\n Int32Dtype,\n Int64Dtype,\n UInt8Dtype,\n UInt16Dtype,\n UInt32Dtype,\n UInt64Dtype,\n)\nfrom pandas.core.arrays.string_ import StringDtype\nfrom pandas.core.construction import array\nfrom pandas.core.flags import Flags\nfrom pandas.core.groupby import (\n Grouper,\n NamedAgg,\n)\nfrom pandas.core.indexes.api import (\n CategoricalIndex,\n DatetimeIndex,\n Index,\n IntervalIndex,\n MultiIndex,\n PeriodIndex,\n RangeIndex,\n TimedeltaIndex,\n)\nfrom pandas.core.indexes.datetimes import (\n bdate_range,\n date_range,\n)\nfrom pandas.core.indexes.interval import (\n Interval,\n interval_range,\n)\nfrom pandas.core.indexes.period import period_range\nfrom pandas.core.indexes.timedeltas import timedelta_range\nfrom pandas.core.indexing import IndexSlice\nfrom pandas.core.series import Series\nfrom pandas.core.tools.datetimes import to_datetime\nfrom pandas.core.tools.numeric import to_numeric\nfrom pandas.core.tools.timedeltas import to_timedelta\n\nfrom pandas.io.formats.format import set_eng_float_format\nfrom pandas.tseries.offsets import DateOffset\n\n# DataFrame needs to be imported after NamedAgg to avoid a circular import\nfrom pandas.core.frame import DataFrame # isort:skip\n\n__all__ = [\n "array",\n "ArrowDtype",\n "bdate_range",\n "BooleanDtype",\n "Categorical",\n "CategoricalDtype",\n "CategoricalIndex",\n "DataFrame",\n "DateOffset",\n "date_range",\n "DatetimeIndex",\n "DatetimeTZDtype",\n "factorize",\n "Flags",\n "Float32Dtype",\n "Float64Dtype",\n "Grouper",\n "Index",\n "IndexSlice",\n "Int16Dtype",\n "Int32Dtype",\n "Int64Dtype",\n "Int8Dtype",\n "Interval",\n "IntervalDtype",\n "IntervalIndex",\n "interval_range",\n "isna",\n "isnull",\n "MultiIndex",\n "NA",\n "NamedAgg",\n "NaT",\n "notna",\n "notnull",\n "Period",\n "PeriodDtype",\n "PeriodIndex",\n "period_range",\n "RangeIndex",\n "Series",\n "set_eng_float_format",\n "StringDtype",\n "Timedelta",\n "TimedeltaIndex",\n "timedelta_range",\n "Timestamp",\n "to_datetime",\n "to_numeric",\n "to_timedelta",\n "UInt16Dtype",\n "UInt32Dtype",\n "UInt64Dtype",\n "UInt8Dtype",\n "unique",\n "value_counts",\n]\n
.venv\Lib\site-packages\pandas\core\api.py
api.py
Python
2,911
0.95
0
0.007407
python-kit
706
2023-11-25T08:20:26.317196
Apache-2.0
false
1221312d1038c924fd125785ccb150ee
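A short sketch (not part of the dumped api.py record above, and assuming only an installed pandas): the names collected in that module's __all__ are the ones re-exported at the top-level pandas namespace, so each is reachable directly as pd.<name>; the exact import chain is an internal detail.

import pandas as pd

idx = pd.date_range("2024-01-01", periods=3, freq="D")    # date_range from the __all__ list
ser = pd.Series([1, 2, None], index=idx, dtype="Int64")   # Series and Int64Dtype likewise
print(pd.isna(ser).tolist())                               # isna re-exported through the same module
print(pd.to_datetime("2024-01-02") in idx)                 # to_datetime as well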
from __future__ import annotations\n\nimport abc\nfrom collections import defaultdict\nimport functools\nfrom functools import partial\nimport inspect\nfrom typing import (\n TYPE_CHECKING,\n Any,\n Callable,\n Literal,\n cast,\n)\nimport warnings\n\nimport numpy as np\n\nfrom pandas._config import option_context\n\nfrom pandas._libs import lib\nfrom pandas._libs.internals import BlockValuesRefs\nfrom pandas._typing import (\n AggFuncType,\n AggFuncTypeBase,\n AggFuncTypeDict,\n AggObjType,\n Axis,\n AxisInt,\n NDFrameT,\n npt,\n)\nfrom pandas.compat._optional import import_optional_dependency\nfrom pandas.errors import SpecificationError\nfrom pandas.util._decorators import cache_readonly\nfrom pandas.util._exceptions import find_stack_level\n\nfrom pandas.core.dtypes.cast import is_nested_object\nfrom pandas.core.dtypes.common import (\n is_dict_like,\n is_extension_array_dtype,\n is_list_like,\n is_numeric_dtype,\n is_sequence,\n)\nfrom pandas.core.dtypes.dtypes import (\n CategoricalDtype,\n ExtensionDtype,\n)\nfrom pandas.core.dtypes.generic import (\n ABCDataFrame,\n ABCNDFrame,\n ABCSeries,\n)\n\nfrom pandas.core._numba.executor import generate_apply_looper\nimport pandas.core.common as com\nfrom pandas.core.construction import ensure_wrapped_if_datetimelike\n\nif TYPE_CHECKING:\n from collections.abc import (\n Generator,\n Hashable,\n Iterable,\n MutableMapping,\n Sequence,\n )\n\n from pandas import (\n DataFrame,\n Index,\n Series,\n )\n from pandas.core.groupby import GroupBy\n from pandas.core.resample import Resampler\n from pandas.core.window.rolling import BaseWindow\n\n\nResType = dict[int, Any]\n\n\ndef frame_apply(\n obj: DataFrame,\n func: AggFuncType,\n axis: Axis = 0,\n raw: bool = False,\n result_type: str | None = None,\n by_row: Literal[False, "compat"] = "compat",\n engine: str = "python",\n engine_kwargs: dict[str, bool] | None = None,\n args=None,\n kwargs=None,\n) -> FrameApply:\n """construct and return a row or column based frame apply object"""\n axis = obj._get_axis_number(axis)\n klass: type[FrameApply]\n if axis == 0:\n klass = FrameRowApply\n elif axis == 1:\n klass = FrameColumnApply\n\n _, func, _, _ = reconstruct_func(func, **kwargs)\n assert func is not None\n\n return klass(\n obj,\n func,\n raw=raw,\n result_type=result_type,\n by_row=by_row,\n engine=engine,\n engine_kwargs=engine_kwargs,\n args=args,\n kwargs=kwargs,\n )\n\n\nclass Apply(metaclass=abc.ABCMeta):\n axis: AxisInt\n\n def __init__(\n self,\n obj: AggObjType,\n func: AggFuncType,\n raw: bool,\n result_type: str | None,\n *,\n by_row: Literal[False, "compat", "_compat"] = "compat",\n engine: str = "python",\n engine_kwargs: dict[str, bool] | None = None,\n args,\n kwargs,\n ) -> None:\n self.obj = obj\n self.raw = raw\n\n assert by_row is False or by_row in ["compat", "_compat"]\n self.by_row = by_row\n\n self.args = args or ()\n self.kwargs = kwargs or {}\n\n self.engine = engine\n self.engine_kwargs = {} if engine_kwargs is None else engine_kwargs\n\n if result_type not in [None, "reduce", "broadcast", "expand"]:\n raise ValueError(\n "invalid value for result_type, must be one "\n "of {None, 'reduce', 'broadcast', 'expand'}"\n )\n\n self.result_type = result_type\n\n self.func = func\n\n @abc.abstractmethod\n def apply(self) -> DataFrame | Series:\n pass\n\n @abc.abstractmethod\n def agg_or_apply_list_like(\n self, op_name: Literal["agg", "apply"]\n ) -> DataFrame | Series:\n pass\n\n @abc.abstractmethod\n def agg_or_apply_dict_like(\n self, op_name: Literal["agg", "apply"]\n ) -> 
DataFrame | Series:\n pass\n\n def agg(self) -> DataFrame | Series | None:\n """\n Provide an implementation for the aggregators.\n\n Returns\n -------\n Result of aggregation, or None if agg cannot be performed by\n this method.\n """\n obj = self.obj\n func = self.func\n args = self.args\n kwargs = self.kwargs\n\n if isinstance(func, str):\n return self.apply_str()\n\n if is_dict_like(func):\n return self.agg_dict_like()\n elif is_list_like(func):\n # we require a list, but not a 'str'\n return self.agg_list_like()\n\n if callable(func):\n f = com.get_cython_func(func)\n if f and not args and not kwargs:\n warn_alias_replacement(obj, func, f)\n return getattr(obj, f)()\n\n # caller can react\n return None\n\n def transform(self) -> DataFrame | Series:\n """\n Transform a DataFrame or Series.\n\n Returns\n -------\n DataFrame or Series\n Result of applying ``func`` along the given axis of the\n Series or DataFrame.\n\n Raises\n ------\n ValueError\n If the transform function fails or does not transform.\n """\n obj = self.obj\n func = self.func\n axis = self.axis\n args = self.args\n kwargs = self.kwargs\n\n is_series = obj.ndim == 1\n\n if obj._get_axis_number(axis) == 1:\n assert not is_series\n return obj.T.transform(func, 0, *args, **kwargs).T\n\n if is_list_like(func) and not is_dict_like(func):\n func = cast(list[AggFuncTypeBase], func)\n # Convert func equivalent dict\n if is_series:\n func = {com.get_callable_name(v) or v: v for v in func}\n else:\n func = {col: func for col in obj}\n\n if is_dict_like(func):\n func = cast(AggFuncTypeDict, func)\n return self.transform_dict_like(func)\n\n # func is either str or callable\n func = cast(AggFuncTypeBase, func)\n try:\n result = self.transform_str_or_callable(func)\n except TypeError:\n raise\n except Exception as err:\n raise ValueError("Transform function failed") from err\n\n # Functions that transform may return empty Series/DataFrame\n # when the dtype is not appropriate\n if (\n isinstance(result, (ABCSeries, ABCDataFrame))\n and result.empty\n and not obj.empty\n ):\n raise ValueError("Transform function failed")\n # error: Argument 1 to "__get__" of "AxisProperty" has incompatible type\n # "Union[Series, DataFrame, GroupBy[Any], SeriesGroupBy,\n # DataFrameGroupBy, BaseWindow, Resampler]"; expected "Union[DataFrame,\n # Series]"\n if not isinstance(result, (ABCSeries, ABCDataFrame)) or not result.index.equals(\n obj.index # type: ignore[arg-type]\n ):\n raise ValueError("Function did not transform")\n\n return result\n\n def transform_dict_like(self, func) -> DataFrame:\n """\n Compute transform in the case of a dict-like func\n """\n from pandas.core.reshape.concat import concat\n\n obj = self.obj\n args = self.args\n kwargs = self.kwargs\n\n # transform is currently only for Series/DataFrame\n assert isinstance(obj, ABCNDFrame)\n\n if len(func) == 0:\n raise ValueError("No transform functions were provided")\n\n func = self.normalize_dictlike_arg("transform", obj, func)\n\n results: dict[Hashable, DataFrame | Series] = {}\n for name, how in func.items():\n colg = obj._gotitem(name, ndim=1)\n results[name] = colg.transform(how, 0, *args, **kwargs)\n return concat(results, axis=1)\n\n def transform_str_or_callable(self, func) -> DataFrame | Series:\n """\n Compute transform in the case of a string or callable func\n """\n obj = self.obj\n args = self.args\n kwargs = self.kwargs\n\n if isinstance(func, str):\n return self._apply_str(obj, func, *args, **kwargs)\n\n if not args and not kwargs:\n f = com.get_cython_func(func)\n if 
f:\n warn_alias_replacement(obj, func, f)\n return getattr(obj, f)()\n\n # Two possible ways to use a UDF - apply or call directly\n try:\n return obj.apply(func, args=args, **kwargs)\n except Exception:\n return func(obj, *args, **kwargs)\n\n def agg_list_like(self) -> DataFrame | Series:\n """\n Compute aggregation in the case of a list-like argument.\n\n Returns\n -------\n Result of aggregation.\n """\n return self.agg_or_apply_list_like(op_name="agg")\n\n def compute_list_like(\n self,\n op_name: Literal["agg", "apply"],\n selected_obj: Series | DataFrame,\n kwargs: dict[str, Any],\n ) -> tuple[list[Hashable] | Index, list[Any]]:\n """\n Compute agg/apply results for like-like input.\n\n Parameters\n ----------\n op_name : {"agg", "apply"}\n Operation being performed.\n selected_obj : Series or DataFrame\n Data to perform operation on.\n kwargs : dict\n Keyword arguments to pass to the functions.\n\n Returns\n -------\n keys : list[Hashable] or Index\n Index labels for result.\n results : list\n Data for result. When aggregating with a Series, this can contain any\n Python objects.\n """\n func = cast(list[AggFuncTypeBase], self.func)\n obj = self.obj\n\n results = []\n keys = []\n\n # degenerate case\n if selected_obj.ndim == 1:\n for a in func:\n colg = obj._gotitem(selected_obj.name, ndim=1, subset=selected_obj)\n args = (\n [self.axis, *self.args]\n if include_axis(op_name, colg)\n else self.args\n )\n new_res = getattr(colg, op_name)(a, *args, **kwargs)\n results.append(new_res)\n\n # make sure we find a good name\n name = com.get_callable_name(a) or a\n keys.append(name)\n\n else:\n indices = []\n for index, col in enumerate(selected_obj):\n colg = obj._gotitem(col, ndim=1, subset=selected_obj.iloc[:, index])\n args = (\n [self.axis, *self.args]\n if include_axis(op_name, colg)\n else self.args\n )\n new_res = getattr(colg, op_name)(func, *args, **kwargs)\n results.append(new_res)\n indices.append(index)\n # error: Incompatible types in assignment (expression has type "Any |\n # Index", variable has type "list[Any | Callable[..., Any] | str]")\n keys = selected_obj.columns.take(indices) # type: ignore[assignment]\n\n return keys, results\n\n def wrap_results_list_like(\n self, keys: Iterable[Hashable], results: list[Series | DataFrame]\n ):\n from pandas.core.reshape.concat import concat\n\n obj = self.obj\n\n try:\n return concat(results, keys=keys, axis=1, sort=False)\n except TypeError as err:\n # we are concatting non-NDFrame objects,\n # e.g. 
a list of scalars\n from pandas import Series\n\n result = Series(results, index=keys, name=obj.name)\n if is_nested_object(result):\n raise ValueError(\n "cannot combine transform and aggregation operations"\n ) from err\n return result\n\n def agg_dict_like(self) -> DataFrame | Series:\n """\n Compute aggregation in the case of a dict-like argument.\n\n Returns\n -------\n Result of aggregation.\n """\n return self.agg_or_apply_dict_like(op_name="agg")\n\n def compute_dict_like(\n self,\n op_name: Literal["agg", "apply"],\n selected_obj: Series | DataFrame,\n selection: Hashable | Sequence[Hashable],\n kwargs: dict[str, Any],\n ) -> tuple[list[Hashable], list[Any]]:\n """\n Compute agg/apply results for dict-like input.\n\n Parameters\n ----------\n op_name : {"agg", "apply"}\n Operation being performed.\n selected_obj : Series or DataFrame\n Data to perform operation on.\n selection : hashable or sequence of hashables\n Used by GroupBy, Window, and Resample if selection is applied to the object.\n kwargs : dict\n Keyword arguments to pass to the functions.\n\n Returns\n -------\n keys : list[hashable]\n Index labels for result.\n results : list\n Data for result. When aggregating with a Series, this can contain any\n Python object.\n """\n from pandas.core.groupby.generic import (\n DataFrameGroupBy,\n SeriesGroupBy,\n )\n\n obj = self.obj\n is_groupby = isinstance(obj, (DataFrameGroupBy, SeriesGroupBy))\n func = cast(AggFuncTypeDict, self.func)\n func = self.normalize_dictlike_arg(op_name, selected_obj, func)\n\n is_non_unique_col = (\n selected_obj.ndim == 2\n and selected_obj.columns.nunique() < len(selected_obj.columns)\n )\n\n if selected_obj.ndim == 1:\n # key only used for output\n colg = obj._gotitem(selection, ndim=1)\n results = [getattr(colg, op_name)(how, **kwargs) for _, how in func.items()]\n keys = list(func.keys())\n elif not is_groupby and is_non_unique_col:\n # key used for column selection and output\n # GH#51099\n results = []\n keys = []\n for key, how in func.items():\n indices = selected_obj.columns.get_indexer_for([key])\n labels = selected_obj.columns.take(indices)\n label_to_indices = defaultdict(list)\n for index, label in zip(indices, labels):\n label_to_indices[label].append(index)\n\n key_data = [\n getattr(selected_obj._ixs(indice, axis=1), op_name)(how, **kwargs)\n for label, indices in label_to_indices.items()\n for indice in indices\n ]\n\n keys += [key] * len(key_data)\n results += key_data\n else:\n # key used for column selection and output\n results = [\n getattr(obj._gotitem(key, ndim=1), op_name)(how, **kwargs)\n for key, how in func.items()\n ]\n keys = list(func.keys())\n\n return keys, results\n\n def wrap_results_dict_like(\n self,\n selected_obj: Series | DataFrame,\n result_index: list[Hashable],\n result_data: list,\n ):\n from pandas import Index\n from pandas.core.reshape.concat import concat\n\n obj = self.obj\n\n # Avoid making two isinstance calls in all and any below\n is_ndframe = [isinstance(r, ABCNDFrame) for r in result_data]\n\n if all(is_ndframe):\n results = dict(zip(result_index, result_data))\n keys_to_use: Iterable[Hashable]\n keys_to_use = [k for k in result_index if not results[k].empty]\n # Have to check, if at least one DataFrame is not empty.\n keys_to_use = keys_to_use if keys_to_use != [] else result_index\n if selected_obj.ndim == 2:\n # keys are columns, so we can preserve names\n ktu = Index(keys_to_use)\n ktu._set_names(selected_obj.columns.names)\n keys_to_use = ktu\n\n axis: AxisInt = 0 if isinstance(obj, 
ABCSeries) else 1\n result = concat(\n {k: results[k] for k in keys_to_use},\n axis=axis,\n keys=keys_to_use,\n )\n elif any(is_ndframe):\n # There is a mix of NDFrames and scalars\n raise ValueError(\n "cannot perform both aggregation "\n "and transformation operations "\n "simultaneously"\n )\n else:\n from pandas import Series\n\n # we have a list of scalars\n # GH 36212 use name only if obj is a series\n if obj.ndim == 1:\n obj = cast("Series", obj)\n name = obj.name\n else:\n name = None\n\n result = Series(result_data, index=result_index, name=name)\n\n return result\n\n def apply_str(self) -> DataFrame | Series:\n """\n Compute apply in case of a string.\n\n Returns\n -------\n result: Series or DataFrame\n """\n # Caller is responsible for checking isinstance(self.f, str)\n func = cast(str, self.func)\n\n obj = self.obj\n\n from pandas.core.groupby.generic import (\n DataFrameGroupBy,\n SeriesGroupBy,\n )\n\n # Support for `frame.transform('method')`\n # Some methods (shift, etc.) require the axis argument, others\n # don't, so inspect and insert if necessary.\n method = getattr(obj, func, None)\n if callable(method):\n sig = inspect.getfullargspec(method)\n arg_names = (*sig.args, *sig.kwonlyargs)\n if self.axis != 0 and (\n "axis" not in arg_names or func in ("corrwith", "skew")\n ):\n raise ValueError(f"Operation {func} does not support axis=1")\n if "axis" in arg_names:\n if isinstance(obj, (SeriesGroupBy, DataFrameGroupBy)):\n # Try to avoid FutureWarning for deprecated axis keyword;\n # If self.axis matches the axis we would get by not passing\n # axis, we safely exclude the keyword.\n\n default_axis = 0\n if func in ["idxmax", "idxmin"]:\n # DataFrameGroupBy.idxmax, idxmin axis defaults to self.axis,\n # whereas other axis keywords default to 0\n default_axis = self.obj.axis\n\n if default_axis != self.axis:\n self.kwargs["axis"] = self.axis\n else:\n self.kwargs["axis"] = self.axis\n return self._apply_str(obj, func, *self.args, **self.kwargs)\n\n def apply_list_or_dict_like(self) -> DataFrame | Series:\n """\n Compute apply in case of a list-like or dict-like.\n\n Returns\n -------\n result: Series, DataFrame, or None\n Result when self.func is a list-like or dict-like, None otherwise.\n """\n\n if self.engine == "numba":\n raise NotImplementedError(\n "The 'numba' engine doesn't support list-like/"\n "dict likes of callables yet."\n )\n\n if self.axis == 1 and isinstance(self.obj, ABCDataFrame):\n return self.obj.T.apply(self.func, 0, args=self.args, **self.kwargs).T\n\n func = self.func\n kwargs = self.kwargs\n\n if is_dict_like(func):\n result = self.agg_or_apply_dict_like(op_name="apply")\n else:\n result = self.agg_or_apply_list_like(op_name="apply")\n\n result = reconstruct_and_relabel_result(result, func, **kwargs)\n\n return result\n\n def normalize_dictlike_arg(\n self, how: str, obj: DataFrame | Series, func: AggFuncTypeDict\n ) -> AggFuncTypeDict:\n """\n Handler for dict-like argument.\n\n Ensures that necessary columns exist if obj is a DataFrame, and\n that a nested renamer is not passed. 
Also normalizes to all lists\n when values consists of a mix of list and non-lists.\n """\n assert how in ("apply", "agg", "transform")\n\n # Can't use func.values(); wouldn't work for a Series\n if (\n how == "agg"\n and isinstance(obj, ABCSeries)\n and any(is_list_like(v) for _, v in func.items())\n ) or (any(is_dict_like(v) for _, v in func.items())):\n # GH 15931 - deprecation of renaming keys\n raise SpecificationError("nested renamer is not supported")\n\n if obj.ndim != 1:\n # Check for missing columns on a frame\n from pandas import Index\n\n cols = Index(list(func.keys())).difference(obj.columns, sort=True)\n if len(cols) > 0:\n raise KeyError(f"Column(s) {list(cols)} do not exist")\n\n aggregator_types = (list, tuple, dict)\n\n # if we have a dict of any non-scalars\n # eg. {'A' : ['mean']}, normalize all to\n # be list-likes\n # Cannot use func.values() because arg may be a Series\n if any(isinstance(x, aggregator_types) for _, x in func.items()):\n new_func: AggFuncTypeDict = {}\n for k, v in func.items():\n if not isinstance(v, aggregator_types):\n new_func[k] = [v]\n else:\n new_func[k] = v\n func = new_func\n return func\n\n def _apply_str(self, obj, func: str, *args, **kwargs):\n """\n if arg is a string, then try to operate on it:\n - try to find a function (or attribute) on obj\n - try to find a numpy function\n - raise\n """\n assert isinstance(func, str)\n\n if hasattr(obj, func):\n f = getattr(obj, func)\n if callable(f):\n return f(*args, **kwargs)\n\n # people may aggregate on a non-callable attribute\n # but don't let them think they can pass args to it\n assert len(args) == 0\n assert len([kwarg for kwarg in kwargs if kwarg not in ["axis"]]) == 0\n return f\n elif hasattr(np, func) and hasattr(obj, "__array__"):\n # in particular exclude Window\n f = getattr(np, func)\n return f(obj, *args, **kwargs)\n else:\n msg = f"'{func}' is not a valid function for '{type(obj).__name__}' object"\n raise AttributeError(msg)\n\n\nclass NDFrameApply(Apply):\n """\n Methods shared by FrameApply and SeriesApply but\n not GroupByApply or ResamplerWindowApply\n """\n\n obj: DataFrame | Series\n\n @property\n def index(self) -> Index:\n return self.obj.index\n\n @property\n def agg_axis(self) -> Index:\n return self.obj._get_agg_axis(self.axis)\n\n def agg_or_apply_list_like(\n self, op_name: Literal["agg", "apply"]\n ) -> DataFrame | Series:\n obj = self.obj\n kwargs = self.kwargs\n\n if op_name == "apply":\n if isinstance(self, FrameApply):\n by_row = self.by_row\n\n elif isinstance(self, SeriesApply):\n by_row = "_compat" if self.by_row else False\n else:\n by_row = False\n kwargs = {**kwargs, "by_row": by_row}\n\n if getattr(obj, "axis", 0) == 1:\n raise NotImplementedError("axis other than 0 is not supported")\n\n keys, results = self.compute_list_like(op_name, obj, kwargs)\n result = self.wrap_results_list_like(keys, results)\n return result\n\n def agg_or_apply_dict_like(\n self, op_name: Literal["agg", "apply"]\n ) -> DataFrame | Series:\n assert op_name in ["agg", "apply"]\n obj = self.obj\n\n kwargs = {}\n if op_name == "apply":\n by_row = "_compat" if self.by_row else False\n kwargs.update({"by_row": by_row})\n\n if getattr(obj, "axis", 0) == 1:\n raise NotImplementedError("axis other than 0 is not supported")\n\n selection = None\n result_index, result_data = self.compute_dict_like(\n op_name, obj, selection, kwargs\n )\n result = self.wrap_results_dict_like(obj, result_index, result_data)\n return result\n\n\nclass FrameApply(NDFrameApply):\n obj: DataFrame\n\n def 
__init__(\n self,\n obj: AggObjType,\n func: AggFuncType,\n raw: bool,\n result_type: str | None,\n *,\n by_row: Literal[False, "compat"] = False,\n engine: str = "python",\n engine_kwargs: dict[str, bool] | None = None,\n args,\n kwargs,\n ) -> None:\n if by_row is not False and by_row != "compat":\n raise ValueError(f"by_row={by_row} not allowed")\n super().__init__(\n obj,\n func,\n raw,\n result_type,\n by_row=by_row,\n engine=engine,\n engine_kwargs=engine_kwargs,\n args=args,\n kwargs=kwargs,\n )\n\n # ---------------------------------------------------------------\n # Abstract Methods\n\n @property\n @abc.abstractmethod\n def result_index(self) -> Index:\n pass\n\n @property\n @abc.abstractmethod\n def result_columns(self) -> Index:\n pass\n\n @property\n @abc.abstractmethod\n def series_generator(self) -> Generator[Series, None, None]:\n pass\n\n @staticmethod\n @functools.cache\n @abc.abstractmethod\n def generate_numba_apply_func(\n func, nogil=True, nopython=True, parallel=False\n ) -> Callable[[npt.NDArray, Index, Index], dict[int, Any]]:\n pass\n\n @abc.abstractmethod\n def apply_with_numba(self):\n pass\n\n def validate_values_for_numba(self):\n # Validate column dtyps all OK\n for colname, dtype in self.obj.dtypes.items():\n if not is_numeric_dtype(dtype):\n raise ValueError(\n f"Column {colname} must have a numeric dtype. "\n f"Found '{dtype}' instead"\n )\n if is_extension_array_dtype(dtype):\n raise ValueError(\n f"Column {colname} is backed by an extension array, "\n f"which is not supported by the numba engine."\n )\n\n @abc.abstractmethod\n def wrap_results_for_axis(\n self, results: ResType, res_index: Index\n ) -> DataFrame | Series:\n pass\n\n # ---------------------------------------------------------------\n\n @property\n def res_columns(self) -> Index:\n return self.result_columns\n\n @property\n def columns(self) -> Index:\n return self.obj.columns\n\n @cache_readonly\n def values(self):\n return self.obj.values\n\n def apply(self) -> DataFrame | Series:\n """compute the results"""\n\n # dispatch to handle list-like or dict-like\n if is_list_like(self.func):\n if self.engine == "numba":\n raise NotImplementedError(\n "the 'numba' engine doesn't support lists of callables yet"\n )\n return self.apply_list_or_dict_like()\n\n # all empty\n if len(self.columns) == 0 and len(self.index) == 0:\n return self.apply_empty_result()\n\n # string dispatch\n if isinstance(self.func, str):\n if self.engine == "numba":\n raise NotImplementedError(\n "the 'numba' engine doesn't support using "\n "a string as the callable function"\n )\n return self.apply_str()\n\n # ufunc\n elif isinstance(self.func, np.ufunc):\n if self.engine == "numba":\n raise NotImplementedError(\n "the 'numba' engine doesn't support "\n "using a numpy ufunc as the callable function"\n )\n with np.errstate(all="ignore"):\n results = self.obj._mgr.apply("apply", func=self.func)\n # _constructor will retain self.index and self.columns\n return self.obj._constructor_from_mgr(results, axes=results.axes)\n\n # broadcasting\n if self.result_type == "broadcast":\n if self.engine == "numba":\n raise NotImplementedError(\n "the 'numba' engine doesn't support result_type='broadcast'"\n )\n return self.apply_broadcast(self.obj)\n\n # one axis empty\n elif not all(self.obj.shape):\n return self.apply_empty_result()\n\n # raw\n elif self.raw:\n return self.apply_raw(engine=self.engine, engine_kwargs=self.engine_kwargs)\n\n return self.apply_standard()\n\n def agg(self):\n obj = self.obj\n axis = self.axis\n\n # TODO: 
Avoid having to change state\n self.obj = self.obj if self.axis == 0 else self.obj.T\n self.axis = 0\n\n result = None\n try:\n result = super().agg()\n finally:\n self.obj = obj\n self.axis = axis\n\n if axis == 1:\n result = result.T if result is not None else result\n\n if result is None:\n result = self.obj.apply(self.func, axis, args=self.args, **self.kwargs)\n\n return result\n\n def apply_empty_result(self):\n """\n we have an empty result; at least 1 axis is 0\n\n we will try to apply the function to an empty\n series in order to see if this is a reduction function\n """\n assert callable(self.func)\n\n # we are not asked to reduce or infer reduction\n # so just return a copy of the existing object\n if self.result_type not in ["reduce", None]:\n return self.obj.copy()\n\n # we may need to infer\n should_reduce = self.result_type == "reduce"\n\n from pandas import Series\n\n if not should_reduce:\n try:\n if self.axis == 0:\n r = self.func(\n Series([], dtype=np.float64), *self.args, **self.kwargs\n )\n else:\n r = self.func(\n Series(index=self.columns, dtype=np.float64),\n *self.args,\n **self.kwargs,\n )\n except Exception:\n pass\n else:\n should_reduce = not isinstance(r, Series)\n\n if should_reduce:\n if len(self.agg_axis):\n r = self.func(Series([], dtype=np.float64), *self.args, **self.kwargs)\n else:\n r = np.nan\n\n return self.obj._constructor_sliced(r, index=self.agg_axis)\n else:\n return self.obj.copy()\n\n def apply_raw(self, engine="python", engine_kwargs=None):\n """apply to the values as a numpy array"""\n\n def wrap_function(func):\n """\n Wrap user supplied function to work around numpy issue.\n\n see https://github.com/numpy/numpy/issues/8352\n """\n\n def wrapper(*args, **kwargs):\n result = func(*args, **kwargs)\n if isinstance(result, str):\n result = np.array(result, dtype=object)\n return result\n\n return wrapper\n\n if engine == "numba":\n engine_kwargs = {} if engine_kwargs is None else engine_kwargs\n\n # error: Argument 1 to "__call__" of "_lru_cache_wrapper" has\n # incompatible type "Callable[..., Any] | str | list[Callable\n # [..., Any] | str] | dict[Hashable,Callable[..., Any] | str |\n # list[Callable[..., Any] | str]]"; expected "Hashable"\n nb_looper = generate_apply_looper(\n self.func, **engine_kwargs # type: ignore[arg-type]\n )\n result = nb_looper(self.values, self.axis)\n # If we made the result 2-D, squeeze it back to 1-D\n result = np.squeeze(result)\n else:\n result = np.apply_along_axis(\n wrap_function(self.func),\n self.axis,\n self.values,\n *self.args,\n **self.kwargs,\n )\n\n # TODO: mixed type case\n if result.ndim == 2:\n return self.obj._constructor(result, index=self.index, columns=self.columns)\n else:\n return self.obj._constructor_sliced(result, index=self.agg_axis)\n\n def apply_broadcast(self, target: DataFrame) -> DataFrame:\n assert callable(self.func)\n\n result_values = np.empty_like(target.values)\n\n # axis which we want to compare compliance\n result_compare = target.shape[0]\n\n for i, col in enumerate(target.columns):\n res = self.func(target[col], *self.args, **self.kwargs)\n ares = np.asarray(res).ndim\n\n # must be a scalar or 1d\n if ares > 1:\n raise ValueError("too many dims to broadcast")\n if ares == 1:\n # must match return dim\n if result_compare != len(res):\n raise ValueError("cannot broadcast result")\n\n result_values[:, i] = res\n\n # we *always* preserve the original index / columns\n result = self.obj._constructor(\n result_values, index=target.index, columns=target.columns\n )\n return 
result\n\n def apply_standard(self):\n if self.engine == "python":\n results, res_index = self.apply_series_generator()\n else:\n results, res_index = self.apply_series_numba()\n\n # wrap results\n return self.wrap_results(results, res_index)\n\n def apply_series_generator(self) -> tuple[ResType, Index]:\n assert callable(self.func)\n\n series_gen = self.series_generator\n res_index = self.result_index\n\n results = {}\n\n with option_context("mode.chained_assignment", None):\n for i, v in enumerate(series_gen):\n # ignore SettingWithCopy here in case the user mutates\n results[i] = self.func(v, *self.args, **self.kwargs)\n if isinstance(results[i], ABCSeries):\n # If we have a view on v, we need to make a copy because\n # series_generator will swap out the underlying data\n results[i] = results[i].copy(deep=False)\n\n return results, res_index\n\n def apply_series_numba(self):\n if self.engine_kwargs.get("parallel", False):\n raise NotImplementedError(\n "Parallel apply is not supported when raw=False and engine='numba'"\n )\n if not self.obj.index.is_unique or not self.columns.is_unique:\n raise NotImplementedError(\n "The index/columns must be unique when raw=False and engine='numba'"\n )\n self.validate_values_for_numba()\n results = self.apply_with_numba()\n return results, self.result_index\n\n def wrap_results(self, results: ResType, res_index: Index) -> DataFrame | Series:\n from pandas import Series\n\n # see if we can infer the results\n if len(results) > 0 and 0 in results and is_sequence(results[0]):\n return self.wrap_results_for_axis(results, res_index)\n\n # dict of scalars\n\n # the default dtype of an empty Series is `object`, but this\n # code can be hit by df.mean() where the result should have dtype\n # float64 even if it's an empty Series.\n constructor_sliced = self.obj._constructor_sliced\n if len(results) == 0 and constructor_sliced is Series:\n result = constructor_sliced(results, dtype=np.float64)\n else:\n result = constructor_sliced(results)\n result.index = res_index\n\n return result\n\n def apply_str(self) -> DataFrame | Series:\n # Caller is responsible for checking isinstance(self.func, str)\n # TODO: GH#39993 - Avoid special-casing by replacing with lambda\n if self.func == "size":\n # Special-cased because DataFrame.size returns a single scalar\n obj = self.obj\n value = obj.shape[self.axis]\n return obj._constructor_sliced(value, index=self.agg_axis)\n return super().apply_str()\n\n\nclass FrameRowApply(FrameApply):\n axis: AxisInt = 0\n\n @property\n def series_generator(self) -> Generator[Series, None, None]:\n return (self.obj._ixs(i, axis=1) for i in range(len(self.columns)))\n\n @staticmethod\n @functools.cache\n def generate_numba_apply_func(\n func, nogil=True, nopython=True, parallel=False\n ) -> Callable[[npt.NDArray, Index, Index], dict[int, Any]]:\n numba = import_optional_dependency("numba")\n from pandas import Series\n\n # Import helper from extensions to cast string object -> np strings\n # Note: This also has the side effect of loading our numba extensions\n from pandas.core._numba.extensions import maybe_cast_str\n\n jitted_udf = numba.extending.register_jitable(func)\n\n # Currently the parallel argument doesn't get passed through here\n # (it's disabled) since the dicts in numba aren't thread-safe.\n @numba.jit(nogil=nogil, nopython=nopython, parallel=parallel)\n def numba_func(values, col_names, df_index):\n results = {}\n for j in range(values.shape[1]):\n # Create the series\n ser = Series(\n values[:, j], index=df_index, 
name=maybe_cast_str(col_names[j])\n )\n results[j] = jitted_udf(ser)\n return results\n\n return numba_func\n\n def apply_with_numba(self) -> dict[int, Any]:\n nb_func = self.generate_numba_apply_func(\n cast(Callable, self.func), **self.engine_kwargs\n )\n from pandas.core._numba.extensions import set_numba_data\n\n index = self.obj.index\n columns = self.obj.columns\n\n # Convert from numba dict to regular dict\n # Our isinstance checks in the df constructor don't pass for numbas typed dict\n with set_numba_data(index) as index, set_numba_data(columns) as columns:\n res = dict(nb_func(self.values, columns, index))\n return res\n\n @property\n def result_index(self) -> Index:\n return self.columns\n\n @property\n def result_columns(self) -> Index:\n return self.index\n\n def wrap_results_for_axis(\n self, results: ResType, res_index: Index\n ) -> DataFrame | Series:\n """return the results for the rows"""\n\n if self.result_type == "reduce":\n # e.g. test_apply_dict GH#8735\n res = self.obj._constructor_sliced(results)\n res.index = res_index\n return res\n\n elif self.result_type is None and all(\n isinstance(x, dict) for x in results.values()\n ):\n # Our operation was a to_dict op e.g.\n # test_apply_dict GH#8735, test_apply_reduce_to_dict GH#25196 #37544\n res = self.obj._constructor_sliced(results)\n res.index = res_index\n return res\n\n try:\n result = self.obj._constructor(data=results)\n except ValueError as err:\n if "All arrays must be of the same length" in str(err):\n # e.g. result = [[2, 3], [1.5], ['foo', 'bar']]\n # see test_agg_listlike_result GH#29587\n res = self.obj._constructor_sliced(results)\n res.index = res_index\n return res\n else:\n raise\n\n if not isinstance(results[0], ABCSeries):\n if len(result.index) == len(self.res_columns):\n result.index = self.res_columns\n\n if len(result.columns) == len(res_index):\n result.columns = res_index\n\n return result\n\n\nclass FrameColumnApply(FrameApply):\n axis: AxisInt = 1\n\n def apply_broadcast(self, target: DataFrame) -> DataFrame:\n result = super().apply_broadcast(target.T)\n return result.T\n\n @property\n def series_generator(self) -> Generator[Series, None, None]:\n values = self.values\n values = ensure_wrapped_if_datetimelike(values)\n assert len(values) > 0\n\n # We create one Series object, and will swap out the data inside\n # of it. 
Kids: don't do this at home.\n ser = self.obj._ixs(0, axis=0)\n mgr = ser._mgr\n\n is_view = mgr.blocks[0].refs.has_reference() # type: ignore[union-attr]\n\n if isinstance(ser.dtype, ExtensionDtype):\n # values will be incorrect for this block\n # TODO(EA2D): special case would be unnecessary with 2D EAs\n obj = self.obj\n for i in range(len(obj)):\n yield obj._ixs(i, axis=0)\n\n else:\n for arr, name in zip(values, self.index):\n # GH#35462 re-pin mgr in case setitem changed it\n ser._mgr = mgr\n mgr.set_values(arr)\n object.__setattr__(ser, "_name", name)\n if not is_view:\n # In apply_series_generator we store the a shallow copy of the\n # result, which potentially increases the ref count of this reused\n # `ser` object (depending on the result of the applied function)\n # -> if that happened and `ser` is already a copy, then we reset\n # the refs here to avoid triggering a unnecessary CoW inside the\n # applied function (https://github.com/pandas-dev/pandas/pull/56212)\n mgr.blocks[0].refs = BlockValuesRefs(mgr.blocks[0]) # type: ignore[union-attr]\n yield ser\n\n @staticmethod\n @functools.cache\n def generate_numba_apply_func(\n func, nogil=True, nopython=True, parallel=False\n ) -> Callable[[npt.NDArray, Index, Index], dict[int, Any]]:\n numba = import_optional_dependency("numba")\n from pandas import Series\n from pandas.core._numba.extensions import maybe_cast_str\n\n jitted_udf = numba.extending.register_jitable(func)\n\n @numba.jit(nogil=nogil, nopython=nopython, parallel=parallel)\n def numba_func(values, col_names_index, index):\n results = {}\n # Currently the parallel argument doesn't get passed through here\n # (it's disabled) since the dicts in numba aren't thread-safe.\n for i in range(values.shape[0]):\n # Create the series\n # TODO: values corrupted without the copy\n ser = Series(\n values[i].copy(),\n index=col_names_index,\n name=maybe_cast_str(index[i]),\n )\n results[i] = jitted_udf(ser)\n\n return results\n\n return numba_func\n\n def apply_with_numba(self) -> dict[int, Any]:\n nb_func = self.generate_numba_apply_func(\n cast(Callable, self.func), **self.engine_kwargs\n )\n\n from pandas.core._numba.extensions import set_numba_data\n\n # Convert from numba dict to regular dict\n # Our isinstance checks in the df constructor don't pass for numbas typed dict\n with set_numba_data(self.obj.index) as index, set_numba_data(\n self.columns\n ) as columns:\n res = dict(nb_func(self.values, columns, index))\n\n return res\n\n @property\n def result_index(self) -> Index:\n return self.index\n\n @property\n def result_columns(self) -> Index:\n return self.columns\n\n def wrap_results_for_axis(\n self, results: ResType, res_index: Index\n ) -> DataFrame | Series:\n """return the results for the columns"""\n result: DataFrame | Series\n\n # we have requested to expand\n if self.result_type == "expand":\n result = self.infer_to_same_shape(results, res_index)\n\n # we have a non-series and don't want inference\n elif not isinstance(results[0], ABCSeries):\n result = self.obj._constructor_sliced(results)\n result.index = res_index\n\n # we may want to infer results\n else:\n result = self.infer_to_same_shape(results, res_index)\n\n return result\n\n def infer_to_same_shape(self, results: ResType, res_index: Index) -> DataFrame:\n """infer the results to the same shape as the input object"""\n result = self.obj._constructor(data=results)\n result = result.T\n\n # set the index\n result.index = res_index\n\n # infer dtypes\n result = result.infer_objects(copy=False)\n\n return 
result\n\n\nclass SeriesApply(NDFrameApply):\n obj: Series\n axis: AxisInt = 0\n by_row: Literal[False, "compat", "_compat"] # only relevant for apply()\n\n def __init__(\n self,\n obj: Series,\n func: AggFuncType,\n *,\n convert_dtype: bool | lib.NoDefault = lib.no_default,\n by_row: Literal[False, "compat", "_compat"] = "compat",\n args,\n kwargs,\n ) -> None:\n if convert_dtype is lib.no_default:\n convert_dtype = True\n else:\n warnings.warn(\n "the convert_dtype parameter is deprecated and will be removed in a "\n "future version. Do ``ser.astype(object).apply()`` "\n "instead if you want ``convert_dtype=False``.",\n FutureWarning,\n stacklevel=find_stack_level(),\n )\n self.convert_dtype = convert_dtype\n\n super().__init__(\n obj,\n func,\n raw=False,\n result_type=None,\n by_row=by_row,\n args=args,\n kwargs=kwargs,\n )\n\n def apply(self) -> DataFrame | Series:\n obj = self.obj\n\n if len(obj) == 0:\n return self.apply_empty_result()\n\n # dispatch to handle list-like or dict-like\n if is_list_like(self.func):\n return self.apply_list_or_dict_like()\n\n if isinstance(self.func, str):\n # if we are a string, try to dispatch\n return self.apply_str()\n\n if self.by_row == "_compat":\n return self.apply_compat()\n\n # self.func is Callable\n return self.apply_standard()\n\n def agg(self):\n result = super().agg()\n if result is None:\n obj = self.obj\n func = self.func\n # string, list-like, and dict-like are entirely handled in super\n assert callable(func)\n\n # GH53325: The setup below is just to keep current behavior while emitting a\n # deprecation message. In the future this will all be replaced with a simple\n # `result = f(self.obj, *self.args, **self.kwargs)`.\n try:\n result = obj.apply(func, args=self.args, **self.kwargs)\n except (ValueError, AttributeError, TypeError):\n result = func(obj, *self.args, **self.kwargs)\n else:\n msg = (\n f"using {func} in {type(obj).__name__}.agg cannot aggregate and "\n f"has been deprecated. Use {type(obj).__name__}.transform to "\n f"keep behavior unchanged."\n )\n warnings.warn(msg, FutureWarning, stacklevel=find_stack_level())\n\n return result\n\n def apply_empty_result(self) -> Series:\n obj = self.obj\n return obj._constructor(dtype=obj.dtype, index=obj.index).__finalize__(\n obj, method="apply"\n )\n\n def apply_compat(self):\n """compat apply method for funcs in listlikes and dictlikes.\n\n Used for each callable when giving listlikes and dictlikes of callables to\n apply. Needed for compatibility with Pandas < v2.1.\n\n .. 
versionadded:: 2.1.0\n """\n obj = self.obj\n func = self.func\n\n if callable(func):\n f = com.get_cython_func(func)\n if f and not self.args and not self.kwargs:\n return obj.apply(func, by_row=False)\n\n try:\n result = obj.apply(func, by_row="compat")\n except (ValueError, AttributeError, TypeError):\n result = obj.apply(func, by_row=False)\n return result\n\n def apply_standard(self) -> DataFrame | Series:\n # caller is responsible for ensuring that f is Callable\n func = cast(Callable, self.func)\n obj = self.obj\n\n if isinstance(func, np.ufunc):\n with np.errstate(all="ignore"):\n return func(obj, *self.args, **self.kwargs)\n elif not self.by_row:\n return func(obj, *self.args, **self.kwargs)\n\n if self.args or self.kwargs:\n # _map_values does not support args/kwargs\n def curried(x):\n return func(x, *self.args, **self.kwargs)\n\n else:\n curried = func\n\n # row-wise access\n # apply doesn't have a `na_action` keyword and for backward compat reasons\n # we need to give `na_action="ignore"` for categorical data.\n # TODO: remove the `na_action="ignore"` when that default has been changed in\n # Categorical (GH51645).\n action = "ignore" if isinstance(obj.dtype, CategoricalDtype) else None\n mapped = obj._map_values(\n mapper=curried, na_action=action, convert=self.convert_dtype\n )\n\n if len(mapped) and isinstance(mapped[0], ABCSeries):\n # GH#43986 Need to do list(mapped) in order to get treated as nested\n # See also GH#25959 regarding EA support\n return obj._constructor_expanddim(list(mapped), index=obj.index)\n else:\n return obj._constructor(mapped, index=obj.index).__finalize__(\n obj, method="apply"\n )\n\n\nclass GroupByApply(Apply):\n obj: GroupBy | Resampler | BaseWindow\n\n def __init__(\n self,\n obj: GroupBy[NDFrameT],\n func: AggFuncType,\n *,\n args,\n kwargs,\n ) -> None:\n kwargs = kwargs.copy()\n self.axis = obj.obj._get_axis_number(kwargs.get("axis", 0))\n super().__init__(\n obj,\n func,\n raw=False,\n result_type=None,\n args=args,\n kwargs=kwargs,\n )\n\n def apply(self):\n raise NotImplementedError\n\n def transform(self):\n raise NotImplementedError\n\n def agg_or_apply_list_like(\n self, op_name: Literal["agg", "apply"]\n ) -> DataFrame | Series:\n obj = self.obj\n kwargs = self.kwargs\n if op_name == "apply":\n kwargs = {**kwargs, "by_row": False}\n\n if getattr(obj, "axis", 0) == 1:\n raise NotImplementedError("axis other than 0 is not supported")\n\n if obj._selected_obj.ndim == 1:\n # For SeriesGroupBy this matches _obj_with_exclusions\n selected_obj = obj._selected_obj\n else:\n selected_obj = obj._obj_with_exclusions\n\n # Only set as_index=True on groupby objects, not Window or Resample\n # that inherit from this class.\n with com.temp_setattr(\n obj, "as_index", True, condition=hasattr(obj, "as_index")\n ):\n keys, results = self.compute_list_like(op_name, selected_obj, kwargs)\n result = self.wrap_results_list_like(keys, results)\n return result\n\n def agg_or_apply_dict_like(\n self, op_name: Literal["agg", "apply"]\n ) -> DataFrame | Series:\n from pandas.core.groupby.generic import (\n DataFrameGroupBy,\n SeriesGroupBy,\n )\n\n assert op_name in ["agg", "apply"]\n\n obj = self.obj\n kwargs = {}\n if op_name == "apply":\n by_row = "_compat" if self.by_row else False\n kwargs.update({"by_row": by_row})\n\n if getattr(obj, "axis", 0) == 1:\n raise NotImplementedError("axis other than 0 is not supported")\n\n selected_obj = obj._selected_obj\n selection = obj._selection\n\n is_groupby = isinstance(obj, (DataFrameGroupBy, SeriesGroupBy))\n\n # 
Numba Groupby engine/engine-kwargs passthrough\n if is_groupby:\n engine = self.kwargs.get("engine", None)\n engine_kwargs = self.kwargs.get("engine_kwargs", None)\n kwargs.update({"engine": engine, "engine_kwargs": engine_kwargs})\n\n with com.temp_setattr(\n obj, "as_index", True, condition=hasattr(obj, "as_index")\n ):\n result_index, result_data = self.compute_dict_like(\n op_name, selected_obj, selection, kwargs\n )\n result = self.wrap_results_dict_like(selected_obj, result_index, result_data)\n return result\n\n\nclass ResamplerWindowApply(GroupByApply):\n axis: AxisInt = 0\n obj: Resampler | BaseWindow\n\n def __init__(\n self,\n obj: Resampler | BaseWindow,\n func: AggFuncType,\n *,\n args,\n kwargs,\n ) -> None:\n super(GroupByApply, self).__init__(\n obj,\n func,\n raw=False,\n result_type=None,\n args=args,\n kwargs=kwargs,\n )\n\n def apply(self):\n raise NotImplementedError\n\n def transform(self):\n raise NotImplementedError\n\n\ndef reconstruct_func(\n func: AggFuncType | None, **kwargs\n) -> tuple[bool, AggFuncType, tuple[str, ...] | None, npt.NDArray[np.intp] | None]:\n """\n This is the internal function to reconstruct func given if there is relabeling\n or not and also normalize the keyword to get new order of columns.\n\n If named aggregation is applied, `func` will be None, and kwargs contains the\n column and aggregation function information to be parsed;\n If named aggregation is not applied, `func` is either string (e.g. 'min') or\n Callable, or list of them (e.g. ['min', np.max]), or the dictionary of column name\n and str/Callable/list of them (e.g. {'A': 'min'}, or {'A': [np.min, lambda x: x]})\n\n If relabeling is True, will return relabeling, reconstructed func, column\n names, and the reconstructed order of columns.\n If relabeling is False, the columns and order will be None.\n\n Parameters\n ----------\n func: agg function (e.g. 'min' or Callable) or list of agg functions\n (e.g. ['min', np.max]) or dictionary (e.g. {'A': ['min', np.max]}).\n **kwargs: dict, kwargs used in is_multi_agg_with_relabel and\n normalize_keyword_aggregation function for relabelling\n\n Returns\n -------\n relabelling: bool, if there is relabelling or not\n func: normalized and mangled func\n columns: tuple of column names\n order: array of columns indices\n\n Examples\n --------\n >>> reconstruct_func(None, **{"foo": ("col", "min")})\n (True, defaultdict(<class 'list'>, {'col': ['min']}), ('foo',), array([0]))\n\n >>> reconstruct_func("min")\n (False, 'min', None, None)\n """\n relabeling = func is None and is_multi_agg_with_relabel(**kwargs)\n columns: tuple[str, ...] 
| None = None\n order: npt.NDArray[np.intp] | None = None\n\n if not relabeling:\n if isinstance(func, list) and len(func) > len(set(func)):\n # GH 28426 will raise error if duplicated function names are used and\n # there is no reassigned name\n raise SpecificationError(\n "Function names must be unique if there is no new column names "\n "assigned"\n )\n if func is None:\n # nicer error message\n raise TypeError("Must provide 'func' or tuples of '(column, aggfunc).")\n\n if relabeling:\n # error: Incompatible types in assignment (expression has type\n # "MutableMapping[Hashable, list[Callable[..., Any] | str]]", variable has type\n # "Callable[..., Any] | str | list[Callable[..., Any] | str] |\n # MutableMapping[Hashable, Callable[..., Any] | str | list[Callable[..., Any] |\n # str]] | None")\n func, columns, order = normalize_keyword_aggregation( # type: ignore[assignment]\n kwargs\n )\n assert func is not None\n\n return relabeling, func, columns, order\n\n\ndef is_multi_agg_with_relabel(**kwargs) -> bool:\n """\n Check whether kwargs passed to .agg look like multi-agg with relabeling.\n\n Parameters\n ----------\n **kwargs : dict\n\n Returns\n -------\n bool\n\n Examples\n --------\n >>> is_multi_agg_with_relabel(a="max")\n False\n >>> is_multi_agg_with_relabel(a_max=("a", "max"), a_min=("a", "min"))\n True\n >>> is_multi_agg_with_relabel()\n False\n """\n return all(isinstance(v, tuple) and len(v) == 2 for v in kwargs.values()) and (\n len(kwargs) > 0\n )\n\n\ndef normalize_keyword_aggregation(\n kwargs: dict,\n) -> tuple[\n MutableMapping[Hashable, list[AggFuncTypeBase]],\n tuple[str, ...],\n npt.NDArray[np.intp],\n]:\n """\n Normalize user-provided "named aggregation" kwargs.\n Transforms from the new ``Mapping[str, NamedAgg]`` style kwargs\n to the old Dict[str, List[scalar]]].\n\n Parameters\n ----------\n kwargs : dict\n\n Returns\n -------\n aggspec : dict\n The transformed kwargs.\n columns : tuple[str, ...]\n The user-provided keys.\n col_idx_order : List[int]\n List of columns indices.\n\n Examples\n --------\n >>> normalize_keyword_aggregation({"output": ("input", "sum")})\n (defaultdict(<class 'list'>, {'input': ['sum']}), ('output',), array([0]))\n """\n from pandas.core.indexes.base import Index\n\n # Normalize the aggregation functions as Mapping[column, List[func]],\n # process normally, then fixup the names.\n # TODO: aggspec type: typing.Dict[str, List[AggScalar]]\n aggspec = defaultdict(list)\n order = []\n columns, pairs = list(zip(*kwargs.items()))\n\n for column, aggfunc in pairs:\n aggspec[column].append(aggfunc)\n order.append((column, com.get_callable_name(aggfunc) or aggfunc))\n\n # uniquify aggfunc name if duplicated in order list\n uniquified_order = _make_unique_kwarg_list(order)\n\n # GH 25719, due to aggspec will change the order of assigned columns in aggregation\n # uniquified_aggspec will store uniquified order list and will compare it with order\n # based on index\n aggspec_order = [\n (column, com.get_callable_name(aggfunc) or aggfunc)\n for column, aggfuncs in aggspec.items()\n for aggfunc in aggfuncs\n ]\n uniquified_aggspec = _make_unique_kwarg_list(aggspec_order)\n\n # get the new index of columns by comparison\n col_idx_order = Index(uniquified_aggspec).get_indexer(uniquified_order)\n return aggspec, columns, col_idx_order\n\n\ndef _make_unique_kwarg_list(\n seq: Sequence[tuple[Any, Any]]\n) -> Sequence[tuple[Any, Any]]:\n """\n Uniquify aggfunc name of the pairs in the order list\n\n Examples:\n --------\n >>> kwarg_list = [('a', '<lambda>'), 
('a', '<lambda>'), ('b', '<lambda>')]\n >>> _make_unique_kwarg_list(kwarg_list)\n [('a', '<lambda>_0'), ('a', '<lambda>_1'), ('b', '<lambda>')]\n """\n return [\n (pair[0], f"{pair[1]}_{seq[:i].count(pair)}") if seq.count(pair) > 1 else pair\n for i, pair in enumerate(seq)\n ]\n\n\ndef relabel_result(\n result: DataFrame | Series,\n func: dict[str, list[Callable | str]],\n columns: Iterable[Hashable],\n order: Iterable[int],\n) -> dict[Hashable, Series]:\n """\n Internal function to reorder result if relabelling is True for\n dataframe.agg, and return the reordered result in dict.\n\n Parameters:\n ----------\n result: Result from aggregation\n func: Dict of (column name, funcs)\n columns: New columns name for relabelling\n order: New order for relabelling\n\n Examples\n --------\n >>> from pandas.core.apply import relabel_result\n >>> result = pd.DataFrame(\n ... {"A": [np.nan, 2, np.nan], "C": [6, np.nan, np.nan], "B": [np.nan, 4, 2.5]},\n ... index=["max", "mean", "min"]\n ... )\n >>> funcs = {"A": ["max"], "C": ["max"], "B": ["mean", "min"]}\n >>> columns = ("foo", "aab", "bar", "dat")\n >>> order = [0, 1, 2, 3]\n >>> result_in_dict = relabel_result(result, funcs, columns, order)\n >>> pd.DataFrame(result_in_dict, index=columns)\n A C B\n foo 2.0 NaN NaN\n aab NaN 6.0 NaN\n bar NaN NaN 4.0\n dat NaN NaN 2.5\n """\n from pandas.core.indexes.base import Index\n\n reordered_indexes = [\n pair[0] for pair in sorted(zip(columns, order), key=lambda t: t[1])\n ]\n reordered_result_in_dict: dict[Hashable, Series] = {}\n idx = 0\n\n reorder_mask = not isinstance(result, ABCSeries) and len(result.columns) > 1\n for col, fun in func.items():\n s = result[col].dropna()\n\n # In the `_aggregate`, the callable names are obtained and used in `result`, and\n # these names are ordered alphabetically. e.g.\n # C2 C1\n # <lambda> 1 NaN\n # amax NaN 4.0\n # max NaN 4.0\n # sum 18.0 6.0\n # Therefore, the order of functions for each column could be shuffled\n # accordingly so need to get the callable name if it is not parsed names, and\n # reorder the aggregated result for each column.\n # e.g. 
if df.agg(c1=("C2", sum), c2=("C2", lambda x: min(x))), correct order is\n # [sum, <lambda>], but in `result`, it will be [<lambda>, sum], and we need to\n # reorder so that aggregated values map to their functions regarding the order.\n\n # However there is only one column being used for aggregation, not need to\n # reorder since the index is not sorted, and keep as is in `funcs`, e.g.\n # A\n # min 1.0\n # mean 1.5\n # mean 1.5\n if reorder_mask:\n fun = [\n com.get_callable_name(f) if not isinstance(f, str) else f for f in fun\n ]\n col_idx_order = Index(s.index).get_indexer(fun)\n s = s.iloc[col_idx_order]\n\n # assign the new user-provided "named aggregation" as index names, and reindex\n # it based on the whole user-provided names.\n s.index = reordered_indexes[idx : idx + len(fun)]\n reordered_result_in_dict[col] = s.reindex(columns, copy=False)\n idx = idx + len(fun)\n return reordered_result_in_dict\n\n\ndef reconstruct_and_relabel_result(result, func, **kwargs) -> DataFrame | Series:\n from pandas import DataFrame\n\n relabeling, func, columns, order = reconstruct_func(func, **kwargs)\n\n if relabeling:\n # This is to keep the order to columns occurrence unchanged, and also\n # keep the order of new columns occurrence unchanged\n\n # For the return values of reconstruct_func, if relabeling is\n # False, columns and order will be None.\n assert columns is not None\n assert order is not None\n\n result_in_dict = relabel_result(result, func, columns, order)\n result = DataFrame(result_in_dict, index=columns)\n\n return result\n\n\n# TODO: Can't use, because mypy doesn't like us setting __name__\n# error: "partial[Any]" has no attribute "__name__"\n# the type is:\n# typing.Sequence[Callable[..., ScalarResult]]\n# -> typing.Sequence[Callable[..., ScalarResult]]:\n\n\ndef _managle_lambda_list(aggfuncs: Sequence[Any]) -> Sequence[Any]:\n """\n Possibly mangle a list of aggfuncs.\n\n Parameters\n ----------\n aggfuncs : Sequence\n\n Returns\n -------\n mangled: list-like\n A new AggSpec sequence, where lambdas have been converted\n to have unique names.\n\n Notes\n -----\n If just one aggfunc is passed, the name will not be mangled.\n """\n if len(aggfuncs) <= 1:\n # don't mangle for .agg([lambda x: .])\n return aggfuncs\n i = 0\n mangled_aggfuncs = []\n for aggfunc in aggfuncs:\n if com.get_callable_name(aggfunc) == "<lambda>":\n aggfunc = partial(aggfunc)\n aggfunc.__name__ = f"<lambda_{i}>"\n i += 1\n mangled_aggfuncs.append(aggfunc)\n\n return mangled_aggfuncs\n\n\ndef maybe_mangle_lambdas(agg_spec: Any) -> Any:\n """\n Make new lambdas with unique names.\n\n Parameters\n ----------\n agg_spec : Any\n An argument to GroupBy.agg.\n Non-dict-like `agg_spec` are pass through as is.\n For dict-like `agg_spec` a new spec is returned\n with name-mangled lambdas.\n\n Returns\n -------\n mangled : Any\n Same type as the input.\n\n Examples\n --------\n >>> maybe_mangle_lambdas('sum')\n 'sum'\n >>> maybe_mangle_lambdas([lambda: 1, lambda: 2]) # doctest: +SKIP\n [<function __main__.<lambda_0>,\n <function pandas...._make_lambda.<locals>.f(*args, **kwargs)>]\n """\n is_dict = is_dict_like(agg_spec)\n if not (is_dict or is_list_like(agg_spec)):\n return agg_spec\n mangled_aggspec = type(agg_spec)() # dict or OrderedDict\n\n if is_dict:\n for key, aggfuncs in agg_spec.items():\n if is_list_like(aggfuncs) and not is_dict_like(aggfuncs):\n mangled_aggfuncs = _managle_lambda_list(aggfuncs)\n else:\n mangled_aggfuncs = aggfuncs\n\n mangled_aggspec[key] = mangled_aggfuncs\n else:\n mangled_aggspec = 
_managle_lambda_list(agg_spec)\n\n return mangled_aggspec\n\n\ndef validate_func_kwargs(\n kwargs: dict,\n) -> tuple[list[str], list[str | Callable[..., Any]]]:\n """\n Validates types of user-provided "named aggregation" kwargs.\n `TypeError` is raised if aggfunc is not `str` or callable.\n\n Parameters\n ----------\n kwargs : dict\n\n Returns\n -------\n columns : List[str]\n List of user-provided keys.\n func : List[Union[str, callable[...,Any]]]\n List of user-provided aggfuncs\n\n Examples\n --------\n >>> validate_func_kwargs({'one': 'min', 'two': 'max'})\n (['one', 'two'], ['min', 'max'])\n """\n tuple_given_message = "func is expected but received {} in **kwargs."\n columns = list(kwargs)\n func = []\n for col_func in kwargs.values():\n if not (isinstance(col_func, str) or callable(col_func)):\n raise TypeError(tuple_given_message.format(type(col_func).__name__))\n func.append(col_func)\n if not columns:\n no_arg_message = "Must provide 'func' or named aggregation **kwargs."\n raise TypeError(no_arg_message)\n return columns, func\n\n\ndef include_axis(op_name: Literal["agg", "apply"], colg: Series | DataFrame) -> bool:\n return isinstance(colg, ABCDataFrame) or (\n isinstance(colg, ABCSeries) and op_name == "agg"\n )\n\n\ndef warn_alias_replacement(\n obj: AggObjType,\n func: Callable,\n alias: str,\n) -> None:\n if alias.startswith("np."):\n full_alias = alias\n else:\n full_alias = f"{type(obj).__name__}.{alias}"\n alias = f'"{alias}"'\n warnings.warn(\n f"The provided callable {func} is currently using "\n f"{full_alias}. In a future version of pandas, "\n f"the provided callable will be used directly. To keep current "\n f"behavior pass the string {alias} instead.",\n category=FutureWarning,\n stacklevel=find_stack_level(),\n )\n
.venv\Lib\site-packages\pandas\core\apply.py
apply.py
Python
67,184
0.75
0.178415
0.115814
vue-tools
752
2024-05-02T14:40:13.613957
GPL-3.0
false
85c59619232760d4458a6b3444fa0afe
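The apply.py helpers in the record above (reconstruct_func, normalize_keyword_aggregation, relabel_result) back pandas' "named aggregation" syntax: each keyword argument names an output column and maps it to an (input column, aggfunc) tuple, and the helpers normalize that into {column: [aggfuncs]} plus the requested output order. A minimal sketch of how that syntax looks from the user side, using only the public pandas API; the DataFrame and column names are illustrative, not taken from the source.

import pandas as pd

# Named aggregation: the keyword is the output column, the value is
# (input column, aggfunc). Internally this goes through
# reconstruct_func -> normalize_keyword_aggregation -> relabel_result.
df = pd.DataFrame({"kind": ["cat", "dog", "cat"], "height": [9.1, 6.0, 9.5]})

out = df.groupby("kind").agg(
    min_height=("height", "min"),
    max_height=("height", "max"),
)
print(out)
#       min_height  max_height
# kind
# cat          9.1         9.5
# dog          6.0         6.0

The same keyword form is accepted by DataFrame.agg itself (e.g. df.agg(tallest=("height", "max"))), which is the path the relabel_result reordering shown above exists to support.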
"""\nMethods that can be shared by many array-like classes or subclasses:\n Series\n Index\n ExtensionArray\n"""\nfrom __future__ import annotations\n\nimport operator\nfrom typing import Any\n\nimport numpy as np\n\nfrom pandas._libs import lib\nfrom pandas._libs.ops_dispatch import maybe_dispatch_ufunc_to_dunder_op\n\nfrom pandas.core.dtypes.generic import ABCNDFrame\n\nfrom pandas.core import roperator\nfrom pandas.core.construction import extract_array\nfrom pandas.core.ops.common import unpack_zerodim_and_defer\n\nREDUCTION_ALIASES = {\n "maximum": "max",\n "minimum": "min",\n "add": "sum",\n "multiply": "prod",\n}\n\n\nclass OpsMixin:\n # -------------------------------------------------------------\n # Comparisons\n\n def _cmp_method(self, other, op):\n return NotImplemented\n\n @unpack_zerodim_and_defer("__eq__")\n def __eq__(self, other):\n return self._cmp_method(other, operator.eq)\n\n @unpack_zerodim_and_defer("__ne__")\n def __ne__(self, other):\n return self._cmp_method(other, operator.ne)\n\n @unpack_zerodim_and_defer("__lt__")\n def __lt__(self, other):\n return self._cmp_method(other, operator.lt)\n\n @unpack_zerodim_and_defer("__le__")\n def __le__(self, other):\n return self._cmp_method(other, operator.le)\n\n @unpack_zerodim_and_defer("__gt__")\n def __gt__(self, other):\n return self._cmp_method(other, operator.gt)\n\n @unpack_zerodim_and_defer("__ge__")\n def __ge__(self, other):\n return self._cmp_method(other, operator.ge)\n\n # -------------------------------------------------------------\n # Logical Methods\n\n def _logical_method(self, other, op):\n return NotImplemented\n\n @unpack_zerodim_and_defer("__and__")\n def __and__(self, other):\n return self._logical_method(other, operator.and_)\n\n @unpack_zerodim_and_defer("__rand__")\n def __rand__(self, other):\n return self._logical_method(other, roperator.rand_)\n\n @unpack_zerodim_and_defer("__or__")\n def __or__(self, other):\n return self._logical_method(other, operator.or_)\n\n @unpack_zerodim_and_defer("__ror__")\n def __ror__(self, other):\n return self._logical_method(other, roperator.ror_)\n\n @unpack_zerodim_and_defer("__xor__")\n def __xor__(self, other):\n return self._logical_method(other, operator.xor)\n\n @unpack_zerodim_and_defer("__rxor__")\n def __rxor__(self, other):\n return self._logical_method(other, roperator.rxor)\n\n # -------------------------------------------------------------\n # Arithmetic Methods\n\n def _arith_method(self, other, op):\n return NotImplemented\n\n @unpack_zerodim_and_defer("__add__")\n def __add__(self, other):\n """\n Get Addition of DataFrame and other, column-wise.\n\n Equivalent to ``DataFrame.add(other)``.\n\n Parameters\n ----------\n other : scalar, sequence, Series, dict or DataFrame\n Object to be added to the DataFrame.\n\n Returns\n -------\n DataFrame\n The result of adding ``other`` to DataFrame.\n\n See Also\n --------\n DataFrame.add : Add a DataFrame and another object, with option for index-\n or column-oriented addition.\n\n Examples\n --------\n >>> df = pd.DataFrame({'height': [1.5, 2.6], 'weight': [500, 800]},\n ... 
index=['elk', 'moose'])\n >>> df\n height weight\n elk 1.5 500\n moose 2.6 800\n\n Adding a scalar affects all rows and columns.\n\n >>> df[['height', 'weight']] + 1.5\n height weight\n elk 3.0 501.5\n moose 4.1 801.5\n\n Each element of a list is added to a column of the DataFrame, in order.\n\n >>> df[['height', 'weight']] + [0.5, 1.5]\n height weight\n elk 2.0 501.5\n moose 3.1 801.5\n\n Keys of a dictionary are aligned to the DataFrame, based on column names;\n each value in the dictionary is added to the corresponding column.\n\n >>> df[['height', 'weight']] + {'height': 0.5, 'weight': 1.5}\n height weight\n elk 2.0 501.5\n moose 3.1 801.5\n\n When `other` is a :class:`Series`, the index of `other` is aligned with the\n columns of the DataFrame.\n\n >>> s1 = pd.Series([0.5, 1.5], index=['weight', 'height'])\n >>> df[['height', 'weight']] + s1\n height weight\n elk 3.0 500.5\n moose 4.1 800.5\n\n Even when the index of `other` is the same as the index of the DataFrame,\n the :class:`Series` will not be reoriented. If index-wise alignment is desired,\n :meth:`DataFrame.add` should be used with `axis='index'`.\n\n >>> s2 = pd.Series([0.5, 1.5], index=['elk', 'moose'])\n >>> df[['height', 'weight']] + s2\n elk height moose weight\n elk NaN NaN NaN NaN\n moose NaN NaN NaN NaN\n\n >>> df[['height', 'weight']].add(s2, axis='index')\n height weight\n elk 2.0 500.5\n moose 4.1 801.5\n\n When `other` is a :class:`DataFrame`, both columns names and the\n index are aligned.\n\n >>> other = pd.DataFrame({'height': [0.2, 0.4, 0.6]},\n ... index=['elk', 'moose', 'deer'])\n >>> df[['height', 'weight']] + other\n height weight\n deer NaN NaN\n elk 1.7 NaN\n moose 3.0 NaN\n """\n return self._arith_method(other, operator.add)\n\n @unpack_zerodim_and_defer("__radd__")\n def __radd__(self, other):\n return self._arith_method(other, roperator.radd)\n\n @unpack_zerodim_and_defer("__sub__")\n def __sub__(self, other):\n return self._arith_method(other, operator.sub)\n\n @unpack_zerodim_and_defer("__rsub__")\n def __rsub__(self, other):\n return self._arith_method(other, roperator.rsub)\n\n @unpack_zerodim_and_defer("__mul__")\n def __mul__(self, other):\n return self._arith_method(other, operator.mul)\n\n @unpack_zerodim_and_defer("__rmul__")\n def __rmul__(self, other):\n return self._arith_method(other, roperator.rmul)\n\n @unpack_zerodim_and_defer("__truediv__")\n def __truediv__(self, other):\n return self._arith_method(other, operator.truediv)\n\n @unpack_zerodim_and_defer("__rtruediv__")\n def __rtruediv__(self, other):\n return self._arith_method(other, roperator.rtruediv)\n\n @unpack_zerodim_and_defer("__floordiv__")\n def __floordiv__(self, other):\n return self._arith_method(other, operator.floordiv)\n\n @unpack_zerodim_and_defer("__rfloordiv")\n def __rfloordiv__(self, other):\n return self._arith_method(other, roperator.rfloordiv)\n\n @unpack_zerodim_and_defer("__mod__")\n def __mod__(self, other):\n return self._arith_method(other, operator.mod)\n\n @unpack_zerodim_and_defer("__rmod__")\n def __rmod__(self, other):\n return self._arith_method(other, roperator.rmod)\n\n @unpack_zerodim_and_defer("__divmod__")\n def __divmod__(self, other):\n return self._arith_method(other, divmod)\n\n @unpack_zerodim_and_defer("__rdivmod__")\n def __rdivmod__(self, other):\n return self._arith_method(other, roperator.rdivmod)\n\n @unpack_zerodim_and_defer("__pow__")\n def __pow__(self, other):\n return self._arith_method(other, operator.pow)\n\n @unpack_zerodim_and_defer("__rpow__")\n def __rpow__(self, 
other):\n return self._arith_method(other, roperator.rpow)\n\n\n# -----------------------------------------------------------------------------\n# Helpers to implement __array_ufunc__\n\n\ndef array_ufunc(self, ufunc: np.ufunc, method: str, *inputs: Any, **kwargs: Any):\n """\n Compatibility with numpy ufuncs.\n\n See also\n --------\n numpy.org/doc/stable/reference/arrays.classes.html#numpy.class.__array_ufunc__\n """\n from pandas.core.frame import (\n DataFrame,\n Series,\n )\n from pandas.core.generic import NDFrame\n from pandas.core.internals import (\n ArrayManager,\n BlockManager,\n )\n\n cls = type(self)\n\n kwargs = _standardize_out_kwarg(**kwargs)\n\n # for binary ops, use our custom dunder methods\n result = maybe_dispatch_ufunc_to_dunder_op(self, ufunc, method, *inputs, **kwargs)\n if result is not NotImplemented:\n return result\n\n # Determine if we should defer.\n no_defer = (\n np.ndarray.__array_ufunc__,\n cls.__array_ufunc__,\n )\n\n for item in inputs:\n higher_priority = (\n hasattr(item, "__array_priority__")\n and item.__array_priority__ > self.__array_priority__\n )\n has_array_ufunc = (\n hasattr(item, "__array_ufunc__")\n and type(item).__array_ufunc__ not in no_defer\n and not isinstance(item, self._HANDLED_TYPES)\n )\n if higher_priority or has_array_ufunc:\n return NotImplemented\n\n # align all the inputs.\n types = tuple(type(x) for x in inputs)\n alignable = [x for x, t in zip(inputs, types) if issubclass(t, NDFrame)]\n\n if len(alignable) > 1:\n # This triggers alignment.\n # At the moment, there aren't any ufuncs with more than two inputs\n # so this ends up just being x1.index | x2.index, but we write\n # it to handle *args.\n set_types = set(types)\n if len(set_types) > 1 and {DataFrame, Series}.issubset(set_types):\n # We currently don't handle ufunc(DataFrame, Series)\n # well. Previously this raised an internal ValueError. We might\n # support it someday, so raise a NotImplementedError.\n raise NotImplementedError(\n f"Cannot apply ufunc {ufunc} to mixed DataFrame and Series inputs."\n )\n axes = self.axes\n for obj in alignable[1:]:\n # this relies on the fact that we aren't handling mixed\n # series / frame ufuncs.\n for i, (ax1, ax2) in enumerate(zip(axes, obj.axes)):\n axes[i] = ax1.union(ax2)\n\n reconstruct_axes = dict(zip(self._AXIS_ORDERS, axes))\n inputs = tuple(\n x.reindex(**reconstruct_axes) if issubclass(t, NDFrame) else x\n for x, t in zip(inputs, types)\n )\n else:\n reconstruct_axes = dict(zip(self._AXIS_ORDERS, self.axes))\n\n if self.ndim == 1:\n names = [getattr(x, "name") for x in inputs if hasattr(x, "name")]\n name = names[0] if len(set(names)) == 1 else None\n reconstruct_kwargs = {"name": name}\n else:\n reconstruct_kwargs = {}\n\n def reconstruct(result):\n if ufunc.nout > 1:\n # np.modf, np.frexp, np.divmod\n return tuple(_reconstruct(x) for x in result)\n\n return _reconstruct(result)\n\n def _reconstruct(result):\n if lib.is_scalar(result):\n return result\n\n if result.ndim != self.ndim:\n if method == "outer":\n raise NotImplementedError\n return result\n if isinstance(result, (BlockManager, ArrayManager)):\n # we went through BlockManager.apply e.g. 
np.sqrt\n result = self._constructor_from_mgr(result, axes=result.axes)\n else:\n # we converted an array, lost our axes\n result = self._constructor(\n result, **reconstruct_axes, **reconstruct_kwargs, copy=False\n )\n # TODO: When we support multiple values in __finalize__, this\n # should pass alignable to `__finalize__` instead of self.\n # Then `np.add(a, b)` would consider attrs from both a and b\n # when a and b are NDFrames.\n if len(alignable) == 1:\n result = result.__finalize__(self)\n return result\n\n if "out" in kwargs:\n # e.g. test_multiindex_get_loc\n result = dispatch_ufunc_with_out(self, ufunc, method, *inputs, **kwargs)\n return reconstruct(result)\n\n if method == "reduce":\n # e.g. test.series.test_ufunc.test_reduce\n result = dispatch_reduction_ufunc(self, ufunc, method, *inputs, **kwargs)\n if result is not NotImplemented:\n return result\n\n # We still get here with kwargs `axis` for e.g. np.maximum.accumulate\n # and `dtype` and `keepdims` for np.ptp\n\n if self.ndim > 1 and (len(inputs) > 1 or ufunc.nout > 1):\n # Just give up on preserving types in the complex case.\n # In theory we could preserve them for them.\n # * nout>1 is doable if BlockManager.apply took nout and\n # returned a Tuple[BlockManager].\n # * len(inputs) > 1 is doable when we know that we have\n # aligned blocks / dtypes.\n\n # e.g. my_ufunc, modf, logaddexp, heaviside, subtract, add\n inputs = tuple(np.asarray(x) for x in inputs)\n # Note: we can't use default_array_ufunc here bc reindexing means\n # that `self` may not be among `inputs`\n result = getattr(ufunc, method)(*inputs, **kwargs)\n elif self.ndim == 1:\n # ufunc(series, ...)\n inputs = tuple(extract_array(x, extract_numpy=True) for x in inputs)\n result = getattr(ufunc, method)(*inputs, **kwargs)\n else:\n # ufunc(dataframe)\n if method == "__call__" and not kwargs:\n # for np.<ufunc>(..) calls\n # kwargs cannot necessarily be handled block-by-block, so only\n # take this path if there are no kwargs\n mgr = inputs[0]._mgr\n result = mgr.apply(getattr(ufunc, method))\n else:\n # otherwise specific ufunc methods (eg np.<ufunc>.accumulate(..))\n # Those can have an axis keyword and thus can't be called block-by-block\n result = default_array_ufunc(inputs[0], ufunc, method, *inputs, **kwargs)\n # e.g. np.negative (only one reached), with "where" and "out" in kwargs\n\n result = reconstruct(result)\n return result\n\n\ndef _standardize_out_kwarg(**kwargs) -> dict:\n """\n If kwargs contain "out1" and "out2", replace that with a tuple "out"\n\n np.divmod, np.modf, np.frexp can have either `out=(out1, out2)` or\n `out1=out1, out2=out2)`\n """\n if "out" not in kwargs and "out1" in kwargs and "out2" in kwargs:\n out1 = kwargs.pop("out1")\n out2 = kwargs.pop("out2")\n out = (out1, out2)\n kwargs["out"] = out\n return kwargs\n\n\ndef dispatch_ufunc_with_out(self, ufunc: np.ufunc, method: str, *inputs, **kwargs):\n """\n If we have an `out` keyword, then call the ufunc without `out` and then\n set the result into the given `out`.\n """\n\n # Note: we assume _standardize_out_kwarg has already been called.\n out = kwargs.pop("out")\n where = kwargs.pop("where", None)\n\n result = getattr(ufunc, method)(*inputs, **kwargs)\n\n if result is NotImplemented:\n return NotImplemented\n\n if isinstance(result, tuple):\n # i.e. 
np.divmod, np.modf, np.frexp\n if not isinstance(out, tuple) or len(out) != len(result):\n raise NotImplementedError\n\n for arr, res in zip(out, result):\n _assign_where(arr, res, where)\n\n return out\n\n if isinstance(out, tuple):\n if len(out) == 1:\n out = out[0]\n else:\n raise NotImplementedError\n\n _assign_where(out, result, where)\n return out\n\n\ndef _assign_where(out, result, where) -> None:\n """\n Set a ufunc result into 'out', masking with a 'where' argument if necessary.\n """\n if where is None:\n # no 'where' arg passed to ufunc\n out[:] = result\n else:\n np.putmask(out, where, result)\n\n\ndef default_array_ufunc(self, ufunc: np.ufunc, method: str, *inputs, **kwargs):\n """\n Fallback to the behavior we would get if we did not define __array_ufunc__.\n\n Notes\n -----\n We are assuming that `self` is among `inputs`.\n """\n if not any(x is self for x in inputs):\n raise NotImplementedError\n\n new_inputs = [x if x is not self else np.asarray(x) for x in inputs]\n\n return getattr(ufunc, method)(*new_inputs, **kwargs)\n\n\ndef dispatch_reduction_ufunc(self, ufunc: np.ufunc, method: str, *inputs, **kwargs):\n """\n Dispatch ufunc reductions to self's reduction methods.\n """\n assert method == "reduce"\n\n if len(inputs) != 1 or inputs[0] is not self:\n return NotImplemented\n\n if ufunc.__name__ not in REDUCTION_ALIASES:\n return NotImplemented\n\n method_name = REDUCTION_ALIASES[ufunc.__name__]\n\n # NB: we are assuming that min/max represent minimum/maximum methods,\n # which would not be accurate for e.g. Timestamp.min\n if not hasattr(self, method_name):\n return NotImplemented\n\n if self.ndim > 1:\n if isinstance(self, ABCNDFrame):\n # TODO: test cases where this doesn't hold, i.e. 2D DTA/TDA\n kwargs["numeric_only"] = False\n\n if "axis" not in kwargs:\n # For DataFrame reductions we don't want the default axis=0\n # Note: np.min is not a ufunc, but uses array_function_dispatch,\n # so calls DataFrame.min (without ever getting here) with the np.min\n # default of axis=None, which DataFrame.min catches and changes to axis=0.\n # np.minimum.reduce(df) gets here bc axis is not in kwargs,\n # so we set axis=0 to match the behaviorof np.minimum.reduce(df.values)\n kwargs["axis"] = 0\n\n # By default, numpy's reductions do not skip NaNs, so we have to\n # pass skipna=False\n return getattr(self, method_name)(skipna=False, **kwargs)\n
.venv\Lib\site-packages\pandas\core\arraylike.py
arraylike.py
Python
17,655
0.95
0.196226
0.148325
python-kit
937
2023-12-15T19:30:11.094259
GPL-3.0
false
e7b0ec8f06070daf67ac9ed3f38bef0d
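The OpsMixin dunder methods and the array_ufunc hook in the record above are what make NumPy ufuncs behave sensibly on Series and DataFrames: binary ufuncs align inputs on their indexes, and reduction ufuncs are redirected through REDUCTION_ALIASES to the pandas reduction of the same name with skipna=False. A small sketch of the observable behaviour, assuming a recent pandas/NumPy; the data is illustrative.

import numpy as np
import pandas as pd

# Binary ufuncs on two Series align on the union of their indexes,
# so labels present in only one input come back as NaN.
s1 = pd.Series([1.0, 2.0, 3.0], index=["a", "b", "c"])
s2 = pd.Series([10.0, 20.0], index=["b", "c"])
print(np.add(s1, s2))        # a: NaN, b: 12.0, c: 23.0

# Reduction ufuncs are dispatched via REDUCTION_ALIASES
# (np.add.reduce -> sum, np.maximum.reduce -> max), with skipna=False.
print(np.add.reduce(s1))     # 6.0
print(np.maximum.reduce(s1)) # 3.0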
"""\nBase and utility classes for pandas objects.\n"""\n\nfrom __future__ import annotations\n\nimport textwrap\nfrom typing import (\n TYPE_CHECKING,\n Any,\n Generic,\n Literal,\n cast,\n final,\n overload,\n)\nimport warnings\n\nimport numpy as np\n\nfrom pandas._config import using_copy_on_write\n\nfrom pandas._libs import lib\nfrom pandas._typing import (\n AxisInt,\n DtypeObj,\n IndexLabel,\n NDFrameT,\n Self,\n Shape,\n npt,\n)\nfrom pandas.compat import PYPY\nfrom pandas.compat.numpy import function as nv\nfrom pandas.errors import AbstractMethodError\nfrom pandas.util._decorators import (\n cache_readonly,\n doc,\n)\nfrom pandas.util._exceptions import find_stack_level\n\nfrom pandas.core.dtypes.cast import can_hold_element\nfrom pandas.core.dtypes.common import (\n is_object_dtype,\n is_scalar,\n)\nfrom pandas.core.dtypes.dtypes import ExtensionDtype\nfrom pandas.core.dtypes.generic import (\n ABCDataFrame,\n ABCIndex,\n ABCMultiIndex,\n ABCSeries,\n)\nfrom pandas.core.dtypes.missing import (\n isna,\n remove_na_arraylike,\n)\n\nfrom pandas.core import (\n algorithms,\n nanops,\n ops,\n)\nfrom pandas.core.accessor import DirNamesMixin\nfrom pandas.core.arraylike import OpsMixin\nfrom pandas.core.arrays import ExtensionArray\nfrom pandas.core.construction import (\n ensure_wrapped_if_datetimelike,\n extract_array,\n)\n\nif TYPE_CHECKING:\n from collections.abc import (\n Hashable,\n Iterator,\n )\n\n from pandas._typing import (\n DropKeep,\n NumpySorter,\n NumpyValueArrayLike,\n ScalarLike_co,\n )\n\n from pandas import (\n DataFrame,\n Index,\n Series,\n )\n\n\n_shared_docs: dict[str, str] = {}\n_indexops_doc_kwargs = {\n "klass": "IndexOpsMixin",\n "inplace": "",\n "unique": "IndexOpsMixin",\n "duplicated": "IndexOpsMixin",\n}\n\n\nclass PandasObject(DirNamesMixin):\n """\n Baseclass for various pandas objects.\n """\n\n # results from calls to methods decorated with cache_readonly get added to _cache\n _cache: dict[str, Any]\n\n @property\n def _constructor(self):\n """\n Class constructor (for this class it's just `__class__`).\n """\n return type(self)\n\n def __repr__(self) -> str:\n """\n Return a string representation for a particular object.\n """\n # Should be overwritten by base classes\n return object.__repr__(self)\n\n def _reset_cache(self, key: str | None = None) -> None:\n """\n Reset cached properties. If ``key`` is passed, only clears that key.\n """\n if not hasattr(self, "_cache"):\n return\n if key is None:\n self._cache.clear()\n else:\n self._cache.pop(key, None)\n\n def __sizeof__(self) -> int:\n """\n Generates the total memory usage for an object that returns\n either a value or Series of values\n """\n memory_usage = getattr(self, "memory_usage", None)\n if memory_usage:\n mem = memory_usage(deep=True) # pylint: disable=not-callable\n return int(mem if is_scalar(mem) else mem.sum())\n\n # no memory_usage attribute, so fall back to object's 'sizeof'\n return super().__sizeof__()\n\n\nclass NoNewAttributesMixin:\n """\n Mixin which prevents adding new attributes.\n\n Prevents additional attributes via xxx.attribute = "something" after a\n call to `self.__freeze()`. 
Mainly used to prevent the user from using\n wrong attributes on an accessor (`Series.cat/.str/.dt`).\n\n If you really want to add a new attribute at a later time, you need to use\n `object.__setattr__(self, key, value)`.\n """\n\n def _freeze(self) -> None:\n """\n Prevents setting additional attributes.\n """\n object.__setattr__(self, "__frozen", True)\n\n # prevent adding any attribute via s.xxx.new_attribute = ...\n def __setattr__(self, key: str, value) -> None:\n # _cache is used by a decorator\n # We need to check both 1.) cls.__dict__ and 2.) getattr(self, key)\n # because\n # 1.) getattr is false for attributes that raise errors\n # 2.) cls.__dict__ doesn't traverse into base classes\n if getattr(self, "__frozen", False) and not (\n key == "_cache"\n or key in type(self).__dict__\n or getattr(self, key, None) is not None\n ):\n raise AttributeError(f"You cannot add any new attribute '{key}'")\n object.__setattr__(self, key, value)\n\n\nclass SelectionMixin(Generic[NDFrameT]):\n """\n mixin implementing the selection & aggregation interface on a group-like\n object sub-classes need to define: obj, exclusions\n """\n\n obj: NDFrameT\n _selection: IndexLabel | None = None\n exclusions: frozenset[Hashable]\n _internal_names = ["_cache", "__setstate__"]\n _internal_names_set = set(_internal_names)\n\n @final\n @property\n def _selection_list(self):\n if not isinstance(\n self._selection, (list, tuple, ABCSeries, ABCIndex, np.ndarray)\n ):\n return [self._selection]\n return self._selection\n\n @cache_readonly\n def _selected_obj(self):\n if self._selection is None or isinstance(self.obj, ABCSeries):\n return self.obj\n else:\n return self.obj[self._selection]\n\n @final\n @cache_readonly\n def ndim(self) -> int:\n return self._selected_obj.ndim\n\n @final\n @cache_readonly\n def _obj_with_exclusions(self):\n if isinstance(self.obj, ABCSeries):\n return self.obj\n\n if self._selection is not None:\n return self.obj._getitem_nocopy(self._selection_list)\n\n if len(self.exclusions) > 0:\n # equivalent to `self.obj.drop(self.exclusions, axis=1)\n # but this avoids consolidating and making a copy\n # TODO: following GH#45287 can we now use .drop directly without\n # making a copy?\n return self.obj._drop_axis(self.exclusions, axis=1, only_slice=True)\n else:\n return self.obj\n\n def __getitem__(self, key):\n if self._selection is not None:\n raise IndexError(f"Column(s) {self._selection} already selected")\n\n if isinstance(key, (list, tuple, ABCSeries, ABCIndex, np.ndarray)):\n if len(self.obj.columns.intersection(key)) != len(set(key)):\n bad_keys = list(set(key).difference(self.obj.columns))\n raise KeyError(f"Columns not found: {str(bad_keys)[1:-1]}")\n return self._gotitem(list(key), ndim=2)\n\n else:\n if key not in self.obj:\n raise KeyError(f"Column not found: {key}")\n ndim = self.obj[key].ndim\n return self._gotitem(key, ndim=ndim)\n\n def _gotitem(self, key, ndim: int, subset=None):\n """\n sub-classes to define\n return a sliced object\n\n Parameters\n ----------\n key : str / list of selections\n ndim : {1, 2}\n requested ndim of result\n subset : object, default None\n subset to act on\n """\n raise AbstractMethodError(self)\n\n @final\n def _infer_selection(self, key, subset: Series | DataFrame):\n """\n Infer the `selection` to pass to our constructor in _gotitem.\n """\n # Shared by Rolling and Resample\n selection = None\n if subset.ndim == 2 and (\n (lib.is_scalar(key) and key in subset) or lib.is_list_like(key)\n ):\n selection = key\n elif subset.ndim == 1 and 
lib.is_scalar(key) and key == subset.name:\n selection = key\n return selection\n\n def aggregate(self, func, *args, **kwargs):\n raise AbstractMethodError(self)\n\n agg = aggregate\n\n\nclass IndexOpsMixin(OpsMixin):\n """\n Common ops mixin to support a unified interface / docs for Series / Index\n """\n\n # ndarray compatibility\n __array_priority__ = 1000\n _hidden_attrs: frozenset[str] = frozenset(\n ["tolist"] # tolist is not deprecated, just suppressed in the __dir__\n )\n\n @property\n def dtype(self) -> DtypeObj:\n # must be defined here as a property for mypy\n raise AbstractMethodError(self)\n\n @property\n def _values(self) -> ExtensionArray | np.ndarray:\n # must be defined here as a property for mypy\n raise AbstractMethodError(self)\n\n @final\n def transpose(self, *args, **kwargs) -> Self:\n """\n Return the transpose, which is by definition self.\n\n Returns\n -------\n %(klass)s\n """\n nv.validate_transpose(args, kwargs)\n return self\n\n T = property(\n transpose,\n doc="""\n Return the transpose, which is by definition self.\n\n Examples\n --------\n For Series:\n\n >>> s = pd.Series(['Ant', 'Bear', 'Cow'])\n >>> s\n 0 Ant\n 1 Bear\n 2 Cow\n dtype: object\n >>> s.T\n 0 Ant\n 1 Bear\n 2 Cow\n dtype: object\n\n For Index:\n\n >>> idx = pd.Index([1, 2, 3])\n >>> idx.T\n Index([1, 2, 3], dtype='int64')\n """,\n )\n\n @property\n def shape(self) -> Shape:\n """\n Return a tuple of the shape of the underlying data.\n\n Examples\n --------\n >>> s = pd.Series([1, 2, 3])\n >>> s.shape\n (3,)\n """\n return self._values.shape\n\n def __len__(self) -> int:\n # We need this defined here for mypy\n raise AbstractMethodError(self)\n\n # Temporarily avoid using `-> Literal[1]:` because of an IPython (jedi) bug\n # https://github.com/ipython/ipython/issues/14412\n # https://github.com/davidhalter/jedi/issues/1990\n @property\n def ndim(self) -> int:\n """\n Number of dimensions of the underlying data, by definition 1.\n\n Examples\n --------\n >>> s = pd.Series(['Ant', 'Bear', 'Cow'])\n >>> s\n 0 Ant\n 1 Bear\n 2 Cow\n dtype: object\n >>> s.ndim\n 1\n\n For Index:\n\n >>> idx = pd.Index([1, 2, 3])\n >>> idx\n Index([1, 2, 3], dtype='int64')\n >>> idx.ndim\n 1\n """\n return 1\n\n @final\n def item(self):\n """\n Return the first element of the underlying data as a Python scalar.\n\n Returns\n -------\n scalar\n The first element of Series or Index.\n\n Raises\n ------\n ValueError\n If the data is not length = 1.\n\n Examples\n --------\n >>> s = pd.Series([1])\n >>> s.item()\n 1\n\n For an index:\n\n >>> s = pd.Series([1], index=['a'])\n >>> s.index.item()\n 'a'\n """\n if len(self) == 1:\n return next(iter(self))\n raise ValueError("can only convert an array of size 1 to a Python scalar")\n\n @property\n def nbytes(self) -> int:\n """\n Return the number of bytes in the underlying data.\n\n Examples\n --------\n For Series:\n\n >>> s = pd.Series(['Ant', 'Bear', 'Cow'])\n >>> s\n 0 Ant\n 1 Bear\n 2 Cow\n dtype: object\n >>> s.nbytes\n 24\n\n For Index:\n\n >>> idx = pd.Index([1, 2, 3])\n >>> idx\n Index([1, 2, 3], dtype='int64')\n >>> idx.nbytes\n 24\n """\n return self._values.nbytes\n\n @property\n def size(self) -> int:\n """\n Return the number of elements in the underlying data.\n\n Examples\n --------\n For Series:\n\n >>> s = pd.Series(['Ant', 'Bear', 'Cow'])\n >>> s\n 0 Ant\n 1 Bear\n 2 Cow\n dtype: object\n >>> s.size\n 3\n\n For Index:\n\n >>> idx = pd.Index([1, 2, 3])\n >>> idx\n Index([1, 2, 3], dtype='int64')\n >>> idx.size\n 3\n """\n return len(self._values)\n\n 
@property\n def array(self) -> ExtensionArray:\n """\n The ExtensionArray of the data backing this Series or Index.\n\n Returns\n -------\n ExtensionArray\n An ExtensionArray of the values stored within. For extension\n types, this is the actual array. For NumPy native types, this\n is a thin (no copy) wrapper around :class:`numpy.ndarray`.\n\n ``.array`` differs from ``.values``, which may require converting\n the data to a different form.\n\n See Also\n --------\n Index.to_numpy : Similar method that always returns a NumPy array.\n Series.to_numpy : Similar method that always returns a NumPy array.\n\n Notes\n -----\n This table lays out the different array types for each extension\n dtype within pandas.\n\n ================== =============================\n dtype array type\n ================== =============================\n category Categorical\n period PeriodArray\n interval IntervalArray\n IntegerNA IntegerArray\n string StringArray\n boolean BooleanArray\n datetime64[ns, tz] DatetimeArray\n ================== =============================\n\n For any 3rd-party extension types, the array type will be an\n ExtensionArray.\n\n For all remaining dtypes ``.array`` will be a\n :class:`arrays.NumpyExtensionArray` wrapping the actual ndarray\n stored within. If you absolutely need a NumPy array (possibly with\n copying / coercing data), then use :meth:`Series.to_numpy` instead.\n\n Examples\n --------\n For regular NumPy types like int, and float, a NumpyExtensionArray\n is returned.\n\n >>> pd.Series([1, 2, 3]).array\n <NumpyExtensionArray>\n [1, 2, 3]\n Length: 3, dtype: int64\n\n For extension types, like Categorical, the actual ExtensionArray\n is returned\n\n >>> ser = pd.Series(pd.Categorical(['a', 'b', 'a']))\n >>> ser.array\n ['a', 'b', 'a']\n Categories (2, object): ['a', 'b']\n """\n raise AbstractMethodError(self)\n\n @final\n def to_numpy(\n self,\n dtype: npt.DTypeLike | None = None,\n copy: bool = False,\n na_value: object = lib.no_default,\n **kwargs,\n ) -> np.ndarray:\n """\n A NumPy ndarray representing the values in this Series or Index.\n\n Parameters\n ----------\n dtype : str or numpy.dtype, optional\n The dtype to pass to :meth:`numpy.asarray`.\n copy : bool, default False\n Whether to ensure that the returned value is not a view on\n another array. Note that ``copy=False`` does not *ensure* that\n ``to_numpy()`` is no-copy. Rather, ``copy=True`` ensure that\n a copy is made, even if not strictly necessary.\n na_value : Any, optional\n The value to use for missing values. The default value depends\n on `dtype` and the type of the array.\n **kwargs\n Additional keywords passed through to the ``to_numpy`` method\n of the underlying array (for extension arrays).\n\n Returns\n -------\n numpy.ndarray\n\n See Also\n --------\n Series.array : Get the actual data stored within.\n Index.array : Get the actual data stored within.\n DataFrame.to_numpy : Similar method for DataFrame.\n\n Notes\n -----\n The returned array will be the same up to equality (values equal\n in `self` will be equal in the returned array; likewise for values\n that are not equal). When `self` contains an ExtensionArray, the\n dtype may be different. For example, for a category-dtype Series,\n ``to_numpy()`` will return a NumPy array and the categorical dtype\n will be lost.\n\n For NumPy dtypes, this will be a reference to the actual data stored\n in this Series or Index (assuming ``copy=False``). 
Modifying the result\n in place will modify the data stored in the Series or Index (not that\n we recommend doing that).\n\n For extension types, ``to_numpy()`` *may* require copying data and\n coercing the result to a NumPy type (possibly object), which may be\n expensive. When you need a no-copy reference to the underlying data,\n :attr:`Series.array` should be used instead.\n\n This table lays out the different dtypes and default return types of\n ``to_numpy()`` for various dtypes within pandas.\n\n ================== ================================\n dtype array type\n ================== ================================\n category[T] ndarray[T] (same dtype as input)\n period ndarray[object] (Periods)\n interval ndarray[object] (Intervals)\n IntegerNA ndarray[object]\n datetime64[ns] datetime64[ns]\n datetime64[ns, tz] ndarray[object] (Timestamps)\n ================== ================================\n\n Examples\n --------\n >>> ser = pd.Series(pd.Categorical(['a', 'b', 'a']))\n >>> ser.to_numpy()\n array(['a', 'b', 'a'], dtype=object)\n\n Specify the `dtype` to control how datetime-aware data is represented.\n Use ``dtype=object`` to return an ndarray of pandas :class:`Timestamp`\n objects, each with the correct ``tz``.\n\n >>> ser = pd.Series(pd.date_range('2000', periods=2, tz="CET"))\n >>> ser.to_numpy(dtype=object)\n array([Timestamp('2000-01-01 00:00:00+0100', tz='CET'),\n Timestamp('2000-01-02 00:00:00+0100', tz='CET')],\n dtype=object)\n\n Or ``dtype='datetime64[ns]'`` to return an ndarray of native\n datetime64 values. The values are converted to UTC and the timezone\n info is dropped.\n\n >>> ser.to_numpy(dtype="datetime64[ns]")\n ... # doctest: +ELLIPSIS\n array(['1999-12-31T23:00:00.000000000', '2000-01-01T23:00:00...'],\n dtype='datetime64[ns]')\n """\n if isinstance(self.dtype, ExtensionDtype):\n return self.array.to_numpy(dtype, copy=copy, na_value=na_value, **kwargs)\n elif kwargs:\n bad_keys = next(iter(kwargs.keys()))\n raise TypeError(\n f"to_numpy() got an unexpected keyword argument '{bad_keys}'"\n )\n\n fillna = (\n na_value is not lib.no_default\n # no need to fillna with np.nan if we already have a float dtype\n and not (na_value is np.nan and np.issubdtype(self.dtype, np.floating))\n )\n\n values = self._values\n if fillna:\n if not can_hold_element(values, na_value):\n # if we can't hold the na_value asarray either makes a copy or we\n # error before modifying values. The asarray later on thus won't make\n # another copy\n values = np.asarray(values, dtype=dtype)\n else:\n values = values.copy()\n\n values[np.asanyarray(isna(self))] = na_value\n\n result = np.asarray(values, dtype=dtype)\n\n if (copy and not fillna) or (not copy and using_copy_on_write()):\n if np.shares_memory(self._values[:2], result[:2]):\n # Take slices to improve performance of check\n if using_copy_on_write() and not copy:\n result = result.view()\n result.flags.writeable = False\n else:\n result = result.copy()\n\n return result\n\n @final\n @property\n def empty(self) -> bool:\n return not self.size\n\n @doc(op="max", oppose="min", value="largest")\n def argmax(\n self, axis: AxisInt | None = None, skipna: bool = True, *args, **kwargs\n ) -> int:\n """\n Return int position of the {value} value in the Series.\n\n If the {op}imum is achieved in multiple locations,\n the first row position is returned.\n\n Parameters\n ----------\n axis : {{None}}\n Unused. 
Parameter needed for compatibility with DataFrame.\n skipna : bool, default True\n Exclude NA/null values when showing the result.\n *args, **kwargs\n Additional arguments and keywords for compatibility with NumPy.\n\n Returns\n -------\n int\n Row position of the {op}imum value.\n\n See Also\n --------\n Series.arg{op} : Return position of the {op}imum value.\n Series.arg{oppose} : Return position of the {oppose}imum value.\n numpy.ndarray.arg{op} : Equivalent method for numpy arrays.\n Series.idxmax : Return index label of the maximum values.\n Series.idxmin : Return index label of the minimum values.\n\n Examples\n --------\n Consider dataset containing cereal calories\n\n >>> s = pd.Series({{'Corn Flakes': 100.0, 'Almond Delight': 110.0,\n ... 'Cinnamon Toast Crunch': 120.0, 'Cocoa Puff': 110.0}})\n >>> s\n Corn Flakes 100.0\n Almond Delight 110.0\n Cinnamon Toast Crunch 120.0\n Cocoa Puff 110.0\n dtype: float64\n\n >>> s.argmax()\n 2\n >>> s.argmin()\n 0\n\n The maximum cereal calories is the third element and\n the minimum cereal calories is the first element,\n since series is zero-indexed.\n """\n delegate = self._values\n nv.validate_minmax_axis(axis)\n skipna = nv.validate_argmax_with_skipna(skipna, args, kwargs)\n\n if isinstance(delegate, ExtensionArray):\n if not skipna and delegate.isna().any():\n warnings.warn(\n f"The behavior of {type(self).__name__}.argmax/argmin "\n "with skipna=False and NAs, or with all-NAs is deprecated. "\n "In a future version this will raise ValueError.",\n FutureWarning,\n stacklevel=find_stack_level(),\n )\n return -1\n else:\n return delegate.argmax()\n else:\n result = nanops.nanargmax(delegate, skipna=skipna)\n if result == -1:\n warnings.warn(\n f"The behavior of {type(self).__name__}.argmax/argmin "\n "with skipna=False and NAs, or with all-NAs is deprecated. "\n "In a future version this will raise ValueError.",\n FutureWarning,\n stacklevel=find_stack_level(),\n )\n # error: Incompatible return value type (got "Union[int, ndarray]", expected\n # "int")\n return result # type: ignore[return-value]\n\n @doc(argmax, op="min", oppose="max", value="smallest")\n def argmin(\n self, axis: AxisInt | None = None, skipna: bool = True, *args, **kwargs\n ) -> int:\n delegate = self._values\n nv.validate_minmax_axis(axis)\n skipna = nv.validate_argmin_with_skipna(skipna, args, kwargs)\n\n if isinstance(delegate, ExtensionArray):\n if not skipna and delegate.isna().any():\n warnings.warn(\n f"The behavior of {type(self).__name__}.argmax/argmin "\n "with skipna=False and NAs, or with all-NAs is deprecated. "\n "In a future version this will raise ValueError.",\n FutureWarning,\n stacklevel=find_stack_level(),\n )\n return -1\n else:\n return delegate.argmin()\n else:\n result = nanops.nanargmin(delegate, skipna=skipna)\n if result == -1:\n warnings.warn(\n f"The behavior of {type(self).__name__}.argmax/argmin "\n "with skipna=False and NAs, or with all-NAs is deprecated. 
"\n "In a future version this will raise ValueError.",\n FutureWarning,\n stacklevel=find_stack_level(),\n )\n # error: Incompatible return value type (got "Union[int, ndarray]", expected\n # "int")\n return result # type: ignore[return-value]\n\n def tolist(self):\n """\n Return a list of the values.\n\n These are each a scalar type, which is a Python scalar\n (for str, int, float) or a pandas scalar\n (for Timestamp/Timedelta/Interval/Period)\n\n Returns\n -------\n list\n\n See Also\n --------\n numpy.ndarray.tolist : Return the array as an a.ndim-levels deep\n nested list of Python scalars.\n\n Examples\n --------\n For Series\n\n >>> s = pd.Series([1, 2, 3])\n >>> s.to_list()\n [1, 2, 3]\n\n For Index:\n\n >>> idx = pd.Index([1, 2, 3])\n >>> idx\n Index([1, 2, 3], dtype='int64')\n\n >>> idx.to_list()\n [1, 2, 3]\n """\n return self._values.tolist()\n\n to_list = tolist\n\n def __iter__(self) -> Iterator:\n """\n Return an iterator of the values.\n\n These are each a scalar type, which is a Python scalar\n (for str, int, float) or a pandas scalar\n (for Timestamp/Timedelta/Interval/Period)\n\n Returns\n -------\n iterator\n\n Examples\n --------\n >>> s = pd.Series([1, 2, 3])\n >>> for x in s:\n ... print(x)\n 1\n 2\n 3\n """\n # We are explicitly making element iterators.\n if not isinstance(self._values, np.ndarray):\n # Check type instead of dtype to catch DTA/TDA\n return iter(self._values)\n else:\n return map(self._values.item, range(self._values.size))\n\n @cache_readonly\n def hasnans(self) -> bool:\n """\n Return True if there are any NaNs.\n\n Enables various performance speedups.\n\n Returns\n -------\n bool\n\n Examples\n --------\n >>> s = pd.Series([1, 2, 3, None])\n >>> s\n 0 1.0\n 1 2.0\n 2 3.0\n 3 NaN\n dtype: float64\n >>> s.hasnans\n True\n """\n # error: Item "bool" of "Union[bool, ndarray[Any, dtype[bool_]], NDFrame]"\n # has no attribute "any"\n return bool(isna(self).any()) # type: ignore[union-attr]\n\n @final\n def _map_values(self, mapper, na_action=None, convert: bool = True):\n """\n An internal function that maps values using the input\n correspondence (which can be a dict, Series, or function).\n\n Parameters\n ----------\n mapper : function, dict, or Series\n The input correspondence object\n na_action : {None, 'ignore'}\n If 'ignore', propagate NA values, without passing them to the\n mapping function\n convert : bool, default True\n Try to find better dtype for elementwise function results. If\n False, leave as dtype=object. 
Note that the dtype is always\n preserved for some extension array dtypes, such as Categorical.\n\n Returns\n -------\n Union[Index, MultiIndex], inferred\n The output of the mapping function applied to the index.\n If the function returns a tuple with more than one element\n a MultiIndex will be returned.\n """\n arr = self._values\n\n if isinstance(arr, ExtensionArray):\n return arr.map(mapper, na_action=na_action)\n\n return algorithms.map_array(arr, mapper, na_action=na_action, convert=convert)\n\n @final\n def value_counts(\n self,\n normalize: bool = False,\n sort: bool = True,\n ascending: bool = False,\n bins=None,\n dropna: bool = True,\n ) -> Series:\n """\n Return a Series containing counts of unique values.\n\n The resulting object will be in descending order so that the\n first element is the most frequently-occurring element.\n Excludes NA values by default.\n\n Parameters\n ----------\n normalize : bool, default False\n If True then the object returned will contain the relative\n frequencies of the unique values.\n sort : bool, default True\n Sort by frequencies when True. Preserve the order of the data when False.\n ascending : bool, default False\n Sort in ascending order.\n bins : int, optional\n Rather than count values, group them into half-open bins,\n a convenience for ``pd.cut``, only works with numeric data.\n dropna : bool, default True\n Don't include counts of NaN.\n\n Returns\n -------\n Series\n\n See Also\n --------\n Series.count: Number of non-NA elements in a Series.\n DataFrame.count: Number of non-NA elements in a DataFrame.\n DataFrame.value_counts: Equivalent method on DataFrames.\n\n Examples\n --------\n >>> index = pd.Index([3, 1, 2, 3, 4, np.nan])\n >>> index.value_counts()\n 3.0 2\n 1.0 1\n 2.0 1\n 4.0 1\n Name: count, dtype: int64\n\n With `normalize` set to `True`, returns the relative frequency by\n dividing all values by the sum of values.\n\n >>> s = pd.Series([3, 1, 2, 3, 4, np.nan])\n >>> s.value_counts(normalize=True)\n 3.0 0.4\n 1.0 0.2\n 2.0 0.2\n 4.0 0.2\n Name: proportion, dtype: float64\n\n **bins**\n\n Bins can be useful for going from a continuous variable to a\n categorical variable; instead of counting unique\n apparitions of values, divide the index in the specified\n number of half-open bins.\n\n >>> s.value_counts(bins=3)\n (0.996, 2.0] 2\n (2.0, 3.0] 2\n (3.0, 4.0] 1\n Name: count, dtype: int64\n\n **dropna**\n\n With `dropna` set to `False` we can also see NaN index values.\n\n >>> s.value_counts(dropna=False)\n 3.0 2\n 1.0 1\n 2.0 1\n 4.0 1\n NaN 1\n Name: count, dtype: int64\n """\n return algorithms.value_counts_internal(\n self,\n sort=sort,\n ascending=ascending,\n normalize=normalize,\n bins=bins,\n dropna=dropna,\n )\n\n def unique(self):\n values = self._values\n if not isinstance(values, np.ndarray):\n # i.e. 
ExtensionArray\n result = values.unique()\n else:\n result = algorithms.unique1d(values)\n return result\n\n @final\n def nunique(self, dropna: bool = True) -> int:\n """\n Return number of unique elements in the object.\n\n Excludes NA values by default.\n\n Parameters\n ----------\n dropna : bool, default True\n Don't include NaN in the count.\n\n Returns\n -------\n int\n\n See Also\n --------\n DataFrame.nunique: Method nunique for DataFrame.\n Series.count: Count non-NA/null observations in the Series.\n\n Examples\n --------\n >>> s = pd.Series([1, 3, 5, 7, 7])\n >>> s\n 0 1\n 1 3\n 2 5\n 3 7\n 4 7\n dtype: int64\n\n >>> s.nunique()\n 4\n """\n uniqs = self.unique()\n if dropna:\n uniqs = remove_na_arraylike(uniqs)\n return len(uniqs)\n\n @property\n def is_unique(self) -> bool:\n """\n Return boolean if values in the object are unique.\n\n Returns\n -------\n bool\n\n Examples\n --------\n >>> s = pd.Series([1, 2, 3])\n >>> s.is_unique\n True\n\n >>> s = pd.Series([1, 2, 3, 1])\n >>> s.is_unique\n False\n """\n return self.nunique(dropna=False) == len(self)\n\n @property\n def is_monotonic_increasing(self) -> bool:\n """\n Return boolean if values in the object are monotonically increasing.\n\n Returns\n -------\n bool\n\n Examples\n --------\n >>> s = pd.Series([1, 2, 2])\n >>> s.is_monotonic_increasing\n True\n\n >>> s = pd.Series([3, 2, 1])\n >>> s.is_monotonic_increasing\n False\n """\n from pandas import Index\n\n return Index(self).is_monotonic_increasing\n\n @property\n def is_monotonic_decreasing(self) -> bool:\n """\n Return boolean if values in the object are monotonically decreasing.\n\n Returns\n -------\n bool\n\n Examples\n --------\n >>> s = pd.Series([3, 2, 2, 1])\n >>> s.is_monotonic_decreasing\n True\n\n >>> s = pd.Series([1, 2, 3])\n >>> s.is_monotonic_decreasing\n False\n """\n from pandas import Index\n\n return Index(self).is_monotonic_decreasing\n\n @final\n def _memory_usage(self, deep: bool = False) -> int:\n """\n Memory usage of the values.\n\n Parameters\n ----------\n deep : bool, default False\n Introspect the data deeply, interrogate\n `object` dtypes for system-level memory consumption.\n\n Returns\n -------\n bytes used\n\n See Also\n --------\n numpy.ndarray.nbytes : Total bytes consumed by the elements of the\n array.\n\n Notes\n -----\n Memory usage does not include memory consumed by elements that\n are not components of the array if deep=False or if used on PyPy\n\n Examples\n --------\n >>> idx = pd.Index([1, 2, 3])\n >>> idx.memory_usage()\n 24\n """\n if hasattr(self.array, "memory_usage"):\n return self.array.memory_usage( # pyright: ignore[reportGeneralTypeIssues]\n deep=deep,\n )\n\n v = self.array.nbytes\n if deep and is_object_dtype(self.dtype) and not PYPY:\n values = cast(np.ndarray, self._values)\n v += lib.memory_usage_of_objects(values)\n return v\n\n @doc(\n algorithms.factorize,\n values="",\n order="",\n size_hint="",\n sort=textwrap.dedent(\n """\\n sort : bool, default False\n Sort `uniques` and shuffle `codes` to maintain the\n relationship.\n """\n ),\n )\n def factorize(\n self,\n sort: bool = False,\n use_na_sentinel: bool = True,\n ) -> tuple[npt.NDArray[np.intp], Index]:\n codes, uniques = algorithms.factorize(\n self._values, sort=sort, use_na_sentinel=use_na_sentinel\n )\n if uniques.dtype == np.float16:\n uniques = uniques.astype(np.float32)\n\n if isinstance(self, ABCMultiIndex):\n # preserve MultiIndex\n uniques = self._constructor(uniques)\n else:\n from pandas import Index\n\n try:\n uniques = Index(uniques, 
dtype=self.dtype)\n except NotImplementedError:\n # not all dtypes are supported in Index that are allowed for Series\n # e.g. float16 or bytes\n uniques = Index(uniques)\n return codes, uniques\n\n _shared_docs[\n "searchsorted"\n ] = """\n Find indices where elements should be inserted to maintain order.\n\n Find the indices into a sorted {klass} `self` such that, if the\n corresponding elements in `value` were inserted before the indices,\n the order of `self` would be preserved.\n\n .. note::\n\n The {klass} *must* be monotonically sorted, otherwise\n wrong locations will likely be returned. Pandas does *not*\n check this for you.\n\n Parameters\n ----------\n value : array-like or scalar\n Values to insert into `self`.\n side : {{'left', 'right'}}, optional\n If 'left', the index of the first suitable location found is given.\n If 'right', return the last such index. If there is no suitable\n index, return either 0 or N (where N is the length of `self`).\n sorter : 1-D array-like, optional\n Optional array of integer indices that sort `self` into ascending\n order. They are typically the result of ``np.argsort``.\n\n Returns\n -------\n int or array of int\n A scalar or array of insertion points with the\n same shape as `value`.\n\n See Also\n --------\n sort_values : Sort by the values along either axis.\n numpy.searchsorted : Similar method from NumPy.\n\n Notes\n -----\n Binary search is used to find the required insertion points.\n\n Examples\n --------\n >>> ser = pd.Series([1, 2, 3])\n >>> ser\n 0 1\n 1 2\n 2 3\n dtype: int64\n\n >>> ser.searchsorted(4)\n 3\n\n >>> ser.searchsorted([0, 4])\n array([0, 3])\n\n >>> ser.searchsorted([1, 3], side='left')\n array([0, 2])\n\n >>> ser.searchsorted([1, 3], side='right')\n array([1, 3])\n\n >>> ser = pd.Series(pd.to_datetime(['3/11/2000', '3/12/2000', '3/13/2000']))\n >>> ser\n 0 2000-03-11\n 1 2000-03-12\n 2 2000-03-13\n dtype: datetime64[ns]\n\n >>> ser.searchsorted('3/14/2000')\n 3\n\n >>> ser = pd.Categorical(\n ... ['apple', 'bread', 'bread', 'cheese', 'milk'], ordered=True\n ... 
)\n >>> ser\n ['apple', 'bread', 'bread', 'cheese', 'milk']\n Categories (4, object): ['apple' < 'bread' < 'cheese' < 'milk']\n\n >>> ser.searchsorted('bread')\n 1\n\n >>> ser.searchsorted(['bread'], side='right')\n array([3])\n\n If the values are not monotonically sorted, wrong locations\n may be returned:\n\n >>> ser = pd.Series([2, 1, 3])\n >>> ser\n 0 2\n 1 1\n 2 3\n dtype: int64\n\n >>> ser.searchsorted(1) # doctest: +SKIP\n 0 # wrong result, correct would be 1\n """\n\n # This overload is needed so that the call to searchsorted in\n # pandas.core.resample.TimeGrouper._get_period_bins picks the correct result\n\n # error: Overloaded function signatures 1 and 2 overlap with incompatible\n # return types\n @overload\n def searchsorted( # type: ignore[overload-overlap]\n self,\n value: ScalarLike_co,\n side: Literal["left", "right"] = ...,\n sorter: NumpySorter = ...,\n ) -> np.intp:\n ...\n\n @overload\n def searchsorted(\n self,\n value: npt.ArrayLike | ExtensionArray,\n side: Literal["left", "right"] = ...,\n sorter: NumpySorter = ...,\n ) -> npt.NDArray[np.intp]:\n ...\n\n @doc(_shared_docs["searchsorted"], klass="Index")\n def searchsorted(\n self,\n value: NumpyValueArrayLike | ExtensionArray,\n side: Literal["left", "right"] = "left",\n sorter: NumpySorter | None = None,\n ) -> npt.NDArray[np.intp] | np.intp:\n if isinstance(value, ABCDataFrame):\n msg = (\n "Value must be 1-D array-like or scalar, "\n f"{type(value).__name__} is not supported"\n )\n raise ValueError(msg)\n\n values = self._values\n if not isinstance(values, np.ndarray):\n # Going through EA.searchsorted directly improves performance GH#38083\n return values.searchsorted(value, side=side, sorter=sorter)\n\n return algorithms.searchsorted(\n values,\n value,\n side=side,\n sorter=sorter,\n )\n\n def drop_duplicates(self, *, keep: DropKeep = "first"):\n duplicated = self._duplicated(keep=keep)\n # error: Value of type "IndexOpsMixin" is not indexable\n return self[~duplicated] # type: ignore[index]\n\n @final\n def _duplicated(self, keep: DropKeep = "first") -> npt.NDArray[np.bool_]:\n arr = self._values\n if isinstance(arr, ExtensionArray):\n return arr.duplicated(keep=keep)\n return algorithms.duplicated(arr, keep=keep)\n\n def _arith_method(self, other, op):\n res_name = ops.get_op_result_name(self, other)\n\n lvalues = self._values\n rvalues = extract_array(other, extract_numpy=True, extract_range=True)\n rvalues = ops.maybe_prepare_scalar_for_op(rvalues, lvalues.shape)\n rvalues = ensure_wrapped_if_datetimelike(rvalues)\n if isinstance(rvalues, range):\n rvalues = np.arange(rvalues.start, rvalues.stop, rvalues.step)\n\n with np.errstate(all="ignore"):\n result = ops.arithmetic_op(lvalues, rvalues, op)\n\n return self._construct_result(result, name=res_name)\n\n def _construct_result(self, result, name):\n """\n Construct an appropriately-wrapped result from the ArrayLike result\n of an arithmetic-like operation.\n """\n raise AbstractMethodError(self)\n
.venv\Lib\site-packages\pandas\core\base.py
base.py
Python
41,384
0.95
0.107143
0.041738
python-kit
47
2024-01-09T08:18:14.100106
BSD-3-Clause
false
b6d711abb372bd90307a82f900d65c34
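A minimal usage sketch for the IndexOpsMixin behaviour shown in the base.py record above (searchsorted dispatch, drop_duplicates, and the DataFrame guard). This is illustrative commentary, not part of the dataset record, and assumes a recent pandas release.

import pandas as pd

idx = pd.Index([1, 3, 5, 7])
print(idx.searchsorted(4))            # 2: a scalar value returns a single insertion point
print(idx.searchsorted([0, 6]))       # [0 3]: array-like input returns an ndarray of intp

ser = pd.Series([1, 1, 2, 3, 3])
print(ser.drop_duplicates(keep="last").tolist())   # [1, 2, 3]

# DataFrames are rejected up front, matching the guard in searchsorted()
try:
    idx.searchsorted(pd.DataFrame({"a": [1]}))
except ValueError as exc:
    print(exc)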
"""\nMisc tools for implementing data structures\n\nNote: pandas.core.common is *not* part of the public API.\n"""\nfrom __future__ import annotations\n\nimport builtins\nfrom collections import (\n abc,\n defaultdict,\n)\nfrom collections.abc import (\n Collection,\n Generator,\n Hashable,\n Iterable,\n Sequence,\n)\nimport contextlib\nfrom functools import partial\nimport inspect\nfrom typing import (\n TYPE_CHECKING,\n Any,\n Callable,\n cast,\n overload,\n)\nimport warnings\n\nimport numpy as np\n\nfrom pandas._libs import lib\nfrom pandas.compat.numpy import np_version_gte1p24\n\nfrom pandas.core.dtypes.cast import construct_1d_object_array_from_listlike\nfrom pandas.core.dtypes.common import (\n is_bool_dtype,\n is_integer,\n)\nfrom pandas.core.dtypes.generic import (\n ABCExtensionArray,\n ABCIndex,\n ABCMultiIndex,\n ABCSeries,\n)\nfrom pandas.core.dtypes.inference import iterable_not_string\n\nif TYPE_CHECKING:\n from pandas._typing import (\n AnyArrayLike,\n ArrayLike,\n NpDtype,\n RandomState,\n T,\n )\n\n from pandas import Index\n\n\ndef flatten(line):\n """\n Flatten an arbitrarily nested sequence.\n\n Parameters\n ----------\n line : sequence\n The non string sequence to flatten\n\n Notes\n -----\n This doesn't consider strings sequences.\n\n Returns\n -------\n flattened : generator\n """\n for element in line:\n if iterable_not_string(element):\n yield from flatten(element)\n else:\n yield element\n\n\ndef consensus_name_attr(objs):\n name = objs[0].name\n for obj in objs[1:]:\n try:\n if obj.name != name:\n name = None\n except ValueError:\n name = None\n return name\n\n\ndef is_bool_indexer(key: Any) -> bool:\n """\n Check whether `key` is a valid boolean indexer.\n\n Parameters\n ----------\n key : Any\n Only list-likes may be considered boolean indexers.\n All other types are not considered a boolean indexer.\n For array-like input, boolean ndarrays or ExtensionArrays\n with ``_is_boolean`` set are considered boolean indexers.\n\n Returns\n -------\n bool\n Whether `key` is a valid boolean indexer.\n\n Raises\n ------\n ValueError\n When the array is an object-dtype ndarray or ExtensionArray\n and contains missing values.\n\n See Also\n --------\n check_array_indexer : Check that `key` is a valid array to index,\n and convert to an ndarray.\n """\n if isinstance(\n key, (ABCSeries, np.ndarray, ABCIndex, ABCExtensionArray)\n ) and not isinstance(key, ABCMultiIndex):\n if key.dtype == np.object_:\n key_array = np.asarray(key)\n\n if not lib.is_bool_array(key_array):\n na_msg = "Cannot mask with non-boolean array containing NA / NaN values"\n if lib.is_bool_array(key_array, skipna=True):\n # Don't raise on e.g. ["A", "B", np.nan], see\n # test_loc_getitem_list_of_labels_categoricalindex_with_na\n raise ValueError(na_msg)\n return False\n return True\n elif is_bool_dtype(key.dtype):\n return True\n elif isinstance(key, list):\n # check if np.array(key).dtype would be bool\n if len(key) > 0:\n if type(key) is not list: # noqa: E721\n # GH#42461 cython will raise TypeError if we pass a subclass\n key = list(key)\n return lib.is_bool_list(key)\n\n return False\n\n\ndef cast_scalar_indexer(val):\n """\n Disallow indexing with a float key, even if that key is a round number.\n\n Parameters\n ----------\n val : scalar\n\n Returns\n -------\n outval : scalar\n """\n # assumes lib.is_scalar(val)\n if lib.is_float(val) and val.is_integer():\n raise IndexError(\n # GH#34193\n "Indexing with a float is no longer supported. 
Manually convert "\n "to an integer key instead."\n )\n return val\n\n\ndef not_none(*args):\n """\n Returns a generator consisting of the arguments that are not None.\n """\n return (arg for arg in args if arg is not None)\n\n\ndef any_none(*args) -> bool:\n """\n Returns a boolean indicating if any argument is None.\n """\n return any(arg is None for arg in args)\n\n\ndef all_none(*args) -> bool:\n """\n Returns a boolean indicating if all arguments are None.\n """\n return all(arg is None for arg in args)\n\n\ndef any_not_none(*args) -> bool:\n """\n Returns a boolean indicating if any argument is not None.\n """\n return any(arg is not None for arg in args)\n\n\ndef all_not_none(*args) -> bool:\n """\n Returns a boolean indicating if all arguments are not None.\n """\n return all(arg is not None for arg in args)\n\n\ndef count_not_none(*args) -> int:\n """\n Returns the count of arguments that are not None.\n """\n return sum(x is not None for x in args)\n\n\n@overload\ndef asarray_tuplesafe(\n values: ArrayLike | list | tuple | zip, dtype: NpDtype | None = ...\n) -> np.ndarray:\n # ExtensionArray can only be returned when values is an Index, all other iterables\n # will return np.ndarray. Unfortunately "all other" cannot be encoded in a type\n # signature, so instead we special-case some common types.\n ...\n\n\n@overload\ndef asarray_tuplesafe(values: Iterable, dtype: NpDtype | None = ...) -> ArrayLike:\n ...\n\n\ndef asarray_tuplesafe(values: Iterable, dtype: NpDtype | None = None) -> ArrayLike:\n if not (isinstance(values, (list, tuple)) or hasattr(values, "__array__")):\n values = list(values)\n elif isinstance(values, ABCIndex):\n return values._values\n elif isinstance(values, ABCSeries):\n return values._values\n\n if isinstance(values, list) and dtype in [np.object_, object]:\n return construct_1d_object_array_from_listlike(values)\n\n try:\n with warnings.catch_warnings():\n # Can remove warning filter once NumPy 1.24 is min version\n if not np_version_gte1p24:\n warnings.simplefilter("ignore", np.VisibleDeprecationWarning)\n result = np.asarray(values, dtype=dtype)\n except ValueError:\n # Using try/except since it's more performant than checking is_list_like\n # over each element\n # error: Argument 1 to "construct_1d_object_array_from_listlike"\n # has incompatible type "Iterable[Any]"; expected "Sized"\n return construct_1d_object_array_from_listlike(values) # type: ignore[arg-type]\n\n if issubclass(result.dtype.type, str):\n result = np.asarray(values, dtype=object)\n\n if result.ndim == 2:\n # Avoid building an array of arrays:\n values = [tuple(x) for x in values]\n result = construct_1d_object_array_from_listlike(values)\n\n return result\n\n\ndef index_labels_to_array(\n labels: np.ndarray | Iterable, dtype: NpDtype | None = None\n) -> np.ndarray:\n """\n Transform label or iterable of labels to array, for use in Index.\n\n Parameters\n ----------\n dtype : dtype\n If specified, use as dtype of the resulting array, otherwise infer.\n\n Returns\n -------\n array\n """\n if isinstance(labels, (str, tuple)):\n labels = [labels]\n\n if not isinstance(labels, (list, np.ndarray)):\n try:\n labels = list(labels)\n except TypeError: # non-iterable\n labels = [labels]\n\n labels = asarray_tuplesafe(labels, dtype=dtype)\n\n return labels\n\n\ndef maybe_make_list(obj):\n if obj is not None and not isinstance(obj, (tuple, list)):\n return [obj]\n return obj\n\n\ndef maybe_iterable_to_list(obj: Iterable[T] | T) -> Collection[T] | T:\n """\n If obj is Iterable but not list-like, 
consume into list.\n """\n if isinstance(obj, abc.Iterable) and not isinstance(obj, abc.Sized):\n return list(obj)\n obj = cast(Collection, obj)\n return obj\n\n\ndef is_null_slice(obj) -> bool:\n """\n We have a null slice.\n """\n return (\n isinstance(obj, slice)\n and obj.start is None\n and obj.stop is None\n and obj.step is None\n )\n\n\ndef is_empty_slice(obj) -> bool:\n """\n We have an empty slice, e.g. no values are selected.\n """\n return (\n isinstance(obj, slice)\n and obj.start is not None\n and obj.stop is not None\n and obj.start == obj.stop\n )\n\n\ndef is_true_slices(line) -> list[bool]:\n """\n Find non-trivial slices in "line": return a list of booleans with same length.\n """\n return [isinstance(k, slice) and not is_null_slice(k) for k in line]\n\n\n# TODO: used only once in indexing; belongs elsewhere?\ndef is_full_slice(obj, line: int) -> bool:\n """\n We have a full length slice.\n """\n return (\n isinstance(obj, slice)\n and obj.start == 0\n and obj.stop == line\n and obj.step is None\n )\n\n\ndef get_callable_name(obj):\n # typical case has name\n if hasattr(obj, "__name__"):\n return getattr(obj, "__name__")\n # some objects don't; could recurse\n if isinstance(obj, partial):\n return get_callable_name(obj.func)\n # fall back to class name\n if callable(obj):\n return type(obj).__name__\n # everything failed (probably because the argument\n # wasn't actually callable); we return None\n # instead of the empty string in this case to allow\n # distinguishing between no name and a name of ''\n return None\n\n\ndef apply_if_callable(maybe_callable, obj, **kwargs):\n """\n Evaluate possibly callable input using obj and kwargs if it is callable,\n otherwise return as it is.\n\n Parameters\n ----------\n maybe_callable : possibly a callable\n obj : NDFrame\n **kwargs\n """\n if callable(maybe_callable):\n return maybe_callable(obj, **kwargs)\n\n return maybe_callable\n\n\ndef standardize_mapping(into):\n """\n Helper function to standardize a supplied mapping.\n\n Parameters\n ----------\n into : instance or subclass of collections.abc.Mapping\n Must be a class, an initialized collections.defaultdict,\n or an instance of a collections.abc.Mapping subclass.\n\n Returns\n -------\n mapping : a collections.abc.Mapping subclass or other constructor\n a callable object that can accept an iterator to create\n the desired Mapping.\n\n See Also\n --------\n DataFrame.to_dict\n Series.to_dict\n """\n if not inspect.isclass(into):\n if isinstance(into, defaultdict):\n return partial(defaultdict, into.default_factory)\n into = type(into)\n if not issubclass(into, abc.Mapping):\n raise TypeError(f"unsupported type: {into}")\n if into == defaultdict:\n raise TypeError("to_dict() only accepts initialized defaultdicts")\n return into\n\n\n@overload\ndef random_state(state: np.random.Generator) -> np.random.Generator:\n ...\n\n\n@overload\ndef random_state(\n state: int | np.ndarray | np.random.BitGenerator | np.random.RandomState | None,\n) -> np.random.RandomState:\n ...\n\n\ndef random_state(state: RandomState | None = None):\n """\n Helper function for processing random_state arguments.\n\n Parameters\n ----------\n state : int, array-like, BitGenerator, Generator, np.random.RandomState, None.\n If receives an int, array-like, or BitGenerator, passes to\n np.random.RandomState() as seed.\n If receives an np.random RandomState or Generator, just returns that unchanged.\n If receives `None`, returns np.random.\n If receives anything else, raises an informative ValueError.\n\n 
Default None.\n\n Returns\n -------\n np.random.RandomState or np.random.Generator. If state is None, returns np.random\n\n """\n if is_integer(state) or isinstance(state, (np.ndarray, np.random.BitGenerator)):\n return np.random.RandomState(state)\n elif isinstance(state, np.random.RandomState):\n return state\n elif isinstance(state, np.random.Generator):\n return state\n elif state is None:\n return np.random\n else:\n raise ValueError(\n "random_state must be an integer, array-like, a BitGenerator, Generator, "\n "a numpy RandomState, or None"\n )\n\n\ndef pipe(\n obj, func: Callable[..., T] | tuple[Callable[..., T], str], *args, **kwargs\n) -> T:\n """\n Apply a function ``func`` to object ``obj`` either by passing obj as the\n first argument to the function or, in the case that the func is a tuple,\n interpret the first element of the tuple as a function and pass the obj to\n that function as a keyword argument whose key is the value of the second\n element of the tuple.\n\n Parameters\n ----------\n func : callable or tuple of (callable, str)\n Function to apply to this object or, alternatively, a\n ``(callable, data_keyword)`` tuple where ``data_keyword`` is a\n string indicating the keyword of ``callable`` that expects the\n object.\n *args : iterable, optional\n Positional arguments passed into ``func``.\n **kwargs : dict, optional\n A dictionary of keyword arguments passed into ``func``.\n\n Returns\n -------\n object : the return type of ``func``.\n """\n if isinstance(func, tuple):\n func, target = func\n if target in kwargs:\n msg = f"{target} is both the pipe target and a keyword argument"\n raise ValueError(msg)\n kwargs[target] = obj\n return func(*args, **kwargs)\n else:\n return func(obj, *args, **kwargs)\n\n\ndef get_rename_function(mapper):\n """\n Returns a function that will map names/labels, dependent if mapper\n is a dict, Series or just a function.\n """\n\n def f(x):\n if x in mapper:\n return mapper[x]\n else:\n return x\n\n return f if isinstance(mapper, (abc.Mapping, ABCSeries)) else mapper\n\n\ndef convert_to_list_like(\n values: Hashable | Iterable | AnyArrayLike,\n) -> list | AnyArrayLike:\n """\n Convert list-like or scalar input to list-like. List, numpy and pandas array-like\n inputs are returned unmodified whereas others are converted to list.\n """\n if isinstance(values, (list, np.ndarray, ABCIndex, ABCSeries, ABCExtensionArray)):\n return values\n elif isinstance(values, abc.Iterable) and not isinstance(values, str):\n return list(values)\n\n return [values]\n\n\n@contextlib.contextmanager\ndef temp_setattr(\n obj, attr: str, value, condition: bool = True\n) -> Generator[None, None, None]:\n """\n Temporarily set attribute on an object.\n\n Parameters\n ----------\n obj : object\n Object whose attribute will be modified.\n attr : str\n Attribute to modify.\n value : Any\n Value to temporarily set attribute to.\n condition : bool, default True\n Whether to set the attribute. 
Provided in order to not have to\n conditionally use this context manager.\n\n Yields\n ------\n object : obj with modified attribute.\n """\n if condition:\n old_value = getattr(obj, attr)\n setattr(obj, attr, value)\n try:\n yield obj\n finally:\n if condition:\n setattr(obj, attr, old_value)\n\n\ndef require_length_match(data, index: Index) -> None:\n """\n Check the length of data matches the length of the index.\n """\n if len(data) != len(index):\n raise ValueError(\n "Length of values "\n f"({len(data)}) "\n "does not match length of index "\n f"({len(index)})"\n )\n\n\n# the ufuncs np.maximum.reduce and np.minimum.reduce default to axis=0,\n# whereas np.min and np.max (which directly call obj.min and obj.max)\n# default to axis=None.\n_builtin_table = {\n builtins.sum: np.sum,\n builtins.max: np.maximum.reduce,\n builtins.min: np.minimum.reduce,\n}\n\n# GH#53425: Only for deprecation\n_builtin_table_alias = {\n builtins.sum: "np.sum",\n builtins.max: "np.maximum.reduce",\n builtins.min: "np.minimum.reduce",\n}\n\n_cython_table = {\n builtins.sum: "sum",\n builtins.max: "max",\n builtins.min: "min",\n np.all: "all",\n np.any: "any",\n np.sum: "sum",\n np.nansum: "sum",\n np.mean: "mean",\n np.nanmean: "mean",\n np.prod: "prod",\n np.nanprod: "prod",\n np.std: "std",\n np.nanstd: "std",\n np.var: "var",\n np.nanvar: "var",\n np.median: "median",\n np.nanmedian: "median",\n np.max: "max",\n np.nanmax: "max",\n np.min: "min",\n np.nanmin: "min",\n np.cumprod: "cumprod",\n np.nancumprod: "cumprod",\n np.cumsum: "cumsum",\n np.nancumsum: "cumsum",\n}\n\n\ndef get_cython_func(arg: Callable) -> str | None:\n """\n if we define an internal function for this argument, return it\n """\n return _cython_table.get(arg)\n\n\ndef is_builtin_func(arg):\n """\n if we define a builtin function for this argument, return it,\n otherwise return the arg\n """\n return _builtin_table.get(arg, arg)\n\n\ndef fill_missing_names(names: Sequence[Hashable | None]) -> list[Hashable]:\n """\n If a name is missing then replace it by level_n, where n is the count\n\n .. versionadded:: 1.4.0\n\n Parameters\n ----------\n names : list-like\n list of column names or None values.\n\n Returns\n -------\n list\n list of column names with the None values replaced.\n """\n return [f"level_{i}" if name is None else name for i, name in enumerate(names)]\n
.venv\Lib\site-packages\pandas\core\common.py
common.py
Python
17,449
0.95
0.179604
0.055762
react-lib
714
2024-09-04T18:41:39.883617
MIT
false
d77b7b3755e47e6d359df14a2c745c08
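A short sketch exercising a few of the helpers defined in the common.py record above. The module itself states it is not part of the public API, so these imports are shown for illustration only and may change between pandas versions.

import pandas.core.common as com

print(list(com.flatten([1, [2, [3, "abc"]]])))   # [1, 2, 3, 'abc']: strings are not exploded
print(com.count_not_none(1, None, "x", None))    # 2

# random_state: ints, arrays and BitGenerators seed a RandomState; Generators pass through
rs = com.random_state(42)
print(type(rs).__name__)                         # RandomState

# pipe with a (callable, data_keyword) tuple routes the object by keyword
def describe(prefix, data=None):
    return f"{prefix}: {len(data)} rows"

print(com.pipe([1, 2, 3], (describe, "data"), "demo"))   # demo: 3 rows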
"""\nThis module is imported from the pandas package __init__.py file\nin order to ensure that the core.config options registered here will\nbe available as soon as the user loads the package. if register_option\nis invoked inside specific modules, they will not be registered until that\nmodule is imported, which may or may not be a problem.\n\nIf you need to make sure options are available even before a certain\nmodule is imported, register them here rather than in the module.\n\n"""\nfrom __future__ import annotations\n\nimport os\nfrom typing import (\n Any,\n Callable,\n)\n\nimport pandas._config.config as cf\nfrom pandas._config.config import (\n is_bool,\n is_callable,\n is_instance_factory,\n is_int,\n is_nonnegative_int,\n is_one_of_factory,\n is_str,\n is_text,\n)\n\n# compute\n\nuse_bottleneck_doc = """\n: bool\n Use the bottleneck library to accelerate if it is installed,\n the default is True\n Valid values: False,True\n"""\n\n\ndef use_bottleneck_cb(key) -> None:\n from pandas.core import nanops\n\n nanops.set_use_bottleneck(cf.get_option(key))\n\n\nuse_numexpr_doc = """\n: bool\n Use the numexpr library to accelerate computation if it is installed,\n the default is True\n Valid values: False,True\n"""\n\n\ndef use_numexpr_cb(key) -> None:\n from pandas.core.computation import expressions\n\n expressions.set_use_numexpr(cf.get_option(key))\n\n\nuse_numba_doc = """\n: bool\n Use the numba engine option for select operations if it is installed,\n the default is False\n Valid values: False,True\n"""\n\n\ndef use_numba_cb(key) -> None:\n from pandas.core.util import numba_\n\n numba_.set_use_numba(cf.get_option(key))\n\n\nwith cf.config_prefix("compute"):\n cf.register_option(\n "use_bottleneck",\n True,\n use_bottleneck_doc,\n validator=is_bool,\n cb=use_bottleneck_cb,\n )\n cf.register_option(\n "use_numexpr", True, use_numexpr_doc, validator=is_bool, cb=use_numexpr_cb\n )\n cf.register_option(\n "use_numba", False, use_numba_doc, validator=is_bool, cb=use_numba_cb\n )\n#\n# options from the "display" namespace\n\npc_precision_doc = """\n: int\n Floating point output precision in terms of number of places after the\n decimal, for regular formatting as well as scientific notation. Similar\n to ``precision`` in :meth:`numpy.set_printoptions`.\n"""\n\npc_colspace_doc = """\n: int\n Default space for DataFrame columns.\n"""\n\npc_max_rows_doc = """\n: int\n If max_rows is exceeded, switch to truncate view. Depending on\n `large_repr`, objects are either centrally truncated or printed as\n a summary view. 'None' value means unlimited.\n\n In case python/IPython is running in a terminal and `large_repr`\n equals 'truncate' this can be set to 0 and pandas will auto-detect\n the height of the terminal and print a truncated object which fits\n the screen height. The IPython notebook, IPython qtconsole, or\n IDLE do not run in a terminal and hence it is not possible to do\n correct auto-detection.\n"""\n\npc_min_rows_doc = """\n: int\n The numbers of rows to show in a truncated view (when `max_rows` is\n exceeded). Ignored when `max_rows` is set to None or 0. When set to\n None, follows the value of `max_rows`.\n"""\n\npc_max_cols_doc = """\n: int\n If max_cols is exceeded, switch to truncate view. Depending on\n `large_repr`, objects are either centrally truncated or printed as\n a summary view. 
'None' value means unlimited.\n\n In case python/IPython is running in a terminal and `large_repr`\n equals 'truncate' this can be set to 0 or None and pandas will auto-detect\n the width of the terminal and print a truncated object which fits\n the screen width. The IPython notebook, IPython qtconsole, or IDLE\n do not run in a terminal and hence it is not possible to do\n correct auto-detection and defaults to 20.\n"""\n\npc_max_categories_doc = """\n: int\n This sets the maximum number of categories pandas should output when\n printing out a `Categorical` or a Series of dtype "category".\n"""\n\npc_max_info_cols_doc = """\n: int\n max_info_columns is used in DataFrame.info method to decide if\n per column information will be printed.\n"""\n\npc_nb_repr_h_doc = """\n: boolean\n When True, IPython notebook will use html representation for\n pandas objects (if it is available).\n"""\n\npc_pprint_nest_depth = """\n: int\n Controls the number of nested levels to process when pretty-printing\n"""\n\npc_multi_sparse_doc = """\n: boolean\n "sparsify" MultiIndex display (don't display repeated\n elements in outer levels within groups)\n"""\n\nfloat_format_doc = """\n: callable\n The callable should accept a floating point number and return\n a string with the desired format of the number. This is used\n in some places like SeriesFormatter.\n See formats.format.EngFormatter for an example.\n"""\n\nmax_colwidth_doc = """\n: int or None\n The maximum width in characters of a column in the repr of\n a pandas data structure. When the column overflows, a "..."\n placeholder is embedded in the output. A 'None' value means unlimited.\n"""\n\ncolheader_justify_doc = """\n: 'left'/'right'\n Controls the justification of column headers. used by DataFrameFormatter.\n"""\n\npc_expand_repr_doc = """\n: boolean\n Whether to print out the full DataFrame repr for wide DataFrames across\n multiple lines, `max_columns` is still respected, but the output will\n wrap-around across multiple "pages" if its width exceeds `display.width`.\n"""\n\npc_show_dimensions_doc = """\n: boolean or 'truncate'\n Whether to print out dimensions at the end of DataFrame repr.\n If 'truncate' is specified, only print out the dimensions if the\n frame is truncated (e.g. not display all rows and/or columns)\n"""\n\npc_east_asian_width_doc = """\n: boolean\n Whether to use the Unicode East Asian Width to calculate the display text\n width.\n Enabling this may affect to the performance (default: False)\n"""\n\npc_ambiguous_as_wide_doc = """\n: boolean\n Whether to handle Unicode characters belong to Ambiguous as Wide (width=2)\n (default: False)\n"""\n\npc_table_schema_doc = """\n: boolean\n Whether to publish a Table Schema representation for frontends\n that support it.\n (default: False)\n"""\n\npc_html_border_doc = """\n: int\n A ``border=value`` attribute is inserted in the ``<table>`` tag\n for the DataFrame HTML repr.\n"""\n\npc_html_use_mathjax_doc = """\\n: boolean\n When True, Jupyter notebook will process table contents using MathJax,\n rendering mathematical expressions enclosed by the dollar symbol.\n (default: True)\n"""\n\npc_max_dir_items = """\\n: int\n The number of items that will be added to `dir(...)`. 'None' value means\n unlimited. 
Because dir is cached, changing this option will not immediately\n affect already existing dataframes until a column is deleted or added.\n\n This is for instance used to suggest columns from a dataframe to tab\n completion.\n"""\n\npc_width_doc = """\n: int\n Width of the display in characters. In case python/IPython is running in\n a terminal this can be set to None and pandas will correctly auto-detect\n the width.\n Note that the IPython notebook, IPython qtconsole, or IDLE do not run in a\n terminal and hence it is not possible to correctly detect the width.\n"""\n\npc_chop_threshold_doc = """\n: float or None\n if set to a float value, all float values smaller than the given threshold\n will be displayed as exactly 0 by repr and friends.\n"""\n\npc_max_seq_items = """\n: int or None\n When pretty-printing a long sequence, no more then `max_seq_items`\n will be printed. If items are omitted, they will be denoted by the\n addition of "..." to the resulting string.\n\n If set to None, the number of items to be printed is unlimited.\n"""\n\npc_max_info_rows_doc = """\n: int\n df.info() will usually show null-counts for each column.\n For large frames this can be quite slow. max_info_rows and max_info_cols\n limit this null check only to frames with smaller dimensions than\n specified.\n"""\n\npc_large_repr_doc = """\n: 'truncate'/'info'\n For DataFrames exceeding max_rows/max_cols, the repr (and HTML repr) can\n show a truncated table, or switch to the view from\n df.info() (the behaviour in earlier versions of pandas).\n"""\n\npc_memory_usage_doc = """\n: bool, string or None\n This specifies if the memory usage of a DataFrame should be displayed when\n df.info() is called. Valid values True,False,'deep'\n"""\n\n\ndef table_schema_cb(key) -> None:\n from pandas.io.formats.printing import enable_data_resource_formatter\n\n enable_data_resource_formatter(cf.get_option(key))\n\n\ndef is_terminal() -> bool:\n """\n Detect if Python is running in a terminal.\n\n Returns True if Python is running in a terminal or False if not.\n """\n try:\n # error: Name 'get_ipython' is not defined\n ip = get_ipython() # type: ignore[name-defined]\n except NameError: # assume standard Python interpreter in a terminal\n return True\n else:\n if hasattr(ip, "kernel"): # IPython as a Jupyter kernel\n return False\n else: # IPython in a terminal\n return True\n\n\nwith cf.config_prefix("display"):\n cf.register_option("precision", 6, pc_precision_doc, validator=is_nonnegative_int)\n cf.register_option(\n "float_format",\n None,\n float_format_doc,\n validator=is_one_of_factory([None, is_callable]),\n )\n cf.register_option(\n "max_info_rows",\n 1690785,\n pc_max_info_rows_doc,\n validator=is_int,\n )\n cf.register_option("max_rows", 60, pc_max_rows_doc, validator=is_nonnegative_int)\n cf.register_option(\n "min_rows",\n 10,\n pc_min_rows_doc,\n validator=is_instance_factory([type(None), int]),\n )\n cf.register_option("max_categories", 8, pc_max_categories_doc, validator=is_int)\n\n cf.register_option(\n "max_colwidth",\n 50,\n max_colwidth_doc,\n validator=is_nonnegative_int,\n )\n if is_terminal():\n max_cols = 0 # automatically determine optimal number of columns\n else:\n max_cols = 20 # cannot determine optimal number of columns\n cf.register_option(\n "max_columns", max_cols, pc_max_cols_doc, validator=is_nonnegative_int\n )\n cf.register_option(\n "large_repr",\n "truncate",\n pc_large_repr_doc,\n validator=is_one_of_factory(["truncate", "info"]),\n )\n cf.register_option("max_info_columns", 100, 
pc_max_info_cols_doc, validator=is_int)\n cf.register_option(\n "colheader_justify", "right", colheader_justify_doc, validator=is_text\n )\n cf.register_option("notebook_repr_html", True, pc_nb_repr_h_doc, validator=is_bool)\n cf.register_option("pprint_nest_depth", 3, pc_pprint_nest_depth, validator=is_int)\n cf.register_option("multi_sparse", True, pc_multi_sparse_doc, validator=is_bool)\n cf.register_option("expand_frame_repr", True, pc_expand_repr_doc)\n cf.register_option(\n "show_dimensions",\n "truncate",\n pc_show_dimensions_doc,\n validator=is_one_of_factory([True, False, "truncate"]),\n )\n cf.register_option("chop_threshold", None, pc_chop_threshold_doc)\n cf.register_option("max_seq_items", 100, pc_max_seq_items)\n cf.register_option(\n "width", 80, pc_width_doc, validator=is_instance_factory([type(None), int])\n )\n cf.register_option(\n "memory_usage",\n True,\n pc_memory_usage_doc,\n validator=is_one_of_factory([None, True, False, "deep"]),\n )\n cf.register_option(\n "unicode.east_asian_width", False, pc_east_asian_width_doc, validator=is_bool\n )\n cf.register_option(\n "unicode.ambiguous_as_wide", False, pc_east_asian_width_doc, validator=is_bool\n )\n cf.register_option(\n "html.table_schema",\n False,\n pc_table_schema_doc,\n validator=is_bool,\n cb=table_schema_cb,\n )\n cf.register_option("html.border", 1, pc_html_border_doc, validator=is_int)\n cf.register_option(\n "html.use_mathjax", True, pc_html_use_mathjax_doc, validator=is_bool\n )\n cf.register_option(\n "max_dir_items", 100, pc_max_dir_items, validator=is_nonnegative_int\n )\n\ntc_sim_interactive_doc = """\n: boolean\n Whether to simulate interactive mode for purposes of testing\n"""\n\nwith cf.config_prefix("mode"):\n cf.register_option("sim_interactive", False, tc_sim_interactive_doc)\n\nuse_inf_as_na_doc = """\n: boolean\n True means treat None, NaN, INF, -INF as NA (old way),\n False means None and NaN are null, but INF, -INF are not NA\n (new way).\n\n This option is deprecated in pandas 2.1.0 and will be removed in 3.0.\n"""\n\n# We don't want to start importing everything at the global context level\n# or we'll hit circular deps.\n\n\ndef use_inf_as_na_cb(key) -> None:\n # TODO(3.0): enforcing this deprecation will close GH#52501\n from pandas.core.dtypes.missing import _use_inf_as_na\n\n _use_inf_as_na(key)\n\n\nwith cf.config_prefix("mode"):\n cf.register_option("use_inf_as_na", False, use_inf_as_na_doc, cb=use_inf_as_na_cb)\n\ncf.deprecate_option(\n # GH#51684\n "mode.use_inf_as_na",\n "use_inf_as_na option is deprecated and will be removed in a future "\n "version. Convert inf values to NaN before operating instead.",\n)\n\ndata_manager_doc = """\n: string\n Internal data manager type; can be "block" or "array". Defaults to "block",\n unless overridden by the 'PANDAS_DATA_MANAGER' environment variable (needs\n to be set before pandas is imported).\n"""\n\n\nwith cf.config_prefix("mode"):\n cf.register_option(\n "data_manager",\n # Get the default from an environment variable, if set, otherwise defaults\n # to "block". This environment variable can be set for testing.\n os.environ.get("PANDAS_DATA_MANAGER", "block"),\n data_manager_doc,\n validator=is_one_of_factory(["block", "array"]),\n )\n\ncf.deprecate_option(\n # GH#55043\n "mode.data_manager",\n "data_manager option is deprecated and will be removed in a future "\n "version. Only the BlockManager will be available.",\n)\n\n\n# TODO better name?\ncopy_on_write_doc = """\n: bool\n Use new copy-view behaviour using Copy-on-Write. 
Defaults to False,\n unless overridden by the 'PANDAS_COPY_ON_WRITE' environment variable\n (if set to "1" for True, needs to be set before pandas is imported).\n"""\n\n\nwith cf.config_prefix("mode"):\n cf.register_option(\n "copy_on_write",\n # Get the default from an environment variable, if set, otherwise defaults\n # to False. This environment variable can be set for testing.\n "warn"\n if os.environ.get("PANDAS_COPY_ON_WRITE", "0") == "warn"\n else os.environ.get("PANDAS_COPY_ON_WRITE", "0") == "1",\n copy_on_write_doc,\n validator=is_one_of_factory([True, False, "warn"]),\n )\n\n\n# user warnings\nchained_assignment = """\n: string\n Raise an exception, warn, or no action if trying to use chained assignment,\n The default is warn\n"""\n\nwith cf.config_prefix("mode"):\n cf.register_option(\n "chained_assignment",\n "warn",\n chained_assignment,\n validator=is_one_of_factory([None, "warn", "raise"]),\n )\n\n\nstring_storage_doc = """\n: string\n The default storage for StringDtype.\n"""\n\n\ndef is_valid_string_storage(value: Any) -> None:\n legal_values = ["auto", "python", "pyarrow"]\n if value not in legal_values:\n msg = "Value must be one of python|pyarrow"\n if value == "pyarrow_numpy":\n # TODO: we can remove extra message after 3.0\n msg += (\n ". 'pyarrow_numpy' was specified, but this option should be "\n "enabled using pandas.options.future.infer_string instead"\n )\n raise ValueError(msg)\n\n\nwith cf.config_prefix("mode"):\n cf.register_option(\n "string_storage",\n "auto",\n string_storage_doc,\n # validator=is_one_of_factory(["python", "pyarrow"]),\n validator=is_valid_string_storage,\n )\n\n\n# Set up the io.excel specific reader configuration.\nreader_engine_doc = """\n: string\n The default Excel reader engine for '{ext}' files. Available options:\n auto, {others}.\n"""\n\n_xls_options = ["xlrd", "calamine"]\n_xlsm_options = ["xlrd", "openpyxl", "calamine"]\n_xlsx_options = ["xlrd", "openpyxl", "calamine"]\n_ods_options = ["odf", "calamine"]\n_xlsb_options = ["pyxlsb", "calamine"]\n\n\nwith cf.config_prefix("io.excel.xls"):\n cf.register_option(\n "reader",\n "auto",\n reader_engine_doc.format(ext="xls", others=", ".join(_xls_options)),\n validator=is_one_of_factory(_xls_options + ["auto"]),\n )\n\nwith cf.config_prefix("io.excel.xlsm"):\n cf.register_option(\n "reader",\n "auto",\n reader_engine_doc.format(ext="xlsm", others=", ".join(_xlsm_options)),\n validator=is_one_of_factory(_xlsm_options + ["auto"]),\n )\n\n\nwith cf.config_prefix("io.excel.xlsx"):\n cf.register_option(\n "reader",\n "auto",\n reader_engine_doc.format(ext="xlsx", others=", ".join(_xlsx_options)),\n validator=is_one_of_factory(_xlsx_options + ["auto"]),\n )\n\n\nwith cf.config_prefix("io.excel.ods"):\n cf.register_option(\n "reader",\n "auto",\n reader_engine_doc.format(ext="ods", others=", ".join(_ods_options)),\n validator=is_one_of_factory(_ods_options + ["auto"]),\n )\n\nwith cf.config_prefix("io.excel.xlsb"):\n cf.register_option(\n "reader",\n "auto",\n reader_engine_doc.format(ext="xlsb", others=", ".join(_xlsb_options)),\n validator=is_one_of_factory(_xlsb_options + ["auto"]),\n )\n\n# Set up the io.excel specific writer configuration.\nwriter_engine_doc = """\n: string\n The default Excel writer engine for '{ext}' files. 
Available options:\n auto, {others}.\n"""\n\n_xlsm_options = ["openpyxl"]\n_xlsx_options = ["openpyxl", "xlsxwriter"]\n_ods_options = ["odf"]\n\n\nwith cf.config_prefix("io.excel.xlsm"):\n cf.register_option(\n "writer",\n "auto",\n writer_engine_doc.format(ext="xlsm", others=", ".join(_xlsm_options)),\n validator=str,\n )\n\n\nwith cf.config_prefix("io.excel.xlsx"):\n cf.register_option(\n "writer",\n "auto",\n writer_engine_doc.format(ext="xlsx", others=", ".join(_xlsx_options)),\n validator=str,\n )\n\n\nwith cf.config_prefix("io.excel.ods"):\n cf.register_option(\n "writer",\n "auto",\n writer_engine_doc.format(ext="ods", others=", ".join(_ods_options)),\n validator=str,\n )\n\n\n# Set up the io.parquet specific configuration.\nparquet_engine_doc = """\n: string\n The default parquet reader/writer engine. Available options:\n 'auto', 'pyarrow', 'fastparquet', the default is 'auto'\n"""\n\nwith cf.config_prefix("io.parquet"):\n cf.register_option(\n "engine",\n "auto",\n parquet_engine_doc,\n validator=is_one_of_factory(["auto", "pyarrow", "fastparquet"]),\n )\n\n\n# Set up the io.sql specific configuration.\nsql_engine_doc = """\n: string\n The default sql reader/writer engine. Available options:\n 'auto', 'sqlalchemy', the default is 'auto'\n"""\n\nwith cf.config_prefix("io.sql"):\n cf.register_option(\n "engine",\n "auto",\n sql_engine_doc,\n validator=is_one_of_factory(["auto", "sqlalchemy"]),\n )\n\n# --------\n# Plotting\n# ---------\n\nplotting_backend_doc = """\n: str\n The plotting backend to use. The default value is "matplotlib", the\n backend provided with pandas. Other backends can be specified by\n providing the name of the module that implements the backend.\n"""\n\n\ndef register_plotting_backend_cb(key) -> None:\n if key == "matplotlib":\n # We defer matplotlib validation, since it's the default\n return\n from pandas.plotting._core import _get_plot_backend\n\n _get_plot_backend(key)\n\n\nwith cf.config_prefix("plotting"):\n cf.register_option(\n "backend",\n defval="matplotlib",\n doc=plotting_backend_doc,\n validator=register_plotting_backend_cb,\n )\n\n\nregister_converter_doc = """\n: bool or 'auto'.\n Whether to register converters with matplotlib's units registry for\n dates, times, datetimes, and Periods. Toggling to False will remove\n the converters, restoring any converters that pandas overwrote.\n"""\n\n\ndef register_converter_cb(key) -> None:\n from pandas.plotting import (\n deregister_matplotlib_converters,\n register_matplotlib_converters,\n )\n\n if cf.get_option(key):\n register_matplotlib_converters()\n else:\n deregister_matplotlib_converters()\n\n\nwith cf.config_prefix("plotting.matplotlib"):\n cf.register_option(\n "register_converters",\n "auto",\n register_converter_doc,\n validator=is_one_of_factory(["auto", True, False]),\n cb=register_converter_cb,\n )\n\n# ------\n# Styler\n# ------\n\nstyler_sparse_index_doc = """\n: bool\n Whether to sparsify the display of a hierarchical index. Setting to False will\n display each explicit level element in a hierarchical key for each row.\n"""\n\nstyler_sparse_columns_doc = """\n: bool\n Whether to sparsify the display of hierarchical columns. 
Setting to False will\n display each explicit level element in a hierarchical key for each column.\n"""\n\nstyler_render_repr = """\n: str\n Determine which output to use in Jupyter Notebook in {"html", "latex"}.\n"""\n\nstyler_max_elements = """\n: int\n The maximum number of data-cell (<td>) elements that will be rendered before\n trimming will occur over columns, rows or both if needed.\n"""\n\nstyler_max_rows = """\n: int, optional\n The maximum number of rows that will be rendered. May still be reduced to\n satisfy ``max_elements``, which takes precedence.\n"""\n\nstyler_max_columns = """\n: int, optional\n The maximum number of columns that will be rendered. May still be reduced to\n satisfy ``max_elements``, which takes precedence.\n"""\n\nstyler_precision = """\n: int\n The precision for floats and complex numbers.\n"""\n\nstyler_decimal = """\n: str\n The character representation for the decimal separator for floats and complex.\n"""\n\nstyler_thousands = """\n: str, optional\n The character representation for thousands separator for floats, int and complex.\n"""\n\nstyler_na_rep = """\n: str, optional\n The string representation for values identified as missing.\n"""\n\nstyler_escape = """\n: str, optional\n Whether to escape certain characters according to the given context; html or latex.\n"""\n\nstyler_formatter = """\n: str, callable, dict, optional\n A formatter object to be used as default within ``Styler.format``.\n"""\n\nstyler_multirow_align = """\n: {"c", "t", "b"}\n The specifier for vertical alignment of sparsified LaTeX multirows.\n"""\n\nstyler_multicol_align = r"""\n: {"r", "c", "l", "naive-l", "naive-r"}\n The specifier for horizontal alignment of sparsified LaTeX multicolumns. Pipe\n decorators can also be added to non-naive values to draw vertical\n rules, e.g. "\|r" will draw a rule on the left side of right aligned merged cells.\n"""\n\nstyler_hrules = """\n: bool\n Whether to add horizontal rules on top and bottom and below the headers.\n"""\n\nstyler_environment = """\n: str\n The environment to replace ``\\begin{table}``. 
If "longtable" is used results\n in a specific longtable environment format.\n"""\n\nstyler_encoding = """\n: str\n The encoding used for output HTML and LaTeX files.\n"""\n\nstyler_mathjax = """\n: bool\n If False will render special CSS classes to table attributes that indicate Mathjax\n will not be used in Jupyter Notebook.\n"""\n\nwith cf.config_prefix("styler"):\n cf.register_option("sparse.index", True, styler_sparse_index_doc, validator=is_bool)\n\n cf.register_option(\n "sparse.columns", True, styler_sparse_columns_doc, validator=is_bool\n )\n\n cf.register_option(\n "render.repr",\n "html",\n styler_render_repr,\n validator=is_one_of_factory(["html", "latex"]),\n )\n\n cf.register_option(\n "render.max_elements",\n 2**18,\n styler_max_elements,\n validator=is_nonnegative_int,\n )\n\n cf.register_option(\n "render.max_rows",\n None,\n styler_max_rows,\n validator=is_nonnegative_int,\n )\n\n cf.register_option(\n "render.max_columns",\n None,\n styler_max_columns,\n validator=is_nonnegative_int,\n )\n\n cf.register_option("render.encoding", "utf-8", styler_encoding, validator=is_str)\n\n cf.register_option("format.decimal", ".", styler_decimal, validator=is_str)\n\n cf.register_option(\n "format.precision", 6, styler_precision, validator=is_nonnegative_int\n )\n\n cf.register_option(\n "format.thousands",\n None,\n styler_thousands,\n validator=is_instance_factory([type(None), str]),\n )\n\n cf.register_option(\n "format.na_rep",\n None,\n styler_na_rep,\n validator=is_instance_factory([type(None), str]),\n )\n\n cf.register_option(\n "format.escape",\n None,\n styler_escape,\n validator=is_one_of_factory([None, "html", "latex", "latex-math"]),\n )\n\n cf.register_option(\n "format.formatter",\n None,\n styler_formatter,\n validator=is_instance_factory([type(None), dict, Callable, str]),\n )\n\n cf.register_option("html.mathjax", True, styler_mathjax, validator=is_bool)\n\n cf.register_option(\n "latex.multirow_align",\n "c",\n styler_multirow_align,\n validator=is_one_of_factory(["c", "t", "b", "naive"]),\n )\n\n val_mca = ["r", "|r|", "|r", "r|", "c", "|c|", "|c", "c|", "l", "|l|", "|l", "l|"]\n val_mca += ["naive-l", "naive-r"]\n cf.register_option(\n "latex.multicol_align",\n "r",\n styler_multicol_align,\n validator=is_one_of_factory(val_mca),\n )\n\n cf.register_option("latex.hrules", False, styler_hrules, validator=is_bool)\n\n cf.register_option(\n "latex.environment",\n None,\n styler_environment,\n validator=is_instance_factory([type(None), str]),\n )\n\n\nwith cf.config_prefix("future"):\n cf.register_option(\n "infer_string",\n True if os.environ.get("PANDAS_FUTURE_INFER_STRING", "0") == "1" else False,\n "Whether to infer sequence of str objects as pyarrow string "\n "dtype, which will be the default in pandas 3.0 "\n "(at which point this option will be deprecated).",\n validator=is_one_of_factory([True, False]),\n )\n\n cf.register_option(\n "no_silent_downcasting",\n False,\n "Whether to opt-in to the future behavior which will *not* silently "\n "downcast results from Series and DataFrame `where`, `mask`, and `clip` "\n "methods. "\n "Silent downcasting will be removed in pandas 3.0 "\n "(at which point this option will be deprecated).",\n validator=is_one_of_factory([True, False]),\n )\n
.venv\Lib\site-packages\pandas\core\config_init.py
config_init.py
Python
27,004
0.95
0.072264
0.036223
python-kit
29
2024-07-17T04:54:37.383216
Apache-2.0
false
b6f4a60d7b34fd08b44c12e01f80257d
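The options registered in the config_init.py record above are consumed through the public set_option / option_context API. A brief sketch follows; the printed defaults assume an unmodified pandas configuration.

import pandas as pd

print(pd.get_option("display.max_rows"))      # 60 by default
pd.set_option("display.precision", 3)

with pd.option_context("display.max_columns", 5, "display.width", 120):
    df = pd.DataFrame({f"c{i}": range(3) for i in range(10)})
    print(df)                                  # repr truncated to 5 columns inside this block

pd.reset_option("display.precision")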
"""\nConstructor functions intended to be shared by pd.array, Series.__init__,\nand Index.__new__.\n\nThese should not depend on core.internals.\n"""\nfrom __future__ import annotations\n\nfrom collections.abc import Sequence\nfrom typing import (\n TYPE_CHECKING,\n Optional,\n Union,\n cast,\n overload,\n)\nimport warnings\n\nimport numpy as np\nfrom numpy import ma\n\nfrom pandas._config import using_string_dtype\n\nfrom pandas._libs import lib\nfrom pandas._libs.tslibs import (\n Period,\n get_supported_dtype,\n is_supported_dtype,\n)\nfrom pandas._typing import (\n AnyArrayLike,\n ArrayLike,\n Dtype,\n DtypeObj,\n T,\n)\nfrom pandas.util._exceptions import find_stack_level\n\nfrom pandas.core.dtypes.base import ExtensionDtype\nfrom pandas.core.dtypes.cast import (\n construct_1d_arraylike_from_scalar,\n construct_1d_object_array_from_listlike,\n maybe_cast_to_datetime,\n maybe_cast_to_integer_array,\n maybe_convert_platform,\n maybe_infer_to_datetimelike,\n maybe_promote,\n)\nfrom pandas.core.dtypes.common import (\n is_list_like,\n is_object_dtype,\n is_string_dtype,\n pandas_dtype,\n)\nfrom pandas.core.dtypes.dtypes import NumpyEADtype\nfrom pandas.core.dtypes.generic import (\n ABCDataFrame,\n ABCExtensionArray,\n ABCIndex,\n ABCSeries,\n)\nfrom pandas.core.dtypes.missing import isna\n\nimport pandas.core.common as com\n\nif TYPE_CHECKING:\n from pandas import (\n Index,\n Series,\n )\n from pandas.core.arrays.base import ExtensionArray\n\n\ndef array(\n data: Sequence[object] | AnyArrayLike,\n dtype: Dtype | None = None,\n copy: bool = True,\n) -> ExtensionArray:\n """\n Create an array.\n\n Parameters\n ----------\n data : Sequence of objects\n The scalars inside `data` should be instances of the\n scalar type for `dtype`. It's expected that `data`\n represents a 1-dimensional array of data.\n\n When `data` is an Index or Series, the underlying array\n will be extracted from `data`.\n\n dtype : str, np.dtype, or ExtensionDtype, optional\n The dtype to use for the array. This may be a NumPy\n dtype or an extension type registered with pandas using\n :meth:`pandas.api.extensions.register_extension_dtype`.\n\n If not specified, there are two possibilities:\n\n 1. When `data` is a :class:`Series`, :class:`Index`, or\n :class:`ExtensionArray`, the `dtype` will be taken\n from the data.\n 2. Otherwise, pandas will attempt to infer the `dtype`\n from the data.\n\n Note that when `data` is a NumPy array, ``data.dtype`` is\n *not* used for inferring the array type. 
This is because\n NumPy cannot represent all the types of data that can be\n held in extension arrays.\n\n Currently, pandas will infer an extension dtype for sequences of\n\n ============================== =======================================\n Scalar Type Array Type\n ============================== =======================================\n :class:`pandas.Interval` :class:`pandas.arrays.IntervalArray`\n :class:`pandas.Period` :class:`pandas.arrays.PeriodArray`\n :class:`datetime.datetime` :class:`pandas.arrays.DatetimeArray`\n :class:`datetime.timedelta` :class:`pandas.arrays.TimedeltaArray`\n :class:`int` :class:`pandas.arrays.IntegerArray`\n :class:`float` :class:`pandas.arrays.FloatingArray`\n :class:`str` :class:`pandas.arrays.StringArray` or\n :class:`pandas.arrays.ArrowStringArray`\n :class:`bool` :class:`pandas.arrays.BooleanArray`\n ============================== =======================================\n\n The ExtensionArray created when the scalar type is :class:`str` is determined by\n ``pd.options.mode.string_storage`` if the dtype is not explicitly given.\n\n For all other cases, NumPy's usual inference rules will be used.\n copy : bool, default True\n Whether to copy the data, even if not necessary. Depending\n on the type of `data`, creating the new array may require\n copying data, even if ``copy=False``.\n\n Returns\n -------\n ExtensionArray\n The newly created array.\n\n Raises\n ------\n ValueError\n When `data` is not 1-dimensional.\n\n See Also\n --------\n numpy.array : Construct a NumPy array.\n Series : Construct a pandas Series.\n Index : Construct a pandas Index.\n arrays.NumpyExtensionArray : ExtensionArray wrapping a NumPy array.\n Series.array : Extract the array stored within a Series.\n\n Notes\n -----\n Omitting the `dtype` argument means pandas will attempt to infer the\n best array type from the values in the data. As new array types are\n added by pandas and 3rd party libraries, the "best" array type may\n change. We recommend specifying `dtype` to ensure that\n\n 1. the correct array type for the data is returned\n 2. the returned array type doesn't change as new extension types\n are added by pandas and third-party libraries\n\n Additionally, if the underlying memory representation of the returned\n array matters, we recommend specifying the `dtype` as a concrete object\n rather than a string alias or allowing it to be inferred. For example,\n a future version of pandas or a 3rd-party library may include a\n dedicated ExtensionArray for string data. In this event, the following\n would no longer return a :class:`arrays.NumpyExtensionArray` backed by a\n NumPy array.\n\n >>> pd.array(['a', 'b'], dtype=str)\n <NumpyExtensionArray>\n ['a', 'b']\n Length: 2, dtype: str32\n\n This would instead return the new ExtensionArray dedicated for string\n data. If you really need the new array to be backed by a NumPy array,\n specify that in the dtype.\n\n >>> pd.array(['a', 'b'], dtype=np.dtype("<U1"))\n <NumpyExtensionArray>\n ['a', 'b']\n Length: 2, dtype: str32\n\n Finally, Pandas has arrays that mostly overlap with NumPy\n\n * :class:`arrays.DatetimeArray`\n * :class:`arrays.TimedeltaArray`\n\n When data with a ``datetime64[ns]`` or ``timedelta64[ns]`` dtype is\n passed, pandas will always return a ``DatetimeArray`` or ``TimedeltaArray``\n rather than a ``NumpyExtensionArray``. 
This is for symmetry with the case of\n timezone-aware data, which NumPy does not natively support.\n\n >>> pd.array(['2015', '2016'], dtype='datetime64[ns]')\n <DatetimeArray>\n ['2015-01-01 00:00:00', '2016-01-01 00:00:00']\n Length: 2, dtype: datetime64[ns]\n\n >>> pd.array(["1h", "2h"], dtype='timedelta64[ns]')\n <TimedeltaArray>\n ['0 days 01:00:00', '0 days 02:00:00']\n Length: 2, dtype: timedelta64[ns]\n\n Examples\n --------\n If a dtype is not specified, pandas will infer the best dtype from the values.\n See the description of `dtype` for the types pandas infers for.\n\n >>> pd.array([1, 2])\n <IntegerArray>\n [1, 2]\n Length: 2, dtype: Int64\n\n >>> pd.array([1, 2, np.nan])\n <IntegerArray>\n [1, 2, <NA>]\n Length: 3, dtype: Int64\n\n >>> pd.array([1.1, 2.2])\n <FloatingArray>\n [1.1, 2.2]\n Length: 2, dtype: Float64\n\n >>> pd.array(["a", None, "c"])\n <StringArray>\n ['a', <NA>, 'c']\n Length: 3, dtype: string\n\n >>> with pd.option_context("string_storage", "pyarrow"):\n ... arr = pd.array(["a", None, "c"])\n ...\n >>> arr\n <ArrowStringArray>\n ['a', <NA>, 'c']\n Length: 3, dtype: string\n\n >>> pd.array([pd.Period('2000', freq="D"), pd.Period("2000", freq="D")])\n <PeriodArray>\n ['2000-01-01', '2000-01-01']\n Length: 2, dtype: period[D]\n\n You can use the string alias for `dtype`\n\n >>> pd.array(['a', 'b', 'a'], dtype='category')\n ['a', 'b', 'a']\n Categories (2, object): ['a', 'b']\n\n Or specify the actual dtype\n\n >>> pd.array(['a', 'b', 'a'],\n ... dtype=pd.CategoricalDtype(['a', 'b', 'c'], ordered=True))\n ['a', 'b', 'a']\n Categories (3, object): ['a' < 'b' < 'c']\n\n If pandas does not infer a dedicated extension type a\n :class:`arrays.NumpyExtensionArray` is returned.\n\n >>> pd.array([1 + 1j, 3 + 2j])\n <NumpyExtensionArray>\n [(1+1j), (3+2j)]\n Length: 2, dtype: complex128\n\n As mentioned in the "Notes" section, new extension types may be added\n in the future (by pandas or 3rd party libraries), causing the return\n value to no longer be a :class:`arrays.NumpyExtensionArray`. Specify the\n `dtype` as a NumPy dtype if you need to ensure there's no future change in\n behavior.\n\n >>> pd.array([1, 2], dtype=np.dtype("int32"))\n <NumpyExtensionArray>\n [1, 2]\n Length: 2, dtype: int32\n\n `data` must be 1-dimensional. A ValueError is raised when the input\n has the wrong dimensionality.\n\n >>> pd.array(1)\n Traceback (most recent call last):\n ...\n ValueError: Cannot pass scalar '1' to 'pandas.array'.\n """\n from pandas.core.arrays import (\n BooleanArray,\n DatetimeArray,\n ExtensionArray,\n FloatingArray,\n IntegerArray,\n IntervalArray,\n NumpyExtensionArray,\n PeriodArray,\n TimedeltaArray,\n )\n from pandas.core.arrays.string_ import StringDtype\n\n if lib.is_scalar(data):\n msg = f"Cannot pass scalar '{data}' to 'pandas.array'."\n raise ValueError(msg)\n elif isinstance(data, ABCDataFrame):\n raise TypeError("Cannot pass DataFrame to 'pandas.array'")\n\n if dtype is None and isinstance(data, (ABCSeries, ABCIndex, ExtensionArray)):\n # Note: we exclude np.ndarray here, will do type inference on it\n dtype = data.dtype\n\n data = extract_array(data, extract_numpy=True)\n\n # this returns None for not-found dtypes.\n if dtype is not None:\n dtype = pandas_dtype(dtype)\n\n if isinstance(data, ExtensionArray) and (dtype is None or data.dtype == dtype):\n # e.g. 
TimedeltaArray[s], avoid casting to NumpyExtensionArray\n if copy:\n return data.copy()\n return data\n\n if isinstance(dtype, ExtensionDtype):\n cls = dtype.construct_array_type()\n return cls._from_sequence(data, dtype=dtype, copy=copy)\n\n if dtype is None:\n inferred_dtype = lib.infer_dtype(data, skipna=True)\n if inferred_dtype == "period":\n period_data = cast(Union[Sequence[Optional[Period]], AnyArrayLike], data)\n return PeriodArray._from_sequence(period_data, copy=copy)\n\n elif inferred_dtype == "interval":\n return IntervalArray(data, copy=copy)\n\n elif inferred_dtype.startswith("datetime"):\n # datetime, datetime64\n try:\n return DatetimeArray._from_sequence(data, copy=copy)\n except ValueError:\n # Mixture of timezones, fall back to NumpyExtensionArray\n pass\n\n elif inferred_dtype.startswith("timedelta"):\n # timedelta, timedelta64\n return TimedeltaArray._from_sequence(data, copy=copy)\n\n elif inferred_dtype == "string":\n # StringArray/ArrowStringArray depending on pd.options.mode.string_storage\n dtype = StringDtype()\n cls = dtype.construct_array_type()\n return cls._from_sequence(data, dtype=dtype, copy=copy)\n\n elif inferred_dtype == "integer":\n return IntegerArray._from_sequence(data, copy=copy)\n elif inferred_dtype == "empty" and not hasattr(data, "dtype") and not len(data):\n return FloatingArray._from_sequence(data, copy=copy)\n elif (\n inferred_dtype in ("floating", "mixed-integer-float")\n and getattr(data, "dtype", None) != np.float16\n ):\n # GH#44715 Exclude np.float16 bc FloatingArray does not support it;\n # we will fall back to NumpyExtensionArray.\n return FloatingArray._from_sequence(data, copy=copy)\n\n elif inferred_dtype == "boolean":\n return BooleanArray._from_sequence(data, dtype="boolean", copy=copy)\n\n # Pandas overrides NumPy for\n # 1. datetime64[ns,us,ms,s]\n # 2. timedelta64[ns,us,ms,s]\n # so that a DatetimeArray is returned.\n if lib.is_np_dtype(dtype, "M") and is_supported_dtype(dtype):\n return DatetimeArray._from_sequence(data, dtype=dtype, copy=copy)\n if lib.is_np_dtype(dtype, "m") and is_supported_dtype(dtype):\n return TimedeltaArray._from_sequence(data, dtype=dtype, copy=copy)\n\n elif lib.is_np_dtype(dtype, "mM"):\n warnings.warn(\n r"datetime64 and timedelta64 dtype resolutions other than "\n r"'s', 'ms', 'us', and 'ns' are deprecated. 
"\n r"In future releases passing unsupported resolutions will "\n r"raise an exception.",\n FutureWarning,\n stacklevel=find_stack_level(),\n )\n\n return NumpyExtensionArray._from_sequence(data, dtype=dtype, copy=copy)\n\n\n_typs = frozenset(\n {\n "index",\n "rangeindex",\n "multiindex",\n "datetimeindex",\n "timedeltaindex",\n "periodindex",\n "categoricalindex",\n "intervalindex",\n "series",\n }\n)\n\n\n@overload\ndef extract_array(\n obj: Series | Index, extract_numpy: bool = ..., extract_range: bool = ...\n) -> ArrayLike:\n ...\n\n\n@overload\ndef extract_array(\n obj: T, extract_numpy: bool = ..., extract_range: bool = ...\n) -> T | ArrayLike:\n ...\n\n\ndef extract_array(\n obj: T, extract_numpy: bool = False, extract_range: bool = False\n) -> T | ArrayLike:\n """\n Extract the ndarray or ExtensionArray from a Series or Index.\n\n For all other types, `obj` is just returned as is.\n\n Parameters\n ----------\n obj : object\n For Series / Index, the underlying ExtensionArray is unboxed.\n\n extract_numpy : bool, default False\n Whether to extract the ndarray from a NumpyExtensionArray.\n\n extract_range : bool, default False\n If we have a RangeIndex, return range._values if True\n (which is a materialized integer ndarray), otherwise return unchanged.\n\n Returns\n -------\n arr : object\n\n Examples\n --------\n >>> extract_array(pd.Series(['a', 'b', 'c'], dtype='category'))\n ['a', 'b', 'c']\n Categories (3, object): ['a', 'b', 'c']\n\n Other objects like lists, arrays, and DataFrames are just passed through.\n\n >>> extract_array([1, 2, 3])\n [1, 2, 3]\n\n For an ndarray-backed Series / Index the ndarray is returned.\n\n >>> extract_array(pd.Series([1, 2, 3]))\n array([1, 2, 3])\n\n To extract all the way down to the ndarray, pass ``extract_numpy=True``.\n\n >>> extract_array(pd.Series([1, 2, 3]), extract_numpy=True)\n array([1, 2, 3])\n """\n typ = getattr(obj, "_typ", None)\n if typ in _typs:\n # i.e. isinstance(obj, (ABCIndex, ABCSeries))\n if typ == "rangeindex":\n if extract_range:\n # error: "T" has no attribute "_values"\n return obj._values # type: ignore[attr-defined]\n return obj\n\n # error: "T" has no attribute "_values"\n return obj._values # type: ignore[attr-defined]\n\n elif extract_numpy and typ == "npy_extension":\n # i.e. 
isinstance(obj, ABCNumpyExtensionArray)\n # error: "T" has no attribute "to_numpy"\n return obj.to_numpy() # type: ignore[attr-defined]\n\n return obj\n\n\ndef ensure_wrapped_if_datetimelike(arr):\n """\n Wrap datetime64 and timedelta64 ndarrays in DatetimeArray/TimedeltaArray.\n """\n if isinstance(arr, np.ndarray):\n if arr.dtype.kind == "M":\n from pandas.core.arrays import DatetimeArray\n\n dtype = get_supported_dtype(arr.dtype)\n return DatetimeArray._from_sequence(arr, dtype=dtype)\n\n elif arr.dtype.kind == "m":\n from pandas.core.arrays import TimedeltaArray\n\n dtype = get_supported_dtype(arr.dtype)\n return TimedeltaArray._from_sequence(arr, dtype=dtype)\n\n return arr\n\n\ndef sanitize_masked_array(data: ma.MaskedArray) -> np.ndarray:\n """\n Convert numpy MaskedArray to ensure mask is softened.\n """\n mask = ma.getmaskarray(data)\n if mask.any():\n dtype, fill_value = maybe_promote(data.dtype, np.nan)\n dtype = cast(np.dtype, dtype)\n data = ma.asarray(data.astype(dtype, copy=True))\n data.soften_mask() # set hardmask False if it was True\n data[mask] = fill_value\n else:\n data = data.copy()\n return data\n\n\ndef sanitize_array(\n data,\n index: Index | None,\n dtype: DtypeObj | None = None,\n copy: bool = False,\n *,\n allow_2d: bool = False,\n) -> ArrayLike:\n """\n Sanitize input data to an ndarray or ExtensionArray, copy if specified,\n coerce to the dtype if specified.\n\n Parameters\n ----------\n data : Any\n index : Index or None, default None\n dtype : np.dtype, ExtensionDtype, or None, default None\n copy : bool, default False\n allow_2d : bool, default False\n If False, raise if we have a 2D Arraylike.\n\n Returns\n -------\n np.ndarray or ExtensionArray\n """\n original_dtype = dtype\n if isinstance(data, ma.MaskedArray):\n data = sanitize_masked_array(data)\n\n if isinstance(dtype, NumpyEADtype):\n # Avoid ending up with a NumpyExtensionArray\n dtype = dtype.numpy_dtype\n\n object_index = False\n if isinstance(data, ABCIndex) and data.dtype == object and dtype is None:\n object_index = True\n\n # extract ndarray or ExtensionArray, ensure we have no NumpyExtensionArray\n data = extract_array(data, extract_numpy=True, extract_range=True)\n\n if isinstance(data, np.ndarray) and data.ndim == 0:\n if dtype is None:\n dtype = data.dtype\n data = lib.item_from_zerodim(data)\n elif isinstance(data, range):\n # GH#16804\n data = range_to_ndarray(data)\n copy = False\n\n if not is_list_like(data):\n if index is None:\n raise ValueError("index must be specified when data is not list-like")\n if isinstance(data, str) and using_string_dtype() and original_dtype is None:\n from pandas.core.arrays.string_ import StringDtype\n\n dtype = StringDtype(na_value=np.nan)\n data = construct_1d_arraylike_from_scalar(data, len(index), dtype)\n\n return data\n\n elif isinstance(data, ABCExtensionArray):\n # it is already ensured above this is not a NumpyExtensionArray\n # Until GH#49309 is fixed this check needs to come before the\n # ExtensionDtype check\n if dtype is not None:\n subarr = data.astype(dtype, copy=copy)\n elif copy:\n subarr = data.copy()\n else:\n subarr = data\n\n elif isinstance(dtype, ExtensionDtype):\n # create an extension array from its dtype\n _sanitize_non_ordered(data)\n cls = dtype.construct_array_type()\n if not hasattr(data, "__array__"):\n data = list(data)\n subarr = cls._from_sequence(data, dtype=dtype, copy=copy)\n\n # GH#846\n elif isinstance(data, np.ndarray):\n if isinstance(data, np.matrix):\n data = data.A\n\n if dtype is None:\n subarr = data\n if 
data.dtype == object:\n subarr = maybe_infer_to_datetimelike(data)\n if object_index and using_string_dtype() and is_string_dtype(subarr):\n # Avoid inference when string option is set\n subarr = data\n elif data.dtype.kind == "U" and using_string_dtype():\n from pandas.core.arrays.string_ import StringDtype\n\n dtype = StringDtype(na_value=np.nan)\n subarr = dtype.construct_array_type()._from_sequence(data, dtype=dtype)\n\n if (\n subarr is data\n or (subarr.dtype == "str" and subarr.dtype.storage == "python") # type: ignore[union-attr]\n ) and copy:\n subarr = subarr.copy()\n\n else:\n # we will try to copy by-definition here\n subarr = _try_cast(data, dtype, copy)\n\n elif hasattr(data, "__array__"):\n # e.g. dask array GH#38645\n if not copy:\n data = np.asarray(data)\n else:\n data = np.array(data, copy=copy)\n return sanitize_array(\n data,\n index=index,\n dtype=dtype,\n copy=False,\n allow_2d=allow_2d,\n )\n\n else:\n _sanitize_non_ordered(data)\n # materialize e.g. generators, convert e.g. tuples, abc.ValueView\n data = list(data)\n\n if len(data) == 0 and dtype is None:\n # We default to float64, matching numpy\n subarr = np.array([], dtype=np.float64)\n\n elif dtype is not None:\n subarr = _try_cast(data, dtype, copy)\n\n else:\n subarr = maybe_convert_platform(data)\n if subarr.dtype == object:\n subarr = cast(np.ndarray, subarr)\n subarr = maybe_infer_to_datetimelike(subarr)\n\n subarr = _sanitize_ndim(subarr, data, dtype, index, allow_2d=allow_2d)\n\n if isinstance(subarr, np.ndarray):\n # at this point we should have dtype be None or subarr.dtype == dtype\n dtype = cast(np.dtype, dtype)\n subarr = _sanitize_str_dtypes(subarr, data, dtype, copy)\n\n return subarr\n\n\ndef range_to_ndarray(rng: range) -> np.ndarray:\n """\n Cast a range object to ndarray.\n """\n # GH#30171 perf avoid realizing range as a list in np.array\n try:\n arr = np.arange(rng.start, rng.stop, rng.step, dtype="int64")\n except OverflowError:\n # GH#30173 handling for ranges that overflow int64\n if (rng.start >= 0 and rng.step > 0) or (rng.step < 0 <= rng.stop):\n try:\n arr = np.arange(rng.start, rng.stop, rng.step, dtype="uint64")\n except OverflowError:\n arr = construct_1d_object_array_from_listlike(list(rng))\n else:\n arr = construct_1d_object_array_from_listlike(list(rng))\n return arr\n\n\ndef _sanitize_non_ordered(data) -> None:\n """\n Raise only for unordered sets, e.g., not for dict_keys\n """\n if isinstance(data, (set, frozenset)):\n raise TypeError(f"'{type(data).__name__}' type is unordered")\n\n\ndef _sanitize_ndim(\n result: ArrayLike,\n data,\n dtype: DtypeObj | None,\n index: Index | None,\n *,\n allow_2d: bool = False,\n) -> ArrayLike:\n """\n Ensure we have a 1-dimensional result array.\n """\n if getattr(result, "ndim", 0) == 0:\n raise ValueError("result should be arraylike with ndim > 0")\n\n if result.ndim == 1:\n # the result that we want\n result = _maybe_repeat(result, index)\n\n elif result.ndim > 1:\n if isinstance(data, np.ndarray):\n if allow_2d:\n return result\n raise ValueError(\n f"Data must be 1-dimensional, got ndarray of shape {data.shape} instead"\n )\n if is_object_dtype(dtype) and isinstance(dtype, ExtensionDtype):\n # i.e. 
NumpyEADtype("O")\n\n result = com.asarray_tuplesafe(data, dtype=np.dtype("object"))\n cls = dtype.construct_array_type()\n result = cls._from_sequence(result, dtype=dtype)\n else:\n # error: Argument "dtype" to "asarray_tuplesafe" has incompatible type\n # "Union[dtype[Any], ExtensionDtype, None]"; expected "Union[str,\n # dtype[Any], None]"\n result = com.asarray_tuplesafe(data, dtype=dtype) # type: ignore[arg-type]\n return result\n\n\ndef _sanitize_str_dtypes(\n result: np.ndarray, data, dtype: np.dtype | None, copy: bool\n) -> np.ndarray:\n """\n Ensure we have a dtype that is supported by pandas.\n """\n\n # This is to prevent mixed-type Series getting all casted to\n # NumPy string type, e.g. NaN --> '-1#IND'.\n if issubclass(result.dtype.type, str):\n # GH#16605\n # If not empty convert the data to dtype\n # GH#19853: If data is a scalar, result has already the result\n if not lib.is_scalar(data):\n if not np.all(isna(data)):\n data = np.asarray(data, dtype=dtype)\n if not copy:\n result = np.asarray(data, dtype=object)\n else:\n result = np.array(data, dtype=object, copy=copy)\n return result\n\n\ndef _maybe_repeat(arr: ArrayLike, index: Index | None) -> ArrayLike:\n """\n If we have a length-1 array and an index describing how long we expect\n the result to be, repeat the array.\n """\n if index is not None:\n if 1 == len(arr) != len(index):\n arr = arr.repeat(len(index))\n return arr\n\n\ndef _try_cast(\n arr: list | np.ndarray,\n dtype: np.dtype,\n copy: bool,\n) -> ArrayLike:\n """\n Convert input to numpy ndarray and optionally cast to a given dtype.\n\n Parameters\n ----------\n arr : ndarray or list\n Excludes: ExtensionArray, Series, Index.\n dtype : np.dtype\n copy : bool\n If False, don't copy the data if not needed.\n\n Returns\n -------\n np.ndarray or ExtensionArray\n """\n is_ndarray = isinstance(arr, np.ndarray)\n\n if dtype == object:\n if not is_ndarray:\n subarr = construct_1d_object_array_from_listlike(arr)\n return subarr\n return ensure_wrapped_if_datetimelike(arr).astype(dtype, copy=copy)\n\n elif dtype.kind == "U":\n # TODO: test cases with arr.dtype.kind in "mM"\n if is_ndarray:\n arr = cast(np.ndarray, arr)\n shape = arr.shape\n if arr.ndim > 1:\n arr = arr.ravel()\n else:\n shape = (len(arr),)\n return lib.ensure_string_array(arr, convert_na_value=False, copy=copy).reshape(\n shape\n )\n\n elif dtype.kind in "mM":\n return maybe_cast_to_datetime(arr, dtype)\n\n # GH#15832: Check if we are requesting a numeric dtype and\n # that we can convert the data to the requested dtype.\n elif dtype.kind in "iu":\n # this will raise if we have e.g. floats\n\n subarr = maybe_cast_to_integer_array(arr, dtype)\n elif not copy:\n subarr = np.asarray(arr, dtype=dtype)\n else:\n subarr = np.array(arr, dtype=dtype, copy=copy)\n\n return subarr\n
.venv\Lib\site-packages\pandas\core\construction.py
construction.py
Python
26,384
0.95
0.152253
0.078752
awesome-app
702
2024-01-24T17:16:09.404980
MIT
false
04eea841f334f0596fde9a7ecc27344b
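Editorial note, not a row of the dataset: a minimal usage sketch for the extract_array helper documented in the construction.py record above. It assumes a recent pandas installation; pandas.core.construction is private API, so the import path may change between versions.

import pandas as pd
from pandas.core.construction import extract_array

ser = pd.Series([1, 2, 3])
# Series/Index are unboxed to their underlying array; extract_numpy=True
# additionally unwraps a NumpyExtensionArray down to a plain ndarray.
print(extract_array(ser, extract_numpy=True))      # [1 2 3]

rng = pd.RangeIndex(5)
# extract_range=True materializes a RangeIndex into an int64 ndarray;
# without it the RangeIndex is returned unchanged.
print(extract_array(rng, extract_range=True))      # [0 1 2 3 4]

# Non-pandas inputs (lists, ndarrays, DataFrames) pass through unchanged.
print(extract_array([1, 2, 3]))                    # [1, 2, 3]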
from __future__ import annotations\n\nfrom typing import TYPE_CHECKING\nimport weakref\n\nif TYPE_CHECKING:\n from pandas.core.generic import NDFrame\n\n\nclass Flags:\n """\n Flags that apply to pandas objects.\n\n Parameters\n ----------\n obj : Series or DataFrame\n The object these flags are associated with.\n allows_duplicate_labels : bool, default True\n Whether to allow duplicate labels in this object. By default,\n duplicate labels are permitted. Setting this to ``False`` will\n cause an :class:`errors.DuplicateLabelError` to be raised when\n `index` (or columns for DataFrame) is not unique, or any\n subsequent operation on introduces duplicates.\n See :ref:`duplicates.disallow` for more.\n\n .. warning::\n\n This is an experimental feature. Currently, many methods fail to\n propagate the ``allows_duplicate_labels`` value. In future versions\n it is expected that every method taking or returning one or more\n DataFrame or Series objects will propagate ``allows_duplicate_labels``.\n\n Examples\n --------\n Attributes can be set in two ways:\n\n >>> df = pd.DataFrame()\n >>> df.flags\n <Flags(allows_duplicate_labels=True)>\n >>> df.flags.allows_duplicate_labels = False\n >>> df.flags\n <Flags(allows_duplicate_labels=False)>\n\n >>> df.flags['allows_duplicate_labels'] = True\n >>> df.flags\n <Flags(allows_duplicate_labels=True)>\n """\n\n _keys: set[str] = {"allows_duplicate_labels"}\n\n def __init__(self, obj: NDFrame, *, allows_duplicate_labels: bool) -> None:\n self._allows_duplicate_labels = allows_duplicate_labels\n self._obj = weakref.ref(obj)\n\n @property\n def allows_duplicate_labels(self) -> bool:\n """\n Whether this object allows duplicate labels.\n\n Setting ``allows_duplicate_labels=False`` ensures that the\n index (and columns of a DataFrame) are unique. Most methods\n that accept and return a Series or DataFrame will propagate\n the value of ``allows_duplicate_labels``.\n\n See :ref:`duplicates` for more.\n\n See Also\n --------\n DataFrame.attrs : Set global metadata on this object.\n DataFrame.set_flags : Set global flags on this object.\n\n Examples\n --------\n >>> df = pd.DataFrame({"A": [1, 2]}, index=['a', 'a'])\n >>> df.flags.allows_duplicate_labels\n True\n >>> df.flags.allows_duplicate_labels = False\n Traceback (most recent call last):\n ...\n pandas.errors.DuplicateLabelError: Index has duplicates.\n positions\n label\n a [0, 1]\n """\n return self._allows_duplicate_labels\n\n @allows_duplicate_labels.setter\n def allows_duplicate_labels(self, value: bool) -> None:\n value = bool(value)\n obj = self._obj()\n if obj is None:\n raise ValueError("This flag's object has been deleted.")\n\n if not value:\n for ax in obj.axes:\n ax._maybe_check_unique()\n\n self._allows_duplicate_labels = value\n\n def __getitem__(self, key: str):\n if key not in self._keys:\n raise KeyError(key)\n\n return getattr(self, key)\n\n def __setitem__(self, key: str, value) -> None:\n if key not in self._keys:\n raise ValueError(f"Unknown flag {key}. Must be one of {self._keys}")\n setattr(self, key, value)\n\n def __repr__(self) -> str:\n return f"<Flags(allows_duplicate_labels={self.allows_duplicate_labels})>"\n\n def __eq__(self, other) -> bool:\n if isinstance(other, type(self)):\n return self.allows_duplicate_labels == other.allows_duplicate_labels\n return False\n
.venv\Lib\site-packages\pandas\core\flags.py
flags.py
Python
3,763
0.85
0.162393
0
awesome-app
389
2024-07-21T12:49:54.781032
BSD-3-Clause
false
b14907b803e437dee54c4706c0bb479d
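Editorial note, not a row of the dataset: a short sketch showing how the Flags container defined in the flags.py record above is normally reached through the public set_flags / .flags API. It assumes a pandas version that supports allows_duplicate_labels (introduced in pandas 1.2).

import pandas as pd

df = pd.DataFrame({"A": [1, 2]}, index=["a", "b"])
flagged = df.set_flags(allows_duplicate_labels=False)
print(flagged.flags)                              # <Flags(allows_duplicate_labels=False)>
print(flagged.flags["allows_duplicate_labels"])   # False

# With duplicate index labels present, disallowing them raises DuplicateLabelError.
try:
    pd.DataFrame({"A": [1, 2]}, index=["a", "a"]).set_flags(
        allows_duplicate_labels=False
    )
except pd.errors.DuplicateLabelError as err:
    print("rejected:", err)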
from __future__ import annotations\n\nfrom contextlib import suppress\nimport sys\nfrom typing import (\n TYPE_CHECKING,\n Any,\n TypeVar,\n cast,\n final,\n)\nimport warnings\n\nimport numpy as np\n\nfrom pandas._config import (\n using_copy_on_write,\n warn_copy_on_write,\n)\n\nfrom pandas._libs.indexing import NDFrameIndexerBase\nfrom pandas._libs.lib import item_from_zerodim\nfrom pandas.compat import PYPY\nfrom pandas.errors import (\n AbstractMethodError,\n ChainedAssignmentError,\n IndexingError,\n InvalidIndexError,\n LossySetitemError,\n _chained_assignment_msg,\n _chained_assignment_warning_msg,\n _check_cacher,\n)\nfrom pandas.util._decorators import doc\nfrom pandas.util._exceptions import find_stack_level\n\nfrom pandas.core.dtypes.cast import (\n can_hold_element,\n maybe_promote,\n)\nfrom pandas.core.dtypes.common import (\n is_array_like,\n is_bool_dtype,\n is_hashable,\n is_integer,\n is_iterator,\n is_list_like,\n is_numeric_dtype,\n is_object_dtype,\n is_scalar,\n is_sequence,\n)\nfrom pandas.core.dtypes.concat import concat_compat\nfrom pandas.core.dtypes.dtypes import ExtensionDtype\nfrom pandas.core.dtypes.generic import (\n ABCDataFrame,\n ABCSeries,\n)\nfrom pandas.core.dtypes.missing import (\n construct_1d_array_from_inferred_fill_value,\n infer_fill_value,\n is_valid_na_for_dtype,\n isna,\n na_value_for_dtype,\n)\n\nfrom pandas.core import algorithms as algos\nimport pandas.core.common as com\nfrom pandas.core.construction import (\n array as pd_array,\n extract_array,\n)\nfrom pandas.core.indexers import (\n check_array_indexer,\n is_list_like_indexer,\n is_scalar_indexer,\n length_of_indexer,\n)\nfrom pandas.core.indexes.api import (\n Index,\n MultiIndex,\n)\n\nif TYPE_CHECKING:\n from collections.abc import (\n Hashable,\n Sequence,\n )\n\n from pandas._typing import (\n Axis,\n AxisInt,\n Self,\n npt,\n )\n\n from pandas import (\n DataFrame,\n Series,\n )\n\nT = TypeVar("T")\n# "null slice"\n_NS = slice(None, None)\n_one_ellipsis_message = "indexer may only contain one '...' entry"\n\n\n# the public IndexSlicerMaker\nclass _IndexSlice:\n """\n Create an object to more easily perform multi-index slicing.\n\n See Also\n --------\n MultiIndex.remove_unused_levels : New MultiIndex with no unused levels.\n\n Notes\n -----\n See :ref:`Defined Levels <advanced.shown_levels>`\n for further info on slicing a MultiIndex.\n\n Examples\n --------\n >>> midx = pd.MultiIndex.from_product([['A0','A1'], ['B0','B1','B2','B3']])\n >>> columns = ['foo', 'bar']\n >>> dfmi = pd.DataFrame(np.arange(16).reshape((len(midx), len(columns))),\n ... index=midx, columns=columns)\n\n Using the default slice command:\n\n >>> dfmi.loc[(slice(None), slice('B0', 'B1')), :]\n foo bar\n A0 B0 0 1\n B1 2 3\n A1 B0 8 9\n B1 10 11\n\n Using the IndexSlice class for a more intuitive command:\n\n >>> idx = pd.IndexSlice\n >>> dfmi.loc[idx[:, 'B0':'B1'], :]\n foo bar\n A0 B0 0 1\n B1 2 3\n A1 B0 8 9\n B1 10 11\n """\n\n def __getitem__(self, arg):\n return arg\n\n\nIndexSlice = _IndexSlice()\n\n\nclass IndexingMixin:\n """\n Mixin for adding .loc/.iloc/.at/.iat to Dataframes and Series.\n """\n\n @property\n def iloc(self) -> _iLocIndexer:\n """\n Purely integer-location based indexing for selection by position.\n\n .. deprecated:: 2.2.0\n\n Returning a tuple from a callable is deprecated.\n\n ``.iloc[]`` is primarily integer position based (from ``0`` to\n ``length-1`` of the axis), but may also be used with a boolean\n array.\n\n Allowed inputs are:\n\n - An integer, e.g. 
``5``.\n - A list or array of integers, e.g. ``[4, 3, 0]``.\n - A slice object with ints, e.g. ``1:7``.\n - A boolean array.\n - A ``callable`` function with one argument (the calling Series or\n DataFrame) and that returns valid output for indexing (one of the above).\n This is useful in method chains, when you don't have a reference to the\n calling object, but would like to base your selection on\n some value.\n - A tuple of row and column indexes. The tuple elements consist of one of the\n above inputs, e.g. ``(0, 1)``.\n\n ``.iloc`` will raise ``IndexError`` if a requested indexer is\n out-of-bounds, except *slice* indexers which allow out-of-bounds\n indexing (this conforms with python/numpy *slice* semantics).\n\n See more at :ref:`Selection by Position <indexing.integer>`.\n\n See Also\n --------\n DataFrame.iat : Fast integer location scalar accessor.\n DataFrame.loc : Purely label-location based indexer for selection by label.\n Series.iloc : Purely integer-location based indexing for\n selection by position.\n\n Examples\n --------\n >>> mydict = [{'a': 1, 'b': 2, 'c': 3, 'd': 4},\n ... {'a': 100, 'b': 200, 'c': 300, 'd': 400},\n ... {'a': 1000, 'b': 2000, 'c': 3000, 'd': 4000}]\n >>> df = pd.DataFrame(mydict)\n >>> df\n a b c d\n 0 1 2 3 4\n 1 100 200 300 400\n 2 1000 2000 3000 4000\n\n **Indexing just the rows**\n\n With a scalar integer.\n\n >>> type(df.iloc[0])\n <class 'pandas.core.series.Series'>\n >>> df.iloc[0]\n a 1\n b 2\n c 3\n d 4\n Name: 0, dtype: int64\n\n With a list of integers.\n\n >>> df.iloc[[0]]\n a b c d\n 0 1 2 3 4\n >>> type(df.iloc[[0]])\n <class 'pandas.core.frame.DataFrame'>\n\n >>> df.iloc[[0, 1]]\n a b c d\n 0 1 2 3 4\n 1 100 200 300 400\n\n With a `slice` object.\n\n >>> df.iloc[:3]\n a b c d\n 0 1 2 3 4\n 1 100 200 300 400\n 2 1000 2000 3000 4000\n\n With a boolean mask the same length as the index.\n\n >>> df.iloc[[True, False, True]]\n a b c d\n 0 1 2 3 4\n 2 1000 2000 3000 4000\n\n With a callable, useful in method chains. The `x` passed\n to the ``lambda`` is the DataFrame being sliced. This selects\n the rows whose index label even.\n\n >>> df.iloc[lambda x: x.index % 2 == 0]\n a b c d\n 0 1 2 3 4\n 2 1000 2000 3000 4000\n\n **Indexing both axes**\n\n You can mix the indexer types for the index and columns. Use ``:`` to\n select the entire axis.\n\n With scalar integers.\n\n >>> df.iloc[0, 1]\n 2\n\n With lists of integers.\n\n >>> df.iloc[[0, 2], [1, 3]]\n b d\n 0 2 4\n 2 2000 4000\n\n With `slice` objects.\n\n >>> df.iloc[1:3, 0:3]\n a b c\n 1 100 200 300\n 2 1000 2000 3000\n\n With a boolean array whose length matches the columns.\n\n >>> df.iloc[:, [True, False, True, False]]\n a c\n 0 1 3\n 1 100 300\n 2 1000 3000\n\n With a callable function that expects the Series or DataFrame.\n\n >>> df.iloc[:, lambda df: [0, 2]]\n a c\n 0 1 3\n 1 100 300\n 2 1000 3000\n """\n return _iLocIndexer("iloc", self)\n\n @property\n def loc(self) -> _LocIndexer:\n """\n Access a group of rows and columns by label(s) or a boolean array.\n\n ``.loc[]`` is primarily label based, but may also be used with a\n boolean array.\n\n Allowed inputs are:\n\n - A single label, e.g. ``5`` or ``'a'``, (note that ``5`` is\n interpreted as a *label* of the index, and **never** as an\n integer position along the index).\n - A list or array of labels, e.g. ``['a', 'b', 'c']``.\n - A slice object with labels, e.g. ``'a':'f'``.\n\n .. 
warning:: Note that contrary to usual python slices, **both** the\n start and the stop are included\n\n - A boolean array of the same length as the axis being sliced,\n e.g. ``[True, False, True]``.\n - An alignable boolean Series. The index of the key will be aligned before\n masking.\n - An alignable Index. The Index of the returned selection will be the input.\n - A ``callable`` function with one argument (the calling Series or\n DataFrame) and that returns valid output for indexing (one of the above)\n\n See more at :ref:`Selection by Label <indexing.label>`.\n\n Raises\n ------\n KeyError\n If any items are not found.\n IndexingError\n If an indexed key is passed and its index is unalignable to the frame index.\n\n See Also\n --------\n DataFrame.at : Access a single value for a row/column label pair.\n DataFrame.iloc : Access group of rows and columns by integer position(s).\n DataFrame.xs : Returns a cross-section (row(s) or column(s)) from the\n Series/DataFrame.\n Series.loc : Access group of values using labels.\n\n Examples\n --------\n **Getting values**\n\n >>> df = pd.DataFrame([[1, 2], [4, 5], [7, 8]],\n ... index=['cobra', 'viper', 'sidewinder'],\n ... columns=['max_speed', 'shield'])\n >>> df\n max_speed shield\n cobra 1 2\n viper 4 5\n sidewinder 7 8\n\n Single label. Note this returns the row as a Series.\n\n >>> df.loc['viper']\n max_speed 4\n shield 5\n Name: viper, dtype: int64\n\n List of labels. Note using ``[[]]`` returns a DataFrame.\n\n >>> df.loc[['viper', 'sidewinder']]\n max_speed shield\n viper 4 5\n sidewinder 7 8\n\n Single label for row and column\n\n >>> df.loc['cobra', 'shield']\n 2\n\n Slice with labels for row and single label for column. As mentioned\n above, note that both the start and stop of the slice are included.\n\n >>> df.loc['cobra':'viper', 'max_speed']\n cobra 1\n viper 4\n Name: max_speed, dtype: int64\n\n Boolean list with the same length as the row axis\n\n >>> df.loc[[False, False, True]]\n max_speed shield\n sidewinder 7 8\n\n Alignable boolean Series:\n\n >>> df.loc[pd.Series([False, True, False],\n ... index=['viper', 'sidewinder', 'cobra'])]\n max_speed shield\n sidewinder 7 8\n\n Index (same behavior as ``df.reindex``)\n\n >>> df.loc[pd.Index(["cobra", "viper"], name="foo")]\n max_speed shield\n foo\n cobra 1 2\n viper 4 5\n\n Conditional that returns a boolean Series\n\n >>> df.loc[df['shield'] > 6]\n max_speed shield\n sidewinder 7 8\n\n Conditional that returns a boolean Series with column labels specified\n\n >>> df.loc[df['shield'] > 6, ['max_speed']]\n max_speed\n sidewinder 7\n\n Multiple conditional using ``&`` that returns a boolean Series\n\n >>> df.loc[(df['max_speed'] > 1) & (df['shield'] < 8)]\n max_speed shield\n viper 4 5\n\n Multiple conditional using ``|`` that returns a boolean Series\n\n >>> df.loc[(df['max_speed'] > 4) | (df['shield'] < 5)]\n max_speed shield\n cobra 1 2\n sidewinder 7 8\n\n Please ensure that each condition is wrapped in parentheses ``()``.\n See the :ref:`user guide<indexing.boolean>`\n for more details and explanations of Boolean indexing.\n\n .. 
note::\n If you find yourself using 3 or more conditionals in ``.loc[]``,\n consider using :ref:`advanced indexing<advanced.advanced_hierarchical>`.\n\n See below for using ``.loc[]`` on MultiIndex DataFrames.\n\n Callable that returns a boolean Series\n\n >>> df.loc[lambda df: df['shield'] == 8]\n max_speed shield\n sidewinder 7 8\n\n **Setting values**\n\n Set value for all items matching the list of labels\n\n >>> df.loc[['viper', 'sidewinder'], ['shield']] = 50\n >>> df\n max_speed shield\n cobra 1 2\n viper 4 50\n sidewinder 7 50\n\n Set value for an entire row\n\n >>> df.loc['cobra'] = 10\n >>> df\n max_speed shield\n cobra 10 10\n viper 4 50\n sidewinder 7 50\n\n Set value for an entire column\n\n >>> df.loc[:, 'max_speed'] = 30\n >>> df\n max_speed shield\n cobra 30 10\n viper 30 50\n sidewinder 30 50\n\n Set value for rows matching callable condition\n\n >>> df.loc[df['shield'] > 35] = 0\n >>> df\n max_speed shield\n cobra 30 10\n viper 0 0\n sidewinder 0 0\n\n Add value matching location\n\n >>> df.loc["viper", "shield"] += 5\n >>> df\n max_speed shield\n cobra 30 10\n viper 0 5\n sidewinder 0 0\n\n Setting using a ``Series`` or a ``DataFrame`` sets the values matching the\n index labels, not the index positions.\n\n >>> shuffled_df = df.loc[["viper", "cobra", "sidewinder"]]\n >>> df.loc[:] += shuffled_df\n >>> df\n max_speed shield\n cobra 60 20\n viper 0 10\n sidewinder 0 0\n\n **Getting values on a DataFrame with an index that has integer labels**\n\n Another example using integers for the index\n\n >>> df = pd.DataFrame([[1, 2], [4, 5], [7, 8]],\n ... index=[7, 8, 9], columns=['max_speed', 'shield'])\n >>> df\n max_speed shield\n 7 1 2\n 8 4 5\n 9 7 8\n\n Slice with integer labels for rows. As mentioned above, note that both\n the start and stop of the slice are included.\n\n >>> df.loc[7:9]\n max_speed shield\n 7 1 2\n 8 4 5\n 9 7 8\n\n **Getting values with a MultiIndex**\n\n A number of examples using a DataFrame with a MultiIndex\n\n >>> tuples = [\n ... ('cobra', 'mark i'), ('cobra', 'mark ii'),\n ... ('sidewinder', 'mark i'), ('sidewinder', 'mark ii'),\n ... ('viper', 'mark ii'), ('viper', 'mark iii')\n ... ]\n >>> index = pd.MultiIndex.from_tuples(tuples)\n >>> values = [[12, 2], [0, 4], [10, 20],\n ... [1, 4], [7, 1], [16, 36]]\n >>> df = pd.DataFrame(values, columns=['max_speed', 'shield'], index=index)\n >>> df\n max_speed shield\n cobra mark i 12 2\n mark ii 0 4\n sidewinder mark i 10 20\n mark ii 1 4\n viper mark ii 7 1\n mark iii 16 36\n\n Single label. Note this returns a DataFrame with a single index.\n\n >>> df.loc['cobra']\n max_speed shield\n mark i 12 2\n mark ii 0 4\n\n Single index tuple. Note this returns a Series.\n\n >>> df.loc[('cobra', 'mark ii')]\n max_speed 0\n shield 4\n Name: (cobra, mark ii), dtype: int64\n\n Single label for row and column. Similar to passing in a tuple, this\n returns a Series.\n\n >>> df.loc['cobra', 'mark i']\n max_speed 12\n shield 2\n Name: (cobra, mark i), dtype: int64\n\n Single tuple. 
Note using ``[[]]`` returns a DataFrame.\n\n >>> df.loc[[('cobra', 'mark ii')]]\n max_speed shield\n cobra mark ii 0 4\n\n Single tuple for the index with a single label for the column\n\n >>> df.loc[('cobra', 'mark i'), 'shield']\n 2\n\n Slice from index tuple to single label\n\n >>> df.loc[('cobra', 'mark i'):'viper']\n max_speed shield\n cobra mark i 12 2\n mark ii 0 4\n sidewinder mark i 10 20\n mark ii 1 4\n viper mark ii 7 1\n mark iii 16 36\n\n Slice from index tuple to index tuple\n\n >>> df.loc[('cobra', 'mark i'):('viper', 'mark ii')]\n max_speed shield\n cobra mark i 12 2\n mark ii 0 4\n sidewinder mark i 10 20\n mark ii 1 4\n viper mark ii 7 1\n\n Please see the :ref:`user guide<advanced.advanced_hierarchical>`\n for more details and explanations of advanced indexing.\n """\n return _LocIndexer("loc", self)\n\n @property\n def at(self) -> _AtIndexer:\n """\n Access a single value for a row/column label pair.\n\n Similar to ``loc``, in that both provide label-based lookups. Use\n ``at`` if you only need to get or set a single value in a DataFrame\n or Series.\n\n Raises\n ------\n KeyError\n If getting a value and 'label' does not exist in a DataFrame or Series.\n\n ValueError\n If row/column label pair is not a tuple or if any label\n from the pair is not a scalar for DataFrame.\n If label is list-like (*excluding* NamedTuple) for Series.\n\n See Also\n --------\n DataFrame.at : Access a single value for a row/column pair by label.\n DataFrame.iat : Access a single value for a row/column pair by integer\n position.\n DataFrame.loc : Access a group of rows and columns by label(s).\n DataFrame.iloc : Access a group of rows and columns by integer\n position(s).\n Series.at : Access a single value by label.\n Series.iat : Access a single value by integer position.\n Series.loc : Access a group of rows by label(s).\n Series.iloc : Access a group of rows by integer position(s).\n\n Notes\n -----\n See :ref:`Fast scalar value getting and setting <indexing.basics.get_value>`\n for more details.\n\n Examples\n --------\n >>> df = pd.DataFrame([[0, 2, 3], [0, 4, 1], [10, 20, 30]],\n ... index=[4, 5, 6], columns=['A', 'B', 'C'])\n >>> df\n A B C\n 4 0 2 3\n 5 0 4 1\n 6 10 20 30\n\n Get value at specified row/column pair\n\n >>> df.at[4, 'B']\n 2\n\n Set value at specified row/column pair\n\n >>> df.at[4, 'B'] = 10\n >>> df.at[4, 'B']\n 10\n\n Get value within a Series\n\n >>> df.loc[5].at['B']\n 4\n """\n return _AtIndexer("at", self)\n\n @property\n def iat(self) -> _iAtIndexer:\n """\n Access a single value for a row/column pair by integer position.\n\n Similar to ``iloc``, in that both provide integer-based lookups. Use\n ``iat`` if you only need to get or set a single value in a DataFrame\n or Series.\n\n Raises\n ------\n IndexError\n When integer position is out of bounds.\n\n See Also\n --------\n DataFrame.at : Access a single value for a row/column label pair.\n DataFrame.loc : Access a group of rows and columns by label(s).\n DataFrame.iloc : Access a group of rows and columns by integer position(s).\n\n Examples\n --------\n >>> df = pd.DataFrame([[0, 2, 3], [0, 4, 1], [10, 20, 30]],\n ... 
columns=['A', 'B', 'C'])\n >>> df\n A B C\n 0 0 2 3\n 1 0 4 1\n 2 10 20 30\n\n Get value at specified row/column pair\n\n >>> df.iat[1, 2]\n 1\n\n Set value at specified row/column pair\n\n >>> df.iat[1, 2] = 10\n >>> df.iat[1, 2]\n 10\n\n Get value within a series\n\n >>> df.loc[0].iat[1]\n 2\n """\n return _iAtIndexer("iat", self)\n\n\nclass _LocationIndexer(NDFrameIndexerBase):\n _valid_types: str\n axis: AxisInt | None = None\n\n # sub-classes need to set _takeable\n _takeable: bool\n\n @final\n def __call__(self, axis: Axis | None = None) -> Self:\n # we need to return a copy of ourselves\n new_self = type(self)(self.name, self.obj)\n\n if axis is not None:\n axis_int_none = self.obj._get_axis_number(axis)\n else:\n axis_int_none = axis\n new_self.axis = axis_int_none\n return new_self\n\n def _get_setitem_indexer(self, key):\n """\n Convert a potentially-label-based key into a positional indexer.\n """\n if self.name == "loc":\n # always holds here bc iloc overrides _get_setitem_indexer\n self._ensure_listlike_indexer(key)\n\n if isinstance(key, tuple):\n for x in key:\n check_dict_or_set_indexers(x)\n\n if self.axis is not None:\n key = _tupleize_axis_indexer(self.ndim, self.axis, key)\n\n ax = self.obj._get_axis(0)\n\n if (\n isinstance(ax, MultiIndex)\n and self.name != "iloc"\n and is_hashable(key)\n and not isinstance(key, slice)\n ):\n with suppress(KeyError, InvalidIndexError):\n # TypeError e.g. passed a bool\n return ax.get_loc(key)\n\n if isinstance(key, tuple):\n with suppress(IndexingError):\n # suppress "Too many indexers"\n return self._convert_tuple(key)\n\n if isinstance(key, range):\n # GH#45479 test_loc_setitem_range_key\n key = list(key)\n\n return self._convert_to_indexer(key, axis=0)\n\n @final\n def _maybe_mask_setitem_value(self, indexer, value):\n """\n If we have obj.iloc[mask] = series_or_frame and series_or_frame has the\n same length as obj, we treat this as obj.iloc[mask] = series_or_frame[mask],\n similar to Series.__setitem__.\n\n Note this is only for loc, not iloc.\n """\n\n if (\n isinstance(indexer, tuple)\n and len(indexer) == 2\n and isinstance(value, (ABCSeries, ABCDataFrame))\n ):\n pi, icols = indexer\n ndim = value.ndim\n if com.is_bool_indexer(pi) and len(value) == len(pi):\n newkey = pi.nonzero()[0]\n\n if is_scalar_indexer(icols, self.ndim - 1) and ndim == 1:\n # e.g. 
test_loc_setitem_boolean_mask_allfalse\n # test_loc_setitem_ndframe_values_alignment\n value = self.obj.iloc._align_series(indexer, value)\n indexer = (newkey, icols)\n\n elif (\n isinstance(icols, np.ndarray)\n and icols.dtype.kind == "i"\n and len(icols) == 1\n ):\n if ndim == 1:\n # We implicitly broadcast, though numpy does not, see\n # github.com/pandas-dev/pandas/pull/45501#discussion_r789071825\n # test_loc_setitem_ndframe_values_alignment\n value = self.obj.iloc._align_series(indexer, value)\n indexer = (newkey, icols)\n\n elif ndim == 2 and value.shape[1] == 1:\n # test_loc_setitem_ndframe_values_alignment\n value = self.obj.iloc._align_frame(indexer, value)\n indexer = (newkey, icols)\n elif com.is_bool_indexer(indexer):\n indexer = indexer.nonzero()[0]\n\n return indexer, value\n\n @final\n def _ensure_listlike_indexer(self, key, axis=None, value=None) -> None:\n """\n Ensure that a list-like of column labels are all present by adding them if\n they do not already exist.\n\n Parameters\n ----------\n key : list-like of column labels\n Target labels.\n axis : key axis if known\n """\n column_axis = 1\n\n # column only exists in 2-dimensional DataFrame\n if self.ndim != 2:\n return\n\n if isinstance(key, tuple) and len(key) > 1:\n # key may be a tuple if we are .loc\n # if length of key is > 1 set key to column part\n key = key[column_axis]\n axis = column_axis\n\n if (\n axis == column_axis\n and not isinstance(self.obj.columns, MultiIndex)\n and is_list_like_indexer(key)\n and not com.is_bool_indexer(key)\n and all(is_hashable(k) for k in key)\n ):\n # GH#38148\n keys = self.obj.columns.union(key, sort=False)\n diff = Index(key).difference(self.obj.columns, sort=False)\n\n if len(diff):\n # e.g. if we are doing df.loc[:, ["A", "B"]] = 7 and "B"\n # is a new column, add the new columns with dtype=np.void\n # so that later when we go through setitem_single_column\n # we will use isetitem. 
Without this, the reindex_axis\n # below would create float64 columns in this example, which\n # would successfully hold 7, so we would end up with the wrong\n # dtype.\n indexer = np.arange(len(keys), dtype=np.intp)\n indexer[len(self.obj.columns) :] = -1\n new_mgr = self.obj._mgr.reindex_indexer(\n keys, indexer=indexer, axis=0, only_slice=True, use_na_proxy=True\n )\n self.obj._mgr = new_mgr\n return\n\n self.obj._mgr = self.obj._mgr.reindex_axis(keys, axis=0, only_slice=True)\n\n @final\n def __setitem__(self, key, value) -> None:\n if not PYPY and using_copy_on_write():\n if sys.getrefcount(self.obj) <= 2:\n warnings.warn(\n _chained_assignment_msg, ChainedAssignmentError, stacklevel=2\n )\n elif not PYPY and not using_copy_on_write():\n ctr = sys.getrefcount(self.obj)\n ref_count = 2\n if not warn_copy_on_write() and _check_cacher(self.obj):\n # see https://github.com/pandas-dev/pandas/pull/56060#discussion_r1399245221\n ref_count += 1\n if ctr <= ref_count:\n warnings.warn(\n _chained_assignment_warning_msg, FutureWarning, stacklevel=2\n )\n\n check_dict_or_set_indexers(key)\n if isinstance(key, tuple):\n key = tuple(list(x) if is_iterator(x) else x for x in key)\n key = tuple(com.apply_if_callable(x, self.obj) for x in key)\n else:\n maybe_callable = com.apply_if_callable(key, self.obj)\n key = self._check_deprecated_callable_usage(key, maybe_callable)\n indexer = self._get_setitem_indexer(key)\n self._has_valid_setitem_indexer(key)\n\n iloc = self if self.name == "iloc" else self.obj.iloc\n iloc._setitem_with_indexer(indexer, value, self.name)\n\n def _validate_key(self, key, axis: AxisInt):\n """\n Ensure that key is valid for current indexer.\n\n Parameters\n ----------\n key : scalar, slice or list-like\n Key requested.\n axis : int\n Dimension on which the indexing is being made.\n\n Raises\n ------\n TypeError\n If the key (or some element of it) has wrong type.\n IndexError\n If the key (or some element of it) is out of bounds.\n KeyError\n If the key was not found.\n """\n raise AbstractMethodError(self)\n\n @final\n def _expand_ellipsis(self, tup: tuple) -> tuple:\n """\n If a tuple key includes an Ellipsis, replace it with an appropriate\n number of null slices.\n """\n if any(x is Ellipsis for x in tup):\n if tup.count(Ellipsis) > 1:\n raise IndexingError(_one_ellipsis_message)\n\n if len(tup) == self.ndim:\n # It is unambiguous what axis this Ellipsis is indexing,\n # treat as a single null slice.\n i = tup.index(Ellipsis)\n # FIXME: this assumes only one Ellipsis\n new_key = tup[:i] + (_NS,) + tup[i + 1 :]\n return new_key\n\n # TODO: other cases? 
only one test gets here, and that is covered\n # by _validate_key_length\n return tup\n\n @final\n def _validate_tuple_indexer(self, key: tuple) -> tuple:\n """\n Check the key for valid keys across my indexer.\n """\n key = self._validate_key_length(key)\n key = self._expand_ellipsis(key)\n for i, k in enumerate(key):\n try:\n self._validate_key(k, i)\n except ValueError as err:\n raise ValueError(\n "Location based indexing can only have "\n f"[{self._valid_types}] types"\n ) from err\n return key\n\n @final\n def _is_nested_tuple_indexer(self, tup: tuple) -> bool:\n """\n Returns\n -------\n bool\n """\n if any(isinstance(ax, MultiIndex) for ax in self.obj.axes):\n return any(is_nested_tuple(tup, ax) for ax in self.obj.axes)\n return False\n\n @final\n def _convert_tuple(self, key: tuple) -> tuple:\n # Note: we assume _tupleize_axis_indexer has been called, if necessary.\n self._validate_key_length(key)\n keyidx = [self._convert_to_indexer(k, axis=i) for i, k in enumerate(key)]\n return tuple(keyidx)\n\n @final\n def _validate_key_length(self, key: tuple) -> tuple:\n if len(key) > self.ndim:\n if key[0] is Ellipsis:\n # e.g. Series.iloc[..., 3] reduces to just Series.iloc[3]\n key = key[1:]\n if Ellipsis in key:\n raise IndexingError(_one_ellipsis_message)\n return self._validate_key_length(key)\n raise IndexingError("Too many indexers")\n return key\n\n @final\n def _getitem_tuple_same_dim(self, tup: tuple):\n """\n Index with indexers that should return an object of the same dimension\n as self.obj.\n\n This is only called after a failed call to _getitem_lowerdim.\n """\n retval = self.obj\n # Selecting columns before rows is significantly faster\n start_val = (self.ndim - len(tup)) + 1\n for i, key in enumerate(reversed(tup)):\n i = self.ndim - i - start_val\n if com.is_null_slice(key):\n continue\n\n retval = getattr(retval, self.name)._getitem_axis(key, axis=i)\n # We should never have retval.ndim < self.ndim, as that should\n # be handled by the _getitem_lowerdim call above.\n assert retval.ndim == self.ndim\n\n if retval is self.obj:\n # if all axes were a null slice (`df.loc[:, :]`), ensure we still\n # return a new object (https://github.com/pandas-dev/pandas/pull/49469)\n retval = retval.copy(deep=False)\n\n return retval\n\n @final\n def _getitem_lowerdim(self, tup: tuple):\n # we can directly get the axis result since the axis is specified\n if self.axis is not None:\n axis = self.obj._get_axis_number(self.axis)\n return self._getitem_axis(tup, axis=axis)\n\n # we may have a nested tuples indexer here\n if self._is_nested_tuple_indexer(tup):\n return self._getitem_nested_tuple(tup)\n\n # we maybe be using a tuple to represent multiple dimensions here\n ax0 = self.obj._get_axis(0)\n # ...but iloc should handle the tuple as simple integer-location\n # instead of checking it as multiindex representation (GH 13797)\n if (\n isinstance(ax0, MultiIndex)\n and self.name != "iloc"\n and not any(isinstance(x, slice) for x in tup)\n ):\n # Note: in all extant test cases, replacing the slice condition with\n # `all(is_hashable(x) or com.is_null_slice(x) for x in tup)`\n # is equivalent.\n # (see the other place where we call _handle_lowerdim_multi_index_axis0)\n with suppress(IndexingError):\n return cast(_LocIndexer, self)._handle_lowerdim_multi_index_axis0(tup)\n\n tup = self._validate_key_length(tup)\n\n for i, key in enumerate(tup):\n if is_label_like(key):\n # We don't need to check for tuples here because those are\n # caught by the _is_nested_tuple_indexer check above.\n section 
= self._getitem_axis(key, axis=i)\n\n # We should never have a scalar section here, because\n # _getitem_lowerdim is only called after a check for\n # is_scalar_access, which that would be.\n if section.ndim == self.ndim:\n # we're in the middle of slicing through a MultiIndex\n # revise the key wrt to `section` by inserting an _NS\n new_key = tup[:i] + (_NS,) + tup[i + 1 :]\n\n else:\n # Note: the section.ndim == self.ndim check above\n # rules out having DataFrame here, so we dont need to worry\n # about transposing.\n new_key = tup[:i] + tup[i + 1 :]\n\n if len(new_key) == 1:\n new_key = new_key[0]\n\n # Slices should return views, but calling iloc/loc with a null\n # slice returns a new object.\n if com.is_null_slice(new_key):\n return section\n # This is an elided recursive call to iloc/loc\n return getattr(section, self.name)[new_key]\n\n raise IndexingError("not applicable")\n\n @final\n def _getitem_nested_tuple(self, tup: tuple):\n # we have a nested tuple so have at least 1 multi-index level\n # we should be able to match up the dimensionality here\n\n def _contains_slice(x: object) -> bool:\n # Check if object is a slice or a tuple containing a slice\n if isinstance(x, tuple):\n return any(isinstance(v, slice) for v in x)\n elif isinstance(x, slice):\n return True\n return False\n\n for key in tup:\n check_dict_or_set_indexers(key)\n\n # we have too many indexers for our dim, but have at least 1\n # multi-index dimension, try to see if we have something like\n # a tuple passed to a series with a multi-index\n if len(tup) > self.ndim:\n if self.name != "loc":\n # This should never be reached, but let's be explicit about it\n raise ValueError("Too many indices") # pragma: no cover\n if all(\n (is_hashable(x) and not _contains_slice(x)) or com.is_null_slice(x)\n for x in tup\n ):\n # GH#10521 Series should reduce MultiIndex dimensions instead of\n # DataFrame, IndexingError is not raised when slice(None,None,None)\n # with one row.\n with suppress(IndexingError):\n return cast(_LocIndexer, self)._handle_lowerdim_multi_index_axis0(\n tup\n )\n elif isinstance(self.obj, ABCSeries) and any(\n isinstance(k, tuple) for k in tup\n ):\n # GH#35349 Raise if tuple in tuple for series\n # Do this after the all-hashable-or-null-slice check so that\n # we are only getting non-hashable tuples, in particular ones\n # that themselves contain a slice entry\n # See test_loc_series_getitem_too_many_dimensions\n raise IndexingError("Too many indexers")\n\n # this is a series with a multi-index specified a tuple of\n # selectors\n axis = self.axis or 0\n return self._getitem_axis(tup, axis=axis)\n\n # handle the multi-axis by taking sections and reducing\n # this is iterative\n obj = self.obj\n # GH#41369 Loop in reverse order ensures indexing along columns before rows\n # which selects only necessary blocks which avoids dtype conversion if possible\n axis = len(tup) - 1\n for key in tup[::-1]:\n if com.is_null_slice(key):\n axis -= 1\n continue\n\n obj = getattr(obj, self.name)._getitem_axis(key, axis=axis)\n axis -= 1\n\n # if we have a scalar, we are done\n if is_scalar(obj) or not hasattr(obj, "ndim"):\n break\n\n return obj\n\n def _convert_to_indexer(self, key, axis: AxisInt):\n raise AbstractMethodError(self)\n\n def _check_deprecated_callable_usage(self, key: Any, maybe_callable: T) -> T:\n # GH53533\n if self.name == "iloc" and callable(key) and isinstance(maybe_callable, tuple):\n warnings.warn(\n "Returning a tuple from a callable with iloc "\n "is deprecated and will be removed in a future 
version",\n FutureWarning,\n stacklevel=find_stack_level(),\n )\n return maybe_callable\n\n @final\n def __getitem__(self, key):\n check_dict_or_set_indexers(key)\n if type(key) is tuple:\n key = tuple(list(x) if is_iterator(x) else x for x in key)\n key = tuple(com.apply_if_callable(x, self.obj) for x in key)\n if self._is_scalar_access(key):\n return self.obj._get_value(*key, takeable=self._takeable)\n return self._getitem_tuple(key)\n else:\n # we by definition only have the 0th axis\n axis = self.axis or 0\n\n maybe_callable = com.apply_if_callable(key, self.obj)\n maybe_callable = self._check_deprecated_callable_usage(key, maybe_callable)\n return self._getitem_axis(maybe_callable, axis=axis)\n\n def _is_scalar_access(self, key: tuple):\n raise NotImplementedError()\n\n def _getitem_tuple(self, tup: tuple):\n raise AbstractMethodError(self)\n\n def _getitem_axis(self, key, axis: AxisInt):\n raise NotImplementedError()\n\n def _has_valid_setitem_indexer(self, indexer) -> bool:\n raise AbstractMethodError(self)\n\n @final\n def _getbool_axis(self, key, axis: AxisInt):\n # caller is responsible for ensuring non-None axis\n labels = self.obj._get_axis(axis)\n key = check_bool_indexer(labels, key)\n inds = key.nonzero()[0]\n return self.obj._take_with_is_copy(inds, axis=axis)\n\n\n@doc(IndexingMixin.loc)\nclass _LocIndexer(_LocationIndexer):\n _takeable: bool = False\n _valid_types = (\n "labels (MUST BE IN THE INDEX), slices of labels (BOTH "\n "endpoints included! Can be slices of integers if the "\n "index is integers), listlike of labels, boolean"\n )\n\n # -------------------------------------------------------------------\n # Key Checks\n\n @doc(_LocationIndexer._validate_key)\n def _validate_key(self, key, axis: Axis):\n # valid for a collection of labels (we check their presence later)\n # slice of labels (where start-end in labels)\n # slice of integers (only if in the labels)\n # boolean not in slice and with boolean index\n ax = self.obj._get_axis(axis)\n if isinstance(key, bool) and not (\n is_bool_dtype(ax.dtype)\n or ax.dtype.name == "boolean"\n or isinstance(ax, MultiIndex)\n and is_bool_dtype(ax.get_level_values(0).dtype)\n ):\n raise KeyError(\n f"{key}: boolean label can not be used without a boolean index"\n )\n\n if isinstance(key, slice) and (\n isinstance(key.start, bool) or isinstance(key.stop, bool)\n ):\n raise TypeError(f"{key}: boolean values can not be used in a slice")\n\n def _has_valid_setitem_indexer(self, indexer) -> bool:\n return True\n\n def _is_scalar_access(self, key: tuple) -> bool:\n """\n Returns\n -------\n bool\n """\n # this is a shortcut accessor to both .loc and .iloc\n # that provide the equivalent access of .at and .iat\n # a) avoid getting things via sections and (to minimize dtype changes)\n # b) provide a performant path\n if len(key) != self.ndim:\n return False\n\n for i, k in enumerate(key):\n if not is_scalar(k):\n return False\n\n ax = self.obj.axes[i]\n if isinstance(ax, MultiIndex):\n return False\n\n if isinstance(k, str) and ax._supports_partial_string_indexing:\n # partial string indexing, df.loc['2000', 'A']\n # should not be considered scalar\n return False\n\n if not ax._index_as_unique:\n return False\n\n return True\n\n # -------------------------------------------------------------------\n # MultiIndex Handling\n\n def _multi_take_opportunity(self, tup: tuple) -> bool:\n """\n Check whether there is the possibility to use ``_multi_take``.\n\n Currently the limit is that all axes being indexed, must be indexed with\n 
list-likes.\n\n Parameters\n ----------\n tup : tuple\n Tuple of indexers, one per axis.\n\n Returns\n -------\n bool\n Whether the current indexing,\n can be passed through `_multi_take`.\n """\n if not all(is_list_like_indexer(x) for x in tup):\n return False\n\n # just too complicated\n return not any(com.is_bool_indexer(x) for x in tup)\n\n def _multi_take(self, tup: tuple):\n """\n Create the indexers for the passed tuple of keys, and\n executes the take operation. This allows the take operation to be\n executed all at once, rather than once for each dimension.\n Improving efficiency.\n\n Parameters\n ----------\n tup : tuple\n Tuple of indexers, one per axis.\n\n Returns\n -------\n values: same type as the object being indexed\n """\n # GH 836\n d = {\n axis: self._get_listlike_indexer(key, axis)\n for (key, axis) in zip(tup, self.obj._AXIS_ORDERS)\n }\n return self.obj._reindex_with_indexers(d, copy=True, allow_dups=True)\n\n # -------------------------------------------------------------------\n\n def _getitem_iterable(self, key, axis: AxisInt):\n """\n Index current object with an iterable collection of keys.\n\n Parameters\n ----------\n key : iterable\n Targeted labels.\n axis : int\n Dimension on which the indexing is being made.\n\n Raises\n ------\n KeyError\n If no key was found. Will change in the future to raise if not all\n keys were found.\n\n Returns\n -------\n scalar, DataFrame, or Series: indexed value(s).\n """\n # we assume that not com.is_bool_indexer(key), as that is\n # handled before we get here.\n self._validate_key(key, axis)\n\n # A collection of keys\n keyarr, indexer = self._get_listlike_indexer(key, axis)\n return self.obj._reindex_with_indexers(\n {axis: [keyarr, indexer]}, copy=True, allow_dups=True\n )\n\n def _getitem_tuple(self, tup: tuple):\n with suppress(IndexingError):\n tup = self._expand_ellipsis(tup)\n return self._getitem_lowerdim(tup)\n\n # no multi-index, so validate all of the indexers\n tup = self._validate_tuple_indexer(tup)\n\n # ugly hack for GH #836\n if self._multi_take_opportunity(tup):\n return self._multi_take(tup)\n\n return self._getitem_tuple_same_dim(tup)\n\n def _get_label(self, label, axis: AxisInt):\n # GH#5567 this will fail if the label is not present in the axis.\n return self.obj.xs(label, axis=axis)\n\n def _handle_lowerdim_multi_index_axis0(self, tup: tuple):\n # we have an axis0 multi-index, handle or raise\n axis = self.axis or 0\n try:\n # fast path for series or for tup devoid of slices\n return self._get_label(tup, axis=axis)\n\n except KeyError as ek:\n # raise KeyError if number of indexers match\n # else IndexingError will be raised\n if self.ndim < len(tup) <= self.obj.index.nlevels:\n raise ek\n raise IndexingError("No label returned") from ek\n\n def _getitem_axis(self, key, axis: AxisInt):\n key = item_from_zerodim(key)\n if is_iterator(key):\n key = list(key)\n if key is Ellipsis:\n key = slice(None)\n\n labels = self.obj._get_axis(axis)\n\n if isinstance(key, tuple) and isinstance(labels, MultiIndex):\n key = tuple(key)\n\n if isinstance(key, slice):\n self._validate_key(key, axis)\n return self._get_slice_axis(key, axis=axis)\n elif com.is_bool_indexer(key):\n return self._getbool_axis(key, axis=axis)\n elif is_list_like_indexer(key):\n # an iterable multi-selection\n if not (isinstance(key, tuple) and isinstance(labels, MultiIndex)):\n if hasattr(key, "ndim") and key.ndim > 1:\n raise ValueError("Cannot index with multidimensional key")\n\n return self._getitem_iterable(key, axis=axis)\n\n # nested 
tuple slicing\n if is_nested_tuple(key, labels):\n locs = labels.get_locs(key)\n indexer: list[slice | npt.NDArray[np.intp]] = [slice(None)] * self.ndim\n indexer[axis] = locs\n return self.obj.iloc[tuple(indexer)]\n\n # fall thru to straight lookup\n self._validate_key(key, axis)\n return self._get_label(key, axis=axis)\n\n def _get_slice_axis(self, slice_obj: slice, axis: AxisInt):\n """\n This is pretty simple as we just have to deal with labels.\n """\n # caller is responsible for ensuring non-None axis\n obj = self.obj\n if not need_slice(slice_obj):\n return obj.copy(deep=False)\n\n labels = obj._get_axis(axis)\n indexer = labels.slice_indexer(slice_obj.start, slice_obj.stop, slice_obj.step)\n\n if isinstance(indexer, slice):\n return self.obj._slice(indexer, axis=axis)\n else:\n # DatetimeIndex overrides Index.slice_indexer and may\n # return a DatetimeIndex instead of a slice object.\n return self.obj.take(indexer, axis=axis)\n\n def _convert_to_indexer(self, key, axis: AxisInt):\n """\n Convert indexing key into something we can use to do actual fancy\n indexing on a ndarray.\n\n Examples\n ix[:5] -> slice(0, 5)\n ix[[1,2,3]] -> [1,2,3]\n ix[['foo', 'bar', 'baz']] -> [i, j, k] (indices of foo, bar, baz)\n\n Going by Zen of Python?\n 'In the face of ambiguity, refuse the temptation to guess.'\n raise AmbiguousIndexError with integer labels?\n - No, prefer label-based indexing\n """\n labels = self.obj._get_axis(axis)\n\n if isinstance(key, slice):\n return labels._convert_slice_indexer(key, kind="loc")\n\n if (\n isinstance(key, tuple)\n and not isinstance(labels, MultiIndex)\n and self.ndim < 2\n and len(key) > 1\n ):\n raise IndexingError("Too many indexers")\n\n # Slices are not valid keys passed in by the user,\n # even though they are hashable in Python 3.12\n contains_slice = False\n if isinstance(key, tuple):\n contains_slice = any(isinstance(v, slice) for v in key)\n\n if is_scalar(key) or (\n isinstance(labels, MultiIndex) and is_hashable(key) and not contains_slice\n ):\n # Otherwise get_loc will raise InvalidIndexError\n\n # if we are a label return me\n try:\n return labels.get_loc(key)\n except LookupError:\n if isinstance(key, tuple) and isinstance(labels, MultiIndex):\n if len(key) == labels.nlevels:\n return {"key": key}\n raise\n except InvalidIndexError:\n # GH35015, using datetime as column indices raises exception\n if not isinstance(labels, MultiIndex):\n raise\n except ValueError:\n if not is_integer(key):\n raise\n return {"key": key}\n\n if is_nested_tuple(key, labels):\n if self.ndim == 1 and any(isinstance(k, tuple) for k in key):\n # GH#35349 Raise if tuple in tuple for series\n raise IndexingError("Too many indexers")\n return labels.get_locs(key)\n\n elif is_list_like_indexer(key):\n if is_iterator(key):\n key = list(key)\n\n if com.is_bool_indexer(key):\n key = check_bool_indexer(labels, key)\n return key\n else:\n return self._get_listlike_indexer(key, axis)[1]\n else:\n try:\n return labels.get_loc(key)\n except LookupError:\n # allow a not found key only if we are a setter\n if not is_list_like_indexer(key):\n return {"key": key}\n raise\n\n def _get_listlike_indexer(self, key, axis: AxisInt):\n """\n Transform a list-like of keys into a new index and an indexer.\n\n Parameters\n ----------\n key : list-like\n Targeted labels.\n axis: int\n Dimension on which the indexing is being made.\n\n Raises\n ------\n KeyError\n If at least one key was requested but none was found.\n\n Returns\n -------\n keyarr: Index\n New index (coinciding with 'key' if 
the axis is unique).\n values : array-like\n Indexer for the return object, -1 denotes keys not found.\n """\n ax = self.obj._get_axis(axis)\n axis_name = self.obj._get_axis_name(axis)\n\n keyarr, indexer = ax._get_indexer_strict(key, axis_name)\n\n return keyarr, indexer\n\n\n@doc(IndexingMixin.iloc)\nclass _iLocIndexer(_LocationIndexer):\n _valid_types = (\n "integer, integer slice (START point is INCLUDED, END "\n "point is EXCLUDED), listlike of integers, boolean array"\n )\n _takeable = True\n\n # -------------------------------------------------------------------\n # Key Checks\n\n def _validate_key(self, key, axis: AxisInt):\n if com.is_bool_indexer(key):\n if hasattr(key, "index") and isinstance(key.index, Index):\n if key.index.inferred_type == "integer":\n raise NotImplementedError(\n "iLocation based boolean "\n "indexing on an integer type "\n "is not available"\n )\n raise ValueError(\n "iLocation based boolean indexing cannot use "\n "an indexable as a mask"\n )\n return\n\n if isinstance(key, slice):\n return\n elif is_integer(key):\n self._validate_integer(key, axis)\n elif isinstance(key, tuple):\n # a tuple should already have been caught by this point\n # so don't treat a tuple as a valid indexer\n raise IndexingError("Too many indexers")\n elif is_list_like_indexer(key):\n if isinstance(key, ABCSeries):\n arr = key._values\n elif is_array_like(key):\n arr = key\n else:\n arr = np.array(key)\n len_axis = len(self.obj._get_axis(axis))\n\n # check that the key has a numeric dtype\n if not is_numeric_dtype(arr.dtype):\n raise IndexError(f".iloc requires numeric indexers, got {arr}")\n\n # check that the key does not exceed the maximum size of the index\n if len(arr) and (arr.max() >= len_axis or arr.min() < -len_axis):\n raise IndexError("positional indexers are out-of-bounds")\n else:\n raise ValueError(f"Can only index by location with a [{self._valid_types}]")\n\n def _has_valid_setitem_indexer(self, indexer) -> bool:\n """\n Validate that a positional indexer cannot enlarge its target\n will raise if needed, does not modify the indexer externally.\n\n Returns\n -------\n bool\n """\n if isinstance(indexer, dict):\n raise IndexError("iloc cannot enlarge its target object")\n\n if isinstance(indexer, ABCDataFrame):\n raise TypeError(\n "DataFrame indexer for .iloc is not supported. 
"\n "Consider using .loc with a DataFrame indexer for automatic alignment.",\n )\n\n if not isinstance(indexer, tuple):\n indexer = _tuplify(self.ndim, indexer)\n\n for ax, i in zip(self.obj.axes, indexer):\n if isinstance(i, slice):\n # should check the stop slice?\n pass\n elif is_list_like_indexer(i):\n # should check the elements?\n pass\n elif is_integer(i):\n if i >= len(ax):\n raise IndexError("iloc cannot enlarge its target object")\n elif isinstance(i, dict):\n raise IndexError("iloc cannot enlarge its target object")\n\n return True\n\n def _is_scalar_access(self, key: tuple) -> bool:\n """\n Returns\n -------\n bool\n """\n # this is a shortcut accessor to both .loc and .iloc\n # that provide the equivalent access of .at and .iat\n # a) avoid getting things via sections and (to minimize dtype changes)\n # b) provide a performant path\n if len(key) != self.ndim:\n return False\n\n return all(is_integer(k) for k in key)\n\n def _validate_integer(self, key: int | np.integer, axis: AxisInt) -> None:\n """\n Check that 'key' is a valid position in the desired axis.\n\n Parameters\n ----------\n key : int\n Requested position.\n axis : int\n Desired axis.\n\n Raises\n ------\n IndexError\n If 'key' is not a valid position in axis 'axis'.\n """\n len_axis = len(self.obj._get_axis(axis))\n if key >= len_axis or key < -len_axis:\n raise IndexError("single positional indexer is out-of-bounds")\n\n # -------------------------------------------------------------------\n\n def _getitem_tuple(self, tup: tuple):\n tup = self._validate_tuple_indexer(tup)\n with suppress(IndexingError):\n return self._getitem_lowerdim(tup)\n\n return self._getitem_tuple_same_dim(tup)\n\n def _get_list_axis(self, key, axis: AxisInt):\n """\n Return Series values by list or array of integers.\n\n Parameters\n ----------\n key : list-like positional indexer\n axis : int\n\n Returns\n -------\n Series object\n\n Notes\n -----\n `axis` can only be zero.\n """\n try:\n return self.obj._take_with_is_copy(key, axis=axis)\n except IndexError as err:\n # re-raise with different error message, e.g. 
test_getitem_ndarray_3d\n raise IndexError("positional indexers are out-of-bounds") from err\n\n def _getitem_axis(self, key, axis: AxisInt):\n if key is Ellipsis:\n key = slice(None)\n elif isinstance(key, ABCDataFrame):\n raise IndexError(\n "DataFrame indexer is not allowed for .iloc\n"\n "Consider using .loc for automatic alignment."\n )\n\n if isinstance(key, slice):\n return self._get_slice_axis(key, axis=axis)\n\n if is_iterator(key):\n key = list(key)\n\n if isinstance(key, list):\n key = np.asarray(key)\n\n if com.is_bool_indexer(key):\n self._validate_key(key, axis)\n return self._getbool_axis(key, axis=axis)\n\n # a list of integers\n elif is_list_like_indexer(key):\n return self._get_list_axis(key, axis=axis)\n\n # a single integer\n else:\n key = item_from_zerodim(key)\n if not is_integer(key):\n raise TypeError("Cannot index by location index with a non-integer key")\n\n # validate the location\n self._validate_integer(key, axis)\n\n return self.obj._ixs(key, axis=axis)\n\n def _get_slice_axis(self, slice_obj: slice, axis: AxisInt):\n # caller is responsible for ensuring non-None axis\n obj = self.obj\n\n if not need_slice(slice_obj):\n return obj.copy(deep=False)\n\n labels = obj._get_axis(axis)\n labels._validate_positional_slice(slice_obj)\n return self.obj._slice(slice_obj, axis=axis)\n\n def _convert_to_indexer(self, key, axis: AxisInt):\n """\n Much simpler as we only have to deal with our valid types.\n """\n return key\n\n def _get_setitem_indexer(self, key):\n # GH#32257 Fall through to let numpy do validation\n if is_iterator(key):\n key = list(key)\n\n if self.axis is not None:\n key = _tupleize_axis_indexer(self.ndim, self.axis, key)\n\n return key\n\n # -------------------------------------------------------------------\n\n def _setitem_with_indexer(self, indexer, value, name: str = "iloc"):\n """\n _setitem_with_indexer is for setting values on a Series/DataFrame\n using positional indexers.\n\n If the relevant keys are not present, the Series/DataFrame may be\n expanded.\n\n This method is currently broken when dealing with non-unique Indexes,\n since it goes from positional indexers back to labels when calling\n BlockManager methods, see GH#12991, GH#22046, GH#15686.\n """\n info_axis = self.obj._info_axis_number\n\n # maybe partial set\n take_split_path = not self.obj._mgr.is_single_block\n\n if not take_split_path and isinstance(value, ABCDataFrame):\n # Avoid cast of values\n take_split_path = not value._mgr.is_single_block\n\n # if there is only one block/type, still have to take split path\n # unless the block is one-dimensional or it can hold the value\n if not take_split_path and len(self.obj._mgr.arrays) and self.ndim > 1:\n # in case of dict, keys are indices\n val = list(value.values()) if isinstance(value, dict) else value\n arr = self.obj._mgr.arrays[0]\n take_split_path = not can_hold_element(\n arr, extract_array(val, extract_numpy=True)\n )\n\n # if we have any multi-indexes that have non-trivial slices\n # (not null slices) then we must take the split path, xref\n # GH 10360, GH 27841\n if isinstance(indexer, tuple) and len(indexer) == len(self.obj.axes):\n for i, ax in zip(indexer, self.obj.axes):\n if isinstance(ax, MultiIndex) and not (\n is_integer(i) or com.is_null_slice(i)\n ):\n take_split_path = True\n break\n\n if isinstance(indexer, tuple):\n nindexer = []\n for i, idx in enumerate(indexer):\n if isinstance(idx, dict):\n # reindex the axis to the new value\n # and set inplace\n key, _ = convert_missing_indexer(idx)\n\n # if this is 
the items axes, then take the main missing\n # path first\n # this correctly sets the dtype and avoids cache issues\n # essentially this separates out the block that is needed\n # to possibly be modified\n if self.ndim > 1 and i == info_axis:\n # add the new item, and set the value\n # must have all defined axes if we have a scalar\n # or a list-like on the non-info axes if we have a\n # list-like\n if not len(self.obj):\n if not is_list_like_indexer(value):\n raise ValueError(\n "cannot set a frame with no "\n "defined index and a scalar"\n )\n self.obj[key] = value\n return\n\n # add a new item with the dtype setup\n if com.is_null_slice(indexer[0]):\n # We are setting an entire column\n self.obj[key] = value\n return\n elif is_array_like(value):\n # GH#42099\n arr = extract_array(value, extract_numpy=True)\n taker = -1 * np.ones(len(self.obj), dtype=np.intp)\n empty_value = algos.take_nd(arr, taker)\n if not isinstance(value, ABCSeries):\n # if not Series (in which case we need to align),\n # we can short-circuit\n if (\n isinstance(arr, np.ndarray)\n and arr.ndim == 1\n and len(arr) == 1\n ):\n # NumPy 1.25 deprecation: https://github.com/numpy/numpy/pull/10615\n arr = arr[0, ...]\n empty_value[indexer[0]] = arr\n self.obj[key] = empty_value\n return\n\n self.obj[key] = empty_value\n elif not is_list_like(value):\n self.obj[key] = construct_1d_array_from_inferred_fill_value(\n value, len(self.obj)\n )\n else:\n # FIXME: GH#42099#issuecomment-864326014\n self.obj[key] = infer_fill_value(value)\n\n new_indexer = convert_from_missing_indexer_tuple(\n indexer, self.obj.axes\n )\n self._setitem_with_indexer(new_indexer, value, name)\n\n return\n\n # reindex the axis\n # make sure to clear the cache because we are\n # just replacing the block manager here\n # so the object is the same\n index = self.obj._get_axis(i)\n with warnings.catch_warnings():\n # TODO: re-issue this with setitem-specific message?\n warnings.filterwarnings(\n "ignore",\n "The behavior of Index.insert with object-dtype "\n "is deprecated",\n category=FutureWarning,\n )\n labels = index.insert(len(index), key)\n\n # We are expanding the Series/DataFrame values to match\n # the length of thenew index `labels`. 
GH#40096 ensure\n # this is valid even if the index has duplicates.\n taker = np.arange(len(index) + 1, dtype=np.intp)\n taker[-1] = -1\n reindexers = {i: (labels, taker)}\n new_obj = self.obj._reindex_with_indexers(\n reindexers, allow_dups=True\n )\n self.obj._mgr = new_obj._mgr\n self.obj._maybe_update_cacher(clear=True)\n self.obj._is_copy = None\n\n nindexer.append(labels.get_loc(key))\n\n else:\n nindexer.append(idx)\n\n indexer = tuple(nindexer)\n else:\n indexer, missing = convert_missing_indexer(indexer)\n\n if missing:\n self._setitem_with_indexer_missing(indexer, value)\n return\n\n if name == "loc":\n # must come after setting of missing\n indexer, value = self._maybe_mask_setitem_value(indexer, value)\n\n # align and set the values\n if take_split_path:\n # We have to operate column-wise\n self._setitem_with_indexer_split_path(indexer, value, name)\n else:\n self._setitem_single_block(indexer, value, name)\n\n def _setitem_with_indexer_split_path(self, indexer, value, name: str):\n """\n Setitem column-wise.\n """\n # Above we only set take_split_path to True for 2D cases\n assert self.ndim == 2\n\n if not isinstance(indexer, tuple):\n indexer = _tuplify(self.ndim, indexer)\n if len(indexer) > self.ndim:\n raise IndexError("too many indices for array")\n if isinstance(indexer[0], np.ndarray) and indexer[0].ndim > 2:\n raise ValueError(r"Cannot set values with ndim > 2")\n\n if (isinstance(value, ABCSeries) and name != "iloc") or isinstance(value, dict):\n from pandas import Series\n\n value = self._align_series(indexer, Series(value))\n\n # Ensure we have something we can iterate over\n info_axis = indexer[1]\n ilocs = self._ensure_iterable_column_indexer(info_axis)\n\n pi = indexer[0]\n lplane_indexer = length_of_indexer(pi, self.obj.index)\n # lplane_indexer gives the expected length of obj[indexer[0]]\n\n # we need an iterable, with a ndim of at least 1\n # eg. don't pass through np.array(0)\n if is_list_like_indexer(value) and getattr(value, "ndim", 1) > 0:\n if isinstance(value, ABCDataFrame):\n self._setitem_with_indexer_frame_value(indexer, value, name)\n\n elif np.ndim(value) == 2:\n # TODO: avoid np.ndim call in case it isn't an ndarray, since\n # that will construct an ndarray, which will be wasteful\n self._setitem_with_indexer_2d_value(indexer, value)\n\n elif len(ilocs) == 1 and lplane_indexer == len(value) and not is_scalar(pi):\n # We are setting multiple rows in a single column.\n self._setitem_single_column(ilocs[0], value, pi)\n\n elif len(ilocs) == 1 and 0 != lplane_indexer != len(value):\n # We are trying to set N values into M entries of a single\n # column, which is invalid for N != M\n # Exclude zero-len for e.g. 
boolean masking that is all-false\n\n if len(value) == 1 and not is_integer(info_axis):\n # This is a case like df.iloc[:3, [1]] = [0]\n # where we treat as df.iloc[:3, 1] = 0\n return self._setitem_with_indexer((pi, info_axis[0]), value[0])\n\n raise ValueError(\n "Must have equal len keys and value "\n "when setting with an iterable"\n )\n\n elif lplane_indexer == 0 and len(value) == len(self.obj.index):\n # We get here in one case via .loc with a all-False mask\n pass\n\n elif self._is_scalar_access(indexer) and is_object_dtype(\n self.obj.dtypes._values[ilocs[0]]\n ):\n # We are setting nested data, only possible for object dtype data\n self._setitem_single_column(indexer[1], value, pi)\n\n elif len(ilocs) == len(value):\n # We are setting multiple columns in a single row.\n for loc, v in zip(ilocs, value):\n self._setitem_single_column(loc, v, pi)\n\n elif len(ilocs) == 1 and com.is_null_slice(pi) and len(self.obj) == 0:\n # This is a setitem-with-expansion, see\n # test_loc_setitem_empty_append_expands_rows_mixed_dtype\n # e.g. df = DataFrame(columns=["x", "y"])\n # df["x"] = df["x"].astype(np.int64)\n # df.loc[:, "x"] = [1, 2, 3]\n self._setitem_single_column(ilocs[0], value, pi)\n\n else:\n raise ValueError(\n "Must have equal len keys and value "\n "when setting with an iterable"\n )\n\n else:\n # scalar value\n for loc in ilocs:\n self._setitem_single_column(loc, value, pi)\n\n def _setitem_with_indexer_2d_value(self, indexer, value):\n # We get here with np.ndim(value) == 2, excluding DataFrame,\n # which goes through _setitem_with_indexer_frame_value\n pi = indexer[0]\n\n ilocs = self._ensure_iterable_column_indexer(indexer[1])\n\n if not is_array_like(value):\n # cast lists to array\n value = np.array(value, dtype=object)\n if len(ilocs) != value.shape[1]:\n raise ValueError(\n "Must have equal len keys and value when setting with an ndarray"\n )\n\n for i, loc in enumerate(ilocs):\n value_col = value[:, i]\n if is_object_dtype(value_col.dtype):\n # casting to list so that we do type inference in setitem_single_column\n value_col = value_col.tolist()\n self._setitem_single_column(loc, value_col, pi)\n\n def _setitem_with_indexer_frame_value(self, indexer, value: DataFrame, name: str):\n ilocs = self._ensure_iterable_column_indexer(indexer[1])\n\n sub_indexer = list(indexer)\n pi = indexer[0]\n\n multiindex_indexer = isinstance(self.obj.columns, MultiIndex)\n\n unique_cols = value.columns.is_unique\n\n # We do not want to align the value in case of iloc GH#37728\n if name == "iloc":\n for i, loc in enumerate(ilocs):\n val = value.iloc[:, i]\n self._setitem_single_column(loc, val, pi)\n\n elif not unique_cols and value.columns.equals(self.obj.columns):\n # We assume we are already aligned, see\n # test_iloc_setitem_frame_duplicate_columns_multiple_blocks\n for loc in ilocs:\n item = self.obj.columns[loc]\n if item in value:\n sub_indexer[1] = item\n val = self._align_series(\n tuple(sub_indexer),\n value.iloc[:, loc],\n multiindex_indexer,\n )\n else:\n val = np.nan\n\n self._setitem_single_column(loc, val, pi)\n\n elif not unique_cols:\n raise ValueError("Setting with non-unique columns is not allowed.")\n\n else:\n for loc in ilocs:\n item = self.obj.columns[loc]\n if item in value:\n sub_indexer[1] = item\n val = self._align_series(\n tuple(sub_indexer),\n value[item],\n multiindex_indexer,\n using_cow=using_copy_on_write(),\n )\n else:\n val = np.nan\n\n self._setitem_single_column(loc, val, pi)\n\n def _setitem_single_column(self, loc: int, value, plane_indexer) -> None:\n 
"""\n\n Parameters\n ----------\n loc : int\n Indexer for column position\n plane_indexer : int, slice, listlike[int]\n The indexer we use for setitem along axis=0.\n """\n pi = plane_indexer\n\n is_full_setter = com.is_null_slice(pi) or com.is_full_slice(pi, len(self.obj))\n\n is_null_setter = com.is_empty_slice(pi) or is_array_like(pi) and len(pi) == 0\n\n if is_null_setter:\n # no-op, don't cast dtype later\n return\n\n elif is_full_setter:\n try:\n self.obj._mgr.column_setitem(\n loc, plane_indexer, value, inplace_only=True\n )\n except (ValueError, TypeError, LossySetitemError):\n # If we're setting an entire column and we can't do it inplace,\n # then we can use value's dtype (or inferred dtype)\n # instead of object\n dtype = self.obj.dtypes.iloc[loc]\n if dtype not in (np.void, object) and not self.obj.empty:\n # - Exclude np.void, as that is a special case for expansion.\n # We want to warn for\n # df = pd.DataFrame({'a': [1, 2]})\n # df.loc[:, 'a'] = .3\n # but not for\n # df = pd.DataFrame({'a': [1, 2]})\n # df.loc[:, 'b'] = .3\n # - Exclude `object`, as then no upcasting happens.\n # - Exclude empty initial object with enlargement,\n # as then there's nothing to be inconsistent with.\n warnings.warn(\n f"Setting an item of incompatible dtype is deprecated "\n "and will raise in a future error of pandas. "\n f"Value '{value}' has dtype incompatible with {dtype}, "\n "please explicitly cast to a compatible dtype first.",\n FutureWarning,\n stacklevel=find_stack_level(),\n )\n self.obj.isetitem(loc, value)\n else:\n # set value into the column (first attempting to operate inplace, then\n # falling back to casting if necessary)\n dtype = self.obj.dtypes.iloc[loc]\n if dtype == np.void:\n # This means we're expanding, with multiple columns, e.g.\n # df = pd.DataFrame({'A': [1,2,3], 'B': [4,5,6]})\n # df.loc[df.index <= 2, ['F', 'G']] = (1, 'abc')\n # Columns F and G will initially be set to np.void.\n # Here, we replace those temporary `np.void` columns with\n # columns of the appropriate dtype, based on `value`.\n self.obj.iloc[:, loc] = construct_1d_array_from_inferred_fill_value(\n value, len(self.obj)\n )\n self.obj._mgr.column_setitem(loc, plane_indexer, value)\n\n self.obj._clear_item_cache()\n\n def _setitem_single_block(self, indexer, value, name: str) -> None:\n """\n _setitem_with_indexer for the case when we have a single Block.\n """\n from pandas import Series\n\n if (isinstance(value, ABCSeries) and name != "iloc") or isinstance(value, dict):\n # TODO(EA): ExtensionBlock.setitem this causes issues with\n # setting for extensionarrays that store dicts. Need to decide\n # if it's worth supporting that.\n value = self._align_series(indexer, Series(value))\n\n info_axis = self.obj._info_axis_number\n item_labels = self.obj._get_axis(info_axis)\n if isinstance(indexer, tuple):\n # if we are setting on the info axis ONLY\n # set using those methods to avoid block-splitting\n # logic here\n if (\n self.ndim == len(indexer) == 2\n and is_integer(indexer[1])\n and com.is_null_slice(indexer[0])\n ):\n col = item_labels[indexer[info_axis]]\n if len(item_labels.get_indexer_for([col])) == 1:\n # e.g. test_loc_setitem_empty_append_expands_rows\n loc = item_labels.get_loc(col)\n self._setitem_single_column(loc, value, indexer[0])\n return\n\n indexer = maybe_convert_ix(*indexer) # e.g. 
test_setitem_frame_align\n\n if isinstance(value, ABCDataFrame) and name != "iloc":\n value = self._align_frame(indexer, value)._values\n\n # check for chained assignment\n self.obj._check_is_chained_assignment_possible()\n\n # actually do the set\n self.obj._mgr = self.obj._mgr.setitem(indexer=indexer, value=value)\n self.obj._maybe_update_cacher(clear=True, inplace=True)\n\n def _setitem_with_indexer_missing(self, indexer, value):\n """\n Insert new row(s) or column(s) into the Series or DataFrame.\n """\n from pandas import Series\n\n # reindex the axis to the new value\n # and set inplace\n if self.ndim == 1:\n index = self.obj.index\n with warnings.catch_warnings():\n # TODO: re-issue this with setitem-specific message?\n warnings.filterwarnings(\n "ignore",\n "The behavior of Index.insert with object-dtype is deprecated",\n category=FutureWarning,\n )\n new_index = index.insert(len(index), indexer)\n\n # we have a coerced indexer, e.g. a float\n # that matches in an int64 Index, so\n # we will not create a duplicate index, rather\n # index to that element\n # e.g. 0.0 -> 0\n # GH#12246\n if index.is_unique:\n # pass new_index[-1:] instead if [new_index[-1]]\n # so that we retain dtype\n new_indexer = index.get_indexer(new_index[-1:])\n if (new_indexer != -1).any():\n # We get only here with loc, so can hard code\n return self._setitem_with_indexer(new_indexer, value, "loc")\n\n # this preserves dtype of the value and of the object\n if not is_scalar(value):\n new_dtype = None\n\n elif is_valid_na_for_dtype(value, self.obj.dtype):\n if not is_object_dtype(self.obj.dtype):\n # Every NA value is suitable for object, no conversion needed\n value = na_value_for_dtype(self.obj.dtype, compat=False)\n\n new_dtype = maybe_promote(self.obj.dtype, value)[0]\n\n elif isna(value):\n new_dtype = None\n elif not self.obj.empty and not is_object_dtype(self.obj.dtype):\n # We should not cast, if we have object dtype because we can\n # set timedeltas into object series\n curr_dtype = self.obj.dtype\n curr_dtype = getattr(curr_dtype, "numpy_dtype", curr_dtype)\n new_dtype = maybe_promote(curr_dtype, value)[0]\n else:\n new_dtype = None\n\n new_values = Series([value], dtype=new_dtype)._values\n\n if len(self.obj._values):\n # GH#22717 handle casting compatibility that np.concatenate\n # does incorrectly\n new_values = concat_compat([self.obj._values, new_values])\n self.obj._mgr = self.obj._constructor(\n new_values, index=new_index, name=self.obj.name\n )._mgr\n self.obj._maybe_update_cacher(clear=True)\n\n elif self.ndim == 2:\n if not len(self.obj.columns):\n # no columns and scalar\n raise ValueError("cannot set a frame with no defined columns")\n\n has_dtype = hasattr(value, "dtype")\n if isinstance(value, ABCSeries):\n # append a Series\n value = value.reindex(index=self.obj.columns, copy=True)\n value.name = indexer\n elif isinstance(value, dict):\n value = Series(\n value, index=self.obj.columns, name=indexer, dtype=object\n )\n else:\n # a list-list\n if is_list_like_indexer(value):\n # must have conforming columns\n if len(value) != len(self.obj.columns):\n raise ValueError("cannot set a row with mismatched columns")\n\n value = Series(value, index=self.obj.columns, name=indexer)\n\n if not len(self.obj):\n # We will ignore the existing dtypes instead of using\n # internals.concat logic\n df = value.to_frame().T\n\n idx = self.obj.index\n if isinstance(idx, MultiIndex):\n name = idx.names\n else:\n name = idx.name\n\n df.index = Index([indexer], name=name)\n if not has_dtype:\n # i.e. 
if we already had a Series or ndarray, keep that\n # dtype. But if we had a list or dict, then do inference\n df = df.infer_objects(copy=False)\n self.obj._mgr = df._mgr\n else:\n self.obj._mgr = self.obj._append(value)._mgr\n self.obj._maybe_update_cacher(clear=True)\n\n def _ensure_iterable_column_indexer(self, column_indexer):\n """\n Ensure that our column indexer is something that can be iterated over.\n """\n ilocs: Sequence[int | np.integer] | np.ndarray\n if is_integer(column_indexer):\n ilocs = [column_indexer]\n elif isinstance(column_indexer, slice):\n ilocs = np.arange(len(self.obj.columns))[column_indexer]\n elif (\n isinstance(column_indexer, np.ndarray) and column_indexer.dtype.kind == "b"\n ):\n ilocs = np.arange(len(column_indexer))[column_indexer]\n else:\n ilocs = column_indexer\n return ilocs\n\n def _align_series(\n self,\n indexer,\n ser: Series,\n multiindex_indexer: bool = False,\n using_cow: bool = False,\n ):\n """\n Parameters\n ----------\n indexer : tuple, slice, scalar\n Indexer used to get the locations that will be set to `ser`.\n ser : pd.Series\n Values to assign to the locations specified by `indexer`.\n multiindex_indexer : bool, optional\n Defaults to False. Should be set to True if `indexer` was from\n a `pd.MultiIndex`, to avoid unnecessary broadcasting.\n\n Returns\n -------\n `np.array` of `ser` broadcast to the appropriate shape for assignment\n to the locations selected by `indexer`\n """\n if isinstance(indexer, (slice, np.ndarray, list, Index)):\n indexer = (indexer,)\n\n if isinstance(indexer, tuple):\n # flatten np.ndarray indexers\n def ravel(i):\n return i.ravel() if isinstance(i, np.ndarray) else i\n\n indexer = tuple(map(ravel, indexer))\n\n aligners = [not com.is_null_slice(idx) for idx in indexer]\n sum_aligners = sum(aligners)\n single_aligner = sum_aligners == 1\n is_frame = self.ndim == 2\n obj = self.obj\n\n # are we a single alignable value on a non-primary\n # dim (e.g. 
panel: 1,2, or frame: 0) ?\n # hence need to align to a single axis dimension\n # rather that find all valid dims\n\n # frame\n if is_frame:\n single_aligner = single_aligner and aligners[0]\n\n # we have a frame, with multiple indexers on both axes; and a\n # series, so need to broadcast (see GH5206)\n if sum_aligners == self.ndim and all(is_sequence(_) for _ in indexer):\n ser_values = ser.reindex(obj.axes[0][indexer[0]], copy=True)._values\n\n # single indexer\n if len(indexer) > 1 and not multiindex_indexer:\n len_indexer = len(indexer[1])\n ser_values = (\n np.tile(ser_values, len_indexer).reshape(len_indexer, -1).T\n )\n\n return ser_values\n\n for i, idx in enumerate(indexer):\n ax = obj.axes[i]\n\n # multiple aligners (or null slices)\n if is_sequence(idx) or isinstance(idx, slice):\n if single_aligner and com.is_null_slice(idx):\n continue\n new_ix = ax[idx]\n if not is_list_like_indexer(new_ix):\n new_ix = Index([new_ix])\n else:\n new_ix = Index(new_ix)\n if ser.index.equals(new_ix):\n if using_cow:\n return ser\n return ser._values.copy()\n\n return ser.reindex(new_ix)._values\n\n # 2 dims\n elif single_aligner:\n # reindex along index\n ax = self.obj.axes[1]\n if ser.index.equals(ax) or not len(ax):\n return ser._values.copy()\n return ser.reindex(ax)._values\n\n elif is_integer(indexer) and self.ndim == 1:\n if is_object_dtype(self.obj.dtype):\n return ser\n ax = self.obj._get_axis(0)\n\n if ser.index.equals(ax):\n return ser._values.copy()\n\n return ser.reindex(ax)._values[indexer]\n\n elif is_integer(indexer):\n ax = self.obj._get_axis(1)\n\n if ser.index.equals(ax):\n return ser._values.copy()\n\n return ser.reindex(ax)._values\n\n raise ValueError("Incompatible indexer with Series")\n\n def _align_frame(self, indexer, df: DataFrame) -> DataFrame:\n is_frame = self.ndim == 2\n\n if isinstance(indexer, tuple):\n idx, cols = None, None\n sindexers = []\n for i, ix in enumerate(indexer):\n ax = self.obj.axes[i]\n if is_sequence(ix) or isinstance(ix, slice):\n if isinstance(ix, np.ndarray):\n ix = ix.ravel()\n if idx is None:\n idx = ax[ix]\n elif cols is None:\n cols = ax[ix]\n else:\n break\n else:\n sindexers.append(i)\n\n if idx is not None and cols is not None:\n if df.index.equals(idx) and df.columns.equals(cols):\n val = df.copy()\n else:\n val = df.reindex(idx, columns=cols)\n return val\n\n elif (isinstance(indexer, slice) or is_list_like_indexer(indexer)) and is_frame:\n ax = self.obj.index[indexer]\n if df.index.equals(ax):\n val = df.copy()\n else:\n # we have a multi-index and are trying to align\n # with a particular, level GH3738\n if (\n isinstance(ax, MultiIndex)\n and isinstance(df.index, MultiIndex)\n and ax.nlevels != df.index.nlevels\n ):\n raise TypeError(\n "cannot align on a multi-index with out "\n "specifying the join levels"\n )\n\n val = df.reindex(index=ax)\n return val\n\n raise ValueError("Incompatible indexer with DataFrame")\n\n\nclass _ScalarAccessIndexer(NDFrameIndexerBase):\n """\n Access scalars quickly.\n """\n\n # sub-classes need to set _takeable\n _takeable: bool\n\n def _convert_key(self, key):\n raise AbstractMethodError(self)\n\n def __getitem__(self, key):\n if not isinstance(key, tuple):\n # we could have a convertible item here (e.g. 
Timestamp)\n if not is_list_like_indexer(key):\n key = (key,)\n else:\n raise ValueError("Invalid call for scalar access (getting)!")\n\n key = self._convert_key(key)\n return self.obj._get_value(*key, takeable=self._takeable)\n\n def __setitem__(self, key, value) -> None:\n if isinstance(key, tuple):\n key = tuple(com.apply_if_callable(x, self.obj) for x in key)\n else:\n # scalar callable may return tuple\n key = com.apply_if_callable(key, self.obj)\n\n if not isinstance(key, tuple):\n key = _tuplify(self.ndim, key)\n key = list(self._convert_key(key))\n if len(key) != self.ndim:\n raise ValueError("Not enough indexers for scalar access (setting)!")\n\n self.obj._set_value(*key, value=value, takeable=self._takeable)\n\n\n@doc(IndexingMixin.at)\nclass _AtIndexer(_ScalarAccessIndexer):\n _takeable = False\n\n def _convert_key(self, key):\n """\n Require they keys to be the same type as the index. (so we don't\n fallback)\n """\n # GH 26989\n # For series, unpacking key needs to result in the label.\n # This is already the case for len(key) == 1; e.g. (1,)\n if self.ndim == 1 and len(key) > 1:\n key = (key,)\n\n return key\n\n @property\n def _axes_are_unique(self) -> bool:\n # Only relevant for self.ndim == 2\n assert self.ndim == 2\n return self.obj.index.is_unique and self.obj.columns.is_unique\n\n def __getitem__(self, key):\n if self.ndim == 2 and not self._axes_are_unique:\n # GH#33041 fall back to .loc\n if not isinstance(key, tuple) or not all(is_scalar(x) for x in key):\n raise ValueError("Invalid call for scalar access (getting)!")\n return self.obj.loc[key]\n\n return super().__getitem__(key)\n\n def __setitem__(self, key, value) -> None:\n if self.ndim == 2 and not self._axes_are_unique:\n # GH#33041 fall back to .loc\n if not isinstance(key, tuple) or not all(is_scalar(x) for x in key):\n raise ValueError("Invalid call for scalar access (setting)!")\n\n self.obj.loc[key] = value\n return\n\n return super().__setitem__(key, value)\n\n\n@doc(IndexingMixin.iat)\nclass _iAtIndexer(_ScalarAccessIndexer):\n _takeable = True\n\n def _convert_key(self, key):\n """\n Require integer args. 
(and convert to label arguments)\n """\n for i in key:\n if not is_integer(i):\n raise ValueError("iAt based indexing can only have integer indexers")\n return key\n\n\ndef _tuplify(ndim: int, loc: Hashable) -> tuple[Hashable | slice, ...]:\n """\n Given an indexer for the first dimension, create an equivalent tuple\n for indexing over all dimensions.\n\n Parameters\n ----------\n ndim : int\n loc : object\n\n Returns\n -------\n tuple\n """\n _tup: list[Hashable | slice]\n _tup = [slice(None, None) for _ in range(ndim)]\n _tup[0] = loc\n return tuple(_tup)\n\n\ndef _tupleize_axis_indexer(ndim: int, axis: AxisInt, key) -> tuple:\n """\n If we have an axis, adapt the given key to be axis-independent.\n """\n new_key = [slice(None)] * ndim\n new_key[axis] = key\n return tuple(new_key)\n\n\ndef check_bool_indexer(index: Index, key) -> np.ndarray:\n """\n Check if key is a valid boolean indexer for an object with such index and\n perform reindexing or conversion if needed.\n\n This function assumes that is_bool_indexer(key) == True.\n\n Parameters\n ----------\n index : Index\n Index of the object on which the indexing is done.\n key : list-like\n Boolean indexer to check.\n\n Returns\n -------\n np.array\n Resulting key.\n\n Raises\n ------\n IndexError\n If the key does not have the same length as index.\n IndexingError\n If the index of the key is unalignable to index.\n """\n result = key\n if isinstance(key, ABCSeries) and not key.index.equals(index):\n indexer = result.index.get_indexer_for(index)\n if -1 in indexer:\n raise IndexingError(\n "Unalignable boolean Series provided as "\n "indexer (index of the boolean Series and of "\n "the indexed object do not match)."\n )\n\n result = result.take(indexer)\n\n # fall through for boolean\n if not isinstance(result.dtype, ExtensionDtype):\n return result.astype(bool)._values\n\n if is_object_dtype(key):\n # key might be object-dtype bool, check_array_indexer needs bool array\n result = np.asarray(result, dtype=bool)\n elif not is_array_like(result):\n # GH 33924\n # key may contain nan elements, check_array_indexer needs bool array\n result = pd_array(result, dtype=bool)\n return check_array_indexer(index, result)\n\n\ndef convert_missing_indexer(indexer):\n """\n Reverse convert a missing indexer, which is a dict\n return the scalar indexer and a boolean indicating if we converted\n """\n if isinstance(indexer, dict):\n # a missing key (but not a tuple indexer)\n indexer = indexer["key"]\n\n if isinstance(indexer, bool):\n raise KeyError("cannot use a single bool to index into setitem")\n return indexer, True\n\n return indexer, False\n\n\ndef convert_from_missing_indexer_tuple(indexer, axes):\n """\n Create a filtered indexer that doesn't have any missing indexers.\n """\n\n def get_indexer(_i, _idx):\n return axes[_i].get_loc(_idx["key"]) if isinstance(_idx, dict) else _idx\n\n return tuple(get_indexer(_i, _idx) for _i, _idx in enumerate(indexer))\n\n\ndef maybe_convert_ix(*args):\n """\n We likely want to take the cross-product.\n """\n for arg in args:\n if not isinstance(arg, (np.ndarray, list, ABCSeries, Index)):\n return args\n return np.ix_(*args)\n\n\ndef is_nested_tuple(tup, labels) -> bool:\n """\n Returns\n -------\n bool\n """\n # check for a compatible nested tuple and multiindexes among the axes\n if not isinstance(tup, tuple):\n return False\n\n for k in tup:\n if is_list_like(k) or isinstance(k, slice):\n return isinstance(labels, MultiIndex)\n\n return False\n\n\ndef is_label_like(key) -> bool:\n """\n Returns\n -------\n 
bool\n """\n # select a label or row\n return (\n not isinstance(key, slice)\n and not is_list_like_indexer(key)\n and key is not Ellipsis\n )\n\n\ndef need_slice(obj: slice) -> bool:\n """\n Returns\n -------\n bool\n """\n return (\n obj.start is not None\n or obj.stop is not None\n or (obj.step is not None and obj.step != 1)\n )\n\n\ndef check_dict_or_set_indexers(key) -> None:\n """\n Check if the indexer is or contains a dict or set, which is no longer allowed.\n """\n if (\n isinstance(key, set)\n or isinstance(key, tuple)\n and any(isinstance(x, set) for x in key)\n ):\n raise TypeError(\n "Passing a set as an indexer is not supported. Use a list instead."\n )\n\n if (\n isinstance(key, dict)\n or isinstance(key, tuple)\n and any(isinstance(x, dict) for x in key)\n ):\n raise TypeError(\n "Passing a dict as an indexer is not supported. Use a list instead."\n )\n
.venv\Lib\site-packages\pandas\core\indexing.py
indexing.py
Python
97,236
0.75
0.172711
0.133042
awesome-app
737
2023-11-20T21:11:52.847558
BSD-3-Clause
false
8ca646e934214ba56f9889befe9c22e2
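The row above stores pandas/core/indexing.py. As an illustrative aside (not part of the stored file), the sketch below exercises, through the public .iloc/.loc API only, the behaviours the indexer machinery in that file implements; the frame and the values are hypothetical examples, not taken from the source.

import pandas as pd

df = pd.DataFrame({"a": [1, 2, 3], "b": [4.0, 5.0, 6.0]})

# _has_valid_setitem_indexer / _validate_integer: positional setitem must stay
# within the axis bounds; iloc refuses to enlarge its target object.
try:
    df.iloc[10, 0] = 99
except IndexError as err:
    print(err)

# _setitem_with_indexer_missing: assigning to an unseen label via .loc appends
# a new row instead of raising.
df.loc[3] = [7, 8.0]

# _setitem_with_indexer_split_path: setting several columns of one row is done
# column-wise under the hood.
df.loc[0, ["a", "b"]] = [10, 40.0]
print(df)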
"""\nRoutines for filling missing data.\n"""\nfrom __future__ import annotations\n\nfrom functools import wraps\nfrom typing import (\n TYPE_CHECKING,\n Any,\n Literal,\n cast,\n overload,\n)\n\nimport numpy as np\n\nfrom pandas._libs import (\n NaT,\n algos,\n lib,\n)\nfrom pandas._typing import (\n ArrayLike,\n AxisInt,\n F,\n ReindexMethod,\n npt,\n)\nfrom pandas.compat._optional import import_optional_dependency\n\nfrom pandas.core.dtypes.cast import infer_dtype_from\nfrom pandas.core.dtypes.common import (\n is_array_like,\n is_bool_dtype,\n is_numeric_dtype,\n is_numeric_v_string_like,\n is_object_dtype,\n needs_i8_conversion,\n)\nfrom pandas.core.dtypes.dtypes import DatetimeTZDtype\nfrom pandas.core.dtypes.missing import (\n is_valid_na_for_dtype,\n isna,\n na_value_for_dtype,\n)\n\nif TYPE_CHECKING:\n from pandas import Index\n\n\ndef check_value_size(value, mask: npt.NDArray[np.bool_], length: int):\n """\n Validate the size of the values passed to ExtensionArray.fillna.\n """\n if is_array_like(value):\n if len(value) != length:\n raise ValueError(\n f"Length of 'value' does not match. Got ({len(value)}) "\n f" expected {length}"\n )\n value = value[mask]\n\n return value\n\n\ndef mask_missing(arr: ArrayLike, values_to_mask) -> npt.NDArray[np.bool_]:\n """\n Return a masking array of same size/shape as arr\n with entries equaling any member of values_to_mask set to True\n\n Parameters\n ----------\n arr : ArrayLike\n values_to_mask: list, tuple, or scalar\n\n Returns\n -------\n np.ndarray[bool]\n """\n # When called from Block.replace/replace_list, values_to_mask is a scalar\n # known to be holdable by arr.\n # When called from Series._single_replace, values_to_mask is tuple or list\n dtype, values_to_mask = infer_dtype_from(values_to_mask)\n\n if isinstance(dtype, np.dtype):\n values_to_mask = np.array(values_to_mask, dtype=dtype)\n else:\n cls = dtype.construct_array_type()\n if not lib.is_list_like(values_to_mask):\n values_to_mask = [values_to_mask]\n values_to_mask = cls._from_sequence(values_to_mask, dtype=dtype, copy=False)\n\n potential_na = False\n if is_object_dtype(arr.dtype):\n # pre-compute mask to avoid comparison to NA\n potential_na = True\n arr_mask = ~isna(arr)\n\n na_mask = isna(values_to_mask)\n nonna = values_to_mask[~na_mask]\n\n # GH 21977\n mask = np.zeros(arr.shape, dtype=bool)\n if (\n is_numeric_dtype(arr.dtype)\n and not is_bool_dtype(arr.dtype)\n and is_bool_dtype(nonna.dtype)\n ):\n pass\n elif (\n is_bool_dtype(arr.dtype)\n and is_numeric_dtype(nonna.dtype)\n and not is_bool_dtype(nonna.dtype)\n ):\n pass\n else:\n for x in nonna:\n if is_numeric_v_string_like(arr, x):\n # GH#29553 prevent numpy deprecation warnings\n pass\n else:\n if potential_na:\n new_mask = np.zeros(arr.shape, dtype=np.bool_)\n new_mask[arr_mask] = arr[arr_mask] == x\n else:\n new_mask = arr == x\n\n if not isinstance(new_mask, np.ndarray):\n # usually BooleanArray\n new_mask = new_mask.to_numpy(dtype=bool, na_value=False)\n mask |= new_mask\n\n if na_mask.any():\n mask |= isna(arr)\n\n return mask\n\n\n@overload\ndef clean_fill_method(\n method: Literal["ffill", "pad", "bfill", "backfill"],\n *,\n allow_nearest: Literal[False] = ...,\n) -> Literal["pad", "backfill"]:\n ...\n\n\n@overload\ndef clean_fill_method(\n method: Literal["ffill", "pad", "bfill", "backfill", "nearest"],\n *,\n allow_nearest: Literal[True],\n) -> Literal["pad", "backfill", "nearest"]:\n ...\n\n\ndef clean_fill_method(\n method: Literal["ffill", "pad", "bfill", "backfill", "nearest"],\n *,\n 
allow_nearest: bool = False,\n) -> Literal["pad", "backfill", "nearest"]:\n if isinstance(method, str):\n # error: Incompatible types in assignment (expression has type "str", variable\n # has type "Literal['ffill', 'pad', 'bfill', 'backfill', 'nearest']")\n method = method.lower() # type: ignore[assignment]\n if method == "ffill":\n method = "pad"\n elif method == "bfill":\n method = "backfill"\n\n valid_methods = ["pad", "backfill"]\n expecting = "pad (ffill) or backfill (bfill)"\n if allow_nearest:\n valid_methods.append("nearest")\n expecting = "pad (ffill), backfill (bfill) or nearest"\n if method not in valid_methods:\n raise ValueError(f"Invalid fill method. Expecting {expecting}. Got {method}")\n return method\n\n\n# interpolation methods that dispatch to np.interp\n\nNP_METHODS = ["linear", "time", "index", "values"]\n\n# interpolation methods that dispatch to _interpolate_scipy_wrapper\n\nSP_METHODS = [\n "nearest",\n "zero",\n "slinear",\n "quadratic",\n "cubic",\n "barycentric",\n "krogh",\n "spline",\n "polynomial",\n "from_derivatives",\n "piecewise_polynomial",\n "pchip",\n "akima",\n "cubicspline",\n]\n\n\ndef clean_interp_method(method: str, index: Index, **kwargs) -> str:\n order = kwargs.get("order")\n\n if method in ("spline", "polynomial") and order is None:\n raise ValueError("You must specify the order of the spline or polynomial.")\n\n valid = NP_METHODS + SP_METHODS\n if method not in valid:\n raise ValueError(f"method must be one of {valid}. Got '{method}' instead.")\n\n if method in ("krogh", "piecewise_polynomial", "pchip"):\n if not index.is_monotonic_increasing:\n raise ValueError(\n f"{method} interpolation requires that the index be monotonic."\n )\n\n return method\n\n\ndef find_valid_index(how: str, is_valid: npt.NDArray[np.bool_]) -> int | None:\n """\n Retrieves the positional index of the first valid value.\n\n Parameters\n ----------\n how : {'first', 'last'}\n Use this parameter to change between the first or last valid index.\n is_valid: np.ndarray\n Mask to find na_values.\n\n Returns\n -------\n int or None\n """\n assert how in ["first", "last"]\n\n if len(is_valid) == 0: # early stop\n return None\n\n if is_valid.ndim == 2:\n is_valid = is_valid.any(axis=1) # reduce axis 1\n\n if how == "first":\n idxpos = is_valid[::].argmax()\n\n elif how == "last":\n idxpos = len(is_valid) - 1 - is_valid[::-1].argmax()\n\n chk_notna = is_valid[idxpos]\n\n if not chk_notna:\n return None\n # Incompatible return value type (got "signedinteger[Any]",\n # expected "Optional[int]")\n return idxpos # type: ignore[return-value]\n\n\ndef validate_limit_direction(\n limit_direction: str,\n) -> Literal["forward", "backward", "both"]:\n valid_limit_directions = ["forward", "backward", "both"]\n limit_direction = limit_direction.lower()\n if limit_direction not in valid_limit_directions:\n raise ValueError(\n "Invalid limit_direction: expecting one of "\n f"{valid_limit_directions}, got '{limit_direction}'."\n )\n # error: Incompatible return value type (got "str", expected\n # "Literal['forward', 'backward', 'both']")\n return limit_direction # type: ignore[return-value]\n\n\ndef validate_limit_area(limit_area: str | None) -> Literal["inside", "outside"] | None:\n if limit_area is not None:\n valid_limit_areas = ["inside", "outside"]\n limit_area = limit_area.lower()\n if limit_area not in valid_limit_areas:\n raise ValueError(\n f"Invalid limit_area: expecting one of {valid_limit_areas}, got "\n f"{limit_area}."\n )\n # error: Incompatible return value type (got 
"Optional[str]", expected\n # "Optional[Literal['inside', 'outside']]")\n return limit_area # type: ignore[return-value]\n\n\ndef infer_limit_direction(\n limit_direction: Literal["backward", "forward", "both"] | None, method: str\n) -> Literal["backward", "forward", "both"]:\n # Set `limit_direction` depending on `method`\n if limit_direction is None:\n if method in ("backfill", "bfill"):\n limit_direction = "backward"\n else:\n limit_direction = "forward"\n else:\n if method in ("pad", "ffill") and limit_direction != "forward":\n raise ValueError(\n f"`limit_direction` must be 'forward' for method `{method}`"\n )\n if method in ("backfill", "bfill") and limit_direction != "backward":\n raise ValueError(\n f"`limit_direction` must be 'backward' for method `{method}`"\n )\n return limit_direction\n\n\ndef get_interp_index(method, index: Index) -> Index:\n # create/use the index\n if method == "linear":\n # prior default\n from pandas import Index\n\n index = Index(np.arange(len(index)))\n else:\n methods = {"index", "values", "nearest", "time"}\n is_numeric_or_datetime = (\n is_numeric_dtype(index.dtype)\n or isinstance(index.dtype, DatetimeTZDtype)\n or lib.is_np_dtype(index.dtype, "mM")\n )\n if method not in methods and not is_numeric_or_datetime:\n raise ValueError(\n "Index column must be numeric or datetime type when "\n f"using {method} method other than linear. "\n "Try setting a numeric or datetime index column before "\n "interpolating."\n )\n\n if isna(index).any():\n raise NotImplementedError(\n "Interpolation with NaNs in the index "\n "has not been implemented. Try filling "\n "those NaNs before interpolating."\n )\n return index\n\n\ndef interpolate_2d_inplace(\n data: np.ndarray, # floating dtype\n index: Index,\n axis: AxisInt,\n method: str = "linear",\n limit: int | None = None,\n limit_direction: str = "forward",\n limit_area: str | None = None,\n fill_value: Any | None = None,\n mask=None,\n **kwargs,\n) -> None:\n """\n Column-wise application of _interpolate_1d.\n\n Notes\n -----\n Alters 'data' in-place.\n\n The signature does differ from _interpolate_1d because it only\n includes what is needed for Block.interpolate.\n """\n # validate the interp method\n clean_interp_method(method, index, **kwargs)\n\n if is_valid_na_for_dtype(fill_value, data.dtype):\n fill_value = na_value_for_dtype(data.dtype, compat=False)\n\n if method == "time":\n if not needs_i8_conversion(index.dtype):\n raise ValueError(\n "time-weighted interpolation only works "\n "on Series or DataFrames with a "\n "DatetimeIndex"\n )\n method = "values"\n\n limit_direction = validate_limit_direction(limit_direction)\n limit_area_validated = validate_limit_area(limit_area)\n\n # default limit is unlimited GH #16282\n limit = algos.validate_limit(nobs=None, limit=limit)\n\n indices = _index_to_interp_indices(index, method)\n\n def func(yvalues: np.ndarray) -> None:\n # process 1-d slices in the axis direction\n\n _interpolate_1d(\n indices=indices,\n yvalues=yvalues,\n method=method,\n limit=limit,\n limit_direction=limit_direction,\n limit_area=limit_area_validated,\n fill_value=fill_value,\n bounds_error=False,\n mask=mask,\n **kwargs,\n )\n\n # error: Argument 1 to "apply_along_axis" has incompatible type\n # "Callable[[ndarray[Any, Any]], None]"; expected "Callable[...,\n # Union[_SupportsArray[dtype[<nothing>]], Sequence[_SupportsArray\n # [dtype[<nothing>]]], Sequence[Sequence[_SupportsArray[dtype[<nothing>]]]],\n # Sequence[Sequence[Sequence[_SupportsArray[dtype[<nothing>]]]]],\n # 
Sequence[Sequence[Sequence[Sequence[_SupportsArray[dtype[<nothing>]]]]]]]]"\n np.apply_along_axis(func, axis, data) # type: ignore[arg-type]\n\n\ndef _index_to_interp_indices(index: Index, method: str) -> np.ndarray:\n """\n Convert Index to ndarray of indices to pass to NumPy/SciPy.\n """\n xarr = index._values\n if needs_i8_conversion(xarr.dtype):\n # GH#1646 for dt64tz\n xarr = xarr.view("i8")\n\n if method == "linear":\n inds = xarr\n inds = cast(np.ndarray, inds)\n else:\n inds = np.asarray(xarr)\n\n if method in ("values", "index"):\n if inds.dtype == np.object_:\n inds = lib.maybe_convert_objects(inds)\n\n return inds\n\n\ndef _interpolate_1d(\n indices: np.ndarray,\n yvalues: np.ndarray,\n method: str = "linear",\n limit: int | None = None,\n limit_direction: str = "forward",\n limit_area: Literal["inside", "outside"] | None = None,\n fill_value: Any | None = None,\n bounds_error: bool = False,\n order: int | None = None,\n mask=None,\n **kwargs,\n) -> None:\n """\n Logic for the 1-d interpolation. The input\n indices and yvalues will each be 1-d arrays of the same length.\n\n Bounds_error is currently hardcoded to False since non-scipy ones don't\n take it as an argument.\n\n Notes\n -----\n Fills 'yvalues' in-place.\n """\n if mask is not None:\n invalid = mask\n else:\n invalid = isna(yvalues)\n valid = ~invalid\n\n if not valid.any():\n return\n\n if valid.all():\n return\n\n # These are sets of index pointers to invalid values... i.e. {0, 1, etc...\n all_nans = set(np.flatnonzero(invalid))\n\n first_valid_index = find_valid_index(how="first", is_valid=valid)\n if first_valid_index is None: # no nan found in start\n first_valid_index = 0\n start_nans = set(range(first_valid_index))\n\n last_valid_index = find_valid_index(how="last", is_valid=valid)\n if last_valid_index is None: # no nan found in end\n last_valid_index = len(yvalues)\n end_nans = set(range(1 + last_valid_index, len(valid)))\n\n # Like the sets above, preserve_nans contains indices of invalid values,\n # but in this case, it is the final set of indices that need to be\n # preserved as NaN after the interpolation.\n\n # For example if limit_direction='forward' then preserve_nans will\n # contain indices of NaNs at the beginning of the series, and NaNs that\n # are more than 'limit' away from the prior non-NaN.\n\n # set preserve_nans based on direction using _interp_limit\n preserve_nans: list | set\n if limit_direction == "forward":\n preserve_nans = start_nans | set(_interp_limit(invalid, limit, 0))\n elif limit_direction == "backward":\n preserve_nans = end_nans | set(_interp_limit(invalid, 0, limit))\n else:\n # both directions... 
just use _interp_limit\n preserve_nans = set(_interp_limit(invalid, limit, limit))\n\n # if limit_area is set, add either mid or outside indices\n # to preserve_nans GH #16284\n if limit_area == "inside":\n # preserve NaNs on the outside\n preserve_nans |= start_nans | end_nans\n elif limit_area == "outside":\n # preserve NaNs on the inside\n mid_nans = all_nans - start_nans - end_nans\n preserve_nans |= mid_nans\n\n # sort preserve_nans and convert to list\n preserve_nans = sorted(preserve_nans)\n\n is_datetimelike = yvalues.dtype.kind in "mM"\n\n if is_datetimelike:\n yvalues = yvalues.view("i8")\n\n if method in NP_METHODS:\n # np.interp requires sorted X values, #21037\n\n indexer = np.argsort(indices[valid])\n yvalues[invalid] = np.interp(\n indices[invalid], indices[valid][indexer], yvalues[valid][indexer]\n )\n else:\n yvalues[invalid] = _interpolate_scipy_wrapper(\n indices[valid],\n yvalues[valid],\n indices[invalid],\n method=method,\n fill_value=fill_value,\n bounds_error=bounds_error,\n order=order,\n **kwargs,\n )\n\n if mask is not None:\n mask[:] = False\n mask[preserve_nans] = True\n elif is_datetimelike:\n yvalues[preserve_nans] = NaT.value\n else:\n yvalues[preserve_nans] = np.nan\n return\n\n\ndef _interpolate_scipy_wrapper(\n x: np.ndarray,\n y: np.ndarray,\n new_x: np.ndarray,\n method: str,\n fill_value=None,\n bounds_error: bool = False,\n order=None,\n **kwargs,\n):\n """\n Passed off to scipy.interpolate.interp1d. method is scipy's kind.\n Returns an array interpolated at new_x. Add any new methods to\n the list in _clean_interp_method.\n """\n extra = f"{method} interpolation requires SciPy."\n import_optional_dependency("scipy", extra=extra)\n from scipy import interpolate\n\n new_x = np.asarray(new_x)\n\n # ignores some kwargs that could be passed along.\n alt_methods = {\n "barycentric": interpolate.barycentric_interpolate,\n "krogh": interpolate.krogh_interpolate,\n "from_derivatives": _from_derivatives,\n "piecewise_polynomial": _from_derivatives,\n "cubicspline": _cubicspline_interpolate,\n "akima": _akima_interpolate,\n "pchip": interpolate.pchip_interpolate,\n }\n\n interp1d_methods = [\n "nearest",\n "zero",\n "slinear",\n "quadratic",\n "cubic",\n "polynomial",\n ]\n if method in interp1d_methods:\n if method == "polynomial":\n kind = order\n else:\n kind = method\n terp = interpolate.interp1d(\n x, y, kind=kind, fill_value=fill_value, bounds_error=bounds_error\n )\n new_y = terp(new_x)\n elif method == "spline":\n # GH #10633, #24014\n if isna(order) or (order <= 0):\n raise ValueError(\n f"order needs to be specified and greater than 0; got order: {order}"\n )\n terp = interpolate.UnivariateSpline(x, y, k=order, **kwargs)\n new_y = terp(new_x)\n else:\n # GH 7295: need to be able to write for some reason\n # in some circumstances: check all three\n if not x.flags.writeable:\n x = x.copy()\n if not y.flags.writeable:\n y = y.copy()\n if not new_x.flags.writeable:\n new_x = new_x.copy()\n terp = alt_methods[method]\n new_y = terp(x, y, new_x, **kwargs)\n return new_y\n\n\ndef _from_derivatives(\n xi: np.ndarray,\n yi: np.ndarray,\n x: np.ndarray,\n order=None,\n der: int | list[int] | None = 0,\n extrapolate: bool = False,\n):\n """\n Convenience function for interpolate.BPoly.from_derivatives.\n\n Construct a piecewise polynomial in the Bernstein basis, compatible\n with the specified values and derivatives at breakpoints.\n\n Parameters\n ----------\n xi : array-like\n sorted 1D array of x-coordinates\n yi : array-like or list of array-likes\n yi[i][j] 
is the j-th derivative known at xi[i]\n order: None or int or array-like of ints. Default: None.\n Specifies the degree of local polynomials. If not None, some\n derivatives are ignored.\n der : int or list\n How many derivatives to extract; None for all potentially nonzero\n derivatives (that is a number equal to the number of points), or a\n list of derivatives to extract. This number includes the function\n value as 0th derivative.\n extrapolate : bool, optional\n Whether to extrapolate to ouf-of-bounds points based on first and last\n intervals, or to return NaNs. Default: True.\n\n See Also\n --------\n scipy.interpolate.BPoly.from_derivatives\n\n Returns\n -------\n y : scalar or array-like\n The result, of length R or length M or M by R.\n """\n from scipy import interpolate\n\n # return the method for compat with scipy version & backwards compat\n method = interpolate.BPoly.from_derivatives\n m = method(xi, yi.reshape(-1, 1), orders=order, extrapolate=extrapolate)\n\n return m(x)\n\n\ndef _akima_interpolate(\n xi: np.ndarray,\n yi: np.ndarray,\n x: np.ndarray,\n der: int | list[int] | None = 0,\n axis: AxisInt = 0,\n):\n """\n Convenience function for akima interpolation.\n xi and yi are arrays of values used to approximate some function f,\n with ``yi = f(xi)``.\n\n See `Akima1DInterpolator` for details.\n\n Parameters\n ----------\n xi : np.ndarray\n A sorted list of x-coordinates, of length N.\n yi : np.ndarray\n A 1-D array of real values. `yi`'s length along the interpolation\n axis must be equal to the length of `xi`. If N-D array, use axis\n parameter to select correct axis.\n x : np.ndarray\n Of length M.\n der : int, optional\n How many derivatives to extract; None for all potentially\n nonzero derivatives (that is a number equal to the number\n of points), or a list of derivatives to extract. This number\n includes the function value as 0th derivative.\n axis : int, optional\n Axis in the yi array corresponding to the x-coordinate values.\n\n See Also\n --------\n scipy.interpolate.Akima1DInterpolator\n\n Returns\n -------\n y : scalar or array-like\n The result, of length R or length M or M by R,\n\n """\n from scipy import interpolate\n\n P = interpolate.Akima1DInterpolator(xi, yi, axis=axis)\n\n return P(x, nu=der)\n\n\ndef _cubicspline_interpolate(\n xi: np.ndarray,\n yi: np.ndarray,\n x: np.ndarray,\n axis: AxisInt = 0,\n bc_type: str | tuple[Any, Any] = "not-a-knot",\n extrapolate=None,\n):\n """\n Convenience function for cubic spline data interpolator.\n\n See `scipy.interpolate.CubicSpline` for details.\n\n Parameters\n ----------\n xi : np.ndarray, shape (n,)\n 1-d array containing values of the independent variable.\n Values must be real, finite and in strictly increasing order.\n yi : np.ndarray\n Array containing values of the dependent variable. It can have\n arbitrary number of dimensions, but the length along ``axis``\n (see below) must match the length of ``x``. Values must be finite.\n x : np.ndarray, shape (m,)\n axis : int, optional\n Axis along which `y` is assumed to be varying. Meaning that for\n ``x[i]`` the corresponding values are ``np.take(y, i, axis=axis)``.\n Default is 0.\n bc_type : string or 2-tuple, optional\n Boundary condition type. Two additional equations, given by the\n boundary conditions, are required to determine all coefficients of\n polynomials on each segment [2]_.\n If `bc_type` is a string, then the specified condition will be applied\n at both ends of a spline. 
Available conditions are:\n * 'not-a-knot' (default): The first and second segment at a curve end\n are the same polynomial. It is a good default when there is no\n information on boundary conditions.\n * 'periodic': The interpolated functions is assumed to be periodic\n of period ``x[-1] - x[0]``. The first and last value of `y` must be\n identical: ``y[0] == y[-1]``. This boundary condition will result in\n ``y'[0] == y'[-1]`` and ``y''[0] == y''[-1]``.\n * 'clamped': The first derivative at curves ends are zero. Assuming\n a 1D `y`, ``bc_type=((1, 0.0), (1, 0.0))`` is the same condition.\n * 'natural': The second derivative at curve ends are zero. Assuming\n a 1D `y`, ``bc_type=((2, 0.0), (2, 0.0))`` is the same condition.\n If `bc_type` is a 2-tuple, the first and the second value will be\n applied at the curve start and end respectively. The tuple values can\n be one of the previously mentioned strings (except 'periodic') or a\n tuple `(order, deriv_values)` allowing to specify arbitrary\n derivatives at curve ends:\n * `order`: the derivative order, 1 or 2.\n * `deriv_value`: array-like containing derivative values, shape must\n be the same as `y`, excluding ``axis`` dimension. For example, if\n `y` is 1D, then `deriv_value` must be a scalar. If `y` is 3D with\n the shape (n0, n1, n2) and axis=2, then `deriv_value` must be 2D\n and have the shape (n0, n1).\n extrapolate : {bool, 'periodic', None}, optional\n If bool, determines whether to extrapolate to out-of-bounds points\n based on first and last intervals, or to return NaNs. If 'periodic',\n periodic extrapolation is used. If None (default), ``extrapolate`` is\n set to 'periodic' for ``bc_type='periodic'`` and to True otherwise.\n\n See Also\n --------\n scipy.interpolate.CubicHermiteSpline\n\n Returns\n -------\n y : scalar or array-like\n The result, of shape (m,)\n\n References\n ----------\n .. [1] `Cubic Spline Interpolation\n <https://en.wikiversity.org/wiki/Cubic_Spline_Interpolation>`_\n on Wikiversity.\n .. [2] Carl de Boor, "A Practical Guide to Splines", Springer-Verlag, 1978.\n """\n from scipy import interpolate\n\n P = interpolate.CubicSpline(\n xi, yi, axis=axis, bc_type=bc_type, extrapolate=extrapolate\n )\n\n return P(x)\n\n\ndef _interpolate_with_limit_area(\n values: np.ndarray,\n method: Literal["pad", "backfill"],\n limit: int | None,\n limit_area: Literal["inside", "outside"],\n) -> None:\n """\n Apply interpolation and limit_area logic to values along a to-be-specified axis.\n\n Parameters\n ----------\n values: np.ndarray\n Input array.\n method: str\n Interpolation method. 
Could be "bfill" or "pad"\n limit: int, optional\n Index limit on interpolation.\n limit_area: {'inside', 'outside'}\n Limit area for interpolation.\n\n Notes\n -----\n Modifies values in-place.\n """\n\n invalid = isna(values)\n is_valid = ~invalid\n\n if not invalid.all():\n first = find_valid_index(how="first", is_valid=is_valid)\n if first is None:\n first = 0\n last = find_valid_index(how="last", is_valid=is_valid)\n if last is None:\n last = len(values)\n\n pad_or_backfill_inplace(\n values,\n method=method,\n limit=limit,\n limit_area=limit_area,\n )\n\n if limit_area == "inside":\n invalid[first : last + 1] = False\n elif limit_area == "outside":\n invalid[:first] = invalid[last + 1 :] = False\n else:\n raise ValueError("limit_area should be 'inside' or 'outside'")\n\n values[invalid] = np.nan\n\n\ndef pad_or_backfill_inplace(\n values: np.ndarray,\n method: Literal["pad", "backfill"] = "pad",\n axis: AxisInt = 0,\n limit: int | None = None,\n limit_area: Literal["inside", "outside"] | None = None,\n) -> None:\n """\n Perform an actual interpolation of values, values will be make 2-d if\n needed fills inplace, returns the result.\n\n Parameters\n ----------\n values: np.ndarray\n Input array.\n method: str, default "pad"\n Interpolation method. Could be "bfill" or "pad"\n axis: 0 or 1\n Interpolation axis\n limit: int, optional\n Index limit on interpolation.\n limit_area: str, optional\n Limit area for interpolation. Can be "inside" or "outside"\n\n Notes\n -----\n Modifies values in-place.\n """\n transf = (lambda x: x) if axis == 0 else (lambda x: x.T)\n\n # reshape a 1 dim if needed\n if values.ndim == 1:\n if axis != 0: # pragma: no cover\n raise AssertionError("cannot interpolate on a ndim == 1 with axis != 0")\n values = values.reshape(tuple((1,) + values.shape))\n\n method = clean_fill_method(method)\n tvalues = transf(values)\n\n func = get_fill_func(method, ndim=2)\n # _pad_2d and _backfill_2d both modify tvalues inplace\n func(tvalues, limit=limit, limit_area=limit_area)\n\n\ndef _fillna_prep(\n values, mask: npt.NDArray[np.bool_] | None = None\n) -> npt.NDArray[np.bool_]:\n # boilerplate for _pad_1d, _backfill_1d, _pad_2d, _backfill_2d\n\n if mask is None:\n mask = isna(values)\n\n return mask\n\n\ndef _datetimelike_compat(func: F) -> F:\n """\n Wrapper to handle datetime64 and timedelta64 dtypes.\n """\n\n @wraps(func)\n def new_func(\n values,\n limit: int | None = None,\n limit_area: Literal["inside", "outside"] | None = None,\n mask=None,\n ):\n if needs_i8_conversion(values.dtype):\n if mask is None:\n # This needs to occur before casting to int64\n mask = isna(values)\n\n result, mask = func(\n values.view("i8"), limit=limit, limit_area=limit_area, mask=mask\n )\n return result.view(values.dtype), mask\n\n return func(values, limit=limit, limit_area=limit_area, mask=mask)\n\n return cast(F, new_func)\n\n\n@_datetimelike_compat\ndef _pad_1d(\n values: np.ndarray,\n limit: int | None = None,\n limit_area: Literal["inside", "outside"] | None = None,\n mask: npt.NDArray[np.bool_] | None = None,\n) -> tuple[np.ndarray, npt.NDArray[np.bool_]]:\n mask = _fillna_prep(values, mask)\n if limit_area is not None and not mask.all():\n _fill_limit_area_1d(mask, limit_area)\n algos.pad_inplace(values, mask, limit=limit)\n return values, mask\n\n\n@_datetimelike_compat\ndef _backfill_1d(\n values: np.ndarray,\n limit: int | None = None,\n limit_area: Literal["inside", "outside"] | None = None,\n mask: npt.NDArray[np.bool_] | None = None,\n) -> tuple[np.ndarray, 
npt.NDArray[np.bool_]]:\n mask = _fillna_prep(values, mask)\n if limit_area is not None and not mask.all():\n _fill_limit_area_1d(mask, limit_area)\n algos.backfill_inplace(values, mask, limit=limit)\n return values, mask\n\n\n@_datetimelike_compat\ndef _pad_2d(\n values: np.ndarray,\n limit: int | None = None,\n limit_area: Literal["inside", "outside"] | None = None,\n mask: npt.NDArray[np.bool_] | None = None,\n):\n mask = _fillna_prep(values, mask)\n if limit_area is not None:\n _fill_limit_area_2d(mask, limit_area)\n\n if values.size:\n algos.pad_2d_inplace(values, mask, limit=limit)\n else:\n # for test coverage\n pass\n return values, mask\n\n\n@_datetimelike_compat\ndef _backfill_2d(\n values,\n limit: int | None = None,\n limit_area: Literal["inside", "outside"] | None = None,\n mask: npt.NDArray[np.bool_] | None = None,\n):\n mask = _fillna_prep(values, mask)\n if limit_area is not None:\n _fill_limit_area_2d(mask, limit_area)\n\n if values.size:\n algos.backfill_2d_inplace(values, mask, limit=limit)\n else:\n # for test coverage\n pass\n return values, mask\n\n\ndef _fill_limit_area_1d(\n mask: npt.NDArray[np.bool_], limit_area: Literal["outside", "inside"]\n) -> None:\n """Prepare 1d mask for ffill/bfill with limit_area.\n\n Caller is responsible for checking at least one value of mask is False.\n When called, mask will no longer faithfully represent when\n the corresponding are NA or not.\n\n Parameters\n ----------\n mask : np.ndarray[bool, ndim=1]\n Mask representing NA values when filling.\n limit_area : { "outside", "inside" }\n Whether to limit filling to outside or inside the outer most non-NA value.\n """\n neg_mask = ~mask\n first = neg_mask.argmax()\n last = len(neg_mask) - neg_mask[::-1].argmax() - 1\n if limit_area == "inside":\n mask[:first] = False\n mask[last + 1 :] = False\n elif limit_area == "outside":\n mask[first + 1 : last] = False\n\n\ndef _fill_limit_area_2d(\n mask: npt.NDArray[np.bool_], limit_area: Literal["outside", "inside"]\n) -> None:\n """Prepare 2d mask for ffill/bfill with limit_area.\n\n When called, mask will no longer faithfully represent when\n the corresponding are NA or not.\n\n Parameters\n ----------\n mask : np.ndarray[bool, ndim=1]\n Mask representing NA values when filling.\n limit_area : { "outside", "inside" }\n Whether to limit filling to outside or inside the outer most non-NA value.\n """\n neg_mask = ~mask.T\n if limit_area == "outside":\n # Identify inside\n la_mask = (\n np.maximum.accumulate(neg_mask, axis=0)\n & np.maximum.accumulate(neg_mask[::-1], axis=0)[::-1]\n )\n else:\n # Identify outside\n la_mask = (\n ~np.maximum.accumulate(neg_mask, axis=0)\n | ~np.maximum.accumulate(neg_mask[::-1], axis=0)[::-1]\n )\n mask[la_mask.T] = False\n\n\n_fill_methods = {"pad": _pad_1d, "backfill": _backfill_1d}\n\n\ndef get_fill_func(method, ndim: int = 1):\n method = clean_fill_method(method)\n if ndim == 1:\n return _fill_methods[method]\n return {"pad": _pad_2d, "backfill": _backfill_2d}[method]\n\n\ndef clean_reindex_fill_method(method) -> ReindexMethod | None:\n if method is None:\n return None\n return clean_fill_method(method, allow_nearest=True)\n\n\ndef _interp_limit(\n invalid: npt.NDArray[np.bool_], fw_limit: int | None, bw_limit: int | None\n):\n """\n Get indexers of values that won't be filled\n because they exceed the limits.\n\n Parameters\n ----------\n invalid : np.ndarray[bool]\n fw_limit : int or None\n forward limit to index\n bw_limit : int or None\n backward limit to index\n\n Returns\n -------\n set of 
indexers\n\n Notes\n -----\n This is equivalent to the more readable, but slower\n\n .. code-block:: python\n\n def _interp_limit(invalid, fw_limit, bw_limit):\n for x in np.where(invalid)[0]:\n if invalid[max(0, x - fw_limit):x + bw_limit + 1].all():\n yield x\n """\n # handle forward first; the backward direction is the same except\n # 1. operate on the reversed array\n # 2. subtract the returned indices from N - 1\n N = len(invalid)\n f_idx = set()\n b_idx = set()\n\n def inner(invalid, limit: int):\n limit = min(limit, N)\n windowed = _rolling_window(invalid, limit + 1).all(1)\n idx = set(np.where(windowed)[0] + limit) | set(\n np.where((~invalid[: limit + 1]).cumsum() == 0)[0]\n )\n return idx\n\n if fw_limit is not None:\n if fw_limit == 0:\n f_idx = set(np.where(invalid)[0])\n else:\n f_idx = inner(invalid, fw_limit)\n\n if bw_limit is not None:\n if bw_limit == 0:\n # then we don't even need to care about backwards\n # just use forwards\n return f_idx\n else:\n b_idx_inv = list(inner(invalid[::-1], bw_limit))\n b_idx = set(N - 1 - np.asarray(b_idx_inv))\n if fw_limit == 0:\n return b_idx\n\n return f_idx & b_idx\n\n\ndef _rolling_window(a: npt.NDArray[np.bool_], window: int) -> npt.NDArray[np.bool_]:\n """\n [True, True, False, True, False], 2 ->\n\n [\n [True, True],\n [True, False],\n [False, True],\n [True, False],\n ]\n """\n # https://stackoverflow.com/a/6811241\n shape = a.shape[:-1] + (a.shape[-1] - window + 1, window)\n strides = a.strides + (a.strides[-1],)\n return np.lib.stride_tricks.as_strided(a, shape=shape, strides=strides)\n
.venv\Lib\site-packages\pandas\core\missing.py
missing.py
Python
35,270
0.95
0.134715
0.080412
vue-tools
606
2025-06-20T03:50:08.754743
MIT
false
60cebaea00894f134fb8368c2b8c92e0
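The missing.py entry above centers on preparing a NaN mask for ffill/bfill under a limit_area restriction. Below is a minimal NumPy-only sketch of that masking logic; it is not part of the recorded file, it mirrors the steps of _fill_limit_area_1d rather than calling pandas itself, and the array contents and variable names are chosen purely for illustration.

import numpy as np

# Toy series with leading, interior, and trailing NaNs.
values = np.array([np.nan, np.nan, 1.0, np.nan, 2.0, np.nan, np.nan])
mask = np.isnan(values)                              # True where a value is missing

neg_mask = ~mask
first = neg_mask.argmax()                            # position of first valid value -> 2
last = len(neg_mask) - neg_mask[::-1].argmax() - 1   # position of last valid value  -> 4

inside = mask.copy()
inside[:first] = False           # "inside": leading NaNs stay unfilled
inside[last + 1:] = False        #           trailing NaNs stay unfilled

outside = mask.copy()
outside[first + 1:last] = False  # "outside": interior NaNs stay unfilled

print(inside)    # [False False False  True False False False]
print(outside)   # [ True  True False False False  True  True]

The surviving True positions are exactly the slots that a subsequent pad/backfill pass is still allowed to fill, which is why the recorded helpers only ever clear entries of the mask and never set new ones.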
from __future__ import annotations\n\nimport functools\nimport itertools\nfrom typing import (\n Any,\n Callable,\n cast,\n)\nimport warnings\n\nimport numpy as np\n\nfrom pandas._config import get_option\n\nfrom pandas._libs import (\n NaT,\n NaTType,\n iNaT,\n lib,\n)\nfrom pandas._typing import (\n ArrayLike,\n AxisInt,\n CorrelationMethod,\n Dtype,\n DtypeObj,\n F,\n Scalar,\n Shape,\n npt,\n)\nfrom pandas.compat._optional import import_optional_dependency\nfrom pandas.util._exceptions import find_stack_level\n\nfrom pandas.core.dtypes.common import (\n is_complex,\n is_float,\n is_float_dtype,\n is_integer,\n is_numeric_dtype,\n is_object_dtype,\n needs_i8_conversion,\n pandas_dtype,\n)\nfrom pandas.core.dtypes.missing import (\n isna,\n na_value_for_dtype,\n notna,\n)\n\nbn = import_optional_dependency("bottleneck", errors="warn")\n_BOTTLENECK_INSTALLED = bn is not None\n_USE_BOTTLENECK = False\n\n\ndef set_use_bottleneck(v: bool = True) -> None:\n # set/unset to use bottleneck\n global _USE_BOTTLENECK\n if _BOTTLENECK_INSTALLED:\n _USE_BOTTLENECK = v\n\n\nset_use_bottleneck(get_option("compute.use_bottleneck"))\n\n\nclass disallow:\n def __init__(self, *dtypes: Dtype) -> None:\n super().__init__()\n self.dtypes = tuple(pandas_dtype(dtype).type for dtype in dtypes)\n\n def check(self, obj) -> bool:\n return hasattr(obj, "dtype") and issubclass(obj.dtype.type, self.dtypes)\n\n def __call__(self, f: F) -> F:\n @functools.wraps(f)\n def _f(*args, **kwargs):\n obj_iter = itertools.chain(args, kwargs.values())\n if any(self.check(obj) for obj in obj_iter):\n f_name = f.__name__.replace("nan", "")\n raise TypeError(\n f"reduction operation '{f_name}' not allowed for this dtype"\n )\n try:\n return f(*args, **kwargs)\n except ValueError as e:\n # we want to transform an object array\n # ValueError message to the more typical TypeError\n # e.g. 
this is normally a disallowed function on\n # object arrays that contain strings\n if is_object_dtype(args[0]):\n raise TypeError(e) from e\n raise\n\n return cast(F, _f)\n\n\nclass bottleneck_switch:\n def __init__(self, name=None, **kwargs) -> None:\n self.name = name\n self.kwargs = kwargs\n\n def __call__(self, alt: F) -> F:\n bn_name = self.name or alt.__name__\n\n try:\n bn_func = getattr(bn, bn_name)\n except (AttributeError, NameError): # pragma: no cover\n bn_func = None\n\n @functools.wraps(alt)\n def f(\n values: np.ndarray,\n *,\n axis: AxisInt | None = None,\n skipna: bool = True,\n **kwds,\n ):\n if len(self.kwargs) > 0:\n for k, v in self.kwargs.items():\n if k not in kwds:\n kwds[k] = v\n\n if values.size == 0 and kwds.get("min_count") is None:\n # We are empty, returning NA for our type\n # Only applies for the default `min_count` of None\n # since that affects how empty arrays are handled.\n # TODO(GH-18976) update all the nanops methods to\n # correctly handle empty inputs and remove this check.\n # It *may* just be `var`\n return _na_for_min_count(values, axis)\n\n if _USE_BOTTLENECK and skipna and _bn_ok_dtype(values.dtype, bn_name):\n if kwds.get("mask", None) is None:\n # `mask` is not recognised by bottleneck, would raise\n # TypeError if called\n kwds.pop("mask", None)\n result = bn_func(values, axis=axis, **kwds)\n\n # prefer to treat inf/-inf as NA, but must compute the func\n # twice :(\n if _has_infs(result):\n result = alt(values, axis=axis, skipna=skipna, **kwds)\n else:\n result = alt(values, axis=axis, skipna=skipna, **kwds)\n else:\n result = alt(values, axis=axis, skipna=skipna, **kwds)\n\n return result\n\n return cast(F, f)\n\n\ndef _bn_ok_dtype(dtype: DtypeObj, name: str) -> bool:\n # Bottleneck chokes on datetime64, PeriodDtype (or and EA)\n if dtype != object and not needs_i8_conversion(dtype):\n # GH 42878\n # Bottleneck uses naive summation leading to O(n) loss of precision\n # unlike numpy which implements pairwise summation, which has O(log(n)) loss\n # crossref: https://github.com/pydata/bottleneck/issues/379\n\n # GH 15507\n # bottleneck does not properly upcast during the sum\n # so can overflow\n\n # GH 9422\n # further we also want to preserve NaN when all elements\n # are NaN, unlike bottleneck/numpy which consider this\n # to be 0\n return name not in ["nansum", "nanprod", "nanmean"]\n return False\n\n\ndef _has_infs(result) -> bool:\n if isinstance(result, np.ndarray):\n if result.dtype in ("f8", "f4"):\n # Note: outside of an nanops-specific test, we always have\n # result.ndim == 1, so there is no risk of this ravel making a copy.\n return lib.has_infs(result.ravel("K"))\n try:\n return np.isinf(result).any()\n except (TypeError, NotImplementedError):\n # if it doesn't support infs, then it can't have infs\n return False\n\n\ndef _get_fill_value(\n dtype: DtypeObj, fill_value: Scalar | None = None, fill_value_typ=None\n):\n """return the correct fill value for the dtype of the values"""\n if fill_value is not None:\n return fill_value\n if _na_ok_dtype(dtype):\n if fill_value_typ is None:\n return np.nan\n else:\n if fill_value_typ == "+inf":\n return np.inf\n else:\n return -np.inf\n else:\n if fill_value_typ == "+inf":\n # need the max int here\n return lib.i8max\n else:\n return iNaT\n\n\ndef _maybe_get_mask(\n values: np.ndarray, skipna: bool, mask: npt.NDArray[np.bool_] | None\n) -> npt.NDArray[np.bool_] | None:\n """\n Compute a mask if and only if necessary.\n\n This function will compute a mask iff it is necessary. 
Otherwise,\n return the provided mask (potentially None) when a mask does not need to be\n computed.\n\n A mask is never necessary if the values array is of boolean or integer\n dtypes, as these are incapable of storing NaNs. If passing a NaN-capable\n dtype that is interpretable as either boolean or integer data (eg,\n timedelta64), a mask must be provided.\n\n If the skipna parameter is False, a new mask will not be computed.\n\n The mask is computed using isna() by default. Setting invert=True selects\n notna() as the masking function.\n\n Parameters\n ----------\n values : ndarray\n input array to potentially compute mask for\n skipna : bool\n boolean for whether NaNs should be skipped\n mask : Optional[ndarray]\n nan-mask if known\n\n Returns\n -------\n Optional[np.ndarray[bool]]\n """\n if mask is None:\n if values.dtype.kind in "biu":\n # Boolean data cannot contain nulls, so signal via mask being None\n return None\n\n if skipna or values.dtype.kind in "mM":\n mask = isna(values)\n\n return mask\n\n\ndef _get_values(\n values: np.ndarray,\n skipna: bool,\n fill_value: Any = None,\n fill_value_typ: str | None = None,\n mask: npt.NDArray[np.bool_] | None = None,\n) -> tuple[np.ndarray, npt.NDArray[np.bool_] | None]:\n """\n Utility to get the values view, mask, dtype, dtype_max, and fill_value.\n\n If both mask and fill_value/fill_value_typ are not None and skipna is True,\n the values array will be copied.\n\n For input arrays of boolean or integer dtypes, copies will only occur if a\n precomputed mask, a fill_value/fill_value_typ, and skipna=True are\n provided.\n\n Parameters\n ----------\n values : ndarray\n input array to potentially compute mask for\n skipna : bool\n boolean for whether NaNs should be skipped\n fill_value : Any\n value to fill NaNs with\n fill_value_typ : str\n Set to '+inf' or '-inf' to handle dtype-specific infinities\n mask : Optional[np.ndarray[bool]]\n nan-mask if known\n\n Returns\n -------\n values : ndarray\n Potential copy of input value array\n mask : Optional[ndarray[bool]]\n Mask for values, if deemed necessary to compute\n """\n # In _get_values is only called from within nanops, and in all cases\n # with scalar fill_value. 
This guarantee is important for the\n # np.where call below\n\n mask = _maybe_get_mask(values, skipna, mask)\n\n dtype = values.dtype\n\n datetimelike = False\n if values.dtype.kind in "mM":\n # changing timedelta64/datetime64 to int64 needs to happen after\n # finding `mask` above\n values = np.asarray(values.view("i8"))\n datetimelike = True\n\n if skipna and (mask is not None):\n # get our fill value (in case we need to provide an alternative\n # dtype for it)\n fill_value = _get_fill_value(\n dtype, fill_value=fill_value, fill_value_typ=fill_value_typ\n )\n\n if fill_value is not None:\n if mask.any():\n if datetimelike or _na_ok_dtype(dtype):\n values = values.copy()\n np.putmask(values, mask, fill_value)\n else:\n # np.where will promote if needed\n values = np.where(~mask, values, fill_value)\n\n return values, mask\n\n\ndef _get_dtype_max(dtype: np.dtype) -> np.dtype:\n # return a platform independent precision dtype\n dtype_max = dtype\n if dtype.kind in "bi":\n dtype_max = np.dtype(np.int64)\n elif dtype.kind == "u":\n dtype_max = np.dtype(np.uint64)\n elif dtype.kind == "f":\n dtype_max = np.dtype(np.float64)\n return dtype_max\n\n\ndef _na_ok_dtype(dtype: DtypeObj) -> bool:\n if needs_i8_conversion(dtype):\n return False\n return not issubclass(dtype.type, np.integer)\n\n\ndef _wrap_results(result, dtype: np.dtype, fill_value=None):\n """wrap our results if needed"""\n if result is NaT:\n pass\n\n elif dtype.kind == "M":\n if fill_value is None:\n # GH#24293\n fill_value = iNaT\n if not isinstance(result, np.ndarray):\n assert not isna(fill_value), "Expected non-null fill_value"\n if result == fill_value:\n result = np.nan\n\n if isna(result):\n result = np.datetime64("NaT", "ns").astype(dtype)\n else:\n result = np.int64(result).view(dtype)\n # retain original unit\n result = result.astype(dtype, copy=False)\n else:\n # If we have float dtype, taking a view will give the wrong result\n result = result.astype(dtype)\n elif dtype.kind == "m":\n if not isinstance(result, np.ndarray):\n if result == fill_value or np.isnan(result):\n result = np.timedelta64("NaT").astype(dtype)\n\n elif np.fabs(result) > lib.i8max:\n # raise if we have a timedelta64[ns] which is too large\n raise ValueError("overflow in timedelta operation")\n else:\n # return a timedelta64 with the original unit\n result = np.int64(result).astype(dtype, copy=False)\n\n else:\n result = result.astype("m8[ns]").view(dtype)\n\n return result\n\n\ndef _datetimelike_compat(func: F) -> F:\n """\n If we have datetime64 or timedelta64 values, ensure we have a correct\n mask before calling the wrapped function, then cast back afterwards.\n """\n\n @functools.wraps(func)\n def new_func(\n values: np.ndarray,\n *,\n axis: AxisInt | None = None,\n skipna: bool = True,\n mask: npt.NDArray[np.bool_] | None = None,\n **kwargs,\n ):\n orig_values = values\n\n datetimelike = values.dtype.kind in "mM"\n if datetimelike and mask is None:\n mask = isna(values)\n\n result = func(values, axis=axis, skipna=skipna, mask=mask, **kwargs)\n\n if datetimelike:\n result = _wrap_results(result, orig_values.dtype, fill_value=iNaT)\n if not skipna:\n assert mask is not None # checked above\n result = _mask_datetimelike_result(result, axis, mask, orig_values)\n\n return result\n\n return cast(F, new_func)\n\n\ndef _na_for_min_count(values: np.ndarray, axis: AxisInt | None) -> Scalar | np.ndarray:\n """\n Return the missing value for `values`.\n\n Parameters\n ----------\n values : ndarray\n axis : int or None\n axis for the reduction, required if 
values.ndim > 1.\n\n Returns\n -------\n result : scalar or ndarray\n For 1-D values, returns a scalar of the correct missing type.\n For 2-D values, returns a 1-D array where each element is missing.\n """\n # we either return np.nan or pd.NaT\n if values.dtype.kind in "iufcb":\n values = values.astype("float64")\n fill_value = na_value_for_dtype(values.dtype)\n\n if values.ndim == 1:\n return fill_value\n elif axis is None:\n return fill_value\n else:\n result_shape = values.shape[:axis] + values.shape[axis + 1 :]\n\n return np.full(result_shape, fill_value, dtype=values.dtype)\n\n\ndef maybe_operate_rowwise(func: F) -> F:\n """\n NumPy operations on C-contiguous ndarrays with axis=1 can be\n very slow if axis 1 >> axis 0.\n Operate row-by-row and concatenate the results.\n """\n\n @functools.wraps(func)\n def newfunc(values: np.ndarray, *, axis: AxisInt | None = None, **kwargs):\n if (\n axis == 1\n and values.ndim == 2\n and values.flags["C_CONTIGUOUS"]\n # only takes this path for wide arrays (long dataframes), for threshold see\n # https://github.com/pandas-dev/pandas/pull/43311#issuecomment-974891737\n and (values.shape[1] / 1000) > values.shape[0]\n and values.dtype != object\n and values.dtype != bool\n ):\n arrs = list(values)\n if kwargs.get("mask") is not None:\n mask = kwargs.pop("mask")\n results = [\n func(arrs[i], mask=mask[i], **kwargs) for i in range(len(arrs))\n ]\n else:\n results = [func(x, **kwargs) for x in arrs]\n return np.array(results)\n\n return func(values, axis=axis, **kwargs)\n\n return cast(F, newfunc)\n\n\ndef nanany(\n values: np.ndarray,\n *,\n axis: AxisInt | None = None,\n skipna: bool = True,\n mask: npt.NDArray[np.bool_] | None = None,\n) -> bool:\n """\n Check if any elements along an axis evaluate to True.\n\n Parameters\n ----------\n values : ndarray\n axis : int, optional\n skipna : bool, default True\n mask : ndarray[bool], optional\n nan-mask if known\n\n Returns\n -------\n result : bool\n\n Examples\n --------\n >>> from pandas.core import nanops\n >>> s = pd.Series([1, 2])\n >>> nanops.nanany(s.values)\n True\n\n >>> from pandas.core import nanops\n >>> s = pd.Series([np.nan])\n >>> nanops.nanany(s.values)\n False\n """\n if values.dtype.kind in "iub" and mask is None:\n # GH#26032 fastpath\n # error: Incompatible return value type (got "Union[bool_, ndarray]",\n # expected "bool")\n return values.any(axis) # type: ignore[return-value]\n\n if values.dtype.kind == "M":\n # GH#34479\n warnings.warn(\n "'any' with datetime64 dtypes is deprecated and will raise in a "\n "future version. 
Use (obj != pd.Timestamp(0)).any() instead.",\n FutureWarning,\n stacklevel=find_stack_level(),\n )\n\n values, _ = _get_values(values, skipna, fill_value=False, mask=mask)\n\n # For object type, any won't necessarily return\n # boolean values (numpy/numpy#4352)\n if values.dtype == object:\n values = values.astype(bool)\n\n # error: Incompatible return value type (got "Union[bool_, ndarray]", expected\n # "bool")\n return values.any(axis) # type: ignore[return-value]\n\n\ndef nanall(\n values: np.ndarray,\n *,\n axis: AxisInt | None = None,\n skipna: bool = True,\n mask: npt.NDArray[np.bool_] | None = None,\n) -> bool:\n """\n Check if all elements along an axis evaluate to True.\n\n Parameters\n ----------\n values : ndarray\n axis : int, optional\n skipna : bool, default True\n mask : ndarray[bool], optional\n nan-mask if known\n\n Returns\n -------\n result : bool\n\n Examples\n --------\n >>> from pandas.core import nanops\n >>> s = pd.Series([1, 2, np.nan])\n >>> nanops.nanall(s.values)\n True\n\n >>> from pandas.core import nanops\n >>> s = pd.Series([1, 0])\n >>> nanops.nanall(s.values)\n False\n """\n if values.dtype.kind in "iub" and mask is None:\n # GH#26032 fastpath\n # error: Incompatible return value type (got "Union[bool_, ndarray]",\n # expected "bool")\n return values.all(axis) # type: ignore[return-value]\n\n if values.dtype.kind == "M":\n # GH#34479\n warnings.warn(\n "'all' with datetime64 dtypes is deprecated and will raise in a "\n "future version. Use (obj != pd.Timestamp(0)).all() instead.",\n FutureWarning,\n stacklevel=find_stack_level(),\n )\n\n values, _ = _get_values(values, skipna, fill_value=True, mask=mask)\n\n # For object type, all won't necessarily return\n # boolean values (numpy/numpy#4352)\n if values.dtype == object:\n values = values.astype(bool)\n\n # error: Incompatible return value type (got "Union[bool_, ndarray]", expected\n # "bool")\n return values.all(axis) # type: ignore[return-value]\n\n\n@disallow("M8")\n@_datetimelike_compat\n@maybe_operate_rowwise\ndef nansum(\n values: np.ndarray,\n *,\n axis: AxisInt | None = None,\n skipna: bool = True,\n min_count: int = 0,\n mask: npt.NDArray[np.bool_] | None = None,\n) -> float:\n """\n Sum the elements along an axis ignoring NaNs\n\n Parameters\n ----------\n values : ndarray[dtype]\n axis : int, optional\n skipna : bool, default True\n min_count: int, default 0\n mask : ndarray[bool], optional\n nan-mask if known\n\n Returns\n -------\n result : dtype\n\n Examples\n --------\n >>> from pandas.core import nanops\n >>> s = pd.Series([1, 2, np.nan])\n >>> nanops.nansum(s.values)\n 3.0\n """\n dtype = values.dtype\n values, mask = _get_values(values, skipna, fill_value=0, mask=mask)\n dtype_sum = _get_dtype_max(dtype)\n if dtype.kind == "f":\n dtype_sum = dtype\n elif dtype.kind == "m":\n dtype_sum = np.dtype(np.float64)\n\n the_sum = values.sum(axis, dtype=dtype_sum)\n the_sum = _maybe_null_out(the_sum, axis, mask, values.shape, min_count=min_count)\n\n return the_sum\n\n\ndef _mask_datetimelike_result(\n result: np.ndarray | np.datetime64 | np.timedelta64,\n axis: AxisInt | None,\n mask: npt.NDArray[np.bool_],\n orig_values: np.ndarray,\n) -> np.ndarray | np.datetime64 | np.timedelta64 | NaTType:\n if isinstance(result, np.ndarray):\n # we need to apply the mask\n result = result.astype("i8").view(orig_values.dtype)\n axis_mask = mask.any(axis=axis)\n # error: Unsupported target for indexed assignment ("Union[ndarray[Any, Any],\n # datetime64, timedelta64]")\n result[axis_mask] = iNaT # type: 
ignore[index]\n else:\n if mask.any():\n return np.int64(iNaT).view(orig_values.dtype)\n return result\n\n\n@bottleneck_switch()\n@_datetimelike_compat\ndef nanmean(\n values: np.ndarray,\n *,\n axis: AxisInt | None = None,\n skipna: bool = True,\n mask: npt.NDArray[np.bool_] | None = None,\n) -> float:\n """\n Compute the mean of the element along an axis ignoring NaNs\n\n Parameters\n ----------\n values : ndarray\n axis : int, optional\n skipna : bool, default True\n mask : ndarray[bool], optional\n nan-mask if known\n\n Returns\n -------\n float\n Unless input is a float array, in which case use the same\n precision as the input array.\n\n Examples\n --------\n >>> from pandas.core import nanops\n >>> s = pd.Series([1, 2, np.nan])\n >>> nanops.nanmean(s.values)\n 1.5\n """\n dtype = values.dtype\n values, mask = _get_values(values, skipna, fill_value=0, mask=mask)\n dtype_sum = _get_dtype_max(dtype)\n dtype_count = np.dtype(np.float64)\n\n # not using needs_i8_conversion because that includes period\n if dtype.kind in "mM":\n dtype_sum = np.dtype(np.float64)\n elif dtype.kind in "iu":\n dtype_sum = np.dtype(np.float64)\n elif dtype.kind == "f":\n dtype_sum = dtype\n dtype_count = dtype\n\n count = _get_counts(values.shape, mask, axis, dtype=dtype_count)\n the_sum = values.sum(axis, dtype=dtype_sum)\n the_sum = _ensure_numeric(the_sum)\n\n if axis is not None and getattr(the_sum, "ndim", False):\n count = cast(np.ndarray, count)\n with np.errstate(all="ignore"):\n # suppress division by zero warnings\n the_mean = the_sum / count\n ct_mask = count == 0\n if ct_mask.any():\n the_mean[ct_mask] = np.nan\n else:\n the_mean = the_sum / count if count > 0 else np.nan\n\n return the_mean\n\n\n@bottleneck_switch()\ndef nanmedian(values, *, axis: AxisInt | None = None, skipna: bool = True, mask=None):\n """\n Parameters\n ----------\n values : ndarray\n axis : int, optional\n skipna : bool, default True\n mask : ndarray[bool], optional\n nan-mask if known\n\n Returns\n -------\n result : float\n Unless input is a float array, in which case use the same\n precision as the input array.\n\n Examples\n --------\n >>> from pandas.core import nanops\n >>> s = pd.Series([1, np.nan, 2, 2])\n >>> nanops.nanmedian(s.values)\n 2.0\n """\n # for floats without mask, the data already uses NaN as missing value\n # indicator, and `mask` will be calculated from that below -> in those\n # cases we never need to set NaN to the masked values\n using_nan_sentinel = values.dtype.kind == "f" and mask is None\n\n def get_median(x, _mask=None):\n if _mask is None:\n _mask = notna(x)\n else:\n _mask = ~_mask\n if not skipna and not _mask.all():\n return np.nan\n with warnings.catch_warnings():\n # Suppress RuntimeWarning about All-NaN slice\n warnings.filterwarnings(\n "ignore", "All-NaN slice encountered", RuntimeWarning\n )\n res = np.nanmedian(x[_mask])\n return res\n\n dtype = values.dtype\n values, mask = _get_values(values, skipna, mask=mask, fill_value=None)\n if values.dtype.kind != "f":\n if values.dtype == object:\n # GH#34671 avoid casting strings to numeric\n inferred = lib.infer_dtype(values)\n if inferred in ["string", "mixed"]:\n raise TypeError(f"Cannot convert {values} to numeric")\n try:\n values = values.astype("f8")\n except ValueError as err:\n # e.g. 
"could not convert string to float: 'a'"\n raise TypeError(str(err)) from err\n if not using_nan_sentinel and mask is not None:\n if not values.flags.writeable:\n values = values.copy()\n values[mask] = np.nan\n\n notempty = values.size\n\n # an array from a frame\n if values.ndim > 1 and axis is not None:\n # there's a non-empty array to apply over otherwise numpy raises\n if notempty:\n if not skipna:\n res = np.apply_along_axis(get_median, axis, values)\n\n else:\n # fastpath for the skipna case\n with warnings.catch_warnings():\n # Suppress RuntimeWarning about All-NaN slice\n warnings.filterwarnings(\n "ignore", "All-NaN slice encountered", RuntimeWarning\n )\n if (values.shape[1] == 1 and axis == 0) or (\n values.shape[0] == 1 and axis == 1\n ):\n # GH52788: fastpath when squeezable, nanmedian for 2D array slow\n res = np.nanmedian(np.squeeze(values), keepdims=True)\n else:\n res = np.nanmedian(values, axis=axis)\n\n else:\n # must return the correct shape, but median is not defined for the\n # empty set so return nans of shape "everything but the passed axis"\n # since "axis" is where the reduction would occur if we had a nonempty\n # array\n res = _get_empty_reduction_result(values.shape, axis)\n\n else:\n # otherwise return a scalar value\n res = get_median(values, mask) if notempty else np.nan\n return _wrap_results(res, dtype)\n\n\ndef _get_empty_reduction_result(\n shape: Shape,\n axis: AxisInt,\n) -> np.ndarray:\n """\n The result from a reduction on an empty ndarray.\n\n Parameters\n ----------\n shape : Tuple[int, ...]\n axis : int\n\n Returns\n -------\n np.ndarray\n """\n shp = np.array(shape)\n dims = np.arange(len(shape))\n ret = np.empty(shp[dims != axis], dtype=np.float64)\n ret.fill(np.nan)\n return ret\n\n\ndef _get_counts_nanvar(\n values_shape: Shape,\n mask: npt.NDArray[np.bool_] | None,\n axis: AxisInt | None,\n ddof: int,\n dtype: np.dtype = np.dtype(np.float64),\n) -> tuple[float | np.ndarray, float | np.ndarray]:\n """\n Get the count of non-null values along an axis, accounting\n for degrees of freedom.\n\n Parameters\n ----------\n values_shape : Tuple[int, ...]\n shape tuple from values ndarray, used if mask is None\n mask : Optional[ndarray[bool]]\n locations in values that should be considered missing\n axis : Optional[int]\n axis to count along\n ddof : int\n degrees of freedom\n dtype : type, optional\n type to use for count\n\n Returns\n -------\n count : int, np.nan or np.ndarray\n d : int, np.nan or np.ndarray\n """\n count = _get_counts(values_shape, mask, axis, dtype=dtype)\n d = count - dtype.type(ddof)\n\n # always return NaN, never inf\n if is_float(count):\n if count <= ddof:\n # error: Incompatible types in assignment (expression has type\n # "float", variable has type "Union[floating[Any], ndarray[Any,\n # dtype[floating[Any]]]]")\n count = np.nan # type: ignore[assignment]\n d = np.nan\n else:\n # count is not narrowed by is_float check\n count = cast(np.ndarray, count)\n mask = count <= ddof\n if mask.any():\n np.putmask(d, mask, np.nan)\n np.putmask(count, mask, np.nan)\n return count, d\n\n\n@bottleneck_switch(ddof=1)\ndef nanstd(\n values,\n *,\n axis: AxisInt | None = None,\n skipna: bool = True,\n ddof: int = 1,\n mask=None,\n):\n """\n Compute the standard deviation along given axis while ignoring NaNs\n\n Parameters\n ----------\n values : ndarray\n axis : int, optional\n skipna : bool, default True\n ddof : int, default 1\n Delta Degrees of Freedom. 
The divisor used in calculations is N - ddof,\n where N represents the number of elements.\n mask : ndarray[bool], optional\n nan-mask if known\n\n Returns\n -------\n result : float\n Unless input is a float array, in which case use the same\n precision as the input array.\n\n Examples\n --------\n >>> from pandas.core import nanops\n >>> s = pd.Series([1, np.nan, 2, 3])\n >>> nanops.nanstd(s.values)\n 1.0\n """\n if values.dtype == "M8[ns]":\n values = values.view("m8[ns]")\n\n orig_dtype = values.dtype\n values, mask = _get_values(values, skipna, mask=mask)\n\n result = np.sqrt(nanvar(values, axis=axis, skipna=skipna, ddof=ddof, mask=mask))\n return _wrap_results(result, orig_dtype)\n\n\n@disallow("M8", "m8")\n@bottleneck_switch(ddof=1)\ndef nanvar(\n values: np.ndarray,\n *,\n axis: AxisInt | None = None,\n skipna: bool = True,\n ddof: int = 1,\n mask=None,\n):\n """\n Compute the variance along given axis while ignoring NaNs\n\n Parameters\n ----------\n values : ndarray\n axis : int, optional\n skipna : bool, default True\n ddof : int, default 1\n Delta Degrees of Freedom. The divisor used in calculations is N - ddof,\n where N represents the number of elements.\n mask : ndarray[bool], optional\n nan-mask if known\n\n Returns\n -------\n result : float\n Unless input is a float array, in which case use the same\n precision as the input array.\n\n Examples\n --------\n >>> from pandas.core import nanops\n >>> s = pd.Series([1, np.nan, 2, 3])\n >>> nanops.nanvar(s.values)\n 1.0\n """\n dtype = values.dtype\n mask = _maybe_get_mask(values, skipna, mask)\n if dtype.kind in "iu":\n values = values.astype("f8")\n if mask is not None:\n values[mask] = np.nan\n\n if values.dtype.kind == "f":\n count, d = _get_counts_nanvar(values.shape, mask, axis, ddof, values.dtype)\n else:\n count, d = _get_counts_nanvar(values.shape, mask, axis, ddof)\n\n if skipna and mask is not None:\n values = values.copy()\n np.putmask(values, mask, 0)\n\n # xref GH10242\n # Compute variance via two-pass algorithm, which is stable against\n # cancellation errors and relatively accurate for small numbers of\n # observations.\n #\n # See https://en.wikipedia.org/wiki/Algorithms_for_calculating_variance\n avg = _ensure_numeric(values.sum(axis=axis, dtype=np.float64)) / count\n if axis is not None:\n avg = np.expand_dims(avg, axis)\n sqr = _ensure_numeric((avg - values) ** 2)\n if mask is not None:\n np.putmask(sqr, mask, 0)\n result = sqr.sum(axis=axis, dtype=np.float64) / d\n\n # Return variance as np.float64 (the datatype used in the accumulator),\n # unless we were dealing with a float array, in which case use the same\n # precision as the original values array.\n if dtype.kind == "f":\n result = result.astype(dtype, copy=False)\n return result\n\n\n@disallow("M8", "m8")\ndef nansem(\n values: np.ndarray,\n *,\n axis: AxisInt | None = None,\n skipna: bool = True,\n ddof: int = 1,\n mask: npt.NDArray[np.bool_] | None = None,\n) -> float:\n """\n Compute the standard error in the mean along given axis while ignoring NaNs\n\n Parameters\n ----------\n values : ndarray\n axis : int, optional\n skipna : bool, default True\n ddof : int, default 1\n Delta Degrees of Freedom. 
The divisor used in calculations is N - ddof,\n where N represents the number of elements.\n mask : ndarray[bool], optional\n nan-mask if known\n\n Returns\n -------\n result : float64\n Unless input is a float array, in which case use the same\n precision as the input array.\n\n Examples\n --------\n >>> from pandas.core import nanops\n >>> s = pd.Series([1, np.nan, 2, 3])\n >>> nanops.nansem(s.values)\n 0.5773502691896258\n """\n # This checks if non-numeric-like data is passed with numeric_only=False\n # and raises a TypeError otherwise\n nanvar(values, axis=axis, skipna=skipna, ddof=ddof, mask=mask)\n\n mask = _maybe_get_mask(values, skipna, mask)\n if values.dtype.kind != "f":\n values = values.astype("f8")\n\n if not skipna and mask is not None and mask.any():\n return np.nan\n\n count, _ = _get_counts_nanvar(values.shape, mask, axis, ddof, values.dtype)\n var = nanvar(values, axis=axis, skipna=skipna, ddof=ddof, mask=mask)\n\n return np.sqrt(var) / np.sqrt(count)\n\n\ndef _nanminmax(meth, fill_value_typ):\n @bottleneck_switch(name=f"nan{meth}")\n @_datetimelike_compat\n def reduction(\n values: np.ndarray,\n *,\n axis: AxisInt | None = None,\n skipna: bool = True,\n mask: npt.NDArray[np.bool_] | None = None,\n ):\n if values.size == 0:\n return _na_for_min_count(values, axis)\n\n values, mask = _get_values(\n values, skipna, fill_value_typ=fill_value_typ, mask=mask\n )\n result = getattr(values, meth)(axis)\n result = _maybe_null_out(result, axis, mask, values.shape)\n return result\n\n return reduction\n\n\nnanmin = _nanminmax("min", fill_value_typ="+inf")\nnanmax = _nanminmax("max", fill_value_typ="-inf")\n\n\ndef nanargmax(\n values: np.ndarray,\n *,\n axis: AxisInt | None = None,\n skipna: bool = True,\n mask: npt.NDArray[np.bool_] | None = None,\n) -> int | np.ndarray:\n """\n Parameters\n ----------\n values : ndarray\n axis : int, optional\n skipna : bool, default True\n mask : ndarray[bool], optional\n nan-mask if known\n\n Returns\n -------\n result : int or ndarray[int]\n The index/indices of max value in specified axis or -1 in the NA case\n\n Examples\n --------\n >>> from pandas.core import nanops\n >>> arr = np.array([1, 2, 3, np.nan, 4])\n >>> nanops.nanargmax(arr)\n 4\n\n >>> arr = np.array(range(12), dtype=np.float64).reshape(4, 3)\n >>> arr[2:, 2] = np.nan\n >>> arr\n array([[ 0., 1., 2.],\n [ 3., 4., 5.],\n [ 6., 7., nan],\n [ 9., 10., nan]])\n >>> nanops.nanargmax(arr, axis=1)\n array([2, 2, 1, 1])\n """\n values, mask = _get_values(values, True, fill_value_typ="-inf", mask=mask)\n result = values.argmax(axis)\n # error: Argument 1 to "_maybe_arg_null_out" has incompatible type "Any |\n # signedinteger[Any]"; expected "ndarray[Any, Any]"\n result = _maybe_arg_null_out(result, axis, mask, skipna) # type: ignore[arg-type]\n return result\n\n\ndef nanargmin(\n values: np.ndarray,\n *,\n axis: AxisInt | None = None,\n skipna: bool = True,\n mask: npt.NDArray[np.bool_] | None = None,\n) -> int | np.ndarray:\n """\n Parameters\n ----------\n values : ndarray\n axis : int, optional\n skipna : bool, default True\n mask : ndarray[bool], optional\n nan-mask if known\n\n Returns\n -------\n result : int or ndarray[int]\n The index/indices of min value in specified axis or -1 in the NA case\n\n Examples\n --------\n >>> from pandas.core import nanops\n >>> arr = np.array([1, 2, 3, np.nan, 4])\n >>> nanops.nanargmin(arr)\n 0\n\n >>> arr = np.array(range(12), dtype=np.float64).reshape(4, 3)\n >>> arr[2:, 0] = np.nan\n >>> arr\n array([[ 0., 1., 2.],\n [ 3., 4., 5.],\n [nan, 
7., 8.],\n [nan, 10., 11.]])\n >>> nanops.nanargmin(arr, axis=1)\n array([0, 0, 1, 1])\n """\n values, mask = _get_values(values, True, fill_value_typ="+inf", mask=mask)\n result = values.argmin(axis)\n # error: Argument 1 to "_maybe_arg_null_out" has incompatible type "Any |\n # signedinteger[Any]"; expected "ndarray[Any, Any]"\n result = _maybe_arg_null_out(result, axis, mask, skipna) # type: ignore[arg-type]\n return result\n\n\n@disallow("M8", "m8")\n@maybe_operate_rowwise\ndef nanskew(\n values: np.ndarray,\n *,\n axis: AxisInt | None = None,\n skipna: bool = True,\n mask: npt.NDArray[np.bool_] | None = None,\n) -> float:\n """\n Compute the sample skewness.\n\n The statistic computed here is the adjusted Fisher-Pearson standardized\n moment coefficient G1. The algorithm computes this coefficient directly\n from the second and third central moment.\n\n Parameters\n ----------\n values : ndarray\n axis : int, optional\n skipna : bool, default True\n mask : ndarray[bool], optional\n nan-mask if known\n\n Returns\n -------\n result : float64\n Unless input is a float array, in which case use the same\n precision as the input array.\n\n Examples\n --------\n >>> from pandas.core import nanops\n >>> s = pd.Series([1, np.nan, 1, 2])\n >>> nanops.nanskew(s.values)\n 1.7320508075688787\n """\n mask = _maybe_get_mask(values, skipna, mask)\n if values.dtype.kind != "f":\n values = values.astype("f8")\n count = _get_counts(values.shape, mask, axis)\n else:\n count = _get_counts(values.shape, mask, axis, dtype=values.dtype)\n\n if skipna and mask is not None:\n values = values.copy()\n np.putmask(values, mask, 0)\n elif not skipna and mask is not None and mask.any():\n return np.nan\n\n with np.errstate(invalid="ignore", divide="ignore"):\n mean = values.sum(axis, dtype=np.float64) / count\n if axis is not None:\n mean = np.expand_dims(mean, axis)\n\n adjusted = values - mean\n if skipna and mask is not None:\n np.putmask(adjusted, mask, 0)\n adjusted2 = adjusted**2\n adjusted3 = adjusted2 * adjusted\n m2 = adjusted2.sum(axis, dtype=np.float64)\n m3 = adjusted3.sum(axis, dtype=np.float64)\n\n # floating point error\n #\n # #18044 in _libs/windows.pyx calc_skew follow this behavior\n # to fix the fperr to treat m2 <1e-14 as zero\n m2 = _zero_out_fperr(m2)\n m3 = _zero_out_fperr(m3)\n\n with np.errstate(invalid="ignore", divide="ignore"):\n result = (count * (count - 1) ** 0.5 / (count - 2)) * (m3 / m2**1.5)\n\n dtype = values.dtype\n if dtype.kind == "f":\n result = result.astype(dtype, copy=False)\n\n if isinstance(result, np.ndarray):\n result = np.where(m2 == 0, 0, result)\n result[count < 3] = np.nan\n else:\n result = dtype.type(0) if m2 == 0 else result\n if count < 3:\n return np.nan\n\n return result\n\n\n@disallow("M8", "m8")\n@maybe_operate_rowwise\ndef nankurt(\n values: np.ndarray,\n *,\n axis: AxisInt | None = None,\n skipna: bool = True,\n mask: npt.NDArray[np.bool_] | None = None,\n) -> float:\n """\n Compute the sample excess kurtosis\n\n The statistic computed here is the adjusted Fisher-Pearson standardized\n moment coefficient G2, computed directly from the second and fourth\n central moment.\n\n Parameters\n ----------\n values : ndarray\n axis : int, optional\n skipna : bool, default True\n mask : ndarray[bool], optional\n nan-mask if known\n\n Returns\n -------\n result : float64\n Unless input is a float array, in which case use the same\n precision as the input array.\n\n Examples\n --------\n >>> from pandas.core import nanops\n >>> s = pd.Series([1, np.nan, 1, 3, 2])\n 
>>> nanops.nankurt(s.values)\n -1.2892561983471076\n """\n mask = _maybe_get_mask(values, skipna, mask)\n if values.dtype.kind != "f":\n values = values.astype("f8")\n count = _get_counts(values.shape, mask, axis)\n else:\n count = _get_counts(values.shape, mask, axis, dtype=values.dtype)\n\n if skipna and mask is not None:\n values = values.copy()\n np.putmask(values, mask, 0)\n elif not skipna and mask is not None and mask.any():\n return np.nan\n\n with np.errstate(invalid="ignore", divide="ignore"):\n mean = values.sum(axis, dtype=np.float64) / count\n if axis is not None:\n mean = np.expand_dims(mean, axis)\n\n adjusted = values - mean\n if skipna and mask is not None:\n np.putmask(adjusted, mask, 0)\n adjusted2 = adjusted**2\n adjusted4 = adjusted2**2\n m2 = adjusted2.sum(axis, dtype=np.float64)\n m4 = adjusted4.sum(axis, dtype=np.float64)\n\n with np.errstate(invalid="ignore", divide="ignore"):\n adj = 3 * (count - 1) ** 2 / ((count - 2) * (count - 3))\n numerator = count * (count + 1) * (count - 1) * m4\n denominator = (count - 2) * (count - 3) * m2**2\n\n # floating point error\n #\n # #18044 in _libs/windows.pyx calc_kurt follow this behavior\n # to fix the fperr to treat denom <1e-14 as zero\n numerator = _zero_out_fperr(numerator)\n denominator = _zero_out_fperr(denominator)\n\n if not isinstance(denominator, np.ndarray):\n # if ``denom`` is a scalar, check these corner cases first before\n # doing division\n if count < 4:\n return np.nan\n if denominator == 0:\n return values.dtype.type(0)\n\n with np.errstate(invalid="ignore", divide="ignore"):\n result = numerator / denominator - adj\n\n dtype = values.dtype\n if dtype.kind == "f":\n result = result.astype(dtype, copy=False)\n\n if isinstance(result, np.ndarray):\n result = np.where(denominator == 0, 0, result)\n result[count < 4] = np.nan\n\n return result\n\n\n@disallow("M8", "m8")\n@maybe_operate_rowwise\ndef nanprod(\n values: np.ndarray,\n *,\n axis: AxisInt | None = None,\n skipna: bool = True,\n min_count: int = 0,\n mask: npt.NDArray[np.bool_] | None = None,\n) -> float:\n """\n Parameters\n ----------\n values : ndarray[dtype]\n axis : int, optional\n skipna : bool, default True\n min_count: int, default 0\n mask : ndarray[bool], optional\n nan-mask if known\n\n Returns\n -------\n Dtype\n The product of all elements on a given axis. 
( NaNs are treated as 1)\n\n Examples\n --------\n >>> from pandas.core import nanops\n >>> s = pd.Series([1, 2, 3, np.nan])\n >>> nanops.nanprod(s.values)\n 6.0\n """\n mask = _maybe_get_mask(values, skipna, mask)\n\n if skipna and mask is not None:\n values = values.copy()\n values[mask] = 1\n result = values.prod(axis)\n # error: Incompatible return value type (got "Union[ndarray, float]", expected\n # "float")\n return _maybe_null_out( # type: ignore[return-value]\n result, axis, mask, values.shape, min_count=min_count\n )\n\n\ndef _maybe_arg_null_out(\n result: np.ndarray,\n axis: AxisInt | None,\n mask: npt.NDArray[np.bool_] | None,\n skipna: bool,\n) -> np.ndarray | int:\n # helper function for nanargmin/nanargmax\n if mask is None:\n return result\n\n if axis is None or not getattr(result, "ndim", False):\n if skipna:\n if mask.all():\n return -1\n else:\n if mask.any():\n return -1\n else:\n if skipna:\n na_mask = mask.all(axis)\n else:\n na_mask = mask.any(axis)\n if na_mask.any():\n result[na_mask] = -1\n return result\n\n\ndef _get_counts(\n values_shape: Shape,\n mask: npt.NDArray[np.bool_] | None,\n axis: AxisInt | None,\n dtype: np.dtype[np.floating] = np.dtype(np.float64),\n) -> np.floating | npt.NDArray[np.floating]:\n """\n Get the count of non-null values along an axis\n\n Parameters\n ----------\n values_shape : tuple of int\n shape tuple from values ndarray, used if mask is None\n mask : Optional[ndarray[bool]]\n locations in values that should be considered missing\n axis : Optional[int]\n axis to count along\n dtype : type, optional\n type to use for count\n\n Returns\n -------\n count : scalar or array\n """\n if axis is None:\n if mask is not None:\n n = mask.size - mask.sum()\n else:\n n = np.prod(values_shape)\n return dtype.type(n)\n\n if mask is not None:\n count = mask.shape[axis] - mask.sum(axis)\n else:\n count = values_shape[axis]\n\n if is_integer(count):\n return dtype.type(count)\n return count.astype(dtype, copy=False)\n\n\ndef _maybe_null_out(\n result: np.ndarray | float | NaTType,\n axis: AxisInt | None,\n mask: npt.NDArray[np.bool_] | None,\n shape: tuple[int, ...],\n min_count: int = 1,\n) -> np.ndarray | float | NaTType:\n """\n Returns\n -------\n Dtype\n The product of all elements on a given axis. 
( NaNs are treated as 1)\n """\n if mask is None and min_count == 0:\n # nothing to check; short-circuit\n return result\n\n if axis is not None and isinstance(result, np.ndarray):\n if mask is not None:\n null_mask = (mask.shape[axis] - mask.sum(axis) - min_count) < 0\n else:\n # we have no nulls, kept mask=None in _maybe_get_mask\n below_count = shape[axis] - min_count < 0\n new_shape = shape[:axis] + shape[axis + 1 :]\n null_mask = np.broadcast_to(below_count, new_shape)\n\n if np.any(null_mask):\n if is_numeric_dtype(result):\n if np.iscomplexobj(result):\n result = result.astype("c16")\n elif not is_float_dtype(result):\n result = result.astype("f8", copy=False)\n result[null_mask] = np.nan\n else:\n # GH12941, use None to auto cast null\n result[null_mask] = None\n elif result is not NaT:\n if check_below_min_count(shape, mask, min_count):\n result_dtype = getattr(result, "dtype", None)\n if is_float_dtype(result_dtype):\n # error: Item "None" of "Optional[Any]" has no attribute "type"\n result = result_dtype.type("nan") # type: ignore[union-attr]\n else:\n result = np.nan\n\n return result\n\n\ndef check_below_min_count(\n shape: tuple[int, ...], mask: npt.NDArray[np.bool_] | None, min_count: int\n) -> bool:\n """\n Check for the `min_count` keyword. Returns True if below `min_count` (when\n missing value should be returned from the reduction).\n\n Parameters\n ----------\n shape : tuple\n The shape of the values (`values.shape`).\n mask : ndarray[bool] or None\n Boolean numpy array (typically of same shape as `shape`) or None.\n min_count : int\n Keyword passed through from sum/prod call.\n\n Returns\n -------\n bool\n """\n if min_count > 0:\n if mask is None:\n # no missing values, only check size\n non_nulls = np.prod(shape)\n else:\n non_nulls = mask.size - mask.sum()\n if non_nulls < min_count:\n return True\n return False\n\n\ndef _zero_out_fperr(arg):\n # #18044 reference this behavior to fix rolling skew/kurt issue\n if isinstance(arg, np.ndarray):\n return np.where(np.abs(arg) < 1e-14, 0, arg)\n else:\n return arg.dtype.type(0) if np.abs(arg) < 1e-14 else arg\n\n\n@disallow("M8", "m8")\ndef nancorr(\n a: np.ndarray,\n b: np.ndarray,\n *,\n method: CorrelationMethod = "pearson",\n min_periods: int | None = None,\n) -> float:\n """\n a, b: ndarrays\n """\n if len(a) != len(b):\n raise AssertionError("Operands to nancorr must have same size")\n\n if min_periods is None:\n min_periods = 1\n\n valid = notna(a) & notna(b)\n if not valid.all():\n a = a[valid]\n b = b[valid]\n\n if len(a) < min_periods:\n return np.nan\n\n a = _ensure_numeric(a)\n b = _ensure_numeric(b)\n\n f = get_corr_func(method)\n return f(a, b)\n\n\ndef get_corr_func(\n method: CorrelationMethod,\n) -> Callable[[np.ndarray, np.ndarray], float]:\n if method == "kendall":\n from scipy.stats import kendalltau\n\n def func(a, b):\n return kendalltau(a, b)[0]\n\n return func\n elif method == "spearman":\n from scipy.stats import spearmanr\n\n def func(a, b):\n return spearmanr(a, b)[0]\n\n return func\n elif method == "pearson":\n\n def func(a, b):\n return np.corrcoef(a, b)[0, 1]\n\n return func\n elif callable(method):\n return method\n\n raise ValueError(\n f"Unknown method '{method}', expected one of "\n "'kendall', 'spearman', 'pearson', or callable"\n )\n\n\n@disallow("M8", "m8")\ndef nancov(\n a: np.ndarray,\n b: np.ndarray,\n *,\n min_periods: int | None = None,\n ddof: int | None = 1,\n) -> float:\n if len(a) != len(b):\n raise AssertionError("Operands to nancov must have same size")\n\n if min_periods is 
None:\n min_periods = 1\n\n valid = notna(a) & notna(b)\n if not valid.all():\n a = a[valid]\n b = b[valid]\n\n if len(a) < min_periods:\n return np.nan\n\n a = _ensure_numeric(a)\n b = _ensure_numeric(b)\n\n return np.cov(a, b, ddof=ddof)[0, 1]\n\n\ndef _ensure_numeric(x):\n if isinstance(x, np.ndarray):\n if x.dtype.kind in "biu":\n x = x.astype(np.float64)\n elif x.dtype == object:\n inferred = lib.infer_dtype(x)\n if inferred in ["string", "mixed"]:\n # GH#44008, GH#36703 avoid casting e.g. strings to numeric\n raise TypeError(f"Could not convert {x} to numeric")\n try:\n x = x.astype(np.complex128)\n except (TypeError, ValueError):\n try:\n x = x.astype(np.float64)\n except ValueError as err:\n # GH#29941 we get here with object arrays containing strs\n raise TypeError(f"Could not convert {x} to numeric") from err\n else:\n if not np.any(np.imag(x)):\n x = x.real\n elif not (is_float(x) or is_integer(x) or is_complex(x)):\n if isinstance(x, str):\n # GH#44008, GH#36703 avoid casting e.g. strings to numeric\n raise TypeError(f"Could not convert string '{x}' to numeric")\n try:\n x = float(x)\n except (TypeError, ValueError):\n # e.g. "1+1j" or "foo"\n try:\n x = complex(x)\n except ValueError as err:\n # e.g. "foo"\n raise TypeError(f"Could not convert {x} to numeric") from err\n return x\n\n\ndef na_accum_func(values: ArrayLike, accum_func, *, skipna: bool) -> ArrayLike:\n """\n Cumulative function with skipna support.\n\n Parameters\n ----------\n values : np.ndarray or ExtensionArray\n accum_func : {np.cumprod, np.maximum.accumulate, np.cumsum, np.minimum.accumulate}\n skipna : bool\n\n Returns\n -------\n np.ndarray or ExtensionArray\n """\n mask_a, mask_b = {\n np.cumprod: (1.0, np.nan),\n np.maximum.accumulate: (-np.inf, np.nan),\n np.cumsum: (0.0, np.nan),\n np.minimum.accumulate: (np.inf, np.nan),\n }[accum_func]\n\n # This should go through ea interface\n assert values.dtype.kind not in "mM"\n\n # We will be applying this function to block values\n if skipna and not issubclass(values.dtype.type, (np.integer, np.bool_)):\n vals = values.copy()\n mask = isna(vals)\n vals[mask] = mask_a\n result = accum_func(vals, axis=0)\n result[mask] = mask_b\n else:\n result = accum_func(values, axis=0)\n\n return result\n
.venv\Lib\site-packages\pandas\core\nanops.py
nanops.py
Python
50,984
0.75
0.157895
0.102234
react-lib
268
2025-03-27T15:07:23.341491
Apache-2.0
false
91ae7199f2be43ce8211e8ced1f37a20
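The nanops.py entry above builds its reductions around an optional NaN mask; nanvar in particular computes the variance with a two-pass mean/sum-of-squares scheme for numerical stability (see the GH10242 comment in the recorded source). The following self-contained sketch reproduces that computation on a 1-D array; it is not part of the recorded file, it skips the dtype-promotion and skipna branches of the real function, and the variable names are illustrative only.

import numpy as np

values = np.array([1.0, np.nan, 2.0, 3.0])
mask = np.isnan(values)
ddof = 1

count = values.size - mask.sum()      # non-NA observations -> 3
d = count - ddof                      # denominator with degrees of freedom -> 2

vals = np.where(mask, 0.0, values)    # masked slots contribute 0 to the sums
avg = vals.sum() / count              # pass 1: mean of the valid values -> 2.0
sqr = (avg - vals) ** 2               # pass 2: squared deviations
sqr[mask] = 0.0                       # zero out the masked slot again
result = sqr.sum() / d

print(result)                         # 1.0
print(np.nanvar(values, ddof=ddof))   # 1.0, NumPy agrees on this input

The same value appears in the nanvar docstring example recorded above, which is a convenient cross-check that this sketch tracks the intended behaviour.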
from __future__ import annotations\n\nimport copy\nfrom textwrap import dedent\nfrom typing import (\n TYPE_CHECKING,\n Callable,\n Literal,\n cast,\n final,\n no_type_check,\n)\nimport warnings\n\nimport numpy as np\n\nfrom pandas._libs import lib\nfrom pandas._libs.tslibs import (\n BaseOffset,\n IncompatibleFrequency,\n NaT,\n Period,\n Timedelta,\n Timestamp,\n to_offset,\n)\nfrom pandas._libs.tslibs.dtypes import freq_to_period_freqstr\nfrom pandas._typing import NDFrameT\nfrom pandas.compat.numpy import function as nv\nfrom pandas.errors import AbstractMethodError\nfrom pandas.util._decorators import (\n Appender,\n Substitution,\n doc,\n)\nfrom pandas.util._exceptions import (\n find_stack_level,\n rewrite_warning,\n)\n\nfrom pandas.core.dtypes.dtypes import ArrowDtype\nfrom pandas.core.dtypes.generic import (\n ABCDataFrame,\n ABCSeries,\n)\n\nimport pandas.core.algorithms as algos\nfrom pandas.core.apply import (\n ResamplerWindowApply,\n warn_alias_replacement,\n)\nfrom pandas.core.arrays import ArrowExtensionArray\nfrom pandas.core.base import (\n PandasObject,\n SelectionMixin,\n)\nimport pandas.core.common as com\nfrom pandas.core.generic import (\n NDFrame,\n _shared_docs,\n)\nfrom pandas.core.groupby.generic import SeriesGroupBy\nfrom pandas.core.groupby.groupby import (\n BaseGroupBy,\n GroupBy,\n _apply_groupings_depr,\n _pipe_template,\n get_groupby,\n)\nfrom pandas.core.groupby.grouper import Grouper\nfrom pandas.core.groupby.ops import BinGrouper\nfrom pandas.core.indexes.api import MultiIndex\nfrom pandas.core.indexes.base import Index\nfrom pandas.core.indexes.datetimes import (\n DatetimeIndex,\n date_range,\n)\nfrom pandas.core.indexes.period import (\n PeriodIndex,\n period_range,\n)\nfrom pandas.core.indexes.timedeltas import (\n TimedeltaIndex,\n timedelta_range,\n)\n\nfrom pandas.tseries.frequencies import (\n is_subperiod,\n is_superperiod,\n)\nfrom pandas.tseries.offsets import (\n Day,\n Tick,\n)\n\nif TYPE_CHECKING:\n from collections.abc import Hashable\n\n from pandas._typing import (\n AnyArrayLike,\n Axis,\n AxisInt,\n Frequency,\n IndexLabel,\n InterpolateOptions,\n T,\n TimedeltaConvertibleTypes,\n TimeGrouperOrigin,\n TimestampConvertibleTypes,\n npt,\n )\n\n from pandas import (\n DataFrame,\n Series,\n )\n\n_shared_docs_kwargs: dict[str, str] = {}\n\n\nclass Resampler(BaseGroupBy, PandasObject):\n """\n Class for resampling datetimelike data, a groupby-like operation.\n See aggregate, transform, and apply functions on this object.\n\n It's easiest to use obj.resample(...) 
to use Resampler.\n\n Parameters\n ----------\n obj : Series or DataFrame\n groupby : TimeGrouper\n axis : int, default 0\n kind : str or None\n 'period', 'timestamp' to override default index treatment\n\n Returns\n -------\n a Resampler of the appropriate type\n\n Notes\n -----\n After resampling, see aggregate, apply, and transform functions.\n """\n\n _grouper: BinGrouper\n _timegrouper: TimeGrouper\n binner: DatetimeIndex | TimedeltaIndex | PeriodIndex # depends on subclass\n exclusions: frozenset[Hashable] = frozenset() # for SelectionMixin compat\n _internal_names_set = set({"obj", "ax", "_indexer"})\n\n # to the groupby descriptor\n _attributes = [\n "freq",\n "axis",\n "closed",\n "label",\n "convention",\n "kind",\n "origin",\n "offset",\n ]\n\n def __init__(\n self,\n obj: NDFrame,\n timegrouper: TimeGrouper,\n axis: Axis = 0,\n kind=None,\n *,\n gpr_index: Index,\n group_keys: bool = False,\n selection=None,\n include_groups: bool = True,\n ) -> None:\n self._timegrouper = timegrouper\n self.keys = None\n self.sort = True\n self.axis = obj._get_axis_number(axis)\n self.kind = kind\n self.group_keys = group_keys\n self.as_index = True\n self.include_groups = include_groups\n\n self.obj, self.ax, self._indexer = self._timegrouper._set_grouper(\n self._convert_obj(obj), sort=True, gpr_index=gpr_index\n )\n self.binner, self._grouper = self._get_binner()\n self._selection = selection\n if self._timegrouper.key is not None:\n self.exclusions = frozenset([self._timegrouper.key])\n else:\n self.exclusions = frozenset()\n\n @final\n def __str__(self) -> str:\n """\n Provide a nice str repr of our rolling object.\n """\n attrs = (\n f"{k}={getattr(self._timegrouper, k)}"\n for k in self._attributes\n if getattr(self._timegrouper, k, None) is not None\n )\n return f"{type(self).__name__} [{', '.join(attrs)}]"\n\n @final\n def __getattr__(self, attr: str):\n if attr in self._internal_names_set:\n return object.__getattribute__(self, attr)\n if attr in self._attributes:\n return getattr(self._timegrouper, attr)\n if attr in self.obj:\n return self[attr]\n\n return object.__getattribute__(self, attr)\n\n @final\n @property\n def _from_selection(self) -> bool:\n """\n Is the resampling from a DataFrame column or MultiIndex level.\n """\n # upsampling and PeriodIndex resampling do not work\n # with selection, this state used to catch and raise an error\n return self._timegrouper is not None and (\n self._timegrouper.key is not None or self._timegrouper.level is not None\n )\n\n def _convert_obj(self, obj: NDFrameT) -> NDFrameT:\n """\n Provide any conversions for the object in order to correctly handle.\n\n Parameters\n ----------\n obj : Series or DataFrame\n\n Returns\n -------\n Series or DataFrame\n """\n return obj._consolidate()\n\n def _get_binner_for_time(self):\n raise AbstractMethodError(self)\n\n @final\n def _get_binner(self):\n """\n Create the BinGrouper, assume that self.set_grouper(obj)\n has already been called.\n """\n binner, bins, binlabels = self._get_binner_for_time()\n assert len(bins) == len(binlabels)\n bin_grouper = BinGrouper(bins, binlabels, indexer=self._indexer)\n return binner, bin_grouper\n\n @final\n @Substitution(\n klass="Resampler",\n examples="""\n >>> df = pd.DataFrame({'A': [1, 2, 3, 4]},\n ... 
index=pd.date_range('2012-08-02', periods=4))\n >>> df\n A\n 2012-08-02 1\n 2012-08-03 2\n 2012-08-04 3\n 2012-08-05 4\n\n To get the difference between each 2-day period's maximum and minimum\n value in one pass, you can do\n\n >>> df.resample('2D').pipe(lambda x: x.max() - x.min())\n A\n 2012-08-02 1\n 2012-08-04 1""",\n )\n @Appender(_pipe_template)\n def pipe(\n self,\n func: Callable[..., T] | tuple[Callable[..., T], str],\n *args,\n **kwargs,\n ) -> T:\n return super().pipe(func, *args, **kwargs)\n\n _agg_see_also_doc = dedent(\n """\n See Also\n --------\n DataFrame.groupby.aggregate : Aggregate using callable, string, dict,\n or list of string/callables.\n DataFrame.resample.transform : Transforms the Series on each group\n based on the given function.\n DataFrame.aggregate: Aggregate using one or more\n operations over the specified axis.\n """\n )\n\n _agg_examples_doc = dedent(\n """\n Examples\n --------\n >>> s = pd.Series([1, 2, 3, 4, 5],\n ... index=pd.date_range('20130101', periods=5, freq='s'))\n >>> s\n 2013-01-01 00:00:00 1\n 2013-01-01 00:00:01 2\n 2013-01-01 00:00:02 3\n 2013-01-01 00:00:03 4\n 2013-01-01 00:00:04 5\n Freq: s, dtype: int64\n\n >>> r = s.resample('2s')\n\n >>> r.agg("sum")\n 2013-01-01 00:00:00 3\n 2013-01-01 00:00:02 7\n 2013-01-01 00:00:04 5\n Freq: 2s, dtype: int64\n\n >>> r.agg(['sum', 'mean', 'max'])\n sum mean max\n 2013-01-01 00:00:00 3 1.5 2\n 2013-01-01 00:00:02 7 3.5 4\n 2013-01-01 00:00:04 5 5.0 5\n\n >>> r.agg({'result': lambda x: x.mean() / x.std(),\n ... 'total': "sum"})\n result total\n 2013-01-01 00:00:00 2.121320 3\n 2013-01-01 00:00:02 4.949747 7\n 2013-01-01 00:00:04 NaN 5\n\n >>> r.agg(average="mean", total="sum")\n average total\n 2013-01-01 00:00:00 1.5 3\n 2013-01-01 00:00:02 3.5 7\n 2013-01-01 00:00:04 5.0 5\n """\n )\n\n @final\n @doc(\n _shared_docs["aggregate"],\n see_also=_agg_see_also_doc,\n examples=_agg_examples_doc,\n klass="DataFrame",\n axis="",\n )\n def aggregate(self, func=None, *args, **kwargs):\n result = ResamplerWindowApply(self, func, args=args, kwargs=kwargs).agg()\n if result is None:\n how = func\n result = self._groupby_and_aggregate(how, *args, **kwargs)\n\n return result\n\n agg = aggregate\n apply = aggregate\n\n @final\n def transform(self, arg, *args, **kwargs):\n """\n Call function producing a like-indexed Series on each group.\n\n Return a Series with the transformed values.\n\n Parameters\n ----------\n arg : function\n To apply to each group. Should return a Series with the same index.\n\n Returns\n -------\n Series\n\n Examples\n --------\n >>> s = pd.Series([1, 2],\n ... index=pd.date_range('20180101',\n ... periods=2,\n ... freq='1h'))\n >>> s\n 2018-01-01 00:00:00 1\n 2018-01-01 01:00:00 2\n Freq: h, dtype: int64\n\n >>> resampled = s.resample('15min')\n >>> resampled.transform(lambda x: (x - x.mean()) / x.std())\n 2018-01-01 00:00:00 NaN\n 2018-01-01 01:00:00 NaN\n Freq: h, dtype: float64\n """\n return self._selected_obj.groupby(self._timegrouper).transform(\n arg, *args, **kwargs\n )\n\n def _downsample(self, f, **kwargs):\n raise AbstractMethodError(self)\n\n def _upsample(self, f, limit: int | None = None, fill_value=None):\n raise AbstractMethodError(self)\n\n def _gotitem(self, key, ndim: int, subset=None):\n """\n Sub-classes to define. 
Return a sliced object.\n\n Parameters\n ----------\n key : string / list of selections\n ndim : {1, 2}\n requested ndim of result\n subset : object, default None\n subset to act on\n """\n grouper = self._grouper\n if subset is None:\n subset = self.obj\n if key is not None:\n subset = subset[key]\n else:\n # reached via Apply.agg_dict_like with selection=None and ndim=1\n assert subset.ndim == 1\n if ndim == 1:\n assert subset.ndim == 1\n\n grouped = get_groupby(\n subset, by=None, grouper=grouper, axis=self.axis, group_keys=self.group_keys\n )\n return grouped\n\n def _groupby_and_aggregate(self, how, *args, **kwargs):\n """\n Re-evaluate the obj with a groupby aggregation.\n """\n grouper = self._grouper\n\n # Excludes `on` column when provided\n obj = self._obj_with_exclusions\n\n grouped = get_groupby(\n obj, by=None, grouper=grouper, axis=self.axis, group_keys=self.group_keys\n )\n\n try:\n if callable(how):\n # TODO: test_resample_apply_with_additional_args fails if we go\n # through the non-lambda path, not clear that it should.\n func = lambda x: how(x, *args, **kwargs)\n result = grouped.aggregate(func)\n else:\n result = grouped.aggregate(how, *args, **kwargs)\n except (AttributeError, KeyError):\n # we have a non-reducing function; try to evaluate\n # alternatively we want to evaluate only a column of the input\n\n # test_apply_to_one_column_of_df the function being applied references\n # a DataFrame column, but aggregate_item_by_item operates column-wise\n # on Series, raising AttributeError or KeyError\n # (depending on whether the column lookup uses getattr/__getitem__)\n result = _apply(\n grouped, how, *args, include_groups=self.include_groups, **kwargs\n )\n\n except ValueError as err:\n if "Must produce aggregated value" in str(err):\n # raised in _aggregate_named\n # see test_apply_without_aggregation, test_apply_with_mutated_index\n pass\n else:\n raise\n\n # we have a non-reducing function\n # try to evaluate\n result = _apply(\n grouped, how, *args, include_groups=self.include_groups, **kwargs\n )\n\n return self._wrap_result(result)\n\n @final\n def _get_resampler_for_grouping(\n self, groupby: GroupBy, key, include_groups: bool = True\n ):\n """\n Return the correct class for resampling with groupby.\n """\n return self._resampler_for_grouping(\n groupby=groupby, key=key, parent=self, include_groups=include_groups\n )\n\n def _wrap_result(self, result):\n """\n Potentially wrap any results.\n """\n # GH 47705\n obj = self.obj\n if (\n isinstance(result, ABCDataFrame)\n and len(result) == 0\n and not isinstance(result.index, PeriodIndex)\n ):\n result = result.set_index(\n _asfreq_compat(obj.index[:0], freq=self.freq), append=True\n )\n\n if isinstance(result, ABCSeries) and self._selection is not None:\n result.name = self._selection\n\n if isinstance(result, ABCSeries) and result.empty:\n # When index is all NaT, result is empty but index is not\n result.index = _asfreq_compat(obj.index[:0], freq=self.freq)\n result.name = getattr(obj, "name", None)\n\n if self._timegrouper._arrow_dtype is not None:\n result.index = result.index.astype(self._timegrouper._arrow_dtype)\n\n return result\n\n @final\n def ffill(self, limit: int | None = None):\n """\n Forward fill the values.\n\n Parameters\n ----------\n limit : int, optional\n Limit of how many values to fill.\n\n Returns\n -------\n An upsampled Series.\n\n See Also\n --------\n Series.fillna: Fill NA/NaN values using the specified method.\n DataFrame.fillna: Fill NA/NaN values using the specified method.\n\n 
Examples\n --------\n Here we only create a ``Series``.\n\n >>> ser = pd.Series([1, 2, 3, 4], index=pd.DatetimeIndex(\n ... ['2023-01-01', '2023-01-15', '2023-02-01', '2023-02-15']))\n >>> ser\n 2023-01-01 1\n 2023-01-15 2\n 2023-02-01 3\n 2023-02-15 4\n dtype: int64\n\n Example for ``ffill`` with downsampling (we have fewer dates after resampling):\n\n >>> ser.resample('MS').ffill()\n 2023-01-01 1\n 2023-02-01 3\n Freq: MS, dtype: int64\n\n Example for ``ffill`` with upsampling (fill the new dates with\n the previous value):\n\n >>> ser.resample('W').ffill()\n 2023-01-01 1\n 2023-01-08 1\n 2023-01-15 2\n 2023-01-22 2\n 2023-01-29 2\n 2023-02-05 3\n 2023-02-12 3\n 2023-02-19 4\n Freq: W-SUN, dtype: int64\n\n With upsampling and limiting (only fill the first new date with the\n previous value):\n\n >>> ser.resample('W').ffill(limit=1)\n 2023-01-01 1.0\n 2023-01-08 1.0\n 2023-01-15 2.0\n 2023-01-22 2.0\n 2023-01-29 NaN\n 2023-02-05 3.0\n 2023-02-12 NaN\n 2023-02-19 4.0\n Freq: W-SUN, dtype: float64\n """\n return self._upsample("ffill", limit=limit)\n\n @final\n def nearest(self, limit: int | None = None):\n """\n Resample by using the nearest value.\n\n When resampling data, missing values may appear (e.g., when the\n resampling frequency is higher than the original frequency).\n The `nearest` method will replace ``NaN`` values that appeared in\n the resampled data with the value from the nearest member of the\n sequence, based on the index value.\n Missing values that existed in the original data will not be modified.\n If `limit` is given, fill only this many values in each direction for\n each of the original values.\n\n Parameters\n ----------\n limit : int, optional\n Limit of how many values to fill.\n\n Returns\n -------\n Series or DataFrame\n An upsampled Series or DataFrame with ``NaN`` values filled with\n their nearest value.\n\n See Also\n --------\n backfill : Backward fill the new missing values in the resampled data.\n pad : Forward fill ``NaN`` values.\n\n Examples\n --------\n >>> s = pd.Series([1, 2],\n ... index=pd.date_range('20180101',\n ... periods=2,\n ... freq='1h'))\n >>> s\n 2018-01-01 00:00:00 1\n 2018-01-01 01:00:00 2\n Freq: h, dtype: int64\n\n >>> s.resample('15min').nearest()\n 2018-01-01 00:00:00 1\n 2018-01-01 00:15:00 1\n 2018-01-01 00:30:00 2\n 2018-01-01 00:45:00 2\n 2018-01-01 01:00:00 2\n Freq: 15min, dtype: int64\n\n Limit the number of upsampled values imputed by the nearest:\n\n >>> s.resample('15min').nearest(limit=1)\n 2018-01-01 00:00:00 1.0\n 2018-01-01 00:15:00 1.0\n 2018-01-01 00:30:00 NaN\n 2018-01-01 00:45:00 2.0\n 2018-01-01 01:00:00 2.0\n Freq: 15min, dtype: float64\n """\n return self._upsample("nearest", limit=limit)\n\n @final\n def bfill(self, limit: int | None = None):\n """\n Backward fill the new missing values in the resampled data.\n\n In statistics, imputation is the process of replacing missing data with\n substituted values [1]_. When resampling data, missing values may\n appear (e.g., when the resampling frequency is higher than the original\n frequency). 
The backward fill will replace NaN values that appeared in\n the resampled data with the next value in the original sequence.\n Missing values that existed in the original data will not be modified.\n\n Parameters\n ----------\n limit : int, optional\n Limit of how many values to fill.\n\n Returns\n -------\n Series, DataFrame\n An upsampled Series or DataFrame with backward filled NaN values.\n\n See Also\n --------\n bfill : Alias of backfill.\n fillna : Fill NaN values using the specified method, which can be\n 'backfill'.\n nearest : Fill NaN values with nearest neighbor starting from center.\n ffill : Forward fill NaN values.\n Series.fillna : Fill NaN values in the Series using the\n specified method, which can be 'backfill'.\n DataFrame.fillna : Fill NaN values in the DataFrame using the\n specified method, which can be 'backfill'.\n\n References\n ----------\n .. [1] https://en.wikipedia.org/wiki/Imputation_(statistics)\n\n Examples\n --------\n Resampling a Series:\n\n >>> s = pd.Series([1, 2, 3],\n ... index=pd.date_range('20180101', periods=3, freq='h'))\n >>> s\n 2018-01-01 00:00:00 1\n 2018-01-01 01:00:00 2\n 2018-01-01 02:00:00 3\n Freq: h, dtype: int64\n\n >>> s.resample('30min').bfill()\n 2018-01-01 00:00:00 1\n 2018-01-01 00:30:00 2\n 2018-01-01 01:00:00 2\n 2018-01-01 01:30:00 3\n 2018-01-01 02:00:00 3\n Freq: 30min, dtype: int64\n\n >>> s.resample('15min').bfill(limit=2)\n 2018-01-01 00:00:00 1.0\n 2018-01-01 00:15:00 NaN\n 2018-01-01 00:30:00 2.0\n 2018-01-01 00:45:00 2.0\n 2018-01-01 01:00:00 2.0\n 2018-01-01 01:15:00 NaN\n 2018-01-01 01:30:00 3.0\n 2018-01-01 01:45:00 3.0\n 2018-01-01 02:00:00 3.0\n Freq: 15min, dtype: float64\n\n Resampling a DataFrame that has missing values:\n\n >>> df = pd.DataFrame({'a': [2, np.nan, 6], 'b': [1, 3, 5]},\n ... index=pd.date_range('20180101', periods=3,\n ... freq='h'))\n >>> df\n a b\n 2018-01-01 00:00:00 2.0 1\n 2018-01-01 01:00:00 NaN 3\n 2018-01-01 02:00:00 6.0 5\n\n >>> df.resample('30min').bfill()\n a b\n 2018-01-01 00:00:00 2.0 1\n 2018-01-01 00:30:00 NaN 3\n 2018-01-01 01:00:00 NaN 3\n 2018-01-01 01:30:00 6.0 5\n 2018-01-01 02:00:00 6.0 5\n\n >>> df.resample('15min').bfill(limit=2)\n a b\n 2018-01-01 00:00:00 2.0 1.0\n 2018-01-01 00:15:00 NaN NaN\n 2018-01-01 00:30:00 NaN 3.0\n 2018-01-01 00:45:00 NaN 3.0\n 2018-01-01 01:00:00 NaN 3.0\n 2018-01-01 01:15:00 NaN NaN\n 2018-01-01 01:30:00 6.0 5.0\n 2018-01-01 01:45:00 6.0 5.0\n 2018-01-01 02:00:00 6.0 5.0\n """\n return self._upsample("bfill", limit=limit)\n\n @final\n def fillna(self, method, limit: int | None = None):\n """\n Fill missing values introduced by upsampling.\n\n In statistics, imputation is the process of replacing missing data with\n substituted values [1]_. 
When resampling data, missing values may\n appear (e.g., when the resampling frequency is higher than the original\n frequency).\n\n Missing values that existed in the original data will\n not be modified.\n\n Parameters\n ----------\n method : {'pad', 'backfill', 'ffill', 'bfill', 'nearest'}\n Method to use for filling holes in resampled data\n\n * 'pad' or 'ffill': use previous valid observation to fill gap\n (forward fill).\n * 'backfill' or 'bfill': use next valid observation to fill gap.\n * 'nearest': use nearest valid observation to fill gap.\n\n limit : int, optional\n Limit of how many consecutive missing values to fill.\n\n Returns\n -------\n Series or DataFrame\n An upsampled Series or DataFrame with missing values filled.\n\n See Also\n --------\n bfill : Backward fill NaN values in the resampled data.\n ffill : Forward fill NaN values in the resampled data.\n nearest : Fill NaN values in the resampled data\n with nearest neighbor starting from center.\n interpolate : Fill NaN values using interpolation.\n Series.fillna : Fill NaN values in the Series using the\n specified method, which can be 'bfill' and 'ffill'.\n DataFrame.fillna : Fill NaN values in the DataFrame using the\n specified method, which can be 'bfill' and 'ffill'.\n\n References\n ----------\n .. [1] https://en.wikipedia.org/wiki/Imputation_(statistics)\n\n Examples\n --------\n Resampling a Series:\n\n >>> s = pd.Series([1, 2, 3],\n ... index=pd.date_range('20180101', periods=3, freq='h'))\n >>> s\n 2018-01-01 00:00:00 1\n 2018-01-01 01:00:00 2\n 2018-01-01 02:00:00 3\n Freq: h, dtype: int64\n\n Without filling the missing values you get:\n\n >>> s.resample("30min").asfreq()\n 2018-01-01 00:00:00 1.0\n 2018-01-01 00:30:00 NaN\n 2018-01-01 01:00:00 2.0\n 2018-01-01 01:30:00 NaN\n 2018-01-01 02:00:00 3.0\n Freq: 30min, dtype: float64\n\n >>> s.resample('30min').fillna("backfill")\n 2018-01-01 00:00:00 1\n 2018-01-01 00:30:00 2\n 2018-01-01 01:00:00 2\n 2018-01-01 01:30:00 3\n 2018-01-01 02:00:00 3\n Freq: 30min, dtype: int64\n\n >>> s.resample('15min').fillna("backfill", limit=2)\n 2018-01-01 00:00:00 1.0\n 2018-01-01 00:15:00 NaN\n 2018-01-01 00:30:00 2.0\n 2018-01-01 00:45:00 2.0\n 2018-01-01 01:00:00 2.0\n 2018-01-01 01:15:00 NaN\n 2018-01-01 01:30:00 3.0\n 2018-01-01 01:45:00 3.0\n 2018-01-01 02:00:00 3.0\n Freq: 15min, dtype: float64\n\n >>> s.resample('30min').fillna("pad")\n 2018-01-01 00:00:00 1\n 2018-01-01 00:30:00 1\n 2018-01-01 01:00:00 2\n 2018-01-01 01:30:00 2\n 2018-01-01 02:00:00 3\n Freq: 30min, dtype: int64\n\n >>> s.resample('30min').fillna("nearest")\n 2018-01-01 00:00:00 1\n 2018-01-01 00:30:00 2\n 2018-01-01 01:00:00 2\n 2018-01-01 01:30:00 3\n 2018-01-01 02:00:00 3\n Freq: 30min, dtype: int64\n\n Missing values present before the upsampling are not affected.\n\n >>> sm = pd.Series([1, None, 3],\n ... 
index=pd.date_range('20180101', periods=3, freq='h'))\n >>> sm\n 2018-01-01 00:00:00 1.0\n 2018-01-01 01:00:00 NaN\n 2018-01-01 02:00:00 3.0\n Freq: h, dtype: float64\n\n >>> sm.resample('30min').fillna('backfill')\n 2018-01-01 00:00:00 1.0\n 2018-01-01 00:30:00 NaN\n 2018-01-01 01:00:00 NaN\n 2018-01-01 01:30:00 3.0\n 2018-01-01 02:00:00 3.0\n Freq: 30min, dtype: float64\n\n >>> sm.resample('30min').fillna('pad')\n 2018-01-01 00:00:00 1.0\n 2018-01-01 00:30:00 1.0\n 2018-01-01 01:00:00 NaN\n 2018-01-01 01:30:00 NaN\n 2018-01-01 02:00:00 3.0\n Freq: 30min, dtype: float64\n\n >>> sm.resample('30min').fillna('nearest')\n 2018-01-01 00:00:00 1.0\n 2018-01-01 00:30:00 NaN\n 2018-01-01 01:00:00 NaN\n 2018-01-01 01:30:00 3.0\n 2018-01-01 02:00:00 3.0\n Freq: 30min, dtype: float64\n\n DataFrame resampling is done column-wise. All the same options are\n available.\n\n >>> df = pd.DataFrame({'a': [2, np.nan, 6], 'b': [1, 3, 5]},\n ... index=pd.date_range('20180101', periods=3,\n ... freq='h'))\n >>> df\n a b\n 2018-01-01 00:00:00 2.0 1\n 2018-01-01 01:00:00 NaN 3\n 2018-01-01 02:00:00 6.0 5\n\n >>> df.resample('30min').fillna("bfill")\n a b\n 2018-01-01 00:00:00 2.0 1\n 2018-01-01 00:30:00 NaN 3\n 2018-01-01 01:00:00 NaN 3\n 2018-01-01 01:30:00 6.0 5\n 2018-01-01 02:00:00 6.0 5\n """\n warnings.warn(\n f"{type(self).__name__}.fillna is deprecated and will be removed "\n "in a future version. Use obj.ffill(), obj.bfill(), "\n "or obj.nearest() instead.",\n FutureWarning,\n stacklevel=find_stack_level(),\n )\n return self._upsample(method, limit=limit)\n\n @final\n def interpolate(\n self,\n method: InterpolateOptions = "linear",\n *,\n axis: Axis = 0,\n limit: int | None = None,\n inplace: bool = False,\n limit_direction: Literal["forward", "backward", "both"] = "forward",\n limit_area=None,\n downcast=lib.no_default,\n **kwargs,\n ):\n """\n Interpolate values between target timestamps according to different methods.\n\n The original index is first reindexed to target timestamps\n (see :meth:`core.resample.Resampler.asfreq`),\n then the interpolation of ``NaN`` values via :meth:`DataFrame.interpolate`\n happens.\n\n Parameters\n ----------\n method : str, default 'linear'\n Interpolation technique to use. One of:\n\n * 'linear': Ignore the index and treat the values as equally\n spaced. This is the only method supported on MultiIndexes.\n * 'time': Works on daily and higher resolution data to interpolate\n given length of interval.\n * 'index', 'values': use the actual numerical values of the index.\n * 'pad': Fill in NaNs using existing values.\n * 'nearest', 'zero', 'slinear', 'quadratic', 'cubic',\n 'barycentric', 'polynomial': Passed to\n `scipy.interpolate.interp1d`, whereas 'spline' is passed to\n `scipy.interpolate.UnivariateSpline`. These methods use the numerical\n values of the index. Both 'polynomial' and 'spline' require that\n you also specify an `order` (int), e.g.\n ``df.interpolate(method='polynomial', order=5)``. Note that,\n `slinear` method in Pandas refers to the Scipy first order `spline`\n instead of Pandas first order `spline`.\n * 'krogh', 'piecewise_polynomial', 'spline', 'pchip', 'akima',\n 'cubicspline': Wrappers around the SciPy interpolation methods of\n similar names. See `Notes`.\n * 'from_derivatives': Refers to\n `scipy.interpolate.BPoly.from_derivatives`.\n\n axis : {{0 or 'index', 1 or 'columns', None}}, default None\n Axis to interpolate along. 
For `Series` this parameter is unused\n and defaults to 0.\n limit : int, optional\n Maximum number of consecutive NaNs to fill. Must be greater than\n 0.\n inplace : bool, default False\n Update the data in place if possible.\n limit_direction : {{'forward', 'backward', 'both'}}, optional\n Consecutive NaNs will be filled in this direction.\n\n If limit is specified:\n * If 'method' is 'pad' or 'ffill', 'limit_direction' must be 'forward'.\n * If 'method' is 'backfill' or 'bfill', 'limit_direction' must be\n 'backward'.\n\n If 'limit' is not specified:\n * If 'method' is 'backfill' or 'bfill', the default is 'backward'\n * else the default is 'forward'\n\n Raises ValueError if `limit_direction` is 'forward' or 'both' and\n method is 'backfill' or 'bfill'.\n Raises ValueError if `limit_direction` is 'backward' or 'both' and\n method is 'pad' or 'ffill'.\n\n limit_area : {{`None`, 'inside', 'outside'}}, default None\n If limit is specified, consecutive NaNs will be filled with this\n restriction.\n\n * ``None``: No fill restriction.\n * 'inside': Only fill NaNs surrounded by valid values\n (interpolate).\n * 'outside': Only fill NaNs outside valid values (extrapolate).\n\n downcast : optional, 'infer' or None, defaults to None\n Downcast dtypes if possible.\n\n .. deprecated:: 2.1.0\n\n ``**kwargs`` : optional\n Keyword arguments to pass on to the interpolating function.\n\n Returns\n -------\n DataFrame or Series\n Interpolated values at the specified freq.\n\n See Also\n --------\n core.resample.Resampler.asfreq: Return the values at the new freq,\n essentially a reindex.\n DataFrame.interpolate: Fill NaN values using an interpolation method.\n\n Notes\n -----\n For high-frequency or non-equidistant time series with timestamps,\n the reindexing followed by interpolation may lead to information loss,\n as shown in the last example.\n\n Examples\n --------\n\n >>> start = "2023-03-01T07:00:00"\n >>> timesteps = pd.date_range(start, periods=5, freq="s")\n >>> series = pd.Series(data=[1, -1, 2, 1, 3], index=timesteps)\n >>> series\n 2023-03-01 07:00:00 1\n 2023-03-01 07:00:01 -1\n 2023-03-01 07:00:02 2\n 2023-03-01 07:00:03 1\n 2023-03-01 07:00:04 3\n Freq: s, dtype: int64\n\n Downsample the series to 0.5Hz by providing the period time of 2s.\n\n >>> series.resample("2s").interpolate("linear")\n 2023-03-01 07:00:00 1\n 2023-03-01 07:00:02 2\n 2023-03-01 07:00:04 3\n Freq: 2s, dtype: int64\n\n Upsample the series to 2Hz by providing the period time of 500ms.\n\n >>> series.resample("500ms").interpolate("linear")\n 2023-03-01 07:00:00.000 1.0\n 2023-03-01 07:00:00.500 0.0\n 2023-03-01 07:00:01.000 -1.0\n 2023-03-01 07:00:01.500 0.5\n 2023-03-01 07:00:02.000 2.0\n 2023-03-01 07:00:02.500 1.5\n 2023-03-01 07:00:03.000 1.0\n 2023-03-01 07:00:03.500 2.0\n 2023-03-01 07:00:04.000 3.0\n Freq: 500ms, dtype: float64\n\n Internal reindexing with ``asfreq()`` prior to interpolation leads to\n an interpolated timeseries on the basis of the reindexed timestamps (anchors).\n Since not all datapoints from the original series become anchors,\n this can lead to misleading interpolation results, as in the following example:\n\n >>> series.resample("400ms").interpolate("linear")\n 2023-03-01 07:00:00.000 1.0\n 2023-03-01 07:00:00.400 1.2\n 2023-03-01 07:00:00.800 1.4\n 2023-03-01 07:00:01.200 1.6\n 2023-03-01 07:00:01.600 1.8\n 2023-03-01 07:00:02.000 2.0\n 2023-03-01 07:00:02.400 2.2\n 2023-03-01 07:00:02.800 2.4\n 2023-03-01 07:00:03.200 2.6\n 2023-03-01 07:00:03.600 2.8\n 2023-03-01 07:00:04.000 3.0\n Freq: 400ms, 
dtype: float64\n\n Note that the series erroneously increases between two anchors\n ``07:00:00`` and ``07:00:02``.\n """\n assert downcast is lib.no_default # just checking coverage\n result = self._upsample("asfreq")\n return result.interpolate(\n method=method,\n axis=axis,\n limit=limit,\n inplace=inplace,\n limit_direction=limit_direction,\n limit_area=limit_area,\n downcast=downcast,\n **kwargs,\n )\n\n @final\n def asfreq(self, fill_value=None):\n """\n Return the values at the new freq, essentially a reindex.\n\n Parameters\n ----------\n fill_value : scalar, optional\n Value to use for missing values, applied during upsampling (note\n this does not fill NaNs that already were present).\n\n Returns\n -------\n DataFrame or Series\n Values at the specified freq.\n\n See Also\n --------\n Series.asfreq: Convert TimeSeries to specified frequency.\n DataFrame.asfreq: Convert TimeSeries to specified frequency.\n\n Examples\n --------\n\n >>> ser = pd.Series([1, 2, 3, 4], index=pd.DatetimeIndex(\n ... ['2023-01-01', '2023-01-31', '2023-02-01', '2023-02-28']))\n >>> ser\n 2023-01-01 1\n 2023-01-31 2\n 2023-02-01 3\n 2023-02-28 4\n dtype: int64\n >>> ser.resample('MS').asfreq()\n 2023-01-01 1\n 2023-02-01 3\n Freq: MS, dtype: int64\n """\n return self._upsample("asfreq", fill_value=fill_value)\n\n @final\n def sum(\n self,\n numeric_only: bool = False,\n min_count: int = 0,\n *args,\n **kwargs,\n ):\n """\n Compute sum of group values.\n\n Parameters\n ----------\n numeric_only : bool, default False\n Include only float, int, boolean columns.\n\n .. versionchanged:: 2.0.0\n\n numeric_only no longer accepts ``None``.\n\n min_count : int, default 0\n The required number of valid values to perform the operation. If fewer\n than ``min_count`` non-NA values are present the result will be NA.\n\n Returns\n -------\n Series or DataFrame\n Computed sum of values within each group.\n\n Examples\n --------\n >>> ser = pd.Series([1, 2, 3, 4], index=pd.DatetimeIndex(\n ... ['2023-01-01', '2023-01-15', '2023-02-01', '2023-02-15']))\n >>> ser\n 2023-01-01 1\n 2023-01-15 2\n 2023-02-01 3\n 2023-02-15 4\n dtype: int64\n >>> ser.resample('MS').sum()\n 2023-01-01 3\n 2023-02-01 7\n Freq: MS, dtype: int64\n """\n maybe_warn_args_and_kwargs(type(self), "sum", args, kwargs)\n nv.validate_resampler_func("sum", args, kwargs)\n return self._downsample("sum", numeric_only=numeric_only, min_count=min_count)\n\n @final\n def prod(\n self,\n numeric_only: bool = False,\n min_count: int = 0,\n *args,\n **kwargs,\n ):\n """\n Compute prod of group values.\n\n Parameters\n ----------\n numeric_only : bool, default False\n Include only float, int, boolean columns.\n\n .. versionchanged:: 2.0.0\n\n numeric_only no longer accepts ``None``.\n\n min_count : int, default 0\n The required number of valid values to perform the operation. If fewer\n than ``min_count`` non-NA values are present the result will be NA.\n\n Returns\n -------\n Series or DataFrame\n Computed prod of values within each group.\n\n Examples\n --------\n >>> ser = pd.Series([1, 2, 3, 4], index=pd.DatetimeIndex(\n ... 
['2023-01-01', '2023-01-15', '2023-02-01', '2023-02-15']))\n >>> ser\n 2023-01-01 1\n 2023-01-15 2\n 2023-02-01 3\n 2023-02-15 4\n dtype: int64\n >>> ser.resample('MS').prod()\n 2023-01-01 2\n 2023-02-01 12\n Freq: MS, dtype: int64\n """\n maybe_warn_args_and_kwargs(type(self), "prod", args, kwargs)\n nv.validate_resampler_func("prod", args, kwargs)\n return self._downsample("prod", numeric_only=numeric_only, min_count=min_count)\n\n @final\n def min(\n self,\n numeric_only: bool = False,\n min_count: int = 0,\n *args,\n **kwargs,\n ):\n """\n Compute min value of group.\n\n Returns\n -------\n Series or DataFrame\n\n Examples\n --------\n >>> ser = pd.Series([1, 2, 3, 4], index=pd.DatetimeIndex(\n ... ['2023-01-01', '2023-01-15', '2023-02-01', '2023-02-15']))\n >>> ser\n 2023-01-01 1\n 2023-01-15 2\n 2023-02-01 3\n 2023-02-15 4\n dtype: int64\n >>> ser.resample('MS').min()\n 2023-01-01 1\n 2023-02-01 3\n Freq: MS, dtype: int64\n """\n\n maybe_warn_args_and_kwargs(type(self), "min", args, kwargs)\n nv.validate_resampler_func("min", args, kwargs)\n return self._downsample("min", numeric_only=numeric_only, min_count=min_count)\n\n @final\n def max(\n self,\n numeric_only: bool = False,\n min_count: int = 0,\n *args,\n **kwargs,\n ):\n """\n Compute max value of group.\n\n Returns\n -------\n Series or DataFrame\n\n Examples\n --------\n >>> ser = pd.Series([1, 2, 3, 4], index=pd.DatetimeIndex(\n ... ['2023-01-01', '2023-01-15', '2023-02-01', '2023-02-15']))\n >>> ser\n 2023-01-01 1\n 2023-01-15 2\n 2023-02-01 3\n 2023-02-15 4\n dtype: int64\n >>> ser.resample('MS').max()\n 2023-01-01 2\n 2023-02-01 4\n Freq: MS, dtype: int64\n """\n maybe_warn_args_and_kwargs(type(self), "max", args, kwargs)\n nv.validate_resampler_func("max", args, kwargs)\n return self._downsample("max", numeric_only=numeric_only, min_count=min_count)\n\n @final\n @doc(GroupBy.first)\n def first(\n self,\n numeric_only: bool = False,\n min_count: int = 0,\n skipna: bool = True,\n *args,\n **kwargs,\n ):\n maybe_warn_args_and_kwargs(type(self), "first", args, kwargs)\n nv.validate_resampler_func("first", args, kwargs)\n return self._downsample(\n "first", numeric_only=numeric_only, min_count=min_count, skipna=skipna\n )\n\n @final\n @doc(GroupBy.last)\n def last(\n self,\n numeric_only: bool = False,\n min_count: int = 0,\n skipna: bool = True,\n *args,\n **kwargs,\n ):\n maybe_warn_args_and_kwargs(type(self), "last", args, kwargs)\n nv.validate_resampler_func("last", args, kwargs)\n return self._downsample(\n "last", numeric_only=numeric_only, min_count=min_count, skipna=skipna\n )\n\n @final\n @doc(GroupBy.median)\n def median(self, numeric_only: bool = False, *args, **kwargs):\n maybe_warn_args_and_kwargs(type(self), "median", args, kwargs)\n nv.validate_resampler_func("median", args, kwargs)\n return self._downsample("median", numeric_only=numeric_only)\n\n @final\n def mean(\n self,\n numeric_only: bool = False,\n *args,\n **kwargs,\n ):\n """\n Compute mean of groups, excluding missing values.\n\n Parameters\n ----------\n numeric_only : bool, default False\n Include only `float`, `int` or `boolean` data.\n\n .. versionchanged:: 2.0.0\n\n numeric_only now defaults to ``False``.\n\n Returns\n -------\n DataFrame or Series\n Mean of values within each group.\n\n Examples\n --------\n\n >>> ser = pd.Series([1, 2, 3, 4], index=pd.DatetimeIndex(\n ... 
['2023-01-01', '2023-01-15', '2023-02-01', '2023-02-15']))\n >>> ser\n 2023-01-01 1\n 2023-01-15 2\n 2023-02-01 3\n 2023-02-15 4\n dtype: int64\n >>> ser.resample('MS').mean()\n 2023-01-01 1.5\n 2023-02-01 3.5\n Freq: MS, dtype: float64\n """\n maybe_warn_args_and_kwargs(type(self), "mean", args, kwargs)\n nv.validate_resampler_func("mean", args, kwargs)\n return self._downsample("mean", numeric_only=numeric_only)\n\n @final\n def std(\n self,\n ddof: int = 1,\n numeric_only: bool = False,\n *args,\n **kwargs,\n ):\n """\n Compute standard deviation of groups, excluding missing values.\n\n Parameters\n ----------\n ddof : int, default 1\n Degrees of freedom.\n numeric_only : bool, default False\n Include only `float`, `int` or `boolean` data.\n\n .. versionadded:: 1.5.0\n\n .. versionchanged:: 2.0.0\n\n numeric_only now defaults to ``False``.\n\n Returns\n -------\n DataFrame or Series\n Standard deviation of values within each group.\n\n Examples\n --------\n\n >>> ser = pd.Series([1, 3, 2, 4, 3, 8],\n ... index=pd.DatetimeIndex(['2023-01-01',\n ... '2023-01-10',\n ... '2023-01-15',\n ... '2023-02-01',\n ... '2023-02-10',\n ... '2023-02-15']))\n >>> ser.resample('MS').std()\n 2023-01-01 1.000000\n 2023-02-01 2.645751\n Freq: MS, dtype: float64\n """\n maybe_warn_args_and_kwargs(type(self), "std", args, kwargs)\n nv.validate_resampler_func("std", args, kwargs)\n return self._downsample("std", ddof=ddof, numeric_only=numeric_only)\n\n @final\n def var(\n self,\n ddof: int = 1,\n numeric_only: bool = False,\n *args,\n **kwargs,\n ):\n """\n Compute variance of groups, excluding missing values.\n\n Parameters\n ----------\n ddof : int, default 1\n Degrees of freedom.\n\n numeric_only : bool, default False\n Include only `float`, `int` or `boolean` data.\n\n .. versionadded:: 1.5.0\n\n .. versionchanged:: 2.0.0\n\n numeric_only now defaults to ``False``.\n\n Returns\n -------\n DataFrame or Series\n Variance of values within each group.\n\n Examples\n --------\n\n >>> ser = pd.Series([1, 3, 2, 4, 3, 8],\n ... index=pd.DatetimeIndex(['2023-01-01',\n ... '2023-01-10',\n ... '2023-01-15',\n ... '2023-02-01',\n ... '2023-02-10',\n ... 
'2023-02-15']))\n >>> ser.resample('MS').var()\n 2023-01-01 1.0\n 2023-02-01 7.0\n Freq: MS, dtype: float64\n\n >>> ser.resample('MS').var(ddof=0)\n 2023-01-01 0.666667\n 2023-02-01 4.666667\n Freq: MS, dtype: float64\n """\n maybe_warn_args_and_kwargs(type(self), "var", args, kwargs)\n nv.validate_resampler_func("var", args, kwargs)\n return self._downsample("var", ddof=ddof, numeric_only=numeric_only)\n\n @final\n @doc(GroupBy.sem)\n def sem(\n self,\n ddof: int = 1,\n numeric_only: bool = False,\n *args,\n **kwargs,\n ):\n maybe_warn_args_and_kwargs(type(self), "sem", args, kwargs)\n nv.validate_resampler_func("sem", args, kwargs)\n return self._downsample("sem", ddof=ddof, numeric_only=numeric_only)\n\n @final\n @doc(GroupBy.ohlc)\n def ohlc(\n self,\n *args,\n **kwargs,\n ):\n maybe_warn_args_and_kwargs(type(self), "ohlc", args, kwargs)\n nv.validate_resampler_func("ohlc", args, kwargs)\n\n ax = self.ax\n obj = self._obj_with_exclusions\n if len(ax) == 0:\n # GH#42902\n obj = obj.copy()\n obj.index = _asfreq_compat(obj.index, self.freq)\n if obj.ndim == 1:\n obj = obj.to_frame()\n obj = obj.reindex(["open", "high", "low", "close"], axis=1)\n else:\n mi = MultiIndex.from_product(\n [obj.columns, ["open", "high", "low", "close"]]\n )\n obj = obj.reindex(mi, axis=1)\n return obj\n\n return self._downsample("ohlc")\n\n @final\n @doc(SeriesGroupBy.nunique)\n def nunique(\n self,\n *args,\n **kwargs,\n ):\n maybe_warn_args_and_kwargs(type(self), "nunique", args, kwargs)\n nv.validate_resampler_func("nunique", args, kwargs)\n return self._downsample("nunique")\n\n @final\n @doc(GroupBy.size)\n def size(self):\n result = self._downsample("size")\n\n # If the result is a non-empty DataFrame we stack to get a Series\n # GH 46826\n if isinstance(result, ABCDataFrame) and not result.empty:\n result = result.stack(future_stack=True)\n\n if not len(self.ax):\n from pandas import Series\n\n if self._selected_obj.ndim == 1:\n name = self._selected_obj.name\n else:\n name = None\n result = Series([], index=result.index, dtype="int64", name=name)\n return result\n\n @final\n @doc(GroupBy.count)\n def count(self):\n result = self._downsample("count")\n if not len(self.ax):\n if self._selected_obj.ndim == 1:\n result = type(self._selected_obj)(\n [], index=result.index, dtype="int64", name=self._selected_obj.name\n )\n else:\n from pandas import DataFrame\n\n result = DataFrame(\n [], index=result.index, columns=result.columns, dtype="int64"\n )\n\n return result\n\n @final\n def quantile(self, q: float | list[float] | AnyArrayLike = 0.5, **kwargs):\n """\n Return value at the given quantile.\n\n Parameters\n ----------\n q : float or array-like, default 0.5 (50% quantile)\n\n Returns\n -------\n DataFrame or Series\n Quantile of values within each group.\n\n See Also\n --------\n Series.quantile\n Return a series, where the index is q and the values are the quantiles.\n DataFrame.quantile\n Return a DataFrame, where the columns are the columns of self,\n and the values are the quantiles.\n DataFrameGroupBy.quantile\n Return a DataFrame, where the columns are groupby columns,\n and the values are its quantiles.\n\n Examples\n --------\n\n >>> ser = pd.Series([1, 3, 2, 4, 3, 8],\n ... index=pd.DatetimeIndex(['2023-01-01',\n ... '2023-01-10',\n ... '2023-01-15',\n ... '2023-02-01',\n ... '2023-02-10',\n ... 
'2023-02-15']))\n >>> ser.resample('MS').quantile()\n 2023-01-01 2.0\n 2023-02-01 4.0\n Freq: MS, dtype: float64\n\n >>> ser.resample('MS').quantile(.25)\n 2023-01-01 1.5\n 2023-02-01 3.5\n Freq: MS, dtype: float64\n """\n return self._downsample("quantile", q=q, **kwargs)\n\n\nclass _GroupByMixin(PandasObject, SelectionMixin):\n """\n Provide the groupby facilities.\n """\n\n _attributes: list[str] # in practice the same as Resampler._attributes\n _selection: IndexLabel | None = None\n _groupby: GroupBy\n _timegrouper: TimeGrouper\n\n def __init__(\n self,\n *,\n parent: Resampler,\n groupby: GroupBy,\n key=None,\n selection: IndexLabel | None = None,\n include_groups: bool = False,\n ) -> None:\n # reached via ._gotitem and _get_resampler_for_grouping\n\n assert isinstance(groupby, GroupBy), type(groupby)\n\n # parent is always a Resampler, sometimes a _GroupByMixin\n assert isinstance(parent, Resampler), type(parent)\n\n # initialize our GroupByMixin object with\n # the resampler attributes\n for attr in self._attributes:\n setattr(self, attr, getattr(parent, attr))\n self._selection = selection\n\n self.binner = parent.binner\n self.key = key\n\n self._groupby = groupby\n self._timegrouper = copy.copy(parent._timegrouper)\n\n self.ax = parent.ax\n self.obj = parent.obj\n self.include_groups = include_groups\n\n @no_type_check\n def _apply(self, f, *args, **kwargs):\n """\n Dispatch to _upsample; we are stripping all of the _upsample kwargs and\n performing the original function call on the grouped object.\n """\n\n def func(x):\n x = self._resampler_cls(x, timegrouper=self._timegrouper, gpr_index=self.ax)\n\n if isinstance(f, str):\n return getattr(x, f)(**kwargs)\n\n return x.apply(f, *args, **kwargs)\n\n result = _apply(self._groupby, func, include_groups=self.include_groups)\n return self._wrap_result(result)\n\n _upsample = _apply\n _downsample = _apply\n _groupby_and_aggregate = _apply\n\n @final\n def _gotitem(self, key, ndim, subset=None):\n """\n Sub-classes to define. 
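# --- Illustrative sketch (editor's addition, not part of the original module) ---
# The _GroupByMixin path above is reached via groupby(...).resample(...); column
# selection on the grouped resampler goes through the _gotitem hook defined next.
# Made-up data; assumed standard pandas behavior.
import pandas as pd

df = pd.DataFrame(
    {"g": ["a", "a", "b", "b"], "v": [1, 2, 3, 4]},
    index=pd.date_range("2023-01-01", periods=4, freq="D"),
)
df.groupby("g").resample("2D")["v"].sum()  # one 2-day resample per group
# ---------------------------------------------------------------------------------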
Return a sliced object.\n\n Parameters\n ----------\n key : string / list of selections\n ndim : {1, 2}\n requested ndim of result\n subset : object, default None\n subset to act on\n """\n # create a new object to prevent aliasing\n if subset is None:\n subset = self.obj\n if key is not None:\n subset = subset[key]\n else:\n # reached via Apply.agg_dict_like with selection=None, ndim=1\n assert subset.ndim == 1\n\n # Try to select from a DataFrame, falling back to a Series\n try:\n if isinstance(key, list) and self.key not in key and self.key is not None:\n key.append(self.key)\n groupby = self._groupby[key]\n except IndexError:\n groupby = self._groupby\n\n selection = self._infer_selection(key, subset)\n\n new_rs = type(self)(\n groupby=groupby,\n parent=cast(Resampler, self),\n selection=selection,\n )\n return new_rs\n\n\nclass DatetimeIndexResampler(Resampler):\n ax: DatetimeIndex\n\n @property\n def _resampler_for_grouping(self):\n return DatetimeIndexResamplerGroupby\n\n def _get_binner_for_time(self):\n # this is how we are actually creating the bins\n if self.kind == "period":\n return self._timegrouper._get_time_period_bins(self.ax)\n return self._timegrouper._get_time_bins(self.ax)\n\n def _downsample(self, how, **kwargs):\n """\n Downsample the cython defined function.\n\n Parameters\n ----------\n how : string / cython mapped function\n **kwargs : kw args passed to how function\n """\n orig_how = how\n how = com.get_cython_func(how) or how\n if orig_how != how:\n warn_alias_replacement(self, orig_how, how)\n ax = self.ax\n\n # Excludes `on` column when provided\n obj = self._obj_with_exclusions\n\n if not len(ax):\n # reset to the new freq\n obj = obj.copy()\n obj.index = obj.index._with_freq(self.freq)\n assert obj.index.freq == self.freq, (obj.index.freq, self.freq)\n return obj\n\n # do we have a regular frequency\n\n # error: Item "None" of "Optional[Any]" has no attribute "binlabels"\n if (\n (ax.freq is not None or ax.inferred_freq is not None)\n and len(self._grouper.binlabels) > len(ax)\n and how is None\n ):\n # let's do an asfreq\n return self.asfreq()\n\n # we are downsampling\n # we want to call the actual grouper method here\n if self.axis == 0:\n result = obj.groupby(self._grouper).aggregate(how, **kwargs)\n else:\n # test_resample_axis1\n result = obj.T.groupby(self._grouper).aggregate(how, **kwargs).T\n\n return self._wrap_result(result)\n\n def _adjust_binner_for_upsample(self, binner):\n """\n Adjust our binner when upsampling.\n\n The range of a new index should not be outside specified range\n """\n if self.closed == "right":\n binner = binner[1:]\n else:\n binner = binner[:-1]\n return binner\n\n def _upsample(self, method, limit: int | None = None, fill_value=None):\n """\n Parameters\n ----------\n method : string {'backfill', 'bfill', 'pad',\n 'ffill', 'asfreq'} method for upsampling\n limit : int, default None\n Maximum size gap to fill when reindexing\n fill_value : scalar, default None\n Value to use for missing values\n\n See Also\n --------\n .fillna: Fill NA/NaN values using the specified method.\n\n """\n if self.axis:\n raise AssertionError("axis must be 0")\n if self._from_selection:\n raise ValueError(\n "Upsampling from level= or on= selection "\n "is not supported, use .set_index(...) 
"\n "to explicitly set index to datetime-like"\n )\n\n ax = self.ax\n obj = self._selected_obj\n binner = self.binner\n res_index = self._adjust_binner_for_upsample(binner)\n\n # if we have the same frequency as our axis, then we are equal sampling\n if (\n limit is None\n and to_offset(ax.inferred_freq) == self.freq\n and len(obj) == len(res_index)\n ):\n result = obj.copy()\n result.index = res_index\n else:\n if method == "asfreq":\n method = None\n result = obj.reindex(\n res_index, method=method, limit=limit, fill_value=fill_value\n )\n\n return self._wrap_result(result)\n\n def _wrap_result(self, result):\n result = super()._wrap_result(result)\n\n # we may have a different kind that we were asked originally\n # convert if needed\n if self.kind == "period" and not isinstance(result.index, PeriodIndex):\n if isinstance(result.index, MultiIndex):\n # GH 24103 - e.g. groupby resample\n if not isinstance(result.index.levels[-1], PeriodIndex):\n new_level = result.index.levels[-1].to_period(self.freq)\n result.index = result.index.set_levels(new_level, level=-1)\n else:\n result.index = result.index.to_period(self.freq)\n return result\n\n\n# error: Definition of "ax" in base class "_GroupByMixin" is incompatible\n# with definition in base class "DatetimeIndexResampler"\nclass DatetimeIndexResamplerGroupby( # type: ignore[misc]\n _GroupByMixin, DatetimeIndexResampler\n):\n """\n Provides a resample of a groupby implementation\n """\n\n @property\n def _resampler_cls(self):\n return DatetimeIndexResampler\n\n\nclass PeriodIndexResampler(DatetimeIndexResampler):\n # error: Incompatible types in assignment (expression has type "PeriodIndex", base\n # class "DatetimeIndexResampler" defined the type as "DatetimeIndex")\n ax: PeriodIndex # type: ignore[assignment]\n\n @property\n def _resampler_for_grouping(self):\n warnings.warn(\n "Resampling a groupby with a PeriodIndex is deprecated. "\n "Cast to DatetimeIndex before resampling instead.",\n FutureWarning,\n stacklevel=find_stack_level(),\n )\n return PeriodIndexResamplerGroupby\n\n def _get_binner_for_time(self):\n if self.kind == "timestamp":\n return super()._get_binner_for_time()\n return self._timegrouper._get_period_bins(self.ax)\n\n def _convert_obj(self, obj: NDFrameT) -> NDFrameT:\n obj = super()._convert_obj(obj)\n\n if self._from_selection:\n # see GH 14008, GH 12871\n msg = (\n "Resampling from level= or on= selection "\n "with a PeriodIndex is not currently supported, "\n "use .set_index(...) 
to explicitly set index"\n )\n raise NotImplementedError(msg)\n\n # convert to timestamp\n if self.kind == "timestamp":\n obj = obj.to_timestamp(how=self.convention)\n\n return obj\n\n def _downsample(self, how, **kwargs):\n """\n Downsample the cython defined function.\n\n Parameters\n ----------\n how : string / cython mapped function\n **kwargs : kw args passed to how function\n """\n # we may need to actually resample as if we are timestamps\n if self.kind == "timestamp":\n return super()._downsample(how, **kwargs)\n\n orig_how = how\n how = com.get_cython_func(how) or how\n if orig_how != how:\n warn_alias_replacement(self, orig_how, how)\n ax = self.ax\n\n if is_subperiod(ax.freq, self.freq):\n # Downsampling\n return self._groupby_and_aggregate(how, **kwargs)\n elif is_superperiod(ax.freq, self.freq):\n if how == "ohlc":\n # GH #13083\n # upsampling to subperiods is handled as an asfreq, which works\n # for pure aggregating/reducing methods\n # OHLC reduces along the time dimension, but creates multiple\n # values for each period -> handle by _groupby_and_aggregate()\n return self._groupby_and_aggregate(how)\n return self.asfreq()\n elif ax.freq == self.freq:\n return self.asfreq()\n\n raise IncompatibleFrequency(\n f"Frequency {ax.freq} cannot be resampled to {self.freq}, "\n "as they are not sub or super periods"\n )\n\n def _upsample(self, method, limit: int | None = None, fill_value=None):\n """\n Parameters\n ----------\n method : {'backfill', 'bfill', 'pad', 'ffill'}\n Method for upsampling.\n limit : int, default None\n Maximum size gap to fill when reindexing.\n fill_value : scalar, default None\n Value to use for missing values.\n\n See Also\n --------\n .fillna: Fill NA/NaN values using the specified method.\n\n """\n # we may need to actually resample as if we are timestamps\n if self.kind == "timestamp":\n return super()._upsample(method, limit=limit, fill_value=fill_value)\n\n ax = self.ax\n obj = self.obj\n new_index = self.binner\n\n # Start vs. 
end of period\n memb = ax.asfreq(self.freq, how=self.convention)\n\n # Get the fill indexer\n if method == "asfreq":\n method = None\n indexer = memb.get_indexer(new_index, method=method, limit=limit)\n new_obj = _take_new_index(\n obj,\n indexer,\n new_index,\n axis=self.axis,\n )\n return self._wrap_result(new_obj)\n\n\n# error: Definition of "ax" in base class "_GroupByMixin" is incompatible with\n# definition in base class "PeriodIndexResampler"\nclass PeriodIndexResamplerGroupby( # type: ignore[misc]\n _GroupByMixin, PeriodIndexResampler\n):\n """\n Provides a resample of a groupby implementation.\n """\n\n @property\n def _resampler_cls(self):\n return PeriodIndexResampler\n\n\nclass TimedeltaIndexResampler(DatetimeIndexResampler):\n # error: Incompatible types in assignment (expression has type "TimedeltaIndex",\n # base class "DatetimeIndexResampler" defined the type as "DatetimeIndex")\n ax: TimedeltaIndex # type: ignore[assignment]\n\n @property\n def _resampler_for_grouping(self):\n return TimedeltaIndexResamplerGroupby\n\n def _get_binner_for_time(self):\n return self._timegrouper._get_time_delta_bins(self.ax)\n\n def _adjust_binner_for_upsample(self, binner):\n """\n Adjust our binner when upsampling.\n\n The range of a new index is allowed to be greater than original range\n so we don't need to change the length of a binner, GH 13022\n """\n return binner\n\n\n# error: Definition of "ax" in base class "_GroupByMixin" is incompatible with\n# definition in base class "DatetimeIndexResampler"\nclass TimedeltaIndexResamplerGroupby( # type: ignore[misc]\n _GroupByMixin, TimedeltaIndexResampler\n):\n """\n Provides a resample of a groupby implementation.\n """\n\n @property\n def _resampler_cls(self):\n return TimedeltaIndexResampler\n\n\ndef get_resampler(obj: Series | DataFrame, kind=None, **kwds) -> Resampler:\n """\n Create a TimeGrouper and return our resampler.\n """\n tg = TimeGrouper(obj, **kwds) # type: ignore[arg-type]\n return tg._get_resampler(obj, kind=kind)\n\n\nget_resampler.__doc__ = Resampler.__doc__\n\n\ndef get_resampler_for_grouping(\n groupby: GroupBy,\n rule,\n how=None,\n fill_method=None,\n limit: int | None = None,\n kind=None,\n on=None,\n include_groups: bool = True,\n **kwargs,\n) -> Resampler:\n """\n Return our appropriate resampler when grouping as well.\n """\n # .resample uses 'on' similar to how .groupby uses 'key'\n tg = TimeGrouper(freq=rule, key=on, **kwargs)\n resampler = tg._get_resampler(groupby.obj, kind=kind)\n return resampler._get_resampler_for_grouping(\n groupby=groupby, include_groups=include_groups, key=tg.key\n )\n\n\nclass TimeGrouper(Grouper):\n """\n Custom groupby class for time-interval grouping.\n\n Parameters\n ----------\n freq : pandas date offset or offset alias for identifying bin edges\n closed : closed end of interval; 'left' or 'right'\n label : interval boundary to use for labeling; 'left' or 'right'\n convention : {'start', 'end', 'e', 's'}\n If axis is PeriodIndex\n """\n\n _attributes = Grouper._attributes + (\n "closed",\n "label",\n "how",\n "kind",\n "convention",\n "origin",\n "offset",\n )\n\n origin: TimeGrouperOrigin\n\n def __init__(\n self,\n obj: Grouper | None = None,\n freq: Frequency = "Min",\n key: str | None = None,\n closed: Literal["left", "right"] | None = None,\n label: Literal["left", "right"] | None = None,\n how: str = "mean",\n axis: Axis = 0,\n fill_method=None,\n limit: int | None = None,\n kind: str | None = None,\n convention: Literal["start", "end", "e", "s"] | None = None,\n origin: 
Literal["epoch", "start", "start_day", "end", "end_day"]\n | TimestampConvertibleTypes = "start_day",\n offset: TimedeltaConvertibleTypes | None = None,\n group_keys: bool = False,\n **kwargs,\n ) -> None:\n # Check for correctness of the keyword arguments which would\n # otherwise silently use the default if misspelled\n if label not in {None, "left", "right"}:\n raise ValueError(f"Unsupported value {label} for `label`")\n if closed not in {None, "left", "right"}:\n raise ValueError(f"Unsupported value {closed} for `closed`")\n if convention not in {None, "start", "end", "e", "s"}:\n raise ValueError(f"Unsupported value {convention} for `convention`")\n\n if (\n key is None\n and obj is not None\n and isinstance(obj.index, PeriodIndex) # type: ignore[attr-defined]\n or (\n key is not None\n and obj is not None\n and getattr(obj[key], "dtype", None) == "period" # type: ignore[index]\n )\n ):\n freq = to_offset(freq, is_period=True)\n else:\n freq = to_offset(freq)\n\n end_types = {"ME", "YE", "QE", "BME", "BYE", "BQE", "W"}\n rule = freq.rule_code\n if rule in end_types or ("-" in rule and rule[: rule.find("-")] in end_types):\n if closed is None:\n closed = "right"\n if label is None:\n label = "right"\n else:\n # The backward resample sets ``closed`` to ``'right'`` by default\n # since the last value should be considered as the edge point for\n # the last bin. When origin in "end" or "end_day", the value for a\n # specific ``Timestamp`` index stands for the resample result from\n # the current ``Timestamp`` minus ``freq`` to the current\n # ``Timestamp`` with a right close.\n if origin in ["end", "end_day"]:\n if closed is None:\n closed = "right"\n if label is None:\n label = "right"\n else:\n if closed is None:\n closed = "left"\n if label is None:\n label = "left"\n\n self.closed = closed\n self.label = label\n self.kind = kind\n self.convention = convention if convention is not None else "e"\n self.how = how\n self.fill_method = fill_method\n self.limit = limit\n self.group_keys = group_keys\n self._arrow_dtype: ArrowDtype | None = None\n\n if origin in ("epoch", "start", "start_day", "end", "end_day"):\n # error: Incompatible types in assignment (expression has type "Union[Union[\n # Timestamp, datetime, datetime64, signedinteger[_64Bit], float, str],\n # Literal['epoch', 'start', 'start_day', 'end', 'end_day']]", variable has\n # type "Union[Timestamp, Literal['epoch', 'start', 'start_day', 'end',\n # 'end_day']]")\n self.origin = origin # type: ignore[assignment]\n else:\n try:\n self.origin = Timestamp(origin)\n except (ValueError, TypeError) as err:\n raise ValueError(\n "'origin' should be equal to 'epoch', 'start', 'start_day', "\n "'end', 'end_day' or "\n f"should be a Timestamp convertible type. Got '{origin}' instead."\n ) from err\n\n try:\n self.offset = Timedelta(offset) if offset is not None else None\n except (ValueError, TypeError) as err:\n raise ValueError(\n "'offset' should be a Timedelta convertible type. 
"\n f"Got '{offset}' instead."\n ) from err\n\n # always sort time groupers\n kwargs["sort"] = True\n\n super().__init__(freq=freq, key=key, axis=axis, **kwargs)\n\n def _get_resampler(self, obj: NDFrame, kind=None) -> Resampler:\n """\n Return my resampler or raise if we have an invalid axis.\n\n Parameters\n ----------\n obj : Series or DataFrame\n kind : string, optional\n 'period','timestamp','timedelta' are valid\n\n Returns\n -------\n Resampler\n\n Raises\n ------\n TypeError if incompatible axis\n\n """\n _, ax, _ = self._set_grouper(obj, gpr_index=None)\n if isinstance(ax, DatetimeIndex):\n return DatetimeIndexResampler(\n obj,\n timegrouper=self,\n kind=kind,\n axis=self.axis,\n group_keys=self.group_keys,\n gpr_index=ax,\n )\n elif isinstance(ax, PeriodIndex) or kind == "period":\n if isinstance(ax, PeriodIndex):\n # GH#53481\n warnings.warn(\n "Resampling with a PeriodIndex is deprecated. "\n "Cast index to DatetimeIndex before resampling instead.",\n FutureWarning,\n stacklevel=find_stack_level(),\n )\n else:\n warnings.warn(\n "Resampling with kind='period' is deprecated. "\n "Use datetime paths instead.",\n FutureWarning,\n stacklevel=find_stack_level(),\n )\n return PeriodIndexResampler(\n obj,\n timegrouper=self,\n kind=kind,\n axis=self.axis,\n group_keys=self.group_keys,\n gpr_index=ax,\n )\n elif isinstance(ax, TimedeltaIndex):\n return TimedeltaIndexResampler(\n obj,\n timegrouper=self,\n axis=self.axis,\n group_keys=self.group_keys,\n gpr_index=ax,\n )\n\n raise TypeError(\n "Only valid with DatetimeIndex, "\n "TimedeltaIndex or PeriodIndex, "\n f"but got an instance of '{type(ax).__name__}'"\n )\n\n def _get_grouper(\n self, obj: NDFrameT, validate: bool = True\n ) -> tuple[BinGrouper, NDFrameT]:\n # create the resampler and return our binner\n r = self._get_resampler(obj)\n return r._grouper, cast(NDFrameT, r.obj)\n\n def _get_time_bins(self, ax: DatetimeIndex):\n if not isinstance(ax, DatetimeIndex):\n raise TypeError(\n "axis must be a DatetimeIndex, but got "\n f"an instance of {type(ax).__name__}"\n )\n\n if len(ax) == 0:\n binner = labels = DatetimeIndex(\n data=[], freq=self.freq, name=ax.name, dtype=ax.dtype\n )\n return binner, [], labels\n\n first, last = _get_timestamp_range_edges(\n ax.min(),\n ax.max(),\n self.freq,\n unit=ax.unit,\n closed=self.closed,\n origin=self.origin,\n offset=self.offset,\n )\n # GH #12037\n # use first/last directly instead of call replace() on them\n # because replace() will swallow the nanosecond part\n # thus last bin maybe slightly before the end if the end contains\n # nanosecond part and lead to `Values falls after last bin` error\n # GH 25758: If DST lands at midnight (e.g. 
'America/Havana'), user feedback\n # has noted that ambiguous=True provides the most sensible result\n binner = labels = date_range(\n freq=self.freq,\n start=first,\n end=last,\n tz=ax.tz,\n name=ax.name,\n ambiguous=True,\n nonexistent="shift_forward",\n unit=ax.unit,\n )\n\n ax_values = ax.asi8\n binner, bin_edges = self._adjust_bin_edges(binner, ax_values)\n\n # general version, knowing nothing about relative frequencies\n bins = lib.generate_bins_dt64(\n ax_values, bin_edges, self.closed, hasnans=ax.hasnans\n )\n\n if self.closed == "right":\n labels = binner\n if self.label == "right":\n labels = labels[1:]\n elif self.label == "right":\n labels = labels[1:]\n\n if ax.hasnans:\n binner = binner.insert(0, NaT)\n labels = labels.insert(0, NaT)\n\n # if we end up with more labels than bins\n # adjust the labels\n # GH4076\n if len(bins) < len(labels):\n labels = labels[: len(bins)]\n\n return binner, bins, labels\n\n def _adjust_bin_edges(\n self, binner: DatetimeIndex, ax_values: npt.NDArray[np.int64]\n ) -> tuple[DatetimeIndex, npt.NDArray[np.int64]]:\n # Some hacks for > daily data, see #1471, #1458, #1483\n\n if self.freq.name in ("BME", "ME", "W") or self.freq.name.split("-")[0] in (\n "BQE",\n "BYE",\n "QE",\n "YE",\n "W",\n ):\n # If the right end-point is on the last day of the month, roll forwards\n # until the last moment of that day. Note that we only do this for offsets\n # which correspond to the end of a super-daily period - "month start", for\n # example, is excluded.\n if self.closed == "right":\n # GH 21459, GH 9119: Adjust the bins relative to the wall time\n edges_dti = binner.tz_localize(None)\n edges_dti = (\n edges_dti\n + Timedelta(days=1, unit=edges_dti.unit).as_unit(edges_dti.unit)\n - Timedelta(1, unit=edges_dti.unit).as_unit(edges_dti.unit)\n )\n bin_edges = edges_dti.tz_localize(binner.tz).asi8\n else:\n bin_edges = binner.asi8\n\n # intraday values on last day\n if bin_edges[-2] > ax_values.max():\n bin_edges = bin_edges[:-1]\n binner = binner[:-1]\n else:\n bin_edges = binner.asi8\n return binner, bin_edges\n\n def _get_time_delta_bins(self, ax: TimedeltaIndex):\n if not isinstance(ax, TimedeltaIndex):\n raise TypeError(\n "axis must be a TimedeltaIndex, but got "\n f"an instance of {type(ax).__name__}"\n )\n\n if not isinstance(self.freq, Tick):\n # GH#51896\n raise ValueError(\n "Resampling on a TimedeltaIndex requires fixed-duration `freq`, "\n f"e.g. 
'24h' or '3D', not {self.freq}"\n )\n\n if not len(ax):\n binner = labels = TimedeltaIndex(data=[], freq=self.freq, name=ax.name)\n return binner, [], labels\n\n start, end = ax.min(), ax.max()\n\n if self.closed == "right":\n end += self.freq\n\n labels = binner = timedelta_range(\n start=start, end=end, freq=self.freq, name=ax.name\n )\n\n end_stamps = labels\n if self.closed == "left":\n end_stamps += self.freq\n\n bins = ax.searchsorted(end_stamps, side=self.closed)\n\n if self.offset:\n # GH 10530 & 31809\n labels += self.offset\n\n return binner, bins, labels\n\n def _get_time_period_bins(self, ax: DatetimeIndex):\n if not isinstance(ax, DatetimeIndex):\n raise TypeError(\n "axis must be a DatetimeIndex, but got "\n f"an instance of {type(ax).__name__}"\n )\n\n freq = self.freq\n\n if len(ax) == 0:\n binner = labels = PeriodIndex(\n data=[], freq=freq, name=ax.name, dtype=ax.dtype\n )\n return binner, [], labels\n\n labels = binner = period_range(start=ax[0], end=ax[-1], freq=freq, name=ax.name)\n\n end_stamps = (labels + freq).asfreq(freq, "s").to_timestamp()\n if ax.tz:\n end_stamps = end_stamps.tz_localize(ax.tz)\n bins = ax.searchsorted(end_stamps, side="left")\n\n return binner, bins, labels\n\n def _get_period_bins(self, ax: PeriodIndex):\n if not isinstance(ax, PeriodIndex):\n raise TypeError(\n "axis must be a PeriodIndex, but got "\n f"an instance of {type(ax).__name__}"\n )\n\n memb = ax.asfreq(self.freq, how=self.convention)\n\n # NaT handling as in pandas._lib.lib.generate_bins_dt64()\n nat_count = 0\n if memb.hasnans:\n # error: Incompatible types in assignment (expression has type\n # "bool_", variable has type "int") [assignment]\n nat_count = np.sum(memb._isnan) # type: ignore[assignment]\n memb = memb[~memb._isnan]\n\n if not len(memb):\n # index contains no valid (non-NaT) values\n bins = np.array([], dtype=np.int64)\n binner = labels = PeriodIndex(data=[], freq=self.freq, name=ax.name)\n if len(ax) > 0:\n # index is all NaT\n binner, bins, labels = _insert_nat_bin(binner, bins, labels, len(ax))\n return binner, bins, labels\n\n freq_mult = self.freq.n\n\n start = ax.min().asfreq(self.freq, how=self.convention)\n end = ax.max().asfreq(self.freq, how="end")\n bin_shift = 0\n\n if isinstance(self.freq, Tick):\n # GH 23882 & 31809: get adjusted bin edge labels with 'origin'\n # and 'origin' support. 
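# --- Illustrative sketch (editor's addition, not part of the original module) ---
# TimedeltaIndex resampling needs a fixed-duration (Tick) freq, as enforced in
# _get_time_delta_bins above. Made-up data; assumed standard pandas behavior.
import pandas as pd

td = pd.Series(range(5), index=pd.timedelta_range("0s", periods=5, freq="30min"))
td.resample("1h").sum()    # OK: '1h' is a fixed duration
# td.resample("ME").sum()  # would raise ValueError: 'ME' is not fixed-duration
# ---------------------------------------------------------------------------------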
This call only makes sense if the freq is a\n # Tick since offset and origin are only used in those cases.\n # Not doing this check could create an extra empty bin.\n p_start, end = _get_period_range_edges(\n start,\n end,\n self.freq,\n closed=self.closed,\n origin=self.origin,\n offset=self.offset,\n )\n\n # Get offset for bin edge (not label edge) adjustment\n start_offset = Period(start, self.freq) - Period(p_start, self.freq)\n # error: Item "Period" of "Union[Period, Any]" has no attribute "n"\n bin_shift = start_offset.n % freq_mult # type: ignore[union-attr]\n start = p_start\n\n labels = binner = period_range(\n start=start, end=end, freq=self.freq, name=ax.name\n )\n\n i8 = memb.asi8\n\n # when upsampling to subperiods, we need to generate enough bins\n expected_bins_count = len(binner) * freq_mult\n i8_extend = expected_bins_count - (i8[-1] - i8[0])\n rng = np.arange(i8[0], i8[-1] + i8_extend, freq_mult)\n rng += freq_mult\n # adjust bin edge indexes to account for base\n rng -= bin_shift\n\n # Wrap in PeriodArray for PeriodArray.searchsorted\n prng = type(memb._data)(rng, dtype=memb.dtype)\n bins = memb.searchsorted(prng, side="left")\n\n if nat_count > 0:\n binner, bins, labels = _insert_nat_bin(binner, bins, labels, nat_count)\n\n return binner, bins, labels\n\n def _set_grouper(\n self, obj: NDFrameT, sort: bool = False, *, gpr_index: Index | None = None\n ) -> tuple[NDFrameT, Index, npt.NDArray[np.intp] | None]:\n obj, ax, indexer = super()._set_grouper(obj, sort, gpr_index=gpr_index)\n if isinstance(ax.dtype, ArrowDtype) and ax.dtype.kind in "Mm":\n self._arrow_dtype = ax.dtype\n ax = Index(\n cast(ArrowExtensionArray, ax.array)._maybe_convert_datelike_array()\n )\n return obj, ax, indexer\n\n\ndef _take_new_index(\n obj: NDFrameT, indexer: npt.NDArray[np.intp], new_index: Index, axis: AxisInt = 0\n) -> NDFrameT:\n if isinstance(obj, ABCSeries):\n new_values = algos.take_nd(obj._values, indexer)\n # error: Incompatible return value type (got "Series", expected "NDFrameT")\n return obj._constructor( # type: ignore[return-value]\n new_values, index=new_index, name=obj.name\n )\n elif isinstance(obj, ABCDataFrame):\n if axis == 1:\n raise NotImplementedError("axis 1 is not supported")\n new_mgr = obj._mgr.reindex_indexer(new_axis=new_index, indexer=indexer, axis=1)\n # error: Incompatible return value type (got "DataFrame", expected "NDFrameT")\n return obj._constructor_from_mgr(new_mgr, axes=new_mgr.axes) # type: ignore[return-value]\n else:\n raise ValueError("'obj' should be either a Series or a DataFrame")\n\n\ndef _get_timestamp_range_edges(\n first: Timestamp,\n last: Timestamp,\n freq: BaseOffset,\n unit: str,\n closed: Literal["right", "left"] = "left",\n origin: TimeGrouperOrigin = "start_day",\n offset: Timedelta | None = None,\n) -> tuple[Timestamp, Timestamp]:\n """\n Adjust the `first` Timestamp to the preceding Timestamp that resides on\n the provided offset. Adjust the `last` Timestamp to the following\n Timestamp that resides on the provided offset. 
Input Timestamps that\n already reside on the offset will be adjusted depending on the type of\n offset and the `closed` parameter.\n\n Parameters\n ----------\n first : pd.Timestamp\n The beginning Timestamp of the range to be adjusted.\n last : pd.Timestamp\n The ending Timestamp of the range to be adjusted.\n freq : pd.DateOffset\n The dateoffset to which the Timestamps will be adjusted.\n closed : {'right', 'left'}, default "left"\n Which side of bin interval is closed.\n origin : {'epoch', 'start', 'start_day'} or Timestamp, default 'start_day'\n The timestamp on which to adjust the grouping. The timezone of origin must\n match the timezone of the index.\n If a timestamp is not used, these values are also supported:\n\n - 'epoch': `origin` is 1970-01-01\n - 'start': `origin` is the first value of the timeseries\n - 'start_day': `origin` is the first day at midnight of the timeseries\n offset : pd.Timedelta, default is None\n An offset timedelta added to the origin.\n\n Returns\n -------\n A tuple of length 2, containing the adjusted pd.Timestamp objects.\n """\n if isinstance(freq, Tick):\n index_tz = first.tz\n if isinstance(origin, Timestamp) and (origin.tz is None) != (index_tz is None):\n raise ValueError("The origin must have the same timezone as the index.")\n if origin == "epoch":\n # set the epoch based on the timezone to have similar bins results when\n # resampling on the same kind of indexes on different timezones\n origin = Timestamp("1970-01-01", tz=index_tz)\n\n if isinstance(freq, Day):\n # _adjust_dates_anchored assumes 'D' means 24h, but first/last\n # might contain a DST transition (23h, 24h, or 25h).\n # So "pretend" the dates are naive when adjusting the endpoints\n first = first.tz_localize(None)\n last = last.tz_localize(None)\n if isinstance(origin, Timestamp):\n origin = origin.tz_localize(None)\n\n first, last = _adjust_dates_anchored(\n first, last, freq, closed=closed, origin=origin, offset=offset, unit=unit\n )\n if isinstance(freq, Day):\n first = first.tz_localize(index_tz)\n last = last.tz_localize(index_tz)\n else:\n first = first.normalize()\n last = last.normalize()\n\n if closed == "left":\n first = Timestamp(freq.rollback(first))\n else:\n first = Timestamp(first - freq)\n\n last = Timestamp(last + freq)\n\n return first, last\n\n\ndef _get_period_range_edges(\n first: Period,\n last: Period,\n freq: BaseOffset,\n closed: Literal["right", "left"] = "left",\n origin: TimeGrouperOrigin = "start_day",\n offset: Timedelta | None = None,\n) -> tuple[Period, Period]:\n """\n Adjust the provided `first` and `last` Periods to the respective Period of\n the given offset that encompasses them.\n\n Parameters\n ----------\n first : pd.Period\n The beginning Period of the range to be adjusted.\n last : pd.Period\n The ending Period of the range to be adjusted.\n freq : pd.DateOffset\n The freq to which the Periods will be adjusted.\n closed : {'right', 'left'}, default "left"\n Which side of bin interval is closed.\n origin : {'epoch', 'start', 'start_day'}, Timestamp, default 'start_day'\n The timestamp on which to adjust the grouping. 
The timezone of origin must\n match the timezone of the index.\n\n If a timestamp is not used, these values are also supported:\n\n - 'epoch': `origin` is 1970-01-01\n - 'start': `origin` is the first value of the timeseries\n - 'start_day': `origin` is the first day at midnight of the timeseries\n offset : pd.Timedelta, default is None\n An offset timedelta added to the origin.\n\n Returns\n -------\n A tuple of length 2, containing the adjusted pd.Period objects.\n """\n if not all(isinstance(obj, Period) for obj in [first, last]):\n raise TypeError("'first' and 'last' must be instances of type Period")\n\n # GH 23882\n first_ts = first.to_timestamp()\n last_ts = last.to_timestamp()\n adjust_first = not freq.is_on_offset(first_ts)\n adjust_last = freq.is_on_offset(last_ts)\n\n first_ts, last_ts = _get_timestamp_range_edges(\n first_ts, last_ts, freq, unit="ns", closed=closed, origin=origin, offset=offset\n )\n\n first = (first_ts + int(adjust_first) * freq).to_period(freq)\n last = (last_ts - int(adjust_last) * freq).to_period(freq)\n return first, last\n\n\ndef _insert_nat_bin(\n binner: PeriodIndex, bins: np.ndarray, labels: PeriodIndex, nat_count: int\n) -> tuple[PeriodIndex, np.ndarray, PeriodIndex]:\n # NaT handling as in pandas._lib.lib.generate_bins_dt64()\n # shift bins by the number of NaT\n assert nat_count > 0\n bins += nat_count\n bins = np.insert(bins, 0, nat_count)\n\n # Incompatible types in assignment (expression has type "Index", variable\n # has type "PeriodIndex")\n binner = binner.insert(0, NaT) # type: ignore[assignment]\n # Incompatible types in assignment (expression has type "Index", variable\n # has type "PeriodIndex")\n labels = labels.insert(0, NaT) # type: ignore[assignment]\n return binner, bins, labels\n\n\ndef _adjust_dates_anchored(\n first: Timestamp,\n last: Timestamp,\n freq: Tick,\n closed: Literal["right", "left"] = "right",\n origin: TimeGrouperOrigin = "start_day",\n offset: Timedelta | None = None,\n unit: str = "ns",\n) -> tuple[Timestamp, Timestamp]:\n # First and last offsets should be calculated from the start day to fix an\n # error cause by resampling across multiple days when a one day period is\n # not a multiple of the frequency. See GH 8683\n # To handle frequencies that are not multiple or divisible by a day we let\n # the possibility to define a fixed origin timestamp. See GH 31809\n first = first.as_unit(unit)\n last = last.as_unit(unit)\n if offset is not None:\n offset = offset.as_unit(unit)\n\n freq_value = Timedelta(freq).as_unit(unit)._value\n\n origin_timestamp = 0 # origin == "epoch"\n if origin == "start_day":\n origin_timestamp = first.normalize()._value\n elif origin == "start":\n origin_timestamp = first._value\n elif isinstance(origin, Timestamp):\n origin_timestamp = origin.as_unit(unit)._value\n elif origin in ["end", "end_day"]:\n origin_last = last if origin == "end" else last.ceil("D")\n sub_freq_times = (origin_last._value - first._value) // freq_value\n if closed == "left":\n sub_freq_times += 1\n first = origin_last - sub_freq_times * freq\n origin_timestamp = first._value\n origin_timestamp += offset._value if offset else 0\n\n # GH 10117 & GH 19375. 
If first and last contain timezone information,\n # Perform the calculation in UTC in order to avoid localizing on an\n # Ambiguous or Nonexistent time.\n first_tzinfo = first.tzinfo\n last_tzinfo = last.tzinfo\n if first_tzinfo is not None:\n first = first.tz_convert("UTC")\n if last_tzinfo is not None:\n last = last.tz_convert("UTC")\n\n foffset = (first._value - origin_timestamp) % freq_value\n loffset = (last._value - origin_timestamp) % freq_value\n\n if closed == "right":\n if foffset > 0:\n # roll back\n fresult_int = first._value - foffset\n else:\n fresult_int = first._value - freq_value\n\n if loffset > 0:\n # roll forward\n lresult_int = last._value + (freq_value - loffset)\n else:\n # already the end of the road\n lresult_int = last._value\n else: # closed == 'left'\n if foffset > 0:\n fresult_int = first._value - foffset\n else:\n # start of the road\n fresult_int = first._value\n\n if loffset > 0:\n # roll forward\n lresult_int = last._value + (freq_value - loffset)\n else:\n lresult_int = last._value + freq_value\n fresult = Timestamp(fresult_int, unit=unit)\n lresult = Timestamp(lresult_int, unit=unit)\n if first_tzinfo is not None:\n fresult = fresult.tz_localize("UTC").tz_convert(first_tzinfo)\n if last_tzinfo is not None:\n lresult = lresult.tz_localize("UTC").tz_convert(last_tzinfo)\n return fresult, lresult\n\n\ndef asfreq(\n obj: NDFrameT,\n freq,\n method=None,\n how=None,\n normalize: bool = False,\n fill_value=None,\n) -> NDFrameT:\n """\n Utility frequency conversion method for Series/DataFrame.\n\n See :meth:`pandas.NDFrame.asfreq` for full documentation.\n """\n if isinstance(obj.index, PeriodIndex):\n if method is not None:\n raise NotImplementedError("'method' argument is not supported")\n\n if how is None:\n how = "E"\n\n if isinstance(freq, BaseOffset):\n if hasattr(freq, "_period_dtype_code"):\n freq = freq_to_period_freqstr(freq.n, freq.name)\n else:\n raise ValueError(\n f"Invalid offset: '{freq.base}' for converting time series "\n f"with PeriodIndex."\n )\n\n new_obj = obj.copy()\n new_obj.index = obj.index.asfreq(freq, how=how)\n\n elif len(obj.index) == 0:\n new_obj = obj.copy()\n\n new_obj.index = _asfreq_compat(obj.index, freq)\n else:\n unit = None\n if isinstance(obj.index, DatetimeIndex):\n # TODO: should we disallow non-DatetimeIndex?\n unit = obj.index.unit\n dti = date_range(obj.index.min(), obj.index.max(), freq=freq, unit=unit)\n dti.name = obj.index.name\n new_obj = obj.reindex(dti, method=method, fill_value=fill_value)\n if normalize:\n new_obj.index = new_obj.index.normalize()\n\n return new_obj\n\n\ndef _asfreq_compat(index: DatetimeIndex | PeriodIndex | TimedeltaIndex, freq):\n """\n Helper to mimic asfreq on (empty) DatetimeIndex and TimedeltaIndex.\n\n Parameters\n ----------\n index : PeriodIndex, DatetimeIndex, or TimedeltaIndex\n freq : DateOffset\n\n Returns\n -------\n same type as index\n """\n if len(index) != 0:\n # This should never be reached, always checked by the caller\n raise ValueError(\n "Can only set arbitrary freq for empty DatetimeIndex or TimedeltaIndex"\n )\n new_index: Index\n if isinstance(index, PeriodIndex):\n new_index = index.asfreq(freq=freq)\n elif isinstance(index, DatetimeIndex):\n new_index = DatetimeIndex([], dtype=index.dtype, freq=freq, name=index.name)\n elif isinstance(index, TimedeltaIndex):\n new_index = TimedeltaIndex([], dtype=index.dtype, freq=freq, name=index.name)\n else: # pragma: no cover\n raise TypeError(type(index))\n return new_index\n\n\ndef maybe_warn_args_and_kwargs(cls, kernel: 
str, args, kwargs) -> None:\n """\n Warn for deprecation of args and kwargs in resample functions.\n\n Parameters\n ----------\n cls : type\n Class to warn about.\n kernel : str\n Operation name.\n args : tuple or None\n args passed by user. Will be None if and only if kernel does not have args.\n kwargs : dict or None\n kwargs passed by user. Will be None if and only if kernel does not have kwargs.\n """\n warn_args = args is not None and len(args) > 0\n warn_kwargs = kwargs is not None and len(kwargs) > 0\n if warn_args and warn_kwargs:\n msg = "args and kwargs"\n elif warn_args:\n msg = "args"\n elif warn_kwargs:\n msg = "kwargs"\n else:\n return\n warnings.warn(\n f"Passing additional {msg} to {cls.__name__}.{kernel} has "\n "no impact on the result and is deprecated. This will "\n "raise a TypeError in a future version of pandas.",\n category=FutureWarning,\n stacklevel=find_stack_level(),\n )\n\n\ndef _apply(\n grouped: GroupBy, how: Callable, *args, include_groups: bool, **kwargs\n) -> DataFrame:\n # GH#7155 - rewrite warning to appear as if it came from `.resample`\n target_message = "DataFrameGroupBy.apply operated on the grouping columns"\n new_message = _apply_groupings_depr.format("DataFrameGroupBy", "resample")\n with rewrite_warning(\n target_message=target_message,\n target_category=FutureWarning,\n new_message=new_message,\n ):\n result = grouped.apply(how, *args, include_groups=include_groups, **kwargs)\n return result\n
.venv\Lib\site-packages\pandas\core\resample.py
resample.py
Python
95,573
0.75
0.10411
0.078895
vue-tools
213
2023-12-06T19:27:25.974560
MIT
false
99b0be6742ce6c7877dba0f6e9eeeac1
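The resample.py helpers in the record above (_get_timestamp_range_edges, _adjust_dates_anchored, asfreq) back the public origin and offset arguments of DataFrame.resample. As a rough sketch of the behaviour they implement, using only the public pandas API (the sample data below are made up for illustration):

import pandas as pd

# Points at a 7-minute spacing straddling midnight, so 17-minute bins do not
# line up with the day boundary unless the origin/offset is adjusted.
idx = pd.date_range("2000-10-01 23:30:00", periods=7, freq="7min")
ts = pd.Series(range(7), index=idx)

# Default origin="start_day": edges are anchored to midnight of the first day.
print(ts.resample("17min").sum())

# Anchor the edges to the Unix epoch instead.
print(ts.resample("17min", origin="epoch").sum())

# Keep the default origin but shift every edge by a fixed offset.
print(ts.resample("17min", offset="23h30min").sum())

The differing bin labels across the three calls reflect the edge adjustment performed by _get_timestamp_range_edges and _adjust_dates_anchored before the binner is built.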
"""\nReversed Operations not available in the stdlib operator module.\nDefining these instead of using lambdas allows us to reference them by name.\n"""\nfrom __future__ import annotations\n\nimport operator\n\n\ndef radd(left, right):\n return right + left\n\n\ndef rsub(left, right):\n return right - left\n\n\ndef rmul(left, right):\n return right * left\n\n\ndef rdiv(left, right):\n return right / left\n\n\ndef rtruediv(left, right):\n return right / left\n\n\ndef rfloordiv(left, right):\n return right // left\n\n\ndef rmod(left, right):\n # check if right is a string as % is the string\n # formatting operation; this is a TypeError\n # otherwise perform the op\n if isinstance(right, str):\n typ = type(left).__name__\n raise TypeError(f"{typ} cannot perform the operation mod")\n\n return right % left\n\n\ndef rdivmod(left, right):\n return divmod(right, left)\n\n\ndef rpow(left, right):\n return right**left\n\n\ndef rand_(left, right):\n return operator.and_(right, left)\n\n\ndef ror_(left, right):\n return operator.or_(right, left)\n\n\ndef rxor(left, right):\n return operator.xor(right, left)\n
.venv\Lib\site-packages\pandas\core\roperator.py
roperator.py
Python
1,114
0.95
0.225806
0.083333
python-kit
247
2024-07-06T15:04:49.573478
MIT
false
4648e1026878f6c76567a6bbdfa6c116
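roperator.py above only names the reflected arithmetic operations so pandas can reference them by name rather than through lambdas. A tiny, self-contained illustration of why the flipped argument order matters for non-commutative operators (plain Python, nothing pandas-specific):

import operator

def rsub(left, right):
    # Reflected subtraction: evaluated as right - left.
    return right - left

assert operator.sub(10, 3) == 7   # ordinary 10 - 3
assert rsub(10, 3) == -7          # reflected: 3 - 10

# This mirrors how Python falls back to obj.__rsub__(3) when evaluating
# `3 - obj` and int.__sub__ returns NotImplemented.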
"""\nModule containing utilities for NDFrame.sample() and .GroupBy.sample()\n"""\nfrom __future__ import annotations\n\nfrom typing import TYPE_CHECKING\n\nimport numpy as np\n\nfrom pandas._libs import lib\n\nfrom pandas.core.dtypes.generic import (\n ABCDataFrame,\n ABCSeries,\n)\n\nif TYPE_CHECKING:\n from pandas._typing import AxisInt\n\n from pandas.core.generic import NDFrame\n\n\ndef preprocess_weights(obj: NDFrame, weights, axis: AxisInt) -> np.ndarray:\n """\n Process and validate the `weights` argument to `NDFrame.sample` and\n `.GroupBy.sample`.\n\n Returns `weights` as an ndarray[np.float64], validated except for normalizing\n weights (because that must be done groupwise in groupby sampling).\n """\n # If a series, align with frame\n if isinstance(weights, ABCSeries):\n weights = weights.reindex(obj.axes[axis])\n\n # Strings acceptable if a dataframe and axis = 0\n if isinstance(weights, str):\n if isinstance(obj, ABCDataFrame):\n if axis == 0:\n try:\n weights = obj[weights]\n except KeyError as err:\n raise KeyError(\n "String passed to weights not a valid column"\n ) from err\n else:\n raise ValueError(\n "Strings can only be passed to "\n "weights when sampling from rows on "\n "a DataFrame"\n )\n else:\n raise ValueError(\n "Strings cannot be passed as weights when sampling from a Series."\n )\n\n if isinstance(obj, ABCSeries):\n func = obj._constructor\n else:\n func = obj._constructor_sliced\n\n weights = func(weights, dtype="float64")._values\n\n if len(weights) != obj.shape[axis]:\n raise ValueError("Weights and axis to be sampled must be of same length")\n\n if lib.has_infs(weights):\n raise ValueError("weight vector may not include `inf` values")\n\n if (weights < 0).any():\n raise ValueError("weight vector many not include negative values")\n\n missing = np.isnan(weights)\n if missing.any():\n # Don't modify weights in place\n weights = weights.copy()\n weights[missing] = 0\n return weights\n\n\ndef process_sampling_size(\n n: int | None, frac: float | None, replace: bool\n) -> int | None:\n """\n Process and validate the `n` and `frac` arguments to `NDFrame.sample` and\n `.GroupBy.sample`.\n\n Returns None if `frac` should be used (variable sampling sizes), otherwise returns\n the constant sampling size.\n """\n # If no frac or n, default to n=1.\n if n is None and frac is None:\n n = 1\n elif n is not None and frac is not None:\n raise ValueError("Please enter a value for `frac` OR `n`, not both")\n elif n is not None:\n if n < 0:\n raise ValueError(\n "A negative number of rows requested. Please provide `n` >= 0."\n )\n if n % 1 != 0:\n raise ValueError("Only integers accepted as `n` values")\n else:\n assert frac is not None # for mypy\n if frac > 1 and not replace:\n raise ValueError(\n "Replace has to be set to `True` when "\n "upsampling the population `frac` > 1."\n )\n if frac < 0:\n raise ValueError(\n "A negative number of rows requested. 
Please provide `frac` >= 0."\n )\n\n return n\n\n\ndef sample(\n obj_len: int,\n size: int,\n replace: bool,\n weights: np.ndarray | None,\n random_state: np.random.RandomState | np.random.Generator,\n) -> np.ndarray:\n """\n Randomly sample `size` indices in `np.arange(obj_len)`\n\n Parameters\n ----------\n obj_len : int\n The length of the indices being considered\n size : int\n The number of values to choose\n replace : bool\n Allow or disallow sampling of the same row more than once.\n weights : np.ndarray[np.float64] or None\n If None, equal probability weighting, otherwise weights according\n to the vector normalized\n random_state: np.random.RandomState or np.random.Generator\n State used for the random sampling\n\n Returns\n -------\n np.ndarray[np.intp]\n """\n if weights is not None:\n weight_sum = weights.sum()\n if weight_sum != 0:\n weights = weights / weight_sum\n else:\n raise ValueError("Invalid weights: weights sum to zero")\n\n return random_state.choice(obj_len, size=size, replace=replace, p=weights).astype(\n np.intp, copy=False\n )\n
.venv\Lib\site-packages\pandas\core\sample.py
sample.py
Python
4,626
0.95
0.181818
0.031008
python-kit
974
2023-07-18T16:37:12.622982
BSD-3-Clause
false
e1c101ee50b084b69844982dd75e1f1d
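The sample.py helpers above validate the n/frac/weights arguments behind Series.sample and DataFrame.sample. A minimal sketch of the rules they enforce, exercised through the public API (the frame below is made up):

import pandas as pd

df = pd.DataFrame({"x": [1, 2, 3, 4], "w": [0.0, 0.0, 1.0, 3.0]})

# A column name is accepted as weights when sampling rows of a DataFrame;
# weights are normalised internally and zero-weight rows are never drawn.
print(df.sample(n=2, weights="w", random_state=0))

# `n` and `frac` are mutually exclusive, as checked by process_sampling_size.
try:
    df.sample(n=1, frac=0.5)
except ValueError as err:
    print(err)  # "Please enter a value for `frac` OR `n`, not both"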
from __future__ import annotations\n\n_shared_docs: dict[str, str] = {}\n\n_shared_docs[\n "aggregate"\n] = """\nAggregate using one or more operations over the specified axis.\n\nParameters\n----------\nfunc : function, str, list or dict\n Function to use for aggregating the data. If a function, must either\n work when passed a {klass} or when passed to {klass}.apply.\n\n Accepted combinations are:\n\n - function\n - string function name\n - list of functions and/or function names, e.g. ``[np.sum, 'mean']``\n - dict of axis labels -> functions, function names or list of such.\n{axis}\n*args\n Positional arguments to pass to `func`.\n**kwargs\n Keyword arguments to pass to `func`.\n\nReturns\n-------\nscalar, Series or DataFrame\n\n The return can be:\n\n * scalar : when Series.agg is called with single function\n * Series : when DataFrame.agg is called with a single function\n * DataFrame : when DataFrame.agg is called with several functions\n{see_also}\nNotes\n-----\nThe aggregation operations are always performed over an axis, either the\nindex (default) or the column axis. This behavior is different from\n`numpy` aggregation functions (`mean`, `median`, `prod`, `sum`, `std`,\n`var`), where the default is to compute the aggregation of the flattened\narray, e.g., ``numpy.mean(arr_2d)`` as opposed to\n``numpy.mean(arr_2d, axis=0)``.\n\n`agg` is an alias for `aggregate`. Use the alias.\n\nFunctions that mutate the passed object can produce unexpected\nbehavior or errors and are not supported. See :ref:`gotchas.udf-mutation`\nfor more details.\n\nA passed user-defined-function will be passed a Series for evaluation.\n{examples}"""\n\n_shared_docs[\n "compare"\n] = """\nCompare to another {klass} and show the differences.\n\nParameters\n----------\nother : {klass}\n Object to compare with.\n\nalign_axis : {{0 or 'index', 1 or 'columns'}}, default 1\n Determine which axis to align the comparison on.\n\n * 0, or 'index' : Resulting differences are stacked vertically\n with rows drawn alternately from self and other.\n * 1, or 'columns' : Resulting differences are aligned horizontally\n with columns drawn alternately from self and other.\n\nkeep_shape : bool, default False\n If true, all rows and columns are kept.\n Otherwise, only the ones with different values are kept.\n\nkeep_equal : bool, default False\n If true, the result keeps values that are equal.\n Otherwise, equal values are shown as NaNs.\n\nresult_names : tuple, default ('self', 'other')\n Set the dataframes names in the comparison.\n\n .. versionadded:: 1.5.0\n"""\n\n_shared_docs[\n "groupby"\n] = """\nGroup %(klass)s using a mapper or by a Series of columns.\n\nA groupby operation involves some combination of splitting the\nobject, applying a function, and combining the results. This can be\nused to group large amounts of data and compute operations on these\ngroups.\n\nParameters\n----------\nby : mapping, function, label, pd.Grouper or list of such\n Used to determine the groups for the groupby.\n If ``by`` is a function, it's called on each value of the object's\n index. If a dict or Series is passed, the Series or dict VALUES\n will be used to determine the groups (the Series' values are first\n aligned; see ``.align()`` method). If a list or ndarray of length\n equal to the selected axis is passed (see the `groupby user guide\n <https://pandas.pydata.org/pandas-docs/stable/user_guide/groupby.html#splitting-an-object-into-groups>`_),\n the values are used as-is to determine the groups. 
A label or list\n of labels may be passed to group by the columns in ``self``.\n Notice that a tuple is interpreted as a (single) key.\naxis : {0 or 'index', 1 or 'columns'}, default 0\n Split along rows (0) or columns (1). For `Series` this parameter\n is unused and defaults to 0.\n\n .. deprecated:: 2.1.0\n\n Will be removed and behave like axis=0 in a future version.\n For ``axis=1``, do ``frame.T.groupby(...)`` instead.\n\nlevel : int, level name, or sequence of such, default None\n If the axis is a MultiIndex (hierarchical), group by a particular\n level or levels. Do not specify both ``by`` and ``level``.\nas_index : bool, default True\n Return object with group labels as the\n index. Only relevant for DataFrame input. as_index=False is\n effectively "SQL-style" grouped output. This argument has no effect\n on filtrations (see the `filtrations in the user guide\n <https://pandas.pydata.org/docs/dev/user_guide/groupby.html#filtration>`_),\n such as ``head()``, ``tail()``, ``nth()`` and in transformations\n (see the `transformations in the user guide\n <https://pandas.pydata.org/docs/dev/user_guide/groupby.html#transformation>`_).\nsort : bool, default True\n Sort group keys. Get better performance by turning this off.\n Note this does not influence the order of observations within each\n group. Groupby preserves the order of rows within each group. If False,\n the groups will appear in the same order as they did in the original DataFrame.\n This argument has no effect on filtrations (see the `filtrations in the user guide\n <https://pandas.pydata.org/docs/dev/user_guide/groupby.html#filtration>`_),\n such as ``head()``, ``tail()``, ``nth()`` and in transformations\n (see the `transformations in the user guide\n <https://pandas.pydata.org/docs/dev/user_guide/groupby.html#transformation>`_).\n\n .. versionchanged:: 2.0.0\n\n Specifying ``sort=False`` with an ordered categorical grouper will no\n longer sort the values.\n\ngroup_keys : bool, default True\n When calling apply and the ``by`` argument produces a like-indexed\n (i.e. :ref:`a transform <groupby.transform>`) result, add group keys to\n index to identify pieces. By default group keys are not included\n when the result's index (and column) labels match the inputs, and\n are included otherwise.\n\n .. versionchanged:: 1.5.0\n\n Warns that ``group_keys`` will no longer be ignored when the\n result from ``apply`` is a like-indexed Series or DataFrame.\n Specify ``group_keys`` explicitly to include the group keys or\n not.\n\n .. versionchanged:: 2.0.0\n\n ``group_keys`` now defaults to ``True``.\n\nobserved : bool, default False\n This only applies if any of the groupers are Categoricals.\n If True: only show observed values for categorical groupers.\n If False: show all values for categorical groupers.\n\n .. 
deprecated:: 2.1.0\n\n The default value will change to True in a future version of pandas.\n\ndropna : bool, default True\n If True, and if group keys contain NA values, NA values together\n with row/column will be dropped.\n If False, NA values will also be treated as the key in groups.\n\nReturns\n-------\npandas.api.typing.%(klass)sGroupBy\n Returns a groupby object that contains information about the groups.\n\nSee Also\n--------\nresample : Convenience method for frequency conversion and resampling\n of time series.\n\nNotes\n-----\nSee the `user guide\n<https://pandas.pydata.org/pandas-docs/stable/groupby.html>`__ for more\ndetailed usage and examples, including splitting an object into groups,\niterating through groups, selecting a group, aggregation, and more.\n"""\n\n_shared_docs[\n "melt"\n] = """\nUnpivot a DataFrame from wide to long format, optionally leaving identifiers set.\n\nThis function is useful to massage a DataFrame into a format where one\nor more columns are identifier variables (`id_vars`), while all other\ncolumns, considered measured variables (`value_vars`), are "unpivoted" to\nthe row axis, leaving just two non-identifier columns, 'variable' and\n'value'.\n\nParameters\n----------\nid_vars : scalar, tuple, list, or ndarray, optional\n Column(s) to use as identifier variables.\nvalue_vars : scalar, tuple, list, or ndarray, optional\n Column(s) to unpivot. If not specified, uses all columns that\n are not set as `id_vars`.\nvar_name : scalar, default None\n Name to use for the 'variable' column. If None it uses\n ``frame.columns.name`` or 'variable'.\nvalue_name : scalar, default 'value'\n Name to use for the 'value' column, can't be an existing column label.\ncol_level : scalar, optional\n If columns are a MultiIndex then use this level to melt.\nignore_index : bool, default True\n If True, original index is ignored. If False, the original index is retained.\n Index labels will be repeated as necessary.\n\nReturns\n-------\nDataFrame\n Unpivoted DataFrame.\n\nSee Also\n--------\n%(other)s : Identical method.\npivot_table : Create a spreadsheet-style pivot table as a DataFrame.\nDataFrame.pivot : Return reshaped DataFrame organized\n by given index / column values.\nDataFrame.explode : Explode a DataFrame from list-like\n columns to long format.\n\nNotes\n-----\nReference :ref:`the user guide <reshaping.melt>` for more examples.\n\nExamples\n--------\n>>> df = pd.DataFrame({'A': {0: 'a', 1: 'b', 2: 'c'},\n... 'B': {0: 1, 1: 3, 2: 5},\n... 'C': {0: 2, 1: 4, 2: 6}})\n>>> df\n A B C\n0 a 1 2\n1 b 3 4\n2 c 5 6\n\n>>> %(caller)sid_vars=['A'], value_vars=['B'])\n A variable value\n0 a B 1\n1 b B 3\n2 c B 5\n\n>>> %(caller)sid_vars=['A'], value_vars=['B', 'C'])\n A variable value\n0 a B 1\n1 b B 3\n2 c B 5\n3 a C 2\n4 b C 4\n5 c C 6\n\nThe names of 'variable' and 'value' columns can be customized:\n\n>>> %(caller)sid_vars=['A'], value_vars=['B'],\n... 
var_name='myVarname', value_name='myValname')\n A myVarname myValname\n0 a B 1\n1 b B 3\n2 c B 5\n\nOriginal index values can be kept around:\n\n>>> %(caller)sid_vars=['A'], value_vars=['B', 'C'], ignore_index=False)\n A variable value\n0 a B 1\n1 b B 3\n2 c B 5\n0 a C 2\n1 b C 4\n2 c C 6\n\nIf you have multi-index columns:\n\n>>> df.columns = [list('ABC'), list('DEF')]\n>>> df\n A B C\n D E F\n0 a 1 2\n1 b 3 4\n2 c 5 6\n\n>>> %(caller)scol_level=0, id_vars=['A'], value_vars=['B'])\n A variable value\n0 a B 1\n1 b B 3\n2 c B 5\n\n>>> %(caller)sid_vars=[('A', 'D')], value_vars=[('B', 'E')])\n (A, D) variable_0 variable_1 value\n0 a B E 1\n1 b B E 3\n2 c B E 5\n"""\n\n_shared_docs[\n "transform"\n] = """\nCall ``func`` on self producing a {klass} with the same axis shape as self.\n\nParameters\n----------\nfunc : function, str, list-like or dict-like\n Function to use for transforming the data. If a function, must either\n work when passed a {klass} or when passed to {klass}.apply. If func\n is both list-like and dict-like, dict-like behavior takes precedence.\n\n Accepted combinations are:\n\n - function\n - string function name\n - list-like of functions and/or function names, e.g. ``[np.exp, 'sqrt']``\n - dict-like of axis labels -> functions, function names or list-like of such.\n{axis}\n*args\n Positional arguments to pass to `func`.\n**kwargs\n Keyword arguments to pass to `func`.\n\nReturns\n-------\n{klass}\n A {klass} that must have the same length as self.\n\nRaises\n------\nValueError : If the returned {klass} has a different length than self.\n\nSee Also\n--------\n{klass}.agg : Only perform aggregating type operations.\n{klass}.apply : Invoke function on a {klass}.\n\nNotes\n-----\nFunctions that mutate the passed object can produce unexpected\nbehavior or errors and are not supported. See :ref:`gotchas.udf-mutation`\nfor more details.\n\nExamples\n--------\n>>> df = pd.DataFrame({{'A': range(3), 'B': range(1, 4)}})\n>>> df\n A B\n0 0 1\n1 1 2\n2 2 3\n>>> df.transform(lambda x: x + 1)\n A B\n0 1 2\n1 2 3\n2 3 4\n\nEven though the resulting {klass} must have the same length as the\ninput {klass}, it is possible to provide several input functions:\n\n>>> s = pd.Series(range(3))\n>>> s\n0 0\n1 1\n2 2\ndtype: int64\n>>> s.transform([np.sqrt, np.exp])\n sqrt exp\n0 0.000000 1.000000\n1 1.000000 2.718282\n2 1.414214 7.389056\n\nYou can call transform on a GroupBy object:\n\n>>> df = pd.DataFrame({{\n... "Date": [\n... "2015-05-08", "2015-05-07", "2015-05-06", "2015-05-05",\n... "2015-05-08", "2015-05-07", "2015-05-06", "2015-05-05"],\n... "Data": [5, 8, 6, 1, 50, 100, 60, 120],\n... }})\n>>> df\n Date Data\n0 2015-05-08 5\n1 2015-05-07 8\n2 2015-05-06 6\n3 2015-05-05 1\n4 2015-05-08 50\n5 2015-05-07 100\n6 2015-05-06 60\n7 2015-05-05 120\n>>> df.groupby('Date')['Data'].transform('sum')\n0 55\n1 108\n2 66\n3 121\n4 55\n5 108\n6 66\n7 121\nName: Data, dtype: int64\n\n>>> df = pd.DataFrame({{\n... "c": [1, 1, 1, 2, 2, 2, 2],\n... "type": ["m", "n", "o", "m", "m", "n", "n"]\n... }})\n>>> df\n c type\n0 1 m\n1 1 n\n2 1 o\n3 2 m\n4 2 m\n5 2 n\n6 2 n\n>>> df['size'] = df.groupby('c')['type'].transform(len)\n>>> df\n c type size\n0 1 m 3\n1 1 n 3\n2 1 o 3\n3 2 m 4\n4 2 m 4\n5 2 n 4\n6 2 n 4\n"""\n\n_shared_docs[\n "storage_options"\n] = """storage_options : dict, optional\n Extra options that make sense for a particular storage connection, e.g.\n host, port, username, password, etc. For HTTP(S) URLs the key-value pairs\n are forwarded to ``urllib.request.Request`` as header options. 
For other\n URLs (e.g. starting with "s3://", and "gcs://") the key-value pairs are\n forwarded to ``fsspec.open``. Please see ``fsspec`` and ``urllib`` for more\n details, and for more examples on storage options refer `here\n <https://pandas.pydata.org/docs/user_guide/io.html?\n highlight=storage_options#reading-writing-remote-files>`_."""\n\n_shared_docs[\n "compression_options"\n] = """compression : str or dict, default 'infer'\n For on-the-fly compression of the output data. If 'infer' and '%s' is\n path-like, then detect compression from the following extensions: '.gz',\n '.bz2', '.zip', '.xz', '.zst', '.tar', '.tar.gz', '.tar.xz' or '.tar.bz2'\n (otherwise no compression).\n Set to ``None`` for no compression.\n Can also be a dict with key ``'method'`` set\n to one of {``'zip'``, ``'gzip'``, ``'bz2'``, ``'zstd'``, ``'xz'``, ``'tar'``} and\n other key-value pairs are forwarded to\n ``zipfile.ZipFile``, ``gzip.GzipFile``,\n ``bz2.BZ2File``, ``zstandard.ZstdCompressor``, ``lzma.LZMAFile`` or\n ``tarfile.TarFile``, respectively.\n As an example, the following could be passed for faster compression and to create\n a reproducible gzip archive:\n ``compression={'method': 'gzip', 'compresslevel': 1, 'mtime': 1}``.\n\n .. versionadded:: 1.5.0\n Added support for `.tar` files."""\n\n_shared_docs[\n "decompression_options"\n] = """compression : str or dict, default 'infer'\n For on-the-fly decompression of on-disk data. If 'infer' and '%s' is\n path-like, then detect compression from the following extensions: '.gz',\n '.bz2', '.zip', '.xz', '.zst', '.tar', '.tar.gz', '.tar.xz' or '.tar.bz2'\n (otherwise no compression).\n If using 'zip' or 'tar', the ZIP file must contain only one data file to be read in.\n Set to ``None`` for no decompression.\n Can also be a dict with key ``'method'`` set\n to one of {``'zip'``, ``'gzip'``, ``'bz2'``, ``'zstd'``, ``'xz'``, ``'tar'``} and\n other key-value pairs are forwarded to\n ``zipfile.ZipFile``, ``gzip.GzipFile``,\n ``bz2.BZ2File``, ``zstandard.ZstdDecompressor``, ``lzma.LZMAFile`` or\n ``tarfile.TarFile``, respectively.\n As an example, the following could be passed for Zstandard decompression using a\n custom compression dictionary:\n ``compression={'method': 'zstd', 'dict_data': my_compression_dict}``.\n\n .. versionadded:: 1.5.0\n Added support for `.tar` files."""\n\n_shared_docs[\n "replace"\n] = """\n Replace values given in `to_replace` with `value`.\n\n Values of the {klass} are replaced with other values dynamically.\n This differs from updating with ``.loc`` or ``.iloc``, which require\n you to specify a location to update with some value.\n\n Parameters\n ----------\n to_replace : str, regex, list, dict, Series, int, float, or None\n How to find the values that will be replaced.\n\n * numeric, str or regex:\n\n - numeric: numeric values equal to `to_replace` will be\n replaced with `value`\n - str: string exactly matching `to_replace` will be replaced\n with `value`\n - regex: regexs matching `to_replace` will be replaced with\n `value`\n\n * list of str, regex, or numeric:\n\n - First, if `to_replace` and `value` are both lists, they\n **must** be the same length.\n - Second, if ``regex=True`` then all of the strings in **both**\n lists will be interpreted as regexs otherwise they will match\n directly. 
This doesn't matter much for `value` since there\n are only a few possible substitution regexes you can use.\n - str, regex and numeric rules apply as above.\n\n * dict:\n\n - Dicts can be used to specify different replacement values\n for different existing values. For example,\n ``{{'a': 'b', 'y': 'z'}}`` replaces the value 'a' with 'b' and\n 'y' with 'z'. To use a dict in this way, the optional `value`\n parameter should not be given.\n - For a DataFrame a dict can specify that different values\n should be replaced in different columns. For example,\n ``{{'a': 1, 'b': 'z'}}`` looks for the value 1 in column 'a'\n and the value 'z' in column 'b' and replaces these values\n with whatever is specified in `value`. The `value` parameter\n should not be ``None`` in this case. You can treat this as a\n special case of passing two lists except that you are\n specifying the column to search in.\n - For a DataFrame nested dictionaries, e.g.,\n ``{{'a': {{'b': np.nan}}}}``, are read as follows: look in column\n 'a' for the value 'b' and replace it with NaN. The optional `value`\n parameter should not be specified to use a nested dict in this\n way. You can nest regular expressions as well. Note that\n column names (the top-level dictionary keys in a nested\n dictionary) **cannot** be regular expressions.\n\n * None:\n\n - This means that the `regex` argument must be a string,\n compiled regular expression, or list, dict, ndarray or\n Series of such elements. If `value` is also ``None`` then\n this **must** be a nested dictionary or Series.\n\n See the examples section for examples of each of these.\n value : scalar, dict, list, str, regex, default None\n Value to replace any values matching `to_replace` with.\n For a DataFrame a dict of values can be used to specify which\n value to use for each column (columns not in the dict will not be\n filled). Regular expressions, strings and lists or dicts of such\n objects are also allowed.\n {inplace}\n limit : int, default None\n Maximum size gap to forward or backward fill.\n\n .. deprecated:: 2.1.0\n regex : bool or same types as `to_replace`, default False\n Whether to interpret `to_replace` and/or `value` as regular\n expressions. Alternatively, this could be a regular expression or a\n list, dict, or array of regular expressions in which case\n `to_replace` must be ``None``.\n method : {{'pad', 'ffill', 'bfill'}}\n The method to use when for replacement, when `to_replace` is a\n scalar, list or tuple and `value` is ``None``.\n\n .. 
deprecated:: 2.1.0\n\n Returns\n -------\n {klass}\n Object after replacement.\n\n Raises\n ------\n AssertionError\n * If `regex` is not a ``bool`` and `to_replace` is not\n ``None``.\n\n TypeError\n * If `to_replace` is not a scalar, array-like, ``dict``, or ``None``\n * If `to_replace` is a ``dict`` and `value` is not a ``list``,\n ``dict``, ``ndarray``, or ``Series``\n * If `to_replace` is ``None`` and `regex` is not compilable\n into a regular expression or is a list, dict, ndarray, or\n Series.\n * When replacing multiple ``bool`` or ``datetime64`` objects and\n the arguments to `to_replace` does not match the type of the\n value being replaced\n\n ValueError\n * If a ``list`` or an ``ndarray`` is passed to `to_replace` and\n `value` but they are not the same length.\n\n See Also\n --------\n Series.fillna : Fill NA values.\n DataFrame.fillna : Fill NA values.\n Series.where : Replace values based on boolean condition.\n DataFrame.where : Replace values based on boolean condition.\n DataFrame.map: Apply a function to a Dataframe elementwise.\n Series.map: Map values of Series according to an input mapping or function.\n Series.str.replace : Simple string replacement.\n\n Notes\n -----\n * Regex substitution is performed under the hood with ``re.sub``. The\n rules for substitution for ``re.sub`` are the same.\n * Regular expressions will only substitute on strings, meaning you\n cannot provide, for example, a regular expression matching floating\n point numbers and expect the columns in your frame that have a\n numeric dtype to be matched. However, if those floating point\n numbers *are* strings, then you can do this.\n * This method has *a lot* of options. You are encouraged to experiment\n and play with this method to gain intuition about how it works.\n * When dict is used as the `to_replace` value, it is like\n key(s) in the dict are the to_replace part and\n value(s) in the dict are the value parameter.\n\n Examples\n --------\n\n **Scalar `to_replace` and `value`**\n\n >>> s = pd.Series([1, 2, 3, 4, 5])\n >>> s.replace(1, 5)\n 0 5\n 1 2\n 2 3\n 3 4\n 4 5\n dtype: int64\n\n >>> df = pd.DataFrame({{'A': [0, 1, 2, 3, 4],\n ... 'B': [5, 6, 7, 8, 9],\n ... 'C': ['a', 'b', 'c', 'd', 'e']}})\n >>> df.replace(0, 5)\n A B C\n 0 5 5 a\n 1 1 6 b\n 2 2 7 c\n 3 3 8 d\n 4 4 9 e\n\n **List-like `to_replace`**\n\n >>> df.replace([0, 1, 2, 3], 4)\n A B C\n 0 4 5 a\n 1 4 6 b\n 2 4 7 c\n 3 4 8 d\n 4 4 9 e\n\n >>> df.replace([0, 1, 2, 3], [4, 3, 2, 1])\n A B C\n 0 4 5 a\n 1 3 6 b\n 2 2 7 c\n 3 1 8 d\n 4 4 9 e\n\n >>> s.replace([1, 2], method='bfill')\n 0 3\n 1 3\n 2 3\n 3 4\n 4 5\n dtype: int64\n\n **dict-like `to_replace`**\n\n >>> df.replace({{0: 10, 1: 100}})\n A B C\n 0 10 5 a\n 1 100 6 b\n 2 2 7 c\n 3 3 8 d\n 4 4 9 e\n\n >>> df.replace({{'A': 0, 'B': 5}}, 100)\n A B C\n 0 100 100 a\n 1 1 6 b\n 2 2 7 c\n 3 3 8 d\n 4 4 9 e\n\n >>> df.replace({{'A': {{0: 100, 4: 400}}}})\n A B C\n 0 100 5 a\n 1 1 6 b\n 2 2 7 c\n 3 3 8 d\n 4 400 9 e\n\n **Regular expression `to_replace`**\n\n >>> df = pd.DataFrame({{'A': ['bat', 'foo', 'bait'],\n ... 
'B': ['abc', 'bar', 'xyz']}})\n >>> df.replace(to_replace=r'^ba.$', value='new', regex=True)\n A B\n 0 new abc\n 1 foo new\n 2 bait xyz\n\n >>> df.replace({{'A': r'^ba.$'}}, {{'A': 'new'}}, regex=True)\n A B\n 0 new abc\n 1 foo bar\n 2 bait xyz\n\n >>> df.replace(regex=r'^ba.$', value='new')\n A B\n 0 new abc\n 1 foo new\n 2 bait xyz\n\n >>> df.replace(regex={{r'^ba.$': 'new', 'foo': 'xyz'}})\n A B\n 0 new abc\n 1 xyz new\n 2 bait xyz\n\n >>> df.replace(regex=[r'^ba.$', 'foo'], value='new')\n A B\n 0 new abc\n 1 new new\n 2 bait xyz\n\n Compare the behavior of ``s.replace({{'a': None}})`` and\n ``s.replace('a', None)`` to understand the peculiarities\n of the `to_replace` parameter:\n\n >>> s = pd.Series([10, 'a', 'a', 'b', 'a'])\n\n When one uses a dict as the `to_replace` value, it is like the\n value(s) in the dict are equal to the `value` parameter.\n ``s.replace({{'a': None}})`` is equivalent to\n ``s.replace(to_replace={{'a': None}}, value=None, method=None)``:\n\n >>> s.replace({{'a': None}})\n 0 10\n 1 None\n 2 None\n 3 b\n 4 None\n dtype: object\n\n When ``value`` is not explicitly passed and `to_replace` is a scalar, list\n or tuple, `replace` uses the method parameter (default 'pad') to do the\n replacement. So this is why the 'a' values are being replaced by 10\n in rows 1 and 2 and 'b' in row 4 in this case.\n\n >>> s.replace('a')\n 0 10\n 1 10\n 2 10\n 3 b\n 4 b\n dtype: object\n\n .. deprecated:: 2.1.0\n The 'method' parameter and padding behavior are deprecated.\n\n On the other hand, if ``None`` is explicitly passed for ``value``, it will\n be respected:\n\n >>> s.replace('a', None)\n 0 10\n 1 None\n 2 None\n 3 b\n 4 None\n dtype: object\n\n .. versionchanged:: 1.4.0\n Previously the explicit ``None`` was silently ignored.\n\n When ``regex=True``, ``value`` is not ``None`` and `to_replace` is a string,\n the replacement will be applied in all columns of the DataFrame.\n\n >>> df = pd.DataFrame({{'A': [0, 1, 2, 3, 4],\n ... 'B': ['a', 'b', 'c', 'd', 'e'],\n ... 'C': ['f', 'g', 'h', 'i', 'j']}})\n\n >>> df.replace(to_replace='^[a-g]', value='e', regex=True)\n A B C\n 0 0 e e\n 1 1 e e\n 2 2 e h\n 3 3 e i\n 4 4 e j\n\n If ``value`` is not ``None`` and `to_replace` is a dictionary, the dictionary\n keys will be the DataFrame columns that the replacement will be applied.\n\n >>> df.replace(to_replace={{'B': '^[a-c]', 'C': '^[h-j]'}}, value='e', regex=True)\n A B C\n 0 0 e f\n 1 1 e g\n 2 2 e e\n 3 3 d e\n 4 4 e e\n"""\n\n_shared_docs[\n "idxmin"\n] = """\n Return index of first occurrence of minimum over requested axis.\n\n NA/null values are excluded.\n\n Parameters\n ----------\n axis : {{0 or 'index', 1 or 'columns'}}, default 0\n The axis to use. 0 or 'index' for row-wise, 1 or 'columns' for column-wise.\n skipna : bool, default True\n Exclude NA/null values. If an entire row/column is NA, the result\n will be NA.\n numeric_only : bool, default {numeric_only_default}\n Include only `float`, `int` or `boolean` data.\n\n .. versionadded:: 1.5.0\n\n Returns\n -------\n Series\n Indexes of minima along the specified axis.\n\n Raises\n ------\n ValueError\n * If the row/column is empty\n\n See Also\n --------\n Series.idxmin : Return index of the minimum element.\n\n Notes\n -----\n This method is the DataFrame version of ``ndarray.argmin``.\n\n Examples\n --------\n Consider a dataset containing food consumption in Argentina.\n\n >>> df = pd.DataFrame({{'consumption': [10.51, 103.11, 55.48],\n ... 'co2_emissions': [37.2, 19.66, 1712]}},\n ... 
index=['Pork', 'Wheat Products', 'Beef'])\n\n >>> df\n consumption co2_emissions\n Pork 10.51 37.20\n Wheat Products 103.11 19.66\n Beef 55.48 1712.00\n\n By default, it returns the index for the minimum value in each column.\n\n >>> df.idxmin()\n consumption Pork\n co2_emissions Wheat Products\n dtype: object\n\n To return the index for the minimum value in each row, use ``axis="columns"``.\n\n >>> df.idxmin(axis="columns")\n Pork consumption\n Wheat Products co2_emissions\n Beef consumption\n dtype: object\n"""\n\n_shared_docs[\n "idxmax"\n] = """\n Return index of first occurrence of maximum over requested axis.\n\n NA/null values are excluded.\n\n Parameters\n ----------\n axis : {{0 or 'index', 1 or 'columns'}}, default 0\n The axis to use. 0 or 'index' for row-wise, 1 or 'columns' for column-wise.\n skipna : bool, default True\n Exclude NA/null values. If an entire row/column is NA, the result\n will be NA.\n numeric_only : bool, default {numeric_only_default}\n Include only `float`, `int` or `boolean` data.\n\n .. versionadded:: 1.5.0\n\n Returns\n -------\n Series\n Indexes of maxima along the specified axis.\n\n Raises\n ------\n ValueError\n * If the row/column is empty\n\n See Also\n --------\n Series.idxmax : Return index of the maximum element.\n\n Notes\n -----\n This method is the DataFrame version of ``ndarray.argmax``.\n\n Examples\n --------\n Consider a dataset containing food consumption in Argentina.\n\n >>> df = pd.DataFrame({{'consumption': [10.51, 103.11, 55.48],\n ... 'co2_emissions': [37.2, 19.66, 1712]}},\n ... index=['Pork', 'Wheat Products', 'Beef'])\n\n >>> df\n consumption co2_emissions\n Pork 10.51 37.20\n Wheat Products 103.11 19.66\n Beef 55.48 1712.00\n\n By default, it returns the index for the maximum value in each column.\n\n >>> df.idxmax()\n consumption Wheat Products\n co2_emissions Beef\n dtype: object\n\n To return the index for the maximum value in each row, use ``axis="columns"``.\n\n >>> df.idxmax(axis="columns")\n Pork co2_emissions\n Wheat Products consumption\n Beef co2_emissions\n dtype: object\n"""\n
.venv\Lib\site-packages\pandas\core\shared_docs.py
shared_docs.py
Python
30,103
0.95
0.07563
0.037783
awesome-app
301
2025-06-01T14:07:22.864220
MIT
false
a27131e91809b87c4cfda8d759d9d611
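shared_docs.py above is purely a dictionary of docstring templates; the placeholders ({klass}, {axis}, %(klass)s, %(caller)s, ...) are substituted where the docstrings are attached to the actual methods. A simplified sketch of that templating idea follows; the with_shared_doc helper is hypothetical and for illustration only (pandas itself routes this through its doc/Substitution decorators, not shown here):

_shared_docs = {
    "transform": "Call ``func`` on self producing a {klass} with the same axis shape as self.",
}

def with_shared_doc(name, **params):
    # Hypothetical helper: fill a shared template and attach it as __doc__.
    def decorator(func):
        func.__doc__ = _shared_docs[name].format(**params)
        return func
    return decorator

@with_shared_doc("transform", klass="Series")
def transform(self, func):
    ...

print(transform.__doc__)  # "... producing a Series with the same axis shape as self."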
""" miscellaneous sorting / groupby utilities """\nfrom __future__ import annotations\n\nfrom collections import defaultdict\nfrom typing import (\n TYPE_CHECKING,\n Callable,\n DefaultDict,\n cast,\n)\n\nimport numpy as np\n\nfrom pandas._libs import (\n algos,\n hashtable,\n lib,\n)\nfrom pandas._libs.hashtable import unique_label_indices\n\nfrom pandas.core.dtypes.common import (\n ensure_int64,\n ensure_platform_int,\n)\nfrom pandas.core.dtypes.generic import (\n ABCMultiIndex,\n ABCRangeIndex,\n)\nfrom pandas.core.dtypes.missing import isna\n\nfrom pandas.core.construction import extract_array\n\nif TYPE_CHECKING:\n from collections.abc import (\n Hashable,\n Iterable,\n Sequence,\n )\n\n from pandas._typing import (\n ArrayLike,\n AxisInt,\n IndexKeyFunc,\n Level,\n NaPosition,\n Shape,\n SortKind,\n npt,\n )\n\n from pandas import (\n MultiIndex,\n Series,\n )\n from pandas.core.arrays import ExtensionArray\n from pandas.core.indexes.base import Index\n\n\ndef get_indexer_indexer(\n target: Index,\n level: Level | list[Level] | None,\n ascending: list[bool] | bool,\n kind: SortKind,\n na_position: NaPosition,\n sort_remaining: bool,\n key: IndexKeyFunc,\n) -> npt.NDArray[np.intp] | None:\n """\n Helper method that return the indexer according to input parameters for\n the sort_index method of DataFrame and Series.\n\n Parameters\n ----------\n target : Index\n level : int or level name or list of ints or list of level names\n ascending : bool or list of bools, default True\n kind : {'quicksort', 'mergesort', 'heapsort', 'stable'}\n na_position : {'first', 'last'}\n sort_remaining : bool\n key : callable, optional\n\n Returns\n -------\n Optional[ndarray[intp]]\n The indexer for the new index.\n """\n\n # error: Incompatible types in assignment (expression has type\n # "Union[ExtensionArray, ndarray[Any, Any], Index, Series]", variable has\n # type "Index")\n target = ensure_key_mapped(target, key, levels=level) # type: ignore[assignment]\n target = target._sort_levels_monotonic()\n\n if level is not None:\n _, indexer = target.sortlevel(\n level,\n ascending=ascending,\n sort_remaining=sort_remaining,\n na_position=na_position,\n )\n elif (np.all(ascending) and target.is_monotonic_increasing) or (\n not np.any(ascending) and target.is_monotonic_decreasing\n ):\n # Check monotonic-ness before sort an index (GH 11080)\n return None\n elif isinstance(target, ABCMultiIndex):\n codes = [lev.codes for lev in target._get_codes_for_sorting()]\n indexer = lexsort_indexer(\n codes, orders=ascending, na_position=na_position, codes_given=True\n )\n else:\n # ascending can only be a Sequence for MultiIndex\n indexer = nargsort(\n target,\n kind=kind,\n ascending=cast(bool, ascending),\n na_position=na_position,\n )\n return indexer\n\n\ndef get_group_index(\n labels, shape: Shape, sort: bool, xnull: bool\n) -> npt.NDArray[np.int64]:\n """\n For the particular label_list, gets the offsets into the hypothetical list\n representing the totally ordered cartesian product of all possible label\n combinations, *as long as* this space fits within int64 bounds;\n otherwise, though group indices identify unique combinations of\n labels, they cannot be deconstructed.\n - If `sort`, rank of returned ids preserve lexical ranks of labels.\n i.e. 
returned id's can be used to do lexical sort on labels;\n - If `xnull` nulls (-1 labels) are passed through.\n\n Parameters\n ----------\n labels : sequence of arrays\n Integers identifying levels at each location\n shape : tuple[int, ...]\n Number of unique levels at each location\n sort : bool\n If the ranks of returned ids should match lexical ranks of labels\n xnull : bool\n If true nulls are excluded. i.e. -1 values in the labels are\n passed through.\n\n Returns\n -------\n An array of type int64 where two elements are equal if their corresponding\n labels are equal at all location.\n\n Notes\n -----\n The length of `labels` and `shape` must be identical.\n """\n\n def _int64_cut_off(shape) -> int:\n acc = 1\n for i, mul in enumerate(shape):\n acc *= int(mul)\n if not acc < lib.i8max:\n return i\n return len(shape)\n\n def maybe_lift(lab, size: int) -> tuple[np.ndarray, int]:\n # promote nan values (assigned -1 label in lab array)\n # so that all output values are non-negative\n return (lab + 1, size + 1) if (lab == -1).any() else (lab, size)\n\n labels = [ensure_int64(x) for x in labels]\n lshape = list(shape)\n if not xnull:\n for i, (lab, size) in enumerate(zip(labels, shape)):\n labels[i], lshape[i] = maybe_lift(lab, size)\n\n labels = list(labels)\n\n # Iteratively process all the labels in chunks sized so less\n # than lib.i8max unique int ids will be required for each chunk\n while True:\n # how many levels can be done without overflow:\n nlev = _int64_cut_off(lshape)\n\n # compute flat ids for the first `nlev` levels\n stride = np.prod(lshape[1:nlev], dtype="i8")\n out = stride * labels[0].astype("i8", subok=False, copy=False)\n\n for i in range(1, nlev):\n if lshape[i] == 0:\n stride = np.int64(0)\n else:\n stride //= lshape[i]\n out += labels[i] * stride\n\n if xnull: # exclude nulls\n mask = labels[0] == -1\n for lab in labels[1:nlev]:\n mask |= lab == -1\n out[mask] = -1\n\n if nlev == len(lshape): # all levels done!\n break\n\n # compress what has been done so far in order to avoid overflow\n # to retain lexical ranks, obs_ids should be sorted\n comp_ids, obs_ids = compress_group_index(out, sort=sort)\n\n labels = [comp_ids] + labels[nlev:]\n lshape = [len(obs_ids)] + lshape[nlev:]\n\n return out\n\n\ndef get_compressed_ids(\n labels, sizes: Shape\n) -> tuple[npt.NDArray[np.intp], npt.NDArray[np.int64]]:\n """\n Group_index is offsets into cartesian product of all possible labels. This\n space can be huge, so this function compresses it, by computing offsets\n (comp_ids) into the list of unique labels (obs_group_ids).\n\n Parameters\n ----------\n labels : list of label arrays\n sizes : tuple[int] of size of the levels\n\n Returns\n -------\n np.ndarray[np.intp]\n comp_ids\n np.ndarray[np.int64]\n obs_group_ids\n """\n ids = get_group_index(labels, sizes, sort=True, xnull=False)\n return compress_group_index(ids, sort=True)\n\n\ndef is_int64_overflow_possible(shape: Shape) -> bool:\n the_prod = 1\n for x in shape:\n the_prod *= int(x)\n\n return the_prod >= lib.i8max\n\n\ndef _decons_group_index(\n comp_labels: npt.NDArray[np.intp], shape: Shape\n) -> list[npt.NDArray[np.intp]]:\n # reconstruct labels\n if is_int64_overflow_possible(shape):\n # at some point group indices are factorized,\n # and may not be deconstructed here! 
wrong path!\n raise ValueError("cannot deconstruct factorized group indices!")\n\n label_list = []\n factor = 1\n y = np.array(0)\n x = comp_labels\n for i in reversed(range(len(shape))):\n labels = (x - y) % (factor * shape[i]) // factor\n np.putmask(labels, comp_labels < 0, -1)\n label_list.append(labels)\n y = labels * factor\n factor *= shape[i]\n return label_list[::-1]\n\n\ndef decons_obs_group_ids(\n comp_ids: npt.NDArray[np.intp],\n obs_ids: npt.NDArray[np.intp],\n shape: Shape,\n labels: Sequence[npt.NDArray[np.signedinteger]],\n xnull: bool,\n) -> list[npt.NDArray[np.intp]]:\n """\n Reconstruct labels from observed group ids.\n\n Parameters\n ----------\n comp_ids : np.ndarray[np.intp]\n obs_ids: np.ndarray[np.intp]\n shape : tuple[int]\n labels : Sequence[np.ndarray[np.signedinteger]]\n xnull : bool\n If nulls are excluded; i.e. -1 labels are passed through.\n """\n if not xnull:\n lift = np.fromiter(((a == -1).any() for a in labels), dtype=np.intp)\n arr_shape = np.asarray(shape, dtype=np.intp) + lift\n shape = tuple(arr_shape)\n\n if not is_int64_overflow_possible(shape):\n # obs ids are deconstructable! take the fast route!\n out = _decons_group_index(obs_ids, shape)\n return out if xnull or not lift.any() else [x - y for x, y in zip(out, lift)]\n\n indexer = unique_label_indices(comp_ids)\n return [lab[indexer].astype(np.intp, subok=False, copy=True) for lab in labels]\n\n\ndef lexsort_indexer(\n keys: Sequence[ArrayLike | Index | Series],\n orders=None,\n na_position: str = "last",\n key: Callable | None = None,\n codes_given: bool = False,\n) -> npt.NDArray[np.intp]:\n """\n Performs lexical sorting on a set of keys\n\n Parameters\n ----------\n keys : Sequence[ArrayLike | Index | Series]\n Sequence of arrays to be sorted by the indexer\n Sequence[Series] is only if key is not None.\n orders : bool or list of booleans, optional\n Determines the sorting order for each element in keys. If a list,\n it must be the same length as keys. This determines whether the\n corresponding element in keys should be sorted in ascending\n (True) or descending (False) order. if bool, applied to all\n elements as above. 
if None, defaults to True.\n na_position : {'first', 'last'}, default 'last'\n Determines placement of NA elements in the sorted list ("last" or "first")\n key : Callable, optional\n Callable key function applied to every element in keys before sorting\n codes_given: bool, False\n Avoid categorical materialization if codes are already provided.\n\n Returns\n -------\n np.ndarray[np.intp]\n """\n from pandas.core.arrays import Categorical\n\n if na_position not in ["last", "first"]:\n raise ValueError(f"invalid na_position: {na_position}")\n\n if isinstance(orders, bool):\n orders = [orders] * len(keys)\n elif orders is None:\n orders = [True] * len(keys)\n\n labels = []\n\n for k, order in zip(keys, orders):\n k = ensure_key_mapped(k, key)\n if codes_given:\n codes = cast(np.ndarray, k)\n n = codes.max() + 1 if len(codes) else 0\n else:\n cat = Categorical(k, ordered=True)\n codes = cat.codes\n n = len(cat.categories)\n\n mask = codes == -1\n\n if na_position == "last" and mask.any():\n codes = np.where(mask, n, codes)\n\n # not order means descending\n if not order:\n codes = np.where(mask, codes, n - codes - 1)\n\n labels.append(codes)\n\n return np.lexsort(labels[::-1])\n\n\ndef nargsort(\n items: ArrayLike | Index | Series,\n kind: SortKind = "quicksort",\n ascending: bool = True,\n na_position: str = "last",\n key: Callable | None = None,\n mask: npt.NDArray[np.bool_] | None = None,\n) -> npt.NDArray[np.intp]:\n """\n Intended to be a drop-in replacement for np.argsort which handles NaNs.\n\n Adds ascending, na_position, and key parameters.\n\n (GH #6399, #5231, #27237)\n\n Parameters\n ----------\n items : np.ndarray, ExtensionArray, Index, or Series\n kind : {'quicksort', 'mergesort', 'heapsort', 'stable'}, default 'quicksort'\n ascending : bool, default True\n na_position : {'first', 'last'}, default 'last'\n key : Optional[Callable], default None\n mask : Optional[np.ndarray[bool]], default None\n Passed when called by ExtensionArray.argsort.\n\n Returns\n -------\n np.ndarray[np.intp]\n """\n\n if key is not None:\n # see TestDataFrameSortKey, TestRangeIndex::test_sort_values_key\n items = ensure_key_mapped(items, key)\n return nargsort(\n items,\n kind=kind,\n ascending=ascending,\n na_position=na_position,\n key=None,\n mask=mask,\n )\n\n if isinstance(items, ABCRangeIndex):\n return items.argsort(ascending=ascending)\n elif not isinstance(items, ABCMultiIndex):\n items = extract_array(items)\n else:\n raise TypeError(\n "nargsort does not support MultiIndex. Use index.sort_values instead."\n )\n\n if mask is None:\n mask = np.asarray(isna(items))\n\n if not isinstance(items, np.ndarray):\n # i.e. 
ExtensionArray\n return items.argsort(\n ascending=ascending,\n kind=kind,\n na_position=na_position,\n )\n\n idx = np.arange(len(items))\n non_nans = items[~mask]\n non_nan_idx = idx[~mask]\n\n nan_idx = np.nonzero(mask)[0]\n if not ascending:\n non_nans = non_nans[::-1]\n non_nan_idx = non_nan_idx[::-1]\n indexer = non_nan_idx[non_nans.argsort(kind=kind)]\n if not ascending:\n indexer = indexer[::-1]\n # Finally, place the NaNs at the end or the beginning according to\n # na_position\n if na_position == "last":\n indexer = np.concatenate([indexer, nan_idx])\n elif na_position == "first":\n indexer = np.concatenate([nan_idx, indexer])\n else:\n raise ValueError(f"invalid na_position: {na_position}")\n return ensure_platform_int(indexer)\n\n\ndef nargminmax(values: ExtensionArray, method: str, axis: AxisInt = 0):\n """\n Implementation of np.argmin/argmax but for ExtensionArray and which\n handles missing values.\n\n Parameters\n ----------\n values : ExtensionArray\n method : {"argmax", "argmin"}\n axis : int, default 0\n\n Returns\n -------\n int\n """\n assert method in {"argmax", "argmin"}\n func = np.argmax if method == "argmax" else np.argmin\n\n mask = np.asarray(isna(values))\n arr_values = values._values_for_argsort()\n\n if arr_values.ndim > 1:\n if mask.any():\n if axis == 1:\n zipped = zip(arr_values, mask)\n else:\n zipped = zip(arr_values.T, mask.T)\n return np.array([_nanargminmax(v, m, func) for v, m in zipped])\n return func(arr_values, axis=axis)\n\n return _nanargminmax(arr_values, mask, func)\n\n\ndef _nanargminmax(values: np.ndarray, mask: npt.NDArray[np.bool_], func) -> int:\n """\n See nanargminmax.__doc__.\n """\n idx = np.arange(values.shape[0])\n non_nans = values[~mask]\n non_nan_idx = idx[~mask]\n\n return non_nan_idx[func(non_nans)]\n\n\ndef _ensure_key_mapped_multiindex(\n index: MultiIndex, key: Callable, level=None\n) -> MultiIndex:\n """\n Returns a new MultiIndex in which key has been applied\n to all levels specified in level (or all levels if level\n is None). Used for key sorting for MultiIndex.\n\n Parameters\n ----------\n index : MultiIndex\n Index to which to apply the key function on the\n specified levels.\n key : Callable\n Function that takes an Index and returns an Index of\n the same shape. This key is applied to each level\n separately. The name of the level can be used to\n distinguish different levels for application.\n level : list-like, int or str, default None\n Level or list of levels to apply the key function to.\n If None, key function is applied to all levels. Other\n levels are left unchanged.\n\n Returns\n -------\n labels : MultiIndex\n Resulting MultiIndex with modified levels.\n """\n\n if level is not None:\n if isinstance(level, (str, int)):\n sort_levels = [level]\n else:\n sort_levels = level\n\n sort_levels = [index._get_level_number(lev) for lev in sort_levels]\n else:\n sort_levels = list(range(index.nlevels)) # satisfies mypy\n\n mapped = [\n ensure_key_mapped(index._get_level_values(level), key)\n if level in sort_levels\n else index._get_level_values(level)\n for level in range(index.nlevels)\n ]\n\n return type(index).from_arrays(mapped)\n\n\ndef ensure_key_mapped(\n values: ArrayLike | Index | Series, key: Callable | None, levels=None\n) -> ArrayLike | Index | Series:\n """\n Applies a callable key function to the values function and checks\n that the resulting value has the same shape. 
Can be called on Index\n subclasses, Series, DataFrames, or ndarrays.\n\n Parameters\n ----------\n values : Series, DataFrame, Index subclass, or ndarray\n key : Optional[Callable], key to be called on the values array\n levels : Optional[List], if values is a MultiIndex, list of levels to\n apply the key to.\n """\n from pandas.core.indexes.api import Index\n\n if not key:\n return values\n\n if isinstance(values, ABCMultiIndex):\n return _ensure_key_mapped_multiindex(values, key, level=levels)\n\n result = key(values.copy())\n if len(result) != len(values):\n raise ValueError(\n "User-provided `key` function must not change the shape of the array."\n )\n\n try:\n if isinstance(\n values, Index\n ): # convert to a new Index subclass, not necessarily the same\n result = Index(result)\n else:\n # try to revert to original type otherwise\n type_of_values = type(values)\n # error: Too many arguments for "ExtensionArray"\n result = type_of_values(result) # type: ignore[call-arg]\n except TypeError:\n raise TypeError(\n f"User-provided `key` function returned an invalid type {type(result)} \\n which could not be converted to {type(values)}."\n )\n\n return result\n\n\ndef get_flattened_list(\n comp_ids: npt.NDArray[np.intp],\n ngroups: int,\n levels: Iterable[Index],\n labels: Iterable[np.ndarray],\n) -> list[tuple]:\n """Map compressed group id -> key tuple."""\n comp_ids = comp_ids.astype(np.int64, copy=False)\n arrays: DefaultDict[int, list[int]] = defaultdict(list)\n for labs, level in zip(labels, levels):\n table = hashtable.Int64HashTable(ngroups)\n table.map_keys_to_values(comp_ids, labs.astype(np.int64, copy=False))\n for i in range(ngroups):\n arrays[i].append(level[table.get_item(i)])\n return [tuple(array) for array in arrays.values()]\n\n\ndef get_indexer_dict(\n label_list: list[np.ndarray], keys: list[Index]\n) -> dict[Hashable, npt.NDArray[np.intp]]:\n """\n Returns\n -------\n dict:\n Labels mapped to indexers.\n """\n shape = tuple(len(x) for x in keys)\n\n group_index = get_group_index(label_list, shape, sort=True, xnull=True)\n if np.all(group_index == -1):\n # Short-circuit, lib.indices_fast will return the same\n return {}\n ngroups = (\n ((group_index.size and group_index.max()) + 1)\n if is_int64_overflow_possible(shape)\n else np.prod(shape, dtype="i8")\n )\n\n sorter = get_group_index_sorter(group_index, ngroups)\n\n sorted_labels = [lab.take(sorter) for lab in label_list]\n group_index = group_index.take(sorter)\n\n return lib.indices_fast(sorter, group_index, keys, sorted_labels)\n\n\n# ----------------------------------------------------------------------\n# sorting levels...cleverly?\n\n\ndef get_group_index_sorter(\n group_index: npt.NDArray[np.intp], ngroups: int | None = None\n) -> npt.NDArray[np.intp]:\n """\n algos.groupsort_indexer implements `counting sort` and it is at least\n O(ngroups), where\n ngroups = prod(shape)\n shape = map(len, keys)\n that is, linear in the number of combinations (cartesian product) of unique\n values of groupby keys. This can be huge when doing multi-key groupby.\n np.argsort(kind='mergesort') is O(count x log(count)) where count is the\n length of the data-frame;\n Both algorithms are `stable` sort and that is necessary for correctness of\n groupby operations. e.g. 
consider:\n df.groupby(key)[col].transform('first')\n\n Parameters\n ----------\n group_index : np.ndarray[np.intp]\n signed integer dtype\n ngroups : int or None, default None\n\n Returns\n -------\n np.ndarray[np.intp]\n """\n if ngroups is None:\n ngroups = 1 + group_index.max()\n count = len(group_index)\n alpha = 0.0 # taking complexities literally; there may be\n beta = 1.0 # some room for fine-tuning these parameters\n do_groupsort = count > 0 and ((alpha + beta * ngroups) < (count * np.log(count)))\n if do_groupsort:\n sorter, _ = algos.groupsort_indexer(\n ensure_platform_int(group_index),\n ngroups,\n )\n # sorter _should_ already be intp, but mypy is not yet able to verify\n else:\n sorter = group_index.argsort(kind="mergesort")\n return ensure_platform_int(sorter)\n\n\ndef compress_group_index(\n group_index: npt.NDArray[np.int64], sort: bool = True\n) -> tuple[npt.NDArray[np.int64], npt.NDArray[np.int64]]:\n """\n Group_index is offsets into cartesian product of all possible labels. This\n space can be huge, so this function compresses it, by computing offsets\n (comp_ids) into the list of unique labels (obs_group_ids).\n """\n if len(group_index) and np.all(group_index[1:] >= group_index[:-1]):\n # GH 53806: fast path for sorted group_index\n unique_mask = np.concatenate(\n [group_index[:1] > -1, group_index[1:] != group_index[:-1]]\n )\n comp_ids = unique_mask.cumsum()\n comp_ids -= 1\n obs_group_ids = group_index[unique_mask]\n else:\n size_hint = len(group_index)\n table = hashtable.Int64HashTable(size_hint)\n\n group_index = ensure_int64(group_index)\n\n # note, group labels come out ascending (ie, 1,2,3 etc)\n comp_ids, obs_group_ids = table.get_labels_groupby(group_index)\n\n if sort and len(obs_group_ids) > 0:\n obs_group_ids, comp_ids = _reorder_by_uniques(obs_group_ids, comp_ids)\n\n return ensure_int64(comp_ids), ensure_int64(obs_group_ids)\n\n\ndef _reorder_by_uniques(\n uniques: npt.NDArray[np.int64], labels: npt.NDArray[np.intp]\n) -> tuple[npt.NDArray[np.int64], npt.NDArray[np.intp]]:\n """\n Parameters\n ----------\n uniques : np.ndarray[np.int64]\n labels : np.ndarray[np.intp]\n\n Returns\n -------\n np.ndarray[np.int64]\n np.ndarray[np.intp]\n """\n # sorter is index where elements ought to go\n sorter = uniques.argsort()\n\n # reverse_indexer is where elements came from\n reverse_indexer = np.empty(len(sorter), dtype=np.intp)\n reverse_indexer.put(sorter, np.arange(len(sorter)))\n\n mask = labels < 0\n\n # move labels to right locations (ie, unsort ascending labels)\n labels = reverse_indexer.take(labels)\n np.putmask(labels, mask, -1)\n\n # sort observed ids\n uniques = uniques.take(sorter)\n\n return uniques, labels\n
.venv\Lib\site-packages\pandas\core\sorting.py
sorting.py
Python
22,976
0.95
0.15508
0.054575
react-lib
35
2023-12-01T23:28:51.237392
BSD-3-Clause
false
02ca7f26d33131a061b11d4f8ca17ecd
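The sketch below is illustrative only; it is not part of the dataset row above and not pandas code. It reproduces, with plain NumPy, the NaN-placement idea visible in the `nargsort` logic inside the `sorting.py` content: non-missing values are argsorted, then the positions of missing values are appended ("last") or prepended ("first") according to `na_position`. The helper name `toy_nargsort` is hypothetical, and the sketch assumes a 1-D float ndarray in which `np.nan` is the only missing-value marker.

import numpy as np

def toy_nargsort(values, ascending=True, na_position="last"):
    # Split positions into non-NaN and NaN, mirroring the mask handling above.
    mask = np.isnan(values)
    idx = np.arange(len(values))
    non_nans = values[~mask]
    non_nan_idx = idx[~mask]
    nan_idx = np.nonzero(mask)[0]
    if not ascending:
        non_nans = non_nans[::-1]
        non_nan_idx = non_nan_idx[::-1]
    # Stable argsort of the non-NaN values, mapped back to original positions.
    indexer = non_nan_idx[non_nans.argsort(kind="stable")]
    if not ascending:
        indexer = indexer[::-1]
    # Finally place the NaN positions according to na_position.
    if na_position == "last":
        return np.concatenate([indexer, nan_idx])
    elif na_position == "first":
        return np.concatenate([nan_idx, indexer])
    raise ValueError(f"invalid na_position: {na_position}")

arr = np.array([3.0, np.nan, 1.0, 2.0])
print(toy_nargsort(arr))                       # -> [2 3 0 1]
print(toy_nargsort(arr, na_position="first"))  # -> [1 2 3 0]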
"""\nAn interface for extending pandas with custom arrays.\n\n.. warning::\n\n This is an experimental API and subject to breaking changes\n without warning.\n"""\nfrom __future__ import annotations\n\nimport operator\nfrom typing import (\n TYPE_CHECKING,\n Any,\n Callable,\n ClassVar,\n Literal,\n cast,\n overload,\n)\nimport warnings\n\nimport numpy as np\n\nfrom pandas._libs import (\n algos as libalgos,\n lib,\n)\nfrom pandas.compat import set_function_name\nfrom pandas.compat.numpy import function as nv\nfrom pandas.errors import AbstractMethodError\nfrom pandas.util._decorators import (\n Appender,\n Substitution,\n cache_readonly,\n)\nfrom pandas.util._exceptions import find_stack_level\nfrom pandas.util._validators import (\n validate_bool_kwarg,\n validate_fillna_kwargs,\n validate_insert_loc,\n)\n\nfrom pandas.core.dtypes.cast import maybe_cast_pointwise_result\nfrom pandas.core.dtypes.common import (\n is_list_like,\n is_scalar,\n pandas_dtype,\n)\nfrom pandas.core.dtypes.dtypes import ExtensionDtype\nfrom pandas.core.dtypes.generic import (\n ABCDataFrame,\n ABCIndex,\n ABCSeries,\n)\nfrom pandas.core.dtypes.missing import isna\n\nfrom pandas.core import (\n arraylike,\n missing,\n roperator,\n)\nfrom pandas.core.algorithms import (\n duplicated,\n factorize_array,\n isin,\n map_array,\n mode,\n rank,\n unique,\n)\nfrom pandas.core.array_algos.quantile import quantile_with_mask\nfrom pandas.core.missing import _fill_limit_area_1d\nfrom pandas.core.sorting import (\n nargminmax,\n nargsort,\n)\n\nif TYPE_CHECKING:\n from collections.abc import (\n Iterator,\n Sequence,\n )\n\n from pandas._typing import (\n ArrayLike,\n AstypeArg,\n AxisInt,\n Dtype,\n DtypeObj,\n FillnaOptions,\n InterpolateOptions,\n NumpySorter,\n NumpyValueArrayLike,\n PositionalIndexer,\n ScalarIndexer,\n Self,\n SequenceIndexer,\n Shape,\n SortKind,\n TakeIndexer,\n npt,\n )\n\n from pandas import Index\n\n_extension_array_shared_docs: dict[str, str] = {}\n\n\nclass ExtensionArray:\n """\n Abstract base class for custom 1-D array types.\n\n pandas will recognize instances of this class as proper arrays\n with a custom type and will not attempt to coerce them to objects. They\n may be stored directly inside a :class:`DataFrame` or :class:`Series`.\n\n Attributes\n ----------\n dtype\n nbytes\n ndim\n shape\n\n Methods\n -------\n argsort\n astype\n copy\n dropna\n duplicated\n factorize\n fillna\n equals\n insert\n interpolate\n isin\n isna\n ravel\n repeat\n searchsorted\n shift\n take\n tolist\n unique\n view\n _accumulate\n _concat_same_type\n _explode\n _formatter\n _from_factorized\n _from_sequence\n _from_sequence_of_strings\n _hash_pandas_object\n _pad_or_backfill\n _reduce\n _values_for_argsort\n _values_for_factorize\n\n Notes\n -----\n The interface includes the following abstract methods that must be\n implemented by subclasses:\n\n * _from_sequence\n * _from_factorized\n * __getitem__\n * __len__\n * __eq__\n * dtype\n * nbytes\n * isna\n * take\n * copy\n * _concat_same_type\n * interpolate\n\n A default repr displaying the type, (truncated) data, length,\n and dtype is provided. It can be customized or replaced by\n by overriding:\n\n * __repr__ : A default repr for the ExtensionArray.\n * _formatter : Print scalars inside a Series or DataFrame.\n\n Some methods require casting the ExtensionArray to an ndarray of Python\n objects with ``self.astype(object)``, which may be expensive. 
When\n performance is a concern, we highly recommend overriding the following\n methods:\n\n * fillna\n * _pad_or_backfill\n * dropna\n * unique\n * factorize / _values_for_factorize\n * argsort, argmax, argmin / _values_for_argsort\n * searchsorted\n * map\n\n The remaining methods implemented on this class should be performant,\n as they only compose abstract methods. Still, a more efficient\n implementation may be available, and these methods can be overridden.\n\n One can implement methods to handle array accumulations or reductions.\n\n * _accumulate\n * _reduce\n\n One can implement methods to handle parsing from strings that will be used\n in methods such as ``pandas.io.parsers.read_csv``.\n\n * _from_sequence_of_strings\n\n This class does not inherit from 'abc.ABCMeta' for performance reasons.\n Methods and properties required by the interface raise\n ``pandas.errors.AbstractMethodError`` and no ``register`` method is\n provided for registering virtual subclasses.\n\n ExtensionArrays are limited to 1 dimension.\n\n They may be backed by none, one, or many NumPy arrays. For example,\n ``pandas.Categorical`` is an extension array backed by two arrays,\n one for codes and one for categories. An array of IPv6 address may\n be backed by a NumPy structured array with two fields, one for the\n lower 64 bits and one for the upper 64 bits. Or they may be backed\n by some other storage type, like Python lists. Pandas makes no\n assumptions on how the data are stored, just that it can be converted\n to a NumPy array.\n The ExtensionArray interface does not impose any rules on how this data\n is stored. However, currently, the backing data cannot be stored in\n attributes called ``.values`` or ``._values`` to ensure full compatibility\n with pandas internals. But other names as ``.data``, ``._data``,\n ``._items``, ... can be freely used.\n\n If implementing NumPy's ``__array_ufunc__`` interface, pandas expects\n that\n\n 1. You defer by returning ``NotImplemented`` when any Series are present\n in `inputs`. Pandas will extract the arrays and call the ufunc again.\n 2. You define a ``_HANDLED_TYPES`` tuple as an attribute on the class.\n Pandas inspect this to determine whether the ufunc is valid for the\n types present.\n\n See :ref:`extending.extension.ufunc` for more.\n\n By default, ExtensionArrays are not hashable. Immutable subclasses may\n override this behavior.\n\n Examples\n --------\n Please see the following:\n\n https://github.com/pandas-dev/pandas/blob/main/pandas/tests/extension/list/array.py\n """\n\n # '_typ' is for pandas.core.dtypes.generic.ABCExtensionArray.\n # Don't override this.\n _typ = "extension"\n\n # similar to __array_priority__, positions ExtensionArray after Index,\n # Series, and DataFrame. EA subclasses may override to choose which EA\n # subclass takes priority. 
If overriding, the value should always be\n # strictly less than 2000 to be below Index.__pandas_priority__.\n __pandas_priority__ = 1000\n\n # ------------------------------------------------------------------------\n # Constructors\n # ------------------------------------------------------------------------\n\n @classmethod\n def _from_sequence(cls, scalars, *, dtype: Dtype | None = None, copy: bool = False):\n """\n Construct a new ExtensionArray from a sequence of scalars.\n\n Parameters\n ----------\n scalars : Sequence\n Each element will be an instance of the scalar type for this\n array, ``cls.dtype.type`` or be converted into this type in this method.\n dtype : dtype, optional\n Construct for this particular dtype. This should be a Dtype\n compatible with the ExtensionArray.\n copy : bool, default False\n If True, copy the underlying data.\n\n Returns\n -------\n ExtensionArray\n\n Examples\n --------\n >>> pd.arrays.IntegerArray._from_sequence([4, 5])\n <IntegerArray>\n [4, 5]\n Length: 2, dtype: Int64\n """\n raise AbstractMethodError(cls)\n\n @classmethod\n def _from_scalars(cls, scalars, *, dtype: DtypeObj) -> Self:\n """\n Strict analogue to _from_sequence, allowing only sequences of scalars\n that should be specifically inferred to the given dtype.\n\n Parameters\n ----------\n scalars : sequence\n dtype : ExtensionDtype\n\n Raises\n ------\n TypeError or ValueError\n\n Notes\n -----\n This is called in a try/except block when casting the result of a\n pointwise operation.\n """\n try:\n return cls._from_sequence(scalars, dtype=dtype, copy=False)\n except (ValueError, TypeError):\n raise\n except Exception:\n warnings.warn(\n "_from_scalars should only raise ValueError or TypeError. "\n "Consider overriding _from_scalars where appropriate.",\n stacklevel=find_stack_level(),\n )\n raise\n\n @classmethod\n def _from_sequence_of_strings(\n cls, strings, *, dtype: Dtype | None = None, copy: bool = False\n ):\n """\n Construct a new ExtensionArray from a sequence of strings.\n\n Parameters\n ----------\n strings : Sequence\n Each element will be an instance of the scalar type for this\n array, ``cls.dtype.type``.\n dtype : dtype, optional\n Construct for this particular dtype. This should be a Dtype\n compatible with the ExtensionArray.\n copy : bool, default False\n If True, copy the underlying data.\n\n Returns\n -------\n ExtensionArray\n\n Examples\n --------\n >>> pd.arrays.IntegerArray._from_sequence_of_strings(["1", "2", "3"])\n <IntegerArray>\n [1, 2, 3]\n Length: 3, dtype: Int64\n """\n raise AbstractMethodError(cls)\n\n @classmethod\n def _from_factorized(cls, values, original):\n """\n Reconstruct an ExtensionArray after factorization.\n\n Parameters\n ----------\n values : ndarray\n An integer ndarray with the factorized values.\n original : ExtensionArray\n The original ExtensionArray that factorize was called on.\n\n See Also\n --------\n factorize : Top-level factorize method that dispatches here.\n ExtensionArray.factorize : Encode the extension array as an enumerated type.\n\n Examples\n --------\n >>> interv_arr = pd.arrays.IntervalArray([pd.Interval(0, 1),\n ... 
pd.Interval(1, 5), pd.Interval(1, 5)])\n >>> codes, uniques = pd.factorize(interv_arr)\n >>> pd.arrays.IntervalArray._from_factorized(uniques, interv_arr)\n <IntervalArray>\n [(0, 1], (1, 5]]\n Length: 2, dtype: interval[int64, right]\n """\n raise AbstractMethodError(cls)\n\n # ------------------------------------------------------------------------\n # Must be a Sequence\n # ------------------------------------------------------------------------\n @overload\n def __getitem__(self, item: ScalarIndexer) -> Any:\n ...\n\n @overload\n def __getitem__(self, item: SequenceIndexer) -> Self:\n ...\n\n def __getitem__(self, item: PositionalIndexer) -> Self | Any:\n """\n Select a subset of self.\n\n Parameters\n ----------\n item : int, slice, or ndarray\n * int: The position in 'self' to get.\n\n * slice: A slice object, where 'start', 'stop', and 'step' are\n integers or None\n\n * ndarray: A 1-d boolean NumPy ndarray the same length as 'self'\n\n * list[int]: A list of int\n\n Returns\n -------\n item : scalar or ExtensionArray\n\n Notes\n -----\n For scalar ``item``, return a scalar value suitable for the array's\n type. This should be an instance of ``self.dtype.type``.\n\n For slice ``key``, return an instance of ``ExtensionArray``, even\n if the slice is length 0 or 1.\n\n For a boolean mask, return an instance of ``ExtensionArray``, filtered\n to the values where ``item`` is True.\n """\n raise AbstractMethodError(self)\n\n def __setitem__(self, key, value) -> None:\n """\n Set one or more values inplace.\n\n This method is not required to satisfy the pandas extension array\n interface.\n\n Parameters\n ----------\n key : int, ndarray, or slice\n When called from, e.g. ``Series.__setitem__``, ``key`` will be\n one of\n\n * scalar int\n * ndarray of integers.\n * boolean ndarray\n * slice object\n\n value : ExtensionDtype.type, Sequence[ExtensionDtype.type], or object\n value or values to be set of ``key``.\n\n Returns\n -------\n None\n """\n # Some notes to the ExtensionArray implementer who may have ended up\n # here. While this method is not required for the interface, if you\n # *do* choose to implement __setitem__, then some semantics should be\n # observed:\n #\n # * Setting multiple values : ExtensionArrays should support setting\n # multiple values at once, 'key' will be a sequence of integers and\n # 'value' will be a same-length sequence.\n #\n # * Broadcasting : For a sequence 'key' and a scalar 'value',\n # each position in 'key' should be set to 'value'.\n #\n # * Coercion : Most users will expect basic coercion to work. For\n # example, a string like '2018-01-01' is coerced to a datetime\n # when setting on a datetime64ns array. In general, if the\n # __init__ method coerces that value, then so should __setitem__\n # Note, also, that Series/DataFrame.where internally use __setitem__\n # on a copy of the data.\n raise NotImplementedError(f"{type(self)} does not implement __setitem__.")\n\n def __len__(self) -> int:\n """\n Length of this array\n\n Returns\n -------\n length : int\n """\n raise AbstractMethodError(self)\n\n def __iter__(self) -> Iterator[Any]:\n """\n Iterate over elements of the array.\n """\n # This needs to be implemented so that pandas recognizes extension\n # arrays as list-like. 
The default implementation makes successive\n # calls to ``__getitem__``, which may be slower than necessary.\n for i in range(len(self)):\n yield self[i]\n\n def __contains__(self, item: object) -> bool | np.bool_:\n """\n Return for `item in self`.\n """\n # GH37867\n # comparisons of any item to pd.NA always return pd.NA, so e.g. "a" in [pd.NA]\n # would raise a TypeError. The implementation below works around that.\n if is_scalar(item) and isna(item):\n if not self._can_hold_na:\n return False\n elif item is self.dtype.na_value or isinstance(item, self.dtype.type):\n return self._hasna\n else:\n return False\n else:\n # error: Item "ExtensionArray" of "Union[ExtensionArray, ndarray]" has no\n # attribute "any"\n return (item == self).any() # type: ignore[union-attr]\n\n # error: Signature of "__eq__" incompatible with supertype "object"\n def __eq__(self, other: object) -> ArrayLike: # type: ignore[override]\n """\n Return for `self == other` (element-wise equality).\n """\n # Implementer note: this should return a boolean numpy ndarray or\n # a boolean ExtensionArray.\n # When `other` is one of Series, Index, or DataFrame, this method should\n # return NotImplemented (to ensure that those objects are responsible for\n # first unpacking the arrays, and then dispatch the operation to the\n # underlying arrays)\n raise AbstractMethodError(self)\n\n # error: Signature of "__ne__" incompatible with supertype "object"\n def __ne__(self, other: object) -> ArrayLike: # type: ignore[override]\n """\n Return for `self != other` (element-wise in-equality).\n """\n # error: Unsupported operand type for ~ ("ExtensionArray")\n return ~(self == other) # type: ignore[operator]\n\n def to_numpy(\n self,\n dtype: npt.DTypeLike | None = None,\n copy: bool = False,\n na_value: object = lib.no_default,\n ) -> np.ndarray:\n """\n Convert to a NumPy ndarray.\n\n This is similar to :meth:`numpy.asarray`, but may provide additional control\n over how the conversion is done.\n\n Parameters\n ----------\n dtype : str or numpy.dtype, optional\n The dtype to pass to :meth:`numpy.asarray`.\n copy : bool, default False\n Whether to ensure that the returned value is a not a view on\n another array. Note that ``copy=False`` does not *ensure* that\n ``to_numpy()`` is no-copy. Rather, ``copy=True`` ensure that\n a copy is made, even if not strictly necessary.\n na_value : Any, optional\n The value to use for missing values. 
The default value depends\n on `dtype` and the type of the array.\n\n Returns\n -------\n numpy.ndarray\n """\n result = np.asarray(self, dtype=dtype)\n if copy or na_value is not lib.no_default:\n result = result.copy()\n if na_value is not lib.no_default:\n result[self.isna()] = na_value\n return result\n\n # ------------------------------------------------------------------------\n # Required attributes\n # ------------------------------------------------------------------------\n\n @property\n def dtype(self) -> ExtensionDtype:\n """\n An instance of ExtensionDtype.\n\n Examples\n --------\n >>> pd.array([1, 2, 3]).dtype\n Int64Dtype()\n """\n raise AbstractMethodError(self)\n\n @property\n def shape(self) -> Shape:\n """\n Return a tuple of the array dimensions.\n\n Examples\n --------\n >>> arr = pd.array([1, 2, 3])\n >>> arr.shape\n (3,)\n """\n return (len(self),)\n\n @property\n def size(self) -> int:\n """\n The number of elements in the array.\n """\n # error: Incompatible return value type (got "signedinteger[_64Bit]",\n # expected "int") [return-value]\n return np.prod(self.shape) # type: ignore[return-value]\n\n @property\n def ndim(self) -> int:\n """\n Extension Arrays are only allowed to be 1-dimensional.\n\n Examples\n --------\n >>> arr = pd.array([1, 2, 3])\n >>> arr.ndim\n 1\n """\n return 1\n\n @property\n def nbytes(self) -> int:\n """\n The number of bytes needed to store this object in memory.\n\n Examples\n --------\n >>> pd.array([1, 2, 3]).nbytes\n 27\n """\n # If this is expensive to compute, return an approximate lower bound\n # on the number of bytes needed.\n raise AbstractMethodError(self)\n\n # ------------------------------------------------------------------------\n # Additional Methods\n # ------------------------------------------------------------------------\n\n @overload\n def astype(self, dtype: npt.DTypeLike, copy: bool = ...) -> np.ndarray:\n ...\n\n @overload\n def astype(self, dtype: ExtensionDtype, copy: bool = ...) -> ExtensionArray:\n ...\n\n @overload\n def astype(self, dtype: AstypeArg, copy: bool = ...) -> ArrayLike:\n ...\n\n def astype(self, dtype: AstypeArg, copy: bool = True) -> ArrayLike:\n """\n Cast to a NumPy array or ExtensionArray with 'dtype'.\n\n Parameters\n ----------\n dtype : str or dtype\n Typecode or data-type to which the array is cast.\n copy : bool, default True\n Whether to copy the data, even if not necessary. 
If False,\n a copy is made only if the old dtype does not match the\n new dtype.\n\n Returns\n -------\n np.ndarray or pandas.api.extensions.ExtensionArray\n An ``ExtensionArray`` if ``dtype`` is ``ExtensionDtype``,\n otherwise a Numpy ndarray with ``dtype`` for its dtype.\n\n Examples\n --------\n >>> arr = pd.array([1, 2, 3])\n >>> arr\n <IntegerArray>\n [1, 2, 3]\n Length: 3, dtype: Int64\n\n Casting to another ``ExtensionDtype`` returns an ``ExtensionArray``:\n\n >>> arr1 = arr.astype('Float64')\n >>> arr1\n <FloatingArray>\n [1.0, 2.0, 3.0]\n Length: 3, dtype: Float64\n >>> arr1.dtype\n Float64Dtype()\n\n Otherwise, we will get a Numpy ndarray:\n\n >>> arr2 = arr.astype('float64')\n >>> arr2\n array([1., 2., 3.])\n >>> arr2.dtype\n dtype('float64')\n """\n dtype = pandas_dtype(dtype)\n if dtype == self.dtype:\n if not copy:\n return self\n else:\n return self.copy()\n\n if isinstance(dtype, ExtensionDtype):\n cls = dtype.construct_array_type()\n return cls._from_sequence(self, dtype=dtype, copy=copy)\n\n elif lib.is_np_dtype(dtype, "M"):\n from pandas.core.arrays import DatetimeArray\n\n return DatetimeArray._from_sequence(self, dtype=dtype, copy=copy)\n\n elif lib.is_np_dtype(dtype, "m"):\n from pandas.core.arrays import TimedeltaArray\n\n return TimedeltaArray._from_sequence(self, dtype=dtype, copy=copy)\n\n if not copy:\n return np.asarray(self, dtype=dtype)\n else:\n return np.array(self, dtype=dtype, copy=copy)\n\n def isna(self) -> np.ndarray | ExtensionArraySupportsAnyAll:\n """\n A 1-D array indicating if each value is missing.\n\n Returns\n -------\n numpy.ndarray or pandas.api.extensions.ExtensionArray\n In most cases, this should return a NumPy ndarray. For\n exceptional cases like ``SparseArray``, where returning\n an ndarray would be expensive, an ExtensionArray may be\n returned.\n\n Notes\n -----\n If returning an ExtensionArray, then\n\n * ``na_values._is_boolean`` should be True\n * `na_values` should implement :func:`ExtensionArray._reduce`\n * ``na_values.any`` and ``na_values.all`` should be implemented\n\n Examples\n --------\n >>> arr = pd.array([1, 2, np.nan, np.nan])\n >>> arr.isna()\n array([False, False, True, True])\n """\n raise AbstractMethodError(self)\n\n @property\n def _hasna(self) -> bool:\n # GH#22680\n """\n Equivalent to `self.isna().any()`.\n\n Some ExtensionArray subclasses may be able to optimize this check.\n """\n return bool(self.isna().any())\n\n def _values_for_argsort(self) -> np.ndarray:\n """\n Return values for sorting.\n\n Returns\n -------\n ndarray\n The transformed values should maintain the ordering between values\n within the array.\n\n See Also\n --------\n ExtensionArray.argsort : Return the indices that would sort this array.\n\n Notes\n -----\n The caller is responsible for *not* modifying these values in-place, so\n it is safe for implementers to give views on ``self``.\n\n Functions that use this (e.g. ``ExtensionArray.argsort``) should ignore\n entries with missing values in the original array (according to\n ``self.isna()``). 
This means that the corresponding entries in the returned\n array don't need to be modified to sort correctly.\n\n Examples\n --------\n In most cases, this is the underlying Numpy array of the ``ExtensionArray``:\n\n >>> arr = pd.array([1, 2, 3])\n >>> arr._values_for_argsort()\n array([1, 2, 3])\n """\n # Note: this is used in `ExtensionArray.argsort/argmin/argmax`.\n return np.array(self)\n\n def argsort(\n self,\n *,\n ascending: bool = True,\n kind: SortKind = "quicksort",\n na_position: str = "last",\n **kwargs,\n ) -> np.ndarray:\n """\n Return the indices that would sort this array.\n\n Parameters\n ----------\n ascending : bool, default True\n Whether the indices should result in an ascending\n or descending sort.\n kind : {'quicksort', 'mergesort', 'heapsort', 'stable'}, optional\n Sorting algorithm.\n na_position : {'first', 'last'}, default 'last'\n If ``'first'``, put ``NaN`` values at the beginning.\n If ``'last'``, put ``NaN`` values at the end.\n *args, **kwargs:\n Passed through to :func:`numpy.argsort`.\n\n Returns\n -------\n np.ndarray[np.intp]\n Array of indices that sort ``self``. If NaN values are contained,\n NaN values are placed at the end.\n\n See Also\n --------\n numpy.argsort : Sorting implementation used internally.\n\n Examples\n --------\n >>> arr = pd.array([3, 1, 2, 5, 4])\n >>> arr.argsort()\n array([1, 2, 0, 4, 3])\n """\n # Implementer note: You have two places to override the behavior of\n # argsort.\n # 1. _values_for_argsort : construct the values passed to np.argsort\n # 2. argsort : total control over sorting. In case of overriding this,\n # it is recommended to also override argmax/argmin\n ascending = nv.validate_argsort_with_ascending(ascending, (), kwargs)\n\n values = self._values_for_argsort()\n return nargsort(\n values,\n kind=kind,\n ascending=ascending,\n na_position=na_position,\n mask=np.asarray(self.isna()),\n )\n\n def argmin(self, skipna: bool = True) -> int:\n """\n Return the index of minimum value.\n\n In case of multiple occurrences of the minimum value, the index\n corresponding to the first occurrence is returned.\n\n Parameters\n ----------\n skipna : bool, default True\n\n Returns\n -------\n int\n\n See Also\n --------\n ExtensionArray.argmax : Return the index of the maximum value.\n\n Examples\n --------\n >>> arr = pd.array([3, 1, 2, 5, 4])\n >>> arr.argmin()\n 1\n """\n # Implementer note: You have two places to override the behavior of\n # argmin.\n # 1. _values_for_argsort : construct the values used in nargminmax\n # 2. argmin itself : total control over sorting.\n validate_bool_kwarg(skipna, "skipna")\n if not skipna and self._hasna:\n raise NotImplementedError\n return nargminmax(self, "argmin")\n\n def argmax(self, skipna: bool = True) -> int:\n """\n Return the index of maximum value.\n\n In case of multiple occurrences of the maximum value, the index\n corresponding to the first occurrence is returned.\n\n Parameters\n ----------\n skipna : bool, default True\n\n Returns\n -------\n int\n\n See Also\n --------\n ExtensionArray.argmin : Return the index of the minimum value.\n\n Examples\n --------\n >>> arr = pd.array([3, 1, 2, 5, 4])\n >>> arr.argmax()\n 3\n """\n # Implementer note: You have two places to override the behavior of\n # argmax.\n # 1. _values_for_argsort : construct the values used in nargminmax\n # 2. 
argmax itself : total control over sorting.\n validate_bool_kwarg(skipna, "skipna")\n if not skipna and self._hasna:\n raise NotImplementedError\n return nargminmax(self, "argmax")\n\n def interpolate(\n self,\n *,\n method: InterpolateOptions,\n axis: int,\n index: Index,\n limit,\n limit_direction,\n limit_area,\n copy: bool,\n **kwargs,\n ) -> Self:\n """\n See DataFrame.interpolate.__doc__.\n\n Examples\n --------\n >>> arr = pd.arrays.NumpyExtensionArray(np.array([0, 1, np.nan, 3]))\n >>> arr.interpolate(method="linear",\n ... limit=3,\n ... limit_direction="forward",\n ... index=pd.Index([1, 2, 3, 4]),\n ... fill_value=1,\n ... copy=False,\n ... axis=0,\n ... limit_area="inside"\n ... )\n <NumpyExtensionArray>\n [0.0, 1.0, 2.0, 3.0]\n Length: 4, dtype: float64\n """\n # NB: we return type(self) even if copy=False\n raise NotImplementedError(\n f"{type(self).__name__} does not implement interpolate"\n )\n\n def _pad_or_backfill(\n self,\n *,\n method: FillnaOptions,\n limit: int | None = None,\n limit_area: Literal["inside", "outside"] | None = None,\n copy: bool = True,\n ) -> Self:\n """\n Pad or backfill values, used by Series/DataFrame ffill and bfill.\n\n Parameters\n ----------\n method : {'backfill', 'bfill', 'pad', 'ffill'}\n Method to use for filling holes in reindexed Series:\n\n * pad / ffill: propagate last valid observation forward to next valid.\n * backfill / bfill: use NEXT valid observation to fill gap.\n\n limit : int, default None\n This is the maximum number of consecutive\n NaN values to forward/backward fill. In other words, if there is\n a gap with more than this number of consecutive NaNs, it will only\n be partially filled. If method is not specified, this is the\n maximum number of entries along the entire axis where NaNs will be\n filled.\n\n copy : bool, default True\n Whether to make a copy of the data before filling. If False, then\n the original should be modified and no new memory should be allocated.\n For ExtensionArray subclasses that cannot do this, it is at the\n author's discretion whether to ignore "copy=False" or to raise.\n The base class implementation ignores the keyword if any NAs are\n present.\n\n Returns\n -------\n Same type as self\n\n Examples\n --------\n >>> arr = pd.array([np.nan, np.nan, 2, 3, np.nan, np.nan])\n >>> arr._pad_or_backfill(method="backfill", limit=1)\n <IntegerArray>\n [<NA>, 2, 2, 3, <NA>, <NA>]\n Length: 6, dtype: Int64\n """\n\n # If a 3rd-party EA has implemented this functionality in fillna,\n # we warn that they need to implement _pad_or_backfill instead.\n if (\n type(self).fillna is not ExtensionArray.fillna\n and type(self)._pad_or_backfill is ExtensionArray._pad_or_backfill\n ):\n # Check for _pad_or_backfill here allows us to call\n # super()._pad_or_backfill without getting this warning\n warnings.warn(\n "ExtensionArray.fillna 'method' keyword is deprecated. "\n "In a future version. arr._pad_or_backfill will be called "\n "instead. 3rd-party ExtensionArray authors need to implement "\n "_pad_or_backfill.",\n DeprecationWarning,\n stacklevel=find_stack_level(),\n )\n if limit_area is not None:\n raise NotImplementedError(\n f"{type(self).__name__} does not implement limit_area "\n "(added in pandas 2.2). 
3rd-party ExtnsionArray authors "\n "need to add this argument to _pad_or_backfill."\n )\n return self.fillna(method=method, limit=limit)\n\n mask = self.isna()\n\n if mask.any():\n # NB: the base class does not respect the "copy" keyword\n meth = missing.clean_fill_method(method)\n\n npmask = np.asarray(mask)\n if limit_area is not None and not npmask.all():\n _fill_limit_area_1d(npmask, limit_area)\n if meth == "pad":\n indexer = libalgos.get_fill_indexer(npmask, limit=limit)\n return self.take(indexer, allow_fill=True)\n else:\n # i.e. meth == "backfill"\n indexer = libalgos.get_fill_indexer(npmask[::-1], limit=limit)[::-1]\n return self[::-1].take(indexer, allow_fill=True)\n\n else:\n if not copy:\n return self\n new_values = self.copy()\n return new_values\n\n def fillna(\n self,\n value: object | ArrayLike | None = None,\n method: FillnaOptions | None = None,\n limit: int | None = None,\n copy: bool = True,\n ) -> Self:\n """\n Fill NA/NaN values using the specified method.\n\n Parameters\n ----------\n value : scalar, array-like\n If a scalar value is passed it is used to fill all missing values.\n Alternatively, an array-like "value" can be given. It's expected\n that the array-like have the same length as 'self'.\n method : {'backfill', 'bfill', 'pad', 'ffill', None}, default None\n Method to use for filling holes in reindexed Series:\n\n * pad / ffill: propagate last valid observation forward to next valid.\n * backfill / bfill: use NEXT valid observation to fill gap.\n\n .. deprecated:: 2.1.0\n\n limit : int, default None\n If method is specified, this is the maximum number of consecutive\n NaN values to forward/backward fill. In other words, if there is\n a gap with more than this number of consecutive NaNs, it will only\n be partially filled. If method is not specified, this is the\n maximum number of entries along the entire axis where NaNs will be\n filled.\n\n .. deprecated:: 2.1.0\n\n copy : bool, default True\n Whether to make a copy of the data before filling. If False, then\n the original should be modified and no new memory should be allocated.\n For ExtensionArray subclasses that cannot do this, it is at the\n author's discretion whether to ignore "copy=False" or to raise.\n The base class implementation ignores the keyword in pad/backfill\n cases.\n\n Returns\n -------\n ExtensionArray\n With NA/NaN filled.\n\n Examples\n --------\n >>> arr = pd.array([np.nan, np.nan, 2, 3, np.nan, np.nan])\n >>> arr.fillna(0)\n <IntegerArray>\n [0, 0, 2, 3, 0, 0]\n Length: 6, dtype: Int64\n """\n if method is not None:\n warnings.warn(\n f"The 'method' keyword in {type(self).__name__}.fillna is "\n "deprecated and will be removed in a future version.",\n FutureWarning,\n stacklevel=find_stack_level(),\n )\n\n value, method = validate_fillna_kwargs(value, method)\n\n mask = self.isna()\n # error: Argument 2 to "check_value_size" has incompatible type\n # "ExtensionArray"; expected "ndarray"\n value = missing.check_value_size(\n value, mask, len(self) # type: ignore[arg-type]\n )\n\n if mask.any():\n if method is not None:\n meth = missing.clean_fill_method(method)\n\n npmask = np.asarray(mask)\n if meth == "pad":\n indexer = libalgos.get_fill_indexer(npmask, limit=limit)\n return self.take(indexer, allow_fill=True)\n else:\n # i.e. 
meth == "backfill"\n indexer = libalgos.get_fill_indexer(npmask[::-1], limit=limit)[::-1]\n return self[::-1].take(indexer, allow_fill=True)\n else:\n # fill with value\n if not copy:\n new_values = self[:]\n else:\n new_values = self.copy()\n new_values[mask] = value\n else:\n if not copy:\n new_values = self[:]\n else:\n new_values = self.copy()\n return new_values\n\n def dropna(self) -> Self:\n """\n Return ExtensionArray without NA values.\n\n Returns\n -------\n\n Examples\n --------\n >>> pd.array([1, 2, np.nan]).dropna()\n <IntegerArray>\n [1, 2]\n Length: 2, dtype: Int64\n """\n # error: Unsupported operand type for ~ ("ExtensionArray")\n return self[~self.isna()] # type: ignore[operator]\n\n def duplicated(\n self, keep: Literal["first", "last", False] = "first"\n ) -> npt.NDArray[np.bool_]:\n """\n Return boolean ndarray denoting duplicate values.\n\n Parameters\n ----------\n keep : {'first', 'last', False}, default 'first'\n - ``first`` : Mark duplicates as ``True`` except for the first occurrence.\n - ``last`` : Mark duplicates as ``True`` except for the last occurrence.\n - False : Mark all duplicates as ``True``.\n\n Returns\n -------\n ndarray[bool]\n\n Examples\n --------\n >>> pd.array([1, 1, 2, 3, 3], dtype="Int64").duplicated()\n array([False, True, False, False, True])\n """\n mask = self.isna().astype(np.bool_, copy=False)\n return duplicated(values=self, keep=keep, mask=mask)\n\n def shift(self, periods: int = 1, fill_value: object = None) -> ExtensionArray:\n """\n Shift values by desired number.\n\n Newly introduced missing values are filled with\n ``self.dtype.na_value``.\n\n Parameters\n ----------\n periods : int, default 1\n The number of periods to shift. Negative values are allowed\n for shifting backwards.\n\n fill_value : object, optional\n The scalar value to use for newly introduced missing values.\n The default is ``self.dtype.na_value``.\n\n Returns\n -------\n ExtensionArray\n Shifted.\n\n Notes\n -----\n If ``self`` is empty or ``periods`` is 0, a copy of ``self`` is\n returned.\n\n If ``periods > len(self)``, then an array of size\n len(self) is returned, with all values filled with\n ``self.dtype.na_value``.\n\n For 2-dimensional ExtensionArrays, we are always shifting along axis=0.\n\n Examples\n --------\n >>> arr = pd.array([1, 2, 3])\n >>> arr.shift(2)\n <IntegerArray>\n [<NA>, <NA>, 1]\n Length: 3, dtype: Int64\n """\n # Note: this implementation assumes that `self.dtype.na_value` can be\n # stored in an instance of your ExtensionArray with `self.dtype`.\n if not len(self) or periods == 0:\n return self.copy()\n\n if isna(fill_value):\n fill_value = self.dtype.na_value\n\n empty = self._from_sequence(\n [fill_value] * min(abs(periods), len(self)), dtype=self.dtype\n )\n if periods > 0:\n a = empty\n b = self[:-periods]\n else:\n a = self[abs(periods) :]\n b = empty\n return self._concat_same_type([a, b])\n\n def unique(self) -> Self:\n """\n Compute the ExtensionArray of unique values.\n\n Returns\n -------\n pandas.api.extensions.ExtensionArray\n\n Examples\n --------\n >>> arr = pd.array([1, 2, 3, 1, 2, 3])\n >>> arr.unique()\n <IntegerArray>\n [1, 2, 3]\n Length: 3, dtype: Int64\n """\n uniques = unique(self.astype(object))\n return self._from_sequence(uniques, dtype=self.dtype)\n\n def searchsorted(\n self,\n value: NumpyValueArrayLike | ExtensionArray,\n side: Literal["left", "right"] = "left",\n sorter: NumpySorter | None = None,\n ) -> npt.NDArray[np.intp] | np.intp:\n """\n Find indices where elements should be inserted to maintain 
order.\n\n Find the indices into a sorted array `self` (a) such that, if the\n corresponding elements in `value` were inserted before the indices,\n the order of `self` would be preserved.\n\n Assuming that `self` is sorted:\n\n ====== ================================\n `side` returned index `i` satisfies\n ====== ================================\n left ``self[i-1] < value <= self[i]``\n right ``self[i-1] <= value < self[i]``\n ====== ================================\n\n Parameters\n ----------\n value : array-like, list or scalar\n Value(s) to insert into `self`.\n side : {'left', 'right'}, optional\n If 'left', the index of the first suitable location found is given.\n If 'right', return the last such index. If there is no suitable\n index, return either 0 or N (where N is the length of `self`).\n sorter : 1-D array-like, optional\n Optional array of integer indices that sort array a into ascending\n order. They are typically the result of argsort.\n\n Returns\n -------\n array of ints or int\n If value is array-like, array of insertion points.\n If value is scalar, a single integer.\n\n See Also\n --------\n numpy.searchsorted : Similar method from NumPy.\n\n Examples\n --------\n >>> arr = pd.array([1, 2, 3, 5])\n >>> arr.searchsorted([4])\n array([3])\n """\n # Note: the base tests provided by pandas only test the basics.\n # We do not test\n # 1. Values outside the range of the `data_for_sorting` fixture\n # 2. Values between the values in the `data_for_sorting` fixture\n # 3. Missing values.\n arr = self.astype(object)\n if isinstance(value, ExtensionArray):\n value = value.astype(object)\n return arr.searchsorted(value, side=side, sorter=sorter)\n\n def equals(self, other: object) -> bool:\n """\n Return if another array is equivalent to this array.\n\n Equivalent means that both arrays have the same shape and dtype, and\n all values compare equal. Missing values in the same location are\n considered equal (in contrast with normal equality).\n\n Parameters\n ----------\n other : ExtensionArray\n Array to compare to this Array.\n\n Returns\n -------\n boolean\n Whether the arrays are equivalent.\n\n Examples\n --------\n >>> arr1 = pd.array([1, 2, np.nan])\n >>> arr2 = pd.array([1, 2, np.nan])\n >>> arr1.equals(arr2)\n True\n """\n if type(self) != type(other):\n return False\n other = cast(ExtensionArray, other)\n if self.dtype != other.dtype:\n return False\n elif len(self) != len(other):\n return False\n else:\n equal_values = self == other\n if isinstance(equal_values, ExtensionArray):\n # boolean array with NA -> fill with False\n equal_values = equal_values.fillna(False)\n # error: Unsupported left operand type for & ("ExtensionArray")\n equal_na = self.isna() & other.isna() # type: ignore[operator]\n return bool((equal_values | equal_na).all())\n\n def isin(self, values: ArrayLike) -> npt.NDArray[np.bool_]:\n """\n Pointwise comparison for set containment in the given values.\n\n Roughly equivalent to `np.array([x in values for x in self])`\n\n Parameters\n ----------\n values : np.ndarray or ExtensionArray\n\n Returns\n -------\n np.ndarray[bool]\n\n Examples\n --------\n >>> arr = pd.array([1, 2, 3])\n >>> arr.isin([1])\n <BooleanArray>\n [True, False, False]\n Length: 3, dtype: boolean\n """\n return isin(np.asarray(self), values)\n\n def _values_for_factorize(self) -> tuple[np.ndarray, Any]:\n """\n Return an array and missing value suitable for factorization.\n\n Returns\n -------\n values : ndarray\n An array suitable for factorization. 
This should maintain order\n and be a supported dtype (Float64, Int64, UInt64, String, Object).\n By default, the extension array is cast to object dtype.\n na_value : object\n The value in `values` to consider missing. This will be treated\n as NA in the factorization routines, so it will be coded as\n `-1` and not included in `uniques`. By default,\n ``np.nan`` is used.\n\n Notes\n -----\n The values returned by this method are also used in\n :func:`pandas.util.hash_pandas_object`. If needed, this can be\n overridden in the ``self._hash_pandas_object()`` method.\n\n Examples\n --------\n >>> pd.array([1, 2, 3])._values_for_factorize()\n (array([1, 2, 3], dtype=object), nan)\n """\n return self.astype(object), np.nan\n\n def factorize(\n self,\n use_na_sentinel: bool = True,\n ) -> tuple[np.ndarray, ExtensionArray]:\n """\n Encode the extension array as an enumerated type.\n\n Parameters\n ----------\n use_na_sentinel : bool, default True\n If True, the sentinel -1 will be used for NaN values. If False,\n NaN values will be encoded as non-negative integers and will not drop the\n NaN from the uniques of the values.\n\n .. versionadded:: 1.5.0\n\n Returns\n -------\n codes : ndarray\n An integer NumPy array that's an indexer into the original\n ExtensionArray.\n uniques : ExtensionArray\n An ExtensionArray containing the unique values of `self`.\n\n .. note::\n\n uniques will *not* contain an entry for the NA value of\n the ExtensionArray if there are any missing values present\n in `self`.\n\n See Also\n --------\n factorize : Top-level factorize method that dispatches here.\n\n Notes\n -----\n :meth:`pandas.factorize` offers a `sort` keyword as well.\n\n Examples\n --------\n >>> idx1 = pd.PeriodIndex(["2014-01", "2014-01", "2014-02", "2014-02",\n ... "2014-03", "2014-03"], freq="M")\n >>> arr, idx = idx1.factorize()\n >>> arr\n array([0, 0, 1, 1, 2, 2])\n >>> idx\n PeriodIndex(['2014-01', '2014-02', '2014-03'], dtype='period[M]')\n """\n # Implementer note: There are two ways to override the behavior of\n # pandas.factorize\n # 1. _values_for_factorize and _from_factorize.\n # Specify the values passed to pandas' internal factorization\n # routines, and how to convert from those values back to the\n # original ExtensionArray.\n # 2. ExtensionArray.factorize.\n # Complete control over factorization.\n arr, na_value = self._values_for_factorize()\n\n codes, uniques = factorize_array(\n arr, use_na_sentinel=use_na_sentinel, na_value=na_value\n )\n\n uniques_ea = self._from_factorized(uniques, self)\n return codes, uniques_ea\n\n _extension_array_shared_docs[\n "repeat"\n ] = """\n Repeat elements of a %(klass)s.\n\n Returns a new %(klass)s where each element of the current %(klass)s\n is repeated consecutively a given number of times.\n\n Parameters\n ----------\n repeats : int or array of ints\n The number of repetitions for each element. This should be a\n non-negative integer. Repeating 0 times will return an empty\n %(klass)s.\n axis : None\n Must be ``None``. 
Has no effect but is accepted for compatibility\n with numpy.\n\n Returns\n -------\n %(klass)s\n Newly created %(klass)s with repeated elements.\n\n See Also\n --------\n Series.repeat : Equivalent function for Series.\n Index.repeat : Equivalent function for Index.\n numpy.repeat : Similar method for :class:`numpy.ndarray`.\n ExtensionArray.take : Take arbitrary positions.\n\n Examples\n --------\n >>> cat = pd.Categorical(['a', 'b', 'c'])\n >>> cat\n ['a', 'b', 'c']\n Categories (3, object): ['a', 'b', 'c']\n >>> cat.repeat(2)\n ['a', 'a', 'b', 'b', 'c', 'c']\n Categories (3, object): ['a', 'b', 'c']\n >>> cat.repeat([1, 2, 3])\n ['a', 'b', 'b', 'c', 'c', 'c']\n Categories (3, object): ['a', 'b', 'c']\n """\n\n @Substitution(klass="ExtensionArray")\n @Appender(_extension_array_shared_docs["repeat"])\n def repeat(self, repeats: int | Sequence[int], axis: AxisInt | None = None) -> Self:\n nv.validate_repeat((), {"axis": axis})\n ind = np.arange(len(self)).repeat(repeats)\n return self.take(ind)\n\n # ------------------------------------------------------------------------\n # Indexing methods\n # ------------------------------------------------------------------------\n\n def take(\n self,\n indices: TakeIndexer,\n *,\n allow_fill: bool = False,\n fill_value: Any = None,\n ) -> Self:\n """\n Take elements from an array.\n\n Parameters\n ----------\n indices : sequence of int or one-dimensional np.ndarray of int\n Indices to be taken.\n allow_fill : bool, default False\n How to handle negative values in `indices`.\n\n * False: negative values in `indices` indicate positional indices\n from the right (the default). This is similar to\n :func:`numpy.take`.\n\n * True: negative values in `indices` indicate\n missing values. These values are set to `fill_value`. Any other\n other negative values raise a ``ValueError``.\n\n fill_value : any, optional\n Fill value to use for NA-indices when `allow_fill` is True.\n This may be ``None``, in which case the default NA value for\n the type, ``self.dtype.na_value``, is used.\n\n For many ExtensionArrays, there will be two representations of\n `fill_value`: a user-facing "boxed" scalar, and a low-level\n physical NA value. `fill_value` should be the user-facing version,\n and the implementation should handle translating that to the\n physical version for processing the take if necessary.\n\n Returns\n -------\n ExtensionArray\n\n Raises\n ------\n IndexError\n When the indices are out of bounds for the array.\n ValueError\n When `indices` contains negative values other than ``-1``\n and `allow_fill` is True.\n\n See Also\n --------\n numpy.take : Take elements from an array along an axis.\n api.extensions.take : Take elements from an array.\n\n Notes\n -----\n ExtensionArray.take is called by ``Series.__getitem__``, ``.loc``,\n ``iloc``, when `indices` is a sequence of values. Additionally,\n it's called by :meth:`Series.reindex`, or any other method\n that causes realignment, with a `fill_value`.\n\n Examples\n --------\n Here's an example implementation, which relies on casting the\n extension array to object dtype. This uses the helper method\n :func:`pandas.api.extensions.take`.\n\n .. 
code-block:: python\n\n def take(self, indices, allow_fill=False, fill_value=None):\n from pandas.core.algorithms import take\n\n # If the ExtensionArray is backed by an ndarray, then\n # just pass that here instead of coercing to object.\n data = self.astype(object)\n\n if allow_fill and fill_value is None:\n fill_value = self.dtype.na_value\n\n # fill value should always be translated from the scalar\n # type for the array, to the physical storage type for\n # the data, before passing to take.\n\n result = take(data, indices, fill_value=fill_value,\n allow_fill=allow_fill)\n return self._from_sequence(result, dtype=self.dtype)\n """\n # Implementer note: The `fill_value` parameter should be a user-facing\n # value, an instance of self.dtype.type. When passed `fill_value=None`,\n # the default of `self.dtype.na_value` should be used.\n # This may differ from the physical storage type your ExtensionArray\n # uses. In this case, your implementation is responsible for casting\n # the user-facing type to the storage type, before using\n # pandas.api.extensions.take\n raise AbstractMethodError(self)\n\n def copy(self) -> Self:\n """\n Return a copy of the array.\n\n Returns\n -------\n ExtensionArray\n\n Examples\n --------\n >>> arr = pd.array([1, 2, 3])\n >>> arr2 = arr.copy()\n >>> arr[0] = 2\n >>> arr2\n <IntegerArray>\n [1, 2, 3]\n Length: 3, dtype: Int64\n """\n raise AbstractMethodError(self)\n\n def view(self, dtype: Dtype | None = None) -> ArrayLike:\n """\n Return a view on the array.\n\n Parameters\n ----------\n dtype : str, np.dtype, or ExtensionDtype, optional\n Default None.\n\n Returns\n -------\n ExtensionArray or np.ndarray\n A view on the :class:`ExtensionArray`'s data.\n\n Examples\n --------\n This gives view on the underlying data of an ``ExtensionArray`` and is not a\n copy. Modifications on either the view or the original ``ExtensionArray``\n will be reflectd on the underlying data:\n\n >>> arr = pd.array([1, 2, 3])\n >>> arr2 = arr.view()\n >>> arr[0] = 2\n >>> arr2\n <IntegerArray>\n [2, 2, 3]\n Length: 3, dtype: Int64\n """\n # NB:\n # - This must return a *new* object referencing the same data, not self.\n # - The only case that *must* be implemented is with dtype=None,\n # giving a view with the same dtype as self.\n if dtype is not None:\n raise NotImplementedError(dtype)\n return self[:]\n\n # ------------------------------------------------------------------------\n # Printing\n # ------------------------------------------------------------------------\n\n def __repr__(self) -> str:\n if self.ndim > 1:\n return self._repr_2d()\n\n from pandas.io.formats.printing import format_object_summary\n\n # the short repr has no trailing newline, while the truncated\n # repr does. So we include a newline in our template, and strip\n # any trailing newlines from format_object_summary\n data = format_object_summary(\n self, self._formatter(), indent_for_name=False\n ).rstrip(", \n")\n class_name = f"<{type(self).__name__}>\n"\n footer = self._get_repr_footer()\n return f"{class_name}{data}\n{footer}"\n\n def _get_repr_footer(self) -> str:\n # GH#24278\n if self.ndim > 1:\n return f"Shape: {self.shape}, dtype: {self.dtype}"\n return f"Length: {len(self)}, dtype: {self.dtype}"\n\n def _repr_2d(self) -> str:\n from pandas.io.formats.printing import format_object_summary\n\n # the short repr has no trailing newline, while the truncated\n # repr does. 
So we include a newline in our template, and strip\n # any trailing newlines from format_object_summary\n lines = [\n format_object_summary(x, self._formatter(), indent_for_name=False).rstrip(\n ", \n"\n )\n for x in self\n ]\n data = ",\n".join(lines)\n class_name = f"<{type(self).__name__}>"\n footer = self._get_repr_footer()\n return f"{class_name}\n[\n{data}\n]\n{footer}"\n\n def _formatter(self, boxed: bool = False) -> Callable[[Any], str | None]:\n """\n Formatting function for scalar values.\n\n This is used in the default '__repr__'. The returned formatting\n function receives instances of your scalar type.\n\n Parameters\n ----------\n boxed : bool, default False\n An indicated for whether or not your array is being printed\n within a Series, DataFrame, or Index (True), or just by\n itself (False). This may be useful if you want scalar values\n to appear differently within a Series versus on its own (e.g.\n quoted or not).\n\n Returns\n -------\n Callable[[Any], str]\n A callable that gets instances of the scalar type and\n returns a string. By default, :func:`repr` is used\n when ``boxed=False`` and :func:`str` is used when\n ``boxed=True``.\n\n Examples\n --------\n >>> class MyExtensionArray(pd.arrays.NumpyExtensionArray):\n ... def _formatter(self, boxed=False):\n ... return lambda x: '*' + str(x) + '*' if boxed else repr(x) + '*'\n >>> MyExtensionArray(np.array([1, 2, 3, 4]))\n <MyExtensionArray>\n [1*, 2*, 3*, 4*]\n Length: 4, dtype: int64\n """\n if boxed:\n return str\n return repr\n\n # ------------------------------------------------------------------------\n # Reshaping\n # ------------------------------------------------------------------------\n\n def transpose(self, *axes: int) -> ExtensionArray:\n """\n Return a transposed view on this array.\n\n Because ExtensionArrays are always 1D, this is a no-op. It is included\n for compatibility with np.ndarray.\n\n Returns\n -------\n ExtensionArray\n\n Examples\n --------\n >>> pd.array([1, 2, 3]).transpose()\n <IntegerArray>\n [1, 2, 3]\n Length: 3, dtype: Int64\n """\n return self[:]\n\n @property\n def T(self) -> ExtensionArray:\n return self.transpose()\n\n def ravel(self, order: Literal["C", "F", "A", "K"] | None = "C") -> ExtensionArray:\n """\n Return a flattened view on this array.\n\n Parameters\n ----------\n order : {None, 'C', 'F', 'A', 'K'}, default 'C'\n\n Returns\n -------\n ExtensionArray\n\n Notes\n -----\n - Because ExtensionArrays are 1D-only, this is a no-op.\n - The "order" argument is ignored, is for compatibility with NumPy.\n\n Examples\n --------\n >>> pd.array([1, 2, 3]).ravel()\n <IntegerArray>\n [1, 2, 3]\n Length: 3, dtype: Int64\n """\n return self\n\n @classmethod\n def _concat_same_type(cls, to_concat: Sequence[Self]) -> Self:\n """\n Concatenate multiple array of this dtype.\n\n Parameters\n ----------\n to_concat : sequence of this type\n\n Returns\n -------\n ExtensionArray\n\n Examples\n --------\n >>> arr1 = pd.array([1, 2, 3])\n >>> arr2 = pd.array([4, 5, 6])\n >>> pd.arrays.IntegerArray._concat_same_type([arr1, arr2])\n <IntegerArray>\n [1, 2, 3, 4, 5, 6]\n Length: 6, dtype: Int64\n """\n # Implementer note: this method will only be called with a sequence of\n # ExtensionArrays of this class and with the same dtype as self. 
This\n # should allow "easy" concatenation (no upcasting needed), and result\n # in a new ExtensionArray of the same dtype.\n # Note: this strict behaviour is only guaranteed starting with pandas 1.1\n raise AbstractMethodError(cls)\n\n # The _can_hold_na attribute is set to True so that pandas internals\n # will use the ExtensionDtype.na_value as the NA value in operations\n # such as take(), reindex(), shift(), etc. In addition, those results\n # will then be of the ExtensionArray subclass rather than an array\n # of objects\n @cache_readonly\n def _can_hold_na(self) -> bool:\n return self.dtype._can_hold_na\n\n def _accumulate(\n self, name: str, *, skipna: bool = True, **kwargs\n ) -> ExtensionArray:\n """\n Return an ExtensionArray performing an accumulation operation.\n\n The underlying data type might change.\n\n Parameters\n ----------\n name : str\n Name of the function, supported values are:\n - cummin\n - cummax\n - cumsum\n - cumprod\n skipna : bool, default True\n If True, skip NA values.\n **kwargs\n Additional keyword arguments passed to the accumulation function.\n Currently, there is no supported kwarg.\n\n Returns\n -------\n array\n\n Raises\n ------\n NotImplementedError : subclass does not define accumulations\n\n Examples\n --------\n >>> arr = pd.array([1, 2, 3])\n >>> arr._accumulate(name='cumsum')\n <IntegerArray>\n [1, 3, 6]\n Length: 3, dtype: Int64\n """\n raise NotImplementedError(f"cannot perform {name} with type {self.dtype}")\n\n def _reduce(\n self, name: str, *, skipna: bool = True, keepdims: bool = False, **kwargs\n ):\n """\n Return a scalar result of performing the reduction operation.\n\n Parameters\n ----------\n name : str\n Name of the function, supported values are:\n { any, all, min, max, sum, mean, median, prod,\n std, var, sem, kurt, skew }.\n skipna : bool, default True\n If True, skip NaN values.\n keepdims : bool, default False\n If False, a scalar is returned.\n If True, the result has dimension with size one along the reduced axis.\n\n .. versionadded:: 2.1\n\n This parameter is not required in the _reduce signature to keep backward\n compatibility, but will become required in the future. 
If the parameter\n is not found in the method signature, a FutureWarning will be emitted.\n **kwargs\n Additional keyword arguments passed to the reduction function.\n Currently, `ddof` is the only supported kwarg.\n\n Returns\n -------\n scalar\n\n Raises\n ------\n TypeError : subclass does not define reductions\n\n Examples\n --------\n >>> pd.array([1, 2, 3])._reduce("min")\n 1\n """\n meth = getattr(self, name, None)\n if meth is None:\n raise TypeError(\n f"'{type(self).__name__}' with dtype {self.dtype} "\n f"does not support reduction '{name}'"\n )\n result = meth(skipna=skipna, **kwargs)\n if keepdims:\n result = np.array([result])\n\n return result\n\n # https://github.com/python/typeshed/issues/2148#issuecomment-520783318\n # Incompatible types in assignment (expression has type "None", base class\n # "object" defined the type as "Callable[[object], int]")\n __hash__: ClassVar[None] # type: ignore[assignment]\n\n # ------------------------------------------------------------------------\n # Non-Optimized Default Methods; in the case of the private methods here,\n # these are not guaranteed to be stable across pandas versions.\n\n def _values_for_json(self) -> np.ndarray:\n """\n Specify how to render our entries in to_json.\n\n Notes\n -----\n The dtype on the returned ndarray is not restricted, but for non-native\n types that are not specifically handled in objToJSON.c, to_json is\n liable to raise. In these cases, it may be safer to return an ndarray\n of strings.\n """\n return np.asarray(self)\n\n def _hash_pandas_object(\n self, *, encoding: str, hash_key: str, categorize: bool\n ) -> npt.NDArray[np.uint64]:\n """\n Hook for hash_pandas_object.\n\n Default is to use the values returned by _values_for_factorize.\n\n Parameters\n ----------\n encoding : str\n Encoding for data & key when strings.\n hash_key : str\n Hash_key for string key to encode.\n categorize : bool\n Whether to first categorize object arrays before hashing. This is more\n efficient when the array contains duplicate values.\n\n Returns\n -------\n np.ndarray[uint64]\n\n Examples\n --------\n >>> pd.array([1, 2])._hash_pandas_object(encoding='utf-8',\n ... hash_key="1000000000000000",\n ... categorize=False\n ... )\n array([ 6238072747940578789, 15839785061582574730], dtype=uint64)\n """\n from pandas.core.util.hashing import hash_array\n\n values, _ = self._values_for_factorize()\n return hash_array(\n values, encoding=encoding, hash_key=hash_key, categorize=categorize\n )\n\n def _explode(self) -> tuple[Self, npt.NDArray[np.uint64]]:\n """\n Transform each element of list-like to a row.\n\n For arrays that do not contain list-like elements the default\n implementation of this method just returns a copy and an array\n of ones (unchanged index).\n\n Returns\n -------\n ExtensionArray\n Array with the exploded values.\n np.ndarray[uint64]\n The original lengths of each list-like for determining the\n resulting index.\n\n See Also\n --------\n Series.explode : The method on the ``Series`` object that this\n extension array method is meant to support.\n\n Examples\n --------\n >>> import pyarrow as pa\n >>> a = pd.array([[1, 2, 3], [4], [5, 6]],\n ... 
dtype=pd.ArrowDtype(pa.list_(pa.int64())))\n >>> a._explode()\n (<ArrowExtensionArray>\n [1, 2, 3, 4, 5, 6]\n Length: 6, dtype: int64[pyarrow], array([3, 1, 2], dtype=int32))\n """\n values = self.copy()\n counts = np.ones(shape=(len(self),), dtype=np.uint64)\n return values, counts\n\n def tolist(self) -> list:\n """\n Return a list of the values.\n\n These are each a scalar type, which is a Python scalar\n (for str, int, float) or a pandas scalar\n (for Timestamp/Timedelta/Interval/Period)\n\n Returns\n -------\n list\n\n Examples\n --------\n >>> arr = pd.array([1, 2, 3])\n >>> arr.tolist()\n [1, 2, 3]\n """\n if self.ndim > 1:\n return [x.tolist() for x in self]\n return list(self)\n\n def delete(self, loc: PositionalIndexer) -> Self:\n indexer = np.delete(np.arange(len(self)), loc)\n return self.take(indexer)\n\n def insert(self, loc: int, item) -> Self:\n """\n Insert an item at the given position.\n\n Parameters\n ----------\n loc : int\n item : scalar-like\n\n Returns\n -------\n same type as self\n\n Notes\n -----\n This method should be both type and dtype-preserving. If the item\n cannot be held in an array of this type/dtype, either ValueError or\n TypeError should be raised.\n\n The default implementation relies on _from_sequence to raise on invalid\n items.\n\n Examples\n --------\n >>> arr = pd.array([1, 2, 3])\n >>> arr.insert(2, -1)\n <IntegerArray>\n [1, 2, -1, 3]\n Length: 4, dtype: Int64\n """\n loc = validate_insert_loc(loc, len(self))\n\n item_arr = type(self)._from_sequence([item], dtype=self.dtype)\n\n return type(self)._concat_same_type([self[:loc], item_arr, self[loc:]])\n\n def _putmask(self, mask: npt.NDArray[np.bool_], value) -> None:\n """\n Analogue to np.putmask(self, mask, value)\n\n Parameters\n ----------\n mask : np.ndarray[bool]\n value : scalar or listlike\n If listlike, must be arraylike with same length as self.\n\n Returns\n -------\n None\n\n Notes\n -----\n Unlike np.putmask, we do not repeat listlike values with mismatched length.\n 'value' should either be a scalar or an arraylike with the same length\n as self.\n """\n if is_list_like(value):\n val = value[mask]\n else:\n val = value\n\n self[mask] = val\n\n def _where(self, mask: npt.NDArray[np.bool_], value) -> Self:\n """\n Analogue to np.where(mask, self, value)\n\n Parameters\n ----------\n mask : np.ndarray[bool]\n value : scalar or listlike\n\n Returns\n -------\n same type as self\n """\n result = self.copy()\n\n if is_list_like(value):\n val = value[~mask]\n else:\n val = value\n\n result[~mask] = val\n return result\n\n # TODO(3.0): this can be removed once GH#33302 deprecation is enforced\n def _fill_mask_inplace(\n self, method: str, limit: int | None, mask: npt.NDArray[np.bool_]\n ) -> None:\n """\n Replace values in locations specified by 'mask' using pad or backfill.\n\n See also\n --------\n ExtensionArray.fillna\n """\n func = missing.get_fill_func(method)\n npvalues = self.astype(object)\n # NB: if we don't copy mask here, it may be altered inplace, which\n # would mess up the `self[mask] = ...` below.\n func(npvalues, limit=limit, mask=mask.copy())\n new_values = self._from_sequence(npvalues, dtype=self.dtype)\n self[mask] = new_values[mask]\n\n def _rank(\n self,\n *,\n axis: AxisInt = 0,\n method: str = "average",\n na_option: str = "keep",\n ascending: bool = True,\n pct: bool = False,\n ):\n """\n See Series.rank.__doc__.\n """\n if axis != 0:\n raise NotImplementedError\n\n return rank(\n self._values_for_argsort(),\n axis=axis,\n method=method,\n na_option=na_option,\n 
ascending=ascending,\n pct=pct,\n )\n\n @classmethod\n def _empty(cls, shape: Shape, dtype: ExtensionDtype):\n """\n Create an ExtensionArray with the given shape and dtype.\n\n See also\n --------\n ExtensionDtype.empty\n ExtensionDtype.empty is the 'official' public version of this API.\n """\n # Implementer note: while ExtensionDtype.empty is the public way to\n # call this method, it is still required to implement this `_empty`\n # method as well (it is called internally in pandas)\n obj = cls._from_sequence([], dtype=dtype)\n\n taker = np.broadcast_to(np.intp(-1), shape)\n result = obj.take(taker, allow_fill=True)\n if not isinstance(result, cls) or dtype != result.dtype:\n raise NotImplementedError(\n f"Default 'empty' implementation is invalid for dtype='{dtype}'"\n )\n return result\n\n def _quantile(self, qs: npt.NDArray[np.float64], interpolation: str) -> Self:\n """\n Compute the quantiles of self for each quantile in `qs`.\n\n Parameters\n ----------\n qs : np.ndarray[float64]\n interpolation: str\n\n Returns\n -------\n same type as self\n """\n mask = np.asarray(self.isna())\n arr = np.asarray(self)\n fill_value = np.nan\n\n res_values = quantile_with_mask(arr, mask, fill_value, qs, interpolation)\n return type(self)._from_sequence(res_values)\n\n def _mode(self, dropna: bool = True) -> Self:\n """\n Returns the mode(s) of the ExtensionArray.\n\n Always returns `ExtensionArray` even if only one value.\n\n Parameters\n ----------\n dropna : bool, default True\n Don't consider counts of NA values.\n\n Returns\n -------\n same type as self\n Sorted, if possible.\n """\n # error: Incompatible return value type (got "Union[ExtensionArray,\n # ndarray[Any, Any]]", expected "Self")\n return mode(self, dropna=dropna) # type: ignore[return-value]\n\n def __array_ufunc__(self, ufunc: np.ufunc, method: str, *inputs, **kwargs):\n if any(\n isinstance(other, (ABCSeries, ABCIndex, ABCDataFrame)) for other in inputs\n ):\n return NotImplemented\n\n result = arraylike.maybe_dispatch_ufunc_to_dunder_op(\n self, ufunc, method, *inputs, **kwargs\n )\n if result is not NotImplemented:\n return result\n\n if "out" in kwargs:\n return arraylike.dispatch_ufunc_with_out(\n self, ufunc, method, *inputs, **kwargs\n )\n\n if method == "reduce":\n result = arraylike.dispatch_reduction_ufunc(\n self, ufunc, method, *inputs, **kwargs\n )\n if result is not NotImplemented:\n return result\n\n return arraylike.default_array_ufunc(self, ufunc, method, *inputs, **kwargs)\n\n def map(self, mapper, na_action=None):\n """\n Map values using an input mapping or function.\n\n Parameters\n ----------\n mapper : function, dict, or Series\n Mapping correspondence.\n na_action : {None, 'ignore'}, default None\n If 'ignore', propagate NA values, without passing them to the\n mapping correspondence. 
If 'ignore' is not supported, a\n ``NotImplementedError`` should be raised.\n\n Returns\n -------\n Union[ndarray, Index, ExtensionArray]\n The output of the mapping function applied to the array.\n If the function returns a tuple with more than one element\n a MultiIndex will be returned.\n """\n return map_array(self, mapper, na_action=na_action)\n\n # ------------------------------------------------------------------------\n # GroupBy Methods\n\n def _groupby_op(\n self,\n *,\n how: str,\n has_dropped_na: bool,\n min_count: int,\n ngroups: int,\n ids: npt.NDArray[np.intp],\n **kwargs,\n ) -> ArrayLike:\n """\n Dispatch GroupBy reduction or transformation operation.\n\n This is an *experimental* API to allow ExtensionArray authors to implement\n reductions and transformations. The API is subject to change.\n\n Parameters\n ----------\n how : {'any', 'all', 'sum', 'prod', 'min', 'max', 'mean', 'median',\n 'median', 'var', 'std', 'sem', 'nth', 'last', 'ohlc',\n 'cumprod', 'cumsum', 'cummin', 'cummax', 'rank'}\n has_dropped_na : bool\n min_count : int\n ngroups : int\n ids : np.ndarray[np.intp]\n ids[i] gives the integer label for the group that self[i] belongs to.\n **kwargs : operation-specific\n 'any', 'all' -> ['skipna']\n 'var', 'std', 'sem' -> ['ddof']\n 'cumprod', 'cumsum', 'cummin', 'cummax' -> ['skipna']\n 'rank' -> ['ties_method', 'ascending', 'na_option', 'pct']\n\n Returns\n -------\n np.ndarray or ExtensionArray\n """\n from pandas.core.arrays.string_ import StringDtype\n from pandas.core.groupby.ops import WrappedCythonOp\n\n kind = WrappedCythonOp.get_kind_from_how(how)\n op = WrappedCythonOp(how=how, kind=kind, has_dropped_na=has_dropped_na)\n\n # GH#43682\n if isinstance(self.dtype, StringDtype):\n # StringArray\n if op.how in [\n "prod",\n "mean",\n "median",\n "cumsum",\n "cumprod",\n "std",\n "sem",\n "var",\n "skew",\n ]:\n raise TypeError(\n f"dtype '{self.dtype}' does not support operation '{how}'"\n )\n if op.how not in ["any", "all"]:\n # Fail early to avoid conversion to object\n op._get_cython_function(op.kind, op.how, np.dtype(object), False)\n npvalues = self.to_numpy(object, na_value=np.nan)\n else:\n raise NotImplementedError(\n f"function is not implemented for this dtype: {self.dtype}"\n )\n\n res_values = op._cython_op_ndim_compat(\n npvalues,\n min_count=min_count,\n ngroups=ngroups,\n comp_ids=ids,\n mask=None,\n **kwargs,\n )\n\n if op.how in op.cast_blocklist:\n # i.e. how in ["rank"], since other cast_blocklist methods don't go\n # through cython_operation\n return res_values\n\n if isinstance(self.dtype, StringDtype):\n dtype = self.dtype\n string_array_cls = dtype.construct_array_type()\n return string_array_cls._from_sequence(res_values, dtype=dtype)\n\n else:\n raise NotImplementedError\n\n\nclass ExtensionArraySupportsAnyAll(ExtensionArray):\n def any(self, *, skipna: bool = True) -> bool:\n raise AbstractMethodError(self)\n\n def all(self, *, skipna: bool = True) -> bool:\n raise AbstractMethodError(self)\n\n\nclass ExtensionOpsMixin:\n """\n A base class for linking the operators to their dunder names.\n\n .. 
note::\n\n You may want to set ``__array_priority__`` if you want your\n implementation to be called when involved in binary operations\n with NumPy arrays.\n """\n\n @classmethod\n def _create_arithmetic_method(cls, op):\n raise AbstractMethodError(cls)\n\n @classmethod\n def _add_arithmetic_ops(cls) -> None:\n setattr(cls, "__add__", cls._create_arithmetic_method(operator.add))\n setattr(cls, "__radd__", cls._create_arithmetic_method(roperator.radd))\n setattr(cls, "__sub__", cls._create_arithmetic_method(operator.sub))\n setattr(cls, "__rsub__", cls._create_arithmetic_method(roperator.rsub))\n setattr(cls, "__mul__", cls._create_arithmetic_method(operator.mul))\n setattr(cls, "__rmul__", cls._create_arithmetic_method(roperator.rmul))\n setattr(cls, "__pow__", cls._create_arithmetic_method(operator.pow))\n setattr(cls, "__rpow__", cls._create_arithmetic_method(roperator.rpow))\n setattr(cls, "__mod__", cls._create_arithmetic_method(operator.mod))\n setattr(cls, "__rmod__", cls._create_arithmetic_method(roperator.rmod))\n setattr(cls, "__floordiv__", cls._create_arithmetic_method(operator.floordiv))\n setattr(\n cls, "__rfloordiv__", cls._create_arithmetic_method(roperator.rfloordiv)\n )\n setattr(cls, "__truediv__", cls._create_arithmetic_method(operator.truediv))\n setattr(cls, "__rtruediv__", cls._create_arithmetic_method(roperator.rtruediv))\n setattr(cls, "__divmod__", cls._create_arithmetic_method(divmod))\n setattr(cls, "__rdivmod__", cls._create_arithmetic_method(roperator.rdivmod))\n\n @classmethod\n def _create_comparison_method(cls, op):\n raise AbstractMethodError(cls)\n\n @classmethod\n def _add_comparison_ops(cls) -> None:\n setattr(cls, "__eq__", cls._create_comparison_method(operator.eq))\n setattr(cls, "__ne__", cls._create_comparison_method(operator.ne))\n setattr(cls, "__lt__", cls._create_comparison_method(operator.lt))\n setattr(cls, "__gt__", cls._create_comparison_method(operator.gt))\n setattr(cls, "__le__", cls._create_comparison_method(operator.le))\n setattr(cls, "__ge__", cls._create_comparison_method(operator.ge))\n\n @classmethod\n def _create_logical_method(cls, op):\n raise AbstractMethodError(cls)\n\n @classmethod\n def _add_logical_ops(cls) -> None:\n setattr(cls, "__and__", cls._create_logical_method(operator.and_))\n setattr(cls, "__rand__", cls._create_logical_method(roperator.rand_))\n setattr(cls, "__or__", cls._create_logical_method(operator.or_))\n setattr(cls, "__ror__", cls._create_logical_method(roperator.ror_))\n setattr(cls, "__xor__", cls._create_logical_method(operator.xor))\n setattr(cls, "__rxor__", cls._create_logical_method(roperator.rxor))\n\n\nclass ExtensionScalarOpsMixin(ExtensionOpsMixin):\n """\n A mixin for defining ops on an ExtensionArray.\n\n It is assumed that the underlying scalar objects have the operators\n already defined.\n\n Notes\n -----\n If you have defined a subclass MyExtensionArray(ExtensionArray), then\n use MyExtensionArray(ExtensionArray, ExtensionScalarOpsMixin) to\n get the arithmetic operators. After the definition of MyExtensionArray,\n insert the lines\n\n MyExtensionArray._add_arithmetic_ops()\n MyExtensionArray._add_comparison_ops()\n\n to link the operators to your class.\n\n .. 
note::\n\n You may want to set ``__array_priority__`` if you want your\n implementation to be called when involved in binary operations\n with NumPy arrays.\n """\n\n @classmethod\n def _create_method(cls, op, coerce_to_dtype: bool = True, result_dtype=None):\n """\n A class method that returns a method that will correspond to an\n operator for an ExtensionArray subclass, by dispatching to the\n relevant operator defined on the individual elements of the\n ExtensionArray.\n\n Parameters\n ----------\n op : function\n An operator that takes arguments op(a, b)\n coerce_to_dtype : bool, default True\n boolean indicating whether to attempt to convert\n the result to the underlying ExtensionArray dtype.\n If it's not possible to create a new ExtensionArray with the\n values, an ndarray is returned instead.\n\n Returns\n -------\n Callable[[Any, Any], Union[ndarray, ExtensionArray]]\n A method that can be bound to a class. When used, the method\n receives the two arguments, one of which is the instance of\n this class, and should return an ExtensionArray or an ndarray.\n\n Returning an ndarray may be necessary when the result of the\n `op` cannot be stored in the ExtensionArray. The dtype of the\n ndarray uses NumPy's normal inference rules.\n\n Examples\n --------\n Given an ExtensionArray subclass called MyExtensionArray, use\n\n __add__ = cls._create_method(operator.add)\n\n in the class definition of MyExtensionArray to create the operator\n for addition, that will be based on the operator implementation\n of the underlying elements of the ExtensionArray\n """\n\n def _binop(self, other):\n def convert_values(param):\n if isinstance(param, ExtensionArray) or is_list_like(param):\n ovalues = param\n else: # Assume its an object\n ovalues = [param] * len(self)\n return ovalues\n\n if isinstance(other, (ABCSeries, ABCIndex, ABCDataFrame)):\n # rely on pandas to unbox and dispatch to us\n return NotImplemented\n\n lvalues = self\n rvalues = convert_values(other)\n\n # If the operator is not defined for the underlying objects,\n # a TypeError should be raised\n res = [op(a, b) for (a, b) in zip(lvalues, rvalues)]\n\n def _maybe_convert(arr):\n if coerce_to_dtype:\n # https://github.com/pandas-dev/pandas/issues/22850\n # We catch all regular exceptions here, and fall back\n # to an ndarray.\n res = maybe_cast_pointwise_result(arr, self.dtype, same_dtype=False)\n if not isinstance(res, type(self)):\n # exception raised in _from_sequence; ensure we have ndarray\n res = np.asarray(arr)\n else:\n res = np.asarray(arr, dtype=result_dtype)\n return res\n\n if op.__name__ in {"divmod", "rdivmod"}:\n a, b = zip(*res)\n return _maybe_convert(a), _maybe_convert(b)\n\n return _maybe_convert(res)\n\n op_name = f"__{op.__name__}__"\n return set_function_name(_binop, op_name, cls)\n\n @classmethod\n def _create_arithmetic_method(cls, op):\n return cls._create_method(op)\n\n @classmethod\n def _create_comparison_method(cls, op):\n return cls._create_method(op, coerce_to_dtype=False, result_dtype=bool)\n
.venv\Lib\site-packages\pandas\core\arrays\base.py
base.py
Python
85,439
0.75
0.11299
0.103464
react-lib
580
2024-10-21T16:23:48.132247
MIT
false
12edbcae850de8ee50cc276724f78a36
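The base.py record above documents the `_accumulate` and `_reduce` hooks that back cumulative and reducing operations on ExtensionArrays. As a quick illustration, the following minimal sketch exercises those paths through pandas' public nullable Int64 array; it assumes only a recent pandas installation and mirrors the doctests embedded in the record rather than introducing any new API.

import pandas as pd

arr = pd.array([1, 2, None, 4], dtype="Int64")

# Series reductions/accumulations dispatch to the array's _reduce/_accumulate.
s = pd.Series(arr)
print(s.sum())              # 7  (skipna=True by default)
print(s.cumsum().tolist())  # [1, 3, <NA>, 7] -- the NA stays NA, later values keep accumulating

# The private hooks can also be called directly, as in the record's doctests.
print(pd.array([1, 2, 3])._reduce("min"))              # 1
print(pd.array([1, 2, 3])._accumulate(name="cumsum"))  # IntegerArray: [1, 3, 6], dtype Int64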
from __future__ import annotations\n\nimport numbers\nfrom typing import (\n TYPE_CHECKING,\n ClassVar,\n cast,\n)\n\nimport numpy as np\n\nfrom pandas._libs import (\n lib,\n missing as libmissing,\n)\n\nfrom pandas.core.dtypes.common import is_list_like\nfrom pandas.core.dtypes.dtypes import register_extension_dtype\nfrom pandas.core.dtypes.missing import isna\n\nfrom pandas.core import ops\nfrom pandas.core.array_algos import masked_accumulations\nfrom pandas.core.arrays.masked import (\n BaseMaskedArray,\n BaseMaskedDtype,\n)\n\nif TYPE_CHECKING:\n import pyarrow\n\n from pandas._typing import (\n Dtype,\n DtypeObj,\n Self,\n npt,\n type_t,\n )\n\n\n@register_extension_dtype\nclass BooleanDtype(BaseMaskedDtype):\n """\n Extension dtype for boolean data.\n\n .. warning::\n\n BooleanDtype is considered experimental. The implementation and\n parts of the API may change without warning.\n\n Attributes\n ----------\n None\n\n Methods\n -------\n None\n\n Examples\n --------\n >>> pd.BooleanDtype()\n BooleanDtype\n """\n\n name: ClassVar[str] = "boolean"\n\n # https://github.com/python/mypy/issues/4125\n # error: Signature of "type" incompatible with supertype "BaseMaskedDtype"\n @property\n def type(self) -> type: # type: ignore[override]\n return np.bool_\n\n @property\n def kind(self) -> str:\n return "b"\n\n @property\n def numpy_dtype(self) -> np.dtype:\n return np.dtype("bool")\n\n @classmethod\n def construct_array_type(cls) -> type_t[BooleanArray]:\n """\n Return the array type associated with this dtype.\n\n Returns\n -------\n type\n """\n return BooleanArray\n\n def __repr__(self) -> str:\n return "BooleanDtype"\n\n @property\n def _is_boolean(self) -> bool:\n return True\n\n @property\n def _is_numeric(self) -> bool:\n return True\n\n def __from_arrow__(\n self, array: pyarrow.Array | pyarrow.ChunkedArray\n ) -> BooleanArray:\n """\n Construct BooleanArray from pyarrow Array/ChunkedArray.\n """\n import pyarrow\n\n if array.type != pyarrow.bool_() and not pyarrow.types.is_null(array.type):\n raise TypeError(f"Expected array of boolean type, got {array.type} instead")\n\n if isinstance(array, pyarrow.Array):\n chunks = [array]\n length = len(array)\n else:\n # pyarrow.ChunkedArray\n chunks = array.chunks\n length = array.length()\n\n if pyarrow.types.is_null(array.type):\n mask = np.ones(length, dtype=bool)\n # No need to init data, since all null\n data = np.empty(length, dtype=bool)\n return BooleanArray(data, mask)\n\n results = []\n for arr in chunks:\n buflist = arr.buffers()\n data = pyarrow.BooleanArray.from_buffers(\n arr.type, len(arr), [None, buflist[1]], offset=arr.offset\n ).to_numpy(zero_copy_only=False)\n if arr.null_count != 0:\n mask = pyarrow.BooleanArray.from_buffers(\n arr.type, len(arr), [None, buflist[0]], offset=arr.offset\n ).to_numpy(zero_copy_only=False)\n mask = ~mask\n else:\n mask = np.zeros(len(arr), dtype=bool)\n\n bool_arr = BooleanArray(data, mask)\n results.append(bool_arr)\n\n if not results:\n return BooleanArray(\n np.array([], dtype=np.bool_), np.array([], dtype=np.bool_)\n )\n else:\n return BooleanArray._concat_same_type(results)\n\n\ndef coerce_to_array(\n values, mask=None, copy: bool = False\n) -> tuple[np.ndarray, np.ndarray]:\n """\n Coerce the input values array to numpy arrays with a mask.\n\n Parameters\n ----------\n values : 1D list-like\n mask : bool 1D array, optional\n copy : bool, default False\n if True, copy the input\n\n Returns\n -------\n tuple of (values, mask)\n """\n if isinstance(values, BooleanArray):\n if mask is not 
None:\n raise ValueError("cannot pass mask for BooleanArray input")\n values, mask = values._data, values._mask\n if copy:\n values = values.copy()\n mask = mask.copy()\n return values, mask\n\n mask_values = None\n if isinstance(values, np.ndarray) and values.dtype == np.bool_:\n if copy:\n values = values.copy()\n elif isinstance(values, np.ndarray) and values.dtype.kind in "iufcb":\n mask_values = isna(values)\n\n values_bool = np.zeros(len(values), dtype=bool)\n values_bool[~mask_values] = values[~mask_values].astype(bool)\n\n if not np.all(\n values_bool[~mask_values].astype(values.dtype) == values[~mask_values]\n ):\n raise TypeError("Need to pass bool-like values")\n\n values = values_bool\n else:\n values_object = np.asarray(values, dtype=object)\n\n inferred_dtype = lib.infer_dtype(values_object, skipna=True)\n integer_like = ("floating", "integer", "mixed-integer-float")\n if inferred_dtype not in ("boolean", "empty") + integer_like:\n raise TypeError("Need to pass bool-like values")\n\n # mypy does not narrow the type of mask_values to npt.NDArray[np.bool_]\n # within this branch, it assumes it can also be None\n mask_values = cast("npt.NDArray[np.bool_]", isna(values_object))\n values = np.zeros(len(values), dtype=bool)\n values[~mask_values] = values_object[~mask_values].astype(bool)\n\n # if the values were integer-like, validate it were actually 0/1's\n if (inferred_dtype in integer_like) and not (\n np.all(\n values[~mask_values].astype(float)\n == values_object[~mask_values].astype(float)\n )\n ):\n raise TypeError("Need to pass bool-like values")\n\n if mask is None and mask_values is None:\n mask = np.zeros(values.shape, dtype=bool)\n elif mask is None:\n mask = mask_values\n else:\n if isinstance(mask, np.ndarray) and mask.dtype == np.bool_:\n if mask_values is not None:\n mask = mask | mask_values\n else:\n if copy:\n mask = mask.copy()\n else:\n mask = np.array(mask, dtype=bool)\n if mask_values is not None:\n mask = mask | mask_values\n\n if values.shape != mask.shape:\n raise ValueError("values.shape and mask.shape must match")\n\n return values, mask\n\n\nclass BooleanArray(BaseMaskedArray):\n """\n Array of boolean (True/False) data with missing values.\n\n This is a pandas Extension array for boolean data, under the hood\n represented by 2 numpy arrays: a boolean array with the data and\n a boolean array with the mask (True indicating missing).\n\n BooleanArray implements Kleene logic (sometimes called three-value\n logic) for logical operations. See :ref:`boolean.kleene` for more.\n\n To construct an BooleanArray from generic array-like input, use\n :func:`pandas.array` specifying ``dtype="boolean"`` (see examples\n below).\n\n .. warning::\n\n BooleanArray is considered experimental. 
The implementation and\n parts of the API may change without warning.\n\n Parameters\n ----------\n values : numpy.ndarray\n A 1-d boolean-dtype array with the data.\n mask : numpy.ndarray\n A 1-d boolean-dtype array indicating missing values (True\n indicates missing).\n copy : bool, default False\n Whether to copy the `values` and `mask` arrays.\n\n Attributes\n ----------\n None\n\n Methods\n -------\n None\n\n Returns\n -------\n BooleanArray\n\n Examples\n --------\n Create an BooleanArray with :func:`pandas.array`:\n\n >>> pd.array([True, False, None], dtype="boolean")\n <BooleanArray>\n [True, False, <NA>]\n Length: 3, dtype: boolean\n """\n\n # The value used to fill '_data' to avoid upcasting\n _internal_fill_value = False\n # Fill values used for any/all\n # Incompatible types in assignment (expression has type "bool", base class\n # "BaseMaskedArray" defined the type as "<typing special form>")\n _truthy_value = True # type: ignore[assignment]\n _falsey_value = False # type: ignore[assignment]\n _TRUE_VALUES = {"True", "TRUE", "true", "1", "1.0"}\n _FALSE_VALUES = {"False", "FALSE", "false", "0", "0.0"}\n\n @classmethod\n def _simple_new(cls, values: np.ndarray, mask: npt.NDArray[np.bool_]) -> Self:\n result = super()._simple_new(values, mask)\n result._dtype = BooleanDtype()\n return result\n\n def __init__(\n self, values: np.ndarray, mask: np.ndarray, copy: bool = False\n ) -> None:\n if not (isinstance(values, np.ndarray) and values.dtype == np.bool_):\n raise TypeError(\n "values should be boolean numpy array. Use "\n "the 'pd.array' function instead"\n )\n self._dtype = BooleanDtype()\n super().__init__(values, mask, copy=copy)\n\n @property\n def dtype(self) -> BooleanDtype:\n return self._dtype\n\n @classmethod\n def _from_sequence_of_strings(\n cls,\n strings: list[str],\n *,\n dtype: Dtype | None = None,\n copy: bool = False,\n true_values: list[str] | None = None,\n false_values: list[str] | None = None,\n ) -> BooleanArray:\n true_values_union = cls._TRUE_VALUES.union(true_values or [])\n false_values_union = cls._FALSE_VALUES.union(false_values or [])\n\n def map_string(s) -> bool:\n if s in true_values_union:\n return True\n elif s in false_values_union:\n return False\n else:\n raise ValueError(f"{s} cannot be cast to bool")\n\n scalars = np.array(strings, dtype=object)\n mask = isna(scalars)\n scalars[~mask] = list(map(map_string, scalars[~mask]))\n return cls._from_sequence(scalars, dtype=dtype, copy=copy)\n\n _HANDLED_TYPES = (np.ndarray, numbers.Number, bool, np.bool_)\n\n @classmethod\n def _coerce_to_array(\n cls, value, *, dtype: DtypeObj, copy: bool = False\n ) -> tuple[np.ndarray, np.ndarray]:\n if dtype:\n assert dtype == "boolean"\n return coerce_to_array(value, copy=copy)\n\n def _logical_method(self, other, op):\n assert op.__name__ in {"or_", "ror_", "and_", "rand_", "xor", "rxor"}\n other_is_scalar = lib.is_scalar(other)\n mask = None\n\n if isinstance(other, BooleanArray):\n other, mask = other._data, other._mask\n elif is_list_like(other):\n other = np.asarray(other, dtype="bool")\n if other.ndim > 1:\n raise NotImplementedError("can only perform ops with 1-d structures")\n other, mask = coerce_to_array(other, copy=False)\n elif isinstance(other, np.bool_):\n other = other.item()\n\n if other_is_scalar and other is not libmissing.NA and not lib.is_bool(other):\n raise TypeError(\n "'other' should be pandas.NA or a bool. 
"\n f"Got {type(other).__name__} instead."\n )\n\n if not other_is_scalar and len(self) != len(other):\n raise ValueError("Lengths must match")\n\n if op.__name__ in {"or_", "ror_"}:\n result, mask = ops.kleene_or(self._data, other, self._mask, mask)\n elif op.__name__ in {"and_", "rand_"}:\n result, mask = ops.kleene_and(self._data, other, self._mask, mask)\n else:\n # i.e. xor, rxor\n result, mask = ops.kleene_xor(self._data, other, self._mask, mask)\n\n # i.e. BooleanArray\n return self._maybe_mask_result(result, mask)\n\n def _accumulate(\n self, name: str, *, skipna: bool = True, **kwargs\n ) -> BaseMaskedArray:\n data = self._data\n mask = self._mask\n if name in ("cummin", "cummax"):\n op = getattr(masked_accumulations, name)\n data, mask = op(data, mask, skipna=skipna, **kwargs)\n return self._simple_new(data, mask)\n else:\n from pandas.core.arrays import IntegerArray\n\n return IntegerArray(data.astype(int), mask)._accumulate(\n name, skipna=skipna, **kwargs\n )\n
.venv\Lib\site-packages\pandas\core\arrays\boolean.py
boolean.py
Python
12,440
0.95
0.144963
0.042042
awesome-app
273
2024-04-12T08:26:45.453970
BSD-3-Clause
false
8dda8be4f9b9e039a066417bcf313176
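The boolean.py record above describes BooleanArray's Kleene (three-valued) logic for the logical operators routed through `_logical_method`. The short sketch below, assuming only a recent pandas with the nullable "boolean" dtype, shows how NA interacts with each operator: True | NA stays True, False & NA stays False, and NA propagates otherwise.

import pandas as pd

a = pd.array([True, False, None], dtype="boolean")
b = pd.array([True, True, True], dtype="boolean")

# Kleene logic, as implemented by BooleanArray._logical_method via kleene_or/kleene_and/kleene_xor.
print(a | b)  # [True, True, True]
print(a & b)  # [True, False, <NA>]
print(a ^ b)  # [False, True, <NA>]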
from __future__ import annotations\n\nfrom datetime import (\n datetime,\n timedelta,\n)\nfrom functools import wraps\nimport operator\nfrom typing import (\n TYPE_CHECKING,\n Any,\n Callable,\n Literal,\n Union,\n cast,\n final,\n overload,\n)\nimport warnings\n\nimport numpy as np\n\nfrom pandas._config import using_string_dtype\n\nfrom pandas._libs import (\n algos,\n lib,\n)\nfrom pandas._libs.arrays import NDArrayBacked\nfrom pandas._libs.tslibs import (\n BaseOffset,\n IncompatibleFrequency,\n NaT,\n NaTType,\n Period,\n Resolution,\n Tick,\n Timedelta,\n Timestamp,\n add_overflowsafe,\n astype_overflowsafe,\n get_unit_from_dtype,\n iNaT,\n ints_to_pydatetime,\n ints_to_pytimedelta,\n periods_per_day,\n to_offset,\n)\nfrom pandas._libs.tslibs.fields import (\n RoundTo,\n round_nsint64,\n)\nfrom pandas._libs.tslibs.np_datetime import compare_mismatched_resolutions\nfrom pandas._libs.tslibs.timedeltas import get_unit_for_round\nfrom pandas._libs.tslibs.timestamps import integer_op_not_supported\nfrom pandas._typing import (\n ArrayLike,\n AxisInt,\n DatetimeLikeScalar,\n Dtype,\n DtypeObj,\n F,\n InterpolateOptions,\n NpDtype,\n PositionalIndexer2D,\n PositionalIndexerTuple,\n ScalarIndexer,\n Self,\n SequenceIndexer,\n TimeAmbiguous,\n TimeNonexistent,\n npt,\n)\nfrom pandas.compat.numpy import function as nv\nfrom pandas.errors import (\n AbstractMethodError,\n InvalidComparison,\n PerformanceWarning,\n)\nfrom pandas.util._decorators import (\n Appender,\n Substitution,\n cache_readonly,\n)\nfrom pandas.util._exceptions import find_stack_level\n\nfrom pandas.core.dtypes.cast import construct_1d_object_array_from_listlike\nfrom pandas.core.dtypes.common import (\n is_all_strings,\n is_integer_dtype,\n is_list_like,\n is_object_dtype,\n is_string_dtype,\n pandas_dtype,\n)\nfrom pandas.core.dtypes.dtypes import (\n ArrowDtype,\n CategoricalDtype,\n DatetimeTZDtype,\n ExtensionDtype,\n PeriodDtype,\n)\nfrom pandas.core.dtypes.generic import (\n ABCCategorical,\n ABCMultiIndex,\n)\nfrom pandas.core.dtypes.missing import (\n is_valid_na_for_dtype,\n isna,\n)\n\nfrom pandas.core import (\n algorithms,\n missing,\n nanops,\n ops,\n)\nfrom pandas.core.algorithms import (\n isin,\n map_array,\n unique1d,\n)\nfrom pandas.core.array_algos import datetimelike_accumulations\nfrom pandas.core.arraylike import OpsMixin\nfrom pandas.core.arrays._mixins import (\n NDArrayBackedExtensionArray,\n ravel_compat,\n)\nfrom pandas.core.arrays.arrow.array import ArrowExtensionArray\nfrom pandas.core.arrays.base import ExtensionArray\nfrom pandas.core.arrays.integer import IntegerArray\nimport pandas.core.common as com\nfrom pandas.core.construction import (\n array as pd_array,\n ensure_wrapped_if_datetimelike,\n extract_array,\n)\nfrom pandas.core.indexers import (\n check_array_indexer,\n check_setitem_lengths,\n)\nfrom pandas.core.ops.common import unpack_zerodim_and_defer\nfrom pandas.core.ops.invalid import (\n invalid_comparison,\n make_invalid_op,\n)\n\nfrom pandas.tseries import frequencies\n\nif TYPE_CHECKING:\n from collections.abc import (\n Iterator,\n Sequence,\n )\n\n from pandas import Index\n from pandas.core.arrays import (\n DatetimeArray,\n PeriodArray,\n TimedeltaArray,\n )\n\nDTScalarOrNaT = Union[DatetimeLikeScalar, NaTType]\n\n\ndef _make_unpacked_invalid_op(op_name: str):\n op = make_invalid_op(op_name)\n return unpack_zerodim_and_defer(op_name)(op)\n\n\ndef _period_dispatch(meth: F) -> F:\n """\n For PeriodArray methods, dispatch to DatetimeArray and re-wrap the results\n in 
PeriodArray. We cannot use ._ndarray directly for the affected\n methods because the i8 data has different semantics on NaT values.\n """\n\n @wraps(meth)\n def new_meth(self, *args, **kwargs):\n if not isinstance(self.dtype, PeriodDtype):\n return meth(self, *args, **kwargs)\n\n arr = self.view("M8[ns]")\n result = meth(arr, *args, **kwargs)\n if result is NaT:\n return NaT\n elif isinstance(result, Timestamp):\n return self._box_func(result._value)\n\n res_i8 = result.view("i8")\n return self._from_backing_data(res_i8)\n\n return cast(F, new_meth)\n\n\n# error: Definition of "_concat_same_type" in base class "NDArrayBacked" is\n# incompatible with definition in base class "ExtensionArray"\nclass DatetimeLikeArrayMixin( # type: ignore[misc]\n OpsMixin, NDArrayBackedExtensionArray\n):\n """\n Shared Base/Mixin class for DatetimeArray, TimedeltaArray, PeriodArray\n\n Assumes that __new__/__init__ defines:\n _ndarray\n\n and that inheriting subclass implements:\n freq\n """\n\n # _infer_matches -> which infer_dtype strings are close enough to our own\n _infer_matches: tuple[str, ...]\n _is_recognized_dtype: Callable[[DtypeObj], bool]\n _recognized_scalars: tuple[type, ...]\n _ndarray: np.ndarray\n freq: BaseOffset | None\n\n @cache_readonly\n def _can_hold_na(self) -> bool:\n return True\n\n def __init__(\n self, data, dtype: Dtype | None = None, freq=None, copy: bool = False\n ) -> None:\n raise AbstractMethodError(self)\n\n @property\n def _scalar_type(self) -> type[DatetimeLikeScalar]:\n """\n The scalar associated with this datelike\n\n * PeriodArray : Period\n * DatetimeArray : Timestamp\n * TimedeltaArray : Timedelta\n """\n raise AbstractMethodError(self)\n\n def _scalar_from_string(self, value: str) -> DTScalarOrNaT:\n """\n Construct a scalar type from a string.\n\n Parameters\n ----------\n value : str\n\n Returns\n -------\n Period, Timestamp, or Timedelta, or NaT\n Whatever the type of ``self._scalar_type`` is.\n\n Notes\n -----\n This should call ``self._check_compatible_with`` before\n unboxing the result.\n """\n raise AbstractMethodError(self)\n\n def _unbox_scalar(\n self, value: DTScalarOrNaT\n ) -> np.int64 | np.datetime64 | np.timedelta64:\n """\n Unbox the integer value of a scalar `value`.\n\n Parameters\n ----------\n value : Period, Timestamp, Timedelta, or NaT\n Depending on subclass.\n\n Returns\n -------\n int\n\n Examples\n --------\n >>> arr = pd.array(np.array(['1970-01-01'], 'datetime64[ns]'))\n >>> arr._unbox_scalar(arr[0])\n numpy.datetime64('1970-01-01T00:00:00.000000000')\n """\n raise AbstractMethodError(self)\n\n def _check_compatible_with(self, other: DTScalarOrNaT) -> None:\n """\n Verify that `self` and `other` are compatible.\n\n * DatetimeArray verifies that the timezones (if any) match\n * PeriodArray verifies that the freq matches\n * Timedelta has no verification\n\n In each case, NaT is considered compatible.\n\n Parameters\n ----------\n other\n\n Raises\n ------\n Exception\n """\n raise AbstractMethodError(self)\n\n # ------------------------------------------------------------------\n\n def _box_func(self, x):\n """\n box function to get object from internal representation\n """\n raise AbstractMethodError(self)\n\n def _box_values(self, values) -> np.ndarray:\n """\n apply box func to passed values\n """\n return lib.map_infer(values, self._box_func, convert=False)\n\n def __iter__(self) -> Iterator:\n if self.ndim > 1:\n return (self[n] for n in range(len(self)))\n else:\n return (self._box_func(v) for v in self.asi8)\n\n @property\n def 
asi8(self) -> npt.NDArray[np.int64]:\n """\n Integer representation of the values.\n\n Returns\n -------\n ndarray\n An ndarray with int64 dtype.\n """\n # do not cache or you'll create a memory leak\n return self._ndarray.view("i8")\n\n # ----------------------------------------------------------------\n # Rendering Methods\n\n def _format_native_types(\n self, *, na_rep: str | float = "NaT", date_format=None\n ) -> npt.NDArray[np.object_]:\n """\n Helper method for astype when converting to strings.\n\n Returns\n -------\n ndarray[str]\n """\n raise AbstractMethodError(self)\n\n def _formatter(self, boxed: bool = False):\n # TODO: Remove Datetime & DatetimeTZ formatters.\n return "'{}'".format\n\n # ----------------------------------------------------------------\n # Array-Like / EA-Interface Methods\n\n def __array__(\n self, dtype: NpDtype | None = None, copy: bool | None = None\n ) -> np.ndarray:\n # used for Timedelta/DatetimeArray, overwritten by PeriodArray\n if is_object_dtype(dtype):\n if copy is False:\n warnings.warn(\n "Starting with NumPy 2.0, the behavior of the 'copy' keyword has "\n "changed and passing 'copy=False' raises an error when returning "\n "a zero-copy NumPy array is not possible. pandas will follow this "\n "behavior starting with pandas 3.0.\nThis conversion to NumPy "\n "requires a copy, but 'copy=False' was passed. Consider using "\n "'np.asarray(..)' instead.",\n FutureWarning,\n stacklevel=find_stack_level(),\n )\n\n return np.array(list(self), dtype=object)\n\n if copy is True:\n return np.array(self._ndarray, dtype=dtype)\n return self._ndarray\n\n @overload\n def __getitem__(self, item: ScalarIndexer) -> DTScalarOrNaT:\n ...\n\n @overload\n def __getitem__(\n self,\n item: SequenceIndexer | PositionalIndexerTuple,\n ) -> Self:\n ...\n\n def __getitem__(self, key: PositionalIndexer2D) -> Self | DTScalarOrNaT:\n """\n This getitem defers to the underlying array, which by-definition can\n only handle list-likes, slices, and integer scalars\n """\n # Use cast as we know we will get back a DatetimeLikeArray or DTScalar,\n # but skip evaluating the Union at runtime for performance\n # (see https://github.com/pandas-dev/pandas/pull/44624)\n result = cast("Union[Self, DTScalarOrNaT]", super().__getitem__(key))\n if lib.is_scalar(result):\n return result\n else:\n # At this point we know the result is an array.\n result = cast(Self, result)\n result._freq = self._get_getitem_freq(key)\n return result\n\n def _get_getitem_freq(self, key) -> BaseOffset | None:\n """\n Find the `freq` attribute to assign to the result of a __getitem__ lookup.\n """\n is_period = isinstance(self.dtype, PeriodDtype)\n if is_period:\n freq = self.freq\n elif self.ndim != 1:\n freq = None\n else:\n key = check_array_indexer(self, key) # maybe ndarray[bool] -> slice\n freq = None\n if isinstance(key, slice):\n if self.freq is not None and key.step is not None:\n freq = key.step * self.freq\n else:\n freq = self.freq\n elif key is Ellipsis:\n # GH#21282 indexing with Ellipsis is similar to a full slice,\n # should preserve `freq` attribute\n freq = self.freq\n elif com.is_bool_indexer(key):\n new_key = lib.maybe_booleans_to_slice(key.view(np.uint8))\n if isinstance(new_key, slice):\n return self._get_getitem_freq(new_key)\n return freq\n\n # error: Argument 1 of "__setitem__" is incompatible with supertype\n # "ExtensionArray"; supertype defines the argument type as "Union[int,\n # ndarray]"\n def __setitem__(\n self,\n key: int | Sequence[int] | Sequence[bool] | slice,\n value: NaTType 
| Any | Sequence[Any],\n ) -> None:\n # I'm fudging the types a bit here. "Any" above really depends\n # on type(self). For PeriodArray, it's Period (or stuff coercible\n # to a period in from_sequence). For DatetimeArray, it's Timestamp...\n # I don't know if mypy can do that, possibly with Generics.\n # https://mypy.readthedocs.io/en/latest/generics.html\n\n no_op = check_setitem_lengths(key, value, self)\n\n # Calling super() before the no_op short-circuit means that we raise\n # on invalid 'value' even if this is a no-op, e.g. wrong-dtype empty array.\n super().__setitem__(key, value)\n\n if no_op:\n return\n\n self._maybe_clear_freq()\n\n def _maybe_clear_freq(self) -> None:\n # inplace operations like __setitem__ may invalidate the freq of\n # DatetimeArray and TimedeltaArray\n pass\n\n def astype(self, dtype, copy: bool = True):\n # Some notes on cases we don't have to handle here in the base class:\n # 1. PeriodArray.astype handles period -> period\n # 2. DatetimeArray.astype handles conversion between tz.\n # 3. DatetimeArray.astype handles datetime -> period\n dtype = pandas_dtype(dtype)\n\n if dtype == object:\n if self.dtype.kind == "M":\n self = cast("DatetimeArray", self)\n # *much* faster than self._box_values\n # for e.g. test_get_loc_tuple_monotonic_above_size_cutoff\n i8data = self.asi8\n converted = ints_to_pydatetime(\n i8data,\n tz=self.tz,\n box="timestamp",\n reso=self._creso,\n )\n return converted\n\n elif self.dtype.kind == "m":\n return ints_to_pytimedelta(self._ndarray, box=True)\n\n return self._box_values(self.asi8.ravel()).reshape(self.shape)\n\n elif is_string_dtype(dtype):\n if isinstance(dtype, ExtensionDtype):\n arr_object = self._format_native_types(na_rep=dtype.na_value) # type: ignore[arg-type]\n cls = dtype.construct_array_type()\n return cls._from_sequence(arr_object, dtype=dtype, copy=False)\n else:\n return self._format_native_types()\n\n elif isinstance(dtype, ExtensionDtype):\n return super().astype(dtype, copy=copy)\n elif dtype.kind in "iu":\n # we deliberately ignore int32 vs. int64 here.\n # See https://github.com/pandas-dev/pandas/issues/24381 for more.\n values = self.asi8\n if dtype != np.int64:\n raise TypeError(\n f"Converting from {self.dtype} to {dtype} is not supported. "\n "Do obj.astype('int64').astype(dtype) instead"\n )\n\n if copy:\n values = values.copy()\n return values\n elif (dtype.kind in "mM" and self.dtype != dtype) or dtype.kind == "f":\n # disallow conversion between datetime/timedelta,\n # and conversions for any datetimelike to float\n msg = f"Cannot cast {type(self).__name__} to dtype {dtype}"\n raise TypeError(msg)\n else:\n return np.asarray(self, dtype=dtype)\n\n @overload\n def view(self) -> Self:\n ...\n\n @overload\n def view(self, dtype: Literal["M8[ns]"]) -> DatetimeArray:\n ...\n\n @overload\n def view(self, dtype: Literal["m8[ns]"]) -> TimedeltaArray:\n ...\n\n @overload\n def view(self, dtype: Dtype | None = ...) 
-> ArrayLike:\n ...\n\n # pylint: disable-next=useless-parent-delegation\n def view(self, dtype: Dtype | None = None) -> ArrayLike:\n # we need to explicitly call super() method as long as the `@overload`s\n # are present in this file.\n return super().view(dtype)\n\n # ------------------------------------------------------------------\n # Validation Methods\n # TODO: try to de-duplicate these, ensure identical behavior\n\n def _validate_comparison_value(self, other):\n if isinstance(other, str):\n try:\n # GH#18435 strings get a pass from tzawareness compat\n other = self._scalar_from_string(other)\n except (ValueError, IncompatibleFrequency):\n # failed to parse as Timestamp/Timedelta/Period\n raise InvalidComparison(other)\n\n if isinstance(other, self._recognized_scalars) or other is NaT:\n other = self._scalar_type(other)\n try:\n self._check_compatible_with(other)\n except (TypeError, IncompatibleFrequency) as err:\n # e.g. tzawareness mismatch\n raise InvalidComparison(other) from err\n\n elif not is_list_like(other):\n raise InvalidComparison(other)\n\n elif len(other) != len(self):\n raise ValueError("Lengths must match")\n\n else:\n try:\n other = self._validate_listlike(other, allow_object=True)\n self._check_compatible_with(other)\n except (TypeError, IncompatibleFrequency) as err:\n if is_object_dtype(getattr(other, "dtype", None)):\n # We will have to operate element-wise\n pass\n else:\n raise InvalidComparison(other) from err\n\n return other\n\n def _validate_scalar(\n self,\n value,\n *,\n allow_listlike: bool = False,\n unbox: bool = True,\n ):\n """\n Validate that the input value can be cast to our scalar_type.\n\n Parameters\n ----------\n value : object\n allow_listlike: bool, default False\n When raising an exception, whether the message should say\n listlike inputs are allowed.\n unbox : bool, default True\n Whether to unbox the result before returning. 
Note: unbox=False\n skips the setitem compatibility check.\n\n Returns\n -------\n self._scalar_type or NaT\n """\n if isinstance(value, self._scalar_type):\n pass\n\n elif isinstance(value, str):\n # NB: Careful about tzawareness\n try:\n value = self._scalar_from_string(value)\n except ValueError as err:\n msg = self._validation_error_message(value, allow_listlike)\n raise TypeError(msg) from err\n\n elif is_valid_na_for_dtype(value, self.dtype):\n # GH#18295\n value = NaT\n\n elif isna(value):\n # if we are dt64tz and value is dt64("NaT"), dont cast to NaT,\n # or else we'll fail to raise in _unbox_scalar\n msg = self._validation_error_message(value, allow_listlike)\n raise TypeError(msg)\n\n elif isinstance(value, self._recognized_scalars):\n # error: Argument 1 to "Timestamp" has incompatible type "object"; expected\n # "integer[Any] | float | str | date | datetime | datetime64"\n value = self._scalar_type(value) # type: ignore[arg-type]\n\n else:\n msg = self._validation_error_message(value, allow_listlike)\n raise TypeError(msg)\n\n if not unbox:\n # NB: In general NDArrayBackedExtensionArray will unbox here;\n # this option exists to prevent a performance hit in\n # TimedeltaIndex.get_loc\n return value\n return self._unbox_scalar(value)\n\n def _validation_error_message(self, value, allow_listlike: bool = False) -> str:\n """\n Construct an exception message on validation error.\n\n Some methods allow only scalar inputs, while others allow either scalar\n or listlike.\n\n Parameters\n ----------\n allow_listlike: bool, default False\n\n Returns\n -------\n str\n """\n if hasattr(value, "dtype") and getattr(value, "ndim", 0) > 0:\n msg_got = f"{value.dtype} array"\n else:\n msg_got = f"'{type(value).__name__}'"\n if allow_listlike:\n msg = (\n f"value should be a '{self._scalar_type.__name__}', 'NaT', "\n f"or array of those. Got {msg_got} instead."\n )\n else:\n msg = (\n f"value should be a '{self._scalar_type.__name__}' or 'NaT'. "\n f"Got {msg_got} instead."\n )\n return msg\n\n def _validate_listlike(self, value, allow_object: bool = False):\n if isinstance(value, type(self)):\n if self.dtype.kind in "mM" and not allow_object:\n # error: "DatetimeLikeArrayMixin" has no attribute "as_unit"\n value = value.as_unit(self.unit, round_ok=False) # type: ignore[attr-defined]\n return value\n\n if isinstance(value, list) and len(value) == 0:\n # We treat empty list as our own dtype.\n return type(self)._from_sequence([], dtype=self.dtype)\n\n if hasattr(value, "dtype") and value.dtype == object:\n # `array` below won't do inference if value is an Index or Series.\n # so do so here. in the Index case, inferred_type may be cached.\n if lib.infer_dtype(value) in self._infer_matches:\n try:\n value = type(self)._from_sequence(value)\n except (ValueError, TypeError):\n if allow_object:\n return value\n msg = self._validation_error_message(value, True)\n raise TypeError(msg)\n\n # Do type inference if necessary up front (after unpacking\n # NumpyExtensionArray)\n # e.g. we passed PeriodIndex.values and got an ndarray of Periods\n value = extract_array(value, extract_numpy=True)\n value = pd_array(value)\n value = extract_array(value, extract_numpy=True)\n\n if is_all_strings(value):\n # We got a StringArray\n try:\n # TODO: Could use from_sequence_of_strings if implemented\n # Note: passing dtype is necessary for PeriodArray tests\n value = type(self)._from_sequence(value, dtype=self.dtype)\n except ValueError:\n pass\n\n if isinstance(value.dtype, CategoricalDtype):\n # e.g. 
we have a Categorical holding self.dtype\n if value.categories.dtype == self.dtype:\n # TODO: do we need equal dtype or just comparable?\n value = value._internal_get_values()\n value = extract_array(value, extract_numpy=True)\n\n if allow_object and is_object_dtype(value.dtype):\n pass\n\n elif not type(self)._is_recognized_dtype(value.dtype):\n msg = self._validation_error_message(value, True)\n raise TypeError(msg)\n\n if self.dtype.kind in "mM" and not allow_object:\n # error: "DatetimeLikeArrayMixin" has no attribute "as_unit"\n value = value.as_unit(self.unit, round_ok=False) # type: ignore[attr-defined]\n return value\n\n def _validate_setitem_value(self, value):\n if is_list_like(value):\n value = self._validate_listlike(value)\n else:\n return self._validate_scalar(value, allow_listlike=True)\n\n return self._unbox(value)\n\n @final\n def _unbox(self, other) -> np.int64 | np.datetime64 | np.timedelta64 | np.ndarray:\n """\n Unbox either a scalar with _unbox_scalar or an instance of our own type.\n """\n if lib.is_scalar(other):\n other = self._unbox_scalar(other)\n else:\n # same type as self\n self._check_compatible_with(other)\n other = other._ndarray\n return other\n\n # ------------------------------------------------------------------\n # Additional array methods\n # These are not part of the EA API, but we implement them because\n # pandas assumes they're there.\n\n @ravel_compat\n def map(self, mapper, na_action=None):\n from pandas import Index\n\n result = map_array(self, mapper, na_action=na_action)\n result = Index(result)\n\n if isinstance(result, ABCMultiIndex):\n return result.to_numpy()\n else:\n return result.array\n\n def isin(self, values: ArrayLike) -> npt.NDArray[np.bool_]:\n """\n Compute boolean array of whether each value is found in the\n passed set of values.\n\n Parameters\n ----------\n values : np.ndarray or ExtensionArray\n\n Returns\n -------\n ndarray[bool]\n """\n if values.dtype.kind in "fiuc":\n # TODO: de-duplicate with equals, validate_comparison_value\n return np.zeros(self.shape, dtype=bool)\n\n values = ensure_wrapped_if_datetimelike(values)\n\n if not isinstance(values, type(self)):\n inferable = [\n "timedelta",\n "timedelta64",\n "datetime",\n "datetime64",\n "date",\n "period",\n ]\n if values.dtype == object:\n values = lib.maybe_convert_objects(\n values, # type: ignore[arg-type]\n convert_non_numeric=True,\n dtype_if_all_nat=self.dtype,\n )\n if values.dtype != object:\n return self.isin(values)\n\n inferred = lib.infer_dtype(values, skipna=False)\n if inferred not in inferable:\n if inferred == "string":\n pass\n\n elif "mixed" in inferred:\n return isin(self.astype(object), values)\n else:\n return np.zeros(self.shape, dtype=bool)\n\n try:\n values = type(self)._from_sequence(values)\n except ValueError:\n return isin(self.astype(object), values)\n else:\n warnings.warn(\n # GH#53111\n f"The behavior of 'isin' with dtype={self.dtype} and "\n "castable values (e.g. strings) is deprecated. In a "\n "future version, these will not be considered matching "\n "by isin. 
Explicitly cast to the appropriate dtype before "\n "calling isin instead.",\n FutureWarning,\n stacklevel=find_stack_level(),\n )\n\n if self.dtype.kind in "mM":\n self = cast("DatetimeArray | TimedeltaArray", self)\n # error: Item "ExtensionArray" of "ExtensionArray | ndarray[Any, Any]"\n # has no attribute "as_unit"\n values = values.as_unit(self.unit) # type: ignore[union-attr]\n\n try:\n # error: Argument 1 to "_check_compatible_with" of "DatetimeLikeArrayMixin"\n # has incompatible type "ExtensionArray | ndarray[Any, Any]"; expected\n # "Period | Timestamp | Timedelta | NaTType"\n self._check_compatible_with(values) # type: ignore[arg-type]\n except (TypeError, ValueError):\n # Includes tzawareness mismatch and IncompatibleFrequencyError\n return np.zeros(self.shape, dtype=bool)\n\n # error: Item "ExtensionArray" of "ExtensionArray | ndarray[Any, Any]"\n # has no attribute "asi8"\n return isin(self.asi8, values.asi8) # type: ignore[union-attr]\n\n # ------------------------------------------------------------------\n # Null Handling\n\n def isna(self) -> npt.NDArray[np.bool_]:\n return self._isnan\n\n @property # NB: override with cache_readonly in immutable subclasses\n def _isnan(self) -> npt.NDArray[np.bool_]:\n """\n return if each value is nan\n """\n return self.asi8 == iNaT\n\n @property # NB: override with cache_readonly in immutable subclasses\n def _hasna(self) -> bool:\n """\n return if I have any nans; enables various perf speedups\n """\n return bool(self._isnan.any())\n\n def _maybe_mask_results(\n self, result: np.ndarray, fill_value=iNaT, convert=None\n ) -> np.ndarray:\n """\n Parameters\n ----------\n result : np.ndarray\n fill_value : object, default iNaT\n convert : str, dtype or None\n\n Returns\n -------\n result : ndarray with values replace by the fill_value\n\n mask the result if needed, convert to the provided dtype if its not\n None\n\n This is an internal routine.\n """\n if self._hasna:\n if convert:\n result = result.astype(convert)\n if fill_value is None:\n fill_value = np.nan\n np.putmask(result, self._isnan, fill_value)\n return result\n\n # ------------------------------------------------------------------\n # Frequency Properties/Methods\n\n @property\n def freqstr(self) -> str | None:\n """\n Return the frequency object as a string if it's set, otherwise None.\n\n Examples\n --------\n For DatetimeIndex:\n\n >>> idx = pd.DatetimeIndex(["1/1/2020 10:00:00+00:00"], freq="D")\n >>> idx.freqstr\n 'D'\n\n The frequency can be inferred if there are more than 2 points:\n\n >>> idx = pd.DatetimeIndex(["2018-01-01", "2018-01-03", "2018-01-05"],\n ... 
freq="infer")\n >>> idx.freqstr\n '2D'\n\n For PeriodIndex:\n\n >>> idx = pd.PeriodIndex(["2023-1", "2023-2", "2023-3"], freq="M")\n >>> idx.freqstr\n 'M'\n """\n if self.freq is None:\n return None\n return self.freq.freqstr\n\n @property # NB: override with cache_readonly in immutable subclasses\n def inferred_freq(self) -> str | None:\n """\n Tries to return a string representing a frequency generated by infer_freq.\n\n Returns None if it can't autodetect the frequency.\n\n Examples\n --------\n For DatetimeIndex:\n\n >>> idx = pd.DatetimeIndex(["2018-01-01", "2018-01-03", "2018-01-05"])\n >>> idx.inferred_freq\n '2D'\n\n For TimedeltaIndex:\n\n >>> tdelta_idx = pd.to_timedelta(["0 days", "10 days", "20 days"])\n >>> tdelta_idx\n TimedeltaIndex(['0 days', '10 days', '20 days'],\n dtype='timedelta64[ns]', freq=None)\n >>> tdelta_idx.inferred_freq\n '10D'\n """\n if self.ndim != 1:\n return None\n try:\n return frequencies.infer_freq(self)\n except ValueError:\n return None\n\n @property # NB: override with cache_readonly in immutable subclasses\n def _resolution_obj(self) -> Resolution | None:\n freqstr = self.freqstr\n if freqstr is None:\n return None\n try:\n return Resolution.get_reso_from_freqstr(freqstr)\n except KeyError:\n return None\n\n @property # NB: override with cache_readonly in immutable subclasses\n def resolution(self) -> str:\n """\n Returns day, hour, minute, second, millisecond or microsecond\n """\n # error: Item "None" of "Optional[Any]" has no attribute "attrname"\n return self._resolution_obj.attrname # type: ignore[union-attr]\n\n # monotonicity/uniqueness properties are called via frequencies.infer_freq,\n # see GH#23789\n\n @property\n def _is_monotonic_increasing(self) -> bool:\n return algos.is_monotonic(self.asi8, timelike=True)[0]\n\n @property\n def _is_monotonic_decreasing(self) -> bool:\n return algos.is_monotonic(self.asi8, timelike=True)[1]\n\n @property\n def _is_unique(self) -> bool:\n return len(unique1d(self.asi8.ravel("K"))) == self.size\n\n # ------------------------------------------------------------------\n # Arithmetic Methods\n\n def _cmp_method(self, other, op):\n if self.ndim > 1 and getattr(other, "shape", None) == self.shape:\n # TODO: handle 2D-like listlikes\n return op(self.ravel(), other.ravel()).reshape(self.shape)\n\n try:\n other = self._validate_comparison_value(other)\n except InvalidComparison:\n return invalid_comparison(self, other, op)\n\n dtype = getattr(other, "dtype", None)\n if is_object_dtype(dtype):\n # We have to use comp_method_OBJECT_ARRAY instead of numpy\n # comparison otherwise it would raise when comparing to None\n result = ops.comp_method_OBJECT_ARRAY(\n op, np.asarray(self.astype(object)), other\n )\n return result\n if other is NaT:\n if op is operator.ne:\n result = np.ones(self.shape, dtype=bool)\n else:\n result = np.zeros(self.shape, dtype=bool)\n return result\n\n if not isinstance(self.dtype, PeriodDtype):\n self = cast(TimelikeOps, self)\n if self._creso != other._creso:\n if not isinstance(other, type(self)):\n # i.e. 
Timedelta/Timestamp, cast to ndarray and let\n # compare_mismatched_resolutions handle broadcasting\n try:\n # GH#52080 see if we can losslessly cast to shared unit\n other = other.as_unit(self.unit, round_ok=False)\n except ValueError:\n other_arr = np.array(other.asm8)\n return compare_mismatched_resolutions(\n self._ndarray, other_arr, op\n )\n else:\n other_arr = other._ndarray\n return compare_mismatched_resolutions(self._ndarray, other_arr, op)\n\n other_vals = self._unbox(other)\n # GH#37462 comparison on i8 values is almost 2x faster than M8/m8\n result = op(self._ndarray.view("i8"), other_vals.view("i8"))\n\n o_mask = isna(other)\n mask = self._isnan | o_mask\n if mask.any():\n nat_result = op is operator.ne\n np.putmask(result, mask, nat_result)\n\n return result\n\n # pow is invalid for all three subclasses; TimedeltaArray will override\n # the multiplication and division ops\n __pow__ = _make_unpacked_invalid_op("__pow__")\n __rpow__ = _make_unpacked_invalid_op("__rpow__")\n __mul__ = _make_unpacked_invalid_op("__mul__")\n __rmul__ = _make_unpacked_invalid_op("__rmul__")\n __truediv__ = _make_unpacked_invalid_op("__truediv__")\n __rtruediv__ = _make_unpacked_invalid_op("__rtruediv__")\n __floordiv__ = _make_unpacked_invalid_op("__floordiv__")\n __rfloordiv__ = _make_unpacked_invalid_op("__rfloordiv__")\n __mod__ = _make_unpacked_invalid_op("__mod__")\n __rmod__ = _make_unpacked_invalid_op("__rmod__")\n __divmod__ = _make_unpacked_invalid_op("__divmod__")\n __rdivmod__ = _make_unpacked_invalid_op("__rdivmod__")\n\n @final\n def _get_i8_values_and_mask(\n self, other\n ) -> tuple[int | npt.NDArray[np.int64], None | npt.NDArray[np.bool_]]:\n """\n Get the int64 values and b_mask to pass to add_overflowsafe.\n """\n if isinstance(other, Period):\n i8values = other.ordinal\n mask = None\n elif isinstance(other, (Timestamp, Timedelta)):\n i8values = other._value\n mask = None\n else:\n # PeriodArray, DatetimeArray, TimedeltaArray\n mask = other._isnan\n i8values = other.asi8\n return i8values, mask\n\n @final\n def _get_arithmetic_result_freq(self, other) -> BaseOffset | None:\n """\n Check if we can preserve self.freq in addition or subtraction.\n """\n # Adding or subtracting a Timedelta/Timestamp scalar is freq-preserving\n # whenever self.freq is a Tick\n if isinstance(self.dtype, PeriodDtype):\n return self.freq\n elif not lib.is_scalar(other):\n return None\n elif isinstance(self.freq, Tick):\n # In these cases\n return self.freq\n return None\n\n @final\n def _add_datetimelike_scalar(self, other) -> DatetimeArray:\n if not lib.is_np_dtype(self.dtype, "m"):\n raise TypeError(\n f"cannot add {type(self).__name__} and {type(other).__name__}"\n )\n\n self = cast("TimedeltaArray", self)\n\n from pandas.core.arrays import DatetimeArray\n from pandas.core.arrays.datetimes import tz_to_dtype\n\n assert other is not NaT\n if isna(other):\n # i.e. 
np.datetime64("NaT")\n # In this case we specifically interpret NaT as a datetime, not\n # the timedelta interpretation we would get by returning self + NaT\n result = self._ndarray + NaT.to_datetime64().astype(f"M8[{self.unit}]")\n # Preserve our resolution\n return DatetimeArray._simple_new(result, dtype=result.dtype)\n\n other = Timestamp(other)\n self, other = self._ensure_matching_resos(other)\n self = cast("TimedeltaArray", self)\n\n other_i8, o_mask = self._get_i8_values_and_mask(other)\n result = add_overflowsafe(self.asi8, np.asarray(other_i8, dtype="i8"))\n res_values = result.view(f"M8[{self.unit}]")\n\n dtype = tz_to_dtype(tz=other.tz, unit=self.unit)\n res_values = result.view(f"M8[{self.unit}]")\n new_freq = self._get_arithmetic_result_freq(other)\n return DatetimeArray._simple_new(res_values, dtype=dtype, freq=new_freq)\n\n @final\n def _add_datetime_arraylike(self, other: DatetimeArray) -> DatetimeArray:\n if not lib.is_np_dtype(self.dtype, "m"):\n raise TypeError(\n f"cannot add {type(self).__name__} and {type(other).__name__}"\n )\n\n # defer to DatetimeArray.__add__\n return other + self\n\n @final\n def _sub_datetimelike_scalar(\n self, other: datetime | np.datetime64\n ) -> TimedeltaArray:\n if self.dtype.kind != "M":\n raise TypeError(f"cannot subtract a datelike from a {type(self).__name__}")\n\n self = cast("DatetimeArray", self)\n # subtract a datetime from myself, yielding a ndarray[timedelta64[ns]]\n\n if isna(other):\n # i.e. np.datetime64("NaT")\n return self - NaT\n\n ts = Timestamp(other)\n\n self, ts = self._ensure_matching_resos(ts)\n return self._sub_datetimelike(ts)\n\n @final\n def _sub_datetime_arraylike(self, other: DatetimeArray) -> TimedeltaArray:\n if self.dtype.kind != "M":\n raise TypeError(f"cannot subtract a datelike from a {type(self).__name__}")\n\n if len(self) != len(other):\n raise ValueError("cannot add indices of unequal length")\n\n self = cast("DatetimeArray", self)\n\n self, other = self._ensure_matching_resos(other)\n return self._sub_datetimelike(other)\n\n @final\n def _sub_datetimelike(self, other: Timestamp | DatetimeArray) -> TimedeltaArray:\n self = cast("DatetimeArray", self)\n\n from pandas.core.arrays import TimedeltaArray\n\n try:\n self._assert_tzawareness_compat(other)\n except TypeError as err:\n new_message = str(err).replace("compare", "subtract")\n raise type(err)(new_message) from err\n\n other_i8, o_mask = self._get_i8_values_and_mask(other)\n res_values = add_overflowsafe(self.asi8, np.asarray(-other_i8, dtype="i8"))\n res_m8 = res_values.view(f"timedelta64[{self.unit}]")\n\n new_freq = self._get_arithmetic_result_freq(other)\n new_freq = cast("Tick | None", new_freq)\n return TimedeltaArray._simple_new(res_m8, dtype=res_m8.dtype, freq=new_freq)\n\n @final\n def _add_period(self, other: Period) -> PeriodArray:\n if not lib.is_np_dtype(self.dtype, "m"):\n raise TypeError(f"cannot add Period to a {type(self).__name__}")\n\n # We will wrap in a PeriodArray and defer to the reversed operation\n from pandas.core.arrays.period import PeriodArray\n\n i8vals = np.broadcast_to(other.ordinal, self.shape)\n dtype = PeriodDtype(other.freq)\n parr = PeriodArray(i8vals, dtype=dtype)\n return parr + self\n\n def _add_offset(self, offset):\n raise AbstractMethodError(self)\n\n def _add_timedeltalike_scalar(self, other):\n """\n Add a delta of a timedeltalike\n\n Returns\n -------\n Same type as self\n """\n if isna(other):\n # i.e np.timedelta64("NaT")\n new_values = np.empty(self.shape, dtype="i8").view(self._ndarray.dtype)\n 
new_values.fill(iNaT)\n return type(self)._simple_new(new_values, dtype=self.dtype)\n\n # PeriodArray overrides, so we only get here with DTA/TDA\n self = cast("DatetimeArray | TimedeltaArray", self)\n other = Timedelta(other)\n self, other = self._ensure_matching_resos(other)\n return self._add_timedeltalike(other)\n\n def _add_timedelta_arraylike(self, other: TimedeltaArray):\n """\n Add a delta of a TimedeltaIndex\n\n Returns\n -------\n Same type as self\n """\n # overridden by PeriodArray\n\n if len(self) != len(other):\n raise ValueError("cannot add indices of unequal length")\n\n self = cast("DatetimeArray | TimedeltaArray", self)\n\n self, other = self._ensure_matching_resos(other)\n return self._add_timedeltalike(other)\n\n @final\n def _add_timedeltalike(self, other: Timedelta | TimedeltaArray):\n self = cast("DatetimeArray | TimedeltaArray", self)\n\n other_i8, o_mask = self._get_i8_values_and_mask(other)\n new_values = add_overflowsafe(self.asi8, np.asarray(other_i8, dtype="i8"))\n res_values = new_values.view(self._ndarray.dtype)\n\n new_freq = self._get_arithmetic_result_freq(other)\n\n # error: Argument "dtype" to "_simple_new" of "DatetimeArray" has\n # incompatible type "Union[dtype[datetime64], DatetimeTZDtype,\n # dtype[timedelta64]]"; expected "Union[dtype[datetime64], DatetimeTZDtype]"\n return type(self)._simple_new(\n res_values, dtype=self.dtype, freq=new_freq # type: ignore[arg-type]\n )\n\n @final\n def _add_nat(self):\n """\n Add pd.NaT to self\n """\n if isinstance(self.dtype, PeriodDtype):\n raise TypeError(\n f"Cannot add {type(self).__name__} and {type(NaT).__name__}"\n )\n self = cast("TimedeltaArray | DatetimeArray", self)\n\n # GH#19124 pd.NaT is treated like a timedelta for both timedelta\n # and datetime dtypes\n result = np.empty(self.shape, dtype=np.int64)\n result.fill(iNaT)\n result = result.view(self._ndarray.dtype) # preserve reso\n # error: Argument "dtype" to "_simple_new" of "DatetimeArray" has\n # incompatible type "Union[dtype[timedelta64], dtype[datetime64],\n # DatetimeTZDtype]"; expected "Union[dtype[datetime64], DatetimeTZDtype]"\n return type(self)._simple_new(\n result, dtype=self.dtype, freq=None # type: ignore[arg-type]\n )\n\n @final\n def _sub_nat(self):\n """\n Subtract pd.NaT from self\n """\n # GH#19124 Timedelta - datetime is not in general well-defined.\n # We make an exception for pd.NaT, which in this case quacks\n # like a timedelta.\n # For datetime64 dtypes by convention we treat NaT as a datetime, so\n # this subtraction returns a timedelta64 dtype.\n # For period dtype, timedelta64 is a close-enough return dtype.\n result = np.empty(self.shape, dtype=np.int64)\n result.fill(iNaT)\n if self.dtype.kind in "mM":\n # We can retain unit in dtype\n self = cast("DatetimeArray| TimedeltaArray", self)\n return result.view(f"timedelta64[{self.unit}]")\n else:\n return result.view("timedelta64[ns]")\n\n @final\n def _sub_periodlike(self, other: Period | PeriodArray) -> npt.NDArray[np.object_]:\n # If the operation is well-defined, we return an object-dtype ndarray\n # of DateOffsets. 
Null entries are filled with pd.NaT\n if not isinstance(self.dtype, PeriodDtype):\n raise TypeError(\n f"cannot subtract {type(other).__name__} from {type(self).__name__}"\n )\n\n self = cast("PeriodArray", self)\n self._check_compatible_with(other)\n\n other_i8, o_mask = self._get_i8_values_and_mask(other)\n new_i8_data = add_overflowsafe(self.asi8, np.asarray(-other_i8, dtype="i8"))\n new_data = np.array([self.freq.base * x for x in new_i8_data])\n\n if o_mask is None:\n # i.e. Period scalar\n mask = self._isnan\n else:\n # i.e. PeriodArray\n mask = self._isnan | o_mask\n new_data[mask] = NaT\n return new_data\n\n @final\n def _addsub_object_array(self, other: npt.NDArray[np.object_], op):\n """\n Add or subtract array-like of DateOffset objects\n\n Parameters\n ----------\n other : np.ndarray[object]\n op : {operator.add, operator.sub}\n\n Returns\n -------\n np.ndarray[object]\n Except in fastpath case with length 1 where we operate on the\n contained scalar.\n """\n assert op in [operator.add, operator.sub]\n if len(other) == 1 and self.ndim == 1:\n # Note: without this special case, we could annotate return type\n # as ndarray[object]\n # If both 1D then broadcasting is unambiguous\n return op(self, other[0])\n\n warnings.warn(\n "Adding/subtracting object-dtype array to "\n f"{type(self).__name__} not vectorized.",\n PerformanceWarning,\n stacklevel=find_stack_level(),\n )\n\n # Caller is responsible for broadcasting if necessary\n assert self.shape == other.shape, (self.shape, other.shape)\n\n res_values = op(self.astype("O"), np.asarray(other))\n return res_values\n\n def _accumulate(self, name: str, *, skipna: bool = True, **kwargs) -> Self:\n if name not in {"cummin", "cummax"}:\n raise TypeError(f"Accumulation {name} not supported for {type(self)}")\n\n op = getattr(datetimelike_accumulations, name)\n result = op(self.copy(), skipna=skipna, **kwargs)\n\n return type(self)._simple_new(result, dtype=self.dtype)\n\n @unpack_zerodim_and_defer("__add__")\n def __add__(self, other):\n other_dtype = getattr(other, "dtype", None)\n other = ensure_wrapped_if_datetimelike(other)\n\n # scalar others\n if other is NaT:\n result = self._add_nat()\n elif isinstance(other, (Tick, timedelta, np.timedelta64)):\n result = self._add_timedeltalike_scalar(other)\n elif isinstance(other, BaseOffset):\n # specifically _not_ a Tick\n result = self._add_offset(other)\n elif isinstance(other, (datetime, np.datetime64)):\n result = self._add_datetimelike_scalar(other)\n elif isinstance(other, Period) and lib.is_np_dtype(self.dtype, "m"):\n result = self._add_period(other)\n elif lib.is_integer(other):\n # This check must come after the check for np.timedelta64\n # as is_integer returns True for these\n if not isinstance(self.dtype, PeriodDtype):\n raise integer_op_not_supported(self)\n obj = cast("PeriodArray", self)\n result = obj._addsub_int_array_or_scalar(other * obj.dtype._n, operator.add)\n\n # array-like others\n elif lib.is_np_dtype(other_dtype, "m"):\n # TimedeltaIndex, ndarray[timedelta64]\n result = self._add_timedelta_arraylike(other)\n elif is_object_dtype(other_dtype):\n # e.g. 
Array/Index of DateOffset objects\n result = self._addsub_object_array(other, operator.add)\n elif lib.is_np_dtype(other_dtype, "M") or isinstance(\n other_dtype, DatetimeTZDtype\n ):\n # DatetimeIndex, ndarray[datetime64]\n return self._add_datetime_arraylike(other)\n elif is_integer_dtype(other_dtype):\n if not isinstance(self.dtype, PeriodDtype):\n raise integer_op_not_supported(self)\n obj = cast("PeriodArray", self)\n result = obj._addsub_int_array_or_scalar(other * obj.dtype._n, operator.add)\n else:\n # Includes Categorical, other ExtensionArrays\n # For PeriodDtype, if self is a TimedeltaArray and other is a\n # PeriodArray with a timedelta-like (i.e. Tick) freq, this\n # operation is valid. Defer to the PeriodArray implementation.\n # In remaining cases, this will end up raising TypeError.\n return NotImplemented\n\n if isinstance(result, np.ndarray) and lib.is_np_dtype(result.dtype, "m"):\n from pandas.core.arrays import TimedeltaArray\n\n return TimedeltaArray._from_sequence(result)\n return result\n\n def __radd__(self, other):\n # alias for __add__\n return self.__add__(other)\n\n @unpack_zerodim_and_defer("__sub__")\n def __sub__(self, other):\n other_dtype = getattr(other, "dtype", None)\n other = ensure_wrapped_if_datetimelike(other)\n\n # scalar others\n if other is NaT:\n result = self._sub_nat()\n elif isinstance(other, (Tick, timedelta, np.timedelta64)):\n result = self._add_timedeltalike_scalar(-other)\n elif isinstance(other, BaseOffset):\n # specifically _not_ a Tick\n result = self._add_offset(-other)\n elif isinstance(other, (datetime, np.datetime64)):\n result = self._sub_datetimelike_scalar(other)\n elif lib.is_integer(other):\n # This check must come after the check for np.timedelta64\n # as is_integer returns True for these\n if not isinstance(self.dtype, PeriodDtype):\n raise integer_op_not_supported(self)\n obj = cast("PeriodArray", self)\n result = obj._addsub_int_array_or_scalar(other * obj.dtype._n, operator.sub)\n\n elif isinstance(other, Period):\n result = self._sub_periodlike(other)\n\n # array-like others\n elif lib.is_np_dtype(other_dtype, "m"):\n # TimedeltaIndex, ndarray[timedelta64]\n result = self._add_timedelta_arraylike(-other)\n elif is_object_dtype(other_dtype):\n # e.g. Array/Index of DateOffset objects\n result = self._addsub_object_array(other, operator.sub)\n elif lib.is_np_dtype(other_dtype, "M") or isinstance(\n other_dtype, DatetimeTZDtype\n ):\n # DatetimeIndex, ndarray[datetime64]\n result = self._sub_datetime_arraylike(other)\n elif isinstance(other_dtype, PeriodDtype):\n # PeriodIndex\n result = self._sub_periodlike(other)\n elif is_integer_dtype(other_dtype):\n if not isinstance(self.dtype, PeriodDtype):\n raise integer_op_not_supported(self)\n obj = cast("PeriodArray", self)\n result = obj._addsub_int_array_or_scalar(other * obj.dtype._n, operator.sub)\n else:\n # Includes ExtensionArrays, float_dtype\n return NotImplemented\n\n if isinstance(result, np.ndarray) and lib.is_np_dtype(result.dtype, "m"):\n from pandas.core.arrays import TimedeltaArray\n\n return TimedeltaArray._from_sequence(result)\n return result\n\n def __rsub__(self, other):\n other_dtype = getattr(other, "dtype", None)\n other_is_dt64 = lib.is_np_dtype(other_dtype, "M") or isinstance(\n other_dtype, DatetimeTZDtype\n )\n\n if other_is_dt64 and lib.is_np_dtype(self.dtype, "m"):\n # ndarray[datetime64] cannot be subtracted from self, so\n # we need to wrap in DatetimeArray/Index and flip the operation\n if lib.is_scalar(other):\n # i.e. 
np.datetime64 object\n return Timestamp(other) - self\n if not isinstance(other, DatetimeLikeArrayMixin):\n # Avoid down-casting DatetimeIndex\n from pandas.core.arrays import DatetimeArray\n\n other = DatetimeArray._from_sequence(other)\n return other - self\n elif self.dtype.kind == "M" and hasattr(other, "dtype") and not other_is_dt64:\n # GH#19959 datetime - datetime is well-defined as timedelta,\n # but any other type - datetime is not well-defined.\n raise TypeError(\n f"cannot subtract {type(self).__name__} from {type(other).__name__}"\n )\n elif isinstance(self.dtype, PeriodDtype) and lib.is_np_dtype(other_dtype, "m"):\n # TODO: Can we simplify/generalize these cases at all?\n raise TypeError(f"cannot subtract {type(self).__name__} from {other.dtype}")\n elif lib.is_np_dtype(self.dtype, "m"):\n self = cast("TimedeltaArray", self)\n return (-self) + other\n\n # We get here with e.g. datetime objects\n return -(self - other)\n\n def __iadd__(self, other) -> Self:\n result = self + other\n self[:] = result[:]\n\n if not isinstance(self.dtype, PeriodDtype):\n # restore freq, which is invalidated by setitem\n self._freq = result.freq\n return self\n\n def __isub__(self, other) -> Self:\n result = self - other\n self[:] = result[:]\n\n if not isinstance(self.dtype, PeriodDtype):\n # restore freq, which is invalidated by setitem\n self._freq = result.freq\n return self\n\n # --------------------------------------------------------------\n # Reductions\n\n @_period_dispatch\n def _quantile(\n self,\n qs: npt.NDArray[np.float64],\n interpolation: str,\n ) -> Self:\n return super()._quantile(qs=qs, interpolation=interpolation)\n\n @_period_dispatch\n def min(self, *, axis: AxisInt | None = None, skipna: bool = True, **kwargs):\n """\n Return the minimum value of the Array or minimum along\n an axis.\n\n See Also\n --------\n numpy.ndarray.min\n Index.min : Return the minimum value in an Index.\n Series.min : Return the minimum value in a Series.\n """\n nv.validate_min((), kwargs)\n nv.validate_minmax_axis(axis, self.ndim)\n\n result = nanops.nanmin(self._ndarray, axis=axis, skipna=skipna)\n return self._wrap_reduction_result(axis, result)\n\n @_period_dispatch\n def max(self, *, axis: AxisInt | None = None, skipna: bool = True, **kwargs):\n """\n Return the maximum value of the Array or maximum along\n an axis.\n\n See Also\n --------\n numpy.ndarray.max\n Index.max : Return the maximum value in an Index.\n Series.max : Return the maximum value in a Series.\n """\n nv.validate_max((), kwargs)\n nv.validate_minmax_axis(axis, self.ndim)\n\n result = nanops.nanmax(self._ndarray, axis=axis, skipna=skipna)\n return self._wrap_reduction_result(axis, result)\n\n def mean(self, *, skipna: bool = True, axis: AxisInt | None = 0):\n """\n Return the mean value of the Array.\n\n Parameters\n ----------\n skipna : bool, default True\n Whether to ignore any NaT elements.\n axis : int, optional, default 0\n\n Returns\n -------\n scalar\n Timestamp or Timedelta.\n\n See Also\n --------\n numpy.ndarray.mean : Returns the average of array elements along a given axis.\n Series.mean : Return the mean value in a Series.\n\n Notes\n -----\n mean is only defined for Datetime and Timedelta dtypes, not for Period.\n\n Examples\n --------\n For :class:`pandas.DatetimeIndex`:\n\n >>> idx = pd.date_range('2001-01-01 00:00', periods=3)\n >>> idx\n DatetimeIndex(['2001-01-01', '2001-01-02', '2001-01-03'],\n dtype='datetime64[ns]', freq='D')\n >>> idx.mean()\n Timestamp('2001-01-02 00:00:00')\n\n For 
:class:`pandas.TimedeltaIndex`:\n\n >>> tdelta_idx = pd.to_timedelta([1, 2, 3], unit='D')\n >>> tdelta_idx\n TimedeltaIndex(['1 days', '2 days', '3 days'],\n dtype='timedelta64[ns]', freq=None)\n >>> tdelta_idx.mean()\n Timedelta('2 days 00:00:00')\n """\n if isinstance(self.dtype, PeriodDtype):\n # See discussion in GH#24757\n raise TypeError(\n f"mean is not implemented for {type(self).__name__} since the "\n "meaning is ambiguous. An alternative is "\n "obj.to_timestamp(how='start').mean()"\n )\n\n result = nanops.nanmean(\n self._ndarray, axis=axis, skipna=skipna, mask=self.isna()\n )\n return self._wrap_reduction_result(axis, result)\n\n @_period_dispatch\n def median(self, *, axis: AxisInt | None = None, skipna: bool = True, **kwargs):\n nv.validate_median((), kwargs)\n\n if axis is not None and abs(axis) >= self.ndim:\n raise ValueError("abs(axis) must be less than ndim")\n\n result = nanops.nanmedian(self._ndarray, axis=axis, skipna=skipna)\n return self._wrap_reduction_result(axis, result)\n\n def _mode(self, dropna: bool = True):\n mask = None\n if dropna:\n mask = self.isna()\n\n i8modes = algorithms.mode(self.view("i8"), mask=mask)\n npmodes = i8modes.view(self._ndarray.dtype)\n npmodes = cast(np.ndarray, npmodes)\n return self._from_backing_data(npmodes)\n\n # ------------------------------------------------------------------\n # GroupBy Methods\n\n def _groupby_op(\n self,\n *,\n how: str,\n has_dropped_na: bool,\n min_count: int,\n ngroups: int,\n ids: npt.NDArray[np.intp],\n **kwargs,\n ):\n dtype = self.dtype\n if dtype.kind == "M":\n # Adding/multiplying datetimes is not valid\n if how in ["sum", "prod", "cumsum", "cumprod", "var", "skew"]:\n raise TypeError(f"datetime64 type does not support {how} operations")\n if how in ["any", "all"]:\n # GH#34479\n warnings.warn(\n f"'{how}' with datetime64 dtypes is deprecated and will raise in a "\n f"future version. Use (obj != pd.Timestamp(0)).{how}() instead.",\n FutureWarning,\n stacklevel=find_stack_level(),\n )\n\n elif isinstance(dtype, PeriodDtype):\n # Adding/multiplying Periods is not valid\n if how in ["sum", "prod", "cumsum", "cumprod", "var", "skew"]:\n raise TypeError(f"Period type does not support {how} operations")\n if how in ["any", "all"]:\n # GH#34479\n warnings.warn(\n f"'{how}' with PeriodDtype is deprecated and will raise in a "\n f"future version. Use (obj != pd.Period(0, freq)).{how}() instead.",\n FutureWarning,\n stacklevel=find_stack_level(),\n )\n else:\n # timedeltas we can add but not multiply\n if how in ["prod", "cumprod", "skew", "var"]:\n raise TypeError(f"timedelta64 type does not support {how} operations")\n\n # All of the functions implemented here are ordinal, so we can\n # operate on the tz-naive equivalents\n npvalues = self._ndarray.view("M8[ns]")\n\n from pandas.core.groupby.ops import WrappedCythonOp\n\n kind = WrappedCythonOp.get_kind_from_how(how)\n op = WrappedCythonOp(how=how, kind=kind, has_dropped_na=has_dropped_na)\n\n res_values = op._cython_op_ndim_compat(\n npvalues,\n min_count=min_count,\n ngroups=ngroups,\n comp_ids=ids,\n mask=None,\n **kwargs,\n )\n\n if op.how in op.cast_blocklist:\n # i.e. 
how in ["rank"], since other cast_blocklist methods don't go\n # through cython_operation\n return res_values\n\n # We did a view to M8[ns] above, now we go the other direction\n assert res_values.dtype == "M8[ns]"\n if how in ["std", "sem"]:\n from pandas.core.arrays import TimedeltaArray\n\n if isinstance(self.dtype, PeriodDtype):\n raise TypeError("'std' and 'sem' are not valid for PeriodDtype")\n self = cast("DatetimeArray | TimedeltaArray", self)\n new_dtype = f"m8[{self.unit}]"\n res_values = res_values.view(new_dtype)\n return TimedeltaArray._simple_new(res_values, dtype=res_values.dtype)\n\n res_values = res_values.view(self._ndarray.dtype)\n return self._from_backing_data(res_values)\n\n\nclass DatelikeOps(DatetimeLikeArrayMixin):\n """\n Common ops for DatetimeIndex/PeriodIndex, but not TimedeltaIndex.\n """\n\n @Substitution(\n URL="https://docs.python.org/3/library/datetime.html"\n "#strftime-and-strptime-behavior"\n )\n def strftime(self, date_format: str) -> npt.NDArray[np.object_]:\n """\n Convert to Index using specified date_format.\n\n Return an Index of formatted strings specified by date_format, which\n supports the same string format as the python standard library. Details\n of the string format can be found in `python string format\n doc <%(URL)s>`__.\n\n Formats supported by the C `strftime` API but not by the python string format\n doc (such as `"%%R"`, `"%%r"`) are not officially supported and should be\n preferably replaced with their supported equivalents (such as `"%%H:%%M"`,\n `"%%I:%%M:%%S %%p"`).\n\n Note that `PeriodIndex` support additional directives, detailed in\n `Period.strftime`.\n\n Parameters\n ----------\n date_format : str\n Date format string (e.g. "%%Y-%%m-%%d").\n\n Returns\n -------\n ndarray[object]\n NumPy ndarray of formatted strings.\n\n See Also\n --------\n to_datetime : Convert the given argument to datetime.\n DatetimeIndex.normalize : Return DatetimeIndex with times to midnight.\n DatetimeIndex.round : Round the DatetimeIndex to the specified freq.\n DatetimeIndex.floor : Floor the DatetimeIndex to the specified freq.\n Timestamp.strftime : Format a single Timestamp.\n Period.strftime : Format a single Period.\n\n Examples\n --------\n >>> rng = pd.date_range(pd.Timestamp("2018-03-10 09:00"),\n ... periods=3, freq='s')\n >>> rng.strftime('%%B %%d, %%Y, %%r')\n Index(['March 10, 2018, 09:00:00 AM', 'March 10, 2018, 09:00:01 AM',\n 'March 10, 2018, 09:00:02 AM'],\n dtype='object')\n """\n result = self._format_native_types(date_format=date_format, na_rep=np.nan)\n if using_string_dtype():\n from pandas import StringDtype\n\n return pd_array(result, dtype=StringDtype(na_value=np.nan)) # type: ignore[return-value]\n return result.astype(object, copy=False)\n\n\n_round_doc = """\n Perform {op} operation on the data to the specified `freq`.\n\n Parameters\n ----------\n freq : str or Offset\n The frequency level to {op} the index to. Must be a fixed\n frequency like 'S' (second) not 'ME' (month end). 
See\n :ref:`frequency aliases <timeseries.offset_aliases>` for\n a list of possible `freq` values.\n ambiguous : 'infer', bool-ndarray, 'NaT', default 'raise'\n Only relevant for DatetimeIndex:\n\n - 'infer' will attempt to infer fall dst-transition hours based on\n order\n - bool-ndarray where True signifies a DST time, False designates\n a non-DST time (note that this flag is only applicable for\n ambiguous times)\n - 'NaT' will return NaT where there are ambiguous times\n - 'raise' will raise an AmbiguousTimeError if there are ambiguous\n times.\n\n nonexistent : 'shift_forward', 'shift_backward', 'NaT', timedelta, default 'raise'\n A nonexistent time does not exist in a particular timezone\n where clocks moved forward due to DST.\n\n - 'shift_forward' will shift the nonexistent time forward to the\n closest existing time\n - 'shift_backward' will shift the nonexistent time backward to the\n closest existing time\n - 'NaT' will return NaT where there are nonexistent times\n - timedelta objects will shift nonexistent times by the timedelta\n - 'raise' will raise an NonExistentTimeError if there are\n nonexistent times.\n\n Returns\n -------\n DatetimeIndex, TimedeltaIndex, or Series\n Index of the same type for a DatetimeIndex or TimedeltaIndex,\n or a Series with the same index for a Series.\n\n Raises\n ------\n ValueError if the `freq` cannot be converted.\n\n Notes\n -----\n If the timestamps have a timezone, {op}ing will take place relative to the\n local ("wall") time and re-localized to the same timezone. When {op}ing\n near daylight savings time, use ``nonexistent`` and ``ambiguous`` to\n control the re-localization behavior.\n\n Examples\n --------\n **DatetimeIndex**\n\n >>> rng = pd.date_range('1/1/2018 11:59:00', periods=3, freq='min')\n >>> rng\n DatetimeIndex(['2018-01-01 11:59:00', '2018-01-01 12:00:00',\n '2018-01-01 12:01:00'],\n dtype='datetime64[ns]', freq='min')\n """\n\n_round_example = """>>> rng.round('h')\n DatetimeIndex(['2018-01-01 12:00:00', '2018-01-01 12:00:00',\n '2018-01-01 12:00:00'],\n dtype='datetime64[ns]', freq=None)\n\n **Series**\n\n >>> pd.Series(rng).dt.round("h")\n 0 2018-01-01 12:00:00\n 1 2018-01-01 12:00:00\n 2 2018-01-01 12:00:00\n dtype: datetime64[ns]\n\n When rounding near a daylight savings time transition, use ``ambiguous`` or\n ``nonexistent`` to control how the timestamp should be re-localized.\n\n >>> rng_tz = pd.DatetimeIndex(["2021-10-31 03:30:00"], tz="Europe/Amsterdam")\n\n >>> rng_tz.floor("2h", ambiguous=False)\n DatetimeIndex(['2021-10-31 02:00:00+01:00'],\n dtype='datetime64[ns, Europe/Amsterdam]', freq=None)\n\n >>> rng_tz.floor("2h", ambiguous=True)\n DatetimeIndex(['2021-10-31 02:00:00+02:00'],\n dtype='datetime64[ns, Europe/Amsterdam]', freq=None)\n """\n\n_floor_example = """>>> rng.floor('h')\n DatetimeIndex(['2018-01-01 11:00:00', '2018-01-01 12:00:00',\n '2018-01-01 12:00:00'],\n dtype='datetime64[ns]', freq=None)\n\n **Series**\n\n >>> pd.Series(rng).dt.floor("h")\n 0 2018-01-01 11:00:00\n 1 2018-01-01 12:00:00\n 2 2018-01-01 12:00:00\n dtype: datetime64[ns]\n\n When rounding near a daylight savings time transition, use ``ambiguous`` or\n ``nonexistent`` to control how the timestamp should be re-localized.\n\n >>> rng_tz = pd.DatetimeIndex(["2021-10-31 03:30:00"], tz="Europe/Amsterdam")\n\n >>> rng_tz.floor("2h", ambiguous=False)\n DatetimeIndex(['2021-10-31 02:00:00+01:00'],\n dtype='datetime64[ns, Europe/Amsterdam]', freq=None)\n\n >>> rng_tz.floor("2h", ambiguous=True)\n DatetimeIndex(['2021-10-31 
02:00:00+02:00'],\n dtype='datetime64[ns, Europe/Amsterdam]', freq=None)\n """\n\n_ceil_example = """>>> rng.ceil('h')\n DatetimeIndex(['2018-01-01 12:00:00', '2018-01-01 12:00:00',\n '2018-01-01 13:00:00'],\n dtype='datetime64[ns]', freq=None)\n\n **Series**\n\n >>> pd.Series(rng).dt.ceil("h")\n 0 2018-01-01 12:00:00\n 1 2018-01-01 12:00:00\n 2 2018-01-01 13:00:00\n dtype: datetime64[ns]\n\n When rounding near a daylight savings time transition, use ``ambiguous`` or\n ``nonexistent`` to control how the timestamp should be re-localized.\n\n >>> rng_tz = pd.DatetimeIndex(["2021-10-31 01:30:00"], tz="Europe/Amsterdam")\n\n >>> rng_tz.ceil("h", ambiguous=False)\n DatetimeIndex(['2021-10-31 02:00:00+01:00'],\n dtype='datetime64[ns, Europe/Amsterdam]', freq=None)\n\n >>> rng_tz.ceil("h", ambiguous=True)\n DatetimeIndex(['2021-10-31 02:00:00+02:00'],\n dtype='datetime64[ns, Europe/Amsterdam]', freq=None)\n """\n\n\nclass TimelikeOps(DatetimeLikeArrayMixin):\n """\n Common ops for TimedeltaIndex/DatetimeIndex, but not PeriodIndex.\n """\n\n _default_dtype: np.dtype\n\n def __init__(\n self, values, dtype=None, freq=lib.no_default, copy: bool = False\n ) -> None:\n warnings.warn(\n # GH#55623\n f"{type(self).__name__}.__init__ is deprecated and will be "\n "removed in a future version. Use pd.array instead.",\n FutureWarning,\n stacklevel=find_stack_level(),\n )\n if dtype is not None:\n dtype = pandas_dtype(dtype)\n\n values = extract_array(values, extract_numpy=True)\n if isinstance(values, IntegerArray):\n values = values.to_numpy("int64", na_value=iNaT)\n\n inferred_freq = getattr(values, "_freq", None)\n explicit_none = freq is None\n freq = freq if freq is not lib.no_default else None\n\n if isinstance(values, type(self)):\n if explicit_none:\n # don't inherit from values\n pass\n elif freq is None:\n freq = values.freq\n elif freq and values.freq:\n freq = to_offset(freq)\n freq = _validate_inferred_freq(freq, values.freq)\n\n if dtype is not None and dtype != values.dtype:\n # TODO: we only have tests for this for DTA, not TDA (2022-07-01)\n raise TypeError(\n f"dtype={dtype} does not match data dtype {values.dtype}"\n )\n\n dtype = values.dtype\n values = values._ndarray\n\n elif dtype is None:\n if isinstance(values, np.ndarray) and values.dtype.kind in "Mm":\n dtype = values.dtype\n else:\n dtype = self._default_dtype\n if isinstance(values, np.ndarray) and values.dtype == "i8":\n values = values.view(dtype)\n\n if not isinstance(values, np.ndarray):\n raise ValueError(\n f"Unexpected type '{type(values).__name__}'. 'values' must be a "\n f"{type(self).__name__}, ndarray, or Series or Index "\n "containing one of those."\n )\n if values.ndim not in [1, 2]:\n raise ValueError("Only 1-dimensional input arrays are supported.")\n\n if values.dtype == "i8":\n # for compat with datetime/timedelta/period shared methods,\n # we can sometimes get here with int64 values. These represent\n # nanosecond UTC (or tz-naive) unix timestamps\n if dtype is None:\n dtype = self._default_dtype\n values = values.view(self._default_dtype)\n elif lib.is_np_dtype(dtype, "mM"):\n values = values.view(dtype)\n elif isinstance(dtype, DatetimeTZDtype):\n kind = self._default_dtype.kind\n new_dtype = f"{kind}8[{dtype.unit}]"\n values = values.view(new_dtype)\n\n dtype = self._validate_dtype(values, dtype)\n\n if freq == "infer":\n raise ValueError(\n f"Frequency inference not allowed in {type(self).__name__}.__init__. 
"\n "Use 'pd.array()' instead."\n )\n\n if copy:\n values = values.copy()\n if freq:\n freq = to_offset(freq)\n if values.dtype.kind == "m" and not isinstance(freq, Tick):\n raise TypeError("TimedeltaArray/Index freq must be a Tick")\n\n NDArrayBacked.__init__(self, values=values, dtype=dtype)\n self._freq = freq\n\n if inferred_freq is None and freq is not None:\n type(self)._validate_frequency(self, freq)\n\n @classmethod\n def _validate_dtype(cls, values, dtype):\n raise AbstractMethodError(cls)\n\n @property\n def freq(self):\n """\n Return the frequency object if it is set, otherwise None.\n """\n return self._freq\n\n @freq.setter\n def freq(self, value) -> None:\n if value is not None:\n value = to_offset(value)\n self._validate_frequency(self, value)\n if self.dtype.kind == "m" and not isinstance(value, Tick):\n raise TypeError("TimedeltaArray/Index freq must be a Tick")\n\n if self.ndim > 1:\n raise ValueError("Cannot set freq with ndim > 1")\n\n self._freq = value\n\n @final\n def _maybe_pin_freq(self, freq, validate_kwds: dict):\n """\n Constructor helper to pin the appropriate `freq` attribute. Assumes\n that self._freq is currently set to any freq inferred in\n _from_sequence_not_strict.\n """\n if freq is None:\n # user explicitly passed None -> override any inferred_freq\n self._freq = None\n elif freq == "infer":\n # if self._freq is *not* None then we already inferred a freq\n # and there is nothing left to do\n if self._freq is None:\n # Set _freq directly to bypass duplicative _validate_frequency\n # check.\n self._freq = to_offset(self.inferred_freq)\n elif freq is lib.no_default:\n # user did not specify anything, keep inferred freq if the original\n # data had one, otherwise do nothing\n pass\n elif self._freq is None:\n # We cannot inherit a freq from the data, so we need to validate\n # the user-passed freq\n freq = to_offset(freq)\n type(self)._validate_frequency(self, freq, **validate_kwds)\n self._freq = freq\n else:\n # Otherwise we just need to check that the user-passed freq\n # doesn't conflict with the one we already have.\n freq = to_offset(freq)\n _validate_inferred_freq(freq, self._freq)\n\n @final\n @classmethod\n def _validate_frequency(cls, index, freq: BaseOffset, **kwargs):\n """\n Validate that a frequency is compatible with the values of a given\n Datetime Array/Index or Timedelta Array/Index\n\n Parameters\n ----------\n index : DatetimeIndex or TimedeltaIndex\n The index on which to determine if the given frequency is valid\n freq : DateOffset\n The frequency to validate\n """\n inferred = index.inferred_freq\n if index.size == 0 or inferred == freq.freqstr:\n return None\n\n try:\n on_freq = cls._generate_range(\n start=index[0],\n end=None,\n periods=len(index),\n freq=freq,\n unit=index.unit,\n **kwargs,\n )\n if not np.array_equal(index.asi8, on_freq.asi8):\n raise ValueError\n except ValueError as err:\n if "non-fixed" in str(err):\n # non-fixed frequencies are not meaningful for timedelta64;\n # we retain that error message\n raise err\n # GH#11587 the main way this is reached is if the `np.array_equal`\n # check above is False. 
This can also be reached if index[0]\n # is `NaT`, in which case the call to `cls._generate_range` will\n # raise a ValueError, which we re-raise with a more targeted\n # message.\n raise ValueError(\n f"Inferred frequency {inferred} from passed values "\n f"does not conform to passed frequency {freq.freqstr}"\n ) from err\n\n @classmethod\n def _generate_range(\n cls, start, end, periods: int | None, freq, *args, **kwargs\n ) -> Self:\n raise AbstractMethodError(cls)\n\n # --------------------------------------------------------------\n\n @cache_readonly\n def _creso(self) -> int:\n return get_unit_from_dtype(self._ndarray.dtype)\n\n @cache_readonly\n def unit(self) -> str:\n # e.g. "ns", "us", "ms"\n # error: Argument 1 to "dtype_to_unit" has incompatible type\n # "ExtensionDtype"; expected "Union[DatetimeTZDtype, dtype[Any]]"\n return dtype_to_unit(self.dtype) # type: ignore[arg-type]\n\n def as_unit(self, unit: str, round_ok: bool = True) -> Self:\n if unit not in ["s", "ms", "us", "ns"]:\n raise ValueError("Supported units are 's', 'ms', 'us', 'ns'")\n\n dtype = np.dtype(f"{self.dtype.kind}8[{unit}]")\n new_values = astype_overflowsafe(self._ndarray, dtype, round_ok=round_ok)\n\n if isinstance(self.dtype, np.dtype):\n new_dtype = new_values.dtype\n else:\n tz = cast("DatetimeArray", self).tz\n new_dtype = DatetimeTZDtype(tz=tz, unit=unit)\n\n # error: Unexpected keyword argument "freq" for "_simple_new" of\n # "NDArrayBacked" [call-arg]\n return type(self)._simple_new(\n new_values, dtype=new_dtype, freq=self.freq # type: ignore[call-arg]\n )\n\n # TODO: annotate other as DatetimeArray | TimedeltaArray | Timestamp | Timedelta\n # with the return type matching input type. TypeVar?\n def _ensure_matching_resos(self, other):\n if self._creso != other._creso:\n # Just as with Timestamp/Timedelta, we cast to the higher resolution\n if self._creso < other._creso:\n self = self.as_unit(other.unit)\n else:\n other = other.as_unit(self.unit)\n return self, other\n\n # --------------------------------------------------------------\n\n def __array_ufunc__(self, ufunc: np.ufunc, method: str, *inputs, **kwargs):\n if (\n ufunc in [np.isnan, np.isinf, np.isfinite]\n and len(inputs) == 1\n and inputs[0] is self\n ):\n # numpy 1.18 changed isinf and isnan to not raise on dt64/td64\n return getattr(ufunc, method)(self._ndarray, **kwargs)\n\n return super().__array_ufunc__(ufunc, method, *inputs, **kwargs)\n\n def _round(self, freq, mode, ambiguous, nonexistent):\n # round the local times\n if isinstance(self.dtype, DatetimeTZDtype):\n # operate on naive timestamps, then convert back to aware\n self = cast("DatetimeArray", self)\n naive = self.tz_localize(None)\n result = naive._round(freq, mode, ambiguous, nonexistent)\n return result.tz_localize(\n self.tz, ambiguous=ambiguous, nonexistent=nonexistent\n )\n\n values = self.view("i8")\n values = cast(np.ndarray, values)\n nanos = get_unit_for_round(freq, self._creso)\n if nanos == 0:\n # GH 52761\n return self.copy()\n result_i8 = round_nsint64(values, mode, nanos)\n result = self._maybe_mask_results(result_i8, fill_value=iNaT)\n result = result.view(self._ndarray.dtype)\n return self._simple_new(result, dtype=self.dtype)\n\n @Appender((_round_doc + _round_example).format(op="round"))\n def round(\n self,\n freq,\n ambiguous: TimeAmbiguous = "raise",\n nonexistent: TimeNonexistent = "raise",\n ) -> Self:\n return self._round(freq, RoundTo.NEAREST_HALF_EVEN, ambiguous, nonexistent)\n\n @Appender((_round_doc + _floor_example).format(op="floor"))\n def 
floor(\n self,\n freq,\n ambiguous: TimeAmbiguous = "raise",\n nonexistent: TimeNonexistent = "raise",\n ) -> Self:\n return self._round(freq, RoundTo.MINUS_INFTY, ambiguous, nonexistent)\n\n @Appender((_round_doc + _ceil_example).format(op="ceil"))\n def ceil(\n self,\n freq,\n ambiguous: TimeAmbiguous = "raise",\n nonexistent: TimeNonexistent = "raise",\n ) -> Self:\n return self._round(freq, RoundTo.PLUS_INFTY, ambiguous, nonexistent)\n\n # --------------------------------------------------------------\n # Reductions\n\n def any(self, *, axis: AxisInt | None = None, skipna: bool = True) -> bool:\n # GH#34479 the nanops call will issue a FutureWarning for non-td64 dtype\n return nanops.nanany(self._ndarray, axis=axis, skipna=skipna, mask=self.isna())\n\n def all(self, *, axis: AxisInt | None = None, skipna: bool = True) -> bool:\n # GH#34479 the nanops call will issue a FutureWarning for non-td64 dtype\n\n return nanops.nanall(self._ndarray, axis=axis, skipna=skipna, mask=self.isna())\n\n # --------------------------------------------------------------\n # Frequency Methods\n\n def _maybe_clear_freq(self) -> None:\n self._freq = None\n\n def _with_freq(self, freq) -> Self:\n """\n Helper to get a view on the same data, with a new freq.\n\n Parameters\n ----------\n freq : DateOffset, None, or "infer"\n\n Returns\n -------\n Same type as self\n """\n # GH#29843\n if freq is None:\n # Always valid\n pass\n elif len(self) == 0 and isinstance(freq, BaseOffset):\n # Always valid. In the TimedeltaArray case, we require a Tick offset\n if self.dtype.kind == "m" and not isinstance(freq, Tick):\n raise TypeError("TimedeltaArray/Index freq must be a Tick")\n else:\n # As an internal method, we can ensure this assertion always holds\n assert freq == "infer"\n freq = to_offset(self.inferred_freq)\n\n arr = self.view()\n arr._freq = freq\n return arr\n\n # --------------------------------------------------------------\n # ExtensionArray Interface\n\n def _values_for_json(self) -> np.ndarray:\n # Small performance bump vs the base class which calls np.asarray(self)\n if isinstance(self.dtype, np.dtype):\n return self._ndarray\n return super()._values_for_json()\n\n def factorize(\n self,\n use_na_sentinel: bool = True,\n sort: bool = False,\n ):\n if self.freq is not None:\n # We must be unique, so can short-circuit (and retain freq)\n codes = np.arange(len(self), dtype=np.intp)\n uniques = self.copy() # TODO: copy or view?\n if sort and self.freq.n < 0:\n codes = codes[::-1]\n uniques = uniques[::-1]\n return codes, uniques\n\n if sort:\n # algorithms.factorize only passes sort=True here when freq is\n # not None, so this should not be reached.\n raise NotImplementedError(\n f"The 'sort' keyword in {type(self).__name__}.factorize is "\n "ignored unless arr.freq is not None. 
To factorize with sort, "\n "call pd.factorize(obj, sort=True) instead."\n )\n return super().factorize(use_na_sentinel=use_na_sentinel)\n\n @classmethod\n def _concat_same_type(\n cls,\n to_concat: Sequence[Self],\n axis: AxisInt = 0,\n ) -> Self:\n new_obj = super()._concat_same_type(to_concat, axis)\n\n obj = to_concat[0]\n\n if axis == 0:\n # GH 3232: If the concat result is evenly spaced, we can retain the\n # original frequency\n to_concat = [x for x in to_concat if len(x)]\n\n if obj.freq is not None and all(x.freq == obj.freq for x in to_concat):\n pairs = zip(to_concat[:-1], to_concat[1:])\n if all(pair[0][-1] + obj.freq == pair[1][0] for pair in pairs):\n new_freq = obj.freq\n new_obj._freq = new_freq\n return new_obj\n\n def copy(self, order: str = "C") -> Self:\n new_obj = super().copy(order=order)\n new_obj._freq = self.freq\n return new_obj\n\n def interpolate(\n self,\n *,\n method: InterpolateOptions,\n axis: int,\n index: Index,\n limit,\n limit_direction,\n limit_area,\n copy: bool,\n **kwargs,\n ) -> Self:\n """\n See NDFrame.interpolate.__doc__.\n """\n # NB: we return type(self) even if copy=False\n if method != "linear":\n raise NotImplementedError\n\n if not copy:\n out_data = self._ndarray\n else:\n out_data = self._ndarray.copy()\n\n missing.interpolate_2d_inplace(\n out_data,\n method=method,\n axis=axis,\n index=index,\n limit=limit,\n limit_direction=limit_direction,\n limit_area=limit_area,\n **kwargs,\n )\n if not copy:\n return self\n return type(self)._simple_new(out_data, dtype=self.dtype)\n\n # --------------------------------------------------------------\n # Unsorted\n\n @property\n def _is_dates_only(self) -> bool:\n """\n Check if we are round times at midnight (and no timezone), which will\n be given a more compact __repr__ than other cases. For TimedeltaArray\n we are checking for multiples of 24H.\n """\n if not lib.is_np_dtype(self.dtype):\n # i.e. we have a timezone\n return False\n\n values_int = self.asi8\n consider_values = values_int != iNaT\n reso = get_unit_from_dtype(self.dtype)\n ppd = periods_per_day(reso)\n\n # TODO: can we reuse is_date_array_normalized? would need a skipna kwd\n # (first attempt at this was less performant than this implementation)\n even_days = np.logical_and(consider_values, values_int % ppd != 0).sum() == 0\n return even_days\n\n\n# -------------------------------------------------------------------\n# Shared Constructor Helpers\n\n\ndef ensure_arraylike_for_datetimelike(\n data, copy: bool, cls_name: str\n) -> tuple[ArrayLike, bool]:\n if not hasattr(data, "dtype"):\n # e.g. list, tuple\n if not isinstance(data, (list, tuple)) and np.ndim(data) == 0:\n # i.e. generator\n data = list(data)\n\n data = construct_1d_object_array_from_listlike(data)\n copy = False\n elif isinstance(data, ABCMultiIndex):\n raise TypeError(f"Cannot create a {cls_name} from a MultiIndex.")\n else:\n data = extract_array(data, extract_numpy=True)\n\n if isinstance(data, IntegerArray) or (\n isinstance(data, ArrowExtensionArray) and data.dtype.kind in "iu"\n ):\n data = data.to_numpy("int64", na_value=iNaT)\n copy = False\n elif isinstance(data, ArrowExtensionArray):\n data = data._maybe_convert_datelike_array()\n data = data.to_numpy()\n copy = False\n elif not isinstance(data, (np.ndarray, ExtensionArray)):\n # GH#24539 e.g. 
xarray, dask object\n data = np.asarray(data)\n\n elif isinstance(data, ABCCategorical):\n # GH#18664 preserve tz in going DTI->Categorical->DTI\n # TODO: cases where we need to do another pass through maybe_convert_dtype,\n # e.g. the categories are timedelta64s\n data = data.categories.take(data.codes, fill_value=NaT)._values\n copy = False\n\n return data, copy\n\n\n@overload\ndef validate_periods(periods: None) -> None:\n ...\n\n\n@overload\ndef validate_periods(periods: int | float) -> int:\n ...\n\n\ndef validate_periods(periods: int | float | None) -> int | None:\n """\n If a `periods` argument is passed to the Datetime/Timedelta Array/Index\n constructor, cast it to an integer.\n\n Parameters\n ----------\n periods : None, float, int\n\n Returns\n -------\n periods : None or int\n\n Raises\n ------\n TypeError\n if periods is None, float, or int\n """\n if periods is not None:\n if lib.is_float(periods):\n warnings.warn(\n # GH#56036\n "Non-integer 'periods' in pd.date_range, pd.timedelta_range, "\n "pd.period_range, and pd.interval_range are deprecated and "\n "will raise in a future version.",\n FutureWarning,\n stacklevel=find_stack_level(),\n )\n periods = int(periods)\n elif not lib.is_integer(periods):\n raise TypeError(f"periods must be a number, got {periods}")\n return periods\n\n\ndef _validate_inferred_freq(\n freq: BaseOffset | None, inferred_freq: BaseOffset | None\n) -> BaseOffset | None:\n """\n If the user passes a freq and another freq is inferred from passed data,\n require that they match.\n\n Parameters\n ----------\n freq : DateOffset or None\n inferred_freq : DateOffset or None\n\n Returns\n -------\n freq : DateOffset or None\n """\n if inferred_freq is not None:\n if freq is not None and freq != inferred_freq:\n raise ValueError(\n f"Inferred frequency {inferred_freq} from passed "\n "values does not conform to passed frequency "\n f"{freq.freqstr}"\n )\n if freq is None:\n freq = inferred_freq\n\n return freq\n\n\ndef dtype_to_unit(dtype: DatetimeTZDtype | np.dtype | ArrowDtype) -> str:\n """\n Return the unit str corresponding to the dtype's resolution.\n\n Parameters\n ----------\n dtype : DatetimeTZDtype or np.dtype\n If np.dtype, we assume it is a datetime64 dtype.\n\n Returns\n -------\n str\n """\n if isinstance(dtype, DatetimeTZDtype):\n return dtype.unit\n elif isinstance(dtype, ArrowDtype):\n if dtype.kind not in "mM":\n raise ValueError(f"{dtype=} does not have a resolution.")\n return dtype.pyarrow_dtype.unit\n return np.datetime_data(dtype)[0]\n
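The record above ends the content of pandas/core/arrays/datetimelike.py, whose arithmetic, rounding, and resolution-matching helpers are all internal. As a hedged orientation aid (not part of the original file), the short sketch below exercises the documented behaviour through the public pandas API only; it assumes a pandas 2.x install, where non-nanosecond units and as_unit are available, and exact reprs may differ by version.

# Minimal usage sketch of the behaviour implemented in datetimelike.py.
# Assumes pandas >= 2.0; uses only public API calls.
import pandas as pd

# Datetime - datetime yields a timedelta-dtype result (cf. _sub_datetimelike).
idx = pd.date_range("2018-01-01 11:59:00", periods=3, freq="min")
deltas = idx - pd.Timestamp("2018-01-01")
print(deltas.dtype)          # timedelta64[ns]

# round/floor/ceil operate on local ("wall") time, as the _round_doc text states.
print(idx.floor("h"))        # stamps floored to the containing hour

# Mixed-resolution operands are cast to the finer unit before the op
# (cf. _ensure_matching_resos / as_unit): ns vs s resolves to ns here.
coarse = idx.as_unit("s")
print((idx - coarse).dtype)  # timedelta64[ns]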
.venv\Lib\site-packages\pandas\core\arrays\datetimelike.py
datetimelike.py
Python
90,548
0.75
0.144406
0.128795
react-lib
661
2025-05-23T16:39:31.457956
GPL-3.0
false
67a4221261a5c001df2c25d784bb386b
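The next record carries pandas/core/arrays/datetimes.py, whose tz_to_dtype helper maps a timezone (or None) to the array dtype. As a hedged illustration using only public API (tz_to_dtype itself is internal), the snippet below shows the tz-naive versus tz-aware dtypes that helper corresponds to; it assumes pandas 2.x and reprs may vary slightly between versions.

# Sketch of the naive vs aware dtype distinction encoded by tz_to_dtype.
import pandas as pd

naive = pd.DatetimeIndex(["2023-01-01", "2023-01-02"])
print(naive.dtype)   # datetime64[ns]  -> the tz=None branch
print(naive.tz)      # None

aware = naive.tz_localize("UTC").tz_convert("Europe/Amsterdam")
print(aware.dtype)   # datetime64[ns, Europe/Amsterdam] -> DatetimeTZDtype branch
print(aware.tz)      # timezone taken from the DatetimeTZDtype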
from __future__ import annotations\n\nfrom datetime import (\n datetime,\n timedelta,\n tzinfo,\n)\nfrom typing import (\n TYPE_CHECKING,\n cast,\n overload,\n)\nimport warnings\n\nimport numpy as np\n\nfrom pandas._config import using_string_dtype\n\nfrom pandas._libs import (\n lib,\n tslib,\n)\nfrom pandas._libs.tslibs import (\n BaseOffset,\n NaT,\n NaTType,\n Resolution,\n Timestamp,\n astype_overflowsafe,\n fields,\n get_resolution,\n get_supported_dtype,\n get_unit_from_dtype,\n ints_to_pydatetime,\n is_date_array_normalized,\n is_supported_dtype,\n is_unitless,\n normalize_i8_timestamps,\n timezones,\n to_offset,\n tz_convert_from_utc,\n tzconversion,\n)\nfrom pandas._libs.tslibs.dtypes import abbrev_to_npy_unit\nfrom pandas.errors import PerformanceWarning\nfrom pandas.util._exceptions import find_stack_level\nfrom pandas.util._validators import validate_inclusive\n\nfrom pandas.core.dtypes.common import (\n DT64NS_DTYPE,\n INT64_DTYPE,\n is_bool_dtype,\n is_float_dtype,\n is_string_dtype,\n pandas_dtype,\n)\nfrom pandas.core.dtypes.dtypes import (\n DatetimeTZDtype,\n ExtensionDtype,\n PeriodDtype,\n)\nfrom pandas.core.dtypes.missing import isna\n\nfrom pandas.core.arrays import datetimelike as dtl\nfrom pandas.core.arrays._ranges import generate_regular_range\nimport pandas.core.common as com\n\nfrom pandas.tseries.frequencies import get_period_alias\nfrom pandas.tseries.offsets import (\n Day,\n Tick,\n)\n\nif TYPE_CHECKING:\n from collections.abc import Iterator\n\n from pandas._typing import (\n ArrayLike,\n DateTimeErrorChoices,\n DtypeObj,\n IntervalClosedType,\n Self,\n TimeAmbiguous,\n TimeNonexistent,\n npt,\n )\n\n from pandas import DataFrame\n from pandas.core.arrays import PeriodArray\n\n\n_ITER_CHUNKSIZE = 10_000\n\n\n@overload\ndef tz_to_dtype(tz: tzinfo, unit: str = ...) -> DatetimeTZDtype:\n ...\n\n\n@overload\ndef tz_to_dtype(tz: None, unit: str = ...) 
-> np.dtype[np.datetime64]:\n ...\n\n\ndef tz_to_dtype(\n tz: tzinfo | None, unit: str = "ns"\n) -> np.dtype[np.datetime64] | DatetimeTZDtype:\n """\n Return a datetime64[ns] dtype appropriate for the given timezone.\n\n Parameters\n ----------\n tz : tzinfo or None\n unit : str, default "ns"\n\n Returns\n -------\n np.dtype or Datetime64TZDType\n """\n if tz is None:\n return np.dtype(f"M8[{unit}]")\n else:\n return DatetimeTZDtype(tz=tz, unit=unit)\n\n\ndef _field_accessor(name: str, field: str, docstring: str | None = None):\n def f(self):\n values = self._local_timestamps()\n\n if field in self._bool_ops:\n result: np.ndarray\n\n if field.endswith(("start", "end")):\n freq = self.freq\n month_kw = 12\n if freq:\n kwds = freq.kwds\n month_kw = kwds.get("startingMonth", kwds.get("month", 12))\n\n result = fields.get_start_end_field(\n values, field, self.freqstr, month_kw, reso=self._creso\n )\n else:\n result = fields.get_date_field(values, field, reso=self._creso)\n\n # these return a boolean by-definition\n return result\n\n if field in self._object_ops:\n result = fields.get_date_name_field(values, field, reso=self._creso)\n result = self._maybe_mask_results(result, fill_value=None)\n\n else:\n result = fields.get_date_field(values, field, reso=self._creso)\n result = self._maybe_mask_results(\n result, fill_value=None, convert="float64"\n )\n\n return result\n\n f.__name__ = name\n f.__doc__ = docstring\n return property(f)\n\n\n# error: Definition of "_concat_same_type" in base class "NDArrayBacked" is\n# incompatible with definition in base class "ExtensionArray"\nclass DatetimeArray(dtl.TimelikeOps, dtl.DatelikeOps): # type: ignore[misc]\n """\n Pandas ExtensionArray for tz-naive or tz-aware datetime data.\n\n .. warning::\n\n DatetimeArray is currently experimental, and its API may change\n without warning. In particular, :attr:`DatetimeArray.dtype` is\n expected to change to always be an instance of an ``ExtensionDtype``\n subclass.\n\n Parameters\n ----------\n values : Series, Index, DatetimeArray, ndarray\n The datetime data.\n\n For DatetimeArray `values` (or a Series or Index boxing one),\n `dtype` and `freq` will be extracted from `values`.\n\n dtype : numpy.dtype or DatetimeTZDtype\n Note that the only NumPy dtype allowed is 'datetime64[ns]'.\n freq : str or Offset, optional\n The frequency.\n copy : bool, default False\n Whether to copy the underlying array of values.\n\n Attributes\n ----------\n None\n\n Methods\n -------\n None\n\n Examples\n --------\n >>> pd.arrays.DatetimeArray._from_sequence(\n ... 
pd.DatetimeIndex(['2023-01-01', '2023-01-02'], freq='D'))\n <DatetimeArray>\n ['2023-01-01 00:00:00', '2023-01-02 00:00:00']\n Length: 2, dtype: datetime64[ns]\n """\n\n _typ = "datetimearray"\n _internal_fill_value = np.datetime64("NaT", "ns")\n _recognized_scalars = (datetime, np.datetime64)\n _is_recognized_dtype = lambda x: lib.is_np_dtype(x, "M") or isinstance(\n x, DatetimeTZDtype\n )\n _infer_matches = ("datetime", "datetime64", "date")\n\n @property\n def _scalar_type(self) -> type[Timestamp]:\n return Timestamp\n\n # define my properties & methods for delegation\n _bool_ops: list[str] = [\n "is_month_start",\n "is_month_end",\n "is_quarter_start",\n "is_quarter_end",\n "is_year_start",\n "is_year_end",\n "is_leap_year",\n ]\n _object_ops: list[str] = ["freq", "tz"]\n _field_ops: list[str] = [\n "year",\n "month",\n "day",\n "hour",\n "minute",\n "second",\n "weekday",\n "dayofweek",\n "day_of_week",\n "dayofyear",\n "day_of_year",\n "quarter",\n "days_in_month",\n "daysinmonth",\n "microsecond",\n "nanosecond",\n ]\n _other_ops: list[str] = ["date", "time", "timetz"]\n _datetimelike_ops: list[str] = (\n _field_ops + _object_ops + _bool_ops + _other_ops + ["unit"]\n )\n _datetimelike_methods: list[str] = [\n "to_period",\n "tz_localize",\n "tz_convert",\n "normalize",\n "strftime",\n "round",\n "floor",\n "ceil",\n "month_name",\n "day_name",\n "as_unit",\n ]\n\n # ndim is inherited from ExtensionArray, must exist to ensure\n # Timestamp.__richcmp__(DateTimeArray) operates pointwise\n\n # ensure that operations with numpy arrays defer to our implementation\n __array_priority__ = 1000\n\n # -----------------------------------------------------------------\n # Constructors\n\n _dtype: np.dtype[np.datetime64] | DatetimeTZDtype\n _freq: BaseOffset | None = None\n _default_dtype = DT64NS_DTYPE # used in TimeLikeOps.__init__\n\n @classmethod\n def _from_scalars(cls, scalars, *, dtype: DtypeObj) -> Self:\n if lib.infer_dtype(scalars, skipna=True) not in ["datetime", "datetime64"]:\n # TODO: require any NAs be valid-for-DTA\n # TODO: if dtype is passed, check for tzawareness compat?\n raise ValueError\n return cls._from_sequence(scalars, dtype=dtype)\n\n @classmethod\n def _validate_dtype(cls, values, dtype):\n # used in TimeLikeOps.__init__\n dtype = _validate_dt64_dtype(dtype)\n _validate_dt64_dtype(values.dtype)\n if isinstance(dtype, np.dtype):\n if values.dtype != dtype:\n raise ValueError("Values resolution does not match dtype.")\n else:\n vunit = np.datetime_data(values.dtype)[0]\n if vunit != dtype.unit:\n raise ValueError("Values resolution does not match dtype.")\n return dtype\n\n # error: Signature of "_simple_new" incompatible with supertype "NDArrayBacked"\n @classmethod\n def _simple_new( # type: ignore[override]\n cls,\n values: npt.NDArray[np.datetime64],\n freq: BaseOffset | None = None,\n dtype: np.dtype[np.datetime64] | DatetimeTZDtype = DT64NS_DTYPE,\n ) -> Self:\n assert isinstance(values, np.ndarray)\n assert dtype.kind == "M"\n if isinstance(dtype, np.dtype):\n assert dtype == values.dtype\n assert not is_unitless(dtype)\n else:\n # DatetimeTZDtype. If we have e.g. 
DatetimeTZDtype[us, UTC],\n # then values.dtype should be M8[us].\n assert dtype._creso == get_unit_from_dtype(values.dtype)\n\n result = super()._simple_new(values, dtype)\n result._freq = freq\n return result\n\n @classmethod\n def _from_sequence(cls, scalars, *, dtype=None, copy: bool = False):\n return cls._from_sequence_not_strict(scalars, dtype=dtype, copy=copy)\n\n @classmethod\n def _from_sequence_not_strict(\n cls,\n data,\n *,\n dtype=None,\n copy: bool = False,\n tz=lib.no_default,\n freq: str | BaseOffset | lib.NoDefault | None = lib.no_default,\n dayfirst: bool = False,\n yearfirst: bool = False,\n ambiguous: TimeAmbiguous = "raise",\n ) -> Self:\n """\n A non-strict version of _from_sequence, called from DatetimeIndex.__new__.\n """\n\n # if the user either explicitly passes tz=None or a tz-naive dtype, we\n # disallows inferring a tz.\n explicit_tz_none = tz is None\n if tz is lib.no_default:\n tz = None\n else:\n tz = timezones.maybe_get_tz(tz)\n\n dtype = _validate_dt64_dtype(dtype)\n # if dtype has an embedded tz, capture it\n tz = _validate_tz_from_dtype(dtype, tz, explicit_tz_none)\n\n unit = None\n if dtype is not None:\n unit = dtl.dtype_to_unit(dtype)\n\n data, copy = dtl.ensure_arraylike_for_datetimelike(\n data, copy, cls_name="DatetimeArray"\n )\n inferred_freq = None\n if isinstance(data, DatetimeArray):\n inferred_freq = data.freq\n\n subarr, tz = _sequence_to_dt64(\n data,\n copy=copy,\n tz=tz,\n dayfirst=dayfirst,\n yearfirst=yearfirst,\n ambiguous=ambiguous,\n out_unit=unit,\n )\n # We have to call this again after possibly inferring a tz above\n _validate_tz_from_dtype(dtype, tz, explicit_tz_none)\n if tz is not None and explicit_tz_none:\n raise ValueError(\n "Passed data is timezone-aware, incompatible with 'tz=None'. 
"\n "Use obj.tz_localize(None) instead."\n )\n\n data_unit = np.datetime_data(subarr.dtype)[0]\n data_dtype = tz_to_dtype(tz, data_unit)\n result = cls._simple_new(subarr, freq=inferred_freq, dtype=data_dtype)\n if unit is not None and unit != result.unit:\n # If unit was specified in user-passed dtype, cast to it here\n result = result.as_unit(unit)\n\n validate_kwds = {"ambiguous": ambiguous}\n result._maybe_pin_freq(freq, validate_kwds)\n return result\n\n @classmethod\n def _generate_range(\n cls,\n start,\n end,\n periods: int | None,\n freq,\n tz=None,\n normalize: bool = False,\n ambiguous: TimeAmbiguous = "raise",\n nonexistent: TimeNonexistent = "raise",\n inclusive: IntervalClosedType = "both",\n *,\n unit: str | None = None,\n ) -> Self:\n periods = dtl.validate_periods(periods)\n if freq is None and any(x is None for x in [periods, start, end]):\n raise ValueError("Must provide freq argument if no data is supplied")\n\n if com.count_not_none(start, end, periods, freq) != 3:\n raise ValueError(\n "Of the four parameters: start, end, periods, "\n "and freq, exactly three must be specified"\n )\n freq = to_offset(freq)\n\n if start is not None:\n start = Timestamp(start)\n\n if end is not None:\n end = Timestamp(end)\n\n if start is NaT or end is NaT:\n raise ValueError("Neither `start` nor `end` can be NaT")\n\n if unit is not None:\n if unit not in ["s", "ms", "us", "ns"]:\n raise ValueError("'unit' must be one of 's', 'ms', 'us', 'ns'")\n else:\n unit = "ns"\n\n if start is not None:\n start = start.as_unit(unit, round_ok=False)\n if end is not None:\n end = end.as_unit(unit, round_ok=False)\n\n left_inclusive, right_inclusive = validate_inclusive(inclusive)\n start, end = _maybe_normalize_endpoints(start, end, normalize)\n tz = _infer_tz_from_endpoints(start, end, tz)\n\n if tz is not None:\n # Localize the start and end arguments\n start = _maybe_localize_point(start, freq, tz, ambiguous, nonexistent)\n end = _maybe_localize_point(end, freq, tz, ambiguous, nonexistent)\n\n if freq is not None:\n # We break Day arithmetic (fixed 24 hour) here and opt for\n # Day to mean calendar day (23/24/25 hour). 
Therefore, strip\n # tz info from start and day to avoid DST arithmetic\n if isinstance(freq, Day):\n if start is not None:\n start = start.tz_localize(None)\n if end is not None:\n end = end.tz_localize(None)\n\n if isinstance(freq, Tick):\n i8values = generate_regular_range(start, end, periods, freq, unit=unit)\n else:\n xdr = _generate_range(\n start=start, end=end, periods=periods, offset=freq, unit=unit\n )\n i8values = np.array([x._value for x in xdr], dtype=np.int64)\n\n endpoint_tz = start.tz if start is not None else end.tz\n\n if tz is not None and endpoint_tz is None:\n if not timezones.is_utc(tz):\n # short-circuit tz_localize_to_utc which would make\n # an unnecessary copy with UTC but be a no-op.\n creso = abbrev_to_npy_unit(unit)\n i8values = tzconversion.tz_localize_to_utc(\n i8values,\n tz,\n ambiguous=ambiguous,\n nonexistent=nonexistent,\n creso=creso,\n )\n\n # i8values is localized datetime64 array -> have to convert\n # start/end as well to compare\n if start is not None:\n start = start.tz_localize(tz, ambiguous, nonexistent)\n if end is not None:\n end = end.tz_localize(tz, ambiguous, nonexistent)\n else:\n # Create a linearly spaced date_range in local time\n # Nanosecond-granularity timestamps aren't always correctly\n # representable with doubles, so we limit the range that we\n # pass to np.linspace as much as possible\n periods = cast(int, periods)\n i8values = (\n np.linspace(0, end._value - start._value, periods, dtype="int64")\n + start._value\n )\n if i8values.dtype != "i8":\n # 2022-01-09 I (brock) am not sure if it is possible for this\n # to overflow and cast to e.g. f8, but if it does we need to cast\n i8values = i8values.astype("i8")\n\n if start == end:\n if not left_inclusive and not right_inclusive:\n i8values = i8values[1:-1]\n else:\n start_i8 = Timestamp(start)._value\n end_i8 = Timestamp(end)._value\n if not left_inclusive or not right_inclusive:\n if not left_inclusive and len(i8values) and i8values[0] == start_i8:\n i8values = i8values[1:]\n if not right_inclusive and len(i8values) and i8values[-1] == end_i8:\n i8values = i8values[:-1]\n\n dt64_values = i8values.view(f"datetime64[{unit}]")\n dtype = tz_to_dtype(tz, unit=unit)\n return cls._simple_new(dt64_values, freq=freq, dtype=dtype)\n\n # -----------------------------------------------------------------\n # DatetimeLike Interface\n\n def _unbox_scalar(self, value) -> np.datetime64:\n if not isinstance(value, self._scalar_type) and value is not NaT:\n raise ValueError("'value' should be a Timestamp.")\n self._check_compatible_with(value)\n if value is NaT:\n return np.datetime64(value._value, self.unit)\n else:\n return value.as_unit(self.unit).asm8\n\n def _scalar_from_string(self, value) -> Timestamp | NaTType:\n return Timestamp(value, tz=self.tz)\n\n def _check_compatible_with(self, other) -> None:\n if other is NaT:\n return\n self._assert_tzawareness_compat(other)\n\n # -----------------------------------------------------------------\n # Descriptive Properties\n\n def _box_func(self, x: np.datetime64) -> Timestamp | NaTType:\n # GH#42228\n value = x.view("i8")\n ts = Timestamp._from_value_and_reso(value, reso=self._creso, tz=self.tz)\n return ts\n\n @property\n # error: Return type "Union[dtype, DatetimeTZDtype]" of "dtype"\n # incompatible with return type "ExtensionDtype" in supertype\n # "ExtensionArray"\n def dtype(self) -> np.dtype[np.datetime64] | DatetimeTZDtype: # type: ignore[override]\n """\n The dtype for the DatetimeArray.\n\n .. 
warning::\n\n A future version of pandas will change dtype to never be a\n ``numpy.dtype``. Instead, :attr:`DatetimeArray.dtype` will\n always be an instance of an ``ExtensionDtype`` subclass.\n\n Returns\n -------\n numpy.dtype or DatetimeTZDtype\n If the values are tz-naive, then ``np.dtype('datetime64[ns]')``\n is returned.\n\n If the values are tz-aware, then the ``DatetimeTZDtype``\n is returned.\n """\n return self._dtype\n\n @property\n def tz(self) -> tzinfo | None:\n """\n Return the timezone.\n\n Returns\n -------\n datetime.tzinfo, pytz.tzinfo.BaseTZInfo, dateutil.tz.tz.tzfile, or None\n Returns None when the array is tz-naive.\n\n Examples\n --------\n For Series:\n\n >>> s = pd.Series(["1/1/2020 10:00:00+00:00", "2/1/2020 11:00:00+00:00"])\n >>> s = pd.to_datetime(s)\n >>> s\n 0 2020-01-01 10:00:00+00:00\n 1 2020-02-01 11:00:00+00:00\n dtype: datetime64[ns, UTC]\n >>> s.dt.tz\n datetime.timezone.utc\n\n For DatetimeIndex:\n\n >>> idx = pd.DatetimeIndex(["1/1/2020 10:00:00+00:00",\n ... "2/1/2020 11:00:00+00:00"])\n >>> idx.tz\n datetime.timezone.utc\n """\n # GH 18595\n return getattr(self.dtype, "tz", None)\n\n @tz.setter\n def tz(self, value):\n # GH 3746: Prevent localizing or converting the index by setting tz\n raise AttributeError(\n "Cannot directly set timezone. Use tz_localize() "\n "or tz_convert() as appropriate"\n )\n\n @property\n def tzinfo(self) -> tzinfo | None:\n """\n Alias for tz attribute\n """\n return self.tz\n\n @property # NB: override with cache_readonly in immutable subclasses\n def is_normalized(self) -> bool:\n """\n Returns True if all of the dates are at midnight ("no time")\n """\n return is_date_array_normalized(self.asi8, self.tz, reso=self._creso)\n\n @property # NB: override with cache_readonly in immutable subclasses\n def _resolution_obj(self) -> Resolution:\n return get_resolution(self.asi8, self.tz, reso=self._creso)\n\n # ----------------------------------------------------------------\n # Array-Like / EA-Interface Methods\n\n def __array__(self, dtype=None, copy=None) -> np.ndarray:\n if dtype is None and self.tz:\n # The default for tz-aware is object, to preserve tz info\n dtype = object\n\n return super().__array__(dtype=dtype, copy=copy)\n\n def __iter__(self) -> Iterator:\n """\n Return an iterator over the boxed values\n\n Yields\n ------\n tstamp : Timestamp\n """\n if self.ndim > 1:\n for i in range(len(self)):\n yield self[i]\n else:\n # convert in chunks of 10k for efficiency\n data = self.asi8\n length = len(self)\n chunksize = _ITER_CHUNKSIZE\n chunks = (length // chunksize) + 1\n\n for i in range(chunks):\n start_i = i * chunksize\n end_i = min((i + 1) * chunksize, length)\n converted = ints_to_pydatetime(\n data[start_i:end_i],\n tz=self.tz,\n box="timestamp",\n reso=self._creso,\n )\n yield from converted\n\n def astype(self, dtype, copy: bool = True):\n # We handle\n # --> datetime\n # --> period\n # DatetimeLikeArrayMixin Super handles the rest.\n dtype = pandas_dtype(dtype)\n\n if dtype == self.dtype:\n if copy:\n return self.copy()\n return self\n\n elif isinstance(dtype, ExtensionDtype):\n if not isinstance(dtype, DatetimeTZDtype):\n # e.g. Sparse[datetime64[ns]]\n return super().astype(dtype, copy=copy)\n elif self.tz is None:\n # pre-2.0 this did self.tz_localize(dtype.tz), which did not match\n # the Series behavior which did\n # values.tz_localize("UTC").tz_convert(dtype.tz)\n raise TypeError(\n "Cannot use .astype to convert from timezone-naive dtype to "\n "timezone-aware dtype. 
Use obj.tz_localize instead or "\n "series.dt.tz_localize instead"\n )\n else:\n # tzaware unit conversion e.g. datetime64[s, UTC]\n np_dtype = np.dtype(dtype.str)\n res_values = astype_overflowsafe(self._ndarray, np_dtype, copy=copy)\n return type(self)._simple_new(res_values, dtype=dtype, freq=self.freq)\n\n elif (\n self.tz is None\n and lib.is_np_dtype(dtype, "M")\n and not is_unitless(dtype)\n and is_supported_dtype(dtype)\n ):\n # unit conversion e.g. datetime64[s]\n res_values = astype_overflowsafe(self._ndarray, dtype, copy=True)\n return type(self)._simple_new(res_values, dtype=res_values.dtype)\n # TODO: preserve freq?\n\n elif self.tz is not None and lib.is_np_dtype(dtype, "M"):\n # pre-2.0 behavior for DTA/DTI was\n # values.tz_convert("UTC").tz_localize(None), which did not match\n # the Series behavior\n raise TypeError(\n "Cannot use .astype to convert from timezone-aware dtype to "\n "timezone-naive dtype. Use obj.tz_localize(None) or "\n "obj.tz_convert('UTC').tz_localize(None) instead."\n )\n\n elif (\n self.tz is None\n and lib.is_np_dtype(dtype, "M")\n and dtype != self.dtype\n and is_unitless(dtype)\n ):\n raise TypeError(\n "Casting to unit-less dtype 'datetime64' is not supported. "\n "Pass e.g. 'datetime64[ns]' instead."\n )\n\n elif isinstance(dtype, PeriodDtype):\n return self.to_period(freq=dtype.freq)\n return dtl.DatetimeLikeArrayMixin.astype(self, dtype, copy)\n\n # -----------------------------------------------------------------\n # Rendering Methods\n\n def _format_native_types(\n self, *, na_rep: str | float = "NaT", date_format=None, **kwargs\n ) -> npt.NDArray[np.object_]:\n if date_format is None and self._is_dates_only:\n # Only dates and no timezone: provide a default format\n date_format = "%Y-%m-%d"\n\n return tslib.format_array_from_datetime(\n self.asi8, tz=self.tz, format=date_format, na_rep=na_rep, reso=self._creso\n )\n\n # -----------------------------------------------------------------\n # Comparison Methods\n\n def _has_same_tz(self, other) -> bool:\n # vzone shouldn't be None if value is non-datetime like\n if isinstance(other, np.datetime64):\n # convert to Timestamp as np.datetime64 doesn't have tz attr\n other = Timestamp(other)\n\n if not hasattr(other, "tzinfo"):\n return False\n other_tz = other.tzinfo\n return timezones.tz_compare(self.tzinfo, other_tz)\n\n def _assert_tzawareness_compat(self, other) -> None:\n # adapted from _Timestamp._assert_tzawareness_compat\n other_tz = getattr(other, "tzinfo", None)\n other_dtype = getattr(other, "dtype", None)\n\n if isinstance(other_dtype, DatetimeTZDtype):\n # Get tzinfo from Series dtype\n other_tz = other.dtype.tz\n if other is NaT:\n # pd.NaT quacks both aware and naive\n pass\n elif self.tz is None:\n if other_tz is not None:\n raise TypeError(\n "Cannot compare tz-naive and tz-aware datetime-like objects."\n )\n elif other_tz is None:\n raise TypeError(\n "Cannot compare tz-naive and tz-aware datetime-like objects"\n )\n\n # -----------------------------------------------------------------\n # Arithmetic Methods\n\n def _add_offset(self, offset: BaseOffset) -> Self:\n assert not isinstance(offset, Tick)\n\n if self.tz is not None:\n values = self.tz_localize(None)\n else:\n values = self\n\n try:\n res_values = offset._apply_array(values._ndarray)\n if res_values.dtype.kind == "i":\n # error: Argument 1 to "view" of "ndarray" has incompatible type\n # "dtype[datetime64] | DatetimeTZDtype"; expected\n # "dtype[Any] | type[Any] | _SupportsDType[dtype[Any]]"\n res_values = 
res_values.view(values.dtype) # type: ignore[arg-type]\n except NotImplementedError:\n warnings.warn(\n "Non-vectorized DateOffset being applied to Series or DatetimeIndex.",\n PerformanceWarning,\n stacklevel=find_stack_level(),\n )\n res_values = self.astype("O") + offset\n # TODO(GH#55564): as_unit will be unnecessary\n result = type(self)._from_sequence(res_values).as_unit(self.unit)\n if not len(self):\n # GH#30336 _from_sequence won't be able to infer self.tz\n return result.tz_localize(self.tz)\n\n else:\n result = type(self)._simple_new(res_values, dtype=res_values.dtype)\n if offset.normalize:\n result = result.normalize()\n result._freq = None\n\n if self.tz is not None:\n result = result.tz_localize(self.tz)\n\n return result\n\n # -----------------------------------------------------------------\n # Timezone Conversion and Localization Methods\n\n def _local_timestamps(self) -> npt.NDArray[np.int64]:\n """\n Convert to an i8 (unix-like nanosecond timestamp) representation\n while keeping the local timezone and not using UTC.\n This is used to calculate time-of-day information as if the timestamps\n were timezone-naive.\n """\n if self.tz is None or timezones.is_utc(self.tz):\n # Avoid the copy that would be made in tzconversion\n return self.asi8\n return tz_convert_from_utc(self.asi8, self.tz, reso=self._creso)\n\n def tz_convert(self, tz) -> Self:\n """\n Convert tz-aware Datetime Array/Index from one time zone to another.\n\n Parameters\n ----------\n tz : str, pytz.timezone, dateutil.tz.tzfile, datetime.tzinfo or None\n Time zone for time. Corresponding timestamps would be converted\n to this time zone of the Datetime Array/Index. A `tz` of None will\n convert to UTC and remove the timezone information.\n\n Returns\n -------\n Array or Index\n\n Raises\n ------\n TypeError\n If Datetime Array/Index is tz-naive.\n\n See Also\n --------\n DatetimeIndex.tz : A timezone that has a variable offset from UTC.\n DatetimeIndex.tz_localize : Localize tz-naive DatetimeIndex to a\n given time zone, or remove timezone from a tz-aware DatetimeIndex.\n\n Examples\n --------\n With the `tz` parameter, we can change the DatetimeIndex\n to other time zones:\n\n >>> dti = pd.date_range(start='2014-08-01 09:00',\n ... freq='h', periods=3, tz='Europe/Berlin')\n\n >>> dti\n DatetimeIndex(['2014-08-01 09:00:00+02:00',\n '2014-08-01 10:00:00+02:00',\n '2014-08-01 11:00:00+02:00'],\n dtype='datetime64[ns, Europe/Berlin]', freq='h')\n\n >>> dti.tz_convert('US/Central')\n DatetimeIndex(['2014-08-01 02:00:00-05:00',\n '2014-08-01 03:00:00-05:00',\n '2014-08-01 04:00:00-05:00'],\n dtype='datetime64[ns, US/Central]', freq='h')\n\n With the ``tz=None``, we can remove the timezone (after converting\n to UTC if necessary):\n\n >>> dti = pd.date_range(start='2014-08-01 09:00', freq='h',\n ... 
periods=3, tz='Europe/Berlin')\n\n >>> dti\n DatetimeIndex(['2014-08-01 09:00:00+02:00',\n '2014-08-01 10:00:00+02:00',\n '2014-08-01 11:00:00+02:00'],\n dtype='datetime64[ns, Europe/Berlin]', freq='h')\n\n >>> dti.tz_convert(None)\n DatetimeIndex(['2014-08-01 07:00:00',\n '2014-08-01 08:00:00',\n '2014-08-01 09:00:00'],\n dtype='datetime64[ns]', freq='h')\n """\n tz = timezones.maybe_get_tz(tz)\n\n if self.tz is None:\n # tz naive, use tz_localize\n raise TypeError(\n "Cannot convert tz-naive timestamps, use tz_localize to localize"\n )\n\n # No conversion since timestamps are all UTC to begin with\n dtype = tz_to_dtype(tz, unit=self.unit)\n return self._simple_new(self._ndarray, dtype=dtype, freq=self.freq)\n\n @dtl.ravel_compat\n def tz_localize(\n self,\n tz,\n ambiguous: TimeAmbiguous = "raise",\n nonexistent: TimeNonexistent = "raise",\n ) -> Self:\n """\n Localize tz-naive Datetime Array/Index to tz-aware Datetime Array/Index.\n\n This method takes a time zone (tz) naive Datetime Array/Index object\n and makes this time zone aware. It does not move the time to another\n time zone.\n\n This method can also be used to do the inverse -- to create a time\n zone unaware object from an aware object. To that end, pass `tz=None`.\n\n Parameters\n ----------\n tz : str, pytz.timezone, dateutil.tz.tzfile, datetime.tzinfo or None\n Time zone to convert timestamps to. Passing ``None`` will\n remove the time zone information preserving local time.\n ambiguous : 'infer', 'NaT', bool array, default 'raise'\n When clocks moved backward due to DST, ambiguous times may arise.\n For example in Central European Time (UTC+01), when going from\n 03:00 DST to 02:00 non-DST, 02:30:00 local time occurs both at\n 00:30:00 UTC and at 01:30:00 UTC. In such a situation, the\n `ambiguous` parameter dictates how ambiguous times should be\n handled.\n\n - 'infer' will attempt to infer fall dst-transition hours based on\n order\n - bool-ndarray where True signifies a DST time, False signifies a\n non-DST time (note that this flag is only applicable for\n ambiguous times)\n - 'NaT' will return NaT where there are ambiguous times\n - 'raise' will raise an AmbiguousTimeError if there are ambiguous\n times.\n\n nonexistent : 'shift_forward', 'shift_backward, 'NaT', timedelta, \\ndefault 'raise'\n A nonexistent time does not exist in a particular timezone\n where clocks moved forward due to DST.\n\n - 'shift_forward' will shift the nonexistent time forward to the\n closest existing time\n - 'shift_backward' will shift the nonexistent time backward to the\n closest existing time\n - 'NaT' will return NaT where there are nonexistent times\n - timedelta objects will shift nonexistent times by the timedelta\n - 'raise' will raise an NonExistentTimeError if there are\n nonexistent times.\n\n Returns\n -------\n Same type as self\n Array/Index converted to the specified time zone.\n\n Raises\n ------\n TypeError\n If the Datetime Array/Index is tz-aware and tz is not None.\n\n See Also\n --------\n DatetimeIndex.tz_convert : Convert tz-aware DatetimeIndex from\n one time zone to another.\n\n Examples\n --------\n >>> tz_naive = pd.date_range('2018-03-01 09:00', periods=3)\n >>> tz_naive\n DatetimeIndex(['2018-03-01 09:00:00', '2018-03-02 09:00:00',\n '2018-03-03 09:00:00'],\n dtype='datetime64[ns]', freq='D')\n\n Localize DatetimeIndex in US/Eastern time zone:\n\n >>> tz_aware = tz_naive.tz_localize(tz='US/Eastern')\n >>> tz_aware\n DatetimeIndex(['2018-03-01 09:00:00-05:00',\n '2018-03-02 09:00:00-05:00',\n '2018-03-03 
09:00:00-05:00'],\n dtype='datetime64[ns, US/Eastern]', freq=None)\n\n With the ``tz=None``, we can remove the time zone information\n while keeping the local time (not converted to UTC):\n\n >>> tz_aware.tz_localize(None)\n DatetimeIndex(['2018-03-01 09:00:00', '2018-03-02 09:00:00',\n '2018-03-03 09:00:00'],\n dtype='datetime64[ns]', freq=None)\n\n Be careful with DST changes. When there is sequential data, pandas can\n infer the DST time:\n\n >>> s = pd.to_datetime(pd.Series(['2018-10-28 01:30:00',\n ... '2018-10-28 02:00:00',\n ... '2018-10-28 02:30:00',\n ... '2018-10-28 02:00:00',\n ... '2018-10-28 02:30:00',\n ... '2018-10-28 03:00:00',\n ... '2018-10-28 03:30:00']))\n >>> s.dt.tz_localize('CET', ambiguous='infer')\n 0 2018-10-28 01:30:00+02:00\n 1 2018-10-28 02:00:00+02:00\n 2 2018-10-28 02:30:00+02:00\n 3 2018-10-28 02:00:00+01:00\n 4 2018-10-28 02:30:00+01:00\n 5 2018-10-28 03:00:00+01:00\n 6 2018-10-28 03:30:00+01:00\n dtype: datetime64[ns, CET]\n\n In some cases, inferring the DST is impossible. In such cases, you can\n pass an ndarray to the ambiguous parameter to set the DST explicitly\n\n >>> s = pd.to_datetime(pd.Series(['2018-10-28 01:20:00',\n ... '2018-10-28 02:36:00',\n ... '2018-10-28 03:46:00']))\n >>> s.dt.tz_localize('CET', ambiguous=np.array([True, True, False]))\n 0 2018-10-28 01:20:00+02:00\n 1 2018-10-28 02:36:00+02:00\n 2 2018-10-28 03:46:00+01:00\n dtype: datetime64[ns, CET]\n\n If the DST transition causes nonexistent times, you can shift these\n dates forward or backwards with a timedelta object or `'shift_forward'`\n or `'shift_backwards'`.\n\n >>> s = pd.to_datetime(pd.Series(['2015-03-29 02:30:00',\n ... '2015-03-29 03:30:00']))\n >>> s.dt.tz_localize('Europe/Warsaw', nonexistent='shift_forward')\n 0 2015-03-29 03:00:00+02:00\n 1 2015-03-29 03:30:00+02:00\n dtype: datetime64[ns, Europe/Warsaw]\n\n >>> s.dt.tz_localize('Europe/Warsaw', nonexistent='shift_backward')\n 0 2015-03-29 01:59:59.999999999+01:00\n 1 2015-03-29 03:30:00+02:00\n dtype: datetime64[ns, Europe/Warsaw]\n\n >>> s.dt.tz_localize('Europe/Warsaw', nonexistent=pd.Timedelta('1h'))\n 0 2015-03-29 03:30:00+02:00\n 1 2015-03-29 03:30:00+02:00\n dtype: datetime64[ns, Europe/Warsaw]\n """\n nonexistent_options = ("raise", "NaT", "shift_forward", "shift_backward")\n if nonexistent not in nonexistent_options and not isinstance(\n nonexistent, timedelta\n ):\n raise ValueError(\n "The nonexistent argument must be one of 'raise', "\n "'NaT', 'shift_forward', 'shift_backward' or "\n "a timedelta object"\n )\n\n if self.tz is not None:\n if tz is None:\n new_dates = tz_convert_from_utc(self.asi8, self.tz, reso=self._creso)\n else:\n raise TypeError("Already tz-aware, use tz_convert to convert.")\n else:\n tz = timezones.maybe_get_tz(tz)\n # Convert to UTC\n\n new_dates = tzconversion.tz_localize_to_utc(\n self.asi8,\n tz,\n ambiguous=ambiguous,\n nonexistent=nonexistent,\n creso=self._creso,\n )\n new_dates_dt64 = new_dates.view(f"M8[{self.unit}]")\n dtype = tz_to_dtype(tz, unit=self.unit)\n\n freq = None\n if timezones.is_utc(tz) or (len(self) == 1 and not isna(new_dates_dt64[0])):\n # we can preserve freq\n # TODO: Also for fixed-offsets\n freq = self.freq\n elif tz is None and self.tz is None:\n # no-op\n freq = self.freq\n return self._simple_new(new_dates_dt64, dtype=dtype, freq=freq)\n\n # ----------------------------------------------------------------\n # Conversion Methods - Vectorized analogues of Timestamp methods\n\n def to_pydatetime(self) -> npt.NDArray[np.object_]:\n """\n Return an 
ndarray of ``datetime.datetime`` objects.\n\n Returns\n -------\n numpy.ndarray\n\n Examples\n --------\n >>> idx = pd.date_range('2018-02-27', periods=3)\n >>> idx.to_pydatetime()\n array([datetime.datetime(2018, 2, 27, 0, 0),\n datetime.datetime(2018, 2, 28, 0, 0),\n datetime.datetime(2018, 3, 1, 0, 0)], dtype=object)\n """\n return ints_to_pydatetime(self.asi8, tz=self.tz, reso=self._creso)\n\n def normalize(self) -> Self:\n """\n Convert times to midnight.\n\n The time component of the date-time is converted to midnight i.e.\n 00:00:00. This is useful in cases, when the time does not matter.\n Length is unaltered. The timezones are unaffected.\n\n This method is available on Series with datetime values under\n the ``.dt`` accessor, and directly on Datetime Array/Index.\n\n Returns\n -------\n DatetimeArray, DatetimeIndex or Series\n The same type as the original data. Series will have the same\n name and index. DatetimeIndex will have the same name.\n\n See Also\n --------\n floor : Floor the datetimes to the specified freq.\n ceil : Ceil the datetimes to the specified freq.\n round : Round the datetimes to the specified freq.\n\n Examples\n --------\n >>> idx = pd.date_range(start='2014-08-01 10:00', freq='h',\n ... periods=3, tz='Asia/Calcutta')\n >>> idx\n DatetimeIndex(['2014-08-01 10:00:00+05:30',\n '2014-08-01 11:00:00+05:30',\n '2014-08-01 12:00:00+05:30'],\n dtype='datetime64[ns, Asia/Calcutta]', freq='h')\n >>> idx.normalize()\n DatetimeIndex(['2014-08-01 00:00:00+05:30',\n '2014-08-01 00:00:00+05:30',\n '2014-08-01 00:00:00+05:30'],\n dtype='datetime64[ns, Asia/Calcutta]', freq=None)\n """\n new_values = normalize_i8_timestamps(self.asi8, self.tz, reso=self._creso)\n dt64_values = new_values.view(self._ndarray.dtype)\n\n dta = type(self)._simple_new(dt64_values, dtype=dt64_values.dtype)\n dta = dta._with_freq("infer")\n if self.tz is not None:\n dta = dta.tz_localize(self.tz)\n return dta\n\n def to_period(self, freq=None) -> PeriodArray:\n """\n Cast to PeriodArray/PeriodIndex at a particular frequency.\n\n Converts DatetimeArray/Index to PeriodArray/PeriodIndex.\n\n Parameters\n ----------\n freq : str or Period, optional\n One of pandas' :ref:`period aliases <timeseries.period_aliases>`\n or an Period object. Will be inferred by default.\n\n Returns\n -------\n PeriodArray/PeriodIndex\n\n Raises\n ------\n ValueError\n When converting a DatetimeArray/Index with non-regular values,\n so that a frequency cannot be inferred.\n\n See Also\n --------\n PeriodIndex: Immutable ndarray holding ordinal values.\n DatetimeIndex.to_pydatetime: Return DatetimeIndex as object.\n\n Examples\n --------\n >>> df = pd.DataFrame({"y": [1, 2, 3]},\n ... index=pd.to_datetime(["2000-03-31 00:00:00",\n ... "2000-05-31 00:00:00",\n ... 
"2000-08-31 00:00:00"]))\n >>> df.index.to_period("M")\n PeriodIndex(['2000-03', '2000-05', '2000-08'],\n dtype='period[M]')\n\n Infer the daily frequency\n\n >>> idx = pd.date_range("2017-01-01", periods=2)\n >>> idx.to_period()\n PeriodIndex(['2017-01-01', '2017-01-02'],\n dtype='period[D]')\n """\n from pandas.core.arrays import PeriodArray\n\n if self.tz is not None:\n warnings.warn(\n "Converting to PeriodArray/Index representation "\n "will drop timezone information.",\n UserWarning,\n stacklevel=find_stack_level(),\n )\n\n if freq is None:\n freq = self.freqstr or self.inferred_freq\n if isinstance(self.freq, BaseOffset) and hasattr(\n self.freq, "_period_dtype_code"\n ):\n freq = PeriodDtype(self.freq)._freqstr\n\n if freq is None:\n raise ValueError(\n "You must pass a freq argument as current index has none."\n )\n\n res = get_period_alias(freq)\n\n # https://github.com/pandas-dev/pandas/issues/33358\n if res is None:\n res = freq\n\n freq = res\n return PeriodArray._from_datetime64(self._ndarray, freq, tz=self.tz)\n\n # -----------------------------------------------------------------\n # Properties - Vectorized Timestamp Properties/Methods\n\n def month_name(self, locale=None) -> npt.NDArray[np.object_]:\n """\n Return the month names with specified locale.\n\n Parameters\n ----------\n locale : str, optional\n Locale determining the language in which to return the month name.\n Default is English locale (``'en_US.utf8'``). Use the command\n ``locale -a`` on your terminal on Unix systems to find your locale\n language code.\n\n Returns\n -------\n Series or Index\n Series or Index of month names.\n\n Examples\n --------\n >>> s = pd.Series(pd.date_range(start='2018-01', freq='ME', periods=3))\n >>> s\n 0 2018-01-31\n 1 2018-02-28\n 2 2018-03-31\n dtype: datetime64[ns]\n >>> s.dt.month_name()\n 0 January\n 1 February\n 2 March\n dtype: object\n\n >>> idx = pd.date_range(start='2018-01', freq='ME', periods=3)\n >>> idx\n DatetimeIndex(['2018-01-31', '2018-02-28', '2018-03-31'],\n dtype='datetime64[ns]', freq='ME')\n >>> idx.month_name()\n Index(['January', 'February', 'March'], dtype='object')\n\n Using the ``locale`` parameter you can set a different locale language,\n for example: ``idx.month_name(locale='pt_BR.utf8')`` will return month\n names in Brazilian Portuguese language.\n\n >>> idx = pd.date_range(start='2018-01', freq='ME', periods=3)\n >>> idx\n DatetimeIndex(['2018-01-31', '2018-02-28', '2018-03-31'],\n dtype='datetime64[ns]', freq='ME')\n >>> idx.month_name(locale='pt_BR.utf8') # doctest: +SKIP\n Index(['Janeiro', 'Fevereiro', 'Março'], dtype='object')\n """\n values = self._local_timestamps()\n\n result = fields.get_date_name_field(\n values, "month_name", locale=locale, reso=self._creso\n )\n result = self._maybe_mask_results(result, fill_value=None)\n if using_string_dtype():\n from pandas import (\n StringDtype,\n array as pd_array,\n )\n\n return pd_array(result, dtype=StringDtype(na_value=np.nan)) # type: ignore[return-value]\n return result\n\n def day_name(self, locale=None) -> npt.NDArray[np.object_]:\n """\n Return the day names with specified locale.\n\n Parameters\n ----------\n locale : str, optional\n Locale determining the language in which to return the day name.\n Default is English locale (``'en_US.utf8'``). 
Use the command\n ``locale -a`` on your terminal on Unix systems to find your locale\n language code.\n\n Returns\n -------\n Series or Index\n Series or Index of day names.\n\n Examples\n --------\n >>> s = pd.Series(pd.date_range(start='2018-01-01', freq='D', periods=3))\n >>> s\n 0 2018-01-01\n 1 2018-01-02\n 2 2018-01-03\n dtype: datetime64[ns]\n >>> s.dt.day_name()\n 0 Monday\n 1 Tuesday\n 2 Wednesday\n dtype: object\n\n >>> idx = pd.date_range(start='2018-01-01', freq='D', periods=3)\n >>> idx\n DatetimeIndex(['2018-01-01', '2018-01-02', '2018-01-03'],\n dtype='datetime64[ns]', freq='D')\n >>> idx.day_name()\n Index(['Monday', 'Tuesday', 'Wednesday'], dtype='object')\n\n Using the ``locale`` parameter you can set a different locale language,\n for example: ``idx.day_name(locale='pt_BR.utf8')`` will return day\n names in Brazilian Portuguese language.\n\n >>> idx = pd.date_range(start='2018-01-01', freq='D', periods=3)\n >>> idx\n DatetimeIndex(['2018-01-01', '2018-01-02', '2018-01-03'],\n dtype='datetime64[ns]', freq='D')\n >>> idx.day_name(locale='pt_BR.utf8') # doctest: +SKIP\n Index(['Segunda', 'Terça', 'Quarta'], dtype='object')\n """\n values = self._local_timestamps()\n\n result = fields.get_date_name_field(\n values, "day_name", locale=locale, reso=self._creso\n )\n result = self._maybe_mask_results(result, fill_value=None)\n if using_string_dtype():\n # TODO: no tests that check for dtype of result as of 2024-08-15\n from pandas import (\n StringDtype,\n array as pd_array,\n )\n\n return pd_array(result, dtype=StringDtype(na_value=np.nan)) # type: ignore[return-value]\n return result\n\n @property\n def time(self) -> npt.NDArray[np.object_]:\n """\n Returns numpy array of :class:`datetime.time` objects.\n\n The time part of the Timestamps.\n\n Examples\n --------\n For Series:\n\n >>> s = pd.Series(["1/1/2020 10:00:00+00:00", "2/1/2020 11:00:00+00:00"])\n >>> s = pd.to_datetime(s)\n >>> s\n 0 2020-01-01 10:00:00+00:00\n 1 2020-02-01 11:00:00+00:00\n dtype: datetime64[ns, UTC]\n >>> s.dt.time\n 0 10:00:00\n 1 11:00:00\n dtype: object\n\n For DatetimeIndex:\n\n >>> idx = pd.DatetimeIndex(["1/1/2020 10:00:00+00:00",\n ... "2/1/2020 11:00:00+00:00"])\n >>> idx.time\n array([datetime.time(10, 0), datetime.time(11, 0)], dtype=object)\n """\n # If the Timestamps have a timezone that is not UTC,\n # convert them into their i8 representation while\n # keeping their timezone and not using UTC\n timestamps = self._local_timestamps()\n\n return ints_to_pydatetime(timestamps, box="time", reso=self._creso)\n\n @property\n def timetz(self) -> npt.NDArray[np.object_]:\n """\n Returns numpy array of :class:`datetime.time` objects with timezones.\n\n The time part of the Timestamps.\n\n Examples\n --------\n For Series:\n\n >>> s = pd.Series(["1/1/2020 10:00:00+00:00", "2/1/2020 11:00:00+00:00"])\n >>> s = pd.to_datetime(s)\n >>> s\n 0 2020-01-01 10:00:00+00:00\n 1 2020-02-01 11:00:00+00:00\n dtype: datetime64[ns, UTC]\n >>> s.dt.timetz\n 0 10:00:00+00:00\n 1 11:00:00+00:00\n dtype: object\n\n For DatetimeIndex:\n\n >>> idx = pd.DatetimeIndex(["1/1/2020 10:00:00+00:00",\n ... 
"2/1/2020 11:00:00+00:00"])\n >>> idx.timetz\n array([datetime.time(10, 0, tzinfo=datetime.timezone.utc),\n datetime.time(11, 0, tzinfo=datetime.timezone.utc)], dtype=object)\n """\n return ints_to_pydatetime(self.asi8, self.tz, box="time", reso=self._creso)\n\n @property\n def date(self) -> npt.NDArray[np.object_]:\n """\n Returns numpy array of python :class:`datetime.date` objects.\n\n Namely, the date part of Timestamps without time and\n timezone information.\n\n Examples\n --------\n For Series:\n\n >>> s = pd.Series(["1/1/2020 10:00:00+00:00", "2/1/2020 11:00:00+00:00"])\n >>> s = pd.to_datetime(s)\n >>> s\n 0 2020-01-01 10:00:00+00:00\n 1 2020-02-01 11:00:00+00:00\n dtype: datetime64[ns, UTC]\n >>> s.dt.date\n 0 2020-01-01\n 1 2020-02-01\n dtype: object\n\n For DatetimeIndex:\n\n >>> idx = pd.DatetimeIndex(["1/1/2020 10:00:00+00:00",\n ... "2/1/2020 11:00:00+00:00"])\n >>> idx.date\n array([datetime.date(2020, 1, 1), datetime.date(2020, 2, 1)], dtype=object)\n """\n # If the Timestamps have a timezone that is not UTC,\n # convert them into their i8 representation while\n # keeping their timezone and not using UTC\n timestamps = self._local_timestamps()\n\n return ints_to_pydatetime(timestamps, box="date", reso=self._creso)\n\n def isocalendar(self) -> DataFrame:\n """\n Calculate year, week, and day according to the ISO 8601 standard.\n\n Returns\n -------\n DataFrame\n With columns year, week and day.\n\n See Also\n --------\n Timestamp.isocalendar : Function return a 3-tuple containing ISO year,\n week number, and weekday for the given Timestamp object.\n datetime.date.isocalendar : Return a named tuple object with\n three components: year, week and weekday.\n\n Examples\n --------\n >>> idx = pd.date_range(start='2019-12-29', freq='D', periods=4)\n >>> idx.isocalendar()\n year week day\n 2019-12-29 2019 52 7\n 2019-12-30 2020 1 1\n 2019-12-31 2020 1 2\n 2020-01-01 2020 1 3\n >>> idx.isocalendar().week\n 2019-12-29 52\n 2019-12-30 1\n 2019-12-31 1\n 2020-01-01 1\n Freq: D, Name: week, dtype: UInt32\n """\n from pandas import DataFrame\n\n values = self._local_timestamps()\n sarray = fields.build_isocalendar_sarray(values, reso=self._creso)\n iso_calendar_df = DataFrame(\n sarray, columns=["year", "week", "day"], dtype="UInt32"\n )\n if self._hasna:\n iso_calendar_df.iloc[self._isnan] = None\n return iso_calendar_df\n\n year = _field_accessor(\n "year",\n "Y",\n """\n The year of the datetime.\n\n Examples\n --------\n >>> datetime_series = pd.Series(\n ... pd.date_range("2000-01-01", periods=3, freq="YE")\n ... )\n >>> datetime_series\n 0 2000-12-31\n 1 2001-12-31\n 2 2002-12-31\n dtype: datetime64[ns]\n >>> datetime_series.dt.year\n 0 2000\n 1 2001\n 2 2002\n dtype: int32\n """,\n )\n month = _field_accessor(\n "month",\n "M",\n """\n The month as January=1, December=12.\n\n Examples\n --------\n >>> datetime_series = pd.Series(\n ... pd.date_range("2000-01-01", periods=3, freq="ME")\n ... )\n >>> datetime_series\n 0 2000-01-31\n 1 2000-02-29\n 2 2000-03-31\n dtype: datetime64[ns]\n >>> datetime_series.dt.month\n 0 1\n 1 2\n 2 3\n dtype: int32\n """,\n )\n day = _field_accessor(\n "day",\n "D",\n """\n The day of the datetime.\n\n Examples\n --------\n >>> datetime_series = pd.Series(\n ... pd.date_range("2000-01-01", periods=3, freq="D")\n ... 
)\n >>> datetime_series\n 0 2000-01-01\n 1 2000-01-02\n 2 2000-01-03\n dtype: datetime64[ns]\n >>> datetime_series.dt.day\n 0 1\n 1 2\n 2 3\n dtype: int32\n """,\n )\n hour = _field_accessor(\n "hour",\n "h",\n """\n The hours of the datetime.\n\n Examples\n --------\n >>> datetime_series = pd.Series(\n ... pd.date_range("2000-01-01", periods=3, freq="h")\n ... )\n >>> datetime_series\n 0 2000-01-01 00:00:00\n 1 2000-01-01 01:00:00\n 2 2000-01-01 02:00:00\n dtype: datetime64[ns]\n >>> datetime_series.dt.hour\n 0 0\n 1 1\n 2 2\n dtype: int32\n """,\n )\n minute = _field_accessor(\n "minute",\n "m",\n """\n The minutes of the datetime.\n\n Examples\n --------\n >>> datetime_series = pd.Series(\n ... pd.date_range("2000-01-01", periods=3, freq="min")\n ... )\n >>> datetime_series\n 0 2000-01-01 00:00:00\n 1 2000-01-01 00:01:00\n 2 2000-01-01 00:02:00\n dtype: datetime64[ns]\n >>> datetime_series.dt.minute\n 0 0\n 1 1\n 2 2\n dtype: int32\n """,\n )\n second = _field_accessor(\n "second",\n "s",\n """\n The seconds of the datetime.\n\n Examples\n --------\n >>> datetime_series = pd.Series(\n ... pd.date_range("2000-01-01", periods=3, freq="s")\n ... )\n >>> datetime_series\n 0 2000-01-01 00:00:00\n 1 2000-01-01 00:00:01\n 2 2000-01-01 00:00:02\n dtype: datetime64[ns]\n >>> datetime_series.dt.second\n 0 0\n 1 1\n 2 2\n dtype: int32\n """,\n )\n microsecond = _field_accessor(\n "microsecond",\n "us",\n """\n The microseconds of the datetime.\n\n Examples\n --------\n >>> datetime_series = pd.Series(\n ... pd.date_range("2000-01-01", periods=3, freq="us")\n ... )\n >>> datetime_series\n 0 2000-01-01 00:00:00.000000\n 1 2000-01-01 00:00:00.000001\n 2 2000-01-01 00:00:00.000002\n dtype: datetime64[ns]\n >>> datetime_series.dt.microsecond\n 0 0\n 1 1\n 2 2\n dtype: int32\n """,\n )\n nanosecond = _field_accessor(\n "nanosecond",\n "ns",\n """\n The nanoseconds of the datetime.\n\n Examples\n --------\n >>> datetime_series = pd.Series(\n ... pd.date_range("2000-01-01", periods=3, freq="ns")\n ... )\n >>> datetime_series\n 0 2000-01-01 00:00:00.000000000\n 1 2000-01-01 00:00:00.000000001\n 2 2000-01-01 00:00:00.000000002\n dtype: datetime64[ns]\n >>> datetime_series.dt.nanosecond\n 0 0\n 1 1\n 2 2\n dtype: int32\n """,\n )\n _dayofweek_doc = """\n The day of the week with Monday=0, Sunday=6.\n\n Return the day of the week. It is assumed the week starts on\n Monday, which is denoted by 0 and ends on Sunday which is denoted\n by 6. 
This method is available on both Series with datetime\n values (using the `dt` accessor) or DatetimeIndex.\n\n Returns\n -------\n Series or Index\n Containing integers indicating the day number.\n\n See Also\n --------\n Series.dt.dayofweek : Alias.\n Series.dt.weekday : Alias.\n Series.dt.day_name : Returns the name of the day of the week.\n\n Examples\n --------\n >>> s = pd.date_range('2016-12-31', '2017-01-08', freq='D').to_series()\n >>> s.dt.dayofweek\n 2016-12-31 5\n 2017-01-01 6\n 2017-01-02 0\n 2017-01-03 1\n 2017-01-04 2\n 2017-01-05 3\n 2017-01-06 4\n 2017-01-07 5\n 2017-01-08 6\n Freq: D, dtype: int32\n """\n day_of_week = _field_accessor("day_of_week", "dow", _dayofweek_doc)\n dayofweek = day_of_week\n weekday = day_of_week\n\n day_of_year = _field_accessor(\n "dayofyear",\n "doy",\n """\n The ordinal day of the year.\n\n Examples\n --------\n For Series:\n\n >>> s = pd.Series(["1/1/2020 10:00:00+00:00", "2/1/2020 11:00:00+00:00"])\n >>> s = pd.to_datetime(s)\n >>> s\n 0 2020-01-01 10:00:00+00:00\n 1 2020-02-01 11:00:00+00:00\n dtype: datetime64[ns, UTC]\n >>> s.dt.dayofyear\n 0 1\n 1 32\n dtype: int32\n\n For DatetimeIndex:\n\n >>> idx = pd.DatetimeIndex(["1/1/2020 10:00:00+00:00",\n ... "2/1/2020 11:00:00+00:00"])\n >>> idx.dayofyear\n Index([1, 32], dtype='int32')\n """,\n )\n dayofyear = day_of_year\n quarter = _field_accessor(\n "quarter",\n "q",\n """\n The quarter of the date.\n\n Examples\n --------\n For Series:\n\n >>> s = pd.Series(["1/1/2020 10:00:00+00:00", "4/1/2020 11:00:00+00:00"])\n >>> s = pd.to_datetime(s)\n >>> s\n 0 2020-01-01 10:00:00+00:00\n 1 2020-04-01 11:00:00+00:00\n dtype: datetime64[ns, UTC]\n >>> s.dt.quarter\n 0 1\n 1 2\n dtype: int32\n\n For DatetimeIndex:\n\n >>> idx = pd.DatetimeIndex(["1/1/2020 10:00:00+00:00",\n ... 
"2/1/2020 11:00:00+00:00"])\n >>> idx.quarter\n Index([1, 1], dtype='int32')\n """,\n )\n days_in_month = _field_accessor(\n "days_in_month",\n "dim",\n """\n The number of days in the month.\n\n Examples\n --------\n >>> s = pd.Series(["1/1/2020 10:00:00+00:00", "2/1/2020 11:00:00+00:00"])\n >>> s = pd.to_datetime(s)\n >>> s\n 0 2020-01-01 10:00:00+00:00\n 1 2020-02-01 11:00:00+00:00\n dtype: datetime64[ns, UTC]\n >>> s.dt.daysinmonth\n 0 31\n 1 29\n dtype: int32\n """,\n )\n daysinmonth = days_in_month\n _is_month_doc = """\n Indicates whether the date is the {first_or_last} day of the month.\n\n Returns\n -------\n Series or array\n For Series, returns a Series with boolean values.\n For DatetimeIndex, returns a boolean array.\n\n See Also\n --------\n is_month_start : Return a boolean indicating whether the date\n is the first day of the month.\n is_month_end : Return a boolean indicating whether the date\n is the last day of the month.\n\n Examples\n --------\n This method is available on Series with datetime values under\n the ``.dt`` accessor, and directly on DatetimeIndex.\n\n >>> s = pd.Series(pd.date_range("2018-02-27", periods=3))\n >>> s\n 0 2018-02-27\n 1 2018-02-28\n 2 2018-03-01\n dtype: datetime64[ns]\n >>> s.dt.is_month_start\n 0 False\n 1 False\n 2 True\n dtype: bool\n >>> s.dt.is_month_end\n 0 False\n 1 True\n 2 False\n dtype: bool\n\n >>> idx = pd.date_range("2018-02-27", periods=3)\n >>> idx.is_month_start\n array([False, False, True])\n >>> idx.is_month_end\n array([False, True, False])\n """\n is_month_start = _field_accessor(\n "is_month_start", "is_month_start", _is_month_doc.format(first_or_last="first")\n )\n\n is_month_end = _field_accessor(\n "is_month_end", "is_month_end", _is_month_doc.format(first_or_last="last")\n )\n\n is_quarter_start = _field_accessor(\n "is_quarter_start",\n "is_quarter_start",\n """\n Indicator for whether the date is the first day of a quarter.\n\n Returns\n -------\n is_quarter_start : Series or DatetimeIndex\n The same type as the original data with boolean values. Series will\n have the same name and index. DatetimeIndex will have the same\n name.\n\n See Also\n --------\n quarter : Return the quarter of the date.\n is_quarter_end : Similar property for indicating the quarter end.\n\n Examples\n --------\n This method is available on Series with datetime values under\n the ``.dt`` accessor, and directly on DatetimeIndex.\n\n >>> df = pd.DataFrame({'dates': pd.date_range("2017-03-30",\n ... periods=4)})\n >>> df.assign(quarter=df.dates.dt.quarter,\n ... is_quarter_start=df.dates.dt.is_quarter_start)\n dates quarter is_quarter_start\n 0 2017-03-30 1 False\n 1 2017-03-31 1 False\n 2 2017-04-01 2 True\n 3 2017-04-02 2 False\n\n >>> idx = pd.date_range('2017-03-30', periods=4)\n >>> idx\n DatetimeIndex(['2017-03-30', '2017-03-31', '2017-04-01', '2017-04-02'],\n dtype='datetime64[ns]', freq='D')\n\n >>> idx.is_quarter_start\n array([False, False, True, False])\n """,\n )\n is_quarter_end = _field_accessor(\n "is_quarter_end",\n "is_quarter_end",\n """\n Indicator for whether the date is the last day of a quarter.\n\n Returns\n -------\n is_quarter_end : Series or DatetimeIndex\n The same type as the original data with boolean values. Series will\n have the same name and index. 
DatetimeIndex will have the same\n name.\n\n See Also\n --------\n quarter : Return the quarter of the date.\n is_quarter_start : Similar property indicating the quarter start.\n\n Examples\n --------\n This method is available on Series with datetime values under\n the ``.dt`` accessor, and directly on DatetimeIndex.\n\n >>> df = pd.DataFrame({'dates': pd.date_range("2017-03-30",\n ... periods=4)})\n >>> df.assign(quarter=df.dates.dt.quarter,\n ... is_quarter_end=df.dates.dt.is_quarter_end)\n dates quarter is_quarter_end\n 0 2017-03-30 1 False\n 1 2017-03-31 1 True\n 2 2017-04-01 2 False\n 3 2017-04-02 2 False\n\n >>> idx = pd.date_range('2017-03-30', periods=4)\n >>> idx\n DatetimeIndex(['2017-03-30', '2017-03-31', '2017-04-01', '2017-04-02'],\n dtype='datetime64[ns]', freq='D')\n\n >>> idx.is_quarter_end\n array([False, True, False, False])\n """,\n )\n is_year_start = _field_accessor(\n "is_year_start",\n "is_year_start",\n """\n Indicate whether the date is the first day of a year.\n\n Returns\n -------\n Series or DatetimeIndex\n The same type as the original data with boolean values. Series will\n have the same name and index. DatetimeIndex will have the same\n name.\n\n See Also\n --------\n is_year_end : Similar property indicating the last day of the year.\n\n Examples\n --------\n This method is available on Series with datetime values under\n the ``.dt`` accessor, and directly on DatetimeIndex.\n\n >>> dates = pd.Series(pd.date_range("2017-12-30", periods=3))\n >>> dates\n 0 2017-12-30\n 1 2017-12-31\n 2 2018-01-01\n dtype: datetime64[ns]\n\n >>> dates.dt.is_year_start\n 0 False\n 1 False\n 2 True\n dtype: bool\n\n >>> idx = pd.date_range("2017-12-30", periods=3)\n >>> idx\n DatetimeIndex(['2017-12-30', '2017-12-31', '2018-01-01'],\n dtype='datetime64[ns]', freq='D')\n\n >>> idx.is_year_start\n array([False, False, True])\n """,\n )\n is_year_end = _field_accessor(\n "is_year_end",\n "is_year_end",\n """\n Indicate whether the date is the last day of the year.\n\n Returns\n -------\n Series or DatetimeIndex\n The same type as the original data with boolean values. Series will\n have the same name and index. 
DatetimeIndex will have the same\n name.\n\n See Also\n --------\n is_year_start : Similar property indicating the start of the year.\n\n Examples\n --------\n This method is available on Series with datetime values under\n the ``.dt`` accessor, and directly on DatetimeIndex.\n\n >>> dates = pd.Series(pd.date_range("2017-12-30", periods=3))\n >>> dates\n 0 2017-12-30\n 1 2017-12-31\n 2 2018-01-01\n dtype: datetime64[ns]\n\n >>> dates.dt.is_year_end\n 0 False\n 1 True\n 2 False\n dtype: bool\n\n >>> idx = pd.date_range("2017-12-30", periods=3)\n >>> idx\n DatetimeIndex(['2017-12-30', '2017-12-31', '2018-01-01'],\n dtype='datetime64[ns]', freq='D')\n\n >>> idx.is_year_end\n array([False, True, False])\n """,\n )\n is_leap_year = _field_accessor(\n "is_leap_year",\n "is_leap_year",\n """\n Boolean indicator if the date belongs to a leap year.\n\n A leap year is a year, which has 366 days (instead of 365) including\n 29th of February as an intercalary day.\n Leap years are years which are multiples of four with the exception\n of years divisible by 100 but not by 400.\n\n Returns\n -------\n Series or ndarray\n Booleans indicating if dates belong to a leap year.\n\n Examples\n --------\n This method is available on Series with datetime values under\n the ``.dt`` accessor, and directly on DatetimeIndex.\n\n >>> idx = pd.date_range("2012-01-01", "2015-01-01", freq="YE")\n >>> idx\n DatetimeIndex(['2012-12-31', '2013-12-31', '2014-12-31'],\n dtype='datetime64[ns]', freq='YE-DEC')\n >>> idx.is_leap_year\n array([ True, False, False])\n\n >>> dates_series = pd.Series(idx)\n >>> dates_series\n 0 2012-12-31\n 1 2013-12-31\n 2 2014-12-31\n dtype: datetime64[ns]\n >>> dates_series.dt.is_leap_year\n 0 True\n 1 False\n 2 False\n dtype: bool\n """,\n )\n\n def to_julian_date(self) -> npt.NDArray[np.float64]:\n """\n Convert Datetime Array to float64 ndarray of Julian Dates.\n 0 Julian date is noon January 1, 4713 BC.\n https://en.wikipedia.org/wiki/Julian_day\n """\n\n # http://mysite.verizon.net/aesir_research/date/jdalg2.htm\n year = np.asarray(self.year)\n month = np.asarray(self.month)\n day = np.asarray(self.day)\n testarr = month < 3\n year[testarr] -= 1\n month[testarr] += 12\n return (\n day\n + np.fix((153 * month - 457) / 5)\n + 365 * year\n + np.floor(year / 4)\n - np.floor(year / 100)\n + np.floor(year / 400)\n + 1_721_118.5\n + (\n self.hour\n + self.minute / 60\n + self.second / 3600\n + self.microsecond / 3600 / 10**6\n + self.nanosecond / 3600 / 10**9\n )\n / 24\n )\n\n # -----------------------------------------------------------------\n # Reductions\n\n def std(\n self,\n axis=None,\n dtype=None,\n out=None,\n ddof: int = 1,\n keepdims: bool = False,\n skipna: bool = True,\n ):\n """\n Return sample standard deviation over requested axis.\n\n Normalized by `N-1` by default. This can be changed using ``ddof``.\n\n Parameters\n ----------\n axis : int, optional\n Axis for the function to be applied on. For :class:`pandas.Series`\n this parameter is unused and defaults to ``None``.\n ddof : int, default 1\n Degrees of Freedom. The divisor used in calculations is `N - ddof`,\n where `N` represents the number of elements.\n skipna : bool, default True\n Exclude NA/null values. 
If an entire row/column is ``NA``, the result\n will be ``NA``.\n\n Returns\n -------\n Timedelta\n\n See Also\n --------\n numpy.ndarray.std : Returns the standard deviation of the array elements\n along given axis.\n Series.std : Return sample standard deviation over requested axis.\n\n Examples\n --------\n For :class:`pandas.DatetimeIndex`:\n\n >>> idx = pd.date_range('2001-01-01 00:00', periods=3)\n >>> idx\n DatetimeIndex(['2001-01-01', '2001-01-02', '2001-01-03'],\n dtype='datetime64[ns]', freq='D')\n >>> idx.std()\n Timedelta('1 days 00:00:00')\n """\n # Because std is translation-invariant, we can get self.std\n # by calculating (self - Timestamp(0)).std, and we can do it\n # without creating a copy by using a view on self._ndarray\n from pandas.core.arrays import TimedeltaArray\n\n # Find the td64 dtype with the same resolution as our dt64 dtype\n dtype_str = self._ndarray.dtype.name.replace("datetime64", "timedelta64")\n dtype = np.dtype(dtype_str)\n\n tda = TimedeltaArray._simple_new(self._ndarray.view(dtype), dtype=dtype)\n\n return tda.std(axis=axis, out=out, ddof=ddof, keepdims=keepdims, skipna=skipna)\n\n\n# -------------------------------------------------------------------\n# Constructor Helpers\n\n\ndef _sequence_to_dt64(\n data: ArrayLike,\n *,\n copy: bool = False,\n tz: tzinfo | None = None,\n dayfirst: bool = False,\n yearfirst: bool = False,\n ambiguous: TimeAmbiguous = "raise",\n out_unit: str | None = None,\n):\n """\n Parameters\n ----------\n data : np.ndarray or ExtensionArray\n dtl.ensure_arraylike_for_datetimelike has already been called.\n copy : bool, default False\n tz : tzinfo or None, default None\n dayfirst : bool, default False\n yearfirst : bool, default False\n ambiguous : str, bool, or arraylike, default 'raise'\n See pandas._libs.tslibs.tzconversion.tz_localize_to_utc.\n out_unit : str or None, default None\n Desired output resolution.\n\n Returns\n -------\n result : numpy.ndarray\n The sequence converted to a numpy array with dtype ``datetime64[unit]``.\n Where `unit` is "ns" unless specified otherwise by `out_unit`.\n tz : tzinfo or None\n Either the user-provided tzinfo or one inferred from the data.\n\n Raises\n ------\n TypeError : PeriodDType data is passed\n """\n\n # By this point we are assured to have either a numpy array or Index\n data, copy = maybe_convert_dtype(data, copy, tz=tz)\n data_dtype = getattr(data, "dtype", None)\n\n if out_unit is None:\n out_unit = "ns"\n out_dtype = np.dtype(f"M8[{out_unit}]")\n\n if data_dtype == object or is_string_dtype(data_dtype):\n # TODO: We do not have tests specific to string-dtypes,\n # also complex or categorical or other extension\n data = cast(np.ndarray, data)\n copy = False\n if lib.infer_dtype(data, skipna=False) == "integer":\n # Much more performant than going through array_to_datetime\n data = data.astype(np.int64)\n elif tz is not None and ambiguous == "raise":\n obj_data = np.asarray(data, dtype=object)\n result = tslib.array_to_datetime_with_tz(\n obj_data,\n tz=tz,\n dayfirst=dayfirst,\n yearfirst=yearfirst,\n creso=abbrev_to_npy_unit(out_unit),\n )\n return result, tz\n else:\n converted, inferred_tz = objects_to_datetime64(\n data,\n dayfirst=dayfirst,\n yearfirst=yearfirst,\n allow_object=False,\n out_unit=out_unit or "ns",\n )\n copy = False\n if tz and inferred_tz:\n # two timezones: convert to intended from base UTC repr\n # GH#42505 by convention, these are _already_ UTC\n result = converted\n\n elif inferred_tz:\n tz = inferred_tz\n result = converted\n\n else:\n result, _ 
= _construct_from_dt64_naive(\n converted, tz=tz, copy=copy, ambiguous=ambiguous\n )\n return result, tz\n\n data_dtype = data.dtype\n\n # `data` may have originally been a Categorical[datetime64[ns, tz]],\n # so we need to handle these types.\n if isinstance(data_dtype, DatetimeTZDtype):\n # DatetimeArray -> ndarray\n data = cast(DatetimeArray, data)\n tz = _maybe_infer_tz(tz, data.tz)\n result = data._ndarray\n\n elif lib.is_np_dtype(data_dtype, "M"):\n # tz-naive DatetimeArray or ndarray[datetime64]\n if isinstance(data, DatetimeArray):\n data = data._ndarray\n\n data = cast(np.ndarray, data)\n result, copy = _construct_from_dt64_naive(\n data, tz=tz, copy=copy, ambiguous=ambiguous\n )\n\n else:\n # must be integer dtype otherwise\n # assume this data are epoch timestamps\n if data.dtype != INT64_DTYPE:\n data = data.astype(np.int64, copy=False)\n copy = False\n data = cast(np.ndarray, data)\n result = data.view(out_dtype)\n\n if copy:\n result = result.copy()\n\n assert isinstance(result, np.ndarray), type(result)\n assert result.dtype.kind == "M"\n assert result.dtype != "M8"\n assert is_supported_dtype(result.dtype)\n return result, tz\n\n\ndef _construct_from_dt64_naive(\n data: np.ndarray, *, tz: tzinfo | None, copy: bool, ambiguous: TimeAmbiguous\n) -> tuple[np.ndarray, bool]:\n """\n Convert datetime64 data to a supported dtype, localizing if necessary.\n """\n # Caller is responsible for ensuring\n # lib.is_np_dtype(data.dtype)\n\n new_dtype = data.dtype\n if not is_supported_dtype(new_dtype):\n # Cast to the nearest supported unit, generally "s"\n new_dtype = get_supported_dtype(new_dtype)\n data = astype_overflowsafe(data, dtype=new_dtype, copy=False)\n copy = False\n\n if data.dtype.byteorder == ">":\n # TODO: better way to handle this? non-copying alternative?\n # without this, test_constructor_datetime64_bigendian fails\n data = data.astype(data.dtype.newbyteorder("<"))\n new_dtype = data.dtype\n copy = False\n\n if tz is not None:\n # Convert tz-naive to UTC\n # TODO: if tz is UTC, are there situations where we *don't* want a\n # copy? 
tz_localize_to_utc always makes one.\n shape = data.shape\n if data.ndim > 1:\n data = data.ravel()\n\n data_unit = get_unit_from_dtype(new_dtype)\n data = tzconversion.tz_localize_to_utc(\n data.view("i8"), tz, ambiguous=ambiguous, creso=data_unit\n )\n data = data.view(new_dtype)\n data = data.reshape(shape)\n\n assert data.dtype == new_dtype, data.dtype\n result = data\n\n return result, copy\n\n\ndef objects_to_datetime64(\n data: np.ndarray,\n dayfirst,\n yearfirst,\n utc: bool = False,\n errors: DateTimeErrorChoices = "raise",\n allow_object: bool = False,\n out_unit: str = "ns",\n):\n """\n Convert data to array of timestamps.\n\n Parameters\n ----------\n data : np.ndarray[object]\n dayfirst : bool\n yearfirst : bool\n utc : bool, default False\n Whether to convert/localize timestamps to UTC.\n errors : {'raise', 'ignore', 'coerce'}\n allow_object : bool\n Whether to return an object-dtype ndarray instead of raising if the\n data contains more than one timezone.\n out_unit : str, default "ns"\n\n Returns\n -------\n result : ndarray\n np.datetime64[out_unit] if returned values represent wall times or UTC\n timestamps.\n object if mixed timezones\n inferred_tz : tzinfo or None\n If not None, then the datetime64 values in `result` denote UTC timestamps.\n\n Raises\n ------\n ValueError : if data cannot be converted to datetimes\n TypeError : When a type cannot be converted to datetime\n """\n assert errors in ["raise", "ignore", "coerce"]\n\n # if str-dtype, convert\n data = np.asarray(data, dtype=np.object_)\n\n result, tz_parsed = tslib.array_to_datetime(\n data,\n errors=errors,\n utc=utc,\n dayfirst=dayfirst,\n yearfirst=yearfirst,\n creso=abbrev_to_npy_unit(out_unit),\n )\n\n if tz_parsed is not None:\n # We can take a shortcut since the datetime64 numpy array\n # is in UTC\n return result, tz_parsed\n elif result.dtype.kind == "M":\n return result, tz_parsed\n elif result.dtype == object:\n # GH#23675 when called via `pd.to_datetime`, returning an object-dtype\n # array is allowed. When called via `pd.DatetimeIndex`, we can\n # only accept datetime64 dtype, so raise TypeError if object-dtype\n # is returned, as that indicates the values can be recognized as\n # datetimes but they have conflicting timezones/awareness\n if allow_object:\n return result, tz_parsed\n raise TypeError("DatetimeIndex has mixed timezones")\n else: # pragma: no cover\n # GH#23675 this TypeError should never be hit, whereas the TypeError\n # in the object-dtype branch above is reachable.\n raise TypeError(result)\n\n\ndef maybe_convert_dtype(data, copy: bool, tz: tzinfo | None = None):\n """\n Convert data based on dtype conventions, issuing\n errors where appropriate.\n\n Parameters\n ----------\n data : np.ndarray or pd.Index\n copy : bool\n tz : tzinfo or None, default None\n\n Returns\n -------\n data : np.ndarray or pd.Index\n copy : bool\n\n Raises\n ------\n TypeError : PeriodDType data is passed\n """\n if not hasattr(data, "dtype"):\n # e.g. 
collections.deque\n return data, copy\n\n if is_float_dtype(data.dtype):\n # pre-2.0 we treated these as wall-times, inconsistent with ints\n # GH#23675, GH#45573 deprecated to treat symmetrically with integer dtypes.\n # Note: data.astype(np.int64) fails ARM tests, see\n # https://github.com/pandas-dev/pandas/issues/49468.\n data = data.astype(DT64NS_DTYPE).view("i8")\n copy = False\n\n elif lib.is_np_dtype(data.dtype, "m") or is_bool_dtype(data.dtype):\n # GH#29794 enforcing deprecation introduced in GH#23539\n raise TypeError(f"dtype {data.dtype} cannot be converted to datetime64[ns]")\n elif isinstance(data.dtype, PeriodDtype):\n # Note: without explicitly raising here, PeriodIndex\n # test_setops.test_join_does_not_recur fails\n raise TypeError(\n "Passing PeriodDtype data is invalid. Use `data.to_timestamp()` instead"\n )\n\n elif isinstance(data.dtype, ExtensionDtype) and not isinstance(\n data.dtype, DatetimeTZDtype\n ):\n # TODO: We have no tests for these\n data = np.array(data, dtype=np.object_)\n copy = False\n\n return data, copy\n\n\n# -------------------------------------------------------------------\n# Validation and Inference\n\n\ndef _maybe_infer_tz(tz: tzinfo | None, inferred_tz: tzinfo | None) -> tzinfo | None:\n """\n If a timezone is inferred from data, check that it is compatible with\n the user-provided timezone, if any.\n\n Parameters\n ----------\n tz : tzinfo or None\n inferred_tz : tzinfo or None\n\n Returns\n -------\n tz : tzinfo or None\n\n Raises\n ------\n TypeError : if both timezones are present but do not match\n """\n if tz is None:\n tz = inferred_tz\n elif inferred_tz is None:\n pass\n elif not timezones.tz_compare(tz, inferred_tz):\n raise TypeError(\n f"data is already tz-aware {inferred_tz}, unable to "\n f"set specified tz: {tz}"\n )\n return tz\n\n\ndef _validate_dt64_dtype(dtype):\n """\n Check that a dtype, if passed, represents either a numpy datetime64[ns]\n dtype or a pandas DatetimeTZDtype.\n\n Parameters\n ----------\n dtype : object\n\n Returns\n -------\n dtype : None, numpy.dtype, or DatetimeTZDtype\n\n Raises\n ------\n ValueError : invalid dtype\n\n Notes\n -----\n Unlike _validate_tz_from_dtype, this does _not_ allow non-existent\n tz errors to go through\n """\n if dtype is not None:\n dtype = pandas_dtype(dtype)\n if dtype == np.dtype("M8"):\n # no precision, disallowed GH#24806\n msg = (\n "Passing in 'datetime64' dtype with no precision is not allowed. "\n "Please pass in 'datetime64[ns]' instead."\n )\n raise ValueError(msg)\n\n if (\n isinstance(dtype, np.dtype)\n and (dtype.kind != "M" or not is_supported_dtype(dtype))\n ) or not isinstance(dtype, (np.dtype, DatetimeTZDtype)):\n raise ValueError(\n f"Unexpected value for 'dtype': '{dtype}'. "\n "Must be 'datetime64[s]', 'datetime64[ms]', 'datetime64[us]', "\n "'datetime64[ns]' or DatetimeTZDtype'."\n )\n\n if getattr(dtype, "tz", None):\n # https://github.com/pandas-dev/pandas/issues/18595\n # Ensure that we have a standard timezone for pytz objects.\n # Without this, things like adding an array of timedeltas and\n # a tz-aware Timestamp (with a tz specific to its datetime) will\n # be incorrect(ish?) 
for the array as a whole\n dtype = cast(DatetimeTZDtype, dtype)\n dtype = DatetimeTZDtype(\n unit=dtype.unit, tz=timezones.tz_standardize(dtype.tz)\n )\n\n return dtype\n\n\ndef _validate_tz_from_dtype(\n dtype, tz: tzinfo | None, explicit_tz_none: bool = False\n) -> tzinfo | None:\n """\n If the given dtype is a DatetimeTZDtype, extract the implied\n tzinfo object from it and check that it does not conflict with the given\n tz.\n\n Parameters\n ----------\n dtype : dtype, str\n tz : None, tzinfo\n explicit_tz_none : bool, default False\n Whether tz=None was passed explicitly, as opposed to lib.no_default.\n\n Returns\n -------\n tz : consensus tzinfo\n\n Raises\n ------\n ValueError : on tzinfo mismatch\n """\n if dtype is not None:\n if isinstance(dtype, str):\n try:\n dtype = DatetimeTZDtype.construct_from_string(dtype)\n except TypeError:\n # Things like `datetime64[ns]`, which is OK for the\n # constructors, but also nonsense, which should be validated\n # but not by us. We *do* allow non-existent tz errors to\n # go through\n pass\n dtz = getattr(dtype, "tz", None)\n if dtz is not None:\n if tz is not None and not timezones.tz_compare(tz, dtz):\n raise ValueError("cannot supply both a tz and a dtype with a tz")\n if explicit_tz_none:\n raise ValueError("Cannot pass both a timezone-aware dtype and tz=None")\n tz = dtz\n\n if tz is not None and lib.is_np_dtype(dtype, "M"):\n # We also need to check for the case where the user passed a\n # tz-naive dtype (i.e. datetime64[ns])\n if tz is not None and not timezones.tz_compare(tz, dtz):\n raise ValueError(\n "cannot supply both a tz and a "\n "timezone-naive dtype (i.e. datetime64[ns])"\n )\n\n return tz\n\n\ndef _infer_tz_from_endpoints(\n start: Timestamp, end: Timestamp, tz: tzinfo | None\n) -> tzinfo | None:\n """\n If a timezone is not explicitly given via `tz`, see if one can\n be inferred from the `start` and `end` endpoints. 
If more than one\n of these inputs provides a timezone, require that they all agree.\n\n Parameters\n ----------\n start : Timestamp\n end : Timestamp\n tz : tzinfo or None\n\n Returns\n -------\n tz : tzinfo or None\n\n Raises\n ------\n TypeError : if start and end timezones do not agree\n """\n try:\n inferred_tz = timezones.infer_tzinfo(start, end)\n except AssertionError as err:\n # infer_tzinfo raises AssertionError if passed mismatched timezones\n raise TypeError(\n "Start and end cannot both be tz-aware with different timezones"\n ) from err\n\n inferred_tz = timezones.maybe_get_tz(inferred_tz)\n tz = timezones.maybe_get_tz(tz)\n\n if tz is not None and inferred_tz is not None:\n if not timezones.tz_compare(inferred_tz, tz):\n raise AssertionError("Inferred time zone not equal to passed time zone")\n\n elif inferred_tz is not None:\n tz = inferred_tz\n\n return tz\n\n\ndef _maybe_normalize_endpoints(\n start: Timestamp | None, end: Timestamp | None, normalize: bool\n):\n if normalize:\n if start is not None:\n start = start.normalize()\n\n if end is not None:\n end = end.normalize()\n\n return start, end\n\n\ndef _maybe_localize_point(\n ts: Timestamp | None, freq, tz, ambiguous, nonexistent\n) -> Timestamp | None:\n """\n Localize a start or end Timestamp to the timezone of the corresponding\n start or end Timestamp\n\n Parameters\n ----------\n ts : start or end Timestamp to potentially localize\n freq : Tick, DateOffset, or None\n tz : str, timezone object or None\n ambiguous: str, localization behavior for ambiguous times\n nonexistent: str, localization behavior for nonexistent times\n\n Returns\n -------\n ts : Timestamp\n """\n # Make sure start and end are timezone localized if:\n # 1) freq = a Timedelta-like frequency (Tick)\n # 2) freq = None i.e. generating a linspaced range\n if ts is not None and ts.tzinfo is None:\n # Note: We can't ambiguous='infer' a singular ambiguous time; however,\n # we have historically defaulted ambiguous=False\n ambiguous = ambiguous if ambiguous != "infer" else False\n localize_args = {"ambiguous": ambiguous, "nonexistent": nonexistent, "tz": None}\n if isinstance(freq, Tick) or freq is None:\n localize_args["tz"] = tz\n ts = ts.tz_localize(**localize_args)\n return ts\n\n\ndef _generate_range(\n start: Timestamp | None,\n end: Timestamp | None,\n periods: int | None,\n offset: BaseOffset,\n *,\n unit: str,\n):\n """\n Generates a sequence of dates corresponding to the specified time\n offset. 
Similar to dateutil.rrule except uses pandas DateOffset\n objects to represent time increments.\n\n Parameters\n ----------\n start : Timestamp or None\n end : Timestamp or None\n periods : int or None\n offset : DateOffset\n unit : str\n\n Notes\n -----\n * This method is faster for generating weekdays than dateutil.rrule\n * At least two of (start, end, periods) must be specified.\n * If both start and end are specified, the returned dates will\n satisfy start <= date <= end.\n\n Returns\n -------\n dates : generator object\n """\n offset = to_offset(offset)\n\n # Argument 1 to "Timestamp" has incompatible type "Optional[Timestamp]";\n # expected "Union[integer[Any], float, str, date, datetime64]"\n start = Timestamp(start) # type: ignore[arg-type]\n if start is not NaT:\n start = start.as_unit(unit)\n else:\n start = None\n\n # Argument 1 to "Timestamp" has incompatible type "Optional[Timestamp]";\n # expected "Union[integer[Any], float, str, date, datetime64]"\n end = Timestamp(end) # type: ignore[arg-type]\n if end is not NaT:\n end = end.as_unit(unit)\n else:\n end = None\n\n if start and not offset.is_on_offset(start):\n # Incompatible types in assignment (expression has type "datetime",\n # variable has type "Optional[Timestamp]")\n start = offset.rollforward(start) # type: ignore[assignment]\n\n elif end and not offset.is_on_offset(end):\n # Incompatible types in assignment (expression has type "datetime",\n # variable has type "Optional[Timestamp]")\n end = offset.rollback(end) # type: ignore[assignment]\n\n # Unsupported operand types for < ("Timestamp" and "None")\n if periods is None and end < start and offset.n >= 0: # type: ignore[operator]\n end = None\n periods = 0\n\n if end is None:\n # error: No overload variant of "__radd__" of "BaseOffset" matches\n # argument type "None"\n end = start + (periods - 1) * offset # type: ignore[operator]\n\n if start is None:\n # error: No overload variant of "__radd__" of "BaseOffset" matches\n # argument type "None"\n start = end - (periods - 1) * offset # type: ignore[operator]\n\n start = cast(Timestamp, start)\n end = cast(Timestamp, end)\n\n cur = start\n if offset.n >= 0:\n while cur <= end:\n yield cur\n\n if cur == end:\n # GH#24252 avoid overflows by not performing the addition\n # in offset.apply unless we have to\n break\n\n # faster than cur + offset\n next_date = offset._apply(cur)\n next_date = next_date.as_unit(unit)\n if next_date <= cur:\n raise ValueError(f"Offset {offset} did not increment date")\n cur = next_date\n else:\n while cur >= end:\n yield cur\n\n if cur == end:\n # GH#24252 avoid overflows by not performing the addition\n # in offset.apply unless we have to\n break\n\n # faster than cur + offset\n next_date = offset._apply(cur)\n next_date = next_date.as_unit(unit)\n if next_date >= cur:\n raise ValueError(f"Offset {offset} did not decrement date")\n cur = next_date\n
.venv\Lib\site-packages\pandas\core\arrays\datetimes.py
datetimes.py
Python
92,963
0.75
0.092351
0.080033
react-lib
40
2024-06-28T20:34:52.839040
MIT
false
24929cc941cf8f4c6986a85e92a3c537
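The datetimes.py helpers above (objects_to_datetime64, _maybe_infer_tz, _generate_range) are internal; the minimal sketch below exercises the same behavior through the public pandas API, assuming a pandas 2.x environment. The sample timestamps and the "W-MON" frequency are arbitrary illustration values.

import pandas as pd

# Parsing object input with utc=True converts everything to UTC, which is the
# tz_parsed shortcut path taken by objects_to_datetime64.
idx = pd.to_datetime(["2024-01-01 00:00-05:00", "2024-01-01 00:00+01:00"], utc=True)
print(idx.dtype)  # datetime64[ns, UTC]

# date_range drives the offset-based generator (_generate_range): a start that is
# not on the offset is rolled forward before iteration begins.
print(pd.date_range("2024-01-05", periods=3, freq="W-MON"))
# DatetimeIndex(['2024-01-08', '2024-01-15', '2024-01-22'], freq='W-MON')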
from __future__ import annotations\n\nfrom typing import ClassVar\n\nimport numpy as np\n\nfrom pandas.core.dtypes.base import register_extension_dtype\nfrom pandas.core.dtypes.common import is_float_dtype\n\nfrom pandas.core.arrays.numeric import (\n NumericArray,\n NumericDtype,\n)\n\n\nclass FloatingDtype(NumericDtype):\n """\n An ExtensionDtype to hold a single size of floating dtype.\n\n These specific implementations are subclasses of the non-public\n FloatingDtype. For example we have Float32Dtype to represent float32.\n\n The attributes name & type are set when these subclasses are created.\n """\n\n _default_np_dtype = np.dtype(np.float64)\n _checker = is_float_dtype\n\n @classmethod\n def construct_array_type(cls) -> type[FloatingArray]:\n """\n Return the array type associated with this dtype.\n\n Returns\n -------\n type\n """\n return FloatingArray\n\n @classmethod\n def _get_dtype_mapping(cls) -> dict[np.dtype, FloatingDtype]:\n return NUMPY_FLOAT_TO_DTYPE\n\n @classmethod\n def _safe_cast(cls, values: np.ndarray, dtype: np.dtype, copy: bool) -> np.ndarray:\n """\n Safely cast the values to the given dtype.\n\n "safe" in this context means the casting is lossless.\n """\n # This is really only here for compatibility with IntegerDtype\n # Here for compat with IntegerDtype\n return values.astype(dtype, copy=copy)\n\n\nclass FloatingArray(NumericArray):\n """\n Array of floating (optional missing) values.\n\n .. warning::\n\n FloatingArray is currently experimental, and its API or internal\n implementation may change without warning. Especially the behaviour\n regarding NaN (distinct from NA missing values) is subject to change.\n\n We represent a FloatingArray with 2 numpy arrays:\n\n - data: contains a numpy float array of the appropriate dtype\n - mask: a boolean array holding a mask on the data, True is missing\n\n To construct an FloatingArray from generic array-like input, use\n :func:`pandas.array` with one of the float dtypes (see examples).\n\n See :ref:`integer_na` for more.\n\n Parameters\n ----------\n values : numpy.ndarray\n A 1-d float-dtype array.\n mask : numpy.ndarray\n A 1-d boolean-dtype array indicating missing values.\n copy : bool, default False\n Whether to copy the `values` and `mask`.\n\n Attributes\n ----------\n None\n\n Methods\n -------\n None\n\n Returns\n -------\n FloatingArray\n\n Examples\n --------\n Create an FloatingArray with :func:`pandas.array`:\n\n >>> pd.array([0.1, None, 0.3], dtype=pd.Float32Dtype())\n <FloatingArray>\n [0.1, <NA>, 0.3]\n Length: 3, dtype: Float32\n\n String aliases for the dtypes are also available. 
They are capitalized.\n\n >>> pd.array([0.1, None, 0.3], dtype="Float32")\n <FloatingArray>\n [0.1, <NA>, 0.3]\n Length: 3, dtype: Float32\n """\n\n _dtype_cls = FloatingDtype\n\n # The value used to fill '_data' to avoid upcasting\n _internal_fill_value = np.nan\n # Fill values used for any/all\n # Incompatible types in assignment (expression has type "float", base class\n # "BaseMaskedArray" defined the type as "<typing special form>")\n _truthy_value = 1.0 # type: ignore[assignment]\n _falsey_value = 0.0 # type: ignore[assignment]\n\n\n_dtype_docstring = """\nAn ExtensionDtype for {dtype} data.\n\nThis dtype uses ``pd.NA`` as missing value indicator.\n\nAttributes\n----------\nNone\n\nMethods\n-------\nNone\n\nExamples\n--------\nFor Float32Dtype:\n\n>>> ser = pd.Series([2.25, pd.NA], dtype=pd.Float32Dtype())\n>>> ser.dtype\nFloat32Dtype()\n\nFor Float64Dtype:\n\n>>> ser = pd.Series([2.25, pd.NA], dtype=pd.Float64Dtype())\n>>> ser.dtype\nFloat64Dtype()\n"""\n\n# create the Dtype\n\n\n@register_extension_dtype\nclass Float32Dtype(FloatingDtype):\n type = np.float32\n name: ClassVar[str] = "Float32"\n __doc__ = _dtype_docstring.format(dtype="float32")\n\n\n@register_extension_dtype\nclass Float64Dtype(FloatingDtype):\n type = np.float64\n name: ClassVar[str] = "Float64"\n __doc__ = _dtype_docstring.format(dtype="float64")\n\n\nNUMPY_FLOAT_TO_DTYPE: dict[np.dtype, FloatingDtype] = {\n np.dtype(np.float32): Float32Dtype(),\n np.dtype(np.float64): Float64Dtype(),\n}\n
.venv\Lib\site-packages\pandas\core\arrays\floating.py
floating.py
Python
4,286
0.95
0.080925
0.056
awesome-app
445
2023-08-07T06:58:13.023288
Apache-2.0
false
3f145456d97be85f6c8afd778f67afa2
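A quick usage sketch for the masked float dtypes defined in floating.py (Float32Dtype / Float64Dtype), following the class docstring examples; the sample values are arbitrary.

import pandas as pd

# A FloatingArray pairs a float ndarray with a boolean NA mask; missing entries
# are pd.NA rather than np.nan.
arr = pd.array([0.1, None, 0.3], dtype="Float32")
print(arr)          # [0.1, <NA>, 0.3], dtype: Float32
print(arr.isna())   # [False  True False]

# Arithmetic propagates the mask instead of producing NaN.
print(arr + 1.0)    # [1.1, <NA>, 1.3]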
from __future__ import annotations\n\nfrom typing import ClassVar\n\nimport numpy as np\n\nfrom pandas.core.dtypes.base import register_extension_dtype\nfrom pandas.core.dtypes.common import is_integer_dtype\n\nfrom pandas.core.arrays.numeric import (\n NumericArray,\n NumericDtype,\n)\n\n\nclass IntegerDtype(NumericDtype):\n """\n An ExtensionDtype to hold a single size & kind of integer dtype.\n\n These specific implementations are subclasses of the non-public\n IntegerDtype. For example, we have Int8Dtype to represent signed int 8s.\n\n The attributes name & type are set when these subclasses are created.\n """\n\n _default_np_dtype = np.dtype(np.int64)\n _checker = is_integer_dtype\n\n @classmethod\n def construct_array_type(cls) -> type[IntegerArray]:\n """\n Return the array type associated with this dtype.\n\n Returns\n -------\n type\n """\n return IntegerArray\n\n @classmethod\n def _get_dtype_mapping(cls) -> dict[np.dtype, IntegerDtype]:\n return NUMPY_INT_TO_DTYPE\n\n @classmethod\n def _safe_cast(cls, values: np.ndarray, dtype: np.dtype, copy: bool) -> np.ndarray:\n """\n Safely cast the values to the given dtype.\n\n "safe" in this context means the casting is lossless. e.g. if 'values'\n has a floating dtype, each value must be an integer.\n """\n try:\n return values.astype(dtype, casting="safe", copy=copy)\n except TypeError as err:\n casted = values.astype(dtype, copy=copy)\n if (casted == values).all():\n return casted\n\n raise TypeError(\n f"cannot safely cast non-equivalent {values.dtype} to {np.dtype(dtype)}"\n ) from err\n\n\nclass IntegerArray(NumericArray):\n """\n Array of integer (optional missing) values.\n\n Uses :attr:`pandas.NA` as the missing value.\n\n .. warning::\n\n IntegerArray is currently experimental, and its API or internal\n implementation may change without warning.\n\n We represent an IntegerArray with 2 numpy arrays:\n\n - data: contains a numpy integer array of the appropriate dtype\n - mask: a boolean array holding a mask on the data, True is missing\n\n To construct an IntegerArray from generic array-like input, use\n :func:`pandas.array` with one of the integer dtypes (see examples).\n\n See :ref:`integer_na` for more.\n\n Parameters\n ----------\n values : numpy.ndarray\n A 1-d integer-dtype array.\n mask : numpy.ndarray\n A 1-d boolean-dtype array indicating missing values.\n copy : bool, default False\n Whether to copy the `values` and `mask`.\n\n Attributes\n ----------\n None\n\n Methods\n -------\n None\n\n Returns\n -------\n IntegerArray\n\n Examples\n --------\n Create an IntegerArray with :func:`pandas.array`.\n\n >>> int_array = pd.array([1, None, 3], dtype=pd.Int32Dtype())\n >>> int_array\n <IntegerArray>\n [1, <NA>, 3]\n Length: 3, dtype: Int32\n\n String aliases for the dtypes are also available. 
They are capitalized.\n\n >>> pd.array([1, None, 3], dtype='Int32')\n <IntegerArray>\n [1, <NA>, 3]\n Length: 3, dtype: Int32\n\n >>> pd.array([1, None, 3], dtype='UInt16')\n <IntegerArray>\n [1, <NA>, 3]\n Length: 3, dtype: UInt16\n """\n\n _dtype_cls = IntegerDtype\n\n # The value used to fill '_data' to avoid upcasting\n _internal_fill_value = 1\n # Fill values used for any/all\n # Incompatible types in assignment (expression has type "int", base class\n # "BaseMaskedArray" defined the type as "<typing special form>")\n _truthy_value = 1 # type: ignore[assignment]\n _falsey_value = 0 # type: ignore[assignment]\n\n\n_dtype_docstring = """\nAn ExtensionDtype for {dtype} integer data.\n\nUses :attr:`pandas.NA` as its missing value, rather than :attr:`numpy.nan`.\n\nAttributes\n----------\nNone\n\nMethods\n-------\nNone\n\nExamples\n--------\nFor Int8Dtype:\n\n>>> ser = pd.Series([2, pd.NA], dtype=pd.Int8Dtype())\n>>> ser.dtype\nInt8Dtype()\n\nFor Int16Dtype:\n\n>>> ser = pd.Series([2, pd.NA], dtype=pd.Int16Dtype())\n>>> ser.dtype\nInt16Dtype()\n\nFor Int32Dtype:\n\n>>> ser = pd.Series([2, pd.NA], dtype=pd.Int32Dtype())\n>>> ser.dtype\nInt32Dtype()\n\nFor Int64Dtype:\n\n>>> ser = pd.Series([2, pd.NA], dtype=pd.Int64Dtype())\n>>> ser.dtype\nInt64Dtype()\n\nFor UInt8Dtype:\n\n>>> ser = pd.Series([2, pd.NA], dtype=pd.UInt8Dtype())\n>>> ser.dtype\nUInt8Dtype()\n\nFor UInt16Dtype:\n\n>>> ser = pd.Series([2, pd.NA], dtype=pd.UInt16Dtype())\n>>> ser.dtype\nUInt16Dtype()\n\nFor UInt32Dtype:\n\n>>> ser = pd.Series([2, pd.NA], dtype=pd.UInt32Dtype())\n>>> ser.dtype\nUInt32Dtype()\n\nFor UInt64Dtype:\n\n>>> ser = pd.Series([2, pd.NA], dtype=pd.UInt64Dtype())\n>>> ser.dtype\nUInt64Dtype()\n"""\n\n# create the Dtype\n\n\n@register_extension_dtype\nclass Int8Dtype(IntegerDtype):\n type = np.int8\n name: ClassVar[str] = "Int8"\n __doc__ = _dtype_docstring.format(dtype="int8")\n\n\n@register_extension_dtype\nclass Int16Dtype(IntegerDtype):\n type = np.int16\n name: ClassVar[str] = "Int16"\n __doc__ = _dtype_docstring.format(dtype="int16")\n\n\n@register_extension_dtype\nclass Int32Dtype(IntegerDtype):\n type = np.int32\n name: ClassVar[str] = "Int32"\n __doc__ = _dtype_docstring.format(dtype="int32")\n\n\n@register_extension_dtype\nclass Int64Dtype(IntegerDtype):\n type = np.int64\n name: ClassVar[str] = "Int64"\n __doc__ = _dtype_docstring.format(dtype="int64")\n\n\n@register_extension_dtype\nclass UInt8Dtype(IntegerDtype):\n type = np.uint8\n name: ClassVar[str] = "UInt8"\n __doc__ = _dtype_docstring.format(dtype="uint8")\n\n\n@register_extension_dtype\nclass UInt16Dtype(IntegerDtype):\n type = np.uint16\n name: ClassVar[str] = "UInt16"\n __doc__ = _dtype_docstring.format(dtype="uint16")\n\n\n@register_extension_dtype\nclass UInt32Dtype(IntegerDtype):\n type = np.uint32\n name: ClassVar[str] = "UInt32"\n __doc__ = _dtype_docstring.format(dtype="uint32")\n\n\n@register_extension_dtype\nclass UInt64Dtype(IntegerDtype):\n type = np.uint64\n name: ClassVar[str] = "UInt64"\n __doc__ = _dtype_docstring.format(dtype="uint64")\n\n\nNUMPY_INT_TO_DTYPE: dict[np.dtype, IntegerDtype] = {\n np.dtype(np.int8): Int8Dtype(),\n np.dtype(np.int16): Int16Dtype(),\n np.dtype(np.int32): Int32Dtype(),\n np.dtype(np.int64): Int64Dtype(),\n np.dtype(np.uint8): UInt8Dtype(),\n np.dtype(np.uint16): UInt16Dtype(),\n np.dtype(np.uint32): UInt32Dtype(),\n np.dtype(np.uint64): UInt64Dtype(),\n}\n
.venv\Lib\site-packages\pandas\core\arrays\integer.py
integer.py
Python
6,470
0.95
0.077206
0.025381
node-utils
785
2025-02-18T00:48:31.619910
Apache-2.0
false
751345dabb94c0a86eb7b7f3e87d4f05
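A similar sketch for the nullable integer dtypes defined in integer.py, again based on the docstring examples; values are arbitrary.

import pandas as pd

# Nullable integer dtypes keep integer storage in the presence of missing values
# instead of upcasting to float64, with pd.NA as the missing-value marker.
ser = pd.Series([1, None, 3], dtype="Int32")
print(ser.dtype)   # Int32
print(ser.sum())   # 4  (NA skipped by default)

# Unsigned widths are available under the capitalized aliases registered above.
print(pd.array([1, None, 3], dtype="UInt16"))  # [1, <NA>, 3], dtype: UInt16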
from __future__ import annotations\n\nimport operator\nfrom operator import (\n le,\n lt,\n)\nimport textwrap\nfrom typing import (\n TYPE_CHECKING,\n Literal,\n Union,\n overload,\n)\nimport warnings\n\nimport numpy as np\n\nfrom pandas._libs import lib\nfrom pandas._libs.interval import (\n VALID_CLOSED,\n Interval,\n IntervalMixin,\n intervals_to_interval_bounds,\n)\nfrom pandas._libs.missing import NA\nfrom pandas._typing import (\n ArrayLike,\n AxisInt,\n Dtype,\n FillnaOptions,\n IntervalClosedType,\n NpDtype,\n PositionalIndexer,\n ScalarIndexer,\n Self,\n SequenceIndexer,\n SortKind,\n TimeArrayLike,\n npt,\n)\nfrom pandas.compat.numpy import function as nv\nfrom pandas.errors import IntCastingNaNError\nfrom pandas.util._decorators import Appender\nfrom pandas.util._exceptions import find_stack_level\n\nfrom pandas.core.dtypes.cast import (\n LossySetitemError,\n maybe_upcast_numeric_to_64bit,\n)\nfrom pandas.core.dtypes.common import (\n is_float_dtype,\n is_integer_dtype,\n is_list_like,\n is_object_dtype,\n is_scalar,\n is_string_dtype,\n needs_i8_conversion,\n pandas_dtype,\n)\nfrom pandas.core.dtypes.dtypes import (\n CategoricalDtype,\n IntervalDtype,\n)\nfrom pandas.core.dtypes.generic import (\n ABCDataFrame,\n ABCDatetimeIndex,\n ABCIntervalIndex,\n ABCPeriodIndex,\n)\nfrom pandas.core.dtypes.missing import (\n is_valid_na_for_dtype,\n isna,\n notna,\n)\n\nfrom pandas.core.algorithms import (\n isin,\n take,\n unique,\n value_counts_internal as value_counts,\n)\nfrom pandas.core.arrays import ArrowExtensionArray\nfrom pandas.core.arrays.base import (\n ExtensionArray,\n _extension_array_shared_docs,\n)\nfrom pandas.core.arrays.datetimes import DatetimeArray\nfrom pandas.core.arrays.timedeltas import TimedeltaArray\nimport pandas.core.common as com\nfrom pandas.core.construction import (\n array as pd_array,\n ensure_wrapped_if_datetimelike,\n extract_array,\n)\nfrom pandas.core.indexers import check_array_indexer\nfrom pandas.core.ops import (\n invalid_comparison,\n unpack_zerodim_and_defer,\n)\n\nif TYPE_CHECKING:\n from collections.abc import (\n Iterator,\n Sequence,\n )\n\n from pandas import (\n Index,\n Series,\n )\n\n\nIntervalSide = Union[TimeArrayLike, np.ndarray]\nIntervalOrNA = Union[Interval, float]\n\n_interval_shared_docs: dict[str, str] = {}\n\n_shared_docs_kwargs = {\n "klass": "IntervalArray",\n "qualname": "arrays.IntervalArray",\n "name": "",\n}\n\n\n_interval_shared_docs[\n "class"\n] = """\n%(summary)s\n\nParameters\n----------\ndata : array-like (1-dimensional)\n Array-like (ndarray, :class:`DateTimeArray`, :class:`TimeDeltaArray`) containing\n Interval objects from which to build the %(klass)s.\nclosed : {'left', 'right', 'both', 'neither'}, default 'right'\n Whether the intervals are closed on the left-side, right-side, both or\n neither.\ndtype : dtype or None, default None\n If None, dtype will be inferred.\ncopy : bool, default False\n Copy the input data.\n%(name)s\\nverify_integrity : bool, default True\n Verify that the %(klass)s is valid.\n\nAttributes\n----------\nleft\nright\nclosed\nmid\nlength\nis_empty\nis_non_overlapping_monotonic\n%(extra_attributes)s\\n\nMethods\n-------\nfrom_arrays\nfrom_tuples\nfrom_breaks\ncontains\noverlaps\nset_closed\nto_tuples\n%(extra_methods)s\\n\nSee Also\n--------\nIndex : The base pandas Index type.\nInterval : A bounded slice-like interval; the elements of an %(klass)s.\ninterval_range : Function to create a fixed frequency IntervalIndex.\ncut : Bin values into discrete Intervals.\nqcut : Bin values 
into equal-sized Intervals based on rank or sample quantiles.\n\nNotes\n-----\nSee the `user guide\n<https://pandas.pydata.org/pandas-docs/stable/user_guide/advanced.html#intervalindex>`__\nfor more.\n\n%(examples)s\\n"""\n\n\n@Appender(\n _interval_shared_docs["class"]\n % {\n "klass": "IntervalArray",\n "summary": "Pandas array for interval data that are closed on the same side.",\n "name": "",\n "extra_attributes": "",\n "extra_methods": "",\n "examples": textwrap.dedent(\n """\\n Examples\n --------\n A new ``IntervalArray`` can be constructed directly from an array-like of\n ``Interval`` objects:\n\n >>> pd.arrays.IntervalArray([pd.Interval(0, 1), pd.Interval(1, 5)])\n <IntervalArray>\n [(0, 1], (1, 5]]\n Length: 2, dtype: interval[int64, right]\n\n It may also be constructed using one of the constructor\n methods: :meth:`IntervalArray.from_arrays`,\n :meth:`IntervalArray.from_breaks`, and :meth:`IntervalArray.from_tuples`.\n """\n ),\n }\n)\nclass IntervalArray(IntervalMixin, ExtensionArray):\n can_hold_na = True\n _na_value = _fill_value = np.nan\n\n @property\n def ndim(self) -> Literal[1]:\n return 1\n\n # To make mypy recognize the fields\n _left: IntervalSide\n _right: IntervalSide\n _dtype: IntervalDtype\n\n # ---------------------------------------------------------------------\n # Constructors\n\n def __new__(\n cls,\n data,\n closed: IntervalClosedType | None = None,\n dtype: Dtype | None = None,\n copy: bool = False,\n verify_integrity: bool = True,\n ):\n data = extract_array(data, extract_numpy=True)\n\n if isinstance(data, cls):\n left: IntervalSide = data._left\n right: IntervalSide = data._right\n closed = closed or data.closed\n dtype = IntervalDtype(left.dtype, closed=closed)\n else:\n # don't allow scalars\n if is_scalar(data):\n msg = (\n f"{cls.__name__}(...) 
must be called with a collection "\n f"of some kind, {data} was passed"\n )\n raise TypeError(msg)\n\n # might need to convert empty or purely na data\n data = _maybe_convert_platform_interval(data)\n left, right, infer_closed = intervals_to_interval_bounds(\n data, validate_closed=closed is None\n )\n if left.dtype == object:\n left = lib.maybe_convert_objects(left)\n right = lib.maybe_convert_objects(right)\n closed = closed or infer_closed\n\n left, right, dtype = cls._ensure_simple_new_inputs(\n left,\n right,\n closed=closed,\n copy=copy,\n dtype=dtype,\n )\n\n if verify_integrity:\n cls._validate(left, right, dtype=dtype)\n\n return cls._simple_new(\n left,\n right,\n dtype=dtype,\n )\n\n @classmethod\n def _simple_new(\n cls,\n left: IntervalSide,\n right: IntervalSide,\n dtype: IntervalDtype,\n ) -> Self:\n result = IntervalMixin.__new__(cls)\n result._left = left\n result._right = right\n result._dtype = dtype\n\n return result\n\n @classmethod\n def _ensure_simple_new_inputs(\n cls,\n left,\n right,\n closed: IntervalClosedType | None = None,\n copy: bool = False,\n dtype: Dtype | None = None,\n ) -> tuple[IntervalSide, IntervalSide, IntervalDtype]:\n """Ensure correctness of input parameters for cls._simple_new."""\n from pandas.core.indexes.base import ensure_index\n\n left = ensure_index(left, copy=copy)\n left = maybe_upcast_numeric_to_64bit(left)\n\n right = ensure_index(right, copy=copy)\n right = maybe_upcast_numeric_to_64bit(right)\n\n if closed is None and isinstance(dtype, IntervalDtype):\n closed = dtype.closed\n\n closed = closed or "right"\n\n if dtype is not None:\n # GH 19262: dtype must be an IntervalDtype to override inferred\n dtype = pandas_dtype(dtype)\n if isinstance(dtype, IntervalDtype):\n if dtype.subtype is not None:\n left = left.astype(dtype.subtype)\n right = right.astype(dtype.subtype)\n else:\n msg = f"dtype must be an IntervalDtype, got {dtype}"\n raise TypeError(msg)\n\n if dtype.closed is None:\n # possibly loading an old pickle\n dtype = IntervalDtype(dtype.subtype, closed)\n elif closed != dtype.closed:\n raise ValueError("closed keyword does not match dtype.closed")\n\n # coerce dtypes to match if needed\n if is_float_dtype(left.dtype) and is_integer_dtype(right.dtype):\n right = right.astype(left.dtype)\n elif is_float_dtype(right.dtype) and is_integer_dtype(left.dtype):\n left = left.astype(right.dtype)\n\n if type(left) != type(right):\n msg = (\n f"must not have differing left [{type(left).__name__}] and "\n f"right [{type(right).__name__}] types"\n )\n raise ValueError(msg)\n if isinstance(left.dtype, CategoricalDtype) or is_string_dtype(left.dtype):\n # GH 19016\n msg = (\n "category, object, and string subtypes are not supported "\n "for IntervalArray"\n )\n raise TypeError(msg)\n if isinstance(left, ABCPeriodIndex):\n msg = "Period dtypes are not supported, use a PeriodIndex instead"\n raise ValueError(msg)\n if isinstance(left, ABCDatetimeIndex) and str(left.tz) != str(right.tz):\n msg = (\n "left and right must have the same time zone, got "\n f"'{left.tz}' and '{right.tz}'"\n )\n raise ValueError(msg)\n elif needs_i8_conversion(left.dtype) and left.unit != right.unit:\n # e.g. 
m8[s] vs m8[ms], try to cast to a common dtype GH#55714\n left_arr, right_arr = left._data._ensure_matching_resos(right._data)\n left = ensure_index(left_arr)\n right = ensure_index(right_arr)\n\n # For dt64/td64 we want DatetimeArray/TimedeltaArray instead of ndarray\n left = ensure_wrapped_if_datetimelike(left)\n left = extract_array(left, extract_numpy=True)\n right = ensure_wrapped_if_datetimelike(right)\n right = extract_array(right, extract_numpy=True)\n\n if isinstance(left, ArrowExtensionArray) or isinstance(\n right, ArrowExtensionArray\n ):\n pass\n else:\n lbase = getattr(left, "_ndarray", left)\n lbase = getattr(lbase, "_data", lbase).base\n rbase = getattr(right, "_ndarray", right)\n rbase = getattr(rbase, "_data", rbase).base\n if lbase is not None and lbase is rbase:\n # If these share data, then setitem could corrupt our IA\n right = right.copy()\n\n dtype = IntervalDtype(left.dtype, closed=closed)\n\n return left, right, dtype\n\n @classmethod\n def _from_sequence(\n cls,\n scalars,\n *,\n dtype: Dtype | None = None,\n copy: bool = False,\n ) -> Self:\n return cls(scalars, dtype=dtype, copy=copy)\n\n @classmethod\n def _from_factorized(cls, values: np.ndarray, original: IntervalArray) -> Self:\n return cls._from_sequence(values, dtype=original.dtype)\n\n _interval_shared_docs["from_breaks"] = textwrap.dedent(\n """\n Construct an %(klass)s from an array of splits.\n\n Parameters\n ----------\n breaks : array-like (1-dimensional)\n Left and right bounds for each interval.\n closed : {'left', 'right', 'both', 'neither'}, default 'right'\n Whether the intervals are closed on the left-side, right-side, both\n or neither.\\n %(name)s\n copy : bool, default False\n Copy the data.\n dtype : dtype or None, default None\n If None, dtype will be inferred.\n\n Returns\n -------\n %(klass)s\n\n See Also\n --------\n interval_range : Function to create a fixed frequency IntervalIndex.\n %(klass)s.from_arrays : Construct from a left and right array.\n %(klass)s.from_tuples : Construct from a sequence of tuples.\n\n %(examples)s\\n """\n )\n\n @classmethod\n @Appender(\n _interval_shared_docs["from_breaks"]\n % {\n "klass": "IntervalArray",\n "name": "",\n "examples": textwrap.dedent(\n """\\n Examples\n --------\n >>> pd.arrays.IntervalArray.from_breaks([0, 1, 2, 3])\n <IntervalArray>\n [(0, 1], (1, 2], (2, 3]]\n Length: 3, dtype: interval[int64, right]\n """\n ),\n }\n )\n def from_breaks(\n cls,\n breaks,\n closed: IntervalClosedType | None = "right",\n copy: bool = False,\n dtype: Dtype | None = None,\n ) -> Self:\n breaks = _maybe_convert_platform_interval(breaks)\n\n return cls.from_arrays(breaks[:-1], breaks[1:], closed, copy=copy, dtype=dtype)\n\n _interval_shared_docs["from_arrays"] = textwrap.dedent(\n """\n Construct from two arrays defining the left and right bounds.\n\n Parameters\n ----------\n left : array-like (1-dimensional)\n Left bounds for each interval.\n right : array-like (1-dimensional)\n Right bounds for each interval.\n closed : {'left', 'right', 'both', 'neither'}, default 'right'\n Whether the intervals are closed on the left-side, right-side, both\n or neither.\\n %(name)s\n copy : bool, default False\n Copy the data.\n dtype : dtype, optional\n If None, dtype will be inferred.\n\n Returns\n -------\n %(klass)s\n\n Raises\n ------\n ValueError\n When a value is missing in only one of `left` or `right`.\n When a value in `left` is greater than the corresponding value\n in `right`.\n\n See Also\n --------\n interval_range : Function to create a fixed frequency 
IntervalIndex.\n %(klass)s.from_breaks : Construct an %(klass)s from an array of\n splits.\n %(klass)s.from_tuples : Construct an %(klass)s from an\n array-like of tuples.\n\n Notes\n -----\n Each element of `left` must be less than or equal to the `right`\n element at the same position. If an element is missing, it must be\n missing in both `left` and `right`. A TypeError is raised when\n using an unsupported type for `left` or `right`. At the moment,\n 'category', 'object', and 'string' subtypes are not supported.\n\n %(examples)s\\n """\n )\n\n @classmethod\n @Appender(\n _interval_shared_docs["from_arrays"]\n % {\n "klass": "IntervalArray",\n "name": "",\n "examples": textwrap.dedent(\n """\\n Examples\n --------\n >>> pd.arrays.IntervalArray.from_arrays([0, 1, 2], [1, 2, 3])\n <IntervalArray>\n [(0, 1], (1, 2], (2, 3]]\n Length: 3, dtype: interval[int64, right]\n """\n ),\n }\n )\n def from_arrays(\n cls,\n left,\n right,\n closed: IntervalClosedType | None = "right",\n copy: bool = False,\n dtype: Dtype | None = None,\n ) -> Self:\n left = _maybe_convert_platform_interval(left)\n right = _maybe_convert_platform_interval(right)\n\n left, right, dtype = cls._ensure_simple_new_inputs(\n left,\n right,\n closed=closed,\n copy=copy,\n dtype=dtype,\n )\n cls._validate(left, right, dtype=dtype)\n\n return cls._simple_new(left, right, dtype=dtype)\n\n _interval_shared_docs["from_tuples"] = textwrap.dedent(\n """\n Construct an %(klass)s from an array-like of tuples.\n\n Parameters\n ----------\n data : array-like (1-dimensional)\n Array of tuples.\n closed : {'left', 'right', 'both', 'neither'}, default 'right'\n Whether the intervals are closed on the left-side, right-side, both\n or neither.\\n %(name)s\n copy : bool, default False\n By-default copy the data, this is compat only and ignored.\n dtype : dtype or None, default None\n If None, dtype will be inferred.\n\n Returns\n -------\n %(klass)s\n\n See Also\n --------\n interval_range : Function to create a fixed frequency IntervalIndex.\n %(klass)s.from_arrays : Construct an %(klass)s from a left and\n right array.\n %(klass)s.from_breaks : Construct an %(klass)s from an array of\n splits.\n\n %(examples)s\\n """\n )\n\n @classmethod\n @Appender(\n _interval_shared_docs["from_tuples"]\n % {\n "klass": "IntervalArray",\n "name": "",\n "examples": textwrap.dedent(\n """\\n Examples\n --------\n >>> pd.arrays.IntervalArray.from_tuples([(0, 1), (1, 2)])\n <IntervalArray>\n [(0, 1], (1, 2]]\n Length: 2, dtype: interval[int64, right]\n """\n ),\n }\n )\n def from_tuples(\n cls,\n data,\n closed: IntervalClosedType | None = "right",\n copy: bool = False,\n dtype: Dtype | None = None,\n ) -> Self:\n if len(data):\n left, right = [], []\n else:\n # ensure that empty data keeps input dtype\n left = right = data\n\n for d in data:\n if not isinstance(d, tuple) and isna(d):\n lhs = rhs = np.nan\n else:\n name = cls.__name__\n try:\n # need list of length 2 tuples, e.g. 
[(0, 1), (1, 2), ...]\n lhs, rhs = d\n except ValueError as err:\n msg = f"{name}.from_tuples requires tuples of length 2, got {d}"\n raise ValueError(msg) from err\n except TypeError as err:\n msg = f"{name}.from_tuples received an invalid item, {d}"\n raise TypeError(msg) from err\n left.append(lhs)\n right.append(rhs)\n\n return cls.from_arrays(left, right, closed, copy=False, dtype=dtype)\n\n @classmethod\n def _validate(cls, left, right, dtype: IntervalDtype) -> None:\n """\n Verify that the IntervalArray is valid.\n\n Checks that\n\n * dtype is correct\n * left and right match lengths\n * left and right have the same missing values\n * left is always below right\n """\n if not isinstance(dtype, IntervalDtype):\n msg = f"invalid dtype: {dtype}"\n raise ValueError(msg)\n if len(left) != len(right):\n msg = "left and right must have the same length"\n raise ValueError(msg)\n left_mask = notna(left)\n right_mask = notna(right)\n if not (left_mask == right_mask).all():\n msg = (\n "missing values must be missing in the same "\n "location both left and right sides"\n )\n raise ValueError(msg)\n if not (left[left_mask] <= right[left_mask]).all():\n msg = "left side of interval must be <= right side"\n raise ValueError(msg)\n\n def _shallow_copy(self, left, right) -> Self:\n """\n Return a new IntervalArray with the replacement attributes\n\n Parameters\n ----------\n left : Index\n Values to be used for the left-side of the intervals.\n right : Index\n Values to be used for the right-side of the intervals.\n """\n dtype = IntervalDtype(left.dtype, closed=self.closed)\n left, right, dtype = self._ensure_simple_new_inputs(left, right, dtype=dtype)\n\n return self._simple_new(left, right, dtype=dtype)\n\n # ---------------------------------------------------------------------\n # Descriptive\n\n @property\n def dtype(self) -> IntervalDtype:\n return self._dtype\n\n @property\n def nbytes(self) -> int:\n return self.left.nbytes + self.right.nbytes\n\n @property\n def size(self) -> int:\n # Avoid materializing self.values\n return self.left.size\n\n # ---------------------------------------------------------------------\n # EA Interface\n\n def __iter__(self) -> Iterator:\n return iter(np.asarray(self))\n\n def __len__(self) -> int:\n return len(self._left)\n\n @overload\n def __getitem__(self, key: ScalarIndexer) -> IntervalOrNA:\n ...\n\n @overload\n def __getitem__(self, key: SequenceIndexer) -> Self:\n ...\n\n def __getitem__(self, key: PositionalIndexer) -> Self | IntervalOrNA:\n key = check_array_indexer(self, key)\n left = self._left[key]\n right = self._right[key]\n\n if not isinstance(left, (np.ndarray, ExtensionArray)):\n # scalar\n if is_scalar(left) and isna(left):\n return self._fill_value\n return Interval(left, right, self.closed)\n if np.ndim(left) > 1:\n # GH#30588 multi-dimensional indexer disallowed\n raise ValueError("multi-dimensional indexing not allowed")\n # Argument 2 to "_simple_new" of "IntervalArray" has incompatible type\n # "Union[Period, Timestamp, Timedelta, NaTType, DatetimeArray, TimedeltaArray,\n # ndarray[Any, Any]]"; expected "Union[Union[DatetimeArray, TimedeltaArray],\n # ndarray[Any, Any]]"\n return self._simple_new(left, right, dtype=self.dtype) # type: ignore[arg-type]\n\n def __setitem__(self, key, value) -> None:\n value_left, value_right = self._validate_setitem_value(value)\n key = check_array_indexer(self, key)\n\n self._left[key] = value_left\n self._right[key] = value_right\n\n def _cmp_method(self, other, op):\n # ensure pandas array for 
list-like and eliminate non-interval scalars\n if is_list_like(other):\n if len(self) != len(other):\n raise ValueError("Lengths must match to compare")\n other = pd_array(other)\n elif not isinstance(other, Interval):\n # non-interval scalar -> no matches\n if other is NA:\n # GH#31882\n from pandas.core.arrays import BooleanArray\n\n arr = np.empty(self.shape, dtype=bool)\n mask = np.ones(self.shape, dtype=bool)\n return BooleanArray(arr, mask)\n return invalid_comparison(self, other, op)\n\n # determine the dtype of the elements we want to compare\n if isinstance(other, Interval):\n other_dtype = pandas_dtype("interval")\n elif not isinstance(other.dtype, CategoricalDtype):\n other_dtype = other.dtype\n else:\n # for categorical defer to categories for dtype\n other_dtype = other.categories.dtype\n\n # extract intervals if we have interval categories with matching closed\n if isinstance(other_dtype, IntervalDtype):\n if self.closed != other.categories.closed:\n return invalid_comparison(self, other, op)\n\n other = other.categories._values.take(\n other.codes, allow_fill=True, fill_value=other.categories._na_value\n )\n\n # interval-like -> need same closed and matching endpoints\n if isinstance(other_dtype, IntervalDtype):\n if self.closed != other.closed:\n return invalid_comparison(self, other, op)\n elif not isinstance(other, Interval):\n other = type(self)(other)\n\n if op is operator.eq:\n return (self._left == other.left) & (self._right == other.right)\n elif op is operator.ne:\n return (self._left != other.left) | (self._right != other.right)\n elif op is operator.gt:\n return (self._left > other.left) | (\n (self._left == other.left) & (self._right > other.right)\n )\n elif op is operator.ge:\n return (self == other) | (self > other)\n elif op is operator.lt:\n return (self._left < other.left) | (\n (self._left == other.left) & (self._right < other.right)\n )\n else:\n # operator.lt\n return (self == other) | (self < other)\n\n # non-interval/non-object dtype -> no matches\n if not is_object_dtype(other_dtype):\n return invalid_comparison(self, other, op)\n\n # object dtype -> iteratively check for intervals\n result = np.zeros(len(self), dtype=bool)\n for i, obj in enumerate(other):\n try:\n result[i] = op(self[i], obj)\n except TypeError:\n if obj is NA:\n # comparison with np.nan returns NA\n # github.com/pandas-dev/pandas/pull/37124#discussion_r509095092\n result = result.astype(object)\n result[i] = NA\n else:\n raise\n return result\n\n @unpack_zerodim_and_defer("__eq__")\n def __eq__(self, other):\n return self._cmp_method(other, operator.eq)\n\n @unpack_zerodim_and_defer("__ne__")\n def __ne__(self, other):\n return self._cmp_method(other, operator.ne)\n\n @unpack_zerodim_and_defer("__gt__")\n def __gt__(self, other):\n return self._cmp_method(other, operator.gt)\n\n @unpack_zerodim_and_defer("__ge__")\n def __ge__(self, other):\n return self._cmp_method(other, operator.ge)\n\n @unpack_zerodim_and_defer("__lt__")\n def __lt__(self, other):\n return self._cmp_method(other, operator.lt)\n\n @unpack_zerodim_and_defer("__le__")\n def __le__(self, other):\n return self._cmp_method(other, operator.le)\n\n def argsort(\n self,\n *,\n ascending: bool = True,\n kind: SortKind = "quicksort",\n na_position: str = "last",\n **kwargs,\n ) -> np.ndarray:\n ascending = nv.validate_argsort_with_ascending(ascending, (), kwargs)\n\n if ascending and kind == "quicksort" and na_position == "last":\n # TODO: in an IntervalIndex we can reuse the cached\n # IntervalTree.left_sorter\n return 
np.lexsort((self.right, self.left))\n\n # TODO: other cases we can use lexsort for? much more performant.\n return super().argsort(\n ascending=ascending, kind=kind, na_position=na_position, **kwargs\n )\n\n def min(self, *, axis: AxisInt | None = None, skipna: bool = True) -> IntervalOrNA:\n nv.validate_minmax_axis(axis, self.ndim)\n\n if not len(self):\n return self._na_value\n\n mask = self.isna()\n if mask.any():\n if not skipna:\n return self._na_value\n obj = self[~mask]\n else:\n obj = self\n\n indexer = obj.argsort()[0]\n return obj[indexer]\n\n def max(self, *, axis: AxisInt | None = None, skipna: bool = True) -> IntervalOrNA:\n nv.validate_minmax_axis(axis, self.ndim)\n\n if not len(self):\n return self._na_value\n\n mask = self.isna()\n if mask.any():\n if not skipna:\n return self._na_value\n obj = self[~mask]\n else:\n obj = self\n\n indexer = obj.argsort()[-1]\n return obj[indexer]\n\n def _pad_or_backfill( # pylint: disable=useless-parent-delegation\n self,\n *,\n method: FillnaOptions,\n limit: int | None = None,\n limit_area: Literal["inside", "outside"] | None = None,\n copy: bool = True,\n ) -> Self:\n # TODO(3.0): after EA.fillna 'method' deprecation is enforced, we can remove\n # this method entirely.\n return super()._pad_or_backfill(\n method=method, limit=limit, limit_area=limit_area, copy=copy\n )\n\n def fillna(\n self, value=None, method=None, limit: int | None = None, copy: bool = True\n ) -> Self:\n """\n Fill NA/NaN values using the specified method.\n\n Parameters\n ----------\n value : scalar, dict, Series\n If a scalar value is passed it is used to fill all missing values.\n Alternatively, a Series or dict can be used to fill in different\n values for each index. The value should not be a list. The\n value(s) passed should be either Interval objects or NA/NaN.\n method : {'backfill', 'bfill', 'pad', 'ffill', None}, default None\n (Not implemented yet for IntervalArray)\n Method to use for filling holes in reindexed Series\n limit : int, default None\n (Not implemented yet for IntervalArray)\n If method is specified, this is the maximum number of consecutive\n NaN values to forward/backward fill. In other words, if there is\n a gap with more than this number of consecutive NaNs, it will only\n be partially filled. If method is not specified, this is the\n maximum number of entries along the entire axis where NaNs will be\n filled.\n copy : bool, default True\n Whether to make a copy of the data before filling. If False, then\n the original should be modified and no new memory should be allocated.\n For ExtensionArray subclasses that cannot do this, it is at the\n author's discretion whether to ignore "copy=False" or to raise.\n\n Returns\n -------\n filled : IntervalArray with NA/NaN filled\n """\n if copy is False:\n raise NotImplementedError\n if method is not None:\n return super().fillna(value=value, method=method, limit=limit)\n\n value_left, value_right = self._validate_scalar(value)\n\n left = self.left.fillna(value=value_left)\n right = self.right.fillna(value=value_right)\n return self._shallow_copy(left, right)\n\n def astype(self, dtype, copy: bool = True):\n """\n Cast to an ExtensionArray or NumPy array with dtype 'dtype'.\n\n Parameters\n ----------\n dtype : str or dtype\n Typecode or data-type to which the array is cast.\n\n copy : bool, default True\n Whether to copy the data, even if not necessary. 
If False,\n a copy is made only if the old dtype does not match the\n new dtype.\n\n Returns\n -------\n array : ExtensionArray or ndarray\n ExtensionArray or NumPy ndarray with 'dtype' for its dtype.\n """\n from pandas import Index\n\n if dtype is not None:\n dtype = pandas_dtype(dtype)\n\n if isinstance(dtype, IntervalDtype):\n if dtype == self.dtype:\n return self.copy() if copy else self\n\n if is_float_dtype(self.dtype.subtype) and needs_i8_conversion(\n dtype.subtype\n ):\n # This is allowed on the Index.astype but we disallow it here\n msg = (\n f"Cannot convert {self.dtype} to {dtype}; subtypes are incompatible"\n )\n raise TypeError(msg)\n\n # need to cast to different subtype\n try:\n # We need to use Index rules for astype to prevent casting\n # np.nan entries to int subtypes\n new_left = Index(self._left, copy=False).astype(dtype.subtype)\n new_right = Index(self._right, copy=False).astype(dtype.subtype)\n except IntCastingNaNError:\n # e.g test_subtype_integer\n raise\n except (TypeError, ValueError) as err:\n # e.g. test_subtype_integer_errors f8->u8 can be lossy\n # and raises ValueError\n msg = (\n f"Cannot convert {self.dtype} to {dtype}; subtypes are incompatible"\n )\n raise TypeError(msg) from err\n return self._shallow_copy(new_left, new_right)\n else:\n try:\n return super().astype(dtype, copy=copy)\n except (TypeError, ValueError) as err:\n msg = f"Cannot cast {type(self).__name__} to dtype {dtype}"\n raise TypeError(msg) from err\n\n def equals(self, other) -> bool:\n if type(self) != type(other):\n return False\n\n return bool(\n self.closed == other.closed\n and self.left.equals(other.left)\n and self.right.equals(other.right)\n )\n\n @classmethod\n def _concat_same_type(cls, to_concat: Sequence[IntervalArray]) -> Self:\n """\n Concatenate multiple IntervalArray\n\n Parameters\n ----------\n to_concat : sequence of IntervalArray\n\n Returns\n -------\n IntervalArray\n """\n closed_set = {interval.closed for interval in to_concat}\n if len(closed_set) != 1:\n raise ValueError("Intervals must all be closed on the same side.")\n closed = closed_set.pop()\n\n left: IntervalSide = np.concatenate([interval.left for interval in to_concat])\n right: IntervalSide = np.concatenate([interval.right for interval in to_concat])\n\n left, right, dtype = cls._ensure_simple_new_inputs(left, right, closed=closed)\n\n return cls._simple_new(left, right, dtype=dtype)\n\n def copy(self) -> Self:\n """\n Return a copy of the array.\n\n Returns\n -------\n IntervalArray\n """\n left = self._left.copy()\n right = self._right.copy()\n dtype = self.dtype\n return self._simple_new(left, right, dtype=dtype)\n\n def isna(self) -> np.ndarray:\n return isna(self._left)\n\n def shift(self, periods: int = 1, fill_value: object = None) -> IntervalArray:\n if not len(self) or periods == 0:\n return self.copy()\n\n self._validate_scalar(fill_value)\n\n # ExtensionArray.shift doesn't work for two reasons\n # 1. IntervalArray.dtype.na_value may not be correct for the dtype.\n # 2. 
IntervalArray._from_sequence only accepts NaN for missing values,\n # not other values like NaT\n\n empty_len = min(abs(periods), len(self))\n if isna(fill_value):\n from pandas import Index\n\n fill_value = Index(self._left, copy=False)._na_value\n empty = IntervalArray.from_breaks([fill_value] * (empty_len + 1))\n else:\n empty = self._from_sequence([fill_value] * empty_len, dtype=self.dtype)\n\n if periods > 0:\n a = empty\n b = self[:-periods]\n else:\n a = self[abs(periods) :]\n b = empty\n return self._concat_same_type([a, b])\n\n def take(\n self,\n indices,\n *,\n allow_fill: bool = False,\n fill_value=None,\n axis=None,\n **kwargs,\n ) -> Self:\n """\n Take elements from the IntervalArray.\n\n Parameters\n ----------\n indices : sequence of integers\n Indices to be taken.\n\n allow_fill : bool, default False\n How to handle negative values in `indices`.\n\n * False: negative values in `indices` indicate positional indices\n from the right (the default). This is similar to\n :func:`numpy.take`.\n\n * True: negative values in `indices` indicate\n missing values. These values are set to `fill_value`. Any other\n other negative values raise a ``ValueError``.\n\n fill_value : Interval or NA, optional\n Fill value to use for NA-indices when `allow_fill` is True.\n This may be ``None``, in which case the default NA value for\n the type, ``self.dtype.na_value``, is used.\n\n For many ExtensionArrays, there will be two representations of\n `fill_value`: a user-facing "boxed" scalar, and a low-level\n physical NA value. `fill_value` should be the user-facing version,\n and the implementation should handle translating that to the\n physical version for processing the take if necessary.\n\n axis : any, default None\n Present for compat with IntervalIndex; does nothing.\n\n Returns\n -------\n IntervalArray\n\n Raises\n ------\n IndexError\n When the indices are out of bounds for the array.\n ValueError\n When `indices` contains negative values other than ``-1``\n and `allow_fill` is True.\n """\n nv.validate_take((), kwargs)\n\n fill_left = fill_right = fill_value\n if allow_fill:\n fill_left, fill_right = self._validate_scalar(fill_value)\n\n left_take = take(\n self._left, indices, allow_fill=allow_fill, fill_value=fill_left\n )\n right_take = take(\n self._right, indices, allow_fill=allow_fill, fill_value=fill_right\n )\n\n return self._shallow_copy(left_take, right_take)\n\n def _validate_listlike(self, value):\n # list-like of intervals\n try:\n array = IntervalArray(value)\n self._check_closed_matches(array, name="value")\n value_left, value_right = array.left, array.right\n except TypeError as err:\n # wrong type: not interval or NA\n msg = f"'value' should be an interval type, got {type(value)} instead."\n raise TypeError(msg) from err\n\n try:\n self.left._validate_fill_value(value_left)\n except (LossySetitemError, TypeError) as err:\n msg = (\n "'value' should be a compatible interval type, "\n f"got {type(value)} instead."\n )\n raise TypeError(msg) from err\n\n return value_left, value_right\n\n def _validate_scalar(self, value):\n if isinstance(value, Interval):\n self._check_closed_matches(value, name="value")\n left, right = value.left, value.right\n # TODO: check subdtype match like _validate_setitem_value?\n elif is_valid_na_for_dtype(value, self.left.dtype):\n # GH#18295\n left = right = self.left._na_value\n else:\n raise TypeError(\n "can only insert Interval objects and NA into an IntervalArray"\n )\n return left, right\n\n def _validate_setitem_value(self, value):\n if 
is_valid_na_for_dtype(value, self.left.dtype):\n # na value: need special casing to set directly on numpy arrays\n value = self.left._na_value\n if is_integer_dtype(self.dtype.subtype):\n # can't set NaN on a numpy integer array\n # GH#45484 TypeError, not ValueError, matches what we get with\n # non-NA un-holdable value.\n raise TypeError("Cannot set float NaN to integer-backed IntervalArray")\n value_left, value_right = value, value\n\n elif isinstance(value, Interval):\n # scalar interval\n self._check_closed_matches(value, name="value")\n value_left, value_right = value.left, value.right\n self.left._validate_fill_value(value_left)\n self.left._validate_fill_value(value_right)\n\n else:\n return self._validate_listlike(value)\n\n return value_left, value_right\n\n def value_counts(self, dropna: bool = True) -> Series:\n """\n Returns a Series containing counts of each interval.\n\n Parameters\n ----------\n dropna : bool, default True\n Don't include counts of NaN.\n\n Returns\n -------\n counts : Series\n\n See Also\n --------\n Series.value_counts\n """\n # TODO: implement this is a non-naive way!\n with warnings.catch_warnings():\n warnings.filterwarnings(\n "ignore",\n "The behavior of value_counts with object-dtype is deprecated",\n category=FutureWarning,\n )\n result = value_counts(np.asarray(self), dropna=dropna)\n # Once the deprecation is enforced, we will need to do\n # `result.index = result.index.astype(self.dtype)`\n return result\n\n # ---------------------------------------------------------------------\n # Rendering Methods\n\n def _formatter(self, boxed: bool = False):\n # returning 'str' here causes us to render as e.g. "(0, 1]" instead of\n # "Interval(0, 1, closed='right')"\n return str\n\n # ---------------------------------------------------------------------\n # Vectorized Interval Properties/Attributes\n\n @property\n def left(self) -> Index:\n """\n Return the left endpoints of each Interval in the IntervalArray as an Index.\n\n Examples\n --------\n\n >>> interv_arr = pd.arrays.IntervalArray([pd.Interval(0, 1), pd.Interval(2, 5)])\n >>> interv_arr\n <IntervalArray>\n [(0, 1], (2, 5]]\n Length: 2, dtype: interval[int64, right]\n >>> interv_arr.left\n Index([0, 2], dtype='int64')\n """\n from pandas import Index\n\n return Index(self._left, copy=False)\n\n @property\n def right(self) -> Index:\n """\n Return the right endpoints of each Interval in the IntervalArray as an Index.\n\n Examples\n --------\n\n >>> interv_arr = pd.arrays.IntervalArray([pd.Interval(0, 1), pd.Interval(2, 5)])\n >>> interv_arr\n <IntervalArray>\n [(0, 1], (2, 5]]\n Length: 2, dtype: interval[int64, right]\n >>> interv_arr.right\n Index([1, 5], dtype='int64')\n """\n from pandas import Index\n\n return Index(self._right, copy=False)\n\n @property\n def length(self) -> Index:\n """\n Return an Index with entries denoting the length of each Interval.\n\n Examples\n --------\n\n >>> interv_arr = pd.arrays.IntervalArray([pd.Interval(0, 1), pd.Interval(1, 5)])\n >>> interv_arr\n <IntervalArray>\n [(0, 1], (1, 5]]\n Length: 2, dtype: interval[int64, right]\n >>> interv_arr.length\n Index([1, 4], dtype='int64')\n """\n return self.right - self.left\n\n @property\n def mid(self) -> Index:\n """\n Return the midpoint of each Interval in the IntervalArray as an Index.\n\n Examples\n --------\n\n >>> interv_arr = pd.arrays.IntervalArray([pd.Interval(0, 1), pd.Interval(1, 5)])\n >>> interv_arr\n <IntervalArray>\n [(0, 1], (1, 5]]\n Length: 2, dtype: interval[int64, right]\n >>> interv_arr.mid\n 
Index([0.5, 3.0], dtype='float64')\n """\n try:\n return 0.5 * (self.left + self.right)\n except TypeError:\n # datetime safe version\n return self.left + 0.5 * self.length\n\n _interval_shared_docs["overlaps"] = textwrap.dedent(\n """\n Check elementwise if an Interval overlaps the values in the %(klass)s.\n\n Two intervals overlap if they share a common point, including closed\n endpoints. Intervals that only have an open endpoint in common do not\n overlap.\n\n Parameters\n ----------\n other : %(klass)s\n Interval to check against for an overlap.\n\n Returns\n -------\n ndarray\n Boolean array positionally indicating where an overlap occurs.\n\n See Also\n --------\n Interval.overlaps : Check whether two Interval objects overlap.\n\n Examples\n --------\n %(examples)s\n >>> intervals.overlaps(pd.Interval(0.5, 1.5))\n array([ True, True, False])\n\n Intervals that share closed endpoints overlap:\n\n >>> intervals.overlaps(pd.Interval(1, 3, closed='left'))\n array([ True, True, True])\n\n Intervals that only have an open endpoint in common do not overlap:\n\n >>> intervals.overlaps(pd.Interval(1, 2, closed='right'))\n array([False, True, False])\n """\n )\n\n @Appender(\n _interval_shared_docs["overlaps"]\n % {\n "klass": "IntervalArray",\n "examples": textwrap.dedent(\n """\\n >>> data = [(0, 1), (1, 3), (2, 4)]\n >>> intervals = pd.arrays.IntervalArray.from_tuples(data)\n >>> intervals\n <IntervalArray>\n [(0, 1], (1, 3], (2, 4]]\n Length: 3, dtype: interval[int64, right]\n """\n ),\n }\n )\n def overlaps(self, other):\n if isinstance(other, (IntervalArray, ABCIntervalIndex)):\n raise NotImplementedError\n if not isinstance(other, Interval):\n msg = f"`other` must be Interval-like, got {type(other).__name__}"\n raise TypeError(msg)\n\n # equality is okay if both endpoints are closed (overlap at a point)\n op1 = le if (self.closed_left and other.closed_right) else lt\n op2 = le if (other.closed_left and self.closed_right) else lt\n\n # overlaps is equivalent negation of two interval being disjoint:\n # disjoint = (A.left > B.right) or (B.left > A.right)\n # (simplifying the negation allows this to be done in less operations)\n return op1(self.left, other.right) & op2(other.left, self.right)\n\n # ---------------------------------------------------------------------\n\n @property\n def closed(self) -> IntervalClosedType:\n """\n String describing the inclusive side the intervals.\n\n Either ``left``, ``right``, ``both`` or ``neither``.\n\n Examples\n --------\n\n For arrays:\n\n >>> interv_arr = pd.arrays.IntervalArray([pd.Interval(0, 1), pd.Interval(1, 5)])\n >>> interv_arr\n <IntervalArray>\n [(0, 1], (1, 5]]\n Length: 2, dtype: interval[int64, right]\n >>> interv_arr.closed\n 'right'\n\n For Interval Index:\n\n >>> interv_idx = pd.interval_range(start=0, end=2)\n >>> interv_idx\n IntervalIndex([(0, 1], (1, 2]], dtype='interval[int64, right]')\n >>> interv_idx.closed\n 'right'\n """\n return self.dtype.closed\n\n _interval_shared_docs["set_closed"] = textwrap.dedent(\n """\n Return an identical %(klass)s closed on the specified side.\n\n Parameters\n ----------\n closed : {'left', 'right', 'both', 'neither'}\n Whether the intervals are closed on the left-side, right-side, both\n or neither.\n\n Returns\n -------\n %(klass)s\n\n %(examples)s\\n """\n )\n\n @Appender(\n _interval_shared_docs["set_closed"]\n % {\n "klass": "IntervalArray",\n "examples": textwrap.dedent(\n """\\n Examples\n --------\n >>> index = pd.arrays.IntervalArray.from_breaks(range(4))\n >>> index\n <IntervalArray>\n 
[(0, 1], (1, 2], (2, 3]]\n Length: 3, dtype: interval[int64, right]\n >>> index.set_closed('both')\n <IntervalArray>\n [[0, 1], [1, 2], [2, 3]]\n Length: 3, dtype: interval[int64, both]\n """\n ),\n }\n )\n def set_closed(self, closed: IntervalClosedType) -> Self:\n if closed not in VALID_CLOSED:\n msg = f"invalid option for 'closed': {closed}"\n raise ValueError(msg)\n\n left, right = self._left, self._right\n dtype = IntervalDtype(left.dtype, closed=closed)\n return self._simple_new(left, right, dtype=dtype)\n\n _interval_shared_docs[\n "is_non_overlapping_monotonic"\n ] = """\n Return a boolean whether the %(klass)s is non-overlapping and monotonic.\n\n Non-overlapping means (no Intervals share points), and monotonic means\n either monotonic increasing or monotonic decreasing.\n\n Examples\n --------\n For arrays:\n\n >>> interv_arr = pd.arrays.IntervalArray([pd.Interval(0, 1), pd.Interval(1, 5)])\n >>> interv_arr\n <IntervalArray>\n [(0, 1], (1, 5]]\n Length: 2, dtype: interval[int64, right]\n >>> interv_arr.is_non_overlapping_monotonic\n True\n\n >>> interv_arr = pd.arrays.IntervalArray([pd.Interval(0, 1),\n ... pd.Interval(-1, 0.1)])\n >>> interv_arr\n <IntervalArray>\n [(0.0, 1.0], (-1.0, 0.1]]\n Length: 2, dtype: interval[float64, right]\n >>> interv_arr.is_non_overlapping_monotonic\n False\n\n For Interval Index:\n\n >>> interv_idx = pd.interval_range(start=0, end=2)\n >>> interv_idx\n IntervalIndex([(0, 1], (1, 2]], dtype='interval[int64, right]')\n >>> interv_idx.is_non_overlapping_monotonic\n True\n\n >>> interv_idx = pd.interval_range(start=0, end=2, closed='both')\n >>> interv_idx\n IntervalIndex([[0, 1], [1, 2]], dtype='interval[int64, both]')\n >>> interv_idx.is_non_overlapping_monotonic\n False\n """\n\n @property\n @Appender(\n _interval_shared_docs["is_non_overlapping_monotonic"] % _shared_docs_kwargs\n )\n def is_non_overlapping_monotonic(self) -> bool:\n # must be increasing (e.g., [0, 1), [1, 2), [2, 3), ... )\n # or decreasing (e.g., [-1, 0), [-2, -1), [-3, -2), ...)\n # we already require left <= right\n\n # strict inequality for closed == 'both'; equality implies overlapping\n # at a point when both sides of intervals are included\n if self.closed == "both":\n return bool(\n (self._right[:-1] < self._left[1:]).all()\n or (self._left[:-1] > self._right[1:]).all()\n )\n\n # non-strict inequality when closed != 'both'; at least one side is\n # not included in the intervals, so equality does not imply overlapping\n return bool(\n (self._right[:-1] <= self._left[1:]).all()\n or (self._left[:-1] >= self._right[1:]).all()\n )\n\n # ---------------------------------------------------------------------\n # Conversion\n\n def __array__(\n self, dtype: NpDtype | None = None, copy: bool | None = None\n ) -> np.ndarray:\n """\n Return the IntervalArray's data as a numpy array of Interval\n objects (with dtype='object')\n """\n if copy is False:\n warnings.warn(\n "Starting with NumPy 2.0, the behavior of the 'copy' keyword has "\n "changed and passing 'copy=False' raises an error when returning "\n "a zero-copy NumPy array is not possible. pandas will follow "\n "this behavior starting with pandas 3.0.\nThis conversion to "\n "NumPy requires a copy, but 'copy=False' was passed. 
Consider "\n "using 'np.asarray(..)' instead.",\n FutureWarning,\n stacklevel=find_stack_level(),\n )\n\n left = self._left\n right = self._right\n mask = self.isna()\n closed = self.closed\n\n result = np.empty(len(left), dtype=object)\n for i, left_value in enumerate(left):\n if mask[i]:\n result[i] = np.nan\n else:\n result[i] = Interval(left_value, right[i], closed)\n return result\n\n def __arrow_array__(self, type=None):\n """\n Convert myself into a pyarrow Array.\n """\n import pyarrow\n\n from pandas.core.arrays.arrow.extension_types import ArrowIntervalType\n\n try:\n subtype = pyarrow.from_numpy_dtype(self.dtype.subtype)\n except TypeError as err:\n raise TypeError(\n f"Conversion to arrow with subtype '{self.dtype.subtype}' "\n "is not supported"\n ) from err\n interval_type = ArrowIntervalType(subtype, self.closed)\n storage_array = pyarrow.StructArray.from_arrays(\n [\n pyarrow.array(self._left, type=subtype, from_pandas=True),\n pyarrow.array(self._right, type=subtype, from_pandas=True),\n ],\n names=["left", "right"],\n )\n mask = self.isna()\n if mask.any():\n # if there are missing values, set validity bitmap also on the array level\n null_bitmap = pyarrow.array(~mask).buffers()[1]\n storage_array = pyarrow.StructArray.from_buffers(\n storage_array.type,\n len(storage_array),\n [null_bitmap],\n children=[storage_array.field(0), storage_array.field(1)],\n )\n\n if type is not None:\n if type.equals(interval_type.storage_type):\n return storage_array\n elif isinstance(type, ArrowIntervalType):\n # ensure we have the same subtype and closed attributes\n if not type.equals(interval_type):\n raise TypeError(\n "Not supported to convert IntervalArray to type with "\n f"different 'subtype' ({self.dtype.subtype} vs {type.subtype}) "\n f"and 'closed' ({self.closed} vs {type.closed}) attributes"\n )\n else:\n raise TypeError(\n f"Not supported to convert IntervalArray to '{type}' type"\n )\n\n return pyarrow.ExtensionArray.from_storage(interval_type, storage_array)\n\n _interval_shared_docs["to_tuples"] = textwrap.dedent(\n """\n Return an %(return_type)s of tuples of the form (left, right).\n\n Parameters\n ----------\n na_tuple : bool, default True\n If ``True``, return ``NA`` as a tuple ``(nan, nan)``. 
If ``False``,\n just return ``NA`` as ``nan``.\n\n Returns\n -------\n tuples: %(return_type)s\n %(examples)s\\n """\n )\n\n @Appender(\n _interval_shared_docs["to_tuples"]\n % {\n "return_type": (\n "ndarray (if self is IntervalArray) or Index (if self is IntervalIndex)"\n ),\n "examples": textwrap.dedent(\n """\\n\n Examples\n --------\n For :class:`pandas.IntervalArray`:\n\n >>> idx = pd.arrays.IntervalArray.from_tuples([(0, 1), (1, 2)])\n >>> idx\n <IntervalArray>\n [(0, 1], (1, 2]]\n Length: 2, dtype: interval[int64, right]\n >>> idx.to_tuples()\n array([(0, 1), (1, 2)], dtype=object)\n\n For :class:`pandas.IntervalIndex`:\n\n >>> idx = pd.interval_range(start=0, end=2)\n >>> idx\n IntervalIndex([(0, 1], (1, 2]], dtype='interval[int64, right]')\n >>> idx.to_tuples()\n Index([(0, 1), (1, 2)], dtype='object')\n """\n ),\n }\n )\n def to_tuples(self, na_tuple: bool = True) -> np.ndarray:\n tuples = com.asarray_tuplesafe(zip(self._left, self._right))\n if not na_tuple:\n # GH 18756\n tuples = np.where(~self.isna(), tuples, np.nan)\n return tuples\n\n # ---------------------------------------------------------------------\n\n def _putmask(self, mask: npt.NDArray[np.bool_], value) -> None:\n value_left, value_right = self._validate_setitem_value(value)\n\n if isinstance(self._left, np.ndarray):\n np.putmask(self._left, mask, value_left)\n assert isinstance(self._right, np.ndarray)\n np.putmask(self._right, mask, value_right)\n else:\n self._left._putmask(mask, value_left)\n assert not isinstance(self._right, np.ndarray)\n self._right._putmask(mask, value_right)\n\n def insert(self, loc: int, item: Interval) -> Self:\n """\n Return a new IntervalArray inserting new item at location. Follows\n Python numpy.insert semantics for negative values. Only Interval\n objects and NA can be inserted into an IntervalIndex\n\n Parameters\n ----------\n loc : int\n item : Interval\n\n Returns\n -------\n IntervalArray\n """\n left_insert, right_insert = self._validate_scalar(item)\n\n new_left = self.left.insert(loc, left_insert)\n new_right = self.right.insert(loc, right_insert)\n\n return self._shallow_copy(new_left, new_right)\n\n def delete(self, loc) -> Self:\n if isinstance(self._left, np.ndarray):\n new_left = np.delete(self._left, loc)\n assert isinstance(self._right, np.ndarray)\n new_right = np.delete(self._right, loc)\n else:\n new_left = self._left.delete(loc)\n assert not isinstance(self._right, np.ndarray)\n new_right = self._right.delete(loc)\n return self._shallow_copy(left=new_left, right=new_right)\n\n @Appender(_extension_array_shared_docs["repeat"] % _shared_docs_kwargs)\n def repeat(\n self,\n repeats: int | Sequence[int],\n axis: AxisInt | None = None,\n ) -> Self:\n nv.validate_repeat((), {"axis": axis})\n left_repeat = self.left.repeat(repeats)\n right_repeat = self.right.repeat(repeats)\n return self._shallow_copy(left=left_repeat, right=right_repeat)\n\n _interval_shared_docs["contains"] = textwrap.dedent(\n """\n Check elementwise if the Intervals contain the value.\n\n Return a boolean mask whether the value is contained in the Intervals\n of the %(klass)s.\n\n Parameters\n ----------\n other : scalar\n The value to check whether it is contained in the Intervals.\n\n Returns\n -------\n boolean array\n\n See Also\n --------\n Interval.contains : Check whether Interval object contains value.\n %(klass)s.overlaps : Check if an Interval overlaps the values in the\n %(klass)s.\n\n Examples\n --------\n %(examples)s\n >>> intervals.contains(0.5)\n array([ True, False, False])\n 
"""\n )\n\n @Appender(\n _interval_shared_docs["contains"]\n % {\n "klass": "IntervalArray",\n "examples": textwrap.dedent(\n """\\n >>> intervals = pd.arrays.IntervalArray.from_tuples([(0, 1), (1, 3), (2, 4)])\n >>> intervals\n <IntervalArray>\n [(0, 1], (1, 3], (2, 4]]\n Length: 3, dtype: interval[int64, right]\n """\n ),\n }\n )\n def contains(self, other):\n if isinstance(other, Interval):\n raise NotImplementedError("contains not implemented for two intervals")\n\n return (self._left < other if self.open_left else self._left <= other) & (\n other < self._right if self.open_right else other <= self._right\n )\n\n def isin(self, values: ArrayLike) -> npt.NDArray[np.bool_]:\n if isinstance(values, IntervalArray):\n if self.closed != values.closed:\n # not comparable -> no overlap\n return np.zeros(self.shape, dtype=bool)\n\n if self.dtype == values.dtype:\n # GH#38353 instead of casting to object, operating on a\n # complex128 ndarray is much more performant.\n left = self._combined.view("complex128")\n right = values._combined.view("complex128")\n # error: Argument 1 to "isin" has incompatible type\n # "Union[ExtensionArray, ndarray[Any, Any],\n # ndarray[Any, dtype[Any]]]"; expected\n # "Union[_SupportsArray[dtype[Any]],\n # _NestedSequence[_SupportsArray[dtype[Any]]], bool,\n # int, float, complex, str, bytes, _NestedSequence[\n # Union[bool, int, float, complex, str, bytes]]]"\n return np.isin(left, right).ravel() # type: ignore[arg-type]\n\n elif needs_i8_conversion(self.left.dtype) ^ needs_i8_conversion(\n values.left.dtype\n ):\n # not comparable -> no overlap\n return np.zeros(self.shape, dtype=bool)\n\n return isin(self.astype(object), values.astype(object))\n\n @property\n def _combined(self) -> IntervalSide:\n # error: Item "ExtensionArray" of "ExtensionArray | ndarray[Any, Any]"\n # has no attribute "reshape" [union-attr]\n left = self.left._values.reshape(-1, 1) # type: ignore[union-attr]\n right = self.right._values.reshape(-1, 1) # type: ignore[union-attr]\n if needs_i8_conversion(left.dtype):\n # error: Item "ndarray[Any, Any]" of "Any | ndarray[Any, Any]" has\n # no attribute "_concat_same_type"\n comb = left._concat_same_type( # type: ignore[union-attr]\n [left, right], axis=1\n )\n else:\n comb = np.concatenate([left, right], axis=1)\n return comb\n\n def _from_combined(self, combined: np.ndarray) -> IntervalArray:\n """\n Create a new IntervalArray with our dtype from a 1D complex128 ndarray.\n """\n nc = combined.view("i8").reshape(-1, 2)\n\n dtype = self._left.dtype\n if needs_i8_conversion(dtype):\n assert isinstance(self._left, (DatetimeArray, TimedeltaArray))\n new_left = type(self._left)._from_sequence(nc[:, 0], dtype=dtype)\n assert isinstance(self._right, (DatetimeArray, TimedeltaArray))\n new_right = type(self._right)._from_sequence(nc[:, 1], dtype=dtype)\n else:\n assert isinstance(dtype, np.dtype)\n new_left = nc[:, 0].view(dtype)\n new_right = nc[:, 1].view(dtype)\n return self._shallow_copy(left=new_left, right=new_right)\n\n def unique(self) -> IntervalArray:\n # No overload variant of "__getitem__" of "ExtensionArray" matches argument\n # type "Tuple[slice, int]"\n nc = unique(\n self._combined.view("complex128")[:, 0] # type: ignore[call-overload]\n )\n nc = nc[:, None]\n return self._from_combined(nc)\n\n\ndef _maybe_convert_platform_interval(values) -> ArrayLike:\n """\n Try to do platform conversion, with special casing for IntervalArray.\n Wrapper around maybe_convert_platform that alters the default return\n dtype in certain cases to be compatible 
with IntervalArray. For example,\n empty lists return with integer dtype instead of object dtype, which is\n prohibited for IntervalArray.\n\n Parameters\n ----------\n values : array-like\n\n Returns\n -------\n array\n """\n if isinstance(values, (list, tuple)) and len(values) == 0:\n # GH 19016\n # empty lists/tuples get object dtype by default, but this is\n # prohibited for IntervalArray, so coerce to integer instead\n return np.array([], dtype=np.int64)\n elif not is_list_like(values) or isinstance(values, ABCDataFrame):\n # This will raise later, but we avoid passing to maybe_convert_platform\n return values\n elif isinstance(getattr(values, "dtype", None), CategoricalDtype):\n values = np.asarray(values)\n elif not hasattr(values, "dtype") and not isinstance(values, (list, tuple, range)):\n # TODO: should we just cast these to list?\n return values\n else:\n values = extract_array(values, extract_numpy=True)\n\n if not hasattr(values, "dtype"):\n values = np.asarray(values)\n if values.dtype.kind in "iu" and values.dtype != np.int64:\n values = values.astype(np.int64)\n return values\n
.venv\Lib\site-packages\pandas\core\arrays\interval.py
interval.py
Python
63,830
0.75
0.11658
0.075061
python-kit
388
2024-10-17T18:49:21.417075
Apache-2.0
false
0ab618a469404b8e5304478b269f473e
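The interval.py record above implements the user-facing IntervalArray methods documented in its docstrings (overlaps, contains, set_closed, to_tuples). The following is a minimal usage sketch, assuming a recent pandas release is installed; the variable name `iv` is illustrative, and the expected outputs in the comments are taken from the docstrings embedded in the record, not independently verified here.

# Minimal usage sketch for the IntervalArray methods shown in the
# interval.py record above (assumes a recent pandas; `iv` is illustrative).
import pandas as pd

iv = pd.arrays.IntervalArray.from_tuples([(0, 1), (1, 3), (2, 4)])

# overlaps(): elementwise check against a single Interval; shared closed
# endpoints count as an overlap, shared open endpoints do not.
print(iv.overlaps(pd.Interval(0.5, 1.5)))   # expected per docstring: [ True  True False]

# contains(): elementwise membership test for a scalar.
print(iv.contains(0.5))                     # expected per docstring: [ True False False]

# set_closed(): return an identical array closed on the requested side.
print(iv.set_closed("both"))

# to_tuples(): round-trip back to (left, right) pairs as an object ndarray.
print(iv.to_tuples())                       # [(0, 1) (1, 3) (2, 4)]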
from __future__ import annotations\n\nfrom typing import (\n TYPE_CHECKING,\n Any,\n Callable,\n Literal,\n overload,\n)\nimport warnings\n\nimport numpy as np\n\nfrom pandas._libs import (\n lib,\n missing as libmissing,\n)\nfrom pandas._libs.tslibs import is_supported_dtype\nfrom pandas._typing import (\n ArrayLike,\n AstypeArg,\n AxisInt,\n DtypeObj,\n FillnaOptions,\n InterpolateOptions,\n NpDtype,\n PositionalIndexer,\n Scalar,\n ScalarIndexer,\n Self,\n SequenceIndexer,\n Shape,\n npt,\n)\nfrom pandas.compat import (\n IS64,\n is_platform_windows,\n)\nfrom pandas.errors import AbstractMethodError\nfrom pandas.util._decorators import doc\nfrom pandas.util._exceptions import find_stack_level\nfrom pandas.util._validators import validate_fillna_kwargs\n\nfrom pandas.core.dtypes.base import ExtensionDtype\nfrom pandas.core.dtypes.common import (\n is_bool,\n is_integer_dtype,\n is_list_like,\n is_scalar,\n is_string_dtype,\n pandas_dtype,\n)\nfrom pandas.core.dtypes.dtypes import BaseMaskedDtype\nfrom pandas.core.dtypes.missing import (\n array_equivalent,\n is_valid_na_for_dtype,\n isna,\n notna,\n)\n\nfrom pandas.core import (\n algorithms as algos,\n arraylike,\n missing,\n nanops,\n ops,\n)\nfrom pandas.core.algorithms import (\n factorize_array,\n isin,\n map_array,\n mode,\n take,\n)\nfrom pandas.core.array_algos import (\n masked_accumulations,\n masked_reductions,\n)\nfrom pandas.core.array_algos.quantile import quantile_with_mask\nfrom pandas.core.arraylike import OpsMixin\nfrom pandas.core.arrays._utils import to_numpy_dtype_inference\nfrom pandas.core.arrays.base import ExtensionArray\nfrom pandas.core.construction import (\n array as pd_array,\n ensure_wrapped_if_datetimelike,\n extract_array,\n)\nfrom pandas.core.indexers import check_array_indexer\nfrom pandas.core.ops import invalid_comparison\nfrom pandas.core.util.hashing import hash_array\n\nif TYPE_CHECKING:\n from collections.abc import (\n Iterator,\n Sequence,\n )\n from pandas import Series\n from pandas.core.arrays import BooleanArray\n from pandas._typing import (\n NumpySorter,\n NumpyValueArrayLike,\n )\n from pandas.core.arrays import FloatingArray\n\nfrom pandas.compat.numpy import function as nv\n\n\nclass BaseMaskedArray(OpsMixin, ExtensionArray):\n """\n Base class for masked arrays (which use _data and _mask to store the data).\n\n numpy based\n """\n\n # The value used to fill '_data' to avoid upcasting\n _internal_fill_value: Scalar\n # our underlying data and mask are each ndarrays\n _data: np.ndarray\n _mask: npt.NDArray[np.bool_]\n\n # Fill values used for any/all\n _truthy_value = Scalar # bool(_truthy_value) = True\n _falsey_value = Scalar # bool(_falsey_value) = False\n\n @classmethod\n def _simple_new(cls, values: np.ndarray, mask: npt.NDArray[np.bool_]) -> Self:\n result = BaseMaskedArray.__new__(cls)\n result._data = values\n result._mask = mask\n return result\n\n def __init__(\n self, values: np.ndarray, mask: npt.NDArray[np.bool_], copy: bool = False\n ) -> None:\n # values is supposed to already be validated in the subclass\n if not (isinstance(mask, np.ndarray) and mask.dtype == np.bool_):\n raise TypeError(\n "mask should be boolean numpy array. 
Use "\n "the 'pd.array' function instead"\n )\n if values.shape != mask.shape:\n raise ValueError("values.shape must match mask.shape")\n\n if copy:\n values = values.copy()\n mask = mask.copy()\n\n self._data = values\n self._mask = mask\n\n @classmethod\n def _from_sequence(cls, scalars, *, dtype=None, copy: bool = False) -> Self:\n values, mask = cls._coerce_to_array(scalars, dtype=dtype, copy=copy)\n return cls(values, mask)\n\n @classmethod\n @doc(ExtensionArray._empty)\n def _empty(cls, shape: Shape, dtype: ExtensionDtype):\n values = np.empty(shape, dtype=dtype.type)\n values.fill(cls._internal_fill_value)\n mask = np.ones(shape, dtype=bool)\n result = cls(values, mask)\n if not isinstance(result, cls) or dtype != result.dtype:\n raise NotImplementedError(\n f"Default 'empty' implementation is invalid for dtype='{dtype}'"\n )\n return result\n\n def _formatter(self, boxed: bool = False) -> Callable[[Any], str | None]:\n # NEP 51: https://github.com/numpy/numpy/pull/22449\n return str\n\n @property\n def dtype(self) -> BaseMaskedDtype:\n raise AbstractMethodError(self)\n\n @overload\n def __getitem__(self, item: ScalarIndexer) -> Any:\n ...\n\n @overload\n def __getitem__(self, item: SequenceIndexer) -> Self:\n ...\n\n def __getitem__(self, item: PositionalIndexer) -> Self | Any:\n item = check_array_indexer(self, item)\n\n newmask = self._mask[item]\n if is_bool(newmask):\n # This is a scalar indexing\n if newmask:\n return self.dtype.na_value\n return self._data[item]\n\n return self._simple_new(self._data[item], newmask)\n\n def _pad_or_backfill(\n self,\n *,\n method: FillnaOptions,\n limit: int | None = None,\n limit_area: Literal["inside", "outside"] | None = None,\n copy: bool = True,\n ) -> Self:\n mask = self._mask\n\n if mask.any():\n func = missing.get_fill_func(method, ndim=self.ndim)\n\n npvalues = self._data.T\n new_mask = mask.T\n if copy:\n npvalues = npvalues.copy()\n new_mask = new_mask.copy()\n elif limit_area is not None:\n mask = mask.copy()\n func(npvalues, limit=limit, mask=new_mask)\n\n if limit_area is not None and not mask.all():\n mask = mask.T\n neg_mask = ~mask\n first = neg_mask.argmax()\n last = len(neg_mask) - neg_mask[::-1].argmax() - 1\n if limit_area == "inside":\n new_mask[:first] |= mask[:first]\n new_mask[last + 1 :] |= mask[last + 1 :]\n elif limit_area == "outside":\n new_mask[first + 1 : last] |= mask[first + 1 : last]\n\n if copy:\n return self._simple_new(npvalues.T, new_mask.T)\n else:\n return self\n else:\n if copy:\n new_values = self.copy()\n else:\n new_values = self\n return new_values\n\n @doc(ExtensionArray.fillna)\n def fillna(\n self, value=None, method=None, limit: int | None = None, copy: bool = True\n ) -> Self:\n value, method = validate_fillna_kwargs(value, method)\n\n mask = self._mask\n\n value = missing.check_value_size(value, mask, len(self))\n\n if mask.any():\n if method is not None:\n func = missing.get_fill_func(method, ndim=self.ndim)\n npvalues = self._data.T\n new_mask = mask.T\n if copy:\n npvalues = npvalues.copy()\n new_mask = new_mask.copy()\n func(npvalues, limit=limit, mask=new_mask)\n return self._simple_new(npvalues.T, new_mask.T)\n else:\n # fill with value\n if copy:\n new_values = self.copy()\n else:\n new_values = self[:]\n new_values[mask] = value\n else:\n if copy:\n new_values = self.copy()\n else:\n new_values = self[:]\n return new_values\n\n @classmethod\n def _coerce_to_array(\n cls, values, *, dtype: DtypeObj, copy: bool = False\n ) -> tuple[np.ndarray, np.ndarray]:\n raise 
AbstractMethodError(cls)\n\n def _validate_setitem_value(self, value):\n """\n Check if we have a scalar that we can cast losslessly.\n\n Raises\n ------\n TypeError\n """\n kind = self.dtype.kind\n # TODO: get this all from np_can_hold_element?\n if kind == "b":\n if lib.is_bool(value):\n return value\n\n elif kind == "f":\n if lib.is_integer(value) or lib.is_float(value):\n return value\n\n else:\n if lib.is_integer(value) or (lib.is_float(value) and value.is_integer()):\n return value\n # TODO: unsigned checks\n\n # Note: without the "str" here, the f-string rendering raises in\n # py38 builds.\n raise TypeError(f"Invalid value '{value!s}' for dtype '{self.dtype}'")\n\n def __setitem__(self, key, value) -> None:\n key = check_array_indexer(self, key)\n\n if is_scalar(value):\n if is_valid_na_for_dtype(value, self.dtype):\n self._mask[key] = True\n else:\n value = self._validate_setitem_value(value)\n self._data[key] = value\n self._mask[key] = False\n return\n\n value, mask = self._coerce_to_array(value, dtype=self.dtype)\n\n self._data[key] = value\n self._mask[key] = mask\n\n def __contains__(self, key) -> bool:\n if isna(key) and key is not self.dtype.na_value:\n # GH#52840\n if self._data.dtype.kind == "f" and lib.is_float(key):\n return bool((np.isnan(self._data) & ~self._mask).any())\n\n return bool(super().__contains__(key))\n\n def __iter__(self) -> Iterator:\n if self.ndim == 1:\n if not self._hasna:\n for val in self._data:\n yield val\n else:\n na_value = self.dtype.na_value\n for isna_, val in zip(self._mask, self._data):\n if isna_:\n yield na_value\n else:\n yield val\n else:\n for i in range(len(self)):\n yield self[i]\n\n def __len__(self) -> int:\n return len(self._data)\n\n @property\n def shape(self) -> Shape:\n return self._data.shape\n\n @property\n def ndim(self) -> int:\n return self._data.ndim\n\n def swapaxes(self, axis1, axis2) -> Self:\n data = self._data.swapaxes(axis1, axis2)\n mask = self._mask.swapaxes(axis1, axis2)\n return self._simple_new(data, mask)\n\n def delete(self, loc, axis: AxisInt = 0) -> Self:\n data = np.delete(self._data, loc, axis=axis)\n mask = np.delete(self._mask, loc, axis=axis)\n return self._simple_new(data, mask)\n\n def reshape(self, *args, **kwargs) -> Self:\n data = self._data.reshape(*args, **kwargs)\n mask = self._mask.reshape(*args, **kwargs)\n return self._simple_new(data, mask)\n\n def ravel(self, *args, **kwargs) -> Self:\n # TODO: need to make sure we have the same order for data/mask\n data = self._data.ravel(*args, **kwargs)\n mask = self._mask.ravel(*args, **kwargs)\n return type(self)(data, mask)\n\n @property\n def T(self) -> Self:\n return self._simple_new(self._data.T, self._mask.T)\n\n def round(self, decimals: int = 0, *args, **kwargs):\n """\n Round each value in the array a to the given number of decimals.\n\n Parameters\n ----------\n decimals : int, default 0\n Number of decimal places to round to. 
If decimals is negative,\n it specifies the number of positions to the left of the decimal point.\n *args, **kwargs\n Additional arguments and keywords have no effect but might be\n accepted for compatibility with NumPy.\n\n Returns\n -------\n NumericArray\n Rounded values of the NumericArray.\n\n See Also\n --------\n numpy.around : Round values of an np.array.\n DataFrame.round : Round values of a DataFrame.\n Series.round : Round values of a Series.\n """\n if self.dtype.kind == "b":\n return self\n nv.validate_round(args, kwargs)\n values = np.round(self._data, decimals=decimals, **kwargs)\n\n # Usually we'll get same type as self, but ndarray[bool] casts to float\n return self._maybe_mask_result(values, self._mask.copy())\n\n # ------------------------------------------------------------------\n # Unary Methods\n\n def __invert__(self) -> Self:\n return self._simple_new(~self._data, self._mask.copy())\n\n def __neg__(self) -> Self:\n return self._simple_new(-self._data, self._mask.copy())\n\n def __pos__(self) -> Self:\n return self.copy()\n\n def __abs__(self) -> Self:\n return self._simple_new(abs(self._data), self._mask.copy())\n\n # ------------------------------------------------------------------\n\n def _values_for_json(self) -> np.ndarray:\n return np.asarray(self, dtype=object)\n\n def to_numpy(\n self,\n dtype: npt.DTypeLike | None = None,\n copy: bool = False,\n na_value: object = lib.no_default,\n ) -> np.ndarray:\n """\n Convert to a NumPy Array.\n\n By default converts to an object-dtype NumPy array. Specify the `dtype` and\n `na_value` keywords to customize the conversion.\n\n Parameters\n ----------\n dtype : dtype, default object\n The numpy dtype to convert to.\n copy : bool, default False\n Whether to ensure that the returned value is a not a view on\n the array. Note that ``copy=False`` does not *ensure* that\n ``to_numpy()`` is no-copy. Rather, ``copy=True`` ensure that\n a copy is made, even if not strictly necessary. This is typically\n only possible when no missing values are present and `dtype`\n is the equivalent numpy dtype.\n na_value : scalar, optional\n Scalar missing value indicator to use in numpy array. 
Defaults\n to the native missing value indicator of this array (pd.NA).\n\n Returns\n -------\n numpy.ndarray\n\n Examples\n --------\n An object-dtype is the default result\n\n >>> a = pd.array([True, False, pd.NA], dtype="boolean")\n >>> a.to_numpy()\n array([True, False, <NA>], dtype=object)\n\n When no missing values are present, an equivalent dtype can be used.\n\n >>> pd.array([True, False], dtype="boolean").to_numpy(dtype="bool")\n array([ True, False])\n >>> pd.array([1, 2], dtype="Int64").to_numpy("int64")\n array([1, 2])\n\n However, requesting such dtype will raise a ValueError if\n missing values are present and the default missing value :attr:`NA`\n is used.\n\n >>> a = pd.array([True, False, pd.NA], dtype="boolean")\n >>> a\n <BooleanArray>\n [True, False, <NA>]\n Length: 3, dtype: boolean\n\n >>> a.to_numpy(dtype="bool")\n Traceback (most recent call last):\n ...\n ValueError: cannot convert to bool numpy array in presence of missing values\n\n Specify a valid `na_value` instead\n\n >>> a.to_numpy(dtype="bool", na_value=False)\n array([ True, False, False])\n """\n hasna = self._hasna\n dtype, na_value = to_numpy_dtype_inference(self, dtype, na_value, hasna)\n if dtype is None:\n dtype = object\n\n if hasna:\n if (\n dtype != object\n and not is_string_dtype(dtype)\n and na_value is libmissing.NA\n ):\n raise ValueError(\n f"cannot convert to '{dtype}'-dtype NumPy array "\n "with missing values. Specify an appropriate 'na_value' "\n "for this dtype."\n )\n # don't pass copy to astype -> always need a copy since we are mutating\n with warnings.catch_warnings():\n warnings.filterwarnings("ignore", category=RuntimeWarning)\n data = self._data.astype(dtype)\n data[self._mask] = na_value\n else:\n with warnings.catch_warnings():\n warnings.filterwarnings("ignore", category=RuntimeWarning)\n data = self._data.astype(dtype, copy=copy)\n return data\n\n @doc(ExtensionArray.tolist)\n def tolist(self):\n if self.ndim > 1:\n return [x.tolist() for x in self]\n dtype = None if self._hasna else self._data.dtype\n return self.to_numpy(dtype=dtype, na_value=libmissing.NA).tolist()\n\n @overload\n def astype(self, dtype: npt.DTypeLike, copy: bool = ...) -> np.ndarray:\n ...\n\n @overload\n def astype(self, dtype: ExtensionDtype, copy: bool = ...) -> ExtensionArray:\n ...\n\n @overload\n def astype(self, dtype: AstypeArg, copy: bool = ...) 
-> ArrayLike:\n ...\n\n def astype(self, dtype: AstypeArg, copy: bool = True) -> ArrayLike:\n dtype = pandas_dtype(dtype)\n\n if dtype == self.dtype:\n if copy:\n return self.copy()\n return self\n\n # if we are astyping to another nullable masked dtype, we can fastpath\n if isinstance(dtype, BaseMaskedDtype):\n # TODO deal with NaNs for FloatingArray case\n with warnings.catch_warnings():\n warnings.filterwarnings("ignore", category=RuntimeWarning)\n # TODO: Is rounding what we want long term?\n data = self._data.astype(dtype.numpy_dtype, copy=copy)\n # mask is copied depending on whether the data was copied, and\n # not directly depending on the `copy` keyword\n mask = self._mask if data is self._data else self._mask.copy()\n cls = dtype.construct_array_type()\n return cls(data, mask, copy=False)\n\n if isinstance(dtype, ExtensionDtype):\n eacls = dtype.construct_array_type()\n return eacls._from_sequence(self, dtype=dtype, copy=copy)\n\n na_value: float | np.datetime64 | lib.NoDefault\n\n # coerce\n if dtype.kind == "f":\n # In astype, we consider dtype=float to also mean na_value=np.nan\n na_value = np.nan\n elif dtype.kind == "M":\n na_value = np.datetime64("NaT")\n else:\n na_value = lib.no_default\n\n # to_numpy will also raise, but we get somewhat nicer exception messages here\n if dtype.kind in "iu" and self._hasna:\n raise ValueError("cannot convert NA to integer")\n if dtype.kind == "b" and self._hasna:\n # careful: astype_nansafe converts np.nan to True\n raise ValueError("cannot convert float NaN to bool")\n\n data = self.to_numpy(dtype=dtype, na_value=na_value, copy=copy)\n return data\n\n __array_priority__ = 1000 # higher than ndarray so ops dispatch to us\n\n def __array__(\n self, dtype: NpDtype | None = None, copy: bool | None = None\n ) -> np.ndarray:\n """\n the array interface, return my values\n We return an object array here to preserve our scalar values\n """\n if copy is False:\n if not self._hasna:\n # special case, here we can simply return the underlying data\n return np.array(self._data, dtype=dtype, copy=copy)\n\n warnings.warn(\n "Starting with NumPy 2.0, the behavior of the 'copy' keyword has "\n "changed and passing 'copy=False' raises an error when returning "\n "a zero-copy NumPy array is not possible. pandas will follow "\n "this behavior starting with pandas 3.0.\nThis conversion to "\n "NumPy requires a copy, but 'copy=False' was passed. Consider "\n "using 'np.asarray(..)' instead.",\n FutureWarning,\n stacklevel=find_stack_level(),\n )\n\n if copy is None:\n copy = False # The NumPy copy=False meaning is different here.\n return self.to_numpy(dtype=dtype, copy=copy)\n\n _HANDLED_TYPES: tuple[type, ...]\n\n def __array_ufunc__(self, ufunc: np.ufunc, method: str, *inputs, **kwargs):\n # For MaskedArray inputs, we apply the ufunc to ._data\n # and mask the result.\n\n out = kwargs.get("out", ())\n\n for x in inputs + out:\n if not isinstance(x, self._HANDLED_TYPES + (BaseMaskedArray,)):\n return NotImplemented\n\n # for binary ops, use our custom dunder methods\n result = arraylike.maybe_dispatch_ufunc_to_dunder_op(\n self, ufunc, method, *inputs, **kwargs\n )\n if result is not NotImplemented:\n return result\n\n if "out" in kwargs:\n # e.g. 
test_ufunc_with_out\n return arraylike.dispatch_ufunc_with_out(\n self, ufunc, method, *inputs, **kwargs\n )\n\n if method == "reduce":\n result = arraylike.dispatch_reduction_ufunc(\n self, ufunc, method, *inputs, **kwargs\n )\n if result is not NotImplemented:\n return result\n\n mask = np.zeros(len(self), dtype=bool)\n inputs2 = []\n for x in inputs:\n if isinstance(x, BaseMaskedArray):\n mask |= x._mask\n inputs2.append(x._data)\n else:\n inputs2.append(x)\n\n def reconstruct(x: np.ndarray):\n # we don't worry about scalar `x` here, since we\n # raise for reduce up above.\n from pandas.core.arrays import (\n BooleanArray,\n FloatingArray,\n IntegerArray,\n )\n\n if x.dtype.kind == "b":\n m = mask.copy()\n return BooleanArray(x, m)\n elif x.dtype.kind in "iu":\n m = mask.copy()\n return IntegerArray(x, m)\n elif x.dtype.kind == "f":\n m = mask.copy()\n if x.dtype == np.float16:\n # reached in e.g. np.sqrt on BooleanArray\n # we don't support float16\n x = x.astype(np.float32)\n return FloatingArray(x, m)\n else:\n x[mask] = np.nan\n return x\n\n result = getattr(ufunc, method)(*inputs2, **kwargs)\n if ufunc.nout > 1:\n # e.g. np.divmod\n return tuple(reconstruct(x) for x in result)\n elif method == "reduce":\n # e.g. np.add.reduce; test_ufunc_reduce_raises\n if self._mask.any():\n return self._na_value\n return result\n else:\n return reconstruct(result)\n\n def __arrow_array__(self, type=None):\n """\n Convert myself into a pyarrow Array.\n """\n import pyarrow as pa\n\n return pa.array(self._data, mask=self._mask, type=type)\n\n @property\n def _hasna(self) -> bool:\n # Note: this is expensive right now! The hope is that we can\n # make this faster by having an optional mask, but not have to change\n # source code using it..\n\n # error: Incompatible return value type (got "bool_", expected "bool")\n return self._mask.any() # type: ignore[return-value]\n\n def _propagate_mask(\n self, mask: npt.NDArray[np.bool_] | None, other\n ) -> npt.NDArray[np.bool_]:\n if mask is None:\n mask = self._mask.copy() # TODO: need test for BooleanArray needing a copy\n if other is libmissing.NA:\n # GH#45421 don't alter inplace\n mask = mask | True\n elif is_list_like(other) and len(other) == len(mask):\n mask = mask | isna(other)\n else:\n mask = self._mask | mask\n # Incompatible return value type (got "Optional[ndarray[Any, dtype[bool_]]]",\n # expected "ndarray[Any, dtype[bool_]]")\n return mask # type: ignore[return-value]\n\n def _arith_method(self, other, op):\n op_name = op.__name__\n omask = None\n\n if (\n not hasattr(other, "dtype")\n and is_list_like(other)\n and len(other) == len(self)\n ):\n # Try inferring masked dtype instead of casting to object\n other = pd_array(other)\n other = extract_array(other, extract_numpy=True)\n\n if isinstance(other, BaseMaskedArray):\n other, omask = other._data, other._mask\n\n elif is_list_like(other):\n if not isinstance(other, ExtensionArray):\n other = np.asarray(other)\n if other.ndim > 1:\n raise NotImplementedError("can only perform ops with 1-d structures")\n\n # We wrap the non-masked arithmetic logic used for numpy dtypes\n # in Series/Index arithmetic ops.\n other = ops.maybe_prepare_scalar_for_op(other, (len(self),))\n pd_op = ops.get_array_op(op)\n other = ensure_wrapped_if_datetimelike(other)\n\n if op_name in {"pow", "rpow"} and isinstance(other, np.bool_):\n # Avoid DeprecationWarning: In future, it will be an error\n # for 'np.bool_' scalars to be interpreted as an index\n # e.g. 
test_array_scalar_like_equivalence\n other = bool(other)\n\n mask = self._propagate_mask(omask, other)\n\n if other is libmissing.NA:\n result = np.ones_like(self._data)\n if self.dtype.kind == "b":\n if op_name in {\n "floordiv",\n "rfloordiv",\n "pow",\n "rpow",\n "truediv",\n "rtruediv",\n }:\n # GH#41165 Try to match non-masked Series behavior\n # This is still imperfect GH#46043\n raise NotImplementedError(\n f"operator '{op_name}' not implemented for bool dtypes"\n )\n if op_name in {"mod", "rmod"}:\n dtype = "int8"\n else:\n dtype = "bool"\n result = result.astype(dtype)\n elif "truediv" in op_name and self.dtype.kind != "f":\n # The actual data here doesn't matter since the mask\n # will be all-True, but since this is division, we want\n # to end up with floating dtype.\n result = result.astype(np.float64)\n else:\n # Make sure we do this before the "pow" mask checks\n # to get an expected exception message on shape mismatch.\n if self.dtype.kind in "iu" and op_name in ["floordiv", "mod"]:\n # TODO(GH#30188) ATM we don't match the behavior of non-masked\n # types with respect to floordiv-by-zero\n pd_op = op\n\n with np.errstate(all="ignore"):\n result = pd_op(self._data, other)\n\n if op_name == "pow":\n # 1 ** x is 1.\n mask = np.where((self._data == 1) & ~self._mask, False, mask)\n # x ** 0 is 1.\n if omask is not None:\n mask = np.where((other == 0) & ~omask, False, mask)\n elif other is not libmissing.NA:\n mask = np.where(other == 0, False, mask)\n\n elif op_name == "rpow":\n # 1 ** x is 1.\n if omask is not None:\n mask = np.where((other == 1) & ~omask, False, mask)\n elif other is not libmissing.NA:\n mask = np.where(other == 1, False, mask)\n # x ** 0 is 1.\n mask = np.where((self._data == 0) & ~self._mask, False, mask)\n\n return self._maybe_mask_result(result, mask)\n\n _logical_method = _arith_method\n\n def _cmp_method(self, other, op) -> BooleanArray:\n from pandas.core.arrays import BooleanArray\n\n mask = None\n\n if isinstance(other, BaseMaskedArray):\n other, mask = other._data, other._mask\n\n elif is_list_like(other):\n other = np.asarray(other)\n if other.ndim > 1:\n raise NotImplementedError("can only perform ops with 1-d structures")\n if len(self) != len(other):\n raise ValueError("Lengths must match to compare")\n\n if other is libmissing.NA:\n # numpy does not handle pd.NA well as "other" scalar (it returns\n # a scalar False instead of an array)\n # This may be fixed by NA.__array_ufunc__. Revisit this check\n # once that's implemented.\n result = np.zeros(self._data.shape, dtype="bool")\n mask = np.ones(self._data.shape, dtype="bool")\n else:\n with warnings.catch_warnings():\n # numpy may show a FutureWarning or DeprecationWarning:\n # elementwise comparison failed; returning scalar instead,\n # but in the future will perform elementwise comparison\n # before returning NotImplemented. 
We fall back to the correct\n # behavior today, so that should be fine to ignore.\n warnings.filterwarnings("ignore", "elementwise", FutureWarning)\n warnings.filterwarnings("ignore", "elementwise", DeprecationWarning)\n method = getattr(self._data, f"__{op.__name__}__")\n result = method(other)\n\n if result is NotImplemented:\n result = invalid_comparison(self._data, other, op)\n\n mask = self._propagate_mask(mask, other)\n return BooleanArray(result, mask, copy=False)\n\n def _maybe_mask_result(\n self, result: np.ndarray | tuple[np.ndarray, np.ndarray], mask: np.ndarray\n ):\n """\n Parameters\n ----------\n result : array-like or tuple[array-like]\n mask : array-like bool\n """\n if isinstance(result, tuple):\n # i.e. divmod\n div, mod = result\n return (\n self._maybe_mask_result(div, mask),\n self._maybe_mask_result(mod, mask),\n )\n\n if result.dtype.kind == "f":\n from pandas.core.arrays import FloatingArray\n\n return FloatingArray(result, mask, copy=False)\n\n elif result.dtype.kind == "b":\n from pandas.core.arrays import BooleanArray\n\n return BooleanArray(result, mask, copy=False)\n\n elif lib.is_np_dtype(result.dtype, "m") and is_supported_dtype(result.dtype):\n # e.g. test_numeric_arr_mul_tdscalar_numexpr_path\n from pandas.core.arrays import TimedeltaArray\n\n result[mask] = result.dtype.type("NaT")\n\n if not isinstance(result, TimedeltaArray):\n return TimedeltaArray._simple_new(result, dtype=result.dtype)\n\n return result\n\n elif result.dtype.kind in "iu":\n from pandas.core.arrays import IntegerArray\n\n return IntegerArray(result, mask, copy=False)\n\n else:\n result[mask] = np.nan\n return result\n\n def isna(self) -> np.ndarray:\n return self._mask.copy()\n\n @property\n def _na_value(self):\n return self.dtype.na_value\n\n @property\n def nbytes(self) -> int:\n return self._data.nbytes + self._mask.nbytes\n\n @classmethod\n def _concat_same_type(\n cls,\n to_concat: Sequence[Self],\n axis: AxisInt = 0,\n ) -> Self:\n data = np.concatenate([x._data for x in to_concat], axis=axis)\n mask = np.concatenate([x._mask for x in to_concat], axis=axis)\n return cls(data, mask)\n\n def _hash_pandas_object(\n self, *, encoding: str, hash_key: str, categorize: bool\n ) -> npt.NDArray[np.uint64]:\n hashed_array = hash_array(\n self._data, encoding=encoding, hash_key=hash_key, categorize=categorize\n )\n hashed_array[self.isna()] = hash(self.dtype.na_value)\n return hashed_array\n\n def take(\n self,\n indexer,\n *,\n allow_fill: bool = False,\n fill_value: Scalar | None = None,\n axis: AxisInt = 0,\n ) -> Self:\n # we always fill with 1 internally\n # to avoid upcasting\n data_fill_value = self._internal_fill_value if isna(fill_value) else fill_value\n result = take(\n self._data,\n indexer,\n fill_value=data_fill_value,\n allow_fill=allow_fill,\n axis=axis,\n )\n\n mask = take(\n self._mask, indexer, fill_value=True, allow_fill=allow_fill, axis=axis\n )\n\n # if we are filling\n # we only fill where the indexer is null\n # not existing missing values\n # TODO(jreback) what if we have a non-na float as a fill value?\n if allow_fill and notna(fill_value):\n fill_mask = np.asarray(indexer) == -1\n result[fill_mask] = fill_value\n mask = mask ^ fill_mask\n\n return self._simple_new(result, mask)\n\n # error: Return type "BooleanArray" of "isin" incompatible with return type\n # "ndarray" in supertype "ExtensionArray"\n def isin(self, values: ArrayLike) -> BooleanArray: # type: ignore[override]\n from pandas.core.arrays import BooleanArray\n\n # algorithms.isin will eventually 
convert values to an ndarray, so no extra\n # cost to doing it here first\n values_arr = np.asarray(values)\n result = isin(self._data, values_arr)\n\n if self._hasna:\n values_have_NA = values_arr.dtype == object and any(\n val is self.dtype.na_value for val in values_arr\n )\n\n # For now, NA does not propagate so set result according to presence of NA,\n # see https://github.com/pandas-dev/pandas/pull/38379 for some discussion\n result[self._mask] = values_have_NA\n\n mask = np.zeros(self._data.shape, dtype=bool)\n return BooleanArray(result, mask, copy=False)\n\n def copy(self) -> Self:\n data = self._data.copy()\n mask = self._mask.copy()\n return self._simple_new(data, mask)\n\n @doc(ExtensionArray.duplicated)\n def duplicated(\n self, keep: Literal["first", "last", False] = "first"\n ) -> npt.NDArray[np.bool_]:\n values = self._data\n mask = self._mask\n return algos.duplicated(values, keep=keep, mask=mask)\n\n def unique(self) -> Self:\n """\n Compute the BaseMaskedArray of unique values.\n\n Returns\n -------\n uniques : BaseMaskedArray\n """\n uniques, mask = algos.unique_with_mask(self._data, self._mask)\n return self._simple_new(uniques, mask)\n\n @doc(ExtensionArray.searchsorted)\n def searchsorted(\n self,\n value: NumpyValueArrayLike | ExtensionArray,\n side: Literal["left", "right"] = "left",\n sorter: NumpySorter | None = None,\n ) -> npt.NDArray[np.intp] | np.intp:\n if self._hasna:\n raise ValueError(\n "searchsorted requires array to be sorted, which is impossible "\n "with NAs present."\n )\n if isinstance(value, ExtensionArray):\n value = value.astype(object)\n # Base class searchsorted would cast to object, which is *much* slower.\n return self._data.searchsorted(value, side=side, sorter=sorter)\n\n @doc(ExtensionArray.factorize)\n def factorize(\n self,\n use_na_sentinel: bool = True,\n ) -> tuple[np.ndarray, ExtensionArray]:\n arr = self._data\n mask = self._mask\n\n # Use a sentinel for na; recode and add NA to uniques if necessary below\n codes, uniques = factorize_array(arr, use_na_sentinel=True, mask=mask)\n\n # check that factorize_array correctly preserves dtype.\n assert uniques.dtype == self.dtype.numpy_dtype, (uniques.dtype, self.dtype)\n\n has_na = mask.any()\n if use_na_sentinel or not has_na:\n size = len(uniques)\n else:\n # Make room for an NA value\n size = len(uniques) + 1\n uniques_mask = np.zeros(size, dtype=bool)\n if not use_na_sentinel and has_na:\n na_index = mask.argmax()\n # Insert na with the proper code\n if na_index == 0:\n na_code = np.intp(0)\n else:\n na_code = codes[:na_index].max() + 1\n codes[codes >= na_code] += 1\n codes[codes == -1] = na_code\n # dummy value for uniques; not used since uniques_mask will be True\n uniques = np.insert(uniques, na_code, 0)\n uniques_mask[na_code] = True\n uniques_ea = self._simple_new(uniques, uniques_mask)\n\n return codes, uniques_ea\n\n @doc(ExtensionArray._values_for_argsort)\n def _values_for_argsort(self) -> np.ndarray:\n return self._data\n\n def value_counts(self, dropna: bool = True) -> Series:\n """\n Returns a Series containing counts of each unique value.\n\n Parameters\n ----------\n dropna : bool, default True\n Don't include counts of missing values.\n\n Returns\n -------\n counts : Series\n\n See Also\n --------\n Series.value_counts\n """\n from pandas import (\n Index,\n Series,\n )\n from pandas.arrays import IntegerArray\n\n keys, value_counts, na_counter = algos.value_counts_arraylike(\n self._data, dropna=dropna, mask=self._mask\n )\n mask_index = np.zeros((len(value_counts),), 
dtype=np.bool_)\n mask = mask_index.copy()\n\n if na_counter > 0:\n mask_index[-1] = True\n\n arr = IntegerArray(value_counts, mask)\n index = Index(\n self.dtype.construct_array_type()(\n keys, mask_index # type: ignore[arg-type]\n )\n )\n return Series(arr, index=index, name="count", copy=False)\n\n def _mode(self, dropna: bool = True) -> Self:\n if dropna:\n result = mode(self._data, dropna=dropna, mask=self._mask)\n res_mask = np.zeros(result.shape, dtype=np.bool_)\n else:\n result, res_mask = mode(self._data, dropna=dropna, mask=self._mask)\n result = type(self)(result, res_mask) # type: ignore[arg-type]\n return result[result.argsort()]\n\n @doc(ExtensionArray.equals)\n def equals(self, other) -> bool:\n if type(self) != type(other):\n return False\n if other.dtype != self.dtype:\n return False\n\n # GH#44382 if e.g. self[1] is np.nan and other[1] is pd.NA, we are NOT\n # equal.\n if not np.array_equal(self._mask, other._mask):\n return False\n\n left = self._data[~self._mask]\n right = other._data[~other._mask]\n return array_equivalent(left, right, strict_nan=True, dtype_equal=True)\n\n def _quantile(\n self, qs: npt.NDArray[np.float64], interpolation: str\n ) -> BaseMaskedArray:\n """\n Dispatch to quantile_with_mask, needed because we do not have\n _from_factorized.\n\n Notes\n -----\n We assume that all impacted cases are 1D-only.\n """\n res = quantile_with_mask(\n self._data,\n mask=self._mask,\n # TODO(GH#40932): na_value_for_dtype(self.dtype.numpy_dtype)\n # instead of np.nan\n fill_value=np.nan,\n qs=qs,\n interpolation=interpolation,\n )\n\n if self._hasna:\n # Our result mask is all-False unless we are all-NA, in which\n # case it is all-True.\n if self.ndim == 2:\n # I think this should be out_mask=self.isna().all(axis=1)\n # but am holding off until we have tests\n raise NotImplementedError\n if self.isna().all():\n out_mask = np.ones(res.shape, dtype=bool)\n\n if is_integer_dtype(self.dtype):\n # We try to maintain int dtype if possible for not all-na case\n # as well\n res = np.zeros(res.shape, dtype=self.dtype.numpy_dtype)\n else:\n out_mask = np.zeros(res.shape, dtype=bool)\n else:\n out_mask = np.zeros(res.shape, dtype=bool)\n return self._maybe_mask_result(res, mask=out_mask)\n\n # ------------------------------------------------------------------\n # Reductions\n\n def _reduce(\n self, name: str, *, skipna: bool = True, keepdims: bool = False, **kwargs\n ):\n if name in {"any", "all", "min", "max", "sum", "prod", "mean", "var", "std"}:\n result = getattr(self, name)(skipna=skipna, **kwargs)\n else:\n # median, skew, kurt, sem\n data = self._data\n mask = self._mask\n op = getattr(nanops, f"nan{name}")\n axis = kwargs.pop("axis", None)\n result = op(data, axis=axis, skipna=skipna, mask=mask, **kwargs)\n\n if keepdims:\n if isna(result):\n return self._wrap_na_result(name=name, axis=0, mask_size=(1,))\n else:\n result = result.reshape(1)\n mask = np.zeros(1, dtype=bool)\n return self._maybe_mask_result(result, mask)\n\n if isna(result):\n return libmissing.NA\n else:\n return result\n\n def _wrap_reduction_result(self, name: str, result, *, skipna, axis):\n if isinstance(result, np.ndarray):\n if skipna:\n # we only retain mask for all-NA rows/columns\n mask = self._mask.all(axis=axis)\n else:\n mask = self._mask.any(axis=axis)\n\n return self._maybe_mask_result(result, mask)\n return result\n\n def _wrap_na_result(self, *, name, axis, mask_size):\n mask = np.ones(mask_size, dtype=bool)\n\n float_dtyp = "float32" if self.dtype == "Float32" else "float64"\n if name 
in ["mean", "median", "var", "std", "skew", "kurt"]:\n np_dtype = float_dtyp\n elif name in ["min", "max"] or self.dtype.itemsize == 8:\n np_dtype = self.dtype.numpy_dtype.name\n else:\n is_windows_or_32bit = is_platform_windows() or not IS64\n int_dtyp = "int32" if is_windows_or_32bit else "int64"\n uint_dtyp = "uint32" if is_windows_or_32bit else "uint64"\n np_dtype = {"b": int_dtyp, "i": int_dtyp, "u": uint_dtyp, "f": float_dtyp}[\n self.dtype.kind\n ]\n\n value = np.array([1], dtype=np_dtype)\n return self._maybe_mask_result(value, mask=mask)\n\n def _wrap_min_count_reduction_result(\n self, name: str, result, *, skipna, min_count, axis\n ):\n if min_count == 0 and isinstance(result, np.ndarray):\n return self._maybe_mask_result(result, np.zeros(result.shape, dtype=bool))\n return self._wrap_reduction_result(name, result, skipna=skipna, axis=axis)\n\n def sum(\n self,\n *,\n skipna: bool = True,\n min_count: int = 0,\n axis: AxisInt | None = 0,\n **kwargs,\n ):\n nv.validate_sum((), kwargs)\n\n result = masked_reductions.sum(\n self._data,\n self._mask,\n skipna=skipna,\n min_count=min_count,\n axis=axis,\n )\n return self._wrap_min_count_reduction_result(\n "sum", result, skipna=skipna, min_count=min_count, axis=axis\n )\n\n def prod(\n self,\n *,\n skipna: bool = True,\n min_count: int = 0,\n axis: AxisInt | None = 0,\n **kwargs,\n ):\n nv.validate_prod((), kwargs)\n\n result = masked_reductions.prod(\n self._data,\n self._mask,\n skipna=skipna,\n min_count=min_count,\n axis=axis,\n )\n return self._wrap_min_count_reduction_result(\n "prod", result, skipna=skipna, min_count=min_count, axis=axis\n )\n\n def mean(self, *, skipna: bool = True, axis: AxisInt | None = 0, **kwargs):\n nv.validate_mean((), kwargs)\n result = masked_reductions.mean(\n self._data,\n self._mask,\n skipna=skipna,\n axis=axis,\n )\n return self._wrap_reduction_result("mean", result, skipna=skipna, axis=axis)\n\n def var(\n self, *, skipna: bool = True, axis: AxisInt | None = 0, ddof: int = 1, **kwargs\n ):\n nv.validate_stat_ddof_func((), kwargs, fname="var")\n result = masked_reductions.var(\n self._data,\n self._mask,\n skipna=skipna,\n axis=axis,\n ddof=ddof,\n )\n return self._wrap_reduction_result("var", result, skipna=skipna, axis=axis)\n\n def std(\n self, *, skipna: bool = True, axis: AxisInt | None = 0, ddof: int = 1, **kwargs\n ):\n nv.validate_stat_ddof_func((), kwargs, fname="std")\n result = masked_reductions.std(\n self._data,\n self._mask,\n skipna=skipna,\n axis=axis,\n ddof=ddof,\n )\n return self._wrap_reduction_result("std", result, skipna=skipna, axis=axis)\n\n def min(self, *, skipna: bool = True, axis: AxisInt | None = 0, **kwargs):\n nv.validate_min((), kwargs)\n result = masked_reductions.min(\n self._data,\n self._mask,\n skipna=skipna,\n axis=axis,\n )\n return self._wrap_reduction_result("min", result, skipna=skipna, axis=axis)\n\n def max(self, *, skipna: bool = True, axis: AxisInt | None = 0, **kwargs):\n nv.validate_max((), kwargs)\n result = masked_reductions.max(\n self._data,\n self._mask,\n skipna=skipna,\n axis=axis,\n )\n return self._wrap_reduction_result("max", result, skipna=skipna, axis=axis)\n\n def map(self, mapper, na_action=None):\n return map_array(self.to_numpy(), mapper, na_action=na_action)\n\n def any(self, *, skipna: bool = True, axis: AxisInt | None = 0, **kwargs):\n """\n Return whether any element is truthy.\n\n Returns False unless there is at least one element that is truthy.\n By default, NAs are skipped. 
If ``skipna=False`` is specified and\n missing values are present, similar :ref:`Kleene logic <boolean.kleene>`\n is used as for logical operations.\n\n .. versionchanged:: 1.4.0\n\n Parameters\n ----------\n skipna : bool, default True\n Exclude NA values. If the entire array is NA and `skipna` is\n True, then the result will be False, as for an empty array.\n If `skipna` is False, the result will still be True if there is\n at least one element that is truthy, otherwise NA will be returned\n if there are NA's present.\n axis : int, optional, default 0\n **kwargs : any, default None\n Additional keywords have no effect but might be accepted for\n compatibility with NumPy.\n\n Returns\n -------\n bool or :attr:`pandas.NA`\n\n See Also\n --------\n numpy.any : Numpy version of this method.\n BaseMaskedArray.all : Return whether all elements are truthy.\n\n Examples\n --------\n The result indicates whether any element is truthy (and by default\n skips NAs):\n\n >>> pd.array([True, False, True]).any()\n True\n >>> pd.array([True, False, pd.NA]).any()\n True\n >>> pd.array([False, False, pd.NA]).any()\n False\n >>> pd.array([], dtype="boolean").any()\n False\n >>> pd.array([pd.NA], dtype="boolean").any()\n False\n >>> pd.array([pd.NA], dtype="Float64").any()\n False\n\n With ``skipna=False``, the result can be NA if this is logically\n required (whether ``pd.NA`` is True or False influences the result):\n\n >>> pd.array([True, False, pd.NA]).any(skipna=False)\n True\n >>> pd.array([1, 0, pd.NA]).any(skipna=False)\n True\n >>> pd.array([False, False, pd.NA]).any(skipna=False)\n <NA>\n >>> pd.array([0, 0, pd.NA]).any(skipna=False)\n <NA>\n """\n nv.validate_any((), kwargs)\n\n values = self._data.copy()\n # error: Argument 3 to "putmask" has incompatible type "object";\n # expected "Union[_SupportsArray[dtype[Any]],\n # _NestedSequence[_SupportsArray[dtype[Any]]],\n # bool, int, float, complex, str, bytes,\n # _NestedSequence[Union[bool, int, float, complex, str, bytes]]]"\n np.putmask(values, self._mask, self._falsey_value) # type: ignore[arg-type]\n result = values.any()\n if skipna:\n return result\n else:\n if result or len(self) == 0 or not self._mask.any():\n return result\n else:\n return self.dtype.na_value\n\n def all(self, *, skipna: bool = True, axis: AxisInt | None = 0, **kwargs):\n """\n Return whether all elements are truthy.\n\n Returns True unless there is at least one element that is falsey.\n By default, NAs are skipped. If ``skipna=False`` is specified and\n missing values are present, similar :ref:`Kleene logic <boolean.kleene>`\n is used as for logical operations.\n\n .. versionchanged:: 1.4.0\n\n Parameters\n ----------\n skipna : bool, default True\n Exclude NA values. 
If the entire array is NA and `skipna` is\n True, then the result will be True, as for an empty array.\n If `skipna` is False, the result will still be False if there is\n at least one element that is falsey, otherwise NA will be returned\n if there are NA's present.\n axis : int, optional, default 0\n **kwargs : any, default None\n Additional keywords have no effect but might be accepted for\n compatibility with NumPy.\n\n Returns\n -------\n bool or :attr:`pandas.NA`\n\n See Also\n --------\n numpy.all : Numpy version of this method.\n BooleanArray.any : Return whether any element is truthy.\n\n Examples\n --------\n The result indicates whether all elements are truthy (and by default\n skips NAs):\n\n >>> pd.array([True, True, pd.NA]).all()\n True\n >>> pd.array([1, 1, pd.NA]).all()\n True\n >>> pd.array([True, False, pd.NA]).all()\n False\n >>> pd.array([], dtype="boolean").all()\n True\n >>> pd.array([pd.NA], dtype="boolean").all()\n True\n >>> pd.array([pd.NA], dtype="Float64").all()\n True\n\n With ``skipna=False``, the result can be NA if this is logically\n required (whether ``pd.NA`` is True or False influences the result):\n\n >>> pd.array([True, True, pd.NA]).all(skipna=False)\n <NA>\n >>> pd.array([1, 1, pd.NA]).all(skipna=False)\n <NA>\n >>> pd.array([True, False, pd.NA]).all(skipna=False)\n False\n >>> pd.array([1, 0, pd.NA]).all(skipna=False)\n False\n """\n nv.validate_all((), kwargs)\n\n values = self._data.copy()\n # error: Argument 3 to "putmask" has incompatible type "object";\n # expected "Union[_SupportsArray[dtype[Any]],\n # _NestedSequence[_SupportsArray[dtype[Any]]],\n # bool, int, float, complex, str, bytes,\n # _NestedSequence[Union[bool, int, float, complex, str, bytes]]]"\n np.putmask(values, self._mask, self._truthy_value) # type: ignore[arg-type]\n result = values.all(axis=axis)\n\n if skipna:\n return result\n else:\n if not result or len(self) == 0 or not self._mask.any():\n return result\n else:\n return self.dtype.na_value\n\n def interpolate(\n self,\n *,\n method: InterpolateOptions,\n axis: int,\n index,\n limit,\n limit_direction,\n limit_area,\n copy: bool,\n **kwargs,\n ) -> FloatingArray:\n """\n See NDFrame.interpolate.__doc__.\n """\n # NB: we return type(self) even if copy=False\n if self.dtype.kind == "f":\n if copy:\n data = self._data.copy()\n mask = self._mask.copy()\n else:\n data = self._data\n mask = self._mask\n elif self.dtype.kind in "iu":\n copy = True\n data = self._data.astype("f8")\n mask = self._mask.copy()\n else:\n raise NotImplementedError(\n f"interpolate is not implemented for dtype={self.dtype}"\n )\n\n missing.interpolate_2d_inplace(\n data,\n method=method,\n axis=0,\n index=index,\n limit=limit,\n limit_direction=limit_direction,\n limit_area=limit_area,\n mask=mask,\n **kwargs,\n )\n if not copy:\n return self # type: ignore[return-value]\n if self.dtype.kind == "f":\n return type(self)._simple_new(data, mask) # type: ignore[return-value]\n else:\n from pandas.core.arrays import FloatingArray\n\n return FloatingArray._simple_new(data, mask)\n\n def _accumulate(\n self, name: str, *, skipna: bool = True, **kwargs\n ) -> BaseMaskedArray:\n data = self._data\n mask = self._mask\n\n op = getattr(masked_accumulations, name)\n data, mask = op(data, mask, skipna=skipna, **kwargs)\n\n return self._simple_new(data, mask)\n\n # ------------------------------------------------------------------\n # GroupBy Methods\n\n def _groupby_op(\n self,\n *,\n how: str,\n has_dropped_na: bool,\n min_count: int,\n ngroups: int,\n ids: 
npt.NDArray[np.intp],\n **kwargs,\n ):\n from pandas.core.groupby.ops import WrappedCythonOp\n\n kind = WrappedCythonOp.get_kind_from_how(how)\n op = WrappedCythonOp(how=how, kind=kind, has_dropped_na=has_dropped_na)\n\n # libgroupby functions are responsible for NOT altering mask\n mask = self._mask\n if op.kind != "aggregate":\n result_mask = mask.copy()\n else:\n result_mask = np.zeros(ngroups, dtype=bool)\n\n if how == "rank" and kwargs.get("na_option") in ["top", "bottom"]:\n result_mask[:] = False\n\n res_values = op._cython_op_ndim_compat(\n self._data,\n min_count=min_count,\n ngroups=ngroups,\n comp_ids=ids,\n mask=mask,\n result_mask=result_mask,\n **kwargs,\n )\n\n if op.how == "ohlc":\n arity = op._cython_arity.get(op.how, 1)\n result_mask = np.tile(result_mask, (arity, 1)).T\n\n if op.how in ["idxmin", "idxmax"]:\n # Result values are indexes to take, keep as ndarray\n return res_values\n else:\n # res_values should already have the correct dtype, we just need to\n # wrap in a MaskedArray\n return self._maybe_mask_result(res_values, result_mask)\n\n\ndef transpose_homogeneous_masked_arrays(\n masked_arrays: Sequence[BaseMaskedArray],\n) -> list[BaseMaskedArray]:\n """Transpose masked arrays in a list, but faster.\n\n Input should be a list of 1-dim masked arrays of equal length and all have the\n same dtype. The caller is responsible for ensuring validity of input data.\n """\n masked_arrays = list(masked_arrays)\n dtype = masked_arrays[0].dtype\n\n values = [arr._data.reshape(1, -1) for arr in masked_arrays]\n transposed_values = np.concatenate(\n values,\n axis=0,\n out=np.empty(\n (len(masked_arrays), len(masked_arrays[0])),\n order="F",\n dtype=dtype.numpy_dtype,\n ),\n )\n\n masks = [arr._mask.reshape(1, -1) for arr in masked_arrays]\n transposed_masks = np.concatenate(\n masks, axis=0, out=np.empty_like(transposed_values, dtype=bool)\n )\n\n arr_type = dtype.construct_array_type()\n transposed_arrays: list[BaseMaskedArray] = []\n for i in range(transposed_values.shape[1]):\n transposed_arr = arr_type(transposed_values[:, i], mask=transposed_masks[:, i])\n transposed_arrays.append(transposed_arr)\n\n return transposed_arrays\n
.venv\Lib\site-packages\pandas\core\arrays\masked.py
masked.py
Python
56,578
0.75
0.157579
0.097887
awesome-app
879
2024-10-06T22:28:18.989184
Apache-2.0
false
22cde540a4c63d2c2368b95775fcc1d2
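The masked.py record above documents the Kleene logic behind BaseMaskedArray.any() and all(). A minimal sketch of that behaviour through the public pd.array constructor (illustrative only; it exercises the documented skipna semantics rather than the private _data/_mask internals):

import pandas as pd

kleene = pd.array([False, False, pd.NA], dtype="boolean")

# With skipna=True (the default) the NA is dropped before reducing.
print(kleene.any())                # False

# With skipna=False the NA could flip the outcome, so Kleene logic propagates NA.
print(kleene.any(skipna=False))    # <NA>

all_true = pd.array([True, True, pd.NA], dtype="boolean")
print(all_true.all())              # True  -- NA skipped
print(all_true.all(skipna=False))  # <NA>  -- a hidden False would change the answer

The same rules apply to the nullable integer and float arrays, since they share the BaseMaskedArray reduction path shown in the source.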
from __future__ import annotations\n\nimport numbers\nfrom typing import (\n TYPE_CHECKING,\n Any,\n Callable,\n)\n\nimport numpy as np\n\nfrom pandas._libs import (\n lib,\n missing as libmissing,\n)\nfrom pandas.errors import AbstractMethodError\nfrom pandas.util._decorators import cache_readonly\n\nfrom pandas.core.dtypes.common import (\n is_integer_dtype,\n is_string_dtype,\n pandas_dtype,\n)\n\nfrom pandas.core.arrays.masked import (\n BaseMaskedArray,\n BaseMaskedDtype,\n)\n\nif TYPE_CHECKING:\n from collections.abc import Mapping\n\n import pyarrow\n\n from pandas._typing import (\n Dtype,\n DtypeObj,\n Self,\n npt,\n )\n\n\nclass NumericDtype(BaseMaskedDtype):\n _default_np_dtype: np.dtype\n _checker: Callable[[Any], bool] # is_foo_dtype\n\n def __repr__(self) -> str:\n return f"{self.name}Dtype()"\n\n @cache_readonly\n def is_signed_integer(self) -> bool:\n return self.kind == "i"\n\n @cache_readonly\n def is_unsigned_integer(self) -> bool:\n return self.kind == "u"\n\n @property\n def _is_numeric(self) -> bool:\n return True\n\n def __from_arrow__(\n self, array: pyarrow.Array | pyarrow.ChunkedArray\n ) -> BaseMaskedArray:\n """\n Construct IntegerArray/FloatingArray from pyarrow Array/ChunkedArray.\n """\n import pyarrow\n\n from pandas.core.arrays.arrow._arrow_utils import (\n pyarrow_array_to_numpy_and_mask,\n )\n\n array_class = self.construct_array_type()\n\n pyarrow_type = pyarrow.from_numpy_dtype(self.type)\n if not array.type.equals(pyarrow_type) and not pyarrow.types.is_null(\n array.type\n ):\n # test_from_arrow_type_error raise for string, but allow\n # through itemsize conversion GH#31896\n rt_dtype = pandas_dtype(array.type.to_pandas_dtype())\n if rt_dtype.kind not in "iuf":\n # Could allow "c" or potentially disallow float<->int conversion,\n # but at the moment we specifically test that uint<->int works\n raise TypeError(\n f"Expected array of {self} type, got {array.type} instead"\n )\n\n array = array.cast(pyarrow_type)\n\n if isinstance(array, pyarrow.ChunkedArray):\n # TODO this "if" can be removed when requiring pyarrow >= 10.0, which fixed\n # combine_chunks for empty arrays https://github.com/apache/arrow/pull/13757\n if array.num_chunks == 0:\n array = pyarrow.array([], type=array.type)\n else:\n array = array.combine_chunks()\n\n data, mask = pyarrow_array_to_numpy_and_mask(array, dtype=self.numpy_dtype)\n return array_class(data.copy(), ~mask, copy=False)\n\n @classmethod\n def _get_dtype_mapping(cls) -> Mapping[np.dtype, NumericDtype]:\n raise AbstractMethodError(cls)\n\n @classmethod\n def _standardize_dtype(cls, dtype: NumericDtype | str | np.dtype) -> NumericDtype:\n """\n Convert a string representation or a numpy dtype to NumericDtype.\n """\n if isinstance(dtype, str) and (dtype.startswith(("Int", "UInt", "Float"))):\n # Avoid DeprecationWarning from NumPy about np.dtype("Int64")\n # https://github.com/numpy/numpy/pull/7476\n dtype = dtype.lower()\n\n if not isinstance(dtype, NumericDtype):\n mapping = cls._get_dtype_mapping()\n try:\n dtype = mapping[np.dtype(dtype)]\n except KeyError as err:\n raise ValueError(f"invalid dtype specified {dtype}") from err\n return dtype\n\n @classmethod\n def _safe_cast(cls, values: np.ndarray, dtype: np.dtype, copy: bool) -> np.ndarray:\n """\n Safely cast the values to the given dtype.\n\n "safe" in this context means the casting is lossless.\n """\n raise AbstractMethodError(cls)\n\n\ndef _coerce_to_data_and_mask(\n values, dtype, copy: bool, dtype_cls: type[NumericDtype], default_dtype: np.dtype\n):\n checker 
= dtype_cls._checker\n\n mask = None\n inferred_type = None\n\n if dtype is None and hasattr(values, "dtype"):\n if checker(values.dtype):\n dtype = values.dtype\n\n if dtype is not None:\n dtype = dtype_cls._standardize_dtype(dtype)\n\n cls = dtype_cls.construct_array_type()\n if isinstance(values, cls):\n values, mask = values._data, values._mask\n if dtype is not None:\n values = values.astype(dtype.numpy_dtype, copy=False)\n\n if copy:\n values = values.copy()\n mask = mask.copy()\n return values, mask, dtype, inferred_type\n\n original = values\n if not copy:\n values = np.asarray(values)\n else:\n values = np.array(values, copy=copy)\n inferred_type = None\n if values.dtype == object or is_string_dtype(values.dtype):\n inferred_type = lib.infer_dtype(values, skipna=True)\n if inferred_type == "boolean" and dtype is None:\n name = dtype_cls.__name__.strip("_")\n raise TypeError(f"{values.dtype} cannot be converted to {name}")\n\n elif values.dtype.kind == "b" and checker(dtype):\n if not copy:\n values = np.asarray(values, dtype=default_dtype)\n else:\n values = np.array(values, dtype=default_dtype, copy=copy)\n\n elif values.dtype.kind not in "iuf":\n name = dtype_cls.__name__.strip("_")\n raise TypeError(f"{values.dtype} cannot be converted to {name}")\n\n if values.ndim != 1:\n raise TypeError("values must be a 1D list-like")\n\n if mask is None:\n if values.dtype.kind in "iu":\n # fastpath\n mask = np.zeros(len(values), dtype=np.bool_)\n else:\n mask = libmissing.is_numeric_na(values)\n else:\n assert len(mask) == len(values)\n\n if mask.ndim != 1:\n raise TypeError("mask must be a 1D list-like")\n\n # infer dtype if needed\n if dtype is None:\n dtype = default_dtype\n else:\n dtype = dtype.numpy_dtype\n\n if is_integer_dtype(dtype) and values.dtype.kind == "f" and len(values) > 0:\n if mask.all():\n values = np.ones(values.shape, dtype=dtype)\n else:\n idx = np.nanargmax(values)\n if int(values[idx]) != original[idx]:\n # We have ints that lost precision during the cast.\n inferred_type = lib.infer_dtype(original, skipna=True)\n if (\n inferred_type not in ["floating", "mixed-integer-float"]\n and not mask.any()\n ):\n values = np.asarray(original, dtype=dtype)\n else:\n values = np.asarray(original, dtype="object")\n\n # we copy as need to coerce here\n if mask.any():\n values = values.copy()\n values[mask] = cls._internal_fill_value\n if inferred_type in ("string", "unicode"):\n # casts from str are always safe since they raise\n # a ValueError if the str cannot be parsed into a float\n values = values.astype(dtype, copy=copy)\n else:\n values = dtype_cls._safe_cast(values, dtype, copy=False)\n\n return values, mask, dtype, inferred_type\n\n\nclass NumericArray(BaseMaskedArray):\n """\n Base class for IntegerArray and FloatingArray.\n """\n\n _dtype_cls: type[NumericDtype]\n\n def __init__(\n self, values: np.ndarray, mask: npt.NDArray[np.bool_], copy: bool = False\n ) -> None:\n checker = self._dtype_cls._checker\n if not (isinstance(values, np.ndarray) and checker(values.dtype)):\n descr = (\n "floating"\n if self._dtype_cls.kind == "f" # type: ignore[comparison-overlap]\n else "integer"\n )\n raise TypeError(\n f"values should be {descr} numpy array. 
Use "\n "the 'pd.array' function instead"\n )\n if values.dtype == np.float16:\n # If we don't raise here, then accessing self.dtype would raise\n raise TypeError("FloatingArray does not support np.float16 dtype.")\n\n super().__init__(values, mask, copy=copy)\n\n @cache_readonly\n def dtype(self) -> NumericDtype:\n mapping = self._dtype_cls._get_dtype_mapping()\n return mapping[self._data.dtype]\n\n @classmethod\n def _coerce_to_array(\n cls, value, *, dtype: DtypeObj, copy: bool = False\n ) -> tuple[np.ndarray, np.ndarray]:\n dtype_cls = cls._dtype_cls\n default_dtype = dtype_cls._default_np_dtype\n values, mask, _, _ = _coerce_to_data_and_mask(\n value, dtype, copy, dtype_cls, default_dtype\n )\n return values, mask\n\n @classmethod\n def _from_sequence_of_strings(\n cls, strings, *, dtype: Dtype | None = None, copy: bool = False\n ) -> Self:\n from pandas.core.tools.numeric import to_numeric\n\n scalars = to_numeric(strings, errors="raise", dtype_backend="numpy_nullable")\n return cls._from_sequence(scalars, dtype=dtype, copy=copy)\n\n _HANDLED_TYPES = (np.ndarray, numbers.Number)\n
.venv\Lib\site-packages\pandas\core\arrays\numeric.py
numeric.py
Python
9,165
0.95
0.192308
0.064378
python-kit
393
2024-07-13T07:43:01.048148
BSD-3-Clause
false
a0933b28a5247f05c6749ee340085192
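numeric.py above describes how _coerce_to_data_and_mask and _safe_cast turn input into a data array plus boolean mask, refusing lossy casts. A small sketch against the public pd.array API, under the assumption that a lossy float-to-integer coercion raises (the exact exception type is an implementation detail, so both TypeError and ValueError are caught here):

import pandas as pd

# Missing values in the input become True entries in the mask; data stays int64.
ints = pd.array([1, 2, None], dtype="Int64")
print(ints)        # [1, 2, <NA>]
print(ints.dtype)  # Int64

# Whole floats cast losslessly to the nullable integer dtype.
print(pd.array([1.0, 2.0], dtype="Int64"))

# A cast that would lose precision is rejected rather than silently truncated.
try:
    pd.array([1.5], dtype="Int64")
except (TypeError, ValueError) as err:
    print(type(err).__name__, err)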
from __future__ import annotations\n\nfrom typing import (\n TYPE_CHECKING,\n Any,\n Literal,\n)\n\nimport numpy as np\n\nfrom pandas._libs import lib\nfrom pandas._libs.tslibs import is_supported_dtype\nfrom pandas.compat.numpy import function as nv\n\nfrom pandas.core.dtypes.astype import astype_array\nfrom pandas.core.dtypes.cast import construct_1d_object_array_from_listlike\nfrom pandas.core.dtypes.common import pandas_dtype\nfrom pandas.core.dtypes.dtypes import NumpyEADtype\nfrom pandas.core.dtypes.missing import isna\n\nfrom pandas.core import (\n arraylike,\n missing,\n nanops,\n ops,\n)\nfrom pandas.core.arraylike import OpsMixin\nfrom pandas.core.arrays._mixins import NDArrayBackedExtensionArray\nfrom pandas.core.construction import ensure_wrapped_if_datetimelike\nfrom pandas.core.strings.object_array import ObjectStringArrayMixin\n\nif TYPE_CHECKING:\n from collections.abc import Callable\n\n from pandas._typing import (\n AxisInt,\n Dtype,\n FillnaOptions,\n InterpolateOptions,\n NpDtype,\n Scalar,\n Self,\n npt,\n )\n\n from pandas import Index\n\n\n# error: Definition of "_concat_same_type" in base class "NDArrayBacked" is\n# incompatible with definition in base class "ExtensionArray"\nclass NumpyExtensionArray( # type: ignore[misc]\n OpsMixin,\n NDArrayBackedExtensionArray,\n ObjectStringArrayMixin,\n):\n """\n A pandas ExtensionArray for NumPy data.\n\n This is mostly for internal compatibility, and is not especially\n useful on its own.\n\n Parameters\n ----------\n values : ndarray\n The NumPy ndarray to wrap. Must be 1-dimensional.\n copy : bool, default False\n Whether to copy `values`.\n\n Attributes\n ----------\n None\n\n Methods\n -------\n None\n\n Examples\n --------\n >>> pd.arrays.NumpyExtensionArray(np.array([0, 1, 2, 3]))\n <NumpyExtensionArray>\n [0, 1, 2, 3]\n Length: 4, dtype: int64\n """\n\n # If you're wondering why pd.Series(cls) doesn't put the array in an\n # ExtensionBlock, search for `ABCNumpyExtensionArray`. 
We check for\n # that _typ to ensure that users don't unnecessarily use EAs inside\n # pandas internals, which turns off things like block consolidation.\n _typ = "npy_extension"\n __array_priority__ = 1000\n _ndarray: np.ndarray\n _dtype: NumpyEADtype\n _internal_fill_value = np.nan\n\n # ------------------------------------------------------------------------\n # Constructors\n\n def __init__(\n self, values: np.ndarray | NumpyExtensionArray, copy: bool = False\n ) -> None:\n if isinstance(values, type(self)):\n values = values._ndarray\n if not isinstance(values, np.ndarray):\n raise ValueError(\n f"'values' must be a NumPy array, not {type(values).__name__}"\n )\n\n if values.ndim == 0:\n # Technically we support 2, but do not advertise that fact.\n raise ValueError("NumpyExtensionArray must be 1-dimensional.")\n\n if copy:\n values = values.copy()\n\n dtype = NumpyEADtype(values.dtype)\n super().__init__(values, dtype)\n\n @classmethod\n def _from_sequence(\n cls, scalars, *, dtype: Dtype | None = None, copy: bool = False\n ) -> NumpyExtensionArray:\n if isinstance(dtype, NumpyEADtype):\n dtype = dtype._dtype\n\n # error: Argument "dtype" to "asarray" has incompatible type\n # "Union[ExtensionDtype, str, dtype[Any], dtype[floating[_64Bit]], Type[object],\n # None]"; expected "Union[dtype[Any], None, type, _SupportsDType, str,\n # Union[Tuple[Any, int], Tuple[Any, Union[int, Sequence[int]]], List[Any],\n # _DTypeDict, Tuple[Any, Any]]]"\n result = np.asarray(scalars, dtype=dtype) # type: ignore[arg-type]\n if (\n result.ndim > 1\n and not hasattr(scalars, "dtype")\n and (dtype is None or dtype == object)\n ):\n # e.g. list-of-tuples\n result = construct_1d_object_array_from_listlike(scalars)\n\n if copy and result is scalars:\n result = result.copy()\n return cls(result)\n\n # ------------------------------------------------------------------------\n # Data\n\n @property\n def dtype(self) -> NumpyEADtype:\n return self._dtype\n\n # ------------------------------------------------------------------------\n # NumPy Array Interface\n\n def __array__(\n self, dtype: NpDtype | None = None, copy: bool | None = None\n ) -> np.ndarray:\n if copy is not None:\n # Note: branch avoids `copy=None` for NumPy 1.x support\n return np.array(self._ndarray, dtype=dtype, copy=copy)\n return np.asarray(self._ndarray, dtype=dtype)\n\n def __array_ufunc__(self, ufunc: np.ufunc, method: str, *inputs, **kwargs):\n # Lightly modified version of\n # https://numpy.org/doc/stable/reference/generated/numpy.lib.mixins.NDArrayOperatorsMixin.html\n # The primary modification is not boxing scalar return values\n # in NumpyExtensionArray, since pandas' ExtensionArrays are 1-d.\n out = kwargs.get("out", ())\n\n result = arraylike.maybe_dispatch_ufunc_to_dunder_op(\n self, ufunc, method, *inputs, **kwargs\n )\n if result is not NotImplemented:\n return result\n\n if "out" in kwargs:\n # e.g. test_ufunc_unary\n return arraylike.dispatch_ufunc_with_out(\n self, ufunc, method, *inputs, **kwargs\n )\n\n if method == "reduce":\n result = arraylike.dispatch_reduction_ufunc(\n self, ufunc, method, *inputs, **kwargs\n )\n if result is not NotImplemented:\n # e.g. 
tests.series.test_ufunc.TestNumpyReductions\n return result\n\n # Defer to the implementation of the ufunc on unwrapped values.\n inputs = tuple(\n x._ndarray if isinstance(x, NumpyExtensionArray) else x for x in inputs\n )\n if out:\n kwargs["out"] = tuple(\n x._ndarray if isinstance(x, NumpyExtensionArray) else x for x in out\n )\n result = getattr(ufunc, method)(*inputs, **kwargs)\n\n if ufunc.nout > 1:\n # multiple return values; re-box array-like results\n return tuple(type(self)(x) for x in result)\n elif method == "at":\n # no return value\n return None\n elif method == "reduce":\n if isinstance(result, np.ndarray):\n # e.g. test_np_reduce_2d\n return type(self)(result)\n\n # e.g. test_np_max_nested_tuples\n return result\n else:\n # one return value; re-box array-like results\n return type(self)(result)\n\n # ------------------------------------------------------------------------\n # Pandas ExtensionArray Interface\n\n def astype(self, dtype, copy: bool = True):\n dtype = pandas_dtype(dtype)\n\n if dtype == self.dtype:\n if copy:\n return self.copy()\n return self\n\n result = astype_array(self._ndarray, dtype=dtype, copy=copy)\n return result\n\n def isna(self) -> np.ndarray:\n return isna(self._ndarray)\n\n def _validate_scalar(self, fill_value):\n if fill_value is None:\n # Primarily for subclasses\n fill_value = self.dtype.na_value\n return fill_value\n\n def _values_for_factorize(self) -> tuple[np.ndarray, float | None]:\n if self.dtype.kind in "iub":\n fv = None\n else:\n fv = np.nan\n return self._ndarray, fv\n\n # Base EA class (and all other EA classes) don't have limit_area keyword\n # This can be removed here as well when the interpolate ffill/bfill method\n # deprecation is enforced\n def _pad_or_backfill(\n self,\n *,\n method: FillnaOptions,\n limit: int | None = None,\n limit_area: Literal["inside", "outside"] | None = None,\n copy: bool = True,\n ) -> Self:\n """\n ffill or bfill along axis=0.\n """\n if copy:\n out_data = self._ndarray.copy()\n else:\n out_data = self._ndarray\n\n meth = missing.clean_fill_method(method)\n missing.pad_or_backfill_inplace(\n out_data.T,\n method=meth,\n axis=0,\n limit=limit,\n limit_area=limit_area,\n )\n\n if not copy:\n return self\n return type(self)._simple_new(out_data, dtype=self.dtype)\n\n def interpolate(\n self,\n *,\n method: InterpolateOptions,\n axis: int,\n index: Index,\n limit,\n limit_direction,\n limit_area,\n copy: bool,\n **kwargs,\n ) -> Self:\n """\n See NDFrame.interpolate.__doc__.\n """\n # NB: we return type(self) even if copy=False\n if not self.dtype._is_numeric:\n raise TypeError(f"Cannot interpolate with {self.dtype} dtype")\n\n if not copy:\n out_data = self._ndarray\n else:\n out_data = self._ndarray.copy()\n\n # TODO: assert we have floating dtype?\n missing.interpolate_2d_inplace(\n out_data,\n method=method,\n axis=axis,\n index=index,\n limit=limit,\n limit_direction=limit_direction,\n limit_area=limit_area,\n **kwargs,\n )\n if not copy:\n return self\n return type(self)._simple_new(out_data, dtype=self.dtype)\n\n # ------------------------------------------------------------------------\n # Reductions\n\n def any(\n self,\n *,\n axis: AxisInt | None = None,\n out=None,\n keepdims: bool = False,\n skipna: bool = True,\n ):\n nv.validate_any((), {"out": out, "keepdims": keepdims})\n result = nanops.nanany(self._ndarray, axis=axis, skipna=skipna)\n return self._wrap_reduction_result(axis, result)\n\n def all(\n self,\n *,\n axis: AxisInt | None = None,\n out=None,\n keepdims: bool = False,\n 
skipna: bool = True,\n ):\n nv.validate_all((), {"out": out, "keepdims": keepdims})\n result = nanops.nanall(self._ndarray, axis=axis, skipna=skipna)\n return self._wrap_reduction_result(axis, result)\n\n def min(\n self, *, axis: AxisInt | None = None, skipna: bool = True, **kwargs\n ) -> Scalar:\n nv.validate_min((), kwargs)\n result = nanops.nanmin(\n values=self._ndarray, axis=axis, mask=self.isna(), skipna=skipna\n )\n return self._wrap_reduction_result(axis, result)\n\n def max(\n self, *, axis: AxisInt | None = None, skipna: bool = True, **kwargs\n ) -> Scalar:\n nv.validate_max((), kwargs)\n result = nanops.nanmax(\n values=self._ndarray, axis=axis, mask=self.isna(), skipna=skipna\n )\n return self._wrap_reduction_result(axis, result)\n\n def sum(\n self,\n *,\n axis: AxisInt | None = None,\n skipna: bool = True,\n min_count: int = 0,\n **kwargs,\n ) -> Scalar:\n nv.validate_sum((), kwargs)\n result = nanops.nansum(\n self._ndarray, axis=axis, skipna=skipna, min_count=min_count\n )\n return self._wrap_reduction_result(axis, result)\n\n def prod(\n self,\n *,\n axis: AxisInt | None = None,\n skipna: bool = True,\n min_count: int = 0,\n **kwargs,\n ) -> Scalar:\n nv.validate_prod((), kwargs)\n result = nanops.nanprod(\n self._ndarray, axis=axis, skipna=skipna, min_count=min_count\n )\n return self._wrap_reduction_result(axis, result)\n\n def mean(\n self,\n *,\n axis: AxisInt | None = None,\n dtype: NpDtype | None = None,\n out=None,\n keepdims: bool = False,\n skipna: bool = True,\n ):\n nv.validate_mean((), {"dtype": dtype, "out": out, "keepdims": keepdims})\n result = nanops.nanmean(self._ndarray, axis=axis, skipna=skipna)\n return self._wrap_reduction_result(axis, result)\n\n def median(\n self,\n *,\n axis: AxisInt | None = None,\n out=None,\n overwrite_input: bool = False,\n keepdims: bool = False,\n skipna: bool = True,\n ):\n nv.validate_median(\n (), {"out": out, "overwrite_input": overwrite_input, "keepdims": keepdims}\n )\n result = nanops.nanmedian(self._ndarray, axis=axis, skipna=skipna)\n return self._wrap_reduction_result(axis, result)\n\n def std(\n self,\n *,\n axis: AxisInt | None = None,\n dtype: NpDtype | None = None,\n out=None,\n ddof: int = 1,\n keepdims: bool = False,\n skipna: bool = True,\n ):\n nv.validate_stat_ddof_func(\n (), {"dtype": dtype, "out": out, "keepdims": keepdims}, fname="std"\n )\n result = nanops.nanstd(self._ndarray, axis=axis, skipna=skipna, ddof=ddof)\n return self._wrap_reduction_result(axis, result)\n\n def var(\n self,\n *,\n axis: AxisInt | None = None,\n dtype: NpDtype | None = None,\n out=None,\n ddof: int = 1,\n keepdims: bool = False,\n skipna: bool = True,\n ):\n nv.validate_stat_ddof_func(\n (), {"dtype": dtype, "out": out, "keepdims": keepdims}, fname="var"\n )\n result = nanops.nanvar(self._ndarray, axis=axis, skipna=skipna, ddof=ddof)\n return self._wrap_reduction_result(axis, result)\n\n def sem(\n self,\n *,\n axis: AxisInt | None = None,\n dtype: NpDtype | None = None,\n out=None,\n ddof: int = 1,\n keepdims: bool = False,\n skipna: bool = True,\n ):\n nv.validate_stat_ddof_func(\n (), {"dtype": dtype, "out": out, "keepdims": keepdims}, fname="sem"\n )\n result = nanops.nansem(self._ndarray, axis=axis, skipna=skipna, ddof=ddof)\n return self._wrap_reduction_result(axis, result)\n\n def kurt(\n self,\n *,\n axis: AxisInt | None = None,\n dtype: NpDtype | None = None,\n out=None,\n keepdims: bool = False,\n skipna: bool = True,\n ):\n nv.validate_stat_ddof_func(\n (), {"dtype": dtype, "out": out, "keepdims": keepdims}, 
fname="kurt"\n )\n result = nanops.nankurt(self._ndarray, axis=axis, skipna=skipna)\n return self._wrap_reduction_result(axis, result)\n\n def skew(\n self,\n *,\n axis: AxisInt | None = None,\n dtype: NpDtype | None = None,\n out=None,\n keepdims: bool = False,\n skipna: bool = True,\n ):\n nv.validate_stat_ddof_func(\n (), {"dtype": dtype, "out": out, "keepdims": keepdims}, fname="skew"\n )\n result = nanops.nanskew(self._ndarray, axis=axis, skipna=skipna)\n return self._wrap_reduction_result(axis, result)\n\n # ------------------------------------------------------------------------\n # Additional Methods\n\n def to_numpy(\n self,\n dtype: npt.DTypeLike | None = None,\n copy: bool = False,\n na_value: object = lib.no_default,\n ) -> np.ndarray:\n mask = self.isna()\n if na_value is not lib.no_default and mask.any():\n result = self._ndarray.copy()\n result[mask] = na_value\n else:\n result = self._ndarray\n\n result = np.asarray(result, dtype=dtype)\n\n if copy and result is self._ndarray:\n result = result.copy()\n\n return result\n\n # ------------------------------------------------------------------------\n # Ops\n\n def __invert__(self) -> NumpyExtensionArray:\n return type(self)(~self._ndarray)\n\n def __neg__(self) -> NumpyExtensionArray:\n return type(self)(-self._ndarray)\n\n def __pos__(self) -> NumpyExtensionArray:\n return type(self)(+self._ndarray)\n\n def __abs__(self) -> NumpyExtensionArray:\n return type(self)(abs(self._ndarray))\n\n def _cmp_method(self, other, op):\n if isinstance(other, NumpyExtensionArray):\n other = other._ndarray\n\n other = ops.maybe_prepare_scalar_for_op(other, (len(self),))\n pd_op = ops.get_array_op(op)\n other = ensure_wrapped_if_datetimelike(other)\n result = pd_op(self._ndarray, other)\n\n if op is divmod or op is ops.rdivmod:\n a, b = result\n if isinstance(a, np.ndarray):\n # for e.g. op vs TimedeltaArray, we may already\n # have an ExtensionArray, in which case we do not wrap\n return self._wrap_ndarray_result(a), self._wrap_ndarray_result(b)\n return a, b\n\n if isinstance(result, np.ndarray):\n # for e.g. multiplication vs TimedeltaArray, we may already\n # have an ExtensionArray, in which case we do not wrap\n return self._wrap_ndarray_result(result)\n return result\n\n _arith_method = _cmp_method\n\n def _wrap_ndarray_result(self, result: np.ndarray):\n # If we have timedelta64[ns] result, return a TimedeltaArray instead\n # of a NumpyExtensionArray\n if result.dtype.kind == "m" and is_supported_dtype(result.dtype):\n from pandas.core.arrays import TimedeltaArray\n\n return TimedeltaArray._simple_new(result, dtype=result.dtype)\n return type(self)(result)\n\n def _formatter(self, boxed: bool = False) -> Callable[[Any], str | None]:\n # NEP 51: https://github.com/numpy/numpy/pull/22449\n if self.dtype.kind in "SU":\n return "'{}'".format\n elif self.dtype == "object":\n return repr\n else:\n return str\n
.venv\Lib\site-packages\pandas\core\arrays\numpy_.py
numpy_.py
Python
17,885
0.95
0.146341
0.1417
awesome-app
688
2023-12-03T02:37:54.618214
Apache-2.0
false
6d28af3114feb8a342f386cee848fb86
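numpy_.py above wraps a plain ndarray in NumpyExtensionArray, forwarding ufuncs via __array_ufunc__ and routing reductions through nanops. A short usage sketch with the public pd.arrays.NumpyExtensionArray class named in the record's own docstring:

import numpy as np
import pandas as pd

wrapped = pd.arrays.NumpyExtensionArray(np.array([1.0, 2.0, np.nan]))

# Ufuncs are applied to the underlying ndarray and array results are re-boxed.
print(np.exp(wrapped))

# Reductions go through pandas' nan-aware ops, so NaN is skipped by default.
print(wrapped.sum())               # 3.0
print(wrapped.sum(skipna=False))   # nan
print(wrapped.mean())              # 1.5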
from __future__ import annotations\n\nfrom datetime import timedelta\nimport operator\nfrom typing import (\n TYPE_CHECKING,\n Any,\n Callable,\n Literal,\n TypeVar,\n cast,\n overload,\n)\nimport warnings\n\nimport numpy as np\n\nfrom pandas._libs import (\n algos as libalgos,\n lib,\n)\nfrom pandas._libs.arrays import NDArrayBacked\nfrom pandas._libs.tslibs import (\n BaseOffset,\n NaT,\n NaTType,\n Timedelta,\n add_overflowsafe,\n astype_overflowsafe,\n dt64arr_to_periodarr as c_dt64arr_to_periodarr,\n get_unit_from_dtype,\n iNaT,\n parsing,\n period as libperiod,\n to_offset,\n)\nfrom pandas._libs.tslibs.dtypes import (\n FreqGroup,\n PeriodDtypeBase,\n freq_to_period_freqstr,\n)\nfrom pandas._libs.tslibs.fields import isleapyear_arr\nfrom pandas._libs.tslibs.offsets import (\n Tick,\n delta_to_tick,\n)\nfrom pandas._libs.tslibs.period import (\n DIFFERENT_FREQ,\n IncompatibleFrequency,\n Period,\n get_period_field_arr,\n period_asfreq_arr,\n)\nfrom pandas.util._decorators import (\n cache_readonly,\n doc,\n)\nfrom pandas.util._exceptions import find_stack_level\n\nfrom pandas.core.dtypes.common import (\n ensure_object,\n pandas_dtype,\n)\nfrom pandas.core.dtypes.dtypes import (\n DatetimeTZDtype,\n PeriodDtype,\n)\nfrom pandas.core.dtypes.generic import (\n ABCIndex,\n ABCPeriodIndex,\n ABCSeries,\n ABCTimedeltaArray,\n)\nfrom pandas.core.dtypes.missing import isna\n\nfrom pandas.core.arrays import datetimelike as dtl\nimport pandas.core.common as com\n\nif TYPE_CHECKING:\n from collections.abc import Sequence\n\n from pandas._typing import (\n AnyArrayLike,\n Dtype,\n FillnaOptions,\n NpDtype,\n NumpySorter,\n NumpyValueArrayLike,\n Self,\n npt,\n )\n\n from pandas.core.arrays import (\n DatetimeArray,\n TimedeltaArray,\n )\n from pandas.core.arrays.base import ExtensionArray\n\n\nBaseOffsetT = TypeVar("BaseOffsetT", bound=BaseOffset)\n\n\n_shared_doc_kwargs = {\n "klass": "PeriodArray",\n}\n\n\ndef _field_accessor(name: str, docstring: str | None = None):\n def f(self):\n base = self.dtype._dtype_code\n result = get_period_field_arr(name, self.asi8, base)\n return result\n\n f.__name__ = name\n f.__doc__ = docstring\n return property(f)\n\n\n# error: Definition of "_concat_same_type" in base class "NDArrayBacked" is\n# incompatible with definition in base class "ExtensionArray"\nclass PeriodArray(dtl.DatelikeOps, libperiod.PeriodMixin): # type: ignore[misc]\n """\n Pandas ExtensionArray for storing Period data.\n\n Users should use :func:`~pandas.array` to create new instances.\n\n Parameters\n ----------\n values : Union[PeriodArray, Series[period], ndarray[int], PeriodIndex]\n The data to store. These should be arrays that can be directly\n converted to ordinals without inference or copy (PeriodArray,\n ndarray[int64]), or a box around such an array (Series[period],\n PeriodIndex).\n dtype : PeriodDtype, optional\n A PeriodDtype instance from which to extract a `freq`. If both\n `freq` and `dtype` are specified, then the frequencies must match.\n freq : str or DateOffset\n The `freq` to use for the array. Mostly applicable when `values`\n is an ndarray of integers, when `freq` is required. 
When `values`\n is a PeriodArray (or box around), it's checked that ``values.freq``\n matches `freq`.\n copy : bool, default False\n Whether to copy the ordinals before storing.\n\n Attributes\n ----------\n None\n\n Methods\n -------\n None\n\n See Also\n --------\n Period: Represents a period of time.\n PeriodIndex : Immutable Index for period data.\n period_range: Create a fixed-frequency PeriodArray.\n array: Construct a pandas array.\n\n Notes\n -----\n There are two components to a PeriodArray\n\n - ordinals : integer ndarray\n - freq : pd.tseries.offsets.Offset\n\n The values are physically stored as a 1-D ndarray of integers. These are\n called "ordinals" and represent some kind of offset from a base.\n\n The `freq` indicates the span covered by each element of the array.\n All elements in the PeriodArray have the same `freq`.\n\n Examples\n --------\n >>> pd.arrays.PeriodArray(pd.PeriodIndex(['2023-01-01',\n ... '2023-01-02'], freq='D'))\n <PeriodArray>\n ['2023-01-01', '2023-01-02']\n Length: 2, dtype: period[D]\n """\n\n # array priority higher than numpy scalars\n __array_priority__ = 1000\n _typ = "periodarray" # ABCPeriodArray\n _internal_fill_value = np.int64(iNaT)\n _recognized_scalars = (Period,)\n _is_recognized_dtype = lambda x: isinstance(\n x, PeriodDtype\n ) # check_compatible_with checks freq match\n _infer_matches = ("period",)\n\n @property\n def _scalar_type(self) -> type[Period]:\n return Period\n\n # Names others delegate to us\n _other_ops: list[str] = []\n _bool_ops: list[str] = ["is_leap_year"]\n _object_ops: list[str] = ["start_time", "end_time", "freq"]\n _field_ops: list[str] = [\n "year",\n "month",\n "day",\n "hour",\n "minute",\n "second",\n "weekofyear",\n "weekday",\n "week",\n "dayofweek",\n "day_of_week",\n "dayofyear",\n "day_of_year",\n "quarter",\n "qyear",\n "days_in_month",\n "daysinmonth",\n ]\n _datetimelike_ops: list[str] = _field_ops + _object_ops + _bool_ops\n _datetimelike_methods: list[str] = ["strftime", "to_timestamp", "asfreq"]\n\n _dtype: PeriodDtype\n\n # --------------------------------------------------------------------\n # Constructors\n\n def __init__(\n self, values, dtype: Dtype | None = None, freq=None, copy: bool = False\n ) -> None:\n if freq is not None:\n # GH#52462\n warnings.warn(\n "The 'freq' keyword in the PeriodArray constructor is deprecated "\n "and will be removed in a future version. 
Pass 'dtype' instead",\n FutureWarning,\n stacklevel=find_stack_level(),\n )\n freq = validate_dtype_freq(dtype, freq)\n dtype = PeriodDtype(freq)\n\n if dtype is not None:\n dtype = pandas_dtype(dtype)\n if not isinstance(dtype, PeriodDtype):\n raise ValueError(f"Invalid dtype {dtype} for PeriodArray")\n\n if isinstance(values, ABCSeries):\n values = values._values\n if not isinstance(values, type(self)):\n raise TypeError("Incorrect dtype")\n\n elif isinstance(values, ABCPeriodIndex):\n values = values._values\n\n if isinstance(values, type(self)):\n if dtype is not None and dtype != values.dtype:\n raise raise_on_incompatible(values, dtype.freq)\n values, dtype = values._ndarray, values.dtype\n\n if not copy:\n values = np.asarray(values, dtype="int64")\n else:\n values = np.array(values, dtype="int64", copy=copy)\n if dtype is None:\n raise ValueError("dtype is not specified and cannot be inferred")\n dtype = cast(PeriodDtype, dtype)\n NDArrayBacked.__init__(self, values, dtype)\n\n # error: Signature of "_simple_new" incompatible with supertype "NDArrayBacked"\n @classmethod\n def _simple_new( # type: ignore[override]\n cls,\n values: npt.NDArray[np.int64],\n dtype: PeriodDtype,\n ) -> Self:\n # alias for PeriodArray.__init__\n assertion_msg = "Should be numpy array of type i8"\n assert isinstance(values, np.ndarray) and values.dtype == "i8", assertion_msg\n return cls(values, dtype=dtype)\n\n @classmethod\n def _from_sequence(\n cls,\n scalars,\n *,\n dtype: Dtype | None = None,\n copy: bool = False,\n ) -> Self:\n if dtype is not None:\n dtype = pandas_dtype(dtype)\n if dtype and isinstance(dtype, PeriodDtype):\n freq = dtype.freq\n else:\n freq = None\n\n if isinstance(scalars, cls):\n validate_dtype_freq(scalars.dtype, freq)\n if copy:\n scalars = scalars.copy()\n return scalars\n\n periods = np.asarray(scalars, dtype=object)\n\n freq = freq or libperiod.extract_freq(periods)\n ordinals = libperiod.extract_ordinals(periods, freq)\n dtype = PeriodDtype(freq)\n return cls(ordinals, dtype=dtype)\n\n @classmethod\n def _from_sequence_of_strings(\n cls, strings, *, dtype: Dtype | None = None, copy: bool = False\n ) -> Self:\n return cls._from_sequence(strings, dtype=dtype, copy=copy)\n\n @classmethod\n def _from_datetime64(cls, data, freq, tz=None) -> Self:\n """\n Construct a PeriodArray from a datetime64 array\n\n Parameters\n ----------\n data : ndarray[datetime64[ns], datetime64[ns, tz]]\n freq : str or Tick\n tz : tzinfo, optional\n\n Returns\n -------\n PeriodArray[freq]\n """\n if isinstance(freq, BaseOffset):\n freq = freq_to_period_freqstr(freq.n, freq.name)\n data, freq = dt64arr_to_periodarr(data, freq, tz)\n dtype = PeriodDtype(freq)\n return cls(data, dtype=dtype)\n\n @classmethod\n def _generate_range(cls, start, end, periods, freq):\n periods = dtl.validate_periods(periods)\n\n if freq is not None:\n freq = Period._maybe_convert_freq(freq)\n\n if start is not None or end is not None:\n subarr, freq = _get_ordinal_range(start, end, periods, freq)\n else:\n raise ValueError("Not enough parameters to construct Period range")\n\n return subarr, freq\n\n @classmethod\n def _from_fields(cls, *, fields: dict, freq) -> Self:\n subarr, freq = _range_from_fields(freq=freq, **fields)\n dtype = PeriodDtype(freq)\n return cls._simple_new(subarr, dtype=dtype)\n\n # -----------------------------------------------------------------\n # DatetimeLike Interface\n\n # error: Argument 1 of "_unbox_scalar" is incompatible with supertype\n # "DatetimeLikeArrayMixin"; supertype defines the 
argument type as\n # "Union[Union[Period, Any, Timedelta], NaTType]"\n def _unbox_scalar( # type: ignore[override]\n self,\n value: Period | NaTType,\n ) -> np.int64:\n if value is NaT:\n # error: Item "Period" of "Union[Period, NaTType]" has no attribute "value"\n return np.int64(value._value) # type: ignore[union-attr]\n elif isinstance(value, self._scalar_type):\n self._check_compatible_with(value)\n return np.int64(value.ordinal)\n else:\n raise ValueError(f"'value' should be a Period. Got '{value}' instead.")\n\n def _scalar_from_string(self, value: str) -> Period:\n return Period(value, freq=self.freq)\n\n # error: Argument 1 of "_check_compatible_with" is incompatible with\n # supertype "DatetimeLikeArrayMixin"; supertype defines the argument type\n # as "Period | Timestamp | Timedelta | NaTType"\n def _check_compatible_with(self, other: Period | NaTType | PeriodArray) -> None: # type: ignore[override]\n if other is NaT:\n return\n # error: Item "NaTType" of "Period | NaTType | PeriodArray" has no\n # attribute "freq"\n self._require_matching_freq(other.freq) # type: ignore[union-attr]\n\n # --------------------------------------------------------------------\n # Data / Attributes\n\n @cache_readonly\n def dtype(self) -> PeriodDtype:\n return self._dtype\n\n # error: Cannot override writeable attribute with read-only property\n @property # type: ignore[override]\n def freq(self) -> BaseOffset:\n """\n Return the frequency object for this PeriodArray.\n """\n return self.dtype.freq\n\n @property\n def freqstr(self) -> str:\n return freq_to_period_freqstr(self.freq.n, self.freq.name)\n\n def __array__(\n self, dtype: NpDtype | None = None, copy: bool | None = None\n ) -> np.ndarray:\n if dtype == "i8":\n # For NumPy 1.x compatibility we cannot use copy=None. And\n # `copy=False` has the meaning of `copy=None` here:\n if not copy:\n return np.asarray(self.asi8, dtype=dtype)\n else:\n return np.array(self.asi8, dtype=dtype)\n\n if copy is False:\n warnings.warn(\n "Starting with NumPy 2.0, the behavior of the 'copy' keyword has "\n "changed and passing 'copy=False' raises an error when returning "\n "a zero-copy NumPy array is not possible. pandas will follow "\n "this behavior starting with pandas 3.0.\nThis conversion to "\n "NumPy requires a copy, but 'copy=False' was passed. 
Consider "\n "using 'np.asarray(..)' instead.",\n FutureWarning,\n stacklevel=find_stack_level(),\n )\n\n if dtype == bool:\n return ~self._isnan\n\n # This will raise TypeError for non-object dtypes\n return np.array(list(self), dtype=object)\n\n def __arrow_array__(self, type=None):\n """\n Convert myself into a pyarrow Array.\n """\n import pyarrow\n\n from pandas.core.arrays.arrow.extension_types import ArrowPeriodType\n\n if type is not None:\n if pyarrow.types.is_integer(type):\n return pyarrow.array(self._ndarray, mask=self.isna(), type=type)\n elif isinstance(type, ArrowPeriodType):\n # ensure we have the same freq\n if self.freqstr != type.freq:\n raise TypeError(\n "Not supported to convert PeriodArray to array with different "\n f"'freq' ({self.freqstr} vs {type.freq})"\n )\n else:\n raise TypeError(\n f"Not supported to convert PeriodArray to '{type}' type"\n )\n\n period_type = ArrowPeriodType(self.freqstr)\n storage_array = pyarrow.array(self._ndarray, mask=self.isna(), type="int64")\n return pyarrow.ExtensionArray.from_storage(period_type, storage_array)\n\n # --------------------------------------------------------------------\n # Vectorized analogues of Period properties\n\n year = _field_accessor(\n "year",\n """\n The year of the period.\n\n Examples\n --------\n >>> idx = pd.PeriodIndex(["2023", "2024", "2025"], freq="Y")\n >>> idx.year\n Index([2023, 2024, 2025], dtype='int64')\n """,\n )\n month = _field_accessor(\n "month",\n """\n The month as January=1, December=12.\n\n Examples\n --------\n >>> idx = pd.PeriodIndex(["2023-01", "2023-02", "2023-03"], freq="M")\n >>> idx.month\n Index([1, 2, 3], dtype='int64')\n """,\n )\n day = _field_accessor(\n "day",\n """\n The days of the period.\n\n Examples\n --------\n >>> idx = pd.PeriodIndex(['2020-01-31', '2020-02-28'], freq='D')\n >>> idx.day\n Index([31, 28], dtype='int64')\n """,\n )\n hour = _field_accessor(\n "hour",\n """\n The hour of the period.\n\n Examples\n --------\n >>> idx = pd.PeriodIndex(["2023-01-01 10:00", "2023-01-01 11:00"], freq='h')\n >>> idx.hour\n Index([10, 11], dtype='int64')\n """,\n )\n minute = _field_accessor(\n "minute",\n """\n The minute of the period.\n\n Examples\n --------\n >>> idx = pd.PeriodIndex(["2023-01-01 10:30:00",\n ... "2023-01-01 11:50:00"], freq='min')\n >>> idx.minute\n Index([30, 50], dtype='int64')\n """,\n )\n second = _field_accessor(\n "second",\n """\n The second of the period.\n\n Examples\n --------\n >>> idx = pd.PeriodIndex(["2023-01-01 10:00:30",\n ... 
"2023-01-01 10:00:31"], freq='s')\n >>> idx.second\n Index([30, 31], dtype='int64')\n """,\n )\n weekofyear = _field_accessor(\n "week",\n """\n The week ordinal of the year.\n\n Examples\n --------\n >>> idx = pd.PeriodIndex(["2023-01", "2023-02", "2023-03"], freq="M")\n >>> idx.week # It can be written `weekofyear`\n Index([5, 9, 13], dtype='int64')\n """,\n )\n week = weekofyear\n day_of_week = _field_accessor(\n "day_of_week",\n """\n The day of the week with Monday=0, Sunday=6.\n\n Examples\n --------\n >>> idx = pd.PeriodIndex(["2023-01-01", "2023-01-02", "2023-01-03"], freq="D")\n >>> idx.weekday\n Index([6, 0, 1], dtype='int64')\n """,\n )\n dayofweek = day_of_week\n weekday = dayofweek\n dayofyear = day_of_year = _field_accessor(\n "day_of_year",\n """\n The ordinal day of the year.\n\n Examples\n --------\n >>> idx = pd.PeriodIndex(["2023-01-10", "2023-02-01", "2023-03-01"], freq="D")\n >>> idx.dayofyear\n Index([10, 32, 60], dtype='int64')\n\n >>> idx = pd.PeriodIndex(["2023", "2024", "2025"], freq="Y")\n >>> idx\n PeriodIndex(['2023', '2024', '2025'], dtype='period[Y-DEC]')\n >>> idx.dayofyear\n Index([365, 366, 365], dtype='int64')\n """,\n )\n quarter = _field_accessor(\n "quarter",\n """\n The quarter of the date.\n\n Examples\n --------\n >>> idx = pd.PeriodIndex(["2023-01", "2023-02", "2023-03"], freq="M")\n >>> idx.quarter\n Index([1, 1, 1], dtype='int64')\n """,\n )\n qyear = _field_accessor("qyear")\n days_in_month = _field_accessor(\n "days_in_month",\n """\n The number of days in the month.\n\n Examples\n --------\n For Series:\n\n >>> period = pd.period_range('2020-1-1 00:00', '2020-3-1 00:00', freq='M')\n >>> s = pd.Series(period)\n >>> s\n 0 2020-01\n 1 2020-02\n 2 2020-03\n dtype: period[M]\n >>> s.dt.days_in_month\n 0 31\n 1 29\n 2 31\n dtype: int64\n\n For PeriodIndex:\n\n >>> idx = pd.PeriodIndex(["2023-01", "2023-02", "2023-03"], freq="M")\n >>> idx.days_in_month # It can be also entered as `daysinmonth`\n Index([31, 28, 31], dtype='int64')\n """,\n )\n daysinmonth = days_in_month\n\n @property\n def is_leap_year(self) -> npt.NDArray[np.bool_]:\n """\n Logical indicating if the date belongs to a leap year.\n\n Examples\n --------\n >>> idx = pd.PeriodIndex(["2023", "2024", "2025"], freq="Y")\n >>> idx.is_leap_year\n array([False, True, False])\n """\n return isleapyear_arr(np.asarray(self.year))\n\n def to_timestamp(self, freq=None, how: str = "start") -> DatetimeArray:\n """\n Cast to DatetimeArray/Index.\n\n Parameters\n ----------\n freq : str or DateOffset, optional\n Target frequency. 
The default is 'D' for week or longer,\n 's' otherwise.\n how : {'s', 'e', 'start', 'end'}\n Whether to use the start or end of the time period being converted.\n\n Returns\n -------\n DatetimeArray/Index\n\n Examples\n --------\n >>> idx = pd.PeriodIndex(["2023-01", "2023-02", "2023-03"], freq="M")\n >>> idx.to_timestamp()\n DatetimeIndex(['2023-01-01', '2023-02-01', '2023-03-01'],\n dtype='datetime64[ns]', freq='MS')\n """\n from pandas.core.arrays import DatetimeArray\n\n how = libperiod.validate_end_alias(how)\n\n end = how == "E"\n if end:\n if freq == "B" or self.freq == "B":\n # roll forward to ensure we land on B date\n adjust = Timedelta(1, "D") - Timedelta(1, "ns")\n return self.to_timestamp(how="start") + adjust\n else:\n adjust = Timedelta(1, "ns")\n return (self + self.freq).to_timestamp(how="start") - adjust\n\n if freq is None:\n freq_code = self._dtype._get_to_timestamp_base()\n dtype = PeriodDtypeBase(freq_code, 1)\n freq = dtype._freqstr\n base = freq_code\n else:\n freq = Period._maybe_convert_freq(freq)\n base = freq._period_dtype_code\n\n new_parr = self.asfreq(freq, how=how)\n\n new_data = libperiod.periodarr_to_dt64arr(new_parr.asi8, base)\n dta = DatetimeArray._from_sequence(new_data)\n\n if self.freq.name == "B":\n # See if we can retain BDay instead of Day in cases where\n # len(self) is too small for infer_freq to distinguish between them\n diffs = libalgos.unique_deltas(self.asi8)\n if len(diffs) == 1:\n diff = diffs[0]\n if diff == self.dtype._n:\n dta._freq = self.freq\n elif diff == 1:\n dta._freq = self.freq.base\n # TODO: other cases?\n return dta\n else:\n return dta._with_freq("infer")\n\n # --------------------------------------------------------------------\n\n def _box_func(self, x) -> Period | NaTType:\n return Period._from_ordinal(ordinal=x, freq=self.freq)\n\n @doc(**_shared_doc_kwargs, other="PeriodIndex", other_name="PeriodIndex")\n def asfreq(self, freq=None, how: str = "E") -> Self:\n """\n Convert the {klass} to the specified frequency `freq`.\n\n Equivalent to applying :meth:`pandas.Period.asfreq` with the given arguments\n to each :class:`~pandas.Period` in this {klass}.\n\n Parameters\n ----------\n freq : str\n A frequency.\n how : str {{'E', 'S'}}, default 'E'\n Whether the elements should be aligned to the end\n or start within pa period.\n\n * 'E', 'END', or 'FINISH' for end,\n * 'S', 'START', or 'BEGIN' for start.\n\n January 31st ('END') vs. 
January 1st ('START') for example.\n\n Returns\n -------\n {klass}\n The transformed {klass} with the new frequency.\n\n See Also\n --------\n {other}.asfreq: Convert each Period in a {other_name} to the given frequency.\n Period.asfreq : Convert a :class:`~pandas.Period` object to the given frequency.\n\n Examples\n --------\n >>> pidx = pd.period_range('2010-01-01', '2015-01-01', freq='Y')\n >>> pidx\n PeriodIndex(['2010', '2011', '2012', '2013', '2014', '2015'],\n dtype='period[Y-DEC]')\n\n >>> pidx.asfreq('M')\n PeriodIndex(['2010-12', '2011-12', '2012-12', '2013-12', '2014-12',\n '2015-12'], dtype='period[M]')\n\n >>> pidx.asfreq('M', how='S')\n PeriodIndex(['2010-01', '2011-01', '2012-01', '2013-01', '2014-01',\n '2015-01'], dtype='period[M]')\n """\n how = libperiod.validate_end_alias(how)\n if isinstance(freq, BaseOffset) and hasattr(freq, "_period_dtype_code"):\n freq = PeriodDtype(freq)._freqstr\n freq = Period._maybe_convert_freq(freq)\n\n base1 = self._dtype._dtype_code\n base2 = freq._period_dtype_code\n\n asi8 = self.asi8\n # self.freq.n can't be negative or 0\n end = how == "E"\n if end:\n ordinal = asi8 + self.dtype._n - 1\n else:\n ordinal = asi8\n\n new_data = period_asfreq_arr(ordinal, base1, base2, end)\n\n if self._hasna:\n new_data[self._isnan] = iNaT\n\n dtype = PeriodDtype(freq)\n return type(self)(new_data, dtype=dtype)\n\n # ------------------------------------------------------------------\n # Rendering Methods\n\n def _formatter(self, boxed: bool = False):\n if boxed:\n return str\n return "'{}'".format\n\n def _format_native_types(\n self, *, na_rep: str | float = "NaT", date_format=None, **kwargs\n ) -> npt.NDArray[np.object_]:\n """\n actually format my specific types\n """\n return libperiod.period_array_strftime(\n self.asi8, self.dtype._dtype_code, na_rep, date_format\n )\n\n # ------------------------------------------------------------------\n\n def astype(self, dtype, copy: bool = True):\n # We handle Period[T] -> Period[U]\n # Our parent handles everything else.\n dtype = pandas_dtype(dtype)\n if dtype == self._dtype:\n if not copy:\n return self\n else:\n return self.copy()\n if isinstance(dtype, PeriodDtype):\n return self.asfreq(dtype.freq)\n\n if lib.is_np_dtype(dtype, "M") or isinstance(dtype, DatetimeTZDtype):\n # GH#45038 match PeriodIndex behavior.\n tz = getattr(dtype, "tz", None)\n unit = dtl.dtype_to_unit(dtype)\n return self.to_timestamp().tz_localize(tz).as_unit(unit)\n\n return super().astype(dtype, copy=copy)\n\n def searchsorted(\n self,\n value: NumpyValueArrayLike | ExtensionArray,\n side: Literal["left", "right"] = "left",\n sorter: NumpySorter | None = None,\n ) -> npt.NDArray[np.intp] | np.intp:\n npvalue = self._validate_setitem_value(value).view("M8[ns]")\n\n # Cast to M8 to get datetime-like NaT placement,\n # similar to dtl._period_dispatch\n m8arr = self._ndarray.view("M8[ns]")\n return m8arr.searchsorted(npvalue, side=side, sorter=sorter)\n\n def _pad_or_backfill(\n self,\n *,\n method: FillnaOptions,\n limit: int | None = None,\n limit_area: Literal["inside", "outside"] | None = None,\n copy: bool = True,\n ) -> Self:\n # view as dt64 so we get treated as timelike in core.missing,\n # similar to dtl._period_dispatch\n dta = self.view("M8[ns]")\n result = dta._pad_or_backfill(\n method=method, limit=limit, limit_area=limit_area, copy=copy\n )\n if copy:\n return cast("Self", result.view(self.dtype))\n else:\n return self\n\n def fillna(\n self, value=None, method=None, limit: int | None = None, copy: bool = True\n ) -> 
Self:\n if method is not None:\n # view as dt64 so we get treated as timelike in core.missing,\n # similar to dtl._period_dispatch\n dta = self.view("M8[ns]")\n result = dta.fillna(value=value, method=method, limit=limit, copy=copy)\n # error: Incompatible return value type (got "Union[ExtensionArray,\n # ndarray[Any, Any]]", expected "PeriodArray")\n return result.view(self.dtype) # type: ignore[return-value]\n return super().fillna(value=value, method=method, limit=limit, copy=copy)\n\n # ------------------------------------------------------------------\n # Arithmetic Methods\n\n def _addsub_int_array_or_scalar(\n self, other: np.ndarray | int, op: Callable[[Any, Any], Any]\n ) -> Self:\n """\n Add or subtract array of integers.\n\n Parameters\n ----------\n other : np.ndarray[int64] or int\n op : {operator.add, operator.sub}\n\n Returns\n -------\n result : PeriodArray\n """\n assert op in [operator.add, operator.sub]\n if op is operator.sub:\n other = -other\n res_values = add_overflowsafe(self.asi8, np.asarray(other, dtype="i8"))\n return type(self)(res_values, dtype=self.dtype)\n\n def _add_offset(self, other: BaseOffset):\n assert not isinstance(other, Tick)\n\n self._require_matching_freq(other, base=True)\n return self._addsub_int_array_or_scalar(other.n, operator.add)\n\n # TODO: can we de-duplicate with Period._add_timedeltalike_scalar?\n def _add_timedeltalike_scalar(self, other):\n """\n Parameters\n ----------\n other : timedelta, Tick, np.timedelta64\n\n Returns\n -------\n PeriodArray\n """\n if not isinstance(self.freq, Tick):\n # We cannot add timedelta-like to non-tick PeriodArray\n raise raise_on_incompatible(self, other)\n\n if isna(other):\n # i.e. np.timedelta64("NaT")\n return super()._add_timedeltalike_scalar(other)\n\n td = np.asarray(Timedelta(other).asm8)\n return self._add_timedelta_arraylike(td)\n\n def _add_timedelta_arraylike(\n self, other: TimedeltaArray | npt.NDArray[np.timedelta64]\n ) -> Self:\n """\n Parameters\n ----------\n other : TimedeltaArray or ndarray[timedelta64]\n\n Returns\n -------\n PeriodArray\n """\n if not self.dtype._is_tick_like():\n # We cannot add timedelta-like to non-tick PeriodArray\n raise TypeError(\n f"Cannot add or subtract timedelta64[ns] dtype from {self.dtype}"\n )\n\n dtype = np.dtype(f"m8[{self.dtype._td64_unit}]")\n\n # Similar to _check_timedeltalike_freq_compat, but we raise with a\n # more specific exception message if necessary.\n try:\n delta = astype_overflowsafe(\n np.asarray(other), dtype=dtype, copy=False, round_ok=False\n )\n except ValueError as err:\n # e.g. if we have minutes freq and try to add 30s\n # "Cannot losslessly convert units"\n raise IncompatibleFrequency(\n "Cannot add/subtract timedelta-like from PeriodArray that is "\n "not an integer multiple of the PeriodArray's freq."\n ) from err\n\n res_values = add_overflowsafe(self.asi8, np.asarray(delta.view("i8")))\n return type(self)(res_values, dtype=self.dtype)\n\n def _check_timedeltalike_freq_compat(self, other):\n """\n Arithmetic operations with timedelta-like scalars or array `other`\n are only valid if `other` is an integer multiple of `self.freq`.\n If the operation is valid, find that integer multiple. 
Otherwise,\n raise because the operation is invalid.\n\n Parameters\n ----------\n other : timedelta, np.timedelta64, Tick,\n ndarray[timedelta64], TimedeltaArray, TimedeltaIndex\n\n Returns\n -------\n multiple : int or ndarray[int64]\n\n Raises\n ------\n IncompatibleFrequency\n """\n assert self.dtype._is_tick_like() # checked by calling function\n\n dtype = np.dtype(f"m8[{self.dtype._td64_unit}]")\n\n if isinstance(other, (timedelta, np.timedelta64, Tick)):\n td = np.asarray(Timedelta(other).asm8)\n else:\n td = np.asarray(other)\n\n try:\n delta = astype_overflowsafe(td, dtype=dtype, copy=False, round_ok=False)\n except ValueError as err:\n raise raise_on_incompatible(self, other) from err\n\n delta = delta.view("i8")\n return lib.item_from_zerodim(delta)\n\n\ndef raise_on_incompatible(left, right) -> IncompatibleFrequency:\n """\n Helper function to render a consistent error message when raising\n IncompatibleFrequency.\n\n Parameters\n ----------\n left : PeriodArray\n right : None, DateOffset, Period, ndarray, or timedelta-like\n\n Returns\n -------\n IncompatibleFrequency\n Exception to be raised by the caller.\n """\n # GH#24283 error message format depends on whether right is scalar\n if isinstance(right, (np.ndarray, ABCTimedeltaArray)) or right is None:\n other_freq = None\n elif isinstance(right, BaseOffset):\n other_freq = freq_to_period_freqstr(right.n, right.name)\n elif isinstance(right, (ABCPeriodIndex, PeriodArray, Period)):\n other_freq = right.freqstr\n else:\n other_freq = delta_to_tick(Timedelta(right)).freqstr\n\n own_freq = freq_to_period_freqstr(left.freq.n, left.freq.name)\n msg = DIFFERENT_FREQ.format(\n cls=type(left).__name__, own_freq=own_freq, other_freq=other_freq\n )\n return IncompatibleFrequency(msg)\n\n\n# -------------------------------------------------------------------\n# Constructor Helpers\n\n\ndef period_array(\n data: Sequence[Period | str | None] | AnyArrayLike,\n freq: str | Tick | BaseOffset | None = None,\n copy: bool = False,\n) -> PeriodArray:\n """\n Construct a new PeriodArray from a sequence of Period scalars.\n\n Parameters\n ----------\n data : Sequence of Period objects\n A sequence of Period objects. These are required to all have\n the same ``freq.`` Missing values can be indicated by ``None``\n or ``pandas.NaT``.\n freq : str, Tick, or Offset\n The frequency of every element of the array. This can be specified\n to avoid inferring the `freq` from `data`.\n copy : bool, default False\n Whether to ensure a copy of the data is made.\n\n Returns\n -------\n PeriodArray\n\n See Also\n --------\n PeriodArray\n pandas.PeriodIndex\n\n Examples\n --------\n >>> period_array([pd.Period('2017', freq='Y'),\n ... pd.Period('2018', freq='Y')])\n <PeriodArray>\n ['2017', '2018']\n Length: 2, dtype: period[Y-DEC]\n\n >>> period_array([pd.Period('2017', freq='Y'),\n ... pd.Period('2018', freq='Y'),\n ... 
pd.NaT])\n <PeriodArray>\n ['2017', '2018', 'NaT']\n Length: 3, dtype: period[Y-DEC]\n\n Integers that look like years are handled\n\n >>> period_array([2000, 2001, 2002], freq='D')\n <PeriodArray>\n ['2000-01-01', '2001-01-01', '2002-01-01']\n Length: 3, dtype: period[D]\n\n Datetime-like strings may also be passed\n\n >>> period_array(['2000-Q1', '2000-Q2', '2000-Q3', '2000-Q4'], freq='Q')\n <PeriodArray>\n ['2000Q1', '2000Q2', '2000Q3', '2000Q4']\n Length: 4, dtype: period[Q-DEC]\n """\n data_dtype = getattr(data, "dtype", None)\n\n if lib.is_np_dtype(data_dtype, "M"):\n return PeriodArray._from_datetime64(data, freq)\n if isinstance(data_dtype, PeriodDtype):\n out = PeriodArray(data)\n if freq is not None:\n if freq == data_dtype.freq:\n return out\n return out.asfreq(freq)\n return out\n\n # other iterable of some kind\n if not isinstance(data, (np.ndarray, list, tuple, ABCSeries)):\n data = list(data)\n\n arrdata = np.asarray(data)\n\n dtype: PeriodDtype | None\n if freq:\n dtype = PeriodDtype(freq)\n else:\n dtype = None\n\n if arrdata.dtype.kind == "f" and len(arrdata) > 0:\n raise TypeError("PeriodIndex does not allow floating point in construction")\n\n if arrdata.dtype.kind in "iu":\n arr = arrdata.astype(np.int64, copy=False)\n # error: Argument 2 to "from_ordinals" has incompatible type "Union[str,\n # Tick, None]"; expected "Union[timedelta, BaseOffset, str]"\n ordinals = libperiod.from_ordinals(arr, freq) # type: ignore[arg-type]\n return PeriodArray(ordinals, dtype=dtype)\n\n data = ensure_object(arrdata)\n if freq is None:\n freq = libperiod.extract_freq(data)\n dtype = PeriodDtype(freq)\n return PeriodArray._from_sequence(data, dtype=dtype)\n\n\n@overload\ndef validate_dtype_freq(dtype, freq: BaseOffsetT) -> BaseOffsetT:\n ...\n\n\n@overload\ndef validate_dtype_freq(dtype, freq: timedelta | str | None) -> BaseOffset:\n ...\n\n\ndef validate_dtype_freq(\n dtype, freq: BaseOffsetT | BaseOffset | timedelta | str | None\n) -> BaseOffsetT:\n """\n If both a dtype and a freq are available, ensure they match. 
If only\n dtype is available, extract the implied freq.\n\n Parameters\n ----------\n dtype : dtype\n freq : DateOffset or None\n\n Returns\n -------\n freq : DateOffset\n\n Raises\n ------\n ValueError : non-period dtype\n IncompatibleFrequency : mismatch between dtype and freq\n """\n if freq is not None:\n freq = to_offset(freq, is_period=True)\n\n if dtype is not None:\n dtype = pandas_dtype(dtype)\n if not isinstance(dtype, PeriodDtype):\n raise ValueError("dtype must be PeriodDtype")\n if freq is None:\n freq = dtype.freq\n elif freq != dtype.freq:\n raise IncompatibleFrequency("specified freq and dtype are different")\n # error: Incompatible return value type (got "Union[BaseOffset, Any, None]",\n # expected "BaseOffset")\n return freq # type: ignore[return-value]\n\n\ndef dt64arr_to_periodarr(\n data, freq, tz=None\n) -> tuple[npt.NDArray[np.int64], BaseOffset]:\n """\n Convert an datetime-like array to values Period ordinals.\n\n Parameters\n ----------\n data : Union[Series[datetime64[ns]], DatetimeIndex, ndarray[datetime64ns]]\n freq : Optional[Union[str, Tick]]\n Must match the `freq` on the `data` if `data` is a DatetimeIndex\n or Series.\n tz : Optional[tzinfo]\n\n Returns\n -------\n ordinals : ndarray[int64]\n freq : Tick\n The frequency extracted from the Series or DatetimeIndex if that's\n used.\n\n """\n if not isinstance(data.dtype, np.dtype) or data.dtype.kind != "M":\n raise ValueError(f"Wrong dtype: {data.dtype}")\n\n if freq is None:\n if isinstance(data, ABCIndex):\n data, freq = data._values, data.freq\n elif isinstance(data, ABCSeries):\n data, freq = data._values, data.dt.freq\n\n elif isinstance(data, (ABCIndex, ABCSeries)):\n data = data._values\n\n reso = get_unit_from_dtype(data.dtype)\n freq = Period._maybe_convert_freq(freq)\n base = freq._period_dtype_code\n return c_dt64arr_to_periodarr(data.view("i8"), base, tz, reso=reso), freq\n\n\ndef _get_ordinal_range(start, end, periods, freq, mult: int = 1):\n if com.count_not_none(start, end, periods) != 2:\n raise ValueError(\n "Of the three parameters: start, end, and periods, "\n "exactly two must be specified"\n )\n\n if freq is not None:\n freq = to_offset(freq, is_period=True)\n mult = freq.n\n\n if start is not None:\n start = Period(start, freq)\n if end is not None:\n end = Period(end, freq)\n\n is_start_per = isinstance(start, Period)\n is_end_per = isinstance(end, Period)\n\n if is_start_per and is_end_per and start.freq != end.freq:\n raise ValueError("start and end must have same freq")\n if start is NaT or end is NaT:\n raise ValueError("start and end must not be NaT")\n\n if freq is None:\n if is_start_per:\n freq = start.freq\n elif is_end_per:\n freq = end.freq\n else: # pragma: no cover\n raise ValueError("Could not infer freq from start/end")\n mult = freq.n\n\n if periods is not None:\n periods = periods * mult\n if start is None:\n data = np.arange(\n end.ordinal - periods + mult, end.ordinal + 1, mult, dtype=np.int64\n )\n else:\n data = np.arange(\n start.ordinal, start.ordinal + periods, mult, dtype=np.int64\n )\n else:\n data = np.arange(start.ordinal, end.ordinal + 1, mult, dtype=np.int64)\n\n return data, freq\n\n\ndef _range_from_fields(\n year=None,\n month=None,\n quarter=None,\n day=None,\n hour=None,\n minute=None,\n second=None,\n freq=None,\n) -> tuple[np.ndarray, BaseOffset]:\n if hour is None:\n hour = 0\n if minute is None:\n minute = 0\n if second is None:\n second = 0\n if day is None:\n day = 1\n\n ordinals = []\n\n if quarter is not None:\n if freq is None:\n freq = 
to_offset("Q", is_period=True)\n base = FreqGroup.FR_QTR.value\n else:\n freq = to_offset(freq, is_period=True)\n base = libperiod.freq_to_dtype_code(freq)\n if base != FreqGroup.FR_QTR.value:\n raise AssertionError("base must equal FR_QTR")\n\n freqstr = freq.freqstr\n year, quarter = _make_field_arrays(year, quarter)\n for y, q in zip(year, quarter):\n calendar_year, calendar_month = parsing.quarter_to_myear(y, q, freqstr)\n val = libperiod.period_ordinal(\n calendar_year, calendar_month, 1, 1, 1, 1, 0, 0, base\n )\n ordinals.append(val)\n else:\n freq = to_offset(freq, is_period=True)\n base = libperiod.freq_to_dtype_code(freq)\n arrays = _make_field_arrays(year, month, day, hour, minute, second)\n for y, mth, d, h, mn, s in zip(*arrays):\n ordinals.append(libperiod.period_ordinal(y, mth, d, h, mn, s, 0, 0, base))\n\n return np.array(ordinals, dtype=np.int64), freq\n\n\ndef _make_field_arrays(*fields) -> list[np.ndarray]:\n length = None\n for x in fields:\n if isinstance(x, (list, np.ndarray, ABCSeries)):\n if length is not None and len(x) != length:\n raise ValueError("Mismatched Period array lengths")\n if length is None:\n length = len(x)\n\n # error: Argument 2 to "repeat" has incompatible type "Optional[int]"; expected\n # "Union[Union[int, integer[Any]], Union[bool, bool_], ndarray, Sequence[Union[int,\n # integer[Any]]], Sequence[Union[bool, bool_]], Sequence[Sequence[Any]]]"\n return [\n np.asarray(x)\n if isinstance(x, (np.ndarray, list, ABCSeries))\n else np.repeat(x, length) # type: ignore[arg-type]\n for x in fields\n ]\n
.venv\Lib\site-packages\pandas\core\arrays\period.py
period.py
Python
41,620
0.95
0.12021
0.065954
python-kit
748
2023-10-27T05:14:50.401890
MIT
false
8decd9d68951f7ce7fc2633e09ee63eb
from __future__ import annotations\n\nfrom functools import partial\nimport operator\nfrom typing import (\n TYPE_CHECKING,\n Any,\n Literal,\n cast,\n)\nimport warnings\n\nimport numpy as np\n\nfrom pandas._config import (\n get_option,\n using_string_dtype,\n)\n\nfrom pandas._libs import (\n lib,\n missing as libmissing,\n)\nfrom pandas._libs.arrays import NDArrayBacked\nfrom pandas._libs.lib import ensure_string_array\nfrom pandas.compat import (\n HAS_PYARROW,\n pa_version_under10p1,\n)\nfrom pandas.compat.numpy import function as nv\nfrom pandas.util._decorators import doc\nfrom pandas.util._exceptions import find_stack_level\n\nfrom pandas.core.dtypes.base import (\n ExtensionDtype,\n StorageExtensionDtype,\n register_extension_dtype,\n)\nfrom pandas.core.dtypes.common import (\n is_array_like,\n is_bool_dtype,\n is_integer_dtype,\n is_object_dtype,\n is_string_dtype,\n pandas_dtype,\n)\n\nfrom pandas.core import (\n missing,\n nanops,\n ops,\n)\nfrom pandas.core.algorithms import isin\nfrom pandas.core.array_algos import masked_reductions\nfrom pandas.core.arrays.base import ExtensionArray\nfrom pandas.core.arrays.floating import (\n FloatingArray,\n FloatingDtype,\n)\nfrom pandas.core.arrays.integer import (\n IntegerArray,\n IntegerDtype,\n)\nfrom pandas.core.arrays.numpy_ import NumpyExtensionArray\nfrom pandas.core.construction import extract_array\nfrom pandas.core.indexers import check_array_indexer\nfrom pandas.core.missing import isna\n\nfrom pandas.io.formats import printing\n\nif TYPE_CHECKING:\n import pyarrow\n\n from pandas._typing import (\n ArrayLike,\n AxisInt,\n Dtype,\n DtypeObj,\n NumpySorter,\n NumpyValueArrayLike,\n Scalar,\n Self,\n npt,\n type_t,\n )\n\n from pandas import Series\n\n\n@register_extension_dtype\nclass StringDtype(StorageExtensionDtype):\n """\n Extension dtype for string data.\n\n .. warning::\n\n StringDtype is considered experimental. The implementation and\n parts of the API may change without warning.\n\n Parameters\n ----------\n storage : {"python", "pyarrow"}, optional\n If not given, the value of ``pd.options.mode.string_storage``.\n na_value : {np.nan, pd.NA}, default pd.NA\n Whether the dtype follows NaN or NA missing value semantics.\n\n Attributes\n ----------\n None\n\n Methods\n -------\n None\n\n Examples\n --------\n >>> pd.StringDtype()\n string[python]\n\n >>> pd.StringDtype(storage="pyarrow")\n string[pyarrow]\n """\n\n @property\n def name(self) -> str: # type: ignore[override]\n if self._na_value is libmissing.NA:\n return "string"\n else:\n return "str"\n\n #: StringDtype().na_value uses pandas.NA except the implementation that\n # follows NumPy semantics, which uses nan.\n @property\n def na_value(self) -> libmissing.NAType | float: # type: ignore[override]\n return self._na_value\n\n _metadata = ("storage", "_na_value") # type: ignore[assignment]\n\n def __init__(\n self,\n storage: str | None = None,\n na_value: libmissing.NAType | float = libmissing.NA,\n ) -> None:\n # infer defaults\n if storage is None:\n if na_value is not libmissing.NA:\n storage = get_option("mode.string_storage")\n if storage == "auto":\n if HAS_PYARROW:\n storage = "pyarrow"\n else:\n storage = "python"\n else:\n storage = get_option("mode.string_storage")\n if storage == "auto":\n storage = "python"\n\n if storage == "pyarrow_numpy":\n warnings.warn(\n "The 'pyarrow_numpy' storage option name is deprecated and will be "\n 'removed in pandas 3.0. 
Use \'pd.StringDtype(storage="pyarrow", '\n "na_value-np.nan)' to construct the same dtype.\nOr enable the "\n "'pd.options.future.infer_string = True' option globally and use "\n 'the "str" alias as a shorthand notation to specify a dtype '\n '(instead of "string[pyarrow_numpy]").',\n FutureWarning,\n stacklevel=find_stack_level(),\n )\n storage = "pyarrow"\n na_value = np.nan\n\n # validate options\n if storage not in {"python", "pyarrow"}:\n raise ValueError(\n f"Storage must be 'python' or 'pyarrow'. Got {storage} instead."\n )\n if storage == "pyarrow" and pa_version_under10p1:\n raise ImportError(\n "pyarrow>=10.0.1 is required for PyArrow backed StringArray."\n )\n\n if isinstance(na_value, float) and np.isnan(na_value):\n # when passed a NaN value, always set to np.nan to ensure we use\n # a consistent NaN value (and we can use `dtype.na_value is np.nan`)\n na_value = np.nan\n elif na_value is not libmissing.NA:\n raise ValueError(f"'na_value' must be np.nan or pd.NA, got {na_value}")\n\n self.storage = cast(str, storage)\n self._na_value = na_value\n\n def __repr__(self) -> str:\n if self._na_value is libmissing.NA:\n return f"{self.name}[{self.storage}]"\n else:\n # TODO add more informative repr\n return self.name\n\n def __eq__(self, other: object) -> bool:\n # we need to override the base class __eq__ because na_value (NA or NaN)\n # cannot be checked with normal `==`\n if isinstance(other, str):\n # TODO should dtype == "string" work for the NaN variant?\n if other == "string" or other == self.name: # noqa: PLR1714\n return True\n try:\n other = self.construct_from_string(other)\n except (TypeError, ImportError):\n # TypeError if `other` is not a valid string for StringDtype\n # ImportError if pyarrow is not installed for "string[pyarrow]"\n return False\n if isinstance(other, type(self)):\n return self.storage == other.storage and self.na_value is other.na_value\n return False\n\n def __hash__(self) -> int:\n # need to override __hash__ as well because of overriding __eq__\n return super().__hash__()\n\n def __reduce__(self):\n return StringDtype, (self.storage, self.na_value)\n\n @property\n def type(self) -> type[str]:\n return str\n\n @classmethod\n def construct_from_string(cls, string) -> Self:\n """\n Construct a StringDtype from a string.\n\n Parameters\n ----------\n string : str\n The type of the name. 
The storage type will be taking from `string`.\n Valid options and their storage types are\n\n ========================== ==============================================\n string result storage\n ========================== ==============================================\n ``'string'`` pd.options.mode.string_storage, default python\n ``'string[python]'`` python\n ``'string[pyarrow]'`` pyarrow\n ========================== ==============================================\n\n Returns\n -------\n StringDtype\n\n Raise\n -----\n TypeError\n If the string is not a valid option.\n """\n if not isinstance(string, str):\n raise TypeError(\n f"'construct_from_string' expects a string, got {type(string)}"\n )\n if string == "string":\n return cls()\n elif string == "str" and using_string_dtype():\n return cls(na_value=np.nan)\n elif string == "string[python]":\n return cls(storage="python")\n elif string == "string[pyarrow]":\n return cls(storage="pyarrow")\n elif string == "string[pyarrow_numpy]":\n # this is deprecated in the dtype __init__, remove this in pandas 3.0\n return cls(storage="pyarrow_numpy")\n else:\n raise TypeError(f"Cannot construct a '{cls.__name__}' from '{string}'")\n\n # https://github.com/pandas-dev/pandas/issues/36126\n # error: Signature of "construct_array_type" incompatible with supertype\n # "ExtensionDtype"\n def construct_array_type( # type: ignore[override]\n self,\n ) -> type_t[BaseStringArray]:\n """\n Return the array type associated with this dtype.\n\n Returns\n -------\n type\n """\n from pandas.core.arrays.string_arrow import (\n ArrowStringArray,\n ArrowStringArrayNumpySemantics,\n )\n\n if self.storage == "python" and self._na_value is libmissing.NA:\n return StringArray\n elif self.storage == "pyarrow" and self._na_value is libmissing.NA:\n return ArrowStringArray\n elif self.storage == "python":\n return StringArrayNumpySemantics\n else:\n return ArrowStringArrayNumpySemantics\n\n def _get_common_dtype(self, dtypes: list[DtypeObj]) -> DtypeObj | None:\n storages = set()\n na_values = set()\n\n for dtype in dtypes:\n if isinstance(dtype, StringDtype):\n storages.add(dtype.storage)\n na_values.add(dtype.na_value)\n elif isinstance(dtype, np.dtype) and dtype.kind in ("U", "T"):\n continue\n else:\n return None\n\n if len(storages) == 2:\n # if both python and pyarrow storage -> priority to pyarrow\n storage = "pyarrow"\n else:\n storage = next(iter(storages)) # type: ignore[assignment]\n\n na_value: libmissing.NAType | float\n if len(na_values) == 2:\n # if both NaN and NA -> priority to NA\n na_value = libmissing.NA\n else:\n na_value = next(iter(na_values))\n\n return StringDtype(storage=storage, na_value=na_value)\n\n def __from_arrow__(\n self, array: pyarrow.Array | pyarrow.ChunkedArray\n ) -> BaseStringArray:\n """\n Construct StringArray from pyarrow Array/ChunkedArray.\n """\n if self.storage == "pyarrow":\n if self._na_value is libmissing.NA:\n from pandas.core.arrays.string_arrow import ArrowStringArray\n\n return ArrowStringArray(array)\n else:\n from pandas.core.arrays.string_arrow import (\n ArrowStringArrayNumpySemantics,\n )\n\n return ArrowStringArrayNumpySemantics(array)\n\n else:\n import pyarrow\n\n if isinstance(array, pyarrow.Array):\n chunks = [array]\n else:\n # pyarrow.ChunkedArray\n chunks = array.chunks\n\n results = []\n for arr in chunks:\n # convert chunk by chunk to numpy and concatenate then, to avoid\n # overflow for large string data when concatenating the pyarrow arrays\n arr = arr.to_numpy(zero_copy_only=False)\n arr = 
ensure_string_array(arr, na_value=self.na_value)\n results.append(arr)\n\n if len(chunks) == 0:\n arr = np.array([], dtype=object)\n else:\n arr = np.concatenate(results)\n\n # Bypass validation inside StringArray constructor, see GH#47781\n new_string_array = StringArray.__new__(StringArray)\n NDArrayBacked.__init__(new_string_array, arr, self)\n return new_string_array\n\n\nclass BaseStringArray(ExtensionArray):\n """\n Mixin class for StringArray, ArrowStringArray.\n """\n\n dtype: StringDtype\n\n @doc(ExtensionArray.tolist)\n def tolist(self):\n if self.ndim > 1:\n return [x.tolist() for x in self]\n return list(self.to_numpy())\n\n @classmethod\n def _from_scalars(cls, scalars, dtype: DtypeObj) -> Self:\n if lib.infer_dtype(scalars, skipna=True) not in ["string", "empty"]:\n # TODO: require any NAs be valid-for-string\n raise ValueError\n return cls._from_sequence(scalars, dtype=dtype)\n\n def _formatter(self, boxed: bool = False):\n formatter = partial(\n printing.pprint_thing,\n escape_chars=("\t", "\r", "\n"),\n quote_strings=not boxed,\n )\n return formatter\n\n def _str_map(\n self,\n f,\n na_value=lib.no_default,\n dtype: Dtype | None = None,\n convert: bool = True,\n ):\n if self.dtype.na_value is np.nan:\n return self._str_map_nan_semantics(\n f, na_value=na_value, dtype=dtype, convert=convert\n )\n\n from pandas.arrays import BooleanArray\n\n if dtype is None:\n dtype = self.dtype\n if na_value is lib.no_default:\n na_value = self.dtype.na_value\n\n mask = isna(self)\n arr = np.asarray(self)\n\n if is_integer_dtype(dtype) or is_bool_dtype(dtype):\n constructor: type[IntegerArray | BooleanArray]\n if is_integer_dtype(dtype):\n constructor = IntegerArray\n else:\n constructor = BooleanArray\n\n na_value_is_na = isna(na_value)\n if na_value_is_na:\n na_value = 1\n elif dtype == np.dtype("bool"):\n # GH#55736\n na_value = bool(na_value)\n result = lib.map_infer_mask(\n arr,\n f,\n mask.view("uint8"),\n convert=False,\n na_value=na_value,\n # error: Argument 1 to "dtype" has incompatible type\n # "Union[ExtensionDtype, str, dtype[Any], Type[object]]"; expected\n # "Type[object]"\n dtype=np.dtype(cast(type, dtype)),\n )\n\n if not na_value_is_na:\n mask[:] = False\n\n return constructor(result, mask)\n\n else:\n return self._str_map_str_or_object(dtype, na_value, arr, f, mask)\n\n def _str_map_str_or_object(\n self,\n dtype,\n na_value,\n arr: np.ndarray,\n f,\n mask: npt.NDArray[np.bool_],\n ):\n # _str_map helper for case where dtype is either string dtype or object\n if is_string_dtype(dtype) and not is_object_dtype(dtype):\n # i.e. StringDtype\n result = lib.map_infer_mask(\n arr, f, mask.view("uint8"), convert=False, na_value=na_value\n )\n if self.dtype.storage == "pyarrow":\n import pyarrow as pa\n\n result = pa.array(\n result, mask=mask, type=pa.large_string(), from_pandas=True\n )\n # error: Too many arguments for "BaseStringArray"\n return type(self)(result) # type: ignore[call-arg]\n\n else:\n # This is when the result type is object. We reach this when\n # -> We know the result type is truly object (e.g. .encode returns bytes\n # or .findall returns a list).\n # -> We don't know the result type. E.g. 
`.get` can return anything.\n return lib.map_infer_mask(arr, f, mask.view("uint8"))\n\n def _str_map_nan_semantics(\n self,\n f,\n na_value=lib.no_default,\n dtype: Dtype | None = None,\n convert: bool = True,\n ):\n if dtype is None:\n dtype = self.dtype\n if na_value is lib.no_default:\n if is_bool_dtype(dtype):\n # NaN propagates as False\n na_value = False\n else:\n na_value = self.dtype.na_value\n\n mask = isna(self)\n arr = np.asarray(self)\n\n if is_integer_dtype(dtype) or is_bool_dtype(dtype):\n na_value_is_na = isna(na_value)\n if na_value_is_na:\n if is_integer_dtype(dtype):\n na_value = 0\n else:\n # NaN propagates as False\n na_value = False\n\n result = lib.map_infer_mask(\n arr,\n f,\n mask.view("uint8"),\n convert=False,\n na_value=na_value,\n dtype=np.dtype(cast(type, dtype)),\n )\n if na_value_is_na and is_integer_dtype(dtype) and mask.any():\n # TODO: we could alternatively do this check before map_infer_mask\n # and adjust the dtype/na_value we pass there. Which is more\n # performant?\n result = result.astype("float64")\n result[mask] = np.nan\n\n return result\n\n else:\n return self._str_map_str_or_object(dtype, na_value, arr, f, mask)\n\n def view(self, dtype: Dtype | None = None) -> ArrayLike:\n if dtype is not None:\n raise TypeError("Cannot change data-type for string array.")\n return super().view(dtype=dtype)\n\n\n# error: Definition of "_concat_same_type" in base class "NDArrayBacked" is\n# incompatible with definition in base class "ExtensionArray"\nclass StringArray(BaseStringArray, NumpyExtensionArray): # type: ignore[misc]\n """\n Extension array for string data.\n\n .. warning::\n\n StringArray is considered experimental. The implementation and\n parts of the API may change without warning.\n\n Parameters\n ----------\n values : array-like\n The array of data.\n\n .. warning::\n\n Currently, this expects an object-dtype ndarray\n where the elements are Python strings\n or nan-likes (``None``, ``np.nan``, ``NA``).\n This may change without warning in the future. Use\n :meth:`pandas.array` with ``dtype="string"`` for a stable way of\n creating a `StringArray` from any sequence.\n\n .. 
versionchanged:: 1.5.0\n\n StringArray now accepts array-likes containing\n nan-likes(``None``, ``np.nan``) for the ``values`` parameter\n in addition to strings and :attr:`pandas.NA`\n\n copy : bool, default False\n Whether to copy the array of data.\n\n Attributes\n ----------\n None\n\n Methods\n -------\n None\n\n See Also\n --------\n :func:`pandas.array`\n The recommended function for creating a StringArray.\n Series.str\n The string methods are available on Series backed by\n a StringArray.\n\n Notes\n -----\n StringArray returns a BooleanArray for comparison methods.\n\n Examples\n --------\n >>> pd.array(['This is', 'some text', None, 'data.'], dtype="string")\n <StringArray>\n ['This is', 'some text', <NA>, 'data.']\n Length: 4, dtype: string\n\n Unlike arrays instantiated with ``dtype="object"``, ``StringArray``\n will convert the values to strings.\n\n >>> pd.array(['1', 1], dtype="object")\n <NumpyExtensionArray>\n ['1', 1]\n Length: 2, dtype: object\n >>> pd.array(['1', 1], dtype="string")\n <StringArray>\n ['1', '1']\n Length: 2, dtype: string\n\n However, instantiating StringArrays directly with non-strings will raise an error.\n\n For comparison methods, `StringArray` returns a :class:`pandas.BooleanArray`:\n\n >>> pd.array(["a", None, "c"], dtype="string") == "a"\n <BooleanArray>\n [True, <NA>, False]\n Length: 3, dtype: boolean\n """\n\n # undo the NumpyExtensionArray hack\n _typ = "extension"\n _storage = "python"\n _na_value: libmissing.NAType | float = libmissing.NA\n\n def __init__(self, values, copy: bool = False) -> None:\n values = extract_array(values)\n\n super().__init__(values, copy=copy)\n if not isinstance(values, type(self)):\n self._validate()\n NDArrayBacked.__init__(\n self,\n self._ndarray,\n StringDtype(storage=self._storage, na_value=self._na_value),\n )\n\n def _validate(self):\n """Validate that we only store NA or strings."""\n if len(self._ndarray) and not lib.is_string_array(self._ndarray, skipna=True):\n raise ValueError("StringArray requires a sequence of strings or pandas.NA")\n if self._ndarray.dtype != "object":\n raise ValueError(\n "StringArray requires a sequence of strings or pandas.NA. Got "\n f"'{self._ndarray.dtype}' dtype instead."\n )\n # Check to see if need to convert Na values to pd.NA\n if self._ndarray.ndim > 2:\n # Ravel if ndims > 2 b/c no cythonized version available\n lib.convert_nans_to_NA(self._ndarray.ravel("K"))\n else:\n lib.convert_nans_to_NA(self._ndarray)\n\n def _validate_scalar(self, value):\n # used by NDArrayBackedExtensionIndex.insert\n if isna(value):\n return self.dtype.na_value\n elif not isinstance(value, str):\n raise TypeError(\n f"Invalid value '{value}' for dtype '{self.dtype}'. 
Value should be a "\n f"string or missing value, got '{type(value).__name__}' instead."\n )\n return value\n\n @classmethod\n def _from_sequence(cls, scalars, *, dtype: Dtype | None = None, copy: bool = False):\n if dtype and not (isinstance(dtype, str) and dtype == "string"):\n dtype = pandas_dtype(dtype)\n assert isinstance(dtype, StringDtype) and dtype.storage == "python"\n else:\n if using_string_dtype():\n dtype = StringDtype(storage="python", na_value=np.nan)\n else:\n dtype = StringDtype(storage="python")\n\n from pandas.core.arrays.masked import BaseMaskedArray\n\n na_value = dtype.na_value\n if isinstance(scalars, BaseMaskedArray):\n # avoid costly conversion to object dtype\n na_values = scalars._mask\n result = scalars._data\n result = lib.ensure_string_array(result, copy=copy, convert_na_value=False)\n result[na_values] = na_value\n\n else:\n if lib.is_pyarrow_array(scalars):\n # pyarrow array; we cannot rely on the "to_numpy" check in\n # ensure_string_array because calling scalars.to_numpy would set\n # zero_copy_only to True which caused problems see GH#52076\n scalars = np.array(scalars)\n # convert non-na-likes to str, and nan-likes to StringDtype().na_value\n result = lib.ensure_string_array(scalars, na_value=na_value, copy=copy)\n\n # Manually creating new array avoids the validation step in the __init__, so is\n # faster. Refactor need for validation?\n new_string_array = cls.__new__(cls)\n NDArrayBacked.__init__(new_string_array, result, dtype)\n\n return new_string_array\n\n @classmethod\n def _from_sequence_of_strings(\n cls, strings, *, dtype: Dtype | None = None, copy: bool = False\n ):\n return cls._from_sequence(strings, dtype=dtype, copy=copy)\n\n @classmethod\n def _empty(cls, shape, dtype) -> StringArray:\n values = np.empty(shape, dtype=object)\n values[:] = libmissing.NA\n return cls(values).astype(dtype, copy=False)\n\n def __arrow_array__(self, type=None):\n """\n Convert myself into a pyarrow Array.\n """\n import pyarrow as pa\n\n if type is None:\n type = pa.string()\n\n values = self._ndarray.copy()\n values[self.isna()] = None\n return pa.array(values, type=type, from_pandas=True)\n\n def _values_for_factorize(self) -> tuple[np.ndarray, libmissing.NAType | float]: # type: ignore[override]\n arr = self._ndarray.copy()\n\n return arr, self.dtype.na_value\n\n def _maybe_convert_setitem_value(self, value):\n """Maybe convert value to be pyarrow compatible."""\n if lib.is_scalar(value):\n if isna(value):\n value = self.dtype.na_value\n elif not isinstance(value, str):\n raise TypeError(\n f"Invalid value '{value}' for dtype '{self.dtype}'. Value should "\n f"be a string or missing value, got '{type(value).__name__}' "\n "instead."\n )\n else:\n value = extract_array(value, extract_numpy=True)\n if not is_array_like(value):\n value = np.asarray(value, dtype=object)\n elif isinstance(value.dtype, type(self.dtype)):\n return value\n else:\n # cast categories and friends to arrays to see if values are\n # compatible, compatibility with arrow backed strings\n value = np.asarray(value)\n if len(value) and not lib.is_string_array(value, skipna=True):\n raise TypeError(\n "Invalid value for dtype 'str'. 
Value should be a "\n "string or missing value (or array of those)."\n )\n return value\n\n def __setitem__(self, key, value) -> None:\n value = self._maybe_convert_setitem_value(value)\n\n key = check_array_indexer(self, key)\n scalar_key = lib.is_scalar(key)\n scalar_value = lib.is_scalar(value)\n if scalar_key and not scalar_value:\n raise ValueError("setting an array element with a sequence.")\n\n if not scalar_value:\n if value.dtype == self.dtype:\n value = value._ndarray\n else:\n value = np.asarray(value)\n mask = isna(value)\n if mask.any():\n value = value.copy()\n value[isna(value)] = self.dtype.na_value\n\n super().__setitem__(key, value)\n\n def _putmask(self, mask: npt.NDArray[np.bool_], value) -> None:\n # the super() method NDArrayBackedExtensionArray._putmask uses\n # np.putmask which doesn't properly handle None/pd.NA, so using the\n # base class implementation that uses __setitem__\n ExtensionArray._putmask(self, mask, value)\n\n def _where(self, mask: npt.NDArray[np.bool_], value) -> Self:\n # the super() method NDArrayBackedExtensionArray._where uses\n # np.putmask which doesn't properly handle None/pd.NA, so using the\n # base class implementation that uses __setitem__\n return ExtensionArray._where(self, mask, value)\n\n def isin(self, values: ArrayLike) -> npt.NDArray[np.bool_]:\n if isinstance(values, BaseStringArray) or (\n isinstance(values, ExtensionArray) and is_string_dtype(values.dtype)\n ):\n values = values.astype(self.dtype, copy=False)\n else:\n if not lib.is_string_array(np.asarray(values), skipna=True):\n values = np.array(\n [val for val in values if isinstance(val, str) or isna(val)],\n dtype=object,\n )\n if not len(values):\n return np.zeros(self.shape, dtype=bool)\n\n values = self._from_sequence(values, dtype=self.dtype)\n\n return isin(np.asarray(self), np.asarray(values))\n\n def astype(self, dtype, copy: bool = True):\n dtype = pandas_dtype(dtype)\n\n if dtype == self.dtype:\n if copy:\n return self.copy()\n return self\n\n elif isinstance(dtype, IntegerDtype):\n arr = self._ndarray.copy()\n mask = self.isna()\n arr[mask] = 0\n values = arr.astype(dtype.numpy_dtype)\n return IntegerArray(values, mask, copy=False)\n elif isinstance(dtype, FloatingDtype):\n arr = self.copy()\n mask = self.isna()\n arr[mask] = "0"\n values = arr.astype(dtype.numpy_dtype)\n return FloatingArray(values, mask, copy=False)\n elif isinstance(dtype, ExtensionDtype):\n # Skip the NumpyExtensionArray.astype method\n return ExtensionArray.astype(self, dtype, copy)\n elif np.issubdtype(dtype, np.floating):\n arr = self._ndarray.copy()\n mask = self.isna()\n arr[mask] = 0\n values = arr.astype(dtype)\n values[mask] = np.nan\n return values\n\n return super().astype(dtype, copy)\n\n def _reduce(\n self,\n name: str,\n *,\n skipna: bool = True,\n keepdims: bool = False,\n axis: AxisInt | None = 0,\n **kwargs,\n ):\n if self.dtype.na_value is np.nan and name in ["any", "all"]:\n if name == "any":\n return nanops.nanany(self._ndarray, skipna=skipna)\n else:\n return nanops.nanall(self._ndarray, skipna=skipna)\n\n if name in ["min", "max", "argmin", "argmax", "sum"]:\n result = getattr(self, name)(skipna=skipna, axis=axis, **kwargs)\n if keepdims:\n return self._from_sequence([result], dtype=self.dtype)\n return result\n raise TypeError(f"Cannot perform reduction '{name}' with string dtype")\n\n def _accumulate(self, name: str, *, skipna: bool = True, **kwargs) -> StringArray:\n """\n Return an ExtensionArray performing an accumulation operation.\n\n The underlying data type 
might change.\n\n Parameters\n ----------\n name : str\n Name of the function, supported values are:\n - cummin\n - cummax\n - cumsum\n - cumprod\n skipna : bool, default True\n If True, skip NA values.\n **kwargs\n Additional keyword arguments passed to the accumulation function.\n Currently, there is no supported kwarg.\n\n Returns\n -------\n array\n\n Raises\n ------\n NotImplementedError : subclass does not define accumulations\n """\n if name == "cumprod":\n msg = f"operation '{name}' not supported for dtype '{self.dtype}'"\n raise TypeError(msg)\n\n # We may need to strip out trailing NA values\n tail: np.ndarray | None = None\n na_mask: np.ndarray | None = None\n ndarray = self._ndarray\n np_func = {\n "cumsum": np.cumsum,\n "cummin": np.minimum.accumulate,\n "cummax": np.maximum.accumulate,\n }[name]\n\n if self._hasna:\n na_mask = cast("npt.NDArray[np.bool_]", isna(ndarray))\n if np.all(na_mask):\n return type(self)(ndarray)\n if skipna:\n if name == "cumsum":\n ndarray = np.where(na_mask, "", ndarray)\n else:\n # We can retain the running min/max by forward/backward filling.\n ndarray = ndarray.copy()\n missing.pad_or_backfill_inplace(\n ndarray,\n method="pad",\n axis=0,\n )\n missing.pad_or_backfill_inplace(\n ndarray,\n method="backfill",\n axis=0,\n )\n else:\n # When not skipping NA values, the result should be null from\n # the first NA value onward.\n idx = np.argmax(na_mask)\n tail = np.empty(len(ndarray) - idx, dtype="object")\n tail[:] = self.dtype.na_value\n ndarray = ndarray[:idx]\n\n # mypy: Cannot call function of unknown type\n np_result = np_func(ndarray) # type: ignore[operator]\n\n if tail is not None:\n np_result = np.hstack((np_result, tail))\n elif na_mask is not None:\n # Argument 2 to "where" has incompatible type "NAType | float"\n np_result = np.where(na_mask, self.dtype.na_value, np_result) # type: ignore[arg-type]\n\n result = type(self)(np_result)\n return result\n\n def _wrap_reduction_result(self, axis: AxisInt | None, result) -> Any:\n if self.dtype.na_value is np.nan and result is libmissing.NA:\n # the masked_reductions use pd.NA -> convert to np.nan\n return np.nan\n return super()._wrap_reduction_result(axis, result)\n\n def min(self, axis=None, skipna: bool = True, **kwargs) -> Scalar:\n nv.validate_min((), kwargs)\n result = masked_reductions.min(\n values=self.to_numpy(), mask=self.isna(), skipna=skipna\n )\n return self._wrap_reduction_result(axis, result)\n\n def max(self, axis=None, skipna: bool = True, **kwargs) -> Scalar:\n nv.validate_max((), kwargs)\n result = masked_reductions.max(\n values=self.to_numpy(), mask=self.isna(), skipna=skipna\n )\n return self._wrap_reduction_result(axis, result)\n\n def sum(\n self,\n *,\n axis: AxisInt | None = None,\n skipna: bool = True,\n min_count: int = 0,\n **kwargs,\n ) -> Scalar:\n nv.validate_sum((), kwargs)\n result = masked_reductions.sum(\n values=self._ndarray, mask=self.isna(), skipna=skipna\n )\n return self._wrap_reduction_result(axis, result)\n\n def value_counts(self, dropna: bool = True) -> Series:\n from pandas.core.algorithms import value_counts_internal as value_counts\n\n result = value_counts(self._ndarray, dropna=dropna).astype("Int64")\n result = value_counts(self._ndarray, sort=False, dropna=dropna)\n result.index = result.index.astype(self.dtype)\n\n if self.dtype.na_value is libmissing.NA:\n result = result.astype("Int64")\n return result\n\n def memory_usage(self, deep: bool = False) -> int:\n result = self._ndarray.nbytes\n if deep:\n return result + 
lib.memory_usage_of_objects(self._ndarray)\n return result\n\n @doc(ExtensionArray.searchsorted)\n def searchsorted(\n self,\n value: NumpyValueArrayLike | ExtensionArray,\n side: Literal["left", "right"] = "left",\n sorter: NumpySorter | None = None,\n ) -> npt.NDArray[np.intp] | np.intp:\n if self._hasna:\n raise ValueError(\n "searchsorted requires array to be sorted, which is impossible "\n "with NAs present."\n )\n return super().searchsorted(value=value, side=side, sorter=sorter)\n\n def _cmp_method(self, other, op):\n from pandas.arrays import BooleanArray\n\n if isinstance(other, StringArray):\n other = other._ndarray\n\n mask = isna(self) | isna(other)\n valid = ~mask\n\n if not lib.is_scalar(other):\n if len(other) != len(self):\n # prevent improper broadcasting when other is 2D\n raise ValueError(\n f"Lengths of operands do not match: {len(self)} != {len(other)}"\n )\n\n # for array-likes, first filter out NAs before converting to numpy\n if not is_array_like(other):\n other = np.asarray(other)\n other = other[valid]\n\n if op.__name__ in ops.ARITHMETIC_BINOPS:\n result = np.empty_like(self._ndarray, dtype="object")\n result[mask] = self.dtype.na_value\n result[valid] = op(self._ndarray[valid], other)\n return self._from_backing_data(result)\n else:\n # logical\n result = np.zeros(len(self._ndarray), dtype="bool")\n result[valid] = op(self._ndarray[valid], other)\n res_arr = BooleanArray(result, mask)\n if self.dtype.na_value is np.nan:\n if op == operator.ne:\n return res_arr.to_numpy(np.bool_, na_value=True)\n else:\n return res_arr.to_numpy(np.bool_, na_value=False)\n return res_arr\n\n _arith_method = _cmp_method\n\n\nclass StringArrayNumpySemantics(StringArray):\n _storage = "python"\n _na_value = np.nan\n\n def _validate(self) -> None:\n """Validate that we only store NaN or strings."""\n if len(self._ndarray) and not lib.is_string_array(self._ndarray, skipna=True):\n raise ValueError(\n "StringArrayNumpySemantics requires a sequence of strings or NaN"\n )\n if self._ndarray.dtype != "object":\n raise ValueError(\n "StringArrayNumpySemantics requires a sequence of strings or NaN. Got "\n f"'{self._ndarray.dtype}' dtype instead."\n )\n # TODO validate or force NA/None to NaN\n\n @classmethod\n def _from_sequence(\n cls, scalars, *, dtype: Dtype | None = None, copy: bool = False\n ) -> Self:\n if dtype is None:\n dtype = StringDtype(storage="python", na_value=np.nan)\n return super()._from_sequence(scalars, dtype=dtype, copy=copy)\n
.venv\Lib\site-packages\pandas\core\arrays\string_.py
string_.py
Python
36,940
0.95
0.173148
0.084967
python-kit
794
2025-06-14T18:11:45.721592
GPL-3.0
false
a9a403f3a637ddf61d4b1a2009098302
from __future__ import annotations\n\nimport operator\nimport re\nfrom typing import (\n TYPE_CHECKING,\n Callable,\n Union,\n)\nimport warnings\n\nimport numpy as np\n\nfrom pandas._libs import (\n lib,\n missing as libmissing,\n)\nfrom pandas.compat import (\n pa_version_under10p1,\n pa_version_under13p0,\n pa_version_under16p0,\n)\nfrom pandas.util._exceptions import find_stack_level\n\nfrom pandas.core.dtypes.common import (\n is_scalar,\n pandas_dtype,\n)\nfrom pandas.core.dtypes.missing import isna\n\nfrom pandas.core.arrays._arrow_string_mixins import ArrowStringArrayMixin\nfrom pandas.core.arrays.arrow import ArrowExtensionArray\nfrom pandas.core.arrays.boolean import BooleanDtype\nfrom pandas.core.arrays.floating import Float64Dtype\nfrom pandas.core.arrays.integer import Int64Dtype\nfrom pandas.core.arrays.numeric import NumericDtype\nfrom pandas.core.arrays.string_ import (\n BaseStringArray,\n StringDtype,\n)\nfrom pandas.core.strings.object_array import ObjectStringArrayMixin\n\nif not pa_version_under10p1:\n import pyarrow as pa\n import pyarrow.compute as pc\n\n\nif TYPE_CHECKING:\n from collections.abc import Sequence\n\n from pandas._typing import (\n ArrayLike,\n Dtype,\n Self,\n npt,\n )\n\n from pandas import Series\n\n\nArrowStringScalarOrNAT = Union[str, libmissing.NAType]\n\n\ndef _chk_pyarrow_available() -> None:\n if pa_version_under10p1:\n msg = "pyarrow>=10.0.1 is required for PyArrow backed ArrowExtensionArray."\n raise ImportError(msg)\n\n\ndef _is_string_view(typ):\n return not pa_version_under16p0 and pa.types.is_string_view(typ)\n\n\n# TODO: Inherit directly from BaseStringArrayMethods. Currently we inherit from\n# ObjectStringArrayMixin because we want to have the object-dtype based methods as\n# fallback for the ones that pyarrow doesn't yet support\n\n\nclass ArrowStringArray(ObjectStringArrayMixin, ArrowExtensionArray, BaseStringArray):\n """\n Extension array for string data in a ``pyarrow.ChunkedArray``.\n\n .. warning::\n\n ArrowStringArray is considered experimental. 
The implementation and\n parts of the API may change without warning.\n\n Parameters\n ----------\n values : pyarrow.Array or pyarrow.ChunkedArray\n The array of data.\n\n Attributes\n ----------\n None\n\n Methods\n -------\n None\n\n See Also\n --------\n :func:`pandas.array`\n The recommended function for creating a ArrowStringArray.\n Series.str\n The string methods are available on Series backed by\n a ArrowStringArray.\n\n Notes\n -----\n ArrowStringArray returns a BooleanArray for comparison methods.\n\n Examples\n --------\n >>> pd.array(['This is', 'some text', None, 'data.'], dtype="string[pyarrow]")\n <ArrowStringArray>\n ['This is', 'some text', <NA>, 'data.']\n Length: 4, dtype: string\n """\n\n # error: Incompatible types in assignment (expression has type "StringDtype",\n # base class "ArrowExtensionArray" defined the type as "ArrowDtype")\n _dtype: StringDtype # type: ignore[assignment]\n _storage = "pyarrow"\n _na_value: libmissing.NAType | float = libmissing.NA\n\n def __init__(self, values) -> None:\n _chk_pyarrow_available()\n if isinstance(values, (pa.Array, pa.ChunkedArray)) and (\n pa.types.is_string(values.type)\n or _is_string_view(values.type)\n or (\n pa.types.is_dictionary(values.type)\n and (\n pa.types.is_string(values.type.value_type)\n or pa.types.is_large_string(values.type.value_type)\n or _is_string_view(values.type.value_type)\n )\n )\n ):\n values = pc.cast(values, pa.large_string())\n\n super().__init__(values)\n self._dtype = StringDtype(storage=self._storage, na_value=self._na_value)\n\n if not pa.types.is_large_string(self._pa_array.type):\n raise ValueError(\n "ArrowStringArray requires a PyArrow (chunked) array of "\n "large_string type"\n )\n\n @classmethod\n def _box_pa_scalar(cls, value, pa_type: pa.DataType | None = None) -> pa.Scalar:\n pa_scalar = super()._box_pa_scalar(value, pa_type)\n if pa.types.is_string(pa_scalar.type) and pa_type is None:\n pa_scalar = pc.cast(pa_scalar, pa.large_string())\n return pa_scalar\n\n @classmethod\n def _box_pa_array(\n cls, value, pa_type: pa.DataType | None = None, copy: bool = False\n ) -> pa.Array | pa.ChunkedArray:\n pa_array = super()._box_pa_array(value, pa_type)\n if pa.types.is_string(pa_array.type) and pa_type is None:\n pa_array = pc.cast(pa_array, pa.large_string())\n return pa_array\n\n def __len__(self) -> int:\n """\n Length of this array.\n\n Returns\n -------\n length : int\n """\n return len(self._pa_array)\n\n @classmethod\n def _from_sequence(cls, scalars, *, dtype: Dtype | None = None, copy: bool = False):\n from pandas.core.arrays.masked import BaseMaskedArray\n\n _chk_pyarrow_available()\n\n if dtype and not (isinstance(dtype, str) and dtype == "string"):\n dtype = pandas_dtype(dtype)\n assert isinstance(dtype, StringDtype) and dtype.storage == "pyarrow"\n\n if isinstance(scalars, BaseMaskedArray):\n # avoid costly conversion to object dtype in ensure_string_array and\n # numerical issues with Float32Dtype\n na_values = scalars._mask\n result = scalars._data\n result = lib.ensure_string_array(result, copy=copy, convert_na_value=False)\n return cls(pa.array(result, mask=na_values, type=pa.large_string()))\n elif isinstance(scalars, (pa.Array, pa.ChunkedArray)):\n return cls(pc.cast(scalars, pa.large_string()))\n\n # convert non-na-likes to str\n result = lib.ensure_string_array(scalars, copy=copy)\n return cls(pa.array(result, type=pa.large_string(), from_pandas=True))\n\n @classmethod\n def _from_sequence_of_strings(\n cls, strings, dtype: Dtype | None = None, copy: bool = False\n 
):\n return cls._from_sequence(strings, dtype=dtype, copy=copy)\n\n @property\n def dtype(self) -> StringDtype: # type: ignore[override]\n """\n An instance of 'string[pyarrow]'.\n """\n return self._dtype\n\n def insert(self, loc: int, item) -> ArrowStringArray:\n if self.dtype.na_value is np.nan and item is np.nan:\n item = libmissing.NA\n if not isinstance(item, str) and item is not libmissing.NA:\n raise TypeError(\n f"Invalid value '{item}' for dtype 'str'. Value should be a "\n f"string or missing value, got '{type(item).__name__}' instead."\n )\n return super().insert(loc, item)\n\n def _convert_bool_result(self, values, na=lib.no_default, method_name=None):\n if na is not lib.no_default and not isna(na) and not isinstance(na, bool):\n # GH#59561\n warnings.warn(\n f"Allowing a non-bool 'na' in obj.str.{method_name} is deprecated "\n "and will raise in a future version.",\n FutureWarning,\n stacklevel=find_stack_level(),\n )\n na = bool(na)\n\n if self.dtype.na_value is np.nan:\n if na is lib.no_default or isna(na):\n # NaN propagates as False\n values = values.fill_null(False)\n else:\n values = values.fill_null(na)\n return values.to_numpy()\n else:\n if na is not lib.no_default and not isna(\n na\n ): # pyright: ignore [reportGeneralTypeIssues]\n values = values.fill_null(na)\n return BooleanDtype().__from_arrow__(values)\n\n def _maybe_convert_setitem_value(self, value):\n """Maybe convert value to be pyarrow compatible."""\n if is_scalar(value):\n if isna(value):\n value = None\n elif not isinstance(value, str):\n raise TypeError(\n f"Invalid value '{value}' for dtype 'str'. Value should be a "\n f"string or missing value, got '{type(value).__name__}' instead."\n )\n else:\n value = np.array(value, dtype=object, copy=True)\n value[isna(value)] = None\n for v in value:\n if not (v is None or isinstance(v, str)):\n raise TypeError(\n "Invalid value for dtype 'str'. 
Value should be a "\n "string or missing value (or array of those)."\n )\n return super()._maybe_convert_setitem_value(value)\n\n def isin(self, values: ArrayLike) -> npt.NDArray[np.bool_]:\n value_set = [\n pa_scalar.as_py()\n for pa_scalar in [pa.scalar(value, from_pandas=True) for value in values]\n if pa_scalar.type in (pa.string(), pa.null(), pa.large_string())\n ]\n\n # short-circuit to return all False array.\n if not len(value_set):\n return np.zeros(len(self), dtype=bool)\n\n result = pc.is_in(\n self._pa_array, value_set=pa.array(value_set, type=self._pa_array.type)\n )\n # pyarrow 2.0.0 returned nulls, so we explicily specify dtype to convert nulls\n # to False\n return np.array(result, dtype=np.bool_)\n\n def astype(self, dtype, copy: bool = True):\n dtype = pandas_dtype(dtype)\n\n if dtype == self.dtype:\n if copy:\n return self.copy()\n return self\n elif isinstance(dtype, NumericDtype):\n data = self._pa_array.cast(pa.from_numpy_dtype(dtype.numpy_dtype))\n return dtype.__from_arrow__(data)\n elif isinstance(dtype, np.dtype) and np.issubdtype(dtype, np.floating):\n return self.to_numpy(dtype=dtype, na_value=np.nan)\n\n return super().astype(dtype, copy=copy)\n\n @property\n def _data(self):\n # dask accesses ._data directlys\n warnings.warn(\n f"{type(self).__name__}._data is a deprecated and will be removed "\n "in a future version, use ._pa_array instead",\n FutureWarning,\n stacklevel=find_stack_level(),\n )\n return self._pa_array\n\n # ------------------------------------------------------------------------\n # String methods interface\n\n _str_isalnum = ArrowStringArrayMixin._str_isalnum\n _str_isalpha = ArrowStringArrayMixin._str_isalpha\n _str_isdecimal = ArrowStringArrayMixin._str_isdecimal\n _str_isdigit = ArrowStringArrayMixin._str_isdigit\n _str_islower = ArrowStringArrayMixin._str_islower\n _str_isnumeric = ArrowStringArrayMixin._str_isnumeric\n _str_isspace = ArrowStringArrayMixin._str_isspace\n _str_istitle = ArrowStringArrayMixin._str_istitle\n _str_isupper = ArrowStringArrayMixin._str_isupper\n\n _str_map = BaseStringArray._str_map\n _str_startswith = ArrowStringArrayMixin._str_startswith\n _str_endswith = ArrowStringArrayMixin._str_endswith\n _str_pad = ArrowStringArrayMixin._str_pad\n _str_match = ArrowStringArrayMixin._str_match\n _str_fullmatch = ArrowStringArrayMixin._str_fullmatch\n _str_lower = ArrowStringArrayMixin._str_lower\n _str_upper = ArrowStringArrayMixin._str_upper\n _str_strip = ArrowStringArrayMixin._str_strip\n _str_lstrip = ArrowStringArrayMixin._str_lstrip\n _str_rstrip = ArrowStringArrayMixin._str_rstrip\n _str_removesuffix = ArrowStringArrayMixin._str_removesuffix\n _str_get = ArrowStringArrayMixin._str_get\n _str_capitalize = ArrowStringArrayMixin._str_capitalize\n _str_title = ArrowStringArrayMixin._str_title\n _str_swapcase = ArrowStringArrayMixin._str_swapcase\n _str_slice_replace = ArrowStringArrayMixin._str_slice_replace\n _str_len = ArrowStringArrayMixin._str_len\n _str_slice = ArrowStringArrayMixin._str_slice\n\n def _str_contains(\n self,\n pat,\n case: bool = True,\n flags: int = 0,\n na=lib.no_default,\n regex: bool = True,\n ):\n if flags:\n return super()._str_contains(pat, case, flags, na, regex)\n\n return ArrowStringArrayMixin._str_contains(self, pat, case, flags, na, regex)\n\n def _str_replace(\n self,\n pat: str | re.Pattern,\n repl: str | Callable,\n n: int = -1,\n case: bool = True,\n flags: int = 0,\n regex: bool = True,\n ):\n if isinstance(pat, re.Pattern) or callable(repl) or not case or flags:\n return 
super()._str_replace(pat, repl, n, case, flags, regex)\n\n return ArrowStringArrayMixin._str_replace(\n self, pat, repl, n, case, flags, regex\n )\n\n def _str_repeat(self, repeats: int | Sequence[int]):\n if not isinstance(repeats, int):\n return super()._str_repeat(repeats)\n else:\n return ArrowExtensionArray._str_repeat(self, repeats=repeats)\n\n def _str_removeprefix(self, prefix: str):\n if not pa_version_under13p0:\n return ArrowStringArrayMixin._str_removeprefix(self, prefix)\n return super()._str_removeprefix(prefix)\n\n def _str_count(self, pat: str, flags: int = 0):\n if flags:\n return super()._str_count(pat, flags)\n result = pc.count_substring_regex(self._pa_array, pat)\n return self._convert_int_result(result)\n\n def _str_find(self, sub: str, start: int = 0, end: int | None = None):\n if (\n pa_version_under13p0\n and not (start != 0 and end is not None)\n and not (start == 0 and end is None)\n ):\n # GH#59562\n return super()._str_find(sub, start, end)\n return ArrowStringArrayMixin._str_find(self, sub, start, end)\n\n def _str_get_dummies(self, sep: str = "|"):\n dummies_pa, labels = ArrowExtensionArray(self._pa_array)._str_get_dummies(sep)\n if len(labels) == 0:\n return np.empty(shape=(0, 0), dtype=np.int64), labels\n dummies = np.vstack(dummies_pa.to_numpy())\n return dummies.astype(np.int64, copy=False), labels\n\n def _convert_int_result(self, result):\n if self.dtype.na_value is np.nan:\n if isinstance(result, pa.Array):\n result = result.to_numpy(zero_copy_only=False)\n else:\n result = result.to_numpy()\n if result.dtype == np.int32:\n result = result.astype(np.int64)\n return result\n\n return Int64Dtype().__from_arrow__(result)\n\n def _convert_rank_result(self, result):\n if self.dtype.na_value is np.nan:\n if isinstance(result, pa.Array):\n result = result.to_numpy(zero_copy_only=False)\n else:\n result = result.to_numpy()\n return result.astype("float64", copy=False)\n\n return Float64Dtype().__from_arrow__(result)\n\n def _reduce(\n self, name: str, *, skipna: bool = True, keepdims: bool = False, **kwargs\n ):\n if self.dtype.na_value is np.nan and name in ["any", "all"]:\n if not skipna:\n nas = pc.is_null(self._pa_array)\n arr = pc.or_kleene(nas, pc.not_equal(self._pa_array, ""))\n else:\n arr = pc.not_equal(self._pa_array, "")\n result = ArrowExtensionArray(arr)._reduce(\n name, skipna=skipna, keepdims=keepdims, **kwargs\n )\n if keepdims:\n # ArrowExtensionArray will return a length-1 bool[pyarrow] array\n return result.astype(np.bool_)\n return result\n\n if name in ("min", "max", "sum", "argmin", "argmax"):\n result = self._reduce_calc(name, skipna=skipna, keepdims=keepdims, **kwargs)\n else:\n raise TypeError(f"Cannot perform reduction '{name}' with string dtype")\n\n if name in ("argmin", "argmax") and isinstance(result, pa.Array):\n return self._convert_int_result(result)\n elif isinstance(result, pa.Array):\n return type(self)(result)\n else:\n return result\n\n def value_counts(self, dropna: bool = True) -> Series:\n result = super().value_counts(dropna=dropna)\n if self.dtype.na_value is np.nan:\n res_values = result._values.to_numpy()\n return result._constructor(\n res_values, index=result.index, name=result.name, copy=False\n )\n return result\n\n def _cmp_method(self, other, op):\n result = super()._cmp_method(other, op)\n if self.dtype.na_value is np.nan:\n if op == operator.ne:\n return result.to_numpy(np.bool_, na_value=True)\n else:\n return result.to_numpy(np.bool_, na_value=False)\n return result\n\n def __pos__(self) -> Self:\n raise 
TypeError(f"bad operand type for unary +: '{self.dtype}'")\n\n\nclass ArrowStringArrayNumpySemantics(ArrowStringArray):\n _na_value = np.nan\n
.venv\Lib\site-packages\pandas\core\arrays\string_arrow.py
string_arrow.py
Python
17,129
0.95
0.17732
0.044226
react-lib
85
2023-08-18T08:00:07.755569
MIT
false
37355d0d892f27b4d88e0d7b85f5bc23
from __future__ import annotations\n\nfrom datetime import timedelta\nimport operator\nfrom typing import (\n TYPE_CHECKING,\n cast,\n)\n\nimport numpy as np\n\nfrom pandas._libs import (\n lib,\n tslibs,\n)\nfrom pandas._libs.tslibs import (\n NaT,\n NaTType,\n Tick,\n Timedelta,\n astype_overflowsafe,\n get_supported_dtype,\n iNaT,\n is_supported_dtype,\n periods_per_second,\n)\nfrom pandas._libs.tslibs.conversion import cast_from_unit_vectorized\nfrom pandas._libs.tslibs.fields import (\n get_timedelta_days,\n get_timedelta_field,\n)\nfrom pandas._libs.tslibs.timedeltas import (\n array_to_timedelta64,\n floordiv_object_array,\n ints_to_pytimedelta,\n parse_timedelta_unit,\n truediv_object_array,\n)\nfrom pandas.compat.numpy import function as nv\nfrom pandas.util._validators import validate_endpoints\n\nfrom pandas.core.dtypes.common import (\n TD64NS_DTYPE,\n is_float_dtype,\n is_integer_dtype,\n is_object_dtype,\n is_scalar,\n is_string_dtype,\n pandas_dtype,\n)\nfrom pandas.core.dtypes.dtypes import ExtensionDtype\nfrom pandas.core.dtypes.missing import isna\n\nfrom pandas.core import (\n nanops,\n roperator,\n)\nfrom pandas.core.array_algos import datetimelike_accumulations\nfrom pandas.core.arrays import datetimelike as dtl\nfrom pandas.core.arrays._ranges import generate_regular_range\nimport pandas.core.common as com\nfrom pandas.core.ops.common import unpack_zerodim_and_defer\n\nif TYPE_CHECKING:\n from collections.abc import Iterator\n\n from pandas._typing import (\n AxisInt,\n DateTimeErrorChoices,\n DtypeObj,\n NpDtype,\n Self,\n npt,\n )\n\n from pandas import DataFrame\n\nimport textwrap\n\n\ndef _field_accessor(name: str, alias: str, docstring: str):\n def f(self) -> np.ndarray:\n values = self.asi8\n if alias == "days":\n result = get_timedelta_days(values, reso=self._creso)\n else:\n # error: Incompatible types in assignment (\n # expression has type "ndarray[Any, dtype[signedinteger[_32Bit]]]",\n # variable has type "ndarray[Any, dtype[signedinteger[_64Bit]]]\n result = get_timedelta_field(values, alias, reso=self._creso) # type: ignore[assignment]\n if self._hasna:\n result = self._maybe_mask_results(\n result, fill_value=None, convert="float64"\n )\n\n return result\n\n f.__name__ = name\n f.__doc__ = f"\n{docstring}\n"\n return property(f)\n\n\nclass TimedeltaArray(dtl.TimelikeOps):\n """\n Pandas ExtensionArray for timedelta data.\n\n .. warning::\n\n TimedeltaArray is currently experimental, and its API may change\n without warning. 
In particular, :attr:`TimedeltaArray.dtype` is\n expected to change to be an instance of an ``ExtensionDtype``\n subclass.\n\n Parameters\n ----------\n values : array-like\n The timedelta data.\n\n dtype : numpy.dtype\n Currently, only ``numpy.dtype("timedelta64[ns]")`` is accepted.\n freq : Offset, optional\n copy : bool, default False\n Whether to copy the underlying array of data.\n\n Attributes\n ----------\n None\n\n Methods\n -------\n None\n\n Examples\n --------\n >>> pd.arrays.TimedeltaArray._from_sequence(pd.TimedeltaIndex(['1h', '2h']))\n <TimedeltaArray>\n ['0 days 01:00:00', '0 days 02:00:00']\n Length: 2, dtype: timedelta64[ns]\n """\n\n _typ = "timedeltaarray"\n _internal_fill_value = np.timedelta64("NaT", "ns")\n _recognized_scalars = (timedelta, np.timedelta64, Tick)\n _is_recognized_dtype = lambda x: lib.is_np_dtype(x, "m")\n _infer_matches = ("timedelta", "timedelta64")\n\n @property\n def _scalar_type(self) -> type[Timedelta]:\n return Timedelta\n\n __array_priority__ = 1000\n # define my properties & methods for delegation\n _other_ops: list[str] = []\n _bool_ops: list[str] = []\n _object_ops: list[str] = ["freq"]\n _field_ops: list[str] = ["days", "seconds", "microseconds", "nanoseconds"]\n _datetimelike_ops: list[str] = _field_ops + _object_ops + _bool_ops + ["unit"]\n _datetimelike_methods: list[str] = [\n "to_pytimedelta",\n "total_seconds",\n "round",\n "floor",\n "ceil",\n "as_unit",\n ]\n\n # Note: ndim must be defined to ensure NaT.__richcmp__(TimedeltaArray)\n # operates pointwise.\n\n def _box_func(self, x: np.timedelta64) -> Timedelta | NaTType:\n y = x.view("i8")\n if y == NaT._value:\n return NaT\n return Timedelta._from_value_and_reso(y, reso=self._creso)\n\n @property\n # error: Return type "dtype" of "dtype" incompatible with return type\n # "ExtensionDtype" in supertype "ExtensionArray"\n def dtype(self) -> np.dtype[np.timedelta64]: # type: ignore[override]\n """\n The dtype for the TimedeltaArray.\n\n .. 
warning::\n\n A future version of pandas will change dtype to be an instance\n of a :class:`pandas.api.extensions.ExtensionDtype` subclass,\n not a ``numpy.dtype``.\n\n Returns\n -------\n numpy.dtype\n """\n return self._ndarray.dtype\n\n # ----------------------------------------------------------------\n # Constructors\n\n _freq = None\n _default_dtype = TD64NS_DTYPE # used in TimeLikeOps.__init__\n\n @classmethod\n def _validate_dtype(cls, values, dtype):\n # used in TimeLikeOps.__init__\n dtype = _validate_td64_dtype(dtype)\n _validate_td64_dtype(values.dtype)\n if dtype != values.dtype:\n raise ValueError("Values resolution does not match dtype.")\n return dtype\n\n # error: Signature of "_simple_new" incompatible with supertype "NDArrayBacked"\n @classmethod\n def _simple_new( # type: ignore[override]\n cls,\n values: npt.NDArray[np.timedelta64],\n freq: Tick | None = None,\n dtype: np.dtype[np.timedelta64] = TD64NS_DTYPE,\n ) -> Self:\n # Require td64 dtype, not unit-less, matching values.dtype\n assert lib.is_np_dtype(dtype, "m")\n assert not tslibs.is_unitless(dtype)\n assert isinstance(values, np.ndarray), type(values)\n assert dtype == values.dtype\n assert freq is None or isinstance(freq, Tick)\n\n result = super()._simple_new(values=values, dtype=dtype)\n result._freq = freq\n return result\n\n @classmethod\n def _from_sequence(cls, data, *, dtype=None, copy: bool = False) -> Self:\n if dtype:\n dtype = _validate_td64_dtype(dtype)\n\n data, freq = sequence_to_td64ns(data, copy=copy, unit=None)\n\n if dtype is not None:\n data = astype_overflowsafe(data, dtype=dtype, copy=False)\n\n return cls._simple_new(data, dtype=data.dtype, freq=freq)\n\n @classmethod\n def _from_sequence_not_strict(\n cls,\n data,\n *,\n dtype=None,\n copy: bool = False,\n freq=lib.no_default,\n unit=None,\n ) -> Self:\n """\n _from_sequence_not_strict but without responsibility for finding the\n result's `freq`.\n """\n if dtype:\n dtype = _validate_td64_dtype(dtype)\n\n assert unit not in ["Y", "y", "M"] # caller is responsible for checking\n\n data, inferred_freq = sequence_to_td64ns(data, copy=copy, unit=unit)\n\n if dtype is not None:\n data = astype_overflowsafe(data, dtype=dtype, copy=False)\n\n result = cls._simple_new(data, dtype=data.dtype, freq=inferred_freq)\n\n result._maybe_pin_freq(freq, {})\n return result\n\n @classmethod\n def _generate_range(\n cls, start, end, periods, freq, closed=None, *, unit: str | None = None\n ) -> Self:\n periods = dtl.validate_periods(periods)\n if freq is None and any(x is None for x in [periods, start, end]):\n raise ValueError("Must provide freq argument if no data is supplied")\n\n if com.count_not_none(start, end, periods, freq) != 3:\n raise ValueError(\n "Of the four parameters: start, end, periods, "\n "and freq, exactly three must be specified"\n )\n\n if start is not None:\n start = Timedelta(start).as_unit("ns")\n\n if end is not None:\n end = Timedelta(end).as_unit("ns")\n\n if unit is not None:\n if unit not in ["s", "ms", "us", "ns"]:\n raise ValueError("'unit' must be one of 's', 'ms', 'us', 'ns'")\n else:\n unit = "ns"\n\n if start is not None and unit is not None:\n start = start.as_unit(unit, round_ok=False)\n if end is not None and unit is not None:\n end = end.as_unit(unit, round_ok=False)\n\n left_closed, right_closed = validate_endpoints(closed)\n\n if freq is not None:\n index = generate_regular_range(start, end, periods, freq, unit=unit)\n else:\n index = np.linspace(start._value, end._value, periods).astype("i8")\n\n if not 
left_closed:\n index = index[1:]\n if not right_closed:\n index = index[:-1]\n\n td64values = index.view(f"m8[{unit}]")\n return cls._simple_new(td64values, dtype=td64values.dtype, freq=freq)\n\n # ----------------------------------------------------------------\n # DatetimeLike Interface\n\n def _unbox_scalar(self, value) -> np.timedelta64:\n if not isinstance(value, self._scalar_type) and value is not NaT:\n raise ValueError("'value' should be a Timedelta.")\n self._check_compatible_with(value)\n if value is NaT:\n return np.timedelta64(value._value, self.unit)\n else:\n return value.as_unit(self.unit).asm8\n\n def _scalar_from_string(self, value) -> Timedelta | NaTType:\n return Timedelta(value)\n\n def _check_compatible_with(self, other) -> None:\n # we don't have anything to validate.\n pass\n\n # ----------------------------------------------------------------\n # Array-Like / EA-Interface Methods\n\n def astype(self, dtype, copy: bool = True):\n # We handle\n # --> timedelta64[ns]\n # --> timedelta64\n # DatetimeLikeArrayMixin super call handles other cases\n dtype = pandas_dtype(dtype)\n\n if lib.is_np_dtype(dtype, "m"):\n if dtype == self.dtype:\n if copy:\n return self.copy()\n return self\n\n if is_supported_dtype(dtype):\n # unit conversion e.g. timedelta64[s]\n res_values = astype_overflowsafe(self._ndarray, dtype, copy=False)\n return type(self)._simple_new(\n res_values, dtype=res_values.dtype, freq=self.freq\n )\n else:\n raise ValueError(\n f"Cannot convert from {self.dtype} to {dtype}. "\n "Supported resolutions are 's', 'ms', 'us', 'ns'"\n )\n\n return dtl.DatetimeLikeArrayMixin.astype(self, dtype, copy=copy)\n\n def __iter__(self) -> Iterator:\n if self.ndim > 1:\n for i in range(len(self)):\n yield self[i]\n else:\n # convert in chunks of 10k for efficiency\n data = self._ndarray\n length = len(self)\n chunksize = 10000\n chunks = (length // chunksize) + 1\n for i in range(chunks):\n start_i = i * chunksize\n end_i = min((i + 1) * chunksize, length)\n converted = ints_to_pytimedelta(data[start_i:end_i], box=True)\n yield from converted\n\n # ----------------------------------------------------------------\n # Reductions\n\n def sum(\n self,\n *,\n axis: AxisInt | None = None,\n dtype: NpDtype | None = None,\n out=None,\n keepdims: bool = False,\n initial=None,\n skipna: bool = True,\n min_count: int = 0,\n ):\n nv.validate_sum(\n (), {"dtype": dtype, "out": out, "keepdims": keepdims, "initial": initial}\n )\n\n result = nanops.nansum(\n self._ndarray, axis=axis, skipna=skipna, min_count=min_count\n )\n return self._wrap_reduction_result(axis, result)\n\n def std(\n self,\n *,\n axis: AxisInt | None = None,\n dtype: NpDtype | None = None,\n out=None,\n ddof: int = 1,\n keepdims: bool = False,\n skipna: bool = True,\n ):\n nv.validate_stat_ddof_func(\n (), {"dtype": dtype, "out": out, "keepdims": keepdims}, fname="std"\n )\n\n result = nanops.nanstd(self._ndarray, axis=axis, skipna=skipna, ddof=ddof)\n if axis is None or self.ndim == 1:\n return self._box_func(result)\n return self._from_backing_data(result)\n\n # ----------------------------------------------------------------\n # Accumulations\n\n def _accumulate(self, name: str, *, skipna: bool = True, **kwargs):\n if name == "cumsum":\n op = getattr(datetimelike_accumulations, name)\n result = op(self._ndarray.copy(), skipna=skipna, **kwargs)\n\n return type(self)._simple_new(result, freq=None, dtype=self.dtype)\n elif name == "cumprod":\n raise TypeError("cumprod not supported for Timedelta.")\n\n else:\n return 
super()._accumulate(name, skipna=skipna, **kwargs)\n\n # ----------------------------------------------------------------\n # Rendering Methods\n\n def _formatter(self, boxed: bool = False):\n from pandas.io.formats.format import get_format_timedelta64\n\n return get_format_timedelta64(self, box=True)\n\n def _format_native_types(\n self, *, na_rep: str | float = "NaT", date_format=None, **kwargs\n ) -> npt.NDArray[np.object_]:\n from pandas.io.formats.format import get_format_timedelta64\n\n # Relies on TimeDelta._repr_base\n formatter = get_format_timedelta64(self, na_rep)\n # equiv: np.array([formatter(x) for x in self._ndarray])\n # but independent of dimension\n return np.frompyfunc(formatter, 1, 1)(self._ndarray)\n\n # ----------------------------------------------------------------\n # Arithmetic Methods\n\n def _add_offset(self, other):\n assert not isinstance(other, Tick)\n raise TypeError(\n f"cannot add the type {type(other).__name__} to a {type(self).__name__}"\n )\n\n @unpack_zerodim_and_defer("__mul__")\n def __mul__(self, other) -> Self:\n if is_scalar(other):\n # numpy will accept float and int, raise TypeError for others\n result = self._ndarray * other\n if result.dtype.kind != "m":\n # numpy >= 2.1 may not raise a TypeError\n # and seems to dispatch to others.__rmul__?\n raise TypeError(f"Cannot multiply with {type(other).__name__}")\n freq = None\n if self.freq is not None and not isna(other):\n freq = self.freq * other\n if freq.n == 0:\n # GH#51575 Better to have no freq than an incorrect one\n freq = None\n return type(self)._simple_new(result, dtype=result.dtype, freq=freq)\n\n if not hasattr(other, "dtype"):\n # list, tuple\n other = np.array(other)\n if len(other) != len(self) and not lib.is_np_dtype(other.dtype, "m"):\n # Exclude timedelta64 here so we correctly raise TypeError\n # for that instead of ValueError\n raise ValueError("Cannot multiply with unequal lengths")\n\n if is_object_dtype(other.dtype):\n # this multiplication will succeed only if all elements of other\n # are int or float scalars, so we will end up with\n # timedelta64[ns]-dtyped result\n arr = self._ndarray\n result = [arr[n] * other[n] for n in range(len(self))]\n result = np.array(result)\n return type(self)._simple_new(result, dtype=result.dtype)\n\n # numpy will accept float or int dtype, raise TypeError for others\n result = self._ndarray * other\n if result.dtype.kind != "m":\n # numpy >= 2.1 may not raise a TypeError\n # and seems to dispatch to others.__rmul__?\n raise TypeError(f"Cannot multiply with {type(other).__name__}")\n return type(self)._simple_new(result, dtype=result.dtype)\n\n __rmul__ = __mul__\n\n def _scalar_divlike_op(self, other, op):\n """\n Shared logic for __truediv__, __rtruediv__, __floordiv__, __rfloordiv__\n with scalar 'other'.\n """\n if isinstance(other, self._recognized_scalars):\n other = Timedelta(other)\n # mypy assumes that __new__ returns an instance of the class\n # github.com/python/mypy/issues/1020\n if cast("Timedelta | NaTType", other) is NaT:\n # specifically timedelta64-NaT\n res = np.empty(self.shape, dtype=np.float64)\n res.fill(np.nan)\n return res\n\n # otherwise, dispatch to Timedelta implementation\n return op(self._ndarray, other)\n\n else:\n # caller is responsible for checking lib.is_scalar(other)\n # assume other is numeric, otherwise numpy will raise\n\n if op in [roperator.rtruediv, roperator.rfloordiv]:\n raise TypeError(\n f"Cannot divide {type(other).__name__} by {type(self).__name__}"\n )\n\n result = op(self._ndarray, other)\n 
freq = None\n\n if self.freq is not None:\n # Note: freq gets division, not floor-division, even if op\n # is floordiv.\n freq = self.freq / other\n if freq.nanos == 0 and self.freq.nanos != 0:\n # e.g. if self.freq is Nano(1) then dividing by 2\n # rounds down to zero\n freq = None\n\n return type(self)._simple_new(result, dtype=result.dtype, freq=freq)\n\n def _cast_divlike_op(self, other):\n if not hasattr(other, "dtype"):\n # e.g. list, tuple\n other = np.array(other)\n\n if len(other) != len(self):\n raise ValueError("Cannot divide vectors with unequal lengths")\n return other\n\n def _vector_divlike_op(self, other, op) -> np.ndarray | Self:\n """\n Shared logic for __truediv__, __floordiv__, and their reversed versions\n with timedelta64-dtype ndarray other.\n """\n # Let numpy handle it\n result = op(self._ndarray, np.asarray(other))\n\n if (is_integer_dtype(other.dtype) or is_float_dtype(other.dtype)) and op in [\n operator.truediv,\n operator.floordiv,\n ]:\n return type(self)._simple_new(result, dtype=result.dtype)\n\n if op in [operator.floordiv, roperator.rfloordiv]:\n mask = self.isna() | isna(other)\n if mask.any():\n result = result.astype(np.float64)\n np.putmask(result, mask, np.nan)\n\n return result\n\n @unpack_zerodim_and_defer("__truediv__")\n def __truediv__(self, other):\n # timedelta / X is well-defined for timedelta-like or numeric X\n op = operator.truediv\n if is_scalar(other):\n return self._scalar_divlike_op(other, op)\n\n other = self._cast_divlike_op(other)\n if (\n lib.is_np_dtype(other.dtype, "m")\n or is_integer_dtype(other.dtype)\n or is_float_dtype(other.dtype)\n ):\n return self._vector_divlike_op(other, op)\n\n if is_object_dtype(other.dtype):\n other = np.asarray(other)\n if self.ndim > 1:\n res_cols = [left / right for left, right in zip(self, other)]\n res_cols2 = [x.reshape(1, -1) for x in res_cols]\n result = np.concatenate(res_cols2, axis=0)\n else:\n result = truediv_object_array(self._ndarray, other)\n\n return result\n\n else:\n return NotImplemented\n\n @unpack_zerodim_and_defer("__rtruediv__")\n def __rtruediv__(self, other):\n # X / timedelta is defined only for timedelta-like X\n op = roperator.rtruediv\n if is_scalar(other):\n return self._scalar_divlike_op(other, op)\n\n other = self._cast_divlike_op(other)\n if lib.is_np_dtype(other.dtype, "m"):\n return self._vector_divlike_op(other, op)\n\n elif is_object_dtype(other.dtype):\n # Note: unlike in __truediv__, we do not _need_ to do type\n # inference on the result. It does not raise, a numeric array\n # is returned. 
GH#23829\n result_list = [other[n] / self[n] for n in range(len(self))]\n return np.array(result_list)\n\n else:\n return NotImplemented\n\n @unpack_zerodim_and_defer("__floordiv__")\n def __floordiv__(self, other):\n op = operator.floordiv\n if is_scalar(other):\n return self._scalar_divlike_op(other, op)\n\n other = self._cast_divlike_op(other)\n if (\n lib.is_np_dtype(other.dtype, "m")\n or is_integer_dtype(other.dtype)\n or is_float_dtype(other.dtype)\n ):\n return self._vector_divlike_op(other, op)\n\n elif is_object_dtype(other.dtype):\n other = np.asarray(other)\n if self.ndim > 1:\n res_cols = [left // right for left, right in zip(self, other)]\n res_cols2 = [x.reshape(1, -1) for x in res_cols]\n result = np.concatenate(res_cols2, axis=0)\n else:\n result = floordiv_object_array(self._ndarray, other)\n\n assert result.dtype == object\n return result\n\n else:\n return NotImplemented\n\n @unpack_zerodim_and_defer("__rfloordiv__")\n def __rfloordiv__(self, other):\n op = roperator.rfloordiv\n if is_scalar(other):\n return self._scalar_divlike_op(other, op)\n\n other = self._cast_divlike_op(other)\n if lib.is_np_dtype(other.dtype, "m"):\n return self._vector_divlike_op(other, op)\n\n elif is_object_dtype(other.dtype):\n result_list = [other[n] // self[n] for n in range(len(self))]\n result = np.array(result_list)\n return result\n\n else:\n return NotImplemented\n\n @unpack_zerodim_and_defer("__mod__")\n def __mod__(self, other):\n # Note: This is a naive implementation, can likely be optimized\n if isinstance(other, self._recognized_scalars):\n other = Timedelta(other)\n return self - (self // other) * other\n\n @unpack_zerodim_and_defer("__rmod__")\n def __rmod__(self, other):\n # Note: This is a naive implementation, can likely be optimized\n if isinstance(other, self._recognized_scalars):\n other = Timedelta(other)\n return other - (other // self) * self\n\n @unpack_zerodim_and_defer("__divmod__")\n def __divmod__(self, other):\n # Note: This is a naive implementation, can likely be optimized\n if isinstance(other, self._recognized_scalars):\n other = Timedelta(other)\n\n res1 = self // other\n res2 = self - res1 * other\n return res1, res2\n\n @unpack_zerodim_and_defer("__rdivmod__")\n def __rdivmod__(self, other):\n # Note: This is a naive implementation, can likely be optimized\n if isinstance(other, self._recognized_scalars):\n other = Timedelta(other)\n\n res1 = other // self\n res2 = other - res1 * self\n return res1, res2\n\n def __neg__(self) -> TimedeltaArray:\n freq = None\n if self.freq is not None:\n freq = -self.freq\n return type(self)._simple_new(-self._ndarray, dtype=self.dtype, freq=freq)\n\n def __pos__(self) -> TimedeltaArray:\n return type(self)._simple_new(\n self._ndarray.copy(), dtype=self.dtype, freq=self.freq\n )\n\n def __abs__(self) -> TimedeltaArray:\n # Note: freq is not preserved\n return type(self)._simple_new(np.abs(self._ndarray), dtype=self.dtype)\n\n # ----------------------------------------------------------------\n # Conversion Methods - Vectorized analogues of Timedelta methods\n\n def total_seconds(self) -> npt.NDArray[np.float64]:\n """\n Return total duration of each element expressed in seconds.\n\n This method is available directly on TimedeltaArray, TimedeltaIndex\n and on Series containing timedelta values under the ``.dt`` namespace.\n\n Returns\n -------\n ndarray, Index or Series\n When the calling object is a TimedeltaArray, the return type\n is ndarray. 
When the calling object is a TimedeltaIndex,\n the return type is an Index with a float64 dtype. When the calling object\n is a Series, the return type is Series of type `float64` whose\n index is the same as the original.\n\n See Also\n --------\n datetime.timedelta.total_seconds : Standard library version\n of this method.\n TimedeltaIndex.components : Return a DataFrame with components of\n each Timedelta.\n\n Examples\n --------\n **Series**\n\n >>> s = pd.Series(pd.to_timedelta(np.arange(5), unit='d'))\n >>> s\n 0 0 days\n 1 1 days\n 2 2 days\n 3 3 days\n 4 4 days\n dtype: timedelta64[ns]\n\n >>> s.dt.total_seconds()\n 0 0.0\n 1 86400.0\n 2 172800.0\n 3 259200.0\n 4 345600.0\n dtype: float64\n\n **TimedeltaIndex**\n\n >>> idx = pd.to_timedelta(np.arange(5), unit='d')\n >>> idx\n TimedeltaIndex(['0 days', '1 days', '2 days', '3 days', '4 days'],\n dtype='timedelta64[ns]', freq=None)\n\n >>> idx.total_seconds()\n Index([0.0, 86400.0, 172800.0, 259200.0, 345600.0], dtype='float64')\n """\n pps = periods_per_second(self._creso)\n return self._maybe_mask_results(self.asi8 / pps, fill_value=None)\n\n def to_pytimedelta(self) -> npt.NDArray[np.object_]:\n """\n Return an ndarray of datetime.timedelta objects.\n\n Returns\n -------\n numpy.ndarray\n\n Examples\n --------\n >>> tdelta_idx = pd.to_timedelta([1, 2, 3], unit='D')\n >>> tdelta_idx\n TimedeltaIndex(['1 days', '2 days', '3 days'],\n dtype='timedelta64[ns]', freq=None)\n >>> tdelta_idx.to_pytimedelta()\n array([datetime.timedelta(days=1), datetime.timedelta(days=2),\n datetime.timedelta(days=3)], dtype=object)\n """\n return ints_to_pytimedelta(self._ndarray)\n\n days_docstring = textwrap.dedent(\n """Number of days for each element.\n\n Examples\n --------\n For Series:\n\n >>> ser = pd.Series(pd.to_timedelta([1, 2, 3], unit='d'))\n >>> ser\n 0 1 days\n 1 2 days\n 2 3 days\n dtype: timedelta64[ns]\n >>> ser.dt.days\n 0 1\n 1 2\n 2 3\n dtype: int64\n\n For TimedeltaIndex:\n\n >>> tdelta_idx = pd.to_timedelta(["0 days", "10 days", "20 days"])\n >>> tdelta_idx\n TimedeltaIndex(['0 days', '10 days', '20 days'],\n dtype='timedelta64[ns]', freq=None)\n >>> tdelta_idx.days\n Index([0, 10, 20], dtype='int64')"""\n )\n days = _field_accessor("days", "days", days_docstring)\n\n seconds_docstring = textwrap.dedent(\n """Number of seconds (>= 0 and less than 1 day) for each element.\n\n Examples\n --------\n For Series:\n\n >>> ser = pd.Series(pd.to_timedelta([1, 2, 3], unit='s'))\n >>> ser\n 0 0 days 00:00:01\n 1 0 days 00:00:02\n 2 0 days 00:00:03\n dtype: timedelta64[ns]\n >>> ser.dt.seconds\n 0 1\n 1 2\n 2 3\n dtype: int32\n\n For TimedeltaIndex:\n\n >>> tdelta_idx = pd.to_timedelta([1, 2, 3], unit='s')\n >>> tdelta_idx\n TimedeltaIndex(['0 days 00:00:01', '0 days 00:00:02', '0 days 00:00:03'],\n dtype='timedelta64[ns]', freq=None)\n >>> tdelta_idx.seconds\n Index([1, 2, 3], dtype='int32')"""\n )\n seconds = _field_accessor(\n "seconds",\n "seconds",\n seconds_docstring,\n )\n\n microseconds_docstring = textwrap.dedent(\n """Number of microseconds (>= 0 and less than 1 second) for each element.\n\n Examples\n --------\n For Series:\n\n >>> ser = pd.Series(pd.to_timedelta([1, 2, 3], unit='us'))\n >>> ser\n 0 0 days 00:00:00.000001\n 1 0 days 00:00:00.000002\n 2 0 days 00:00:00.000003\n dtype: timedelta64[ns]\n >>> ser.dt.microseconds\n 0 1\n 1 2\n 2 3\n dtype: int32\n\n For TimedeltaIndex:\n\n >>> tdelta_idx = pd.to_timedelta([1, 2, 3], unit='us')\n >>> tdelta_idx\n TimedeltaIndex(['0 days 00:00:00.000001', '0 days 00:00:00.000002',\n '0 
days 00:00:00.000003'],\n dtype='timedelta64[ns]', freq=None)\n >>> tdelta_idx.microseconds\n Index([1, 2, 3], dtype='int32')"""\n )\n microseconds = _field_accessor(\n "microseconds",\n "microseconds",\n microseconds_docstring,\n )\n\n nanoseconds_docstring = textwrap.dedent(\n """Number of nanoseconds (>= 0 and less than 1 microsecond) for each element.\n\n Examples\n --------\n For Series:\n\n >>> ser = pd.Series(pd.to_timedelta([1, 2, 3], unit='ns'))\n >>> ser\n 0 0 days 00:00:00.000000001\n 1 0 days 00:00:00.000000002\n 2 0 days 00:00:00.000000003\n dtype: timedelta64[ns]\n >>> ser.dt.nanoseconds\n 0 1\n 1 2\n 2 3\n dtype: int32\n\n For TimedeltaIndex:\n\n >>> tdelta_idx = pd.to_timedelta([1, 2, 3], unit='ns')\n >>> tdelta_idx\n TimedeltaIndex(['0 days 00:00:00.000000001', '0 days 00:00:00.000000002',\n '0 days 00:00:00.000000003'],\n dtype='timedelta64[ns]', freq=None)\n >>> tdelta_idx.nanoseconds\n Index([1, 2, 3], dtype='int32')"""\n )\n nanoseconds = _field_accessor(\n "nanoseconds",\n "nanoseconds",\n nanoseconds_docstring,\n )\n\n @property\n def components(self) -> DataFrame:\n """\n Return a DataFrame of the individual resolution components of the Timedeltas.\n\n The components (days, hours, minutes seconds, milliseconds, microseconds,\n nanoseconds) are returned as columns in a DataFrame.\n\n Returns\n -------\n DataFrame\n\n Examples\n --------\n >>> tdelta_idx = pd.to_timedelta(['1 day 3 min 2 us 42 ns'])\n >>> tdelta_idx\n TimedeltaIndex(['1 days 00:03:00.000002042'],\n dtype='timedelta64[ns]', freq=None)\n >>> tdelta_idx.components\n days hours minutes seconds milliseconds microseconds nanoseconds\n 0 1 0 3 0 0 2 42\n """\n from pandas import DataFrame\n\n columns = [\n "days",\n "hours",\n "minutes",\n "seconds",\n "milliseconds",\n "microseconds",\n "nanoseconds",\n ]\n hasnans = self._hasna\n if hasnans:\n\n def f(x):\n if isna(x):\n return [np.nan] * len(columns)\n return x.components\n\n else:\n\n def f(x):\n return x.components\n\n result = DataFrame([f(x) for x in self], columns=columns)\n if not hasnans:\n result = result.astype("int64")\n return result\n\n\n# ---------------------------------------------------------------------\n# Constructor Helpers\n\n\ndef sequence_to_td64ns(\n data,\n copy: bool = False,\n unit=None,\n errors: DateTimeErrorChoices = "raise",\n) -> tuple[np.ndarray, Tick | None]:\n """\n Parameters\n ----------\n data : list-like\n copy : bool, default False\n unit : str, optional\n The timedelta unit to treat integers as multiples of. 
For numeric\n data this defaults to ``'ns'``.\n Must be un-specified if the data contains a str and ``errors=="raise"``.\n errors : {"raise", "coerce", "ignore"}, default "raise"\n How to handle elements that cannot be converted to timedelta64[ns].\n See ``pandas.to_timedelta`` for details.\n\n Returns\n -------\n converted : numpy.ndarray\n The sequence converted to a numpy array with dtype ``timedelta64[ns]``.\n inferred_freq : Tick or None\n The inferred frequency of the sequence.\n\n Raises\n ------\n ValueError : Data cannot be converted to timedelta64[ns].\n\n Notes\n -----\n Unlike `pandas.to_timedelta`, if setting ``errors=ignore`` will not cause\n errors to be ignored; they are caught and subsequently ignored at a\n higher level.\n """\n assert unit not in ["Y", "y", "M"] # caller is responsible for checking\n\n inferred_freq = None\n if unit is not None:\n unit = parse_timedelta_unit(unit)\n\n data, copy = dtl.ensure_arraylike_for_datetimelike(\n data, copy, cls_name="TimedeltaArray"\n )\n\n if isinstance(data, TimedeltaArray):\n inferred_freq = data.freq\n\n # Convert whatever we have into timedelta64[ns] dtype\n if data.dtype == object or is_string_dtype(data.dtype):\n # no need to make a copy, need to convert if string-dtyped\n data = _objects_to_td64ns(data, unit=unit, errors=errors)\n copy = False\n\n elif is_integer_dtype(data.dtype):\n # treat as multiples of the given unit\n data, copy_made = _ints_to_td64ns(data, unit=unit)\n copy = copy and not copy_made\n\n elif is_float_dtype(data.dtype):\n # cast the unit, multiply base/frac separately\n # to avoid precision issues from float -> int\n if isinstance(data.dtype, ExtensionDtype):\n mask = data._mask\n data = data._data\n else:\n mask = np.isnan(data)\n\n data = cast_from_unit_vectorized(data, unit or "ns")\n data[mask] = iNaT\n data = data.view("m8[ns]")\n copy = False\n\n elif lib.is_np_dtype(data.dtype, "m"):\n if not is_supported_dtype(data.dtype):\n # cast to closest supported unit, i.e. s or ns\n new_dtype = get_supported_dtype(data.dtype)\n data = astype_overflowsafe(data, dtype=new_dtype, copy=False)\n copy = False\n\n else:\n # This includes datetime64-dtype, see GH#23539, GH#29794\n raise TypeError(f"dtype {data.dtype} cannot be converted to timedelta64[ns]")\n\n if not copy:\n data = np.asarray(data)\n else:\n data = np.array(data, copy=copy)\n\n assert data.dtype.kind == "m"\n assert data.dtype != "m8" # i.e. 
not unit-less\n\n return data, inferred_freq\n\n\ndef _ints_to_td64ns(data, unit: str = "ns"):\n """\n Convert an ndarray with integer-dtype to timedelta64[ns] dtype, treating\n the integers as multiples of the given timedelta unit.\n\n Parameters\n ----------\n data : numpy.ndarray with integer-dtype\n unit : str, default "ns"\n The timedelta unit to treat integers as multiples of.\n\n Returns\n -------\n numpy.ndarray : timedelta64[ns] array converted from data\n bool : whether a copy was made\n """\n copy_made = False\n unit = unit if unit is not None else "ns"\n\n if data.dtype != np.int64:\n # converting to int64 makes a copy, so we can avoid\n # re-copying later\n data = data.astype(np.int64)\n copy_made = True\n\n if unit != "ns":\n dtype_str = f"timedelta64[{unit}]"\n data = data.view(dtype_str)\n\n data = astype_overflowsafe(data, dtype=TD64NS_DTYPE)\n\n # the astype conversion makes a copy, so we can avoid re-copying later\n copy_made = True\n\n else:\n data = data.view("timedelta64[ns]")\n\n return data, copy_made\n\n\ndef _objects_to_td64ns(data, unit=None, errors: DateTimeErrorChoices = "raise"):\n """\n Convert a object-dtyped or string-dtyped array into an\n timedelta64[ns]-dtyped array.\n\n Parameters\n ----------\n data : ndarray or Index\n unit : str, default "ns"\n The timedelta unit to treat integers as multiples of.\n Must not be specified if the data contains a str.\n errors : {"raise", "coerce", "ignore"}, default "raise"\n How to handle elements that cannot be converted to timedelta64[ns].\n See ``pandas.to_timedelta`` for details.\n\n Returns\n -------\n numpy.ndarray : timedelta64[ns] array converted from data\n\n Raises\n ------\n ValueError : Data cannot be converted to timedelta64[ns].\n\n Notes\n -----\n Unlike `pandas.to_timedelta`, if setting `errors=ignore` will not cause\n errors to be ignored; they are caught and subsequently ignored at a\n higher level.\n """\n # coerce Index to np.ndarray, converting string-dtype if necessary\n values = np.asarray(data, dtype=np.object_)\n\n result = array_to_timedelta64(values, unit=unit, errors=errors)\n return result.view("timedelta64[ns]")\n\n\ndef _validate_td64_dtype(dtype) -> DtypeObj:\n dtype = pandas_dtype(dtype)\n if dtype == np.dtype("m8"):\n # no precision disallowed GH#24806\n msg = (\n "Passing in 'timedelta' dtype with no precision is not allowed. "\n "Please pass in 'timedelta64[ns]' instead."\n )\n raise ValueError(msg)\n\n if not lib.is_np_dtype(dtype, "m"):\n raise ValueError(f"dtype '{dtype}' is invalid, should be np.timedelta64 dtype")\n elif not is_supported_dtype(dtype):\n raise ValueError("Supported timedelta64 resolutions are 's', 'ms', 'us', 'ns'")\n\n return dtype\n
.venv\Lib\site-packages\pandas\core\arrays\timedeltas.py
timedeltas.py
Python
37,830
0.95
0.14346
0.093333
vue-tools
586
2023-12-27T19:20:11.271518
Apache-2.0
false
bb9d95d36ac4840f4f36005c8f1037db
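Illustrative sketch (not part of the dataset record above): the conversion, reduction, and arithmetic paths defined in pandas/core/arrays/timedeltas.py are reachable through the public pandas API. This assumes pandas is installed; the exact reprs may vary slightly by version.

import pandas as pd

# to_timedelta funnels list-like input through sequence_to_td64ns,
# producing a timedelta64[ns]-backed TimedeltaIndex.
tdi = pd.to_timedelta([1, 2, 3], unit="s")
print(tdi.dtype)            # timedelta64[ns]

# total_seconds() divides the integer view by periods-per-second,
# as in TimedeltaArray.total_seconds above.
print(tdi.total_seconds())  # Index([1.0, 2.0, 3.0], dtype='float64')

# Scalar multiplication keeps the timedelta dtype (see __mul__ above);
# division by a Timedelta scalar returns plain floats (_scalar_divlike_op).
print(tdi * 2)
print(tdi / pd.Timedelta("1s"))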
from __future__ import annotations\n\nfrom functools import partial\nimport re\nfrom typing import (\n TYPE_CHECKING,\n Any,\n Literal,\n)\n\nimport numpy as np\n\nfrom pandas._libs import lib\nfrom pandas.compat import (\n pa_version_under10p1,\n pa_version_under11p0,\n pa_version_under13p0,\n pa_version_under17p0,\n)\n\nif not pa_version_under10p1:\n import pyarrow as pa\n import pyarrow.compute as pc\n\nif TYPE_CHECKING:\n from collections.abc import Callable\n\n from pandas._typing import (\n Scalar,\n Self,\n )\n\n\nclass ArrowStringArrayMixin:\n _pa_array: pa.ChunkedArray\n\n def __init__(self, *args, **kwargs) -> None:\n raise NotImplementedError\n\n def _convert_bool_result(self, result, na=lib.no_default, method_name=None):\n # Convert a bool-dtype result to the appropriate result type\n raise NotImplementedError\n\n def _convert_int_result(self, result):\n # Convert an integer-dtype result to the appropriate result type\n raise NotImplementedError\n\n def _apply_elementwise(self, func: Callable) -> list[list[Any]]:\n raise NotImplementedError\n\n def _str_len(self):\n result = pc.utf8_length(self._pa_array)\n return self._convert_int_result(result)\n\n def _str_lower(self) -> Self:\n return type(self)(pc.utf8_lower(self._pa_array))\n\n def _str_upper(self) -> Self:\n return type(self)(pc.utf8_upper(self._pa_array))\n\n def _str_strip(self, to_strip=None) -> Self:\n if to_strip is None:\n result = pc.utf8_trim_whitespace(self._pa_array)\n else:\n result = pc.utf8_trim(self._pa_array, characters=to_strip)\n return type(self)(result)\n\n def _str_lstrip(self, to_strip=None) -> Self:\n if to_strip is None:\n result = pc.utf8_ltrim_whitespace(self._pa_array)\n else:\n result = pc.utf8_ltrim(self._pa_array, characters=to_strip)\n return type(self)(result)\n\n def _str_rstrip(self, to_strip=None) -> Self:\n if to_strip is None:\n result = pc.utf8_rtrim_whitespace(self._pa_array)\n else:\n result = pc.utf8_rtrim(self._pa_array, characters=to_strip)\n return type(self)(result)\n\n def _str_pad(\n self,\n width: int,\n side: Literal["left", "right", "both"] = "left",\n fillchar: str = " ",\n ):\n if side == "left":\n pa_pad = pc.utf8_lpad\n elif side == "right":\n pa_pad = pc.utf8_rpad\n elif side == "both":\n if pa_version_under17p0:\n # GH#59624 fall back to object dtype\n from pandas import array as pd_array\n\n obj_arr = self.astype(object, copy=False) # type: ignore[attr-defined]\n obj = pd_array(obj_arr, dtype=object)\n result = obj._str_pad(width, side, fillchar) # type: ignore[attr-defined]\n return type(self)._from_sequence(result, dtype=self.dtype) # type: ignore[attr-defined]\n else:\n # GH#54792\n # https://github.com/apache/arrow/issues/15053#issuecomment-2317032347\n lean_left = (width % 2) == 0\n pa_pad = partial(pc.utf8_center, lean_left_on_odd_padding=lean_left)\n else:\n raise ValueError(\n f"Invalid side: {side}. 
Side must be one of 'left', 'right', 'both'"\n )\n return type(self)(pa_pad(self._pa_array, width=width, padding=fillchar))\n\n def _str_get(self, i: int):\n lengths = pc.utf8_length(self._pa_array)\n if i >= 0:\n out_of_bounds = pc.greater_equal(i, lengths)\n start = i\n stop = i + 1\n step = 1\n else:\n out_of_bounds = pc.greater(-i, lengths)\n start = i\n stop = i - 1\n step = -1\n not_out_of_bounds = pc.invert(out_of_bounds.fill_null(True))\n selected = pc.utf8_slice_codeunits(\n self._pa_array, start=start, stop=stop, step=step\n )\n null_value = pa.scalar(None, type=self._pa_array.type)\n result = pc.if_else(not_out_of_bounds, selected, null_value)\n return type(self)(result)\n\n def _str_slice(\n self, start: int | None = None, stop: int | None = None, step: int | None = None\n ):\n if pa_version_under11p0:\n # GH#59724\n result = self._apply_elementwise(lambda val: val[start:stop:step])\n return type(self)(pa.chunked_array(result, type=self._pa_array.type))\n if start is None:\n if step is not None and step < 0:\n # GH#59710\n start = -1\n else:\n start = 0\n if step is None:\n step = 1\n return type(self)(\n pc.utf8_slice_codeunits(self._pa_array, start=start, stop=stop, step=step)\n )\n\n def _str_slice_replace(\n self, start: int | None = None, stop: int | None = None, repl: str | None = None\n ):\n if repl is None:\n repl = ""\n if start is None:\n start = 0\n if stop is None:\n stop = np.iinfo(np.int64).max\n return type(self)(pc.utf8_replace_slice(self._pa_array, start, stop, repl))\n\n def _str_replace(\n self,\n pat: str | re.Pattern,\n repl: str | Callable,\n n: int = -1,\n case: bool = True,\n flags: int = 0,\n regex: bool = True,\n ) -> Self:\n if isinstance(pat, re.Pattern) or callable(repl) or not case or flags:\n raise NotImplementedError(\n "replace is not supported with a re.Pattern, callable repl, "\n "case=False, or flags!=0"\n )\n\n func = pc.replace_substring_regex if regex else pc.replace_substring\n # https://github.com/apache/arrow/issues/39149\n # GH 56404, unexpected behavior with negative max_replacements with pyarrow.\n pa_max_replacements = None if n < 0 else n\n result = func(\n self._pa_array,\n pattern=pat,\n replacement=repl,\n max_replacements=pa_max_replacements,\n )\n return type(self)(result)\n\n def _str_capitalize(self) -> Self:\n return type(self)(pc.utf8_capitalize(self._pa_array))\n\n def _str_title(self):\n return type(self)(pc.utf8_title(self._pa_array))\n\n def _str_swapcase(self):\n return type(self)(pc.utf8_swapcase(self._pa_array))\n\n def _str_removeprefix(self, prefix: str):\n if not pa_version_under13p0:\n starts_with = pc.starts_with(self._pa_array, pattern=prefix)\n removed = pc.utf8_slice_codeunits(self._pa_array, len(prefix))\n result = pc.if_else(starts_with, removed, self._pa_array)\n return type(self)(result)\n predicate = lambda val: val.removeprefix(prefix)\n result = self._apply_elementwise(predicate)\n return type(self)(pa.chunked_array(result))\n\n def _str_removesuffix(self, suffix: str):\n ends_with = pc.ends_with(self._pa_array, pattern=suffix)\n removed = pc.utf8_slice_codeunits(self._pa_array, 0, stop=-len(suffix))\n result = pc.if_else(ends_with, removed, self._pa_array)\n return type(self)(result)\n\n def _str_startswith(\n self, pat: str | tuple[str, ...], na: Scalar | lib.NoDefault = lib.no_default\n ):\n if isinstance(pat, str):\n result = pc.starts_with(self._pa_array, pattern=pat)\n else:\n if len(pat) == 0:\n # For empty tuple we return null for missing values and False\n # for valid values.\n result = 
pc.if_else(pc.is_null(self._pa_array), None, False)\n else:\n result = pc.starts_with(self._pa_array, pattern=pat[0])\n\n for p in pat[1:]:\n result = pc.or_(result, pc.starts_with(self._pa_array, pattern=p))\n return self._convert_bool_result(result, na=na, method_name="startswith")\n\n def _str_endswith(\n self, pat: str | tuple[str, ...], na: Scalar | lib.NoDefault = lib.no_default\n ):\n if isinstance(pat, str):\n result = pc.ends_with(self._pa_array, pattern=pat)\n else:\n if len(pat) == 0:\n # For empty tuple we return null for missing values and False\n # for valid values.\n result = pc.if_else(pc.is_null(self._pa_array), None, False)\n else:\n result = pc.ends_with(self._pa_array, pattern=pat[0])\n\n for p in pat[1:]:\n result = pc.or_(result, pc.ends_with(self._pa_array, pattern=p))\n return self._convert_bool_result(result, na=na, method_name="endswith")\n\n def _str_isalnum(self):\n result = pc.utf8_is_alnum(self._pa_array)\n return self._convert_bool_result(result)\n\n def _str_isalpha(self):\n result = pc.utf8_is_alpha(self._pa_array)\n return self._convert_bool_result(result)\n\n def _str_isdecimal(self):\n result = pc.utf8_is_decimal(self._pa_array)\n return self._convert_bool_result(result)\n\n def _str_isdigit(self):\n result = pc.utf8_is_digit(self._pa_array)\n return self._convert_bool_result(result)\n\n def _str_islower(self):\n result = pc.utf8_is_lower(self._pa_array)\n return self._convert_bool_result(result)\n\n def _str_isnumeric(self):\n result = pc.utf8_is_numeric(self._pa_array)\n return self._convert_bool_result(result)\n\n def _str_isspace(self):\n result = pc.utf8_is_space(self._pa_array)\n return self._convert_bool_result(result)\n\n def _str_istitle(self):\n result = pc.utf8_is_title(self._pa_array)\n return self._convert_bool_result(result)\n\n def _str_isupper(self):\n result = pc.utf8_is_upper(self._pa_array)\n return self._convert_bool_result(result)\n\n def _str_contains(\n self,\n pat,\n case: bool = True,\n flags: int = 0,\n na: Scalar | lib.NoDefault = lib.no_default,\n regex: bool = True,\n ):\n if flags:\n raise NotImplementedError(f"contains not implemented with {flags=}")\n\n if regex:\n pa_contains = pc.match_substring_regex\n else:\n pa_contains = pc.match_substring\n result = pa_contains(self._pa_array, pat, ignore_case=not case)\n return self._convert_bool_result(result, na=na, method_name="contains")\n\n def _str_match(\n self,\n pat: str,\n case: bool = True,\n flags: int = 0,\n na: Scalar | lib.NoDefault = lib.no_default,\n ):\n if not pat.startswith("^"):\n pat = f"^{pat}"\n return self._str_contains(pat, case, flags, na, regex=True)\n\n def _str_fullmatch(\n self,\n pat,\n case: bool = True,\n flags: int = 0,\n na: Scalar | lib.NoDefault = lib.no_default,\n ):\n if not pat.endswith("$") or pat.endswith("\\$"):\n pat = f"{pat}$"\n return self._str_match(pat, case, flags, na)\n\n def _str_find(self, sub: str, start: int = 0, end: int | None = None):\n if (\n pa_version_under13p0\n and not (start != 0 and end is not None)\n and not (start == 0 and end is None)\n ):\n # GH#59562\n res_list = self._apply_elementwise(lambda val: val.find(sub, start, end))\n return self._convert_int_result(pa.chunked_array(res_list))\n\n if (start == 0 or start is None) and end is None:\n result = pc.find_substring(self._pa_array, sub)\n else:\n if sub == "":\n # GH#56792\n res_list = self._apply_elementwise(\n lambda val: val.find(sub, start, end)\n )\n return self._convert_int_result(pa.chunked_array(res_list))\n if start is None:\n start_offset = 0\n start 
= 0\n elif start < 0:\n start_offset = pc.add(start, pc.utf8_length(self._pa_array))\n start_offset = pc.if_else(pc.less(start_offset, 0), 0, start_offset)\n else:\n start_offset = start\n slices = pc.utf8_slice_codeunits(self._pa_array, start, stop=end)\n result = pc.find_substring(slices, sub)\n found = pc.not_equal(result, pa.scalar(-1, type=result.type))\n offset_result = pc.add(result, start_offset)\n result = pc.if_else(found, offset_result, -1)\n return self._convert_int_result(result)\n
.venv\Lib\site-packages\pandas\core\arrays\_arrow_string_mixins.py
_arrow_string_mixins.py
Python
12,560
0.95
0.205056
0.04886
react-lib
761
2023-09-18T01:16:34.799512
MIT
false
de26ee4f9ed350186f2676fdd889c68a
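Illustrative sketch (not part of the dataset record above): the ArrowStringArrayMixin methods are thin wrappers over pyarrow.compute kernels; the same kernels can be called directly on a ChunkedArray. This assumes pyarrow (>= 10) is installed.

import pyarrow as pa
import pyarrow.compute as pc

arr = pa.chunked_array([["spam", "Egg", None]])

# _str_len wraps pc.utf8_length (nulls propagate)
print(pc.utf8_length(arr))

# _str_startswith with a single pattern wraps pc.starts_with
print(pc.starts_with(arr, pattern="sp"))

# _str_slice wraps pc.utf8_slice_codeunits (start/stop/step in code units)
print(pc.utf8_slice_codeunits(arr, start=0, stop=2, step=1))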
from __future__ import annotations\n\nfrom functools import wraps\nfrom typing import (\n TYPE_CHECKING,\n Any,\n Literal,\n cast,\n overload,\n)\n\nimport numpy as np\n\nfrom pandas._libs import lib\nfrom pandas._libs.arrays import NDArrayBacked\nfrom pandas._libs.tslibs import is_supported_dtype\nfrom pandas._typing import (\n ArrayLike,\n AxisInt,\n Dtype,\n F,\n FillnaOptions,\n PositionalIndexer2D,\n PositionalIndexerTuple,\n ScalarIndexer,\n Self,\n SequenceIndexer,\n Shape,\n TakeIndexer,\n npt,\n)\nfrom pandas.errors import AbstractMethodError\nfrom pandas.util._decorators import doc\nfrom pandas.util._validators import (\n validate_bool_kwarg,\n validate_fillna_kwargs,\n validate_insert_loc,\n)\n\nfrom pandas.core.dtypes.common import pandas_dtype\nfrom pandas.core.dtypes.dtypes import (\n DatetimeTZDtype,\n ExtensionDtype,\n PeriodDtype,\n)\nfrom pandas.core.dtypes.missing import array_equivalent\n\nfrom pandas.core import missing\nfrom pandas.core.algorithms import (\n take,\n unique,\n value_counts_internal as value_counts,\n)\nfrom pandas.core.array_algos.quantile import quantile_with_mask\nfrom pandas.core.array_algos.transforms import shift\nfrom pandas.core.arrays.base import ExtensionArray\nfrom pandas.core.construction import extract_array\nfrom pandas.core.indexers import check_array_indexer\nfrom pandas.core.sorting import nargminmax\n\nif TYPE_CHECKING:\n from collections.abc import Sequence\n\n from pandas._typing import (\n NumpySorter,\n NumpyValueArrayLike,\n )\n\n from pandas import Series\n\n\ndef ravel_compat(meth: F) -> F:\n """\n Decorator to ravel a 2D array before passing it to a cython operation,\n then reshape the result to our own shape.\n """\n\n @wraps(meth)\n def method(self, *args, **kwargs):\n if self.ndim == 1:\n return meth(self, *args, **kwargs)\n\n flags = self._ndarray.flags\n flat = self.ravel("K")\n result = meth(flat, *args, **kwargs)\n order = "F" if flags.f_contiguous else "C"\n return result.reshape(self.shape, order=order)\n\n return cast(F, method)\n\n\nclass NDArrayBackedExtensionArray(NDArrayBacked, ExtensionArray):\n """\n ExtensionArray that is backed by a single NumPy ndarray.\n """\n\n _ndarray: np.ndarray\n\n # scalar used to denote NA value inside our self._ndarray, e.g. -1\n # for Categorical, iNaT for Period. Outside of object dtype,\n # self.isna() should be exactly locations in self._ndarray with\n # _internal_fill_value.\n _internal_fill_value: Any\n\n def _box_func(self, x):\n """\n Wrap numpy type in our dtype.type if necessary.\n """\n return x\n\n def _validate_scalar(self, value):\n # used by NDArrayBackedExtensionIndex.insert\n raise AbstractMethodError(self)\n\n # ------------------------------------------------------------------------\n\n def view(self, dtype: Dtype | None = None) -> ArrayLike:\n # We handle datetime64, datetime64tz, timedelta64, and period\n # dtypes here. 
Everything else we pass through to the underlying\n # ndarray.\n if dtype is None or dtype is self.dtype:\n return self._from_backing_data(self._ndarray)\n\n if isinstance(dtype, type):\n # we sometimes pass non-dtype objects, e.g np.ndarray;\n # pass those through to the underlying ndarray\n return self._ndarray.view(dtype)\n\n dtype = pandas_dtype(dtype)\n arr = self._ndarray\n\n if isinstance(dtype, PeriodDtype):\n cls = dtype.construct_array_type()\n return cls(arr.view("i8"), dtype=dtype)\n elif isinstance(dtype, DatetimeTZDtype):\n dt_cls = dtype.construct_array_type()\n dt64_values = arr.view(f"M8[{dtype.unit}]")\n return dt_cls._simple_new(dt64_values, dtype=dtype)\n elif lib.is_np_dtype(dtype, "M") and is_supported_dtype(dtype):\n from pandas.core.arrays import DatetimeArray\n\n dt64_values = arr.view(dtype)\n return DatetimeArray._simple_new(dt64_values, dtype=dtype)\n\n elif lib.is_np_dtype(dtype, "m") and is_supported_dtype(dtype):\n from pandas.core.arrays import TimedeltaArray\n\n td64_values = arr.view(dtype)\n return TimedeltaArray._simple_new(td64_values, dtype=dtype)\n\n # error: Argument "dtype" to "view" of "_ArrayOrScalarCommon" has incompatible\n # type "Union[ExtensionDtype, dtype[Any]]"; expected "Union[dtype[Any], None,\n # type, _SupportsDType, str, Union[Tuple[Any, int], Tuple[Any, Union[int,\n # Sequence[int]]], List[Any], _DTypeDict, Tuple[Any, Any]]]"\n return arr.view(dtype=dtype) # type: ignore[arg-type]\n\n def take(\n self,\n indices: TakeIndexer,\n *,\n allow_fill: bool = False,\n fill_value: Any = None,\n axis: AxisInt = 0,\n ) -> Self:\n if allow_fill:\n fill_value = self._validate_scalar(fill_value)\n\n new_data = take(\n self._ndarray,\n indices,\n allow_fill=allow_fill,\n fill_value=fill_value,\n axis=axis,\n )\n return self._from_backing_data(new_data)\n\n # ------------------------------------------------------------------------\n\n def equals(self, other) -> bool:\n if type(self) is not type(other):\n return False\n if self.dtype != other.dtype:\n return False\n return bool(array_equivalent(self._ndarray, other._ndarray, dtype_equal=True))\n\n @classmethod\n def _from_factorized(cls, values, original):\n assert values.dtype == original._ndarray.dtype\n return original._from_backing_data(values)\n\n def _values_for_argsort(self) -> np.ndarray:\n return self._ndarray\n\n def _values_for_factorize(self):\n return self._ndarray, self._internal_fill_value\n\n def _hash_pandas_object(\n self, *, encoding: str, hash_key: str, categorize: bool\n ) -> npt.NDArray[np.uint64]:\n from pandas.core.util.hashing import hash_array\n\n values = self._ndarray\n return hash_array(\n values, encoding=encoding, hash_key=hash_key, categorize=categorize\n )\n\n # Signature of "argmin" incompatible with supertype "ExtensionArray"\n def argmin(self, axis: AxisInt = 0, skipna: bool = True): # type: ignore[override]\n # override base class by adding axis keyword\n validate_bool_kwarg(skipna, "skipna")\n if not skipna and self._hasna:\n raise NotImplementedError\n return nargminmax(self, "argmin", axis=axis)\n\n # Signature of "argmax" incompatible with supertype "ExtensionArray"\n def argmax(self, axis: AxisInt = 0, skipna: bool = True): # type: ignore[override]\n # override base class by adding axis keyword\n validate_bool_kwarg(skipna, "skipna")\n if not skipna and self._hasna:\n raise NotImplementedError\n return nargminmax(self, "argmax", axis=axis)\n\n def unique(self) -> Self:\n new_data = unique(self._ndarray)\n return self._from_backing_data(new_data)\n\n 
@classmethod\n @doc(ExtensionArray._concat_same_type)\n def _concat_same_type(\n cls,\n to_concat: Sequence[Self],\n axis: AxisInt = 0,\n ) -> Self:\n if not lib.dtypes_all_equal([x.dtype for x in to_concat]):\n dtypes = {str(x.dtype) for x in to_concat}\n raise ValueError("to_concat must have the same dtype", dtypes)\n\n return super()._concat_same_type(to_concat, axis=axis)\n\n @doc(ExtensionArray.searchsorted)\n def searchsorted(\n self,\n value: NumpyValueArrayLike | ExtensionArray,\n side: Literal["left", "right"] = "left",\n sorter: NumpySorter | None = None,\n ) -> npt.NDArray[np.intp] | np.intp:\n npvalue = self._validate_setitem_value(value)\n return self._ndarray.searchsorted(npvalue, side=side, sorter=sorter)\n\n @doc(ExtensionArray.shift)\n def shift(self, periods: int = 1, fill_value=None):\n # NB: shift is always along axis=0\n axis = 0\n fill_value = self._validate_scalar(fill_value)\n new_values = shift(self._ndarray, periods, axis, fill_value)\n\n return self._from_backing_data(new_values)\n\n def __setitem__(self, key, value) -> None:\n key = check_array_indexer(self, key)\n value = self._validate_setitem_value(value)\n self._ndarray[key] = value\n\n def _validate_setitem_value(self, value):\n return value\n\n @overload\n def __getitem__(self, key: ScalarIndexer) -> Any:\n ...\n\n @overload\n def __getitem__(\n self,\n key: SequenceIndexer | PositionalIndexerTuple,\n ) -> Self:\n ...\n\n def __getitem__(\n self,\n key: PositionalIndexer2D,\n ) -> Self | Any:\n if lib.is_integer(key):\n # fast-path\n result = self._ndarray[key]\n if self.ndim == 1:\n return self._box_func(result)\n return self._from_backing_data(result)\n\n # error: Incompatible types in assignment (expression has type "ExtensionArray",\n # variable has type "Union[int, slice, ndarray]")\n key = extract_array(key, extract_numpy=True) # type: ignore[assignment]\n key = check_array_indexer(self, key)\n result = self._ndarray[key]\n if lib.is_scalar(result):\n return self._box_func(result)\n\n result = self._from_backing_data(result)\n return result\n\n def _fill_mask_inplace(\n self, method: str, limit: int | None, mask: npt.NDArray[np.bool_]\n ) -> None:\n # (for now) when self.ndim == 2, we assume axis=0\n func = missing.get_fill_func(method, ndim=self.ndim)\n func(self._ndarray.T, limit=limit, mask=mask.T)\n\n def _pad_or_backfill(\n self,\n *,\n method: FillnaOptions,\n limit: int | None = None,\n limit_area: Literal["inside", "outside"] | None = None,\n copy: bool = True,\n ) -> Self:\n mask = self.isna()\n if mask.any():\n # (for now) when self.ndim == 2, we assume axis=0\n func = missing.get_fill_func(method, ndim=self.ndim)\n\n npvalues = self._ndarray.T\n if copy:\n npvalues = npvalues.copy()\n func(npvalues, limit=limit, limit_area=limit_area, mask=mask.T)\n npvalues = npvalues.T\n\n if copy:\n new_values = self._from_backing_data(npvalues)\n else:\n new_values = self\n\n else:\n if copy:\n new_values = self.copy()\n else:\n new_values = self\n return new_values\n\n @doc(ExtensionArray.fillna)\n def fillna(\n self, value=None, method=None, limit: int | None = None, copy: bool = True\n ) -> Self:\n value, method = validate_fillna_kwargs(\n value, method, validate_scalar_dict_value=False\n )\n\n mask = self.isna()\n # error: Argument 2 to "check_value_size" has incompatible type\n # "ExtensionArray"; expected "ndarray"\n value = missing.check_value_size(\n value, mask, len(self) # type: ignore[arg-type]\n )\n\n if mask.any():\n if method is not None:\n # (for now) when self.ndim == 2, we assume 
axis=0\n func = missing.get_fill_func(method, ndim=self.ndim)\n npvalues = self._ndarray.T\n if copy:\n npvalues = npvalues.copy()\n func(npvalues, limit=limit, mask=mask.T)\n npvalues = npvalues.T\n\n # TODO: NumpyExtensionArray didn't used to copy, need tests\n # for this\n new_values = self._from_backing_data(npvalues)\n else:\n # fill with value\n if copy:\n new_values = self.copy()\n else:\n new_values = self[:]\n new_values[mask] = value\n else:\n # We validate the fill_value even if there is nothing to fill\n if value is not None:\n self._validate_setitem_value(value)\n\n if not copy:\n new_values = self[:]\n else:\n new_values = self.copy()\n return new_values\n\n # ------------------------------------------------------------------------\n # Reductions\n\n def _wrap_reduction_result(self, axis: AxisInt | None, result):\n if axis is None or self.ndim == 1:\n return self._box_func(result)\n return self._from_backing_data(result)\n\n # ------------------------------------------------------------------------\n # __array_function__ methods\n\n def _putmask(self, mask: npt.NDArray[np.bool_], value) -> None:\n """\n Analogue to np.putmask(self, mask, value)\n\n Parameters\n ----------\n mask : np.ndarray[bool]\n value : scalar or listlike\n\n Raises\n ------\n TypeError\n If value cannot be cast to self.dtype.\n """\n value = self._validate_setitem_value(value)\n\n np.putmask(self._ndarray, mask, value)\n\n def _where(self: Self, mask: npt.NDArray[np.bool_], value) -> Self:\n """\n Analogue to np.where(mask, self, value)\n\n Parameters\n ----------\n mask : np.ndarray[bool]\n value : scalar or listlike\n\n Raises\n ------\n TypeError\n If value cannot be cast to self.dtype.\n """\n value = self._validate_setitem_value(value)\n\n res_values = np.where(mask, self._ndarray, value)\n if res_values.dtype != self._ndarray.dtype:\n raise AssertionError(\n # GH#56410\n "Something has gone wrong, please report a bug at "\n "github.com/pandas-dev/pandas/"\n )\n return self._from_backing_data(res_values)\n\n # ------------------------------------------------------------------------\n # Index compat methods\n\n def insert(self, loc: int, item) -> Self:\n """\n Make new ExtensionArray inserting new item at location. 
Follows\n Python list.append semantics for negative values.\n\n Parameters\n ----------\n loc : int\n item : object\n\n Returns\n -------\n type(self)\n """\n loc = validate_insert_loc(loc, len(self))\n\n code = self._validate_scalar(item)\n\n new_vals = np.concatenate(\n (\n self._ndarray[:loc],\n np.asarray([code], dtype=self._ndarray.dtype),\n self._ndarray[loc:],\n )\n )\n return self._from_backing_data(new_vals)\n\n # ------------------------------------------------------------------------\n # Additional array methods\n # These are not part of the EA API, but we implement them because\n # pandas assumes they're there.\n\n def value_counts(self, dropna: bool = True) -> Series:\n """\n Return a Series containing counts of unique values.\n\n Parameters\n ----------\n dropna : bool, default True\n Don't include counts of NA values.\n\n Returns\n -------\n Series\n """\n if self.ndim != 1:\n raise NotImplementedError\n\n from pandas import (\n Index,\n Series,\n )\n\n if dropna:\n # error: Unsupported operand type for ~ ("ExtensionArray")\n values = self[~self.isna()]._ndarray # type: ignore[operator]\n else:\n values = self._ndarray\n\n result = value_counts(values, sort=False, dropna=dropna)\n\n index_arr = self._from_backing_data(np.asarray(result.index._data))\n index = Index(index_arr, name=result.index.name)\n return Series(result._values, index=index, name=result.name, copy=False)\n\n def _quantile(\n self,\n qs: npt.NDArray[np.float64],\n interpolation: str,\n ) -> Self:\n # TODO: disable for Categorical if not ordered?\n\n mask = np.asarray(self.isna())\n arr = self._ndarray\n fill_value = self._internal_fill_value\n\n res_values = quantile_with_mask(arr, mask, fill_value, qs, interpolation)\n if res_values.dtype == self._ndarray.dtype:\n return self._from_backing_data(res_values)\n else:\n # e.g. test_quantile_empty we are empty integer dtype and res_values\n # has floating dtype\n # TODO: technically __init__ isn't defined here.\n # Should we raise NotImplementedError and handle this on NumpyEA?\n return type(self)(res_values) # type: ignore[call-arg]\n\n # ------------------------------------------------------------------------\n # numpy-like methods\n\n @classmethod\n def _empty(cls, shape: Shape, dtype: ExtensionDtype) -> Self:\n """\n Analogous to np.empty(shape, dtype=dtype)\n\n Parameters\n ----------\n shape : tuple[int]\n dtype : ExtensionDtype\n """\n # The base implementation uses a naive approach to find the dtype\n # for the backing ndarray\n arr = cls._from_sequence([], dtype=dtype)\n backing = np.empty(shape, dtype=arr._ndarray.dtype)\n return arr._from_backing_data(backing)\n
.venv\Lib\site-packages\pandas\core\arrays\_mixins.py
_mixins.py
Python
17,406
0.95
0.147059
0.124169
awesome-app
909
2024-09-23T14:24:17.627931
GPL-3.0
false
e755d48e3a0c668a6de6e91d6f760e4e
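Illustrative sketch (not part of the dataset record above): several of the methods NDArrayBackedExtensionArray implements (take with allow_fill, shift, value_counts) can be observed through Categorical, one of its concrete subclasses. Assumes pandas is installed.

import pandas as pd

cat = pd.Categorical(["a", "b", "a", None])

# take with allow_fill=True validates the fill scalar via _validate_scalar
print(cat.take([0, -1, 1], allow_fill=True))

# shift fills the vacated slots with the validated fill value (NaN here)
print(cat.shift(1))

# value_counts drops NA by default and returns a Series indexed by categories
print(cat.value_counts(dropna=True))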
"""\nHelper functions to generate range-like data for DatetimeArray\n(and possibly TimedeltaArray/PeriodArray)\n"""\nfrom __future__ import annotations\n\nfrom typing import TYPE_CHECKING\n\nimport numpy as np\n\nfrom pandas._libs.lib import i8max\nfrom pandas._libs.tslibs import (\n BaseOffset,\n OutOfBoundsDatetime,\n Timedelta,\n Timestamp,\n iNaT,\n)\n\nif TYPE_CHECKING:\n from pandas._typing import npt\n\n\ndef generate_regular_range(\n start: Timestamp | Timedelta | None,\n end: Timestamp | Timedelta | None,\n periods: int | None,\n freq: BaseOffset,\n unit: str = "ns",\n) -> npt.NDArray[np.intp]:\n """\n Generate a range of dates or timestamps with the spans between dates\n described by the given `freq` DateOffset.\n\n Parameters\n ----------\n start : Timedelta, Timestamp or None\n First point of produced date range.\n end : Timedelta, Timestamp or None\n Last point of produced date range.\n periods : int or None\n Number of periods in produced date range.\n freq : Tick\n Describes space between dates in produced date range.\n unit : str, default "ns"\n The resolution the output is meant to represent.\n\n Returns\n -------\n ndarray[np.int64]\n Representing the given resolution.\n """\n istart = start._value if start is not None else None\n iend = end._value if end is not None else None\n freq.nanos # raises if non-fixed frequency\n td = Timedelta(freq)\n b: int\n e: int\n try:\n td = td.as_unit(unit, round_ok=False)\n except ValueError as err:\n raise ValueError(\n f"freq={freq} is incompatible with unit={unit}. "\n "Use a lower freq or a higher unit instead."\n ) from err\n stride = int(td._value)\n\n if periods is None and istart is not None and iend is not None:\n b = istart\n # cannot just use e = Timestamp(end) + 1 because arange breaks when\n # stride is too large, see GH10887\n e = b + (iend - b) // stride * stride + stride // 2 + 1\n elif istart is not None and periods is not None:\n b = istart\n e = _generate_range_overflow_safe(b, periods, stride, side="start")\n elif iend is not None and periods is not None:\n e = iend + stride\n b = _generate_range_overflow_safe(e, periods, stride, side="end")\n else:\n raise ValueError(\n "at least 'start' or 'end' should be specified if a 'period' is given."\n )\n\n with np.errstate(over="raise"):\n # If the range is sufficiently large, np.arange may overflow\n # and incorrectly return an empty array if not caught.\n try:\n values = np.arange(b, e, stride, dtype=np.int64)\n except FloatingPointError:\n xdr = [b]\n while xdr[-1] != e:\n xdr.append(xdr[-1] + stride)\n values = np.array(xdr[:-1], dtype=np.int64)\n return values\n\n\ndef _generate_range_overflow_safe(\n endpoint: int, periods: int, stride: int, side: str = "start"\n) -> int:\n """\n Calculate the second endpoint for passing to np.arange, checking\n to avoid an integer overflow. 
Catch OverflowError and re-raise\n as OutOfBoundsDatetime.\n\n Parameters\n ----------\n endpoint : int\n nanosecond timestamp of the known endpoint of the desired range\n periods : int\n number of periods in the desired range\n stride : int\n nanoseconds between periods in the desired range\n side : {'start', 'end'}\n which end of the range `endpoint` refers to\n\n Returns\n -------\n other_end : int\n\n Raises\n ------\n OutOfBoundsDatetime\n """\n # GH#14187 raise instead of incorrectly wrapping around\n assert side in ["start", "end"]\n\n i64max = np.uint64(i8max)\n msg = f"Cannot generate range with {side}={endpoint} and periods={periods}"\n\n with np.errstate(over="raise"):\n # if periods * strides cannot be multiplied within the *uint64* bounds,\n # we cannot salvage the operation by recursing, so raise\n try:\n addend = np.uint64(periods) * np.uint64(np.abs(stride))\n except FloatingPointError as err:\n raise OutOfBoundsDatetime(msg) from err\n\n if np.abs(addend) <= i64max:\n # relatively easy case without casting concerns\n return _generate_range_overflow_safe_signed(endpoint, periods, stride, side)\n\n elif (endpoint > 0 and side == "start" and stride > 0) or (\n endpoint < 0 < stride and side == "end"\n ):\n # no chance of not-overflowing\n raise OutOfBoundsDatetime(msg)\n\n elif side == "end" and endpoint - stride <= i64max < endpoint:\n # in _generate_regular_range we added `stride` thereby overflowing\n # the bounds. Adjust to fix this.\n return _generate_range_overflow_safe(\n endpoint - stride, periods - 1, stride, side\n )\n\n # split into smaller pieces\n mid_periods = periods // 2\n remaining = periods - mid_periods\n assert 0 < remaining < periods, (remaining, periods, endpoint, stride)\n\n midpoint = int(_generate_range_overflow_safe(endpoint, mid_periods, stride, side))\n return _generate_range_overflow_safe(midpoint, remaining, stride, side)\n\n\ndef _generate_range_overflow_safe_signed(\n endpoint: int, periods: int, stride: int, side: str\n) -> int:\n """\n A special case for _generate_range_overflow_safe where `periods * stride`\n can be calculated without overflowing int64 bounds.\n """\n assert side in ["start", "end"]\n if side == "end":\n stride *= -1\n\n with np.errstate(over="raise"):\n addend = np.int64(periods) * np.int64(stride)\n try:\n # easy case with no overflows\n result = np.int64(endpoint) + addend\n if result == iNaT:\n # Putting this into a DatetimeArray/TimedeltaArray\n # would incorrectly be interpreted as NaT\n raise OverflowError\n return int(result)\n except (FloatingPointError, OverflowError):\n # with endpoint negative and addend positive we risk\n # FloatingPointError; with reversed signed we risk OverflowError\n pass\n\n # if stride and endpoint had opposite signs, then endpoint + addend\n # should never overflow. so they must have the same signs\n assert (stride > 0 and endpoint >= 0) or (stride < 0 and endpoint <= 0)\n\n if stride > 0:\n # watch out for very special case in which we just slightly\n # exceed implementation bounds, but when passing the result to\n # np.arange will get a result slightly within the bounds\n\n uresult = np.uint64(endpoint) + np.uint64(addend)\n i64max = np.uint64(i8max)\n assert uresult > i64max\n if uresult <= i64max + np.uint64(stride):\n return int(uresult)\n\n raise OutOfBoundsDatetime(\n f"Cannot generate range with {side}={endpoint} and periods={periods}"\n )\n
.venv\Lib\site-packages\pandas\core\arrays\_ranges.py
_ranges.py
Python
6,996
0.95
0.125604
0.123596
python-kit
803
2023-12-29T12:50:08.785502
BSD-3-Clause
false
5407611f99289d50786b4839609c221c
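Illustrative sketch (not part of the dataset record above): generate_regular_range above backs fixed-frequency ranges for both datetimes and timedeltas; the public range constructors reach it whenever freq is a fixed (Tick-like) offset, and the timedelta path falls back to np.linspace when freq is omitted. Assumes pandas is installed.

import pandas as pd

# Fixed frequency -> regular int64 spacing computed by generate_regular_range
print(pd.timedelta_range(start="0s", periods=4, freq="s"))
print(pd.date_range(start="2024-01-01", periods=3, freq="h"))

# Without freq, the timedelta path uses np.linspace and the result has freq=None
print(pd.timedelta_range(start="0s", end="3s", periods=4))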
from __future__ import annotations\n\nfrom typing import (\n TYPE_CHECKING,\n Any,\n)\n\nimport numpy as np\n\nfrom pandas._libs import lib\nfrom pandas.errors import LossySetitemError\n\nfrom pandas.core.dtypes.cast import np_can_hold_element\nfrom pandas.core.dtypes.common import is_numeric_dtype\n\nif TYPE_CHECKING:\n from pandas._typing import (\n ArrayLike,\n npt,\n )\n\n\ndef to_numpy_dtype_inference(\n arr: ArrayLike, dtype: npt.DTypeLike | None, na_value, hasna: bool\n) -> tuple[npt.DTypeLike, Any]:\n if dtype is None and is_numeric_dtype(arr.dtype):\n dtype_given = False\n if hasna:\n if arr.dtype.kind == "b":\n dtype = np.dtype(np.object_)\n else:\n if arr.dtype.kind in "iu":\n dtype = np.dtype(np.float64)\n else:\n dtype = arr.dtype.numpy_dtype # type: ignore[union-attr]\n if na_value is lib.no_default:\n na_value = np.nan\n else:\n dtype = arr.dtype.numpy_dtype # type: ignore[union-attr]\n elif dtype is not None:\n dtype = np.dtype(dtype)\n dtype_given = True\n else:\n dtype_given = True\n\n if na_value is lib.no_default:\n if dtype is None or not hasna:\n na_value = arr.dtype.na_value\n elif dtype.kind == "f": # type: ignore[union-attr]\n na_value = np.nan\n elif dtype.kind == "M": # type: ignore[union-attr]\n na_value = np.datetime64("nat")\n elif dtype.kind == "m": # type: ignore[union-attr]\n na_value = np.timedelta64("nat")\n else:\n na_value = arr.dtype.na_value\n\n if not dtype_given and hasna:\n try:\n np_can_hold_element(dtype, na_value) # type: ignore[arg-type]\n except LossySetitemError:\n dtype = np.dtype(np.object_)\n return dtype, na_value\n
.venv\Lib\site-packages\pandas\core\arrays\_utils.py
_utils.py
Python
1,901
0.95
0.174603
0
awesome-app
34
2025-04-06T14:19:41.541254
BSD-3-Clause
false
59ab52a536a55b20267590ee64ff5d45
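Illustrative sketch (not part of the dataset record above): the dtype and na_value inference performed by to_numpy_dtype_inference can be observed through the masked arrays' to_numpy method. Assumes pandas is installed; behavior matches the branching shown in the file above.

import pandas as pd

ints = pd.array([1, 2, None], dtype="Int64")

# With missing values and no explicit dtype, integer data is upcast to
# float64 and NA becomes np.nan (the 'iu' branch above)
print(ints.to_numpy().dtype)                                 # float64

# Without missing values, the masked dtype's numpy_dtype is kept
print(pd.array([1, 2, 3], dtype="Int64").to_numpy().dtype)   # int64

# Passing an explicit dtype and na_value bypasses the inference
print(ints.to_numpy(dtype="float64", na_value=-1.0))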
from pandas.core.arrays.arrow import ArrowExtensionArray\nfrom pandas.core.arrays.base import (\n ExtensionArray,\n ExtensionOpsMixin,\n ExtensionScalarOpsMixin,\n)\nfrom pandas.core.arrays.boolean import BooleanArray\nfrom pandas.core.arrays.categorical import Categorical\nfrom pandas.core.arrays.datetimes import DatetimeArray\nfrom pandas.core.arrays.floating import FloatingArray\nfrom pandas.core.arrays.integer import IntegerArray\nfrom pandas.core.arrays.interval import IntervalArray\nfrom pandas.core.arrays.masked import BaseMaskedArray\nfrom pandas.core.arrays.numpy_ import NumpyExtensionArray\nfrom pandas.core.arrays.period import (\n PeriodArray,\n period_array,\n)\nfrom pandas.core.arrays.sparse import SparseArray\nfrom pandas.core.arrays.string_ import StringArray\nfrom pandas.core.arrays.string_arrow import ArrowStringArray\nfrom pandas.core.arrays.timedeltas import TimedeltaArray\n\n__all__ = [\n "ArrowExtensionArray",\n "ExtensionArray",\n "ExtensionOpsMixin",\n "ExtensionScalarOpsMixin",\n "ArrowStringArray",\n "BaseMaskedArray",\n "BooleanArray",\n "Categorical",\n "DatetimeArray",\n "FloatingArray",\n "IntegerArray",\n "IntervalArray",\n "NumpyExtensionArray",\n "PeriodArray",\n "period_array",\n "SparseArray",\n "StringArray",\n "TimedeltaArray",\n]\n
.venv\Lib\site-packages\pandas\core\arrays\__init__.py
__init__.py
Python
1,314
0.85
0
0
vue-tools
4
2023-08-16T04:18:46.607480
Apache-2.0
false
4987e93188e8c9ce5e09e861b3d92fb2
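Illustrative sketch (not part of the dataset record above): the classes re-exported by pandas.core.arrays.__init__ are what pd.array() returns for the matching dtypes; most are also exposed publicly under the pandas.arrays namespace. Assumes pandas is installed.

import pandas as pd

print(type(pd.array([1, 2, None], dtype="Int64")))      # IntegerArray
print(type(pd.array(["a", "b"], dtype="category")))     # Categorical
print(type(pd.array(pd.to_timedelta(["1s", "2s"]))))    # TimedeltaArray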
"""Accessors for arrow-backed data."""\n\nfrom __future__ import annotations\n\nfrom abc import (\n ABCMeta,\n abstractmethod,\n)\nfrom typing import (\n TYPE_CHECKING,\n cast,\n)\n\nfrom pandas.compat import (\n pa_version_under10p1,\n pa_version_under11p0,\n)\n\nfrom pandas.core.dtypes.common import is_list_like\n\nif not pa_version_under10p1:\n import pyarrow as pa\n import pyarrow.compute as pc\n\n from pandas.core.dtypes.dtypes import ArrowDtype\n\nif TYPE_CHECKING:\n from collections.abc import Iterator\n\n from pandas import (\n DataFrame,\n Series,\n )\n\n\nclass ArrowAccessor(metaclass=ABCMeta):\n @abstractmethod\n def __init__(self, data, validation_msg: str) -> None:\n self._data = data\n self._validation_msg = validation_msg\n self._validate(data)\n\n @abstractmethod\n def _is_valid_pyarrow_dtype(self, pyarrow_dtype) -> bool:\n pass\n\n def _validate(self, data):\n dtype = data.dtype\n if pa_version_under10p1 or not isinstance(dtype, ArrowDtype):\n # Raise AttributeError so that inspect can handle non-struct Series.\n raise AttributeError(self._validation_msg.format(dtype=dtype))\n\n if not self._is_valid_pyarrow_dtype(dtype.pyarrow_dtype):\n # Raise AttributeError so that inspect can handle invalid Series.\n raise AttributeError(self._validation_msg.format(dtype=dtype))\n\n @property\n def _pa_array(self):\n return self._data.array._pa_array\n\n\nclass ListAccessor(ArrowAccessor):\n """\n Accessor object for list data properties of the Series values.\n\n Parameters\n ----------\n data : Series\n Series containing Arrow list data.\n """\n\n def __init__(self, data=None) -> None:\n super().__init__(\n data,\n validation_msg="Can only use the '.list' accessor with "\n "'list[pyarrow]' dtype, not {dtype}.",\n )\n\n def _is_valid_pyarrow_dtype(self, pyarrow_dtype) -> bool:\n return (\n pa.types.is_list(pyarrow_dtype)\n or pa.types.is_fixed_size_list(pyarrow_dtype)\n or pa.types.is_large_list(pyarrow_dtype)\n )\n\n def len(self) -> Series:\n """\n Return the length of each list in the Series.\n\n Returns\n -------\n pandas.Series\n The length of each list.\n\n Examples\n --------\n >>> import pyarrow as pa\n >>> s = pd.Series(\n ... [\n ... [1, 2, 3],\n ... [3],\n ... ],\n ... dtype=pd.ArrowDtype(pa.list_(\n ... pa.int64()\n ... ))\n ... )\n >>> s.list.len()\n 0 3\n 1 1\n dtype: int32[pyarrow]\n """\n from pandas import Series\n\n value_lengths = pc.list_value_length(self._pa_array)\n return Series(value_lengths, dtype=ArrowDtype(value_lengths.type))\n\n def __getitem__(self, key: int | slice) -> Series:\n """\n Index or slice lists in the Series.\n\n Parameters\n ----------\n key : int | slice\n Index or slice of indices to access from each list.\n\n Returns\n -------\n pandas.Series\n The list at requested index.\n\n Examples\n --------\n >>> import pyarrow as pa\n >>> s = pd.Series(\n ... [\n ... [1, 2, 3],\n ... [3],\n ... ],\n ... dtype=pd.ArrowDtype(pa.list_(\n ... pa.int64()\n ... ))\n ... 
)\n >>> s.list[0]\n 0 1\n 1 3\n dtype: int64[pyarrow]\n """\n from pandas import Series\n\n if isinstance(key, int):\n # TODO: Support negative key but pyarrow does not allow\n # element index to be an array.\n # if key < 0:\n # key = pc.add(key, pc.list_value_length(self._pa_array))\n element = pc.list_element(self._pa_array, key)\n return Series(element, dtype=ArrowDtype(element.type))\n elif isinstance(key, slice):\n if pa_version_under11p0:\n raise NotImplementedError(\n f"List slice not supported by pyarrow {pa.__version__}."\n )\n\n # TODO: Support negative start/stop/step, ideally this would be added\n # upstream in pyarrow.\n start, stop, step = key.start, key.stop, key.step\n if start is None:\n # TODO: When adding negative step support\n # this should be setto last element of array\n # when step is negative.\n start = 0\n if step is None:\n step = 1\n sliced = pc.list_slice(self._pa_array, start, stop, step)\n return Series(sliced, dtype=ArrowDtype(sliced.type))\n else:\n raise ValueError(f"key must be an int or slice, got {type(key).__name__}")\n\n def __iter__(self) -> Iterator:\n raise TypeError(f"'{type(self).__name__}' object is not iterable")\n\n def flatten(self) -> Series:\n """\n Flatten list values.\n\n Returns\n -------\n pandas.Series\n The data from all lists in the series flattened.\n\n Examples\n --------\n >>> import pyarrow as pa\n >>> s = pd.Series(\n ... [\n ... [1, 2, 3],\n ... [3],\n ... ],\n ... dtype=pd.ArrowDtype(pa.list_(\n ... pa.int64()\n ... ))\n ... )\n >>> s.list.flatten()\n 0 1\n 1 2\n 2 3\n 3 3\n dtype: int64[pyarrow]\n """\n from pandas import Series\n\n flattened = pc.list_flatten(self._pa_array)\n return Series(flattened, dtype=ArrowDtype(flattened.type))\n\n\nclass StructAccessor(ArrowAccessor):\n """\n Accessor object for structured data properties of the Series values.\n\n Parameters\n ----------\n data : Series\n Series containing Arrow struct data.\n """\n\n def __init__(self, data=None) -> None:\n super().__init__(\n data,\n validation_msg=(\n "Can only use the '.struct' accessor with 'struct[pyarrow]' "\n "dtype, not {dtype}."\n ),\n )\n\n def _is_valid_pyarrow_dtype(self, pyarrow_dtype) -> bool:\n return pa.types.is_struct(pyarrow_dtype)\n\n @property\n def dtypes(self) -> Series:\n """\n Return the dtype object of each child field of the struct.\n\n Returns\n -------\n pandas.Series\n The data type of each child field.\n\n Examples\n --------\n >>> import pyarrow as pa\n >>> s = pd.Series(\n ... [\n ... {"version": 1, "project": "pandas"},\n ... {"version": 2, "project": "pandas"},\n ... {"version": 1, "project": "numpy"},\n ... ],\n ... dtype=pd.ArrowDtype(pa.struct(\n ... [("version", pa.int64()), ("project", pa.string())]\n ... ))\n ... 
)\n >>> s.struct.dtypes\n version int64[pyarrow]\n project string[pyarrow]\n dtype: object\n """\n from pandas import (\n Index,\n Series,\n )\n\n pa_type = self._data.dtype.pyarrow_dtype\n types = [ArrowDtype(struct.type) for struct in pa_type]\n names = [struct.name for struct in pa_type]\n return Series(types, index=Index(names))\n\n def field(\n self,\n name_or_index: list[str]\n | list[bytes]\n | list[int]\n | pc.Expression\n | bytes\n | str\n | int,\n ) -> Series:\n """\n Extract a child field of a struct as a Series.\n\n Parameters\n ----------\n name_or_index : str | bytes | int | expression | list\n Name or index of the child field to extract.\n\n For list-like inputs, this will index into a nested\n struct.\n\n Returns\n -------\n pandas.Series\n The data corresponding to the selected child field.\n\n See Also\n --------\n Series.struct.explode : Return all child fields as a DataFrame.\n\n Notes\n -----\n The name of the resulting Series will be set using the following\n rules:\n\n - For string, bytes, or integer `name_or_index` (or a list of these, for\n a nested selection), the Series name is set to the selected\n field's name.\n - For a :class:`pyarrow.compute.Expression`, this is set to\n the string form of the expression.\n - For list-like `name_or_index`, the name will be set to the\n name of the final field selected.\n\n Examples\n --------\n >>> import pyarrow as pa\n >>> s = pd.Series(\n ... [\n ... {"version": 1, "project": "pandas"},\n ... {"version": 2, "project": "pandas"},\n ... {"version": 1, "project": "numpy"},\n ... ],\n ... dtype=pd.ArrowDtype(pa.struct(\n ... [("version", pa.int64()), ("project", pa.string())]\n ... ))\n ... )\n\n Extract by field name.\n\n >>> s.struct.field("project")\n 0 pandas\n 1 pandas\n 2 numpy\n Name: project, dtype: string[pyarrow]\n\n Extract by field index.\n\n >>> s.struct.field(0)\n 0 1\n 1 2\n 2 1\n Name: version, dtype: int64[pyarrow]\n\n Or an expression\n\n >>> import pyarrow.compute as pc\n >>> s.struct.field(pc.field("project"))\n 0 pandas\n 1 pandas\n 2 numpy\n Name: project, dtype: string[pyarrow]\n\n For nested struct types, you can pass a list of values to index\n multiple levels:\n\n >>> version_type = pa.struct([\n ... ("major", pa.int64()),\n ... ("minor", pa.int64()),\n ... ])\n >>> s = pd.Series(\n ... [\n ... {"version": {"major": 1, "minor": 5}, "project": "pandas"},\n ... {"version": {"major": 2, "minor": 1}, "project": "pandas"},\n ... {"version": {"major": 1, "minor": 26}, "project": "numpy"},\n ... ],\n ... dtype=pd.ArrowDtype(pa.struct(\n ... [("version", version_type), ("project", pa.string())]\n ... ))\n ... )\n >>> s.struct.field(["version", "minor"])\n 0 5\n 1 1\n 2 26\n Name: minor, dtype: int64[pyarrow]\n >>> s.struct.field([0, 0])\n 0 1\n 1 2\n 2 1\n Name: major, dtype: int64[pyarrow]\n """\n from pandas import Series\n\n def get_name(\n level_name_or_index: list[str]\n | list[bytes]\n | list[int]\n | pc.Expression\n | bytes\n | str\n | int,\n data: pa.ChunkedArray,\n ):\n if isinstance(level_name_or_index, int):\n name = data.type.field(level_name_or_index).name\n elif isinstance(level_name_or_index, (str, bytes)):\n name = level_name_or_index\n elif isinstance(level_name_or_index, pc.Expression):\n name = str(level_name_or_index)\n elif is_list_like(level_name_or_index):\n # For nested input like [2, 1, 2]\n # iteratively get the struct and field name. 
The last\n # one is used for the name of the index.\n level_name_or_index = list(reversed(level_name_or_index))\n selected = data\n while level_name_or_index:\n # we need the cast, otherwise mypy complains about\n # getting ints, bytes, or str here, which isn't possible.\n level_name_or_index = cast(list, level_name_or_index)\n name_or_index = level_name_or_index.pop()\n name = get_name(name_or_index, selected)\n selected = selected.type.field(selected.type.get_field_index(name))\n name = selected.name\n else:\n raise ValueError(\n "name_or_index must be an int, str, bytes, "\n "pyarrow.compute.Expression, or list of those"\n )\n return name\n\n pa_arr = self._data.array._pa_array\n name = get_name(name_or_index, pa_arr)\n field_arr = pc.struct_field(pa_arr, name_or_index)\n\n return Series(\n field_arr,\n dtype=ArrowDtype(field_arr.type),\n index=self._data.index,\n name=name,\n )\n\n def explode(self) -> DataFrame:\n """\n Extract all child fields of a struct as a DataFrame.\n\n Returns\n -------\n pandas.DataFrame\n The data corresponding to all child fields.\n\n See Also\n --------\n Series.struct.field : Return a single child field as a Series.\n\n Examples\n --------\n >>> import pyarrow as pa\n >>> s = pd.Series(\n ... [\n ... {"version": 1, "project": "pandas"},\n ... {"version": 2, "project": "pandas"},\n ... {"version": 1, "project": "numpy"},\n ... ],\n ... dtype=pd.ArrowDtype(pa.struct(\n ... [("version", pa.int64()), ("project", pa.string())]\n ... ))\n ... )\n\n >>> s.struct.explode()\n version project\n 0 1 pandas\n 1 2 pandas\n 2 1 numpy\n """\n from pandas import concat\n\n pa_type = self._pa_array.type\n return concat(\n [self.field(i) for i in range(pa_type.num_fields)], axis="columns"\n )\n
.venv\Lib\site-packages\pandas\core\arrays\arrow\accessors.py
accessors.py
Python
13,887
0.95
0.082452
0.039506
react-lib
802
2025-03-14T10:22:33.252374
BSD-3-Clause
false
7b99c6a54dae5d8235923dbdaf0a0373
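The accessors.py record above defines the .list and .struct Series accessors for Arrow-backed data. A short usage sketch consolidating its docstring examples (assumes pandas >= 2.2 with pyarrow installed, since both accessors require ArrowDtype columns):

# Exercise ListAccessor and StructAccessor on ArrowDtype Series.
import pandas as pd
import pyarrow as pa

lists = pd.Series([[1, 2, 3], [4]], dtype=pd.ArrowDtype(pa.list_(pa.int64())))
print(lists.list.len())      # per-row list lengths: 3 and 1
print(lists.list[0])         # first element of each list: 1 and 4
print(lists.list.flatten())  # all values concatenated: 1, 2, 3, 4

structs = pd.Series(
    [{"version": 1, "project": "pandas"}, {"version": 2, "project": "numpy"}],
    dtype=pd.ArrowDtype(
        pa.struct([("version", pa.int64()), ("project", pa.string())])
    ),
)
print(structs.struct.field("project"))  # one child field as a Series
print(structs.struct.explode())         # every child field as a DataFrame

Using either accessor on a non-Arrow Series raises AttributeError from _validate, which is what lets inspect and tab completion skip the accessor on unsupported dtypes.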
from __future__ import annotations\n\nimport json\nfrom typing import TYPE_CHECKING\n\nimport pyarrow\n\nfrom pandas.compat import pa_version_under14p1\n\nfrom pandas.core.dtypes.dtypes import (\n IntervalDtype,\n PeriodDtype,\n)\n\nfrom pandas.core.arrays.interval import VALID_CLOSED\n\nif TYPE_CHECKING:\n from pandas._typing import IntervalClosedType\n\n\nclass ArrowPeriodType(pyarrow.ExtensionType):\n def __init__(self, freq) -> None:\n # attributes need to be set first before calling\n # super init (as that calls serialize)\n self._freq = freq\n pyarrow.ExtensionType.__init__(self, pyarrow.int64(), "pandas.period")\n\n @property\n def freq(self):\n return self._freq\n\n def __arrow_ext_serialize__(self) -> bytes:\n metadata = {"freq": self.freq}\n return json.dumps(metadata).encode()\n\n @classmethod\n def __arrow_ext_deserialize__(cls, storage_type, serialized) -> ArrowPeriodType:\n metadata = json.loads(serialized.decode())\n return ArrowPeriodType(metadata["freq"])\n\n def __eq__(self, other):\n if isinstance(other, pyarrow.BaseExtensionType):\n return type(self) == type(other) and self.freq == other.freq\n else:\n return NotImplemented\n\n def __ne__(self, other) -> bool:\n return not self == other\n\n def __hash__(self) -> int:\n return hash((str(self), self.freq))\n\n def to_pandas_dtype(self) -> PeriodDtype:\n return PeriodDtype(freq=self.freq)\n\n\n# register the type with a dummy instance\n_period_type = ArrowPeriodType("D")\npyarrow.register_extension_type(_period_type)\n\n\nclass ArrowIntervalType(pyarrow.ExtensionType):\n def __init__(self, subtype, closed: IntervalClosedType) -> None:\n # attributes need to be set first before calling\n # super init (as that calls serialize)\n assert closed in VALID_CLOSED\n self._closed: IntervalClosedType = closed\n if not isinstance(subtype, pyarrow.DataType):\n subtype = pyarrow.type_for_alias(str(subtype))\n self._subtype = subtype\n\n storage_type = pyarrow.struct([("left", subtype), ("right", subtype)])\n pyarrow.ExtensionType.__init__(self, storage_type, "pandas.interval")\n\n @property\n def subtype(self):\n return self._subtype\n\n @property\n def closed(self) -> IntervalClosedType:\n return self._closed\n\n def __arrow_ext_serialize__(self) -> bytes:\n metadata = {"subtype": str(self.subtype), "closed": self.closed}\n return json.dumps(metadata).encode()\n\n @classmethod\n def __arrow_ext_deserialize__(cls, storage_type, serialized) -> ArrowIntervalType:\n metadata = json.loads(serialized.decode())\n subtype = pyarrow.type_for_alias(metadata["subtype"])\n closed = metadata["closed"]\n return ArrowIntervalType(subtype, closed)\n\n def __eq__(self, other):\n if isinstance(other, pyarrow.BaseExtensionType):\n return (\n type(self) == type(other)\n and self.subtype == other.subtype\n and self.closed == other.closed\n )\n else:\n return NotImplemented\n\n def __ne__(self, other) -> bool:\n return not self == other\n\n def __hash__(self) -> int:\n return hash((str(self), str(self.subtype), self.closed))\n\n def to_pandas_dtype(self) -> IntervalDtype:\n return IntervalDtype(self.subtype.to_pandas_dtype(), self.closed)\n\n\n# register the type with a dummy instance\n_interval_type = ArrowIntervalType(pyarrow.int64(), "left")\npyarrow.register_extension_type(_interval_type)\n\n\n_ERROR_MSG = """\\nDisallowed deserialization of 'arrow.py_extension_type':\nstorage_type = {storage_type}\nserialized = {serialized}\npickle disassembly:\n{pickle_disassembly}\n\nReading of untrusted Parquet or Feather files with a PyExtensionType column\nallows 
arbitrary code execution.\nIf you trust this file, you can enable reading the extension type by one of:\n\n- upgrading to pyarrow >= 14.0.1, and call `pa.PyExtensionType.set_auto_load(True)`\n- install pyarrow-hotfix (`pip install pyarrow-hotfix`) and disable it by running\n `import pyarrow_hotfix; pyarrow_hotfix.uninstall()`\n\nWe strongly recommend updating your Parquet/Feather files to use extension types\nderived from `pyarrow.ExtensionType` instead, and register this type explicitly.\n"""\n\n\ndef patch_pyarrow():\n # starting from pyarrow 14.0.1, it has its own mechanism\n if not pa_version_under14p1:\n return\n\n # if https://github.com/pitrou/pyarrow-hotfix was installed and enabled\n if getattr(pyarrow, "_hotfix_installed", False):\n return\n\n class ForbiddenExtensionType(pyarrow.ExtensionType):\n def __arrow_ext_serialize__(self):\n return b""\n\n @classmethod\n def __arrow_ext_deserialize__(cls, storage_type, serialized):\n import io\n import pickletools\n\n out = io.StringIO()\n pickletools.dis(serialized, out)\n raise RuntimeError(\n _ERROR_MSG.format(\n storage_type=storage_type,\n serialized=serialized,\n pickle_disassembly=out.getvalue(),\n )\n )\n\n pyarrow.unregister_extension_type("arrow.py_extension_type")\n pyarrow.register_extension_type(\n ForbiddenExtensionType(pyarrow.null(), "arrow.py_extension_type")\n )\n\n pyarrow._hotfix_installed = True\n\n\npatch_pyarrow()\n
.venv\Lib\site-packages\pandas\core\arrays\arrow\extension_types.py
extension_types.py
Python
5,459
0.95
0.172414
0.062016
node-utils
983
2024-01-11T02:06:05.323671
Apache-2.0
false
09e796bc00dadaa6523b3d1123d6de31
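The extension_types.py record registers the "pandas.period" and "pandas.interval" Arrow extension types and, on pyarrow older than 14.0.1, patches away the unsafe "arrow.py_extension_type" deserializer. A hedged round-trip sketch (assumes pandas plus a pyarrow version that supports extension types; the exact printed type repr varies by pyarrow version):

# Period data survives a pandas -> Arrow -> pandas round trip because the
# __arrow_ext_serialize__ / __arrow_ext_deserialize__ hooks above carry the freq.
import pandas as pd
import pyarrow as pa

df = pd.DataFrame({"when": pd.period_range("2024-01", periods=3, freq="M")})

table = pa.Table.from_pandas(df)        # stored with the "pandas.period" type
print(table.schema.field("when").type)  # an extension type wrapping int64

restored = table.to_pandas()            # ArrowPeriodType.to_pandas_dtype()
print(restored["when"].dtype)           # period[M]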
from __future__ import annotations\n\nimport numpy as np\nimport pyarrow\n\n\ndef pyarrow_array_to_numpy_and_mask(\n arr, dtype: np.dtype\n) -> tuple[np.ndarray, np.ndarray]:\n """\n Convert a primitive pyarrow.Array to a numpy array and boolean mask based\n on the buffers of the Array.\n\n At the moment pyarrow.BooleanArray is not supported.\n\n Parameters\n ----------\n arr : pyarrow.Array\n dtype : numpy.dtype\n\n Returns\n -------\n (data, mask)\n Tuple of two numpy arrays with the raw data (with specified dtype) and\n a boolean mask (validity mask, so False means missing)\n """\n dtype = np.dtype(dtype)\n\n if pyarrow.types.is_null(arr.type):\n # No initialization of data is needed since everything is null\n data = np.empty(len(arr), dtype=dtype)\n mask = np.zeros(len(arr), dtype=bool)\n return data, mask\n buflist = arr.buffers()\n # Since Arrow buffers might contain padding and the data might be offset,\n # the buffer gets sliced here before handing it to numpy.\n # See also https://github.com/pandas-dev/pandas/issues/40896\n offset = arr.offset * dtype.itemsize\n length = len(arr) * dtype.itemsize\n data_buf = buflist[1][offset : offset + length]\n data = np.frombuffer(data_buf, dtype=dtype)\n bitmask = buflist[0]\n if bitmask is not None:\n mask = pyarrow.BooleanArray.from_buffers(\n pyarrow.bool_(), len(arr), [None, bitmask], offset=arr.offset\n )\n mask = np.asarray(mask)\n else:\n mask = np.ones(len(arr), dtype=bool)\n return data, mask\n
.venv\Lib\site-packages\pandas\core\arrays\arrow\_arrow_utils.py
_arrow_utils.py
Python
1,586
0.95
0.06
0.093023
awesome-app
990
2025-05-19T10:51:36.418301
Apache-2.0
false
aa96521d771742ec264089549c9f3915
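The _arrow_utils.py record provides the buffer-level bridge from a primitive pyarrow.Array to a NumPy data array plus validity mask. A hedged sketch calling it directly (it is a private pandas internal, so the import path below may move between versions; the mask uses validity semantics, i.e. False marks a missing value):

# Split an Arrow int64 array into raw values and a validity mask.
import numpy as np
import pyarrow as pa

from pandas.core.arrays.arrow._arrow_utils import pyarrow_array_to_numpy_and_mask

arr = pa.array([1, None, 3], type=pa.int64())
data, mask = pyarrow_array_to_numpy_and_mask(arr, np.dtype("int64"))

print(data)  # raw buffer contents; the value behind the null slot is unspecified
print(mask)  # [ True False  True ]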
from pandas.core.arrays.arrow.accessors import (\n ListAccessor,\n StructAccessor,\n)\nfrom pandas.core.arrays.arrow.array import ArrowExtensionArray\n\n__all__ = ["ArrowExtensionArray", "StructAccessor", "ListAccessor"]\n
.venv\Lib\site-packages\pandas\core\arrays\arrow\__init__.py
__init__.py
Python
221
0.85
0
0
node-utils
944
2023-10-18T02:46:20.389640
GPL-3.0
false
36e38e49a4b688231e6df965213c7c22