Column schema (observed min-max string lengths per column):

- title: 1-185
- diff: 0-32.2M
- body: 0-123k (nullable)
- url: 57-58
- created_at: 20
- closed_at: 20
- merged_at: 20 (nullable)
- updated_at: 20
new file added
diff --git a/addft.py b/addft.py new file mode 100644 index 0000000000000..e4a7a07d06d60 --- /dev/null +++ b/addft.py @@ -0,0 +1,4 @@ +import numpy as np + +x = np.random.randint(10,15) +print(x) \ No newline at end of file
- [ ] closes #xxxx (Replace xxxx with the GitHub issue number) - [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature - [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit). - [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions. - [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
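The one-line script added in this diff prints a single pseudo-random integer. As a hedged aside (the seeding line below is an addition for reproducibility and is not in the PR), `np.random.randint(low, high)` samples from the half-open interval `[low, high)`:

```python
import numpy as np

# Sketch of the behavior of the line added in addft.py:
# np.random.randint(10, 15) draws from the half-open interval [10, 15),
# so the result is always one of 10, 11, 12, 13, 14.
np.random.seed(0)  # seeded here for reproducibility; not part of the PR
x = np.random.randint(10, 15)
print(x)
```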
https://api.github.com/repos/pandas-dev/pandas/pulls/50525
2023-01-02T06:18:36Z
2023-01-02T08:48:39Z
null
2023-01-02T08:48:55Z
PERF: ArrowExtensionArray comparison methods
diff --git a/doc/source/whatsnew/v2.0.0.rst b/doc/source/whatsnew/v2.0.0.rst index b1387e9717079..dcde5ff608724 100644 --- a/doc/source/whatsnew/v2.0.0.rst +++ b/doc/source/whatsnew/v2.0.0.rst @@ -768,6 +768,7 @@ Performance improvements - Performance improvement for :class:`~arrays.StringArray` constructor passing a numpy array with type ``np.str_`` (:issue:`49109`) - Performance improvement in :meth:`~arrays.ArrowExtensionArray.factorize` (:issue:`49177`) - Performance improvement in :meth:`~arrays.ArrowExtensionArray.__setitem__` when key is a null slice (:issue:`50248`) +- Performance improvement in :class:`~arrays.ArrowExtensionArray` comparison methods when array contains NA (:issue:`50524`) - Performance improvement in :meth:`~arrays.ArrowExtensionArray.to_numpy` (:issue:`49973`) - Performance improvement in :meth:`DataFrame.join` when joining on a subset of a :class:`MultiIndex` (:issue:`48611`) - Performance improvement for :meth:`MultiIndex.intersection` (:issue:`48604`) diff --git a/pandas/core/arrays/arrow/array.py b/pandas/core/arrays/arrow/array.py index 0076b3d2b64c3..635741167d81e 100644 --- a/pandas/core/arrays/arrow/array.py +++ b/pandas/core/arrays/arrow/array.py @@ -406,8 +406,14 @@ def _cmp_method(self, other, op): f"{op.__name__} not implemented for {type(other)}" ) - result = result.to_numpy() - return BooleanArray._from_sequence(result) + if result.null_count > 0: + # GH50524: avoid conversion to object for better perf + values = pc.fill_null(result, False).to_numpy() + mask = result.is_null().to_numpy() + else: + values = result.to_numpy() + mask = np.zeros(len(values), dtype=np.bool_) + return BooleanArray(values, mask) def _evaluate_op_method(self, other, op, arrow_funcs): pc_func = arrow_funcs[op.__name__]
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature - [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit). - [x] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions. - [x] Added an entry in the latest `doc/source/whatsnew/v2.0.0.rst` file if fixing a bug or adding a new feature. Perf improvement in `ArrowExtensionArray` comparison methods when array contains nulls. Improvement is from avoiding conversion to object array. ``` import pandas as pd import numpy as np data = np.random.randn(10**6) data[0] = np.nan arr = pd.array(data, dtype="float64[pyarrow]") %timeit arr > 0 # 80.5 ms Β± 619 Β΅s per loop (mean Β± std. dev. of 7 runs, 10 loops each) <- main # 4.05 ms Β± 11.4 Β΅s per loop (mean Β± std. dev. of 7 runs, 100 loops each) <- PR ```
https://api.github.com/repos/pandas-dev/pandas/pulls/50524
2023-01-02T01:45:46Z
2023-01-03T22:24:24Z
2023-01-03T22:24:24Z
2023-01-19T04:31:30Z
ENH: Added a new format in parsing.pyx
diff --git a/pandas/_libs/tslibs/parsing.pyx b/pandas/_libs/tslibs/parsing.pyx index aa95febfc9721..d06eb196ebe34 100644 --- a/pandas/_libs/tslibs/parsing.pyx +++ b/pandas/_libs/tslibs/parsing.pyx @@ -856,6 +856,7 @@ def guess_datetime_format(dt_str: str, bint dayfirst=False) -> str | None: (("day_of_week",), "%a", 0), (("day_of_week",), "%A", 0), (("meridiem",), "%p", 0), + (("year", "month", "day"), ("hour", "minute", "second"), "%Y%m%d %H%M%S", 0), ] if dayfirst: diff --git a/pandas/tests/tslibs/test_parsing.py b/pandas/tests/tslibs/test_parsing.py index 558c802fd70f6..e5b8c8e4f413f 100644 --- a/pandas/tests/tslibs/test_parsing.py +++ b/pandas/tests/tslibs/test_parsing.py @@ -332,3 +332,16 @@ def test_guess_datetime_format_f(input): result = parsing.guess_datetime_format(input) expected = "%Y-%m-%dT%H:%M:%S.%f" assert result == expected + + +@pytest.mark.parametrize( + "input", + [ + "20110519 010000", + "20120617 020000", + ], +) +def test_guess_datetime_format_iso_new(input): + result = parsing.guess_datetime_format(input) + expected = "%Y%m%d %H%M%S" + assert result == expected
- [x] closes #50514 - [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature - [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit). - [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions. - [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
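The new table entry makes strings like `"20110519 010000"` resolve to `"%Y%m%d %H%M%S"`. A minimal stdlib check that this format actually parses such a string:

```python
from datetime import datetime

# "%Y%m%d %H%M%S" is the format the PR teaches guess_datetime_format to
# return for compact timestamps like "20110519 010000".
parsed = datetime.strptime("20110519 010000", "%Y%m%d %H%M%S")
```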
https://api.github.com/repos/pandas-dev/pandas/pulls/50523
2023-01-01T10:34:44Z
2023-01-03T15:35:29Z
null
2023-01-03T15:37:04Z
Update doc for pd.MultiIndex.droplevel to reflect inplace behaviour
diff --git a/pandas/core/indexes/base.py b/pandas/core/indexes/base.py index 3dc6aed56fa24..af304dd2e43bf 100644 --- a/pandas/core/indexes/base.py +++ b/pandas/core/indexes/base.py @@ -1961,7 +1961,7 @@ def droplevel(self, level: IndexLabel = 0): Return index with requested level(s) removed. If resulting index has only 1 level left, the result will be - of Index type, not MultiIndex. + of Index type, not MultiIndex. The original index is not modified inplace. Parameters ----------
- [ ] closes #xxxx (Replace xxxx with the GitHub issue number) - [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature - [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit). - [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions. - [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
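The docstring clarification can be demonstrated directly with the public API; this is a sketch, not code from the PR:

```python
import pandas as pd

mi = pd.MultiIndex.from_arrays([[1, 2], ["a", "b"]], names=["x", "y"])
result = mi.droplevel("y")  # only one level left, so a plain Index is returned

# droplevel returns a new object; the original MultiIndex keeps both levels,
# which is the behavior the docstring change spells out.
```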
https://api.github.com/repos/pandas-dev/pandas/pulls/50521
2023-01-01T07:48:59Z
2023-01-03T22:27:14Z
2023-01-03T22:27:14Z
2023-01-03T22:27:20Z
⬆️ UPGRADE: Autoupdate pre-commit config
diff --git a/.pre-commit-config.yaml b/.pre-commit-config.yaml index f3158e64df8dd..82d8fc12c59fe 100644 --- a/.pre-commit-config.yaml +++ b/.pre-commit-config.yaml @@ -34,7 +34,7 @@ repos: types_or: [python, rst, markdown] additional_dependencies: [tomli] - repo: https://github.com/MarcoGorelli/cython-lint - rev: v0.9.1 + rev: v0.10.1 hooks: - id: cython-lint - id: double-quote-cython-strings @@ -77,12 +77,12 @@ repos: - flake8-bugbear==22.7.1 - pandas-dev-flaker==0.5.0 - repo: https://github.com/pycqa/pylint - rev: v2.15.6 + rev: v2.15.9 hooks: - id: pylint stages: [manual] - repo: https://github.com/pycqa/pylint - rev: v2.15.6 + rev: v2.15.9 hooks: - id: pylint alias: redefined-outer-name @@ -99,11 +99,11 @@ repos: args: [--disable=all, --enable=redefined-outer-name] stages: [manual] - repo: https://github.com/PyCQA/isort - rev: 5.10.1 + rev: 5.11.4 hooks: - id: isort - repo: https://github.com/asottile/pyupgrade - rev: v3.2.2 + rev: v3.3.1 hooks: - id: pyupgrade args: [--py38-plus] diff --git a/pandas/_libs/algos.pyx b/pandas/_libs/algos.pyx index 7fcba58772ac4..77876d0c55337 100644 --- a/pandas/_libs/algos.pyx +++ b/pandas/_libs/algos.pyx @@ -647,40 +647,37 @@ def pad_2d_inplace(numeric_object_t[:, :] values, uint8_t[:, :] mask, limit=None val = values[j, i] -""" -Backfilling logic for generating fill vector - -Diagram of what's going on - -Old New Fill vector Mask - . 0 1 - . 0 1 - . 0 1 -A A 0 1 - . 1 1 - . 1 1 - . 1 1 - . 1 1 - . 1 1 -B B 1 1 - . 2 1 - . 2 1 - . 2 1 -C C 2 1 - . 0 - . 0 -D -""" - - @cython.boundscheck(False) @cython.wraparound(False) def backfill( ndarray[numeric_object_t] old, ndarray[numeric_object_t] new, limit=None -) -> ndarray: - # -> ndarray[intp_t, ndim=1] +) -> ndarray: # -> ndarray[intp_t, ndim=1] + """ + Backfilling logic for generating fill vector + + Diagram of what's going on + + Old New Fill vector Mask + . 0 1 + . 0 1 + . 0 1 + A A 0 1 + . 1 1 + . 1 1 + . 1 1 + . 1 1 + . 1 1 + B B 1 1 + . 2 1 + . 2 1 + . 2 1 + C C 2 1 + . 
0 + . 0 + D + """ cdef: Py_ssize_t i, j, nleft, nright ndarray[intp_t, ndim=1] indexer diff --git a/pandas/_libs/tslibs/strptime.pyx b/pandas/_libs/tslibs/strptime.pyx index 27e99706137b6..ac5d783f769ac 100644 --- a/pandas/_libs/tslibs/strptime.pyx +++ b/pandas/_libs/tslibs/strptime.pyx @@ -1,4 +1,18 @@ """Strptime-related classes and functions. + +TimeRE, _calc_julian_from_U_or_W are vendored +from the standard library, see +https://github.com/python/cpython/blob/main/Lib/_strptime.py +The original module-level docstring follows. + +Strptime-related classes and functions. +CLASSES: + LocaleTime -- Discovers and stores locale-specific time information + TimeRE -- Creates regexes for pattern matching a string of text containing + time information +FUNCTIONS: + _getlang -- Figure out what language is being used for the locale + strptime -- Calculates the time struct represented by the passed-in string """ from datetime import timezone @@ -10,6 +24,11 @@ from cpython.datetime cimport ( timedelta, tzinfo, ) +from _strptime import ( + TimeRE as _TimeRE, + _getlang, +) +from _strptime import LocaleTime # no-cython-lint import_datetime() @@ -487,29 +506,6 @@ def array_strptime( return result, result_timezone.base -""" -TimeRE, _calc_julian_from_U_or_W are vendored -from the standard library, see -https://github.com/python/cpython/blob/main/Lib/_strptime.py -The original module-level docstring follows. - -Strptime-related classes and functions. -CLASSES: - LocaleTime -- Discovers and stores locale-specific time information - TimeRE -- Creates regexes for pattern matching a string of text containing - time information -FUNCTIONS: - _getlang -- Figure out what language is being used for the locale - strptime -- Calculates the time struct represented by the passed-in string -""" - -from _strptime import ( - TimeRE as _TimeRE, - _getlang, -) -from _strptime import LocaleTime # no-cython-lint - - class TimeRE(_TimeRE): """ Handle conversion from format directives to regexes.
<!-- START pr-commits --> <!-- END pr-commits --> ## Base PullRequest default branch (https://github.com/pandas-dev/pandas/tree/main) ## Command results <details> <summary>Details: </summary> <details> <summary><em>add path</em></summary> ```Shell /home/runner/work/_actions/technote-space/create-pr-action/v2/node_modules/npm-check-updates/build/src/bin ``` </details> <details> <summary><em>pip install pre-commit</em></summary> ```Shell Defaulting to user installation because normal site-packages is not writeable Collecting pre-commit Downloading pre_commit-2.21.0-py2.py3-none-any.whl (201 kB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 201.9/201.9 KB 4.2 MB/s eta 0:00:00 Collecting identify>=1.0.0 Downloading identify-2.5.11-py2.py3-none-any.whl (98 kB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 98.8/98.8 KB 27.6 MB/s eta 0:00:00 Collecting nodeenv>=0.11.1 Downloading nodeenv-1.7.0-py2.py3-none-any.whl (21 kB) Collecting virtualenv>=20.10.0 Downloading virtualenv-20.17.1-py3-none-any.whl (8.8 MB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 8.8/8.8 MB 101.1 MB/s eta 0:00:00 Collecting cfgv>=2.0.0 Downloading cfgv-3.3.1-py2.py3-none-any.whl (7.3 kB) Requirement already satisfied: pyyaml>=5.1 in /usr/lib/python3/dist-packages (from pre-commit) (5.4.1) Requirement already satisfied: setuptools in /usr/lib/python3/dist-packages (from nodeenv>=0.11.1->pre-commit) (59.6.0) Collecting distlib<1,>=0.3.6 Downloading distlib-0.3.6-py2.py3-none-any.whl (468 kB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 468.5/468.5 KB 71.4 MB/s eta 0:00:00 Collecting filelock<4,>=3.4.1 Downloading filelock-3.9.0-py3-none-any.whl (9.7 kB) Collecting platformdirs<3,>=2.4 Downloading platformdirs-2.6.2-py3-none-any.whl (14 kB) Installing collected packages: distlib, platformdirs, nodeenv, identify, filelock, cfgv, virtualenv, pre-commit Successfully installed cfgv-3.3.1 distlib-0.3.6 filelock-3.9.0 identify-2.5.11 nodeenv-1.7.0 platformdirs-2.6.2 pre-commit-2.21.0 virtualenv-20.17.1 ``` </details> <details> 
<summary><em>pre-commit autoupdate || (exit 0);</em></summary> ```Shell Updating https://github.com/MarcoGorelli/absolufy-imports ... [INFO] Initializing environment for https://github.com/MarcoGorelli/absolufy-imports. already up to date. Updating https://github.com/jendrikseipp/vulture ... [INFO] Initializing environment for https://github.com/jendrikseipp/vulture. already up to date. Updating https://github.com/codespell-project/codespell ... [INFO] Initializing environment for https://github.com/codespell-project/codespell. already up to date. Updating https://github.com/MarcoGorelli/cython-lint ... [INFO] Initializing environment for https://github.com/MarcoGorelli/cython-lint. updating v0.9.1 -> v0.10.1. Updating https://github.com/pre-commit/pre-commit-hooks ... [INFO] Initializing environment for https://github.com/pre-commit/pre-commit-hooks. already up to date. Updating https://github.com/cpplint/cpplint ... [INFO] Initializing environment for https://github.com/cpplint/cpplint. already up to date. Updating https://github.com/PyCQA/flake8 ... [INFO] Initializing environment for https://github.com/PyCQA/flake8. already up to date. Updating https://github.com/pycqa/pylint ... [INFO] Initializing environment for https://github.com/pycqa/pylint. updating v2.15.6 -> v2.15.9. Updating https://github.com/pycqa/pylint ... updating v2.15.6 -> v2.15.9. Updating https://github.com/PyCQA/isort ... [INFO] Initializing environment for https://github.com/PyCQA/isort. updating 5.10.1 -> 5.11.4. Updating https://github.com/asottile/pyupgrade ... [INFO] Initializing environment for https://github.com/asottile/pyupgrade. updating v3.2.2 -> v3.3.1. Updating https://github.com/pre-commit/pygrep-hooks ... [INFO] Initializing environment for https://github.com/pre-commit/pygrep-hooks. already up to date. Updating https://github.com/sphinx-contrib/sphinx-lint ... [INFO] Initializing environment for https://github.com/sphinx-contrib/sphinx-lint. already up to date. 
Updating https://github.com/asottile/yesqa ... [INFO] Initializing environment for https://github.com/asottile/yesqa. already up to date. ``` </details> <details> <summary><em>pre-commit run -a || (exit 0);</em></summary> ```Shell [INFO] Initializing environment for https://github.com/codespell-project/codespell:tomli. [INFO] Initializing environment for https://github.com/PyCQA/flake8:flake8-bugbear==22.7.1,flake8==6.0.0,pandas-dev-flaker==0.5.0. [INFO] Initializing environment for https://github.com/asottile/yesqa:flake8-bugbear==22.7.1,flake8==6.0.0,pandas-dev-flaker==0.5.0. [INFO] Initializing environment for local:black==22.10.0. [INFO] Initializing environment for local:pyright@1.1.276. [INFO] Initializing environment for local:flake8-rst==0.7.0,flake8==3.7.9. [INFO] Initializing environment for local:pyyaml,toml. [INFO] Initializing environment for local. [INFO] Initializing environment for local:tomli. [INFO] Initializing environment for local:flake8-pyi==22.8.1,flake8==5.0.4. [INFO] Initializing environment for local:autotyping==22.9.0,libcst==0.4.7. [INFO] Installing environment for https://github.com/MarcoGorelli/absolufy-imports. [INFO] Once installed this environment will be reused. [INFO] This may take a few minutes... [INFO] Installing environment for https://github.com/jendrikseipp/vulture. [INFO] Once installed this environment will be reused. [INFO] This may take a few minutes... [INFO] Installing environment for https://github.com/codespell-project/codespell. [INFO] Once installed this environment will be reused. [INFO] This may take a few minutes... [INFO] Installing environment for https://github.com/MarcoGorelli/cython-lint. [INFO] Once installed this environment will be reused. [INFO] This may take a few minutes... [INFO] Installing environment for https://github.com/pre-commit/pre-commit-hooks. [INFO] Once installed this environment will be reused. [INFO] This may take a few minutes... 
[INFO] Installing environment for https://github.com/cpplint/cpplint. [INFO] Once installed this environment will be reused. [INFO] This may take a few minutes... [INFO] Installing environment for https://github.com/PyCQA/flake8. [INFO] Once installed this environment will be reused. [INFO] This may take a few minutes... [INFO] Installing environment for https://github.com/PyCQA/isort. [INFO] Once installed this environment will be reused. [INFO] This may take a few minutes... [INFO] Installing environment for https://github.com/asottile/pyupgrade. [INFO] Once installed this environment will be reused. [INFO] This may take a few minutes... [INFO] Installing environment for https://github.com/sphinx-contrib/sphinx-lint. [INFO] Once installed this environment will be reused. [INFO] This may take a few minutes... [INFO] Installing environment for local. [INFO] Once installed this environment will be reused. [INFO] This may take a few minutes... [INFO] Installing environment for local. [INFO] Once installed this environment will be reused. [INFO] This may take a few minutes... [INFO] Installing environment for local. [INFO] Once installed this environment will be reused. [INFO] This may take a few minutes... [INFO] Installing environment for local. [INFO] Once installed this environment will be reused. [INFO] This may take a few minutes... [INFO] Installing environment for local. [INFO] Once installed this environment will be reused. [INFO] This may take a few minutes... [INFO] Installing environment for local. [INFO] Once installed this environment will be reused. [INFO] This may take a few minutes... 
absolufy-imports........................................................................................Passed vulture.................................................................................................Passed codespell...............................................................................................Passed cython-lint.............................................................................................Failed - hook id: cython-lint - exit code: 1 pandas/_libs/tslibs/strptime.pyx:490:1: pointless string statement pandas/_libs/algos.pyx:650:1: pointless string statement double-quote Cython strings.............................................................................Passed debug statements (python)...............................................................................Passed fix end of files........................................................................................Passed trim trailing whitespace................................................................................Passed cpplint.................................................................................................Passed flake8..................................................................................................Passed isort...................................................................................................Passed pyupgrade...............................................................................................Passed rst ``code`` is two backticks...........................................................................Passed rst directives end with two colons......................................................................Passed rst ``inline code`` next to normal text.................................................................Passed Sphinx lint.............................................................................................Passed 
black...................................................................................................Passed flake8-rst..............................................................................................Failed - hook id: flake8-rst - exit code: 1 Traceback (most recent call last): File "/home/runner/.cache/pre-commit/repomntviav0/py_env-python3/lib/python3.10/site-packages/flake8/checker.py", line 486, in run_ast_checks ast = self.processor.build_ast() File "/home/runner/.cache/pre-commit/repomntviav0/py_env-python3/lib/python3.10/site-packages/flake8/processor.py", line 212, in build_ast return compile("".join(self.lines), "", "exec", PyCF_ONLY_AST) File "", line 103 @tm.network # noqa ^ IndentationError: expected an indented block after 'with' statement on line 101 During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/home/runner/.cache/pre-commit/repomntviav0/py_env-python3/bin/flake8-rst", line 8, in <module> sys.exit(main()) File "/home/runner/.cache/pre-commit/repomntviav0/py_env-python3/lib/python3.10/site-packages/flake8_rst/cli.py", line 16, in main app.run(argv) File "/home/runner/.cache/pre-commit/repomntviav0/py_env-python3/lib/python3.10/site-packages/flake8/main/application.py", line 393, in run self._run(argv) File "/home/runner/.cache/pre-commit/repomntviav0/py_env-python3/lib/python3.10/site-packages/flake8/main/application.py", line 381, in _run self.run_checks() File "/home/runner/.cache/pre-commit/repomntviav0/py_env-python3/lib/python3.10/site-packages/flake8/main/application.py", line 300, in run_checks self.file_checker_manager.run() File "/home/runner/.cache/pre-commit/repomntviav0/py_env-python3/lib/python3.10/site-packages/flake8/checker.py", line 331, in run self.run_serial() File "/home/runner/.cache/pre-commit/repomntviav0/py_env-python3/lib/python3.10/site-packages/flake8/checker.py", line 315, in run_serial checker.run_checks() File 
"/home/runner/.cache/pre-commit/repomntviav0/py_env-python3/lib/python3.10/site-packages/flake8/checker.py", line 598, in run_checks self.run_ast_checks() File "/home/runner/.cache/pre-commit/repomntviav0/py_env-python3/lib/python3.10/site-packages/flake8/checker.py", line 488, in run_ast_checks row, column = self._extract_syntax_information(e) File "/home/runner/.cache/pre-commit/repomntviav0/py_env-python3/lib/python3.10/site-packages/flake8/checker.py", line 473, in _extract_syntax_information lines = physical_line.rstrip("\n").split("\n") AttributeError: 'int' object has no attribute 'rstrip' Traceback (most recent call last): File "/home/runner/.cache/pre-commit/repomntviav0/py_env-python3/lib/python3.10/site-packages/flake8/checker.py", line 486, in run_ast_checks ast = self.processor.build_ast() File "/home/runner/.cache/pre-commit/repomntviav0/py_env-python3/lib/python3.10/site-packages/flake8/processor.py", line 212, in build_ast return compile("".join(self.lines), "", "exec", PyCF_ONLY_AST) File "", line 27 df2.<TAB> # noqa: E225, E999 ^ SyntaxError: invalid syntax During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/home/runner/.cache/pre-commit/repomntviav0/py_env-python3/bin/flake8-rst", line 8, in <module> sys.exit(main()) File "/home/runner/.cache/pre-commit/repomntviav0/py_env-python3/lib/python3.10/site-packages/flake8_rst/cli.py", line 16, in main app.run(argv) File "/home/runner/.cache/pre-commit/repomntviav0/py_env-python3/lib/python3.10/site-packages/flake8/main/application.py", line 393, in run self._run(argv) File "/home/runner/.cache/pre-commit/repomntviav0/py_env-python3/lib/python3.10/site-packages/flake8/main/application.py", line 381, in _run self.run_checks() File "/home/runner/.cache/pre-commit/repomntviav0/py_env-python3/lib/python3.10/site-packages/flake8/main/application.py", line 300, in run_checks self.file_checker_manager.run() File 
"/home/runner/.cache/pre-commit/repomntviav0/py_env-python3/lib/python3.10/site-packages/flake8/checker.py", line 331, in run self.run_serial() File "/home/runner/.cache/pre-commit/repomntviav0/py_env-python3/lib/python3.10/site-packages/flake8/checker.py", line 315, in run_serial checker.run_checks() File "/home/runner/.cache/pre-commit/repomntviav0/py_env-python3/lib/python3.10/site-packages/flake8/checker.py", line 598, in run_checks self.run_ast_checks() File "/home/runner/.cache/pre-commit/repomntviav0/py_env-python3/lib/python3.10/site-packages/flake8/checker.py", line 488, in run_ast_checks row, column = self._extract_syntax_information(e) File "/home/runner/.cache/pre-commit/repomntviav0/py_env-python3/lib/python3.10/site-packages/flake8/checker.py", line 473, in _extract_syntax_information lines = physical_line.rstrip("\n").split("\n") AttributeError: 'int' object has no attribute 'rstrip' Unwanted patterns.......................................................................................Passed Check Cython casting is `<type>obj`, not `<type> obj`...................................................Passed Check for backticks incorrectly rendering because of missing spaces.....................................Passed Check for unnecessary random seeds in asv benchmarks....................................................Passed Check for usage of numpy testing or array_equal.........................................................Passed Check for invalid EA testing............................................................................Passed Generate pip dependency from conda......................................................................Passed Check flake8 version is synced across flake8, yesqa, and environment.yml................................Passed Validate correct capitalization among titles in documentation...........................................Passed Import pandas.array as pd_array in 
core.................................................................Passed Use pandas.io.common.urlopen instead of urllib.request.urlopen..........................................Passed Use bool_t instead of bool in pandas/core/generic.py....................................................Passed Use raise instead of return for exceptions..............................................................Passed Ensure pandas errors are documented in doc/source/reference/testing.rst.................................Passed Check for pg8000 not installed on CI for test_pg8000_sqlalchemy_passthrough_error.......................Passed Check minimum version of dependencies are aligned.......................................................Passed Validate errors locations...............................................................................Passed flake8-pyi..............................................................................................Passed import annotations from __future__......................................................................Passed check that test names start with 'test'.................................................................Passed ``` </details> </details> ## Changed files <details> <summary>Changed file: </summary> - .pre-commit-config.yaml </details> <hr> [:octocat: Repo](https://github.com/technote-space/create-pr-action) | [:memo: Issues](https://github.com/technote-space/create-pr-action/issues) | [:department_store: Marketplace](https://github.com/marketplace/actions/create-pr-action)
https://api.github.com/repos/pandas-dev/pandas/pulls/50520
2023-01-01T07:06:47Z
2023-01-03T21:54:18Z
2023-01-03T21:54:18Z
2023-01-03T21:54:24Z
Revert "STYLE use pandas-dev-flaker (#40906)"
diff --git a/.pre-commit-config.yaml b/.pre-commit-config.yaml index 6a840ebfb9b52..82043f79643e4 100644 --- a/.pre-commit-config.yaml +++ b/.pre-commit-config.yaml @@ -70,12 +70,9 @@ repos: rev: 6.0.0 hooks: - id: flake8 - # Need to patch os.remove rule in pandas-dev-flaker - exclude: ^ci/fix_wheels.py additional_dependencies: &flake8_dependencies - flake8==6.0.0 - flake8-bugbear==22.7.1 - - pandas-dev-flaker==0.5.0 - repo: https://github.com/pycqa/pylint rev: v2.15.9 hooks: @@ -183,6 +180,21 @@ repos: types: [rst] args: [--filename=*.rst] additional_dependencies: [flake8-rst==0.7.0, flake8==3.7.9] + - id: inconsistent-namespace-usage + name: 'Check for inconsistent use of pandas namespace' + entry: python scripts/check_for_inconsistent_pandas_namespace.py + exclude: ^pandas/core/interchange/ + language: python + types: [python] + - id: no-os-remove + name: Check code for instances of os.remove + entry: os\.remove + language: pygrep + types: [python] + files: ^pandas/tests/ + exclude: | + (?x)^ + pandas/tests/io/pytables/test_store\.py$ - id: unwanted-patterns name: Unwanted patterns language: pygrep @@ -192,6 +204,20 @@ repos: \#\ type:\ (?!ignore) |\#\ type:\s?ignore(?!\[) + # foo._class__ instead of type(foo) + |\.__class__ + + # np.bool/np.object instead of np.bool_/np.object_ + |np\.bool[^_8`] + |np\.object[^_8`] + + # imports from collections.abc instead of `from collections import abc` + |from\ collections\.abc\ import + + # Numpy + |from\ numpy\ import\ random + |from\ numpy\.random\ import + # Incorrect code-block / IPython directives |\.\.\ code-block\ :: |\.\.\ ipython\ :: @@ -200,7 +226,17 @@ repos: # Check for deprecated messages without sphinx directive |(DEPRECATED|DEPRECATE|Deprecated)(:|,|\.) 
+ + # {foo!r} instead of {repr(foo)} + |!r} + + # builtin filter function + |(?<!def)[\(\s]filter\( + + # exec + |[^a-zA-Z0-9_]exec\( types_or: [python, cython, rst] + exclude: ^doc/source/development/code_style\.rst # contains examples of patterns to avoid - id: cython-casting name: Check Cython casting is `<type>obj`, not `<type> obj` language: pygrep @@ -231,6 +267,58 @@ repos: files: ^pandas/tests/extension/base types: [python] exclude: ^pandas/tests/extension/base/base\.py + - id: unwanted-patterns-in-tests + name: Unwanted patterns in tests + language: pygrep + entry: | + (?x) + # pytest.xfail instead of pytest.mark.xfail + pytest\.xfail + + # imports from pandas._testing instead of `import pandas._testing as tm` + |from\ pandas\._testing\ import + |from\ pandas\ import\ _testing\ as\ tm + + # No direct imports from conftest + |conftest\ import + |import\ conftest + + # pandas.testing instead of tm + |pd\.testing\. + + # pd.api.types instead of from pandas.api.types import ... + |(pd|pandas)\.api\.types\. 
+ + # np.testing, np.array_equal + |(numpy|np)(\.testing|\.array_equal) + + # unittest.mock (use pytest builtin monkeypatch fixture instead) + |(unittest(\.| import )mock|mock\.Mock\(\)|mock\.patch) + + # pytest raises without context + |\s\ pytest.raises + + # pytest.warns (use tm.assert_produces_warning instead) + |pytest\.warns + files: ^pandas/tests/ + types_or: [python, cython, rst] + - id: unwanted-patterns-in-ea-tests + name: Unwanted patterns in EA tests + language: pygrep + entry: | + (?x) + tm.assert_(series|frame)_equal + files: ^pandas/tests/extension/base/ + exclude: ^pandas/tests/extension/base/base\.py$ + types_or: [python, cython, rst] + - id: unwanted-patterns-in-cython + name: Unwanted patterns in Cython code + language: pygrep + entry: | + (?x) + # `<type>obj` as opposed to `<type> obj` + [a-zA-Z0-9*]>[ ] + types: [cython] - id: pip-to-conda name: Generate pip dependency from conda language: python @@ -251,6 +339,38 @@ repos: language: python types: [rst] files: ^doc/source/(development|reference)/ + - id: unwanted-patterns-bare-pytest-raises + name: Check for use of bare pytest raises + language: python + entry: python scripts/validate_unwanted_patterns.py --validation-type="bare_pytest_raises" + types: [python] + files: ^pandas/tests/ + exclude: ^pandas/tests/extension/ + - id: unwanted-patterns-private-function-across-module + name: Check for use of private functions across modules + language: python + entry: python scripts/validate_unwanted_patterns.py --validation-type="private_function_across_module" + types: [python] + exclude: ^(asv_bench|pandas/tests|doc)/ + - id: unwanted-patterns-private-import-across-module + name: Check for import of private attributes across modules + language: python + entry: python scripts/validate_unwanted_patterns.py --validation-type="private_import_across_module" + types: [python] + exclude: | + (?x) + ^(asv_bench|pandas/tests|doc)/ + |scripts/validate_min_versions_in_sync\.py$ + - id: 
unwanted-patterns-strings-to-concatenate + name: Check for use of not concatenated strings + language: python + entry: python scripts/validate_unwanted_patterns.py --validation-type="strings_to_concatenate" + types_or: [python, cython] + - id: unwanted-patterns-strings-with-misplaced-whitespace + name: Check for strings with misplaced spaces + language: python + entry: python scripts/validate_unwanted_patterns.py --validation-type="strings_with_wrong_placed_whitespace" + types_or: [python, cython] - id: use-pd_array-in-core name: Import pandas.array as pd_array in core language: python diff --git a/asv_bench/benchmarks/pandas_vb_common.py b/asv_bench/benchmarks/pandas_vb_common.py index d3168bde0a783..97d91111e833a 100644 --- a/asv_bench/benchmarks/pandas_vb_common.py +++ b/asv_bench/benchmarks/pandas_vb_common.py @@ -70,7 +70,7 @@ class BaseIO: def remove(self, f): """Remove created files""" try: - os.remove(f) # noqa: PDF008 + os.remove(f) except OSError: # On Windows, attempting to remove a file that is in use # causes an exception to be raised diff --git a/doc/scripts/eval_performance.py b/doc/scripts/eval_performance.py index 85d9ce4ad01e9..f6087e02a9330 100644 --- a/doc/scripts/eval_performance.py +++ b/doc/scripts/eval_performance.py @@ -6,8 +6,7 @@ from pandas import DataFrame setup_common = """from pandas import DataFrame -from numpy.random import randn -df = DataFrame(randn(%d, 3), columns=list('abc')) +df = DataFrame(np.random.randn(%d, 3), columns=list('abc')) %s""" setup_with = "s = 'a + b * (c ** 2 + b ** 2 - a) / (a * c) ** 3'" diff --git a/doc/source/development/contributing_codebase.rst b/doc/source/development/contributing_codebase.rst index b05f026bbbb44..449b6de36cd24 100644 --- a/doc/source/development/contributing_codebase.rst +++ b/doc/source/development/contributing_codebase.rst @@ -43,7 +43,7 @@ Pre-commit ---------- Additionally, :ref:`Continuous Integration <contributing.ci>` will run code formatting checks -like ``black``, ``flake8`` 
(including a `pandas-dev-flaker <https://github.com/pandas-dev/pandas-dev-flaker>`_ plugin), +like ``black``, ``flake8``, ``isort``, and ``cpplint`` and more using `pre-commit hooks <https://pre-commit.com/>`_ Any warnings from these checks will cause the :ref:`Continuous Integration <contributing.ci>` to fail; therefore, it is helpful to run the check yourself before submitting code. This diff --git a/doc/source/whatsnew/v1.4.0.rst b/doc/source/whatsnew/v1.4.0.rst index 5895a06792ffb..9dbe450261e54 100644 --- a/doc/source/whatsnew/v1.4.0.rst +++ b/doc/source/whatsnew/v1.4.0.rst @@ -320,7 +320,7 @@ Null-values are no longer coerced to NaN-value in value_counts and mode ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ :meth:`Series.value_counts` and :meth:`Series.mode` no longer coerce ``None``, -``NaT`` and other null-values to a NaN-value for ``np.object``-dtype. This +``NaT`` and other null-values to a NaN-value for ``np.object_``-dtype. This behavior is now consistent with ``unique``, ``isin`` and others (:issue:`42688`). 
diff --git a/environment.yml b/environment.yml index b6b8f7d6af1ba..9eaf52977d6d2 100644 --- a/environment.yml +++ b/environment.yml @@ -90,7 +90,6 @@ dependencies: - gitdb - natsort # DataFrame.sort_values doctest - numpydoc - - pandas-dev-flaker=0.5.0 - pydata-sphinx-theme<0.11 - pytest-cython # doctest - sphinx diff --git a/pandas/__init__.py b/pandas/__init__.py index 951cb38656d0b..048d20f0de72f 100644 --- a/pandas/__init__.py +++ b/pandas/__init__.py @@ -135,7 +135,7 @@ ) from pandas import api, arrays, errors, io, plotting, tseries -from pandas import testing # noqa:PDF015 +from pandas import testing from pandas.util._print_versions import show_versions from pandas.io.api import ( diff --git a/pandas/_libs/lib.pyx b/pandas/_libs/lib.pyx index bc7b876cb5de8..b56cf2a23a45f 100644 --- a/pandas/_libs/lib.pyx +++ b/pandas/_libs/lib.pyx @@ -1482,7 +1482,7 @@ def infer_dtype(value: object, skipna: bool = True) -> str: return val if values.descr.type_num != NPY_OBJECT: - # i.e. values.dtype != np.object + # i.e. 
values.dtype != np.object_ # This should not be reached values = values.astype(object) diff --git a/pandas/_testing/__init__.py b/pandas/_testing/__init__.py index 43020ae471f10..eb2905751a9b4 100644 --- a/pandas/_testing/__init__.py +++ b/pandas/_testing/__init__.py @@ -886,7 +886,7 @@ def external_error_raised(expected_exception: type[Exception]) -> ContextManager """ import pytest - return pytest.raises(expected_exception, match=None) # noqa: PDF010 + return pytest.raises(expected_exception, match=None) cython_table = pd.core.common._cython_table.items() diff --git a/pandas/compat/__init__.py b/pandas/compat/__init__.py index 085a2a80ca8ec..b59b9632913e4 100644 --- a/pandas/compat/__init__.py +++ b/pandas/compat/__init__.py @@ -14,7 +14,6 @@ import sys from pandas._typing import F -import pandas.compat._compressors from pandas.compat._constants import ( IS64, PY39, @@ -22,6 +21,7 @@ PY311, PYPY, ) +import pandas.compat.compressors from pandas.compat.numpy import ( is_numpy_dev, np_version_under1p21, @@ -131,7 +131,7 @@ def is_ci_environment() -> bool: return os.environ.get("PANDAS_CI", "0") == "1" -def get_lzma_file() -> type[pandas.compat._compressors.LZMAFile]: +def get_lzma_file() -> type[pandas.compat.compressors.LZMAFile]: """ Importing the `LZMAFile` class from the `lzma` module. @@ -145,13 +145,13 @@ def get_lzma_file() -> type[pandas.compat._compressors.LZMAFile]: RuntimeError If the `lzma` module was not imported correctly, or didn't exist. """ - if not pandas.compat._compressors.has_lzma: + if not pandas.compat.compressors.has_lzma: raise RuntimeError( "lzma module not available. " "A Python re-install with the proper dependencies, " "might be required to solve this issue." 
) - return pandas.compat._compressors.LZMAFile + return pandas.compat.compressors.LZMAFile __all__ = [ diff --git a/pandas/compat/_compressors.py b/pandas/compat/compressors.py similarity index 100% rename from pandas/compat/_compressors.py rename to pandas/compat/compressors.py diff --git a/pandas/core/arrays/string_arrow.py b/pandas/core/arrays/string_arrow.py index 97262b1f4bb21..fb081d0e63c96 100644 --- a/pandas/core/arrays/string_arrow.py +++ b/pandas/core/arrays/string_arrow.py @@ -1,8 +1,10 @@ from __future__ import annotations -from collections.abc import Callable # noqa: PDF001 import re -from typing import Union +from typing import ( + Callable, + Union, +) import numpy as np diff --git a/pandas/core/generic.py b/pandas/core/generic.py index 3517f3ee9183d..a89b28603113b 100644 --- a/pandas/core/generic.py +++ b/pandas/core/generic.py @@ -133,11 +133,11 @@ from pandas.core import ( algorithms as algos, arraylike, + common, indexing, nanops, sample, ) -from pandas.core import common # noqa: PDF018 from pandas.core.array_algos.replace import should_use_regex from pandas.core.arrays import ExtensionArray from pandas.core.base import PandasObject @@ -158,7 +158,7 @@ SingleArrayManager, ) from pandas.core.internals.construction import mgr_to_mgr -from pandas.core.internals.managers import _using_copy_on_write +from pandas.core.internals.managers import using_copy_on_write from pandas.core.methods.describe import describe_ndframe from pandas.core.missing import ( clean_fill_method, @@ -10102,7 +10102,7 @@ def truncate( if isinstance(ax, MultiIndex): setattr(result, self._get_axis_name(axis), ax.truncate(before, after)) - if copy or (copy is None and not _using_copy_on_write()): + if copy or (copy is None and not using_copy_on_write()): result = result.copy(deep=copy) return result diff --git a/pandas/core/interchange/dataframe.py b/pandas/core/interchange/dataframe.py index 9139cb41e3af7..0de9b130f0aab 100644 --- a/pandas/core/interchange/dataframe.py +++ 
b/pandas/core/interchange/dataframe.py @@ -7,8 +7,10 @@ from pandas.core.interchange.dataframe_protocol import DataFrame as DataFrameXchg if TYPE_CHECKING: - import pandas as pd - from pandas import Index + from pandas import ( + DataFrame, + Index, + ) class PandasDataFrameXchg(DataFrameXchg): @@ -21,7 +23,7 @@ class PandasDataFrameXchg(DataFrameXchg): """ def __init__( - self, df: pd.DataFrame, nan_as_null: bool = False, allow_copy: bool = True + self, df: DataFrame, nan_as_null: bool = False, allow_copy: bool = True ) -> None: """ Constructor - an instance of this (private) class is returned from diff --git a/pandas/core/internals/managers.py b/pandas/core/internals/managers.py index 53f347ec4d372..c4e869a3f6a45 100644 --- a/pandas/core/internals/managers.py +++ b/pandas/core/internals/managers.py @@ -15,7 +15,7 @@ import numpy as np -from pandas._config.config import _global_config +from pandas._config import config from pandas._libs import ( algos as libalgos, @@ -376,7 +376,7 @@ def setitem(self: T, indexer, value) -> T: if isinstance(indexer, np.ndarray) and indexer.ndim > self.ndim: raise ValueError(f"Cannot set values with ndim > {self.ndim}") - if _using_copy_on_write() and not self._has_no_reference(0): + if using_copy_on_write() and not self._has_no_reference(0): # if being referenced -> perform Copy-on-Write and clear the reference # this method is only called if there is a single block -> hardcoded 0 self = self.copy() @@ -385,7 +385,7 @@ def setitem(self: T, indexer, value) -> T: def putmask(self, mask, new, align: bool = True): if ( - _using_copy_on_write() + using_copy_on_write() and self.refs is not None and not all(ref is None for ref in self.refs) ): @@ -429,7 +429,7 @@ def fillna(self: T, value, limit, inplace: bool, downcast) -> T: limit = libalgos.validate_limit(None, limit=limit) if inplace: # TODO(CoW) can be optimized to only copy those blocks that have refs - if _using_copy_on_write() and any( + if using_copy_on_write() and any( not 
self._has_no_reference_block(i) for i in range(len(self.blocks)) ): self = self.copy() @@ -613,7 +613,7 @@ def copy(self: T, deep: bool | None | Literal["all"] = True) -> T: BlockManager """ if deep is None: - if _using_copy_on_write(): + if using_copy_on_write(): # use shallow copy deep = False else: @@ -701,7 +701,7 @@ def reindex_indexer( pandas-indexer with -1's only. """ if copy is None: - if _using_copy_on_write(): + if using_copy_on_write(): # use shallow copy copy = False else: @@ -879,7 +879,7 @@ def _slice_take_blocks_ax0( # we may try to only slice taker = blklocs[mgr_locs.indexer] max_len = max(len(mgr_locs), taker.max() + 1) - if only_slice or _using_copy_on_write(): + if only_slice or using_copy_on_write(): taker = lib.maybe_indices_to_slice(taker, max_len) if isinstance(taker, slice): @@ -1039,7 +1039,7 @@ def from_blocks( """ Constructor for BlockManager and SingleBlockManager with same signature. """ - parent = parent if _using_copy_on_write() else None + parent = parent if using_copy_on_write() else None return cls(blocks, axes, refs, parent, verify_integrity=False) # ---------------------------------------------------------------- @@ -1222,7 +1222,7 @@ def value_getitem(placement): blk_locs = blklocs[val_locs.indexer] if inplace and blk.should_store(value): # Updating inplace -> check if we need to do Copy-on-Write - if _using_copy_on_write() and not self._has_no_reference_block(blkno_l): + if using_copy_on_write() and not self._has_no_reference_block(blkno_l): blk.set_inplace(blk_locs, value_getitem(val_locs), copy=True) self._clear_reference_block(blkno_l) else: @@ -1320,7 +1320,7 @@ def _iset_single( if inplace and blk.should_store(value): copy = False - if _using_copy_on_write() and not self._has_no_reference_block(blkno): + if using_copy_on_write() and not self._has_no_reference_block(blkno): # perform Copy-on-Write and clear the reference copy = True self._clear_reference_block(blkno) @@ -1344,7 +1344,7 @@ def column_setitem( This is a 
method on the BlockManager level, to avoid creating an intermediate Series at the DataFrame level (`s = df[loc]; s[idx] = value`) """ - if _using_copy_on_write() and not self._has_no_reference(loc): + if using_copy_on_write() and not self._has_no_reference(loc): # otherwise perform Copy-on-Write and clear the reference blkno = self.blknos[loc] blocks = list(self.blocks) @@ -1839,7 +1839,7 @@ def __init__( self.axes = [axis] self.blocks = (block,) self.refs = refs - self.parent = parent if _using_copy_on_write() else None + self.parent = parent if using_copy_on_write() else None @classmethod def from_blocks( @@ -1876,7 +1876,7 @@ def to_2d_mgr(self, columns: Index) -> BlockManager: new_blk = type(blk)(arr, placement=bp, ndim=2) axes = [columns, self.axes[0]] refs: list[weakref.ref | None] = [weakref.ref(blk)] - parent = self if _using_copy_on_write() else None + parent = self if using_copy_on_write() else None return BlockManager( [new_blk], axes=axes, refs=refs, parent=parent, verify_integrity=False ) @@ -2020,7 +2020,7 @@ def setitem_inplace(self, indexer, value) -> None: in place, not returning a new Manager (and Block), and thus never changing the dtype. 
""" - if _using_copy_on_write() and not self._has_no_reference(0): + if using_copy_on_write() and not self._has_no_reference(0): self.blocks = (self._block.copy(),) self.refs = None self.parent = None @@ -2354,8 +2354,8 @@ def _preprocess_slice_or_indexer( return "fancy", indexer, len(indexer) -_mode_options = _global_config["mode"] +_mode_options = config._global_config["mode"] -def _using_copy_on_write(): +def using_copy_on_write(): return _mode_options["copy_on_write"] diff --git a/pandas/core/strings/base.py b/pandas/core/strings/base.py index b5618207ab9d8..c96e5a1abcf86 100644 --- a/pandas/core/strings/base.py +++ b/pandas/core/strings/base.py @@ -1,10 +1,10 @@ from __future__ import annotations import abc -from collections.abc import Callable # noqa: PDF001 import re from typing import ( TYPE_CHECKING, + Callable, Literal, ) diff --git a/pandas/core/strings/object_array.py b/pandas/core/strings/object_array.py index 2d77cd0da816f..508ac122d67af 100644 --- a/pandas/core/strings/object_array.py +++ b/pandas/core/strings/object_array.py @@ -1,12 +1,12 @@ from __future__ import annotations -from collections.abc import Callable # noqa: PDF001 import functools import re import sys import textwrap from typing import ( TYPE_CHECKING, + Callable, Literal, ) import unicodedata diff --git a/pandas/core/window/ewm.py b/pandas/core/window/ewm.py index a6c32133803d4..c0a7b2b7cc361 100644 --- a/pandas/core/window/ewm.py +++ b/pandas/core/window/ewm.py @@ -26,7 +26,7 @@ ) from pandas.core.dtypes.missing import isna -from pandas.core import common # noqa: PDF018 +from pandas.core import common from pandas.core.indexers.objects import ( BaseIndexer, ExponentialMovingWindowIndexer, diff --git a/pandas/io/common.py b/pandas/io/common.py index 4dae46c8f39f6..6deaf40f00c69 100644 --- a/pandas/io/common.py +++ b/pandas/io/common.py @@ -55,8 +55,8 @@ WriteBuffer, ) from pandas.compat import get_lzma_file -from pandas.compat._compressors import BZ2File as _BZ2File from 
pandas.compat._optional import import_optional_dependency +from pandas.compat.compressors import BZ2File as _BZ2File from pandas.util._decorators import doc from pandas.util._exceptions import find_stack_level @@ -478,7 +478,7 @@ def file_path_to_url(path: str) -> str: return urljoin("file:", pathname2url(path)) -_extension_to_compression = { +extension_to_compression = { ".tar": "tar", ".tar.gz": "tar", ".tar.bz2": "tar", @@ -489,7 +489,7 @@ def file_path_to_url(path: str) -> str: ".xz": "xz", ".zst": "zstd", } -_supported_compressions = set(_extension_to_compression.values()) +_supported_compressions = set(extension_to_compression.values()) def get_compression_method( @@ -565,7 +565,7 @@ def infer_compression( return None # Infer compression from the filename/URL extension - for extension, compression in _extension_to_compression.items(): + for extension, compression in extension_to_compression.items(): if filepath_or_buffer.lower().endswith(extension): return compression return None diff --git a/pandas/io/json/_json.py b/pandas/io/json/_json.py index f2780d5fa6832..88974f3ab4afa 100644 --- a/pandas/io/json/_json.py +++ b/pandas/io/json/_json.py @@ -57,7 +57,7 @@ from pandas.io.common import ( IOHandles, - _extension_to_compression, + extension_to_compression, file_exists, get_handle, is_fsspec_url, @@ -854,7 +854,7 @@ def _get_data_from_filepath(self, filepath_or_buffer): elif ( isinstance(filepath_or_buffer, str) and filepath_or_buffer.lower().endswith( - (".json",) + tuple(f".json{c}" for c in _extension_to_compression) + (".json",) + tuple(f".json{c}" for c in extension_to_compression) ) and not file_exists(filepath_or_buffer) ): diff --git a/pandas/io/xml.py b/pandas/io/xml.py index 1368a407fa494..6ffa3356cc9de 100644 --- a/pandas/io/xml.py +++ b/pandas/io/xml.py @@ -46,10 +46,7 @@ if TYPE_CHECKING: from xml.etree.ElementTree import Element - from lxml.etree import ( - _Element, - _XSLTResultTree, - ) + from lxml import etree from pandas import DataFrame @@ 
-417,7 +414,7 @@ def _validate_names(self) -> None: def _parse_doc( self, raw_doc: FilePath | ReadBuffer[bytes] | ReadBuffer[str] - ) -> Element | _Element: + ) -> Element | etree._Element: """ Build tree from path_or_buffer. @@ -625,7 +622,7 @@ def _validate_names(self) -> None: def _parse_doc( self, raw_doc: FilePath | ReadBuffer[bytes] | ReadBuffer[str] - ) -> _Element: + ) -> etree._Element: from lxml.etree import ( XMLParser, fromstring, @@ -656,7 +653,7 @@ def _parse_doc( return document - def _transform_doc(self) -> _XSLTResultTree: + def _transform_doc(self) -> etree._XSLTResultTree: """ Transform original tree using stylesheet. diff --git a/pandas/tests/api/test_api.py b/pandas/tests/api/test_api.py index 995b1668046d2..e448e1bce9146 100644 --- a/pandas/tests/api/test_api.py +++ b/pandas/tests/api/test_api.py @@ -250,7 +250,7 @@ class TestTesting(Base): ] def test_testing(self): - from pandas import testing # noqa: PDF015 + from pandas import testing self.check(testing, self.funcs) diff --git a/pandas/tests/arrays/boolean/test_function.py b/pandas/tests/arrays/boolean/test_function.py index 8e9112b531fad..b484dc39cf23b 100644 --- a/pandas/tests/arrays/boolean/test_function.py +++ b/pandas/tests/arrays/boolean/test_function.py @@ -67,7 +67,7 @@ def test_ufuncs_unary(ufunc): def test_ufunc_numeric(): - # np.sqrt on np.bool returns float16, which we upcast to Float32 + # np.sqrt on np.bool_ returns float16, which we upcast to Float32 # bc we do not have Float16 arr = pd.array([True, False, None], dtype="boolean") diff --git a/pandas/tests/io/formats/style/test_style.py b/pandas/tests/io/formats/style/test_style.py index 32ab0336aa93f..46fb614d96633 100644 --- a/pandas/tests/io/formats/style/test_style.py +++ b/pandas/tests/io/formats/style/test_style.py @@ -707,13 +707,13 @@ def test_applymap_subset_multiindex(self, slice_): and isinstance(slice_[-1][-1], list) and "C" in slice_[-1][-1] ): - ctx = pytest.raises(KeyError, match="C") # noqa: PDF010 + ctx = 
pytest.raises(KeyError, match="C") elif ( isinstance(slice_[0], tuple) and isinstance(slice_[0][1], list) and 3 in slice_[0][1] ): - ctx = pytest.raises(KeyError, match="3") # noqa: PDF010 + ctx = pytest.raises(KeyError, match="3") else: ctx = contextlib.nullcontext() diff --git a/pandas/tests/io/pytables/test_round_trip.py b/pandas/tests/io/pytables/test_round_trip.py index 53be06cd491ef..5c7c4f9ce0b75 100644 --- a/pandas/tests/io/pytables/test_round_trip.py +++ b/pandas/tests/io/pytables/test_round_trip.py @@ -349,7 +349,7 @@ def test_index_types(setup_path): _check_roundtrip(ser, func, path=setup_path) -def test_timeseries_preepoch(setup_path): +def test_timeseries_preepoch(setup_path, request): dr = bdate_range("1/1/1940", "1/1/1960") ts = Series(np.random.randn(len(dr)), index=dr) @@ -357,9 +357,10 @@ def test_timeseries_preepoch(setup_path): _check_roundtrip(ts, tm.assert_series_equal, path=setup_path) except OverflowError: if is_platform_windows(): - pytest.xfail("known failure on some windows platforms") - else: - raise + request.node.add_marker( + pytest.mark.xfail(reason="known failure on some windows platforms") + ) + raise @pytest.mark.parametrize( diff --git a/pandas/tests/io/pytables/test_store.py b/pandas/tests/io/pytables/test_store.py index 1263d61b55cd5..2664c7df59223 100644 --- a/pandas/tests/io/pytables/test_store.py +++ b/pandas/tests/io/pytables/test_store.py @@ -911,7 +911,7 @@ def do_copy(f, new_f=None, keys=None, propindexes=True, **kwargs): os.close(fd) except (OSError, ValueError): pass - os.remove(new_f) # noqa: PDF008 + os.remove(new_f) # new table df = tm.makeDataFrame() diff --git a/pandas/tests/io/test_common.py b/pandas/tests/io/test_common.py index f74d268690a5b..2172a4bf839fb 100644 --- a/pandas/tests/io/test_common.py +++ b/pandas/tests/io/test_common.py @@ -317,7 +317,7 @@ def test_read_expands_user_home_dir( ), ], ) - @pytest.mark.filterwarnings( # pytables np.object usage + @pytest.mark.filterwarnings( # pytables np.object_ usage
"ignore:`np.object` is a deprecated alias:DeprecationWarning" ) def test_read_fspath_all(self, reader, module, path, datapath): @@ -372,7 +372,7 @@ def test_write_fspath_all(self, writer_name, writer_kwargs, module): expected = f_path.read() assert result == expected - @pytest.mark.filterwarnings( # pytables np.object usage + @pytest.mark.filterwarnings( # pytables np.object_ usage "ignore:`np.object` is a deprecated alias:DeprecationWarning" ) def test_write_fspath_hdf5(self): diff --git a/pandas/tests/io/test_compression.py b/pandas/tests/io/test_compression.py index 782753177f245..fc15ff3488ce9 100644 --- a/pandas/tests/io/test_compression.py +++ b/pandas/tests/io/test_compression.py @@ -19,7 +19,7 @@ import pandas.io.common as icom _compression_to_extension = { - value: key for key, value in icom._extension_to_compression.items() + value: key for key, value in icom.extension_to_compression.items() } diff --git a/pandas/tests/io/test_pickle.py b/pandas/tests/io/test_pickle.py index 3dafe6fe61b35..07a028a19d7f9 100644 --- a/pandas/tests/io/test_pickle.py +++ b/pandas/tests/io/test_pickle.py @@ -37,8 +37,8 @@ get_lzma_file, is_platform_little_endian, ) -from pandas.compat._compressors import flatten_buffer from pandas.compat._optional import import_optional_dependency +from pandas.compat.compressors import flatten_buffer import pandas.util._test_decorators as td import pandas as pd @@ -255,7 +255,7 @@ def get_random_path(): class TestCompression: - _extension_to_compression = icom._extension_to_compression + _extension_to_compression = icom.extension_to_compression def compress_file(self, src_path, dest_path, compression): if compression is None: diff --git a/requirements-dev.txt b/requirements-dev.txt index 4f2a80d932fd0..428c2cb92d040 100644 --- a/requirements-dev.txt +++ b/requirements-dev.txt @@ -65,7 +65,6 @@ gitpython gitdb natsort numpydoc -pandas-dev-flaker==0.5.0 pydata-sphinx-theme<0.11 pytest-cython sphinx diff --git 
a/scripts/check_for_inconsistent_pandas_namespace.py b/scripts/check_for_inconsistent_pandas_namespace.py new file mode 100644 index 0000000000000..3c21821e794a9 --- /dev/null +++ b/scripts/check_for_inconsistent_pandas_namespace.py @@ -0,0 +1,142 @@ +""" +Check that test suite file doesn't use the pandas namespace inconsistently. + +We check for cases of ``Series`` and ``pd.Series`` appearing in the same file +(likewise for other pandas objects). + +This is meant to be run as a pre-commit hook - to run it manually, you can do: + + pre-commit run inconsistent-namespace-usage --all-files + +To automatically fixup a given file, you can pass `--replace`, e.g. + + python scripts/check_for_inconsistent_pandas_namespace.py test_me.py --replace + +though note that you may need to manually fixup some imports and that you will also +need the additional dependency `tokenize-rt` (which is left out from the pre-commit +hook so that it uses the same virtualenv as the other local ones). + +The general structure is similar to that of some plugins from +https://github.com/asottile/pyupgrade . 
+""" + +import argparse +import ast +import sys +from typing import ( + MutableMapping, + NamedTuple, + Optional, + Sequence, + Set, +) + +ERROR_MESSAGE = ( + "{path}:{lineno}:{col_offset}: " + "Found both '{prefix}.{name}' and '{name}' in {path}" +) + + +class OffsetWithNamespace(NamedTuple): + lineno: int + col_offset: int + namespace: str + + +class Visitor(ast.NodeVisitor): + def __init__(self) -> None: + self.pandas_namespace: MutableMapping[OffsetWithNamespace, str] = {} + self.imported_from_pandas: Set[str] = set() + + def visit_Attribute(self, node: ast.Attribute) -> None: + if isinstance(node.value, ast.Name) and node.value.id in {"pandas", "pd"}: + offset_with_namespace = OffsetWithNamespace( + node.lineno, node.col_offset, node.value.id + ) + self.pandas_namespace[offset_with_namespace] = node.attr + self.generic_visit(node) + + def visit_ImportFrom(self, node: ast.ImportFrom) -> None: + if node.module is not None and "pandas" in node.module: + self.imported_from_pandas.update(name.name for name in node.names) + self.generic_visit(node) + + +def replace_inconsistent_pandas_namespace(visitor: Visitor, content: str) -> str: + from tokenize_rt import ( + reversed_enumerate, + src_to_tokens, + tokens_to_src, + ) + + tokens = src_to_tokens(content) + for n, i in reversed_enumerate(tokens): + offset_with_namespace = OffsetWithNamespace(i.offset[0], i.offset[1], i.src) + if ( + offset_with_namespace in visitor.pandas_namespace + and visitor.pandas_namespace[offset_with_namespace] + in visitor.imported_from_pandas + ): + # Replace `pd` + tokens[n] = i._replace(src="") + # Replace `.` + tokens[n + 1] = tokens[n + 1]._replace(src="") + + new_src: str = tokens_to_src(tokens) + return new_src + + +def check_for_inconsistent_pandas_namespace( + content: str, path: str, *, replace: bool +) -> Optional[str]: + tree = ast.parse(content) + + visitor = Visitor() + visitor.visit(tree) + + inconsistencies = visitor.imported_from_pandas.intersection( + 
visitor.pandas_namespace.values() + ) + + if not inconsistencies: + # No inconsistent namespace usage, nothing to replace. + return None + + if not replace: + inconsistency = inconsistencies.pop() + lineno, col_offset, prefix = next( + key for key, val in visitor.pandas_namespace.items() if val == inconsistency + ) + msg = ERROR_MESSAGE.format( + lineno=lineno, + col_offset=col_offset, + prefix=prefix, + name=inconsistency, + path=path, + ) + sys.stdout.write(msg) + sys.exit(1) + + return replace_inconsistent_pandas_namespace(visitor, content) + + +def main(argv: Optional[Sequence[str]] = None) -> None: + parser = argparse.ArgumentParser() + parser.add_argument("paths", nargs="*") + parser.add_argument("--replace", action="store_true") + args = parser.parse_args(argv) + + for path in args.paths: + with open(path, encoding="utf-8") as fd: + content = fd.read() + new_content = check_for_inconsistent_pandas_namespace( + content, path, replace=args.replace + ) + if not args.replace or new_content is None: + continue + with open(path, "w", encoding="utf-8") as fd: + fd.write(new_content) + + +if __name__ == "__main__": + main() diff --git a/scripts/sync_flake8_versions.py b/scripts/sync_flake8_versions.py index 8852634c5d796..0d513d5937dbe 100644 --- a/scripts/sync_flake8_versions.py +++ b/scripts/sync_flake8_versions.py @@ -1,5 +1,5 @@ """ -Check that the flake8 (and pandas-dev-flaker) pins are the same in: +Check that the flake8 pins are the same in: - environment.yml - .pre-commit-config.yaml, in the flake8 hook @@ -103,17 +103,13 @@ def get_revisions( precommit_config: YamlMapping, environment: YamlMapping ) -> tuple[Revisions, Revisions]: flake8_revisions = Revisions(name="flake8") - pandas_dev_flaker_revisions = Revisions(name="pandas-dev-flaker") repos = precommit_config["repos"] flake8_repo, flake8_hook = _get_repo_hook(repos, "flake8") flake8_revisions.pre_commit = Revision("flake8", "==", flake8_repo["rev"]) flake8_additional_dependencies = [] for dep in 
_process_dependencies(flake8_hook.get("additional_dependencies", [])): - if dep.name == "pandas-dev-flaker": - pandas_dev_flaker_revisions.pre_commit = dep - else: - flake8_additional_dependencies.append(dep) + flake8_additional_dependencies.append(dep) environment_dependencies = environment["dependencies"] environment_additional_dependencies = [] @@ -121,8 +117,6 @@ def get_revisions( if dep.name == "flake8": flake8_revisions.environment = dep environment_additional_dependencies.append(dep) - elif dep.name == "pandas-dev-flaker": - pandas_dev_flaker_revisions.environment = dep else: environment_additional_dependencies.append(dep) @@ -131,8 +125,7 @@ def get_revisions( environment_additional_dependencies, ) - for revisions in flake8_revisions, pandas_dev_flaker_revisions: - _validate_revisions(revisions) + _validate_revisions(flake8_revisions) if __name__ == "__main__": diff --git a/scripts/tests/test_inconsistent_namespace_check.py b/scripts/tests/test_inconsistent_namespace_check.py new file mode 100644 index 0000000000000..eb995158d8cb4 --- /dev/null +++ b/scripts/tests/test_inconsistent_namespace_check.py @@ -0,0 +1,61 @@ +import pytest + +from ..check_for_inconsistent_pandas_namespace import ( + check_for_inconsistent_pandas_namespace, +) + +BAD_FILE_0 = ( + "from pandas import Categorical\n" + "cat_0 = Categorical()\n" + "cat_1 = pd.Categorical()" +) +BAD_FILE_1 = ( + "from pandas import Categorical\n" + "cat_0 = pd.Categorical()\n" + "cat_1 = Categorical()" +) +BAD_FILE_2 = ( + "from pandas import Categorical\n" + "cat_0 = pandas.Categorical()\n" + "cat_1 = Categorical()" +) +GOOD_FILE_0 = ( + "from pandas import Categorical\ncat_0 = Categorical()\ncat_1 = Categorical()" +) +GOOD_FILE_1 = "cat_0 = pd.Categorical()\ncat_1 = pd.Categorical()" +GOOD_FILE_2 = "from array import array\nimport pandas as pd\narr = pd.array([])" +PATH = "t.py" + + +@pytest.mark.parametrize( + "content, expected", + [ + (BAD_FILE_0, "t.py:3:8: Found both 'pd.Categorical' and 
'Categorical' in t.py"), + (BAD_FILE_1, "t.py:2:8: Found both 'pd.Categorical' and 'Categorical' in t.py"), + ( + BAD_FILE_2, + "t.py:2:8: Found both 'pandas.Categorical' and 'Categorical' in t.py", + ), + ], +) +def test_inconsistent_usage(content, expected, capsys): + with pytest.raises(SystemExit): + check_for_inconsistent_pandas_namespace(content, PATH, replace=False) + result, _ = capsys.readouterr() + assert result == expected + + +@pytest.mark.parametrize("content", [GOOD_FILE_0, GOOD_FILE_1, GOOD_FILE_2]) +@pytest.mark.parametrize("replace", [True, False]) +def test_consistent_usage(content, replace): + # should not raise + check_for_inconsistent_pandas_namespace(content, PATH, replace=replace) + + +@pytest.mark.parametrize("content", [BAD_FILE_0, BAD_FILE_1, BAD_FILE_2]) +def test_inconsistent_usage_with_replace(content): + result = check_for_inconsistent_pandas_namespace(content, PATH, replace=True) + expected = ( + "from pandas import Categorical\ncat_0 = Categorical()\ncat_1 = Categorical()" + ) + assert result == expected diff --git a/scripts/tests/test_sync_flake8_versions.py b/scripts/tests/test_sync_flake8_versions.py index 743ece34e0b56..2349a4f5d8d1c 100644 --- a/scripts/tests/test_sync_flake8_versions.py +++ b/scripts/tests/test_sync_flake8_versions.py @@ -87,7 +87,6 @@ def test_get_revisions_no_failure(capsys): { "id": "flake8", "additional_dependencies": [ - "pandas-dev-flaker==0.4.0", "flake8-bugs==1.1.1", ], } @@ -101,7 +100,6 @@ def test_get_revisions_no_failure(capsys): "id": "yesqa", "additional_dependencies": [ "flake8==0.1.1", - "pandas-dev-flaker==0.4.0", "flake8-bugs==1.1.1", ], } @@ -116,7 +114,6 @@ def test_get_revisions_no_failure(capsys): { "pip": [ "git+https://github.com/pydata/pydata-sphinx-theme.git@master", - "pandas-dev-flaker==0.4.0", ] }, ] diff --git a/scripts/tests/test_validate_unwanted_patterns.py b/scripts/tests/test_validate_unwanted_patterns.py new file mode 100644 index 0000000000000..ef93fd1d21981 --- /dev/null +++ 
b/scripts/tests/test_validate_unwanted_patterns.py @@ -0,0 +1,419 @@ +import io + +import pytest + +from .. import validate_unwanted_patterns + + +class TestBarePytestRaises: + @pytest.mark.parametrize( + "data", + [ + ( + """ + with pytest.raises(ValueError, match="foo"): + pass + """ + ), + ( + """ + # with pytest.raises(ValueError, match="foo"): + # pass + """ + ), + ( + """ + # with pytest.raises(ValueError): + # pass + """ + ), + ( + """ + with pytest.raises( + ValueError, + match="foo" + ): + pass + """ + ), + ], + ) + def test_pytest_raises(self, data): + fd = io.StringIO(data.strip()) + result = list(validate_unwanted_patterns.bare_pytest_raises(fd)) + assert result == [] + + @pytest.mark.parametrize( + "data, expected", + [ + ( + ( + """ + with pytest.raises(ValueError): + pass + """ + ), + [ + ( + 1, + ( + "Bare pytests raise have been found. " + "Please pass in the argument 'match' " + "as well the exception." + ), + ), + ], + ), + ( + ( + """ + with pytest.raises(ValueError, match="foo"): + with pytest.raises(ValueError): + pass + pass + """ + ), + [ + ( + 2, + ( + "Bare pytests raise have been found. " + "Please pass in the argument 'match' " + "as well the exception." + ), + ), + ], + ), + ( + ( + """ + with pytest.raises(ValueError): + with pytest.raises(ValueError, match="foo"): + pass + pass + """ + ), + [ + ( + 1, + ( + "Bare pytests raise have been found. " + "Please pass in the argument 'match' " + "as well the exception." + ), + ), + ], + ), + ( + ( + """ + with pytest.raises( + ValueError + ): + pass + """ + ), + [ + ( + 1, + ( + "Bare pytests raise have been found. " + "Please pass in the argument 'match' " + "as well the exception." + ), + ), + ], + ), + ( + ( + """ + with pytest.raises( + ValueError, + # match = "foo" + ): + pass + """ + ), + [ + ( + 1, + ( + "Bare pytests raise have been found. " + "Please pass in the argument 'match' " + "as well the exception." 
+ ), + ), + ], + ), + ], + ) + def test_pytest_raises_raises(self, data, expected): + fd = io.StringIO(data.strip()) + result = list(validate_unwanted_patterns.bare_pytest_raises(fd)) + assert result == expected + + +@pytest.mark.parametrize( + "data, expected", + [ + ( + 'msg = ("bar " "baz")', + [ + ( + 1, + ( + "String unnecessarily split in two by black. " + "Please merge them manually." + ), + ) + ], + ), + ( + 'msg = ("foo " "bar " "baz")', + [ + ( + 1, + ( + "String unnecessarily split in two by black. " + "Please merge them manually." + ), + ), + ( + 1, + ( + "String unnecessarily split in two by black. " + "Please merge them manually." + ), + ), + ], + ), + ], +) +def test_strings_to_concatenate(data, expected): + fd = io.StringIO(data.strip()) + result = list(validate_unwanted_patterns.strings_to_concatenate(fd)) + assert result == expected + + +class TestStringsWithWrongPlacedWhitespace: + @pytest.mark.parametrize( + "data", + [ + ( + """ + msg = ( + "foo\n" + " bar" + ) + """ + ), + ( + """ + msg = ( + "foo" + " bar" + "baz" + ) + """ + ), + ( + """ + msg = ( + f"foo" + " bar" + ) + """ + ), + ( + """ + msg = ( + "foo" + f" bar" + ) + """ + ), + ( + """ + msg = ( + "foo" + rf" bar" + ) + """ + ), + ], + ) + def test_strings_with_wrong_placed_whitespace(self, data): + fd = io.StringIO(data.strip()) + result = list( + validate_unwanted_patterns.strings_with_wrong_placed_whitespace(fd) + ) + assert result == [] + + @pytest.mark.parametrize( + "data, expected", + [ + ( + ( + """ + msg = ( + "foo" + " bar" + ) + """ + ), + [ + ( + 3, + ( + "String has a space at the beginning instead " + "of the end of the previous string." + ), + ) + ], + ), + ( + ( + """ + msg = ( + f"foo" + " bar" + ) + """ + ), + [ + ( + 3, + ( + "String has a space at the beginning instead " + "of the end of the previous string." 
+ ), + ) + ], + ), + ( + ( + """ + msg = ( + "foo" + f" bar" + ) + """ + ), + [ + ( + 3, + ( + "String has a space at the beginning instead " + "of the end of the previous string." + ), + ) + ], + ), + ( + ( + """ + msg = ( + f"foo" + f" bar" + ) + """ + ), + [ + ( + 3, + ( + "String has a space at the beginning instead " + "of the end of the previous string." + ), + ) + ], + ), + ( + ( + """ + msg = ( + "foo" + rf" bar" + " baz" + ) + """ + ), + [ + ( + 3, + ( + "String has a space at the beginning instead " + "of the end of the previous string." + ), + ), + ( + 4, + ( + "String has a space at the beginning instead " + "of the end of the previous string." + ), + ), + ], + ), + ( + ( + """ + msg = ( + "foo" + " bar" + rf" baz" + ) + """ + ), + [ + ( + 3, + ( + "String has a space at the beginning instead " + "of the end of the previous string." + ), + ), + ( + 4, + ( + "String has a space at the beginning instead " + "of the end of the previous string." + ), + ), + ], + ), + ( + ( + """ + msg = ( + "foo" + rf" bar" + rf" baz" + ) + """ + ), + [ + ( + 3, + ( + "String has a space at the beginning instead " + "of the end of the previous string." + ), + ), + ( + 4, + ( + "String has a space at the beginning instead " + "of the end of the previous string." + ), + ), + ], + ), + ], + ) + def test_strings_with_wrong_placed_whitespace_raises(self, data, expected): + fd = io.StringIO(data.strip()) + result = list( + validate_unwanted_patterns.strings_with_wrong_placed_whitespace(fd) + ) + assert result == expected diff --git a/scripts/validate_unwanted_patterns.py b/scripts/validate_unwanted_patterns.py new file mode 100755 index 0000000000000..f78b9b8ced863 --- /dev/null +++ b/scripts/validate_unwanted_patterns.py @@ -0,0 +1,487 @@ +#!/usr/bin/env python3 +""" +Unwanted patterns test cases. 
+ +The reason this file exist despite the fact we already have +`ci/code_checks.sh`, +(see https://github.com/pandas-dev/pandas/blob/master/ci/code_checks.sh) + +is that some of the test cases are more complex/impossible to validate via regex. +So this file is somewhat an extensions to `ci/code_checks.sh` +""" + +import argparse +import ast +import sys +import token +import tokenize +from typing import ( + IO, + Callable, + Iterable, + List, + Set, + Tuple, +) + +PRIVATE_IMPORTS_TO_IGNORE: Set[str] = { + "_extension_array_shared_docs", + "_index_shared_docs", + "_interval_shared_docs", + "_merge_doc", + "_shared_docs", + "_apply_docs", + "_new_Index", + "_new_PeriodIndex", + "_doc_template", + "_agg_template", + "_pipe_template", + "__main__", + "_transform_template", + "_flex_comp_doc_FRAME", + "_op_descriptions", + "_IntegerDtype", + "_use_inf_as_na", + "_get_plot_backend", + "_matplotlib", + "_arrow_utils", + "_registry", + "_get_offset", # TODO: remove after get_offset deprecation enforced + "_test_parse_iso8601", + "_json_normalize", # TODO: remove after deprecation is enforced + "_testing", + "_test_decorators", + "__version__", # check np.__version__ in compat.numpy.function +} + + +def _get_literal_string_prefix_len(token_string: str) -> int: + """ + Getting the length of the literal string prefix. + + Parameters + ---------- + token_string : str + String to check. + + Returns + ------- + int + Length of the literal string prefix. + + Examples + -------- + >>> example_string = "'Hello world'" + >>> _get_literal_string_prefix_len(example_string) + 0 + >>> example_string = "r'Hello world'" + >>> _get_literal_string_prefix_len(example_string) + 1 + """ + try: + return min( + token_string.find(quote) + for quote in (r"'", r'"') + if token_string.find(quote) >= 0 + ) + except ValueError: + return 0 + + +def bare_pytest_raises(file_obj: IO[str]) -> Iterable[Tuple[int, str]]: + """ + Test Case for bare pytest raises. 
+ + For example, this is wrong: + + >>> with pytest.raise(ValueError): + ... # Some code that raises ValueError + + And this is what we want instead: + + >>> with pytest.raise(ValueError, match="foo"): + ... # Some code that raises ValueError + + Parameters + ---------- + file_obj : IO + File-like object containing the Python code to validate. + + Yields + ------ + line_number : int + Line number of unconcatenated string. + msg : str + Explanation of the error. + + Notes + ----- + GH #23922 + """ + contents = file_obj.read() + tree = ast.parse(contents) + + for node in ast.walk(tree): + if not isinstance(node, ast.Call): + continue + + try: + if not (node.func.value.id == "pytest" and node.func.attr == "raises"): + continue + except AttributeError: + continue + + if not node.keywords: + yield ( + node.lineno, + "Bare pytests raise have been found. " + "Please pass in the argument 'match' as well the exception.", + ) + else: + # Means that there are arguments that are being passed in, + # now we validate that `match` is one of the passed in arguments + if not any(keyword.arg == "match" for keyword in node.keywords): + yield ( + node.lineno, + "Bare pytests raise have been found. " + "Please pass in the argument 'match' as well the exception.", + ) + + +PRIVATE_FUNCTIONS_ALLOWED = {"sys._getframe"} # no known alternative + + +def private_function_across_module(file_obj: IO[str]) -> Iterable[Tuple[int, str]]: + """ + Checking that a private function is not used across modules. + Parameters + ---------- + file_obj : IO + File-like object containing the Python code to validate. + Yields + ------ + line_number : int + Line number of the private function that is used across modules. + msg : str + Explanation of the error. 
+ """ + contents = file_obj.read() + tree = ast.parse(contents) + + imported_modules: Set[str] = set() + + for node in ast.walk(tree): + if isinstance(node, (ast.Import, ast.ImportFrom)): + for module in node.names: + module_fqdn = module.name if module.asname is None else module.asname + imported_modules.add(module_fqdn) + + if not isinstance(node, ast.Call): + continue + + try: + module_name = node.func.value.id + function_name = node.func.attr + except AttributeError: + continue + + # Exception section # + + # (Debatable) Class case + if module_name[0].isupper(): + continue + # (Debatable) Dunder methods case + elif function_name.startswith("__") and function_name.endswith("__"): + continue + elif module_name + "." + function_name in PRIVATE_FUNCTIONS_ALLOWED: + continue + + if module_name in imported_modules and function_name.startswith("_"): + yield (node.lineno, f"Private function '{module_name}.{function_name}'") + + +def private_import_across_module(file_obj: IO[str]) -> Iterable[Tuple[int, str]]: + """ + Checking that a private function is not imported across modules. + Parameters + ---------- + file_obj : IO + File-like object containing the Python code to validate. + Yields + ------ + line_number : int + Line number of import statement, that imports the private function. + msg : str + Explanation of the error. + """ + contents = file_obj.read() + tree = ast.parse(contents) + + for node in ast.walk(tree): + if not isinstance(node, (ast.Import, ast.ImportFrom)): + continue + + for module in node.names: + module_name = module.name.split(".")[-1] + if module_name in PRIVATE_IMPORTS_TO_IGNORE: + continue + + if module_name.startswith("_"): + yield (node.lineno, f"Import of internal function {repr(module_name)}") + + +def strings_to_concatenate(file_obj: IO[str]) -> Iterable[Tuple[int, str]]: + """ + This test case is necessary after 'Black' (https://github.com/psf/black), + is formatting strings over multiple lines. 
+ + For example, when this: + + >>> foo = ( + ... "bar " + ... "baz" + ... ) + + Is becoming this: + + >>> foo = ("bar " "baz") + + 'Black' is not considering this as an + issue (see https://github.com/psf/black/issues/1051), + so we are checking it here instead. + + Parameters + ---------- + file_obj : IO + File-like object containing the Python code to validate. + + Yields + ------ + line_number : int + Line number of unconcatenated string. + msg : str + Explanation of the error. + + Notes + ----- + GH #30454 + """ + tokens: List = list(tokenize.generate_tokens(file_obj.readline)) + + for current_token, next_token in zip(tokens, tokens[1:]): + if current_token.type == next_token.type == token.STRING: + yield ( + current_token.start[0], + ( + "String unnecessarily split in two by black. " + "Please merge them manually." + ), + ) + + +def strings_with_wrong_placed_whitespace( + file_obj: IO[str], +) -> Iterable[Tuple[int, str]]: + """ + Test case for leading spaces in concated strings. + + For example: + + >>> rule = ( + ... "We want the space at the end of the line, " + ... "not at the beginning" + ... ) + + Instead of: + + >>> rule = ( + ... "We want the space at the end of the line," + ... " not at the beginning" + ... ) + + Parameters + ---------- + file_obj : IO + File-like object containing the Python code to validate. + + Yields + ------ + line_number : int + Line number of unconcatenated string. + msg : str + Explanation of the error. + """ + + def has_wrong_whitespace(first_line: str, second_line: str) -> bool: + """ + Checking if the two lines are mattching the unwanted pattern. + + Parameters + ---------- + first_line : str + First line to check. + second_line : str + Second line to check. + + Returns + ------- + bool + True if the two received string match, an unwanted pattern. 
+ + Notes + ----- + The unwanted pattern that we are trying to catch is if the spaces in + a string that is concatenated over multiple lines are placed at the + end of each string, unless this string is ending with a + newline character (\n). + + For example, this is bad: + + >>> rule = ( + ... "We want the space at the end of the line," + ... " not at the beginning" + ... ) + + And what we want is: + + >>> rule = ( + ... "We want the space at the end of the line, " + ... "not at the beginning" + ... ) + + And if the string is ending with a new line character (\n) we + do not want any trailing whitespaces after it. + + For example, this is bad: + + >>> rule = ( + ... "We want the space at the begging of " + ... "the line if the previous line is ending with a \n " + ... "not at the end, like always" + ... ) + + And what we do want is: + + >>> rule = ( + ... "We want the space at the begging of " + ... "the line if the previous line is ending with a \n" + ... " not at the end, like always" + ... 
) + """ + if first_line.endswith(r"\n"): + return False + elif first_line.startswith(" ") or second_line.startswith(" "): + return False + elif first_line.endswith(" ") or second_line.endswith(" "): + return False + elif (not first_line.endswith(" ")) and second_line.startswith(" "): + return True + return False + + tokens: List = list(tokenize.generate_tokens(file_obj.readline)) + + for first_token, second_token, third_token in zip(tokens, tokens[1:], tokens[2:]): + # Checking if we are in a block of concated string + if ( + first_token.type == third_token.type == token.STRING + and second_token.type == token.NL + ): + # Striping the quotes, with the string literal prefix + first_string: str = first_token.string[ + _get_literal_string_prefix_len(first_token.string) + 1 : -1 + ] + second_string: str = third_token.string[ + _get_literal_string_prefix_len(third_token.string) + 1 : -1 + ] + + if has_wrong_whitespace(first_string, second_string): + yield ( + third_token.start[0], + ( + "String has a space at the beginning instead " + "of the end of the previous string." + ), + ) + + +def main( + function: Callable[[IO[str]], Iterable[Tuple[int, str]]], + source_path: str, + output_format: str, +) -> bool: + """ + Main entry point of the script. + + Parameters + ---------- + function : Callable + Function to execute for the specified validation type. + source_path : str + Source path representing path to a file/directory. + output_format : str + Output format of the error message. + file_extensions_to_check : str + Comma separated values of what file extensions to check. + excluded_file_paths : str + Comma separated values of what file paths to exclude during the check. + + Returns + ------- + bool + True if found any patterns are found related to the given function. + + Raises + ------ + ValueError + If the `source_path` is not pointing to existing file/directory. 
+ """ + is_failed: bool = False + + for file_path in source_path: + with open(file_path, encoding="utf-8") as file_obj: + for line_number, msg in function(file_obj): + is_failed = True + print( + output_format.format( + source_path=file_path, line_number=line_number, msg=msg + ) + ) + + return is_failed + + +if __name__ == "__main__": + available_validation_types: List[str] = [ + "bare_pytest_raises", + "private_function_across_module", + "private_import_across_module", + "strings_to_concatenate", + "strings_with_wrong_placed_whitespace", + ] + + parser = argparse.ArgumentParser(description="Unwanted patterns checker.") + + parser.add_argument("paths", nargs="*", help="Source paths of files to check.") + parser.add_argument( + "--format", + "-f", + default="{source_path}:{line_number}:{msg}", + help="Output format of the error message.", + ) + parser.add_argument( + "--validation-type", + "-vt", + choices=available_validation_types, + required=True, + help="Validation test case to check.", + ) + + args = parser.parse_args() + + sys.exit( + main( + function=globals().get(args.validation_type), + source_path=args.paths, + output_format=args.format, + ) + ) diff --git a/setup.cfg b/setup.cfg index ef84dd7f9ce85..562ae70fd73ef 100644 --- a/setup.cfg +++ b/setup.cfg @@ -35,10 +35,6 @@ ignore = B023 # Functions defined inside a loop must not use variables redefined in the loop B301, - # single-letter variables - PDF023, - # "use 'pandas._testing' instead" in non-test code - PDF025, # If test must be a simple comparison against sys.platform or sys.version_info Y002, # Use "_typeshed.Self" instead of class-bound TypeVar @@ -59,18 +55,6 @@ exclude = versioneer.py, # exclude asv benchmark environments from linting env -per-file-ignores = - # private import across modules - pandas/tests/*:PDF020 - # pytest.raises without match= - pandas/tests/extension/*:PDF009 - # os.remove - doc/make.py:PDF008 - # import from pandas._testing - pandas/testing.py:PDF014 - # can't use fixtures 
in asv - asv_bench/*:PDF016 - [flake8-rst] max-line-length = 84
So, #40906 consolidated a bunch of custom checks into a flake8 extension.

As it stands, flake8 is the slowest pre-commit hook we have (even without the extension - see [timings](https://github.com/pandas-dev/pandas/pull/50518#issuecomment-1368265690)).

In https://github.com/pandas-dev/pandas/pull/50160, Joris tried adding ruff, which would be more than an order of magnitude faster (!) and has more checks. Several projects are moving towards it.

So, I suggest we revert #40906, so that we can rip out flake8 and replace it with `ruff` without losing the custom checks.

This is mostly a pure revert of #40906, with some adjustments for things which have since changed. I removed the `import pandas.core.common as com` check, as that one was `noqa`'d in a few places anyway (if the check isn't a flake8 extension, then it's not so easy to turn off, since it gets disabled for a whole file at a time) and doesn't seem too important anyway.

---

Overall, I think that this + the ruff PR would be a big improvement to developer workflow. It would mean more hooks (for now - maybe they can be consolidated later anyway), but they would run a lot faster. And I'd expect the speed of the checks to matter much more to developer workflow than the number of them.
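For context, the custom checks being restored here are plain AST/token walks over each file, so they don't actually need flake8 to run. As a condensed sketch (simplified from the restored `scripts/validate_unwanted_patterns.py`, not a drop-in replacement), the bare `pytest.raises` check boils down to:

```python
import ast

def bare_pytest_raises(source: str):
    """Yield line numbers of pytest.raises(...) calls missing a match= keyword."""
    for node in ast.walk(ast.parse(source)):
        if (
            isinstance(node, ast.Call)
            and isinstance(node.func, ast.Attribute)
            and isinstance(node.func.value, ast.Name)
            and node.func.value.id == "pytest"
            and node.func.attr == "raises"
            and not any(kw.arg == "match" for kw in node.keywords)
        ):
            yield node.lineno

# flagged: no match= argument
bad = "with pytest.raises(ValueError):\n    pass\n"
# fine: match= is passed
good = 'with pytest.raises(ValueError, match="foo"):\n    pass\n'
assert list(bare_pytest_raises(bad)) == [1]
assert list(bare_pytest_raises(good)) == []
```

Keeping checks like this as a standalone script (run via pre-commit) is what lets flake8 and the extension be removed without losing them.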
https://api.github.com/repos/pandas-dev/pandas/pulls/50519
2022-12-31T20:38:57Z
2023-01-05T14:08:19Z
2023-01-05T14:08:19Z
2023-01-05T14:08:20Z
wip
diff --git a/.github/workflows/32-bit-linux.yml b/.github/workflows/32-bit-linux.yml deleted file mode 100644 index 438d2c7b4174e..0000000000000 --- a/.github/workflows/32-bit-linux.yml +++ /dev/null @@ -1,54 +0,0 @@ -name: 32 Bit Linux - -on: - push: - branches: - - main - - 1.5.x - pull_request: - branches: - - main - - 1.5.x - paths-ignore: - - "doc/**" - -permissions: - contents: read - -jobs: - pytest: - runs-on: ubuntu-22.04 - steps: - - name: Checkout - uses: actions/checkout@v3 - with: - fetch-depth: 0 - - - name: Run 32-bit manylinux2014 Docker Build / Tests - run: | - # Without this (line 34), versioneer will not be able to determine the pandas version. - # This is because of a security update to git that blocks it from reading the config folder if - # it is not owned by the current user. We hit this since the "mounted" folder is not hit by the - # Docker container. - # xref https://github.com/pypa/manylinux/issues/1309 - docker pull quay.io/pypa/manylinux2014_i686 - docker run --platform linux/386 -v $(pwd):/pandas quay.io/pypa/manylinux2014_i686 \ - /bin/bash -xc "cd pandas && \ - git config --global --add safe.directory /pandas && \ - /opt/python/cp38-cp38/bin/python -m venv ~/virtualenvs/pandas-dev && \ - . ~/virtualenvs/pandas-dev/bin/activate && \ - python -m pip install --no-deps -U pip wheel 'setuptools<60.0.0' && \ - python -m pip install versioneer[toml] && \ - python -m pip install cython numpy python-dateutil pytz pytest pytest-xdist pytest-asyncio>=0.17 hypothesis && \ - python setup.py build_ext -q -j1 && \ - python -m pip install --no-build-isolation --no-use-pep517 -e . 
&& \ - python -m pip list && \ - export PANDAS_CI=1 && \ - pytest -m 'not slow and not network and not clipboard and not single_cpu' pandas --junitxml=test-data.xml" - - - name: Publish test results for Python 3.8-32 bit full Linux - uses: actions/upload-artifact@v3 - with: - name: Test results - path: test-data.xml - if: failure() diff --git a/.github/workflows/assign.yml b/.github/workflows/assign.yml deleted file mode 100644 index b3331060823a9..0000000000000 --- a/.github/workflows/assign.yml +++ /dev/null @@ -1,19 +0,0 @@ -name: Assign -on: - issue_comment: - types: created - -permissions: - contents: read - -jobs: - issue_assign: - permissions: - issues: write - pull-requests: write - runs-on: ubuntu-22.04 - steps: - - if: github.event.comment.body == 'take' - run: | - echo "Assigning issue ${{ github.event.issue.number }} to ${{ github.event.comment.user.login }}" - curl -H "Authorization: token ${{ secrets.GITHUB_TOKEN }}" -d '{"assignees": ["${{ github.event.comment.user.login }}"]}' https://api.github.com/repos/${{ github.repository }}/issues/${{ github.event.issue.number }}/assignees diff --git a/.github/workflows/asv-bot.yml b/.github/workflows/asv-bot.yml deleted file mode 100644 index d264698e60485..0000000000000 --- a/.github/workflows/asv-bot.yml +++ /dev/null @@ -1,78 +0,0 @@ -name: "ASV Bot" - -on: - issue_comment: # Pull requests are issues - types: - - created - -env: - ENV_FILE: environment.yml - COMMENT: ${{github.event.comment.body}} - -permissions: - contents: read - -jobs: - autotune: - permissions: - contents: read - issues: write - pull-requests: write - name: "Run benchmarks" - # TODO: Support more benchmarking options later, against different branches, against self, etc - if: startsWith(github.event.comment.body, '@github-actions benchmark') - runs-on: ubuntu-22.04 - defaults: - run: - shell: bash -el {0} - - concurrency: - # Set concurrency to prevent abuse(full runs are ~5.5 hours !!!) 
- # each user can only run one concurrent benchmark bot at a time - # We don't cancel in progress jobs, but if you want to benchmark multiple PRs, you're gonna have - # to wait - group: ${{ github.actor }}-asv - cancel-in-progress: false - - steps: - - name: Checkout - uses: actions/checkout@v3 - with: - fetch-depth: 0 - - # Although asv sets up its own env, deps are still needed - # during discovery process - - name: Set up Conda - uses: ./.github/actions/setup-conda - - - name: Run benchmarks - id: bench - continue-on-error: true # This is a fake failure, asv will exit code 1 for regressions - run: | - # extracting the regex, see https://stackoverflow.com/a/36798723 - REGEX=$(echo "$COMMENT" | sed -n "s/^.*-b\s*\(\S*\).*$/\1/p") - cd asv_bench - asv check -E existing - git remote add upstream https://github.com/pandas-dev/pandas.git - git fetch upstream - asv machine --yes - asv continuous -f 1.1 -b $REGEX upstream/main HEAD - echo 'BENCH_OUTPUT<<EOF' >> $GITHUB_ENV - asv compare -f 1.1 upstream/main HEAD >> $GITHUB_ENV - echo 'EOF' >> $GITHUB_ENV - echo "REGEX=$REGEX" >> $GITHUB_ENV - - - uses: actions/github-script@v6 - env: - BENCH_OUTPUT: ${{env.BENCH_OUTPUT}} - REGEX: ${{env.REGEX}} - with: - script: | - const ENV_VARS = process.env - const run_url = `https://github.com/${context.repo.owner}/${context.repo.repo}/actions/runs/${context.runId}` - github.rest.issues.createComment({ - issue_number: context.issue.number, - owner: context.repo.owner, - repo: context.repo.repo, - body: '\nBenchmarks completed. View runner logs here.' 
+ run_url + '\nRegex used: '+ 'regex ' + ENV_VARS["REGEX"] + '\n' + ENV_VARS["BENCH_OUTPUT"] - }) diff --git a/.github/workflows/autoupdate-pre-commit-config.yml b/.github/workflows/autoupdate-pre-commit-config.yml deleted file mode 100644 index 376aa8343c571..0000000000000 --- a/.github/workflows/autoupdate-pre-commit-config.yml +++ /dev/null @@ -1,39 +0,0 @@ -name: "Update pre-commit config" - -on: - schedule: - - cron: "0 7 1 * *" # At 07:00 on 1st of every month. - workflow_dispatch: - -permissions: - contents: read - -jobs: - update-pre-commit: - permissions: - contents: write # for technote-space/create-pr-action to push code - pull-requests: write # for technote-space/create-pr-action to create a PR - if: github.repository_owner == 'pandas-dev' - name: Autoupdate pre-commit config - runs-on: ubuntu-22.04 - steps: - - name: Set up Python - uses: actions/setup-python@v4 - - name: Cache multiple paths - uses: actions/cache@v3 - with: - path: | - ~/.cache/pre-commit - ~/.cache/pip - key: pre-commit-autoupdate-${{ runner.os }}-build - - name: Update pre-commit config packages - uses: technote-space/create-pr-action@v2 - with: - GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }} - EXECUTE_COMMANDS: | - pip install pre-commit - pre-commit autoupdate || (exit 0); - pre-commit run -a || (exit 0); - COMMIT_MESSAGE: "⬆️ UPGRADE: Autoupdate pre-commit config" - PR_BRANCH_NAME: "pre-commit-config-update-${PR_ID}" - PR_TITLE: "⬆️ UPGRADE: Autoupdate pre-commit config" diff --git a/.github/workflows/code-checks.yml b/.github/workflows/code-checks.yml index 280b6ed601f08..fdca728f90b36 100644 --- a/.github/workflows/code-checks.yml +++ b/.github/workflows/code-checks.yml @@ -38,153 +38,3 @@ jobs: uses: pre-commit/action@v2.0.3 with: extra_args: --verbose --all-files - - docstring_typing_manual_hooks: - name: Docstring validation, typing, and other manual pre-commit hooks - runs-on: ubuntu-22.04 - defaults: - run: - shell: bash -el {0} - - concurrency: - # 
https://github.community/t/concurrecy-not-work-for-push/183068/7 - group: ${{ github.event_name == 'push' && github.run_number || github.ref }}-code-checks - cancel-in-progress: true - - steps: - - name: Checkout - uses: actions/checkout@v3 - with: - fetch-depth: 0 - - - name: Set up Conda - uses: ./.github/actions/setup-conda - - - name: Build Pandas - id: build - uses: ./.github/actions/build_pandas - - # The following checks are independent of each other and should still be run if one fails - - name: Check for no warnings when building single-page docs - run: ci/code_checks.sh single-docs - if: ${{ steps.build.outcome == 'success' && always() }} - - - name: Run checks on imported code - run: ci/code_checks.sh code - if: ${{ steps.build.outcome == 'success' && always() }} - - - name: Run doctests - run: ci/code_checks.sh doctests - if: ${{ steps.build.outcome == 'success' && always() }} - - - name: Run docstring validation - run: ci/code_checks.sh docstrings - if: ${{ steps.build.outcome == 'success' && always() }} - - - name: Run check of documentation notebooks - run: ci/code_checks.sh notebooks - if: ${{ steps.build.outcome == 'success' && always() }} - - - name: Use existing environment for type checking - run: | - echo $PATH >> $GITHUB_PATH - echo "PYTHONHOME=$PYTHONHOME" >> $GITHUB_ENV - echo "PYTHONPATH=$PYTHONPATH" >> $GITHUB_ENV - if: ${{ steps.build.outcome == 'success' && always() }} - - - name: Typing + pylint - uses: pre-commit/action@v2.0.3 - with: - extra_args: --verbose --hook-stage manual --all-files - if: ${{ steps.build.outcome == 'success' && always() }} - - - name: Run docstring validation script tests - run: pytest scripts - if: ${{ steps.build.outcome == 'success' && always() }} - - asv-benchmarks: - name: ASV Benchmarks - runs-on: ubuntu-22.04 - defaults: - run: - shell: bash -el {0} - - concurrency: - # https://github.community/t/concurrecy-not-work-for-push/183068/7 - group: ${{ github.event_name == 'push' && github.run_number || 
github.ref }}-asv-benchmarks - cancel-in-progress: true - - steps: - - name: Checkout - uses: actions/checkout@v3 - with: - fetch-depth: 0 - - - name: Set up Conda - uses: ./.github/actions/setup-conda - - - name: Build Pandas - id: build - uses: ./.github/actions/build_pandas - - - name: Run ASV benchmarks - run: | - cd asv_bench - asv machine --yes - asv run --quick --dry-run --strict --durations=30 --python=same - - build_docker_dev_environment: - name: Build Docker Dev Environment - runs-on: ubuntu-22.04 - defaults: - run: - shell: bash -el {0} - - concurrency: - # https://github.community/t/concurrecy-not-work-for-push/183068/7 - group: ${{ github.event_name == 'push' && github.run_number || github.ref }}-build_docker_dev_environment - cancel-in-progress: true - - steps: - - name: Clean up dangling images - run: docker image prune -f - - - name: Checkout - uses: actions/checkout@v3 - with: - fetch-depth: 0 - - - name: Build image - run: docker build --pull --no-cache --tag pandas-dev-env . 
- - - name: Show environment - run: docker run --rm pandas-dev-env python -c "import pandas as pd; print(pd.show_versions())" - - requirements-dev-text-installable: - name: Test install requirements-dev.txt - runs-on: ubuntu-22.04 - - concurrency: - # https://github.community/t/concurrecy-not-work-for-push/183068/7 - group: ${{ github.event_name == 'push' && github.run_number || github.ref }}-requirements-dev-text-installable - cancel-in-progress: true - - steps: - - name: Checkout - uses: actions/checkout@v3 - with: - fetch-depth: 0 - - - name: Setup Python - id: setup_python - uses: actions/setup-python@v4 - with: - python-version: '3.8' - cache: 'pip' - cache-dependency-path: 'requirements-dev.txt' - - - name: Install requirements-dev.txt - run: pip install -r requirements-dev.txt - - - name: Check Pip Cache Hit - run: echo ${{ steps.setup_python.outputs.cache-hit }} diff --git a/.github/workflows/codeql.yml b/.github/workflows/codeql.yml deleted file mode 100644 index 05a5d003c1dd1..0000000000000 --- a/.github/workflows/codeql.yml +++ /dev/null @@ -1,31 +0,0 @@ -name: CodeQL -on: - schedule: - # every day at midnight - - cron: "0 0 * * *" - -concurrency: - group: ${{ github.repository }}-${{ github.head_ref || github.sha }}-${{ github.workflow }} - cancel-in-progress: true - -jobs: - analyze: - runs-on: ubuntu-22.04 - permissions: - actions: read - contents: read - security-events: write - - strategy: - fail-fast: false - matrix: - language: - - python - - steps: - - uses: actions/checkout@v3 - - uses: github/codeql-action/init@v2 - with: - languages: ${{ matrix.language }} - - uses: github/codeql-action/autobuild@v2 - - uses: github/codeql-action/analyze@v2 diff --git a/.github/workflows/docbuild-and-upload.yml b/.github/workflows/docbuild-and-upload.yml deleted file mode 100644 index 7a9f491228a83..0000000000000 --- a/.github/workflows/docbuild-and-upload.yml +++ /dev/null @@ -1,95 +0,0 @@ -name: Doc Build and Upload - -on: - push: - branches: - - main - - 
1.5.x - tags: - - '*' - pull_request: - branches: - - main - - 1.5.x - -env: - ENV_FILE: environment.yml - PANDAS_CI: 1 - GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }} - -permissions: - contents: read - -jobs: - web_and_docs: - name: Doc Build and Upload - runs-on: ubuntu-22.04 - - concurrency: - # https://github.community/t/concurrecy-not-work-for-push/183068/7 - group: ${{ github.event_name == 'push' && github.run_number || github.ref }}-web-docs - cancel-in-progress: true - - defaults: - run: - shell: bash -el {0} - - steps: - - name: Checkout - uses: actions/checkout@v3 - with: - fetch-depth: 0 - - - name: Set up Conda - uses: ./.github/actions/setup-conda - - - name: Build Pandas - uses: ./.github/actions/build_pandas - - - name: Set up maintainers cache - uses: actions/cache@v3 - with: - path: maintainers.json - key: maintainers - - - name: Build website - run: python web/pandas_web.py web/pandas --target-path=web/build - - - name: Build documentation - run: doc/make.py --warnings-are-errors - - - name: Build documentation zip - run: doc/make.py zip_html - - - name: Install ssh key - run: | - mkdir -m 700 -p ~/.ssh - echo "${{ secrets.server_ssh_key }}" > ~/.ssh/id_rsa - chmod 600 ~/.ssh/id_rsa - echo "${{ secrets.server_ip }} ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFjYkJBk7sos+r7yATODogQc3jUdW1aascGpyOD4bohj8dWjzwLJv/OJ/fyOQ5lmj81WKDk67tGtqNJYGL9acII=" > ~/.ssh/known_hosts - if: github.event_name == 'push' && (github.ref == 'refs/heads/main' || startsWith(github.ref, 'refs/tags/')) - - - name: Copy cheatsheets into site directory - run: cp doc/cheatsheet/Pandas_Cheat_Sheet* web/build/ - - - name: Upload web - run: rsync -az --delete --exclude='pandas-docs' --exclude='docs' web/build/ web@${{ secrets.server_ip }}:/var/www/html - if: github.event_name == 'push' && github.ref == 'refs/heads/main' - - - name: Upload dev docs - run: rsync -az --delete doc/build/html/ web@${{ secrets.server_ip }}:/var/www/html/pandas-docs/dev - if: 
github.event_name == 'push' && github.ref == 'refs/heads/main' - - - name: Upload prod docs - run: rsync -az --delete doc/build/html/ web@${{ secrets.server_ip }}:/var/www/html/pandas-docs/version/${GITHUB_REF_NAME:1} - if: github.event_name == 'push' && startsWith(github.ref, 'refs/tags/') - - - name: Move docs into site directory - run: mv doc/build/html web/build/docs - - - name: Save website as an artifact - uses: actions/upload-artifact@v3 - with: - name: website - path: web/build - retention-days: 14 diff --git a/.github/workflows/macos-windows.yml b/.github/workflows/macos-windows.yml deleted file mode 100644 index d762e20db196a..0000000000000 --- a/.github/workflows/macos-windows.yml +++ /dev/null @@ -1,62 +0,0 @@ -name: Windows-macOS - -on: - push: - branches: - - main - - 1.5.x - pull_request: - branches: - - main - - 1.5.x - paths-ignore: - - "doc/**" - -env: - PANDAS_CI: 1 - PYTEST_TARGET: pandas - PATTERN: "not slow and not db and not network and not single_cpu" - ERROR_ON_WARNINGS: "1" - - -permissions: - contents: read - -jobs: - pytest: - defaults: - run: - shell: bash -el {0} - timeout-minutes: 180 - strategy: - matrix: - os: [macos-latest, windows-latest] - env_file: [actions-38.yaml, actions-39.yaml, actions-310.yaml] - fail-fast: false - runs-on: ${{ matrix.os }} - name: ${{ format('{0} {1}', matrix.os, matrix.env_file) }} - concurrency: - # https://github.community/t/concurrecy-not-work-for-push/183068/7 - group: ${{ github.event_name == 'push' && github.run_number || github.ref }}-${{ matrix.env_file }}-${{ matrix.os }} - cancel-in-progress: true - env: - # GH 47443: PYTEST_WORKERS > 1 crashes Windows builds with memory related errors - PYTEST_WORKERS: ${{ matrix.os == 'macos-latest' && 'auto' || '1' }} - - steps: - - name: Checkout - uses: actions/checkout@v3 - with: - fetch-depth: 0 - - - name: Set up Conda - uses: ./.github/actions/setup-conda - with: - environment-file: ci/deps/${{ matrix.env_file }} - pyarrow-version: ${{ matrix.os == 
'macos-latest' && '9' || '' }} - - - name: Build Pandas - uses: ./.github/actions/build_pandas - - - name: Test - uses: ./.github/actions/run-tests diff --git a/.github/workflows/package-checks.yml b/.github/workflows/package-checks.yml deleted file mode 100644 index eb065c6e2e87d..0000000000000 --- a/.github/workflows/package-checks.yml +++ /dev/null @@ -1,52 +0,0 @@ -name: Package Checks - -on: - push: - branches: - - main - - 1.5.x - pull_request: - branches: - - main - - 1.5.x - types: [ labeled, opened, synchronize, reopened ] - -permissions: - contents: read - -jobs: - pip: - if: ${{ github.event.label.name == 'Build' || contains(github.event.pull_request.labels.*.name, 'Build') || github.event_name == 'push'}} - runs-on: ubuntu-22.04 - strategy: - matrix: - extra: ["test", "performance", "timezone", "computation", "fss", "aws", "gcp", "excel", "parquet", "feather", "hdf5", "spss", "postgresql", "mysql", "sql-other", "html", "xml", "plot", "output_formatting", "clipboard", "compression", "all"] - fail-fast: false - name: Install Extras - ${{ matrix.extra }} - concurrency: - # https://github.community/t/concurrecy-not-work-for-push/183068/7 - group: ${{ github.event_name == 'push' && github.run_number || github.ref }}-pip-extras-${{ matrix.extra }} - cancel-in-progress: true - - steps: - - name: Checkout - uses: actions/checkout@v3 - with: - fetch-depth: 0 - - - name: Setup Python - id: setup_python - uses: actions/setup-python@v4 - with: - python-version: '3.8' - - - name: Install required dependencies - run: | - python -m pip install --upgrade pip setuptools wheel python-dateutil pytz numpy cython - python -m pip install versioneer[toml] - shell: bash -el {0} - - - name: Pip install with extra - run: | - python -m pip install -e .[${{ matrix.extra }}] --no-build-isolation - shell: bash -el {0} diff --git a/.github/workflows/python-dev.yml b/.github/workflows/python-dev.yml deleted file mode 100644 index 220c1e464742e..0000000000000 --- 
a/.github/workflows/python-dev.yml +++ /dev/null @@ -1,93 +0,0 @@ -# This workflow may or may not run depending on the state of the next -# unreleased Python version. DO NOT DELETE IT. -# -# In general, this file will remain frozen(present, but not running) until: -# - The next unreleased Python version has released beta 1 -# - This version should be available on GitHub Actions. -# - Our required build/runtime dependencies(numpy, pytz, Cython, python-dateutil) -# support that unreleased Python version. -# To unfreeze, comment out the ``if: false`` condition, and make sure you update -# the name of the workflow and Python version in actions/setup-python to: '3.12-dev' -# -# After it has been unfrozen, this file should remain unfrozen(present, and running) until: -# - The next Python version has been officially released. -# OR -# - Most/All of our optional dependencies support Python 3.11 AND -# - The next Python version has released a rc(we are guaranteed a stable ABI). -# To freeze this file, uncomment out the ``if: false`` condition, and migrate the jobs -# to the corresponding posix/windows-macos/sdist etc. workflows. -# Feel free to modify this comment as necessary. 
- -name: Python Dev - -on: - push: - branches: - - main - - 1.5.x - pull_request: - branches: - - main - - 1.5.x - paths-ignore: - - "doc/**" - -env: - PYTEST_WORKERS: "auto" - PANDAS_CI: 1 - PATTERN: "not slow and not network and not clipboard and not single_cpu" - COVERAGE: true - PYTEST_TARGET: pandas - -permissions: - contents: read - -jobs: - build: - # if: false # Uncomment this to freeze the workflow, comment it to unfreeze - runs-on: ${{ matrix.os }} - strategy: - fail-fast: false - matrix: - os: [ubuntu-22.04, macOS-latest, windows-latest] - - name: actions-311-dev - timeout-minutes: 120 - - concurrency: - #https://github.community/t/concurrecy-not-work-for-push/183068/7 - group: ${{ github.event_name == 'push' && github.run_number || github.ref }}-${{ matrix.os }}-${{ matrix.pytest_target }}-dev - cancel-in-progress: true - - steps: - - uses: actions/checkout@v3 - with: - fetch-depth: 0 - - - name: Set up Python Dev Version - uses: actions/setup-python@v4 - with: - python-version: '3.11-dev' - - - name: Install dependencies - run: | - python --version - python -m pip install --upgrade pip setuptools wheel - python -m pip install -i https://pypi.anaconda.org/scipy-wheels-nightly/simple numpy - python -m pip install git+https://github.com/nedbat/coveragepy.git - python -m pip install versioneer[toml] - python -m pip install python-dateutil pytz cython hypothesis==6.52.1 pytest>=6.2.5 pytest-xdist pytest-cov pytest-asyncio>=0.17 - python -m pip list - - # GH 47305: Parallel build can cause flaky ImportError from pandas/_libs/tslibs - - name: Build Pandas - run: | - python setup.py build_ext -q -j1 - python -m pip install -e . 
--no-build-isolation --no-use-pep517 --no-index - - - name: Build Version - run: | - python -c "import pandas; pandas.show_versions();" - - - name: Test - uses: ./.github/actions/run-tests diff --git a/.github/workflows/sdist.yml b/.github/workflows/sdist.yml deleted file mode 100644 index d11b614e2b2c0..0000000000000 --- a/.github/workflows/sdist.yml +++ /dev/null @@ -1,96 +0,0 @@ -name: sdist - -on: - push: - branches: - - main - - 1.5.x - pull_request: - branches: - - main - - 1.5.x - types: [labeled, opened, synchronize, reopened] - paths-ignore: - - "doc/**" - -permissions: - contents: read - -jobs: - build: - if: ${{ github.event.label.name == 'Build' || contains(github.event.pull_request.labels.*.name, 'Build') || github.event_name == 'push'}} - runs-on: ubuntu-22.04 - timeout-minutes: 60 - defaults: - run: - shell: bash -el {0} - - strategy: - fail-fast: false - matrix: - python-version: ["3.8", "3.9", "3.10", "3.11"] - concurrency: - # https://github.community/t/concurrecy-not-work-for-push/183068/7 - group: ${{ github.event_name == 'push' && github.run_number || github.ref }}-${{matrix.python-version}}-sdist - cancel-in-progress: true - - steps: - - uses: actions/checkout@v3 - with: - fetch-depth: 0 - - - name: Set up Python - uses: actions/setup-python@v4 - with: - python-version: ${{ matrix.python-version }} - - - name: Install dependencies - run: | - python -m pip install --upgrade pip setuptools wheel - python -m pip install versioneer[toml] - - # GH 39416 - pip install numpy - - - name: Build pandas sdist - run: | - pip list - python setup.py sdist --formats=gztar - - - name: Upload sdist artifact - uses: actions/upload-artifact@v3 - with: - name: ${{matrix.python-version}}-sdist.gz - path: dist/*.gz - - - name: Set up Conda - uses: ./.github/actions/setup-conda - with: - environment-file: false - environment-name: pandas-sdist - extra-specs: | - python =${{ matrix.python-version }} - - - name: Install pandas from sdist - run: | - pip list - python 
-m pip install dist/*.gz - - - name: Force oldest supported NumPy - run: | - case "${{matrix.python-version}}" in - 3.8) - pip install numpy==1.20.3 ;; - 3.9) - pip install numpy==1.20.3 ;; - 3.10) - pip install numpy==1.21.2 ;; - 3.11) - pip install numpy==1.23.2 ;; - esac - - - name: Import pandas - run: | - cd .. - conda list - python -c "import pandas; pandas.show_versions();" diff --git a/.github/workflows/stale-pr.yml b/.github/workflows/stale-pr.yml deleted file mode 100644 index c47745e097d17..0000000000000 --- a/.github/workflows/stale-pr.yml +++ /dev/null @@ -1,26 +0,0 @@ -name: "Stale PRs" -on: - schedule: - # * is a special character in YAML so you have to quote this string - - cron: "0 0 * * *" - -permissions: - contents: read - -jobs: - stale: - permissions: - pull-requests: write - runs-on: ubuntu-22.04 - steps: - - uses: actions/stale@v4 - with: - repo-token: ${{ secrets.GITHUB_TOKEN }} - stale-pr-message: "This pull request is stale because it has been open for thirty days with no activity. Please [update](https://pandas.pydata.org/pandas-docs/stable/development/contributing.html#updating-your-pull-request) and respond to this comment if you're still interested in working on this." 
- stale-pr-label: "Stale" - exempt-pr-labels: "Needs Review,Blocked,Needs Discussion" - days-before-issue-stale: -1 - days-before-pr-stale: 30 - days-before-close: -1 - remove-stale-when-updated: false - debug-only: false diff --git a/.github/workflows/ubuntu.yml b/.github/workflows/ubuntu.yml deleted file mode 100644 index 9c93725ea15ec..0000000000000 --- a/.github/workflows/ubuntu.yml +++ /dev/null @@ -1,181 +0,0 @@ -name: Ubuntu - -on: - push: - branches: - - main - - 1.5.x - pull_request: - branches: - - main - - 1.5.x - paths-ignore: - - "doc/**" - -env: - PANDAS_CI: 1 - -permissions: - contents: read - -jobs: - pytest: - runs-on: ubuntu-22.04 - defaults: - run: - shell: bash -el {0} - timeout-minutes: 180 - strategy: - matrix: - env_file: [actions-38.yaml, actions-39.yaml, actions-310.yaml] - pattern: ["not single_cpu", "single_cpu"] - pyarrow_version: ["7", "8", "9", "10"] - include: - - name: "Downstream Compat" - env_file: actions-38-downstream_compat.yaml - pattern: "not slow and not network and not single_cpu" - pytest_target: "pandas/tests/test_downstream.py" - - name: "Minimum Versions" - env_file: actions-38-minimum_versions.yaml - pattern: "not slow and not network and not single_cpu" - error_on_warnings: "0" - - name: "Locale: it_IT" - env_file: actions-38.yaml - pattern: "not slow and not network and not single_cpu" - extra_apt: "language-pack-it" - # Use the utf8 version as the default, it has no bad side-effect. - lang: "it_IT.utf8" - lc_all: "it_IT.utf8" - # Also install it_IT (its encoding is ISO8859-1) but do not activate it. - # It will be temporarily activated during tests with locale.setlocale - extra_loc: "it_IT" - - name: "Locale: zh_CN" - env_file: actions-38.yaml - pattern: "not slow and not network and not single_cpu" - extra_apt: "language-pack-zh-hans" - # Use the utf8 version as the default, it has no bad side-effect. - lang: "zh_CN.utf8" - lc_all: "zh_CN.utf8" - # Also install zh_CN (its encoding is gb2312) but do not activate it. 
- # It will be temporarily activated during tests with locale.setlocale - extra_loc: "zh_CN" - - name: "Copy-on-Write" - env_file: actions-310.yaml - pattern: "not slow and not network and not single_cpu" - pandas_copy_on_write: "1" - error_on_warnings: "0" - - name: "Data Manager" - env_file: actions-38.yaml - pattern: "not slow and not network and not single_cpu" - pandas_data_manager: "array" - error_on_warnings: "0" - - name: "Pypy" - env_file: actions-pypy-38.yaml - pattern: "not slow and not network and not single_cpu" - test_args: "--max-worker-restart 0" - error_on_warnings: "0" - - name: "Numpy Dev" - env_file: actions-310-numpydev.yaml - pattern: "not slow and not network and not single_cpu" - test_args: "-W error::DeprecationWarning:numpy -W error::FutureWarning:numpy" - error_on_warnings: "0" - exclude: - - env_file: actions-38.yaml - pyarrow_version: "7" - - env_file: actions-38.yaml - pyarrow_version: "8" - - env_file: actions-38.yaml - pyarrow_version: "9" - - env_file: actions-39.yaml - pyarrow_version: "7" - - env_file: actions-39.yaml - pyarrow_version: "8" - - env_file: actions-39.yaml - pyarrow_version: "9" - fail-fast: false - name: ${{ matrix.name || format('{0} pyarrow={1} {2}', matrix.env_file, matrix.pyarrow_version, matrix.pattern) }} - env: - ENV_FILE: ci/deps/${{ matrix.env_file }} - PATTERN: ${{ matrix.pattern }} - EXTRA_APT: ${{ matrix.extra_apt || '' }} - ERROR_ON_WARNINGS: ${{ matrix.error_on_warnings || '1' }} - LANG: ${{ matrix.lang || '' }} - LC_ALL: ${{ matrix.lc_all || '' }} - PANDAS_DATA_MANAGER: ${{ matrix.pandas_data_manager || 'block' }} - PANDAS_COPY_ON_WRITE: ${{ matrix.pandas_copy_on_write || '0' }} - TEST_ARGS: ${{ matrix.test_args || '' }} - PYTEST_WORKERS: ${{ contains(matrix.pattern, 'not single_cpu') && 'auto' || '1' }} - PYTEST_TARGET: ${{ matrix.pytest_target || 'pandas' }} - IS_PYPY: ${{ contains(matrix.env_file, 'pypy') }} - # TODO: re-enable coverage on pypy, its slow - COVERAGE: ${{ !contains(matrix.env_file, 
'pypy') }} - concurrency: - # https://github.community/t/concurrecy-not-work-for-push/183068/7 - group: ${{ github.event_name == 'push' && github.run_number || github.ref }}-${{ matrix.env_file }}-${{ matrix.pattern }}-${{ matrix.pyarrow_version || '' }}-${{ matrix.extra_apt || '' }}-${{ matrix.pandas_data_manager || '' }} - cancel-in-progress: true - - services: - mysql: - image: mysql - env: - MYSQL_ALLOW_EMPTY_PASSWORD: yes - MYSQL_DATABASE: pandas - options: >- - --health-cmd "mysqladmin ping" - --health-interval 10s - --health-timeout 5s - --health-retries 5 - ports: - - 3306:3306 - - postgres: - image: postgres - env: - POSTGRES_USER: postgres - POSTGRES_PASSWORD: postgres - POSTGRES_DB: pandas - options: >- - --health-cmd pg_isready - --health-interval 10s - --health-timeout 5s - --health-retries 5 - ports: - - 5432:5432 - - moto: - image: motoserver/moto - env: - AWS_ACCESS_KEY_ID: foobar_key - AWS_SECRET_ACCESS_KEY: foobar_secret - ports: - - 5000:5000 - - steps: - - name: Checkout - uses: actions/checkout@v3 - with: - fetch-depth: 0 - - - name: Extra installs - # xsel for clipboard tests - run: sudo apt-get update && sudo apt-get install -y xsel ${{ env.EXTRA_APT }} - - - name: Generate extra locales - # These extra locales will be available for locale.setlocale() calls in tests - run: | - sudo locale-gen ${{ matrix.extra_loc }} - if: ${{ matrix.extra_loc }} - - - name: Set up Conda - uses: ./.github/actions/setup-conda - with: - environment-file: ${{ env.ENV_FILE }} - pyarrow-version: ${{ matrix.pyarrow_version }} - - - name: Build Pandas - uses: ./.github/actions/build_pandas - - - name: Test - uses: ./.github/actions/run-tests - # TODO: Don't continue on error for PyPy - continue-on-error: ${{ env.IS_PYPY == 'true' }} diff --git a/.github/workflows/wheels.yml b/.github/workflows/wheels.yml deleted file mode 100644 index 49d29c91f86cd..0000000000000 --- a/.github/workflows/wheels.yml +++ /dev/null @@ -1,199 +0,0 @@ -# Workflow to build wheels for upload 
to PyPI. -# Inspired by numpy's cibuildwheel config https://github.com/numpy/numpy/blob/main/.github/workflows/wheels.yml -# -# In an attempt to save CI resources, wheel builds do -# not run on each push but only weekly and for releases. -# Wheel builds can be triggered from the Actions page -# (if you have the perms) on a commit to master. -# -# Alternatively, you can add labels to the pull request in order to trigger wheel -# builds. -# The label(s) that trigger builds are: -# - Build -name: Wheel builder - -on: - schedule: - # β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€ minute (0 - 59) - # β”‚ β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€ hour (0 - 23) - # β”‚ β”‚ β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€ day of the month (1 - 31) - # β”‚ β”‚ β”‚ β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€ month (1 - 12 or JAN-DEC) - # β”‚ β”‚ β”‚ β”‚ β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€ day of the week (0 - 6 or SUN-SAT) - # β”‚ β”‚ β”‚ β”‚ β”‚ - - cron: "27 3 */1 * *" - push: - pull_request: - types: [labeled, opened, synchronize, reopened] - workflow_dispatch: - -concurrency: - group: ${{ github.workflow }}-${{ github.head_ref || github.run_id }} - cancel-in-progress: true - -jobs: - build_wheels: - name: Build wheel for ${{ matrix.python[0] }}-${{ matrix.buildplat[1] }} - if: >- - github.event_name == 'schedule' || - github.event_name == 'workflow_dispatch' || - (github.event_name == 'pull_request' && - contains(github.event.pull_request.labels.*.name, 'Build')) || - (github.event_name == 'push' && startsWith(github.ref, 'refs/tags/v') && ( ! 
endsWith(github.ref, 'dev0'))) - runs-on: ${{ matrix.buildplat[0] }} - strategy: - # Ensure that a wheel builder finishes even if another fails - fail-fast: false - matrix: - # GitHub Actions doesn't support pairing matrix values together, let's improvise - # https://github.com/github/feedback/discussions/7835#discussioncomment-1769026 - buildplat: - - [ubuntu-20.04, manylinux_x86_64] - - [macos-11, macosx_*] - - [windows-2019, win_amd64] - - [windows-2019, win32] - # TODO: support PyPy? - python: [["cp38", "3.8"], ["cp39", "3.9"], ["cp310", "3.10"], ["cp311", "3.11"]]# "pp38", "pp39"] - env: - IS_PUSH: ${{ github.event_name == 'push' && startsWith(github.ref, 'refs/tags/v') }} - IS_SCHEDULE_DISPATCH: ${{ github.event_name == 'schedule' || github.event_name == 'workflow_dispatch' }} - steps: - - name: Checkout pandas - uses: actions/checkout@v3 - with: - submodules: true - # versioneer.py requires the latest tag to be reachable. Here we - # fetch the complete history to get access to the tags. - # A shallow clone can work when the following issue is resolved: - # https://github.com/actions/checkout/issues/338 - fetch-depth: 0 - - - name: Build wheels - uses: pypa/cibuildwheel@v2.9.0 - env: - CIBW_BUILD: ${{ matrix.python[0] }}-${{ matrix.buildplat[1] }} - - # Used to test(Windows-only) and push the built wheels - # You might need to use setup-python separately - # if the new Python-dev version - # is unavailable on conda-forge. 
- - uses: conda-incubator/setup-miniconda@v2 - with: - auto-update-conda: true - python-version: ${{ matrix.python[1] }} - activate-environment: test - channels: conda-forge, anaconda - channel-priority: true - mamba-version: "*" - - - name: Test wheels (Windows 64-bit only) - if: ${{ matrix.buildplat[1] == 'win_amd64' }} - shell: cmd /C CALL {0} - run: | - python ci/test_wheels.py wheelhouse - - - uses: actions/upload-artifact@v3 - with: - name: ${{ matrix.python[0] }}-${{ startsWith(matrix.buildplat[1], 'macosx') && 'macosx' || matrix.buildplat[1] }} - path: ./wheelhouse/*.whl - - - - name: Install anaconda client - if: ${{ success() && (env.IS_SCHEDULE_DISPATCH == 'true' || env.IS_PUSH == 'true') }} - shell: bash -el {0} - run: conda install -q -y anaconda-client - - - - name: Upload wheels - if: ${{ success() && (env.IS_SCHEDULE_DISPATCH == 'true' || env.IS_PUSH == 'true') }} - shell: bash -el {0} - env: - PANDAS_STAGING_UPLOAD_TOKEN: ${{ secrets.PANDAS_STAGING_UPLOAD_TOKEN }} - PANDAS_NIGHTLY_UPLOAD_TOKEN: ${{ secrets.PANDAS_NIGHTLY_UPLOAD_TOKEN }} - run: | - source ci/upload_wheels.sh - set_upload_vars - # trigger an upload to - # https://anaconda.org/scipy-wheels-nightly/pandas - # for cron jobs or "Run workflow" (restricted to main branch). - # Tags will upload to - # https://anaconda.org/multibuild-wheels-staging/pandas - # The tokens were originally generated at anaconda.org - upload_wheels - build_sdist: - name: Build sdist - if: >- - github.event_name == 'schedule' || - github.event_name == 'workflow_dispatch' || - (github.event_name == 'pull_request' && - contains(github.event.pull_request.labels.*.name, 'Build')) || - (github.event_name == 'push' && startsWith(github.ref, 'refs/tags/v') && ( ! 
endsWith(github.ref, 'dev0'))) - runs-on: ubuntu-22.04 - env: - IS_PUSH: ${{ github.event_name == 'push' && startsWith(github.ref, 'refs/tags/v') }} - IS_SCHEDULE_DISPATCH: ${{ github.event_name == 'schedule' || github.event_name == 'workflow_dispatch' }} - steps: - - name: Checkout pandas - uses: actions/checkout@v3 - with: - submodules: true - # versioneer.py requires the latest tag to be reachable. Here we - # fetch the complete history to get access to the tags. - # A shallow clone can work when the following issue is resolved: - # https://github.com/actions/checkout/issues/338 - fetch-depth: 0 - - # Used to push the built sdist - - uses: conda-incubator/setup-miniconda@v2 - with: - auto-update-conda: true - # Really doesn't matter what version we upload with - # just the version we test with - python-version: '3.8' - channels: conda-forge - channel-priority: true - mamba-version: "*" - - - name: Build sdist - run: | - pip install build - python -m build --sdist - - name: Test the sdist - shell: bash -el {0} - run: | - # TODO: Don't run test suite, and instead build wheels from sdist - # by splitting the wheel builders into a two stage job - # (1. Generate sdist 2. Build wheels from sdist) - # This tests the sdists, and saves some build time - python -m pip install dist/*.gz - pip install hypothesis==6.52.1 pytest>=6.2.5 pytest-xdist pytest-asyncio>=0.17 - cd .. 
# Not a good idea to test within the src tree - python -c "import pandas; print(pandas.__version__); - pandas.test(extra_args=['-m not clipboard and not single_cpu', '--skip-slow', '--skip-network', '--skip-db', '-n=2']); - pandas.test(extra_args=['-m not clipboard and single_cpu', '--skip-slow', '--skip-network', '--skip-db'])" - - uses: actions/upload-artifact@v3 - with: - name: sdist - path: ./dist/* - - - name: Install anaconda client - if: ${{ success() && (env.IS_SCHEDULE_DISPATCH == 'true' || env.IS_PUSH == 'true') }} - shell: bash -el {0} - run: | - conda install -q -y anaconda-client - - - name: Upload sdist - if: ${{ success() && (env.IS_SCHEDULE_DISPATCH == 'true' || env.IS_PUSH == 'true') }} - shell: bash -el {0} - env: - PANDAS_STAGING_UPLOAD_TOKEN: ${{ secrets.PANDAS_STAGING_UPLOAD_TOKEN }} - PANDAS_NIGHTLY_UPLOAD_TOKEN: ${{ secrets.PANDAS_NIGHTLY_UPLOAD_TOKEN }} - run: | - source ci/upload_wheels.sh - set_upload_vars - # trigger an upload to - # https://anaconda.org/scipy-wheels-nightly/pandas - # for cron jobs or "Run workflow" (restricted to main branch). 
- # Tags will upload to - # https://anaconda.org/multibuild-wheels-staging/pandas - # The tokens were originally generated at anaconda.org - upload_wheels diff --git a/.pre-commit-config.yaml b/.pre-commit-config.yaml index f3158e64df8dd..3350196b9b1c9 100644 --- a/.pre-commit-config.yaml +++ b/.pre-commit-config.yaml @@ -75,7 +75,6 @@ repos: additional_dependencies: &flake8_dependencies - flake8==6.0.0 - flake8-bugbear==22.7.1 - - pandas-dev-flaker==0.5.0 - repo: https://github.com/pycqa/pylint rev: v2.15.6 hooks: diff --git a/environment.yml b/environment.yml index b6b8f7d6af1ba..9eaf52977d6d2 100644 --- a/environment.yml +++ b/environment.yml @@ -90,7 +90,6 @@ dependencies: - gitdb - natsort # DataFrame.sort_values doctest - numpydoc - - pandas-dev-flaker=0.5.0 - pydata-sphinx-theme<0.11 - pytest-cython # doctest - sphinx diff --git a/requirements-dev.txt b/requirements-dev.txt index 4f2a80d932fd0..428c2cb92d040 100644 --- a/requirements-dev.txt +++ b/requirements-dev.txt @@ -65,7 +65,6 @@ gitpython gitdb natsort numpydoc -pandas-dev-flaker==0.5.0 pydata-sphinx-theme<0.11 pytest-cython sphinx
- [ ] closes #xxxx (Replace xxxx with the GitHub issue number) - [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature - [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit). - [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions. - [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
https://api.github.com/repos/pandas-dev/pandas/pulls/50518
2022-12-31T18:46:52Z
2022-12-31T19:00:06Z
null
2022-12-31T19:00:26Z
CLN/TST: test_series_align_multiindex_containing_nan
diff --git a/pandas/tests/indexing/multiindex/test_multiindex.py b/pandas/tests/indexing/multiindex/test_multiindex.py index f5f58e7e818d9..8e507212976ec 100644 --- a/pandas/tests/indexing/multiindex/test_multiindex.py +++ b/pandas/tests/indexing/multiindex/test_multiindex.py @@ -151,57 +151,35 @@ def test_rename_multiindex_with_duplicates(self): expected = DataFrame(index=mi2) tm.assert_frame_equal(df, expected) - @pytest.mark.parametrize( - "data_result, data_expected", - [ - ( - [ - [(81.0, np.nan), (np.nan, np.nan)], - [(np.nan, np.nan), (82.0, np.nan)], - [1, 2], - [1, 2], - ], - [ - [[81, 82.0, np.nan], Series([np.nan, np.nan, np.nan])], - [[81, 82.0, np.nan], Series([np.nan, np.nan, np.nan])], - [1, np.nan, 2], - [np.nan, 2, 1], - ], - ), - ( - [ - [(81.0, np.nan), (np.nan, np.nan)], - [(np.nan, np.nan), (81.0, np.nan)], - [1, 2], - [1, 2], - ], - [ - [[81.0, np.nan], Series([np.nan, np.nan])], - [[81.0, np.nan], Series([np.nan, np.nan])], - [1, 2], - [2, 1], - ], - ), - ], - ) - def test_subtracting_two_series_with_unordered_index_and_all_nan_index( - self, data_result, data_expected - ): + def test_series_align_multiindex_with_nan_overlap_only(self): + # GH 38439 + mi1 = MultiIndex.from_arrays([[81.0, np.nan], [np.nan, np.nan]]) + mi2 = MultiIndex.from_arrays([[np.nan, 82.0], [np.nan, np.nan]]) + ser1 = Series([1, 2], index=mi1) + ser2 = Series([1, 2], index=mi2) + result1, result2 = ser1.align(ser2) + + mi = MultiIndex.from_arrays([[81.0, 82.0, np.nan], [np.nan, np.nan, np.nan]]) + expected1 = Series([1.0, np.nan, 2.0], index=mi) + expected2 = Series([np.nan, 2.0, 1.0], index=mi) + + tm.assert_series_equal(result1, expected1) + tm.assert_series_equal(result2, expected2) + + def test_series_align_multiindex_with_nan(self): # GH 38439 - # TODO: Refactor. 
This is impossible to understand GH#49443 - a_index_result = MultiIndex.from_tuples(data_result[0]) - b_index_result = MultiIndex.from_tuples(data_result[1]) - a_series_result = Series(data_result[2], index=a_index_result) - b_series_result = Series(data_result[3], index=b_index_result) - result = a_series_result.align(b_series_result) - - a_index_expected = MultiIndex.from_arrays(data_expected[0]) - b_index_expected = MultiIndex.from_arrays(data_expected[1]) - a_series_expected = Series(data_expected[2], index=a_index_expected) - b_series_expected = Series(data_expected[3], index=b_index_expected) - - tm.assert_series_equal(result[0], a_series_expected) - tm.assert_series_equal(result[1], b_series_expected) + mi1 = MultiIndex.from_arrays([[81.0, np.nan], [np.nan, np.nan]]) + mi2 = MultiIndex.from_arrays([[np.nan, 81.0], [np.nan, np.nan]]) + ser1 = Series([1, 2], index=mi1) + ser2 = Series([1, 2], index=mi2) + result1, result2 = ser1.align(ser2) + + mi = MultiIndex.from_arrays([[81.0, np.nan], [np.nan, np.nan]]) + expected1 = Series([1, 2], index=mi) + expected2 = Series([2, 1], index=mi) + + tm.assert_series_equal(result1, expected1) + tm.assert_series_equal(result2, expected2) def test_nunique_smoke(self): # GH 34019
- [x] closes #49443 - [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature - [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit). - [x] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
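The behavior under test — aligning two Series whose MultiIndexes overlap only on NaN-containing labels — hinges on NaN labels matching each other during the index union. A minimal pure-Python sketch of that matching rule (the helpers `same_label` and `union_index` are illustrative, not pandas internals, and pandas additionally sorts the union with NaN last, which is omitted here):

```python
import math

def same_label(x, y):
    # Alignment must treat NaN as matching NaN, unlike plain float
    # equality, mirroring how index engines pair missing labels.
    if isinstance(x, float) and isinstance(y, float):
        return (math.isnan(x) and math.isnan(y)) or x == y
    return x == y

def union_index(idx1, idx2):
    """Union of tuple labels, keeping first-seen order."""
    out = list(idx1)
    for label in idx2:
        seen_before = any(
            len(label) == len(seen)
            and all(same_label(a, b) for a, b in zip(label, seen))
            for seen in out
        )
        if not seen_before:
            out.append(label)
    return out

nan = float("nan")
mi1 = [(81.0, nan), (nan, nan)]
mi2 = [(nan, nan), (82.0, nan)]
# Three distinct labels survive: (81.0, nan), (nan, nan), (82.0, nan),
# matching the 3-row expected index in the first test above.
combined = union_index(mi1, mi2)
```

With fully-overlapping labels (the second test), the union collapses back to two rows, because `(nan, nan)` from one index matches `(nan, nan)` from the other.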
https://api.github.com/repos/pandas-dev/pandas/pulls/50516
2022-12-31T14:22:30Z
2023-01-04T19:02:41Z
2023-01-04T19:02:41Z
2023-01-19T04:31:29Z
TST: add numexpr fixture
diff --git a/pandas/conftest.py b/pandas/conftest.py index 3b167d9ef4fe2..c1b62a8a0f63e 100644 --- a/pandas/conftest.py +++ b/pandas/conftest.py @@ -1253,6 +1253,28 @@ def utc_fixture(request): utc_fixture2 = utc_fixture +@pytest.fixture(params=[True, False], ids=["numexpr", "python"]) +def use_numexpr(request): + """ + If True always use numexpr for evaluations, else never. + """ + from pandas.core.computation import expressions as expr + + if request.param and not expr.NUMEXPR_INSTALLED: + pytest.skip("numexpr not installed") + + use_numexpr = expr.USE_NUMEXPR + _min_elements = expr._MIN_ELEMENTS + + expr.set_use_numexpr(request.param) + if request.param: + expr._MIN_ELEMENTS = 0 + yield request.param + + expr.set_use_numexpr(use_numexpr) + expr._MIN_ELEMENTS = _min_elements + + # ---------------------------------------------------------------- # Dtypes # ---------------------------------------------------------------- diff --git a/pandas/tests/arithmetic/conftest.py b/pandas/tests/arithmetic/conftest.py index b734344d25174..1b57faee6599c 100644 --- a/pandas/tests/arithmetic/conftest.py +++ b/pandas/tests/arithmetic/conftest.py @@ -9,22 +9,10 @@ Int64Index, UInt64Index, ) -from pandas.core.computation import expressions as expr - - -@pytest.fixture(autouse=True, params=[0, 1000000], ids=["numexpr", "python"]) -def switch_numexpr_min_elements(request): - _MIN_ELEMENTS = expr._MIN_ELEMENTS - expr._MIN_ELEMENTS = request.param - yield request.param - expr._MIN_ELEMENTS = _MIN_ELEMENTS - # ------------------------------------------------------------------ -# doctest with +SKIP for one fixture fails during setup with -# 'DoctestItem' object has no attribute 'callspec' -# due to switch_numexpr_min_elements fixture + @pytest.fixture(params=[1, np.array(1, dtype=np.int64)]) def one(request): """ @@ -61,9 +49,6 @@ def one(request): zeros.extend([0, 0.0, -0.0]) -# doctest with +SKIP for zero fixture fails during setup with -# 'DoctestItem' object has no attribute 
'callspec' -# due to switch_numexpr_min_elements fixture @pytest.fixture(params=zeros) def zero(request): """ diff --git a/pandas/tests/arithmetic/test_numeric.py b/pandas/tests/arithmetic/test_numeric.py index 93225d937697f..0792ff91bf864 100644 --- a/pandas/tests/arithmetic/test_numeric.py +++ b/pandas/tests/arithmetic/test_numeric.py @@ -27,7 +27,6 @@ Int64Index, UInt64Index, ) -from pandas.core.computation import expressions as expr from pandas.tests.arithmetic.common import ( assert_invalid_addsub_type, assert_invalid_comparison, @@ -392,7 +391,7 @@ def test_div_negative_zero(self, zero, numeric_idx, op): @pytest.mark.parametrize("dtype1", [np.int64, np.float64, np.uint64]) def test_ser_div_ser( self, - switch_numexpr_min_elements, + use_numexpr, dtype1, any_real_numpy_dtype, ): @@ -412,7 +411,7 @@ def test_ser_div_ser( if first.dtype == "int64" and second.dtype == "float32": # when using numexpr, the casting rules are slightly different # and int64/float32 combo results in float32 instead of float64 - if expr.USE_NUMEXPR and switch_numexpr_min_elements == 0: + if use_numexpr: expected = expected.astype("float32") result = first / second diff --git a/pandas/tests/frame/test_arithmetic.py b/pandas/tests/frame/test_arithmetic.py index 241e2df377af6..ba262aee1156e 100644 --- a/pandas/tests/frame/test_arithmetic.py +++ b/pandas/tests/frame/test_arithmetic.py @@ -22,7 +22,6 @@ ) import pandas._testing as tm import pandas.core.common as com -from pandas.core.computation import expressions as expr from pandas.core.computation.expressions import ( _MIN_ELEMENTS, NUMEXPR_INSTALLED, @@ -33,14 +32,6 @@ ) -@pytest.fixture(autouse=True, params=[0, 1000000], ids=["numexpr", "python"]) -def switch_numexpr_min_elements(request): - _MIN_ELEMENTS = expr._MIN_ELEMENTS - expr._MIN_ELEMENTS = request.param - yield request.param - expr._MIN_ELEMENTS = _MIN_ELEMENTS - - class DummyElement: def __init__(self, value, dtype) -> None: self.value = value @@ -571,7 +562,7 @@ def 
test_arith_flex_frame_mixed( int_frame, mixed_int_frame, mixed_float_frame, - switch_numexpr_min_elements, + use_numexpr, ): f = getattr(operator, op) @@ -585,7 +576,7 @@ def test_arith_flex_frame_mixed( dtype = {"B": "uint64", "C": None} elif op in ["__add__", "__mul__"]: dtype = {"C": None} - if expr.USE_NUMEXPR and switch_numexpr_min_elements == 0: + if use_numexpr: # when using numexpr, the casting rules are slightly different: # in the `2 + mixed_int_frame` operation, int32 column becomes # and int64 column (not preserving dtype in operation with Python @@ -1067,7 +1058,7 @@ def test_frame_with_frame_reindex(self): ], ids=lambda x: x.__name__, ) - def test_binop_other(self, op, value, dtype, switch_numexpr_min_elements): + def test_binop_other(self, op, value, dtype, use_numexpr): skip = { (operator.truediv, "bool"), @@ -1116,7 +1107,7 @@ def test_binop_other(self, op, value, dtype, switch_numexpr_min_elements): elif (op, dtype) in skip: if op in [operator.add, operator.mul]: - if expr.USE_NUMEXPR and switch_numexpr_min_elements == 0: + if use_numexpr: # "evaluating in Python space because ..." warn = UserWarning else: diff --git a/pandas/tests/series/test_arithmetic.py b/pandas/tests/series/test_arithmetic.py index 27410a626811c..9c143636d1ecd 100644 --- a/pandas/tests/series/test_arithmetic.py +++ b/pandas/tests/series/test_arithmetic.py @@ -29,15 +29,6 @@ nanops, ops, ) -from pandas.core.computation import expressions as expr - - -@pytest.fixture(autouse=True, params=[0, 1000000], ids=["numexpr", "python"]) -def switch_numexpr_min_elements(request): - _MIN_ELEMENTS = expr._MIN_ELEMENTS - expr._MIN_ELEMENTS = request.param - yield request.param - expr._MIN_ELEMENTS = _MIN_ELEMENTS def _permute(obj):
Adds a `use_numexpr` fixture to `conftest.py`, deletes the three duplicated `switch_numexpr_min_elements` fixtures, and simplifies the numexpr tests.
https://api.github.com/repos/pandas-dev/pandas/pulls/50513
2022-12-31T00:06:35Z
2022-12-31T01:39:34Z
null
2022-12-31T01:39:34Z
BUG: Series.isnull() respect use_inf_as_na
diff --git a/pandas/core/dtypes/missing.py b/pandas/core/dtypes/missing.py index 000b5ebbdd2f7..d22dc060c5b8b 100644 --- a/pandas/core/dtypes/missing.py +++ b/pandas/core/dtypes/missing.py @@ -284,7 +284,7 @@ def _isna_array(values: ArrayLike, inf_as_na: bool = False): if not isinstance(values, np.ndarray): # i.e. ExtensionArray - if inf_as_na and is_categorical_dtype(dtype): + if inf_as_na or is_categorical_dtype(dtype): result = libmissing.isnaobj(values.to_numpy(), inf_as_na=inf_as_na) else: # error: Incompatible types in assignment (expression has type
- [x] closes #50495 - [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature - [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit). - [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions. - [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
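The one-character fix above (`and` → `or`) changes which extension-array-backed values go through the inf-aware missing-value check when ``use_inf_as_na`` is enabled. Since that option has been deprecated in later pandas versions, the sketch below only demonstrates the stable default semantics plus an explicit, option-free way to treat infinities as missing:

```python
import numpy as np
import pandas as pd

# By default, infinities are valid values, not missing ones.
ser = pd.Series([1.0, np.inf, np.nan])
print(ser.isna().tolist())  # [False, False, True]

# The bug fixed here: with the (since-deprecated) mode.use_inf_as_na option
# enabled, some extension-array-backed Series skipped the inf-aware path,
# so np.inf was still reported as non-missing. A version-agnostic way to
# get the "inf counts as missing" semantics is to build the mask explicitly:
mask = ser.isna() | np.isinf(ser.fillna(0.0))
print(mask.tolist())  # [False, True, True]
```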
https://api.github.com/repos/pandas-dev/pandas/pulls/50512
2022-12-30T22:50:38Z
2023-01-31T21:34:56Z
null
2023-01-31T21:34:57Z
BUG: to_dict not converting masked dtype to native python types
diff --git a/doc/source/whatsnew/v2.0.0.rst b/doc/source/whatsnew/v2.0.0.rst index b1387e9717079..a5e34b2c3c1f6 100644 --- a/doc/source/whatsnew/v2.0.0.rst +++ b/doc/source/whatsnew/v2.0.0.rst @@ -913,6 +913,7 @@ I/O - Bug in displaying ``string`` dtypes not showing storage option (:issue:`50099`) - Bug in :meth:`DataFrame.to_string` with ``header=False`` that printed the index name on the same line as the first row of the data (:issue:`49230`) - Bug in :meth:`DataFrame.to_string` ignoring float formatter for extension arrays (:issue:`39336`) +- Bug in :meth:`DataFrame.to_dict` not converting elements to native Python types for masked arrays (:issue:`34665`) - Fixed memory leak which stemmed from the initialization of the internal JSON module (:issue:`49222`) - Fixed issue where :func:`json_normalize` would incorrectly remove leading characters from column names that matched the ``sep`` argument (:issue:`49861`) - Bug in :meth:`DataFrame.to_json` where it would segfault when failing to encode a string (:issue:`50307`) diff --git a/pandas/core/arrays/masked.py b/pandas/core/arrays/masked.py index ec3fdc6db9dc9..f467eb39f842e 100644 --- a/pandas/core/arrays/masked.py +++ b/pandas/core/arrays/masked.py @@ -250,15 +250,15 @@ def __setitem__(self, key, value) -> None: def __iter__(self) -> Iterator: if self.ndim == 1: if not self._hasna: - for val in self._data: - yield val + for i, val in enumerate(self._data): + yield self._data.item(i) else: na_value = self.dtype.na_value - for isna_, val in zip(self._mask, self._data): + for i, (isna_, val) in enumerate(zip(self._mask, self._data)): if isna_: yield na_value else: - yield val + yield self._data.item(i) else: for i in range(len(self)): yield self[i] diff --git a/pandas/tests/frame/methods/test_to_dict.py b/pandas/tests/frame/methods/test_to_dict.py index c76699cafd481..1c0f698b1d039 100644 --- a/pandas/tests/frame/methods/test_to_dict.py +++ b/pandas/tests/frame/methods/test_to_dict.py @@ -9,6 +9,7 @@ import pytz from 
pandas import ( + NA, DataFrame, Index, MultiIndex, @@ -458,3 +459,13 @@ def test_to_dict_index_false(self, orient, expected): df = DataFrame({"col1": [1, 2], "col2": [3, 4]}, index=["row1", "row2"]) result = df.to_dict(orient=orient, index=False) tm.assert_dict_equal(result, expected) + + def test_to_dict_masked_native_python(self): + # GH#34665 + df = DataFrame({"a": Series([1, 2], dtype="Int64"), "B": 1}) + result = df.to_dict(orient="records") + assert type(result[0]["a"]) is int + + df = DataFrame({"a": Series([1, NA], dtype="Int64"), "B": 1}) + result = df.to_dict(orient="records") + assert type(result[0]["a"]) is int
- [x] closes #34665 (Replace xxxx with the GitHub issue number) - [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature - [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit). - [x] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions. - [x] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
https://api.github.com/repos/pandas-dev/pandas/pulls/50510
2022-12-30T18:29:37Z
2023-01-19T21:45:29Z
null
2023-01-19T21:46:55Z
DOC: Add ignore_functions option to validate_docstrings.py
diff --git a/ci/code_checks.sh b/ci/code_checks.sh index 3c1362b1ac83e..5d2f176d6bcd8 100755 --- a/ci/code_checks.sh +++ b/ci/code_checks.sh @@ -83,6 +83,36 @@ if [[ -z "$CHECK" || "$CHECK" == "docstrings" ]]; then $BASE_DIR/scripts/validate_docstrings.py --format=actions --errors=EX04,GL01,GL02,GL03,GL04,GL05,GL06,GL07,GL09,GL10,PR03,PR04,PR05,PR06,PR08,PR09,PR10,RT01,RT04,RT05,SA02,SA03,SA04,SS01,SS02,SS03,SS04,SS05,SS06 RET=$(($RET + $?)) ; echo $MSG "DONE" + MSG='Partially validate docstrings (RT02)' ; echo $MSG + $BASE_DIR/scripts/validate_docstrings.py --format=actions --errors=RT02 --ignore_functions \ + pandas.Series.align \ + pandas.Series.dt.total_seconds \ + pandas.Series.cat.rename_categories \ + pandas.Series.cat.reorder_categories \ + pandas.Series.cat.add_categories \ + pandas.Series.cat.remove_categories \ + pandas.Series.cat.remove_unused_categories \ + pandas.Index.all \ + pandas.Index.any \ + pandas.CategoricalIndex.rename_categories \ + pandas.CategoricalIndex.reorder_categories \ + pandas.CategoricalIndex.add_categories \ + pandas.CategoricalIndex.remove_categories \ + pandas.CategoricalIndex.remove_unused_categories \ + pandas.MultiIndex.drop \ + pandas.DatetimeIndex.to_pydatetime \ + pandas.TimedeltaIndex.to_pytimedelta \ + pandas.core.groupby.SeriesGroupBy.apply \ + pandas.core.groupby.DataFrameGroupBy.apply \ + pandas.io.formats.style.Styler.export \ + pandas.api.extensions.ExtensionArray.astype \ + pandas.api.extensions.ExtensionArray.dropna \ + pandas.api.extensions.ExtensionArray.isna \ + pandas.api.extensions.ExtensionArray.repeat \ + pandas.api.extensions.ExtensionArray.unique \ + pandas.DataFrame.align + RET=$(($RET + $?)) ; echo $MSG "DONE" + fi ### DOCUMENTATION NOTEBOOKS ### diff --git a/scripts/tests/test_validate_docstrings.py b/scripts/tests/test_validate_docstrings.py index b490c2ffdc2e8..0b7ab145b054a 100644 --- a/scripts/tests/test_validate_docstrings.py +++ b/scripts/tests/test_validate_docstrings.py @@ -199,6 +199,32 @@ def 
test_leftover_files_raises(self): self._import_path(klass="BadDocstrings", func="leftover_files") ) + def test_validate_all_ignore_functions(self, monkeypatch): + monkeypatch.setattr( + validate_docstrings, + "get_all_api_items", + lambda: [ + ( + "pandas.DataFrame.align", + "func", + "current_section", + "current_subsection", + ), + ( + "pandas.Index.all", + "func", + "current_section", + "current_subsection", + ), + ], + ) + result = validate_docstrings.validate_all( + prefix=None, + ignore_functions=["pandas.DataFrame.align"], + ) + assert len(result) == 1 + assert "pandas.Index.all" in result + def test_validate_all_ignore_deprecated(self, monkeypatch): monkeypatch.setattr( validate_docstrings, @@ -339,6 +365,7 @@ def test_exit_status_for_main(self, monkeypatch): errors=[], output_format="default", ignore_deprecated=False, + ignore_functions=None, ) assert exit_status == 0 @@ -346,7 +373,7 @@ def test_exit_status_errors_for_validate_all(self, monkeypatch): monkeypatch.setattr( validate_docstrings, "validate_all", - lambda prefix, ignore_deprecated=False: { + lambda prefix, ignore_deprecated=False, ignore_functions=None: { "docstring1": { "errors": [ ("ER01", "err desc"), @@ -369,6 +396,7 @@ def test_exit_status_errors_for_validate_all(self, monkeypatch): errors=[], output_format="default", ignore_deprecated=False, + ignore_functions=None, ) assert exit_status == 5 @@ -376,7 +404,7 @@ def test_no_exit_status_noerrors_for_validate_all(self, monkeypatch): monkeypatch.setattr( validate_docstrings, "validate_all", - lambda prefix, ignore_deprecated=False: { + lambda prefix, ignore_deprecated=False, ignore_functions=None: { "docstring1": {"errors": [], "warnings": [("WN01", "warn desc")]}, "docstring2": {"errors": []}, }, @@ -387,6 +415,7 @@ def test_no_exit_status_noerrors_for_validate_all(self, monkeypatch): errors=[], output_format="default", ignore_deprecated=False, + ignore_functions=None, ) assert exit_status == 0 @@ -395,7 +424,7 @@ def 
test_exit_status_for_validate_all_json(self, monkeypatch): monkeypatch.setattr( validate_docstrings, "validate_all", - lambda prefix, ignore_deprecated=False: { + lambda prefix, ignore_deprecated=False, ignore_functions=None: { "docstring1": { "errors": [ ("ER01", "err desc"), @@ -412,6 +441,7 @@ def test_exit_status_for_validate_all_json(self, monkeypatch): errors=[], output_format="json", ignore_deprecated=False, + ignore_functions=None, ) assert exit_status == 0 @@ -419,7 +449,7 @@ def test_errors_param_filters_errors(self, monkeypatch): monkeypatch.setattr( validate_docstrings, "validate_all", - lambda prefix, ignore_deprecated=False: { + lambda prefix, ignore_deprecated=False, ignore_functions=None: { "Series.foo": { "errors": [ ("ER01", "err desc"), @@ -447,6 +477,7 @@ def test_errors_param_filters_errors(self, monkeypatch): errors=["ER01"], output_format="default", ignore_deprecated=False, + ignore_functions=None, ) assert exit_status == 3 @@ -456,5 +487,6 @@ def test_errors_param_filters_errors(self, monkeypatch): errors=["ER03"], output_format="default", ignore_deprecated=False, + ignore_functions=None, ) assert exit_status == 1 diff --git a/scripts/validate_docstrings.py b/scripts/validate_docstrings.py index a86630eba7d5d..5d0ef6e460486 100755 --- a/scripts/validate_docstrings.py +++ b/scripts/validate_docstrings.py @@ -295,7 +295,7 @@ def pandas_validate(func_name: str): return result -def validate_all(prefix, ignore_deprecated=False): +def validate_all(prefix, ignore_deprecated=False, ignore_functions=None): """ Execute the validation of all docstrings, and return a dict with the results. @@ -307,6 +307,8 @@ def validate_all(prefix, ignore_deprecated=False): validated. If None, all docstrings will be validated. ignore_deprecated: bool, default False If True, deprecated objects are ignored when validating docstrings. 
+ ignore_functions: list of str or None, default None + If not None, contains a list of function to ignore Returns ------- @@ -317,14 +319,11 @@ def validate_all(prefix, ignore_deprecated=False): result = {} seen = {} - base_path = pathlib.Path(__file__).parent.parent - api_doc_fnames = pathlib.Path(base_path, "doc", "source", "reference") - api_items = [] - for api_doc_fname in api_doc_fnames.glob("*.rst"): - with open(api_doc_fname) as f: - api_items += list(get_api_items(f)) + ignore_functions = set(ignore_functions or []) - for func_name, _, section, subsection in api_items: + for func_name, _, section, subsection in get_all_api_items(): + if func_name in ignore_functions: + continue if prefix and not func_name.startswith(prefix): continue doc_info = pandas_validate(func_name) @@ -348,16 +347,25 @@ def validate_all(prefix, ignore_deprecated=False): return result +def get_all_api_items(): + base_path = pathlib.Path(__file__).parent.parent + api_doc_fnames = pathlib.Path(base_path, "doc", "source", "reference") + for api_doc_fname in api_doc_fnames.glob("*.rst"): + with open(api_doc_fname) as f: + yield from get_api_items(f) + + def print_validate_all_results( prefix: str, errors: list[str] | None, output_format: str, ignore_deprecated: bool, + ignore_functions: list[str] | None, ): if output_format not in ("default", "json", "actions"): raise ValueError(f'Unknown output_format "{output_format}"') - result = validate_all(prefix, ignore_deprecated) + result = validate_all(prefix, ignore_deprecated, ignore_functions) if output_format == "json": sys.stdout.write(json.dumps(result)) @@ -408,13 +416,17 @@ def header(title, width=80, char="#"): sys.stderr.write(result["examples_errs"]) -def main(func_name, prefix, errors, output_format, ignore_deprecated): +def main(func_name, prefix, errors, output_format, ignore_deprecated, ignore_functions): """ Main entry point. Call the validation for one or for all docstrings. 
""" if func_name is None: return print_validate_all_results( - prefix, errors, output_format, ignore_deprecated + prefix, + errors, + output_format, + ignore_deprecated, + ignore_functions, ) else: print_validate_one_results(func_name) @@ -464,6 +476,13 @@ def main(func_name, prefix, errors, output_format, ignore_deprecated): "deprecated objects are ignored when validating " "all docstrings", ) + argparser.add_argument( + "--ignore_functions", + nargs="*", + help="function or method to not validate " + "(e.g. pandas.DataFrame.head). " + "Inverse of the `function` argument.", + ) args = argparser.parse_args() sys.exit( @@ -473,5 +492,6 @@ def main(func_name, prefix, errors, output_format, ignore_deprecated): args.errors.split(",") if args.errors else None, args.format, args.ignore_deprecated, + args.ignore_functions, ) )
- [ ] closes #xxxx (Replace xxxx with the GitHub issue number) - [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature - [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit). - [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions. - [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature. Following @MarcoGorelli's [comment](https://github.com/pandas-dev/pandas/issues/49968#issuecomment-1333467996), this PR adds an option to `validate_docstrings.py` to ignore functions. This makes it possible to ignore the remaining `RT02` errors, described in https://github.com/pandas-dev/pandas/issues/49968, in `ci/code_checks.sh`. This will also be useful for other documentation-related errors that require multiple fixes.
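The flag's effect can be sketched as a plain filter over the API items; the tuples below are made-up stand-ins for what `get_all_api_items()` yields:

```python
# Illustrative stand-ins for (func_name, kind, section, subsection) tuples.
api_items = [
    ("pandas.DataFrame.align", "func", "section", "subsection"),
    ("pandas.Index.all", "func", "section", "subsection"),
]

def filter_api_items(api_items, ignore_functions=None, prefix=None):
    # Mirrors the skip logic validate_all() applies before validating
    # each docstring: drop ignored names, then apply the prefix filter.
    ignore = set(ignore_functions or [])
    for func_name, kind, section, subsection in api_items:
        if func_name in ignore:
            continue
        if prefix and not func_name.startswith(prefix):
            continue
        yield func_name

print(list(filter_api_items(api_items, ignore_functions=["pandas.DataFrame.align"])))
# ['pandas.Index.all']
```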
https://api.github.com/repos/pandas-dev/pandas/pulls/50509
2022-12-30T18:01:33Z
2023-01-08T16:44:29Z
2023-01-08T16:44:29Z
2023-03-04T10:47:10Z
BUG: DataFrameGroupBy.value_counts fails with a TimeGrouper
diff --git a/doc/source/whatsnew/v2.0.0.rst b/doc/source/whatsnew/v2.0.0.rst index 22f6659367683..1d733566a0e0b 100644 --- a/doc/source/whatsnew/v2.0.0.rst +++ b/doc/source/whatsnew/v2.0.0.rst @@ -945,6 +945,7 @@ Groupby/resample/rolling - Bug in :meth:`.SeriesGroupBy.nunique` would incorrectly raise when the grouper was an empty categorical and ``observed=True`` (:issue:`21334`) - Bug in :meth:`.SeriesGroupBy.nth` would raise when grouper contained NA values after subsetting from a :class:`DataFrameGroupBy` (:issue:`26454`) - Bug in :meth:`DataFrame.groupby` would not include a :class:`.Grouper` specified by ``key`` in the result when ``as_index=False`` (:issue:`50413`) +- Bug in :meth:`.DataFrameGrouBy.value_counts` would raise when used with a :class:`.TimeGrouper` (:issue:`50486`) - Reshaping diff --git a/pandas/core/groupby/grouper.py b/pandas/core/groupby/grouper.py index e323877a512b0..66ad1b3ea7196 100644 --- a/pandas/core/groupby/grouper.py +++ b/pandas/core/groupby/grouper.py @@ -425,14 +425,22 @@ class Grouping: If we are a Categorical, use the observed values in_axis : if the Grouping is a column in self.obj and hence among Groupby.exclusions list + dropna : bool, default True + Whether to drop NA groups. + uniques : Array-like, optional + When specified, will be used for unique values. Enables including empty groups + in the result for a BinGrouper. Must not contain duplicates. 
- Returns + Attributes ------- - **Attributes**: - * indices : dict of {group -> index_list} - * codes : ndarray, group codes - * group_index : unique groups - * groups : dict of {group -> label_list} + indices : dict + Mapping of {group -> index_list} + codes : ndarray + Group codes + group_index : Index or None + unique groups + groups : dict + Mapping of {group -> label_list} """ _codes: npt.NDArray[np.signedinteger] | None = None @@ -452,6 +460,7 @@ def __init__( observed: bool = False, in_axis: bool = False, dropna: bool = True, + uniques: ArrayLike | None = None, ) -> None: self.level = level self._orig_grouper = grouper @@ -464,6 +473,7 @@ def __init__( self._observed = observed self.in_axis = in_axis self._dropna = dropna + self._uniques = uniques self._passed_categorical = False @@ -653,6 +663,7 @@ def group_index(self) -> Index: @cache_readonly def _codes_and_uniques(self) -> tuple[npt.NDArray[np.signedinteger], ArrayLike]: + uniques: ArrayLike if self._passed_categorical: # we make a CategoricalIndex out of the cat grouper # preserving the categories / ordered attributes; @@ -697,11 +708,13 @@ def _codes_and_uniques(self) -> tuple[npt.NDArray[np.signedinteger], ArrayLike]: elif isinstance(self.grouping_vector, ops.BaseGrouper): # we have a list of groupers codes = self.grouping_vector.codes_info - # error: Incompatible types in assignment (expression has type "Union - # [ExtensionArray, ndarray[Any, Any]]", variable has type "Categorical") - uniques = ( - self.grouping_vector.result_index._values # type: ignore[assignment] - ) + uniques = self.grouping_vector.result_index._values + elif self._uniques is not None: + # GH#50486 Code grouping_vector using _uniques; allows + # including uniques that are not present in grouping_vector. 
+ cat = Categorical(self.grouping_vector, categories=self._uniques) + codes = cat.codes + uniques = self._uniques else: # GH35667, replace dropna=False with use_na_sentinel=False # error: Incompatible types in assignment (expression has type "Union[ diff --git a/pandas/core/groupby/ops.py b/pandas/core/groupby/ops.py index c20fe34a178f5..ea902800cf7e0 100644 --- a/pandas/core/groupby/ops.py +++ b/pandas/core/groupby/ops.py @@ -1214,7 +1214,11 @@ def names(self) -> list[Hashable]: @property def groupings(self) -> list[grouper.Grouping]: lev = self.binlabels - ping = grouper.Grouping(lev, lev, in_axis=False, level=None) + codes = self.group_info[0] + labels = lev.take(codes) + ping = grouper.Grouping( + labels, labels, in_axis=False, level=None, uniques=lev._values + ) return [ping] def _aggregate_series_fast(self, obj: Series, func: Callable) -> NoReturn: diff --git a/pandas/tests/groupby/test_frame_value_counts.py b/pandas/tests/groupby/test_frame_value_counts.py index 8255fbab40dce..56aa121cd48c2 100644 --- a/pandas/tests/groupby/test_frame_value_counts.py +++ b/pandas/tests/groupby/test_frame_value_counts.py @@ -4,9 +4,11 @@ from pandas import ( CategoricalIndex, DataFrame, + Grouper, Index, MultiIndex, Series, + to_datetime, ) import pandas._testing as tm @@ -781,3 +783,39 @@ def test_subset_duplicate_columns(): ), ) tm.assert_series_equal(result, expected) + + +@pytest.mark.parametrize("utc", [True, False]) +def test_value_counts_time_grouper(utc): + # GH#50486 + df = DataFrame( + { + "Timestamp": [ + 1565083561, + 1565083561 + 86400, + 1565083561 + 86500, + 1565083561 + 86400 * 2, + 1565083561 + 86400 * 3, + 1565083561 + 86500 * 3, + 1565083561 + 86400 * 4, + ], + "Food": ["apple", "apple", "banana", "banana", "orange", "orange", "pear"], + } + ).drop([3]) + + df["Datetime"] = to_datetime( + df["Timestamp"].apply(lambda t: str(t)), utc=utc, unit="s" + ) + gb = df.groupby(Grouper(freq="1D", key="Datetime")) + result = gb.value_counts() + dates = to_datetime( + 
["2019-08-06", "2019-08-07", "2019-08-09", "2019-08-10"], utc=utc + ) + timestamps = df["Timestamp"].unique() + index = MultiIndex( + levels=[dates, timestamps, ["apple", "banana", "orange", "pear"]], + codes=[[0, 1, 1, 2, 2, 3], range(6), [0, 0, 1, 2, 2, 3]], + names=["Datetime", "Timestamp", "Food"], + ) + expected = Series(1, index=index) + tm.assert_series_equal(result, expected) diff --git a/pandas/tests/groupby/test_value_counts.py b/pandas/tests/groupby/test_value_counts.py index 577a72d3f5090..11ee13ec05fd6 100644 --- a/pandas/tests/groupby/test_value_counts.py +++ b/pandas/tests/groupby/test_value_counts.py @@ -114,7 +114,8 @@ def rebuild_index(df): tm.assert_series_equal(left.sort_index(), right.sort_index()) -def test_series_groupby_value_counts_with_grouper(): +@pytest.mark.parametrize("utc", [True, False]) +def test_series_groupby_value_counts_with_grouper(utc): # GH28479 df = DataFrame( { @@ -131,7 +132,9 @@ def test_series_groupby_value_counts_with_grouper(): } ).drop([3]) - df["Datetime"] = to_datetime(df["Timestamp"].apply(lambda t: str(t)), unit="s") + df["Datetime"] = to_datetime( + df["Timestamp"].apply(lambda t: str(t)), utc=utc, unit="s" + ) dfg = df.groupby(Grouper(freq="1D", key="Datetime")) # have to sort on index because of unstable sort on values xref GH9212
- [x] closes #50486 (Replace xxxx with the GitHub issue number) - [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature - [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit). - [x] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions. - [x] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature. Unlike other Groupings, BinGrouper was creating its Grouping based on the labels that were to be in the result. Thus the `Grouping.grouping_vector` length did not in general match the input object, which then failed when combining with other Groupings that do match the input object. In order to include empty groups, e.g. the 2nd group below ``` df = pd.DataFrame( { "datetime": to_datetime(["2022-01-01", "2022-01-03"]), "values": [2, 3], } ) gb = df.groupby(pd.Grouper(freq="1D", key="datetime")) print(gb.sum()) # values # datetime # 2022-01-01 2 # 2022-01-02 0 # 2022-01-03 3 ``` this PR adds a `uniques` argument to Grouping and then relies on Categorical to encode the groups, including those that are missing.
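The core trick of the fix — coding the per-row bin labels against the full set of bins via `Categorical`, so empty bins keep a code slot — can be seen in isolation:

```python
import pandas as pd

# Observed bin labels (one per row) vs. the full set of daily bins.
observed = pd.to_datetime(["2022-01-01", "2022-01-03"])
all_bins = pd.date_range("2022-01-01", "2022-01-03", freq="D")

# Coding against all_bins assigns each row a code into the full bin set;
# 2022-01-02 has no rows but still occupies category slot 1.
cat = pd.Categorical(observed, categories=all_bins)
print(cat.codes.tolist())  # [0, 2]
```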
https://api.github.com/repos/pandas-dev/pandas/pulls/50507
2022-12-30T17:38:46Z
2023-01-03T22:42:04Z
2023-01-03T22:42:04Z
2023-01-04T00:47:34Z
BUG: to_numpy not respecting na_value before converting to array
diff --git a/asv_bench/benchmarks/series_methods.py b/asv_bench/benchmarks/series_methods.py index dc86352082cca..a0dd52e9f17e4 100644 --- a/asv_bench/benchmarks/series_methods.py +++ b/asv_bench/benchmarks/series_methods.py @@ -382,4 +382,23 @@ def time_iter(self, dtype): pass +class ToNumpy: + def setup(self): + N = 1_000_000 + self.ser = Series( + np.random.randn( + N, + ) + ) + + def time_to_numpy(self): + self.ser.to_numpy() + + def time_to_numpy_double_copy(self): + self.ser.to_numpy(dtype="float64", copy=True) + + def time_to_numpy_copy(self): + self.ser.to_numpy(copy=True) + + from .pandas_vb_common import setup # noqa: F401 isort:skip diff --git a/doc/source/whatsnew/v2.0.0.rst b/doc/source/whatsnew/v2.0.0.rst index b1387e9717079..d51885b7ad867 100644 --- a/doc/source/whatsnew/v2.0.0.rst +++ b/doc/source/whatsnew/v2.0.0.rst @@ -775,6 +775,7 @@ Performance improvements - Performance improvement when iterating over pyarrow and nullable dtypes (:issue:`49825`, :issue:`49851`) - Performance improvements to :func:`read_sas` (:issue:`47403`, :issue:`47405`, :issue:`47656`, :issue:`48502`) - Memory improvement in :meth:`RangeIndex.sort_values` (:issue:`48801`) +- Performance improvement in :meth:`Series.to_numpy` if ``copy=True`` by avoiding copying twice (:issue:`24345`) - Performance improvement in :class:`DataFrameGroupBy` and :class:`SeriesGroupBy` when ``by`` is a categorical type and ``sort=False`` (:issue:`48976`) - Performance improvement in :class:`DataFrameGroupBy` and :class:`SeriesGroupBy` when ``by`` is a categorical type and ``observed=False`` (:issue:`49596`) - Performance improvement in :func:`read_stata` with parameter ``index_col`` set to ``None`` (the default). 
Now the index will be a :class:`RangeIndex` instead of :class:`Int64Index` (:issue:`49745`) @@ -849,6 +850,7 @@ Conversion - Bug where any :class:`ExtensionDtype` subclass with ``kind="M"`` would be interpreted as a timezone type (:issue:`34986`) - Bug in :class:`.arrays.ArrowExtensionArray` that would raise ``NotImplementedError`` when passed a sequence of strings or binary (:issue:`49172`) - Bug in :meth:`Series.astype` raising ``pyarrow.ArrowInvalid`` when converting from a non-pyarrow string dtype to a pyarrow numeric type (:issue:`50430`) +- Bug in :meth:`Series.to_numpy` converting to NumPy array before applying ``na_value`` (:issue:`48951`) - Bug in :func:`to_datetime` was not respecting ``exact`` argument when ``format`` was an ISO8601 format (:issue:`12649`) - Bug in :meth:`TimedeltaArray.astype` raising ``TypeError`` when converting to a pyarrow duration type (:issue:`49795`) - diff --git a/pandas/core/base.py b/pandas/core/base.py index e5e0ac4e121ae..23121b7075fe1 100644 --- a/pandas/core/base.py +++ b/pandas/core/base.py @@ -531,12 +531,19 @@ def to_numpy( f"to_numpy() got an unexpected keyword argument '{bad_keys}'" ) - result = np.asarray(self._values, dtype=dtype) - # TODO(GH-24345): Avoid potential double copy - if copy or na_value is not lib.no_default: - result = result.copy() - if na_value is not lib.no_default: - result[np.asanyarray(self.isna())] = na_value + if na_value is not lib.no_default: + values = self._values.copy() + values[np.asanyarray(self.isna())] = na_value + else: + values = self._values + + result = np.asarray(values, dtype=dtype) + + if copy and na_value is lib.no_default: + if np.shares_memory(self._values[:2], result[:2]): + # Take slices to improve performance of check + result = result.copy() + return result @final diff --git a/pandas/tests/series/methods/test_to_numpy.py b/pandas/tests/series/methods/test_to_numpy.py new file mode 100644 index 0000000000000..487489e8c0b0c --- /dev/null +++ 
b/pandas/tests/series/methods/test_to_numpy.py @@ -0,0 +1,17 @@ +import numpy as np +import pytest + +from pandas import ( + NA, + Series, +) +import pandas._testing as tm + + +@pytest.mark.parametrize("dtype", ["int64", "float64"]) +def test_to_numpy_na_value(dtype): + # GH#48951 + ser = Series([1, 2, NA, 4]) + result = ser.to_numpy(dtype=dtype, na_value=0) + expected = np.array([1, 2, 0, 4], dtype=dtype) + tm.assert_numpy_array_equal(result, expected)
- [x] closes #48951 (Replace xxxx with the GitHub issue number) - [x] closes #24345 (Replace xxxx with the GitHub issue number) - [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature - [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit). - [x] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions. - [x] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
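A minimal sketch of the fixed path (pandas ≥ 2.0): `na_value` is now substituted into the values before `np.asarray`, so a direct cast of an object-dtype Series containing `pd.NA` to an integer dtype succeeds:

```python
import numpy as np
import pandas as pd

# Object-dtype Series with a missing value, as in the PR's test case.
ser = pd.Series([1, 2, pd.NA, 4])

# Previously the cast happened first (and could fail or lose the dtype);
# now the NA slot is filled with na_value before conversion.
arr = ser.to_numpy(dtype="int64", na_value=0)
print(arr)  # [1 2 0 4]
```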
https://api.github.com/repos/pandas-dev/pandas/pulls/50506
2022-12-30T16:55:41Z
2023-01-05T23:41:24Z
2023-01-05T23:41:24Z
2023-01-06T07:42:19Z
ENH: Add use_nullable_dtypes to to_numeric
diff --git a/doc/source/whatsnew/v2.0.0.rst b/doc/source/whatsnew/v2.0.0.rst index 5b725eb4d2a98..43571b3879622 100644 --- a/doc/source/whatsnew/v2.0.0.rst +++ b/doc/source/whatsnew/v2.0.0.rst @@ -44,6 +44,7 @@ The ``use_nullable_dtypes`` keyword argument has been expanded to the following * :func:`read_sql_query` * :func:`read_sql_table` * :func:`read_orc` +* :func:`to_numeric` Additionally a new global configuration, ``mode.dtype_backend`` can now be used in conjunction with the parameter ``use_nullable_dtypes=True`` in the following functions to select the nullable dtypes implementation. diff --git a/pandas/_libs/lib.pyx b/pandas/_libs/lib.pyx index 89e02ac0fa86d..176307ef27cff 100644 --- a/pandas/_libs/lib.pyx +++ b/pandas/_libs/lib.pyx @@ -2272,6 +2272,7 @@ def maybe_convert_numeric( if convert_empty or seen.coerce_numeric: seen.saw_null() floats[i] = complexes[i] = NaN + mask[i] = 1 else: raise ValueError("Empty string encountered") elif util.is_complex_object(val): @@ -2328,6 +2329,7 @@ def maybe_convert_numeric( seen.saw_null() floats[i] = NaN + mask[i] = 1 if seen.check_uint64_conflict(): return (values, None) diff --git a/pandas/core/tools/numeric.py b/pandas/core/tools/numeric.py index da47aa549dfa3..a8ae8c47b0d19 100644 --- a/pandas/core/tools/numeric.py +++ b/pandas/core/tools/numeric.py @@ -13,11 +13,13 @@ from pandas.core.dtypes.cast import maybe_downcast_numeric from pandas.core.dtypes.common import ( ensure_object, + is_bool_dtype, is_datetime_or_timedelta_dtype, is_decimal, is_integer_dtype, is_number, is_numeric_dtype, + is_object_dtype, is_scalar, needs_i8_conversion, ) @@ -27,13 +29,14 @@ ) import pandas as pd -from pandas.core.arrays.numeric import NumericArray +from pandas.core.arrays import BaseMaskedArray def to_numeric( arg, errors: DateTimeErrorChoices = "raise", downcast: Literal["integer", "signed", "unsigned", "float"] | None = None, + use_nullable_dtypes: bool = False, ): """ Convert argument to a numeric type. 
@@ -47,7 +50,7 @@ def to_numeric( numbers smaller than `-9223372036854775808` (np.iinfo(np.int64).min) or larger than `18446744073709551615` (np.iinfo(np.uint64).max) are passed in, it is very likely they will be converted to float so that - they can stored in an `ndarray`. These warnings apply similarly to + they can be stored in an `ndarray`. These warnings apply similarly to `Series` since it internally leverages `ndarray`. Parameters @@ -78,6 +81,10 @@ def to_numeric( the dtype it is to be cast to, so if none of the dtypes checked satisfy that specification, no downcasting will be performed on the data. + use_nullable_dtypes : bool = False + Whether or not to use nullable dtypes as default when converting data. If + set to True, nullable dtypes are used for all dtypes that have a nullable + implementation, even if no nulls are present. Returns ------- @@ -178,11 +185,12 @@ def to_numeric( # GH33013: for IntegerArray & FloatingArray extract non-null values for casting # save mask to reconstruct the full array after casting mask: npt.NDArray[np.bool_] | None = None - if isinstance(values, NumericArray): + if isinstance(values, BaseMaskedArray): mask = values._mask values = values._data[~mask] values_dtype = getattr(values, "dtype", None) + new_mask: np.ndarray | None = None if is_numeric_dtype(values_dtype): pass elif is_datetime_or_timedelta_dtype(values_dtype): @@ -191,13 +199,23 @@ def to_numeric( values = ensure_object(values) coerce_numeric = errors not in ("ignore", "raise") try: - values, _ = lib.maybe_convert_numeric( - values, set(), coerce_numeric=coerce_numeric + values, new_mask = lib.maybe_convert_numeric( # type: ignore[call-overload] + values, + set(), + coerce_numeric=coerce_numeric, + convert_to_masked_nullable=use_nullable_dtypes, ) except (ValueError, TypeError): if errors == "raise": raise + if new_mask is not None: + # Remove unnecessary values, is expected later anyway and enables + # downcasting + values = values[~new_mask] + elif 
use_nullable_dtypes and new_mask is None: + new_mask = np.zeros(values.shape, dtype=np.bool_) + # attempt downcast only if the data has been successfully converted # to a numerical dtype and if a downcast method has been specified if downcast is not None and is_numeric_dtype(values.dtype): @@ -228,18 +246,31 @@ def to_numeric( if values.dtype == dtype: break - # GH33013: for IntegerArray & FloatingArray need to reconstruct masked array - if mask is not None: + # GH33013: for IntegerArray, BooleanArray & FloatingArray need to reconstruct + # masked array + if (mask is not None or new_mask is not None) and not is_object_dtype(values.dtype): + if mask is None: + mask = new_mask + else: + mask = mask.copy() + assert isinstance(mask, np.ndarray) data = np.zeros(mask.shape, dtype=values.dtype) data[~mask] = values from pandas.core.arrays import ( + BooleanArray, FloatingArray, IntegerArray, ) - klass = IntegerArray if is_integer_dtype(data.dtype) else FloatingArray - values = klass(data, mask.copy()) + klass: type[IntegerArray] | type[BooleanArray] | type[FloatingArray] + if is_integer_dtype(data.dtype): + klass = IntegerArray + elif is_bool_dtype(data.dtype): + klass = BooleanArray + else: + klass = FloatingArray + values = klass(data, mask) if is_series: return arg._constructor(values, index=arg.index, name=arg.name) diff --git a/pandas/tests/tools/test_to_numeric.py b/pandas/tests/tools/test_to_numeric.py index 1347f6eb50b09..1c0a8301d65cc 100644 --- a/pandas/tests/tools/test_to_numeric.py +++ b/pandas/tests/tools/test_to_numeric.py @@ -807,3 +807,72 @@ def test_to_numeric_large_float_not_downcast_to_float_32(val): expected = Series([val]) result = to_numeric(expected, downcast="float") tm.assert_series_equal(result, expected) + + +@pytest.mark.parametrize( + "val, dtype", [(1, "Int64"), (1.5, "Float64"), (True, "boolean")] +) +def test_to_numeric_use_nullable_dtypes(val, dtype): + # GH#50505 + ser = Series([val], dtype=object) + result = to_numeric(ser, 
use_nullable_dtypes=True) + expected = Series([val], dtype=dtype) + tm.assert_series_equal(result, expected) + + +@pytest.mark.parametrize( + "val, dtype", [(1, "Int64"), (1.5, "Float64"), (True, "boolean")] +) +def test_to_numeric_use_nullable_dtypes_na(val, dtype): + # GH#50505 + ser = Series([val, None], dtype=object) + result = to_numeric(ser, use_nullable_dtypes=True) + expected = Series([val, pd.NA], dtype=dtype) + tm.assert_series_equal(result, expected) + + +@pytest.mark.parametrize( + "val, dtype, downcast", + [(1, "Int8", "integer"), (1.5, "Float32", "float"), (1, "Int8", "signed")], +) +def test_to_numeric_use_nullable_dtypes_downcasting(val, dtype, downcast): + # GH#50505 + ser = Series([val, None], dtype=object) + result = to_numeric(ser, use_nullable_dtypes=True, downcast=downcast) + expected = Series([val, pd.NA], dtype=dtype) + tm.assert_series_equal(result, expected) + + +def test_to_numeric_use_nullable_dtypes_downcasting_uint(): + # GH#50505 + ser = Series([1, pd.NA], dtype="UInt64") + result = to_numeric(ser, use_nullable_dtypes=True, downcast="unsigned") + expected = Series([1, pd.NA], dtype="UInt8") + tm.assert_series_equal(result, expected) + + +@pytest.mark.parametrize("dtype", ["Int64", "UInt64", "Float64", "boolean"]) +def test_to_numeric_use_nullable_dtypes_already_nullable(dtype): + # GH#50505 + ser = Series([1, pd.NA], dtype=dtype) + result = to_numeric(ser, use_nullable_dtypes=True) + expected = Series([1, pd.NA], dtype=dtype) + tm.assert_series_equal(result, expected) + + +@pytest.mark.parametrize( + "use_nullable_dtypes, dtype", [(True, "Float64"), (False, "float64")] +) +def test_to_numeric_use_nullable_dtypes_error(use_nullable_dtypes, dtype): + # GH#50505 + ser = Series(["a", "b", ""]) + expected = ser.copy() + with pytest.raises(ValueError, match="Unable to parse string"): + to_numeric(ser, use_nullable_dtypes=use_nullable_dtypes) + + result = to_numeric(ser, use_nullable_dtypes=use_nullable_dtypes, errors="ignore") + 
tm.assert_series_equal(result, expected) + + result = to_numeric(ser, use_nullable_dtypes=use_nullable_dtypes, errors="coerce") + expected = Series([np.nan, np.nan, np.nan], dtype=dtype) + tm.assert_series_equal(result, expected)
- [x] closes #37294 - [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature - [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit). - [x] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions. - [x] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature. I think to_numeric should be able to return nullable dtypes as well, similar to our I/O functions
https://api.github.com/repos/pandas-dev/pandas/pulls/50505
2022-12-30T15:38:40Z
2023-01-10T08:27:03Z
2023-01-10T08:27:03Z
2023-01-10T08:27:23Z
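The `to_numeric` patch above reconstructs a masked nullable array by picking `IntegerArray`, `BooleanArray`, or `FloatingArray` based on the parsed dtype. A minimal standalone sketch of that selection (the helper name `rebuild_masked` is illustrative, not from the patch):

```python
import numpy as np
import pandas as pd
from pandas.core.arrays import BooleanArray, FloatingArray, IntegerArray

def rebuild_masked(values: np.ndarray, mask: np.ndarray):
    """Pick the nullable array class from the values' dtype, as the patch does."""
    if np.issubdtype(values.dtype, np.integer):
        klass = IntegerArray
    elif values.dtype == np.bool_:
        klass = BooleanArray
    else:
        klass = FloatingArray
    # masked positions become pd.NA in the resulting extension array
    return klass(values, mask)

arr = rebuild_masked(np.array([1, 0], dtype="int64"), np.array([False, True]))
```

`arr` is an `Int64` extension array whose second element is `pd.NA`, which is exactly the shape `to_numeric(..., use_nullable_dtypes=True)` returns in the tests above.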
ENH: Add pandas nullable support to read_orc
diff --git a/doc/source/whatsnew/v2.0.0.rst b/doc/source/whatsnew/v2.0.0.rst index b1387e9717079..a54e6706fa286 100644 --- a/doc/source/whatsnew/v2.0.0.rst +++ b/doc/source/whatsnew/v2.0.0.rst @@ -42,6 +42,7 @@ The ``use_nullable_dtypes`` keyword argument has been expanded to the following * :func:`read_sql` * :func:`read_sql_query` * :func:`read_sql_table` +* :func:`read_orc` Additionally a new global configuration, ``mode.dtype_backend`` can now be used in conjunction with the parameter ``use_nullable_dtypes=True`` in the following functions to select the nullable dtypes implementation. diff --git a/pandas/io/_util.py b/pandas/io/_util.py new file mode 100644 index 0000000000000..d2a001f0cf925 --- /dev/null +++ b/pandas/io/_util.py @@ -0,0 +1,23 @@ +from __future__ import annotations + +from pandas.compat._optional import import_optional_dependency + +import pandas as pd + + +def _arrow_dtype_mapping() -> dict: + pa = import_optional_dependency("pyarrow") + return { + pa.int8(): pd.Int8Dtype(), + pa.int16(): pd.Int16Dtype(), + pa.int32(): pd.Int32Dtype(), + pa.int64(): pd.Int64Dtype(), + pa.uint8(): pd.UInt8Dtype(), + pa.uint16(): pd.UInt16Dtype(), + pa.uint32(): pd.UInt32Dtype(), + pa.uint64(): pd.UInt64Dtype(), + pa.bool_(): pd.BooleanDtype(), + pa.string(): pd.StringDtype(), + pa.float32(): pd.Float32Dtype(), + pa.float64(): pd.Float64Dtype(), + } diff --git a/pandas/io/orc.py b/pandas/io/orc.py index cfa02de9bbcb3..169cb5d16da8d 100644 --- a/pandas/io/orc.py +++ b/pandas/io/orc.py @@ -91,18 +91,20 @@ def read_orc( pa_table = orc_file.read(columns=columns, **kwargs) if use_nullable_dtypes: dtype_backend = get_option("mode.dtype_backend") - if dtype_backend != "pyarrow": - raise NotImplementedError( - f"mode.dtype_backend set to {dtype_backend} is not implemented." 
+ if dtype_backend == "pyarrow": + df = DataFrame( + { + col_name: ArrowExtensionArray(pa_col) + for col_name, pa_col in zip( + pa_table.column_names, pa_table.itercolumns() + ) + } ) - df = DataFrame( - { - col_name: ArrowExtensionArray(pa_col) - for col_name, pa_col in zip( - pa_table.column_names, pa_table.itercolumns() - ) - } - ) + else: + from pandas.io._util import _arrow_dtype_mapping + + mapping = _arrow_dtype_mapping() + df = pa_table.to_pandas(types_mapper=mapping.get) return df else: return pa_table.to_pandas() diff --git a/pandas/io/parquet.py b/pandas/io/parquet.py index 568747685a36e..67e00dde5498b 100644 --- a/pandas/io/parquet.py +++ b/pandas/io/parquet.py @@ -225,24 +225,13 @@ def read( dtype_backend = get_option("mode.dtype_backend") to_pandas_kwargs = {} if use_nullable_dtypes: - import pandas as pd if dtype_backend == "pandas": - mapping = { - self.api.int8(): pd.Int8Dtype(), - self.api.int16(): pd.Int16Dtype(), - self.api.int32(): pd.Int32Dtype(), - self.api.int64(): pd.Int64Dtype(), - self.api.uint8(): pd.UInt8Dtype(), - self.api.uint16(): pd.UInt16Dtype(), - self.api.uint32(): pd.UInt32Dtype(), - self.api.uint64(): pd.UInt64Dtype(), - self.api.bool_(): pd.BooleanDtype(), - self.api.string(): pd.StringDtype(), - self.api.float32(): pd.Float32Dtype(), - self.api.float64(): pd.Float64Dtype(), - } + from pandas.io._util import _arrow_dtype_mapping + + mapping = _arrow_dtype_mapping() to_pandas_kwargs["types_mapper"] = mapping.get + manager = get_option("mode.data_manager") if manager == "array": to_pandas_kwargs["split_blocks"] = True # type: ignore[assignment] diff --git a/pandas/tests/io/test_orc.py b/pandas/tests/io/test_orc.py index 87f648bb5acd6..d5c03dcc85a0d 100644 --- a/pandas/tests/io/test_orc.py +++ b/pandas/tests/io/test_orc.py @@ -11,6 +11,7 @@ import pandas as pd from pandas import read_orc import pandas._testing as tm +from pandas.core.arrays import StringArray pytest.importorskip("pyarrow.orc") @@ -305,16 +306,6 @@ def 
test_orc_writer_dtypes_not_supported(df_not_supported): df_not_supported.to_orc() -def test_orc_use_nullable_dtypes_pandas_backend_not_supported(dirpath): - input_file = os.path.join(dirpath, "TestOrcFile.emptyFile.orc") - with pytest.raises( - NotImplementedError, - match="mode.dtype_backend set to pandas is not implemented.", - ): - with pd.option_context("mode.dtype_backend", "pandas"): - read_orc(input_file, use_nullable_dtypes=True) - - @td.skip_if_no("pyarrow", min_version="7.0.0") def test_orc_use_nullable_dtypes_pyarrow_backend(): df = pd.DataFrame( @@ -336,13 +327,60 @@ def test_orc_use_nullable_dtypes_pyarrow_backend(): ], } ) + bytes_data = df.copy().to_orc() with pd.option_context("mode.dtype_backend", "pyarrow"): result = read_orc(BytesIO(bytes_data), use_nullable_dtypes=True) + expected = pd.DataFrame( { col: pd.arrays.ArrowExtensionArray(pa.array(df[col], from_pandas=True)) for col in df.columns } ) + + tm.assert_frame_equal(result, expected) + + +@td.skip_if_no("pyarrow", min_version="7.0.0") +def test_orc_use_nullable_dtypes_pandas_backend(): + # GH#50503 + df = pd.DataFrame( + { + "string": list("abc"), + "string_with_nan": ["a", np.nan, "c"], + "string_with_none": ["a", None, "c"], + "int": list(range(1, 4)), + "int_with_nan": pd.Series([1, pd.NA, 3], dtype="Int64"), + "na_only": pd.Series([pd.NA, pd.NA, pd.NA], dtype="Int64"), + "float": np.arange(4.0, 7.0, dtype="float64"), + "float_with_nan": [2.0, np.nan, 3.0], + "bool": [True, False, True], + "bool_with_na": [True, False, None], + } + ) + + bytes_data = df.copy().to_orc() + with pd.option_context("mode.dtype_backend", "pandas"): + result = read_orc(BytesIO(bytes_data), use_nullable_dtypes=True) + + expected = pd.DataFrame( + { + "string": StringArray(np.array(["a", "b", "c"], dtype=np.object_)), + "string_with_nan": StringArray( + np.array(["a", pd.NA, "c"], dtype=np.object_) + ), + "string_with_none": StringArray( + np.array(["a", pd.NA, "c"], dtype=np.object_) + ), + "int": pd.Series([1, 
2, 3], dtype="Int64"), + "int_with_nan": pd.Series([1, pd.NA, 3], dtype="Int64"), + "na_only": pd.Series([pd.NA, pd.NA, pd.NA], dtype="Int64"), + "float": pd.Series([4.0, 5.0, 6.0], dtype="Float64"), + "float_with_nan": pd.Series([2.0, pd.NA, 3.0], dtype="Float64"), + "bool": pd.Series([True, False, True], dtype="boolean"), + "bool_with_na": pd.Series([True, False, pd.NA], dtype="boolean"), + } + ) + tm.assert_frame_equal(result, expected)
- [ ] closes #xxxx (Replace xxxx with the GitHub issue number) - [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature - [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit). - [x] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions. - [x] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
https://api.github.com/repos/pandas-dev/pandas/pulls/50503
2022-12-30T14:28:13Z
2023-01-05T23:43:15Z
2023-01-05T23:43:15Z
2023-01-06T07:42:34Z
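The `read_orc` record above routes `pa_table.to_pandas(types_mapper=mapping.get)` through a pyarrow-to-pandas dtype map. As a rough stand-in that needs no pyarrow installed, `convert_dtypes` performs the same numpy-to-nullable promotion that mapping targets:

```python
import numpy as np
import pandas as pd

# Plain numpy-backed frame, as pa_table.to_pandas() would produce by default.
df = pd.DataFrame({"i": [1, 2], "f": [1.5, np.nan], "b": [True, False]})

# Promote each column to the nullable pandas dtype the patch's mapping targets:
# int64 -> Int64, float64 -> Float64, bool -> boolean, with NaN becoming pd.NA.
nullable = df.convert_dtypes()
```

This is only a sketch of the resulting dtypes; the actual patch builds the frame directly from the Arrow table via the `_arrow_dtype_mapping` dict.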
ENH: Add use_nullable_dtypes to read_clipboard
diff --git a/doc/source/whatsnew/v2.0.0.rst b/doc/source/whatsnew/v2.0.0.rst index 5b725eb4d2a98..fa17bf20635e0 100644 --- a/doc/source/whatsnew/v2.0.0.rst +++ b/doc/source/whatsnew/v2.0.0.rst @@ -36,6 +36,7 @@ Configuration option, ``mode.dtype_backend``, to return pyarrow-backed dtypes The ``use_nullable_dtypes`` keyword argument has been expanded to the following functions to enable automatic conversion to nullable dtypes (:issue:`36712`) * :func:`read_csv` +* :func:`read_clipboard` * :func:`read_fwf` * :func:`read_excel` * :func:`read_html` @@ -49,6 +50,7 @@ Additionally a new global configuration, ``mode.dtype_backend`` can now be used to select the nullable dtypes implementation. * :func:`read_csv` (with ``engine="pyarrow"`` or ``engine="python"``) +* :func:`read_clipboard` (with ``engine="python"``) * :func:`read_excel` * :func:`read_html` * :func:`read_xml` diff --git a/pandas/io/clipboards.py b/pandas/io/clipboards.py index a3e778e552439..44bee11518cd3 100644 --- a/pandas/io/clipboards.py +++ b/pandas/io/clipboards.py @@ -14,7 +14,9 @@ ) -def read_clipboard(sep: str = r"\s+", **kwargs): # pragma: no cover +def read_clipboard( + sep: str = r"\s+", use_nullable_dtypes: bool = False, **kwargs +): # pragma: no cover r""" Read text from clipboard and pass to read_csv. @@ -24,6 +26,21 @@ def read_clipboard(sep: str = r"\s+", **kwargs): # pragma: no cover A string or regex delimiter. The default of '\s+' denotes one or more whitespace characters. + use_nullable_dtypes : bool = False + Whether or not to use nullable dtypes as default when reading data. If + set to True, nullable dtypes are used for all dtypes that have a nullable + implementation, even if no nulls are present. + + The nullable dtype implementation can be configured by calling + ``pd.set_option("mode.dtype_backend", "pandas")`` to use + numpy-backed nullable dtypes or + ``pd.set_option("mode.dtype_backend", "pyarrow")`` to use + pyarrow-backed nullable dtypes (using ``pd.ArrowDtype``). 
+ This is only implemented for the ``python`` + engine. + + .. versionadded:: 2.0 + **kwargs See read_csv for the full argument list. @@ -85,7 +102,9 @@ def read_clipboard(sep: str = r"\s+", **kwargs): # pragma: no cover stacklevel=find_stack_level(), ) - return read_csv(StringIO(text), sep=sep, **kwargs) + return read_csv( + StringIO(text), sep=sep, use_nullable_dtypes=use_nullable_dtypes, **kwargs + ) def to_clipboard( diff --git a/pandas/io/parsers/readers.py b/pandas/io/parsers/readers.py index ccfefa59c65b8..9aa927ffe447c 100644 --- a/pandas/io/parsers/readers.py +++ b/pandas/io/parsers/readers.py @@ -403,6 +403,8 @@ numpy-backed nullable dtypes or ``pd.set_option("mode.dtype_backend", "pyarrow")`` to use pyarrow-backed nullable dtypes (using ``pd.ArrowDtype``). + This is only implemented for the ``pyarrow`` or ``python`` + engines. .. versionadded:: 2.0 diff --git a/pandas/tests/io/test_clipboard.py b/pandas/tests/io/test_clipboard.py index c47a963e0fa3c..ae9c5aacf6e6b 100644 --- a/pandas/tests/io/test_clipboard.py +++ b/pandas/tests/io/test_clipboard.py @@ -10,12 +10,19 @@ PyperclipWindowsException, ) +import pandas as pd from pandas import ( + NA, DataFrame, + Series, get_option, read_clipboard, ) import pandas._testing as tm +from pandas.core.arrays import ( + ArrowStringArray, + StringArray, +) from pandas.io.clipboard import ( CheckedCall, @@ -402,3 +409,60 @@ def test_raw_roundtrip(self, data): # PR #25040 wide unicode wasn't copied correctly on PY3 on windows clipboard_set(data) assert data == clipboard_get() + + @pytest.mark.parametrize("dtype_backend", ["pandas", "pyarrow"]) + @pytest.mark.parametrize("engine", ["c", "python"]) + def test_read_clipboard_nullable_dtypes( + self, request, mock_clipboard, string_storage, dtype_backend, engine + ): + # GH#50502 + if string_storage == "pyarrow" or dtype_backend == "pyarrow": + pa = pytest.importorskip("pyarrow") + + if dtype_backend == "pyarrow" and engine == "c": + pytest.skip(reason="c engine not yet 
supported") + + if string_storage == "python": + string_array = StringArray(np.array(["x", "y"], dtype=np.object_)) + string_array_na = StringArray(np.array(["x", NA], dtype=np.object_)) + + else: + string_array = ArrowStringArray(pa.array(["x", "y"])) + string_array_na = ArrowStringArray(pa.array(["x", None])) + + text = """a,b,c,d,e,f,g,h,i +x,1,4.0,x,2,4.0,,True,False +y,2,5.0,,,,,False,""" + mock_clipboard[request.node.name] = text + + with pd.option_context("mode.string_storage", string_storage): + with pd.option_context("mode.dtype_backend", dtype_backend): + result = read_clipboard( + sep=",", use_nullable_dtypes=True, engine=engine + ) + + expected = DataFrame( + { + "a": string_array, + "b": Series([1, 2], dtype="Int64"), + "c": Series([4.0, 5.0], dtype="Float64"), + "d": string_array_na, + "e": Series([2, NA], dtype="Int64"), + "f": Series([4.0, NA], dtype="Float64"), + "g": Series([NA, NA], dtype="Int64"), + "h": Series([True, False], dtype="boolean"), + "i": Series([False, NA], dtype="boolean"), + } + ) + if dtype_backend == "pyarrow": + from pandas.arrays import ArrowExtensionArray + + expected = DataFrame( + { + col: ArrowExtensionArray(pa.array(expected[col], from_pandas=True)) + for col in expected.columns + } + ) + expected["g"] = ArrowExtensionArray(pa.array([None, None])) + + tm.assert_frame_equal(result, expected)
- [ ] closes #xxxx (Replace xxxx with the GitHub issue number) - [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature - [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit). - [x] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions. - [x] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
https://api.github.com/repos/pandas-dev/pandas/pulls/50502
2022-12-30T12:41:09Z
2023-01-06T19:03:22Z
2023-01-06T19:03:22Z
2023-01-06T19:11:18Z
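The `read_clipboard` patch above is a thin forwarding layer: the clipboard text is wrapped in `StringIO` and handed to `read_csv`, which does the nullable-dtype work. A sketch of that delegation, using `convert_dtypes` as a version-independent stand-in for the forwarded keyword:

```python
from io import StringIO

import pandas as pd

text = "a,b\nx,1\ny,"  # shaped like what clipboard_get() would hand back

# read_clipboard effectively does this: feed the raw text to read_csv.
df = pd.read_csv(StringIO(text), sep=",")

# With the new keyword, read_csv would return nullable dtypes directly;
# convert_dtypes shows the equivalent promotion (float64 with NaN -> Int64/NA).
nullable = df.convert_dtypes()
```

The missing value in column `b` arrives as `NaN` in a `float64` column and ends up as `pd.NA` in an `Int64` column, matching the expected frames in the clipboard test.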
ENH: Add lazy copy to concat and round
diff --git a/pandas/core/frame.py b/pandas/core/frame.py index 46f8c73027f48..b2fbc7b088eb3 100644 --- a/pandas/core/frame.py +++ b/pandas/core/frame.py @@ -723,6 +723,7 @@ def __init__( ) elif getattr(data, "name", None) is not None: # i.e. Series/Index with non-None name + _copy = copy if using_copy_on_write() else True mgr = dict_to_mgr( # error: Item "ndarray" of "Union[ndarray, Series, Index]" has no # attribute "name" @@ -731,6 +732,7 @@ def __init__( columns, dtype=dtype, typ=manager, + copy=_copy, ) else: mgr = ndarray_to_mgr( diff --git a/pandas/core/internals/concat.py b/pandas/core/internals/concat.py index 364025d583b7d..d46b51a2ee954 100644 --- a/pandas/core/internals/concat.py +++ b/pandas/core/internals/concat.py @@ -7,6 +7,7 @@ Sequence, cast, ) +import weakref import numpy as np @@ -61,7 +62,10 @@ ensure_block_shape, new_block_2d, ) -from pandas.core.internals.managers import BlockManager +from pandas.core.internals.managers import ( + BlockManager, + using_copy_on_write, +) if TYPE_CHECKING: from pandas import Index @@ -267,6 +271,8 @@ def _concat_managers_axis0( offset = 0 blocks = [] + refs: list[weakref.ref | None] = [] + parents: list = [] for i, mgr in enumerate(mgrs): # If we already reindexed, then we definitely don't need another copy made_copy = had_reindexers[i] @@ -283,8 +289,18 @@ def _concat_managers_axis0( nb._mgr_locs = nb._mgr_locs.add(offset) blocks.append(nb) + if not made_copy and not copy and using_copy_on_write(): + refs.extend([weakref.ref(blk) for blk in mgr.blocks]) + parents.append(mgr) + elif using_copy_on_write(): + refs.extend([None] * len(mgr.blocks)) + offset += len(mgr.items) - return BlockManager(tuple(blocks), axes) + + result_parents = parents if parents else None + result_ref = refs if refs else None + result = BlockManager(tuple(blocks), axes, parent=result_parents, refs=result_ref) + return result def _maybe_reindex_columns_na_proxy( diff --git a/pandas/core/reshape/concat.py b/pandas/core/reshape/concat.py 
index aced5a73a1f02..f8220649bf890 100644 --- a/pandas/core/reshape/concat.py +++ b/pandas/core/reshape/concat.py @@ -14,9 +14,15 @@ cast, overload, ) +import weakref import numpy as np +from pandas._config import ( + get_option, + using_copy_on_write, +) + from pandas._typing import ( Axis, AxisInt, @@ -47,6 +53,7 @@ get_unanimous_names, ) from pandas.core.internals import concatenate_managers +from pandas.core.internals.construction import dict_to_mgr if TYPE_CHECKING: from pandas import ( @@ -155,7 +162,7 @@ def concat( names=None, verify_integrity: bool = False, sort: bool = False, - copy: bool = True, + copy: bool | None = None, ) -> DataFrame | Series: """ Concatenate pandas objects along a particular axis. @@ -363,6 +370,12 @@ def concat( 0 1 2 1 3 4 """ + if copy is None: + if using_copy_on_write(): + copy = False + else: + copy = True + op = _Concatenator( objs, axis=axis, @@ -523,18 +536,25 @@ def __init__( ) else: - name = getattr(obj, "name", None) + original_obj = obj + name = new_name = getattr(obj, "name", None) if ignore_index or name is None: - name = current_column + new_name = current_column current_column += 1 # doing a row-wise concatenation so need everything # to line up if self._is_frame and axis == 1: - name = 0 + new_name = 0 # mypy needs to know sample is not an NDFrame sample = cast("DataFrame | Series", sample) - obj = sample._constructor({name: obj}) + obj = sample._constructor(obj, columns=[name], copy=False) + if using_copy_on_write(): + # TODO(CoW): Remove when ref tracking in constructors works + obj._mgr.parent = original_obj # type: ignore[union-attr] + obj._mgr.refs = [weakref.ref(original_obj._mgr.blocks[0])] # type: ignore[union-attr] # noqa: E501 + + obj.columns = [new_name] self.objs.append(obj) @@ -584,7 +604,22 @@ def get_result(self): cons = sample._constructor_expanddim index, columns = self.new_axes - df = cons(data, index=index, copy=self.copy) + mgr = dict_to_mgr( + data, + index, + None, + copy=self.copy, + 
typ=get_option("mode.data_manager"), + ) + if using_copy_on_write() and not self.copy: + parents = [obj._mgr for obj in self.objs] + mgr.parent = parents # type: ignore[union-attr] + refs = [ + weakref.ref(obj._mgr.blocks[0]) # type: ignore[union-attr] + for obj in self.objs + ] + mgr.refs = refs # type: ignore[union-attr] + df = cons(mgr, copy=False) df.columns = columns return df.__finalize__(self, method="concat") @@ -611,7 +646,7 @@ def get_result(self): new_data = concatenate_managers( mgrs_indexers, self.new_axes, concat_axis=self.bm_axis, copy=self.copy ) - if not self.copy: + if not self.copy and not using_copy_on_write(): new_data._consolidate_inplace() cons = sample._constructor diff --git a/pandas/io/pytables.py b/pandas/io/pytables.py index 8c0d5c3da385c..079317c1ed18d 100644 --- a/pandas/io/pytables.py +++ b/pandas/io/pytables.py @@ -3205,7 +3205,7 @@ def read( dfs.append(df) if len(dfs) > 0: - out = concat(dfs, axis=1) + out = concat(dfs, axis=1, copy=True) out = out.reindex(columns=items, copy=False) return out diff --git a/pandas/tests/copy_view/test_functions.py b/pandas/tests/copy_view/test_functions.py new file mode 100644 index 0000000000000..569cbc4ad7583 --- /dev/null +++ b/pandas/tests/copy_view/test_functions.py @@ -0,0 +1,179 @@ +import numpy as np + +from pandas import ( + DataFrame, + Series, + concat, +) +import pandas._testing as tm +from pandas.tests.copy_view.util import get_array + + +def test_concat_frames(using_copy_on_write): + df = DataFrame({"b": ["a"] * 3}) + df2 = DataFrame({"a": ["a"] * 3}) + df_orig = df.copy() + result = concat([df, df2], axis=1) + + if using_copy_on_write: + assert np.shares_memory(get_array(result, "b"), get_array(df, "b")) + assert np.shares_memory(get_array(result, "a"), get_array(df2, "a")) + else: + assert not np.shares_memory(get_array(result, "b"), get_array(df, "b")) + assert not np.shares_memory(get_array(result, "a"), get_array(df2, "a")) + + result.iloc[0, 0] = "d" + if using_copy_on_write: + 
assert not np.shares_memory(get_array(result, "b"), get_array(df, "b")) + assert np.shares_memory(get_array(result, "a"), get_array(df2, "a")) + + result.iloc[0, 1] = "d" + if using_copy_on_write: + assert not np.shares_memory(get_array(result, "a"), get_array(df2, "a")) + tm.assert_frame_equal(df, df_orig) + + +def test_concat_frames_updating_input(using_copy_on_write): + df = DataFrame({"b": ["a"] * 3}) + df2 = DataFrame({"a": ["a"] * 3}) + result = concat([df, df2], axis=1) + + if using_copy_on_write: + assert np.shares_memory(get_array(result, "b"), get_array(df, "b")) + assert np.shares_memory(get_array(result, "a"), get_array(df2, "a")) + else: + assert not np.shares_memory(get_array(result, "b"), get_array(df, "b")) + assert not np.shares_memory(get_array(result, "a"), get_array(df2, "a")) + + expected = result.copy() + df.iloc[0, 0] = "d" + if using_copy_on_write: + assert not np.shares_memory(get_array(result, "b"), get_array(df, "b")) + assert np.shares_memory(get_array(result, "a"), get_array(df2, "a")) + + df2.iloc[0, 0] = "d" + if using_copy_on_write: + assert not np.shares_memory(get_array(result, "a"), get_array(df2, "a")) + tm.assert_frame_equal(result, expected) + + +def test_concat_series(using_copy_on_write): + ser = Series([1, 2], name="a") + ser2 = Series([3, 4], name="b") + ser_orig = ser.copy() + ser2_orig = ser2.copy() + result = concat([ser, ser2], axis=1) + + if using_copy_on_write: + assert np.shares_memory(get_array(result, "a"), ser.values) + assert np.shares_memory(get_array(result, "b"), ser2.values) + else: + assert not np.shares_memory(get_array(result, "a"), ser.values) + assert not np.shares_memory(get_array(result, "b"), ser2.values) + + result.iloc[0, 0] = 100 + if using_copy_on_write: + assert not np.shares_memory(get_array(result, "a"), ser.values) + assert np.shares_memory(get_array(result, "b"), ser2.values) + + result.iloc[0, 1] = 1000 + if using_copy_on_write: + assert not np.shares_memory(get_array(result, "b"), 
ser2.values) + tm.assert_series_equal(ser, ser_orig) + tm.assert_series_equal(ser2, ser2_orig) + + +def test_concat_frames_chained(using_copy_on_write): + df1 = DataFrame({"a": [1, 2, 3], "b": [0.1, 0.2, 0.3]}) + df2 = DataFrame({"c": [4, 5, 6]}) + df3 = DataFrame({"d": [4, 5, 6]}) + result = concat([concat([df1, df2], axis=1), df3], axis=1) + expected = result.copy() + + if using_copy_on_write: + assert np.shares_memory(get_array(result, "a"), get_array(df1, "a")) + assert np.shares_memory(get_array(result, "c"), get_array(df2, "c")) + assert np.shares_memory(get_array(result, "d"), get_array(df3, "d")) + else: + assert not np.shares_memory(get_array(result, "a"), get_array(df1, "a")) + assert not np.shares_memory(get_array(result, "c"), get_array(df2, "c")) + assert not np.shares_memory(get_array(result, "d"), get_array(df3, "d")) + + df1.iloc[0, 0] = 100 + if using_copy_on_write: + assert not np.shares_memory(get_array(result, "a"), get_array(df1, "a")) + + tm.assert_frame_equal(result, expected) + + +def test_concat_series_chained(using_copy_on_write): + ser1 = Series([1, 2, 3], name="a") + ser2 = Series([4, 5, 6], name="c") + ser3 = Series([4, 5, 6], name="d") + result = concat([concat([ser1, ser2], axis=1), ser3], axis=1) + expected = result.copy() + + if using_copy_on_write: + assert np.shares_memory(get_array(result, "a"), get_array(ser1, "a")) + assert np.shares_memory(get_array(result, "c"), get_array(ser2, "c")) + assert np.shares_memory(get_array(result, "d"), get_array(ser3, "d")) + else: + assert not np.shares_memory(get_array(result, "a"), get_array(ser1, "a")) + assert not np.shares_memory(get_array(result, "c"), get_array(ser2, "c")) + assert not np.shares_memory(get_array(result, "d"), get_array(ser3, "d")) + + ser1.iloc[0] = 100 + if using_copy_on_write: + assert not np.shares_memory(get_array(result, "a"), get_array(ser1, "a")) + + tm.assert_frame_equal(result, expected) + + +def test_concat_series_updating_input(using_copy_on_write): + ser = 
Series([1, 2], name="a") + ser2 = Series([3, 4], name="b") + expected = DataFrame({"a": [1, 2], "b": [3, 4]}) + result = concat([ser, ser2], axis=1) + + if using_copy_on_write: + assert np.shares_memory(get_array(result, "a"), get_array(ser, "a")) + assert np.shares_memory(get_array(result, "b"), get_array(ser2, "b")) + else: + assert not np.shares_memory(get_array(result, "a"), get_array(ser, "a")) + assert not np.shares_memory(get_array(result, "b"), get_array(ser2, "b")) + + ser.iloc[0] = 100 + if using_copy_on_write: + assert not np.shares_memory(get_array(result, "a"), get_array(ser, "a")) + assert np.shares_memory(get_array(result, "b"), get_array(ser2, "b")) + tm.assert_frame_equal(result, expected) + + ser2.iloc[0] = 1000 + if using_copy_on_write: + assert not np.shares_memory(get_array(result, "b"), get_array(ser2, "b")) + tm.assert_frame_equal(result, expected) + + +def test_concat_mixed_series_frame(using_copy_on_write): + df = DataFrame({"a": [1, 2, 3], "c": 1}) + ser = Series([4, 5, 6], name="d") + result = concat([df, ser], axis=1) + expected = result.copy() + + if using_copy_on_write: + assert np.shares_memory(get_array(result, "a"), get_array(df, "a")) + assert np.shares_memory(get_array(result, "c"), get_array(df, "c")) + assert np.shares_memory(get_array(result, "d"), get_array(ser, "d")) + else: + assert not np.shares_memory(get_array(result, "a"), get_array(df, "a")) + assert not np.shares_memory(get_array(result, "c"), get_array(df, "c")) + assert not np.shares_memory(get_array(result, "d"), get_array(ser, "d")) + + ser.iloc[0] = 100 + if using_copy_on_write: + assert not np.shares_memory(get_array(result, "d"), get_array(ser, "d")) + + df.iloc[0, 0] = 100 + if using_copy_on_write: + assert not np.shares_memory(get_array(result, "a"), get_array(df, "a")) + tm.assert_frame_equal(result, expected) diff --git a/pandas/tests/copy_view/test_methods.py b/pandas/tests/copy_view/test_methods.py index 5c24a180c4d6d..daee175ee9bac 100644 --- 
a/pandas/tests/copy_view/test_methods.py +++ b/pandas/tests/copy_view/test_methods.py @@ -717,6 +717,23 @@ def test_sort_values_inplace(using_copy_on_write, obj, kwargs, using_array_manag assert np.shares_memory(get_array(obj, "a"), get_array(view, "a")) +def test_round(using_copy_on_write): + df = DataFrame({"a": [1, 2], "b": "c"}) + df2 = df.round() + df_orig = df.copy() + + if using_copy_on_write: + assert np.shares_memory(get_array(df2, "b"), get_array(df, "b")) + assert np.shares_memory(get_array(df2, "a"), get_array(df, "a")) + else: + assert not np.shares_memory(get_array(df2, "b"), get_array(df, "b")) + + df2.iloc[0, 1] = "d" + if using_copy_on_write: + assert not np.shares_memory(get_array(df2, "b"), get_array(df, "b")) + tm.assert_frame_equal(df, df_orig) + + def test_reorder_levels(using_copy_on_write): index = MultiIndex.from_tuples( [(1, 1), (1, 2), (2, 1), (2, 2)], names=["one", "two"] diff --git a/pandas/tests/reshape/concat/test_concat.py b/pandas/tests/reshape/concat/test_concat.py index 5fa989419a7d4..73be92d1a5dcf 100644 --- a/pandas/tests/reshape/concat/test_concat.py +++ b/pandas/tests/reshape/concat/test_concat.py @@ -51,7 +51,7 @@ def test_append_concat(self): assert isinstance(result.index, PeriodIndex) assert result.index[0] == s1.index[0] - def test_concat_copy(self, using_array_manager): + def test_concat_copy(self, using_array_manager, using_copy_on_write): df = DataFrame(np.random.randn(4, 3)) df2 = DataFrame(np.random.randint(0, 10, size=4).reshape(4, 1)) df3 = DataFrame({5: "foo"}, index=range(4)) @@ -82,7 +82,7 @@ def test_concat_copy(self, using_array_manager): result = concat([df, df2, df3, df4], axis=1, copy=False) for arr in result._mgr.arrays: if arr.dtype.kind == "f": - if using_array_manager: + if using_array_manager or using_copy_on_write: # this is a view on some array in either df or df4 assert any( np.shares_memory(arr, other)
- [ ] xref #49473 (Replace xxxx with the GitHub issue number) - [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature - [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit). - [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions. - [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
https://api.github.com/repos/pandas-dev/pandas/pulls/50501
2022-12-30T12:06:06Z
2023-01-17T08:22:25Z
2023-01-17T08:22:25Z
2023-01-17T09:58:03Z
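The copy-on-write tests in the `concat` record above lean on `np.shares_memory` to distinguish a lazy view from a defensive copy. A minimal standalone illustration of that diagnostic:

```python
import numpy as np

base = np.arange(4)
view = base[:2]         # slicing returns a view onto the same buffer
dup = base[:2].copy()   # .copy() allocates a fresh, independent buffer

shares_view = np.shares_memory(base, view)  # same underlying memory
shares_dup = np.shares_memory(base, dup)    # independent allocation
```

Under copy-on-write, `concat(..., copy=False)` keeps the result in the `shares_view` state until one side is mutated, at which point the affected block is copied, which is exactly what the `test_concat_frames` assertions check.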
ENH: Add use_nullable_dtypes to read_xml
diff --git a/doc/source/whatsnew/v2.0.0.rst b/doc/source/whatsnew/v2.0.0.rst index b1387e9717079..712635d7a7e2a 100644 --- a/doc/source/whatsnew/v2.0.0.rst +++ b/doc/source/whatsnew/v2.0.0.rst @@ -39,6 +39,7 @@ The ``use_nullable_dtypes`` keyword argument has been expanded to the following * :func:`read_fwf` * :func:`read_excel` * :func:`read_html` +* :func:`read_xml` * :func:`read_sql` * :func:`read_sql_query` * :func:`read_sql_table` @@ -49,6 +50,7 @@ to select the nullable dtypes implementation. * :func:`read_csv` (with ``engine="pyarrow"`` or ``engine="python"``) * :func:`read_excel` * :func:`read_html` +* :func:`read_xml` * :func:`read_parquet` * :func:`read_orc` diff --git a/pandas/_libs/ops.pyx b/pandas/_libs/ops.pyx index 478e7eaee90c1..9154e836b3477 100644 --- a/pandas/_libs/ops.pyx +++ b/pandas/_libs/ops.pyx @@ -292,7 +292,7 @@ def maybe_convert_bool(ndarray[object] arr, result[i] = 1 elif val in false_vals: result[i] = 0 - elif is_nan(val): + elif is_nan(val) or val is None: mask[i] = 1 result[i] = 0 # Value here doesn't matter, will be replaced w/ nan has_na = True diff --git a/pandas/io/xml.py b/pandas/io/xml.py index 4f61455826286..1368a407fa494 100644 --- a/pandas/io/xml.py +++ b/pandas/io/xml.py @@ -774,6 +774,7 @@ def _parse( iterparse: dict[str, list[str]] | None, compression: CompressionOptions, storage_options: StorageOptions, + use_nullable_dtypes: bool = False, **kwargs, ) -> DataFrame: """ @@ -843,6 +844,7 @@ def _parse( dtype=dtype, converters=converters, parse_dates=parse_dates, + use_nullable_dtypes=use_nullable_dtypes, **kwargs, ) @@ -869,6 +871,7 @@ def read_xml( iterparse: dict[str, list[str]] | None = None, compression: CompressionOptions = "infer", storage_options: StorageOptions = None, + use_nullable_dtypes: bool = False, ) -> DataFrame: r""" Read XML document into a ``DataFrame`` object. 
@@ -980,6 +983,19 @@ def read_xml( {storage_options} + use_nullable_dtypes : bool = False + Whether or not to use nullable dtypes as default when reading data. If + set to True, nullable dtypes are used for all dtypes that have a nullable + implementation, even if no nulls are present. + + The nullable dtype implementation can be configured by calling + ``pd.set_option("mode.dtype_backend", "pandas")`` to use + numpy-backed nullable dtypes or + ``pd.set_option("mode.dtype_backend", "pyarrow")`` to use + pyarrow-backed nullable dtypes (using ``pd.ArrowDtype``). + + .. versionadded:: 2.0 + Returns ------- df @@ -1113,4 +1129,5 @@ def read_xml( iterparse=iterparse, compression=compression, storage_options=storage_options, + use_nullable_dtypes=use_nullable_dtypes, ) diff --git a/pandas/tests/io/xml/test_xml.py b/pandas/tests/io/xml/test_xml.py index aeaf2d3b7edbf..d65b9b8af4365 100644 --- a/pandas/tests/io/xml/test_xml.py +++ b/pandas/tests/io/xml/test_xml.py @@ -21,8 +21,17 @@ ) import pandas.util._test_decorators as td -from pandas import DataFrame +import pandas as pd +from pandas import ( + NA, + DataFrame, + Series, +) import pandas._testing as tm +from pandas.core.arrays import ( + ArrowStringArray, + StringArray, +) from pandas.io.common import get_handle from pandas.io.xml import read_xml @@ -1702,3 +1711,74 @@ def test_s3_parser_consistency(): ) tm.assert_frame_equal(df_lxml, df_etree) + + +@pytest.mark.parametrize("dtype_backend", ["pandas", "pyarrow"]) +def test_read_xml_nullable_dtypes(parser, string_storage, dtype_backend): + # GH#50500 + if string_storage == "pyarrow" or dtype_backend == "pyarrow": + pa = pytest.importorskip("pyarrow") + data = """<?xml version='1.0' encoding='utf-8'?> +<data xmlns="http://example.com"> +<row> + <a>x</a> + <b>1</b> + <c>4.0</c> + <d>x</d> + <e>2</e> + <f>4.0</f> + <g></g> + <h>True</h> + <i>False</i> +</row> +<row> + <a>y</a> + <b>2</b> + <c>5.0</c> + <d></d> + <e></e> + <f></f> + <g></g> + <h>False</h> + <i></i> +</row> 
+</data>""" + + if string_storage == "python": + string_array = StringArray(np.array(["x", "y"], dtype=np.object_)) + string_array_na = StringArray(np.array(["x", NA], dtype=np.object_)) + + else: + string_array = ArrowStringArray(pa.array(["x", "y"])) + string_array_na = ArrowStringArray(pa.array(["x", None])) + + with pd.option_context("mode.string_storage", string_storage): + with pd.option_context("mode.dtype_backend", dtype_backend): + result = read_xml(data, parser=parser, use_nullable_dtypes=True) + + expected = DataFrame( + { + "a": string_array, + "b": Series([1, 2], dtype="Int64"), + "c": Series([4.0, 5.0], dtype="Float64"), + "d": string_array_na, + "e": Series([2, NA], dtype="Int64"), + "f": Series([4.0, NA], dtype="Float64"), + "g": Series([NA, NA], dtype="Int64"), + "h": Series([True, False], dtype="boolean"), + "i": Series([False, NA], dtype="boolean"), + } + ) + + if dtype_backend == "pyarrow": + from pandas.arrays import ArrowExtensionArray + + expected = DataFrame( + { + col: ArrowExtensionArray(pa.array(expected[col], from_pandas=True)) + for col in expected.columns + } + ) + expected["g"] = ArrowExtensionArray(pa.array([None, None])) + + tm.assert_frame_equal(result, expected)
- [ ] closes #xxxx (Replace xxxx with the GitHub issue number) - [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature - [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit). - [x] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions. - [x] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
https://api.github.com/repos/pandas-dev/pandas/pulls/50500
2022-12-30T10:16:02Z
2023-01-04T19:11:43Z
2023-01-04T19:11:43Z
2023-01-04T20:24:30Z
API / CoW: constructing DataFrame from DataFrame creates lazy copy
diff --git a/doc/source/whatsnew/v2.0.0.rst b/doc/source/whatsnew/v2.0.0.rst index 9a99dbad30708..4d4dc6f705027 100644 --- a/doc/source/whatsnew/v2.0.0.rst +++ b/doc/source/whatsnew/v2.0.0.rst @@ -951,6 +951,10 @@ Indexing - Bug in :class:`BusinessHour` would cause creation of :class:`DatetimeIndex` to fail when no opening hour was included in the index (:issue:`49835`) - +Copy on write +^^^^^^^^^^^^^ +- Bug in :class:`DataFrame` constructor not tracking reference if called with another :class:`DataFrame` (:issue:`50499`) + Missing ^^^^^^^ - Bug in :meth:`Index.equals` raising ``TypeError`` when :class:`Index` consists of tuples that contain ``NA`` (:issue:`48446`) diff --git a/pandas/core/frame.py b/pandas/core/frame.py index 4e1d5af1e8a4a..9990904ee1fb3 100644 --- a/pandas/core/frame.py +++ b/pandas/core/frame.py @@ -205,6 +205,7 @@ to_arrays, treat_as_nested, ) +from pandas.core.internals.managers import using_copy_on_write from pandas.core.reshape.melt import melt from pandas.core.series import Series from pandas.core.shared_docs import _shared_docs @@ -637,6 +638,8 @@ def __init__( if isinstance(data, DataFrame): data = data._mgr + if not copy and using_copy_on_write(): + data = data.copy(deep=False) if isinstance(data, (BlockManager, ArrayManager)): # first check if a Manager is passed without any other arguments diff --git a/pandas/core/generic.py b/pandas/core/generic.py index 8980fe0249193..7724fb286647c 100644 --- a/pandas/core/generic.py +++ b/pandas/core/generic.py @@ -5257,7 +5257,12 @@ def _reindex_with_indexers( # If we've made a copy once, no need to make another one copy = False - if (copy or copy is None) and new_data is self._mgr: + if ( + (copy or copy is None) + and new_data is self._mgr + or not copy + and using_copy_on_write() + ): new_data = new_data.copy(deep=copy) return self._constructor(new_data).__finalize__(self) diff --git a/pandas/tests/copy_view/test_constructors.py b/pandas/tests/copy_view/test_constructors.py new file mode 100644 
index 0000000000000..bc4c9d91aee18 --- /dev/null +++ b/pandas/tests/copy_view/test_constructors.py @@ -0,0 +1,24 @@ +import numpy as np +import pytest + +from pandas import DataFrame +import pandas._testing as tm +from pandas.tests.copy_view.util import get_array + + +@pytest.mark.parametrize("columns", [None, ["a"]]) +def test_dataframe_constructor_mgr(using_copy_on_write, columns): + df = DataFrame({"a": [1, 2, 3]}) + df_orig = df.copy() + + new_df = DataFrame(df) + + assert np.shares_memory(get_array(df, "a"), get_array(new_df, "a")) + new_df.iloc[0] = 100 + + if using_copy_on_write: + assert not np.shares_memory(get_array(df, "a"), get_array(new_df, "a")) + tm.assert_frame_equal(df, df_orig) + else: + assert np.shares_memory(get_array(df, "a"), get_array(new_df, "a")) + tm.assert_frame_equal(df, new_df) diff --git a/pandas/tests/frame/methods/test_align.py b/pandas/tests/frame/methods/test_align.py index 88963dcc4b0f7..30b7ed963e792 100644 --- a/pandas/tests/frame/methods/test_align.py +++ b/pandas/tests/frame/methods/test_align.py @@ -11,6 +11,7 @@ date_range, ) import pandas._testing as tm +from pandas.core.internals.managers import using_copy_on_write class TestDataFrameAlign: @@ -45,7 +46,10 @@ def test_align_float(self, float_frame): assert af._mgr is not float_frame._mgr af, bf = float_frame.align(float_frame, copy=False) - assert af._mgr is float_frame._mgr + if using_copy_on_write(): + assert af._mgr is not float_frame._mgr + else: + assert af._mgr is float_frame._mgr # axis = 0 other = float_frame.iloc[:-5, :3] diff --git a/pandas/tests/indexing/test_iloc.py b/pandas/tests/indexing/test_iloc.py index 0f85cb4515e13..91f9ad3244f20 100644 --- a/pandas/tests/indexing/test_iloc.py +++ b/pandas/tests/indexing/test_iloc.py @@ -76,7 +76,9 @@ class TestiLocBaseIndependent: ], ) @pytest.mark.parametrize("indexer", [tm.loc, tm.iloc]) - def test_iloc_setitem_fullcol_categorical(self, indexer, key, using_array_manager): + def test_iloc_setitem_fullcol_categorical( 
+ self, indexer, key, using_array_manager, using_copy_on_write + ): frame = DataFrame({0: range(3)}, dtype=object) cat = Categorical(["alpha", "beta", "gamma"]) @@ -90,7 +92,7 @@ def test_iloc_setitem_fullcol_categorical(self, indexer, key, using_array_manage indexer(df)[key, 0] = cat expected = DataFrame({0: cat}).astype(object) - if not using_array_manager: + if not using_array_manager and not using_copy_on_write: assert np.shares_memory(df[0].values, orig_vals) tm.assert_frame_equal(df, expected)
- [ ] closes #xxxx (Replace xxxx with the GitHub issue number) - [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature - [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit). - [x] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions. - [x] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature. cc @jorisvandenbossche
https://api.github.com/repos/pandas-dev/pandas/pulls/50499
2022-12-30T09:01:56Z
2023-02-08T18:01:39Z
null
2023-02-11T11:38:10Z
DOC: Add `pip install odfpy` for io.rst
diff --git a/doc/source/user_guide/io.rst b/doc/source/user_guide/io.rst index 677be7bf29479..6e47ec4e4aa03 100644 --- a/doc/source/user_guide/io.rst +++ b/doc/source/user_guide/io.rst @@ -3833,7 +3833,7 @@ OpenDocument Spreadsheets The io methods for `Excel files`_ also support reading and writing OpenDocument spreadsheets using the `odfpy <https://pypi.org/project/odfpy/>`__ module. The semantics and features for reading and writing OpenDocument spreadsheets match what can be done for `Excel files`_ using -``engine='odf'``. +``engine='odf'``. The optional dependency 'odfpy' needs to be installed. The :func:`~pandas.read_excel` method can read OpenDocument spreadsheets
Fixes #50487 - [x] closes #50487 Perhaps because of my misunderstanding, adding 'pip install' here may not be consistent with the existing docs style, since the existing docs give no direct instruction on how to install the engine manually.
https://api.github.com/repos/pandas-dev/pandas/pulls/50498
2022-12-30T04:09:56Z
2022-12-30T15:06:35Z
2022-12-30T15:06:35Z
2022-12-31T02:13:03Z
DOC: Fix some doctest errors
diff --git a/pandas/core/frame.py b/pandas/core/frame.py index e671f45216968..ca4f5e18a214d 100644 --- a/pandas/core/frame.py +++ b/pandas/core/frame.py @@ -3477,7 +3477,7 @@ def transpose(self, *args, copy: bool = False) -> DataFrame: 0 1 3 1 2 4 - >>> df1_transposed = df1.T # or df1.transpose() + >>> df1_transposed = df1.T # or df1.transpose() >>> df1_transposed 0 1 col1 1 2 @@ -3507,7 +3507,7 @@ def transpose(self, *args, copy: bool = False) -> DataFrame: 0 Alice 9.5 False 0 1 Bob 8.0 True 0 - >>> df2_transposed = df2.T # or df2.transpose() + >>> df2_transposed = df2.T # or df2.transpose() >>> df2_transposed 0 1 name Alice Bob @@ -4744,7 +4744,7 @@ def assign(self, **kwargs) -> DataFrame: of the columns depends on another one defined within the same assign: >>> df.assign(temp_f=lambda x: x['temp_c'] * 9 / 5 + 32, - ... temp_k=lambda x: (x['temp_f'] + 459.67) * 5 / 9) + ... temp_k=lambda x: (x['temp_f'] + 459.67) * 5 / 9) temp_c temp_f temp_k Portland 17.0 62.6 290.15 Berkeley 25.0 77.0 298.15 @@ -5983,8 +5983,8 @@ class max_speed >>> columns = pd.MultiIndex.from_tuples([('speed', 'max'), ... ('species', 'type')]) >>> df = pd.DataFrame([(389.0, 'fly'), - ... ( 24.0, 'fly'), - ... ( 80.5, 'run'), + ... (24.0, 'fly'), + ... (80.5, 'run'), ... (np.nan, 'jump')], ... index=index, ... columns=columns) @@ -8262,14 +8262,14 @@ def groupby( 4 2 1 1 5 4 5 2 2 2 6 5 - >>> df.pivot(index="lev1", columns=["lev2", "lev3"],values="values") + >>> df.pivot(index="lev1", columns=["lev2", "lev3"], values="values") lev2 1 2 lev3 1 2 1 2 lev1 1 0.0 1.0 2.0 NaN 2 4.0 3.0 NaN 5.0 - >>> df.pivot(index=["lev1", "lev2"], columns=["lev3"],values="values") + >>> df.pivot(index=["lev1", "lev2"], columns=["lev3"], values="values") lev3 1 2 lev1 lev2 1 1 0.0 1.0 @@ -8317,7 +8317,8 @@ def pivot( Parameters ----------%s - values : column to aggregate, optional + values : list-like or scalar, optional + Column or columns to aggregate. 
index : column, Grouper, array, or list of the previous If an array is passed, it must be the same length as the data. The list can contain any of the other types (except list). @@ -8403,7 +8404,7 @@ def pivot( This first example aggregates values by taking the sum. >>> table = pd.pivot_table(df, values='D', index=['A', 'B'], - ... columns=['C'], aggfunc=np.sum) + ... columns=['C'], aggfunc=np.sum) >>> table C large small A B @@ -8415,7 +8416,7 @@ def pivot( We can also fill missing values using the `fill_value` parameter. >>> table = pd.pivot_table(df, values='D', index=['A', 'B'], - ... columns=['C'], aggfunc=np.sum, fill_value=0) + ... columns=['C'], aggfunc=np.sum, fill_value=0) >>> table C large small A B @@ -8427,8 +8428,7 @@ def pivot( The next example aggregates by taking the mean across multiple columns. >>> table = pd.pivot_table(df, values=['D', 'E'], index=['A', 'C'], - ... aggfunc={'D': np.mean, - ... 'E': np.mean}) + ... aggfunc={'D': np.mean, 'E': np.mean}) >>> table D E A C @@ -8441,8 +8441,8 @@ def pivot( value column. >>> table = pd.pivot_table(df, values=['D', 'E'], index=['A', 'C'], - ... aggfunc={'D': np.mean, - ... 'E': [min, max, np.mean]}) + ... aggfunc={'D': np.mean, + ... 'E': [min, max, np.mean]}) >>> table D E mean max mean min @@ -10926,7 +10926,8 @@ def to_timestamp( Returns ------- - DataFrame with DatetimeIndex + DataFrame + The DataFrame has a DatetimeIndex. """ new_obj = self.copy(deep=copy) @@ -10960,7 +10961,8 @@ def to_period( Returns ------- - DataFrame with PeriodIndex + DataFrame: + The DataFrame has a PeriodIndex. Examples -------- diff --git a/pandas/core/generic.py b/pandas/core/generic.py index c893e9ce3d9a9..243dd741bad50 100644 --- a/pandas/core/generic.py +++ b/pandas/core/generic.py @@ -4846,8 +4846,8 @@ def sort_values( 4 96hr 50 >>> from natsort import index_natsorted >>> df.sort_values( - ... by="time", - ... key=lambda x: np.argsort(index_natsorted(df["time"])) + ... by="time", + ... 
key=lambda x: np.argsort(index_natsorted(df["time"])) ... ) time value 0 0hr 10 @@ -7694,10 +7694,10 @@ def isna(self: NDFrameT) -> NDFrameT: Show which entries in a DataFrame are NA. >>> df = pd.DataFrame(dict(age=[5, 6, np.NaN], - ... born=[pd.NaT, pd.Timestamp('1939-05-27'), - ... pd.Timestamp('1940-04-25')], - ... name=['Alfred', 'Batman', ''], - ... toy=[None, 'Batmobile', 'Joker'])) + ... born=[pd.NaT, pd.Timestamp('1939-05-27'), + ... pd.Timestamp('1940-04-25')], + ... name=['Alfred', 'Batman', ''], + ... toy=[None, 'Batmobile', 'Joker'])) >>> df age born name toy 0 5.0 NaT Alfred None @@ -7761,10 +7761,10 @@ def notna(self: NDFrameT) -> NDFrameT: Show which entries in a DataFrame are not NA. >>> df = pd.DataFrame(dict(age=[5, 6, np.NaN], - ... born=[pd.NaT, pd.Timestamp('1939-05-27'), - ... pd.Timestamp('1940-04-25')], - ... name=['Alfred', 'Batman', ''], - ... toy=[None, 'Batmobile', 'Joker'])) + ... born=[pd.NaT, pd.Timestamp('1939-05-27'), + ... pd.Timestamp('1940-04-25')], + ... name=['Alfred', 'Batman', ''], + ... toy=[None, 'Batmobile', 'Joker'])) >>> df age born name toy 0 5.0 NaT Alfred None @@ -8153,6 +8153,7 @@ def at_time( Parameters ---------- time : datetime.time or str + The values to select. axis : {0 or 'index', 1 or 'columns'}, default 0 For `Series` this parameter is unused and defaults to 0. @@ -10109,7 +10110,8 @@ def tz_convert( tz : str or tzinfo object or None Target time zone. Passing ``None`` will convert to UTC and remove the timezone information. - axis : the axis to convert + axis : {{0 or 'index', 1 or 'columns'}}, default 0 + The axis to convert level : int, str, default None If axis is a MultiIndex, convert a specific level. Otherwise must be None. @@ -10130,8 +10132,10 @@ def tz_convert( -------- Change to another time zone: - >>> s = pd.Series([1], - ... index=pd.DatetimeIndex(['2018-09-15 01:30:00+02:00'])) + >>> s = pd.Series( + ... [1], + ... index=pd.DatetimeIndex(['2018-09-15 01:30:00+02:00']), + ... 
) >>> s.tz_convert('Asia/Shanghai') 2018-09-15 07:30:00+08:00 1 dtype: int64 @@ -10196,7 +10200,8 @@ def tz_localize( tz : str or tzinfo or None Time zone to localize. Passing ``None`` will remove the time zone information and preserve local time. - axis : the axis to localize + axis : {{0 or 'index', 1 or 'columns'}}, default 0 + The axis to localize level : int, str, default None If axis ia a MultiIndex, localize a specific level. Otherwise must be None. @@ -10245,8 +10250,10 @@ def tz_localize( -------- Localize local times: - >>> s = pd.Series([1], - ... index=pd.DatetimeIndex(['2018-09-15 01:30:00'])) + >>> s = pd.Series( + ... [1], + ... index=pd.DatetimeIndex(['2018-09-15 01:30:00']), + ... ) >>> s.tz_localize('CET') 2018-09-15 01:30:00+02:00 1 dtype: int64 @@ -10480,9 +10487,9 @@ def describe( Describing a timestamp ``Series``. >>> s = pd.Series([ - ... np.datetime64("2000-01-01"), - ... np.datetime64("2010-01-01"), - ... np.datetime64("2010-01-01") + ... np.datetime64("2000-01-01"), + ... np.datetime64("2010-01-01"), + ... np.datetime64("2010-01-01") ... ]) >>> s.describe() count 3 @@ -11740,9 +11747,9 @@ def _doc_params(cls): Examples -------- >>> df = pd.DataFrame({'person_id': [0, 1, 2, 3], -... 'age': [21, 25, 62, 43], -... 'height': [1.61, 1.87, 1.49, 2.01]} -... ).set_index('person_id') +... 'age': [21, 25, 62, 43], +... 'height': [1.61, 1.87, 1.49, 2.01]} +... ).set_index('person_id') >>> df age height person_id @@ -11960,7 +11967,7 @@ def _doc_params(cls): >>> df = pd.DataFrame([[2.0, 1.0], ... [3.0, np.nan], ... [1.0, 0.0]], -... columns=list('AB')) +... columns=list('AB')) >>> df A B 0 2.0 1.0 @@ -12025,7 +12032,7 @@ def _doc_params(cls): >>> df = pd.DataFrame([[2.0, 1.0], ... [3.0, np.nan], ... [1.0, 0.0]], -... columns=list('AB')) +... columns=list('AB')) >>> df A B 0 2.0 1.0 @@ -12090,7 +12097,7 @@ def _doc_params(cls): >>> df = pd.DataFrame([[2.0, 1.0], ... [3.0, np.nan], ... [1.0, 0.0]], -... columns=list('AB')) +... 
columns=list('AB')) >>> df A B 0 2.0 1.0 @@ -12155,7 +12162,7 @@ def _doc_params(cls): >>> df = pd.DataFrame([[2.0, 1.0], ... [3.0, np.nan], ... [1.0, 0.0]], -... columns=list('AB')) +... columns=list('AB')) >>> df A B 0 2.0 1.0 diff --git a/pandas/core/shared_docs.py b/pandas/core/shared_docs.py index 147fa622fdedc..486fab62d93e7 100644 --- a/pandas/core/shared_docs.py +++ b/pandas/core/shared_docs.py @@ -798,8 +798,8 @@ Consider a dataset containing food consumption in Argentina. >>> df = pd.DataFrame({{'consumption': [10.51, 103.11, 55.48], - ... 'co2_emissions': [37.2, 19.66, 1712]}}, - ... index=['Pork', 'Wheat Products', 'Beef']) + ... 'co2_emissions': [37.2, 19.66, 1712]}}, + ... index=['Pork', 'Wheat Products', 'Beef']) >>> df consumption co2_emissions @@ -865,8 +865,8 @@ Consider a dataset containing food consumption in Argentina. >>> df = pd.DataFrame({{'consumption': [10.51, 103.11, 55.48], - ... 'co2_emissions': [37.2, 19.66, 1712]}}, - ... index=['Pork', 'Wheat Products', 'Beef']) + ... 'co2_emissions': [37.2, 19.66, 1712]}}, + ... index=['Pork', 'Wheat Products', 'Beef']) >>> df consumption co2_emissions diff --git a/pandas/core/window/doc.py b/pandas/core/window/doc.py index b1ff53e9d1a44..6e188531a0502 100644 --- a/pandas/core/window/doc.py +++ b/pandas/core/window/doc.py @@ -24,10 +24,10 @@ def create_section_header(header: str) -> str: template_see_also = dedent( """ - pandas.Series.{window_method} : Calling {window_method} with Series data. - pandas.DataFrame.{window_method} : Calling {window_method} with DataFrames. - pandas.Series.{agg_method} : Aggregating {agg_method} for Series. - pandas.DataFrame.{agg_method} : Aggregating {agg_method} for DataFrame.\n + Series.{window_method} : Calling {window_method} with Series data. + DataFrame.{window_method} : Calling {window_method} with DataFrames. + Series.{agg_method} : Aggregating {agg_method} for Series. 
+ DataFrame.{agg_method} : Aggregating {agg_method} for DataFrame.\n """ ).replace("\n", "", 1)
- [ ] closes #xxxx (Replace xxxx with the GitHub issue number) - [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature - [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit). - [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions. - [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
https://api.github.com/repos/pandas-dev/pandas/pulls/50494
2022-12-29T21:42:29Z
2023-01-03T21:46:36Z
2023-01-03T21:46:36Z
2023-01-03T21:46:40Z
CLN avoid some upcasting when it's not the purpose of the test
diff --git a/pandas/tests/frame/methods/test_equals.py b/pandas/tests/frame/methods/test_equals.py index dddd6c6d2eaf2..beec3e965d542 100644 --- a/pandas/tests/frame/methods/test_equals.py +++ b/pandas/tests/frame/methods/test_equals.py @@ -36,7 +36,8 @@ def test_equals(self): df1["start"] = date_range("2000-1-1", periods=10, freq="T") df1["end"] = date_range("2000-1-1", periods=10, freq="D") df1["diff"] = df1["end"] - df1["start"] - df1["bool"] = np.arange(10) % 3 == 0 + # Explicitly cast to object, to avoid implicit cast when setting np.nan + df1["bool"] = (np.arange(10) % 3 == 0).astype(object) df1.loc[::2] = np.nan df2 = df1.copy() assert df1["text"].equals(df2["text"]) diff --git a/pandas/tests/frame/test_query_eval.py b/pandas/tests/frame/test_query_eval.py index e81837898c927..159dab04e7da6 100644 --- a/pandas/tests/frame/test_query_eval.py +++ b/pandas/tests/frame/test_query_eval.py @@ -448,7 +448,8 @@ def test_date_index_query(self): def test_date_index_query_with_NaT(self): engine, parser = self.engine, self.parser n = 10 - df = DataFrame(np.random.randn(n, 3)) + # Cast to object to avoid implicit cast when setting entry to pd.NaT below + df = DataFrame(np.random.randn(n, 3)).astype({0: object}) df["dates1"] = date_range("1/1/2012", periods=n) df["dates3"] = date_range("1/1/2014", periods=n) df.iloc[0, 0] = pd.NaT @@ -808,7 +809,8 @@ def test_date_index_query(self): def test_date_index_query_with_NaT(self): engine, parser = self.engine, self.parser n = 10 - df = DataFrame(np.random.randn(n, 3)) + # Cast to object to avoid implicit cast when setting entry to pd.NaT below + df = DataFrame(np.random.randn(n, 3)).astype({0: object}) df["dates1"] = date_range("1/1/2012", periods=n) df["dates3"] = date_range("1/1/2014", periods=n) df.iloc[0, 0] = pd.NaT diff --git a/pandas/tests/frame/test_reductions.py b/pandas/tests/frame/test_reductions.py index a3cd3e4afdda1..2e0aa5fd0cf40 100644 --- a/pandas/tests/frame/test_reductions.py +++ 
b/pandas/tests/frame/test_reductions.py @@ -449,10 +449,14 @@ def test_var_std(self, datetime_frame): def test_numeric_only_flag(self, meth): # GH 9201 df1 = DataFrame(np.random.randn(5, 3), columns=["foo", "bar", "baz"]) + # Cast to object to avoid implicit cast when setting entry to "100" below + df1 = df1.astype({"foo": object}) # set one entry to a number in str format df1.loc[0, "foo"] = "100" df2 = DataFrame(np.random.randn(5, 3), columns=["foo", "bar", "baz"]) + # Cast to object to avoid implicit cast when setting entry to "a" below + df2 = df2.astype({"foo": object}) # set one entry to a non-number str df2.loc[0, "foo"] = "a" diff --git a/pandas/tests/groupby/test_timegrouper.py b/pandas/tests/groupby/test_timegrouper.py index 4a707d8875db3..f16cf4dd27016 100644 --- a/pandas/tests/groupby/test_timegrouper.py +++ b/pandas/tests/groupby/test_timegrouper.py @@ -103,6 +103,8 @@ def test_groupby_with_timegrouper(self): "20130901", "20131205", freq="5D", name="Date", inclusive="left" ), ) + # Cast to object to avoid implicit cast when setting entry to "CarlCarlCarl" + expected = expected.astype({"Buyer": object}) expected.iloc[0, 0] = "CarlCarlCarl" expected.iloc[6, 0] = "CarlCarl" expected.iloc[18, 0] = "Joe" diff --git a/pandas/tests/series/methods/test_replace.py b/pandas/tests/series/methods/test_replace.py index 59afe22e40f7a..18ad275083022 100644 --- a/pandas/tests/series/methods/test_replace.py +++ b/pandas/tests/series/methods/test_replace.py @@ -16,7 +16,8 @@ def test_replace_explicit_none(self): expected = pd.Series([0, 0, None], dtype=object) tm.assert_series_equal(result, expected) - df = pd.DataFrame(np.zeros((3, 3))) + # Cast column 2 to object to avoid implicit cast when setting entry to "" + df = pd.DataFrame(np.zeros((3, 3))).astype({2: object}) df.iloc[2, 2] = "" result = df.replace("", None) expected = pd.DataFrame(
Generally, tests follow the pattern: - create input - do something to it - check that it matches expected output There are a few places where, whilst creating the test input, some column is created with some dtype (e.g. 'int') and then a value of an incompatible type is set (e.g. a string `'foo'`), thus upcasting the column to dtype object. After that, this input is put through the function the test is meant to exercise. Given that the upcasting isn't the purpose of these tests, it'd be cleaner to create the input with object dtype right away. The "avoid upcasting to object" part of #50424 does seem uncontroversial, so this would help reduce the diff when a set of PRs addressing it is raised.
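The pattern and the fix can be sketched as follows (a minimal illustration modelled on the `test_replace_explicit_none` change in the diff above; the shape and values are just for demonstration):

```python
import numpy as np
import pandas as pd

# Before this PR, such a test created a float frame and then assigned an
# incompatible value (here an empty string), relying on an implicit
# upcast to object dtype as a side effect.
# After: the column is cast to object explicitly, so the assignment
# below does not depend on upcasting behavior at all.
df = pd.DataFrame(np.zeros((3, 3))).astype({2: object})
df.iloc[2, 2] = ""  # column 2 is already object; no implicit upcast

print(df.dtypes.tolist())  # column 2 is object, the others stay float64
```

Creating the object column up front also keeps these tests valid once implicit upcasting on assignment is deprecated or removed.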
https://api.github.com/repos/pandas-dev/pandas/pulls/50493
2022-12-29T21:12:47Z
2023-01-02T09:02:42Z
2023-01-02T09:02:42Z
2023-01-02T09:02:42Z
ENH: Add lazy copy for sort_index
diff --git a/pandas/core/generic.py b/pandas/core/generic.py index c893e9ce3d9a9..23eb66abb8561 100644 --- a/pandas/core/generic.py +++ b/pandas/core/generic.py @@ -4934,7 +4934,7 @@ def sort_index( if inplace: result = self else: - result = self.copy() + result = self.copy(deep=None) if ignore_index: result.index = default_index(len(self)) diff --git a/pandas/tests/copy_view/test_methods.py b/pandas/tests/copy_view/test_methods.py index 878f1d8089d33..dc0bf01c84c74 100644 --- a/pandas/tests/copy_view/test_methods.py +++ b/pandas/tests/copy_view/test_methods.py @@ -405,6 +405,23 @@ def test_reindex_like(using_copy_on_write): tm.assert_frame_equal(df, df_orig) +def test_sort_index(using_copy_on_write): + # GH 49473 + ser = Series([1, 2, 3]) + ser_orig = ser.copy() + ser2 = ser.sort_index() + + if using_copy_on_write: + assert np.shares_memory(ser.values, ser2.values) + else: + assert not np.shares_memory(ser.values, ser2.values) + + # mutating ser triggers a copy-on-write for the column / block + ser2.iloc[0] = 0 + assert not np.shares_memory(ser2.values, ser.values) + tm.assert_series_equal(ser, ser_orig) + + def test_reorder_levels(using_copy_on_write): index = MultiIndex.from_tuples( [(1, 1), (1, 2), (2, 1), (2, 2)], names=["one", "two"]
- [ ] xref #49473 (Replace xxxx with the GitHub issue number) - [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature - [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit). - [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions. - [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
https://api.github.com/repos/pandas-dev/pandas/pulls/50491
2022-12-29T21:05:01Z
2023-01-03T12:25:43Z
2023-01-03T12:25:43Z
2023-01-03T21:48:09Z
ENH: Add lazy copy for tz_convert and tz_localize
diff --git a/pandas/core/generic.py b/pandas/core/generic.py index c893e9ce3d9a9..7e8196c9b27f3 100644 --- a/pandas/core/generic.py +++ b/pandas/core/generic.py @@ -10099,7 +10099,7 @@ def truncate( @final @doc(klass=_shared_doc_kwargs["klass"]) def tz_convert( - self: NDFrameT, tz, axis: Axis = 0, level=None, copy: bool_t = True + self: NDFrameT, tz, axis: Axis = 0, level=None, copy: bool_t | None = None ) -> NDFrameT: """ Convert tz-aware axis to target time zone. @@ -10181,7 +10181,7 @@ def tz_localize( tz, axis: Axis = 0, level=None, - copy: bool_t = True, + copy: bool_t | None = None, ambiguous: TimeAmbiguous = "raise", nonexistent: TimeNonexistent = "raise", ) -> NDFrameT: diff --git a/pandas/tests/copy_view/test_methods.py b/pandas/tests/copy_view/test_methods.py index 878f1d8089d33..d819e806516ad 100644 --- a/pandas/tests/copy_view/test_methods.py +++ b/pandas/tests/copy_view/test_methods.py @@ -5,6 +5,7 @@ DataFrame, MultiIndex, Series, + date_range, ) import pandas._testing as tm from pandas.tests.copy_view.util import get_array @@ -441,6 +442,28 @@ def test_frame_set_axis(using_copy_on_write): tm.assert_frame_equal(df, df_orig) +@pytest.mark.parametrize( + "func, tz", [("tz_convert", "Europe/Berlin"), ("tz_localize", None)] +) +def test_tz_convert_localize(using_copy_on_write, func, tz): + # GH 49473 + ser = Series( + [1, 2], index=date_range(start="2014-08-01 09:00", freq="H", periods=2, tz=tz) + ) + ser_orig = ser.copy() + ser2 = getattr(ser, func)("US/Central") + + if using_copy_on_write: + assert np.shares_memory(ser.values, ser2.values) + else: + assert not np.shares_memory(ser.values, ser2.values) + + # mutating ser triggers a copy-on-write for the column / block + ser2.iloc[0] = 0 + assert not np.shares_memory(ser2.values, ser.values) + tm.assert_series_equal(ser, ser_orig) + + def test_series_set_axis(using_copy_on_write): # GH 49473 ser = Series([1, 2, 3])
- [ ] xref #49473 (Replace xxxx with the GitHub issue number) - [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature - [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit). - [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions. - [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
https://api.github.com/repos/pandas-dev/pandas/pulls/50490
2022-12-29T20:42:04Z
2023-01-03T08:15:17Z
2023-01-03T08:15:17Z
2023-01-03T11:08:35Z
TST: Test cow for set_flags
diff --git a/pandas/core/generic.py b/pandas/core/generic.py index 95804de2a7089..2c66b0c67c3dd 100644 --- a/pandas/core/generic.py +++ b/pandas/core/generic.py @@ -405,6 +405,8 @@ def set_flags( Parameters ---------- + copy : bool, default False + Specify if a copy of the object should be made. allows_duplicate_labels : bool, optional Whether the returned object allows duplicate labels. diff --git a/pandas/tests/copy_view/test_methods.py b/pandas/tests/copy_view/test_methods.py index 2b3d13b982d4d..9e3d8f8076be1 100644 --- a/pandas/tests/copy_view/test_methods.py +++ b/pandas/tests/copy_view/test_methods.py @@ -500,6 +500,24 @@ def test_series_set_axis(using_copy_on_write): tm.assert_series_equal(ser, ser_orig) +def test_set_flags(using_copy_on_write): + ser = Series([1, 2, 3]) + ser_orig = ser.copy() + ser2 = ser.set_flags(allows_duplicate_labels=False) + + assert np.shares_memory(ser, ser2) + + # mutating ser triggers a copy-on-write for the column / block + ser2.iloc[0] = 0 + if using_copy_on_write: + assert not np.shares_memory(ser2, ser) + tm.assert_series_equal(ser, ser_orig) + else: + assert np.shares_memory(ser2, ser) + expected = Series([0, 2, 3]) + tm.assert_series_equal(ser, expected) + + @pytest.mark.parametrize("copy_kwargs", [{"copy": True}, {}]) @pytest.mark.parametrize("kwargs", [{"mapper": "test"}, {"index": "test"}]) def test_rename_axis(using_copy_on_write, kwargs, copy_kwargs):
- [ ] closes #xxxx (Replace xxxx with the GitHub issue number) - [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature - [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit). - [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions. - [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature. No changes needed here, just adding a test to ensure behavior does not change
https://api.github.com/repos/pandas-dev/pandas/pulls/50489
2022-12-29T20:26:01Z
2023-01-03T21:41:16Z
2023-01-03T21:41:16Z
2023-02-11T11:38:28Z
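The copy-on-write behavior exercised by the test above can be sketched in a few lines. This is an illustrative snippet, not part of the PR; only the flag semantics are shown, since whether the buffer is still shared after a mutation depends on the pandas CoW mode in use.

```python
import pandas as pd

ser = pd.Series([1, 2, 3])
# set_flags returns a new Series object carrying the new flag; under
# copy-on-write the underlying buffer is shared until one side is mutated
ser2 = ser.set_flags(allows_duplicate_labels=False)

print(ser2.flags.allows_duplicate_labels)  # False
# the original Series keeps its default flag
print(ser.flags.allows_duplicate_labels)   # True
```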
TST: GH50347 create test for groupby sum of boolean values.
diff --git a/pandas/tests/groupby/test_groupby.py b/pandas/tests/groupby/test_groupby.py index b2fc60b76fdf6..a59c2853fa50b 100644 --- a/pandas/tests/groupby/test_groupby.py +++ b/pandas/tests/groupby/test_groupby.py @@ -2828,3 +2828,13 @@ def test_groupby_index_name_in_index_content(val_in, index, val_out): result = series.to_frame().groupby("blah").sum() expected = expected.to_frame() tm.assert_frame_equal(result, expected) + + +@pytest.mark.parametrize("n", [1, 10, 32, 100, 1000]) +def test_sum_of_booleans(n): + # GH 50347 + df = DataFrame({"groupby_col": 1, "bool": [True] * n}) + df["bool"] = df["bool"].eq(True) + result = df.groupby("groupby_col").sum() + expected = DataFrame({"bool": [n]}, index=Index([1], name="groupby_col")) + tm.assert_frame_equal(result, expected)
- [x] closes #50347 - [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature - [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit). Does not apply: - [x] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions. - [x] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
https://api.github.com/repos/pandas-dev/pandas/pulls/50488
2022-12-29T19:39:55Z
2022-12-30T15:07:18Z
2022-12-30T15:07:18Z
2022-12-30T15:07:24Z
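The behavior the test above pins down can be reproduced directly: summing a boolean column inside a groupby counts the `True` values per group. A minimal sketch, using one of the sizes from the parametrized test:

```python
import pandas as pd

n = 32
df = pd.DataFrame({"groupby_col": 1, "bool": [True] * n})
# summing a boolean column counts the True values in each group
result = df.groupby("groupby_col")["bool"].sum()
print(result.loc[1])  # 32
```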
CI/WEB: Adding cache for maintainers' github info
diff --git a/.github/workflows/docbuild-and-upload.yml b/.github/workflows/docbuild-and-upload.yml index 908259597cafb..7a9f491228a83 100644 --- a/.github/workflows/docbuild-and-upload.yml +++ b/.github/workflows/docbuild-and-upload.yml @@ -46,6 +46,12 @@ jobs: - name: Build Pandas uses: ./.github/actions/build_pandas + - name: Set up maintainers cache + uses: actions/cache@v3 + with: + path: maintainers.json + key: maintainers + - name: Build website run: python web/pandas_web.py web/pandas --target-path=web/build diff --git a/web/pandas_web.py b/web/pandas_web.py index e9e8e70066b3f..4c30e1959fdff 100755 --- a/web/pandas_web.py +++ b/web/pandas_web.py @@ -27,6 +27,7 @@ import collections import datetime import importlib +import json import operator import os import pathlib @@ -163,6 +164,18 @@ def maintainers_add_info(context): Given the active maintainers defined in the yaml file, it fetches the GitHub user information for them. """ + timestamp = time.time() + + cache_file = pathlib.Path("maintainers.json") + if cache_file.is_file(): + with open(cache_file) as f: + context["maintainers"] = json.load(f) + # refresh cache after 1 hour + if (timestamp - context["maintainers"]["timestamp"]) < 3_600: + return context + + context["maintainers"]["timestamp"] = timestamp + repeated = set(context["maintainers"]["active"]) & set( context["maintainers"]["inactive"] ) @@ -179,6 +192,10 @@ def maintainers_add_info(context): return context resp.raise_for_status() context["maintainers"][f"{kind}_with_github_info"].append(resp.json()) + + with open(cache_file, "w") as f: + json.dump(context["maintainers"], f) + return context @staticmethod
I've seen some CI failures recently caused by hitting github API quota when building the website. The website requests from github the info of maintainers to show names and photos in https://pandas.pydata.org/about/team.html (also the releases in the home, but that's a single request not affecting the quota much). Feels like in the past every github worker was independent and had its own quota, but now seems like running jobs too quickly hits the quota. In this PR I add a cache to reuse the maintainers info for one hour (the time it takes the quota to reset), which should avoid those failures.
https://api.github.com/repos/pandas-dev/pandas/pulls/50485
2022-12-29T08:20:29Z
2022-12-29T10:29:28Z
2022-12-29T10:29:28Z
2023-01-04T10:33:52Z
DOC: Clean dubious docstring
diff --git a/pandas/_libs/tslibs/offsets.pyx b/pandas/_libs/tslibs/offsets.pyx index 3ff50be966fa0..0bc9751694e9f 100644 --- a/pandas/_libs/tslibs/offsets.pyx +++ b/pandas/_libs/tslibs/offsets.pyx @@ -366,6 +366,21 @@ class ApplyTypeError(TypeError): cdef class BaseOffset: """ Base class for DateOffset methods that are not overridden by subclasses. + + Parameters + ---------- + n : int + Number of multiples of the frequency. + + normalize : bool + Whether the frequency can align with midnight. + + Examples + -------- + >>> pd.offsets.Hour(5).n + 5 + >>> pd.offsets.Hour(5).normalize + False """ # ensure that reversed-ops with numpy scalars return NotImplemented __array_priority__ = 1000 @@ -384,23 +399,7 @@ cdef class BaseOffset: def __init__(self, n=1, normalize=False): n = self._validate_n(n) self.n = n - """ - Number of multiples of the frequency. - - Examples - -------- - >>> pd.offsets.Hour(5).n - 5 - """ self.normalize = normalize - """ - Return boolean whether the frequency can align with midnight. - - Examples - -------- - >>> pd.offsets.Hour(5).normalize - False - """ self._cache = {} def __eq__(self, other) -> bool:
Fixes #50344 - [x] closes #50344
https://api.github.com/repos/pandas-dev/pandas/pulls/50484
2022-12-29T08:13:27Z
2022-12-29T15:29:08Z
2022-12-29T15:29:08Z
2022-12-30T03:41:47Z
DEPR: Remove unnecessary kwargs in groupby ops
diff --git a/doc/source/whatsnew/v2.0.0.rst b/doc/source/whatsnew/v2.0.0.rst index 0ef5636a97d40..f6c729959aec1 100644 --- a/doc/source/whatsnew/v2.0.0.rst +++ b/doc/source/whatsnew/v2.0.0.rst @@ -505,6 +505,7 @@ Other API changes Deprecations ~~~~~~~~~~~~ - Deprecated argument ``infer_datetime_format`` in :func:`to_datetime` and :func:`read_csv`, as a strict version of it is now the default (:issue:`48621`) +- Deprecate unused argument ``**kwargs`` from :meth:`cummax`, :meth:`cummin`, :meth:`cumsum`, :meth:`cumprod`, :meth:`take` and :meth:`skew` (:issue:`50483`) .. --------------------------------------------------------------------------- diff --git a/pandas/core/groupby/generic.py b/pandas/core/groupby/generic.py index 955f65585963d..eb030bf3afc44 100644 --- a/pandas/core/groupby/generic.py +++ b/pandas/core/groupby/generic.py @@ -48,6 +48,7 @@ from pandas.util._decorators import ( Appender, Substitution, + deprecate_nonkeyword_arguments, doc, ) @@ -893,6 +894,9 @@ def fillna( return result @doc(Series.take.__doc__) + @deprecate_nonkeyword_arguments( + version="3.0", allowed_args=["self", "axis", "indices"] + ) def take( self, indices: TakeIndexer, @@ -903,6 +907,9 @@ def take( return result @doc(Series.skew.__doc__) + @deprecate_nonkeyword_arguments( + version="3.0", allowed_args=["self", "axis", "skipna", "numeric_only"] + ) def skew( self, axis: Axis | lib.NoDefault = lib.no_default, @@ -911,11 +918,7 @@ def skew( **kwargs, ) -> Series: result = self._op_via_apply( - "skew", - axis=axis, - skipna=skipna, - numeric_only=numeric_only, - **kwargs, + "skew", axis=axis, skipna=skipna, numeric_only=numeric_only, **kwargs ) return result @@ -2272,16 +2275,17 @@ def fillna( return result @doc(DataFrame.take.__doc__) - def take( - self, - indices: TakeIndexer, - axis: Axis | None = 0, - **kwargs, - ) -> DataFrame: + @deprecate_nonkeyword_arguments( + version="3.0", allowed_args=["self", "indices", "axis"] + ) + def take(self, indices: TakeIndexer, axis: Axis | None = 
0, **kwargs) -> DataFrame: result = self._op_via_apply("take", indices=indices, axis=axis, **kwargs) return result @doc(DataFrame.skew.__doc__) + @deprecate_nonkeyword_arguments( + version="3.0", allowed_args=["self", "axis", "skipna", "numeric_only"] + ) def skew( self, axis: Axis | None | lib.NoDefault = lib.no_default, @@ -2290,11 +2294,7 @@ def skew( **kwargs, ) -> DataFrame: result = self._op_via_apply( - "skew", - axis=axis, - skipna=skipna, - numeric_only=numeric_only, - **kwargs, + "skew", axis=axis, skipna=skipna, numeric_only=numeric_only, **kwargs ) return result diff --git a/pandas/core/groupby/groupby.py b/pandas/core/groupby/groupby.py index 729f34544e2bc..7b810b4b65602 100644 --- a/pandas/core/groupby/groupby.py +++ b/pandas/core/groupby/groupby.py @@ -69,6 +69,7 @@ class providing the base-class of operations. Appender, Substitution, cache_readonly, + deprecate_nonkeyword_arguments, doc, ) @@ -3411,6 +3412,7 @@ def rank( @final @Substitution(name="groupby") @Appender(_common_see_also) + @deprecate_nonkeyword_arguments(version="3.0", allowed_args=["self", "axis"]) def cumprod(self, axis: Axis = 0, *args, **kwargs) -> NDFrameT: """ Cumulative product for each group. @@ -3429,6 +3431,7 @@ def cumprod(self, axis: Axis = 0, *args, **kwargs) -> NDFrameT: @final @Substitution(name="groupby") @Appender(_common_see_also) + @deprecate_nonkeyword_arguments(version="3.0", allowed_args=["self", "axis"]) def cumsum(self, axis: Axis = 0, *args, **kwargs) -> NDFrameT: """ Cumulative sum for each group. 
@@ -3447,6 +3450,9 @@ def cumsum(self, axis: Axis = 0, *args, **kwargs) -> NDFrameT: @final @Substitution(name="groupby") @Appender(_common_see_also) + @deprecate_nonkeyword_arguments( + version="3.0", allowed_args=["self", "axis", "numeric_only"] + ) def cummin( self, axis: AxisInt = 0, numeric_only: bool = False, **kwargs ) -> NDFrameT: @@ -3472,6 +3478,9 @@ def cummin( @final @Substitution(name="groupby") @Appender(_common_see_also) + @deprecate_nonkeyword_arguments( + version="3.0", allowed_args=["self", "axis", "numeric_only"] + ) def cummax( self, axis: AxisInt = 0, numeric_only: bool = False, **kwargs ) -> NDFrameT: diff --git a/pandas/tests/groupby/test_function.py b/pandas/tests/groupby/test_function.py index bb15783f4607f..2f9cfceb000e6 100644 --- a/pandas/tests/groupby/test_function.py +++ b/pandas/tests/groupby/test_function.py @@ -1594,3 +1594,19 @@ def test_multiindex_group_all_columns_when_empty(groupby_func): result = method(*args).index expected = df.index tm.assert_index_equal(result, expected) + + +def test_deprecated_keywords(): + msg = ( + "In the future version of pandas the arguments args" + "and kwargs will be deprecated for these functions" + ) + with tm.assert_produces_warning(FutureWarning, match=msg): + df = DataFrame({"a": [1, 1, 2], "b": [3, 4, 5]}) + gb = df.groupby("a") + assert gb.cummax(kwargs=1) + assert gb.cummin(kwargs=1) + assert gb.cumsum(args=1, kwargs=1) + assert gb.cumprod(args=1, kwargs=1) + assert gb.skew(kwargs=1) + assert gb.take(kwargs=1)
- [x] closes #50407 - [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature - [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit). - [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions. - [x] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
https://api.github.com/repos/pandas-dev/pandas/pulls/50483
2022-12-29T06:21:25Z
2023-02-11T21:00:24Z
null
2023-02-11T21:00:25Z
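The `deprecate_nonkeyword_arguments` decorator applied throughout the diff above warns when callers rely on arguments that are slated to become keyword-only. A toy reimplementation of the idea, purely illustrative (the real pandas decorator lives in `pandas.util._decorators` and rewrites the signature rather than counting positionals):

```python
import functools
import warnings


def deprecate_extra_positional(allowed: int, version: str):
    """Warn with FutureWarning when more than `allowed` positional
    arguments are passed (toy sketch of the pandas decorator)."""

    def decorate(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            if len(args) > allowed:
                warnings.warn(
                    f"Starting with pandas {version}, extra arguments to "
                    f"{func.__name__} must be passed as keywords",
                    FutureWarning,
                    stacklevel=2,
                )
            return func(*args, **kwargs)

        return wrapper

    return decorate


@deprecate_extra_positional(allowed=2, version="3.0")
def cumsum(self, axis=0, *args, **kwargs):
    # stand-in for the groupby method: only self and axis stay positional
    return axis
```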
DEP: Bump pytest, xdist, hypothesis
diff --git a/.github/workflows/32-bit-linux.yml b/.github/workflows/32-bit-linux.yml index 438d2c7b4174e..fc00dcf73cbdd 100644 --- a/.github/workflows/32-bit-linux.yml +++ b/.github/workflows/32-bit-linux.yml @@ -39,7 +39,7 @@ jobs: . ~/virtualenvs/pandas-dev/bin/activate && \ python -m pip install --no-deps -U pip wheel 'setuptools<60.0.0' && \ python -m pip install versioneer[toml] && \ - python -m pip install cython numpy python-dateutil pytz pytest pytest-xdist pytest-asyncio>=0.17 hypothesis && \ + python -m pip install cython numpy python-dateutil pytz pytest>=7.0.0 pytest-xdist>=2.2.0 pytest-asyncio>=0.17 hypothesis>=6.34.2 && \ python setup.py build_ext -q -j1 && \ python -m pip install --no-build-isolation --no-use-pep517 -e . && \ python -m pip list && \ diff --git a/.github/workflows/python-dev.yml b/.github/workflows/python-dev.yml index 220c1e464742e..a0030517d1e99 100644 --- a/.github/workflows/python-dev.yml +++ b/.github/workflows/python-dev.yml @@ -76,7 +76,7 @@ jobs: python -m pip install -i https://pypi.anaconda.org/scipy-wheels-nightly/simple numpy python -m pip install git+https://github.com/nedbat/coveragepy.git python -m pip install versioneer[toml] - python -m pip install python-dateutil pytz cython hypothesis==6.52.1 pytest>=6.2.5 pytest-xdist pytest-cov pytest-asyncio>=0.17 + python -m pip install python-dateutil pytz cython hypothesis>=6.34.2 pytest>=7.0.0 pytest-xdist>=2.2.0 pytest-cov pytest-asyncio>=0.17 python -m pip list # GH 47305: Parallel build can cause flaky ImportError from pandas/_libs/tslibs diff --git a/.github/workflows/wheels.yml b/.github/workflows/wheels.yml index 0e347b166e425..12e8a89cd0382 100644 --- a/.github/workflows/wheels.yml +++ b/.github/workflows/wheels.yml @@ -168,7 +168,7 @@ jobs: # (1. Generate sdist 2. 
Build wheels from sdist) # This tests the sdists, and saves some build time python -m pip install dist/*.gz - pip install hypothesis==6.52.1 pytest>=6.2.5 pytest-xdist pytest-asyncio>=0.17 + pip install hypothesis>=6.34.2 pytest>=7.0.0 pytest-xdist>=2.2.0 pytest-asyncio>=0.17 cd .. # Not a good idea to test within the src tree python -c "import pandas; print(pandas.__version__); pandas.test(extra_args=['-m not clipboard and not single_cpu', '--skip-slow', '--skip-network', '--skip-db', '-n=2']); diff --git a/ci/deps/actions-310-numpydev.yaml b/ci/deps/actions-310-numpydev.yaml index 863c231b18c4f..8b76765e9fba6 100644 --- a/ci/deps/actions-310-numpydev.yaml +++ b/ci/deps/actions-310-numpydev.yaml @@ -1,6 +1,6 @@ name: pandas-dev channels: - - defaults + - conda-forge dependencies: - python=3.10 @@ -8,10 +8,10 @@ dependencies: - versioneer[toml] # test dependencies - - pytest>=6.0 + - pytest>=7.0.0 - pytest-cov - - pytest-xdist>=1.31 - - hypothesis>=5.5.3 + - pytest-xdist>=2.2.0 + - hypothesis>=6.34.2 - pytest-asyncio>=0.17 # pandas dependencies diff --git a/ci/deps/actions-310.yaml b/ci/deps/actions-310.yaml index 79457cd503876..e21cb8b594ff7 100644 --- a/ci/deps/actions-310.yaml +++ b/ci/deps/actions-310.yaml @@ -9,9 +9,9 @@ dependencies: - cython>=0.29.32 # test dependencies - - pytest>=6.0 + - pytest>=7.0.0 - pytest-cov - - pytest-xdist>=1.31 + - pytest-xdist>=2.2.0 - psutil - pytest-asyncio>=0.17 - boto3 diff --git a/ci/deps/actions-38-downstream_compat.yaml b/ci/deps/actions-38-downstream_compat.yaml index 6955baa282274..0403859cc87da 100644 --- a/ci/deps/actions-38-downstream_compat.yaml +++ b/ci/deps/actions-38-downstream_compat.yaml @@ -10,9 +10,9 @@ dependencies: - cython>=0.29.32 # test dependencies - - pytest>=6.0 + - pytest>=7.0.0 - pytest-cov - - pytest-xdist>=1.31 + - pytest-xdist>=2.2.0 - psutil - pytest-asyncio>=0.17 - boto3 diff --git a/ci/deps/actions-38-minimum_versions.yaml b/ci/deps/actions-38-minimum_versions.yaml index 
de7e793c46d19..caeee07c324d1 100644 --- a/ci/deps/actions-38-minimum_versions.yaml +++ b/ci/deps/actions-38-minimum_versions.yaml @@ -11,9 +11,9 @@ dependencies: - cython>=0.29.32 # test dependencies - - pytest>=6.0 + - pytest>=7.0.0 - pytest-cov - - pytest-xdist>=1.31 + - pytest-xdist>=2.2.0 - psutil - pytest-asyncio>=0.17 - boto3 @@ -31,7 +31,7 @@ dependencies: - fastparquet=0.6.3 - fsspec=2021.07.0 - html5lib=1.1 - - hypothesis=6.13.0 + - hypothesis=6.34.2 - gcsfs=2021.07.0 - jinja2=3.0.0 - lxml=4.6.3 diff --git a/ci/deps/actions-38.yaml b/ci/deps/actions-38.yaml index 004ef93606457..789127c6b148d 100644 --- a/ci/deps/actions-38.yaml +++ b/ci/deps/actions-38.yaml @@ -9,9 +9,9 @@ dependencies: - cython>=0.29.32 # test dependencies - - pytest>=6.0 + - pytest>=7.0.0 - pytest-cov - - pytest-xdist>=1.31 + - pytest-xdist>=2.2.0 - psutil - pytest-asyncio>=0.17 - boto3 diff --git a/ci/deps/actions-39.yaml b/ci/deps/actions-39.yaml index ec7ffebde964f..f45d565bad81b 100644 --- a/ci/deps/actions-39.yaml +++ b/ci/deps/actions-39.yaml @@ -9,9 +9,9 @@ dependencies: - cython>=0.29.32 # test dependencies - - pytest>=6.0 + - pytest>=7.0.0 - pytest-cov - - pytest-xdist>=1.31 + - pytest-xdist>=2.2.0 - psutil - pytest-asyncio>=0.17 - boto3 diff --git a/ci/deps/actions-pypy-38.yaml b/ci/deps/actions-pypy-38.yaml index 054129c4198a1..802b3f4108362 100644 --- a/ci/deps/actions-pypy-38.yaml +++ b/ci/deps/actions-pypy-38.yaml @@ -12,11 +12,11 @@ dependencies: - cython>=0.29.32 # test dependencies - - pytest>=6.0 + - pytest>=7.0.0 - pytest-cov - pytest-asyncio - - pytest-xdist>=1.31 - - hypothesis>=5.5.3 + - pytest-xdist>=2.2.0 + - hypothesis>=6.34.2 # required - numpy<1.24 diff --git a/ci/deps/circle-38-arm64.yaml b/ci/deps/circle-38-arm64.yaml index b4171710564bf..2fbb7d28d5210 100644 --- a/ci/deps/circle-38-arm64.yaml +++ b/ci/deps/circle-38-arm64.yaml @@ -9,9 +9,9 @@ dependencies: - cython>=0.29.32 # test dependencies - - pytest>=6.0 + - pytest>=7.0.0 - pytest-cov - - 
pytest-xdist>=1.31 + - pytest-xdist>=2.2.0 - psutil - pytest-asyncio>=0.17 - boto3 diff --git a/ci/test_wheels_windows.bat b/ci/test_wheels_windows.bat index 9f4ebdabbf8d1..ae7869d63b1ff 100644 --- a/ci/test_wheels_windows.bat +++ b/ci/test_wheels_windows.bat @@ -4,6 +4,6 @@ pd.test(extra_args=['-m not clipboard and single_cpu', '--skip-slow', '--skip-ne python --version pip install pytz six numpy python-dateutil -pip install hypothesis==6.52.1 pytest>=6.2.5 pytest-xdist pytest-asyncio>=0.17 +pip install hypothesis>=6.34.2 pytest>=7.0.0 pytest-xdist>=2.2.0 pytest-asyncio>=0.17 pip install --find-links=pandas/dist --no-index pandas -python -c "%test_command%" +python -c "%test_command%" diff --git a/doc/source/getting_started/install.rst b/doc/source/getting_started/install.rst index 68065c77f7881..70a27a84a7592 100644 --- a/doc/source/getting_started/install.rst +++ b/doc/source/getting_started/install.rst @@ -208,8 +208,8 @@ pandas is equipped with an exhaustive set of unit tests, covering about 97% of the code base as of this writing. To run it on your machine to verify that everything is working (and that you have all of the dependencies, soft and hard, installed), make sure you have `pytest -<https://docs.pytest.org/en/latest/>`__ >= 6.0 and `Hypothesis -<https://hypothesis.readthedocs.io/en/latest/>`__ >= 6.13.0, then run: +<https://docs.pytest.org/en/latest/>`__ >= 7.0 and `Hypothesis +<https://hypothesis.readthedocs.io/en/latest/>`__ >= 6.34.2, then run: :: diff --git a/doc/source/whatsnew/v2.0.0.rst b/doc/source/whatsnew/v2.0.0.rst index 3ac004ef335ac..a261492f04ed0 100644 --- a/doc/source/whatsnew/v2.0.0.rst +++ b/doc/source/whatsnew/v2.0.0.rst @@ -447,13 +447,19 @@ Increased minimum versions for dependencies Some minimum supported versions of dependencies were updated. 
If installed, we now require: -+-----------------+-----------------+----------+---------+ -| Package | Minimum Version | Required | Changed | -+=================+=================+==========+=========+ -| mypy (dev) | 0.991 | | X | -+-----------------+-----------------+----------+---------+ -| python-dateutil | 2.8.2 | X | X | -+-----------------+-----------------+----------+---------+ ++-------------------+-----------------+----------+---------+ +| Package | Minimum Version | Required | Changed | ++===================+=================+==========+=========+ +| mypy (dev) | 0.991 | | X | ++-------------------+-----------------+----------+---------+ +| pytest (dev) | 7.0.0 | | X | ++-------------------+-----------------+----------+---------+ +| pytest-xdist (dev)| 2.2.0 | | X | ++-------------------+-----------------+----------+---------+ +| hypothesis (dev) | 6.34.2 | | X | ++-------------------+-----------------+----------+---------+ +| python-dateutil | 2.8.2 | X | X | ++-------------------+-----------------+----------+---------+ For `optional libraries <https://pandas.pydata.org/docs/getting_started/install.html>`_ the general recommendation is to use the latest version. The following table lists the lowest version per library that is currently being tested throughout the development of pandas. 
diff --git a/environment.yml b/environment.yml index 9e4c6db82b2ce..af6dacc84b704 100644 --- a/environment.yml +++ b/environment.yml @@ -11,9 +11,9 @@ dependencies: - cython=0.29.32 # test dependencies - - pytest>=6.0 + - pytest>=7.0.0 - pytest-cov - - pytest-xdist>=1.31 + - pytest-xdist>=2.2.0 - psutil - pytest-asyncio>=0.17 - coverage diff --git a/pandas/compat/_optional.py b/pandas/compat/_optional.py index 9bd4b384fadb0..d98b23b215565 100644 --- a/pandas/compat/_optional.py +++ b/pandas/compat/_optional.py @@ -19,7 +19,7 @@ "fastparquet": "0.6.3", "fsspec": "2021.07.0", "html5lib": "1.1", - "hypothesis": "6.13.0", + "hypothesis": "6.34.2", "gcsfs": "2021.07.0", "jinja2": "3.0.0", "lxml.etree": "4.6.3", @@ -33,7 +33,7 @@ "pymysql": "1.0.2", "pyarrow": "6.0.0", "pyreadstat": "1.1.2", - "pytest": "6.0", + "pytest": "7.0.0", "pyxlsb": "1.0.8", "s3fs": "2021.08.0", "scipy": "1.7.1", diff --git a/pyproject.toml b/pyproject.toml index a41204590bcf6..9ef722d95c8b3 100644 --- a/pyproject.toml +++ b/pyproject.toml @@ -55,7 +55,7 @@ repository = 'https://github.com/pandas-dev/pandas' matplotlib = "pandas:plotting._matplotlib" [project.optional-dependencies] -test = ['hypothesis>=5.5.3', 'pytest>=6.0', 'pytest-xdist>=1.31', 'pytest-asyncio>=0.17.0'] +test = ['hypothesis>=6.34.2', 'pytest>=7.0.0', 'pytest-xdist>=2.2.0', 'pytest-asyncio>=0.17.0'] performance = ['bottleneck>=1.3.2', 'numba>=0.53.1', 'numexpr>=2.7.1'] timezone = ['tzdata>=2022.1'] computation = ['scipy>=1.7.1', 'xarray>=0.21.0'] @@ -87,7 +87,7 @@ all = ['beautifulsoup4>=4.9.3', 'fsspec>=2021.07.0', 'gcsfs>=2021.07.0', 'html5lib>=1.1', - 'hypothesis>=6.13.0', + 'hypothesis>=6.34.2', 'jinja2>=3.0.0', 'lxml>=4.6.3', 'matplotlib>=3.6.1', @@ -101,8 +101,8 @@ all = ['beautifulsoup4>=4.9.3', 'pymysql>=1.0.2', 'PyQt5>=5.15.1', 'pyreadstat>=1.1.2', - 'pytest>=6.0', - 'pytest-xdist>=1.31', + 'pytest>=7.0.0', + 'pytest-xdist>=2.2.0', 'pytest-asyncio>=0.17.0', 'python-snappy>=0.6.0', 'pyxlsb>=1.0.8', @@ -143,7 +143,7 @@ 
parentdir_prefix = "pandas-" [tool.cibuildwheel] skip = "cp36-* cp37-* pp37-* *-manylinux_i686 *_ppc64le *_s390x *-musllinux*" build-verbosity = "3" -test-requires = "hypothesis==6.52.1 pytest>=6.2.5 pytest-xdist pytest-asyncio>=0.17" +test-requires = "hypothesis>=6.34.2 pytest>=7.0.0 pytest-xdist>=2.2.0 pytest-asyncio>=0.17" test-command = "python {project}/ci/test_wheels.py" [tool.cibuildwheel.macos] diff --git a/requirements-dev.txt b/requirements-dev.txt index 1a2e39bb47786..b63275796a4da 100644 --- a/requirements-dev.txt +++ b/requirements-dev.txt @@ -4,9 +4,9 @@ pip versioneer[toml] cython==0.29.32 -pytest>=6.0 +pytest>=7.0.0 pytest-cov -pytest-xdist>=1.31 +pytest-xdist>=2.2.0 psutil pytest-asyncio>=0.17 coverage diff --git a/scripts/validate_min_versions_in_sync.py b/scripts/validate_min_versions_in_sync.py index 2186e7c8ff9ef..7c102096c1690 100755 --- a/scripts/validate_min_versions_in_sync.py +++ b/scripts/validate_min_versions_in_sync.py @@ -47,11 +47,7 @@ def get_versions_from_code() -> dict[str, str]: versions = _optional.VERSIONS for item in EXCLUDE_DEPS: versions.pop(item, None) - return { - install_map.get(k, k).casefold(): v - for k, v in versions.items() - if k != "pytest" - } + return {install_map.get(k, k).casefold(): v for k, v in versions.items()} def get_versions_from_ci(content: list[str]) -> tuple[dict[str, str], dict[str, str]]: @@ -59,10 +55,18 @@ def get_versions_from_ci(content: list[str]) -> tuple[dict[str, str], dict[str, # Don't parse with pyyaml because it ignores comments we're looking for seen_required = False seen_optional = False + seen_test = False required_deps = {} optional_deps = {} for line in content: - if "# required dependencies" in line: + if "# test dependencies" in line: + seen_test = True + elif seen_test and "- pytest>=" in line: + # Only grab pytest + package, version = line.strip().split(">=") + package = package[2:] + optional_deps[package.casefold()] = version + elif "# required dependencies" in line: 
seen_required = True elif "# optional dependencies" in line: seen_optional = True @@ -87,7 +91,6 @@ def get_versions_from_ci(content: list[str]) -> tuple[dict[str, str], dict[str, def get_versions_from_toml() -> dict[str, str]: """Min versions in pyproject.toml for pip install pandas[extra].""" install_map = _optional.INSTALL_MAPPING - dependencies = set() optional_dependencies = {} with open(SETUP_PATH, "rb") as pyproject_f: @@ -95,9 +98,9 @@ def get_versions_from_toml() -> dict[str, str]: opt_deps = pyproject_toml["project"]["optional-dependencies"] dependencies = set(opt_deps["all"]) - # remove test dependencies - test_deps = set(opt_deps["test"]) - dependencies = dependencies.difference(test_deps) + # remove pytest plugin dependencies + pytest_plugins = {dep for dep in opt_deps["test"] if dep.startswith("pytest-")} + dependencies = dependencies.difference(pytest_plugins) for dependency in dependencies: package, version = dependency.strip().split(">=")
All these libraries were released ~1 year ago.
https://api.github.com/repos/pandas-dev/pandas/pulls/50481
2022-12-29T01:33:02Z
2023-01-12T17:16:38Z
2023-01-12T17:16:38Z
2023-01-12T17:19:39Z
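The version bumps above are kept in sync by `scripts/validate_min_versions_in_sync.py`, which splits each `pkg>=ver` pin on the `>=` marker. A minimal sketch of that parsing step (the function name `parse_pin` is mine, not the script's):

```python
def parse_pin(dep: str):
    """Split a '>=' requirement string into (package, min_version),
    mirroring how the sync-validation script reads pins."""
    package, version = dep.strip().split(">=")
    return package.casefold(), version


print(parse_pin("pytest>=7.0.0"))        # ('pytest', '7.0.0')
print(parse_pin("pytest-xdist>=2.2.0"))  # ('pytest-xdist', '2.2.0')
```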
DEPR: Remove int64 uint64 float64 index tests
diff --git a/pandas/core/indexes/base.py b/pandas/core/indexes/base.py index 7d5a7ac5945d6..100ff97ed512c 100644 --- a/pandas/core/indexes/base.py +++ b/pandas/core/indexes/base.py @@ -90,7 +90,6 @@ ensure_platform_int, is_bool_dtype, is_categorical_dtype, - is_complex_dtype, is_dtype_equal, is_ea_or_datetimelike_dtype, is_extension_array_dtype, @@ -107,7 +106,6 @@ is_scalar, is_signed_integer_dtype, is_string_dtype, - is_unsigned_integer_dtype, needs_i8_conversion, pandas_dtype, validate_all_hashable, @@ -125,7 +123,6 @@ ABCDatetimeIndex, ABCMultiIndex, ABCPeriodIndex, - ABCRangeIndex, ABCSeries, ABCTimedeltaIndex, ) @@ -395,7 +392,6 @@ def _outer_indexer( # Whether this index is a NumericIndex, but not a Int64Index, Float64Index, # UInt64Index or RangeIndex. Needed for backwards compat. Remove this attribute and # associated code in pandas 2.0. - _is_backward_compat_public_numeric_index: bool = False @property def _engine_type( @@ -446,13 +442,6 @@ def __new__( elif is_ea_or_datetimelike_dtype(data_dtype): pass - # index-like - elif ( - isinstance(data, Index) - and data._is_backward_compat_public_numeric_index - and dtype is None - ): - return data._constructor(data, name=name, copy=copy) elif isinstance(data, (np.ndarray, Index, ABCSeries)): if isinstance(data, ABCMultiIndex): @@ -981,34 +970,6 @@ def astype(self, dtype, copy: bool = True): new_values = astype_array(values, dtype=dtype, copy=copy) # pass copy=False because any copying will be done in the astype above - if not self._is_backward_compat_public_numeric_index and not isinstance( - self, ABCRangeIndex - ): - # this block is needed so e.g. Int64Index.astype("int32") returns - # Int64Index and not a NumericIndex with dtype int32. - # When Int64Index etc. are removed from the code base, removed this also. 
- if ( - isinstance(dtype, np.dtype) - and is_numeric_dtype(dtype) - and not is_complex_dtype(dtype) - ): - from pandas.core.api import ( - Float64Index, - Int64Index, - UInt64Index, - ) - - klass: type[Index] - if is_signed_integer_dtype(dtype): - klass = Int64Index - elif is_unsigned_integer_dtype(dtype): - klass = UInt64Index - elif is_float_dtype(dtype): - klass = Float64Index - else: - klass = Index - return klass(new_values, name=self.name, dtype=dtype, copy=False) - return Index(new_values, name=self.name, dtype=new_values.dtype, copy=False) _index_shared_docs[ @@ -5075,10 +5036,6 @@ def _concat(self, to_concat: list[Index], name: Hashable) -> Index: result = concat_compat(to_concat_vals) - is_numeric = result.dtype.kind in ["i", "u", "f"] - if self._is_backward_compat_public_numeric_index and is_numeric: - return type(self)._simple_new(result, name=name) - return Index._with_infer(result, name=name) def putmask(self, mask, value) -> Index: @@ -6476,12 +6433,7 @@ def insert(self, loc: int, item) -> Index: loc = loc if loc >= 0 else loc - 1 new_values[loc] = item - if self._typ == "numericindex": - # Use self._constructor instead of Index to retain NumericIndex GH#43921 - # TODO(2.0) can use Index instead of self._constructor - return self._constructor(new_values, name=self.name) - else: - return Index._with_infer(new_values, name=self.name) + return Index._with_infer(new_values, name=self.name) def drop( self, diff --git a/pandas/core/indexes/category.py b/pandas/core/indexes/category.py index 6c172a2034524..ff24f97106c51 100644 --- a/pandas/core/indexes/category.py +++ b/pandas/core/indexes/category.py @@ -21,7 +21,6 @@ from pandas.core.dtypes.common import ( is_categorical_dtype, is_scalar, - pandas_dtype, ) from pandas.core.dtypes.missing import ( is_valid_na_for_dtype, @@ -274,30 +273,6 @@ def _is_dtype_compat(self, other) -> Categorical: return other - @doc(Index.astype) - def astype(self, dtype: Dtype, copy: bool = True) -> Index: - from 
-        from pandas.core.api import NumericIndex
-
-        dtype = pandas_dtype(dtype)
-
-        categories = self.categories
-        # the super method always returns Int64Index, UInt64Index and Float64Index
-        # but if the categories are a NumericIndex with dtype float32, we want to
-        # return an index with the same dtype as self.categories.
-        if categories._is_backward_compat_public_numeric_index:
-            assert isinstance(categories, NumericIndex)  # mypy complaint fix
-            try:
-                categories._validate_dtype(dtype)
-            except ValueError:
-                pass
-            else:
-                new_values = self._data.astype(dtype, copy=copy)
-                # pass copy=False because any copying has been done in the
-                # _data.astype call above
-                return categories._constructor(new_values, name=self.name, copy=False)
-
-        return super().astype(dtype, copy=copy)
-
     def equals(self, other: object) -> bool:
         """
         Determine if two CategoricalIndex objects contain the same elements.
diff --git a/pandas/core/indexes/numeric.py b/pandas/core/indexes/numeric.py
index af3ff54bb9e2b..3ea7b30f7e9f1 100644
--- a/pandas/core/indexes/numeric.py
+++ b/pandas/core/indexes/numeric.py
@@ -87,7 +87,6 @@ class NumericIndex(Index):
         "numeric type",
     )
     _can_hold_strings = False
-    _is_backward_compat_public_numeric_index: bool = True

     _engine_types: dict[np.dtype, type[libindex.IndexEngine]] = {
         np.dtype(np.int8): libindex.Int8Engine,
@@ -214,12 +213,7 @@ def _ensure_dtype(cls, dtype: Dtype | None) -> np.dtype | None:
             # float16 not supported (no indexing engine)
             raise NotImplementedError("float16 indexes are not supported")

-        if cls._is_backward_compat_public_numeric_index:
-            # dtype for NumericIndex
-            return dtype
-        else:
-            # dtype for Int64Index, UInt64Index etc. Needed for backwards compat.
-            return cls._default_dtype
+        return dtype

     # ----------------------------------------------------------------
     # Indexing Methods
@@ -415,7 +409,6 @@ class Float64Index(NumericIndex):
     _typ = "float64index"
     _default_dtype = np.dtype(np.float64)
     _dtype_validation_metadata = (is_float_dtype, "float")
-    _is_backward_compat_public_numeric_index: bool = False

     @property
     def _engine_type(self) -> type[libindex.Float64Engine]:
diff --git a/pandas/core/indexes/range.py b/pandas/core/indexes/range.py
index e17a0d070be6a..7fffce2b183e6 100644
--- a/pandas/core/indexes/range.py
+++ b/pandas/core/indexes/range.py
@@ -103,7 +103,6 @@ class RangeIndex(NumericIndex):
     _typ = "rangeindex"
     _dtype_validation_metadata = (is_signed_integer_dtype, "signed integer")
     _range: range
-    _is_backward_compat_public_numeric_index: bool = False

     @property
     def _engine_type(self) -> type[libindex.Int64Engine]:
@@ -849,7 +848,7 @@ def _concat(self, indexes: list[Index], name: Hashable) -> Index:
                 # First non-empty index had only one element
                 if rng.start == start:
                     values = np.concatenate([x._values for x in rng_indexes])
-                    result = Int64Index(values)
+                    result = self._constructor(values)
                     return result.rename(name)

                 step = rng.start - start
@@ -858,7 +857,9 @@ def _concat(self, indexes: list[Index], name: Hashable) -> Index:
                 next_ is not None and rng.start != next_
             )
             if non_consecutive:
-                result = Int64Index(np.concatenate([x._values for x in rng_indexes]))
+                result = self._constructor(
+                    np.concatenate([x._values for x in rng_indexes])
+                )
                 return result.rename(name)

             if step is not None:
@@ -906,7 +907,6 @@ def __getitem__(self, key):
                     "and integer or boolean "
                     "arrays are valid indices"
                 )
-        # fall back to Int64Index
         return super().__getitem__(key)

     def _getitem_slice(self: RangeIndex, slobj: slice) -> RangeIndex:
@@ -1011,15 +1011,14 @@ def _arith_method(self, other, op):
                 res_name = ops.get_op_result_name(self, other)
                 result = type(self)(rstart, rstop, rstep, name=res_name)

-                # for compat with numpy / Int64Index
+                # for compat with numpy / Index with int64 dtype
                 # even if we can represent as a RangeIndex, return
-                # as a Float64Index if we have float-like descriptors
+                # as a float64 Index if we have float-like descriptors
                 if not all(is_integer(x) for x in [rstart, rstop, rstep]):
                     result = result.astype("float64")

                 return result

             except (ValueError, TypeError, ZeroDivisionError):
-                # Defer to Int64Index implementation
                 # test_arithmetic_explicit_conversions
                 return super()._arith_method(other, op)
diff --git a/pandas/core/series.py b/pandas/core/series.py
index 992b86a532433..ade0fd95a5a9f 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -132,7 +132,6 @@
 from pandas.core.indexes.accessors import CombinedDatetimelikeProperties
 from pandas.core.indexes.api import (
     DatetimeIndex,
-    Float64Index,
     Index,
     MultiIndex,
     PeriodIndex,
@@ -2569,7 +2568,8 @@ def quantile(

         if is_list_like(q):
             result.name = self.name
-            return self._constructor(result, index=Float64Index(q), name=self.name)
+            idx = Index(q, dtype=np.float64)
+            return self._constructor(result, index=idx, name=self.name)
         else:
             # scalar
             return result.iloc[0]
diff --git a/pandas/io/pytables.py b/pandas/io/pytables.py
index 8c0d5c3da385c..f7483ef0757f2 100644
--- a/pandas/io/pytables.py
+++ b/pandas/io/pytables.py
@@ -70,6 +70,7 @@
     is_datetime64_dtype,
     is_datetime64tz_dtype,
     is_extension_array_dtype,
+    is_integer_dtype,
     is_list_like,
     is_string_dtype,
     is_timedelta64_dtype,
@@ -88,7 +89,7 @@
     concat,
     isna,
 )
-from pandas.core.api import Int64Index
+from pandas.core.api import NumericIndex
 from pandas.core.arrays import (
     Categorical,
     DatetimeArray,
@@ -2240,13 +2241,13 @@ class GenericIndexCol(IndexCol):
     def is_indexed(self) -> bool:
         return False

-    # error: Return type "Tuple[Int64Index, Int64Index]" of "convert"
+    # error: Return type "Tuple[NumericIndex, NumericIndex]" of "convert"
     # incompatible with return type "Union[Tuple[ndarray[Any, Any],
     # ndarray[Any, Any]], Tuple[DatetimeIndex, DatetimeIndex]]" in
     # supertype "IndexCol"
     def convert(  # type: ignore[override]
         self, values: np.ndarray, nan_rep, encoding: str, errors: str
-    ) -> tuple[Int64Index, Int64Index]:
+    ) -> tuple[NumericIndex, NumericIndex]:
         """
         Convert the data from this selection to the appropriate pandas type.

@@ -2259,7 +2260,7 @@ def convert(  # type: ignore[override]
         """
         assert isinstance(values, np.ndarray), type(values)

-        index = Int64Index(np.arange(len(values)))
+        index = NumericIndex(np.arange(len(values)), dtype=np.int64)
         return index, index

     def set_attr(self) -> None:
@@ -4862,11 +4863,11 @@ def _convert_index(name: str, index: Index, encoding: str, errors: str) -> Index
     atom = DataIndexableCol._get_atom(converted)

     if (
-        isinstance(index, Int64Index)
+        (isinstance(index, NumericIndex) and is_integer_dtype(index))
         or needs_i8_conversion(index.dtype)
         or is_bool_dtype(index.dtype)
     ):
-        # Includes Int64Index, RangeIndex, DatetimeIndex, TimedeltaIndex, PeriodIndex,
+        # Includes NumericIndex, RangeIndex, DatetimeIndex, TimedeltaIndex, PeriodIndex,
         # in which case "kind" is "integer", "integer", "datetime64",
         # "timedelta64", and "integer", respectively.
         return IndexCol(
diff --git a/pandas/tests/arrays/categorical/test_constructors.py b/pandas/tests/arrays/categorical/test_constructors.py
index ba33cfe055bb3..b8218a2b0241e 100644
--- a/pandas/tests/arrays/categorical/test_constructors.py
+++ b/pandas/tests/arrays/categorical/test_constructors.py
@@ -29,7 +29,6 @@
     timedelta_range,
 )
 import pandas._testing as tm
-from pandas.core.api import Int64Index


 class TestCategoricalConstructors:
@@ -74,7 +73,7 @@ def test_constructor_empty(self):
         tm.assert_index_equal(c.categories, expected)

         c = Categorical([], categories=[1, 2, 3])
-        expected = Int64Index([1, 2, 3])
+        expected = Index([1, 2, 3])
         tm.assert_index_equal(c.categories, expected)

     def test_constructor_empty_boolean(self):
diff --git a/pandas/tests/arrays/integer/test_dtypes.py b/pandas/tests/arrays/integer/test_dtypes.py
index f34953876f5f4..629b104dc1424 100644
--- a/pandas/tests/arrays/integer/test_dtypes.py
+++ b/pandas/tests/arrays/integer/test_dtypes.py
@@ -62,8 +62,7 @@ def test_astype_nansafe():

 @pytest.mark.parametrize("dropna", [True, False])
 def test_construct_index(all_data, dropna):
-    # ensure that we do not coerce to Float64Index, rather
-    # keep as Index
+    # ensure that we do not coerce to different Index dtype or non-index

     all_data = all_data[:10]
     if dropna:
diff --git a/pandas/tests/arrays/sparse/test_array.py b/pandas/tests/arrays/sparse/test_array.py
index b2fa4d72a355c..54c8e359b2859 100644
--- a/pandas/tests/arrays/sparse/test_array.py
+++ b/pandas/tests/arrays/sparse/test_array.py
@@ -8,7 +8,6 @@
 import pandas as pd
 from pandas import isna
 import pandas._testing as tm
-from pandas.core.api import Int64Index
 from pandas.core.arrays.sparse import (
     SparseArray,
     SparseDtype,
@@ -469,7 +468,7 @@ def test_dropna(fill_value):
     tm.assert_sp_array_equal(arr.dropna(), exp)

     df = pd.DataFrame({"a": [0, 1], "b": arr})
-    expected_df = pd.DataFrame({"a": [1], "b": exp}, index=Int64Index([1]))
+    expected_df = pd.DataFrame({"a": [1], "b": exp}, index=pd.Index([1]))
     tm.assert_equal(df.dropna(), expected_df)
diff --git a/pandas/tests/base/test_unique.py b/pandas/tests/base/test_unique.py
index b1b0479f397b1..624d3d68d37fd 100644
--- a/pandas/tests/base/test_unique.py
+++ b/pandas/tests/base/test_unique.py
@@ -5,7 +5,6 @@

 import pandas as pd
 import pandas._testing as tm
-from pandas.core.api import NumericIndex
 from pandas.tests.base.common import allow_na_ops


@@ -20,9 +19,6 @@ def test_unique(index_or_series_obj):
         expected = pd.MultiIndex.from_tuples(unique_values)
         expected.names = obj.names
         tm.assert_index_equal(result, expected, exact=True)
-    elif isinstance(obj, pd.Index) and obj._is_backward_compat_public_numeric_index:
-        expected = NumericIndex(unique_values, dtype=obj.dtype)
-        tm.assert_index_equal(result, expected, exact=True)
     elif isinstance(obj, pd.Index):
         expected = pd.Index(unique_values, dtype=obj.dtype)
         if is_datetime64tz_dtype(obj.dtype):
@@ -58,10 +54,7 @@ def test_unique_null(null_obj, index_or_series_obj):
     unique_values_not_null = [val for val in unique_values_raw if not pd.isnull(val)]
     unique_values = [null_obj] + unique_values_not_null

-    if isinstance(obj, pd.Index) and obj._is_backward_compat_public_numeric_index:
-        expected = NumericIndex(unique_values, dtype=obj.dtype)
-        tm.assert_index_equal(result, expected, exact=True)
-    elif isinstance(obj, pd.Index):
+    if isinstance(obj, pd.Index):
         expected = pd.Index(unique_values, dtype=obj.dtype)
         if is_datetime64tz_dtype(obj.dtype):
             result = result.normalize()
diff --git a/pandas/tests/frame/methods/test_astype.py b/pandas/tests/frame/methods/test_astype.py
index 2d3d576154740..e9f6371239b4a 100644
--- a/pandas/tests/frame/methods/test_astype.py
+++ b/pandas/tests/frame/methods/test_astype.py
@@ -11,6 +11,7 @@
     CategoricalDtype,
     DataFrame,
     DatetimeTZDtype,
+    Index,
     Interval,
     IntervalDtype,
     NaT,
@@ -22,7 +23,6 @@
     option_context,
 )
 import pandas._testing as tm
-from pandas.core.api import UInt64Index


 def _check_cast(df, v):
@@ -372,7 +372,7 @@ def test_astype_extension_dtypes_duplicate_col(self, dtype):
     )
     def test_astype_column_metadata(self, dtype):
         # GH#19920
-        columns = UInt64Index([100, 200, 300], name="foo")
+        columns = Index([100, 200, 300], dtype=np.uint64, name="foo")
         df = DataFrame(np.arange(15).reshape(5, 3), columns=columns)
         df = df.astype(dtype)
         tm.assert_index_equal(df.columns, columns)
@@ -441,7 +441,7 @@ def test_astype_to_datetime_unit(self, unit):
         arr = np.array([[1, 2, 3]], dtype=dtype)
         df = DataFrame(arr)
         ser = df.iloc[:, 0]
-        idx = pd.Index(ser)
+        idx = Index(ser)
         dta = ser._values

         if unit in ["ns", "us", "ms", "s"]:
@@ -476,7 +476,7 @@ def test_astype_to_datetime_unit(self, unit):
             exp_dta = exp_ser._values

             res_index = idx.astype(dtype)
-            exp_index = pd.Index(exp_ser)
+            exp_index = Index(exp_ser)
             assert exp_index.dtype == dtype
             tm.assert_index_equal(res_index, exp_index)

@@ -504,7 +504,7 @@ def test_astype_to_timedelta_unit(self, unit):
         arr = np.array([[1, 2, 3]], dtype=dtype)
         df = DataFrame(arr)
         ser = df.iloc[:, 0]
-        tdi = pd.Index(ser)
+        tdi = Index(ser)
         tda = tdi._values

         if unit in ["us", "ms", "s"]:
diff --git a/pandas/tests/frame/methods/test_truncate.py b/pandas/tests/frame/methods/test_truncate.py
index 21f0664707ebe..149fcfb35f44d 100644
--- a/pandas/tests/frame/methods/test_truncate.py
+++ b/pandas/tests/frame/methods/test_truncate.py
@@ -5,11 +5,11 @@
 from pandas import (
     DataFrame,
     DatetimeIndex,
+    Index,
     Series,
     date_range,
 )
 import pandas._testing as tm
-from pandas.core.api import Int64Index


 class TestDataFrameTruncate:
@@ -108,13 +108,13 @@ def test_truncate_nonsortedindex_axis1(self):
         "before, after, indices",
         [(1, 2, [2, 1]), (None, 2, [2, 1, 0]), (1, None, [3, 2, 1])],
     )
-    @pytest.mark.parametrize("klass", [Int64Index, DatetimeIndex])
+    @pytest.mark.parametrize("dtyp", [*tm.ALL_REAL_NUMPY_DTYPES, "datetime64[ns]"])
     def test_truncate_decreasing_index(
-        self, before, after, indices, klass, frame_or_series
+        self, before, after, indices, dtyp, frame_or_series
     ):
         # https://github.com/pandas-dev/pandas/issues/33756
-        idx = klass([3, 2, 1, 0])
-        if klass is DatetimeIndex:
+        idx = Index([3, 2, 1, 0], dtype=dtyp)
+        if isinstance(idx, DatetimeIndex):
             before = pd.Timestamp(before) if before is not None else None
             after = pd.Timestamp(after) if after is not None else None
             indices = [pd.Timestamp(i) for i in indices]
diff --git a/pandas/tests/indexes/common.py b/pandas/tests/indexes/common.py
index ed8eb350234d1..969d5b648ddda 100644
--- a/pandas/tests/indexes/common.py
+++ b/pandas/tests/indexes/common.py
@@ -28,12 +28,7 @@
     isna,
 )
 import pandas._testing as tm
-from pandas.core.api import (  # noqa:F401
-    Float64Index,
-    Int64Index,
-    NumericIndex,
-    UInt64Index,
-)
+from pandas.core.api import NumericIndex
 from pandas.core.arrays import BaseMaskedArray


@@ -322,7 +317,9 @@ def test_numpy_argsort(self, index):
     def test_repeat(self, simple_index):
         rep = 2
         idx = simple_index.copy()
-        new_index_cls = Int64Index if isinstance(idx, RangeIndex) else idx._constructor
+        new_index_cls = (
+            NumericIndex if isinstance(idx, RangeIndex) else idx._constructor
+        )
         expected = new_index_cls(idx.values.repeat(rep), name=idx.name)
         tm.assert_index_equal(idx.repeat(rep), expected)

@@ -505,7 +502,6 @@ def test_equals_op(self, simple_index):
         # assuming the 2nd to last item is unique in the data
         item = index_a[-2]
         tm.assert_numpy_array_equal(index_a == item, expected3)
-        # For RangeIndex we can convert to Int64Index
         tm.assert_series_equal(series_a == item, Series(expected3))

     def test_format(self, simple_index):
@@ -596,7 +592,7 @@ def test_map(self, simple_index):
         idx = simple_index

         result = idx.map(lambda x: x)
-        # For RangeIndex we convert to Int64Index
+        # RangeIndex are equivalent to the similar NumericIndex with int64 dtype
         tm.assert_index_equal(result, idx, exact="equiv")

     @pytest.mark.parametrize(
@@ -619,19 +615,15 @@ def test_map_dictlike(self, mapper, simple_index):

         identity = mapper(idx.values, idx)

         result = idx.map(identity)
-        # For RangeIndex we convert to Int64Index
+        # RangeIndex are equivalent to the similar NumericIndex with int64 dtype
         tm.assert_index_equal(result, idx, exact="equiv")

         # empty mappable
         dtype = None
-        if idx._is_backward_compat_public_numeric_index:
-            new_index_cls = NumericIndex
-            if idx.dtype.kind == "f":
-                dtype = idx.dtype
-        else:
-            new_index_cls = Float64Index
+        if idx.dtype.kind == "f":
+            dtype = idx.dtype

-        expected = new_index_cls([np.nan] * len(idx), dtype=dtype)
+        expected = Index([np.nan] * len(idx), dtype=dtype)
         result = idx.map(mapper(expected, idx))
         tm.assert_index_equal(result, expected)

@@ -887,13 +879,9 @@ def test_insert_na(self, nulls_fixture, simple_index):
             expected = Index([index[0], pd.NaT] + list(index[1:]), dtype=object)
         else:
             expected = Index([index[0], np.nan] + list(index[1:]))
-
-            if index._is_backward_compat_public_numeric_index:
-                # GH#43921 we preserve NumericIndex
-                if index.dtype.kind == "f":
-                    expected = NumericIndex(expected, dtype=index.dtype)
-                else:
-                    expected = NumericIndex(expected)
+            # GH#43921 we preserve float dtype
+            if index.dtype.kind == "f":
+                expected = Index(expected, dtype=index.dtype)

         result = index.insert(1, na_val)
         tm.assert_index_equal(result, expected, exact=True)
@@ -909,19 +897,19 @@ def test_arithmetic_explicit_conversions(self):

         # float conversions
         arr = np.arange(5, dtype="int64") * 3.2
-        expected = Float64Index(arr)
+        expected = NumericIndex(arr, dtype=np.float64)
         fidx = idx * 3.2
         tm.assert_index_equal(fidx, expected)
         fidx = 3.2 * idx
         tm.assert_index_equal(fidx, expected)

         # interops with numpy arrays
-        expected = Float64Index(arr)
+        expected = NumericIndex(arr, dtype=np.float64)
         a = np.zeros(5, dtype="float64")
         result = fidx - a
         tm.assert_index_equal(result, expected)

-        expected = Float64Index(-arr)
+        expected = NumericIndex(-arr, dtype=np.float64)
         a = np.zeros(5, dtype="float64")
         result = a - fidx
         tm.assert_index_equal(result, expected)
diff --git a/pandas/tests/indexes/conftest.py b/pandas/tests/indexes/conftest.py
index 1e701945c79a0..83199a50b0f5f 100644
--- a/pandas/tests/indexes/conftest.py
+++ b/pandas/tests/indexes/conftest.py
@@ -5,6 +5,7 @@
     Series,
     array,
 )
+import pandas._testing as tm


 @pytest.fixture(params=[None, False])
@@ -39,3 +40,22 @@ def listlike_box(request):
     Types that may be passed as the indexer to searchsorted.
     """
     return request.param
+
+
+@pytest.fixture(
+    params=[
+        *tm.ALL_REAL_NUMPY_DTYPES,
+        "datetime64[ns]",
+        "category",
+        "object",
+        "timedelta64[ns]",
+    ]
+)
+def any_numpy_dtype_for_small_integer_indexes(request):
+    """
+    Dtypes that can be given to an Index with small integers.
+
+    This means that for any dtype `x` in the params list, `Index([1, 2, 3], dtype=x)` is
+    valid and gives the correct Index (sub-)class.
+    """
+    return request.param
diff --git a/pandas/tests/indexes/datetimes/methods/test_astype.py b/pandas/tests/indexes/datetimes/methods/test_astype.py
index 0f85a4e971c41..007204fd83bd4 100644
--- a/pandas/tests/indexes/datetimes/methods/test_astype.py
+++ b/pandas/tests/indexes/datetimes/methods/test_astype.py
@@ -15,7 +15,6 @@
     date_range,
 )
 import pandas._testing as tm
-from pandas.core.api import Int64Index


 class TestDatetimeIndex:
@@ -30,7 +29,7 @@ def test_astype(self):
         tm.assert_index_equal(result, expected)

         result = idx.astype(np.int64)
-        expected = Int64Index(
+        expected = Index(
             [1463356800000000000] + [-9223372036854775808] * 3,
             dtype=np.int64,
             name="idx",
diff --git a/pandas/tests/indexes/datetimes/test_scalar_compat.py b/pandas/tests/indexes/datetimes/test_scalar_compat.py
index be05a649ec0b6..e700d6fe00e11 100644
--- a/pandas/tests/indexes/datetimes/test_scalar_compat.py
+++ b/pandas/tests/indexes/datetimes/test_scalar_compat.py
@@ -19,7 +19,6 @@
     date_range,
 )
 import pandas._testing as tm
-from pandas.core.api import Float64Index


 class TestDatetimeIndexOps:
@@ -313,33 +312,33 @@ def test_1700(self):
         dr = date_range(start=Timestamp("1710-10-01"), periods=5, freq="D")
         r1 = pd.Index([x.to_julian_date() for x in dr])
         r2 = dr.to_julian_date()
-        assert isinstance(r2, Float64Index)
+        assert isinstance(r2, pd.Index) and r2.dtype == np.float64
         tm.assert_index_equal(r1, r2)

     def test_2000(self):
         dr = date_range(start=Timestamp("2000-02-27"), periods=5, freq="D")
         r1 = pd.Index([x.to_julian_date() for x in dr])
         r2 = dr.to_julian_date()
-        assert isinstance(r2, Float64Index)
+        assert isinstance(r2, pd.Index) and r2.dtype == np.float64
         tm.assert_index_equal(r1, r2)

     def test_hour(self):
         dr = date_range(start=Timestamp("2000-02-27"), periods=5, freq="H")
         r1 = pd.Index([x.to_julian_date() for x in dr])
         r2 = dr.to_julian_date()
-        assert isinstance(r2, Float64Index)
+        assert isinstance(r2, pd.Index) and r2.dtype == np.float64
         tm.assert_index_equal(r1, r2)

     def test_minute(self):
         dr = date_range(start=Timestamp("2000-02-27"), periods=5, freq="T")
         r1 = pd.Index([x.to_julian_date() for x in dr])
         r2 = dr.to_julian_date()
-        assert isinstance(r2, Float64Index)
+        assert isinstance(r2, pd.Index) and r2.dtype == np.float64
         tm.assert_index_equal(r1, r2)

     def test_second(self):
         dr = date_range(start=Timestamp("2000-02-27"), periods=5, freq="S")
         r1 = pd.Index([x.to_julian_date() for x in dr])
         r2 = dr.to_julian_date()
-        assert isinstance(r2, Float64Index)
+        assert isinstance(r2, pd.Index) and r2.dtype == np.float64
         tm.assert_index_equal(r1, r2)
diff --git a/pandas/tests/indexes/datetimes/test_setops.py b/pandas/tests/indexes/datetimes/test_setops.py
index 07e1fd27b2f96..e3dc7f1dbade2 100644
--- a/pandas/tests/indexes/datetimes/test_setops.py
+++ b/pandas/tests/indexes/datetimes/test_setops.py
@@ -16,7 +16,6 @@
     date_range,
 )
 import pandas._testing as tm
-from pandas.core.api import Int64Index

 from pandas.tseries.offsets import (
     BMonthEnd,
@@ -184,7 +183,7 @@ def test_union_dataframe_index(self):
         tm.assert_index_equal(df.index, exp)

     def test_union_with_DatetimeIndex(self, sort):
-        i1 = Int64Index(np.arange(0, 20, 2))
+        i1 = Index(np.arange(0, 20, 2, dtype=np.int64))
         i2 = date_range(start="2012-01-03 00:00:00", periods=10, freq="D")
         # Works
         i1.union(i2, sort=sort)
diff --git a/pandas/tests/indexes/interval/test_formats.py b/pandas/tests/indexes/interval/test_formats.py
index db477003900bc..4d6f3a62d4dd0 100644
--- a/pandas/tests/indexes/interval/test_formats.py
+++ b/pandas/tests/indexes/interval/test_formats.py
@@ -3,6 +3,7 @@

 from pandas import (
     DataFrame,
+    Index,
     Interval,
     IntervalIndex,
     Series,
@@ -10,7 +11,6 @@
     Timestamp,
 )
 import pandas._testing as tm
-from pandas.core.api import Float64Index


 class TestIntervalIndexRendering:
@@ -54,8 +54,8 @@ def test_repr_floats(self):
             [
                 Interval(left, right)
                 for left, right in zip(
-                    Float64Index([329.973, 345.137], dtype="float64"),
-                    Float64Index([345.137, 360.191], dtype="float64"),
+                    Index([329.973, 345.137], dtype="float64"),
+                    Index([345.137, 360.191], dtype="float64"),
                 )
             ]
         ),
diff --git a/pandas/tests/indexes/interval/test_interval.py b/pandas/tests/indexes/interval/test_interval.py
index 1800852919509..a809af7e975e2 100644
--- a/pandas/tests/indexes/interval/test_interval.py
+++ b/pandas/tests/indexes/interval/test_interval.py
@@ -18,7 +18,6 @@
     timedelta_range,
 )
 import pandas._testing as tm
-from pandas.core.api import Float64Index
 import pandas.core.common as com


@@ -406,7 +405,7 @@ def test_maybe_convert_i8_nat(self, breaks):
         index = IntervalIndex.from_breaks(breaks)

         to_convert = breaks._constructor([pd.NaT] * 3)
-        expected = Float64Index([np.nan] * 3)
+        expected = Index([np.nan] * 3, dtype=np.float64)
         result = index._maybe_convert_i8(to_convert)
         tm.assert_index_equal(result, expected)
diff --git a/pandas/tests/indexes/multi/test_analytics.py b/pandas/tests/indexes/multi/test_analytics.py
index fb6f56b0fcba7..7d68ccc88cf44 100644
--- a/pandas/tests/indexes/multi/test_analytics.py
+++ b/pandas/tests/indexes/multi/test_analytics.py
@@ -9,7 +9,6 @@
     period_range,
 )
 import pandas._testing as tm
-from pandas.core.api import UInt64Index


 def test_infer_objects(idx):
@@ -196,8 +195,8 @@ def test_map_dictlike(idx, mapper):

     identity = mapper(idx.values, idx)

-    # we don't infer to UInt64 for a dict
-    if isinstance(idx, UInt64Index) and isinstance(identity, dict):
+    # we don't infer to uint64 dtype for a dict
+    if idx.dtype == np.uint64 and isinstance(identity, dict):
         expected = idx.astype("int64")
     else:
         expected = idx
diff --git a/pandas/tests/indexes/multi/test_integrity.py b/pandas/tests/indexes/multi/test_integrity.py
index e2d59e5511a52..0fb93eec14f42 100644
--- a/pandas/tests/indexes/multi/test_integrity.py
+++ b/pandas/tests/indexes/multi/test_integrity.py
@@ -7,12 +7,12 @@

 import pandas as pd
 from pandas import (
+    Index,
     IntervalIndex,
     MultiIndex,
     RangeIndex,
 )
 import pandas._testing as tm
-from pandas.core.api import Int64Index


 def test_labels_dtypes():
@@ -84,8 +84,8 @@ def test_values_multiindex_periodindex():
     idx = MultiIndex.from_arrays([ints, pidx])

     result = idx.values
-    outer = Int64Index([x[0] for x in result])
-    tm.assert_index_equal(outer, Int64Index(ints))
+    outer = Index([x[0] for x in result])
+    tm.assert_index_equal(outer, Index(ints, dtype=np.int64))

     inner = pd.PeriodIndex([x[1] for x in result])
     tm.assert_index_equal(inner, pidx)
@@ -93,8 +93,8 @@ def test_values_multiindex_periodindex():
     # n_lev > n_lab
     result = idx[:2].values

-    outer = Int64Index([x[0] for x in result])
-    tm.assert_index_equal(outer, Int64Index(ints[:2]))
+    outer = Index([x[0] for x in result])
+    tm.assert_index_equal(outer, Index(ints[:2], dtype=np.int64))

     inner = pd.PeriodIndex([x[1] for x in result])
     tm.assert_index_equal(inner, pidx[:2])
@@ -246,11 +246,11 @@ def test_rangeindex_fallback_coercion_bug():
     tm.assert_frame_equal(df, expected, check_like=True)

     result = df.index.get_level_values("fizz")
-    expected = Int64Index(np.arange(10), name="fizz").repeat(10)
+    expected = Index(np.arange(10, dtype=np.int64), name="fizz").repeat(10)
     tm.assert_index_equal(result, expected)

     result = df.index.get_level_values("buzz")
-    expected = Int64Index(np.tile(np.arange(10), 10), name="buzz")
+    expected = Index(np.tile(np.arange(10, dtype=np.int64), 10), name="buzz")
     tm.assert_index_equal(result, expected)
diff --git a/pandas/tests/indexes/period/test_indexing.py b/pandas/tests/indexes/period/test_indexing.py
index 4fe1e471db434..2db1e8c72a87c 100644
--- a/pandas/tests/indexes/period/test_indexing.py
+++ b/pandas/tests/indexes/period/test_indexing.py
@@ -20,10 +20,6 @@
     period_range,
 )
 import pandas._testing as tm
-from pandas.core.api import (
-    Float64Index,
-    Int64Index,
-)

 dti4 = date_range("2016-01-01", periods=4)
 dti = dti4[:-1]
@@ -806,10 +802,10 @@ def test_asof_locs_mismatched_type(self):

         msg = "must be DatetimeIndex or PeriodIndex"
         with pytest.raises(TypeError, match=msg):
-            pi.asof_locs(Int64Index(pi.asi8), mask)
+            pi.asof_locs(pd.Index(pi.asi8, dtype=np.int64), mask)

         with pytest.raises(TypeError, match=msg):
-            pi.asof_locs(Float64Index(pi.asi8), mask)
+            pi.asof_locs(pd.Index(pi.asi8, dtype=np.float64), mask)

         with pytest.raises(TypeError, match=msg):
             # TimedeltaIndex
diff --git a/pandas/tests/indexes/ranges/test_indexing.py b/pandas/tests/indexes/ranges/test_indexing.py
index f8c3eff0ab80a..84a78ad86c3d3 100644
--- a/pandas/tests/indexes/ranges/test_indexing.py
+++ b/pandas/tests/indexes/ranges/test_indexing.py
@@ -1,9 +1,11 @@
 import numpy as np
 import pytest

-from pandas import RangeIndex
+from pandas import (
+    Index,
+    RangeIndex,
+)
 import pandas._testing as tm
-from pandas.core.api import Int64Index


 class TestGetIndexer:
@@ -55,7 +57,7 @@ def test_take_fill_value(self):
         # GH#12631
         idx = RangeIndex(1, 4, name="xxx")
         result = idx.take(np.array([1, 0, -1]))
-        expected = Int64Index([2, 1, 3], name="xxx")
+        expected = Index([2, 1, 3], dtype=np.int64, name="xxx")
         tm.assert_index_equal(result, expected)

         # fill_value
@@ -65,7 +67,7 @@ def test_take_fill_value(self):

         # allow_fill=False
         result = idx.take(np.array([1, 0, -1]), allow_fill=False, fill_value=True)
-        expected = Int64Index([2, 1, 3], name="xxx")
+        expected = Index([2, 1, 3], dtype=np.int64, name="xxx")
         tm.assert_index_equal(result, expected)

         msg = "Unable to fill values because RangeIndex cannot contain NA"
@@ -86,7 +88,7 @@ def test_where_putmask_range_cast(self):
         mask = np.array([True, True, False, False, False])

         result = idx.putmask(mask, 10)
-        expected = Int64Index([10, 10, 2, 3, 4], name="test")
+        expected = Index([10, 10, 2, 3, 4], dtype=np.int64, name="test")
         tm.assert_index_equal(result, expected)

         result = idx.where(~mask, 10)
diff --git a/pandas/tests/indexes/ranges/test_range.py b/pandas/tests/indexes/ranges/test_range.py
index e534147c891d2..d255dc748b7dc 100644
--- a/pandas/tests/indexes/ranges/test_range.py
+++ b/pandas/tests/indexes/ranges/test_range.py
@@ -4,20 +4,15 @@
 from pandas.core.dtypes.common import ensure_platform_int

 import pandas as pd
-import pandas._testing as tm
-from pandas.core.indexes.api import (
-    Float64Index,
+from pandas import (
     Index,
-    Int64Index,
     RangeIndex,
 )
+import pandas._testing as tm
 from pandas.tests.indexes.common import NumericBase

 # aliases to make some tests easier to read
 RI = RangeIndex
-I64 = Int64Index
-F64 = Float64Index
-OI = Index


 class TestRangeIndex(NumericBase):
@@ -111,7 +106,7 @@ def test_insert(self):
         tm.assert_index_equal(idx[0:4], result.insert(0, idx[0]), exact="equiv")

         # GH 18295 (test missing)
-        expected = Float64Index([0, np.nan, 1, 2, 3, 4])
+        expected = Index([0, np.nan, 1, 2, 3, 4], dtype=np.float64)
         for na in [np.nan, None, pd.NA]:
             result = RangeIndex(5).insert(1, na)
             tm.assert_index_equal(result, expected)
@@ -379,7 +374,7 @@ def test_nbytes(self):

         # memory savings vs int index
         idx = RangeIndex(0, 1000)
-        assert idx.nbytes < Int64Index(idx._values).nbytes / 10
+        assert idx.nbytes < Index(idx._values).nbytes / 10

         # constant memory usage
         i2 = RangeIndex(0, 10)
@@ -530,16 +525,16 @@ def test_len_specialised(self, step):
             ([RI(-4, -8), RI(-8, -12)], RI(0, 0)),
             ([RI(-4, -8), RI(3, -4)], RI(0, 0)),
             ([RI(-4, -8), RI(3, 5)], RI(3, 5)),
-            ([RI(-4, -2), RI(3, 5)], I64([-4, -3, 3, 4])),
+            ([RI(-4, -2), RI(3, 5)], Index([-4, -3, 3, 4])),
             ([RI(-2), RI(3, 5)], RI(3, 5)),
-            ([RI(2), RI(2)], I64([0, 1, 0, 1])),
+            ([RI(2), RI(2)], Index([0, 1, 0, 1])),
             ([RI(2), RI(2, 5), RI(5, 8, 4)], RI(0, 6)),
-            ([RI(2), RI(3, 5), RI(5, 8, 4)], I64([0, 1, 3, 4, 5])),
+            ([RI(2), RI(3, 5), RI(5, 8, 4)], Index([0, 1, 3, 4, 5])),
             ([RI(-2, 2), RI(2, 5), RI(5, 8, 4)], RI(-2, 6)),
-            ([RI(3), OI([-1, 3, 15])], OI([0, 1, 2, -1, 3, 15])),
-            ([RI(3), OI([-1, 3.1, 15.0])], OI([0, 1, 2, -1, 3.1, 15.0])),
-            ([RI(3), OI(["a", None, 14])], OI([0, 1, 2, "a", None, 14])),
-            ([RI(3, 1), OI(["a", None, 14])], OI(["a", None, 14])),
+            ([RI(3), Index([-1, 3, 15])], Index([0, 1, 2, -1, 3, 15])),
+            ([RI(3), Index([-1, 3.1, 15.0])], Index([0, 1, 2, -1, 3.1, 15.0])),
+            ([RI(3), Index(["a", None, 14])], Index([0, 1, 2, "a", None, 14])),
+            ([RI(3, 1), Index(["a", None, 14])], Index(["a", None, 14])),
         ]
     )
     def appends(self, request):
diff --git a/pandas/tests/indexes/test_base.py b/pandas/tests/indexes/test_base.py
index af15cbc2f7929..1012734bab234 100644
--- a/pandas/tests/indexes/test_base.py
+++ b/pandas/tests/indexes/test_base.py
@@ -18,6 +18,8 @@
 )
 from pandas.util._test_decorators import async_mark

+from pandas.core.dtypes.common import is_numeric_dtype
+
 import pandas as pd
 from pandas import (
     CategoricalIndex,
@@ -592,13 +594,11 @@ def test_map_dictlike(self, index, mapper, request):
         if index.empty:
             # to match proper result coercion for uints
             expected = Index([])
-        elif index._is_backward_compat_public_numeric_index:
+        elif is_numeric_dtype(index.dtype):
             expected = index._constructor(rng, dtype=index.dtype)
         elif type(index) is Index and index.dtype != object:
             # i.e. EA-backed, for now just Nullable
             expected = Index(rng, dtype=index.dtype)
-        elif index.dtype.kind == "u":
-            expected = Index(rng, dtype=index.dtype)
         else:
             expected = Index(rng)
diff --git a/pandas/tests/indexes/test_setops.py b/pandas/tests/indexes/test_setops.py
index bdd2c7a7faad9..d428fc6580f70 100644
--- a/pandas/tests/indexes/test_setops.py
+++ b/pandas/tests/indexes/test_setops.py
@@ -15,12 +15,10 @@

 from pandas import (
     CategoricalIndex,
-    DatetimeIndex,
     Index,
     MultiIndex,
     RangeIndex,
     Series,
-    TimedeltaIndex,
     Timestamp,
 )
 import pandas._testing as tm
@@ -30,11 +28,6 @@
     is_signed_integer_dtype,
     pandas_dtype,
 )
-from pandas.core.api import (
-    Float64Index,
-    Int64Index,
-    UInt64Index,
-)


 def test_union_same_types(index):
@@ -77,7 +70,7 @@ def test_union_different_types(index_flat, index_flat2, request):
         # idx1 = Index(
         #     [True, True, True, True, True, True, True, True, False, False], dtype='bool'
         # )
-        # idx2 = Int64Index([0, 0, 1, 1, 2, 2], dtype='int64')
+        # idx2 = Index([0, 0, 1, 1, 2, 2], dtype='int64')
         mark = pytest.mark.xfail(
             reason="GH#44000 True==1", raises=ValueError, strict=False
         )
@@ -567,24 +560,15 @@ def test_intersection_duplicates_all_indexes(index):
     assert idx.intersection(idx_non_unique).is_unique


-@pytest.mark.parametrize(
-    "cls",
-    [
-        Int64Index,
-        Float64Index,
-        DatetimeIndex,
-        CategoricalIndex,
-        lambda x: CategoricalIndex(x, categories=set(x)),
-        TimedeltaIndex,
-        lambda x: Index(x, dtype=object),
-        UInt64Index,
-    ],
-)
-def test_union_duplicate_index_subsets_of_each_other(cls):
+def test_union_duplicate_index_subsets_of_each_other(
+    any_numpy_dtype_for_small_integer_indexes,
+):
     # GH#31326
-    a = cls([1, 2, 2, 3])
-    b = cls([3, 3, 4])
-    expected = cls([1, 2, 2, 3, 3, 4])
+    dtype = any_numpy_dtype_for_small_integer_indexes
+    a = Index([1, 2, 2, 3], dtype=dtype)
+    b = Index([3, 3, 4], dtype=dtype)
+
+    expected = Index([1, 2, 2, 3, 3, 4], dtype=dtype)
     if isinstance(a, CategoricalIndex):
         expected = Index([1, 2, 2, 3, 3, 4])
     result = a.union(b)
@@ -593,22 +577,14 @@ def test_union_duplicate_index_subsets_of_each_other(cls):
     tm.assert_index_equal(result, expected)


-@pytest.mark.parametrize(
-    "cls",
-    [
-        Int64Index,
-        Float64Index,
-        DatetimeIndex,
-        CategoricalIndex,
-        TimedeltaIndex,
-        lambda x: Index(x, dtype=object),
-    ],
-)
-def test_union_with_duplicate_index_and_non_monotonic(cls):
+def test_union_with_duplicate_index_and_non_monotonic(
+    any_numpy_dtype_for_small_integer_indexes,
+):
     # GH#36289
-    a = cls([1, 0, 0])
-    b = cls([0, 1])
-    expected = cls([0, 0, 1])
+    dtype = any_numpy_dtype_for_small_integer_indexes
+    a = Index([1, 0, 0], dtype=dtype)
+    b = Index([0, 1], dtype=dtype)
+    expected = Index([0, 0, 1], dtype=dtype)
     result = a.union(b)
     tm.assert_index_equal(result, expected)

@@ -645,21 +621,16 @@ def test_union_nan_in_both(dup):
     tm.assert_index_equal(result, expected)


-@pytest.mark.parametrize(
-    "cls",
-    [
-        Int64Index,
-        Float64Index,
-        DatetimeIndex,
-        TimedeltaIndex,
-        lambda x: Index(x, dtype=object),
-    ],
-)
-def test_union_with_duplicate_index_not_subset_and_non_monotonic(cls):
+def test_union_with_duplicate_index_not_subset_and_non_monotonic(
+    any_numpy_dtype_for_small_integer_indexes,
+):
     # GH#36289
-    a = cls([1, 0, 2])
-    b = cls([0, 0, 1])
-    expected = cls([0, 0, 1, 2])
+    dtype = any_numpy_dtype_for_small_integer_indexes
+    a = Index([1, 0, 2], dtype=dtype)
+    b = Index([0, 0, 1], dtype=dtype)
+    expected = Index([0, 0, 1, 2], dtype=dtype)
+    if isinstance(a, CategoricalIndex):
+        expected = Index([0, 0, 1, 2])
     result = a.union(b)
     tm.assert_index_equal(result, expected)
diff --git a/pandas/tests/indexes/timedeltas/methods/test_astype.py b/pandas/tests/indexes/timedeltas/methods/test_astype.py
index c728d636fb5db..9b17a8af59ac5 100644
--- a/pandas/tests/indexes/timedeltas/methods/test_astype.py
+++ b/pandas/tests/indexes/timedeltas/methods/test_astype.py
@@ -12,7 +12,6 @@
     timedelta_range,
 )
 import pandas._testing as tm
-from pandas.core.api import Int64Index


 class TestTimedeltaIndex:
@@ -55,7 +54,7 @@ def test_astype(self):
         tm.assert_index_equal(result, expected)

         result = idx.astype(np.int64)
-        expected = Int64Index(
+        expected = Index(
             [100000000000000] + [-9223372036854775808] * 3, dtype=np.int64, name="idx"
         )
         tm.assert_index_equal(result, expected)
diff --git a/pandas/tests/indexes/timedeltas/test_setops.py b/pandas/tests/indexes/timedeltas/test_setops.py
index 976d4a61f27e3..eff65fba773e4 100644
--- a/pandas/tests/indexes/timedeltas/test_setops.py
+++ b/pandas/tests/indexes/timedeltas/test_setops.py
@@ -3,11 +3,11 @@

 import pandas as pd
 from pandas import (
+    Index,
     TimedeltaIndex,
     timedelta_range,
 )
 import pandas._testing as tm
-from pandas.core.api import Int64Index

 from pandas.tseries.offsets import Hour

@@ -21,7 +21,7 @@ def test_union(self):
         expected = timedelta_range("1day", periods=7)
         tm.assert_index_equal(result, expected)

-        i1 = Int64Index(np.arange(0, 20, 2))
+        i1 = Index(np.arange(0, 20, 2, dtype=np.int64))
         i2 = timedelta_range(start="1 day", periods=10, freq="D")
         i1.union(i2)  # Works
         i2.union(i1)  # Fails with "AttributeError: can't set attribute"
diff --git a/pandas/tests/indexing/conftest.py b/pandas/tests/indexing/conftest.py
index ac3db524170fb..ec817649ec5ea 100644
--- a/pandas/tests/indexing/conftest.py
+++ b/pandas/tests/indexing/conftest.py
@@ -3,14 +3,11 @@

 from pandas import (
     DataFrame,
+    Index,
     MultiIndex,
     Series,
     date_range,
 )
-from pandas.core.api import (
-    Float64Index,
-    UInt64Index,
-)


 @pytest.fixture
@@ -27,15 +24,15 @@ def frame_ints():

 @pytest.fixture
 def series_uints():
-    return Series(np.random.rand(4), index=UInt64Index(np.arange(0, 8, 2)))
+    return Series(np.random.rand(4), index=Index(np.arange(0, 8, 2, dtype=np.uint64)))


 @pytest.fixture
 def frame_uints():
     return DataFrame(
         np.random.randn(4, 4),
-        index=UInt64Index(range(0, 8, 2)),
-        columns=UInt64Index(range(0, 12, 3)),
+        index=Index(range(0, 8, 2), dtype=np.uint64),
+        columns=Index(range(0, 12, 3), dtype=np.uint64),
     )


@@ -61,15 +58,15 @@ def frame_ts():

 @pytest.fixture
 def series_floats():
-    return Series(np.random.rand(4), index=Float64Index(range(0, 8, 2)))
+    return Series(np.random.rand(4), index=Index(range(0, 8, 2), dtype=np.float64))


 @pytest.fixture
 def frame_floats():
     return DataFrame(
         np.random.randn(4, 4),
-        index=Float64Index(range(0, 8, 2)),
-        columns=Float64Index(range(0, 12, 3)),
+        index=Index(range(0, 8, 2), dtype=np.float64),
+        columns=Index(range(0, 12, 3), dtype=np.float64),
     )
diff --git a/pandas/tests/indexing/multiindex/test_loc.py b/pandas/tests/indexing/multiindex/test_loc.py
index 97fb1b577412c..8d242ba1e1b4d 100644
--- a/pandas/tests/indexing/multiindex/test_loc.py
+++ b/pandas/tests/indexing/multiindex/test_loc.py
@@ -44,16 +44,18 @@ def test_loc_setitem_frame_with_multiindex(self, multiindex_dataframe_random_dat
         df.loc[("bar", "two"), 1] = 7
         assert df.loc[("bar", "two"), 1] == 7

-    def test_loc_getitem_general(self):
-
+    def test_loc_getitem_general(self, any_real_numpy_dtype):
         # GH#2817
+        dtype = any_real_numpy_dtype
         data = {
             "amount": {0: 700, 1: 600, 2: 222, 3: 333, 4: 444},
             "col": {0: 3.5, 1: 3.5, 2: 4.0, 3: 4.0, 4: 4.0},
-            "year": {0: 2012, 1: 2011, 2: 2012, 3: 2012, 4: 2012},
+            "num": {0: 12, 1: 11, 2: 12, 3: 12, 4: 12},
         }
-        df = DataFrame(data).set_index(keys=["col", "year"])
-        key = 4.0, 2012
+        df = DataFrame(data)
+        df = df.astype({"col": dtype, "num": dtype})
+        df = df.set_index(keys=["col", "num"])
+        key = 4.0, 12

         # emits a PerformanceWarning, ok
         with tm.assert_produces_warning(PerformanceWarning):
@@ -64,8 +66,10 @@ def test_loc_getitem_general(self):
         assert return_value is None

         res = df.loc[key]
-        # col has float dtype, result should be Float64Index
-        index = MultiIndex.from_arrays([[4.0] * 3, [2012] * 3], names=["col", "year"])
+        # col has float dtype, result should be float64 Index
+        col_arr = np.array([4.0] * 3, dtype=dtype)
+        year_arr = np.array([12] * 3, dtype=dtype)
+        index = MultiIndex.from_arrays([col_arr, year_arr], names=["col", "num"])
         expected = DataFrame({"amount": [222, 333, 444]}, index=index)
         tm.assert_frame_equal(res, expected)
diff --git a/pandas/tests/indexing/test_coercion.py b/pandas/tests/indexing/test_coercion.py
index cce66355ef5a5..82950e7a1d1ae 100644
--- a/pandas/tests/indexing/test_coercion.py
+++ b/pandas/tests/indexing/test_coercion.py
@@ -16,7 +16,6 @@

 import pandas as pd
 import pandas._testing as tm
-from pandas.core.api import NumericIndex

 ###############################################################
 # Index / Series common tests which may trigger dtype coercions
@@ -207,33 +206,39 @@ def test_insert_index_object(self, insert, coerced_val, coerced_dtype):
     @pytest.mark.parametrize(
         "insert, coerced_val, coerced_dtype",
         [
-            (1, 1, np.int64),
+            (1, 1, None),
             (1.1, 1.1, np.float64),
             (False, False, object),  # GH#36319
             ("x", "x", object),
         ],
     )
-    def test_insert_index_int64(self, insert, coerced_val, coerced_dtype):
-        obj = NumericIndex([1, 2, 3, 4], dtype=np.int64)
-        assert obj.dtype == np.int64
+    def test_insert_int_index(
+        self, any_int_numpy_dtype, insert, coerced_val, coerced_dtype
+    ):
+        dtype = any_int_numpy_dtype
+        obj = pd.Index([1, 2, 3, 4], dtype=dtype)
+        coerced_dtype = coerced_dtype if coerced_dtype is not None else dtype

-        exp = pd.Index([1, coerced_val, 2, 3, 4])
+        exp = pd.Index([1, coerced_val, 2, 3, 4], dtype=coerced_dtype)
         self._assert_insert_conversion(obj, insert, exp, coerced_dtype)

     @pytest.mark.parametrize(
         "insert, coerced_val, coerced_dtype",
         [
-            (1, 1.0, np.float64),
+            (1, 1.0, None),
             (1.1, 1.1, np.float64),
             (False, False, object),  # GH#36319
             ("x", "x", object),
         ],
     )
-    def test_insert_index_float64(self, insert, coerced_val, coerced_dtype):
-        obj = NumericIndex([1.0, 2.0, 3.0, 4.0], dtype=np.float64)
-        assert obj.dtype == np.float64
+    def test_insert_float_index(
+        self, float_numpy_dtype, insert, coerced_val, coerced_dtype
+    ):
+        dtype = float_numpy_dtype
+        obj = pd.Index([1.0, 2.0, 3.0, 4.0], dtype=dtype)
+        coerced_dtype = coerced_dtype if
coerced_dtype is not None else dtype - exp = pd.Index([1.0, coerced_val, 2.0, 3.0, 4.0]) + exp = pd.Index([1.0, coerced_val, 2.0, 3.0, 4.0], dtype=coerced_dtype) self._assert_insert_conversion(obj, insert, exp, coerced_dtype) @pytest.mark.parametrize( diff --git a/pandas/tests/indexing/test_floats.py b/pandas/tests/indexing/test_floats.py index c07b0e81da12b..a32d422ad2905 100644 --- a/pandas/tests/indexing/test_floats.py +++ b/pandas/tests/indexing/test_floats.py @@ -531,8 +531,9 @@ def test_floating_misc(self, indexer_sl): result = indexer_sl(s)[[2.5]] tm.assert_series_equal(result, Series([1], index=[2.5])) - def test_float64index_slicing_bug(self): + def test_floatindex_slicing_bug(self, float_numpy_dtype): # GH 5557, related to slicing a float index + dtype = float_numpy_dtype ser = { 256: 2321.0, 1: 78.0, @@ -686,6 +687,7 @@ def test_float64index_slicing_bug(self): } # smoke test for the repr - s = Series(ser) + s = Series(ser, dtype=dtype) result = s.value_counts() + assert result.index.dtype == dtype str(result) diff --git a/pandas/tests/indexing/test_indexing.py b/pandas/tests/indexing/test_indexing.py index ff4695808ee75..2bd11fbb85d13 100644 --- a/pandas/tests/indexing/test_indexing.py +++ b/pandas/tests/indexing/test_indexing.py @@ -26,7 +26,6 @@ timedelta_range, ) import pandas._testing as tm -from pandas.core.api import Float64Index from pandas.tests.indexing.common import _mklbl from pandas.tests.indexing.test_floats import gen_obj @@ -167,7 +166,7 @@ def test_inf_upcast(self): assert df.loc[np.inf, 0] == 3 result = df.index - expected = Float64Index([1, 2, np.inf]) + expected = Index([1, 2, np.inf], dtype=np.float64) tm.assert_index_equal(result, expected) def test_setitem_dtype_upcast(self): diff --git a/pandas/tests/indexing/test_loc.py b/pandas/tests/indexing/test_loc.py index b3af55215100f..5445053027940 100644 --- a/pandas/tests/indexing/test_loc.py +++ b/pandas/tests/indexing/test_loc.py @@ -41,7 +41,6 @@ is_bool_dtype, is_scalar, ) -from 
pandas.core.api import Float64Index from pandas.core.indexing import _one_ellipsis_message from pandas.tests.indexing.common import check_indexing_smoketest_or_raises @@ -1377,9 +1376,10 @@ def test_loc_getitem_uint64_scalar(self, val, expected): expected.name = val tm.assert_series_equal(result, expected) - def test_loc_setitem_int_label_with_float64index(self): + def test_loc_setitem_int_label_with_float_index(self, float_numpy_dtype): # note labels are floats - ser = Series(["a", "b", "c"], index=[0, 0.5, 1]) + dtype = float_numpy_dtype + ser = Series(["a", "b", "c"], index=Index([0, 0.5, 1], dtype=dtype)) expected = ser.copy() ser.loc[1] = "zoo" @@ -2073,7 +2073,7 @@ def test_loc_setitem_with_expansion_inf_upcast_empty(self): df.loc[0, np.inf] = 3 result = df.columns - expected = Float64Index([0, 1, np.inf]) + expected = Index([0, 1, np.inf], dtype=np.float64) tm.assert_index_equal(result, expected) @pytest.mark.filterwarnings("ignore:indexing past lexsort depth") @@ -2415,13 +2415,14 @@ def test_loc_getitem_slice_floats_inexact(self): s1 = df.loc[52195.1:52198.9] assert len(s1) == 3 - def test_loc_getitem_float_slice_float64index(self): - ser = Series(np.random.rand(10), index=np.arange(10, 20, dtype=float)) + def test_loc_getitem_float_slice_floatindex(self, float_numpy_dtype): + dtype = float_numpy_dtype + ser = Series(np.random.rand(10), index=np.arange(10, 20, dtype=dtype)) assert len(ser.loc[12.0:]) == 8 assert len(ser.loc[12.5:]) == 7 - idx = np.arange(10, 20, dtype=float) + idx = np.arange(10, 20, dtype=dtype) idx[2] = 12.2 ser.index = idx assert len(ser.loc[12.0:]) == 8 diff --git a/pandas/tests/resample/test_resampler_grouper.py b/pandas/tests/resample/test_resampler_grouper.py index 0c8e303b4ac56..3ab57e137f1c1 100644 --- a/pandas/tests/resample/test_resampler_grouper.py +++ b/pandas/tests/resample/test_resampler_grouper.py @@ -15,7 +15,6 @@ Timestamp, ) import pandas._testing as tm -from pandas.core.api import Int64Index from 
pandas.core.indexes.datetimes import date_range test_frame = DataFrame( @@ -333,7 +332,7 @@ def test_consistency_with_window(): # consistent return values with window df = test_frame - expected = Int64Index([1, 2, 3], name="A") + expected = Index([1, 2, 3], name="A") result = df.groupby("A").resample("2s").mean() assert result.index.nlevels == 2 tm.assert_index_equal(result.index.levels[0], expected) diff --git a/pandas/tests/reshape/merge/test_merge.py b/pandas/tests/reshape/merge/test_merge.py index 35d10eafb5ba7..4ffc0232634b3 100644 --- a/pandas/tests/reshape/merge/test_merge.py +++ b/pandas/tests/reshape/merge/test_merge.py @@ -20,6 +20,7 @@ CategoricalIndex, DataFrame, DatetimeIndex, + Index, IntervalIndex, MultiIndex, PeriodIndex, @@ -29,11 +30,6 @@ ) import pandas._testing as tm from pandas.api.types import CategoricalDtype as CDT -from pandas.core.api import ( - Float64Index, - Int64Index, - UInt64Index, -) from pandas.core.reshape.concat import concat from pandas.core.reshape.merge import ( MergeError, @@ -1324,8 +1320,13 @@ def test_merge_two_empty_df_no_division_error(self): ["2001-01-01", "2002-02-02", "2003-03-03", pd.NaT, pd.NaT, pd.NaT] ), ), - (Float64Index([1, 2, 3]), Float64Index([1, 2, 3, None, None, None])), - (Int64Index([1, 2, 3]), Float64Index([1, 2, 3, None, None, None])), + *[ + ( + Index([1, 2, 3], dtype=dtyp), + Index([1, 2, 3, None, None, None], dtype=np.float64), + ) + for dtyp in tm.ALL_REAL_NUMPY_DTYPES + ], ( IntervalIndex.from_tuples([(1, 2), (2, 3), (3, 4)]), IntervalIndex.from_tuples( @@ -2140,17 +2141,19 @@ def test_merge_on_indexes(self, left_df, right_df, how, sort, expected): tm.assert_frame_equal(result, expected) +_test_merge_index_types_params = [ + Index([1, 2], dtype=dtyp, name="index_col") for dtyp in tm.ALL_REAL_NUMPY_DTYPES +] + [ + CategoricalIndex(["A", "B"], categories=["A", "B"], name="index_col"), + RangeIndex(start=0, stop=2, name="index_col"), + DatetimeIndex(["2018-01-01", "2018-01-02"], name="index_col"), +] 
+ + @pytest.mark.parametrize( "index", - [ - CategoricalIndex(["A", "B"], categories=["A", "B"], name="index_col"), - Float64Index([1.0, 2.0], name="index_col"), - Int64Index([1, 2], name="index_col"), - UInt64Index([1, 2], name="index_col"), - RangeIndex(start=0, stop=2, name="index_col"), - DatetimeIndex(["2018-01-01", "2018-01-02"], name="index_col"), - ], - ids=lambda x: type(x).__name__, + _test_merge_index_types_params, + ids=lambda x: f"{type(x).__name__}[{x.dtype}]", ) def test_merge_index_types(index): # gh-20777 @@ -2652,11 +2655,11 @@ def test_merge_duplicate_columns_with_suffix_causing_another_duplicate_raises(): def test_merge_string_float_column_result(): # GH 13353 - df1 = DataFrame([[1, 2], [3, 4]], columns=pd.Index(["a", 114.0])) + df1 = DataFrame([[1, 2], [3, 4]], columns=Index(["a", 114.0])) df2 = DataFrame([[9, 10], [11, 12]], columns=["x", "y"]) result = merge(df2, df1, how="inner", left_index=True, right_index=True) expected = DataFrame( - [[9, 10, 1, 2], [11, 12, 3, 4]], columns=pd.Index(["x", "y", "a", 114.0]) + [[9, 10, 1, 2], [11, 12, 3, 4]], columns=Index(["x", "y", "a", 114.0]) ) tm.assert_frame_equal(result, expected) @@ -2712,8 +2715,8 @@ def test_merge_outer_with_NaN(dtype): def test_merge_different_index_names(): # GH#45094 - left = DataFrame({"a": [1]}, index=pd.Index([1], name="c")) - right = DataFrame({"a": [1]}, index=pd.Index([1], name="d")) + left = DataFrame({"a": [1]}, index=Index([1], name="c")) + right = DataFrame({"a": [1]}, index=Index([1], name="d")) result = merge(left, right, left_on="c", right_on="d") expected = DataFrame({"a_x": [1], "a_y": 1}) tm.assert_frame_equal(result, expected) diff --git a/pandas/tests/series/indexing/test_get.py b/pandas/tests/series/indexing/test_get.py index e8034bd4f7160..b78f545b2a010 100644 --- a/pandas/tests/series/indexing/test_get.py +++ b/pandas/tests/series/indexing/test_get.py @@ -2,9 +2,11 @@ import pytest import pandas as pd -from pandas import Series +from pandas import ( + 
Index, + Series, +) import pandas._testing as tm -from pandas.core.api import Float64Index def test_get(): @@ -65,7 +67,7 @@ def test_get(): 54, ] ), - index=Float64Index( + index=Index( [ 25.0, 36.0, @@ -87,7 +89,8 @@ def test_get(): 1764.0, 1849.0, 1936.0, - ] + ], + dtype=np.float64, ), ) @@ -110,18 +113,18 @@ def test_get(): assert result == "Missing" -def test_get_nan(): +def test_get_nan(float_numpy_dtype): # GH 8569 - s = Float64Index(range(10)).to_series() + s = Index(range(10), dtype=float_numpy_dtype).to_series() assert s.get(np.nan) is None assert s.get(np.nan, default="Missing") == "Missing" -def test_get_nan_multiple(): +def test_get_nan_multiple(float_numpy_dtype): # GH 8569 # ensure that fixing "test_get_nan" above hasn't broken get # with multiple elements - s = Float64Index(range(10)).to_series() + s = Index(range(10), dtype=float_numpy_dtype).to_series() idx = [2, 30] assert s.get(idx) is None diff --git a/pandas/tests/series/indexing/test_getitem.py b/pandas/tests/series/indexing/test_getitem.py index b074f30fa3299..91d6be01eef16 100644 --- a/pandas/tests/series/indexing/test_getitem.py +++ b/pandas/tests/series/indexing/test_getitem.py @@ -88,8 +88,9 @@ def test_getitem_out_of_bounds_empty_rangeindex_keyerror(self): with pytest.raises(KeyError, match="-1"): ser[-1] - def test_getitem_keyerror_with_int64index(self): - ser = Series(np.random.randn(6), index=[0, 0, 1, 1, 2, 2]) + def test_getitem_keyerror_with_integer_index(self, any_int_numpy_dtype): + dtype = any_int_numpy_dtype + ser = Series(np.random.randn(6), index=Index([0, 0, 1, 1, 2, 2], dtype=dtype)) with pytest.raises(KeyError, match=r"^5$"): ser[5] diff --git a/pandas/tests/series/test_constructors.py b/pandas/tests/series/test_constructors.py index 05e40e20f1226..1135a84b4ecae 100644 --- a/pandas/tests/series/test_constructors.py +++ b/pandas/tests/series/test_constructors.py @@ -47,7 +47,6 @@ timedelta_range, ) import pandas._testing as tm -from pandas.core.api import Int64Index from 
pandas.core.arrays import ( IntervalArray, period_array, @@ -712,7 +711,7 @@ def test_constructor_copy(self): timedelta_range("1 day", periods=3), period_range("2012Q1", periods=3, freq="Q"), Index(list("abc")), - Int64Index([1, 2, 3]), + Index([1, 2, 3]), RangeIndex(0, 3), ], ids=lambda x: type(x).__name__,
This PR updates various tests that currently use Int64Index/UInt64Index/Float64Index to instead use `Index` with the relevant dtype (parametrized dtypes where relevant). Sections of the test suite are changed in separate commits, so it will be easier to separate them out, if desired. This PR's first 3 commits correspond to #50052, which in turn builds on #49560, which builds on #50195. Those 3 PRs are not part of this PR, but should ideally be treated as precursors to this PR, if possible.
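The replacement pattern these tests adopt — a plain `Index` with an explicit dtype instead of the removed `Int64Index`/`UInt64Index`/`Float64Index` classes — can be sketched as follows (a minimal example; behavior shown holds in pandas 1.x and 2.x):

```python
import numpy as np
import pandas as pd

# Old style (removed in pandas 2.0): Int64Index([1, 2, 3])
# New style: a plain Index carrying the dtype explicitly
idx_int = pd.Index([1, 2, 3], dtype=np.int64)
idx_uint = pd.Index([0, 2, 4], dtype=np.uint64)
idx_float = pd.Index([1.0, 2.0], dtype=np.float64)

print(idx_int.dtype, idx_uint.dtype, idx_float.dtype)
```

This is also the form the tests parametrize over via fixtures such as `any_real_numpy_dtype`, so one test body covers every integer/float width.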
https://api.github.com/repos/pandas-dev/pandas/pulls/50479
2022-12-28T23:49:14Z
2023-01-19T21:49:22Z
null
2023-01-22T10:18:10Z
ENH: Add lazy copy to swaplevel
diff --git a/pandas/core/frame.py b/pandas/core/frame.py index e671f45216968..d916fdc5a0ee4 100644 --- a/pandas/core/frame.py +++ b/pandas/core/frame.py @@ -7254,7 +7254,7 @@ def nsmallest(self, n: int, columns: IndexLabel, keep: str = "first") -> DataFra ), ) def swaplevel(self, i: Axis = -2, j: Axis = -1, axis: Axis = 0) -> DataFrame: - result = self.copy() + result = self.copy(deep=None) axis = self._get_axis_number(axis) diff --git a/pandas/core/series.py b/pandas/core/series.py index b69fb4c1b58aa..6280e38a367a2 100644 --- a/pandas/core/series.py +++ b/pandas/core/series.py @@ -4071,7 +4071,9 @@ def nsmallest(self, n: int = 5, keep: str = "first") -> Series: dtype: object""" ), ) - def swaplevel(self, i: Level = -2, j: Level = -1, copy: bool = True) -> Series: + def swaplevel( + self, i: Level = -2, j: Level = -1, copy: bool | None = None + ) -> Series: """ Swap levels i and j in a :class:`MultiIndex`. @@ -4091,10 +4093,9 @@ def swaplevel(self, i: Level = -2, j: Level = -1, copy: bool = True) -> Series: {examples} """ assert isinstance(self.index, MultiIndex) - new_index = self.index.swaplevel(i, j) - return self._constructor(self._values, index=new_index, copy=copy).__finalize__( - self, method="swaplevel" - ) + result = self.copy(deep=copy) + result.index = self.index.swaplevel(i, j) + return result def reorder_levels(self, order: Sequence[Level]) -> Series: """ diff --git a/pandas/tests/copy_view/test_methods.py b/pandas/tests/copy_view/test_methods.py index 878f1d8089d33..4a132becf99f4 100644 --- a/pandas/tests/copy_view/test_methods.py +++ b/pandas/tests/copy_view/test_methods.py @@ -424,6 +424,24 @@ def test_reorder_levels(using_copy_on_write): tm.assert_frame_equal(df, df_orig) +@pytest.mark.parametrize("obj", [Series([1, 2, 3]), DataFrame({"a": [1, 2, 3]})]) +def test_swaplevel(using_copy_on_write, obj): + index = MultiIndex.from_tuples([(1, 1), (1, 2), (2, 1)], names=["one", "two"]) + obj.index = index + obj_orig = obj.copy() + obj2 = obj.swaplevel() 
+ + if using_copy_on_write: + assert np.shares_memory(obj2.values, obj.values) + else: + assert not np.shares_memory(obj2.values, obj.values) + + obj2.iloc[0] = 0 + if using_copy_on_write: + assert not np.shares_memory(obj2.values, obj.values) + tm.assert_equal(obj, obj_orig) + + def test_frame_set_axis(using_copy_on_write): # GH 49473 df = DataFrame({"a": [1, 2, 3], "b": [4, 5, 6], "c": [0.1, 0.2, 0.3]})
- [ ] xref #49473 - [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature - [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit). - [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions. - [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
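The user-facing behavior this PR optimizes can be sketched as follows (a minimal example; whether the result shares memory with the original depends on the Copy-on-Write mode this PR hooks into, so only the functional outcome is shown):

```python
import pandas as pd

idx = pd.MultiIndex.from_tuples([(1, 1), (1, 2), (2, 1)], names=["one", "two"])
ser = pd.Series([10, 20, 30], index=idx)

# swaplevel swaps the two innermost index levels; with the copy=None
# default introduced here, the copy can be deferred under Copy-on-Write.
swapped = ser.swaplevel()
print(swapped.index.names)  # level order is reversed, values unchanged
```

A later in-place write to `swapped` triggers the actual copy under Copy-on-Write, leaving `ser` untouched — that is what the added `test_swaplevel` asserts via `np.shares_memory`.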
https://api.github.com/repos/pandas-dev/pandas/pulls/50478
2022-12-28T23:33:40Z
2023-01-03T08:04:12Z
2023-01-03T08:04:12Z
2023-01-03T11:08:42Z
ENH: Add lazy copy for truncate
diff --git a/pandas/core/generic.py b/pandas/core/generic.py index 95804de2a7089..f6dfc7d3b076a 100644 --- a/pandas/core/generic.py +++ b/pandas/core/generic.py @@ -159,6 +159,7 @@ SingleArrayManager, ) from pandas.core.internals.construction import mgr_to_mgr +from pandas.core.internals.managers import _using_copy_on_write from pandas.core.missing import ( clean_fill_method, clean_reindex_fill_method, @@ -9947,7 +9948,7 @@ def truncate( before=None, after=None, axis: Axis | None = None, - copy: bool_t = True, + copy: bool_t | None = None, ) -> NDFrameT: """ Truncate a Series or DataFrame before and after some index value. @@ -10098,8 +10099,8 @@ def truncate( if isinstance(ax, MultiIndex): setattr(result, self._get_axis_name(axis), ax.truncate(before, after)) - if copy: - result = result.copy() + if copy or (copy is None and not _using_copy_on_write()): + result = result.copy(deep=copy) return result diff --git a/pandas/tests/copy_view/test_methods.py b/pandas/tests/copy_view/test_methods.py index 2b3d13b982d4d..e24babd46fea9 100644 --- a/pandas/tests/copy_view/test_methods.py +++ b/pandas/tests/copy_view/test_methods.py @@ -371,6 +371,30 @@ def test_head_tail(method, using_copy_on_write): tm.assert_frame_equal(df, df_orig) +@pytest.mark.parametrize( + "kwargs", + [ + {"before": "a", "after": "b", "axis": 1}, + {"before": 0, "after": 1, "axis": 0}, + ], +) +def test_truncate(using_copy_on_write, kwargs): + df = DataFrame({"a": [1, 2, 3], "b": 1, "c": 2}) + df_orig = df.copy() + df2 = df.truncate(**kwargs) + df2._mgr._verify_integrity() + + if using_copy_on_write: + assert np.shares_memory(get_array(df2, "a"), get_array(df, "a")) + else: + assert not np.shares_memory(get_array(df2, "a"), get_array(df, "a")) + + df2.iloc[0, 0] = 0 + if using_copy_on_write: + assert not np.shares_memory(get_array(df2, "a"), get_array(df, "a")) + tm.assert_frame_equal(df, df_orig) + + @pytest.mark.parametrize("method", ["assign", "drop_duplicates"]) def 
test_assign_drop_duplicates(using_copy_on_write, method): df = DataFrame({"a": [1, 2, 3]}) diff --git a/pandas/tests/frame/methods/test_truncate.py b/pandas/tests/frame/methods/test_truncate.py index bfee3edc085d8..21f0664707ebe 100644 --- a/pandas/tests/frame/methods/test_truncate.py +++ b/pandas/tests/frame/methods/test_truncate.py @@ -66,12 +66,6 @@ def test_truncate(self, datetime_frame, frame_or_series): before=ts.index[-1] - ts.index.freq, after=ts.index[0] + ts.index.freq ) - def test_truncate_copy(self, datetime_frame): - index = datetime_frame.index - truncated = datetime_frame.truncate(index[5], index[10]) - truncated.values[:] = 5.0 - assert not (datetime_frame.values[5:11] == 5).any() - def test_truncate_nonsortedindex(self, frame_or_series): # GH#17935
- [ ] xref #49473 - [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature - [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit). - [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions. - [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
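The two `truncate` call shapes the new test parametrizes over can be sketched as follows (a minimal example; the memory-sharing behavior itself depends on Copy-on-Write being enabled and is not asserted here):

```python
import pandas as pd

df = pd.DataFrame({"a": [1, 2, 3], "b": [4, 5, 6], "c": [7, 8, 9]})

# Truncate rows by index label (axis=0, both bounds inclusive) ...
rows = df.truncate(before=0, after=1)
# ... or columns by label (axis=1), as in the added copy_view test
cols = df.truncate(before="a", after="b", axis=1)
print(rows.shape, cols.shape)
```

With `copy=None` (the new default), `truncate` only forces an eager copy when Copy-on-Write is off, which is why the old `test_truncate_copy` assertion is removed.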
https://api.github.com/repos/pandas-dev/pandas/pulls/50477
2022-12-28T23:11:29Z
2023-01-04T09:23:41Z
2023-01-04T09:23:40Z
2023-01-04T09:25:10Z
ENH: Add lazy copy for take and between_time
diff --git a/pandas/core/generic.py b/pandas/core/generic.py index 8980fe0249193..fa1a79c81630a 100644 --- a/pandas/core/generic.py +++ b/pandas/core/generic.py @@ -35,6 +35,7 @@ ) from pandas._libs import lib +from pandas._libs.lib import array_equal_fast from pandas._libs.tslibs import ( Period, Tick, @@ -3754,6 +3755,18 @@ def _take( See the docstring of `take` for full explanation of the parameters. """ + if not isinstance(indices, slice): + indices = np.asarray(indices, dtype=np.intp) + if ( + axis == 0 + and indices.ndim == 1 + and using_copy_on_write() + and array_equal_fast( + indices, + np.arange(0, len(self), dtype=np.intp), + ) + ): + return self.copy(deep=None) new_data = self._mgr.take( indices, diff --git a/pandas/core/series.py b/pandas/core/series.py index 6b82d48f82ce7..c38eb4e7c5d34 100644 --- a/pandas/core/series.py +++ b/pandas/core/series.py @@ -32,7 +32,10 @@ properties, reshape, ) -from pandas._libs.lib import no_default +from pandas._libs.lib import ( + array_equal_fast, + no_default, +) from pandas._typing import ( AggFuncType, AlignJoin, @@ -879,6 +882,14 @@ def take(self, indices, axis: Axis = 0, **kwargs) -> Series: nv.validate_take((), kwargs) indices = ensure_platform_int(indices) + + if ( + indices.ndim == 1 + and using_copy_on_write() + and array_equal_fast(indices, np.arange(0, len(self), dtype=indices.dtype)) + ): + return self.copy(deep=None) + new_index = self.index.take(indices) new_values = self._values.take(indices) diff --git a/pandas/tests/copy_view/test_methods.py b/pandas/tests/copy_view/test_methods.py index 2bc4202cce5f5..67bf9a117f2a0 100644 --- a/pandas/tests/copy_view/test_methods.py +++ b/pandas/tests/copy_view/test_methods.py @@ -540,6 +540,40 @@ def test_assign_drop_duplicates(using_copy_on_write, method): tm.assert_frame_equal(df, df_orig) +@pytest.mark.parametrize("obj", [Series([1, 2]), DataFrame({"a": [1, 2]})]) +def test_take(using_copy_on_write, obj): + # Check that no copy is made when we take all rows in 
original order + obj_orig = obj.copy() + obj2 = obj.take([0, 1]) + + if using_copy_on_write: + assert np.shares_memory(obj2.values, obj.values) + else: + assert not np.shares_memory(obj2.values, obj.values) + + obj2.iloc[0] = 0 + if using_copy_on_write: + assert not np.shares_memory(obj2.values, obj.values) + tm.assert_equal(obj, obj_orig) + + +@pytest.mark.parametrize("obj", [Series([1, 2]), DataFrame({"a": [1, 2]})]) +def test_between_time(using_copy_on_write, obj): + obj.index = date_range("2018-04-09", periods=2, freq="1D20min") + obj_orig = obj.copy() + obj2 = obj.between_time("0:00", "1:00") + + if using_copy_on_write: + assert np.shares_memory(obj2.values, obj.values) + else: + assert not np.shares_memory(obj2.values, obj.values) + + obj2.iloc[0] = 0 + if using_copy_on_write: + assert not np.shares_memory(obj2.values, obj.values) + tm.assert_equal(obj, obj_orig) + + def test_reindex_like(using_copy_on_write): df = DataFrame({"a": [1, 2], "b": "a"}) other = DataFrame({"b": "a", "a": [1, 2]})
- [ ] xref #49473 - [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature - [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit). - [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions. - [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
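The short-circuit this PR adds fires when `take` receives the identity indexer (all rows in original order), and `between_time` selecting the full range hits the same path. A minimal functional sketch (memory sharing under Copy-on-Write is not asserted here):

```python
import pandas as pd

ser = pd.Series([1, 2], index=pd.date_range("2018-04-09", periods=2, freq="1D20min"))

# take([0, 1]) on a length-2 Series is the identity case the PR detects,
# returning a lazy copy instead of materializing a new array
taken = ser.take([0, 1])

# between_time covering every row's time-of-day likewise keeps all rows
sel = ser.between_time("0:00", "1:00")
```

The detection uses `array_equal_fast(indices, np.arange(len(obj)))`, so non-trivial indexers still go through the normal take path.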
https://api.github.com/repos/pandas-dev/pandas/pulls/50476
2022-12-28T22:26:56Z
2023-01-13T07:57:58Z
2023-01-13T07:57:58Z
2023-01-13T08:23:33Z
BUG: loc.setitem coercing rhs df dtypes
diff --git a/doc/source/whatsnew/v2.0.0.rst b/doc/source/whatsnew/v2.0.0.rst index 5b725eb4d2a98..92b76c62db8f7 100644 --- a/doc/source/whatsnew/v2.0.0.rst +++ b/doc/source/whatsnew/v2.0.0.rst @@ -878,6 +878,7 @@ Indexing ^^^^^^^^ - Bug in :meth:`DataFrame.__setitem__` raising when indexer is a :class:`DataFrame` with ``boolean`` dtype (:issue:`47125`) - Bug in :meth:`DataFrame.reindex` filling with wrong values when indexing columns and index for ``uint`` dtypes (:issue:`48184`) +- Bug in :meth:`DataFrame.loc` when setting :class:`DataFrame` with different dtypes coercing values to single dtype (:issue:`50467`) - Bug in :meth:`DataFrame.loc` coercing dtypes when setting values with a list indexer (:issue:`49159`) - Bug in :meth:`Series.loc` raising error for out of bounds end of slice indexer (:issue:`50161`) - Bug in :meth:`DataFrame.loc` raising ``ValueError`` with ``bool`` indexer and :class:`MultiIndex` (:issue:`47687`) diff --git a/pandas/core/indexing.py b/pandas/core/indexing.py index fa702770a0990..1b32c09bf4a25 100644 --- a/pandas/core/indexing.py +++ b/pandas/core/indexing.py @@ -1703,6 +1703,10 @@ def _setitem_with_indexer(self, indexer, value, name: str = "iloc"): # maybe partial set take_split_path = not self.obj._mgr.is_single_block + if not take_split_path and isinstance(value, ABCDataFrame): + # Avoid cast of values + take_split_path = not value._mgr.is_single_block + # if there is only one block/type, still have to take split path # unless the block is one-dimensional or it can hold the value if not take_split_path and len(self.obj._mgr.arrays) and self.ndim > 1: @@ -2055,7 +2059,7 @@ def _setitem_single_block(self, indexer, value, name: str) -> None: value = self._align_series(indexer, Series(value)) elif isinstance(value, ABCDataFrame) and name != "iloc": - value = self._align_frame(indexer, value) + value = self._align_frame(indexer, value)._values # check for chained assignment self.obj._check_is_chained_assignment_possible() @@ -2291,7 
+2295,7 @@ def ravel(i): raise ValueError("Incompatible indexer with Series") - def _align_frame(self, indexer, df: DataFrame): + def _align_frame(self, indexer, df: DataFrame) -> DataFrame: is_frame = self.ndim == 2 if isinstance(indexer, tuple): @@ -2315,15 +2319,15 @@ def _align_frame(self, indexer, df: DataFrame): if idx is not None and cols is not None: if df.index.equals(idx) and df.columns.equals(cols): - val = df.copy()._values + val = df.copy() else: - val = df.reindex(idx, columns=cols)._values + val = df.reindex(idx, columns=cols) return val elif (isinstance(indexer, slice) or is_list_like_indexer(indexer)) and is_frame: ax = self.obj.index[indexer] if df.index.equals(ax): - val = df.copy()._values + val = df.copy() else: # we have a multi-index and are trying to align @@ -2338,7 +2342,7 @@ def _align_frame(self, indexer, df: DataFrame): "specifying the join levels" ) - val = df.reindex(index=ax)._values + val = df.reindex(index=ax) return val raise ValueError("Incompatible indexer with DataFrame") diff --git a/pandas/tests/frame/indexing/test_indexing.py b/pandas/tests/frame/indexing/test_indexing.py index c0b04b1f8e80f..63c3ba418591e 100644 --- a/pandas/tests/frame/indexing/test_indexing.py +++ b/pandas/tests/frame/indexing/test_indexing.py @@ -1494,6 +1494,15 @@ def test_loc_datetime_assignment_dtype_does_not_change(self, utc, indexer): tm.assert_frame_equal(df, expected) + @pytest.mark.parametrize("indexer, idx", [(tm.loc, 1), (tm.iloc, 2)]) + def test_setitem_value_coercing_dtypes(self, indexer, idx): + # GH#50467 + df = DataFrame([["1", np.nan], ["2", np.nan], ["3", np.nan]], dtype=object) + rhs = DataFrame([[1, np.nan], [2, np.nan]]) + indexer(df)[:idx, :] = rhs + expected = DataFrame([[1, np.nan], [2, np.nan], ["3", np.nan]], dtype=object) + tm.assert_frame_equal(df, expected) + class TestDataFrameIndexingUInt64: def test_setitem(self, uint64_frame):
- [x] closes #50467 - [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature - [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit). - [x] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions. - [x] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
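The scenario from #50467 can be reproduced as follows (mirroring the added test; before the fix the multi-dtype rhs was flattened through `._values` and coerced to a single dtype, while afterwards each cell keeps its own type — only version-independent facts are asserted below):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame([["1", np.nan], ["2", np.nan], ["3", np.nan]], dtype=object)
rhs = pd.DataFrame([[1, np.nan], [2, np.nan]])  # int column + float column

# Setting a DataFrame with heterogeneous dtypes through .loc; the fix
# forces the split path so rhs values are not cast to one common dtype
df.loc[:1, :] = rhs
print(df.iloc[0, 0], df.iloc[2, 0])
```

The untouched third row keeps its original string `"3"` either way; the fix is about the assigned rows retaining `1`/`2` as ints rather than floats.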
https://api.github.com/repos/pandas-dev/pandas/pulls/50475
2022-12-28T21:18:27Z
2023-01-06T23:47:59Z
2023-01-06T23:47:59Z
2023-01-07T11:43:23Z
DOC: Add pyarrow type equivalency table
diff --git a/doc/source/reference/arrays.rst b/doc/source/reference/arrays.rst index 5b41de4e12e6f..aeaca7caea25d 100644 --- a/doc/source/reference/arrays.rst +++ b/doc/source/reference/arrays.rst @@ -60,6 +60,37 @@ is an :class:`ArrowDtype`. `Pyarrow <https://arrow.apache.org/docs/python/index.html>`__ provides similar array and `data type <https://arrow.apache.org/docs/python/api/datatypes.html>`__ support as NumPy including first-class nullability support for all data types, immutability and more. +The table below shows the equivalent pyarrow-backed (``pa``), pandas extension, and numpy (``np``) types that are recognized by pandas. +Pyarrow-backed types below need to be passed into :class:`ArrowDtype` to be recognized by pandas e.g. ``pd.ArrowDtype(pa.bool_())`` + +=============================================== ========================== =================== +PyArrow type pandas extension type NumPy type +=============================================== ========================== =================== +:external+pyarrow:py:func:`pyarrow.bool_` :class:`BooleanDtype` ``np.bool_`` +:external+pyarrow:py:func:`pyarrow.int8` :class:`Int8Dtype` ``np.int8`` +:external+pyarrow:py:func:`pyarrow.int16` :class:`Int16Dtype` ``np.int16`` +:external+pyarrow:py:func:`pyarrow.int32` :class:`Int32Dtype` ``np.int32`` +:external+pyarrow:py:func:`pyarrow.int64` :class:`Int64Dtype` ``np.int64`` +:external+pyarrow:py:func:`pyarrow.uint8` :class:`UInt8Dtype` ``np.uint8`` +:external+pyarrow:py:func:`pyarrow.uint16` :class:`UInt16Dtype` ``np.uint16`` +:external+pyarrow:py:func:`pyarrow.uint32` :class:`UInt32Dtype` ``np.uint32`` +:external+pyarrow:py:func:`pyarrow.uint64` :class:`UInt64Dtype` ``np.uint64`` +:external+pyarrow:py:func:`pyarrow.float32` :class:`Float32Dtype` ``np.float32`` +:external+pyarrow:py:func:`pyarrow.float64` :class:`Float64Dtype` ``np.float64`` +:external+pyarrow:py:func:`pyarrow.time32` (none) (none) +:external+pyarrow:py:func:`pyarrow.time64` (none) (none) 
+:external+pyarrow:py:func:`pyarrow.timestamp` :class:`DatetimeTZDtype` ``np.datetime64`` +:external+pyarrow:py:func:`pyarrow.date32` (none) (none) +:external+pyarrow:py:func:`pyarrow.date64` (none) (none) +:external+pyarrow:py:func:`pyarrow.duration` (none) ``np.timedelta64`` +:external+pyarrow:py:func:`pyarrow.binary` (none) (none) +:external+pyarrow:py:func:`pyarrow.string` :class:`StringDtype` ``np.str_`` +:external+pyarrow:py:func:`pyarrow.decimal128` (none) (none) +:external+pyarrow:py:func:`pyarrow.list_` (none) (none) +:external+pyarrow:py:func:`pyarrow.map_` (none) (none) +:external+pyarrow:py:func:`pyarrow.dictionary` :class:`CategoricalDtype` (none) +=============================================== ========================== =================== + .. note:: For string types (``pyarrow.string()``, ``string[pyarrow]``), PyArrow support is still facilitated
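For programmatic use, the equivalency table in the diff above can be transcribed into a plain lookup. This is a hypothetical helper, not a pandas API; the keys mirror the pyarrow type-factory names and the values name the pandas extension dtype classes, with `None` for cells the table marks "(none)":

```python
# Hypothetical transcription of the doc's pyarrow <-> pandas equivalency
# table; not part of pandas itself.
PYARROW_TO_PANDAS_DTYPE = {
    "bool_": "BooleanDtype",
    "int8": "Int8Dtype",
    "int16": "Int16Dtype",
    "int32": "Int32Dtype",
    "int64": "Int64Dtype",
    "uint8": "UInt8Dtype",
    "uint16": "UInt16Dtype",
    "uint32": "UInt32Dtype",
    "uint64": "UInt64Dtype",
    "float32": "Float32Dtype",
    "float64": "Float64Dtype",
    "time32": None,
    "time64": None,
    "timestamp": "DatetimeTZDtype",
    "date32": None,
    "date64": None,
    "duration": None,
    "binary": None,
    "string": "StringDtype",
    "decimal128": None,
    "list_": None,
    "map_": None,
    "dictionary": "CategoricalDtype",
}


def pandas_equivalent(pyarrow_type_name):
    """Return the pandas extension dtype name for a pyarrow factory, if any."""
    return PYARROW_TO_PANDAS_DTYPE.get(pyarrow_type_name)
```

Per the note in the diff, the pyarrow-backed types themselves are only recognized when wrapped, e.g. ``pd.ArrowDtype(pa.bool_())``.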
null
https://api.github.com/repos/pandas-dev/pandas/pulls/50474
2022-12-28T20:59:43Z
2023-01-04T18:01:01Z
2023-01-04T18:01:01Z
2023-01-04T18:01:06Z
ENH: Add lazy copy for series.reorder_levels
diff --git a/pandas/core/series.py b/pandas/core/series.py index 42e41203cc31e..b5d73373f061e 100644 --- a/pandas/core/series.py +++ b/pandas/core/series.py @@ -4114,7 +4114,7 @@ def reorder_levels(self, order: Sequence[Level]) -> Series: if not isinstance(self.index, MultiIndex): # pragma: no cover raise Exception("Can only reorder levels on a hierarchical axis.") - result = self.copy() + result = self.copy(deep=None) assert isinstance(result.index, MultiIndex) result.index = result.index.reorder_levels(order) return result diff --git a/pandas/tests/copy_view/test_methods.py b/pandas/tests/copy_view/test_methods.py index 2b3d13b982d4d..a0201a1e82000 100644 --- a/pandas/tests/copy_view/test_methods.py +++ b/pandas/tests/copy_view/test_methods.py @@ -426,6 +426,25 @@ def test_reorder_levels(using_copy_on_write): tm.assert_frame_equal(df, df_orig) +def test_series_reorder_levels(using_copy_on_write): + index = MultiIndex.from_tuples( + [(1, 1), (1, 2), (2, 1), (2, 2)], names=["one", "two"] + ) + ser = Series([1, 2, 3, 4], index=index) + ser_orig = ser.copy() + ser2 = ser.reorder_levels(order=["two", "one"]) + + if using_copy_on_write: + assert np.shares_memory(ser2.values, ser.values) + else: + assert not np.shares_memory(ser2.values, ser.values) + + ser2.iloc[0] = 0 + if using_copy_on_write: + assert not np.shares_memory(ser2.values, ser.values) + tm.assert_series_equal(ser, ser_orig) + + @pytest.mark.parametrize("obj", [Series([1, 2, 3]), DataFrame({"a": [1, 2, 3]})]) def test_swaplevel(using_copy_on_write, obj): index = MultiIndex.from_tuples([(1, 1), (1, 2), (2, 1)], names=["one", "two"]) @@ -461,28 +480,6 @@ def test_frame_set_axis(using_copy_on_write): tm.assert_frame_equal(df, df_orig) -@pytest.mark.parametrize( - "func, tz", [("tz_convert", "Europe/Berlin"), ("tz_localize", None)] -) -def test_tz_convert_localize(using_copy_on_write, func, tz): - # GH 49473 - ser = Series( - [1, 2], index=date_range(start="2014-08-01 09:00", freq="H", periods=2, tz=tz) - ) - 
ser_orig = ser.copy() - ser2 = getattr(ser, func)("US/Central") - - if using_copy_on_write: - assert np.shares_memory(ser.values, ser2.values) - else: - assert not np.shares_memory(ser.values, ser2.values) - - # mutating ser triggers a copy-on-write for the column / block - ser2.iloc[0] = 0 - assert not np.shares_memory(ser2.values, ser.values) - tm.assert_series_equal(ser, ser_orig) - - def test_series_set_axis(using_copy_on_write): # GH 49473 ser = Series([1, 2, 3]) @@ -516,3 +513,25 @@ def test_rename_axis(using_copy_on_write, kwargs, copy_kwargs): if using_copy_on_write: assert not np.shares_memory(get_array(df2, "a"), get_array(df, "a")) tm.assert_frame_equal(df, df_orig) + + +@pytest.mark.parametrize( + "func, tz", [("tz_convert", "Europe/Berlin"), ("tz_localize", None)] +) +def test_tz_convert_localize(using_copy_on_write, func, tz): + # GH 49473 + ser = Series( + [1, 2], index=date_range(start="2014-08-01 09:00", freq="H", periods=2, tz=tz) + ) + ser_orig = ser.copy() + ser2 = getattr(ser, func)("US/Central") + + if using_copy_on_write: + assert np.shares_memory(ser.values, ser2.values) + else: + assert not np.shares_memory(ser.values, ser2.values) + + # mutating ser triggers a copy-on-write for the column / block + ser2.iloc[0] = 0 + assert not np.shares_memory(ser2.values, ser.values) + tm.assert_series_equal(ser, ser_orig)
- [ ] xref #49473 - [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature - [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit). - [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions. - [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
https://api.github.com/repos/pandas-dev/pandas/pulls/50472
2022-12-28T19:50:51Z
2023-01-03T13:22:05Z
2023-01-03T13:22:05Z
2023-01-03T21:47:51Z
DOC: Add note about copy on write improvements
diff --git a/doc/source/whatsnew/v2.0.0.rst b/doc/source/whatsnew/v2.0.0.rst index 22f6659367683..a01c31cd614b0 100644 --- a/doc/source/whatsnew/v2.0.0.rst +++ b/doc/source/whatsnew/v2.0.0.rst @@ -77,6 +77,47 @@ be set to ``"pyarrow"`` to return pyarrow-backed, nullable :class:`ArrowDtype` ( df_pyarrow = pd.read_csv(data, use_nullable_dtypes=True, engine="pyarrow") df_pyarrow.dtypes +Copy on write improvements +^^^^^^^^^^^^^^^^^^^^^^^^^^ + +A new lazy copy mechanism that defers the copy until the object in question is modified +was added to the following methods: + +- :meth:`DataFrame.reset_index` / :meth:`Series.reset_index` +- :meth:`DataFrame.set_index` / :meth:`Series.set_index` +- :meth:`DataFrame.set_axis` / :meth:`Series.set_axis` +- :meth:`DataFrame.rename_axis` / :meth:`Series.rename_axis` +- :meth:`DataFrame.rename_columns` +- :meth:`DataFrame.reindex` / :meth:`Series.reindex` +- :meth:`DataFrame.reindex_like` / :meth:`Series.reindex_like` +- :meth:`DataFrame.assign` +- :meth:`DataFrame.drop` +- :meth:`DataFrame.dropna` / :meth:`Series.dropna` +- :meth:`DataFrame.select_dtypes` +- :meth:`DataFrame.align` / :meth:`Series.align` +- :meth:`Series.to_frame` +- :meth:`DataFrame.rename` / :meth:`Series.rename` +- :meth:`DataFrame.add_prefix` / :meth:`Series.add_prefix` +- :meth:`DataFrame.add_suffix` / :meth:`Series.add_suffix` +- :meth:`DataFrame.drop_duplicates` / :meth:`Series.drop_duplicates` +- :meth:`DataFrame.reorder_levels` / :meth:`Series.reorder_levels` + +These methods return views when copy on write is enabled, which provides a significant +performance improvement compared to the regular execution (:issue:`49473`). Copy on write +can be enabled through + +.. code-block:: python + + pd.set_option("mode.copy_on_write", True) + pd.options.mode.copy_on_write = True + +Alternatively, copy on write can be enabled locally through: + +.. code-block:: python + + with pd.option_context("mode.copy_on_write", True): + ... + .. 
_whatsnew_200.enhancements.other: Other enhancements
- [ ] closes #xxxx (Replace xxxx with the GitHub issue number) - [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature - [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit). - [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions. - [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature. cc @mroeschke Pending: #50472, #50429, #50415
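The lazy-copy mechanism the whatsnew entry above describes can be sketched with a toy, stdlib-only container. This is a hypothetical illustration of the copy-on-write idea, not pandas' actual block/reference machinery: views share the underlying buffer, and the data is copied only on the first write while the buffer is still shared.

```python
import weakref


class CowArray:
    """Toy copy-on-write container: views share the buffer until written."""

    def __init__(self, data, _buffer=None, _refs=None):
        self._buffer = list(data) if _buffer is None else _buffer
        # weak references to every object currently sharing this buffer
        self._refs = weakref.WeakSet() if _refs is None else _refs
        self._refs.add(self)

    def view(self):
        # "lazy copy": no data is copied, just another handle on the buffer
        return CowArray(None, _buffer=self._buffer, _refs=self._refs)

    def __getitem__(self, i):
        return self._buffer[i]

    def __setitem__(self, i, value):
        if len(self._refs) > 1:
            # buffer is shared with another view -> copy before writing
            self._refs.discard(self)
            self._buffer = list(self._buffer)
            self._refs = weakref.WeakSet([self])
        self._buffer[i] = value
```

With this sketch, `b = a.view()` is free, and mutating `b` leaves `a` untouched — the behavior the listed DataFrame/Series methods gain under ``mode.copy_on_write``.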
https://api.github.com/repos/pandas-dev/pandas/pulls/50471
2022-12-28T19:47:45Z
2023-01-09T23:30:05Z
2023-01-09T23:30:05Z
2023-01-12T22:23:17Z
DOC: Add Copy on write whatsnew
diff --git a/doc/source/whatsnew/v1.5.0.rst b/doc/source/whatsnew/v1.5.0.rst index a1c374db91f8b..b61547d1523cf 100644 --- a/doc/source/whatsnew/v1.5.0.rst +++ b/doc/source/whatsnew/v1.5.0.rst @@ -290,6 +290,52 @@ and attributes without holding entire tree in memory (:issue:`45442`). .. _`lxml's iterparse`: https://lxml.de/3.2/parsing.html#iterparse-and-iterwalk .. _`etree's iterparse`: https://docs.python.org/3/library/xml.etree.elementtree.html#xml.etree.ElementTree.iterparse +.. _whatsnew_150.enhancements.copy_on_write: + +Copy on Write +^^^^^^^^^^^^^ + +A new feature ``copy_on_write`` was added (:issue:`46958`). Copy on write ensures that +any DataFrame or Series derived from another in any way always behaves as a copy. +Copy on write disallows updating any other object than the object the method +was applied to. + +Copy on write can be enabled through: + +.. code-block:: python + + pd.set_option("mode.copy_on_write", True) + pd.options.mode.copy_on_write = True + +Alternatively, copy on write can be enabled locally through: + +.. code-block:: python + + with pd.option_context("mode.copy_on_write", True): + ... + +Without copy on write, the parent :class:`DataFrame` is updated when updating a child +:class:`DataFrame` that was derived from this :class:`DataFrame`. + +.. ipython:: python + + df = pd.DataFrame({"foo": [1, 2, 3], "bar": 1}) + view = df["foo"] + view.iloc[0] + df + +With copy on write enabled, df won't be updated anymore: + +.. ipython:: python + + with pd.option_context("mode.copy_on_write", True): + df = pd.DataFrame({"foo": [1, 2, 3], "bar": 1}) + view = df["foo"] + view.iloc[0] + df + +A more detailed explanation can be found `here <https://phofl.github.io/cow-introduction.html>`_. + .. _whatsnew_150.enhancements.other: Other enhancements
- [ ] closes #xxxx (Replace xxxx with the GitHub issue number) - [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature - [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit). - [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions. - [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature. cc @mroeschke As discussed in the other PR. This is for 1.5; I will add another PR for the lazy copy methods. Do you think it makes sense to link https://phofl.github.io/cow-introduction.html instead of explaining everything in here?
https://api.github.com/repos/pandas-dev/pandas/pulls/50470
2022-12-28T19:27:09Z
2023-01-03T21:29:05Z
2023-01-03T21:29:05Z
2023-01-11T14:56:23Z
CLN: avoid NpyDatetimeUnit in tests
diff --git a/pandas/tests/arrays/test_datetimes.py b/pandas/tests/arrays/test_datetimes.py index f9c32108f0ef0..d9abaf85544af 100644 --- a/pandas/tests/arrays/test_datetimes.py +++ b/pandas/tests/arrays/test_datetimes.py @@ -16,7 +16,6 @@ npy_unit_to_abbrev, tz_compare, ) -from pandas._libs.tslibs.dtypes import NpyDatetimeUnit from pandas.core.dtypes.dtypes import DatetimeTZDtype @@ -34,15 +33,6 @@ def unit(self, request): """Fixture returning parametrized time units""" return request.param - @pytest.fixture - def reso(self, unit): - """Fixture returning datetime resolution for a given time unit""" - return { - "s": NpyDatetimeUnit.NPY_FR_s.value, - "ms": NpyDatetimeUnit.NPY_FR_ms.value, - "us": NpyDatetimeUnit.NPY_FR_us.value, - }[unit] - @pytest.fixture def dtype(self, unit, tz_naive_fixture): tz = tz_naive_fixture @@ -71,19 +61,19 @@ def dta(self, dta_dti): dta, dti = dta_dti return dta - def test_non_nano(self, unit, reso, dtype): + def test_non_nano(self, unit, dtype): arr = np.arange(5, dtype=np.int64).view(f"M8[{unit}]") dta = DatetimeArray._simple_new(arr, dtype=dtype) assert dta.dtype == dtype - assert dta[0]._creso == reso + assert dta[0].unit == unit assert tz_compare(dta.tz, dta[0].tz) assert (dta[0] == dta[:1]).all() @pytest.mark.parametrize( "field", DatetimeArray._field_ops + DatetimeArray._bool_ops ) - def test_fields(self, unit, reso, field, dtype, dta_dti): + def test_fields(self, unit, field, dtype, dta_dti): dta, dti = dta_dti assert (dti == dta).all() @@ -166,7 +156,7 @@ def test_time_date(self, dta_dti, meth): expected = getattr(dti, meth) tm.assert_numpy_array_equal(result, expected) - def test_format_native_types(self, unit, reso, dtype, dta_dti): + def test_format_native_types(self, unit, dtype, dta_dti): # In this case we should get the same formatted values with our nano # version dti._data as we do with the non-nano dta dta, dti = dta_dti diff --git a/pandas/tests/arrays/test_timedeltas.py b/pandas/tests/arrays/test_timedeltas.py index 
a6639a0388642..b322a5b57591a 100644 --- a/pandas/tests/arrays/test_timedeltas.py +++ b/pandas/tests/arrays/test_timedeltas.py @@ -3,8 +3,6 @@ import numpy as np import pytest -from pandas._libs.tslibs.dtypes import NpyDatetimeUnit - import pandas as pd from pandas import Timedelta import pandas._testing as tm @@ -19,28 +17,17 @@ class TestNonNano: def unit(self, request): return request.param - @pytest.fixture - def reso(self, unit): - if unit == "s": - return NpyDatetimeUnit.NPY_FR_s.value - elif unit == "ms": - return NpyDatetimeUnit.NPY_FR_ms.value - elif unit == "us": - return NpyDatetimeUnit.NPY_FR_us.value - else: - raise NotImplementedError(unit) - @pytest.fixture def tda(self, unit): arr = np.arange(5, dtype=np.int64).view(f"m8[{unit}]") return TimedeltaArray._simple_new(arr, dtype=arr.dtype) - def test_non_nano(self, unit, reso): + def test_non_nano(self, unit): arr = np.arange(5, dtype=np.int64).view(f"m8[{unit}]") tda = TimedeltaArray._simple_new(arr, dtype=arr.dtype) assert tda.dtype == arr.dtype - assert tda[0]._creso == reso + assert tda[0].unit == unit @pytest.mark.parametrize("field", TimedeltaArray._field_ops) def test_fields(self, tda, field): diff --git a/pandas/tests/tools/test_to_datetime.py b/pandas/tests/tools/test_to_datetime.py index 83e40f5f1d98b..7a3ed306bdcfc 100644 --- a/pandas/tests/tools/test_to_datetime.py +++ b/pandas/tests/tools/test_to_datetime.py @@ -22,7 +22,6 @@ iNaT, parsing, ) -from pandas._libs.tslibs.dtypes import NpyDatetimeUnit from pandas.errors import ( OutOfBoundsDatetime, OutOfBoundsTimedelta, @@ -862,7 +861,7 @@ def test_to_datetime_dt64s_out_of_bounds(self, cache, dt): # as of 2022-09-28, the Timestamp constructor has been updated # to cast to M8[s] but to_datetime has not ts = Timestamp(dt) - assert ts._creso == NpyDatetimeUnit.NPY_FR_s.value + assert ts.unit == "s" assert ts.asm8 == dt msg = "Out of bounds nanosecond timestamp"
Broken off from #50433.
https://api.github.com/repos/pandas-dev/pandas/pulls/50469
2022-12-28T15:33:09Z
2022-12-28T17:50:56Z
2022-12-28T17:50:56Z
2022-12-28T19:07:49Z
used regular expression in `format_is_iso()`
diff --git a/doc/source/whatsnew/v2.0.0.rst b/doc/source/whatsnew/v2.0.0.rst index b1387e9717079..22f6659367683 100644 --- a/doc/source/whatsnew/v2.0.0.rst +++ b/doc/source/whatsnew/v2.0.0.rst @@ -103,6 +103,7 @@ Other enhancements - :meth:`DataFrame.plot.hist` now recognizes ``xlabel`` and ``ylabel`` arguments (:issue:`49793`) - Improved error message in :func:`to_datetime` for non-ISO8601 formats, informing users about the position of the first error (:issue:`50361`) - Improved error message when trying to align :class:`DataFrame` objects (for example, in :func:`DataFrame.compare`) to clarify that "identically labelled" refers to both index and columns (:issue:`50083`) +- Performance improvement in :func:`to_datetime` when format is given or can be inferred (:issue:`50465`) - .. --------------------------------------------------------------------------- diff --git a/pandas/_libs/tslibs/strptime.pyx b/pandas/_libs/tslibs/strptime.pyx index 27e99706137b6..b88a43c7c2d93 100644 --- a/pandas/_libs/tslibs/strptime.pyx +++ b/pandas/_libs/tslibs/strptime.pyx @@ -14,6 +14,7 @@ from cpython.datetime cimport ( import_datetime() from _thread import allocate_lock as _thread_allocate_lock +import re import numpy as np import pytz @@ -50,6 +51,7 @@ from pandas._libs.util cimport ( is_float_object, is_integer_object, ) + from pandas._libs.tslibs.timestamps import Timestamp cnp.import_array() @@ -60,15 +62,23 @@ cdef bint format_is_iso(f: str): Generally of form YYYY-MM-DDTHH:MM:SS - date separator can be different but must be consistent. Leading 0s in dates and times are optional. """ + iso_regex = re.compile( + r""" + ^ # start of string + %Y # Year + (?:([-/ \\.]?)%m # month with or without separators + (?: \1%d # day with same separator as for year-month + (?:[ T]%H # hour with separator + (?:\:%M # minute with separator + (?:\:%S # second with separator + (?:%z|\.%f(?:%z)? # timezone or fractional second + )?)?)?)?)?)? 
# optional + $ # end of string + """, + re.VERBOSE, + ) excluded_formats = ["%Y%m"] - - for date_sep in [" ", "/", "\\", "-", ".", ""]: - for time_sep in [" ", "T"]: - for micro_or_tz in ["", "%z", ".%f", ".%f%z"]: - iso_fmt = f"%Y{date_sep}%m{date_sep}%d{time_sep}%H:%M:%S{micro_or_tz}" - if iso_fmt.startswith(f) and f not in excluded_formats: - return True - return False + return re.match(iso_regex, f) is not None and f not in excluded_formats def _test_format_is_iso(f: str) -> bool:
- [x] closes #50465 (Replace xxxx with the GitHub issue number) - [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature - [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit). - [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions. - [x] Added an entry in the latest `doc/source/whatsnew/v2.0.0.rst` file if fixing a bug or adding a new feature. Added regular expressions in the `format_is_iso()` function ``` python def format_is_iso(f: str) -> bint: """ Does format match the iso8601 set that can be handled by the C parser? Generally of form YYYY-MM-DDTHH:MM:SS - date separator can be different but must be consistent. Leading 0s in dates and times are optional. """ iso_regex = \ r"^\d{4}(-\d{2}(-\d{2}(T\d{2}:\d{2}:\d{2}(\.\d+)?(([+-]\d{2}:\d{2})|Z)?)?)?)?$" excluded_formats = ["%Y%m%d", "%Y%m", "%Y"] if re.match(iso_regex, f) and f not in excluded_formats: return True return False ```
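The single-regex approach from this PR can be sketched as a standalone function. This is a hypothetical reimplementation mirroring the VERBOSE pattern in the diff, not the pandas-internal Cython version: one anchored regex over the *format string* replaces the nested loops over separator combinations, with the same backreference trick forcing the year-month and month-day separators to match.

```python
import re

# VERBOSE regex over strftime-style format strings; whitespace inside the
# character classes ([ T] and the separator class) is significant even in
# VERBOSE mode.
ISO_FORMAT_RE = re.compile(
    r"""
    ^                        # start of format string
    %Y                       # year
    (?:([-/ \\.]?)%m         # month, optional separator captured in group 1
    (?:\1%d                  # day, same separator as year-month
    (?:[ T]%H                # hour, preceded by space or 'T'
    (?::%M                   # minute
    (?::%S                   # second
    (?:%z|\.%f(?:%z)?        # timezone or fractional second
    )?)?)?)?)?)?
    $                        # end of format string
    """,
    re.VERBOSE,
)

EXCLUDED_FORMATS = {"%Y%m"}


def format_is_iso(fmt):
    """Does the format match the ISO8601 set handled by the C parser?"""
    return ISO_FORMAT_RE.match(fmt) is not None and fmt not in EXCLUDED_FORMATS
```

Note that, as in the original loop-based check, prefixes such as ``%Y-%m-%d %H:%M`` still count as ISO; only ``%Y%m`` is explicitly excluded.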
https://api.github.com/repos/pandas-dev/pandas/pulls/50468
2022-12-28T15:17:24Z
2023-01-02T20:57:02Z
2023-01-02T20:57:02Z
2023-01-02T20:57:02Z
CI try saving .coverage
diff --git a/.github/workflows/32-bit-linux.yml b/.github/workflows/32-bit-linux.yml deleted file mode 100644 index 438d2c7b4174e..0000000000000 --- a/.github/workflows/32-bit-linux.yml +++ /dev/null @@ -1,54 +0,0 @@ -name: 32 Bit Linux - -on: - push: - branches: - - main - - 1.5.x - pull_request: - branches: - - main - - 1.5.x - paths-ignore: - - "doc/**" - -permissions: - contents: read - -jobs: - pytest: - runs-on: ubuntu-22.04 - steps: - - name: Checkout - uses: actions/checkout@v3 - with: - fetch-depth: 0 - - - name: Run 32-bit manylinux2014 Docker Build / Tests - run: | - # Without this (line 34), versioneer will not be able to determine the pandas version. - # This is because of a security update to git that blocks it from reading the config folder if - # it is not owned by the current user. We hit this since the "mounted" folder is not hit by the - # Docker container. - # xref https://github.com/pypa/manylinux/issues/1309 - docker pull quay.io/pypa/manylinux2014_i686 - docker run --platform linux/386 -v $(pwd):/pandas quay.io/pypa/manylinux2014_i686 \ - /bin/bash -xc "cd pandas && \ - git config --global --add safe.directory /pandas && \ - /opt/python/cp38-cp38/bin/python -m venv ~/virtualenvs/pandas-dev && \ - . ~/virtualenvs/pandas-dev/bin/activate && \ - python -m pip install --no-deps -U pip wheel 'setuptools<60.0.0' && \ - python -m pip install versioneer[toml] && \ - python -m pip install cython numpy python-dateutil pytz pytest pytest-xdist pytest-asyncio>=0.17 hypothesis && \ - python setup.py build_ext -q -j1 && \ - python -m pip install --no-build-isolation --no-use-pep517 -e . 
&& \ - python -m pip list && \ - export PANDAS_CI=1 && \ - pytest -m 'not slow and not network and not clipboard and not single_cpu' pandas --junitxml=test-data.xml" - - - name: Publish test results for Python 3.8-32 bit full Linux - uses: actions/upload-artifact@v3 - with: - name: Test results - path: test-data.xml - if: failure() diff --git a/.github/workflows/asv-bot.yml b/.github/workflows/asv-bot.yml deleted file mode 100644 index d264698e60485..0000000000000 --- a/.github/workflows/asv-bot.yml +++ /dev/null @@ -1,78 +0,0 @@ -name: "ASV Bot" - -on: - issue_comment: # Pull requests are issues - types: - - created - -env: - ENV_FILE: environment.yml - COMMENT: ${{github.event.comment.body}} - -permissions: - contents: read - -jobs: - autotune: - permissions: - contents: read - issues: write - pull-requests: write - name: "Run benchmarks" - # TODO: Support more benchmarking options later, against different branches, against self, etc - if: startsWith(github.event.comment.body, '@github-actions benchmark') - runs-on: ubuntu-22.04 - defaults: - run: - shell: bash -el {0} - - concurrency: - # Set concurrency to prevent abuse(full runs are ~5.5 hours !!!) 
- # each user can only run one concurrent benchmark bot at a time - # We don't cancel in progress jobs, but if you want to benchmark multiple PRs, you're gonna have - # to wait - group: ${{ github.actor }}-asv - cancel-in-progress: false - - steps: - - name: Checkout - uses: actions/checkout@v3 - with: - fetch-depth: 0 - - # Although asv sets up its own env, deps are still needed - # during discovery process - - name: Set up Conda - uses: ./.github/actions/setup-conda - - - name: Run benchmarks - id: bench - continue-on-error: true # This is a fake failure, asv will exit code 1 for regressions - run: | - # extracting the regex, see https://stackoverflow.com/a/36798723 - REGEX=$(echo "$COMMENT" | sed -n "s/^.*-b\s*\(\S*\).*$/\1/p") - cd asv_bench - asv check -E existing - git remote add upstream https://github.com/pandas-dev/pandas.git - git fetch upstream - asv machine --yes - asv continuous -f 1.1 -b $REGEX upstream/main HEAD - echo 'BENCH_OUTPUT<<EOF' >> $GITHUB_ENV - asv compare -f 1.1 upstream/main HEAD >> $GITHUB_ENV - echo 'EOF' >> $GITHUB_ENV - echo "REGEX=$REGEX" >> $GITHUB_ENV - - - uses: actions/github-script@v6 - env: - BENCH_OUTPUT: ${{env.BENCH_OUTPUT}} - REGEX: ${{env.REGEX}} - with: - script: | - const ENV_VARS = process.env - const run_url = `https://github.com/${context.repo.owner}/${context.repo.repo}/actions/runs/${context.runId}` - github.rest.issues.createComment({ - issue_number: context.issue.number, - owner: context.repo.owner, - repo: context.repo.repo, - body: '\nBenchmarks completed. View runner logs here.' 
+ run_url + '\nRegex used: '+ 'regex ' + ENV_VARS["REGEX"] + '\n' + ENV_VARS["BENCH_OUTPUT"] - }) diff --git a/.github/workflows/code-checks.yml b/.github/workflows/code-checks.yml deleted file mode 100644 index 280b6ed601f08..0000000000000 --- a/.github/workflows/code-checks.yml +++ /dev/null @@ -1,190 +0,0 @@ -name: Code Checks - -on: - push: - branches: - - main - - 1.5.x - pull_request: - branches: - - main - - 1.5.x - -env: - ENV_FILE: environment.yml - PANDAS_CI: 1 - -permissions: - contents: read - -jobs: - pre_commit: - name: pre-commit - runs-on: ubuntu-22.04 - concurrency: - # https://github.community/t/concurrecy-not-work-for-push/183068/7 - group: ${{ github.event_name == 'push' && github.run_number || github.ref }}-pre-commit - cancel-in-progress: true - steps: - - name: Checkout - uses: actions/checkout@v3 - - - name: Install Python - uses: actions/setup-python@v4 - with: - python-version: '3.9' - - - name: Run pre-commit - uses: pre-commit/action@v2.0.3 - with: - extra_args: --verbose --all-files - - docstring_typing_manual_hooks: - name: Docstring validation, typing, and other manual pre-commit hooks - runs-on: ubuntu-22.04 - defaults: - run: - shell: bash -el {0} - - concurrency: - # https://github.community/t/concurrecy-not-work-for-push/183068/7 - group: ${{ github.event_name == 'push' && github.run_number || github.ref }}-code-checks - cancel-in-progress: true - - steps: - - name: Checkout - uses: actions/checkout@v3 - with: - fetch-depth: 0 - - - name: Set up Conda - uses: ./.github/actions/setup-conda - - - name: Build Pandas - id: build - uses: ./.github/actions/build_pandas - - # The following checks are independent of each other and should still be run if one fails - - name: Check for no warnings when building single-page docs - run: ci/code_checks.sh single-docs - if: ${{ steps.build.outcome == 'success' && always() }} - - - name: Run checks on imported code - run: ci/code_checks.sh code - if: ${{ steps.build.outcome == 'success' && 
always() }} - - - name: Run doctests - run: ci/code_checks.sh doctests - if: ${{ steps.build.outcome == 'success' && always() }} - - - name: Run docstring validation - run: ci/code_checks.sh docstrings - if: ${{ steps.build.outcome == 'success' && always() }} - - - name: Run check of documentation notebooks - run: ci/code_checks.sh notebooks - if: ${{ steps.build.outcome == 'success' && always() }} - - - name: Use existing environment for type checking - run: | - echo $PATH >> $GITHUB_PATH - echo "PYTHONHOME=$PYTHONHOME" >> $GITHUB_ENV - echo "PYTHONPATH=$PYTHONPATH" >> $GITHUB_ENV - if: ${{ steps.build.outcome == 'success' && always() }} - - - name: Typing + pylint - uses: pre-commit/action@v2.0.3 - with: - extra_args: --verbose --hook-stage manual --all-files - if: ${{ steps.build.outcome == 'success' && always() }} - - - name: Run docstring validation script tests - run: pytest scripts - if: ${{ steps.build.outcome == 'success' && always() }} - - asv-benchmarks: - name: ASV Benchmarks - runs-on: ubuntu-22.04 - defaults: - run: - shell: bash -el {0} - - concurrency: - # https://github.community/t/concurrecy-not-work-for-push/183068/7 - group: ${{ github.event_name == 'push' && github.run_number || github.ref }}-asv-benchmarks - cancel-in-progress: true - - steps: - - name: Checkout - uses: actions/checkout@v3 - with: - fetch-depth: 0 - - - name: Set up Conda - uses: ./.github/actions/setup-conda - - - name: Build Pandas - id: build - uses: ./.github/actions/build_pandas - - - name: Run ASV benchmarks - run: | - cd asv_bench - asv machine --yes - asv run --quick --dry-run --strict --durations=30 --python=same - - build_docker_dev_environment: - name: Build Docker Dev Environment - runs-on: ubuntu-22.04 - defaults: - run: - shell: bash -el {0} - - concurrency: - # https://github.community/t/concurrecy-not-work-for-push/183068/7 - group: ${{ github.event_name == 'push' && github.run_number || github.ref }}-build_docker_dev_environment - cancel-in-progress: true - - 
steps: - - name: Clean up dangling images - run: docker image prune -f - - - name: Checkout - uses: actions/checkout@v3 - with: - fetch-depth: 0 - - - name: Build image - run: docker build --pull --no-cache --tag pandas-dev-env . - - - name: Show environment - run: docker run --rm pandas-dev-env python -c "import pandas as pd; print(pd.show_versions())" - - requirements-dev-text-installable: - name: Test install requirements-dev.txt - runs-on: ubuntu-22.04 - - concurrency: - # https://github.community/t/concurrecy-not-work-for-push/183068/7 - group: ${{ github.event_name == 'push' && github.run_number || github.ref }}-requirements-dev-text-installable - cancel-in-progress: true - - steps: - - name: Checkout - uses: actions/checkout@v3 - with: - fetch-depth: 0 - - - name: Setup Python - id: setup_python - uses: actions/setup-python@v4 - with: - python-version: '3.8' - cache: 'pip' - cache-dependency-path: 'requirements-dev.txt' - - - name: Install requirements-dev.txt - run: pip install -r requirements-dev.txt - - - name: Check Pip Cache Hit - run: echo ${{ steps.setup_python.outputs.cache-hit }} diff --git a/.github/workflows/docbuild-and-upload.yml b/.github/workflows/docbuild-and-upload.yml deleted file mode 100644 index ee79c10c12d4e..0000000000000 --- a/.github/workflows/docbuild-and-upload.yml +++ /dev/null @@ -1,88 +0,0 @@ -name: Doc Build and Upload - -on: - push: - branches: - - main - - 1.5.x - tags: - - '*' - pull_request: - branches: - - main - - 1.5.x - -env: - ENV_FILE: environment.yml - PANDAS_CI: 1 - -permissions: - contents: read - -jobs: - web_and_docs: - name: Doc Build and Upload - runs-on: ubuntu-22.04 - - concurrency: - # https://github.community/t/concurrecy-not-work-for-push/183068/7 - group: ${{ github.event_name == 'push' && github.run_number || github.ref }}-web-docs - cancel-in-progress: true - - defaults: - run: - shell: bash -el {0} - - steps: - - name: Checkout - uses: actions/checkout@v3 - with: - fetch-depth: 0 - - - name: Set up 
Conda - uses: ./.github/actions/setup-conda - - - name: Build Pandas - uses: ./.github/actions/build_pandas - - - name: Build website - run: python web/pandas_web.py web/pandas --target-path=web/build - - - name: Build documentation - run: doc/make.py --warnings-are-errors - - - name: Build documentation zip - run: doc/make.py zip_html - - - name: Install ssh key - run: | - mkdir -m 700 -p ~/.ssh - echo "${{ secrets.server_ssh_key }}" > ~/.ssh/id_rsa - chmod 600 ~/.ssh/id_rsa - echo "${{ secrets.server_ip }} ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFjYkJBk7sos+r7yATODogQc3jUdW1aascGpyOD4bohj8dWjzwLJv/OJ/fyOQ5lmj81WKDk67tGtqNJYGL9acII=" > ~/.ssh/known_hosts - if: github.event_name == 'push' && (github.ref == 'refs/heads/main' || startsWith(github.ref, 'refs/tags/')) - - - name: Copy cheatsheets into site directory - run: cp doc/cheatsheet/Pandas_Cheat_Sheet* web/build/ - - - name: Upload web - run: rsync -az --delete --exclude='pandas-docs' --exclude='docs' web/build/ web@${{ secrets.server_ip }}:/var/www/html - if: github.event_name == 'push' && github.ref == 'refs/heads/main' - - - name: Upload dev docs - run: rsync -az --delete doc/build/html/ web@${{ secrets.server_ip }}:/var/www/html/pandas-docs/dev - if: github.event_name == 'push' && github.ref == 'refs/heads/main' - - - name: Upload prod docs - run: rsync -az --delete doc/build/html/ web@${{ secrets.server_ip }}:/var/www/html/pandas-docs/version/${GITHUB_REF_NAME:1} - if: github.event_name == 'push' && startsWith(github.ref, 'refs/tags/') - - - name: Move docs into site directory - run: mv doc/build/html web/build/docs - - - name: Save website as an artifact - uses: actions/upload-artifact@v3 - with: - name: website - path: web/build - retention-days: 14 diff --git a/.github/workflows/macos-windows.yml b/.github/workflows/macos-windows.yml deleted file mode 100644 index 5efc1aa67b4cd..0000000000000 --- a/.github/workflows/macos-windows.yml +++ /dev/null @@ -1,62 +0,0 @@ -name: 
Windows-macOS - -on: - push: - branches: - - main - - 1.5.x - pull_request: - branches: - - main - - 1.5.x - paths-ignore: - - "doc/**" - -env: - PANDAS_CI: 1 - PYTEST_TARGET: pandas - PATTERN: "not slow and not db and not network and not single_cpu" - TEST_ARGS: "-W error:::pandas" - - -permissions: - contents: read - -jobs: - pytest: - defaults: - run: - shell: bash -el {0} - timeout-minutes: 180 - strategy: - matrix: - os: [macos-latest, windows-latest] - env_file: [actions-38.yaml, actions-39.yaml, actions-310.yaml] - fail-fast: false - runs-on: ${{ matrix.os }} - name: ${{ format('{0} {1}', matrix.os, matrix.env_file) }} - concurrency: - # https://github.community/t/concurrecy-not-work-for-push/183068/7 - group: ${{ github.event_name == 'push' && github.run_number || github.ref }}-${{ matrix.env_file }}-${{ matrix.os }} - cancel-in-progress: true - env: - # GH 47443: PYTEST_WORKERS > 1 crashes Windows builds with memory related errors - PYTEST_WORKERS: ${{ matrix.os == 'macos-latest' && 'auto' || '1' }} - - steps: - - name: Checkout - uses: actions/checkout@v3 - with: - fetch-depth: 0 - - - name: Set up Conda - uses: ./.github/actions/setup-conda - with: - environment-file: ci/deps/${{ matrix.env_file }} - pyarrow-version: ${{ matrix.os == 'macos-latest' && '9' || '' }} - - - name: Build Pandas - uses: ./.github/actions/build_pandas - - - name: Test - uses: ./.github/actions/run-tests diff --git a/.github/workflows/python-dev.yml b/.github/workflows/python-dev.yml index 220c1e464742e..6d0fb29867a9f 100644 --- a/.github/workflows/python-dev.yml +++ b/.github/workflows/python-dev.yml @@ -91,3 +91,10 @@ jobs: - name: Test uses: ./.github/actions/run-tests + + - name: Save coverage sqlite database as an artifact + uses: actions/upload-artifact@v3 + with: + name: coverage sqlite database + path: .coverage + retention-days: 14 diff --git a/.github/workflows/ubuntu.yml b/.github/workflows/ubuntu.yml deleted file mode 100644 index 7dbf74278d433..0000000000000 --- 
a/.github/workflows/ubuntu.yml +++ /dev/null @@ -1,178 +0,0 @@ -name: Ubuntu - -on: - push: - branches: - - main - - 1.5.x - pull_request: - branches: - - main - - 1.5.x - paths-ignore: - - "doc/**" - -env: - PANDAS_CI: 1 - -permissions: - contents: read - -jobs: - pytest: - runs-on: ubuntu-22.04 - defaults: - run: - shell: bash -el {0} - timeout-minutes: 180 - strategy: - matrix: - env_file: [actions-38.yaml, actions-39.yaml, actions-310.yaml] - pattern: ["not single_cpu", "single_cpu"] - pyarrow_version: ["7", "8", "9", "10"] - include: - - name: "Downstream Compat" - env_file: actions-38-downstream_compat.yaml - pattern: "not slow and not network and not single_cpu" - pytest_target: "pandas/tests/test_downstream.py" - - name: "Minimum Versions" - env_file: actions-38-minimum_versions.yaml - pattern: "not slow and not network and not single_cpu" - test_args: "" - - name: "Locale: it_IT" - env_file: actions-38.yaml - pattern: "not slow and not network and not single_cpu" - extra_apt: "language-pack-it" - # Use the utf8 version as the default, it has no bad side-effect. - lang: "it_IT.utf8" - lc_all: "it_IT.utf8" - # Also install it_IT (its encoding is ISO8859-1) but do not activate it. - # It will be temporarily activated during tests with locale.setlocale - extra_loc: "it_IT" - - name: "Locale: zh_CN" - env_file: actions-38.yaml - pattern: "not slow and not network and not single_cpu" - extra_apt: "language-pack-zh-hans" - # Use the utf8 version as the default, it has no bad side-effect. - lang: "zh_CN.utf8" - lc_all: "zh_CN.utf8" - # Also install zh_CN (its encoding is gb2312) but do not activate it. 
- # It will be temporarily activated during tests with locale.setlocale - extra_loc: "zh_CN" - - name: "Copy-on-Write" - env_file: actions-310.yaml - pattern: "not slow and not network and not single_cpu" - pandas_copy_on_write: "1" - test_args: "" - - name: "Data Manager" - env_file: actions-38.yaml - pattern: "not slow and not network and not single_cpu" - pandas_data_manager: "array" - test_args: "" - - name: "Pypy" - env_file: actions-pypy-38.yaml - pattern: "not slow and not network and not single_cpu" - test_args: "--max-worker-restart 0" - - name: "Numpy Dev" - env_file: actions-310-numpydev.yaml - pattern: "not slow and not network and not single_cpu" - test_args: "-W error::DeprecationWarning:numpy -W error::FutureWarning:numpy" - exclude: - - env_file: actions-38.yaml - pyarrow_version: "7" - - env_file: actions-38.yaml - pyarrow_version: "8" - - env_file: actions-38.yaml - pyarrow_version: "9" - - env_file: actions-39.yaml - pyarrow_version: "7" - - env_file: actions-39.yaml - pyarrow_version: "8" - - env_file: actions-39.yaml - pyarrow_version: "9" - fail-fast: false - name: ${{ matrix.name || format('{0} pyarrow={1} {2}', matrix.env_file, matrix.pyarrow_version, matrix.pattern) }} - env: - ENV_FILE: ci/deps/${{ matrix.env_file }} - PATTERN: ${{ matrix.pattern }} - EXTRA_APT: ${{ matrix.extra_apt || '' }} - LANG: ${{ matrix.lang || '' }} - LC_ALL: ${{ matrix.lc_all || '' }} - PANDAS_DATA_MANAGER: ${{ matrix.pandas_data_manager || 'block' }} - PANDAS_COPY_ON_WRITE: ${{ matrix.pandas_copy_on_write || '0' }} - TEST_ARGS: ${{ matrix.test_args || '-W error:::pandas' }} - PYTEST_WORKERS: ${{ contains(matrix.pattern, 'not single_cpu') && 'auto' || '1' }} - PYTEST_TARGET: ${{ matrix.pytest_target || 'pandas' }} - IS_PYPY: ${{ contains(matrix.env_file, 'pypy') }} - # TODO: re-enable coverage on pypy, its slow - COVERAGE: ${{ !contains(matrix.env_file, 'pypy') }} - concurrency: - # https://github.community/t/concurrecy-not-work-for-push/183068/7 - group: ${{ 
github.event_name == 'push' && github.run_number || github.ref }}-${{ matrix.env_file }}-${{ matrix.pattern }}-${{ matrix.pyarrow_version || '' }}-${{ matrix.extra_apt || '' }}-${{ matrix.pandas_data_manager || '' }} - cancel-in-progress: true - - services: - mysql: - image: mysql - env: - MYSQL_ALLOW_EMPTY_PASSWORD: yes - MYSQL_DATABASE: pandas - options: >- - --health-cmd "mysqladmin ping" - --health-interval 10s - --health-timeout 5s - --health-retries 5 - ports: - - 3306:3306 - - postgres: - image: postgres - env: - POSTGRES_USER: postgres - POSTGRES_PASSWORD: postgres - POSTGRES_DB: pandas - options: >- - --health-cmd pg_isready - --health-interval 10s - --health-timeout 5s - --health-retries 5 - ports: - - 5432:5432 - - moto: - image: motoserver/moto - env: - AWS_ACCESS_KEY_ID: foobar_key - AWS_SECRET_ACCESS_KEY: foobar_secret - ports: - - 5000:5000 - - steps: - - name: Checkout - uses: actions/checkout@v3 - with: - fetch-depth: 0 - - - name: Extra installs - # xsel for clipboard tests - run: sudo apt-get update && sudo apt-get install -y xsel ${{ env.EXTRA_APT }} - - - name: Generate extra locales - # These extra locales will be available for locale.setlocale() calls in tests - run: | - sudo locale-gen ${{ matrix.extra_loc }} - if: ${{ matrix.extra_loc }} - - - name: Set up Conda - uses: ./.github/actions/setup-conda - with: - environment-file: ${{ env.ENV_FILE }} - pyarrow-version: ${{ matrix.pyarrow_version }} - - - name: Build Pandas - uses: ./.github/actions/build_pandas - - - name: Test - uses: ./.github/actions/run-tests - # TODO: Don't continue on error for PyPy - continue-on-error: ${{ env.IS_PYPY == 'true' }} diff --git a/pyproject.toml b/pyproject.toml index 385c1beb08121..db03dc31acd45 100644 --- a/pyproject.toml +++ b/pyproject.toml @@ -428,6 +428,7 @@ reportUnboundVariable = false [tool.coverage.run] branch = true +dynamic_context = "test_function" omit = ["pandas/_typing.py", "pandas/_version.py"] plugins = ["Cython.Coverage"] source = 
["pandas"]
- [ ] closes #xxxx (Replace xxxx with the GitHub issue number) - [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature - [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit). - [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions. - [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature. just checking how long this takes / if it's feasible to do this in CI
https://api.github.com/repos/pandas-dev/pandas/pulls/50466
2022-12-28T14:13:52Z
2022-12-29T10:30:04Z
null
2022-12-29T10:30:05Z
ERR: "day out of range" doesn't show position of error
diff --git a/pandas/_libs/tslib.pyx b/pandas/_libs/tslib.pyx index 976a53e9117de..48e855f7e9905 100644 --- a/pandas/_libs/tslib.pyx +++ b/pandas/_libs/tslib.pyx @@ -508,8 +508,8 @@ cpdef array_to_datetime( continue elif is_raise: raise ValueError( - f"time data \"{val}\" at position {i} doesn't " - f"match format \"{format}\"" + f"time data \"{val}\" doesn't " + f"match format \"{format}\", at position {i}" ) return values, tz_out # these must be ns unit by-definition @@ -557,8 +557,8 @@ cpdef array_to_datetime( continue elif is_raise: raise ValueError( - f"time data \"{val}\" at position {i} doesn't " - f"match format \"{format}\"" + f"time data \"{val}\" doesn't " + f"match format \"{format}\", at position {i}" ) return values, tz_out @@ -575,8 +575,8 @@ cpdef array_to_datetime( iresult[i] = NPY_NAT continue raise TypeError( - f"invalid string coercion to datetime for \"{val}\" " - f"at position {i}" + f"invalid string coercion to datetime " + f"for \"{val}\", at position {i}" ) if tz is not None: @@ -619,7 +619,7 @@ cpdef array_to_datetime( raise TypeError(f"{type(val)} is not convertible to datetime") except OutOfBoundsDatetime as ex: - ex.args = (str(ex) + f" present at position {i}", ) + ex.args = (f"{ex}, at position {i}",) if is_coerce: iresult[i] = NPY_NAT continue @@ -779,7 +779,7 @@ cdef _array_to_datetime_object( pydatetime_to_dt64(oresult[i], &dts) check_dts_bounds(&dts) except (ValueError, OverflowError) as ex: - ex.args = (f"{ex} present at position {i}", ) + ex.args = (f"{ex}, at position {i}", ) if is_coerce: oresult[i] = <object>NaT continue diff --git a/pandas/_libs/tslibs/strptime.pyx b/pandas/_libs/tslibs/strptime.pyx index c1bc5fd0910f8..a863824d92cc7 100644 --- a/pandas/_libs/tslibs/strptime.pyx +++ b/pandas/_libs/tslibs/strptime.pyx @@ -236,11 +236,11 @@ def array_strptime( if exact: found = format_regex.match(val) if not found: - raise ValueError(f"time data \"{val}\" at position {i} doesn't " + raise ValueError(f"time data \"{val}\" 
doesn't " f"match format \"{fmt}\"") if len(val) != found.end(): raise ValueError( - f"unconverted data remains at position {i}: " + f"unconverted data remains: " f'"{val[found.end():]}"' ) @@ -249,7 +249,7 @@ def array_strptime( found = format_regex.search(val) if not found: raise ValueError( - f"time data \"{val}\" at position {i} doesn't match " + f"time data \"{val}\" doesn't match " f"format \"{fmt}\"" ) @@ -402,8 +402,7 @@ def array_strptime( result_timezone[i] = tz except (ValueError, OutOfBoundsDatetime) as ex: - if isinstance(ex, OutOfBoundsDatetime): - ex.args = (f"{str(ex)} present at position {i}",) + ex.args = (f"{str(ex)}, at position {i}",) if is_coerce: iresult[i] = NPY_NAT continue diff --git a/pandas/tests/frame/test_block_internals.py b/pandas/tests/frame/test_block_internals.py index a7f9b14f44674..f2de6b607d737 100644 --- a/pandas/tests/frame/test_block_internals.py +++ b/pandas/tests/frame/test_block_internals.py @@ -259,7 +259,7 @@ def f(dtype): f("float64") # 10822 - msg = "Unknown string format: aa present at position 0" + msg = "^Unknown string format: aa, at position 0$" with pytest.raises(ValueError, match=msg): f("M8[ns]") diff --git a/pandas/tests/indexes/datetimes/test_constructors.py b/pandas/tests/indexes/datetimes/test_constructors.py index e29c10ef6e58a..f962a552d9009 100644 --- a/pandas/tests/indexes/datetimes/test_constructors.py +++ b/pandas/tests/indexes/datetimes/test_constructors.py @@ -547,7 +547,7 @@ def test_construction_outofbounds(self): # coerces to object tm.assert_index_equal(Index(dates), exp) - msg = "Out of bounds .* present at position 0" + msg = "^Out of bounds nanosecond timestamp: 3000-01-01 00:00:00, at position 0$" with pytest.raises(OutOfBoundsDatetime, match=msg): # can't create DatetimeIndex DatetimeIndex(dates) diff --git a/pandas/tests/indexes/datetimes/test_scalar_compat.py b/pandas/tests/indexes/datetimes/test_scalar_compat.py index 42aba136f378d..be05a649ec0b6 100644 --- 
a/pandas/tests/indexes/datetimes/test_scalar_compat.py +++ b/pandas/tests/indexes/datetimes/test_scalar_compat.py @@ -38,7 +38,7 @@ def test_dti_date(self): @pytest.mark.parametrize("data", [["1400-01-01"], [datetime(1400, 1, 1)]]) def test_dti_date_out_of_range(self, data): # GH#1475 - msg = "Out of bounds .* present at position 0" + msg = "^Out of bounds nanosecond timestamp: 1400-01-01 00:00:00, at position 0$" with pytest.raises(OutOfBoundsDatetime, match=msg): DatetimeIndex(data) diff --git a/pandas/tests/io/parser/test_parse_dates.py b/pandas/tests/io/parser/test_parse_dates.py index 766b8fe805419..ee9314c8779dd 100644 --- a/pandas/tests/io/parser/test_parse_dates.py +++ b/pandas/tests/io/parser/test_parse_dates.py @@ -1721,7 +1721,7 @@ def test_parse_multiple_delimited_dates_with_swap_warnings(): with pytest.raises( ValueError, match=( - r'^time data "31/05/2000" at position 1 doesn\'t match format "%m/%d/%Y"$' + r'^time data "31/05/2000" doesn\'t match format "%m/%d/%Y", at position 1$' ), ): pd.to_datetime(["01/01/2000", "31/05/2000", "31/05/2001", "01/02/2000"]) diff --git a/pandas/tests/series/test_constructors.py b/pandas/tests/series/test_constructors.py index a4e82838b61d3..f856f18552594 100644 --- a/pandas/tests/series/test_constructors.py +++ b/pandas/tests/series/test_constructors.py @@ -76,7 +76,7 @@ def test_infer_with_date_and_datetime(self): def test_unparseable_strings_with_dt64_dtype(self): # pre-2.0 these would be silently ignored and come back with object dtype vals = ["aa"] - msg = "Unknown string format: aa present at position 0" + msg = "^Unknown string format: aa, at position 0$" with pytest.raises(ValueError, match=msg): Series(vals, dtype="datetime64[ns]") diff --git a/pandas/tests/tools/test_to_datetime.py b/pandas/tests/tools/test_to_datetime.py index 927388408cf27..640daa09f5eee 100644 --- a/pandas/tests/tools/test_to_datetime.py +++ b/pandas/tests/tools/test_to_datetime.py @@ -481,8 +481,8 @@ def 
test_to_datetime_parse_timezone_malformed(self, offset): msg = "|".join( [ - r'^time data ".*" at position 0 doesn\'t match format ".*"$', - r'^unconverted data remains at position 0: ".*"$', + r'^time data ".*" doesn\'t match format ".*", at position 0$', + r'^unconverted data remains: ".*", at position 0$', ] ) with pytest.raises(ValueError, match=msg): @@ -859,7 +859,7 @@ def test_to_datetime_dt64s_and_str(self, arg, format): "dt", [np.datetime64("1000-01-01"), np.datetime64("5000-01-02")] ) def test_to_datetime_dt64s_out_of_bounds(self, cache, dt): - msg = "Out of bounds .* present at position 0" + msg = "^Out of bounds nanosecond timestamp: .*, at position 0$" with pytest.raises(OutOfBoundsDatetime, match=msg): to_datetime(dt, errors="raise") @@ -1098,7 +1098,7 @@ def test_datetime_bool_arrays_mixed(self, cache): to_datetime([False, datetime.today()], cache=cache) with pytest.raises( ValueError, - match=r'^time data "True" at position 1 doesn\'t match format "%Y%m%d"$', + match=r'^time data "True" doesn\'t match format "%Y%m%d", at position 1$', ): to_datetime(["20130101", True], cache=cache) tm.assert_index_equal( @@ -1139,10 +1139,10 @@ def test_datetime_invalid_scalar(self, value, format, warning): msg = "|".join( [ - r'^time data "a" at position 0 doesn\'t match format "%H:%M:%S"$', - r'^Given date string "a" not likely a datetime present at position 0$', - r'^unconverted data remains at position 0: "9"$', - r"^second must be in 0..59: 00:01:99 present at position 0$", + r'^time data "a" doesn\'t match format "%H:%M:%S", at position 0$', + r'^Given date string "a" not likely a datetime, at position 0$', + r'^unconverted data remains: "9", at position 0$', + r"^second must be in 0..59: 00:01:99, at position 0$", ] ) with pytest.raises(ValueError, match=msg): @@ -1164,11 +1164,11 @@ def test_datetime_outofbounds_scalar(self, value, format, warning): assert res is NaT if format is not None: - msg = r'^time data ".*" at position 0 doesn\'t match format ".*"$' 
+ msg = r'^time data ".*" doesn\'t match format ".*", at position 0$' with pytest.raises(ValueError, match=msg): to_datetime(value, errors="raise", format=format) else: - msg = "Out of bounds .* present at position 0" + msg = "^Out of bounds .*, at position 0$" with pytest.raises( OutOfBoundsDatetime, match=msg ), tm.assert_produces_warning(warning, match="Could not infer format"): @@ -1190,10 +1190,10 @@ def test_datetime_invalid_index(self, values, format, warning): msg = "|".join( [ - r'^Given date string "a" not likely a datetime present at position 0$', - r'^time data "a" at position 0 doesn\'t match format "%H:%M:%S"$', - r'^unconverted data remains at position 0: "9"$', - r"^second must be in 0..59: 00:01:99 present at position 0$", + r'^Given date string "a" not likely a datetime, at position 0$', + r'^time data "a" doesn\'t match format "%H:%M:%S", at position 0$', + r'^unconverted data remains: "9", at position 0$', + r"^second must be in 0..59: 00:01:99, at position 0$", ] ) with pytest.raises(ValueError, match=msg): @@ -1373,7 +1373,7 @@ def test_to_datetime_malformed_raise(self): ts_strings = ["200622-12-31", "111111-24-11"] with pytest.raises( ValueError, - match=r"^hour must be in 0\.\.23: 111111-24-11 present at position 1$", + match=r"^hour must be in 0\.\.23: 111111-24-11, at position 1$", ): with tm.assert_produces_warning( UserWarning, match="Could not infer format" @@ -1814,8 +1814,8 @@ def test_dataframe_coerce(self, cache): df2 = DataFrame({"year": [2015, 2016], "month": [2, 20], "day": [4, 5]}) msg = ( - r'^cannot assemble the datetimes: time data ".+" at position 1 doesn\'t ' - r'match format "%Y%m%d"$' + r'^cannot assemble the datetimes: time data ".+" doesn\'t ' + r'match format "%Y%m%d", at position 1$' ) with pytest.raises(ValueError, match=msg): to_datetime(df2, cache=cache) @@ -1892,8 +1892,8 @@ def test_dataframe_float(self, cache): # float df = DataFrame({"year": [2000, 2001], "month": [1.5, 1], "day": [1, 1]}) msg = ( - r"^cannot 
assemble the datetimes: unconverted data remains at position " - r'0: "1"$' + r"^cannot assemble the datetimes: unconverted data remains: " + r'"1", at position 0$' ) with pytest.raises(ValueError, match=msg): to_datetime(df, cache=cache) @@ -1915,7 +1915,7 @@ def test_to_datetime_barely_out_of_bounds(self): # in an in-bounds datetime arr = np.array(["2262-04-11 23:47:16.854775808"], dtype=object) - msg = "Out of bounds .* present at position 0" + msg = "^Out of bounds nanosecond timestamp: .*, at position 0" with pytest.raises(OutOfBoundsDatetime, match=msg): with tm.assert_produces_warning( UserWarning, match="Could not infer format" @@ -1954,8 +1954,8 @@ def test_to_datetime_iso8601_fails(self, input, format, exact): with pytest.raises( ValueError, match=( - rf"time data \"{input}\" at position 0 doesn't match format " - rf"\"{format}\"" + rf"time data \"{input}\" doesn't match format " + rf"\"{format}\", at position 0" ), ): to_datetime(input, format=format, exact=exact) @@ -1976,8 +1976,8 @@ def test_to_datetime_iso8601_exact_fails(self, input, format): with pytest.raises( ValueError, match=( - rf"time data \"{input}\" at position 0 doesn't match format " - rf"\"{format}\"" + rf"time data \"{input}\" doesn't match format " + rf"\"{format}\", at position 0" ), ): to_datetime(input, format=format) @@ -2015,8 +2015,8 @@ def test_to_datetime_iso8601_separator(self, input, format): with pytest.raises( ValueError, match=( - rf"time data \"{input}\" at position 0 doesn\'t match format " - rf"\"{format}\"" + rf"time data \"{input}\" doesn\'t match format " + rf"\"{format}\", at position 0" ), ): to_datetime(input, format=format) @@ -2084,7 +2084,7 @@ def test_to_datetime_on_datetime64_series(self, cache): def test_to_datetime_with_space_in_series(self, cache): # GH 6428 ser = Series(["10/18/2006", "10/18/2008", " "]) - msg = r'^time data " " at position 2 doesn\'t match format "%m/%d/%Y"$' + msg = r'^time data " " doesn\'t match format "%m/%d/%Y", at position 2$' with 
pytest.raises(ValueError, match=msg): to_datetime(ser, errors="raise", cache=cache) result_coerce = to_datetime(ser, errors="coerce", cache=cache) @@ -2355,8 +2355,8 @@ def test_dayfirst_warnings_invalid_input(self): with pytest.raises( ValueError, match=( - r'^time data "03/30/2011" at position 1 doesn\'t match format ' - r'"%d/%m/%Y"$' + r'^time data "03/30/2011" doesn\'t match format ' + r'"%d/%m/%Y", at position 1$' ), ): to_datetime(arr, dayfirst=True) @@ -2426,8 +2426,8 @@ def test_to_datetime_inconsistent_format(self, cache): data = ["01/01/2011 00:00:00", "01-02-2011 00:00:00", "2011-01-03T00:00:00"] ser = Series(np.array(data)) msg = ( - r'^time data "01-02-2011 00:00:00" at position 1 doesn\'t match format ' - r'"%m/%d/%Y %H:%M:%S"$' + r'^time data "01-02-2011 00:00:00" doesn\'t match format ' + r'"%m/%d/%Y %H:%M:%S", at position 1$' ) with pytest.raises(ValueError, match=msg): to_datetime(ser, cache=cache) @@ -2550,11 +2550,49 @@ def test_day_not_in_month_raise(self, cache): ): to_datetime("2015-02-29", errors="raise", cache=cache) - @pytest.mark.parametrize("arg", ["2015-02-29", "2015-02-32", "2015-04-31"]) - def test_day_not_in_month_raise_value(self, cache, arg): - msg = f'time data "{arg}" at position 0 doesn\'t match format "%Y-%m-%d"' + @pytest.mark.parametrize( + "arg, format, msg", + [ + ( + "2015-02-29", + "%Y-%m-%d", + '^time data "2015-02-29" doesn\'t match format "%Y-%m-%d", ' + "at position 0$", + ), + ( + "2015-29-02", + "%Y-%d-%m", + "^day is out of range for month, at position 0$", + ), + ( + "2015-02-32", + "%Y-%m-%d", + '^time data "2015-02-32" doesn\'t match format "%Y-%m-%d", ' + "at position 0$", + ), + ( + "2015-32-02", + "%Y-%d-%m", + '^time data "2015-32-02" doesn\'t match format "%Y-%d-%m", ' + "at position 0$", + ), + ( + "2015-04-31", + "%Y-%m-%d", + '^time data "2015-04-31" doesn\'t match format "%Y-%m-%d", ' + "at position 0$", + ), + ( + "2015-31-04", + "%Y-%d-%m", + "^day is out of range for month, at position 0$", + ), + 
], + ) + def test_day_not_in_month_raise_value(self, cache, arg, format, msg): + # https://github.com/pandas-dev/pandas/issues/50462 with pytest.raises(ValueError, match=msg): - to_datetime(arg, errors="raise", format="%Y-%m-%d", cache=cache) + to_datetime(arg, errors="raise", format=format, cache=cache) @pytest.mark.parametrize( "expected, format, warning", @@ -2934,7 +2972,7 @@ def test_invalid_origins_tzinfo(self): def test_incorrect_value_exception(self): # GH47495 with pytest.raises( - ValueError, match="Unknown string format: yesterday present at position 1" + ValueError, match="Unknown string format: yesterday, at position 1" ): with tm.assert_produces_warning( UserWarning, match="Could not infer format" @@ -2952,8 +2990,7 @@ def test_incorrect_value_exception(self): def test_to_datetime_out_of_bounds_with_format_arg(self, format, warning): # see gh-23830 msg = ( - r"^Out of bounds nanosecond timestamp: 2417-10-10 00:00:00 " - r"present at position 0$" + r"^Out of bounds nanosecond timestamp: 2417-10-10 00:00:00, at position 0$" ) with pytest.raises(OutOfBoundsDatetime, match=msg): with tm.assert_produces_warning(warning, match="Could not infer format"): diff --git a/pandas/tests/tslibs/test_array_to_datetime.py b/pandas/tests/tslibs/test_array_to_datetime.py index 80aa5d7fb1c19..63adb8427969d 100644 --- a/pandas/tests/tslibs/test_array_to_datetime.py +++ b/pandas/tests/tslibs/test_array_to_datetime.py @@ -126,7 +126,7 @@ def test_coerce_outside_ns_bounds(invalid_date, errors): kwargs = {"values": arr, "errors": errors} if errors == "raise": - msg = "Out of bounds .* present at position 0" + msg = "^Out of bounds nanosecond timestamp: .*, at position 0$" with pytest.raises(ValueError, match=msg): tslib.array_to_datetime(**kwargs) @@ -171,9 +171,7 @@ def test_to_datetime_barely_out_of_bounds(): # Close enough to bounds that dropping nanos # would result in an in-bounds datetime. 
arr = np.array(["2262-04-11 23:47:16.854775808"], dtype=object) - msg = ( - "Out of bounds nanosecond timestamp: 2262-04-11 23:47:16 present at position 0" - ) + msg = "^Out of bounds nanosecond timestamp: 2262-04-11 23:47:16, at position 0$" with pytest.raises(tslib.OutOfBoundsDatetime, match=msg): tslib.array_to_datetime(arr)
- [ ] closes #50462 - [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature - [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit). - [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions. - [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature. Haven't added a whatsnew note, as this can be considered a follow-up to #50366
https://api.github.com/repos/pandas-dev/pandas/pulls/50464
2022-12-28T12:02:52Z
2022-12-28T17:54:47Z
2022-12-28T17:54:47Z
2022-12-28T17:54:53Z
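A minimal sketch of the behavior this PR touches: an impossible calendar date passed to `to_datetime` with an explicit format raises a `ValueError`, and after this change the failing position is appended at the end of the message (`..., at position 0`) rather than embedded mid-sentence. The exact wording depends on the installed pandas version, so treat the printed message as illustrative.

```python
import pandas as pd

# "2015-02-29" is not a real date (2015 is not a leap year), so parsing
# it with an explicit format raises ValueError in every pandas version;
# only the placement of the ", at position N" suffix changed in this PR.
try:
    pd.to_datetime("2015-02-29", format="%Y-%m-%d")
    message = None
except ValueError as exc:
    message = str(exc)
print(message)
```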
TST: Verify mask's return keeps dtype
diff --git a/pandas/tests/frame/indexing/test_mask.py b/pandas/tests/frame/indexing/test_mask.py index e8a49ab868425..23458b096a140 100644 --- a/pandas/tests/frame/indexing/test_mask.py +++ b/pandas/tests/frame/indexing/test_mask.py @@ -7,6 +7,7 @@ from pandas import ( NA, DataFrame, + Float64Dtype, Series, StringDtype, Timedelta, @@ -130,3 +131,13 @@ def test_mask_where_dtype_timedelta(): [np.nan, np.nan, np.nan, Timedelta("3 day"), Timedelta("4 day")] ) tm.assert_frame_equal(df.where(df > Timedelta(2, unit="d")), expected) + + +def test_mask_return_dtype(): + # GH#50488 + ser = Series([0.0, 1.0, 2.0, 3.0], dtype=Float64Dtype()) + cond = ~ser.isna() + other = Series([True, False, True, False]) + excepted = Series([1.0, 0.0, 1.0, 0.0], dtype=ser.dtype) + result = ser.mask(cond, other) + tm.assert_series_equal(result, excepted)
- [ ] closes #50448 - [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature - [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit). - [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions. - [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
https://api.github.com/repos/pandas-dev/pandas/pulls/50460
2022-12-28T09:31:45Z
2023-01-04T13:47:49Z
2023-01-04T13:47:49Z
2023-01-04T13:48:35Z
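The behavior the new test pins down can be sketched as follows: calling `mask` on a nullable `Float64` series should return a result with the same extension dtype instead of falling back to a NumPy dtype. This is an illustrative reduction of the scenario in #50488 (here using a scalar replacement value to keep it simple).

```python
import pandas as pd

ser = pd.Series([0.0, 1.0, 2.0, 3.0], dtype="Float64")
cond = ~ser.isna()  # all True for this data

# Replace values where the condition holds; the result should keep the
# nullable Float64 dtype rather than degrading to object or float64.
result = ser.mask(cond, 0.0)
print(result.dtype)
```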
add test to check crosstab supports Float64 formats
diff --git a/pandas/tests/reshape/test_crosstab.py b/pandas/tests/reshape/test_crosstab.py index 76448d5942a5a..bea0ddbe8a667 100644 --- a/pandas/tests/reshape/test_crosstab.py +++ b/pandas/tests/reshape/test_crosstab.py @@ -795,6 +795,32 @@ def test_margin_normalize_multiple_columns(self): expected.index.name = "C" tm.assert_frame_equal(result, expected) + def test_margin_support_Float(self): + # GH 50313 + # use Float64 formats and function aggfunc with margins + df = DataFrame( + {"A": [1, 2, 2, 1], "B": [3, 3, 4, 5], "C": [-1.0, 10.0, 1.0, 10.0]}, + dtype="Float64", + ) + result = crosstab( + df["A"], + df["B"], + values=df["C"], + aggfunc="sum", + margins=True, + ) + expected = DataFrame( + [ + [-1.0, pd.NA, 10.0, 9.0], + [10.0, 1.0, pd.NA, 11.0], + [9.0, 1.0, 10.0, 20.0], + ], + index=Index([1.0, 2.0, "All"], dtype="object", name="A"), + columns=Index([3.0, 4.0, 5.0, "All"], dtype="object", name="B"), + dtype="Float64", + ) + tm.assert_frame_equal(result, expected) + @pytest.mark.parametrize("a_dtype", ["category", "int64"]) @pytest.mark.parametrize("b_dtype", ["category", "int64"])
- [x] closes #50313 - [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature - [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit). A new test is added. The test validates `crosstab` behavior with `Float64`. I have checked that the problem occurs when `Float64` is used together with `margins=True`. The test fails on `1.5.2` and passes on `main`.
https://api.github.com/repos/pandas-dev/pandas/pulls/50459
2022-12-28T09:22:50Z
2022-12-28T19:54:40Z
2022-12-28T19:54:40Z
2022-12-28T21:06:33Z
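The scenario the new test covers, reduced to a standalone sketch (assumes a pandas build containing the fix; per the PR body this failed on 1.5.2 and passes on main): `crosstab` over nullable `Float64` data with `margins=True` should compute the `"All"` row and column.

```python
import pandas as pd

df = pd.DataFrame(
    {"A": [1, 2, 2, 1], "B": [3, 3, 4, 5], "C": [-1.0, 10.0, 1.0, 10.0]},
    dtype="Float64",
)
# margins=True adds an "All" row/column holding the aggregated totals;
# the grand total of column C here is -1 + 10 + 1 + 10 = 20.
result = pd.crosstab(
    df["A"], df["B"], values=df["C"], aggfunc="sum", margins=True
)
print(result)
```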
DEPR: Enforce alignment with numpy ufuncs
diff --git a/doc/source/whatsnew/v2.0.0.rst b/doc/source/whatsnew/v2.0.0.rst index 46464a3ecf420..d03bd9162b357 100644 --- a/doc/source/whatsnew/v2.0.0.rst +++ b/doc/source/whatsnew/v2.0.0.rst @@ -527,6 +527,7 @@ Removal of prior version deprecations/changes - Removed deprecated :func:`pandas.api.types.is_categorical`; use :func:`pandas.api.types.is_categorical_dtype` instead (:issue:`33385`) - Removed deprecated :meth:`Index.asi8` (:issue:`37877`) - Enforced deprecation changing behavior when passing ``datetime64[ns]`` dtype data and timezone-aware dtype to :class:`Series`, interpreting the values as wall-times instead of UTC times, matching :class:`DatetimeIndex` behavior (:issue:`41662`) +- Enforced deprecation changing behavior when applying a numpy ufunc on multiple non-aligned (on the index or columns) :class:`DataFrame` that will now align the inputs first (:issue:`39239`) - Removed deprecated :meth:`DataFrame._AXIS_NUMBERS`, :meth:`DataFrame._AXIS_NAMES`, :meth:`Series._AXIS_NUMBERS`, :meth:`Series._AXIS_NAMES` (:issue:`33637`) - Removed deprecated :meth:`Index.to_native_types`, use ``obj.astype(str)`` instead (:issue:`36418`) - Removed deprecated :meth:`Series.iteritems`, :meth:`DataFrame.iteritems`, use ``obj.items`` instead (:issue:`45321`) diff --git a/pandas/core/arraylike.py b/pandas/core/arraylike.py index 15bb2f59fcccf..4bed021cabea1 100644 --- a/pandas/core/arraylike.py +++ b/pandas/core/arraylike.py @@ -8,13 +8,11 @@ import operator from typing import Any -import warnings import numpy as np from pandas._libs import lib from pandas._libs.ops_dispatch import maybe_dispatch_ufunc_to_dunder_op -from pandas.util._exceptions import find_stack_level from pandas.core.dtypes.generic import ABCNDFrame @@ -166,81 +164,6 @@ def __rpow__(self, other): # Helpers to implement __array_ufunc__ -def _is_aligned(frame, other): - """ - Helper to check if a DataFrame is aligned with another DataFrame or Series. 
- """ - from pandas import DataFrame - - if isinstance(other, DataFrame): - return frame._indexed_same(other) - else: - # Series -> match index - return frame.columns.equals(other.index) - - -def _maybe_fallback(ufunc: np.ufunc, method: str, *inputs: Any, **kwargs: Any): - """ - In the future DataFrame, inputs to ufuncs will be aligned before applying - the ufunc, but for now we ignore the index but raise a warning if behaviour - would change in the future. - This helper detects the case where a warning is needed and then fallbacks - to applying the ufunc on arrays to avoid alignment. - - See https://github.com/pandas-dev/pandas/pull/39239 - """ - from pandas import DataFrame - from pandas.core.generic import NDFrame - - n_alignable = sum(isinstance(x, NDFrame) for x in inputs) - n_frames = sum(isinstance(x, DataFrame) for x in inputs) - - if n_alignable >= 2 and n_frames >= 1: - # if there are 2 alignable inputs (Series or DataFrame), of which at least 1 - # is a DataFrame -> we would have had no alignment before -> warn that this - # will align in the future - - # the first frame is what determines the output index/columns in pandas < 1.2 - first_frame = next(x for x in inputs if isinstance(x, DataFrame)) - - # check if the objects are aligned or not - non_aligned = sum( - not _is_aligned(first_frame, x) for x in inputs if isinstance(x, NDFrame) - ) - - # if at least one is not aligned -> warn and fallback to array behaviour - if non_aligned: - warnings.warn( - "Calling a ufunc on non-aligned DataFrames (or DataFrame/Series " - "combination). Currently, the indices are ignored and the result " - "takes the index/columns of the first DataFrame. 
In the future , " - "the DataFrames/Series will be aligned before applying the ufunc.\n" - "Convert one of the arguments to a NumPy array " - "(eg 'ufunc(df1, np.asarray(df2)') to keep the current behaviour, " - "or align manually (eg 'df1, df2 = df1.align(df2)') before passing to " - "the ufunc to obtain the future behaviour and silence this warning.", - FutureWarning, - stacklevel=find_stack_level(), - ) - - # keep the first dataframe of the inputs, other DataFrame/Series is - # converted to array for fallback behaviour - new_inputs = [] - for x in inputs: - if x is first_frame: - new_inputs.append(x) - elif isinstance(x, NDFrame): - new_inputs.append(np.asarray(x)) - else: - new_inputs.append(x) - - # call the ufunc on those transformed inputs - return getattr(ufunc, method)(*new_inputs, **kwargs) - - # signal that we didn't fallback / execute the ufunc yet - return NotImplemented - - def array_ufunc(self, ufunc: np.ufunc, method: str, *inputs: Any, **kwargs: Any): """ Compatibility with numpy ufuncs. 
@@ -260,11 +183,6 @@ def array_ufunc(self, ufunc: np.ufunc, method: str, *inputs: Any, **kwargs: Any) kwargs = _standardize_out_kwarg(**kwargs) - # for backwards compatibility check and potentially fallback for non-aligned frames - result = _maybe_fallback(ufunc, method, *inputs, **kwargs) - if result is not NotImplemented: - return result - # for binary ops, use our custom dunder methods result = maybe_dispatch_ufunc_to_dunder_op(self, ufunc, method, *inputs, **kwargs) if result is not NotImplemented: diff --git a/pandas/tests/frame/test_ufunc.py b/pandas/tests/frame/test_ufunc.py index 611e0eeb3e5f0..9223c4364579e 100644 --- a/pandas/tests/frame/test_ufunc.py +++ b/pandas/tests/frame/test_ufunc.py @@ -118,21 +118,18 @@ def test_binary_input_aligns_columns(request, dtype_a, dtype_b): if isinstance(dtype_a, dict) and isinstance(dtype_b, dict): dtype_b["C"] = dtype_b.pop("B") - df2 = pd.DataFrame({"A": [1, 2], "C": [3, 4]}).astype(dtype_b) - with tm.assert_produces_warning(FutureWarning): - result = np.heaviside(df1, df2) - # Expected future behaviour: - # expected = np.heaviside( - # np.array([[1, 3, np.nan], [2, 4, np.nan]]), - # np.array([[1, np.nan, 3], [2, np.nan, 4]]), - # ) - # expected = pd.DataFrame(expected, index=[0, 1], columns=["A", "B", "C"]) - expected = pd.DataFrame([[1.0, 1.0], [1.0, 1.0]], columns=["A", "B"]) + # As of 2.0, align first before applying the ufunc + result = np.heaviside(df1, df2) + expected = np.heaviside( + np.array([[1, 3, np.nan], [2, 4, np.nan]]), + np.array([[1, np.nan, 3], [2, np.nan, 4]]), + ) + expected = pd.DataFrame(expected, index=[0, 1], columns=["A", "B", "C"]) tm.assert_frame_equal(result, expected) - # ensure the expected is the same when applying with numpy array result = np.heaviside(df1, df2.values) + expected = pd.DataFrame([[1.0, 1.0], [1.0, 1.0]], columns=["A", "B"]) tm.assert_frame_equal(result, expected) @@ -146,35 +143,29 @@ def test_binary_input_aligns_index(request, dtype): ) df1 = pd.DataFrame({"A": [1, 2], 
"B": [3, 4]}, index=["a", "b"]).astype(dtype) df2 = pd.DataFrame({"A": [1, 2], "B": [3, 4]}, index=["a", "c"]).astype(dtype) - with tm.assert_produces_warning(FutureWarning): - result = np.heaviside(df1, df2) - # Expected future behaviour: - # expected = np.heaviside( - # np.array([[1, 3], [3, 4], [np.nan, np.nan]]), - # np.array([[1, 3], [np.nan, np.nan], [3, 4]]), - # ) - # # TODO(FloatArray): this will be Float64Dtype. - # expected = pd.DataFrame(expected, index=["a", "b", "c"], columns=["A", "B"]) - expected = pd.DataFrame( - [[1.0, 1.0], [1.0, 1.0]], columns=["A", "B"], index=["a", "b"] + result = np.heaviside(df1, df2) + expected = np.heaviside( + np.array([[1, 3], [3, 4], [np.nan, np.nan]]), + np.array([[1, 3], [np.nan, np.nan], [3, 4]]), ) + # TODO(FloatArray): this will be Float64Dtype. + expected = pd.DataFrame(expected, index=["a", "b", "c"], columns=["A", "B"]) tm.assert_frame_equal(result, expected) - # ensure the expected is the same when applying with numpy array result = np.heaviside(df1, df2.values) + expected = pd.DataFrame( + [[1.0, 1.0], [1.0, 1.0]], columns=["A", "B"], index=["a", "b"] + ) tm.assert_frame_equal(result, expected) -@pytest.mark.filterwarnings("ignore:Calling a ufunc on non-aligned:FutureWarning") def test_binary_frame_series_raises(): # We don't currently implement df = pd.DataFrame({"A": [1, 2]}) - # with pytest.raises(NotImplementedError, match="logaddexp"): - with pytest.raises(ValueError, match=""): + with pytest.raises(NotImplementedError, match="logaddexp"): np.logaddexp(df, df["A"]) - # with pytest.raises(NotImplementedError, match="logaddexp"): - with pytest.raises(ValueError, match=""): + with pytest.raises(NotImplementedError, match="logaddexp"): np.logaddexp(df["A"], df) @@ -206,7 +197,8 @@ def test_frame_outer_disallowed(): np.subtract.outer(df, df) -def test_alignment_deprecation(): +def test_alignment_deprecation_enforced(): + # Enforced in 2.0 # https://github.com/pandas-dev/pandas/issues/39184 df1 = 
pd.DataFrame({"a": [1, 2, 3], "b": [4, 5, 6]}) df2 = pd.DataFrame({"b": [1, 2, 3], "c": [4, 5, 6]}) @@ -221,12 +213,11 @@ def test_alignment_deprecation(): result = np.add(df1, df1) tm.assert_frame_equal(result, expected) - with tm.assert_produces_warning(FutureWarning): - # non-aligned -> warns - result = np.add(df1, df2) + result = np.add(df1, df2.values) tm.assert_frame_equal(result, expected) - result = np.add(df1, df2.values) + result = np.add(df1, df2) + expected = pd.DataFrame({"a": [np.nan] * 3, "b": [5, 7, 9], "c": [np.nan] * 3}) tm.assert_frame_equal(result, expected) result = np.add(df1.values, df2) @@ -241,20 +232,23 @@ def test_alignment_deprecation(): result = np.add(df1, s1) tm.assert_frame_equal(result, expected) - with tm.assert_produces_warning(FutureWarning): - result = np.add(df1, s2) + result = np.add(df1, s2.values) tm.assert_frame_equal(result, expected) - with tm.assert_produces_warning(FutureWarning): - result = np.add(s2, df1) + expected = pd.DataFrame( + {"a": [np.nan] * 3, "b": [5.0, 6.0, 7.0], "c": [np.nan] * 3} + ) + result = np.add(df1, s2) tm.assert_frame_equal(result, expected) - result = np.add(df1, s2.values) - tm.assert_frame_equal(result, expected) + msg = "Cannot apply ufunc <ufunc 'add'> to mixed DataFrame and Series inputs." 
+ with pytest.raises(NotImplementedError, match=msg): + np.add(s2, df1) @td.skip_if_no("numba") -def test_alignment_deprecation_many_inputs(request): +def test_alignment_deprecation_many_inputs_enforced(): + # Enforced in 2.0 # https://github.com/pandas-dev/pandas/issues/39184 # test that the deprecation also works with > 2 inputs -> using a numba # written ufunc for this because numpy itself doesn't have such ufuncs @@ -271,20 +265,22 @@ def my_ufunc(x, y, z): df2 = pd.DataFrame({"b": [1, 2, 3], "c": [4, 5, 6]}) df3 = pd.DataFrame({"a": [1, 2, 3], "c": [4, 5, 6]}) - with tm.assert_produces_warning(FutureWarning): - result = my_ufunc(df1, df2, df3) - expected = pd.DataFrame([[3.0, 12.0], [6.0, 15.0], [9.0, 18.0]], columns=["a", "b"]) + result = my_ufunc(df1, df2, df3) + expected = pd.DataFrame(np.full((3, 3), np.nan), columns=["a", "b", "c"]) tm.assert_frame_equal(result, expected) # all aligned -> no warning with tm.assert_produces_warning(None): result = my_ufunc(df1, df1, df1) + expected = pd.DataFrame([[3.0, 12.0], [6.0, 15.0], [9.0, 18.0]], columns=["a", "b"]) tm.assert_frame_equal(result, expected) # mixed frame / arrays - with tm.assert_produces_warning(FutureWarning): - result = my_ufunc(df1, df2, df3.values) - tm.assert_frame_equal(result, expected) + msg = ( + r"operands could not be broadcast together with shapes \(3,3\) \(3,3\) \(3,2\)" + ) + with pytest.raises(ValueError, match=msg): + my_ufunc(df1, df2, df3.values) # single frame -> no warning with tm.assert_produces_warning(None): @@ -292,10 +288,11 @@ def my_ufunc(x, y, z): tm.assert_frame_equal(result, expected) # takes indices of first frame - with tm.assert_produces_warning(FutureWarning): - result = my_ufunc(df1.values, df2, df3) - expected = expected.set_axis(["b", "c"], axis=1) - tm.assert_frame_equal(result, expected) + msg = ( + r"operands could not be broadcast together with shapes \(3,2\) \(3,3\) \(3,3\)" + ) + with pytest.raises(ValueError, match=msg): + my_ufunc(df1.values, df2, df3) def 
test_array_ufuncs_for_many_arguments(): diff --git a/pandas/tests/series/test_ufunc.py b/pandas/tests/series/test_ufunc.py index dae06a58f0e49..b3f1a1be903e5 100644 --- a/pandas/tests/series/test_ufunc.py +++ b/pandas/tests/series/test_ufunc.py @@ -426,14 +426,10 @@ def test_np_matmul(): # GH26650 df1 = pd.DataFrame(data=[[-1, 1, 10]]) df2 = pd.DataFrame(data=[-1, 1, 10]) - expected_result = pd.DataFrame(data=[102]) + expected = pd.DataFrame(data=[102]) - with tm.assert_produces_warning(FutureWarning, match="on non-aligned"): - result = np.matmul(df1, df2) - tm.assert_frame_equal( - expected_result, - result, - ) + result = np.matmul(df1, df2) + tm.assert_frame_equal(expected, result) def test_array_ufuncs_for_many_arguments():
Introduced in #39239
https://api.github.com/repos/pandas-dev/pandas/pulls/50455
2022-12-28T02:20:21Z
2022-12-28T21:46:29Z
2022-12-28T21:46:29Z
2022-12-28T22:08:14Z
DOC: skip a doctest that writes to disk
diff --git a/pandas/io/formats/style_render.py b/pandas/io/formats/style_render.py index 5089d0489c49f..c0e00d6bd30a4 100644 --- a/pandas/io/formats/style_render.py +++ b/pandas/io/formats/style_render.py @@ -1125,7 +1125,8 @@ def format( >>> df = pd.DataFrame({"A": [1, 0, -1]}) >>> pseudo_css = "number-format: 0Β§[Red](0)Β§-Β§@;" - >>> df.style.applymap(lambda v: pseudo_css).to_excel("formatted_file.xlsx") + >>> filename = "formatted_file.xlsx" + >>> df.style.applymap(lambda v: pseudo_css).to_excel(filename) # doctest: +SKIP .. figure:: ../../_static/style/format_excel_css.png """
- [ ] closes #xxxx (Replace xxxx with the GitHub issue number) - [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature - [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit). - [x] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions. - [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature. Running `./scripts/validate_docstrings.py` was returning the following error ``` Exception: The following files were leftover from the doctest: {'formatted_file.xlsx'}. Please use # doctest: +SKIP ``` This was due to the doctest writing to disk. This PR fixes this issue.
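The effect of the `# doctest: +SKIP` directive can be exercised directly with the stdlib `doctest` machinery; a minimal sketch (the `leftover.txt` filename is illustrative, not from the PR):

```python
import doctest
import os

# A docstring whose first example would write to disk; +SKIP keeps the
# runner from executing it, so no file is left behind on disk.
source = '''
>>> open("leftover.txt", "w").write("x")  # doctest: +SKIP
>>> 1 + 1
2
'''

parser = doctest.DocTestParser()
test = parser.get_doctest(source, {}, "demo", "demo.py", 0)
results = doctest.DocTestRunner(verbose=False).run(test)

print(results.failed)                  # 0: the skipped example never ran
print(os.path.exists("leftover.txt"))  # False
```

This is the same mechanism `validate_docstrings.py` relies on when it checks for leftover files.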
https://api.github.com/repos/pandas-dev/pandas/pulls/50454
2022-12-28T01:52:12Z
2022-12-28T12:53:01Z
2022-12-28T12:53:01Z
2022-12-28T12:53:08Z
API: to_datetime allow mixed numeric/datetime with errors=coerce
diff --git a/doc/source/whatsnew/v2.0.0.rst b/doc/source/whatsnew/v2.0.0.rst index 12b0d90e68ab9..87ebff90c6021 100644 --- a/doc/source/whatsnew/v2.0.0.rst +++ b/doc/source/whatsnew/v2.0.0.rst @@ -470,7 +470,7 @@ Other API changes - Changed behavior of :meth:`Series.quantile` and :meth:`DataFrame.quantile` with :class:`SparseDtype` to retain sparse dtype (:issue:`49583`) - When creating a :class:`Series` with a object-dtype :class:`Index` of datetime objects, pandas no longer silently converts the index to a :class:`DatetimeIndex` (:issue:`39307`, :issue:`23598`) - :meth:`Series.unique` with dtype "timedelta64[ns]" or "datetime64[ns]" now returns :class:`TimedeltaArray` or :class:`DatetimeArray` instead of ``numpy.ndarray`` (:issue:`49176`) -- :func:`to_datetime` and :class:`DatetimeIndex` now allow sequences containing both ``datetime`` objects and numeric entries, matching :class:`Series` behavior (:issue:`49037`) +- :func:`to_datetime` and :class:`DatetimeIndex` now allow sequences containing both ``datetime`` objects and numeric entries, matching :class:`Series` behavior (:issue:`49037`, :issue:`50453`) - :func:`pandas.api.dtypes.is_string_dtype` now only returns ``True`` for array-likes with ``dtype=object`` when the elements are inferred to be strings (:issue:`15585`) - Passing a sequence containing ``datetime`` objects and ``date`` objects to :class:`Series` constructor will return with ``object`` dtype instead of ``datetime64[ns]`` dtype, consistent with :class:`Index` behavior (:issue:`49341`) - Passing strings that cannot be parsed as datetimes to :class:`Series` or :class:`DataFrame` with ``dtype="datetime64[ns]"`` will raise instead of silently ignoring the keyword and returning ``object`` dtype (:issue:`24435`) diff --git a/pandas/_libs/tslib.pyx b/pandas/_libs/tslib.pyx index c1a30e03235b5..af6afedf99275 100644 --- a/pandas/_libs/tslib.pyx +++ b/pandas/_libs/tslib.pyx @@ -454,8 +454,6 @@ cpdef array_to_datetime( npy_datetimestruct dts NPY_DATETIMEUNIT 
out_bestunit bint utc_convert = bool(utc) - bint seen_integer = False - bint seen_datetime = False bint seen_datetime_offset = False bint is_raise = errors=="raise" bint is_ignore = errors=="ignore" @@ -486,7 +484,6 @@ cpdef array_to_datetime( iresult[i] = NPY_NAT elif PyDateTime_Check(val): - seen_datetime = True if val.tzinfo is not None: found_tz = True else: @@ -501,12 +498,10 @@ cpdef array_to_datetime( result[i] = parse_pydatetime(val, &dts, utc_convert) elif PyDate_Check(val): - seen_datetime = True iresult[i] = pydate_to_dt64(val, &dts) check_dts_bounds(&dts) elif is_datetime64_object(val): - seen_datetime = True iresult[i] = get_datetime64_nanos(val, NPY_FR_ns) elif is_integer_object(val) or is_float_object(val): @@ -521,7 +516,6 @@ cpdef array_to_datetime( ) return values, tz_out # these must be ns unit by-definition - seen_integer = True if val != val or val == NPY_NAT: iresult[i] = NPY_NAT @@ -654,17 +648,6 @@ cpdef array_to_datetime( except TypeError: return _array_to_datetime_object(values, errors, dayfirst, yearfirst) - if seen_datetime and seen_integer: - # we have mixed datetimes & integers - - if is_coerce: - # coerce all of the integers/floats to NaT, preserve - # the datetimes and other convertibles - for i in range(n): - val = values[i] - if is_integer_object(val) or is_float_object(val): - result[i] = NPY_NAT - if seen_datetime_offset and not utc_convert: # GH#17697 # 1) If all the offsets are equal, return one offset for diff --git a/pandas/tests/tools/test_to_datetime.py b/pandas/tests/tools/test_to_datetime.py index 83e40f5f1d98b..417b7e85ce6f9 100644 --- a/pandas/tests/tools/test_to_datetime.py +++ b/pandas/tests/tools/test_to_datetime.py @@ -1588,29 +1588,24 @@ def test_unit_with_numeric_coerce(self, cache, exp, arr, warning): tm.assert_index_equal(result, expected) @pytest.mark.parametrize( - "exp, arr", + "arr", [ - [ - ["2013-01-01", "NaT", "NaT"], - [Timestamp("20130101"), 1.434692e18, 1.432766e18], - ], - [ - ["NaT", "NaT", 
"2013-01-01"], - [1.434692e18, 1.432766e18, Timestamp("20130101")], - ], + [Timestamp("20130101"), 1.434692e18, 1.432766e18], + [1.434692e18, 1.432766e18, Timestamp("20130101")], ], ) - def test_unit_mixed(self, cache, exp, arr): - + def test_unit_mixed(self, cache, arr): + # GH#50453 pre-2.0 with mixed numeric/datetimes and errors="coerce" + # the numeric entries would be coerced to NaT, was never clear exactly + # why. # mixed integers/datetimes - expected = DatetimeIndex(exp) + expected = Index([Timestamp(x) for x in arr], dtype="M8[ns]") result = to_datetime(arr, errors="coerce", cache=cache) tm.assert_index_equal(result, expected) # GH#49037 pre-2.0 this raised, but it always worked with Series, # was never clear why it was disallowed result = to_datetime(arr, errors="raise", cache=cache) - expected = Index([Timestamp(x) for x in arr], dtype="M8[ns]") tm.assert_index_equal(result, expected) result = DatetimeIndex(arr)
- [ ] closes #xxxx (Replace xxxx with the GitHub issue number) - [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature - [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit). - [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions. - [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
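The change means numeric entries in a mixed sequence are interpreted as nanosecond epochs with `errors="coerce"` instead of being silently replaced with `NaT`; a small sketch, using the same values as the test in the diff (the coerce output shown is the post-change behavior):

```python
import pandas as pd
from pandas import Timestamp

# A float in a datetime context is read as nanoseconds since the epoch:
# 1.434692e18 ns == 1434692000 s == 2015-06-19 05:33:20 UTC.
print(pd.to_datetime(1.434692e18))  # Timestamp('2015-06-19 05:33:20')

# After this change, errors="coerce" converts such entries like
# errors="raise" does, rather than coercing them to NaT.
mixed = [Timestamp("20130101"), 1.434692e18]
result = pd.to_datetime(mixed, errors="coerce")
print(result)
```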
https://api.github.com/repos/pandas-dev/pandas/pulls/50453
2022-12-28T01:32:09Z
2022-12-29T00:17:49Z
2022-12-29T00:17:49Z
2022-12-29T00:35:46Z
CI: Fix npdev error on deprecation & future warnings
diff --git a/.github/workflows/ubuntu.yml b/.github/workflows/ubuntu.yml index f7bd5980439e3..d3ad2710a0efa 100644 --- a/.github/workflows/ubuntu.yml +++ b/.github/workflows/ubuntu.yml @@ -77,7 +77,7 @@ jobs: - name: "Numpy Dev" env_file: actions-310-numpydev.yaml pattern: "not slow and not network and not single_cpu" - test_args: "-W error::DeprecationWarning:numpy -W error::FutureWarning:numpy" + test_args: "-W error::DeprecationWarning -W error::FutureWarning" error_on_warnings: "0" exclude: - env_file: actions-38.yaml diff --git a/pandas/core/dtypes/cast.py b/pandas/core/dtypes/cast.py index dcc40c11ec778..97361fb88bc70 100644 --- a/pandas/core/dtypes/cast.py +++ b/pandas/core/dtypes/cast.py @@ -1590,7 +1590,15 @@ def maybe_cast_to_integer_array(arr: list | np.ndarray, dtype: np.dtype) -> np.n try: if not isinstance(arr, np.ndarray): - casted = np.array(arr, dtype=dtype, copy=False) + with warnings.catch_warnings(): + # We already disallow dtype=uint w/ negative numbers + # (test_constructor_coercion_signed_to_unsigned) so safe to ignore. 
+ warnings.filterwarnings( + "ignore", + "NumPy will stop allowing conversion of out-of-bound Python int", + DeprecationWarning, + ) + casted = np.array(arr, dtype=dtype, copy=False) else: casted = arr.astype(dtype, copy=False) except OverflowError as err: diff --git a/pandas/tests/arithmetic/test_datetime64.py b/pandas/tests/arithmetic/test_datetime64.py index 02188a6e57534..2b5eb8b261af3 100644 --- a/pandas/tests/arithmetic/test_datetime64.py +++ b/pandas/tests/arithmetic/test_datetime64.py @@ -1100,7 +1100,9 @@ def test_dt64arr_addsub_intlike( dti = date_range("2016-01-01", periods=2, freq=freq, tz=tz) obj = box_with_array(dti) - other = np.array([4, -1], dtype=dtype) + other = np.array([4, -1]) + if dtype is not None: + other = other.astype(dtype) msg = "|".join( [ diff --git a/pandas/tests/series/test_constructors.py b/pandas/tests/series/test_constructors.py index 05e40e20f1226..72b5bb7877271 100644 --- a/pandas/tests/series/test_constructors.py +++ b/pandas/tests/series/test_constructors.py @@ -14,10 +14,6 @@ iNaT, lib, ) -from pandas.compat import ( - IS64, - is_numpy_dev, -) from pandas.errors import IntCastingNaNError import pandas.util._test_decorators as td @@ -764,14 +760,11 @@ def test_constructor_cast(self): def test_constructor_signed_int_overflow_raises(self): # GH#41734 disallow silent overflow, enforced in 2.0 msg = "Values are too large to be losslessly converted" - numpy_warning = DeprecationWarning if is_numpy_dev or not IS64 else None with pytest.raises(ValueError, match=msg): - with tm.assert_produces_warning(numpy_warning, check_stacklevel=False): - Series([1, 200, 923442], dtype="int8") + Series([1, 200, 923442], dtype="int8") with pytest.raises(ValueError, match=msg): - with tm.assert_produces_warning(numpy_warning, check_stacklevel=False): - Series([1, 200, 923442], dtype="uint8") + Series([1, 200, 923442], dtype="uint8") @pytest.mark.parametrize( "values",
As discovered in https://github.com/pandas-dev/pandas/pull/50386, specifying a module in a warning filter requires the fully qualified module path, which `numpy` alone is not. In order not to enumerate all the module paths in numpy, just have this build fail on any `DeprecationWarning` or `FutureWarning`
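The mismatch comes from how CPython translates the `-W` module field: `warnings._setoption` escapes it and anchors it with `\Z`, so `numpy` matches only a module named exactly `numpy`, never `numpy.core.*`. A self-contained sketch of that matching rule (using `warn_explicit` to fake the warning's origin module; `escalates` is an illustrative helper, not a real API):

```python
import re
import warnings

def escalates(cli_module: str, origin_module: str) -> bool:
    """Would `-W error::DeprecationWarning:<cli_module>` turn a
    DeprecationWarning raised in origin_module into an error?"""
    # This mirrors what -W does with the module field internally.
    pattern = re.escape(cli_module) + r"\Z"
    with warnings.catch_warnings():
        warnings.resetwarnings()
        warnings.simplefilter("ignore")  # swallow non-matching warnings
        warnings.filterwarnings("error", category=DeprecationWarning, module=pattern)
        try:
            warnings.warn_explicit("deprecated", DeprecationWarning,
                                   filename="x.py", lineno=1,
                                   module=origin_module)
        except DeprecationWarning:
            return True
    return False

print(escalates("numpy", "numpy"))                   # True
print(escalates("numpy", "numpy.core.fromnumeric"))  # False: filter misses submodules
```

Dropping the module field entirely, as this PR does, makes the filter apply to every module.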
https://api.github.com/repos/pandas-dev/pandas/pulls/50452
2022-12-27T23:07:54Z
2023-01-18T00:59:27Z
2023-01-18T00:59:27Z
2023-01-18T00:59:32Z
CI: Reduce duplicate pyarrow version testing
diff --git a/.github/workflows/macos-windows.yml b/.github/workflows/macos-windows.yml index ca6fd63d749ee..5efc1aa67b4cd 100644 --- a/.github/workflows/macos-windows.yml +++ b/.github/workflows/macos-windows.yml @@ -53,7 +53,7 @@ jobs: uses: ./.github/actions/setup-conda with: environment-file: ci/deps/${{ matrix.env_file }} - pyarrow-version: ${{ matrix.os == 'macos-latest' && '6' || '' }} + pyarrow-version: ${{ matrix.os == 'macos-latest' && '9' || '' }} - name: Build Pandas uses: ./.github/actions/build_pandas diff --git a/.github/workflows/ubuntu.yml b/.github/workflows/ubuntu.yml index fe4d92507f717..7dbf74278d433 100644 --- a/.github/workflows/ubuntu.yml +++ b/.github/workflows/ubuntu.yml @@ -78,14 +78,18 @@ jobs: pattern: "not slow and not network and not single_cpu" test_args: "-W error::DeprecationWarning:numpy -W error::FutureWarning:numpy" exclude: - - env_file: actions-39.yaml - pyarrow_version: "6" - - env_file: actions-39.yaml + - env_file: actions-38.yaml pyarrow_version: "7" - - env_file: actions-310.yaml - pyarrow_version: "6" - - env_file: actions-310.yaml + - env_file: actions-38.yaml + pyarrow_version: "8" + - env_file: actions-38.yaml + pyarrow_version: "9" + - env_file: actions-39.yaml pyarrow_version: "7" + - env_file: actions-39.yaml + pyarrow_version: "8" + - env_file: actions-39.yaml + pyarrow_version: "9" fail-fast: false name: ${{ matrix.name || format('{0} pyarrow={1} {2}', matrix.env_file, matrix.pyarrow_version, matrix.pattern) }} env:
We probably don't need to be testing multiple permutations of pyarrow version & Python version in the CI. Instead: * PY310 builds will test pyarrow version 6 - 10 * PY38 & 39 builds will test pyarrow = 10
https://api.github.com/repos/pandas-dev/pandas/pulls/50451
2022-12-27T22:44:53Z
2022-12-28T08:27:54Z
2022-12-28T08:27:54Z
2022-12-28T17:42:06Z
BUG: handle duplicated columns when using `read_sql` (#44421)
diff --git a/doc/source/whatsnew/v2.0.0.rst b/doc/source/whatsnew/v2.0.0.rst index cb47519589377..520da64e72e98 100644 --- a/doc/source/whatsnew/v2.0.0.rst +++ b/doc/source/whatsnew/v2.0.0.rst @@ -900,6 +900,7 @@ I/O - Fixed memory leak which stemmed from the initialization of the internal JSON module (:issue:`49222`) - Fixed issue where :func:`json_normalize` would incorrectly remove leading characters from column names that matched the ``sep`` argument (:issue:`49861`) - Bug in :meth:`DataFrame.to_json` where it would segfault when failing to encode a string (:issue:`50307`) +- Bug in :func:`read_sql` ignoring duplicated columns, fixed by adding a suffix on duplicated columns (:issue:`44421`) Period ^^^^^^ diff --git a/pandas/io/sql.py b/pandas/io/sql.py index 2b845786b0366..99f995bdf082c 100644 --- a/pandas/io/sql.py +++ b/pandas/io/sql.py @@ -147,18 +147,29 @@ def _convert_arrays_to_dataframe( use_nullable_dtypes: bool = False, ) -> DataFrame: content = lib.to_object_array_tuples(data) - arrays = convert_object_array( + if arrays := convert_object_array( list(content.T), dtype=None, coerce_float=coerce_float, use_nullable_dtypes=use_nullable_dtypes, - ) - if arrays: - return DataFrame(dict(zip(columns, arrays))) + ): + return DataFrame( + np.array(arrays).T, columns=_suffix_on_duplicated(list(columns)) + ) else: return DataFrame(columns=columns) +def _suffix_on_duplicated(lst: list[Any]) -> list[Any]: + """ + Add suffix on items that are duplicated + """ + return [ + f"{value}_{lst[:index].count(value)}" if lst.count(value) > 1 else value + for index, value in enumerate(lst, 1) + ] + + def _wrap_result( data, columns,
- [X] closes #44421 - [X] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature - [X] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit). - [X] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions. - [X] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature. Hello everyone, This is my first PR on this project, so don't hesitate to be picky. I believe I've fixed the issue reported in #44421; it was a simple error caused by duplicated column names. I tried to respect the rules for PRs, but if I missed something, feel free to comment ;) Have a good day!
https://api.github.com/repos/pandas-dev/pandas/pulls/50450
2022-12-27T19:00:22Z
2023-02-07T20:41:09Z
null
2023-02-07T20:41:10Z
PERF: ArrowExtensionArray.searchsorted
diff --git a/doc/source/whatsnew/v2.0.0.rst b/doc/source/whatsnew/v2.0.0.rst index ea6a832d25058..4acfc1470a82f 100644 --- a/doc/source/whatsnew/v2.0.0.rst +++ b/doc/source/whatsnew/v2.0.0.rst @@ -757,6 +757,7 @@ Performance improvements - Performance improvement in :meth:`MultiIndex.putmask` (:issue:`49830`) - Performance improvement in :meth:`Index.union` and :meth:`MultiIndex.union` when index contains duplicates (:issue:`48900`) - Performance improvement in :meth:`Series.rank` for pyarrow-backed dtypes (:issue:`50264`) +- Performance improvement in :meth:`Series.searchsorted` for pyarrow-backed dtypes (:issue:`50447`) - Performance improvement in :meth:`Series.fillna` for extension array dtypes (:issue:`49722`, :issue:`50078`) - Performance improvement in :meth:`Index.join`, :meth:`Index.intersection` and :meth:`Index.union` for masked dtypes when :class:`Index` is monotonic (:issue:`50310`) - Performance improvement for :meth:`Series.value_counts` with nullable dtype (:issue:`48338`) diff --git a/pandas/core/arrays/arrow/array.py b/pandas/core/arrays/arrow/array.py index d0c17161dea0b..de85ed67e7e8c 100644 --- a/pandas/core/arrays/arrow/array.py +++ b/pandas/core/arrays/arrow/array.py @@ -3,6 +3,7 @@ from typing import ( TYPE_CHECKING, Any, + Literal, TypeVar, cast, ) @@ -116,6 +117,11 @@ def floordiv_compat( } if TYPE_CHECKING: + from pandas._typing import ( + NumpySorter, + NumpyValueArrayLike, + ) + from pandas import Series ArrowExtensionArrayT = TypeVar("ArrowExtensionArrayT", bound="ArrowExtensionArray") @@ -693,6 +699,23 @@ def round( """ return type(self)(pc.round(self._data, ndigits=decimals)) + @doc(ExtensionArray.searchsorted) + def searchsorted( + self, + value: NumpyValueArrayLike | ExtensionArray, + side: Literal["left", "right"] = "left", + sorter: NumpySorter = None, + ) -> npt.NDArray[np.intp] | np.intp: + if self._hasna: + raise ValueError( + "searchsorted requires array to be sorted, which is impossible " + "with NAs present." 
+ ) + if isinstance(value, ExtensionArray): + value = value.astype(object) + # Base class searchsorted would cast to object, which is *much* slower. + return self.to_numpy().searchsorted(value, side=side, sorter=sorter) + def take( self, indices: TakeIndexer, diff --git a/pandas/core/arrays/string_.py b/pandas/core/arrays/string_.py index e5fb3fc3ff836..9b26db07fc28f 100644 --- a/pandas/core/arrays/string_.py +++ b/pandas/core/arrays/string_.py @@ -1,6 +1,9 @@ from __future__ import annotations -from typing import TYPE_CHECKING +from typing import ( + TYPE_CHECKING, + Literal, +) import numpy as np @@ -54,6 +57,11 @@ if TYPE_CHECKING: import pyarrow + from pandas._typing import ( + NumpySorter, + NumpyValueArrayLike, + ) + from pandas import Series @@ -492,6 +500,20 @@ def memory_usage(self, deep: bool = False) -> int: return result + lib.memory_usage_of_objects(self._ndarray) return result + @doc(ExtensionArray.searchsorted) + def searchsorted( + self, + value: NumpyValueArrayLike | ExtensionArray, + side: Literal["left", "right"] = "left", + sorter: NumpySorter = None, + ) -> npt.NDArray[np.intp] | np.intp: + if self._hasna: + raise ValueError( + "searchsorted requires array to be sorted, which is impossible " + "with NAs present." 
+ ) + return super().searchsorted(value=value, side=side, sorter=sorter) + def _cmp_method(self, other, op): from pandas.arrays import BooleanArray diff --git a/pandas/tests/extension/test_arrow.py b/pandas/tests/extension/test_arrow.py index c1785591f41a9..02f72d67673ae 100644 --- a/pandas/tests/extension/test_arrow.py +++ b/pandas/tests/extension/test_arrow.py @@ -1553,3 +1553,20 @@ def test_round(): result = ser.round(-1) expected = pd.Series([120.0, pd.NA, 60.0], dtype=dtype) tm.assert_series_equal(result, expected) + + +def test_searchsorted_with_na_raises(data_for_sorting, as_series): + # GH50447 + b, c, a = data_for_sorting + arr = data_for_sorting.take([2, 0, 1]) # to get [a, b, c] + arr[-1] = pd.NA + + if as_series: + arr = pd.Series(arr) + + msg = ( + "searchsorted requires array to be sorted, " + "which is impossible with NAs present." + ) + with pytest.raises(ValueError, match=msg): + arr.searchsorted(b) diff --git a/pandas/tests/extension/test_string.py b/pandas/tests/extension/test_string.py index de7967a8578b5..3e865947aa968 100644 --- a/pandas/tests/extension/test_string.py +++ b/pandas/tests/extension/test_string.py @@ -420,3 +420,20 @@ def arrow_not_supported(self, data, request): reason="2D support not implemented for ArrowStringArray" ) request.node.add_marker(mark) + + +def test_searchsorted_with_na_raises(data_for_sorting, as_series): + # GH50447 + b, c, a = data_for_sorting + arr = data_for_sorting.take([2, 0, 1]) # to get [a, b, c] + arr[-1] = pd.NA + + if as_series: + arr = pd.Series(arr) + + msg = ( + "searchsorted requires array to be sorted, " + "which is impossible with NAs present." + ) + with pytest.raises(ValueError, match=msg): + arr.searchsorted(b)
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature - [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit). - [x] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions. - [x] Added an entry in the latest `doc/source/whatsnew/v2.0.0.rst` file if fixing a bug or adding a new feature. Perf improvement in `ArrowExtensionArray.searchsorted`: ``` import pandas as pd import numpy as np arr = pd.array(np.arange(10**6), dtype="int64[pyarrow]") %timeit arr.searchsorted(500) # 367 ms Β± 2.52 ms per loop (mean Β± std. dev. of 7 runs, 1 loop each) <- main # 21.3 Β΅s Β± 1.18 Β΅s per loop (mean Β± std. dev. of 7 runs, 10,000 loops each) <- PR ```
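The new `ValueError` exists because binary search presumes a totally ordered, sorted array, which missing values break; a minimal numpy-only sketch of the same guard (the function name is illustrative, not the pandas implementation):

```python
import numpy as np

def guarded_searchsorted(values: np.ndarray, value, side: str = "left"):
    # NaN compares False with everything, so a binary search over data
    # containing it can land anywhere; refuse up front instead of
    # returning a meaningless position.
    if np.isnan(values).any():
        raise ValueError(
            "searchsorted requires array to be sorted, which is impossible "
            "with NAs present."
        )
    return values.searchsorted(value, side=side)

print(guarded_searchsorted(np.array([1.0, 2.0, 4.0]), 3.0))  # 2
try:
    guarded_searchsorted(np.array([1.0, np.nan, 4.0]), 3.0)
except ValueError as err:
    print("raised:", err)
```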
https://api.github.com/repos/pandas-dev/pandas/pulls/50447
2022-12-27T11:44:21Z
2023-01-05T23:29:27Z
2023-01-05T23:29:27Z
2023-01-19T04:31:28Z
TST: Add test_ to four tests in test_readlines.py and fix two of them
diff --git a/.pre-commit-config.yaml b/.pre-commit-config.yaml index 1c3dd35ef47f5..6b531215813d3 100644 --- a/.pre-commit-config.yaml +++ b/.pre-commit-config.yaml @@ -342,4 +342,3 @@ repos: exclude: | (?x) ^pandas/tests/generic/test_generic.py # GH50380 - |^pandas/tests/io/json/test_readlines.py # GH50378 diff --git a/pandas/tests/io/json/test_readlines.py b/pandas/tests/io/json/test_readlines.py index 1a414ebf73abe..a76627fb08147 100644 --- a/pandas/tests/io/json/test_readlines.py +++ b/pandas/tests/io/json/test_readlines.py @@ -336,7 +336,7 @@ def test_to_json_append_mode(mode_): df.to_json(mode=mode_, lines=False, orient="records") -def to_json_append_output_consistent_columns(): +def test_to_json_append_output_consistent_columns(): # GH 35849 # Testing that resulting output reads in as expected. # Testing same columns, new rows @@ -354,7 +354,7 @@ def to_json_append_output_consistent_columns(): tm.assert_frame_equal(result, expected) -def to_json_append_output_inconsistent_columns(): +def test_to_json_append_output_inconsistent_columns(): # GH 35849 # Testing that resulting output reads in as expected. # Testing one new column, one old column, new rows @@ -378,7 +378,7 @@ def to_json_append_output_inconsistent_columns(): tm.assert_frame_equal(result, expected) -def to_json_append_output_different_columns(): +def test_to_json_append_output_different_columns(): # GH 35849 # Testing that resulting output reads in as expected. 
# Testing same, differing and new columns @@ -394,7 +394,7 @@ def to_json_append_output_different_columns(): "col3": [None, None, None, None, "!", "#", None, None], "col4": [None, None, None, None, None, None, True, False], } - ) + ).astype({"col4": "float"}) with tm.ensure_clean("test.json") as path: # Save dataframes to the same file df1.to_json(path, mode="a", lines=True, orient="records") @@ -407,7 +407,7 @@ def to_json_append_output_different_columns(): tm.assert_frame_equal(result, expected) -def to_json_append_output_different_columns_reordered(): +def test_to_json_append_output_different_columns_reordered(): # GH 35849 # Testing that resulting output reads in as expected. # Testing specific result column order. @@ -424,7 +424,7 @@ def to_json_append_output_different_columns_reordered(): "col3": [None, None, "!", "#", None, None, None, None], "col1": [None, None, None, None, 3, 4, 1, 2], } - ) + ).astype({"col4": "float"}) with tm.ensure_clean("test.json") as path: # Save dataframes to the same file df4.to_json(path, mode="a", lines=True, orient="records")
Four tests in https://github.com/pandas-dev/pandas/blob/main/pandas/tests/io/json/test_readlines.py weren't running because they didn't have `test_` in front of their names, so pytest never collected them; fixed those. Two of those tests were not passing because the JSON round trip was turning the booleans into numbers (the column also holds missing values, which forces a numeric dtype), conflicting with the booleans in the compared data, so the expected frames now cast that column to `float`. Had detailed comments explaining this but shortened them in order to pass Flake8. - [x] closes #50378 - [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
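The tests went missing because pytest's default collection only picks up functions matching its `python_functions` pattern (default `test*`); a toy collector mirroring that convention (`collect_names` is illustrative, not pytest API):

```python
import inspect

def collect_names(namespace: dict) -> list[str]:
    """Mimic pytest's default python_functions = "test*" rule."""
    return sorted(
        name for name, obj in namespace.items()
        if inspect.isfunction(obj) and name.startswith("test")
    )

def to_json_append_output_consistent_columns():       # silently ignored pre-fix
    pass

def test_to_json_append_output_consistent_columns():  # collected post-fix
    pass

print(collect_names(globals()))
```

A function without the prefix fails silently: it is neither run nor reported, which is why the breakage went unnoticed.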
https://api.github.com/repos/pandas-dev/pandas/pulls/50445
2022-12-27T03:03:14Z
2022-12-27T19:00:35Z
2022-12-27T19:00:35Z
2022-12-27T19:06:10Z
BUG: groupby.nth with NA values in groupings after subsetting to SeriesGroupBy
diff --git a/doc/source/whatsnew/v2.0.0.rst b/doc/source/whatsnew/v2.0.0.rst index 12b0d90e68ab9..1c7e9f09fcf21 100644 --- a/doc/source/whatsnew/v2.0.0.rst +++ b/doc/source/whatsnew/v2.0.0.rst @@ -924,6 +924,7 @@ Groupby/resample/rolling - Bug in :meth:`.SeriesGroupBy.describe` with ``as_index=False`` would have the incorrect shape (:issue:`49256`) - Bug in :class:`.DataFrameGroupBy` and :class:`.SeriesGroupBy` with ``dropna=False`` would drop NA values when the grouper was categorical (:issue:`36327`) - Bug in :meth:`.SeriesGroupBy.nunique` would incorrectly raise when the grouper was an empty categorical and ``observed=True`` (:issue:`21334`) +- Bug in :meth:`.SeriesGroupBy.nth` would raise when grouper contained NA values after subsetting from a :class:`DataFrameGroupBy` (:issue:`26454`) Reshaping ^^^^^^^^^ diff --git a/pandas/core/groupby/groupby.py b/pandas/core/groupby/groupby.py index 11e8769615470..d7b983367b596 100644 --- a/pandas/core/groupby/groupby.py +++ b/pandas/core/groupby/groupby.py @@ -44,6 +44,7 @@ class providing the base-class of operations. ) from pandas._libs.algos import rank_1d import pandas._libs.groupby as libgroupby +from pandas._libs.missing import NA from pandas._typing import ( AnyArrayLike, ArrayLike, @@ -2933,7 +2934,14 @@ def _nth( # (e.g. 
we have selected out # a column that is not in the current object) axis = self.grouper.axis - grouper = axis[axis.isin(dropped.index)] + grouper = self.grouper.codes_info[axis.isin(dropped.index)] + if self.grouper.has_dropped_na: + # Null groups need to still be encoded as -1 when passed to groupby + nulls = grouper == -1 + # error: No overload variant of "where" matches argument types + # "Any", "NAType", "Any" + values = np.where(nulls, NA, grouper) # type: ignore[call-overload] + grouper = Index(values, dtype="Int64") else: diff --git a/pandas/tests/groupby/test_nth.py b/pandas/tests/groupby/test_nth.py index de5025b998b30..77422c28d356f 100644 --- a/pandas/tests/groupby/test_nth.py +++ b/pandas/tests/groupby/test_nth.py @@ -603,8 +603,11 @@ def test_nth_column_order(): def test_nth_nan_in_grouper(dropna): # GH 26011 df = DataFrame( - [[np.nan, 0, 1], ["abc", 2, 3], [np.nan, 4, 5], ["def", 6, 7], [np.nan, 8, 9]], - columns=list("abc"), + { + "a": [np.nan, "a", np.nan, "b", np.nan], + "b": [0, 2, 4, 6, 8], + "c": [1, 3, 5, 7, 9], + } ) result = df.groupby("a").nth(0, dropna=dropna) expected = df.iloc[[1, 3]] @@ -612,6 +615,21 @@ def test_nth_nan_in_grouper(dropna): tm.assert_frame_equal(result, expected) +@pytest.mark.parametrize("dropna", [None, "any", "all"]) +def test_nth_nan_in_grouper_series(dropna): + # GH 26454 + df = DataFrame( + { + "a": [np.nan, "a", np.nan, "b", np.nan], + "b": [0, 2, 4, 6, 8], + } + ) + result = df.groupby("a")["b"].nth(0, dropna=dropna) + expected = df["b"].iloc[[1, 3]] + + tm.assert_series_equal(result, expected) + + def test_first_categorical_and_datetime_data_nat(): # GH 20520 df = DataFrame(
- [x] closes #26454 - [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature - [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit). - [x] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions. - [x] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
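A small sketch of the `-1` sentinel the fix above relies on: pandas encodes null groups as `-1` in the factorized codes, which is why the patch re-maps `-1` back to `NA` before regrouping. Hypothetical data, assuming pandas and numpy are importable.

```python
import numpy as np
import pandas as pd

# factorize encodes missing values with the -1 sentinel in the codes.
codes, uniques = pd.factorize(np.array(["a", None, "b", None], dtype=object))

assert list(codes) == [0, -1, 1, -1]
assert list(uniques) == ["a", "b"]
```

The fix converts those `-1` codes to `NA` (via a nullable `Int64` index) so a second groupby still treats them as null groups.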
https://api.github.com/repos/pandas-dev/pandas/pulls/50444
2022-12-27T02:53:32Z
2022-12-27T19:03:42Z
2022-12-27T19:03:41Z
2022-12-27T19:13:49Z
TST: Verify isin returns consistent results for passed iterables
diff --git a/pandas/tests/series/methods/test_isin.py b/pandas/tests/series/methods/test_isin.py index 92ebee9ffa7a5..3e4857b7abf38 100644 --- a/pandas/tests/series/methods/test_isin.py +++ b/pandas/tests/series/methods/test_isin.py @@ -234,3 +234,15 @@ def test_isin_filtering_with_mixed_object_types(data, is_in): expected = Series([True, False]) tm.assert_series_equal(result, expected) + + +@pytest.mark.parametrize("data", [[1, 2, 3], [1.0, 2.0, 3.0]]) +@pytest.mark.parametrize("isin", [[1, 2], [1.0, 2.0]]) +def test_isin_filtering_on_iterable(data, isin): + # GH 50234 + + ser = Series(data) + result = ser.isin(i for i in isin) + expected_result = Series([True, True, False]) + + tm.assert_series_equal(result, expected_result)
Add tests to verify that the `Series` method `isin` returns consistent results when passed iterables with different dtypes. (See #50234.) - [x] closes #50234 - [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature - [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit). - [x] ~Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.~ (Not applicable.) - [x] ~Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.~ (Not applicable.)
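The behavior under test can be reproduced directly, mirroring the new test in the diff: passing a generator expression to `Series.isin` should give the same result regardless of whether the data and the values are int or float. Assumes pandas is importable.

```python
import pandas as pd

ser = pd.Series([1, 2, 3])

# A generator is a single-pass iterable; isin must materialize it once.
result = ser.isin(x for x in [1.0, 2.0])

assert list(result) == [True, True, False]
```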
https://api.github.com/repos/pandas-dev/pandas/pulls/50443
2022-12-26T20:54:01Z
2022-12-27T23:11:00Z
2022-12-27T23:11:00Z
2022-12-27T23:11:08Z
REF: share TimedeltaArray division code
diff --git a/pandas/core/arrays/timedeltas.py b/pandas/core/arrays/timedeltas.py index fe7ca3b5ba4ed..bc65d06789e5a 100644 --- a/pandas/core/arrays/timedeltas.py +++ b/pandas/core/arrays/timedeltas.py @@ -1,6 +1,7 @@ from __future__ import annotations from datetime import timedelta +import operator from typing import ( TYPE_CHECKING, Iterator, @@ -65,7 +66,7 @@ from pandas.core.arrays import datetimelike as dtl from pandas.core.arrays._ranges import generate_regular_range import pandas.core.common as com -from pandas.core.construction import extract_array +from pandas.core.ops import roperator from pandas.core.ops.common import unpack_zerodim_and_defer if TYPE_CHECKING: @@ -492,10 +493,11 @@ def __mul__(self, other) -> TimedeltaArray: __rmul__ = __mul__ - @unpack_zerodim_and_defer("__truediv__") - def __truediv__(self, other): - # timedelta / X is well-defined for timedelta-like or numeric X - + def _scalar_divlike_op(self, other, op): + """ + Shared logic for __truediv__, __rtruediv__, __floordiv__, __rfloordiv__ + with scalar 'other'. 
+ """ if isinstance(other, self._recognized_scalars): other = Timedelta(other) # mypy assumes that __new__ returns an instance of the class @@ -507,31 +509,86 @@ def __truediv__(self, other): return result # otherwise, dispatch to Timedelta implementation - return self._ndarray / other + return op(self._ndarray, other) - elif lib.is_scalar(other): - # assume it is numeric - result = self._ndarray / other + else: + # caller is responsible for checking lib.is_scalar(other) + # assume other is numeric, otherwise numpy will raise + + if op in [roperator.rtruediv, roperator.rfloordiv]: + raise TypeError( + f"Cannot divide {type(other).__name__} by {type(self).__name__}" + ) + + result = op(self._ndarray, other) freq = None + if self.freq is not None: - # Tick division is not implemented, so operate on Timedelta - freq = self.freq.delta / other - freq = to_offset(freq) + # Note: freq gets division, not floor-division, even if op + # is floordiv. + freq = self.freq / other + + # TODO: 2022-12-24 test_ufunc_coercions, test_tdi_ops_attributes + # get here for truediv, no tests for floordiv + + if op is operator.floordiv: + if freq.nanos == 0 and self.freq.nanos != 0: + # e.g. if self.freq is Nano(1) then dividing by 2 + # rounds down to zero + # TODO: 2022-12-24 should implement the same check + # for truediv case + freq = None + return type(self)._simple_new(result, dtype=result.dtype, freq=freq) + def _cast_divlike_op(self, other): if not hasattr(other, "dtype"): # e.g. list, tuple other = np.array(other) if len(other) != len(self): raise ValueError("Cannot divide vectors with unequal lengths") + return other - if is_timedelta64_dtype(other.dtype): - # let numpy handle it - return self._ndarray / other + def _vector_divlike_op(self, other, op) -> np.ndarray | TimedeltaArray: + """ + Shared logic for __truediv__, __floordiv__, and their reversed versions + with timedelta64-dtype ndarray other. 
+ """ + # Let numpy handle it + result = op(self._ndarray, np.asarray(other)) - elif is_object_dtype(other.dtype): - other = extract_array(other, extract_numpy=True) + if (is_integer_dtype(other.dtype) or is_float_dtype(other.dtype)) and op in [ + operator.truediv, + operator.floordiv, + ]: + return type(self)._simple_new(result, dtype=result.dtype) + + if op in [operator.floordiv, roperator.rfloordiv]: + mask = self.isna() | isna(other) + if mask.any(): + result = result.astype(np.float64) + np.putmask(result, mask, np.nan) + + return result + + @unpack_zerodim_and_defer("__truediv__") + def __truediv__(self, other): + # timedelta / X is well-defined for timedelta-like or numeric X + op = operator.truediv + if is_scalar(other): + return self._scalar_divlike_op(other, op) + + other = self._cast_divlike_op(other) + if ( + is_timedelta64_dtype(other.dtype) + or is_integer_dtype(other.dtype) + or is_float_dtype(other.dtype) + ): + return self._vector_divlike_op(other, op) + + if is_object_dtype(other.dtype): + other = np.asarray(other) if self.ndim > 1: res_cols = [left / right for left, right in zip(self, other)] res_cols2 = [x.reshape(1, -1) for x in res_cols] @@ -542,40 +599,18 @@ def __truediv__(self, other): return result else: - result = self._ndarray / other - return type(self)._simple_new(result, dtype=result.dtype) + return NotImplemented @unpack_zerodim_and_defer("__rtruediv__") def __rtruediv__(self, other): # X / timedelta is defined only for timedelta-like X - if isinstance(other, self._recognized_scalars): - other = Timedelta(other) - # mypy assumes that __new__ returns an instance of the class - # github.com/python/mypy/issues/1020 - if cast("Timedelta | NaTType", other) is NaT: - # specifically timedelta64-NaT - result = np.empty(self.shape, dtype=np.float64) - result.fill(np.nan) - return result - - # otherwise, dispatch to Timedelta implementation - return other / self._ndarray - - elif lib.is_scalar(other): - raise TypeError( - f"Cannot divide 
{type(other).__name__} by {type(self).__name__}" - ) - - if not hasattr(other, "dtype"): - # e.g. list, tuple - other = np.array(other) - - if len(other) != len(self): - raise ValueError("Cannot divide vectors with unequal lengths") + op = roperator.rtruediv + if is_scalar(other): + return self._scalar_divlike_op(other, op) + other = self._cast_divlike_op(other) if is_timedelta64_dtype(other.dtype): - # let numpy handle it - return other / self._ndarray + return self._vector_divlike_op(other, op) elif is_object_dtype(other.dtype): # Note: unlike in __truediv__, we do not _need_ to do type @@ -585,60 +620,24 @@ def __rtruediv__(self, other): return np.array(result_list) else: - raise TypeError( - f"Cannot divide {other.dtype} data by {type(self).__name__}" - ) + return NotImplemented @unpack_zerodim_and_defer("__floordiv__") def __floordiv__(self, other): - + op = operator.floordiv if is_scalar(other): - if isinstance(other, self._recognized_scalars): - other = Timedelta(other) - # mypy assumes that __new__ returns an instance of the class - # github.com/python/mypy/issues/1020 - if cast("Timedelta | NaTType", other) is NaT: - # treat this specifically as timedelta-NaT - result = np.empty(self.shape, dtype=np.float64) - result.fill(np.nan) - return result - - # dispatch to Timedelta implementation - return other.__rfloordiv__(self._ndarray) - - # at this point we should only have numeric scalars; anything - # else will raise - result = self._ndarray // other - freq = None - if self.freq is not None: - # Note: freq gets division, not floor-division - freq = self.freq / other - if freq.nanos == 0 and self.freq.nanos != 0: - # e.g. 
if self.freq is Nano(1) then dividing by 2 - # rounds down to zero - freq = None - return type(self)(result, freq=freq) - - if not hasattr(other, "dtype"): - # list, tuple - other = np.array(other) - if len(other) != len(self): - raise ValueError("Cannot divide with unequal lengths") + return self._scalar_divlike_op(other, op) - if is_timedelta64_dtype(other.dtype): - other = type(self)(other) - - # numpy timedelta64 does not natively support floordiv, so operate - # on the i8 values - result = self.asi8 // other.asi8 - mask = self._isnan | other._isnan - if mask.any(): - result = result.astype(np.float64) - np.putmask(result, mask, np.nan) - return result + other = self._cast_divlike_op(other) + if ( + is_timedelta64_dtype(other.dtype) + or is_integer_dtype(other.dtype) + or is_float_dtype(other.dtype) + ): + return self._vector_divlike_op(other, op) elif is_object_dtype(other.dtype): - other = extract_array(other, extract_numpy=True) + other = np.asarray(other) if self.ndim > 1: res_cols = [left // right for left, right in zip(self, other)] res_cols2 = [x.reshape(1, -1) for x in res_cols] @@ -649,52 +648,18 @@ def __floordiv__(self, other): assert result.dtype == object return result - elif is_integer_dtype(other.dtype) or is_float_dtype(other.dtype): - result = self._ndarray // other - return type(self)(result) - else: - dtype = getattr(other, "dtype", type(other).__name__) - raise TypeError(f"Cannot divide {dtype} by {type(self).__name__}") + return NotImplemented @unpack_zerodim_and_defer("__rfloordiv__") def __rfloordiv__(self, other): - + op = roperator.rfloordiv if is_scalar(other): - if isinstance(other, self._recognized_scalars): - other = Timedelta(other) - # mypy assumes that __new__ returns an instance of the class - # github.com/python/mypy/issues/1020 - if cast("Timedelta | NaTType", other) is NaT: - # treat this specifically as timedelta-NaT - result = np.empty(self.shape, dtype=np.float64) - result.fill(np.nan) - return result - - # dispatch to 
Timedelta implementation - return other.__floordiv__(self._ndarray) - - raise TypeError( - f"Cannot divide {type(other).__name__} by {type(self).__name__}" - ) - - if not hasattr(other, "dtype"): - # list, tuple - other = np.array(other) - - if len(other) != len(self): - raise ValueError("Cannot divide with unequal lengths") + return self._scalar_divlike_op(other, op) + other = self._cast_divlike_op(other) if is_timedelta64_dtype(other.dtype): - other = type(self)(other) - # numpy timedelta64 does not natively support floordiv, so operate - # on the i8 values - result = other.asi8 // self.asi8 - mask = self._isnan | other._isnan - if mask.any(): - result = result.astype(np.float64) - np.putmask(result, mask, np.nan) - return result + return self._vector_divlike_op(other, op) elif is_object_dtype(other.dtype): result_list = [other[n] // self[n] for n in range(len(self))] @@ -702,8 +667,7 @@ def __rfloordiv__(self, other): return result else: - dtype = getattr(other, "dtype", type(other).__name__) - raise TypeError(f"Cannot divide {dtype} by {type(self).__name__}") + return NotImplemented @unpack_zerodim_and_defer("__mod__") def __mod__(self, other): diff --git a/pandas/tests/arithmetic/test_numeric.py b/pandas/tests/arithmetic/test_numeric.py index 68d91e8506dc5..93225d937697f 100644 --- a/pandas/tests/arithmetic/test_numeric.py +++ b/pandas/tests/arithmetic/test_numeric.py @@ -177,10 +177,12 @@ def test_div_td64arr(self, left, box_cls): result = right // left tm.assert_equal(result, expected) - msg = "Cannot divide" + # (true_) needed for min-versions build 2022-12-26 + msg = "ufunc '(true_)?divide' cannot use operands with types" with pytest.raises(TypeError, match=msg): left / right + msg = "ufunc 'floor_divide' cannot use operands with types" with pytest.raises(TypeError, match=msg): left // right diff --git a/pandas/tests/arithmetic/test_timedelta64.py b/pandas/tests/arithmetic/test_timedelta64.py index 4e537c8c4c993..c9bfb5e29460e 100644 --- 
a/pandas/tests/arithmetic/test_timedelta64.py +++ b/pandas/tests/arithmetic/test_timedelta64.py @@ -2019,6 +2019,7 @@ def test_td64arr_div_numeric_array( "cannot perform __truediv__", "unsupported operand", "Cannot divide", + "ufunc 'divide' cannot use operands with types", ] ) with pytest.raises(TypeError, match=pattern):
- [ ] closes #xxxx (Replace xxxx with the GitHub issue number) - [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature - [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit). - [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions. - [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
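The scalar division rules the refactor above consolidates (`timedelta / timedelta` is a ratio, `timedelta / number` scales, `number / timedelta` raises) have a direct analogue in the standard library's `datetime.timedelta`. A hedged stdlib sketch, not the pandas code itself:

```python
from datetime import timedelta

# timedelta / timedelta yields a float ratio; floordiv floors to an int.
assert timedelta(hours=5) / timedelta(hours=2) == 2.5
assert timedelta(hours=5) // timedelta(hours=2) == 2

# timedelta / number scales the duration ...
assert timedelta(hours=4) / 2 == timedelta(hours=2)

# ... but number / timedelta is undefined, matching the TypeError the
# rtruediv/rfloordiv paths raise for numeric scalars.
try:
    2 / timedelta(hours=4)
except TypeError:
    pass
else:
    raise AssertionError("expected TypeError")
```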
https://api.github.com/repos/pandas-dev/pandas/pulls/50441
2022-12-26T18:45:00Z
2022-12-28T00:48:55Z
2022-12-28T00:48:55Z
2022-12-28T01:24:20Z
BUG: Fix issue where df.groupby.resample.size returns wide DF instead of MultiIndex Series.
diff --git a/doc/source/whatsnew/v2.0.0.rst b/doc/source/whatsnew/v2.0.0.rst index 13b60f7f05352..59a27b0969a1e 100644 --- a/doc/source/whatsnew/v2.0.0.rst +++ b/doc/source/whatsnew/v2.0.0.rst @@ -951,6 +951,7 @@ Groupby/resample/rolling - Bug in :meth:`.SeriesGroupBy.nth` would raise when grouper contained NA values after subsetting from a :class:`DataFrameGroupBy` (:issue:`26454`) - Bug in :meth:`DataFrame.groupby` would not include a :class:`.Grouper` specified by ``key`` in the result when ``as_index=False`` (:issue:`50413`) - Bug in :meth:`.DataFrameGrouBy.value_counts` would raise when used with a :class:`.TimeGrouper` (:issue:`50486`) +- Bug in :meth:`Resampler.size` caused a wide :class:`DataFrame` to be returned instead of a :class:`Series` with :class:`MultiIndex` (:issue:`46826`) - Reshaping diff --git a/pandas/core/resample.py b/pandas/core/resample.py index 4af4be20a056e..d6cba824767b5 100644 --- a/pandas/core/resample.py +++ b/pandas/core/resample.py @@ -989,6 +989,12 @@ def var( @doc(GroupBy.size) def size(self): result = self._downsample("size") + + # If the result is a non-empty DataFrame we stack to get a Series + # GH 46826 + if isinstance(result, ABCDataFrame) and not result.empty: + result = result.stack() + if not len(self.ax): from pandas import Series diff --git a/pandas/tests/resample/test_resampler_grouper.py b/pandas/tests/resample/test_resampler_grouper.py index 0432cf397067d..a521e24aa6022 100644 --- a/pandas/tests/resample/test_resampler_grouper.py +++ b/pandas/tests/resample/test_resampler_grouper.py @@ -515,3 +515,25 @@ def test_resample_empty_Dataframe(keys): expected.index.name = keys[0] tm.assert_frame_equal(result, expected) + + +def test_groupby_resample_size_all_index_same(): + # GH 46826 + df = DataFrame( + {"A": [1] * 3 + [2] * 3 + [1] * 3 + [2] * 3, "B": np.arange(12)}, + index=date_range("31/12/2000 18:00", freq="H", periods=12), + ) + result = df.groupby("A").resample("D").size() + expected = Series( + 3, + 
index=pd.MultiIndex.from_tuples( + [ + (1, Timestamp("2000-12-31")), + (1, Timestamp("2001-01-01")), + (2, Timestamp("2000-12-31")), + (2, Timestamp("2001-01-01")), + ], + names=["A", None], + ), + ) + tm.assert_series_equal(result, expected)
- [x] closes #46826 - [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature - [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit). - [x] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
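The fix above calls `result.stack()` on the wide frame; a minimal sketch of what that does (hypothetical data, assuming pandas and numpy are importable): stacking moves the column labels into an inner index level, producing a `Series` with a `MultiIndex`.

```python
import pandas as pd

wide = pd.DataFrame(
    [[3, 3], [3, 3]],
    index=pd.Index([1, 2], name="A"),
    columns=pd.to_datetime(["2000-12-31", "2001-01-01"]),
)

# Columns become the inner index level; the wide frame collapses
# into a long Series, which is the shape resample(...).size() should have.
long = wide.stack()

assert isinstance(long, pd.Series)
assert isinstance(long.index, pd.MultiIndex)
assert (long == 3).all()
```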
https://api.github.com/repos/pandas-dev/pandas/pulls/50440
2022-12-26T17:07:53Z
2023-01-04T18:08:14Z
2023-01-04T18:08:14Z
2023-01-04T18:21:31Z
CLN, TST: clean up and increase coverage in tslib.pyx
diff --git a/pandas/__init__.py b/pandas/__init__.py index 585a1ae341217..951cb38656d0b 100644 --- a/pandas/__init__.py +++ b/pandas/__init__.py @@ -9,10 +9,10 @@ for _dependency in _hard_dependencies: try: __import__(_dependency) - except ImportError as _e: + except ImportError as _e: # pragma: no cover _missing_dependencies.append(f"{_dependency}: {_e}") -if _missing_dependencies: +if _missing_dependencies: # pragma: no cover raise ImportError( "Unable to import required dependencies:\n" + "\n".join(_missing_dependencies) ) diff --git a/pandas/_libs/tslib.pyi b/pandas/_libs/tslib.pyi index 9a0364de2f02c..ab94c4d59c5fc 100644 --- a/pandas/_libs/tslib.pyi +++ b/pandas/_libs/tslib.pyi @@ -8,7 +8,7 @@ def format_array_from_datetime( values: npt.NDArray[np.int64], tz: tzinfo | None = ..., format: str | None = ..., - na_rep: object = ..., + na_rep: str | float = ..., reso: int = ..., # NPY_DATETIMEUNIT ) -> npt.NDArray[np.object_]: ... def array_with_unit_to_datetime( diff --git a/pandas/_libs/tslib.pyx b/pandas/_libs/tslib.pyx index c1a30e03235b5..976a53e9117de 100644 --- a/pandas/_libs/tslib.pyx +++ b/pandas/_libs/tslib.pyx @@ -87,11 +87,6 @@ def _test_parse_iso8601(ts: str): obj = _TSObject() - if ts == "now": - return Timestamp.utcnow() - elif ts == "today": - return Timestamp.now().normalize() - string_to_dts(ts, &obj.dts, &out_bestunit, &out_local, &out_tzoffset, True) obj.value = npy_datetimestruct_to_datetime(NPY_FR_ns, &obj.dts) check_dts_bounds(&obj.dts) @@ -109,7 +104,7 @@ def format_array_from_datetime( ndarray values, tzinfo tz=None, str format=None, - object na_rep=None, + na_rep: str | float = "NaT", NPY_DATETIMEUNIT reso=NPY_FR_ns, ) -> np.ndarray: """ @@ -145,9 +140,6 @@ def format_array_from_datetime( object[::1] res_flat = result.ravel() # should NOT be a copy cnp.flatiter it = cnp.PyArray_IterNew(values) - if na_rep is None: - na_rep = "NaT" - if tz is None: # if we don't have a format nor tz, then choose # a format based on precision diff --git 
a/pandas/tests/indexes/datetimes/test_formats.py b/pandas/tests/indexes/datetimes/test_formats.py index 01e6c0ad9b5c3..a927799e39783 100644 --- a/pandas/tests/indexes/datetimes/test_formats.py +++ b/pandas/tests/indexes/datetimes/test_formats.py @@ -44,6 +44,18 @@ def test_format_native_types(): result = index._format_native_types(na_rep="pandas") tm.assert_numpy_array_equal(result, expected) + result = index._format_native_types(date_format="%Y-%m-%d %H:%M:%S.%f") + expected = np.array( + ["2017-01-01 00:00:00.000000", "NaT", "2017-01-03 00:00:00.000000"], + dtype=object, + ) + tm.assert_numpy_array_equal(result, expected) + + # invalid format + result = index._format_native_types(date_format="foo") + expected = np.array(["foo", "NaT", "foo"], dtype=object) + tm.assert_numpy_array_equal(result, expected) + class TestDatetimeIndexRendering: def test_dti_repr_short(self): diff --git a/pandas/tests/tslibs/test_parse_iso8601.py b/pandas/tests/tslibs/test_parse_iso8601.py index 1c01e826d9794..72b136e17445e 100644 --- a/pandas/tests/tslibs/test_parse_iso8601.py +++ b/pandas/tests/tslibs/test_parse_iso8601.py @@ -4,6 +4,8 @@ from pandas._libs import tslib +from pandas import Timestamp + @pytest.mark.parametrize( "date_str, exp", @@ -18,6 +20,7 @@ ("2011\\01\\02", datetime(2011, 1, 2)), ("2013-01-01 05:30:00", datetime(2013, 1, 1, 5, 30)), ("2013-1-1 5:30:00", datetime(2013, 1, 1, 5, 30)), + ("2013-1-1 5:30:00+01:00", Timestamp(2013, 1, 1, 5, 30, tz="UTC+01:00")), ], ) def test_parsers_iso8601(date_str, exp):
- [ ] closes #xxxx (Replace xxxx with the GitHub issue number) - [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature - [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit). - [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions. - [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
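The new tz-aware ISO-8601 test case above has a stdlib counterpart: `datetime.fromisoformat` also accepts a space separator and a numeric UTC offset. A hedged illustration of the input being parsed, not the Cython parser itself:

```python
from datetime import datetime, timedelta

# Space separator and +01:00 offset, like the new test_parsers_iso8601 case.
ts = datetime.fromisoformat("2013-01-01 05:30:00+01:00")

assert ts.utcoffset() == timedelta(hours=1)
assert ts.replace(tzinfo=None) == datetime(2013, 1, 1, 5, 30)
```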
https://api.github.com/repos/pandas-dev/pandas/pulls/50439
2022-12-26T15:08:32Z
2022-12-27T19:20:21Z
2022-12-27T19:20:21Z
2022-12-27T19:20:27Z
BUG: Series(pyarrow-backed).round raising AttributeError
diff --git a/doc/source/whatsnew/v2.0.0.rst b/doc/source/whatsnew/v2.0.0.rst index 75ba169600962..3b82e31423590 100644 --- a/doc/source/whatsnew/v2.0.0.rst +++ b/doc/source/whatsnew/v2.0.0.rst @@ -947,6 +947,7 @@ ExtensionArray ^^^^^^^^^^^^^^ - Bug in :meth:`Series.mean` overflowing unnecessarily with nullable integers (:issue:`48378`) - Bug in :meth:`Series.tolist` for nullable dtypes returning numpy scalars instead of python scalars (:issue:`49890`) +- Bug in :meth:`Series.round` for pyarrow-backed dtypes raising ``AttributeError`` (:issue:`50437`) - Bug when concatenating an empty DataFrame with an ExtensionDtype to another DataFrame with the same ExtensionDtype, the resulting dtype turned into object (:issue:`48510`) - Bug in :meth:`array.PandasArray.to_numpy` raising with ``NA`` value when ``na_value`` is specified (:issue:`40638`) diff --git a/pandas/core/arrays/arrow/array.py b/pandas/core/arrays/arrow/array.py index 6250c298f291f..0e951f975ec22 100644 --- a/pandas/core/arrays/arrow/array.py +++ b/pandas/core/arrays/arrow/array.py @@ -661,6 +661,32 @@ def reshape(self, *args, **kwargs): f"as backed by a 1D pyarrow.ChunkedArray." ) + def round( + self: ArrowExtensionArrayT, decimals: int = 0, *args, **kwargs + ) -> ArrowExtensionArrayT: + """ + Round each value in the array a to the given number of decimals. + + Parameters + ---------- + decimals : int, default 0 + Number of decimal places to round to. If decimals is negative, + it specifies the number of positions to the left of the decimal point. + *args, **kwargs + Additional arguments and keywords have no effect. + + Returns + ------- + ArrowExtensionArray + Rounded values of the ArrowExtensionArray. + + See Also + -------- + DataFrame.round : Round values of a DataFrame. + Series.round : Round values of a Series. 
+ """ + return type(self)(pc.round(self._data, ndigits=decimals)) + def take( self, indices: TakeIndexer, diff --git a/pandas/tests/extension/test_arrow.py b/pandas/tests/extension/test_arrow.py index 9b42b86efd0d0..4bcc8d2cf4bd9 100644 --- a/pandas/tests/extension/test_arrow.py +++ b/pandas/tests/extension/test_arrow.py @@ -1531,3 +1531,17 @@ def test_setitem_invalid_dtype(data): msg = "cannot be converted" with pytest.raises(err, match=msg): data[:] = fill_value + + +def test_round(): + dtype = "float64[pyarrow]" + + ser = pd.Series([0.0, 1.23, 2.56, pd.NA], dtype=dtype) + result = ser.round(1) + expected = pd.Series([0.0, 1.2, 2.6, pd.NA], dtype=dtype) + tm.assert_series_equal(result, expected) + + ser = pd.Series([123.4, pd.NA, 56.78], dtype=dtype) + result = ser.round(-1) + expected = pd.Series([120.0, pd.NA, 60.0], dtype=dtype) + tm.assert_series_equal(result, expected)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature - [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit). - [x] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions. - [x] Added an entry in the latest `doc/source/whatsnew/v2.0.0.rst` file if fixing a bug or adding a new feature. Adding `ArrowExtensionArray.round` to fix the following: ``` import pandas as pd pd.Series([1.23], dtype="float64[pyarrow]").round(1) # AttributeError: 'ArrowExtensionArray' object has no attribute 'round' ```
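The negative-`decimals` semantics documented in the new `round` method above match Python's built-in `round`: negative values round to the left of the decimal point. A quick stdlib check of the same values used in the PR's test:

```python
# Positive decimals round fractional digits; negative decimals round
# positions to the left of the decimal point.
assert round(2.56, 1) == 2.6
assert round(123.4, -1) == 120.0
assert round(56.78, -1) == 60.0
```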
https://api.github.com/repos/pandas-dev/pandas/pulls/50437
2022-12-25T17:44:10Z
2022-12-27T19:23:35Z
2022-12-27T19:23:35Z
2023-01-19T04:31:32Z
TST: Make get_array smarter to work for masked arrays
diff --git a/pandas/tests/copy_view/test_util.py b/pandas/tests/copy_view/test_util.py new file mode 100644 index 0000000000000..ff55330d70b28 --- /dev/null +++ b/pandas/tests/copy_view/test_util.py @@ -0,0 +1,14 @@ +import numpy as np + +from pandas import DataFrame +from pandas.tests.copy_view.util import get_array + + +def test_get_array_numpy(): + df = DataFrame({"a": [1, 2, 3]}) + assert np.shares_memory(get_array(df, "a"), get_array(df, "a")) + + +def test_get_array_masked(): + df = DataFrame({"a": [1, 2, 3]}, dtype="Int64") + assert np.shares_memory(get_array(df, "a"), get_array(df, "a")) diff --git a/pandas/tests/copy_view/util.py b/pandas/tests/copy_view/util.py index 9e358c7eec749..0a111e6c6cca2 100644 --- a/pandas/tests/copy_view/util.py +++ b/pandas/tests/copy_view/util.py @@ -1,3 +1,6 @@ +from pandas.core.arrays import BaseMaskedArray + + def get_array(df, col): """ Helper method to get array for a DataFrame column. @@ -8,4 +11,7 @@ def get_array(df, col): """ icol = df.columns.get_loc(col) assert isinstance(icol, int) - return df._get_column_array(icol) + arr = df._get_column_array(icol) + if isinstance(arr, BaseMaskedArray): + return arr._data + return arr
- [ ] closes #xxxx (Replace xxxx with the GitHub issue number) - [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature - [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit). - [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions. - [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature. This should work out of the box when testing with masked arrays
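The copy-on-write tests built on this helper assert view/copy relationships with `np.shares_memory`; a minimal sketch of what that check detects (assuming numpy is importable):

```python
import numpy as np

base = np.arange(3)
view = base[:]        # a basic slice is a view into the same buffer
copy = base.copy()    # an explicit copy owns separate memory

assert np.shares_memory(base, view)
assert not np.shares_memory(base, copy)
```

For masked arrays the helper unwraps `arr._data` first, so the same memory check applies to the underlying numpy buffer.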
https://api.github.com/repos/pandas-dev/pandas/pulls/50436
2022-12-25T11:29:03Z
2022-12-27T19:24:51Z
2022-12-27T19:24:51Z
2022-12-28T19:17:15Z
BUG: inconsistent handling of exact=False case in to_datetime parsing
diff --git a/.pre-commit-config.yaml b/.pre-commit-config.yaml index 6b531215813d3..f3158e64df8dd 100644 --- a/.pre-commit-config.yaml +++ b/.pre-commit-config.yaml @@ -63,7 +63,8 @@ repos: '--extensions=c,h', '--headers=h', --recursive, - '--filter=-readability/casting,-runtime/int,-build/include_subdir' + --linelength=88, + '--filter=-readability/casting,-runtime/int,-build/include_subdir,-readability/fn_size' ] - repo: https://github.com/PyCQA/flake8 rev: 6.0.0 diff --git a/pandas/_libs/tslibs/np_datetime.pxd b/pandas/_libs/tslibs/np_datetime.pxd index de81c611c9ee9..492f45af09e80 100644 --- a/pandas/_libs/tslibs/np_datetime.pxd +++ b/pandas/_libs/tslibs/np_datetime.pxd @@ -120,3 +120,9 @@ cdef int64_t convert_reso( NPY_DATETIMEUNIT to_reso, bint round_ok, ) except? -1 + +cdef extern from "src/datetime/np_datetime_strings.h": + ctypedef enum FormatRequirement: + PARTIAL_MATCH + EXACT_MATCH + INFER_FORMAT diff --git a/pandas/_libs/tslibs/np_datetime.pyx b/pandas/_libs/tslibs/np_datetime.pyx index 9db3f7cb4648e..b1e4022527437 100644 --- a/pandas/_libs/tslibs/np_datetime.pyx +++ b/pandas/_libs/tslibs/np_datetime.pyx @@ -53,7 +53,8 @@ cdef extern from "src/datetime/np_datetime_strings.h": npy_datetimestruct *out, NPY_DATETIMEUNIT *out_bestunit, int *out_local, int *out_tzoffset, - const char *format, int format_len, int exact) + const char *format, int format_len, + FormatRequirement exact) # ---------------------------------------------------------------------- @@ -286,17 +287,20 @@ cdef int string_to_dts( const char* buf Py_ssize_t format_length const char* format_buf + FormatRequirement format_requirement buf = get_c_string_buf_and_size(val, &length) if format is None: format_buf = b"" format_length = 0 - exact = False + format_requirement = INFER_FORMAT else: format_buf = get_c_string_buf_and_size(format, &format_length) + format_requirement = <FormatRequirement>exact return parse_iso_8601_datetime(buf, length, want_exc, dts, out_bestunit, out_local, 
out_tzoffset, - format_buf, format_length, exact) + format_buf, format_length, + format_requirement) cpdef ndarray astype_overflowsafe( diff --git a/pandas/_libs/tslibs/src/datetime/np_datetime_strings.c b/pandas/_libs/tslibs/src/datetime/np_datetime_strings.c index 7bb94012fad0c..f1f03e6467eac 100644 --- a/pandas/_libs/tslibs/src/datetime/np_datetime_strings.c +++ b/pandas/_libs/tslibs/src/datetime/np_datetime_strings.c @@ -67,42 +67,54 @@ This file implements string parsing and creation for NumPy datetime. * Returns 0 on success, -1 on failure. */ +typedef enum { + COMPARISON_SUCCESS, + COMPLETED_PARTIAL_MATCH, + COMPARISON_ERROR +} DatetimePartParseResult; // This function will advance the pointer on format // and decrement characters_remaining by n on success -// On failure will return -1 without incrementing -static int compare_format(const char **format, int *characters_remaining, - const char *compare_to, int n, const int exact) { +// On failure will return COMPARISON_ERROR without incrementing +// If `format_requirement` is PARTIAL_MATCH, and the `format` string has +// been exhausted, then return COMPLETED_PARTIAL_MATCH. 
+static DatetimePartParseResult compare_format( + const char **format, + int *characters_remaining, + const char *compare_to, + int n, + const FormatRequirement format_requirement +) { + if (format_requirement == INFER_FORMAT) { + return COMPARISON_SUCCESS; + } + if (*characters_remaining < 0) { + return COMPARISON_ERROR; + } + if (format_requirement == PARTIAL_MATCH && *characters_remaining == 0) { + return COMPLETED_PARTIAL_MATCH; + } if (*characters_remaining < n) { - if (exact) { - // TODO(pandas-dev): in the future we should set a PyErr here - // to be very clear about what went wrong - return -1; - } else if (*characters_remaining) { - // TODO(pandas-dev): same return value in this function as - // above branch, but stub out a future where - // we have a better error message - return -1; - } else { - return 0; - } + // TODO(pandas-dev): PyErr to differentiate what went wrong + return COMPARISON_ERROR; } else { if (strncmp(*format, compare_to, n)) { // TODO(pandas-dev): PyErr to differentiate what went wrong - return -1; + return COMPARISON_ERROR; } else { *format += n; *characters_remaining -= n; - return 0; + return COMPARISON_SUCCESS; } } - return 0; + return COMPARISON_SUCCESS; } int parse_iso_8601_datetime(const char *str, int len, int want_exc, npy_datetimestruct *out, NPY_DATETIMEUNIT *out_bestunit, int *out_local, int *out_tzoffset, - const char* format, int format_len, int exact) { + const char* format, int format_len, + FormatRequirement format_requirement) { if (len < 0 || format_len < 0) goto parse_error; int year_leap = 0; @@ -110,6 +122,7 @@ int parse_iso_8601_datetime(const char *str, int len, int want_exc, const char *substr; int sublen; NPY_DATETIMEUNIT bestunit = NPY_FR_GENERIC; + DatetimePartParseResult comparison; /* If year-month-day are separated by a valid separator, * months/days without leading zeroes will be parsed @@ -139,8 +152,11 @@ int parse_iso_8601_datetime(const char *str, int len, int want_exc, while (sublen > 0 && 
isspace(*substr)) { ++substr; --sublen; - if (compare_format(&format, &format_len, " ", 1, exact)) { + comparison = compare_format(&format, &format_len, " ", 1, format_requirement); + if (comparison == COMPARISON_ERROR) { goto parse_error; + } else if (comparison == COMPLETED_PARTIAL_MATCH) { + goto finish; } } @@ -155,8 +171,11 @@ int parse_iso_8601_datetime(const char *str, int len, int want_exc, } /* PARSE THE YEAR (4 digits) */ - if (compare_format(&format, &format_len, "%Y", 2, exact)) { + comparison = compare_format(&format, &format_len, "%Y", 2, format_requirement); + if (comparison == COMPARISON_ERROR) { goto parse_error; + } else if (comparison == COMPLETED_PARTIAL_MATCH) { + goto finish; } out->year = 0; @@ -202,8 +221,12 @@ int parse_iso_8601_datetime(const char *str, int len, int want_exc, ++substr; --sublen; - if (compare_format(&format, &format_len, &ymd_sep, 1, exact)) { + comparison = compare_format(&format, &format_len, &ymd_sep, 1, + format_requirement); + if (comparison == COMPARISON_ERROR) { goto parse_error; + } else if (comparison == COMPLETED_PARTIAL_MATCH) { + goto finish; } /* Cannot have trailing separator */ if (sublen == 0 || !isdigit(*substr)) { @@ -212,8 +235,11 @@ int parse_iso_8601_datetime(const char *str, int len, int want_exc, } /* PARSE THE MONTH */ - if (compare_format(&format, &format_len, "%m", 2, exact)) { + comparison = compare_format(&format, &format_len, "%m", 2, format_requirement); + if (comparison == COMPARISON_ERROR) { goto parse_error; + } else if (comparison == COMPLETED_PARTIAL_MATCH) { + goto finish; } /* First digit required */ out->month = (*substr - '0'); @@ -258,14 +284,21 @@ int parse_iso_8601_datetime(const char *str, int len, int want_exc, } ++substr; --sublen; - if (compare_format(&format, &format_len, &ymd_sep, 1, exact)) { + comparison = compare_format(&format, &format_len, &ymd_sep, 1, + format_requirement); + if (comparison == COMPARISON_ERROR) { goto parse_error; + } else if (comparison == 
COMPLETED_PARTIAL_MATCH) { + goto finish; } } /* PARSE THE DAY */ - if (compare_format(&format, &format_len, "%d", 2, exact)) { + comparison = compare_format(&format, &format_len, "%d", 2, format_requirement); + if (comparison == COMPARISON_ERROR) { goto parse_error; + } else if (comparison == COMPLETED_PARTIAL_MATCH) { + goto finish; } /* First digit required */ if (!isdigit(*substr)) { @@ -306,15 +339,21 @@ int parse_iso_8601_datetime(const char *str, int len, int want_exc, if ((*substr != 'T' && *substr != ' ') || sublen == 1) { goto parse_error; } - if (compare_format(&format, &format_len, substr, 1, exact)) { - goto parse_error; - } + comparison = compare_format(&format, &format_len, substr, 1, format_requirement); + if (comparison == COMPARISON_ERROR) { + goto parse_error; + } else if (comparison == COMPLETED_PARTIAL_MATCH) { + goto finish; + } ++substr; --sublen; /* PARSE THE HOURS */ - if (compare_format(&format, &format_len, "%H", 2, exact)) { + comparison = compare_format(&format, &format_len, "%H", 2, format_requirement); + if (comparison == COMPARISON_ERROR) { goto parse_error; + } else if (comparison == COMPLETED_PARTIAL_MATCH) { + goto finish; } /* First digit required */ if (!isdigit(*substr)) { @@ -359,8 +398,11 @@ int parse_iso_8601_datetime(const char *str, int len, int want_exc, if (sublen == 0 || !isdigit(*substr)) { goto parse_error; } - if (compare_format(&format, &format_len, ":", 1, exact)) { + comparison = compare_format(&format, &format_len, ":", 1, format_requirement); + if (comparison == COMPARISON_ERROR) { goto parse_error; + } else if (comparison == COMPLETED_PARTIAL_MATCH) { + goto finish; } } else if (!isdigit(*substr)) { if (!hour_was_2_digits) { @@ -370,8 +412,11 @@ int parse_iso_8601_datetime(const char *str, int len, int want_exc, } /* PARSE THE MINUTES */ - if (compare_format(&format, &format_len, "%M", 2, exact)) { + comparison = compare_format(&format, &format_len, "%M", 2, format_requirement); + if (comparison == 
COMPARISON_ERROR) { goto parse_error; + } else if (comparison == COMPLETED_PARTIAL_MATCH) { + goto finish; } /* First digit required */ out->min = (*substr - '0'); @@ -405,8 +450,11 @@ int parse_iso_8601_datetime(const char *str, int len, int want_exc, /* If we make it through this condition block, then the next * character is a digit. */ if (has_hms_sep && *substr == ':') { - if (compare_format(&format, &format_len, ":", 1, exact)) { + comparison = compare_format(&format, &format_len, ":", 1, format_requirement); + if (comparison == COMPARISON_ERROR) { goto parse_error; + } else if (comparison == COMPLETED_PARTIAL_MATCH) { + goto finish; } ++substr; --sublen; @@ -420,8 +468,11 @@ int parse_iso_8601_datetime(const char *str, int len, int want_exc, } /* PARSE THE SECONDS */ - if (compare_format(&format, &format_len, "%S", 2, exact)) { + comparison = compare_format(&format, &format_len, "%S", 2, format_requirement); + if (comparison == COMPARISON_ERROR) { goto parse_error; + } else if (comparison == COMPLETED_PARTIAL_MATCH) { + goto finish; } /* First digit required */ out->sec = (*substr - '0'); @@ -448,8 +499,11 @@ int parse_iso_8601_datetime(const char *str, int len, int want_exc, if (sublen > 0 && *substr == '.') { ++substr; --sublen; - if (compare_format(&format, &format_len, ".", 1, exact)) { + comparison = compare_format(&format, &format_len, ".", 1, format_requirement); + if (comparison == COMPARISON_ERROR) { goto parse_error; + } else if (comparison == COMPLETED_PARTIAL_MATCH) { + goto finish; } } else { bestunit = NPY_FR_s; @@ -457,8 +511,11 @@ int parse_iso_8601_datetime(const char *str, int len, int want_exc, } /* PARSE THE MICROSECONDS (0 to 6 digits) */ - if (compare_format(&format, &format_len, "%f", 2, exact)) { + comparison = compare_format(&format, &format_len, "%f", 2, format_requirement); + if (comparison == COMPARISON_ERROR) { goto parse_error; + } else if (comparison == COMPLETED_PARTIAL_MATCH) { + goto finish; } numdigits = 0; for (i = 0; i < 
6; ++i) { @@ -524,8 +581,11 @@ int parse_iso_8601_datetime(const char *str, int len, int want_exc, while (sublen > 0 && isspace(*substr)) { ++substr; --sublen; - if (compare_format(&format, &format_len, " ", 1, exact)) { + comparison = compare_format(&format, &format_len, " ", 1, format_requirement); + if (comparison == COMPARISON_ERROR) { goto parse_error; + } else if (comparison == COMPLETED_PARTIAL_MATCH) { + goto finish; } } @@ -539,8 +599,11 @@ int parse_iso_8601_datetime(const char *str, int len, int want_exc, /* UTC specifier */ if (*substr == 'Z') { - if (compare_format(&format, &format_len, "%z", 2, exact)) { + comparison = compare_format(&format, &format_len, "%z", 2, format_requirement); + if (comparison == COMPARISON_ERROR) { goto parse_error; + } else if (comparison == COMPLETED_PARTIAL_MATCH) { + goto finish; } /* "Z" should be equivalent to tz offset "+00:00" */ if (out_local != NULL) { @@ -561,8 +624,11 @@ int parse_iso_8601_datetime(const char *str, int len, int want_exc, --sublen; } } else if (*substr == '-' || *substr == '+') { - if (compare_format(&format, &format_len, "%z", 2, exact)) { + comparison = compare_format(&format, &format_len, "%z", 2, format_requirement); + if (comparison == COMPARISON_ERROR) { goto parse_error; + } else if (comparison == COMPLETED_PARTIAL_MATCH) { + goto finish; } /* Time zone offset */ int offset_neg = 0, offset_hour = 0, offset_minute = 0; @@ -647,8 +713,11 @@ int parse_iso_8601_datetime(const char *str, int len, int want_exc, while (sublen > 0 && isspace(*substr)) { ++substr; --sublen; - if (compare_format(&format, &format_len, " ", 1, exact)) { + comparison = compare_format(&format, &format_len, " ", 1, format_requirement); + if (comparison == COMPARISON_ERROR) { goto parse_error; + } else if (comparison == COMPLETED_PARTIAL_MATCH) { + goto finish; } } diff --git a/pandas/_libs/tslibs/src/datetime/np_datetime_strings.h b/pandas/_libs/tslibs/src/datetime/np_datetime_strings.h index 734f7daceba05..a635192d70809 
100644 --- a/pandas/_libs/tslibs/src/datetime/np_datetime_strings.h +++ b/pandas/_libs/tslibs/src/datetime/np_datetime_strings.h @@ -26,6 +26,21 @@ This file implements string parsing and creation for NumPy datetime. #define NPY_NO_DEPRECATED_API NPY_1_7_API_VERSION #endif // NPY_NO_DEPRECATED_API +/* 'format_requirement' can be one of three values: + * * PARTIAL_MATCH : Only require a partial match with 'format'. + * For example, if the string is '2020-01-01 05:00:00' and + * 'format' is '%Y-%m-%d', then parse '2020-01-01'; + * * EXACT_MATCH : require an exact match with 'format'. If the + * string is '2020-01-01', then the only format which will + * be able to parse it without error is '%Y-%m-%d'; + * * INFER_FORMAT: parse without comparing 'format' (i.e. infer it). + */ +typedef enum { + PARTIAL_MATCH, + EXACT_MATCH, + INFER_FORMAT +} FormatRequirement; + /* * Parses (almost) standard ISO 8601 date strings. The differences are: * @@ -61,7 +76,7 @@ parse_iso_8601_datetime(const char *str, int len, int want_exc, int *out_tzoffset, const char* format, int format_len, - int exact); + FormatRequirement format_requirement); /* * Provides a string length to use for converting datetime diff --git a/pandas/tests/tools/test_to_datetime.py b/pandas/tests/tools/test_to_datetime.py index 389e32f7f193f..4c89c931fb92f 100644 --- a/pandas/tests/tools/test_to_datetime.py +++ b/pandas/tests/tools/test_to_datetime.py @@ -366,6 +366,37 @@ def test_to_datetime_with_non_exact(self, cache): ) tm.assert_series_equal(result, expected) + @pytest.mark.parametrize( + "format, expected", + [ + ("%Y-%m-%d", Timestamp(2000, 1, 3)), + ("%Y-%d-%m", Timestamp(2000, 3, 1)), + ("%Y-%m-%d %H", Timestamp(2000, 1, 3, 12)), + ("%Y-%d-%m %H", Timestamp(2000, 3, 1, 12)), + ("%Y-%m-%d %H:%M", Timestamp(2000, 1, 3, 12, 34)), + ("%Y-%d-%m %H:%M", Timestamp(2000, 3, 1, 12, 34)), + ("%Y-%m-%d %H:%M:%S", Timestamp(2000, 1, 3, 12, 34, 56)), + ("%Y-%d-%m %H:%M:%S", Timestamp(2000, 3, 1, 12, 34, 56)), + 
("%Y-%m-%d %H:%M:%S.%f", Timestamp(2000, 1, 3, 12, 34, 56, 123456)), + ("%Y-%d-%m %H:%M:%S.%f", Timestamp(2000, 3, 1, 12, 34, 56, 123456)), + ( + "%Y-%m-%d %H:%M:%S.%f%z", + Timestamp(2000, 1, 3, 12, 34, 56, 123456, tz="UTC+01:00"), + ), + ( + "%Y-%d-%m %H:%M:%S.%f%z", + Timestamp(2000, 3, 1, 12, 34, 56, 123456, tz="UTC+01:00"), + ), + ], + ) + def test_non_exact_doesnt_parse_whole_string(self, cache, format, expected): + # https://github.com/pandas-dev/pandas/issues/50412 + # the formats alternate between ISO8601 and non-ISO8601 to check both paths + result = to_datetime( + "2000-01-03 12:34:56.123456+01:00", format=format, exact=False + ) + assert result == expected + @pytest.mark.parametrize( "arg", [
- [ ] closes #50412 - [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature - [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit). - [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions. - [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature. Haven't added a whatsnew note, as `exact` never worked to begin with for ISO8601 formats, and this just corrects #49333
https://api.github.com/repos/pandas-dev/pandas/pulls/50435
2022-12-25T11:06:42Z
2022-12-31T08:55:06Z
2022-12-31T08:55:06Z
2022-12-31T08:55:06Z
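The C-level change in the patch above replaces the boolean `exact` flag with a three-state `FormatRequirement` enum, so `compare_format` can now report a third outcome: the format string was exhausted mid-parse, which under `PARTIAL_MATCH` means "stop successfully". The control flow can be sketched in Python (the names mirror the patch; the string/position return convention is an illustrative simplification, not the actual C signature):

```python
from enum import Enum

class FormatRequirement(Enum):
    PARTIAL_MATCH = 0   # format may cover only a prefix of the string
    EXACT_MATCH = 1     # format must be consumed in lockstep with the string
    INFER_FORMAT = 2    # no format given; parse without comparing

def compare_format(format_str, pos, compare_to, requirement):
    """Return (status, new_pos) where status is 'ok', 'partial', or 'error'.

    Mirrors the patched C compare_format:
    - INFER_FORMAT always succeeds without consuming anything;
    - PARTIAL_MATCH with the format exhausted ends parsing successfully;
    - otherwise the next characters of the format must equal `compare_to`.
    """
    if requirement is FormatRequirement.INFER_FORMAT:
        return "ok", pos
    remaining = len(format_str) - pos
    if requirement is FormatRequirement.PARTIAL_MATCH and remaining == 0:
        return "partial", pos       # caller jumps to `finish`
    n = len(compare_to)
    if remaining < n or format_str[pos:pos + n] != compare_to:
        return "error", pos         # caller jumps to `parse_error`
    return "ok", pos + n            # advance past the matched directive
```

This is why `to_datetime("2000-01-03 12:34:56", format="%Y-%m-%d", exact=False)` in the new tests parses only the date portion: once `%Y-%m-%d` is consumed, the next `compare_format` call returns the "completed partial match" outcome instead of an error.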
CLN: Assorted
diff --git a/pandas/_libs/intervaltree.pxi.in b/pandas/_libs/intervaltree.pxi.in index 0d7c96a6f2f2b..67fee7c5fbadd 100644 --- a/pandas/_libs/intervaltree.pxi.in +++ b/pandas/_libs/intervaltree.pxi.in @@ -121,9 +121,8 @@ cdef class IntervalTree(IntervalMixin): """ if self._na_count > 0: return False - values = [self.right, self.left] - sort_order = np.lexsort(values) + sort_order = self.left_sorter return is_monotonic(sort_order, False)[0] def get_indexer(self, scalar_t[:] target) -> np.ndarray: diff --git a/pandas/_libs/tslibs/offsets.pyi b/pandas/_libs/tslibs/offsets.pyi index eacdf17b0b4d3..f1aca4717665c 100644 --- a/pandas/_libs/tslibs/offsets.pyi +++ b/pandas/_libs/tslibs/offsets.pyi @@ -12,6 +12,7 @@ from typing import ( import numpy as np +from pandas._libs.tslibs.nattype import NaTType from pandas._typing import npt from .timedeltas import Timedelta @@ -51,6 +52,8 @@ class BaseOffset: def __radd__(self, other: _DatetimeT) -> _DatetimeT: ... @overload def __radd__(self, other: _TimedeltaT) -> _TimedeltaT: ... + @overload + def __radd__(self, other: NaTType) -> NaTType: ... def __sub__(self: _BaseOffsetT, other: BaseOffset) -> _BaseOffsetT: ... @overload def __rsub__(self, other: npt.NDArray[np.object_]) -> npt.NDArray[np.object_]: ... 
diff --git a/pandas/_libs/tslibs/util.pxd b/pandas/_libs/tslibs/util.pxd index a28aace5d2f15..d8bc9363f1a23 100644 --- a/pandas/_libs/tslibs/util.pxd +++ b/pandas/_libs/tslibs/util.pxd @@ -83,7 +83,7 @@ cdef inline bint is_integer_object(object obj) nogil: cdef inline bint is_float_object(object obj) nogil: """ - Cython equivalent of `isinstance(val, (float, np.complex_))` + Cython equivalent of `isinstance(val, (float, np.float_))` Parameters ---------- diff --git a/pandas/core/accessor.py b/pandas/core/accessor.py index 87d9c39b0407c..7390b04da4787 100644 --- a/pandas/core/accessor.py +++ b/pandas/core/accessor.py @@ -6,6 +6,7 @@ """ from __future__ import annotations +from typing import final import warnings from pandas.util._decorators import doc @@ -16,6 +17,7 @@ class DirNamesMixin: _accessors: set[str] = set() _hidden_attrs: frozenset[str] = frozenset() + @final def _dir_deletions(self) -> set[str]: """ Delete unwanted __dir__ for this object. diff --git a/pandas/core/arrays/datetimelike.py b/pandas/core/arrays/datetimelike.py index e9eebf54df07f..adf6522f76a1a 100644 --- a/pandas/core/arrays/datetimelike.py +++ b/pandas/core/arrays/datetimelike.py @@ -189,7 +189,7 @@ class DatetimeLikeArrayMixin(OpsMixin, NDArrayBackedExtensionArray): Shared Base/Mixin class for DatetimeArray, TimedeltaArray, PeriodArray Assumes that __new__/__init__ defines: - _data + _ndarray _freq and that the inheriting class has methods: @@ -1418,9 +1418,8 @@ def __add__(self, other): # as is_integer returns True for these if not is_period_dtype(self.dtype): raise integer_op_not_supported(self) - result = cast("PeriodArray", self)._addsub_int_array_or_scalar( - other * self.freq.n, operator.add - ) + obj = cast("PeriodArray", self) + result = obj._addsub_int_array_or_scalar(other * obj.freq.n, operator.add) # array-like others elif is_timedelta64_dtype(other_dtype): @@ -1435,9 +1434,8 @@ def __add__(self, other): elif is_integer_dtype(other_dtype): if not is_period_dtype(self.dtype): 
raise integer_op_not_supported(self) - result = cast("PeriodArray", self)._addsub_int_array_or_scalar( - other * self.freq.n, operator.add - ) + obj = cast("PeriodArray", self) + result = obj._addsub_int_array_or_scalar(other * obj.freq.n, operator.add) else: # Includes Categorical, other ExtensionArrays # For PeriodDtype, if self is a TimedeltaArray and other is a @@ -1477,9 +1475,8 @@ def __sub__(self, other): # as is_integer returns True for these if not is_period_dtype(self.dtype): raise integer_op_not_supported(self) - result = cast("PeriodArray", self)._addsub_int_array_or_scalar( - other * self.freq.n, operator.sub - ) + obj = cast("PeriodArray", self) + result = obj._addsub_int_array_or_scalar(other * obj.freq.n, operator.sub) elif isinstance(other, Period): result = self._sub_periodlike(other) @@ -1500,9 +1497,8 @@ def __sub__(self, other): elif is_integer_dtype(other_dtype): if not is_period_dtype(self.dtype): raise integer_op_not_supported(self) - result = cast("PeriodArray", self)._addsub_int_array_or_scalar( - other * self.freq.n, operator.sub - ) + obj = cast("PeriodArray", self) + result = obj._addsub_int_array_or_scalar(other * obj.freq.n, operator.sub) else: # Includes ExtensionArrays, float_dtype return NotImplemented diff --git a/pandas/core/arrays/interval.py b/pandas/core/arrays/interval.py index 3c6686b5c0173..f7107a1f7c83c 100644 --- a/pandas/core/arrays/interval.py +++ b/pandas/core/arrays/interval.py @@ -812,6 +812,8 @@ def argsort( ascending = nv.validate_argsort_with_ascending(ascending, (), kwargs) if ascending and kind == "quicksort" and na_position == "last": + # TODO: in an IntervalIndex we can re-use the cached + # IntervalTree.left_sorter return np.lexsort((self.right, self.left)) # TODO: other cases we can use lexsort for? much more performant. 
diff --git a/pandas/core/groupby/groupby.py b/pandas/core/groupby/groupby.py index 729f34544e2bc..b823a7a51943e 100644 --- a/pandas/core/groupby/groupby.py +++ b/pandas/core/groupby/groupby.py @@ -32,7 +32,6 @@ class providing the base-class of operations. cast, final, ) -import warnings import numpy as np @@ -2200,13 +2199,8 @@ def sem(self, ddof: int = 1, numeric_only: bool = False): counts = self.count() result_ilocs = result.columns.get_indexer_for(cols) count_ilocs = counts.columns.get_indexer_for(cols) - with warnings.catch_warnings(): - # TODO(2.0): once iloc[:, foo] = bar depecation is enforced, - # this catching will be unnecessary - warnings.filterwarnings( - "ignore", ".*will attempt to set the values inplace.*" - ) - result.iloc[:, result_ilocs] /= np.sqrt(counts.iloc[:, count_ilocs]) + + result.iloc[:, result_ilocs] /= np.sqrt(counts.iloc[:, count_ilocs]) return result @final diff --git a/pandas/core/internals/array_manager.py b/pandas/core/internals/array_manager.py index c06b6c7a9a651..b8ef925362e7b 100644 --- a/pandas/core/internals/array_manager.py +++ b/pandas/core/internals/array_manager.py @@ -863,8 +863,6 @@ def column_setitem( This is a method on the ArrayManager level, to avoid creating an intermediate Series at the DataFrame level (`s = df[loc]; s[idx] = value`) - - """ if not is_integer(loc): raise TypeError("The column index should be an integer") diff --git a/pandas/core/internals/blocks.py b/pandas/core/internals/blocks.py index 713413500f64c..f65722ac9685b 100644 --- a/pandas/core/internals/blocks.py +++ b/pandas/core/internals/blocks.py @@ -325,6 +325,7 @@ def apply(self, func, **kwargs) -> list[Block]: return self._split_op_result(result) + @final def reduce(self, func) -> list[Block]: # We will apply the function and reshape the result into a single-row # Block with the same mgr_locs; squeezing will be done at a higher level @@ -1957,19 +1958,6 @@ class ObjectBlock(NumpyBlock): __slots__ = () is_object = True - def reduce(self, func) 
-> list[Block]: - """ - For object-dtype, we operate column-wise. - """ - assert self.ndim == 2 - - res = func(self.values) - - assert isinstance(res, np.ndarray) - assert res.ndim == 1 - res = res.reshape(-1, 1) - return [self.make_block_same_class(res)] - @maybe_split def convert( self, @@ -1980,7 +1968,9 @@ def convert( attempt to cast any object types to better types return a copy of the block (if copy = True) by definition we ARE an ObjectBlock!!!!! """ - if self.dtype != object: + if self.dtype != _dtype_obj: + # GH#50067 this should be impossible in ObjectBlock, but until + # that is fixed, we short-circuit here. return [self] values = self.values diff --git a/pandas/core/series.py b/pandas/core/series.py index b69fb4c1b58aa..873ebd16ac80b 100644 --- a/pandas/core/series.py +++ b/pandas/core/series.py @@ -856,7 +856,6 @@ def __array__(self, dtype: npt.DTypeLike | None = None) -> np.ndarray: # coercion __float__ = _coerce_method(float) - __long__ = _coerce_method(int) __int__ = _coerce_method(int) # ---------------------------------------------------------------------- diff --git a/pandas/core/tools/datetimes.py b/pandas/core/tools/datetimes.py index a97a866a8406e..6ce2ccb3a2925 100644 --- a/pandas/core/tools/datetimes.py +++ b/pandas/core/tools/datetimes.py @@ -398,10 +398,7 @@ def _convert_listlike_datetimes( elif is_datetime64_ns_dtype(arg_dtype): if not isinstance(arg, (DatetimeArray, DatetimeIndex)): - try: - return DatetimeIndex(arg, tz=tz, name=name) - except ValueError: - pass + return DatetimeIndex(arg, tz=tz, name=name) elif utc: # DatetimeArray, DatetimeIndex return arg.tz_localize("utc") diff --git a/pandas/tests/arithmetic/test_timedelta64.py b/pandas/tests/arithmetic/test_timedelta64.py index c9bfb5e29460e..4fb63d3c4b97b 100644 --- a/pandas/tests/arithmetic/test_timedelta64.py +++ b/pandas/tests/arithmetic/test_timedelta64.py @@ -1751,7 +1751,7 @@ def test_td64arr_floordiv_td64arr_with_nat( expected = np.array([1.0, 1.0, np.nan], 
dtype=np.float64) expected = tm.box_expected(expected, xbox) if box is DataFrame and using_array_manager: - # INFO(ArrayManager) floorfiv returns integer, and ArrayManager + # INFO(ArrayManager) floordiv returns integer, and ArrayManager # performs ops column-wise and thus preserves int64 dtype for # columns without missing values expected[[0, 1]] = expected[[0, 1]].astype("int64") diff --git a/pandas/tests/indexes/test_base.py b/pandas/tests/indexes/test_base.py index c2c1073eef36d..969c6059b8d31 100644 --- a/pandas/tests/indexes/test_base.py +++ b/pandas/tests/indexes/test_base.py @@ -891,7 +891,6 @@ def test_isin_nan_common_float64(self, nulls_fixture): "index", [ Index(["qux", "baz", "foo", "bar"]), - # float64 Index overrides isin, so must be checked separately NumericIndex([1.0, 2.0, 3.0, 4.0], dtype=np.float64), ], ) diff --git a/pandas/tests/reshape/concat/test_concat.py b/pandas/tests/reshape/concat/test_concat.py index ea526c95f20e0..3dc6f2404444b 100644 --- a/pandas/tests/reshape/concat/test_concat.py +++ b/pandas/tests/reshape/concat/test_concat.py @@ -745,7 +745,7 @@ def test_concat_retain_attrs(data): @td.skip_array_manager_invalid_test @pytest.mark.parametrize("df_dtype", ["float64", "int64", "datetime64[ns]"]) @pytest.mark.parametrize("empty_dtype", [None, "float64", "object"]) -def test_concat_ignore_emtpy_object_float(empty_dtype, df_dtype): +def test_concat_ignore_empty_object_float(empty_dtype, df_dtype): # https://github.com/pandas-dev/pandas/issues/45637 df = DataFrame({"foo": [1, 2], "bar": [1, 2]}, dtype=df_dtype) empty = DataFrame(columns=["foo", "bar"], dtype=empty_dtype)
- [ ] closes #xxxx (Replace xxxx with the GitHub issue number) - [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature - [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit). - [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions. - [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
https://api.github.com/repos/pandas-dev/pandas/pulls/50433
2022-12-25T00:31:44Z
2022-12-29T00:32:30Z
2022-12-29T00:32:30Z
2022-12-29T00:36:04Z
ENH: Add lazy copy to align
diff --git a/pandas/core/frame.py b/pandas/core/frame.py index 21b3a0c033702..db3f2de703c34 100644 --- a/pandas/core/frame.py +++ b/pandas/core/frame.py @@ -4879,7 +4879,7 @@ def align( join: AlignJoin = "outer", axis: Axis | None = None, level: Level = None, - copy: bool = True, + copy: bool | None = None, fill_value=None, method: FillnaOptions | None = None, limit: int | None = None, diff --git a/pandas/core/generic.py b/pandas/core/generic.py index 7a40e30c0ae7a..481e9f5ee847f 100644 --- a/pandas/core/generic.py +++ b/pandas/core/generic.py @@ -5254,7 +5254,7 @@ def _reindex_with_indexers( self: NDFrameT, reindexers, fill_value=None, - copy: bool_t = False, + copy: bool_t | None = False, allow_dups: bool_t = False, ) -> NDFrameT: """allow_dups indicates an internal call here""" @@ -5283,8 +5283,8 @@ def _reindex_with_indexers( # If we've made a copy once, no need to make another one copy = False - if copy and new_data is self._mgr: - new_data = new_data.copy() + if (copy or copy is None) and new_data is self._mgr: + new_data = new_data.copy(deep=copy) return self._constructor(new_data).__finalize__(self) @@ -9058,7 +9058,7 @@ def align( join: AlignJoin = "outer", axis: Axis | None = None, level: Level = None, - copy: bool_t = True, + copy: bool_t | None = None, fill_value: Hashable = None, method: FillnaOptions | None = None, limit: int | None = None, @@ -9251,7 +9251,7 @@ def _align_frame( join: AlignJoin = "outer", axis: Axis | None = None, level=None, - copy: bool_t = True, + copy: bool_t | None = None, fill_value=None, method=None, limit=None, @@ -9315,7 +9315,7 @@ def _align_series( join: AlignJoin = "outer", axis: Axis | None = None, level=None, - copy: bool_t = True, + copy: bool_t | None = None, fill_value=None, method=None, limit=None, @@ -9344,7 +9344,7 @@ def _align_series( if is_series: left = self._reindex_indexer(join_index, lidx, copy) elif lidx is None or join_index is None: - left = self.copy() if copy else self + left = self.copy(deep=copy) if 
copy or copy is None else self else: left = self._constructor( self._mgr.reindex_indexer(join_index, lidx, axis=1, copy=copy) @@ -9373,7 +9373,7 @@ def _align_series( left = self._constructor(fdata) if ridx is None: - right = other + right = other.copy(deep=copy) if copy or copy is None else other else: right = other.reindex(join_index, level=level) diff --git a/pandas/core/series.py b/pandas/core/series.py index 1bdf92e1dcf02..8400dc1f2ed91 100644 --- a/pandas/core/series.py +++ b/pandas/core/series.py @@ -4591,15 +4591,18 @@ def _reduce( return op(delegate, skipna=skipna, **kwds) def _reindex_indexer( - self, new_index: Index | None, indexer: npt.NDArray[np.intp] | None, copy: bool + self, + new_index: Index | None, + indexer: npt.NDArray[np.intp] | None, + copy: bool | None, ) -> Series: # Note: new_index is None iff indexer is None # if not None, indexer is np.intp if indexer is None and ( new_index is None or new_index.names == self.index.names ): - if copy: - return self.copy() + if copy or copy is None: + return self.copy(deep=copy) return self new_values = algorithms.take_nd( @@ -4626,7 +4629,7 @@ def align( join: AlignJoin = "outer", axis: Axis | None = None, level: Level = None, - copy: bool = True, + copy: bool | None = None, fill_value: Hashable = None, method: FillnaOptions | None = None, limit: int | None = None, diff --git a/pandas/tests/copy_view/test_methods.py b/pandas/tests/copy_view/test_methods.py index f5c7b31e59bc5..f37fafebd0e99 100644 --- a/pandas/tests/copy_view/test_methods.py +++ b/pandas/tests/copy_view/test_methods.py @@ -172,6 +172,53 @@ def test_select_dtypes(using_copy_on_write): tm.assert_frame_equal(df, df_orig) +@pytest.mark.parametrize( + "func", + [ + lambda x, y: x.align(y), + lambda x, y: x.align(y.a, axis=0), + lambda x, y: x.align(y.a.iloc[slice(0, 1)], axis=1), + ], +) +def test_align_frame(using_copy_on_write, func): + df = DataFrame({"a": [1, 2, 3], "b": "a"}) + df_orig = df.copy() + df_changed = df[["b", "a"]].copy() + 
df2, _ = func(df, df_changed) + + if using_copy_on_write: + assert np.shares_memory(get_array(df2, "a"), get_array(df, "a")) + else: + assert not np.shares_memory(get_array(df2, "a"), get_array(df, "a")) + + df2.iloc[0, 0] = 0 + if using_copy_on_write: + assert not np.shares_memory(get_array(df2, "a"), get_array(df, "a")) + tm.assert_frame_equal(df, df_orig) + + +def test_align_series(using_copy_on_write): + ser = Series([1, 2]) + ser_orig = ser.copy() + ser_other = ser.copy() + ser2, ser_other_result = ser.align(ser_other) + + if using_copy_on_write: + assert np.shares_memory(ser2.values, ser.values) + assert np.shares_memory(ser_other_result.values, ser_other.values) + else: + assert not np.shares_memory(ser2.values, ser.values) + assert not np.shares_memory(ser_other_result.values, ser_other.values) + + ser2.iloc[0] = 0 + ser_other_result.iloc[0] = 0 + if using_copy_on_write: + assert not np.shares_memory(ser2.values, ser.values) + assert not np.shares_memory(ser_other_result.values, ser_other.values) + tm.assert_series_equal(ser, ser_orig) + tm.assert_series_equal(ser_other, ser_orig) + + def test_to_frame(using_copy_on_write): # Case: converting a Series to a DataFrame with to_frame ser = Series([1, 2, 3]) diff --git a/pandas/tests/frame/methods/test_align.py b/pandas/tests/frame/methods/test_align.py index 73c79996e5b81..88963dcc4b0f7 100644 --- a/pandas/tests/frame/methods/test_align.py +++ b/pandas/tests/frame/methods/test_align.py @@ -402,3 +402,12 @@ def _check_align_fill(self, frame, kind, meth, ax, fax): self._check_align( empty, empty, axis=ax, fill_axis=fax, how=kind, method=meth, limit=1 ) + + def test_align_series_check_copy(self): + # GH# + df = DataFrame({0: [1, 2]}) + ser = Series([1], name=0) + expected = ser.copy() + result, other = df.align(ser, axis=1) + ser.iloc[0] = 100 + tm.assert_series_equal(other, expected)
- [ ] xref #49473 - [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature - [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit). - [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions. - [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
https://api.github.com/repos/pandas-dev/pandas/pulls/50432
2022-12-24T19:42:48Z
2022-12-28T18:09:48Z
2022-12-28T18:09:48Z
2023-01-03T09:29:30Z
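The `align` patch above widens `copy` from `bool` to `bool | None`, where `None` requests a lazy copy: under Copy-on-Write the result shares memory with the input until either side is written to. A toy class (not pandas code; `LazySeries` and its attributes are invented for illustration) shows the three-way convention:

```python
class LazySeries:
    """Toy model of ``copy: bool | None``: True copies eagerly,
    None shares data until the first write, False always shares."""

    def __init__(self, values):
        self._values = values
        self._lazy = False          # True => must split before mutating

    def copy(self, deep=True):
        if deep:                    # copy=True: eager deep copy
            return LazySeries(list(self._values))
        out = LazySeries(self._values)
        out._lazy = deep is None    # None => copy-on-write view
        return out

    def __setitem__(self, i, v):
        if self._lazy:              # first write: detach from the parent
            self._values = list(self._values)
            self._lazy = False
        self._values[i] = v
```

The new `test_align_series` in the diff checks exactly this pair of properties: `np.shares_memory` holds right after `align`, and stops holding once one side is mutated, with the original left untouched.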
ENH: Add lazy copy for drop duplicates
diff --git a/pandas/core/frame.py b/pandas/core/frame.py index 21b3a0c033702..9702a5b374356 100644 --- a/pandas/core/frame.py +++ b/pandas/core/frame.py @@ -3718,6 +3718,10 @@ def _getitem_bool_array(self, key): # check_bool_indexer will throw exception if Series key cannot # be reindexed to match DataFrame rows key = check_bool_indexer(self.index, key) + + if key.all(): + return self.copy(deep=None) + indexer = key.nonzero()[0] return self._take_with_is_copy(indexer, axis=0) @@ -6418,7 +6422,7 @@ def drop_duplicates( 4 Indomie pack 5.0 """ if self.empty: - return self.copy() + return self.copy(deep=None) inplace = validate_bool_kwarg(inplace, "inplace") ignore_index = validate_bool_kwarg(ignore_index, "ignore_index") diff --git a/pandas/tests/copy_view/test_methods.py b/pandas/tests/copy_view/test_methods.py index f5c7b31e59bc5..b3be215bcf000 100644 --- a/pandas/tests/copy_view/test_methods.py +++ b/pandas/tests/copy_view/test_methods.py @@ -322,10 +322,11 @@ def test_head_tail(method, using_copy_on_write): tm.assert_frame_equal(df, df_orig) -def test_assign(using_copy_on_write): +@pytest.mark.parametrize("method", ["assign", "drop_duplicates"]) +def test_assign_drop_duplicates(using_copy_on_write, method): df = DataFrame({"a": [1, 2, 3]}) df_orig = df.copy() - df2 = df.assign() + df2 = getattr(df, method)() df2._mgr._verify_integrity() if using_copy_on_write:
- [ ] xref #49473 (Replace xxxx with the GitHub issue number) - [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature - [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit). - [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions. - [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature. The series case will work after #50429
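The change above returns `self.copy(deep=None)` (a lazy copy under copy-on-write) when `drop_duplicates` or an all-True boolean mask would keep every row, instead of always materializing a copy. A toy stdlib-only sketch of that "skip the copy when nothing is dropped" idea; the helper is hypothetical and only illustrates the short-circuit, not pandas internals:

```python
def drop_duplicates_lazy(values):
    # Keep the first occurrence of each value. `seen.add` returns None,
    # so the `or` trick records the value while keeping the filter terse.
    seen = set()
    keep = [v for v in values if not (v in seen or seen.add(v))]
    if len(keep) == len(values):
        # Nothing was dropped: hand back the input itself instead of
        # copying, analogous to copy(deep=None) under copy-on-write.
        return values
    return keep

a = [1, 2, 3]
print(drop_duplicates_lazy(a) is a)    # True: no duplicates, zero copy
print(drop_duplicates_lazy([1, 1, 2]))  # [1, 2]
```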
https://api.github.com/repos/pandas-dev/pandas/pulls/50431
2022-12-24T18:36:43Z
2022-12-28T18:11:19Z
2022-12-28T18:11:19Z
2022-12-28T19:05:44Z
BUG: Series(strings).astype("float64[pyarrow]") raising
diff --git a/doc/source/whatsnew/v2.0.0.rst b/doc/source/whatsnew/v2.0.0.rst index 75ba169600962..12b0d90e68ab9 100644 --- a/doc/source/whatsnew/v2.0.0.rst +++ b/doc/source/whatsnew/v2.0.0.rst @@ -830,6 +830,7 @@ Conversion - Bug in :meth:`Series.convert_dtypes` not converting dtype to nullable dtype when :class:`Series` contains ``NA`` and has dtype ``object`` (:issue:`48791`) - Bug where any :class:`ExtensionDtype` subclass with ``kind="M"`` would be interpreted as a timezone type (:issue:`34986`) - Bug in :class:`.arrays.ArrowExtensionArray` that would raise ``NotImplementedError`` when passed a sequence of strings or binary (:issue:`49172`) +- Bug in :meth:`Series.astype` raising ``pyarrow.ArrowInvalid`` when converting from a non-pyarrow string dtype to a pyarrow numeric type (:issue:`50430`) - Bug in :func:`to_datetime` was not respecting ``exact`` argument when ``format`` was an ISO8601 format (:issue:`12649`) - Bug in :meth:`TimedeltaArray.astype` raising ``TypeError`` when converting to a pyarrow duration type (:issue:`49795`) - diff --git a/pandas/core/arrays/arrow/array.py b/pandas/core/arrays/arrow/array.py index 6250c298f291f..7e954b3d1d1ec 100644 --- a/pandas/core/arrays/arrow/array.py +++ b/pandas/core/arrays/arrow/array.py @@ -206,17 +206,17 @@ def _from_sequence(cls, scalars, *, dtype: Dtype | None = None, copy: bool = Fal Construct a new ExtensionArray from a sequence of scalars. 
""" pa_dtype = to_pyarrow_type(dtype) - is_cls = isinstance(scalars, cls) - if is_cls or isinstance(scalars, (pa.Array, pa.ChunkedArray)): - if is_cls: - scalars = scalars._data - if pa_dtype: - scalars = scalars.cast(pa_dtype) - return cls(scalars) - else: - return cls( - pa.chunked_array(pa.array(scalars, type=pa_dtype, from_pandas=True)) - ) + if isinstance(scalars, cls): + scalars = scalars._data + elif not isinstance(scalars, (pa.Array, pa.ChunkedArray)): + try: + scalars = pa.array(scalars, type=pa_dtype, from_pandas=True) + except pa.ArrowInvalid: + # GH50430: let pyarrow infer type, then cast + scalars = pa.array(scalars, from_pandas=True) + if pa_dtype: + scalars = scalars.cast(pa_dtype) + return cls(scalars) @classmethod def _from_sequence_of_strings( diff --git a/pandas/tests/extension/test_arrow.py b/pandas/tests/extension/test_arrow.py index 9b42b86efd0d0..37abdefa25f6e 100644 --- a/pandas/tests/extension/test_arrow.py +++ b/pandas/tests/extension/test_arrow.py @@ -1471,6 +1471,14 @@ def test_astype_from_non_pyarrow(data): tm.assert_extension_array_equal(result, data) +def test_astype_float_from_non_pyarrow_str(): + # GH50430 + ser = pd.Series(["1.0"]) + result = ser.astype("float64[pyarrow]") + expected = pd.Series([1.0], dtype="float64[pyarrow]") + tm.assert_series_equal(result, expected) + + def test_to_numpy_with_defaults(data): # GH49973 result = data.to_numpy()
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature - [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit). - [x] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions. - [x] Added an entry in the latest `doc/source/whatsnew/v2.0.0.rst` file if fixing a bug or adding a new feature. Fixes the following: ``` import pandas as pd ser = pd.Series(["1.0"]) ser.astype("float64[pyarrow]") # -> ArrowInvalid: Could not convert '1.0' with type str: tried to convert to double ```
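The fix above first attempts the strict `pa.array(scalars, type=pa_dtype)` construction and, on `pa.ArrowInvalid`, falls back to letting pyarrow infer the type and casting afterwards. A stdlib-only analogue of that try-strict-then-infer-and-cast pattern, using the `array` module as a stand-in for pyarrow (the helper name and the `TypeError` trigger are illustrative assumptions, not the pyarrow API):

```python
from array import array

def to_double_array(scalars):
    try:
        # Strict path: array("d", ...) rejects strings outright, much
        # like pa.array with an explicit type did before this fix.
        return array("d", scalars)
    except TypeError:
        # Fallback: parse each value first, then build the typed array,
        # analogous to inferring the type and casting afterwards.
        return array("d", (float(s) for s in scalars))

print(to_double_array(["1.0", "2.5"]))  # array('d', [1.0, 2.5])
```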
https://api.github.com/repos/pandas-dev/pandas/pulls/50430
2022-12-24T15:56:53Z
2022-12-24T20:18:48Z
2022-12-24T20:18:48Z
2023-01-19T04:31:33Z
ENH: Use lazy copy for dropna
diff --git a/pandas/core/frame.py b/pandas/core/frame.py index fbc78da26c4b6..f5c3f3dfddcc7 100644 --- a/pandas/core/frame.py +++ b/pandas/core/frame.py @@ -6345,7 +6345,7 @@ def dropna( raise ValueError(f"invalid how option: {how}") if np.all(mask): - result = self.copy() + result = self.copy(deep=None) else: result = self.loc(axis=axis)[mask] diff --git a/pandas/core/internals/managers.py b/pandas/core/internals/managers.py index 98079d0c5ab4d..7fc25b7ea1cad 100644 --- a/pandas/core/internals/managers.py +++ b/pandas/core/internals/managers.py @@ -1955,9 +1955,17 @@ def _blklocs(self): """compat with BlockManager""" return None - def getitem_mgr(self, indexer: slice | npt.NDArray[np.bool_]) -> SingleBlockManager: + def getitem_mgr(self, indexer: slice | np.ndarray) -> SingleBlockManager: # similar to get_slice, but not restricted to slice indexer blk = self._block + if ( + using_copy_on_write() + and isinstance(indexer, np.ndarray) + and len(indexer) > 0 + and com.is_bool_indexer(indexer) + and indexer.all() + ): + return type(self)(blk, self.index, [weakref.ref(blk)], parent=self) array = blk._slice(indexer) if array.ndim > 1: # This will be caught by Series._get_values diff --git a/pandas/core/series.py b/pandas/core/series.py index 950499b1ae40d..7a1b03a3c3539 100644 --- a/pandas/core/series.py +++ b/pandas/core/series.py @@ -921,7 +921,7 @@ def _ixs(self, i: int, axis: AxisInt = 0) -> Any: """ return self._values[i] - def _slice(self, slobj: slice, axis: Axis = 0) -> Series: + def _slice(self, slobj: slice | np.ndarray, axis: Axis = 0) -> Series: # axis kwarg is retained for compat with NDFrame method # _slice is *always* positional return self._get_values(slobj) @@ -5576,7 +5576,7 @@ def dropna( return result else: if not inplace: - return self.copy() + return self.copy(deep=None) return None # ---------------------------------------------------------------------- diff --git a/pandas/tests/copy_view/test_methods.py b/pandas/tests/copy_view/test_methods.py 
index 5dd129f7c4af2..60c7630e40f0b 100644 --- a/pandas/tests/copy_view/test_methods.py +++ b/pandas/tests/copy_view/test_methods.py @@ -503,6 +503,40 @@ def test_add_suffix(using_copy_on_write): tm.assert_frame_equal(df, df_orig) +@pytest.mark.parametrize("axis, val", [(0, 5.5), (1, np.nan)]) +def test_dropna(using_copy_on_write, axis, val): + df = DataFrame({"a": [1, 2, 3], "b": [4, val, 6], "c": "d"}) + df_orig = df.copy() + df2 = df.dropna(axis=axis) + + if using_copy_on_write: + assert np.shares_memory(get_array(df2, "a"), get_array(df, "a")) + else: + assert not np.shares_memory(get_array(df2, "a"), get_array(df, "a")) + + df2.iloc[0, 0] = 0 + if using_copy_on_write: + assert not np.shares_memory(get_array(df2, "a"), get_array(df, "a")) + tm.assert_frame_equal(df, df_orig) + + +@pytest.mark.parametrize("val", [5, 5.5]) +def test_dropna_series(using_copy_on_write, val): + ser = Series([1, val, 4]) + ser_orig = ser.copy() + ser2 = ser.dropna() + + if using_copy_on_write: + assert np.shares_memory(ser2.values, ser.values) + else: + assert not np.shares_memory(ser2.values, ser.values) + + ser2.iloc[0] = 0 + if using_copy_on_write: + assert not np.shares_memory(ser2.values, ser.values) + tm.assert_series_equal(ser, ser_orig) + + @pytest.mark.parametrize( "method", [
- [ ] xref #49473 (Replace xxxx with the GitHub issue number) - [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature - [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit). - [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions. - [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
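The `getitem_mgr` change above short-circuits when a boolean indexer is all True: instead of slicing, it returns the existing block with a weakref back to it, so `dropna` on an already-clean object shares memory with the original. A toy sketch of that short-circuit using `memoryview` to make the sharing observable (illustrative only; pandas' real mechanism uses block weakrefs, not memoryviews):

```python
def take_mask(buf: bytearray, mask):
    # Short-circuit from the PR: an all-True mask selects everything,
    # so return a zero-copy view instead of materializing a new buffer.
    if all(mask):
        return memoryview(buf)
    return bytearray(b for b, keep in zip(buf, mask) if keep)

data = bytearray(b"abc")
view = take_mask(data, [True, True, True])
data[0] = ord("z")
print(bytes(view))  # b'zbc': the view shares memory with `data`
```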
https://api.github.com/repos/pandas-dev/pandas/pulls/50429
2022-12-24T14:31:26Z
2023-01-13T16:48:35Z
2023-01-13T16:48:35Z
2023-01-13T16:48:46Z
ENH: Use lazy copy in infer objects
diff --git a/doc/source/whatsnew/v2.0.0.rst b/doc/source/whatsnew/v2.0.0.rst index 7d028935ad175..70c60401f29fb 100644 --- a/doc/source/whatsnew/v2.0.0.rst +++ b/doc/source/whatsnew/v2.0.0.rst @@ -223,6 +223,7 @@ Copy-on-Write improvements - :meth:`DataFrame.to_period` / :meth:`Series.to_period` - :meth:`DataFrame.truncate` - :meth:`DataFrame.tz_convert` / :meth:`Series.tz_localize` + - :meth:`DataFrame.infer_objects` / :meth:`Series.infer_objects` - :func:`concat` These methods return views when Copy-on-Write is enabled, which provides a significant diff --git a/pandas/core/generic.py b/pandas/core/generic.py index 9ad291c39cbc5..eae4ed038d692 100644 --- a/pandas/core/generic.py +++ b/pandas/core/generic.py @@ -6426,7 +6426,7 @@ def __deepcopy__(self: NDFrameT, memo=None) -> NDFrameT: return self.copy(deep=True) @final - def infer_objects(self: NDFrameT, copy: bool_t = True) -> NDFrameT: + def infer_objects(self: NDFrameT, copy: bool_t | None = None) -> NDFrameT: """ Attempt to infer better dtypes for object columns. 
diff --git a/pandas/core/internals/array_manager.py b/pandas/core/internals/array_manager.py index 76da973e110bf..2823c355955ee 100644 --- a/pandas/core/internals/array_manager.py +++ b/pandas/core/internals/array_manager.py @@ -369,7 +369,10 @@ def fillna(self: T, value, limit, inplace: bool, downcast) -> T: def astype(self: T, dtype, copy: bool = False, errors: str = "raise") -> T: return self.apply(astype_array_safe, dtype=dtype, copy=copy, errors=errors) - def convert(self: T, copy: bool) -> T: + def convert(self: T, copy: bool | None) -> T: + if copy is None: + copy = True + def _convert(arr): if is_object_dtype(arr.dtype): # extract PandasArray for tests that patch PandasArray._typ diff --git a/pandas/core/internals/blocks.py b/pandas/core/internals/blocks.py index 15abc143cd081..115ae5dc6bb9d 100644 --- a/pandas/core/internals/blocks.py +++ b/pandas/core/internals/blocks.py @@ -207,7 +207,9 @@ def mgr_locs(self, new_mgr_locs: BlockPlacement) -> None: self._mgr_locs = new_mgr_locs @final - def make_block(self, values, placement=None) -> Block: + def make_block( + self, values, placement=None, refs: BlockValuesRefs | None = None + ) -> Block: """ Create a new block, with type inference propagate any values that are not specified @@ -219,7 +221,7 @@ def make_block(self, values, placement=None) -> Block: # TODO: perf by not going through new_block # We assume maybe_coerce_values has already been called - return new_block(values, placement=placement, ndim=self.ndim) + return new_block(values, placement=placement, ndim=self.ndim, refs=refs) @final def make_block_same_class( @@ -372,7 +374,7 @@ def _split(self) -> list[Block]: vals = self.values[slice(i, i + 1)] bp = BlockPlacement(ref_loc) - nb = type(self)(vals, placement=bp, ndim=2) + nb = type(self)(vals, placement=bp, ndim=2, refs=self.refs) new_blocks.append(nb) return new_blocks @@ -448,12 +450,15 @@ def convert( self, *, copy: bool = True, + using_cow: bool = False, ) -> list[Block]: """ attempt to coerce 
any object types to better types return a copy of the block (if copy = True) by definition we are not an ObjectBlock here! """ + if not copy and using_cow: + return [self.copy(deep=False)] return [self.copy()] if copy else [self] # --------------------------------------------------------------------- @@ -2040,6 +2045,7 @@ def convert( self, *, copy: bool = True, + using_cow: bool = False, ) -> list[Block]: """ attempt to cast any object types to better types return a copy of @@ -2048,6 +2054,8 @@ def convert( if self.dtype != _dtype_obj: # GH#50067 this should be impossible in ObjectBlock, but until # that is fixed, we short-circuit here. + if using_cow: + return [self.copy(deep=False)] return [self] values = self.values @@ -2063,10 +2071,14 @@ def convert( convert_period=True, convert_interval=True, ) + refs = None if copy and res_values is values: res_values = values.copy() + elif res_values is values and using_cow: + refs = self.refs + res_values = ensure_block_shape(res_values, self.ndim) - return [self.make_block(res_values)] + return [self.make_block(res_values, refs=refs)] # ----------------------------------------------------------------- diff --git a/pandas/core/internals/managers.py b/pandas/core/internals/managers.py index 74116dd855e3e..5d45b33871900 100644 --- a/pandas/core/internals/managers.py +++ b/pandas/core/internals/managers.py @@ -424,11 +424,14 @@ def fillna(self: T, value, limit, inplace: bool, downcast) -> T: def astype(self: T, dtype, copy: bool = False, errors: str = "raise") -> T: return self.apply("astype", dtype=dtype, copy=copy, errors=errors) - def convert(self: T, copy: bool) -> T: - return self.apply( - "convert", - copy=copy, - ) + def convert(self: T, copy: bool | None) -> T: + if copy is None: + if using_copy_on_write(): + copy = False + else: + copy = True + + return self.apply("convert", copy=copy, using_cow=using_copy_on_write()) def replace(self: T, to_replace, value, inplace: bool) -> T: inplace = 
validate_bool_kwarg(inplace, "inplace") diff --git a/pandas/tests/copy_view/test_methods.py b/pandas/tests/copy_view/test_methods.py index b814b9089aabd..6b54345723118 100644 --- a/pandas/tests/copy_view/test_methods.py +++ b/pandas/tests/copy_view/test_methods.py @@ -748,6 +748,82 @@ def test_head_tail(method, using_copy_on_write): tm.assert_frame_equal(df, df_orig) +def test_infer_objects(using_copy_on_write): + df = DataFrame({"a": [1, 2], "b": "c", "c": 1, "d": "x"}) + df_orig = df.copy() + df2 = df.infer_objects() + + if using_copy_on_write: + assert np.shares_memory(get_array(df2, "a"), get_array(df, "a")) + assert np.shares_memory(get_array(df2, "b"), get_array(df, "b")) + + else: + assert not np.shares_memory(get_array(df2, "a"), get_array(df, "a")) + assert not np.shares_memory(get_array(df2, "b"), get_array(df, "b")) + + df2.iloc[0, 0] = 0 + df2.iloc[0, 1] = "d" + if using_copy_on_write: + assert not np.shares_memory(get_array(df2, "a"), get_array(df, "a")) + assert not np.shares_memory(get_array(df2, "b"), get_array(df, "b")) + tm.assert_frame_equal(df, df_orig) + + +def test_infer_objects_no_reference(using_copy_on_write): + df = DataFrame( + { + "a": [1, 2], + "b": "c", + "c": 1, + "d": Series( + [Timestamp("2019-12-31"), Timestamp("2020-12-31")], dtype="object" + ), + "e": "b", + } + ) + df = df.infer_objects() + + arr_a = get_array(df, "a") + arr_b = get_array(df, "b") + arr_d = get_array(df, "d") + + df.iloc[0, 0] = 0 + df.iloc[0, 1] = "d" + df.iloc[0, 3] = Timestamp("2018-12-31") + if using_copy_on_write: + assert np.shares_memory(arr_a, get_array(df, "a")) + # TODO(CoW): Block splitting causes references here + assert not np.shares_memory(arr_b, get_array(df, "b")) + assert np.shares_memory(arr_d, get_array(df, "d")) + + +def test_infer_objects_reference(using_copy_on_write): + df = DataFrame( + { + "a": [1, 2], + "b": "c", + "c": 1, + "d": Series( + [Timestamp("2019-12-31"), Timestamp("2020-12-31")], dtype="object" + ), + } + ) + view = df[:] # 
noqa: F841 + df = df.infer_objects() + + arr_a = get_array(df, "a") + arr_b = get_array(df, "b") + arr_d = get_array(df, "d") + + df.iloc[0, 0] = 0 + df.iloc[0, 1] = "d" + df.iloc[0, 3] = Timestamp("2018-12-31") + if using_copy_on_write: + assert not np.shares_memory(arr_a, get_array(df, "a")) + assert not np.shares_memory(arr_b, get_array(df, "b")) + assert np.shares_memory(arr_d, get_array(df, "d")) + + @pytest.mark.parametrize( "kwargs", [
- [ ] xref #49473 (Replace xxxx with the GitHub issue number) - [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature - [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit). - [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions. - [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
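When `convert` leaves the values unchanged under copy-on-write, the blocks above pass `refs=self.refs` so the new block merely references the original, and the real copy is deferred until something writes. A toy copy-on-write container illustrating that deferral; it is a deliberately simplified assumption-laden sketch, far cruder than pandas' `BlockValuesRefs` tracking:

```python
class CowList:
    """Toy copy-on-write container: lazy 'copies' share the underlying
    list and only materialize a real copy on the first write."""

    def __init__(self, data, shared=False):
        self._data = data
        self._shared = shared

    def lazy_copy(self):
        # Both objects now point at the same list (like refs=self.refs).
        self._shared = True
        return CowList(self._data, shared=True)

    def __setitem__(self, i, value):
        if self._shared:
            self._data = list(self._data)  # the copy happens only here
            self._shared = False
        self._data[i] = value

    def __getitem__(self, i):
        return self._data[i]

a = CowList([1, 2, 3])
b = a.lazy_copy()
b[0] = 99
print(a[0], b[0])  # 1 99: the write to b triggered the deferred copy
```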
https://api.github.com/repos/pandas-dev/pandas/pulls/50428
2022-12-24T14:19:28Z
2023-02-08T14:41:44Z
2023-02-08T14:41:44Z
2023-02-08T14:58:20Z
ENH: Use cow for reindex_like
diff --git a/pandas/core/generic.py b/pandas/core/generic.py index 7a40e30c0ae7a..dacb8d121ef79 100644 --- a/pandas/core/generic.py +++ b/pandas/core/generic.py @@ -4218,7 +4218,7 @@ def reindex_like( self: NDFrameT, other, method: Literal["backfill", "bfill", "pad", "ffill", "nearest"] | None = None, - copy: bool_t = True, + copy: bool_t | None = None, limit=None, tolerance=None, ) -> NDFrameT: diff --git a/pandas/tests/copy_view/test_methods.py b/pandas/tests/copy_view/test_methods.py index f5c7b31e59bc5..2cf364207d356 100644 --- a/pandas/tests/copy_view/test_methods.py +++ b/pandas/tests/copy_view/test_methods.py @@ -339,6 +339,24 @@ def test_assign(using_copy_on_write): tm.assert_frame_equal(df, df_orig) +def test_reindex_like(using_copy_on_write): + df = DataFrame({"a": [1, 2], "b": "a"}) + other = DataFrame({"b": "a", "a": [1, 2]}) + + df_orig = df.copy() + df2 = df.reindex_like(other) + + if using_copy_on_write: + assert np.shares_memory(get_array(df2, "a"), get_array(df, "a")) + else: + assert not np.shares_memory(get_array(df2, "a"), get_array(df, "a")) + + df2.iloc[0, 1] = 0 + if using_copy_on_write: + assert not np.shares_memory(get_array(df2, "a"), get_array(df, "a")) + tm.assert_frame_equal(df, df_orig) + + def test_reorder_levels(using_copy_on_write): index = MultiIndex.from_tuples( [(1, 1), (1, 2), (2, 1), (2, 2)], names=["one", "two"]
- [ ] xref #49473 (Replace xxxx with the GitHub issue number) - [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature - [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit). - [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions. - [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
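The one-line signature change above follows the convention used across these copy-on-write PRs: `copy=None` means "defer to the global mode" — no eager copy when copy-on-write is active, a copy otherwise, while explicit `True`/`False` keep their old meaning. A minimal sketch of that resolution rule (the global flag and helper are hypothetical stand-ins for pandas' option machinery):

```python
COPY_ON_WRITE = True  # stand-in for pandas' global copy-on-write option

def resolve_copy(copy):
    # copy=None defers to the mode: skip the eager copy under CoW,
    # keep the old copying behaviour otherwise.
    if copy is None:
        return not COPY_ON_WRITE
    return copy

print(resolve_copy(None), resolve_copy(True), resolve_copy(False))
# False True False (with COPY_ON_WRITE enabled)
```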
https://api.github.com/repos/pandas-dev/pandas/pulls/50426
2022-12-24T10:57:33Z
2022-12-28T18:12:03Z
2022-12-28T18:12:03Z
2022-12-28T19:05:50Z
REF: Make Util functions accessible from C
diff --git a/pandas/_libs/algos.pyx b/pandas/_libs/algos.pyx index 7fcba58772ac4..8e2e5d113d58e 100644 --- a/pandas/_libs/algos.pyx +++ b/pandas/_libs/algos.pyx @@ -38,6 +38,12 @@ from numpy cimport ( cnp.import_array() cimport pandas._libs.util as util + + +cdef extern from "pandas/type.h": + bint is_integer_object(object obj) + bint is_array(object obj) + from pandas._libs.dtypes cimport ( numeric_object_t, numeric_t, @@ -513,7 +519,7 @@ def validate_limit(nobs: int | None, limit=None) -> int: if limit is None: lim = nobs else: - if not util.is_integer_object(limit): + if not is_integer_object(limit): raise ValueError("Limit must be an integer") if limit < 1: raise ValueError("Limit must be greater than 0") diff --git a/pandas/_libs/algos_common_helper.pxi.in b/pandas/_libs/algos_common_helper.pxi.in index ce2e1ffbb5870..ae66cb8cec211 100644 --- a/pandas/_libs/algos_common_helper.pxi.in +++ b/pandas/_libs/algos_common_helper.pxi.in @@ -12,7 +12,7 @@ WARNING: DO NOT edit .pxi FILE directly, .pxi is generated from .pxi.in def ensure_platform_int(object arr): # GH3033, GH1392 # platform int is the size of the int pointer, e.g. 
np.intp - if util.is_array(arr): + if is_array(arr): if (<ndarray>arr).descr.type_num == cnp.NPY_INTP: return arr else: @@ -23,7 +23,7 @@ def ensure_platform_int(object arr): def ensure_object(object arr): - if util.is_array(arr): + if is_array(arr): if (<ndarray>arr).descr.type_num == NPY_OBJECT: return arr else: @@ -61,7 +61,7 @@ def get_dispatch(dtypes): def ensure_{{name}}(object arr, copy=True): - if util.is_array(arr): + if is_array(arr): if (<ndarray>arr).descr.type_num == NPY_{{c_type}}: return arr else: diff --git a/pandas/_libs/hashing.pyx b/pandas/_libs/hashing.pyx index 197ec99247b4a..885b143a41000 100644 --- a/pandas/_libs/hashing.pyx +++ b/pandas/_libs/hashing.pyx @@ -18,7 +18,10 @@ from numpy cimport ( import_array() -from pandas._libs.util cimport is_nan +cdef extern from "pandas/type.h": + bint is_nan(object obj) + bint is_datetime64_object(object obj) + bint is_integer_object(object obj) @cython.boundscheck(False) diff --git a/pandas/_libs/index.pyx b/pandas/_libs/index.pyx index 9e2adee407b1a..e86c01c5d1038 100644 --- a/pandas/_libs/index.pyx +++ b/pandas/_libs/index.pyx @@ -14,7 +14,13 @@ from numpy cimport ( cnp.import_array() -from pandas._libs cimport util +cdef extern from "pandas/type.h": + bint is_integer_object(object obj) + bint is_float_object(object obj) + bint is_bool_object(object obj) + bint is_complex_object(object obj) + bint is_nan(object obj) + from pandas._libs.hashtable cimport HashTable from pandas._libs.tslibs.nattype cimport c_NaT as NaT from pandas._libs.tslibs.np_datetime cimport ( @@ -74,7 +80,7 @@ cdef ndarray _get_bool_indexer(ndarray values, object val): indexer[i] = is_matching_na(item, val) else: - if util.is_nan(val): + if is_nan(val): indexer = np.isnan(values) else: indexer = values == val @@ -836,7 +842,7 @@ include "index_class_helper.pxi" cdef class BoolEngine(UInt8Engine): cdef _check_type(self, object val): - if not util.is_bool_object(val): + if not is_bool_object(val): raise KeyError(val) return 
<uint8_t>val @@ -994,7 +1000,7 @@ cdef class SharedEngine: except KeyError: loc = -1 else: - assert util.is_integer_object(loc), (loc, val) + assert is_integer_object(loc), (loc, val) res[i] = loc return res @@ -1032,7 +1038,7 @@ cdef class SharedEngine: if isinstance(locs, slice): # Only needed for get_indexer_non_unique locs = np.arange(locs.start, locs.stop, locs.step, dtype=np.intp) - elif util.is_integer_object(locs): + elif is_integer_object(locs): locs = np.array([locs], dtype=np.intp) else: assert locs.dtype.kind == "b" diff --git a/pandas/_libs/index_class_helper.pxi.in b/pandas/_libs/index_class_helper.pxi.in index b9c02ba64f69c..6da1e50bab178 100644 --- a/pandas/_libs/index_class_helper.pxi.in +++ b/pandas/_libs/index_class_helper.pxi.in @@ -36,8 +36,8 @@ cdef class {{name}}Engine(IndexEngine): cdef _check_type(self, object val): {{if name not in {'Float64', 'Float32', 'Complex64', 'Complex128'} }} - if not util.is_integer_object(val): - if util.is_float_object(val): + if not is_integer_object(val): + if is_float_object(val): # Make sure Int64Index.get_loc(2.0) works if val.is_integer(): return int(val) @@ -48,13 +48,13 @@ cdef class {{name}}Engine(IndexEngine): raise KeyError(val) {{endif}} {{elif name not in {'Complex64', 'Complex128'} }} - if not util.is_integer_object(val) and not util.is_float_object(val): + if not is_integer_object(val) and not is_float_object(val): # in particular catch bool and avoid casting True -> 1.0 raise KeyError(val) {{else}} - if (not util.is_integer_object(val) - and not util.is_float_object(val) - and not util.is_complex_object(val) + if (not is_integer_object(val) + and not is_float_object(val) + and not is_complex_object(val) ): # in particular catch bool and avoid casting True -> 1.0 raise KeyError(val) diff --git a/pandas/_libs/interval.pyx b/pandas/_libs/interval.pyx index fc2c486173b9d..3318f7ac8cef8 100644 --- a/pandas/_libs/interval.pyx +++ b/pandas/_libs/interval.pyx @@ -31,7 +31,6 @@ from numpy cimport ( 
cnp.import_array() -from pandas._libs cimport util from pandas._libs.hashtable cimport Int64Vector from pandas._libs.tslibs.timedeltas cimport _Timedelta from pandas._libs.tslibs.timestamps cimport _Timestamp @@ -42,6 +41,12 @@ from pandas._libs.tslibs.util cimport ( is_timedelta64_object, ) + +cdef extern from "pandas/type.h": + bint is_array(object obj) + bint is_nan(object obj) + + VALID_CLOSED = frozenset(["left", "right", "both", "neither"]) @@ -360,7 +365,7 @@ cdef class Interval(IntervalMixin): self_tuple = (self.left, self.right, self.closed) other_tuple = (other.left, other.right, other.closed) return PyObject_RichCompare(self_tuple, other_tuple, op) - elif util.is_array(other): + elif is_array(other): return np.array( [PyObject_RichCompare(self, x, op) for x in other], dtype=bool, @@ -551,7 +556,7 @@ def intervals_to_interval_bounds(ndarray intervals, bint validate_closed=True): for i in range(n): interval = intervals[i] - if interval is None or util.is_nan(interval): + if interval is None or is_nan(interval): left[i] = np.nan right[i] = np.nan continue diff --git a/pandas/_libs/lib.pyx b/pandas/_libs/lib.pyx index bc7b876cb5de8..65624f9971a18 100644 --- a/pandas/_libs/lib.pyx +++ b/pandas/_libs/lib.pyx @@ -92,6 +92,17 @@ cdef extern from "src/parse_helper.h": int floatify(object, float64_t *result, int *maybe_int) except -1 from pandas._libs cimport util + + +cdef extern from "pandas/type.h": + bint is_integer_object(object obj) + bint is_float_object(object obj) + bint is_bool_object(object obj) + bint is_datetime64_object(object obj) + bint is_timedelta64_object(object obj) + bint is_array(object obj) + + from pandas._libs.util cimport ( INT64_MAX, INT64_MIN, diff --git a/pandas/_libs/missing.pyx b/pandas/_libs/missing.pyx index a3b0451381ad2..e4fdf0b083240 100644 --- a/pandas/_libs/missing.pyx +++ b/pandas/_libs/missing.pyx @@ -4,6 +4,7 @@ from sys import maxsize cimport cython from cython cimport Py_ssize_t + import numpy as np cimport numpy as cnp 
@@ -17,6 +18,16 @@ from numpy cimport ( cnp.import_array() from pandas._libs cimport util + + +cdef extern from "pandas/type.h": + bint is_timedelta64_object(object obj) + bint is_float_object(object obj) + bint is_complex_object(object obj) + bint is_datetime64_object(object obj) + bint is_array(object obj) + bint is_nan(object obj) + from pandas._libs.tslibs.nattype cimport ( c_NaT as NaT, checknull_with_nat, @@ -89,38 +100,38 @@ cpdef bint is_matching_na(object left, object right, bint nan_matches_none=False bool """ if left is None: - if nan_matches_none and util.is_nan(right): + if nan_matches_none and is_nan(right): return True return right is None elif left is C_NA: return right is C_NA elif left is NaT: return right is NaT - elif util.is_float_object(left): - if nan_matches_none and right is None and util.is_nan(left): + elif is_float_object(left): + if nan_matches_none and right is None and is_nan(left): return True return ( - util.is_nan(left) - and util.is_float_object(right) - and util.is_nan(right) + is_nan(left) + and is_float_object(right) + and is_nan(right) ) - elif util.is_complex_object(left): + elif is_complex_object(left): return ( - util.is_nan(left) - and util.is_complex_object(right) - and util.is_nan(right) + is_nan(left) + and is_complex_object(right) + and is_nan(right) ) - elif util.is_datetime64_object(left): + elif is_datetime64_object(left): return ( get_datetime64_value(left) == NPY_NAT - and util.is_datetime64_object(right) + and is_datetime64_object(right) and get_datetime64_value(right) == NPY_NAT and get_datetime64_unit(left) == get_datetime64_unit(right) ) - elif util.is_timedelta64_object(left): + elif is_timedelta64_object(left): return ( get_timedelta64_value(left) == NPY_NAT - and util.is_timedelta64_object(right) + and is_timedelta64_object(right) and get_timedelta64_value(right) == NPY_NAT and get_datetime64_unit(left) == get_datetime64_unit(right) ) @@ -153,15 +164,15 @@ cpdef bint checknull(object val, bint 
inf_as_na=False): """ if val is None or val is NaT or val is C_NA: return True - elif util.is_float_object(val) or util.is_complex_object(val): + elif is_float_object(val) or is_complex_object(val): if val != val: return True elif inf_as_na: return val == INF or val == NEGINF return False - elif util.is_timedelta64_object(val): + elif is_timedelta64_object(val): return get_timedelta64_value(val) == NPY_NAT - elif util.is_datetime64_object(val): + elif is_datetime64_object(val): return get_datetime64_value(val) == NPY_NAT else: return is_decimal_na(val) @@ -251,11 +262,11 @@ def isnaobj2d(arr: ndarray, inf_as_na: bool = False) -> ndarray: def isposinf_scalar(val: object) -> bool: - return util.is_float_object(val) and val == INF + return is_float_object(val) and val == INF def isneginf_scalar(val: object) -> bool: - return util.is_float_object(val) and val == NEGINF + return is_float_object(val) and val == NEGINF cdef bint is_null_datetime64(v): @@ -299,7 +310,7 @@ def is_float_nan(values: ndarray) -> ndarray: for i in range(N): val = values[i] - if util.is_nan(val): + if is_nan(val): result[i] = True return result.view(bool) @@ -327,7 +338,7 @@ def is_numeric_na(values: ndarray) -> ndarray: for i in range(N): val = values[i] if checknull(val): - if val is None or val is C_NA or util.is_nan(val) or is_decimal_na(val): + if val is None or val is C_NA or is_nan(val) or is_decimal_na(val): result[i] = True else: raise TypeError(f"'values' contains non-numeric NA {val}") @@ -343,7 +354,7 @@ def _create_binary_propagating_op(name, is_divmod=False): def method(self, other): if (other is C_NA or isinstance(other, (str, bytes)) or isinstance(other, (numbers.Number, np.bool_)) - or util.is_array(other) and not other.shape): + or is_array(other) and not other.shape): # Need the other.shape clause to handle NumPy scalars, # since we do a setitem on `out` below, which # won't work for NumPy scalars. 
@@ -352,7 +363,7 @@ def _create_binary_propagating_op(name, is_divmod=False): else: return NA - elif util.is_array(other): + elif is_array(other): out = np.empty(other.shape, dtype=object) out[:] = NA @@ -464,7 +475,7 @@ class NAType(C_NAType): return type(other)(1) else: return NA - elif util.is_array(other): + elif is_array(other): return np.where(other == 0, other.dtype.type(1), NA) return NotImplemented @@ -477,7 +488,7 @@ class NAType(C_NAType): return other else: return NA - elif util.is_array(other): + elif is_array(other): return np.where(other == 1, other, NA) return NotImplemented diff --git a/pandas/_libs/pandas/type.h b/pandas/_libs/pandas/type.h new file mode 100644 index 0000000000000..d238ae37fead4 --- /dev/null +++ b/pandas/_libs/pandas/type.h @@ -0,0 +1,172 @@ +/* +Copyright (c) 2022-, PyData Development Team +All rights reserved. + +Distributed under the terms of the BSD Simplified License. + +The full license is in the LICENSE file, distributed with this software. +*/ + +#ifndef PANDAS__LIBS_PANDAS_TYPE_H_ +#define PANDAS__LIBS_PANDAS_TYPE_H_ +#include "pyerrors.h" +#ifdef __cplusplus +extern "C" { +#endif + +#define PY_SSIZE_T_CLEAN +#include <Python.h> +#include <numpy/ndarrayobject.h> + +/* +Cython equivalent of `isinstance(val, np.timedelta64)` + +Parameters +---------- +val : object + +Returns +------- +is_timedelta64 : bool +*/ +int is_timedelta64_object(PyObject *obj) { + return PyObject_TypeCheck(obj, &PyTimedeltaArrType_Type); +} + +/* +Cython equivalent of + +`isinstance(val, (int, long, np.integer)) and not isinstance(val, bool)` + +Parameters +---------- +val : object + +Returns +------- +is_integer : bool + +Notes +----- +This counts np.timedelta64 objects as integers. 
+*/ +int is_integer_object(PyObject *obj) { + return !PyBool_Check(obj) && PyArray_IsIntegerScalar(obj) + && !is_timedelta64_object(obj); +} + +/* +Cython equivalent of `isinstance(val, (float, np.complex_))` + +Parameters +---------- +val : object + +Returns +------- +is_float : bool +*/ +int is_float_object(PyObject *obj) { + return PyFloat_Check(obj) || PyObject_TypeCheck(obj, &PyFloatingArrType_Type); +} + +/* +Cython equivalent of `isinstance(val, (complex, np.complex_))` + +Parameters +---------- +val : object + +Returns +------- +is_complex : bool +*/ +int is_complex_object(PyObject *obj) { + return PyComplex_Check(obj) || + PyObject_TypeCheck(obj, &PyComplexFloatingArrType_Type); +} + +/* +Cython equivalent of `isinstance(val, (bool, np.bool_))` + +Parameters +---------- +val : object + +Returns +------- +is_bool : bool +*/ +int is_bool_object(PyObject *obj) { + return PyBool_Check(obj) || PyObject_TypeCheck(obj, &PyBoolArrType_Type); +} + +int is_real_number_object(PyObject *obj) { + return is_bool_object(obj) || is_integer_object(obj) || is_float_object(obj); +} + +/* +Cython equivalent of `isinstance(val, np.datetime64)` + +Parameters +---------- +val : object + +Returns +------- +is_datetime64 : bool +*/ +int is_datetime64_object(PyObject *obj) { + return PyObject_TypeCheck(obj, &PyDatetimeArrType_Type); +} + +/* +Cython equivalent of `isinstance(val, np.ndarray)` + +Parameters +---------- +val : object + +Returns +------- +is_ndarray : bool +*/ +int is_array(PyObject *obj) { return PyArray_Check(obj); } + + +/* +Check if val is a Not-A-Number float or complex, including +float('NaN') and np.nan. + +Parameters +---------- +val : object + +Returns +------- +is_nan : bool +*/ +int is_nan(PyObject *obj) { + if (is_float_object(obj)) { + double fobj = PyFloat_AsDouble(obj); + if (fobj == -1.0 && PyErr_Occurred()) { + // TODO(wayd): handle this error! 
+ } + + return fobj != fobj; + } + + if (is_complex_object(obj)) { + int ret = PyObject_RichCompareBool(obj, obj, Py_NE) == 1; + if (ret == -1) { + // TODO(wayd): handle this error! + } + } + + return 0; +} + +#ifdef __cplusplus +} +#endif +#endif // PANDAS__LIBS_PANDAS_TYPE_H_ diff --git a/pandas/_libs/parsers.pyx b/pandas/_libs/parsers.pyx index 1941cfde4acb9..c28f9ff036f7a 100644 --- a/pandas/_libs/parsers.pyx +++ b/pandas/_libs/parsers.pyx @@ -62,7 +62,9 @@ from numpy cimport ( cnp.import_array() -from pandas._libs cimport util +cdef extern from "pandas/type.h": + bint is_integer_object(object obj) + from pandas._libs.util cimport ( INT64_MAX, INT64_MIN, @@ -594,7 +596,7 @@ cdef class TextReader: self.parser.quotechar = <char>ord(quote_char) cdef _make_skiprow_set(self): - if util.is_integer_object(self.skiprows): + if is_integer_object(self.skiprows): parser_set_skipfirstnrows(self.parser, self.skiprows) elif not callable(self.skiprows): for i in self.skiprows: diff --git a/pandas/_libs/reduction.pyx b/pandas/_libs/reduction.pyx index 7ff0842678d7f..b33f10fbcded6 100644 --- a/pandas/_libs/reduction.pyx +++ b/pandas/_libs/reduction.pyx @@ -4,7 +4,8 @@ cimport numpy as cnp cnp.import_array() -from pandas._libs.util cimport is_array +cdef extern from "pandas/type.h": + bint is_array(object obj) cdef cnp.dtype _dtype_obj = np.dtype("object") diff --git a/pandas/_libs/tslib.pyx b/pandas/_libs/tslib.pyx index c1a30e03235b5..ffce1aaef4be3 100644 --- a/pandas/_libs/tslib.pyx +++ b/pandas/_libs/tslib.pyx @@ -38,11 +38,12 @@ from pandas._libs.tslibs.np_datetime cimport ( pydatetime_to_dt64, string_to_dts, ) -from pandas._libs.util cimport ( - is_datetime64_object, - is_float_object, - is_integer_object, -) + + +cdef extern from "pandas/type.h": + bint is_datetime64_object(object obj) + bint is_float_object(object obj) + bint is_integer_object(object obj) from pandas._libs.tslibs.np_datetime import OutOfBoundsDatetime from pandas._libs.tslibs.parsing import 
parse_datetime_string diff --git a/pandas/_libs/tslibs/conversion.pyx b/pandas/_libs/tslibs/conversion.pyx index 6cd579d59c900..5238683a54006 100644 --- a/pandas/_libs/tslibs/conversion.pyx +++ b/pandas/_libs/tslibs/conversion.pyx @@ -55,11 +55,12 @@ from pandas._libs.tslibs.timezones cimport ( is_utc, maybe_get_tz, ) -from pandas._libs.tslibs.util cimport ( - is_datetime64_object, - is_float_object, - is_integer_object, -) + + +cdef extern from "pandas/type.h": + bint is_datetime64_object(object obj) + bint is_float_object(object obj) + bint is_integer_object(object obj) from pandas._libs.tslibs.parsing import parse_datetime_string diff --git a/pandas/_libs/tslibs/nattype.pyx b/pandas/_libs/tslibs/nattype.pyx index 9407f57a282bf..eb10647e2364b 100644 --- a/pandas/_libs/tslibs/nattype.pyx +++ b/pandas/_libs/tslibs/nattype.pyx @@ -22,6 +22,16 @@ from numpy cimport int64_t cnp.import_array() cimport pandas._libs.tslibs.util as util + + +cdef extern from "pandas/type.h": + bint is_timedelta64_object(object obj) + bint is_integer_object(object obj) + bint is_float_object(object obj) + bint is_datetime64_object(object obj) + bint is_array(object obj) + bint is_nan(object obj) + from pandas._libs.tslibs.np_datetime cimport ( get_datetime64_value, get_timedelta64_value, @@ -68,9 +78,9 @@ def _make_error_func(func_name: str, cls): cdef _nat_divide_op(self, other): - if PyDelta_Check(other) or util.is_timedelta64_object(other) or other is c_NaT: + if PyDelta_Check(other) or is_timedelta64_object(other) or other is c_NaT: return np.nan - if util.is_integer_object(other) or util.is_float_object(other): + if is_integer_object(other) or is_float_object(other): return c_NaT return NotImplemented @@ -96,15 +106,15 @@ cdef class _NaT(datetime): __array_priority__ = 100 def __richcmp__(_NaT self, object other, int op): - if util.is_datetime64_object(other) or PyDateTime_Check(other): + if is_datetime64_object(other) or PyDateTime_Check(other): # We treat NaT as datetime-like for 
this comparison return op == Py_NE - elif util.is_timedelta64_object(other) or PyDelta_Check(other): + elif is_timedelta64_object(other) or PyDelta_Check(other): # We treat NaT as timedelta-like for this comparison return op == Py_NE - elif util.is_array(other): + elif is_array(other): if other.dtype.kind in "mM": result = np.empty(other.shape, dtype=np.bool_) result.fill(op == Py_NE) @@ -138,14 +148,14 @@ cdef class _NaT(datetime): return c_NaT elif PyDelta_Check(other): return c_NaT - elif util.is_datetime64_object(other) or util.is_timedelta64_object(other): + elif is_datetime64_object(other) or is_timedelta64_object(other): return c_NaT - elif util.is_integer_object(other): + elif is_integer_object(other): # For Period compat return c_NaT - elif util.is_array(other): + elif is_array(other): if other.dtype.kind in "mM": # If we are adding to datetime64, we treat NaT as timedelta # Either way, result dtype is datetime64 @@ -176,14 +186,14 @@ cdef class _NaT(datetime): return c_NaT elif PyDelta_Check(other): return c_NaT - elif util.is_datetime64_object(other) or util.is_timedelta64_object(other): + elif is_datetime64_object(other) or is_timedelta64_object(other): return c_NaT - elif util.is_integer_object(other): + elif is_integer_object(other): # For Period compat return c_NaT - elif util.is_array(other): + elif is_array(other): if other.dtype.kind == "m": if not is_rsub: # NaT - timedelta64 we treat NaT as datetime64, so result @@ -216,7 +226,7 @@ cdef class _NaT(datetime): return NotImplemented def __rsub__(self, other): - if util.is_array(other): + if is_array(other): if other.dtype.kind == "m": # timedelta64 - NaT we have to treat NaT as timedelta64 # for this to be meaningful, and the result is timedelta64 @@ -247,7 +257,7 @@ cdef class _NaT(datetime): return _nat_divide_op(self, other) def __mul__(self, other): - if util.is_integer_object(other) or util.is_float_object(other): + if is_integer_object(other) or is_float_object(other): return NaT return 
NotImplemented @@ -377,7 +387,7 @@ class NaTType(_NaT): return _nat_rdivide_op(self, other) def __rmul__(self, other): - if util.is_integer_object(other) or util.is_float_object(other): + if is_integer_object(other) or is_float_object(other): return c_NaT return NotImplemented @@ -1220,14 +1230,14 @@ cdef bint checknull_with_nat(object val): """ Utility to check if a value is a nat or not. """ - return val is None or util.is_nan(val) or val is c_NaT + return val is None or is_nan(val) or val is c_NaT cdef bint is_dt64nat(object val): """ Is this a np.datetime64 object np.datetime64("NaT"). """ - if util.is_datetime64_object(val): + if is_datetime64_object(val): return get_datetime64_value(val) == NPY_NAT return False @@ -1236,6 +1246,6 @@ cdef bint is_td64nat(object val): """ Is this a np.timedelta64 object np.timedelta64("NaT"). """ - if util.is_timedelta64_object(val): + if is_timedelta64_object(val): return get_timedelta64_value(val) == NPY_NAT return False diff --git a/pandas/_libs/tslibs/np_datetime.pxd b/pandas/_libs/tslibs/np_datetime.pxd index de81c611c9ee9..3c7f3859cb6b2 100644 --- a/pandas/_libs/tslibs/np_datetime.pxd +++ b/pandas/_libs/tslibs/np_datetime.pxd @@ -54,7 +54,7 @@ cdef extern from "numpy/ndarraytypes.h": int64_t NPY_DATETIME_NAT # elswhere we call this NPY_NAT -cdef extern from "src/datetime/np_datetime.h": +cdef extern from "tslibs/src/datetime/np_datetime.h": ctypedef struct pandas_timedeltastruct: int64_t days int32_t hrs, min, sec, ms, us, ns, seconds, microseconds, nanoseconds diff --git a/pandas/_libs/tslibs/np_datetime.pyx b/pandas/_libs/tslibs/np_datetime.pyx index 9db3f7cb4648e..a48ac9f0de44d 100644 --- a/pandas/_libs/tslibs/np_datetime.pyx +++ b/pandas/_libs/tslibs/np_datetime.pyx @@ -35,7 +35,7 @@ from numpy cimport ( from pandas._libs.tslibs.util cimport get_c_string_buf_and_size -cdef extern from "src/datetime/np_datetime.h": +cdef extern from "tslibs/src/datetime/np_datetime.h": int cmp_npy_datetimestruct(npy_datetimestruct *a, 
npy_datetimestruct *b) @@ -48,7 +48,7 @@ cdef extern from "src/datetime/np_datetime.h": PyArray_DatetimeMetaData get_datetime_metadata_from_dtype(cnp.PyArray_Descr *dtype) -cdef extern from "src/datetime/np_datetime_strings.h": +cdef extern from "tslibs/src/datetime/np_datetime_strings.h": int parse_iso_8601_datetime(const char *str, int len, int want_exc, npy_datetimestruct *out, NPY_DATETIMEUNIT *out_bestunit, diff --git a/pandas/_libs/tslibs/offsets.pyx b/pandas/_libs/tslibs/offsets.pyx index 470d1e89e5b88..87da874635c34 100644 --- a/pandas/_libs/tslibs/offsets.pyx +++ b/pandas/_libs/tslibs/offsets.pyx @@ -31,12 +31,13 @@ cnp.import_array() from pandas._libs.properties import cache_readonly -from pandas._libs.tslibs cimport util -from pandas._libs.tslibs.util cimport ( - is_datetime64_object, - is_float_object, - is_integer_object, -) + +cdef extern from "pandas/type.h": + bint is_datetime64_object(object obj) + bint is_float_object(object obj) + bint is_integer_object(object obj) + bint is_timedelta64_object(object obj) + bint is_array(object obj) from pandas._libs.tslibs.ccalendar import ( MONTH_ALIASES, @@ -143,7 +144,7 @@ def apply_wraps(func): elif ( isinstance(other, BaseOffset) or PyDelta_Check(other) - or util.is_timedelta64_object(other) + or is_timedelta64_object(other) ): # timedelta path return func(self, other) @@ -479,7 +480,7 @@ cdef class BaseOffset: # TODO(cython3): remove this, this moved to __radd__ return other.__add__(self) - elif util.is_array(other) and other.dtype == object: + elif is_array(other) and other.dtype == object: return np.array([self + x for x in other]) try: @@ -508,7 +509,7 @@ cdef class BaseOffset: return (-self).__add__(other) def __mul__(self, other): - if util.is_array(other): + if is_array(other): return np.array([self * x for x in other]) elif is_integer_object(other): return type(self)(n=other * self.n, normalize=self.normalize, @@ -746,7 +747,7 @@ cdef class BaseOffset: TypeError if `int(n)` raises ValueError if n != 
int(n) """ - if util.is_timedelta64_object(n): + if is_timedelta64_object(n): raise TypeError(f"`n` argument must be an integer, got {type(n)}") try: nint = int(n) @@ -1071,7 +1072,7 @@ cdef class Tick(SingleConstructorOffset): # PyDate_Check includes date, datetime return Timestamp(other) + self - if util.is_timedelta64_object(other) or PyDelta_Check(other): + if is_timedelta64_object(other) or PyDelta_Check(other): return other + self.delta raise ApplyTypeError(f"Unhandled type: {type(other).__name__}") diff --git a/pandas/_libs/tslibs/period.pyx b/pandas/_libs/tslibs/period.pyx index 8955fb678d075..63d7053003dc3 100644 --- a/pandas/_libs/tslibs/period.pyx +++ b/pandas/_libs/tslibs/period.pyx @@ -39,7 +39,15 @@ from libc.time cimport ( # import datetime C API import_datetime() -cimport pandas._libs.tslibs.util as util +cimport pandas._libs.util as util + + +cdef extern from "pandas/type.h": + bint is_datetime64_object(object obj) + bint is_timedelta64_object(object obj) + bint is_integer_object(object obj) + bint is_array(object obj) + from pandas._libs.missing cimport C_NA from pandas._libs.tslibs.np_datetime cimport ( NPY_DATETIMEUNIT, @@ -1474,7 +1482,7 @@ cdef int64_t _extract_ordinal(object item, str freqstr, freq) except? -1: if checknull_with_nat(item) or item is C_NA: ordinal = NPY_NAT - elif util.is_integer_object(item): + elif is_integer_object(item): if item == NPY_NAT: ordinal = NPY_NAT else: @@ -1667,7 +1675,7 @@ cdef class _Period(PeriodMixin): return PyObject_RichCompareBool(self.ordinal, other.ordinal, op) elif other is NaT: return op == Py_NE - elif util.is_array(other): + elif is_array(other): # GH#44285 if cnp.PyArray_IsZeroDim(other): return PyObject_RichCompare(self, other.item(), op) @@ -1688,7 +1696,7 @@ cdef class _Period(PeriodMixin): f"Period(freq={self.freqstr})") if ( - util.is_timedelta64_object(other) and + is_timedelta64_object(other) and get_timedelta64_value(other) == NPY_NAT ): # i.e. 
np.timedelta64("nat") @@ -1727,7 +1735,7 @@ cdef class _Period(PeriodMixin): return self._add_offset(other) elif other is NaT: return NaT - elif util.is_integer_object(other): + elif is_integer_object(other): ordinal = self.ordinal + other * self.freq.n return Period(ordinal=ordinal, freq=self.freq) @@ -1741,7 +1749,7 @@ cdef class _Period(PeriodMixin): raise TypeError(f"unsupported operand type(s) for +: '{sname}' " f"and '{oname}'") - elif util.is_array(other): + elif is_array(other): if other.dtype == object: # GH#50162 return np.array([self + x for x in other], dtype=object) @@ -1762,7 +1770,7 @@ cdef class _Period(PeriodMixin): elif ( is_any_td_scalar(other) or is_offset_object(other) - or util.is_integer_object(other) + or is_integer_object(other) ): return self + (-other) elif is_period_object(other): @@ -1772,7 +1780,7 @@ cdef class _Period(PeriodMixin): elif other is NaT: return NaT - elif util.is_array(other): + elif is_array(other): if other.dtype == object: # GH#50162 return np.array([self - x for x in other], dtype=object) @@ -1783,7 +1791,7 @@ cdef class _Period(PeriodMixin): if other is NaT: return NaT - elif util.is_array(other): + elif is_array(other): if other.dtype == object: # GH#50162 return np.array([x - self for x in other], dtype=object) @@ -2543,7 +2551,7 @@ class Period(_Period): raise ValueError("Only value or ordinal but not both should be " "given but not both") elif ordinal is not None: - if not util.is_integer_object(ordinal): + if not is_integer_object(ordinal): raise ValueError("Ordinal must be an integer") if freq is None: raise ValueError("Must supply freq for ordinal value") @@ -2582,8 +2590,8 @@ class Period(_Period): # if we have a non-hashable value. 
ordinal = NPY_NAT - elif isinstance(value, str) or util.is_integer_object(value): - if util.is_integer_object(value): + elif isinstance(value, str) or is_integer_object(value): + if is_integer_object(value): if value == NPY_NAT: value = "NaT" @@ -2615,7 +2623,7 @@ class Period(_Period): raise ValueError("Must supply freq for datetime value") if isinstance(dt, Timestamp): nanosecond = dt.nanosecond - elif util.is_datetime64_object(value): + elif is_datetime64_object(value): dt = Timestamp(value) if freq is None: raise ValueError("Must supply freq for datetime value") diff --git a/pandas/_libs/tslibs/strptime.pyx b/pandas/_libs/tslibs/strptime.pyx index 73e9176d3a6d2..f65800a073f1f 100644 --- a/pandas/_libs/tslibs/strptime.pyx +++ b/pandas/_libs/tslibs/strptime.pyx @@ -41,13 +41,16 @@ from pandas._libs.tslibs.np_datetime cimport ( pydate_to_dt64, pydatetime_to_dt64, ) + from pandas._libs.tslibs.np_datetime import OutOfBoundsDatetime + from pandas._libs.tslibs.timestamps cimport _Timestamp -from pandas._libs.util cimport ( - is_datetime64_object, - is_float_object, - is_integer_object, -) + + +cdef extern from "pandas/type.h": + bint is_datetime64_object(object obj) + bint is_float_object(object obj) + bint is_integer_object(object obj) cnp.import_array() diff --git a/pandas/_libs/tslibs/timedeltas.pyx b/pandas/_libs/tslibs/timedeltas.pyx index 7810bc9f75e66..52fb2bffee9c9 100644 --- a/pandas/_libs/tslibs/timedeltas.pyx +++ b/pandas/_libs/tslibs/timedeltas.pyx @@ -28,8 +28,14 @@ from cpython.datetime cimport ( import_datetime() +cdef extern from "pandas/type.h": + bint is_array(object obj) + bint is_nan(object obj) + bint is_datetime64_object(object obj) + bint is_timedelta64_object(object obj) + bint is_float_object(object obj) + bint is_integer_object(object obj) -cimport pandas._libs.tslibs.util as util from pandas._libs.tslibs.base cimport ABCTimestamp from pandas._libs.tslibs.conversion cimport ( cast_from_unit, @@ -66,13 +72,6 @@ from 
pandas._libs.tslibs.np_datetime import ( ) from pandas._libs.tslibs.offsets cimport is_tick_object -from pandas._libs.tslibs.util cimport ( - is_array, - is_datetime64_object, - is_float_object, - is_integer_object, - is_timedelta64_object, -) from pandas._libs.tslibs.fields import ( RoundTo, @@ -1129,7 +1128,7 @@ cdef class _Timedelta(timedelta): elif other is NaT: return op == Py_NE - elif util.is_array(other): + elif is_array(other): if other.dtype.kind == "m": return PyObject_RichCompare(self.asm8, other, op) elif other.dtype.kind == "O": @@ -1849,7 +1848,7 @@ class Timedelta(_Timedelta): def __mul__(self, other): if is_integer_object(other) or is_float_object(other): - if util.is_nan(other): + if is_nan(other): # np.nan * timedelta -> np.timedelta64("NaT"), in this case NaT return NaT @@ -1882,7 +1881,7 @@ class Timedelta(_Timedelta): elif is_integer_object(other) or is_float_object(other): # integers or floats - if util.is_nan(other): + if is_nan(other): return NaT return Timedelta._from_value_and_reso( <int64_t>(self.value / other), self._creso @@ -1936,7 +1935,7 @@ class Timedelta(_Timedelta): return self.value // other.value elif is_integer_object(other) or is_float_object(other): - if util.is_nan(other): + if is_nan(other): return NaT return type(self)._from_value_and_reso(self.value // other, self._creso) diff --git a/pandas/_libs/tslibs/timestamps.pyx b/pandas/_libs/tslibs/timestamps.pyx index 7593be7bf77f3..3605c0a75f2e7 100644 --- a/pandas/_libs/tslibs/timestamps.pyx +++ b/pandas/_libs/tslibs/timestamps.pyx @@ -8,6 +8,7 @@ shadows the python class, where we do any heavy lifting. 
""" import warnings + cimport cython import numpy as np @@ -60,11 +61,12 @@ from pandas._libs.tslibs.dtypes cimport ( periods_per_day, periods_per_second, ) -from pandas._libs.tslibs.util cimport ( - is_array, - is_datetime64_object, - is_integer_object, -) + + +cdef extern from "pandas/type.h": + bint is_array(object obj) + bint is_datetime64_object(object obj) + bint is_integer_object(object obj) from pandas._libs.tslibs.fields import ( RoundTo, diff --git a/pandas/_libs/tslibs/timezones.pyx b/pandas/_libs/tslibs/timezones.pyx index 6105f96a3b1b8..6be112c3f999a 100644 --- a/pandas/_libs/tslibs/timezones.pyx +++ b/pandas/_libs/tslibs/timezones.pyx @@ -37,10 +37,11 @@ from numpy cimport int64_t cnp.import_array() # ---------------------------------------------------------------------- -from pandas._libs.tslibs.util cimport ( - get_nat, - is_integer_object, -) +from pandas._libs.tslibs.util cimport get_nat + + +cdef extern from "pandas/type.h": + bint is_integer_object(object obj) cdef int64_t NPY_NAT = get_nat() diff --git a/pandas/_libs/tslibs/util.pxd b/pandas/_libs/tslibs/util.pxd index a28aace5d2f15..2d4a74194a55c 100644 --- a/pandas/_libs/tslibs/util.pxd +++ b/pandas/_libs/tslibs/util.pxd @@ -1,24 +1,4 @@ - -from cpython.object cimport PyTypeObject - - -cdef extern from *: - """ - PyObject* char_to_string(const char* data) { - return PyUnicode_FromString(data); - } - """ - object char_to_string(const char* data) - - cdef extern from "Python.h": - # Note: importing extern-style allows us to declare these as nogil - # functions, whereas `from cpython cimport` does not. - bint PyBool_Check(object obj) nogil - bint PyFloat_Check(object obj) nogil - bint PyComplex_Check(object obj) nogil - bint PyObject_TypeCheck(object obj, PyTypeObject* type) nogil - # Note that following functions can potentially raise an exception, # thus they cannot be declared 'nogil'. 
Also PyUnicode_AsUTF8AndSize() can # potentially allocate memory inside in unlikely case of when underlying @@ -30,23 +10,19 @@ cdef extern from "Python.h": object PyUnicode_DecodeLocale(const char *str, const char *errors) nogil -from numpy cimport ( - float64_t, - int64_t, -) - +from numpy cimport int64_t -cdef extern from "numpy/arrayobject.h": - PyTypeObject PyFloatingArrType_Type -cdef extern from "numpy/ndarrayobject.h": - PyTypeObject PyTimedeltaArrType_Type - PyTypeObject PyDatetimeArrType_Type - PyTypeObject PyComplexFloatingArrType_Type - PyTypeObject PyBoolArrType_Type - - bint PyArray_IsIntegerScalar(obj) nogil - bint PyArray_Check(obj) nogil +cdef extern from "pandas/type.h": + bint is_timedelta64_object(object obj) + bint is_integer_object(object obj) + bint is_float_object(object obj) + bint is_complex_object(object obj) + bint is_bool_object(object obj) + bint is_real_number_object(object obj) + bint is_datetime64_object(object obj) + bint is_array(object obj) + bint is_nan(object obj) cdef extern from "numpy/npy_common.h": int64_t NPY_MIN_INT64 @@ -59,145 +35,6 @@ cdef inline int64_t get_nat(): # -------------------------------------------------------------------- # Type Checking -cdef inline bint is_integer_object(object obj) nogil: - """ - Cython equivalent of - - `isinstance(val, (int, long, np.integer)) and not isinstance(val, bool)` - - Parameters - ---------- - val : object - - Returns - ------- - is_integer : bool - - Notes - ----- - This counts np.timedelta64 objects as integers. 
- """ - return (not PyBool_Check(obj) and PyArray_IsIntegerScalar(obj) - and not is_timedelta64_object(obj)) - - -cdef inline bint is_float_object(object obj) nogil: - """ - Cython equivalent of `isinstance(val, (float, np.complex_))` - - Parameters - ---------- - val : object - - Returns - ------- - is_float : bool - """ - return (PyFloat_Check(obj) or - (PyObject_TypeCheck(obj, &PyFloatingArrType_Type))) - - -cdef inline bint is_complex_object(object obj) nogil: - """ - Cython equivalent of `isinstance(val, (complex, np.complex_))` - - Parameters - ---------- - val : object - - Returns - ------- - is_complex : bool - """ - return (PyComplex_Check(obj) or - PyObject_TypeCheck(obj, &PyComplexFloatingArrType_Type)) - - -cdef inline bint is_bool_object(object obj) nogil: - """ - Cython equivalent of `isinstance(val, (bool, np.bool_))` - - Parameters - ---------- - val : object - - Returns - ------- - is_bool : bool - """ - return (PyBool_Check(obj) or - PyObject_TypeCheck(obj, &PyBoolArrType_Type)) - - -cdef inline bint is_real_number_object(object obj) nogil: - return is_bool_object(obj) or is_integer_object(obj) or is_float_object(obj) - - -cdef inline bint is_timedelta64_object(object obj) nogil: - """ - Cython equivalent of `isinstance(val, np.timedelta64)` - - Parameters - ---------- - val : object - - Returns - ------- - is_timedelta64 : bool - """ - return PyObject_TypeCheck(obj, &PyTimedeltaArrType_Type) - - -cdef inline bint is_datetime64_object(object obj) nogil: - """ - Cython equivalent of `isinstance(val, np.datetime64)` - - Parameters - ---------- - val : object - - Returns - ------- - is_datetime64 : bool - """ - return PyObject_TypeCheck(obj, &PyDatetimeArrType_Type) - - -cdef inline bint is_array(object val): - """ - Cython equivalent of `isinstance(val, np.ndarray)` - - Parameters - ---------- - val : object - - Returns - ------- - is_ndarray : bool - """ - return PyArray_Check(val) - - -cdef inline bint is_nan(object val): - """ - Check if val is a 
Not-A-Number float or complex, including - float('NaN') and np.nan. - - Parameters - ---------- - val : object - - Returns - ------- - is_nan : bool - """ - cdef float64_t fval - if is_float_object(val): - fval = val - return fval != fval - return is_complex_object(val) and val != val - - cdef inline const char* get_c_string_buf_and_size(str py_string, Py_ssize_t *length) except NULL: """ diff --git a/setup.py b/setup.py index f8fa048757289..bb36facd91637 100755 --- a/setup.py +++ b/setup.py @@ -442,11 +442,17 @@ def srcpath(name=None, suffix=".pyx", subdir="src"): "_libs.algos": { "pyxfile": "_libs/algos", "include": klib_include, - "depends": _pxi_dep["algos"], + "depends": _pxi_dep["algos"] + + [ + "pandas/_libs/src/pandas/type.h", + ], }, "_libs.arrays": {"pyxfile": "_libs/arrays"}, "_libs.groupby": {"pyxfile": "_libs/groupby"}, - "_libs.hashing": {"pyxfile": "_libs/hashing", "depends": []}, + "_libs.hashing": { + "pyxfile": "_libs/hashing", + "depends": ["pandas/_libs/src/pandas/type.h"], + }, "_libs.hashtable": { "pyxfile": "_libs/hashtable", "include": klib_include, @@ -458,23 +464,26 @@ def srcpath(name=None, suffix=".pyx", subdir="src"): "_libs.index": { "pyxfile": "_libs/index", "include": klib_include, - "depends": _pxi_dep["index"], + "depends": _pxi_dep["index"] + ["pandas/_libs/src/pandas/type.h"], }, "_libs.indexing": {"pyxfile": "_libs/indexing"}, "_libs.internals": {"pyxfile": "_libs/internals"}, "_libs.interval": { "pyxfile": "_libs/interval", "include": klib_include, - "depends": _pxi_dep["interval"], + "depends": _pxi_dep["interval"] + ["pandas/_libs/src/pandas/type.h"], }, "_libs.join": {"pyxfile": "_libs/join", "include": klib_include}, "_libs.lib": { "pyxfile": "_libs/lib", - "depends": lib_depends + tseries_depends, + "depends": lib_depends + tseries_depends + ["pandas/_libs/src/pandas/type.h"], "include": klib_include, # due to tokenizer import "sources": ["pandas/_libs/src/parser/tokenizer.c"], }, - "_libs.missing": {"pyxfile": 
"_libs/missing", "depends": tseries_depends}, + "_libs.missing": { + "pyxfile": "_libs/missing", + "depends": tseries_depends + ["pandas/_libs/src/pandas/type.h"], + }, "_libs.parsers": { "pyxfile": "_libs/parsers", "include": klib_include + ["pandas/_libs/src"], @@ -487,7 +496,10 @@ def srcpath(name=None, suffix=".pyx", subdir="src"): "pandas/_libs/src/parser/io.c", ], }, - "_libs.reduction": {"pyxfile": "_libs/reduction"}, + "_libs.reduction": { + "pyxfile": "_libs/reduction", + "depends": ["pandas/_libs/src/pandas/type.h"], + }, "_libs.ops": {"pyxfile": "_libs/ops"}, "_libs.ops_dispatch": {"pyxfile": "_libs/ops_dispatch"}, "_libs.properties": {"pyxfile": "_libs/properties"}, @@ -495,7 +507,7 @@ def srcpath(name=None, suffix=".pyx", subdir="src"): "_libs.sparse": {"pyxfile": "_libs/sparse", "depends": _pxi_dep["sparse"]}, "_libs.tslib": { "pyxfile": "_libs/tslib", - "depends": tseries_depends, + "depends": tseries_depends + ["pandas/_libs/src/pandas/type.h"], "sources": ["pandas/_libs/tslibs/src/datetime/np_datetime.c"], }, "_libs.tslibs.base": {"pyxfile": "_libs/tslibs/base"}, @@ -503,7 +515,7 @@ def srcpath(name=None, suffix=".pyx", subdir="src"): "_libs.tslibs.dtypes": {"pyxfile": "_libs/tslibs/dtypes"}, "_libs.tslibs.conversion": { "pyxfile": "_libs/tslibs/conversion", - "depends": tseries_depends, + "depends": tseries_depends + ["pandas/_libs/src/pandas/type.h"], "sources": ["pandas/_libs/tslibs/src/datetime/np_datetime.c"], }, "_libs.tslibs.fields": { @@ -514,7 +526,7 @@ def srcpath(name=None, suffix=".pyx", subdir="src"): "_libs.tslibs.nattype": {"pyxfile": "_libs/tslibs/nattype"}, "_libs.tslibs.np_datetime": { "pyxfile": "_libs/tslibs/np_datetime", - "depends": tseries_depends, + "depends": tseries_depends + ["pandas/_libs/src/pandas/type.h"], "sources": [ "pandas/_libs/tslibs/src/datetime/np_datetime.c", "pandas/_libs/tslibs/src/datetime/np_datetime_strings.c", @@ -522,7 +534,7 @@ def srcpath(name=None, suffix=".pyx", subdir="src"): }, 
"_libs.tslibs.offsets": { "pyxfile": "_libs/tslibs/offsets", - "depends": tseries_depends, + "depends": tseries_depends + ["pandas/_libs/src/pandas/type.h"], "sources": ["pandas/_libs/tslibs/src/datetime/np_datetime.c"], }, "_libs.tslibs.parsing": { @@ -533,25 +545,28 @@ def srcpath(name=None, suffix=".pyx", subdir="src"): }, "_libs.tslibs.period": { "pyxfile": "_libs/tslibs/period", - "depends": tseries_depends, + "depends": tseries_depends + ["pandas/_libs/src/pandas/type.h"], "sources": ["pandas/_libs/tslibs/src/datetime/np_datetime.c"], }, "_libs.tslibs.strptime": { "pyxfile": "_libs/tslibs/strptime", - "depends": tseries_depends, + "depends": tseries_depends + ["pandas/_libs/src/pandas/type.h"], "sources": ["pandas/_libs/tslibs/src/datetime/np_datetime.c"], }, "_libs.tslibs.timedeltas": { "pyxfile": "_libs/tslibs/timedeltas", - "depends": tseries_depends, + "depends": tseries_depends + ["pandas/_libs/src/pandas/type.h"], "sources": ["pandas/_libs/tslibs/src/datetime/np_datetime.c"], }, "_libs.tslibs.timestamps": { "pyxfile": "_libs/tslibs/timestamps", - "depends": tseries_depends, + "depends": tseries_depends + ["pandas/_libs/src/pandas/type.h"], "sources": ["pandas/_libs/tslibs/src/datetime/np_datetime.c"], }, - "_libs.tslibs.timezones": {"pyxfile": "_libs/tslibs/timezones"}, + "_libs.tslibs.timezones": { + "pyxfile": "_libs/tslibs/timezones", + "depends": ["pandas/_libs/src/pandas/type.h"], + }, "_libs.tslibs.tzconversion": { "pyxfile": "_libs/tslibs/tzconversion", "depends": tseries_depends, @@ -585,6 +600,7 @@ def srcpath(name=None, suffix=".pyx", subdir="src"): sources.extend(data.get("sources", [])) include = data.get("include", []) + include.append("pandas/_libs") include.append(numpy.get_include()) undef_macros = []
Came up when reviewing #50400 with @lithomas1. Doesn't directly solve that issue, but a lot of our util.pyx functions are in essence C functions to begin with. Because they are implemented in Cython, though, it's a bit difficult to use them in our non-Cython extensions like JSON. This takes the code out of the Cython file and ports it to a C header file. This is mostly a verbatim copy, with some special considerations for NA handling. Marking as draft because I'm not sure a header-only solution is better than a shared library; I need more time to mess around with the setup script. We may also just wait until we implement meson fully.
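For reference, the semantics the `type.h` helpers are meant to preserve can be sketched at the Python level. This is only an illustrative sketch, not the C implementation (which works on `PyTypeObject` checks such as `PyArray_IsIntegerScalar`); the function names mirror the header:

```python
import numpy as np

def is_integer_object(obj) -> bool:
    # bool is a subclass of int, and np.timedelta64 sits under
    # np.signedinteger in NumPy's scalar hierarchy, so both must
    # be excluded explicitly
    return (isinstance(obj, (int, np.integer))
            and not isinstance(obj, (bool, np.timedelta64)))

def is_float_object(obj) -> bool:
    return isinstance(obj, (float, np.floating))

def is_nan(obj) -> bool:
    # NaN is the only float/complex value that compares unequal to itself
    if isinstance(obj, (float, complex, np.floating, np.complexfloating)):
        return bool(obj != obj)
    return False
```

The self-inequality trick is also what the complex branch of `is_nan` in the header relies on, via `PyObject_RichCompareBool(obj, obj, Py_NE)`.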
https://api.github.com/repos/pandas-dev/pandas/pulls/50425
2022-12-24T01:01:53Z
2023-01-05T16:08:29Z
null
2023-04-12T20:15:54Z
PDEP-6: Ban upcasting in setitem-like operations
diff --git a/web/pandas/pdeps/0006-ban-upcasting.md b/web/pandas/pdeps/0006-ban-upcasting.md new file mode 100644 index 0000000000000..325c25313af53 --- /dev/null +++ b/web/pandas/pdeps/0006-ban-upcasting.md @@ -0,0 +1,234 @@ +# PDEP-6: Ban upcasting in setitem-like operations + +- Created: 23 December 2022 +- Status: Accepted +- Discussion: [#50402](https://github.com/pandas-dev/pandas/pull/50402) +- Author: [Marco Gorelli](https://github.com/MarcoGorelli) ([original issue](https://github.com/pandas-dev/pandas/issues/39584) by [Joris Van den Bossche](https://github.com/jorisvandenbossche)) +- Revision: 1 + +## Abstract + +The suggestion is that setitem-like operations would +not change a ``Series`` dtype (nor that of a ``DataFrame``'s column). + +Current behaviour: +```python +In [1]: ser = pd.Series([1, 2, 3], dtype='int64') + +In [2]: ser[2] = 'potage' + +In [3]: ser # dtype changed to 'object'! +Out[3]: +0 1 +1 2 +2 potage +dtype: object +``` + +Suggested behaviour: + +```python +In [1]: ser = pd.Series([1, 2, 3]) + +In [2]: ser[2] = 'potage' # raises! +--------------------------------------------------------------------------- +ValueError: Invalid value 'potage' for dtype int64 +``` + +## Motivation and Scope + +Currently, pandas is extremely flexible in handling different dtypes. +However, this can potentially hide bugs, break user expectations, and copy data +in what looks like it should be an inplace operation. + +An example of it hiding a bug is: +```python +In[9]: ser = pd.Series(pd.date_range("2000", periods=3)) + +In[10]: ser[2] = "2000-01-04" # works, is converted to datetime64 + +In[11]: ser[2] = "2000-01-04x" # typo - but pandas does not error, it upcasts to object +``` + +The scope of this PDEP is limited to setitem-like operations on Series (and DataFrame columns). 
+For example, starting with +```python +df = DataFrame({"a": [1, 2, np.nan], "b": [4, 5, 6]}) +ser = df["a"].copy() +``` +then the following would all raise: + +- setitem-like operations: + - ``ser.fillna('foo', inplace=True)``; + - ``ser.where(ser.isna(), 'foo', inplace=True)`` + - ``ser.fillna('foo', inplace=False)``; + - ``ser.where(ser.isna(), 'foo', inplace=False)`` +- setitem indexing operations (where ``indexer`` could be a slice, a mask, + a single value, a list or array of values, or any other allowed indexer): + - ``ser.iloc[indexer] = 'foo'`` + - ``ser.loc[indexer] = 'foo'`` + - ``df.iloc[indexer, 0] = 'foo'`` + - ``df.loc[indexer, 'a'] = 'foo'`` + - ``ser[indexer] = 'foo'`` + +It may be desirable to expand the top list to ``Series.replace`` and ``Series.update``, +but to keep the scope of the PDEP down, they are excluded for now. + +Examples of operations which would not raise are: +- ``ser.diff()``; +- ``pd.concat([ser, ser.astype(object)])``; +- ``ser.mean()``; +- ``ser[0] = 3``; # same dtype +- ``ser[0] = 3.``; # 3.0 is a 'round' float and so compatible with 'int64' dtype +- ``df['a'] = pd.date_range(datetime(2020, 1, 1), periods=3)``; +- ``df.index.intersection(ser.index)``. + +## Detailed description + +Concretely, the suggestion is: +- if a ``Series`` is of a given dtype, then a ``setitem``-like operation should not change its dtype; +- if a ``setitem``-like operation would previously have changed a ``Series``' dtype, it would now raise. + +For a start, this would involve: + +1. changing ``Block.setitem`` such that it does not have an ``except`` block in + + ```python + value = extract_array(value, extract_numpy=True) + try: + casted = np_can_hold_element(values.dtype, value) + except LossySetitemError: + # current dtype cannot store value, coerce to common dtype + nb = self.coerce_to_target_dtype(value) + return nb.setitem(indexer, value) + else: + ``` + +2. 
making a similar change in: + - ``Block.where``; + - ``Block.putmask``; + - ``EABackedBlock.setitem``; + - ``EABackedBlock.where``; + - ``EABackedBlock.putmask``; + +The above would already require several hundreds of tests to be adjusted. Note that once +implementation starts, the list of locations to change may turn out to be slightly +different. + +### Ban upcasting altogether, or just upcasting to ``object``? + +The trickiest part of this proposal concerns what to do when setting a float in an integer column: + +```python +In[1]: ser = pd.Series([1, 2, 3]) + +In [2]: ser +Out[2]: +0 1 +1 2 +2 3 +dtype: int64 + +In[3]: ser[0] = 1.5 # what should this do? +``` + +The current behaviour is to upcast to 'float64': +```python +In [4]: ser +Out[4]: +0 1.5 +1 2.0 +2 3.0 +dtype: float64 +``` + +This is not necessarily a sign of a bug, because the user might just be thinking of their ``Series`` as being +numeric (without much regard for ``int`` vs ``float``) - ``'int64'`` is just what pandas happened to infer +when constructing it. + +Possible options could be: +1. only accept round floats (e.g. ``1.0``) and raise on anything else (e.g. ``1.01``); +2. convert the float value to ``int`` before setting it (i.e. silently round all float values); +3. limit "banning upcasting" to when the upcasted dtype is ``object`` (i.e. preserve current behavior of upcasting the int64 Series to float64) . + +Let us compare with what other libraries do: +- ``numpy``: option 2 +- ``cudf``: option 2 +- ``polars``: option 2 +- ``R data.frame``: just upcasts (like pandas does now for non-nullable dtypes); +- ``pandas`` (nullable dtypes): option 1 +- ``datatable``: option 1 +- ``DataFrames.jl``: option 1 + +Option ``2`` would be a breaking behaviour change in pandas. Further, +if the objective of this PDEP is to prevent bugs, then this is also not desirable: +someone might set ``1.5`` and later be surprised to learn that they actually set ``1``. 
+ +There are several downsides to option ``3``: +- it would be inconsistent with the nullable dtypes' behaviour; +- it would also add complexity to the codebase and to tests; +- it would be hard to teach, as instead of being able to teach a simple rule, + there would be a rule with exceptions; +- there would be a risk of loss of precision and/or overflow; +- it opens the door to other exceptions, such as not upcasting ``'int8'`` to ``'int16'``. + +Option ``1`` is the maximally safe one in terms of protecting users from bugs, being +consistent with the current behaviour of nullable dtypes, and in being simple to teach. +Therefore, the option chosen by this PDEP is option 1. + +## Usage and Impact + +This would make pandas stricter, so there should not be any risk of introducing bugs. If anything, this would help prevent bugs. + +Unfortunately, it would also risk annoying users who might have been intentionally upcasting. + +Given that users could still get the current behaviour by first explicitly casting the Series +to float, it would be more beneficial to the community at large to err on the side +of strictness. + +## Out of scope + +Enlargement. For example: +```python +ser = pd.Series([1, 2, 3]) +ser[len(ser)] = 4.5 +``` +There is arguably a larger conversation to be had about whether that should be allowed +at all. To keep this proposal focused, it is intentionally excluded from the scope. + +## F.A.Q. + +**Q: What happens when setting ``1.0`` in an ``int8`` Series?** + +**A**: The current behaviour would be to insert ``1.0`` as ``1`` and keep the dtype + as ``int8``. So, this would not change. + +**Q: What happens when setting ``1_000_000.0`` in an ``int8`` Series?** + +**A**: The current behaviour would be to upcast to ``int32``. So under this PDEP, + it would instead raise. + +**Q: What happens when setting ``16.000000000000001`` in an ``int8`` Series?** + +**A**: As far as Python is concerned, ``16.000000000000001`` and ``16.0`` are the + same number. 
+ So, it would be inserted as ``16`` and the dtype would not change + (just like what happens now, there would be no change here). + +**Q: What if I want ``1.0000000001`` to be inserted as ``1.0`` in an ``'int8'`` Series?** + +**A**: You may want to define your own helper function, such as + ```python + >>> def maybe_convert_to_int(x: int | float, tolerance: float): + ...     if np.abs(x - round(x)) < tolerance: + ...         return round(x) + ...     return x + ``` + which you could adapt according to your needs. + +## Timeline + +Deprecate sometime in the 2.x releases (after 2.0.0 has already been released), and enforce in 3.0.0. + +### PDEP History + +- 23 December 2022: Initial draft
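The rule the PDEP proposes can be sketched outside pandas with a small NumPy helper. This is purely illustrative: the helper name and error message are invented for this example, and the actual implementation would go through ``Block.setitem`` and ``np_can_hold_element`` as quoted above. It shows the three cases the PDEP cares about: round floats are accepted into an integer column, while lossy values and incompatible types raise instead of upcasting.

```python
import numpy as np

def strict_setitem(arr: np.ndarray, idx, value):
    """Set arr[idx] = value, raising if the value cannot be stored
    losslessly in arr's dtype (a rough emulation of the PDEP-6 rule)."""
    # Round floats are allowed into integer dtypes (e.g. 3.0 -> int64);
    # anything else (e.g. 1.5) must raise instead of upcasting to float.
    if isinstance(value, float) and arr.dtype.kind in "iu":
        if value != int(value):
            raise ValueError(f"Invalid value {value!r} for dtype {arr.dtype}")
        value = int(value)
    try:
        casted = np.array(value, dtype=arr.dtype)
    except (ValueError, TypeError, OverflowError) as err:
        # e.g. setting 'potage' into an int64 array
        raise ValueError(f"Invalid value {value!r} for dtype {arr.dtype}") from err
    if casted != value:
        # the value wrapped around or lost precision during the cast
        raise ValueError(f"Invalid value {value!r} for dtype {arr.dtype}")
    arr[idx] = casted
```

Note how, unlike current pandas behaviour, the dtype of ``arr`` can never change: the only possible outcomes are a lossless in-place write or a ``ValueError``.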
Original discussion: #39584. Turning this into a PDEP as I think the `int -> float` case warrants some care
https://api.github.com/repos/pandas-dev/pandas/pulls/50424
2022-12-23T21:57:17Z
2023-04-21T14:47:08Z
2023-04-21T14:47:08Z
2023-04-21T14:47:09Z
Backport PR #50145 on branch 1.5.x (DOC restructure contributing environment guide)
diff --git a/doc/source/development/contributing_environment.rst b/doc/source/development/contributing_environment.rst index b79fe58c68e4b..942edd863a19a 100644 --- a/doc/source/development/contributing_environment.rst +++ b/doc/source/development/contributing_environment.rst @@ -15,24 +15,11 @@ locally before pushing your changes. It's recommended to also install the :ref:` .. contents:: Table of contents: :local: +Step 1: install a C compiler +---------------------------- -Option 1: creating an environment without Docker ------------------------------------------------- - -Installing a C compiler -~~~~~~~~~~~~~~~~~~~~~~~ - -pandas uses C extensions (mostly written using Cython) to speed up certain -operations. To install pandas from source, you need to compile these C -extensions, which means you need a C compiler. This process depends on which -platform you're using. - -If you have setup your environment using :ref:`mamba <contributing.mamba>`, the packages ``c-compiler`` -and ``cxx-compiler`` will install a fitting compiler for your platform that is -compatible with the remaining mamba packages. On Windows and macOS, you will -also need to install the SDKs as they have to be distributed separately. -These packages will automatically be installed by using the ``pandas`` -``environment.yml`` file. +How to do this will depend on your platform. If you choose to user ``Docker`` +in the next step, then you can skip this step. **Windows** @@ -48,6 +35,9 @@ You will need `Build Tools for Visual Studio 2022 Alternatively, you can install the necessary components on the commandline using `vs_BuildTools.exe <https://learn.microsoft.com/en-us/visualstudio/install/use-command-line-parameters-to-install-visual-studio?source=recommendations&view=vs-2022>`_ +Alternatively, you could use the `WSL <https://learn.microsoft.com/en-us/windows/wsl/install>`_ +and consult the ``Linux`` instructions below. 
+ **macOS** To use the :ref:`mamba <contributing.mamba>`-based compilers, you will need to install the @@ -71,38 +61,30 @@ which compilers (and versions) are installed on your system:: `GCC (GNU Compiler Collection) <https://gcc.gnu.org/>`_, is a widely used compiler, which supports C and a number of other languages. If GCC is listed -as an installed compiler nothing more is required. If no C compiler is -installed (or you wish to install a newer version) you can install a compiler -(GCC in the example code below) with:: +as an installed compiler nothing more is required. - # for recent Debian/Ubuntu: - sudo apt install build-essential - # for Red Had/RHEL/CentOS/Fedora - yum groupinstall "Development Tools" - -For other Linux distributions, consult your favorite search engine for -compiler installation instructions. +If no C compiler is installed, or you wish to upgrade, or you're using a different +Linux distribution, consult your favorite search engine for compiler installation/update +instructions. Let us know if you have any difficulties by opening an issue or reaching out on our contributor community :ref:`Slack <community.slack>`. -.. _contributing.mamba: - -Option 1a: using mamba (recommended) -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +Step 2: create an isolated environment +---------------------------------------- -Now create an isolated pandas development environment: +Before we begin, please: -* Install `mamba <https://mamba.readthedocs.io/en/latest/installation.html>`_ -* Make sure your mamba is up to date (``mamba update mamba``) * Make sure that you have :any:`cloned the repository <contributing.forking>` * ``cd`` to the pandas source directory -We'll now kick off a three-step process: +.. _contributing.mamba: + +Option 1: using mamba (recommended) +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -1. Install the build dependencies -2. Build and install pandas -3. 
Install the optional dependencies +* Install `mamba <https://mamba.readthedocs.io/en/latest/installation.html>`_ +* Make sure your mamba is up to date (``mamba update mamba``) .. code-block:: none @@ -110,28 +92,9 @@ We'll now kick off a three-step process: mamba env create --file environment.yml mamba activate pandas-dev - # Build and install pandas - python setup.py build_ext -j 4 - python -m pip install -e . --no-build-isolation --no-use-pep517 - -At this point you should be able to import pandas from your locally built version:: - - $ python - >>> import pandas - >>> print(pandas.__version__) # note: the exact output may differ - 1.5.0.dev0+1355.ge65a30e3eb.dirty - -This will create the new environment, and not touch any of your existing environments, -nor any existing Python installation. - -To return to your root environment:: - - mamba deactivate - -Option 1b: using pip -~~~~~~~~~~~~~~~~~~~~ +Option 2: using pip +~~~~~~~~~~~~~~~~~~~ -If you aren't using mamba for your development environment, follow these instructions. You'll need to have at least the :ref:`minimum Python version <install.version>` that pandas supports. You also need to have ``setuptools`` 51.0.0 or later to build pandas. @@ -150,10 +113,6 @@ You also need to have ``setuptools`` 51.0.0 or later to build pandas. # Install the build dependencies python -m pip install -r requirements-dev.txt - # Build and install pandas - python setup.py build_ext -j 4 - python -m pip install -e . --no-build-isolation --no-use-pep517 - **Unix**/**macOS with pyenv** Consult the docs for setting up pyenv `here <https://github.com/pyenv/pyenv>`__. @@ -162,7 +121,6 @@ Consult the docs for setting up pyenv `here <https://github.com/pyenv/pyenv>`__. # Create a virtual environment # Use an ENV_DIR of your choice. 
We'll use ~/Users/<yourname>/.pyenv/versions/pandas-dev - pyenv virtualenv <version> <name-to-give-it> # For instance: @@ -174,19 +132,15 @@ Consult the docs for setting up pyenv `here <https://github.com/pyenv/pyenv>`__. # Now install the build dependencies in the cloned pandas repo python -m pip install -r requirements-dev.txt - # Build and install pandas - python setup.py build_ext -j 4 - python -m pip install -e . --no-build-isolation --no-use-pep517 - **Windows** Below is a brief overview on how to set-up a virtual environment with Powershell under Windows. For details please refer to the `official virtualenv user guide <https://virtualenv.pypa.io/en/latest/user_guide.html#activators>`__. -Use an ENV_DIR of your choice. We'll use ~\\virtualenvs\\pandas-dev where -'~' is the folder pointed to by either $env:USERPROFILE (Powershell) or -%USERPROFILE% (cmd.exe) environment variable. Any parent directories +Use an ENV_DIR of your choice. We'll use ``~\\virtualenvs\\pandas-dev`` where +``~`` is the folder pointed to by either ``$env:USERPROFILE`` (Powershell) or +``%USERPROFILE%`` (cmd.exe) environment variable. Any parent directories should already exist. .. code-block:: powershell @@ -200,16 +154,10 @@ should already exist. # Install the build dependencies python -m pip install -r requirements-dev.txt - # Build and install pandas - python setup.py build_ext -j 4 - python -m pip install -e . --no-build-isolation --no-use-pep517 - -Option 2: creating an environment using Docker ----------------------------------------------- +Option 3: using Docker +~~~~~~~~~~~~~~~~~~~~~~ -Instead of manually setting up a development environment, you can use `Docker -<https://docs.docker.com/get-docker/>`_ to automatically create the environment with just several -commands. pandas provides a ``DockerFile`` in the root directory to build a Docker image +pandas provides a ``DockerFile`` in the root directory to build a Docker image with a full pandas development environment. 
**Docker Commands** @@ -226,13 +174,6 @@ Run Container:: # but if not alter ${PWD} to match your local repo path docker run -it --rm -v ${PWD}:/home/pandas pandas-dev -When inside the running container you can build and install pandas the same way as the other methods - -.. code-block:: bash - - python setup.py build_ext -j 4 - python -m pip install -e . --no-build-isolation --no-use-pep517 - *Even easier, you can integrate Docker with the following IDEs:* **Visual Studio Code** @@ -246,3 +187,26 @@ See https://code.visualstudio.com/docs/remote/containers for details. Enable Docker support and use the Services tool window to build and manage images as well as run and interact with containers. See https://www.jetbrains.com/help/pycharm/docker.html for details. + +Step 3: build and install pandas +-------------------------------- + +You can now run:: + + # Build and install pandas + python setup.py build_ext -j 4 + python -m pip install -e . --no-build-isolation --no-use-pep517 + +At this point you should be able to import pandas from your locally built version:: + + $ python + >>> import pandas + >>> print(pandas.__version__) # note: the exact output may differ + 2.0.0.dev0+880.g2b9e661fbb.dirty + +This will create the new environment, and not touch any of your existing environments, +nor any existing Python installation. + +.. note:: + You will need to repeat this step each time the C extensions change, for example + if you modified any file in ``pandas/_libs`` or if you did a fetch and merge from ``upstream/main``.
Backport PR #50145: DOC restructure contributing environment guide
https://api.github.com/repos/pandas-dev/pandas/pulls/50423
2022-12-23T21:27:56Z
2022-12-23T22:03:10Z
2022-12-23T22:03:10Z
2022-12-23T22:03:10Z
Backport PR #50341 on branch 1.5.x (CI: Use regular solver for docker build)
diff --git a/Dockerfile b/Dockerfile index c987461e8cbb8..7230dcab20f6e 100644 --- a/Dockerfile +++ b/Dockerfile @@ -8,6 +8,6 @@ RUN apt-get install -y build-essential RUN apt-get install -y libhdf5-dev RUN python -m pip install --upgrade pip -RUN python -m pip install --use-deprecated=legacy-resolver \ +RUN python -m pip install \ -r https://raw.githubusercontent.com/pandas-dev/pandas/main/requirements-dev.txt CMD ["/bin/bash"]
Backport PR #50341: CI: Use regular solver for docker build
https://api.github.com/repos/pandas-dev/pandas/pulls/50422
2022-12-23T21:26:53Z
2022-12-23T23:31:19Z
2022-12-23T23:31:19Z
2022-12-23T23:31:20Z
Backport PR #49981 on branch 1.5.x (Streamline docker usage)
diff --git a/.devcontainer.json b/.devcontainer.json index 8bea96aea29c1..7c5d009260c64 100644 --- a/.devcontainer.json +++ b/.devcontainer.json @@ -9,8 +9,7 @@ // You can edit these settings after create using File > Preferences > Settings > Remote. "settings": { "terminal.integrated.shell.linux": "/bin/bash", - "python.condaPath": "/opt/conda/bin/conda", - "python.pythonPath": "/opt/conda/bin/python", + "python.pythonPath": "/usr/local/bin/python", "python.formatting.provider": "black", "python.linting.enabled": true, "python.linting.flake8Enabled": true, diff --git a/.github/workflows/code-checks.yml b/.github/workflows/code-checks.yml index 167f62087f739..cb95b224ba677 100644 --- a/.github/workflows/code-checks.yml +++ b/.github/workflows/code-checks.yml @@ -152,7 +152,7 @@ jobs: run: docker build --pull --no-cache --tag pandas-dev-env . - name: Show environment - run: docker run -w /home/pandas pandas-dev-env mamba run -n pandas-dev python -c "import pandas as pd; print(pd.show_versions())" + run: docker run --rm pandas-dev-env python -c "import pandas as pd; print(pd.show_versions())" requirements-dev-text-installable: name: Test install requirements-dev.txt diff --git a/Dockerfile b/Dockerfile index 9de8695b24274..c987461e8cbb8 100644 --- a/Dockerfile +++ b/Dockerfile @@ -1,42 +1,13 @@ -FROM quay.io/condaforge/mambaforge +FROM python:3.10.8 +WORKDIR /home/pandas -# if you forked pandas, you can pass in your own GitHub username to use your fork -# i.e. 
gh_username=myname -ARG gh_username=pandas-dev -ARG pandas_home="/home/pandas" +RUN apt-get update && apt-get -y upgrade +RUN apt-get install -y build-essential -# Avoid warnings by switching to noninteractive -ENV DEBIAN_FRONTEND=noninteractive +# hdf5 needed for pytables installation +RUN apt-get install -y libhdf5-dev -# Configure apt and install packages -RUN apt-get update \ - && apt-get -y install --no-install-recommends apt-utils git tzdata dialog 2>&1 \ - # - # Configure timezone (fix for tests which try to read from "/etc/localtime") - && ln -fs /usr/share/zoneinfo/Etc/UTC /etc/localtime \ - && dpkg-reconfigure -f noninteractive tzdata \ - # - # cleanup - && apt-get autoremove -y \ - && apt-get clean -y \ - && rm -rf /var/lib/apt/lists/* - -# Switch back to dialog for any ad-hoc use of apt-get -ENV DEBIAN_FRONTEND=dialog - -# Clone pandas repo -RUN mkdir "$pandas_home" \ - && git clone "https://github.com/$gh_username/pandas.git" "$pandas_home" \ - && cd "$pandas_home" \ - && git remote add upstream "https://github.com/pandas-dev/pandas.git" \ - && git pull upstream main - -# Set up environment -RUN mamba env create -f "$pandas_home/environment.yml" - -# Build C extensions and pandas -SHELL ["mamba", "run", "--no-capture-output", "-n", "pandas-dev", "/bin/bash", "-c"] -RUN cd "$pandas_home" \ - && export \ - && python setup.py build_ext -j 4 \ - && python -m pip install --no-build-isolation -e . +RUN python -m pip install --upgrade pip +RUN python -m pip install --use-deprecated=legacy-resolver \ + -r https://raw.githubusercontent.com/pandas-dev/pandas/main/requirements-dev.txt +CMD ["/bin/bash"] diff --git a/doc/source/development/contributing_environment.rst b/doc/source/development/contributing_environment.rst index a29351195e3ed..b79fe58c68e4b 100644 --- a/doc/source/development/contributing_environment.rst +++ b/doc/source/development/contributing_environment.rst @@ -216,34 +216,22 @@ with a full pandas development environment. 
Build the Docker image:: - # Build the image pandas-yourname-env - docker build --tag pandas-yourname-env . - # Or build the image by passing your GitHub username to use your own fork - docker build --build-arg gh_username=yourname --tag pandas-yourname-env . + # Build the image + docker build -t pandas-dev . Run Container:: # Run a container and bind your local repo to the container - docker run -it -w /home/pandas --rm -v path-to-local-pandas-repo:/home/pandas pandas-yourname-env + # This command assumes you are running from your local repo + # but if not alter ${PWD} to match your local repo path + docker run -it --rm -v ${PWD}:/home/pandas pandas-dev -Then a ``pandas-dev`` virtual environment will be available with all the development dependencies. +When inside the running container you can build and install pandas the same way as the other methods -.. code-block:: shell - - root@... :/home/pandas# conda env list - # conda environments: - # - base * /opt/conda - pandas-dev /opt/conda/envs/pandas-dev - -.. note:: - If you bind your local repo for the first time, you have to build the C extensions afterwards. - Run the following command inside the container:: - - python setup.py build_ext -j 4 +.. code-block:: bash - You need to rebuild the C extensions anytime the Cython code in ``pandas/_libs`` changes. - This most frequently occurs when changing or merging branches. + python setup.py build_ext -j 4 + python -m pip install -e . --no-build-isolation --no-use-pep517 *Even easier, you can integrate Docker with the following IDEs:*
Backport PR #49981: Streamline docker usage
https://api.github.com/repos/pandas-dev/pandas/pulls/50421
2022-12-23T21:19:07Z
2022-12-23T21:26:16Z
2022-12-23T21:26:16Z
2022-12-23T21:26:16Z
Backport PR #49993 on branch 1.5.x (Add new mamba environment creation command)
diff --git a/doc/source/development/contributing_environment.rst b/doc/source/development/contributing_environment.rst index c018d036a7192..a29351195e3ed 100644 --- a/doc/source/development/contributing_environment.rst +++ b/doc/source/development/contributing_environment.rst @@ -107,7 +107,7 @@ We'll now kick off a three-step process: .. code-block:: none # Create and activate the build environment - mamba env create + mamba env create --file environment.yml mamba activate pandas-dev # Build and install pandas
Backport PR #49993: Add new mamba environment creation command
https://api.github.com/repos/pandas-dev/pandas/pulls/50420
2022-12-23T21:05:07Z
2022-12-23T21:05:29Z
2022-12-23T21:05:29Z
2022-12-23T21:28:40Z
Backport PR #50145 on branch 1.5.x (DOC restructure contributing environment guide)
diff --git a/doc/source/development/contributing_environment.rst b/doc/source/development/contributing_environment.rst index c018d036a7192..55efc40d2f9d3 100644 --- a/doc/source/development/contributing_environment.rst +++ b/doc/source/development/contributing_environment.rst @@ -15,24 +15,11 @@ locally before pushing your changes. It's recommended to also install the :ref:` .. contents:: Table of contents: :local: +Step 1: install a C compiler +---------------------------- -Option 1: creating an environment without Docker ------------------------------------------------- - -Installing a C compiler -~~~~~~~~~~~~~~~~~~~~~~~ - -pandas uses C extensions (mostly written using Cython) to speed up certain -operations. To install pandas from source, you need to compile these C -extensions, which means you need a C compiler. This process depends on which -platform you're using. - -If you have setup your environment using :ref:`mamba <contributing.mamba>`, the packages ``c-compiler`` -and ``cxx-compiler`` will install a fitting compiler for your platform that is -compatible with the remaining mamba packages. On Windows and macOS, you will -also need to install the SDKs as they have to be distributed separately. -These packages will automatically be installed by using the ``pandas`` -``environment.yml`` file. +How to do this will depend on your platform. If you choose to user ``Docker`` +in the next step, then you can skip this step. **Windows** @@ -48,6 +35,9 @@ You will need `Build Tools for Visual Studio 2022 Alternatively, you can install the necessary components on the commandline using `vs_BuildTools.exe <https://learn.microsoft.com/en-us/visualstudio/install/use-command-line-parameters-to-install-visual-studio?source=recommendations&view=vs-2022>`_ +Alternatively, you could use the `WSL <https://learn.microsoft.com/en-us/windows/wsl/install>`_ +and consult the ``Linux`` instructions below. 
+ **macOS** To use the :ref:`mamba <contributing.mamba>`-based compilers, you will need to install the @@ -71,38 +61,30 @@ which compilers (and versions) are installed on your system:: `GCC (GNU Compiler Collection) <https://gcc.gnu.org/>`_, is a widely used compiler, which supports C and a number of other languages. If GCC is listed -as an installed compiler nothing more is required. If no C compiler is -installed (or you wish to install a newer version) you can install a compiler -(GCC in the example code below) with:: +as an installed compiler nothing more is required. - # for recent Debian/Ubuntu: - sudo apt install build-essential - # for Red Had/RHEL/CentOS/Fedora - yum groupinstall "Development Tools" - -For other Linux distributions, consult your favorite search engine for -compiler installation instructions. +If no C compiler is installed, or you wish to upgrade, or you're using a different +Linux distribution, consult your favorite search engine for compiler installation/update +instructions. Let us know if you have any difficulties by opening an issue or reaching out on our contributor community :ref:`Slack <community.slack>`. -.. _contributing.mamba: - -Option 1a: using mamba (recommended) -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +Step 2: create an isolated environment +---------------------------------------- -Now create an isolated pandas development environment: +Before we begin, please: -* Install `mamba <https://mamba.readthedocs.io/en/latest/installation.html>`_ -* Make sure your mamba is up to date (``mamba update mamba``) * Make sure that you have :any:`cloned the repository <contributing.forking>` * ``cd`` to the pandas source directory -We'll now kick off a three-step process: +.. _contributing.mamba: -1. Install the build dependencies -2. Build and install pandas -3. 
Install the optional dependencies +Option 1: using mamba (recommended) +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +* Install `mamba <https://mamba.readthedocs.io/en/latest/installation.html>`_ +* Make sure your mamba is up to date (``mamba update mamba``) .. code-block:: none @@ -110,28 +92,9 @@ We'll now kick off a three-step process: mamba env create mamba activate pandas-dev - # Build and install pandas - python setup.py build_ext -j 4 - python -m pip install -e . --no-build-isolation --no-use-pep517 - -At this point you should be able to import pandas from your locally built version:: - - $ python - >>> import pandas - >>> print(pandas.__version__) # note: the exact output may differ - 1.5.0.dev0+1355.ge65a30e3eb.dirty +Option 2: using pip +~~~~~~~~~~~~~~~~~~~ -This will create the new environment, and not touch any of your existing environments, -nor any existing Python installation. - -To return to your root environment:: - - mamba deactivate - -Option 1b: using pip -~~~~~~~~~~~~~~~~~~~~ - -If you aren't using mamba for your development environment, follow these instructions. You'll need to have at least the :ref:`minimum Python version <install.version>` that pandas supports. You also need to have ``setuptools`` 51.0.0 or later to build pandas. @@ -150,10 +113,6 @@ You also need to have ``setuptools`` 51.0.0 or later to build pandas. # Install the build dependencies python -m pip install -r requirements-dev.txt - # Build and install pandas - python setup.py build_ext -j 4 - python -m pip install -e . --no-build-isolation --no-use-pep517 - **Unix**/**macOS with pyenv** Consult the docs for setting up pyenv `here <https://github.com/pyenv/pyenv>`__. @@ -162,7 +121,6 @@ Consult the docs for setting up pyenv `here <https://github.com/pyenv/pyenv>`__. # Create a virtual environment # Use an ENV_DIR of your choice. 
We'll use ~/Users/<yourname>/.pyenv/versions/pandas-dev - pyenv virtualenv <version> <name-to-give-it> # For instance: @@ -174,19 +132,15 @@ Consult the docs for setting up pyenv `here <https://github.com/pyenv/pyenv>`__. # Now install the build dependencies in the cloned pandas repo python -m pip install -r requirements-dev.txt - # Build and install pandas - python setup.py build_ext -j 4 - python -m pip install -e . --no-build-isolation --no-use-pep517 - **Windows** Below is a brief overview on how to set-up a virtual environment with Powershell under Windows. For details please refer to the `official virtualenv user guide <https://virtualenv.pypa.io/en/latest/user_guide.html#activators>`__. -Use an ENV_DIR of your choice. We'll use ~\\virtualenvs\\pandas-dev where -'~' is the folder pointed to by either $env:USERPROFILE (Powershell) or -%USERPROFILE% (cmd.exe) environment variable. Any parent directories +Use an ENV_DIR of your choice. We'll use ``~\\virtualenvs\\pandas-dev`` where +``~`` is the folder pointed to by either ``$env:USERPROFILE`` (Powershell) or +``%USERPROFILE%`` (cmd.exe) environment variable. Any parent directories should already exist. .. code-block:: powershell @@ -200,16 +154,10 @@ should already exist. # Install the build dependencies python -m pip install -r requirements-dev.txt - # Build and install pandas - python setup.py build_ext -j 4 - python -m pip install -e . --no-build-isolation --no-use-pep517 - -Option 2: creating an environment using Docker ----------------------------------------------- +Option 3: using Docker +~~~~~~~~~~~~~~~~~~~~~~ -Instead of manually setting up a development environment, you can use `Docker -<https://docs.docker.com/get-docker/>`_ to automatically create the environment with just several -commands. pandas provides a ``DockerFile`` in the root directory to build a Docker image +pandas provides a ``DockerFile`` in the root directory to build a Docker image with a full pandas development environment. 
**Docker Commands** @@ -258,3 +206,26 @@ See https://code.visualstudio.com/docs/remote/containers for details. Enable Docker support and use the Services tool window to build and manage images as well as run and interact with containers. See https://www.jetbrains.com/help/pycharm/docker.html for details. + +Step 3: build and install pandas +-------------------------------- + +You can now run:: + + # Build and install pandas + python setup.py build_ext -j 4 + python -m pip install -e . --no-build-isolation --no-use-pep517 + +At this point you should be able to import pandas from your locally built version:: + + $ python + >>> import pandas + >>> print(pandas.__version__) # note: the exact output may differ + 2.0.0.dev0+880.g2b9e661fbb.dirty + +This will create the new environment, and not touch any of your existing environments, +nor any existing Python installation. + +.. note:: + You will need to repeat this step each time the C extensions change, for example + if you modified any file in ``pandas/_libs`` or if you did a fetch and merge from ``upstream/main``.
- [ ] closes #xxxx (Replace xxxx with the GitHub issue number) - [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature - [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit). - [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions. - [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
https://api.github.com/repos/pandas-dev/pandas/pulls/50419
2022-12-23T21:00:15Z
2022-12-23T21:03:32Z
null
2022-12-23T21:03:33Z
Backport PR #50144 on branch 1.5.x (DOC: BT for VisualStudio2022)
diff --git a/doc/source/development/contributing_environment.rst b/doc/source/development/contributing_environment.rst index afa0d0306f1af..c018d036a7192 100644 --- a/doc/source/development/contributing_environment.rst +++ b/doc/source/development/contributing_environment.rst @@ -36,29 +36,17 @@ These packages will automatically be installed by using the ``pandas`` **Windows** -You will need `Build Tools for Visual Studio 2019 -<https://visualstudio.microsoft.com/downloads/>`_. +You will need `Build Tools for Visual Studio 2022 +<https://visualstudio.microsoft.com/downloads/#build-tools-for-visual-studio-2022>`_. -.. warning:: - You DO NOT need to install Visual Studio 2019. - You only need "Build Tools for Visual Studio 2019" found by - scrolling down to "All downloads" -> "Tools for Visual Studio 2019". - In the installer, select the "C++ build tools" workload. - -You can install the necessary components on the commandline using -`vs_buildtools.exe <https://download.visualstudio.microsoft.com/download/pr/9a26f37e-6001-429b-a5db-c5455b93953c/460d80ab276046de2455a4115cc4e2f1e6529c9e6cb99501844ecafd16c619c4/vs_BuildTools.exe>`_: - -.. code:: - - vs_buildtools.exe --quiet --wait --norestart --nocache ^ - --installPath C:\BuildTools ^ - --add "Microsoft.VisualStudio.Workload.VCTools;includeRecommended" ^ - --add Microsoft.VisualStudio.Component.VC.v141 ^ - --add Microsoft.VisualStudio.Component.VC.v141.x86.x64 ^ - --add Microsoft.VisualStudio.Component.Windows10SDK.17763 +.. note:: + You DO NOT need to install Visual Studio 2022. + You only need "Build Tools for Visual Studio 2022" found by + scrolling down to "All downloads" -> "Tools for Visual Studio". + In the installer, select the "Desktop development with C++" Workloads. -To setup the right paths on the commandline, call -``"C:\BuildTools\VC\Auxiliary\Build\vcvars64.bat" -vcvars_ver=14.16 10.0.17763.0``. 
+Alternatively, you can install the necessary components on the commandline using +`vs_BuildTools.exe <https://learn.microsoft.com/en-us/visualstudio/install/use-command-line-parameters-to-install-visual-studio?source=recommendations&view=vs-2022>`_ **macOS**
Backport PR #50144: DOC: BT for VisualStudio2022
https://api.github.com/repos/pandas-dev/pandas/pulls/50418
2022-12-23T19:28:29Z
2022-12-23T20:56:29Z
2022-12-23T20:56:29Z
2022-12-23T20:56:29Z
BUG: Period constructor raises instead of ignoring when passing a string with extra precision (pico, femto, etc.)
diff --git a/doc/source/whatsnew/v2.0.0.rst b/doc/source/whatsnew/v2.0.0.rst index 45e4fd9f0aabb..35ace840c2cde 100644 --- a/doc/source/whatsnew/v2.0.0.rst +++ b/doc/source/whatsnew/v2.0.0.rst @@ -993,7 +993,7 @@ Period ^^^^^^ - Bug in :meth:`Period.strftime` and :meth:`PeriodIndex.strftime`, raising ``UnicodeDecodeError`` when a locale-specific directive was passed (:issue:`46319`) - Bug in adding a :class:`Period` object to an array of :class:`DateOffset` objects incorrectly raising ``TypeError`` (:issue:`50162`) -- +- Bug in :class:`Period` where passing a string with finer resolution than nanosecond would result in a ``KeyError`` instead of dropping the extra precision (:issue:`50417`) Plotting ^^^^^^^^ diff --git a/pandas/_libs/tslibs/parsing.pyx b/pandas/_libs/tslibs/parsing.pyx index 0f6640a90d7ed..3de1345598437 100644 --- a/pandas/_libs/tslibs/parsing.pyx +++ b/pandas/_libs/tslibs/parsing.pyx @@ -393,7 +393,7 @@ cdef parse_datetime_string_with_reso( &out_tzoffset, False ) if not string_to_dts_failed: - if dts.ps != 0 or out_local: + if out_bestunit == NPY_DATETIMEUNIT.NPY_FR_ns or out_local: # TODO: the not-out_local case we could do without Timestamp; # avoid circular import from pandas import Timestamp @@ -402,6 +402,13 @@ cdef parse_datetime_string_with_reso( parsed = datetime( dts.year, dts.month, dts.day, dts.hour, dts.min, dts.sec, dts.us ) + # Match Timestamp and drop picoseconds, femtoseconds, attoseconds + # The new resolution will just be nano + # GH 50417 + if out_bestunit in {NPY_DATETIMEUNIT.NPY_FR_ps, + NPY_DATETIMEUNIT.NPY_FR_fs, + NPY_DATETIMEUNIT.NPY_FR_as}: + out_bestunit = NPY_DATETIMEUNIT.NPY_FR_ns reso = { NPY_DATETIMEUNIT.NPY_FR_Y: "year", NPY_DATETIMEUNIT.NPY_FR_M: "month", diff --git a/pandas/tests/scalar/period/test_period.py b/pandas/tests/scalar/period/test_period.py index 0636ecb023530..710a98ba6f005 100644 --- a/pandas/tests/scalar/period/test_period.py +++ b/pandas/tests/scalar/period/test_period.py @@ -545,6 +545,11 @@ def 
test_period_cons_combined(self): (".000000999", 999), (".123456789", 789), (".999999999", 999), + (".999999000", 0), + # Test femtoseconds, attoseconds, picoseconds are dropped like Timestamp + (".999999001123", 1), + (".999999001123456", 1), + (".999999001123456789", 1), ], ) def test_period_constructor_nanosecond(self, day, hour, sec_float, expected):
- [ ] xref #50149 (Replace xxxx with the GitHub issue number) - [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature - [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit). - [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions. - [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
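The fix above truncates fractional-second strings finer than nanoseconds (pico/femto/attoseconds) instead of raising. A minimal pure-Python sketch of that truncation rule (an illustration, not the Cython parsing code; the helper name is made up):

```python
def truncate_fractional_seconds(frac: str) -> int:
    """Truncate a fractional-second digit string to nanosecond precision,
    dropping pico/femto/attosecond digits the way Timestamp does.

    Returns the total nanoseconds within the second.
    """
    # Keep at most 9 digits (nanoseconds); right-pad shorter strings.
    digits = frac[:9].ljust(9, "0")
    return int(digits)

# ".999999001123456789" (attosecond precision) keeps only ".999999001",
# so the nanosecond component is 1 -- matching the test cases in the diff.
assert truncate_fractional_seconds("999999001123456789") == 999_999_001
assert truncate_fractional_seconds("999999001123456789") % 1000 == 1
# ".5" means half a second, i.e. 500,000,000 ns
assert truncate_fractional_seconds("5") == 500_000_000
```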
https://api.github.com/repos/pandas-dev/pandas/pulls/50417
2022-12-23T18:35:05Z
2023-01-16T19:33:33Z
2023-01-16T19:33:33Z
2023-01-16T19:35:39Z
CLN: Use exclusions in determination of _selected_obj
diff --git a/pandas/core/groupby/groupby.py b/pandas/core/groupby/groupby.py index 11e8769615470..bc13c500fdde8 100644 --- a/pandas/core/groupby/groupby.py +++ b/pandas/core/groupby/groupby.py @@ -1020,7 +1020,7 @@ def _set_group_selection(self) -> None: ): return - groupers = [g.name for g in grp.groupings if g.level is None and g.in_axis] + groupers = self.exclusions if len(groupers): # GH12839 clear selected obj cache when group selection changes
- [ ] closes #xxxx (Replace xxxx with the GitHub issue number) - [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature - [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit). - [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions. - [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature. Toward #46944. Currently we use whether a grouping is in-axis or not to determine if it should make it into `_selected_obj`. This is almost the same as using `exclusions`, except for a TimeGrouper. Here, the grouping data may arise from a column and hence should be excluded; however, the grouping data is not the column itself but rather derived from the column, so it is not technically in-axis. Such columns should not make it into `_selected_obj`, so we should prefer using `exclusions` directly rather than in-axis status. This does not change the behavior of Resampler because Resampler is a subclass of BaseGroupBy, which does not have the selection context.
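The TimeGrouper case described above can be sketched with toy stand-ins (this simplifies the real pandas internals; the `Grouping` dataclass and the sets below are illustrative, not the actual classes):

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class Grouping:
    name: str
    in_axis: bool
    level: Optional[str] = None


# A TimeGrouper keyed on a column: its data is *derived* from the column
# (resampled timestamps), so in_axis is False even though the column
# should be excluded from the selection.
groupings = [
    Grouping("A", in_axis=True),
    Grouping("date", in_axis=False),  # e.g. Grouper(key="date", freq="D")
]
exclusions = {"A", "date"}

# Old selection logic: misses "date" because it is not in-axis.
old = [g.name for g in groupings if g.level is None and g.in_axis]
# New logic in this PR: use the exclusions set directly.
new = sorted(exclusions)

assert old == ["A"]
assert new == ["A", "date"]
```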
https://api.github.com/repos/pandas-dev/pandas/pulls/50416
2022-12-23T18:26:30Z
2022-12-27T19:40:11Z
2022-12-27T19:40:11Z
2022-12-27T20:03:13Z
ENH: add lazy copy (CoW) mechanism to rename_axis
diff --git a/pandas/core/generic.py b/pandas/core/generic.py index c893e9ce3d9a9..e86427b8cd29d 100644 --- a/pandas/core/generic.py +++ b/pandas/core/generic.py @@ -1209,7 +1209,8 @@ class name axes, kwargs = self._construct_axes_from_arguments( (), kwargs, sentinel=lib.no_default ) - copy = kwargs.pop("copy", True) + copy: bool_t | None = kwargs.pop("copy", None) + inplace = kwargs.pop("inplace", False) axis = kwargs.pop("axis", 0) if axis is not None: @@ -1229,7 +1230,9 @@ class name is_list_like(mapper) and not is_dict_like(mapper) ) if non_mapper: - return self._set_axis_name(mapper, axis=axis, inplace=inplace) + return self._set_axis_name( + mapper, axis=axis, inplace=inplace, copy=copy + ) else: raise ValueError("Use `.rename` to alter labels with a mapper.") else: @@ -1248,13 +1251,15 @@ class name f = common.get_rename_function(v) curnames = self._get_axis(axis).names newnames = [f(name) for name in curnames] - result._set_axis_name(newnames, axis=axis, inplace=True) + result._set_axis_name(newnames, axis=axis, inplace=True, copy=copy) if not inplace: return result return None @final - def _set_axis_name(self, name, axis: Axis = 0, inplace: bool_t = False): + def _set_axis_name( + self, name, axis: Axis = 0, inplace: bool_t = False, copy: bool_t | None = True + ): """ Set the name(s) of the axis. @@ -1267,6 +1272,8 @@ def _set_axis_name(self, name, axis: Axis = 0, inplace: bool_t = False): and the value 1 or 'columns' specifies columns. inplace : bool, default False If `True`, do operation inplace and return None. + copy: + Whether to make a copy of the result. 
Returns ------- @@ -1308,7 +1315,7 @@ def _set_axis_name(self, name, axis: Axis = 0, inplace: bool_t = False): idx = self._get_axis(axis).set_names(name) inplace = validate_bool_kwarg(inplace, "inplace") - renamed = self if inplace else self.copy() + renamed = self if inplace else self.copy(deep=copy) if axis == 0: renamed.index = idx else: diff --git a/pandas/tests/copy_view/test_methods.py b/pandas/tests/copy_view/test_methods.py index 878f1d8089d33..6e49ac99c1dd1 100644 --- a/pandas/tests/copy_view/test_methods.py +++ b/pandas/tests/copy_view/test_methods.py @@ -3,6 +3,7 @@ from pandas import ( DataFrame, + Index, MultiIndex, Series, ) @@ -456,3 +457,21 @@ def test_series_set_axis(using_copy_on_write): ser2.iloc[0] = 0 assert not np.shares_memory(ser2, ser) tm.assert_series_equal(ser, ser_orig) + + +@pytest.mark.parametrize("copy_kwargs", [{"copy": True}, {}]) +@pytest.mark.parametrize("kwargs", [{"mapper": "test"}, {"index": "test"}]) +def test_rename_axis(using_copy_on_write, kwargs, copy_kwargs): + df = DataFrame({"a": [1, 2, 3, 4]}, index=Index([1, 2, 3, 4], name="a")) + df_orig = df.copy() + df2 = df.rename_axis(**kwargs, **copy_kwargs) + + if using_copy_on_write and not copy_kwargs: + assert np.shares_memory(get_array(df2, "a"), get_array(df, "a")) + else: + assert not np.shares_memory(get_array(df2, "a"), get_array(df, "a")) + + df2.iloc[0, 0] = 0 + if using_copy_on_write: + assert not np.shares_memory(get_array(df2, "a"), get_array(df, "a")) + tm.assert_frame_equal(df, df_orig)
- [ ] xref #49473 (Replace xxxx with the GitHub issue number) - [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature - [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit). - [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions. - [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
https://api.github.com/repos/pandas-dev/pandas/pulls/50415
2022-12-23T17:43:57Z
2023-01-02T15:33:54Z
2023-01-02T15:33:54Z
2023-01-03T21:47:04Z
BUG: Grouper specified by key not regarded as in-axis
diff --git a/doc/source/whatsnew/v2.0.0.rst b/doc/source/whatsnew/v2.0.0.rst index 9957ccb4fde50..72b2d9aeca614 100644 --- a/doc/source/whatsnew/v2.0.0.rst +++ b/doc/source/whatsnew/v2.0.0.rst @@ -932,6 +932,8 @@ Groupby/resample/rolling - Bug in :class:`.DataFrameGroupBy` and :class:`.SeriesGroupBy` with ``dropna=False`` would drop NA values when the grouper was categorical (:issue:`36327`) - Bug in :meth:`.SeriesGroupBy.nunique` would incorrectly raise when the grouper was an empty categorical and ``observed=True`` (:issue:`21334`) - Bug in :meth:`.SeriesGroupBy.nth` would raise when grouper contained NA values after subsetting from a :class:`DataFrameGroupBy` (:issue:`26454`) +- Bug in :meth:`DataFrame.groupby` would not include a :class:`.Grouper` specified by ``key`` in the result when ``as_index=False`` (:issue:`50413`) +- Reshaping ^^^^^^^^^ diff --git a/pandas/core/groupby/groupby.py b/pandas/core/groupby/groupby.py index 4e28ac1dcf4a9..729f34544e2bc 100644 --- a/pandas/core/groupby/groupby.py +++ b/pandas/core/groupby/groupby.py @@ -1014,10 +1014,10 @@ def _set_group_selection(self) -> None: """ # This is a no-op for SeriesGroupBy grp = self.grouper - if not ( - grp.groupings is not None - and self.obj.ndim > 1 - and self._group_selection is None + if ( + grp.groupings is None + or self.obj.ndim == 1 + or self._group_selection is not None ): return diff --git a/pandas/core/groupby/grouper.py b/pandas/core/groupby/grouper.py index d4818eabeac1b..e323877a512b0 100644 --- a/pandas/core/groupby/grouper.py +++ b/pandas/core/groupby/grouper.py @@ -906,7 +906,7 @@ def is_in_obj(gpr) -> bool: elif isinstance(gpr, Grouper) and gpr.key is not None: # Add key to exclusions exclusions.add(gpr.key) - in_axis = False + in_axis = True else: in_axis = False diff --git a/pandas/tests/groupby/test_grouping.py b/pandas/tests/groupby/test_grouping.py index 7c7b9b29d8709..e32a18afcba70 100644 --- a/pandas/tests/groupby/test_grouping.py +++ 
b/pandas/tests/groupby/test_grouping.py @@ -7,6 +7,7 @@ from pandas import ( CategoricalIndex, DataFrame, + Grouper, Index, MultiIndex, Series, @@ -168,7 +169,7 @@ def test_grouper_index_types(self, index): def test_grouper_multilevel_freq(self): # GH 7885 - # with level and freq specified in a pd.Grouper + # with level and freq specified in a Grouper from datetime import ( date, timedelta, @@ -182,20 +183,20 @@ def test_grouper_multilevel_freq(self): # Check string level expected = ( df.reset_index() - .groupby([pd.Grouper(key="foo", freq="W"), pd.Grouper(key="bar", freq="W")]) + .groupby([Grouper(key="foo", freq="W"), Grouper(key="bar", freq="W")]) .sum() ) # reset index changes columns dtype to object expected.columns = Index([0], dtype="int64") result = df.groupby( - [pd.Grouper(level="foo", freq="W"), pd.Grouper(level="bar", freq="W")] + [Grouper(level="foo", freq="W"), Grouper(level="bar", freq="W")] ).sum() tm.assert_frame_equal(result, expected) # Check integer level result = df.groupby( - [pd.Grouper(level=0, freq="W"), pd.Grouper(level=1, freq="W")] + [Grouper(level=0, freq="W"), Grouper(level=1, freq="W")] ).sum() tm.assert_frame_equal(result, expected) @@ -206,11 +207,11 @@ def test_grouper_creation_bug(self): g = df.groupby("A") expected = g.sum() - g = df.groupby(pd.Grouper(key="A")) + g = df.groupby(Grouper(key="A")) result = g.sum() tm.assert_frame_equal(result, expected) - g = df.groupby(pd.Grouper(key="A", axis=0)) + g = df.groupby(Grouper(key="A", axis=0)) result = g.sum() tm.assert_frame_equal(result, expected) @@ -220,13 +221,13 @@ def test_grouper_creation_bug(self): tm.assert_frame_equal(result, expected) # GH14334 - # pd.Grouper(key=...) may be passed in a list + # Grouper(key=...) 
may be passed in a list df = DataFrame( {"A": [0, 0, 0, 1, 1, 1], "B": [1, 1, 2, 2, 3, 3], "C": [1, 2, 3, 4, 5, 6]} ) # Group by single column expected = df.groupby("A").sum() - g = df.groupby([pd.Grouper(key="A")]) + g = df.groupby([Grouper(key="A")]) result = g.sum() tm.assert_frame_equal(result, expected) @@ -235,17 +236,17 @@ def test_grouper_creation_bug(self): expected = df.groupby(["A", "B"]).sum() # Group with two Grouper objects - g = df.groupby([pd.Grouper(key="A"), pd.Grouper(key="B")]) + g = df.groupby([Grouper(key="A"), Grouper(key="B")]) result = g.sum() tm.assert_frame_equal(result, expected) # Group with a string and a Grouper object - g = df.groupby(["A", pd.Grouper(key="B")]) + g = df.groupby(["A", Grouper(key="B")]) result = g.sum() tm.assert_frame_equal(result, expected) # Group with a Grouper object and a string - g = df.groupby([pd.Grouper(key="A"), "B"]) + g = df.groupby([Grouper(key="A"), "B"]) result = g.sum() tm.assert_frame_equal(result, expected) @@ -257,7 +258,7 @@ def test_grouper_creation_bug(self): names=["one", "two", "three"], ), ) - result = s.groupby(pd.Grouper(level="three", freq="M")).sum() + result = s.groupby(Grouper(level="three", freq="M")).sum() expected = Series( [28], index=pd.DatetimeIndex([Timestamp("2013-01-31")], freq="M", name="three"), @@ -265,7 +266,7 @@ def test_grouper_creation_bug(self): tm.assert_series_equal(result, expected) # just specifying a level breaks - result = s.groupby(pd.Grouper(level="one")).sum() + result = s.groupby(Grouper(level="one")).sum() expected = s.groupby(level="one").sum() tm.assert_series_equal(result, expected) @@ -282,18 +283,14 @@ def test_grouper_column_and_index(self): {"A": np.arange(6), "B": ["one", "one", "two", "two", "one", "one"]}, index=idx, ) - result = df_multi.groupby(["B", pd.Grouper(level="inner")]).mean( - numeric_only=True - ) + result = df_multi.groupby(["B", Grouper(level="inner")]).mean(numeric_only=True) expected = ( df_multi.reset_index().groupby(["B", 
"inner"]).mean(numeric_only=True) ) tm.assert_frame_equal(result, expected) # Test the reverse grouping order - result = df_multi.groupby([pd.Grouper(level="inner"), "B"]).mean( - numeric_only=True - ) + result = df_multi.groupby([Grouper(level="inner"), "B"]).mean(numeric_only=True) expected = ( df_multi.reset_index().groupby(["inner", "B"]).mean(numeric_only=True) ) @@ -302,7 +299,7 @@ def test_grouper_column_and_index(self): # Grouping a single-index frame by a column and the index should # be equivalent to resetting the index and grouping by two columns df_single = df_multi.reset_index("outer") - result = df_single.groupby(["B", pd.Grouper(level="inner")]).mean( + result = df_single.groupby(["B", Grouper(level="inner")]).mean( numeric_only=True ) expected = ( @@ -311,7 +308,7 @@ def test_grouper_column_and_index(self): tm.assert_frame_equal(result, expected) # Test the reverse grouping order - result = df_single.groupby([pd.Grouper(level="inner"), "B"]).mean( + result = df_single.groupby([Grouper(level="inner"), "B"]).mean( numeric_only=True ) expected = ( @@ -368,7 +365,7 @@ def test_grouper_getting_correct_binner(self): ), ) result = df.groupby( - [pd.Grouper(level="one"), pd.Grouper(level="two", freq="M")] + [Grouper(level="one"), Grouper(level="two", freq="M")] ).sum() expected = DataFrame( {"A": [31, 28, 21, 31, 28, 21]}, @@ -646,7 +643,7 @@ def test_list_grouper_with_nat(self): # GH 14715 df = DataFrame({"date": date_range("1/1/2011", periods=365, freq="D")}) df.iloc[-1] = pd.NaT - grouper = pd.Grouper(key="date", freq="AS") + grouper = Grouper(key="date", freq="AS") # Grouper in a list grouping result = df.groupby([grouper]) @@ -847,7 +844,7 @@ def test_groupby_with_empty(self): index = pd.DatetimeIndex(()) data = () series = Series(data, index, dtype=object) - grouper = pd.Grouper(freq="D") + grouper = Grouper(freq="D") grouped = series.groupby(grouper) assert next(iter(grouped), None) is None @@ -982,7 +979,7 @@ def test_groupby_with_small_elem(self): 
{"event": ["start", "start"], "change": [1234, 5678]}, index=pd.DatetimeIndex(["2014-09-10", "2013-10-10"]), ) - grouped = df.groupby([pd.Grouper(freq="M"), "event"]) + grouped = df.groupby([Grouper(freq="M"), "event"]) assert len(grouped.groups) == 2 assert grouped.ngroups == 2 assert (Timestamp("2014-09-30"), "start") in grouped.groups @@ -997,7 +994,7 @@ def test_groupby_with_small_elem(self): {"event": ["start", "start", "start"], "change": [1234, 5678, 9123]}, index=pd.DatetimeIndex(["2014-09-10", "2013-10-10", "2014-09-15"]), ) - grouped = df.groupby([pd.Grouper(freq="M"), "event"]) + grouped = df.groupby([Grouper(freq="M"), "event"]) assert len(grouped.groups) == 2 assert grouped.ngroups == 2 assert (Timestamp("2014-09-30"), "start") in grouped.groups @@ -1013,7 +1010,7 @@ def test_groupby_with_small_elem(self): {"event": ["start", "start", "start"], "change": [1234, 5678, 9123]}, index=pd.DatetimeIndex(["2014-09-10", "2013-10-10", "2014-08-05"]), ) - grouped = df.groupby([pd.Grouper(freq="M"), "event"]) + grouped = df.groupby([Grouper(freq="M"), "event"]) assert len(grouped.groups) == 3 assert grouped.ngroups == 3 assert (Timestamp("2014-09-30"), "start") in grouped.groups @@ -1036,3 +1033,17 @@ def test_grouping_string_repr(self): result = gr.grouper.groupings[0].__repr__() expected = "Grouping(('A', 'a'))" assert result == expected + + +def test_grouping_by_key_is_in_axis(): + # GH#50413 - Groupers specified by key are in-axis + df = DataFrame({"a": [1, 1, 2], "b": [1, 1, 2], "c": [3, 4, 5]}).set_index("a") + gb = df.groupby([Grouper(level="a"), Grouper(key="b")], as_index=False) + assert not gb.grouper.groupings[0].in_axis + assert gb.grouper.groupings[1].in_axis + + # Currently only in-axis groupings are including in the result when as_index=False; + # This is likely to change in the future. + result = gb.sum() + expected = DataFrame({"b": [1, 2], "c": [7, 5]}) + tm.assert_frame_equal(result, expected)
- [x] closes #50413 (Replace xxxx with the GitHub issue number) - [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature - [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit). - [x] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions. - [x] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature. Found this working on #46944. It was a case where we add to exclusions but consider the grouping as not in-axis. This creates a subtle difference in `_obj_with_exclusions` vs `_selected_obj`.
https://api.github.com/repos/pandas-dev/pandas/pulls/50414
2022-12-23T16:34:14Z
2022-12-28T18:07:50Z
2022-12-28T18:07:50Z
2022-12-28T18:18:03Z
STYLE: prohibit parsing exception messages
diff --git a/doc/source/reference/testing.rst b/doc/source/reference/testing.rst index 07624e87d82e0..be996540cf0ec 100644 --- a/doc/source/reference/testing.rst +++ b/doc/source/reference/testing.rst @@ -32,11 +32,15 @@ Exceptions and warnings errors.CSSWarning errors.DatabaseError errors.DataError + errors.DataOneDimensionalError + errors.DiffArrayLengthError + errors.DotMismatchShapeError errors.DtypeWarning errors.DuplicateLabelError errors.EmptyDataError errors.IncompatibilityWarning errors.IndexingError + errors.IndexSpecifiedError errors.InvalidColumnName errors.InvalidComparison errors.InvalidIndexError @@ -44,7 +48,10 @@ Exceptions and warnings errors.IntCastingNaNError errors.LossySetitemError errors.MergeError + errors.NATypeError errors.NoBufferPresent + errors.NoObjectConcatenateError + errors.NotObjectError errors.NullFrequencyError errors.NumbaUtilError errors.NumExprClobberingError @@ -61,6 +68,7 @@ Exceptions and warnings errors.SettingWithCopyError errors.SettingWithCopyWarning errors.SpecificationError + errors.SupplyTzDetypeError errors.UndefinedVariableError errors.UnsortedIndexError errors.UnsupportedFunctionCall diff --git a/pandas/_libs/missing.pyx b/pandas/_libs/missing.pyx index a3b0451381ad2..7bb8c2ce611a2 100644 --- a/pandas/_libs/missing.pyx +++ b/pandas/_libs/missing.pyx @@ -4,6 +4,7 @@ from sys import maxsize cimport cython from cython cimport Py_ssize_t + import numpy as np cimport numpy as cnp @@ -30,6 +31,7 @@ from pandas._libs.tslibs.np_datetime cimport ( ) from pandas._libs.ops_dispatch import maybe_dispatch_ufunc_to_dunder_op +from pandas.errors import NATypeError cdef: float64_t INF = <float64_t>np.inf @@ -410,7 +412,7 @@ class NAType(C_NAType): return self.__repr__() def __bool__(self): - raise TypeError("boolean value of NA is ambiguous") + raise NATypeError("boolean value of NA is ambiguous") def __hash__(self): # GH 30013: Ensure hash is large enough to avoid hash collisions with integers diff --git 
a/pandas/_libs/reduction.pyx b/pandas/_libs/reduction.pyx index 7ff0842678d7f..abbf5d38ade2e 100644 --- a/pandas/_libs/reduction.pyx +++ b/pandas/_libs/reduction.pyx @@ -6,6 +6,8 @@ cnp.import_array() from pandas._libs.util cimport is_array +from pandas.errors import NotObjectError + cdef cnp.dtype _dtype_obj = np.dtype("object") @@ -18,8 +20,7 @@ cpdef check_result_array(object obj, object dtype): if dtype != _dtype_obj: # If it is object dtype, the function can be a reduction/aggregation # and still return an ndarray e.g. test_agg_over_numpy_arrays - raise ValueError("Must produce aggregated value") - + raise NotObjectError("Must produce aggregated value") cpdef inline extract_result(object res): """ extract the result object, it might be a 0-dim ndarray diff --git a/pandas/core/apply.py b/pandas/core/apply.py index 722de91ba5246..8fd0b6de21248 100644 --- a/pandas/core/apply.py +++ b/pandas/core/apply.py @@ -33,7 +33,10 @@ NDFrameT, npt, ) -from pandas.errors import SpecificationError +from pandas.errors import ( + DiffArrayLengthError, + SpecificationError, +) from pandas.util._decorators import cache_readonly from pandas.core.dtypes.cast import is_nested_object @@ -864,15 +867,14 @@ def wrap_results_for_axis( try: result = self.obj._constructor(data=results) - except ValueError as err: - if "All arrays must be of the same length" in str(err): - # e.g. result = [[2, 3], [1.5], ['foo', 'bar']] - # see test_agg_listlike_result GH#29587 - res = self.obj._constructor_sliced(results) - res.index = res_index - return res - else: - raise + except DiffArrayLengthError: + # e.g. 
result = [[2, 3], [1.5], ['foo', 'bar']] + # see test_agg_listlike_result GH#29587 + res = self.obj._constructor_sliced(results) + res.index = res_index + return res + except ValueError: + raise if not isinstance(results[0], ABCSeries): if len(result.index) == len(self.res_columns): diff --git a/pandas/core/arrays/datetimes.py b/pandas/core/arrays/datetimes.py index 01584a66f424b..8327168f5c573 100644 --- a/pandas/core/arrays/datetimes.py +++ b/pandas/core/arrays/datetimes.py @@ -50,7 +50,10 @@ TimeNonexistent, npt, ) -from pandas.errors import PerformanceWarning +from pandas.errors import ( + PerformanceWarning, + SupplyTzDetypeError, +) from pandas.util._exceptions import find_stack_level from pandas.util._validators import validate_inclusive @@ -2367,9 +2370,8 @@ def _validate_tz_from_dtype( # We also need to check for the case where the user passed a # tz-naive dtype (i.e. datetime64[ns]) if tz is not None and not timezones.tz_compare(tz, dtz): - raise ValueError( - "cannot supply both a tz and a " - "timezone-naive dtype (i.e. 
datetime64[ns])" + raise SupplyTzDetypeError( + "cannot supply both a tz and a timezone-naive dtype" ) return tz diff --git a/pandas/core/construction.py b/pandas/core/construction.py index 5a80fdb6d9e0e..70cee7bf8b069 100644 --- a/pandas/core/construction.py +++ b/pandas/core/construction.py @@ -27,6 +27,10 @@ DtypeObj, T, ) +from pandas.errors import ( + DataOneDimensionalError, + IndexSpecifiedError, +) from pandas.core.dtypes.base import ( ExtensionDtype, @@ -540,7 +544,9 @@ def sanitize_array( if not is_list_like(data): if index is None: - raise ValueError("index must be specified when data is not list-like") + raise IndexSpecifiedError( + "index must be specified when data is not list-like" + ) data = construct_1d_arraylike_from_scalar(data, len(index), dtype) return data @@ -666,7 +672,7 @@ def _sanitize_ndim( if isinstance(data, np.ndarray): if allow_2d: return result - raise ValueError("Data must be 1-dimensional") + raise DataOneDimensionalError("Data must be 1-dimensional") if is_object_dtype(dtype) and isinstance(dtype, ExtensionDtype): # i.e. PandasDtype("O") diff --git a/pandas/core/dtypes/cast.py b/pandas/core/dtypes/cast.py index f3ce104aa4a3e..22e9e27f12010 100644 --- a/pandas/core/dtypes/cast.py +++ b/pandas/core/dtypes/cast.py @@ -41,6 +41,7 @@ from pandas.errors import ( IntCastingNaNError, LossySetitemError, + SupplyTzDetypeError, ) from pandas.core.dtypes.common import ( @@ -1177,14 +1178,15 @@ def maybe_cast_to_datetime( else: try: dta = DatetimeArray._from_sequence(value, dtype=dtype) - except ValueError as err: - # We can give a Series-specific exception message. - if "cannot supply both a tz and a timezone-naive dtype" in str(err): - raise ValueError( - "Cannot convert timezone-aware data to " - "timezone-naive dtype. Use " - "pd.Series(values).dt.tz_localize(None) instead." - ) from err + + except SupplyTzDetypeError as err: + raise ValueError( + "Cannot convert timezone-aware data to " + "timezone-naive dtype. 
Use " + "pd.Series(values).dt.tz_localize(None) instead." + ) from err + + except ValueError: raise return dta diff --git a/pandas/core/dtypes/missing.py b/pandas/core/dtypes/missing.py index 000b5ebbdd2f7..15db8cee92e49 100644 --- a/pandas/core/dtypes/missing.py +++ b/pandas/core/dtypes/missing.py @@ -20,6 +20,7 @@ NaT, iNaT, ) +from pandas.errors import NATypeError from pandas.core.dtypes.common import ( DT64NS_DTYPE, @@ -580,9 +581,9 @@ def _array_equivalent_object(left: np.ndarray, right: np.ndarray, strict_nan: bo try: if np.any(np.asarray(left_value != right_value)): return False - except TypeError as err: - if "boolean value of NA is ambiguous" in str(err): - return False + except NATypeError: + return False + except TypeError: raise return True diff --git a/pandas/core/frame.py b/pandas/core/frame.py index 6491081c54592..caf84eac594ee 100644 --- a/pandas/core/frame.py +++ b/pandas/core/frame.py @@ -92,7 +92,10 @@ function as nv, np_percentile_argname, ) -from pandas.errors import InvalidIndexError +from pandas.errors import ( + DotMismatchShapeError, + InvalidIndexError, +) from pandas.util._decorators import ( Appender, Substitution, @@ -1570,7 +1573,7 @@ def dot(self, other: AnyArrayLike | DataFrame) -> DataFrame | Series: lvals = self.values rvals = np.asarray(other) if lvals.shape[1] != rvals.shape[0]: - raise ValueError( + raise DotMismatchShapeError( f"Dot product shape mismatch, {lvals.shape} vs {rvals.shape}" ) @@ -1609,12 +1612,12 @@ def __rmatmul__(self, other) -> DataFrame: """ try: return self.T.dot(np.transpose(other)).T - except ValueError as err: - if "shape mismatch" not in str(err): - raise + except DotMismatchShapeError as err: # GH#21581 give exception message for original shapes msg = f"shapes {np.shape(other)} and {self.shape} not aligned" raise ValueError(msg) from err + except ValueError: + raise # ---------------------------------------------------------------------- # IO methods (to / from other formats) diff --git 
a/pandas/core/groupby/generic.py b/pandas/core/groupby/generic.py index 8cd1a2543e23a..b5e4efe48b27e 100644 --- a/pandas/core/groupby/generic.py +++ b/pandas/core/groupby/generic.py @@ -44,7 +44,10 @@ SingleManager, TakeIndexer, ) -from pandas.errors import SpecificationError +from pandas.errors import ( + NoObjectConcatenateError, + SpecificationError, +) from pandas.util._decorators import ( Appender, Substitution, @@ -1177,12 +1180,10 @@ def aggregate(self, func=None, *args, engine=None, engine_kwargs=None, **kwargs) gba = GroupByApply(self, [func], args=(), kwargs={}) try: result = gba.agg() - - except ValueError as err: - if "No objects to concatenate" not in str(err): - raise + except NoObjectConcatenateError: result = self._aggregate_frame(func) - + except ValueError: + raise else: sobj = self._selected_obj diff --git a/pandas/core/indexes/base.py b/pandas/core/indexes/base.py index 775d137523d2b..4fb2ff15c0db1 100644 --- a/pandas/core/indexes/base.py +++ b/pandas/core/indexes/base.py @@ -60,7 +60,9 @@ ) from pandas.compat.numpy import function as nv from pandas.errors import ( + DataOneDimensionalError, DuplicateLabelError, + IndexSpecifiedError, InvalidIndexError, ) from pandas.util._decorators import ( @@ -500,11 +502,11 @@ def __new__( try: arr = sanitize_array(data, None, dtype=dtype, copy=copy) - except ValueError as err: - if "index must be specified when data is not list-like" in str(err): - raise cls._raise_scalar_data_error(data) from err - if "Data must be 1-dimensional" in str(err): - raise ValueError("Index data must be 1-dimensional") from err + except IndexSpecifiedError as err: + raise cls._raise_scalar_data_error(data) from err + except DataOneDimensionalError as err: + raise ValueError("Index data must be 1-dimensional") from err + except ValueError: raise arr = ensure_wrapped_if_datetimelike(arr) diff --git a/pandas/core/internals/construction.py b/pandas/core/internals/construction.py index 9bdfd7991689b..a9993971c5c04 100644 --- 
a/pandas/core/internals/construction.py +++ b/pandas/core/internals/construction.py @@ -21,6 +21,7 @@ Manager, npt, ) +from pandas.errors import DiffArrayLengthError from pandas.core.dtypes.cast import ( construct_1d_arraylike_from_scalar, @@ -623,7 +624,7 @@ def _extract_index(data) -> Index: if have_raw_arrays: lengths = list(set(raw_lengths)) if len(lengths) > 1: - raise ValueError("All arrays must be of the same length") + raise DiffArrayLengthError("All arrays must be of the same length") if have_dicts: raise ValueError( diff --git a/pandas/core/resample.py b/pandas/core/resample.py index d6cba824767b5..29ecca5b67aef 100644 --- a/pandas/core/resample.py +++ b/pandas/core/resample.py @@ -42,6 +42,7 @@ from pandas.errors import ( AbstractMethodError, DataError, + NotObjectError, ) from pandas.util._decorators import ( Appender, @@ -459,18 +460,14 @@ def _groupby_and_aggregate(self, how, *args, **kwargs): # on Series, raising AttributeError or KeyError # (depending on whether the column lookup uses getattr/__getitem__) result = grouped.apply(how, *args, **kwargs) + except NotObjectError: + # we have a non-reducing function; try to evaluate - except ValueError as err: - if "Must produce aggregated value" in str(err): - # raised in _aggregate_named - # see test_apply_without_aggregation, test_apply_with_mutated_index - pass - else: - raise - - # we have a non-reducing function - # try to evaluate + # raised in _aggregate_named + # see test_apply_without_aggregation, test_apply_with_mutated_index result = grouped.apply(how, *args, **kwargs) + except ValueError: + raise return self._wrap_result(result) diff --git a/pandas/core/reshape/concat.py b/pandas/core/reshape/concat.py index aced5a73a1f02..c4bfd18f9eb81 100644 --- a/pandas/core/reshape/concat.py +++ b/pandas/core/reshape/concat.py @@ -22,6 +22,7 @@ AxisInt, HashableT, ) +from pandas.errors import NoObjectConcatenateError from pandas.util._decorators import cache_readonly from pandas.core.dtypes.concat import 
concat_compat @@ -420,7 +421,7 @@ def __init__( objs = list(objs) if len(objs) == 0: - raise ValueError("No objects to concatenate") + raise NoObjectConcatenateError("No objects to concatenate") if keys is None: objs = list(com.not_none(*objs)) diff --git a/pandas/core/series.py b/pandas/core/series.py index b5d73373f061e..5c72a80beb0a0 100644 --- a/pandas/core/series.py +++ b/pandas/core/series.py @@ -62,7 +62,10 @@ npt, ) from pandas.compat.numpy import function as nv -from pandas.errors import InvalidIndexError +from pandas.errors import ( + DotMismatchShapeError, + InvalidIndexError, +) from pandas.util._decorators import ( Appender, Substitution, @@ -2880,7 +2883,7 @@ def dot(self, other: AnyArrayLike) -> Series | np.ndarray: lvals = self.values rvals = np.asarray(other) if lvals.shape[0] != rvals.shape[0]: - raise Exception( + raise DotMismatchShapeError( f"Dot product shape mismatch, {lvals.shape} vs {rvals.shape}" ) diff --git a/pandas/errors/__init__.py b/pandas/errors/__init__.py index 89ac1c10254cb..4d319deaf3dae 100644 --- a/pandas/errors/__init__.py +++ b/pandas/errors/__init__.py @@ -555,6 +555,72 @@ class InvalidComparison(Exception): """ +class NotObjectError(ValueError): + """ + Exception is raised by check_result_array to indicate not object dtype. + + Instead of raising a ValueError("Must produce aggregated value"). + """ + + +class SupplyTzDetypeError(ValueError): + """ + Exception is raised by _validate_tz_from_dtype. + + Instead of ValueError("cannot supply both a tz and a timezone-naive dtype + (i.e. datetime64[ns])"). + """ + + +class NATypeError(TypeError): + """ + Exception is raised by NAType.__bool__(). + + Instead of TypeError("boolean value of NA is ambiguous"). + """ + + +class NoObjectConcatenateError(ValueError): + """ + Exception is raised by _Concatenator.__init__(). + + Instead of ValueError("No objects to concatenate"). + """ + + +class DotMismatchShapeError(ValueError): + """ + Exception is raised by dot(). 
+ + Instead of ValueError(f"Dot product shape mismatch, + {lvals.shape} vs {rvals.shape}"). + """ + + +class DiffArrayLengthError(ValueError): + """ + Exception is raised by _extract_index(). + + Instead of ValueError("All arrays must be of the same length"). + """ + + +class IndexSpecifiedError(ValueError): + """ + Exception is raised by sanitize_array(). + + Instead of ValueError("Index must be specified when using an array as input"). + """ + + +class DataOneDimensionalError(ValueError): + """ + Exception is raised by _sanitize_ndim(). + + Instead of ValueError("Data must be 1-dimensional"). + """ + + __all__ = [ "AbstractMethodError", "AccessorRegistrationWarning", @@ -564,10 +630,14 @@ class InvalidComparison(Exception): "CSSWarning", "DatabaseError", "DataError", + "DataOneDimensionalError", + "DiffArrayLengthError", + "DotMismatchShapeError", "DtypeWarning", "DuplicateLabelError", "EmptyDataError", "IncompatibilityWarning", + "IndexSpecifiedError", "IntCastingNaNError", "InvalidColumnName", "InvalidComparison", @@ -576,7 +646,10 @@ class InvalidComparison(Exception): "IndexingError", "LossySetitemError", "MergeError", + "NATypeError", "NoBufferPresent", + "NoObjectConcatenateError", + "NotObjectError", "NullFrequencyError", "NumbaUtilError", "NumExprClobberingError", @@ -593,6 +666,7 @@ class InvalidComparison(Exception): "SettingWithCopyError", "SettingWithCopyWarning", "SpecificationError", + "SupplyTzDetypeError", "UndefinedVariableError", "UnsortedIndexError", "UnsupportedFunctionCall", diff --git a/pandas/tests/frame/methods/test_compare.py b/pandas/tests/frame/methods/test_compare.py index fe74ec8077bc9..fdbc1e7b5bf17 100644 --- a/pandas/tests/frame/methods/test_compare.py +++ b/pandas/tests/frame/methods/test_compare.py @@ -2,6 +2,7 @@ import pytest from pandas.compat import is_numpy_dev +from pandas.errors import NATypeError import pandas as pd import pandas._testing as tm @@ -267,7 +268,7 @@ def test_compare_ea_and_np_dtype(val1, val2): ) if val1 is 
pd.NA and is_numpy_dev: # can't compare with numpy array if it contains pd.NA - with pytest.raises(TypeError, match="boolean value of NA is ambiguous"): + with pytest.raises(NATypeError, match="boolean value of NA is ambiguous"): result = df1.compare(df2, keep_shape=True) else: result = df1.compare(df2, keep_shape=True) diff --git a/pandas/tests/frame/methods/test_dot.py b/pandas/tests/frame/methods/test_dot.py index 555e5f0e26eaf..721200a54db1e 100644 --- a/pandas/tests/frame/methods/test_dot.py +++ b/pandas/tests/frame/methods/test_dot.py @@ -1,6 +1,8 @@ import numpy as np import pytest +from pandas.errors import DotMismatchShapeError + from pandas import ( DataFrame, Series, @@ -70,8 +72,8 @@ def test_dot_aligns(self, obj, other, expected): def test_dot_shape_mismatch(self, obj): msg = "Dot product shape mismatch" - # exception raised is of type Exception - with pytest.raises(Exception, match=msg): + # exception raised is of DotMismatchShapeError + with pytest.raises(DotMismatchShapeError, match=msg): obj.dot(obj.values[:3]) def test_dot_misaligned(self, obj, other): diff --git a/pandas/tests/frame/test_constructors.py b/pandas/tests/frame/test_constructors.py index 0496cfcb6f569..153f29f6684ff 100644 --- a/pandas/tests/frame/test_constructors.py +++ b/pandas/tests/frame/test_constructors.py @@ -23,7 +23,10 @@ import pytest import pytz -from pandas.errors import IntCastingNaNError +from pandas.errors import ( + DiffArrayLengthError, + IntCastingNaNError, +) import pandas.util._test_decorators as td from pandas.core.dtypes.common import is_integer_dtype @@ -1481,7 +1484,9 @@ class CustomDict(dict): def test_constructor_ragged(self): data = {"A": np.random.randn(10), "B": np.random.randn(8)} - with pytest.raises(ValueError, match="All arrays must be of the same length"): + with pytest.raises( + DiffArrayLengthError, match="All arrays must be of the same length" + ): DataFrame(data) def test_constructor_scalar(self): diff --git 
a/pandas/tests/groupby/test_any_all.py b/pandas/tests/groupby/test_any_all.py index e49238a9e6656..15a3042a976da 100644 --- a/pandas/tests/groupby/test_any_all.py +++ b/pandas/tests/groupby/test_any_all.py @@ -3,6 +3,8 @@ import numpy as np import pytest +from pandas.errors import NATypeError + import pandas as pd from pandas import ( DataFrame, @@ -175,7 +177,7 @@ def test_object_type_missing_vals(bool_agg_func, data, expected_res, frame_or_se def test_object_NA_raises_with_skipna_false(bool_agg_func): # GH#37501 ser = Series([pd.NA], dtype=object) - with pytest.raises(TypeError, match="boolean value of NA is ambiguous"): + with pytest.raises(NATypeError, match="boolean value of NA is ambiguous"): ser.groupby([1]).agg(bool_agg_func, skipna=False) diff --git a/pandas/tests/indexes/datetimes/test_constructors.py b/pandas/tests/indexes/datetimes/test_constructors.py index e1ada9f10c261..e47633c57239b 100644 --- a/pandas/tests/indexes/datetimes/test_constructors.py +++ b/pandas/tests/indexes/datetimes/test_constructors.py @@ -16,6 +16,7 @@ astype_overflowsafe, ) from pandas.compat import PY39 +from pandas.errors import SupplyTzDetypeError import pandas as pd from pandas import ( @@ -713,11 +714,8 @@ def test_constructor_dtype_tz_mismatch_raises(self): ["2013-01-01", "2013-01-02"], dtype="datetime64[ns, US/Eastern]" ) - msg = ( - "cannot supply both a tz and a timezone-naive dtype " - r"\(i\.e\. 
datetime64\[ns\]\)" - ) - with pytest.raises(ValueError, match=msg): + msg = "cannot supply both a tz and a timezone-naive dtype" + with pytest.raises(SupplyTzDetypeError, match=msg): DatetimeIndex(idx, dtype="datetime64[ns]") # this is effectively trying to convert tz's diff --git a/pandas/tests/indexing/test_indexing.py b/pandas/tests/indexing/test_indexing.py index 35da972dd1a81..673a77dd6a21c 100644 --- a/pandas/tests/indexing/test_indexing.py +++ b/pandas/tests/indexing/test_indexing.py @@ -8,7 +8,10 @@ import numpy as np import pytest -from pandas.errors import IndexingError +from pandas.errors import ( + DataOneDimensionalError, + IndexingError, +) from pandas.core.dtypes.common import ( is_float_dtype, @@ -110,7 +113,12 @@ def test_getitem_ndarray_3d( msg = "|".join(msgs) - potential_errors = (IndexError, ValueError, NotImplementedError) + potential_errors = ( + IndexError, + ValueError, + NotImplementedError, + DataOneDimensionalError, + ) with pytest.raises(potential_errors, match=msg): idxr[nd3] @@ -124,7 +132,7 @@ def test_setitem_ndarray_3d(self, index, frame_or_series, indexer_sli): err = ValueError msg = f"Cannot set values with ndim > {obj.ndim}" else: - err = ValueError + err = (ValueError, DataOneDimensionalError) msg = "|".join( [ r"Buffer has wrong number of dimensions \(expected 1, got 3\)", diff --git a/pandas/tests/reshape/concat/test_concat.py b/pandas/tests/reshape/concat/test_concat.py index 5fa989419a7d4..6243f6899f7d0 100644 --- a/pandas/tests/reshape/concat/test_concat.py +++ b/pandas/tests/reshape/concat/test_concat.py @@ -15,6 +15,7 @@ from pandas.errors import ( InvalidIndexError, + NoObjectConcatenateError, PerformanceWarning, ) import pandas.util._test_decorators as td @@ -361,7 +362,7 @@ def test_concat_single_with_key(self): tm.assert_frame_equal(result, expected[:10]) def test_concat_no_items_raises(self): - with pytest.raises(ValueError, match="No objects to concatenate"): + with pytest.raises(NoObjectConcatenateError, 
match="No objects to concatenate"): concat([]) def test_concat_exclude_none(self): diff --git a/pandas/tests/scalar/test_na_scalar.py b/pandas/tests/scalar/test_na_scalar.py index a77316cbc0ea6..61108f0a4388e 100644 --- a/pandas/tests/scalar/test_na_scalar.py +++ b/pandas/tests/scalar/test_na_scalar.py @@ -4,6 +4,7 @@ import pytest from pandas._libs.missing import NA +from pandas.errors import NATypeError from pandas.core.dtypes.common import is_scalar @@ -36,10 +37,10 @@ def test_format(): def test_truthiness(): msg = "boolean value of NA is ambiguous" - with pytest.raises(TypeError, match=msg): + with pytest.raises(NATypeError, match=msg): bool(NA) - with pytest.raises(TypeError, match=msg): + with pytest.raises(NATypeError, match=msg): not NA diff --git a/pandas/tests/series/methods/test_matmul.py b/pandas/tests/series/methods/test_matmul.py index b944395bff29f..21eb49de290df 100644 --- a/pandas/tests/series/methods/test_matmul.py +++ b/pandas/tests/series/methods/test_matmul.py @@ -3,6 +3,8 @@ import numpy as np import pytest +from pandas.errors import DotMismatchShapeError + from pandas import ( DataFrame, Series, @@ -70,8 +72,8 @@ def test_matmul(self): tm.assert_series_equal(result, expected) msg = r"Dot product shape mismatch, \(4,\) vs \(3,\)" - # exception raised is of type Exception - with pytest.raises(Exception, match=msg): + # exception raised is of DotMismatchShapeError + with pytest.raises(DotMismatchShapeError, match=msg): a.dot(a.values[:3]) msg = "matrices are not aligned" with pytest.raises(ValueError, match=msg): diff --git a/pandas/tests/series/test_constructors.py b/pandas/tests/series/test_constructors.py index f856f18552594..9866291362ba0 100644 --- a/pandas/tests/series/test_constructors.py +++ b/pandas/tests/series/test_constructors.py @@ -15,7 +15,10 @@ lib, ) from pandas.compat import is_numpy_dev -from pandas.errors import IntCastingNaNError +from pandas.errors import ( + DataOneDimensionalError, + IntCastingNaNError, +) import 
pandas.util._test_decorators as td from pandas.core.dtypes.common import ( @@ -172,7 +175,7 @@ def test_constructor(self, datetime_series): assert not Series().index._is_all_dates # exception raised is of type ValueError GH35744 - with pytest.raises(ValueError, match="Data must be 1-dimensional"): + with pytest.raises(DataOneDimensionalError, match="Data must be 1-dimensional"): Series(np.random.randn(3, 3), index=np.arange(3)) mixed.name = "Series"
Fixes #50353. Any code review and comments are welcome.
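The core pattern of this PR is replacing message-string matching on a generic `ValueError` with a dedicated subclass that callers can catch directly. A minimal sketch of that pattern (the `concat` function here is illustrative, not the actual pandas internals):

```python
class NoObjectConcatenateError(ValueError):
    """Raised instead of a bare ValueError("No objects to concatenate")."""


def concat(objs):
    # Minimal stand-in for _Concatenator.__init__ from the diff above.
    if len(objs) == 0:
        raise NoObjectConcatenateError("No objects to concatenate")
    return list(objs)


# Before: callers had to inspect the message of a generic ValueError:
#     except ValueError as err:
#         if "No objects to concatenate" not in str(err): raise
# After: catch the subclass directly.
try:
    result = concat([])
except NoObjectConcatenateError:
    result = "fallback"  # e.g. generic.py falls back to _aggregate_frame
```

Because each new class derives from the exception it replaces (`ValueError`, `TypeError`), downstream code that caught the old type, including `pytest.raises(ValueError, ...)`, keeps working unchanged.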
https://api.github.com/repos/pandas-dev/pandas/pulls/50408
2022-12-23T08:50:43Z
2023-02-16T02:03:21Z
null
2023-02-16T02:03:25Z
TST: Consolidate tests that raise in groupby
diff --git a/pandas/tests/groupby/test_function.py b/pandas/tests/groupby/test_function.py index 2b0c607d6851a..bb15783f4607f 100644 --- a/pandas/tests/groupby/test_function.py +++ b/pandas/tests/groupby/test_function.py @@ -166,8 +166,6 @@ def test_averages(self, df, method): ], ) - with pytest.raises(TypeError, match="[Cc]ould not convert"): - getattr(gb, method)() result = getattr(gb, method)(numeric_only=True) tm.assert_frame_equal(result.reindex_like(expected), expected) @@ -317,21 +315,6 @@ def gni(self, df): gni = df.groupby("A", as_index=False) return gni - # TODO: non-unique columns, as_index=False - def test_idxmax_nuisance_raises(self, gb): - # GH#5610, GH#41480 - expected = DataFrame([[0.0], [np.nan]], columns=["B"], index=[1, 3]) - expected.index.name = "A" - with pytest.raises(TypeError, match="not allowed for this dtype"): - gb.idxmax() - - def test_idxmin_nuisance_raises(self, gb): - # GH#5610, GH#41480 - expected = DataFrame([[0.0], [np.nan]], columns=["B"], index=[1, 3]) - expected.index.name = "A" - with pytest.raises(TypeError, match="not allowed for this dtype"): - gb.idxmin() - def test_describe(self, df, gb, gni): # describe expected_index = Index([1, 3], name="A") diff --git a/pandas/tests/groupby/test_groupby.py b/pandas/tests/groupby/test_groupby.py index ab261c7f1a7c8..b2fc60b76fdf6 100644 --- a/pandas/tests/groupby/test_groupby.py +++ b/pandas/tests/groupby/test_groupby.py @@ -433,19 +433,12 @@ def test_frame_groupby_columns(tsframe): def test_frame_set_name_single(df): grouped = df.groupby("A") - msg = "The default value of numeric_only" - with pytest.raises(TypeError, match="Could not convert"): - grouped.mean() result = grouped.mean(numeric_only=True) assert result.index.name == "A" - with pytest.raises(TypeError, match="Could not convert"): - df.groupby("A", as_index=False).mean() result = df.groupby("A", as_index=False).mean(numeric_only=True) assert result.index.name != "A" - with pytest.raises(TypeError, match="Could not 
convert"): - grouped.agg(np.mean) result = grouped[["C", "D"]].agg(np.mean) assert result.index.name == "A" diff --git a/pandas/tests/groupby/test_grouping.py b/pandas/tests/groupby/test_grouping.py index 26cdfa2291021..7c7b9b29d8709 100644 --- a/pandas/tests/groupby/test_grouping.py +++ b/pandas/tests/groupby/test_grouping.py @@ -55,11 +55,7 @@ def test_column_select_via_attr(self, df): tm.assert_series_equal(result, expected) df["mean"] = 1.5 - with pytest.raises(TypeError, match="Could not convert"): - df.groupby("A").mean() result = df.groupby("A").mean(numeric_only=True) - with pytest.raises(TypeError, match="Could not convert"): - df.groupby("A").agg(np.mean) expected = df.groupby("A")[["C", "D", "mean"]].agg(np.mean) tm.assert_frame_equal(result, expected) @@ -289,8 +285,6 @@ def test_grouper_column_and_index(self): result = df_multi.groupby(["B", pd.Grouper(level="inner")]).mean( numeric_only=True ) - with pytest.raises(TypeError, match="Could not convert"): - df_multi.reset_index().groupby(["B", "inner"]).mean() expected = ( df_multi.reset_index().groupby(["B", "inner"]).mean(numeric_only=True) ) @@ -300,8 +294,6 @@ def test_grouper_column_and_index(self): result = df_multi.groupby([pd.Grouper(level="inner"), "B"]).mean( numeric_only=True ) - with pytest.raises(TypeError, match="Could not convert"): - df_multi.reset_index().groupby(["inner", "B"]).mean() expected = ( df_multi.reset_index().groupby(["inner", "B"]).mean(numeric_only=True) ) @@ -310,26 +302,18 @@ def test_grouper_column_and_index(self): # Grouping a single-index frame by a column and the index should # be equivalent to resetting the index and grouping by two columns df_single = df_multi.reset_index("outer") - with pytest.raises(TypeError, match="Could not convert"): - df_single.groupby(["B", pd.Grouper(level="inner")]).mean() result = df_single.groupby(["B", pd.Grouper(level="inner")]).mean( numeric_only=True ) - with pytest.raises(TypeError, match="Could not convert"): - 
df_single.reset_index().groupby(["B", "inner"]).mean() expected = ( df_single.reset_index().groupby(["B", "inner"]).mean(numeric_only=True) ) tm.assert_frame_equal(result, expected) # Test the reverse grouping order - with pytest.raises(TypeError, match="Could not convert"): - df_single.groupby([pd.Grouper(level="inner"), "B"]).mean() result = df_single.groupby([pd.Grouper(level="inner"), "B"]).mean( numeric_only=True ) - with pytest.raises(TypeError, match="Could not convert"): - df_single.reset_index().groupby(["inner", "B"]).mean() expected = ( df_single.reset_index().groupby(["inner", "B"]).mean(numeric_only=True) ) @@ -406,11 +390,7 @@ def test_empty_groups(self, df): def test_groupby_grouper(self, df): grouped = df.groupby("A") - with pytest.raises(TypeError, match="Could not convert"): - df.groupby(grouped.grouper).mean() result = df.groupby(grouped.grouper).mean(numeric_only=True) - with pytest.raises(TypeError, match="Could not convert"): - grouped.mean() expected = grouped.mean(numeric_only=True) tm.assert_frame_equal(result, expected) diff --git a/pandas/tests/groupby/test_min_max.py b/pandas/tests/groupby/test_min_max.py index 38c4c41e8648d..e85a4c95a2b34 100644 --- a/pandas/tests/groupby/test_min_max.py +++ b/pandas/tests/groupby/test_min_max.py @@ -47,16 +47,12 @@ def test_max_min_object_multiple_columns(using_array_manager): gb = df.groupby("A") - with pytest.raises(TypeError, match="not supported between instances"): - gb.max(numeric_only=False) result = gb[["C"]].max() # "max" is valid for column "C" but not for "B" ei = Index([1, 2, 3], name="A") expected = DataFrame({"C": ["b", "d", "e"]}, index=ei) tm.assert_frame_equal(result, expected) - with pytest.raises(TypeError, match="not supported between instances"): - gb.max(numeric_only=False) result = gb[["C"]].min() # "min" is valid for column "C" but not for "B" ei = Index([1, 2, 3], name="A") diff --git a/pandas/tests/groupby/test_raises.py b/pandas/tests/groupby/test_raises.py new file mode 
100644 index 0000000000000..cc3f468349efb --- /dev/null +++ b/pandas/tests/groupby/test_raises.py @@ -0,0 +1,178 @@ +# Only tests that raise an error and have no better location should go here. +# Tests for specific groupby methods should go in their respective +# test file. + +import datetime + +import pytest + +from pandas import DataFrame +from pandas.tests.groupby import get_groupby_method_args + + +@pytest.mark.parametrize("how", ["method", "agg", "transform"]) +def test_groupby_raises_string(how, groupby_func, as_index, sort): + df = DataFrame( + { + "a": [1, 1, 1, 2, 2], + "b": range(5), + "c": list("xyzwt"), + } + ) + args = get_groupby_method_args(groupby_func, df) + gb = df.groupby("a", as_index=as_index, sort=sort) + + klass, msg = { + "all": (None, ""), + "any": (None, ""), + "bfill": (None, ""), + "corrwith": (TypeError, "Could not convert"), + "count": (None, ""), + "cumcount": (None, ""), + "cummax": (NotImplementedError, "function is not implemented for this dtype"), + "cummin": (NotImplementedError, "function is not implemented for this dtype"), + "cumprod": (NotImplementedError, "function is not implemented for this dtype"), + "cumsum": (NotImplementedError, "function is not implemented for this dtype"), + "diff": (TypeError, "unsupported operand type"), + "ffill": (None, ""), + "fillna": (None, ""), + "first": (None, ""), + "idxmax": (TypeError, "'argmax' not allowed for this dtype"), + "idxmin": (TypeError, "'argmin' not allowed for this dtype"), + "last": (None, ""), + "max": (None, ""), + "mean": (TypeError, "Could not convert xyz to numeric"), + "median": (TypeError, "could not convert string to float"), + "min": (None, ""), + "ngroup": (None, ""), + "nunique": (None, ""), + "pct_change": (TypeError, "unsupported operand type"), + "prod": (TypeError, "can't multiply sequence by non-int of type 'str'"), + "quantile": (TypeError, "cannot be performed against 'object' dtypes!"), + "rank": (None, ""), + "sem": (ValueError, "could not convert 
string to float"), + "shift": (None, ""), + "size": (None, ""), + "skew": (TypeError, "could not convert string to float"), + "std": (ValueError, "could not convert string to float"), + "sum": (None, ""), + "var": (TypeError, "could not convert string to float"), + }[groupby_func] + + if klass is None: + if how == "method": + getattr(gb, groupby_func)(*args) + elif how == "agg": + gb.agg(groupby_func, *args) + else: + gb.transform(groupby_func, *args) + else: + with pytest.raises(klass, match=msg): + if how == "method": + getattr(gb, groupby_func)(*args) + elif how == "agg": + gb.agg(groupby_func, *args) + else: + gb.transform(groupby_func, *args) + + +@pytest.mark.parametrize("how", ["agg", "transform"]) +def test_groupby_raises_string_udf(how): + df = DataFrame( + { + "a": [1, 1, 1, 2, 2], + "b": range(5), + "c": list("xyzwt"), + } + ) + gb = df.groupby("a") + + def func(x): + raise TypeError("Test error message") + + with pytest.raises(TypeError, match="Test error message"): + getattr(gb, how)(func) + + +@pytest.mark.parametrize("how", ["method", "agg", "transform"]) +def test_groupby_raises_datetime(how, groupby_func, as_index, sort): + df = DataFrame( + { + "a": [1, 1, 1, 2, 2], + "b": range(5), + "c": datetime.datetime(2005, 1, 1, 10, 30, 23, 540000), + } + ) + args = get_groupby_method_args(groupby_func, df) + gb = df.groupby("a", as_index=as_index, sort=sort) + + klass, msg = { + "all": (None, ""), + "any": (None, ""), + "bfill": (None, ""), + "corrwith": (TypeError, "cannot perform __mul__ with this index type"), + "count": (None, ""), + "cumcount": (None, ""), + "cummax": (None, ""), + "cummin": (None, ""), + "cumprod": (TypeError, "datetime64 type does not support cumprod operations"), + "cumsum": (TypeError, "datetime64 type does not support cumsum operations"), + "diff": (None, ""), + "ffill": (None, ""), + "fillna": (None, ""), + "first": (None, ""), + "idxmax": (None, ""), + "idxmin": (None, ""), + "last": (None, ""), + "max": (None, ""), + "mean": 
(None, ""), + "median": (None, ""), + "min": (None, ""), + "ngroup": (None, ""), + "nunique": (None, ""), + "pct_change": (TypeError, "cannot perform __truediv__ with this index type"), + "prod": (TypeError, "datetime64 type does not support prod"), + "quantile": (None, ""), + "rank": (None, ""), + "sem": (TypeError, "Cannot cast DatetimeArray to dtype float64"), + "shift": (None, ""), + "size": (None, ""), + "skew": (TypeError, r"dtype datetime64\[ns\] does not support reduction"), + "std": (TypeError, "Cannot cast DatetimeArray to dtype float64"), + "sum": (TypeError, "datetime64 type does not support sum operations"), + "var": (None, ""), + }[groupby_func] + + if klass is None: + if how == "method": + getattr(gb, groupby_func)(*args) + elif how == "agg": + gb.agg(groupby_func, *args) + else: + gb.transform(groupby_func, *args) + else: + with pytest.raises(klass, match=msg): + if how == "method": + getattr(gb, groupby_func)(*args) + elif how == "agg": + gb.agg(groupby_func, *args) + else: + gb.transform(groupby_func, *args) + + +@pytest.mark.parametrize("how", ["agg", "transform"]) +def test_groupby_raises_datetime_udf(how): + df = DataFrame( + { + "a": [1, 1, 1, 2, 2], + "b": range(5), + "c": datetime.datetime(2005, 1, 1, 10, 30, 23, 540000), + } + ) + gb = df.groupby("a") + + def func(x): + raise TypeError("Test error message") + + with pytest.raises(TypeError, match="Test error message"): + getattr(gb, how)(func) diff --git a/pandas/tests/groupby/transform/test_transform.py b/pandas/tests/groupby/transform/test_transform.py index d0c8b53f13399..4c6f172b00a58 100644 --- a/pandas/tests/groupby/transform/test_transform.py +++ b/pandas/tests/groupby/transform/test_transform.py @@ -426,11 +426,7 @@ def test_transform_nuisance_raises(df): def test_transform_function_aliases(df): - with pytest.raises(TypeError, match="Could not convert"): - df.groupby("A").transform("mean") result = df.groupby("A").transform("mean", numeric_only=True) - with pytest.raises(TypeError, 
match="Could not convert"): - df.groupby("A").transform(np.mean) expected = df.groupby("A")[["C", "D"]].transform(np.mean) tm.assert_frame_equal(result, expected) @@ -508,8 +504,6 @@ def test_groupby_transform_with_int(): } ) with np.errstate(all="ignore"): - with pytest.raises(TypeError, match="Could not convert"): - df.groupby("A").transform(lambda x: (x - x.mean()) / x.std()) result = df.groupby("A")[["B", "C"]].transform( lambda x: (x - x.mean()) / x.std() ) @@ -554,8 +548,6 @@ def test_groupby_transform_with_int(): tm.assert_frame_equal(result, expected) # int doesn't get downcasted - with pytest.raises(TypeError, match="unsupported operand type"): - df.groupby("A").transform(lambda x: x * 2 / 2) result = df.groupby("A")[["B", "C"]].transform(lambda x: x * 2 / 2) expected = DataFrame({"B": 1.0, "C": [2.0, 3.0, 4.0, 10.0, 5.0, -1.0]}) tm.assert_frame_equal(result, expected) @@ -748,14 +740,8 @@ def test_cython_transform_frame(op, args, targop): expected = expected.sort_index(axis=1) - if op != "shift": - with pytest.raises(TypeError, match="datetime64 type does not support"): - gb.transform(op, *args).sort_index(axis=1) result = gb[expected.columns].transform(op, *args).sort_index(axis=1) tm.assert_frame_equal(result, expected) - if op != "shift": - with pytest.raises(TypeError, match="datetime64 type does not support"): - getattr(gb, op)(*args).sort_index(axis=1) result = getattr(gb[expected.columns], op)(*args).sort_index(axis=1) tm.assert_frame_equal(result, expected) # individual columns
- [ ] closes #xxxx (Replace xxxx with the GitHub issue number) - [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature - [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit). - [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions. - [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature. Part of #50133
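The consolidation in `test_raises.py` above is table-driven: each groupby function name maps to an expected exception class (or `None` for no error) plus a message fragment. The same idea in a self-contained sketch, using stdlib math functions instead of groupby methods:

```python
import math

# function name -> (expected exception class or None, message fragment)
CASES = {
    "fabs": (None, ""),
    "log": (ValueError, "math domain error"),
}

results = {}
for name, (klass, msg) in CASES.items():
    try:
        getattr(math, name)(-1.0)  # apply each function to a problematic input
    except Exception as err:
        # mirrors pytest.raises(klass, match=msg)
        assert klass is not None and isinstance(err, klass) and msg in str(err)
        results[name] = "raised"
    else:
        assert klass is None
        results[name] = "ok"
```

In the real tests, `pytest.raises(klass, match=msg)` does the matching; keeping expectations in one dict lets a single test body cover the method, `agg`, and `transform` paths instead of scattering one `pytest.raises` per file.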
https://api.github.com/repos/pandas-dev/pandas/pulls/50404
2022-12-23T03:01:05Z
2022-12-23T23:31:53Z
2022-12-23T23:31:53Z
2022-12-23T23:33:05Z
CLN: Remove unused rewrite warning
diff --git a/pandas/core/reshape/pivot.py b/pandas/core/reshape/pivot.py index 810a428098df2..eb95bb2484ebf 100644 --- a/pandas/core/reshape/pivot.py +++ b/pandas/core/reshape/pivot.py @@ -21,7 +21,6 @@ Appender, Substitution, ) -from pandas.util._exceptions import rewrite_warning from pandas.core.dtypes.cast import maybe_downcast_to_dtype from pandas.core.dtypes.common import ( @@ -165,17 +164,7 @@ def __internal_pivot_table( values = list(values) grouped = data.groupby(keys, observed=observed, sort=sort) - msg = ( - "pivot_table dropped a column because it failed to aggregate. This behavior " - "is deprecated and will raise in a future version of pandas. Select only the " - "columns that can be aggregated." - ) - with rewrite_warning( - target_message="The default value of numeric_only", - target_category=FutureWarning, - new_message=msg, - ): - agged = grouped.agg(aggfunc) + agged = grouped.agg(aggfunc) if dropna and isinstance(agged, ABCDataFrame) and len(agged.columns): agged = agged.dropna(how="all")
- [ ] closes #xxxx (Replace xxxx with the GitHub issue number) - [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature - [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit). - [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions. - [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature. With groupby no longer silently dropping nuisance columns, the warning is no longer raised and so this goes unused.
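For context, `rewrite_warning` is a context manager that intercepts a matching warning and re-emits it with a different message. A simplified sketch of that idea (not the exact implementation in `pandas.util._exceptions`):

```python
import warnings
from contextlib import contextmanager


@contextmanager
def rewrite_warning(target_message, target_category, new_message):
    """Intercept warnings raised in the block; re-emit matching ones
    with a new message, and pass the rest through unchanged."""
    with warnings.catch_warnings(record=True) as caught:
        warnings.simplefilter("always")
        yield
    for w in caught:
        if issubclass(w.category, target_category) and target_message in str(w.message):
            warnings.warn(new_message, target_category, stacklevel=2)
        else:
            warnings.warn_explicit(w.message, w.category, w.filename, w.lineno)


# With groupby now raising instead of emitting the FutureWarning about
# numeric_only, pivot_table has nothing left to rewrite, hence the removal.
```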
https://api.github.com/repos/pandas-dev/pandas/pulls/50403
2022-12-22T22:04:37Z
2022-12-23T17:41:11Z
2022-12-23T17:41:11Z
2022-12-23T17:41:48Z
REF: restore _concat_managers_axis0
diff --git a/pandas/core/internals/concat.py b/pandas/core/internals/concat.py index d1a252f727e90..364025d583b7d 100644 --- a/pandas/core/internals/concat.py +++ b/pandas/core/internals/concat.py @@ -193,12 +193,21 @@ def concatenate_managers( if isinstance(mgrs_indexers[0][0], ArrayManager): return _concatenate_array_managers(mgrs_indexers, axes, concat_axis, copy) + # Assertions disabled for performance + # for tup in mgrs_indexers: + # # caller is responsible for ensuring this + # indexers = tup[1] + # assert concat_axis not in indexers + + if concat_axis == 0: + return _concat_managers_axis0(mgrs_indexers, axes, copy) + mgrs_indexers = _maybe_reindex_columns_na_proxy(axes, mgrs_indexers) concat_plans = [ _get_mgr_concatenation_plan(mgr, indexers) for mgr, indexers in mgrs_indexers ] - concat_plan = _combine_concat_plans(concat_plans, concat_axis) + concat_plan = _combine_concat_plans(concat_plans) blocks = [] for placement, join_units in concat_plan: @@ -229,7 +238,7 @@ def concatenate_managers( fastpath = blk.values.dtype == values.dtype else: - values = _concatenate_join_units(join_units, concat_axis, copy=copy) + values = _concatenate_join_units(join_units, copy=copy) fastpath = False if fastpath: @@ -242,6 +251,42 @@ def concatenate_managers( return BlockManager(tuple(blocks), axes) +def _concat_managers_axis0( + mgrs_indexers, axes: list[Index], copy: bool +) -> BlockManager: + """ + concat_managers specialized to concat_axis=0, with reindexing already + having been done in _maybe_reindex_columns_na_proxy. 
+ """ + had_reindexers = { + i: len(mgrs_indexers[i][1]) > 0 for i in range(len(mgrs_indexers)) + } + mgrs_indexers = _maybe_reindex_columns_na_proxy(axes, mgrs_indexers) + + mgrs = [x[0] for x in mgrs_indexers] + + offset = 0 + blocks = [] + for i, mgr in enumerate(mgrs): + # If we already reindexed, then we definitely don't need another copy + made_copy = had_reindexers[i] + + for blk in mgr.blocks: + if made_copy: + nb = blk.copy(deep=False) + elif copy: + nb = blk.copy() + else: + # by slicing instead of copy(deep=False), we get a new array + # object, see test_concat_copy + nb = blk.getitem_block(slice(None)) + nb._mgr_locs = nb._mgr_locs.add(offset) + blocks.append(nb) + + offset += len(mgr.items) + return BlockManager(tuple(blocks), axes) + + def _maybe_reindex_columns_na_proxy( axes: list[Index], mgrs_indexers: list[tuple[BlockManager, dict[int, np.ndarray]]] ) -> list[tuple[BlockManager, dict[int, np.ndarray]]]: @@ -252,25 +297,22 @@ def _maybe_reindex_columns_na_proxy( Columns added in this reindexing have dtype=np.void, indicating they should be ignored when choosing a column's final dtype. """ - new_mgrs_indexers = [] + new_mgrs_indexers: list[tuple[BlockManager, dict[int, np.ndarray]]] = [] + for mgr, indexers in mgrs_indexers: - # We only reindex for axis=0 (i.e. columns), as this can be done cheaply - if 0 in indexers: - new_mgr = mgr.reindex_indexer( - axes[0], - indexers[0], - axis=0, + # For axis=0 (i.e. columns) we use_na_proxy and only_slice, so this + # is a cheap reindexing. 
+ for i, indexer in indexers.items(): + mgr = mgr.reindex_indexer( + axes[i], + indexers[i], + axis=i, copy=False, - only_slice=True, + only_slice=True, # only relevant for i==0 allow_dups=True, - use_na_proxy=True, + use_na_proxy=True, # only relevant for i==0 ) - new_indexers = indexers.copy() - del new_indexers[0] - new_mgrs_indexers.append((new_mgr, new_indexers)) - else: - new_mgrs_indexers.append((mgr, indexers)) - + new_mgrs_indexers.append((mgr, {})) return new_mgrs_indexers @@ -288,7 +330,9 @@ def _get_mgr_concatenation_plan(mgr: BlockManager, indexers: dict[int, np.ndarra plan : list of (BlockPlacement, JoinUnit) tuples """ - # Calculate post-reindex shape , save for item axis which will be separate + assert len(indexers) == 0 + + # Calculate post-reindex shape, save for item axis which will be separate # for each block anyway. mgr_shape_list = list(mgr.shape) for ax, indexer in indexers.items(): @@ -523,16 +567,10 @@ def get_reindexed_values(self, empty_dtype: DtypeObj, upcasted_na) -> ArrayLike: return values -def _concatenate_join_units( - join_units: list[JoinUnit], concat_axis: AxisInt, copy: bool -) -> ArrayLike: +def _concatenate_join_units(join_units: list[JoinUnit], copy: bool) -> ArrayLike: """ - Concatenate values from several join units along selected axis. + Concatenate values from several join units along axis=1. """ - if concat_axis == 0 and len(join_units) > 1: - # Concatenating join units along ax0 is handled in _merge_blocks. 
- raise AssertionError("Concatenating join units along axis0") - empty_dtype = _get_empty_dtype(join_units) has_none_blocks = any(unit.block.dtype.kind == "V" for unit in join_units) @@ -573,7 +611,7 @@ def _concatenate_join_units( concat_values = ensure_block_shape(concat_values, 2) else: - concat_values = concat_compat(to_concat, axis=concat_axis) + concat_values = concat_compat(to_concat, axis=1) return concat_values @@ -701,28 +739,18 @@ def _trim_join_unit(join_unit: JoinUnit, length: int) -> JoinUnit: return JoinUnit(block=extra_block, indexers=extra_indexers, shape=extra_shape) -def _combine_concat_plans(plans, concat_axis: AxisInt): +def _combine_concat_plans(plans): """ Combine multiple concatenation plans into one. existing_plan is updated in-place. + + We only get here with concat_axis == 1. """ if len(plans) == 1: for p in plans[0]: yield p[0], [p[1]] - elif concat_axis == 0: - offset = 0 - for plan in plans: - last_plc = None - - for plc, unit in plan: - yield plc.add(offset), [unit] - last_plc = plc - - if last_plc is not None: - offset += last_plc.as_slice.stop - else: # singleton list so we can modify it as a side-effect within _next_or_none num_ended = [0]
It was removed in #47372 but removing it was not the point of that PR, just a side-effect.
https://api.github.com/repos/pandas-dev/pandas/pulls/50401
2022-12-22T18:35:11Z
2022-12-27T22:28:35Z
2022-12-27T22:28:35Z
2022-12-27T22:42:41Z
BUG: Index with null value not serialized correctly to json
diff --git a/doc/source/whatsnew/v2.0.0.rst b/doc/source/whatsnew/v2.0.0.rst index 033f47f0c994d..7f7145347296c 100644 --- a/doc/source/whatsnew/v2.0.0.rst +++ b/doc/source/whatsnew/v2.0.0.rst @@ -1012,6 +1012,8 @@ I/O - Fixed memory leak which stemmed from the initialization of the internal JSON module (:issue:`49222`) - Fixed issue where :func:`json_normalize` would incorrectly remove leading characters from column names that matched the ``sep`` argument (:issue:`49861`) - Bug in :meth:`DataFrame.to_json` where it would segfault when failing to encode a string (:issue:`50307`) +- Bug in :meth:`DataFrame.to_json` where it would incorrectly use the string representations of NA-values instead of null when serializing an index (:issue:`31801`) +- Bug in :meth:`DataFrame.to_json` where it would error when serializing ``Decimal("NaN")`` (:issue:`50400`) Period ^^^^^^ diff --git a/pandas/_libs/src/ujson/python/objToJSON.c b/pandas/_libs/src/ujson/python/objToJSON.c index a6f18e0aec4d9..513fa6abfc760 100644 --- a/pandas/_libs/src/ujson/python/objToJSON.c +++ b/pandas/_libs/src/ujson/python/objToJSON.c @@ -276,6 +276,27 @@ static int is_simple_frame(PyObject *obj) { Py_DECREF(mgr); return ret; } +/* TODO: Consider unifying with checknull and co. 
+ in missing.pyx */ +static int is_null_obj(PyObject* obj) { + int is_null = 0; + if (PyFloat_Check(obj)) { + double fval = PyFloat_AS_DOUBLE(obj); + is_null = npy_isnan(fval); + } else if (obj == Py_None || object_is_na_type(obj)) { + is_null = 1; + } else if (object_is_decimal_type(obj)) { + PyObject *is_null_obj = PyObject_CallMethod(obj, + "is_nan", + NULL); + is_null = (is_null_obj == Py_True); + if (!is_null_obj) { + return -1; + } + Py_DECREF(is_null_obj); + } + return is_null; +} static npy_int64 get_long_attr(PyObject *o, const char *attr) { // NB we are implicitly assuming that o is a Timedelta or Timestamp, or NaT @@ -1283,6 +1304,7 @@ char **NpyArr_encodeLabels(PyArrayObject *labels, PyObjectEncoder *enc, type_num = PyArray_TYPE(labels); for (i = 0; i < num; i++) { + int is_null = 0; // Whether current val is a null item = PyArray_GETITEM(labels, dataptr); if (!item) { NpyArr_freeLabels(ret, num); @@ -1320,9 +1342,7 @@ char **NpyArr_encodeLabels(PyArrayObject *labels, PyObjectEncoder *enc, if (is_datetimelike) { if (nanosecVal == get_nat()) { - len = 4; - cLabel = PyObject_Malloc(len + 1); - strncpy(cLabel, "null", len + 1); + is_null = 1; } else { if (enc->datetimeIso) { if ((type_num == NPY_TIMEDELTA) || (PyDelta_Check(item))) { @@ -1348,17 +1368,33 @@ char **NpyArr_encodeLabels(PyArrayObject *labels, PyObjectEncoder *enc, len = strlen(cLabel); } } - } else { // Fallback to string representation - // Replace item with the string to keep it alive. - Py_SETREF(item, PyObject_Str(item)); - if (item == NULL) { - NpyArr_freeLabels(ret, num); - ret = 0; - break; + } else { + // NA values need special handling + is_null = is_null_obj(item); + if (is_null == -1) { + // Something errored + // Return to let the error surface + return 0; + } + if (!is_null) { + // Otherwise, fallback to string representation + // Replace item with the string to keep it alive. 
+ Py_SETREF(item, PyObject_Str(item)); + if (item == NULL) { + NpyArr_freeLabels(ret, num); + ret = 0; + break; + } + + cLabel = (char *)PyUnicode_AsUTF8(item); + len = strlen(cLabel); } + } - cLabel = (char *)PyUnicode_AsUTF8(item); - len = strlen(cLabel); + if (is_null) { + len = 4; + cLabel = PyObject_Malloc(len + 1); + strncpy(cLabel, "null", len + 1); } // Add 1 to include NULL terminator @@ -1366,7 +1402,7 @@ char **NpyArr_encodeLabels(PyArrayObject *labels, PyObjectEncoder *enc, memcpy(ret[i], cLabel, len + 1); Py_DECREF(item); - if (is_datetimelike) { + if (is_datetimelike || is_null) { PyObject_Free(cLabel); } @@ -1512,8 +1548,20 @@ void Object_beginTypeContext(JSOBJ _obj, JSONTypeContext *tc) { tc->type = JT_UTF8; return; } else if (object_is_decimal_type(obj)) { - GET_TC(tc)->doubleValue = PyFloat_AsDouble(obj); - tc->type = JT_DOUBLE; + /* Check for null, since null can't go thru double path */ + PyObject *is_null_obj = PyObject_CallMethod(obj, + "is_nan", + NULL); + if (!is_null_obj) { + goto INVALID; + } + if (is_null_obj == Py_False) { + GET_TC(tc)->doubleValue = PyFloat_AsDouble(obj); + tc->type = JT_DOUBLE; + } else { + tc->type = JT_NULL; + } + Py_DECREF(is_null_obj); return; } else if (PyDateTime_Check(obj) || PyDate_Check(obj)) { if (object_is_nat_type(obj)) { diff --git a/pandas/tests/io/json/test_pandas.py b/pandas/tests/io/json/test_pandas.py index d9d76c2d72db3..160dd6340e44c 100644 --- a/pandas/tests/io/json/test_pandas.py +++ b/pandas/tests/io/json/test_pandas.py @@ -1,6 +1,5 @@ import datetime from datetime import timedelta -from decimal import Decimal from io import StringIO import json import os @@ -1766,15 +1765,16 @@ def test_to_s3(self, s3_resource, s3so): timeout -= 0.1 assert timeout > 0, "Timed out waiting for file to appear on moto" - def test_json_pandas_nulls(self, nulls_fixture, request): + def test_json_pandas_nulls(self, nulls_fixture): # GH 31615 - if isinstance(nulls_fixture, Decimal): - mark = 
pytest.mark.xfail(reason="not implemented") - request.node.add_marker(mark) - result = DataFrame([[nulls_fixture]]).to_json() assert result == '{"0":{"0":null}}' + def test_json_pandas_index_nulls(self, nulls_fixture): + # GH 31801 + result = Series([1], index=[nulls_fixture]).to_json() + assert result == '{"null":1}' + def test_readjson_bool_series(self): # GH31464 result = read_json("[true, true, false]", typ="series")
- [ ] closes #31801 - [ ] closes #28609 - [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature - [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit). - [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions. - [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature. Restores 0.25.3 behavior, which honestly feels a little weird to me, since it doesn't roundtrip. I'm not too familiar with JSON format, though (I guess the problem is that null can't be a key?). Also serializes decimal NaNs correctly.
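The body's question ("the problem is that null can't be a key?") can be checked directly with the standard-library `json` module, independent of pandas: JSON object keys must be strings, so a `None` key is coerced to the string `"null"`, which is exactly why a null index label cannot round-trip.

```python
import json

# JSON object keys must be strings, so Python's json module coerces a
# None key to the string "null" -- the same representation the PR
# chooses for NA index labels, and the reason it cannot round-trip.
encoded = json.dumps({None: 1})
print(encoded)  # {"null": 1}

# Decoding gives back a *string* key, not None:
decoded = json.loads(encoded)
print(list(decoded))  # ['null']
```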
https://api.github.com/repos/pandas-dev/pandas/pulls/50400
2022-12-22T17:12:54Z
2023-02-22T13:54:21Z
null
2023-02-22T13:54:23Z
ENH: add math mode to formatter escape="latex-math"
diff --git a/doc/source/whatsnew/v2.0.0.rst b/doc/source/whatsnew/v2.0.0.rst index 3a6d3cc2141be..e09f3e2753ee6 100644 --- a/doc/source/whatsnew/v2.0.0.rst +++ b/doc/source/whatsnew/v2.0.0.rst @@ -281,6 +281,7 @@ Other enhancements - Added new argument ``engine`` to :func:`read_json` to support parsing JSON with pyarrow by specifying ``engine="pyarrow"`` (:issue:`48893`) - Added support for SQLAlchemy 2.0 (:issue:`40686`) - :class:`Index` set operations :meth:`Index.union`, :meth:`Index.intersection`, :meth:`Index.difference`, and :meth:`Index.symmetric_difference` now support ``sort=True``, which will always return a sorted result, unlike the default ``sort=None`` which does not sort in some cases (:issue:`25151`) +- Added new escape mode "latex-math" to avoid escaping "$" in formatter (:issue:`50040`) .. --------------------------------------------------------------------------- .. _whatsnew_200.notable_bug_fixes: diff --git a/pandas/core/config_init.py b/pandas/core/config_init.py index dd751075647d8..3f0366f33a94b 100644 --- a/pandas/core/config_init.py +++ b/pandas/core/config_init.py @@ -873,7 +873,7 @@ def register_converter_cb(key) -> None: "format.escape", None, styler_escape, - validator=is_one_of_factory([None, "html", "latex"]), + validator=is_one_of_factory([None, "html", "latex", "latex-math"]), ) cf.register_option( diff --git a/pandas/io/formats/style_render.py b/pandas/io/formats/style_render.py index c5262b9f52fc7..69cc1e9f1f7ae 100644 --- a/pandas/io/formats/style_render.py +++ b/pandas/io/formats/style_render.py @@ -985,6 +985,8 @@ def format( Use 'latex' to replace the characters ``&``, ``%``, ``$``, ``#``, ``_``, ``{``, ``}``, ``~``, ``^``, and ``\`` in the cell display string with LaTeX-safe sequences. + Use 'latex-math' to replace the characters the same way as in 'latex' mode, + except for math substrings, which start and end with ``$``. Escaping is done before ``formatter``. .. 
versionadded:: 1.3.0 @@ -1101,18 +1103,30 @@ def format( <td .. >NA</td> ... - Using a ``formatter`` with LaTeX ``escape``. + Using a ``formatter`` with ``escape`` in 'latex' mode. >>> df = pd.DataFrame([["123"], ["~ ^"], ["$%#"]]) >>> df.style.format("\\textbf{{{}}}", escape="latex").to_latex() ... # doctest: +SKIP \begin{tabular}{ll} - {} & {0} \\ + & 0 \\ 0 & \textbf{123} \\ 1 & \textbf{\textasciitilde \space \textasciicircum } \\ 2 & \textbf{\$\%\#} \\ \end{tabular} + Using ``escape`` in 'latex-math' mode. + + >>> df = pd.DataFrame([[r"$\sum_{i=1}^{10} a_i$ a~b $\alpha \ + ... = \frac{\beta}{\zeta^2}$"], ["%#^ $ \$x^2 $"]]) + >>> df.style.format(escape="latex-math").to_latex() + ... # doctest: +SKIP + \begin{tabular}{ll} + & 0 \\ + 0 & $\sum_{i=1}^{10} a_i$ a\textasciitilde b $\alpha = \frac{\beta}{\zeta^2}$ \\ + 1 & \%\#\textasciicircum \space $ \$x^2 $ \\ + \end{tabular} + Pandas defines a `number-format` pseudo CSS attribute instead of the `.format` method to create `to_excel` permissible formatting. Note that semi-colons are CSS protected characters but used as separators in Excel's format string. @@ -1739,9 +1753,12 @@ def _str_escape(x, escape): return escape_html(x) elif escape == "latex": return _escape_latex(x) + elif escape == "latex-math": + return _escape_latex_math(x) else: raise ValueError( - f"`escape` only permitted in {{'html', 'latex'}}, got {escape}" + f"`escape` only permitted in {{'html', 'latex', 'latex-math'}}, \ +got {escape}" ) return x @@ -2340,3 +2357,36 @@ def _escape_latex(s): .replace("^", "\\textasciicircum ") .replace("ab2Β§=Β§8yz", "\\textbackslash ") ) + + +def _escape_latex_math(s): + r""" + All characters between two characters ``$`` are preserved. + + The substrings in LaTeX math mode, which start with the character ``$`` + and end with ``$``, are preserved without escaping. Otherwise + regular LaTeX escaping applies. See ``_escape_latex()``. 
+ + Parameters + ---------- + s : str + Input to be escaped + + Return + ------ + str : + Escaped string + """ + s = s.replace(r"\$", r"rt8Β§=Β§7wz") + pattern = re.compile(r"\$.*?\$") + pos = 0 + ps = pattern.search(s, pos) + res = [] + while ps: + res.append(_escape_latex(s[pos : ps.span()[0]])) + res.append(ps.group()) + pos = ps.span()[1] + ps = pattern.search(s, pos) + + res.append(_escape_latex(s[pos : len(s)])) + return "".join(res).replace(r"rt8Β§=Β§7wz", r"\$") diff --git a/pandas/tests/io/formats/style/test_format.py b/pandas/tests/io/formats/style/test_format.py index 0b114ea128b0b..0dec614970467 100644 --- a/pandas/tests/io/formats/style/test_format.py +++ b/pandas/tests/io/formats/style/test_format.py @@ -192,6 +192,15 @@ def test_format_escape_html(escape, exp): assert styler._translate(True, True)["head"][0][1]["display_value"] == f"&{exp}&" +def test_format_escape_latex_math(): + chars = r"$\frac{1}{2} \$ x^2$ ~%#^" + df = DataFrame([[chars]]) + + expected = r"$\frac{1}{2} \$ x^2$ \textasciitilde \%\#\textasciicircum " + s = df.style.format("{0}", escape="latex-math") + assert expected == s._translate(True, True)["body"][0][1]["display_value"] + + def test_format_escape_na_rep(): # tests the na_rep is not escaped df = DataFrame([['<>&"', None]]) @@ -359,7 +368,7 @@ def test_format_decimal(formatter, thousands, precision, func, col): def test_str_escape_error(): - msg = "`escape` only permitted in {'html', 'latex'}, got " + msg = "`escape` only permitted in {'html', 'latex', 'latex-math'}, got " with pytest.raises(ValueError, match=msg): _str_escape("text", "bad_escape") @@ -403,6 +412,9 @@ def test_format_options(): with option_context("styler.format.escape", "latex"): ctx_with_op = df.style._translate(True, True) assert ctx_with_op["body"][1][3]["display_value"] == "\\&\\textasciitilde " + with option_context("styler.format.escape", "latex-math"): + ctx_with_op = df.style._translate(True, True) + assert ctx_with_op["body"][1][3]["display_value"] == 
"\\&\\textasciitilde " # test option: formatter with option_context("styler.format.formatter", {"int": "{:,.2f}"}):
- [x] closes #50040 - [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit). - [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests). Added `latex-math` mode to avoid escaping `$`. We probably shouldn't escape `\[` and `\]`. They cannot be used between `\begin{tabular}` and `\end{tabular}`, which is used for both Series and DataFrames.
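The core idea of `_escape_latex_math` in the diff can be sketched in isolation: hide escaped dollars behind a placeholder, keep `$...$` spans verbatim, and LaTeX-escape only the text between them. This is a hypothetical, simplified version that handles just a few characters for brevity (the real implementation escapes the full LaTeX special set):

```python
import re

def escape_outside_math(s: str) -> str:
    """Escape LaTeX specials outside $...$ math spans (simplified sketch)."""
    def escape(text: str) -> str:
        # Only a subset of the real escape table, for illustration.
        for char, repl in (("%", r"\%"), ("#", r"\#"), ("~", r"\textasciitilde ")):
            text = text.replace(char, repl)
        return text

    placeholder = "\x00"            # hide escaped dollars from the regex
    s = s.replace(r"\$", placeholder)
    parts = []
    pos = 0
    for m in re.finditer(r"\$.*?\$", s):
        parts.append(escape(s[pos:m.start()]))  # escape text before the span
        parts.append(m.group())                 # keep math span verbatim
        pos = m.end()
    parts.append(escape(s[pos:]))               # escape trailing text
    return "".join(parts).replace(placeholder, r"\$")

print(escape_outside_math(r"100% of $x^2$ rows"))  # 100\% of $x^2$ rows
```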
https://api.github.com/repos/pandas-dev/pandas/pulls/50398
2022-12-22T16:43:20Z
2023-03-06T08:34:15Z
2023-03-06T08:34:15Z
2023-03-06T13:19:49Z
STYLE: pre-commit check to ensure that test function names start with 'test'
diff --git a/.pre-commit-config.yaml b/.pre-commit-config.yaml index a54a5827adacb..1c3dd35ef47f5 100644 --- a/.pre-commit-config.yaml +++ b/.pre-commit-config.yaml @@ -333,3 +333,13 @@ repos: additional_dependencies: - autotyping==22.9.0 - libcst==0.4.7 + - id: check-test-naming + name: check that test names start with 'test' + entry: python -m scripts.check_test_naming + types: [python] + files: ^pandas/tests + language: python + exclude: | + (?x) + ^pandas/tests/generic/test_generic.py # GH50380 + |^pandas/tests/io/json/test_readlines.py # GH50378 diff --git a/pandas/tests/computation/test_eval.py b/pandas/tests/computation/test_eval.py index a97205f2921fe..18559b9b4f899 100644 --- a/pandas/tests/computation/test_eval.py +++ b/pandas/tests/computation/test_eval.py @@ -353,7 +353,7 @@ def test_pow(self, lhs, rhs, engine, parser): expected = _eval_single_bin(middle, "**", rhs, engine) tm.assert_almost_equal(result, expected) - def check_single_invert_op(self, lhs, engine, parser): + def test_check_single_invert_op(self, lhs, engine, parser): # simple try: elb = lhs.astype(bool) diff --git a/pandas/tests/frame/methods/test_dtypes.py b/pandas/tests/frame/methods/test_dtypes.py index 87e6ed5b1b135..f3b77c27d75bd 100644 --- a/pandas/tests/frame/methods/test_dtypes.py +++ b/pandas/tests/frame/methods/test_dtypes.py @@ -15,13 +15,6 @@ import pandas._testing as tm -def _check_cast(df, v): - """ - Check if all dtypes of df are equal to v - """ - assert all(s.dtype.name == v for _, s in df.items()) - - class TestDataFrameDataTypes: def test_empty_frame_dtypes(self): empty_df = DataFrame() diff --git a/pandas/tests/frame/methods/test_to_timestamp.py b/pandas/tests/frame/methods/test_to_timestamp.py index acbb51fe79643..d1c10ce37bf3d 100644 --- a/pandas/tests/frame/methods/test_to_timestamp.py +++ b/pandas/tests/frame/methods/test_to_timestamp.py @@ -121,7 +121,7 @@ def test_to_timestamp_columns(self): assert result1.columns.freqstr == "AS-JAN" assert result2.columns.freqstr 
== "AS-JAN" - def to_timestamp_invalid_axis(self): + def test_to_timestamp_invalid_axis(self): index = period_range(freq="A", start="1/1/2001", end="12/1/2009") obj = DataFrame(np.random.randn(len(index), 5), index=index) diff --git a/pandas/tests/internals/test_internals.py b/pandas/tests/internals/test_internals.py index b2c2df52e5ce0..3044aecc26b4f 100644 --- a/pandas/tests/internals/test_internals.py +++ b/pandas/tests/internals/test_internals.py @@ -1323,10 +1323,6 @@ def test_period_can_hold_element(self, element): elem = element(dti) self.check_series_setitem(elem, pi, False) - def check_setting(self, elem, index: Index, inplace: bool): - self.check_series_setitem(elem, index, inplace) - self.check_frame_setitem(elem, index, inplace) - def check_can_hold_element(self, obj, elem, inplace: bool): blk = obj._mgr.blocks[0] if inplace: @@ -1350,23 +1346,6 @@ def check_series_setitem(self, elem, index: Index, inplace: bool): else: assert ser.dtype == object - def check_frame_setitem(self, elem, index: Index, inplace: bool): - arr = index._data.copy() - df = DataFrame(arr) - - self.check_can_hold_element(df, elem, inplace) - - if is_scalar(elem): - df.iloc[0, 0] = elem - else: - df.iloc[: len(elem), 0] = elem - - if inplace: - # assertion here implies setting was done inplace - assert df._mgr.arrays[0] is arr - else: - assert df.dtypes[0] == object - class TestShouldStore: def test_should_store_categorical(self): diff --git a/pandas/tests/io/test_feather.py b/pandas/tests/io/test_feather.py index e58df00c65608..88bf04f518e12 100644 --- a/pandas/tests/io/test_feather.py +++ b/pandas/tests/io/test_feather.py @@ -113,10 +113,11 @@ def test_read_columns(self): columns = ["col1", "col3"] self.check_round_trip(df, expected=df[columns], columns=columns) - def read_columns_different_order(self): + def test_read_columns_different_order(self): # GH 33878 df = pd.DataFrame({"A": [1, 2], "B": ["x", "y"], "C": [True, False]}) - self.check_round_trip(df, columns=["B", "A"]) + 
expected = df[["B", "A"]] + self.check_round_trip(df, expected, columns=["B", "A"]) def test_unsupported_other(self): diff --git a/pandas/tests/reshape/concat/test_append_common.py b/pandas/tests/reshape/concat/test_append_common.py index e0275fa85d66e..938d18be8657a 100644 --- a/pandas/tests/reshape/concat/test_append_common.py +++ b/pandas/tests/reshape/concat/test_append_common.py @@ -55,21 +55,6 @@ def item(self, request): item2 = item - def _check_expected_dtype(self, obj, label): - """ - Check whether obj has expected dtype depending on label - considering not-supported dtypes - """ - if isinstance(obj, Index): - assert obj.dtype == label - elif isinstance(obj, Series): - if label.startswith("period"): - assert obj.dtype == "Period[M]" - else: - assert obj.dtype == label - else: - raise ValueError - def test_dtypes(self, item, index_or_series): # to confirm test case covers intended dtypes typ, vals = item diff --git a/pandas/tests/series/methods/test_explode.py b/pandas/tests/series/methods/test_explode.py index c73737dad89aa..0dc3ef25a39a4 100644 --- a/pandas/tests/series/methods/test_explode.py +++ b/pandas/tests/series/methods/test_explode.py @@ -76,7 +76,7 @@ def test_invert_array(): @pytest.mark.parametrize( "s", [pd.Series([1, 2, 3]), pd.Series(pd.date_range("2019", periods=3, tz="UTC"))] ) -def non_object_dtype(s): +def test_non_object_dtype(s): result = s.explode() tm.assert_series_equal(result, s) diff --git a/pandas/tests/strings/test_cat.py b/pandas/tests/strings/test_cat.py index 01c5bf25e0601..ff2898107a9e4 100644 --- a/pandas/tests/strings/test_cat.py +++ b/pandas/tests/strings/test_cat.py @@ -11,7 +11,13 @@ _testing as tm, concat, ) -from pandas.tests.strings.test_strings import assert_series_or_index_equal + + +def assert_series_or_index_equal(left, right): + if isinstance(left, Series): + tm.assert_series_equal(left, right) + else: # Index + tm.assert_index_equal(left, right) @pytest.mark.parametrize("other", [None, Series, Index]) diff 
--git a/pandas/tests/strings/test_strings.py b/pandas/tests/strings/test_strings.py index 4385f71dc653f..a9335e156d9db 100644 --- a/pandas/tests/strings/test_strings.py +++ b/pandas/tests/strings/test_strings.py @@ -26,13 +26,6 @@ def test_startswith_endswith_non_str_patterns(pattern): ser.str.endswith(pattern) -def assert_series_or_index_equal(left, right): - if isinstance(left, Series): - tm.assert_series_equal(left, right) - else: # Index - tm.assert_index_equal(left, right) - - # test integer/float dtypes (inferred by constructor) and mixed diff --git a/pandas/tests/tseries/offsets/test_dst.py b/pandas/tests/tseries/offsets/test_dst.py index 9c6d6a686e9a5..347c91a67ebb5 100644 --- a/pandas/tests/tseries/offsets/test_dst.py +++ b/pandas/tests/tseries/offsets/test_dst.py @@ -30,13 +30,18 @@ YearEnd, ) -from pandas.tests.tseries.offsets.test_offsets import get_utc_offset_hours from pandas.util.version import Version # error: Module has no attribute "__version__" pytz_version = Version(pytz.__version__) # type: ignore[attr-defined] +def get_utc_offset_hours(ts): + # take a Timestamp and compute total hours of utc offset + o = ts.utcoffset() + return (o.days * 24 * 3600 + o.seconds) / 3600.0 + + class TestDST: # one microsecond before the DST transition diff --git a/pandas/tests/tseries/offsets/test_offsets.py b/pandas/tests/tseries/offsets/test_offsets.py index 135227d66d541..933723edd6e66 100644 --- a/pandas/tests/tseries/offsets/test_offsets.py +++ b/pandas/tests/tseries/offsets/test_offsets.py @@ -900,12 +900,6 @@ def test_str_for_named_is_name(self): assert offset.freqstr == name -def get_utc_offset_hours(ts): - # take a Timestamp and compute total hours of utc offset - o = ts.utcoffset() - return (o.days * 24 * 3600 + o.seconds) / 3600.0 - - # --------------------------------------------------------------------- diff --git a/pandas/tests/util/test_assert_frame_equal.py b/pandas/tests/util/test_assert_frame_equal.py index 1fe2a7428486e..a6e29e243b0c8 100644 --- 
a/pandas/tests/util/test_assert_frame_equal.py +++ b/pandas/tests/util/test_assert_frame_equal.py @@ -34,24 +34,6 @@ def _assert_frame_equal_both(a, b, **kwargs): tm.assert_frame_equal(b, a, **kwargs) -def _assert_not_frame_equal(a, b, **kwargs): - """ - Check that two DataFrame are not equal. - - Parameters - ---------- - a : DataFrame - The first DataFrame to compare. - b : DataFrame - The second DataFrame to compare. - kwargs : dict - The arguments passed to `tm.assert_frame_equal`. - """ - msg = "The two DataFrames were equal when they shouldn't have been" - with pytest.raises(AssertionError, match=msg): - tm.assert_frame_equal(a, b, **kwargs) - - @pytest.mark.parametrize("check_like", [True, False]) def test_frame_equal_row_order_mismatch(check_like, obj_fixture): df1 = DataFrame({"A": [1, 2, 3], "B": [4, 5, 6]}, index=["a", "b", "c"]) diff --git a/scripts/check_test_naming.py b/scripts/check_test_naming.py new file mode 100644 index 0000000000000..33890feb8692d --- /dev/null +++ b/scripts/check_test_naming.py @@ -0,0 +1,152 @@ +""" +Check that test names start with `test`, and that test classes start with `Test`. + +This is meant to be run as a pre-commit hook - to run it manually, you can do: + + pre-commit run check-test-naming --all-files + +NOTE: if this finds a false positive, you can add the comment `# not a test` to the +class or function definition. Though hopefully that shouldn't be necessary. 
+""" +from __future__ import annotations + +import argparse +import ast +import os +from pathlib import Path +import sys +from typing import ( + Iterator, + Sequence, +) + +PRAGMA = "# not a test" + + +def _find_names(node: ast.Module) -> Iterator[str]: + for _node in ast.walk(node): + if isinstance(_node, ast.Name): + yield _node.id + elif isinstance(_node, ast.Attribute): + yield _node.attr + + +def _is_fixture(node: ast.expr) -> bool: + if isinstance(node, ast.Call): + node = node.func + return ( + isinstance(node, ast.Attribute) + and node.attr == "fixture" + and isinstance(node.value, ast.Name) + and node.value.id == "pytest" + ) + + +def _is_register_dtype(node): + return isinstance(node, ast.Name) and node.id == "register_extension_dtype" + + +def is_misnamed_test_func( + node: ast.expr | ast.stmt, names: Sequence[str], line: str +) -> bool: + return ( + isinstance(node, ast.FunctionDef) + and not node.name.startswith("test") + and names.count(node.name) == 0 + and not any(_is_fixture(decorator) for decorator in node.decorator_list) + and PRAGMA not in line + and node.name + not in ("teardown_method", "setup_method", "teardown_class", "setup_class") + ) + + +def is_misnamed_test_class( + node: ast.expr | ast.stmt, names: Sequence[str], line: str +) -> bool: + return ( + isinstance(node, ast.ClassDef) + and not node.name.startswith("Test") + and names.count(node.name) == 0 + and not any(_is_register_dtype(decorator) for decorator in node.decorator_list) + and PRAGMA not in line + ) + + +def main(content: str, file: str) -> int: + lines = content.splitlines() + tree = ast.parse(content) + names = list(_find_names(tree)) + ret = 0 + for node in tree.body: + if is_misnamed_test_func(node, names, lines[node.lineno - 1]): + print( + f"{file}:{node.lineno}:{node.col_offset} " + "found test function which does not start with 'test'" + ) + ret = 1 + elif is_misnamed_test_class(node, names, lines[node.lineno - 1]): + print( + f"{file}:{node.lineno}:{node.col_offset} " 
+ "found test class which does not start with 'Test'" + ) + ret = 1 + if ( + isinstance(node, ast.ClassDef) + and names.count(node.name) == 0 + and not any( + _is_register_dtype(decorator) for decorator in node.decorator_list + ) + and PRAGMA not in lines[node.lineno - 1] + ): + for _node in node.body: + if is_misnamed_test_func(_node, names, lines[_node.lineno - 1]): + # It could be that this function is used somewhere by the + # parent class. For example, there might be a base class + # with + # + # class Foo: + # def foo(self): + # assert 1+1==2 + # def test_foo(self): + # self.foo() + # + # and then some subclass overwrites `foo`. So, we check that + # `self.foo` doesn't appear in any of the test classes. + # Note some false negatives might get through, but that's OK. + # This is good enough that has helped identify several examples + # of tests not being run. + assert isinstance(_node, ast.FunctionDef) # help mypy + should_continue = False + for _file in (Path("pandas") / "tests").rglob("*.py"): + with open(os.path.join(_file)) as fd: + _content = fd.read() + if f"self.{_node.name}" in _content: + should_continue = True + break + if should_continue: + continue + + print( + f"{file}:{_node.lineno}:{_node.col_offset} " + "found test function which does not start with 'test'" + ) + ret = 1 + return ret + + +if __name__ == "__main__": + parser = argparse.ArgumentParser() + parser.add_argument("paths", nargs="*") + args = parser.parse_args() + + ret = 0 + + for file in args.paths: + filename = os.path.basename(file) + if not (filename.startswith("test") and filename.endswith(".py")): + continue + with open(file, encoding="utf-8") as fd: + content = fd.read() + ret |= main(content, file) + + sys.exit(ret) diff --git a/scripts/tests/test_check_test_naming.py b/scripts/tests/test_check_test_naming.py new file mode 100644 index 0000000000000..9ddaf2fe2a97d --- /dev/null +++ b/scripts/tests/test_check_test_naming.py @@ -0,0 +1,54 @@ +import pytest + +from 
scripts.check_test_naming import main + + +@pytest.mark.parametrize( + "src, expected_out, expected_ret", + [ + ( + "def foo(): pass\n", + "t.py:1:0 found test function which does not start with 'test'\n", + 1, + ), + ( + "class Foo:\n def test_foo(): pass\n", + "t.py:1:0 found test class which does not start with 'Test'\n", + 1, + ), + ("def test_foo(): pass\n", "", 0), + ( + "class TestFoo:\n def foo(): pass\n", + "t.py:2:4 found test function which does not start with 'test'\n", + 1, + ), + ("class TestFoo:\n def test_foo(): pass\n", "", 0), + ( + "class Foo:\n def foo(): pass\n", + "t.py:1:0 found test class which does not start with 'Test'\n" + "t.py:2:4 found test function which does not start with 'test'\n", + 1, + ), + ( + "def foo():\n pass\ndef test_foo():\n foo()\n", + "", + 0, + ), + ( + "class Foo: # not a test\n" + " pass\n" + "def test_foo():\n" + " Class.foo()\n", + "", + 0, + ), + ("@pytest.fixture\ndef foo(): pass\n", "", 0), + ("@pytest.fixture()\ndef foo(): pass\n", "", 0), + ("@register_extension_dtype\nclass Foo: pass\n", "", 0), + ], +) +def test_main(capsys, src, expected_out, expected_ret): + ret = main(src, "t.py") + out, _ = capsys.readouterr() + assert out == expected_out + assert ret == expected_ret
- [ ] closes #50379 - [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature - [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit). - [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions. - [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
https://api.github.com/repos/pandas-dev/pandas/pulls/50397
2022-12-22T16:42:04Z
2022-12-24T07:11:22Z
2022-12-24T07:11:21Z
2022-12-24T12:31:30Z
BUG/COMPAT: fix assert_* functions for nested arrays with latest numpy
diff --git a/doc/source/whatsnew/v1.5.3.rst b/doc/source/whatsnew/v1.5.3.rst index 0cb8796e3fb5d..e514267a4b6b8 100644 --- a/doc/source/whatsnew/v1.5.3.rst +++ b/doc/source/whatsnew/v1.5.3.rst @@ -30,6 +30,7 @@ Bug fixes - Bug in :meth:`.Styler.to_excel` leading to error when unrecognized ``border-style`` (e.g. ``"hair"``) provided to Excel writers (:issue:`48649`) - Bug when chaining several :meth:`.Styler.concat` calls, only the last styler was concatenated (:issue:`49207`) - Fixed bug when instantiating a :class:`DataFrame` subclass inheriting from ``typing.Generic`` that triggered a ``UserWarning`` on python 3.11 (:issue:`49649`) +- Bug in :func:`pandas.testing.assert_series_equal` (and equivalent ``assert_`` functions) when having nested data and using numpy >= 1.25 (:issue:`50360`) - .. --------------------------------------------------------------------------- diff --git a/pandas/core/dtypes/missing.py b/pandas/core/dtypes/missing.py index 000b5ebbdd2f7..3ee5d8c2f2676 100644 --- a/pandas/core/dtypes/missing.py +++ b/pandas/core/dtypes/missing.py @@ -584,6 +584,10 @@ def _array_equivalent_object(left: np.ndarray, right: np.ndarray, strict_nan: bo if "boolean value of NA is ambiguous" in str(err): return False raise + except ValueError: + # numpy can raise a ValueError if left and right cannot be + # compared (e.g. 
nested arrays) + return False return True diff --git a/pandas/tests/dtypes/test_missing.py b/pandas/tests/dtypes/test_missing.py index 94707dc2e68c5..76786a53e4478 100644 --- a/pandas/tests/dtypes/test_missing.py +++ b/pandas/tests/dtypes/test_missing.py @@ -545,18 +545,120 @@ def test_array_equivalent_str(dtype): ) -def test_array_equivalent_nested(): +@pytest.mark.parametrize( + "strict_nan", [pytest.param(True, marks=pytest.mark.xfail), False] +) +def test_array_equivalent_nested(strict_nan): # reached in groupby aggregations, make sure we use np.any when checking # if the comparison is truthy - left = np.array([np.array([50, 70, 90]), np.array([20, 30, 40])], dtype=object) - right = np.array([np.array([50, 70, 90]), np.array([20, 30, 40])], dtype=object) + left = np.array([np.array([50, 70, 90]), np.array([20, 30])], dtype=object) + right = np.array([np.array([50, 70, 90]), np.array([20, 30])], dtype=object) - assert array_equivalent(left, right, strict_nan=True) - assert not array_equivalent(left, right[::-1], strict_nan=True) + assert array_equivalent(left, right, strict_nan=strict_nan) + assert not array_equivalent(left, right[::-1], strict_nan=strict_nan) - left = np.array([np.array([50, 50, 50]), np.array([40, 40, 40])], dtype=object) + left = np.empty(2, dtype=object) + left[:] = [np.array([50, 70, 90]), np.array([20, 30, 40])] + right = np.empty(2, dtype=object) + right[:] = [np.array([50, 70, 90]), np.array([20, 30, 40])] + assert array_equivalent(left, right, strict_nan=strict_nan) + assert not array_equivalent(left, right[::-1], strict_nan=strict_nan) + + left = np.array([np.array([50, 50, 50]), np.array([40, 40])], dtype=object) right = np.array([50, 40]) - assert not array_equivalent(left, right, strict_nan=True) + assert not array_equivalent(left, right, strict_nan=strict_nan) + + +@pytest.mark.parametrize( + "strict_nan", [pytest.param(True, marks=pytest.mark.xfail), False] +) +def test_array_equivalent_nested2(strict_nan): + # more than one level 
of nesting + left = np.array( + [ + np.array([np.array([50, 70]), np.array([90])], dtype=object), + np.array([np.array([20, 30])], dtype=object), + ], + dtype=object, + ) + right = np.array( + [ + np.array([np.array([50, 70]), np.array([90])], dtype=object), + np.array([np.array([20, 30])], dtype=object), + ], + dtype=object, + ) + assert array_equivalent(left, right, strict_nan=strict_nan) + assert not array_equivalent(left, right[::-1], strict_nan=strict_nan) + + left = np.array([np.array([np.array([50, 50, 50])], dtype=object)], dtype=object) + right = np.array([50]) + assert not array_equivalent(left, right, strict_nan=strict_nan) + + +@pytest.mark.parametrize( + "strict_nan", [pytest.param(True, marks=pytest.mark.xfail), False] +) +def test_array_equivalent_nested_list(strict_nan): + left = np.array([[50, 70, 90], [20, 30]], dtype=object) + right = np.array([[50, 70, 90], [20, 30]], dtype=object) + + assert array_equivalent(left, right, strict_nan=strict_nan) + assert not array_equivalent(left, right[::-1], strict_nan=strict_nan) + + left = np.array([[50, 50, 50], [40, 40]], dtype=object) + right = np.array([50, 40]) + assert not array_equivalent(left, right, strict_nan=strict_nan) + + +@pytest.mark.xfail(reason="failing") +@pytest.mark.parametrize("strict_nan", [True, False]) +def test_array_equivalent_nested_mixed_list(strict_nan): + # mixed arrays / lists in left and right + # https://github.com/pandas-dev/pandas/issues/50360 + left = np.array([np.array([1, 2, 3]), np.array([4, 5])], dtype=object) + right = np.array([[1, 2, 3], [4, 5]], dtype=object) + + assert array_equivalent(left, right, strict_nan=strict_nan) + assert not array_equivalent(left, right[::-1], strict_nan=strict_nan) + + # multiple levels of nesting + left = np.array( + [ + np.array([np.array([1, 2, 3]), np.array([4, 5])], dtype=object), + np.array([np.array([6]), np.array([7, 8]), np.array([9])], dtype=object), + ], + dtype=object, + ) + right = np.array([[[1, 2, 3], [4, 5]], [[6], [7, 8], 
[9]]], dtype=object) + assert array_equivalent(left, right, strict_nan=strict_nan) + assert not array_equivalent(left, right[::-1], strict_nan=strict_nan) + + # same-length lists + subarr = np.empty(2, dtype=object) + subarr[:] = [ + np.array([None, "b"], dtype=object), + np.array(["c", "d"], dtype=object), + ] + left = np.array([subarr, None], dtype=object) + right = np.array([list([[None, "b"], ["c", "d"]]), None], dtype=object) + assert array_equivalent(left, right, strict_nan=strict_nan) + assert not array_equivalent(left, right[::-1], strict_nan=strict_nan) + + +@pytest.mark.xfail(reason="failing") +@pytest.mark.parametrize("strict_nan", [True, False]) +def test_array_equivalent_nested_dicts(strict_nan): + left = np.array([{"f1": 1, "f2": np.array(["a", "b"], dtype=object)}], dtype=object) + right = np.array( + [{"f1": 1, "f2": np.array(["a", "b"], dtype=object)}], dtype=object + ) + assert array_equivalent(left, right, strict_nan=strict_nan) + assert not array_equivalent(left, right[::-1], strict_nan=strict_nan) + + right2 = np.array([{"f1": 1, "f2": ["a", "b"]}], dtype=object) + assert array_equivalent(left, right2, strict_nan=strict_nan) + assert not array_equivalent(left, right2[::-1], strict_nan=strict_nan) def test_array_equivalent_index_with_tuples(): diff --git a/pandas/tests/util/test_assert_almost_equal.py b/pandas/tests/util/test_assert_almost_equal.py index ba52536e246d0..d5c334c87315d 100644 --- a/pandas/tests/util/test_assert_almost_equal.py +++ b/pandas/tests/util/test_assert_almost_equal.py @@ -448,3 +448,87 @@ def test_assert_almost_equal_iterable_values_mismatch(): with pytest.raises(AssertionError, match=msg): tm.assert_almost_equal([1, 2], [1, 3]) + + +subarr = np.empty(2, dtype=object) +subarr[:] = [np.array([None, "b"], dtype=object), np.array(["c", "d"], dtype=object)] + +NESTED_CASES = [ + # nested array + ( + np.array([np.array([50, 70, 90]), np.array([20, 30])], dtype=object), + np.array([np.array([50, 70, 90]), np.array([20, 30])], 
dtype=object), + ), + # >1 level of nesting + ( + np.array( + [ + np.array([np.array([50, 70]), np.array([90])], dtype=object), + np.array([np.array([20, 30])], dtype=object), + ], + dtype=object, + ), + np.array( + [ + np.array([np.array([50, 70]), np.array([90])], dtype=object), + np.array([np.array([20, 30])], dtype=object), + ], + dtype=object, + ), + ), + # lists + ( + np.array([[50, 70, 90], [20, 30]], dtype=object), + np.array([[50, 70, 90], [20, 30]], dtype=object), + ), + # mixed array/list + ( + np.array([np.array([1, 2, 3]), np.array([4, 5])], dtype=object), + np.array([[1, 2, 3], [4, 5]], dtype=object), + ), + ( + np.array( + [ + np.array([np.array([1, 2, 3]), np.array([4, 5])], dtype=object), + np.array( + [np.array([6]), np.array([7, 8]), np.array([9])], dtype=object + ), + ], + dtype=object, + ), + np.array([[[1, 2, 3], [4, 5]], [[6], [7, 8], [9]]], dtype=object), + ), + # same-length lists + ( + np.array([subarr, None], dtype=object), + np.array([list([[None, "b"], ["c", "d"]]), None], dtype=object), + ), + # dicts + ( + np.array([{"f1": 1, "f2": np.array(["a", "b"], dtype=object)}], dtype=object), + np.array([{"f1": 1, "f2": np.array(["a", "b"], dtype=object)}], dtype=object), + ), + ( + np.array([{"f1": 1, "f2": np.array(["a", "b"], dtype=object)}], dtype=object), + np.array([{"f1": 1, "f2": ["a", "b"]}], dtype=object), + ), + # array/list of dicts + ( + np.array( + [ + np.array( + [{"f1": 1, "f2": np.array(["a", "b"], dtype=object)}], dtype=object + ), + np.array([], dtype=object), + ], + dtype=object, + ), + np.array([[{"f1": 1, "f2": ["a", "b"]}], []], dtype=object), + ), +] + + +@pytest.mark.filterwarnings("ignore:elementwise comparison failed:DeprecationWarning") +@pytest.mark.parametrize("a,b", NESTED_CASES) +def test_assert_almost_equal_array_nested(a, b): + _assert_almost_equal_both(a, b)
This adds tests for `assert_almost_equal` (used by `assert_series_equal` et al in case of object dtype arrays) that all pass on pandas main with released numpy (except for two cases with dict of array), and fixes the implementation of `array_equivalent` (used by `assert_almost_equal`) to return False instead of raising an error if numpy cannot compare the arrays (numpy 1.25.dev starts to raise for this instead of returning False). I also added a bunch of equivalent tests for `array_equivalent` itself, but those don't all pass (and added xfails). Trying to fix those as well might need more extensive changes, and I would prefer to do that separate from this PR (to keep this one possible to backport). See https://github.com/pandas-dev/pandas/issues/50360#issuecomment-1362678681 for an illustration of the behaviour that changed in numpy 1.25.dev that causes this. - [x] closes https://github.com/pandas-dev/pandas/issues/50360 - [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature - [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit). - [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions. - [x] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
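The behavioural fix described above (return False instead of propagating numpy's comparison error for ragged object arrays) can be sketched in isolation. `safe_array_equivalent` is a hypothetical helper for illustration only, not the pandas implementation:

```python
def safe_array_equivalent(left, right):
    """Return True if two object sequences compare equal elementwise.

    Mirrors the idea in the PR: if the comparison itself raises (as
    numpy 1.25.dev does when comparing ragged object arrays), treat
    the arrays as not equivalent instead of propagating the error.
    """
    try:
        if len(left) != len(right):
            return False
        for a, b in zip(left, right):
            result = a == b
            # nested sequences may compare elementwise; require all True
            if hasattr(result, "__iter__"):
                if not all(result):
                    return False
            elif not result:
                return False
        return True
    except (TypeError, ValueError):
        # the comparison itself failed -> not equivalent
        return False
```

The real `array_equivalent` handles NaN/None semantics and `strict_nan` on top of this, but the try/except-to-False shape is the core of the fix.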
https://api.github.com/repos/pandas-dev/pandas/pulls/50396
2022-12-22T16:17:39Z
2023-01-13T23:13:58Z
2023-01-13T23:13:58Z
2023-01-14T16:50:23Z
CLN remove unused functions
diff --git a/pandas/tests/io/formats/test_format.py b/pandas/tests/io/formats/test_format.py index a75b2a427a80a..f5aa481b2172e 100644 --- a/pandas/tests/io/formats/test_format.py +++ b/pandas/tests/io/formats/test_format.py @@ -11,7 +11,6 @@ import itertools import locale from operator import methodcaller -import os from pathlib import Path import re from shutil import get_terminal_size @@ -106,11 +105,6 @@ def _assert_filepath_or_buffer_equals(expected): return _assert_filepath_or_buffer_equals -def curpath(): - pth, _ = os.path.split(os.path.abspath(__file__)) - return pth - - def has_info_repr(df): r = repr(df) c1 = r.split("\n")[0].startswith("<class") diff --git a/pandas/tests/util/test_assert_frame_equal.py b/pandas/tests/util/test_assert_frame_equal.py index eb9223a8221eb..1fe2a7428486e 100644 --- a/pandas/tests/util/test_assert_frame_equal.py +++ b/pandas/tests/util/test_assert_frame_equal.py @@ -52,25 +52,6 @@ def _assert_not_frame_equal(a, b, **kwargs): tm.assert_frame_equal(a, b, **kwargs) -def _assert_not_frame_equal_both(a, b, **kwargs): - """ - Check that two DataFrame are not equal. - - This check is performed commutatively. - - Parameters - ---------- - a : DataFrame - The first DataFrame to compare. - b : DataFrame - The second DataFrame to compare. - kwargs : dict - The arguments passed to `tm.assert_frame_equal`. - """ - _assert_not_frame_equal(a, b, **kwargs) - _assert_not_frame_equal(b, a, **kwargs) - - @pytest.mark.parametrize("check_like", [True, False]) def test_frame_equal_row_order_mismatch(check_like, obj_fixture): df1 = DataFrame({"A": [1, 2, 3], "B": [4, 5, 6]}, index=["a", "b", "c"])
closes #50393 CLN remove _assert_not_frame_equal_both?
https://api.github.com/repos/pandas-dev/pandas/pulls/50394
2022-12-22T15:14:29Z
2022-12-22T20:59:48Z
2022-12-22T20:59:48Z
2022-12-22T20:59:48Z
ENH: Add cumsum to ArrowExtensionArray
diff --git a/pandas/core/arrays/arrow/array.py b/pandas/core/arrays/arrow/array.py index 1bbec97756e79..6250c298f291f 100644 --- a/pandas/core/arrays/arrow/array.py +++ b/pandas/core/arrays/arrow/array.py @@ -853,6 +853,45 @@ def _concat_same_type( arr = pa.chunked_array(chunks) return cls(arr) + def _accumulate( + self, name: str, *, skipna: bool = True, **kwargs + ) -> ArrowExtensionArray | ExtensionArray: + """ + Return an ExtensionArray performing an accumulation operation. + + The underlying data type might change. + + Parameters + ---------- + name : str + Name of the function, supported values are: + - cummin + - cummax + - cumsum + - cumprod + skipna : bool, default True + If True, skip NA values. + **kwargs + Additional keyword arguments passed to the accumulation function. + Currently, there is no supported kwarg. + + Returns + ------- + array + + Raises + ------ + NotImplementedError : subclass does not define accumulations + """ + pyarrow_name = { + "cumsum": "cumulative_sum_checked", + }.get(name, name) + pyarrow_meth = getattr(pc, pyarrow_name, None) + if pyarrow_meth is None: + return super()._accumulate(name, skipna=skipna, **kwargs) + result = pyarrow_meth(self._data, skip_nulls=skipna, **kwargs) + return type(self)(result) + def _reduce(self, name: str, *, skipna: bool = True, **kwargs): """ Return a scalar result of performing the reduction operation. 
diff --git a/pandas/tests/extension/test_arrow.py b/pandas/tests/extension/test_arrow.py index f93cf3d6bc138..9b42b86efd0d0 100644 --- a/pandas/tests/extension/test_arrow.py +++ b/pandas/tests/extension/test_arrow.py @@ -343,6 +343,54 @@ def test_getitem_scalar(self, data): super().test_getitem_scalar(data) +class TestBaseAccumulateTests(base.BaseAccumulateTests): + def check_accumulate(self, s, op_name, skipna): + result = getattr(s, op_name)(skipna=skipna).astype("Float64") + expected = getattr(s.astype("Float64"), op_name)(skipna=skipna) + self.assert_series_equal(result, expected, check_dtype=False) + + @pytest.mark.parametrize("skipna", [True, False]) + def test_accumulate_series_raises( + self, data, all_numeric_accumulations, skipna, request + ): + pa_type = data.dtype.pyarrow_dtype + if ( + (pa.types.is_integer(pa_type) or pa.types.is_floating(pa_type)) + and all_numeric_accumulations == "cumsum" + and not pa_version_under9p0 + ): + request.node.add_marker( + pytest.mark.xfail( + reason=f"{all_numeric_accumulations} implemented for {pa_type}" + ) + ) + op_name = all_numeric_accumulations + ser = pd.Series(data) + + with pytest.raises(NotImplementedError): + getattr(ser, op_name)(skipna=skipna) + + @pytest.mark.parametrize("skipna", [True, False]) + def test_accumulate_series(self, data, all_numeric_accumulations, skipna, request): + pa_type = data.dtype.pyarrow_dtype + if all_numeric_accumulations != "cumsum" or pa_version_under9p0: + request.node.add_marker( + pytest.mark.xfail( + reason=f"{all_numeric_accumulations} not implemented", + raises=NotImplementedError, + ) + ) + elif not (pa.types.is_integer(pa_type) or pa.types.is_floating(pa_type)): + request.node.add_marker( + pytest.mark.xfail( + reason=f"{all_numeric_accumulations} not implemented for {pa_type}" + ) + ) + op_name = all_numeric_accumulations + ser = pd.Series(data) + self.check_accumulate(ser, op_name, skipna) + + class TestBaseNumericReduce(base.BaseNumericReduceTests): def 
check_reduce(self, ser, op_name, skipna): pa_dtype = ser.dtype.pyarrow_dtype
- [ ] closes #xxxx (Replace xxxx with the GitHub issue number) - [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature - [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit). - [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions. - [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature. Don't think a whatsnew is needed since `EA._accumulate` was added in 2.0
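The `skipna` semantics that the new `_accumulate` delegates to pyarrow (a null stays null in the output but, with `skipna=True`, does not poison later sums) can be sketched without pyarrow. `cumsum_skipna` is a hypothetical stand-in, not the pyarrow kernel:

```python
def cumsum_skipna(values, skipna=True):
    """Cumulative sum over a list using None as the missing value.

    With skipna=True, None stays None in the output but the running
    total continues past it; with skipna=False, everything from the
    first None onward becomes None.
    """
    out, total, seen_na = [], 0, False
    for v in values:
        if v is None:
            seen_na = True
            out.append(None)
            continue
        if seen_na and not skipna:
            out.append(None)
            continue
        total += v
        out.append(total)
    return out
```

The `_checked` variant chosen in the diff (`cumulative_sum_checked`) additionally raises on integer overflow instead of wrapping, matching pandas' overflow behaviour.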
https://api.github.com/repos/pandas-dev/pandas/pulls/50389
2022-12-22T00:58:00Z
2022-12-24T10:01:12Z
2022-12-24T10:01:12Z
2022-12-24T19:26:52Z
CI/WEB: Use Github token to authenticate API calls
diff --git a/.github/workflows/docbuild-and-upload.yml b/.github/workflows/docbuild-and-upload.yml index ee79c10c12d4e..908259597cafb 100644 --- a/.github/workflows/docbuild-and-upload.yml +++ b/.github/workflows/docbuild-and-upload.yml @@ -15,6 +15,7 @@ on: env: ENV_FILE: environment.yml PANDAS_CI: 1 + GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }} permissions: contents: read diff --git a/web/pandas_web.py b/web/pandas_web.py index d054e273cde5e..e9e8e70066b3f 100755 --- a/web/pandas_web.py +++ b/web/pandas_web.py @@ -42,6 +42,12 @@ import requests import yaml +api_token = os.environ.get("GITHUB_TOKEN") +if api_token is not None: + GITHUB_API_HEADERS = {"Authorization": f"Bearer {api_token}"} +else: + GITHUB_API_HEADERS = {} + class Preprocessors: """ @@ -166,7 +172,9 @@ def maintainers_add_info(context): for kind in ("active", "inactive"): context["maintainers"][f"{kind}_with_github_info"] = [] for user in context["maintainers"][kind]: - resp = requests.get(f"https://api.github.com/users/{user}") + resp = requests.get( + f"https://api.github.com/users/{user}", headers=GITHUB_API_HEADERS + ) if context["ignore_io_errors"] and resp.status_code == 403: return context resp.raise_for_status() @@ -178,7 +186,10 @@ def home_add_releases(context): context["releases"] = [] github_repo_url = context["main"]["github_repo_url"] - resp = requests.get(f"https://api.github.com/repos/{github_repo_url}/releases") + resp = requests.get( + f"https://api.github.com/repos/{github_repo_url}/releases", + headers=GITHUB_API_HEADERS, + ) if context["ignore_io_errors"] and resp.status_code == 403: return context resp.raise_for_status() @@ -245,7 +256,8 @@ def roadmap_pdeps(context): github_repo_url = context["main"]["github_repo_url"] resp = requests.get( "https://api.github.com/search/issues?" 
- f"q=is:pr is:open label:PDEP repo:{github_repo_url}" + f"q=is:pr is:open label:PDEP repo:{github_repo_url}", + headers=GITHUB_API_HEADERS, ) if context["ignore_io_errors"] and resp.status_code == 403: return context
Hopefully helps with the doc and web builds sometimes failing because they hit the rate limit of the GitHub API ``` Traceback (most recent call last): File "web/pandas_web.py", line 403, in <module> main(args.source_path, args.target_path, args.base_url, args.ignore_io_errors) File "web/pandas_web.py", line 349, in main context = get_context(config_fname, ignore_io_errors, base_url=base_url) File "web/pandas_web.py", line 305, in get_context context = preprocessor(context) File "/home/runner/work/pandas/pandas/web/pandas_web.py", line 172, in maintainers_add_info resp.raise_for_status() File "/home/runner/micromamba/envs/test/lib/python3.8/site-packages/requests/models.py", line 1021, in raise_for_status raise HTTPError(http_error_msg, response=self) requests.exceptions.HTTPError: 403 Client Error: rate limit exceeded for url: https://api.github.com/users/shoyer ```
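The header-building pattern from the diff (authenticate with a Bearer token when `GITHUB_TOKEN` is set, fall back to unauthenticated requests otherwise) can be isolated as a small helper; `build_github_headers` is a hypothetical name used only for illustration:

```python
import os


def build_github_headers(env=os.environ):
    """Return request headers for GitHub API calls.

    Uses a Bearer token from the GITHUB_TOKEN environment variable
    when present, which gives a much higher API rate limit than
    unauthenticated requests; otherwise returns no auth headers.
    """
    token = env.get("GITHUB_TOKEN")
    if token is not None:
        return {"Authorization": f"Bearer {token}"}
    return {}
```

In the workflow, `secrets.GITHUB_TOKEN` is exported into the job environment, so the same build script works both in CI (authenticated) and locally (unauthenticated).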
https://api.github.com/repos/pandas-dev/pandas/pulls/50388
2022-12-21T23:02:48Z
2022-12-28T21:10:11Z
2022-12-28T21:10:11Z
2022-12-28T21:11:33Z
BUG: Fix unintialized strlen when PyUnicode_AsUTF8AndSize fails
diff --git a/pandas/_libs/src/ujson/lib/ultrajsonenc.c b/pandas/_libs/src/ujson/lib/ultrajsonenc.c index a74d505b0d0ec..169c5b6889077 100644 --- a/pandas/_libs/src/ujson/lib/ultrajsonenc.c +++ b/pandas/_libs/src/ujson/lib/ultrajsonenc.c @@ -1080,11 +1080,11 @@ void encode(JSOBJ obj, JSONObjectEncoder *enc, const char *name, case JT_UTF8: { value = enc->getStringValue(obj, &tc, &szlen); - Buffer_Reserve(enc, RESERVE_STRING(szlen)); if (enc->errorMsg) { enc->endTypeContext(obj, &tc); return; } + Buffer_Reserve(enc, RESERVE_STRING(szlen)); Buffer_AppendCharUnchecked(enc, '\"'); if (enc->forceASCII) {
- [ ] xref #50324 (Replace xxxx with the GitHub issue number) - [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature - [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit). - [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions. - [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature. Hopefully fixes 32bit builds.
https://api.github.com/repos/pandas-dev/pandas/pulls/50387
2022-12-21T22:39:37Z
2022-12-22T18:33:50Z
2022-12-22T18:33:50Z
2022-12-22T18:49:15Z
CI: Test pandas warnings as error on some builds
diff --git a/.github/workflows/macos-windows.yml b/.github/workflows/macos-windows.yml index 5efc1aa67b4cd..d762e20db196a 100644 --- a/.github/workflows/macos-windows.yml +++ b/.github/workflows/macos-windows.yml @@ -16,7 +16,7 @@ env: PANDAS_CI: 1 PYTEST_TARGET: pandas PATTERN: "not slow and not db and not network and not single_cpu" - TEST_ARGS: "-W error:::pandas" + ERROR_ON_WARNINGS: "1" permissions: diff --git a/.github/workflows/ubuntu.yml b/.github/workflows/ubuntu.yml index 7dbf74278d433..9c93725ea15ec 100644 --- a/.github/workflows/ubuntu.yml +++ b/.github/workflows/ubuntu.yml @@ -38,7 +38,7 @@ jobs: - name: "Minimum Versions" env_file: actions-38-minimum_versions.yaml pattern: "not slow and not network and not single_cpu" - test_args: "" + error_on_warnings: "0" - name: "Locale: it_IT" env_file: actions-38.yaml pattern: "not slow and not network and not single_cpu" @@ -63,20 +63,22 @@ jobs: env_file: actions-310.yaml pattern: "not slow and not network and not single_cpu" pandas_copy_on_write: "1" - test_args: "" + error_on_warnings: "0" - name: "Data Manager" env_file: actions-38.yaml pattern: "not slow and not network and not single_cpu" pandas_data_manager: "array" - test_args: "" + error_on_warnings: "0" - name: "Pypy" env_file: actions-pypy-38.yaml pattern: "not slow and not network and not single_cpu" test_args: "--max-worker-restart 0" + error_on_warnings: "0" - name: "Numpy Dev" env_file: actions-310-numpydev.yaml pattern: "not slow and not network and not single_cpu" test_args: "-W error::DeprecationWarning:numpy -W error::FutureWarning:numpy" + error_on_warnings: "0" exclude: - env_file: actions-38.yaml pyarrow_version: "7" @@ -96,11 +98,12 @@ jobs: ENV_FILE: ci/deps/${{ matrix.env_file }} PATTERN: ${{ matrix.pattern }} EXTRA_APT: ${{ matrix.extra_apt || '' }} + ERROR_ON_WARNINGS: ${{ matrix.error_on_warnings || '1' }} LANG: ${{ matrix.lang || '' }} LC_ALL: ${{ matrix.lc_all || '' }} PANDAS_DATA_MANAGER: ${{ matrix.pandas_data_manager || 'block' 
}} PANDAS_COPY_ON_WRITE: ${{ matrix.pandas_copy_on_write || '0' }} - TEST_ARGS: ${{ matrix.test_args || '-W error:::pandas' }} + TEST_ARGS: ${{ matrix.test_args || '' }} PYTEST_WORKERS: ${{ contains(matrix.pattern, 'not single_cpu') && 'auto' || '1' }} PYTEST_TARGET: ${{ matrix.pytest_target || 'pandas' }} IS_PYPY: ${{ contains(matrix.env_file, 'pypy') }} diff --git a/ci/run_tests.sh b/ci/run_tests.sh index e6de5caf955fc..a48d6c1ad6580 100755 --- a/ci/run_tests.sh +++ b/ci/run_tests.sh @@ -30,6 +30,13 @@ if [[ "$PATTERN" ]]; then PYTEST_CMD="$PYTEST_CMD -m \"$PATTERN\"" fi +if [[ "$ERROR_ON_WARNINGS" == "1" ]]; then + for pth in $(find pandas -name '*.py' -not -path "pandas/tests/*" | sed -e 's/\.py//g' -e 's/\/__init__//g' -e 's/\//./g'); + do + PYTEST_CMD="$PYTEST_CMD -W error:::$pth" + done +fi + echo $PYTEST_CMD sh -c "$PYTEST_CMD" diff --git a/pandas/tests/arithmetic/test_timedelta64.py b/pandas/tests/arithmetic/test_timedelta64.py index c9bfb5e29460e..72b32246bf8b5 100644 --- a/pandas/tests/arithmetic/test_timedelta64.py +++ b/pandas/tests/arithmetic/test_timedelta64.py @@ -1756,12 +1756,18 @@ def test_td64arr_floordiv_td64arr_with_nat( # columns without missing values expected[[0, 1]] = expected[[0, 1]].astype("int64") - result = left // right + with tm.maybe_produces_warning( + RuntimeWarning, box is pd.array, check_stacklevel=False + ): + result = left // right tm.assert_equal(result, expected) # case that goes through __rfloordiv__ with arraylike - result = np.asarray(left) // right + with tm.maybe_produces_warning( + RuntimeWarning, box is pd.array, check_stacklevel=False + ): + result = np.asarray(left) // right tm.assert_equal(result, expected) @pytest.mark.filterwarnings("ignore:invalid value encountered:RuntimeWarning")
Locally I was validating the recent addition of `-W error:::pandas` and had to change it to `-W error:::$pandas`, so checking whether the CI will raise as well
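The sed pipeline added to `ci/run_tests.sh` (strip `.py`, drop trailing `/__init__`, turn slashes into dots) converts file paths into module names for the per-module `-W error` filters. A Python sketch of the same transformation; `path_to_module` is a hypothetical helper for illustration:

```python
def path_to_module(path):
    """Convert a file path like pandas/core/frame.py into the module
    name pandas.core.frame, mirroring the sed expressions
    's/\\.py//g', 's/\\/__init__//g' and 's/\\//./g' used to build
    the -W error:::<module> pytest arguments.
    """
    if path.endswith(".py"):
        path = path[: -len(".py")]
    if path.endswith("/__init__"):
        path = path[: -len("/__init__")]
    return path.replace("/", ".")
```

Each resulting module name becomes one `-W error:::pandas.core.frame`-style filter, so warnings raised from pandas' own modules error while warnings from test code and third-party libraries do not.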
https://api.github.com/repos/pandas-dev/pandas/pulls/50386
2022-12-21T21:32:24Z
2022-12-29T23:36:45Z
2022-12-29T23:36:45Z
2022-12-31T18:46:13Z
DEPR: 1.x deprecation cleanups
diff --git a/pandas/core/arrays/sparse/array.py b/pandas/core/arrays/sparse/array.py index 2a6e26fbdbd1c..c8c33d3f52102 100644 --- a/pandas/core/arrays/sparse/array.py +++ b/pandas/core/arrays/sparse/array.py @@ -299,12 +299,6 @@ class SparseArray(OpsMixin, PandasObject, ExtensionArray): A dense array of values to store in the SparseArray. This may contain `fill_value`. sparse_index : SparseIndex, optional - index : Index - - .. deprecated:: 1.4.0 - Use a function like `np.full` to construct an array with the desired - repeats of the scalar value instead. - fill_value : scalar, optional Elements in data that are ``fill_value`` are not stored in the SparseArray. For memory savings, this should be the most common value diff --git a/pandas/core/generic.py b/pandas/core/generic.py index 25325d5d473d0..40c31b678d79e 100644 --- a/pandas/core/generic.py +++ b/pandas/core/generic.py @@ -1758,9 +1758,6 @@ def _get_label_or_level_values(self, key: Level, axis: AxisInt = 0) -> ArrayLike if `key` matches neither a label nor a level ValueError if `key` matches multiple labels - FutureWarning - if `key` is ambiguous. This will become an ambiguity error in a - future version """ axis = self._get_axis_number(axis) other_axes = [ax for ax in range(self._AXIS_LEN) if ax != axis] @@ -6035,11 +6032,11 @@ def astype( Notes ----- - .. deprecated:: 1.3.0 + .. versionchanged:: 2.0.0 Using ``astype`` to convert from timezone-naive dtype to - timezone-aware dtype is deprecated and will raise in a - future version. Use :meth:`Series.dt.tz_localize` instead. + timezone-aware dtype will raise an exception. + Use :meth:`Series.dt.tz_localize` instead. 
Examples -------- diff --git a/pandas/core/indexes/interval.py b/pandas/core/indexes/interval.py index 762d79a28c720..ae0b7dc5116cd 100644 --- a/pandas/core/indexes/interval.py +++ b/pandas/core/indexes/interval.py @@ -612,19 +612,13 @@ def _searchsorted_monotonic(self, label, side: Literal["left", "right"] = "left" # -------------------------------------------------------------------- # Indexing Methods - def get_loc( - self, key, method: str | None = None, tolerance=None - ) -> int | slice | np.ndarray: + def get_loc(self, key) -> int | slice | np.ndarray: """ Get integer location, slice or boolean mask for requested label. Parameters ---------- key : label - method : {None}, optional - * default: matches where the label is within an interval only. - - .. deprecated:: 1.4 Returns ------- @@ -655,7 +649,6 @@ def get_loc( >>> index.get_loc(pd.Interval(0, 1)) 0 """ - self._check_indexing_method(method) self._check_indexing_error(key) if isinstance(key, Interval): diff --git a/pandas/core/indexing.py b/pandas/core/indexing.py index d59ee27c88a3c..fa702770a0990 100644 --- a/pandas/core/indexing.py +++ b/pandas/core/indexing.py @@ -2043,8 +2043,6 @@ def _setitem_single_block(self, indexer, value, name: str) -> None: if len(item_labels.get_indexer_for([col])) == 1: # e.g. test_loc_setitem_empty_append_expands_rows loc = item_labels.get_loc(col) - # Go through _setitem_single_column to get - # FutureWarning if relevant. self._setitem_single_column(loc, value, indexer[0]) return diff --git a/pandas/core/tools/timedeltas.py b/pandas/core/tools/timedeltas.py index 784549b53bc32..b968004846e8e 100644 --- a/pandas/core/tools/timedeltas.py +++ b/pandas/core/tools/timedeltas.py @@ -97,9 +97,9 @@ def to_timedelta( arg : str, timedelta, list-like or Series The data to be converted to timedelta. - .. deprecated:: 1.2 + .. 
versionchanged:: 2.0 Strings with units 'M', 'Y' and 'y' do not represent - unambiguous timedelta values and will be removed in a future version + unambiguous timedelta values and will raise an exception. unit : str, optional Denotes the unit of the arg for numeric `arg`. Defaults to ``"ns"``.
null
https://api.github.com/repos/pandas-dev/pandas/pulls/50385
2022-12-21T21:14:54Z
2022-12-21T23:14:21Z
2022-12-21T23:14:21Z
2022-12-21T23:14:24Z
CI Pin numpy 32 bit
diff --git a/pandas/compat/numpy/__init__.py b/pandas/compat/numpy/__init__.py index 60ec74553a207..6f31358dabe86 100644 --- a/pandas/compat/numpy/__init__.py +++ b/pandas/compat/numpy/__init__.py @@ -9,6 +9,7 @@ np_version_under1p21 = _nlv < Version("1.21") np_version_under1p22 = _nlv < Version("1.22") np_version_gte1p22 = _nlv >= Version("1.22") +np_version_gte1p24 = _nlv >= Version("1.24") is_numpy_dev = _nlv.dev is not None _min_numpy_ver = "1.20.3" diff --git a/pandas/tests/series/test_constructors.py b/pandas/tests/series/test_constructors.py index 53b2cbedc0ae3..e4ab5ef596440 100644 --- a/pandas/tests/series/test_constructors.py +++ b/pandas/tests/series/test_constructors.py @@ -13,7 +13,11 @@ iNaT, lib, ) -from pandas.compat import is_numpy_dev +from pandas.compat import ( + IS64, + is_numpy_dev, +) +from pandas.compat.numpy import np_version_gte1p24 import pandas.util._test_decorators as td from pandas.core.dtypes.common import ( @@ -732,6 +736,7 @@ def test_constructor_cast(self): with pytest.raises(ValueError, match=msg): Series(["a", "b", "c"], dtype=float) + @pytest.mark.xfail(np_version_gte1p24 and not IS64, reason="GH49777") def test_constructor_signed_int_overflow_deprecation(self): # GH#41734 disallow silent overflow msg = "Values are too large to be losslessly cast"
- [ ] closes #xxxx (Replace xxxx with the GitHub issue number) - [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature - [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit). - [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions. - [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
https://api.github.com/repos/pandas-dev/pandas/pulls/50384
2022-12-21T19:01:12Z
2022-12-22T23:06:39Z
2022-12-22T23:06:39Z
2022-12-23T17:42:46Z
ENH: Add std to masked interface
diff --git a/doc/source/whatsnew/v2.0.0.rst b/doc/source/whatsnew/v2.0.0.rst index 208bbfa10b9b2..beff6c6a5c203 100644 --- a/doc/source/whatsnew/v2.0.0.rst +++ b/doc/source/whatsnew/v2.0.0.rst @@ -758,7 +758,7 @@ Performance improvements - Performance improvement in :meth:`~arrays.ArrowExtensionArray.to_numpy` (:issue:`49973`) - Performance improvement in :meth:`DataFrame.join` when joining on a subset of a :class:`MultiIndex` (:issue:`48611`) - Performance improvement for :meth:`MultiIndex.intersection` (:issue:`48604`) -- Performance improvement in ``var`` for nullable dtypes (:issue:`48379`). +- Performance improvement in ``var`` and ``std`` for nullable dtypes (:issue:`48379`). - Performance improvement when iterating over pyarrow and nullable dtypes (:issue:`49825`, :issue:`49851`) - Performance improvements to :func:`read_sas` (:issue:`47403`, :issue:`47405`, :issue:`47656`, :issue:`48502`) - Memory improvement in :meth:`RangeIndex.sort_values` (:issue:`48801`) diff --git a/pandas/core/array_algos/masked_reductions.py b/pandas/core/array_algos/masked_reductions.py index 4f8076af6206e..4be053cba75b7 100644 --- a/pandas/core/array_algos/masked_reductions.py +++ b/pandas/core/array_algos/masked_reductions.py @@ -169,3 +169,19 @@ def var( return _reductions( np.var, values=values, mask=mask, skipna=skipna, axis=axis, ddof=ddof ) + + +def std( + values: np.ndarray, + mask: npt.NDArray[np.bool_], + *, + skipna: bool = True, + axis: AxisInt | None = None, + ddof: int = 1, +): + if not values.size or mask.all(): + return libmissing.NA + + return _reductions( + np.std, values=values, mask=mask, skipna=skipna, axis=axis, ddof=ddof + ) diff --git a/pandas/core/arrays/masked.py b/pandas/core/arrays/masked.py index b3e4afc70a8fd..ec3fdc6db9dc9 100644 --- a/pandas/core/arrays/masked.py +++ b/pandas/core/arrays/masked.py @@ -1049,7 +1049,7 @@ def _quantile( # Reductions def _reduce(self, name: str, *, skipna: bool = True, **kwargs): - if name in {"any", "all", "min", "max", 
"sum", "prod", "mean", "var"}: + if name in {"any", "all", "min", "max", "sum", "prod", "mean", "var", "std"}: return getattr(self, name)(skipna=skipna, **kwargs) data = self._data @@ -1060,7 +1060,7 @@ def _reduce(self, name: str, *, skipna: bool = True, **kwargs): if self._hasna: data = self.to_numpy("float64", na_value=np.nan) - # median, var, std, skew, kurt, idxmin, idxmax + # median, skew, kurt, idxmin, idxmax op = getattr(nanops, f"nan{name}") result = op(data, axis=0, skipna=skipna, mask=mask, **kwargs) @@ -1156,6 +1156,21 @@ def var( "var", result, skipna=skipna, axis=axis, **kwargs ) + def std( + self, *, skipna: bool = True, axis: AxisInt | None = 0, ddof: int = 1, **kwargs + ): + nv.validate_stat_ddof_func((), kwargs, fname="std") + result = masked_reductions.std( + self._data, + self._mask, + skipna=skipna, + axis=axis, + ddof=ddof, + ) + return self._wrap_reduction_result( + "std", result, skipna=skipna, axis=axis, **kwargs + ) + def min(self, *, skipna: bool = True, axis: AxisInt | None = 0, **kwargs): nv.validate_min((), kwargs) return masked_reductions.min( diff --git a/pandas/tests/arrays/floating/test_function.py b/pandas/tests/arrays/floating/test_function.py index fbdf419811e24..f31ac8f885de6 100644 --- a/pandas/tests/arrays/floating/test_function.py +++ b/pandas/tests/arrays/floating/test_function.py @@ -80,6 +80,8 @@ def test_ufunc_reduce_raises(values): [ ("var", {"ddof": 0}), ("var", {"ddof": 1}), + ("std", {"ddof": 0}), + ("std", {"ddof": 1}), ("kurtosis", {}), ("skew", {}), ("sem", {}), diff --git a/pandas/tests/arrays/integer/test_function.py b/pandas/tests/arrays/integer/test_function.py index 73c8d4e6b1aed..000ed477666ea 100644 --- a/pandas/tests/arrays/integer/test_function.py +++ b/pandas/tests/arrays/integer/test_function.py @@ -91,6 +91,8 @@ def test_ufunc_reduce_raises(values): [ ("var", {"ddof": 0}), ("var", {"ddof": 1}), + ("std", {"ddof": 0}), + ("std", {"ddof": 1}), ("kurtosis", {}), ("skew", {}), ("sem", {}), diff --git 
a/pandas/tests/extension/base/dim2.py b/pandas/tests/extension/base/dim2.py index 210e566c7e463..6371411f9992c 100644 --- a/pandas/tests/extension/base/dim2.py +++ b/pandas/tests/extension/base/dim2.py @@ -195,12 +195,12 @@ def test_reductions_2d_axis0(self, data, method): arr2d = data.reshape(1, -1) kwargs = {} - if method == "std": + if method in ["std", "var"]: # pass ddof=0 so we get all-zero std instead of all-NA std kwargs["ddof"] = 0 try: - if method in ["mean", "var"] and hasattr(data, "_mask"): + if method in ["mean", "var", "std"] and hasattr(data, "_mask"): # Empty slices produced by the mask cause RuntimeWarnings by numpy with tm.assert_produces_warning(RuntimeWarning, check_stacklevel=False): result = getattr(arr2d, method)(axis=0, **kwargs) @@ -241,13 +241,13 @@ def get_reduction_result_dtype(dtype): assert dtype == expected.dtype self.assert_extension_array_equal(result, expected) - elif method == "std": - self.assert_extension_array_equal(result, data - data) - elif method == "mean": + elif method in ["mean", "std", "var"]: if is_integer_dtype(data) or is_bool_dtype(data): data = data.astype("Float64") - self.assert_extension_array_equal(result, data) - # punt on method == "var" + if method == "mean": + self.assert_extension_array_equal(result, data) + else: + self.assert_extension_array_equal(result, data - data) @pytest.mark.parametrize("method", ["mean", "median", "var", "std", "sum", "prod"]) def test_reductions_2d_axis1(self, data, method):
- [ ] closes #xxxx (Replace xxxx with the GitHub issue number) - [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature - [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit). - [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions. - [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature. The warnings were already shown before I changed the code paths, but not caught
https://api.github.com/repos/pandas-dev/pandas/pulls/50382
2022-12-21T16:59:52Z
2022-12-27T19:44:21Z
2022-12-27T19:44:21Z
2022-12-28T19:16:52Z
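The diff in the record above adds a `std` reduction to `masked_reductions` that returns NA for empty or all-masked input and otherwise delegates to `np.std` over the unmasked values. A minimal standalone sketch of that behavior (using plain NumPy and `None` standing in for `pd.NA`; the function name and `skipna` handling here are illustrative, not the exact pandas internals) might look like:

```python
import numpy as np


def masked_std(values, mask, skipna=True, ddof=1):
    """Sketch of a mask-aware std reduction.

    `mask` is True at NA positions, which are excluded from the
    computation. Returns None (standing in for pd.NA) when there is
    nothing to reduce, mirroring the early-return in the diff above.
    """
    if values.size == 0 or mask.all():
        return None
    if not skipna and mask.any():
        # Without skipna, any NA poisons the result.
        return None
    return np.std(values[~mask], ddof=ddof)


vals = np.array([1.0, 2.0, 3.0, 4.0])
mask = np.array([False, False, False, True])  # last element is NA
print(masked_std(vals, mask, ddof=0))  # std of [1.0, 2.0, 3.0]
```

With the released feature, the equivalent call is simply `pd.array([1.0, 2.0, 3.0, pd.NA], dtype="Float64").std(ddof=0)`, which now goes through the dedicated masked reduction rather than the generic object-dtype fallback.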
CI try skipping test in 32-bit
diff --git a/.github/workflows/32-bit-linux.yml b/.github/workflows/32-bit-linux.yml index 438d2c7b4174e..5ad0e237b4930 100644 --- a/.github/workflows/32-bit-linux.yml +++ b/.github/workflows/32-bit-linux.yml @@ -44,7 +44,8 @@ jobs: python -m pip install --no-build-isolation --no-use-pep517 -e . && \ python -m pip list && \ export PANDAS_CI=1 && \ - pytest -m 'not slow and not network and not clipboard and not single_cpu' pandas --junitxml=test-data.xml" + pytest pandas/tests/io/json/test_ujson.py::TestUltraJSONTests::test_encode_string_conversion && \ + pytest pandas/tests/io/json/test_ujson.py::TestUltraJSONTests " - name: Publish test results for Python 3.8-32 bit full Linux uses: actions/upload-artifact@v3 diff --git a/.github/workflows/assign.yml b/.github/workflows/assign.yml deleted file mode 100644 index b3331060823a9..0000000000000 --- a/.github/workflows/assign.yml +++ /dev/null @@ -1,19 +0,0 @@ -name: Assign -on: - issue_comment: - types: created - -permissions: - contents: read - -jobs: - issue_assign: - permissions: - issues: write - pull-requests: write - runs-on: ubuntu-22.04 - steps: - - if: github.event.comment.body == 'take' - run: | - echo "Assigning issue ${{ github.event.issue.number }} to ${{ github.event.comment.user.login }}" - curl -H "Authorization: token ${{ secrets.GITHUB_TOKEN }}" -d '{"assignees": ["${{ github.event.comment.user.login }}"]}' https://api.github.com/repos/${{ github.repository }}/issues/${{ github.event.issue.number }}/assignees diff --git a/.github/workflows/asv-bot.yml b/.github/workflows/asv-bot.yml deleted file mode 100644 index d264698e60485..0000000000000 --- a/.github/workflows/asv-bot.yml +++ /dev/null @@ -1,78 +0,0 @@ -name: "ASV Bot" - -on: - issue_comment: # Pull requests are issues - types: - - created - -env: - ENV_FILE: environment.yml - COMMENT: ${{github.event.comment.body}} - -permissions: - contents: read - -jobs: - autotune: - permissions: - contents: read - issues: write - pull-requests: write - name: 
"Run benchmarks" - # TODO: Support more benchmarking options later, against different branches, against self, etc - if: startsWith(github.event.comment.body, '@github-actions benchmark') - runs-on: ubuntu-22.04 - defaults: - run: - shell: bash -el {0} - - concurrency: - # Set concurrency to prevent abuse(full runs are ~5.5 hours !!!) - # each user can only run one concurrent benchmark bot at a time - # We don't cancel in progress jobs, but if you want to benchmark multiple PRs, you're gonna have - # to wait - group: ${{ github.actor }}-asv - cancel-in-progress: false - - steps: - - name: Checkout - uses: actions/checkout@v3 - with: - fetch-depth: 0 - - # Although asv sets up its own env, deps are still needed - # during discovery process - - name: Set up Conda - uses: ./.github/actions/setup-conda - - - name: Run benchmarks - id: bench - continue-on-error: true # This is a fake failure, asv will exit code 1 for regressions - run: | - # extracting the regex, see https://stackoverflow.com/a/36798723 - REGEX=$(echo "$COMMENT" | sed -n "s/^.*-b\s*\(\S*\).*$/\1/p") - cd asv_bench - asv check -E existing - git remote add upstream https://github.com/pandas-dev/pandas.git - git fetch upstream - asv machine --yes - asv continuous -f 1.1 -b $REGEX upstream/main HEAD - echo 'BENCH_OUTPUT<<EOF' >> $GITHUB_ENV - asv compare -f 1.1 upstream/main HEAD >> $GITHUB_ENV - echo 'EOF' >> $GITHUB_ENV - echo "REGEX=$REGEX" >> $GITHUB_ENV - - - uses: actions/github-script@v6 - env: - BENCH_OUTPUT: ${{env.BENCH_OUTPUT}} - REGEX: ${{env.REGEX}} - with: - script: | - const ENV_VARS = process.env - const run_url = `https://github.com/${context.repo.owner}/${context.repo.repo}/actions/runs/${context.runId}` - github.rest.issues.createComment({ - issue_number: context.issue.number, - owner: context.repo.owner, - repo: context.repo.repo, - body: '\nBenchmarks completed. View runner logs here.' 
+ run_url + '\nRegex used: '+ 'regex ' + ENV_VARS["REGEX"] + '\n' + ENV_VARS["BENCH_OUTPUT"] - }) diff --git a/.github/workflows/autoupdate-pre-commit-config.yml b/.github/workflows/autoupdate-pre-commit-config.yml deleted file mode 100644 index 376aa8343c571..0000000000000 --- a/.github/workflows/autoupdate-pre-commit-config.yml +++ /dev/null @@ -1,39 +0,0 @@ -name: "Update pre-commit config" - -on: - schedule: - - cron: "0 7 1 * *" # At 07:00 on 1st of every month. - workflow_dispatch: - -permissions: - contents: read - -jobs: - update-pre-commit: - permissions: - contents: write # for technote-space/create-pr-action to push code - pull-requests: write # for technote-space/create-pr-action to create a PR - if: github.repository_owner == 'pandas-dev' - name: Autoupdate pre-commit config - runs-on: ubuntu-22.04 - steps: - - name: Set up Python - uses: actions/setup-python@v4 - - name: Cache multiple paths - uses: actions/cache@v3 - with: - path: | - ~/.cache/pre-commit - ~/.cache/pip - key: pre-commit-autoupdate-${{ runner.os }}-build - - name: Update pre-commit config packages - uses: technote-space/create-pr-action@v2 - with: - GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }} - EXECUTE_COMMANDS: | - pip install pre-commit - pre-commit autoupdate || (exit 0); - pre-commit run -a || (exit 0); - COMMIT_MESSAGE: "⬆️ UPGRADE: Autoupdate pre-commit config" - PR_BRANCH_NAME: "pre-commit-config-update-${PR_ID}" - PR_TITLE: "⬆️ UPGRADE: Autoupdate pre-commit config" diff --git a/.github/workflows/code-checks.yml b/.github/workflows/code-checks.yml deleted file mode 100644 index 280b6ed601f08..0000000000000 --- a/.github/workflows/code-checks.yml +++ /dev/null @@ -1,190 +0,0 @@ -name: Code Checks - -on: - push: - branches: - - main - - 1.5.x - pull_request: - branches: - - main - - 1.5.x - -env: - ENV_FILE: environment.yml - PANDAS_CI: 1 - -permissions: - contents: read - -jobs: - pre_commit: - name: pre-commit - runs-on: ubuntu-22.04 - concurrency: - # 
https://github.community/t/concurrecy-not-work-for-push/183068/7 - group: ${{ github.event_name == 'push' && github.run_number || github.ref }}-pre-commit - cancel-in-progress: true - steps: - - name: Checkout - uses: actions/checkout@v3 - - - name: Install Python - uses: actions/setup-python@v4 - with: - python-version: '3.9' - - - name: Run pre-commit - uses: pre-commit/action@v2.0.3 - with: - extra_args: --verbose --all-files - - docstring_typing_manual_hooks: - name: Docstring validation, typing, and other manual pre-commit hooks - runs-on: ubuntu-22.04 - defaults: - run: - shell: bash -el {0} - - concurrency: - # https://github.community/t/concurrecy-not-work-for-push/183068/7 - group: ${{ github.event_name == 'push' && github.run_number || github.ref }}-code-checks - cancel-in-progress: true - - steps: - - name: Checkout - uses: actions/checkout@v3 - with: - fetch-depth: 0 - - - name: Set up Conda - uses: ./.github/actions/setup-conda - - - name: Build Pandas - id: build - uses: ./.github/actions/build_pandas - - # The following checks are independent of each other and should still be run if one fails - - name: Check for no warnings when building single-page docs - run: ci/code_checks.sh single-docs - if: ${{ steps.build.outcome == 'success' && always() }} - - - name: Run checks on imported code - run: ci/code_checks.sh code - if: ${{ steps.build.outcome == 'success' && always() }} - - - name: Run doctests - run: ci/code_checks.sh doctests - if: ${{ steps.build.outcome == 'success' && always() }} - - - name: Run docstring validation - run: ci/code_checks.sh docstrings - if: ${{ steps.build.outcome == 'success' && always() }} - - - name: Run check of documentation notebooks - run: ci/code_checks.sh notebooks - if: ${{ steps.build.outcome == 'success' && always() }} - - - name: Use existing environment for type checking - run: | - echo $PATH >> $GITHUB_PATH - echo "PYTHONHOME=$PYTHONHOME" >> $GITHUB_ENV - echo "PYTHONPATH=$PYTHONPATH" >> $GITHUB_ENV - if: ${{ 
steps.build.outcome == 'success' && always() }} - - - name: Typing + pylint - uses: pre-commit/action@v2.0.3 - with: - extra_args: --verbose --hook-stage manual --all-files - if: ${{ steps.build.outcome == 'success' && always() }} - - - name: Run docstring validation script tests - run: pytest scripts - if: ${{ steps.build.outcome == 'success' && always() }} - - asv-benchmarks: - name: ASV Benchmarks - runs-on: ubuntu-22.04 - defaults: - run: - shell: bash -el {0} - - concurrency: - # https://github.community/t/concurrecy-not-work-for-push/183068/7 - group: ${{ github.event_name == 'push' && github.run_number || github.ref }}-asv-benchmarks - cancel-in-progress: true - - steps: - - name: Checkout - uses: actions/checkout@v3 - with: - fetch-depth: 0 - - - name: Set up Conda - uses: ./.github/actions/setup-conda - - - name: Build Pandas - id: build - uses: ./.github/actions/build_pandas - - - name: Run ASV benchmarks - run: | - cd asv_bench - asv machine --yes - asv run --quick --dry-run --strict --durations=30 --python=same - - build_docker_dev_environment: - name: Build Docker Dev Environment - runs-on: ubuntu-22.04 - defaults: - run: - shell: bash -el {0} - - concurrency: - # https://github.community/t/concurrecy-not-work-for-push/183068/7 - group: ${{ github.event_name == 'push' && github.run_number || github.ref }}-build_docker_dev_environment - cancel-in-progress: true - - steps: - - name: Clean up dangling images - run: docker image prune -f - - - name: Checkout - uses: actions/checkout@v3 - with: - fetch-depth: 0 - - - name: Build image - run: docker build --pull --no-cache --tag pandas-dev-env . 
- - - name: Show environment - run: docker run --rm pandas-dev-env python -c "import pandas as pd; print(pd.show_versions())" - - requirements-dev-text-installable: - name: Test install requirements-dev.txt - runs-on: ubuntu-22.04 - - concurrency: - # https://github.community/t/concurrecy-not-work-for-push/183068/7 - group: ${{ github.event_name == 'push' && github.run_number || github.ref }}-requirements-dev-text-installable - cancel-in-progress: true - - steps: - - name: Checkout - uses: actions/checkout@v3 - with: - fetch-depth: 0 - - - name: Setup Python - id: setup_python - uses: actions/setup-python@v4 - with: - python-version: '3.8' - cache: 'pip' - cache-dependency-path: 'requirements-dev.txt' - - - name: Install requirements-dev.txt - run: pip install -r requirements-dev.txt - - - name: Check Pip Cache Hit - run: echo ${{ steps.setup_python.outputs.cache-hit }} diff --git a/.github/workflows/codeql.yml b/.github/workflows/codeql.yml deleted file mode 100644 index 05a5d003c1dd1..0000000000000 --- a/.github/workflows/codeql.yml +++ /dev/null @@ -1,31 +0,0 @@ -name: CodeQL -on: - schedule: - # every day at midnight - - cron: "0 0 * * *" - -concurrency: - group: ${{ github.repository }}-${{ github.head_ref || github.sha }}-${{ github.workflow }} - cancel-in-progress: true - -jobs: - analyze: - runs-on: ubuntu-22.04 - permissions: - actions: read - contents: read - security-events: write - - strategy: - fail-fast: false - matrix: - language: - - python - - steps: - - uses: actions/checkout@v3 - - uses: github/codeql-action/init@v2 - with: - languages: ${{ matrix.language }} - - uses: github/codeql-action/autobuild@v2 - - uses: github/codeql-action/analyze@v2 diff --git a/.github/workflows/docbuild-and-upload.yml b/.github/workflows/docbuild-and-upload.yml deleted file mode 100644 index ee79c10c12d4e..0000000000000 --- a/.github/workflows/docbuild-and-upload.yml +++ /dev/null @@ -1,88 +0,0 @@ -name: Doc Build and Upload - -on: - push: - branches: - - main - - 
1.5.x - tags: - - '*' - pull_request: - branches: - - main - - 1.5.x - -env: - ENV_FILE: environment.yml - PANDAS_CI: 1 - -permissions: - contents: read - -jobs: - web_and_docs: - name: Doc Build and Upload - runs-on: ubuntu-22.04 - - concurrency: - # https://github.community/t/concurrecy-not-work-for-push/183068/7 - group: ${{ github.event_name == 'push' && github.run_number || github.ref }}-web-docs - cancel-in-progress: true - - defaults: - run: - shell: bash -el {0} - - steps: - - name: Checkout - uses: actions/checkout@v3 - with: - fetch-depth: 0 - - - name: Set up Conda - uses: ./.github/actions/setup-conda - - - name: Build Pandas - uses: ./.github/actions/build_pandas - - - name: Build website - run: python web/pandas_web.py web/pandas --target-path=web/build - - - name: Build documentation - run: doc/make.py --warnings-are-errors - - - name: Build documentation zip - run: doc/make.py zip_html - - - name: Install ssh key - run: | - mkdir -m 700 -p ~/.ssh - echo "${{ secrets.server_ssh_key }}" > ~/.ssh/id_rsa - chmod 600 ~/.ssh/id_rsa - echo "${{ secrets.server_ip }} ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFjYkJBk7sos+r7yATODogQc3jUdW1aascGpyOD4bohj8dWjzwLJv/OJ/fyOQ5lmj81WKDk67tGtqNJYGL9acII=" > ~/.ssh/known_hosts - if: github.event_name == 'push' && (github.ref == 'refs/heads/main' || startsWith(github.ref, 'refs/tags/')) - - - name: Copy cheatsheets into site directory - run: cp doc/cheatsheet/Pandas_Cheat_Sheet* web/build/ - - - name: Upload web - run: rsync -az --delete --exclude='pandas-docs' --exclude='docs' web/build/ web@${{ secrets.server_ip }}:/var/www/html - if: github.event_name == 'push' && github.ref == 'refs/heads/main' - - - name: Upload dev docs - run: rsync -az --delete doc/build/html/ web@${{ secrets.server_ip }}:/var/www/html/pandas-docs/dev - if: github.event_name == 'push' && github.ref == 'refs/heads/main' - - - name: Upload prod docs - run: rsync -az --delete doc/build/html/ web@${{ secrets.server_ip 
}}:/var/www/html/pandas-docs/version/${GITHUB_REF_NAME:1} - if: github.event_name == 'push' && startsWith(github.ref, 'refs/tags/') - - - name: Move docs into site directory - run: mv doc/build/html web/build/docs - - - name: Save website as an artifact - uses: actions/upload-artifact@v3 - with: - name: website - path: web/build - retention-days: 14 diff --git a/.github/workflows/macos-windows.yml b/.github/workflows/macos-windows.yml deleted file mode 100644 index ca6fd63d749ee..0000000000000 --- a/.github/workflows/macos-windows.yml +++ /dev/null @@ -1,62 +0,0 @@ -name: Windows-macOS - -on: - push: - branches: - - main - - 1.5.x - pull_request: - branches: - - main - - 1.5.x - paths-ignore: - - "doc/**" - -env: - PANDAS_CI: 1 - PYTEST_TARGET: pandas - PATTERN: "not slow and not db and not network and not single_cpu" - TEST_ARGS: "-W error:::pandas" - - -permissions: - contents: read - -jobs: - pytest: - defaults: - run: - shell: bash -el {0} - timeout-minutes: 180 - strategy: - matrix: - os: [macos-latest, windows-latest] - env_file: [actions-38.yaml, actions-39.yaml, actions-310.yaml] - fail-fast: false - runs-on: ${{ matrix.os }} - name: ${{ format('{0} {1}', matrix.os, matrix.env_file) }} - concurrency: - # https://github.community/t/concurrecy-not-work-for-push/183068/7 - group: ${{ github.event_name == 'push' && github.run_number || github.ref }}-${{ matrix.env_file }}-${{ matrix.os }} - cancel-in-progress: true - env: - # GH 47443: PYTEST_WORKERS > 1 crashes Windows builds with memory related errors - PYTEST_WORKERS: ${{ matrix.os == 'macos-latest' && 'auto' || '1' }} - - steps: - - name: Checkout - uses: actions/checkout@v3 - with: - fetch-depth: 0 - - - name: Set up Conda - uses: ./.github/actions/setup-conda - with: - environment-file: ci/deps/${{ matrix.env_file }} - pyarrow-version: ${{ matrix.os == 'macos-latest' && '6' || '' }} - - - name: Build Pandas - uses: ./.github/actions/build_pandas - - - name: Test - uses: ./.github/actions/run-tests diff 
--git a/.github/workflows/package-checks.yml b/.github/workflows/package-checks.yml deleted file mode 100644 index eb065c6e2e87d..0000000000000 --- a/.github/workflows/package-checks.yml +++ /dev/null @@ -1,52 +0,0 @@ -name: Package Checks - -on: - push: - branches: - - main - - 1.5.x - pull_request: - branches: - - main - - 1.5.x - types: [ labeled, opened, synchronize, reopened ] - -permissions: - contents: read - -jobs: - pip: - if: ${{ github.event.label.name == 'Build' || contains(github.event.pull_request.labels.*.name, 'Build') || github.event_name == 'push'}} - runs-on: ubuntu-22.04 - strategy: - matrix: - extra: ["test", "performance", "timezone", "computation", "fss", "aws", "gcp", "excel", "parquet", "feather", "hdf5", "spss", "postgresql", "mysql", "sql-other", "html", "xml", "plot", "output_formatting", "clipboard", "compression", "all"] - fail-fast: false - name: Install Extras - ${{ matrix.extra }} - concurrency: - # https://github.community/t/concurrecy-not-work-for-push/183068/7 - group: ${{ github.event_name == 'push' && github.run_number || github.ref }}-pip-extras-${{ matrix.extra }} - cancel-in-progress: true - - steps: - - name: Checkout - uses: actions/checkout@v3 - with: - fetch-depth: 0 - - - name: Setup Python - id: setup_python - uses: actions/setup-python@v4 - with: - python-version: '3.8' - - - name: Install required dependencies - run: | - python -m pip install --upgrade pip setuptools wheel python-dateutil pytz numpy cython - python -m pip install versioneer[toml] - shell: bash -el {0} - - - name: Pip install with extra - run: | - python -m pip install -e .[${{ matrix.extra }}] --no-build-isolation - shell: bash -el {0} diff --git a/.github/workflows/python-dev.yml b/.github/workflows/python-dev.yml deleted file mode 100644 index 220c1e464742e..0000000000000 --- a/.github/workflows/python-dev.yml +++ /dev/null @@ -1,93 +0,0 @@ -# This workflow may or may not run depending on the state of the next -# unreleased Python version. 
DO NOT DELETE IT. -# -# In general, this file will remain frozen(present, but not running) until: -# - The next unreleased Python version has released beta 1 -# - This version should be available on GitHub Actions. -# - Our required build/runtime dependencies(numpy, pytz, Cython, python-dateutil) -# support that unreleased Python version. -# To unfreeze, comment out the ``if: false`` condition, and make sure you update -# the name of the workflow and Python version in actions/setup-python to: '3.12-dev' -# -# After it has been unfrozen, this file should remain unfrozen(present, and running) until: -# - The next Python version has been officially released. -# OR -# - Most/All of our optional dependencies support Python 3.11 AND -# - The next Python version has released a rc(we are guaranteed a stable ABI). -# To freeze this file, uncomment out the ``if: false`` condition, and migrate the jobs -# to the corresponding posix/windows-macos/sdist etc. workflows. -# Feel free to modify this comment as necessary. 
- -name: Python Dev - -on: - push: - branches: - - main - - 1.5.x - pull_request: - branches: - - main - - 1.5.x - paths-ignore: - - "doc/**" - -env: - PYTEST_WORKERS: "auto" - PANDAS_CI: 1 - PATTERN: "not slow and not network and not clipboard and not single_cpu" - COVERAGE: true - PYTEST_TARGET: pandas - -permissions: - contents: read - -jobs: - build: - # if: false # Uncomment this to freeze the workflow, comment it to unfreeze - runs-on: ${{ matrix.os }} - strategy: - fail-fast: false - matrix: - os: [ubuntu-22.04, macOS-latest, windows-latest] - - name: actions-311-dev - timeout-minutes: 120 - - concurrency: - #https://github.community/t/concurrecy-not-work-for-push/183068/7 - group: ${{ github.event_name == 'push' && github.run_number || github.ref }}-${{ matrix.os }}-${{ matrix.pytest_target }}-dev - cancel-in-progress: true - - steps: - - uses: actions/checkout@v3 - with: - fetch-depth: 0 - - - name: Set up Python Dev Version - uses: actions/setup-python@v4 - with: - python-version: '3.11-dev' - - - name: Install dependencies - run: | - python --version - python -m pip install --upgrade pip setuptools wheel - python -m pip install -i https://pypi.anaconda.org/scipy-wheels-nightly/simple numpy - python -m pip install git+https://github.com/nedbat/coveragepy.git - python -m pip install versioneer[toml] - python -m pip install python-dateutil pytz cython hypothesis==6.52.1 pytest>=6.2.5 pytest-xdist pytest-cov pytest-asyncio>=0.17 - python -m pip list - - # GH 47305: Parallel build can cause flaky ImportError from pandas/_libs/tslibs - - name: Build Pandas - run: | - python setup.py build_ext -q -j1 - python -m pip install -e . 
--no-build-isolation --no-use-pep517 --no-index - - - name: Build Version - run: | - python -c "import pandas; pandas.show_versions();" - - - name: Test - uses: ./.github/actions/run-tests diff --git a/.github/workflows/sdist.yml b/.github/workflows/sdist.yml deleted file mode 100644 index d11b614e2b2c0..0000000000000 --- a/.github/workflows/sdist.yml +++ /dev/null @@ -1,96 +0,0 @@ -name: sdist - -on: - push: - branches: - - main - - 1.5.x - pull_request: - branches: - - main - - 1.5.x - types: [labeled, opened, synchronize, reopened] - paths-ignore: - - "doc/**" - -permissions: - contents: read - -jobs: - build: - if: ${{ github.event.label.name == 'Build' || contains(github.event.pull_request.labels.*.name, 'Build') || github.event_name == 'push'}} - runs-on: ubuntu-22.04 - timeout-minutes: 60 - defaults: - run: - shell: bash -el {0} - - strategy: - fail-fast: false - matrix: - python-version: ["3.8", "3.9", "3.10", "3.11"] - concurrency: - # https://github.community/t/concurrecy-not-work-for-push/183068/7 - group: ${{ github.event_name == 'push' && github.run_number || github.ref }}-${{matrix.python-version}}-sdist - cancel-in-progress: true - - steps: - - uses: actions/checkout@v3 - with: - fetch-depth: 0 - - - name: Set up Python - uses: actions/setup-python@v4 - with: - python-version: ${{ matrix.python-version }} - - - name: Install dependencies - run: | - python -m pip install --upgrade pip setuptools wheel - python -m pip install versioneer[toml] - - # GH 39416 - pip install numpy - - - name: Build pandas sdist - run: | - pip list - python setup.py sdist --formats=gztar - - - name: Upload sdist artifact - uses: actions/upload-artifact@v3 - with: - name: ${{matrix.python-version}}-sdist.gz - path: dist/*.gz - - - name: Set up Conda - uses: ./.github/actions/setup-conda - with: - environment-file: false - environment-name: pandas-sdist - extra-specs: | - python =${{ matrix.python-version }} - - - name: Install pandas from sdist - run: | - pip list - python 
-m pip install dist/*.gz - - - name: Force oldest supported NumPy - run: | - case "${{matrix.python-version}}" in - 3.8) - pip install numpy==1.20.3 ;; - 3.9) - pip install numpy==1.20.3 ;; - 3.10) - pip install numpy==1.21.2 ;; - 3.11) - pip install numpy==1.23.2 ;; - esac - - - name: Import pandas - run: | - cd .. - conda list - python -c "import pandas; pandas.show_versions();" diff --git a/.github/workflows/stale-pr.yml b/.github/workflows/stale-pr.yml deleted file mode 100644 index c47745e097d17..0000000000000 --- a/.github/workflows/stale-pr.yml +++ /dev/null @@ -1,26 +0,0 @@ -name: "Stale PRs" -on: - schedule: - # * is a special character in YAML so you have to quote this string - - cron: "0 0 * * *" - -permissions: - contents: read - -jobs: - stale: - permissions: - pull-requests: write - runs-on: ubuntu-22.04 - steps: - - uses: actions/stale@v4 - with: - repo-token: ${{ secrets.GITHUB_TOKEN }} - stale-pr-message: "This pull request is stale because it has been open for thirty days with no activity. Please [update](https://pandas.pydata.org/pandas-docs/stable/development/contributing.html#updating-your-pull-request) and respond to this comment if you're still interested in working on this." 
- stale-pr-label: "Stale" - exempt-pr-labels: "Needs Review,Blocked,Needs Discussion" - days-before-issue-stale: -1 - days-before-pr-stale: 30 - days-before-close: -1 - remove-stale-when-updated: false - debug-only: false diff --git a/.github/workflows/ubuntu.yml b/.github/workflows/ubuntu.yml deleted file mode 100644 index 007de6385dd69..0000000000000 --- a/.github/workflows/ubuntu.yml +++ /dev/null @@ -1,174 +0,0 @@ -name: Ubuntu - -on: - push: - branches: - - main - - 1.5.x - pull_request: - branches: - - main - - 1.5.x - paths-ignore: - - "doc/**" - -env: - PANDAS_CI: 1 - -permissions: - contents: read - -jobs: - pytest: - runs-on: ubuntu-22.04 - defaults: - run: - shell: bash -el {0} - timeout-minutes: 180 - strategy: - matrix: - env_file: [actions-38.yaml, actions-39.yaml, actions-310.yaml] - pattern: ["not single_cpu", "single_cpu"] - pyarrow_version: ["7", "8", "9"] - include: - - name: "Downstream Compat" - env_file: actions-38-downstream_compat.yaml - pattern: "not slow and not network and not single_cpu" - pytest_target: "pandas/tests/test_downstream.py" - - name: "Minimum Versions" - env_file: actions-38-minimum_versions.yaml - pattern: "not slow and not network and not single_cpu" - test_args: "" - - name: "Locale: it_IT" - env_file: actions-38.yaml - pattern: "not slow and not network and not single_cpu" - extra_apt: "language-pack-it" - # Use the utf8 version as the default, it has no bad side-effect. - lang: "it_IT.utf8" - lc_all: "it_IT.utf8" - # Also install it_IT (its encoding is ISO8859-1) but do not activate it. - # It will be temporarily activated during tests with locale.setlocale - extra_loc: "it_IT" - - name: "Locale: zh_CN" - env_file: actions-38.yaml - pattern: "not slow and not network and not single_cpu" - extra_apt: "language-pack-zh-hans" - # Use the utf8 version as the default, it has no bad side-effect. - lang: "zh_CN.utf8" - lc_all: "zh_CN.utf8" - # Also install zh_CN (its encoding is gb2312) but do not activate it. 
- # It will be temporarily activated during tests with locale.setlocale - extra_loc: "zh_CN" - - name: "Copy-on-Write" - env_file: actions-310.yaml - pattern: "not slow and not network and not single_cpu" - pandas_copy_on_write: "1" - test_args: "" - - name: "Data Manager" - env_file: actions-38.yaml - pattern: "not slow and not network and not single_cpu" - pandas_data_manager: "array" - test_args: "" - - name: "Pypy" - env_file: actions-pypy-38.yaml - pattern: "not slow and not network and not single_cpu" - test_args: "--max-worker-restart 0" - - name: "Numpy Dev" - env_file: actions-310-numpydev.yaml - pattern: "not slow and not network and not single_cpu" - test_args: "-W error::DeprecationWarning:numpy -W error::FutureWarning:numpy" - exclude: - - env_file: actions-39.yaml - pyarrow_version: "6" - - env_file: actions-39.yaml - pyarrow_version: "7" - - env_file: actions-310.yaml - pyarrow_version: "6" - - env_file: actions-310.yaml - pyarrow_version: "7" - fail-fast: false - name: ${{ matrix.name || format('{0} pyarrow={1} {2}', matrix.env_file, matrix.pyarrow_version, matrix.pattern) }} - env: - ENV_FILE: ci/deps/${{ matrix.env_file }} - PATTERN: ${{ matrix.pattern }} - EXTRA_APT: ${{ matrix.extra_apt || '' }} - LANG: ${{ matrix.lang || '' }} - LC_ALL: ${{ matrix.lc_all || '' }} - PANDAS_DATA_MANAGER: ${{ matrix.pandas_data_manager || 'block' }} - PANDAS_COPY_ON_WRITE: ${{ matrix.pandas_copy_on_write || '0' }} - TEST_ARGS: ${{ matrix.test_args || '-W error:::pandas' }} - PYTEST_WORKERS: ${{ contains(matrix.pattern, 'not single_cpu') && 'auto' || '1' }} - PYTEST_TARGET: ${{ matrix.pytest_target || 'pandas' }} - IS_PYPY: ${{ contains(matrix.env_file, 'pypy') }} - # TODO: re-enable coverage on pypy, its slow - COVERAGE: ${{ !contains(matrix.env_file, 'pypy') }} - concurrency: - # https://github.community/t/concurrecy-not-work-for-push/183068/7 - group: ${{ github.event_name == 'push' && github.run_number || github.ref }}-${{ matrix.env_file }}-${{ matrix.pattern 
}}-${{ matrix.pyarrow_version || '' }}-${{ matrix.extra_apt || '' }}-${{ matrix.pandas_data_manager || '' }} - cancel-in-progress: true - - services: - mysql: - image: mysql - env: - MYSQL_ALLOW_EMPTY_PASSWORD: yes - MYSQL_DATABASE: pandas - options: >- - --health-cmd "mysqladmin ping" - --health-interval 10s - --health-timeout 5s - --health-retries 5 - ports: - - 3306:3306 - - postgres: - image: postgres - env: - POSTGRES_USER: postgres - POSTGRES_PASSWORD: postgres - POSTGRES_DB: pandas - options: >- - --health-cmd pg_isready - --health-interval 10s - --health-timeout 5s - --health-retries 5 - ports: - - 5432:5432 - - moto: - image: motoserver/moto - env: - AWS_ACCESS_KEY_ID: foobar_key - AWS_SECRET_ACCESS_KEY: foobar_secret - ports: - - 5000:5000 - - steps: - - name: Checkout - uses: actions/checkout@v3 - with: - fetch-depth: 0 - - - name: Extra installs - # xsel for clipboard tests - run: sudo apt-get update && sudo apt-get install -y xsel ${{ env.EXTRA_APT }} - - - name: Generate extra locales - # These extra locales will be available for locale.setlocale() calls in tests - run: | - sudo locale-gen ${{ matrix.extra_loc }} - if: ${{ matrix.extra_loc }} - - - name: Set up Conda - uses: ./.github/actions/setup-conda - with: - environment-file: ${{ env.ENV_FILE }} - pyarrow-version: ${{ matrix.pyarrow_version }} - - - name: Build Pandas - uses: ./.github/actions/build_pandas - - - name: Test - uses: ./.github/actions/run-tests - # TODO: Don't continue on error for PyPy - continue-on-error: ${{ env.IS_PYPY == 'true' }} diff --git a/.github/workflows/wheels.yml b/.github/workflows/wheels.yml deleted file mode 100644 index 49d29c91f86cd..0000000000000 --- a/.github/workflows/wheels.yml +++ /dev/null @@ -1,199 +0,0 @@ -# Workflow to build wheels for upload to PyPI. 
-# Inspired by numpy's cibuildwheel config https://github.com/numpy/numpy/blob/main/.github/workflows/wheels.yml -# -# In an attempt to save CI resources, wheel builds do -# not run on each push but only weekly and for releases. -# Wheel builds can be triggered from the Actions page -# (if you have the perms) on a commit to master. -# -# Alternatively, you can add labels to the pull request in order to trigger wheel -# builds. -# The label(s) that trigger builds are: -# - Build -name: Wheel builder - -on: - schedule: - # β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€ minute (0 - 59) - # β”‚ β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€ hour (0 - 23) - # β”‚ β”‚ β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€ day of the month (1 - 31) - # β”‚ β”‚ β”‚ β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€ month (1 - 12 or JAN-DEC) - # β”‚ β”‚ β”‚ β”‚ β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€ day of the week (0 - 6 or SUN-SAT) - # β”‚ β”‚ β”‚ β”‚ β”‚ - - cron: "27 3 */1 * *" - push: - pull_request: - types: [labeled, opened, synchronize, reopened] - workflow_dispatch: - -concurrency: - group: ${{ github.workflow }}-${{ github.head_ref || github.run_id }} - cancel-in-progress: true - -jobs: - build_wheels: - name: Build wheel for ${{ matrix.python[0] }}-${{ matrix.buildplat[1] }} - if: >- - github.event_name == 'schedule' || - github.event_name == 'workflow_dispatch' || - (github.event_name == 'pull_request' && - contains(github.event.pull_request.labels.*.name, 'Build')) || - (github.event_name == 'push' && startsWith(github.ref, 'refs/tags/v') && ( ! 
endsWith(github.ref, 'dev0'))) - runs-on: ${{ matrix.buildplat[0] }} - strategy: - # Ensure that a wheel builder finishes even if another fails - fail-fast: false - matrix: - # GitHub Actions doesn't support pairing matrix values together, let's improvise - # https://github.com/github/feedback/discussions/7835#discussioncomment-1769026 - buildplat: - - [ubuntu-20.04, manylinux_x86_64] - - [macos-11, macosx_*] - - [windows-2019, win_amd64] - - [windows-2019, win32] - # TODO: support PyPy? - python: [["cp38", "3.8"], ["cp39", "3.9"], ["cp310", "3.10"], ["cp311", "3.11"]]# "pp38", "pp39"] - env: - IS_PUSH: ${{ github.event_name == 'push' && startsWith(github.ref, 'refs/tags/v') }} - IS_SCHEDULE_DISPATCH: ${{ github.event_name == 'schedule' || github.event_name == 'workflow_dispatch' }} - steps: - - name: Checkout pandas - uses: actions/checkout@v3 - with: - submodules: true - # versioneer.py requires the latest tag to be reachable. Here we - # fetch the complete history to get access to the tags. - # A shallow clone can work when the following issue is resolved: - # https://github.com/actions/checkout/issues/338 - fetch-depth: 0 - - - name: Build wheels - uses: pypa/cibuildwheel@v2.9.0 - env: - CIBW_BUILD: ${{ matrix.python[0] }}-${{ matrix.buildplat[1] }} - - # Used to test(Windows-only) and push the built wheels - # You might need to use setup-python separately - # if the new Python-dev version - # is unavailable on conda-forge. 
- - uses: conda-incubator/setup-miniconda@v2 - with: - auto-update-conda: true - python-version: ${{ matrix.python[1] }} - activate-environment: test - channels: conda-forge, anaconda - channel-priority: true - mamba-version: "*" - - - name: Test wheels (Windows 64-bit only) - if: ${{ matrix.buildplat[1] == 'win_amd64' }} - shell: cmd /C CALL {0} - run: | - python ci/test_wheels.py wheelhouse - - - uses: actions/upload-artifact@v3 - with: - name: ${{ matrix.python[0] }}-${{ startsWith(matrix.buildplat[1], 'macosx') && 'macosx' || matrix.buildplat[1] }} - path: ./wheelhouse/*.whl - - - - name: Install anaconda client - if: ${{ success() && (env.IS_SCHEDULE_DISPATCH == 'true' || env.IS_PUSH == 'true') }} - shell: bash -el {0} - run: conda install -q -y anaconda-client - - - - name: Upload wheels - if: ${{ success() && (env.IS_SCHEDULE_DISPATCH == 'true' || env.IS_PUSH == 'true') }} - shell: bash -el {0} - env: - PANDAS_STAGING_UPLOAD_TOKEN: ${{ secrets.PANDAS_STAGING_UPLOAD_TOKEN }} - PANDAS_NIGHTLY_UPLOAD_TOKEN: ${{ secrets.PANDAS_NIGHTLY_UPLOAD_TOKEN }} - run: | - source ci/upload_wheels.sh - set_upload_vars - # trigger an upload to - # https://anaconda.org/scipy-wheels-nightly/pandas - # for cron jobs or "Run workflow" (restricted to main branch). - # Tags will upload to - # https://anaconda.org/multibuild-wheels-staging/pandas - # The tokens were originally generated at anaconda.org - upload_wheels - build_sdist: - name: Build sdist - if: >- - github.event_name == 'schedule' || - github.event_name == 'workflow_dispatch' || - (github.event_name == 'pull_request' && - contains(github.event.pull_request.labels.*.name, 'Build')) || - (github.event_name == 'push' && startsWith(github.ref, 'refs/tags/v') && ( ! 
endsWith(github.ref, 'dev0'))) - runs-on: ubuntu-22.04 - env: - IS_PUSH: ${{ github.event_name == 'push' && startsWith(github.ref, 'refs/tags/v') }} - IS_SCHEDULE_DISPATCH: ${{ github.event_name == 'schedule' || github.event_name == 'workflow_dispatch' }} - steps: - - name: Checkout pandas - uses: actions/checkout@v3 - with: - submodules: true - # versioneer.py requires the latest tag to be reachable. Here we - # fetch the complete history to get access to the tags. - # A shallow clone can work when the following issue is resolved: - # https://github.com/actions/checkout/issues/338 - fetch-depth: 0 - - # Used to push the built sdist - - uses: conda-incubator/setup-miniconda@v2 - with: - auto-update-conda: true - # Really doesn't matter what version we upload with - # just the version we test with - python-version: '3.8' - channels: conda-forge - channel-priority: true - mamba-version: "*" - - - name: Build sdist - run: | - pip install build - python -m build --sdist - - name: Test the sdist - shell: bash -el {0} - run: | - # TODO: Don't run test suite, and instead build wheels from sdist - # by splitting the wheel builders into a two stage job - # (1. Generate sdist 2. Build wheels from sdist) - # This tests the sdists, and saves some build time - python -m pip install dist/*.gz - pip install hypothesis==6.52.1 pytest>=6.2.5 pytest-xdist pytest-asyncio>=0.17 - cd .. 
# Not a good idea to test within the src tree - python -c "import pandas; print(pandas.__version__); - pandas.test(extra_args=['-m not clipboard and not single_cpu', '--skip-slow', '--skip-network', '--skip-db', '-n=2']); - pandas.test(extra_args=['-m not clipboard and single_cpu', '--skip-slow', '--skip-network', '--skip-db'])" - - uses: actions/upload-artifact@v3 - with: - name: sdist - path: ./dist/* - - - name: Install anaconda client - if: ${{ success() && (env.IS_SCHEDULE_DISPATCH == 'true' || env.IS_PUSH == 'true') }} - shell: bash -el {0} - run: | - conda install -q -y anaconda-client - - - name: Upload sdist - if: ${{ success() && (env.IS_SCHEDULE_DISPATCH == 'true' || env.IS_PUSH == 'true') }} - shell: bash -el {0} - env: - PANDAS_STAGING_UPLOAD_TOKEN: ${{ secrets.PANDAS_STAGING_UPLOAD_TOKEN }} - PANDAS_NIGHTLY_UPLOAD_TOKEN: ${{ secrets.PANDAS_NIGHTLY_UPLOAD_TOKEN }} - run: | - source ci/upload_wheels.sh - set_upload_vars - # trigger an upload to - # https://anaconda.org/scipy-wheels-nightly/pandas - # for cron jobs or "Run workflow" (restricted to main branch). - # Tags will upload to - # https://anaconda.org/multibuild-wheels-staging/pandas - # The tokens were originally generated at anaconda.org - upload_wheels
- [ ] closes #xxxx (Replace xxxx with the GitHub issue number) - [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature - [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit). - [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions. - [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
https://api.github.com/repos/pandas-dev/pandas/pulls/50381
2022-12-21T16:58:56Z
2022-12-22T11:49:51Z
null
2022-12-22T11:49:51Z
CLN: Remove pre-1.5 deprecations
diff --git a/doc/source/whatsnew/v2.0.0.rst b/doc/source/whatsnew/v2.0.0.rst index 208bbfa10b9b2..f64714c302d05 100644 --- a/doc/source/whatsnew/v2.0.0.rst +++ b/doc/source/whatsnew/v2.0.0.rst @@ -723,6 +723,9 @@ Removal of prior version deprecations/changes - Removed deprecated methods :meth:`ExcelWriter.write_cells`, :meth:`ExcelWriter.save`, :meth:`ExcelWriter.cur_sheet`, :meth:`ExcelWriter.handles`, :meth:`ExcelWriter.path` (:issue:`45795`) - The :class:`ExcelWriter` attribute ``book`` can no longer be set; it is still available to be accessed and mutated (:issue:`48943`) - Removed unused ``*args`` and ``**kwargs`` in :class:`Rolling`, :class:`Expanding`, and :class:`ExponentialMovingWindow` ops (:issue:`47851`) +- Removed the deprecated argument ``line_terminator`` from :meth:`DataFrame.to_csv` (:issue:`45302`) +- Removed the deprecated argument ``label`` from :func:`lreshape` (:issue:`30219`) +- Arguments after ``expr`` in :meth:`DataFrame.eval` and :meth:`DataFrame.query` are keyword-only (:issue:`47587`) - .. --------------------------------------------------------------------------- diff --git a/pandas/core/frame.py b/pandas/core/frame.py index 4a28ed131a986..1bccd461f85ee 100644 --- a/pandas/core/frame.py +++ b/pandas/core/frame.py @@ -96,7 +96,6 @@ from pandas.util._decorators import ( Appender, Substitution, - deprecate_nonkeyword_arguments, doc, rewrite_axis_style_signature, ) @@ -4189,8 +4188,7 @@ def query(self, expr: str, *, inplace: Literal[True], **kwargs) -> None: def query(self, expr: str, *, inplace: bool = ..., **kwargs) -> DataFrame | None: ... - @deprecate_nonkeyword_arguments(version=None, allowed_args=["self", "expr"]) - def query(self, expr: str, inplace: bool = False, **kwargs) -> DataFrame | None: + def query(self, expr: str, *, inplace: bool = False, **kwargs) -> DataFrame | None: """ Query the columns of a DataFrame with a boolean expression. 
@@ -4334,7 +4332,7 @@ def query(self, expr: str, inplace: bool = False, **kwargs) -> DataFrame | None: if not isinstance(expr, str): msg = f"expr must be a string to be evaluated, {type(expr)} given" raise ValueError(msg) - kwargs["level"] = kwargs.pop("level", 0) + 2 + kwargs["level"] = kwargs.pop("level", 0) + 1 kwargs["target"] = None res = self.eval(expr, **kwargs) @@ -4359,8 +4357,7 @@ def eval(self, expr: str, *, inplace: Literal[False] = ..., **kwargs) -> Any: def eval(self, expr: str, *, inplace: Literal[True], **kwargs) -> None: ... - @deprecate_nonkeyword_arguments(version=None, allowed_args=["self", "expr"]) - def eval(self, expr: str, inplace: bool = False, **kwargs) -> Any | None: + def eval(self, expr: str, *, inplace: bool = False, **kwargs) -> Any | None: """ Evaluate a string describing operations on DataFrame columns. @@ -4466,7 +4463,7 @@ def eval(self, expr: str, inplace: bool = False, **kwargs) -> Any | None: from pandas.core.computation.eval import eval as _eval inplace = validate_bool_kwarg(inplace, "inplace") - kwargs["level"] = kwargs.pop("level", 0) + 2 + kwargs["level"] = kwargs.pop("level", 0) + 1 index_resolvers = self._get_index_resolvers() column_resolvers = self._get_cleaned_column_resolvers() resolvers = column_resolvers, index_resolvers diff --git a/pandas/core/reshape/melt.py b/pandas/core/reshape/melt.py index 633ef12265245..8ed8dd1466475 100644 --- a/pandas/core/reshape/melt.py +++ b/pandas/core/reshape/melt.py @@ -8,10 +8,7 @@ import numpy as np -from pandas.util._decorators import ( - Appender, - deprecate_kwarg, -) +from pandas.util._decorators import Appender from pandas.core.dtypes.common import ( is_extension_array_dtype, @@ -161,8 +158,7 @@ def melt( return result -@deprecate_kwarg(old_arg_name="label", new_arg_name=None) -def lreshape(data: DataFrame, groups, dropna: bool = True, label=None) -> DataFrame: +def lreshape(data: DataFrame, groups, dropna: bool = True) -> DataFrame: """ Reshape wide-format data to long. 
Generalized inverse of DataFrame.pivot. @@ -178,10 +174,6 @@ def lreshape(data: DataFrame, groups, dropna: bool = True, label=None) -> DataFr {new_name : list_of_columns}. dropna : bool, default True Do not include columns whose entries are all NaN. - label : None - Not used. - - .. deprecated:: 1.0.0 Returns ------- diff --git a/pandas/io/formats/format.py b/pandas/io/formats/format.py index c05bd437d6c3e..d6a5c5fa414ed 100644 --- a/pandas/io/formats/format.py +++ b/pandas/io/formats/format.py @@ -62,7 +62,6 @@ StorageOptions, WriteBuffer, ) -from pandas.util._decorators import deprecate_kwarg from pandas.core.dtypes.common import ( is_categorical_dtype, @@ -1135,7 +1134,6 @@ def to_string( string = string_formatter.to_string() return save_to_buffer(string, buf=buf, encoding=encoding) - @deprecate_kwarg(old_arg_name="line_terminator", new_arg_name="lineterminator") def to_csv( self, path_or_buf: FilePath | WriteBuffer[bytes] | WriteBuffer[str] | None = None, diff --git a/pandas/tests/reshape/test_melt.py b/pandas/tests/reshape/test_melt.py index bc8ce5df6dd9b..040e2b961eef3 100644 --- a/pandas/tests/reshape/test_melt.py +++ b/pandas/tests/reshape/test_melt.py @@ -658,9 +658,6 @@ def test_pairs(self): exp = DataFrame(exp_data, columns=result.columns) tm.assert_frame_equal(result, exp) - with tm.assert_produces_warning(FutureWarning): - lreshape(df, spec, dropna=False, label="foo") - spec = { "visitdt": [f"visitdt{i:d}" for i in range(1, 3)], "wt": [f"wt{i:d}" for i in range(1, 4)],
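The diff above replaces the `deprecate_nonkeyword_arguments` decorator with a bare `*` in the signature, so arguments after `expr` become keyword-only at the language level. A minimal sketch of that pattern, using a hypothetical `query_like` stand-in rather than the pandas implementation:

```python
# A bare `*` in the parameter list makes every later parameter keyword-only,
# so callers now get an immediate TypeError instead of a FutureWarning.
# `query_like` is a hypothetical stand-in, not the real DataFrame.query.
def query_like(expr: str, *, inplace: bool = False) -> str:
    return f"expr={expr!r}, inplace={inplace}"

print(query_like("a > 1"))                 # positional expr is still allowed
print(query_like("a > 1", inplace=True))   # inplace must be passed by keyword
try:
    query_like("a > 1", True)              # positional inplace now fails
except TypeError as exc:
    print("TypeError:", exc)
```

The same diff also drops `kwargs["level"]` from `+ 2` to `+ 1`, since removing the decorator removes one wrapper frame between the caller and the evaluation.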
Found by `grep -R "@deprecate" pandas`.
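The sweep can be reproduced on a scratch tree; the paths and file contents below are illustrative (the real command was run against the pandas checkout):

```shell
# Build a tiny tree containing one decorated function, then search it the
# same way the PR did: grep -R finds every remaining @deprecate* usage.
mkdir -p demo/pkg
cat > demo/pkg/melt.py <<'EOF'
@deprecate_kwarg(old_arg_name="label", new_arg_name=None)
def lreshape(data, groups, dropna=True, label=None):
    pass
EOF
grep -R "@deprecate" demo   # prints the decorated line with its file path
```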
https://api.github.com/repos/pandas-dev/pandas/pulls/50377
2022-12-21T15:18:06Z
2022-12-21T19:49:13Z
2022-12-21T19:49:13Z
2023-08-09T15:08:24Z
CI debug timeout on 32-bit linux
diff --git a/.github/workflows/32-bit-linux.yml b/.github/workflows/32-bit-linux.yml index 438d2c7b4174e..5f84b0a6fbbe9 100644 --- a/.github/workflows/32-bit-linux.yml +++ b/.github/workflows/32-bit-linux.yml @@ -39,12 +39,13 @@ jobs: . ~/virtualenvs/pandas-dev/bin/activate && \ python -m pip install --no-deps -U pip wheel 'setuptools<60.0.0' && \ python -m pip install versioneer[toml] && \ + python -m pip install pytest-timeout && \ python -m pip install cython numpy python-dateutil pytz pytest pytest-xdist pytest-asyncio>=0.17 hypothesis && \ python setup.py build_ext -q -j1 && \ python -m pip install --no-build-isolation --no-use-pep517 -e . && \ python -m pip list && \ export PANDAS_CI=1 && \ - pytest -m 'not slow and not network and not clipboard and not single_cpu' pandas --junitxml=test-data.xml" + pytest -m 'not slow and not network and not clipboard and not single_cpu' pandas/tests/io/json/ --junitxml=test-data.xml --timeout 60" - name: Publish test results for Python 3.8-32 bit full Linux uses: actions/upload-artifact@v3 diff --git a/.github/workflows/assign.yml b/.github/workflows/assign.yml deleted file mode 100644 index b3331060823a9..0000000000000 --- a/.github/workflows/assign.yml +++ /dev/null @@ -1,19 +0,0 @@ -name: Assign -on: - issue_comment: - types: created - -permissions: - contents: read - -jobs: - issue_assign: - permissions: - issues: write - pull-requests: write - runs-on: ubuntu-22.04 - steps: - - if: github.event.comment.body == 'take' - run: | - echo "Assigning issue ${{ github.event.issue.number }} to ${{ github.event.comment.user.login }}" - curl -H "Authorization: token ${{ secrets.GITHUB_TOKEN }}" -d '{"assignees": ["${{ github.event.comment.user.login }}"]}' https://api.github.com/repos/${{ github.repository }}/issues/${{ github.event.issue.number }}/assignees diff --git a/.github/workflows/asv-bot.yml b/.github/workflows/asv-bot.yml deleted file mode 100644 index d264698e60485..0000000000000 --- a/.github/workflows/asv-bot.yml +++ 
/dev/null @@ -1,78 +0,0 @@ -name: "ASV Bot" - -on: - issue_comment: # Pull requests are issues - types: - - created - -env: - ENV_FILE: environment.yml - COMMENT: ${{github.event.comment.body}} - -permissions: - contents: read - -jobs: - autotune: - permissions: - contents: read - issues: write - pull-requests: write - name: "Run benchmarks" - # TODO: Support more benchmarking options later, against different branches, against self, etc - if: startsWith(github.event.comment.body, '@github-actions benchmark') - runs-on: ubuntu-22.04 - defaults: - run: - shell: bash -el {0} - - concurrency: - # Set concurrency to prevent abuse(full runs are ~5.5 hours !!!) - # each user can only run one concurrent benchmark bot at a time - # We don't cancel in progress jobs, but if you want to benchmark multiple PRs, you're gonna have - # to wait - group: ${{ github.actor }}-asv - cancel-in-progress: false - - steps: - - name: Checkout - uses: actions/checkout@v3 - with: - fetch-depth: 0 - - # Although asv sets up its own env, deps are still needed - # during discovery process - - name: Set up Conda - uses: ./.github/actions/setup-conda - - - name: Run benchmarks - id: bench - continue-on-error: true # This is a fake failure, asv will exit code 1 for regressions - run: | - # extracting the regex, see https://stackoverflow.com/a/36798723 - REGEX=$(echo "$COMMENT" | sed -n "s/^.*-b\s*\(\S*\).*$/\1/p") - cd asv_bench - asv check -E existing - git remote add upstream https://github.com/pandas-dev/pandas.git - git fetch upstream - asv machine --yes - asv continuous -f 1.1 -b $REGEX upstream/main HEAD - echo 'BENCH_OUTPUT<<EOF' >> $GITHUB_ENV - asv compare -f 1.1 upstream/main HEAD >> $GITHUB_ENV - echo 'EOF' >> $GITHUB_ENV - echo "REGEX=$REGEX" >> $GITHUB_ENV - - - uses: actions/github-script@v6 - env: - BENCH_OUTPUT: ${{env.BENCH_OUTPUT}} - REGEX: ${{env.REGEX}} - with: - script: | - const ENV_VARS = process.env - const run_url = 
`https://github.com/${context.repo.owner}/${context.repo.repo}/actions/runs/${context.runId}` - github.rest.issues.createComment({ - issue_number: context.issue.number, - owner: context.repo.owner, - repo: context.repo.repo, - body: '\nBenchmarks completed. View runner logs here.' + run_url + '\nRegex used: '+ 'regex ' + ENV_VARS["REGEX"] + '\n' + ENV_VARS["BENCH_OUTPUT"] - }) diff --git a/.github/workflows/autoupdate-pre-commit-config.yml b/.github/workflows/autoupdate-pre-commit-config.yml deleted file mode 100644 index 376aa8343c571..0000000000000 --- a/.github/workflows/autoupdate-pre-commit-config.yml +++ /dev/null @@ -1,39 +0,0 @@ -name: "Update pre-commit config" - -on: - schedule: - - cron: "0 7 1 * *" # At 07:00 on 1st of every month. - workflow_dispatch: - -permissions: - contents: read - -jobs: - update-pre-commit: - permissions: - contents: write # for technote-space/create-pr-action to push code - pull-requests: write # for technote-space/create-pr-action to create a PR - if: github.repository_owner == 'pandas-dev' - name: Autoupdate pre-commit config - runs-on: ubuntu-22.04 - steps: - - name: Set up Python - uses: actions/setup-python@v4 - - name: Cache multiple paths - uses: actions/cache@v3 - with: - path: | - ~/.cache/pre-commit - ~/.cache/pip - key: pre-commit-autoupdate-${{ runner.os }}-build - - name: Update pre-commit config packages - uses: technote-space/create-pr-action@v2 - with: - GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }} - EXECUTE_COMMANDS: | - pip install pre-commit - pre-commit autoupdate || (exit 0); - pre-commit run -a || (exit 0); - COMMIT_MESSAGE: "⬆️ UPGRADE: Autoupdate pre-commit config" - PR_BRANCH_NAME: "pre-commit-config-update-${PR_ID}" - PR_TITLE: "⬆️ UPGRADE: Autoupdate pre-commit config" diff --git a/.github/workflows/code-checks.yml b/.github/workflows/code-checks.yml deleted file mode 100644 index 280b6ed601f08..0000000000000 --- a/.github/workflows/code-checks.yml +++ /dev/null @@ -1,190 +0,0 @@ -name: Code Checks - -on: 
- push: - branches: - - main - - 1.5.x - pull_request: - branches: - - main - - 1.5.x - -env: - ENV_FILE: environment.yml - PANDAS_CI: 1 - -permissions: - contents: read - -jobs: - pre_commit: - name: pre-commit - runs-on: ubuntu-22.04 - concurrency: - # https://github.community/t/concurrecy-not-work-for-push/183068/7 - group: ${{ github.event_name == 'push' && github.run_number || github.ref }}-pre-commit - cancel-in-progress: true - steps: - - name: Checkout - uses: actions/checkout@v3 - - - name: Install Python - uses: actions/setup-python@v4 - with: - python-version: '3.9' - - - name: Run pre-commit - uses: pre-commit/action@v2.0.3 - with: - extra_args: --verbose --all-files - - docstring_typing_manual_hooks: - name: Docstring validation, typing, and other manual pre-commit hooks - runs-on: ubuntu-22.04 - defaults: - run: - shell: bash -el {0} - - concurrency: - # https://github.community/t/concurrecy-not-work-for-push/183068/7 - group: ${{ github.event_name == 'push' && github.run_number || github.ref }}-code-checks - cancel-in-progress: true - - steps: - - name: Checkout - uses: actions/checkout@v3 - with: - fetch-depth: 0 - - - name: Set up Conda - uses: ./.github/actions/setup-conda - - - name: Build Pandas - id: build - uses: ./.github/actions/build_pandas - - # The following checks are independent of each other and should still be run if one fails - - name: Check for no warnings when building single-page docs - run: ci/code_checks.sh single-docs - if: ${{ steps.build.outcome == 'success' && always() }} - - - name: Run checks on imported code - run: ci/code_checks.sh code - if: ${{ steps.build.outcome == 'success' && always() }} - - - name: Run doctests - run: ci/code_checks.sh doctests - if: ${{ steps.build.outcome == 'success' && always() }} - - - name: Run docstring validation - run: ci/code_checks.sh docstrings - if: ${{ steps.build.outcome == 'success' && always() }} - - - name: Run check of documentation notebooks - run: ci/code_checks.sh notebooks - 
if: ${{ steps.build.outcome == 'success' && always() }} - - - name: Use existing environment for type checking - run: | - echo $PATH >> $GITHUB_PATH - echo "PYTHONHOME=$PYTHONHOME" >> $GITHUB_ENV - echo "PYTHONPATH=$PYTHONPATH" >> $GITHUB_ENV - if: ${{ steps.build.outcome == 'success' && always() }} - - - name: Typing + pylint - uses: pre-commit/action@v2.0.3 - with: - extra_args: --verbose --hook-stage manual --all-files - if: ${{ steps.build.outcome == 'success' && always() }} - - - name: Run docstring validation script tests - run: pytest scripts - if: ${{ steps.build.outcome == 'success' && always() }} - - asv-benchmarks: - name: ASV Benchmarks - runs-on: ubuntu-22.04 - defaults: - run: - shell: bash -el {0} - - concurrency: - # https://github.community/t/concurrecy-not-work-for-push/183068/7 - group: ${{ github.event_name == 'push' && github.run_number || github.ref }}-asv-benchmarks - cancel-in-progress: true - - steps: - - name: Checkout - uses: actions/checkout@v3 - with: - fetch-depth: 0 - - - name: Set up Conda - uses: ./.github/actions/setup-conda - - - name: Build Pandas - id: build - uses: ./.github/actions/build_pandas - - - name: Run ASV benchmarks - run: | - cd asv_bench - asv machine --yes - asv run --quick --dry-run --strict --durations=30 --python=same - - build_docker_dev_environment: - name: Build Docker Dev Environment - runs-on: ubuntu-22.04 - defaults: - run: - shell: bash -el {0} - - concurrency: - # https://github.community/t/concurrecy-not-work-for-push/183068/7 - group: ${{ github.event_name == 'push' && github.run_number || github.ref }}-build_docker_dev_environment - cancel-in-progress: true - - steps: - - name: Clean up dangling images - run: docker image prune -f - - - name: Checkout - uses: actions/checkout@v3 - with: - fetch-depth: 0 - - - name: Build image - run: docker build --pull --no-cache --tag pandas-dev-env . 
- - - name: Show environment - run: docker run --rm pandas-dev-env python -c "import pandas as pd; print(pd.show_versions())" - - requirements-dev-text-installable: - name: Test install requirements-dev.txt - runs-on: ubuntu-22.04 - - concurrency: - # https://github.community/t/concurrecy-not-work-for-push/183068/7 - group: ${{ github.event_name == 'push' && github.run_number || github.ref }}-requirements-dev-text-installable - cancel-in-progress: true - - steps: - - name: Checkout - uses: actions/checkout@v3 - with: - fetch-depth: 0 - - - name: Setup Python - id: setup_python - uses: actions/setup-python@v4 - with: - python-version: '3.8' - cache: 'pip' - cache-dependency-path: 'requirements-dev.txt' - - - name: Install requirements-dev.txt - run: pip install -r requirements-dev.txt - - - name: Check Pip Cache Hit - run: echo ${{ steps.setup_python.outputs.cache-hit }} diff --git a/.github/workflows/codeql.yml b/.github/workflows/codeql.yml deleted file mode 100644 index 05a5d003c1dd1..0000000000000 --- a/.github/workflows/codeql.yml +++ /dev/null @@ -1,31 +0,0 @@ -name: CodeQL -on: - schedule: - # every day at midnight - - cron: "0 0 * * *" - -concurrency: - group: ${{ github.repository }}-${{ github.head_ref || github.sha }}-${{ github.workflow }} - cancel-in-progress: true - -jobs: - analyze: - runs-on: ubuntu-22.04 - permissions: - actions: read - contents: read - security-events: write - - strategy: - fail-fast: false - matrix: - language: - - python - - steps: - - uses: actions/checkout@v3 - - uses: github/codeql-action/init@v2 - with: - languages: ${{ matrix.language }} - - uses: github/codeql-action/autobuild@v2 - - uses: github/codeql-action/analyze@v2 diff --git a/.github/workflows/docbuild-and-upload.yml b/.github/workflows/docbuild-and-upload.yml deleted file mode 100644 index ee79c10c12d4e..0000000000000 --- a/.github/workflows/docbuild-and-upload.yml +++ /dev/null @@ -1,88 +0,0 @@ -name: Doc Build and Upload - -on: - push: - branches: - - main - - 
1.5.x - tags: - - '*' - pull_request: - branches: - - main - - 1.5.x - -env: - ENV_FILE: environment.yml - PANDAS_CI: 1 - -permissions: - contents: read - -jobs: - web_and_docs: - name: Doc Build and Upload - runs-on: ubuntu-22.04 - - concurrency: - # https://github.community/t/concurrecy-not-work-for-push/183068/7 - group: ${{ github.event_name == 'push' && github.run_number || github.ref }}-web-docs - cancel-in-progress: true - - defaults: - run: - shell: bash -el {0} - - steps: - - name: Checkout - uses: actions/checkout@v3 - with: - fetch-depth: 0 - - - name: Set up Conda - uses: ./.github/actions/setup-conda - - - name: Build Pandas - uses: ./.github/actions/build_pandas - - - name: Build website - run: python web/pandas_web.py web/pandas --target-path=web/build - - - name: Build documentation - run: doc/make.py --warnings-are-errors - - - name: Build documentation zip - run: doc/make.py zip_html - - - name: Install ssh key - run: | - mkdir -m 700 -p ~/.ssh - echo "${{ secrets.server_ssh_key }}" > ~/.ssh/id_rsa - chmod 600 ~/.ssh/id_rsa - echo "${{ secrets.server_ip }} ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFjYkJBk7sos+r7yATODogQc3jUdW1aascGpyOD4bohj8dWjzwLJv/OJ/fyOQ5lmj81WKDk67tGtqNJYGL9acII=" > ~/.ssh/known_hosts - if: github.event_name == 'push' && (github.ref == 'refs/heads/main' || startsWith(github.ref, 'refs/tags/')) - - - name: Copy cheatsheets into site directory - run: cp doc/cheatsheet/Pandas_Cheat_Sheet* web/build/ - - - name: Upload web - run: rsync -az --delete --exclude='pandas-docs' --exclude='docs' web/build/ web@${{ secrets.server_ip }}:/var/www/html - if: github.event_name == 'push' && github.ref == 'refs/heads/main' - - - name: Upload dev docs - run: rsync -az --delete doc/build/html/ web@${{ secrets.server_ip }}:/var/www/html/pandas-docs/dev - if: github.event_name == 'push' && github.ref == 'refs/heads/main' - - - name: Upload prod docs - run: rsync -az --delete doc/build/html/ web@${{ secrets.server_ip 
}}:/var/www/html/pandas-docs/version/${GITHUB_REF_NAME:1} - if: github.event_name == 'push' && startsWith(github.ref, 'refs/tags/') - - - name: Move docs into site directory - run: mv doc/build/html web/build/docs - - - name: Save website as an artifact - uses: actions/upload-artifact@v3 - with: - name: website - path: web/build - retention-days: 14 diff --git a/.github/workflows/macos-windows.yml b/.github/workflows/macos-windows.yml deleted file mode 100644 index ca6fd63d749ee..0000000000000 --- a/.github/workflows/macos-windows.yml +++ /dev/null @@ -1,62 +0,0 @@ -name: Windows-macOS - -on: - push: - branches: - - main - - 1.5.x - pull_request: - branches: - - main - - 1.5.x - paths-ignore: - - "doc/**" - -env: - PANDAS_CI: 1 - PYTEST_TARGET: pandas - PATTERN: "not slow and not db and not network and not single_cpu" - TEST_ARGS: "-W error:::pandas" - - -permissions: - contents: read - -jobs: - pytest: - defaults: - run: - shell: bash -el {0} - timeout-minutes: 180 - strategy: - matrix: - os: [macos-latest, windows-latest] - env_file: [actions-38.yaml, actions-39.yaml, actions-310.yaml] - fail-fast: false - runs-on: ${{ matrix.os }} - name: ${{ format('{0} {1}', matrix.os, matrix.env_file) }} - concurrency: - # https://github.community/t/concurrecy-not-work-for-push/183068/7 - group: ${{ github.event_name == 'push' && github.run_number || github.ref }}-${{ matrix.env_file }}-${{ matrix.os }} - cancel-in-progress: true - env: - # GH 47443: PYTEST_WORKERS > 1 crashes Windows builds with memory related errors - PYTEST_WORKERS: ${{ matrix.os == 'macos-latest' && 'auto' || '1' }} - - steps: - - name: Checkout - uses: actions/checkout@v3 - with: - fetch-depth: 0 - - - name: Set up Conda - uses: ./.github/actions/setup-conda - with: - environment-file: ci/deps/${{ matrix.env_file }} - pyarrow-version: ${{ matrix.os == 'macos-latest' && '6' || '' }} - - - name: Build Pandas - uses: ./.github/actions/build_pandas - - - name: Test - uses: ./.github/actions/run-tests diff 
--git a/.github/workflows/package-checks.yml b/.github/workflows/package-checks.yml deleted file mode 100644 index eb065c6e2e87d..0000000000000 --- a/.github/workflows/package-checks.yml +++ /dev/null @@ -1,52 +0,0 @@ -name: Package Checks - -on: - push: - branches: - - main - - 1.5.x - pull_request: - branches: - - main - - 1.5.x - types: [ labeled, opened, synchronize, reopened ] - -permissions: - contents: read - -jobs: - pip: - if: ${{ github.event.label.name == 'Build' || contains(github.event.pull_request.labels.*.name, 'Build') || github.event_name == 'push'}} - runs-on: ubuntu-22.04 - strategy: - matrix: - extra: ["test", "performance", "timezone", "computation", "fss", "aws", "gcp", "excel", "parquet", "feather", "hdf5", "spss", "postgresql", "mysql", "sql-other", "html", "xml", "plot", "output_formatting", "clipboard", "compression", "all"] - fail-fast: false - name: Install Extras - ${{ matrix.extra }} - concurrency: - # https://github.community/t/concurrecy-not-work-for-push/183068/7 - group: ${{ github.event_name == 'push' && github.run_number || github.ref }}-pip-extras-${{ matrix.extra }} - cancel-in-progress: true - - steps: - - name: Checkout - uses: actions/checkout@v3 - with: - fetch-depth: 0 - - - name: Setup Python - id: setup_python - uses: actions/setup-python@v4 - with: - python-version: '3.8' - - - name: Install required dependencies - run: | - python -m pip install --upgrade pip setuptools wheel python-dateutil pytz numpy cython - python -m pip install versioneer[toml] - shell: bash -el {0} - - - name: Pip install with extra - run: | - python -m pip install -e .[${{ matrix.extra }}] --no-build-isolation - shell: bash -el {0} diff --git a/.github/workflows/python-dev.yml b/.github/workflows/python-dev.yml deleted file mode 100644 index 220c1e464742e..0000000000000 --- a/.github/workflows/python-dev.yml +++ /dev/null @@ -1,93 +0,0 @@ -# This workflow may or may not run depending on the state of the next -# unreleased Python version. 
DO NOT DELETE IT. -# -# In general, this file will remain frozen(present, but not running) until: -# - The next unreleased Python version has released beta 1 -# - This version should be available on GitHub Actions. -# - Our required build/runtime dependencies(numpy, pytz, Cython, python-dateutil) -# support that unreleased Python version. -# To unfreeze, comment out the ``if: false`` condition, and make sure you update -# the name of the workflow and Python version in actions/setup-python to: '3.12-dev' -# -# After it has been unfrozen, this file should remain unfrozen(present, and running) until: -# - The next Python version has been officially released. -# OR -# - Most/All of our optional dependencies support Python 3.11 AND -# - The next Python version has released a rc(we are guaranteed a stable ABI). -# To freeze this file, uncomment out the ``if: false`` condition, and migrate the jobs -# to the corresponding posix/windows-macos/sdist etc. workflows. -# Feel free to modify this comment as necessary. 
- -name: Python Dev - -on: - push: - branches: - - main - - 1.5.x - pull_request: - branches: - - main - - 1.5.x - paths-ignore: - - "doc/**" - -env: - PYTEST_WORKERS: "auto" - PANDAS_CI: 1 - PATTERN: "not slow and not network and not clipboard and not single_cpu" - COVERAGE: true - PYTEST_TARGET: pandas - -permissions: - contents: read - -jobs: - build: - # if: false # Uncomment this to freeze the workflow, comment it to unfreeze - runs-on: ${{ matrix.os }} - strategy: - fail-fast: false - matrix: - os: [ubuntu-22.04, macOS-latest, windows-latest] - - name: actions-311-dev - timeout-minutes: 120 - - concurrency: - #https://github.community/t/concurrecy-not-work-for-push/183068/7 - group: ${{ github.event_name == 'push' && github.run_number || github.ref }}-${{ matrix.os }}-${{ matrix.pytest_target }}-dev - cancel-in-progress: true - - steps: - - uses: actions/checkout@v3 - with: - fetch-depth: 0 - - - name: Set up Python Dev Version - uses: actions/setup-python@v4 - with: - python-version: '3.11-dev' - - - name: Install dependencies - run: | - python --version - python -m pip install --upgrade pip setuptools wheel - python -m pip install -i https://pypi.anaconda.org/scipy-wheels-nightly/simple numpy - python -m pip install git+https://github.com/nedbat/coveragepy.git - python -m pip install versioneer[toml] - python -m pip install python-dateutil pytz cython hypothesis==6.52.1 pytest>=6.2.5 pytest-xdist pytest-cov pytest-asyncio>=0.17 - python -m pip list - - # GH 47305: Parallel build can cause flaky ImportError from pandas/_libs/tslibs - - name: Build Pandas - run: | - python setup.py build_ext -q -j1 - python -m pip install -e . 
--no-build-isolation --no-use-pep517 --no-index - - - name: Build Version - run: | - python -c "import pandas; pandas.show_versions();" - - - name: Test - uses: ./.github/actions/run-tests diff --git a/.github/workflows/sdist.yml b/.github/workflows/sdist.yml deleted file mode 100644 index d11b614e2b2c0..0000000000000 --- a/.github/workflows/sdist.yml +++ /dev/null @@ -1,96 +0,0 @@ -name: sdist - -on: - push: - branches: - - main - - 1.5.x - pull_request: - branches: - - main - - 1.5.x - types: [labeled, opened, synchronize, reopened] - paths-ignore: - - "doc/**" - -permissions: - contents: read - -jobs: - build: - if: ${{ github.event.label.name == 'Build' || contains(github.event.pull_request.labels.*.name, 'Build') || github.event_name == 'push'}} - runs-on: ubuntu-22.04 - timeout-minutes: 60 - defaults: - run: - shell: bash -el {0} - - strategy: - fail-fast: false - matrix: - python-version: ["3.8", "3.9", "3.10", "3.11"] - concurrency: - # https://github.community/t/concurrecy-not-work-for-push/183068/7 - group: ${{ github.event_name == 'push' && github.run_number || github.ref }}-${{matrix.python-version}}-sdist - cancel-in-progress: true - - steps: - - uses: actions/checkout@v3 - with: - fetch-depth: 0 - - - name: Set up Python - uses: actions/setup-python@v4 - with: - python-version: ${{ matrix.python-version }} - - - name: Install dependencies - run: | - python -m pip install --upgrade pip setuptools wheel - python -m pip install versioneer[toml] - - # GH 39416 - pip install numpy - - - name: Build pandas sdist - run: | - pip list - python setup.py sdist --formats=gztar - - - name: Upload sdist artifact - uses: actions/upload-artifact@v3 - with: - name: ${{matrix.python-version}}-sdist.gz - path: dist/*.gz - - - name: Set up Conda - uses: ./.github/actions/setup-conda - with: - environment-file: false - environment-name: pandas-sdist - extra-specs: | - python =${{ matrix.python-version }} - - - name: Install pandas from sdist - run: | - pip list - python 
-m pip install dist/*.gz - - - name: Force oldest supported NumPy - run: | - case "${{matrix.python-version}}" in - 3.8) - pip install numpy==1.20.3 ;; - 3.9) - pip install numpy==1.20.3 ;; - 3.10) - pip install numpy==1.21.2 ;; - 3.11) - pip install numpy==1.23.2 ;; - esac - - - name: Import pandas - run: | - cd .. - conda list - python -c "import pandas; pandas.show_versions();" diff --git a/.github/workflows/stale-pr.yml b/.github/workflows/stale-pr.yml deleted file mode 100644 index c47745e097d17..0000000000000 --- a/.github/workflows/stale-pr.yml +++ /dev/null @@ -1,26 +0,0 @@ -name: "Stale PRs" -on: - schedule: - # * is a special character in YAML so you have to quote this string - - cron: "0 0 * * *" - -permissions: - contents: read - -jobs: - stale: - permissions: - pull-requests: write - runs-on: ubuntu-22.04 - steps: - - uses: actions/stale@v4 - with: - repo-token: ${{ secrets.GITHUB_TOKEN }} - stale-pr-message: "This pull request is stale because it has been open for thirty days with no activity. Please [update](https://pandas.pydata.org/pandas-docs/stable/development/contributing.html#updating-your-pull-request) and respond to this comment if you're still interested in working on this." 
- stale-pr-label: "Stale" - exempt-pr-labels: "Needs Review,Blocked,Needs Discussion" - days-before-issue-stale: -1 - days-before-pr-stale: 30 - days-before-close: -1 - remove-stale-when-updated: false - debug-only: false diff --git a/.github/workflows/ubuntu.yml b/.github/workflows/ubuntu.yml deleted file mode 100644 index 007de6385dd69..0000000000000 --- a/.github/workflows/ubuntu.yml +++ /dev/null @@ -1,174 +0,0 @@ -name: Ubuntu - -on: - push: - branches: - - main - - 1.5.x - pull_request: - branches: - - main - - 1.5.x - paths-ignore: - - "doc/**" - -env: - PANDAS_CI: 1 - -permissions: - contents: read - -jobs: - pytest: - runs-on: ubuntu-22.04 - defaults: - run: - shell: bash -el {0} - timeout-minutes: 180 - strategy: - matrix: - env_file: [actions-38.yaml, actions-39.yaml, actions-310.yaml] - pattern: ["not single_cpu", "single_cpu"] - pyarrow_version: ["7", "8", "9"] - include: - - name: "Downstream Compat" - env_file: actions-38-downstream_compat.yaml - pattern: "not slow and not network and not single_cpu" - pytest_target: "pandas/tests/test_downstream.py" - - name: "Minimum Versions" - env_file: actions-38-minimum_versions.yaml - pattern: "not slow and not network and not single_cpu" - test_args: "" - - name: "Locale: it_IT" - env_file: actions-38.yaml - pattern: "not slow and not network and not single_cpu" - extra_apt: "language-pack-it" - # Use the utf8 version as the default, it has no bad side-effect. - lang: "it_IT.utf8" - lc_all: "it_IT.utf8" - # Also install it_IT (its encoding is ISO8859-1) but do not activate it. - # It will be temporarily activated during tests with locale.setlocale - extra_loc: "it_IT" - - name: "Locale: zh_CN" - env_file: actions-38.yaml - pattern: "not slow and not network and not single_cpu" - extra_apt: "language-pack-zh-hans" - # Use the utf8 version as the default, it has no bad side-effect. - lang: "zh_CN.utf8" - lc_all: "zh_CN.utf8" - # Also install zh_CN (its encoding is gb2312) but do not activate it. 
- # It will be temporarily activated during tests with locale.setlocale - extra_loc: "zh_CN" - - name: "Copy-on-Write" - env_file: actions-310.yaml - pattern: "not slow and not network and not single_cpu" - pandas_copy_on_write: "1" - test_args: "" - - name: "Data Manager" - env_file: actions-38.yaml - pattern: "not slow and not network and not single_cpu" - pandas_data_manager: "array" - test_args: "" - - name: "Pypy" - env_file: actions-pypy-38.yaml - pattern: "not slow and not network and not single_cpu" - test_args: "--max-worker-restart 0" - - name: "Numpy Dev" - env_file: actions-310-numpydev.yaml - pattern: "not slow and not network and not single_cpu" - test_args: "-W error::DeprecationWarning:numpy -W error::FutureWarning:numpy" - exclude: - - env_file: actions-39.yaml - pyarrow_version: "6" - - env_file: actions-39.yaml - pyarrow_version: "7" - - env_file: actions-310.yaml - pyarrow_version: "6" - - env_file: actions-310.yaml - pyarrow_version: "7" - fail-fast: false - name: ${{ matrix.name || format('{0} pyarrow={1} {2}', matrix.env_file, matrix.pyarrow_version, matrix.pattern) }} - env: - ENV_FILE: ci/deps/${{ matrix.env_file }} - PATTERN: ${{ matrix.pattern }} - EXTRA_APT: ${{ matrix.extra_apt || '' }} - LANG: ${{ matrix.lang || '' }} - LC_ALL: ${{ matrix.lc_all || '' }} - PANDAS_DATA_MANAGER: ${{ matrix.pandas_data_manager || 'block' }} - PANDAS_COPY_ON_WRITE: ${{ matrix.pandas_copy_on_write || '0' }} - TEST_ARGS: ${{ matrix.test_args || '-W error:::pandas' }} - PYTEST_WORKERS: ${{ contains(matrix.pattern, 'not single_cpu') && 'auto' || '1' }} - PYTEST_TARGET: ${{ matrix.pytest_target || 'pandas' }} - IS_PYPY: ${{ contains(matrix.env_file, 'pypy') }} - # TODO: re-enable coverage on pypy, its slow - COVERAGE: ${{ !contains(matrix.env_file, 'pypy') }} - concurrency: - # https://github.community/t/concurrecy-not-work-for-push/183068/7 - group: ${{ github.event_name == 'push' && github.run_number || github.ref }}-${{ matrix.env_file }}-${{ matrix.pattern 
}}-${{ matrix.pyarrow_version || '' }}-${{ matrix.extra_apt || '' }}-${{ matrix.pandas_data_manager || '' }} - cancel-in-progress: true - - services: - mysql: - image: mysql - env: - MYSQL_ALLOW_EMPTY_PASSWORD: yes - MYSQL_DATABASE: pandas - options: >- - --health-cmd "mysqladmin ping" - --health-interval 10s - --health-timeout 5s - --health-retries 5 - ports: - - 3306:3306 - - postgres: - image: postgres - env: - POSTGRES_USER: postgres - POSTGRES_PASSWORD: postgres - POSTGRES_DB: pandas - options: >- - --health-cmd pg_isready - --health-interval 10s - --health-timeout 5s - --health-retries 5 - ports: - - 5432:5432 - - moto: - image: motoserver/moto - env: - AWS_ACCESS_KEY_ID: foobar_key - AWS_SECRET_ACCESS_KEY: foobar_secret - ports: - - 5000:5000 - - steps: - - name: Checkout - uses: actions/checkout@v3 - with: - fetch-depth: 0 - - - name: Extra installs - # xsel for clipboard tests - run: sudo apt-get update && sudo apt-get install -y xsel ${{ env.EXTRA_APT }} - - - name: Generate extra locales - # These extra locales will be available for locale.setlocale() calls in tests - run: | - sudo locale-gen ${{ matrix.extra_loc }} - if: ${{ matrix.extra_loc }} - - - name: Set up Conda - uses: ./.github/actions/setup-conda - with: - environment-file: ${{ env.ENV_FILE }} - pyarrow-version: ${{ matrix.pyarrow_version }} - - - name: Build Pandas - uses: ./.github/actions/build_pandas - - - name: Test - uses: ./.github/actions/run-tests - # TODO: Don't continue on error for PyPy - continue-on-error: ${{ env.IS_PYPY == 'true' }} diff --git a/.github/workflows/wheels.yml b/.github/workflows/wheels.yml deleted file mode 100644 index 49d29c91f86cd..0000000000000 --- a/.github/workflows/wheels.yml +++ /dev/null @@ -1,199 +0,0 @@ -# Workflow to build wheels for upload to PyPI. 
-# Inspired by numpy's cibuildwheel config https://github.com/numpy/numpy/blob/main/.github/workflows/wheels.yml -# -# In an attempt to save CI resources, wheel builds do -# not run on each push but only weekly and for releases. -# Wheel builds can be triggered from the Actions page -# (if you have the perms) on a commit to master. -# -# Alternatively, you can add labels to the pull request in order to trigger wheel -# builds. -# The label(s) that trigger builds are: -# - Build -name: Wheel builder - -on: - schedule: - # β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€ minute (0 - 59) - # β”‚ β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€ hour (0 - 23) - # β”‚ β”‚ β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€ day of the month (1 - 31) - # β”‚ β”‚ β”‚ β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€ month (1 - 12 or JAN-DEC) - # β”‚ β”‚ β”‚ β”‚ β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€ day of the week (0 - 6 or SUN-SAT) - # β”‚ β”‚ β”‚ β”‚ β”‚ - - cron: "27 3 */1 * *" - push: - pull_request: - types: [labeled, opened, synchronize, reopened] - workflow_dispatch: - -concurrency: - group: ${{ github.workflow }}-${{ github.head_ref || github.run_id }} - cancel-in-progress: true - -jobs: - build_wheels: - name: Build wheel for ${{ matrix.python[0] }}-${{ matrix.buildplat[1] }} - if: >- - github.event_name == 'schedule' || - github.event_name == 'workflow_dispatch' || - (github.event_name == 'pull_request' && - contains(github.event.pull_request.labels.*.name, 'Build')) || - (github.event_name == 'push' && startsWith(github.ref, 'refs/tags/v') && ( ! 
endsWith(github.ref, 'dev0'))) - runs-on: ${{ matrix.buildplat[0] }} - strategy: - # Ensure that a wheel builder finishes even if another fails - fail-fast: false - matrix: - # GitHub Actions doesn't support pairing matrix values together, let's improvise - # https://github.com/github/feedback/discussions/7835#discussioncomment-1769026 - buildplat: - - [ubuntu-20.04, manylinux_x86_64] - - [macos-11, macosx_*] - - [windows-2019, win_amd64] - - [windows-2019, win32] - # TODO: support PyPy? - python: [["cp38", "3.8"], ["cp39", "3.9"], ["cp310", "3.10"], ["cp311", "3.11"]]# "pp38", "pp39"] - env: - IS_PUSH: ${{ github.event_name == 'push' && startsWith(github.ref, 'refs/tags/v') }} - IS_SCHEDULE_DISPATCH: ${{ github.event_name == 'schedule' || github.event_name == 'workflow_dispatch' }} - steps: - - name: Checkout pandas - uses: actions/checkout@v3 - with: - submodules: true - # versioneer.py requires the latest tag to be reachable. Here we - # fetch the complete history to get access to the tags. - # A shallow clone can work when the following issue is resolved: - # https://github.com/actions/checkout/issues/338 - fetch-depth: 0 - - - name: Build wheels - uses: pypa/cibuildwheel@v2.9.0 - env: - CIBW_BUILD: ${{ matrix.python[0] }}-${{ matrix.buildplat[1] }} - - # Used to test(Windows-only) and push the built wheels - # You might need to use setup-python separately - # if the new Python-dev version - # is unavailable on conda-forge. 
- - uses: conda-incubator/setup-miniconda@v2 - with: - auto-update-conda: true - python-version: ${{ matrix.python[1] }} - activate-environment: test - channels: conda-forge, anaconda - channel-priority: true - mamba-version: "*" - - - name: Test wheels (Windows 64-bit only) - if: ${{ matrix.buildplat[1] == 'win_amd64' }} - shell: cmd /C CALL {0} - run: | - python ci/test_wheels.py wheelhouse - - - uses: actions/upload-artifact@v3 - with: - name: ${{ matrix.python[0] }}-${{ startsWith(matrix.buildplat[1], 'macosx') && 'macosx' || matrix.buildplat[1] }} - path: ./wheelhouse/*.whl - - - - name: Install anaconda client - if: ${{ success() && (env.IS_SCHEDULE_DISPATCH == 'true' || env.IS_PUSH == 'true') }} - shell: bash -el {0} - run: conda install -q -y anaconda-client - - - - name: Upload wheels - if: ${{ success() && (env.IS_SCHEDULE_DISPATCH == 'true' || env.IS_PUSH == 'true') }} - shell: bash -el {0} - env: - PANDAS_STAGING_UPLOAD_TOKEN: ${{ secrets.PANDAS_STAGING_UPLOAD_TOKEN }} - PANDAS_NIGHTLY_UPLOAD_TOKEN: ${{ secrets.PANDAS_NIGHTLY_UPLOAD_TOKEN }} - run: | - source ci/upload_wheels.sh - set_upload_vars - # trigger an upload to - # https://anaconda.org/scipy-wheels-nightly/pandas - # for cron jobs or "Run workflow" (restricted to main branch). - # Tags will upload to - # https://anaconda.org/multibuild-wheels-staging/pandas - # The tokens were originally generated at anaconda.org - upload_wheels - build_sdist: - name: Build sdist - if: >- - github.event_name == 'schedule' || - github.event_name == 'workflow_dispatch' || - (github.event_name == 'pull_request' && - contains(github.event.pull_request.labels.*.name, 'Build')) || - (github.event_name == 'push' && startsWith(github.ref, 'refs/tags/v') && ( ! 
endsWith(github.ref, 'dev0'))) - runs-on: ubuntu-22.04 - env: - IS_PUSH: ${{ github.event_name == 'push' && startsWith(github.ref, 'refs/tags/v') }} - IS_SCHEDULE_DISPATCH: ${{ github.event_name == 'schedule' || github.event_name == 'workflow_dispatch' }} - steps: - - name: Checkout pandas - uses: actions/checkout@v3 - with: - submodules: true - # versioneer.py requires the latest tag to be reachable. Here we - # fetch the complete history to get access to the tags. - # A shallow clone can work when the following issue is resolved: - # https://github.com/actions/checkout/issues/338 - fetch-depth: 0 - - # Used to push the built sdist - - uses: conda-incubator/setup-miniconda@v2 - with: - auto-update-conda: true - # Really doesn't matter what version we upload with - # just the version we test with - python-version: '3.8' - channels: conda-forge - channel-priority: true - mamba-version: "*" - - - name: Build sdist - run: | - pip install build - python -m build --sdist - - name: Test the sdist - shell: bash -el {0} - run: | - # TODO: Don't run test suite, and instead build wheels from sdist - # by splitting the wheel builders into a two stage job - # (1. Generate sdist 2. Build wheels from sdist) - # This tests the sdists, and saves some build time - python -m pip install dist/*.gz - pip install hypothesis==6.52.1 pytest>=6.2.5 pytest-xdist pytest-asyncio>=0.17 - cd .. 
# Not a good idea to test within the src tree - python -c "import pandas; print(pandas.__version__); - pandas.test(extra_args=['-m not clipboard and not single_cpu', '--skip-slow', '--skip-network', '--skip-db', '-n=2']); - pandas.test(extra_args=['-m not clipboard and single_cpu', '--skip-slow', '--skip-network', '--skip-db'])" - - uses: actions/upload-artifact@v3 - with: - name: sdist - path: ./dist/* - - - name: Install anaconda client - if: ${{ success() && (env.IS_SCHEDULE_DISPATCH == 'true' || env.IS_PUSH == 'true') }} - shell: bash -el {0} - run: | - conda install -q -y anaconda-client - - - name: Upload sdist - if: ${{ success() && (env.IS_SCHEDULE_DISPATCH == 'true' || env.IS_PUSH == 'true') }} - shell: bash -el {0} - env: - PANDAS_STAGING_UPLOAD_TOKEN: ${{ secrets.PANDAS_STAGING_UPLOAD_TOKEN }} - PANDAS_NIGHTLY_UPLOAD_TOKEN: ${{ secrets.PANDAS_NIGHTLY_UPLOAD_TOKEN }} - run: | - source ci/upload_wheels.sh - set_upload_vars - # trigger an upload to - # https://anaconda.org/scipy-wheels-nightly/pandas - # for cron jobs or "Run workflow" (restricted to main branch). - # Tags will upload to - # https://anaconda.org/multibuild-wheels-staging/pandas - # The tokens were originally generated at anaconda.org - upload_wheels
- [ ] closes #xxxx (Replace xxxx with the GitHub issue number) - [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature - [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit). - [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions. - [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
https://api.github.com/repos/pandas-dev/pandas/pulls/50376
2022-12-21T14:53:46Z
2022-12-21T19:06:05Z
null
2022-12-21T19:06:05Z