Columns in this export: `title` (string, 1–185 chars), `diff` (string, 0–32.2M chars), `body` (string, 0–123k chars), `url` (string, 57–58 chars), and `created_at` / `closed_at` / `merged_at` / `updated_at` (ISO-8601 timestamps, 20 chars each).
Bump pre-commit/action from 3.0.0 to 3.0.1
diff --git a/.github/workflows/code-checks.yml b/.github/workflows/code-checks.yml index 8e29d56f47dcf..24b2de251ce8e 100644 --- a/.github/workflows/code-checks.yml +++ b/.github/workflows/code-checks.yml @@ -86,7 +86,7 @@ jobs: if: ${{ steps.build.outcome == 'success' && always() }} - name: Typing + pylint - uses: pre-commit/action@v3.0.0 + uses: pre-commit/action@v3.0.1 with: extra_args: --verbose --hook-stage manual --all-files if: ${{ steps.build.outcome == 'success' && always() }}
Bumps [pre-commit/action](https://github.com/pre-commit/action) from 3.0.0 to 3.0.1. <details> <summary>Release notes</summary> <p><em>Sourced from <a href="https://github.com/pre-commit/action/releases">pre-commit/action's releases</a>.</em></p> <blockquote> <h2>pre-commit/action@v3.0.1</h2> <h3>Misc</h3> <ul> <li>Update actions/cache to v4 <ul> <li><a href="https://redirect.github.com/pre-commit/action/issues/190">#190</a> PR by <a href="https://github.com/SukiCZ"><code>@​SukiCZ</code></a>.</li> <li><a href="https://redirect.github.com/pre-commit/action/issues/189">#189</a> issue by <a href="https://github.com/bakerkj"><code>@​bakerkj</code></a>.</li> </ul> </li> </ul> </blockquote> </details> <details> <summary>Commits</summary> <ul> <li><a href="https://github.com/pre-commit/action/commit/2c7b3805fd2a0fd8c1884dcaebf91fc102a13ecd"><code>2c7b380</code></a> v3.0.1</li> <li><a href="https://github.com/pre-commit/action/commit/8e2deebc7918860122d0968e9e4fb73161fedbe2"><code>8e2deeb</code></a> Merge pull request <a href="https://redirect.github.com/pre-commit/action/issues/190">#190</a> from SukiCZ/upgrade-action/cache-v4</li> <li><a href="https://github.com/pre-commit/action/commit/0dbc303468d9ee1ae3a4cddd9b697c1424c73522"><code>0dbc303</code></a> Upgrade action/cache to v4. 
Fixes: <a href="https://redirect.github.com/pre-commit/action/issues/189">#189</a></li> <li><a href="https://github.com/pre-commit/action/commit/c7d159c2092cbfaab7352e2d8211ab536aa2267c"><code>c7d159c</code></a> Merge pull request <a href="https://redirect.github.com/pre-commit/action/issues/185">#185</a> from pre-commit/asottile-patch-1</li> <li><a href="https://github.com/pre-commit/action/commit/9dd42377a1725fb44b57b6a17501d984852f8ad9"><code>9dd4237</code></a> fix main badge</li> <li><a href="https://github.com/pre-commit/action/commit/37faf8a587e18fa3a3f78f600c9edbfd0691c3c3"><code>37faf8a</code></a> Merge pull request <a href="https://redirect.github.com/pre-commit/action/issues/184">#184</a> from pre-commit/pre-commit-ci-update-config</li> <li><a href="https://github.com/pre-commit/action/commit/049686ec527b5f1785b9cafcb3258821878466b1"><code>049686e</code></a> [pre-commit.ci] pre-commit autoupdate</li> <li><a href="https://github.com/pre-commit/action/commit/5f528da5c95691c4cf42ff76a4d10854b62cbb82"><code>5f528da</code></a> move back to maintenance-only</li> <li><a href="https://github.com/pre-commit/action/commit/efd3bcfec120bd343786e46318186153b7bc8c68"><code>efd3bcf</code></a> Merge pull request <a href="https://redirect.github.com/pre-commit/action/issues/170">#170</a> from pre-commit/pre-commit-ci-update-config</li> <li><a href="https://github.com/pre-commit/action/commit/df308c7f46bfa147cdec6c9795d030dc13431d35"><code>df308c7</code></a> [pre-commit.ci] pre-commit autoupdate</li> <li>Additional commits viewable in <a href="https://github.com/pre-commit/action/compare/v3.0.0...v3.0.1">compare view</a></li> </ul> </details> <br /> [![Dependabot compatibility 
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=pre-commit/action&package-manager=github_actions&previous-version=3.0.0&new-version=3.0.1)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores) Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`. [//]: # (dependabot-automerge-start) [//]: # (dependabot-automerge-end) --- <details> <summary>Dependabot commands and options</summary> <br /> You can trigger Dependabot actions by commenting on this PR: - `@dependabot rebase` will rebase this PR - `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it - `@dependabot merge` will merge this PR after your CI passes on it - `@dependabot squash and merge` will squash and merge this PR after your CI passes on it - `@dependabot cancel merge` will cancel a previously requested merge and block automerging - `@dependabot reopen` will reopen this PR if it is closed - `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually - `@dependabot show <dependency name> ignore conditions` will show all of the ignore conditions of the specified dependency - `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself) </details>
https://api.github.com/repos/pandas-dev/pandas/pulls/57376
2024-02-12T08:28:08Z
2024-02-12T16:58:20Z
2024-02-12T16:58:20Z
2024-02-12T16:58:23Z
DEPR: Positional arguments in Series.to_string
diff --git a/doc/source/whatsnew/v3.0.0.rst b/doc/source/whatsnew/v3.0.0.rst index 483cf659080ea..77c80dcfe7c7e 100644 --- a/doc/source/whatsnew/v3.0.0.rst +++ b/doc/source/whatsnew/v3.0.0.rst @@ -102,6 +102,7 @@ Deprecations ~~~~~~~~~~~~ - Deprecated :meth:`Timestamp.utcfromtimestamp`, use ``Timestamp.fromtimestamp(ts, "UTC")`` instead (:issue:`56680`) - Deprecated :meth:`Timestamp.utcnow`, use ``Timestamp.now("UTC")`` instead (:issue:`56680`) +- Deprecated allowing non-keyword arguments in :meth:`Series.to_string` except ``buf``. (:issue:`57280`) - .. --------------------------------------------------------------------------- diff --git a/pandas/core/series.py b/pandas/core/series.py index eb5b545092307..27ae5d3a8596d 100644 --- a/pandas/core/series.py +++ b/pandas/core/series.py @@ -46,6 +46,7 @@ from pandas.util._decorators import ( Appender, Substitution, + deprecate_nonkeyword_arguments, doc, ) from pandas.util._exceptions import find_stack_level @@ -1488,6 +1489,7 @@ def __repr__(self) -> str: def to_string( self, buf: None = ..., + *, na_rep: str = ..., float_format: str | None = ..., header: bool = ..., @@ -1504,6 +1506,7 @@ def to_string( def to_string( self, buf: FilePath | WriteBuffer[str], + *, na_rep: str = ..., float_format: str | None = ..., header: bool = ..., @@ -1516,6 +1519,9 @@ def to_string( ) -> None: ... 
+ @deprecate_nonkeyword_arguments( + version="3.0.0", allowed_args=["self", "buf"], name="to_string" + ) def to_string( self, buf: FilePath | WriteBuffer[str] | None = None, diff --git a/pandas/tests/io/formats/test_to_string.py b/pandas/tests/io/formats/test_to_string.py index f91f46c8460c4..7c7069aa74eeb 100644 --- a/pandas/tests/io/formats/test_to_string.py +++ b/pandas/tests/io/formats/test_to_string.py @@ -35,6 +35,16 @@ def _three_digit_exp(): class TestDataFrameToStringFormatters: + def test_keyword_deprecation(self): + # GH 57280 + msg = ( + "Starting with pandas version 3.0.0 all arguments of to_string " + "except for the argument 'buf' will be keyword-only." + ) + s = Series(["a", "b"]) + with tm.assert_produces_warning(FutureWarning, match=msg): + s.to_string(None, "NaN") + def test_to_string_masked_ea_with_formatter(self): # GH#39336 df = DataFrame(
- [X] closes #57280 - [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature - [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit). - [X] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions. - [X] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
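The `deprecate_nonkeyword_arguments` decorator applied in the diff above can be approximated with a simplified stand-in (the real implementation lives in `pandas.util._decorators`; the names below are only a sketch of the mechanism):

```python
import functools
import warnings


def deprecate_nonkeyword_arguments(version, allowed_args, name):
    """Simplified sketch: emit a FutureWarning when more positional
    arguments are passed than the allowed ones (e.g. ``self`` and ``buf``)."""

    def decorate(func):
        num_allowed = len(allowed_args)

        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            if len(args) > num_allowed:
                warnings.warn(
                    f"Starting with pandas version {version} all arguments of "
                    f"{name} except for the argument '{allowed_args[-1]}' "
                    "will be keyword-only.",
                    FutureWarning,
                    stacklevel=2,
                )
            return func(*args, **kwargs)

        return wrapper

    return decorate


class Demo:
    @deprecate_nonkeyword_arguments(
        version="3.0.0", allowed_args=["self", "buf"], name="to_string"
    )
    def to_string(self, buf=None, na_rep="NaN"):
        return f"buf={buf}, na_rep={na_rep}"
```

Calling `Demo().to_string(None, "NA")` trips the warning, while `Demo().to_string(None, na_rep="NA")` does not, matching the test added in `test_to_string.py`.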
https://api.github.com/repos/pandas-dev/pandas/pulls/57375
2024-02-12T04:03:04Z
2024-02-17T13:11:08Z
2024-02-17T13:11:08Z
2024-02-17T17:14:36Z
DEPR: Positional arguments in Series.to_markdown
diff --git a/doc/source/whatsnew/v3.0.0.rst b/doc/source/whatsnew/v3.0.0.rst index 77c80dcfe7c7e..4620de570e230 100644 --- a/doc/source/whatsnew/v3.0.0.rst +++ b/doc/source/whatsnew/v3.0.0.rst @@ -102,6 +102,7 @@ Deprecations ~~~~~~~~~~~~ - Deprecated :meth:`Timestamp.utcfromtimestamp`, use ``Timestamp.fromtimestamp(ts, "UTC")`` instead (:issue:`56680`) - Deprecated :meth:`Timestamp.utcnow`, use ``Timestamp.now("UTC")`` instead (:issue:`56680`) +- Deprecated allowing non-keyword arguments in :meth:`Series.to_markdown` except ``buf``. (:issue:`57280`) - Deprecated allowing non-keyword arguments in :meth:`Series.to_string` except ``buf``. (:issue:`57280`) - diff --git a/pandas/core/series.py b/pandas/core/series.py index 27ae5d3a8596d..ed82a62d0f345 100644 --- a/pandas/core/series.py +++ b/pandas/core/series.py @@ -1606,6 +1606,42 @@ def to_string( f.write(result) return None + @overload + def to_markdown( + self, + buf: None = ..., + *, + mode: str = ..., + index: bool = ..., + storage_options: StorageOptions | None = ..., + **kwargs, + ) -> str: + ... + + @overload + def to_markdown( + self, + buf: IO[str], + *, + mode: str = ..., + index: bool = ..., + storage_options: StorageOptions | None = ..., + **kwargs, + ) -> None: + ... + + @overload + def to_markdown( + self, + buf: IO[str] | None, + *, + mode: str = ..., + index: bool = ..., + storage_options: StorageOptions | None = ..., + **kwargs, + ) -> str | None: + ... 
+ @doc( klass=_shared_doc_kwargs["klass"], storage_options=_shared_docs["storage_options"], @@ -1637,6 +1673,9 @@ def to_string( +----+----------+""" ), ) + @deprecate_nonkeyword_arguments( + version="3.0.0", allowed_args=["self", "buf"], name="to_markdown" + ) def to_markdown( self, buf: IO[str] | None = None, diff --git a/pandas/tests/io/formats/test_to_markdown.py b/pandas/tests/io/formats/test_to_markdown.py index 437f079c5f2f9..fffb1b9b9d2a4 100644 --- a/pandas/tests/io/formats/test_to_markdown.py +++ b/pandas/tests/io/formats/test_to_markdown.py @@ -3,10 +3,22 @@ import pytest import pandas as pd +import pandas._testing as tm pytest.importorskip("tabulate") +def test_keyword_deprecation(): + # GH 57280 + msg = ( + "Starting with pandas version 3.0.0 all arguments of to_markdown " + "except for the argument 'buf' will be keyword-only." + ) + s = pd.Series() + with tm.assert_produces_warning(FutureWarning, match=msg): + s.to_markdown(None, "wt") + + def test_simple(): buf = StringIO() df = pd.DataFrame([1, 2, 3])
- [X] closes #57280 - [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature - [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit). - [x] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions. - [X] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
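The bare `*` added to each `to_markdown` overload above is Python's keyword-only marker: at the type level everything after `buf` must be passed by keyword. Once the deprecation is enforced, the runtime signature will behave the same way, as this minimal sketch shows:

```python
def to_markdown(buf=None, *, mode="wt", index=True):
    """After the deprecation is enforced, everything past ``buf`` is
    keyword-only: passing it positionally raises TypeError immediately."""
    return (buf, mode, index)
```

So `to_markdown(None, mode="a")` works, but `to_markdown(None, "a")` raises `TypeError` rather than merely warning.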
https://api.github.com/repos/pandas-dev/pandas/pulls/57372
2024-02-12T02:18:44Z
2024-02-18T21:57:38Z
2024-02-18T21:57:38Z
2024-02-18T22:00:20Z
Remove close from StataReader
diff --git a/doc/source/whatsnew/v3.0.0.rst b/doc/source/whatsnew/v3.0.0.rst index aa378faac2a00..a830fb594051f 100644 --- a/doc/source/whatsnew/v3.0.0.rst +++ b/doc/source/whatsnew/v3.0.0.rst @@ -115,6 +115,7 @@ Removal of prior version deprecations/changes - Removed ``Series.__int__`` and ``Series.__float__``. Call ``int(Series.iloc[0])`` or ``float(Series.iloc[0])`` instead. (:issue:`51131`) - Removed ``Series.ravel`` (:issue:`56053`) - Removed ``Series.view`` (:issue:`56054`) +- Removed ``StataReader.close`` (:issue:`49228`) - Removed ``axis`` argument from :meth:`DataFrame.groupby`, :meth:`Series.groupby`, :meth:`DataFrame.rolling`, :meth:`Series.rolling`, :meth:`DataFrame.resample`, and :meth:`Series.resample` (:issue:`51203`) - Removed ``axis`` argument from all groupby operations (:issue:`50405`) - Removed ``pandas.api.types.is_interval`` and ``pandas.api.types.is_period``, use ``isinstance(obj, pd.Interval)`` and ``isinstance(obj, pd.Period)`` instead (:issue:`55264`) diff --git a/pandas/io/stata.py b/pandas/io/stata.py index c2a3db2d44b16..37ea940b3938a 100644 --- a/pandas/io/stata.py +++ b/pandas/io/stata.py @@ -1181,24 +1181,6 @@ def __exit__( if self._close_file: self._close_file() - def close(self) -> None: - """Close the handle if its open. - - .. deprecated: 2.0.0 - - The close method is not part of the public API. - The only supported way to use StataReader is to use it as a context manager. - """ - warnings.warn( - "The StataReader.close() method is not part of the public API and " - "will be removed in a future version without notice. 
" - "Using StataReader as a context manager is the only supported method.", - FutureWarning, - stacklevel=find_stack_level(), - ) - if self._close_file: - self._close_file() - def _set_encoding(self) -> None: """ Set string encoding which depends on file version diff --git a/pandas/tests/io/test_stata.py b/pandas/tests/io/test_stata.py index c12bcfb91a4c7..42a9e84218a81 100644 --- a/pandas/tests/io/test_stata.py +++ b/pandas/tests/io/test_stata.py @@ -2001,21 +2001,6 @@ def test_direct_read(datapath, monkeypatch): assert reader._path_or_buf is bio -def test_statareader_warns_when_used_without_context(datapath): - file_path = datapath("io", "data", "stata", "stata-compat-118.dta") - with tm.assert_produces_warning( - ResourceWarning, - match="without using a context manager", - ): - sr = StataReader(file_path) - sr.read() - with tm.assert_produces_warning( - FutureWarning, - match="is not part of the public API", - ): - sr.close() - - @pytest.mark.parametrize("version", [114, 117, 118, 119, None]) @pytest.mark.parametrize("use_dict", [True, False]) @pytest.mark.parametrize("infer", [True, False])
- [ ] closes #xxxx (Replace xxxx with the GitHub issue number) - [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature - [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit). - [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions. - [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
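With `close()` removed, the only supported way to release the handle is the context-manager protocol the diff keeps in `__exit__`. A minimal sketch of that pattern (hypothetical `Reader` class, not the real `StataReader`):

```python
class Reader:
    """Sketch of the context-manager-only pattern StataReader now follows:
    the file handle is released in __exit__, with no public close()."""

    def __init__(self, path):
        self._handle = open(path, "rb")
        self.closed = False

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc, tb):
        # Mirrors StataReader.__exit__: release the handle unconditionally.
        self._handle.close()
        self.closed = True
```

Usage is `with Reader(path) as r: ...`; the handle is closed on exit even if the body raises.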
https://api.github.com/repos/pandas-dev/pandas/pulls/57371
2024-02-11T22:05:23Z
2024-02-12T17:11:32Z
2024-02-12T17:11:32Z
2024-02-12T17:15:14Z
DOC: GL01 numpydoc validation
diff --git a/ci/code_checks.sh b/ci/code_checks.sh index 744f934142a24..823687cc20ca0 100755 --- a/ci/code_checks.sh +++ b/ci/code_checks.sh @@ -93,8 +93,8 @@ fi ### DOCSTRINGS ### if [[ -z "$CHECK" || "$CHECK" == "docstrings" ]]; then - MSG='Validate docstrings (GL02, GL03, GL04, GL05, GL06, GL07, GL09, GL10, SS01, SS02, SS03, SS04, SS05, PR03, PR04, PR05, PRO9, PR10, EX04, RT01, RT04, RT05, SA02, SA03)' ; echo $MSG - $BASE_DIR/scripts/validate_docstrings.py --format=actions --errors=GL02,GL03,GL04,GL05,GL06,GL07,GL09,GL10,SS02,SS03,SS04,SS05,PR03,PR04,PR05,PR09,PR10,EX04,RT01,RT04,RT05,SA02,SA03 + MSG='Validate docstrings (GL01, GL02, GL03, GL04, GL05, GL06, GL07, GL09, GL10, SS01, SS02, SS03, SS04, SS05, PR03, PR04, PR05, PRO9, PR10, EX04, RT01, RT04, RT05, SA02, SA03)' ; echo $MSG + $BASE_DIR/scripts/validate_docstrings.py --format=actions --errors=GL01,GL02,GL03,GL04,GL05,GL06,GL07,GL09,GL10,SS02,SS03,SS04,SS05,PR03,PR04,PR05,PR09,PR10,EX04,RT01,RT04,RT05,SA02,SA03 RET=$(($RET + $?)) ; echo $MSG "DONE" fi diff --git a/pandas/core/window/doc.py b/pandas/core/window/doc.py index 2cc7962c6bd7b..61f388c35df0f 100644 --- a/pandas/core/window/doc.py +++ b/pandas/core/window/doc.py @@ -11,7 +11,7 @@ def create_section_header(header: str) -> str: return "\n".join((header, "-" * len(header))) + "\n" -template_header = "Calculate the {window_method} {aggregation_description}.\n\n" +template_header = "\nCalculate the {window_method} {aggregation_description}.\n\n" template_returns = dedent( """ diff --git a/pandas/io/formats/style_render.py b/pandas/io/formats/style_render.py index a94d9ee5416c4..de475d145f3a0 100644 --- a/pandas/io/formats/style_render.py +++ b/pandas/io/formats/style_render.py @@ -57,17 +57,23 @@ class CSSDict(TypedDict): Subset = Union[slice, Sequence, Index] +def _gl01_adjust(obj: Any) -> Any: + """Adjust docstrings for Numpydoc GLO1.""" + obj.__doc__ = "\n" + obj.__doc__ + return obj + + class StylerRenderer: """ Base class to process rendering a 
Styler with a specified jinja2 template. """ - loader = jinja2.PackageLoader("pandas", "io/formats/templates") - env = jinja2.Environment(loader=loader, trim_blocks=True) - template_html = env.get_template("html.tpl") - template_html_table = env.get_template("html_table.tpl") - template_html_style = env.get_template("html_style.tpl") - template_latex = env.get_template("latex.tpl") + loader = _gl01_adjust(jinja2.PackageLoader("pandas", "io/formats/templates")) + env = _gl01_adjust(jinja2.Environment(loader=loader, trim_blocks=True)) + template_html = _gl01_adjust(env.get_template("html.tpl")) + template_html_table = _gl01_adjust(env.get_template("html_table.tpl")) + template_html_style = _gl01_adjust(env.get_template("html_style.tpl")) + template_latex = _gl01_adjust(env.get_template("latex.tpl")) def __init__( self,
- [x] closes #25324 - [x] tests added / passed - [x] Ensure all linting tests pass, see [here](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit) for how to run them
https://api.github.com/repos/pandas-dev/pandas/pulls/44567
2021-11-22T00:34:24Z
2021-11-25T20:48:16Z
2021-11-25T20:48:16Z
2021-11-26T02:05:51Z
PERF: only apply nanops rowwise optimization for narrow arrays
diff --git a/pandas/core/nanops.py b/pandas/core/nanops.py index d728e6e695c7a..52d2322b11f42 100644 --- a/pandas/core/nanops.py +++ b/pandas/core/nanops.py @@ -455,7 +455,8 @@ def _na_for_min_count(values: np.ndarray, axis: int | None) -> Scalar | np.ndarr def maybe_operate_rowwise(func: F) -> F: """ NumPy operations on C-contiguous ndarrays with axis=1 can be - very slow. Operate row-by-row and concatenate the results. + very slow if axis 1 >> axis 0. + Operate row-by-row and concatenate the results. """ @functools.wraps(func) @@ -464,6 +465,9 @@ def newfunc(values: np.ndarray, *, axis: int | None = None, **kwargs): axis == 1 and values.ndim == 2 and values.flags["C_CONTIGUOUS"] + # only takes this path for wide arrays (long dataframes), for threshold see + # https://github.com/pandas-dev/pandas/pull/43311#issuecomment-974891737 + and (values.shape[1] / 1000) > values.shape[0] and values.dtype != object and values.dtype != bool ):
See the timings in https://github.com/pandas-dev/pandas/pull/43311#issuecomment-974891737 (this is a regression compared to previous release mostly when using ArrayManager (and thus doesn't need a whatsnew) and is covered by the FrameOps benchmarks)
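The guard added in the diff only takes the row-by-row path when the array is far wider than it is tall. Expressed as a pure shape predicate (the 1000x threshold comes from the benchmark thread linked in the code comment):

```python
def should_operate_rowwise(shape, axis, c_contiguous, dtype_kind):
    """Sketch of the condition in ``maybe_operate_rowwise``: only fall back
    to row-by-row evaluation for 2D C-contiguous numeric arrays that are
    roughly 1000x wider than tall (see pandas#43311 benchmark comment)."""
    nrows, ncols = shape
    return (
        axis == 1
        and c_contiguous
        and (ncols / 1000) > nrows
        and dtype_kind not in ("O", "b")  # object and bool dtypes are excluded
    )
```

A 2x10000 float array qualifies; a square 1000x1000 array does not, which is exactly the regression this PR fixes.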
https://api.github.com/repos/pandas-dev/pandas/pulls/44566
2021-11-21T20:48:48Z
2021-11-23T15:55:03Z
2021-11-23T15:55:03Z
2021-11-24T21:45:39Z
Backport PR #44561 on branch 1.3.x (CI: fix deprecation warning on interpolation)
diff --git a/pandas/compat/numpy/__init__.py b/pandas/compat/numpy/__init__.py index 619713f28ee2d..2d60164ccf075 100644 --- a/pandas/compat/numpy/__init__.py +++ b/pandas/compat/numpy/__init__.py @@ -12,9 +12,15 @@ np_version_under1p18 = _nlv < Version("1.18") np_version_under1p19 = _nlv < Version("1.19") np_version_under1p20 = _nlv < Version("1.20") +np_version_under1p22 = _nlv < Version("1.22") is_numpy_dev = _nlv.dev is not None _min_numpy_ver = "1.17.3" +if is_numpy_dev or not np_version_under1p22: + np_percentile_argname = "method" +else: + np_percentile_argname = "interpolation" + if _nlv < Version(_min_numpy_ver): raise ImportError( diff --git a/pandas/core/nanops.py b/pandas/core/nanops.py index 3b03a28afe163..aabe03586385b 100644 --- a/pandas/core/nanops.py +++ b/pandas/core/nanops.py @@ -29,6 +29,7 @@ Shape, ) from pandas.compat._optional import import_optional_dependency +from pandas.compat.numpy import np_percentile_argname from pandas.core.dtypes.common import ( get_dtype, @@ -1667,7 +1668,7 @@ def _nanpercentile_1d( if len(values) == 0: return np.array([na_value] * len(q), dtype=values.dtype) - return np.percentile(values, q, interpolation=interpolation) + return np.percentile(values, q, **{np_percentile_argname: interpolation}) def nanpercentile( @@ -1720,7 +1721,9 @@ def nanpercentile( result = np.array(result, dtype=values.dtype, copy=False).T return result else: - return np.percentile(values, q, axis=1, interpolation=interpolation) + return np.percentile( + values, q, axis=1, **{np_percentile_argname: interpolation} + ) def na_accum_func(values: ArrayLike, accum_func, *, skipna: bool) -> ArrayLike: diff --git a/pandas/tests/frame/methods/test_quantile.py b/pandas/tests/frame/methods/test_quantile.py index f341014110e18..67591c8532f43 100644 --- a/pandas/tests/frame/methods/test_quantile.py +++ b/pandas/tests/frame/methods/test_quantile.py @@ -1,6 +1,8 @@ import numpy as np import pytest +from pandas.compat.numpy import np_percentile_argname + 
import pandas as pd from pandas import ( DataFrame, @@ -153,7 +155,10 @@ def test_quantile_interpolation(self): # cross-check interpolation=nearest results in original dtype exp = np.percentile( - np.array([[1, 2, 3], [2, 3, 4]]), 0.5, axis=0, interpolation="nearest" + np.array([[1, 2, 3], [2, 3, 4]]), + 0.5, + axis=0, + **{np_percentile_argname: "nearest"}, ) expected = Series(exp, index=[1, 2, 3], name=0.5, dtype="int64") tm.assert_series_equal(result, expected) @@ -167,7 +172,7 @@ def test_quantile_interpolation(self): np.array([[1.0, 2.0, 3.0], [2.0, 3.0, 4.0]]), 0.5, axis=0, - interpolation="nearest", + **{np_percentile_argname: "nearest"}, ) expected = Series(exp, index=[1, 2, 3], name=0.5, dtype="float64") tm.assert_series_equal(result, expected)
Backport PR #44561: CI: fix deprecation warning on interpolation
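The backport above handles NumPy 1.22 renaming `np.percentile`'s `interpolation=` keyword to `method=` by picking the argument name at import time and forwarding it via dict unpacking. The pattern, sketched with a stub callable and a version tuple in place of the real `Version` comparison:

```python
def percentile_compat(func, values, q, numpy_version):
    """Sketch of the np_percentile_argname approach: choose the keyword
    name by NumPy version and forward it with ``**{name: value}``."""
    argname = "method" if numpy_version >= (1, 22) else "interpolation"
    return func(values, q, **{argname: "nearest"})
```

This keeps a single call site working on both sides of the rename, which is why the diff rewrites `interpolation=interpolation` as `**{np_percentile_argname: interpolation}`.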
https://api.github.com/repos/pandas-dev/pandas/pulls/44565
2021-11-21T20:33:40Z
2021-11-21T22:04:32Z
2021-11-21T22:04:32Z
2021-11-21T22:04:32Z
BUG: DataFrame.shift with axis=1 and mismatched fill_value
diff --git a/doc/source/whatsnew/v1.4.0.rst b/doc/source/whatsnew/v1.4.0.rst index fecfc956852e3..653082fa3b67a 100644 --- a/doc/source/whatsnew/v1.4.0.rst +++ b/doc/source/whatsnew/v1.4.0.rst @@ -753,6 +753,7 @@ Other - Bug in :meth:`Series.to_frame` and :meth:`Index.to_frame` ignoring the ``name`` argument when ``name=None`` is explicitly passed (:issue:`44212`) - Bug in :meth:`Series.replace` and :meth:`DataFrame.replace` with ``value=None`` and ExtensionDtypes (:issue:`44270`) - Bug in :meth:`FloatingArray.equals` failing to consider two arrays equal if they contain ``np.nan`` values (:issue:`44382`) +- Bug in :meth:`DataFrame.shift` with ``axis=1`` and ``ExtensionDtype`` columns incorrectly raising when an incompatible ``fill_value`` is passed (:issue:`44564`) - Bug in :meth:`DataFrame.diff` when passing a NumPy integer object instead of an ``int`` object (:issue:`44572`) - diff --git a/pandas/core/internals/managers.py b/pandas/core/internals/managers.py index 74d785586b950..d69709bf9d06c 100644 --- a/pandas/core/internals/managers.py +++ b/pandas/core/internals/managers.py @@ -36,6 +36,7 @@ is_1d_only_ea_dtype, is_dtype_equal, is_list_like, + needs_i8_conversion, ) from pandas.core.dtypes.dtypes import ExtensionDtype from pandas.core.dtypes.generic import ( @@ -362,7 +363,28 @@ def shift(self: T, periods: int, axis: int, fill_value) -> T: if fill_value is lib.no_default: fill_value = None - if axis == 0 and self.ndim == 2 and self.nblocks > 1: + if ( + axis == 0 + and self.ndim == 2 + and ( + self.nblocks > 1 + or ( + # If we only have one block and we know that we can't + # keep the same dtype (i.e. the _can_hold_element check) + # then we can go through the reindex_indexer path + # (and avoid casting logic in the Block method). + # The exception to this (until 2.0) is datetimelike + # dtypes with integers, which cast. 
+ not self.blocks[0]._can_hold_element(fill_value) + # TODO(2.0): remove special case for integer-with-datetimelike + # once deprecation is enforced + and not ( + lib.is_integer(fill_value) + and needs_i8_conversion(self.blocks[0].dtype) + ) + ) + ) + ): # GH#35488 we need to watch out for multi-block cases # We only get here with fill_value not-lib.no_default ncols = self.shape[0] diff --git a/pandas/tests/frame/methods/test_shift.py b/pandas/tests/frame/methods/test_shift.py index d8511581f0e94..9cd0b8bb5b315 100644 --- a/pandas/tests/frame/methods/test_shift.py +++ b/pandas/tests/frame/methods/test_shift.py @@ -331,6 +331,83 @@ def test_shift_dt64values_int_fill_deprecated(self): expected = DataFrame({"A": [pd.Timestamp(0), pd.Timestamp(0)], "B": df2["A"]}) tm.assert_frame_equal(result, expected) + # same thing but not consolidated + # This isn't great that we get different behavior, but + # that will go away when the deprecation is enforced + df3 = DataFrame({"A": ser}) + df3["B"] = ser + assert len(df3._mgr.arrays) == 2 + result = df3.shift(1, axis=1, fill_value=0) + expected = DataFrame({"A": [0, 0], "B": df2["A"]}) + tm.assert_frame_equal(result, expected) + + @pytest.mark.parametrize( + "as_cat", + [ + pytest.param( + True, + marks=pytest.mark.xfail( + reason="_can_hold_element incorrectly always returns True" + ), + ), + False, + ], + ) + @pytest.mark.parametrize( + "vals", + [ + date_range("2020-01-01", periods=2), + date_range("2020-01-01", periods=2, tz="US/Pacific"), + pd.period_range("2020-01-01", periods=2, freq="D"), + pd.timedelta_range("2020 Days", periods=2, freq="D"), + pd.interval_range(0, 3, periods=2), + pytest.param( + pd.array([1, 2], dtype="Int64"), + marks=pytest.mark.xfail( + reason="_can_hold_element incorrectly always returns True" + ), + ), + pytest.param( + pd.array([1, 2], dtype="Float32"), + marks=pytest.mark.xfail( + reason="_can_hold_element incorrectly always returns True" + ), + ), + ], + ids=lambda x: str(x.dtype), + ) + def 
test_shift_dt64values_axis1_invalid_fill( + self, vals, as_cat, using_array_manager, request + ): + # GH#44564 + if using_array_manager: + mark = pytest.mark.xfail(raises=NotImplementedError) + request.node.add_marker(mark) + + ser = Series(vals) + if as_cat: + ser = ser.astype("category") + + df = DataFrame({"A": ser}) + result = df.shift(-1, axis=1, fill_value="foo") + expected = DataFrame({"A": ["foo", "foo"]}) + tm.assert_frame_equal(result, expected) + + # same thing but multiple blocks + df2 = DataFrame({"A": ser, "B": ser}) + df2._consolidate_inplace() + + result = df2.shift(-1, axis=1, fill_value="foo") + expected = DataFrame({"A": df2["B"], "B": ["foo", "foo"]}) + tm.assert_frame_equal(result, expected) + + # same thing but not consolidated + df3 = DataFrame({"A": ser}) + df3["B"] = ser + assert len(df3._mgr.arrays) == 2 + result = df3.shift(-1, axis=1, fill_value="foo") + tm.assert_frame_equal(result, expected) + def test_shift_axis1_categorical_columns(self): # GH#38434 ci = CategoricalIndex(["a", "b", "c"])
- [ ] closes #xxxx - [x] tests added / passed - [ ] Ensure all linting tests pass, see [here](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit) for how to run them - [x] whatsnew entry
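The condition the diff adds to `BlockManager.shift` can be distilled into a small predicate (hypothetical flag names standing in for the real `_can_hold_element` and dtype checks):

```python
def use_reindex_path(nblocks, can_hold_fill, fill_is_int, dtype_is_datetimelike):
    """Sketch of the fixed condition: take the reindex_indexer path for
    multi-block frames, or for single-block frames whose dtype cannot hold
    fill_value -- except the deprecated datetimelike-with-integer case,
    which still casts inside the Block method until 2.0."""
    if nblocks > 1:
        return True
    return not can_hold_fill and not (fill_is_int and dtype_is_datetimelike)
```

Routing incompatible fills through `reindex_indexer` avoids the Block-level casting logic that previously raised for ExtensionDtype columns.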
https://api.github.com/repos/pandas-dev/pandas/pulls/44564
2021-11-21T17:22:54Z
2021-11-25T20:46:59Z
2021-11-25T20:46:59Z
2021-11-26T04:44:52Z
CLN/PERF: remove unnecessary ensure_platform_int
diff --git a/pandas/core/array_algos/take.py b/pandas/core/array_algos/take.py index 87d55702b33e0..c9d6640101a8b 100644 --- a/pandas/core/array_algos/take.py +++ b/pandas/core/array_algos/take.py @@ -179,8 +179,6 @@ def take_1d( Note: similarly to `take_nd`, this function assumes that the indexer is a valid(ated) indexer with no out of bound indices. """ - indexer = ensure_platform_int(indexer) - if not isinstance(arr, np.ndarray): # ExtensionArray -> dispatch to their method return arr.take(indexer, fill_value=fill_value, allow_fill=allow_fill)
This line was added in https://github.com/pandas-dev/pandas/pull/43977, while the docstring and type information (as added in https://github.com/pandas-dev/pandas/pull/43977) indicates this is already the case.
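The removed cast was redundant because `take_1d`'s documented contract already requires a validated indexer. As a pure-Python sketch (list-based, not the real NumPy implementation), the contract can live in an assertion instead of an unconditional normalization:

```python
def take_1d(arr, indexer):
    """Sketch of the contract: the caller guarantees ``indexer`` holds
    in-bounds platform integers, so no cast or bounds-fixup is repeated
    here -- only a cheap debug check."""
    assert all(-len(arr) <= i < len(arr) for i in indexer), "caller must validate"
    return [arr[i] for i in indexer]
```

Dropping the repeated `ensure_platform_int` call removes per-call overhead on a hot path without changing behavior for well-formed input.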
https://api.github.com/repos/pandas-dev/pandas/pulls/44563
2021-11-21T17:03:11Z
2021-11-22T08:54:21Z
2021-11-22T08:54:21Z
2021-11-22T08:54:25Z
PERF/BUG: ensure we store contiguous arrays in DataFrame(ndarray) for ArrayManager
diff --git a/pandas/core/frame.py b/pandas/core/frame.py index 803d1c914c954..b8d8e7a2bb893 100644 --- a/pandas/core/frame.py +++ b/pandas/core/frame.py @@ -593,13 +593,6 @@ def __init__( copy: bool | None = None, ): - if copy is None: - if isinstance(data, dict) or data is None: - # retain pre-GH#38939 default behavior - copy = True - else: - copy = False - if data is None: data = {} if dtype is not None: @@ -618,6 +611,21 @@ def __init__( manager = get_option("mode.data_manager") + if copy is None: + if isinstance(data, dict): + # retain pre-GH#38939 default behavior + copy = True + elif ( + manager == "array" + and isinstance(data, (np.ndarray, ExtensionArray)) + and data.ndim == 2 + ): + # INFO(ArrayManager) by default copy the 2D input array to get + # contiguous 1D arrays + copy = True + else: + copy = False + if isinstance(data, (BlockManager, ArrayManager)): mgr = self._init_mgr( data, axes={"index": index, "columns": columns}, dtype=dtype, copy=copy diff --git a/pandas/core/internals/construction.py b/pandas/core/internals/construction.py index 77f3db0d09df5..61d38c43aca24 100644 --- a/pandas/core/internals/construction.py +++ b/pandas/core/internals/construction.py @@ -290,6 +290,10 @@ def ndarray_to_mgr( if not len(values) and columns is not None and len(columns): values = np.empty((0, 1), dtype=object) + # if the array preparation does a copy -> avoid this for ArrayManager, + # since the copy is done on conversion to 1D arrays + copy_on_sanitize = False if typ == "array" else copy + vdtype = getattr(values, "dtype", None) if is_1d_only_ea_dtype(vdtype) or isinstance(dtype, ExtensionDtype): # GH#19157 @@ -324,7 +328,7 @@ def ndarray_to_mgr( else: # by definition an array here # the dtypes will be coerced to a single dtype - values = _prep_ndarray(values, copy=copy) + values = _prep_ndarray(values, copy=copy_on_sanitize) if dtype is not None and not is_dtype_equal(values.dtype, dtype): shape = values.shape @@ -334,7 +338,7 @@ def ndarray_to_mgr( rcf = not 
(is_integer_dtype(dtype) and values.dtype.kind == "f") values = sanitize_array( - flat, None, dtype=dtype, copy=copy, raise_cast_failure=rcf + flat, None, dtype=dtype, copy=copy_on_sanitize, raise_cast_failure=rcf ) values = values.reshape(shape) @@ -363,6 +367,9 @@ def ndarray_to_mgr( values = ensure_wrapped_if_datetimelike(values) arrays = [values[:, i] for i in range(values.shape[1])] + if copy: + arrays = [arr.copy() for arr in arrays] + return ArrayManager(arrays, [index, columns], verify_integrity=False) values = values.T diff --git a/pandas/tests/frame/methods/test_values.py b/pandas/tests/frame/methods/test_values.py index 477099fba75e1..f755b0addfd6d 100644 --- a/pandas/tests/frame/methods/test_values.py +++ b/pandas/tests/frame/methods/test_values.py @@ -226,8 +226,8 @@ def test_values_lcd(self, mixed_float_frame, mixed_int_frame): class TestPrivateValues: - def test_private_values_dt64tz(self, request): - + @td.skip_array_manager_invalid_test + def test_private_values_dt64tz(self): dta = date_range("2000", periods=4, tz="US/Central")._data.reshape(-1, 1) df = DataFrame(dta, columns=["A"]) diff --git a/pandas/tests/frame/test_constructors.py b/pandas/tests/frame/test_constructors.py index 7347640fc05a7..f070cf3dd20f4 100644 --- a/pandas/tests/frame/test_constructors.py +++ b/pandas/tests/frame/test_constructors.py @@ -264,12 +264,17 @@ def test_constructor_dtype_nocast_view_dataframe(self): should_be_view[0][0] = 99 assert df.values[0, 0] == 99 - @td.skip_array_manager_invalid_test # TODO(ArrayManager) keep view on 2D array? 
- def test_constructor_dtype_nocast_view_2d_array(self): - df = DataFrame([[1, 2]]) - should_be_view = DataFrame(df.values, dtype=df[0].dtype) - should_be_view[0][0] = 97 - assert df.values[0, 0] == 97 + def test_constructor_dtype_nocast_view_2d_array(self, using_array_manager): + df = DataFrame([[1, 2], [3, 4]], dtype="int64") + if not using_array_manager: + should_be_view = DataFrame(df.values, dtype=df[0].dtype) + should_be_view[0][0] = 97 + assert df.values[0, 0] == 97 + else: + # INFO(ArrayManager) DataFrame(ndarray) doesn't necessarily preserve + # a view on the array to ensure contiguous 1D arrays + df2 = DataFrame(df.values, dtype=df[0].dtype) + assert df2._mgr.arrays[0].flags.c_contiguous @td.skip_array_manager_invalid_test def test_1d_object_array_does_not_copy(self): @@ -2111,17 +2116,29 @@ def test_constructor_frame_copy(self, float_frame): assert (cop["A"] == 5).all() assert not (float_frame["A"] == 5).all() - # TODO(ArrayManager) keep view on 2D array? - @td.skip_array_manager_not_yet_implemented - def test_constructor_ndarray_copy(self, float_frame): - df = DataFrame(float_frame.values) + def test_constructor_ndarray_copy(self, float_frame, using_array_manager): + if not using_array_manager: + df = DataFrame(float_frame.values) - float_frame.values[5] = 5 - assert (df.values[5] == 5).all() + float_frame.values[5] = 5 + assert (df.values[5] == 5).all() - df = DataFrame(float_frame.values, copy=True) - float_frame.values[6] = 6 - assert not (df.values[6] == 6).all() + df = DataFrame(float_frame.values, copy=True) + float_frame.values[6] = 6 + assert not (df.values[6] == 6).all() + else: + arr = float_frame.values.copy() + # default: copy to ensure contiguous arrays + df = DataFrame(arr) + assert df._mgr.arrays[0].flags.c_contiguous + arr[0, 0] = 100 + assert df.iloc[0, 0] != 100 + + # manually specify copy=False + df = DataFrame(arr, copy=False) + assert not df._mgr.arrays[0].flags.c_contiguous + arr[0, 0] = 1000 + assert df.iloc[0, 0] == 1000 # 
TODO(ArrayManager) keep view on Series? @td.skip_array_manager_not_yet_implemented
https://github.com/pandas-dev/pandas/pull/42689 removed an "unwanted" copy, but I actually added that copy on purpose: it ensures we store contiguous 1D arrays by default, which is important for performance reasons.
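The contiguity argument can be seen directly in NumPy — a minimal sketch, independent of the pandas internals above:

```python
import numpy as np

# Column slices of a C-ordered 2D array are strided views, not
# contiguous memory, which is slower for element-wise kernels.
arr2d = np.arange(6, dtype="int64").reshape(3, 2)
col_view = arr2d[:, 0]
print(col_view.flags.c_contiguous)  # False

# Copying each column up front (what the constructor now does by
# default on the ArrayManager path) yields contiguous 1D arrays.
col_copy = arr2d[:, 0].copy()
print(col_copy.flags.c_contiguous)  # True
```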
https://api.github.com/repos/pandas-dev/pandas/pulls/44562
2021-11-21T16:22:51Z
2021-11-30T07:32:07Z
2021-11-30T07:32:07Z
2021-11-30T07:32:10Z
CI: fix deprecation warning on interpolation
diff --git a/pandas/compat/numpy/__init__.py b/pandas/compat/numpy/__init__.py index 5b87257651a2d..2792a756bf20c 100644 --- a/pandas/compat/numpy/__init__.py +++ b/pandas/compat/numpy/__init__.py @@ -11,9 +11,15 @@ _nlv = Version(_np_version) np_version_under1p19 = _nlv < Version("1.19") np_version_under1p20 = _nlv < Version("1.20") +np_version_under1p22 = _nlv < Version("1.22") is_numpy_dev = _nlv.dev is not None _min_numpy_ver = "1.18.5" +if is_numpy_dev or not np_version_under1p22: + np_percentile_argname = "method" +else: + np_percentile_argname = "interpolation" + if _nlv < Version(_min_numpy_ver): raise ImportError( diff --git a/pandas/core/nanops.py b/pandas/core/nanops.py index 10d95dfbb9181..d728e6e695c7a 100644 --- a/pandas/core/nanops.py +++ b/pandas/core/nanops.py @@ -30,6 +30,7 @@ npt, ) from pandas.compat._optional import import_optional_dependency +from pandas.compat.numpy import np_percentile_argname from pandas.core.dtypes.common import ( is_any_int_dtype, @@ -1694,7 +1695,7 @@ def _nanpercentile_1d( if len(values) == 0: return np.array([na_value] * len(q), dtype=values.dtype) - return np.percentile(values, q, interpolation=interpolation) + return np.percentile(values, q, **{np_percentile_argname: interpolation}) def nanpercentile( @@ -1747,7 +1748,9 @@ def nanpercentile( result = np.array(result, dtype=values.dtype, copy=False).T return result else: - return np.percentile(values, q, axis=1, interpolation=interpolation) + return np.percentile( + values, q, axis=1, **{np_percentile_argname: interpolation} + ) def na_accum_func(values: ArrayLike, accum_func, *, skipna: bool) -> ArrayLike: diff --git a/pandas/tests/frame/methods/test_quantile.py b/pandas/tests/frame/methods/test_quantile.py index 5773edbdbcdec..ed1623cd87aac 100644 --- a/pandas/tests/frame/methods/test_quantile.py +++ b/pandas/tests/frame/methods/test_quantile.py @@ -1,6 +1,8 @@ import numpy as np import pytest +from pandas.compat.numpy import np_percentile_argname + import pandas as 
pd from pandas import ( DataFrame, @@ -153,7 +155,10 @@ def test_quantile_interpolation(self): # cross-check interpolation=nearest results in original dtype exp = np.percentile( - np.array([[1, 2, 3], [2, 3, 4]]), 0.5, axis=0, interpolation="nearest" + np.array([[1, 2, 3], [2, 3, 4]]), + 0.5, + axis=0, + **{np_percentile_argname: "nearest"}, ) expected = Series(exp, index=[1, 2, 3], name=0.5, dtype="int64") tm.assert_series_equal(result, expected) @@ -167,7 +172,7 @@ def test_quantile_interpolation(self): np.array([[1.0, 2.0, 3.0], [2.0, 3.0, 4.0]]), 0.5, axis=0, - interpolation="nearest", + **{np_percentile_argname: "nearest"}, ) expected = Series(exp, index=[1, 2, 3], name=0.5, dtype="float64") tm.assert_series_equal(result, expected)
Let's see if this fixes npdev / python dev builds
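For context, the shim works around NumPy 1.22 renaming `np.percentile`'s `interpolation` argument to `method`. A minimal standalone sketch of the same version gate (not the pandas code itself):

```python
import numpy as np

# Pick the keyword name np.percentile expects: NumPy 1.22 renamed
# "interpolation" to "method" (and deprecated the old name).
if np.lib.NumpyVersion(np.__version__) >= "1.22.0":
    np_percentile_argname = "method"
else:
    np_percentile_argname = "interpolation"

# Dict unpacking lets a single call site work on both old and new NumPy.
values = np.array([1.0, 2.0, 3.0, 4.0])
result = np.percentile(values, 25, **{np_percentile_argname: "nearest"})
print(result)  # 2.0
```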
https://api.github.com/repos/pandas-dev/pandas/pulls/44561
2021-11-21T15:42:18Z
2021-11-21T20:33:07Z
2021-11-21T20:33:07Z
2021-11-21T20:45:30Z
PERF: avoid copy in concatenate_array_managers if reindex already copies
diff --git a/pandas/core/internals/array_manager.py b/pandas/core/internals/array_manager.py index fcd5cd0979252..598974979fefb 100644 --- a/pandas/core/internals/array_manager.py +++ b/pandas/core/internals/array_manager.py @@ -527,8 +527,8 @@ def copy_func(ax): if deep: new_arrays = [arr.copy() for arr in self.arrays] else: - new_arrays = self.arrays - return type(self)(new_arrays, new_axes) + new_arrays = list(self.arrays) + return type(self)(new_arrays, new_axes, verify_integrity=False) def reindex_indexer( self: T, diff --git a/pandas/core/internals/concat.py b/pandas/core/internals/concat.py index f14f3c4a38430..782842d167570 100644 --- a/pandas/core/internals/concat.py +++ b/pandas/core/internals/concat.py @@ -77,10 +77,16 @@ def _concatenate_array_managers( # reindex all arrays mgrs = [] for mgr, indexers in mgrs_indexers: + axis1_made_copy = False for ax, indexer in indexers.items(): mgr = mgr.reindex_indexer( axes[ax], indexer, axis=ax, allow_dups=True, use_na_proxy=True ) + if ax == 1 and indexer is not None: + axis1_made_copy = True + if copy and concat_axis == 0 and not axis1_made_copy: + # for concat_axis 1 we will always get a copy through concat_arrays + mgr = mgr.copy() mgrs.append(mgr) if concat_axis == 1: @@ -94,8 +100,6 @@ def _concatenate_array_managers( # concatting along the columns -> combine reindexed arrays in a single manager assert concat_axis == 0 arrays = list(itertools.chain.from_iterable([mgr.arrays for mgr in mgrs])) - if copy: - arrays = [x.copy() for x in arrays] new_mgr = ArrayManager(arrays, [axes[1], axes[0]], verify_integrity=False) return new_mgr
@jbrockmendel https://github.com/pandas-dev/pandas/pull/42797 added an unconditional copy, but if the arrays have been reindexed, you are already guaranteed to have a copy, so the extra copy is redundant. Skipping it gives a 20% improvement on some of the merge benchmarks.
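The invariant being relied on — reindexing with an integer indexer already allocates fresh memory — can be sketched with plain NumPy:

```python
import numpy as np

arr = np.arange(10)

# Reindexing with an integer indexer (np.take under the hood) already
# returns a freshly allocated array ...
reindexed = arr.take([0, 2, 4])
print(np.shares_memory(arr, reindexed))  # False

# ... so copying it again, as the old code did unconditionally, would
# just double the allocation work.
redundant = reindexed.copy()
print(np.shares_memory(reindexed, redundant))  # False
```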
https://api.github.com/repos/pandas-dev/pandas/pulls/44559
2021-11-21T14:17:58Z
2021-12-06T14:27:49Z
2021-12-06T14:27:49Z
2021-12-06T14:28:53Z
Backport PR #44557 on branch 1.3.x (DOC: follow-up to #44518, move release note)
diff --git a/doc/source/whatsnew/v1.3.5.rst b/doc/source/whatsnew/v1.3.5.rst index 34e67e51e47e3..dabd9a650f45b 100644 --- a/doc/source/whatsnew/v1.3.5.rst +++ b/doc/source/whatsnew/v1.3.5.rst @@ -16,6 +16,7 @@ Fixed regressions ~~~~~~~~~~~~~~~~~ - Fixed regression in :meth:`Series.equals` when comparing floats with dtype object to None (:issue:`44190`) - Fixed regression in :func:`merge_asof` raising error when array was supplied as join key (:issue:`42844`) +- Fixed regression in creating a :class:`DataFrame` from a timezone-aware :class:`Timestamp` scalar near a Daylight Savings Time transition (:issue:`42505`) - Fixed performance regression in :func:`read_csv` (:issue:`44106`) - Fixed regression in :meth:`Series.duplicated` and :meth:`Series.drop_duplicates` when Series has :class:`Categorical` dtype with boolean categories (:issue:`44351`) -
Backport PR #44557
https://api.github.com/repos/pandas-dev/pandas/pulls/44558
2021-11-21T13:42:44Z
2021-11-21T14:55:38Z
2021-11-21T14:55:38Z
2021-11-21T14:55:42Z
DOC: follow-up to #44518, move release note
diff --git a/doc/source/whatsnew/v1.3.5.rst b/doc/source/whatsnew/v1.3.5.rst index 34e67e51e47e3..dabd9a650f45b 100644 --- a/doc/source/whatsnew/v1.3.5.rst +++ b/doc/source/whatsnew/v1.3.5.rst @@ -16,6 +16,7 @@ Fixed regressions ~~~~~~~~~~~~~~~~~ - Fixed regression in :meth:`Series.equals` when comparing floats with dtype object to None (:issue:`44190`) - Fixed regression in :func:`merge_asof` raising error when array was supplied as join key (:issue:`42844`) +- Fixed regression in creating a :class:`DataFrame` from a timezone-aware :class:`Timestamp` scalar near a Daylight Savings Time transition (:issue:`42505`) - Fixed performance regression in :func:`read_csv` (:issue:`44106`) - Fixed regression in :meth:`Series.duplicated` and :meth:`Series.drop_duplicates` when Series has :class:`Categorical` dtype with boolean categories (:issue:`44351`) - diff --git a/doc/source/whatsnew/v1.4.0.rst b/doc/source/whatsnew/v1.4.0.rst index b962add29db33..62ab6f934baec 100644 --- a/doc/source/whatsnew/v1.4.0.rst +++ b/doc/source/whatsnew/v1.4.0.rst @@ -576,7 +576,6 @@ Conversion - Bug in :class:`IntegerDtype` not allowing coercion from string dtype (:issue:`25472`) - Bug in :func:`to_datetime` with ``arg:xr.DataArray`` and ``unit="ns"`` specified raises TypeError (:issue:`44053`) - Bug in :meth:`DataFrame.convert_dtypes` not returning the correct type when a subclass does not overload :meth:`_constructor_sliced` (:issue:`43201`) -- Bug in creating a :class:`DataFrame` from a timezone-aware :class:`Timestamp` scalar near a Daylight Savings Time transition (:issue:`42505`) - Strings
follow-up to #44518
https://api.github.com/repos/pandas-dev/pandas/pulls/44557
2021-11-21T11:50:47Z
2021-11-21T13:38:32Z
2021-11-21T13:38:32Z
2021-11-21T13:43:16Z
Partial Backport PR #44529 on branch 1.3.x (CLN: tighten noqas)
diff --git a/doc/source/user_guide/io.rst b/doc/source/user_guide/io.rst index 1b28aa2900f65..bb0b5ecb2c0ee 100644 --- a/doc/source/user_guide/io.rst +++ b/doc/source/user_guide/io.rst @@ -2977,6 +2977,7 @@ Read in the content of the "books.xml" as instance of ``StringIO`` or Even read XML from AWS S3 buckets such as Python Software Foundation's IRS 990 Form: .. ipython:: python + :okwarning: df = pd.read_xml( "s3://irs-form-990/201923199349319487_public.xml",
xref https://github.com/pandas-dev/pandas/pull/44529#issuecomment-974722662
https://api.github.com/repos/pandas-dev/pandas/pulls/44556
2021-11-21T11:30:48Z
2021-11-21T13:37:30Z
2021-11-21T13:37:30Z
2021-11-21T13:37:33Z
Backport PR #44523 on branch 1.3.x (COMPAT: Matplotlib 3.5.0)
diff --git a/pandas/plotting/_matplotlib/compat.py b/pandas/plotting/_matplotlib/compat.py index 70ddd1ca09c7e..5569b1f2979b0 100644 --- a/pandas/plotting/_matplotlib/compat.py +++ b/pandas/plotting/_matplotlib/compat.py @@ -24,3 +24,4 @@ def inner(): mpl_ge_3_2_0 = _mpl_version("3.2.0", operator.ge) mpl_ge_3_3_0 = _mpl_version("3.3.0", operator.ge) mpl_ge_3_4_0 = _mpl_version("3.4.0", operator.ge) +mpl_ge_3_5_0 = _mpl_version("3.5.0", operator.ge) diff --git a/pandas/plotting/_matplotlib/converter.py b/pandas/plotting/_matplotlib/converter.py index 7e3bf0b224e0e..5fcd61f2b4f11 100644 --- a/pandas/plotting/_matplotlib/converter.py +++ b/pandas/plotting/_matplotlib/converter.py @@ -353,8 +353,8 @@ def get_locator(self, dmin, dmax): locator = MilliSecondLocator(self.tz) locator.set_axis(self.axis) - locator.set_view_interval(*self.axis.get_view_interval()) - locator.set_data_interval(*self.axis.get_data_interval()) + locator.axis.set_view_interval(*self.axis.get_view_interval()) + locator.axis.set_data_interval(*self.axis.get_data_interval()) return locator return dates.AutoDateLocator.get_locator(self, dmin, dmax) diff --git a/pandas/plotting/_matplotlib/core.py b/pandas/plotting/_matplotlib/core.py index 7ddab91a24ec0..dbdfff8e27b97 100644 --- a/pandas/plotting/_matplotlib/core.py +++ b/pandas/plotting/_matplotlib/core.py @@ -983,6 +983,7 @@ def _plot_colorbar(self, ax: Axes, **kwds): # use the last one which contains the latest information # about the ax img = ax.collections[-1] + ax.grid(False) cbar = self.fig.colorbar(img, ax=ax, **kwds) if mpl_ge_3_0_0(): diff --git a/pandas/tests/plotting/common.py b/pandas/tests/plotting/common.py index e2b6b5ab3319c..52127b926f1fa 100644 --- a/pandas/tests/plotting/common.py +++ b/pandas/tests/plotting/common.py @@ -45,6 +45,8 @@ def setup_method(self, method): from pandas.plotting._matplotlib import compat + self.compat = compat + mpl.rcdefaults() self.start_date_to_int64 = 812419200000000000 @@ -569,6 +571,12 @@ def 
_unpack_cycler(self, rcParams, field="color"): """ return [v[field] for v in rcParams["axes.prop_cycle"]] + def get_x_axis(self, ax): + return ax._shared_axes["x"] if self.compat.mpl_ge_3_5_0() else ax._shared_x_axes + + def get_y_axis(self, ax): + return ax._shared_axes["y"] if self.compat.mpl_ge_3_5_0() else ax._shared_y_axes + def _check_plot_works(f, filterwarnings="always", default_axes=False, **kwargs): """ diff --git a/pandas/tests/plotting/frame/test_frame.py b/pandas/tests/plotting/frame/test_frame.py index ccd0bc3d16896..6c07366e402d6 100644 --- a/pandas/tests/plotting/frame/test_frame.py +++ b/pandas/tests/plotting/frame/test_frame.py @@ -525,8 +525,8 @@ def test_area_sharey_dont_overwrite(self): df.plot(ax=ax1, kind="area") df.plot(ax=ax2, kind="area") - assert ax1._shared_y_axes.joined(ax1, ax2) - assert ax2._shared_y_axes.joined(ax1, ax2) + assert self.get_y_axis(ax1).joined(ax1, ax2) + assert self.get_y_axis(ax2).joined(ax1, ax2) def test_bar_linewidth(self): df = DataFrame(np.random.randn(5, 5)) diff --git a/pandas/tests/plotting/test_common.py b/pandas/tests/plotting/test_common.py index 4674fc1bb2c18..6eebf0c01ae52 100644 --- a/pandas/tests/plotting/test_common.py +++ b/pandas/tests/plotting/test_common.py @@ -39,4 +39,6 @@ def test__gen_two_subplots_with_ax(self): next(gen) axes = fig.get_axes() assert len(axes) == 1 - assert axes[0].get_geometry() == (2, 1, 2) + subplot_geometry = list(axes[0].get_subplotspec().get_geometry()[:-1]) + subplot_geometry[-1] += 1 + assert subplot_geometry == [2, 1, 2] diff --git a/pandas/tests/plotting/test_hist_method.py b/pandas/tests/plotting/test_hist_method.py index 96fdcebc9b8f7..403f4a2c06df1 100644 --- a/pandas/tests/plotting/test_hist_method.py +++ b/pandas/tests/plotting/test_hist_method.py @@ -728,35 +728,35 @@ def test_axis_share_x(self): ax1, ax2 = df.hist(column="height", by=df.gender, sharex=True) # share x - assert ax1._shared_x_axes.joined(ax1, ax2) - assert ax2._shared_x_axes.joined(ax1, ax2) + 
assert self.get_x_axis(ax1).joined(ax1, ax2) + assert self.get_x_axis(ax2).joined(ax1, ax2) # don't share y - assert not ax1._shared_y_axes.joined(ax1, ax2) - assert not ax2._shared_y_axes.joined(ax1, ax2) + assert not self.get_y_axis(ax1).joined(ax1, ax2) + assert not self.get_y_axis(ax2).joined(ax1, ax2) def test_axis_share_y(self): df = self.hist_df ax1, ax2 = df.hist(column="height", by=df.gender, sharey=True) # share y - assert ax1._shared_y_axes.joined(ax1, ax2) - assert ax2._shared_y_axes.joined(ax1, ax2) + assert self.get_y_axis(ax1).joined(ax1, ax2) + assert self.get_y_axis(ax2).joined(ax1, ax2) # don't share x - assert not ax1._shared_x_axes.joined(ax1, ax2) - assert not ax2._shared_x_axes.joined(ax1, ax2) + assert not self.get_x_axis(ax1).joined(ax1, ax2) + assert not self.get_x_axis(ax2).joined(ax1, ax2) def test_axis_share_xy(self): df = self.hist_df ax1, ax2 = df.hist(column="height", by=df.gender, sharex=True, sharey=True) # share both x and y - assert ax1._shared_x_axes.joined(ax1, ax2) - assert ax2._shared_x_axes.joined(ax1, ax2) + assert self.get_x_axis(ax1).joined(ax1, ax2) + assert self.get_x_axis(ax2).joined(ax1, ax2) - assert ax1._shared_y_axes.joined(ax1, ax2) - assert ax2._shared_y_axes.joined(ax1, ax2) + assert self.get_y_axis(ax1).joined(ax1, ax2) + assert self.get_y_axis(ax2).joined(ax1, ax2) @pytest.mark.parametrize( "histtype, expected", diff --git a/pandas/tests/plotting/test_series.py b/pandas/tests/plotting/test_series.py index 812aae8d97151..e40798f4f5125 100644 --- a/pandas/tests/plotting/test_series.py +++ b/pandas/tests/plotting/test_series.py @@ -154,8 +154,8 @@ def test_area_sharey_dont_overwrite(self): abs(self.ts).plot(ax=ax1, kind="area") abs(self.ts).plot(ax=ax2, kind="area") - assert ax1._shared_y_axes.joined(ax1, ax2) - assert ax2._shared_y_axes.joined(ax1, ax2) + assert self.get_y_axis(ax1).joined(ax1, ax2) + assert self.get_y_axis(ax2).joined(ax1, ax2) def test_label(self): s = Series([1, 2])
Backport PR #44523
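The backported compat shim gates attribute access on the Matplotlib version (`ax._shared_axes["x"]` on 3.5+, `ax._shared_x_axes` before). The comparison has to be numeric rather than lexicographic; `version_ge` below is a hypothetical, dependency-free stand-in for the `_mpl_version` helper:

```python
def version_ge(version: str, minimum: str) -> bool:
    """Numeric comparison of dotted version strings (a stand-in for
    the _mpl_version helper in pandas.plotting._matplotlib.compat)."""
    def as_tuple(v: str):
        return tuple(int(part) for part in v.split(".")[:3])
    return as_tuple(version) >= as_tuple(minimum)

# Lexicographic string comparison would get this wrong: "3.10.0" sorts
# before "3.5.0" as a string, but 3.10 is the newer release.
print(version_ge("3.10.0", "3.5.0"))  # True
print("3.10.0" >= "3.5.0")            # False
```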
https://api.github.com/repos/pandas-dev/pandas/pulls/44555
2021-11-21T11:10:17Z
2021-11-21T12:24:42Z
2021-11-21T12:24:42Z
2021-11-21T12:24:47Z
[ENH] pandas.DataFrame.to_orc
diff --git a/doc/source/reference/frame.rst b/doc/source/reference/frame.rst index ea27d1efbb235..e71ee80767d29 100644 --- a/doc/source/reference/frame.rst +++ b/doc/source/reference/frame.rst @@ -373,6 +373,7 @@ Serialization / IO / conversion DataFrame.from_dict DataFrame.from_records + DataFrame.to_orc DataFrame.to_parquet DataFrame.to_pickle DataFrame.to_csv diff --git a/doc/source/reference/io.rst b/doc/source/reference/io.rst index 70fd381bffd2c..425b5f81be966 100644 --- a/doc/source/reference/io.rst +++ b/doc/source/reference/io.rst @@ -159,6 +159,7 @@ ORC :toctree: api/ read_orc + DataFrame.to_orc SAS ~~~ diff --git a/doc/source/user_guide/io.rst b/doc/source/user_guide/io.rst index 4e19deb84487f..4c5d189e1bba3 100644 --- a/doc/source/user_guide/io.rst +++ b/doc/source/user_guide/io.rst @@ -30,7 +30,7 @@ The pandas I/O API is a set of top level ``reader`` functions accessed like binary;`HDF5 Format <https://support.hdfgroup.org/HDF5/whatishdf5.html>`__;:ref:`read_hdf<io.hdf5>`;:ref:`to_hdf<io.hdf5>` binary;`Feather Format <https://github.com/wesm/feather>`__;:ref:`read_feather<io.feather>`;:ref:`to_feather<io.feather>` binary;`Parquet Format <https://parquet.apache.org/>`__;:ref:`read_parquet<io.parquet>`;:ref:`to_parquet<io.parquet>` - binary;`ORC Format <https://orc.apache.org/>`__;:ref:`read_orc<io.orc>`; + binary;`ORC Format <https://orc.apache.org/>`__;:ref:`read_orc<io.orc>`;:ref:`to_orc<io.orc>` binary;`Stata <https://en.wikipedia.org/wiki/Stata>`__;:ref:`read_stata<io.stata_reader>`;:ref:`to_stata<io.stata_writer>` binary;`SAS <https://en.wikipedia.org/wiki/SAS_(software)>`__;:ref:`read_sas<io.sas_reader>`; binary;`SPSS <https://en.wikipedia.org/wiki/SPSS>`__;:ref:`read_spss<io.spss_reader>`; @@ -5562,13 +5562,64 @@ ORC .. versionadded:: 1.0.0 Similar to the :ref:`parquet <io.parquet>` format, the `ORC Format <https://orc.apache.org/>`__ is a binary columnar serialization -for data frames. It is designed to make reading data frames efficient. 
pandas provides *only* a reader for the -ORC format, :func:`~pandas.read_orc`. This requires the `pyarrow <https://arrow.apache.org/docs/python/>`__ library. +for data frames. It is designed to make reading data frames efficient. pandas provides both the reader and the writer for the +ORC format, :func:`~pandas.read_orc` and :func:`~pandas.DataFrame.to_orc`. This requires the `pyarrow <https://arrow.apache.org/docs/python/>`__ library. .. warning:: * It is *highly recommended* to install pyarrow using conda due to some issues occurred by pyarrow. - * :func:`~pandas.read_orc` is not supported on Windows yet, you can find valid environments on :ref:`install optional dependencies <install.warn_orc>`. + * :func:`~pandas.DataFrame.to_orc` requires pyarrow>=7.0.0. + * :func:`~pandas.read_orc` and :func:`~pandas.DataFrame.to_orc` are not supported on Windows yet, you can find valid environments on :ref:`install optional dependencies <install.warn_orc>`. + * For supported dtypes please refer to `supported ORC features in Arrow <https://arrow.apache.org/docs/cpp/orc.html#data-types>`__. + * Currently timezones in datetime columns are not preserved when a dataframe is converted into ORC files. + +.. ipython:: python + + df = pd.DataFrame( + { + "a": list("abc"), + "b": list(range(1, 4)), + "c": np.arange(4.0, 7.0, dtype="float64"), + "d": [True, False, True], + "e": pd.date_range("20130101", periods=3), + } + ) + + df + df.dtypes + +Write to an orc file. + +.. ipython:: python + :okwarning: + + df.to_orc("example_pa.orc", engine="pyarrow") + +Read from an orc file. + +.. ipython:: python + :okwarning: + + result = pd.read_orc("example_pa.orc") + + result.dtypes + +Read only certain columns of an orc file. + +.. ipython:: python + + result = pd.read_orc( + "example_pa.orc", + columns=["a", "b"], + ) + result.dtypes + + +.. ipython:: python + :suppress: + + os.remove("example_pa.orc") + .. 
_io.sql: diff --git a/doc/source/whatsnew/v1.5.0.rst b/doc/source/whatsnew/v1.5.0.rst index 8a7ad077c2a90..2719d415dedc0 100644 --- a/doc/source/whatsnew/v1.5.0.rst +++ b/doc/source/whatsnew/v1.5.0.rst @@ -100,6 +100,28 @@ as seen in the following example. 1 2021-01-02 08:00:00 4 2 2021-01-02 16:00:00 5 +.. _whatsnew_150.enhancements.orc: + +Writing to ORC files +^^^^^^^^^^^^^^^^^^^^ + +The new method :meth:`DataFrame.to_orc` allows writing to ORC files (:issue:`43864`). + +This functionality depends the `pyarrow <http://arrow.apache.org/docs/python/>`__ library. For more details, see :ref:`the IO docs on ORC <io.orc>`. + +.. warning:: + + * It is *highly recommended* to install pyarrow using conda due to some issues occurred by pyarrow. + * :func:`~pandas.DataFrame.to_orc` requires pyarrow>=7.0.0. + * :func:`~pandas.DataFrame.to_orc` is not supported on Windows yet, you can find valid environments on :ref:`install optional dependencies <install.warn_orc>`. + * For supported dtypes please refer to `supported ORC features in Arrow <https://arrow.apache.org/docs/cpp/orc.html#data-types>`__. + * Currently timezones in datetime columns are not preserved when a dataframe is converted into ORC files. + +.. code-block:: python + + df = pd.DataFrame(data={"col1": [1, 2], "col2": [3, 4]}) + df.to_orc("./out.orc") + .. _whatsnew_150.enhancements.tar: Reading directly from TAR archives diff --git a/pandas/core/frame.py b/pandas/core/frame.py index 54ee5ed2f35d1..00cfd0e0f8fd7 100644 --- a/pandas/core/frame.py +++ b/pandas/core/frame.py @@ -2858,6 +2858,7 @@ def to_parquet( See Also -------- read_parquet : Read a parquet file. + DataFrame.to_orc : Write an orc file. DataFrame.to_csv : Write a csv file. DataFrame.to_sql : Write to a sql table. DataFrame.to_hdf : Write to hdf. 
@@ -2901,6 +2902,93 @@ def to_parquet( **kwargs, ) + def to_orc( + self, + path: FilePath | WriteBuffer[bytes] | None = None, + *, + engine: Literal["pyarrow"] = "pyarrow", + index: bool | None = None, + engine_kwargs: dict[str, Any] | None = None, + ) -> bytes | None: + """ + Write a DataFrame to the ORC format. + + .. versionadded:: 1.5.0 + + Parameters + ---------- + path : str, file-like object or None, default None + If a string, it will be used as Root Directory path + when writing a partitioned dataset. By file-like object, + we refer to objects with a write() method, such as a file handle + (e.g. via builtin open function). If path is None, + a bytes object is returned. + engine : str, default 'pyarrow' + ORC library to use. Pyarrow must be >= 7.0.0. + index : bool, optional + If ``True``, include the dataframe's index(es) in the file output. + If ``False``, they will not be written to the file. + If ``None``, similar to ``infer`` the dataframe's index(es) + will be saved. However, instead of being saved as values, + the RangeIndex will be stored as a range in the metadata so it + doesn't require much space and is faster. Other indexes will + be included as columns in the file output. + engine_kwargs : dict[str, Any] or None, default None + Additional keyword arguments passed to :func:`pyarrow.orc.write_table`. + + Returns + ------- + bytes if no path argument is provided else None + + Raises + ------ + NotImplementedError + Dtype of one or more columns is category, unsigned integers, interval, + period or sparse. + ValueError + engine is not pyarrow. + + See Also + -------- + read_orc : Read a ORC file. + DataFrame.to_parquet : Write a parquet file. + DataFrame.to_csv : Write a csv file. + DataFrame.to_sql : Write to a sql table. + DataFrame.to_hdf : Write to hdf. + + Notes + ----- + * Before using this function you should read the :ref:`user guide about + ORC <io.orc>` and :ref:`install optional dependencies <install.warn_orc>`. 
+ * This function requires `pyarrow <https://arrow.apache.org/docs/python/>`_ + library. + * For supported dtypes please refer to `supported ORC features in Arrow + <https://arrow.apache.org/docs/cpp/orc.html#data-types>`__. + * Currently timezones in datetime columns are not preserved when a + dataframe is converted into ORC files. + + Examples + -------- + >>> df = pd.DataFrame(data={'col1': [1, 2], 'col2': [4, 3]}) + >>> df.to_orc('df.orc') # doctest: +SKIP + >>> pd.read_orc('df.orc') # doctest: +SKIP + col1 col2 + 0 1 4 + 1 2 3 + + If you want to get a buffer to the orc content you can write it to io.BytesIO + >>> import io + >>> b = io.BytesIO(df.to_orc()) # doctest: +SKIP + >>> b.seek(0) # doctest: +SKIP + 0 + >>> content = b.read() # doctest: +SKIP + """ + from pandas.io.orc import to_orc + + return to_orc( + self, path, engine=engine, index=index, engine_kwargs=engine_kwargs + ) + @Substitution( header_type="bool", header="Whether to print column labels, default True", diff --git a/pandas/core/generic.py b/pandas/core/generic.py index 89a590f291356..78edaf15fe7ce 100644 --- a/pandas/core/generic.py +++ b/pandas/core/generic.py @@ -2629,6 +2629,7 @@ def to_hdf( See Also -------- read_hdf : Read from HDF file. + DataFrame.to_orc : Write a DataFrame to the binary orc format. DataFrame.to_parquet : Write a DataFrame to the binary parquet format. DataFrame.to_sql : Write to a SQL table. DataFrame.to_feather : Write out feather-format for DataFrames. 
diff --git a/pandas/io/orc.py b/pandas/io/orc.py index b02660c089382..40754a56bbe8b 100644 --- a/pandas/io/orc.py +++ b/pandas/io/orc.py @@ -1,14 +1,28 @@ """ orc compat """ from __future__ import annotations -from typing import TYPE_CHECKING +import io +from types import ModuleType +from typing import ( + TYPE_CHECKING, + Any, + Literal, +) from pandas._typing import ( FilePath, ReadBuffer, + WriteBuffer, ) from pandas.compat._optional import import_optional_dependency +from pandas.core.dtypes.common import ( + is_categorical_dtype, + is_interval_dtype, + is_period_dtype, + is_unsigned_integer_dtype, +) + from pandas.io.common import get_handle if TYPE_CHECKING: @@ -52,3 +66,111 @@ def read_orc( with get_handle(path, "rb", is_text=False) as handles: orc_file = orc.ORCFile(handles.handle) return orc_file.read(columns=columns, **kwargs).to_pandas() + + +def to_orc( + df: DataFrame, + path: FilePath | WriteBuffer[bytes] | None = None, + *, + engine: Literal["pyarrow"] = "pyarrow", + index: bool | None = None, + engine_kwargs: dict[str, Any] | None = None, +) -> bytes | None: + """ + Write a DataFrame to the ORC format. + + .. versionadded:: 1.5.0 + + Parameters + ---------- + df : DataFrame + The dataframe to be written to ORC. Raises NotImplementedError + if dtype of one or more columns is category, unsigned integers, + intervals, periods or sparse. + path : str, file-like object or None, default None + If a string, it will be used as Root Directory path + when writing a partitioned dataset. By file-like object, + we refer to objects with a write() method, such as a file handle + (e.g. via builtin open function). If path is None, + a bytes object is returned. + engine : str, default 'pyarrow' + ORC library to use. Pyarrow must be >= 7.0.0. + index : bool, optional + If ``True``, include the dataframe's index(es) in the file output. If + ``False``, they will not be written to the file. + If ``None``, similar to ``infer`` the dataframe's index(es) + will be saved. 
However, instead of being saved as values, + the RangeIndex will be stored as a range in the metadata so it + doesn't require much space and is faster. Other indexes will + be included as columns in the file output. + engine_kwargs : dict[str, Any] or None, default None + Additional keyword arguments passed to :func:`pyarrow.orc.write_table`. + + Returns + ------- + bytes if no path argument is provided else None + + Raises + ------ + NotImplementedError + Dtype of one or more columns is category, unsigned integers, interval, + period or sparse. + ValueError + engine is not pyarrow. + + Notes + ----- + * Before using this function you should read the + :ref:`user guide about ORC <io.orc>` and + :ref:`install optional dependencies <install.warn_orc>`. + * This function requires `pyarrow <https://arrow.apache.org/docs/python/>`_ + library. + * For supported dtypes please refer to `supported ORC features in Arrow + <https://arrow.apache.org/docs/cpp/orc.html#data-types>`__. + * Currently timezones in datetime columns are not preserved when a + dataframe is converted into ORC files. + """ + if index is None: + index = df.index.names[0] is not None + if engine_kwargs is None: + engine_kwargs = {} + + # If unsupported dtypes are found raise NotImplementedError + # In Pyarrow 9.0.0 this check will no longer be needed + for dtype in df.dtypes: + if ( + is_categorical_dtype(dtype) + or is_interval_dtype(dtype) + or is_period_dtype(dtype) + or is_unsigned_integer_dtype(dtype) + ): + raise NotImplementedError( + "The dtype of one or more columns is not supported yet." 
+ ) + + if engine != "pyarrow": + raise ValueError("engine must be 'pyarrow'") + engine = import_optional_dependency(engine, min_version="7.0.0") + orc = import_optional_dependency("pyarrow.orc") + + was_none = path is None + if was_none: + path = io.BytesIO() + assert path is not None # For mypy + with get_handle(path, "wb", is_text=False) as handles: + assert isinstance(engine, ModuleType) # For mypy + try: + orc.write_table( + engine.Table.from_pandas(df, preserve_index=index), + handles.handle, + **engine_kwargs, + ) + except TypeError as e: + raise NotImplementedError( + "The dtype of one or more columns is not supported yet." + ) from e + + if was_none: + assert isinstance(path, io.BytesIO) # For mypy + return path.getvalue() + return None diff --git a/pandas/tests/io/test_orc.py b/pandas/tests/io/test_orc.py index f34e9b940317d..0bb320907b813 100644 --- a/pandas/tests/io/test_orc.py +++ b/pandas/tests/io/test_orc.py @@ -1,10 +1,13 @@ """ test orc compat """ import datetime +from io import BytesIO import os import numpy as np import pytest +import pandas.util._test_decorators as td + import pandas as pd from pandas import read_orc import pandas._testing as tm @@ -21,6 +24,27 @@ def dirpath(datapath): return datapath("io", "data", "orc") +# Examples of dataframes with dtypes for which conversion to ORC +# hasn't been implemented yet, that is, Category, unsigned integers, +# interval, period and sparse. 
+orc_writer_dtypes_not_supported = [ + pd.DataFrame({"unimpl": np.array([1, 20], dtype="uint64")}), + pd.DataFrame({"unimpl": pd.Series(["a", "b", "a"], dtype="category")}), + pd.DataFrame( + {"unimpl": [pd.Interval(left=0, right=2), pd.Interval(left=0, right=5)]} + ), + pd.DataFrame( + { + "unimpl": [ + pd.Period("2022-01-03", freq="D"), + pd.Period("2022-01-04", freq="D"), + ] + } + ), + pd.DataFrame({"unimpl": [np.nan] * 50}).astype(pd.SparseDtype("float", np.nan)), +] + + def test_orc_reader_empty(dirpath): columns = [ "boolean1", @@ -224,3 +248,60 @@ def test_orc_reader_snappy_compressed(dirpath): got = read_orc(inputfile).iloc[:10] tm.assert_equal(expected, got) + + +@td.skip_if_no("pyarrow", min_version="7.0.0") +def test_orc_roundtrip_file(dirpath): + # GH44554 + # PyArrow gained ORC write support with the current argument order + data = { + "boolean1": np.array([False, True], dtype="bool"), + "byte1": np.array([1, 100], dtype="int8"), + "short1": np.array([1024, 2048], dtype="int16"), + "int1": np.array([65536, 65536], dtype="int32"), + "long1": np.array([9223372036854775807, 9223372036854775807], dtype="int64"), + "float1": np.array([1.0, 2.0], dtype="float32"), + "double1": np.array([-15.0, -5.0], dtype="float64"), + "bytes1": np.array([b"\x00\x01\x02\x03\x04", b""], dtype="object"), + "string1": np.array(["hi", "bye"], dtype="object"), + } + expected = pd.DataFrame.from_dict(data) + + with tm.ensure_clean() as path: + expected.to_orc(path) + got = read_orc(path) + + tm.assert_equal(expected, got) + + +@td.skip_if_no("pyarrow", min_version="7.0.0") +def test_orc_roundtrip_bytesio(): + # GH44554 + # PyArrow gained ORC write support with the current argument order + data = { + "boolean1": np.array([False, True], dtype="bool"), + "byte1": np.array([1, 100], dtype="int8"), + "short1": np.array([1024, 2048], dtype="int16"), + "int1": np.array([65536, 65536], dtype="int32"), + "long1": np.array([9223372036854775807, 9223372036854775807], dtype="int64"), + 
"float1": np.array([1.0, 2.0], dtype="float32"), + "double1": np.array([-15.0, -5.0], dtype="float64"), + "bytes1": np.array([b"\x00\x01\x02\x03\x04", b""], dtype="object"), + "string1": np.array(["hi", "bye"], dtype="object"), + } + expected = pd.DataFrame.from_dict(data) + + bytes = expected.to_orc() + got = read_orc(BytesIO(bytes)) + + tm.assert_equal(expected, got) + + +@td.skip_if_no("pyarrow", min_version="7.0.0") +@pytest.mark.parametrize("df_not_supported", orc_writer_dtypes_not_supported) +def test_orc_writer_dtypes_not_supported(df_not_supported): + # GH44554 + # PyArrow gained ORC write support with the current argument order + msg = "The dtype of one or more columns is not supported yet." + with pytest.raises(NotImplementedError, match=msg): + df_not_supported.to_orc()
- [x] closes #43864 - [x] tests added / passed - [x] Ensure all linting tests pass, see [here](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit) for how to run them - [x] whatsnew entry
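The path-or-bytes convention in the writer diff above (write to an in-memory buffer when `path` is `None` and return its bytes, otherwise write to the given path and return `None`) can be sketched generically with the standard library. This is an illustrative stand-in, not the pandas implementation: `to_bytes_or_path` and its `payload` argument are hypothetical names, and the raw `write` takes the place of the pyarrow `orc.write_table` call.

```python
import io


def to_bytes_or_path(payload: bytes, path=None):
    """Mimic the to_orc return convention: if no path is given,
    write to an in-memory buffer and hand back the raw bytes;
    if a path is given, write there and return None."""
    was_none = path is None
    if was_none:
        buf = io.BytesIO()
        buf.write(payload)  # stand-in for the real ORC writer
        return buf.getvalue()
    with open(path, "wb") as handle:
        handle.write(payload)
    return None
```

The same `was_none` flag pattern appears in the diff: the branch decision is taken once up front so the function body can treat both cases uniformly until the final return.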
https://api.github.com/repos/pandas-dev/pandas/pulls/44554
2021-11-21T08:29:36Z
2022-06-14T00:02:13Z
2022-06-14T00:02:13Z
2022-06-14T00:22:29Z
CI: Make codecov patch checks informational
diff --git a/codecov.yml b/codecov.yml index 893e40db004a6..883f9fbb20729 100644 --- a/codecov.yml +++ b/codecov.yml @@ -12,6 +12,7 @@ coverage: patch: default: target: '50' + informational: true github_checks: annotations: false
To avoid some false-positive coverage results, which can make an entire PR appear to be failing: https://github.com/pandas-dev/pandas/pull/44546/checks?check_run_id=4275864618 https://github.com/pandas-dev/pandas/pull/44461/checks?check_run_id=4276518961 This changes codecov patch builds to always pass while still reporting the % coverage of the PR: https://docs.codecov.com/docs/commit-status#informational IMO our code review process should ensure that new code has associated tests.
https://api.github.com/repos/pandas-dev/pandas/pulls/44553
2021-11-21T06:15:17Z
2021-11-21T23:41:45Z
2021-11-21T23:41:45Z
2021-11-22T00:09:05Z
TST: Avoid time dependency in GCS zip test
diff --git a/pandas/tests/io/test_gcs.py b/pandas/tests/io/test_gcs.py index 887889bce1eaa..2e8e4a9017dbc 100644 --- a/pandas/tests/io/test_gcs.py +++ b/pandas/tests/io/test_gcs.py @@ -1,5 +1,6 @@ from io import BytesIO import os +import zipfile import numpy as np import pytest @@ -88,16 +89,23 @@ def test_to_read_gcs(gcs_buffer, format): tm.assert_frame_equal(df1, df2) -def assert_equal_zip_safe(result: bytes, expected: bytes): +def assert_equal_zip_safe(result: bytes, expected: bytes, compression: str): """ - We would like to assert these are equal, but the 10th and 11th bytes are a - last-modified timestamp, which in some builds is off-by-one, so we check around - that. + For zip compression, only compare the CRC-32 checksum of the file contents + to avoid checking the time-dependent last-modified timestamp which + in some CI builds is off-by-one See https://en.wikipedia.org/wiki/ZIP_(file_format)#File_headers """ - assert result[:9] == expected[:9] - assert result[11:] == expected[11:] + if compression == "zip": + # Only compare the CRC checksum of the file contents + with zipfile.ZipFile(BytesIO(result)) as exp, zipfile.ZipFile( + BytesIO(expected) + ) as res: + for res_info, exp_info in zip(res.infolist(), exp.infolist()): + assert res_info.CRC == exp_info.CRC + else: + assert result == expected @td.skip_if_no("gcsfs") @@ -126,7 +134,7 @@ def test_to_csv_compression_encoding_gcs(gcs_buffer, compression_only, encoding) df.to_csv(path_gcs, compression=compression, encoding=encoding) res = gcs_buffer.getvalue() expected = buffer.getvalue() - assert_equal_zip_safe(res, expected) + assert_equal_zip_safe(res, expected, compression_only) read_df = read_csv( path_gcs, index_col=0, compression=compression_only, encoding=encoding @@ -142,7 +150,7 @@ def test_to_csv_compression_encoding_gcs(gcs_buffer, compression_only, encoding) res = gcs_buffer.getvalue() expected = buffer.getvalue() - assert_equal_zip_safe(res, expected) + assert_equal_zip_safe(res, expected, 
compression_only) read_df = read_csv(path_gcs, index_col=0, compression="infer", encoding=encoding) tm.assert_frame_equal(df, read_df)
- [x] tests added / passed - [x] Ensure all linting tests pass, see [here](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit) for how to run them To avoid CI flakiness from this test, only compare the checksum of the ZIP contents rather than the flaky last-modified timestamp of the ZIP archive. IMO, the CI stability is worth the trade-off of being a little less strict here. cc @jbrockmendel
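The CRC-based comparison in the diff can be demonstrated with the standard library alone: two zip archives holding identical contents but different member timestamps differ byte-for-byte, yet their per-member CRC-32 values match. A minimal sketch (the `make_zip` helper and its timestamp default are hypothetical, added only to build test fixtures):

```python
import io
import zipfile


def make_zip(data: bytes, date_time=(2021, 11, 21, 4, 45, 48)) -> bytes:
    """Build an in-memory zip with one member and an explicit timestamp."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w") as zf:
        info = zipfile.ZipInfo("payload.csv", date_time=date_time)
        zf.writestr(info, data)
    return buf.getvalue()


def assert_equal_zip_safe(result: bytes, expected: bytes) -> None:
    """Compare only the CRC-32 of each member, ignoring the
    time-dependent last-modified timestamp in the file headers."""
    with zipfile.ZipFile(io.BytesIO(result)) as res, zipfile.ZipFile(
        io.BytesIO(expected)
    ) as exp:
        for res_info, exp_info in zip(res.infolist(), exp.infolist()):
            assert res_info.CRC == exp_info.CRC
```

Two archives written a couple of seconds apart fail a raw bytes comparison but pass the CRC check, which is exactly the flakiness the PR removes.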
https://api.github.com/repos/pandas-dev/pandas/pulls/44552
2021-11-21T04:45:48Z
2021-11-22T21:18:15Z
2021-11-22T21:18:15Z
2021-11-22T21:18:19Z
BUG: Indexing on nullable column
diff --git a/doc/source/whatsnew/v1.4.0.rst b/doc/source/whatsnew/v1.4.0.rst index 37e4c9a1378d1..02ef4d2b6acab 100644 --- a/doc/source/whatsnew/v1.4.0.rst +++ b/doc/source/whatsnew/v1.4.0.rst @@ -618,6 +618,8 @@ Indexing - Bug in :meth:`Series.__setitem__` with a boolean mask indexer setting a listlike value of length 1 incorrectly broadcasting that value (:issue:`44265`) - Bug in :meth:`DataFrame.loc.__setitem__` and :meth:`DataFrame.iloc.__setitem__` with mixed dtypes sometimes failing to operate in-place (:issue:`44345`) - Bug in :meth:`DataFrame.loc.__getitem__` incorrectly raising ``KeyError`` when selecting a single column with a boolean key (:issue:`44322`). +- Bug in indexing on columns with ``loc`` or ``iloc`` using a slice with a negative step with ``ExtensionDtype`` columns incorrectly raising (:issue:`44551`) +- Missing ^^^^^^^ diff --git a/pandas/core/internals/blocks.py b/pandas/core/internals/blocks.py index a6fa2a9e3b2c1..963b3c49eb054 100644 --- a/pandas/core/internals/blocks.py +++ b/pandas/core/internals/blocks.py @@ -1566,7 +1566,9 @@ def _slice(self, slicer) -> ExtensionArray: ) # GH#32959 only full-slicers along fake-dim0 are valid # TODO(EA2D): won't be necessary with 2D EAs - new_locs = self._mgr_locs[first] + # range(1) instead of self._mgr_locs to avoid exception on [::-1] + # see test_iloc_getitem_slice_negative_step_ea_block + new_locs = range(1)[first] if len(new_locs): # effectively slice(None) slicer = slicer[1] diff --git a/pandas/tests/indexing/test_iloc.py b/pandas/tests/indexing/test_iloc.py index ccaaafa75f3af..e088f1ce87a6a 100644 --- a/pandas/tests/indexing/test_iloc.py +++ b/pandas/tests/indexing/test_iloc.py @@ -1146,6 +1146,18 @@ def test_loc_setitem_boolean_list(self, rhs_func, indexing_func): expected = DataFrame({"a": [5, 1, 10]}) tm.assert_frame_equal(df, expected) + def test_iloc_getitem_slice_negative_step_ea_block(self): + # GH#44551 + df = DataFrame({"A": [1, 2, 3]}, dtype="Int64") + + res = df.iloc[:, ::-1] + 
tm.assert_frame_equal(res, df) + + df["B"] = "foo" + res = df.iloc[:, ::-1] + expected = DataFrame({"B": df["B"], "A": df["A"]}) + tm.assert_frame_equal(res, expected) + class TestILocErrors: # NB: this test should work for _any_ Series we can pass as
- [ ] closes #xxxx - [x] tests added / passed - [x] Ensure all linting tests pass, see [here](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit) for how to run them - [x] whatsnew entry
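The `range(1)[first]` trick in the fix above works because slicing a `range` accepts any slice object, including a negative step, and returns another `range` — unlike the `BlockPlacement`-backed `self._mgr_locs[first]`, which raised on `[::-1]`. A small sketch of the check (the helper name is hypothetical; the non-empty result signals that the length-1 fake first axis is fully retained):

```python
def keeps_fake_dim0(first: slice) -> bool:
    """Slicing range(1) never raises, even for [::-1]; a non-empty
    result means the fake first dimension is kept in full."""
    new_locs = range(1)[first]
    return len(new_locs) > 0
```

Both `slice(None)` and `slice(None, None, -1)` keep the single fake row, so `df.iloc[:, ::-1]` on an ExtensionDtype column now slices the real axis instead of raising.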
https://api.github.com/repos/pandas-dev/pandas/pulls/44551
2021-11-21T04:13:01Z
2021-11-25T15:15:17Z
2021-11-25T15:15:17Z
2021-11-25T16:05:11Z
TST: Use hypothesis for test_round_sanity
diff --git a/pandas/tests/scalar/timedelta/test_timedelta.py b/pandas/tests/scalar/timedelta/test_timedelta.py index cb3468c097cbf..448ec4353d7e7 100644 --- a/pandas/tests/scalar/timedelta/test_timedelta.py +++ b/pandas/tests/scalar/timedelta/test_timedelta.py @@ -1,6 +1,10 @@ """ test the scalar Timedelta """ from datetime import timedelta +from hypothesis import ( + given, + strategies as st, +) import numpy as np import pytest @@ -394,12 +398,12 @@ def test_round_implementation_bounds(self): with pytest.raises(OverflowError, match=msg): Timedelta.max.ceil("s") - @pytest.mark.parametrize("n", range(100)) + @given(val=st.integers(min_value=iNaT + 1, max_value=lib.i8max)) @pytest.mark.parametrize( "method", [Timedelta.round, Timedelta.floor, Timedelta.ceil] ) - def test_round_sanity(self, method, n, request): - val = np.random.randint(iNaT + 1, lib.i8max, dtype=np.int64) + def test_round_sanity(self, val, method): + val = np.int64(val) td = Timedelta(val) assert method(td, "ns") == td diff --git a/pandas/tests/scalar/timestamp/test_unary_ops.py b/pandas/tests/scalar/timestamp/test_unary_ops.py index ab114e65b2422..5f07cabd51ca1 100644 --- a/pandas/tests/scalar/timestamp/test_unary_ops.py +++ b/pandas/tests/scalar/timestamp/test_unary_ops.py @@ -1,6 +1,10 @@ from datetime import datetime from dateutil.tz import gettz +from hypothesis import ( + given, + strategies as st, +) import numpy as np import pytest import pytz @@ -276,12 +280,12 @@ def test_round_implementation_bounds(self): with pytest.raises(OverflowError, match=msg): Timestamp.max.ceil("s") - @pytest.mark.parametrize("n", range(100)) + @given(val=st.integers(iNaT + 1, lib.i8max)) @pytest.mark.parametrize( "method", [Timestamp.round, Timestamp.floor, Timestamp.ceil] ) - def test_round_sanity(self, method, n): - val = np.random.randint(iNaT + 1, lib.i8max, dtype=np.int64) + def test_round_sanity(self, val, method): + val = np.int64(val) ts = Timestamp(val) def checker(res, ts, nanos):
- [x] tests added / passed - [x] Ensure all linting tests pass, see [here](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit) for how to run them Found this test was flaky in https://github.com/pandas-dev/pandas/runs/4275006116?check_suite_focus=true#step:7:51 Decided to parameterize the value instead of `n` so it appears in the logs and used `hypothesis` instead of 100 random values.
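The property that hypothesis now exercises can be sketched without hypothesis itself: for any epoch-nanosecond value, flooring and ceiling to a one-second grid must bracket the value, and rounding to the finest resolution must be a no-op. A stdlib-only sketch of the invariants (the helper is hypothetical — hypothesis merely feeds many `val` draws into assertions like these):

```python
def check_round_sanity(val: int, unit_ns: int = 1_000_000_000) -> None:
    """Invariants mirrored from the pandas test: floor <= val <= ceil,
    both on the unit grid, at most one unit apart, and rounding to the
    finest unit (1 ns) leaves the value unchanged."""
    floor = (val // unit_ns) * unit_ns
    ceil = -((-val) // unit_ns) * unit_ns  # ceiling division
    assert floor <= val <= ceil
    assert floor % unit_ns == 0 and ceil % unit_ns == 0
    assert ceil - floor in (0, unit_ns)
    # rounding to the finest resolution is the identity
    assert (val // 1) * 1 == val
```

Parameterizing on the drawn value (rather than an opaque loop counter `n`) means a failing draw appears directly in the test logs, which is the motivation stated in the PR body.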
https://api.github.com/repos/pandas-dev/pandas/pulls/44550
2021-11-21T01:30:04Z
2021-11-24T01:26:03Z
2021-11-24T01:26:03Z
2021-11-24T01:26:39Z
REF: use base class for BooleanArray any/all
diff --git a/pandas/core/arrays/boolean.py b/pandas/core/arrays/boolean.py index 0787ec56379b2..925126449eca3 100644 --- a/pandas/core/arrays/boolean.py +++ b/pandas/core/arrays/boolean.py @@ -21,7 +21,6 @@ npt, type_t, ) -from pandas.compat.numpy import function as nv from pandas.core.dtypes.common import ( is_bool_dtype, @@ -447,144 +446,6 @@ def _values_for_argsort(self) -> np.ndarray: data[self._mask] = -1 return data - def any(self, *, skipna: bool = True, axis: int | None = 0, **kwargs): - """ - Return whether any element is True. - - Returns False unless there is at least one element that is True. - By default, NAs are skipped. If ``skipna=False`` is specified and - missing values are present, similar :ref:`Kleene logic <boolean.kleene>` - is used as for logical operations. - - Parameters - ---------- - skipna : bool, default True - Exclude NA values. If the entire array is NA and `skipna` is - True, then the result will be False, as for an empty array. - If `skipna` is False, the result will still be True if there is - at least one element that is True, otherwise NA will be returned - if there are NA's present. - axis : int or None, default 0 - **kwargs : any, default None - Additional keywords have no effect but might be accepted for - compatibility with NumPy. - - Returns - ------- - bool or :attr:`pandas.NA` - - See Also - -------- - numpy.any : Numpy version of this method. - BooleanArray.all : Return whether all elements are True. 
- - Examples - -------- - The result indicates whether any element is True (and by default - skips NAs): - - >>> pd.array([True, False, True]).any() - True - >>> pd.array([True, False, pd.NA]).any() - True - >>> pd.array([False, False, pd.NA]).any() - False - >>> pd.array([], dtype="boolean").any() - False - >>> pd.array([pd.NA], dtype="boolean").any() - False - - With ``skipna=False``, the result can be NA if this is logically - required (whether ``pd.NA`` is True or False influences the result): - - >>> pd.array([True, False, pd.NA]).any(skipna=False) - True - >>> pd.array([False, False, pd.NA]).any(skipna=False) - <NA> - """ - kwargs.pop("axis", None) - nv.validate_any((), kwargs) - - values = self._data.copy() - np.putmask(values, self._mask, False) - result = values.any(axis=axis) - - if skipna: - return result - else: - if result or self.size == 0 or not self._mask.any(): - return result - else: - return self.dtype.na_value - - def all(self, *, skipna: bool = True, axis: int | None = 0, **kwargs): - """ - Return whether all elements are True. - - Returns True unless there is at least one element that is False. - By default, NAs are skipped. If ``skipna=False`` is specified and - missing values are present, similar :ref:`Kleene logic <boolean.kleene>` - is used as for logical operations. - - Parameters - ---------- - skipna : bool, default True - Exclude NA values. If the entire array is NA and `skipna` is - True, then the result will be True, as for an empty array. - If `skipna` is False, the result will still be False if there is - at least one element that is False, otherwise NA will be returned - if there are NA's present. - axis : int or None, default 0 - **kwargs : any, default None - Additional keywords have no effect but might be accepted for - compatibility with NumPy. - - Returns - ------- - bool or :attr:`pandas.NA` - - See Also - -------- - numpy.all : Numpy version of this method. - BooleanArray.any : Return whether any element is True. 
- - Examples - -------- - The result indicates whether any element is True (and by default - skips NAs): - - >>> pd.array([True, True, pd.NA]).all() - True - >>> pd.array([True, False, pd.NA]).all() - False - >>> pd.array([], dtype="boolean").all() - True - >>> pd.array([pd.NA], dtype="boolean").all() - True - - With ``skipna=False``, the result can be NA if this is logically - required (whether ``pd.NA`` is True or False influences the result): - - >>> pd.array([True, True, pd.NA]).all(skipna=False) - <NA> - >>> pd.array([True, False, pd.NA]).all(skipna=False) - False - """ - kwargs.pop("axis", None) - nv.validate_all((), kwargs) - - values = self._data.copy() - np.putmask(values, self._mask, True) - result = values.all(axis=axis) - - if skipna: - return result - else: - if not result or self.size == 0 or not self._mask.any(): - return result - else: - return self.dtype.na_value - def _logical_method(self, other, op): assert op.__name__ in {"or_", "ror_", "and_", "rand_", "xor", "rxor"}
#41970 moved this to the base class, so not needed here anymore
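The docstrings removed above describe Kleene logic for `any`/`all` with missing values; the behaviour (now inherited from the base class) can be sketched in plain Python with `None` standing in for `pd.NA`:

```python
def kleene_any(values, skipna=True):
    """any() under Kleene logic: a single True decides; otherwise an
    NA (None here) makes the answer unknown unless skipna is set."""
    if any(v is True for v in values):
        return True
    if not skipna and any(v is None for v in values):
        return None  # stand-in for pd.NA
    return False


def kleene_all(values, skipna=True):
    """all() under Kleene logic, the dual of kleene_any: a single
    False decides; otherwise an NA makes the answer unknown."""
    if any(v is False for v in values):
        return False
    if not skipna and any(v is None for v in values):
        return None
    return True
```

This matches the removed examples: `[True, False, NA].any(skipna=False)` is `True` because one `True` suffices regardless of the NA, while `[False, False, NA].any(skipna=False)` is NA because the NA could be the deciding `True`.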
https://api.github.com/repos/pandas-dev/pandas/pulls/44549
2021-11-21T00:29:52Z
2021-11-25T17:47:55Z
2021-11-25T17:47:55Z
2021-11-25T18:00:59Z
REF: deduplicate nullable arrays _cmp_method
diff --git a/pandas/core/arrays/boolean.py b/pandas/core/arrays/boolean.py index 0787ec56379b2..d3dfd8fec3412 100644 --- a/pandas/core/arrays/boolean.py +++ b/pandas/core/arrays/boolean.py @@ -5,7 +5,6 @@ TYPE_CHECKING, overload, ) -import warnings import numpy as np @@ -44,7 +43,6 @@ BaseMaskedArray, BaseMaskedDtype, ) -from pandas.core.ops import invalid_comparison if TYPE_CHECKING: import pyarrow @@ -622,52 +620,6 @@ def _logical_method(self, other, op): # expected "ndarray" return BooleanArray(result, mask) # type: ignore[arg-type] - def _cmp_method(self, other, op): - from pandas.arrays import ( - FloatingArray, - IntegerArray, - ) - - if isinstance(other, (IntegerArray, FloatingArray)): - return NotImplemented - - mask = None - - if isinstance(other, BooleanArray): - other, mask = other._data, other._mask - - elif is_list_like(other): - other = np.asarray(other) - if other.ndim > 1: - raise NotImplementedError("can only perform ops with 1-d structures") - if len(self) != len(other): - raise ValueError("Lengths must match to compare") - - if other is libmissing.NA: - # numpy does not handle pd.NA well as "other" scalar (it returns - # a scalar False instead of an array) - result = np.zeros_like(self._data) - mask = np.ones_like(self._data) - else: - # numpy will show a DeprecationWarning on invalid elementwise - # comparisons, this will raise in the future - with warnings.catch_warnings(): - warnings.filterwarnings("ignore", "elementwise", FutureWarning) - with np.errstate(all="ignore"): - method = getattr(self._data, f"__{op.__name__}__") - result = method(other) - - if result is NotImplemented: - result = invalid_comparison(self._data, other, op) - - # nans propagate - if mask is None: - mask = self._mask.copy() - else: - mask = self._mask | mask - - return BooleanArray(result, mask, copy=False) - def _arith_method(self, other, op): mask = None op_name = op.__name__ diff --git a/pandas/core/arrays/floating.py b/pandas/core/arrays/floating.py index 
9be2201c566a4..1e7f1aff52d2e 100644 --- a/pandas/core/arrays/floating.py +++ b/pandas/core/arrays/floating.py @@ -1,14 +1,10 @@ from __future__ import annotations from typing import overload -import warnings import numpy as np -from pandas._libs import ( - lib, - missing as libmissing, -) +from pandas._libs import lib from pandas._typing import ( ArrayLike, AstypeArg, @@ -24,7 +20,6 @@ is_datetime64_dtype, is_float_dtype, is_integer_dtype, - is_list_like, is_object_dtype, pandas_dtype, ) @@ -39,7 +34,6 @@ NumericArray, NumericDtype, ) -from pandas.core.ops import invalid_comparison from pandas.core.tools.numeric import to_numeric @@ -337,52 +331,6 @@ def astype(self, dtype: AstypeArg, copy: bool = True) -> ArrayLike: def _values_for_argsort(self) -> np.ndarray: return self._data - def _cmp_method(self, other, op): - from pandas.arrays import ( - BooleanArray, - IntegerArray, - ) - - mask = None - - if isinstance(other, (BooleanArray, IntegerArray, FloatingArray)): - other, mask = other._data, other._mask - - elif is_list_like(other): - other = np.asarray(other) - if other.ndim > 1: - raise NotImplementedError("can only perform ops with 1-d structures") - - if other is libmissing.NA: - # numpy does not handle pd.NA well as "other" scalar (it returns - # a scalar False instead of an array) - # This may be fixed by NA.__array_ufunc__. Revisit this check - # once that's implemented. - result = np.zeros(self._data.shape, dtype="bool") - mask = np.ones(self._data.shape, dtype="bool") - else: - with warnings.catch_warnings(): - # numpy may show a FutureWarning: - # elementwise comparison failed; returning scalar instead, - # but in the future will perform elementwise comparison - # before returning NotImplemented. We fall back to the correct - # behavior today, so that should be fine to ignore. 
- warnings.filterwarnings("ignore", "elementwise", FutureWarning) - with np.errstate(all="ignore"): - method = getattr(self._data, f"__{op.__name__}__") - result = method(other) - - if result is NotImplemented: - result = invalid_comparison(self._data, other, op) - - # nans propagate - if mask is None: - mask = self._mask.copy() - else: - mask = self._mask | mask - - return BooleanArray(result, mask) - def sum(self, *, skipna=True, min_count=0, axis: int | None = 0, **kwargs): nv.validate_sum((), kwargs) return super()._reduce("sum", skipna=skipna, min_count=min_count, axis=axis) diff --git a/pandas/core/arrays/integer.py b/pandas/core/arrays/integer.py index d01068a0d408c..12bef068ef44b 100644 --- a/pandas/core/arrays/integer.py +++ b/pandas/core/arrays/integer.py @@ -1,14 +1,12 @@ from __future__ import annotations from typing import overload -import warnings import numpy as np from pandas._libs import ( iNaT, lib, - missing as libmissing, ) from pandas._typing import ( ArrayLike, @@ -30,7 +28,6 @@ is_float, is_float_dtype, is_integer_dtype, - is_list_like, is_object_dtype, is_string_dtype, pandas_dtype, @@ -38,15 +35,11 @@ from pandas.core.dtypes.missing import isna from pandas.core.arrays import ExtensionArray -from pandas.core.arrays.masked import ( - BaseMaskedArray, - BaseMaskedDtype, -) +from pandas.core.arrays.masked import BaseMaskedDtype from pandas.core.arrays.numeric import ( NumericArray, NumericDtype, ) -from pandas.core.ops import invalid_comparison from pandas.core.tools.numeric import to_numeric @@ -418,51 +411,6 @@ def _values_for_argsort(self) -> np.ndarray: data[self._mask] = data.min() - 1 return data - def _cmp_method(self, other, op): - from pandas.core.arrays import BooleanArray - - mask = None - - if isinstance(other, BaseMaskedArray): - other, mask = other._data, other._mask - - elif is_list_like(other): - other = np.asarray(other) - if other.ndim > 1: - raise NotImplementedError("can only perform ops with 1-d structures") - if len(self) 
!= len(other): - raise ValueError("Lengths must match to compare") - - if other is libmissing.NA: - # numpy does not handle pd.NA well as "other" scalar (it returns - # a scalar False instead of an array) - # This may be fixed by NA.__array_ufunc__. Revisit this check - # once that's implemented. - result = np.zeros(self._data.shape, dtype="bool") - mask = np.ones(self._data.shape, dtype="bool") - else: - with warnings.catch_warnings(): - # numpy may show a FutureWarning: - # elementwise comparison failed; returning scalar instead, - # but in the future will perform elementwise comparison - # before returning NotImplemented. We fall back to the correct - # behavior today, so that should be fine to ignore. - warnings.filterwarnings("ignore", "elementwise", FutureWarning) - with np.errstate(all="ignore"): - method = getattr(self._data, f"__{op.__name__}__") - result = method(other) - - if result is NotImplemented: - result = invalid_comparison(self._data, other, op) - - # nans propagate - if mask is None: - mask = self._mask.copy() - else: - mask = self._mask | mask - - return BooleanArray(result, mask) - def sum(self, *, skipna=True, min_count=0, axis: int | None = 0, **kwargs): nv.validate_sum((), kwargs) return super()._reduce("sum", skipna=skipna, min_count=min_count, axis=axis) diff --git a/pandas/core/arrays/masked.py b/pandas/core/arrays/masked.py index 34aaf054de48e..b334a167d3824 100644 --- a/pandas/core/arrays/masked.py +++ b/pandas/core/arrays/masked.py @@ -7,6 +7,7 @@ TypeVar, overload, ) +import warnings import numpy as np @@ -40,6 +41,7 @@ is_dtype_equal, is_float_dtype, is_integer_dtype, + is_list_like, is_object_dtype, is_scalar, is_string_dtype, @@ -66,6 +68,7 @@ from pandas.core.arraylike import OpsMixin from pandas.core.arrays import ExtensionArray from pandas.core.indexers import check_array_indexer +from pandas.core.ops import invalid_comparison if TYPE_CHECKING: from pandas import Series @@ -482,6 +485,51 @@ def _hasna(self) -> bool: # error: 
Incompatible return value type (got "bool_", expected "bool") return self._mask.any() # type: ignore[return-value] + def _cmp_method(self, other, op) -> BooleanArray: + from pandas.core.arrays import BooleanArray + + mask = None + + if isinstance(other, BaseMaskedArray): + other, mask = other._data, other._mask + + elif is_list_like(other): + other = np.asarray(other) + if other.ndim > 1: + raise NotImplementedError("can only perform ops with 1-d structures") + if len(self) != len(other): + raise ValueError("Lengths must match to compare") + + if other is libmissing.NA: + # numpy does not handle pd.NA well as "other" scalar (it returns + # a scalar False instead of an array) + # This may be fixed by NA.__array_ufunc__. Revisit this check + # once that's implemented. + result = np.zeros(self._data.shape, dtype="bool") + mask = np.ones(self._data.shape, dtype="bool") + else: + with warnings.catch_warnings(): + # numpy may show a FutureWarning: + # elementwise comparison failed; returning scalar instead, + # but in the future will perform elementwise comparison + # before returning NotImplemented. We fall back to the correct + # behavior today, so that should be fine to ignore. + warnings.filterwarnings("ignore", "elementwise", FutureWarning) + with np.errstate(all="ignore"): + method = getattr(self._data, f"__{op.__name__}__") + result = method(other) + + if result is NotImplemented: + result = invalid_comparison(self._data, other, op) + + # nans propagate + if mask is None: + mask = self._mask.copy() + else: + mask = self._mask | mask + + return BooleanArray(result, mask, copy=False) + def isna(self) -> np.ndarray: return self._mask.copy()
xref https://github.com/pandas-dev/pandas/pull/44533#discussion_r753689506, helps on a bullet in #38110
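The deduplicated `_cmp_method` above applies one rule for missingness: the result mask is the union of the operands' masks ("nans propagate"), after a length check. A stdlib sketch with `(values, mask)` pairs (the helper name and tuple representation are hypothetical simplifications of `BaseMaskedArray`):

```python
def masked_compare(left, right, op):
    """Elementwise comparison of two (values, mask) pairs; the
    result mask is the union of the input masks, mirroring the
    'nans propagate' rule in BaseMaskedArray._cmp_method."""
    lvals, lmask = left
    rvals, rmask = right
    if len(lvals) != len(rvals):
        raise ValueError("Lengths must match to compare")
    values = [op(a, b) for a, b in zip(lvals, rvals)]
    mask = [m or n for m, n in zip(lmask, rmask)]
    return values, mask
```

A masked slot on either side masks the output slot, so the raw comparison value there is never observed — which is why the real method can run the numpy comparison unconditionally and only fix up the mask afterwards.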
https://api.github.com/repos/pandas-dev/pandas/pulls/44548
2021-11-21T00:21:58Z
2021-11-21T23:36:36Z
2021-11-21T23:36:35Z
2021-11-21T23:38:35Z
Backport PR #44518 on branch 1.3.x (BUG: DataFrame with scalar tzaware Timestamp)
diff --git a/pandas/core/arrays/datetimes.py b/pandas/core/arrays/datetimes.py index 92a906e9fd8b0..aadd6faecd861 100644 --- a/pandas/core/arrays/datetimes.py +++ b/pandas/core/arrays/datetimes.py @@ -2067,8 +2067,13 @@ def sequence_to_dt64ns( ) if tz and inferred_tz: # two timezones: convert to intended from base UTC repr - data = tzconversion.tz_convert_from_utc(data.view("i8"), tz) - data = data.view(DT64NS_DTYPE) + if data.dtype == "i8": + # GH#42505 + # by convention, these are _already_ UTC, e.g + return data.view(DT64NS_DTYPE), tz, None + + utc_vals = tzconversion.tz_convert_from_utc(data.view("i8"), tz) + data = utc_vals.view(DT64NS_DTYPE) elif inferred_tz: tz = inferred_tz elif allow_object and data.dtype == object: diff --git a/pandas/tests/frame/test_constructors.py b/pandas/tests/frame/test_constructors.py index 94fba0cbb90d5..ea5c1d839f2a4 100644 --- a/pandas/tests/frame/test_constructors.py +++ b/pandas/tests/frame/test_constructors.py @@ -68,6 +68,19 @@ class TestDataFrameConstructors: + def test_constructor_dict_with_tzaware_scalar(self): + # GH#42505 + dt = Timestamp("2019-11-03 01:00:00-0700").tz_convert("America/Los_Angeles") + + df = DataFrame({"dt": dt}, index=[0]) + expected = DataFrame({"dt": [dt]}) + tm.assert_frame_equal(df, expected) + + # Non-homogeneous + df = DataFrame({"dt": dt, "value": [1]}) + expected = DataFrame({"dt": [dt], "value": [1]}) + tm.assert_frame_equal(df, expected) + def test_construct_ndarray_with_nas_and_int_dtype(self): # GH#26919 match Series by not casting np.nan to meaningless int arr = np.array([[1, np.nan], [2, 3]])
Backport PR #44518
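The fix encodes a convention: `i8` input is already a UTC epoch, so no further conversion is applied. The standard library shows why an epoch value needs no tz conversion — the epoch of a tz-aware datetime is independent of the zone it is displayed in (times here are illustrative, loosely matching the test's `2019-11-03 01:00:00-0700` instant):

```python
from datetime import datetime, timedelta, timezone

# the same instant rendered in UTC and in a fixed -07:00 offset
utc_dt = datetime(2019, 11, 3, 8, 0, 0, tzinfo=timezone.utc)
la_dt = utc_dt.astimezone(timezone(timedelta(hours=-7)))

# converting the *display* zone does not touch the underlying UTC
# instant, which is the convention the i8 values follow
assert utc_dt.timestamp() == la_dt.timestamp()
```

Running the old `tz_convert_from_utc` on values that were already UTC shifted the instant twice, which is the bug the early return avoids.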
https://api.github.com/repos/pandas-dev/pandas/pulls/44546
2021-11-20T23:04:29Z
2021-11-21T11:52:27Z
2021-11-21T11:52:27Z
2022-01-12T17:22:09Z
BUG: Index with object dtype and negative loc for insert adding none and replacing existing value
diff --git a/doc/source/whatsnew/v1.4.0.rst b/doc/source/whatsnew/v1.4.0.rst index b962add29db33..c49767e80f535 100644 --- a/doc/source/whatsnew/v1.4.0.rst +++ b/doc/source/whatsnew/v1.4.0.rst @@ -546,6 +546,7 @@ Datetimelike - Bug in constructing a :class:`Series` from datetime-like strings with mixed timezones incorrectly partially-inferring datetime values (:issue:`40111`) - Bug in addition with a :class:`Tick` object and a ``np.timedelta64`` object incorrectly raising instead of returning :class:`Timedelta` (:issue:`44474`) - Bug in adding a ``np.timedelta64`` object to a :class:`BusinessDay` or :class:`CustomBusinessDay` object incorrectly raising (:issue:`44532`) +- Bug in :meth:`Index.insert` for inserting ``np.datetime64``, ``np.timedelta64`` or ``tuple`` into :class:`Index` with ``dtype='object'`` with negative loc addine ``None`` and replacing existing value (:issue:`44509`) - Timedelta diff --git a/pandas/core/arrays/interval.py b/pandas/core/arrays/interval.py index 01bf5ec0633b5..96a76aa930f86 100644 --- a/pandas/core/arrays/interval.py +++ b/pandas/core/arrays/interval.py @@ -1496,7 +1496,7 @@ def _putmask(self, mask: npt.NDArray[np.bool_], value) -> None: def insert(self: IntervalArrayT, loc: int, item: Interval) -> IntervalArrayT: """ Return a new IntervalArray inserting new item at location. Follows - Python list.append semantics for negative values. Only Interval + Python numpy.insert semantics for negative values. Only Interval objects and NA can be inserted into an IntervalIndex Parameters diff --git a/pandas/core/indexes/base.py b/pandas/core/indexes/base.py index a49c303e735ab..220b43f323a5f 100644 --- a/pandas/core/indexes/base.py +++ b/pandas/core/indexes/base.py @@ -6433,7 +6433,7 @@ def insert(self, loc: int, item) -> Index: """ Make new Index inserting new item at location. - Follows Python list.append semantics for negative values. + Follows Python numpy.insert semantics for negative values. 
Parameters ---------- @@ -6476,6 +6476,7 @@ def insert(self, loc: int, item) -> Index: else: new_values = np.insert(arr, loc, None) + loc = loc if loc >= 0 else loc - 1 new_values[loc] = item # Use self._constructor instead of Index to retain NumericIndex GH#43921 diff --git a/pandas/tests/indexes/base_class/test_reshape.py b/pandas/tests/indexes/base_class/test_reshape.py index acb6936f70d0f..547d62669943c 100644 --- a/pandas/tests/indexes/base_class/test_reshape.py +++ b/pandas/tests/indexes/base_class/test_reshape.py @@ -1,6 +1,7 @@ """ Tests for ndarray-like method on the base Index class """ +import numpy as np import pytest from pandas import Index @@ -42,6 +43,18 @@ def test_insert_missing(self, nulls_fixture): result = Index(list("abc")).insert(1, nulls_fixture) tm.assert_index_equal(result, expected) + @pytest.mark.parametrize( + "val", [(1, 2), np.datetime64("2019-12-31"), np.timedelta64(1, "D")] + ) + @pytest.mark.parametrize("loc", [-1, 2]) + def test_insert_datetime_into_object(self, loc, val): + # GH#44509 + idx = Index(["1", "2", "3"]) + result = idx.insert(loc, val) + expected = Index(["1", "2", val, "3"]) + tm.assert_index_equal(result, expected) + assert type(expected[2]) is type(val) + @pytest.mark.parametrize( "pos,expected", [
- [x] closes #44509 - [x] tests added / passed - [x] Ensure all linting tests pass, see [here](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit) for how to run them - [x] whatsnew entry This behaves like ``np.insert``, so the docstring was changed accordingly. Making this compatible with ``list.insert`` would be a breaking change. Nevertheless, the else block in the base implementation had a bug, so that is fixed here too.
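The one-line fix (`loc = loc if loc >= 0 else loc - 1`) compensates for the array growing by one: after `np.insert(arr, loc, None)`, a negative `loc` no longer points at the new placeholder slot but at the element after it. A plain-list sketch of the semantics and the adjustment (`insert_at` is a hypothetical helper mimicking the `Index.insert` object-dtype path, with `None` as the placeholder):

```python
def insert_at(values: list, loc: int, item):
    """Grow by one with a placeholder (as np.insert does), then
    write item into the new slot, adjusting negative positions."""
    new_values = list(values)
    # np.insert computes the position against the *original* length
    pos = loc if loc >= 0 else loc + len(values)
    new_values.insert(pos, None)
    # the buggy version did new_values[loc] = item, which for a
    # negative loc overwrote the old last element and left the None
    adj = loc if loc >= 0 else loc - 1
    new_values[adj] = item
    return new_values
```

With `loc=-1` on `["1", "2", "3"]` the unadjusted write would have produced `["1", "2", None, "x"]` — a spurious `None` plus a clobbered `"3"` — which is exactly the reported bug.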
https://api.github.com/repos/pandas-dev/pandas/pulls/44545
2021-11-20T22:49:56Z
2021-11-21T23:49:13Z
2021-11-21T23:49:13Z
2021-11-22T20:30:10Z
COMPAT: Fix the last warning from matplotlib 3.5.0
diff --git a/pandas/plotting/_matplotlib/core.py b/pandas/plotting/_matplotlib/core.py index 08dc9538227f7..ba47391513ed2 100644 --- a/pandas/plotting/_matplotlib/core.py +++ b/pandas/plotting/_matplotlib/core.py @@ -1036,7 +1036,6 @@ def _plot_colorbar(self, ax: Axes, **kwds): # use the last one which contains the latest information # about the ax img = ax.collections[-1] - ax.grid(False) cbar = self.fig.colorbar(img, ax=ax, **kwds) if mpl_ge_3_0_0(): diff --git a/pandas/tests/plotting/common.py b/pandas/tests/plotting/common.py index 52127b926f1fa..ae9db5e728efe 100644 --- a/pandas/tests/plotting/common.py +++ b/pandas/tests/plotting/common.py @@ -552,7 +552,7 @@ def is_grid_on(): obj.plot(kind=kind, grid=False, **kws) assert not is_grid_on() - if kind != "pie": + if kind not in ["pie", "hexbin", "scatter"]: self.plt.subplot(1, 4 * len(kinds), spndx) spndx += 1 mpl.rc("axes", grid=True)
- [ ] closes #xxxx - [ ] tests added / passed - [ ] Ensure all linting tests pass, see [here](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit) for how to run them - [ ] whatsnew entry
https://api.github.com/repos/pandas-dev/pandas/pulls/44544
2021-11-20T22:09:27Z
2021-11-23T00:11:17Z
2021-11-23T00:11:17Z
2021-11-23T03:56:21Z
Backport PR #44531 on branch 1.3.x (Add json normalize back into doc and move read_json to pd namespace)
diff --git a/doc/source/reference/io.rst b/doc/source/reference/io.rst index 82d4ec4950ef1..7aad937d10a18 100644 --- a/doc/source/reference/io.rst +++ b/doc/source/reference/io.rst @@ -57,7 +57,7 @@ Excel ExcelWriter -.. currentmodule:: pandas.io.json +.. currentmodule:: pandas JSON ~~~~ @@ -65,7 +65,10 @@ JSON :toctree: api/ read_json - to_json + json_normalize + DataFrame.to_json + +.. currentmodule:: pandas.io.json .. autosummary:: :toctree: api/
Backport PR #44531: Add json normalize back into doc and move read_json to pd namespace
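The backported doc change exposes `json_normalize` at the top level alongside `read_json`. A quick usage sketch of the public API as documented:

```python
import pandas as pd

# json_normalize flattens nested records; nested keys become
# dotted column names.
data = [
    {"id": 1, "info": {"name": "a"}},
    {"id": 2, "info": {"name": "b"}},
]
df = pd.json_normalize(data)
# columns: ["id", "info.name"]
```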
https://api.github.com/repos/pandas-dev/pandas/pulls/44542
2021-11-20T21:11:34Z
2021-11-20T22:52:32Z
2021-11-20T22:52:31Z
2021-11-20T22:52:32Z
REF: remove checknull_old
diff --git a/pandas/_libs/missing.pxd b/pandas/_libs/missing.pxd index e32518864db0a..854dcf2ec9775 100644 --- a/pandas/_libs/missing.pxd +++ b/pandas/_libs/missing.pxd @@ -7,7 +7,6 @@ from numpy cimport ( cpdef bint is_matching_na(object left, object right, bint nan_matches_none=*) cpdef bint checknull(object val, bint inf_as_na=*) -cpdef bint checknull_old(object val) cpdef ndarray[uint8_t] isnaobj(ndarray arr, bint inf_as_na=*) cdef bint is_null_datetime64(v) diff --git a/pandas/_libs/missing.pyx b/pandas/_libs/missing.pyx index 6146e8ea13f89..2d66a655ba075 100644 --- a/pandas/_libs/missing.pyx +++ b/pandas/_libs/missing.pyx @@ -135,10 +135,6 @@ cdef inline bint is_decimal_na(object val): return isinstance(val, cDecimal) and val != val -cpdef bint checknull_old(object val): - return checknull(val, inf_as_na=True) - - @cython.wraparound(False) @cython.boundscheck(False) cpdef ndarray[uint8_t] isnaobj(ndarray arr, bint inf_as_na=False): @@ -176,12 +172,6 @@ cpdef ndarray[uint8_t] isnaobj(ndarray arr, bint inf_as_na=False): return result.view(np.bool_) -@cython.wraparound(False) -@cython.boundscheck(False) -def isnaobj_old(arr: ndarray) -> ndarray: - return isnaobj(arr, inf_as_na=True) - - @cython.wraparound(False) @cython.boundscheck(False) def isnaobj2d(arr: ndarray, inf_as_na: bool = False) -> ndarray: @@ -221,12 +211,6 @@ def isnaobj2d(arr: ndarray, inf_as_na: bool = False) -> ndarray: return result.view(np.bool_) -@cython.wraparound(False) -@cython.boundscheck(False) -def isnaobj2d_old(arr: ndarray) -> ndarray: - return isnaobj2d(arr, inf_as_na=True) - - def isposinf_scalar(val: object) -> bool: return util.is_float_object(val) and val == INF diff --git a/pandas/core/dtypes/missing.py b/pandas/core/dtypes/missing.py index 63b64dc504b52..d2733cddf8cee 100644 --- a/pandas/core/dtypes/missing.py +++ b/pandas/core/dtypes/missing.py @@ -158,10 +158,7 @@ def _isna(obj, inf_as_na: bool = False): boolean ndarray or boolean """ if is_scalar(obj): - if inf_as_na: - 
return libmissing.checknull_old(obj) - else: - return libmissing.checknull(obj) + return libmissing.checknull(obj, inf_as_na=inf_as_na) elif isinstance(obj, ABCMultiIndex): raise NotImplementedError("isna is not defined for MultiIndex") elif isinstance(obj, type): diff --git a/pandas/tests/dtypes/test_missing.py b/pandas/tests/dtypes/test_missing.py index 55d0e5e73418e..e04df7e43838f 100644 --- a/pandas/tests/dtypes/test_missing.py +++ b/pandas/tests/dtypes/test_missing.py @@ -565,21 +565,19 @@ def test_na_value_for_dtype(dtype, na_value): class TestNAObj: - - _1d_methods = ["isnaobj", "isnaobj_old"] - _2d_methods = ["isnaobj2d", "isnaobj2d_old"] - def _check_behavior(self, arr, expected): - for method in TestNAObj._1d_methods: - result = getattr(libmissing, method)(arr) - tm.assert_numpy_array_equal(result, expected) + result = libmissing.isnaobj(arr) + tm.assert_numpy_array_equal(result, expected) + result = libmissing.isnaobj(arr, inf_as_na=True) + tm.assert_numpy_array_equal(result, expected) arr = np.atleast_2d(arr) expected = np.atleast_2d(expected) - for method in TestNAObj._2d_methods: - result = getattr(libmissing, method)(arr) - tm.assert_numpy_array_equal(result, expected) + result = libmissing.isnaobj2d(arr) + tm.assert_numpy_array_equal(result, expected) + result = libmissing.isnaobj2d(arr, inf_as_na=True) + tm.assert_numpy_array_equal(result, expected) def test_basic(self): arr = np.array([1, None, "foo", -5.1, NaT, np.nan]) @@ -676,16 +674,16 @@ def test_checknull(self, func): def test_checknull_old(self): for value in na_vals + sometimes_na_vals: - assert libmissing.checknull_old(value) + assert libmissing.checknull(value, inf_as_na=True) for value in inf_vals: - assert libmissing.checknull_old(value) + assert libmissing.checknull(value, inf_as_na=True) for value in int_na_vals: - assert not libmissing.checknull_old(value) + assert not libmissing.checknull(value, inf_as_na=True) for value in never_na_vals: - assert not 
libmissing.checknull_old(value) + assert not libmissing.checknull(value, inf_as_na=True) def test_is_null_datetimelike(self): for value in na_vals:
- [ ] closes #xxxx - [ ] tests added / passed - [ ] Ensure all linting tests pass, see [here](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit) for how to run them - [ ] whatsnew entry
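The refactor above folds `checknull_old` into `checknull(val, inf_as_na=True)`. A pure-Python sketch of the consolidated semantics (the real implementation is Cython and also covers NaT and NA; the scalar cases here are illustrative only):

```python
import math
from decimal import Decimal

def checknull(val, inf_as_na=False):
    """Scalar null check; inf_as_na replaces the old checknull_old."""
    if val is None:
        return True
    if isinstance(val, float):
        if math.isnan(val):
            return True
        return inf_as_na and math.isinf(val)
    if isinstance(val, Decimal):
        # Decimal("NaN") compares unequal to itself
        return val != val
    return False
```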
https://api.github.com/repos/pandas-dev/pandas/pulls/44541
2021-11-20T20:04:59Z
2021-11-21T02:34:31Z
2021-11-21T02:34:31Z
2021-11-21T02:36:11Z
CLN: remove is_null_datetimelike
diff --git a/pandas/_libs/lib.pyx b/pandas/_libs/lib.pyx index 25142cad9a30d..611bec50a7393 100644 --- a/pandas/_libs/lib.pyx +++ b/pandas/_libs/lib.pyx @@ -1462,7 +1462,7 @@ def infer_dtype(value: object, skipna: bool = True) -> str: for i in range(n): val = values[i] - # do not use is_null_datetimelike to keep + # do not use checknull to keep # np.datetime64('nat') and np.timedelta64('nat') if val is None or util.is_nan(val): pass diff --git a/pandas/_libs/missing.pyx b/pandas/_libs/missing.pyx index 6146e8ea13f89..d5db2a8d7e0a7 100644 --- a/pandas/_libs/missing.pyx +++ b/pandas/_libs/missing.pyx @@ -21,7 +21,6 @@ from pandas._libs.tslibs.nattype cimport ( c_NaT as NaT, checknull_with_nat, is_dt64nat, - is_null_datetimelike, is_td64nat, ) from pandas._libs.tslibs.np_datetime cimport ( @@ -121,11 +120,20 @@ cpdef bint checknull(object val, bint inf_as_na=False): ------- bool """ - return ( - val is C_NA - or is_null_datetimelike(val, inat_is_null=False, inf_as_na=inf_as_na) - or is_decimal_na(val) - ) + if val is None or val is NaT or val is C_NA: + return True + elif util.is_float_object(val) or util.is_complex_object(val): + if val != val: + return True + elif inf_as_na: + return val == INF or val == NEGINF + return False + elif util.is_timedelta64_object(val): + return get_timedelta64_value(val) == NPY_NAT + elif util.is_datetime64_object(val): + return get_datetime64_value(val) == NPY_NAT + else: + return is_decimal_na(val) cdef inline bint is_decimal_na(object val): diff --git a/pandas/_libs/tslibs/__init__.py b/pandas/_libs/tslibs/__init__.py index e38ed9a20e55b..11de4e60f202d 100644 --- a/pandas/_libs/tslibs/__init__.py +++ b/pandas/_libs/tslibs/__init__.py @@ -5,7 +5,6 @@ "NaTType", "iNaT", "nat_strings", - "is_null_datetimelike", "OutOfBoundsDatetime", "OutOfBoundsTimedelta", "IncompatibleFrequency", @@ -37,7 +36,6 @@ NaT, NaTType, iNaT, - is_null_datetimelike, nat_strings, ) from pandas._libs.tslibs.np_datetime import OutOfBoundsDatetime diff --git 
a/pandas/_libs/tslibs/nattype.pxd b/pandas/_libs/tslibs/nattype.pxd index 0ace3ca1fd4b1..b7c14e0a5b068 100644 --- a/pandas/_libs/tslibs/nattype.pxd +++ b/pandas/_libs/tslibs/nattype.pxd @@ -18,4 +18,3 @@ cdef _NaT c_NaT cdef bint checknull_with_nat(object val) cdef bint is_dt64nat(object val) cdef bint is_td64nat(object val) -cpdef bint is_null_datetimelike(object val, bint inat_is_null=*, bint inf_as_na=*) diff --git a/pandas/_libs/tslibs/nattype.pyi b/pandas/_libs/tslibs/nattype.pyi index 1a33a85a04ae0..6a5555cfff030 100644 --- a/pandas/_libs/tslibs/nattype.pyi +++ b/pandas/_libs/tslibs/nattype.pyi @@ -12,10 +12,6 @@ NaT: NaTType iNaT: int nat_strings: set[str] -def is_null_datetimelike( - val: object, inat_is_null: bool = ..., inf_as_na: bool = ... -) -> bool: ... - class NaTType(datetime): value: np.int64 def asm8(self) -> np.datetime64: ... diff --git a/pandas/_libs/tslibs/nattype.pyx b/pandas/_libs/tslibs/nattype.pyx index ae553d79ae91e..a36a1e12c274d 100644 --- a/pandas/_libs/tslibs/nattype.pyx +++ b/pandas/_libs/tslibs/nattype.pyx @@ -1218,45 +1218,3 @@ cdef inline bint is_td64nat(object val): if util.is_timedelta64_object(val): return get_timedelta64_value(val) == NPY_NAT return False - - -cdef: - cnp.float64_t INF = <cnp.float64_t>np.inf - cnp.float64_t NEGINF = -INF - - -cpdef bint is_null_datetimelike( - object val, bint inat_is_null=True, bint inf_as_na=False -): - """ - Determine if we have a null for a timedelta/datetime (or integer versions). - - Parameters - ---------- - val : object - inat_is_null : bool, default True - Whether to treat integer iNaT value as null - inf_as_na : bool, default False - Whether to treat INF or -INF value as null. 
- - Returns - ------- - bool - """ - if val is None: - return True - elif val is c_NaT: - return True - elif util.is_float_object(val) or util.is_complex_object(val): - if val != val: - return True - if inf_as_na: - return val == INF or val == NEGINF - return False - elif util.is_timedelta64_object(val): - return get_timedelta64_value(val) == NPY_NAT - elif util.is_datetime64_object(val): - return get_datetime64_value(val) == NPY_NAT - elif inat_is_null and util.is_integer_object(val): - return val == NPY_NAT - return False diff --git a/pandas/tests/dtypes/test_missing.py b/pandas/tests/dtypes/test_missing.py index 55d0e5e73418e..7e34c7781392e 100644 --- a/pandas/tests/dtypes/test_missing.py +++ b/pandas/tests/dtypes/test_missing.py @@ -8,10 +8,7 @@ from pandas._config import config as cf from pandas._libs import missing as libmissing -from pandas._libs.tslibs import ( - iNaT, - is_null_datetimelike, -) +from pandas._libs.tslibs import iNaT from pandas.core.dtypes.common import ( is_float, @@ -687,26 +684,6 @@ def test_checknull_old(self): for value in never_na_vals: assert not libmissing.checknull_old(value) - def test_is_null_datetimelike(self): - for value in na_vals: - assert is_null_datetimelike(value) - assert is_null_datetimelike(value, False) - - for value in inf_vals: - assert not is_null_datetimelike(value) - assert not is_null_datetimelike(value, False) - - for value in int_na_vals: - assert is_null_datetimelike(value) - assert not is_null_datetimelike(value, False) - - for value in sometimes_na_vals: - assert not is_null_datetimelike(value) - assert not is_null_datetimelike(value, False) - - for value in never_na_vals: - assert not is_null_datetimelike(value) - def test_is_matching_na(self, nulls_fixture, nulls_fixture2): left = nulls_fixture right = nulls_fixture2 diff --git a/pandas/tests/tslibs/test_api.py b/pandas/tests/tslibs/test_api.py index 4ded555ed8f73..d7abb19530837 100644 --- a/pandas/tests/tslibs/test_api.py +++ 
b/pandas/tests/tslibs/test_api.py @@ -29,7 +29,6 @@ def test_namespace(): "NaT", "NaTType", "iNaT", - "is_null_datetimelike", "nat_strings", "OutOfBoundsDatetime", "OutOfBoundsTimedelta",
- [ ] closes #xxxx - [ ] tests added / passed - [ ] Ensure all linting tests pass, see [here](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit) for how to run them - [ ] whatsnew entry No longer needed following #44507
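The removed `is_null_datetimelike` is inlined into `checknull`; only the integer-iNaT branch is dropped outright. A hedged sketch of the datetime64/timedelta64 cases using NumPy's public API (the function name is mine, not pandas'):

```python
import numpy as np

def is_dt64_or_td64_nat(val):
    """True only for np.datetime64('NaT') / np.timedelta64('NaT') scalars."""
    if isinstance(val, (np.datetime64, np.timedelta64)):
        return bool(np.isnat(val))
    return False
```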
https://api.github.com/repos/pandas-dev/pandas/pulls/44540
2021-11-20T19:31:14Z
2021-11-20T21:43:30Z
2021-11-20T21:43:30Z
2021-11-20T21:47:15Z
DEPR: Series/DataFrame.append (#35407)
diff --git a/doc/source/getting_started/comparison/comparison_with_spreadsheets.rst b/doc/source/getting_started/comparison/comparison_with_spreadsheets.rst index bdd0f7d8cfddf..19999be9b461f 100644 --- a/doc/source/getting_started/comparison/comparison_with_spreadsheets.rst +++ b/doc/source/getting_started/comparison/comparison_with_spreadsheets.rst @@ -435,13 +435,14 @@ The equivalent in pandas: Adding a row ~~~~~~~~~~~~ -Assuming we are using a :class:`~pandas.RangeIndex` (numbered ``0``, ``1``, etc.), we can use :meth:`DataFrame.append` to add a row to the bottom of a ``DataFrame``. +Assuming we are using a :class:`~pandas.RangeIndex` (numbered ``0``, ``1``, etc.), we can use :func:`concat` to add a row to the bottom of a ``DataFrame``. .. ipython:: python df - new_row = {"class": "E", "student_count": 51, "all_pass": True} - df.append(new_row, ignore_index=True) + new_row = pd.DataFrame([["E", 51, True]], + columns=["class", "student_count", "all_pass"]) + pd.concat([df, new_row]) Find and Replace diff --git a/doc/source/user_guide/10min.rst b/doc/source/user_guide/10min.rst index 4aca107b7c106..08488a33936f0 100644 --- a/doc/source/user_guide/10min.rst +++ b/doc/source/user_guide/10min.rst @@ -478,7 +478,6 @@ Concatenating pandas objects together with :func:`concat`: a row requires a copy, and may be expensive. We recommend passing a pre-built list of records to the :class:`DataFrame` constructor instead of building a :class:`DataFrame` by iteratively appending records to it. - See :ref:`Appending to dataframe <merging.concatenation>` for more. Join ~~~~ diff --git a/doc/source/user_guide/cookbook.rst b/doc/source/user_guide/cookbook.rst index 03221e71ea32a..8c2dd3ba60f13 100644 --- a/doc/source/user_guide/cookbook.rst +++ b/doc/source/user_guide/cookbook.rst @@ -929,9 +929,9 @@ Valid frequency arguments to Grouper :ref:`Timeseries <timeseries.offset_aliases Merge ----- -The :ref:`Concat <merging.concatenation>` docs. The :ref:`Join <merging.join>` docs. 
+The :ref:`Join <merging.join>` docs. -`Append two dataframes with overlapping index (emulate R rbind) +`Concatenate two dataframes with overlapping index (emulate R rbind) <https://stackoverflow.com/questions/14988480/pandas-version-of-rbind>`__ .. ipython:: python @@ -944,7 +944,7 @@ Depending on df construction, ``ignore_index`` may be needed .. ipython:: python - df = df1.append(df2, ignore_index=True) + df = pd.concat([df1, df2], ignore_index=True) df `Self Join of a DataFrame diff --git a/doc/source/user_guide/merging.rst b/doc/source/user_guide/merging.rst index cee12c6939b25..bbca5773afdfe 100644 --- a/doc/source/user_guide/merging.rst +++ b/doc/source/user_guide/merging.rst @@ -237,59 +237,6 @@ Similarly, we could index before the concatenation: p.plot([df1, df4], result, labels=["df1", "df4"], vertical=False); plt.close("all"); -.. _merging.concatenation: - -Concatenating using ``append`` -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -A useful shortcut to :func:`~pandas.concat` are the :meth:`~DataFrame.append` -instance methods on ``Series`` and ``DataFrame``. These methods actually predated -``concat``. They concatenate along ``axis=0``, namely the index: - -.. ipython:: python - - result = df1.append(df2) - -.. ipython:: python - :suppress: - - @savefig merging_append1.png - p.plot([df1, df2], result, labels=["df1", "df2"], vertical=True); - plt.close("all"); - -In the case of ``DataFrame``, the indexes must be disjoint but the columns do not -need to be: - -.. ipython:: python - - result = df1.append(df4, sort=False) - -.. ipython:: python - :suppress: - - @savefig merging_append2.png - p.plot([df1, df4], result, labels=["df1", "df4"], vertical=True); - plt.close("all"); - -``append`` may take multiple objects to concatenate: - -.. ipython:: python - - result = df1.append([df2, df3]) - -.. ipython:: python - :suppress: - - @savefig merging_append3.png - p.plot([df1, df2, df3], result, labels=["df1", "df2", "df3"], vertical=True); - plt.close("all"); - -.. 
note:: - - Unlike the :py:meth:`~list.append` method, which appends to the original list - and returns ``None``, :meth:`~DataFrame.append` here **does not** modify - ``df1`` and returns its copy with ``df2`` appended. - .. _merging.ignore_index: Ignoring indexes on the concatenation axis @@ -309,19 +256,6 @@ do this, use the ``ignore_index`` argument: p.plot([df1, df4], result, labels=["df1", "df4"], vertical=True); plt.close("all"); -This is also a valid argument to :meth:`DataFrame.append`: - -.. ipython:: python - - result = df1.append(df4, ignore_index=True, sort=False) - -.. ipython:: python - :suppress: - - @savefig merging_append_ignore_index.png - p.plot([df1, df4], result, labels=["df1", "df4"], vertical=True); - plt.close("all"); - .. _merging.mixed_ndims: Concatenating with mixed ndims @@ -473,14 +407,13 @@ like GroupBy where the order of a categorical variable is meaningful. Appending rows to a DataFrame ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -While not especially efficient (since a new object must be created), you can -append a single row to a ``DataFrame`` by passing a ``Series`` or dict to -``append``, which returns a new ``DataFrame`` as above. +If you have a series that you want to append as a single row to a ``DataFrame``, you can convert the row into a +``DataFrame`` and use ``concat`` .. ipython:: python s2 = pd.Series(["X0", "X1", "X2", "X3"], index=["A", "B", "C", "D"]) - result = df1.append(s2, ignore_index=True) + result = pd.concat([df1, s2.to_frame().T], ignore_index=True) .. ipython:: python :suppress: @@ -493,20 +426,6 @@ You should use ``ignore_index`` with this method to instruct DataFrame to discard its index. If you wish to preserve the index, you should construct an appropriately-indexed DataFrame and append or concatenate those objects. -You can also pass a list of dicts or Series: - -.. 
ipython:: python - - dicts = [{"A": 1, "B": 2, "C": 3, "X": 4}, {"A": 5, "B": 6, "C": 7, "Y": 8}] - result = df1.append(dicts, ignore_index=True, sort=False) - -.. ipython:: python - :suppress: - - @savefig merging_append_dits.png - p.plot([df1, pd.DataFrame(dicts)], result, labels=["df1", "dicts"], vertical=True); - plt.close("all"); - .. _merging.join: Database-style DataFrame or named Series joining/merging diff --git a/doc/source/whatsnew/v0.6.1.rst b/doc/source/whatsnew/v0.6.1.rst index 139c6e2d1cb0c..4e72a630ad9f1 100644 --- a/doc/source/whatsnew/v0.6.1.rst +++ b/doc/source/whatsnew/v0.6.1.rst @@ -6,7 +6,7 @@ Version 0.6.1 (December 13, 2011) New features ~~~~~~~~~~~~ -- Can :ref:`append single rows <merging.append.row>` (as Series) to a DataFrame +- Can append single rows (as Series) to a DataFrame - Add Spearman and Kendall rank :ref:`correlation <computation.correlation>` options to Series.corr and DataFrame.corr (:issue:`428`) - :ref:`Added <indexing.basics.get_value>` ``get_value`` and ``set_value`` methods to diff --git a/doc/source/whatsnew/v0.7.0.rst b/doc/source/whatsnew/v0.7.0.rst index 52747f2992dc4..1b947030ab8ab 100644 --- a/doc/source/whatsnew/v0.7.0.rst +++ b/doc/source/whatsnew/v0.7.0.rst @@ -19,7 +19,7 @@ New features intersection of the other axes. Improves performance of ``Series.append`` and ``DataFrame.append`` (:issue:`468`, :issue:`479`, :issue:`273`) -- :ref:`Can <merging.concatenation>` pass multiple DataFrames to +- Can pass multiple DataFrames to ``DataFrame.append`` to concatenate (stack) and multiple Series to ``Series.append`` too diff --git a/doc/source/whatsnew/v1.4.0.rst b/doc/source/whatsnew/v1.4.0.rst index 7218c77e43409..ef390756a20f6 100644 --- a/doc/source/whatsnew/v1.4.0.rst +++ b/doc/source/whatsnew/v1.4.0.rst @@ -275,7 +275,7 @@ ignored when finding the concatenated dtype. 
These are now consistently *not* i df1 = pd.DataFrame({"bar": [pd.Timestamp("2013-01-01")]}, index=range(1)) df2 = pd.DataFrame({"bar": np.nan}, index=range(1, 2)) - res = df1.append(df2) + res = pd.concat([df1, df2]) Previously, the float-dtype in ``df2`` would be ignored so the result dtype would be ``datetime64[ns]``. As a result, the ``np.nan`` would be cast to ``NaT``. @@ -510,6 +510,49 @@ when given numeric data, but in the future, a :class:`NumericIndex` will be retu Out [4]: NumericIndex([1, 2, 3], dtype='uint64') +.. _whatsnew_140.deprecations.frame_series_append: + +Deprecated Frame.append and Series.append +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + +:meth:`DataFrame.append` and :meth:`Series.append` have been deprecated and will be removed in Pandas 2.0. +Use :func:`pandas.concat` instead (:issue:`35407`). + +*Deprecated syntax* + +.. code-block:: ipython + + In [1]: pd.Series([1, 2]).append(pd.Series([3, 4]) + Out [1]: + <stdin>:1: FutureWarning: The series.append method is deprecated and will be removed from pandas in a future version. Use pandas.concat instead. + 0 1 + 1 2 + 0 3 + 1 4 + dtype: int64 + + In [2]: df1 = pd.DataFrame([[1, 2], [3, 4]], columns=list('AB')) + In [3]: df2 = pd.DataFrame([[5, 6], [7, 8]], columns=list('AB')) + In [4]: df1.append(df2) + Out [4]: + <stdin>:1: FutureWarning: The series.append method is deprecated and will be removed from pandas in a future version. Use pandas.concat instead. + A B + 0 1 2 + 1 3 4 + 0 5 6 + 1 7 8 + +*Recommended syntax* + +.. ipython:: python + + pd.concat([pd.Series([1, 2]), pd.Series([3, 4])]) + + df1 = pd.DataFrame([[1, 2], [3, 4]], columns=list('AB')) + df2 = pd.DataFrame([[5, 6], [7, 8]], columns=list('AB')) + pd.concat([df1, df2]) + + .. 
_whatsnew_140.deprecations.other: Other Deprecations diff --git a/pandas/core/frame.py b/pandas/core/frame.py index 794fb2afc7f9e..05fd20986da72 100644 --- a/pandas/core/frame.py +++ b/pandas/core/frame.py @@ -3261,9 +3261,10 @@ def memory_usage(self, index: bool = True, deep: bool = False) -> Series: index=self.columns, ) if index: - result = self._constructor_sliced( + index_memory_usage = self._constructor_sliced( self.index.memory_usage(deep=deep), index=["Index"] - ).append(result) + ) + result = index_memory_usage._append(result) return result def transpose(self, *args, copy: bool = False) -> DataFrame: @@ -9003,6 +9004,23 @@ def append( 3 3 4 4 """ + warnings.warn( + "The frame.append method is deprecated " + "and will be removed from pandas in a future version. " + "Use pandas.concat instead.", + FutureWarning, + stacklevel=find_stack_level(), + ) + + return self._append(other, ignore_index, verify_integrity, sort) + + def _append( + self, + other, + ignore_index: bool = False, + verify_integrity: bool = False, + sort: bool = False, + ) -> DataFrame: combined_columns = None if isinstance(other, (Series, dict)): if isinstance(other, dict): @@ -9728,7 +9746,9 @@ def c(x): idx_diff = result_index.difference(correl.index) if len(idx_diff) > 0: - correl = correl.append(Series([np.nan] * len(idx_diff), index=idx_diff)) + correl = correl._append( + Series([np.nan] * len(idx_diff), index=idx_diff) + ) return correl diff --git a/pandas/core/indexing.py b/pandas/core/indexing.py index f043a8cee308c..71ebb45cf23b4 100644 --- a/pandas/core/indexing.py +++ b/pandas/core/indexing.py @@ -1993,7 +1993,7 @@ def _setitem_with_indexer_missing(self, indexer, value): df = df.infer_objects() self.obj._mgr = df._mgr else: - self.obj._mgr = self.obj.append(value)._mgr + self.obj._mgr = self.obj._append(value)._mgr self.obj._maybe_update_cacher(clear=True) def _ensure_iterable_column_indexer(self, column_indexer): diff --git a/pandas/core/reshape/pivot.py 
b/pandas/core/reshape/pivot.py index 8949ad3c8fca0..d40f8c69e1b7c 100644 --- a/pandas/core/reshape/pivot.py +++ b/pandas/core/reshape/pivot.py @@ -287,7 +287,7 @@ def _add_margins( if not values and isinstance(table, ABCSeries): # If there are no values and the table is a series, then there is only # one column in the data. Compute grand margin and return it. - return table.append(Series({key: grand_margin[margins_name]})) + return table._append(Series({key: grand_margin[margins_name]})) elif values: marginal_result_set = _generate_marginal_results( @@ -325,7 +325,7 @@ def _add_margins( margin_dummy[cols] = margin_dummy[cols].apply( maybe_downcast_to_dtype, args=(dtype,) ) - result = result.append(margin_dummy) + result = result._append(margin_dummy) result.index.names = row_names return result @@ -738,7 +738,7 @@ def _normalize(table, normalize, margins: bool, margins_name="All"): elif normalize == "index": index_margin = index_margin / index_margin.sum() - table = table.append(index_margin) + table = table._append(index_margin) table = table.fillna(0) table.index = table_index @@ -747,7 +747,7 @@ def _normalize(table, normalize, margins: bool, margins_name="All"): index_margin = index_margin / index_margin.sum() index_margin.loc[margins_name] = 1 table = concat([table, column_margin], axis=1) - table = table.append(index_margin) + table = table._append(index_margin) table = table.fillna(0) table.index = table_index diff --git a/pandas/core/series.py b/pandas/core/series.py index 746512e8fb7d6..da91dd243934f 100644 --- a/pandas/core/series.py +++ b/pandas/core/series.py @@ -2893,6 +2893,19 @@ def append( ... ValueError: Indexes have overlapping values: [0, 1, 2] """ + warnings.warn( + "The series.append method is deprecated " + "and will be removed from pandas in a future version. 
" + "Use pandas.concat instead.", + FutureWarning, + stacklevel=find_stack_level(), + ) + + return self._append(to_append, ignore_index, verify_integrity) + + def _append( + self, to_append, ignore_index: bool = False, verify_integrity: bool = False + ): from pandas.core.reshape.concat import concat if isinstance(to_append, (list, tuple)): diff --git a/pandas/tests/frame/methods/test_append.py b/pandas/tests/frame/methods/test_append.py index c5b7a7ff69a53..5cfad472e0134 100644 --- a/pandas/tests/frame/methods/test_append.py +++ b/pandas/tests/frame/methods/test_append.py @@ -13,6 +13,7 @@ class TestDataFrameAppend: + @pytest.mark.filterwarnings("ignore:.*append method is deprecated.*:FutureWarning") def test_append_multiindex(self, multiindex_dataframe_random_data, frame_or_series): obj = multiindex_dataframe_random_data obj = tm.get_obj(obj, frame_or_series) @@ -26,16 +27,16 @@ def test_append_multiindex(self, multiindex_dataframe_random_data, frame_or_seri def test_append_empty_list(self): # GH 28769 df = DataFrame() - result = df.append([]) + result = df._append([]) expected = df tm.assert_frame_equal(result, expected) assert result is not df df = DataFrame(np.random.randn(5, 4), columns=["foo", "bar", "baz", "qux"]) - result = df.append([]) + result = df._append([]) expected = df tm.assert_frame_equal(result, expected) - assert result is not df # .append() should return a new object + assert result is not df # ._append() should return a new object def test_append_series_dict(self): df = DataFrame(np.random.randn(5, 4), columns=["foo", "bar", "baz", "qux"]) @@ -43,38 +44,38 @@ def test_append_series_dict(self): series = df.loc[4] msg = "Indexes have overlapping values" with pytest.raises(ValueError, match=msg): - df.append(series, verify_integrity=True) + df._append(series, verify_integrity=True) series.name = None msg = "Can only append a Series if ignore_index=True" with pytest.raises(TypeError, match=msg): - df.append(series, verify_integrity=True) + 
df._append(series, verify_integrity=True) - result = df.append(series[::-1], ignore_index=True) - expected = df.append( + result = df._append(series[::-1], ignore_index=True) + expected = df._append( DataFrame({0: series[::-1]}, index=df.columns).T, ignore_index=True ) tm.assert_frame_equal(result, expected) # dict - result = df.append(series.to_dict(), ignore_index=True) + result = df._append(series.to_dict(), ignore_index=True) tm.assert_frame_equal(result, expected) - result = df.append(series[::-1][:3], ignore_index=True) - expected = df.append( + result = df._append(series[::-1][:3], ignore_index=True) + expected = df._append( DataFrame({0: series[::-1][:3]}).T, ignore_index=True, sort=True ) tm.assert_frame_equal(result, expected.loc[:, result.columns]) msg = "Can only append a dict if ignore_index=True" with pytest.raises(TypeError, match=msg): - df.append(series.to_dict()) + df._append(series.to_dict()) # can append when name set row = df.loc[4] row.name = 5 - result = df.append(row) - expected = df.append(df[-1:], ignore_index=True) + result = df._append(row) + expected = df._append(df[-1:], ignore_index=True) tm.assert_frame_equal(result, expected) def test_append_list_of_series_dicts(self): @@ -82,8 +83,8 @@ def test_append_list_of_series_dicts(self): dicts = [x.to_dict() for idx, x in df.iterrows()] - result = df.append(dicts, ignore_index=True) - expected = df.append(df, ignore_index=True) + result = df._append(dicts, ignore_index=True) + expected = df._append(df, ignore_index=True) tm.assert_frame_equal(result, expected) # different columns @@ -91,8 +92,8 @@ def test_append_list_of_series_dicts(self): {"foo": 1, "bar": 2, "baz": 3, "peekaboo": 4}, {"foo": 5, "bar": 6, "baz": 7, "peekaboo": 8}, ] - result = df.append(dicts, ignore_index=True, sort=True) - expected = df.append(DataFrame(dicts), ignore_index=True, sort=True) + result = df._append(dicts, ignore_index=True, sort=True) + expected = df._append(DataFrame(dicts), ignore_index=True, sort=True) 
tm.assert_frame_equal(result, expected) def test_append_list_retain_index_name(self): @@ -108,11 +109,11 @@ def test_append_list_retain_index_name(self): ) # append series - result = df.append(serc) + result = df._append(serc) tm.assert_frame_equal(result, expected) # append list of series - result = df.append([serc]) + result = df._append([serc]) tm.assert_frame_equal(result, expected) def test_append_missing_cols(self): @@ -123,10 +124,9 @@ def test_append_missing_cols(self): df = DataFrame(np.random.randn(5, 4), columns=["foo", "bar", "baz", "qux"]) dicts = [{"foo": 9}, {"bar": 10}] - with tm.assert_produces_warning(None): - result = df.append(dicts, ignore_index=True, sort=True) + result = df._append(dicts, ignore_index=True, sort=True) - expected = df.append(DataFrame(dicts), ignore_index=True, sort=True) + expected = df._append(DataFrame(dicts), ignore_index=True, sort=True) tm.assert_frame_equal(result, expected) def test_append_empty_dataframe(self): @@ -134,28 +134,28 @@ def test_append_empty_dataframe(self): # Empty df append empty df df1 = DataFrame() df2 = DataFrame() - result = df1.append(df2) + result = df1._append(df2) expected = df1.copy() tm.assert_frame_equal(result, expected) # Non-empty df append empty df df1 = DataFrame(np.random.randn(5, 2)) df2 = DataFrame() - result = df1.append(df2) + result = df1._append(df2) expected = df1.copy() tm.assert_frame_equal(result, expected) # Empty df with columns append empty df df1 = DataFrame(columns=["bar", "foo"]) df2 = DataFrame() - result = df1.append(df2) + result = df1._append(df2) expected = df1.copy() tm.assert_frame_equal(result, expected) # Non-Empty df with columns append empty df df1 = DataFrame(np.random.randn(5, 2), columns=["bar", "foo"]) df2 = DataFrame() - result = df1.append(df2) + result = df1._append(df2) expected = df1.copy() tm.assert_frame_equal(result, expected) @@ -167,19 +167,19 @@ def test_append_dtypes(self): df1 = DataFrame({"bar": Timestamp("20130101")}, index=range(5)) df2 = 
         DataFrame()
-        result = df1.append(df2)
+        result = df1._append(df2)
         expected = df1.copy()
         tm.assert_frame_equal(result, expected)

         df1 = DataFrame({"bar": Timestamp("20130101")}, index=range(1))
         df2 = DataFrame({"bar": "foo"}, index=range(1, 2))
-        result = df1.append(df2)
+        result = df1._append(df2)
         expected = DataFrame({"bar": [Timestamp("20130101"), "foo"]})
         tm.assert_frame_equal(result, expected)

         df1 = DataFrame({"bar": Timestamp("20130101")}, index=range(1))
         df2 = DataFrame({"bar": np.nan}, index=range(1, 2))
-        result = df1.append(df2)
+        result = df1._append(df2)
         expected = DataFrame(
             {"bar": Series([Timestamp("20130101"), np.nan], dtype="M8[ns]")}
         )
@@ -188,7 +188,7 @@ def test_append_dtypes(self):

         df1 = DataFrame({"bar": Timestamp("20130101")}, index=range(1))
         df2 = DataFrame({"bar": np.nan}, index=range(1, 2), dtype=object)
-        result = df1.append(df2)
+        result = df1._append(df2)
         expected = DataFrame(
             {"bar": Series([Timestamp("20130101"), np.nan], dtype="M8[ns]")}
         )
@@ -197,7 +197,7 @@ def test_append_dtypes(self):

         df1 = DataFrame({"bar": np.nan}, index=range(1))
         df2 = DataFrame({"bar": Timestamp("20130101")}, index=range(1, 2))
-        result = df1.append(df2)
+        result = df1._append(df2)
         expected = DataFrame(
             {"bar": Series([np.nan, Timestamp("20130101")], dtype="M8[ns]")}
         )
@@ -206,7 +206,7 @@ def test_append_dtypes(self):

         df1 = DataFrame({"bar": Timestamp("20130101")}, index=range(1))
         df2 = DataFrame({"bar": 1}, index=range(1, 2), dtype=object)
-        result = df1.append(df2)
+        result = df1._append(df2)
         expected = DataFrame({"bar": Series([Timestamp("20130101"), 1])})
         tm.assert_frame_equal(result, expected)

@@ -217,7 +217,7 @@ def test_append_timestamps_aware_or_naive(self, tz_naive_fixture, timestamp):
         # GH 30238
         tz = tz_naive_fixture
         df = DataFrame([Timestamp(timestamp, tz=tz)])
-        result = df.append(df.iloc[0]).iloc[-1]
+        result = df._append(df.iloc[0]).iloc[-1]
         expected = Series(Timestamp(timestamp, tz=tz), name=0)
         tm.assert_series_equal(result, expected)

@@ -233,7 +233,7 @@ def test_append_timestamps_aware_or_naive(self, tz_naive_fixture, timestamp):
     )
     def test_other_dtypes(self, data, dtype):
         df = DataFrame(data, dtype=dtype)
-        result = df.append(df.iloc[0]).iloc[-1]
+        result = df._append(df.iloc[0]).iloc[-1]
         expected = Series(data, name=0, dtype=dtype)
         tm.assert_series_equal(result, expected)

@@ -248,7 +248,7 @@ def test_append_numpy_bug_1681(self, dtype):
         df = DataFrame()
         other = DataFrame({"A": "foo", "B": index}, index=index)

-        result = df.append(other)
+        result = df._append(other)
         assert (result["B"] == index).all()

     @pytest.mark.filterwarnings("ignore:The values in the array:RuntimeWarning")
@@ -263,9 +263,16 @@ def test_multiindex_column_append_multiple(self):
         df2 = df.copy()
         for i in range(1, 10):
             df[i, "colA"] = 10
-            df = df.append(df2, ignore_index=True)
+            df = df._append(df2, ignore_index=True)
             result = df["multi"]
             expected = DataFrame(
                 {"col1": [1, 2, 3] * (i + 1), "col2": [11, 12, 13] * (i + 1)}
             )
             tm.assert_frame_equal(result, expected)
+
+    def test_append_raises_future_warning(self):
+        # GH#35407
+        df1 = DataFrame([[1, 2], [3, 4]])
+        df2 = DataFrame([[5, 6], [7, 8]])
+        with tm.assert_produces_warning(FutureWarning):
+            df1.append(df2)
diff --git a/pandas/tests/frame/methods/test_drop_duplicates.py b/pandas/tests/frame/methods/test_drop_duplicates.py
index 8cbf7bbfe0368..cd61f59a85d1e 100644
--- a/pandas/tests/frame/methods/test_drop_duplicates.py
+++ b/pandas/tests/frame/methods/test_drop_duplicates.py
@@ -7,6 +7,7 @@
 from pandas import (
     DataFrame,
     NaT,
+    concat,
 )
 import pandas._testing as tm

@@ -111,7 +112,7 @@ def test_drop_duplicates():

     # GH 11864
     df = DataFrame([i] * 9 for i in range(16))
-    df = df.append([[1] + [0] * 8], ignore_index=True)
+    df = concat([df, DataFrame([[1] + [0] * 8])], ignore_index=True)

     for keep in ["first", "last", False]:
         assert df.duplicated(keep=keep).sum() == 0
diff --git a/pandas/tests/generic/test_duplicate_labels.py b/pandas/tests/generic/test_duplicate_labels.py
index 1b32675ec2d35..1c0ae46aa5500 100644
--- a/pandas/tests/generic/test_duplicate_labels.py
+++ b/pandas/tests/generic/test_duplicate_labels.py
@@ -294,14 +294,12 @@ def test_setting_allows_duplicate_labels_raises(self, data):

         assert data.flags.allows_duplicate_labels is True

-    @pytest.mark.parametrize(
-        "func", [operator.methodcaller("append", pd.Series(0, index=["a", "b"]))]
-    )
-    def test_series_raises(self, func):
-        s = pd.Series([0, 1], index=["a", "b"]).set_flags(allows_duplicate_labels=False)
+    def test_series_raises(self):
+        a = pd.Series(0, index=["a", "b"])
+        b = pd.Series([0, 1], index=["a", "b"]).set_flags(allows_duplicate_labels=False)
         msg = "Index has duplicates."
         with pytest.raises(pd.errors.DuplicateLabelError, match=msg):
-            func(s)
+            pd.concat([a, b])

     @pytest.mark.parametrize(
         "getter, target",
diff --git a/pandas/tests/generic/test_finalize.py b/pandas/tests/generic/test_finalize.py
index f27e92a55268f..ba9cb22ed27a5 100644
--- a/pandas/tests/generic/test_finalize.py
+++ b/pandas/tests/generic/test_finalize.py
@@ -180,7 +180,10 @@
             pd.DataFrame,
             frame_data,
             operator.methodcaller("append", pd.DataFrame({"A": [1]})),
-        )
+        ),
+        marks=pytest.mark.filterwarnings(
+            "ignore:.*append method is deprecated.*:FutureWarning"
+        ),
     ),
     pytest.param(
         (
@@ -188,6 +191,9 @@
             frame_data,
             operator.methodcaller("append", pd.DataFrame({"B": [1]})),
         ),
+        marks=pytest.mark.filterwarnings(
+            "ignore:.*append method is deprecated.*:FutureWarning"
+        ),
     ),
     pytest.param(
         (
diff --git a/pandas/tests/indexes/period/test_indexing.py b/pandas/tests/indexes/period/test_indexing.py
index e193af062098b..df40822337ed0 100644
--- a/pandas/tests/indexes/period/test_indexing.py
+++ b/pandas/tests/indexes/period/test_indexing.py
@@ -144,7 +144,7 @@ def test_getitem_partial(self):
         result = ts[24:]
         tm.assert_series_equal(exp, result)

-        ts = ts[10:].append(ts[10:])
+        ts = pd.concat([ts[10:], ts[10:]])
         msg = "left slice bound for non-unique label: '2008'"
         with pytest.raises(KeyError, match=msg):
             ts[slice("2008", "2009")]
diff --git a/pandas/tests/indexing/test_partial.py b/pandas/tests/indexing/test_partial.py
index 95a9fd227c685..8251f09b97062 100644
--- a/pandas/tests/indexing/test_partial.py
+++ b/pandas/tests/indexing/test_partial.py
@@ -361,7 +361,7 @@ def test_partial_setting_mixed_dtype(self):
         s = df.loc[1].copy()
         s.name = 2

-        expected = df.append(s)
+        expected = pd.concat([df, DataFrame(s).T.infer_objects()])

         df.loc[2] = df.loc[1]
         tm.assert_frame_equal(df, expected)
@@ -538,7 +538,8 @@ def test_partial_set_invalid(self):
         # allow object conversion here
         df = orig.copy()
         df.loc["a", :] = df.iloc[0]
-        exp = orig.append(Series(df.iloc[0], name="a"))
+        ser = Series(df.iloc[0], name="a")
+        exp = pd.concat([orig, DataFrame(ser).T.infer_objects()])
         tm.assert_frame_equal(df, exp)
         tm.assert_index_equal(df.index, Index(orig.index.tolist() + ["a"]))
         assert df.index.dtype == "object"
diff --git a/pandas/tests/io/formats/test_format.py b/pandas/tests/io/formats/test_format.py
index ab0199dca3f24..fe6914e6ef4f5 100644
--- a/pandas/tests/io/formats/test_format.py
+++ b/pandas/tests/io/formats/test_format.py
@@ -2427,7 +2427,7 @@ def test_datetimeindex(self):

         # nat in index
         s2 = Series(2, index=[Timestamp("20130111"), NaT])
-        s = s2.append(s)
+        s = pd.concat([s2, s])
         result = s.to_string()
         assert "NaT" in result
diff --git a/pandas/tests/resample/test_time_grouper.py b/pandas/tests/resample/test_time_grouper.py
index 4f69a7f590319..2e512d74076d3 100644
--- a/pandas/tests/resample/test_time_grouper.py
+++ b/pandas/tests/resample/test_time_grouper.py
@@ -207,7 +207,7 @@ def test_aggregate_with_nat(func, fill_value):
     dt_result = getattr(dt_grouped, func)()

     pad = DataFrame([[fill_value] * 4], index=[3], columns=["A", "B", "C", "D"])
-    expected = normal_result.append(pad)
+    expected = pd.concat([normal_result, pad])
     expected = expected.sort_index()
     dti = date_range(start="2013-01-01", freq="D", periods=5, name="key")
     expected.index = dti._with_freq(None)  # TODO: is this desired?
@@ -238,7 +238,7 @@ def test_aggregate_with_nat_size():
     dt_result = dt_grouped.size()

     pad = Series([0], index=[3])
-    expected = normal_result.append(pad)
+    expected = pd.concat([normal_result, pad])
     expected = expected.sort_index()
     expected.index = date_range(
         start="2013-01-01", freq="D", periods=5, name="key"
diff --git a/pandas/tests/reshape/concat/test_append.py b/pandas/tests/reshape/concat/test_append.py
index 061afb3a7e0f5..0b1d1c4a3d346 100644
--- a/pandas/tests/reshape/concat/test_append.py
+++ b/pandas/tests/reshape/concat/test_append.py
@@ -28,23 +28,23 @@ def test_append(self, sort, float_frame):
         begin_frame = float_frame.reindex(begin_index)
         end_frame = float_frame.reindex(end_index)

-        appended = begin_frame.append(end_frame)
+        appended = begin_frame._append(end_frame)
         tm.assert_almost_equal(appended["A"], float_frame["A"])

         del end_frame["A"]
-        partial_appended = begin_frame.append(end_frame, sort=sort)
+        partial_appended = begin_frame._append(end_frame, sort=sort)
         assert "A" in partial_appended

-        partial_appended = end_frame.append(begin_frame, sort=sort)
+        partial_appended = end_frame._append(begin_frame, sort=sort)
         assert "A" in partial_appended

         # mixed type handling
-        appended = mixed_frame[:5].append(mixed_frame[5:])
+        appended = mixed_frame[:5]._append(mixed_frame[5:])
         tm.assert_frame_equal(appended, mixed_frame)

         # what to test here
-        mixed_appended = mixed_frame[:5].append(float_frame[5:], sort=sort)
-        mixed_appended2 = float_frame[:5].append(mixed_frame[5:], sort=sort)
+        mixed_appended = mixed_frame[:5]._append(float_frame[5:], sort=sort)
+        mixed_appended2 = float_frame[:5]._append(mixed_frame[5:], sort=sort)

         # all equal except 'foo' column
         tm.assert_frame_equal(
@@ -55,18 +55,18 @@ def test_append(self, sort, float_frame):
     def test_append_empty(self, float_frame):
         empty = DataFrame()

-        appended = float_frame.append(empty)
+        appended = float_frame._append(empty)
         tm.assert_frame_equal(float_frame, appended)
         assert appended is not float_frame

-        appended = empty.append(float_frame)
+        appended = empty._append(float_frame)
         tm.assert_frame_equal(float_frame, appended)
         assert appended is not float_frame

     def test_append_overlap_raises(self, float_frame):
         msg = "Indexes have overlapping values"
         with pytest.raises(ValueError, match=msg):
-            float_frame.append(float_frame, verify_integrity=True)
+            float_frame._append(float_frame, verify_integrity=True)

     def test_append_new_columns(self):
         # see gh-6129: new columns
@@ -79,13 +79,13 @@ def test_append_new_columns(self):
                 "c": {"z": 7},
             }
         )
-        result = df.append(row)
+        result = df._append(row)
         tm.assert_frame_equal(result, expected)

     def test_append_length0_frame(self, sort):
         df = DataFrame(columns=["A", "B", "C"])
         df3 = DataFrame(index=[0, 1], columns=["A", "B"])
-        df5 = df.append(df3, sort=sort)
+        df5 = df._append(df3, sort=sort)

         expected = DataFrame(index=[0, 1], columns=["A", "B", "C"])
         tm.assert_frame_equal(df5, expected)
@@ -100,7 +100,7 @@ def test_append_records(self):

         df1 = DataFrame(arr1)
         df2 = DataFrame(arr2)
-        result = df1.append(df2, ignore_index=True)
+        result = df1._append(df2, ignore_index=True)
         expected = DataFrame(np.concatenate((arr1, arr2)))
         tm.assert_frame_equal(result, expected)

@@ -109,8 +109,7 @@ def test_append_sorts(self, sort):
         df1 = DataFrame({"a": [1, 2], "b": [1, 2]}, columns=["b", "a"])
         df2 = DataFrame({"a": [1, 2], "c": [3, 4]}, index=[2, 3])

-        with tm.assert_produces_warning(None):
-            result = df1.append(df2, sort=sort)
+        result = df1._append(df2, sort=sort)

         # for None / True
         expected = DataFrame(
@@ -134,7 +133,7 @@ def test_append_different_columns(self, sort):
         a = df[:5].loc[:, ["bools", "ints", "floats"]]
         b = df[5:].loc[:, ["strings", "ints", "floats"]]

-        appended = a.append(b, sort=sort)
+        appended = a._append(b, sort=sort)
         assert isna(appended["strings"][0:4]).all()
         assert isna(appended["bools"][5:]).all()

@@ -146,12 +145,12 @@ def test_append_many(self, sort, float_frame):
             float_frame[15:],
         ]

-        result = chunks[0].append(chunks[1:])
+        result = chunks[0]._append(chunks[1:])
        tm.assert_frame_equal(result, float_frame)

         chunks[-1] = chunks[-1].copy()
         chunks[-1]["foo"] = "bar"
-        result = chunks[0].append(chunks[1:], sort=sort)
+        result = chunks[0]._append(chunks[1:], sort=sort)
         tm.assert_frame_equal(result.loc[:, float_frame.columns], float_frame)
         assert (result["foo"][15:] == "bar").all()
         assert result["foo"][:15].isna().all()
@@ -163,7 +162,7 @@ def test_append_preserve_index_name(self):
         df2 = DataFrame(data=[[1, 4, 7], [2, 5, 8], [3, 6, 9]], columns=["A", "B", "C"])
         df2 = df2.set_index(["A"])

-        result = df1.append(df2)
+        result = df1._append(df2)
         assert result.index.name == "A"

     indexes_can_append = [
@@ -194,7 +193,7 @@ def test_append_same_columns_type(self, index):
         df = DataFrame([[1, 2, 3], [4, 5, 6]], columns=index)
         ser_index = index[:2]
         ser = Series([7, 8], index=ser_index, name=2)
-        result = df.append(ser)
+        result = df._append(ser)
         expected = DataFrame(
             [[1, 2, 3.0], [4, 5, 6], [7, 8, np.nan]], index=[0, 1, 2], columns=index
         )
@@ -209,7 +208,7 @@
         index = index[:2]
         df = DataFrame([[1, 2], [4, 5]], columns=index)
         ser = Series([7, 8, 9], index=ser_index, name=2)
-        result = df.append(ser)
+        result = df._append(ser)
         expected = DataFrame(
             [[1, 2, np.nan], [4, 5, np.nan], [7, 8, 9]],
             index=[0, 1, 2],
@@ -230,7 +229,7 @@ def test_append_different_columns_types(self, df_columns, series_index):
         df = DataFrame([[1, 2, 3], [4, 5, 6]], columns=df_columns)
         ser = Series([7, 8, 9], index=series_index, name=2)

-        result = df.append(ser)
+        result = df._append(ser)
         idx_diff = ser.index.difference(df_columns)
         combined_columns = Index(df_columns.tolist()).append(idx_diff)
         expected = DataFrame(
@@ -287,7 +286,7 @@ def test_append_dtype_coerce(self, sort):
             axis=1,
             sort=sort,
         )
-        result = df1.append(df2, ignore_index=True, sort=sort)
+        result = df1._append(df2, ignore_index=True, sort=sort)
         if sort:
             expected = expected[["end_time", "start_time"]]
         else:
@@ -299,7 +298,7 @@ def test_append_missing_column_proper_upcast(self, sort):
         df1 = DataFrame({"A": np.array([1, 2, 3, 4], dtype="i8")})
         df2 = DataFrame({"B": np.array([True, False, True, False], dtype=bool)})

-        appended = df1.append(df2, ignore_index=True, sort=sort)
+        appended = df1._append(df2, ignore_index=True, sort=sort)
         assert appended["A"].dtype == "f8"
         assert appended["B"].dtype == "O"

@@ -308,7 +307,7 @@ def test_append_empty_frame_to_series_with_dateutil_tz(self):
        date = Timestamp("2018-10-24 07:30:00", tz=dateutil.tz.tzutc())
         ser = Series({"a": 1.0, "b": 2.0, "date": date})
         df = DataFrame(columns=["c", "d"])
-        result_a = df.append(ser, ignore_index=True)
+        result_a = df._append(ser, ignore_index=True)
         expected = DataFrame(
             [[np.nan, np.nan, 1.0, 2.0, date]], columns=["c", "d", "a", "b", "date"]
         )
@@ -322,10 +321,10 @@
         )
         expected["c"] = expected["c"].astype(object)
         expected["d"] = expected["d"].astype(object)
-        result_b = result_a.append(ser, ignore_index=True)
+        result_b = result_a._append(ser, ignore_index=True)
         tm.assert_frame_equal(result_b, expected)

-        result = df.append([ser, ser], ignore_index=True)
+        result = df._append([ser, ser], ignore_index=True)
         tm.assert_frame_equal(result, expected)

     def test_append_empty_tz_frame_with_datetime64ns(self):
@@ -333,20 +332,20 @@
         df = DataFrame(columns=["a"]).astype("datetime64[ns, UTC]")

         # pd.NaT gets inferred as tz-naive, so append result is tz-naive
-        result = df.append({"a": pd.NaT}, ignore_index=True)
+        result = df._append({"a": pd.NaT}, ignore_index=True)
         expected = DataFrame({"a": [pd.NaT]}).astype(object)
         tm.assert_frame_equal(result, expected)

         # also test with typed value to append
         df = DataFrame(columns=["a"]).astype("datetime64[ns, UTC]")
         other = Series({"a": pd.NaT}, dtype="datetime64[ns]")
-        result = df.append(other, ignore_index=True)
+        result = df._append(other, ignore_index=True)
         expected = DataFrame({"a": [pd.NaT]}).astype(object)
         tm.assert_frame_equal(result, expected)

         # mismatched tz
         other = Series({"a": pd.NaT}, dtype="datetime64[ns, US/Pacific]")
-        result = df.append(other, ignore_index=True)
+        result = df._append(other, ignore_index=True)
         expected = DataFrame({"a": [pd.NaT]}).astype(object)
         tm.assert_frame_equal(result, expected)

@@ -359,7 +358,7 @@ def test_append_empty_frame_with_timedelta64ns_nat(self, dtype_str, val):
         df = DataFrame(columns=["a"]).astype(dtype_str)

         other = DataFrame({"a": [np.timedelta64(val, "ns")]})
-        result = df.append(other, ignore_index=True)
+        result = df._append(other, ignore_index=True)

         expected = other.astype(object)
         tm.assert_frame_equal(result, expected)
@@ -373,7 +372,7 @@ def test_append_frame_with_timedelta64ns_nat(self, dtype_str, val):
         df = DataFrame({"a": pd.array([1], dtype=dtype_str)})

         other = DataFrame({"a": [np.timedelta64(val, "ns")]})
-        result = df.append(other, ignore_index=True)
+        result = df._append(other, ignore_index=True)

         expected = DataFrame({"a": [df.iloc[0, 0], other.iloc[0, 0]]}, dtype=object)
         tm.assert_frame_equal(result, expected)
diff --git a/pandas/tests/reshape/concat/test_append_common.py b/pandas/tests/reshape/concat/test_append_common.py
index d5d86465dd91b..bb8027948c540 100644
--- a/pandas/tests/reshape/concat/test_append_common.py
+++ b/pandas/tests/reshape/concat/test_append_common.py
@@ -129,7 +129,7 @@ def test_concatlike_same_dtypes(self, item):
         # ----- Series ----- #

         # series.append
-        res = Series(vals1).append(Series(vals2), ignore_index=True)
+        res = Series(vals1)._append(Series(vals2), ignore_index=True)
         exp = Series(exp_data)
         tm.assert_series_equal(res, exp, check_index_type=True)

@@ -138,7 +138,7 @@
         tm.assert_series_equal(res, exp, check_index_type=True)

         # 3 elements
-        res = Series(vals1).append([Series(vals2), Series(vals3)], ignore_index=True)
+        res = Series(vals1)._append([Series(vals2), Series(vals3)], ignore_index=True)
         exp = Series(exp_data3)
         tm.assert_series_equal(res, exp)

@@ -151,7 +151,7 @@
         # name mismatch
         s1 = Series(vals1, name="x")
         s2 = Series(vals2, name="y")
-        res = s1.append(s2, ignore_index=True)
+        res = s1._append(s2, ignore_index=True)
         exp = Series(exp_data)
         tm.assert_series_equal(res, exp, check_index_type=True)

@@ -161,7 +161,7 @@
         # name match
         s1 = Series(vals1, name="x")
         s2 = Series(vals2, name="x")
-        res = s1.append(s2, ignore_index=True)
+        res = s1._append(s2, ignore_index=True)
         exp = Series(exp_data, name="x")
         tm.assert_series_equal(res, exp, check_index_type=True)

@@ -174,10 +174,10 @@
             "only Series and DataFrame objs are valid"
         )
         with pytest.raises(TypeError, match=msg):
-            Series(vals1).append(vals2)
+            Series(vals1)._append(vals2)

         with pytest.raises(TypeError, match=msg):
-            Series(vals1).append([Series(vals2), vals3])
+            Series(vals1)._append([Series(vals2), vals3])

         with pytest.raises(TypeError, match=msg):
             pd.concat([Series(vals1), vals2])
@@ -237,8 +237,8 @@ def test_concatlike_dtypes_coercion(self, item, item2):

         # ----- Series ----- #

-        # series.append
-        res = Series(vals1).append(Series(vals2), ignore_index=True)
+        # series._append
+        res = Series(vals1)._append(Series(vals2), ignore_index=True)
         exp = Series(exp_data, dtype=exp_series_dtype)
         tm.assert_series_equal(res, exp, check_index_type=True)

@@ -247,7 +247,7 @@
         tm.assert_series_equal(res, exp, check_index_type=True)

         # 3 elements
-        res = Series(vals1).append([Series(vals2), Series(vals3)], ignore_index=True)
+        res = Series(vals1)._append([Series(vals2), Series(vals3)], ignore_index=True)
         exp = Series(exp_data3, dtype=exp_series_dtype)
         tm.assert_series_equal(res, exp)

@@ -279,7 +279,7 @@ def test_concatlike_common_coerce_to_pandas_object(self):
         dts = Series(dti)
         tds = Series(tdi)
-        res = dts.append(tds)
+        res = dts._append(tds)
         tm.assert_series_equal(res, Series(exp, index=[0, 1, 0, 1]))
         assert isinstance(res.iloc[0], pd.Timestamp)
         assert isinstance(res.iloc[-1], pd.Timedelta)
@@ -304,7 +304,7 @@ def test_concatlike_datetimetz(self, tz_aware_fixture):
         dts1 = Series(dti1)
         dts2 = Series(dti2)
-        res = dts1.append(dts2)
+        res = dts1._append(dts2)
         tm.assert_series_equal(res, Series(exp, index=[0, 1, 0, 1]))

         res = pd.concat([dts1, dts2])
@@ -324,7 +324,7 @@ def test_concatlike_datetimetz_short(self, tz):
         )
         exp = DataFrame(0, index=exp_idx, columns=["A", "B"])

-        tm.assert_frame_equal(df1.append(df2), exp)
+        tm.assert_frame_equal(df1._append(df2), exp)
         tm.assert_frame_equal(pd.concat([df1, df2]), exp)

     def test_concatlike_datetimetz_to_object(self, tz_aware_fixture):
@@ -350,7 +350,7 @@ def test_concatlike_datetimetz_to_object(self, tz_aware_fixture):
         dts1 = Series(dti1)
         dts2 = Series(dti2)
-        res = dts1.append(dts2)
+        res = dts1._append(dts2)
         tm.assert_series_equal(res, Series(exp, index=[0, 1, 0, 1]))

         res = pd.concat([dts1, dts2])
@@ -374,7 +374,7 @@
         dts1 = Series(dti1)
         dts3 = Series(dti3)
-        res = dts1.append(dts3)
+        res = dts1._append(dts3)
         tm.assert_series_equal(res, Series(exp, index=[0, 1, 0, 1]))

         res = pd.concat([dts1, dts3])
@@ -392,7 +392,7 @@ def test_concatlike_common_period(self):
         ps1 = Series(pi1)
         ps2 = Series(pi2)
-        res = ps1.append(ps2)
+        res = ps1._append(ps2)
         tm.assert_series_equal(res, Series(exp, index=[0, 1, 0, 1]))

         res = pd.concat([ps1, ps2])
@@ -418,7 +418,7 @@ def test_concatlike_common_period_diff_freq_to_object(self):
         ps1 = Series(pi1)
         ps2 = Series(pi2)
-        res = ps1.append(ps2)
+        res = ps1._append(ps2)
         tm.assert_series_equal(res, Series(exp, index=[0, 1, 0, 1]))

         res = pd.concat([ps1, ps2])
@@ -444,7 +444,7 @@ def test_concatlike_common_period_mixed_dt_to_object(self):
         ps1 = Series(pi1)
         tds = Series(tdi)
-        res = ps1.append(tds)
+        res = ps1._append(tds)
         tm.assert_series_equal(res, Series(exp, index=[0, 1, 0, 1]))

         res = pd.concat([ps1, tds])
@@ -466,7 +466,7 @@ def test_concatlike_common_period_mixed_dt_to_object(self):
         ps1 = Series(pi1)
         tds = Series(tdi)
-        res = tds.append(ps1)
+        res = tds._append(ps1)
         tm.assert_series_equal(res, Series(exp, index=[0, 1, 0, 1]))

         res = pd.concat([tds, ps1])
@@ -481,7 +481,7 @@ def test_concat_categorical(self):

         exp = Series([1, 2, np.nan, 2, 1, 2], dtype="category")
         tm.assert_series_equal(pd.concat([s1, s2], ignore_index=True), exp)
-        tm.assert_series_equal(s1.append(s2, ignore_index=True), exp)
+        tm.assert_series_equal(s1._append(s2, ignore_index=True), exp)

         # partially different categories => not-category
         s1 = Series([3, 2], dtype="category")
@@ -489,7 +489,7 @@

         exp = Series([3, 2, 2, 1])
         tm.assert_series_equal(pd.concat([s1, s2], ignore_index=True), exp)
-        tm.assert_series_equal(s1.append(s2, ignore_index=True), exp)
+        tm.assert_series_equal(s1._append(s2, ignore_index=True), exp)

         # completely different categories (same dtype) => not-category
         s1 = Series([10, 11, np.nan], dtype="category")
@@ -497,7 +497,7 @@

         exp = Series([10, 11, np.nan, np.nan, 1, 3, 2], dtype="object")
         tm.assert_series_equal(pd.concat([s1, s2], ignore_index=True), exp)
-        tm.assert_series_equal(s1.append(s2, ignore_index=True), exp)
+        tm.assert_series_equal(s1._append(s2, ignore_index=True), exp)

     def test_union_categorical_same_categories_different_order(self):
         # https://github.com/pandas-dev/pandas/issues/19096
@@ -518,12 +518,12 @@ def test_concat_categorical_coercion(self):

         exp = Series([1, 2, np.nan, 2, 1, 2], dtype="object")
         tm.assert_series_equal(pd.concat([s1, s2], ignore_index=True), exp)
-        tm.assert_series_equal(s1.append(s2, ignore_index=True), exp)
+        tm.assert_series_equal(s1._append(s2, ignore_index=True), exp)

         # result shouldn't be affected by 1st elem dtype
         exp = Series([2, 1, 2, 1, 2, np.nan], dtype="object")
         tm.assert_series_equal(pd.concat([s2, s1], ignore_index=True), exp)
-        tm.assert_series_equal(s2.append(s1, ignore_index=True), exp)
+        tm.assert_series_equal(s2._append(s1, ignore_index=True), exp)

         # all values are not in category => not-category
         s1 = Series([3, 2], dtype="category")
@@ -531,11 +531,11 @@

         exp = Series([3, 2, 2, 1])
         tm.assert_series_equal(pd.concat([s1, s2], ignore_index=True), exp)
-        tm.assert_series_equal(s1.append(s2, ignore_index=True), exp)
+        tm.assert_series_equal(s1._append(s2, ignore_index=True), exp)

         exp = Series([2, 1, 3, 2])
         tm.assert_series_equal(pd.concat([s2, s1], ignore_index=True), exp)
-        tm.assert_series_equal(s2.append(s1, ignore_index=True), exp)
+        tm.assert_series_equal(s2._append(s1, ignore_index=True), exp)

         # completely different categories => not-category
         s1 = Series([10, 11, np.nan], dtype="category")
@@ -543,11 +543,11 @@

         exp = Series([10, 11, np.nan, 1, 3, 2], dtype="object")
         tm.assert_series_equal(pd.concat([s1, s2], ignore_index=True), exp)
-        tm.assert_series_equal(s1.append(s2, ignore_index=True), exp)
+        tm.assert_series_equal(s1._append(s2, ignore_index=True), exp)

         exp = Series([1, 3, 2, 10, 11, np.nan], dtype="object")
         tm.assert_series_equal(pd.concat([s2, s1], ignore_index=True), exp)
-        tm.assert_series_equal(s2.append(s1, ignore_index=True), exp)
+        tm.assert_series_equal(s2._append(s1, ignore_index=True), exp)

         # different dtype => not-category
         s1 = Series([10, 11, np.nan], dtype="category")
@@ -555,11 +555,11 @@

         exp = Series([10, 11, np.nan, "a", "b", "c"])
         tm.assert_series_equal(pd.concat([s1, s2], ignore_index=True), exp)
-        tm.assert_series_equal(s1.append(s2, ignore_index=True), exp)
+        tm.assert_series_equal(s1._append(s2, ignore_index=True), exp)

         exp = Series(["a", "b", "c", 10, 11, np.nan])
         tm.assert_series_equal(pd.concat([s2, s1], ignore_index=True), exp)
-        tm.assert_series_equal(s2.append(s1, ignore_index=True), exp)
+        tm.assert_series_equal(s2._append(s1, ignore_index=True), exp)

         # if normal series only contains NaN-likes => not-category
         s1 = Series([10, 11], dtype="category")
@@ -567,11 +567,11 @@

         exp = Series([10, 11, np.nan, np.nan, np.nan])
         tm.assert_series_equal(pd.concat([s1, s2], ignore_index=True), exp)
-        tm.assert_series_equal(s1.append(s2, ignore_index=True), exp)
+        tm.assert_series_equal(s1._append(s2, ignore_index=True), exp)

         exp = Series([np.nan, np.nan, np.nan, 10, 11])
         tm.assert_series_equal(pd.concat([s2, s1], ignore_index=True), exp)
-        tm.assert_series_equal(s2.append(s1, ignore_index=True), exp)
+        tm.assert_series_equal(s2._append(s1, ignore_index=True), exp)

     def test_concat_categorical_3elem_coercion(self):
         # GH 13524
@@ -583,11 +583,11 @@

         exp = Series([1, 2, np.nan, 2, 1, 2, 1, 2, 1, 2, np.nan], dtype="float")
         tm.assert_series_equal(pd.concat([s1, s2, s3], ignore_index=True), exp)
-        tm.assert_series_equal(s1.append([s2, s3], ignore_index=True), exp)
+        tm.assert_series_equal(s1._append([s2, s3], ignore_index=True), exp)

         exp = Series([1, 2, 1, 2, np.nan, 1, 2, np.nan, 2, 1, 2], dtype="float")
         tm.assert_series_equal(pd.concat([s3, s1, s2], ignore_index=True), exp)
-        tm.assert_series_equal(s3.append([s1, s2], ignore_index=True), exp)
+        tm.assert_series_equal(s3._append([s1, s2], ignore_index=True), exp)

         # values are all in either category => not-category
         s1 = Series([4, 5, 6], dtype="category")
@@ -596,11 +596,11 @@

         exp = Series([4, 5, 6, 1, 2, 3, 1, 3, 4])
         tm.assert_series_equal(pd.concat([s1, s2, s3], ignore_index=True), exp)
-        tm.assert_series_equal(s1.append([s2, s3], ignore_index=True), exp)
+        tm.assert_series_equal(s1._append([s2, s3], ignore_index=True), exp)

         exp = Series([1, 3, 4, 4, 5, 6, 1, 2, 3])
         tm.assert_series_equal(pd.concat([s3, s1, s2], ignore_index=True), exp)
-        tm.assert_series_equal(s3.append([s1, s2], ignore_index=True), exp)
+        tm.assert_series_equal(s3._append([s1, s2], ignore_index=True), exp)

         # values are all in either category => not-category
         s1 = Series([4, 5, 6], dtype="category")
@@ -609,11 +609,11 @@

         exp = Series([4, 5, 6, 1, 2, 3, 10, 11, 12])
         tm.assert_series_equal(pd.concat([s1, s2, s3], ignore_index=True), exp)
-        tm.assert_series_equal(s1.append([s2, s3], ignore_index=True), exp)
+        tm.assert_series_equal(s1._append([s2, s3], ignore_index=True), exp)

         exp = Series([10, 11, 12, 4, 5, 6, 1, 2, 3])
         tm.assert_series_equal(pd.concat([s3, s1, s2], ignore_index=True), exp)
-        tm.assert_series_equal(s3.append([s1, s2], ignore_index=True), exp)
+        tm.assert_series_equal(s3._append([s1, s2], ignore_index=True), exp)

     def test_concat_categorical_multi_coercion(self):
         # GH 13524
@@ -629,13 +629,13 @@

         exp = Series([1, 3, 3, 4, 2, 3, 2, 2, 1, np.nan, 1, 3, 2])
         res = pd.concat([s1, s2, s3, s4, s5, s6], ignore_index=True)
         tm.assert_series_equal(res, exp)
-        res = s1.append([s2, s3, s4, s5, s6], ignore_index=True)
+        res = s1._append([s2, s3, s4, s5, s6], ignore_index=True)
         tm.assert_series_equal(res, exp)

         exp = Series([1, 3, 2, 1, np.nan, 2, 2, 2, 3, 3, 4, 1, 3])
         res = pd.concat([s6, s5, s4, s3, s2, s1], ignore_index=True)
         tm.assert_series_equal(res, exp)
-        res = s6.append([s5, s4, s3, s2, s1], ignore_index=True)
+        res = s6._append([s5, s4, s3, s2, s1], ignore_index=True)
         tm.assert_series_equal(res, exp)

     def test_concat_categorical_ordered(self):
@@ -646,11 +646,11 @@

         exp = Series(Categorical([1, 2, np.nan, 2, 1, 2], ordered=True))
         tm.assert_series_equal(pd.concat([s1, s2], ignore_index=True), exp)
-        tm.assert_series_equal(s1.append(s2, ignore_index=True), exp)
+        tm.assert_series_equal(s1._append(s2, ignore_index=True), exp)

         exp = Series(Categorical([1, 2, np.nan, 2, 1, 2, 1, 2, np.nan], ordered=True))
         tm.assert_series_equal(pd.concat([s1, s2, s1], ignore_index=True), exp)
-        tm.assert_series_equal(s1.append([s2, s1], ignore_index=True), exp)
+        tm.assert_series_equal(s1._append([s2, s1], ignore_index=True), exp)

     def test_concat_categorical_coercion_nan(self):
         # GH 13524
@@ -662,14 +662,14 @@

         exp = Series([np.nan, np.nan, np.nan, 1])
         tm.assert_series_equal(pd.concat([s1, s2], ignore_index=True), exp)
-        tm.assert_series_equal(s1.append(s2, ignore_index=True), exp)
+        tm.assert_series_equal(s1._append(s2, ignore_index=True), exp)

         s1 = Series([1, np.nan], dtype="category")
         s2 = Series([np.nan, np.nan])

         exp = Series([1, np.nan, np.nan, np.nan], dtype="float")
         tm.assert_series_equal(pd.concat([s1, s2], ignore_index=True), exp)
-        tm.assert_series_equal(s1.append(s2, ignore_index=True), exp)
+        tm.assert_series_equal(s1._append(s2, ignore_index=True), exp)

         # mixed dtype, all nan-likes => not-category
         s1 = Series([np.nan, np.nan], dtype="category")
@@ -677,9 +677,9 @@

         exp = Series([np.nan, np.nan, np.nan, np.nan])
         tm.assert_series_equal(pd.concat([s1, s2], ignore_index=True), exp)
-        tm.assert_series_equal(s1.append(s2, ignore_index=True), exp)
+        tm.assert_series_equal(s1._append(s2, ignore_index=True), exp)
         tm.assert_series_equal(pd.concat([s2, s1], ignore_index=True), exp)
-        tm.assert_series_equal(s2.append(s1, ignore_index=True), exp)
+        tm.assert_series_equal(s2._append(s1, ignore_index=True), exp)

         # all category nan-likes => category
         s1 = Series([np.nan, np.nan], dtype="category")
@@ -688,7 +688,7 @@

         exp = Series([np.nan, np.nan, np.nan, np.nan], dtype="category")
         tm.assert_series_equal(pd.concat([s1, s2], ignore_index=True), exp)
-        tm.assert_series_equal(s1.append(s2, ignore_index=True), exp)
+        tm.assert_series_equal(s1._append(s2, ignore_index=True), exp)

     def test_concat_categorical_empty(self):
         # GH 13524
@@ -697,25 +697,25 @@
         s2 = Series([1, 2], dtype="category")

         tm.assert_series_equal(pd.concat([s1, s2], ignore_index=True), s2)
-        tm.assert_series_equal(s1.append(s2, ignore_index=True), s2)
+        tm.assert_series_equal(s1._append(s2, ignore_index=True), s2)

         tm.assert_series_equal(pd.concat([s2, s1], ignore_index=True), s2)
-        tm.assert_series_equal(s2.append(s1, ignore_index=True), s2)
+        tm.assert_series_equal(s2._append(s1, ignore_index=True), s2)

         s1 = Series([], dtype="category")
         s2 = Series([], dtype="category")

         tm.assert_series_equal(pd.concat([s1, s2], ignore_index=True), s2)
-        tm.assert_series_equal(s1.append(s2, ignore_index=True), s2)
+        tm.assert_series_equal(s1._append(s2, ignore_index=True), s2)

         s1 = Series([], dtype="category")
         s2 = Series([], dtype="object")

         # different dtype => not-category
         tm.assert_series_equal(pd.concat([s1, s2], ignore_index=True), s2)
-        tm.assert_series_equal(s1.append(s2, ignore_index=True), s2)
+        tm.assert_series_equal(s1._append(s2, ignore_index=True), s2)
         tm.assert_series_equal(pd.concat([s2, s1], ignore_index=True), s2)
-        tm.assert_series_equal(s2.append(s1, ignore_index=True), s2)
+        tm.assert_series_equal(s2._append(s1, ignore_index=True), s2)

         s1 = Series([], dtype="category")
         s2 = Series([np.nan, np.nan])
@@ -723,10 +723,10 @@
         # empty Series is ignored
         exp = Series([np.nan, np.nan])
         tm.assert_series_equal(pd.concat([s1, s2], ignore_index=True), exp)
-        tm.assert_series_equal(s1.append(s2, ignore_index=True), exp)
+        tm.assert_series_equal(s1._append(s2, ignore_index=True), exp)

         tm.assert_series_equal(pd.concat([s2, s1], ignore_index=True), exp)
-        tm.assert_series_equal(s2.append(s1, ignore_index=True), exp)
+        tm.assert_series_equal(s2._append(s1, ignore_index=True), exp)

     def test_categorical_concat_append(self):
         cat = Categorical(["a", "b"], categories=["a", "b"])
@@ -737,7 +737,7 @@ def test_categorical_concat_append(self):
         exp = DataFrame({"cats": cat2, "vals": vals2}, index=Index([0, 1, 0, 1]))

         tm.assert_frame_equal(pd.concat([df, df]), exp)
-        tm.assert_frame_equal(df.append(df), exp)
+        tm.assert_frame_equal(df._append(df), exp)

         # GH 13524 can concat different categories
         cat3 = Categorical(["a", "b"], categories=["a", "b", "c"])
@@ -748,5 +748,5 @@ def test_categorical_concat_append(self):
         exp = DataFrame({"cats": list("abab"), "vals": [1, 2, 1, 2]})
         tm.assert_frame_equal(res, exp)

-        res = df.append(df_different_categories, ignore_index=True)
+        res = df._append(df_different_categories, ignore_index=True)
         tm.assert_frame_equal(res, exp)
diff --git a/pandas/tests/reshape/concat/test_categorical.py b/pandas/tests/reshape/concat/test_categorical.py
index aba14fd2fcd77..93197a1814077 100644
--- a/pandas/tests/reshape/concat/test_categorical.py
+++ b/pandas/tests/reshape/concat/test_categorical.py
@@ -200,7 +200,7 @@ def test_categorical_concat_gh7864(self):
         dfx = pd.concat([df1, df2])
         tm.assert_index_equal(df["grade"].cat.categories, dfx["grade"].cat.categories)

-        dfa = df1.append(df2)
+        dfa = df1._append(df2)
         tm.assert_index_equal(df["grade"].cat.categories, dfa["grade"].cat.categories)

     def test_categorical_index_upcast(self):
diff --git a/pandas/tests/reshape/concat/test_concat.py b/pandas/tests/reshape/concat/test_concat.py
index c4b32371042b3..a7b3c77e6ea0a 100644
--- a/pandas/tests/reshape/concat/test_concat.py
+++ b/pandas/tests/reshape/concat/test_concat.py
@@ -241,7 +241,7 @@ def test_crossed_dtypes_weird_corner(self):
             columns=columns,
         )

-        appended = df1.append(df2, ignore_index=True)
+        appended = concat([df1, df2], ignore_index=True)
         expected = DataFrame(
             np.concatenate([df1.values, df2.values], axis=0), columns=columns
         )
diff --git a/pandas/tests/reshape/concat/test_index.py b/pandas/tests/reshape/concat/test_index.py
index 35cf670398664..1692446627914 100644
--- a/pandas/tests/reshape/concat/test_index.py
+++ b/pandas/tests/reshape/concat/test_index.py
@@ -178,14 +178,14 @@ def test_dups_index(self):
         tm.assert_frame_equal(result.iloc[10:], df)

         # append
-        result = df.iloc[0:8, :].append(df.iloc[8:])
+        result = df.iloc[0:8, :]._append(df.iloc[8:])
         tm.assert_frame_equal(result, df)

-        result = df.iloc[0:8, :].append(df.iloc[8:9]).append(df.iloc[9:10])
+        result = df.iloc[0:8, :]._append(df.iloc[8:9])._append(df.iloc[9:10])
         tm.assert_frame_equal(result, df)

         expected = concat([df, df], axis=0)
-        result = df.append(df)
+        result = df._append(df)
         tm.assert_frame_equal(result, expected)
diff --git a/pandas/tests/reshape/merge/test_merge.py b/pandas/tests/reshape/merge/test_merge.py
index 2f9f31ebb0485..4365f61860209 100644
--- a/pandas/tests/reshape/merge/test_merge.py
+++ b/pandas/tests/reshape/merge/test_merge.py
@@ -686,10 +686,12 @@ def test_join_append_timedeltas(self):
         # timedelta64 issues with join/merge
         # GH 5695

-        d = {"d": datetime(2013, 11, 5, 5, 56), "t": timedelta(0, 22500)}
+        d = DataFrame.from_dict(
+            {"d": [datetime(2013, 11, 5, 5, 56)], "t": [timedelta(0, 22500)]}
+        )
         df = DataFrame(columns=list("dt"))
-        df = df.append(d, ignore_index=True)
-        result = df.append(d, ignore_index=True)
+        df = concat([df, d], ignore_index=True)
+        result = concat([df, d], ignore_index=True)
         expected = DataFrame(
             {
                 "d": [datetime(2013, 11, 5, 5, 56), datetime(2013, 11, 5, 5, 56)],
@@ -1176,7 +1178,7 @@ def test_validation(self):
         tm.assert_frame_equal(result, expected_3)

         # Dups on right
-        right_w_dups = right.append(DataFrame({"a": ["e"], "c": ["moo"]}, index=[4]))
+        right_w_dups = concat([right, DataFrame({"a": ["e"], "c": ["moo"]}, index=[4])])
         merge(
             left,
             right_w_dups,
@@ -1199,8 +1201,8 @@
             merge(left, right_w_dups, on="a", validate="one_to_one")

         # Dups on left
-        left_w_dups = left.append(
-            DataFrame({"a": ["a"], "c": ["cow"]}, index=[3]), sort=True
+        left_w_dups = concat(
+            [left, DataFrame({"a": ["a"], "c": ["cow"]}, index=[3])], sort=True
         )
         merge(
             left_w_dups,
diff --git a/pandas/tests/reshape/test_crosstab.py b/pandas/tests/reshape/test_crosstab.py
index 74beda01e4b8a..cc6eec671ac3a 100644
--- a/pandas/tests/reshape/test_crosstab.py
+++ b/pandas/tests/reshape/test_crosstab.py
@@ -3,6 +3,7 @@

 from pandas.core.dtypes.common import is_categorical_dtype

+import pandas as pd
 from pandas import (
     CategoricalIndex,
     DataFrame,
@@ -63,7 +64,7 @@ def setup_method(self, method):
             }
         )

-        self.df = df.append(df, ignore_index=True)
+        self.df = pd.concat([df, df], ignore_index=True)

     def test_crosstab_single(self):
         df = self.df
@@ -142,14 +143,14 @@ def test_crosstab_margins(self):

         exp_cols = df.groupby(["a"]).size().astype("i8")
         # to keep index.name
         exp_margin = Series([len(df)], index=Index(["All"], name="a"))
-        exp_cols = exp_cols.append(exp_margin)
+        exp_cols = pd.concat([exp_cols, exp_margin])
         exp_cols.name = ("All", "")

         tm.assert_series_equal(all_cols, exp_cols)

         all_rows = result.loc["All"]
         exp_rows = df.groupby(["b", "c"]).size().astype("i8")
-        exp_rows = exp_rows.append(Series([len(df)], index=[("All", "")]))
+        exp_rows = pd.concat([exp_rows, Series([len(df)], index=[("All", "")])])
         exp_rows.name = "All"

         exp_rows = exp_rows.reindex(all_rows.index)
@@ -180,14 +181,14 @@ def test_crosstab_margins_set_margin_name(self):

         exp_cols = df.groupby(["a"]).size().astype("i8")
         # to keep index.name
         exp_margin = Series([len(df)], index=Index(["TOTAL"], name="a"))
-        exp_cols = exp_cols.append(exp_margin)
+        exp_cols = pd.concat([exp_cols, exp_margin])
         exp_cols.name = ("TOTAL", "")

         tm.assert_series_equal(all_cols, exp_cols)

         all_rows = result.loc["TOTAL"]
         exp_rows = df.groupby(["b", "c"]).size().astype("i8")
-        exp_rows = exp_rows.append(Series([len(df)], index=[("TOTAL", "")]))
+        exp_rows = pd.concat([exp_rows, Series([len(df)], index=[("TOTAL", "")])])
         exp_rows.name = "TOTAL"

         exp_rows = exp_rows.reindex(all_rows.index)
diff --git a/pandas/tests/series/accessors/test_dt_accessor.py b/pandas/tests/series/accessors/test_dt_accessor.py
index 15e13e5567310..41b5e55e75213 100644 --- a/pandas/tests/series/accessors/test_dt_accessor.py +++ b/pandas/tests/series/accessors/test_dt_accessor.py @@ -477,7 +477,7 @@ def test_dt_accessor_datetime_name_accessors(self, time_locale): name = name.capitalize() assert ser.dt.day_name(locale=time_locale)[day] == name assert ser.dt.day_name(locale=None)[day] == eng_name - ser = ser.append(Series([pd.NaT])) + ser = pd.concat([ser, Series([pd.NaT])]) assert np.isnan(ser.dt.day_name(locale=time_locale).iloc[-1]) ser = Series(date_range(freq="M", start="2012", end="2013")) @@ -499,7 +499,7 @@ def test_dt_accessor_datetime_name_accessors(self, time_locale): assert result == expected - ser = ser.append(Series([pd.NaT])) + ser = pd.concat([ser, Series([pd.NaT])]) assert np.isnan(ser.dt.month_name(locale=time_locale).iloc[-1]) def test_strftime(self): diff --git a/pandas/tests/series/indexing/test_indexing.py b/pandas/tests/series/indexing/test_indexing.py index 31c21e123a0de..f6ae886b9e299 100644 --- a/pandas/tests/series/indexing/test_indexing.py +++ b/pandas/tests/series/indexing/test_indexing.py @@ -11,6 +11,7 @@ Series, Timedelta, Timestamp, + concat, date_range, period_range, timedelta_range, @@ -79,6 +80,7 @@ def test_getitem_setitem_ellipsis(): assert (result == 5).all() +@pytest.mark.filterwarnings("ignore:.*append method is deprecated.*:FutureWarning") @pytest.mark.parametrize( "result_1, duplicate_item, expected_1", [ @@ -158,7 +160,7 @@ def test_setitem_ambiguous_keyerror(indexer_sl): # equivalent of an append s2 = s.copy() indexer_sl(s2)[1] = 5 - expected = s.append(Series([5], index=[1])) + expected = concat([s, Series([5], index=[1])]) tm.assert_series_equal(s2, expected) diff --git a/pandas/tests/series/indexing/test_setitem.py b/pandas/tests/series/indexing/test_setitem.py index 3e8e1b3f436ec..0dc2a9933cfc4 100644 --- a/pandas/tests/series/indexing/test_setitem.py +++ b/pandas/tests/series/indexing/test_setitem.py @@ -20,6 +20,7 @@ Series, Timedelta, 
Timestamp, + concat, date_range, period_range, ) @@ -477,7 +478,7 @@ def test_setitem_not_contained(self, string_series): ser["foobar"] = 1 app = Series([1], index=["foobar"], name="series") - expected = string_series.append(app) + expected = concat([string_series, app]) tm.assert_series_equal(ser, expected) diff --git a/pandas/tests/series/methods/test_append.py b/pandas/tests/series/methods/test_append.py index 2081e244b4e6c..6f8852ade6408 100644 --- a/pandas/tests/series/methods/test_append.py +++ b/pandas/tests/series/methods/test_append.py @@ -15,11 +15,11 @@ class TestSeriesAppend: def test_append_preserve_name(self, datetime_series): - result = datetime_series[:5].append(datetime_series[5:]) + result = datetime_series[:5]._append(datetime_series[5:]) assert result.name == datetime_series.name def test_append(self, datetime_series, string_series, object_series): - appended_series = string_series.append(object_series) + appended_series = string_series._append(object_series) for idx, value in appended_series.items(): if idx in string_series.index: assert value == string_series[idx] @@ -30,12 +30,12 @@ def test_append(self, datetime_series, string_series, object_series): msg = "Indexes have overlapping values:" with pytest.raises(ValueError, match=msg): - datetime_series.append(datetime_series, verify_integrity=True) + datetime_series._append(datetime_series, verify_integrity=True) def test_append_many(self, datetime_series): pieces = [datetime_series[:5], datetime_series[5:10], datetime_series[10:]] - result = pieces[0].append(pieces[1:]) + result = pieces[0]._append(pieces[1:]) tm.assert_series_equal(result, datetime_series) def test_append_duplicates(self): @@ -43,13 +43,13 @@ def test_append_duplicates(self): s1 = Series([1, 2, 3]) s2 = Series([4, 5, 6]) exp = Series([1, 2, 3, 4, 5, 6], index=[0, 1, 2, 0, 1, 2]) - tm.assert_series_equal(s1.append(s2), exp) + tm.assert_series_equal(s1._append(s2), exp) tm.assert_series_equal(pd.concat([s1, s2]), exp) # the 
result must have RangeIndex exp = Series([1, 2, 3, 4, 5, 6]) tm.assert_series_equal( - s1.append(s2, ignore_index=True), exp, check_index_type=True + s1._append(s2, ignore_index=True), exp, check_index_type=True ) tm.assert_series_equal( pd.concat([s1, s2], ignore_index=True), exp, check_index_type=True @@ -57,7 +57,7 @@ def test_append_duplicates(self): msg = "Indexes have overlapping values:" with pytest.raises(ValueError, match=msg): - s1.append(s2, verify_integrity=True) + s1._append(s2, verify_integrity=True) with pytest.raises(ValueError, match=msg): pd.concat([s1, s2], verify_integrity=True) @@ -67,8 +67,8 @@ def test_append_tuples(self): list_input = [s, s] tuple_input = (s, s) - expected = s.append(list_input) - result = s.append(tuple_input) + expected = s._append(list_input) + result = s._append(tuple_input) tm.assert_series_equal(expected, result) @@ -78,9 +78,14 @@ def test_append_dataframe_raises(self): msg = "to_append should be a Series or list/tuple of Series, got DataFrame" with pytest.raises(TypeError, match=msg): - df.A.append(df) + df.A._append(df) with pytest.raises(TypeError, match=msg): - df.A.append([df]) + df.A._append([df]) + + def test_append_raises_future_warning(self): + # GH#35407 + with tm.assert_produces_warning(FutureWarning): + Series([1, 2]).append(Series([3, 4])) class TestSeriesAppendWithDatetimeIndex: @@ -89,8 +94,8 @@ def test_append(self): ts = Series(np.random.randn(len(rng)), rng) df = DataFrame(np.random.randn(len(rng), 4), index=rng) - result = ts.append(ts) - result_df = df.append(df) + result = ts._append(ts) + result_df = df._append(df) ex_index = DatetimeIndex(np.tile(rng.values, 2)) tm.assert_index_equal(result.index, ex_index) tm.assert_index_equal(result_df.index, ex_index) @@ -107,6 +112,7 @@ def test_append(self): rng2 = rng.copy() rng1.name = "foo" rng2.name = "bar" + assert rng1.append(rng1).name == "foo" assert rng1.append(rng2).name is None @@ -120,8 +126,8 @@ def test_append_tz(self): ts2 = 
Series(np.random.randn(len(rng2)), rng2) df2 = DataFrame(np.random.randn(len(rng2), 4), index=rng2) - result = ts.append(ts2) - result_df = df.append(df2) + result = ts._append(ts2) + result_df = df._append(df2) tm.assert_index_equal(result.index, rng3) tm.assert_index_equal(result_df.index, rng3) @@ -146,8 +152,8 @@ def test_append_tz_explicit_pytz(self): ts2 = Series(np.random.randn(len(rng2)), rng2) df2 = DataFrame(np.random.randn(len(rng2), 4), index=rng2) - result = ts.append(ts2) - result_df = df.append(df2) + result = ts._append(ts2) + result_df = df._append(df2) tm.assert_index_equal(result.index, rng3) tm.assert_index_equal(result_df.index, rng3) @@ -170,8 +176,8 @@ def test_append_tz_dateutil(self): ts2 = Series(np.random.randn(len(rng2)), rng2) df2 = DataFrame(np.random.randn(len(rng2), 4), index=rng2) - result = ts.append(ts2) - result_df = df.append(df2) + result = ts._append(ts2) + result_df = df._append(df2) tm.assert_index_equal(result.index, rng3) tm.assert_index_equal(result_df.index, rng3) @@ -183,7 +189,7 @@ def test_series_append_aware(self): rng2 = date_range("1/1/2011 02:00", periods=1, freq="H", tz="US/Eastern") ser1 = Series([1], index=rng1) ser2 = Series([2], index=rng2) - ts_result = ser1.append(ser2) + ts_result = ser1._append(ser2) exp_index = DatetimeIndex( ["2011-01-01 01:00", "2011-01-01 02:00"], tz="US/Eastern", freq="H" @@ -196,7 +202,7 @@ def test_series_append_aware(self): rng2 = date_range("1/1/2011 02:00", periods=1, freq="H", tz="UTC") ser1 = Series([1], index=rng1) ser2 = Series([2], index=rng2) - ts_result = ser1.append(ser2) + ts_result = ser1._append(ser2) exp_index = DatetimeIndex( ["2011-01-01 01:00", "2011-01-01 02:00"], tz="UTC", freq="H" @@ -212,7 +218,7 @@ def test_series_append_aware(self): rng2 = date_range("1/1/2011 02:00", periods=1, freq="H", tz="US/Central") ser1 = Series([1], index=rng1) ser2 = Series([2], index=rng2) - ts_result = ser1.append(ser2) + ts_result = ser1._append(ser2) exp_index = Index( [ 
Timestamp("1/1/2011 01:00", tz="US/Eastern"), @@ -227,7 +233,7 @@ def test_series_append_aware_naive(self): rng2 = date_range("1/1/2011 02:00", periods=1, freq="H", tz="US/Eastern") ser1 = Series(np.random.randn(len(rng1)), index=rng1) ser2 = Series(np.random.randn(len(rng2)), index=rng2) - ts_result = ser1.append(ser2) + ts_result = ser1._append(ser2) expected = ser1.index.astype(object).append(ser2.index.astype(object)) assert ts_result.index.equals(expected) @@ -237,7 +243,7 @@ def test_series_append_aware_naive(self): rng2 = range(100) ser1 = Series(np.random.randn(len(rng1)), index=rng1) ser2 = Series(np.random.randn(len(rng2)), index=rng2) - ts_result = ser1.append(ser2) + ts_result = ser1._append(ser2) expected = ser1.index.astype(object).append(ser2.index) assert ts_result.index.equals(expected) @@ -247,7 +253,7 @@ def test_series_append_dst(self): rng2 = date_range("8/1/2016 01:00", periods=3, freq="H", tz="US/Eastern") ser1 = Series([1, 2, 3], index=rng1) ser2 = Series([10, 11, 12], index=rng2) - ts_result = ser1.append(ser2) + ts_result = ser1._append(ser2) exp_index = DatetimeIndex( [
- [x] closes #35407 (adds deprecation warning) - [x] closes #22957 - [x] tests added / passed - [x] Ensure all linting tests pass, see [here](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit) for how to run them - [x] whatsnew entry
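The diff above mechanically swaps the now-deprecated public `Series.append`/`DataFrame.append` for the private `_append` throughout the test suite; user code migrating off the deprecated method should use `pd.concat` instead. A minimal sketch of the equivalent call (illustrative only, not part of the PR):

```python
import pandas as pd

s1 = pd.Series([1, 2, 3])
s2 = pd.Series([4, 5, 6])

# Deprecated in 1.4: s1.append(s2, ignore_index=True)
result = pd.concat([s1, s2], ignore_index=True)

# concat also supports the verify_integrity check that append exposed;
# with overlapping indexes, verify_integrity=True raises ValueError.
```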
https://api.github.com/repos/pandas-dev/pandas/pulls/44539
2021-11-20T17:40:36Z
2021-12-27T20:09:03Z
2021-12-27T20:09:02Z
2021-12-27T20:09:09Z
Backport PR #44530 on branch 1.3.x (Fix regression ignoring arrays in dtype check for merge_asof)
diff --git a/doc/source/whatsnew/v1.3.5.rst b/doc/source/whatsnew/v1.3.5.rst index 951b05b65c81b..34e67e51e47e3 100644 --- a/doc/source/whatsnew/v1.3.5.rst +++ b/doc/source/whatsnew/v1.3.5.rst @@ -15,6 +15,7 @@ including other versions of pandas. Fixed regressions ~~~~~~~~~~~~~~~~~ - Fixed regression in :meth:`Series.equals` when comparing floats with dtype object to None (:issue:`44190`) +- Fixed regression in :func:`merge_asof` raising error when array was supplied as join key (:issue:`42844`) - Fixed performance regression in :func:`read_csv` (:issue:`44106`) - Fixed regression in :meth:`Series.duplicated` and :meth:`Series.drop_duplicates` when Series has :class:`Categorical` dtype with boolean categories (:issue:`44351`) - diff --git a/pandas/core/reshape/merge.py b/pandas/core/reshape/merge.py index 55d8dfa94f89e..c767cf59a44a1 100644 --- a/pandas/core/reshape/merge.py +++ b/pandas/core/reshape/merge.py @@ -1781,21 +1781,27 @@ def _validate_specification(self) -> None: # GH#29130 Check that merge keys do not have dtype object if not self.left_index: left_on = self.left_on[0] - lo_dtype = ( - self.left[left_on].dtype - if left_on in self.left.columns - else self.left.index.get_level_values(left_on) - ) + if is_array_like(left_on): + lo_dtype = left_on.dtype + else: + lo_dtype = ( + self.left[left_on].dtype + if left_on in self.left.columns + else self.left.index.get_level_values(left_on) + ) else: lo_dtype = self.left.index.dtype if not self.right_index: right_on = self.right_on[0] - ro_dtype = ( - self.right[right_on].dtype - if right_on in self.right.columns - else self.right.index.get_level_values(right_on) - ) + if is_array_like(right_on): + ro_dtype = right_on.dtype + else: + ro_dtype = ( + self.right[right_on].dtype + if right_on in self.right.columns + else self.right.index.get_level_values(right_on) + ) else: ro_dtype = self.right.index.dtype diff --git a/pandas/tests/reshape/merge/test_merge_asof.py b/pandas/tests/reshape/merge/test_merge_asof.py 
index 310cf2debadc6..6582a234fd263 100644 --- a/pandas/tests/reshape/merge/test_merge_asof.py +++ b/pandas/tests/reshape/merge/test_merge_asof.py @@ -1484,3 +1484,44 @@ def test_merge_asof_numeri_column_in_index_object_dtype(): match=r"Incompatible merge dtype, .*, both sides must have numeric dtype", ): merge_asof(left, right, left_on="a", right_on="a") + + +def test_merge_asof_array_as_on(): + # GH#42844 + right = pd.DataFrame( + { + "a": [2, 6], + "ts": [pd.Timestamp("2021/01/01 00:37"), pd.Timestamp("2021/01/01 01:40")], + } + ) + ts_merge = pd.date_range( + start=pd.Timestamp("2021/01/01 00:00"), periods=3, freq="1h" + ) + left = pd.DataFrame({"b": [4, 8, 7]}) + result = merge_asof( + left, + right, + left_on=ts_merge, + right_on="ts", + allow_exact_matches=False, + direction="backward", + ) + expected = pd.DataFrame({"b": [4, 8, 7], "a": [np.nan, 2, 6], "ts": ts_merge}) + tm.assert_frame_equal(result, expected) + + result = merge_asof( + right, + left, + left_on="ts", + right_on=ts_merge, + allow_exact_matches=False, + direction="backward", + ) + expected = pd.DataFrame( + { + "a": [2, 6], + "ts": [pd.Timestamp("2021/01/01 00:37"), pd.Timestamp("2021/01/01 01:40")], + "b": [4, 8], + } + ) + tm.assert_frame_equal(result, expected)
Backport PR #44530: Fix regression ignoring arrays in dtype check for merge_asof
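Per the regression test added in the diff (`test_merge_asof_array_as_on`), passing an array directly as the join key works again after the fix. A minimal reproduction sketched from that test:

```python
import pandas as pd

right = pd.DataFrame(
    {
        "a": [2, 6],
        "ts": [pd.Timestamp("2021-01-01 00:37"), pd.Timestamp("2021-01-01 01:40")],
    }
)
left = pd.DataFrame({"b": [4, 8, 7]})
ts_merge = pd.date_range("2021-01-01 00:00", periods=3, freq="1h")

# Before the fix, the dtype check ignored array-like keys and raised.
result = pd.merge_asof(
    left, right, left_on=ts_merge, right_on="ts",
    allow_exact_matches=False, direction="backward",
)
# Each left timestamp matches the closest strictly-earlier right row.
```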
https://api.github.com/repos/pandas-dev/pandas/pulls/44538
2021-11-20T16:20:09Z
2021-11-20T20:25:38Z
2021-11-20T20:25:38Z
2021-11-20T20:25:38Z
DOC: update styler user guide for out of date material
diff --git a/doc/source/user_guide/style.ipynb b/doc/source/user_guide/style.ipynb index 62e44d73d9f1c..f94f86b4eea58 100644 --- a/doc/source/user_guide/style.ipynb +++ b/doc/source/user_guide/style.ipynb @@ -61,7 +61,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "The above output looks very similar to the standard DataFrame HTML representation. But the HTML here has already attached some CSS classes to each cell, even if we haven't yet created any styles. We can view these by calling the [.to_html()][to_html] method, which returns the raw HTML as string, which is useful for further processing or adding to a file - read on in [More about CSS and HTML](#More-About-CSS-and-HTML). Below we will show how we can use these to format the DataFrame to be more communicative. For example how we can build `s`:\n", + "The above output looks very similar to the standard DataFrame HTML representation. But the HTML here has already attached some CSS classes to each cell, even if we haven't yet created any styles. We can view these by calling the [.to_html()][tohtml] method, which returns the raw HTML as string, which is useful for further processing or adding to a file - read on in [More about CSS and HTML](#More-About-CSS-and-HTML). Below we will show how we can use these to format the DataFrame to be more communicative. For example how we can build `s`:\n", "\n", "[tohtml]: ../reference/api/pandas.io.formats.style.Styler.to_html.rst" ] @@ -153,7 +153,7 @@ "\n", "Before adding styles it is useful to show that the [Styler][styler] can distinguish the *display* value from the *actual* value, in both datavlaues and index or columns headers. To control the display value, the text is printed in each cell as string, and we can use the [.format()][formatfunc] and [.format_index()][formatfuncindex] methods to manipulate this according to a [format spec string][format] or a callable that takes a single value and returns a string. 
It is possible to define this for the whole table, or index, or for individual columns, or MultiIndex levels. \n", "\n", - "Additionally, the format function has a **precision** argument to specifically help formatting floats, as well as **decimal** and **thousands** separators to support other locales, an **na_rep** argument to display missing data, and an **escape** argument to help displaying safe-HTML or safe-LaTeX. The default formatter is configured to adopt pandas' regular `display.precision` option, controllable using `with pd.option_context('display.precision', 2):` \n", + "Additionally, the format function has a **precision** argument to specifically help formatting floats, as well as **decimal** and **thousands** separators to support other locales, an **na_rep** argument to display missing data, and an **escape** argument to help displaying safe-HTML or safe-LaTeX. The default formatter is configured to adopt pandas' `styler.format.precision` option, controllable using `with pd.option_context('format.precision', 2):` \n", "\n", "[styler]: ../reference/api/pandas.io.formats.style.Styler.rst\n", "[format]: https://docs.python.org/3/library/string.html#format-specification-mini-language\n", @@ -224,16 +224,15 @@ "\n", "The index and column headers can be completely hidden, as well subselecting rows or columns that one wishes to exclude. Both these options are performed using the same methods.\n", "\n", - "The index can be hidden from rendering by calling [.hide_index()][hideidx] without any arguments, which might be useful if your index is integer based. Similarly column headers can be hidden by calling [.hide_columns()][hidecols] without any arguments.\n", + "The index can be hidden from rendering by calling [.hide()][hideidx] without any arguments, which might be useful if your index is integer based. 
Similarly column headers can be hidden by calling [.hide(axis=\"columns\")][hideidx] without any further arguments.\n", "\n", - "Specific rows or columns can be hidden from rendering by calling the same [.hide_index()][hideidx] or [.hide_columns()][hidecols] methods and passing in a row/column label, a list-like or a slice of row/column labels to for the ``subset`` argument.\n", + "Specific rows or columns can be hidden from rendering by calling the same [.hide()][hideidx] method and passing in a row/column label, a list-like or a slice of row/column labels to for the ``subset`` argument.\n", "\n", - "Hiding does not change the integer arrangement of CSS classes, e.g. hiding the first two columns of a DataFrame means the column class indexing will start at `col2`, since `col0` and `col1` are simply ignored.\n", + "Hiding does not change the integer arrangement of CSS classes, e.g. hiding the first two columns of a DataFrame means the column class indexing will still start at `col2`, since `col0` and `col1` are simply ignored.\n", "\n", "We can update our `Styler` object from before to hide some data and format the values.\n", "\n", - "[hideidx]: ../reference/api/pandas.io.formats.style.Styler.hide_index.rst\n", - "[hidecols]: ../reference/api/pandas.io.formats.style.Styler.hide_columns.rst" + "[hideidx]: ../reference/api/pandas.io.formats.style.Styler.hide.rst" ] }, { @@ -1413,12 +1412,9 @@ "## Limitations\n", "\n", "- DataFrame only (use `Series.to_frame().style`)\n", - "- The index and columns must be unique\n", + "- The index and columns do not need to be unique, but certain styling functions can only work with unique indexes.\n", "- No large repr, and construction performance isn't great; although we have some [HTML optimizations](#Optimization)\n", - "- You can only style the *values*, not the index or columns (except with `table_styles` above)\n", - "- You can only apply styles, you can't insert new HTML entities\n", - "\n", - "Some of these might be 
addressed in the future. " + "- You can only apply styles, you can't insert new HTML entities, except via subclassing." ] }, {
- [x] closes #43276; also updates some other lines that are out of date.
https://api.github.com/repos/pandas-dev/pandas/pulls/44537
2021-11-20T12:01:34Z
2021-11-20T15:56:51Z
2021-11-20T15:56:51Z
2021-11-21T21:02:03Z
DOC: SS03 numpydoc validation
diff --git a/ci/code_checks.sh b/ci/code_checks.sh index ea9595fd88630..ef026c8e69dbb 100755 --- a/ci/code_checks.sh +++ b/ci/code_checks.sh @@ -93,8 +93,8 @@ fi ### DOCSTRINGS ### if [[ -z "$CHECK" || "$CHECK" == "docstrings" ]]; then - MSG='Validate docstrings (GL02, GL03, GL04, GL05, GL06, GL07, GL09, GL10, SS01, SS02, SS04, SS05, PR03, PR04, PR05, PR10, EX04, RT01, RT04, RT05, SA02, SA03)' ; echo $MSG - $BASE_DIR/scripts/validate_docstrings.py --format=actions --errors=GL02,GL03,GL04,GL05,GL06,GL07,GL09,GL10,SS02,SS04,SS05,PR03,PR04,PR05,PR10,EX04,RT01,RT04,RT05,SA02,SA03 + MSG='Validate docstrings (GL02, GL03, GL04, GL05, GL06, GL07, GL09, GL10, SS01, SS02, SS03, SS04, SS05, PR03, PR04, PR05, PR10, EX04, RT01, RT04, RT05, SA02, SA03)' ; echo $MSG + $BASE_DIR/scripts/validate_docstrings.py --format=actions --errors=GL02,GL03,GL04,GL05,GL06,GL07,GL09,GL10,SS02,SS03,SS04,SS05,PR03,PR04,PR05,PR10,EX04,RT01,RT04,RT05,SA02,SA03 RET=$(($RET + $?)) ; echo $MSG "DONE" fi diff --git a/pandas/_config/config.py b/pandas/_config/config.py index e2e6bbe8db7cc..5c3db40828fe3 100644 --- a/pandas/_config/config.py +++ b/pandas/_config/config.py @@ -98,7 +98,7 @@ class RegisteredOption(NamedTuple): class OptionError(AttributeError, KeyError): """ Exception for pandas.options, backwards compatible with KeyError - checks + checks. """ diff --git a/pandas/_libs/tslibs/offsets.pyx b/pandas/_libs/tslibs/offsets.pyx index 03cd4a97c240e..98055c01d6ab0 100644 --- a/pandas/_libs/tslibs/offsets.pyx +++ b/pandas/_libs/tslibs/offsets.pyx @@ -2247,7 +2247,7 @@ cdef class MonthBegin(MonthOffset): cdef class BusinessMonthEnd(MonthOffset): """ - DateOffset increments between the last business day of the month + DateOffset increments between the last business day of the month. 
Examples -------- diff --git a/pandas/core/groupby/groupby.py b/pandas/core/groupby/groupby.py index 97f5ef20c5c5e..faa32b31a73d7 100644 --- a/pandas/core/groupby/groupby.py +++ b/pandas/core/groupby/groupby.py @@ -325,7 +325,7 @@ class providing the base-class of operations. _transform_template = """ Call function producing a like-indexed %(klass)s on each group and return a %(klass)s having the same indexes as the original object -filled with the transformed values +filled with the transformed values. Parameters ---------- diff --git a/pandas/core/indexers/objects.py b/pandas/core/indexers/objects.py index c4156f214ca68..4d5e4bbe6bd36 100644 --- a/pandas/core/indexers/objects.py +++ b/pandas/core/indexers/objects.py @@ -124,7 +124,7 @@ def get_window_bounds( class VariableOffsetWindowIndexer(BaseIndexer): - """Calculate window boundaries based on a non-fixed offset such as a BusinessDay""" + """Calculate window boundaries based on a non-fixed offset such as a BusinessDay.""" def __init__( self, diff --git a/pandas/core/indexes/multi.py b/pandas/core/indexes/multi.py index 128aa8e282a0d..53d584f801b0f 100644 --- a/pandas/core/indexes/multi.py +++ b/pandas/core/indexes/multi.py @@ -732,7 +732,7 @@ def array(self): @cache_readonly def dtypes(self) -> Series: """ - Return the dtypes as a Series for the underlying MultiIndex + Return the dtypes as a Series for the underlying MultiIndex. """ from pandas import Series
- [x] closes #25306 - [x] Ensure all linting tests pass, see [here](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit) for how to run them
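SS03 is the numpydoc check that a docstring's short summary ends with a period, which is exactly what the diff above fixes in several docstrings. A minimal illustration:

```python
def compliant():
    """Return the answer."""  # SS03-compliant: summary ends with a period
    return 42


def noncompliant():
    """Return the answer"""  # would be flagged by the SS03 validator
    return 42
```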
https://api.github.com/repos/pandas-dev/pandas/pulls/44536
2021-11-20T06:26:37Z
2021-11-21T20:45:10Z
2021-11-21T20:45:09Z
2021-11-22T00:10:10Z
TST: Address numba warnings in test suite
diff --git a/pandas/tests/groupby/aggregate/test_numba.py b/pandas/tests/groupby/aggregate/test_numba.py index 8e9df8a6da958..e7fa2e0690066 100644 --- a/pandas/tests/groupby/aggregate/test_numba.py +++ b/pandas/tests/groupby/aggregate/test_numba.py @@ -48,7 +48,7 @@ def incorrect_function(x, **kwargs): @td.skip_if_no("numba") -@pytest.mark.filterwarnings("ignore:\\nThe keyword argument") +@pytest.mark.filterwarnings("ignore:\n") # Filter warnings when parallel=True and the function can't be parallelized by Numba @pytest.mark.parametrize("jit", [True, False]) @pytest.mark.parametrize("pandas_obj", ["Series", "DataFrame"]) @@ -77,7 +77,7 @@ def func_numba(values, index): @td.skip_if_no("numba") -@pytest.mark.filterwarnings("ignore:\\nThe keyword argument") +@pytest.mark.filterwarnings("ignore:\n") # Filter warnings when parallel=True and the function can't be parallelized by Numba @pytest.mark.parametrize("jit", [True, False]) @pytest.mark.parametrize("pandas_obj", ["Series", "DataFrame"]) diff --git a/pandas/tests/groupby/test_numba.py b/pandas/tests/groupby/test_numba.py index 98156ae6a63ca..20fd02b21a744 100644 --- a/pandas/tests/groupby/test_numba.py +++ b/pandas/tests/groupby/test_numba.py @@ -10,7 +10,7 @@ @td.skip_if_no("numba") -@pytest.mark.filterwarnings("ignore:\\nThe keyword argument") +@pytest.mark.filterwarnings("ignore:\n") # Filter warnings when parallel=True and the function can't be parallelized by Numba class TestEngine: def test_cython_vs_numba_frame(self, sort, nogil, parallel, nopython): diff --git a/pandas/tests/groupby/transform/test_numba.py b/pandas/tests/groupby/transform/test_numba.py index 503de5ebd2330..4e1b777296d5b 100644 --- a/pandas/tests/groupby/transform/test_numba.py +++ b/pandas/tests/groupby/transform/test_numba.py @@ -45,7 +45,7 @@ def incorrect_function(x, **kwargs): @td.skip_if_no("numba") -@pytest.mark.filterwarnings("ignore:\\nThe keyword argument") +@pytest.mark.filterwarnings("ignore:\n") # Filter warnings when 
parallel=True and the function can't be parallelized by Numba @pytest.mark.parametrize("jit", [True, False]) @pytest.mark.parametrize("pandas_obj", ["Series", "DataFrame"]) @@ -74,7 +74,7 @@ def func(values, index): @td.skip_if_no("numba") -@pytest.mark.filterwarnings("ignore:\\nThe keyword argument") +@pytest.mark.filterwarnings("ignore:\n") # Filter warnings when parallel=True and the function can't be parallelized by Numba @pytest.mark.parametrize("jit", [True, False]) @pytest.mark.parametrize("pandas_obj", ["Series", "DataFrame"]) diff --git a/pandas/tests/window/test_numba.py b/pandas/tests/window/test_numba.py index 9fd4bd422178a..ad291344fd6ed 100644 --- a/pandas/tests/window/test_numba.py +++ b/pandas/tests/window/test_numba.py @@ -15,7 +15,7 @@ @td.skip_if_no("numba") -@pytest.mark.filterwarnings("ignore:\\nThe keyword argument") +@pytest.mark.filterwarnings("ignore:\n") # Filter warnings when parallel=True and the function can't be parallelized by Numba class TestEngine: @pytest.mark.parametrize("jit", [True, False]) @@ -265,7 +265,7 @@ def test_invalid_kwargs_nopython(): @td.skip_if_no("numba") @pytest.mark.slow -@pytest.mark.filterwarnings("ignore:\\nThe keyword argument") +@pytest.mark.filterwarnings("ignore:\n") # Filter warnings when parallel=True and the function can't be parallelized by Numba class TestTableMethod: def test_table_series_valueerror(self): diff --git a/pandas/tests/window/test_online.py b/pandas/tests/window/test_online.py index a21be0b8be049..80cf1c55958ee 100644 --- a/pandas/tests/window/test_online.py +++ b/pandas/tests/window/test_online.py @@ -11,7 +11,7 @@ @td.skip_if_no("numba") -@pytest.mark.filterwarnings("ignore:\\nThe keyword argument") +@pytest.mark.filterwarnings("ignore:\n") class TestEWM: def test_invalid_update(self): df = DataFrame({"a": range(5), "b": range(5)})
This match was working for me locally, so it should hopefully also match in the CI.
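The message part of a pytest `filterwarnings` mark is a regex matched against the start of the warning text, so `"ignore:\n"` suppresses any warning whose message begins with a newline, as numba's parallel-fallback warnings do. A standalone sketch of the same filter via the stdlib `warnings` module (assuming the pytest mark translates to an equivalent `filterwarnings` call):

```python
import warnings

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    # Rough equivalent of @pytest.mark.filterwarnings("ignore:\n"):
    # the message pattern is a regex anchored at the start of the text.
    warnings.filterwarnings("ignore", message="\n")
    warnings.warn("\nThe keyword argument 'parallel' was ignored", UserWarning)

# The newline-prefixed warning was suppressed, so nothing was recorded.
```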
https://api.github.com/repos/pandas-dev/pandas/pulls/44535
2021-11-20T05:27:04Z
2021-11-20T15:56:17Z
2021-11-20T15:56:17Z
2021-11-20T18:00:31Z
DOC: 'MS' offset alias does not work as described with pd.date_range()
diff --git a/doc/source/user_guide/timeseries.rst b/doc/source/user_guide/timeseries.rst index 2e81032bd4c22..fde9ff0450a12 100644 --- a/doc/source/user_guide/timeseries.rst +++ b/doc/source/user_guide/timeseries.rst @@ -1270,6 +1270,36 @@ frequencies. We will refer to these aliases as *offset aliases*. "U, us", "microseconds" "N", "nanoseconds" +.. note:: + + When using the offset aliases above, it should be noted that functions + such as :func:`date_range`, :func:`bdate_range`, will only return + timestamps that are in the interval defined by ``start_date`` and + ``end_date``. If the ``start_date`` does not correspond to the frequency, + the returned timestamps will start at the next valid timestamp, same for + ``end_date``, the returned timestamps will stop at the previous valid + timestamp. + + For example, for the offset ``MS``, if the ``start_date`` is not the first + of the month, the returned timestamps will start with the first day of the + next month. If ``end_date`` is not the first day of a month, the last + returned timestamp will be the first day of the corresponding month. + + .. ipython:: python + + dates_lst_1 = pd.date_range("2020-01-06", "2020-04-03", freq="MS") + dates_lst_1 + + dates_lst_2 = pd.date_range("2020-01-01", "2020-04-01", freq="MS") + dates_lst_2 + + We can see in the above example :func:`date_range` and + :func:`bdate_range` will only return the valid timestamps between the + ``start_date`` and ``end_date``. If these are not valid timestamps for the + given frequency it will roll to the next value for ``start_date`` + (respectively previous for the ``end_date``) + + Combining aliases ~~~~~~~~~~~~~~~~~
- [x] closes #44146 - [x] tests added / passed - [x] Ensure all linting tests pass, see [here](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit) for how to run them - [NA] whatsnew entry
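The rolling behavior documented above can be checked directly; this mirrors the two `date_range` calls from the doc addition:

```python
import pandas as pd

# With freq="MS" ("month start"), date_range only emits timestamps that
# fall on the first of a month *and* lie inside [start, end], so a start
# that is not a month start rolls forward to the next valid one.
rolled = pd.date_range("2020-01-06", "2020-04-03", freq="MS")
assert list(rolled) == [
    pd.Timestamp("2020-02-01"),
    pd.Timestamp("2020-03-01"),
    pd.Timestamp("2020-04-01"),
]

# When start and end already sit on month starts, both are included.
aligned = pd.date_range("2020-01-01", "2020-04-01", freq="MS")
assert len(aligned) == 4
assert aligned[0] == pd.Timestamp("2020-01-01")
```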
https://api.github.com/repos/pandas-dev/pandas/pulls/44534
2021-11-20T01:59:54Z
2021-11-25T21:51:09Z
2021-11-25T21:51:09Z
2021-11-25T21:51:15Z
BUG: BooleanArray raising on comparison to string
diff --git a/doc/source/whatsnew/v1.4.0.rst b/doc/source/whatsnew/v1.4.0.rst index 2fe289a5f7c35..5c54753ce3666 100644 --- a/doc/source/whatsnew/v1.4.0.rst +++ b/doc/source/whatsnew/v1.4.0.rst @@ -710,6 +710,7 @@ ExtensionArray - Bug in :func:`array` failing to preserve :class:`PandasArray` (:issue:`43887`) - NumPy ufuncs ``np.abs``, ``np.positive``, ``np.negative`` now correctly preserve dtype when called on ExtensionArrays that implement ``__abs__, __pos__, __neg__``, respectively. In particular this is fixed for :class:`TimedeltaArray` (:issue:`43899`) - Avoid raising ``PerformanceWarning`` about fragmented DataFrame when using many columns with an extension dtype (:issue:`44098`) +- Bug in :meth:`BooleanArray.__eq__` and :meth:`BooleanArray.__ne__` raising ``TypeError`` on comparison with an incompatible type (like a string). This caused :meth:`DataFrame.replace` to sometimes raise a ``TypeError`` if a nullable boolean column was included (:issue:`44499`) - Styler diff --git a/pandas/core/arrays/boolean.py b/pandas/core/arrays/boolean.py index 58e7abbbe1ddd..0787ec56379b2 100644 --- a/pandas/core/arrays/boolean.py +++ b/pandas/core/arrays/boolean.py @@ -44,6 +44,7 @@ BaseMaskedArray, BaseMaskedDtype, ) +from pandas.core.ops import invalid_comparison if TYPE_CHECKING: import pyarrow @@ -653,7 +654,11 @@ def _cmp_method(self, other, op): with warnings.catch_warnings(): warnings.filterwarnings("ignore", "elementwise", FutureWarning) with np.errstate(all="ignore"): - result = op(self._data, other) + method = getattr(self._data, f"__{op.__name__}__") + result = method(other) + + if result is NotImplemented: + result = invalid_comparison(self._data, other, op) # nans propagate if mask is None: diff --git a/pandas/tests/arrays/boolean/test_logical.py b/pandas/tests/arrays/boolean/test_logical.py index afcbe36e165c9..938fa8f1a5d6a 100644 --- a/pandas/tests/arrays/boolean/test_logical.py +++ b/pandas/tests/arrays/boolean/test_logical.py @@ -41,6 +41,20 @@ def 
test_empty_ok(self, all_logical_operators): result = getattr(a, op_name)(pd.NA) tm.assert_extension_array_equal(a, result) + @pytest.mark.parametrize( + "other", ["a", pd.Timestamp(2017, 1, 1, 12), np.timedelta64(4)] + ) + def test_eq_mismatched_type(self, other): + # GH-44499 + arr = pd.array([True, False]) + result = arr == other + expected = pd.array([False, False]) + tm.assert_extension_array_equal(result, expected) + + result = arr != other + expected = pd.array([True, True]) + tm.assert_extension_array_equal(result, expected) + def test_logical_length_mismatch_raises(self, all_logical_operators): op_name = all_logical_operators a = pd.array([True, False, None], dtype="boolean") diff --git a/pandas/tests/frame/methods/test_replace.py b/pandas/tests/frame/methods/test_replace.py index 53b71bb489ebb..d6ecdcd155295 100644 --- a/pandas/tests/frame/methods/test_replace.py +++ b/pandas/tests/frame/methods/test_replace.py @@ -624,6 +624,15 @@ def test_replace_mixed3(self): expected.iloc[1, 1] = m[1] tm.assert_frame_equal(result, expected) + @pytest.mark.parametrize("dtype", ["boolean", "Int64", "Float64"]) + def test_replace_with_nullable_column(self, dtype): + # GH-44499 + nullable_ser = Series([1, 0, 1], dtype=dtype) + df = DataFrame({"A": ["A", "B", "x"], "B": nullable_ser}) + result = df.replace("x", "X") + expected = DataFrame({"A": ["A", "B", "X"], "B": nullable_ser}) + tm.assert_frame_equal(result, expected) + def test_replace_simple_nested_dict(self): df = DataFrame({"col": range(1, 5)}) expected = DataFrame({"col": ["a", 2, 3, "b"]})
- [x] closes #44499 - [x] tests added / passed - [x] Ensure all linting tests pass, see [here](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit) for how to run them - [x] whatsnew entry
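A short sketch of the fixed behavior (assumes a pandas version containing this fix, i.e. 1.4+): comparing a nullable boolean array against an incompatible scalar now returns element-wise results instead of raising, which is what lets `DataFrame.replace` work on frames with nullable boolean columns.

```python
import pandas as pd

# Mismatched-type comparison no longer raises TypeError.
arr = pd.array([True, False], dtype="boolean")
assert list(arr == "a") == [False, False]
assert list(arr != "a") == [True, True]

# Which in turn unblocks replace() on a frame with a nullable column.
df = pd.DataFrame(
    {"A": ["A", "B", "x"], "B": pd.array([True, False, True], dtype="boolean")}
)
result = df.replace("x", "X")
assert list(result["A"]) == ["A", "B", "X"]
```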
https://api.github.com/repos/pandas-dev/pandas/pulls/44533
2021-11-20T01:50:03Z
2021-11-20T16:15:51Z
2021-11-20T16:15:51Z
2021-11-20T16:25:34Z
BUG: td64+offset
diff --git a/doc/source/whatsnew/v1.4.0.rst b/doc/source/whatsnew/v1.4.0.rst index 8b51fe2db7641..b83fd0504342c 100644 --- a/doc/source/whatsnew/v1.4.0.rst +++ b/doc/source/whatsnew/v1.4.0.rst @@ -544,6 +544,8 @@ Datetimelike - Bug in in calling ``np.isnan``, ``np.isfinite``, or ``np.isinf`` on a timezone-aware :class:`DatetimeIndex` incorrectly raising ``TypeError`` (:issue:`43917`) - Bug in constructing a :class:`Series` from datetime-like strings with mixed timezones incorrectly partially-inferring datetime values (:issue:`40111`) - Bug in addition with a :class:`Tick` object and a ``np.timedelta64`` object incorrectly raising instead of returning :class:`Timedelta` (:issue:`44474`) +- Bug in adding a ``np.timedelta64`` object to a :class:`BusinessDay` or :class:`CustomBusinessDay` object incorrectly raising (:issue:`44532`) +- Timedelta ^^^^^^^^^ diff --git a/pandas/_libs/tslibs/offsets.pyx b/pandas/_libs/tslibs/offsets.pyx index f689b8ce242e5..968a4d0198f09 100644 --- a/pandas/_libs/tslibs/offsets.pyx +++ b/pandas/_libs/tslibs/offsets.pyx @@ -353,6 +353,9 @@ cdef class BaseOffset: """ Base class for DateOffset methods that are not overridden by subclasses. """ + # ensure that reversed-ops with numpy scalars return NotImplemented + __array_priority__ = 1000 + _day_opt = None _attributes = tuple(["n", "normalize"]) _use_relativedelta = False @@ -434,6 +437,10 @@ cdef class BaseOffset: if not isinstance(self, BaseOffset): # cython semantics; this is __radd__ return other.__add__(self) + + elif util.is_array(other) and other.dtype == object: + return np.array([self + x for x in other]) + try: return self.apply(other) except ApplyTypeError: @@ -448,7 +455,8 @@ cdef class BaseOffset: elif not isinstance(self, BaseOffset): # cython semantics, this is __rsub__ return (-other).__add__(self) - else: # pragma: no cover + else: + # e.g. 
PeriodIndex return NotImplemented def __call__(self, other): @@ -767,8 +775,6 @@ cdef class SingleConstructorOffset(BaseOffset): # Tick Offsets cdef class Tick(SingleConstructorOffset): - # ensure that reversed-ops with numpy scalars return NotImplemented - __array_priority__ = 1000 _adjust_dst = False _prefix = "undefined" _attributes = tuple(["n", "normalize"]) diff --git a/pandas/tests/tseries/offsets/test_business_day.py b/pandas/tests/tseries/offsets/test_business_day.py index 92176515b6b6f..c7db42b6e9df1 100644 --- a/pandas/tests/tseries/offsets/test_business_day.py +++ b/pandas/tests/tseries/offsets/test_business_day.py @@ -7,7 +7,6 @@ timedelta, ) -import numpy as np import pytest from pandas._libs.tslibs.offsets import ( @@ -61,7 +60,6 @@ def test_with_offset(self): assert (self.d + offset) == datetime(2008, 1, 2, 2) - @pytest.mark.parametrize("reverse", [True, False]) @pytest.mark.parametrize( "td", [ @@ -71,20 +69,15 @@ def test_with_offset(self): ], ids=lambda x: type(x), ) - def test_with_offset_index(self, reverse, td, request): - if reverse and isinstance(td, np.timedelta64): - mark = pytest.mark.xfail( - reason="need __array_priority__, but that causes other errors" - ) - request.node.add_marker(mark) + def test_with_offset_index(self, td): dti = DatetimeIndex([self.d]) expected = DatetimeIndex([datetime(2008, 1, 2, 2)]) - if reverse: - result = dti + (td + self.offset) - else: - result = dti + (self.offset + td) + result = dti + (td + self.offset) + tm.assert_index_equal(result, expected) + + result = dti + (self.offset + td) tm.assert_index_equal(result, expected) def test_eq(self):
- [ ] closes #xxxx - [x] tests added / passed - [x] Ensure all linting tests pass, see [here](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit) for how to run them - [x] whatsnew entry
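A sketch mirroring the de-parametrized test above (assumes a pandas version with this fix): with `__array_priority__` set on `BaseOffset`, NumPy defers the reversed add to the offset, so `np.timedelta64 + BDay()` composes in either operand order.

```python
import numpy as np
import pandas as pd

dti = pd.DatetimeIndex([pd.Timestamp("2008-01-01")])  # a Tuesday
offset = pd.offsets.BDay()
td = np.timedelta64(2, "h")

# One business day rolls 2008-01-01 to 2008-01-02; the 2h part shifts the time.
expected = pd.DatetimeIndex([pd.Timestamp("2008-01-02 02:00")])
pd.testing.assert_index_equal(dti + (td + offset), expected)  # previously raised
pd.testing.assert_index_equal(dti + (offset + td), expected)
```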
https://api.github.com/repos/pandas-dev/pandas/pulls/44532
2021-11-20T00:13:57Z
2021-11-20T20:45:51Z
2021-11-20T20:45:51Z
2021-11-20T21:30:13Z
Add json normalize back into doc and move read_json to pd namespace
diff --git a/doc/source/reference/io.rst b/doc/source/reference/io.rst index 82d4ec4950ef1..7aad937d10a18 100644 --- a/doc/source/reference/io.rst +++ b/doc/source/reference/io.rst @@ -57,7 +57,7 @@ Excel ExcelWriter -.. currentmodule:: pandas.io.json +.. currentmodule:: pandas JSON ~~~~ @@ -65,7 +65,10 @@ JSON :toctree: api/ read_json - to_json + json_normalize + DataFrame.to_json + +.. currentmodule:: pandas.io.json .. autosummary:: :toctree: api/
- [x] closes #42540 read_json looked like it had to be used as pandas.io.json.read_json, and json_normalize was missing altogether. Maybe backporting as well? Edit: built the docs locally; they look fine now
https://api.github.com/repos/pandas-dev/pandas/pulls/44531
2021-11-19T23:08:41Z
2021-11-20T21:11:25Z
2021-11-20T21:11:25Z
2021-11-20T21:22:49Z
Fix regression ignoring arrays in dtype check for merge_asof
diff --git a/doc/source/whatsnew/v1.3.5.rst b/doc/source/whatsnew/v1.3.5.rst index 951b05b65c81b..34e67e51e47e3 100644 --- a/doc/source/whatsnew/v1.3.5.rst +++ b/doc/source/whatsnew/v1.3.5.rst @@ -15,6 +15,7 @@ including other versions of pandas. Fixed regressions ~~~~~~~~~~~~~~~~~ - Fixed regression in :meth:`Series.equals` when comparing floats with dtype object to None (:issue:`44190`) +- Fixed regression in :func:`merge_asof` raising error when array was supplied as join key (:issue:`42844`) - Fixed performance regression in :func:`read_csv` (:issue:`44106`) - Fixed regression in :meth:`Series.duplicated` and :meth:`Series.drop_duplicates` when Series has :class:`Categorical` dtype with boolean categories (:issue:`44351`) - diff --git a/pandas/core/reshape/merge.py b/pandas/core/reshape/merge.py index 4dd15dd367581..960b8faec7c59 100644 --- a/pandas/core/reshape/merge.py +++ b/pandas/core/reshape/merge.py @@ -1783,21 +1783,27 @@ def _validate_specification(self) -> None: # GH#29130 Check that merge keys do not have dtype object if not self.left_index: left_on = self.left_on[0] - lo_dtype = ( - self.left[left_on].dtype - if left_on in self.left.columns - else self.left.index.get_level_values(left_on) - ) + if is_array_like(left_on): + lo_dtype = left_on.dtype + else: + lo_dtype = ( + self.left[left_on].dtype + if left_on in self.left.columns + else self.left.index.get_level_values(left_on) + ) else: lo_dtype = self.left.index.dtype if not self.right_index: right_on = self.right_on[0] - ro_dtype = ( - self.right[right_on].dtype - if right_on in self.right.columns - else self.right.index.get_level_values(right_on) - ) + if is_array_like(right_on): + ro_dtype = right_on.dtype + else: + ro_dtype = ( + self.right[right_on].dtype + if right_on in self.right.columns + else self.right.index.get_level_values(right_on) + ) else: ro_dtype = self.right.index.dtype diff --git a/pandas/tests/reshape/merge/test_merge_asof.py b/pandas/tests/reshape/merge/test_merge_asof.py 
index 5bb9f56adb8d5..f9310db3123f6 100644 --- a/pandas/tests/reshape/merge/test_merge_asof.py +++ b/pandas/tests/reshape/merge/test_merge_asof.py @@ -1484,3 +1484,44 @@ def test_merge_asof_numeri_column_in_index_object_dtype(): match=r"Incompatible merge dtype, .*, both sides must have numeric dtype", ): merge_asof(left, right, left_on="a", right_on="a") + + +def test_merge_asof_array_as_on(): + # GH#42844 + right = pd.DataFrame( + { + "a": [2, 6], + "ts": [pd.Timestamp("2021/01/01 00:37"), pd.Timestamp("2021/01/01 01:40")], + } + ) + ts_merge = pd.date_range( + start=pd.Timestamp("2021/01/01 00:00"), periods=3, freq="1h" + ) + left = pd.DataFrame({"b": [4, 8, 7]}) + result = merge_asof( + left, + right, + left_on=ts_merge, + right_on="ts", + allow_exact_matches=False, + direction="backward", + ) + expected = pd.DataFrame({"b": [4, 8, 7], "a": [np.nan, 2, 6], "ts": ts_merge}) + tm.assert_frame_equal(result, expected) + + result = merge_asof( + right, + left, + left_on="ts", + right_on=ts_merge, + allow_exact_matches=False, + direction="backward", + ) + expected = pd.DataFrame( + { + "a": [2, 6], + "ts": [pd.Timestamp("2021/01/01 00:37"), pd.Timestamp("2021/01/01 01:40")], + "b": [4, 8], + } + ) + tm.assert_frame_equal(result, expected)
- [x] closes #42844 - [x] tests added / passed - [x] Ensure all linting tests pass, see [here](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit) for how to run them - [x] whatsnew entry This restores the 1.2.x behavior. Passing an array as a join key is something we don't have much test coverage for; we have two tests for this, which were added ages ago. We could deprecate this.
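A condensed version of the new regression test (assumes a pandas version with this fix, i.e. 1.3.5+): an array-like, here a `DatetimeIndex`, is passed directly as `left_on` instead of a column name.

```python
import pandas as pd

right = pd.DataFrame(
    {
        "a": [2, 6],
        "ts": [pd.Timestamp("2021-01-01 00:37"), pd.Timestamp("2021-01-01 01:40")],
    }
)
ts_merge = pd.date_range("2021-01-01 00:00", periods=3, freq="1h")
left = pd.DataFrame({"b": [4, 8, 7]})

# backward + no exact matches: 00:00 has no earlier right row (NaN),
# 01:00 matches 00:37 (a=2), 02:00 matches 01:40 (a=6).
result = pd.merge_asof(
    left,
    right,
    left_on=ts_merge,  # array as join key; raised on 1.3.x before this fix
    right_on="ts",
    allow_exact_matches=False,
    direction="backward",
)
assert bool(result["a"].isna().iloc[0])
assert list(result["a"].iloc[1:]) == [2, 6]
```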
https://api.github.com/repos/pandas-dev/pandas/pulls/44530
2021-11-19T21:47:37Z
2021-11-20T16:19:43Z
2021-11-20T16:19:42Z
2021-11-20T20:24:11Z
CLN: tighten noqas
diff --git a/doc/source/user_guide/io.rst b/doc/source/user_guide/io.rst index e2f8ac09d8873..8c8469b93db68 100644 --- a/doc/source/user_guide/io.rst +++ b/doc/source/user_guide/io.rst @@ -3023,6 +3023,7 @@ Read in the content of the "books.xml" as instance of ``StringIO`` or Even read XML from AWS S3 buckets such as Python Software Foundation's IRS 990 Form: .. ipython:: python + :okwarning: df = pd.read_xml( "s3://irs-form-990/201923199349319487_public.xml", diff --git a/pandas/api/__init__.py b/pandas/api/__init__.py index c22f37f2ef292..80202b3569862 100644 --- a/pandas/api/__init__.py +++ b/pandas/api/__init__.py @@ -1,5 +1,5 @@ """ public toolkit API """ -from pandas.api import ( # noqa +from pandas.api import ( # noqa:F401 extensions, indexers, types, diff --git a/pandas/core/algorithms.py b/pandas/core/algorithms.py index acc66ae9deca7..763e76f8497fa 100644 --- a/pandas/core/algorithms.py +++ b/pandas/core/algorithms.py @@ -1849,5 +1849,5 @@ def union_with_duplicates(lvals: ArrayLike, rvals: ArrayLike) -> ArrayLike: unique_array = ensure_wrapped_if_datetimelike(unique_array) for i, value in enumerate(unique_array): - indexer += [i] * int(max(l_count[value], r_count[value])) + indexer += [i] * int(max(l_count.at[value], r_count.at[value])) return unique_array.take(indexer) diff --git a/pandas/core/arrays/floating.py b/pandas/core/arrays/floating.py index 6d6cc03a1c83e..9be2201c566a4 100644 --- a/pandas/core/arrays/floating.py +++ b/pandas/core/arrays/floating.py @@ -176,9 +176,7 @@ def coerce_to_array( if mask.any(): values = values.copy() values[mask] = np.nan - values = values.astype(dtype, copy=False) # , casting="safe") - else: - values = values.astype(dtype, copy=False) # , casting="safe") + values = values.astype(dtype, copy=False) # , casting="safe") return values, mask diff --git a/pandas/core/arrays/integer.py b/pandas/core/arrays/integer.py index d8b7bf2b86d2c..d01068a0d408c 100644 --- a/pandas/core/arrays/integer.py +++ 
b/pandas/core/arrays/integer.py @@ -214,9 +214,9 @@ def coerce_to_array( else: assert len(mask) == len(values) - if not values.ndim == 1: + if values.ndim != 1: raise TypeError("values must be a 1D list-like") - if not mask.ndim == 1: + if mask.ndim != 1: raise TypeError("mask must be a 1D list-like") # infer dtype if needed diff --git a/pandas/core/dtypes/missing.py b/pandas/core/dtypes/missing.py index eea3fa37b7435..c189134237554 100644 --- a/pandas/core/dtypes/missing.py +++ b/pandas/core/dtypes/missing.py @@ -162,7 +162,6 @@ def _isna(obj, inf_as_na: bool = False): return libmissing.checknull_old(obj) else: return libmissing.checknull(obj) - # hack (for now) because MI registers as ndarray elif isinstance(obj, ABCMultiIndex): raise NotImplementedError("isna is not defined for MultiIndex") elif isinstance(obj, type): diff --git a/pandas/core/internals/blocks.py b/pandas/core/internals/blocks.py index 7b6a76f0a5d10..ca81d54a0fb86 100644 --- a/pandas/core/internals/blocks.py +++ b/pandas/core/internals/blocks.py @@ -309,7 +309,7 @@ def _slice(self, slicer) -> ArrayLike: return self.values[slicer] @final - def getitem_block(self, slicer) -> Block: + def getitem_block(self, slicer: slice | npt.NDArray[np.intp]) -> Block: """ Perform __getitem__-like, return result as block. @@ -326,7 +326,9 @@ def getitem_block(self, slicer) -> Block: return type(self)(new_values, new_mgr_locs, self.ndim) @final - def getitem_block_columns(self, slicer, new_mgr_locs: BlockPlacement) -> Block: + def getitem_block_columns( + self, slicer: slice, new_mgr_locs: BlockPlacement + ) -> Block: """ Perform __getitem__-like, return result as block. 
diff --git a/pandas/core/reshape/tile.py b/pandas/core/reshape/tile.py index a1b058224795e..8cf94e5e433a6 100644 --- a/pandas/core/reshape/tile.py +++ b/pandas/core/reshape/tile.py @@ -443,7 +443,8 @@ def _bins_to_cuts( ) elif ordered and len(set(labels)) != len(labels): raise ValueError( - "labels must be unique if ordered=True; pass ordered=False for duplicate labels" # noqa + "labels must be unique if ordered=True; pass ordered=False " + "for duplicate labels" ) else: if len(labels) != len(bins) - 1: diff --git a/pandas/io/sas/__init__.py b/pandas/io/sas/__init__.py index 8f81352e6aecb..71027fd064f3d 100644 --- a/pandas/io/sas/__init__.py +++ b/pandas/io/sas/__init__.py @@ -1 +1 @@ -from pandas.io.sas.sasreader import read_sas # noqa +from pandas.io.sas.sasreader import read_sas # noqa:F401 diff --git a/pandas/tests/arrays/boolean/test_function.py b/pandas/tests/arrays/boolean/test_function.py index d90655b6e2820..2f1a3121cdf5b 100644 --- a/pandas/tests/arrays/boolean/test_function.py +++ b/pandas/tests/arrays/boolean/test_function.py @@ -59,8 +59,8 @@ def test_ufuncs_unary(ufunc): expected[a._mask] = np.nan tm.assert_extension_array_equal(result, expected) - s = pd.Series(a) - result = ufunc(s) + ser = pd.Series(a) + result = ufunc(ser) expected = pd.Series(ufunc(a._data), dtype="boolean") expected[a._mask] = np.nan tm.assert_series_equal(result, expected) @@ -86,8 +86,8 @@ def test_value_counts_na(): def test_value_counts_with_normalize(): - s = pd.Series([True, False, pd.NA], dtype="boolean") - result = s.value_counts(normalize=True) + ser = pd.Series([True, False, pd.NA], dtype="boolean") + result = ser.value_counts(normalize=True) expected = pd.Series([1, 1], index=[True, False], dtype="Float64") / 2 tm.assert_series_equal(result, expected) @@ -102,7 +102,7 @@ def test_diff(): ) tm.assert_extension_array_equal(result, expected) - s = pd.Series(a) - result = s.diff() + ser = pd.Series(a) + result = ser.diff() expected = pd.Series(expected) 
tm.assert_series_equal(result, expected) diff --git a/pandas/tests/arrays/categorical/test_constructors.py b/pandas/tests/arrays/categorical/test_constructors.py index ee24ecb4964ec..50ecbb9eb705a 100644 --- a/pandas/tests/arrays/categorical/test_constructors.py +++ b/pandas/tests/arrays/categorical/test_constructors.py @@ -248,9 +248,7 @@ def test_constructor(self): # this is a legitimate constructor with tm.assert_produces_warning(None): - c = Categorical( # noqa - np.array([], dtype="int64"), categories=[3, 2, 1], ordered=True - ) + Categorical(np.array([], dtype="int64"), categories=[3, 2, 1], ordered=True) def test_constructor_with_existing_categories(self): # GH25318: constructing with pd.Series used to bogusly skip recoding diff --git a/pandas/tests/arrays/categorical/test_repr.py b/pandas/tests/arrays/categorical/test_repr.py index e23fbb16190ea..678109b2c2497 100644 --- a/pandas/tests/arrays/categorical/test_repr.py +++ b/pandas/tests/arrays/categorical/test_repr.py @@ -77,7 +77,7 @@ def test_unicode_print(self): expected = """\ ['ああああ', 'いいいいい', 'ううううううう', 'ああああ', 'いいいいい', ..., 'いいいいい', 'ううううううう', 'ああああ', 'いいいいい', 'ううううううう'] Length: 60 -Categories (3, object): ['ああああ', 'いいいいい', 'ううううううう']""" # noqa +Categories (3, object): ['ああああ', 'いいいいい', 'ううううううう']""" # noqa:E501 assert repr(c) == expected @@ -88,7 +88,7 @@ def test_unicode_print(self): c = Categorical(["ああああ", "いいいいい", "ううううううう"] * 20) expected = """['ああああ', 'いいいいい', 'ううううううう', 'ああああ', 'いいいいい', ..., 'いいいいい', 'ううううううう', 'ああああ', 'いいいいい', 'ううううううう'] Length: 60 -Categories (3, object): ['ああああ', 'いいいいい', 'ううううううう']""" # noqa +Categories (3, object): ['ああああ', 'いいいいい', 'ううううううう']""" # noqa:E501 assert repr(c) == expected @@ -213,14 +213,14 @@ def test_categorical_repr_datetime_ordered(self): c = Categorical(idx, ordered=True) exp = """[2011-01-01 09:00:00, 2011-01-01 10:00:00, 2011-01-01 11:00:00, 2011-01-01 12:00:00, 2011-01-01 13:00:00] Categories (5, datetime64[ns]): [2011-01-01 09:00:00 < 2011-01-01 
10:00:00 < 2011-01-01 11:00:00 < - 2011-01-01 12:00:00 < 2011-01-01 13:00:00]""" # noqa + 2011-01-01 12:00:00 < 2011-01-01 13:00:00]""" # noqa:E501 assert repr(c) == exp c = Categorical(idx.append(idx), categories=idx, ordered=True) exp = """[2011-01-01 09:00:00, 2011-01-01 10:00:00, 2011-01-01 11:00:00, 2011-01-01 12:00:00, 2011-01-01 13:00:00, 2011-01-01 09:00:00, 2011-01-01 10:00:00, 2011-01-01 11:00:00, 2011-01-01 12:00:00, 2011-01-01 13:00:00] Categories (5, datetime64[ns]): [2011-01-01 09:00:00 < 2011-01-01 10:00:00 < 2011-01-01 11:00:00 < - 2011-01-01 12:00:00 < 2011-01-01 13:00:00]""" # noqa + 2011-01-01 12:00:00 < 2011-01-01 13:00:00]""" # noqa:E501 assert repr(c) == exp @@ -229,7 +229,7 @@ def test_categorical_repr_datetime_ordered(self): exp = """[2011-01-01 09:00:00-05:00, 2011-01-01 10:00:00-05:00, 2011-01-01 11:00:00-05:00, 2011-01-01 12:00:00-05:00, 2011-01-01 13:00:00-05:00] Categories (5, datetime64[ns, US/Eastern]): [2011-01-01 09:00:00-05:00 < 2011-01-01 10:00:00-05:00 < 2011-01-01 11:00:00-05:00 < 2011-01-01 12:00:00-05:00 < - 2011-01-01 13:00:00-05:00]""" # noqa + 2011-01-01 13:00:00-05:00]""" # noqa:E501 assert repr(c) == exp @@ -237,7 +237,7 @@ def test_categorical_repr_datetime_ordered(self): exp = """[2011-01-01 09:00:00-05:00, 2011-01-01 10:00:00-05:00, 2011-01-01 11:00:00-05:00, 2011-01-01 12:00:00-05:00, 2011-01-01 13:00:00-05:00, 2011-01-01 09:00:00-05:00, 2011-01-01 10:00:00-05:00, 2011-01-01 11:00:00-05:00, 2011-01-01 12:00:00-05:00, 2011-01-01 13:00:00-05:00] Categories (5, datetime64[ns, US/Eastern]): [2011-01-01 09:00:00-05:00 < 2011-01-01 10:00:00-05:00 < 2011-01-01 11:00:00-05:00 < 2011-01-01 12:00:00-05:00 < - 2011-01-01 13:00:00-05:00]""" # noqa + 2011-01-01 13:00:00-05:00]""" # noqa:E501 assert repr(c) == exp @@ -257,14 +257,14 @@ def test_categorical_repr_period(self): c = Categorical(idx) exp = """[2011-01-01 09:00, 2011-01-01 10:00, 2011-01-01 11:00, 2011-01-01 12:00, 2011-01-01 13:00] Categories (5, period[H]): [2011-01-01 
09:00, 2011-01-01 10:00, 2011-01-01 11:00, 2011-01-01 12:00, - 2011-01-01 13:00]""" # noqa + 2011-01-01 13:00]""" # noqa:E501 assert repr(c) == exp c = Categorical(idx.append(idx), categories=idx) exp = """[2011-01-01 09:00, 2011-01-01 10:00, 2011-01-01 11:00, 2011-01-01 12:00, 2011-01-01 13:00, 2011-01-01 09:00, 2011-01-01 10:00, 2011-01-01 11:00, 2011-01-01 12:00, 2011-01-01 13:00] Categories (5, period[H]): [2011-01-01 09:00, 2011-01-01 10:00, 2011-01-01 11:00, 2011-01-01 12:00, - 2011-01-01 13:00]""" # noqa + 2011-01-01 13:00]""" # noqa:E501 assert repr(c) == exp @@ -277,7 +277,7 @@ def test_categorical_repr_period(self): c = Categorical(idx.append(idx), categories=idx) exp = """[2011-01, 2011-02, 2011-03, 2011-04, 2011-05, 2011-01, 2011-02, 2011-03, 2011-04, 2011-05] -Categories (5, period[M]): [2011-01, 2011-02, 2011-03, 2011-04, 2011-05]""" # noqa +Categories (5, period[M]): [2011-01, 2011-02, 2011-03, 2011-04, 2011-05]""" # noqa:E501 assert repr(c) == exp @@ -286,14 +286,14 @@ def test_categorical_repr_period_ordered(self): c = Categorical(idx, ordered=True) exp = """[2011-01-01 09:00, 2011-01-01 10:00, 2011-01-01 11:00, 2011-01-01 12:00, 2011-01-01 13:00] Categories (5, period[H]): [2011-01-01 09:00 < 2011-01-01 10:00 < 2011-01-01 11:00 < 2011-01-01 12:00 < - 2011-01-01 13:00]""" # noqa + 2011-01-01 13:00]""" # noqa:E501 assert repr(c) == exp c = Categorical(idx.append(idx), categories=idx, ordered=True) exp = """[2011-01-01 09:00, 2011-01-01 10:00, 2011-01-01 11:00, 2011-01-01 12:00, 2011-01-01 13:00, 2011-01-01 09:00, 2011-01-01 10:00, 2011-01-01 11:00, 2011-01-01 12:00, 2011-01-01 13:00] Categories (5, period[H]): [2011-01-01 09:00 < 2011-01-01 10:00 < 2011-01-01 11:00 < 2011-01-01 12:00 < - 2011-01-01 13:00]""" # noqa + 2011-01-01 13:00]""" # noqa:E501 assert repr(c) == exp @@ -306,7 +306,7 @@ def test_categorical_repr_period_ordered(self): c = Categorical(idx.append(idx), categories=idx, ordered=True) exp = """[2011-01, 2011-02, 2011-03, 2011-04, 
2011-05, 2011-01, 2011-02, 2011-03, 2011-04, 2011-05] -Categories (5, period[M]): [2011-01 < 2011-02 < 2011-03 < 2011-04 < 2011-05]""" # noqa +Categories (5, period[M]): [2011-01 < 2011-02 < 2011-03 < 2011-04 < 2011-05]""" # noqa:E501 assert repr(c) == exp @@ -330,7 +330,7 @@ def test_categorical_repr_timedelta(self): Length: 20 Categories (20, timedelta64[ns]): [0 days 01:00:00, 1 days 01:00:00, 2 days 01:00:00, 3 days 01:00:00, ..., 16 days 01:00:00, 17 days 01:00:00, - 18 days 01:00:00, 19 days 01:00:00]""" # noqa + 18 days 01:00:00, 19 days 01:00:00]""" # noqa:E501 assert repr(c) == exp @@ -339,7 +339,7 @@ def test_categorical_repr_timedelta(self): Length: 40 Categories (20, timedelta64[ns]): [0 days 01:00:00, 1 days 01:00:00, 2 days 01:00:00, 3 days 01:00:00, ..., 16 days 01:00:00, 17 days 01:00:00, - 18 days 01:00:00, 19 days 01:00:00]""" # noqa + 18 days 01:00:00, 19 days 01:00:00]""" # noqa:E501 assert repr(c) == exp @@ -363,7 +363,7 @@ def test_categorical_repr_timedelta_ordered(self): Length: 20 Categories (20, timedelta64[ns]): [0 days 01:00:00 < 1 days 01:00:00 < 2 days 01:00:00 < 3 days 01:00:00 ... 16 days 01:00:00 < 17 days 01:00:00 < - 18 days 01:00:00 < 19 days 01:00:00]""" # noqa + 18 days 01:00:00 < 19 days 01:00:00]""" # noqa:E501 assert repr(c) == exp @@ -372,26 +372,26 @@ def test_categorical_repr_timedelta_ordered(self): Length: 40 Categories (20, timedelta64[ns]): [0 days 01:00:00 < 1 days 01:00:00 < 2 days 01:00:00 < 3 days 01:00:00 ... 
16 days 01:00:00 < 17 days 01:00:00 < - 18 days 01:00:00 < 19 days 01:00:00]""" # noqa + 18 days 01:00:00 < 19 days 01:00:00]""" # noqa:E501 assert repr(c) == exp def test_categorical_index_repr(self): idx = CategoricalIndex(Categorical([1, 2, 3])) - exp = """CategoricalIndex([1, 2, 3], categories=[1, 2, 3], ordered=False, dtype='category')""" # noqa + exp = """CategoricalIndex([1, 2, 3], categories=[1, 2, 3], ordered=False, dtype='category')""" # noqa:E501 assert repr(idx) == exp i = CategoricalIndex(Categorical(np.arange(10))) - exp = """CategoricalIndex([0, 1, 2, 3, 4, 5, 6, 7, 8, 9], categories=[0, 1, 2, 3, 4, 5, 6, 7, ...], ordered=False, dtype='category')""" # noqa + exp = """CategoricalIndex([0, 1, 2, 3, 4, 5, 6, 7, 8, 9], categories=[0, 1, 2, 3, 4, 5, 6, 7, ...], ordered=False, dtype='category')""" # noqa:E501 assert repr(i) == exp def test_categorical_index_repr_ordered(self): i = CategoricalIndex(Categorical([1, 2, 3], ordered=True)) - exp = """CategoricalIndex([1, 2, 3], categories=[1, 2, 3], ordered=True, dtype='category')""" # noqa + exp = """CategoricalIndex([1, 2, 3], categories=[1, 2, 3], ordered=True, dtype='category')""" # noqa:E501 assert repr(i) == exp i = CategoricalIndex(Categorical(np.arange(10), ordered=True)) - exp = """CategoricalIndex([0, 1, 2, 3, 4, 5, 6, 7, 8, 9], categories=[0, 1, 2, 3, 4, 5, 6, 7, ...], ordered=True, dtype='category')""" # noqa + exp = """CategoricalIndex([0, 1, 2, 3, 4, 5, 6, 7, 8, 9], categories=[0, 1, 2, 3, 4, 5, 6, 7, ...], ordered=True, dtype='category')""" # noqa:E501 assert repr(i) == exp def test_categorical_index_repr_datetime(self): @@ -400,7 +400,7 @@ def test_categorical_index_repr_datetime(self): exp = """CategoricalIndex(['2011-01-01 09:00:00', '2011-01-01 10:00:00', '2011-01-01 11:00:00', '2011-01-01 12:00:00', '2011-01-01 13:00:00'], - categories=[2011-01-01 09:00:00, 2011-01-01 10:00:00, 2011-01-01 11:00:00, 2011-01-01 12:00:00, 2011-01-01 13:00:00], ordered=False, dtype='category')""" # noqa + 
categories=[2011-01-01 09:00:00, 2011-01-01 10:00:00, 2011-01-01 11:00:00, 2011-01-01 12:00:00, 2011-01-01 13:00:00], ordered=False, dtype='category')""" # noqa:E501 assert repr(i) == exp @@ -409,7 +409,7 @@ def test_categorical_index_repr_datetime(self): exp = """CategoricalIndex(['2011-01-01 09:00:00-05:00', '2011-01-01 10:00:00-05:00', '2011-01-01 11:00:00-05:00', '2011-01-01 12:00:00-05:00', '2011-01-01 13:00:00-05:00'], - categories=[2011-01-01 09:00:00-05:00, 2011-01-01 10:00:00-05:00, 2011-01-01 11:00:00-05:00, 2011-01-01 12:00:00-05:00, 2011-01-01 13:00:00-05:00], ordered=False, dtype='category')""" # noqa + categories=[2011-01-01 09:00:00-05:00, 2011-01-01 10:00:00-05:00, 2011-01-01 11:00:00-05:00, 2011-01-01 12:00:00-05:00, 2011-01-01 13:00:00-05:00], ordered=False, dtype='category')""" # noqa:E501 assert repr(i) == exp @@ -419,7 +419,7 @@ def test_categorical_index_repr_datetime_ordered(self): exp = """CategoricalIndex(['2011-01-01 09:00:00', '2011-01-01 10:00:00', '2011-01-01 11:00:00', '2011-01-01 12:00:00', '2011-01-01 13:00:00'], - categories=[2011-01-01 09:00:00, 2011-01-01 10:00:00, 2011-01-01 11:00:00, 2011-01-01 12:00:00, 2011-01-01 13:00:00], ordered=True, dtype='category')""" # noqa + categories=[2011-01-01 09:00:00, 2011-01-01 10:00:00, 2011-01-01 11:00:00, 2011-01-01 12:00:00, 2011-01-01 13:00:00], ordered=True, dtype='category')""" # noqa:E501 assert repr(i) == exp @@ -428,7 +428,7 @@ def test_categorical_index_repr_datetime_ordered(self): exp = """CategoricalIndex(['2011-01-01 09:00:00-05:00', '2011-01-01 10:00:00-05:00', '2011-01-01 11:00:00-05:00', '2011-01-01 12:00:00-05:00', '2011-01-01 13:00:00-05:00'], - categories=[2011-01-01 09:00:00-05:00, 2011-01-01 10:00:00-05:00, 2011-01-01 11:00:00-05:00, 2011-01-01 12:00:00-05:00, 2011-01-01 13:00:00-05:00], ordered=True, dtype='category')""" # noqa + categories=[2011-01-01 09:00:00-05:00, 2011-01-01 10:00:00-05:00, 2011-01-01 11:00:00-05:00, 2011-01-01 12:00:00-05:00, 2011-01-01 
13:00:00-05:00], ordered=True, dtype='category')""" # noqa:E501 assert repr(i) == exp @@ -438,7 +438,7 @@ def test_categorical_index_repr_datetime_ordered(self): '2011-01-01 13:00:00-05:00', '2011-01-01 09:00:00-05:00', '2011-01-01 10:00:00-05:00', '2011-01-01 11:00:00-05:00', '2011-01-01 12:00:00-05:00', '2011-01-01 13:00:00-05:00'], - categories=[2011-01-01 09:00:00-05:00, 2011-01-01 10:00:00-05:00, 2011-01-01 11:00:00-05:00, 2011-01-01 12:00:00-05:00, 2011-01-01 13:00:00-05:00], ordered=True, dtype='category')""" # noqa + categories=[2011-01-01 09:00:00-05:00, 2011-01-01 10:00:00-05:00, 2011-01-01 11:00:00-05:00, 2011-01-01 12:00:00-05:00, 2011-01-01 13:00:00-05:00], ordered=True, dtype='category')""" # noqa:E501 assert repr(i) == exp @@ -446,24 +446,24 @@ def test_categorical_index_repr_period(self): # test all length idx = period_range("2011-01-01 09:00", freq="H", periods=1) i = CategoricalIndex(Categorical(idx)) - exp = """CategoricalIndex(['2011-01-01 09:00'], categories=[2011-01-01 09:00], ordered=False, dtype='category')""" # noqa + exp = """CategoricalIndex(['2011-01-01 09:00'], categories=[2011-01-01 09:00], ordered=False, dtype='category')""" # noqa:E501 assert repr(i) == exp idx = period_range("2011-01-01 09:00", freq="H", periods=2) i = CategoricalIndex(Categorical(idx)) - exp = """CategoricalIndex(['2011-01-01 09:00', '2011-01-01 10:00'], categories=[2011-01-01 09:00, 2011-01-01 10:00], ordered=False, dtype='category')""" # noqa + exp = """CategoricalIndex(['2011-01-01 09:00', '2011-01-01 10:00'], categories=[2011-01-01 09:00, 2011-01-01 10:00], ordered=False, dtype='category')""" # noqa:E501 assert repr(i) == exp idx = period_range("2011-01-01 09:00", freq="H", periods=3) i = CategoricalIndex(Categorical(idx)) - exp = """CategoricalIndex(['2011-01-01 09:00', '2011-01-01 10:00', '2011-01-01 11:00'], categories=[2011-01-01 09:00, 2011-01-01 10:00, 2011-01-01 11:00], ordered=False, dtype='category')""" # noqa + exp = """CategoricalIndex(['2011-01-01 
09:00', '2011-01-01 10:00', '2011-01-01 11:00'], categories=[2011-01-01 09:00, 2011-01-01 10:00, 2011-01-01 11:00], ordered=False, dtype='category')""" # noqa:E501 assert repr(i) == exp idx = period_range("2011-01-01 09:00", freq="H", periods=5) i = CategoricalIndex(Categorical(idx)) exp = """CategoricalIndex(['2011-01-01 09:00', '2011-01-01 10:00', '2011-01-01 11:00', '2011-01-01 12:00', '2011-01-01 13:00'], - categories=[2011-01-01 09:00, 2011-01-01 10:00, 2011-01-01 11:00, 2011-01-01 12:00, 2011-01-01 13:00], ordered=False, dtype='category')""" # noqa + categories=[2011-01-01 09:00, 2011-01-01 10:00, 2011-01-01 11:00, 2011-01-01 12:00, 2011-01-01 13:00], ordered=False, dtype='category')""" # noqa:E501 assert repr(i) == exp @@ -472,13 +472,13 @@ def test_categorical_index_repr_period(self): '2011-01-01 12:00', '2011-01-01 13:00', '2011-01-01 09:00', '2011-01-01 10:00', '2011-01-01 11:00', '2011-01-01 12:00', '2011-01-01 13:00'], - categories=[2011-01-01 09:00, 2011-01-01 10:00, 2011-01-01 11:00, 2011-01-01 12:00, 2011-01-01 13:00], ordered=False, dtype='category')""" # noqa + categories=[2011-01-01 09:00, 2011-01-01 10:00, 2011-01-01 11:00, 2011-01-01 12:00, 2011-01-01 13:00], ordered=False, dtype='category')""" # noqa:E501 assert repr(i) == exp idx = period_range("2011-01", freq="M", periods=5) i = CategoricalIndex(Categorical(idx)) - exp = """CategoricalIndex(['2011-01', '2011-02', '2011-03', '2011-04', '2011-05'], categories=[2011-01, 2011-02, 2011-03, 2011-04, 2011-05], ordered=False, dtype='category')""" # noqa + exp = """CategoricalIndex(['2011-01', '2011-02', '2011-03', '2011-04', '2011-05'], categories=[2011-01, 2011-02, 2011-03, 2011-04, 2011-05], ordered=False, dtype='category')""" # noqa:E501 assert repr(i) == exp def test_categorical_index_repr_period_ordered(self): @@ -486,19 +486,19 @@ def test_categorical_index_repr_period_ordered(self): i = CategoricalIndex(Categorical(idx, ordered=True)) exp = """CategoricalIndex(['2011-01-01 09:00', '2011-01-01 
10:00', '2011-01-01 11:00', '2011-01-01 12:00', '2011-01-01 13:00'], - categories=[2011-01-01 09:00, 2011-01-01 10:00, 2011-01-01 11:00, 2011-01-01 12:00, 2011-01-01 13:00], ordered=True, dtype='category')""" # noqa + categories=[2011-01-01 09:00, 2011-01-01 10:00, 2011-01-01 11:00, 2011-01-01 12:00, 2011-01-01 13:00], ordered=True, dtype='category')""" # noqa:E501 assert repr(i) == exp idx = period_range("2011-01", freq="M", periods=5) i = CategoricalIndex(Categorical(idx, ordered=True)) - exp = """CategoricalIndex(['2011-01', '2011-02', '2011-03', '2011-04', '2011-05'], categories=[2011-01, 2011-02, 2011-03, 2011-04, 2011-05], ordered=True, dtype='category')""" # noqa + exp = """CategoricalIndex(['2011-01', '2011-02', '2011-03', '2011-04', '2011-05'], categories=[2011-01, 2011-02, 2011-03, 2011-04, 2011-05], ordered=True, dtype='category')""" # noqa:E501 assert repr(i) == exp def test_categorical_index_repr_timedelta(self): idx = timedelta_range("1 days", periods=5) i = CategoricalIndex(Categorical(idx)) - exp = """CategoricalIndex(['1 days', '2 days', '3 days', '4 days', '5 days'], categories=[1 days 00:00:00, 2 days 00:00:00, 3 days 00:00:00, 4 days 00:00:00, 5 days 00:00:00], ordered=False, dtype='category')""" # noqa + exp = """CategoricalIndex(['1 days', '2 days', '3 days', '4 days', '5 days'], categories=[1 days 00:00:00, 2 days 00:00:00, 3 days 00:00:00, 4 days 00:00:00, 5 days 00:00:00], ordered=False, dtype='category')""" # noqa:E501 assert repr(i) == exp idx = timedelta_range("1 hours", periods=10) @@ -507,14 +507,14 @@ def test_categorical_index_repr_timedelta(self): '3 days 01:00:00', '4 days 01:00:00', '5 days 01:00:00', '6 days 01:00:00', '7 days 01:00:00', '8 days 01:00:00', '9 days 01:00:00'], - categories=[0 days 01:00:00, 1 days 01:00:00, 2 days 01:00:00, 3 days 01:00:00, 4 days 01:00:00, 5 days 01:00:00, 6 days 01:00:00, 7 days 01:00:00, ...], ordered=False, dtype='category')""" # noqa + categories=[0 days 01:00:00, 1 days 01:00:00, 2 days 
01:00:00, 3 days 01:00:00, 4 days 01:00:00, 5 days 01:00:00, 6 days 01:00:00, 7 days 01:00:00, ...], ordered=False, dtype='category')""" # noqa:E501 assert repr(i) == exp def test_categorical_index_repr_timedelta_ordered(self): idx = timedelta_range("1 days", periods=5) i = CategoricalIndex(Categorical(idx, ordered=True)) - exp = """CategoricalIndex(['1 days', '2 days', '3 days', '4 days', '5 days'], categories=[1 days 00:00:00, 2 days 00:00:00, 3 days 00:00:00, 4 days 00:00:00, 5 days 00:00:00], ordered=True, dtype='category')""" # noqa + exp = """CategoricalIndex(['1 days', '2 days', '3 days', '4 days', '5 days'], categories=[1 days 00:00:00, 2 days 00:00:00, 3 days 00:00:00, 4 days 00:00:00, 5 days 00:00:00], ordered=True, dtype='category')""" # noqa:E501 assert repr(i) == exp idx = timedelta_range("1 hours", periods=10) @@ -523,7 +523,7 @@ def test_categorical_index_repr_timedelta_ordered(self): '3 days 01:00:00', '4 days 01:00:00', '5 days 01:00:00', '6 days 01:00:00', '7 days 01:00:00', '8 days 01:00:00', '9 days 01:00:00'], - categories=[0 days 01:00:00, 1 days 01:00:00, 2 days 01:00:00, 3 days 01:00:00, 4 days 01:00:00, 5 days 01:00:00, 6 days 01:00:00, 7 days 01:00:00, ...], ordered=True, dtype='category')""" # noqa + categories=[0 days 01:00:00, 1 days 01:00:00, 2 days 01:00:00, 3 days 01:00:00, 4 days 01:00:00, 5 days 01:00:00, 6 days 01:00:00, 7 days 01:00:00, ...], ordered=True, dtype='category')""" # noqa:E501 assert repr(i) == exp diff --git a/pandas/tests/arrays/floating/test_function.py b/pandas/tests/arrays/floating/test_function.py index ef95eac316397..ff84116fa1b18 100644 --- a/pandas/tests/arrays/floating/test_function.py +++ b/pandas/tests/arrays/floating/test_function.py @@ -106,16 +106,16 @@ def test_value_counts_na(): def test_value_counts_empty(): - s = pd.Series([], dtype="Float64") - result = s.value_counts() + ser = pd.Series([], dtype="Float64") + result = ser.value_counts() idx = pd.Index([], dtype="object") expected = pd.Series([], 
index=idx, dtype="Int64") tm.assert_series_equal(result, expected) def test_value_counts_with_normalize(): - s = pd.Series([0.1, 0.2, 0.1, pd.NA], dtype="Float64") - result = s.value_counts(normalize=True) + ser = pd.Series([0.1, 0.2, 0.1, pd.NA], dtype="Float64") + result = ser.value_counts(normalize=True) expected = pd.Series([2, 1], index=[0.1, 0.2], dtype="Float64") / 3 tm.assert_series_equal(result, expected) diff --git a/pandas/tests/arrays/integer/test_function.py b/pandas/tests/arrays/integer/test_function.py index 6f53b44776900..3d8c93fbd507f 100644 --- a/pandas/tests/arrays/integer/test_function.py +++ b/pandas/tests/arrays/integer/test_function.py @@ -118,8 +118,8 @@ def test_value_counts_na(): def test_value_counts_empty(): # https://github.com/pandas-dev/pandas/issues/33317 - s = pd.Series([], dtype="Int64") - result = s.value_counts() + ser = pd.Series([], dtype="Int64") + result = ser.value_counts() # TODO: The dtype of the index seems wrong (it's int64 for non-empty) idx = pd.Index([], dtype="object") expected = pd.Series([], index=idx, dtype="Int64") @@ -128,8 +128,8 @@ def test_value_counts_empty(): def test_value_counts_with_normalize(): # GH 33172 - s = pd.Series([1, 2, 1, pd.NA], dtype="Int64") - result = s.value_counts(normalize=True) + ser = pd.Series([1, 2, 1, pd.NA], dtype="Int64") + result = ser.value_counts(normalize=True) expected = pd.Series([2, 1], index=[1, 2], dtype="Float64") / 3 tm.assert_series_equal(result, expected) diff --git a/pandas/tests/arrays/string_/test_string.py b/pandas/tests/arrays/string_/test_string.py index 501a79a8bc5ed..092fc6a460fca 100644 --- a/pandas/tests/arrays/string_/test_string.py +++ b/pandas/tests/arrays/string_/test_string.py @@ -475,8 +475,8 @@ def test_value_counts_na(dtype): def test_value_counts_with_normalize(dtype): - s = pd.Series(["a", "b", "a", pd.NA], dtype=dtype) - result = s.value_counts(normalize=True) + ser = pd.Series(["a", "b", "a", pd.NA], dtype=dtype) + result = 
ser.value_counts(normalize=True) expected = pd.Series([2, 1], index=["a", "b"], dtype="Float64") / 3 tm.assert_series_equal(result, expected) @@ -518,8 +518,8 @@ def test_memory_usage(dtype): @pytest.mark.parametrize("float_dtype", [np.float16, np.float32, np.float64]) def test_astype_from_float_dtype(float_dtype, dtype): # https://github.com/pandas-dev/pandas/issues/36451 - s = pd.Series([0.1], dtype=float_dtype) - result = s.astype(dtype) + ser = pd.Series([0.1], dtype=float_dtype) + result = ser.astype(dtype) expected = pd.Series(["0.1"], dtype=dtype) tm.assert_series_equal(result, expected) diff --git a/pandas/tests/computation/test_compat.py b/pandas/tests/computation/test_compat.py index 6d6aa08204c3f..5e76fa4cbdb4f 100644 --- a/pandas/tests/computation/test_compat.py +++ b/pandas/tests/computation/test_compat.py @@ -29,13 +29,13 @@ def test_compat(): @pytest.mark.parametrize("parser", expr.PARSERS) def test_invalid_numexpr_version(engine, parser): def testit(): - a, b = 1, 2 # noqa + a, b = 1, 2 # noqa:F841 res = pd.eval("a + b", engine=engine, parser=parser) assert res == 3 if engine == "numexpr": try: - import numexpr as ne # noqa F401 + import numexpr as ne # noqa:F401 except ImportError: pytest.skip("no numexpr") else: diff --git a/pandas/tests/computation/test_eval.py b/pandas/tests/computation/test_eval.py index dfdb9dabf6bed..5c614dac2bcb9 100644 --- a/pandas/tests/computation/test_eval.py +++ b/pandas/tests/computation/test_eval.py @@ -704,7 +704,7 @@ def test_identical(self): tm.assert_numpy_array_equal(result, np.array([1.5])) assert result.shape == (1,) - x = np.array([False]) # noqa + x = np.array([False]) # noqa:F841 result = pd.eval("x", engine=self.engine, parser=self.parser) tm.assert_numpy_array_equal(result, np.array([False])) assert result.shape == (1,) @@ -1239,7 +1239,7 @@ def test_truediv(self): assert res == expec def test_failing_subscript_with_name_error(self): - df = DataFrame(np.random.randn(5, 3)) # noqa + df = 
DataFrame(np.random.randn(5, 3)) # noqa:F841 with pytest.raises(NameError, match="name 'x' is not defined"): self.eval("df[x > 2] > 2") @@ -1304,7 +1304,7 @@ def test_assignment_column(self): # with a local name overlap def f(): df = orig_df.copy() - a = 1 # noqa + a = 1 # noqa:F841 df.eval("a = 1 + b", inplace=True) return df @@ -1316,7 +1316,7 @@ def f(): df = orig_df.copy() def f(): - a = 1 # noqa + a = 1 # noqa:F841 old_a = df.a.copy() df.eval("a = a + b", inplace=True) result = old_a + df.b @@ -1629,7 +1629,7 @@ class TestOperationsNumExprPython(TestOperationsNumExprPandas): parser = "python" def test_check_many_exprs(self): - a = 1 # noqa + a = 1 # noqa:F841 expr = " * ".join("a" * 33) expected = 1 res = pd.eval(expr, engine=self.engine, parser=self.parser) @@ -1669,14 +1669,14 @@ def test_fails_not(self): ) def test_fails_ampersand(self): - df = DataFrame(np.random.randn(5, 3)) # noqa + df = DataFrame(np.random.randn(5, 3)) # noqa:F841 ex = "(df + 2)[df > 1] > 0 & (df > 0)" msg = "cannot evaluate scalar only bool ops" with pytest.raises(NotImplementedError, match=msg): pd.eval(ex, parser=self.parser, engine=self.engine) def test_fails_pipe(self): - df = DataFrame(np.random.randn(5, 3)) # noqa + df = DataFrame(np.random.randn(5, 3)) # noqa:F841 ex = "(df + 2)[df > 1] > 0 | (df > 0)" msg = "cannot evaluate scalar only bool ops" with pytest.raises(NotImplementedError, match=msg): @@ -1851,7 +1851,7 @@ def test_no_new_locals(self, engine, parser): assert lcls == lcls2 def test_no_new_globals(self, engine, parser): - x = 1 # noqa + x = 1 # noqa:F841 gbls = globals().copy() pd.eval("x + 1", engine=engine, parser=parser) gbls2 = globals().copy() @@ -1936,7 +1936,7 @@ def test_name_error_exprs(engine, parser): @pytest.mark.parametrize("express", ["a + @b", "@a + b", "@a + @b"]) def test_invalid_local_variable_reference(engine, parser, express): - a, b = 1, 2 # noqa + a, b = 1, 2 # noqa:F841 if parser != "pandas": with pytest.raises(SyntaxError, match="The '@' prefix 
is only"): @@ -1980,7 +1980,7 @@ def test_more_than_one_expression_raises(engine, parser): def test_bool_ops_fails_on_scalars(lhs, cmp, rhs, engine, parser): gen = {int: lambda: np.random.randint(10), float: np.random.randn} - mid = gen[lhs]() # noqa + mid = gen[lhs]() # noqa:F841 lhs = gen[lhs]() rhs = gen[rhs]() diff --git a/pandas/tests/frame/test_query_eval.py b/pandas/tests/frame/test_query_eval.py index f7eb9dfccc382..331c21de8e4bd 100644 --- a/pandas/tests/frame/test_query_eval.py +++ b/pandas/tests/frame/test_query_eval.py @@ -109,7 +109,7 @@ def test_ops(self, op_str, op, rop, n): df.iloc[0] = 2 m = df.mean() - base = DataFrame( # noqa + base = DataFrame( # noqa:F841 np.tile(m.values, n).reshape(n, -1), columns=list("abcd") ) @@ -492,7 +492,7 @@ def test_query_scope(self): df = DataFrame(np.random.randn(20, 2), columns=list("ab")) - a, b = 1, 2 # noqa + a, b = 1, 2 # noqa:F841 res = df.query("a > b", engine=engine, parser=parser) expected = df[df.a > df.b] tm.assert_frame_equal(res, expected) @@ -661,7 +661,7 @@ def test_local_variable_with_in(self): def test_at_inside_string(self): engine, parser = self.engine, self.parser skip_if_no_pandas_parser(parser) - c = 1 # noqa + c = 1 # noqa:F841 df = DataFrame({"a": ["a", "a", "b", "b", "@c", "@c"]}) result = df.query('a == "@c"', engine=engine, parser=parser) expected = df[df.a == "@c"] @@ -680,7 +680,7 @@ def test_query_undefined_local(self): df.query("a == @c", engine=engine, parser=parser) def test_index_resolvers_come_after_columns_with_the_same_name(self): - n = 1 # noqa + n = 1 # noqa:F841 a = np.r_[20:101:20] df = DataFrame({"index": a, "b": np.random.randn(a.size)}) @@ -834,7 +834,7 @@ def test_nested_scope(self): engine = self.engine parser = self.parser # smoke test - x = 1 # noqa + x = 1 # noqa:F841 result = pd.eval("x + 1", engine=engine, parser=parser) assert result == 2 @@ -1073,7 +1073,7 @@ def test_query_string_scalar_variable(self, parser, engine): } ) e = df[df.Symbol == "BUD US"] - symb = 
"BUD US" # noqa + symb = "BUD US" # noqa:F841 r = df.query("Symbol == @symb", parser=parser, engine=engine) tm.assert_frame_equal(e, r) @@ -1255,7 +1255,7 @@ def test_call_non_named_expression(self, df): def func(*_): return 1 - funcs = [func] # noqa + funcs = [func] # noqa:F841 df.eval("@func()") diff --git a/pandas/tests/indexes/categorical/test_formats.py b/pandas/tests/indexes/categorical/test_formats.py index 98948c2113bbe..044b03579d535 100644 --- a/pandas/tests/indexes/categorical/test_formats.py +++ b/pandas/tests/indexes/categorical/test_formats.py @@ -16,7 +16,7 @@ def test_format_different_scalar_lengths(self): def test_string_categorical_index_repr(self): # short idx = CategoricalIndex(["a", "bb", "ccc"]) - expected = """CategoricalIndex(['a', 'bb', 'ccc'], categories=['a', 'bb', 'ccc'], ordered=False, dtype='category')""" # noqa + expected = """CategoricalIndex(['a', 'bb', 'ccc'], categories=['a', 'bb', 'ccc'], ordered=False, dtype='category')""" # noqa:E501 assert repr(idx) == expected # multiple lines @@ -33,7 +33,7 @@ def test_string_categorical_index_repr(self): expected = """CategoricalIndex(['a', 'bb', 'ccc', 'a', 'bb', 'ccc', 'a', 'bb', 'ccc', 'a', ... 
'ccc', 'a', 'bb', 'ccc', 'a', 'bb', 'ccc', 'a', 'bb', 'ccc'], - categories=['a', 'bb', 'ccc'], ordered=False, dtype='category', length=300)""" # noqa + categories=['a', 'bb', 'ccc'], ordered=False, dtype='category', length=300)""" # noqa:E501 assert repr(idx) == expected @@ -41,13 +41,13 @@ def test_string_categorical_index_repr(self): idx = CategoricalIndex(list("abcdefghijklmmo")) expected = """CategoricalIndex(['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l', 'm', 'm', 'o'], - categories=['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', ...], ordered=False, dtype='category')""" # noqa + categories=['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', ...], ordered=False, dtype='category')""" # noqa:E501 assert repr(idx) == expected # short idx = CategoricalIndex(["あ", "いい", "ううう"]) - expected = """CategoricalIndex(['あ', 'いい', 'ううう'], categories=['あ', 'いい', 'ううう'], ordered=False, dtype='category')""" # noqa + expected = """CategoricalIndex(['あ', 'いい', 'ううう'], categories=['あ', 'いい', 'ううう'], ordered=False, dtype='category')""" # noqa:E501 assert repr(idx) == expected # multiple lines @@ -64,7 +64,7 @@ def test_string_categorical_index_repr(self): expected = """CategoricalIndex(['あ', 'いい', 'ううう', 'あ', 'いい', 'ううう', 'あ', 'いい', 'ううう', 'あ', ... 
'ううう', 'あ', 'いい', 'ううう', 'あ', 'いい', 'ううう', 'あ', 'いい', 'ううう'], - categories=['あ', 'いい', 'ううう'], ordered=False, dtype='category', length=300)""" # noqa + categories=['あ', 'いい', 'ううう'], ordered=False, dtype='category', length=300)""" # noqa:E501 assert repr(idx) == expected @@ -72,7 +72,7 @@ def test_string_categorical_index_repr(self): idx = CategoricalIndex(list("あいうえおかきくけこさしすせそ")) expected = """CategoricalIndex(['あ', 'い', 'う', 'え', 'お', 'か', 'き', 'く', 'け', 'こ', 'さ', 'し', 'す', 'せ', 'そ'], - categories=['あ', 'い', 'う', 'え', 'お', 'か', 'き', 'く', ...], ordered=False, dtype='category')""" # noqa + categories=['あ', 'い', 'う', 'え', 'お', 'か', 'き', 'く', ...], ordered=False, dtype='category')""" # noqa:E501 assert repr(idx) == expected @@ -81,7 +81,7 @@ def test_string_categorical_index_repr(self): # short idx = CategoricalIndex(["あ", "いい", "ううう"]) - expected = """CategoricalIndex(['あ', 'いい', 'ううう'], categories=['あ', 'いい', 'ううう'], ordered=False, dtype='category')""" # noqa + expected = """CategoricalIndex(['あ', 'いい', 'ううう'], categories=['あ', 'いい', 'ううう'], ordered=False, dtype='category')""" # noqa:E501 assert repr(idx) == expected # multiple lines @@ -101,7 +101,7 @@ def test_string_categorical_index_repr(self): ... 
'ううう', 'あ', 'いい', 'ううう', 'あ', 'いい', 'ううう', 'あ', 'いい', 'ううう'], - categories=['あ', 'いい', 'ううう'], ordered=False, dtype='category', length=300)""" # noqa + categories=['あ', 'いい', 'ううう'], ordered=False, dtype='category', length=300)""" # noqa:E501 assert repr(idx) == expected @@ -109,6 +109,6 @@ def test_string_categorical_index_repr(self): idx = CategoricalIndex(list("あいうえおかきくけこさしすせそ")) expected = """CategoricalIndex(['あ', 'い', 'う', 'え', 'お', 'か', 'き', 'く', 'け', 'こ', 'さ', 'し', 'す', 'せ', 'そ'], - categories=['あ', 'い', 'う', 'え', 'お', 'か', 'き', 'く', ...], ordered=False, dtype='category')""" # noqa + categories=['あ', 'い', 'う', 'え', 'お', 'か', 'き', 'く', ...], ordered=False, dtype='category')""" # noqa:E501 assert repr(idx) == expected diff --git a/pandas/tests/indexing/multiindex/test_setitem.py b/pandas/tests/indexing/multiindex/test_setitem.py index b97aaf6c551d8..2a12d690ff0bd 100644 --- a/pandas/tests/indexing/multiindex/test_setitem.py +++ b/pandas/tests/indexing/multiindex/test_setitem.py @@ -368,8 +368,7 @@ def test_frame_setitem_multi_column2(self): assert sliced_a2.name == ("A", "2") assert sliced_b1.name == ("B", "1") - # TODO: no setitem here? 
- def test_getitem_setitem_tuple_plus_columns( + def test_loc_getitem_tuple_plus_columns( self, multiindex_year_month_day_dataframe_random_data ): # GH #1013 diff --git a/pandas/tests/io/conftest.py b/pandas/tests/io/conftest.py index 5d4705dbe7d77..8c34ca8056a07 100644 --- a/pandas/tests/io/conftest.py +++ b/pandas/tests/io/conftest.py @@ -131,12 +131,12 @@ def add_tips_files(bucket_name): try: cli.create_bucket(Bucket=bucket) - except: # noqa + except Exception: # OK is bucket already exists pass try: cli.create_bucket(Bucket="cant_get_it", ACL="private") - except: # noqa + except Exception: # OK is bucket already exists pass timeout = 2 @@ -153,11 +153,11 @@ def add_tips_files(bucket_name): try: s3.rm(bucket, recursive=True) - except: # noqa + except Exception: pass try: s3.rm("cant_get_it", recursive=True) - except: # noqa + except Exception: pass timeout = 2 while cli.list_buckets()["Buckets"] and timeout > 0: diff --git a/pandas/tests/io/parser/common/test_common_basic.py b/pandas/tests/io/parser/common/test_common_basic.py index 6d958f46a49dd..96c3709fdb3d8 100644 --- a/pandas/tests/io/parser/common/test_common_basic.py +++ b/pandas/tests/io/parser/common/test_common_basic.py @@ -385,7 +385,7 @@ def test_escapechar(all_parsers): data = '''SEARCH_TERM,ACTUAL_URL "bra tv board","http://www.ikea.com/se/sv/catalog/categories/departments/living_room/10475/?se%7cps%7cnonbranded%7cvardagsrum%7cgoogle%7ctv_bord" "tv p\xc3\xa5 hjul","http://www.ikea.com/se/sv/catalog/categories/departments/living_room/10475/?se%7cps%7cnonbranded%7cvardagsrum%7cgoogle%7ctv_bord" -"SLAGBORD, \\"Bergslagen\\", IKEA:s 1700-tals series","http://www.ikea.com/se/sv/catalog/categories/departments/living_room/10475/?se%7cps%7cnonbranded%7cvardagsrum%7cgoogle%7ctv_bord"''' # noqa +"SLAGBORD, \\"Bergslagen\\", IKEA:s 1700-tals series","http://www.ikea.com/se/sv/catalog/categories/departments/living_room/10475/?se%7cps%7cnonbranded%7cvardagsrum%7cgoogle%7ctv_bord"''' # noqa:E501 parser = 
all_parsers result = parser.read_csv( @@ -491,7 +491,7 @@ def test_read_empty_with_usecols(all_parsers, data, kwargs, expected): ], ) def test_trailing_spaces(all_parsers, kwargs, expected): - data = "A B C \nrandom line with trailing spaces \nskip\n1,2,3\n1,2.,4.\nrandom line with trailing tabs\t\t\t\n \n5.1,NaN,10.0\n" # noqa + data = "A B C \nrandom line with trailing spaces \nskip\n1,2,3\n1,2.,4.\nrandom line with trailing tabs\t\t\t\n \n5.1,NaN,10.0\n" # noqa:E501 parser = all_parsers result = parser.read_csv(StringIO(data.replace(",", " ")), **kwargs) diff --git a/pandas/tests/io/parser/test_read_fwf.py b/pandas/tests/io/parser/test_read_fwf.py index d4e33543d8a04..910731bd7dde2 100644 --- a/pandas/tests/io/parser/test_read_fwf.py +++ b/pandas/tests/io/parser/test_read_fwf.py @@ -311,7 +311,7 @@ def test_fwf_regression(): def test_fwf_for_uint8(): data = """1421302965.213420 PRI=3 PGN=0xef00 DST=0x17 SRC=0x28 04 154 00 00 00 00 00 127 -1421302964.226776 PRI=6 PGN=0xf002 SRC=0x47 243 00 00 255 247 00 00 71""" # noqa +1421302964.226776 PRI=6 PGN=0xf002 SRC=0x47 243 00 00 255 247 00 00 71""" # noqa:E501 df = read_fwf( StringIO(data), colspecs=[(0, 17), (25, 26), (33, 37), (49, 51), (58, 62), (63, 1000)], diff --git a/pandas/tests/io/test_stata.py b/pandas/tests/io/test_stata.py index f4f79c915b317..fa2305d11f901 100644 --- a/pandas/tests/io/test_stata.py +++ b/pandas/tests/io/test_stata.py @@ -1135,7 +1135,7 @@ def test_read_chunks_117( ): fname = getattr(self, file) - with warnings.catch_warnings(record=True) as w: + with warnings.catch_warnings(record=True): warnings.simplefilter("always") parsed = read_stata( fname, @@ -1151,7 +1151,7 @@ def test_read_chunks_117( pos = 0 for j in range(5): - with warnings.catch_warnings(record=True) as w: # noqa + with warnings.catch_warnings(record=True): warnings.simplefilter("always") try: chunk = itr.read(chunksize) @@ -1232,7 +1232,7 @@ def test_read_chunks_115( fname = getattr(self, file) # Read the whole file - with 
warnings.catch_warnings(record=True) as w: + with warnings.catch_warnings(record=True): warnings.simplefilter("always") parsed = read_stata( fname, @@ -1249,7 +1249,7 @@ def test_read_chunks_115( ) pos = 0 for j in range(5): - with warnings.catch_warnings(record=True) as w: # noqa + with warnings.catch_warnings(record=True): warnings.simplefilter("always") try: chunk = itr.read(chunksize) diff --git a/pandas/tests/plotting/test_series.py b/pandas/tests/plotting/test_series.py index e40798f4f5125..5a80df8d6c779 100644 --- a/pandas/tests/plotting/test_series.py +++ b/pandas/tests/plotting/test_series.py @@ -720,7 +720,7 @@ def test_custom_business_day_freq(self): _check_plot_works(s.plot) - @pytest.mark.xfail + @pytest.mark.xfail(reason="TODO: reason?") def test_plot_accessor_updates_on_inplace(self): s = Series([1, 2, 3, 4]) _, ax = self.plt.subplots() diff --git a/pandas/tests/resample/test_base.py b/pandas/tests/resample/test_base.py index 450bd8b05ea43..2dae9ee48a90a 100644 --- a/pandas/tests/resample/test_base.py +++ b/pandas/tests/resample/test_base.py @@ -20,7 +20,7 @@ # a fixture value can be overridden by the test parameter value. Note that the # value of the fixture can be overridden this way even if the test doesn't use # it directly (doesn't mention it in the function prototype). 
-# see https://docs.pytest.org/en/latest/fixture.html#override-a-fixture-with-direct-test-parametrization # noqa +# see https://docs.pytest.org/en/latest/fixture.html#override-a-fixture-with-direct-test-parametrization # noqa:E501 # in this module we override the fixture values defined in conftest.py # tuples of '_index_factory,_series_name,_index_start,_index_end' DATE_RANGE = (date_range, "dti", datetime(2005, 1, 1), datetime(2005, 1, 10)) diff --git a/pandas/tests/series/methods/test_astype.py b/pandas/tests/series/methods/test_astype.py index 732d375d136d0..a20667655590b 100644 --- a/pandas/tests/series/methods/test_astype.py +++ b/pandas/tests/series/methods/test_astype.py @@ -128,7 +128,7 @@ def test_astype_no_pandas_dtype(self): def test_astype_generic_timestamp_no_frequency(self, dtype, request): # see GH#15524, GH#15987 data = [1] - s = Series(data) + ser = Series(data) if np.dtype(dtype).name not in ["timedelta64", "datetime64"]: mark = pytest.mark.xfail(reason="GH#33890 Is assigned ns unit") @@ -139,7 +139,7 @@ def test_astype_generic_timestamp_no_frequency(self, dtype, request): fr"Please pass in '{dtype.__name__}\[ns\]' instead." 
) with pytest.raises(ValueError, match=msg): - s.astype(dtype) + ser.astype(dtype) def test_astype_dt64_to_str(self): # GH#10442 : testing astype(str) is correct for Series/DatetimeIndex diff --git a/pandas/tests/series/test_repr.py b/pandas/tests/series/test_repr.py index 555342dd39005..d3ff7f4dc7b4c 100644 --- a/pandas/tests/series/test_repr.py +++ b/pandas/tests/series/test_repr.py @@ -355,7 +355,7 @@ def test_categorical_series_repr_datetime(self): 4 2011-01-01 13:00:00 dtype: category Categories (5, datetime64[ns]): [2011-01-01 09:00:00, 2011-01-01 10:00:00, 2011-01-01 11:00:00, - 2011-01-01 12:00:00, 2011-01-01 13:00:00]""" # noqa + 2011-01-01 12:00:00, 2011-01-01 13:00:00]""" # noqa:E501 assert repr(s) == exp @@ -369,7 +369,7 @@ def test_categorical_series_repr_datetime(self): dtype: category Categories (5, datetime64[ns, US/Eastern]): [2011-01-01 09:00:00-05:00, 2011-01-01 10:00:00-05:00, 2011-01-01 11:00:00-05:00, 2011-01-01 12:00:00-05:00, - 2011-01-01 13:00:00-05:00]""" # noqa + 2011-01-01 13:00:00-05:00]""" # noqa:E501 assert repr(s) == exp @@ -383,7 +383,7 @@ def test_categorical_series_repr_datetime_ordered(self): 4 2011-01-01 13:00:00 dtype: category Categories (5, datetime64[ns]): [2011-01-01 09:00:00 < 2011-01-01 10:00:00 < 2011-01-01 11:00:00 < - 2011-01-01 12:00:00 < 2011-01-01 13:00:00]""" # noqa + 2011-01-01 12:00:00 < 2011-01-01 13:00:00]""" # noqa:E501 assert repr(s) == exp @@ -397,7 +397,7 @@ def test_categorical_series_repr_datetime_ordered(self): dtype: category Categories (5, datetime64[ns, US/Eastern]): [2011-01-01 09:00:00-05:00 < 2011-01-01 10:00:00-05:00 < 2011-01-01 11:00:00-05:00 < 2011-01-01 12:00:00-05:00 < - 2011-01-01 13:00:00-05:00]""" # noqa + 2011-01-01 13:00:00-05:00]""" # noqa:E501 assert repr(s) == exp @@ -411,7 +411,7 @@ def test_categorical_series_repr_period(self): 4 2011-01-01 13:00 dtype: category Categories (5, period[H]): [2011-01-01 09:00, 2011-01-01 10:00, 2011-01-01 11:00, 2011-01-01 12:00, - 2011-01-01 13:00]""" 
# noqa + 2011-01-01 13:00]""" # noqa:E501 assert repr(s) == exp @@ -437,7 +437,7 @@ def test_categorical_series_repr_period_ordered(self): 4 2011-01-01 13:00 dtype: category Categories (5, period[H]): [2011-01-01 09:00 < 2011-01-01 10:00 < 2011-01-01 11:00 < 2011-01-01 12:00 < - 2011-01-01 13:00]""" # noqa + 2011-01-01 13:00]""" # noqa:E501 assert repr(s) == exp @@ -481,7 +481,7 @@ def test_categorical_series_repr_timedelta(self): dtype: category Categories (10, timedelta64[ns]): [0 days 01:00:00, 1 days 01:00:00, 2 days 01:00:00, 3 days 01:00:00, ..., 6 days 01:00:00, 7 days 01:00:00, - 8 days 01:00:00, 9 days 01:00:00]""" # noqa + 8 days 01:00:00, 9 days 01:00:00]""" # noqa:E501 assert repr(s) == exp @@ -513,6 +513,6 @@ def test_categorical_series_repr_timedelta_ordered(self): dtype: category Categories (10, timedelta64[ns]): [0 days 01:00:00 < 1 days 01:00:00 < 2 days 01:00:00 < 3 days 01:00:00 ... 6 days 01:00:00 < 7 days 01:00:00 < - 8 days 01:00:00 < 9 days 01:00:00]""" # noqa + 8 days 01:00:00 < 9 days 01:00:00]""" # noqa:E501 assert repr(s) == exp diff --git a/pandas/tests/test_downstream.py b/pandas/tests/test_downstream.py index f927a0ec0927b..a15658ad43498 100644 --- a/pandas/tests/test_downstream.py +++ b/pandas/tests/test_downstream.py @@ -5,7 +5,7 @@ import subprocess import sys -import numpy as np # noqa +import numpy as np # noqa:F401 needed in namespace for statsmodels import pytest import pandas.util._test_decorators as td @@ -100,7 +100,7 @@ def test_oo_optimized_datetime_index_unpickle(): ) def test_statsmodels(): - statsmodels = import_module("statsmodels") # noqa + statsmodels = import_module("statsmodels") # noqa:F841 import statsmodels.api as sm import statsmodels.formula.api as smf diff --git a/pandas/tseries/api.py b/pandas/tseries/api.py index 2094791ecdc60..59666fa0048dd 100644 --- a/pandas/tseries/api.py +++ b/pandas/tseries/api.py @@ -2,7 +2,7 @@ Timeseries API """ -# flake8: noqa +# flake8: noqa:F401 from pandas.tseries.frequencies 
import infer_freq import pandas.tseries.offsets as offsets diff --git a/pandas/tseries/frequencies.py b/pandas/tseries/frequencies.py index fc01771507888..415af96a29aa3 100644 --- a/pandas/tseries/frequencies.py +++ b/pandas/tseries/frequencies.py @@ -399,10 +399,12 @@ def _is_business_daily(self) -> bool: shifts = np.diff(self.index.asi8) shifts = np.floor_divide(shifts, _ONE_DAY) weekdays = np.mod(first_weekday + np.cumsum(shifts), 7) - # error: Incompatible return value type (got "bool_", expected "bool") - return np.all( # type: ignore[return-value] - ((weekdays == 0) & (shifts == 3)) - | ((weekdays > 0) & (weekdays <= 4) & (shifts == 1)) + + return bool( + np.all( + ((weekdays == 0) & (shifts == 3)) + | ((weekdays > 0) & (weekdays <= 4) & (shifts == 1)) + ) ) def _get_wom_rule(self) -> str | None: diff --git a/pandas/util/__init__.py b/pandas/util/__init__.py index 35a88a802003e..7adfca73c2f1e 100644 --- a/pandas/util/__init__.py +++ b/pandas/util/__init__.py @@ -1,10 +1,10 @@ -from pandas.util._decorators import ( # noqa +from pandas.util._decorators import ( # noqa:F401 Appender, Substitution, cache_readonly, ) -from pandas.core.util.hashing import ( # noqa +from pandas.core.util.hashing import ( # noqa:F401 hash_array, hash_pandas_object, ) diff --git a/pandas/util/_decorators.py b/pandas/util/_decorators.py index d98b0d24d22b9..a936b8d1f585c 100644 --- a/pandas/util/_decorators.py +++ b/pandas/util/_decorators.py @@ -11,7 +11,7 @@ ) import warnings -from pandas._libs.properties import cache_readonly # noqa +from pandas._libs.properties import cache_readonly # noqa:F401 from pandas._typing import F diff --git a/pandas/util/_tester.py b/pandas/util/_tester.py index 1bdf0d8483c76..541776619a2d3 100644 --- a/pandas/util/_tester.py +++ b/pandas/util/_tester.py @@ -13,7 +13,7 @@ def test(extra_args=None): except ImportError as err: raise ImportError("Need pytest>=5.0.1 to run tests") from err try: - import hypothesis # noqa + import hypothesis # noqa:F401 except 
ImportError as err: raise ImportError("Need hypothesis>=3.58 to run tests") from err cmd = ["--skip-slow", "--skip-network", "--skip-db"] diff --git a/pandas/util/testing.py b/pandas/util/testing.py index 0ab59a202149d..db9bfc274cd78 100644 --- a/pandas/util/testing.py +++ b/pandas/util/testing.py @@ -2,7 +2,7 @@ from pandas.util._exceptions import find_stack_level -from pandas._testing import * # noqa +from pandas._testing import * # noqa:F401,F403,PDF014 warnings.warn( (
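The `pandas/tests/io/conftest.py` hunks in the diff above replace bare `except:` clauses (previously silenced with `# noqa`) with `except Exception:`. A small standalone sketch — not pandas code — of why the narrower form is preferred: a bare `except` also swallows `SystemExit` and `KeyboardInterrupt`, which derive from `BaseException` rather than `Exception`:

```python
def swallowed_by_bare_except() -> bool:
    """A bare `except:` catches BaseException subclasses such as SystemExit."""
    try:
        raise SystemExit(1)
    except:  # noqa:E722 - deliberately bare, for illustration only
        return True


def swallowed_by_except_exception() -> bool:
    """`except Exception:` lets SystemExit propagate past it."""
    try:
        raise SystemExit(1)
    except Exception:
        return True
    except BaseException:
        # Only reached because SystemExit is not an Exception subclass.
        return False


print(swallowed_by_bare_except())        # True: SystemExit was caught
print(swallowed_by_except_exception())   # False: SystemExit escaped `except Exception`
```

This is why the cleanup keeps the "OK is bucket already exists" behavior while no longer trapping interpreter-shutdown signals.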
- [ ] closes #xxxx - [ ] tests added / passed - [ ] Ensure all linting tests pass, see [here](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit) for how to run them - [ ] whatsnew entry
https://api.github.com/repos/pandas-dev/pandas/pulls/44529
2021-11-19T20:42:38Z
2021-11-20T04:51:23Z
2021-11-20T04:51:23Z
2021-11-22T15:02:16Z
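The `_is_business_daily` change in the diff above replaces a `# type: ignore[return-value]` with an explicit `bool(...)` cast. The reason: `np.all` returns a NumPy scalar (`np.bool_`), which is not a subclass of Python's `bool`, so mypy rejects it as the return value of a function annotated `-> bool`. A minimal sketch, using made-up weekday/shift values rather than real index data:

```python
import numpy as np

# Illustrative inputs: weekday of each step and the day-count of each shift.
shifts = np.array([1, 1, 3, 1])
weekdays = np.array([1, 2, 0, 4])

# Same boolean expression as _is_business_daily: a Monday (0) must follow
# a 3-day shift, a Tue-Fri (1..4) must follow a 1-day shift.
mask = ((weekdays == 0) & (shifts == 3)) | (
    (weekdays > 0) & (weekdays <= 4) & (shifts == 1)
)

# np.all returns a NumPy scalar, not a Python bool.
raw = np.all(mask)
assert isinstance(raw, np.bool_)
assert not isinstance(raw, bool)  # np.bool_ does not subclass bool

# Wrapping in bool() yields a genuine Python bool, so the
# `# type: ignore[return-value]` comment can be dropped.
assert isinstance(bool(raw), bool)
```

The same consideration applies to `np.any` and to reductions like `arr.all()`: cast at the boundary where a plain `bool` is promised.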
Comment deleted as build on Mac failed. Suggested by @marcogorelli.
diff --git a/doc/source/development/contributing_environment.rst b/doc/source/development/contributing_environment.rst index 4c3c12eb9da92..4ea3701dec029 100644 --- a/doc/source/development/contributing_environment.rst +++ b/doc/source/development/contributing_environment.rst @@ -165,7 +165,7 @@ We'll now kick off a three-step process: At this point you should be able to import pandas from your locally built version:: - $ python # start an interpreter + $ python >>> import pandas >>> print(pandas.__version__) 0.22.0.dev0+29.g4ad6d4d74
- [ ] closes #xxxx - [ ] tests added / passed - [ ] Ensure all linting tests pass, see [here](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit) for how to run them - [ ] whatsnew entry
https://api.github.com/repos/pandas-dev/pandas/pulls/44525
2021-11-19T12:26:48Z
2021-11-19T15:02:29Z
2021-11-19T15:02:29Z
2021-11-19T15:02:40Z
validate PR09 errors
diff --git a/ci/code_checks.sh b/ci/code_checks.sh index ef026c8e69dbb..744f934142a24 100755 --- a/ci/code_checks.sh +++ b/ci/code_checks.sh @@ -93,8 +93,8 @@ fi ### DOCSTRINGS ### if [[ -z "$CHECK" || "$CHECK" == "docstrings" ]]; then - MSG='Validate docstrings (GL02, GL03, GL04, GL05, GL06, GL07, GL09, GL10, SS01, SS02, SS03, SS04, SS05, PR03, PR04, PR05, PR10, EX04, RT01, RT04, RT05, SA02, SA03)' ; echo $MSG - $BASE_DIR/scripts/validate_docstrings.py --format=actions --errors=GL02,GL03,GL04,GL05,GL06,GL07,GL09,GL10,SS02,SS03,SS04,SS05,PR03,PR04,PR05,PR10,EX04,RT01,RT04,RT05,SA02,SA03 + MSG='Validate docstrings (GL02, GL03, GL04, GL05, GL06, GL07, GL09, GL10, SS01, SS02, SS03, SS04, SS05, PR03, PR04, PR05, PR09, PR10, EX04, RT01, RT04, RT05, SA02, SA03)' ; echo $MSG + $BASE_DIR/scripts/validate_docstrings.py --format=actions --errors=GL02,GL03,GL04,GL05,GL06,GL07,GL09,GL10,SS02,SS03,SS04,SS05,PR03,PR04,PR05,PR09,PR10,EX04,RT01,RT04,RT05,SA02,SA03 RET=$(($RET + $?)) ; echo $MSG "DONE" fi diff --git a/pandas/_libs/tslibs/timestamps.pyx b/pandas/_libs/tslibs/timestamps.pyx index 9ea347594229f..00149dec0790f 100644 --- a/pandas/_libs/tslibs/timestamps.pyx +++ b/pandas/_libs/tslibs/timestamps.pyx @@ -1019,7 +1019,7 @@ class Timestamp(_Timestamp): Due to daylight saving time, one wall clock time can occur twice when shifting from summer to winter time; fold describes whether the datetime-like corresponds to the first (0) or the second time (1) - the wall clock hits the ambiguous time + the wall clock hits the ambiguous time. .. versionadded:: 1.1.0 diff --git a/pandas/core/frame.py b/pandas/core/frame.py index 0960ab4a81149..66d3800301f92 100644 --- a/pandas/core/frame.py +++ b/pandas/core/frame.py @@ -7748,7 +7748,7 @@ def update( Wild 185.0 We can also choose to include NA in group keys or not by setting -`dropna` parameter, the default setting is `True`: +`dropna` parameter, the default setting is `True`.
>>> l = [[1, 2, 3], [1, None, 4], [2, 1, 3], [1, 2, 2]] >>> df = pd.DataFrame(l, columns=["a", "b", "c"]) diff --git a/pandas/core/series.py b/pandas/core/series.py index e0a63b8e35105..f0f5bd7c3e2b2 100644 --- a/pandas/core/series.py +++ b/pandas/core/series.py @@ -1838,7 +1838,7 @@ def _set_name(self, name, inplace=False) -> Series: Name: Max Speed, dtype: float64 We can also choose to include `NA` in group keys or not by defining -`dropna` parameter, the default setting is `True`: +`dropna` parameter, the default setting is `True`. >>> ser = pd.Series([1, 2, 3, 3], index=["a", 'a', 'b', np.nan]) >>> ser.groupby(level=0).sum() diff --git a/pandas/core/shared_docs.py b/pandas/core/shared_docs.py index bc4f4d657b859..6bb7c1fa4418d 100644 --- a/pandas/core/shared_docs.py +++ b/pandas/core/shared_docs.py @@ -126,7 +126,7 @@ dropna : bool, default True If True, and if group keys contain NA values, NA values together with row/column will be dropped. - If False, NA values will also be treated as the key in groups + If False, NA values will also be treated as the key in groups. .. versionadded:: 1.1.0 diff --git a/pandas/core/window/rolling.py b/pandas/core/window/rolling.py index 73bdc626554a2..f9244462123bc 100644 --- a/pandas/core/window/rolling.py +++ b/pandas/core/window/rolling.py @@ -912,7 +912,7 @@ class Window(BaseWindow): If ``'neither'``, the first and last points in the window are excluded from calculations. - Default ``None`` (``'right'``) + Default ``None`` (``'right'``). .. versionchanged:: 1.2.0 diff --git a/pandas/io/formats/style.py b/pandas/io/formats/style.py index 6ddcbe276ba73..d9550f0940376 100644 --- a/pandas/io/formats/style.py +++ b/pandas/io/formats/style.py @@ -159,7 +159,7 @@ class Styler(StylerRenderer): in cell display string with HTML-safe sequences. Use 'latex' to replace the characters ``&``, ``%``, ``$``, ``#``, ``_``, ``{``, ``}``, ``~``, ``^``, and ``\`` in the cell display string with - LaTeX-safe sequences. 
If not given uses ``pandas.options.styler.format.escape`` + LaTeX-safe sequences. If not given uses ``pandas.options.styler.format.escape``. .. versionadded:: 1.3.0 formatter : str, callable, dict, optional @@ -527,7 +527,7 @@ def to_latex( position : str, optional The LaTeX positional argument (e.g. 'h!') for tables, placed in location: - \\begin{table}[<position>] + ``\\begin{table}[<position>]``. position_float : {"centering", "raggedleft", "raggedright"}, optional The LaTeX float command placed in location: @@ -2842,7 +2842,7 @@ def bar( When None (default): the maximum value of the data will be used. props : str, optional The base CSS of the cell that is extended to add the bar chart. Defaults to - `"width: 10em;"` + `"width: 10em;"`. .. versionadded:: 1.4.0 @@ -3143,7 +3143,7 @@ def highlight_quantile( ---------- %(subset)s color : str, default 'yellow' - Background color to use for highlighting + Background color to use for highlighting. axis : {0 or 'index', 1 or 'columns', None}, default 0 Axis along which to determine and highlight quantiles. If ``None`` quantiles are measured over the entire DataFrame. See examples. diff --git a/pandas/io/formats/style_render.py b/pandas/io/formats/style_render.py index ae4e05160e70a..a94d9ee5416c4 100644 --- a/pandas/io/formats/style_render.py +++ b/pandas/io/formats/style_render.py @@ -817,12 +817,12 @@ def format( .. versionadded:: 1.3.0 decimal : str, default "." - Character used as decimal separator for floats, complex and integers + Character used as decimal separator for floats, complex and integers. .. versionadded:: 1.3.0 thousands : str, optional, default None - Character used as thousands separator for floats, complex and integers + Character used as thousands separator for floats, complex and integers. .. versionadded:: 1.3.0 @@ -1011,9 +1011,9 @@ def format_index( Floating point precision to use for display purposes, if not determined by the specified ``formatter``. decimal : str, default "." 
- Character used as decimal separator for floats, complex and integers + Character used as decimal separator for floats, complex and integers. thousands : str, optional, default None - Character used as thousands separator for floats, complex and integers + Character used as thousands separator for floats, complex and integers. escape : str, optional Use 'html' to replace the characters ``&``, ``<``, ``>``, ``'``, and ``"`` in cell display string with HTML-safe sequences. diff --git a/pandas/io/parquet.py b/pandas/io/parquet.py index c4b9e36472092..db35b82ac1d50 100644 --- a/pandas/io/parquet.py +++ b/pandas/io/parquet.py @@ -457,7 +457,7 @@ def read_parquet( A file URL can also be a path to a directory that contains multiple partitioned parquet files. Both pyarrow and fastparquet support paths to directories as well as file URLs. A directory path could be: - ``file://localhost/path/to/tables`` or ``s3://bucket/partition_dir`` + ``file://localhost/path/to/tables`` or ``s3://bucket/partition_dir``. engine : {{'auto', 'pyarrow', 'fastparquet'}}, default 'auto' Parquet library to use. If 'auto', then the option ``io.parquet.engine`` is used. The default ``io.parquet.engine`` diff --git a/pandas/io/sql.py b/pandas/io/sql.py index 4be54ceaa2bcf..26869a660f4b4 100644 --- a/pandas/io/sql.py +++ b/pandas/io/sql.py @@ -376,7 +376,7 @@ def read_sql_query( rows to include in each chunk. dtype : Type name or dict of columns Data type for data or columns. E.g. np.float64 or - {‘a’: np.float64, ‘b’: np.int32, ‘c’: ‘Int64’} + {‘a’: np.float64, ‘b’: np.int32, ‘c’: ‘Int64’}. .. versionadded:: 1.3.0
- [ ] closes #xxxx - [ ] tests added / passed - [ ] Ensure all linting tests pass, see [here](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit) for how to run them - [ ] whatsnew entry
https://api.github.com/repos/pandas-dev/pandas/pulls/44524
2021-11-19T12:01:16Z
2021-11-23T14:01:20Z
2021-11-23T14:01:20Z
2021-11-23T15:29:06Z
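The PR09 code enabled above is numpydoc's "parameter description should finish with '.'" rule, which is why the rest of the diff appends periods to so many docstring fragments. A rough, hypothetical sketch of what such a check scans for — the real logic lives in `scripts/validate_docstrings.py` and numpydoc, and handles far more cases than this one-line-description toy:

```python
import re


def pr09_violations(docstring: str) -> list[str]:
    # Hypothetical simplification of numpydoc's PR09 rule: for each
    # "name : type" header in a Parameters section, check that the
    # following description line ends with a period.
    bad = []
    lines = docstring.splitlines()
    for i, line in enumerate(lines):
        m = re.match(r"^(\w+) : ", line.strip())
        if m and i + 1 < len(lines):
            desc = lines[i + 1].strip()
            if desc and not desc.endswith("."):
                bad.append(m.group(1))
    return bad


# The first description below is the exact pre-fix wording from
# pandas/core/shared_docs.py in the diff; the second is compliant.
doc = """
Parameters
----------
dropna : bool, default True
    If False, NA values will also be treated as the key in groups
win_type : str, optional
    Provide a window type.
"""

print(pr09_violations(doc))  # -> ['dropna']
```

Running the real validator over a module is `scripts/validate_docstrings.py --errors=PR09 <object path>`; this sketch only illustrates the shape of the rule.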
COMPAT: Matplotlib 3.5.0
diff --git a/pandas/plotting/_matplotlib/compat.py b/pandas/plotting/_matplotlib/compat.py index 70ddd1ca09c7e..5569b1f2979b0 100644 --- a/pandas/plotting/_matplotlib/compat.py +++ b/pandas/plotting/_matplotlib/compat.py @@ -24,3 +24,4 @@ def inner(): mpl_ge_3_2_0 = _mpl_version("3.2.0", operator.ge) mpl_ge_3_3_0 = _mpl_version("3.3.0", operator.ge) mpl_ge_3_4_0 = _mpl_version("3.4.0", operator.ge) +mpl_ge_3_5_0 = _mpl_version("3.5.0", operator.ge) diff --git a/pandas/plotting/_matplotlib/converter.py b/pandas/plotting/_matplotlib/converter.py index ead0a2129d29f..90d3f8d9836bf 100644 --- a/pandas/plotting/_matplotlib/converter.py +++ b/pandas/plotting/_matplotlib/converter.py @@ -357,8 +357,8 @@ def get_locator(self, dmin, dmax): locator = MilliSecondLocator(self.tz) locator.set_axis(self.axis) - locator.set_view_interval(*self.axis.get_view_interval()) - locator.set_data_interval(*self.axis.get_data_interval()) + locator.axis.set_view_interval(*self.axis.get_view_interval()) + locator.axis.set_data_interval(*self.axis.get_data_interval()) return locator return dates.AutoDateLocator.get_locator(self, dmin, dmax) diff --git a/pandas/plotting/_matplotlib/core.py b/pandas/plotting/_matplotlib/core.py index ba47391513ed2..08dc9538227f7 100644 --- a/pandas/plotting/_matplotlib/core.py +++ b/pandas/plotting/_matplotlib/core.py @@ -1036,6 +1036,7 @@ def _plot_colorbar(self, ax: Axes, **kwds): # use the last one which contains the latest information # about the ax img = ax.collections[-1] + ax.grid(False) cbar = self.fig.colorbar(img, ax=ax, **kwds) if mpl_ge_3_0_0(): diff --git a/pandas/tests/plotting/common.py b/pandas/tests/plotting/common.py index e2b6b5ab3319c..52127b926f1fa 100644 --- a/pandas/tests/plotting/common.py +++ b/pandas/tests/plotting/common.py @@ -45,6 +45,8 @@ def setup_method(self, method): from pandas.plotting._matplotlib import compat + self.compat = compat + mpl.rcdefaults() self.start_date_to_int64 = 812419200000000000 @@ -569,6 +571,12 @@ def 
_unpack_cycler(self, rcParams, field="color"): """ return [v[field] for v in rcParams["axes.prop_cycle"]] + def get_x_axis(self, ax): + return ax._shared_axes["x"] if self.compat.mpl_ge_3_5_0() else ax._shared_x_axes + + def get_y_axis(self, ax): + return ax._shared_axes["y"] if self.compat.mpl_ge_3_5_0() else ax._shared_y_axes + def _check_plot_works(f, filterwarnings="always", default_axes=False, **kwargs): """ diff --git a/pandas/tests/plotting/frame/test_frame.py b/pandas/tests/plotting/frame/test_frame.py index ccd0bc3d16896..6c07366e402d6 100644 --- a/pandas/tests/plotting/frame/test_frame.py +++ b/pandas/tests/plotting/frame/test_frame.py @@ -525,8 +525,8 @@ def test_area_sharey_dont_overwrite(self): df.plot(ax=ax1, kind="area") df.plot(ax=ax2, kind="area") - assert ax1._shared_y_axes.joined(ax1, ax2) - assert ax2._shared_y_axes.joined(ax1, ax2) + assert self.get_y_axis(ax1).joined(ax1, ax2) + assert self.get_y_axis(ax2).joined(ax1, ax2) def test_bar_linewidth(self): df = DataFrame(np.random.randn(5, 5)) diff --git a/pandas/tests/plotting/frame/test_hist_box_by.py b/pandas/tests/plotting/frame/test_hist_box_by.py index ba6d232733762..c92d952587967 100644 --- a/pandas/tests/plotting/frame/test_hist_box_by.py +++ b/pandas/tests/plotting/frame/test_hist_box_by.py @@ -195,16 +195,16 @@ def test_axis_share_x_with_by(self): ax1, ax2, ax3 = self.hist_df.plot.hist(column="A", by="C", sharex=True) # share x - assert ax1._shared_x_axes.joined(ax1, ax2) - assert ax2._shared_x_axes.joined(ax1, ax2) - assert ax3._shared_x_axes.joined(ax1, ax3) - assert ax3._shared_x_axes.joined(ax2, ax3) + assert self.get_x_axis(ax1).joined(ax1, ax2) + assert self.get_x_axis(ax2).joined(ax1, ax2) + assert self.get_x_axis(ax3).joined(ax1, ax3) + assert self.get_x_axis(ax3).joined(ax2, ax3) # don't share y - assert not ax1._shared_y_axes.joined(ax1, ax2) - assert not ax2._shared_y_axes.joined(ax1, ax2) - assert not ax3._shared_y_axes.joined(ax1, ax3) - assert not 
ax3._shared_y_axes.joined(ax2, ax3) + assert not self.get_y_axis(ax1).joined(ax1, ax2) + assert not self.get_y_axis(ax2).joined(ax1, ax2) + assert not self.get_y_axis(ax3).joined(ax1, ax3) + assert not self.get_y_axis(ax3).joined(ax2, ax3) @pytest.mark.slow def test_axis_share_y_with_by(self): @@ -212,16 +212,16 @@ def test_axis_share_y_with_by(self): ax1, ax2, ax3 = self.hist_df.plot.hist(column="A", by="C", sharey=True) # share y - assert ax1._shared_y_axes.joined(ax1, ax2) - assert ax2._shared_y_axes.joined(ax1, ax2) - assert ax3._shared_y_axes.joined(ax1, ax3) - assert ax3._shared_y_axes.joined(ax2, ax3) + assert self.get_y_axis(ax1).joined(ax1, ax2) + assert self.get_y_axis(ax2).joined(ax1, ax2) + assert self.get_y_axis(ax3).joined(ax1, ax3) + assert self.get_y_axis(ax3).joined(ax2, ax3) # don't share x - assert not ax1._shared_x_axes.joined(ax1, ax2) - assert not ax2._shared_x_axes.joined(ax1, ax2) - assert not ax3._shared_x_axes.joined(ax1, ax3) - assert not ax3._shared_x_axes.joined(ax2, ax3) + assert not self.get_x_axis(ax1).joined(ax1, ax2) + assert not self.get_x_axis(ax2).joined(ax1, ax2) + assert not self.get_x_axis(ax3).joined(ax1, ax3) + assert not self.get_x_axis(ax3).joined(ax2, ax3) @pytest.mark.parametrize("figsize", [(12, 8), (20, 10)]) def test_figure_shape_hist_with_by(self, figsize): diff --git a/pandas/tests/plotting/test_common.py b/pandas/tests/plotting/test_common.py index 4674fc1bb2c18..6eebf0c01ae52 100644 --- a/pandas/tests/plotting/test_common.py +++ b/pandas/tests/plotting/test_common.py @@ -39,4 +39,6 @@ def test__gen_two_subplots_with_ax(self): next(gen) axes = fig.get_axes() assert len(axes) == 1 - assert axes[0].get_geometry() == (2, 1, 2) + subplot_geometry = list(axes[0].get_subplotspec().get_geometry()[:-1]) + subplot_geometry[-1] += 1 + assert subplot_geometry == [2, 1, 2] diff --git a/pandas/tests/plotting/test_hist_method.py b/pandas/tests/plotting/test_hist_method.py index 96fdcebc9b8f7..403f4a2c06df1 100644 --- 
a/pandas/tests/plotting/test_hist_method.py +++ b/pandas/tests/plotting/test_hist_method.py @@ -728,35 +728,35 @@ def test_axis_share_x(self): ax1, ax2 = df.hist(column="height", by=df.gender, sharex=True) # share x - assert ax1._shared_x_axes.joined(ax1, ax2) - assert ax2._shared_x_axes.joined(ax1, ax2) + assert self.get_x_axis(ax1).joined(ax1, ax2) + assert self.get_x_axis(ax2).joined(ax1, ax2) # don't share y - assert not ax1._shared_y_axes.joined(ax1, ax2) - assert not ax2._shared_y_axes.joined(ax1, ax2) + assert not self.get_y_axis(ax1).joined(ax1, ax2) + assert not self.get_y_axis(ax2).joined(ax1, ax2) def test_axis_share_y(self): df = self.hist_df ax1, ax2 = df.hist(column="height", by=df.gender, sharey=True) # share y - assert ax1._shared_y_axes.joined(ax1, ax2) - assert ax2._shared_y_axes.joined(ax1, ax2) + assert self.get_y_axis(ax1).joined(ax1, ax2) + assert self.get_y_axis(ax2).joined(ax1, ax2) # don't share x - assert not ax1._shared_x_axes.joined(ax1, ax2) - assert not ax2._shared_x_axes.joined(ax1, ax2) + assert not self.get_x_axis(ax1).joined(ax1, ax2) + assert not self.get_x_axis(ax2).joined(ax1, ax2) def test_axis_share_xy(self): df = self.hist_df ax1, ax2 = df.hist(column="height", by=df.gender, sharex=True, sharey=True) # share both x and y - assert ax1._shared_x_axes.joined(ax1, ax2) - assert ax2._shared_x_axes.joined(ax1, ax2) + assert self.get_x_axis(ax1).joined(ax1, ax2) + assert self.get_x_axis(ax2).joined(ax1, ax2) - assert ax1._shared_y_axes.joined(ax1, ax2) - assert ax2._shared_y_axes.joined(ax1, ax2) + assert self.get_y_axis(ax1).joined(ax1, ax2) + assert self.get_y_axis(ax2).joined(ax1, ax2) @pytest.mark.parametrize( "histtype, expected", diff --git a/pandas/tests/plotting/test_series.py b/pandas/tests/plotting/test_series.py index 812aae8d97151..e40798f4f5125 100644 --- a/pandas/tests/plotting/test_series.py +++ b/pandas/tests/plotting/test_series.py @@ -154,8 +154,8 @@ def test_area_sharey_dont_overwrite(self): 
abs(self.ts).plot(ax=ax1, kind="area") abs(self.ts).plot(ax=ax2, kind="area") - assert ax1._shared_y_axes.joined(ax1, ax2) - assert ax2._shared_y_axes.joined(ax1, ax2) + assert self.get_y_axis(ax1).joined(ax1, ax2) + assert self.get_y_axis(ax2).joined(ax1, ax2) def test_label(self): s = Series([1, 2])
- [ ] closes #44521 - [ ] tests added / passed - [ ] Ensure all linting tests pass, see [here](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit) for how to run them - [ ] whatsnew entry Still some deprecation warnings left. Anyone want to approve for me so we can get this merged ASAP?
https://api.github.com/repos/pandas-dev/pandas/pulls/44523
2021-11-19T01:14:16Z
2021-11-20T00:11:40Z
2021-11-20T00:11:39Z
2021-11-21T11:11:08Z
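The compat change above adds `mpl_ge_3_5_0` using the existing `_mpl_version(version, operator.ge)` factory, and the new `get_x_axis`/`get_y_axis` helpers then dispatch between `ax._shared_axes["x"]` (Matplotlib 3.5+) and the older `ax._shared_x_axes`. A simplified sketch of that gating pattern, with the installed version passed in explicitly rather than imported, and a deliberately naive version parser (production code should prefer `packaging.version.Version`):

```python
import operator


def _version_tuple(v: str) -> tuple[int, ...]:
    # Naive parser for release strings like "3.5.0"; does not handle
    # pre-releases -- use packaging.version.Version for that.
    return tuple(int(part) for part in v.split("."))


def _mpl_version(min_version: str, op):
    # Mirrors the factory shape in pandas/plotting/_matplotlib/compat.py:
    # the comparison is wrapped in a callable so it is evaluated lazily.
    def inner(installed: str) -> bool:
        return op(_version_tuple(installed), _version_tuple(min_version))

    return inner


mpl_ge_3_5_0 = _mpl_version("3.5.0", operator.ge)

print(mpl_ge_3_5_0("3.4.3"))   # -> False
print(mpl_ge_3_5_0("3.10.1"))  # -> True: tuple compare, unlike string compare
```

Comparing tuples rather than raw strings is the point of the parser: `"3.10.1" >= "3.5.0"` is False lexicographically, while `(3, 10, 1) >= (3, 5, 0)` is True.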
DEPR: DateOffset.apply
diff --git a/doc/source/user_guide/timeseries.rst b/doc/source/user_guide/timeseries.rst index 3ddb6434ed932..2e81032bd4c22 100644 --- a/doc/source/user_guide/timeseries.rst +++ b/doc/source/user_guide/timeseries.rst @@ -852,7 +852,7 @@ savings time. However, all :class:`DateOffset` subclasses that are an hour or sm The basic :class:`DateOffset` acts similar to ``dateutil.relativedelta`` (`relativedelta documentation`_) that shifts a date time by the corresponding calendar duration specified. The -arithmetic operator (``+``) or the ``apply`` method can be used to perform the shift. +arithmetic operator (``+``) can be used to perform the shift. .. ipython:: python @@ -866,7 +866,6 @@ arithmetic operator (``+``) or the ``apply`` method can be used to perform the s friday.day_name() # Add 2 business days (Friday --> Tuesday) two_business_days = 2 * pd.offsets.BDay() - two_business_days.apply(friday) friday + two_business_days (friday + two_business_days).day_name() @@ -938,14 +937,14 @@ in the operation). ts = pd.Timestamp("2014-01-01 09:00") day = pd.offsets.Day() - day.apply(ts) - day.apply(ts).normalize() + day + ts + (day + ts).normalize() ts = pd.Timestamp("2014-01-01 22:00") hour = pd.offsets.Hour() - hour.apply(ts) - hour.apply(ts).normalize() - hour.apply(pd.Timestamp("2014-01-01 23:30")).normalize() + hour + ts + (hour + ts).normalize() + (hour + pd.Timestamp("2014-01-01 23:30")).normalize() .. _relativedelta documentation: https://dateutil.readthedocs.io/en/stable/relativedelta.html @@ -1185,16 +1184,16 @@ under the default business hours (9:00 - 17:00), there is no gap (0 minutes) bet pd.offsets.BusinessHour().rollback(pd.Timestamp("2014-08-02 15:00")) pd.offsets.BusinessHour().rollforward(pd.Timestamp("2014-08-02 15:00")) - # It is the same as BusinessHour().apply(pd.Timestamp('2014-08-01 17:00')). 
- # And it is the same as BusinessHour().apply(pd.Timestamp('2014-08-04 09:00')) - pd.offsets.BusinessHour().apply(pd.Timestamp("2014-08-02 15:00")) + # It is the same as BusinessHour() + pd.Timestamp('2014-08-01 17:00'). + # And it is the same as BusinessHour() + pd.Timestamp('2014-08-04 09:00') + pd.offsets.BusinessHour() + pd.Timestamp("2014-08-02 15:00") # BusinessDay results (for reference) pd.offsets.BusinessHour().rollforward(pd.Timestamp("2014-08-02")) - # It is the same as BusinessDay().apply(pd.Timestamp('2014-08-01')) + # It is the same as BusinessDay() + pd.Timestamp('2014-08-01') # The result is the same as rollworward because BusinessDay never overlap. - pd.offsets.BusinessHour().apply(pd.Timestamp("2014-08-02")) + pd.offsets.BusinessHour() + pd.Timestamp("2014-08-02") ``BusinessHour`` regards Saturday and Sunday as holidays. To use arbitrary holidays, you can use ``CustomBusinessHour`` offset, as explained in the diff --git a/doc/source/whatsnew/v1.4.0.rst b/doc/source/whatsnew/v1.4.0.rst index 017acb8ef930b..a9f3ffd35eedc 100644 --- a/doc/source/whatsnew/v1.4.0.rst +++ b/doc/source/whatsnew/v1.4.0.rst @@ -464,6 +464,7 @@ Other Deprecations - Deprecated casting behavior when passing an item with mismatched-timezone to :meth:`DatetimeIndex.insert`, :meth:`DatetimeIndex.putmask`, :meth:`DatetimeIndex.where` :meth:`DatetimeIndex.fillna`, :meth:`Series.mask`, :meth:`Series.where`, :meth:`Series.fillna`, :meth:`Series.shift`, :meth:`Series.replace`, :meth:`Series.reindex` (and :class:`DataFrame` column analogues). In the past this has cast to object dtype. 
In a future version, these will cast the passed item to the index or series's timezone (:issue:`37605`) - Deprecated the 'errors' keyword argument in :meth:`Series.where`, :meth:`DataFrame.where`, :meth:`Series.mask`, and meth:`DataFrame.mask`; in a future version the argument will be removed (:issue:`44294`) - Deprecated :meth:`PeriodIndex.astype` to ``datetime64[ns]`` or ``DatetimeTZDtype``, use ``obj.to_timestamp(how).tz_localize(dtype.tz)`` instead (:issue:`44398`) +- Deprecated :meth:`DateOffset.apply`, use ``offset + other`` instead (:issue:`44522`) - .. --------------------------------------------------------------------------- diff --git a/pandas/_libs/tslibs/offsets.pyx b/pandas/_libs/tslibs/offsets.pyx index f689b8ce242e5..055c568f6c9ae 100644 --- a/pandas/_libs/tslibs/offsets.pyx +++ b/pandas/_libs/tslibs/offsets.pyx @@ -435,7 +435,7 @@ cdef class BaseOffset: # cython semantics; this is __radd__ return other.__add__(self) try: - return self.apply(other) + return self._apply(other) except ApplyTypeError: return NotImplemented @@ -458,7 +458,17 @@ cdef class BaseOffset: FutureWarning, stacklevel=1, ) - return self.apply(other) + return self._apply(other) + + def apply(self, other): + # GH#44522 + warnings.warn( + f"{type(self).__name__}.apply is deprecated and will be removed " + "in a future version. 
Use `offset + other` instead", + FutureWarning, + stacklevel=2, + ) + return self._apply(other) def __mul__(self, other): if util.is_array(other): @@ -889,7 +899,7 @@ cdef class Tick(SingleConstructorOffset): else: return delta_to_tick(self.delta + other.delta) try: - return self.apply(other) + return self._apply(other) except ApplyTypeError: # Includes pd.Period return NotImplemented @@ -898,7 +908,7 @@ cdef class Tick(SingleConstructorOffset): f"the add operation between {self} and {other} will overflow" ) from err - def apply(self, other): + def _apply(self, other): # Timestamp can handle tz and nano sec, thus no need to use apply_wraps if isinstance(other, _Timestamp): # GH#15126 @@ -1041,7 +1051,7 @@ cdef class RelativeDeltaOffset(BaseOffset): self.__dict__.update(state) @apply_wraps - def apply(self, other: datetime) -> datetime: + def _apply(self, other: datetime) -> datetime: if self._use_relativedelta: other = _as_datetime(other) @@ -1371,7 +1381,7 @@ cdef class BusinessDay(BusinessMixin): return "+" + repr(self.offset) @apply_wraps - def apply(self, other): + def _apply(self, other): if PyDateTime_Check(other): n = self.n wday = other.weekday() @@ -1684,7 +1694,7 @@ cdef class BusinessHour(BusinessMixin): return dt @apply_wraps - def apply(self, other: datetime) -> datetime: + def _apply(self, other: datetime) -> datetime: # used for detecting edge condition nanosecond = getattr(other, "nanosecond", 0) # reset timezone and nanosecond @@ -1833,7 +1843,7 @@ cdef class WeekOfMonthMixin(SingleConstructorOffset): raise ValueError(f"Day must be 0<=day<=6, got {weekday}") @apply_wraps - def apply(self, other: datetime) -> datetime: + def _apply(self, other: datetime) -> datetime: compare_day = self._get_offset_day(other) months = self.n @@ -1913,7 +1923,7 @@ cdef class YearOffset(SingleConstructorOffset): return get_day_of_month(&dts, self._day_opt) @apply_wraps - def apply(self, other: datetime) -> datetime: + def _apply(self, other: datetime) -> datetime: 
years = roll_qtrday(other, self.n, self.month, self._day_opt, modby=12) months = years * 12 + (self.month - other.month) return shift_month(other, months, self._day_opt) @@ -2062,7 +2072,7 @@ cdef class QuarterOffset(SingleConstructorOffset): return mod_month == 0 and dt.day == self._get_offset_day(dt) @apply_wraps - def apply(self, other: datetime) -> datetime: + def _apply(self, other: datetime) -> datetime: # months_since: find the calendar quarter containing other.month, # e.g. if other.month == 8, the calendar quarter is [Jul, Aug, Sep]. # Then find the month in that quarter containing an is_on_offset date for @@ -2189,7 +2199,7 @@ cdef class MonthOffset(SingleConstructorOffset): return dt.day == self._get_offset_day(dt) @apply_wraps - def apply(self, other: datetime) -> datetime: + def _apply(self, other: datetime) -> datetime: compare_day = self._get_offset_day(other) n = roll_convention(other.day, self.n, compare_day) return shift_month(other, n, self._day_opt) @@ -2307,7 +2317,7 @@ cdef class SemiMonthOffset(SingleConstructorOffset): return self._prefix + suffix @apply_wraps - def apply(self, other: datetime) -> datetime: + def _apply(self, other: datetime) -> datetime: is_start = isinstance(self, SemiMonthBegin) # shift `other` to self.day_of_month, incrementing `n` if necessary @@ -2482,7 +2492,7 @@ cdef class Week(SingleConstructorOffset): return self.n == 1 and self.weekday is not None @apply_wraps - def apply(self, other): + def _apply(self, other): if self.weekday is None: return other + self.n * self._inc @@ -2833,7 +2843,7 @@ cdef class FY5253(FY5253Mixin): return year_end == dt @apply_wraps - def apply(self, other: datetime) -> datetime: + def _apply(self, other: datetime) -> datetime: norm = Timestamp(other).normalize() n = self.n @@ -3082,7 +3092,7 @@ cdef class FY5253Quarter(FY5253Mixin): return start, num_qtrs, tdelta @apply_wraps - def apply(self, other: datetime) -> datetime: + def _apply(self, other: datetime) -> datetime: # Note: self.n == 
0 is not allowed. n = self.n @@ -3173,7 +3183,7 @@ cdef class Easter(SingleConstructorOffset): self.normalize = state.pop("normalize") @apply_wraps - def apply(self, other: datetime) -> datetime: + def _apply(self, other: datetime) -> datetime: current_easter = easter(other.year) current_easter = datetime( current_easter.year, current_easter.month, current_easter.day @@ -3252,7 +3262,7 @@ cdef class CustomBusinessDay(BusinessDay): BusinessDay.__setstate__(self, state) @apply_wraps - def apply(self, other): + def _apply(self, other): if self.n <= 0: roll = "forward" else: @@ -3415,7 +3425,7 @@ cdef class _CustomBusinessMonth(BusinessMixin): return roll_func @apply_wraps - def apply(self, other: datetime) -> datetime: + def _apply(self, other: datetime) -> datetime: # First move to month offset cur_month_offset_date = self.month_roll(other) diff --git a/pandas/core/arrays/datetimes.py b/pandas/core/arrays/datetimes.py index 460bfda56276d..87990247ab5db 100644 --- a/pandas/core/arrays/datetimes.py +++ b/pandas/core/arrays/datetimes.py @@ -2566,7 +2566,7 @@ def generate_range(start=None, end=None, periods=None, offset=BDay()): break # faster than cur + offset - next_date = offset.apply(cur) + next_date = offset._apply(cur) if next_date <= cur: raise ValueError(f"Offset {offset} did not increment date") cur = next_date @@ -2580,7 +2580,7 @@ def generate_range(start=None, end=None, periods=None, offset=BDay()): break # faster than cur + offset - next_date = offset.apply(cur) + next_date = offset._apply(cur) if next_date >= cur: raise ValueError(f"Offset {offset} did not decrement date") cur = next_date diff --git a/pandas/tests/tseries/offsets/common.py b/pandas/tests/tseries/offsets/common.py index 0227a07877db0..d8e98bb0c6876 100644 --- a/pandas/tests/tseries/offsets/common.py +++ b/pandas/tests/tseries/offsets/common.py @@ -28,7 +28,7 @@ def assert_offset_equal(offset, base, expected): actual = offset + base actual_swapped = base + offset - actual_apply = 
offset.apply(base) + actual_apply = offset._apply(base) try: assert actual == expected assert actual_swapped == expected @@ -155,7 +155,7 @@ def test_rsub(self): # i.e. skip for TestCommon and YQM subclasses that do not have # offset2 attr return - assert self.d - self.offset2 == (-self.offset2).apply(self.d) + assert self.d - self.offset2 == (-self.offset2)._apply(self.d) def test_radd(self): if self._offset is None or not hasattr(self, "offset2"): diff --git a/pandas/tests/tseries/offsets/test_business_day.py b/pandas/tests/tseries/offsets/test_business_day.py index 92176515b6b6f..7fd5cca173bd6 100644 --- a/pandas/tests/tseries/offsets/test_business_day.py +++ b/pandas/tests/tseries/offsets/test_business_day.py @@ -239,4 +239,4 @@ def test_apply_corner(self): "with datetime, datetime64 or timedelta" ) with pytest.raises(ApplyTypeError, match=msg): - self._offset().apply(BMonthEnd()) + self._offset()._apply(BMonthEnd()) diff --git a/pandas/tests/tseries/offsets/test_business_hour.py b/pandas/tests/tseries/offsets/test_business_hour.py index ee05eab5ec5ca..401bfe664a3a2 100644 --- a/pandas/tests/tseries/offsets/test_business_hour.py +++ b/pandas/tests/tseries/offsets/test_business_hour.py @@ -318,7 +318,7 @@ def test_roll_date_object(self): def test_normalize(self, case): offset, cases = case for dt, expected in cases.items(): - assert offset.apply(dt) == expected + assert offset._apply(dt) == expected on_offset_cases = [] on_offset_cases.append( diff --git a/pandas/tests/tseries/offsets/test_custom_business_hour.py b/pandas/tests/tseries/offsets/test_custom_business_hour.py index 8bc06cdd45a50..dbc0ff4371fd9 100644 --- a/pandas/tests/tseries/offsets/test_custom_business_hour.py +++ b/pandas/tests/tseries/offsets/test_custom_business_hour.py @@ -192,7 +192,7 @@ def test_roll_date_object(self): def test_normalize(self, norm_cases): offset, cases = norm_cases for dt, expected in cases.items(): - assert offset.apply(dt) == expected + assert offset._apply(dt) == 
expected def test_is_on_offset(self): tests = [ diff --git a/pandas/tests/tseries/offsets/test_fiscal.py b/pandas/tests/tseries/offsets/test_fiscal.py index 1eee9e611e0f1..8df93102d4bd2 100644 --- a/pandas/tests/tseries/offsets/test_fiscal.py +++ b/pandas/tests/tseries/offsets/test_fiscal.py @@ -643,18 +643,18 @@ def test_bunched_yearends(): fy = FY5253(n=1, weekday=5, startingMonth=12, variation="nearest") dt = Timestamp("2004-01-01") assert fy.rollback(dt) == Timestamp("2002-12-28") - assert (-fy).apply(dt) == Timestamp("2002-12-28") + assert (-fy)._apply(dt) == Timestamp("2002-12-28") assert dt - fy == Timestamp("2002-12-28") assert fy.rollforward(dt) == Timestamp("2004-01-03") - assert fy.apply(dt) == Timestamp("2004-01-03") + assert fy._apply(dt) == Timestamp("2004-01-03") assert fy + dt == Timestamp("2004-01-03") assert dt + fy == Timestamp("2004-01-03") # Same thing, but starting from a Timestamp in the previous year. dt = Timestamp("2003-12-31") assert fy.rollback(dt) == Timestamp("2002-12-28") - assert (-fy).apply(dt) == Timestamp("2002-12-28") + assert (-fy)._apply(dt) == Timestamp("2002-12-28") assert dt - fy == Timestamp("2002-12-28") diff --git a/pandas/tests/tseries/offsets/test_offsets.py b/pandas/tests/tseries/offsets/test_offsets.py index 0c79c0b64f4cd..511f364051f8f 100644 --- a/pandas/tests/tseries/offsets/test_offsets.py +++ b/pandas/tests/tseries/offsets/test_offsets.py @@ -125,7 +125,7 @@ def test_return_type(self, offset_types): assert offset + NaT is NaT assert NaT - offset is NaT - assert (-offset).apply(NaT) is NaT + assert (-offset)._apply(NaT) is NaT def test_offset_n(self, offset_types): offset = self._get_offset(offset_types) @@ -188,7 +188,7 @@ def _check_offsetfunc_works(self, offset, funcname, dt, expected, normalize=Fals if ( type(offset_s).__name__ == "DateOffset" - and (funcname == "apply" or normalize) + and (funcname in ["apply", "_apply"] or normalize) and ts.nanosecond > 0 ): exp_warning = UserWarning @@ -196,6 +196,17 @@ def 
_check_offsetfunc_works(self, offset, funcname, dt, expected, normalize=Fals # test nanosecond is preserved with tm.assert_produces_warning(exp_warning): result = func(ts) + + if exp_warning is None and funcname == "_apply": + # GH#44522 + # Check in this particular case to avoid headaches with + # testing for multiple warnings produced by the same call. + with tm.assert_produces_warning(FutureWarning, match="apply is deprecated"): + res2 = offset_s.apply(ts) + + assert type(res2) is type(result) + assert res2 == result + assert isinstance(result, Timestamp) if normalize is False: assert result == expected + Nano(5) @@ -225,7 +236,7 @@ def _check_offsetfunc_works(self, offset, funcname, dt, expected, normalize=Fals if ( type(offset_s).__name__ == "DateOffset" - and (funcname == "apply" or normalize) + and (funcname in ["apply", "_apply"] or normalize) and ts.nanosecond > 0 ): exp_warning = UserWarning @@ -243,13 +254,14 @@ def test_apply(self, offset_types): sdt = datetime(2011, 1, 1, 9, 0) ndt = np_datetime64_compat("2011-01-01 09:00Z") + expected = self.expecteds[offset_types.__name__] + expected_norm = Timestamp(expected.date()) + for dt in [sdt, ndt]: - expected = self.expecteds[offset_types.__name__] - self._check_offsetfunc_works(offset_types, "apply", dt, expected) + self._check_offsetfunc_works(offset_types, "_apply", dt, expected) - expected = Timestamp(expected.date()) self._check_offsetfunc_works( - offset_types, "apply", dt, expected, normalize=True + offset_types, "_apply", dt, expected_norm, normalize=True ) def test_rollforward(self, offset_types): diff --git a/pandas/tests/tseries/offsets/test_ticks.py b/pandas/tests/tseries/offsets/test_ticks.py index ae6bd2d85579a..464eeaed1e725 100644 --- a/pandas/tests/tseries/offsets/test_ticks.py +++ b/pandas/tests/tseries/offsets/test_ticks.py @@ -45,7 +45,7 @@ def test_apply_ticks(): - result = offsets.Hour(3).apply(offsets.Hour(4)) + result = offsets.Hour(3)._apply(offsets.Hour(4)) exp = offsets.Hour(7) 
assert result == exp @@ -76,7 +76,7 @@ def test_tick_add_sub(cls, n, m): expected = cls(n + m) assert left + right == expected - assert left.apply(right) == expected + assert left._apply(right) == expected expected = cls(n - m) assert left - right == expected
- [ ] closes #xxxx - [x] tests added / passed - [x] Ensure all linting tests pass, see [here](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit) for how to run them - [x] whatsnew entry Gives us the flexibility to consolidate the logic in `__add__`
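The diff above privatizes `DateOffset.apply` as `_apply`; as the updated tests show, the supported public spelling is the `+`/`-` arithmetic operators. A minimal sketch of the public usage (assuming a pandas version where `Hour` offset arithmetic behaves as in these tests):

```python
import pandas as pd
from pandas.tseries import offsets

ts = pd.Timestamp("2011-01-01 09:00")

# Preferred public API: arithmetic operators rather than offset.apply(ts)
result = ts + offsets.Hour(3)
assert result == pd.Timestamp("2011-01-01 12:00")

# Subtraction applies the negated offset, matching the (-offset) tests above
assert ts - offsets.Hour(3) == pd.Timestamp("2011-01-01 06:00")

# Tick offsets also compose with each other, as in test_apply_ticks
assert offsets.Hour(3) + offsets.Hour(4) == offsets.Hour(7)
```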
https://api.github.com/repos/pandas-dev/pandas/pulls/44522
2021-11-19T00:41:30Z
2021-11-20T20:46:44Z
2021-11-20T20:46:44Z
2021-11-20T21:30:54Z
REF: implement _maybe_squeeze_arg
diff --git a/pandas/core/internals/blocks.py b/pandas/core/internals/blocks.py index ca81d54a0fb86..a8e7224eb524f 100644 --- a/pandas/core/internals/blocks.py +++ b/pandas/core/internals/blocks.py @@ -869,6 +869,12 @@ def _replace_coerce( # --------------------------------------------------------------------- + def _maybe_squeeze_arg(self, arg: np.ndarray) -> np.ndarray: + """ + For compatibility with 1D-only ExtensionArrays. + """ + return arg + def setitem(self, indexer, value): """ Attempt self.values[indexer] = value, possibly creating a new array. @@ -1314,6 +1320,46 @@ class EABackedBlock(Block): values: ExtensionArray + def putmask(self, mask, new) -> list[Block]: + """ + See Block.putmask.__doc__ + """ + mask = extract_bool_array(mask) + + values = self.values + + mask = self._maybe_squeeze_arg(mask) + + try: + # Caller is responsible for ensuring matching lengths + values._putmask(mask, new) + except (TypeError, ValueError) as err: + if isinstance(err, ValueError) and "Timezones don't match" not in str(err): + # TODO(2.0): remove catching ValueError at all since + # DTA raising here is deprecated + raise + + if is_interval_dtype(self.dtype): + # Discussion about what we want to support in the general + # case GH#39584 + blk = self.coerce_to_target_dtype(new) + if blk.dtype == _dtype_obj: + # For now at least, only support casting e.g. + # Interval[int64]->Interval[float64], + raise + return blk.putmask(mask, new) + + elif isinstance(self, NDArrayBackedExtensionBlock): + # NB: not (yet) the same as + # isinstance(values, NDArrayBackedExtensionArray) + blk = self.coerce_to_target_dtype(new) + return blk.putmask(mask, new) + + else: + raise + + return [self] + def delete(self, loc) -> None: """ Delete given loc(-s) from block in-place. 
@@ -1410,36 +1456,16 @@ def set_inplace(self, locs, values) -> None: # _cache not yet initialized pass - def putmask(self, mask, new) -> list[Block]: + def _maybe_squeeze_arg(self, arg): """ - See Block.putmask.__doc__ + If necessary, squeeze a (N, 1) ndarray to (N,) """ - mask = extract_bool_array(mask) - - new_values = self.values - - if mask.ndim == new_values.ndim + 1: + # e.g. if we are passed a 2D mask for putmask + if isinstance(arg, np.ndarray) and arg.ndim == self.values.ndim + 1: # TODO(EA2D): unnecessary with 2D EAs - mask = mask.reshape(new_values.shape) - - try: - # Caller is responsible for ensuring matching lengths - new_values._putmask(mask, new) - except TypeError: - if not is_interval_dtype(self.dtype): - # Discussion about what we want to support in the general - # case GH#39584 - raise - - blk = self.coerce_to_target_dtype(new) - if blk.dtype == _dtype_obj: - # For now at least, only support casting e.g. - # Interval[int64]->Interval[float64], - raise - return blk.putmask(mask, new) - - nb = type(self)(new_values, placement=self._mgr_locs, ndim=self.ndim) - return [nb] + assert arg.shape[1] == 1 + arg = arg[:, 0] + return arg @property def is_view(self) -> bool: @@ -1595,15 +1621,8 @@ def where(self, other, cond) -> list[Block]: cond = extract_bool_array(cond) assert not isinstance(other, (ABCIndex, ABCSeries, ABCDataFrame)) - if isinstance(other, np.ndarray) and other.ndim == 2: - # TODO(EA2D): unnecessary with 2D EAs - assert other.shape[1] == 1 - other = other[:, 0] - - if isinstance(cond, np.ndarray) and cond.ndim == 2: - # TODO(EA2D): unnecessary with 2D EAs - assert cond.shape[1] == 1 - cond = cond[:, 0] + other = self._maybe_squeeze_arg(other) + cond = self._maybe_squeeze_arg(cond) if lib.is_scalar(other) and isna(other): # The default `other` for Series / Frame is np.nan @@ -1698,16 +1717,6 @@ def setitem(self, indexer, value): values[indexer] = value return self - def putmask(self, mask, new) -> list[Block]: - mask = 
extract_bool_array(mask) - - if not self._can_hold_element(new): - return self.coerce_to_target_dtype(new).putmask(mask, new) - - arr = self.values - arr.T._putmask(mask, new) - return [self] - def where(self, other, cond) -> list[Block]: arr = self.values
- [ ] closes #xxxx - [ ] tests added / passed - [ ] Ensure all linting tests pass, see [here](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit) for how to run them - [ ] whatsnew entry Allows for sharing some more Block methods. Will be able to do the same for 'where' following the bugfix that comes after #44514.
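The helper hoisted in this diff is small enough to sketch standalone. A pure-NumPy rendering of `_maybe_squeeze_arg` as defined for `NDArrayBackedExtensionBlock` (the name and the `values_ndim` parameter here are illustrative; in pandas it is a Block method reading `self.values.ndim`):

```python
import numpy as np

def maybe_squeeze_arg(arg, values_ndim=1):
    # Squeeze a (N, 1) ndarray down to (N,) for 1D-only ExtensionArrays;
    # the TODO(EA2D) comments note this goes away with 2D EAs.
    if isinstance(arg, np.ndarray) and arg.ndim == values_ndim + 1:
        assert arg.shape[1] == 1
        arg = arg[:, 0]
    return arg

# e.g. a 2D mask passed to putmask gets flattened to 1D
mask = np.array([[True], [False], [True]])
assert maybe_squeeze_arg(mask).shape == (3,)

# already-1D (or non-ndarray) input passes through unchanged
assert maybe_squeeze_arg(np.array([True, False])).shape == (2,)
```

Consolidating this squeeze into one method is what lets `putmask` and `where` share the EA-backed code path instead of each open-coding the `ndim == 2` check.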
https://api.github.com/repos/pandas-dev/pandas/pulls/44520
2021-11-18T22:42:52Z
2021-11-20T16:05:08Z
2021-11-20T16:05:08Z
2021-11-20T16:11:52Z
REF: remove unused allow_object kwarg from sequence_to_dt64ns
diff --git a/pandas/core/arrays/datetimes.py b/pandas/core/arrays/datetimes.py index a0a7ef3501d7f..460bfda56276d 100644 --- a/pandas/core/arrays/datetimes.py +++ b/pandas/core/arrays/datetimes.py @@ -9,7 +9,6 @@ from typing import ( TYPE_CHECKING, Literal, - overload, ) import warnings @@ -356,7 +355,7 @@ def _from_sequence_not_strict( freq, freq_infer = dtl.maybe_infer_freq(freq) - subarr, tz, inferred_freq = sequence_to_dt64ns( + subarr, tz, inferred_freq = _sequence_to_dt64ns( data, dtype=dtype, copy=copy, @@ -1972,41 +1971,22 @@ def std( # Constructor Helpers -@overload -def sequence_to_datetimes( - data, allow_object: Literal[False] = ..., require_iso8601: bool = ... -) -> DatetimeArray: - ... - - -@overload -def sequence_to_datetimes( - data, allow_object: Literal[True] = ..., require_iso8601: bool = ... -) -> np.ndarray | DatetimeArray: - ... - - -def sequence_to_datetimes( - data, allow_object: bool = False, require_iso8601: bool = False -) -> np.ndarray | DatetimeArray: +def sequence_to_datetimes(data, require_iso8601: bool = False) -> DatetimeArray: """ Parse/convert the passed data to either DatetimeArray or np.ndarray[object]. """ - result, tz, freq = sequence_to_dt64ns( + result, tz, freq = _sequence_to_dt64ns( data, - allow_object=allow_object, allow_mixed=True, require_iso8601=require_iso8601, ) - if result.dtype == object: - return result dtype = tz_to_dtype(tz) dta = DatetimeArray._simple_new(result, freq=freq, dtype=dtype) return dta -def sequence_to_dt64ns( +def _sequence_to_dt64ns( data, dtype=None, copy=False, @@ -2015,7 +1995,6 @@ def sequence_to_dt64ns( yearfirst=False, ambiguous="raise", *, - allow_object: bool = False, allow_mixed: bool = False, require_iso8601: bool = False, ): @@ -2030,9 +2009,6 @@ def sequence_to_dt64ns( yearfirst : bool, default False ambiguous : str, bool, or arraylike, default 'raise' See pandas._libs.tslibs.tzconversion.tz_localize_to_utc. 
- allow_object : bool, default False - Whether to return an object-dtype ndarray instead of raising if the - data contains more than one timezone. allow_mixed : bool, default False Interpret integers as timestamps when datetime objects are also present. require_iso8601 : bool, default False @@ -2102,7 +2078,7 @@ def sequence_to_dt64ns( data, dayfirst=dayfirst, yearfirst=yearfirst, - allow_object=allow_object, + allow_object=False, allow_mixed=allow_mixed, require_iso8601=require_iso8601, ) @@ -2112,9 +2088,6 @@ def sequence_to_dt64ns( data = data.view(DT64NS_DTYPE) elif inferred_tz: tz = inferred_tz - elif allow_object and data.dtype == object: - # We encountered mixed-timezones. - return data, None, None data_dtype = data.dtype diff --git a/pandas/core/dtypes/cast.py b/pandas/core/dtypes/cast.py index 6d5162f3fe3a4..7c0566079a7d0 100644 --- a/pandas/core/dtypes/cast.py +++ b/pandas/core/dtypes/cast.py @@ -1523,7 +1523,7 @@ def try_datetime(v: np.ndarray) -> ArrayLike: try: # GH#19671 we pass require_iso8601 to be relatively strict # when parsing strings. - dta = sequence_to_datetimes(v, require_iso8601=True, allow_object=False) + dta = sequence_to_datetimes(v, require_iso8601=True) except (ValueError, TypeError): # e.g. <class 'numpy.timedelta64'> is not convertible to datetime return v.reshape(shape) @@ -1635,7 +1635,7 @@ def maybe_cast_to_datetime( try: if is_datetime64: - dta = sequence_to_datetimes(value, allow_object=False) + dta = sequence_to_datetimes(value) # GH 25843: Remove tz information since the dtype # didn't specify one @@ -1663,7 +1663,7 @@ def maybe_cast_to_datetime( # datetime64tz is assumed to be naive which should # be localized to the timezone. 
is_dt_string = is_string_dtype(value.dtype) - dta = sequence_to_datetimes(value, allow_object=False) + dta = sequence_to_datetimes(value) if dta.tz is not None: value = dta.astype(dtype, copy=False) elif is_dt_string: diff --git a/pandas/tests/arrays/datetimes/test_constructors.py b/pandas/tests/arrays/datetimes/test_constructors.py index cd7d9a479ab38..e6c65499f6fcc 100644 --- a/pandas/tests/arrays/datetimes/test_constructors.py +++ b/pandas/tests/arrays/datetimes/test_constructors.py @@ -6,7 +6,7 @@ import pandas as pd import pandas._testing as tm from pandas.core.arrays import DatetimeArray -from pandas.core.arrays.datetimes import sequence_to_dt64ns +from pandas.core.arrays.datetimes import _sequence_to_dt64ns class TestDatetimeArrayConstructor: @@ -42,7 +42,7 @@ def test_freq_validation(self): "meth", [ DatetimeArray._from_sequence, - sequence_to_dt64ns, + _sequence_to_dt64ns, pd.to_datetime, pd.DatetimeIndex, ], @@ -97,7 +97,7 @@ def test_bool_dtype_raises(self): DatetimeArray._from_sequence(arr) with pytest.raises(TypeError, match=msg): - sequence_to_dt64ns(arr) + _sequence_to_dt64ns(arr) with pytest.raises(TypeError, match=msg): pd.DatetimeIndex(arr) @@ -128,13 +128,13 @@ def test_tz_dtype_mismatch_raises(self): ["2000"], dtype=DatetimeTZDtype(tz="US/Central") ) with pytest.raises(TypeError, match="data is already tz-aware"): - sequence_to_dt64ns(arr, dtype=DatetimeTZDtype(tz="UTC")) + _sequence_to_dt64ns(arr, dtype=DatetimeTZDtype(tz="UTC")) def test_tz_dtype_matches(self): arr = DatetimeArray._from_sequence( ["2000"], dtype=DatetimeTZDtype(tz="US/Central") ) - result, _, _ = sequence_to_dt64ns(arr, dtype=DatetimeTZDtype(tz="US/Central")) + result, _, _ = _sequence_to_dt64ns(arr, dtype=DatetimeTZDtype(tz="US/Central")) tm.assert_numpy_array_equal(arr._data, result) @pytest.mark.parametrize("order", ["F", "C"]) @@ -144,8 +144,8 @@ def test_2d(self, order): if order == "F": arr = arr.T - res = sequence_to_dt64ns(arr) - expected = 
sequence_to_dt64ns(arr.ravel()) + res = _sequence_to_dt64ns(arr) + expected = _sequence_to_dt64ns(arr.ravel()) tm.assert_numpy_array_equal(res[0].ravel(), expected[0]) assert res[1] == expected[1] diff --git a/pandas/tests/arrays/test_datetimelike.py b/pandas/tests/arrays/test_datetimelike.py index 5aa20bedc4a48..ff7c7a7782da3 100644 --- a/pandas/tests/arrays/test_datetimelike.py +++ b/pandas/tests/arrays/test_datetimelike.py @@ -26,7 +26,7 @@ PeriodArray, TimedeltaArray, ) -from pandas.core.arrays.datetimes import sequence_to_dt64ns +from pandas.core.arrays.datetimes import _sequence_to_dt64ns from pandas.core.arrays.timedeltas import sequence_to_td64ns @@ -1361,7 +1361,7 @@ def test_from_pandas_array(dtype): expected = cls._from_sequence(data) tm.assert_extension_array_equal(result, expected) - func = {"M8[ns]": sequence_to_dt64ns, "m8[ns]": sequence_to_td64ns}[dtype] + func = {"M8[ns]": _sequence_to_dt64ns, "m8[ns]": sequence_to_td64ns}[dtype] result = func(arr)[0] expected = func(data)[0] tm.assert_equal(result, expected) @@ -1424,7 +1424,7 @@ def test_from_obscure_array(dtype, array_likes): result = cls._from_sequence(data) tm.assert_extension_array_equal(result, expected) - func = {"M8[ns]": sequence_to_dt64ns, "m8[ns]": sequence_to_td64ns}[dtype] + func = {"M8[ns]": _sequence_to_dt64ns, "m8[ns]": sequence_to_td64ns}[dtype] result = func(arr)[0] expected = func(data)[0] tm.assert_equal(result, expected)
- [ ] closes #xxxx - [ ] tests added / passed - [ ] Ensure all linting tests pass, see [here](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit) for how to run them - [ ] whatsnew entry
https://api.github.com/repos/pandas-dev/pandas/pulls/44519
2021-11-18T21:13:13Z
2021-11-20T16:05:59Z
2021-11-20T16:05:59Z
2021-11-20T16:13:40Z
BUG: DataFrame with scalar tzaware Timestamp
diff --git a/doc/source/whatsnew/v1.4.0.rst b/doc/source/whatsnew/v1.4.0.rst index b17b40ec77287..9e8528d45c03a 100644 --- a/doc/source/whatsnew/v1.4.0.rst +++ b/doc/source/whatsnew/v1.4.0.rst @@ -573,6 +573,7 @@ Conversion - Bug in :class:`IntegerDtype` not allowing coercion from string dtype (:issue:`25472`) - Bug in :func:`to_datetime` with ``arg:xr.DataArray`` and ``unit="ns"`` specified raises TypeError (:issue:`44053`) - Bug in :meth:`DataFrame.convert_dtypes` not returning the correct type when a subclass does not overload :meth:`_constructor_sliced` (:issue:`43201`) +- Bug in creating a :class:`DataFrame` from a timezone-aware :class:`Timestamp` scalar near a Daylight Savings Time transition (:issue:`42505`) - Strings diff --git a/pandas/core/arrays/datetimes.py b/pandas/core/arrays/datetimes.py index 460bfda56276d..7fabe0e85fac4 100644 --- a/pandas/core/arrays/datetimes.py +++ b/pandas/core/arrays/datetimes.py @@ -2084,8 +2084,13 @@ def _sequence_to_dt64ns( ) if tz and inferred_tz: # two timezones: convert to intended from base UTC repr - data = tzconversion.tz_convert_from_utc(data.view("i8"), tz) - data = data.view(DT64NS_DTYPE) + if data.dtype == "i8": + # GH#42505 + # by convention, these are _already_ UTC, e.g + return data.view(DT64NS_DTYPE), tz, None + + utc_vals = tzconversion.tz_convert_from_utc(data.view("i8"), tz) + data = utc_vals.view(DT64NS_DTYPE) elif inferred_tz: tz = inferred_tz diff --git a/pandas/tests/frame/test_constructors.py b/pandas/tests/frame/test_constructors.py index 359f166b9855e..d633043eb566f 100644 --- a/pandas/tests/frame/test_constructors.py +++ b/pandas/tests/frame/test_constructors.py @@ -70,6 +70,19 @@ class TestDataFrameConstructors: + def test_constructor_dict_with_tzaware_scalar(self): + # GH#42505 + dt = Timestamp("2019-11-03 01:00:00-0700").tz_convert("America/Los_Angeles") + + df = DataFrame({"dt": dt}, index=[0]) + expected = DataFrame({"dt": [dt]}) + tm.assert_frame_equal(df, expected) + + # Non-homogeneous + df 
= DataFrame({"dt": dt, "value": [1]}) + expected = DataFrame({"dt": [dt], "value": [1]}) + tm.assert_frame_equal(df, expected) + def test_construct_ndarray_with_nas_and_int_dtype(self): # GH#26919 match Series by not casting np.nan to meaningless int arr = np.array([[1, np.nan], [2, 3]])
- [x] closes #42505 - [x] tests added / passed - [x] Ensure all linting tests pass, see [here](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit) for how to run them - [ ] whatsnew entry @simonjayhawkins where does the whatsnew for this go?
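The regression case from the new test, runnable directly (assuming pandas >= 1.4, where GH#42505 is fixed): the scalar falls in the repeated wall-clock hour of the America/Los_Angeles fall-back transition, which is what used to trip the double UTC conversion.

```python
import pandas as pd

# 2019-11-03 01:00-0700 lies in the hour that occurs twice when
# Los Angeles falls back from PDT to PST
dt = pd.Timestamp("2019-11-03 01:00:00-0700").tz_convert("America/Los_Angeles")

# Broadcasting the tz-aware scalar must round-trip exactly
df = pd.DataFrame({"dt": dt}, index=[0])
expected = pd.DataFrame({"dt": [dt]})
pd.testing.assert_frame_equal(df, expected)
assert df.loc[0, "dt"] == dt
```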
https://api.github.com/repos/pandas-dev/pandas/pulls/44518
2021-11-18T21:11:02Z
2021-11-20T21:12:08Z
2021-11-20T21:12:08Z
2021-11-21T10:37:03Z
BUG: IntegerArray/FloatingArray constructors mismatched NAs
diff --git a/doc/source/whatsnew/v1.4.0.rst b/doc/source/whatsnew/v1.4.0.rst index db5cce8459ca2..c0b11b6250056 100644 --- a/doc/source/whatsnew/v1.4.0.rst +++ b/doc/source/whatsnew/v1.4.0.rst @@ -626,6 +626,7 @@ Indexing - Bug in :meth:`Series.reset_index` not ignoring ``name`` argument when ``drop`` and ``inplace`` are set to ``True`` (:issue:`44575`) - Bug in :meth:`DataFrame.loc.__setitem__` and :meth:`DataFrame.iloc.__setitem__` with mixed dtypes sometimes failing to operate in-place (:issue:`44345`) - Bug in :meth:`DataFrame.loc.__getitem__` incorrectly raising ``KeyError`` when selecting a single column with a boolean key (:issue:`44322`). +- Bug in setting :meth:`DataFrame.iloc` with a single ``ExtensionDtype`` column and setting 2D values e.g. ``df.iloc[:] = df.values`` incorrectly raising (:issue:`44514`) - Bug in indexing on columns with ``loc`` or ``iloc`` using a slice with a negative step with ``ExtensionDtype`` columns incorrectly raising (:issue:`44551`) - Bug in :meth:`IntervalIndex.get_indexer_non_unique` returning boolean mask instead of array of integers for a non unique and non monotonic index (:issue:`44084`) - Bug in :meth:`IntervalIndex.get_indexer_non_unique` not handling targets of ``dtype`` 'object' with NaNs correctly (:issue:`44482`) @@ -737,6 +738,7 @@ ExtensionArray - Bug in :func:`array` failing to preserve :class:`PandasArray` (:issue:`43887`) - NumPy ufuncs ``np.abs``, ``np.positive``, ``np.negative`` now correctly preserve dtype when called on ExtensionArrays that implement ``__abs__, __pos__, __neg__``, respectively. In particular this is fixed for :class:`TimedeltaArray` (:issue:`43899`) - Avoid raising ``PerformanceWarning`` about fragmented DataFrame when using many columns with an extension dtype (:issue:`44098`) +- Bug in :class:`IntegerArray` and :class:`FloatingArray` construction incorrectly coercing mismatched NA values (e.g. 
``np.timedelta64("NaT")``) to numeric NA (:issue:`44514`) - Bug in :meth:`BooleanArray.__eq__` and :meth:`BooleanArray.__ne__` raising ``TypeError`` on comparison with an incompatible type (like a string). This caused :meth:`DataFrame.replace` to sometimes raise a ``TypeError`` if a nullable boolean column was included (:issue:`44499`) - diff --git a/pandas/_libs/missing.pyx b/pandas/_libs/missing.pyx index cd04f4f6e4b3a..585b535775397 100644 --- a/pandas/_libs/missing.pyx +++ b/pandas/_libs/missing.pyx @@ -248,6 +248,36 @@ cdef bint checknull_with_nat_and_na(object obj): return checknull_with_nat(obj) or obj is C_NA +@cython.wraparound(False) +@cython.boundscheck(False) +def is_numeric_na(values: ndarray) -> ndarray: + """ + Check for NA values consistent with IntegerArray/FloatingArray. + + Similar to a vectorized is_valid_na_for_dtype restricted to numeric dtypes. + + Returns + ------- + ndarray[bool] + """ + cdef: + ndarray[uint8_t] result + Py_ssize_t i, N + object val + + N = len(values) + result = np.zeros(N, dtype=np.uint8) + + for i in range(N): + val = values[i] + if checknull(val): + if val is None or val is C_NA or util.is_nan(val) or is_decimal_na(val): + result[i] = True + else: + raise TypeError(f"'values' contains non-numeric NA {val}") + return result.view(bool) + + # ----------------------------------------------------------------------------- # Implementation of NA singleton diff --git a/pandas/core/arrays/floating.py b/pandas/core/arrays/floating.py index 1e7f1aff52d2e..5e55715ee0e97 100644 --- a/pandas/core/arrays/floating.py +++ b/pandas/core/arrays/floating.py @@ -4,7 +4,10 @@ import numpy as np -from pandas._libs import lib +from pandas._libs import ( + lib, + missing as libmissing, +) from pandas._typing import ( ArrayLike, AstypeArg, @@ -27,7 +30,6 @@ ExtensionDtype, register_extension_dtype, ) -from pandas.core.dtypes.missing import isna from pandas.core.arrays import ExtensionArray from pandas.core.arrays.numeric import ( @@ -129,8 
+131,7 @@ def coerce_to_array( if is_object_dtype(values): inferred_type = lib.infer_dtype(values, skipna=True) if inferred_type == "empty": - values = np.empty(len(values)) - values.fill(np.nan) + pass elif inferred_type not in [ "floating", "integer", @@ -146,13 +147,15 @@ def coerce_to_array( elif not (is_integer_dtype(values) or is_float_dtype(values)): raise TypeError(f"{values.dtype} cannot be converted to a FloatingDtype") + if values.ndim != 1: + raise TypeError("values must be a 1D list-like") + if mask is None: - mask = isna(values) + mask = libmissing.is_numeric_na(values) + else: assert len(mask) == len(values) - if not values.ndim == 1: - raise TypeError("values must be a 1D list-like") if not mask.ndim == 1: raise TypeError("mask must be a 1D list-like") diff --git a/pandas/core/arrays/integer.py b/pandas/core/arrays/integer.py index 12bef068ef44b..0e82ef731bb63 100644 --- a/pandas/core/arrays/integer.py +++ b/pandas/core/arrays/integer.py @@ -7,6 +7,7 @@ from pandas._libs import ( iNaT, lib, + missing as libmissing, ) from pandas._typing import ( ArrayLike, @@ -32,7 +33,6 @@ is_string_dtype, pandas_dtype, ) -from pandas.core.dtypes.missing import isna from pandas.core.arrays import ExtensionArray from pandas.core.arrays.masked import BaseMaskedDtype @@ -183,8 +183,7 @@ def coerce_to_array( if is_object_dtype(values) or is_string_dtype(values): inferred_type = lib.infer_dtype(values, skipna=True) if inferred_type == "empty": - values = np.empty(len(values)) - values.fill(np.nan) + pass elif inferred_type not in [ "floating", "integer", @@ -202,13 +201,14 @@ def coerce_to_array( elif not (is_integer_dtype(values) or is_float_dtype(values)): raise TypeError(f"{values.dtype} cannot be converted to an IntegerDtype") + if values.ndim != 1: + raise TypeError("values must be a 1D list-like") + if mask is None: - mask = isna(values) + mask = libmissing.is_numeric_na(values) else: assert len(mask) == len(values) - if values.ndim != 1: - raise TypeError("values 
must be a 1D list-like") if mask.ndim != 1: raise TypeError("mask must be a 1D list-like") diff --git a/pandas/core/internals/blocks.py b/pandas/core/internals/blocks.py index 550bc4ac56d4b..1ee16879feb1b 100644 --- a/pandas/core/internals/blocks.py +++ b/pandas/core/internals/blocks.py @@ -1508,6 +1508,17 @@ def setitem(self, indexer, value): # we are always 1-D indexer = indexer[0] + # TODO(EA2D): not needed with 2D EAS + if isinstance(value, (np.ndarray, ExtensionArray)) and value.ndim == 2: + assert value.shape[1] == 1 + # error: No overload variant of "__getitem__" of "ExtensionArray" + # matches argument type "Tuple[slice, int]" + value = value[:, 0] # type: ignore[call-overload] + elif isinstance(value, ABCDataFrame): + # TODO: should we avoid getting here with DataFrame? + assert value.shape[1] == 1 + value = value._ixs(0, axis=1)._values + check_setitem_lengths(indexer, value, self.values) self.values[indexer] = value return self diff --git a/pandas/tests/arrays/floating/test_construction.py b/pandas/tests/arrays/floating/test_construction.py index 4ce3dd35b538b..4b7b237d2eb7c 100644 --- a/pandas/tests/arrays/floating/test_construction.py +++ b/pandas/tests/arrays/floating/test_construction.py @@ -97,14 +97,18 @@ def test_to_array_mixed_integer_float(): np.array(["foo"]), [[1, 2], [3, 4]], [np.nan, {"a": 1}], + # GH#44514 all-NA case used to get quietly swapped out before checking ndim + np.array([pd.NA] * 6, dtype=object).reshape(3, 2), ], ) def test_to_array_error(values): # error in converting existing arrays to FloatingArray - msg = ( - r"(:?.* cannot be converted to a FloatingDtype)" - r"|(:?values must be a 1D list-like)" - r"|(:?Cannot pass scalar)" + msg = "|".join( + [ + "cannot be converted to a FloatingDtype", + "values must be a 1D list-like", + "Cannot pass scalar", + ] ) with pytest.raises((TypeError, ValueError), match=msg): pd.array(values, dtype="Float64") diff --git a/pandas/tests/extension/base/setitem.py 
b/pandas/tests/extension/base/setitem.py index a2d100db81a2c..208a1a1757be2 100644 --- a/pandas/tests/extension/base/setitem.py +++ b/pandas/tests/extension/base/setitem.py @@ -1,6 +1,13 @@ import numpy as np import pytest +from pandas.core.dtypes.dtypes import ( + DatetimeTZDtype, + IntervalDtype, + PandasDtype, + PeriodDtype, +) + import pandas as pd import pandas._testing as tm from pandas.tests.extension.base.base import BaseExtensionTests @@ -357,6 +364,36 @@ def test_setitem_series(self, data, full_indexer): ) self.assert_series_equal(result, expected) + def test_setitem_frame_2d_values(self, data, request): + # GH#44514 + df = pd.DataFrame({"A": data}) + + # Avoiding using_array_manager fixture + # https://github.com/pandas-dev/pandas/pull/44514#discussion_r754002410 + using_array_manager = isinstance(df._mgr, pd.core.internals.ArrayManager) + if using_array_manager: + if not isinstance( + data.dtype, (PandasDtype, PeriodDtype, IntervalDtype, DatetimeTZDtype) + ): + # These dtypes have non-broken implementations of _can_hold_element + mark = pytest.mark.xfail(reason="Goes through split path, loses dtype") + request.node.add_marker(mark) + + df = pd.DataFrame({"A": data}) + orig = df.copy() + + df.iloc[:] = df + self.assert_frame_equal(df, orig) + + df.iloc[:-1] = df.iloc[:-1] + self.assert_frame_equal(df, orig) + + df.iloc[:] = df.values + self.assert_frame_equal(df, orig) + + df.iloc[:-1] = df.values[:-1] + self.assert_frame_equal(df, orig) + def test_delitem_series(self, data): # GH#40763 ser = pd.Series(data, name="data") diff --git a/pandas/tests/frame/indexing/test_indexing.py b/pandas/tests/frame/indexing/test_indexing.py index 942da38dc5a26..b102bcdae57d9 100644 --- a/pandas/tests/frame/indexing/test_indexing.py +++ b/pandas/tests/frame/indexing/test_indexing.py @@ -1217,6 +1217,75 @@ def test_setitem_array_as_cell_value(self): expected = DataFrame({"a": [np.zeros((2,))], "b": [np.zeros((2, 2))]}) tm.assert_frame_equal(df, expected) + # with AM goes 
through split-path, loses dtype + @td.skip_array_manager_not_yet_implemented + def test_iloc_setitem_nullable_2d_values(self): + df = DataFrame({"A": [1, 2, 3]}, dtype="Int64") + orig = df.copy() + + df.loc[:] = df.values[:, ::-1] + tm.assert_frame_equal(df, orig) + + df.loc[:] = pd.core.arrays.PandasArray(df.values[:, ::-1]) + tm.assert_frame_equal(df, orig) + + df.iloc[:] = df.iloc[:, :] + tm.assert_frame_equal(df, orig) + + @pytest.mark.parametrize( + "null", [pd.NaT, pd.NaT.to_numpy("M8[ns]"), pd.NaT.to_numpy("m8[ns]")] + ) + def test_setting_mismatched_na_into_nullable_fails( + self, null, any_numeric_ea_dtype + ): + # GH#44514 don't cast mismatched nulls to pd.NA + df = DataFrame({"A": [1, 2, 3]}, dtype=any_numeric_ea_dtype) + ser = df["A"] + arr = ser._values + + msg = "|".join( + [ + r"int\(\) argument must be a string, a bytes-like object or a " + "(real )?number, not 'NaTType'", + r"timedelta64\[ns\] cannot be converted to an? (Floating|Integer)Dtype", + r"datetime64\[ns\] cannot be converted to an? 
(Floating|Integer)Dtype", + "object cannot be converted to a FloatingDtype", + "'values' contains non-numeric NA", + ] + ) + with pytest.raises(TypeError, match=msg): + arr[0] = null + + with pytest.raises(TypeError, match=msg): + arr[:2] = [null, null] + + with pytest.raises(TypeError, match=msg): + ser[0] = null + + with pytest.raises(TypeError, match=msg): + ser[:2] = [null, null] + + with pytest.raises(TypeError, match=msg): + ser.iloc[0] = null + + with pytest.raises(TypeError, match=msg): + ser.iloc[:2] = [null, null] + + with pytest.raises(TypeError, match=msg): + df.iloc[0, 0] = null + + with pytest.raises(TypeError, match=msg): + df.iloc[:2, 0] = [null, null] + + # Multi-Block + df2 = df.copy() + df2["B"] = ser.copy() + with pytest.raises(TypeError, match=msg): + df2.iloc[0, 0] = null + + with pytest.raises(TypeError, match=msg): + df2.iloc[:2, 0] = [null, null] + class TestDataFrameIndexingUInt64: def test_setitem(self, uint64_frame): diff --git a/pandas/tests/series/methods/test_clip.py b/pandas/tests/series/methods/test_clip.py index 247f0d50772ce..bc6d5aeb0a581 100644 --- a/pandas/tests/series/methods/test_clip.py +++ b/pandas/tests/series/methods/test_clip.py @@ -46,9 +46,14 @@ def test_series_clipping_with_na_values(self, any_numeric_ea_dtype, nulls_fixtur # Ensure that clipping method can handle NA values with out failing # GH#40581 - s = Series([nulls_fixture, 1.0, 3.0], dtype=any_numeric_ea_dtype) - s_clipped_upper = s.clip(upper=2.0) - s_clipped_lower = s.clip(lower=2.0) + if nulls_fixture is pd.NaT: + # constructor will raise, see + # test_constructor_mismatched_null_nullable_dtype + return + + ser = Series([nulls_fixture, 1.0, 3.0], dtype=any_numeric_ea_dtype) + s_clipped_upper = ser.clip(upper=2.0) + s_clipped_lower = ser.clip(lower=2.0) expected_upper = Series([nulls_fixture, 1.0, 2.0], dtype=any_numeric_ea_dtype) expected_lower = Series([nulls_fixture, 2.0, 3.0], dtype=any_numeric_ea_dtype) diff --git 
a/pandas/tests/series/test_constructors.py b/pandas/tests/series/test_constructors.py index 692c040a33ff8..43e4c8364c06c 100644 --- a/pandas/tests/series/test_constructors.py +++ b/pandas/tests/series/test_constructors.py @@ -13,7 +13,10 @@ iNaT, lib, ) -from pandas.compat.numpy import np_version_under1p19 +from pandas.compat.numpy import ( + np_version_under1p19, + np_version_under1p20, +) import pandas.util._test_decorators as td from pandas.core.dtypes.common import ( @@ -1817,6 +1820,33 @@ def test_constructor_bool_dtype_missing_values(self): expected = Series(True, index=[0], dtype="bool") tm.assert_series_equal(result, expected) + @pytest.mark.filterwarnings( + "ignore:elementwise comparison failed:DeprecationWarning" + ) + @pytest.mark.xfail( + np_version_under1p20, reason="np.array([td64nat, float, float]) raises" + ) + @pytest.mark.parametrize("func", [Series, DataFrame, Index, pd.array]) + def test_constructor_mismatched_null_nullable_dtype( + self, func, any_numeric_ea_dtype + ): + # GH#44514 + msg = "|".join( + [ + "cannot safely cast non-equivalent object", + r"int\(\) argument must be a string, a bytes-like object " + "or a (real )?number", + r"Cannot cast array data from dtype\('O'\) to dtype\('float64'\) " + "according to the rule 'safe'", + "object cannot be converted to a FloatingDtype", + "'values' contains non-numeric NA", + ] + ) + + for null in tm.NP_NAT_OBJECTS + [NaT]: + with pytest.raises(TypeError, match=msg): + func([null, 1.0, 3.0], dtype=any_numeric_ea_dtype) + class TestSeriesConstructorIndexCoercion: def test_series_constructor_datetimelike_index_coercion(self):
- [ ] closes #xxxx - [x] tests added / passed - [x] Ensure all linting tests pass, see [here](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit) for how to run them - [x] whatsnew entry Same yak-shaving as #44495 (which turned out to be a dead end for this particular yak, but still a perf bump)
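The core of the fix is the new `libmissing.is_numeric_na`, which replaces a blanket `isna(values)` so that datetime-like NaTs raise instead of being silently coerced to `pd.NA`. A simplified pure-Python sketch (the real Cython version also accepts `pd.NA` and decimal NaN, omitted here):

```python
import numpy as np

def is_numeric_na(values):
    # Mark NAs that are valid for IntegerArray/FloatingArray (None, float NaN),
    # and raise on mismatched NAs like np.datetime64("NaT") / np.timedelta64("NaT")
    result = np.zeros(len(values), dtype=bool)
    for i, val in enumerate(values):
        if val is None or (isinstance(val, float) and np.isnan(val)):
            result[i] = True
        elif isinstance(val, (np.datetime64, np.timedelta64)) and np.isnat(val):
            raise TypeError(f"'values' contains non-numeric NA {val!r}")
    return result

vals = np.array([None, float("nan"), 1.5], dtype=object)
assert is_numeric_na(vals).tolist() == [True, True, False]

# a timedelta64 NaT is a mismatched NA and must raise, not coerce
try:
    is_numeric_na(np.array([np.timedelta64("NaT")], dtype=object))
except TypeError:
    pass
else:
    raise AssertionError("expected TypeError for timedelta64 NaT")
```

This is why the test changes above expect `TypeError` when setting `pd.NaT` or its numpy equivalents into a nullable numeric Series or DataFrame.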
https://api.github.com/repos/pandas-dev/pandas/pulls/44514
2021-11-18T16:51:09Z
2021-12-01T23:04:26Z
2021-12-01T23:04:26Z
2021-12-06T20:11:28Z
TYP/CI: fix to_string overload
diff --git a/pandas/core/frame.py b/pandas/core/frame.py index 751e7911d7130..0960ab4a81149 100644 --- a/pandas/core/frame.py +++ b/pandas/core/frame.py @@ -1069,7 +1069,7 @@ def to_string( @overload def to_string( self, - buf: FilePathOrBuffer[str], + buf: FilePath | WriteBuffer[str], columns: Sequence[str] | None = ..., col_space: int | list[int] | dict[Hashable, int] | None = ..., header: bool | Sequence[str] = ...,
xref https://github.com/pandas-dev/pandas/pull/44426#issuecomment-972699248
https://api.github.com/repos/pandas-dev/pandas/pulls/44510
2021-11-18T09:51:46Z
2021-11-18T10:56:48Z
2021-11-18T10:56:48Z
2021-11-18T10:56:52Z
BUG: Period incorrectly allowing np.timedelta64('NaT')
diff --git a/doc/source/whatsnew/v1.4.0.rst b/doc/source/whatsnew/v1.4.0.rst index 017acb8ef930b..a64c451ed1075 100644 --- a/doc/source/whatsnew/v1.4.0.rst +++ b/doc/source/whatsnew/v1.4.0.rst @@ -653,6 +653,7 @@ Period ^^^^^^ - Bug in adding a :class:`Period` object to a ``np.timedelta64`` object incorrectly raising ``TypeError`` (:issue:`44182`) - Bug in :meth:`PeriodIndex.to_timestamp` when the index has ``freq="B"`` inferring ``freq="D"`` for its result instead of ``freq="B"`` (:issue:`44105`) +- Bug in :class:`Period` constructor incorrectly allowing ``np.timedelta64("NaT")`` (:issue:`44507`) - Plotting diff --git a/pandas/_libs/tslibs/period.pyx b/pandas/_libs/tslibs/period.pyx index f594e0a8bdafd..67696f9740ea1 100644 --- a/pandas/_libs/tslibs/period.pyx +++ b/pandas/_libs/tslibs/period.pyx @@ -104,7 +104,7 @@ from pandas._libs.tslibs.nattype cimport ( _nat_scalar_rules, c_NaT as NaT, c_nat_strings as nat_strings, - is_null_datetimelike, + checknull_with_nat, ) from pandas._libs.tslibs.offsets cimport ( BaseOffset, @@ -1459,10 +1459,13 @@ def extract_ordinals(ndarray[object] values, freq) -> np.ndarray: for i in range(n): p = values[i] - if is_null_datetimelike(p): + if checknull_with_nat(p): ordinals[i] = NPY_NAT elif util.is_integer_object(p): - raise TypeError(p) + if p == NPY_NAT: + ordinals[i] = NPY_NAT + else: + raise TypeError(p) else: try: ordinals[i] = p.ordinal @@ -2473,14 +2476,17 @@ class Period(_Period): converted = other.asfreq(freq) ordinal = converted.ordinal - elif is_null_datetimelike(value) or (isinstance(value, str) and - value in nat_strings): + elif checknull_with_nat(value) or (isinstance(value, str) and + value in nat_strings): # explicit str check is necessary to avoid raising incorrectly # if we have a non-hashable value. 
ordinal = NPY_NAT elif isinstance(value, str) or util.is_integer_object(value): if util.is_integer_object(value): + if value == NPY_NAT: + value = "NaT" + value = str(value) value = value.upper() dt, reso = parse_time_string(value, freq) diff --git a/pandas/tests/arrays/period/test_constructors.py b/pandas/tests/arrays/period/test_constructors.py index 52543d91e8f2a..cf9749058d1d1 100644 --- a/pandas/tests/arrays/period/test_constructors.py +++ b/pandas/tests/arrays/period/test_constructors.py @@ -96,3 +96,28 @@ def test_from_sequence_disallows_i8(): with pytest.raises(TypeError, match=msg): PeriodArray._from_sequence(list(arr.asi8), dtype=arr.dtype) + + +def test_from_td64nat_sequence_raises(): + # GH#44507 + td = pd.NaT.to_numpy("m8[ns]") + + dtype = pd.period_range("2005-01-01", periods=3, freq="D").dtype + + arr = np.array([None], dtype=object) + arr[0] = td + + msg = "Value must be Period, string, integer, or datetime" + with pytest.raises(ValueError, match=msg): + PeriodArray._from_sequence(arr, dtype=dtype) + + with pytest.raises(ValueError, match=msg): + pd.PeriodIndex(arr, dtype=dtype) + with pytest.raises(ValueError, match=msg): + pd.Index(arr, dtype=dtype) + with pytest.raises(ValueError, match=msg): + pd.array(arr, dtype=dtype) + with pytest.raises(ValueError, match=msg): + pd.Series(arr, dtype=dtype) + with pytest.raises(ValueError, match=msg): + pd.DataFrame(arr, dtype=dtype) diff --git a/pandas/tests/scalar/period/test_period.py b/pandas/tests/scalar/period/test_period.py index cd1bf21753249..f35033115d2fc 100644 --- a/pandas/tests/scalar/period/test_period.py +++ b/pandas/tests/scalar/period/test_period.py @@ -40,6 +40,17 @@ class TestPeriodConstruction: + def test_from_td64nat_raises(self): + # GH#44507 + td = NaT.to_numpy("m8[ns]") + + msg = "Value must be Period, string, integer, or datetime" + with pytest.raises(ValueError, match=msg): + Period(td) + + with pytest.raises(ValueError, match=msg): + Period(td, freq="D") + def 
test_construction(self): i1 = Period("1/1/2005", freq="M") i2 = Period("Jan 2005")
- [ ] closes #xxxx - [x] tests added / passed - [x] Ensure all linting tests pass, see [here](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit) for how to run them - [x] whatsnew entry
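The core of the fix above is swapping the broad `is_null_datetimelike` check for the stricter `checknull_with_nat`, which does not match `np.timedelta64("NaT")`. A hypothetical pure-Python sketch of the distinction, using stand-in sentinel classes since the real checks live in Cython:

```python
# Stand-ins: _NaTType mimics pd.NaT, _Td64NaT mimics np.timedelta64("NaT").
class _NaTType:
    pass


class _Td64NaT:
    pass


NaT = _NaTType()
td64nat = _Td64NaT()


def checknull_with_nat(val):
    # Strict check: only None, a float NaN, and the NaT singleton qualify.
    return val is None or val != val or val is NaT


def is_null_datetimelike(val):
    # Broader check the Period constructor previously used: it also treats
    # a timedelta64 NaT as null, which is what let td64nat slip through.
    return checknull_with_nat(val) or isinstance(val, _Td64NaT)
```

With the strict check, a timedelta64 NaT no longer satisfies the null test, so the constructor falls through to the path that raises instead of silently producing `Period(NaT)`.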
https://api.github.com/repos/pandas-dev/pandas/pulls/44507
2021-11-18T02:34:08Z
2021-11-20T18:14:50Z
2021-11-20T18:14:50Z
2021-11-20T18:54:51Z
DOC: df.to_html documentation incorrectly contains min_rows optional param (release note)
diff --git a/doc/source/whatsnew/v1.4.0.rst b/doc/source/whatsnew/v1.4.0.rst index 2456406f0eca3..2fe289a5f7c35 100644 --- a/doc/source/whatsnew/v1.4.0.rst +++ b/doc/source/whatsnew/v1.4.0.rst @@ -398,6 +398,7 @@ See :ref:`install.dependencies` and :ref:`install.optional_dependencies` for mor Other API changes ^^^^^^^^^^^^^^^^^ - :meth:`Index.get_indexer_for` no longer accepts keyword arguments (other than 'target'); in the past these would be silently ignored if the index was not unique (:issue:`42310`) +- Change in the position of the ``min_rows`` argument in :meth:`DataFrame.to_string` due to change in the docstring (:issue:`44304`) - .. ---------------------------------------------------------------------------
- [x] closes #44304 - [x] tests added / passed - [x] Ensure all linting tests pass, see [here](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit) for how to run them - [x] whatsnew entry
https://api.github.com/repos/pandas-dev/pandas/pulls/44506
2021-11-18T01:58:29Z
2021-11-19T10:49:42Z
2021-11-19T10:49:42Z
2021-11-19T10:49:54Z
REF: combine isnaobj+isnaobj_old
diff --git a/pandas/_libs/missing.pxd b/pandas/_libs/missing.pxd index 9d32fcd3625db..e32518864db0a 100644 --- a/pandas/_libs/missing.pxd +++ b/pandas/_libs/missing.pxd @@ -6,9 +6,9 @@ from numpy cimport ( cpdef bint is_matching_na(object left, object right, bint nan_matches_none=*) -cpdef bint checknull(object val) +cpdef bint checknull(object val, bint inf_as_na=*) cpdef bint checknull_old(object val) -cpdef ndarray[uint8_t] isnaobj(ndarray arr) +cpdef ndarray[uint8_t] isnaobj(ndarray arr, bint inf_as_na=*) cdef bint is_null_datetime64(v) cdef bint is_null_timedelta64(v) diff --git a/pandas/_libs/missing.pyx b/pandas/_libs/missing.pyx index b77db2aec4a08..6146e8ea13f89 100644 --- a/pandas/_libs/missing.pyx +++ b/pandas/_libs/missing.pyx @@ -99,7 +99,7 @@ cpdef bint is_matching_na(object left, object right, bint nan_matches_none=False return False -cpdef bint checknull(object val): +cpdef bint checknull(object val, bint inf_as_na=False): """ Return boolean describing of the input is NA-like, defined here as any of: @@ -114,19 +114,16 @@ cpdef bint checknull(object val): Parameters ---------- val : object + inf_as_na : bool, default False + Whether to treat INF and -INF as NA values. Returns ------- bool - - Notes - ----- - The difference between `checknull` and `checknull_old` is that `checknull` - does *not* consider INF or NEGINF to be NA. 
""" return ( val is C_NA - or is_null_datetimelike(val, inat_is_null=False) + or is_null_datetimelike(val, inat_is_null=False, inf_as_na=inf_as_na) or is_decimal_na(val) ) @@ -139,42 +136,12 @@ cdef inline bint is_decimal_na(object val): cpdef bint checknull_old(object val): - """ - Return boolean describing of the input is NA-like, defined here as any - of: - - None - - nan - - INF - - NEGINF - - NaT - - np.datetime64 representation of NaT - - np.timedelta64 representation of NaT - - NA - - Decimal("NaN") - - Parameters - ---------- - val : object - - Returns - ------- - result : bool - - Notes - ----- - The difference between `checknull` and `checknull_old` is that `checknull` - does *not* consider INF or NEGINF to be NA. - """ - if checknull(val): - return True - elif util.is_float_object(val) or util.is_complex_object(val): - return val == INF or val == NEGINF - return False + return checknull(val, inf_as_na=True) @cython.wraparound(False) @cython.boundscheck(False) -cpdef ndarray[uint8_t] isnaobj(ndarray arr): +cpdef ndarray[uint8_t] isnaobj(ndarray arr, bint inf_as_na=False): """ Return boolean mask denoting which elements of a 1-D array are na-like, according to the criteria defined in `checknull`: @@ -205,53 +172,19 @@ cpdef ndarray[uint8_t] isnaobj(ndarray arr): result = np.empty(n, dtype=np.uint8) for i in range(n): val = arr[i] - result[i] = checknull(val) + result[i] = checknull(val, inf_as_na=inf_as_na) return result.view(np.bool_) @cython.wraparound(False) @cython.boundscheck(False) def isnaobj_old(arr: ndarray) -> ndarray: - """ - Return boolean mask denoting which elements of a 1-D array are na-like, - defined as being any of: - - None - - nan - - INF - - NEGINF - - NaT - - NA - - Decimal("NaN") - - Parameters - ---------- - arr : ndarray - - Returns - ------- - result : ndarray (dtype=np.bool_) - """ - cdef: - Py_ssize_t i, n - object val - ndarray[uint8_t] result - - assert arr.ndim == 1, "'arr' must be 1-D." 
- - n = len(arr) - result = np.zeros(n, dtype=np.uint8) - for i in range(n): - val = arr[i] - result[i] = ( - checknull(val) - or util.is_float_object(val) and (val == INF or val == NEGINF) - ) - return result.view(np.bool_) + return isnaobj(arr, inf_as_na=True) @cython.wraparound(False) @cython.boundscheck(False) -def isnaobj2d(arr: ndarray) -> ndarray: +def isnaobj2d(arr: ndarray, inf_as_na: bool = False) -> ndarray: """ Return boolean mask denoting which elements of a 2-D array are na-like, according to the criteria defined in `checknull`: @@ -270,11 +203,6 @@ def isnaobj2d(arr: ndarray) -> ndarray: Returns ------- result : ndarray (dtype=np.bool_) - - Notes - ----- - The difference between `isnaobj2d` and `isnaobj2d_old` is that `isnaobj2d` - does *not* consider INF or NEGINF to be NA. """ cdef: Py_ssize_t i, j, n, m @@ -288,7 +216,7 @@ def isnaobj2d(arr: ndarray) -> ndarray: for i in range(n): for j in range(m): val = arr[i, j] - if checknull(val): + if checknull(val, inf_as_na=inf_as_na): result[i, j] = 1 return result.view(np.bool_) @@ -296,47 +224,7 @@ def isnaobj2d(arr: ndarray) -> ndarray: @cython.wraparound(False) @cython.boundscheck(False) def isnaobj2d_old(arr: ndarray) -> ndarray: - """ - Return boolean mask denoting which elements of a 2-D array are na-like, - according to the criteria defined in `checknull_old`: - - None - - nan - - INF - - NEGINF - - NaT - - np.datetime64 representation of NaT - - np.timedelta64 representation of NaT - - NA - - Decimal("NaN") - - Parameters - ---------- - arr : ndarray - - Returns - ------- - ndarray (dtype=np.bool_) - - Notes - ----- - The difference between `isnaobj2d` and `isnaobj2d_old` is that `isnaobj2d` - does *not* consider INF or NEGINF to be NA. - """ - cdef: - Py_ssize_t i, j, n, m - object val - ndarray[uint8_t, ndim=2] result - - assert arr.ndim == 2, "'arr' must be 2-D." 
- - n, m = (<object>arr).shape - result = np.zeros((n, m), dtype=np.uint8) - for i in range(n): - for j in range(m): - val = arr[i, j] - if checknull_old(val): - result[i, j] = 1 - return result.view(np.bool_) + return isnaobj2d(arr, inf_as_na=True) def isposinf_scalar(val: object) -> bool: diff --git a/pandas/_libs/tslibs/nattype.pxd b/pandas/_libs/tslibs/nattype.pxd index 35319bd88053a..0ace3ca1fd4b1 100644 --- a/pandas/_libs/tslibs/nattype.pxd +++ b/pandas/_libs/tslibs/nattype.pxd @@ -18,4 +18,4 @@ cdef _NaT c_NaT cdef bint checknull_with_nat(object val) cdef bint is_dt64nat(object val) cdef bint is_td64nat(object val) -cpdef bint is_null_datetimelike(object val, bint inat_is_null=*) +cpdef bint is_null_datetimelike(object val, bint inat_is_null=*, bint inf_as_na=*) diff --git a/pandas/_libs/tslibs/nattype.pyi b/pandas/_libs/tslibs/nattype.pyi index a7ee9a70342d4..1a33a85a04ae0 100644 --- a/pandas/_libs/tslibs/nattype.pyi +++ b/pandas/_libs/tslibs/nattype.pyi @@ -12,7 +12,9 @@ NaT: NaTType iNaT: int nat_strings: set[str] -def is_null_datetimelike(val: object, inat_is_null: bool = ...) -> bool: ... +def is_null_datetimelike( + val: object, inat_is_null: bool = ..., inf_as_na: bool = ... +) -> bool: ... class NaTType(datetime): value: np.int64 diff --git a/pandas/_libs/tslibs/nattype.pyx b/pandas/_libs/tslibs/nattype.pyx index 0cbae74ecadac..ae553d79ae91e 100644 --- a/pandas/_libs/tslibs/nattype.pyx +++ b/pandas/_libs/tslibs/nattype.pyx @@ -1201,6 +1201,7 @@ cdef inline bint checknull_with_nat(object val): """ return val is None or util.is_nan(val) or val is c_NaT + cdef inline bint is_dt64nat(object val): """ Is this a np.datetime64 object np.datetime64("NaT"). @@ -1209,6 +1210,7 @@ cdef inline bint is_dt64nat(object val): return get_datetime64_value(val) == NPY_NAT return False + cdef inline bint is_td64nat(object val): """ Is this a np.timedelta64 object np.timedelta64("NaT"). 
@@ -1218,7 +1220,14 @@ cdef inline bint is_td64nat(object val): return False -cpdef bint is_null_datetimelike(object val, bint inat_is_null=True): +cdef: + cnp.float64_t INF = <cnp.float64_t>np.inf + cnp.float64_t NEGINF = -INF + + +cpdef bint is_null_datetimelike( + object val, bint inat_is_null=True, bint inf_as_na=False +): """ Determine if we have a null for a timedelta/datetime (or integer versions). @@ -1227,6 +1236,8 @@ cpdef bint is_null_datetimelike(object val, bint inat_is_null=True): val : object inat_is_null : bool, default True Whether to treat integer iNaT value as null + inf_as_na : bool, default False + Whether to treat INF or -INF value as null. Returns ------- @@ -1237,7 +1248,11 @@ cpdef bint is_null_datetimelike(object val, bint inat_is_null=True): elif val is c_NaT: return True elif util.is_float_object(val) or util.is_complex_object(val): - return val != val + if val != val: + return True + if inf_as_na: + return val == INF or val == NEGINF + return False elif util.is_timedelta64_object(val): return get_timedelta64_value(val) == NPY_NAT elif util.is_datetime64_object(val): diff --git a/pandas/core/dtypes/missing.py b/pandas/core/dtypes/missing.py index c189134237554..63b64dc504b52 100644 --- a/pandas/core/dtypes/missing.py +++ b/pandas/core/dtypes/missing.py @@ -242,7 +242,7 @@ def _isna_array(values: ArrayLike, inf_as_na: bool = False): if not isinstance(values, np.ndarray): # i.e. 
ExtensionArray if inf_as_na and is_categorical_dtype(dtype): - result = libmissing.isnaobj_old(values.to_numpy()) + result = libmissing.isnaobj(values.to_numpy(), inf_as_na=inf_as_na) else: result = values.isna() elif is_string_dtype(dtype): @@ -268,10 +268,7 @@ def _isna_string_dtype(values: np.ndarray, inf_as_na: bool) -> np.ndarray: result = np.zeros(values.shape, dtype=bool) else: result = np.empty(shape, dtype=bool) - if inf_as_na: - vec = libmissing.isnaobj_old(values.ravel()) - else: - vec = libmissing.isnaobj(values.ravel()) + vec = libmissing.isnaobj(values.ravel(), inf_as_na=inf_as_na) result[...] = vec.reshape(shape)
- [ ] closes #xxxx - [ ] tests added / passed - [ ] Ensure all linting tests pass, see [here](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit) for how to run them - [ ] whatsnew entry
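The refactor above collapses each `_old` variant into an `inf_as_na` keyword on the base function, leaving the old name as a thin wrapper. A hypothetical pure-Python sketch of the pattern (the real functions are Cython and also handle NaT/NA sentinels):

```python
import math


def checknull(val, inf_as_na=False):
    # NA-like: None or a float NaN; with inf_as_na, +/-inf also count.
    if val is None:
        return True
    if isinstance(val, float):
        if math.isnan(val):
            return True
        if inf_as_na and math.isinf(val):
            return True
    return False


def checknull_old(val):
    # The former standalone implementation becomes a one-line wrapper,
    # which is exactly the shape of this refactor.
    return checknull(val, inf_as_na=True)
```

The same move applies to `isnaobj`/`isnaobj_old` and `isnaobj2d`/`isnaobj2d_old`: one parametrized loop, two names.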
https://api.github.com/repos/pandas-dev/pandas/pulls/44505
2021-11-18T00:14:09Z
2021-11-20T16:13:39Z
2021-11-20T16:13:39Z
2021-11-20T16:17:59Z
CLN: whats new 1.4.0 `styler` edits
diff --git a/doc/source/whatsnew/v1.4.0.rst b/doc/source/whatsnew/v1.4.0.rst index 37e4c9a1378d1..133330fd1972f 100644 --- a/doc/source/whatsnew/v1.4.0.rst +++ b/doc/source/whatsnew/v1.4.0.rst @@ -95,26 +95,27 @@ See :ref:`here <advanced.numericindex>` for more about :class:`NumericIndex`. Styler ^^^^^^ -:class:`.Styler` has been further developed in 1.4.0. The following enhancements have been made: +:class:`.Styler` has been further developed in 1.4.0. The following general enhancements have been made: + + - Styling and formatting of indexes has been added, with :meth:`.Styler.apply_index`, :meth:`.Styler.applymap_index` and :meth:`.Styler.format_index`. These mirror the signature of the methods already used to style and format data values, and work with both HTML, LaTeX and Excel format (:issue:`41893`, :issue:`43101`, :issue:`41993`, :issue:`41995`) + - The new method :meth:`.Styler.hide` deprecates :meth:`.Styler.hide_index` and :meth:`.Styler.hide_columns` (:issue:`43758`) + - The keyword arguments ``level`` and ``names`` have been added to :meth:`.Styler.hide` (and implicitly to the deprecated methods :meth:`.Styler.hide_index` and :meth:`.Styler.hide_columns`) for additional control of visibility of MultiIndexes and of index names (:issue:`25475`, :issue:`43404`, :issue:`43346`) + - The :meth:`.Styler.export` and :meth:`.Styler.use` have been updated to address all of the added functionality from v1.2.0 and v1.3.0 (:issue:`40675`) + - Global options under the category ``pd.options.styler`` have been extended to configure default ``Styler`` properties which address formatting, encoding, and HTML and LaTeX rendering. Note that formerly ``Styler`` relied on ``display.html.use_mathjax``, which has now been replaced by ``styler.html.mathjax``. (:issue:`41395`) + - Validation of certain keyword arguments, e.g. 
``caption`` (:issue:`43368`) + - Various bug fixes as recorded below + +Additionally there are specific enhancements to the HTML specific rendering: - - Styling and formatting of indexes has been added, with :meth:`.Styler.apply_index`, :meth:`.Styler.applymap_index` and :meth:`.Styler.format_index`. These mirror the signature of the methods already used to style and format data values, and work with both HTML, LaTeX and Excel format (:issue:`41893`, :issue:`43101`, :issue:`41993`, :issue:`41995`). - :meth:`.Styler.bar` introduces additional arguments to control alignment and display (:issue:`26070`, :issue:`36419`), and it also validates the input arguments ``width`` and ``height`` (:issue:`42511`). - - :meth:`.Styler.to_latex` introduces keyword argument ``environment``, which also allows a specific "longtable" entry through a separate jinja2 template (:issue:`41866`). - :meth:`.Styler.to_html` introduces keyword arguments ``sparse_index``, ``sparse_columns``, ``bold_headers``, ``caption``, ``max_rows`` and ``max_columns`` (:issue:`41946`, :issue:`43149`, :issue:`42972`). 
- - Keyword arguments ``level`` and ``names`` added to :meth:`.Styler.hide_index` and :meth:`.Styler.hide_columns` for additional control of visibility of MultiIndexes and index names (:issue:`25475`, :issue:`43404`, :issue:`43346`) - - Global options have been extended to configure default ``Styler`` properties including formatting and encoding and mathjax options and LaTeX (:issue:`41395`) - - Naive sparsification is now possible for LaTeX without the multirow package (:issue:`43369`) - - :meth:`.Styler.to_html` omits CSSStyle rules for hidden table elements (:issue:`43619`) + - :meth:`.Styler.to_html` omits CSSStyle rules for hidden table elements as a performance enhancement (:issue:`43619`) - Custom CSS classes can now be directly specified without string replacement (:issue:`43686`) - - Bug where row trimming failed to reflect hidden rows (:issue:`43703`, :issue:`44247`) - - Update and expand the export and use mechanics (:issue:`40675`) - - New method :meth:`.Styler.hide` added and deprecates :meth:`.Styler.hide_index` and :meth:`.Styler.hide_columns` (:issue:`43758`) -Formerly Styler relied on ``display.html.use_mathjax``, which has now been replaced by ``styler.html.mathjax``. +There are also some LaTeX specific enhancements: -There are also bug fixes and deprecations listed below. - -Validation now for ``caption`` arg (:issue:`43368`) + - :meth:`.Styler.to_latex` introduces keyword argument ``environment``, which also allows a specific "longtable" entry through a separate jinja2 template (:issue:`41866`). + - Naive sparsification is now possible for LaTeX without the necessity of including the multirow package (:issue:`43369`) .. 
_whatsnew_140.enhancements.pyarrow_csv_engine: @@ -453,6 +454,7 @@ Other Deprecations - Deprecated dropping of nuisance columns in :class:`Rolling`, :class:`Expanding`, and :class:`EWM` aggregations (:issue:`42738`) - Deprecated :meth:`Index.reindex` with a non-unique index (:issue:`42568`) - Deprecated :meth:`.Styler.render` in favour of :meth:`.Styler.to_html` (:issue:`42140`) +- Deprecated :meth:`.Styler.hide_index` and :meth:`.Styler.hide_columns` in favour of :meth:`.Styler.hide` (:issue:`43758`) - Deprecated passing in a string column label into ``times`` in :meth:`DataFrame.ewm` (:issue:`43265`) - Deprecated the 'include_start' and 'include_end' arguments in :meth:`DataFrame.between_time`; in a future version passing 'include_start' or 'include_end' will raise (:issue:`40245`) - Deprecated the ``squeeze`` argument to :meth:`read_csv`, :meth:`read_table`, and :meth:`read_excel`. Users should squeeze the DataFrame afterwards with ``.squeeze("columns")`` instead. (:issue:`43242`) @@ -733,7 +735,7 @@ Styler - Bug when rendering a single level MultiIndex (:issue:`43383`). - Bug when combining non-sparse rendering and :meth:`.Styler.hide_columns` or :meth:`.Styler.hide_index` (:issue:`43464`) - Bug setting a table style when using multiple selectors in :class:`.Styler` (:issue:`44011`) -- +- Bugs where row trimming and column trimming failed to reflect hidden rows (:issue:`43703`, :issue:`44247`) Other ^^^^^
cleans up some of my placeholder bullets
https://api.github.com/repos/pandas-dev/pandas/pulls/44503
2021-11-17T19:22:33Z
2021-11-24T22:39:30Z
2021-11-24T22:39:30Z
2021-11-25T06:20:47Z
ENH: `Styler.to_string`
diff --git a/doc/source/reference/style.rst b/doc/source/reference/style.rst index a739993e4d376..dd7e2fe7434cd 100644 --- a/doc/source/reference/style.rst +++ b/doc/source/reference/style.rst @@ -27,6 +27,7 @@ Styler properties Styler.template_html_style Styler.template_html_table Styler.template_latex + Styler.template_string Styler.loader Style application @@ -74,5 +75,6 @@ Style export and import Styler.to_html Styler.to_latex Styler.to_excel + Styler.to_string Styler.export Styler.use diff --git a/doc/source/whatsnew/v1.5.0.rst b/doc/source/whatsnew/v1.5.0.rst index 7f2a1a9305039..43aaf1068b6c9 100644 --- a/doc/source/whatsnew/v1.5.0.rst +++ b/doc/source/whatsnew/v1.5.0.rst @@ -14,10 +14,12 @@ including other versions of pandas. Enhancements ~~~~~~~~~~~~ -.. _whatsnew_150.enhancements.enhancement1: +.. _whatsnew_150.enhancements.styler: -enhancement1 -^^^^^^^^^^^^ +Styler +^^^^^^ + + - New method :meth:`.Styler.to_string` for alternative customisable output methods (:issue:`44502`) .. _whatsnew_150.enhancements.enhancement2: diff --git a/pandas/io/formats/style.py b/pandas/io/formats/style.py index 29c1e35dbb546..266c5af47eead 100644 --- a/pandas/io/formats/style.py +++ b/pandas/io/formats/style.py @@ -1072,6 +1072,73 @@ def to_html( html, buf=buf, encoding=(encoding if buf is not None else None) ) + def to_string( + self, + buf=None, + *, + encoding=None, + sparse_index: bool | None = None, + sparse_columns: bool | None = None, + max_rows: int | None = None, + max_columns: int | None = None, + delimiter: str = " ", + ): + """ + Write Styler to a file, buffer or string in text format. + + .. versionadded:: 1.5.0 + + Parameters + ---------- + buf : str, Path, or StringIO-like, optional, default None + Buffer to write to. If ``None``, the output is returned as a string. + encoding : str, optional + Character encoding setting for file output. + Defaults to ``pandas.options.styler.render.encoding`` value of "utf-8". 
+ sparse_index : bool, optional + Whether to sparsify the display of a hierarchical index. Setting to False + will display each explicit level element in a hierarchical key for each row. + Defaults to ``pandas.options.styler.sparse.index`` value. + sparse_columns : bool, optional + Whether to sparsify the display of a hierarchical index. Setting to False + will display each explicit level element in a hierarchical key for each + column. Defaults to ``pandas.options.styler.sparse.columns`` value. + max_rows : int, optional + The maximum number of rows that will be rendered. Defaults to + ``pandas.options.styler.render.max_rows``, which is None. + max_columns : int, optional + The maximum number of columns that will be rendered. Defaults to + ``pandas.options.styler.render.max_columns``, which is None. + + Rows and columns may be reduced if the number of total elements is + large. This value is set to ``pandas.options.styler.render.max_elements``, + which is 262144 (18 bit browser rendering). + delimiter : str, default single space + The separator between data elements. + + Returns + ------- + str or None + If `buf` is None, returns the result as a string. Otherwise returns `None`. 
+ """ + obj = self._copy(deepcopy=True) + + if sparse_index is None: + sparse_index = get_option("styler.sparse.index") + if sparse_columns is None: + sparse_columns = get_option("styler.sparse.columns") + + text = obj._render_string( + sparse_columns=sparse_columns, + sparse_index=sparse_index, + max_rows=max_rows, + max_cols=max_columns, + delimiter=delimiter, + ) + return save_to_buffer( + text, buf=buf, encoding=(encoding if buf is not None else None) + ) + def set_td_classes(self, classes: DataFrame) -> Styler: """ Set the DataFrame of strings added to the ``class`` attribute of ``<td>`` diff --git a/pandas/io/formats/style_render.py b/pandas/io/formats/style_render.py index 2ff0a994ebb01..1fe36a34903ab 100644 --- a/pandas/io/formats/style_render.py +++ b/pandas/io/formats/style_render.py @@ -74,6 +74,7 @@ class StylerRenderer: template_html_table = _gl01_adjust(env.get_template("html_table.tpl")) template_html_style = _gl01_adjust(env.get_template("html_style.tpl")) template_latex = _gl01_adjust(env.get_template("latex.tpl")) + template_string = _gl01_adjust(env.get_template("string.tpl")) def __init__( self, @@ -183,6 +184,24 @@ def _render_latex( d.update(kwargs) return self.template_latex.render(**d) + def _render_string( + self, + sparse_index: bool, + sparse_columns: bool, + max_rows: int | None = None, + max_cols: int | None = None, + **kwargs, + ) -> str: + """ + Render a Styler in string format + """ + self._compute() + + d = self._translate(sparse_index, sparse_columns, max_rows, max_cols, blank="") + + d.update(kwargs) + return self.template_string.render(**d) + def _compute(self): """ Execute the style functions built up in `self._todo`. 
diff --git a/pandas/io/formats/templates/string.tpl b/pandas/io/formats/templates/string.tpl new file mode 100644 index 0000000000000..06aeb2b4e413c --- /dev/null +++ b/pandas/io/formats/templates/string.tpl @@ -0,0 +1,12 @@ +{% for r in head %} +{% for c in r %}{% if c["is_visible"] %} +{{ c["display_value"] }}{% if not loop.last %}{{ delimiter }}{% endif %} +{% endif %}{% endfor %} + +{% endfor %} +{% for r in body %} +{% for c in r %}{% if c["is_visible"] %} +{{ c["display_value"] }}{% if not loop.last %}{{ delimiter }}{% endif %} +{% endif %}{% endfor %} + +{% endfor %} diff --git a/pandas/tests/io/formats/style/test_to_string.py b/pandas/tests/io/formats/style/test_to_string.py new file mode 100644 index 0000000000000..5b3e0079bd95c --- /dev/null +++ b/pandas/tests/io/formats/style/test_to_string.py @@ -0,0 +1,42 @@ +from textwrap import dedent + +import pytest + +from pandas import DataFrame + +pytest.importorskip("jinja2") +from pandas.io.formats.style import Styler + + +@pytest.fixture +def df(): + return DataFrame({"A": [0, 1], "B": [-0.61, -1.22], "C": ["ab", "cd"]}) + + +@pytest.fixture +def styler(df): + return Styler(df, uuid_len=0, precision=2) + + +def test_basic_string(styler): + result = styler.to_string() + expected = dedent( + """\ + A B C + 0 0 -0.61 ab + 1 1 -1.22 cd + """ + ) + assert result == expected + + +def test_string_delimiter(styler): + result = styler.to_string(delimiter=";") + expected = dedent( + """\ + ;A;B;C + 0;0;-0.61;ab + 1;1;-1.22;cd + """ + ) + assert result == expected
Adds a basic method for writing `Styler` to string using the mechanics already in place and a simplified jinja2 template. With the development of `Styler` over the last 18 months it has moved away from being purely a tool for adding styles and colors to HTML. The methods `.hide`, `.format`, and `.format_index`, together with the planned `set_descriptors` and `aliases` kwarg for `format_index`, allow one to fully customise the displayed row and column indexes, index names, levels, and the data values themselves, for output to any file format. This is already used for output to LaTeX as well as HTML, so adding a `to_string` method seems a minimal extension that gives users even further capability. (e.g. #45177)
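The new `string.tpl` template simply joins the display values of visible cells with the delimiter, one output line per row. A rough Python equivalent of what the template does, with `(display_value, is_visible)` tuples standing in for the cell dicts that `_translate` produces:

```python
def render_string(head, body, delimiter=" "):
    # Each row is a list of (display_value, is_visible) pairs; the jinja2
    # template emits the visible values joined by the delimiter, row per line.
    lines = []
    for row in head + body:
        lines.append(delimiter.join(val for val, visible in row if visible))
    return "\n".join(lines) + "\n"


# Rows mirroring the fixture DataFrame in test_to_string.py; the leading
# empty header cell is the blank above the index column.
head = [[("", True), ("A", True), ("B", True), ("C", True)]]
body = [
    [("0", True), ("0", True), ("-0.61", True), ("ab", True)],
    [("1", True), ("1", True), ("-1.22", True), ("cd", True)],
]

print(render_string(head, body, delimiter=";"))
# ;A;B;C
# 0;0;-0.61;ab
# 1;1;-1.22;cd
```

Hidden rows and columns drop out naturally because their cells carry `is_visible=False`, which is why `.hide` composes with `to_string` for free.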
https://api.github.com/repos/pandas-dev/pandas/pulls/44502
2021-11-17T18:39:21Z
2022-01-08T15:57:05Z
2022-01-08T15:57:05Z
2022-01-08T16:23:36Z
TST: Replace Timestamp.now() in tests
diff --git a/pandas/conftest.py b/pandas/conftest.py index 7bc6dbae38e6c..04589993b5f53 100644 --- a/pandas/conftest.py +++ b/pandas/conftest.py @@ -1268,6 +1268,16 @@ def timedelta64_dtype(request): return request.param +@pytest.fixture +def fixed_now_ts(): + """ + Fixture emits fixed Timestamp.now() + """ + return Timestamp( + year=2021, month=1, day=1, hour=12, minute=4, second=13, microsecond=22 + ) + + @pytest.fixture(params=tm.FLOAT_NUMPY_DTYPES) def float_numpy_dtype(request): """ diff --git a/pandas/tests/arithmetic/test_datetime64.py b/pandas/tests/arithmetic/test_datetime64.py index 87bbdfb3c808f..49585f3d37924 100644 --- a/pandas/tests/arithmetic/test_datetime64.py +++ b/pandas/tests/arithmetic/test_datetime64.py @@ -150,7 +150,7 @@ def test_dt64arr_nat_comparison(self, tz_naive_fixture, box_with_array): tz = tz_naive_fixture box = box_with_array - ts = Timestamp.now(tz) + ts = Timestamp("2021-01-01", tz=tz) ser = Series([ts, NaT]) obj = tm.box_expected(ser, box) diff --git a/pandas/tests/arithmetic/test_numeric.py b/pandas/tests/arithmetic/test_numeric.py index 3bf5fdb257c2a..a33febbfbe960 100644 --- a/pandas/tests/arithmetic/test_numeric.py +++ b/pandas/tests/arithmetic/test_numeric.py @@ -85,9 +85,9 @@ def test_operator_series_comparison_zerorank(self): expected = 0.0 > Series([1, 2, 3]) tm.assert_series_equal(result, expected) - def test_df_numeric_cmp_dt64_raises(self, box_with_array): + def test_df_numeric_cmp_dt64_raises(self, box_with_array, fixed_now_ts): # GH#8932, GH#22163 - ts = pd.Timestamp.now() + ts = fixed_now_ts obj = np.array(range(5)) obj = tm.box_expected(obj, box_with_array) @@ -281,9 +281,9 @@ def test_add_sub_timedeltalike_invalid(self, numeric_idx, other, box_with_array) @pytest.mark.parametrize( "other", [ - pd.Timestamp.now().to_pydatetime(), - pd.Timestamp.now(tz="UTC").to_pydatetime(), - pd.Timestamp.now().to_datetime64(), + pd.Timestamp("2021-01-01").to_pydatetime(), + pd.Timestamp("2021-01-01", tz="UTC").to_pydatetime(), + 
pd.Timestamp("2021-01-01").to_datetime64(), pd.NaT, ], ) @@ -873,7 +873,7 @@ def test_add_frames(self, first, second, expected): tm.assert_frame_equal(second + first, expected) # TODO: This came from series.test.test_operators, needs cleanup - def test_series_frame_radd_bug(self): + def test_series_frame_radd_bug(self, fixed_now_ts): # GH#353 vals = Series(tm.rands_array(5, 10)) result = "foo_" + vals @@ -889,7 +889,7 @@ def test_series_frame_radd_bug(self): ts.name = "ts" # really raise this time - now = pd.Timestamp.now().to_pydatetime() + fix_now = fixed_now_ts.to_pydatetime() msg = "|".join( [ "unsupported operand type", @@ -898,10 +898,10 @@ def test_series_frame_radd_bug(self): ] ) with pytest.raises(TypeError, match=msg): - now + ts + fix_now + ts with pytest.raises(TypeError, match=msg): - ts + now + ts + fix_now # TODO: This came from series.test.test_operators, needs cleanup def test_datetime64_with_index(self): diff --git a/pandas/tests/arithmetic/test_object.py b/pandas/tests/arithmetic/test_object.py index 3069868ebb677..c96d7c01ec97f 100644 --- a/pandas/tests/arithmetic/test_object.py +++ b/pandas/tests/arithmetic/test_object.py @@ -315,7 +315,7 @@ def test_sub_object(self): with pytest.raises(TypeError, match=msg): index - np.array([2, "foo"], dtype=object) - def test_rsub_object(self): + def test_rsub_object(self, fixed_now_ts): # GH#19369 index = pd.Index([Decimal(1), Decimal(2)]) expected = pd.Index([Decimal(1), Decimal(0)]) @@ -331,7 +331,7 @@ def test_rsub_object(self): "foo" - index with pytest.raises(TypeError, match=msg): - np.array([True, Timestamp.now()]) - index + np.array([True, fixed_now_ts]) - index class MyIndex(pd.Index): diff --git a/pandas/tests/arithmetic/test_period.py b/pandas/tests/arithmetic/test_period.py index f4404a3483e6f..a231e52d4b027 100644 --- a/pandas/tests/arithmetic/test_period.py +++ b/pandas/tests/arithmetic/test_period.py @@ -72,7 +72,7 @@ def test_compare_zerodim(self, box_with_array): "scalar", [ "foo", - 
Timestamp.now(), + Timestamp("2021-01-01"), Timedelta(days=4), 9, 9.5, @@ -693,9 +693,9 @@ def test_sub_n_gt_1_offsets(self, offset, kwd_name, n): "other", [ # datetime scalars - Timestamp.now(), - Timestamp.now().to_pydatetime(), - Timestamp.now().to_datetime64(), + Timestamp("2016-01-01"), + Timestamp("2016-01-01").to_pydatetime(), + Timestamp("2016-01-01").to_datetime64(), # datetime-like arrays pd.date_range("2016-01-01", periods=3, freq="H"), pd.date_range("2016-01-01", periods=3, tz="Europe/Brussels"), diff --git a/pandas/tests/arithmetic/test_timedelta64.py b/pandas/tests/arithmetic/test_timedelta64.py index 93332b9b96f84..29c01e45ed28d 100644 --- a/pandas/tests/arithmetic/test_timedelta64.py +++ b/pandas/tests/arithmetic/test_timedelta64.py @@ -110,11 +110,11 @@ def test_compare_timedeltalike_scalar(self, box_with_array, td_scalar): [ 345600000000000, "a", - Timestamp.now(), - Timestamp.now("UTC"), - Timestamp.now().to_datetime64(), - Timestamp.now().to_pydatetime(), - Timestamp.now().date(), + Timestamp("2021-01-01"), + Timestamp("2021-01-01").now("UTC"), + Timestamp("2021-01-01").now().to_datetime64(), + Timestamp("2021-01-01").now().to_pydatetime(), + Timestamp("2021-01-01").date(), np.array(4), # zero-dim mismatched dtype ], ) @@ -152,7 +152,7 @@ def test_td64arr_cmp_arraylike_invalid(self, other, box_with_array): def test_td64arr_cmp_mixed_invalid(self): rng = timedelta_range("1 days", periods=5)._data - other = np.array([0, 1, 2, rng[3], Timestamp.now()]) + other = np.array([0, 1, 2, rng[3], Timestamp("2021-01-01")]) result = rng == other expected = np.array([False, False, False, True, False]) @@ -2174,7 +2174,7 @@ def test_td64arr_pow_invalid(self, scalar_td, box_with_array): def test_add_timestamp_to_timedelta(): # GH: 35897 - timestamp = Timestamp.now() + timestamp = Timestamp("2021-01-01") result = timestamp + timedelta_range("0s", "1s", periods=31) expected = DatetimeIndex( [ diff --git a/pandas/tests/arrays/string_/test_string.py 
b/pandas/tests/arrays/string_/test_string.py index 092fc6a460fca..c330e959ad5bf 100644 --- a/pandas/tests/arrays/string_/test_string.py +++ b/pandas/tests/arrays/string_/test_string.py @@ -539,7 +539,7 @@ def test_to_numpy_na_value(dtype, nulls_fixture): tm.assert_numpy_array_equal(result, expected) -def test_isin(dtype, request): +def test_isin(dtype, request, fixed_now_ts): s = pd.Series(["a", "b", None], dtype=dtype) result = s.isin(["a", "c"]) @@ -554,6 +554,6 @@ def test_isin(dtype, request): expected = pd.Series([False, False, False]) tm.assert_series_equal(result, expected) - result = s.isin(["a", pd.Timestamp.now()]) + result = s.isin(["a", fixed_now_ts]) expected = pd.Series([True, False, False]) tm.assert_series_equal(result, expected) diff --git a/pandas/tests/arrays/test_datetimelike.py b/pandas/tests/arrays/test_datetimelike.py index ff7c7a7782da3..00a8110a399d4 100644 --- a/pandas/tests/arrays/test_datetimelike.py +++ b/pandas/tests/arrays/test_datetimelike.py @@ -839,30 +839,29 @@ def test_int_properties(self, arr1d, propname): tm.assert_numpy_array_equal(result, expected) - def test_take_fill_valid(self, arr1d): + def test_take_fill_valid(self, arr1d, fixed_now_ts): arr = arr1d dti = self.index_cls(arr1d) - dt_ind = Timestamp(2021, 1, 1, 12) - dt_ind_tz = dt_ind.tz_localize(dti.tz) - result = arr.take([-1, 1], allow_fill=True, fill_value=dt_ind_tz) - assert result[0] == dt_ind_tz + now = fixed_now_ts.tz_localize(dti.tz) + result = arr.take([-1, 1], allow_fill=True, fill_value=now) + assert result[0] == now msg = f"value should be a '{arr1d._scalar_type.__name__}' or 'NaT'. 
Got" with pytest.raises(TypeError, match=msg): # fill_value Timedelta invalid - arr.take([-1, 1], allow_fill=True, fill_value=dt_ind_tz - dt_ind_tz) + arr.take([-1, 1], allow_fill=True, fill_value=now - now) with pytest.raises(TypeError, match=msg): # fill_value Period invalid arr.take([-1, 1], allow_fill=True, fill_value=Period("2014Q1")) tz = None if dti.tz is not None else "US/Eastern" - dt_ind_tz = dt_ind.tz_localize(tz) + now = fixed_now_ts.tz_localize(tz) msg = "Cannot compare tz-naive and tz-aware datetime-like objects" with pytest.raises(TypeError, match=msg): # Timestamp with mismatched tz-awareness - arr.take([-1, 1], allow_fill=True, fill_value=dt_ind_tz) + arr.take([-1, 1], allow_fill=True, fill_value=now) value = NaT.value msg = f"value should be a '{arr1d._scalar_type.__name__}' or 'NaT'. Got" @@ -878,7 +877,7 @@ def test_take_fill_valid(self, arr1d): if arr.tz is not None: # GH#37356 # Assuming here that arr1d fixture does not include Australia/Melbourne - value = dt_ind.tz_localize("Australia/Melbourne") + value = fixed_now_ts.tz_localize("Australia/Melbourne") msg = "Timezones don't match. .* != 'Australia/Melbourne'" with pytest.raises(ValueError, match=msg): # require tz match, not just tzawareness match @@ -1032,7 +1031,7 @@ def test_array_interface(self, timedelta_index): expected = np.asarray(arr).astype(dtype) tm.assert_numpy_array_equal(result, expected) - def test_take_fill_valid(self, timedelta_index): + def test_take_fill_valid(self, timedelta_index, fixed_now_ts): tdi = timedelta_index arr = TimedeltaArray(tdi) @@ -1040,14 +1039,13 @@ def test_take_fill_valid(self, timedelta_index): result = arr.take([-1, 1], allow_fill=True, fill_value=td1) assert result[0] == td1 - dt_ind = Timestamp(2021, 1, 1, 12) - value = dt_ind + value = fixed_now_ts msg = f"value should be a '{arr._scalar_type.__name__}' or 'NaT'. 
Got" with pytest.raises(TypeError, match=msg): # fill_value Timestamp invalid arr.take([0, 1], allow_fill=True, fill_value=value) - value = dt_ind.to_period("D") + value = fixed_now_ts.to_period("D") with pytest.raises(TypeError, match=msg): # fill_value Period invalid arr.take([0, 1], allow_fill=True, fill_value=value) diff --git a/pandas/tests/arrays/test_datetimes.py b/pandas/tests/arrays/test_datetimes.py index 0453b1ef1c40d..d905c55c4553a 100644 --- a/pandas/tests/arrays/test_datetimes.py +++ b/pandas/tests/arrays/test_datetimes.py @@ -146,9 +146,9 @@ def test_setitem_clears_freq(self): @pytest.mark.parametrize( "obj", [ - pd.Timestamp.now(), - pd.Timestamp.now().to_datetime64(), - pd.Timestamp.now().to_pydatetime(), + pd.Timestamp("2021-01-01"), + pd.Timestamp("2021-01-01").to_datetime64(), + pd.Timestamp("2021-01-01").to_pydatetime(), ], ) def test_setitem_objects(self, obj): @@ -329,7 +329,7 @@ def test_searchsorted_tzawareness_compat(self, index): "invalid", np.arange(10, dtype="i8") * 24 * 3600 * 10 ** 9, np.arange(10).view("timedelta64[ns]") * 24 * 3600 * 10 ** 9, - pd.Timestamp.now().to_period("D"), + pd.Timestamp("2021-01-01").to_period("D"), ], ) @pytest.mark.parametrize("index", [True, False]) diff --git a/pandas/tests/arrays/test_timedeltas.py b/pandas/tests/arrays/test_timedeltas.py index 10d12446666ce..b9ddac92c0a47 100644 --- a/pandas/tests/arrays/test_timedeltas.py +++ b/pandas/tests/arrays/test_timedeltas.py @@ -55,11 +55,11 @@ def test_setitem_objects(self, obj): np.int64(1), 1.0, np.datetime64("NaT"), - pd.Timestamp.now(), + pd.Timestamp("2021-01-01"), "invalid", np.arange(10, dtype="i8") * 24 * 3600 * 10 ** 9, (np.arange(10) * 24 * 3600 * 10 ** 9).view("datetime64[ns]"), - pd.Timestamp.now().to_period("D"), + pd.Timestamp("2021-01-01").to_period("D"), ], ) @pytest.mark.parametrize("index", [True, False]) diff --git a/pandas/tests/arrays/timedeltas/test_reductions.py b/pandas/tests/arrays/timedeltas/test_reductions.py index 
9e854577f7e3c..586a9187fc169 100644 --- a/pandas/tests/arrays/timedeltas/test_reductions.py +++ b/pandas/tests/arrays/timedeltas/test_reductions.py @@ -127,9 +127,9 @@ def test_sum_2d_skipna_false(self): "add", [ Timedelta(0), - pd.Timestamp.now(), - pd.Timestamp.now("UTC"), - pd.Timestamp.now("Asia/Tokyo"), + pd.Timestamp("2021-01-01"), + pd.Timestamp("2021-01-01", tz="UTC"), + pd.Timestamp("2021-01-01", tz="Asia/Tokyo"), ], ) def test_std(self, add): diff --git a/pandas/tests/dtypes/cast/test_construct_from_scalar.py b/pandas/tests/dtypes/cast/test_construct_from_scalar.py index eccd838a11331..0ce04ce2e64cd 100644 --- a/pandas/tests/dtypes/cast/test_construct_from_scalar.py +++ b/pandas/tests/dtypes/cast/test_construct_from_scalar.py @@ -7,7 +7,6 @@ from pandas import ( Categorical, Timedelta, - Timestamp, ) import pandas._testing as tm @@ -25,9 +24,9 @@ def test_cast_1d_array_like_from_scalar_categorical(): tm.assert_categorical_equal(result, expected) -def test_cast_1d_array_like_from_timestamp(): +def test_cast_1d_array_like_from_timestamp(fixed_now_ts): # check we dont lose nanoseconds - ts = Timestamp.now() + Timedelta(1) + ts = fixed_now_ts + Timedelta(1) res = construct_1d_arraylike_from_scalar(ts, 2, np.dtype("M8[ns]")) assert res[0] == ts diff --git a/pandas/tests/dtypes/test_missing.py b/pandas/tests/dtypes/test_missing.py index 55d0e5e73418e..d05499cdb444c 100644 --- a/pandas/tests/dtypes/test_missing.py +++ b/pandas/tests/dtypes/test_missing.py @@ -44,8 +44,8 @@ import pandas._testing as tm from pandas.core.api import Float64Index -now = pd.Timestamp.now() -utcnow = pd.Timestamp.now("UTC") +fix_now = pd.Timestamp("2021-01-01") +fix_utcnow = pd.Timestamp("2021-01-01", tz="UTC") @pytest.mark.parametrize("notna_f", [notna, notnull]) @@ -467,12 +467,12 @@ def test_array_equivalent_different_dtype_but_equal(): # There are 3 variants for each of lvalue and rvalue. 
We include all # three for the tz-naive `now` and exclude the datetim64 variant # for utcnow because it drops tzinfo. - (now, utcnow), - (now.to_datetime64(), utcnow), - (now.to_pydatetime(), utcnow), - (now, utcnow), - (now.to_datetime64(), utcnow.to_pydatetime()), - (now.to_pydatetime(), utcnow.to_pydatetime()), + (fix_now, fix_utcnow), + (fix_now.to_datetime64(), fix_utcnow), + (fix_now.to_pydatetime(), fix_utcnow), + (fix_now, fix_utcnow), + (fix_now.to_datetime64(), fix_utcnow.to_pydatetime()), + (fix_now.to_pydatetime(), fix_utcnow.to_pydatetime()), ], ) def test_array_equivalent_tzawareness(lvalue, rvalue): diff --git a/pandas/tests/frame/test_constructors.py b/pandas/tests/frame/test_constructors.py index 359f166b9855e..abd72d3780a05 100644 --- a/pandas/tests/frame/test_constructors.py +++ b/pandas/tests/frame/test_constructors.py @@ -2888,8 +2888,8 @@ def test_from_timedelta_scalar_preserves_nanos(self, constructor): obj = constructor(td, dtype="m8[ns]") assert get1(obj) == td - def test_from_timestamp_scalar_preserves_nanos(self, constructor): - ts = Timestamp.now() + Timedelta(1) + def test_from_timestamp_scalar_preserves_nanos(self, constructor, fixed_now_ts): + ts = fixed_now_ts + Timedelta(1) obj = constructor(ts, dtype="M8[ns]") assert get1(obj) == ts diff --git a/pandas/tests/indexes/timedeltas/test_indexing.py b/pandas/tests/indexes/timedeltas/test_indexing.py index 0c2f8d0103ceb..b618f12e9f6c9 100644 --- a/pandas/tests/indexes/timedeltas/test_indexing.py +++ b/pandas/tests/indexes/timedeltas/test_indexing.py @@ -149,7 +149,7 @@ def test_where_doesnt_retain_freq(self): result = tdi.where(cond, tdi[::-1]) tm.assert_index_equal(result, expected) - def test_where_invalid_dtypes(self): + def test_where_invalid_dtypes(self, fixed_now_ts): tdi = timedelta_range("1 day", periods=3, freq="D", name="idx") tail = tdi[2:].tolist() @@ -161,17 +161,17 @@ def test_where_invalid_dtypes(self): result = tdi.where(mask, i2.asi8) tm.assert_index_equal(result, 
expected) - ts = i2 + Timestamp.now() + ts = i2 + fixed_now_ts expected = Index([ts[0], ts[1]] + tail, dtype=object, name="idx") result = tdi.where(mask, ts) tm.assert_index_equal(result, expected) - per = (i2 + Timestamp.now()).to_period("D") + per = (i2 + fixed_now_ts).to_period("D") expected = Index([per[0], per[1]] + tail, dtype=object, name="idx") result = tdi.where(mask, per) tm.assert_index_equal(result, expected) - ts = Timestamp.now() + ts = fixed_now_ts expected = Index([ts, ts] + tail, dtype=object, name="idx") result = tdi.where(mask, ts) tm.assert_index_equal(result, expected) diff --git a/pandas/tests/scalar/test_nat.py b/pandas/tests/scalar/test_nat.py index 73227caa9fd62..f2c2985827a4f 100644 --- a/pandas/tests/scalar/test_nat.py +++ b/pandas/tests/scalar/test_nat.py @@ -645,11 +645,11 @@ def test_nat_comparisons_invalid_ndarray(other): op(other, NaT) -def test_compare_date(): +def test_compare_date(fixed_now_ts): # GH#39151 comparing NaT with date object is deprecated # See also: tests.scalar.timestamps.test_comparisons::test_compare_date - dt = Timestamp.now().to_pydatetime().date() + dt = fixed_now_ts.to_pydatetime().date() for left, right in [(NaT, dt), (dt, NaT)]: assert not left == right diff --git a/pandas/tests/scalar/timestamp/test_arithmetic.py b/pandas/tests/scalar/timestamp/test_arithmetic.py index fd46954fd4c71..1a8fd2a8199a2 100644 --- a/pandas/tests/scalar/timestamp/test_arithmetic.py +++ b/pandas/tests/scalar/timestamp/test_arithmetic.py @@ -91,7 +91,7 @@ def test_delta_preserve_nanos(self): def test_rsub_dtscalars(self, tz_naive_fixture): # In particular, check that datetime64 - Timestamp works GH#28286 td = Timedelta(1235345642000) - ts = Timestamp.now(tz_naive_fixture) + ts = Timestamp("2021-01-01", tz=tz_naive_fixture) other = ts + td assert other - ts == td @@ -170,9 +170,9 @@ def test_addition_subtraction_preserve_frequency(self, freq, td, td64): @pytest.mark.parametrize( "td", [Timedelta(hours=3), np.timedelta64(3, "h"), 
timedelta(hours=3)] ) - def test_radd_tdscalar(self, td): + def test_radd_tdscalar(self, td, fixed_now_ts): # GH#24775 timedelta64+Timestamp should not raise - ts = Timestamp.now() + ts = fixed_now_ts assert td + ts == ts + td @pytest.mark.parametrize( diff --git a/pandas/tests/scalar/timestamp/test_comparisons.py b/pandas/tests/scalar/timestamp/test_comparisons.py index b7cb7ca8d7069..7ed0a6aedebc1 100644 --- a/pandas/tests/scalar/timestamp/test_comparisons.py +++ b/pandas/tests/scalar/timestamp/test_comparisons.py @@ -12,8 +12,8 @@ class TestTimestampComparison: - def test_comparison_dt64_ndarray(self): - ts = Timestamp.now() + def test_comparison_dt64_ndarray(self, fixed_now_ts): + ts = Timestamp("2021-01-01") ts2 = Timestamp("2019-04-05") arr = np.array([[ts.asm8, ts2.asm8]], dtype="M8[ns]") @@ -51,7 +51,7 @@ def test_comparison_dt64_ndarray(self): @pytest.mark.parametrize("reverse", [True, False]) def test_comparison_dt64_ndarray_tzaware(self, reverse, comparison_op): - ts = Timestamp.now("UTC") + ts = Timestamp("2021-01-01 00:00:00.00000", tz="UTC") arr = np.array([ts.asm8, ts.asm8], dtype="M8[ns]") left, right = ts, arr @@ -147,7 +147,7 @@ def test_compare_invalid(self): @pytest.mark.parametrize("tz", [None, "US/Pacific"]) def test_compare_date(self, tz): # GH#36131 comparing Timestamp with date object is deprecated - ts = Timestamp.now(tz) + ts = Timestamp("2021-01-01 00:00:00.00000", tz=tz) dt = ts.to_pydatetime().date() # These are incorrectly considered as equal because they # dispatch to the date comparisons which truncates ts @@ -278,9 +278,9 @@ def test_timestamp_compare_oob_dt64(self): assert Timestamp.min > other assert other < Timestamp.min - def test_compare_zerodim_array(self): + def test_compare_zerodim_array(self, fixed_now_ts): # GH#26916 - ts = Timestamp.now() + ts = fixed_now_ts dt64 = np.datetime64("2016-01-01", "ns") arr = np.array(dt64) assert arr.ndim == 0 diff --git a/pandas/tests/scalar/timestamp/test_unary_ops.py 
b/pandas/tests/scalar/timestamp/test_unary_ops.py index 366c0f7cf2f74..ab114e65b2422 100644 --- a/pandas/tests/scalar/timestamp/test_unary_ops.py +++ b/pandas/tests/scalar/timestamp/test_unary_ops.py @@ -490,10 +490,10 @@ def test_normalize_pre_epoch_dates(self): # -------------------------------------------------------------- @td.skip_if_windows - def test_timestamp(self): + def test_timestamp(self, fixed_now_ts): # GH#17329 # tz-naive --> treat it as if it were UTC for purposes of timestamp() - ts = Timestamp.now() + ts = fixed_now_ts uts = ts.replace(tzinfo=utc) assert ts.timestamp() == uts.timestamp() diff --git a/pandas/tests/series/methods/test_replace.py b/pandas/tests/series/methods/test_replace.py index 3a6cd4eb0face..8283604b99d32 100644 --- a/pandas/tests/series/methods/test_replace.py +++ b/pandas/tests/series/methods/test_replace.py @@ -419,10 +419,10 @@ def test_replace_empty_copy(self, frame): tm.assert_equal(res, obj) assert res is not obj - def test_replace_only_one_dictlike_arg(self): + def test_replace_only_one_dictlike_arg(self, fixed_now_ts): # GH#33340 - ser = pd.Series([1, 2, "A", pd.Timestamp.now(), True]) + ser = pd.Series([1, 2, "A", fixed_now_ts, True]) to_replace = {0: 1, 2: "A"} value = "foo" msg = "Series.replace cannot use dict-like to_replace and non-None value" diff --git a/pandas/tests/tools/test_to_timedelta.py b/pandas/tests/tools/test_to_timedelta.py index 395fdea67f1bd..7b35e8d55c338 100644 --- a/pandas/tests/tools/test_to_timedelta.py +++ b/pandas/tests/tools/test_to_timedelta.py @@ -267,9 +267,9 @@ def test_to_timedelta_precision_over_nanos(self, input, expected, func): result = func(input) assert result == expected - def test_to_timedelta_zerodim(self): + def test_to_timedelta_zerodim(self, fixed_now_ts): # ndarray.item() incorrectly returns int for dt64[ns] and td64[ns] - dt64 = pd.Timestamp.now().to_datetime64() + dt64 = fixed_now_ts.to_datetime64() arg = np.array(dt64) msg = ( diff --git 
a/pandas/tests/tseries/offsets/test_offsets.py b/pandas/tests/tseries/offsets/test_offsets.py index 0c79c0b64f4cd..1b2456c6f5103 100644 --- a/pandas/tests/tseries/offsets/test_offsets.py +++ b/pandas/tests/tseries/offsets/test_offsets.py @@ -498,11 +498,11 @@ def test_pickle_dateoffset_odd_inputs(self): base_dt = datetime(2020, 1, 1) assert base_dt + off == base_dt + res - def test_onOffset_deprecated(self, offset_types): + def test_onOffset_deprecated(self, offset_types, fixed_now_ts): # GH#30340 use idiomatic naming off = self._get_offset(offset_types) - ts = Timestamp.now() + ts = fixed_now_ts with tm.assert_produces_warning(FutureWarning): result = off.onOffset(ts) diff --git a/pandas/tests/tslibs/test_timezones.py b/pandas/tests/tslibs/test_timezones.py index fbda5e8fda9dd..7ab0ad0856af0 100644 --- a/pandas/tests/tslibs/test_timezones.py +++ b/pandas/tests/tslibs/test_timezones.py @@ -143,7 +143,7 @@ def test_maybe_get_tz_invalid_types(): msg = "<class 'pandas._libs.tslibs.timestamps.Timestamp'>" with pytest.raises(TypeError, match=msg): - timezones.maybe_get_tz(Timestamp.now("UTC")) + timezones.maybe_get_tz(Timestamp("2021-01-01", tz="UTC")) def test_maybe_get_tz_offset_only():
- [x] xref #44341
- [ ] tests added / passed
- [ ] Ensure all linting tests pass, see [here](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit) for how to run them
- [ ] whatsnew entry

Replacing Timestamp.now() in tests. Left Timestamp.now() in the following tests:

```
scalar/timestamp/test_timestamp.py:290: compare(Timestamp.now(), datetime.now())
scalar/timestamp/test_timestamp.py:291: compare(Timestamp.now("UTC"), datetime.now(timezone("UTC")))
scalar/timestamp/test_timestamp.py:320: compare(Timestamp.now(), datetime.now())
scalar/timestamp/test_timestamp.py:321: compare(Timestamp.now("UTC"), datetime.now(tzutc()))
scalar/timestamp/test_constructors.py:480: ts_from_method = Timestamp.now()
scalar/timestamp/test_constructors.py:484: ts_from_method_tz = Timestamp.now(tz="US/Eastern")
```
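The pinned-timestamp idea behind the `fixed_now_ts` fixture can be illustrated outside pandas. The helper below is hypothetical (not part of the PR); it is a minimal stdlib-only sketch of why a fixed "now" keeps test expectations literal and reproducible:

```python
from datetime import datetime

# Mirrors the values the conftest fixture pins: 2021-01-01 12:04:13.000022.
# With a fixed "now", expected values can be written as literals; a live
# datetime.now() would make the assertion time-dependent and flaky.
FIXED_NOW = datetime(2021, 1, 1, 12, 4, 13, 22)

def day_label(ts: datetime) -> str:
    # Hypothetical helper under test: formats a timestamp as YYYY-MM-DD.
    return ts.strftime("%Y-%m-%d")

assert day_label(FIXED_NOW) == "2021-01-01"
```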
https://api.github.com/repos/pandas-dev/pandas/pulls/44501
2021-11-17T17:30:53Z
2021-11-20T20:10:36Z
2021-11-20T20:10:36Z
2021-11-20T20:10:55Z
TYP: changed variable new_pd_index to final_pd_index
diff --git a/pandas/io/pytables.py b/pandas/io/pytables.py index 0e886befb5f2f..eedf00bcd9c76 100644 --- a/pandas/io/pytables.py +++ b/pandas/io/pytables.py @@ -2093,11 +2093,8 @@ def convert(self, values: np.ndarray, nan_rep, encoding: str, errors: str): if "freq" in kwargs: kwargs["freq"] = None new_pd_index = factory(values, **kwargs) - - # error: Incompatible types in assignment (expression has type - # "Union[ndarray, DatetimeIndex]", variable has type "Index") - new_pd_index = _set_tz(new_pd_index, self.tz) # type: ignore[assignment] - return new_pd_index, new_pd_index + final_pd_index = _set_tz(new_pd_index, self.tz) + return final_pd_index, final_pd_index def take_data(self): """return the values"""
xref #37715

Changed the variable `new_pd_index` to `final_pd_index` at line 2099, which removes the mypy `type: ignore[assignment]` in pandas/io/pytables.py.
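The mypy pattern the rename works around can be sketched with a stdlib-only toy (all names below are hypothetical, not from pandas): re-assigning a differently-typed expression to an already-inferred variable triggers `Incompatible types in assignment`, while binding the result to a fresh name lets mypy infer the new type with no ignore comment.

```python
def widen(x: int) -> float:
    # Returns a different static type than its input.
    return x + 0.5

def convert(n: int) -> float:
    result = n                    # mypy infers: int
    # result = widen(result)      # would be: Incompatible types in assignment
    final_result = widen(result)  # fresh name, inferred as float -- no ignore needed
    return final_result

assert convert(2) == 2.5
```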
https://api.github.com/repos/pandas-dev/pandas/pulls/44500
2021-11-17T15:15:41Z
2021-11-28T20:49:22Z
2021-11-28T20:49:22Z
2021-12-06T16:10:36Z
ERR: Raise ValueError when BaseIndexer start & end bounds are unequal length
diff --git a/doc/source/whatsnew/v1.4.0.rst b/doc/source/whatsnew/v1.4.0.rst index 2fe289a5f7c35..07d448bf811a1 100644 --- a/doc/source/whatsnew/v1.4.0.rst +++ b/doc/source/whatsnew/v1.4.0.rst @@ -677,8 +677,8 @@ Groupby/resample/rolling - Fixed bug in :meth:`Series.rolling` and :meth:`DataFrame.rolling` not calculating window bounds correctly for the first row when ``center=True`` and index is decreasing (:issue:`43927`) - Fixed bug in :meth:`Series.rolling` and :meth:`DataFrame.rolling` for centered datetimelike windows with uneven nanosecond (:issue:`43997`) - Bug in :meth:`GroupBy.nth` failing on ``axis=1`` (:issue:`43926`) -- Fixed bug in :meth:`Series.rolling` and :meth:`DataFrame.rolling` not respecting right bound on centered datetime-like windows, if the index contain duplicates (:issue:`#3944`) - +- Fixed bug in :meth:`Series.rolling` and :meth:`DataFrame.rolling` not respecting right bound on centered datetime-like windows, if the index contain duplicates (:issue:`3944`) +- Bug in :meth:`Series.rolling` and :meth:`DataFrame.rolling` when using a :class:`pandas.api.indexers.BaseIndexer` subclass that returned unequal start and end arrays would segfault instead of raising a ``ValueError`` (:issue:`44470`) Reshaping ^^^^^^^^^ diff --git a/pandas/core/window/ewm.py b/pandas/core/window/ewm.py index f5f681d9de797..d91388e9722f7 100644 --- a/pandas/core/window/ewm.py +++ b/pandas/core/window/ewm.py @@ -417,6 +417,13 @@ def __init__( self.alpha, ) + def _check_window_bounds( + self, start: np.ndarray, end: np.ndarray, num_vals: int + ) -> None: + # emw algorithms are iterative with each point + # ExponentialMovingWindowIndexer "bounds" are the entire window + pass + def _get_window_indexer(self) -> BaseIndexer: """ Return an indexer class that will compute the window start and end bounds diff --git a/pandas/core/window/rolling.py b/pandas/core/window/rolling.py index f7799912937b7..73bdc626554a2 100644 --- a/pandas/core/window/rolling.py +++ 
b/pandas/core/window/rolling.py @@ -227,6 +227,20 @@ def _validate(self) -> None: if self.method not in ["table", "single"]: raise ValueError("method must be 'table' or 'single") + def _check_window_bounds( + self, start: np.ndarray, end: np.ndarray, num_vals: int + ) -> None: + if len(start) != len(end): + raise ValueError( + f"start ({len(start)}) and end ({len(end)}) bounds must be the " + f"same length" + ) + elif len(start) != num_vals: + raise ValueError( + f"start and end bounds ({len(start)}) must be the same length " + f"as the object ({num_vals})" + ) + def _create_data(self, obj: NDFrameT) -> NDFrameT: """ Split data into blocks & return conformed data. @@ -311,10 +325,7 @@ def __iter__(self): center=self.center, closed=self.closed, ) - - assert len(start) == len( - end - ), "these should be equal in length from get_window_bounds" + self._check_window_bounds(start, end, len(obj)) for s, e in zip(start, end): result = obj.iloc[slice(s, e)] @@ -565,9 +576,7 @@ def calc(x): center=self.center, closed=self.closed, ) - assert len(start) == len( - end - ), "these should be equal in length from get_window_bounds" + self._check_window_bounds(start, end, len(x)) return func(x, start, end, min_periods, *numba_args) @@ -608,6 +617,7 @@ def _numba_apply( center=self.center, closed=self.closed, ) + self._check_window_bounds(start, end, len(values)) aggregator = executor.generate_shared_aggregator( func, engine_kwargs, numba_cache_key_str ) @@ -1544,10 +1554,7 @@ def cov_func(x, y): center=self.center, closed=self.closed, ) - - assert len(start) == len( - end - ), "these should be equal in length from get_window_bounds" + self._check_window_bounds(start, end, len(x_array)) with np.errstate(all="ignore"): mean_x_y = window_aggregations.roll_mean( @@ -1588,10 +1595,7 @@ def corr_func(x, y): center=self.center, closed=self.closed, ) - - assert len(start) == len( - end - ), "these should be equal in length from get_window_bounds" + self._check_window_bounds(start, end, 
len(x_array)) with np.errstate(all="ignore"): mean_x_y = window_aggregations.roll_mean( diff --git a/pandas/tests/window/test_base_indexer.py b/pandas/tests/window/test_base_indexer.py index df4666d16ace0..a7ad409683ec8 100644 --- a/pandas/tests/window/test_base_indexer.py +++ b/pandas/tests/window/test_base_indexer.py @@ -452,3 +452,46 @@ def test_rolling_groupby_with_fixed_forward_many(group_keys, window_size): manual = manual.set_index(["a", "c"])["b"] tm.assert_series_equal(result, manual) + + +def test_unequal_start_end_bounds(): + class CustomIndexer(BaseIndexer): + def get_window_bounds(self, num_values, min_periods, center, closed): + return np.array([1]), np.array([1, 2]) + + indexer = CustomIndexer() + roll = Series(1).rolling(indexer) + match = "start" + with pytest.raises(ValueError, match=match): + roll.mean() + + with pytest.raises(ValueError, match=match): + next(iter(roll)) + + with pytest.raises(ValueError, match=match): + roll.corr(pairwise=True) + + with pytest.raises(ValueError, match=match): + roll.cov(pairwise=True) + + +def test_unequal_bounds_to_object(): + # GH 44470 + class CustomIndexer(BaseIndexer): + def get_window_bounds(self, num_values, min_periods, center, closed): + return np.array([1]), np.array([2]) + + indexer = CustomIndexer() + roll = Series([1, 1]).rolling(indexer) + match = "start and end" + with pytest.raises(ValueError, match=match): + roll.mean() + + with pytest.raises(ValueError, match=match): + next(iter(roll)) + + with pytest.raises(ValueError, match=match): + roll.corr(pairwise=True) + + with pytest.raises(ValueError, match=match): + roll.cov(pairwise=True)
- [x] closes #44470
- [x] tests added / passed
- [x] Ensure all linting tests pass, see [here](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit) for how to run them
- [x] whatsnew entry
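For context, the validation this PR introduces can be sketched in plain Python. This is simplified from the diff's `_check_window_bounds` (the real method takes NumPy arrays, but only the lengths matter for the check):

```python
def check_window_bounds(start, end, num_vals):
    # Simplified from BaseWindow._check_window_bounds in this PR:
    # mismatched bounds previously reached the Cython aggregators and
    # segfaulted instead of raising a Python-level error.
    if len(start) != len(end):
        raise ValueError(
            f"start ({len(start)}) and end ({len(end)}) bounds "
            f"must be the same length"
        )
    if len(start) != num_vals:
        raise ValueError(
            f"start and end bounds ({len(start)}) must be the same "
            f"length as the object ({num_vals})"
        )

# Unequal start/end lengths hit the first ValueError branch.
try:
    check_window_bounds([1], [1, 2], 2)
except ValueError as err:
    assert "must be the same length" in str(err)
```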
https://api.github.com/repos/pandas-dev/pandas/pulls/44497
2021-11-17T05:05:47Z
2021-11-20T16:08:54Z
2021-11-20T16:08:53Z
2021-11-20T18:02:43Z
PERF: lib.Validator iteration
diff --git a/pandas/_libs/lib.pyx b/pandas/_libs/lib.pyx index ec89e52e2eff7..25142cad9a30d 100644 --- a/pandas/_libs/lib.pyx +++ b/pandas/_libs/lib.pyx @@ -1704,10 +1704,15 @@ cdef class Validator: cdef bint _validate(self, ndarray values) except -1: cdef: Py_ssize_t i - Py_ssize_t n = self.n + Py_ssize_t n = values.size + flatiter it = PyArray_IterNew(values) for i in range(n): - if not self.is_valid(values[i]): + # The PyArray_GETITEM and PyArray_ITER_NEXT are faster + # equivalents to `val = values[i]` + val = PyArray_GETITEM(values, PyArray_ITER_DATA(it)) + PyArray_ITER_NEXT(it) + if not self.is_valid(val): return False return True @@ -1717,10 +1722,15 @@ cdef class Validator: cdef bint _validate_skipna(self, ndarray values) except -1: cdef: Py_ssize_t i - Py_ssize_t n = self.n + Py_ssize_t n = values.size + flatiter it = PyArray_IterNew(values) for i in range(n): - if not self.is_valid_skipna(values[i]): + # The PyArray_GETITEM and PyArray_ITER_NEXT are faster + # equivalents to `val = values[i]` + val = PyArray_GETITEM(values, PyArray_ITER_DATA(it)) + PyArray_ITER_NEXT(it) + if not self.is_valid_skipna(val): return False return True diff --git a/pandas/core/arrays/string_.py b/pandas/core/arrays/string_.py index 5a8e5f488fbf2..df71501d55b20 100644 --- a/pandas/core/arrays/string_.py +++ b/pandas/core/arrays/string_.py @@ -318,9 +318,7 @@ def __init__(self, values, copy=False): def _validate(self): """Validate that we only store NA or strings.""" - if len(self._ndarray) and not lib.is_string_array( - self._ndarray.ravel("K"), skipna=True - ): + if len(self._ndarray) and not lib.is_string_array(self._ndarray, skipna=True): raise ValueError("StringArray requires a sequence of strings or pandas.NA") if self._ndarray.dtype != "object": raise ValueError( diff --git a/pandas/core/dtypes/inference.py b/pandas/core/dtypes/inference.py index ae961e53d8b79..1f1486b1b29a7 100644 --- a/pandas/core/dtypes/inference.py +++ b/pandas/core/dtypes/inference.py @@ -447,5 +447,5 
@@ def is_inferred_bool_dtype(arr: ArrayLike) -> bool: if dtype == np.dtype(bool): return True elif dtype == np.dtype("object"): - return lib.is_bool_array(arr.ravel("K")) + return lib.is_bool_array(arr) return False diff --git a/pandas/tests/dtypes/test_inference.py b/pandas/tests/dtypes/test_inference.py index 64a635707c2ff..5936248456ca7 100644 --- a/pandas/tests/dtypes/test_inference.py +++ b/pandas/tests/dtypes/test_inference.py @@ -1429,9 +1429,11 @@ def test_other_dtypes_for_array(self, func): func = getattr(lib, func) arr = np.array(["foo", "bar"]) assert not func(arr) + assert not func(arr.reshape(2, 1)) arr = np.array([1, 2]) assert not func(arr) + assert not func(arr.reshape(2, 1)) def test_date(self):
- [ ] closes #xxxx
- [ ] tests added / passed
- [ ] Ensure all linting tests pass, see [here](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit) for how to run them
- [ ] whatsnew entry

This is yak-shaving the trail back to a bug in Series.where with nullable dtypes.

```
from pandas._libs.lib import *

arr = np.empty(10**6, dtype=bool).astype(object)

%timeit is_bool_array(arr)
8.87 ms ± 28.4 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)  # <- master
5.33 ms ± 11.8 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)  # <- PR
```
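The reason callers can drop the `ravel("K")` is that the new Cython loop (`PyArray_IterNew` / `PyArray_ITER_NEXT`) already visits every element regardless of the array's shape. A rough pure-Python analogue using `ndarray.flat` (the function name here is illustrative, not the pandas API):

```python
import numpy as np

def is_bool_array_sketch(arr: np.ndarray) -> bool:
    # Walk every element in memory order, whatever the dimensionality --
    # the pure-Python counterpart of the C-level flatiter in the PR.
    return all(isinstance(val, (bool, np.bool_)) for val in arr.flat)

assert is_bool_array_sketch(np.array([True, False], dtype=object))
assert is_bool_array_sketch(np.array([[True], [False]], dtype=object))  # 2-D, no ravel needed
assert not is_bool_array_sketch(np.array(["foo", "bar"], dtype=object))
```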
https://api.github.com/repos/pandas-dev/pandas/pulls/44495
2021-11-17T02:29:26Z
2021-11-17T13:54:38Z
2021-11-17T13:54:38Z
2021-11-17T15:14:05Z
TST: share a bunch of test_custom_business_day
diff --git a/pandas/tests/tseries/offsets/test_business_day.py b/pandas/tests/tseries/offsets/test_business_day.py index ffc2a04334ffc..92176515b6b6f 100644 --- a/pandas/tests/tseries/offsets/test_business_day.py +++ b/pandas/tests/tseries/offsets/test_business_day.py @@ -15,6 +15,7 @@ BDay, BMonthEnd, ) +from pandas.compat import np_datetime64_compat from pandas import ( DatetimeIndex, @@ -36,10 +37,11 @@ class TestBusinessDay(Base): def setup_method(self, method): self.d = datetime(2008, 1, 1) + self.nd = np_datetime64_compat("2008-01-01 00:00:00Z") - self.offset = BDay() + self.offset = self._offset() self.offset1 = self.offset - self.offset2 = BDay(2) + self.offset2 = self._offset(2) def test_different_normalize_equals(self): # GH#21404 changed __eq__ to return False when `normalize` does not match @@ -98,21 +100,24 @@ def test_call(self): with tm.assert_produces_warning(FutureWarning): # GH#34171 DateOffset.__call__ is deprecated assert self.offset2(self.d) == datetime(2008, 1, 3) + assert self.offset2(self.nd) == datetime(2008, 1, 3) def testRollback1(self): - assert BDay(10).rollback(self.d) == self.d + assert self._offset(10).rollback(self.d) == self.d def testRollback2(self): - assert BDay(10).rollback(datetime(2008, 1, 5)) == datetime(2008, 1, 4) + assert self._offset(10).rollback(datetime(2008, 1, 5)) == datetime(2008, 1, 4) def testRollforward1(self): - assert BDay(10).rollforward(self.d) == self.d + assert self._offset(10).rollforward(self.d) == self.d def testRollforward2(self): - assert BDay(10).rollforward(datetime(2008, 1, 5)) == datetime(2008, 1, 7) + assert self._offset(10).rollforward(datetime(2008, 1, 5)) == datetime( + 2008, 1, 7 + ) def test_roll_date_object(self): - offset = BDay() + offset = self._offset() dt = date(2012, 9, 15) @@ -131,8 +136,8 @@ def test_roll_date_object(self): def test_is_on_offset(self): tests = [ - (BDay(), datetime(2008, 1, 1), True), - (BDay(), datetime(2008, 1, 5), False), + (self._offset(), datetime(2008, 1, 1), 
True), + (self._offset(), datetime(2008, 1, 5), False), ] for offset, d, expected in tests: @@ -140,7 +145,7 @@ def test_is_on_offset(self): apply_cases: _ApplyCases = [ ( - BDay(), + 1, { datetime(2008, 1, 1): datetime(2008, 1, 2), datetime(2008, 1, 4): datetime(2008, 1, 7), @@ -150,7 +155,7 @@ def test_is_on_offset(self): }, ), ( - 2 * BDay(), + 2, { datetime(2008, 1, 1): datetime(2008, 1, 3), datetime(2008, 1, 4): datetime(2008, 1, 8), @@ -160,7 +165,7 @@ def test_is_on_offset(self): }, ), ( - -BDay(), + -1, { datetime(2008, 1, 1): datetime(2007, 12, 31), datetime(2008, 1, 4): datetime(2008, 1, 3), @@ -171,7 +176,7 @@ def test_is_on_offset(self): }, ), ( - -2 * BDay(), + -2, { datetime(2008, 1, 1): datetime(2007, 12, 28), datetime(2008, 1, 4): datetime(2008, 1, 2), @@ -183,7 +188,7 @@ def test_is_on_offset(self): }, ), ( - BDay(0), + 0, { datetime(2008, 1, 1): datetime(2008, 1, 1), datetime(2008, 1, 4): datetime(2008, 1, 4), @@ -196,20 +201,21 @@ def test_is_on_offset(self): @pytest.mark.parametrize("case", apply_cases) def test_apply(self, case): - offset, cases = case + n, cases = case + offset = self._offset(n) for base, expected in cases.items(): assert_offset_equal(offset, base, expected) def test_apply_large_n(self): dt = datetime(2012, 10, 23) - result = dt + BDay(10) + result = dt + self._offset(10) assert result == datetime(2012, 11, 6) - result = dt + BDay(100) - BDay(100) + result = dt + self._offset(100) - self._offset(100) assert result == dt - off = BDay() * 6 + off = self._offset() * 6 rs = datetime(2012, 1, 1) - off xp = datetime(2011, 12, 23) assert rs == xp @@ -219,12 +225,18 @@ def test_apply_large_n(self): xp = datetime(2011, 12, 26) assert rs == xp - off = BDay() * 10 + off = self._offset() * 10 rs = datetime(2014, 1, 5) + off # see #5890 xp = datetime(2014, 1, 17) assert rs == xp def test_apply_corner(self): - msg = "Only know how to combine business day with datetime or timedelta" + if self._offset is BDay: + msg = "Only know how to 
combine business day with datetime or timedelta" + else: + msg = ( + "Only know how to combine trading day " + "with datetime, datetime64 or timedelta" + ) with pytest.raises(ApplyTypeError, match=msg): - BDay().apply(BMonthEnd()) + self._offset().apply(BMonthEnd()) diff --git a/pandas/tests/tseries/offsets/test_custom_business_day.py b/pandas/tests/tseries/offsets/test_custom_business_day.py index 5847bd11f09df..3bbbaa891709f 100644 --- a/pandas/tests/tseries/offsets/test_custom_business_day.py +++ b/pandas/tests/tseries/offsets/test_custom_business_day.py @@ -2,7 +2,6 @@ Tests for offsets.CustomBusinessDay / CDay """ from datetime import ( - date, datetime, timedelta, ) @@ -10,47 +9,21 @@ import numpy as np import pytest -from pandas._libs.tslibs.offsets import ( - ApplyTypeError, - BMonthEnd, - CDay, -) -from pandas.compat import np_datetime64_compat +from pandas._libs.tslibs.offsets import CDay from pandas import ( - DatetimeIndex, - Timedelta, _testing as tm, read_pickle, ) -from pandas.tests.tseries.offsets.common import ( - Base, - assert_is_on_offset, - assert_offset_equal, -) -from pandas.tests.tseries.offsets.test_offsets import _ApplyCases +from pandas.tests.tseries.offsets.common import assert_offset_equal +from pandas.tests.tseries.offsets.test_business_day import TestBusinessDay -from pandas.tseries import offsets as offsets from pandas.tseries.holiday import USFederalHolidayCalendar -class TestCustomBusinessDay(Base): +class TestCustomBusinessDay(TestBusinessDay): _offset = CDay - def setup_method(self, method): - self.d = datetime(2008, 1, 1) - self.nd = np_datetime64_compat("2008-01-01 00:00:00Z") - - self.offset = CDay() - self.offset1 = self.offset - self.offset2 = CDay(2) - - def test_different_normalize_equals(self): - # GH#21404 changed __eq__ to return False when `normalize` does not match - offset = self._offset() - offset2 = self._offset(normalize=True) - assert offset != offset2 - def test_repr(self): assert repr(self.offset) == 
"<CustomBusinessDay>" assert repr(self.offset2) == "<2 * CustomBusinessDays>" @@ -58,181 +31,6 @@ def test_repr(self): expected = "<BusinessDay: offset=datetime.timedelta(days=1)>" assert repr(self.offset + timedelta(1)) == expected - def test_with_offset(self): - offset = self.offset + timedelta(hours=2) - - assert (self.d + offset) == datetime(2008, 1, 2, 2) - - @pytest.mark.parametrize("reverse", [True, False]) - @pytest.mark.parametrize( - "td", - [ - Timedelta(hours=2), - Timedelta(hours=2).to_pytimedelta(), - Timedelta(hours=2).to_timedelta64(), - ], - ids=lambda x: type(x), - ) - def test_with_offset_index(self, reverse, td, request): - if reverse and isinstance(td, np.timedelta64): - mark = pytest.mark.xfail( - reason="need __array_priority__, but that causes other errors" - ) - request.node.add_marker(mark) - - dti = DatetimeIndex([self.d]) - expected = DatetimeIndex([datetime(2008, 1, 2, 2)]) - - if reverse: - result = dti + (td + self.offset) - else: - result = dti + (self.offset + td) - tm.assert_index_equal(result, expected) - - def test_eq(self): - assert self.offset2 == self.offset2 - - def test_mul(self): - pass - - def test_hash(self): - assert hash(self.offset2) == hash(self.offset2) - - def test_call(self): - with tm.assert_produces_warning(FutureWarning): - # GH#34171 DateOffset.__call__ is deprecated - assert self.offset2(self.d) == datetime(2008, 1, 3) - assert self.offset2(self.nd) == datetime(2008, 1, 3) - - def testRollback1(self): - assert CDay(10).rollback(self.d) == self.d - - def testRollback2(self): - assert CDay(10).rollback(datetime(2008, 1, 5)) == datetime(2008, 1, 4) - - def testRollforward1(self): - assert CDay(10).rollforward(self.d) == self.d - - def testRollforward2(self): - assert CDay(10).rollforward(datetime(2008, 1, 5)) == datetime(2008, 1, 7) - - def test_roll_date_object(self): - offset = CDay() - - dt = date(2012, 9, 15) - - result = offset.rollback(dt) - assert result == datetime(2012, 9, 14) - - result = 
offset.rollforward(dt) - assert result == datetime(2012, 9, 17) - - offset = offsets.Day() - result = offset.rollback(dt) - assert result == datetime(2012, 9, 15) - - result = offset.rollforward(dt) - assert result == datetime(2012, 9, 15) - - on_offset_cases = [ - (CDay(), datetime(2008, 1, 1), True), - (CDay(), datetime(2008, 1, 5), False), - ] - - @pytest.mark.parametrize("case", on_offset_cases) - def test_is_on_offset(self, case): - offset, day, expected = case - assert_is_on_offset(offset, day, expected) - - apply_cases: _ApplyCases = [ - ( - CDay(), - { - datetime(2008, 1, 1): datetime(2008, 1, 2), - datetime(2008, 1, 4): datetime(2008, 1, 7), - datetime(2008, 1, 5): datetime(2008, 1, 7), - datetime(2008, 1, 6): datetime(2008, 1, 7), - datetime(2008, 1, 7): datetime(2008, 1, 8), - }, - ), - ( - 2 * CDay(), - { - datetime(2008, 1, 1): datetime(2008, 1, 3), - datetime(2008, 1, 4): datetime(2008, 1, 8), - datetime(2008, 1, 5): datetime(2008, 1, 8), - datetime(2008, 1, 6): datetime(2008, 1, 8), - datetime(2008, 1, 7): datetime(2008, 1, 9), - }, - ), - ( - -CDay(), - { - datetime(2008, 1, 1): datetime(2007, 12, 31), - datetime(2008, 1, 4): datetime(2008, 1, 3), - datetime(2008, 1, 5): datetime(2008, 1, 4), - datetime(2008, 1, 6): datetime(2008, 1, 4), - datetime(2008, 1, 7): datetime(2008, 1, 4), - datetime(2008, 1, 8): datetime(2008, 1, 7), - }, - ), - ( - -2 * CDay(), - { - datetime(2008, 1, 1): datetime(2007, 12, 28), - datetime(2008, 1, 4): datetime(2008, 1, 2), - datetime(2008, 1, 5): datetime(2008, 1, 3), - datetime(2008, 1, 6): datetime(2008, 1, 3), - datetime(2008, 1, 7): datetime(2008, 1, 3), - datetime(2008, 1, 8): datetime(2008, 1, 4), - datetime(2008, 1, 9): datetime(2008, 1, 7), - }, - ), - ( - CDay(0), - { - datetime(2008, 1, 1): datetime(2008, 1, 1), - datetime(2008, 1, 4): datetime(2008, 1, 4), - datetime(2008, 1, 5): datetime(2008, 1, 7), - datetime(2008, 1, 6): datetime(2008, 1, 7), - datetime(2008, 1, 7): datetime(2008, 1, 7), - }, - ), - ] - - 
@pytest.mark.parametrize("case", apply_cases) - def test_apply(self, case): - offset, cases = case - for base, expected in cases.items(): - assert_offset_equal(offset, base, expected) - - def test_apply_large_n(self): - dt = datetime(2012, 10, 23) - - result = dt + CDay(10) - assert result == datetime(2012, 11, 6) - - result = dt + CDay(100) - CDay(100) - assert result == dt - - off = CDay() * 6 - rs = datetime(2012, 1, 1) - off - xp = datetime(2011, 12, 23) - assert rs == xp - - st = datetime(2011, 12, 18) - rs = st + off - xp = datetime(2011, 12, 26) - assert rs == xp - - def test_apply_corner(self): - msg = ( - "Only know how to combine trading day " - "with datetime, datetime64 or timedelta" - ) - with pytest.raises(ApplyTypeError, match=msg): - CDay().apply(BMonthEnd()) - def test_holidays(self): # Define a TradingDay offset holidays = ["2012-05-01", datetime(2013, 5, 1), np.datetime64("2014-05-01")]
- [ ] closes #xxxx - [ ] tests added / passed - [ ] Ensure all linting tests pass, see [here](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit) for how to run them - [ ] whatsnew entry
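The sharing pattern in the diff above hangs the `BDay` tests off a `_offset` class attribute so that `TestCustomBusinessDay` can inherit them by only overriding `_offset`. A minimal sketch of that pattern (class and method names here are illustrative, not from the PR):

```python
from datetime import datetime

from pandas.tseries.offsets import BDay, CDay


class SharedOffsetTests:
    # subclasses only need to point _offset at a different offset class
    _offset = BDay

    def check_rollforward(self):
        # 2008-01-05 is a Saturday; both offsets roll forward to Monday
        assert self._offset(10).rollforward(datetime(2008, 1, 5)) == datetime(
            2008, 1, 7
        )


class SharedCDayTests(SharedOffsetTests):
    _offset = CDay
```

Running both classes exercises the same assertions against both offset types, which is exactly what subclassing `TestBusinessDay` buys the custom-business-day tests.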
https://api.github.com/repos/pandas-dev/pandas/pulls/44493
2021-11-16T22:31:07Z
2021-11-17T02:01:18Z
2021-11-17T02:01:18Z
2021-11-17T02:43:13Z
BUG: NumericArray.__pos__ should make a copy
diff --git a/pandas/core/arrays/numeric.py b/pandas/core/arrays/numeric.py index e1990dc064a84..eb955e4d42bc5 100644 --- a/pandas/core/arrays/numeric.py +++ b/pandas/core/arrays/numeric.py @@ -168,7 +168,7 @@ def __neg__(self): return type(self)(-self._data, self._mask.copy()) def __pos__(self): - return self + return self.copy() def __abs__(self): return type(self)(abs(self._data), self._mask.copy()) diff --git a/pandas/core/arrays/timedeltas.py b/pandas/core/arrays/timedeltas.py index 91e90ebdb6253..d247dd3d0d366 100644 --- a/pandas/core/arrays/timedeltas.py +++ b/pandas/core/arrays/timedeltas.py @@ -800,7 +800,7 @@ def __neg__(self) -> TimedeltaArray: return type(self)(-self._ndarray) def __pos__(self) -> TimedeltaArray: - return type(self)(self._ndarray, freq=self.freq) + return type(self)(self._ndarray.copy(), freq=self.freq) def __abs__(self) -> TimedeltaArray: # Note: freq is not preserved diff --git a/pandas/tests/arrays/floating/test_arithmetic.py b/pandas/tests/arrays/floating/test_arithmetic.py index e674b49a99bd4..bd0b8b9c574a4 100644 --- a/pandas/tests/arrays/floating/test_arithmetic.py +++ b/pandas/tests/arrays/floating/test_arithmetic.py @@ -200,4 +200,6 @@ def test_unary_float_operators(float_ea_dtype, source, neg_target, abs_target): tm.assert_extension_array_equal(neg_result, neg_target) tm.assert_extension_array_equal(pos_result, arr) + assert not np.shares_memory(pos_result._data, arr._data) + assert not np.shares_memory(pos_result._mask, arr._mask) tm.assert_extension_array_equal(abs_result, abs_target) diff --git a/pandas/tests/arrays/integer/test_arithmetic.py b/pandas/tests/arrays/integer/test_arithmetic.py index 4f66e2ecfd355..2cf66e599a09a 100644 --- a/pandas/tests/arrays/integer/test_arithmetic.py +++ b/pandas/tests/arrays/integer/test_arithmetic.py @@ -300,4 +300,6 @@ def test_unary_int_operators(any_signed_int_ea_dtype, source, neg_target, abs_ta tm.assert_extension_array_equal(neg_result, neg_target) 
tm.assert_extension_array_equal(pos_result, arr) + assert not np.shares_memory(pos_result._data, arr._data) + assert not np.shares_memory(pos_result._mask, arr._mask) tm.assert_extension_array_equal(abs_result, abs_target) diff --git a/pandas/tests/arrays/test_timedeltas.py b/pandas/tests/arrays/test_timedeltas.py index 98329776242f1..10d12446666ce 100644 --- a/pandas/tests/arrays/test_timedeltas.py +++ b/pandas/tests/arrays/test_timedeltas.py @@ -99,9 +99,11 @@ def test_pos(self): result = +arr tm.assert_timedelta_array_equal(result, arr) + assert not np.shares_memory(result._ndarray, arr._ndarray) result2 = np.positive(arr) tm.assert_timedelta_array_equal(result2, arr) + assert not np.shares_memory(result2._ndarray, arr._ndarray) def test_neg(self): vals = np.array([-3600 * 10 ** 9, "NaT", 7200 * 10 ** 9], dtype="m8[ns]")
- [ ] closes #xxxx - [x] tests added / passed - [x] Ensure all linting tests pass, see [here](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit) for how to run them - [ ] whatsnew entry
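After this change `+arr` returns a copy rather than the same object, so mutating the result no longer aliases the original array. A small demonstration of the fixed behavior (assuming a pandas version that includes this fix):

```python
import pandas as pd

arr = pd.array([1, 2, None], dtype="Int64")
pos = +arr  # NumericArray.__pos__ now returns a copy

pos[0] = 99
# the original is untouched because the buffers are no longer shared
assert arr[0] == 1
```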
https://api.github.com/repos/pandas-dev/pandas/pulls/44492
2021-11-16T20:31:08Z
2021-11-17T02:03:09Z
2021-11-17T02:03:09Z
2021-11-17T02:30:48Z
DOCS: use non-deprecated method in doc examples
diff --git a/doc/source/user_guide/style.ipynb b/doc/source/user_guide/style.ipynb index 3a991b5338c38..62e44d73d9f1c 100644 --- a/doc/source/user_guide/style.ipynb +++ b/doc/source/user_guide/style.ipynb @@ -242,7 +242,7 @@ "metadata": {}, "outputs": [], "source": [ - "s = df.style.format('{:.0f}').hide_columns([('Random', 'Tumour'), ('Random', 'Non-Tumour')])\n", + "s = df.style.format('{:.0f}').hide([('Random', 'Tumour'), ('Random', 'Non-Tumour')], axis=\"columns\")\n", "s" ] }, @@ -1384,7 +1384,7 @@ " .applymap(style_negative, props='color:red;')\\\n", " .applymap(lambda v: 'opacity: 20%;' if (v < 0.3) and (v > -0.3) else None)\\\n", " .set_table_styles([{\"selector\": \"th\", \"props\": \"color: blue;\"}])\\\n", - " .hide_index()\n", + " .hide(axis=\"index\")\n", "style1" ] },
as described
https://api.github.com/repos/pandas-dev/pandas/pulls/44491
2021-11-16T20:08:37Z
2021-11-17T01:49:38Z
2021-11-17T01:49:38Z
2021-11-17T16:59:34Z
TST: fix deprecation warning in test
diff --git a/pandas/tests/io/formats/style/test_style.py b/pandas/tests/io/formats/style/test_style.py index 85519398b4444..e793857989ac1 100644 --- a/pandas/tests/io/formats/style/test_style.py +++ b/pandas/tests/io/formats/style/test_style.py @@ -319,7 +319,12 @@ def test_clear(mi_styler_comp): # tests vars are not same vals on obj and clean copy before clear (except for excl) for attr in [a for a in styler.__dict__ if not (callable(a) or a in excl)]: res = getattr(styler, attr) == getattr(clean_copy, attr) - assert not (all(res) if (hasattr(res, "__iter__") and len(res) > 0) else res) + if hasattr(res, "__iter__") and len(res) > 0: + assert not all(res) # some element in iterable differs + elif hasattr(res, "__iter__") and len(res) == 0: + pass # empty array + else: + assert not res # explicit var differs # test vars have same vales on obj and clean copy after clearing styler.clear()
Fixes a NumPy deprecation warning about evaluating the ambiguous truth value of an empty array.
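The warning comes from feeding a possibly empty NumPy array to `all()`/truth testing: NumPy deprecated the ambiguous truth value of empty arrays, so the test now branches on length first. The guard pattern, extracted as a standalone sketch (the helper name `differs` is illustrative):

```python
import numpy as np


def differs(res) -> bool:
    """Return True if a comparison result signals a difference."""
    if hasattr(res, "__iter__") and len(res) > 0:
        return not all(res)  # some element in the iterable differs
    if hasattr(res, "__iter__"):
        return False  # empty array: nothing to compare
    return not res  # plain scalar comparison
```

This avoids ever evaluating `bool()` on an empty array, which is what triggered the deprecation warning.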
https://api.github.com/repos/pandas-dev/pandas/pulls/44489
2021-11-16T19:37:19Z
2021-11-20T07:28:38Z
2021-11-20T07:28:38Z
2021-11-20T07:28:42Z
Fix 'rtol' and 'atol' for numeric extension types
diff --git a/pandas/_testing/asserters.py b/pandas/_testing/asserters.py index b5ed173fc2473..5d3babfb1c7be 100644 --- a/pandas/_testing/asserters.py +++ b/pandas/_testing/asserters.py @@ -1068,6 +1068,8 @@ def assert_series_equal( assert_extension_array_equal( left._values, right._values, + rtol=rtol, + atol=atol, check_dtype=check_dtype, index_values=np.asarray(left.index), ) diff --git a/pandas/tests/util/test_assert_series_equal.py b/pandas/tests/util/test_assert_series_equal.py index 2ebc6e17ba497..150e7e8f3d738 100644 --- a/pandas/tests/util/test_assert_series_equal.py +++ b/pandas/tests/util/test_assert_series_equal.py @@ -1,5 +1,7 @@ import pytest +from pandas.core.dtypes.common import is_extension_array_dtype + import pandas as pd from pandas import ( Categorical, @@ -105,7 +107,7 @@ def test_series_not_equal_metadata_mismatch(kwargs): @pytest.mark.parametrize("data1,data2", [(0.12345, 0.12346), (0.1235, 0.1236)]) -@pytest.mark.parametrize("dtype", ["float32", "float64"]) +@pytest.mark.parametrize("dtype", ["float32", "float64", "Float32"]) @pytest.mark.parametrize("decimals", [0, 1, 2, 3, 5, 10]) def test_less_precise(data1, data2, dtype, decimals): rtol = 10 ** -decimals @@ -115,7 +117,10 @@ def test_less_precise(data1, data2, dtype, decimals): if (decimals == 5 or decimals == 10) or ( decimals >= 3 and abs(data1 - data2) >= 0.0005 ): - msg = "Series values are different" + if is_extension_array_dtype(dtype): + msg = "ExtensionArray are different" + else: + msg = "Series values are different" with pytest.raises(AssertionError, match=msg): tm.assert_series_equal(s1, s2, rtol=rtol) else:
- [ ] closes #xxxx - [x] tests added / passed - [ ] Ensure all linting tests pass, see [here](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit) for how to run them - [ ] whatsnew entry cc @stefansorgqc
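With the fix, `assert_series_equal` forwards `rtol`/`atol` down to `assert_extension_array_equal`, so tolerances now apply to nullable extension dtypes as well. A sketch of the behavior this enables (assuming a pandas version with this fix):

```python
import pandas as pd
import pandas._testing as tm

left = pd.Series([0.12345], dtype="Float64")   # nullable extension dtype
right = pd.Series([0.12346], dtype="Float64")

# loose tolerance: passes now that rtol reaches the extension-array check
tm.assert_series_equal(left, right, rtol=1e-3)

# strict tolerance: still raises
strict_failed = False
try:
    tm.assert_series_equal(left, right, rtol=1e-9, atol=0)
except AssertionError:
    strict_failed = True
```

Previously the tolerances were silently dropped for extension-dtype Series, so the strict case above would have behaved the same as the loose one.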
https://api.github.com/repos/pandas-dev/pandas/pulls/44488
2021-11-16T19:30:03Z
2021-11-25T17:48:38Z
2021-11-25T17:48:38Z
2021-11-25T17:48:42Z
update astype keyerror msg
diff --git a/pandas/core/generic.py b/pandas/core/generic.py index fd8af2c0cedd0..94363f30ae2a2 100644 --- a/pandas/core/generic.py +++ b/pandas/core/generic.py @@ -5835,7 +5835,8 @@ def astype( if col_name not in self: raise KeyError( "Only a column name can be used for the " - "key in a dtype mappings argument." + "key in a dtype mappings argument. " + f"'{col_name}' not found in columns." ) # GH#44417 cast to Series so we can use .iat below, which will be diff --git a/pandas/tests/frame/methods/test_astype.py b/pandas/tests/frame/methods/test_astype.py index e5e07761fd755..feee03bbb91a2 100644 --- a/pandas/tests/frame/methods/test_astype.py +++ b/pandas/tests/frame/methods/test_astype.py @@ -222,10 +222,13 @@ def test_astype_dict_like(self, dtype_class): # in the keys of the dtype dict dt4 = dtype_class({"b": str, 2: str}) dt5 = dtype_class({"e": str}) - msg = "Only a column name can be used for the key in a dtype mappings argument" - with pytest.raises(KeyError, match=msg): + msg_frame = ( + "Only a column name can be used for the key in a dtype mappings argument. " + "'{}' not found in columns." + ) + with pytest.raises(KeyError, match=msg_frame.format(2)): df.astype(dt4) - with pytest.raises(KeyError, match=msg): + with pytest.raises(KeyError, match=msg_frame.format("e")): df.astype(dt5) tm.assert_frame_equal(df, original)
Hi! This is extremely minor and I just quickly did it in Codespaces, but I ran into this a few times and thought it might be helpful for others.
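The change appends the offending key to the error, so the failure names the missing column instead of leaving the user to diff the mapping against `df.columns` by hand. What a user sees after this PR (message wording taken from the diff):

```python
import pandas as pd

df = pd.DataFrame({"a": [1], "b": [2]})

try:
    df.astype({"e": str})  # "e" is not a column
except KeyError as exc:
    message = str(exc)

# the missing key is now spelled out in the message
assert "'e'" in message
```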
https://api.github.com/repos/pandas-dev/pandas/pulls/44487
2021-11-16T18:45:37Z
2021-11-20T21:14:03Z
2021-11-20T21:14:03Z
2021-11-20T21:14:07Z
BUG: fix get_indexer_non_unique() with 'object' targets with NaNs (#4…
diff --git a/doc/source/whatsnew/v1.4.0.rst b/doc/source/whatsnew/v1.4.0.rst index 39e3894f86302..9c9ec0eddb225 100644 --- a/doc/source/whatsnew/v1.4.0.rst +++ b/doc/source/whatsnew/v1.4.0.rst @@ -592,9 +592,9 @@ Strings Interval ^^^^^^^^ -- Bug in :meth:`IntervalIndex.get_indexer_non_unique` returning boolean mask instead of array of integers for a non unique and non monotonic index (:issue:`44084`) - Bug in :meth:`Series.where` with ``IntervalDtype`` incorrectly raising when the ``where`` call should not replace anything (:issue:`44181`) - +- Indexing ^^^^^^^^ @@ -625,6 +625,8 @@ Indexing - Bug in :meth:`DataFrame.loc.__setitem__` and :meth:`DataFrame.iloc.__setitem__` with mixed dtypes sometimes failing to operate in-place (:issue:`44345`) - Bug in :meth:`DataFrame.loc.__getitem__` incorrectly raising ``KeyError`` when selecting a single column with a boolean key (:issue:`44322`). - Bug in indexing on columns with ``loc`` or ``iloc`` using a slice with a negative step with ``ExtensionDtype`` columns incorrectly raising (:issue:`44551`) +- Bug in :meth:`IntervalIndex.get_indexer_non_unique` returning boolean mask instead of array of integers for a non unique and non monotonic index (:issue:`44084`) +- Bug in :meth:`IntervalIndex.get_indexer_non_unique` not handling targets of ``dtype`` 'object' with NaNs correctly (:issue:`44482`) - Missing diff --git a/pandas/_libs/index.pyx b/pandas/_libs/index.pyx index 670034c4dc4c1..969da5aa53e3e 100644 --- a/pandas/_libs/index.pyx +++ b/pandas/_libs/index.pyx @@ -338,7 +338,12 @@ cdef class IndexEngine: missing = np.empty(n_t, dtype=np.intp) # map each starget to its position in the index - if stargets and len(stargets) < 5 and self.is_monotonic_increasing: + if ( + stargets and + len(stargets) < 5 and + not any([checknull(t) for t in stargets]) and + self.is_monotonic_increasing + ): # if there are few enough stargets and the index is monotonically # increasing, then use binary search for each starget remaining_stargets = 
set() diff --git a/pandas/tests/indexes/test_indexing.py b/pandas/tests/indexes/test_indexing.py index f7cffe48d1722..9acdd52178e0e 100644 --- a/pandas/tests/indexes/test_indexing.py +++ b/pandas/tests/indexes/test_indexing.py @@ -332,3 +332,12 @@ def test_get_indexer_non_unique_multiple_nans(idx, target, expected): axis = Index(idx) actual = axis.get_indexer_for(target) tm.assert_numpy_array_equal(actual, expected) + + +def test_get_indexer_non_unique_nans_in_object_dtype_target(nulls_fixture): + idx = Index([1.0, 2.0]) + target = Index([1, nulls_fixture], dtype="object") + + result_idx, result_missing = idx.get_indexer_non_unique(target) + tm.assert_numpy_array_equal(result_idx, np.array([0, -1], dtype=np.intp)) + tm.assert_numpy_array_equal(result_missing, np.array([1], dtype=np.intp))
…4482) numpy.searchsorted() does not handle NaNs in 'object' arrays as expected (numpy/numpy#15499), so NaNs cannot be located via binary search; binary search is therefore used only for targets without NaNs. - [x] closes #44482 - [x] tests added / passed - [x] Ensure all linting tests pass, see [here](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit) for how to run them - [x] whatsnew entry This also moves the whatsnew entry of #44404 to its correct place.
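The NumPy quirk this PR works around is easy to reproduce: for object dtype, `searchsorted` falls back to Python comparisons, and every comparison against NaN is False, so NaN appears to belong at position 0 instead of after all real numbers:

```python
import numpy as np

float_pos = np.searchsorted(np.array([1.0, 2.0]), np.nan)
object_pos = np.searchsorted(np.array([1.0, 2.0], dtype=object), np.nan)

# float dtype sorts NaN last; object dtype puts it first (numpy/numpy#15499),
# so binary search cannot be trusted when the target may contain NaNs
print(float_pos, object_pos)
```

This is why the `index.pyx` change skips the binary-search fast path whenever any starget is null.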
https://api.github.com/repos/pandas-dev/pandas/pulls/44483
2021-11-16T13:29:33Z
2021-11-26T15:39:39Z
2021-11-26T15:39:38Z
2021-11-27T20:09:22Z
CLN: TODOs and FIXMEs
diff --git a/pandas/core/arrays/datetimelike.py b/pandas/core/arrays/datetimelike.py index 2e1ebf9d5a266..a8cc07c8fd964 100644 --- a/pandas/core/arrays/datetimelike.py +++ b/pandas/core/arrays/datetimelike.py @@ -1466,8 +1466,6 @@ def max(self, *, axis: int | None = None, skipna: bool = True, **kwargs): Index.max : Return the maximum value in an Index. Series.max : Return the maximum value in a Series. """ - # TODO: skipna is broken with max. - # See https://github.com/pandas-dev/pandas/issues/24265 nv.validate_max((), kwargs) nv.validate_minmax_axis(axis, self.ndim) diff --git a/pandas/core/arrays/masked.py b/pandas/core/arrays/masked.py index 568f3484e78e4..34aaf054de48e 100644 --- a/pandas/core/arrays/masked.py +++ b/pandas/core/arrays/masked.py @@ -605,7 +605,7 @@ def value_counts(self, dropna: bool = True) -> Series: data = self._data[~self._mask] value_counts = Index(data).value_counts() - # TODO(extension) + # TODO(ExtensionIndex) # if we have allow Index to hold an ExtensionArray # this is easier index = value_counts.index._values.astype(object) diff --git a/pandas/core/groupby/generic.py b/pandas/core/groupby/generic.py index b8354e800753d..4535010b29c3a 100644 --- a/pandas/core/groupby/generic.py +++ b/pandas/core/groupby/generic.py @@ -1039,7 +1039,7 @@ def _wrap_applied_output_series( key_index, ) -> DataFrame | Series: # this is to silence a DeprecationWarning - # TODO: Remove when default dtype of empty Series is object + # TODO(2.0): Remove when default dtype of empty Series is object kwargs = first_not_none._construct_axes_dict() backup = create_series_with_explicit_dtype(dtype_if_empty=object, **kwargs) values = [x if (x is not None) else backup for x in values] diff --git a/pandas/core/groupby/groupby.py b/pandas/core/groupby/groupby.py index 00c4d2778e545..97f5ef20c5c5e 100644 --- a/pandas/core/groupby/groupby.py +++ b/pandas/core/groupby/groupby.py @@ -3337,7 +3337,8 @@ def pct_change(self, periods=1, fill_method="pad", limit=None, freq=None, 
axis=0 Series or DataFrame Percentage changes within each group. """ - # TODO: Remove this conditional for SeriesGroupBy when GH#23918 is fixed + # TODO(GH#23918): Remove this conditional for SeriesGroupBy when + # GH#23918 is fixed if freq is not None or axis != 0: return self.apply( lambda x: x.pct_change( diff --git a/pandas/core/groupby/ops.py b/pandas/core/groupby/ops.py index 60c8851f059fe..8223a04883738 100644 --- a/pandas/core/groupby/ops.py +++ b/pandas/core/groupby/ops.py @@ -868,8 +868,8 @@ def result_arraylike(self) -> ArrayLike: Analogous to result_index, but returning an ndarray/ExtensionArray allowing us to retain ExtensionDtypes not supported by Index. """ - # TODO: once Index supports arbitrary EAs, this can be removed in favor - # of result_index + # TODO(ExtensionIndex): once Index supports arbitrary EAs, this can + # be removed in favor of result_index if len(self.groupings) == 1: return self.groupings[0].group_arraylike diff --git a/pandas/core/internals/managers.py b/pandas/core/internals/managers.py index e77cdd7ad7369..74d785586b950 100644 --- a/pandas/core/internals/managers.py +++ b/pandas/core/internals/managers.py @@ -80,8 +80,6 @@ operate_blockwise, ) -# TODO: flexible with index=None and/or items=None - T = TypeVar("T", bound="BaseBlockManager") diff --git a/pandas/core/nanops.py b/pandas/core/nanops.py index a95592c96d411..10d95dfbb9181 100644 --- a/pandas/core/nanops.py +++ b/pandas/core/nanops.py @@ -1781,16 +1781,20 @@ def na_accum_func(values: ArrayLike, accum_func, *, skipna: bool) -> ArrayLike: # We need to define mask before masking NaTs mask = isna(values) - if accum_func == np.minimum.accumulate: - # Note: the accum_func comparison fails as an "is" comparison - y = values.view("i8") - y[mask] = lib.i8max - changed = True - else: - y = values - changed = False + y = values.view("i8") + # Note: the accum_func comparison fails as an "is" comparison + changed = accum_func == np.minimum.accumulate + + try: + if changed: + y[mask] 
= lib.i8max + + result = accum_func(y, axis=0) + finally: + if changed: + # restore NaT elements + y[mask] = iNaT - result = accum_func(y.view("i8"), axis=0) if skipna: result[mask] = iNaT elif accum_func == np.minimum.accumulate: @@ -1800,10 +1804,6 @@ def na_accum_func(values: ArrayLike, accum_func, *, skipna: bool) -> ArrayLike: # everything up to the first non-na entry stays NaT result[: nz[0]] = iNaT - if changed: - # restore NaT elements - y[mask] = iNaT # TODO: could try/finally for this? - if isinstance(values.dtype, np.dtype): result = result.view(orig_dtype) else: diff --git a/pandas/io/sas/sas7bdat.py b/pandas/io/sas/sas7bdat.py index 300df9728cd75..38b42f4470058 100644 --- a/pandas/io/sas/sas7bdat.py +++ b/pandas/io/sas/sas7bdat.py @@ -103,13 +103,14 @@ class _Column: col_id: int name: str | bytes label: str | bytes - format: str | bytes # TODO: i think allowing bytes is from py2 days + format: str | bytes ctype: bytes length: int def __init__( self, col_id: int, + # These can be bytes when convert_header_text is False name: str | bytes, label: str | bytes, format: str | bytes, diff --git a/pandas/io/sql.py b/pandas/io/sql.py index 75cabf1681bc7..4be54ceaa2bcf 100644 --- a/pandas/io/sql.py +++ b/pandas/io/sql.py @@ -2158,9 +2158,6 @@ def to_sql( table.insert(chunksize, method) def has_table(self, name: str, schema: str | None = None): - # TODO(wesm): unused? - # escape = _get_valid_sqlite_name - # esc_name = escape(name) wld = "?" 
query = f"SELECT name FROM sqlite_master WHERE type='table' AND name={wld};" diff --git a/pandas/tests/apply/test_series_apply.py b/pandas/tests/apply/test_series_apply.py index 1d0b64c1835df..b7084e2bc6dc7 100644 --- a/pandas/tests/apply/test_series_apply.py +++ b/pandas/tests/apply/test_series_apply.py @@ -794,18 +794,15 @@ def test_apply_to_timedelta(): list_of_valid_strings = ["00:00:01", "00:00:02"] a = pd.to_timedelta(list_of_valid_strings) b = Series(list_of_valid_strings).apply(pd.to_timedelta) - # FIXME: dont leave commented-out - # Can't compare until apply on a Series gives the correct dtype - # assert_series_equal(a, b) + tm.assert_series_equal(Series(a), b) list_of_strings = ["00:00:01", np.nan, pd.NaT, pd.NaT] - a = pd.to_timedelta(list_of_strings) # noqa + a = pd.to_timedelta(list_of_strings) with tm.assert_produces_warning(FutureWarning, match="Inferring timedelta64"): ser = Series(list_of_strings) - b = ser.apply(pd.to_timedelta) # noqa - # Can't compare until apply on a Series gives the correct dtype - # assert_series_equal(a, b) + b = ser.apply(pd.to_timedelta) + tm.assert_series_equal(Series(a), b) @pytest.mark.parametrize( diff --git a/pandas/tests/arrays/floating/test_arithmetic.py b/pandas/tests/arrays/floating/test_arithmetic.py index e674b49a99bd4..5d959b9224e52 100644 --- a/pandas/tests/arrays/floating/test_arithmetic.py +++ b/pandas/tests/arrays/floating/test_arithmetic.py @@ -142,17 +142,16 @@ def test_error_invalid_values(data, all_arithmetic_operators): with pytest.raises(TypeError, match=msg): ops(pd.Series("foo", index=s.index)) - if op != "__rpow__": - # TODO(extension) - # rpow with a datetimelike coerces the integer array incorrectly - msg = ( - "can only perform ops with numeric values|" - "cannot perform .* with this index type: DatetimeArray|" + msg = "|".join( + [ + "can only perform ops with numeric values", + "cannot perform .* with this index type: DatetimeArray", "Addition/subtraction of integers and integer-arrays " - 
"with DatetimeArray is no longer supported. *" - ) - with pytest.raises(TypeError, match=msg): - ops(pd.Series(pd.date_range("20180101", periods=len(s)))) + "with DatetimeArray is no longer supported. *", + ] + ) + with pytest.raises(TypeError, match=msg): + ops(pd.Series(pd.date_range("20180101", periods=len(s)))) # Various diff --git a/pandas/tests/arrays/integer/test_arithmetic.py b/pandas/tests/arrays/integer/test_arithmetic.py index 4f66e2ecfd355..5a5d047996747 100644 --- a/pandas/tests/arrays/integer/test_arithmetic.py +++ b/pandas/tests/arrays/integer/test_arithmetic.py @@ -179,17 +179,16 @@ def test_error_invalid_values(data, all_arithmetic_operators): with pytest.raises(TypeError, match=msg): ops(pd.Series("foo", index=s.index)) - if op != "__rpow__": - # TODO(extension) - # rpow with a datetimelike coerces the integer array incorrectly - msg = ( - "can only perform ops with numeric values|" - "cannot perform .* with this index type: DatetimeArray|" + msg = "|".join( + [ + "can only perform ops with numeric values", + "cannot perform .* with this index type: DatetimeArray", "Addition/subtraction of integers and integer-arrays " - "with DatetimeArray is no longer supported. *" - ) - with pytest.raises(TypeError, match=msg): - ops(pd.Series(pd.date_range("20180101", periods=len(s)))) + "with DatetimeArray is no longer supported. 
*", + ] + ) + with pytest.raises(TypeError, match=msg): + ops(pd.Series(pd.date_range("20180101", periods=len(s)))) # Various diff --git a/pandas/tests/arrays/test_datetimes.py b/pandas/tests/arrays/test_datetimes.py index 5b9df44f5b565..0453b1ef1c40d 100644 --- a/pandas/tests/arrays/test_datetimes.py +++ b/pandas/tests/arrays/test_datetimes.py @@ -33,9 +33,13 @@ def test_cmp_dt64_arraylike_tznaive(self, comparison_op): result = op(arr, arr) tm.assert_numpy_array_equal(result, expected) - for other in [right, np.array(right)]: - # TODO: add list and tuple, and object-dtype once those - # are fixed in the constructor + for other in [ + right, + np.array(right), + list(right), + tuple(right), + right.astype(object), + ]: result = op(arr, other) tm.assert_numpy_array_equal(result, expected) diff --git a/pandas/tests/frame/test_arithmetic.py b/pandas/tests/frame/test_arithmetic.py index 1ddb18c218cc6..6927a5927ef48 100644 --- a/pandas/tests/frame/test_arithmetic.py +++ b/pandas/tests/frame/test_arithmetic.py @@ -925,8 +925,8 @@ def test_binop_other(self, op, value, dtype, switch_numexpr_min_elements): (operator.mul, "bool"), } - e = DummyElement(value, dtype) - s = DataFrame({"A": [e.value, e.value]}, dtype=e.dtype) + elem = DummyElement(value, dtype) + df = DataFrame({"A": [elem.value, elem.value]}, dtype=elem.dtype) invalid = { (operator.pow, "<M8[ns]"), @@ -960,7 +960,7 @@ def test_binop_other(self, op, value, dtype, switch_numexpr_min_elements): with pytest.raises(TypeError, match=msg): with tm.assert_produces_warning(warn): - op(s, e.value) + op(df, elem.value) elif (op, dtype) in skip: @@ -971,19 +971,17 @@ def test_binop_other(self, op, value, dtype, switch_numexpr_min_elements): else: warn = None with tm.assert_produces_warning(warn): - op(s, e.value) + op(df, elem.value) else: msg = "operator '.*' not implemented for .* dtypes" with pytest.raises(NotImplementedError, match=msg): - op(s, e.value) + op(df, elem.value) else: - # FIXME: Since dispatching to 
Series, this test no longer - # asserts anything meaningful with tm.assert_produces_warning(None): - result = op(s, e.value).dtypes - expected = op(s, value).dtypes + result = op(df, elem.value).dtypes + expected = op(df, value).dtypes tm.assert_series_equal(result, expected) @@ -1240,9 +1238,7 @@ def test_combineFrame(self, float_frame, mixed_float_frame, mixed_int_frame): added = float_frame + mixed_int_frame _check_mixed_float(added, dtype="float64") - def test_combine_series( - self, float_frame, mixed_float_frame, mixed_int_frame, datetime_frame - ): + def test_combine_series(self, float_frame, mixed_float_frame, mixed_int_frame): # Series series = float_frame.xs(float_frame.index[0]) @@ -1272,17 +1268,18 @@ def test_combine_series( added = mixed_float_frame + series.astype("float16") _check_mixed_float(added, dtype={"C": None}) - # FIXME: don't leave commented-out - # these raise with numexpr.....as we are adding an int64 to an - # uint64....weird vs int - - # added = mixed_int_frame + (100*series).astype('int64') - # _check_mixed_int(added, dtype = {"A": 'int64', "B": 'float64', "C": - # 'int64', "D": 'int64'}) - # added = mixed_int_frame + (100*series).astype('int32') - # _check_mixed_int(added, dtype = {"A": 'int32', "B": 'float64', "C": - # 'int32', "D": 'int64'}) + # these used to raise with numexpr as we are adding an int64 to an + # uint64....weird vs int + added = mixed_int_frame + (100 * series).astype("int64") + _check_mixed_int( + added, dtype={"A": "int64", "B": "float64", "C": "int64", "D": "int64"} + ) + added = mixed_int_frame + (100 * series).astype("int32") + _check_mixed_int( + added, dtype={"A": "int32", "B": "float64", "C": "int32", "D": "int64"} + ) + def test_combine_timeseries(self, datetime_frame): # TimeSeries ts = datetime_frame["A"] diff --git a/pandas/tests/indexes/datetimes/methods/test_insert.py b/pandas/tests/indexes/datetimes/methods/test_insert.py index 016a29e4cc266..592f4240ee750 100644 --- 
a/pandas/tests/indexes/datetimes/methods/test_insert.py +++ b/pandas/tests/indexes/datetimes/methods/test_insert.py @@ -236,7 +236,6 @@ def test_insert_mismatched_types_raises(self, tz_aware_fixture, item): result = dti.insert(1, item) if isinstance(item, np.ndarray): - # FIXME: without doing .item() here this segfaults assert item.item() == 0 expected = Index([dti[0], 0] + list(dti[1:]), dtype=object, name=9) else: diff --git a/pandas/tests/indexes/multi/test_formats.py b/pandas/tests/indexes/multi/test_formats.py index 17699aa32929e..a6dadd42f7bf0 100644 --- a/pandas/tests/indexes/multi/test_formats.py +++ b/pandas/tests/indexes/multi/test_formats.py @@ -87,10 +87,7 @@ def test_unicode_repr_issues(self): index = MultiIndex(levels=levels, codes=codes) repr(index.levels) - - # FIXME: dont leave commented-out - # NumPy bug - # repr(index.get_level_values(1)) + repr(index.get_level_values(1)) def test_repr_max_seq_items_equal_to_n(self, idx): # display.max_seq_items == n diff --git a/pandas/tests/series/methods/test_convert.py b/pandas/tests/series/methods/test_convert.py index b658929dfd0d5..178026c1efc09 100644 --- a/pandas/tests/series/methods/test_convert.py +++ b/pandas/tests/series/methods/test_convert.py @@ -108,15 +108,15 @@ def test_convert(self): result = ser._convert(datetime=True) tm.assert_series_equal(result, expected) - # preserver if non-object + # preserve if non-object ser = Series([1], dtype="float32") result = ser._convert(datetime=True) tm.assert_series_equal(result, ser) # FIXME: dont leave commented-out # res = ser.copy() - # r[0] = np.nan - # result = res._convert(convert_dates=True,convert_numeric=False) + # res[0] = np.nan + # result = res._convert(datetime=True, numeric=False) # assert result.dtype == 'M8[ns]' def test_convert_no_arg_error(self): diff --git a/pandas/tests/series/test_arithmetic.py b/pandas/tests/series/test_arithmetic.py index 8d18eba36af5e..b7379e0f6eb49 100644 --- a/pandas/tests/series/test_arithmetic.py +++ 
b/pandas/tests/series/test_arithmetic.py @@ -244,13 +244,6 @@ def test_add_corner_cases(self, datetime_series): result = empty + empty.copy() assert len(result) == 0 - # FIXME: dont leave commented-out - # TODO: this returned NotImplemented earlier, what to do? - # deltas = Series([timedelta(1)] * 5, index=np.arange(5)) - # sub_deltas = deltas[::2] - # deltas5 = deltas * 5 - # deltas = deltas + sub_deltas - def test_add_float_plus_int(self, datetime_series): # float + int int_ts = datetime_series.astype(int)[:-5] @@ -613,11 +606,6 @@ def test_comparison_operators_with_nas(self, comparison_op): tm.assert_series_equal(result, expected) - # FIXME: dont leave commented-out - # result = comparison_op(val, ser) - # expected = comparison_op(val, ser.dropna()).reindex(ser.index) - # tm.assert_series_equal(result, expected) - def test_ne(self): ts = Series([3, 4, 5, 6, 7], [3, 4, 5, 6, 7], dtype=float) expected = [True, True, False, True, True] diff --git a/pandas/tests/tools/test_to_datetime.py b/pandas/tests/tools/test_to_datetime.py index 4867ba58838ef..3842e9a625b8b 100644 --- a/pandas/tests/tools/test_to_datetime.py +++ b/pandas/tests/tools/test_to_datetime.py @@ -90,13 +90,13 @@ def test_to_datetime_format(self, cache): tm.assert_index_equal(result, expected) def test_to_datetime_format_YYYYMMDD(self, cache): - s = Series([19801222, 19801222] + [19810105] * 5) - expected = Series([Timestamp(x) for x in s.apply(str)]) + ser = Series([19801222, 19801222] + [19810105] * 5) + expected = Series([Timestamp(x) for x in ser.apply(str)]) - result = to_datetime(s, format="%Y%m%d", cache=cache) + result = to_datetime(ser, format="%Y%m%d", cache=cache) tm.assert_series_equal(result, expected) - result = to_datetime(s.apply(str), format="%Y%m%d", cache=cache) + result = to_datetime(ser.apply(str), format="%Y%m%d", cache=cache) tm.assert_series_equal(result, expected) # with NaT @@ -104,15 +104,15 @@ def test_to_datetime_format_YYYYMMDD(self, cache): [Timestamp("19801222"), 
Timestamp("19801222")] + [Timestamp("19810105")] * 5 ) expected[2] = np.nan - s[2] = np.nan + ser[2] = np.nan - result = to_datetime(s, format="%Y%m%d", cache=cache) + result = to_datetime(ser, format="%Y%m%d", cache=cache) tm.assert_series_equal(result, expected) # string with NaT - s = s.apply(str) - s[2] = "nat" - result = to_datetime(s, format="%Y%m%d", cache=cache) + ser2 = ser.apply(str) + ser2[2] = "nat" + result = to_datetime(ser2, format="%Y%m%d", cache=cache) tm.assert_series_equal(result, expected) def test_to_datetime_format_YYYYMMDD_coercion(self, cache): @@ -208,16 +208,16 @@ def test_to_datetime_with_NA(self, data, format, expected): def test_to_datetime_format_integer(self, cache): # GH 10178 - s = Series([2000, 2001, 2002]) - expected = Series([Timestamp(x) for x in s.apply(str)]) + ser = Series([2000, 2001, 2002]) + expected = Series([Timestamp(x) for x in ser.apply(str)]) - result = to_datetime(s, format="%Y", cache=cache) + result = to_datetime(ser, format="%Y", cache=cache) tm.assert_series_equal(result, expected) - s = Series([200001, 200105, 200206]) - expected = Series([Timestamp(x[:4] + "-" + x[4:]) for x in s.apply(str)]) + ser = Series([200001, 200105, 200206]) + expected = Series([Timestamp(x[:4] + "-" + x[4:]) for x in ser.apply(str)]) - result = to_datetime(s, format="%Y%m", cache=cache) + result = to_datetime(ser, format="%Y%m", cache=cache) tm.assert_series_equal(result, expected) @pytest.mark.parametrize( @@ -262,29 +262,36 @@ def test_to_datetime_format_time(self, cache): "01/10/2010 13:56:01", "%m/%d/%Y %H:%M:%S", Timestamp("2010-01-10 13:56:01"), - ] # , - # FIXME: don't leave commented-out - # ['01/10/2010 08:14 PM', '%m/%d/%Y %I:%M %p', - # Timestamp('2010-01-10 20:14')], - # ['01/10/2010 07:40 AM', '%m/%d/%Y %I:%M %p', - # Timestamp('2010-01-10 07:40')], - # ['01/10/2010 09:12:56 AM', '%m/%d/%Y %I:%M:%S %p', - # Timestamp('2010-01-10 09:12:56')] + ], + ] + locale_specific = [ + ["01/10/2010 08:14 PM", "%m/%d/%Y %I:%M %p", 
Timestamp("2010-01-10 20:14")], + ["01/10/2010 07:40 AM", "%m/%d/%Y %I:%M %p", Timestamp("2010-01-10 07:40")], + [ + "01/10/2010 09:12:56 AM", + "%m/%d/%Y %I:%M:%S %p", + Timestamp("2010-01-10 09:12:56"), + ], ] - for s, format, dt in data: - assert to_datetime(s, format=format, cache=cache) == dt + if locale.getlocale()[0] == "en_US": + # this fail on a CI build with LC_ALL=zh_CN.utf8, so en_US + # may be more specific than necessary. + data.extend(locale_specific) + + for value, format, dt in data: + assert to_datetime(value, format=format, cache=cache) == dt @td.skip_if_has_locale def test_to_datetime_with_non_exact(self, cache): # GH 10834 # 8904 # exact kw - s = Series( + ser = Series( ["19MAY11", "foobar19MAY11", "19MAY11:00:00:00", "19MAY11 00:00:00Z"] ) - result = to_datetime(s, format="%d%b%y", exact=False, cache=cache) + result = to_datetime(ser, format="%d%b%y", exact=False, cache=cache) expected = to_datetime( - s.str.extract(r"(\d+\w+\d+)", expand=False), format="%d%b%y", cache=cache + ser.str.extract(r"(\d+\w+\d+)", expand=False), format="%d%b%y", cache=cache ) tm.assert_series_equal(result, expected) @@ -543,8 +550,8 @@ def test_to_datetime_YYYYMMDD(self): def test_to_datetime_unparseable_ignore(self): # unparsable - s = "Month 1, 1999" - assert to_datetime(s, errors="ignore") == s + ser = "Month 1, 1999" + assert to_datetime(ser, errors="ignore") == ser @td.skip_if_windows # `tm.set_timezone` does not work in windows def test_to_datetime_now(self): @@ -1356,8 +1363,8 @@ def test_to_datetime_unit_fractional_seconds(self): # GH13834 epoch = 1370745748 - s = Series([epoch + t for t in np.arange(0, 2, 0.25)] + [iNaT]).astype(float) - result = to_datetime(s, unit="s") + ser = Series([epoch + t for t in np.arange(0, 2, 0.25)] + [iNaT]).astype(float) + result = to_datetime(ser, unit="s") expected = Series( [ Timestamp("2013-06-09 02:42:28") + timedelta(seconds=t) @@ -1397,13 +1404,6 @@ def test_to_timestamp_unit_coerce(self): class TestToDatetimeDataFrame: 
- @pytest.fixture(params=[True, False]) - def cache(self, request): - """ - cache keyword to pass to to_datetime. - """ - return request.param - @pytest.fixture def df(self): return DataFrame( @@ -1619,30 +1619,35 @@ def test_to_datetime_default(self, cache): xp = datetime(2001, 1, 1) assert rs == xp + @pytest.mark.xfail(reason="fails to enforce dayfirst=True, which would raise") + def test_to_datetime_respects_dayfirst(self, cache): # dayfirst is essentially broken - # FIXME: don't leave commented-out - # to_datetime('01-13-2012', dayfirst=True) - # pytest.raises(ValueError, to_datetime('01-13-2012', - # dayfirst=True)) + + # The msg here is not important since it isn't actually raised yet. + msg = "Invalid date specified" + with pytest.raises(ValueError, match=msg): + # if dayfirst is respected, then this would parse as month=13, which + # would raise + to_datetime("01-13-2012", dayfirst=True, cache=cache) def test_to_datetime_on_datetime64_series(self, cache): # #2699 - s = Series(date_range("1/1/2000", periods=10)) + ser = Series(date_range("1/1/2000", periods=10)) - result = to_datetime(s, cache=cache) - assert result[0] == s[0] + result = to_datetime(ser, cache=cache) + assert result[0] == ser[0] def test_to_datetime_with_space_in_series(self, cache): # GH 6428 - s = Series(["10/18/2006", "10/18/2008", " "]) + ser = Series(["10/18/2006", "10/18/2008", " "]) msg = r"(\(')?String does not contain a date(:', ' '\))?" 
with pytest.raises(ValueError, match=msg): - to_datetime(s, errors="raise", cache=cache) - result_coerce = to_datetime(s, errors="coerce", cache=cache) + to_datetime(ser, errors="raise", cache=cache) + result_coerce = to_datetime(ser, errors="coerce", cache=cache) expected_coerce = Series([datetime(2006, 10, 18), datetime(2008, 10, 18), NaT]) tm.assert_series_equal(result_coerce, expected_coerce) - result_ignore = to_datetime(s, errors="ignore", cache=cache) - tm.assert_series_equal(result_ignore, s) + result_ignore = to_datetime(ser, errors="ignore", cache=cache) + tm.assert_series_equal(result_ignore, ser) @td.skip_if_has_locale def test_to_datetime_with_apply(self, cache): @@ -1681,23 +1686,22 @@ def test_to_datetime_types(self, cache): expected = to_datetime(0, cache=cache) assert result == expected + def test_to_datetime_strings(self, cache): # GH 3888 (strings) expected = to_datetime(["2012"], cache=cache)[0] result = to_datetime("2012", cache=cache) assert result == expected - # FIXME: don't leave commented-out - # array = ['2012','20120101','20120101 12:01:01'] - array = ["20120101", "20120101 12:01:01"] + array = ["2012", "20120101", "20120101 12:01:01"] expected = list(to_datetime(array, cache=cache)) result = [Timestamp(date_str) for date_str in array] tm.assert_almost_equal(result, expected) - # FIXME: don't leave commented-out - # currently fails ### - # result = Timestamp('2012') - # expected = to_datetime('2012') - # assert result == expected + expected = Timestamp(2012, 1, 1) + result = Timestamp("2012") + assert result == expected + result = to_datetime("2012") + assert result == expected def test_to_datetime_unprocessable_input(self, cache): # GH 4928 @@ -1963,12 +1967,12 @@ def test_guess_datetime_format_for_array(self): class TestToDatetimeInferFormat: def test_to_datetime_infer_datetime_format_consistent_format(self, cache): - s = Series(date_range("20000101", periods=50, freq="H")) + ser = Series(date_range("20000101", periods=50, freq="H")) 
test_formats = ["%m-%d-%Y", "%m/%d/%Y %H:%M:%S.%f", "%Y-%m-%dT%H:%M:%S.%f"] for test_format in test_formats: - s_as_dt_strings = s.apply(lambda x: x.strftime(test_format)) + s_as_dt_strings = ser.apply(lambda x: x.strftime(test_format)) with_format = to_datetime(s_as_dt_strings, format=test_format, cache=cache) no_infer = to_datetime( @@ -1984,7 +1988,7 @@ def test_to_datetime_infer_datetime_format_consistent_format(self, cache): tm.assert_series_equal(no_infer, yes_infer) def test_to_datetime_infer_datetime_format_inconsistent_format(self, cache): - s = Series( + ser = Series( np.array( ["01/01/2011 00:00:00", "01-02-2011 00:00:00", "2011-01-03T00:00:00"] ) @@ -1993,31 +1997,31 @@ def test_to_datetime_infer_datetime_format_inconsistent_format(self, cache): # When the format is inconsistent, infer_datetime_format should just # fallback to the default parsing tm.assert_series_equal( - to_datetime(s, infer_datetime_format=False, cache=cache), - to_datetime(s, infer_datetime_format=True, cache=cache), + to_datetime(ser, infer_datetime_format=False, cache=cache), + to_datetime(ser, infer_datetime_format=True, cache=cache), ) - s = Series(np.array(["Jan/01/2011", "Feb/01/2011", "Mar/01/2011"])) + ser = Series(np.array(["Jan/01/2011", "Feb/01/2011", "Mar/01/2011"])) tm.assert_series_equal( - to_datetime(s, infer_datetime_format=False, cache=cache), - to_datetime(s, infer_datetime_format=True, cache=cache), + to_datetime(ser, infer_datetime_format=False, cache=cache), + to_datetime(ser, infer_datetime_format=True, cache=cache), ) def test_to_datetime_infer_datetime_format_series_with_nans(self, cache): - s = Series( + ser = Series( np.array( ["01/01/2011 00:00:00", np.nan, "01/03/2011 00:00:00", np.nan], dtype=object, ) ) tm.assert_series_equal( - to_datetime(s, infer_datetime_format=False, cache=cache), - to_datetime(s, infer_datetime_format=True, cache=cache), + to_datetime(ser, infer_datetime_format=False, cache=cache), + to_datetime(ser, infer_datetime_format=True, 
cache=cache), ) def test_to_datetime_infer_datetime_format_series_start_with_nans(self, cache): - s = Series( + ser = Series( np.array( [ np.nan, @@ -2031,8 +2035,8 @@ def test_to_datetime_infer_datetime_format_series_start_with_nans(self, cache): ) tm.assert_series_equal( - to_datetime(s, infer_datetime_format=False, cache=cache), - to_datetime(s, infer_datetime_format=True, cache=cache), + to_datetime(ser, infer_datetime_format=False, cache=cache), + to_datetime(ser, infer_datetime_format=True, cache=cache), ) @pytest.mark.parametrize( @@ -2040,8 +2044,8 @@ def test_to_datetime_infer_datetime_format_series_start_with_nans(self, cache): ) def test_infer_datetime_format_tz_name(self, tz_name, offset): # GH 33133 - s = Series([f"2019-02-02 08:07:13 {tz_name}"]) - result = to_datetime(s, infer_datetime_format=True) + ser = Series([f"2019-02-02 08:07:13 {tz_name}"]) + result = to_datetime(ser, infer_datetime_format=True) expected = Series( [Timestamp("2019-02-02 08:07:13").tz_localize(pytz.FixedOffset(offset))] ) @@ -2058,15 +2062,15 @@ def test_infer_datetime_format_tz_name(self, tz_name, offset): ) def test_infer_datetime_format_zero_tz(self, ts, zero_tz, is_utc): # GH 41047 - s = Series([ts + zero_tz]) - result = to_datetime(s, infer_datetime_format=True) + ser = Series([ts + zero_tz]) + result = to_datetime(ser, infer_datetime_format=True) tz = pytz.utc if is_utc else None expected = Series([Timestamp(ts, tz=tz)]) tm.assert_series_equal(result, expected) def test_to_datetime_iso8601_noleading_0s(self, cache): # GH 11871 - s = Series(["2014-1-1", "2014-2-2", "2015-3-3"]) + ser = Series(["2014-1-1", "2014-2-2", "2015-3-3"]) expected = Series( [ Timestamp("2014-01-01"), @@ -2074,8 +2078,10 @@ def test_to_datetime_iso8601_noleading_0s(self, cache): Timestamp("2015-03-03"), ] ) - tm.assert_series_equal(to_datetime(s, cache=cache), expected) - tm.assert_series_equal(to_datetime(s, format="%Y-%m-%d", cache=cache), expected) + tm.assert_series_equal(to_datetime(ser, 
cache=cache), expected) + tm.assert_series_equal( + to_datetime(ser, format="%Y-%m-%d", cache=cache), expected + ) class TestDaysInMonth:
- [ ] closes #xxxx - [ ] tests added / passed - [ ] Ensure all linting tests pass, see [here](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit) for how to run them - [ ] whatsnew entry We have to run out of these eventually, right? right?
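The diff above repeatedly collapses hand-concatenated alternatives into `msg = "|".join([...])` passed to `pytest.raises(match=msg)`. That works because pytest matches the string as a regular expression via `re.search`, so joining candidate messages with `|` builds one pattern that accepts any of them. A minimal stdlib-only sketch (the two messages are taken from the diff; the third string is a made-up non-match):

```python
import re

# pytest.raises(match=msg) applies re.search(msg, str(exc)), so joining the
# alternative error messages with "|" yields a single regex alternation that
# matches whichever message the operation actually raises.
msg = "|".join(
    [
        "can only perform ops with numeric values",
        "cannot perform .* with this index type: DatetimeArray",
    ]
)

# Each branch of the alternation matches independently.
assert re.search(msg, "cannot perform __add__ with this index type: DatetimeArray")
assert re.search(msg, "can only perform ops with numeric values")
assert re.search(msg, "some unrelated error") is None
```

One caveat of the pattern: regex metacharacters in the individual messages (like the `.*` above) stay live, so messages containing literal parentheses or brackets need `re.escape` before joining.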
https://api.github.com/repos/pandas-dev/pandas/pulls/44479
2021-11-16T03:28:35Z
2021-11-17T02:04:46Z
2021-11-17T02:04:46Z
2021-11-17T02:42:20Z
REF: rolling benchmarks to reduce redundant benchmarks
diff --git a/asv_bench/benchmarks/rolling.py b/asv_bench/benchmarks/rolling.py index 406b27dd37ea5..73adb12c171bf 100644 --- a/asv_bench/benchmarks/rolling.py +++ b/asv_bench/benchmarks/rolling.py @@ -9,22 +9,24 @@ class Methods: params = ( ["DataFrame", "Series"], - [10, 1000], + [("rolling", {"window": 10}), ("rolling", {"window": 1000}), ("expanding", {})], ["int", "float"], - ["median", "mean", "max", "min", "std", "count", "skew", "kurt", "sum"], + ["median", "mean", "max", "min", "std", "count", "skew", "kurt", "sum", "sem"], ) - param_names = ["constructor", "window", "dtype", "method"] + param_names = ["constructor", "window_kwargs", "dtype", "method"] - def setup(self, constructor, window, dtype, method): + def setup(self, constructor, window_kwargs, dtype, method): N = 10 ** 5 + window, kwargs = window_kwargs arr = (100 * np.random.random(N)).astype(dtype) - self.roll = getattr(pd, constructor)(arr).rolling(window) + obj = getattr(pd, constructor)(arr) + self.window = getattr(obj, window)(**kwargs) - def time_rolling(self, constructor, window, dtype, method): - getattr(self.roll, method)() + def time_method(self, constructor, window_kwargs, dtype, method): + getattr(self.window, method)() - def peakmem_rolling(self, constructor, window, dtype, method): - getattr(self.roll, method)() + def peakmem_method(self, constructor, window_kwargs, dtype, method): + getattr(self.window, method)() class Apply: @@ -46,19 +48,27 @@ def time_rolling(self, constructor, window, dtype, function, raw): self.roll.apply(function, raw=raw) -class NumbaEngine: +class NumbaEngineMethods: params = ( ["DataFrame", "Series"], ["int", "float"], - [np.sum, lambda x: np.sum(x) + 5], + [("rolling", {"window": 10}), ("expanding", {})], ["sum", "max", "min", "median", "mean"], [True, False], [None, 100], ) - param_names = ["constructor", "dtype", "function", "method", "parallel", "cols"] + param_names = [ + "constructor", + "dtype", + "window_kwargs", + "method", + "parallel", + "cols", + 
] - def setup(self, constructor, dtype, function, method, parallel, cols): + def setup(self, constructor, dtype, window_kwargs, method, parallel, cols): N = 10 ** 3 + window, kwargs = window_kwargs shape = (N, cols) if cols is not None and constructor != "Series" else N arr = (100 * np.random.random(shape)).astype(dtype) data = getattr(pd, constructor)(arr) @@ -66,84 +76,88 @@ def setup(self, constructor, dtype, function, method, parallel, cols): # Warm the cache with warnings.catch_warnings(record=True): # Catch parallel=True not being applicable e.g. 1D data - self.roll = data.rolling(10) - self.roll.apply( - function, raw=True, engine="numba", engine_kwargs={"parallel": parallel} - ) - getattr(self.roll, method)( + self.window = getattr(data, window)(**kwargs) + getattr(self.window, method)( engine="numba", engine_kwargs={"parallel": parallel} ) - self.expand = data.expanding() - self.expand.apply( - function, raw=True, engine="numba", engine_kwargs={"parallel": parallel} - ) - - def time_rolling_apply(self, constructor, dtype, function, method, parallel, col): - with warnings.catch_warnings(record=True): - self.roll.apply( - function, raw=True, engine="numba", engine_kwargs={"parallel": parallel} - ) - - def time_expanding_apply(self, constructor, dtype, function, method, parallel, col): - with warnings.catch_warnings(record=True): - self.expand.apply( - function, raw=True, engine="numba", engine_kwargs={"parallel": parallel} - ) - - def time_rolling_methods(self, constructor, dtype, function, method, parallel, col): + def test_method(self, constructor, dtype, window_kwargs, method, parallel, cols): with warnings.catch_warnings(record=True): - getattr(self.roll, method)( + getattr(self.window, method)( engine="numba", engine_kwargs={"parallel": parallel} ) -class ExpandingMethods: - +class NumbaEngineApply: params = ( ["DataFrame", "Series"], ["int", "float"], - ["median", "mean", "max", "min", "std", "count", "skew", "kurt", "sum"], + [("rolling", {"window": 
10}), ("expanding", {})], + [np.sum, lambda x: np.sum(x) + 5], + [True, False], + [None, 100], ) - param_names = ["constructor", "window", "dtype", "method"] + param_names = [ + "constructor", + "dtype", + "window_kwargs", + "function", + "parallel", + "cols", + ] - def setup(self, constructor, dtype, method): - N = 10 ** 5 - N_groupby = 100 - arr = (100 * np.random.random(N)).astype(dtype) - self.expanding = getattr(pd, constructor)(arr).expanding() - self.expanding_groupby = ( - pd.DataFrame({"A": arr[:N_groupby], "B": range(N_groupby)}) - .groupby("B") - .expanding() - ) + def setup(self, constructor, dtype, window_kwargs, function, parallel, cols): + N = 10 ** 3 + window, kwargs = window_kwargs + shape = (N, cols) if cols is not None and constructor != "Series" else N + arr = (100 * np.random.random(shape)).astype(dtype) + data = getattr(pd, constructor)(arr) - def time_expanding(self, constructor, dtype, method): - getattr(self.expanding, method)() + # Warm the cache + with warnings.catch_warnings(record=True): + # Catch parallel=True not being applicable e.g. 
1D data + self.window = getattr(data, window)(**kwargs) + self.window.apply( + function, raw=True, engine="numba", engine_kwargs={"parallel": parallel} + ) - def time_expanding_groupby(self, constructor, dtype, method): - getattr(self.expanding_groupby, method)() + def test_method(self, constructor, dtype, window_kwargs, function, parallel, cols): + with warnings.catch_warnings(record=True): + self.window.apply( + function, raw=True, engine="numba", engine_kwargs={"parallel": parallel} + ) class EWMMethods: - params = (["DataFrame", "Series"], [10, 1000], ["int", "float"], ["mean", "std"]) - param_names = ["constructor", "window", "dtype", "method"] + params = ( + ["DataFrame", "Series"], + [ + ({"halflife": 10}, "mean"), + ({"halflife": 10}, "std"), + ({"halflife": 1000}, "mean"), + ({"halflife": 1000}, "std"), + ( + { + "halflife": "1 Day", + "times": pd.date_range("1900", periods=10 ** 5, freq="23s"), + }, + "mean", + ), + ], + ["int", "float"], + ) + param_names = ["constructor", "kwargs_method", "dtype"] - def setup(self, constructor, window, dtype, method): + def setup(self, constructor, kwargs_method, dtype): N = 10 ** 5 + kwargs, method = kwargs_method arr = (100 * np.random.random(N)).astype(dtype) - times = pd.date_range("1900", periods=N, freq="23s") - self.ewm = getattr(pd, constructor)(arr).ewm(halflife=window) - self.ewm_times = getattr(pd, constructor)(arr).ewm( - halflife="1 Day", times=times - ) + self.method = method + self.ewm = getattr(pd, constructor)(arr).ewm(**kwargs) - def time_ewm(self, constructor, window, dtype, method): - getattr(self.ewm, method)() - - def time_ewm_times(self, constructor, window, dtype, method): - self.ewm_times.mean() + def time_ewm(self, constructor, kwargs_method, dtype): + getattr(self.ewm, self.method)() class VariableWindowMethods(Methods): @@ -151,7 +165,7 @@ class VariableWindowMethods(Methods): ["DataFrame", "Series"], ["50s", "1h", "1d"], ["int", "float"], - ["median", "mean", "max", "min", "std", "count", 
"skew", "kurt", "sum"], + ["median", "mean", "max", "min", "std", "count", "skew", "kurt", "sum", "sem"], ) param_names = ["constructor", "window", "dtype", "method"] @@ -159,35 +173,35 @@ def setup(self, constructor, window, dtype, method): N = 10 ** 5 arr = (100 * np.random.random(N)).astype(dtype) index = pd.date_range("2017-01-01", periods=N, freq="5s") - self.roll = getattr(pd, constructor)(arr, index=index).rolling(window) + self.window = getattr(pd, constructor)(arr, index=index).rolling(window) class Pairwise: - params = ([10, 1000, None], ["corr", "cov"], [True, False]) - param_names = ["window", "method", "pairwise"] + params = ( + [({"window": 10}, "rolling"), ({"window": 1000}, "rolling"), ({}, "expanding")], + ["corr", "cov"], + [True, False], + ) + param_names = ["window_kwargs", "method", "pairwise"] - def setup(self, window, method, pairwise): + def setup(self, kwargs_window, method, pairwise): N = 10 ** 4 n_groups = 20 + kwargs, window = kwargs_window groups = [i for _ in range(N // n_groups) for i in range(n_groups)] arr = np.random.random(N) self.df = pd.DataFrame(arr) - self.df_group = pd.DataFrame({"A": groups, "B": arr}).groupby("A") + self.window = getattr(self.df, window)(**kwargs) + self.window_group = getattr( + pd.DataFrame({"A": groups, "B": arr}).groupby("A"), window + )(**kwargs) - def time_pairwise(self, window, method, pairwise): - if window is None: - r = self.df.expanding() - else: - r = self.df.rolling(window=window) - getattr(r, method)(self.df, pairwise=pairwise) + def time_pairwise(self, kwargs_window, method, pairwise): + getattr(self.window, method)(self.df, pairwise=pairwise) - def time_groupby(self, window, method, pairwise): - if window is None: - r = self.df_group.expanding() - else: - r = self.df_group.rolling(window=window) - getattr(r, method)(self.df, pairwise=pairwise) + def time_groupby(self, kwargs_window, method, pairwise): + getattr(self.window_group, method)(self.df, pairwise=pairwise) class Quantile: @@ -274,10 
+288,18 @@ def peakmem_rolling(self, constructor, window_size, dtype, method): class Groupby: - params = ["sum", "median", "mean", "max", "min", "kurt", "sum"] + params = ( + ["sum", "median", "mean", "max", "min", "kurt", "sum"], + [ + ("rolling", {"window": 2}), + ("rolling", {"window": "30s", "on": "C"}), + ("expanding", {}), + ], + ) - def setup(self, method): + def setup(self, method, window_kwargs): N = 1000 + window, kwargs = window_kwargs df = pd.DataFrame( { "A": [str(i) for i in range(N)] * 10, @@ -285,14 +307,10 @@ def setup(self, method): "C": pd.date_range(start="1900-01-01", freq="1min", periods=N * 10), } ) - self.groupby_roll_int = df.groupby("A").rolling(window=2) - self.groupby_roll_offset = df.groupby("A").rolling(window="30s", on="C") - - def time_rolling_int(self, method): - getattr(self.groupby_roll_int, method)() + self.groupby_window = getattr(df.groupby("A"), window)(**kwargs) - def time_rolling_offset(self, method): - getattr(self.groupby_roll_offset, method)() + def time_method(self, method, window_kwargs): + getattr(self.groupby_window, method)() class GroupbyLargeGroups:
xref https://github.com/pandas-dev/pandas/issues/44450 - [x] Ensure all linting tests pass, see [here](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit) for how to run them Some of the current `param` definitions cause the same benchmark to run more than once without any change in what is measured. This reorganizes the benchmarks so that the `param` combinations avoid that redundancy.
https://api.github.com/repos/pandas-dev/pandas/pulls/44475
2021-11-15T20:25:34Z
2021-11-15T23:57:08Z
2021-11-15T23:57:08Z
2021-11-16T06:26:29Z
BUG: Tick + np.timedelta64
diff --git a/doc/source/whatsnew/v1.4.0.rst b/doc/source/whatsnew/v1.4.0.rst index a593a03de5c25..a1ac19b815e22 100644 --- a/doc/source/whatsnew/v1.4.0.rst +++ b/doc/source/whatsnew/v1.4.0.rst @@ -538,7 +538,7 @@ Datetimelike - Bug in inplace addition and subtraction of :class:`DatetimeIndex` or :class:`TimedeltaIndex` with :class:`DatetimeArray` or :class:`TimedeltaArray` (:issue:`43904`) - Bug in in calling ``np.isnan``, ``np.isfinite``, or ``np.isinf`` on a timezone-aware :class:`DatetimeIndex` incorrectly raising ``TypeError`` (:issue:`43917`) - Bug in constructing a :class:`Series` from datetime-like strings with mixed timezones incorrectly partially-inferring datetime values (:issue:`40111`) -- +- Bug in addition with a :class:`Tick` object and a ``np.timedelta64`` object incorrectly raising instead of returning :class:`Timedelta` (:issue:`44474`) Timedelta ^^^^^^^^^ diff --git a/pandas/_libs/tslibs/offsets.pyx b/pandas/_libs/tslibs/offsets.pyx index 00d02e096c976..f689b8ce242e5 100644 --- a/pandas/_libs/tslibs/offsets.pyx +++ b/pandas/_libs/tslibs/offsets.pyx @@ -72,7 +72,10 @@ from pandas._libs.tslibs.np_datetime cimport ( from pandas._libs.tslibs.tzconversion cimport tz_convert_from_utc_single from .dtypes cimport PeriodDtypeCode -from .timedeltas cimport delta_to_nanoseconds +from .timedeltas cimport ( + delta_to_nanoseconds, + is_any_td_scalar, +) from .timedeltas import Timedelta @@ -154,7 +157,11 @@ def apply_wraps(func): if other is NaT: return NaT - elif isinstance(other, BaseOffset) or PyDelta_Check(other): + elif ( + isinstance(other, BaseOffset) + or PyDelta_Check(other) + or util.is_timedelta64_object(other) + ): # timedelta path return func(self, other) elif is_datetime64_object(other) or PyDate_Check(other): @@ -902,7 +909,7 @@ cdef class Tick(SingleConstructorOffset): # PyDate_Check includes date, datetime return Timestamp(other) + self - if PyDelta_Check(other): + if util.is_timedelta64_object(other) or PyDelta_Check(other): return other + 
self.delta elif isinstance(other, type(self)): # TODO: this is reached in tests that specifically call apply, @@ -1396,9 +1403,10 @@ cdef class BusinessDay(BusinessMixin): result = result + self.offset return result - elif PyDelta_Check(other) or isinstance(other, Tick): + elif is_any_td_scalar(other): + td = Timedelta(self.offset) + other return BusinessDay( - self.n, offset=self.offset + other, normalize=self.normalize + self.n, offset=td.to_pytimedelta(), normalize=self.normalize ) else: raise ApplyTypeError( @@ -3265,8 +3273,9 @@ cdef class CustomBusinessDay(BusinessDay): result = result + self.offset return result - elif PyDelta_Check(other) or isinstance(other, Tick): - return BDay(self.n, offset=self.offset + other, normalize=self.normalize) + elif is_any_td_scalar(other): + td = Timedelta(self.offset) + other + return BDay(self.n, offset=td.to_pytimedelta(), normalize=self.normalize) else: raise ApplyTypeError( "Only know how to combine trading day with " diff --git a/pandas/tests/tseries/offsets/test_business_day.py b/pandas/tests/tseries/offsets/test_business_day.py index 92daafaf469cd..ffc2a04334ffc 100644 --- a/pandas/tests/tseries/offsets/test_business_day.py +++ b/pandas/tests/tseries/offsets/test_business_day.py @@ -7,6 +7,7 @@ timedelta, ) +import numpy as np import pytest from pandas._libs.tslibs.offsets import ( @@ -17,6 +18,7 @@ from pandas import ( DatetimeIndex, + Timedelta, _testing as tm, ) from pandas.tests.tseries.offsets.common import ( @@ -57,11 +59,30 @@ def test_with_offset(self): assert (self.d + offset) == datetime(2008, 1, 2, 2) - def test_with_offset_index(self): - dti = DatetimeIndex([self.d]) - result = dti + (self.offset + timedelta(hours=2)) + @pytest.mark.parametrize("reverse", [True, False]) + @pytest.mark.parametrize( + "td", + [ + Timedelta(hours=2), + Timedelta(hours=2).to_pytimedelta(), + Timedelta(hours=2).to_timedelta64(), + ], + ids=lambda x: type(x), + ) + def test_with_offset_index(self, reverse, td, request): + if 
reverse and isinstance(td, np.timedelta64): + mark = pytest.mark.xfail( + reason="need __array_priority__, but that causes other errors" + ) + request.node.add_marker(mark) + dti = DatetimeIndex([self.d]) expected = DatetimeIndex([datetime(2008, 1, 2, 2)]) + + if reverse: + result = dti + (td + self.offset) + else: + result = dti + (self.offset + td) tm.assert_index_equal(result, expected) def test_eq(self): diff --git a/pandas/tests/tseries/offsets/test_custom_business_day.py b/pandas/tests/tseries/offsets/test_custom_business_day.py index b8014f7112435..5847bd11f09df 100644 --- a/pandas/tests/tseries/offsets/test_custom_business_day.py +++ b/pandas/tests/tseries/offsets/test_custom_business_day.py @@ -19,6 +19,7 @@ from pandas import ( DatetimeIndex, + Timedelta, _testing as tm, read_pickle, ) @@ -62,11 +63,30 @@ def test_with_offset(self): assert (self.d + offset) == datetime(2008, 1, 2, 2) - def test_with_offset_index(self): - dti = DatetimeIndex([self.d]) - result = dti + (self.offset + timedelta(hours=2)) + @pytest.mark.parametrize("reverse", [True, False]) + @pytest.mark.parametrize( + "td", + [ + Timedelta(hours=2), + Timedelta(hours=2).to_pytimedelta(), + Timedelta(hours=2).to_timedelta64(), + ], + ids=lambda x: type(x), + ) + def test_with_offset_index(self, reverse, td, request): + if reverse and isinstance(td, np.timedelta64): + mark = pytest.mark.xfail( + reason="need __array_priority__, but that causes other errors" + ) + request.node.add_marker(mark) + dti = DatetimeIndex([self.d]) expected = DatetimeIndex([datetime(2008, 1, 2, 2)]) + + if reverse: + result = dti + (td + self.offset) + else: + result = dti + (self.offset + td) tm.assert_index_equal(result, expected) def test_eq(self): diff --git a/pandas/tests/tseries/offsets/test_ticks.py b/pandas/tests/tseries/offsets/test_ticks.py index 52a2f3aeee850..ae6bd2d85579a 100644 --- a/pandas/tests/tseries/offsets/test_ticks.py +++ b/pandas/tests/tseries/offsets/test_ticks.py @@ -230,9 +230,16 @@ def 
test_Nanosecond(): ) def test_tick_addition(kls, expected): offset = kls(3) - result = offset + Timedelta(hours=2) - assert isinstance(result, Timedelta) - assert result == expected + td = Timedelta(hours=2) + + for other in [td, td.to_pytimedelta(), td.to_timedelta64()]: + result = offset + other + assert isinstance(result, Timedelta) + assert result == expected + + result = other + offset + assert isinstance(result, Timedelta) + assert result == expected @pytest.mark.parametrize("cls", tick_classes)
- [ ] closes #xxxx
- [x] tests added / passed
- [x] Ensure all linting tests pass, see [here](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit) for how to run them
- [x] whatsnew entry

Looks like a pass to de-duplicate the test_business_day and test_custom_business_day files would be worthwhile.
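The root cause the diff addresses can be seen without pandas: `np.timedelta64` is not a `datetime.timedelta` subclass, so a `PyDelta_Check`-style `isinstance` test misses it, which is why the diff adds an explicit `util.is_timedelta64_object` branch. A minimal sketch, assuming only numpy:

```python
import datetime

import numpy as np

td64 = np.timedelta64(2, "h")

# np.timedelta64 does not inherit from datetime.timedelta, so the old
# PyDelta_Check-only branch never matched it and Tick arithmetic raised
assert not isinstance(td64, datetime.timedelta)

# a plain datetime.timedelta does pass the isinstance check
assert isinstance(datetime.timedelta(hours=2), datetime.timedelta)
```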
https://api.github.com/repos/pandas-dev/pandas/pulls/44474
2021-11-15T18:36:51Z
2021-11-16T00:07:26Z
2021-11-16T00:07:26Z
2021-11-16T00:19:31Z
BUG: fix timedelta floordiv with scalar float (correction of #44466)
diff --git a/pandas/core/arrays/timedeltas.py b/pandas/core/arrays/timedeltas.py index 91e90ebdb6253..2a2e59cfda5e9 100644 --- a/pandas/core/arrays/timedeltas.py +++ b/pandas/core/arrays/timedeltas.py @@ -651,7 +651,7 @@ def __floordiv__(self, other): # at this point we should only have numeric scalars; anything # else will raise - result = self._ndarray / other + result = self._ndarray // other freq = None if self.freq is not None: # Note: freq gets division, not floor-division
Follow-up on https://github.com/pandas-dev/pandas/pull/44466#discussion_r749430496.

I still need to add a test that would actually catch this.
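The one-character fix matters because `/` is true division and `//` floors: at nanosecond resolution an odd count divided by two exposes the difference. A small numpy sketch (not the pandas code path itself):

```python
import numpy as np

td = np.array([5], dtype="m8[ns]")  # an odd number of nanoseconds

# floor division stays on the integer nanosecond grid, as __floordiv__ must
assert (td // 2)[0] == np.timedelta64(2, "ns")

# true division would produce 2.5ns, which a timedelta64[ns] result cannot
# hold -- hence swapping `/` back to `//` here
assert 5 / 2 == 2.5
```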
https://api.github.com/repos/pandas-dev/pandas/pulls/44471
2021-11-15T15:33:16Z
2021-11-18T18:59:51Z
2021-11-18T18:59:51Z
2021-11-18T18:59:54Z
Backport PR #44452: Revert "CI: xfail tests failing on numpy dev"
diff --git a/pandas/tests/window/moments/test_moments_rolling.py b/pandas/tests/window/moments/test_moments_rolling.py index 66b52da0f5578..b2e53a676b039 100644 --- a/pandas/tests/window/moments/test_moments_rolling.py +++ b/pandas/tests/window/moments/test_moments_rolling.py @@ -558,7 +558,6 @@ def test_rolling_quantile_np_percentile(): tm.assert_almost_equal(df_quantile.values, np.array(np_percentile)) -@pytest.mark.xfail(reason="GH#44343", strict=False) @pytest.mark.parametrize("quantile", [0.0, 0.1, 0.45, 0.5, 1]) @pytest.mark.parametrize( "interpolation", ["linear", "lower", "higher", "nearest", "midpoint"]
Backport PR #44452
https://api.github.com/repos/pandas-dev/pandas/pulls/44469
2021-11-15T12:36:48Z
2021-11-15T13:39:31Z
2021-11-15T13:39:31Z
2021-11-15T13:39:35Z
TYP/CLN: remove defaults from overloads
diff --git a/pandas/core/base.py b/pandas/core/base.py index a1bf448df18c4..9040414a8f35f 100644 --- a/pandas/core/base.py +++ b/pandas/core/base.py @@ -1239,8 +1239,8 @@ def factorize(self, sort: bool = False, na_sentinel: int | None = -1): def searchsorted( # type: ignore[misc] self, value: npt._ScalarLike_co, - side: Literal["left", "right"] = "left", - sorter: NumpySorter = None, + side: Literal["left", "right"] = ..., + sorter: NumpySorter = ..., ) -> np.intp: ... @@ -1248,8 +1248,8 @@ def searchsorted( # type: ignore[misc] def searchsorted( self, value: npt.ArrayLike | ExtensionArray, - side: Literal["left", "right"] = "left", - sorter: NumpySorter = None, + side: Literal["left", "right"] = ..., + sorter: NumpySorter = ..., ) -> npt.NDArray[np.intp]: ... diff --git a/pandas/core/reshape/concat.py b/pandas/core/reshape/concat.py index 8475877f9b905..a32f84e087459 100644 --- a/pandas/core/reshape/concat.py +++ b/pandas/core/reshape/concat.py @@ -126,12 +126,12 @@ def concat( axis: Axis = ..., join: str = ..., ignore_index: bool = ..., - keys=None, - levels=None, - names=None, - verify_integrity: bool = False, - sort: bool = False, - copy: bool = True, + keys=..., + levels=..., + names=..., + verify_integrity: bool = ..., + sort: bool = ..., + copy: bool = ..., ) -> DataFrame | Series: ... 
diff --git a/pandas/io/common.py b/pandas/io/common.py index 1aacbfa2bfb64..1e928d1f2cd9e 100644 --- a/pandas/io/common.py +++ b/pandas/io/common.py @@ -559,7 +559,7 @@ def get_handle( encoding: str | None = ..., compression: CompressionOptions = ..., memory_map: bool = ..., - is_text: Literal[True] = True, + is_text: Literal[True] = ..., errors: str | None = ..., storage_options: StorageOptions = ..., ) -> IOHandles[str]: diff --git a/pandas/io/sql.py b/pandas/io/sql.py index 027bb9889202b..75cabf1681bc7 100644 --- a/pandas/io/sql.py +++ b/pandas/io/sql.py @@ -191,12 +191,12 @@ def execute(sql, con, params=None): def read_sql_table( table_name, con, - schema=None, - index_col=None, - coerce_float=True, - parse_dates=None, - columns=None, - chunksize: None = None, + schema=..., + index_col=..., + coerce_float=..., + parse_dates=..., + columns=..., + chunksize: None = ..., ) -> DataFrame: ... @@ -205,12 +205,12 @@ def read_sql_table( def read_sql_table( table_name, con, - schema=None, - index_col=None, - coerce_float=True, - parse_dates=None, - columns=None, - chunksize: int = 1, + schema=..., + index_col=..., + coerce_float=..., + parse_dates=..., + columns=..., + chunksize: int = ..., ) -> Iterator[DataFrame]: ... @@ -303,12 +303,12 @@ def read_sql_table( def read_sql_query( sql, con, - index_col=None, - coerce_float=True, - params=None, - parse_dates=None, - chunksize: None = None, - dtype: DtypeArg | None = None, + index_col=..., + coerce_float=..., + params=..., + parse_dates=..., + chunksize: None = ..., + dtype: DtypeArg | None = ..., ) -> DataFrame: ... @@ -317,12 +317,12 @@ def read_sql_query( def read_sql_query( sql, con, - index_col=None, - coerce_float=True, - params=None, - parse_dates=None, - chunksize: int = 1, - dtype: DtypeArg | None = None, + index_col=..., + coerce_float=..., + params=..., + parse_dates=..., + chunksize: int = ..., + dtype: DtypeArg | None = ..., ) -> Iterator[DataFrame]: ... 
@@ -410,12 +410,12 @@ def read_sql_query( def read_sql( sql, con, - index_col=None, - coerce_float=True, - params=None, - parse_dates=None, - columns=None, - chunksize: None = None, + index_col=..., + coerce_float=..., + params=..., + parse_dates=..., + columns=..., + chunksize: None = ..., ) -> DataFrame: ... @@ -424,12 +424,12 @@ def read_sql( def read_sql( sql, con, - index_col=None, - coerce_float=True, - params=None, - parse_dates=None, - columns=None, - chunksize: int = 1, + index_col=..., + coerce_float=..., + params=..., + parse_dates=..., + columns=..., + chunksize: int = ..., ) -> Iterator[DataFrame]: ...
null
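The pattern the diff enforces follows the typing convention that `@overload` stubs write defaults as `...`, with the real default living only on the implementation. A sketch with a hypothetical `read` function (not pandas code):

```python
from typing import List, overload


@overload
def read(chunksize: None = ...) -> str: ...
@overload
def read(chunksize: int = ...) -> List[str]: ...


def read(chunksize=None):
    # the implementation carries the real default; the overloads above only
    # describe the two return types a checker should infer
    if chunksize is None:
        return "everything"
    return ["chunk"] * chunksize
```

A call like `read()` is inferred as `str`, while `read(2)` is inferred as `List[str]`; the runtime behavior is unchanged either way.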
https://api.github.com/repos/pandas-dev/pandas/pulls/44468
2021-11-15T12:14:08Z
2021-11-15T13:52:06Z
2021-11-15T13:52:06Z
2021-11-15T14:32:51Z
BUG: fix timedelta floordiv with scalar float
diff --git a/doc/source/whatsnew/v1.4.0.rst b/doc/source/whatsnew/v1.4.0.rst index a593a03de5c25..43756082a7442 100644 --- a/doc/source/whatsnew/v1.4.0.rst +++ b/doc/source/whatsnew/v1.4.0.rst @@ -543,7 +543,7 @@ Datetimelike Timedelta ^^^^^^^^^ - Bug in division of all-``NaT`` :class:`TimeDeltaIndex`, :class:`Series` or :class:`DataFrame` column with object-dtype arraylike of numbers failing to infer the result as timedelta64-dtype (:issue:`39750`) -- +- Bug in floor division of ``timedelta64[ns]`` data with a scalar returning garbage values (:issue:`44466`) Timezones ^^^^^^^^^ diff --git a/pandas/core/arrays/timedeltas.py b/pandas/core/arrays/timedeltas.py index 3d8f9f7edcc74..91e90ebdb6253 100644 --- a/pandas/core/arrays/timedeltas.py +++ b/pandas/core/arrays/timedeltas.py @@ -651,8 +651,7 @@ def __floordiv__(self, other): # at this point we should only have numeric scalars; anything # else will raise - result = self.asi8 // other - np.putmask(result, self._isnan, iNaT) + result = self._ndarray / other freq = None if self.freq is not None: # Note: freq gets division, not floor-division @@ -661,7 +660,7 @@ def __floordiv__(self, other): # e.g. 
if self.freq is Nano(1) then dividing by 2 # rounds down to zero freq = None - return type(self)(result.view("m8[ns]"), freq=freq) + return type(self)(result, freq=freq) if not hasattr(other, "dtype"): # list, tuple diff --git a/pandas/tests/arithmetic/test_timedelta64.py b/pandas/tests/arithmetic/test_timedelta64.py index 8078e8c90a2bf..93332b9b96f84 100644 --- a/pandas/tests/arithmetic/test_timedelta64.py +++ b/pandas/tests/arithmetic/test_timedelta64.py @@ -1981,6 +1981,20 @@ def test_td64arr_div_numeric_scalar(self, box_with_array, two): with pytest.raises(TypeError, match="Cannot divide"): two / tdser + @pytest.mark.parametrize("two", [2, 2.0, np.array(2), np.array(2.0)]) + def test_td64arr_floordiv_numeric_scalar(self, box_with_array, two): + tdser = Series(["59 Days", "59 Days", "NaT"], dtype="m8[ns]") + expected = Series(["29.5D", "29.5D", "NaT"], dtype="timedelta64[ns]") + + tdser = tm.box_expected(tdser, box_with_array) + expected = tm.box_expected(expected, box_with_array) + + result = tdser // two + tm.assert_equal(result, expected) + + with pytest.raises(TypeError, match="Cannot divide"): + two // tdser + @pytest.mark.parametrize( "vector", [np.array([20, 30, 40]), pd.Index([20, 30, 40]), Series([20, 30, 40])],
Discovered while trying to fix test failures in https://github.com/pandas-dev/pandas/pull/40482.

Currently normal division with a float scalar works, but floordiv doesn't (returns garbage):

```
In [42]: arr = pd.timedelta_range(0, periods=3)

In [43]: pd.Series(arr) / 2.0
Out[43]:
0   0 days 00:00:00
1   0 days 12:00:00
2   1 days 00:00:00
dtype: timedelta64[ns]

In [44]: pd.Series(arr) // 2.0
Out[44]:
0                 0 days 00:00:00
1   55681 days 08:53:22.404319232
2   55733 days 11:53:22.031689728
dtype: timedelta64[ns]

In [45]: pd.__version__
Out[45]: '1.3.3'
```

It's only the scalar case; dividing by e.g. a float array already works:

```
In [46]: pd.Series(arr) // np.array([2.0]*3)
Out[46]:
0   0 days 00:00:00
1   0 days 12:00:00
2   1 days 00:00:00
dtype: timedelta64[ns]
```

I suppose the current code uses the workaround of `asi8` and casting back to timedelta because numpy didn't support floordiv in the past (didn't check which version started to implement this; will see what CI says).

- [ ] whatsnew entry
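The garbage values come from the `.view("m8[ns]")` at the end of the old code path: `asi8 // other` promotes to float64 when `other` is a float scalar, and `view` then reinterprets the float's raw bits as integer nanoseconds. A minimal numpy reproduction (not the pandas internals themselves):

```python
import numpy as np

DAY_NS = 86_400 * 10**9
i8 = np.array([59 * DAY_NS], dtype="i8")  # 59 days as int64 nanoseconds

# int64 // float promotes to float64; .view() then reinterprets the float's
# raw bit pattern as int64 nanoseconds -- the "55681 days" values above
garbage = (i8 // 2.0).view("m8[ns]")

# casting back to int64 before the view gives the intended 29.5 days
correct = (i8 // 2.0).astype("i8").view("m8[ns]")

assert garbage.view("i8")[0] != correct.view("i8")[0]
assert correct.view("i8")[0] == 59 * DAY_NS // 2
```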
https://api.github.com/repos/pandas-dev/pandas/pulls/44466
2021-11-15T10:53:10Z
2021-11-15T15:06:51Z
2021-11-15T15:06:51Z
2021-11-15T15:33:58Z
DEV: add note to update cython version in environment.yml and asv conf as well
diff --git a/pyproject.toml b/pyproject.toml index 98ab112ab459a..0c3e078d8761a 100644 --- a/pyproject.toml +++ b/pyproject.toml @@ -4,7 +4,7 @@ requires = [ "setuptools>=51.0.0", "wheel", - "Cython>=0.29.24,<3", # Note: sync with setup.py + "Cython>=0.29.24,<3", # Note: sync with setup.py, environment.yml and asv.conf.json "oldest-supported-numpy>=0.10" ] # uncomment to enable pep517 after versioneer problem is fixed. diff --git a/setup.py b/setup.py index f5151621c9efe..ca71510c5f051 100755 --- a/setup.py +++ b/setup.py @@ -37,7 +37,8 @@ def is_platform_mac(): return sys.platform == "darwin" -min_cython_ver = "0.29.24" # note: sync with pyproject.toml +# note: sync with pyproject.toml, environment.yml and asv.conf.json +min_cython_ver = "0.29.24" try: from Cython import (
Seeing https://github.com/pandas-dev/pandas/pull/44359, this seems like a good thing to remember the next time the Cython version is bumped.
https://api.github.com/repos/pandas-dev/pandas/pulls/44463
2021-11-15T07:48:31Z
2021-11-15T09:39:10Z
2021-11-15T09:39:10Z
2021-11-15T09:49:53Z
DataFrame.rename renames wrong elements
diff --git a/pandas/tests/frame/methods/test_rename.py b/pandas/tests/frame/methods/test_rename.py index 26ecf1356a946..0bd46cbb22f2a 100644 --- a/pandas/tests/frame/methods/test_rename.py +++ b/pandas/tests/frame/methods/test_rename.py @@ -406,3 +406,14 @@ def test_rename_with_duplicate_columns(self): ], ).set_index(["STK_ID", "RPT_Date"], drop=False) tm.assert_frame_equal(result, expected) + + def test_rename_boolean_index(self): + df = DataFrame(np.arange(15).reshape(3, 5), columns=[False, True, 2, 3, 4]) + mapper = {0: "foo", 1: "bar", 2: "bah"} + res = df.rename(index=mapper) + exp = DataFrame( + np.arange(15).reshape(3, 5), + columns=[False, True, 2, 3, 4], + index=["foo", "bar", "bah"], + ) + tm.assert_frame_equal(res, exp)
- [x] closes #4980
- [x] tests added / passed
- [x] Ensure all linting tests pass, see [here](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit) for how to run them
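The underlying gotcha in #4980 is that `bool` is a subclass of `int` in Python, so `False`/`True` hash and compare equal to `0`/`1`, and a mapper keyed on integers silently matches boolean axis labels:

```python
mapper = {0: "foo", 1: "bar", 2: "bah"}

# bool is a subclass of int: False == 0 and hash(False) == hash(0),
# so a dict keyed on ints happily looks up boolean labels
assert isinstance(False, int)
assert mapper[False] == "foo"
assert mapper[True] == "bar"
```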
https://api.github.com/repos/pandas-dev/pandas/pulls/44462
2021-11-15T06:33:54Z
2021-11-15T13:55:26Z
2021-11-15T13:55:25Z
2021-11-15T13:55:29Z
ENH: Add numba engine to rolling/expanding.std/var
diff --git a/asv_bench/benchmarks/rolling.py b/asv_bench/benchmarks/rolling.py index 73adb12c171bf..1c53d4adc8c25 100644 --- a/asv_bench/benchmarks/rolling.py +++ b/asv_bench/benchmarks/rolling.py @@ -53,7 +53,7 @@ class NumbaEngineMethods: ["DataFrame", "Series"], ["int", "float"], [("rolling", {"window": 10}), ("expanding", {})], - ["sum", "max", "min", "median", "mean"], + ["sum", "max", "min", "median", "mean", "var", "std"], [True, False], [None, 100], ) diff --git a/doc/source/whatsnew/v1.4.0.rst b/doc/source/whatsnew/v1.4.0.rst index e87f5f53256cf..db5cce8459ca2 100644 --- a/doc/source/whatsnew/v1.4.0.rst +++ b/doc/source/whatsnew/v1.4.0.rst @@ -214,6 +214,7 @@ Other enhancements - :meth:`Timestamp.isoformat`, now handles the ``timespec`` argument from the base :class:``datetime`` class (:issue:`26131`) - :meth:`NaT.to_numpy` ``dtype`` argument is now respected, so ``np.timedelta64`` can be returned (:issue:`44460`) - New option ``display.max_dir_items`` customizes the number of columns added to :meth:`Dataframe.__dir__` and suggested for tab completion (:issue:`37996`) +- :meth:`.Rolling.var`, :meth:`.Expanding.var`, :meth:`.Rolling.std`, :meth:`.Expanding.std` now support `Numba <http://numba.pydata.org/>`_ execution with the ``engine`` keyword (:issue:`44461`) .. 
--------------------------------------------------------------------------- diff --git a/pandas/core/_numba/executor.py b/pandas/core/_numba/executor.py index c2b6191c05152..acb0c6d175c51 100644 --- a/pandas/core/_numba/executor.py +++ b/pandas/core/_numba/executor.py @@ -51,10 +51,11 @@ def column_looper( start: np.ndarray, end: np.ndarray, min_periods: int, + *args, ): result = np.empty((len(start), values.shape[1]), dtype=np.float64) for i in numba.prange(values.shape[1]): - result[:, i] = func(values[:, i], start, end, min_periods) + result[:, i] = func(values[:, i], start, end, min_periods, *args) return result return column_looper diff --git a/pandas/core/_numba/kernels/__init__.py b/pandas/core/_numba/kernels/__init__.py index 23b0ec5c3d8aa..2753a1e01161d 100644 --- a/pandas/core/_numba/kernels/__init__.py +++ b/pandas/core/_numba/kernels/__init__.py @@ -1,4 +1,5 @@ from pandas.core._numba.kernels.mean_ import sliding_mean from pandas.core._numba.kernels.sum_ import sliding_sum +from pandas.core._numba.kernels.var_ import sliding_var -__all__ = ["sliding_mean", "sliding_sum"] +__all__ = ["sliding_mean", "sliding_sum", "sliding_var"] diff --git a/pandas/core/_numba/kernels/var_.py b/pandas/core/_numba/kernels/var_.py new file mode 100644 index 0000000000000..2e5660673701b --- /dev/null +++ b/pandas/core/_numba/kernels/var_.py @@ -0,0 +1,116 @@ +""" +Numba 1D var kernels that can be shared by +* Dataframe / Series +* groupby +* rolling / expanding + +Mirrors pandas/_libs/window/aggregation.pyx +""" +from __future__ import annotations + +import numba +import numpy as np + +from pandas.core._numba.kernels.shared import is_monotonic_increasing + + +@numba.jit(nopython=True, nogil=True, parallel=False) +def add_var( + val: float, nobs: int, mean_x: float, ssqdm_x: float, compensation: float +) -> tuple[int, float, float, float]: + if not np.isnan(val): + nobs += 1 + prev_mean = mean_x - compensation + y = val - compensation + t = y - mean_x + compensation = t + 
mean_x - y + delta = t + if nobs: + mean_x += delta / nobs + else: + mean_x = 0 + ssqdm_x += (val - prev_mean) * (val - mean_x) + return nobs, mean_x, ssqdm_x, compensation + + +@numba.jit(nopython=True, nogil=True, parallel=False) +def remove_var( + val: float, nobs: int, mean_x: float, ssqdm_x: float, compensation: float +) -> tuple[int, float, float, float]: + if not np.isnan(val): + nobs -= 1 + if nobs: + prev_mean = mean_x - compensation + y = val - compensation + t = y - mean_x + compensation = t + mean_x - y + delta = t + mean_x -= delta / nobs + ssqdm_x -= (val - prev_mean) * (val - mean_x) + else: + mean_x = 0 + ssqdm_x = 0 + return nobs, mean_x, ssqdm_x, compensation + + +@numba.jit(nopython=True, nogil=True, parallel=False) +def sliding_var( + values: np.ndarray, + start: np.ndarray, + end: np.ndarray, + min_periods: int, + ddof: int = 1, +) -> np.ndarray: + N = len(start) + nobs = 0 + mean_x = 0.0 + ssqdm_x = 0.0 + compensation_add = 0.0 + compensation_remove = 0.0 + + min_periods = max(min_periods, 1) + is_monotonic_increasing_bounds = is_monotonic_increasing( + start + ) and is_monotonic_increasing(end) + + output = np.empty(N, dtype=np.float64) + + for i in range(N): + s = start[i] + e = end[i] + if i == 0 or not is_monotonic_increasing_bounds: + for j in range(s, e): + val = values[j] + nobs, mean_x, ssqdm_x, compensation_add = add_var( + val, nobs, mean_x, ssqdm_x, compensation_add + ) + else: + for j in range(start[i - 1], s): + val = values[j] + nobs, mean_x, ssqdm_x, compensation_remove = remove_var( + val, nobs, mean_x, ssqdm_x, compensation_remove + ) + + for j in range(end[i - 1], e): + val = values[j] + nobs, mean_x, ssqdm_x, compensation_add = add_var( + val, nobs, mean_x, ssqdm_x, compensation_add + ) + + if nobs >= min_periods and nobs > ddof: + if nobs == 1: + result = 0.0 + else: + result = ssqdm_x / (nobs - ddof) + else: + result = np.nan + + output[i] = result + + if not is_monotonic_increasing_bounds: + nobs = 0 + mean_x = 0.0 + 
ssqdm_x = 0.0 + compensation_remove = 0.0 + + return output diff --git a/pandas/core/window/doc.py b/pandas/core/window/doc.py index 61f388c35df0f..930c12841e4e4 100644 --- a/pandas/core/window/doc.py +++ b/pandas/core/window/doc.py @@ -98,14 +98,17 @@ def create_section_header(header: str) -> str: "extended documentation and performance considerations for the Numba engine.\n\n" ) -window_agg_numba_parameters = dedent( - """ + +def window_agg_numba_parameters(version: str = "1.3") -> str: + return ( + dedent( + """ engine : str, default None * ``'cython'`` : Runs the operation through C-extensions from cython. * ``'numba'`` : Runs the operation through JIT compiled code from numba. * ``None`` : Defaults to ``'cython'`` or globally setting ``compute.use_numba`` - .. versionadded:: 1.3.0 + .. versionadded:: {version}.0 engine_kwargs : dict, default None * For ``'cython'`` engine, there are no accepted ``engine_kwargs`` @@ -114,6 +117,9 @@ def create_section_header(header: str) -> str: ``False``. The default ``engine_kwargs`` for the ``'numba'`` engine is ``{{'nopython': True, 'nogil': False, 'parallel': False}}`` - .. versionadded:: 1.3.0\n + .. 
versionadded:: {version}.0\n """ -).replace("\n", "", 1) + ) + .replace("\n", "", 1) + .replace("{version}", version) + ) diff --git a/pandas/core/window/ewm.py b/pandas/core/window/ewm.py index d91388e9722f7..4bebc56273805 100644 --- a/pandas/core/window/ewm.py +++ b/pandas/core/window/ewm.py @@ -511,7 +511,7 @@ def aggregate(self, func, *args, **kwargs): template_header, create_section_header("Parameters"), args_compat, - window_agg_numba_parameters, + window_agg_numba_parameters(), kwargs_compat, create_section_header("Returns"), template_returns, @@ -565,7 +565,7 @@ def mean(self, *args, engine=None, engine_kwargs=None, **kwargs): template_header, create_section_header("Parameters"), args_compat, - window_agg_numba_parameters, + window_agg_numba_parameters(), kwargs_compat, create_section_header("Returns"), template_returns, diff --git a/pandas/core/window/expanding.py b/pandas/core/window/expanding.py index 796849e622ff2..8c8b7a8284684 100644 --- a/pandas/core/window/expanding.py +++ b/pandas/core/window/expanding.py @@ -227,7 +227,7 @@ def apply( template_header, create_section_header("Parameters"), args_compat, - window_agg_numba_parameters, + window_agg_numba_parameters(), kwargs_compat, create_section_header("Returns"), template_returns, @@ -253,7 +253,7 @@ def sum( template_header, create_section_header("Parameters"), args_compat, - window_agg_numba_parameters, + window_agg_numba_parameters(), kwargs_compat, create_section_header("Returns"), template_returns, @@ -279,7 +279,7 @@ def max( template_header, create_section_header("Parameters"), args_compat, - window_agg_numba_parameters, + window_agg_numba_parameters(), kwargs_compat, create_section_header("Returns"), template_returns, @@ -305,7 +305,7 @@ def min( template_header, create_section_header("Parameters"), args_compat, - window_agg_numba_parameters, + window_agg_numba_parameters(), kwargs_compat, create_section_header("Returns"), template_returns, @@ -330,7 +330,7 @@ def mean( @doc( 
template_header, create_section_header("Parameters"), - window_agg_numba_parameters, + window_agg_numba_parameters(), kwargs_compat, create_section_header("Returns"), template_returns, @@ -361,6 +361,7 @@ def median( """ ).replace("\n", "", 1), args_compat, + window_agg_numba_parameters("1.4"), kwargs_compat, create_section_header("Returns"), template_returns, @@ -396,9 +397,18 @@ def median( aggregation_description="standard deviation", agg_method="std", ) - def std(self, ddof: int = 1, *args, **kwargs): + def std( + self, + ddof: int = 1, + *args, + engine: str | None = None, + engine_kwargs: dict[str, bool] | None = None, + **kwargs, + ): nv.validate_expanding_func("std", args, kwargs) - return super().std(ddof=ddof, **kwargs) + return super().std( + ddof=ddof, engine=engine, engine_kwargs=engine_kwargs, **kwargs + ) @doc( template_header, @@ -411,6 +421,7 @@ def std(self, ddof: int = 1, *args, **kwargs): """ ).replace("\n", "", 1), args_compat, + window_agg_numba_parameters("1.4"), kwargs_compat, create_section_header("Returns"), template_returns, @@ -446,9 +457,18 @@ def std(self, ddof: int = 1, *args, **kwargs): aggregation_description="variance", agg_method="var", ) - def var(self, ddof: int = 1, *args, **kwargs): + def var( + self, + ddof: int = 1, + *args, + engine: str | None = None, + engine_kwargs: dict[str, bool] | None = None, + **kwargs, + ): nv.validate_expanding_func("var", args, kwargs) - return super().var(ddof=ddof, **kwargs) + return super().var( + ddof=ddof, engine=engine, engine_kwargs=engine_kwargs, **kwargs + ) @doc( template_header, diff --git a/pandas/core/window/rolling.py b/pandas/core/window/rolling.py index f9244462123bc..fc3390ee6db03 100644 --- a/pandas/core/window/rolling.py +++ b/pandas/core/window/rolling.py @@ -598,6 +598,7 @@ def _numba_apply( func: Callable[..., Any], numba_cache_key_str: str, engine_kwargs: dict[str, bool] | None = None, + *func_args, ): window_indexer = self._get_window_indexer() min_periods = ( @@ -621,7 
+622,7 @@ def _numba_apply( aggregator = executor.generate_shared_aggregator( func, engine_kwargs, numba_cache_key_str ) - result = aggregator(values, start, end, min_periods) + result = aggregator(values, start, end, min_periods, *func_args) NUMBA_FUNC_CACHE[(func, numba_cache_key_str)] = aggregator result = result.T if self.axis == 1 else result if obj.ndim == 1: @@ -1459,8 +1460,24 @@ def median( window_func = window_aggregations.roll_median_c return self._apply(window_func, name="median", **kwargs) - def std(self, ddof: int = 1, *args, **kwargs): + def std( + self, + ddof: int = 1, + *args, + engine: str | None = None, + engine_kwargs: dict[str, bool] | None = None, + **kwargs, + ): nv.validate_window_func("std", args, kwargs) + if maybe_use_numba(engine): + if self.method == "table": + raise NotImplementedError("std not supported with method='table'") + else: + from pandas.core._numba.kernels import sliding_var + + return zsqrt( + self._numba_apply(sliding_var, "rolling_std", engine_kwargs, ddof) + ) window_func = window_aggregations.roll_var def zsqrt_func(values, begin, end, min_periods): @@ -1472,8 +1489,24 @@ def zsqrt_func(values, begin, end, min_periods): **kwargs, ) - def var(self, ddof: int = 1, *args, **kwargs): + def var( + self, + ddof: int = 1, + *args, + engine: str | None = None, + engine_kwargs: dict[str, bool] | None = None, + **kwargs, + ): nv.validate_window_func("var", args, kwargs) + if maybe_use_numba(engine): + if self.method == "table": + raise NotImplementedError("var not supported with method='table'") + else: + from pandas.core._numba.kernels import sliding_var + + return self._numba_apply( + sliding_var, "rolling_var", engine_kwargs, ddof + ) window_func = partial(window_aggregations.roll_var, ddof=ddof) return self._apply( window_func, @@ -1810,7 +1843,7 @@ def apply( template_header, create_section_header("Parameters"), args_compat, - window_agg_numba_parameters, + window_agg_numba_parameters(), kwargs_compat, 
create_section_header("Returns"), template_returns, @@ -1884,7 +1917,7 @@ def sum( template_header, create_section_header("Parameters"), args_compat, - window_agg_numba_parameters, + window_agg_numba_parameters(), kwargs_compat, create_section_header("Returns"), template_returns, @@ -1910,7 +1943,7 @@ def max( template_header, create_section_header("Parameters"), args_compat, - window_agg_numba_parameters, + window_agg_numba_parameters(), kwargs_compat, create_section_header("Returns"), template_returns, @@ -1951,7 +1984,7 @@ def min( template_header, create_section_header("Parameters"), args_compat, - window_agg_numba_parameters, + window_agg_numba_parameters(), kwargs_compat, create_section_header("Returns"), template_returns, @@ -1998,7 +2031,7 @@ def mean( @doc( template_header, create_section_header("Parameters"), - window_agg_numba_parameters, + window_agg_numba_parameters(), kwargs_compat, create_section_header("Returns"), template_returns, @@ -2044,6 +2077,7 @@ def median( """ ).replace("\n", "", 1), args_compat, + window_agg_numba_parameters("1.4"), kwargs_compat, create_section_header("Returns"), template_returns, @@ -2081,9 +2115,18 @@ def median( aggregation_description="standard deviation", agg_method="std", ) - def std(self, ddof: int = 1, *args, **kwargs): + def std( + self, + ddof: int = 1, + *args, + engine: str | None = None, + engine_kwargs: dict[str, bool] | None = None, + **kwargs, + ): nv.validate_rolling_func("std", args, kwargs) - return super().std(ddof=ddof, **kwargs) + return super().std( + ddof=ddof, engine=engine, engine_kwargs=engine_kwargs, **kwargs + ) @doc( template_header, @@ -2096,6 +2139,7 @@ def std(self, ddof: int = 1, *args, **kwargs): """ ).replace("\n", "", 1), args_compat, + window_agg_numba_parameters("1.4"), kwargs_compat, create_section_header("Returns"), template_returns, @@ -2133,9 +2177,18 @@ def std(self, ddof: int = 1, *args, **kwargs): aggregation_description="variance", agg_method="var", ) - def var(self, ddof: 
int = 1, *args, **kwargs): + def var( + self, + ddof: int = 1, + *args, + engine: str | None = None, + engine_kwargs: dict[str, bool] | None = None, + **kwargs, + ): nv.validate_rolling_func("var", args, kwargs) - return super().var(ddof=ddof, **kwargs) + return super().var( + ddof=ddof, engine=engine, engine_kwargs=engine_kwargs, **kwargs + ) @doc( template_header, diff --git a/pandas/tests/window/conftest.py b/pandas/tests/window/conftest.py index 7b1aa93b5923a..bf1af0c83c93f 100644 --- a/pandas/tests/window/conftest.py +++ b/pandas/tests/window/conftest.py @@ -64,11 +64,15 @@ def arithmetic_win_operators(request): @pytest.fixture( params=[ - "sum", - "mean", - "median", - "max", - "min", + ["sum", {}], + ["mean", {}], + ["median", {}], + ["max", {}], + ["min", {}], + ["var", {}], + ["var", {"ddof": 0}], + ["std", {}], + ["std", {"ddof": 0}], ] ) def arithmetic_numba_supported_operators(request): diff --git a/pandas/tests/window/test_numba.py b/pandas/tests/window/test_numba.py index ad291344fd6ed..8cae9c0182724 100644 --- a/pandas/tests/window/test_numba.py +++ b/pandas/tests/window/test_numba.py @@ -50,16 +50,18 @@ def test_numba_vs_cython_rolling_methods( self, data, nogil, parallel, nopython, arithmetic_numba_supported_operators ): - method = arithmetic_numba_supported_operators + method, kwargs = arithmetic_numba_supported_operators engine_kwargs = {"nogil": nogil, "parallel": parallel, "nopython": nopython} roll = data.rolling(2) - result = getattr(roll, method)(engine="numba", engine_kwargs=engine_kwargs) - expected = getattr(roll, method)(engine="cython") + result = getattr(roll, method)( + engine="numba", engine_kwargs=engine_kwargs, **kwargs + ) + expected = getattr(roll, method)(engine="cython", **kwargs) # Check the cache - if method not in ("mean", "sum"): + if method not in ("mean", "sum", "var", "std"): assert ( getattr(np, f"nan{method}"), "Rolling_apply_single", @@ -74,17 +76,19 @@ def test_numba_vs_cython_expanding_methods( self, data, nogil, 
parallel, nopython, arithmetic_numba_supported_operators ): - method = arithmetic_numba_supported_operators + method, kwargs = arithmetic_numba_supported_operators engine_kwargs = {"nogil": nogil, "parallel": parallel, "nopython": nopython} data = DataFrame(np.eye(5)) expand = data.expanding() - result = getattr(expand, method)(engine="numba", engine_kwargs=engine_kwargs) - expected = getattr(expand, method)(engine="cython") + result = getattr(expand, method)( + engine="numba", engine_kwargs=engine_kwargs, **kwargs + ) + expected = getattr(expand, method)(engine="cython", **kwargs) # Check the cache - if method not in ("mean", "sum"): + if method not in ("mean", "sum", "var", "std"): assert ( getattr(np, f"nan{method}"), "Expanding_apply_single", @@ -282,19 +286,26 @@ def f(x): def test_table_method_rolling_methods( self, axis, nogil, parallel, nopython, arithmetic_numba_supported_operators ): - method = arithmetic_numba_supported_operators + method, kwargs = arithmetic_numba_supported_operators engine_kwargs = {"nogil": nogil, "parallel": parallel, "nopython": nopython} df = DataFrame(np.eye(3)) - - result = getattr( - df.rolling(2, method="table", axis=axis, min_periods=0), method - )(engine_kwargs=engine_kwargs, engine="numba") - expected = getattr( - df.rolling(2, method="single", axis=axis, min_periods=0), method - )(engine_kwargs=engine_kwargs, engine="numba") - tm.assert_frame_equal(result, expected) + roll_table = df.rolling(2, method="table", axis=axis, min_periods=0) + if method in ("var", "std"): + with pytest.raises(NotImplementedError, match=f"{method} not supported"): + getattr(roll_table, method)( + engine_kwargs=engine_kwargs, engine="numba", **kwargs + ) + else: + roll_single = df.rolling(2, method="single", axis=axis, min_periods=0) + result = getattr(roll_table, method)( + engine_kwargs=engine_kwargs, engine="numba", **kwargs + ) + expected = getattr(roll_single, method)( + engine_kwargs=engine_kwargs, engine="numba", **kwargs + ) + 
tm.assert_frame_equal(result, expected) def test_table_method_rolling_apply(self, axis, nogil, parallel, nopython): engine_kwargs = {"nogil": nogil, "parallel": parallel, "nopython": nopython} @@ -349,19 +360,26 @@ def f(x): def test_table_method_expanding_methods( self, axis, nogil, parallel, nopython, arithmetic_numba_supported_operators ): - method = arithmetic_numba_supported_operators + method, kwargs = arithmetic_numba_supported_operators engine_kwargs = {"nogil": nogil, "parallel": parallel, "nopython": nopython} df = DataFrame(np.eye(3)) - - result = getattr(df.expanding(method="table", axis=axis), method)( - engine_kwargs=engine_kwargs, engine="numba" - ) - expected = getattr(df.expanding(method="single", axis=axis), method)( - engine_kwargs=engine_kwargs, engine="numba" - ) - tm.assert_frame_equal(result, expected) + expand_table = df.expanding(method="table", axis=axis) + if method in ("var", "std"): + with pytest.raises(NotImplementedError, match=f"{method} not supported"): + getattr(expand_table, method)( + engine_kwargs=engine_kwargs, engine="numba", **kwargs + ) + else: + expand_single = df.expanding(method="single", axis=axis) + result = getattr(expand_table, method)( + engine_kwargs=engine_kwargs, engine="numba", **kwargs + ) + expected = getattr(expand_single, method)( + engine_kwargs=engine_kwargs, engine="numba", **kwargs + ) + tm.assert_frame_equal(result, expected) @pytest.mark.parametrize("data", [np.eye(3), np.ones((2, 3)), np.ones((3, 2))]) @pytest.mark.parametrize("method", ["mean", "sum"])
- [x] tests added / passed - [x] Ensure all linting tests pass, see [here](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit) for how to run them - [x] whatsnew entry
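The patch above extends `engine="numba"` support to `Rolling.var`/`Rolling.std` (including `ddof`), while `method="table"` raises `NotImplementedError` for those methods. A minimal sketch of the new call path; since numba is an optional dependency, the call is guarded so the sketch still runs without it:

```python
import pandas as pd

s = pd.Series(range(10), dtype="float64")
roll = s.rolling(3)

# Cython baseline: always available. var of [0, 1, 2] with ddof=1 is 1.0.
expected = roll.var(ddof=1)

# With this PR, var/std accept engine="numba"; guard the call because
# numba may not be installed in every environment.
try:
    result = roll.var(ddof=1, engine="numba")
except ImportError:
    result = expected  # numba not installed; fall back for the sketch

# The numba result should agree with the cython result.
pd.testing.assert_series_equal(result, expected)
```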
https://api.github.com/repos/pandas-dev/pandas/pulls/44461
2021-11-15T04:53:02Z
2021-11-26T23:37:11Z
2021-11-26T23:37:11Z
2021-11-26T23:40:37Z
ENH: don't silently ignore dtype in NaT/Timestamp/Timedelta to_numpy
diff --git a/doc/source/whatsnew/v1.4.0.rst b/doc/source/whatsnew/v1.4.0.rst index a593a03de5c25..2492b51aa6c23 100644 --- a/doc/source/whatsnew/v1.4.0.rst +++ b/doc/source/whatsnew/v1.4.0.rst @@ -210,6 +210,8 @@ Other enhancements - :meth:`read_excel` now accepts a ``decimal`` argument that allow the user to specify the decimal point when parsing string columns to numeric (:issue:`14403`) - :meth:`.GroupBy.mean` now supports `Numba <http://numba.pydata.org/>`_ execution with the ``engine`` keyword (:issue:`43731`) - :meth:`Timestamp.isoformat`, now handles the ``timespec`` argument from the base :class:``datetime`` class (:issue:`26131`) +- :meth:`NaT.to_numpy` ``dtype`` argument is now respected, so ``np.timedelta64`` can be returned (:issue:`44460`) +- .. --------------------------------------------------------------------------- diff --git a/pandas/_libs/tslibs/nattype.pyi b/pandas/_libs/tslibs/nattype.pyi index 22e6395a1fe99..a7ee9a70342d4 100644 --- a/pandas/_libs/tslibs/nattype.pyi +++ b/pandas/_libs/tslibs/nattype.pyi @@ -18,7 +18,9 @@ class NaTType(datetime): value: np.int64 def asm8(self) -> np.datetime64: ... def to_datetime64(self) -> np.datetime64: ... - def to_numpy(self, dtype=..., copy: bool = ...) -> np.datetime64: ... + def to_numpy( + self, dtype=..., copy: bool = ... + ) -> np.datetime64 | np.timedelta64: ... @property def is_leap_year(self) -> bool: ... @property diff --git a/pandas/_libs/tslibs/nattype.pyx b/pandas/_libs/tslibs/nattype.pyx index 09bfc4527a428..0ec0fb9e814c1 100644 --- a/pandas/_libs/tslibs/nattype.pyx +++ b/pandas/_libs/tslibs/nattype.pyx @@ -258,19 +258,20 @@ cdef class _NaT(datetime): """ return np.datetime64('NaT', "ns") - def to_numpy(self, dtype=None, copy=False) -> np.datetime64: + def to_numpy(self, dtype=None, copy=False) -> np.datetime64 | np.timedelta64: """ - Convert the Timestamp to a NumPy datetime64. + Convert the Timestamp to a NumPy datetime64 or timedelta64. .. 
versionadded:: 0.25.0 - This is an alias method for `Timestamp.to_datetime64()`. The dtype and - copy parameters are available here only for compatibility. Their values + With the default 'dtype', this is an alias method for `NaT.to_datetime64()`. + + The copy parameter is available here only for compatibility. Its value will not affect the return value. Returns ------- - numpy.datetime64 + numpy.datetime64 or numpy.timedelta64 See Also -------- @@ -286,7 +287,22 @@ cdef class _NaT(datetime): >>> pd.NaT.to_numpy() numpy.datetime64('NaT') + + >>> pd.NaT.to_numpy("m8[ns]") + numpy.timedelta64('NaT','ns') """ + if dtype is not None: + # GH#44460 + dtype = np.dtype(dtype) + if dtype.kind == "M": + return np.datetime64("NaT").astype(dtype) + elif dtype.kind == "m": + return np.timedelta64("NaT").astype(dtype) + else: + raise ValueError( + "NaT.to_numpy dtype must be a datetime64 dtype, timedelta64 " + "dtype, or None." + ) return self.to_datetime64() def __repr__(self) -> str: diff --git a/pandas/_libs/tslibs/timedeltas.pyx b/pandas/_libs/tslibs/timedeltas.pyx index 43f9be3fef5ee..be39ccd444865 100644 --- a/pandas/_libs/tslibs/timedeltas.pyx +++ b/pandas/_libs/tslibs/timedeltas.pyx @@ -929,6 +929,10 @@ cdef class _Timedelta(timedelta): -------- Series.to_numpy : Similar method for Series. """ + if dtype is not None or copy is not False: + raise ValueError( + "Timedelta.to_numpy dtype and copy arguments are ignored" + ) return self.to_timedelta64() def view(self, dtype): diff --git a/pandas/_libs/tslibs/timestamps.pyx b/pandas/_libs/tslibs/timestamps.pyx index 28b8158548ca8..bf3b3ed0264a0 100644 --- a/pandas/_libs/tslibs/timestamps.pyx +++ b/pandas/_libs/tslibs/timestamps.pyx @@ -934,6 +934,10 @@ cdef class _Timestamp(ABCTimestamp): >>> pd.NaT.to_numpy() numpy.datetime64('NaT') """ + if dtype is not None or copy is not False: + raise ValueError( + "Timestamp.to_numpy dtype and copy arguments are ignored." 
+ ) return self.to_datetime64() def to_period(self, freq=None): diff --git a/pandas/tests/scalar/test_nat.py b/pandas/tests/scalar/test_nat.py index b9718249b38c8..73227caa9fd62 100644 --- a/pandas/tests/scalar/test_nat.py +++ b/pandas/tests/scalar/test_nat.py @@ -330,6 +330,11 @@ def test_nat_doc_strings(compare): if klass == Timestamp and method == "isoformat": return + if method == "to_numpy": + # GH#44460 can return either dt64 or td64 depending on dtype, + # different docstring is intentional + return + nat_doc = getattr(NaT, method).__doc__ assert klass_doc == nat_doc @@ -511,6 +516,22 @@ def test_to_numpy_alias(): assert isna(expected) and isna(result) + # GH#44460 + result = NaT.to_numpy("M8[s]") + assert isinstance(result, np.datetime64) + assert result.dtype == "M8[s]" + + result = NaT.to_numpy("m8[ns]") + assert isinstance(result, np.timedelta64) + assert result.dtype == "m8[ns]" + + result = NaT.to_numpy("m8[s]") + assert isinstance(result, np.timedelta64) + assert result.dtype == "m8[s]" + + with pytest.raises(ValueError, match="NaT.to_numpy dtype must be a "): + NaT.to_numpy(np.int64) + @pytest.mark.parametrize( "other", diff --git a/pandas/tests/scalar/timedelta/test_timedelta.py b/pandas/tests/scalar/timedelta/test_timedelta.py index 4aa2f62fe85a0..cb3468c097cbf 100644 --- a/pandas/tests/scalar/timedelta/test_timedelta.py +++ b/pandas/tests/scalar/timedelta/test_timedelta.py @@ -317,6 +317,13 @@ def test_to_numpy_alias(self): td = Timedelta("10m7s") assert td.to_timedelta64() == td.to_numpy() + # GH#44460 + msg = "dtype and copy arguments are ignored" + with pytest.raises(ValueError, match=msg): + td.to_numpy("m8[s]") + with pytest.raises(ValueError, match=msg): + td.to_numpy(copy=True) + @pytest.mark.parametrize( "freq,s1,s2", [ diff --git a/pandas/tests/scalar/timestamp/test_timestamp.py b/pandas/tests/scalar/timestamp/test_timestamp.py index f2010b33538fb..214ad634e78da 100644 --- a/pandas/tests/scalar/timestamp/test_timestamp.py +++ 
b/pandas/tests/scalar/timestamp/test_timestamp.py @@ -619,6 +619,13 @@ def test_to_numpy_alias(self): ts = Timestamp(datetime.now()) assert ts.to_datetime64() == ts.to_numpy() + # GH#44460 + msg = "dtype and copy arguments are ignored" + with pytest.raises(ValueError, match=msg): + ts.to_numpy("M8[s]") + with pytest.raises(ValueError, match=msg): + ts.to_numpy(copy=True) + class SubDatetime(datetime): pass
- [ ] closes #xxxx - [x] tests added / passed - [x] Ensure all linting tests pass, see [here](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit) for how to run them - [x] whatsnew entry
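The behavior change is easiest to see at the call site: `NaT.to_numpy` now honors a datetime64 or timedelta64 `dtype`, while `Timestamp.to_numpy` (and `Timedelta.to_numpy`) raise instead of silently ignoring the argument. A small sketch against a pandas build that includes this change:

```python
import numpy as np
import pandas as pd

# The dtype argument is now respected rather than ignored.
result = pd.NaT.to_numpy("m8[ns]")
assert isinstance(result, np.timedelta64)
assert result.dtype == "m8[ns]"

# The default remains a datetime64 NaT.
default = pd.NaT.to_numpy()
assert isinstance(default, np.datetime64)

# Timestamp.to_numpy no longer silently ignores dtype; it raises.
try:
    pd.Timestamp("2021-11-15").to_numpy("M8[s]")
    raise AssertionError("expected ValueError")
except ValueError:
    pass  # "dtype and copy arguments are ignored"
```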
https://api.github.com/repos/pandas-dev/pandas/pulls/44460
2021-11-15T04:43:04Z
2021-11-15T13:58:31Z
2021-11-15T13:58:31Z
2021-11-15T15:25:09Z
BUG: Series/Index arithmetic result names with NAs
diff --git a/doc/source/whatsnew/v1.4.0.rst b/doc/source/whatsnew/v1.4.0.rst index 2fe289a5f7c35..3dd4ab656a859 100644 --- a/doc/source/whatsnew/v1.4.0.rst +++ b/doc/source/whatsnew/v1.4.0.rst @@ -563,6 +563,7 @@ Numeric - Bug in ``numexpr`` engine still being used when the option ``compute.use_numexpr`` is set to ``False`` (:issue:`32556`) - Bug in :class:`DataFrame` arithmetic ops with a subclass whose :meth:`_constructor` attribute is a callable other than the subclass itself (:issue:`43201`) - Bug in arithmetic operations involving :class:`RangeIndex` where the result would have the incorrect ``name`` (:issue:`43962`) +- Bug in arithmetic operations involving :class:`Series` where the result could have the incorrect ``name`` when the operands having matching NA or matching tuple names (:issue:`44459`) - Conversion diff --git a/pandas/conftest.py b/pandas/conftest.py index 65d4b936efe44..7bc6dbae38e6c 100644 --- a/pandas/conftest.py +++ b/pandas/conftest.py @@ -1656,6 +1656,11 @@ def __init__(self, **kwargs): ("foo", None, None), ("Egon", "Venkman", None), ("NCC1701D", "NCC1701D", "NCC1701D"), + # possibly-matching NAs + (np.nan, np.nan, np.nan), + (np.nan, pd.NaT, None), + (np.nan, pd.NA, None), + (pd.NA, pd.NA, pd.NA), ] ) def names(request): diff --git a/pandas/core/indexes/base.py b/pandas/core/indexes/base.py index a8896c1fde546..a49c303e735ab 100644 --- a/pandas/core/indexes/base.py +++ b/pandas/core/indexes/base.py @@ -2950,7 +2950,7 @@ def _get_reconciled_name_object(self, other): case make a shallow copy of self. 
""" name = get_op_result_name(self, other) - if self.name != name: + if self.name is not name: return self.rename(name) return self diff --git a/pandas/core/ops/common.py b/pandas/core/ops/common.py index 2a76eb92120e7..b883fe7751daa 100644 --- a/pandas/core/ops/common.py +++ b/pandas/core/ops/common.py @@ -5,6 +5,7 @@ from typing import Callable from pandas._libs.lib import item_from_zerodim +from pandas._libs.missing import is_matching_na from pandas._typing import F from pandas.core.dtypes.generic import ( @@ -116,10 +117,21 @@ def _maybe_match_name(a, b): a_has = hasattr(a, "name") b_has = hasattr(b, "name") if a_has and b_has: - if a.name == b.name: - return a.name - else: - # TODO: what if they both have np.nan for their names? + try: + if a.name == b.name: + return a.name + elif is_matching_na(a.name, b.name): + # e.g. both are np.nan + return a.name + else: + return None + except TypeError: + # pd.NA + if is_matching_na(a.name, b.name): + return a.name + return None + except ValueError: + # e.g. 
np.int64(1) vs (np.int64(1), np.int64(2)) return None elif a_has: return a.name diff --git a/pandas/tests/series/test_arithmetic.py b/pandas/tests/series/test_arithmetic.py index b7379e0f6eb49..099a6bada1460 100644 --- a/pandas/tests/series/test_arithmetic.py +++ b/pandas/tests/series/test_arithmetic.py @@ -789,9 +789,9 @@ def test_series_ops_name_retention( assert isinstance(result, Series) if box in [Index, Series]: - assert result.name == names[2] + assert result.name is names[2] or result.name == names[2] else: - assert result.name == names[0] + assert result.name is names[0] or result.name == names[0] def test_binop_maybe_preserve_name(self, datetime_series): # names match, preserve diff --git a/pandas/tests/test_common.py b/pandas/tests/test_common.py index 5e9a53f32e0b7..f81e3d61c8ba5 100644 --- a/pandas/tests/test_common.py +++ b/pandas/tests/test_common.py @@ -100,10 +100,34 @@ def test_random_state(): (Series([1], name="x"), Series([2]), None), (Series([1], name="x"), [2], "x"), ([1], Series([2], name="y"), "y"), + # matching NAs + (Series([1], name=np.nan), pd.Index([], name=np.nan), np.nan), + (Series([1], name=np.nan), pd.Index([], name=pd.NaT), None), + (Series([1], name=pd.NA), pd.Index([], name=pd.NA), pd.NA), + # tuple name GH#39757 + ( + Series([1], name=np.int64(1)), + pd.Index([], name=(np.int64(1), np.int64(2))), + None, + ), + ( + Series([1], name=(np.int64(1), np.int64(2))), + pd.Index([], name=(np.int64(1), np.int64(2))), + (np.int64(1), np.int64(2)), + ), + pytest.param( + Series([1], name=(np.float64("nan"), np.int64(2))), + pd.Index([], name=(np.float64("nan"), np.int64(2))), + (np.float64("nan"), np.int64(2)), + marks=pytest.mark.xfail( + reason="Not checking for matching NAs inside tuples." 
+ ), + ), ], ) def test_maybe_match_name(left, right, expected): - assert ops.common._maybe_match_name(left, right) == expected + res = ops.common._maybe_match_name(left, right) + assert res is expected or res == expected def test_standardize_mapping():
- [x] closes #39757 - [x] closes #22041 - [x] tests added / passed - [x] Ensure all linting tests pass, see [here](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit) for how to run them - [x] whatsnew entry
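The fix in `_maybe_match_name` hinges on `np.nan == np.nan` being `False`: previously two Series both named `np.nan` lost their name under arithmetic. A sketch of the corrected behavior, assuming a pandas build with this change:

```python
import numpy as np
import pandas as pd

a = pd.Series([1, 2], name=np.nan)
b = pd.Series([3, 4], name=np.nan)

# np.nan == np.nan is False, so the old code dropped the name;
# is_matching_na now recognizes the two names as "matching".
result = a + b
assert result.name is not None
assert np.isnan(result.name)

# Non-matching NA sentinels (np.nan vs pd.NaT) still give name=None.
c = pd.Series([3, 4], name=pd.NaT)
assert (a + c).name is None
```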
https://api.github.com/repos/pandas-dev/pandas/pulls/44459
2021-11-15T02:57:34Z
2021-11-20T15:55:56Z
2021-11-20T15:55:55Z
2021-11-20T17:30:54Z
REF: dont dispatch Block.putmask to Block.where
diff --git a/pandas/core/internals/blocks.py b/pandas/core/internals/blocks.py index a8e7224eb524f..a6fa2a9e3b2c1 100644 --- a/pandas/core/internals/blocks.py +++ b/pandas/core/internals/blocks.py @@ -30,6 +30,7 @@ Shape, npt, ) +from pandas.compat import np_version_under1p20 from pandas.util._decorators import cache_readonly from pandas.util._exceptions import find_stack_level from pandas.util._validators import validate_bool_kwarg @@ -969,16 +970,15 @@ def putmask(self, mask, new) -> list[Block]: putmask_without_repeat(values.T, mask, new) return [self] - elif noop: - return [self] - - dtype, _ = infer_dtype_from(new) - if dtype.kind in ["m", "M"]: + elif np_version_under1p20 and infer_dtype_from(new)[0].kind in ["m", "M"]: # using putmask with object dtype will incorrectly cast to object # Having excluded self._can_hold_element, we know we cannot operate # in-place, so we are safe using `where` return self.where(new, ~mask) + elif noop: + return [self] + elif self.ndim == 1 or self.shape[0] == 1: # no need to split columns
- [ ] closes #xxxx - [ ] tests added / passed - [ ] Ensure all linting tests pass, see [here](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit) for how to run them - [ ] whatsnew entry
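For context on the branch being reordered above: `Block.putmask` ultimately wraps `np.putmask`, which writes `new` into `values` wherever `mask` is True (pandas' `putmask_without_repeat` is a stricter variant that refuses numpy's silent repeating of a short `new`). A generic illustration of the underlying numpy operation:

```python
import numpy as np

arr = np.array([1.0, 2.0, 3.0])
mask = np.array([True, False, True])

# In-place masked assignment: positions where mask is True get 99.0.
np.putmask(arr, mask, 99.0)
```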
https://api.github.com/repos/pandas-dev/pandas/pulls/44457
2021-11-15T02:12:23Z
2021-11-20T21:15:15Z
2021-11-20T21:15:15Z
2021-11-20T21:27:49Z
CLN: TODOs and FIXMEs
diff --git a/pandas/core/array_algos/quantile.py b/pandas/core/array_algos/quantile.py index c5e96f32e261f..a1b40acc2558e 100644 --- a/pandas/core/array_algos/quantile.py +++ b/pandas/core/array_algos/quantile.py @@ -4,7 +4,10 @@ import numpy as np -from pandas._typing import ArrayLike +from pandas._typing import ( + ArrayLike, + npt, +) from pandas.core.dtypes.common import is_sparse from pandas.core.dtypes.missing import ( @@ -18,7 +21,9 @@ from pandas.core.arrays import ExtensionArray -def quantile_compat(values: ArrayLike, qs: np.ndarray, interpolation: str) -> ArrayLike: +def quantile_compat( + values: ArrayLike, qs: npt.NDArray[np.float64], interpolation: str +) -> ArrayLike: """ Compute the quantiles of the given values for each quantile in `qs`. @@ -55,7 +60,7 @@ def _quantile_with_mask( values: np.ndarray, mask: np.ndarray, fill_value, - qs: np.ndarray, + qs: npt.NDArray[np.float64], interpolation: str, ) -> np.ndarray: """ @@ -112,7 +117,7 @@ def _quantile_with_mask( def _quantile_ea_compat( - values: ExtensionArray, qs: np.ndarray, interpolation: str + values: ExtensionArray, qs: npt.NDArray[np.float64], interpolation: str ) -> ExtensionArray: """ ExtensionArray compatibility layer for _quantile_with_mask. 
@@ -158,7 +163,7 @@ def _quantile_ea_compat( def _quantile_ea_fallback( - values: ExtensionArray, qs: np.ndarray, interpolation: str + values: ExtensionArray, qs: npt.NDArray[np.float64], interpolation: str ) -> ExtensionArray: """ quantile compatibility for ExtensionArray subclasses that do not diff --git a/pandas/core/dtypes/cast.py b/pandas/core/dtypes/cast.py index 9cd67ad293f63..6d5162f3fe3a4 100644 --- a/pandas/core/dtypes/cast.py +++ b/pandas/core/dtypes/cast.py @@ -1584,6 +1584,7 @@ def try_timedelta(v: np.ndarray) -> np.ndarray: value = try_datetime(v) # type: ignore[assignment] if value.dtype.kind in ["m", "M"] and seen_str: + # TODO(2.0): enforcing this deprecation should close GH#40111 warnings.warn( f"Inferring {value.dtype} from data containing strings is deprecated " "and will be removed in a future version. To retain the old behavior " diff --git a/pandas/core/indexing.py b/pandas/core/indexing.py index 91f1415178471..a2e49502efaff 100644 --- a/pandas/core/indexing.py +++ b/pandas/core/indexing.py @@ -1902,11 +1902,13 @@ def _setitem_single_block(self, indexer, value, name: str): ): col = item_labels[indexer[info_axis]] if len(item_labels.get_indexer_for([col])) == 1: + # e.g. test_loc_setitem_empty_append_expands_rows loc = item_labels.get_loc(col) self.obj._iset_item(loc, value, inplace=True) return - indexer = maybe_convert_ix(*indexer) + indexer = maybe_convert_ix(*indexer) # e.g. test_setitem_frame_align + if (isinstance(value, ABCSeries) and name != "iloc") or isinstance(value, dict): # TODO(EA): ExtensionBlock.setitem this causes issues with # setting for extensionarrays that store dicts. 
Need to decide diff --git a/pandas/core/internals/blocks.py b/pandas/core/internals/blocks.py index 46e5b5b9c53ad..7b6a76f0a5d10 100644 --- a/pandas/core/internals/blocks.py +++ b/pandas/core/internals/blocks.py @@ -930,10 +930,7 @@ def setitem(self, indexer, value): value = setitem_datetimelike_compat(values, len(values[indexer]), value) values[indexer] = value - if transpose: - values = values.T - block = type(self)(values, placement=self._mgr_locs, ndim=self.ndim) - return block + return self def putmask(self, mask, new) -> list[Block]: """ @@ -961,9 +958,7 @@ def putmask(self, mask, new) -> list[Block]: new = self.fill_value if self._can_hold_element(new): - # error: Argument 1 to "putmask_without_repeat" has incompatible type - # "Union[ndarray, ExtensionArray]"; expected "ndarray" - putmask_without_repeat(self.values.T, mask, new) # type: ignore[arg-type] + putmask_without_repeat(values.T, mask, new) return [self] elif noop: diff --git a/pandas/core/internals/managers.py b/pandas/core/internals/managers.py index cb0c3e05e955f..e77cdd7ad7369 100644 --- a/pandas/core/internals/managers.py +++ b/pandas/core/internals/managers.py @@ -861,9 +861,9 @@ def take(self: T, indexer, axis: int = 1, verify: bool = True) -> T: """ # We have 6 tests that get here with a slice indexer = ( - np.arange(indexer.start, indexer.stop, indexer.step, dtype="int64") + np.arange(indexer.start, indexer.stop, indexer.step, dtype=np.intp) if isinstance(indexer, slice) - else np.asanyarray(indexer, dtype="int64") + else np.asanyarray(indexer, dtype=np.intp) ) n = self.shape[axis] diff --git a/pandas/core/ops/missing.py b/pandas/core/ops/missing.py index 3663e0682c4f4..ed133f30e192d 100644 --- a/pandas/core/ops/missing.py +++ b/pandas/core/ops/missing.py @@ -34,7 +34,7 @@ from pandas.core.ops import roperator -def fill_zeros(result, x, y): +def _fill_zeros(result, x, y): """ If this is a reversed op, then flip x,y @@ -102,9 +102,6 @@ def mask_zero_div_zero(x, y, result: np.ndarray) -> 
np.ndarray: >>> mask_zero_div_zero(x, y, result) array([ inf, nan, -inf]) """ - if not isinstance(result, np.ndarray): - # FIXME: SparseArray would raise TypeError with np.putmask - return result if is_scalar(y): y = np.array(y) @@ -141,7 +138,7 @@ def mask_zero_div_zero(x, y, result: np.ndarray) -> np.ndarray: def dispatch_fill_zeros(op, left, right, result): """ - Call fill_zeros with the appropriate fill value depending on the operation, + Call _fill_zeros with the appropriate fill value depending on the operation, with special logic for divmod and rdivmod. Parameters @@ -163,12 +160,12 @@ def dispatch_fill_zeros(op, left, right, result): if op is divmod: result = ( mask_zero_div_zero(left, right, result[0]), - fill_zeros(result[1], left, right), + _fill_zeros(result[1], left, right), ) elif op is roperator.rdivmod: result = ( mask_zero_div_zero(right, left, result[0]), - fill_zeros(result[1], right, left), + _fill_zeros(result[1], right, left), ) elif op is operator.floordiv: # Note: no need to do this for truediv; in py3 numpy behaves the way @@ -179,7 +176,7 @@ def dispatch_fill_zeros(op, left, right, result): # we want. 
result = mask_zero_div_zero(right, left, result) elif op is operator.mod: - result = fill_zeros(result, left, right) + result = _fill_zeros(result, left, right) elif op is roperator.rmod: - result = fill_zeros(result, right, left) + result = _fill_zeros(result, right, left) return result diff --git a/pandas/tests/base/test_value_counts.py b/pandas/tests/base/test_value_counts.py index 23bb4c5d2670c..ddb21408a1a04 100644 --- a/pandas/tests/base/test_value_counts.py +++ b/pandas/tests/base/test_value_counts.py @@ -1,6 +1,5 @@ import collections from datetime import timedelta -from io import StringIO import numpy as np import pytest @@ -190,19 +189,21 @@ def test_value_counts_datetime64(index_or_series): # GH 3002, datetime64[ns] # don't test names though - txt = "\n".join( - [ - "xxyyzz20100101PIE", - "xxyyzz20100101GUM", - "xxyyzz20100101EGG", - "xxyyww20090101EGG", - "foofoo20080909PIE", - "foofoo20080909GUM", - ] - ) - f = StringIO(txt) - df = pd.read_fwf( - f, widths=[6, 8, 3], names=["person_id", "dt", "food"], parse_dates=["dt"] + df = pd.DataFrame( + { + "person_id": ["xxyyzz", "xxyyzz", "xxyyzz", "xxyyww", "foofoo", "foofoo"], + "dt": pd.to_datetime( + [ + "2010-01-01", + "2010-01-01", + "2010-01-01", + "2009-01-01", + "2008-09-09", + "2008-09-09", + ] + ), + "food": ["PIE", "GUM", "EGG", "EGG", "PIE", "GUM"], + } ) s = klass(df["dt"].copy()) diff --git a/pandas/tests/frame/methods/test_combine_first.py b/pandas/tests/frame/methods/test_combine_first.py index 382c11f23a517..fc485f14a4820 100644 --- a/pandas/tests/frame/methods/test_combine_first.py +++ b/pandas/tests/frame/methods/test_combine_first.py @@ -209,15 +209,15 @@ def test_combine_first_align_nan(self): ) tm.assert_frame_equal(res, exp) assert res["a"].dtype == "datetime64[ns]" - # ToDo: this must be int64 + # TODO: this must be int64 assert res["b"].dtype == "int64" res = dfa.iloc[:0].combine_first(dfb) exp = DataFrame({"a": [np.nan, np.nan], "b": [4, 5]}, columns=["a", "b"]) 
tm.assert_frame_equal(res, exp) - # ToDo: this must be datetime64 + # TODO: this must be datetime64 assert res["a"].dtype == "float64" - # ToDo: this must be int64 + # TODO: this must be int64 assert res["b"].dtype == "int64" def test_combine_first_timezone(self): diff --git a/pandas/tests/frame/methods/test_replace.py b/pandas/tests/frame/methods/test_replace.py index 5e321ad33a2bb..53b71bb489ebb 100644 --- a/pandas/tests/frame/methods/test_replace.py +++ b/pandas/tests/frame/methods/test_replace.py @@ -1,7 +1,6 @@ from __future__ import annotations from datetime import datetime -from io import StringIO import re import numpy as np @@ -912,12 +911,14 @@ def test_replace_dict_tuple_list_ordering_remains_the_same(self): tm.assert_frame_equal(res3, expected) def test_replace_doesnt_replace_without_regex(self): - raw = """fol T_opp T_Dir T_Enh - 0 1 0 0 vo - 1 2 vr 0 0 - 2 2 0 0 0 - 3 3 0 bt 0""" - df = pd.read_csv(StringIO(raw), sep=r"\s+") + df = DataFrame( + { + "fol": [1, 2, 2, 3], + "T_opp": ["0", "vr", "0", "0"], + "T_Dir": ["0", "0", "0", "bt"], + "T_Enh": ["vo", "0", "0", "0"], + } + ) res = df.replace({r"\D": 1}) tm.assert_frame_equal(df, res) diff --git a/pandas/tests/frame/test_block_internals.py b/pandas/tests/frame/test_block_internals.py index 34854be29ad1f..2b8eff036d60a 100644 --- a/pandas/tests/frame/test_block_internals.py +++ b/pandas/tests/frame/test_block_internals.py @@ -2,7 +2,6 @@ datetime, timedelta, ) -from io import StringIO import itertools import numpy as np @@ -289,15 +288,29 @@ def test_pickle(self, float_string_frame, timezone_frame): def test_consolidate_datetime64(self): # numpy vstack bug - data = ( - "starting,ending,measure\n" - "2012-06-21 00:00,2012-06-23 07:00,77\n" - "2012-06-23 07:00,2012-06-23 16:30,65\n" - "2012-06-23 16:30,2012-06-25 08:00,77\n" - "2012-06-25 08:00,2012-06-26 12:00,0\n" - "2012-06-26 12:00,2012-06-27 08:00,77\n" + df = DataFrame( + { + "starting": pd.to_datetime( + [ + "2012-06-21 00:00", + "2012-06-23 
07:00", + "2012-06-23 16:30", + "2012-06-25 08:00", + "2012-06-26 12:00", + ] + ), + "ending": pd.to_datetime( + [ + "2012-06-23 07:00", + "2012-06-23 16:30", + "2012-06-25 08:00", + "2012-06-26 12:00", + "2012-06-27 08:00", + ] + ), + "measure": [77, 65, 77, 0, 77], + } ) - df = pd.read_csv(StringIO(data), parse_dates=[0, 1]) ser_starting = df.starting ser_starting.index = ser_starting.values diff --git a/pandas/tests/frame/test_query_eval.py b/pandas/tests/frame/test_query_eval.py index bc71b0c2048f8..f7eb9dfccc382 100644 --- a/pandas/tests/frame/test_query_eval.py +++ b/pandas/tests/frame/test_query_eval.py @@ -1,4 +1,3 @@ -from io import StringIO import operator import numpy as np @@ -1020,23 +1019,19 @@ def test_object_array_eq_ne(self, parser, engine): def test_query_with_nested_strings(self, parser, engine): skip_if_no_pandas_parser(parser) - raw = """id event timestamp - 1 "page 1 load" 1/1/2014 0:00:01 - 1 "page 1 exit" 1/1/2014 0:00:31 - 2 "page 2 load" 1/1/2014 0:01:01 - 2 "page 2 exit" 1/1/2014 0:01:31 - 3 "page 3 load" 1/1/2014 0:02:01 - 3 "page 3 exit" 1/1/2014 0:02:31 - 4 "page 1 load" 2/1/2014 1:00:01 - 4 "page 1 exit" 2/1/2014 1:00:31 - 5 "page 2 load" 2/1/2014 1:01:01 - 5 "page 2 exit" 2/1/2014 1:01:31 - 6 "page 3 load" 2/1/2014 1:02:01 - 6 "page 3 exit" 2/1/2014 1:02:31 - """ - df = pd.read_csv( - StringIO(raw), sep=r"\s{2,}", engine="python", parse_dates=["timestamp"] + events = [ + f"page {n} {act}" for n in range(1, 4) for act in ["load", "exit"] + ] * 2 + stamps1 = date_range("2014-01-01 0:00:01", freq="30s", periods=6) + stamps2 = date_range("2014-02-01 1:00:01", freq="30s", periods=6) + df = DataFrame( + { + "id": np.arange(1, 7).repeat(2), + "event": events, + "timestamp": stamps1.append(stamps2), + } ) + expected = df[df.event == '"page 1 load"'] res = df.query("""'"page 1 load"' in event""", parser=parser, engine=engine) tm.assert_frame_equal(expected, res) diff --git a/pandas/tests/groupby/test_groupby.py 
b/pandas/tests/groupby/test_groupby.py index f632da9616124..bb3f934511d79 100644 --- a/pandas/tests/groupby/test_groupby.py +++ b/pandas/tests/groupby/test_groupby.py @@ -1,6 +1,5 @@ from datetime import datetime from decimal import Decimal -from io import StringIO import numpy as np import pytest @@ -20,7 +19,6 @@ Timedelta, Timestamp, date_range, - read_csv, to_datetime, ) import pandas._testing as tm @@ -1134,14 +1132,18 @@ def test_grouping_ndarray(df): def test_groupby_wrong_multi_labels(): - data = """index,foo,bar,baz,spam,data -0,foo1,bar1,baz1,spam2,20 -1,foo1,bar2,baz1,spam3,30 -2,foo2,bar2,baz1,spam2,40 -3,foo1,bar1,baz2,spam1,50 -4,foo3,bar1,baz2,spam1,60""" - - data = read_csv(StringIO(data), index_col=0) + + index = Index([0, 1, 2, 3, 4], name="index") + data = DataFrame( + { + "foo": ["foo1", "foo1", "foo2", "foo1", "foo3"], + "bar": ["bar1", "bar2", "bar2", "bar1", "bar1"], + "baz": ["baz1", "baz1", "baz1", "baz2", "baz2"], + "spam": ["spam2", "spam3", "spam2", "spam1", "spam1"], + "data": [20, 30, 40, 50, 60], + }, + index=index, + ) grouped = data.groupby(["foo", "bar", "baz", "spam"]) diff --git a/pandas/tests/indexing/test_indexing.py b/pandas/tests/indexing/test_indexing.py index 6a9ece738952d..0f9612fa5c96c 100644 --- a/pandas/tests/indexing/test_indexing.py +++ b/pandas/tests/indexing/test_indexing.py @@ -277,7 +277,7 @@ def test_dups_fancy_indexing_only_missing_label(self): ): dfnu.loc[["E"]] - # ToDo: check_index_type can be True after GH 11497 + # TODO: check_index_type can be True after GH 11497 @pytest.mark.parametrize("vals", [[0, 1, 2], list("abc")]) def test_dups_fancy_indexing_missing_label(self, vals): diff --git a/pandas/tests/indexing/test_loc.py b/pandas/tests/indexing/test_loc.py index ed9b5cc0850b9..05b56d7aaa323 100644 --- a/pandas/tests/indexing/test_loc.py +++ b/pandas/tests/indexing/test_loc.py @@ -1374,7 +1374,7 @@ def test_loc_setitem_datetimeindex_tz(self, idxer, tz_naive_fixture): tz = tz_naive_fixture idx = 
date_range(start="2015-07-12", periods=3, freq="H", tz=tz) expected = DataFrame(1.2, index=idx, columns=["var"]) - # if result started off with object dtype, tehn the .loc.__setitem__ + # if result started off with object dtype, then the .loc.__setitem__ # below would retain object dtype result = DataFrame(index=idx, columns=["var"], dtype=np.float64) result.loc[:, idxer] = expected diff --git a/pandas/tests/resample/test_resample_api.py b/pandas/tests/resample/test_resample_api.py index 10fabe234d218..bb49450b8414e 100644 --- a/pandas/tests/resample/test_resample_api.py +++ b/pandas/tests/resample/test_resample_api.py @@ -316,7 +316,7 @@ def test_agg_consistency_int_str_column_mix(): r.agg({2: "mean", "b": "sum"}) -# TODO: once GH 14008 is fixed, move these tests into +# TODO(GH#14008): once GH 14008 is fixed, move these tests into # `Base` test class diff --git a/pandas/tests/reshape/concat/test_invalid.py b/pandas/tests/reshape/concat/test_invalid.py index cd2a7ca33a267..920d31d1bc43a 100644 --- a/pandas/tests/reshape/concat/test_invalid.py +++ b/pandas/tests/reshape/concat/test_invalid.py @@ -34,9 +34,11 @@ def test_concat_invalid_first_argument(self): with pytest.raises(TypeError, match=msg): concat(df1) + def test_concat_generator_obj(self): # generator ok though concat(DataFrame(np.random.rand(5, 5)) for _ in range(3)) + def test_concat_textreader_obj(self): # text reader ok # GH6583 data = """index,A,B,C,D diff --git a/pandas/tests/scalar/timedelta/test_arithmetic.py b/pandas/tests/scalar/timedelta/test_arithmetic.py index 9c36d5777d60c..f2e6b91144898 100644 --- a/pandas/tests/scalar/timedelta/test_arithmetic.py +++ b/pandas/tests/scalar/timedelta/test_arithmetic.py @@ -202,6 +202,34 @@ def test_td_add_sub_numeric_raises(self): with pytest.raises(TypeError, match=msg): other - td + def test_td_add_sub_int_ndarray(self): + td = Timedelta("1 day") + other = np.array([1]) + + msg = r"unsupported operand type\(s\) for \+: 'Timedelta' and 'int'" + with 
pytest.raises(TypeError, match=msg): + td + np.array([1]) + + msg = "|".join( + [ + ( + r"unsupported operand type\(s\) for \+: 'numpy.ndarray' " + "and 'Timedelta'" + ), + # This message goes on to say "Please do not rely on this error; + # it may not be given on all Python implementations" + "Concatenation operation is not implemented for NumPy arrays", + ] + ) + with pytest.raises(TypeError, match=msg): + other + td + msg = r"unsupported operand type\(s\) for -: 'Timedelta' and 'int'" + with pytest.raises(TypeError, match=msg): + td - other + msg = r"unsupported operand type\(s\) for -: 'numpy.ndarray' and 'Timedelta'" + with pytest.raises(TypeError, match=msg): + other - td + def test_td_rsub_nat(self): td = Timedelta(10, unit="d") result = NaT - td @@ -224,7 +252,7 @@ def test_td_sub_timedeltalike_object_dtype_array(self): def test_td_sub_mixed_most_timedeltalike_object_dtype_array(self): # GH#21980 - now = Timestamp.now() + now = Timestamp("2021-11-09 09:54:00") arr = np.array([now, Timedelta("1D"), np.timedelta64(2, "h")]) exp = np.array( [ @@ -238,7 +266,7 @@ def test_td_sub_mixed_most_timedeltalike_object_dtype_array(self): def test_td_rsub_mixed_most_timedeltalike_object_dtype_array(self): # GH#21980 - now = Timestamp.now() + now = Timestamp("2021-11-09 09:54:00") arr = np.array([now, Timedelta("1D"), np.timedelta64(2, "h")]) msg = r"unsupported operand type\(s\) for \-: 'Timedelta' and 'Timestamp'" with pytest.raises(TypeError, match=msg): @@ -255,63 +283,32 @@ def test_td_add_timedeltalike_object_dtype_array(self, op): @pytest.mark.parametrize("op", [operator.add, ops.radd]) def test_td_add_mixed_timedeltalike_object_dtype_array(self, op): # GH#21980 - now = Timestamp.now() + now = Timestamp("2021-11-09 09:54:00") arr = np.array([now, Timedelta("1D")]) exp = np.array([now + Timedelta("1D"), Timedelta("2D")]) res = op(arr, Timedelta("1D")) tm.assert_numpy_array_equal(res, exp) - # TODO: moved from index tests following #24365, may need de-duplication - 
def test_ops_ndarray(self): + def test_td_add_sub_td64_ndarray(self): td = Timedelta("1 day") - # timedelta, timedelta - other = pd.to_timedelta(["1 day"]).values - expected = pd.to_timedelta(["2 days"]).values - tm.assert_numpy_array_equal(td + other, expected) - tm.assert_numpy_array_equal(other + td, expected) - msg = r"unsupported operand type\(s\) for \+: 'Timedelta' and 'int'" - with pytest.raises(TypeError, match=msg): - td + np.array([1]) - msg = "|".join( - [ - ( - r"unsupported operand type\(s\) for \+: 'numpy.ndarray' " - "and 'Timedelta'" - ), - "Concatenation operation is not implemented for NumPy arrays", - ] - ) - with pytest.raises(TypeError, match=msg): - np.array([1]) + td - - expected = pd.to_timedelta(["0 days"]).values - tm.assert_numpy_array_equal(td - other, expected) - tm.assert_numpy_array_equal(-other + td, expected) - msg = r"unsupported operand type\(s\) for -: 'Timedelta' and 'int'" - with pytest.raises(TypeError, match=msg): - td - np.array([1]) - msg = r"unsupported operand type\(s\) for -: 'numpy.ndarray' and 'Timedelta'" - with pytest.raises(TypeError, match=msg): - np.array([1]) - td + other = np.array([td.to_timedelta64()]) + expected = np.array([Timedelta("2 Days").to_timedelta64()]) - expected = pd.to_timedelta(["2 days"]).values - tm.assert_numpy_array_equal(td * np.array([2]), expected) - tm.assert_numpy_array_equal(np.array([2]) * td, expected) - msg = ( - "ufunc '?multiply'? 
cannot use operands with types " - r"dtype\('<m8\[ns\]'\) and dtype\('<m8\[ns\]'\)" - ) - with pytest.raises(TypeError, match=msg): - td * other - with pytest.raises(TypeError, match=msg): - other * td + result = td + other + tm.assert_numpy_array_equal(result, expected) + result = other + td + tm.assert_numpy_array_equal(result, expected) - tm.assert_numpy_array_equal(td / other, np.array([1], dtype=np.float64)) - tm.assert_numpy_array_equal(other / td, np.array([1], dtype=np.float64)) + result = td - other + tm.assert_numpy_array_equal(result, expected * 0) + result = other - td + tm.assert_numpy_array_equal(result, expected * 0) - # timedelta, datetime + def test_td_add_sub_dt64_ndarray(self): + td = Timedelta("1 day") other = pd.to_datetime(["2000-01-01"]).values + expected = pd.to_datetime(["2000-01-02"]).values tm.assert_numpy_array_equal(td + other, expected) tm.assert_numpy_array_equal(other + td, expected) @@ -386,6 +383,30 @@ def test_td_mul_scalar(self, op): # invalid multiply with another timedelta op(td, td) + def test_td_mul_numeric_ndarray(self): + td = Timedelta("1 day") + other = np.array([2]) + expected = np.array([Timedelta("2 Days").to_timedelta64()]) + + result = td * other + tm.assert_numpy_array_equal(result, expected) + + result = other * td + tm.assert_numpy_array_equal(result, expected) + + def test_td_mul_td64_ndarray_invalid(self): + td = Timedelta("1 day") + other = np.array([Timedelta("2 Days").to_timedelta64()]) + + msg = ( + "ufunc '?multiply'? 
cannot use operands with types " + r"dtype\('<m8\[ns\]'\) and dtype\('<m8\[ns\]'\)" + ) + with pytest.raises(TypeError, match=msg): + td * other + with pytest.raises(TypeError, match=msg): + other * td + # --------------------------------------------------------------- # Timedelta.__div__, __truediv__ @@ -450,6 +471,18 @@ def test_td_div_nan(self, nan): result = td // nan assert result is NaT + def test_td_div_td64_ndarray(self): + td = Timedelta("1 day") + + other = np.array([Timedelta("2 Days").to_timedelta64()]) + expected = np.array([0.5]) + + result = td / other + tm.assert_numpy_array_equal(result, expected) + + result = other / td + tm.assert_numpy_array_equal(result, expected * 4) + # --------------------------------------------------------------- # Timedelta.__rdiv__ @@ -873,7 +906,7 @@ def test_rdivmod_invalid(self): "arr", [ np.array([Timestamp("20130101 9:01"), Timestamp("20121230 9:02")]), - np.array([Timestamp.now(), Timedelta("1D")]), + np.array([Timestamp("2021-11-09 09:54:00"), Timedelta("1D")]), ], ) def test_td_op_timedelta_timedeltalike_array(self, op, arr): diff --git a/pandas/tests/series/methods/test_drop_duplicates.py b/pandas/tests/series/methods/test_drop_duplicates.py index 8b5557ab6e85f..c5cffa0c9fb0f 100644 --- a/pandas/tests/series/methods/test_drop_duplicates.py +++ b/pandas/tests/series/methods/test_drop_duplicates.py @@ -77,7 +77,7 @@ def dtype(self, request): return request.param @pytest.fixture - def cat_series1(self, dtype, ordered): + def cat_series_unused_category(self, dtype, ordered): # Test case 1 cat_array = np.array([1, 2, 3, 4, 5], dtype=np.dtype(dtype)) @@ -86,8 +86,8 @@ def cat_series1(self, dtype, ordered): tc1 = Series(cat) return tc1 - def test_drop_duplicates_categorical_non_bool(self, cat_series1): - tc1 = cat_series1 + def test_drop_duplicates_categorical_non_bool(self, cat_series_unused_category): + tc1 = cat_series_unused_category expected = Series([False, False, False, True]) @@ -102,8 +102,10 @@ def 
test_drop_duplicates_categorical_non_bool(self, cat_series1): assert return_value is None tm.assert_series_equal(sc, tc1[~expected]) - def test_drop_duplicates_categorical_non_bool_keeplast(self, cat_series1): - tc1 = cat_series1 + def test_drop_duplicates_categorical_non_bool_keeplast( + self, cat_series_unused_category + ): + tc1 = cat_series_unused_category expected = Series([False, False, True, False]) @@ -118,8 +120,10 @@ def test_drop_duplicates_categorical_non_bool_keeplast(self, cat_series1): assert return_value is None tm.assert_series_equal(sc, tc1[~expected]) - def test_drop_duplicates_categorical_non_bool_keepfalse(self, cat_series1): - tc1 = cat_series1 + def test_drop_duplicates_categorical_non_bool_keepfalse( + self, cat_series_unused_category + ): + tc1 = cat_series_unused_category expected = Series([False, False, True, True]) @@ -135,8 +139,8 @@ def test_drop_duplicates_categorical_non_bool_keepfalse(self, cat_series1): tm.assert_series_equal(sc, tc1[~expected]) @pytest.fixture - def cat_series2(self, dtype, ordered): - # Test case 2; TODO: better name + def cat_series(self, dtype, ordered): + # no unused categories, unlike cat_series_unused_category cat_array = np.array([1, 2, 3, 4, 5], dtype=np.dtype(dtype)) input2 = np.array([1, 2, 3, 5, 3, 2, 4], dtype=np.dtype(dtype)) @@ -144,9 +148,8 @@ def cat_series2(self, dtype, ordered): tc2 = Series(cat) return tc2 - def test_drop_duplicates_categorical_non_bool2(self, cat_series2): - # Test case 2; TODO: better name - tc2 = cat_series2 + def test_drop_duplicates_categorical_non_bool2(self, cat_series): + tc2 = cat_series expected = Series([False, False, False, False, True, True, False]) @@ -161,8 +164,8 @@ def test_drop_duplicates_categorical_non_bool2(self, cat_series2): assert return_value is None tm.assert_series_equal(sc, tc2[~expected]) - def test_drop_duplicates_categorical_non_bool2_keeplast(self, cat_series2): - tc2 = cat_series2 + def test_drop_duplicates_categorical_non_bool2_keeplast(self, 
cat_series): + tc2 = cat_series expected = Series([False, True, True, False, False, False, False]) @@ -177,8 +180,8 @@ def test_drop_duplicates_categorical_non_bool2_keeplast(self, cat_series2): assert return_value is None tm.assert_series_equal(sc, tc2[~expected]) - def test_drop_duplicates_categorical_non_bool2_keepfalse(self, cat_series2): - tc2 = cat_series2 + def test_drop_duplicates_categorical_non_bool2_keepfalse(self, cat_series): + tc2 = cat_series expected = Series([False, True, True, False, True, True, False]) diff --git a/pandas/util/_test_decorators.py b/pandas/util/_test_decorators.py index b78f1652dc419..d5ffca36d325f 100644 --- a/pandas/util/_test_decorators.py +++ b/pandas/util/_test_decorators.py @@ -122,7 +122,7 @@ def _skip_if_no_scipy() -> bool: ) -# TODO: return type, _pytest.mark.structures.MarkDecorator is not public +# TODO(pytest#7469): return type, _pytest.mark.structures.MarkDecorator is not public # https://github.com/pytest-dev/pytest/issues/7469 def skip_if_installed(package: str): """ @@ -138,7 +138,7 @@ def skip_if_installed(package: str): ) -# TODO: return type, _pytest.mark.structures.MarkDecorator is not public +# TODO(pytest#7469): return type, _pytest.mark.structures.MarkDecorator is not public # https://github.com/pytest-dev/pytest/issues/7469 def skip_if_no(package: str, min_version: str | None = None): """ @@ -202,7 +202,7 @@ def skip_if_no(package: str, min_version: str | None = None): ) -# TODO: return type, _pytest.mark.structures.MarkDecorator is not public +# TODO(pytest#7469): return type, _pytest.mark.structures.MarkDecorator is not public # https://github.com/pytest-dev/pytest/issues/7469 def skip_if_np_lt(ver_str: str, *args, reason: str | None = None): if reason is None:
- [ ] closes #xxxx - [ ] tests added / passed - [ ] Ensure all linting tests pass, see [here](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit) for how to run them - [ ] whatsnew entry
https://api.github.com/repos/pandas-dev/pandas/pulls/44455
2021-11-14T23:02:00Z
2021-11-16T00:02:44Z
2021-11-16T00:02:44Z
2021-11-16T00:22:03Z
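The timedelta tests in the diff above pass regex-escaped error messages (e.g. `r"unsupported operand type\(s\) for \+: ..."`) to `pytest.raises(match=...)`. A minimal sketch of why the escaping is necessary, using only the stdlib `re` module (the message string is the one asserted in the diff; no pandas objects are needed to show the point):

```python
import re

# pytest.raises(match=...) treats its argument as a regular expression
# searched against str(exc). Python's operand-type error messages contain
# regex metacharacters ('(', ')', '+'), so they must be escaped.
raw_msg = "unsupported operand type(s) for +: 'Timedelta' and 'int'"

# Escaped, the pattern matches the literal message.
assert re.search(re.escape(raw_msg), raw_msg) is not None

# Unescaped, '(s)' becomes a capture group and ' +' means "one or more
# spaces", so the pattern no longer matches the literal text.
assert re.search(raw_msg, raw_msg) is None
```

This is why the diff joins several alternative messages with `"|".join(...)`: the result is itself a valid regex alternation.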
DOC: substitute multiple entries for `subset` in `style.py`
diff --git a/pandas/io/formats/style.py b/pandas/io/formats/style.py index 40803ff14e357..b1b31fef61b91 100644 --- a/pandas/io/formats/style.py +++ b/pandas/io/formats/style.py @@ -27,7 +27,10 @@ Scalar, ) from pandas.compat._optional import import_optional_dependency -from pandas.util._decorators import doc +from pandas.util._decorators import ( + Substitution, + doc, +) from pandas.util._exceptions import find_stack_level import pandas as pd @@ -78,6 +81,26 @@ def _mpl(func: Callable): raise ImportError(no_mpl_message.format(func.__name__)) +#### +# Shared Doc Strings + +subset = """ + subset : label, array-like, IndexSlice, optional + A valid 2d input to `DataFrame.loc[<subset>]`, or, in the case of a 1d input + or single key, to `DataFrame.loc[:, <subset>]` where the columns are + prioritised, to limit ``data`` to *before* applying the function. +""" + +props = """ + props : str, default None + CSS properties to use for highlighting. If ``props`` is given, ``color`` + is not used. +""" + +# +### + + class Styler(StylerRenderer): r""" Helps style a DataFrame or Series according to the data with HTML and CSS. @@ -1302,6 +1325,7 @@ def _apply( self._update_ctx(result) return self + @Substitution(subset=subset) def apply( self, func: Callable, @@ -1332,10 +1356,7 @@ def apply( Apply to each column (``axis=0`` or ``'index'``), to each row (``axis=1`` or ``'columns'``), or to the entire DataFrame at once with ``axis=None``. - subset : label, array-like, IndexSlice, optional - A valid 2d input to `DataFrame.loc[<subset>]`, or, in the case of a 1d input - or single key, to `DataFrame.loc[:, <subset>]` where the columns are - prioritised, to limit ``data`` to *before* applying the function. + %(subset)s **kwargs : dict Pass along to ``func``. 
@@ -1545,6 +1566,7 @@ def _applymap( self._update_ctx(result) return self + @Substitution(subset=subset) def applymap( self, func: Callable, subset: Subset | None = None, **kwargs ) -> Styler: @@ -1557,10 +1579,7 @@ def applymap( ---------- func : function ``func`` should take a scalar and return a string. - subset : label, array-like, IndexSlice, optional - A valid 2d input to `DataFrame.loc[<subset>]`, or, in the case of a 1d input - or single key, to `DataFrame.loc[:, <subset>]` where the columns are - prioritised, to limit ``data`` to *before* applying the function. + %(subset)s **kwargs : dict Pass along to ``func``. @@ -1609,6 +1628,7 @@ def applymap( ) return self + @Substitution(subset=subset) def where( self, cond: Callable, @@ -1634,10 +1654,7 @@ def where( Applied when ``cond`` returns true. other : str Applied when ``cond`` returns false. - subset : label, array-like, IndexSlice, optional - A valid 2d input to `DataFrame.loc[<subset>]`, or, in the case of a 1d input - or single key, to `DataFrame.loc[:, <subset>]` where the columns are - prioritised, to limit ``data`` to *before* applying the function. + %(subset)s **kwargs : dict Pass along to ``cond``. @@ -2527,6 +2544,7 @@ def hide( axis="{0 or 'index', 1 or 'columns', None}", text_threshold="", ) + @Substitution(subset=subset) def background_gradient( self, cmap="PuBu", @@ -2562,10 +2580,7 @@ def background_gradient( Apply to each column (``axis=0`` or ``'index'``), to each row (``axis=1`` or ``'columns'``), or to the entire DataFrame at once with ``axis=None``. - subset : label, array-like, IndexSlice, optional - A valid 2d input to `DataFrame.loc[<subset>]`, or, in the case of a 1d input - or single key, to `DataFrame.loc[:, <subset>]` where the columns are - prioritised, to limit ``data`` to *before* applying the function. + %(subset)s text_color_threshold : float or int {text_threshold} Luminance threshold for determining text color in [0, 1]. 
Facilitates text @@ -2717,6 +2732,7 @@ def text_gradient( text_only=True, ) + @Substitution(subset=subset) def set_properties(self, subset: Subset | None = None, **kwargs) -> Styler: """ Set defined CSS-properties to each ``<td>`` HTML element within the given @@ -2724,10 +2740,7 @@ def set_properties(self, subset: Subset | None = None, **kwargs) -> Styler: Parameters ---------- - subset : label, array-like, IndexSlice, optional - A valid 2d input to `DataFrame.loc[<subset>]`, or, in the case of a 1d input - or single key, to `DataFrame.loc[:, <subset>]` where the columns are - prioritised, to limit ``data`` to *before* applying the function. + %(subset)s **kwargs : dict A dictionary of property, value pairs to be set for each cell. @@ -2752,6 +2765,7 @@ def set_properties(self, subset: Subset | None = None, **kwargs) -> Styler: values = "".join([f"{p}: {v};" for p, v in kwargs.items()]) return self.applymap(lambda x: values, subset=subset) + @Substitution(subset=subset) def bar( self, subset: Subset | None = None, @@ -2773,10 +2787,7 @@ def bar( Parameters ---------- - subset : label, array-like, IndexSlice, optional - A valid 2d input to `DataFrame.loc[<subset>]`, or, in the case of a 1d input - or single key, to `DataFrame.loc[:, <subset>]` where the columns are - prioritised, to limit ``data`` to *before* applying the function. 
+ %(subset)s axis : {0 or 'index', 1 or 'columns', None}, default 0 Apply to each column (``axis=0`` or ``'index'``), to each row (``axis=1`` or ``'columns'``), or to the entire DataFrame at once @@ -2877,6 +2888,7 @@ def bar( return self + @Substitution(subset=subset, props=props) def highlight_null( self, null_color: str = "red", @@ -2889,17 +2901,9 @@ def highlight_null( Parameters ---------- null_color : str, default 'red' - subset : label, array-like, IndexSlice, optional - A valid 2d input to `DataFrame.loc[<subset>]`, or, in the case of a 1d input - or single key, to `DataFrame.loc[:, <subset>]` where the columns are - prioritised, to limit ``data`` to *before* applying the function. - + %(subset)s .. versionadded:: 1.1.0 - - props : str, default None - CSS properties to use for highlighting. If ``props`` is given, ``color`` - is not used. - + %(props)s .. versionadded:: 1.3.0 Returns @@ -2921,6 +2925,7 @@ def f(data: DataFrame, props: str) -> np.ndarray: props = f"background-color: {null_color};" return self.apply(f, axis=None, subset=subset, props=props) + @Substitution(subset=subset, props=props) def highlight_max( self, subset: Subset | None = None, @@ -2933,20 +2938,14 @@ def highlight_max( Parameters ---------- - subset : label, array-like, IndexSlice, optional - A valid 2d input to `DataFrame.loc[<subset>]`, or, in the case of a 1d input - or single key, to `DataFrame.loc[:, <subset>]` where the columns are - prioritised, to limit ``data`` to *before* applying the function. + %(subset)s color : str, default 'yellow' Background color to use for highlighting. axis : {0 or 'index', 1 or 'columns', None}, default 0 Apply to each column (``axis=0`` or ``'index'``), to each row (``axis=1`` or ``'columns'``), or to the entire DataFrame at once with ``axis=None``. - props : str, default None - CSS properties to use for highlighting. If ``props`` is given, ``color`` - is not used. - + %(props)s .. 
versionadded:: 1.3.0 Returns @@ -2970,6 +2969,7 @@ def highlight_max( props=props, ) + @Substitution(subset=subset, props=props) def highlight_min( self, subset: Subset | None = None, @@ -2982,20 +2982,14 @@ def highlight_min( Parameters ---------- - subset : label, array-like, IndexSlice, optional - A valid 2d input to `DataFrame.loc[<subset>]`, or, in the case of a 1d input - or single key, to `DataFrame.loc[:, <subset>]` where the columns are - prioritised, to limit ``data`` to *before* applying the function. + %(subset)s color : str, default 'yellow' Background color to use for highlighting. axis : {0 or 'index', 1 or 'columns', None}, default 0 Apply to each column (``axis=0`` or ``'index'``), to each row (``axis=1`` or ``'columns'``), or to the entire DataFrame at once with ``axis=None``. - props : str, default None - CSS properties to use for highlighting. If ``props`` is given, ``color`` - is not used. - + %(props)s .. versionadded:: 1.3.0 Returns @@ -3019,6 +3013,7 @@ def highlight_min( props=props, ) + @Substitution(subset=subset, props=props) def highlight_between( self, subset: Subset | None = None, @@ -3036,10 +3031,7 @@ def highlight_between( Parameters ---------- - subset : label, array-like, IndexSlice, optional - A valid 2d input to `DataFrame.loc[<subset>]`, or, in the case of a 1d input - or single key, to `DataFrame.loc[:, <subset>]` where the columns are - prioritised, to limit ``data`` to *before* applying the function. + %(subset)s color : str, default 'yellow' Background color to use for highlighting. axis : {0 or 'index', 1 or 'columns', None}, default 0 @@ -3051,10 +3043,7 @@ def highlight_between( Right bound for defining the range. inclusive : {'both', 'neither', 'left', 'right'} Identify whether bounds are closed or open. - props : str, default None - CSS properties to use for highlighting. If ``props`` is given, ``color`` - is not used. 
- + %(props)s Returns ------- self : Styler @@ -3128,6 +3117,7 @@ def highlight_between( inclusive=inclusive, ) + @Substitution(subset=subset, props=props) def highlight_quantile( self, subset: Subset | None = None, @@ -3146,10 +3136,7 @@ def highlight_quantile( Parameters ---------- - subset : label, array-like, IndexSlice, optional - A valid 2d input to `DataFrame.loc[<subset>]`, or, in the case of a 1d input - or single key, to `DataFrame.loc[:, <subset>]` where the columns are - prioritised, to limit ``data`` to *before* applying the function. + %(subset)s color : str, default 'yellow' Background color to use for highlighting axis : {0 or 'index', 1 or 'columns', None}, default 0 @@ -3164,10 +3151,7 @@ def highlight_quantile( quantile estimation. inclusive : {'both', 'neither', 'left', 'right'} Identify whether quantile bounds are closed or open. - props : str, default None - CSS properties to use for highlighting. If ``props`` is given, ``color`` - is not used. - + %(props)s Returns ------- self : Styler
as described
https://api.github.com/repos/pandas-dev/pandas/pulls/44454
2021-11-14T19:12:29Z
2021-11-20T21:21:37Z
2021-11-20T21:21:37Z
2021-11-21T15:01:36Z
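The `style.py` diff above replaces repeated `subset`/`props` docstring blocks with `@Substitution(subset=subset)` and `%(subset)s` placeholders. A minimal sketch of how such a decorator can work; this is an illustrative reimplementation, not the actual `pandas.util._decorators.Substitution` code, and `apply`/`subset_doc` here are stand-in names:

```python
# Illustrative docstring-substitution decorator: stores keyword arguments
# and %-formats them into the wrapped function's docstring.
class Substitution:
    def __init__(self, **kwargs):
        self.params = kwargs

    def __call__(self, func):
        if func.__doc__:
            func.__doc__ = func.__doc__ % self.params
        return func


subset_doc = "subset : label, array-like, IndexSlice, optional"


@Substitution(subset=subset_doc)
def apply(func, subset=None):
    """Apply a function to the data.

    Parameters
    ----------
    %(subset)s
    """


# The placeholder is replaced by the shared doc fragment at import time.
assert "%(subset)s" not in apply.__doc__
assert "IndexSlice" in apply.__doc__
```

One shared fragment then documents `subset` identically across `apply`, `applymap`, `where`, `bar`, and the `highlight_*` methods, which is exactly the duplication the PR removes.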
Revert "CI: xfail tests failing on numpy dev"
diff --git a/pandas/tests/window/test_rolling.py b/pandas/tests/window/test_rolling.py index a638ecbe04936..b60f2e60e1035 100644 --- a/pandas/tests/window/test_rolling.py +++ b/pandas/tests/window/test_rolling.py @@ -1634,7 +1634,6 @@ def test_rolling_quantile_np_percentile(): tm.assert_almost_equal(df_quantile.values, np.array(np_percentile)) -@pytest.mark.xfail(reason="GH#44343", strict=False) @pytest.mark.parametrize("quantile", [0.0, 0.1, 0.45, 0.5, 1]) @pytest.mark.parametrize( "interpolation", ["linear", "lower", "higher", "nearest", "midpoint"]
Reverts pandas-dev/pandas#44362; closes #44343
https://api.github.com/repos/pandas-dev/pandas/pulls/44452
2021-11-14T18:51:57Z
2021-11-15T12:12:19Z
2021-11-15T12:12:19Z
2021-11-15T12:38:02Z
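The reverted marker above had guarded `test_rolling_quantile_np_percentile`, which compares `Rolling.quantile` against `np.percentile` across interpolation modes. A direct NumPy check of two of those modes (assuming NumPy >= 1.22, where the keyword is named `method` rather than the older `interpolation`):

```python
import numpy as np

# For [1, 2, 3, 4] the 50th percentile sits between 2 and 3:
# "linear" interpolates to 2.5, "lower" takes the lower neighbor.
values = np.array([1.0, 2.0, 3.0, 4.0])

assert np.percentile(values, 50, method="linear") == 2.5
assert np.percentile(values, 50, method="lower") == 2.0
```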
TST: check_stacklevel=True in various tests
diff --git a/pandas/tests/apply/test_frame_apply.py b/pandas/tests/apply/test_frame_apply.py index f8c945bb496a8..6cfcfa778b105 100644 --- a/pandas/tests/apply/test_frame_apply.py +++ b/pandas/tests/apply/test_frame_apply.py @@ -1198,9 +1198,7 @@ def test_nuiscance_columns(): ) tm.assert_frame_equal(result, expected) - with tm.assert_produces_warning( - FutureWarning, match="Select only valid", check_stacklevel=False - ): + with tm.assert_produces_warning(FutureWarning, match="Select only valid"): result = df.agg("sum") expected = Series([6, 6.0, "foobarbaz"], index=["A", "B", "C"]) tm.assert_series_equal(result, expected) diff --git a/pandas/tests/arrays/test_datetimelike.py b/pandas/tests/arrays/test_datetimelike.py index 13fe3c2d427c5..3f1739deb4175 100644 --- a/pandas/tests/arrays/test_datetimelike.py +++ b/pandas/tests/arrays/test_datetimelike.py @@ -783,9 +783,7 @@ def test_to_perioddelta(self, datetime_index, freqstr): with tm.assert_produces_warning(FutureWarning, match=msg): # Deprecation GH#34853 expected = dti.to_perioddelta(freq=freqstr) - with tm.assert_produces_warning( - FutureWarning, match=msg, check_stacklevel=False - ): + with tm.assert_produces_warning(FutureWarning, match=msg): # stacklevel is chosen to be "correct" for DatetimeIndex, not # DatetimeArray result = arr.to_perioddelta(freq=freqstr) diff --git a/pandas/tests/dtypes/cast/test_promote.py b/pandas/tests/dtypes/cast/test_promote.py index e8a3b5d28ee63..a514a9ce9b0e4 100644 --- a/pandas/tests/dtypes/cast/test_promote.py +++ b/pandas/tests/dtypes/cast/test_promote.py @@ -411,7 +411,7 @@ def test_maybe_promote_any_with_datetime64( # Casting date to dt64 is deprecated warn = FutureWarning - with tm.assert_produces_warning(warn, match=msg, check_stacklevel=False): + with tm.assert_produces_warning(warn, match=msg): # stacklevel is chosen to make sense when called from higher-level functions _check_promote(dtype, fill_value, expected_dtype, exp_val_for_scalar) diff --git 
a/pandas/tests/frame/indexing/test_setitem.py b/pandas/tests/frame/indexing/test_setitem.py index bb1a1bc72116d..15e62e27c08d5 100644 --- a/pandas/tests/frame/indexing/test_setitem.py +++ b/pandas/tests/frame/indexing/test_setitem.py @@ -949,7 +949,7 @@ def test_setitem_mask_categorical(self): df = DataFrame({"cats": catsf, "values": valuesf}, index=idxf) exp_fancy = exp_multi_row.copy() - with tm.assert_produces_warning(FutureWarning, check_stacklevel=False): + with tm.assert_produces_warning(FutureWarning): # issue #37643 inplace kwarg deprecated return_value = exp_fancy["cats"].cat.set_categories( ["a", "b", "c"], inplace=True diff --git a/pandas/tests/frame/methods/test_join.py b/pandas/tests/frame/methods/test_join.py index 30118d20f67a9..c6bfd94b84908 100644 --- a/pandas/tests/frame/methods/test_join.py +++ b/pandas/tests/frame/methods/test_join.py @@ -344,9 +344,7 @@ def test_merge_join_different_levels(self): columns = ["a", "b", ("a", ""), ("c", "c1")] expected = DataFrame(columns=columns, data=[[1, 11, 0, 44], [0, 22, 1, 33]]) msg = "merging between different levels is deprecated" - with tm.assert_produces_warning( - FutureWarning, match=msg, check_stacklevel=False - ): + with tm.assert_produces_warning(FutureWarning, match=msg): # stacklevel is chosen to be correct for pd.merge, not DataFrame.join result = df1.join(df2, on="a") tm.assert_frame_equal(result, expected) diff --git a/pandas/tests/frame/test_constructors.py b/pandas/tests/frame/test_constructors.py index 52797862afa14..359f166b9855e 100644 --- a/pandas/tests/frame/test_constructors.py +++ b/pandas/tests/frame/test_constructors.py @@ -2664,15 +2664,15 @@ def test_constructor_data_aware_dtype_naive(self, tz_aware_fixture, pydt): expected = DataFrame({0: [ts_naive]}) tm.assert_frame_equal(result, expected) - with tm.assert_produces_warning(FutureWarning, check_stacklevel=False): + with tm.assert_produces_warning(FutureWarning): result = DataFrame({0: ts}, index=[0], dtype="datetime64[ns]") 
tm.assert_frame_equal(result, expected) - with tm.assert_produces_warning(FutureWarning, check_stacklevel=False): + with tm.assert_produces_warning(FutureWarning): result = DataFrame([ts], dtype="datetime64[ns]") tm.assert_frame_equal(result, expected) - with tm.assert_produces_warning(FutureWarning, check_stacklevel=False): + with tm.assert_produces_warning(FutureWarning): result = DataFrame(np.array([ts], dtype=object), dtype="datetime64[ns]") tm.assert_frame_equal(result, expected) @@ -2680,11 +2680,11 @@ def test_constructor_data_aware_dtype_naive(self, tz_aware_fixture, pydt): result = DataFrame(ts, index=[0], columns=[0], dtype="datetime64[ns]") tm.assert_frame_equal(result, expected) - with tm.assert_produces_warning(FutureWarning, check_stacklevel=False): + with tm.assert_produces_warning(FutureWarning): df = DataFrame([Series([ts])], dtype="datetime64[ns]") tm.assert_frame_equal(result, expected) - with tm.assert_produces_warning(FutureWarning, check_stacklevel=False): + with tm.assert_produces_warning(FutureWarning): df = DataFrame([[ts]], columns=[0], dtype="datetime64[ns]") tm.assert_equal(df, expected) @@ -2946,9 +2946,7 @@ def test_tzaware_data_tznaive_dtype(self, constructor): ts = Timestamp("2019", tz=tz) ts_naive = Timestamp("2019") - with tm.assert_produces_warning( - FutureWarning, match="Data is timezone-aware", check_stacklevel=False - ): + with tm.assert_produces_warning(FutureWarning, match="Data is timezone-aware"): result = constructor(ts, dtype="M8[ns]") assert np.all(result.dtypes == "M8[ns]") diff --git a/pandas/tests/groupby/aggregate/test_cython.py b/pandas/tests/groupby/aggregate/test_cython.py index 694f843ec138f..d9372ba5cbb50 100644 --- a/pandas/tests/groupby/aggregate/test_cython.py +++ b/pandas/tests/groupby/aggregate/test_cython.py @@ -97,7 +97,7 @@ def test_cython_agg_nothing_to_agg(): frame = DataFrame({"a": np.random.randint(0, 5, 50), "b": ["foo", "bar"] * 25}) - with tm.assert_produces_warning(FutureWarning, 
check_stacklevel=False): + with tm.assert_produces_warning(FutureWarning): result = frame[["b"]].groupby(frame["a"]).mean() expected = DataFrame([], index=frame["a"].sort_values().drop_duplicates()) tm.assert_frame_equal(result, expected) diff --git a/pandas/tests/groupby/test_function.py b/pandas/tests/groupby/test_function.py index e5870a206f419..c462db526b36d 100644 --- a/pandas/tests/groupby/test_function.py +++ b/pandas/tests/groupby/test_function.py @@ -154,9 +154,7 @@ def test_averages(self, df, method): ], ) - with tm.assert_produces_warning( - FutureWarning, match="Dropping invalid", check_stacklevel=False - ): + with tm.assert_produces_warning(FutureWarning, match="Dropping invalid"): result = getattr(gb, method)(numeric_only=False) tm.assert_frame_equal(result.reindex_like(expected), expected) diff --git a/pandas/tests/groupby/test_groupby.py b/pandas/tests/groupby/test_groupby.py index f632da9616124..aa547c1ec5b6c 100644 --- a/pandas/tests/groupby/test_groupby.py +++ b/pandas/tests/groupby/test_groupby.py @@ -904,9 +904,7 @@ def test_omit_nuisance_agg(df, agg_function): def test_omit_nuisance_warnings(df): # GH 38815 - with tm.assert_produces_warning( - FutureWarning, filter_level="always", check_stacklevel=False - ): + with tm.assert_produces_warning(FutureWarning, filter_level="always"): grouped = df.groupby("A") result = grouped.skew() expected = df.loc[:, ["A", "C", "D"]].groupby("A").skew() diff --git a/pandas/tests/indexes/datetimes/test_indexing.py b/pandas/tests/indexes/datetimes/test_indexing.py index beca71969dfcd..9db6567ca1b56 100644 --- a/pandas/tests/indexes/datetimes/test_indexing.py +++ b/pandas/tests/indexes/datetimes/test_indexing.py @@ -516,9 +516,7 @@ def test_get_loc_tz_aware(self): freq="5s", ) key = Timestamp("2019-12-12 10:19:25", tz="US/Eastern") - with tm.assert_produces_warning( - FutureWarning, match="deprecated", check_stacklevel=False - ): + with tm.assert_produces_warning(FutureWarning, match="deprecated"): result = 
dti.get_loc(key, method="nearest") assert result == 7433 diff --git a/pandas/tests/indexes/datetimes/test_misc.py b/pandas/tests/indexes/datetimes/test_misc.py index 44c353315562a..76b5b835754aa 100644 --- a/pandas/tests/indexes/datetimes/test_misc.py +++ b/pandas/tests/indexes/datetimes/test_misc.py @@ -142,9 +142,7 @@ def test_datetimeindex_accessors4(self): assert dti.is_month_start[0] == 1 def test_datetimeindex_accessors5(self): - with tm.assert_produces_warning( - FutureWarning, match="The 'freq' argument", check_stacklevel=False - ): + with tm.assert_produces_warning(FutureWarning, match="The 'freq' argument"): tests = [ (Timestamp("2013-06-01", freq="M").is_month_start, 1), (Timestamp("2013-06-01", freq="BM").is_month_start, 0), diff --git a/pandas/tests/indexes/datetimes/test_scalar_compat.py b/pandas/tests/indexes/datetimes/test_scalar_compat.py index fda67e7c0a058..c60e56875bfcd 100644 --- a/pandas/tests/indexes/datetimes/test_scalar_compat.py +++ b/pandas/tests/indexes/datetimes/test_scalar_compat.py @@ -65,9 +65,7 @@ def test_dti_timestamp_fields(self, field): expected = getattr(idx, field)[-1] warn = FutureWarning if field.startswith("is_") else None - with tm.assert_produces_warning( - warn, match="Timestamp.freq is deprecated", check_stacklevel=False - ): + with tm.assert_produces_warning(warn, match="Timestamp.freq is deprecated"): result = getattr(Timestamp(idx[-1]), field) assert result == expected diff --git a/pandas/tests/indexing/test_loc.py b/pandas/tests/indexing/test_loc.py index ed9b5cc0850b9..63d1568ed4d43 100644 --- a/pandas/tests/indexing/test_loc.py +++ b/pandas/tests/indexing/test_loc.py @@ -2630,7 +2630,7 @@ def test_loc_slice_disallows_positional(): with pytest.raises(TypeError, match=msg): df.loc[1:3, 1] - with tm.assert_produces_warning(FutureWarning, check_stacklevel=False): + with tm.assert_produces_warning(FutureWarning): # GH#31840 deprecated incorrect behavior df.loc[1:3, 1] = 2 diff --git 
a/pandas/tests/io/formats/style/test_style.py b/pandas/tests/io/formats/style/test_style.py index cccfd87f6312b..85519398b4444 100644 --- a/pandas/tests/io/formats/style/test_style.py +++ b/pandas/tests/io/formats/style/test_style.py @@ -749,7 +749,7 @@ def test_applymap_subset_multiindex(self, slice_): col = MultiIndex.from_product([["x", "y"], ["A", "B"]]) df = DataFrame(np.random.rand(4, 4), columns=col, index=idx) - with tm.assert_produces_warning(warn, match=msg, check_stacklevel=False): + with tm.assert_produces_warning(warn, match=msg): df.style.applymap(lambda x: "color: red;", subset=slice_).to_html() def test_applymap_subset_multiindex_code(self): diff --git a/pandas/tests/resample/test_deprecated.py b/pandas/tests/resample/test_deprecated.py index 359c3cea62f9c..3aac7a961fa19 100644 --- a/pandas/tests/resample/test_deprecated.py +++ b/pandas/tests/resample/test_deprecated.py @@ -63,12 +63,12 @@ def test_deprecating_on_loffset_and_base(): # not checking the stacklevel for .groupby().resample() because it's complicated to # reconcile it with the stacklevel for Series.resample() and DataFrame.resample(); # see GH #37603 - with tm.assert_produces_warning(FutureWarning, check_stacklevel=False): + with tm.assert_produces_warning(FutureWarning): df.groupby("a").resample("3T", base=0).sum() - with tm.assert_produces_warning(FutureWarning, check_stacklevel=False): + with tm.assert_produces_warning(FutureWarning): df.groupby("a").resample("3T", loffset="0s").sum() msg = "'offset' and 'base' cannot be present at the same time" - with tm.assert_produces_warning(FutureWarning, check_stacklevel=False): + with tm.assert_produces_warning(FutureWarning): with pytest.raises(ValueError, match=msg): df.groupby("a").resample("3T", base=0, offset=0).sum() diff --git a/pandas/tests/reshape/merge/test_join.py b/pandas/tests/reshape/merge/test_join.py index 48a55022aa484..6533cbb1f70cd 100644 --- a/pandas/tests/reshape/merge/test_join.py +++ 
b/pandas/tests/reshape/merge/test_join.py @@ -630,7 +630,7 @@ def test_join_dups(self): dta = x.merge(y, left_index=True, right_index=True).merge( z, left_index=True, right_index=True, how="outer" ) - with tm.assert_produces_warning(FutureWarning, check_stacklevel=False): + with tm.assert_produces_warning(FutureWarning): dta = dta.merge(w, left_index=True, right_index=True) expected = concat([x, y, z, w], axis=1) expected.columns = ["x_x", "y_x", "x_y", "y_y", "x_x", "y_x", "x_y", "y_y"] diff --git a/pandas/tests/series/accessors/test_cat_accessor.py b/pandas/tests/series/accessors/test_cat_accessor.py index 289e4cfe9397d..fb2071ac9c3f6 100644 --- a/pandas/tests/series/accessors/test_cat_accessor.py +++ b/pandas/tests/series/accessors/test_cat_accessor.py @@ -51,7 +51,7 @@ def test_cat_accessor(self): exp = Categorical(["a", "b", np.nan, "a"], categories=["b", "a"]) - with tm.assert_produces_warning(FutureWarning, check_stacklevel=False): + with tm.assert_produces_warning(FutureWarning): # issue #37643 inplace kwarg deprecated return_value = ser.cat.set_categories(["b", "a"], inplace=True) @@ -88,7 +88,7 @@ def test_cat_accessor_updates_on_inplace(self): return_value = ser.drop(0, inplace=True) assert return_value is None - with tm.assert_produces_warning(FutureWarning, check_stacklevel=False): + with tm.assert_produces_warning(FutureWarning): return_value = ser.cat.remove_unused_categories(inplace=True) assert return_value is None diff --git a/pandas/tests/series/test_arithmetic.py b/pandas/tests/series/test_arithmetic.py index ed83377f31317..8d18eba36af5e 100644 --- a/pandas/tests/series/test_arithmetic.py +++ b/pandas/tests/series/test_arithmetic.py @@ -790,7 +790,7 @@ def test_series_ops_name_retention( # GH#37374 logical ops behaving as set ops deprecated warn = FutureWarning if is_rlogical and box is Index else None msg = "operating as a set operation is deprecated" - with tm.assert_produces_warning(warn, match=msg, check_stacklevel=False): + with 
tm.assert_produces_warning(warn, match=msg): # stacklevel is correct for Index op, not reversed op result = op(left, right) diff --git a/pandas/tests/series/test_constructors.py b/pandas/tests/series/test_constructors.py index 1b488b4cf0b77..692c040a33ff8 100644 --- a/pandas/tests/series/test_constructors.py +++ b/pandas/tests/series/test_constructors.py @@ -1645,12 +1645,12 @@ def test_constructor_data_aware_dtype_naive(self, tz_aware_fixture, pydt): ts = ts.to_pydatetime() ts_naive = Timestamp("2019") - with tm.assert_produces_warning(FutureWarning, check_stacklevel=False): + with tm.assert_produces_warning(FutureWarning): result = Series([ts], dtype="datetime64[ns]") expected = Series([ts_naive]) tm.assert_series_equal(result, expected) - with tm.assert_produces_warning(FutureWarning, check_stacklevel=False): + with tm.assert_produces_warning(FutureWarning): result = Series(np.array([ts], dtype=object), dtype="datetime64[ns]") tm.assert_series_equal(result, expected) diff --git a/pandas/tests/series/test_logical_ops.py b/pandas/tests/series/test_logical_ops.py index 563c8f63df57d..9648b01492e02 100644 --- a/pandas/tests/series/test_logical_ops.py +++ b/pandas/tests/series/test_logical_ops.py @@ -279,9 +279,7 @@ def test_reversed_xor_with_index_returns_index(self): tm.assert_index_equal(result, expected) expected = Index.symmetric_difference(idx2, ser) - with tm.assert_produces_warning( - FutureWarning, match=msg, check_stacklevel=False - ): + with tm.assert_produces_warning(FutureWarning, match=msg): result = idx2 ^ ser tm.assert_index_equal(result, expected) @@ -337,9 +335,7 @@ def test_reverse_ops_with_index(self, op, expected): idx = Index([False, True]) msg = "operating as a set operation" - with tm.assert_produces_warning( - FutureWarning, match=msg, check_stacklevel=False - ): + with tm.assert_produces_warning(FutureWarning, match=msg): # behaving as set ops is deprecated, will become logical ops result = op(ser, idx) tm.assert_index_equal(result, 
expected) diff --git a/pandas/tests/test_expressions.py b/pandas/tests/test_expressions.py index 1348e62148cb1..c6666a8c91e8d 100644 --- a/pandas/tests/test_expressions.py +++ b/pandas/tests/test_expressions.py @@ -287,32 +287,32 @@ def test_bool_ops_warn_on_arithmetic(self, op_str, opname): return with tm.use_numexpr(True, min_elements=5): - with tm.assert_produces_warning(check_stacklevel=False): + with tm.assert_produces_warning(): r = f(df, df) e = fe(df, df) tm.assert_frame_equal(r, e) - with tm.assert_produces_warning(check_stacklevel=False): + with tm.assert_produces_warning(): r = f(df.a, df.b) e = fe(df.a, df.b) tm.assert_series_equal(r, e) - with tm.assert_produces_warning(check_stacklevel=False): + with tm.assert_produces_warning(): r = f(df.a, True) e = fe(df.a, True) tm.assert_series_equal(r, e) - with tm.assert_produces_warning(check_stacklevel=False): + with tm.assert_produces_warning(): r = f(False, df.a) e = fe(False, df.a) tm.assert_series_equal(r, e) - with tm.assert_produces_warning(check_stacklevel=False): + with tm.assert_produces_warning(): r = f(False, df) e = fe(False, df) tm.assert_frame_equal(r, e) - with tm.assert_produces_warning(check_stacklevel=False): + with tm.assert_produces_warning(): r = f(df, True) e = fe(df, True) tm.assert_frame_equal(r, e) diff --git a/pandas/tests/window/test_rolling.py b/pandas/tests/window/test_rolling.py index 27b06e78d8ce2..a638ecbe04936 100644 --- a/pandas/tests/window/test_rolling.py +++ b/pandas/tests/window/test_rolling.py @@ -697,7 +697,7 @@ def test_rolling_count_default_min_periods_with_null_values(frame_or_series): expected_counts = [1.0, 2.0, 3.0, 2.0, 2.0, 2.0, 3.0] # GH 31302 - with tm.assert_produces_warning(FutureWarning, check_stacklevel=False): + with tm.assert_produces_warning(FutureWarning): result = frame_or_series(values).rolling(3).count() expected = frame_or_series(expected_counts) tm.assert_equal(result, expected)
- [x] Ensure all linting tests pass, see [here](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit) for how to run them
https://api.github.com/repos/pandas-dev/pandas/pulls/44449
2021-11-14T15:13:14Z
2021-11-14T16:32:38Z
2021-11-14T16:32:38Z
2021-12-12T16:03:46Z
CI/TST Removed Timestamp.now() from tests. #44341
diff --git a/pandas/tests/arrays/test_datetimelike.py b/pandas/tests/arrays/test_datetimelike.py index 3f1739deb4175..5aa20bedc4a48 100644 --- a/pandas/tests/arrays/test_datetimelike.py +++ b/pandas/tests/arrays/test_datetimelike.py @@ -168,7 +168,7 @@ def test_take(self): tm.assert_index_equal(self.index_cls(result), expected) - @pytest.mark.parametrize("fill_value", [2, 2.0, Timestamp.now().time]) + @pytest.mark.parametrize("fill_value", [2, 2.0, Timestamp(2021, 1, 1, 12).time]) def test_take_fill_raises(self, fill_value): data = np.arange(10, dtype="i8") * 24 * 3600 * 10 ** 9 @@ -842,26 +842,27 @@ def test_int_properties(self, arr1d, propname): def test_take_fill_valid(self, arr1d): arr = arr1d dti = self.index_cls(arr1d) + dt_ind = Timestamp(2021, 1, 1, 12) + dt_ind_tz = dt_ind.tz_localize(dti.tz) - now = Timestamp.now().tz_localize(dti.tz) - result = arr.take([-1, 1], allow_fill=True, fill_value=now) - assert result[0] == now + result = arr.take([-1, 1], allow_fill=True, fill_value=dt_ind_tz) + assert result[0] == dt_ind_tz msg = f"value should be a '{arr1d._scalar_type.__name__}' or 'NaT'. Got" with pytest.raises(TypeError, match=msg): # fill_value Timedelta invalid - arr.take([-1, 1], allow_fill=True, fill_value=now - now) + arr.take([-1, 1], allow_fill=True, fill_value=dt_ind_tz - dt_ind_tz) with pytest.raises(TypeError, match=msg): # fill_value Period invalid arr.take([-1, 1], allow_fill=True, fill_value=Period("2014Q1")) tz = None if dti.tz is not None else "US/Eastern" - now = Timestamp.now().tz_localize(tz) + dt_ind_tz = dt_ind.tz_localize(tz) msg = "Cannot compare tz-naive and tz-aware datetime-like objects" with pytest.raises(TypeError, match=msg): # Timestamp with mismatched tz-awareness - arr.take([-1, 1], allow_fill=True, fill_value=now) + arr.take([-1, 1], allow_fill=True, fill_value=dt_ind_tz) value = NaT.value msg = f"value should be a '{arr1d._scalar_type.__name__}' or 'NaT'. 
Got" @@ -877,7 +878,7 @@ def test_take_fill_valid(self, arr1d): if arr.tz is not None: # GH#37356 # Assuming here that arr1d fixture does not include Australia/Melbourne - value = Timestamp.now().tz_localize("Australia/Melbourne") + value = dt_ind.tz_localize("Australia/Melbourne") msg = "Timezones don't match. .* != 'Australia/Melbourne'" with pytest.raises(ValueError, match=msg): # require tz match, not just tzawareness match @@ -1039,14 +1040,14 @@ def test_take_fill_valid(self, timedelta_index): result = arr.take([-1, 1], allow_fill=True, fill_value=td1) assert result[0] == td1 - now = Timestamp.now() - value = now + dt_ind = Timestamp(2021, 1, 1, 12) + value = dt_ind msg = f"value should be a '{arr._scalar_type.__name__}' or 'NaT'. Got" with pytest.raises(TypeError, match=msg): # fill_value Timestamp invalid arr.take([0, 1], allow_fill=True, fill_value=value) - value = now.to_period("D") + value = dt_ind.to_period("D") with pytest.raises(TypeError, match=msg): # fill_value Period invalid arr.take([0, 1], allow_fill=True, fill_value=value)
Removed Timestamp.now() from tests (#44341). Replaced Timestamp.now() with pd.Timestamp(2021, 1, 1, 12)
https://api.github.com/repos/pandas-dev/pandas/pulls/44448
2021-11-14T14:30:19Z
2021-11-16T00:02:18Z
2021-11-16T00:02:18Z
2021-11-17T17:06:45Z
DOC: Remove use of as_type for PeriodIndex to DatetimeIndex conversion
diff --git a/doc/source/user_guide/timeseries.rst b/doc/source/user_guide/timeseries.rst index a112c632ceb25..3ddb6434ed932 100644 --- a/doc/source/user_guide/timeseries.rst +++ b/doc/source/user_guide/timeseries.rst @@ -2073,14 +2073,18 @@ The ``period`` dtype can be used in ``.astype(...)``. It allows one to change th # change monthly freq to daily freq pi.astype("period[D]") - # convert to DatetimeIndex - pi.astype("datetime64[ns]") - # convert to PeriodIndex dti = pd.date_range("2011-01-01", freq="M", periods=3) dti dti.astype("period[M]") +.. deprecated:: 1.4.0 + Converting PeriodIndex to DatetimeIndex with ``.astype(...)`` is deprecated and will raise in a future version. Use ``obj.to_timestamp(how).tz_localize(dtype.tz)`` instead. + +.. ipython:: python + + # convert to DatetimeIndex + pi.to_timestamp(how="start") PeriodIndex partial string indexing ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
- [x] Ensure all linting tests pass, see [here](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit) for how to run them Follow-up to #44398. Should get the docs build to green. cc @jbrockmendel ![image](https://user-images.githubusercontent.com/45562402/141684919-f8e79f49-a2c8-4001-bd2d-13849e5c0b41.png)
https://api.github.com/repos/pandas-dev/pandas/pulls/44447
2021-11-14T14:17:07Z
2021-11-14T16:21:58Z
2021-11-14T16:21:58Z
2021-11-14T16:28:49Z
Added "." on to check_dtype and obj parameter
diff --git a/pandas/_testing/asserters.py b/pandas/_testing/asserters.py index 79c8e64fc4df3..63c696c663e66 100644 --- a/pandas/_testing/asserters.py +++ b/pandas/_testing/asserters.py @@ -540,7 +540,7 @@ def assert_categorical_equal( left : Categorical right : Categorical check_dtype : bool, default True - Check that integer dtype of the codes are the same + Check that integer dtype of the codes are the same. check_category_order : bool, default True Whether the order of the categories should be compared, which implies identical integer codes. If False, only the resulting @@ -548,7 +548,7 @@ def assert_categorical_equal( checked regardless. obj : str, default 'Categorical' Specify object name being compared, internally used to show appropriate - assertion message + assertion message. """ _check_isinstance(left, right, Categorical)
- [ ] closes #xxxx - [ ] tests added / passed - [ ] Ensure all linting tests pass, see [here](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit) for how to run them - [ ] whatsnew entry From #28602 `pandas.testing.assert_categorical_equal: Parameter "check_dtype" description should finish with "." pandas.testing.assert_categorical_equal: Parameter "obj" description should finish with "."`
https://api.github.com/repos/pandas-dev/pandas/pulls/44446
2021-11-14T14:13:42Z
2021-11-15T14:46:18Z
2021-11-15T14:46:18Z
2021-11-15T14:56:55Z
ENH: Infer inner file name of zip archive (GH39465)
diff --git a/doc/source/whatsnew/v1.4.0.rst b/doc/source/whatsnew/v1.4.0.rst index a593a03de5c25..609f085ca7144 100644 --- a/doc/source/whatsnew/v1.4.0.rst +++ b/doc/source/whatsnew/v1.4.0.rst @@ -639,7 +639,7 @@ I/O - Bug in :func:`read_csv` with :code:`float_precision="round_trip"` which did not skip initial/trailing whitespace (:issue:`43713`) - Bug in dumping/loading a :class:`DataFrame` with ``yaml.dump(frame)`` (:issue:`42748`) - Bug in :func:`read_csv` raising ``ValueError`` when ``parse_dates`` was used with ``MultiIndex`` columns (:issue:`8991`) -- +- :meth:`DataFrame.to_csv` and :meth:`Series.to_csv` with ``compression`` set to ``'zip'`` no longer create a zip file containing a file ending with ".zip". Instead, they try to infer the inner file name more smartly. (:issue:`39465`) Period ^^^^^^ diff --git a/pandas/io/common.py b/pandas/io/common.py index 12c7afc8ee2e4..990b584cb9533 100644 --- a/pandas/io/common.py +++ b/pandas/io/common.py @@ -805,6 +805,18 @@ def __init__( # _PathLike[str]], IO[bytes]]" super().__init__(file, mode, **kwargs_zip) # type: ignore[arg-type] + def infer_filename(self): + """ + If an explicit archive_name is not given, we still want the file inside the zip + file not to be named something.zip, because that causes confusion (GH39465). 
+ """ + if isinstance(self.filename, (os.PathLike, str)): + filename = Path(self.filename) + if filename.suffix == ".zip": + return filename.with_suffix("").name + return filename.name + return None + def write(self, data): # buffer multiple write calls, write on flush if self.multiple_write_buffer is None: @@ -819,7 +831,7 @@ def flush(self) -> None: return # ZipFile needs a non-empty string - archive_name = self.archive_name or self.filename or "zip" + archive_name = self.archive_name or self.infer_filename() or "zip" with self.multiple_write_buffer: super().writestr(archive_name, self.multiple_write_buffer.getvalue()) diff --git a/pandas/tests/io/formats/test_to_csv.py b/pandas/tests/io/formats/test_to_csv.py index 4c482bafa6c9c..059fd96db43ad 100644 --- a/pandas/tests/io/formats/test_to_csv.py +++ b/pandas/tests/io/formats/test_to_csv.py @@ -1,6 +1,8 @@ import io import os +from pathlib import Path import sys +from zipfile import ZipFile import numpy as np import pytest @@ -541,23 +543,38 @@ def test_to_csv_compression_dict_no_method_raises(self): df.to_csv(path, compression=compression) @pytest.mark.parametrize("compression", ["zip", "infer"]) - @pytest.mark.parametrize( - "archive_name", [None, "test_to_csv.csv", "test_to_csv.zip"] - ) + @pytest.mark.parametrize("archive_name", ["test_to_csv.csv", "test_to_csv.zip"]) def test_to_csv_zip_arguments(self, compression, archive_name): # GH 26023 - from zipfile import ZipFile - df = DataFrame({"ABC": [1]}) with tm.ensure_clean("to_csv_archive_name.zip") as path: df.to_csv( path, compression={"method": compression, "archive_name": archive_name} ) with ZipFile(path) as zp: - expected_arcname = path if archive_name is None else archive_name - expected_arcname = os.path.basename(expected_arcname) assert len(zp.filelist) == 1 - archived_file = os.path.basename(zp.filelist[0].filename) + archived_file = zp.filelist[0].filename + assert archived_file == archive_name + + @pytest.mark.parametrize( + 
"filename,expected_arcname", + [ + ("archive.csv", "archive.csv"), + ("archive.tsv", "archive.tsv"), + ("archive.csv.zip", "archive.csv"), + ("archive.tsv.zip", "archive.tsv"), + ("archive.zip", "archive"), + ], + ) + def test_to_csv_zip_infer_name(self, filename, expected_arcname): + # GH 39465 + df = DataFrame({"ABC": [1]}) + with tm.ensure_clean_dir() as dir: + path = Path(dir, filename) + df.to_csv(path, compression="zip") + with ZipFile(path) as zp: + assert len(zp.filelist) == 1 + archived_file = zp.filelist[0].filename assert archived_file == expected_arcname @pytest.mark.parametrize("df_new_type", ["Int64"])
relevant for `DataFrame.to_csv` and `Series.to_csv` with `compression='zip'` - [x] closes #39465 - also relevant for #26023 - [x] tests added / passed - [x] Ensure all linting tests pass, see [here](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit) for how to run them - [x] whatsnew entry This fix is similar-in-spirit to #40387, which has been abandoned. ## Before / After ```py import pandas as pd df = pd.DataFrame() df.to_csv('../test.csv.zip') ``` ### Before ``` > unzip -l test.csv.zip Archive: test.csv.zip Length Date Time Name --------- ---------- ----- ---- 3 2021-11-14 13:39 ../test.csv.zip --------- ------- 3 1 file ``` Notice the `..` in the path - bad! And, of course, that the file _inside_ the zip file is also called `test.csv.zip`. ### After ``` > unzip -l test.csv.zip Archive: test.csv.zip Length Date Time Name --------- ---------- ----- ---- 3 2021-11-14 13:44 test.csv --------- ------- 3 1 file ```
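The inference rule added in this PR can be sketched as a small standalone function — a sketch mirroring the `infer_filename` hunk above, under the assumption that only a single trailing `.zip` suffix is stripped (the function name here is ours, not pandas API):

```python
from pathlib import Path

def infer_inner_name(path: str) -> str:
    # Mirror the rule from the infer_filename hunk: if the target file
    # ends in ".zip", drop that suffix for the archived member's name;
    # otherwise reuse the file name unchanged.
    p = Path(path)
    if p.suffix == ".zip":
        return p.with_suffix("").name
    return p.name

print(infer_inner_name("archive.csv.zip"))  # archive.csv
print(infer_inner_name("archive.zip"))      # archive
print(infer_inner_name("archive.tsv"))      # archive.tsv
```

Note that `.name` also discards any directory components, which is what removes the stray `..` from the "Before" listing above.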
https://api.github.com/repos/pandas-dev/pandas/pulls/44445
2021-11-14T12:32:35Z
2021-11-17T02:09:41Z
2021-11-17T02:09:41Z
2021-11-17T06:48:35Z
Backport PR #44443 on branch 1.3.x (COMPAT: Python 3.9.8)
diff --git a/pandas/tests/io/parser/test_quoting.py b/pandas/tests/io/parser/test_quoting.py index 7a07632390eff..e8b86453db9e7 100644 --- a/pandas/tests/io/parser/test_quoting.py +++ b/pandas/tests/io/parser/test_quoting.py @@ -22,7 +22,7 @@ {"quotechar": None, "quoting": csv.QUOTE_MINIMAL}, "quotechar must be set if quoting enabled", ), - ({"quotechar": 2}, '"quotechar" must be string, not int'), + ({"quotechar": 2}, '"quotechar" must be string( or None)?, not int'), ], ) def test_bad_quote_char(all_parsers, kwargs, msg):
Backport PR #44443: COMPAT: Python 3.9.8
https://api.github.com/repos/pandas-dev/pandas/pulls/44444
2021-11-14T10:49:57Z
2021-11-14T11:53:48Z
2021-11-14T11:53:48Z
2021-11-14T11:53:49Z
COMPAT: Python 3.9.8
diff --git a/pandas/tests/io/parser/test_quoting.py b/pandas/tests/io/parser/test_quoting.py index 6995965467d05..456dd049d2f4a 100644 --- a/pandas/tests/io/parser/test_quoting.py +++ b/pandas/tests/io/parser/test_quoting.py @@ -24,7 +24,7 @@ {"quotechar": None, "quoting": csv.QUOTE_MINIMAL}, "quotechar must be set if quoting enabled", ), - ({"quotechar": 2}, '"quotechar" must be string, not int'), + ({"quotechar": 2}, '"quotechar" must be string( or None)?, not int'), ], ) def test_bad_quote_char(all_parsers, kwargs, msg):
Please see bpo 20028. https://github.com/python/cpython/pull/28705. This breaks our nightlies, but doesn't on regular CI. It should show up pretty soon in regular CI, once the conda-forge folks add it. This means unfortunately that I have no way to test this. Can someone with the perms on the MacPython repo please trigger the nightlies manually to see if that fixes the builds after this? Thanks. - [ ] closes #xxxx - [ ] tests added / passed - [ ] Ensure all linting tests pass, see [here](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit) for how to run them - [ ] whatsnew entry
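The patch works by turning the expected error message into a regular expression that matches both the pre- and post-3.9.8 wording. A quick stdlib check (message strings copied from the diff; the exact live message depends on the interpreter version):

```python
import re

# Pattern from the patched test; "( or None)?" makes the new wording optional.
pattern = '"quotechar" must be string( or None)?, not int'

old_msg = '"quotechar" must be string, not int'         # CPython before 3.9.8
new_msg = '"quotechar" must be string or None, not int'  # CPython 3.9.8+

assert re.search(pattern, old_msg)
assert re.search(pattern, new_msg)
```

This is why the fix is a one-line change per branch: `pytest.raises(..., match=...)` already treats the message as a regex, so no version check is needed in the test itself.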
https://api.github.com/repos/pandas-dev/pandas/pulls/44443
2021-11-14T04:59:24Z
2021-11-14T10:48:44Z
2021-11-14T10:48:44Z
2021-11-14T14:13:28Z
BUG: AttributeError: 'BooleanArray' object has no attribute 'sum' while infer types #44079
diff --git a/doc/source/whatsnew/v1.4.0.rst b/doc/source/whatsnew/v1.4.0.rst index 2fe289a5f7c35..e6835e793cdcb 100644 --- a/doc/source/whatsnew/v1.4.0.rst +++ b/doc/source/whatsnew/v1.4.0.rst @@ -645,6 +645,7 @@ I/O - Bug in :func:`read_csv` with :code:`float_precision="round_trip"` which did not skip initial/trailing whitespace (:issue:`43713`) - Bug in dumping/loading a :class:`DataFrame` with ``yaml.dump(frame)`` (:issue:`42748`) - Bug in :func:`read_csv` raising ``ValueError`` when ``parse_dates`` was used with ``MultiIndex`` columns (:issue:`8991`) +- Bug in :func:`read_csv` raising ``AttributeError`` when attempting to read a .csv file and infer index column dtype from an nullable integer type (:issue:`44079`) - :meth:`DataFrame.to_csv` and :meth:`Series.to_csv` with ``compression`` set to ``'zip'`` no longer create a zip file containing a file ending with ".zip". Instead, they try to infer the inner file name more smartly. (:issue:`39465`) Period diff --git a/pandas/io/parsers/base_parser.py b/pandas/io/parsers/base_parser.py index d096e9008112b..116217e8c3ec1 100644 --- a/pandas/io/parsers/base_parser.py +++ b/pandas/io/parsers/base_parser.py @@ -705,7 +705,7 @@ def _infer_types(self, values, na_values, try_num_bool=True): # error: Argument 2 to "isin" has incompatible type "List[Any]"; expected # "Union[Union[ExtensionArray, ndarray], Index, Series]" mask = algorithms.isin(values, list(na_values)) # type: ignore[arg-type] - na_count = mask.sum() + na_count = mask.astype("uint8", copy=False).sum() if na_count > 0: if is_integer_dtype(values): values = values.astype(np.float64) diff --git a/pandas/tests/io/parser/test_index_col.py b/pandas/tests/io/parser/test_index_col.py index 646cb2029919d..7315dcc0c4c07 100644 --- a/pandas/tests/io/parser/test_index_col.py +++ b/pandas/tests/io/parser/test_index_col.py @@ -297,3 +297,27 @@ def test_multiindex_columns_index_col_with_data(all_parsers): index=Index(["data"]), ) tm.assert_frame_equal(result, expected) + + 
+@skip_pyarrow +def test_infer_types_boolean_sum(all_parsers): + # GH#44079 + parser = all_parsers + result = parser.read_csv( + StringIO("0,1"), + names=["a", "b"], + index_col=["a"], + dtype={"a": "UInt8"}, + ) + expected = DataFrame( + data={ + "a": [ + 0, + ], + "b": [1], + } + ).set_index("a") + # Not checking index type now, because the C parser will return a + # index column of dtype 'object', and the Python parser will return a + # index column of dtype 'int64'. + tm.assert_frame_equal(result, expected, check_index_type=False)
- [x] closes #44079 - [x] tests added / passed - [x] Ensure all linting tests pass, see [here](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit) for how to run them - [x] whatsnew entry
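The underlying issue is that `algorithms.isin` can return a masked `BooleanArray` which, on the affected versions, lacked a `.sum` method — so the parser now casts the mask to `uint8` before counting NA values. A minimal pure-Python stand-in for that shape of fix (the class names here are ours, not the pandas API):

```python
class UInt8ArrayStandIn:
    """Stand-in for the integer array produced by the cast; it can sum."""
    def __init__(self, values):
        self._values = values

    def sum(self):
        return sum(self._values)


class BooleanArrayStandIn:
    """Stand-in for a nullable boolean mask without a .sum method."""
    def __init__(self, values):
        self._values = values

    def astype(self, dtype, copy=True):
        # Casting to an integer dtype yields an array that supports .sum().
        assert dtype == "uint8"
        return UInt8ArrayStandIn([int(v) for v in self._values])


mask = BooleanArrayStandIn([True, False, True])
# mask.sum() would raise AttributeError, mirroring GH#44079.
# The fix counts NA values via an explicit cast instead:
na_count = mask.astype("uint8", copy=False).sum()
print(na_count)  # 2
```

The `copy=False` in the real patch just avoids an unnecessary allocation when the mask is already an integer ndarray.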
https://api.github.com/repos/pandas-dev/pandas/pulls/44442
2021-11-14T04:11:49Z
2021-11-20T15:58:54Z
2021-11-20T15:58:54Z
2021-11-20T15:59:00Z
BLD: Exclude CPT data files
diff --git a/MANIFEST.in b/MANIFEST.in index f616fad6b1557..c6ddc79eaa83c 100644 --- a/MANIFEST.in +++ b/MANIFEST.in @@ -33,6 +33,7 @@ global-exclude *.xlsb global-exclude *.xlsm global-exclude *.xlsx global-exclude *.xpt +global-exclude *.cpt global-exclude *.xz global-exclude *.zip global-exclude *~
xref #44300. Since that added a .cpt data file, we should also exclude it from the manifest. - [ ] tests added / passed - [ ] Ensure all linting tests pass, see [here](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit) for how to run them - [ ] whatsnew entry
https://api.github.com/repos/pandas-dev/pandas/pulls/44441
2021-11-14T00:03:35Z
2021-11-14T02:11:15Z
2021-11-14T02:11:15Z
2021-11-14T04:41:46Z
DOC: Some minor doc cleanups
diff --git a/doc/source/user_guide/io.rst b/doc/source/user_guide/io.rst index c2ca3df5ca23d..e2f8ac09d8873 100644 --- a/doc/source/user_guide/io.rst +++ b/doc/source/user_guide/io.rst @@ -102,7 +102,7 @@ header : int or list of ints, default ``'infer'`` names : array-like, default ``None`` List of column names to use. If file contains no header row, then you should explicitly pass ``header=None``. Duplicates in this list are not allowed. -index_col : int, str, sequence of int / str, or False, default ``None`` +index_col : int, str, sequence of int / str, or False, optional, default ``None`` Column(s) to use as the row labels of the ``DataFrame``, either given as string name or column index. If a sequence of int / str is given, a MultiIndex is used. @@ -120,7 +120,8 @@ usecols : list-like or callable, default ``None`` Return a subset of the columns. If list-like, all elements must either be positional (i.e. integer indices into the document columns) or strings that correspond to column names provided either by the user in ``names`` or - inferred from the document header row(s). For example, a valid list-like + inferred from the document header row(s). If ``names`` are given, the document + header row(s) are not taken into account. For example, a valid list-like ``usecols`` parameter would be ``[0, 1, 2]`` or ``['foo', 'bar', 'baz']``. Element order is ignored, so ``usecols=[0, 1]`` is the same as ``[1, 0]``. To @@ -348,7 +349,7 @@ dialect : str or :class:`python:csv.Dialect` instance, default ``None`` Error handling ++++++++++++++ -error_bad_lines : boolean, default ``None`` +error_bad_lines : boolean, optional, default ``None`` Lines with too many fields (e.g. a csv line with too many commas) will by default cause an exception to be raised, and no ``DataFrame`` will be returned. If ``False``, then these "bad lines" will dropped from the @@ -358,7 +359,7 @@ error_bad_lines : boolean, default ``None`` .. 
deprecated:: 1.3.0 The ``on_bad_lines`` parameter should be used instead to specify behavior upon encountering a bad line instead. -warn_bad_lines : boolean, default ``None`` +warn_bad_lines : boolean, optional, default ``None`` If error_bad_lines is ``False``, and warn_bad_lines is ``True``, a warning for each "bad line" will be output. diff --git a/pandas/core/frame.py b/pandas/core/frame.py index 212bb63693d56..1b89eeddcf9df 100644 --- a/pandas/core/frame.py +++ b/pandas/core/frame.py @@ -1044,12 +1044,14 @@ def _repr_html_(self) -> str | None: return None @Substitution( - header_type="bool or sequence", + header_type="bool or sequence of strings", header="Write out the column names. If a list of strings " "is given, it is assumed to be aliases for the " "column names", col_space_type="int, list or dict of int", - col_space="The minimum width of each column", + col_space="The minimum width of each column. If a list of ints is given " + "every integers corresponds with one column. If a dict is given, the key " + "references the column, while the value defines the space to use.", ) @Substitution(shared_params=fmt.common_docstring, returns=fmt.return_docstring) def to_string( diff --git a/pandas/core/indexes/base.py b/pandas/core/indexes/base.py index 9715bf8f61f3c..a8896c1fde546 100644 --- a/pandas/core/indexes/base.py +++ b/pandas/core/indexes/base.py @@ -6681,8 +6681,6 @@ def all(self, *args, **kwargs): Examples -------- - **all** - True, because nonzero integers are considered True. >>> pd.Index([1, 2, 3]).all() @@ -6692,18 +6690,6 @@ def all(self, *args, **kwargs): >>> pd.Index([0, 1, 2]).all() False - - **any** - - True, because ``1`` is considered True. - - >>> pd.Index([0, 0, 1]).any() - True - - False, because ``0`` is considered False. 
- - >>> pd.Index([0, 0, 0]).any() - False """ nv.validate_all(args, kwargs) self._maybe_disable_logical_methods("all") diff --git a/pandas/io/parsers/readers.py b/pandas/io/parsers/readers.py index 6fb9497dbc1d6..0b57f0f5ef814 100644 --- a/pandas/io/parsers/readers.py +++ b/pandas/io/parsers/readers.py @@ -104,7 +104,7 @@ List of column names to use. If the file contains a header row, then you should explicitly pass ``header=0`` to override the column names. Duplicates in this list are not allowed. -index_col : int, str, sequence of int / str, or False, default ``None`` +index_col : int, str, sequence of int / str, or False, optional, default ``None`` Column(s) to use as the row labels of the ``DataFrame``, either given as string name or column index. If a sequence of int / str is given, a MultiIndex is used. @@ -116,7 +116,8 @@ Return a subset of the columns. If list-like, all elements must either be positional (i.e. integer indices into the document columns) or strings that correspond to column names provided either by the user in `names` or - inferred from the document header row(s). For example, a valid list-like + inferred from the document header row(s). If ``names`` are given, the document + header row(s) are not taken into account. For example, a valid list-like `usecols` parameter would be ``[0, 1, 2]`` or ``['foo', 'bar', 'baz']``. Element order is ignored, so ``usecols=[0, 1]`` is the same as ``[1, 0]``. To instantiate a DataFrame from ``data`` with element order preserved use @@ -331,7 +332,7 @@ `skipinitialspace`, `quotechar`, and `quoting`. If it is necessary to override values, a ParserWarning will be issued. See csv.Dialect documentation for more details. -error_bad_lines : bool, default ``None`` +error_bad_lines : bool, optional, default ``None`` Lines with too many fields (e.g. a csv line with too many commas) will by default cause an exception to be raised, and no DataFrame will be returned. 
If False, then these "bad lines" will be dropped from the DataFrame that is @@ -340,7 +341,7 @@ .. deprecated:: 1.3.0 The ``on_bad_lines`` parameter should be used instead to specify behavior upon encountering a bad line instead. -warn_bad_lines : bool, default ``None`` +warn_bad_lines : bool, optional, default ``None`` If error_bad_lines is False, and warn_bad_lines is True, a warning for each "bad line" will be output.
- [x] closes #40362 - [x] closes #44147
https://api.github.com/repos/pandas-dev/pandas/pulls/44440
2021-11-13T23:16:06Z
2021-11-14T02:13:44Z
2021-11-14T02:13:43Z
2021-11-14T19:41:33Z
ENH: Use stacklevel in warnings
diff --git a/pandas/core/algorithms.py b/pandas/core/algorithms.py index 8c2c01b6aedc8..acc66ae9deca7 100644 --- a/pandas/core/algorithms.py +++ b/pandas/core/algorithms.py @@ -35,6 +35,7 @@ npt, ) from pandas.util._decorators import doc +from pandas.util._exceptions import find_stack_level from pandas.core.dtypes.cast import ( construct_1d_object_array_from_listlike, @@ -1550,7 +1551,7 @@ def searchsorted( _diff_special = {"float64", "float32", "int64", "int32", "int16", "int8"} -def diff(arr, n: int, axis: int = 0, stacklevel: int = 3): +def diff(arr, n: int, axis: int = 0): """ difference of n between self, analogous to s-s.shift(n) @@ -1596,7 +1597,7 @@ def diff(arr, n: int, axis: int = 0, stacklevel: int = 3): "dtype lost in 'diff()'. In the future this will raise a " "TypeError. Convert to a suitable dtype prior to calling 'diff'.", FutureWarning, - stacklevel=stacklevel, + stacklevel=find_stack_level(), ) arr = np.asarray(arr) dtype = arr.dtype diff --git a/pandas/core/arraylike.py b/pandas/core/arraylike.py index 11d32e8a159f3..d91404ff05157 100644 --- a/pandas/core/arraylike.py +++ b/pandas/core/arraylike.py @@ -337,7 +337,9 @@ def reconstruct(result): "Consider explicitly converting the DataFrame " "to an array with '.to_numpy()' first." ) - warnings.warn(msg.format(ufunc), FutureWarning, stacklevel=4) + warnings.warn( + msg.format(ufunc), FutureWarning, stacklevel=find_stack_level() + ) return result raise NotImplementedError return result diff --git a/pandas/core/arrays/datetimelike.py b/pandas/core/arrays/datetimelike.py index f8aa1656c8c30..2e1ebf9d5a266 100644 --- a/pandas/core/arrays/datetimelike.py +++ b/pandas/core/arrays/datetimelike.py @@ -416,13 +416,12 @@ def astype(self, dtype, copy: bool = True): elif is_integer_dtype(dtype): # we deliberately ignore int32 vs. int64 here. # See https://github.com/pandas-dev/pandas/issues/24381 for more. - level = find_stack_level() warnings.warn( f"casting {self.dtype} values to int64 with .astype(...) 
is " "deprecated and will raise in a future version. " "Use .view(...) instead.", FutureWarning, - stacklevel=level, + stacklevel=find_stack_level(), ) values = self.asi8 diff --git a/pandas/core/dtypes/cast.py b/pandas/core/dtypes/cast.py index 2c26d6f838315..9cd67ad293f63 100644 --- a/pandas/core/dtypes/cast.py +++ b/pandas/core/dtypes/cast.py @@ -969,13 +969,12 @@ def astype_dt64_to_dt64tz( # this should be the only copy values = values.copy() - level = find_stack_level() warnings.warn( "Using .astype to convert from timezone-naive dtype to " "timezone-aware dtype is deprecated and will raise in a " "future version. Use ser.dt.tz_localize instead.", FutureWarning, - stacklevel=level, + stacklevel=find_stack_level(), ) # GH#33401 this doesn't match DatetimeArray.astype, which @@ -986,13 +985,12 @@ def astype_dt64_to_dt64tz( # DatetimeArray/DatetimeIndex.astype behavior if values.tz is None and aware: dtype = cast(DatetimeTZDtype, dtype) - level = find_stack_level() warnings.warn( "Using .astype to convert from timezone-naive dtype to " "timezone-aware dtype is deprecated and will raise in a " "future version. Use obj.tz_localize instead.", FutureWarning, - stacklevel=level, + stacklevel=find_stack_level(), ) return values.tz_localize(dtype.tz) @@ -1006,14 +1004,13 @@ def astype_dt64_to_dt64tz( return result elif values.tz is not None: - level = find_stack_level() warnings.warn( "Using .astype to convert from timezone-aware dtype to " "timezone-naive dtype is deprecated and will raise in a " "future version. Use obj.tz_localize(None) or " "obj.tz_convert('UTC').tz_localize(None) instead", FutureWarning, - stacklevel=level, + stacklevel=find_stack_level(), ) result = values.tz_convert("UTC").tz_localize(None) diff --git a/pandas/core/generic.py b/pandas/core/generic.py index 6b51456006021..38a2cb46ad21d 100644 --- a/pandas/core/generic.py +++ b/pandas/core/generic.py @@ -487,9 +487,10 @@ def _data(self): @property def _AXIS_NUMBERS(self) -> dict[str, int]: """.. 
deprecated:: 1.1.0""" - level = self.ndim + 1 warnings.warn( - "_AXIS_NUMBERS has been deprecated.", FutureWarning, stacklevel=level + "_AXIS_NUMBERS has been deprecated.", + FutureWarning, + stacklevel=find_stack_level(), ) return {"index": 0} diff --git a/pandas/core/groupby/grouper.py b/pandas/core/groupby/grouper.py index 7577b1e671d60..6cbe37c6b3838 100644 --- a/pandas/core/groupby/grouper.py +++ b/pandas/core/groupby/grouper.py @@ -21,6 +21,7 @@ ) from pandas.errors import InvalidIndexError from pandas.util._decorators import cache_readonly +from pandas.util._exceptions import find_stack_level from pandas.core.dtypes.cast import sanitize_to_nanoseconds from pandas.core.dtypes.common import ( @@ -964,8 +965,6 @@ def _check_deprecated_resample_kwargs(kwargs, origin): From where this function is being called; either Grouper or TimeGrouper. Used to determine an approximate stacklevel. """ - from pandas.core.resample import TimeGrouper - # Deprecation warning of `base` and `loffset` since v1.1.0: # we are raising the warning here to be able to set the `stacklevel` # properly since we need to raise the `base` and `loffset` deprecation @@ -975,11 +974,6 @@ def _check_deprecated_resample_kwargs(kwargs, origin): # core/groupby/grouper.py::Grouper # raising these warnings from TimeGrouper directly would fail the test: # tests/resample/test_deprecated.py::test_deprecating_on_loffset_and_base - # hacky way to set the stacklevel: if cls is TimeGrouper it means - # that the call comes from a pandas internal call of resample, - # otherwise it comes from pd.Grouper - stacklevel = (5 if origin is TimeGrouper else 2) + 1 - # the + 1 is for this helper function, check_deprecated_resample_kwargs if kwargs.get("base", None) is not None: warnings.warn( @@ -989,7 +983,7 @@ def _check_deprecated_resample_kwargs(kwargs, origin): "\nbecomes:\n" '\n>>> df.resample(freq="3s", offset="2s")\n', FutureWarning, - stacklevel=stacklevel, + stacklevel=find_stack_level(), ) if 
kwargs.get("loffset", None) is not None: warnings.warn( @@ -1000,5 +994,5 @@ def _check_deprecated_resample_kwargs(kwargs, origin): '\n>>> df = df.resample(freq="3s").mean()' '\n>>> df.index = df.index.to_timestamp() + to_offset("8H")\n', FutureWarning, - stacklevel=stacklevel, + stacklevel=find_stack_level(), ) diff --git a/pandas/core/internals/array_manager.py b/pandas/core/internals/array_manager.py index 543b2ea26f750..1cd9fe65407ba 100644 --- a/pandas/core/internals/array_manager.py +++ b/pandas/core/internals/array_manager.py @@ -365,7 +365,7 @@ def diff(self: T, n: int, axis: int) -> T: # with axis=0 is equivalent assert n == 0 axis = 0 - return self.apply(algos.diff, n=n, axis=axis, stacklevel=5) + return self.apply(algos.diff, n=n, axis=axis) def interpolate(self: T, **kwargs) -> T: return self.apply_with_block("interpolate", swap_axis=False, **kwargs) diff --git a/pandas/core/internals/blocks.py b/pandas/core/internals/blocks.py index 55e5b0d0439fa..4d96cf8a80b38 100644 --- a/pandas/core/internals/blocks.py +++ b/pandas/core/internals/blocks.py @@ -1122,7 +1122,7 @@ def take_nd( def diff(self, n: int, axis: int = 1) -> list[Block]: """return block for the diff of the values""" - new_values = algos.diff(self.values, n, axis=axis, stacklevel=7) + new_values = algos.diff(self.values, n, axis=axis) return [self.make_block(values=new_values)] def shift(self, periods: int, axis: int = 0, fill_value: Any = None) -> list[Block]: diff --git a/pandas/core/series.py b/pandas/core/series.py index b3c9167bfbbab..e0a63b8e35105 100644 --- a/pandas/core/series.py +++ b/pandas/core/series.py @@ -1012,7 +1012,7 @@ def _get_values_tuple(self, key): # mpl hackaround if com.any_none(*key): result = self._get_values(key) - deprecate_ndim_indexing(result, stacklevel=5) + deprecate_ndim_indexing(result, stacklevel=find_stack_level()) return result if not isinstance(self.index, MultiIndex): diff --git a/pandas/core/strings/accessor.py b/pandas/core/strings/accessor.py index 
f82e1aa5d188c..249fda9173b68 100644 --- a/pandas/core/strings/accessor.py +++ b/pandas/core/strings/accessor.py @@ -1427,7 +1427,7 @@ def replace( " In addition, single character regular expressions will " "*not* be treated as literal strings when regex=True." ) - warnings.warn(msg, FutureWarning, stacklevel=3) + warnings.warn(msg, FutureWarning, stacklevel=find_stack_level()) # Check whether repl is valid (GH 13438, GH 15055) if not (isinstance(repl, str) or callable(repl)): diff --git a/pandas/io/excel/_base.py b/pandas/io/excel/_base.py index 1caf334f9607e..ed79a5ad98ab9 100644 --- a/pandas/io/excel/_base.py +++ b/pandas/io/excel/_base.py @@ -519,11 +519,10 @@ def parse( if convert_float is None: convert_float = True else: - stacklevel = find_stack_level() warnings.warn( "convert_float is deprecated and will be removed in a future version.", FutureWarning, - stacklevel=stacklevel, + stacklevel=find_stack_level(), ) validate_header_arg(header) diff --git a/pandas/io/formats/style.py b/pandas/io/formats/style.py index d91c0bb54f8dc..40803ff14e357 100644 --- a/pandas/io/formats/style.py +++ b/pandas/io/formats/style.py @@ -28,6 +28,7 @@ ) from pandas.compat._optional import import_optional_dependency from pandas.util._decorators import doc +from pandas.util._exceptions import find_stack_level import pandas as pd from pandas import ( @@ -310,7 +311,7 @@ def render( warnings.warn( "this method is deprecated in favour of `Styler.to_html()`", FutureWarning, - stacklevel=2, + stacklevel=find_stack_level(), ) if sparse_index is None: sparse_index = get_option("styler.sparse.index") @@ -1675,7 +1676,7 @@ def where( warnings.warn( "this method is deprecated in favour of `Styler.applymap()`", FutureWarning, - stacklevel=2, + stacklevel=find_stack_level(), ) if other is None: @@ -1707,7 +1708,7 @@ def set_precision(self, precision: int) -> StylerRenderer: warnings.warn( "this method is deprecated in favour of `Styler.format(precision=..)`", FutureWarning, - stacklevel=2, + 
stacklevel=find_stack_level(), ) self.precision = precision return self.format(precision=precision, na_rep=self.na_rep) @@ -2217,7 +2218,7 @@ def set_na_rep(self, na_rep: str) -> StylerRenderer: warnings.warn( "this method is deprecated in favour of `Styler.format(na_rep=..)`", FutureWarning, - stacklevel=2, + stacklevel=find_stack_level(), ) self.na_rep = na_rep return self.format(na_rep=na_rep, precision=self.precision) @@ -2271,7 +2272,7 @@ def hide_index( warnings.warn( "this method is deprecated in favour of `Styler.hide(axis='index')`", FutureWarning, - stacklevel=2, + stacklevel=find_stack_level(), ) return self.hide(axis=0, level=level, subset=subset, names=names) @@ -2324,7 +2325,7 @@ def hide_columns( warnings.warn( "this method is deprecated in favour of `Styler.hide(axis='columns')`", FutureWarning, - stacklevel=2, + stacklevel=find_stack_level(), ) return self.hide(axis=1, level=level, subset=subset, names=names)
Part of #44347 - [x] Ensure all linting tests pass, see [here](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit) for how to run them
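The refactor in this PR replaces hardcoded `stacklevel` arguments with a dynamic lookup. Below is a minimal sketch of the idea only — `PKG_PREFIX` is a hypothetical package name, and the real `pandas.util._exceptions.find_stack_level` compares file paths rather than module names. The point is the same: walk up the call stack, count library-internal frames, and hand the count to `warnings.warn` so the warning is attributed to the user's call site.

```python
import inspect
import warnings

PKG_PREFIX = "mylib"  # hypothetical package whose internal frames we skip

def find_stack_level() -> int:
    """Return the ``stacklevel`` that makes ``warnings.warn`` point at the
    first caller *outside* PKG_PREFIX (simplified sketch of the pattern)."""
    level = 1
    frame = inspect.currentframe().f_back  # frame of our caller
    try:
        while frame is not None:
            mod = frame.f_globals.get("__name__", "")
            if not (mod == PKG_PREFIX or mod.startswith(PKG_PREFIX + ".")):
                break  # first non-library frame: warn at this level
            level += 1
            frame = frame.f_back
    finally:
        del frame  # avoid a reference cycle with the frame object

    return level

# Library code can then deprecate without guessing call depth:
# warnings.warn("x is deprecated", FutureWarning, stacklevel=find_stack_level())
```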
https://api.github.com/repos/pandas-dev/pandas/pulls/44439
2021-11-13T22:36:28Z
2021-11-14T02:08:32Z
2021-11-14T02:08:32Z
2021-11-14T02:38:11Z
CLN: small clean-up PeriodIndex (easy parts of #23416)
diff --git a/pandas/core/arrays/datetimelike.py b/pandas/core/arrays/datetimelike.py index 0247ce8dc6ac4..dc7cf51ca109d 100644 --- a/pandas/core/arrays/datetimelike.py +++ b/pandas/core/arrays/datetimelike.py @@ -944,6 +944,9 @@ def validate_dtype_freq(dtype, freq): ValueError : non-period dtype IncompatibleFrequency : mismatch between dtype and freq """ + if freq is not None: + freq = frequencies.to_offset(freq) + if dtype is not None: dtype = pandas_dtype(dtype) if not is_period_dtype(dtype): diff --git a/pandas/core/indexes/period.py b/pandas/core/indexes/period.py index c3728d8d956de..ae5c3ddc9dfb4 100644 --- a/pandas/core/indexes/period.py +++ b/pandas/core/indexes/period.py @@ -29,7 +29,7 @@ from pandas._libs.tslibs import resolution from pandas.core.algorithms import unique1d -from pandas.core.dtypes.dtypes import PeriodDtype +import pandas.core.arrays.datetimelike as dtl from pandas.core.arrays.period import PeriodArray, period_array from pandas.core.base import _shared_docs from pandas.core.indexes.base import _index_shared_docs, ensure_index @@ -48,17 +48,6 @@ dict(target_klass='PeriodIndex or list of Periods')) -def _wrap_field_accessor(name): - fget = getattr(PeriodArray, name).fget - - def f(self): - result = fget(self) - return Index(result, name=self.name) - - f.__name__ = name - f.__doc__ = fget.__doc__ - return property(f) - # --- Period index sketch @@ -211,27 +200,11 @@ def __new__(cls, data=None, ordinal=None, freq=None, start=None, end=None, if data is None and ordinal is None: # range-based. 
- if periods is not None: - if is_float(periods): - periods = int(periods) - - elif not is_integer(periods): - msg = 'periods must be a number, got {periods}' - raise TypeError(msg.format(periods=periods)) - data, freq = PeriodArray._generate_range(start, end, periods, freq, fields) data = PeriodArray(data, freq=freq) else: - if freq is None and dtype is not None: - freq = PeriodDtype(dtype).freq - elif freq and dtype: - freq = PeriodDtype(freq).freq - dtype = PeriodDtype(dtype).freq - - if freq != dtype: - msg = "specified freq and dtype are different" - raise IncompatibleFrequency(msg) + freq = dtl.validate_dtype_freq(dtype, freq) # PeriodIndex allow PeriodIndex(period_index, freq=different) # Let's not encourage that kind of behavior in PeriodArray.
xref #23416
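To illustrate the consolidated validation path: after this clean-up, `PeriodIndex` funnels `freq`/`dtype` checking through the shared `validate_dtype_freq` helper, so a conflicting pair raises `IncompatibleFrequency` (a `ValueError` subclass). A small sketch of the observable behavior (exact error wording may vary across pandas versions):

```python
import pandas as pd

# freq and a period dtype may be given together, but they must agree.
pi = pd.PeriodIndex(['2018-01', '2018-02'], freq='M')
print(pi.dtype)  # period[M]

# A freq that contradicts the period dtype is rejected by the shared check.
try:
    pd.PeriodIndex(['2018-01', '2018-02'], freq='M', dtype='period[D]')
except ValueError as err:
    print('rejected:', err)
```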
https://api.github.com/repos/pandas-dev/pandas/pulls/23423
2018-10-30T22:32:27Z
2018-10-31T08:09:59Z
2018-10-31T08:09:59Z
2018-10-31T14:54:55Z
API: fix corner case of lib.infer_dtype
diff --git a/pandas/_libs/lib.pyx b/pandas/_libs/lib.pyx index c57dd66a33fe0..503584cac48d6 100644 --- a/pandas/_libs/lib.pyx +++ b/pandas/_libs/lib.pyx @@ -57,7 +57,7 @@ from tslibs.conversion cimport convert_to_tsobject from tslibs.timedeltas cimport convert_to_timedelta64 from tslibs.timezones cimport get_timezone, tz_compare -from missing cimport (checknull, +from missing cimport (checknull, isnaobj, is_null_datetime64, is_null_timedelta64, is_null_period) @@ -1181,6 +1181,9 @@ def infer_dtype(value: object, skipna: bool=False) -> str: values = construct_1d_object_array_from_listlike(value) values = getattr(values, 'values', values) + if skipna: + values = values[~isnaobj(values)] + val = _try_infer_map(values) if val is not None: return val diff --git a/pandas/_libs/missing.pxd b/pandas/_libs/missing.pxd index 9f660cc6785c8..d0dd306680ae8 100644 --- a/pandas/_libs/missing.pxd +++ b/pandas/_libs/missing.pxd @@ -1,7 +1,10 @@ # -*- coding: utf-8 -*- +from numpy cimport ndarray, uint8_t + cpdef bint checknull(object val) cpdef bint checknull_old(object val) +cpdef ndarray[uint8_t] isnaobj(ndarray arr) cdef bint is_null_datetime64(v) cdef bint is_null_timedelta64(v) diff --git a/pandas/_libs/missing.pyx b/pandas/_libs/missing.pyx index 2590a30c57f33..6776a4b6d7f7e 100644 --- a/pandas/_libs/missing.pyx +++ b/pandas/_libs/missing.pyx @@ -124,7 +124,7 @@ cdef inline bint _check_none_nan_inf_neginf(object val): @cython.wraparound(False) @cython.boundscheck(False) -def isnaobj(ndarray arr): +cpdef ndarray[uint8_t] isnaobj(ndarray arr): """ Return boolean mask denoting which elements of a 1-D array are na-like, according to the criteria defined in `_check_all_nulls`: diff --git a/pandas/tests/dtypes/test_inference.py b/pandas/tests/dtypes/test_inference.py index d0dd03d6eb8df..c5911da1666d2 100644 --- a/pandas/tests/dtypes/test_inference.py +++ b/pandas/tests/dtypes/test_inference.py @@ -591,6 +591,22 @@ def test_unicode(self): expected = 'unicode' if PY2 else 'string' 
assert result == expected + @pytest.mark.parametrize('dtype, missing, skipna, expected', [ + (float, np.nan, False, 'floating'), + (float, np.nan, True, 'floating'), + (object, np.nan, False, 'floating'), + (object, np.nan, True, 'empty'), + (object, None, False, 'mixed'), + (object, None, True, 'empty') + ]) + @pytest.mark.parametrize('box', [pd.Series, np.array]) + def test_object_empty(self, box, missing, dtype, skipna, expected): + # GH 23421 + arr = box([missing, missing], dtype=dtype) + + result = lib.infer_dtype(arr, skipna=skipna) + assert result == expected + def test_datetime(self): dates = [datetime(2012, 1, x) for x in range(1, 20)]
- [x] closes #23421 - [x] tests added / passed - [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff` - [x] whatsnew entry Working towards #23167 requires fixing the type inference for some corner cases (in particular, an object column full of NaNs should not be inferred as `'floating'`).
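The new `skipna` behavior is observable through the public `pandas.api.types.infer_dtype` wrapper, mirroring the parametrized test cases added above:

```python
import numpy as np
from pandas.api.types import infer_dtype

arr = np.array([np.nan, np.nan], dtype=object)

# Without skipna the NaNs themselves (Python floats) drive the inference;
# with skipna=True they are masked out first, leaving an empty array.
print(infer_dtype(arr, skipna=False))  # floating
print(infer_dtype(arr, skipna=True))   # empty

# The same applies to None in an object array.
print(infer_dtype(np.array([None, None], dtype=object), skipna=True))  # empty
```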
https://api.github.com/repos/pandas-dev/pandas/pulls/23422
2018-10-30T17:54:25Z
2018-11-04T15:53:28Z
2018-11-04T15:53:28Z
2018-11-05T00:14:29Z
DOC: Clarify documentation of 'ambiguous' parameter
diff --git a/pandas/_libs/tslibs/conversion.pyx b/pandas/_libs/tslibs/conversion.pyx index f9c604cd76472..7263fde5eb530 100644 --- a/pandas/_libs/tslibs/conversion.pyx +++ b/pandas/_libs/tslibs/conversion.pyx @@ -835,7 +835,20 @@ def tz_localize_to_utc(ndarray[int64_t] vals, object tz, object ambiguous=None, vals : ndarray[int64_t] tz : tzinfo or None ambiguous : str, bool, or arraylike - If arraylike, must have the same length as vals + When clocks moved backward due to DST, ambiguous times may arise. + For example in Central European Time (UTC+01), when going from 03:00 + DST to 02:00 non-DST, 02:30:00 local time occurs both at 00:30:00 UTC + and at 01:30:00 UTC. In such a situation, the `ambiguous` parameter + dictates how ambiguous times should be handled. + + - 'infer' will attempt to infer fall dst-transition hours based on + order + - bool-ndarray where True signifies a DST time, False signifies a + non-DST time (note that this flag is only applicable for ambiguous + times, but the array must have the same length as vals) + - bool if True, treat all vals as DST. If False, treat them as non-DST + - 'NaT' will return NaT where there are ambiguous times + nonexistent : str If arraylike, must have the same length as vals diff --git a/pandas/_libs/tslibs/nattype.pyx b/pandas/_libs/tslibs/nattype.pyx index 0eec84ecf8285..be35ae1172f7e 100644 --- a/pandas/_libs/tslibs/nattype.pyx +++ b/pandas/_libs/tslibs/nattype.pyx @@ -559,6 +559,13 @@ class NaTType(_NaT): None will remove timezone holding local time. ambiguous : bool, 'NaT', default 'raise' + When clocks moved backward due to DST, ambiguous times may arise. + For example in Central European Time (UTC+01), when going from + 03:00 DST to 02:00 non-DST, 02:30:00 local time occurs both at + 00:30:00 UTC and at 01:30:00 UTC. In such a situation, the + `ambiguous` parameter dictates how ambiguous times should be + handled. 
+ - bool contains flags to determine if time is dst or not (note that this flag is only applicable for ambiguous fall dst dates) - 'NaT' will return NaT for an ambiguous time diff --git a/pandas/_libs/tslibs/timestamps.pyx b/pandas/_libs/tslibs/timestamps.pyx index 08b0c5472549e..89092af68b168 100644 --- a/pandas/_libs/tslibs/timestamps.pyx +++ b/pandas/_libs/tslibs/timestamps.pyx @@ -974,6 +974,13 @@ class Timestamp(_Timestamp): None will remove timezone holding local time. ambiguous : bool, 'NaT', default 'raise' + When clocks moved backward due to DST, ambiguous times may arise. + For example in Central European Time (UTC+01), when going from + 03:00 DST to 02:00 non-DST, 02:30:00 local time occurs both at + 00:30:00 UTC and at 01:30:00 UTC. In such a situation, the + `ambiguous` parameter dictates how ambiguous times should be + handled. + - bool contains flags to determine if time is dst or not (note that this flag is only applicable for ambiguous fall dst dates) - 'NaT' will return NaT for an ambiguous time diff --git a/pandas/core/arrays/datetimes.py b/pandas/core/arrays/datetimes.py index b656690b30e34..fd3e5cedcfe77 100644 --- a/pandas/core/arrays/datetimes.py +++ b/pandas/core/arrays/datetimes.py @@ -606,6 +606,12 @@ def tz_localize(self, tz, ambiguous='raise', nonexistent='raise', Time zone to convert timestamps to. Passing ``None`` will remove the time zone information preserving local time. ambiguous : 'infer', 'NaT', bool array, default 'raise' + When clocks moved backward due to DST, ambiguous times may arise. + For example in Central European Time (UTC+01), when going from + 03:00 DST to 02:00 non-DST, 02:30:00 local time occurs both at + 00:30:00 UTC and at 01:30:00 UTC. In such a situation, the + `ambiguous` parameter dictates how ambiguous times should be + handled. 
- 'infer' will attempt to infer fall dst-transition hours based on order @@ -677,6 +683,40 @@ def tz_localize(self, tz, ambiguous='raise', nonexistent='raise', DatetimeIndex(['2018-03-01 09:00:00', '2018-03-02 09:00:00', '2018-03-03 09:00:00'], dtype='datetime64[ns]', freq='D') + + Be careful with DST changes. When there is sequential data, pandas can + infer the DST time: + >>> s = pd.to_datetime(pd.Series([ + ... '2018-10-28 01:30:00', + ... '2018-10-28 02:00:00', + ... '2018-10-28 02:30:00', + ... '2018-10-28 02:00:00', + ... '2018-10-28 02:30:00', + ... '2018-10-28 03:00:00', + ... '2018-10-28 03:30:00'])) + >>> s.dt.tz_localize('CET', ambiguous='infer') + 2018-10-28 01:30:00+02:00 0 + 2018-10-28 02:00:00+02:00 1 + 2018-10-28 02:30:00+02:00 2 + 2018-10-28 02:00:00+01:00 3 + 2018-10-28 02:30:00+01:00 4 + 2018-10-28 03:00:00+01:00 5 + 2018-10-28 03:30:00+01:00 6 + dtype: int64 + + In some cases, inferring the DST is impossible. In such cases, you can + pass an ndarray to the ambiguous parameter to set the DST explicitly + + >>> s = pd.to_datetime(pd.Series([ + ... '2018-10-28 01:20:00', + ... '2018-10-28 02:36:00', + ... '2018-10-28 03:46:00'])) + >>> s.dt.tz_localize('CET', ambiguous=np.array([True, True, False])) + 0 2018-10-28 01:20:00+02:00 + 1 2018-10-28 02:36:00+02:00 + 2 2018-10-28 03:46:00+01:00 + dtype: datetime64[ns, CET] + """ if errors is not None: warnings.warn("The errors argument is deprecated and will be " diff --git a/pandas/core/generic.py b/pandas/core/generic.py index 693cd14c8ca1d..96bdb7cc2146a 100644 --- a/pandas/core/generic.py +++ b/pandas/core/generic.py @@ -8623,6 +8623,13 @@ def tz_localize(self, tz, axis=0, level=None, copy=True, copy : boolean, default True Also make a copy of the underlying data ambiguous : 'infer', bool-ndarray, 'NaT', default 'raise' + When clocks moved backward due to DST, ambiguous times may arise. 
+ For example in Central European Time (UTC+01), when going from + 03:00 DST to 02:00 non-DST, 02:30:00 local time occurs both at + 00:30:00 UTC and at 01:30:00 UTC. In such a situation, the + `ambiguous` parameter dictates how ambiguous times should be + handled. + - 'infer' will attempt to infer fall dst-transition hours based on order - bool-ndarray where True signifies a DST time, False designates @@ -8650,6 +8657,52 @@ def tz_localize(self, tz, axis=0, level=None, copy=True, ------ TypeError If the TimeSeries is tz-aware and tz is not None. + + Examples + -------- + + Localize local times: + + >>> s = pd.Series([1], + ... index=pd.DatetimeIndex(['2018-09-15 01:30:00'])) + >>> s.tz_localize('CET') + 2018-09-15 01:30:00+02:00 1 + dtype: int64 + + Be careful with DST changes. When there is sequential data, pandas + can infer the DST time: + + >>> s = pd.Series(range(7), index=pd.DatetimeIndex([ + ... '2018-10-28 01:30:00', + ... '2018-10-28 02:00:00', + ... '2018-10-28 02:30:00', + ... '2018-10-28 02:00:00', + ... '2018-10-28 02:30:00', + ... '2018-10-28 03:00:00', + ... '2018-10-28 03:30:00'])) + >>> s.tz_localize('CET', ambiguous='infer') + 2018-10-28 01:30:00+02:00 0 + 2018-10-28 02:00:00+02:00 1 + 2018-10-28 02:30:00+02:00 2 + 2018-10-28 02:00:00+01:00 3 + 2018-10-28 02:30:00+01:00 4 + 2018-10-28 03:00:00+01:00 5 + 2018-10-28 03:30:00+01:00 6 + dtype: int64 + + In some cases, inferring the DST is impossible. In such cases, you can + pass an ndarray to the ambiguous parameter to set the DST explicitly + + >>> s = pd.Series(range(3), index=pd.DatetimeIndex([ + ... '2018-10-28 01:20:00', + ... '2018-10-28 02:36:00', + ... 
'2018-10-28 03:46:00'])) + >>> s.tz_localize('CET', ambiguous=np.array([True, True, False])) + 2018-10-28 01:20:00+02:00 0 + 2018-10-28 02:36:00+02:00 1 + 2018-10-28 03:46:00+01:00 2 + dtype: int64 + """ if nonexistent not in ('raise', 'NaT', 'shift'): raise ValueError("The nonexistent argument must be one of 'raise'," diff --git a/pandas/core/indexes/datetimes.py b/pandas/core/indexes/datetimes.py index 39f247a7c4cfe..d3cd4d834dfa0 100644 --- a/pandas/core/indexes/datetimes.py +++ b/pandas/core/indexes/datetimes.py @@ -99,6 +99,12 @@ class DatetimeIndex(DatetimeArrayMixin, DatelikeOps, TimelikeOps, the 'left', 'right', or both sides (None) tz : pytz.timezone or dateutil.tz.tzfile ambiguous : 'infer', bool-ndarray, 'NaT', default 'raise' + When clocks moved backward due to DST, ambiguous times may arise. + For example in Central European Time (UTC+01), when going from 03:00 + DST to 02:00 non-DST, 02:30:00 local time occurs both at 00:30:00 UTC + and at 01:30:00 UTC. In such a situation, the `ambiguous` parameter + dictates how ambiguous times should be handled. + - 'infer' will attempt to infer fall dst-transition hours based on order - bool-ndarray where True signifies a DST time, False signifies a @@ -173,6 +179,7 @@ class DatetimeIndex(DatetimeArrayMixin, DatelikeOps, TimelikeOps, TimedeltaIndex : Index of timedelta64 data PeriodIndex : Index of Period data pandas.to_datetime : Convert argument to datetime + """ _resolution = cache_readonly(DatetimeArrayMixin._resolution.fget)
When first reading the docs, the `ambiguous` parameter was not clear to me. I propose adding a line of documentation and including an example that shows exactly the issue the `ambiguous` parameter solves.
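A short illustration of the documented behavior, adapted from the examples added in the diff (assumes the timezone database knows the `CET` zone):

```python
import numpy as np
import pandas as pd

# During the 2018-10-28 fall-back transition in CET, wall times between
# 02:00 and 03:00 occur twice. A bool array resolves each ambiguous value
# explicitly: True = DST (+02:00), False = non-DST (+01:00).
s = pd.to_datetime(pd.Series(['2018-10-28 01:20:00',
                              '2018-10-28 02:36:00',
                              '2018-10-28 03:46:00']))
localized = s.dt.tz_localize('CET', ambiguous=np.array([True, True, False]))
print(localized)
```

Unambiguous values (the first and last here) ignore their flag; only the 02:xx value actually needs it.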
https://api.github.com/repos/pandas-dev/pandas/pulls/23408
2018-10-29T10:29:22Z
2018-11-04T13:56:59Z
2018-11-04T13:56:59Z
2018-11-04T15:57:06Z
BUG/ENH: Handle NonexistentTimeError in date rounding
diff --git a/doc/source/whatsnew/v0.24.0.txt b/doc/source/whatsnew/v0.24.0.txt index ddf5fffb1d80b..85b7b8c846f1a 100644 --- a/doc/source/whatsnew/v0.24.0.txt +++ b/doc/source/whatsnew/v0.24.0.txt @@ -227,6 +227,7 @@ Other Enhancements - :class:`Series` and :class:`DataFrame` now support :class:`Iterable` in constructor (:issue:`2193`) - :class:`DatetimeIndex` gained :attr:`DatetimeIndex.timetz` attribute. Returns local time with timezone information. (:issue:`21358`) - :meth:`round`, :meth:`ceil`, and meth:`floor` for :class:`DatetimeIndex` and :class:`Timestamp` now support an ``ambiguous`` argument for handling datetimes that are rounded to ambiguous times (:issue:`18946`) +- :meth:`round`, :meth:`ceil`, and meth:`floor` for :class:`DatetimeIndex` and :class:`Timestamp` now support a ``nonexistent`` argument for handling datetimes that are rounded to nonexistent times. See :ref:`timeseries.timezone_nonexsistent` (:issue:`22647`) - :class:`Resampler` now is iterable like :class:`GroupBy` (:issue:`15314`). - :meth:`Series.resample` and :meth:`DataFrame.resample` have gained the :meth:`Resampler.quantile` (:issue:`15023`). 
- :meth:`pandas.core.dtypes.is_list_like` has gained a keyword ``allow_sets`` which is ``True`` by default; if ``False``, diff --git a/pandas/_libs/tslibs/conversion.pyx b/pandas/_libs/tslibs/conversion.pyx index f9c604cd76472..7d86ea58dd85a 100644 --- a/pandas/_libs/tslibs/conversion.pyx +++ b/pandas/_libs/tslibs/conversion.pyx @@ -27,11 +27,10 @@ from np_datetime import OutOfBoundsDatetime from util cimport (is_string_object, is_datetime64_object, - is_integer_object, is_float_object, is_array) + is_integer_object, is_float_object) from timedeltas cimport cast_from_unit from timezones cimport (is_utc, is_tzlocal, is_fixed_offset, - treat_tz_as_dateutil, treat_tz_as_pytz, get_utcoffset, get_dst_info, get_timezone, maybe_get_tz, tz_compare) from parsing import parse_datetime_string @@ -850,8 +849,9 @@ def tz_localize_to_utc(ndarray[int64_t] vals, object tz, object ambiguous=None, int64_t[:] deltas, idx_shifted ndarray ambiguous_array Py_ssize_t i, idx, pos, ntrans, n = len(vals) + Py_ssize_t delta_idx_offset, delta_idx int64_t *tdata - int64_t v, left, right, val, v_left, v_right + int64_t v, left, right, val, v_left, v_right, new_local, remaining_mins ndarray[int64_t] result, result_a, result_b, dst_hours npy_datetimestruct dts bint infer_dst = False, is_dst = False, fill = False @@ -1005,9 +1005,13 @@ def tz_localize_to_utc(ndarray[int64_t] vals, object tz, object ambiguous=None, if shift: # Shift the nonexistent time forward to the closest existing # time - remaining_minutes = val % HOURS_NS - new_local = val + (HOURS_NS - remaining_minutes) - delta_idx = trans.searchsorted(new_local, side='right') - 1 + remaining_mins = val % HOURS_NS + new_local = val + (HOURS_NS - remaining_mins) + delta_idx = trans.searchsorted(new_local, side='right') + # Need to subtract 1 from the delta_idx if the UTC offset of + # the target tz is greater than 0 + delta_idx_offset = int(deltas[0] > 0) + delta_idx = delta_idx - delta_idx_offset result[i] = new_local - deltas[delta_idx] 
elif fill_nonexist: result[i] = NPY_NAT diff --git a/pandas/_libs/tslibs/nattype.pyx b/pandas/_libs/tslibs/nattype.pyx index 0eec84ecf8285..e45eb34bcafc1 100644 --- a/pandas/_libs/tslibs/nattype.pyx +++ b/pandas/_libs/tslibs/nattype.pyx @@ -484,6 +484,17 @@ class NaTType(_NaT): - 'raise' will raise an AmbiguousTimeError for an ambiguous time .. versionadded:: 0.24.0 + nonexistent : 'shift', 'NaT', default 'raise' + A nonexistent time does not exist in a particular timezone + where clocks moved forward due to DST. + + - 'shift' will shift the nonexistent time forward to the closest + existing time + - 'NaT' will return NaT where there are nonexistent times + - 'raise' will raise an NonExistentTimeError if there are + nonexistent times + + .. versionadded:: 0.24.0 Raises ------ @@ -503,6 +514,17 @@ class NaTType(_NaT): - 'raise' will raise an AmbiguousTimeError for an ambiguous time .. versionadded:: 0.24.0 + nonexistent : 'shift', 'NaT', default 'raise' + A nonexistent time does not exist in a particular timezone + where clocks moved forward due to DST. + + - 'shift' will shift the nonexistent time forward to the closest + existing time + - 'NaT' will return NaT where there are nonexistent times + - 'raise' will raise an NonExistentTimeError if there are + nonexistent times + + .. versionadded:: 0.24.0 Raises ------ @@ -522,6 +544,17 @@ class NaTType(_NaT): - 'raise' will raise an AmbiguousTimeError for an ambiguous time .. versionadded:: 0.24.0 + nonexistent : 'shift', 'NaT', default 'raise' + A nonexistent time does not exist in a particular timezone + where clocks moved forward due to DST. + + - 'shift' will shift the nonexistent time forward to the closest + existing time + - 'NaT' will return NaT where there are nonexistent times + - 'raise' will raise an NonExistentTimeError if there are + nonexistent times + + .. 
versionadded:: 0.24.0 Raises ------ diff --git a/pandas/_libs/tslibs/timestamps.pyx b/pandas/_libs/tslibs/timestamps.pyx index 08b0c5472549e..8fb4242ce2cc2 100644 --- a/pandas/_libs/tslibs/timestamps.pyx +++ b/pandas/_libs/tslibs/timestamps.pyx @@ -721,7 +721,7 @@ class Timestamp(_Timestamp): return create_timestamp_from_ts(ts.value, ts.dts, ts.tzinfo, freq) - def _round(self, freq, mode, ambiguous='raise'): + def _round(self, freq, mode, ambiguous='raise', nonexistent='raise'): if self.tz is not None: value = self.tz_localize(None).value else: @@ -733,10 +733,12 @@ class Timestamp(_Timestamp): r = round_nsint64(value, mode, freq)[0] result = Timestamp(r, unit='ns') if self.tz is not None: - result = result.tz_localize(self.tz, ambiguous=ambiguous) + result = result.tz_localize( + self.tz, ambiguous=ambiguous, nonexistent=nonexistent + ) return result - def round(self, freq, ambiguous='raise'): + def round(self, freq, ambiguous='raise', nonexistent='raise'): """ Round the Timestamp to the specified resolution @@ -754,14 +756,27 @@ class Timestamp(_Timestamp): - 'raise' will raise an AmbiguousTimeError for an ambiguous time .. versionadded:: 0.24.0 + nonexistent : 'shift', 'NaT', default 'raise' + A nonexistent time does not exist in a particular timezone + where clocks moved forward due to DST. + + - 'shift' will shift the nonexistent time forward to the closest + existing time + - 'NaT' will return NaT where there are nonexistent times + - 'raise' will raise an NonExistentTimeError if there are + nonexistent times + + .. 
versionadded:: 0.24.0 Raises ------ ValueError if the freq cannot be converted """ - return self._round(freq, RoundTo.NEAREST_HALF_EVEN, ambiguous) + return self._round( + freq, RoundTo.NEAREST_HALF_EVEN, ambiguous, nonexistent + ) - def floor(self, freq, ambiguous='raise'): + def floor(self, freq, ambiguous='raise', nonexistent='raise'): """ return a new Timestamp floored to this resolution @@ -775,14 +790,25 @@ class Timestamp(_Timestamp): - 'raise' will raise an AmbiguousTimeError for an ambiguous time .. versionadded:: 0.24.0 + nonexistent : 'shift', 'NaT', default 'raise' + A nonexistent time does not exist in a particular timezone + where clocks moved forward due to DST. + + - 'shift' will shift the nonexistent time forward to the closest + existing time + - 'NaT' will return NaT where there are nonexistent times + - 'raise' will raise an NonExistentTimeError if there are + nonexistent times + + .. versionadded:: 0.24.0 Raises ------ ValueError if the freq cannot be converted """ - return self._round(freq, RoundTo.MINUS_INFTY, ambiguous) + return self._round(freq, RoundTo.MINUS_INFTY, ambiguous, nonexistent) - def ceil(self, freq, ambiguous='raise'): + def ceil(self, freq, ambiguous='raise', nonexistent='raise'): """ return a new Timestamp ceiled to this resolution @@ -796,12 +822,23 @@ class Timestamp(_Timestamp): - 'raise' will raise an AmbiguousTimeError for an ambiguous time .. versionadded:: 0.24.0 + nonexistent : 'shift', 'NaT', default 'raise' + A nonexistent time does not exist in a particular timezone + where clocks moved forward due to DST. + + - 'shift' will shift the nonexistent time forward to the closest + existing time + - 'NaT' will return NaT where there are nonexistent times + - 'raise' will raise an NonExistentTimeError if there are + nonexistent times + + .. 
versionadded:: 0.24.0 Raises ------ ValueError if the freq cannot be converted """ - return self._round(freq, RoundTo.PLUS_INFTY, ambiguous) + return self._round(freq, RoundTo.PLUS_INFTY, ambiguous, nonexistent) @property def tz(self): diff --git a/pandas/core/indexes/datetimelike.py b/pandas/core/indexes/datetimelike.py index 14325f42ff0d8..326564cca9753 100644 --- a/pandas/core/indexes/datetimelike.py +++ b/pandas/core/indexes/datetimelike.py @@ -113,6 +113,17 @@ class TimelikeOps(object): Only relevant for DatetimeIndex .. versionadded:: 0.24.0 + nonexistent : 'shift', 'NaT', default 'raise' + A nonexistent time does not exist in a particular timezone + where clocks moved forward due to DST. + + - 'shift' will shift the nonexistent time forward to the closest + existing time + - 'NaT' will return NaT where there are nonexistent times + - 'raise' will raise an NonExistentTimeError if there are + nonexistent times + + .. versionadded:: 0.24.0 Returns ------- @@ -182,7 +193,7 @@ class TimelikeOps(object): """ ) - def _round(self, freq, mode, ambiguous): + def _round(self, freq, mode, ambiguous, nonexistent): # round the local times values = _ensure_datetimelike_to_i8(self) result = round_nsint64(values, mode, freq) @@ -194,20 +205,22 @@ def _round(self, freq, mode, ambiguous): if 'tz' in attribs: attribs['tz'] = None return self._ensure_localized( - self._shallow_copy(result, **attribs), ambiguous + self._shallow_copy(result, **attribs), ambiguous, nonexistent ) @Appender((_round_doc + _round_example).format(op="round")) - def round(self, freq, ambiguous='raise'): - return self._round(freq, RoundTo.NEAREST_HALF_EVEN, ambiguous) + def round(self, freq, ambiguous='raise', nonexistent='raise'): + return self._round( + freq, RoundTo.NEAREST_HALF_EVEN, ambiguous, nonexistent + ) @Appender((_round_doc + _floor_example).format(op="floor")) - def floor(self, freq, ambiguous='raise'): - return self._round(freq, RoundTo.MINUS_INFTY, ambiguous) + def floor(self, freq, 
ambiguous='raise', nonexistent='raise'): + return self._round(freq, RoundTo.MINUS_INFTY, ambiguous, nonexistent) @Appender((_round_doc + _ceil_example).format(op="ceil")) - def ceil(self, freq, ambiguous='raise'): - return self._round(freq, RoundTo.PLUS_INFTY, ambiguous) + def ceil(self, freq, ambiguous='raise', nonexistent='raise'): + return self._round(freq, RoundTo.PLUS_INFTY, ambiguous, nonexistent) class DatetimeIndexOpsMixin(DatetimeLikeArrayMixin): @@ -278,7 +291,8 @@ def _evaluate_compare(self, other, op): except TypeError: return result - def _ensure_localized(self, arg, ambiguous='raise', from_utc=False): + def _ensure_localized(self, arg, ambiguous='raise', nonexistent='raise', + from_utc=False): """ ensure that we are re-localized @@ -289,6 +303,7 @@ def _ensure_localized(self, arg, ambiguous='raise', from_utc=False): ---------- arg : DatetimeIndex / i8 ndarray ambiguous : str, bool, or bool-ndarray, default 'raise' + nonexistent : str, default 'raise' from_utc : bool, default False If True, localize the i8 ndarray to UTC first before converting to the appropriate tz. If False, localize directly to the tz. 
@@ -305,7 +320,9 @@ def _ensure_localized(self, arg, ambiguous='raise', from_utc=False): if from_utc: arg = arg.tz_localize('UTC').tz_convert(self.tz) else: - arg = arg.tz_localize(self.tz, ambiguous=ambiguous) + arg = arg.tz_localize( + self.tz, ambiguous=ambiguous, nonexistent=nonexistent + ) return arg def _box_values_as_index(self): diff --git a/pandas/tests/scalar/timestamp/test_unary_ops.py b/pandas/tests/scalar/timestamp/test_unary_ops.py index b6c783dc07aec..0c477a021df4d 100644 --- a/pandas/tests/scalar/timestamp/test_unary_ops.py +++ b/pandas/tests/scalar/timestamp/test_unary_ops.py @@ -134,8 +134,8 @@ def test_floor(self): assert result == expected @pytest.mark.parametrize('method', ['ceil', 'round', 'floor']) - def test_round_dst_border(self, method): - # GH 18946 round near DST + def test_round_dst_border_ambiguous(self, method): + # GH 18946 round near "fall back" DST ts = Timestamp('2017-10-29 00:00:00', tz='UTC').tz_convert( 'Europe/Madrid' ) @@ -155,6 +155,24 @@ def test_round_dst_border(self, method): with pytest.raises(pytz.AmbiguousTimeError): getattr(ts, method)('H', ambiguous='raise') + @pytest.mark.parametrize('method, ts_str, freq', [ + ['ceil', '2018-03-11 01:59:00-0600', '5min'], + ['round', '2018-03-11 01:59:00-0600', '5min'], + ['floor', '2018-03-11 03:01:00-0500', '2H']]) + def test_round_dst_border_nonexistent(self, method, ts_str, freq): + # GH 23324 round near "spring forward" DST + ts = Timestamp(ts_str, tz='America/Chicago') + result = getattr(ts, method)(freq, nonexistent='shift') + expected = Timestamp('2018-03-11 03:00:00', tz='America/Chicago') + assert result == expected + + result = getattr(ts, method)(freq, nonexistent='NaT') + assert result is NaT + + with pytest.raises(pytz.NonExistentTimeError, + message='2018-03-11 02:00:00'): + getattr(ts, method)(freq, nonexistent='raise') + @pytest.mark.parametrize('timestamp', [ '2018-01-01 0:0:0.124999360', '2018-01-01 0:0:0.125000367', diff --git 
a/pandas/tests/series/test_datetime_values.py b/pandas/tests/series/test_datetime_values.py index 1fd95c4205b0e..2f6efc112819c 100644 --- a/pandas/tests/series/test_datetime_values.py +++ b/pandas/tests/series/test_datetime_values.py @@ -253,7 +253,7 @@ def test_dt_round_tz(self): @pytest.mark.parametrize('method', ['ceil', 'round', 'floor']) def test_dt_round_tz_ambiguous(self, method): - # GH 18946 round near DST + # GH 18946 round near "fall back" DST df1 = pd.DataFrame([ pd.to_datetime('2017-10-29 02:00:00+02:00', utc=True), pd.to_datetime('2017-10-29 02:00:00+01:00', utc=True), @@ -282,6 +282,27 @@ def test_dt_round_tz_ambiguous(self, method): with pytest.raises(pytz.AmbiguousTimeError): getattr(df1.date.dt, method)('H', ambiguous='raise') + @pytest.mark.parametrize('method, ts_str, freq', [ + ['ceil', '2018-03-11 01:59:00-0600', '5min'], + ['round', '2018-03-11 01:59:00-0600', '5min'], + ['floor', '2018-03-11 03:01:00-0500', '2H']]) + def test_dt_round_tz_nonexistent(self, method, ts_str, freq): + # GH 23324 round near "spring forward" DST + s = Series([pd.Timestamp(ts_str, tz='America/Chicago')]) + result = getattr(s.dt, method)(freq, nonexistent='shift') + expected = Series( + [pd.Timestamp('2018-03-11 03:00:00', tz='America/Chicago')] + ) + tm.assert_series_equal(result, expected) + + result = getattr(s.dt, method)(freq, nonexistent='NaT') + expected = Series([pd.NaT]).dt.tz_localize(result.dt.tz) + tm.assert_series_equal(result, expected) + + with pytest.raises(pytz.NonExistentTimeError, + message='2018-03-11 02:00:00'): + getattr(s.dt, method)(freq, nonexistent='raise') + def test_dt_namespace_accessor_categorical(self): # GH 19468 dti = DatetimeIndex(['20171111', '20181212']).repeat(2)
- [x] closes #23324 - [x] tests added / passed - [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff` - [x] whatsnew entry Similar strategy to https://github.com/pandas-dev/pandas/pull/22647: added a `nonexistent` keyword argument to `round`, `ceil`, and `floor` to control rounding when encountering a `NonExistentTimeError`. This also fixes a bug in the `nonexistent='shift'` implementation from https://github.com/pandas-dev/pandas/pull/22644, where dates in timezones with negative UTC offsets got shifted by an additional hour. That bug is naturally exercised by these rounding tests.
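A minimal usage sketch (not from the PR itself) of the new keyword; the timestamp and frequency are borrowed from the test cases above:

```python
import pandas as pd

# 2018-03-11 02:00:00 does not exist in America/Chicago: clocks spring
# forward from 02:00 directly to 03:00 on that date.
ts = pd.Timestamp('2018-03-11 01:59:00', tz='America/Chicago')

# Ceiling to 5 minutes would land on the nonexistent 02:00:00;
# nonexistent='NaT' returns NaT instead of raising NonExistentTimeError.
result = ts.ceil('5min', nonexistent='NaT')
print(result)  # NaT
```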
https://api.github.com/repos/pandas-dev/pandas/pulls/23406
2018-10-29T07:13:37Z
2018-11-02T14:15:10Z
2018-11-02T14:15:10Z
2018-11-02T15:18:06Z
DOC: Add DateOffsets to api.rst
diff --git a/doc/source/api.rst b/doc/source/api.rst index 8f0f5fa7610eb..665649aead33c 100644 --- a/doc/source/api.rst +++ b/doc/source/api.rst @@ -2104,6 +2104,62 @@ Methods Timedelta.to_timedelta64 Timedelta.total_seconds +.. _api.dateoffsets: + +Date Offsets +------------ + +.. currentmodule:: pandas.tseries.offsets + +.. autosummary:: + :toctree: generated/ + + DateOffset + BusinessDay + BusinessHour + CustomBusinessDay + CustomBusinessHour + MonthOffset + MonthEnd + MonthBegin + BusinessMonthEnd + BusinessMonthBegin + CustomBusinessMonthEnd + CustomBusinessMonthBegin + SemiMonthOffset + SemiMonthEnd + SemiMonthBegin + Week + WeekOfMonth + LastWeekOfMonth + QuarterOffset + BQuarterEnd + BQuarterBegin + QuarterEnd + QuarterBegin + YearOffset + BYearEnd + BYearBegin + YearEnd + YearBegin + FY5253 + FY5253Quarter + Easter + Tick + Day + Hour + Minute + Second + Milli + Micro + Nano + BDay + BMonthEnd + BMonthBegin + CBMonthEnd + CBMonthBegin + CDay + .. _api.frequencies: Frequencies
- [x] closes #22447
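These classes were already importable from `pandas.tseries.offsets`; the PR only surfaces them in the API docs. A quick illustration of two of the listed offsets:

```python
import pandas as pd
from pandas.tseries.offsets import BDay, MonthEnd

ts = pd.Timestamp('2018-11-01')  # a Thursday

# BDay steps over weekends: Thursday + 2 business days lands on Monday.
print(ts + BDay(2))      # 2018-11-05
# MonthEnd rolls forward to the last calendar day of the month.
print(ts + MonthEnd(1))  # 2018-11-30
```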
https://api.github.com/repos/pandas-dev/pandas/pulls/23405
2018-10-29T05:08:30Z
2018-11-01T01:09:03Z
2018-11-01T01:09:03Z
2018-11-01T22:01:24Z
PERF: speed up concat on Series by skipping unnecessary DataFrame creation
diff --git a/doc/source/whatsnew/v0.24.0.txt b/doc/source/whatsnew/v0.24.0.txt index fc7019c486d9a..f53e270b18f97 100644 --- a/doc/source/whatsnew/v0.24.0.txt +++ b/doc/source/whatsnew/v0.24.0.txt @@ -1060,6 +1060,7 @@ Performance Improvements - Improved the performance of :func:`pandas.get_dummies` with ``sparse=True`` (:issue:`21997`) - Improved performance of :func:`IndexEngine.get_indexer_non_unique` for sorted, non-unique indexes (:issue:`9466`) - Improved performance of :func:`PeriodIndex.unique` (:issue:`23083`) +- Improved performance of :func:`pd.concat` for `Series` objects (:issue:`23404`) .. _whatsnew_0240.docs: diff --git a/pandas/core/generic.py b/pandas/core/generic.py index db10494f0724d..6ca8f6731bbb8 100644 --- a/pandas/core/generic.py +++ b/pandas/core/generic.py @@ -358,41 +358,44 @@ def _from_axes(cls, data, axes, **kwargs): d.update(kwargs) return cls(data, **d) - def _get_axis_number(self, axis): - axis = self._AXIS_ALIASES.get(axis, axis) + @classmethod + def _get_axis_number(cls, axis): + axis = cls._AXIS_ALIASES.get(axis, axis) if is_integer(axis): - if axis in self._AXIS_NAMES: + if axis in cls._AXIS_NAMES: return axis else: try: - return self._AXIS_NUMBERS[axis] + return cls._AXIS_NUMBERS[axis] except KeyError: pass raise ValueError('No axis named {0} for object type {1}' - .format(axis, type(self))) + .format(axis, type(cls))) - def _get_axis_name(self, axis): - axis = self._AXIS_ALIASES.get(axis, axis) + @classmethod + def _get_axis_name(cls, axis): + axis = cls._AXIS_ALIASES.get(axis, axis) if isinstance(axis, string_types): - if axis in self._AXIS_NUMBERS: + if axis in cls._AXIS_NUMBERS: return axis else: try: - return self._AXIS_NAMES[axis] + return cls._AXIS_NAMES[axis] except KeyError: pass raise ValueError('No axis named {0} for object type {1}' - .format(axis, type(self))) + .format(axis, type(cls))) def _get_axis(self, axis): name = self._get_axis_name(axis) return getattr(self, name) - def _get_block_manager_axis(self, axis): 
+ @classmethod + def _get_block_manager_axis(cls, axis): """Map the axis to the block_manager axis.""" - axis = self._get_axis_number(axis) - if self._AXIS_REVERSED: - m = self._AXIS_LEN - 1 + axis = cls._get_axis_number(axis) + if cls._AXIS_REVERSED: + m = cls._AXIS_LEN - 1 return m - axis return axis diff --git a/pandas/core/reshape/concat.py b/pandas/core/reshape/concat.py index 9f8564541a936..0e60068732447 100644 --- a/pandas/core/reshape/concat.py +++ b/pandas/core/reshape/concat.py @@ -322,7 +322,7 @@ def __init__(self, objs, axis=0, join='outer', join_axes=None, # Standardize axis parameter to int if isinstance(sample, Series): - axis = DataFrame()._get_axis_number(axis) + axis = DataFrame._get_axis_number(axis) else: axis = sample._get_axis_number(axis) diff --git a/pandas/tests/generic/test_generic.py b/pandas/tests/generic/test_generic.py index 1652835de8228..46bb6303d8908 100644 --- a/pandas/tests/generic/test_generic.py +++ b/pandas/tests/generic/test_generic.py @@ -1019,3 +1019,15 @@ def test_pipe_panel(self): with pytest.raises(ValueError): result = wp.pipe((f, 'y'), x=1, y=1) + + @pytest.mark.parametrize('box', [pd.Series, pd.DataFrame]) + def test_axis_classmethods(self, box): + obj = box() + values = (list(box._AXIS_NAMES.keys()) + + list(box._AXIS_NUMBERS.keys()) + + list(box._AXIS_ALIASES.keys())) + for v in values: + assert obj._get_axis_number(v) == box._get_axis_number(v) + assert obj._get_axis_name(v) == box._get_axis_name(v) + assert obj._get_block_manager_axis(v) == \ + box._get_block_manager_axis(v)
Removes an unnecessary `DataFrame` creation when dealing solely with `Series` objects, which reduces runtime of `concat`. - [ ] xref #23362 - [x] tests passed - [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff` - [x] whatsnew entry ### Performance Comparison ```python import pandas as pd import numpy as np a = np.random.randint(2000, 2100, size=100000) # larger n than #23362 b = np.random.randint(2000, 2100, size=100000) x = pd.core.arrays.period_array(a, freq='B') y = pd.core.arrays.period_array(b, freq='B') s = pd.Series(x) t = pd.Series(y) ``` #### Baseline ```python %timeit x._concat_same_type([x, y]) 202 µs ± 10.8 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each) %timeit pd.concat([s, t], ignore_index=True) 839 µs ± 32.6 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each) ``` #### After ```python %timeit pd.concat([s, t], ignore_index=True) 396 µs ± 12.5 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each) ``` And the excised code itself: ```python In [13]: pd.DataFrame()._get_axis_number(0) Out[13]: 0 In [14]: %timeit pd.DataFrame()._get_axis_number(0) Out[14]: 312 µs ± 23.7 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each) ``` So roughly 40% of the runtime was being spent mapping `axis=0 -> axis=0`
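The core idea, independent of pandas internals: axis resolution only consults class-level lookup tables, so no throwaway instance is needed. A hypothetical stripped-down sketch (class and table contents invented for illustration, loosely modeled on the `_AXIS_*` dicts in the diff):

```python
class Frame:
    # Class-level lookup tables, analogous to pandas' _AXIS_* dicts.
    _AXIS_ALIASES = {'rows': 0}
    _AXIS_NUMBERS = {'index': 0, 'columns': 1}

    @classmethod
    def _get_axis_number(cls, axis):
        # Resolve an alias, integer, or axis name to its number
        # without ever constructing a Frame instance.
        axis = cls._AXIS_ALIASES.get(axis, axis)
        if isinstance(axis, int) and axis in (0, 1):
            return axis
        return cls._AXIS_NUMBERS[axis]

# Callable on the class itself -- no Frame() construction on the hot path.
print(Frame._get_axis_number('columns'))  # 1
print(Frame._get_axis_number('rows'))     # 0
```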
https://api.github.com/repos/pandas-dev/pandas/pulls/23404
2018-10-29T04:23:12Z
2018-11-02T13:46:23Z
2018-11-02T13:46:23Z
2018-11-02T17:51:11Z
BUG: Respect axis when doing DataFrame.expanding
diff --git a/doc/source/whatsnew/v0.24.0.txt b/doc/source/whatsnew/v0.24.0.txt index de111072bef02..25c2491e0120f 100644 --- a/doc/source/whatsnew/v0.24.0.txt +++ b/doc/source/whatsnew/v0.24.0.txt @@ -1238,6 +1238,7 @@ Groupby/Resample/Rolling - Bug in :meth:`SeriesGroupBy.mean` when values were integral but could not fit inside of int64, overflowing instead. (:issue:`22487`) - :func:`RollingGroupby.agg` and :func:`ExpandingGroupby.agg` now support multiple aggregation functions as parameters (:issue:`15072`) - Bug in :meth:`DataFrame.resample` and :meth:`Series.resample` when resampling by a weekly offset (``'W'``) across a DST transition (:issue:`9119`, :issue:`21459`) +- Bug in :meth:`DataFrame.expanding` in which the ``axis`` argument was not being respected during aggregations (:issue:`23372`) Reshaping ^^^^^^^^^ diff --git a/pandas/core/window.py b/pandas/core/window.py index 7d48967602bc1..5256532a31870 100644 --- a/pandas/core/window.py +++ b/pandas/core/window.py @@ -1866,12 +1866,25 @@ def _constructor(self): return Expanding def _get_window(self, other=None): - obj = self._selected_obj - if other is None: - return (max(len(obj), self.min_periods) if self.min_periods - else len(obj)) - return (max((len(obj) + len(obj)), self.min_periods) - if self.min_periods else (len(obj) + len(obj))) + """ + Get the window length over which to perform some operation. + + Parameters + ---------- + other : object, default None + The other object that is involved in the operation. + Such an object is involved for operations like covariance. + + Returns + ------- + window : int + The window length. 
+ """ + axis = self.obj._get_axis(self.axis) + length = len(axis) + (other is not None) * len(axis) + + other = self.min_periods or -1 + return max(length, other) _agg_doc = dedent(""" Examples diff --git a/pandas/tests/test_window.py b/pandas/tests/test_window.py index 4b0c4d581a008..c7cd04deac6c8 100644 --- a/pandas/tests/test_window.py +++ b/pandas/tests/test_window.py @@ -627,6 +627,25 @@ def test_iter_raises(self, klass): with pytest.raises(NotImplementedError): iter(obj.rolling(2)) + def test_rolling_axis(self, axis_frame): + # see gh-23372. + df = DataFrame(np.ones((10, 20))) + axis = df._get_axis_number(axis_frame) + + if axis == 0: + expected = DataFrame({ + i: [np.nan] * 2 + [3.0] * 8 + for i in range(20) + }) + else: + # axis == 1 + expected = DataFrame([ + [np.nan] * 2 + [3.0] * 18 + ] * 10) + + result = df.rolling(3, axis=axis_frame).sum() + tm.assert_frame_equal(result, expected) + class TestExpanding(Base): @@ -714,6 +733,25 @@ def test_iter_raises(self, klass): with pytest.raises(NotImplementedError): iter(obj.expanding(2)) + def test_expanding_axis(self, axis_frame): + # see gh-23372. + df = DataFrame(np.ones((10, 20))) + axis = df._get_axis_number(axis_frame) + + if axis == 0: + expected = DataFrame({ + i: [np.nan] * 2 + [float(j) for j in range(3, 11)] + for i in range(20) + }) + else: + # axis == 1 + expected = DataFrame([ + [np.nan] * 2 + [float(i) for i in range(3, 21)] + ] * 10) + + result = df.expanding(3, axis=axis_frame).sum() + tm.assert_frame_equal(result, expected) + class TestEWM(Base):
Closes #23372.
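For context, the behavior being fixed: an expanding sum along `axis=1` should accumulate across columns within each row. A small sketch of the expected result, written with a transpose so it does not depend on the `axis` keyword itself:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame(np.ones((2, 5)))

# Expanding along columns == transpose, expand along rows, transpose back.
result = df.T.expanding(3).sum().T

# Each row: NaN, NaN (fewer than min_periods=3 values), then 3.0, 4.0, 5.0.
print(result)
```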
https://api.github.com/repos/pandas-dev/pandas/pulls/23402
2018-10-29T00:37:21Z
2018-11-01T00:35:39Z
2018-11-01T00:35:38Z
2018-11-01T00:40:48Z
CLN: Follow-up comments to gh-23392
diff --git a/doc/source/whatsnew/v0.24.0.txt b/doc/source/whatsnew/v0.24.0.txt index 89acd1a14a412..f4d2e776c137d 100644 --- a/doc/source/whatsnew/v0.24.0.txt +++ b/doc/source/whatsnew/v0.24.0.txt @@ -942,7 +942,7 @@ Removal of prior version deprecations/changes - Removal of the previously deprecated module ``pandas.core.datetools`` (:issue:`14105`, :issue:`14094`) - Strings passed into :meth:`DataFrame.groupby` that refer to both column and index levels will raise a ``ValueError`` (:issue:`14432`) - :meth:`Index.repeat` and :meth:`MultiIndex.repeat` have renamed the ``n`` argument to ``repeats`` (:issue:`14645`) -- The ``Series`` constructor and ``.astype`` method will now raise a ``ValueError`` if timestamp dtypes are passed in without a frequency (e.g. ``np.datetime64``) for the ``dtype`` parameter (:issue:`15987`) +- The ``Series`` constructor and ``.astype`` method will now raise a ``ValueError`` if timestamp dtypes are passed in without a unit (e.g. ``np.datetime64``) for the ``dtype`` parameter (:issue:`15987`) - Removal of the previously deprecated ``as_indexer`` keyword completely from ``str.match()`` (:issue:`22356`, :issue:`6581`) - The modules ``pandas.types``, ``pandas.computation``, and ``pandas.util.decorators`` have been removed (:issue:`16157`, :issue:`16250`) - Removed the ``pandas.formats.style`` shim for :class:`pandas.io.formats.style.Styler` (:issue:`16059`) diff --git a/doc/test.parquet b/doc/test.parquet new file mode 100644 index 0000000000000..cc2f7edbb6ee4 Binary files /dev/null and b/doc/test.parquet differ diff --git a/pandas/core/dtypes/cast.py b/pandas/core/dtypes/cast.py index f8b7fb7d88ee0..47e17c9868cd7 100644 --- a/pandas/core/dtypes/cast.py +++ b/pandas/core/dtypes/cast.py @@ -667,7 +667,7 @@ def astype_nansafe(arr, dtype, copy=True, skipna=False): Raises ------ ValueError - The dtype was a datetime /timedelta dtype, but it had no frequency. + The dtype was a datetime64/timedelta64 dtype, but it had no unit. 
""" # dispatch on extension dtype if needed @@ -749,7 +749,7 @@ def astype_nansafe(arr, dtype, copy=True, skipna=False): return astype_nansafe(to_timedelta(arr).values, dtype, copy=copy) if dtype.name in ("datetime64", "timedelta64"): - msg = ("The '{dtype}' dtype has no frequency. " + msg = ("The '{dtype}' dtype has no unit. " "Please pass in '{dtype}[ns]' instead.") raise ValueError(msg.format(dtype=dtype.name)) @@ -1021,7 +1021,7 @@ def maybe_cast_to_datetime(value, dtype, errors='raise'): if is_datetime64 or is_datetime64tz or is_timedelta64: # Force the dtype if needed. - msg = ("The '{dtype}' dtype has no frequency. " + msg = ("The '{dtype}' dtype has no unit. " "Please pass in '{dtype}[ns]' instead.") if is_datetime64 and not is_dtype_equal(dtype, _NS_DTYPE): diff --git a/pandas/tests/series/test_constructors.py b/pandas/tests/series/test_constructors.py index bdd99dd485042..7595b1278a291 100644 --- a/pandas/tests/series/test_constructors.py +++ b/pandas/tests/series/test_constructors.py @@ -1198,7 +1198,7 @@ def test_constructor_cast_object(self, index): ]) def test_constructor_generic_timestamp_no_frequency(self, dtype): # see gh-15524, gh-15987 - msg = "dtype has no frequency. Please pass in" + msg = "dtype has no unit. Please pass in" with tm.assert_raises_regex(ValueError, msg): Series([], dtype=dtype) diff --git a/pandas/tests/series/test_dtypes.py b/pandas/tests/series/test_dtypes.py index c62531241369d..64184a6465ba3 100644 --- a/pandas/tests/series/test_dtypes.py +++ b/pandas/tests/series/test_dtypes.py @@ -404,7 +404,7 @@ def test_astype_generic_timestamp_no_frequency(self, dtype): data = [1] s = Series(data) - msg = "dtype has no frequency. Please pass in" + msg = "dtype has no unit. Please pass in" with tm.assert_raises_regex(ValueError, msg): s.astype(dtype)
* Use 'unit' instead of 'frequency' * Minor spacing issues in docs Follow-up to #23392.
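A quick check of the reworded message (assuming a pandas build that includes this change):

```python
import pandas as pd

# A datetime64 dtype without a unit is rejected; the message now says
# "unit" rather than "frequency".
msg = ''
try:
    pd.Series([], dtype='datetime64')
except ValueError as e:
    msg = str(e)
print(msg)
```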
https://api.github.com/repos/pandas-dev/pandas/pulls/23401
2018-10-28T19:30:44Z
2018-10-28T23:57:07Z
2018-10-28T23:57:06Z
2018-10-28T23:57:54Z