title | diff | body | url | created_at | closed_at | merged_at | updated_at |
|---|---|---|---|---|---|---|---|
CI/DOC: unpin gitdb #35823 | diff --git a/environment.yml b/environment.yml
index 806119631d5ee..c9b1fa108c00b 100644
--- a/environment.yml
+++ b/environment.yml
@@ -26,7 +26,7 @@ dependencies:
# documentation
- gitpython # obtain contributors from git for whatsnew
- - gitdb2=2.0.6 # GH-32060
+ - gitdb
- sphinx
# documentation (jupyter notebooks)
diff --git a/requirements-dev.txt b/requirements-dev.txt
index deaed8ab9d5f1..d5510ddc24034 100644
--- a/requirements-dev.txt
+++ b/requirements-dev.txt
@@ -15,7 +15,7 @@ isort>=5.2.1
mypy==0.730
pycodestyle
gitpython
-gitdb2==2.0.6
+gitdb
sphinx
nbconvert>=5.4.1
nbsphinx
| - [x] closes #35823
| https://api.github.com/repos/pandas-dev/pandas/pulls/35824 | 2020-08-20T15:35:04Z | 2020-08-21T21:12:59Z | 2020-08-21T21:12:59Z | 2020-08-21T21:52:46Z |
Backport PR #35801 on branch 1.1.x (DOC: another pass of v1.1.1 release notes) | diff --git a/doc/source/whatsnew/v1.1.1.rst b/doc/source/whatsnew/v1.1.1.rst
index 43ffed273adbc..721f07c865409 100644
--- a/doc/source/whatsnew/v1.1.1.rst
+++ b/doc/source/whatsnew/v1.1.1.rst
@@ -1,6 +1,6 @@
.. _whatsnew_111:
-What's new in 1.1.1 (August XX, 2020)
+What's new in 1.1.1 (August 20, 2020)
-------------------------------------
These are the changes in pandas 1.1.1. See :ref:`release` for a full changelog
@@ -27,7 +27,7 @@ Fixed regressions
- Fixed regression in ``.groupby(..).rolling(..)`` where a segfault would occur with ``center=True`` and an odd number of values (:issue:`35552`)
- Fixed regression in :meth:`DataFrame.apply` where functions that altered the input in-place only operated on a single row (:issue:`35462`)
- Fixed regression in :meth:`DataFrame.reset_index` would raise a ``ValueError`` on empty :class:`DataFrame` with a :class:`MultiIndex` with a ``datetime64`` dtype level (:issue:`35606`, :issue:`35657`)
-- Fixed regression where :func:`pandas.merge_asof` would raise a ``UnboundLocalError`` when ``left_index`` , ``right_index`` and ``tolerance`` were set (:issue:`35558`)
+- Fixed regression where :func:`pandas.merge_asof` would raise a ``UnboundLocalError`` when ``left_index``, ``right_index`` and ``tolerance`` were set (:issue:`35558`)
- Fixed regression in ``.groupby(..).rolling(..)`` where a custom ``BaseIndexer`` would be ignored (:issue:`35557`)
- Fixed regression in :meth:`DataFrame.replace` and :meth:`Series.replace` where compiled regular expressions would be ignored during replacement (:issue:`35680`)
- Fixed regression in :meth:`~pandas.core.groupby.DataFrameGroupBy.aggregate` where a list of functions would produce the wrong results if at least one of the functions did not aggregate (:issue:`35490`)
@@ -40,11 +40,11 @@ Fixed regressions
Bug fixes
~~~~~~~~~
-- Bug in :class:`~pandas.io.formats.style.Styler` whereby `cell_ids` argument had no effect due to other recent changes (:issue:`35588`) (:issue:`35663`)
+- Bug in :class:`~pandas.io.formats.style.Styler` whereby ``cell_ids`` argument had no effect due to other recent changes (:issue:`35588`) (:issue:`35663`)
- Bug in :func:`pandas.testing.assert_series_equal` and :func:`pandas.testing.assert_frame_equal` where extension dtypes were not ignored when ``check_dtypes`` was set to ``False`` (:issue:`35715`)
-- Bug in :meth:`to_timedelta` fails when arg is a :class:`Series` with `Int64` dtype containing null values (:issue:`35574`)
+- Bug in :meth:`to_timedelta` fails when ``arg`` is a :class:`Series` with ``Int64`` dtype containing null values (:issue:`35574`)
- Bug in ``.groupby(..).rolling(..)`` where passing ``closed`` with column selection would raise a ``ValueError`` (:issue:`35549`)
-- Bug in :class:`DataFrame` constructor failing to raise ``ValueError`` in some cases when data and index have mismatched lengths (:issue:`33437`)
+- Bug in :class:`DataFrame` constructor failing to raise ``ValueError`` in some cases when ``data`` and ``index`` have mismatched lengths (:issue:`33437`)
.. ---------------------------------------------------------------------------
| Backport PR #35801: DOC: another pass of v1.1.1 release notes | https://api.github.com/repos/pandas-dev/pandas/pulls/35822 | 2020-08-20T15:06:03Z | 2020-08-20T16:36:03Z | 2020-08-20T16:36:02Z | 2020-08-20T16:36:03Z |
Backport PR #35809 on branch 1.1.x (CI: more xfail failing 32-bit tests) | diff --git a/pandas/tests/window/test_grouper.py b/pandas/tests/window/test_grouper.py
index a9590c7e1233a..d0a62374d0888 100644
--- a/pandas/tests/window/test_grouper.py
+++ b/pandas/tests/window/test_grouper.py
@@ -215,6 +215,7 @@ def foo(x):
)
tm.assert_series_equal(result, expected)
+ @pytest.mark.xfail(not compat.IS64, reason="GH-35294")
def test_groupby_rolling_center_center(self):
# GH 35552
series = Series(range(1, 6))
@@ -280,6 +281,7 @@ def test_groupby_rolling_center_center(self):
)
tm.assert_frame_equal(result, expected)
+ @pytest.mark.xfail(not compat.IS64, reason="GH-35294")
def test_groupby_subselect_rolling(self):
# GH 35486
df = DataFrame(
@@ -305,6 +307,7 @@ def test_groupby_subselect_rolling(self):
)
tm.assert_series_equal(result, expected)
+ @pytest.mark.xfail(not compat.IS64, reason="GH-35294")
def test_groupby_rolling_custom_indexer(self):
# GH 35557
class SimpleIndexer(pd.api.indexers.BaseIndexer):
@@ -328,6 +331,7 @@ def get_window_bounds(
expected = df.groupby(df.index).rolling(window=3, min_periods=1).sum()
tm.assert_frame_equal(result, expected)
+ @pytest.mark.xfail(not compat.IS64, reason="GH-35294")
def test_groupby_rolling_subset_with_closed(self):
# GH 35549
df = pd.DataFrame(
@@ -352,6 +356,7 @@ def test_groupby_rolling_subset_with_closed(self):
)
tm.assert_series_equal(result, expected)
+ @pytest.mark.xfail(not compat.IS64, reason="GH-35294")
def test_groupby_subset_rolling_subset_with_closed(self):
# GH 35549
df = pd.DataFrame(
| Backport PR #35809: CI: more xfail failing 32-bit tests | https://api.github.com/repos/pandas-dev/pandas/pulls/35821 | 2020-08-20T12:05:33Z | 2020-08-20T13:32:05Z | 2020-08-20T13:32:05Z | 2020-08-20T13:32:05Z |
CI/DOC: unpin sphinx, fix autodoc usage | diff --git a/doc/source/reference/frame.rst b/doc/source/reference/frame.rst
index e3dfb552651a0..4d9d18e3d204e 100644
--- a/doc/source/reference/frame.rst
+++ b/doc/source/reference/frame.rst
@@ -343,6 +343,7 @@ Sparse-dtype specific methods and attributes are provided under the
.. autosummary::
:toctree: api/
+ :template: autosummary/accessor_method.rst
DataFrame.sparse.from_spmatrix
DataFrame.sparse.to_coo
diff --git a/doc/source/reference/series.rst b/doc/source/reference/series.rst
index 3b595ba5ab206..ae3e121ca8212 100644
--- a/doc/source/reference/series.rst
+++ b/doc/source/reference/series.rst
@@ -522,6 +522,7 @@ Sparse-dtype specific methods and attributes are provided under the
.. autosummary::
:toctree: api/
+ :template: autosummary/accessor_method.rst
Series.sparse.from_coo
Series.sparse.to_coo
diff --git a/environment.yml b/environment.yml
index aaabf09b8f190..806119631d5ee 100644
--- a/environment.yml
+++ b/environment.yml
@@ -27,7 +27,7 @@ dependencies:
# documentation
- gitpython # obtain contributors from git for whatsnew
- gitdb2=2.0.6 # GH-32060
- - sphinx<=3.1.1
+ - sphinx
# documentation (jupyter notebooks)
- nbconvert>=5.4.1
diff --git a/requirements-dev.txt b/requirements-dev.txt
index 3d0778b74ccbd..deaed8ab9d5f1 100644
--- a/requirements-dev.txt
+++ b/requirements-dev.txt
@@ -16,7 +16,7 @@ mypy==0.730
pycodestyle
gitpython
gitdb2==2.0.6
-sphinx<=3.1.1
+sphinx
nbconvert>=5.4.1
nbsphinx
pandoc
| - [x] closes #35138
| https://api.github.com/repos/pandas-dev/pandas/pulls/35815 | 2020-08-20T02:14:50Z | 2020-08-20T07:24:26Z | 2020-08-20T07:24:26Z | 2020-08-21T16:06:02Z |
TST: Fix test_parquet failures for pyarrow 1.0 | diff --git a/pandas/tests/io/test_parquet.py b/pandas/tests/io/test_parquet.py
index 3a3ba99484a3a..4e0c16c71a6a8 100644
--- a/pandas/tests/io/test_parquet.py
+++ b/pandas/tests/io/test_parquet.py
@@ -565,15 +565,22 @@ def test_s3_roundtrip(self, df_compat, s3_resource, pa):
@pytest.mark.parametrize("partition_col", [["A"], []])
def test_s3_roundtrip_for_dir(self, df_compat, s3_resource, pa, partition_col):
# GH #26388
- # https://github.com/apache/arrow/blob/master/python/pyarrow/tests/test_parquet.py#L2716
- # As per pyarrow partitioned columns become 'categorical' dtypes
- # and are added to back of dataframe on read
- if partition_col and pd.compat.is_platform_windows():
- pytest.skip("pyarrow/win incompatibility #35791")
-
expected_df = df_compat.copy()
- if partition_col:
- expected_df[partition_col] = expected_df[partition_col].astype("category")
+
+ # GH #35791
+ # read_table uses the new Arrow Datasets API since pyarrow 1.0.0
+ # Previous behaviour was pyarrow partitioned columns become 'category' dtypes
+ # These are added to back of dataframe on read. In new API category dtype is
+ # only used if partition field is string.
+ legacy_read_table = LooseVersion(pyarrow.__version__) < LooseVersion("1.0.0")
+ if partition_col and legacy_read_table:
+ partition_col_type = "category"
+ else:
+ partition_col_type = "int32"
+
+ expected_df[partition_col] = expected_df[partition_col].astype(
+ partition_col_type
+ )
check_round_trip(
df_compat,
| - [x] closes https://github.com/pandas-dev/pandas/issues/35791
- [x] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
cc @martindurant -> looks like this is a pyarrow 1.0.0 compat issue (read_table uses the new API) - https://arrow.apache.org/docs/python/generated/pyarrow.parquet.read_table.html
I noticed the partition cols are cast from int64 -> int32. Is that expected pyarrow behaviour? The write_table docs for versions 1.0/2.0 suggest it is: https://arrow.apache.org/docs/python/generated/pyarrow.parquet.write_table.html#pyarrow.parquet.write_table
| https://api.github.com/repos/pandas-dev/pandas/pulls/35814 | 2020-08-20T01:31:11Z | 2020-08-25T06:41:49Z | 2020-08-25T06:41:49Z | 2020-08-25T07:08:14Z |
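The `LooseVersion` gate in the diff above (legacy `category` behaviour below pyarrow 1.0.0, `int32` after) can be mirrored with a small stand-alone helper. This is a sketch with a hypothetical helper name, assuming simple dotted numeric version strings rather than pandas' actual compat machinery:

```python
def expected_partition_dtype(pyarrow_version: str) -> str:
    # Hypothetical helper mirroring the diff's version gate: before
    # pyarrow 1.0.0, read_table turned partition columns into 'category';
    # afterwards, integer partition fields come back as 'int32'.
    major, minor = (int(p) for p in pyarrow_version.split(".")[:2])
    return "category" if (major, minor) < (1, 0) else "int32"

print(expected_partition_dtype("0.17.1"))  # category
print(expected_partition_dtype("1.0.0"))   # int32
```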
WEB: Add new Maintainers to Team Page | diff --git a/web/pandas/config.yml b/web/pandas/config.yml
index 23575cc123050..9a178d26659c3 100644
--- a/web/pandas/config.yml
+++ b/web/pandas/config.yml
@@ -79,6 +79,13 @@ maintainers:
- datapythonista
- simonjayhawkins
- topper-123
+ - alimcmaster1
+ - bashtage
+ - charlesdong1991
+ - Dr-Irv
+ - dsaxton
+ - MarcoGorelli
+ - rhshadrach
emeritus:
- Wouter Overmeire
- Skipper Seabold
| Add new maintainers.
cc:
@bashtage
@charlesdong1991
@Dr-Irv
@dsaxton
@MarcoGorelli
@rhshadrach
In case anyone doesn't want to appear here: https://pandas.pydata.org/about/team.html | https://api.github.com/repos/pandas-dev/pandas/pulls/35812 | 2020-08-19T22:37:48Z | 2020-08-20T16:26:31Z | 2020-08-20T16:26:31Z | 2020-08-20T16:26:36Z |
CI: more xfail failing 32-bit tests | diff --git a/pandas/tests/window/test_grouper.py b/pandas/tests/window/test_grouper.py
index a9590c7e1233a..d0a62374d0888 100644
--- a/pandas/tests/window/test_grouper.py
+++ b/pandas/tests/window/test_grouper.py
@@ -215,6 +215,7 @@ def foo(x):
)
tm.assert_series_equal(result, expected)
+ @pytest.mark.xfail(not compat.IS64, reason="GH-35294")
def test_groupby_rolling_center_center(self):
# GH 35552
series = Series(range(1, 6))
@@ -280,6 +281,7 @@ def test_groupby_rolling_center_center(self):
)
tm.assert_frame_equal(result, expected)
+ @pytest.mark.xfail(not compat.IS64, reason="GH-35294")
def test_groupby_subselect_rolling(self):
# GH 35486
df = DataFrame(
@@ -305,6 +307,7 @@ def test_groupby_subselect_rolling(self):
)
tm.assert_series_equal(result, expected)
+ @pytest.mark.xfail(not compat.IS64, reason="GH-35294")
def test_groupby_rolling_custom_indexer(self):
# GH 35557
class SimpleIndexer(pd.api.indexers.BaseIndexer):
@@ -328,6 +331,7 @@ def get_window_bounds(
expected = df.groupby(df.index).rolling(window=3, min_periods=1).sum()
tm.assert_frame_equal(result, expected)
+ @pytest.mark.xfail(not compat.IS64, reason="GH-35294")
def test_groupby_rolling_subset_with_closed(self):
# GH 35549
df = pd.DataFrame(
@@ -352,6 +356,7 @@ def test_groupby_rolling_subset_with_closed(self):
)
tm.assert_series_equal(result, expected)
+ @pytest.mark.xfail(not compat.IS64, reason="GH-35294")
def test_groupby_subset_rolling_subset_with_closed(self):
# GH 35549
df = pd.DataFrame(
| xref #35294 and https://github.com/pandas-dev/pandas/issues/35489#issuecomment-676550757 | https://api.github.com/repos/pandas-dev/pandas/pulls/35809 | 2020-08-19T18:03:26Z | 2020-08-20T12:03:31Z | 2020-08-20T12:03:31Z | 2020-08-20T12:05:50Z |
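The `compat.IS64` flag used in the xfail conditions above boils down to a pointer-width check; a minimal sketch (pandas' own definition may differ in detail), where both stdlib expressions agree on CPython:

```python
import struct
import sys

# A 64-bit interpreter has sys.maxsize == 2**63 - 1; 32-bit builds have
# 2**31 - 1, so this distinguishes the platforms the xfails target.
IS64 = sys.maxsize > 2**32

# Cross-check via the size of a C pointer ("P") in bits.
assert IS64 == (struct.calcsize("P") * 8 == 64)

print(IS64)
```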
Backport PR #35672 on branch 1.1.x CI: test_chunks_have_consistent_numerical_type periodically fails on 1.1.x | diff --git a/pandas/tests/io/parser/test_common.py b/pandas/tests/io/parser/test_common.py
index 12e73bae40eac..bacce7585c205 100644
--- a/pandas/tests/io/parser/test_common.py
+++ b/pandas/tests/io/parser/test_common.py
@@ -1138,6 +1138,7 @@ def test_parse_integers_above_fp_precision(all_parsers):
tm.assert_frame_equal(result, expected)
+@pytest.mark.xfail(reason="ResourceWarning #35660", strict=False)
def test_chunks_have_consistent_numerical_type(all_parsers):
parser = all_parsers
integers = [str(i) for i in range(499999)]
@@ -1151,6 +1152,7 @@ def test_chunks_have_consistent_numerical_type(all_parsers):
assert result.a.dtype == float
+@pytest.mark.xfail(reason="ResourceWarning #35660", strict=False)
def test_warn_if_chunks_have_mismatched_type(all_parsers):
warning_type = None
parser = all_parsers
| PR directly against 1.1.x
effectively backport of #35672 | https://api.github.com/repos/pandas-dev/pandas/pulls/35808 | 2020-08-19T17:37:14Z | 2020-08-20T13:41:49Z | 2020-08-20T13:41:49Z | 2020-08-20T13:41:55Z |
Backport PR #35776 on branch 1.1.x (Changed 'int' type to 'integer' in to_numeric docstring) | diff --git a/pandas/core/arrays/sparse/array.py b/pandas/core/arrays/sparse/array.py
index d8db196e4b92f..1531f7b292365 100644
--- a/pandas/core/arrays/sparse/array.py
+++ b/pandas/core/arrays/sparse/array.py
@@ -234,7 +234,7 @@ class SparseArray(PandasObject, ExtensionArray, ExtensionOpsMixin):
3. ``data.dtype.fill_value`` if `fill_value` is None and `dtype`
is not a ``SparseDtype`` and `data` is a ``SparseArray``.
- kind : {'int', 'block'}, default 'int'
+ kind : {'integer', 'block'}, default 'integer'
The type of storage for sparse locations.
* 'block': Stores a `block` and `block_length` for each
diff --git a/pandas/core/tools/numeric.py b/pandas/core/tools/numeric.py
index 41548931f17f8..cff4695603d06 100644
--- a/pandas/core/tools/numeric.py
+++ b/pandas/core/tools/numeric.py
@@ -40,13 +40,13 @@ def to_numeric(arg, errors="raise", downcast=None):
- If 'raise', then invalid parsing will raise an exception.
- If 'coerce', then invalid parsing will be set as NaN.
- If 'ignore', then invalid parsing will return the input.
- downcast : {'int', 'signed', 'unsigned', 'float'}, default None
+ downcast : {'integer', 'signed', 'unsigned', 'float'}, default None
If not None, and if the data has been successfully cast to a
numerical dtype (or if the data was numeric to begin with),
downcast that resulting data to the smallest numerical dtype
possible according to the following rules:
- - 'int' or 'signed': smallest signed int dtype (min.: np.int8)
+ - 'integer' or 'signed': smallest signed int dtype (min.: np.int8)
- 'unsigned': smallest unsigned int dtype (min.: np.uint8)
- 'float': smallest float dtype (min.: np.float32)
| Backport PR #35776: Changed 'int' type to 'integer' in to_numeric docstring | https://api.github.com/repos/pandas-dev/pandas/pulls/35806 | 2020-08-19T13:16:52Z | 2020-08-19T14:25:28Z | 2020-08-19T14:25:28Z | 2020-08-19T14:25:28Z |
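The corrected docstring value is easy to check interactively: with small positive integers, `downcast="integer"` (the value the docstring now documents) picks the narrowest signed dtype, and `"unsigned"` the narrowest unsigned one:

```python
import pandas as pd

# 'integer' / 'signed' downcast to the smallest signed int dtype.
s = pd.to_numeric(pd.Series([1, 2, 3]), downcast="integer")
print(s.dtype)  # int8

# 'unsigned' downcasts to the smallest unsigned int dtype.
u = pd.to_numeric(pd.Series([1, 2, 3]), downcast="unsigned")
print(u.dtype)  # uint8
```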
DOC: another pass of v1.1.1 release notes | diff --git a/doc/source/whatsnew/v1.1.1.rst b/doc/source/whatsnew/v1.1.1.rst
index 43ffed273adbc..721f07c865409 100644
--- a/doc/source/whatsnew/v1.1.1.rst
+++ b/doc/source/whatsnew/v1.1.1.rst
@@ -1,6 +1,6 @@
.. _whatsnew_111:
-What's new in 1.1.1 (August XX, 2020)
+What's new in 1.1.1 (August 20, 2020)
-------------------------------------
These are the changes in pandas 1.1.1. See :ref:`release` for a full changelog
@@ -27,7 +27,7 @@ Fixed regressions
- Fixed regression in ``.groupby(..).rolling(..)`` where a segfault would occur with ``center=True`` and an odd number of values (:issue:`35552`)
- Fixed regression in :meth:`DataFrame.apply` where functions that altered the input in-place only operated on a single row (:issue:`35462`)
- Fixed regression in :meth:`DataFrame.reset_index` would raise a ``ValueError`` on empty :class:`DataFrame` with a :class:`MultiIndex` with a ``datetime64`` dtype level (:issue:`35606`, :issue:`35657`)
-- Fixed regression where :func:`pandas.merge_asof` would raise a ``UnboundLocalError`` when ``left_index`` , ``right_index`` and ``tolerance`` were set (:issue:`35558`)
+- Fixed regression where :func:`pandas.merge_asof` would raise a ``UnboundLocalError`` when ``left_index``, ``right_index`` and ``tolerance`` were set (:issue:`35558`)
- Fixed regression in ``.groupby(..).rolling(..)`` where a custom ``BaseIndexer`` would be ignored (:issue:`35557`)
- Fixed regression in :meth:`DataFrame.replace` and :meth:`Series.replace` where compiled regular expressions would be ignored during replacement (:issue:`35680`)
- Fixed regression in :meth:`~pandas.core.groupby.DataFrameGroupBy.aggregate` where a list of functions would produce the wrong results if at least one of the functions did not aggregate (:issue:`35490`)
@@ -40,11 +40,11 @@ Fixed regressions
Bug fixes
~~~~~~~~~
-- Bug in :class:`~pandas.io.formats.style.Styler` whereby `cell_ids` argument had no effect due to other recent changes (:issue:`35588`) (:issue:`35663`)
+- Bug in :class:`~pandas.io.formats.style.Styler` whereby ``cell_ids`` argument had no effect due to other recent changes (:issue:`35588`) (:issue:`35663`)
- Bug in :func:`pandas.testing.assert_series_equal` and :func:`pandas.testing.assert_frame_equal` where extension dtypes were not ignored when ``check_dtypes`` was set to ``False`` (:issue:`35715`)
-- Bug in :meth:`to_timedelta` fails when arg is a :class:`Series` with `Int64` dtype containing null values (:issue:`35574`)
+- Bug in :meth:`to_timedelta` fails when ``arg`` is a :class:`Series` with ``Int64`` dtype containing null values (:issue:`35574`)
- Bug in ``.groupby(..).rolling(..)`` where passing ``closed`` with column selection would raise a ``ValueError`` (:issue:`35549`)
-- Bug in :class:`DataFrame` constructor failing to raise ``ValueError`` in some cases when data and index have mismatched lengths (:issue:`33437`)
+- Bug in :class:`DataFrame` constructor failing to raise ``ValueError`` in some cases when ``data`` and ``index`` have mismatched lengths (:issue:`33437`)
.. ---------------------------------------------------------------------------
| https://pandas.pydata.org/docs/dev/whatsnew/v1.1.1.html | https://api.github.com/repos/pandas-dev/pandas/pulls/35801 | 2020-08-19T12:27:32Z | 2020-08-20T15:05:47Z | 2020-08-20T15:05:47Z | 2020-08-20T15:07:07Z |
Backport PR #35787 on branch 1.1.x (DOC: clean v1.1.1 release notes) | diff --git a/doc/source/whatsnew/v1.1.1.rst b/doc/source/whatsnew/v1.1.1.rst
index ff5bbccf63ffe..43ffed273adbc 100644
--- a/doc/source/whatsnew/v1.1.1.rst
+++ b/doc/source/whatsnew/v1.1.1.rst
@@ -1,7 +1,7 @@
.. _whatsnew_111:
-What's new in 1.1.1 (?)
------------------------
+What's new in 1.1.1 (August XX, 2020)
+-------------------------------------
These are the changes in pandas 1.1.1. See :ref:`release` for a full changelog
including other versions of pandas.
@@ -15,20 +15,23 @@ including other versions of pandas.
Fixed regressions
~~~~~~~~~~~~~~~~~
+- Fixed regression in :meth:`CategoricalIndex.format` where, when stringified scalars had different lengths, the shorter string would be right-filled with spaces, so it had the same length as the longest string (:issue:`35439`)
+- Fixed regression in :meth:`Series.truncate` when trying to truncate a single-element series (:issue:`35544`)
- Fixed regression where :meth:`DataFrame.to_numpy` would raise a ``RuntimeError`` for mixed dtypes when converting to ``str`` (:issue:`35455`)
- Fixed regression where :func:`read_csv` would raise a ``ValueError`` when ``pandas.options.mode.use_inf_as_na`` was set to ``True`` (:issue:`35493`)
- Fixed regression where :func:`pandas.testing.assert_series_equal` would raise an error when non-numeric dtypes were passed with ``check_exact=True`` (:issue:`35446`)
-- Fixed regression in :class:`pandas.core.groupby.RollingGroupby` where column selection was ignored (:issue:`35486`)
-- Fixed regression where :meth:`DataFrame.interpolate` would raise a ``TypeError`` when the :class:`DataFrame` was empty (:issue:`35598`).
+- Fixed regression in ``.groupby(..).rolling(..)`` where column selection was ignored (:issue:`35486`)
+- Fixed regression where :meth:`DataFrame.interpolate` would raise a ``TypeError`` when the :class:`DataFrame` was empty (:issue:`35598`)
- Fixed regression in :meth:`DataFrame.shift` with ``axis=1`` and heterogeneous dtypes (:issue:`35488`)
- Fixed regression in :meth:`DataFrame.diff` with read-only data (:issue:`35559`)
- Fixed regression in ``.groupby(..).rolling(..)`` where a segfault would occur with ``center=True`` and an odd number of values (:issue:`35552`)
- Fixed regression in :meth:`DataFrame.apply` where functions that altered the input in-place only operated on a single row (:issue:`35462`)
- Fixed regression in :meth:`DataFrame.reset_index` would raise a ``ValueError`` on empty :class:`DataFrame` with a :class:`MultiIndex` with a ``datetime64`` dtype level (:issue:`35606`, :issue:`35657`)
-- Fixed regression where :meth:`DataFrame.merge_asof` would raise a ``UnboundLocalError`` when ``left_index`` , ``right_index`` and ``tolerance`` were set (:issue:`35558`)
+- Fixed regression where :func:`pandas.merge_asof` would raise a ``UnboundLocalError`` when ``left_index`` , ``right_index`` and ``tolerance`` were set (:issue:`35558`)
- Fixed regression in ``.groupby(..).rolling(..)`` where a custom ``BaseIndexer`` would be ignored (:issue:`35557`)
- Fixed regression in :meth:`DataFrame.replace` and :meth:`Series.replace` where compiled regular expressions would be ignored during replacement (:issue:`35680`)
-- Fixed regression in :meth:`~pandas.core.groupby.DataFrameGroupBy.agg` where a list of functions would produce the wrong results if at least one of the functions did not aggregate. (:issue:`35490`)
+- Fixed regression in :meth:`~pandas.core.groupby.DataFrameGroupBy.aggregate` where a list of functions would produce the wrong results if at least one of the functions did not aggregate (:issue:`35490`)
+- Fixed memory usage issue when instantiating large :class:`pandas.arrays.StringArray` (:issue:`35499`)
.. ---------------------------------------------------------------------------
@@ -37,50 +40,11 @@ Fixed regressions
Bug fixes
~~~~~~~~~
-- Bug in ``Styler`` whereby `cell_ids` argument had no effect due to other recent changes (:issue:`35588`) (:issue:`35663`).
-- Bug in :func:`pandas.testing.assert_series_equal` and :func:`pandas.testing.assert_frame_equal` where extension dtypes were not ignored when ``check_dtypes`` was set to ``False`` (:issue:`35715`).
-
-Categorical
-^^^^^^^^^^^
-
-- Bug in :meth:`CategoricalIndex.format` where, when stringified scalars had different lengths, the shorter string would be right-filled with spaces, so it had the same length as the longest string (:issue:`35439`)
-
-
-**Datetimelike**
-
--
--
-
-**Timedelta**
-
+- Bug in :class:`~pandas.io.formats.style.Styler` whereby `cell_ids` argument had no effect due to other recent changes (:issue:`35588`) (:issue:`35663`)
+- Bug in :func:`pandas.testing.assert_series_equal` and :func:`pandas.testing.assert_frame_equal` where extension dtypes were not ignored when ``check_dtypes`` was set to ``False`` (:issue:`35715`)
- Bug in :meth:`to_timedelta` fails when arg is a :class:`Series` with `Int64` dtype containing null values (:issue:`35574`)
-
-
-**Numeric**
-
--
--
-
-**Groupby/resample/rolling**
-
-- Bug in :class:`pandas.core.groupby.RollingGroupby` where passing ``closed`` with column selection would raise a ``ValueError`` (:issue:`35549`)
-
-**Plotting**
-
--
-
-**Indexing**
-
-- Bug in :meth:`Series.truncate` when trying to truncate a single-element series (:issue:`35544`)
-
-**DataFrame**
+- Bug in ``.groupby(..).rolling(..)`` where passing ``closed`` with column selection would raise a ``ValueError`` (:issue:`35549`)
- Bug in :class:`DataFrame` constructor failing to raise ``ValueError`` in some cases when data and index have mismatched lengths (:issue:`33437`)
--
-
-**Strings**
-
-- fix memory usage issue when instantiating large :class:`pandas.arrays.StringArray` (:issue:`35499`)
-
.. ---------------------------------------------------------------------------
| Backport PR #35787: DOC: clean v1.1.1 release notes | https://api.github.com/repos/pandas-dev/pandas/pulls/35800 | 2020-08-19T09:49:44Z | 2020-08-19T11:34:40Z | 2020-08-19T11:34:40Z | 2020-08-19T11:34:40Z |
TST: resample does not yield empty groups (#10603) | diff --git a/pandas/tests/resample/test_timedelta.py b/pandas/tests/resample/test_timedelta.py
index 0fbb60c176b30..3fa85e62d028c 100644
--- a/pandas/tests/resample/test_timedelta.py
+++ b/pandas/tests/resample/test_timedelta.py
@@ -150,3 +150,18 @@ def test_resample_timedelta_edge_case(start, end, freq, resample_freq):
tm.assert_index_equal(result.index, expected_index)
assert result.index.freq == expected_index.freq
assert not np.isnan(result[-1])
+
+
+def test_resample_with_timedelta_yields_no_empty_groups():
+ # GH 10603
+ df = pd.DataFrame(
+ np.random.normal(size=(10000, 4)),
+ index=pd.timedelta_range(start="0s", periods=10000, freq="3906250n"),
+ )
+ result = df.loc["1s":, :].resample("3s").apply(lambda x: len(x))
+
+ expected = pd.DataFrame(
+ [[768.0] * 4] * 12 + [[528.0] * 4],
+ index=pd.timedelta_range(start="1s", periods=13, freq="3s"),
+ )
+ tm.assert_frame_equal(result, expected)
| - [x] closes #10603
- [x] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
This adds a test to verify that resample does not yield empty groups.
I'm new to contributing to pandas, and I'm not sure where to add this test.
Let me know if this place is inappropriate. | https://api.github.com/repos/pandas-dev/pandas/pulls/35799 | 2020-08-19T08:19:44Z | 2020-08-21T22:42:51Z | 2020-08-21T22:42:51Z | 2020-08-21T22:42:56Z |
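The expected counts in the test above (twelve bins of 768 rows plus a final 528) follow from the arithmetic alone, since 3 s / 3906250 ns = 768 exactly; a pandas-free sketch reproduces them:

```python
from collections import Counter

FREQ_NS = 3_906_250        # sample spacing from the test's timedelta_range
START_NS = 1_000_000_000   # the "1s" slice start
BIN_NS = 3_000_000_000     # the "3s" resample bins

# Timestamps of the 10000 samples, in nanoseconds.
times = [i * FREQ_NS for i in range(10_000)]

# Count samples falling into each 3s bin after the 1s cutoff.
counts = Counter((t - START_NS) // BIN_NS for t in times if t >= START_NS)

print(sorted(counts.items()))
# bins 0..11 hold 768 samples each; the trailing partial bin holds 528,
# matching the expected DataFrame the test constructs
```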
Bump asv Python version | diff --git a/asv_bench/asv.conf.json b/asv_bench/asv.conf.json
index 4583fac85b776..1863a17e3d5f7 100644
--- a/asv_bench/asv.conf.json
+++ b/asv_bench/asv.conf.json
@@ -26,7 +26,7 @@
// The Pythons you'd like to test against. If not provided, defaults
// to the current version of Python used to run `asv`.
// "pythons": ["2.7", "3.4"],
- "pythons": ["3.6"],
+ "pythons": ["3.8"],
// The matrix of dependencies to test. Each key is the name of a
// package (in PyPI) and the values are version numbers. An empty
Got this error trying to run the asvs locally, and bumping the Python version in asv.conf.json fixed it:
```
STDOUT -------->
Processing /Users/danielsaxton/pandas/asv_bench/env/a36d556a9d1611aba4092dc48036d261/project
Preparing wheel metadata: started
Preparing wheel metadata: finished with status 'done'
STDERR -------->
ERROR: Package 'pandas' requires a different Python: 3.6.10 not in '>=3.7.1'
```
Let me know if this is needed / correct in general or if it was just a thing with my machine.
cc @TomAugspurger
Closes https://github.com/pandas-dev/pandas/issues/34677 | https://api.github.com/repos/pandas-dev/pandas/pulls/35798 | 2020-08-19T02:45:00Z | 2020-08-26T13:57:41Z | 2020-08-26T13:57:41Z | 2020-08-26T17:06:12Z |
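The pip error above is a `python_requires='>=3.7.1'` check failing for Python 3.6.10. A toy comparison shows why (assuming plain dotted numeric versions; pip's real PEP 440 handling also covers pre-releases, epochs, and so on):

```python
def meets_minimum(version: str, minimum: str) -> bool:
    # Toy stand-in for pip's '>=' specifier check: compare versions
    # as tuples of integers so 3.6.10 sorts below 3.7.1.
    def parse(v: str):
        return tuple(int(p) for p in v.split("."))
    return parse(version) >= parse(minimum)

print(meets_minimum("3.6.10", "3.7.1"))  # False -> pip refuses to install
print(meets_minimum("3.8.0", "3.7.1"))   # True
```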
BUG: Fix is_categorical_dtype for Sparse[category] | diff --git a/doc/source/whatsnew/v1.2.0.rst b/doc/source/whatsnew/v1.2.0.rst
index 6923b42d3340b..109dbd5839756 100644
--- a/doc/source/whatsnew/v1.2.0.rst
+++ b/doc/source/whatsnew/v1.2.0.rst
@@ -236,7 +236,7 @@ Bug fixes
Categorical
^^^^^^^^^^^
--
+- Bug in :func:`pandas.api.types.is_categorical_dtype` where sparse categorical dtypes would return ``False`` (:issue:`35793`)
-
Datetimelike
diff --git a/pandas/core/dtypes/base.py b/pandas/core/dtypes/base.py
index 3ae5cabf9c73f..4d7e78a0f46e8 100644
--- a/pandas/core/dtypes/base.py
+++ b/pandas/core/dtypes/base.py
@@ -287,6 +287,10 @@ def is_dtype(cls, dtype: object) -> bool:
return False
elif isinstance(dtype, cls):
return True
+ elif hasattr(dtype, "subtype") and isinstance(
+ dtype.subtype, cls # type: ignore[attr-defined]
+ ):
+ return True
if isinstance(dtype, str):
try:
return cls.construct_from_string(dtype) is not None
diff --git a/pandas/core/dtypes/common.py b/pandas/core/dtypes/common.py
index 5987fdabf78bb..bc94d4ae422d4 100644
--- a/pandas/core/dtypes/common.py
+++ b/pandas/core/dtypes/common.py
@@ -558,8 +558,10 @@ def is_categorical_dtype(arr_or_dtype) -> bool:
"""
if isinstance(arr_or_dtype, ExtensionDtype):
# GH#33400 fastpath for dtype object
- return arr_or_dtype.name == "category"
-
+ return ("category" == arr_or_dtype.name) or (
+ is_sparse(arr_or_dtype)
+ and arr_or_dtype.subtype.name == "category" # type: ignore[attr-defined]
+ )
if arr_or_dtype is None:
return False
return CategoricalDtype.is_dtype(arr_or_dtype)
diff --git a/pandas/tests/dtypes/test_common.py b/pandas/tests/dtypes/test_common.py
index 2db9a9a403e1c..e36b4ae17ed0e 100644
--- a/pandas/tests/dtypes/test_common.py
+++ b/pandas/tests/dtypes/test_common.py
@@ -213,6 +213,15 @@ def test_is_categorical_deprecation():
com.is_categorical([1, 2, 3])
+def test_is_categorical_sparse_categorical():
+ # https://github.com/pandas-dev/pandas/issues/35793
+ s = pd.Series(
+ ["a", "b", "c"], dtype=pd.SparseDtype(CategoricalDtype(["a", "b", "c"]))
+ )
+ assert com.is_categorical_dtype(s)
+ assert com.is_categorical_dtype(s.dtype)
+
+
def test_is_datetime64_dtype():
assert not com.is_datetime64_dtype(object)
assert not com.is_datetime64_dtype([1, 2, 3])
| - [x] closes #35793
- [x] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/35797 | 2020-08-19T00:45:44Z | 2020-09-19T16:39:38Z | null | 2020-09-19T16:39:42Z |
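The `is_dtype` change above boils down to also accepting a wrapper dtype whose `subtype` is an instance of the class. A stripped-down model of that dispatch (stand-in classes, not the real pandas dtypes):

```python
class CategoricalLike:
    """Stand-in for CategoricalDtype."""

    @classmethod
    def is_dtype(cls, dtype) -> bool:
        if isinstance(dtype, cls):
            return True
        # The fix: unwrap one level, so Sparse[category]-style wrappers
        # whose .subtype is categorical also match.
        return isinstance(getattr(dtype, "subtype", None), cls)


class SparseLike:
    """Stand-in for SparseDtype wrapping another dtype."""

    def __init__(self, subtype):
        self.subtype = subtype


print(CategoricalLike.is_dtype(CategoricalLike()))              # True
print(CategoricalLike.is_dtype(SparseLike(CategoricalLike())))  # True (the fix)
print(CategoricalLike.is_dtype(SparseLike(object())))           # False
```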
BUG: issubclass check with dtype instead of type, closes GH#24883 | diff --git a/doc/source/whatsnew/v1.1.2.rst b/doc/source/whatsnew/v1.1.2.rst
index 81acd567027e5..7bd547bf03a87 100644
--- a/doc/source/whatsnew/v1.1.2.rst
+++ b/doc/source/whatsnew/v1.1.2.rst
@@ -24,7 +24,7 @@ Fixed regressions
Bug fixes
~~~~~~~~~
-
+- Bug in :meth:`DataFrame.eval` with ``object`` dtype column binary operations (:issue:`35794`)
-
-
diff --git a/pandas/core/computation/ops.py b/pandas/core/computation/ops.py
index bc9ff7c44b689..e55df1e1d8155 100644
--- a/pandas/core/computation/ops.py
+++ b/pandas/core/computation/ops.py
@@ -481,13 +481,21 @@ def stringify(value):
self.lhs.update(v)
def _disallow_scalar_only_bool_ops(self):
+ rhs = self.rhs
+ lhs = self.lhs
+
+ # GH#24883 unwrap dtype if necessary to ensure we have a type object
+ rhs_rt = rhs.return_type
+ rhs_rt = getattr(rhs_rt, "type", rhs_rt)
+ lhs_rt = lhs.return_type
+ lhs_rt = getattr(lhs_rt, "type", lhs_rt)
if (
- (self.lhs.is_scalar or self.rhs.is_scalar)
+ (lhs.is_scalar or rhs.is_scalar)
and self.op in _bool_ops_dict
and (
not (
- issubclass(self.rhs.return_type, (bool, np.bool_))
- and issubclass(self.lhs.return_type, (bool, np.bool_))
+ issubclass(rhs_rt, (bool, np.bool_))
+ and issubclass(lhs_rt, (bool, np.bool_))
)
)
):
diff --git a/pandas/tests/frame/test_query_eval.py b/pandas/tests/frame/test_query_eval.py
index 628b955a1de92..56d178daee7fd 100644
--- a/pandas/tests/frame/test_query_eval.py
+++ b/pandas/tests/frame/test_query_eval.py
@@ -160,6 +160,13 @@ def test_eval_resolvers_as_list(self):
assert df.eval("a + b", resolvers=[dict1, dict2]) == dict1["a"] + dict2["b"]
assert pd.eval("a + b", resolvers=[dict1, dict2]) == dict1["a"] + dict2["b"]
+ def test_eval_object_dtype_binop(self):
+ # GH#24883
+ df = pd.DataFrame({"a1": ["Y", "N"]})
+ res = df.eval("c = ((a1 == 'Y') & True)")
+ expected = pd.DataFrame({"a1": ["Y", "N"], "c": [True, False]})
+ tm.assert_frame_equal(res, expected)
+
class TestDataFrameQueryWithMultiIndex:
def test_query_with_named_multiindex(self, parser, engine):
| - [x] closes #24883
- [x] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/35794 | 2020-08-18T23:00:56Z | 2020-08-21T21:53:51Z | 2020-08-21T21:53:51Z | 2020-08-27T09:18:49Z |
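The behavior this patch locks in (taken from the `test_eval_object_dtype_binop` test above) can be exercised directly — a minimal sketch, assuming a pandas build that includes this fix:

```python
import pandas as pd

# GH#24883: a boolean binop between an object-dtype comparison result and a
# scalar bool used to raise, because issubclass() was handed a dtype object
# instead of a type. With the fix, the expression evaluates normally.
df = pd.DataFrame({"a1": ["Y", "N"]})
res = df.eval("c = ((a1 == 'Y') & True)")
print(res["c"].tolist())  # [True, False]
```

Note that `eval` with an assignment target returns a modified copy by default (`inplace=False`), which is why `res` carries both `a1` and the new `c` column.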
CLN/BUG: Clean/Simplify _wrap_applied_output | diff --git a/doc/source/whatsnew/v1.2.0.rst b/doc/source/whatsnew/v1.2.0.rst
index adc1806523d6e..55570341cf4e8 100644
--- a/doc/source/whatsnew/v1.2.0.rst
+++ b/doc/source/whatsnew/v1.2.0.rst
@@ -254,6 +254,7 @@ Groupby/resample/rolling
- Bug in :meth:`DataFrameGroupBy.apply` that would some times throw an erroneous ``ValueError`` if the grouping axis had duplicate entries (:issue:`16646`)
- Bug when combining methods :meth:`DataFrame.groupby` with :meth:`DataFrame.resample` and :meth:`DataFrame.interpolate` raising an ``TypeError`` (:issue:`35325`)
- Bug in :meth:`DataFrameGroupBy.apply` where a non-nuisance grouping column would be dropped from the output columns if another groupby method was called before ``.apply()`` (:issue:`34656`)
+- Bug in :meth:`DataFrameGroupby.apply` would drop a :class:`CategoricalIndex` when grouped on. (:issue:`35792`)
Reshaping
^^^^^^^^^
diff --git a/pandas/core/groupby/generic.py b/pandas/core/groupby/generic.py
index 70a8379de64e9..2afa56b50c3c7 100644
--- a/pandas/core/groupby/generic.py
+++ b/pandas/core/groupby/generic.py
@@ -1197,57 +1197,25 @@ def _wrap_applied_output(self, keys, values, not_indexed_same=False):
if len(keys) == 0:
return self.obj._constructor(index=keys)
- key_names = self.grouper.names
-
# GH12824
first_not_none = next(com.not_none(*values), None)
if first_not_none is None:
- # GH9684. If all values are None, then this will throw an error.
- # We'd prefer it return an empty dataframe.
+ # GH9684 - All values are None, return an empty frame.
return self.obj._constructor()
elif isinstance(first_not_none, DataFrame):
return self._concat_objects(keys, values, not_indexed_same=not_indexed_same)
else:
- if len(self.grouper.groupings) > 1:
- key_index = self.grouper.result_index
-
- else:
- ping = self.grouper.groupings[0]
- if len(keys) == ping.ngroups:
- key_index = ping.group_index
- key_index.name = key_names[0]
-
- key_lookup = Index(keys)
- indexer = key_lookup.get_indexer(key_index)
-
- # reorder the values
- values = [values[i] for i in indexer]
-
- # update due to the potential reorder
- first_not_none = next(com.not_none(*values), None)
- else:
-
- key_index = Index(keys, name=key_names[0])
-
- # don't use the key indexer
- if not self.as_index:
- key_index = None
+ key_index = self.grouper.result_index if self.as_index else None
- # make Nones an empty object
- if first_not_none is None:
- return self.obj._constructor()
- elif isinstance(first_not_none, NDFrame):
+ if isinstance(first_not_none, Series):
# this is to silence a DeprecationWarning
# TODO: Remove when default dtype of empty Series is object
kwargs = first_not_none._construct_axes_dict()
- if isinstance(first_not_none, Series):
- backup = create_series_with_explicit_dtype(
- **kwargs, dtype_if_empty=object
- )
- else:
- backup = first_not_none._constructor(**kwargs)
+ backup = create_series_with_explicit_dtype(
+ **kwargs, dtype_if_empty=object
+ )
values = [x if (x is not None) else backup for x in values]
@@ -1256,7 +1224,7 @@ def _wrap_applied_output(self, keys, values, not_indexed_same=False):
if isinstance(v, (np.ndarray, Index, Series)) or not self.as_index:
if isinstance(v, Series):
applied_index = self._selected_obj._get_axis(self.axis)
- all_indexed_same = all_indexes_same([x.index for x in values])
+ all_indexed_same = all_indexes_same((x.index for x in values))
singular_series = len(values) == 1 and applied_index.nlevels == 1
# GH3596
@@ -1288,7 +1256,6 @@ def _wrap_applied_output(self, keys, values, not_indexed_same=False):
# GH 8467
return self._concat_objects(keys, values, not_indexed_same=True)
- if self.axis == 0 and isinstance(v, ABCSeries):
# GH6124 if the list of Series have a consistent name,
# then propagate that name to the result.
index = v.index.copy()
@@ -1301,34 +1268,27 @@ def _wrap_applied_output(self, keys, values, not_indexed_same=False):
if len(names) == 1:
index.name = list(names)[0]
- # normally use vstack as its faster than concat
- # and if we have mi-columns
- if (
- isinstance(v.index, MultiIndex)
- or key_index is None
- or isinstance(key_index, MultiIndex)
- ):
- stacked_values = np.vstack([np.asarray(v) for v in values])
- result = self.obj._constructor(
- stacked_values, index=key_index, columns=index
- )
- else:
- # GH5788 instead of stacking; concat gets the
- # dtypes correct
- from pandas.core.reshape.concat import concat
-
- result = concat(
- values,
- keys=key_index,
- names=key_index.names,
- axis=self.axis,
- ).unstack()
- result.columns = index
- elif isinstance(v, ABCSeries):
+ # Combine values
+ # vstack+constructor is faster than concat and handles MI-columns
stacked_values = np.vstack([np.asarray(v) for v in values])
+
+ if self.axis == 0:
+ index = key_index
+ columns = v.index.copy()
+ if columns.name is None:
+ # GH6124 - propagate name of Series when it's consistent
+ names = {v.name for v in values}
+ if len(names) == 1:
+ columns.name = list(names)[0]
+ else:
+ index = v.index
+ columns = key_index
+ stacked_values = stacked_values.T
+
result = self.obj._constructor(
- stacked_values.T, index=v.index, columns=key_index
+ stacked_values, index=index, columns=columns
)
+
elif not self.as_index:
# We add grouping column below, so create a frame here
result = DataFrame(
diff --git a/pandas/core/indexes/api.py b/pandas/core/indexes/api.py
index 30cc8cf480dcf..d352b001f5d2a 100644
--- a/pandas/core/indexes/api.py
+++ b/pandas/core/indexes/api.py
@@ -297,15 +297,16 @@ def all_indexes_same(indexes):
Parameters
----------
- indexes : list of Index objects
+ indexes : iterable of Index objects
Returns
-------
bool
True if all indexes contain the same elements, False otherwise.
"""
- first = indexes[0]
- for index in indexes[1:]:
+ itr = iter(indexes)
+ first = next(itr)
+ for index in itr:
if not first.equals(index):
return False
return True
diff --git a/pandas/tests/groupby/test_apply.py b/pandas/tests/groupby/test_apply.py
index ee38722ffb8ce..a1dcb28a32c6c 100644
--- a/pandas/tests/groupby/test_apply.py
+++ b/pandas/tests/groupby/test_apply.py
@@ -861,13 +861,14 @@ def test_apply_multi_level_name(category):
b = [1, 2] * 5
if category:
b = pd.Categorical(b, categories=[1, 2, 3])
+ expected_index = pd.CategoricalIndex([1, 2], categories=[1, 2, 3], name="B")
+ else:
+ expected_index = pd.Index([1, 2], name="B")
df = pd.DataFrame(
{"A": np.arange(10), "B": b, "C": list(range(10)), "D": list(range(10))}
).set_index(["A", "B"])
result = df.groupby("B").apply(lambda x: x.sum())
- expected = pd.DataFrame(
- {"C": [20, 25], "D": [20, 25]}, index=pd.Index([1, 2], name="B")
- )
+ expected = pd.DataFrame({"C": [20, 25], "D": [20, 25]}, index=expected_index)
tm.assert_frame_equal(result, expected)
assert df.index.names == ["A", "B"]
| - [ ] closes #xxxx
- [x] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] whatsnew entry
First part of #35412. I could not find a relevant issue for the test change/whatsnew update. | https://api.github.com/repos/pandas-dev/pandas/pulls/35792 | 2020-08-18T21:54:42Z | 2020-08-26T03:21:44Z | 2020-08-26T03:21:44Z | 2020-10-26T15:42:17Z |
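The index-preservation change pinned down by the amended `test_apply_multi_level_name` can be sketched as follows; this assumes a pandas version that includes the change (earlier versions dropped the categorical dtype and returned a plain `Index`):

```python
import numpy as np
import pandas as pd

# GH#35792: grouping on a categorical level and applying a function should
# keep a CategoricalIndex (with the full category set) on the result.
df = pd.DataFrame(
    {
        "A": np.arange(10),
        "B": pd.Categorical([1, 2] * 5, categories=[1, 2, 3]),
        "C": list(range(10)),
    }
).set_index(["A", "B"])

result = df.groupby("B").apply(lambda x: x.sum())
print(type(result.index).__name__)  # CategoricalIndex
```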
Backport PR #35774 on branch 1.1.x (REGR: follow-up to return copy with df.interpolate on empty DataFrame) | diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index 3a386a8df7075..8bd1dbea4696f 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -6800,7 +6800,7 @@ def interpolate(
obj = self.T if should_transpose else self
if obj.empty:
- return self
+ return self.copy()
if method not in fillna_methods:
axis = self._info_axis_number
diff --git a/pandas/tests/frame/methods/test_interpolate.py b/pandas/tests/frame/methods/test_interpolate.py
index 3c9d79397e4bd..6b86a13fcf1b9 100644
--- a/pandas/tests/frame/methods/test_interpolate.py
+++ b/pandas/tests/frame/methods/test_interpolate.py
@@ -38,6 +38,7 @@ def test_interp_empty(self):
# https://github.com/pandas-dev/pandas/issues/35598
df = DataFrame()
result = df.interpolate()
+ assert result is not df
expected = df
tm.assert_frame_equal(result, expected)
| Backport PR #35774: REGR: follow-up to return copy with df.interpolate on empty DataFrame | https://api.github.com/repos/pandas-dev/pandas/pulls/35789 | 2020-08-18T13:06:29Z | 2020-08-18T14:12:19Z | 2020-08-18T14:12:19Z | 2020-08-18T14:12:20Z |
DOC: clean v1.1.1 release notes | diff --git a/doc/source/whatsnew/v1.1.1.rst b/doc/source/whatsnew/v1.1.1.rst
index ff5bbccf63ffe..43ffed273adbc 100644
--- a/doc/source/whatsnew/v1.1.1.rst
+++ b/doc/source/whatsnew/v1.1.1.rst
@@ -1,7 +1,7 @@
.. _whatsnew_111:
-What's new in 1.1.1 (?)
------------------------
+What's new in 1.1.1 (August XX, 2020)
+-------------------------------------
These are the changes in pandas 1.1.1. See :ref:`release` for a full changelog
including other versions of pandas.
@@ -15,20 +15,23 @@ including other versions of pandas.
Fixed regressions
~~~~~~~~~~~~~~~~~
+- Fixed regression in :meth:`CategoricalIndex.format` where, when stringified scalars had different lengths, the shorter string would be right-filled with spaces, so it had the same length as the longest string (:issue:`35439`)
+- Fixed regression in :meth:`Series.truncate` when trying to truncate a single-element series (:issue:`35544`)
- Fixed regression where :meth:`DataFrame.to_numpy` would raise a ``RuntimeError`` for mixed dtypes when converting to ``str`` (:issue:`35455`)
- Fixed regression where :func:`read_csv` would raise a ``ValueError`` when ``pandas.options.mode.use_inf_as_na`` was set to ``True`` (:issue:`35493`)
- Fixed regression where :func:`pandas.testing.assert_series_equal` would raise an error when non-numeric dtypes were passed with ``check_exact=True`` (:issue:`35446`)
-- Fixed regression in :class:`pandas.core.groupby.RollingGroupby` where column selection was ignored (:issue:`35486`)
-- Fixed regression where :meth:`DataFrame.interpolate` would raise a ``TypeError`` when the :class:`DataFrame` was empty (:issue:`35598`).
+- Fixed regression in ``.groupby(..).rolling(..)`` where column selection was ignored (:issue:`35486`)
+- Fixed regression where :meth:`DataFrame.interpolate` would raise a ``TypeError`` when the :class:`DataFrame` was empty (:issue:`35598`)
- Fixed regression in :meth:`DataFrame.shift` with ``axis=1`` and heterogeneous dtypes (:issue:`35488`)
- Fixed regression in :meth:`DataFrame.diff` with read-only data (:issue:`35559`)
- Fixed regression in ``.groupby(..).rolling(..)`` where a segfault would occur with ``center=True`` and an odd number of values (:issue:`35552`)
- Fixed regression in :meth:`DataFrame.apply` where functions that altered the input in-place only operated on a single row (:issue:`35462`)
- Fixed regression in :meth:`DataFrame.reset_index` would raise a ``ValueError`` on empty :class:`DataFrame` with a :class:`MultiIndex` with a ``datetime64`` dtype level (:issue:`35606`, :issue:`35657`)
-- Fixed regression where :meth:`DataFrame.merge_asof` would raise a ``UnboundLocalError`` when ``left_index`` , ``right_index`` and ``tolerance`` were set (:issue:`35558`)
+- Fixed regression where :func:`pandas.merge_asof` would raise a ``UnboundLocalError`` when ``left_index`` , ``right_index`` and ``tolerance`` were set (:issue:`35558`)
- Fixed regression in ``.groupby(..).rolling(..)`` where a custom ``BaseIndexer`` would be ignored (:issue:`35557`)
- Fixed regression in :meth:`DataFrame.replace` and :meth:`Series.replace` where compiled regular expressions would be ignored during replacement (:issue:`35680`)
-- Fixed regression in :meth:`~pandas.core.groupby.DataFrameGroupBy.agg` where a list of functions would produce the wrong results if at least one of the functions did not aggregate. (:issue:`35490`)
+- Fixed regression in :meth:`~pandas.core.groupby.DataFrameGroupBy.aggregate` where a list of functions would produce the wrong results if at least one of the functions did not aggregate (:issue:`35490`)
+- Fixed memory usage issue when instantiating large :class:`pandas.arrays.StringArray` (:issue:`35499`)
.. ---------------------------------------------------------------------------
@@ -37,50 +40,11 @@ Fixed regressions
Bug fixes
~~~~~~~~~
-- Bug in ``Styler`` whereby `cell_ids` argument had no effect due to other recent changes (:issue:`35588`) (:issue:`35663`).
-- Bug in :func:`pandas.testing.assert_series_equal` and :func:`pandas.testing.assert_frame_equal` where extension dtypes were not ignored when ``check_dtypes`` was set to ``False`` (:issue:`35715`).
-
-Categorical
-^^^^^^^^^^^
-
-- Bug in :meth:`CategoricalIndex.format` where, when stringified scalars had different lengths, the shorter string would be right-filled with spaces, so it had the same length as the longest string (:issue:`35439`)
-
-
-**Datetimelike**
-
--
--
-
-**Timedelta**
-
+- Bug in :class:`~pandas.io.formats.style.Styler` whereby `cell_ids` argument had no effect due to other recent changes (:issue:`35588`) (:issue:`35663`)
+- Bug in :func:`pandas.testing.assert_series_equal` and :func:`pandas.testing.assert_frame_equal` where extension dtypes were not ignored when ``check_dtypes`` was set to ``False`` (:issue:`35715`)
- Bug in :meth:`to_timedelta` fails when arg is a :class:`Series` with `Int64` dtype containing null values (:issue:`35574`)
-
-
-**Numeric**
-
--
--
-
-**Groupby/resample/rolling**
-
-- Bug in :class:`pandas.core.groupby.RollingGroupby` where passing ``closed`` with column selection would raise a ``ValueError`` (:issue:`35549`)
-
-**Plotting**
-
--
-
-**Indexing**
-
-- Bug in :meth:`Series.truncate` when trying to truncate a single-element series (:issue:`35544`)
-
-**DataFrame**
+- Bug in ``.groupby(..).rolling(..)`` where passing ``closed`` with column selection would raise a ``ValueError`` (:issue:`35549`)
- Bug in :class:`DataFrame` constructor failing to raise ``ValueError`` in some cases when data and index have mismatched lengths (:issue:`33437`)
--
-
-**Strings**
-
-- fix memory usage issue when instantiating large :class:`pandas.arrays.StringArray` (:issue:`35499`)
-
.. ---------------------------------------------------------------------------
| https://api.github.com/repos/pandas-dev/pandas/pulls/35787 | 2020-08-18T12:12:49Z | 2020-08-19T09:48:42Z | 2020-08-19T09:48:42Z | 2020-08-19T09:52:45Z | |
DOC: Updated docstring for Series.str.cat (pandas-dev#35556) | diff --git a/pandas/core/strings.py b/pandas/core/strings.py
index 6702bf519c52e..c447f551da87c 100644
--- a/pandas/core/strings.py
+++ b/pandas/core/strings.py
@@ -2371,8 +2371,7 @@ def cat(self, others=None, sep=None, na_rep=None, join="left"):
join : {'left', 'right', 'outer', 'inner'}, default 'left'
Determines the join-style between the calling Series/Index and any
Series/Index/DataFrame in `others` (objects without an index need
- to match the length of the calling Series/Index). To disable
- alignment, use `.values` on any Series/Index/DataFrame in `others`.
+ to match the length of the calling Series/Index).
.. versionadded:: 0.23.0
.. versionchanged:: 1.0.0
@@ -2389,6 +2388,15 @@ def cat(self, others=None, sep=None, na_rep=None, join="left"):
split : Split each string in the Series/Index.
join : Join lists contained as elements in the Series/Index.
+ Notes
+ -----
+ Since version 1.0.0, index alignment is performed when
+ `others` is a Series/Index/DataFrame (or a list-like containing
+ one). To disable alignment (the behavior in and before v0.23.0),
+ convert `others` to a numpy array while passing it as an argument
+ (using `.to_numpy()`). See the Examples section below for more
+ information.
+
Examples
--------
When not passing `others`, all values are concatenated into a single
@@ -2466,7 +2474,27 @@ def cat(self, others=None, sep=None, na_rep=None, join="left"):
2 -c
dtype: object
- For more examples, see :ref:`here <text.concatenate>`.
+ When `others` is a Series/Index/DataFrame, it is advisable
+ to use `to_numpy()` while passing it to the function to
+ avoid unexpected results.
+
+ The following example demonstrates this:
+
+ If `others` is a Series/Index/DataFrame and `to_numpy()`
+ **is not** used:
+
+ >>> idx = pd.Index(["a", "b", "c", "d", "e"])
+ >>> ser = pd.Series(["f", "g", "h", "i", "j"])
+ >>> idx.str.cat(ser) # the caller is an Index here
+ Index([nan, nan, nan, nan, nan], dtype='object')
+
+ When `to_numpy()` **is** used:
+
+ >>> idx.str.cat(ser.to_numpy())
+ Index(['af', 'bg', 'ch', 'di', 'ej'], dtype='object')
+
+ For other examples and usages, see
+ :ref:`here <text.concatenate>`.
"""
from pandas import Index, Series, concat
| - [X] closes #35556
- [X] tests added / passed
- [X] passes `black pandas`
- [X] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
I have updated the docstring for `Series.str.cat` to make the issue in #35556 more explicit to the user. | https://api.github.com/repos/pandas-dev/pandas/pulls/35784 | 2020-08-18T06:01:25Z | 2020-12-11T05:01:54Z | null | 2020-12-11T05:01:54Z |
CLN: remove unused variable | diff --git a/pandas/core/groupby/generic.py b/pandas/core/groupby/generic.py
index 1f0cdbd07560f..166631e69f523 100644
--- a/pandas/core/groupby/generic.py
+++ b/pandas/core/groupby/generic.py
@@ -1031,7 +1031,6 @@ def _cython_agg_blocks(
data = data.get_numeric_data(copy=False)
agg_blocks: List["Block"] = []
- deleted_items: List[np.ndarray] = []
no_result = object()
@@ -1133,7 +1132,6 @@ def blk_func(block: "Block") -> List["Block"]:
# continue and exclude the block
# NotImplementedError -> "ohlc" with wrong dtype
skipped.append(i)
- deleted_items.append(block.mgr_locs.as_array)
else:
agg_blocks.extend(nbs)
| https://api.github.com/repos/pandas-dev/pandas/pulls/35783 | 2020-08-18T04:08:03Z | 2020-08-18T13:05:30Z | 2020-08-18T13:05:30Z | 2020-08-18T15:49:35Z | |
CI: unpin numpy and matplotlib #35779 | diff --git a/doc/source/user_guide/visualization.rst b/doc/source/user_guide/visualization.rst
index 5bc87bca87211..8ce4b30c717a4 100644
--- a/doc/source/user_guide/visualization.rst
+++ b/doc/source/user_guide/visualization.rst
@@ -668,6 +668,7 @@ A ``ValueError`` will be raised if there are any negative values in your data.
plt.figure()
.. ipython:: python
+ :okwarning:
series = pd.Series(3 * np.random.rand(4),
index=['a', 'b', 'c', 'd'], name='series')
@@ -742,6 +743,7 @@ If you pass values whose sum total is less than 1.0, matplotlib draws a semicirc
plt.figure()
.. ipython:: python
+ :okwarning:
series = pd.Series([0.1] * 4, index=['a', 'b', 'c', 'd'], name='series2')
diff --git a/environment.yml b/environment.yml
index 1e51470d43d36..aaabf09b8f190 100644
--- a/environment.yml
+++ b/environment.yml
@@ -3,8 +3,7 @@ channels:
- conda-forge
dependencies:
# required
- # Pin numpy<1.19 until MPL 3.3.0 is released.
- - numpy>=1.16.5,<1.19.0
+ - numpy>=1.16.5
- python=3
- python-dateutil>=2.7.3
- pytz
@@ -73,7 +72,7 @@ dependencies:
- ipykernel
- ipython>=7.11.1
- jinja2 # pandas.Styler
- - matplotlib>=2.2.2,<3.3.0 # pandas.plotting, Series.plot, DataFrame.plot
+ - matplotlib>=2.2.2 # pandas.plotting, Series.plot, DataFrame.plot
- numexpr>=2.6.8
- scipy>=1.2
- numba>=0.46.0
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index 1587dd8798ec3..c16919e523264 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -2600,7 +2600,7 @@ def to_html(
1 column_2 1000000 non-null object
2 column_3 1000000 non-null object
dtypes: object(3)
- memory usage: 188.8 MB"""
+ memory usage: 165.9 MB"""
),
see_also_sub=(
"""
diff --git a/pandas/core/series.py b/pandas/core/series.py
index e8bf87a39b572..cd2db659fbd0e 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -4637,7 +4637,7 @@ def memory_usage(self, index=True, deep=False):
>>> s.memory_usage()
144
>>> s.memory_usage(deep=True)
- 260
+ 244
"""
v = super().memory_usage(deep=deep)
if index:
diff --git a/pandas/plotting/_matplotlib/boxplot.py b/pandas/plotting/_matplotlib/boxplot.py
index 53ef97bbe9a72..b33daf39de37c 100644
--- a/pandas/plotting/_matplotlib/boxplot.py
+++ b/pandas/plotting/_matplotlib/boxplot.py
@@ -294,7 +294,7 @@ def maybe_color_bp(bp, **kwds):
def plot_group(keys, values, ax):
keys = [pprint_thing(x) for x in keys]
- values = [np.asarray(remove_na_arraylike(v)) for v in values]
+ values = [np.asarray(remove_na_arraylike(v), dtype=object) for v in values]
bp = ax.boxplot(values, **kwds)
if fontsize is not None:
ax.tick_params(axis="both", labelsize=fontsize)
diff --git a/requirements-dev.txt b/requirements-dev.txt
index 66e72641cd5bb..3d0778b74ccbd 100644
--- a/requirements-dev.txt
+++ b/requirements-dev.txt
@@ -1,7 +1,7 @@
# This file is auto-generated from environment.yml, do not modify.
# See that file for comments about the need/usage of each dependency.
-numpy>=1.16.5,<1.19.0
+numpy>=1.16.5
python-dateutil>=2.7.3
pytz
asv
@@ -47,7 +47,7 @@ bottleneck>=1.2.1
ipykernel
ipython>=7.11.1
jinja2
-matplotlib>=2.2.2,<3.3.0
+matplotlib>=2.2.2
numexpr>=2.6.8
scipy>=1.2
numba>=0.46.0
| - [x] closes #35779
| https://api.github.com/repos/pandas-dev/pandas/pulls/35780 | 2020-08-18T01:10:26Z | 2020-08-19T20:40:05Z | 2020-08-19T20:40:05Z | 2020-08-19T21:38:01Z |
Added numba as an argument | diff --git a/doc/source/user_guide/computation.rst b/doc/source/user_guide/computation.rst
index d7875e5b8d861..151ef36be7c98 100644
--- a/doc/source/user_guide/computation.rst
+++ b/doc/source/user_guide/computation.rst
@@ -361,6 +361,9 @@ compute the mean absolute deviation on a rolling basis:
@savefig rolling_apply_ex.png
s.rolling(window=60).apply(mad, raw=True).plot(style='k')
+Using the Numba engine
+~~~~~~~~~~~~~~~~~~~~~~
+
.. versionadded:: 1.0
Additionally, :meth:`~Rolling.apply` can leverage `Numba <https://numba.pydata.org/>`__
diff --git a/doc/source/user_guide/enhancingperf.rst b/doc/source/user_guide/enhancingperf.rst
index 24fcb369804c6..9e101c1a20371 100644
--- a/doc/source/user_guide/enhancingperf.rst
+++ b/doc/source/user_guide/enhancingperf.rst
@@ -373,6 +373,13 @@ nicer interface by passing/returning pandas objects.
In this example, using Numba was faster than Cython.
+Numba as an argument
+~~~~~~~~~~~~~~~~~~~~
+
+Additionally, we can leverage the power of `Numba <https://numba.pydata.org/>`__
+by calling it as an argument in :meth:`~Rolling.apply`. See :ref:`Computation tools
+<stats.rolling_apply>` for an extensive example.
+
Vectorize
~~~~~~~~~
| - [x] closes #35550
- [x] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
| https://api.github.com/repos/pandas-dev/pandas/pulls/35778 | 2020-08-17T23:30:02Z | 2020-09-01T23:52:09Z | 2020-09-01T23:52:08Z | 2020-09-02T08:08:16Z |
BUG: DataFrame.apply with result_type=reduce incorrect index | diff --git a/doc/source/whatsnew/v1.1.2.rst b/doc/source/whatsnew/v1.1.2.rst
index 7bd547bf03a87..c1b73c60be92b 100644
--- a/doc/source/whatsnew/v1.1.2.rst
+++ b/doc/source/whatsnew/v1.1.2.rst
@@ -25,6 +25,7 @@ Fixed regressions
Bug fixes
~~~~~~~~~
- Bug in :meth:`DataFrame.eval` with ``object`` dtype column binary operations (:issue:`35794`)
+- Bug in :meth:`DataFrame.apply` with ``result_type="reduce"`` returning with incorrect index (:issue:`35683`)
-
-
diff --git a/pandas/core/apply.py b/pandas/core/apply.py
index 6d44cf917a07a..99a9e1377563c 100644
--- a/pandas/core/apply.py
+++ b/pandas/core/apply.py
@@ -340,7 +340,10 @@ def wrap_results_for_axis(
if self.result_type == "reduce":
# e.g. test_apply_dict GH#8735
- return self.obj._constructor_sliced(results)
+ res = self.obj._constructor_sliced(results)
+ res.index = res_index
+ return res
+
elif self.result_type is None and all(
isinstance(x, dict) for x in results.values()
):
diff --git a/pandas/tests/frame/apply/test_frame_apply.py b/pandas/tests/frame/apply/test_frame_apply.py
index 538978358c8e7..5a1e448beb40f 100644
--- a/pandas/tests/frame/apply/test_frame_apply.py
+++ b/pandas/tests/frame/apply/test_frame_apply.py
@@ -1541,3 +1541,12 @@ def func(row):
tm.assert_frame_equal(result, expected)
tm.assert_frame_equal(df, result)
+
+
+def test_apply_empty_list_reduce():
+ # GH#35683 get columns correct
+ df = pd.DataFrame([[1, 2], [3, 4], [5, 6], [7, 8], [9, 10]], columns=["a", "b"])
+
+ result = df.apply(lambda x: [], result_type="reduce")
+ expected = pd.Series({"a": [], "b": []}, dtype=object)
+ tm.assert_series_equal(result, expected)
| - [ ] closes #xxxx
- [x] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] whatsnew entry
This may xref #35683 pending discussion there. | https://api.github.com/repos/pandas-dev/pandas/pulls/35777 | 2020-08-17T23:18:40Z | 2020-08-22T02:04:21Z | 2020-08-22T02:04:21Z | 2021-11-20T23:22:44Z |
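The fixed labeling is exactly what `test_apply_empty_list_reduce` asserts; a minimal sketch, assuming a pandas version with this patch:

```python
import pandas as pd

df = pd.DataFrame([[1, 2], [3, 4], [5, 6], [7, 8], [9, 10]], columns=["a", "b"])

# GH#35683: with result_type="reduce" the resulting Series should be labeled
# by the applied axis (here the column names), not by a fresh RangeIndex.
result = df.apply(lambda x: [], result_type="reduce")
print(result.index.tolist())  # ['a', 'b']
```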
Changed 'int' type to 'integer' in to_numeric docstring | diff --git a/pandas/core/arrays/sparse/array.py b/pandas/core/arrays/sparse/array.py
index d8db196e4b92f..1531f7b292365 100644
--- a/pandas/core/arrays/sparse/array.py
+++ b/pandas/core/arrays/sparse/array.py
@@ -234,7 +234,7 @@ class SparseArray(PandasObject, ExtensionArray, ExtensionOpsMixin):
3. ``data.dtype.fill_value`` if `fill_value` is None and `dtype`
is not a ``SparseDtype`` and `data` is a ``SparseArray``.
- kind : {'int', 'block'}, default 'int'
+ kind : {'integer', 'block'}, default 'integer'
The type of storage for sparse locations.
* 'block': Stores a `block` and `block_length` for each
diff --git a/pandas/core/tools/numeric.py b/pandas/core/tools/numeric.py
index 41548931f17f8..cff4695603d06 100644
--- a/pandas/core/tools/numeric.py
+++ b/pandas/core/tools/numeric.py
@@ -40,13 +40,13 @@ def to_numeric(arg, errors="raise", downcast=None):
- If 'raise', then invalid parsing will raise an exception.
- If 'coerce', then invalid parsing will be set as NaN.
- If 'ignore', then invalid parsing will return the input.
- downcast : {'int', 'signed', 'unsigned', 'float'}, default None
+ downcast : {'integer', 'signed', 'unsigned', 'float'}, default None
If not None, and if the data has been successfully cast to a
numerical dtype (or if the data was numeric to begin with),
downcast that resulting data to the smallest numerical dtype
possible according to the following rules:
- - 'int' or 'signed': smallest signed int dtype (min.: np.int8)
+ - 'integer' or 'signed': smallest signed int dtype (min.: np.int8)
- 'unsigned': smallest unsigned int dtype (min.: np.uint8)
- 'float': smallest float dtype (min.: np.float32)
| - [x] closes #35760
- [ ] tests added / passed
- [ ] passes `black pandas`
- [ ] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/35776 | 2020-08-17T22:44:38Z | 2020-08-19T13:16:42Z | 2020-08-19T13:16:42Z | 2020-08-19T13:16:56Z |
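The corrected docstring value can be verified against `to_numeric` itself — a quick sketch of the `downcast` rules the docstring describes:

```python
import pandas as pd

s = pd.Series([1, 250, 3])

# "integer" (or "signed") downcasts to the smallest signed int dtype that
# fits; 250 overflows int8, so the result lands on int16.
signed = pd.to_numeric(s, downcast="integer")

# "unsigned" targets the smallest unsigned dtype; 250 fits in uint8.
unsigned = pd.to_numeric(s, downcast="unsigned")
print(signed.dtype, unsigned.dtype)  # int16 uint8
```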
TST: Verify whether read-only datetime64 array can be factorized (35650) | diff --git a/pandas/tests/test_algos.py b/pandas/tests/test_algos.py
index a080bf0feaebc..04ad5b479016f 100644
--- a/pandas/tests/test_algos.py
+++ b/pandas/tests/test_algos.py
@@ -251,6 +251,19 @@ def test_object_factorize(self, writable):
tm.assert_numpy_array_equal(codes, expected_codes)
tm.assert_numpy_array_equal(uniques, expected_uniques)
+ def test_datetime64_factorize(self, writable):
+ # GH35650 Verify whether read-only datetime64 array can be factorized
+ data = np.array([np.datetime64("2020-01-01T00:00:00.000")])
+ data.setflags(write=writable)
+ expected_codes = np.array([0], dtype=np.int64)
+ expected_uniques = np.array(
+ ["2020-01-01T00:00:00.000000000"], dtype="datetime64[ns]"
+ )
+
+ codes, uniques = pd.factorize(data)
+ tm.assert_numpy_array_equal(codes, expected_codes)
+ tm.assert_numpy_array_equal(uniques, expected_uniques)
+
def test_deprecate_order(self):
# gh 19727 - check warning is raised for deprecated keyword, order.
# Test not valid once order keyword is removed.
| - [x] closes #35650
- [x] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/35775 | 2020-08-17T22:21:28Z | 2020-08-26T12:44:26Z | 2020-08-26T12:44:25Z | 2020-08-26T12:44:51Z |
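The scenario covered by the new test can be sketched outside the test suite as well — factorizing a read-only `datetime64` array, assuming a pandas version where GH#35650 is fixed:

```python
import numpy as np
import pandas as pd

# GH#35650: factorize must also accept arrays with writeable=False.
data = np.array(
    ["2020-01-01T00:00:00.000", "2020-01-02", "2020-01-01T00:00:00.000"],
    dtype="datetime64[ms]",
)
data.setflags(write=False)

codes, uniques = pd.factorize(data)
print(codes.tolist())  # [0, 1, 0]
```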
REGR: follow-up to return copy with df.interpolate on empty DataFrame | diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index 9cbe2f714fd57..fe412bc0ce937 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -6893,7 +6893,7 @@ def interpolate(
obj = self.T if should_transpose else self
if obj.empty:
- return self
+ return self.copy()
if method not in fillna_methods:
axis = self._info_axis_number
diff --git a/pandas/tests/frame/methods/test_interpolate.py b/pandas/tests/frame/methods/test_interpolate.py
index 3c9d79397e4bd..6b86a13fcf1b9 100644
--- a/pandas/tests/frame/methods/test_interpolate.py
+++ b/pandas/tests/frame/methods/test_interpolate.py
@@ -38,6 +38,7 @@ def test_interp_empty(self):
# https://github.com/pandas-dev/pandas/issues/35598
df = DataFrame()
result = df.interpolate()
+ assert result is not df
expected = df
tm.assert_frame_equal(result, expected)
| follow-up to #35543 | https://api.github.com/repos/pandas-dev/pandas/pulls/35774 | 2020-08-17T19:27:18Z | 2020-08-18T12:54:56Z | 2020-08-18T12:54:56Z | 2020-08-18T13:06:35Z |
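The one-line change above is about object identity, not values — a minimal sketch of the contract the added assertion enforces, assuming a pandas version with this follow-up:

```python
import pandas as pd

# GH#35598 follow-up: interpolate on an empty frame is a no-op value-wise,
# but it must hand back a copy rather than the original object.
df = pd.DataFrame()
result = df.interpolate()
print(result is df)  # False
```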
Backport PR #35750 on branch 1.1.x (Pass check_dtype to assert_extension_array_equal) | diff --git a/doc/source/whatsnew/v1.1.1.rst b/doc/source/whatsnew/v1.1.1.rst
index 10bdfdc10c87a..9c92c803fa677 100644
--- a/doc/source/whatsnew/v1.1.1.rst
+++ b/doc/source/whatsnew/v1.1.1.rst
@@ -37,6 +37,7 @@ Bug fixes
~~~~~~~~~
- Bug in ``Styler`` whereby `cell_ids` argument had no effect due to other recent changes (:issue:`35588`) (:issue:`35663`).
+- Bug in :func:`pandas.testing.assert_series_equal` and :func:`pandas.testing.assert_frame_equal` where extension dtypes were not ignored when ``check_dtypes`` was set to ``False`` (:issue:`35715`).
Categorical
^^^^^^^^^^^
diff --git a/pandas/_testing.py b/pandas/_testing.py
index 713f29466f097..ef6232fa6d575 100644
--- a/pandas/_testing.py
+++ b/pandas/_testing.py
@@ -1377,12 +1377,18 @@ def assert_series_equal(
)
elif is_extension_array_dtype(left.dtype) and is_extension_array_dtype(right.dtype):
assert_extension_array_equal(
- left._values, right._values, index_values=np.asarray(left.index)
+ left._values,
+ right._values,
+ check_dtype=check_dtype,
+ index_values=np.asarray(left.index),
)
elif needs_i8_conversion(left.dtype) or needs_i8_conversion(right.dtype):
# DatetimeArray or TimedeltaArray
assert_extension_array_equal(
- left._values, right._values, index_values=np.asarray(left.index)
+ left._values,
+ right._values,
+ check_dtype=check_dtype,
+ index_values=np.asarray(left.index),
)
else:
_testing.assert_almost_equal(
diff --git a/pandas/tests/util/test_assert_extension_array_equal.py b/pandas/tests/util/test_assert_extension_array_equal.py
index d9fdf1491c328..f9259beab5d13 100644
--- a/pandas/tests/util/test_assert_extension_array_equal.py
+++ b/pandas/tests/util/test_assert_extension_array_equal.py
@@ -1,6 +1,7 @@
import numpy as np
import pytest
+from pandas import array
import pandas._testing as tm
from pandas.core.arrays.sparse import SparseArray
@@ -102,3 +103,11 @@ def test_assert_extension_array_equal_non_extension_array(side):
with pytest.raises(AssertionError, match=msg):
tm.assert_extension_array_equal(*args)
+
+
+@pytest.mark.parametrize("right_dtype", ["Int32", "int64"])
+def test_assert_extension_array_equal_ignore_dtype_mismatch(right_dtype):
+ # https://github.com/pandas-dev/pandas/issues/35715
+ left = array([1, 2, 3], dtype="Int64")
+ right = array([1, 2, 3], dtype=right_dtype)
+ tm.assert_extension_array_equal(left, right, check_dtype=False)
diff --git a/pandas/tests/util/test_assert_frame_equal.py b/pandas/tests/util/test_assert_frame_equal.py
index fe3e1ff906919..3aa3c64923b14 100644
--- a/pandas/tests/util/test_assert_frame_equal.py
+++ b/pandas/tests/util/test_assert_frame_equal.py
@@ -260,3 +260,11 @@ def test_assert_frame_equal_interval_dtype_mismatch():
with pytest.raises(AssertionError, match=msg):
tm.assert_frame_equal(left, right, check_dtype=True)
+
+
+@pytest.mark.parametrize("right_dtype", ["Int32", "int64"])
+def test_assert_frame_equal_ignore_extension_dtype_mismatch(right_dtype):
+ # https://github.com/pandas-dev/pandas/issues/35715
+ left = pd.DataFrame({"a": [1, 2, 3]}, dtype="Int64")
+ right = pd.DataFrame({"a": [1, 2, 3]}, dtype=right_dtype)
+ tm.assert_frame_equal(left, right, check_dtype=False)
diff --git a/pandas/tests/util/test_assert_series_equal.py b/pandas/tests/util/test_assert_series_equal.py
index a7b5aeac560e4..f3c66052b1904 100644
--- a/pandas/tests/util/test_assert_series_equal.py
+++ b/pandas/tests/util/test_assert_series_equal.py
@@ -296,3 +296,11 @@ def test_series_equal_exact_for_nonnumeric():
tm.assert_series_equal(s1, s3, check_exact=True)
with pytest.raises(AssertionError):
tm.assert_series_equal(s3, s1, check_exact=True)
+
+
+@pytest.mark.parametrize("right_dtype", ["Int32", "int64"])
+def test_assert_series_equal_ignore_extension_dtype_mismatch(right_dtype):
+ # https://github.com/pandas-dev/pandas/issues/35715
+ left = pd.Series([1, 2, 3], dtype="Int64")
+ right = pd.Series([1, 2, 3], dtype=right_dtype)
+ tm.assert_series_equal(left, right, check_dtype=False)
| Backport PR #35750: Pass check_dtype to assert_extension_array_equal | https://api.github.com/repos/pandas-dev/pandas/pulls/35773 | 2020-08-17T18:31:38Z | 2020-08-17T20:37:01Z | 2020-08-17T20:37:00Z | 2020-08-17T20:37:01Z |
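The fix backported above makes `check_dtype=False` effective for extension dtypes by forwarding it to `assert_extension_array_equal`. A minimal sketch of the fixed behavior (assuming a pandas version with this patch, i.e. ≥ 1.1.1):

```python
import pandas as pd
import pandas.testing as tm

# Nullable extension dtype vs. plain NumPy dtype: same values, different dtypes.
left = pd.Series([1, 2, 3], dtype="Int64")
right = pd.Series([1, 2, 3], dtype="int64")

# Before the fix, check_dtype was not passed through for extension arrays,
# so this comparison raised an AssertionError despite check_dtype=False.
tm.assert_series_equal(left, right, check_dtype=False)
```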
CI: close sockets in SQL tests | diff --git a/pandas/tests/io/test_sql.py b/pandas/tests/io/test_sql.py
index 29b787d39c09d..a7e3162ed7b73 100644
--- a/pandas/tests/io/test_sql.py
+++ b/pandas/tests/io/test_sql.py
@@ -263,7 +263,8 @@ def _get_all_tables(self):
return table_list
def _close_conn(self):
- pass
+ # https://docs.sqlalchemy.org/en/13/core/connections.html#engine-disposal
+ self.conn.dispose()
class PandasSQLTest:
@@ -1242,7 +1243,7 @@ class _TestSQLAlchemy(SQLAlchemyMixIn, PandasSQLTest):
def setup_class(cls):
cls.setup_import()
cls.setup_driver()
- conn = cls.connect()
+ conn = cls.conn = cls.connect()
conn.connect()
def load_test_data_and_sql(self):
| closes #35660
closes #29514
Broken off from #35711 | https://api.github.com/repos/pandas-dev/pandas/pulls/35772 | 2020-08-17T18:12:28Z | 2020-08-17T20:50:14Z | 2020-08-17T20:50:14Z | 2020-08-18T15:46:19Z |
Backport PR #35519 on branch 1.1.x (REF: StringArray._from_sequence, use less memory) | diff --git a/asv_bench/benchmarks/strings.py b/asv_bench/benchmarks/strings.py
index d7fb2775376c0..2023858181baa 100644
--- a/asv_bench/benchmarks/strings.py
+++ b/asv_bench/benchmarks/strings.py
@@ -7,6 +7,21 @@
from .pandas_vb_common import tm
+class Construction:
+
+ params = ["str", "string"]
+ param_names = ["dtype"]
+
+ def setup(self, dtype):
+ self.data = tm.rands_array(nchars=10 ** 5, size=10)
+
+ def time_construction(self, dtype):
+ Series(self.data, dtype=dtype)
+
+ def peakmem_construction(self, dtype):
+ Series(self.data, dtype=dtype)
+
+
class Methods:
def setup(self):
self.s = Series(tm.makeStringIndex(10 ** 5))
diff --git a/doc/source/whatsnew/v1.1.1.rst b/doc/source/whatsnew/v1.1.1.rst
index d93cd6edb983a..10bdfdc10c87a 100644
--- a/doc/source/whatsnew/v1.1.1.rst
+++ b/doc/source/whatsnew/v1.1.1.rst
@@ -75,6 +75,11 @@ Categorical
- Bug in :class:`DataFrame` constructor failing to raise ``ValueError`` in some cases when data and index have mismatched lengths (:issue:`33437`)
-
+**Strings**
+
+- fix memory usage issue when instantiating large :class:`pandas.arrays.StringArray` (:issue:`35499`)
+
+
.. ---------------------------------------------------------------------------
.. _whatsnew_111.contributors:
diff --git a/pandas/_libs/lib.pyx b/pandas/_libs/lib.pyx
index 5fa91ffee8ea8..eadfcefaac73d 100644
--- a/pandas/_libs/lib.pyx
+++ b/pandas/_libs/lib.pyx
@@ -618,35 +618,52 @@ def astype_intsafe(ndarray[object] arr, new_dtype):
@cython.wraparound(False)
@cython.boundscheck(False)
-def astype_str(arr: ndarray, skipna: bool=False) -> ndarray[object]:
- """
- Convert all elements in an array to string.
+cpdef ndarray[object] ensure_string_array(
+ arr,
+ object na_value=np.nan,
+ bint convert_na_value=True,
+ bint copy=True,
+ bint skipna=True,
+):
+ """Returns a new numpy array with object dtype and only strings and na values.
Parameters
----------
- arr : ndarray
- The array whose elements we are casting.
- skipna : bool, default False
+ arr : array-like
+ The values to be converted to str, if needed.
+ na_value : Any
+ The value to use for na. For example, np.nan or pd.NA.
+ convert_na_value : bool, default True
+ If False, existing na values will be used unchanged in the new array.
+ copy : bool, default True
+ Whether to ensure that a new array is returned.
+ skipna : bool, default True
Whether or not to coerce nulls to their stringified form
- (e.g. NaN becomes 'nan').
+ (e.g. if False, NaN becomes 'nan').
Returns
-------
ndarray
- A new array with the input array's elements casted.
+ An array with the input array's elements casted to str or nan-like.
"""
cdef:
- object arr_i
- Py_ssize_t i, n = arr.size
- ndarray[object] result = np.empty(n, dtype=object)
-
- for i in range(n):
- arr_i = arr[i]
+ Py_ssize_t i = 0, n = len(arr)
- if not (skipna and checknull(arr_i)):
- arr_i = str(arr_i)
+ result = np.asarray(arr, dtype="object")
+ if copy and result is arr:
+ result = result.copy()
- result[i] = arr_i
+ for i in range(n):
+ val = result[i]
+ if not checknull(val):
+ result[i] = str(val)
+ else:
+ if convert_na_value:
+ val = na_value
+ if skipna:
+ result[i] = val
+ else:
+ result[i] = str(val)
return result
diff --git a/pandas/core/arrays/string_.py b/pandas/core/arrays/string_.py
index fddd3af858f77..a4778869aee24 100644
--- a/pandas/core/arrays/string_.py
+++ b/pandas/core/arrays/string_.py
@@ -178,11 +178,10 @@ class StringArray(PandasArray):
def __init__(self, values, copy=False):
values = extract_array(values)
- skip_validation = isinstance(values, type(self))
super().__init__(values, copy=copy)
self._dtype = StringDtype()
- if not skip_validation:
+ if not isinstance(values, type(self)):
self._validate()
def _validate(self):
@@ -201,23 +200,11 @@ def _from_sequence(cls, scalars, dtype=None, copy=False):
assert dtype == "string"
result = np.asarray(scalars, dtype="object")
- if copy and result is scalars:
- result = result.copy()
-
- # Standardize all missing-like values to NA
- # TODO: it would be nice to do this in _validate / lib.is_string_array
- # We are already doing a scan over the values there.
- na_values = isna(result)
- has_nans = na_values.any()
- if has_nans and result is scalars:
- # force a copy now, if we haven't already
- result = result.copy()
-
- # convert to str, then to object to avoid dtype like '<U3', then insert na_value
- result = np.asarray(result, dtype=str)
- result = np.asarray(result, dtype="object")
- if has_nans:
- result[na_values] = StringDtype.na_value
+
+ # convert non-na-likes to str, and nan-likes to StringDtype.na_value
+ result = lib.ensure_string_array(
+ result, na_value=StringDtype.na_value, copy=copy
+ )
return cls(result)
diff --git a/pandas/core/dtypes/cast.py b/pandas/core/dtypes/cast.py
index 228329898b6a4..2697f42eb05a4 100644
--- a/pandas/core/dtypes/cast.py
+++ b/pandas/core/dtypes/cast.py
@@ -916,7 +916,7 @@ def astype_nansafe(arr, dtype, copy: bool = True, skipna: bool = False):
dtype = pandas_dtype(dtype)
if issubclass(dtype.type, str):
- return lib.astype_str(arr.ravel(), skipna=skipna).reshape(arr.shape)
+ return lib.ensure_string_array(arr.ravel(), skipna=skipna).reshape(arr.shape)
elif is_datetime64_dtype(arr):
if is_object_dtype(dtype):
@@ -1608,19 +1608,11 @@ def construct_1d_ndarray_preserving_na(
>>> construct_1d_ndarray_preserving_na([1.0, 2.0, None], dtype=np.dtype('str'))
array(['1.0', '2.0', None], dtype=object)
"""
- subarr = np.array(values, dtype=dtype, copy=copy)
if dtype is not None and dtype.kind == "U":
- # GH-21083
- # We can't just return np.array(subarr, dtype='str') since
- # NumPy will convert the non-string objects into strings
- # Including NA values. Se we have to go
- # string -> object -> update NA, which requires an
- # additional pass over the data.
- na_values = isna(values)
- subarr2 = subarr.astype(object)
- subarr2[na_values] = np.asarray(values, dtype=object)[na_values]
- subarr = subarr2
+ subarr = lib.ensure_string_array(values, convert_na_value=False, copy=copy)
+ else:
+ subarr = np.array(values, dtype=dtype, copy=copy)
return subarr
diff --git a/pandas/tests/arrays/string_/test_string.py b/pandas/tests/arrays/string_/test_string.py
index 6f9a1a5be4c43..efd5d29ae0717 100644
--- a/pandas/tests/arrays/string_/test_string.py
+++ b/pandas/tests/arrays/string_/test_string.py
@@ -206,12 +206,16 @@ def test_constructor_raises():
@pytest.mark.parametrize("copy", [True, False])
def test_from_sequence_no_mutate(copy):
- a = np.array(["a", np.nan], dtype=object)
- original = a.copy()
- result = pd.arrays.StringArray._from_sequence(a, copy=copy)
- expected = pd.arrays.StringArray(np.array(["a", pd.NA], dtype=object))
+ nan_arr = np.array(["a", np.nan], dtype=object)
+ na_arr = np.array(["a", pd.NA], dtype=object)
+
+ result = pd.arrays.StringArray._from_sequence(nan_arr, copy=copy)
+ expected = pd.arrays.StringArray(na_arr)
+
tm.assert_extension_array_equal(result, expected)
- tm.assert_numpy_array_equal(a, original)
+
+ expected = nan_arr if copy else na_arr
+ tm.assert_numpy_array_equal(nan_arr, expected)
def test_astype_int():
| Backport PR #35519: REF: StringArray._from_sequence, use less memory | https://api.github.com/repos/pandas-dev/pandas/pulls/35770 | 2020-08-17T14:38:44Z | 2020-08-17T15:32:36Z | 2020-08-17T15:32:36Z | 2020-08-17T15:32:36Z |
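The refactor above routes `StringArray` construction through a single-pass `ensure_string_array`, which standardizes missing values to `pd.NA` without mutating the input. A quick sketch of the user-visible contract (behavior assumed for pandas ≥ 1.1.1, mirroring the `test_from_sequence_no_mutate` test in the patch):

```python
import numpy as np
import pandas as pd

nan_arr = np.array(["a", np.nan], dtype=object)

# Construction coerces elements to str and standardizes NaN to pd.NA...
arr = pd.array(nan_arr, dtype="string")
assert arr[0] == "a"
assert arr[1] is pd.NA

# ...while leaving the input array untouched (still np.nan, not pd.NA).
assert np.isnan(nan_arr[1])
```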
BUG: Raise ValueError instead of bare Exception in sanitize_array | diff --git a/doc/source/whatsnew/v1.2.0.rst b/doc/source/whatsnew/v1.2.0.rst
index 57e3c9dd66afb..1336fd7d83f7e 100644
--- a/doc/source/whatsnew/v1.2.0.rst
+++ b/doc/source/whatsnew/v1.2.0.rst
@@ -476,6 +476,7 @@ Other
- Bug in :meth:`DataFrame.replace` and :meth:`Series.replace` with numeric values and string ``to_replace`` (:issue:`34789`)
- Fixed metadata propagation in the :class:`Series.dt` accessor (:issue:`28283`)
- Bug in :meth:`Index.union` behaving differently depending on whether operand is a :class:`Index` or other list-like (:issue:`36384`)
+- Passing an array with 2 or more dimensions to the :class:`Series` constructor now raises the more specific ``ValueError``, from a bare ``Exception`` previously (:issue:`35744`)
.. ---------------------------------------------------------------------------
diff --git a/pandas/core/construction.py b/pandas/core/construction.py
index 4751f6076f869..7901e150a7ff4 100644
--- a/pandas/core/construction.py
+++ b/pandas/core/construction.py
@@ -510,7 +510,7 @@ def sanitize_array(
elif subarr.ndim > 1:
if isinstance(data, np.ndarray):
- raise Exception("Data must be 1-dimensional")
+ raise ValueError("Data must be 1-dimensional")
else:
subarr = com.asarray_tuplesafe(data, dtype=dtype)
diff --git a/pandas/tests/series/test_constructors.py b/pandas/tests/series/test_constructors.py
index 4ad4917533422..a950ca78fc742 100644
--- a/pandas/tests/series/test_constructors.py
+++ b/pandas/tests/series/test_constructors.py
@@ -113,8 +113,8 @@ def test_constructor(self, datetime_series):
with tm.assert_produces_warning(DeprecationWarning, check_stacklevel=False):
assert not Series().index._is_all_dates
- # exception raised is of type Exception
- with pytest.raises(Exception, match="Data must be 1-dimensional"):
+ # exception raised is of type ValueError GH35744
+ with pytest.raises(ValueError, match="Data must be 1-dimensional"):
Series(np.random.randn(3, 3), index=np.arange(3))
mixed.name = "Series"
| - [x] closes #35744
- [x] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/35769 | 2020-08-17T13:16:03Z | 2020-10-10T16:46:25Z | 2020-10-10T16:46:24Z | 2021-01-25T21:15:38Z |
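The change above narrows the exception type without changing the message, so callers can catch `ValueError` instead of a bare `Exception`. A minimal sketch (behavior as of pandas ≥ 1.2, where this landed):

```python
import numpy as np
import pandas as pd

msg = ""
try:
    # 2-D input to the Series constructor is invalid.
    pd.Series(np.random.randn(3, 3))
except ValueError as err:  # was a bare ``Exception`` before this change
    msg = str(err)

assert "1-dimensional" in msg
```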
Revert "REGR: Fix interpolation on empty dataframe" | diff --git a/doc/source/whatsnew/v1.1.1.rst b/doc/source/whatsnew/v1.1.1.rst
index 4fb98bc7c1217..d93cd6edb983a 100644
--- a/doc/source/whatsnew/v1.1.1.rst
+++ b/doc/source/whatsnew/v1.1.1.rst
@@ -19,7 +19,6 @@ Fixed regressions
- Fixed regression where :func:`read_csv` would raise a ``ValueError`` when ``pandas.options.mode.use_inf_as_na`` was set to ``True`` (:issue:`35493`)
- Fixed regression where :func:`pandas.testing.assert_series_equal` would raise an error when non-numeric dtypes were passed with ``check_exact=True`` (:issue:`35446`)
- Fixed regression in :class:`pandas.core.groupby.RollingGroupby` where column selection was ignored (:issue:`35486`)
-- Fixed regression where :meth:`DataFrame.interpolate` would raise a ``TypeError`` when the :class:`DataFrame` was empty (:issue:`35598`).
- Fixed regression in :meth:`DataFrame.shift` with ``axis=1`` and heterogeneous dtypes (:issue:`35488`)
- Fixed regression in :meth:`DataFrame.diff` with read-only data (:issue:`35559`)
- Fixed regression in ``.groupby(..).rolling(..)`` where a segfault would occur with ``center=True`` and an odd number of values (:issue:`35552`)
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index 9cbe2f714fd57..2219d54477d9e 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -6892,9 +6892,6 @@ def interpolate(
obj = self.T if should_transpose else self
- if obj.empty:
- return self
-
if method not in fillna_methods:
axis = self._info_axis_number
diff --git a/pandas/tests/frame/methods/test_interpolate.py b/pandas/tests/frame/methods/test_interpolate.py
index 3c9d79397e4bd..ddb5723e7bd3e 100644
--- a/pandas/tests/frame/methods/test_interpolate.py
+++ b/pandas/tests/frame/methods/test_interpolate.py
@@ -34,13 +34,6 @@ def test_interp_basic(self):
expected.loc[5, "B"] = 9
tm.assert_frame_equal(result, expected)
- def test_interp_empty(self):
- # https://github.com/pandas-dev/pandas/issues/35598
- df = DataFrame()
- result = df.interpolate()
- expected = df
- tm.assert_frame_equal(result, expected)
-
def test_interp_bad_method(self):
df = DataFrame(
{
| Reverts pandas-dev/pandas#35543 | https://api.github.com/repos/pandas-dev/pandas/pulls/35766 | 2020-08-17T11:21:11Z | 2020-08-17T19:15:46Z | null | 2020-12-02T15:33:15Z |
Backport PR #35697 on branch 1.1.x (REGR: Don't ignore compiled patterns in replace) | diff --git a/doc/source/whatsnew/v1.1.1.rst b/doc/source/whatsnew/v1.1.1.rst
index 565b4a014bd0c..d93cd6edb983a 100644
--- a/doc/source/whatsnew/v1.1.1.rst
+++ b/doc/source/whatsnew/v1.1.1.rst
@@ -26,6 +26,7 @@ Fixed regressions
- Fixed regression in :meth:`DataFrame.reset_index` would raise a ``ValueError`` on empty :class:`DataFrame` with a :class:`MultiIndex` with a ``datetime64`` dtype level (:issue:`35606`, :issue:`35657`)
- Fixed regression where :meth:`DataFrame.merge_asof` would raise a ``UnboundLocalError`` when ``left_index`` , ``right_index`` and ``tolerance`` were set (:issue:`35558`)
- Fixed regression in ``.groupby(..).rolling(..)`` where a custom ``BaseIndexer`` would be ignored (:issue:`35557`)
+- Fixed regression in :meth:`DataFrame.replace` and :meth:`Series.replace` where compiled regular expressions would be ignored during replacement (:issue:`35680`)
- Fixed regression in :meth:`~pandas.core.groupby.DataFrameGroupBy.agg` where a list of functions would produce the wrong results if at least one of the functions did not aggregate. (:issue:`35490`)
.. ---------------------------------------------------------------------------
diff --git a/pandas/core/internals/managers.py b/pandas/core/internals/managers.py
index e6e2b06e1873e..4c3805f812bb0 100644
--- a/pandas/core/internals/managers.py
+++ b/pandas/core/internals/managers.py
@@ -2,7 +2,17 @@
import itertools
import operator
import re
-from typing import DefaultDict, Dict, List, Optional, Sequence, Tuple, TypeVar, Union
+from typing import (
+ DefaultDict,
+ Dict,
+ List,
+ Optional,
+ Pattern,
+ Sequence,
+ Tuple,
+ TypeVar,
+ Union,
+)
import warnings
import numpy as np
@@ -1922,7 +1932,10 @@ def _merge_blocks(
def _compare_or_regex_search(
- a: ArrayLike, b: Scalar, regex: bool = False, mask: Optional[ArrayLike] = None
+ a: ArrayLike,
+ b: Union[Scalar, Pattern],
+ regex: bool = False,
+ mask: Optional[ArrayLike] = None,
) -> Union[ArrayLike, bool]:
"""
Compare two array_like inputs of the same shape or two scalar values
@@ -1933,7 +1946,7 @@ def _compare_or_regex_search(
Parameters
----------
a : array_like
- b : scalar
+ b : scalar or regex pattern
regex : bool, default False
mask : array_like or None (default)
@@ -1943,7 +1956,7 @@ def _compare_or_regex_search(
"""
def _check_comparison_types(
- result: Union[ArrayLike, bool], a: ArrayLike, b: Scalar,
+ result: Union[ArrayLike, bool], a: ArrayLike, b: Union[Scalar, Pattern],
):
"""
Raises an error if the two arrays (a,b) cannot be compared.
@@ -1964,7 +1977,7 @@ def _check_comparison_types(
else:
op = np.vectorize(
lambda x: bool(re.search(b, x))
- if isinstance(x, str) and isinstance(b, str)
+ if isinstance(x, str) and isinstance(b, (str, Pattern))
else False
)
diff --git a/pandas/tests/frame/methods/test_replace.py b/pandas/tests/frame/methods/test_replace.py
index a3f056dbf9648..8603bff0587b6 100644
--- a/pandas/tests/frame/methods/test_replace.py
+++ b/pandas/tests/frame/methods/test_replace.py
@@ -1573,3 +1573,11 @@ def test_replace_dict_category_type(self, input_category_df, expected_category_d
result = input_df.replace({"a": "z", "obj1": "obj9", "cat1": "catX"})
tm.assert_frame_equal(result, expected)
+
+ def test_replace_with_compiled_regex(self):
+ # https://github.com/pandas-dev/pandas/issues/35680
+ df = pd.DataFrame(["a", "b", "c"])
+ regex = re.compile("^a$")
+ result = df.replace({regex: "z"}, regex=True)
+ expected = pd.DataFrame(["z", "b", "c"])
+ tm.assert_frame_equal(result, expected)
diff --git a/pandas/tests/series/methods/test_replace.py b/pandas/tests/series/methods/test_replace.py
index 11802c59a29da..f78a28c66e946 100644
--- a/pandas/tests/series/methods/test_replace.py
+++ b/pandas/tests/series/methods/test_replace.py
@@ -1,3 +1,5 @@
+import re
+
import numpy as np
import pytest
@@ -415,3 +417,11 @@ def test_replace_extension_other(self):
# https://github.com/pandas-dev/pandas/issues/34530
ser = pd.Series(pd.array([1, 2, 3], dtype="Int64"))
ser.replace("", "") # no exception
+
+ def test_replace_with_compiled_regex(self):
+ # https://github.com/pandas-dev/pandas/issues/35680
+ s = pd.Series(["a", "b", "c"])
+ regex = re.compile("^a$")
+ result = s.replace({regex: "z"}, regex=True)
+ expected = pd.Series(["z", "b", "c"])
+ tm.assert_series_equal(result, expected)
| Backport PR #35697: REGR: Don't ignore compiled patterns in replace | https://api.github.com/repos/pandas-dev/pandas/pulls/35765 | 2020-08-17T11:00:57Z | 2020-08-17T12:21:40Z | 2020-08-17T12:21:40Z | 2020-08-17T12:21:40Z |
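The regression fixed above is exercised by the new tests: a pre-compiled `re.Pattern` passed via a dict should behave like the equivalent pattern string. A sketch mirroring the `test_replace_with_compiled_regex` test added in the patch (assumes pandas ≥ 1.1.1):

```python
import re
import pandas as pd

s = pd.Series(["a", "b", "c"])
regex = re.compile("^a$")

# In 1.1.0 the compiled pattern was silently ignored; fixed in 1.1.1.
result = s.replace({regex: "z"}, regex=True)
assert list(result) == ["z", "b", "c"]
```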
Backport PR #35543 on branch 1.1.x (REGR: Fix interpolation on empty dataframe) | diff --git a/doc/source/whatsnew/v1.1.1.rst b/doc/source/whatsnew/v1.1.1.rst
index 565b4a014bd0c..b1fd76157b9f1 100644
--- a/doc/source/whatsnew/v1.1.1.rst
+++ b/doc/source/whatsnew/v1.1.1.rst
@@ -19,6 +19,7 @@ Fixed regressions
- Fixed regression where :func:`read_csv` would raise a ``ValueError`` when ``pandas.options.mode.use_inf_as_na`` was set to ``True`` (:issue:`35493`)
- Fixed regression where :func:`pandas.testing.assert_series_equal` would raise an error when non-numeric dtypes were passed with ``check_exact=True`` (:issue:`35446`)
- Fixed regression in :class:`pandas.core.groupby.RollingGroupby` where column selection was ignored (:issue:`35486`)
+- Fixed regression where :meth:`DataFrame.interpolate` would raise a ``TypeError`` when the :class:`DataFrame` was empty (:issue:`35598`).
- Fixed regression in :meth:`DataFrame.shift` with ``axis=1`` and heterogeneous dtypes (:issue:`35488`)
- Fixed regression in :meth:`DataFrame.diff` with read-only data (:issue:`35559`)
- Fixed regression in ``.groupby(..).rolling(..)`` where a segfault would occur with ``center=True`` and an odd number of values (:issue:`35552`)
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index a11ee6b5d9846..3a386a8df7075 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -6799,6 +6799,9 @@ def interpolate(
obj = self.T if should_transpose else self
+ if obj.empty:
+ return self
+
if method not in fillna_methods:
axis = self._info_axis_number
diff --git a/pandas/tests/frame/methods/test_interpolate.py b/pandas/tests/frame/methods/test_interpolate.py
index ddb5723e7bd3e..3c9d79397e4bd 100644
--- a/pandas/tests/frame/methods/test_interpolate.py
+++ b/pandas/tests/frame/methods/test_interpolate.py
@@ -34,6 +34,13 @@ def test_interp_basic(self):
expected.loc[5, "B"] = 9
tm.assert_frame_equal(result, expected)
+ def test_interp_empty(self):
+ # https://github.com/pandas-dev/pandas/issues/35598
+ df = DataFrame()
+ result = df.interpolate()
+ expected = df
+ tm.assert_frame_equal(result, expected)
+
def test_interp_bad_method(self):
df = DataFrame(
{
| Backport PR #35543: REGR: Fix interpolation on empty dataframe | https://api.github.com/repos/pandas-dev/pandas/pulls/35764 | 2020-08-17T10:24:14Z | 2020-08-17T20:37:25Z | 2020-08-17T20:37:25Z | 2020-08-17T20:37:25Z |
MAINT: Initialize year to silence warning | diff --git a/pandas/_libs/tslibs/parsing.pyx b/pandas/_libs/tslibs/parsing.pyx
index 8429aebbd85b8..7478179df3b75 100644
--- a/pandas/_libs/tslibs/parsing.pyx
+++ b/pandas/_libs/tslibs/parsing.pyx
@@ -381,7 +381,8 @@ cdef inline object _parse_dateabbr_string(object date_string, datetime default,
object freq):
cdef:
object ret
- int year, quarter = -1, month, mnum, date_len
+ # year initialized to prevent compiler warnings
+ int year = -1, quarter = -1, month, mnum, date_len
# special handling for possibilities eg, 2Q2005, 2Q05, 2005Q1, 05Q1
assert isinstance(date_string, str)
| Initialize `year` to silence a compiler warning: the value is subtracted from, and the compiler cannot prove it is either initialized or never reached on every path.
closes #35622
- [X] closes #35622
- [X] passes `black pandas`
- [X] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
| https://api.github.com/repos/pandas-dev/pandas/pulls/35763 | 2020-08-17T09:47:13Z | 2020-08-17T18:27:45Z | 2020-08-17T18:27:45Z | 2020-08-17T18:27:49Z |
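The C-level warning silenced by this patch is the compile-time analogue of Python's `UnboundLocalError`: not every path that reaches the subtraction provably assigns `year` first. A rough Python analogue of the control flow (hypothetical function, for illustration only):

```python
def parse_year(text):
    # Only the digit branch assigns ``year``; any other input reaches the
    # subtraction with ``year`` unbound. This mirrors the path the C compiler
    # cannot rule out, which the patch avoids with ``int year = -1``.
    if text.isdigit():
        year = int(text)
    return year - 1
```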
Backport PR #35754 on branch 1.1.x (CI: Min Pytest Cov Version/Restrict xdist version) | diff --git a/ci/deps/azure-windows-36.yaml b/ci/deps/azure-windows-36.yaml
index 548660cabaa67..21b4e86918f3b 100644
--- a/ci/deps/azure-windows-36.yaml
+++ b/ci/deps/azure-windows-36.yaml
@@ -8,7 +8,7 @@ dependencies:
# tools
- cython>=0.29.16
- pytest>=5.0.1
- - pytest-xdist>=1.21
+ - pytest-xdist>=1.21,<2.0.0 # GH 35737
- hypothesis>=3.58.0
- pytest-azurepipelines
diff --git a/ci/deps/azure-windows-37.yaml b/ci/deps/azure-windows-37.yaml
index 5bbd0e2795d7e..287d6877b9810 100644
--- a/ci/deps/azure-windows-37.yaml
+++ b/ci/deps/azure-windows-37.yaml
@@ -8,7 +8,7 @@ dependencies:
# tools
- cython>=0.29.16
- pytest>=5.0.1
- - pytest-xdist>=1.21
+ - pytest-xdist>=1.21,<2.0.0 # GH 35737
- hypothesis>=3.58.0
- pytest-azurepipelines
diff --git a/ci/deps/travis-36-cov.yaml b/ci/deps/travis-36-cov.yaml
index 177e0d3f4c0af..2457c04e67759 100644
--- a/ci/deps/travis-36-cov.yaml
+++ b/ci/deps/travis-36-cov.yaml
@@ -10,7 +10,7 @@ dependencies:
- pytest>=5.0.1
- pytest-xdist>=1.21
- hypothesis>=3.58.0
- - pytest-cov # this is only needed in the coverage build
+ - pytest-cov>=2.10.1 # this is only needed in the coverage build, ref: GH 35737
# pandas dependencies
- beautifulsoup4
| Backport PR #35754: CI: Min Pytest Cov Version/Restrict xdist version | https://api.github.com/repos/pandas-dev/pandas/pulls/35761 | 2020-08-17T08:52:00Z | 2020-08-17T09:39:03Z | 2020-08-17T09:39:03Z | 2020-08-17T09:39:03Z |
PERF: Allow jitting of groupby agg loop | diff --git a/doc/source/whatsnew/v1.2.0.rst b/doc/source/whatsnew/v1.2.0.rst
index 3e6ed1cdf8f7e..09a5bcb0917c2 100644
--- a/doc/source/whatsnew/v1.2.0.rst
+++ b/doc/source/whatsnew/v1.2.0.rst
@@ -154,7 +154,7 @@ Deprecations
Performance improvements
~~~~~~~~~~~~~~~~~~~~~~~~
--
+- Performance improvement in :meth:`GroupBy.agg` with the ``numba`` engine (:issue:`35759`)
-
.. ---------------------------------------------------------------------------
diff --git a/pandas/core/groupby/generic.py b/pandas/core/groupby/generic.py
index 60e23b14eaf09..2d46a545ba979 100644
--- a/pandas/core/groupby/generic.py
+++ b/pandas/core/groupby/generic.py
@@ -69,19 +69,16 @@
GroupBy,
_agg_template,
_apply_docs,
+ _group_selection_context,
_transform_template,
get_groupby,
)
+from pandas.core.groupby.numba_ import generate_numba_func, split_for_numba
from pandas.core.indexes.api import Index, MultiIndex, all_indexes_same
import pandas.core.indexes.base as ibase
from pandas.core.internals import BlockManager, make_block
from pandas.core.series import Series
-from pandas.core.util.numba_ import (
- NUMBA_FUNC_CACHE,
- generate_numba_func,
- maybe_use_numba,
- split_for_numba,
-)
+from pandas.core.util.numba_ import NUMBA_FUNC_CACHE, maybe_use_numba
from pandas.plotting import boxplot_frame_groupby
@@ -229,6 +226,18 @@ def apply(self, func, *args, **kwargs):
)
def aggregate(self, func=None, *args, engine=None, engine_kwargs=None, **kwargs):
+ if maybe_use_numba(engine):
+ if not callable(func):
+ raise NotImplementedError(
+ "Numba engine can only be used with a single function."
+ )
+ with _group_selection_context(self):
+ data = self._selected_obj
+ result, index = self._aggregate_with_numba(
+ data.to_frame(), func, *args, engine_kwargs=engine_kwargs, **kwargs
+ )
+ return self.obj._constructor(result.ravel(), index=index, name=data.name)
+
relabeling = func is None
columns = None
if relabeling:
@@ -251,16 +260,11 @@ def aggregate(self, func=None, *args, engine=None, engine_kwargs=None, **kwargs)
return getattr(self, cyfunc)()
if self.grouper.nkeys > 1:
- return self._python_agg_general(
- func, *args, engine=engine, engine_kwargs=engine_kwargs, **kwargs
- )
+ return self._python_agg_general(func, *args, **kwargs)
try:
- return self._python_agg_general(
- func, *args, engine=engine, engine_kwargs=engine_kwargs, **kwargs
- )
+ return self._python_agg_general(func, *args, **kwargs)
except (ValueError, KeyError):
- # Do not catch Numba errors here, we want to raise and not fall back.
# TODO: KeyError is raised in _python_agg_general,
# see see test_groupby.test_basic
result = self._aggregate_named(func, *args, **kwargs)
@@ -936,12 +940,19 @@ class DataFrameGroupBy(GroupBy[DataFrame]):
)
def aggregate(self, func=None, *args, engine=None, engine_kwargs=None, **kwargs):
- relabeling, func, columns, order = reconstruct_func(func, **kwargs)
-
if maybe_use_numba(engine):
- return self._python_agg_general(
- func, *args, engine=engine, engine_kwargs=engine_kwargs, **kwargs
+ if not callable(func):
+ raise NotImplementedError(
+ "Numba engine can only be used with a single function."
+ )
+ with _group_selection_context(self):
+ data = self._selected_obj
+ result, index = self._aggregate_with_numba(
+ data, func, *args, engine_kwargs=engine_kwargs, **kwargs
)
+ return self.obj._constructor(result, index=index, columns=data.columns)
+
+ relabeling, func, columns, order = reconstruct_func(func, **kwargs)
result, how = self._aggregate(func, *args, **kwargs)
if how is None:
diff --git a/pandas/core/groupby/groupby.py b/pandas/core/groupby/groupby.py
index 0047877ef78ee..f96b488fb8d0d 100644
--- a/pandas/core/groupby/groupby.py
+++ b/pandas/core/groupby/groupby.py
@@ -34,7 +34,7 @@ class providing the base-class of operations.
from pandas._config.config import option_context
-from pandas._libs import Timestamp
+from pandas._libs import Timestamp, lib
import pandas._libs.groupby as libgroupby
from pandas._typing import F, FrameOrSeries, FrameOrSeriesUnion, Scalar
from pandas.compat.numpy import function as nv
@@ -61,11 +61,11 @@ class providing the base-class of operations.
import pandas.core.common as com
from pandas.core.frame import DataFrame
from pandas.core.generic import NDFrame
-from pandas.core.groupby import base, ops
+from pandas.core.groupby import base, numba_, ops
from pandas.core.indexes.api import CategoricalIndex, Index, MultiIndex
from pandas.core.series import Series
from pandas.core.sorting import get_group_index_sorter
-from pandas.core.util.numba_ import maybe_use_numba
+from pandas.core.util.numba_ import NUMBA_FUNC_CACHE
_common_see_also = """
See Also
@@ -384,7 +384,8 @@ class providing the base-class of operations.
- dict of axis labels -> functions, function names or list of such.
Can also accept a Numba JIT function with
- ``engine='numba'`` specified.
+ ``engine='numba'`` specified. Only passing a single function is supported
+ with this engine.
If the ``'numba'`` engine is chosen, the function must be
a user defined function with ``values`` and ``index`` as the
@@ -1053,12 +1054,43 @@ def _cython_agg_general(
return self._wrap_aggregated_output(output, index=self.grouper.result_index)
- def _python_agg_general(
- self, func, *args, engine="cython", engine_kwargs=None, **kwargs
- ):
+ def _aggregate_with_numba(self, data, func, *args, engine_kwargs=None, **kwargs):
+ """
+ Perform groupby aggregation routine with the numba engine.
+
+ This routine mimics the data splitting routine of the DataSplitter class
+ to generate the indices of each group in the sorted data and then passes the
+ data and indices into a Numba jitted function.
+ """
+ group_keys = self.grouper._get_group_keys()
+ labels, _, n_groups = self.grouper.group_info
+ sorted_index = get_group_index_sorter(labels, n_groups)
+ sorted_labels = algorithms.take_nd(labels, sorted_index, allow_fill=False)
+ sorted_data = data.take(sorted_index, axis=self.axis).to_numpy()
+ starts, ends = lib.generate_slices(sorted_labels, n_groups)
+ cache_key = (func, "groupby_agg")
+ if cache_key in NUMBA_FUNC_CACHE:
+            # Return an already compiled version of the groupby agg function if available
+ numba_agg_func = NUMBA_FUNC_CACHE[cache_key]
+ else:
+ numba_agg_func = numba_.generate_numba_agg_func(
+ tuple(args), kwargs, func, engine_kwargs
+ )
+ result = numba_agg_func(
+ sorted_data, sorted_index, starts, ends, len(group_keys), len(data.columns),
+ )
+ if cache_key not in NUMBA_FUNC_CACHE:
+ NUMBA_FUNC_CACHE[cache_key] = numba_agg_func
+
+ if self.grouper.nkeys > 1:
+ index = MultiIndex.from_tuples(group_keys, names=self.grouper.names)
+ else:
+ index = Index(group_keys, name=self.grouper.names[0])
+ return result, index
+
+ def _python_agg_general(self, func, *args, **kwargs):
func = self._is_builtin_func(func)
- if engine != "numba":
- f = lambda x: func(x, *args, **kwargs)
+ f = lambda x: func(x, *args, **kwargs)
# iterate through "columns" ex exclusions to populate output dict
output: Dict[base.OutputKey, np.ndarray] = {}
@@ -1069,21 +1101,11 @@ def _python_agg_general(
# agg_series below assumes ngroups > 0
continue
- if maybe_use_numba(engine):
- result, counts = self.grouper.agg_series(
- obj,
- func,
- *args,
- engine=engine,
- engine_kwargs=engine_kwargs,
- **kwargs,
- )
- else:
- try:
- # if this function is invalid for this dtype, we will ignore it.
- result, counts = self.grouper.agg_series(obj, f)
- except TypeError:
- continue
+ try:
+ # if this function is invalid for this dtype, we will ignore it.
+ result, counts = self.grouper.agg_series(obj, f)
+ except TypeError:
+ continue
assert result is not None
key = base.OutputKey(label=name, position=idx)
diff --git a/pandas/core/groupby/numba_.py b/pandas/core/groupby/numba_.py
new file mode 100644
index 0000000000000..aebe60f797fcd
--- /dev/null
+++ b/pandas/core/groupby/numba_.py
@@ -0,0 +1,172 @@
+"""Common utilities for Numba operations with groupby ops"""
+import inspect
+from typing import Any, Callable, Dict, Optional, Tuple
+
+import numpy as np
+
+from pandas._typing import FrameOrSeries, Scalar
+from pandas.compat._optional import import_optional_dependency
+
+from pandas.core.util.numba_ import (
+ NUMBA_FUNC_CACHE,
+ NumbaUtilError,
+ check_kwargs_and_nopython,
+ get_jit_arguments,
+ jit_user_function,
+)
+
+
+def split_for_numba(arg: FrameOrSeries) -> Tuple[np.ndarray, np.ndarray]:
+ """
+ Split pandas object into its components as numpy arrays for numba functions.
+
+ Parameters
+ ----------
+ arg : Series or DataFrame
+
+ Returns
+ -------
+ (ndarray, ndarray)
+ values, index
+ """
+ return arg.to_numpy(), arg.index.to_numpy()
+
+
+def validate_udf(func: Callable) -> None:
+ """
+ Validate user defined function for ops when using Numba with groupby ops.
+
+ The first signature arguments should include:
+
+ def f(values, index, ...):
+ ...
+
+ Parameters
+ ----------
+ func : function, default False
+ user defined function
+
+ Returns
+ -------
+ None
+
+ Raises
+ ------
+ NumbaUtilError
+ """
+ udf_signature = list(inspect.signature(func).parameters.keys())
+ expected_args = ["values", "index"]
+ min_number_args = len(expected_args)
+ if (
+ len(udf_signature) < min_number_args
+ or udf_signature[:min_number_args] != expected_args
+ ):
+ raise NumbaUtilError(
+ f"The first {min_number_args} arguments to {func.__name__} must be "
+ f"{expected_args}"
+ )
+
+
+def generate_numba_func(
+ func: Callable,
+ engine_kwargs: Optional[Dict[str, bool]],
+ kwargs: dict,
+ cache_key_str: str,
+) -> Tuple[Callable, Tuple[Callable, str]]:
+ """
+ Return a JITed function and cache key for the NUMBA_FUNC_CACHE
+
+ This _may_ be specific to groupby (as it's only used there currently).
+
+ Parameters
+ ----------
+ func : function
+ user defined function
+ engine_kwargs : dict or None
+ numba.jit arguments
+ kwargs : dict
+ kwargs for func
+ cache_key_str : str
+ string representing the second part of the cache key tuple
+
+ Returns
+ -------
+ (JITed function, cache key)
+
+ Raises
+ ------
+ NumbaUtilError
+ """
+ nopython, nogil, parallel = get_jit_arguments(engine_kwargs)
+ check_kwargs_and_nopython(kwargs, nopython)
+ validate_udf(func)
+ cache_key = (func, cache_key_str)
+ numba_func = NUMBA_FUNC_CACHE.get(
+ cache_key, jit_user_function(func, nopython, nogil, parallel)
+ )
+ return numba_func, cache_key
+
+
+def generate_numba_agg_func(
+ args: Tuple,
+ kwargs: Dict[str, Any],
+ func: Callable[..., Scalar],
+ engine_kwargs: Optional[Dict[str, bool]],
+) -> Callable[[np.ndarray, np.ndarray, np.ndarray, np.ndarray, int, int], np.ndarray]:
+ """
+ Generate a numba jitted agg function specified by values from engine_kwargs.
+
+ 1. jit the user's function
+ 2. Return a groupby agg function with the jitted function inline
+
+ Configurations specified in engine_kwargs apply to both the user's
+ function _AND_ the rolling apply function.
+
+ Parameters
+ ----------
+ args : tuple
+ *args to be passed into the function
+ kwargs : dict
+ **kwargs to be passed into the function
+ func : function
+ function to be applied to each window and will be JITed
+ engine_kwargs : dict
+ dictionary of arguments to be passed into numba.jit
+
+ Returns
+ -------
+ Numba function
+ """
+ nopython, nogil, parallel = get_jit_arguments(engine_kwargs)
+
+ check_kwargs_and_nopython(kwargs, nopython)
+
+ validate_udf(func)
+
+ numba_func = jit_user_function(func, nopython, nogil, parallel)
+
+ numba = import_optional_dependency("numba")
+
+ if parallel:
+ loop_range = numba.prange
+ else:
+ loop_range = range
+
+ @numba.jit(nopython=nopython, nogil=nogil, parallel=parallel)
+ def group_apply(
+ values: np.ndarray,
+ index: np.ndarray,
+ begin: np.ndarray,
+ end: np.ndarray,
+ num_groups: int,
+ num_columns: int,
+ ) -> np.ndarray:
+ result = np.empty((num_groups, num_columns))
+ for i in loop_range(num_groups):
+ group_index = index[begin[i] : end[i]]
+ for j in loop_range(num_columns):
+ group = values[begin[i] : end[i], j]
+ result[i, j] = numba_func(group, group_index, *args)
+ return result
+
+ return group_apply
diff --git a/pandas/core/groupby/ops.py b/pandas/core/groupby/ops.py
index 64eb413fe78fa..c6171a55359fe 100644
--- a/pandas/core/groupby/ops.py
+++ b/pandas/core/groupby/ops.py
@@ -55,12 +55,6 @@
get_group_index_sorter,
get_indexer_dict,
)
-from pandas.core.util.numba_ import (
- NUMBA_FUNC_CACHE,
- generate_numba_func,
- maybe_use_numba,
- split_for_numba,
-)
class BaseGrouper:
@@ -610,21 +604,11 @@ def _transform(
return result
def agg_series(
- self,
- obj: Series,
- func: F,
- *args,
- engine: str = "cython",
- engine_kwargs=None,
- **kwargs,
+ self, obj: Series, func: F, *args, **kwargs,
):
# Caller is responsible for checking ngroups != 0
assert self.ngroups != 0
- if maybe_use_numba(engine):
- return self._aggregate_series_pure_python(
- obj, func, *args, engine=engine, engine_kwargs=engine_kwargs, **kwargs
- )
if len(obj) == 0:
# SeriesGrouper would raise if we were to call _aggregate_series_fast
return self._aggregate_series_pure_python(obj, func)
@@ -670,20 +654,8 @@ def _aggregate_series_fast(self, obj: Series, func: F):
return result, counts
def _aggregate_series_pure_python(
- self,
- obj: Series,
- func: F,
- *args,
- engine: str = "cython",
- engine_kwargs=None,
- **kwargs,
+ self, obj: Series, func: F, *args, **kwargs,
):
-
- if maybe_use_numba(engine):
- numba_func, cache_key = generate_numba_func(
- func, engine_kwargs, kwargs, "groupby_agg"
- )
-
group_index, _, ngroups = self.group_info
counts = np.zeros(ngroups, dtype=int)
@@ -692,13 +664,7 @@ def _aggregate_series_pure_python(
splitter = get_splitter(obj, group_index, ngroups, axis=0)
for label, group in splitter:
- if maybe_use_numba(engine):
- values, index = split_for_numba(group)
- res = numba_func(values, index, *args)
- if cache_key not in NUMBA_FUNC_CACHE:
- NUMBA_FUNC_CACHE[cache_key] = numba_func
- else:
- res = func(group, *args, **kwargs)
+ res = func(group, *args, **kwargs)
if result is None:
if isinstance(res, (Series, Index, np.ndarray)):
@@ -876,13 +842,7 @@ def groupings(self) -> "List[grouper.Grouping]":
]
def agg_series(
- self,
- obj: Series,
- func: F,
- *args,
- engine: str = "cython",
- engine_kwargs=None,
- **kwargs,
+ self, obj: Series, func: F, *args, **kwargs,
):
# Caller is responsible for checking ngroups != 0
assert self.ngroups != 0
diff --git a/pandas/core/util/numba_.py b/pandas/core/util/numba_.py
index c9b7943478cdd..b951cd4f0cc2a 100644
--- a/pandas/core/util/numba_.py
+++ b/pandas/core/util/numba_.py
@@ -1,12 +1,10 @@
"""Common utilities for Numba operations"""
from distutils.version import LooseVersion
-import inspect
import types
from typing import Callable, Dict, Optional, Tuple
import numpy as np
-from pandas._typing import FrameOrSeries
from pandas.compat._optional import import_optional_dependency
from pandas.errors import NumbaUtilError
@@ -129,94 +127,3 @@ def impl(data, *_args):
return impl
return numba_func
-
-
-def split_for_numba(arg: FrameOrSeries) -> Tuple[np.ndarray, np.ndarray]:
- """
- Split pandas object into its components as numpy arrays for numba functions.
-
- Parameters
- ----------
- arg : Series or DataFrame
-
- Returns
- -------
- (ndarray, ndarray)
- values, index
- """
- return arg.to_numpy(), arg.index.to_numpy()
-
-
-def validate_udf(func: Callable) -> None:
- """
- Validate user defined function for ops when using Numba.
-
- The first signature arguments should include:
-
- def f(values, index, ...):
- ...
-
- Parameters
- ----------
- func : function, default False
- user defined function
-
- Returns
- -------
- None
-
- Raises
- ------
- NumbaUtilError
- """
- udf_signature = list(inspect.signature(func).parameters.keys())
- expected_args = ["values", "index"]
- min_number_args = len(expected_args)
- if (
- len(udf_signature) < min_number_args
- or udf_signature[:min_number_args] != expected_args
- ):
- raise NumbaUtilError(
- f"The first {min_number_args} arguments to {func.__name__} must be "
- f"{expected_args}"
- )
-
-
-def generate_numba_func(
- func: Callable,
- engine_kwargs: Optional[Dict[str, bool]],
- kwargs: dict,
- cache_key_str: str,
-) -> Tuple[Callable, Tuple[Callable, str]]:
- """
- Return a JITed function and cache key for the NUMBA_FUNC_CACHE
-
- This _may_ be specific to groupby (as it's only used there currently).
-
- Parameters
- ----------
- func : function
- user defined function
- engine_kwargs : dict or None
- numba.jit arguments
- kwargs : dict
- kwargs for func
- cache_key_str : str
- string representing the second part of the cache key tuple
-
- Returns
- -------
- (JITed function, cache key)
-
- Raises
- ------
- NumbaUtilError
- """
- nopython, nogil, parallel = get_jit_arguments(engine_kwargs)
- check_kwargs_and_nopython(kwargs, nopython)
- validate_udf(func)
- cache_key = (func, cache_key_str)
- numba_func = NUMBA_FUNC_CACHE.get(
- cache_key, jit_user_function(func, nopython, nogil, parallel)
- )
- return numba_func, cache_key
diff --git a/pandas/tests/groupby/aggregate/test_numba.py b/pandas/tests/groupby/aggregate/test_numba.py
index 690694b0e66f5..29e65e938f6f9 100644
--- a/pandas/tests/groupby/aggregate/test_numba.py
+++ b/pandas/tests/groupby/aggregate/test_numba.py
@@ -4,7 +4,7 @@
from pandas.errors import NumbaUtilError
import pandas.util._test_decorators as td
-from pandas import DataFrame, option_context
+from pandas import DataFrame, NamedAgg, option_context
import pandas._testing as tm
from pandas.core.util.numba_ import NUMBA_FUNC_CACHE
@@ -128,3 +128,25 @@ def func_1(values, index):
with option_context("compute.use_numba", True):
result = grouped.agg(func_1, engine=None)
tm.assert_frame_equal(expected, result)
+
+
+@td.skip_if_no("numba", "0.46.0")
+@pytest.mark.parametrize(
+ "agg_func",
+ [
+ ["min", "max"],
+ "min",
+ {"B": ["min", "max"], "C": "sum"},
+ NamedAgg(column="B", aggfunc="min"),
+ ],
+)
+def test_multifunc_notimplimented(agg_func):
+ data = DataFrame(
+ {0: ["a", "a", "b", "b", "a"], 1: [1.0, 2.0, 3.0, 4.0, 5.0]}, columns=[0, 1],
+ )
+ grouped = data.groupby(0)
+ with pytest.raises(NotImplementedError, match="Numba engine can"):
+ grouped.agg(agg_func, engine="numba")
+
+ with pytest.raises(NotImplementedError, match="Numba engine can"):
+ grouped[1].agg(agg_func, engine="numba")
| - [x] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] whatsnew entry
New performance comparison for 10,000 groups
```
In [1]: df_g = pd.DataFrame({'a': range(10**4), 'b': range(10**4), 'c': range(10**4)})
In [2]: def f(x):
   ...:     return np.sum(x) + 1
   ...:
In [3]: df_g.groupby('a').agg(f)
Out[3]:
b c
a
0 1 1
1 2 2
2 3 3
3 4 4
4 5 5
... ... ...
9995 9996 9996
9996 9997 9997
9997 9998 9998
9998 9999 9999
9999 10000 10000
[10000 rows x 2 columns]
In [4]: %timeit df_g.groupby('a').agg(f)
1.2 s ± 70.9 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
In [5]: def f(values, index):
...: return np.sum(values) + 1
...:
In [6]: df_g.groupby('a').agg(f, engine='numba', engine_kwargs={'parallel': True})
Out[6]:
b c
a
0 1.0 1.0
1 2.0 2.0
2 3.0 3.0
3 4.0 4.0
4 5.0 5.0
... ... ...
9995 9996.0 9996.0
9996 9997.0 9997.0
9997 9998.0 9998.0
9998 9999.0 9999.0
9999 10000.0 10000.0
In [8]: %timeit df_g.groupby('a').agg(f, engine='numba', engine_kwargs={'parallel': True})
2.07 ms ± 64.3 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
``` | https://api.github.com/repos/pandas-dev/pandas/pulls/35759 | 2020-08-17T06:01:31Z | 2020-08-22T03:30:05Z | 2020-08-22T03:30:05Z | 2020-08-22T03:31:34Z |
REGR: re-add encoding for read_excel | diff --git a/doc/source/whatsnew/v1.1.1.rst b/doc/source/whatsnew/v1.1.1.rst
index 565b4a014bd0c..4b25b1c1bf51a 100644
--- a/doc/source/whatsnew/v1.1.1.rst
+++ b/doc/source/whatsnew/v1.1.1.rst
@@ -27,6 +27,7 @@ Fixed regressions
- Fixed regression where :meth:`DataFrame.merge_asof` would raise a ``UnboundLocalError`` when ``left_index`` , ``right_index`` and ``tolerance`` were set (:issue:`35558`)
- Fixed regression in ``.groupby(..).rolling(..)`` where a custom ``BaseIndexer`` would be ignored (:issue:`35557`)
- Fixed regression in :meth:`~pandas.core.groupby.DataFrameGroupBy.agg` where a list of functions would produce the wrong results if at least one of the functions did not aggregate. (:issue:`35490`)
+- Fixed regression in :func:`pandas.read_excel` where ``encoding`` was removed but is necessary for some files (:issue:`35753`)
.. ---------------------------------------------------------------------------
diff --git a/pandas/io/excel/_base.py b/pandas/io/excel/_base.py
index b1bbda4a4b7e0..aee705976f3d1 100644
--- a/pandas/io/excel/_base.py
+++ b/pandas/io/excel/_base.py
@@ -199,6 +199,8 @@
Duplicate columns will be specified as 'X', 'X.1', ...'X.N', rather than
'X'...'X'. Passing in False will cause data to be overwritten if there
are duplicate names in the columns.
+encoding : str, default "utf-8"
+ The encoding to use to decode bytes.
Returns
-------
@@ -298,6 +300,7 @@ def read_excel(
skipfooter=0,
convert_float=True,
mangle_dupe_cols=True,
+ encoding: str = "utf-8",
):
if not isinstance(io, ExcelFile):
@@ -332,6 +335,7 @@ def read_excel(
skipfooter=skipfooter,
convert_float=convert_float,
mangle_dupe_cols=mangle_dupe_cols,
+ encoding=encoding,
)
| - [ ] closes #35753
- [ ] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] whatsnew entry
Needs tests! | https://api.github.com/repos/pandas-dev/pandas/pulls/35758 | 2020-08-17T03:29:47Z | 2020-08-18T18:02:42Z | null | 2020-10-29T18:09:49Z |
CI: Unpin Pytest + Pytest Asyncio Min Version | diff --git a/ci/deps/azure-38-locale.yaml b/ci/deps/azure-38-locale.yaml
index c466a5929ea29..c7090d3a46a77 100644
--- a/ci/deps/azure-38-locale.yaml
+++ b/ci/deps/azure-38-locale.yaml
@@ -6,9 +6,9 @@ dependencies:
# tools
- cython>=0.29.16
- - pytest>=5.0.1,<6.0.0 # https://github.com/pandas-dev/pandas/issues/35620
+ - pytest>=5.0.1
- pytest-xdist>=1.21
- - pytest-asyncio
+ - pytest-asyncio>=0.12.0
- hypothesis>=3.58.0
- pytest-azurepipelines
| - [x] closes #35620
Pytest 6.0.0+ will require pytest-asyncio>=0.12.0 for these async tests to run.
Also worth noting `pytest-asyncio 0.12.0 requires pytest>=5.4.0`. (So maybe we should think about bumping min pytest version)
```
____________________________________________________________________________________ ERROR collecting pandas/tests/indexes/test_base.py _____________________________________________________________________________________
../../.conda/envs/pandas-dev/lib/python3.8/site-packages/pluggy/hooks.py:286: in __call__
return self._hookexec(self, self.get_hookimpls(), kwargs)
../../.conda/envs/pandas-dev/lib/python3.8/site-packages/pluggy/manager.py:93: in _hookexec
return self._inner_hookexec(hook, methods, kwargs)
../../.conda/envs/pandas-dev/lib/python3.8/site-packages/pluggy/manager.py:84: in <lambda>
self._inner_hookexec = lambda hook, methods, kwargs: hook.multicall(
../../.conda/envs/pandas-dev/lib/python3.8/site-packages/pytest_asyncio/plugin.py:39: in pytest_pycollect_makeitem
item = pytest.Function(name, parent=collector)
../../.conda/envs/pandas-dev/lib/python3.8/site-packages/_pytest/nodes.py:95: in __call__
warnings.warn(NODE_USE_FROM_PARENT.format(name=self.__name__), stacklevel=2)
E pytest.PytestDeprecationWarning: Direct construction of Function has been deprecated, please use Function.from_parent.
E See https://docs.pytest.org/en/stable/deprecations.html#node-construction-changed-to-node-from-parent for more details.
```
cc @simonjayhawkins | https://api.github.com/repos/pandas-dev/pandas/pulls/35757 | 2020-08-17T00:09:59Z | 2020-08-19T17:42:48Z | 2020-08-19T17:42:48Z | 2020-12-22T19:03:19Z |
CI: Min Pytest Cov Version/Restrict xdist version | diff --git a/ci/deps/azure-windows-37.yaml b/ci/deps/azure-windows-37.yaml
index 4d745454afcab..f4c238ab8b173 100644
--- a/ci/deps/azure-windows-37.yaml
+++ b/ci/deps/azure-windows-37.yaml
@@ -8,7 +8,7 @@ dependencies:
# tools
- cython>=0.29.16
- pytest>=5.0.1
- - pytest-xdist>=1.21
+ - pytest-xdist>=1.21,<2.0.0 # GH 35737
- hypothesis>=3.58.0
- pytest-azurepipelines
diff --git a/ci/deps/azure-windows-38.yaml b/ci/deps/azure-windows-38.yaml
index f428a6dadfaa2..1f383164b5328 100644
--- a/ci/deps/azure-windows-38.yaml
+++ b/ci/deps/azure-windows-38.yaml
@@ -8,7 +8,7 @@ dependencies:
# tools
- cython>=0.29.16
- pytest>=5.0.1
- - pytest-xdist>=1.21
+ - pytest-xdist>=1.21,<2.0.0 # GH 35737
- hypothesis>=3.58.0
- pytest-azurepipelines
diff --git a/ci/deps/travis-37-cov.yaml b/ci/deps/travis-37-cov.yaml
index 3a0827a16f97a..edc11bdf4ab35 100644
--- a/ci/deps/travis-37-cov.yaml
+++ b/ci/deps/travis-37-cov.yaml
@@ -10,7 +10,7 @@ dependencies:
- pytest>=5.0.1
- pytest-xdist>=1.21
- hypothesis>=3.58.0
- - pytest-cov # this is only needed in the coverage build
+ - pytest-cov>=2.10.1 # this is only needed in the coverage build, ref: GH 35737
# pandas dependencies
- beautifulsoup4
| - [x] closes #35737 | https://api.github.com/repos/pandas-dev/pandas/pulls/35754 | 2020-08-16T19:21:29Z | 2020-08-17T08:50:01Z | 2020-08-17T08:50:01Z | 2020-08-17T08:51:47Z |
BUG: DataFrame.groupby(., dropna=True, axis=0) incorrectly throws ShapeError | diff --git a/doc/source/whatsnew/v1.3.0.rst b/doc/source/whatsnew/v1.3.0.rst
index 3545dd8a89159..621baa01fbded 100644
--- a/doc/source/whatsnew/v1.3.0.rst
+++ b/doc/source/whatsnew/v1.3.0.rst
@@ -224,6 +224,7 @@ Indexing
Missing
^^^^^^^
+- Bug in :class:`Grouper` now correctly propagates ``dropna`` argument and :meth:`DataFrameGroupBy.transform` now correctly handles missing values for ``dropna=True`` (:issue:`35612`)
-
-
diff --git a/pandas/core/groupby/generic.py b/pandas/core/groupby/generic.py
index 07ffb881495fa..16b00735cf694 100644
--- a/pandas/core/groupby/generic.py
+++ b/pandas/core/groupby/generic.py
@@ -553,7 +553,6 @@ def _transform_general(self, func, *args, **kwargs):
result = maybe_downcast_numeric(result, self._selected_obj.dtype)
result.name = self._selected_obj.name
- result.index = self._selected_obj.index
return result
def _transform_fast(self, result) -> Series:
diff --git a/pandas/core/groupby/groupby.py b/pandas/core/groupby/groupby.py
index 947f18901775b..cebbfac16019e 100644
--- a/pandas/core/groupby/groupby.py
+++ b/pandas/core/groupby/groupby.py
@@ -729,14 +729,28 @@ def _set_result_index_ordered(
# set the result index on the passed values object and
# return the new object, xref 8046
- # the values/counts are repeated according to the group index
- # shortcut if we have an already ordered grouper
- if not self.grouper.is_monotonic:
- index = Index(np.concatenate(self._get_indices(self.grouper.result_index)))
- result.set_axis(index, axis=self.axis, inplace=True)
- result = result.sort_index(axis=self.axis)
-
- result.set_axis(self.obj._get_axis(self.axis), axis=self.axis, inplace=True)
+ if self.grouper.is_monotonic:
+ # shortcut if we have an already ordered grouper
+ result.set_axis(self.obj._get_axis(self.axis), axis=self.axis, inplace=True)
+ return result
+
+ # row order is scrambled => sort the rows by position in original index
+ original_positions = Index(
+ np.concatenate(self._get_indices(self.grouper.result_index))
+ )
+ result.set_axis(original_positions, axis=self.axis, inplace=True)
+ result = result.sort_index(axis=self.axis)
+
+ dropped_rows = len(result.index) < len(self.obj.index)
+
+ if dropped_rows:
+ # get index by slicing original index according to original positions
+ # slice drops attrs => use set_axis when no rows were dropped
+ sorted_indexer = result.index
+ result.index = self._selected_obj.index[sorted_indexer]
+ else:
+ result.set_axis(self.obj._get_axis(self.axis), axis=self.axis, inplace=True)
+
return result
@final
diff --git a/pandas/tests/groupby/test_groupby_dropna.py b/pandas/tests/groupby/test_groupby_dropna.py
index e38fa5e8de87e..ab568e24ff029 100644
--- a/pandas/tests/groupby/test_groupby_dropna.py
+++ b/pandas/tests/groupby/test_groupby_dropna.py
@@ -171,36 +171,53 @@ def test_grouper_dropna_propagation(dropna):
@pytest.mark.parametrize(
- "dropna,df_expected,s_expected",
+ "dropna,input_index,expected_data,expected_index",
[
- pytest.param(
+ (True, pd.RangeIndex(0, 4), {"B": [2, 2, 1]}, pd.RangeIndex(0, 3)),
+ (True, list("abcd"), {"B": [2, 2, 1]}, list("abc")),
+ (
True,
- pd.DataFrame({"B": [2, 2, 1]}),
- pd.Series(data=[2, 2, 1], name="B"),
- marks=pytest.mark.xfail(raises=ValueError),
+ pd.MultiIndex.from_tuples(
+ [(1, "R"), (1, "B"), (2, "R"), (2, "B")], names=["num", "col"]
+ ),
+ {"B": [2, 2, 1]},
+ pd.MultiIndex.from_tuples(
+ [(1, "R"), (1, "B"), (2, "R")], names=["num", "col"]
+ ),
),
+ (False, pd.RangeIndex(0, 4), {"B": [2, 2, 1, 1]}, pd.RangeIndex(0, 4)),
+ (False, list("abcd"), {"B": [2, 2, 1, 1]}, list("abcd")),
(
False,
- pd.DataFrame({"B": [2, 2, 1, 1]}),
- pd.Series(data=[2, 2, 1, 1], name="B"),
+ pd.MultiIndex.from_tuples(
+ [(1, "R"), (1, "B"), (2, "R"), (2, "B")], names=["num", "col"]
+ ),
+ {"B": [2, 2, 1, 1]},
+ pd.MultiIndex.from_tuples(
+ [(1, "R"), (1, "B"), (2, "R"), (2, "B")], names=["num", "col"]
+ ),
),
],
)
-def test_slice_groupby_then_transform(dropna, df_expected, s_expected):
- # GH35014
+def test_groupby_dataframe_slice_then_transform(
+ dropna, input_index, expected_data, expected_index
+):
+ # GH35014 & GH35612
- df = pd.DataFrame({"A": [0, 0, 1, None], "B": [1, 2, 3, None]})
+ df = pd.DataFrame({"A": [0, 0, 1, None], "B": [1, 2, 3, None]}, index=input_index)
gb = df.groupby("A", dropna=dropna)
- res = gb.transform(len)
- tm.assert_frame_equal(res, df_expected)
+ result = gb.transform(len)
+ expected = pd.DataFrame(expected_data, index=expected_index)
+ tm.assert_frame_equal(result, expected)
- gb_slice = gb[["B"]]
- res = gb_slice.transform(len)
- tm.assert_frame_equal(res, df_expected)
+ result = gb[["B"]].transform(len)
+ expected = pd.DataFrame(expected_data, index=expected_index)
+ tm.assert_frame_equal(result, expected)
- res = gb["B"].transform(len)
- tm.assert_series_equal(res, s_expected)
+ result = gb["B"].transform(len)
+ expected = pd.Series(expected_data["B"], index=expected_index, name="B")
+ tm.assert_series_equal(result, expected)
@pytest.mark.parametrize(
diff --git a/pandas/tests/groupby/test_grouping.py b/pandas/tests/groupby/test_grouping.py
index 1d2208592a06d..5205ca3777fc0 100644
--- a/pandas/tests/groupby/test_grouping.py
+++ b/pandas/tests/groupby/test_grouping.py
@@ -626,7 +626,7 @@ def test_list_grouper_with_nat(self):
[
(
"transform",
- Series(name=2, dtype=np.float64, index=pd.RangeIndex(0, 0, 1)),
+ Series(name=2, dtype=np.float64, index=Index([])),
),
(
"agg",
| - [x] closes #35612
- [x] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] whatsnew entry
This PR makes a few more changes to propagate `dropna` correctly for sliced groupby objects.
It depends on #35444 for the changes to `pandas/core/groupby/generic.py`. I put them in by hand for now so that the tests pass, but there should be no diff once #35444 is merged.
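To illustrate the behavior this PR targets, here is a minimal sketch mirroring the frame used in the new tests. The exact shape of the `transform` output for dropped keys has shifted across pandas versions, so only the stable pieces are shown:

```python
import pandas as pd

# Grouping column "A" contains a missing key (GH 35612 / GH 35014).
df = pd.DataFrame({"A": [0, 0, 1, None], "B": [1, 2, 3, None]})

# With dropna=True the NaN key is excluded from the groups entirely.
gb = df.groupby("A", dropna=True)
print(gb.ngroups)  # 2

# transform on the sliced groupby used to raise a ShapeError here;
# with the fix it completes and returns only column "B".
result = gb[["B"]].transform(len)
print(set(result.columns))  # {'B'}
```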
Pass check_dtype to assert_extension_array_equal | diff --git a/doc/source/whatsnew/v1.1.1.rst b/doc/source/whatsnew/v1.1.1.rst
index c028fe6bea719..ff5bbccf63ffe 100644
--- a/doc/source/whatsnew/v1.1.1.rst
+++ b/doc/source/whatsnew/v1.1.1.rst
@@ -38,6 +38,7 @@ Bug fixes
~~~~~~~~~
- Bug in ``Styler`` whereby `cell_ids` argument had no effect due to other recent changes (:issue:`35588`) (:issue:`35663`).
+- Bug in :func:`pandas.testing.assert_series_equal` and :func:`pandas.testing.assert_frame_equal` where extension dtypes were not ignored when ``check_dtypes`` was set to ``False`` (:issue:`35715`).
Categorical
^^^^^^^^^^^
diff --git a/pandas/_testing.py b/pandas/_testing.py
index 713f29466f097..ef6232fa6d575 100644
--- a/pandas/_testing.py
+++ b/pandas/_testing.py
@@ -1377,12 +1377,18 @@ def assert_series_equal(
)
elif is_extension_array_dtype(left.dtype) and is_extension_array_dtype(right.dtype):
assert_extension_array_equal(
- left._values, right._values, index_values=np.asarray(left.index)
+ left._values,
+ right._values,
+ check_dtype=check_dtype,
+ index_values=np.asarray(left.index),
)
elif needs_i8_conversion(left.dtype) or needs_i8_conversion(right.dtype):
# DatetimeArray or TimedeltaArray
assert_extension_array_equal(
- left._values, right._values, index_values=np.asarray(left.index)
+ left._values,
+ right._values,
+ check_dtype=check_dtype,
+ index_values=np.asarray(left.index),
)
else:
_testing.assert_almost_equal(
diff --git a/pandas/tests/util/test_assert_extension_array_equal.py b/pandas/tests/util/test_assert_extension_array_equal.py
index d9fdf1491c328..f9259beab5d13 100644
--- a/pandas/tests/util/test_assert_extension_array_equal.py
+++ b/pandas/tests/util/test_assert_extension_array_equal.py
@@ -1,6 +1,7 @@
import numpy as np
import pytest
+from pandas import array
import pandas._testing as tm
from pandas.core.arrays.sparse import SparseArray
@@ -102,3 +103,11 @@ def test_assert_extension_array_equal_non_extension_array(side):
with pytest.raises(AssertionError, match=msg):
tm.assert_extension_array_equal(*args)
+
+
+@pytest.mark.parametrize("right_dtype", ["Int32", "int64"])
+def test_assert_extension_array_equal_ignore_dtype_mismatch(right_dtype):
+ # https://github.com/pandas-dev/pandas/issues/35715
+ left = array([1, 2, 3], dtype="Int64")
+ right = array([1, 2, 3], dtype=right_dtype)
+ tm.assert_extension_array_equal(left, right, check_dtype=False)
diff --git a/pandas/tests/util/test_assert_frame_equal.py b/pandas/tests/util/test_assert_frame_equal.py
index fe3e1ff906919..3aa3c64923b14 100644
--- a/pandas/tests/util/test_assert_frame_equal.py
+++ b/pandas/tests/util/test_assert_frame_equal.py
@@ -260,3 +260,11 @@ def test_assert_frame_equal_interval_dtype_mismatch():
with pytest.raises(AssertionError, match=msg):
tm.assert_frame_equal(left, right, check_dtype=True)
+
+
+@pytest.mark.parametrize("right_dtype", ["Int32", "int64"])
+def test_assert_frame_equal_ignore_extension_dtype_mismatch(right_dtype):
+ # https://github.com/pandas-dev/pandas/issues/35715
+ left = pd.DataFrame({"a": [1, 2, 3]}, dtype="Int64")
+ right = pd.DataFrame({"a": [1, 2, 3]}, dtype=right_dtype)
+ tm.assert_frame_equal(left, right, check_dtype=False)
diff --git a/pandas/tests/util/test_assert_series_equal.py b/pandas/tests/util/test_assert_series_equal.py
index a7b5aeac560e4..f3c66052b1904 100644
--- a/pandas/tests/util/test_assert_series_equal.py
+++ b/pandas/tests/util/test_assert_series_equal.py
@@ -296,3 +296,11 @@ def test_series_equal_exact_for_nonnumeric():
tm.assert_series_equal(s1, s3, check_exact=True)
with pytest.raises(AssertionError):
tm.assert_series_equal(s3, s1, check_exact=True)
+
+
+@pytest.mark.parametrize("right_dtype", ["Int32", "int64"])
+def test_assert_series_equal_ignore_extension_dtype_mismatch(right_dtype):
+ # https://github.com/pandas-dev/pandas/issues/35715
+ left = pd.Series([1, 2, 3], dtype="Int64")
+ right = pd.Series([1, 2, 3], dtype=right_dtype)
+ tm.assert_series_equal(left, right, check_dtype=False)
| - [x] closes #35715
- [x] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
| https://api.github.com/repos/pandas-dev/pandas/pulls/35750 | 2020-08-16T04:14:10Z | 2020-08-17T18:20:20Z | 2020-08-17T18:20:20Z | 2020-08-17T18:31:25Z |
BUG: close file handles in mmap | diff --git a/pandas/io/common.py b/pandas/io/common.py
index 54f35e689aac8..d1305c9cabe0e 100644
--- a/pandas/io/common.py
+++ b/pandas/io/common.py
@@ -18,6 +18,7 @@
Optional,
Tuple,
Type,
+ Union,
)
from urllib.parse import (
urljoin,
@@ -452,7 +453,7 @@ def get_handle(
except ImportError:
need_text_wrapping = (BufferedIOBase, RawIOBase)
- handles: List[IO] = list()
+ handles: List[Union[IO, _MMapWrapper]] = list()
f = path_or_buf
# Convert pathlib.Path/py.path.local or string
@@ -535,6 +536,8 @@ def get_handle(
try:
wrapped = _MMapWrapper(f)
f.close()
+ handles.remove(f)
+ handles.append(wrapped)
f = wrapped
except Exception:
# we catch any errors that may have occurred
diff --git a/pandas/tests/io/parser/test_common.py b/pandas/tests/io/parser/test_common.py
index 3d5f6ae3a4af9..1d8d5a29686a4 100644
--- a/pandas/tests/io/parser/test_common.py
+++ b/pandas/tests/io/parser/test_common.py
@@ -1836,6 +1836,7 @@ def test_raise_on_no_columns(all_parsers, nrows):
parser.read_csv(StringIO(data))
+@td.check_file_leaks
def test_memory_map(all_parsers, csv_dir_path):
mmap_file = os.path.join(csv_dir_path, "test_mmap.csv")
parser = all_parsers
| - [ ] closes #xxxx
- [x] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
Broken off from #35711 | https://api.github.com/repos/pandas-dev/pandas/pulls/35748 | 2020-08-15T23:04:48Z | 2020-08-17T18:58:32Z | 2020-08-17T18:58:32Z | 2020-08-17T19:44:57Z |
REF: insert self.on column _after_ concat | diff --git a/pandas/core/window/rolling.py b/pandas/core/window/rolling.py
index 966773b7c6982..ac96258cbc3c9 100644
--- a/pandas/core/window/rolling.py
+++ b/pandas/core/window/rolling.py
@@ -38,7 +38,7 @@
from pandas.core.base import DataError, PandasObject, SelectionMixin, ShallowMixin
import pandas.core.common as com
from pandas.core.construction import extract_array
-from pandas.core.indexes.api import Index, MultiIndex, ensure_index
+from pandas.core.indexes.api import Index, MultiIndex
from pandas.core.util.numba_ import NUMBA_FUNC_CACHE, maybe_use_numba
from pandas.core.window.common import (
WindowGroupByMixin,
@@ -402,36 +402,27 @@ def _wrap_results(self, results, blocks, obj, exclude=None) -> FrameOrSeries:
return result
final.append(result)
- # if we have an 'on' column
- # we want to put it back into the results
- # in the same location
- columns = self._selected_obj.columns
- if self.on is not None and not self._on.equals(obj.index):
-
- name = self._on.name
- final.append(Series(self._on, index=obj.index, name=name))
-
- if self._selection is not None:
-
- selection = ensure_index(self._selection)
-
- # need to reorder to include original location of
- # the on column (if its not already there)
- if name not in selection:
- columns = self.obj.columns
- indexer = columns.get_indexer(selection.tolist() + [name])
- columns = columns.take(sorted(indexer))
-
- # exclude nuisance columns so that they are not reindexed
- if exclude is not None and exclude:
- columns = [c for c in columns if c not in exclude]
+ exclude = exclude or []
+ columns = [c for c in self._selected_obj.columns if c not in exclude]
+ if not columns and not len(final) and exclude:
+ raise DataError("No numeric types to aggregate")
+ elif not len(final):
+ return obj.astype("float64")
- if not columns:
- raise DataError("No numeric types to aggregate")
+ df = concat(final, axis=1).reindex(columns=columns, copy=False)
- if not len(final):
- return obj.astype("float64")
- return concat(final, axis=1).reindex(columns=columns, copy=False)
+ # if we have an 'on' column we want to put it back into
+ # the results in the same location
+ if self.on is not None and not self._on.equals(obj.index):
+ name = self._on.name
+ extra_col = Series(self._on, index=obj.index, name=name)
+ if name not in df.columns and name not in df.index.names:
+ new_loc = len(df.columns)
+ df.insert(new_loc, name, extra_col)
+ elif name in df.columns:
+ # TODO: sure we want to overwrite results?
+ df[name] = extra_col
+ return df
def _center_window(self, result, window) -> np.ndarray:
"""
@@ -2277,6 +2268,7 @@ def _get_window_indexer(self, window: int) -> GroupbyRollingIndexer:
if isinstance(self.window, BaseIndexer):
rolling_indexer = type(self.window)
indexer_kwargs = self.window.__dict__
+ assert isinstance(indexer_kwargs, dict)
# We'll be using the index of each group later
indexer_kwargs.pop("index_array", None)
elif self.is_freq_type:
| The idea here is to push towards #34714 by making _wrap_results do things in the same order as other similar methods do. cc @mroeschke LMK if there is a simpler way to accomplish this.
Orthogonal to #35470, #35730, #35696. | https://api.github.com/repos/pandas-dev/pandas/pulls/35746 | 2020-08-15T22:17:21Z | 2020-08-20T16:59:24Z | 2020-08-20T16:59:23Z | 2020-08-20T17:53:01Z |
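For context on what `_wrap_results` has to preserve, here is a minimal sketch (illustrative only, not taken from the PR) of rolling with an `on` column — the point of the refactor is that this column is re-inserted into the result after the per-block results are concatenated:

```python
import pandas as pd

df = pd.DataFrame(
    {
        "ts": pd.date_range("2020-01-01", periods=4, freq="D"),
        "val": [1.0, 2.0, 3.0, 4.0],
    }
)

# With on="ts", the window is keyed on the "ts" column, and that column
# must be put back into the result alongside the aggregated columns.
out = df.rolling("2D", on="ts").sum()
```

The `ts` column survives unaggregated in `out`, while `val` holds the 2-day rolling sums.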
TST: encoding for URLs in read_csv | diff --git a/pandas/tests/io/parser/test_network.py b/pandas/tests/io/parser/test_network.py
index 509ae89909699..b30a7b1ef34de 100644
--- a/pandas/tests/io/parser/test_network.py
+++ b/pandas/tests/io/parser/test_network.py
@@ -46,6 +46,21 @@ def check_compressed_urls(salaries_table, compression, extension, mode, engine):
tm.assert_frame_equal(url_table, salaries_table)
+@tm.network("https://raw.githubusercontent.com/", check_before_test=True)
+def test_url_encoding_csv():
+ """
+ read_csv should honor the requested encoding for URLs.
+
+ GH 10424
+ """
+ path = (
+ "https://raw.githubusercontent.com/pandas-dev/pandas/master/"
+ + "pandas/tests/io/parser/data/unicode_series.csv"
+ )
+ df = read_csv(path, encoding="latin-1", header=None)
+ assert df.loc[15, 1] == "Á köldum klaka (Cold Fever) (1994)"
+
+
@pytest.fixture
def tips_df(datapath):
"""DataFrame with the tips dataset."""
| - [x] closes #10424
- [x] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
Reading CSVs from URLs with a non UTF-8 encoding should already work. | https://api.github.com/repos/pandas-dev/pandas/pulls/35742 | 2020-08-15T20:08:03Z | 2020-08-17T18:59:43Z | 2020-08-17T18:59:43Z | 2020-08-17T19:53:42Z |
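A local stand-in for the URL case (hypothetical, using an in-memory buffer rather than a real URL): the behavior under test is that `read_csv` honors the requested encoding regardless of where the bytes come from.

```python
import io

import pandas as pd

# Latin-1 bytes that are not valid UTF-8; the encoding argument must win.
raw = "Á köldum klaka,15\n".encode("latin-1")
df = pd.read_csv(io.BytesIO(raw), header=None, encoding="latin-1")
```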
DEPR: Deprecate pandas/io/date_converters.py | diff --git a/doc/source/user_guide/io.rst b/doc/source/user_guide/io.rst
index a1ce2f847d4b8..4dfabaa99fff6 100644
--- a/doc/source/user_guide/io.rst
+++ b/doc/source/user_guide/io.rst
@@ -930,7 +930,7 @@ take full advantage of the flexibility of the date parsing API:
.. ipython:: python
df = pd.read_csv('tmp.csv', header=None, parse_dates=date_spec,
- date_parser=pd.io.date_converters.parse_date_time)
+ date_parser=pd.to_datetime)
df
Pandas will try to call the ``date_parser`` function in three different ways. If
@@ -942,11 +942,6 @@ an exception is raised, the next one is tried:
2. If #1 fails, ``date_parser`` is called with all the columns
concatenated row-wise into a single array (e.g., ``date_parser(['2013 1', '2013 2'])``).
-3. If #2 fails, ``date_parser`` is called once for every row with one or more
- string arguments from the columns indicated with `parse_dates`
- (e.g., ``date_parser('2013', '1')`` for the first row, ``date_parser('2013', '2')``
- for the second, etc.).
-
Note that performance-wise, you should try these methods of parsing dates in order:
1. Try to infer the format using ``infer_datetime_format=True`` (see section below).
@@ -958,14 +953,6 @@ Note that performance-wise, you should try these methods of parsing dates in ord
For optimal performance, this should be vectorized, i.e., it should accept arrays
as arguments.
-You can explore the date parsing functionality in
-`date_converters.py <https://github.com/pandas-dev/pandas/blob/master/pandas/io/date_converters.py>`__
-and add your own. We would love to turn this module into a community supported
-set of date/time parsers. To get you started, ``date_converters.py`` contains
-functions to parse dual date and time columns, year/month/day columns,
-and year/month/day/hour/minute/second columns. It also contains a
-``generic_parser`` function so you can curry it with a function that deals with
-a single date rather than the entire array.
.. ipython:: python
:suppress:
diff --git a/doc/source/whatsnew/v1.2.0.rst b/doc/source/whatsnew/v1.2.0.rst
index bce6a735b7b07..7c7313219c15b 100644
--- a/doc/source/whatsnew/v1.2.0.rst
+++ b/doc/source/whatsnew/v1.2.0.rst
@@ -195,7 +195,7 @@ Deprecations
~~~~~~~~~~~~
- Deprecated parameter ``inplace`` in :meth:`MultiIndex.set_codes` and :meth:`MultiIndex.set_levels` (:issue:`35626`)
- Deprecated parameter ``dtype`` in :~meth:`Index.copy` on method all index classes. Use the :meth:`Index.astype` method instead for changing dtype(:issue:`35853`)
--
+- Date parser functions :func:`~pandas.io.date_converters.parse_date_time`, :func:`~pandas.io.date_converters.parse_date_fields`, :func:`~pandas.io.date_converters.parse_all_fields` and :func:`~pandas.io.date_converters.generic_parser` from ``pandas.io.date_converters`` are deprecated and will be removed in a future version; use :func:`to_datetime` instead (:issue:`35741`)
.. ---------------------------------------------------------------------------
diff --git a/pandas/io/date_converters.py b/pandas/io/date_converters.py
index 07919dbda63ae..f079a25f69fec 100644
--- a/pandas/io/date_converters.py
+++ b/pandas/io/date_converters.py
@@ -1,16 +1,46 @@
"""This module is designed for community supported date conversion functions"""
+import warnings
+
import numpy as np
from pandas._libs.tslibs import parsing
def parse_date_time(date_col, time_col):
+ """
+ Parse columns with dates and times into a single datetime column.
+
+ .. deprecated:: 1.2
+ """
+ warnings.warn(
+ """
+ Use pd.to_datetime(date_col + " " + time_col) instead to get a Pandas Series.
+ Use pd.to_datetime(date_col + " " + time_col).to_pydatetime() instead to get a Numpy array.
+""", # noqa: E501
+ FutureWarning,
+ stacklevel=2,
+ )
date_col = _maybe_cast(date_col)
time_col = _maybe_cast(time_col)
return parsing.try_parse_date_and_time(date_col, time_col)
def parse_date_fields(year_col, month_col, day_col):
+ """
+ Parse columns with years, months and days into a single date column.
+
+ .. deprecated:: 1.2
+ """
+ warnings.warn(
+ """
+ Use pd.to_datetime({"year": year_col, "month": month_col, "day": day_col}) instead to get a Pandas Series.
+ Use ser = pd.to_datetime({"year": year_col, "month": month_col, "day": day_col}) and
+ np.array([s.to_pydatetime() for s in ser]) instead to get a Numpy array.
+""", # noqa: E501
+ FutureWarning,
+ stacklevel=2,
+ )
+
year_col = _maybe_cast(year_col)
month_col = _maybe_cast(month_col)
day_col = _maybe_cast(day_col)
@@ -18,6 +48,24 @@ def parse_date_fields(year_col, month_col, day_col):
def parse_all_fields(year_col, month_col, day_col, hour_col, minute_col, second_col):
+ """
+ Parse columns with datetime information into a single datetime column.
+
+ .. deprecated:: 1.2
+ """
+
+ warnings.warn(
+ """
+ Use pd.to_datetime({"year": year_col, "month": month_col, "day": day_col,
+        "hour": hour_col, "minute": minute_col, "second": second_col}) instead to get a Pandas Series.
+ Use ser = pd.to_datetime({"year": year_col, "month": month_col, "day": day_col,
+        "hour": hour_col, "minute": minute_col, "second": second_col}) and
+ np.array([s.to_pydatetime() for s in ser]) instead to get a Numpy array.
+""", # noqa: E501
+ FutureWarning,
+ stacklevel=2,
+ )
+
year_col = _maybe_cast(year_col)
month_col = _maybe_cast(month_col)
day_col = _maybe_cast(day_col)
@@ -30,6 +78,20 @@ def parse_all_fields(year_col, month_col, day_col, hour_col, minute_col, second_
def generic_parser(parse_func, *cols):
+ """
+    Use dateparser to parse columns with date information into a single datetime column.
+
+ .. deprecated:: 1.2
+ """
+
+ warnings.warn(
+ """
+ Use pd.to_datetime instead.
+""",
+ FutureWarning,
+ stacklevel=2,
+ )
+
N = _check_columns(cols)
results = np.empty(N, dtype=object)
diff --git a/pandas/tests/io/parser/test_parse_dates.py b/pandas/tests/io/parser/test_parse_dates.py
index 833186b69c63b..662659982c0b3 100644
--- a/pandas/tests/io/parser/test_parse_dates.py
+++ b/pandas/tests/io/parser/test_parse_dates.py
@@ -370,7 +370,11 @@ def test_date_col_as_index_col(all_parsers):
tm.assert_frame_equal(result, expected)
-def test_multiple_date_cols_int_cast(all_parsers):
+@pytest.mark.parametrize(
+ "date_parser, warning",
+ ([conv.parse_date_time, FutureWarning], [pd.to_datetime, None]),
+)
+def test_multiple_date_cols_int_cast(all_parsers, date_parser, warning):
data = (
"KORD,19990127, 19:00:00, 18:56:00, 0.8100\n"
"KORD,19990127, 20:00:00, 19:56:00, 0.0100\n"
@@ -382,13 +386,15 @@ def test_multiple_date_cols_int_cast(all_parsers):
parse_dates = {"actual": [1, 2], "nominal": [1, 3]}
parser = all_parsers
- result = parser.read_csv(
- StringIO(data),
- header=None,
- date_parser=conv.parse_date_time,
- parse_dates=parse_dates,
- prefix="X",
- )
+ with tm.assert_produces_warning(warning, check_stacklevel=False):
+ result = parser.read_csv(
+ StringIO(data),
+ header=None,
+ date_parser=date_parser,
+ parse_dates=parse_dates,
+ prefix="X",
+ )
+
expected = DataFrame(
[
[datetime(1999, 1, 27, 19, 0), datetime(1999, 1, 27, 18, 56), "KORD", 0.81],
@@ -808,7 +814,9 @@ def test_parse_dates_custom_euro_format(all_parsers, kwargs):
tm.assert_frame_equal(df, expected)
else:
msg = "got an unexpected keyword argument 'day_first'"
- with pytest.raises(TypeError, match=msg):
+ with pytest.raises(TypeError, match=msg), tm.assert_produces_warning(
+ FutureWarning
+ ):
parser.read_csv(
StringIO(data),
names=["time", "Q", "NTU"],
@@ -1166,7 +1174,11 @@ def test_parse_dates_no_convert_thousands(all_parsers, data, kwargs, expected):
tm.assert_frame_equal(result, expected)
-def test_parse_date_time_multi_level_column_name(all_parsers):
+@pytest.mark.parametrize(
+ "date_parser, warning",
+ ([conv.parse_date_time, FutureWarning], [pd.to_datetime, None]),
+)
+def test_parse_date_time_multi_level_column_name(all_parsers, date_parser, warning):
data = """\
D,T,A,B
date, time,a,b
@@ -1174,12 +1186,13 @@ def test_parse_date_time_multi_level_column_name(all_parsers):
2001-01-06, 00:00:00, 1.0, 11.
"""
parser = all_parsers
- result = parser.read_csv(
- StringIO(data),
- header=[0, 1],
- parse_dates={"date_time": [0, 1]},
- date_parser=conv.parse_date_time,
- )
+ with tm.assert_produces_warning(warning, check_stacklevel=False):
+ result = parser.read_csv(
+ StringIO(data),
+ header=[0, 1],
+ parse_dates={"date_time": [0, 1]},
+ date_parser=date_parser,
+ )
expected_data = [
[datetime(2001, 1, 5, 9, 0, 0), 0.0, 10.0],
@@ -1189,6 +1202,10 @@ def test_parse_date_time_multi_level_column_name(all_parsers):
tm.assert_frame_equal(result, expected)
+@pytest.mark.parametrize(
+ "date_parser, warning",
+ ([conv.parse_date_time, FutureWarning], [pd.to_datetime, None]),
+)
@pytest.mark.parametrize(
"data,kwargs,expected",
[
@@ -1261,9 +1278,10 @@ def test_parse_date_time_multi_level_column_name(all_parsers):
),
],
)
-def test_parse_date_time(all_parsers, data, kwargs, expected):
+def test_parse_date_time(all_parsers, data, kwargs, expected, date_parser, warning):
parser = all_parsers
- result = parser.read_csv(StringIO(data), date_parser=conv.parse_date_time, **kwargs)
+ with tm.assert_produces_warning(warning, check_stacklevel=False):
+ result = parser.read_csv(StringIO(data), date_parser=date_parser, **kwargs)
# Python can sometimes be flaky about how
# the aggregated columns are entered, so
@@ -1272,15 +1290,20 @@ def test_parse_date_time(all_parsers, data, kwargs, expected):
tm.assert_frame_equal(result, expected)
-def test_parse_date_fields(all_parsers):
+@pytest.mark.parametrize(
+ "date_parser, warning",
+ ([conv.parse_date_fields, FutureWarning], [pd.to_datetime, None]),
+)
+def test_parse_date_fields(all_parsers, date_parser, warning):
parser = all_parsers
data = "year,month,day,a\n2001,01,10,10.\n2001,02,1,11."
- result = parser.read_csv(
- StringIO(data),
- header=0,
- parse_dates={"ymd": [0, 1, 2]},
- date_parser=conv.parse_date_fields,
- )
+ with tm.assert_produces_warning(warning, check_stacklevel=False):
+ result = parser.read_csv(
+ StringIO(data),
+ header=0,
+ parse_dates={"ymd": [0, 1, 2]},
+ date_parser=date_parser,
+ )
expected = DataFrame(
[[datetime(2001, 1, 10), 10.0], [datetime(2001, 2, 1), 11.0]],
@@ -1289,19 +1312,27 @@ def test_parse_date_fields(all_parsers):
tm.assert_frame_equal(result, expected)
-def test_parse_date_all_fields(all_parsers):
+@pytest.mark.parametrize(
+ "date_parser, warning",
+ (
+ [conv.parse_all_fields, FutureWarning],
+ [lambda x: pd.to_datetime(x, format="%Y %m %d %H %M %S"), None],
+ ),
+)
+def test_parse_date_all_fields(all_parsers, date_parser, warning):
parser = all_parsers
data = """\
year,month,day,hour,minute,second,a,b
2001,01,05,10,00,0,0.0,10.
2001,01,5,10,0,00,1.,11.
"""
- result = parser.read_csv(
- StringIO(data),
- header=0,
- date_parser=conv.parse_all_fields,
- parse_dates={"ymdHMS": [0, 1, 2, 3, 4, 5]},
- )
+ with tm.assert_produces_warning(warning, check_stacklevel=False):
+ result = parser.read_csv(
+ StringIO(data),
+ header=0,
+ date_parser=date_parser,
+ parse_dates={"ymdHMS": [0, 1, 2, 3, 4, 5]},
+ )
expected = DataFrame(
[
[datetime(2001, 1, 5, 10, 0, 0), 0.0, 10.0],
@@ -1312,19 +1343,27 @@ def test_parse_date_all_fields(all_parsers):
tm.assert_frame_equal(result, expected)
-def test_datetime_fractional_seconds(all_parsers):
+@pytest.mark.parametrize(
+ "date_parser, warning",
+ (
+ [conv.parse_all_fields, FutureWarning],
+ [lambda x: pd.to_datetime(x, format="%Y %m %d %H %M %S.%f"), None],
+ ),
+)
+def test_datetime_fractional_seconds(all_parsers, date_parser, warning):
parser = all_parsers
data = """\
year,month,day,hour,minute,second,a,b
2001,01,05,10,00,0.123456,0.0,10.
2001,01,5,10,0,0.500000,1.,11.
"""
- result = parser.read_csv(
- StringIO(data),
- header=0,
- date_parser=conv.parse_all_fields,
- parse_dates={"ymdHMS": [0, 1, 2, 3, 4, 5]},
- )
+ with tm.assert_produces_warning(warning, check_stacklevel=False):
+ result = parser.read_csv(
+ StringIO(data),
+ header=0,
+ date_parser=date_parser,
+ parse_dates={"ymdHMS": [0, 1, 2, 3, 4, 5]},
+ )
expected = DataFrame(
[
[datetime(2001, 1, 5, 10, 0, 0, microsecond=123456), 0.0, 10.0],
@@ -1339,12 +1378,13 @@ def test_generic(all_parsers):
parser = all_parsers
data = "year,month,day,a\n2001,01,10,10.\n2001,02,1,11."
- result = parser.read_csv(
- StringIO(data),
- header=0,
- parse_dates={"ym": [0, 1]},
- date_parser=lambda y, m: date(year=int(y), month=int(m), day=1),
- )
+ with tm.assert_produces_warning(FutureWarning, check_stacklevel=False):
+ result = parser.read_csv(
+ StringIO(data),
+ header=0,
+ parse_dates={"ym": [0, 1]},
+ date_parser=lambda y, m: date(year=int(y), month=int(m), day=1),
+ )
expected = DataFrame(
[[date(2001, 1, 1), 10, 10.0], [date(2001, 2, 1), 1, 11.0]],
columns=["ym", "day", "a"],
diff --git a/pandas/tests/io/test_date_converters.py b/pandas/tests/io/test_date_converters.py
index cdb8eca02a3e5..a9fa27e091714 100644
--- a/pandas/tests/io/test_date_converters.py
+++ b/pandas/tests/io/test_date_converters.py
@@ -8,11 +8,12 @@
def test_parse_date_time():
+
dates = np.array(["2007/1/3", "2008/2/4"], dtype=object)
times = np.array(["05:07:09", "06:08:00"], dtype=object)
expected = np.array([datetime(2007, 1, 3, 5, 7, 9), datetime(2008, 2, 4, 6, 8, 0)])
-
- result = conv.parse_date_time(dates, times)
+ with tm.assert_produces_warning(FutureWarning):
+ result = conv.parse_date_time(dates, times)
tm.assert_numpy_array_equal(result, expected)
@@ -20,9 +21,10 @@ def test_parse_date_fields():
days = np.array([3, 4])
months = np.array([1, 2])
years = np.array([2007, 2008])
- result = conv.parse_date_fields(years, months, days)
-
expected = np.array([datetime(2007, 1, 3), datetime(2008, 2, 4)])
+
+ with tm.assert_produces_warning(FutureWarning):
+ result = conv.parse_date_fields(years, months, days)
tm.assert_numpy_array_equal(result, expected)
@@ -34,7 +36,8 @@ def test_parse_all_fields():
days = np.array([3, 4])
years = np.array([2007, 2008])
months = np.array([1, 2])
-
- result = conv.parse_all_fields(years, months, days, hours, minutes, seconds)
expected = np.array([datetime(2007, 1, 3, 5, 7, 9), datetime(2008, 2, 4, 6, 8, 0)])
+
+ with tm.assert_produces_warning(FutureWarning):
+ result = conv.parse_all_fields(years, months, days, hours, minutes, seconds)
tm.assert_numpy_array_equal(result, expected)
| - [x] closes #24518
- [x] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/35741 | 2020-08-15T19:26:47Z | 2020-09-12T21:37:58Z | 2020-09-12T21:37:58Z | 2020-09-12T21:38:03Z |
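The replacement that the deprecation messages point to, sketched here (assuming plain integer component columns): `pd.to_datetime` can assemble a datetime column from year/month/day fields directly, which is what `parse_date_fields` used to do.

```python
import pandas as pd

# pd.to_datetime accepts a DataFrame (or dict of columns) whose column
# names are datetime components such as year/month/day.
components = pd.DataFrame({"year": [2007, 2008], "month": [1, 2], "day": [3, 4]})
parsed = pd.to_datetime(components)
```

This returns a Series of Timestamps with no deprecation warning, replacing `conv.parse_date_fields(years, months, days)`.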
REF: _apply_blockwise define exclude in terms of skipped | diff --git a/pandas/core/window/rolling.py b/pandas/core/window/rolling.py
index ac96258cbc3c9..f516871f789d0 100644
--- a/pandas/core/window/rolling.py
+++ b/pandas/core/window/rolling.py
@@ -12,7 +12,7 @@
from pandas._libs.tslibs import BaseOffset, to_offset
import pandas._libs.window.aggregations as window_aggregations
-from pandas._typing import ArrayLike, Axis, FrameOrSeries, Scalar
+from pandas._typing import ArrayLike, Axis, FrameOrSeries, Label
from pandas.compat._optional import import_optional_dependency
from pandas.compat.numpy import function as nv
from pandas.util._decorators import Appender, Substitution, cache_readonly, doc
@@ -381,21 +381,31 @@ def _wrap_result(self, result, block=None, obj=None):
return type(obj)(result, index=index, columns=block.columns)
return result
- def _wrap_results(self, results, blocks, obj, exclude=None) -> FrameOrSeries:
+ def _wrap_results(self, results, obj, skipped: List[int]) -> FrameOrSeries:
"""
Wrap the results.
Parameters
----------
results : list of ndarrays
- blocks : list of blocks
obj : conformed data (may be resampled)
- exclude: list of columns to exclude, default to None
+ skipped: List[int]
+ Indices of blocks that are skipped.
"""
from pandas import Series, concat
+ exclude: List[Label] = []
+ if obj.ndim == 2:
+ orig_blocks = list(obj._to_dict_of_blocks(copy=False).values())
+ for i in skipped:
+ exclude.extend(orig_blocks[i].columns)
+ else:
+ orig_blocks = [obj]
+
+ kept_blocks = [blk for i, blk in enumerate(orig_blocks) if i not in skipped]
+
final = []
- for result, block in zip(results, blocks):
+ for result, block in zip(results, kept_blocks):
result = self._wrap_result(result, block=block, obj=obj)
if result.ndim == 1:
@@ -491,7 +501,6 @@ def _apply_blockwise(
skipped: List[int] = []
results: List[ArrayLike] = []
- exclude: List[Scalar] = []
for i, b in enumerate(blocks):
try:
values = self._prep_values(b.values)
@@ -499,7 +508,6 @@ def _apply_blockwise(
except (TypeError, NotImplementedError) as err:
if isinstance(obj, ABCDataFrame):
skipped.append(i)
- exclude.extend(b.columns)
continue
else:
raise DataError("No numeric types to aggregate") from err
@@ -507,8 +515,7 @@ def _apply_blockwise(
result = homogeneous_func(values)
results.append(result)
- block_list = [blk for i, blk in enumerate(blocks) if i not in skipped]
- return self._wrap_results(results, block_list, obj, exclude)
+ return self._wrap_results(results, obj, skipped)
def _apply(
self,
@@ -1283,7 +1290,7 @@ def count(self):
).sum()
results.append(result)
- return self._wrap_results(results, blocks, obj)
+ return self._wrap_results(results, obj, skipped=[])
_shared_docs["apply"] = dedent(
r"""
| orthogonal to #35730 | https://api.github.com/repos/pandas-dev/pandas/pulls/35740 | 2020-08-15T19:16:36Z | 2020-08-21T21:56:51Z | 2020-08-21T21:56:51Z | 2020-08-21T22:06:35Z |
Backport PR #35723 on branch 1.1.x (agg with list of non-aggregating functions) | diff --git a/doc/source/whatsnew/v1.1.1.rst b/doc/source/whatsnew/v1.1.1.rst
index 85e2a335c55c6..565b4a014bd0c 100644
--- a/doc/source/whatsnew/v1.1.1.rst
+++ b/doc/source/whatsnew/v1.1.1.rst
@@ -26,6 +26,7 @@ Fixed regressions
- Fixed regression in :meth:`DataFrame.reset_index` would raise a ``ValueError`` on empty :class:`DataFrame` with a :class:`MultiIndex` with a ``datetime64`` dtype level (:issue:`35606`, :issue:`35657`)
- Fixed regression where :meth:`DataFrame.merge_asof` would raise a ``UnboundLocalError`` when ``left_index`` , ``right_index`` and ``tolerance`` were set (:issue:`35558`)
- Fixed regression in ``.groupby(..).rolling(..)`` where a custom ``BaseIndexer`` would be ignored (:issue:`35557`)
+- Fixed regression in :meth:`~pandas.core.groupby.DataFrameGroupBy.agg` where a list of functions would produce the wrong results if at least one of the functions did not aggregate. (:issue:`35490`)
.. ---------------------------------------------------------------------------
diff --git a/pandas/core/groupby/generic.py b/pandas/core/groupby/generic.py
index c50b753cf3293..f5858c5c54f1d 100644
--- a/pandas/core/groupby/generic.py
+++ b/pandas/core/groupby/generic.py
@@ -322,11 +322,14 @@ def _aggregate_multiple_funcs(self, arg):
# let higher level handle
return results
- output = self._wrap_aggregated_output(results)
+ output = self._wrap_aggregated_output(results, index=None)
return self.obj._constructor_expanddim(output, columns=columns)
+ # TODO: index should not be Optional - see GH 35490
def _wrap_series_output(
- self, output: Mapping[base.OutputKey, Union[Series, np.ndarray]], index: Index,
+ self,
+ output: Mapping[base.OutputKey, Union[Series, np.ndarray]],
+ index: Optional[Index],
) -> Union[Series, DataFrame]:
"""
Wraps the output of a SeriesGroupBy operation into the expected result.
@@ -335,7 +338,7 @@ def _wrap_series_output(
----------
output : Mapping[base.OutputKey, Union[Series, np.ndarray]]
Data to wrap.
- index : pd.Index
+ index : pd.Index or None
Index to apply to the output.
Returns
@@ -363,8 +366,11 @@ def _wrap_series_output(
return result
+ # TODO: Remove index argument, use self.grouper.result_index, see GH 35490
def _wrap_aggregated_output(
- self, output: Mapping[base.OutputKey, Union[Series, np.ndarray]]
+ self,
+ output: Mapping[base.OutputKey, Union[Series, np.ndarray]],
+ index: Optional[Index],
) -> Union[Series, DataFrame]:
"""
Wraps the output of a SeriesGroupBy aggregation into the expected result.
@@ -383,9 +389,7 @@ def _wrap_aggregated_output(
In the vast majority of cases output will only contain one element.
The exception is operations that expand dimensions, like ohlc.
"""
- result = self._wrap_series_output(
- output=output, index=self.grouper.result_index
- )
+ result = self._wrap_series_output(output=output, index=index)
return self._reindex_output(result)
def _wrap_transformed_output(
@@ -1714,7 +1718,9 @@ def _insert_inaxis_grouper_inplace(self, result):
result.insert(0, name, lev)
def _wrap_aggregated_output(
- self, output: Mapping[base.OutputKey, Union[Series, np.ndarray]]
+ self,
+ output: Mapping[base.OutputKey, Union[Series, np.ndarray]],
+ index: Optional[Index],
) -> DataFrame:
"""
Wraps the output of DataFrameGroupBy aggregations into the expected result.
@@ -1739,8 +1745,7 @@ def _wrap_aggregated_output(
self._insert_inaxis_grouper_inplace(result)
result = result._consolidate()
else:
- index = self.grouper.result_index
- result.index = index
+ result.index = self.grouper.result_index
if self.axis == 1:
result = result.T
diff --git a/pandas/core/groupby/groupby.py b/pandas/core/groupby/groupby.py
index ac45222625569..11d0c8e42f745 100644
--- a/pandas/core/groupby/groupby.py
+++ b/pandas/core/groupby/groupby.py
@@ -973,7 +973,9 @@ def _cython_transform(self, how: str, numeric_only: bool = True, **kwargs):
return self._wrap_transformed_output(output)
- def _wrap_aggregated_output(self, output: Mapping[base.OutputKey, np.ndarray]):
+ def _wrap_aggregated_output(
+ self, output: Mapping[base.OutputKey, np.ndarray], index: Optional[Index]
+ ):
raise AbstractMethodError(self)
def _wrap_transformed_output(self, output: Mapping[base.OutputKey, np.ndarray]):
@@ -1048,7 +1050,7 @@ def _cython_agg_general(
if len(output) == 0:
raise DataError("No numeric types to aggregate")
- return self._wrap_aggregated_output(output)
+ return self._wrap_aggregated_output(output, index=self.grouper.result_index)
def _python_agg_general(
self, func, *args, engine="cython", engine_kwargs=None, **kwargs
@@ -1101,7 +1103,7 @@ def _python_agg_general(
output[key] = maybe_cast_result(values[mask], result)
- return self._wrap_aggregated_output(output)
+ return self._wrap_aggregated_output(output, index=self.grouper.result_index)
def _concat_objects(self, keys, values, not_indexed_same: bool = False):
from pandas.core.reshape.concat import concat
@@ -2521,7 +2523,7 @@ def _get_cythonized_result(
raise TypeError(error_msg)
if aggregate:
- return self._wrap_aggregated_output(output)
+ return self._wrap_aggregated_output(output, index=self.grouper.result_index)
else:
return self._wrap_transformed_output(output)
diff --git a/pandas/tests/groupby/aggregate/test_aggregate.py b/pandas/tests/groupby/aggregate/test_aggregate.py
index 40a20c8210052..ce9d4b892d775 100644
--- a/pandas/tests/groupby/aggregate/test_aggregate.py
+++ b/pandas/tests/groupby/aggregate/test_aggregate.py
@@ -1061,3 +1061,16 @@ def test_groupby_get_by_index():
res = df.groupby("A").agg({"B": lambda x: x.get(x.index[-1])})
expected = pd.DataFrame(dict(A=["S", "W"], B=[1.0, 2.0])).set_index("A")
pd.testing.assert_frame_equal(res, expected)
+
+
+def test_nonagg_agg():
+ # GH 35490 - Single/Multiple agg of non-agg function give same results
+ # TODO: agg should raise for functions that don't aggregate
+ df = pd.DataFrame({"a": [1, 1, 2, 2], "b": [1, 2, 2, 1]})
+ g = df.groupby("a")
+
+ result = g.agg(["cumsum"])
+ result.columns = result.columns.droplevel(-1)
+ expected = g.agg("cumsum")
+
+ tm.assert_frame_equal(result, expected)
| Backport PR #35723: agg with list of non-aggregating functions | https://api.github.com/repos/pandas-dev/pandas/pulls/35738 | 2020-08-15T15:55:41Z | 2020-08-15T16:59:36Z | 2020-08-15T16:59:36Z | 2020-08-15T16:59:37Z |
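The regression in one self-contained sketch, condensed from the PR's `test_nonagg_agg` (behavior may differ on pandas versions outside the 1.1.x line): a list containing a non-aggregating function should give the same values as the single-function spelling.

```python
import pandas as pd

df = pd.DataFrame({"a": [1, 1, 2, 2], "b": [1, 2, 2, 1]})
g = df.groupby("a")

# The list form routes through _wrap_aggregated_output; after the fix
# it agrees with the single-function spelling.
single = g.agg("cumsum")
multi = g.agg(["cumsum"])
multi.columns = multi.columns.droplevel(-1)
```

Before the fix, the list form used `self.grouper.result_index` even though `cumsum` does not aggregate, producing wrong results.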
BUG/ENH: to_pickle/read_pickle support compression for file objects | diff --git a/doc/source/whatsnew/v1.2.0.rst b/doc/source/whatsnew/v1.2.0.rst
index 8b28a4439e1da..44dd5ba122acd 100644
--- a/doc/source/whatsnew/v1.2.0.rst
+++ b/doc/source/whatsnew/v1.2.0.rst
@@ -295,6 +295,7 @@ I/O
- :meth:`to_csv` passes compression arguments for `'gzip'` always to `gzip.GzipFile` (:issue:`28103`)
- :meth:`to_csv` did not support zip compression for binary file object not having a filename (:issue: `35058`)
- :meth:`to_csv` and :meth:`read_csv` did not honor `compression` and `encoding` for path-like objects that are internally converted to file-like objects (:issue:`35677`, :issue:`26124`, and :issue:`32392`)
+- :meth:`to_pickle` and :meth:`read_pickle` did not support compression for file-objects (:issue:`26237`, :issue:`29054`, and :issue:`29570`)
Plotting
^^^^^^^^
diff --git a/pandas/_typing.py b/pandas/_typing.py
index 74bfc9134c3af..b237013ac7805 100644
--- a/pandas/_typing.py
+++ b/pandas/_typing.py
@@ -116,7 +116,7 @@
# compression keywords and compression
-CompressionDict = Mapping[str, Optional[Union[str, int, bool]]]
+CompressionDict = Dict[str, Any]
CompressionOptions = Optional[Union[str, CompressionDict]]
@@ -138,6 +138,6 @@ class IOargs(Generic[ModeVar, EncodingVar]):
filepath_or_buffer: FileOrBuffer
encoding: EncodingVar
- compression: CompressionOptions
+ compression: CompressionDict
should_close: bool
mode: Union[ModeVar, str]
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index c48bec9b670ad..1713743b98bff 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -27,7 +27,6 @@
Iterable,
Iterator,
List,
- Mapping,
Optional,
Sequence,
Set,
@@ -49,6 +48,7 @@
ArrayLike,
Axes,
Axis,
+ CompressionOptions,
Dtype,
FilePathOrBuffer,
FrameOrSeriesUnion,
@@ -2062,7 +2062,7 @@ def to_stata(
variable_labels: Optional[Dict[Label, str]] = None,
version: Optional[int] = 114,
convert_strl: Optional[Sequence[Label]] = None,
- compression: Union[str, Mapping[str, str], None] = "infer",
+ compression: CompressionOptions = "infer",
storage_options: StorageOptions = None,
) -> None:
"""
diff --git a/pandas/io/common.py b/pandas/io/common.py
index 2b13d54ec3aed..a80b89569f429 100644
--- a/pandas/io/common.py
+++ b/pandas/io/common.py
@@ -205,11 +205,13 @@ def get_filepath_or_buffer(
"""
filepath_or_buffer = stringify_path(filepath_or_buffer)
+ # handle compression dict
+ compression_method, compression = get_compression_method(compression)
+ compression_method = infer_compression(filepath_or_buffer, compression_method)
+ compression = dict(compression, method=compression_method)
+
# bz2 and xz do not write the byte order mark for utf-16 and utf-32
# print a warning when writing such files
- compression_method = infer_compression(
- filepath_or_buffer, get_compression_method(compression)[0]
- )
if (
mode
and "w" in mode
@@ -238,7 +240,7 @@ def get_filepath_or_buffer(
content_encoding = req.headers.get("Content-Encoding", None)
if content_encoding == "gzip":
# Override compression based on Content-Encoding header
- compression = "gzip"
+ compression = {"method": "gzip"}
reader = BytesIO(req.read())
req.close()
return IOargs(
@@ -374,11 +376,7 @@ def get_compression_method(
if isinstance(compression, Mapping):
compression_args = dict(compression)
try:
- # error: Incompatible types in assignment (expression has type
- # "Union[str, int, None]", variable has type "Optional[str]")
- compression_method = compression_args.pop( # type: ignore[assignment]
- "method"
- )
+ compression_method = compression_args.pop("method")
except KeyError as err:
raise ValueError("If mapping, compression must have key 'method'") from err
else:
@@ -652,12 +650,8 @@ def __init__(
super().__init__(file, mode, **kwargs_zip) # type: ignore[arg-type]
def write(self, data):
- archive_name = self.filename
- if self.archive_name is not None:
- archive_name = self.archive_name
- if archive_name is None:
- # ZipFile needs a non-empty string
- archive_name = "zip"
+ # ZipFile needs a non-empty string
+ archive_name = self.archive_name or self.filename or "zip"
super().writestr(archive_name, data)
@property
diff --git a/pandas/io/formats/csvs.py b/pandas/io/formats/csvs.py
index 270caec022fef..15cd5c026c6b6 100644
--- a/pandas/io/formats/csvs.py
+++ b/pandas/io/formats/csvs.py
@@ -21,12 +21,7 @@
)
from pandas.core.dtypes.missing import notna
-from pandas.io.common import (
- get_compression_method,
- get_filepath_or_buffer,
- get_handle,
- infer_compression,
-)
+from pandas.io.common import get_filepath_or_buffer, get_handle
class CSVFormatter:
@@ -60,17 +55,15 @@ def __init__(
if path_or_buf is None:
path_or_buf = StringIO()
- # Extract compression mode as given, if dict
- compression, self.compression_args = get_compression_method(compression)
- self.compression = infer_compression(path_or_buf, compression)
-
ioargs = get_filepath_or_buffer(
path_or_buf,
encoding=encoding,
- compression=self.compression,
+ compression=compression,
mode=mode,
storage_options=storage_options,
)
+ self.compression = ioargs.compression.pop("method")
+ self.compression_args = ioargs.compression
self.path_or_buf = ioargs.filepath_or_buffer
self.should_close = ioargs.should_close
self.mode = ioargs.mode
diff --git a/pandas/io/json/_json.py b/pandas/io/json/_json.py
index 7a3b76ff7e3d0..a4d923fdbe45a 100644
--- a/pandas/io/json/_json.py
+++ b/pandas/io/json/_json.py
@@ -19,12 +19,7 @@
from pandas.core.construction import create_series_with_explicit_dtype
from pandas.core.reshape.concat import concat
-from pandas.io.common import (
- get_compression_method,
- get_filepath_or_buffer,
- get_handle,
- infer_compression,
-)
+from pandas.io.common import get_compression_method, get_filepath_or_buffer, get_handle
from pandas.io.json._normalize import convert_to_line_delimits
from pandas.io.json._table_schema import build_table_schema, parse_table_schema
from pandas.io.parsers import _validate_integer
@@ -66,6 +61,7 @@ def to_json(
)
path_or_buf = ioargs.filepath_or_buffer
should_close = ioargs.should_close
+ compression = ioargs.compression
if lines and orient != "records":
raise ValueError("'lines' keyword only valid when 'orient' is records")
@@ -616,9 +612,6 @@ def read_json(
if encoding is None:
encoding = "utf-8"
- compression_method, compression = get_compression_method(compression)
- compression_method = infer_compression(path_or_buf, compression_method)
- compression = dict(compression, method=compression_method)
ioargs = get_filepath_or_buffer(
path_or_buf,
encoding=encoding,
diff --git a/pandas/io/parsers.py b/pandas/io/parsers.py
index c6ef5221e7ead..a0466c5ac6b57 100644
--- a/pandas/io/parsers.py
+++ b/pandas/io/parsers.py
@@ -63,12 +63,7 @@
from pandas.core.series import Series
from pandas.core.tools import datetimes as tools
-from pandas.io.common import (
- get_filepath_or_buffer,
- get_handle,
- infer_compression,
- validate_header_arg,
-)
+from pandas.io.common import get_filepath_or_buffer, get_handle, validate_header_arg
from pandas.io.date_converters import generic_parser
# BOM character (byte order mark)
@@ -424,9 +419,7 @@ def _read(filepath_or_buffer: FilePathOrBuffer, kwds):
if encoding is not None:
encoding = re.sub("_", "-", encoding).lower()
kwds["encoding"] = encoding
-
compression = kwds.get("compression", "infer")
- compression = infer_compression(filepath_or_buffer, compression)
# TODO: get_filepath_or_buffer could return
# Union[FilePathOrBuffer, s3fs.S3File, gcsfs.GCSFile]
@@ -1976,6 +1969,10 @@ def __init__(self, src, **kwds):
encoding = kwds.get("encoding")
+ # parsers.TextReader doesn't support compression dicts
+ if isinstance(kwds.get("compression"), dict):
+ kwds["compression"] = kwds["compression"]["method"]
+
if kwds.get("compression") is None and encoding:
if isinstance(src, str):
src = open(src, "rb")
diff --git a/pandas/io/pickle.py b/pandas/io/pickle.py
index 857a2d1b69be4..655deb5ca3779 100644
--- a/pandas/io/pickle.py
+++ b/pandas/io/pickle.py
@@ -92,11 +92,8 @@ def to_pickle(
mode="wb",
storage_options=storage_options,
)
- compression = ioargs.compression
- if not isinstance(ioargs.filepath_or_buffer, str) and compression == "infer":
- compression = None
f, fh = get_handle(
- ioargs.filepath_or_buffer, "wb", compression=compression, is_text=False
+ ioargs.filepath_or_buffer, "wb", compression=ioargs.compression, is_text=False
)
if protocol < 0:
protocol = pickle.HIGHEST_PROTOCOL
@@ -196,11 +193,8 @@ def read_pickle(
ioargs = get_filepath_or_buffer(
filepath_or_buffer, compression=compression, storage_options=storage_options
)
- compression = ioargs.compression
- if not isinstance(ioargs.filepath_or_buffer, str) and compression == "infer":
- compression = None
f, fh = get_handle(
- ioargs.filepath_or_buffer, "rb", compression=compression, is_text=False
+ ioargs.filepath_or_buffer, "rb", compression=ioargs.compression, is_text=False
)
# 1) try standard library Pickle
diff --git a/pandas/io/stata.py b/pandas/io/stata.py
index 34d520004cc65..b3b16e04a5d9e 100644
--- a/pandas/io/stata.py
+++ b/pandas/io/stata.py
@@ -16,18 +16,7 @@
from pathlib import Path
import struct
import sys
-from typing import (
- Any,
- AnyStr,
- BinaryIO,
- Dict,
- List,
- Mapping,
- Optional,
- Sequence,
- Tuple,
- Union,
-)
+from typing import Any, AnyStr, BinaryIO, Dict, List, Optional, Sequence, Tuple, Union
import warnings
from dateutil.relativedelta import relativedelta
@@ -58,13 +47,7 @@
from pandas.core.indexes.base import Index
from pandas.core.series import Series
-from pandas.io.common import (
- get_compression_method,
- get_filepath_or_buffer,
- get_handle,
- infer_compression,
- stringify_path,
-)
+from pandas.io.common import get_filepath_or_buffer, get_handle, stringify_path
_version_error = (
"Version of given Stata file is {version}. pandas supports importing "
@@ -1976,9 +1959,6 @@ def _open_file_binary_write(
return fname, False, None # type: ignore[return-value]
elif isinstance(fname, (str, Path)):
# Extract compression mode as given, if dict
- compression_typ, compression_args = get_compression_method(compression)
- compression_typ = infer_compression(fname, compression_typ)
- compression = dict(compression_args, method=compression_typ)
ioargs = get_filepath_or_buffer(
fname, mode="wb", compression=compression, storage_options=storage_options
)
@@ -2235,7 +2215,7 @@ def __init__(
time_stamp: Optional[datetime.datetime] = None,
data_label: Optional[str] = None,
variable_labels: Optional[Dict[Label, str]] = None,
- compression: Union[str, Mapping[str, str], None] = "infer",
+ compression: CompressionOptions = "infer",
storage_options: StorageOptions = None,
):
super().__init__()
@@ -3118,7 +3098,7 @@ def __init__(
data_label: Optional[str] = None,
variable_labels: Optional[Dict[Label, str]] = None,
convert_strl: Optional[Sequence[Label]] = None,
- compression: Union[str, Mapping[str, str], None] = "infer",
+ compression: CompressionOptions = "infer",
storage_options: StorageOptions = None,
):
# Copy to new list since convert_strl might be modified later
@@ -3523,7 +3503,7 @@ def __init__(
variable_labels: Optional[Dict[Label, str]] = None,
convert_strl: Optional[Sequence[Label]] = None,
version: Optional[int] = None,
- compression: Union[str, Mapping[str, str], None] = "infer",
+ compression: CompressionOptions = "infer",
storage_options: StorageOptions = None,
):
if version is None:
diff --git a/pandas/tests/io/test_pickle.py b/pandas/tests/io/test_pickle.py
index 6331113ab8945..d1c6705dd7a6f 100644
--- a/pandas/tests/io/test_pickle.py
+++ b/pandas/tests/io/test_pickle.py
@@ -14,7 +14,9 @@
import datetime
import glob
import gzip
+import io
import os
+from pathlib import Path
import pickle
import shutil
from warnings import catch_warnings, simplefilter
@@ -486,3 +488,30 @@ def test_read_pickle_with_subclass():
tm.assert_series_equal(result[0], expected[0])
assert isinstance(result[1], MyTz)
+
+
+def test_pickle_binary_object_compression(compression):
+ """
+ Read/write from binary file-objects w/wo compression.
+
+ GH 26237, GH 29054, and GH 29570
+ """
+ df = tm.makeDataFrame()
+
+ # reference for compression
+ with tm.ensure_clean() as path:
+ df.to_pickle(path, compression=compression)
+ reference = Path(path).read_bytes()
+
+ # write
+ buffer = io.BytesIO()
+ df.to_pickle(buffer, compression=compression)
+ buffer.seek(0)
+
+ # gzip and zip safe the filename: cannot compare the compressed content
+ assert buffer.getvalue() == reference or compression in ("gzip", "zip")
+
+ # read
+ read_df = pd.read_pickle(buffer, compression=compression)
+ buffer.seek(0)
+ tm.assert_frame_equal(df, read_df)
| - [x] closes #26237, closes #29054, and closes #29570
- [x] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] whatsnew entry
This was basically already supported, but `to/read_pickle` set `compression` to `None` for file objects.
Some functions called `get_filepath_or_buffer` (which might convert a string to a file object) before calling `infer_compression` (which doesn't work with file objects). I moved `infer_compression` and `get_compression_method` inside `get_filepath_or_buffer`. | https://api.github.com/repos/pandas-dev/pandas/pulls/35736 | 2020-08-15T15:24:31Z | 2020-09-05T14:50:04Z | 2020-09-05T14:50:04Z | 2020-09-05T17:47:58Z |
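The round-trip this change enables can be sketched with an in-memory buffer (an illustrative snippet, not taken from the PR's test suite):

```python
import io

import pandas as pd

df = pd.DataFrame({"a": [1, 2, 3]})

# Write a compressed pickle to an in-memory binary buffer. Before this
# change, `to_pickle` forced `compression` to None for file objects.
buffer = io.BytesIO()
df.to_pickle(buffer, compression="gzip")
buffer.seek(0)

# A buffer carries no filename, so "infer" has nothing to work with and
# the compression must also be passed explicitly when reading back.
roundtrip = pd.read_pickle(buffer, compression="gzip")
assert roundtrip.equals(df)
```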
CI: See if socket check is responsible for CI failures | diff --git a/pandas/util/_test_decorators.py b/pandas/util/_test_decorators.py
index 0dad8c7397e37..9334b908801a1 100644
--- a/pandas/util/_test_decorators.py
+++ b/pandas/util/_test_decorators.py
@@ -254,7 +254,6 @@ def file_leak_context():
else:
proc = psutil.Process()
flist = proc.open_files()
- conns = proc.connections()
yield
@@ -265,9 +264,6 @@ def file_leak_context():
flist2_ex = [(x.path, x.fd) for x in flist2]
assert flist2_ex == flist_ex, (flist2, flist)
- conns2 = proc.connections()
- assert conns2 == conns, (conns2, conns)
-
def async_mark():
try:
| Reverts part of #35693 | https://api.github.com/repos/pandas-dev/pandas/pulls/35734 | 2020-08-15T00:14:18Z | 2020-08-15T01:28:27Z | null | 2020-08-15T01:28:39Z |
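For context, the check being removed followed a simple snapshot-compare pattern: record a resource list before the block, yield, then assert nothing new appeared. A generalized sketch of that pattern (illustrative only, without `psutil`):

```python
from contextlib import contextmanager


@contextmanager
def leak_context(snapshot):
    """Assert a resource snapshot is unchanged across the block.

    `snapshot` is any zero-argument callable returning a comparable value,
    e.g. ``lambda: proc.open_files()`` or ``lambda: proc.connections()``
    with a psutil Process in the real helper.
    """
    before = snapshot()
    yield
    after = snapshot()
    assert after == before, (after, before)


# Toy usage: the "resource" here is just a list we promise not to grow.
resources = []
with leak_context(lambda: list(resources)):
    pass  # code under test must not leave new entries behind
```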
BLD: update min versions #35732 | diff --git a/pandas/compat/_optional.py b/pandas/compat/_optional.py
index 81eac490fe5b9..689c7c889ef66 100644
--- a/pandas/compat/_optional.py
+++ b/pandas/compat/_optional.py
@@ -11,25 +11,25 @@
"fsspec": "0.7.4",
"fastparquet": "0.3.2",
"gcsfs": "0.6.0",
- "lxml.etree": "3.8.0",
- "matplotlib": "2.2.2",
- "numexpr": "2.6.2",
+ "lxml.etree": "4.3.0",
+ "matplotlib": "2.2.3",
+ "numexpr": "2.6.8",
"odfpy": "1.3.0",
"openpyxl": "2.5.7",
"pandas_gbq": "0.12.0",
- "pyarrow": "0.13.0",
- "pytables": "3.4.3",
+ "pyarrow": "0.15.0",
+ "pytables": "3.4.4",
"pytest": "5.0.1",
"pyxlsb": "1.0.6",
"s3fs": "0.4.0",
"scipy": "1.2.0",
- "sqlalchemy": "1.1.4",
- "tables": "3.4.3",
+ "sqlalchemy": "1.2.8",
+ "tables": "3.4.4",
"tabulate": "0.8.3",
- "xarray": "0.8.2",
+ "xarray": "0.12.0",
"xlrd": "1.2.0",
- "xlwt": "1.2.0",
- "xlsxwriter": "0.9.8",
+ "xlwt": "1.3.0",
+ "xlsxwriter": "1.0.2",
"numba": "0.46.0",
}
| - [x] closes #35732
| https://api.github.com/repos/pandas-dev/pandas/pulls/35733 | 2020-08-14T22:45:03Z | 2020-08-17T13:11:54Z | 2020-08-17T13:11:54Z | 2020-08-17T15:39:20Z |
REF: move towards making _apply_blockwise actually block-wise | diff --git a/pandas/core/window/rolling.py b/pandas/core/window/rolling.py
index f516871f789d0..f7e81f41b8675 100644
--- a/pandas/core/window/rolling.py
+++ b/pandas/core/window/rolling.py
@@ -6,13 +6,23 @@
from functools import partial
import inspect
from textwrap import dedent
-from typing import Callable, Dict, List, Optional, Set, Tuple, Type, Union
+from typing import (
+ TYPE_CHECKING,
+ Callable,
+ Dict,
+ List,
+ Optional,
+ Set,
+ Tuple,
+ Type,
+ Union,
+)
import numpy as np
from pandas._libs.tslibs import BaseOffset, to_offset
import pandas._libs.window.aggregations as window_aggregations
-from pandas._typing import ArrayLike, Axis, FrameOrSeries, Label
+from pandas._typing import ArrayLike, Axis, FrameOrSeriesUnion, Label
from pandas.compat._optional import import_optional_dependency
from pandas.compat.numpy import function as nv
from pandas.util._decorators import Appender, Substitution, cache_readonly, doc
@@ -55,6 +65,9 @@
)
from pandas.core.window.numba_ import generate_numba_apply_func
+if TYPE_CHECKING:
+ from pandas import Series
+
def calculate_center_offset(window) -> int:
"""
@@ -145,7 +158,7 @@ class _Window(PandasObject, ShallowMixin, SelectionMixin):
def __init__(
self,
- obj: FrameOrSeries,
+ obj: FrameOrSeriesUnion,
window=None,
min_periods: Optional[int] = None,
center: bool = False,
@@ -219,7 +232,7 @@ def _validate_get_window_bounds_signature(window: BaseIndexer) -> None:
f"get_window_bounds"
)
- def _create_blocks(self, obj: FrameOrSeries):
+ def _create_blocks(self, obj: FrameOrSeriesUnion):
"""
Split data into blocks & return conformed data.
"""
@@ -381,7 +394,7 @@ def _wrap_result(self, result, block=None, obj=None):
return type(obj)(result, index=index, columns=block.columns)
return result
- def _wrap_results(self, results, obj, skipped: List[int]) -> FrameOrSeries:
+ def _wrap_results(self, results, obj, skipped: List[int]) -> FrameOrSeriesUnion:
"""
Wrap the results.
@@ -394,22 +407,23 @@ def _wrap_results(self, results, obj, skipped: List[int]) -> FrameOrSeries:
"""
from pandas import Series, concat
+ if obj.ndim == 1:
+ if not results:
+ raise DataError("No numeric types to aggregate")
+ assert len(results) == 1
+ return Series(results[0], index=obj.index, name=obj.name)
+
exclude: List[Label] = []
- if obj.ndim == 2:
- orig_blocks = list(obj._to_dict_of_blocks(copy=False).values())
- for i in skipped:
- exclude.extend(orig_blocks[i].columns)
- else:
- orig_blocks = [obj]
+ orig_blocks = list(obj._to_dict_of_blocks(copy=False).values())
+ for i in skipped:
+ exclude.extend(orig_blocks[i].columns)
kept_blocks = [blk for i, blk in enumerate(orig_blocks) if i not in skipped]
final = []
for result, block in zip(results, kept_blocks):
- result = self._wrap_result(result, block=block, obj=obj)
- if result.ndim == 1:
- return result
+ result = type(obj)(result, index=obj.index, columns=block.columns)
final.append(result)
exclude = exclude or []
@@ -488,13 +502,31 @@ def _get_window_indexer(self, window: int) -> BaseIndexer:
return VariableWindowIndexer(index_array=self._on.asi8, window_size=window)
return FixedWindowIndexer(window_size=window)
+ def _apply_series(self, homogeneous_func: Callable[..., ArrayLike]) -> "Series":
+ """
+ Series version of _apply_blockwise
+ """
+ _, obj = self._create_blocks(self._selected_obj)
+ values = obj.values
+
+ try:
+ values = self._prep_values(obj.values)
+ except (TypeError, NotImplementedError) as err:
+ raise DataError("No numeric types to aggregate") from err
+
+ result = homogeneous_func(values)
+ return obj._constructor(result, index=obj.index, name=obj.name)
+
def _apply_blockwise(
self, homogeneous_func: Callable[..., ArrayLike]
- ) -> FrameOrSeries:
+ ) -> FrameOrSeriesUnion:
"""
Apply the given function to the DataFrame broken down into homogeneous
sub-frames.
"""
+ if self._selected_obj.ndim == 1:
+ return self._apply_series(homogeneous_func)
+
# This isn't quite blockwise, since `blocks` is actually a collection
# of homogenenous DataFrames.
blocks, obj = self._create_blocks(self._selected_obj)
@@ -505,12 +537,9 @@ def _apply_blockwise(
try:
values = self._prep_values(b.values)
- except (TypeError, NotImplementedError) as err:
- if isinstance(obj, ABCDataFrame):
- skipped.append(i)
- continue
- else:
- raise DataError("No numeric types to aggregate") from err
+ except (TypeError, NotImplementedError):
+ skipped.append(i)
+ continue
result = homogeneous_func(values)
results.append(result)
@@ -2234,7 +2263,7 @@ def _apply(
def _constructor(self):
return Rolling
- def _create_blocks(self, obj: FrameOrSeries):
+ def _create_blocks(self, obj: FrameOrSeriesUnion):
"""
Split data into blocks & return conformed data.
"""
@@ -2275,7 +2304,7 @@ def _get_window_indexer(self, window: int) -> GroupbyRollingIndexer:
if isinstance(self.window, BaseIndexer):
rolling_indexer = type(self.window)
indexer_kwargs = self.window.__dict__
- assert isinstance(indexer_kwargs, dict)
+ assert isinstance(indexer_kwargs, dict) # for mypy
# We'll be using the index of each group later
indexer_kwargs.pop("index_array", None)
elif self.is_freq_type:
| A step towards #34714, orthogonal to the other outstanding PR in this vein #35696. | https://api.github.com/repos/pandas-dev/pandas/pulls/35730 | 2020-08-14T21:42:51Z | 2020-08-23T00:06:17Z | 2020-08-23T00:06:17Z | 2020-08-23T01:26:18Z |
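A quick external sanity check of this refactor (not part of the PR) is that the dedicated Series path and the frame-wise path agree on homogeneous numeric data:

```python
import pandas as pd

s = pd.Series([1.0, 2.0, 3.0, 4.0])
df = s.to_frame("x")

# Rolling over the Series directly and over a single-column frame
# should produce identical values.
s_roll = s.rolling(window=2).mean()
df_roll = df.rolling(window=2).mean()["x"]

assert s_roll.equals(df_roll)
assert s_roll.dropna().tolist() == [1.5, 2.5, 3.5]
```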
DOC: Fix broken link in cookbook.rst | diff --git a/doc/source/user_guide/cookbook.rst b/doc/source/user_guide/cookbook.rst
index 49487ac327e73..7542e1dc7df6f 100644
--- a/doc/source/user_guide/cookbook.rst
+++ b/doc/source/user_guide/cookbook.rst
@@ -765,7 +765,7 @@ Timeseries
<https://stackoverflow.com/questions/13893227/vectorized-look-up-of-values-in-pandas-dataframe>`__
`Aggregation and plotting time series
-<http://nipunbatra.github.io/2015/06/timeseries/>`__
+<https://nipunbatra.github.io/blog/visualisation/2013/05/01/aggregation-timeseries.html>`__
Turn a matrix with hours in columns and days in rows into a continuous row sequence in the form of a time series.
`How to rearrange a Python pandas DataFrame?
| The original link [Aggregation and plotting time series](http://nipunbatra.github.io/2015/06/timeseries/) found in the [Pandas Cookbook](https://pandas.pydata.org/pandas-docs/stable/user_guide/cookbook.html?highlight=get_group#timeseries) is broken.
This appears to have been moved to the author's [Blog](https://nipunbatra.github.io/blog/visualisation/2013/05/01/aggregation-timeseries.html).
While the dates do not match (2013/05 vs 2015/06), the contents appear identical. I was able to confirm this by viewing the archive on the [WayBack Machine](https://web.archive.org/web/20161202094122/http://nipunbatra.github.io/2015/06/timeseries/).
- [ ] closes #xxxx
- [ ] tests added / passed
- [ ] passes `black pandas`
- [ ] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/35729 | 2020-08-14T18:29:27Z | 2020-08-14T21:01:49Z | 2020-08-14T21:01:49Z | 2020-08-14T22:06:03Z |
BLD: bump xlrd min version to 1.2.0 | diff --git a/ci/deps/azure-37-locale_slow.yaml b/ci/deps/azure-37-locale_slow.yaml
index 3ccb66e09fe7e..8000f3e6b9a9c 100644
--- a/ci/deps/azure-37-locale_slow.yaml
+++ b/ci/deps/azure-37-locale_slow.yaml
@@ -24,7 +24,7 @@ dependencies:
- pytz=2017.3
- scipy
- sqlalchemy=1.2.8
- - xlrd=1.1.0
+ - xlrd=1.2.0
- xlsxwriter=1.0.2
- xlwt=1.3.0
- html5lib=1.0.1
diff --git a/ci/deps/azure-37-minimum_versions.yaml b/ci/deps/azure-37-minimum_versions.yaml
index 94cc5812bcc10..05b1957198bc4 100644
--- a/ci/deps/azure-37-minimum_versions.yaml
+++ b/ci/deps/azure-37-minimum_versions.yaml
@@ -25,7 +25,7 @@ dependencies:
- pytz=2017.3
- pyarrow=0.15
- scipy=1.2
- - xlrd=1.1.0
+ - xlrd=1.2.0
- xlsxwriter=1.0.2
- xlwt=1.3.0
- html5lib=1.0.1
diff --git a/doc/source/getting_started/install.rst b/doc/source/getting_started/install.rst
index 7ab150394bf51..4c270117e079e 100644
--- a/doc/source/getting_started/install.rst
+++ b/doc/source/getting_started/install.rst
@@ -287,7 +287,7 @@ s3fs 0.4.0 Amazon S3 access
tabulate 0.8.3 Printing in Markdown-friendly format (see `tabulate`_)
xarray 0.12.0 pandas-like API for N-dimensional data
xclip Clipboard I/O on linux
-xlrd 1.1.0 Excel reading
+xlrd 1.2.0 Excel reading
xlwt 1.3.0 Excel writing
xsel Clipboard I/O on linux
zlib Compression for HDF5
diff --git a/doc/source/whatsnew/v1.2.0.rst b/doc/source/whatsnew/v1.2.0.rst
index a3bb6dfd86bd2..42f95d88d74ac 100644
--- a/doc/source/whatsnew/v1.2.0.rst
+++ b/doc/source/whatsnew/v1.2.0.rst
@@ -122,7 +122,7 @@ Optional libraries below the lowest tested version may still work, but are not c
+-----------------+-----------------+---------+
| xarray | 0.12.0 | X |
+-----------------+-----------------+---------+
-| xlrd | 1.1.0 | |
+| xlrd | 1.2.0 | X |
+-----------------+-----------------+---------+
| xlsxwriter | 1.0.2 | X |
+-----------------+-----------------+---------+
diff --git a/pandas/compat/_optional.py b/pandas/compat/_optional.py
index 6423064732def..81eac490fe5b9 100644
--- a/pandas/compat/_optional.py
+++ b/pandas/compat/_optional.py
@@ -27,7 +27,7 @@
"tables": "3.4.3",
"tabulate": "0.8.3",
"xarray": "0.8.2",
- "xlrd": "1.1.0",
+ "xlrd": "1.2.0",
"xlwt": "1.2.0",
"xlsxwriter": "0.9.8",
"numba": "0.46.0",
diff --git a/pandas/tests/io/excel/test_readers.py b/pandas/tests/io/excel/test_readers.py
index b610c5ec3a838..51fbbf836a03f 100644
--- a/pandas/tests/io/excel/test_readers.py
+++ b/pandas/tests/io/excel/test_readers.py
@@ -1,9 +1,7 @@
-import contextlib
from datetime import datetime, time
from functools import partial
import os
from urllib.error import URLError
-import warnings
import numpy as np
import pytest
@@ -14,22 +12,6 @@
from pandas import DataFrame, Index, MultiIndex, Series
import pandas._testing as tm
-
-@contextlib.contextmanager
-def ignore_xlrd_time_clock_warning():
- """
- Context manager to ignore warnings raised by the xlrd library,
- regarding the deprecation of `time.clock` in Python 3.7.
- """
- with warnings.catch_warnings():
- warnings.filterwarnings(
- action="ignore",
- message="time.clock has been deprecated",
- category=DeprecationWarning,
- )
- yield
-
-
read_ext_params = [".xls", ".xlsx", ".xlsm", ".xlsb", ".ods"]
engine_params = [
# Add any engines to test here
@@ -134,21 +116,19 @@ def test_usecols_int(self, read_ext, df_ref):
# usecols as int
msg = "Passing an integer for `usecols`"
with pytest.raises(ValueError, match=msg):
- with ignore_xlrd_time_clock_warning():
- pd.read_excel(
- "test1" + read_ext, sheet_name="Sheet1", index_col=0, usecols=3
- )
+ pd.read_excel(
+ "test1" + read_ext, sheet_name="Sheet1", index_col=0, usecols=3
+ )
# usecols as int
with pytest.raises(ValueError, match=msg):
- with ignore_xlrd_time_clock_warning():
- pd.read_excel(
- "test1" + read_ext,
- sheet_name="Sheet2",
- skiprows=[1],
- index_col=0,
- usecols=3,
- )
+ pd.read_excel(
+ "test1" + read_ext,
+ sheet_name="Sheet2",
+ skiprows=[1],
+ index_col=0,
+ usecols=3,
+ )
def test_usecols_list(self, read_ext, df_ref):
if pd.read_excel.keywords["engine"] == "pyxlsb":
@@ -597,8 +577,7 @@ def test_sheet_name(self, read_ext, df_ref):
df1 = pd.read_excel(
filename + read_ext, sheet_name=sheet_name, index_col=0
) # doc
- with ignore_xlrd_time_clock_warning():
- df2 = pd.read_excel(filename + read_ext, index_col=0, sheet_name=sheet_name)
+ df2 = pd.read_excel(filename + read_ext, index_col=0, sheet_name=sheet_name)
tm.assert_frame_equal(df1, df_ref, check_names=False)
tm.assert_frame_equal(df2, df_ref, check_names=False)
| Warnings about `time.clock` are filling up the py37 min_version build. xlrd 1.2.0 was released Dec 15, 2018, so I think we're safe to bump it. | https://api.github.com/repos/pandas-dev/pandas/pulls/35728 | 2020-08-14T16:53:02Z | 2020-08-14T21:01:03Z | 2020-08-14T21:01:03Z | 2020-08-15T16:02:08Z |
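The removed `ignore_xlrd_time_clock_warning` helper wrapped a standard-library pattern that remains useful elsewhere; a generic sketch (illustrative name, not the pandas helper):

```python
import warnings
from contextlib import contextmanager


@contextmanager
def ignore_deprecation(message_pattern: str):
    """Silence DeprecationWarnings whose message matches a regex, block-scoped."""
    with warnings.catch_warnings():
        warnings.filterwarnings(
            action="ignore",
            message=message_pattern,
            category=DeprecationWarning,
        )
        yield


# The filter only applies inside the block; warnings fire normally outside.
with ignore_deprecation("time.clock has been deprecated"):
    warnings.warn("time.clock has been deprecated in Python 3.3", DeprecationWarning)
```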
CLN: remove unused variable | diff --git a/pandas/core/window/rolling.py b/pandas/core/window/rolling.py
index 0306d4de2fc73..966773b7c6982 100644
--- a/pandas/core/window/rolling.py
+++ b/pandas/core/window/rolling.py
@@ -318,7 +318,7 @@ def __repr__(self) -> str:
def __iter__(self):
window = self._get_window(win_type=None)
- blocks, obj = self._create_blocks(self._selected_obj)
+ _, obj = self._create_blocks(self._selected_obj)
index = self._get_window_indexer(window=window)
start, end = index.get_window_bounds(
| Tiny step towards #34714. | https://api.github.com/repos/pandas-dev/pandas/pulls/35726 | 2020-08-14T16:26:34Z | 2020-08-14T19:50:21Z | 2020-08-14T19:50:21Z | 2020-08-14T21:38:46Z |
agg with list of non-aggregating functions | diff --git a/doc/source/whatsnew/v1.1.1.rst b/doc/source/whatsnew/v1.1.1.rst
index b37103910afab..81057297cbb71 100644
--- a/doc/source/whatsnew/v1.1.1.rst
+++ b/doc/source/whatsnew/v1.1.1.rst
@@ -24,6 +24,7 @@ Fixed regressions
- Fixed regression in :meth:`DataFrame.apply` where functions that altered the input in-place only operated on a single row (:issue:`35462`)
- Fixed regression where :meth:`DataFrame.merge_asof` would raise a ``UnboundLocalError`` when ``left_index`` , ``right_index`` and ``tolerance`` were set (:issue:`35558`)
- Fixed regression in ``.groupby(..).rolling(..)`` where a custom ``BaseIndexer`` would be ignored (:issue:`35557`)
+- Fixed regression in :meth:`~pandas.core.groupby.DataFrameGroupBy.agg` where a list of functions would produce the wrong results if at least one of the functions did not aggregate. (:issue:`35490`)
.. ---------------------------------------------------------------------------
diff --git a/pandas/core/groupby/generic.py b/pandas/core/groupby/generic.py
index b7280a9f7db3c..b806d9856d20f 100644
--- a/pandas/core/groupby/generic.py
+++ b/pandas/core/groupby/generic.py
@@ -322,11 +322,14 @@ def _aggregate_multiple_funcs(self, arg):
# let higher level handle
return results
- output = self._wrap_aggregated_output(results)
+ output = self._wrap_aggregated_output(results, index=None)
return self.obj._constructor_expanddim(output, columns=columns)
+ # TODO: index should not be Optional - see GH 35490
def _wrap_series_output(
- self, output: Mapping[base.OutputKey, Union[Series, np.ndarray]], index: Index,
+ self,
+ output: Mapping[base.OutputKey, Union[Series, np.ndarray]],
+ index: Optional[Index],
) -> Union[Series, DataFrame]:
"""
Wraps the output of a SeriesGroupBy operation into the expected result.
@@ -335,7 +338,7 @@ def _wrap_series_output(
----------
output : Mapping[base.OutputKey, Union[Series, np.ndarray]]
Data to wrap.
- index : pd.Index
+ index : pd.Index or None
Index to apply to the output.
Returns
@@ -363,8 +366,11 @@ def _wrap_series_output(
return result
+ # TODO: Remove index argument, use self.grouper.result_index, see GH 35490
def _wrap_aggregated_output(
- self, output: Mapping[base.OutputKey, Union[Series, np.ndarray]]
+ self,
+ output: Mapping[base.OutputKey, Union[Series, np.ndarray]],
+ index: Optional[Index],
) -> Union[Series, DataFrame]:
"""
Wraps the output of a SeriesGroupBy aggregation into the expected result.
@@ -383,9 +389,7 @@ def _wrap_aggregated_output(
In the vast majority of cases output will only contain one element.
The exception is operations that expand dimensions, like ohlc.
"""
- result = self._wrap_series_output(
- output=output, index=self.grouper.result_index
- )
+ result = self._wrap_series_output(output=output, index=index)
return self._reindex_output(result)
def _wrap_transformed_output(
@@ -1720,7 +1724,9 @@ def _insert_inaxis_grouper_inplace(self, result):
result.insert(0, name, lev)
def _wrap_aggregated_output(
- self, output: Mapping[base.OutputKey, Union[Series, np.ndarray]]
+ self,
+ output: Mapping[base.OutputKey, Union[Series, np.ndarray]],
+ index: Optional[Index],
) -> DataFrame:
"""
Wraps the output of DataFrameGroupBy aggregations into the expected result.
@@ -1745,8 +1751,7 @@ def _wrap_aggregated_output(
self._insert_inaxis_grouper_inplace(result)
result = result._consolidate()
else:
- index = self.grouper.result_index
- result.index = index
+ result.index = self.grouper.result_index
if self.axis == 1:
result = result.T
diff --git a/pandas/core/groupby/groupby.py b/pandas/core/groupby/groupby.py
index 4597afeeaddbf..0047877ef78ee 100644
--- a/pandas/core/groupby/groupby.py
+++ b/pandas/core/groupby/groupby.py
@@ -974,7 +974,9 @@ def _cython_transform(self, how: str, numeric_only: bool = True, **kwargs):
return self._wrap_transformed_output(output)
- def _wrap_aggregated_output(self, output: Mapping[base.OutputKey, np.ndarray]):
+ def _wrap_aggregated_output(
+ self, output: Mapping[base.OutputKey, np.ndarray], index: Optional[Index]
+ ):
raise AbstractMethodError(self)
def _wrap_transformed_output(self, output: Mapping[base.OutputKey, np.ndarray]):
@@ -1049,7 +1051,7 @@ def _cython_agg_general(
if len(output) == 0:
raise DataError("No numeric types to aggregate")
- return self._wrap_aggregated_output(output)
+ return self._wrap_aggregated_output(output, index=self.grouper.result_index)
def _python_agg_general(
self, func, *args, engine="cython", engine_kwargs=None, **kwargs
@@ -1102,7 +1104,7 @@ def _python_agg_general(
output[key] = maybe_cast_result(values[mask], result)
- return self._wrap_aggregated_output(output)
+ return self._wrap_aggregated_output(output, index=self.grouper.result_index)
def _concat_objects(self, keys, values, not_indexed_same: bool = False):
from pandas.core.reshape.concat import concat
@@ -2534,7 +2536,7 @@ def _get_cythonized_result(
raise TypeError(error_msg)
if aggregate:
- return self._wrap_aggregated_output(output)
+ return self._wrap_aggregated_output(output, index=self.grouper.result_index)
else:
return self._wrap_transformed_output(output)
diff --git a/pandas/tests/groupby/aggregate/test_aggregate.py b/pandas/tests/groupby/aggregate/test_aggregate.py
index 40a20c8210052..ce9d4b892d775 100644
--- a/pandas/tests/groupby/aggregate/test_aggregate.py
+++ b/pandas/tests/groupby/aggregate/test_aggregate.py
@@ -1061,3 +1061,16 @@ def test_groupby_get_by_index():
res = df.groupby("A").agg({"B": lambda x: x.get(x.index[-1])})
expected = pd.DataFrame(dict(A=["S", "W"], B=[1.0, 2.0])).set_index("A")
pd.testing.assert_frame_equal(res, expected)
+
+
+def test_nonagg_agg():
+ # GH 35490 - Single/Multiple agg of non-agg function give same results
+ # TODO: agg should raise for functions that don't aggregate
+ df = pd.DataFrame({"a": [1, 1, 2, 2], "b": [1, 2, 2, 1]})
+ g = df.groupby("a")
+
+ result = g.agg(["cumsum"])
+ result.columns = result.columns.droplevel(-1)
+ expected = g.agg("cumsum")
+
+ tm.assert_frame_equal(result, expected)
| - [x] closes #35490
- [x] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] whatsnew entry
Reverts to 1.0.5 behavior while maintaining the bugfix that introduced the "regression". Ideally `agg` would instead raise in these scenarios ([ref](https://github.com/pandas-dev/pandas/issues/35490#issuecomment-672981745)), but that would be an API change. Much of this PR should be reverted once this is done; I've marked such places with TODOs.
| https://api.github.com/repos/pandas-dev/pandas/pulls/35723 | 2020-08-14T15:59:40Z | 2020-08-14T20:59:10Z | 2020-08-14T20:59:10Z | 2021-04-23T01:26:59Z |
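The restored behavior can be exercised directly; this mirrors the PR's `test_nonagg_agg` shown in the diff:

```python
import pandas as pd

df = pd.DataFrame({"a": [1, 1, 2, 2], "b": [1, 2, 2, 1]})
g = df.groupby("a")

# A non-aggregating function transforms within each group, whether it
# is passed alone or wrapped in a list.
single = g.agg("cumsum")
multi = g.agg(["cumsum"])
multi.columns = multi.columns.droplevel(-1)

assert multi.equals(single)
assert single["b"].tolist() == [1, 3, 2, 3]
```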
Backport PR #35664 on branch 1.1.x (BUG: Styler cell_ids fails on multiple renders) | diff --git a/doc/source/whatsnew/v1.1.1.rst b/doc/source/whatsnew/v1.1.1.rst
index 98d67e930ccc0..3f177b29d52b8 100644
--- a/doc/source/whatsnew/v1.1.1.rst
+++ b/doc/source/whatsnew/v1.1.1.rst
@@ -33,7 +33,7 @@ Fixed regressions
Bug fixes
~~~~~~~~~
-- Bug in ``Styler`` whereby `cell_ids` argument had no effect due to other recent changes (:issue:`35588`).
+- Bug in ``Styler`` whereby `cell_ids` argument had no effect due to other recent changes (:issue:`35588`) (:issue:`35663`).
Categorical
^^^^^^^^^^^
diff --git a/pandas/io/formats/style.py b/pandas/io/formats/style.py
index 584f42a6cab12..3bbb5271bce61 100644
--- a/pandas/io/formats/style.py
+++ b/pandas/io/formats/style.py
@@ -390,16 +390,16 @@ def format_attr(pair):
"is_visible": (c not in hidden_columns),
}
# only add an id if the cell has a style
+ props = []
if self.cell_ids or (r, c) in ctx:
row_dict["id"] = "_".join(cs[1:])
+ for x in ctx[r, c]:
+ # have to handle empty styles like ['']
+ if x.count(":"):
+ props.append(tuple(x.split(":")))
+ else:
+ props.append(("", ""))
row_es.append(row_dict)
- props = []
- for x in ctx[r, c]:
- # have to handle empty styles like ['']
- if x.count(":"):
- props.append(tuple(x.split(":")))
- else:
- props.append(("", ""))
cellstyle_map[tuple(props)].append(f"row{r}_col{c}")
body.append(row_es)
diff --git a/pandas/tests/io/formats/test_style.py b/pandas/tests/io/formats/test_style.py
index 3ef5157655e78..6025649e9dbec 100644
--- a/pandas/tests/io/formats/test_style.py
+++ b/pandas/tests/io/formats/test_style.py
@@ -1684,8 +1684,11 @@ def f(a, b, styler):
def test_no_cell_ids(self):
# GH 35588
+ # GH 35663
df = pd.DataFrame(data=[[0]])
- s = Styler(df, uuid="_", cell_ids=False).render()
+ styler = Styler(df, uuid="_", cell_ids=False)
+ styler.render()
+ s = styler.render() # render twice to ensure ctx is not updated
assert s.find('<td class="data row0 col0" >') != -1
| Backport PR #35664: BUG: Styler cell_ids fails on multiple renders | https://api.github.com/repos/pandas-dev/pandas/pulls/35722 | 2020-08-14T14:37:59Z | 2020-08-14T15:39:01Z | 2020-08-14T15:39:01Z | 2020-08-14T15:39:01Z |
Backport PR #35707 on branch 1.1.x (REGR: fix DataFrame.diff with read-only data) | diff --git a/doc/source/whatsnew/v1.1.1.rst b/doc/source/whatsnew/v1.1.1.rst
index 98d67e930ccc0..22d34bef65aa9 100644
--- a/doc/source/whatsnew/v1.1.1.rst
+++ b/doc/source/whatsnew/v1.1.1.rst
@@ -16,10 +16,11 @@ Fixed regressions
~~~~~~~~~~~~~~~~~
- Fixed regression where :meth:`DataFrame.to_numpy` would raise a ``RuntimeError`` for mixed dtypes when converting to ``str`` (:issue:`35455`)
-- Fixed regression where :func:`read_csv` would raise a ``ValueError`` when ``pandas.options.mode.use_inf_as_na`` was set to ``True`` (:issue:`35493`).
+- Fixed regression where :func:`read_csv` would raise a ``ValueError`` when ``pandas.options.mode.use_inf_as_na`` was set to ``True`` (:issue:`35493`)
- Fixed regression where :func:`pandas.testing.assert_series_equal` would raise an error when non-numeric dtypes were passed with ``check_exact=True`` (:issue:`35446`)
- Fixed regression in :class:`pandas.core.groupby.RollingGroupby` where column selection was ignored (:issue:`35486`)
- Fixed regression in :meth:`DataFrame.shift` with ``axis=1`` and heterogeneous dtypes (:issue:`35488`)
+- Fixed regression in :meth:`DataFrame.diff` with read-only data (:issue:`35559`)
- Fixed regression in ``.groupby(..).rolling(..)`` where a segfault would occur with ``center=True`` and an odd number of values (:issue:`35552`)
- Fixed regression in :meth:`DataFrame.apply` where functions that altered the input in-place only operated on a single row (:issue:`35462`)
- Fixed regression in :meth:`DataFrame.reset_index` would raise a ``ValueError`` on empty :class:`DataFrame` with a :class:`MultiIndex` with a ``datetime64`` dtype level (:issue:`35606`, :issue:`35657`)
diff --git a/pandas/_libs/algos.pyx b/pandas/_libs/algos.pyx
index 7e90a8cc681ef..0a70afda893cf 100644
--- a/pandas/_libs/algos.pyx
+++ b/pandas/_libs/algos.pyx
@@ -1200,14 +1200,15 @@ ctypedef fused out_t:
@cython.boundscheck(False)
@cython.wraparound(False)
def diff_2d(
- diff_t[:, :] arr,
- out_t[:, :] out,
+ ndarray[diff_t, ndim=2] arr, # TODO(cython 3) update to "const diff_t[:, :] arr"
+ ndarray[out_t, ndim=2] out,
Py_ssize_t periods,
int axis,
):
cdef:
Py_ssize_t i, j, sx, sy, start, stop
- bint f_contig = arr.is_f_contig()
+ bint f_contig = arr.flags.f_contiguous
+ # bint f_contig = arr.is_f_contig() # TODO(cython 3)
# Disable for unsupported dtype combinations,
# see https://github.com/cython/cython/issues/2646
diff --git a/pandas/tests/frame/methods/test_diff.py b/pandas/tests/frame/methods/test_diff.py
index 45f134a93a23a..0486fb2d588b6 100644
--- a/pandas/tests/frame/methods/test_diff.py
+++ b/pandas/tests/frame/methods/test_diff.py
@@ -214,3 +214,12 @@ def test_diff_integer_na(self, axis, expected):
# Test case for default behaviour of diff
result = df.diff(axis=axis)
tm.assert_frame_equal(result, expected)
+
+ def test_diff_readonly(self):
+ # https://github.com/pandas-dev/pandas/issues/35559
+ arr = np.random.randn(5, 2)
+ arr.flags.writeable = False
+ df = pd.DataFrame(arr)
+ result = df.diff()
+ expected = pd.DataFrame(np.array(df)).diff()
+ tm.assert_frame_equal(result, expected)
diff --git a/setup.py b/setup.py
index aebbdbf4d1e96..22da02360619e 100755
--- a/setup.py
+++ b/setup.py
@@ -457,6 +457,9 @@ def run(self):
if sys.version_info[:2] == (3, 8): # GH 33239
extra_compile_args.append("-Wno-error=deprecated-declarations")
+ # https://github.com/pandas-dev/pandas/issues/35559
+ extra_compile_args.append("-Wno-error=unreachable-code")
+
# enable coverage by building cython files by setting the environment variable
# "PANDAS_CYTHON_COVERAGE" (with a Truthy value) or by running build_ext
# with `--with-cython-coverage`enabled
| Backport PR #35707: REGR: fix DataFrame.diff with read-only data | https://api.github.com/repos/pandas-dev/pandas/pulls/35721 | 2020-08-14T14:37:31Z | 2020-08-14T15:38:21Z | 2020-08-14T15:38:21Z | 2020-08-14T15:38:21Z |
update io documentation | diff --git a/doc/source/user_guide/io.rst b/doc/source/user_guide/io.rst
index 43030d76d945a..7ad08658420be 100644
--- a/doc/source/user_guide/io.rst
+++ b/doc/source/user_guide/io.rst
@@ -3596,7 +3596,7 @@ similar to how ``read_csv`` and ``to_csv`` work.
os.remove('store_tl.h5')
-HDFStore will by default not drop rows that are all missing. This behavior can be changed by setting ``dropna=True``.
+HDFStore will by default drop rows that are all missing. This behavior can be changed by setting ``dropna=True``.
.. ipython:: python
| The HDFStore `dropna=True` parameter does not drop the NaN values. Perhaps this is a bug in core pandas, or just a typo in the documentation
- [X] closes #35719
- [X] tests added / passed
- [X] passes `black pandas`
- [X] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
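For clarity about what is (or is not) being dropped: with `dropna=True` on a table-format write, only rows in which *every* value is missing are dropped; rows with some missing values are kept. A minimal pure-Python sketch of that predicate (`drop_all_missing` is a hypothetical name, and `None` stands in for NaN):

```python
def drop_all_missing(rows):
    # Sketch of the dropna=True table-write semantics discussed here:
    # a row is dropped only when *every* value in it is missing
    # (None stands in for NaN in this pure-Python model).
    return [row for row in rows if not all(v is None for v in row)]
```

So `[[1, None], [None, None], [None, 2]]` keeps the first and last rows and drops only the all-missing middle one.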
| https://api.github.com/repos/pandas-dev/pandas/pulls/35720 | 2020-08-14T14:34:07Z | 2020-08-15T02:31:54Z | null | 2020-08-15T11:13:23Z |
CI: doctest failure for read_hdf on 1.1.x | diff --git a/pandas/io/pytables.py b/pandas/io/pytables.py
index b67a1c5781d91..5693ecc500e35 100644
--- a/pandas/io/pytables.py
+++ b/pandas/io/pytables.py
@@ -312,6 +312,10 @@ def read_hdf(
mode : {'r', 'r+', 'a'}, default 'r'
Mode to use when opening the file. Ignored if path_or_buf is a
:class:`pandas.HDFStore`. Default is 'r'.
+ errors : str, default 'strict'
+ Specifies how encoding and decoding errors are to be handled.
+ See the errors argument for :func:`open` for a full list
+ of options.
where : list, optional
A list of Term (or convertible) objects.
start : int, optional
@@ -324,10 +328,6 @@ def read_hdf(
Return an iterator object.
chunksize : int, optional
Number of rows to include in an iteration when using an iterator.
- errors : str, default 'strict'
- Specifies how encoding and decoding errors are to be handled.
- See the errors argument for :func:`open` for a full list
- of options.
**kwargs
Additional keyword arguments passed to HDFStore.
| NOTE: PR against 1.1.x branch
(fixed in #35214 on master)
xref https://github.com/pandas-dev/pandas/pull/35699#issuecomment-673392019 | https://api.github.com/repos/pandas-dev/pandas/pulls/35718 | 2020-08-14T11:24:39Z | 2020-08-14T12:12:39Z | 2020-08-14T12:12:39Z | 2020-08-14T12:12:48Z |
CLN: remove extant uses of built-in filter function | diff --git a/pandas/_config/localization.py b/pandas/_config/localization.py
index 66865e1afb952..3933c8f3d519c 100644
--- a/pandas/_config/localization.py
+++ b/pandas/_config/localization.py
@@ -88,12 +88,14 @@ def _valid_locales(locales, normalize):
valid_locales : list
A list of valid locales.
"""
- if normalize:
- normalizer = lambda x: locale.normalize(x.strip())
- else:
- normalizer = lambda x: x.strip()
-
- return list(filter(can_set_locale, map(normalizer, locales)))
+ return [
+ loc
+ for loc in (
+ locale.normalize(loc.strip()) if normalize else loc.strip()
+ for loc in locales
+ )
+ if can_set_locale(loc)
+ ]
def _default_locale_getter():
diff --git a/pandas/core/computation/expr.py b/pandas/core/computation/expr.py
index fcccc24ed7615..125ecb0d88036 100644
--- a/pandas/core/computation/expr.py
+++ b/pandas/core/computation/expr.py
@@ -167,10 +167,9 @@ def _is_type(t):
# partition all AST nodes
_all_nodes = frozenset(
- filter(
- lambda x: isinstance(x, type) and issubclass(x, ast.AST),
- (getattr(ast, node) for node in dir(ast)),
- )
+ node
+ for node in (getattr(ast, name) for name in dir(ast))
+ if isinstance(node, type) and issubclass(node, ast.AST)
)
diff --git a/pandas/core/reshape/merge.py b/pandas/core/reshape/merge.py
index 2349cb1dcc0c7..01e20f49917ac 100644
--- a/pandas/core/reshape/merge.py
+++ b/pandas/core/reshape/merge.py
@@ -2012,8 +2012,11 @@ def _sort_labels(uniques: np.ndarray, left, right):
def _get_join_keys(llab, rlab, shape, sort: bool):
# how many levels can be done without overflow
- pred = lambda i: not is_int64_overflow_possible(shape[:i])
- nlev = next(filter(pred, range(len(shape), 0, -1)))
+ nlev = next(
+ lev
+ for lev in range(len(shape), 0, -1)
+ if not is_int64_overflow_possible(shape[:lev])
+ )
# get keys for the first `nlev` levels
stride = np.prod(shape[1:nlev], dtype="i8")
diff --git a/pandas/io/json/_json.py b/pandas/io/json/_json.py
index 0d2b351926343..41a28d32521c0 100644
--- a/pandas/io/json/_json.py
+++ b/pandas/io/json/_json.py
@@ -754,8 +754,9 @@ def _combine_lines(self, lines) -> str:
"""
Combines a list of JSON objects into one JSON object.
"""
- lines = filter(None, map(lambda x: x.strip(), lines))
- return "[" + ",".join(lines) + "]"
+ return (
+ f'[{",".join((line for line in (line.strip() for line in lines) if line))}]'
+ )
def read(self):
"""
diff --git a/pandas/io/parsers.py b/pandas/io/parsers.py
index 9dc0e1f71d13b..5d49757ce7d58 100644
--- a/pandas/io/parsers.py
+++ b/pandas/io/parsers.py
@@ -2161,9 +2161,7 @@ def read(self, nrows=None):
if self.usecols is not None:
columns = self._filter_usecols(columns)
- col_dict = dict(
- filter(lambda item: item[0] in columns, col_dict.items())
- )
+ col_dict = {k: v for k, v in col_dict.items() if k in columns}
return index, columns, col_dict
diff --git a/pandas/io/pytables.py b/pandas/io/pytables.py
index 2abc570a04de3..f08e0514a68e1 100644
--- a/pandas/io/pytables.py
+++ b/pandas/io/pytables.py
@@ -99,22 +99,20 @@ def _ensure_str(name):
def _ensure_term(where, scope_level: int):
"""
- ensure that the where is a Term or a list of Term
- this makes sure that we are capturing the scope of variables
- that are passed
- create the terms here with a frame_level=2 (we are 2 levels down)
+ Ensure that the where is a Term or a list of Term.
+
+ This makes sure that we are capturing the scope of variables that are
+ passed create the terms here with a frame_level=2 (we are 2 levels down)
"""
# only consider list/tuple here as an ndarray is automatically a coordinate
# list
level = scope_level + 1
if isinstance(where, (list, tuple)):
- wlist = []
- for w in filter(lambda x: x is not None, where):
- if not maybe_expression(w):
- wlist.append(w)
- else:
- wlist.append(Term(w, scope_level=level))
- where = wlist
+ where = [
+ Term(term, scope_level=level + 1) if maybe_expression(term) else term
+ for term in where
+ if term is not None
+ ]
elif maybe_expression(where):
where = Term(where, scope_level=level)
return where if where is None or len(where) else None
diff --git a/pandas/tests/computation/test_eval.py b/pandas/tests/computation/test_eval.py
index 08d8d5ca342b7..853ab00853d1b 100644
--- a/pandas/tests/computation/test_eval.py
+++ b/pandas/tests/computation/test_eval.py
@@ -168,7 +168,7 @@ def setup_ops(self):
def setup_method(self, method):
self.setup_ops()
self.setup_data()
- self.current_engines = filter(lambda x: x != self.engine, _engines)
+ self.current_engines = (engine for engine in _engines if engine != self.engine)
def teardown_method(self, method):
del self.lhses, self.rhses, self.scalar_rhses, self.scalar_lhses
@@ -774,11 +774,9 @@ def setup_class(cls):
cls.parser = "python"
def setup_ops(self):
- self.cmp_ops = list(
- filter(lambda x: x not in ("in", "not in"), expr._cmp_ops_syms)
- )
+ self.cmp_ops = [op for op in expr._cmp_ops_syms if op not in ("in", "not in")]
self.cmp2_ops = self.cmp_ops[::-1]
- self.bin_ops = [s for s in expr._bool_ops_syms if s not in ("and", "or")]
+ self.bin_ops = [op for op in expr._bool_ops_syms if op not in ("and", "or")]
self.special_case_ops = _special_case_arith_ops_syms
self.arith_ops = _good_arith_ops
self.unary_ops = "+", "-", "~"
@@ -1150,9 +1148,9 @@ def eval(self, *args, **kwargs):
return pd.eval(*args, **kwargs)
def test_simple_arith_ops(self):
- ops = self.arith_ops
+ ops = (op for op in self.arith_ops if op != "//")
- for op in filter(lambda x: x != "//", ops):
+ for op in ops:
ex = f"1 {op} 1"
ex2 = f"x {op} 1"
ex3 = f"1 {op} (x + 1)"
@@ -1637,8 +1635,11 @@ def setup_class(cls):
super().setup_class()
cls.engine = "numexpr"
cls.parser = "python"
- cls.arith_ops = expr._arith_ops_syms + expr._cmp_ops_syms
- cls.arith_ops = filter(lambda x: x not in ("in", "not in"), cls.arith_ops)
+ cls.arith_ops = [
+ op
+ for op in expr._arith_ops_syms + expr._cmp_ops_syms
+ if op not in ("in", "not in")
+ ]
def test_check_many_exprs(self):
a = 1 # noqa
@@ -1726,8 +1727,11 @@ class TestOperationsPythonPython(TestOperationsNumExprPython):
def setup_class(cls):
super().setup_class()
cls.engine = cls.parser = "python"
- cls.arith_ops = expr._arith_ops_syms + expr._cmp_ops_syms
- cls.arith_ops = filter(lambda x: x not in ("in", "not in"), cls.arith_ops)
+ cls.arith_ops = [
+ op
+ for op in expr._arith_ops_syms + expr._cmp_ops_syms
+ if op not in ("in", "not in")
+ ]
class TestOperationsPythonPandas(TestOperationsNumExprPandas):
| No internet for a good part of yesterday so some time to experiment with code cleanups. | https://api.github.com/repos/pandas-dev/pandas/pulls/35717 | 2020-08-14T11:01:42Z | 2020-08-14T12:32:01Z | 2020-08-14T12:32:01Z | 2020-09-03T12:15:58Z |
Backport PR #35673 on branch 1.1.x (REGR: Dataframe.reset_index() on empty DataFrame with MI and datatime level) | diff --git a/doc/source/whatsnew/v1.1.1.rst b/doc/source/whatsnew/v1.1.1.rst
index b37103910afab..98d67e930ccc0 100644
--- a/doc/source/whatsnew/v1.1.1.rst
+++ b/doc/source/whatsnew/v1.1.1.rst
@@ -22,6 +22,7 @@ Fixed regressions
- Fixed regression in :meth:`DataFrame.shift` with ``axis=1`` and heterogeneous dtypes (:issue:`35488`)
- Fixed regression in ``.groupby(..).rolling(..)`` where a segfault would occur with ``center=True`` and an odd number of values (:issue:`35552`)
- Fixed regression in :meth:`DataFrame.apply` where functions that altered the input in-place only operated on a single row (:issue:`35462`)
+- Fixed regression in :meth:`DataFrame.reset_index` would raise a ``ValueError`` on empty :class:`DataFrame` with a :class:`MultiIndex` with a ``datetime64`` dtype level (:issue:`35606`, :issue:`35657`)
- Fixed regression where :meth:`DataFrame.merge_asof` would raise a ``UnboundLocalError`` when ``left_index`` , ``right_index`` and ``tolerance`` were set (:issue:`35558`)
- Fixed regression in ``.groupby(..).rolling(..)`` where a custom ``BaseIndexer`` would be ignored (:issue:`35557`)
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index b7286ce86d24e..041121d60ad33 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -4778,7 +4778,7 @@ def _maybe_casted_values(index, labels=None):
# we can have situations where the whole mask is -1,
# meaning there is nothing found in labels, so make all nan's
- if mask.all():
+ if mask.size > 0 and mask.all():
dtype = index.dtype
fill_value = na_value_for_dtype(dtype)
values = construct_1d_arraylike_from_scalar(
diff --git a/pandas/tests/frame/methods/test_reset_index.py b/pandas/tests/frame/methods/test_reset_index.py
index da4bfa9be4881..b88ef0e6691cb 100644
--- a/pandas/tests/frame/methods/test_reset_index.py
+++ b/pandas/tests/frame/methods/test_reset_index.py
@@ -318,3 +318,33 @@ def test_reset_index_dtypes_on_empty_frame_with_multiindex(array, dtype):
result = DataFrame(index=idx)[:0].reset_index().dtypes
expected = Series({"level_0": np.int64, "level_1": np.float64, "level_2": dtype})
tm.assert_series_equal(result, expected)
+
+
+def test_reset_index_empty_frame_with_datetime64_multiindex():
+ # https://github.com/pandas-dev/pandas/issues/35606
+ idx = MultiIndex(
+ levels=[[pd.Timestamp("2020-07-20 00:00:00")], [3, 4]],
+ codes=[[], []],
+ names=["a", "b"],
+ )
+ df = DataFrame(index=idx, columns=["c", "d"])
+ result = df.reset_index()
+ expected = DataFrame(
+ columns=list("abcd"), index=RangeIndex(start=0, stop=0, step=1)
+ )
+ expected["a"] = expected["a"].astype("datetime64[ns]")
+ expected["b"] = expected["b"].astype("int64")
+ tm.assert_frame_equal(result, expected)
+
+
+def test_reset_index_empty_frame_with_datetime64_multiindex_from_groupby():
+ # https://github.com/pandas-dev/pandas/issues/35657
+ df = DataFrame(dict(c1=[10.0], c2=["a"], c3=pd.to_datetime("2020-01-01")))
+ df = df.head(0).groupby(["c2", "c3"])[["c1"]].sum()
+ result = df.reset_index()
+ expected = DataFrame(
+ columns=["c2", "c3", "c1"], index=RangeIndex(start=0, stop=0, step=1)
+ )
+ expected["c3"] = expected["c3"].astype("datetime64[ns]")
+ expected["c1"] = expected["c1"].astype("float64")
+ tm.assert_frame_equal(result, expected)
| Backport PR #35673: REGR: Dataframe.reset_index() on empty DataFrame with MI and datatime level | https://api.github.com/repos/pandas-dev/pandas/pulls/35716 | 2020-08-14T10:17:29Z | 2020-08-14T11:15:48Z | 2020-08-14T11:15:48Z | 2020-08-14T11:15:48Z |
PERF: RangeIndex.format performance | diff --git a/doc/source/whatsnew/v0.25.0.rst b/doc/source/whatsnew/v0.25.0.rst
index 3cd920158f774..0f0f009307c75 100644
--- a/doc/source/whatsnew/v0.25.0.rst
+++ b/doc/source/whatsnew/v0.25.0.rst
@@ -540,7 +540,7 @@ with :attr:`numpy.nan` in the case of an empty :class:`DataFrame` (:issue:`26397
.. ipython:: python
- df.describe()
+ df.describe()
``__str__`` methods now call ``__repr__`` rather than vice versa
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
diff --git a/doc/source/whatsnew/v1.1.2.rst b/doc/source/whatsnew/v1.1.2.rst
index af61354470a71..7739a483e3d38 100644
--- a/doc/source/whatsnew/v1.1.2.rst
+++ b/doc/source/whatsnew/v1.1.2.rst
@@ -15,8 +15,9 @@ including other versions of pandas.
Fixed regressions
~~~~~~~~~~~~~~~~~
- Regression in :meth:`DatetimeIndex.intersection` incorrectly raising ``AssertionError`` when intersecting against a list (:issue:`35876`)
+- Performance regression for :meth:`RangeIndex.format` (:issue:`35712`)
-
--
+
.. ---------------------------------------------------------------------------
@@ -26,7 +27,7 @@ Bug fixes
~~~~~~~~~
- Bug in :meth:`DataFrame.eval` with ``object`` dtype column binary operations (:issue:`35794`)
- Bug in :meth:`DataFrame.apply` with ``result_type="reduce"`` returning with incorrect index (:issue:`35683`)
--
+- Bug in :meth:`DateTimeIndex.format` and :meth:`PeriodIndex.format` with ``name=True`` setting the first item to ``"None"`` where it should be ``""`` (:issue:`35712`)
-
.. ---------------------------------------------------------------------------
diff --git a/pandas/core/indexes/base.py b/pandas/core/indexes/base.py
index ceb109fdf6d7a..b1e5d5627e3f6 100644
--- a/pandas/core/indexes/base.py
+++ b/pandas/core/indexes/base.py
@@ -933,7 +933,9 @@ def format(
return self._format_with_header(header, na_rep=na_rep)
- def _format_with_header(self, header, na_rep="NaN") -> List[str_t]:
+ def _format_with_header(
+ self, header: List[str_t], na_rep: str_t = "NaN"
+ ) -> List[str_t]:
from pandas.io.formats.format import format_array
values = self._values
diff --git a/pandas/core/indexes/category.py b/pandas/core/indexes/category.py
index 4990e6a8e20e9..cbb30763797d1 100644
--- a/pandas/core/indexes/category.py
+++ b/pandas/core/indexes/category.py
@@ -347,7 +347,7 @@ def _format_attrs(self):
attrs.append(("length", len(self)))
return attrs
- def _format_with_header(self, header, na_rep="NaN") -> List[str]:
+ def _format_with_header(self, header: List[str], na_rep: str = "NaN") -> List[str]:
from pandas.io.formats.printing import pprint_thing
result = [
diff --git a/pandas/core/indexes/datetimelike.py b/pandas/core/indexes/datetimelike.py
index 9d00f50a65a06..0e8d7c1b866b8 100644
--- a/pandas/core/indexes/datetimelike.py
+++ b/pandas/core/indexes/datetimelike.py
@@ -354,15 +354,20 @@ def format(
"""
header = []
if name:
- fmt_name = ibase.pprint_thing(self.name, escape_chars=("\t", "\r", "\n"))
- header.append(fmt_name)
+ header.append(
+ ibase.pprint_thing(self.name, escape_chars=("\t", "\r", "\n"))
+ if self.name is not None
+ else ""
+ )
if formatter is not None:
return header + list(self.map(formatter))
return self._format_with_header(header, na_rep=na_rep, date_format=date_format)
- def _format_with_header(self, header, na_rep="NaT", date_format=None) -> List[str]:
+ def _format_with_header(
+ self, header: List[str], na_rep: str = "NaT", date_format: Optional[str] = None
+ ) -> List[str]:
return header + list(
self._format_native_types(na_rep=na_rep, date_format=date_format)
)
diff --git a/pandas/core/indexes/interval.py b/pandas/core/indexes/interval.py
index e8d0a44324cc5..9281f8017761d 100644
--- a/pandas/core/indexes/interval.py
+++ b/pandas/core/indexes/interval.py
@@ -948,7 +948,7 @@ def take(self, indices, axis=0, allow_fill=True, fill_value=None, **kwargs):
# Rendering Methods
# __repr__ associated methods are based on MultiIndex
- def _format_with_header(self, header, na_rep="NaN") -> List[str]:
+ def _format_with_header(self, header: List[str], na_rep: str = "NaN") -> List[str]:
return header + list(self._format_native_types(na_rep=na_rep))
def _format_native_types(self, na_rep="NaN", quoting=None, **kwargs):
diff --git a/pandas/core/indexes/range.py b/pandas/core/indexes/range.py
index c5572a9de7fa5..b85e2d3947cb1 100644
--- a/pandas/core/indexes/range.py
+++ b/pandas/core/indexes/range.py
@@ -1,7 +1,7 @@
from datetime import timedelta
import operator
from sys import getsizeof
-from typing import Any
+from typing import Any, List
import warnings
import numpy as np
@@ -187,6 +187,15 @@ def _format_data(self, name=None):
# we are formatting thru the attributes
return None
+ def _format_with_header(self, header: List[str], na_rep: str = "NaN") -> List[str]:
+ if not len(self._range):
+ return header
+ first_val_str = str(self._range[0])
+ last_val_str = str(self._range[-1])
+ max_length = max(len(first_val_str), len(last_val_str))
+
+ return header + [f"{x:<{max_length}}" for x in self._range]
+
# --------------------------------------------------------------------
_deprecation_message = (
"RangeIndex.{} is deprecated and will be "
diff --git a/pandas/tests/indexes/common.py b/pandas/tests/indexes/common.py
index e4d0b46f7c716..e95e7267f17ec 100644
--- a/pandas/tests/indexes/common.py
+++ b/pandas/tests/indexes/common.py
@@ -1,5 +1,5 @@
import gc
-from typing import Optional, Type
+from typing import Type
import numpy as np
import pytest
@@ -33,7 +33,7 @@
class Base:
""" base class for index sub-class tests """
- _holder: Optional[Type[Index]] = None
+ _holder: Type[Index]
_compat_props = ["shape", "ndim", "size", "nbytes"]
def create_index(self) -> Index:
@@ -686,6 +686,12 @@ def test_format(self):
expected = [str(x) for x in idx]
assert idx.format() == expected
+ def test_format_empty(self):
+ # GH35712
+ empty_idx = self._holder([])
+ assert empty_idx.format() == []
+ assert empty_idx.format(name=True) == [""]
+
def test_hasnans_isnans(self, index):
# GH 11343, added tests for hasnans / isnans
if isinstance(index, MultiIndex):
diff --git a/pandas/tests/indexes/period/test_period.py b/pandas/tests/indexes/period/test_period.py
index 15a88ab3819ce..085d41aaa5b76 100644
--- a/pandas/tests/indexes/period/test_period.py
+++ b/pandas/tests/indexes/period/test_period.py
@@ -536,6 +536,12 @@ def test_contains_raise_error_if_period_index_is_in_multi_index(self, msg, key):
with pytest.raises(KeyError, match=msg):
df.loc[key]
+ def test_format_empty(self):
+ # GH35712
+ empty_idx = self._holder([], freq="A")
+ assert empty_idx.format() == []
+ assert empty_idx.format(name=True) == [""]
+
def test_maybe_convert_timedelta():
pi = PeriodIndex(["2000", "2001"], freq="D")
diff --git a/pandas/tests/indexes/ranges/test_range.py b/pandas/tests/indexes/ranges/test_range.py
index c4c242746e92c..172cd4a106ac1 100644
--- a/pandas/tests/indexes/ranges/test_range.py
+++ b/pandas/tests/indexes/ranges/test_range.py
@@ -171,8 +171,14 @@ def test_cache(self):
pass
assert idx._cache == {}
+ idx.format()
+ assert idx._cache == {}
+
df = pd.DataFrame({"a": range(10)}, index=idx)
+ str(df)
+ assert idx._cache == {}
+
df.loc[50]
assert idx._cache == {}
@@ -515,3 +521,9 @@ def test_engineless_lookup(self):
idx.get_loc("a")
assert "_engine" not in idx._cache
+
+ def test_format_empty(self):
+ # GH35712
+ empty_idx = self._holder(0)
+ assert empty_idx.format() == []
+ assert empty_idx.format(name=True) == [""]
| #35440 dropped ``RangeIndex._format_with_header``, which was not needed for correctness, but was needed to avoid materializing an internal ndarray.
This PR rectifies that and gives some performance improvements.
```python
>>> idx = pd.RangeIndex(1_000_000)
>>> %timeit idx.format()
4.6 s ± 102 ms per loop # pandas v1.1.0
1.67 s ± 19.6 ms per loop # master
595 ms ± 2.35 ms per loop # this PR
```
Also, now the ``_data`` attribute isn't called, so this PR gives a perf. & memory improvement in some use cases compared to master:
```python
>>> idx = pd.RangeIndex(1_000_000)
>>> idx.format()
>>> "_data" in idx._cache
False # pandas v.1.1.0
True # master
False # this PR
```
| https://api.github.com/repos/pandas-dev/pandas/pulls/35712 | 2020-08-13T18:39:42Z | 2020-08-26T11:12:56Z | 2020-08-26T11:12:55Z | 2020-08-26T13:29:51Z |
CI: Check for file leaks in _all_ tests | diff --git a/pandas/conftest.py b/pandas/conftest.py
index 97cc514e31bb3..8d3c9d4dac400 100644
--- a/pandas/conftest.py
+++ b/pandas/conftest.py
@@ -1181,7 +1181,13 @@ def ip():
pytest.importorskip("IPython", minversion="6.0.0")
from IPython.core.interactiveshell import InteractiveShell
- return InteractiveShell()
+ # GH#35711 make sure sqlite history file handle is not leaked
+ from traitlets.config import Config
+
+ c = Config()
+ c.HistoryManager.hist_file = ":memory:"
+
+ return InteractiveShell(config=c)
@pytest.fixture(params=["bsr", "coo", "csc", "csr", "dia", "dok", "lil"])
diff --git a/pandas/tests/io/conftest.py b/pandas/tests/io/conftest.py
index fcee25c258efa..39b9a1cd4fa28 100644
--- a/pandas/tests/io/conftest.py
+++ b/pandas/tests/io/conftest.py
@@ -2,11 +2,22 @@
import pytest
+import pandas.util._test_decorators as td
+
import pandas._testing as tm
from pandas.io.parsers import read_csv
+@pytest.fixture(autouse=True)
+def check_file_leaks():
+ """
+ Check that a test does not leak file handles.
+ """
+ with td.file_leak_context():
+ yield
+
+
@pytest.fixture
def tips_file(datapath):
"""Path to the tips dataset"""
diff --git a/pandas/tests/io/excel/test_readers.py b/pandas/tests/io/excel/test_readers.py
index 51fbbf836a03f..49415a7e0a126 100644
--- a/pandas/tests/io/excel/test_readers.py
+++ b/pandas/tests/io/excel/test_readers.py
@@ -650,7 +650,6 @@ def test_read_from_pathlib_path(self, read_ext):
tm.assert_frame_equal(expected, actual)
@td.skip_if_no("py.path")
- @td.check_file_leaks
def test_read_from_py_localpath(self, read_ext):
# GH12655
@@ -664,7 +663,6 @@ def test_read_from_py_localpath(self, read_ext):
tm.assert_frame_equal(expected, actual)
- @td.check_file_leaks
def test_close_from_py_localpath(self, read_ext):
# GH31467
| Sits on top of #35693.
I still have two tests failing locally, but I'm not seeing the socket leaks that I occasionally see in the CI logs.
| https://api.github.com/repos/pandas-dev/pandas/pulls/35711 | 2020-08-13T17:34:59Z | 2020-08-21T21:27:03Z | null | 2021-11-20T23:22:42Z |
add web/ directory to isort checks | diff --git a/ci/code_checks.sh b/ci/code_checks.sh
index 816bb23865c04..852f66763683b 100755
--- a/ci/code_checks.sh
+++ b/ci/code_checks.sh
@@ -121,7 +121,7 @@ if [[ -z "$CHECK" || "$CHECK" == "lint" ]]; then
# Imports - Check formatting using isort see setup.cfg for settings
MSG='Check import format using isort' ; echo $MSG
- ISORT_CMD="isort --quiet --check-only pandas asv_bench scripts"
+ ISORT_CMD="isort --quiet --check-only pandas asv_bench scripts web"
if [[ "$GITHUB_ACTIONS" == "true" ]]; then
eval $ISORT_CMD | awk '{print "##[error]" $0}'; RET=$(($RET + ${PIPESTATUS[0]}))
else
| - [ ] closes #xxxx
- [ ] tests added / passed
- [ ] passes `black pandas`
- [ ] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/35709 | 2020-08-13T16:04:03Z | 2020-08-13T17:52:33Z | 2020-08-13T17:52:33Z | 2020-08-13T17:52:37Z |
Reorganize imports to be compliant with isort (and conventional) | diff --git a/web/pandas_web.py b/web/pandas_web.py
index e62deaa8cdc7f..7dd63175e69ac 100755
--- a/web/pandas_web.py
+++ b/web/pandas_web.py
@@ -34,13 +34,12 @@
import time
import typing
+import feedparser
import jinja2
+import markdown
import requests
import yaml
-import feedparser
-import markdown
-
class Preprocessors:
"""
| - [ ] closes #xxxx
- [ ] tests added / passed
- [ ] passes `black pandas`
- [ ] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/35708 | 2020-08-13T14:59:01Z | 2020-08-13T16:21:47Z | 2020-08-13T16:21:47Z | 2020-08-13T16:21:47Z |
REGR: fix DataFrame.diff with read-only data | diff --git a/doc/source/whatsnew/v1.1.1.rst b/doc/source/whatsnew/v1.1.1.rst
index 3f177b29d52b8..85e2a335c55c6 100644
--- a/doc/source/whatsnew/v1.1.1.rst
+++ b/doc/source/whatsnew/v1.1.1.rst
@@ -16,10 +16,11 @@ Fixed regressions
~~~~~~~~~~~~~~~~~
- Fixed regression where :meth:`DataFrame.to_numpy` would raise a ``RuntimeError`` for mixed dtypes when converting to ``str`` (:issue:`35455`)
-- Fixed regression where :func:`read_csv` would raise a ``ValueError`` when ``pandas.options.mode.use_inf_as_na`` was set to ``True`` (:issue:`35493`).
+- Fixed regression where :func:`read_csv` would raise a ``ValueError`` when ``pandas.options.mode.use_inf_as_na`` was set to ``True`` (:issue:`35493`)
- Fixed regression where :func:`pandas.testing.assert_series_equal` would raise an error when non-numeric dtypes were passed with ``check_exact=True`` (:issue:`35446`)
- Fixed regression in :class:`pandas.core.groupby.RollingGroupby` where column selection was ignored (:issue:`35486`)
- Fixed regression in :meth:`DataFrame.shift` with ``axis=1`` and heterogeneous dtypes (:issue:`35488`)
+- Fixed regression in :meth:`DataFrame.diff` with read-only data (:issue:`35559`)
- Fixed regression in ``.groupby(..).rolling(..)`` where a segfault would occur with ``center=True`` and an odd number of values (:issue:`35552`)
- Fixed regression in :meth:`DataFrame.apply` where functions that altered the input in-place only operated on a single row (:issue:`35462`)
- Fixed regression in :meth:`DataFrame.reset_index` would raise a ``ValueError`` on empty :class:`DataFrame` with a :class:`MultiIndex` with a ``datetime64`` dtype level (:issue:`35606`, :issue:`35657`)
diff --git a/pandas/_libs/algos.pyx b/pandas/_libs/algos.pyx
index 7e90a8cc681ef..0a70afda893cf 100644
--- a/pandas/_libs/algos.pyx
+++ b/pandas/_libs/algos.pyx
@@ -1200,14 +1200,15 @@ ctypedef fused out_t:
@cython.boundscheck(False)
@cython.wraparound(False)
def diff_2d(
- diff_t[:, :] arr,
- out_t[:, :] out,
+ ndarray[diff_t, ndim=2] arr, # TODO(cython 3) update to "const diff_t[:, :] arr"
+ ndarray[out_t, ndim=2] out,
Py_ssize_t periods,
int axis,
):
cdef:
Py_ssize_t i, j, sx, sy, start, stop
- bint f_contig = arr.is_f_contig()
+ bint f_contig = arr.flags.f_contiguous
+ # bint f_contig = arr.is_f_contig() # TODO(cython 3)
# Disable for unsupported dtype combinations,
# see https://github.com/cython/cython/issues/2646
diff --git a/pandas/tests/frame/methods/test_diff.py b/pandas/tests/frame/methods/test_diff.py
index 45f134a93a23a..0486fb2d588b6 100644
--- a/pandas/tests/frame/methods/test_diff.py
+++ b/pandas/tests/frame/methods/test_diff.py
@@ -214,3 +214,12 @@ def test_diff_integer_na(self, axis, expected):
# Test case for default behaviour of diff
result = df.diff(axis=axis)
tm.assert_frame_equal(result, expected)
+
+ def test_diff_readonly(self):
+ # https://github.com/pandas-dev/pandas/issues/35559
+ arr = np.random.randn(5, 2)
+ arr.flags.writeable = False
+ df = pd.DataFrame(arr)
+ result = df.diff()
+ expected = pd.DataFrame(np.array(df)).diff()
+ tm.assert_frame_equal(result, expected)
diff --git a/setup.py b/setup.py
index 43d19d525876b..f6f0cd9aabc0e 100755
--- a/setup.py
+++ b/setup.py
@@ -456,6 +456,9 @@ def run(self):
if sys.version_info[:2] == (3, 8): # GH 33239
extra_compile_args.append("-Wno-error=deprecated-declarations")
+ # https://github.com/pandas-dev/pandas/issues/35559
+ extra_compile_args.append("-Wno-error=unreachable-code")
+
# enable coverage by building cython files by setting the environment variable
# "PANDAS_CYTHON_COVERAGE" (with a Truthy value) or by running build_ext
# with `--with-cython-coverage`enabled
| Closes #35559 | https://api.github.com/repos/pandas-dev/pandas/pulls/35707 | 2020-08-13T13:49:48Z | 2020-08-14T14:35:22Z | 2020-08-14T14:35:21Z | 2020-08-14T14:42:58Z |
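For context, the semantics that ``diff_2d`` implements never write to the input array, which is why a read-only buffer should be accepted; a pure-Python 1-D sketch of those semantics (hypothetical helper, positive ``periods`` only, with ``None`` standing in for NaN):

```python
def diff1d(values, periods=1):
    # out[i] = values[i] - values[i - periods]; the first `periods`
    # positions have no predecessor and stay None (NaN in pandas).
    # The input is only ever read, never written.
    out = [None] * len(values)
    for i in range(periods, len(values)):
        out[i] = values[i] - values[i - periods]
    return out
```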
CI: pin isort < 5.4.0 | diff --git a/environment.yml b/environment.yml
index 1e51470d43d36..1a7d8662b0e97 100644
--- a/environment.yml
+++ b/environment.yml
@@ -21,7 +21,7 @@ dependencies:
- flake8<3.8.0 # temporary pin, GH#34150
- flake8-comprehensions>=3.1.0 # used by flake8, linting of unnecessary comprehensions
- flake8-rst>=0.6.0,<=0.7.0 # linting of code blocks in rst files
- - isort>=5.2.1 # check that imports are in the right order
+ - isort>=5.2.1,<5.4.0 # check that imports are in the right order, max pin #35703
- mypy=0.730
- pycodestyle # used by flake8
diff --git a/requirements-dev.txt b/requirements-dev.txt
index 66e72641cd5bb..b2cc049464de2 100644
--- a/requirements-dev.txt
+++ b/requirements-dev.txt
@@ -11,7 +11,7 @@ cpplint
flake8<3.8.0
flake8-comprehensions>=3.1.0
flake8-rst>=0.6.0,<=0.7.0
-isort>=5.2.1
+isort>=5.2.1,<5.4.0
mypy==0.730
pycodestyle
gitpython
| xref #35703 | https://api.github.com/repos/pandas-dev/pandas/pulls/35705 | 2020-08-13T11:27:00Z | 2020-08-14T10:09:05Z | null | 2020-08-14T10:29:10Z |
ENH add na_action to DataFrame.applymap | diff --git a/doc/source/whatsnew/v1.2.0.rst b/doc/source/whatsnew/v1.2.0.rst
index 2afa1f1a6199e..25f845a8bf012 100644
--- a/doc/source/whatsnew/v1.2.0.rst
+++ b/doc/source/whatsnew/v1.2.0.rst
@@ -100,8 +100,8 @@ For example:
Other enhancements
^^^^^^^^^^^^^^^^^^
-
- Added :meth:`~DataFrame.set_flags` for setting table-wide flags on a ``Series`` or ``DataFrame`` (:issue:`28394`)
+- :meth:`DataFrame.applymap` now supports ``na_action`` (:issue:`23803`)
- :class:`Index` with object dtype supports division and multiplication (:issue:`34160`)
- :meth:`DataFrame.explode` and :meth:`Series.explode` now support exploding of sets (:issue:`35614`)
-
diff --git a/pandas/_libs/lib.pyx b/pandas/_libs/lib.pyx
index eadfcefaac73d..7464fafee2b94 100644
--- a/pandas/_libs/lib.pyx
+++ b/pandas/_libs/lib.pyx
@@ -2377,7 +2377,7 @@ def map_infer_mask(ndarray arr, object f, const uint8_t[:] mask, bint convert=Tr
@cython.boundscheck(False)
@cython.wraparound(False)
-def map_infer(ndarray arr, object f, bint convert=True):
+def map_infer(ndarray arr, object f, bint convert=True, bint ignore_na=False):
"""
Substitute for np.vectorize with pandas-friendly dtype inference.
@@ -2385,6 +2385,9 @@ def map_infer(ndarray arr, object f, bint convert=True):
----------
arr : ndarray
f : function
+ convert : bint
+ ignore_na : bint
+ If True, NA values will not have f applied
Returns
-------
@@ -2398,6 +2401,9 @@ def map_infer(ndarray arr, object f, bint convert=True):
n = len(arr)
result = np.empty(n, dtype=object)
for i in range(n):
+ if ignore_na and checknull(arr[i]):
+ result[i] = arr[i]
+ continue
val = f(arr[i])
if cnp.PyArray_IsZeroDim(val):
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index e1a889bf79d95..647857c8bab67 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -7617,7 +7617,7 @@ def apply(self, func, axis=0, raw=False, result_type=None, args=(), **kwds):
)
return op.get_result()
- def applymap(self, func) -> DataFrame:
+ def applymap(self, func, na_action: Optional[str] = None) -> DataFrame:
"""
Apply a function to a Dataframe elementwise.
@@ -7628,6 +7628,10 @@ def applymap(self, func) -> DataFrame:
----------
func : callable
Python function, returns a single value from a single value.
+ na_action : {None, 'ignore'}, default None
+ If ‘ignore’, propagate NaN values, without passing them to func.
+
+ .. versionadded:: 1.2
Returns
-------
@@ -7651,6 +7655,15 @@ def applymap(self, func) -> DataFrame:
0 3 4
1 5 5
+ Like Series.map, NA values can be ignored:
+
+ >>> df_copy = df.copy()
+ >>> df_copy.iloc[0, 0] = pd.NA
+ >>> df_copy.applymap(lambda x: len(str(x)), na_action='ignore')
+ 0 1
+ 0 <NA> 4
+ 1 5 5
+
Note that a vectorized version of `func` often exists, which will
be much faster. You could square each number elementwise.
@@ -7666,11 +7679,17 @@ def applymap(self, func) -> DataFrame:
0 1.000000 4.494400
1 11.262736 20.857489
"""
+ if na_action not in {"ignore", None}:
+ raise ValueError(
+ f"na_action must be 'ignore' or None. Got {repr(na_action)}"
+ )
+ ignore_na = na_action == "ignore"
+
# if we have a dtype == 'M8[ns]', provide boxed values
def infer(x):
if x.empty:
- return lib.map_infer(x, func)
- return lib.map_infer(x.astype(object)._values, func)
+ return lib.map_infer(x, func, ignore_na=ignore_na)
+ return lib.map_infer(x.astype(object)._values, func, ignore_na=ignore_na)
return self.apply(infer)
diff --git a/pandas/tests/frame/apply/test_frame_apply.py b/pandas/tests/frame/apply/test_frame_apply.py
index bc09501583e2c..1662f9e2fff56 100644
--- a/pandas/tests/frame/apply/test_frame_apply.py
+++ b/pandas/tests/frame/apply/test_frame_apply.py
@@ -630,6 +630,22 @@ def test_applymap(self, float_frame):
result = frame.applymap(func)
tm.assert_frame_equal(result, frame)
+ def test_applymap_na_ignore(self, float_frame):
+ # GH 23803
+ strlen_frame = float_frame.applymap(lambda x: len(str(x)))
+ float_frame_with_na = float_frame.copy()
+ mask = np.random.randint(0, 2, size=float_frame.shape, dtype=bool)
+ float_frame_with_na[mask] = pd.NA
+ strlen_frame_na_ignore = float_frame_with_na.applymap(
+ lambda x: len(str(x)), na_action="ignore"
+ )
+ strlen_frame_with_na = strlen_frame.copy()
+ strlen_frame_with_na[mask] = pd.NA
+ tm.assert_frame_equal(strlen_frame_na_ignore, strlen_frame_with_na)
+
+ with pytest.raises(ValueError, match="na_action must be .*Got 'abc'"):
+ float_frame_with_na.applymap(lambda x: len(str(x)), na_action="abc")
+
def test_applymap_box_timestamps(self):
# GH 2689, GH 2627
ser = pd.Series(date_range("1/1/2000", periods=10))
| For symmetry with Series.map
- [x] closes #23803
- [x] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/35704 | 2020-08-13T11:11:53Z | 2020-09-11T17:40:12Z | 2020-09-11T17:40:11Z | 2020-09-11T17:40:18Z |
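A short usage sketch of the `na_action` behaviour this PR adds (it needs pandas >= 1.2; the `df.map` fallback below is only because newer pandas renamed `applymap` to `DataFrame.map`, and is an editorial assumption for version robustness, not part of the PR):

```python
import pandas as pd

df = pd.DataFrame({"a": [pd.NA, "hello"], "b": ["x", "wide"]})

# ``DataFrame.map`` is the newer spelling of ``applymap``; fall back for
# versions that only have ``applymap``.
mapper = df.map if hasattr(df, "map") else df.applymap

# With na_action="ignore", NA cells are propagated untouched instead of
# being handed to the function (len(str(pd.NA)) would otherwise be 4).
lengths = mapper(lambda x: len(str(x)), na_action="ignore")
```

This mirrors the long-standing `Series.map(func, na_action="ignore")` semantics, which is the symmetry the PR body refers to.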
Backport PR #35654 on branch 1.1.x (BUG: GH-35558 merge_asof tolerance error) | diff --git a/doc/source/whatsnew/v1.1.1.rst b/doc/source/whatsnew/v1.1.1.rst
index cdc244ca193b4..b37103910afab 100644
--- a/doc/source/whatsnew/v1.1.1.rst
+++ b/doc/source/whatsnew/v1.1.1.rst
@@ -22,6 +22,7 @@ Fixed regressions
- Fixed regression in :meth:`DataFrame.shift` with ``axis=1`` and heterogeneous dtypes (:issue:`35488`)
- Fixed regression in ``.groupby(..).rolling(..)`` where a segfault would occur with ``center=True`` and an odd number of values (:issue:`35552`)
- Fixed regression in :meth:`DataFrame.apply` where functions that altered the input in-place only operated on a single row (:issue:`35462`)
+- Fixed regression where :meth:`DataFrame.merge_asof` would raise a ``UnboundLocalError`` when ``left_index`` , ``right_index`` and ``tolerance`` were set (:issue:`35558`)
- Fixed regression in ``.groupby(..).rolling(..)`` where a custom ``BaseIndexer`` would be ignored (:issue:`35557`)
.. ---------------------------------------------------------------------------
diff --git a/pandas/core/reshape/merge.py b/pandas/core/reshape/merge.py
index 27b331babe692..2349cb1dcc0c7 100644
--- a/pandas/core/reshape/merge.py
+++ b/pandas/core/reshape/merge.py
@@ -1667,7 +1667,7 @@ def _get_merge_keys(self):
msg = (
f"incompatible tolerance {self.tolerance}, must be compat "
- f"with type {repr(lk.dtype)}"
+ f"with type {repr(lt.dtype)}"
)
if needs_i8_conversion(lt):
diff --git a/pandas/tests/reshape/merge/test_merge_asof.py b/pandas/tests/reshape/merge/test_merge_asof.py
index 9b09f0033715d..895de2b748c34 100644
--- a/pandas/tests/reshape/merge/test_merge_asof.py
+++ b/pandas/tests/reshape/merge/test_merge_asof.py
@@ -1339,3 +1339,25 @@ def test_merge_index_column_tz(self):
index=pd.Index([0, 1, 2, 3, 4]),
)
tm.assert_frame_equal(result, expected)
+
+ def test_left_index_right_index_tolerance(self):
+ # https://github.com/pandas-dev/pandas/issues/35558
+ dr1 = pd.date_range(
+ start="1/1/2020", end="1/20/2020", freq="2D"
+ ) + pd.Timedelta(seconds=0.4)
+ dr2 = pd.date_range(start="1/1/2020", end="2/1/2020")
+
+ df1 = pd.DataFrame({"val1": "foo"}, index=pd.DatetimeIndex(dr1))
+ df2 = pd.DataFrame({"val2": "bar"}, index=pd.DatetimeIndex(dr2))
+
+ expected = pd.DataFrame(
+ {"val1": "foo", "val2": "bar"}, index=pd.DatetimeIndex(dr1)
+ )
+ result = pd.merge_asof(
+ df1,
+ df2,
+ left_index=True,
+ right_index=True,
+ tolerance=pd.Timedelta(seconds=0.5),
+ )
+ tm.assert_frame_equal(result, expected)
| Backport PR #35654: BUG: GH-35558 merge_asof tolerance error | https://api.github.com/repos/pandas-dev/pandas/pulls/35702 | 2020-08-13T10:17:28Z | 2020-08-13T11:04:50Z | 2020-08-13T11:04:50Z | 2020-08-13T11:04:51Z |
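The regression fixed here can be reproduced directly from the backported test: `merge_asof` with `left_index`, `right_index` and a `tolerance` together. On a patched build this merges cleanly instead of raising `UnboundLocalError`:

```python
import pandas as pd

# Left timestamps sit 0.4s after midnight; right is a plain daily index.
left_idx = pd.date_range("2020-01-01", "2020-01-20", freq="2D") + pd.Timedelta(seconds=0.4)
right_idx = pd.date_range("2020-01-01", "2020-02-01")

left = pd.DataFrame({"val1": "foo"}, index=left_idx)
right = pd.DataFrame({"val2": "bar"}, index=right_idx)

# Before the fix, combining left_index/right_index with a tolerance
# raised UnboundLocalError while building the dtype-mismatch message.
merged = pd.merge_asof(
    left, right, left_index=True, right_index=True,
    tolerance=pd.Timedelta(seconds=0.5),
)
```

Every left timestamp is within 0.5s of a right timestamp, so every row finds a match.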
Backport PR #35647 on branch 1.1.x (BUG: Support custom BaseIndexers in groupby.rolling) | diff --git a/doc/source/whatsnew/v1.1.1.rst b/doc/source/whatsnew/v1.1.1.rst
index 415f9e508feb8..cdc244ca193b4 100644
--- a/doc/source/whatsnew/v1.1.1.rst
+++ b/doc/source/whatsnew/v1.1.1.rst
@@ -22,6 +22,7 @@ Fixed regressions
- Fixed regression in :meth:`DataFrame.shift` with ``axis=1`` and heterogeneous dtypes (:issue:`35488`)
- Fixed regression in ``.groupby(..).rolling(..)`` where a segfault would occur with ``center=True`` and an odd number of values (:issue:`35552`)
- Fixed regression in :meth:`DataFrame.apply` where functions that altered the input in-place only operated on a single row (:issue:`35462`)
+- Fixed regression in ``.groupby(..).rolling(..)`` where a custom ``BaseIndexer`` would be ignored (:issue:`35557`)
.. ---------------------------------------------------------------------------
diff --git a/pandas/core/window/indexers.py b/pandas/core/window/indexers.py
index bc36bdca982e8..7cbe34cdebf9f 100644
--- a/pandas/core/window/indexers.py
+++ b/pandas/core/window/indexers.py
@@ -1,6 +1,6 @@
"""Indexer objects for computing start/end window bounds for rolling operations"""
from datetime import timedelta
-from typing import Dict, Optional, Tuple, Type, Union
+from typing import Dict, Optional, Tuple, Type
import numpy as np
@@ -265,7 +265,8 @@ def __init__(
index_array: Optional[np.ndarray],
window_size: int,
groupby_indicies: Dict,
- rolling_indexer: Union[Type[FixedWindowIndexer], Type[VariableWindowIndexer]],
+ rolling_indexer: Type[BaseIndexer],
+ indexer_kwargs: Optional[Dict],
**kwargs,
):
"""
@@ -276,7 +277,10 @@ def __init__(
"""
self.groupby_indicies = groupby_indicies
self.rolling_indexer = rolling_indexer
- super().__init__(index_array, window_size, **kwargs)
+ self.indexer_kwargs = indexer_kwargs or {}
+ super().__init__(
+ index_array, self.indexer_kwargs.pop("window_size", window_size), **kwargs
+ )
@Appender(get_window_bounds_doc)
def get_window_bounds(
@@ -298,7 +302,9 @@ def get_window_bounds(
else:
index_array = self.index_array
indexer = self.rolling_indexer(
- index_array=index_array, window_size=self.window_size,
+ index_array=index_array,
+ window_size=self.window_size,
+ **self.indexer_kwargs,
)
start, end = indexer.get_window_bounds(
len(indicies), min_periods, center, closed
diff --git a/pandas/core/window/rolling.py b/pandas/core/window/rolling.py
index ea03a7f2f8162..d727881f8285a 100644
--- a/pandas/core/window/rolling.py
+++ b/pandas/core/window/rolling.py
@@ -145,7 +145,7 @@ class _Window(PandasObject, ShallowMixin, SelectionMixin):
def __init__(
self,
- obj,
+ obj: FrameOrSeries,
window=None,
min_periods: Optional[int] = None,
center: bool = False,
@@ -2255,10 +2255,16 @@ def _get_window_indexer(self, window: int) -> GroupbyRollingIndexer:
-------
GroupbyRollingIndexer
"""
- rolling_indexer: Union[Type[FixedWindowIndexer], Type[VariableWindowIndexer]]
- if self.is_freq_type:
+ rolling_indexer: Type[BaseIndexer]
+ indexer_kwargs: Optional[Dict] = None
+ index_array = self.obj.index.asi8
+ if isinstance(self.window, BaseIndexer):
+ rolling_indexer = type(self.window)
+ indexer_kwargs = self.window.__dict__
+ # We'll be using the index of each group later
+ indexer_kwargs.pop("index_array", None)
+ elif self.is_freq_type:
rolling_indexer = VariableWindowIndexer
- index_array = self.obj.index.asi8
else:
rolling_indexer = FixedWindowIndexer
index_array = None
@@ -2267,6 +2273,7 @@ def _get_window_indexer(self, window: int) -> GroupbyRollingIndexer:
window_size=window,
groupby_indicies=self._groupby.indices,
rolling_indexer=rolling_indexer,
+ indexer_kwargs=indexer_kwargs,
)
return window_indexer
diff --git a/pandas/tests/window/test_grouper.py b/pandas/tests/window/test_grouper.py
index e1dcac06c39cc..a9590c7e1233a 100644
--- a/pandas/tests/window/test_grouper.py
+++ b/pandas/tests/window/test_grouper.py
@@ -305,6 +305,29 @@ def test_groupby_subselect_rolling(self):
)
tm.assert_series_equal(result, expected)
+ def test_groupby_rolling_custom_indexer(self):
+ # GH 35557
+ class SimpleIndexer(pd.api.indexers.BaseIndexer):
+ def get_window_bounds(
+ self, num_values=0, min_periods=None, center=None, closed=None
+ ):
+ min_periods = self.window_size if min_periods is None else 0
+ end = np.arange(num_values, dtype=np.int64) + 1
+ start = end.copy() - self.window_size
+ start[start < 0] = min_periods
+ return start, end
+
+ df = pd.DataFrame(
+ {"a": [1.0, 2.0, 3.0, 4.0, 5.0] * 3}, index=[0] * 5 + [1] * 5 + [2] * 5
+ )
+ result = (
+ df.groupby(df.index)
+ .rolling(SimpleIndexer(window_size=3), min_periods=1)
+ .sum()
+ )
+ expected = df.groupby(df.index).rolling(window=3, min_periods=1).sum()
+ tm.assert_frame_equal(result, expected)
+
def test_groupby_rolling_subset_with_closed(self):
# GH 35549
df = pd.DataFrame(
| Backport PR #35647: BUG: Support custom BaseIndexers in groupby.rolling | https://api.github.com/repos/pandas-dev/pandas/pulls/35699 | 2020-08-13T06:14:44Z | 2020-08-13T10:14:51Z | 2020-08-13T10:14:51Z | 2020-08-13T10:14:52Z |
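After this fix, a user-defined `BaseIndexer` passed to `.groupby(...).rolling(...)` is applied per group instead of being silently replaced. A hedged sketch (the `step=None` keyword is an assumption added for compatibility with newer pandas, which forwards it to `get_window_bounds`):

```python
import numpy as np
import pandas as pd
from pandas.api.indexers import BaseIndexer

class TrailingIndexer(BaseIndexer):
    """Plain trailing window of ``window_size`` rows (a fixed window)."""

    def get_window_bounds(self, num_values=0, min_periods=None,
                          center=None, closed=None, step=None):
        # Window [end - window_size, end) for each row, clipped at 0.
        end = np.arange(1, num_values + 1, dtype=np.int64)
        start = np.maximum(end - self.window_size, 0)
        return start, end

df = pd.DataFrame({"a": [1.0, 2.0, 3.0, 4.0, 5.0] * 2}, index=[0] * 5 + [1] * 5)

custom = df.groupby(df.index).rolling(TrailingIndexer(window_size=3), min_periods=1).sum()
builtin = df.groupby(df.index).rolling(window=3, min_periods=1).sum()
```

Since `TrailingIndexer` describes the same trailing window as `window=3`, both results should agree exactly.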
CLN: replace PyObject_Str with str #34213 | diff --git a/pandas/_libs/tslibs/parsing.pyx b/pandas/_libs/tslibs/parsing.pyx
index 8429aebbd85b8..90448e952b27e 100644
--- a/pandas/_libs/tslibs/parsing.pyx
+++ b/pandas/_libs/tslibs/parsing.pyx
@@ -10,7 +10,6 @@ import cython
from cython import Py_ssize_t
from cpython.datetime cimport datetime, datetime_new, import_datetime, tzinfo
-from cpython.object cimport PyObject_Str
from cpython.version cimport PY_VERSION_HEX
import_datetime()
@@ -939,7 +938,7 @@ cdef inline object convert_to_unicode(object item, bint keep_trivial_numbers):
return item
if not isinstance(item, str):
- item = PyObject_Str(item)
+ item = str(item)
return item
| Part of #34213
| https://api.github.com/repos/pandas-dev/pandas/pulls/35698 | 2020-08-13T02:31:02Z | 2020-08-13T13:39:05Z | null | 2020-08-17T15:40:06Z |
REGR: Don't ignore compiled patterns in replace | diff --git a/doc/source/whatsnew/v1.1.1.rst b/doc/source/whatsnew/v1.1.1.rst
index 565b4a014bd0c..d93cd6edb983a 100644
--- a/doc/source/whatsnew/v1.1.1.rst
+++ b/doc/source/whatsnew/v1.1.1.rst
@@ -26,6 +26,7 @@ Fixed regressions
- Fixed regression in :meth:`DataFrame.reset_index` would raise a ``ValueError`` on empty :class:`DataFrame` with a :class:`MultiIndex` with a ``datetime64`` dtype level (:issue:`35606`, :issue:`35657`)
- Fixed regression where :meth:`DataFrame.merge_asof` would raise a ``UnboundLocalError`` when ``left_index`` , ``right_index`` and ``tolerance`` were set (:issue:`35558`)
- Fixed regression in ``.groupby(..).rolling(..)`` where a custom ``BaseIndexer`` would be ignored (:issue:`35557`)
+- Fixed regression in :meth:`DataFrame.replace` and :meth:`Series.replace` where compiled regular expressions would be ignored during replacement (:issue:`35680`)
- Fixed regression in :meth:`~pandas.core.groupby.DataFrameGroupBy.agg` where a list of functions would produce the wrong results if at least one of the functions did not aggregate. (:issue:`35490`)
.. ---------------------------------------------------------------------------
diff --git a/pandas/core/internals/managers.py b/pandas/core/internals/managers.py
index 371b721f08b27..5a215c4cd5fa3 100644
--- a/pandas/core/internals/managers.py
+++ b/pandas/core/internals/managers.py
@@ -2,7 +2,17 @@
import itertools
import operator
import re
-from typing import DefaultDict, Dict, List, Optional, Sequence, Tuple, TypeVar, Union
+from typing import (
+ DefaultDict,
+ Dict,
+ List,
+ Optional,
+ Pattern,
+ Sequence,
+ Tuple,
+ TypeVar,
+ Union,
+)
import warnings
import numpy as np
@@ -1907,7 +1917,10 @@ def _merge_blocks(
def _compare_or_regex_search(
- a: ArrayLike, b: Scalar, regex: bool = False, mask: Optional[ArrayLike] = None
+ a: ArrayLike,
+ b: Union[Scalar, Pattern],
+ regex: bool = False,
+ mask: Optional[ArrayLike] = None,
) -> Union[ArrayLike, bool]:
"""
Compare two array_like inputs of the same shape or two scalar values
@@ -1918,7 +1931,7 @@ def _compare_or_regex_search(
Parameters
----------
a : array_like
- b : scalar
+ b : scalar or regex pattern
regex : bool, default False
mask : array_like or None (default)
@@ -1928,7 +1941,7 @@ def _compare_or_regex_search(
"""
def _check_comparison_types(
- result: Union[ArrayLike, bool], a: ArrayLike, b: Scalar,
+ result: Union[ArrayLike, bool], a: ArrayLike, b: Union[Scalar, Pattern],
):
"""
Raises an error if the two arrays (a,b) cannot be compared.
@@ -1949,7 +1962,7 @@ def _check_comparison_types(
else:
op = np.vectorize(
lambda x: bool(re.search(b, x))
- if isinstance(x, str) and isinstance(b, str)
+ if isinstance(x, str) and isinstance(b, (str, Pattern))
else False
)
diff --git a/pandas/tests/frame/methods/test_replace.py b/pandas/tests/frame/methods/test_replace.py
index a3f056dbf9648..8603bff0587b6 100644
--- a/pandas/tests/frame/methods/test_replace.py
+++ b/pandas/tests/frame/methods/test_replace.py
@@ -1573,3 +1573,11 @@ def test_replace_dict_category_type(self, input_category_df, expected_category_d
result = input_df.replace({"a": "z", "obj1": "obj9", "cat1": "catX"})
tm.assert_frame_equal(result, expected)
+
+ def test_replace_with_compiled_regex(self):
+ # https://github.com/pandas-dev/pandas/issues/35680
+ df = pd.DataFrame(["a", "b", "c"])
+ regex = re.compile("^a$")
+ result = df.replace({regex: "z"}, regex=True)
+ expected = pd.DataFrame(["z", "b", "c"])
+ tm.assert_frame_equal(result, expected)
diff --git a/pandas/tests/series/methods/test_replace.py b/pandas/tests/series/methods/test_replace.py
index 11802c59a29da..f78a28c66e946 100644
--- a/pandas/tests/series/methods/test_replace.py
+++ b/pandas/tests/series/methods/test_replace.py
@@ -1,3 +1,5 @@
+import re
+
import numpy as np
import pytest
@@ -415,3 +417,11 @@ def test_replace_extension_other(self):
# https://github.com/pandas-dev/pandas/issues/34530
ser = pd.Series(pd.array([1, 2, 3], dtype="Int64"))
ser.replace("", "") # no exception
+
+ def test_replace_with_compiled_regex(self):
+ # https://github.com/pandas-dev/pandas/issues/35680
+ s = pd.Series(["a", "b", "c"])
+ regex = re.compile("^a$")
+ result = s.replace({regex: "z"}, regex=True)
+ expected = pd.Series(["z", "b", "c"])
+ tm.assert_series_equal(result, expected)
| - [x] closes #35680
- [x] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/35697 | 2020-08-13T02:19:55Z | 2020-08-17T10:59:09Z | 2020-08-17T10:59:09Z | 2020-08-17T13:03:21Z |
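With this fix, a pre-compiled `re.Pattern` behaves the same as a pattern string under `regex=True`; matched substrings are substituted `re.sub`-style:

```python
import re
import pandas as pd

s = pd.Series(["apple", "banana", "apricot"])

# Pre-compiled patterns are honoured like pattern strings when
# regex=True (this was the regression being fixed).
pattern = re.compile(r"^ap")
result = s.replace({pattern: "zz"}, regex=True)
```

Only the matching prefix is replaced, so `"apple"` becomes `"zzple"`.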
REF: implement reset_dropped_locs | diff --git a/pandas/core/groupby/generic.py b/pandas/core/groupby/generic.py
index b806d9856d20f..1f0cdbd07560f 100644
--- a/pandas/core/groupby/generic.py
+++ b/pandas/core/groupby/generic.py
@@ -1111,6 +1111,7 @@ def blk_func(block: "Block") -> List["Block"]:
assert len(locs) == result.shape[1]
for i, loc in enumerate(locs):
agg_block = result.iloc[:, [i]]._mgr.blocks[0]
+ agg_block.mgr_locs = [loc]
new_blocks.append(agg_block)
else:
result = result._mgr.blocks[0].values
@@ -1124,7 +1125,6 @@ def blk_func(block: "Block") -> List["Block"]:
return new_blocks
skipped: List[int] = []
- new_items: List[np.ndarray] = []
for i, block in enumerate(data.blocks):
try:
nbs = blk_func(block)
@@ -1136,33 +1136,13 @@ def blk_func(block: "Block") -> List["Block"]:
deleted_items.append(block.mgr_locs.as_array)
else:
agg_blocks.extend(nbs)
- new_items.append(block.mgr_locs.as_array)
if not agg_blocks:
raise DataError("No numeric types to aggregate")
# reset the locs in the blocks to correspond to our
# current ordering
- indexer = np.concatenate(new_items)
- agg_items = data.items.take(np.sort(indexer))
-
- if deleted_items:
-
- # we need to adjust the indexer to account for the
- # items we have removed
- # really should be done in internals :<
-
- deleted = np.concatenate(deleted_items)
- ai = np.arange(len(data))
- mask = np.zeros(len(data))
- mask[deleted] = 1
- indexer = (ai - mask.cumsum())[indexer]
-
- offset = 0
- for blk in agg_blocks:
- loc = len(blk.mgr_locs)
- blk.mgr_locs = indexer[offset : (offset + loc)]
- offset += loc
+ agg_items = data.reset_dropped_locs(agg_blocks, skipped)
return agg_blocks, agg_items
diff --git a/pandas/core/internals/managers.py b/pandas/core/internals/managers.py
index 5a215c4cd5fa3..f05d4cf1c4be6 100644
--- a/pandas/core/internals/managers.py
+++ b/pandas/core/internals/managers.py
@@ -1504,6 +1504,38 @@ def unstack(self, unstacker, fill_value) -> "BlockManager":
bm = BlockManager(new_blocks, [new_columns, new_index])
return bm
+ def reset_dropped_locs(self, blocks: List[Block], skipped: List[int]) -> Index:
+ """
+ Decrement the mgr_locs of the given blocks with `skipped` removed.
+
+ Notes
+ -----
+ Alters each block's mgr_locs inplace.
+ """
+ ncols = len(self)
+
+ new_locs = [blk.mgr_locs.as_array for blk in blocks]
+ indexer = np.concatenate(new_locs)
+
+ new_items = self.items.take(np.sort(indexer))
+
+ if skipped:
+ # we need to adjust the indexer to account for the
+ # items we have removed
+ deleted_items = [self.blocks[i].mgr_locs.as_array for i in skipped]
+ deleted = np.concatenate(deleted_items)
+ ai = np.arange(ncols)
+ mask = np.zeros(ncols)
+ mask[deleted] = 1
+ indexer = (ai - mask.cumsum())[indexer]
+
+ offset = 0
+ for blk in blocks:
+ loc = len(blk.mgr_locs)
+ blk.mgr_locs = indexer[offset : (offset + loc)]
+ offset += loc
+ return new_items
+
class SingleBlockManager(BlockManager):
""" manage a single block with """
| We get to get rid of the comment `# really should be done in internals :<`
Making the window.rolling usage use this helper method is the next step. | https://api.github.com/repos/pandas-dev/pandas/pulls/35696 | 2020-08-13T01:32:19Z | 2020-08-17T18:28:28Z | 2020-08-17T18:28:28Z | 2020-08-17T19:45:25Z |
CI: avoid file leaks in sas_xport tests | diff --git a/pandas/io/sas/sasreader.py b/pandas/io/sas/sasreader.py
index 291c9d1ee7f0c..fffdebda8c87a 100644
--- a/pandas/io/sas/sasreader.py
+++ b/pandas/io/sas/sasreader.py
@@ -6,7 +6,7 @@
from pandas._typing import FilePathOrBuffer, Label
-from pandas.io.common import stringify_path
+from pandas.io.common import get_filepath_or_buffer, stringify_path
if TYPE_CHECKING:
from pandas import DataFrame # noqa: F401
@@ -109,6 +109,10 @@ def read_sas(
else:
raise ValueError("unable to infer format of SAS file")
+ filepath_or_buffer, _, _, should_close = get_filepath_or_buffer(
+ filepath_or_buffer, encoding
+ )
+
reader: ReaderBase
if format.lower() == "xport":
from pandas.io.sas.sas_xport import XportReader
@@ -129,5 +133,7 @@ def read_sas(
return reader
data = reader.read()
- reader.close()
+
+ if should_close:
+ reader.close()
return data
diff --git a/pandas/tests/io/sas/test_xport.py b/pandas/tests/io/sas/test_xport.py
index 2682bafedb8f1..939edb3d8e0b4 100644
--- a/pandas/tests/io/sas/test_xport.py
+++ b/pandas/tests/io/sas/test_xport.py
@@ -3,6 +3,8 @@
import numpy as np
import pytest
+import pandas.util._test_decorators as td
+
import pandas as pd
import pandas._testing as tm
@@ -26,10 +28,12 @@ def setup_method(self, datapath):
self.dirpath = datapath("io", "sas", "data")
self.file01 = os.path.join(self.dirpath, "DEMO_G.xpt")
self.file02 = os.path.join(self.dirpath, "SSHSV1_A.xpt")
- self.file02b = open(os.path.join(self.dirpath, "SSHSV1_A.xpt"), "rb")
self.file03 = os.path.join(self.dirpath, "DRXFCD_G.xpt")
self.file04 = os.path.join(self.dirpath, "paxraw_d_short.xpt")
+ with td.file_leak_context():
+ yield
+
def test1_basic(self):
# Tests with DEMO_G.xpt (all numeric file)
@@ -127,7 +131,12 @@ def test2_binary(self):
data_csv = pd.read_csv(self.file02.replace(".xpt", ".csv"))
numeric_as_float(data_csv)
- data = read_sas(self.file02b, format="xport")
+ with open(self.file02, "rb") as fd:
+ with td.file_leak_context():
+ # GH#35693 ensure that if we pass an open file, we
+ # dont incorrectly close it in read_sas
+ data = read_sas(fd, format="xport")
+
tm.assert_frame_equal(data, data_csv)
def test_multiple_types(self):
diff --git a/pandas/util/_test_decorators.py b/pandas/util/_test_decorators.py
index bdf633839b2cd..0dad8c7397e37 100644
--- a/pandas/util/_test_decorators.py
+++ b/pandas/util/_test_decorators.py
@@ -23,8 +23,8 @@ def test_foo():
For more information, refer to the ``pytest`` documentation on ``skipif``.
"""
+from contextlib import contextmanager
from distutils.version import LooseVersion
-from functools import wraps
import locale
from typing import Callable, Optional
@@ -237,23 +237,36 @@ def documented_fixture(fixture):
def check_file_leaks(func) -> Callable:
"""
- Decorate a test function tot check that we are not leaking file descriptors.
+ Decorate a test function to check that we are not leaking file descriptors.
"""
- psutil = safe_import("psutil")
- if not psutil:
+ with file_leak_context():
return func
- @wraps(func)
- def new_func(*args, **kwargs):
+
+@contextmanager
+def file_leak_context():
+ """
+ ContextManager analogue to check_file_leaks.
+ """
+ psutil = safe_import("psutil")
+ if not psutil:
+ yield
+ else:
proc = psutil.Process()
flist = proc.open_files()
+ conns = proc.connections()
- func(*args, **kwargs)
+ yield
flist2 = proc.open_files()
- assert flist2 == flist
-
- return new_func
+ # on some builds open_files includes file position, which we _dont_
+ # expect to remain unchanged, so we need to compare excluding that
+ flist_ex = [(x.path, x.fd) for x in flist]
+ flist2_ex = [(x.path, x.fd) for x in flist2]
+ assert flist2_ex == flist_ex, (flist2, flist)
+
+ conns2 = proc.connections()
+ assert conns2 == conns, (conns2, conns)
def async_mark():
| Introduces a contextmanager version of td.check_file_leaks so we can do more targeted versions of those checks for debugging | https://api.github.com/repos/pandas-dev/pandas/pulls/35693 | 2020-08-12T17:15:52Z | 2020-08-13T19:18:48Z | 2020-08-13T19:18:48Z | 2020-08-13T23:09:59Z |
ENH: GH-35611 Tests for top-level Pandas functions serializable | diff --git a/pandas/tests/test_common.py b/pandas/tests/test_common.py
index bcfed2d0d3a10..3d45a1f7389b7 100644
--- a/pandas/tests/test_common.py
+++ b/pandas/tests/test_common.py
@@ -10,6 +10,7 @@
import pandas as pd
from pandas import Series, Timestamp
+import pandas._testing as tm
from pandas.core import ops
import pandas.core.common as com
@@ -157,3 +158,12 @@ def test_version_tag():
raise ValueError(
"No git tags exist, please sync tags between upstream and your repo"
)
+
+
+@pytest.mark.parametrize(
+ "obj", [(obj,) for obj in pd.__dict__.values() if callable(obj)]
+)
+def test_serializable(obj):
+ # GH 35611
+ unpickled = tm.round_trip_pickle(obj)
+ assert type(obj) == type(unpickled)
| - [x] closes #35611
- [x] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
| https://api.github.com/repos/pandas-dev/pandas/pulls/35692 | 2020-08-12T16:52:24Z | 2020-08-19T18:02:56Z | 2020-08-19T18:02:56Z | 2020-08-19T18:03:00Z |
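The idea behind the parametrized test above: top-level functions and classes pickle by qualified name, so a round trip should hand back the same object. Two representatives as a sketch:

```python
import pickle
import pandas as pd

# Functions and classes are pickled by module + qualname, so unpickling
# re-imports them and returns the very same objects.
func_roundtrip = pickle.loads(pickle.dumps(pd.concat))
cls_roundtrip = pickle.loads(pickle.dumps(pd.DataFrame))
```

The test in the PR asserts only type equality; for module-level callables identity holds as well.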
Fix GH-29442 DataFrame.groupby doesn't preserve _metadata | diff --git a/doc/source/whatsnew/v1.1.4.rst b/doc/source/whatsnew/v1.1.4.rst
index e63912ebc8fee..c9dde182ab831 100644
--- a/doc/source/whatsnew/v1.1.4.rst
+++ b/doc/source/whatsnew/v1.1.4.rst
@@ -22,7 +22,7 @@ Fixed regressions
Bug fixes
~~~~~~~~~
--
+- Bug causing ``groupby(...).sum()`` and similar to not preserve metadata (:issue:`29442`)
.. ---------------------------------------------------------------------------
diff --git a/pandas/core/groupby/groupby.py b/pandas/core/groupby/groupby.py
index 8340f964fb44b..4e43ff63b0959 100644
--- a/pandas/core/groupby/groupby.py
+++ b/pandas/core/groupby/groupby.py
@@ -1001,8 +1001,9 @@ def _agg_general(
):
with group_selection_context(self):
# try a cython aggregation if we can
+ result = None
try:
- return self._cython_agg_general(
+ result = self._cython_agg_general(
how=alias,
alt=npfunc,
numeric_only=numeric_only,
@@ -1021,8 +1022,9 @@ def _agg_general(
raise
# apply a non-cython aggregation
- result = self.aggregate(lambda x: npfunc(x, axis=self.axis))
- return result
+ if result is None:
+ result = self.aggregate(lambda x: npfunc(x, axis=self.axis))
+ return result.__finalize__(self.obj, method="groupby")
def _cython_agg_general(
self, how: str, alt=None, numeric_only: bool = True, min_count: int = -1
diff --git a/pandas/tests/generic/test_finalize.py b/pandas/tests/generic/test_finalize.py
index 6692102bc9008..5507e98d974c1 100644
--- a/pandas/tests/generic/test_finalize.py
+++ b/pandas/tests/generic/test_finalize.py
@@ -772,13 +772,27 @@ def test_categorical_accessor(method):
[
operator.methodcaller("sum"),
lambda x: x.agg("sum"),
+ ],
+)
+def test_groupby_finalize(obj, method):
+ obj.attrs = {"a": 1}
+ result = method(obj.groupby([0, 0]))
+ assert result.attrs == {"a": 1}
+
+
+@pytest.mark.parametrize(
+ "obj", [pd.Series([0, 0]), pd.DataFrame({"A": [0, 1], "B": [1, 2]})]
+)
+@pytest.mark.parametrize(
+ "method",
+ [
lambda x: x.agg(["sum", "count"]),
lambda x: x.transform(lambda y: y),
lambda x: x.apply(lambda y: y),
],
)
@not_implemented_mark
-def test_groupby(obj, method):
+def test_groupby_finalize_not_implemented(obj, method):
obj.attrs = {"a": 1}
result = method(obj.groupby([0, 0]))
assert result.attrs == {"a": 1}
| This bug is a regression in v1.1.0 and was introduced by the fix for GH-34214 in commit [6f065b].
Underlying cause is that the `*Splitter` classes do not use the `._constructor` property and do not call `__finalize__`.
The underlying cause is that the `*Splitter` classes do not use the `._constructor` property and do not call `__finalize__`.
Please note that the method name used for `__finalize__` calls was my best guess since documentation for the value has been hard to find.
[6f065b]: https://github.com/pandas-dev/pandas/commit/6f065b6d423ea211d803e8be93c27f547541c372
- [x] closes #29442
- [x] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/35688 | 2020-08-12T13:47:15Z | 2020-10-14T18:37:52Z | 2020-10-14T18:37:51Z | 2020-10-14T19:50:59Z |
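A quick illustration of the behaviour this fix restores, assuming a pandas build that propagates `attrs` through cython-path aggregations:

```python
import pandas as pd

df = pd.DataFrame({"key": [0, 0, 1], "val": [1.0, 2.0, 3.0]})
df.attrs = {"source": "sensor-7"}

# After the fix, the aggregated result carries the frame's metadata
# via __finalize__ instead of dropping it.
summed = df.groupby("key").sum()
```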
BUG: to_pickle/read_pickle do not close user-provided file objects | diff --git a/doc/source/whatsnew/v1.2.0.rst b/doc/source/whatsnew/v1.2.0.rst
index 86f47a5826214..deb5697053ea8 100644
--- a/doc/source/whatsnew/v1.2.0.rst
+++ b/doc/source/whatsnew/v1.2.0.rst
@@ -234,7 +234,7 @@ I/O
- Bug in :meth:`to_csv` caused a ``ValueError`` when it was called with a filename in combination with ``mode`` containing a ``b`` (:issue:`35058`)
- In :meth:`read_csv` `float_precision='round_trip'` now handles `decimal` and `thousands` parameters (:issue:`35365`)
--
+- :meth:`to_pickle` and :meth:`read_pickle` were closing user-provided file objects (:issue:`35679`)
Plotting
^^^^^^^^
diff --git a/pandas/io/pickle.py b/pandas/io/pickle.py
index 549d55e65546d..eee6ec7c9feca 100644
--- a/pandas/io/pickle.py
+++ b/pandas/io/pickle.py
@@ -100,7 +100,9 @@ def to_pickle(
try:
f.write(pickle.dumps(obj, protocol=protocol))
finally:
- f.close()
+ if f != filepath_or_buffer:
+ # do not close user-provided file objects GH 35679
+ f.close()
for _f in fh:
_f.close()
if should_close:
@@ -215,7 +217,9 @@ def read_pickle(
# e.g. can occur for files written in py27; see GH#28645 and GH#31988
return pc.load(f, encoding="latin-1")
finally:
- f.close()
+ if f != filepath_or_buffer:
+ # do not close user-provided file objects GH 35679
+ f.close()
for _f in fh:
_f.close()
if should_close:
diff --git a/pandas/tests/io/test_pickle.py b/pandas/tests/io/test_pickle.py
index e4d43db7834e3..6331113ab8945 100644
--- a/pandas/tests/io/test_pickle.py
+++ b/pandas/tests/io/test_pickle.py
@@ -183,6 +183,15 @@ def python_unpickler(path):
result = python_unpickler(path)
compare_element(result, expected, typ)
+ # and the same for file objects (GH 35679)
+ with open(path, mode="wb") as handle:
+ writer(expected, path)
+ handle.seek(0) # shouldn't close file handle
+ with open(path, mode="rb") as handle:
+ result = pd.read_pickle(handle)
+ handle.seek(0) # shouldn't close file handle
+ compare_element(result, expected, typ)
+
def test_pickle_path_pathlib():
df = tm.makeDataFrame()
| - [x] closes #35679
- [x] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] whatsnew entry
Do not close user-provided file-objects in `to_pickle` and `read_pickle`. | https://api.github.com/repos/pandas-dev/pandas/pulls/35686 | 2020-08-12T12:30:50Z | 2020-08-12T22:23:26Z | 2020-08-12T22:23:26Z | 2020-08-12T22:30:28Z |
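The pattern this PR applies — close only handles the library itself opened, never a caller-supplied buffer — can be sketched independently of the pandas internals. The helper name below is illustrative, not the actual pandas API; the real code compares `f != filepath_or_buffer` inside `to_pickle`/`read_pickle`:

```python
import io
import pickle

def dump_obj(obj, filepath_or_buffer):
    """Pickle obj to a path or an already-open binary buffer.

    Close the handle only if we opened it ourselves; a
    caller-supplied buffer is left open (mirrors GH 35679).
    """
    if isinstance(filepath_or_buffer, str):
        f = open(filepath_or_buffer, "wb")
    else:
        f = filepath_or_buffer
    try:
        f.write(pickle.dumps(obj))
    finally:
        if f is not filepath_or_buffer:
            # we opened this handle, so we close it
            f.close()

buf = io.BytesIO()
dump_obj({"a": 1}, buf)
assert not buf.closed          # user-provided buffer stays open
buf.seek(0)
assert pickle.load(buf) == {"a": 1}
```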
BUG/ENH: compression for google cloud storage in to_csv | diff --git a/doc/source/whatsnew/v1.2.0.rst b/doc/source/whatsnew/v1.2.0.rst
index 55570341cf4e8..dae2f98bc0b76 100644
--- a/doc/source/whatsnew/v1.2.0.rst
+++ b/doc/source/whatsnew/v1.2.0.rst
@@ -240,6 +240,8 @@ I/O
- In :meth:`read_csv` `float_precision='round_trip'` now handles `decimal` and `thousands` parameters (:issue:`35365`)
- :meth:`to_pickle` and :meth:`read_pickle` were closing user-provided file objects (:issue:`35679`)
- :meth:`to_csv` passes compression arguments for `'gzip'` always to `gzip.GzipFile` (:issue:`28103`)
+- :meth:`to_csv` did not support zip compression for binary file objects without a filename (:issue:`35058`)
+- :meth:`to_csv` and :meth:`read_csv` did not honor `compression` and `encoding` for path-like objects that are internally converted to file-like objects (:issue:`35677`, :issue:`26124`, and :issue:`32392`)
Plotting
^^^^^^^^
diff --git a/pandas/_typing.py b/pandas/_typing.py
index 1b972030ef5a5..f8af92e07c674 100644
--- a/pandas/_typing.py
+++ b/pandas/_typing.py
@@ -1,4 +1,6 @@
+from dataclasses import dataclass
from datetime import datetime, timedelta, tzinfo
+from io import IOBase
from pathlib import Path
from typing import (
IO,
@@ -8,6 +10,7 @@
Callable,
Collection,
Dict,
+ Generic,
Hashable,
List,
Mapping,
@@ -62,7 +65,8 @@
"ExtensionDtype", str, np.dtype, Type[Union[str, float, int, complex, bool]]
]
DtypeObj = Union[np.dtype, "ExtensionDtype"]
-FilePathOrBuffer = Union[str, Path, IO[AnyStr]]
+FilePathOrBuffer = Union[str, Path, IO[AnyStr], IOBase]
+FileOrBuffer = Union[str, IO[AnyStr], IOBase]
# FrameOrSeriesUnion means either a DataFrame or a Series. E.g.
# `def func(a: FrameOrSeriesUnion) -> FrameOrSeriesUnion: ...` means that if a Series
@@ -114,3 +118,26 @@
# compression keywords and compression
CompressionDict = Mapping[str, Optional[Union[str, int, bool]]]
CompressionOptions = Optional[Union[str, CompressionDict]]
+
+
+# let's bind types
+ModeVar = TypeVar("ModeVar", str, None, Optional[str])
+EncodingVar = TypeVar("EncodingVar", str, None, Optional[str])
+
+
+@dataclass
+class IOargs(Generic[ModeVar, EncodingVar]):
+ """
+ Return value of io/common.py:get_filepath_or_buffer.
+
+ Note (copy&past from io/parsers):
+ filepath_or_buffer can be Union[FilePathOrBuffer, s3fs.S3File, gcsfs.GCSFile]
+ though mypy handling of conditional imports is difficult.
+ See https://github.com/python/mypy/issues/1297
+ """
+
+ filepath_or_buffer: FileOrBuffer
+ encoding: EncodingVar
+ compression: CompressionOptions
+ should_close: bool
+ mode: Union[ModeVar, str]
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index 312d449e36022..eaa27d3f2a857 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -2281,14 +2281,11 @@ def to_markdown(
result = tabulate.tabulate(self, **kwargs)
if buf is None:
return result
- buf, _, _, should_close = get_filepath_or_buffer(
- buf, mode=mode, storage_options=storage_options
- )
- assert buf is not None # Help mypy.
- assert not isinstance(buf, str)
- buf.writelines(result)
- if should_close:
- buf.close()
+ ioargs = get_filepath_or_buffer(buf, mode=mode, storage_options=storage_options)
+ assert not isinstance(ioargs.filepath_or_buffer, str)
+ ioargs.filepath_or_buffer.writelines(result)
+ if ioargs.should_close:
+ ioargs.filepath_or_buffer.close()
return None
@deprecate_kwarg(old_arg_name="fname", new_arg_name="path")
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index 3bad2d6dd18b9..94eef26e57592 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -2,6 +2,7 @@
from datetime import timedelta
import functools
import gc
+from io import StringIO
import json
import operator
import pickle
@@ -3249,6 +3250,7 @@ def to_csv(
formatter.save()
if path_or_buf is None:
+ assert isinstance(formatter.path_or_buf, StringIO)
return formatter.path_or_buf.getvalue()
return None
diff --git a/pandas/io/common.py b/pandas/io/common.py
index d1305c9cabe0e..97dbc7f1031a2 100644
--- a/pandas/io/common.py
+++ b/pandas/io/common.py
@@ -27,12 +27,17 @@
uses_params,
uses_relative,
)
+import warnings
import zipfile
from pandas._typing import (
CompressionDict,
CompressionOptions,
+ EncodingVar,
+ FileOrBuffer,
FilePathOrBuffer,
+ IOargs,
+ ModeVar,
StorageOptions,
)
from pandas.compat import _get_lzma_file, _import_lzma
@@ -69,9 +74,7 @@ def is_url(url) -> bool:
return parse_url(url).scheme in _VALID_URLS
-def _expand_user(
- filepath_or_buffer: FilePathOrBuffer[AnyStr],
-) -> FilePathOrBuffer[AnyStr]:
+def _expand_user(filepath_or_buffer: FileOrBuffer[AnyStr]) -> FileOrBuffer[AnyStr]:
"""
Return the argument with an initial component of ~ or ~user
replaced by that user's home directory.
@@ -101,7 +104,7 @@ def validate_header_arg(header) -> None:
def stringify_path(
filepath_or_buffer: FilePathOrBuffer[AnyStr],
-) -> FilePathOrBuffer[AnyStr]:
+) -> FileOrBuffer[AnyStr]:
"""
Attempt to convert a path-like object to a string.
@@ -134,9 +137,9 @@ def stringify_path(
# "__fspath__" [union-attr]
# error: Item "IO[bytes]" of "Union[str, Path, IO[bytes]]" has no
# attribute "__fspath__" [union-attr]
- return filepath_or_buffer.__fspath__() # type: ignore[union-attr]
+ filepath_or_buffer = filepath_or_buffer.__fspath__() # type: ignore[union-attr]
elif isinstance(filepath_or_buffer, pathlib.Path):
- return str(filepath_or_buffer)
+ filepath_or_buffer = str(filepath_or_buffer)
return _expand_user(filepath_or_buffer)
@@ -162,13 +165,13 @@ def is_fsspec_url(url: FilePathOrBuffer) -> bool:
)
-def get_filepath_or_buffer(
+def get_filepath_or_buffer( # type: ignore[assignment]
filepath_or_buffer: FilePathOrBuffer,
- encoding: Optional[str] = None,
+ encoding: EncodingVar = None,
compression: CompressionOptions = None,
- mode: Optional[str] = None,
+ mode: ModeVar = None,
storage_options: StorageOptions = None,
-):
+) -> IOargs[ModeVar, EncodingVar]:
"""
If the filepath_or_buffer is a url, translate and return the buffer.
Otherwise passthrough.
@@ -191,14 +194,35 @@ def get_filepath_or_buffer(
.. versionadded:: 1.2.0
- Returns
- -------
- Tuple[FilePathOrBuffer, str, CompressionOptions, bool]
- Tuple containing the filepath or buffer, the encoding, the compression
- and should_close.
+ .. versionchanged:: 1.2.0
+
+ Returns the dataclass IOargs.
"""
filepath_or_buffer = stringify_path(filepath_or_buffer)
+ # bz2 and xz do not write the byte order mark for utf-16 and utf-32
+ # print a warning when writing such files
+ compression_method = infer_compression(
+ filepath_or_buffer, get_compression_method(compression)[0]
+ )
+ if (
+ mode
+ and "w" in mode
+ and compression_method in ["bz2", "xz"]
+ and encoding in ["utf-16", "utf-32"]
+ ):
+ warnings.warn(
+ f"{compression} will not write the byte order mark for {encoding}",
+ UnicodeWarning,
+ )
+
+ # Use binary mode when converting path-like objects to file-like objects (fsspec)
+ # except when text mode is explicitly requested. The original mode is returned if
+ # fsspec is not used.
+ fsspec_mode = mode or "rb"
+ if "t" not in fsspec_mode and "b" not in fsspec_mode:
+ fsspec_mode += "b"
+
if isinstance(filepath_or_buffer, str) and is_url(filepath_or_buffer):
# TODO: fsspec can also handle HTTP via requests, but leaving this unchanged
if storage_options:
@@ -212,7 +236,13 @@ def get_filepath_or_buffer(
compression = "gzip"
reader = BytesIO(req.read())
req.close()
- return reader, encoding, compression, True
+ return IOargs(
+ filepath_or_buffer=reader,
+ encoding=encoding,
+ compression=compression,
+ should_close=True,
+ mode=fsspec_mode,
+ )
if is_fsspec_url(filepath_or_buffer):
assert isinstance(
@@ -244,7 +274,7 @@ def get_filepath_or_buffer(
try:
file_obj = fsspec.open(
- filepath_or_buffer, mode=mode or "rb", **(storage_options or {})
+ filepath_or_buffer, mode=fsspec_mode, **(storage_options or {})
).open()
# GH 34626 Reads from Public Buckets without Credentials needs anon=True
except tuple(err_types_to_retry_with_anon):
@@ -255,23 +285,41 @@ def get_filepath_or_buffer(
storage_options = dict(storage_options)
storage_options["anon"] = True
file_obj = fsspec.open(
- filepath_or_buffer, mode=mode or "rb", **(storage_options or {})
+ filepath_or_buffer, mode=fsspec_mode, **(storage_options or {})
).open()
- return file_obj, encoding, compression, True
+ return IOargs(
+ filepath_or_buffer=file_obj,
+ encoding=encoding,
+ compression=compression,
+ should_close=True,
+ mode=fsspec_mode,
+ )
elif storage_options:
raise ValueError(
"storage_options passed with file object or non-fsspec file path"
)
if isinstance(filepath_or_buffer, (str, bytes, mmap.mmap)):
- return _expand_user(filepath_or_buffer), None, compression, False
+ return IOargs(
+ filepath_or_buffer=_expand_user(filepath_or_buffer),
+ encoding=encoding,
+ compression=compression,
+ should_close=False,
+ mode=mode,
+ )
if not is_file_like(filepath_or_buffer):
msg = f"Invalid file path or buffer object type: {type(filepath_or_buffer)}"
raise ValueError(msg)
- return filepath_or_buffer, None, compression, False
+ return IOargs(
+ filepath_or_buffer=filepath_or_buffer,
+ encoding=encoding,
+ compression=compression,
+ should_close=False,
+ mode=mode,
+ )
def file_path_to_url(path: str) -> str:
@@ -452,6 +500,15 @@ def get_handle(
need_text_wrapping = (BufferedIOBase, RawIOBase, S3File)
except ImportError:
need_text_wrapping = (BufferedIOBase, RawIOBase)
+ # fsspec is an optional dependency. If it is available, add its file-object
+ # class to the list of classes that need text wrapping. If fsspec is too old and is
+ # needed, get_filepath_or_buffer would already have thrown an exception.
+ try:
+ from fsspec.spec import AbstractFileSystem
+
+ need_text_wrapping = (*need_text_wrapping, AbstractFileSystem)
+ except ImportError:
+ pass
handles: List[Union[IO, _MMapWrapper]] = list()
f = path_or_buf
@@ -583,12 +640,15 @@ def __init__(
self.archive_name = archive_name
kwargs_zip: Dict[str, Any] = {"compression": zipfile.ZIP_DEFLATED}
kwargs_zip.update(kwargs)
- super().__init__(file, mode, **kwargs_zip)
+ super().__init__(file, mode, **kwargs_zip) # type: ignore[arg-type]
def write(self, data):
archive_name = self.filename
if self.archive_name is not None:
archive_name = self.archive_name
+ if archive_name is None:
+ # ZipFile needs a non-empty string
+ archive_name = "zip"
super().writestr(archive_name, data)
@property
diff --git a/pandas/io/excel/_base.py b/pandas/io/excel/_base.py
index ead36c95556b1..9bc1d7fedcb31 100644
--- a/pandas/io/excel/_base.py
+++ b/pandas/io/excel/_base.py
@@ -352,9 +352,9 @@ def __init__(self, filepath_or_buffer, storage_options: StorageOptions = None):
if is_url(filepath_or_buffer):
filepath_or_buffer = BytesIO(urlopen(filepath_or_buffer).read())
elif not isinstance(filepath_or_buffer, (ExcelFile, self._workbook_class)):
- filepath_or_buffer, _, _, _ = get_filepath_or_buffer(
+ filepath_or_buffer = get_filepath_or_buffer(
filepath_or_buffer, storage_options=storage_options
- )
+ ).filepath_or_buffer
if isinstance(filepath_or_buffer, self._workbook_class):
self.book = filepath_or_buffer
diff --git a/pandas/io/feather_format.py b/pandas/io/feather_format.py
index fb606b5ec8aef..a98eebe1c6a2a 100644
--- a/pandas/io/feather_format.py
+++ b/pandas/io/feather_format.py
@@ -34,9 +34,7 @@ def to_feather(df: DataFrame, path, storage_options: StorageOptions = None, **kw
import_optional_dependency("pyarrow")
from pyarrow import feather
- path, _, _, should_close = get_filepath_or_buffer(
- path, mode="wb", storage_options=storage_options
- )
+ ioargs = get_filepath_or_buffer(path, mode="wb", storage_options=storage_options)
if not isinstance(df, DataFrame):
raise ValueError("feather only support IO with DataFrames")
@@ -74,7 +72,11 @@ def to_feather(df: DataFrame, path, storage_options: StorageOptions = None, **kw
if df.columns.inferred_type not in valid_types:
raise ValueError("feather must have string column names")
- feather.write_feather(df, path, **kwargs)
+ feather.write_feather(df, ioargs.filepath_or_buffer, **kwargs)
+
+ if ioargs.should_close:
+ assert not isinstance(ioargs.filepath_or_buffer, str)
+ ioargs.filepath_or_buffer.close()
def read_feather(
@@ -122,14 +124,15 @@ def read_feather(
import_optional_dependency("pyarrow")
from pyarrow import feather
- path, _, _, should_close = get_filepath_or_buffer(
- path, storage_options=storage_options
- )
+ ioargs = get_filepath_or_buffer(path, storage_options=storage_options)
- df = feather.read_feather(path, columns=columns, use_threads=bool(use_threads))
+ df = feather.read_feather(
+ ioargs.filepath_or_buffer, columns=columns, use_threads=bool(use_threads)
+ )
# s3fs only validates the credentials when the file is closed.
- if should_close:
- path.close()
+ if ioargs.should_close:
+ assert not isinstance(ioargs.filepath_or_buffer, str)
+ ioargs.filepath_or_buffer.close()
return df
diff --git a/pandas/io/formats/csvs.py b/pandas/io/formats/csvs.py
index c462a96da7133..270caec022fef 100644
--- a/pandas/io/formats/csvs.py
+++ b/pandas/io/formats/csvs.py
@@ -62,14 +62,19 @@ def __init__(
# Extract compression mode as given, if dict
compression, self.compression_args = get_compression_method(compression)
+ self.compression = infer_compression(path_or_buf, compression)
- self.path_or_buf, _, _, self.should_close = get_filepath_or_buffer(
+ ioargs = get_filepath_or_buffer(
path_or_buf,
encoding=encoding,
- compression=compression,
+ compression=self.compression,
mode=mode,
storage_options=storage_options,
)
+ self.path_or_buf = ioargs.filepath_or_buffer
+ self.should_close = ioargs.should_close
+ self.mode = ioargs.mode
+
self.sep = sep
self.na_rep = na_rep
self.float_format = float_format
@@ -78,12 +83,10 @@ def __init__(
self.header = header
self.index = index
self.index_label = index_label
- self.mode = mode
if encoding is None:
encoding = "utf-8"
self.encoding = encoding
self.errors = errors
- self.compression = infer_compression(self.path_or_buf, compression)
if quoting is None:
quoting = csvlib.QUOTE_MINIMAL
diff --git a/pandas/io/json/_json.py b/pandas/io/json/_json.py
index fe5e172655ae1..7a3b76ff7e3d0 100644
--- a/pandas/io/json/_json.py
+++ b/pandas/io/json/_json.py
@@ -58,12 +58,14 @@ def to_json(
)
if path_or_buf is not None:
- path_or_buf, _, _, should_close = get_filepath_or_buffer(
+ ioargs = get_filepath_or_buffer(
path_or_buf,
compression=compression,
mode="wt",
storage_options=storage_options,
)
+ path_or_buf = ioargs.filepath_or_buffer
+ should_close = ioargs.should_close
if lines and orient != "records":
raise ValueError("'lines' keyword only valid when 'orient' is records")
@@ -102,6 +104,8 @@ def to_json(
fh.write(s)
finally:
fh.close()
+ for handle in handles:
+ handle.close()
elif path_or_buf is None:
return s
else:
@@ -615,7 +619,7 @@ def read_json(
compression_method, compression = get_compression_method(compression)
compression_method = infer_compression(path_or_buf, compression_method)
compression = dict(compression, method=compression_method)
- filepath_or_buffer, _, compression, should_close = get_filepath_or_buffer(
+ ioargs = get_filepath_or_buffer(
path_or_buf,
encoding=encoding,
compression=compression,
@@ -623,7 +627,7 @@ def read_json(
)
json_reader = JsonReader(
- filepath_or_buffer,
+ ioargs.filepath_or_buffer,
orient=orient,
typ=typ,
dtype=dtype,
@@ -633,10 +637,10 @@ def read_json(
numpy=numpy,
precise_float=precise_float,
date_unit=date_unit,
- encoding=encoding,
+ encoding=ioargs.encoding,
lines=lines,
chunksize=chunksize,
- compression=compression,
+ compression=ioargs.compression,
nrows=nrows,
)
@@ -644,8 +648,9 @@ def read_json(
return json_reader
result = json_reader.read()
- if should_close:
- filepath_or_buffer.close()
+ if ioargs.should_close:
+ assert not isinstance(ioargs.filepath_or_buffer, str)
+ ioargs.filepath_or_buffer.close()
return result
diff --git a/pandas/io/orc.py b/pandas/io/orc.py
index b556732e4d116..f1b1aa6a43cb5 100644
--- a/pandas/io/orc.py
+++ b/pandas/io/orc.py
@@ -50,7 +50,7 @@ def read_orc(
import pyarrow.orc
- path, _, _, _ = get_filepath_or_buffer(path)
- orc_file = pyarrow.orc.ORCFile(path)
+ ioargs = get_filepath_or_buffer(path)
+ orc_file = pyarrow.orc.ORCFile(ioargs.filepath_or_buffer)
result = orc_file.read(columns=columns, **kwargs).to_pandas()
return result
diff --git a/pandas/io/parquet.py b/pandas/io/parquet.py
index 7f0eef039a1e8..e5d6ac006e251 100644
--- a/pandas/io/parquet.py
+++ b/pandas/io/parquet.py
@@ -9,7 +9,7 @@
from pandas import DataFrame, get_option
-from pandas.io.common import _expand_user, get_filepath_or_buffer, is_fsspec_url
+from pandas.io.common import get_filepath_or_buffer, is_fsspec_url, stringify_path
def get_engine(engine: str) -> "BaseImpl":
@@ -113,7 +113,7 @@ def write(
raise ValueError(
"storage_options passed with file object or non-fsspec file path"
)
- path = _expand_user(path)
+ path = stringify_path(path)
if partition_cols is not None:
# writes to multiple files under the given path
self.api.parquet.write_to_dataset(
@@ -143,10 +143,12 @@ def read(
)
fs = kwargs.pop("filesystem", None)
should_close = False
- path = _expand_user(path)
+ path = stringify_path(path)
if not fs:
- path, _, _, should_close = get_filepath_or_buffer(path)
+ ioargs = get_filepath_or_buffer(path)
+ path = ioargs.filepath_or_buffer
+ should_close = ioargs.should_close
kwargs["use_pandas_metadata"] = True
result = self.api.parquet.read_table(
@@ -205,7 +207,7 @@ def write(
raise ValueError(
"storage_options passed with file object or non-fsspec file path"
)
- path, _, _, _ = get_filepath_or_buffer(path)
+ path = get_filepath_or_buffer(path).filepath_or_buffer
with catch_warnings(record=True):
self.api.write(
@@ -228,7 +230,7 @@ def read(
).open()
parquet_file = self.api.ParquetFile(path, open_with=open_with)
else:
- path, _, _, _ = get_filepath_or_buffer(path)
+ path = get_filepath_or_buffer(path).filepath_or_buffer
parquet_file = self.api.ParquetFile(path)
return parquet_file.to_pandas(columns=columns, **kwargs)
diff --git a/pandas/io/parsers.py b/pandas/io/parsers.py
index 983aa56324083..a917bff9d7ca7 100644
--- a/pandas/io/parsers.py
+++ b/pandas/io/parsers.py
@@ -432,10 +432,10 @@ def _read(filepath_or_buffer: FilePathOrBuffer, kwds):
# Union[FilePathOrBuffer, s3fs.S3File, gcsfs.GCSFile]
# though mypy handling of conditional imports is difficult.
# See https://github.com/python/mypy/issues/1297
- fp_or_buf, _, compression, should_close = get_filepath_or_buffer(
+ ioargs = get_filepath_or_buffer(
filepath_or_buffer, encoding, compression, storage_options=storage_options
)
- kwds["compression"] = compression
+ kwds["compression"] = ioargs.compression
if kwds.get("date_parser", None) is not None:
if isinstance(kwds["parse_dates"], bool):
@@ -450,7 +450,7 @@ def _read(filepath_or_buffer: FilePathOrBuffer, kwds):
_validate_names(kwds.get("names", None))
# Create the parser.
- parser = TextFileReader(fp_or_buf, **kwds)
+ parser = TextFileReader(ioargs.filepath_or_buffer, **kwds)
if chunksize or iterator:
return parser
@@ -460,9 +460,10 @@ def _read(filepath_or_buffer: FilePathOrBuffer, kwds):
finally:
parser.close()
- if should_close:
+ if ioargs.should_close:
+ assert not isinstance(ioargs.filepath_or_buffer, str)
try:
- fp_or_buf.close()
+ ioargs.filepath_or_buffer.close()
except ValueError:
pass
diff --git a/pandas/io/pickle.py b/pandas/io/pickle.py
index fc1d2e385cf72..857a2d1b69be4 100644
--- a/pandas/io/pickle.py
+++ b/pandas/io/pickle.py
@@ -86,15 +86,18 @@ def to_pickle(
>>> import os
>>> os.remove("./dummy.pkl")
"""
- fp_or_buf, _, compression, should_close = get_filepath_or_buffer(
+ ioargs = get_filepath_or_buffer(
filepath_or_buffer,
compression=compression,
mode="wb",
storage_options=storage_options,
)
- if not isinstance(fp_or_buf, str) and compression == "infer":
+ compression = ioargs.compression
+ if not isinstance(ioargs.filepath_or_buffer, str) and compression == "infer":
compression = None
- f, fh = get_handle(fp_or_buf, "wb", compression=compression, is_text=False)
+ f, fh = get_handle(
+ ioargs.filepath_or_buffer, "wb", compression=compression, is_text=False
+ )
if protocol < 0:
protocol = pickle.HIGHEST_PROTOCOL
try:
@@ -105,9 +108,10 @@ def to_pickle(
f.close()
for _f in fh:
_f.close()
- if should_close:
+ if ioargs.should_close:
+ assert not isinstance(ioargs.filepath_or_buffer, str)
try:
- fp_or_buf.close()
+ ioargs.filepath_or_buffer.close()
except ValueError:
pass
@@ -189,12 +193,15 @@ def read_pickle(
>>> import os
>>> os.remove("./dummy.pkl")
"""
- fp_or_buf, _, compression, should_close = get_filepath_or_buffer(
+ ioargs = get_filepath_or_buffer(
filepath_or_buffer, compression=compression, storage_options=storage_options
)
- if not isinstance(fp_or_buf, str) and compression == "infer":
+ compression = ioargs.compression
+ if not isinstance(ioargs.filepath_or_buffer, str) and compression == "infer":
compression = None
- f, fh = get_handle(fp_or_buf, "rb", compression=compression, is_text=False)
+ f, fh = get_handle(
+ ioargs.filepath_or_buffer, "rb", compression=compression, is_text=False
+ )
# 1) try standard library Pickle
# 2) try pickle_compat (older pandas version) to handle subclass changes
@@ -222,8 +229,9 @@ def read_pickle(
f.close()
for _f in fh:
_f.close()
- if should_close:
+ if ioargs.should_close:
+ assert not isinstance(ioargs.filepath_or_buffer, str)
try:
- fp_or_buf.close()
+ ioargs.filepath_or_buffer.close()
except ValueError:
pass
diff --git a/pandas/io/sas/sas7bdat.py b/pandas/io/sas/sas7bdat.py
index 3d9be7c15726b..76dac39d1889f 100644
--- a/pandas/io/sas/sas7bdat.py
+++ b/pandas/io/sas/sas7bdat.py
@@ -137,7 +137,7 @@ def __init__(
self._current_row_on_page_index = 0
self._current_row_in_file_index = 0
- self._path_or_buf, _, _, _ = get_filepath_or_buffer(path_or_buf)
+ self._path_or_buf = get_filepath_or_buffer(path_or_buf).filepath_or_buffer
if isinstance(self._path_or_buf, str):
self._path_or_buf = open(self._path_or_buf, "rb")
self.handle = self._path_or_buf
diff --git a/pandas/io/sas/sas_xport.py b/pandas/io/sas/sas_xport.py
index 6cf248b748107..e4d9324ce5130 100644
--- a/pandas/io/sas/sas_xport.py
+++ b/pandas/io/sas/sas_xport.py
@@ -253,12 +253,9 @@ def __init__(
self._chunksize = chunksize
if isinstance(filepath_or_buffer, str):
- (
- filepath_or_buffer,
- encoding,
- compression,
- should_close,
- ) = get_filepath_or_buffer(filepath_or_buffer, encoding=encoding)
+ filepath_or_buffer = get_filepath_or_buffer(
+ filepath_or_buffer, encoding=encoding
+ ).filepath_or_buffer
if isinstance(filepath_or_buffer, (str, bytes)):
self.filepath_or_buffer = open(filepath_or_buffer, "rb")
diff --git a/pandas/io/sas/sasreader.py b/pandas/io/sas/sasreader.py
index fffdebda8c87a..ae9457a8e3147 100644
--- a/pandas/io/sas/sasreader.py
+++ b/pandas/io/sas/sasreader.py
@@ -109,22 +109,26 @@ def read_sas(
else:
raise ValueError("unable to infer format of SAS file")
- filepath_or_buffer, _, _, should_close = get_filepath_or_buffer(
- filepath_or_buffer, encoding
- )
+ ioargs = get_filepath_or_buffer(filepath_or_buffer, encoding)
reader: ReaderBase
if format.lower() == "xport":
from pandas.io.sas.sas_xport import XportReader
reader = XportReader(
- filepath_or_buffer, index=index, encoding=encoding, chunksize=chunksize
+ ioargs.filepath_or_buffer,
+ index=index,
+ encoding=ioargs.encoding,
+ chunksize=chunksize,
)
elif format.lower() == "sas7bdat":
from pandas.io.sas.sas7bdat import SAS7BDATReader
reader = SAS7BDATReader(
- filepath_or_buffer, index=index, encoding=encoding, chunksize=chunksize
+ ioargs.filepath_or_buffer,
+ index=index,
+ encoding=ioargs.encoding,
+ chunksize=chunksize,
)
else:
raise ValueError("unknown SAS format")
@@ -134,6 +138,6 @@ def read_sas(
data = reader.read()
- if should_close:
+ if ioargs.should_close:
reader.close()
return data
diff --git a/pandas/io/stata.py b/pandas/io/stata.py
index ec3819f1673a8..0074ebc4decb0 100644
--- a/pandas/io/stata.py
+++ b/pandas/io/stata.py
@@ -1069,9 +1069,9 @@ def __init__(
self._native_byteorder = _set_endianness(sys.byteorder)
path_or_buf = stringify_path(path_or_buf)
if isinstance(path_or_buf, str):
- path_or_buf, encoding, _, should_close = get_filepath_or_buffer(
+ path_or_buf = get_filepath_or_buffer(
path_or_buf, storage_options=storage_options
- )
+ ).filepath_or_buffer
if isinstance(path_or_buf, (str, bytes)):
self.path_or_buf = open(path_or_buf, "rb")
@@ -1979,11 +1979,16 @@ def _open_file_binary_write(
compression_typ, compression_args = get_compression_method(compression)
compression_typ = infer_compression(fname, compression_typ)
compression = dict(compression_args, method=compression_typ)
- path_or_buf, _, compression, _ = get_filepath_or_buffer(
+ ioargs = get_filepath_or_buffer(
fname, mode="wb", compression=compression, storage_options=storage_options,
)
- f, _ = get_handle(path_or_buf, "wb", compression=compression, is_text=False)
- return f, True, compression
+ f, _ = get_handle(
+ ioargs.filepath_or_buffer,
+ "wb",
+ compression=ioargs.compression,
+ is_text=False,
+ )
+ return f, True, ioargs.compression
else:
raise TypeError("fname must be a binary file, buffer or path-like.")
diff --git a/pandas/tests/io/test_common.py b/pandas/tests/io/test_common.py
index 5ce2233bc0cd0..85a12a13d19fb 100644
--- a/pandas/tests/io/test_common.py
+++ b/pandas/tests/io/test_common.py
@@ -105,21 +105,21 @@ def test_infer_compression_from_path(self, extension, expected, path_type):
compression = icom.infer_compression(path, compression="infer")
assert compression == expected
- def test_get_filepath_or_buffer_with_path(self):
- filename = "~/sometest"
- filepath_or_buffer, _, _, should_close = icom.get_filepath_or_buffer(filename)
- assert filepath_or_buffer != filename
- assert os.path.isabs(filepath_or_buffer)
- assert os.path.expanduser(filename) == filepath_or_buffer
- assert not should_close
+ @pytest.mark.parametrize("path_type", [str, CustomFSPath, Path])
+ def test_get_filepath_or_buffer_with_path(self, path_type):
+ # ignore LocalPath: it creates strange paths: /absolute/~/sometest
+ filename = path_type("~/sometest")
+ ioargs = icom.get_filepath_or_buffer(filename)
+ assert ioargs.filepath_or_buffer != filename
+ assert os.path.isabs(ioargs.filepath_or_buffer)
+ assert os.path.expanduser(filename) == ioargs.filepath_or_buffer
+ assert not ioargs.should_close
def test_get_filepath_or_buffer_with_buffer(self):
input_buffer = StringIO()
- filepath_or_buffer, _, _, should_close = icom.get_filepath_or_buffer(
- input_buffer
- )
- assert filepath_or_buffer == input_buffer
- assert not should_close
+ ioargs = icom.get_filepath_or_buffer(input_buffer)
+ assert ioargs.filepath_or_buffer == input_buffer
+ assert not ioargs.should_close
def test_iterator(self):
reader = pd.read_csv(StringIO(self.data1), chunksize=1)
@@ -389,6 +389,25 @@ def test_binary_mode(self):
df.to_csv(path, mode="w+b")
tm.assert_frame_equal(df, pd.read_csv(path, index_col=0))
+ @pytest.mark.parametrize("encoding", ["utf-16", "utf-32"])
+ @pytest.mark.parametrize("compression_", ["bz2", "xz"])
+ def test_warning_missing_utf_bom(self, encoding, compression_):
+ """
+ bz2 and xz do not write the byte order mark (BOM) for utf-16/32.
+
+ https://stackoverflow.com/questions/55171439
+
+ GH 35681
+ """
+ df = tm.makeDataFrame()
+ with tm.ensure_clean() as path:
+ with tm.assert_produces_warning(UnicodeWarning):
+ df.to_csv(path, compression=compression_, encoding=encoding)
+
+ # reading should fail (otherwise we wouldn't need the warning)
+ with pytest.raises(Exception):
+ pd.read_csv(path, compression=compression_, encoding=encoding)
+
def test_is_fsspec_url():
assert icom.is_fsspec_url("gcs://pandas/somethingelse.com")
diff --git a/pandas/tests/io/test_compression.py b/pandas/tests/io/test_compression.py
index bc14b485f75e5..31e9ad4cf4416 100644
--- a/pandas/tests/io/test_compression.py
+++ b/pandas/tests/io/test_compression.py
@@ -124,6 +124,8 @@ def test_compression_binary(compression_only):
GH22555
"""
df = tm.makeDataFrame()
+
+ # with a file
with tm.ensure_clean() as path:
with open(path, mode="wb") as file:
df.to_csv(file, mode="wb", compression=compression_only)
@@ -132,6 +134,14 @@ def test_compression_binary(compression_only):
df, pd.read_csv(path, index_col=0, compression=compression_only)
)
+ # with BytesIO
+ file = io.BytesIO()
+ df.to_csv(file, mode="wb", compression=compression_only)
+ file.seek(0) # file shouldn't be closed
+ tm.assert_frame_equal(
+ df, pd.read_csv(file, index_col=0, compression=compression_only)
+ )
+
def test_gzip_reproducibility_file_name():
"""
diff --git a/pandas/tests/io/test_gcs.py b/pandas/tests/io/test_gcs.py
index eacf4fa08545d..18b5743a3375a 100644
--- a/pandas/tests/io/test_gcs.py
+++ b/pandas/tests/io/test_gcs.py
@@ -9,12 +9,32 @@
from pandas.util import _test_decorators as td
-@td.skip_if_no("gcsfs")
-def test_read_csv_gcs(monkeypatch):
+@pytest.fixture
+def gcs_buffer(monkeypatch):
+ """Emulate GCS using a binary buffer."""
from fsspec import AbstractFileSystem, registry
registry.target.clear() # noqa # remove state
+ gcs_buffer = BytesIO()
+ gcs_buffer.close = lambda: True
+
+ class MockGCSFileSystem(AbstractFileSystem):
+ def open(*args, **kwargs):
+ gcs_buffer.seek(0)
+ return gcs_buffer
+
+ monkeypatch.setattr("gcsfs.GCSFileSystem", MockGCSFileSystem)
+
+ return gcs_buffer
+
+
+@td.skip_if_no("gcsfs")
+def test_read_csv_gcs(gcs_buffer):
+ from fsspec import registry
+
+ registry.target.clear() # noqa # remove state
+
df1 = DataFrame(
{
"int": [1, 3],
@@ -24,21 +44,19 @@ def test_read_csv_gcs(monkeypatch):
}
)
- class MockGCSFileSystem(AbstractFileSystem):
- def open(*args, **kwargs):
- return BytesIO(df1.to_csv(index=False).encode())
+ gcs_buffer.write(df1.to_csv(index=False).encode())
- monkeypatch.setattr("gcsfs.GCSFileSystem", MockGCSFileSystem)
df2 = read_csv("gs://test/test.csv", parse_dates=["dt"])
tm.assert_frame_equal(df1, df2)
@td.skip_if_no("gcsfs")
-def test_to_csv_gcs(monkeypatch):
- from fsspec import AbstractFileSystem, registry
+def test_to_csv_gcs(gcs_buffer):
+ from fsspec import registry
registry.target.clear() # noqa # remove state
+
df1 = DataFrame(
{
"int": [1, 3],
@@ -47,29 +65,57 @@ def test_to_csv_gcs(monkeypatch):
"dt": date_range("2018-06-18", periods=2),
}
)
- s = BytesIO()
- s.close = lambda: True
-
- class MockGCSFileSystem(AbstractFileSystem):
- def open(*args, **kwargs):
- s.seek(0)
- return s
- monkeypatch.setattr("gcsfs.GCSFileSystem", MockGCSFileSystem)
df1.to_csv("gs://test/test.csv", index=True)
- def mock_get_filepath_or_buffer(*args, **kwargs):
- return BytesIO(df1.to_csv(index=True).encode()), None, None, False
-
- monkeypatch.setattr(
- "pandas.io.common.get_filepath_or_buffer", mock_get_filepath_or_buffer
- )
-
df2 = read_csv("gs://test/test.csv", parse_dates=["dt"], index_col=0)
tm.assert_frame_equal(df1, df2)
+@td.skip_if_no("gcsfs")
+@pytest.mark.parametrize("encoding", ["utf-8", "cp1251"])
+def test_to_csv_compression_encoding_gcs(gcs_buffer, compression_only, encoding):
+ """
+ Compression and encoding should work with GCS.
+
+ GH 35677 (to_csv, compression), GH 26124 (to_csv, encoding), and
+ GH 32392 (read_csv, encoding)
+ """
+ from fsspec import registry
+
+ registry.target.clear() # noqa # remove state
+ df = tm.makeDataFrame()
+
+ # reference of compressed and encoded file
+ compression = {"method": compression_only}
+ if compression_only == "gzip":
+ compression["mtime"] = 1 # be reproducible
+ buffer = BytesIO()
+ df.to_csv(buffer, compression=compression, encoding=encoding, mode="wb")
+
+ # write compressed file with explicit compression
+ path_gcs = "gs://test/test.csv"
+ df.to_csv(path_gcs, compression=compression, encoding=encoding)
+ assert gcs_buffer.getvalue() == buffer.getvalue()
+ read_df = read_csv(
+ path_gcs, index_col=0, compression=compression_only, encoding=encoding
+ )
+ tm.assert_frame_equal(df, read_df)
+
+ # write compressed file with implicit compression
+ if compression_only == "gzip":
+ compression_only = "gz"
+ compression["method"] = "infer"
+ path_gcs += f".{compression_only}"
+ df.to_csv(
+ path_gcs, compression=compression, encoding=encoding,
+ )
+ assert gcs_buffer.getvalue() == buffer.getvalue()
+ read_df = read_csv(path_gcs, index_col=0, compression="infer", encoding=encoding)
+ tm.assert_frame_equal(df, read_df)
+
+
@td.skip_if_no("fastparquet")
@td.skip_if_no("gcsfs")
def test_to_parquet_gcs_new_file(monkeypatch, tmpdir):
| - [x] closes #35677, closes #26124, and closes #32392
- [x] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] whatsnew entry
This PR makes the following work:

- `df.to_csv("gs://mybucket/test2.csv.gz", compression="infer", mode="wb")`, by inferring the compression before converting the path to a file object.
- `df.to_csv("gs://mybucket/test2.csv", mode="wb")`, by wrapping fsspec file objects in a `TextIOWrapper`.

Path-like objects that are internally converted to file-like objects (in `get_filepath_or_buffer`) are now always opened in binary mode (unless text mode is explicitly requested), and the potentially changed mode is returned, so there is no need to specify `mode="wb"` for Google Cloud files. Because the Google Cloud file is now always opened in binary mode, the requested `encoding` is also honored.
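A minimal local sketch of the resulting behavior, using an in-memory binary buffer instead of a real `gs://` bucket (so no GCS credentials or `gcsfs` are assumed); this mirrors what now happens for cloud paths, where the file object is opened in binary mode and pandas applies the compression itself:

```python
import io

import pandas as pd

# Round-trip a gzip-compressed CSV through an in-memory binary buffer.
df = pd.DataFrame({"a": [1, 2], "b": ["x", "y"]})

buf = io.BytesIO()
# Writing compressed bytes to a binary buffer requires a binary mode.
df.to_csv(buf, mode="w+b", compression="gzip")

# Reading back: decompress and parse, restoring the index column.
round_tripped = pd.read_csv(
    io.BytesIO(buf.getvalue()), compression="gzip", index_col=0
)
```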
This PR also fixes Zip compression for file objects not having a name. | https://api.github.com/repos/pandas-dev/pandas/pulls/35681 | 2020-08-12T01:38:04Z | 2020-09-03T03:06:19Z | 2020-09-03T03:06:18Z | 2020-09-03T15:56:27Z |
PERF: make RangeIndex iterate over ._range | diff --git a/asv_bench/benchmarks/index_object.py b/asv_bench/benchmarks/index_object.py
index b242de6a17208..9c05019c70396 100644
--- a/asv_bench/benchmarks/index_object.py
+++ b/asv_bench/benchmarks/index_object.py
@@ -57,8 +57,8 @@ def time_datetime_difference_disjoint(self):
class Range:
def setup(self):
- self.idx_inc = RangeIndex(start=0, stop=10 ** 7, step=3)
- self.idx_dec = RangeIndex(start=10 ** 7, stop=-1, step=-3)
+ self.idx_inc = RangeIndex(start=0, stop=10 ** 6, step=3)
+ self.idx_dec = RangeIndex(start=10 ** 6, stop=-1, step=-3)
def time_max(self):
self.idx_inc.max()
@@ -73,15 +73,23 @@ def time_min_trivial(self):
self.idx_inc.min()
def time_get_loc_inc(self):
- self.idx_inc.get_loc(900000)
+ self.idx_inc.get_loc(900_000)
def time_get_loc_dec(self):
- self.idx_dec.get_loc(100000)
+ self.idx_dec.get_loc(100_000)
+
+ def time_iter_inc(self):
+ for _ in self.idx_inc:
+ pass
+
+ def time_iter_dec(self):
+ for _ in self.idx_dec:
+ pass
class IndexEquals:
def setup(self):
- idx_large_fast = RangeIndex(100000)
+ idx_large_fast = RangeIndex(100_000)
idx_small_slow = date_range(start="1/1/2012", periods=1)
self.mi_large_slow = MultiIndex.from_product([idx_large_fast, idx_small_slow])
@@ -94,7 +102,7 @@ def time_non_object_equals_multiindex(self):
class IndexAppend:
def setup(self):
- N = 10000
+ N = 10_000
self.range_idx = RangeIndex(0, 100)
self.int_idx = self.range_idx.astype(int)
self.obj_idx = self.int_idx.astype(str)
@@ -168,7 +176,7 @@ def time_get_loc_non_unique_sorted(self, dtype):
class Float64IndexMethod:
# GH 13166
def setup(self):
- N = 100000
+ N = 100_000
a = np.arange(N)
self.ind = Float64Index(a * 4.8000000418824129e-08)
@@ -212,7 +220,7 @@ class GC:
params = [1, 2, 5]
def create_use_drop(self):
- idx = Index(list(range(1000 * 1000)))
+ idx = Index(list(range(1_000_000)))
idx._engine
def peakmem_gc_instances(self, N):
diff --git a/pandas/core/indexes/range.py b/pandas/core/indexes/range.py
index 3577a7aacc008..7efb70f0752e2 100644
--- a/pandas/core/indexes/range.py
+++ b/pandas/core/indexes/range.py
@@ -373,6 +373,10 @@ def get_indexer(self, target, method=None, limit=None, tolerance=None):
def tolist(self):
return list(self._range)
+ @doc(Int64Index.__iter__)
+ def __iter__(self):
+ yield from self._range
+
@doc(Int64Index._shallow_copy)
def _shallow_copy(self, values=None, name: Label = no_default):
name = self.name if name is no_default else name
diff --git a/pandas/tests/indexes/ranges/test_range.py b/pandas/tests/indexes/ranges/test_range.py
index ef4bb9a0869b0..c4c242746e92c 100644
--- a/pandas/tests/indexes/ranges/test_range.py
+++ b/pandas/tests/indexes/ranges/test_range.py
@@ -167,6 +167,10 @@ def test_cache(self):
idx.any()
assert idx._cache == {}
+ for _ in idx:
+ pass
+ assert idx._cache == {}
+
df = pd.DataFrame({"a": range(10)}, index=idx)
df.loc[50]
| Minor performance issue.
By adding a custom ``__iter__`` method to ``RangeIndex``, we avoid creating/caching the expensive ``_data`` attribute when only iterating, and iterating over a ``range`` is simply faster than iterating over an ``ndarray``:
```python
>>> idx = pd.RangeIndex(100_000)
>>> %%timeit
... for _ in idx:
... pass
10.9 ms ± 74.7 µs per loop # master
6.11 ms ± 48.8 µs per loop # this PR
>>> "_data" in idx._cache
True # master
False # this PR
```
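The pattern itself is tiny; a toy sketch (not pandas code) of what the PR does, delegating iteration to the underlying `range` instead of a materialized array:

```python
class ToyRangeIndex:
    """Minimal illustration of delegating iteration to ``range``."""

    def __init__(self, start, stop, step=1):
        self._range = range(start, stop, step)

    def __iter__(self):
        # yield from the plain range: no ndarray is built or cached,
        # and each element is a Python int rather than a boxed scalar.
        yield from self._range


idx = ToyRangeIndex(0, 10, 3)
values = list(idx)  # [0, 3, 6, 9]
```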
xref #35432, #26565. | https://api.github.com/repos/pandas-dev/pandas/pulls/35676 | 2020-08-11T18:35:41Z | 2020-08-13T18:06:36Z | 2020-08-13T18:06:36Z | 2020-08-13T18:27:18Z |
Fix cython3 | diff --git a/pandas/_libs/reduction.pyx b/pandas/_libs/reduction.pyx
index 7b36bc8baf891..0fc8d537d1ecb 100644
--- a/pandas/_libs/reduction.pyx
+++ b/pandas/_libs/reduction.pyx
@@ -294,9 +294,8 @@ cdef class Slider:
Only handles contiguous data for now
"""
cdef:
- ndarray values, buf
- Py_ssize_t stride, orig_len, orig_stride
- char *orig_data
+ ndarray values, buf, orig_data
+ Py_ssize_t stride
def __init__(self, ndarray values, ndarray buf):
assert values.ndim == 1
@@ -309,25 +308,17 @@ cdef class Slider:
self.buf = buf
self.stride = values.strides[0]
- self.orig_data = self.buf.data
- self.orig_len = self.buf.shape[0]
- self.orig_stride = self.buf.strides[0]
-
- self.buf.data = self.values.data
- self.buf.strides[0] = self.stride
+ self.orig_data = self.buf[:]
+ self.buf = self.values[::self.stride]
cdef move(self, int start, int end):
"""
For slicing
"""
- self.buf.data = self.values.data + self.stride * start
- self.buf.shape[0] = end - start
+ self.buf = self.values[start:end:self.stride]
cdef reset(self):
-
- self.buf.shape[0] = self.orig_len
- self.buf.data = self.orig_data
- self.buf.strides[0] = self.orig_stride
+ self.buf = self.orig_data[:]
class InvalidApply(Exception):
@@ -407,7 +398,7 @@ cdef class BlockSlider:
list blocks
cdef:
- char **base_ptrs
+ list base_ptrs
def __init__(self, object frame):
cdef:
@@ -430,12 +421,7 @@ cdef class BlockSlider:
self.idx_slider = Slider(
self.frame.index._index_data, self.dummy.index._index_data)
- self.base_ptrs = <char**>malloc(sizeof(char*) * len(self.blocks))
- for i, block in enumerate(self.blocks):
- self.base_ptrs[i] = (<ndarray>block).data
-
- def __dealloc__(self):
- free(self.base_ptrs)
+ self.base_ptrs = [block[:] for block in self.blocks]
cdef move(self, int start, int end):
cdef:
@@ -444,11 +430,7 @@ cdef class BlockSlider:
# move blocks
for i in range(self.nblocks):
- arr = self.blocks[i]
-
- # axis=1 is the frame's axis=0
- arr.data = self.base_ptrs[i] + arr.strides[1] * start
- arr.shape[1] = end - start
+ self.blocks[i] = self.base_ptrs[i][:, start:end]
# move and set the index
self.idx_slider.move(start, end)
@@ -463,8 +445,4 @@ cdef class BlockSlider:
# reset blocks
for i in range(self.nblocks):
- arr = self.blocks[i]
-
- # axis=1 is the frame's axis=0
- arr.data = self.base_ptrs[i]
- arr.shape[1] = 0
+ self.blocks[i] = self.base_ptrs[i][:]
diff --git a/pandas/_libs/writers.pyx b/pandas/_libs/writers.pyx
index 40c39aabb7a7a..918b150dc8b99 100644
--- a/pandas/_libs/writers.pyx
+++ b/pandas/_libs/writers.pyx
@@ -2,7 +2,7 @@ import cython
from cython import Py_ssize_t
from cpython.bytes cimport PyBytes_GET_SIZE
-from cpython.unicode cimport PyUnicode_GET_SIZE
+from cpython.unicode cimport PyUnicode_GET_LENGTH
import numpy as np
@@ -144,7 +144,7 @@ cpdef inline Py_ssize_t word_len(object val):
Py_ssize_t l = 0
if isinstance(val, str):
- l = PyUnicode_GET_SIZE(val)
+ l = PyUnicode_GET_LENGTH(val)
elif isinstance(val, bytes):
l = PyBytes_GET_SIZE(val)
diff --git a/pyproject.toml b/pyproject.toml
index f6f8081b6c464..b17a53dd2909a 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -4,7 +4,7 @@
requires = [
"setuptools",
"wheel",
- "Cython>=0.29.16,<3", # Note: sync with setup.py
+ "Cython>=0.29.21", # Note: sync with setup.py
"numpy==1.16.5; python_version=='3.7' and platform_system!='AIX'",
"numpy==1.17.3; python_version>='3.8' and platform_system!='AIX'",
"numpy==1.16.5; python_version=='3.7' and platform_system=='AIX'",
diff --git a/setup.py b/setup.py
index 43d19d525876b..8d1cee4bbf6a6 100755
--- a/setup.py
+++ b/setup.py
@@ -34,7 +34,7 @@ def is_platform_mac():
min_numpy_ver = "1.16.5"
-min_cython_ver = "0.29.16" # note: sync with pyproject.toml
+min_cython_ver = "0.29.21" # note: sync with pyproject.toml
try:
import Cython
I'm not sure if this actually works, but it gets the Cython code to compile with Python 3.10.
- [x] closes #34014
- [ ] tests added / passed
- [ ] passes `black pandas`
- [ ] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
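The core idea of the refactor — replacing raw ``.data`` pointer arithmetic with zero-copy slices — can be illustrated in plain Python (illustrative only, not the Cython code itself):

```python
import numpy as np

# A basic slice of an ndarray is a view: it shares memory with the
# parent array. "Moving" a buffer therefore becomes re-slicing,
# instead of repointing a raw char* and mutating shape/strides in
# place (which newer Cython/NumPy no longer allow).
values = np.arange(10)

buf = values[2:5]  # view onto values[2], values[3], values[4]
buf[0] = 99        # writes through to the parent array
```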
| https://api.github.com/repos/pandas-dev/pandas/pulls/35675 | 2020-08-11T17:40:20Z | 2020-09-09T22:57:26Z | null | 2020-09-09T22:57:57Z |
Avoid redirect | diff --git a/doc/source/getting_started/tutorials.rst b/doc/source/getting_started/tutorials.rst
index 4c2d0621c6103..b8940d2efed2f 100644
--- a/doc/source/getting_started/tutorials.rst
+++ b/doc/source/getting_started/tutorials.rst
@@ -94,4 +94,4 @@ Various tutorials
* `Intro to pandas data structures, by Greg Reda <http://www.gregreda.com/2013/10/26/intro-to-pandas-data-structures/>`_
* `Pandas and Python: Top 10, by Manish Amde <https://manishamde.github.io/blog/2013/03/07/pandas-and-python-top-10/>`_
* `Pandas DataFrames Tutorial, by Karlijn Willems <https://www.datacamp.com/community/tutorials/pandas-tutorial-dataframe-python>`_
-* `A concise tutorial with real life examples <https://tutswiki.com/pandas-cookbook/chapter1>`_
+* `A concise tutorial with real life examples <https://tutswiki.com/pandas-cookbook/chapter1/>`_
| A minor change to avoid 301 redirect on link. | https://api.github.com/repos/pandas-dev/pandas/pulls/35674 | 2020-08-11T16:25:43Z | 2020-08-22T03:08:19Z | 2020-08-22T03:08:19Z | 2020-08-22T03:08:29Z |
REGR: Dataframe.reset_index() on empty DataFrame with MI and datatime level | diff --git a/doc/source/whatsnew/v1.1.1.rst b/doc/source/whatsnew/v1.1.1.rst
index b37103910afab..98d67e930ccc0 100644
--- a/doc/source/whatsnew/v1.1.1.rst
+++ b/doc/source/whatsnew/v1.1.1.rst
@@ -22,6 +22,7 @@ Fixed regressions
- Fixed regression in :meth:`DataFrame.shift` with ``axis=1`` and heterogeneous dtypes (:issue:`35488`)
- Fixed regression in ``.groupby(..).rolling(..)`` where a segfault would occur with ``center=True`` and an odd number of values (:issue:`35552`)
- Fixed regression in :meth:`DataFrame.apply` where functions that altered the input in-place only operated on a single row (:issue:`35462`)
+- Fixed regression in :meth:`DataFrame.reset_index` would raise a ``ValueError`` on empty :class:`DataFrame` with a :class:`MultiIndex` with a ``datetime64`` dtype level (:issue:`35606`, :issue:`35657`)
- Fixed regression where :meth:`DataFrame.merge_asof` would raise a ``UnboundLocalError`` when ``left_index`` , ``right_index`` and ``tolerance`` were set (:issue:`35558`)
- Fixed regression in ``.groupby(..).rolling(..)`` where a custom ``BaseIndexer`` would be ignored (:issue:`35557`)
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index 547d86f221b5f..1587dd8798ec3 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -4816,7 +4816,7 @@ def _maybe_casted_values(index, labels=None):
# we can have situations where the whole mask is -1,
# meaning there is nothing found in labels, so make all nan's
- if mask.all():
+ if mask.size > 0 and mask.all():
dtype = index.dtype
fill_value = na_value_for_dtype(dtype)
values = construct_1d_arraylike_from_scalar(
diff --git a/pandas/tests/frame/methods/test_reset_index.py b/pandas/tests/frame/methods/test_reset_index.py
index da4bfa9be4881..b88ef0e6691cb 100644
--- a/pandas/tests/frame/methods/test_reset_index.py
+++ b/pandas/tests/frame/methods/test_reset_index.py
@@ -318,3 +318,33 @@ def test_reset_index_dtypes_on_empty_frame_with_multiindex(array, dtype):
result = DataFrame(index=idx)[:0].reset_index().dtypes
expected = Series({"level_0": np.int64, "level_1": np.float64, "level_2": dtype})
tm.assert_series_equal(result, expected)
+
+
+def test_reset_index_empty_frame_with_datetime64_multiindex():
+ # https://github.com/pandas-dev/pandas/issues/35606
+ idx = MultiIndex(
+ levels=[[pd.Timestamp("2020-07-20 00:00:00")], [3, 4]],
+ codes=[[], []],
+ names=["a", "b"],
+ )
+ df = DataFrame(index=idx, columns=["c", "d"])
+ result = df.reset_index()
+ expected = DataFrame(
+ columns=list("abcd"), index=RangeIndex(start=0, stop=0, step=1)
+ )
+ expected["a"] = expected["a"].astype("datetime64[ns]")
+ expected["b"] = expected["b"].astype("int64")
+ tm.assert_frame_equal(result, expected)
+
+
+def test_reset_index_empty_frame_with_datetime64_multiindex_from_groupby():
+ # https://github.com/pandas-dev/pandas/issues/35657
+ df = DataFrame(dict(c1=[10.0], c2=["a"], c3=pd.to_datetime("2020-01-01")))
+ df = df.head(0).groupby(["c2", "c3"])[["c1"]].sum()
+ result = df.reset_index()
+ expected = DataFrame(
+ columns=["c2", "c3", "c1"], index=RangeIndex(start=0, stop=0, step=1)
+ )
+ expected["c3"] = expected["c3"].astype("datetime64[ns]")
+ expected["c1"] = expected["c1"].astype("float64")
+ tm.assert_frame_equal(result, expected)
| - [ ] closes #35606
- [ ] closes #35657
- [ ] tests added / passed
- [ ] passes `black pandas`
- [ ] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/35673 | 2020-08-11T16:24:25Z | 2020-08-14T01:50:52Z | 2020-08-14T01:50:52Z | 2020-08-14T10:23:32Z |
CI/TST: change skip to xfail #35660 | diff --git a/pandas/tests/io/parser/test_common.py b/pandas/tests/io/parser/test_common.py
index c84c0048cc838..3d5f6ae3a4af9 100644
--- a/pandas/tests/io/parser/test_common.py
+++ b/pandas/tests/io/parser/test_common.py
@@ -1138,7 +1138,7 @@ def test_parse_integers_above_fp_precision(all_parsers):
tm.assert_frame_equal(result, expected)
-@pytest.mark.skip("unreliable test #35214")
+@pytest.mark.xfail(reason="ResourceWarning #35660", strict=False)
def test_chunks_have_consistent_numerical_type(all_parsers):
parser = all_parsers
integers = [str(i) for i in range(499999)]
@@ -1152,7 +1152,7 @@ def test_chunks_have_consistent_numerical_type(all_parsers):
assert result.a.dtype == float
-@pytest.mark.skip("unreliable test #35214")
+@pytest.mark.xfail(reason="ResourceWarning #35660", strict=False)
def test_warn_if_chunks_have_mismatched_type(all_parsers):
warning_type = None
parser = all_parsers
| Followup of #35660 | https://api.github.com/repos/pandas-dev/pandas/pulls/35672 | 2020-08-11T16:22:57Z | 2020-08-11T21:57:57Z | 2020-08-11T21:57:57Z | 2020-08-19T17:37:41Z |
ENH: Enable short_caption in to_latex | diff --git a/doc/source/whatsnew/v1.2.0.rst b/doc/source/whatsnew/v1.2.0.rst
index 9fc094330fb36..b6d5fc0f35bc3 100644
--- a/doc/source/whatsnew/v1.2.0.rst
+++ b/doc/source/whatsnew/v1.2.0.rst
@@ -96,6 +96,32 @@ For example:
buffer = io.BytesIO()
data.to_csv(buffer, mode="w+b", encoding="utf-8", compression="gzip")
+Support for short caption and table position in ``to_latex``
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+:meth:`DataFrame.to_latex` now allows one to specify
+a floating table position (:issue:`35281`)
+and a short caption (:issue:`36267`).
+
+New keyword ``position`` is implemented to set the position.
+
+.. ipython:: python
+
+ data = pd.DataFrame({'a': [1, 2], 'b': [3, 4]})
+ table = data.to_latex(position='ht')
+ print(table)
+
+Usage of keyword ``caption`` is extended.
+Besides taking a single string as an argument,
+one can optionally provide a tuple of ``(full_caption, short_caption)``
+to add a short caption macro.
+
+.. ipython:: python
+
+ data = pd.DataFrame({'a': [1, 2], 'b': [3, 4]})
+ table = data.to_latex(caption=('the full long caption', 'short caption'))
+ print(table)
+
.. _whatsnew_120.read_csv_table_precision_default:
Change in default floating precision for ``read_csv`` and ``read_table``
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index 784e8877ef128..77be48ef29df8 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -3013,6 +3013,9 @@ def to_latex(
.. versionchanged:: 1.0.0
Added caption and label arguments.
+ .. versionchanged:: 1.2.0
+ Added position argument, changed meaning of caption argument.
+
Parameters
----------
buf : str, Path or StringIO-like, optional, default None
@@ -3074,11 +3077,16 @@ def to_latex(
centered labels (instead of top-aligned) across the contained
rows, separating groups via clines. The default will be read
from the pandas config module.
- caption : str, optional
- The LaTeX caption to be placed inside ``\caption{{}}`` in the output.
+ caption : str or tuple, optional
+ Tuple (full_caption, short_caption),
+ which results in ``\caption[short_caption]{{full_caption}}``;
+ if a single string is passed, no short caption will be set.
.. versionadded:: 1.0.0
+ .. versionchanged:: 1.2.0
+ Optionally allow caption to be a tuple ``(full_caption, short_caption)``.
+
label : str, optional
The LaTeX label to be placed inside ``\label{{}}`` in the output.
This is used with ``\ref{{}}`` in the main ``.tex`` file.
@@ -3087,6 +3095,8 @@ def to_latex(
position : str, optional
The LaTeX positional argument for tables, to be placed after
``\begin{{}}`` in the output.
+
+ .. versionadded:: 1.2.0
{returns}
See Also
--------
@@ -3097,8 +3107,8 @@ def to_latex(
Examples
--------
>>> df = pd.DataFrame(dict(name=['Raphael', 'Donatello'],
- ... mask=['red', 'purple'],
- ... weapon=['sai', 'bo staff']))
+ ... mask=['red', 'purple'],
+ ... weapon=['sai', 'bo staff']))
>>> print(df.to_latex(index=False)) # doctest: +NORMALIZE_WHITESPACE
\begin{{tabular}}{{lll}}
\toprule
diff --git a/pandas/io/formats/format.py b/pandas/io/formats/format.py
index dcd91b3a12294..7635cda56ba26 100644
--- a/pandas/io/formats/format.py
+++ b/pandas/io/formats/format.py
@@ -1021,7 +1021,7 @@ def to_latex(
multicolumn: bool = False,
multicolumn_format: Optional[str] = None,
multirow: bool = False,
- caption: Optional[str] = None,
+ caption: Optional[Union[str, Tuple[str, str]]] = None,
label: Optional[str] = None,
position: Optional[str] = None,
) -> Optional[str]:
diff --git a/pandas/io/formats/latex.py b/pandas/io/formats/latex.py
index 170df193bef00..2eee0ce73291f 100644
--- a/pandas/io/formats/latex.py
+++ b/pandas/io/formats/latex.py
@@ -2,7 +2,7 @@
Module for formatting output data in Latex.
"""
from abc import ABC, abstractmethod
-from typing import IO, Iterator, List, Optional, Type
+from typing import IO, Iterator, List, Optional, Tuple, Type, Union
import numpy as np
@@ -11,6 +11,39 @@
from pandas.io.formats.format import DataFrameFormatter, TableFormatter
+def _split_into_full_short_caption(
+ caption: Optional[Union[str, Tuple[str, str]]]
+) -> Tuple[str, str]:
+ """Extract full and short captions from caption string/tuple.
+
+ Parameters
+ ----------
+ caption : str or tuple, optional
+ Either table caption string or tuple (full_caption, short_caption).
+ If string is provided, then it is treated as table full caption,
+ while short_caption is considered an empty string.
+
+ Returns
+ -------
+ full_caption, short_caption : tuple
+ Tuple of full_caption, short_caption strings.
+ """
+ if caption:
+ if isinstance(caption, str):
+ full_caption = caption
+ short_caption = ""
+ else:
+ try:
+ full_caption, short_caption = caption
+ except ValueError as err:
+ msg = "caption must be either a string or a tuple of two strings"
+ raise ValueError(msg) from err
+ else:
+ full_caption = ""
+ short_caption = ""
+ return full_caption, short_caption
+
+
class RowStringConverter(ABC):
r"""Converter for dataframe rows into LaTeX strings.
@@ -275,6 +308,8 @@ class TableBuilderAbstract(ABC):
Use multirow to enhance MultiIndex rows.
caption: str, optional
Table caption.
+ short_caption: str, optional
+ Table short caption.
label: str, optional
LaTeX label.
position: str, optional
@@ -289,6 +324,7 @@ def __init__(
multicolumn_format: Optional[str] = None,
multirow: bool = False,
caption: Optional[str] = None,
+ short_caption: Optional[str] = None,
label: Optional[str] = None,
position: Optional[str] = None,
):
@@ -298,6 +334,7 @@ def __init__(
self.multicolumn_format = multicolumn_format
self.multirow = multirow
self.caption = caption
+ self.short_caption = short_caption
self.label = label
self.position = position
@@ -384,8 +421,23 @@ def _position_macro(self) -> str:
@property
def _caption_macro(self) -> str:
- r"""Caption macro, extracted from self.caption, like \caption{cap}."""
- return f"\\caption{{{self.caption}}}" if self.caption else ""
+ r"""Caption macro, extracted from self.caption.
+
+ With short caption:
+ \caption[short_caption]{caption_string}.
+
+ Without short caption:
+ \caption{caption_string}.
+ """
+ if self.caption:
+ return "".join(
+ [
+ r"\caption",
+ f"[{self.short_caption}]" if self.short_caption else "",
+ f"{{{self.caption}}}",
+ ]
+ )
+ return ""
@property
def _label_macro(self) -> str:
@@ -596,15 +648,32 @@ def env_end(self) -> str:
class LatexFormatter(TableFormatter):
- """
+ r"""
Used to render a DataFrame to a LaTeX tabular/longtable environment output.
Parameters
----------
formatter : `DataFrameFormatter`
+ longtable : bool, default False
+ Use longtable environment.
column_format : str, default None
The columns format as specified in `LaTeX table format
<https://en.wikibooks.org/wiki/LaTeX/Tables>`__ e.g 'rcl' for 3 columns
+ multicolumn : bool, default False
+ Use \multicolumn to enhance MultiIndex columns.
+ multicolumn_format : str, default 'l'
+ The alignment for multicolumns, similar to `column_format`
+ multirow : bool, default False
+ Use \multirow to enhance MultiIndex rows.
+ caption : str or tuple, optional
+ Tuple (full_caption, short_caption),
+ which results in \caption[short_caption]{full_caption};
+ if a single string is passed, no short caption will be set.
+ label : str, optional
+ The LaTeX label to be placed inside ``\label{}`` in the output.
+ position : str, optional
+ The LaTeX positional argument for tables, to be placed after
+ ``\begin{}`` in the output.
See Also
--------
@@ -619,18 +688,18 @@ def __init__(
multicolumn: bool = False,
multicolumn_format: Optional[str] = None,
multirow: bool = False,
- caption: Optional[str] = None,
+ caption: Optional[Union[str, Tuple[str, str]]] = None,
label: Optional[str] = None,
position: Optional[str] = None,
):
self.fmt = formatter
self.frame = self.fmt.frame
self.longtable = longtable
- self.column_format = column_format # type: ignore[assignment]
+ self.column_format = column_format
self.multicolumn = multicolumn
self.multicolumn_format = multicolumn_format
self.multirow = multirow
- self.caption = caption
+ self.caption, self.short_caption = _split_into_full_short_caption(caption)
self.label = label
self.position = position
@@ -658,6 +727,7 @@ def builder(self) -> TableBuilderAbstract:
multicolumn_format=self.multicolumn_format,
multirow=self.multirow,
caption=self.caption,
+ short_caption=self.short_caption,
label=self.label,
position=self.position,
)
@@ -671,7 +741,7 @@ def _select_builder(self) -> Type[TableBuilderAbstract]:
return TabularBuilder
@property
- def column_format(self) -> str:
+ def column_format(self) -> Optional[str]:
"""Column format."""
return self._column_format
diff --git a/pandas/tests/io/formats/test_to_latex.py b/pandas/tests/io/formats/test_to_latex.py
index d3d865158309c..908fdea2f73d0 100644
--- a/pandas/tests/io/formats/test_to_latex.py
+++ b/pandas/tests/io/formats/test_to_latex.py
@@ -414,6 +414,11 @@ def caption_table(self):
"""Caption for table/tabular LaTeX environment."""
return "a table in a \\texttt{table/tabular} environment"
+ @pytest.fixture
+ def short_caption(self):
+ """Short caption for testing \\caption[short_caption]{full_caption}."""
+ return "a table"
+
@pytest.fixture
def label_table(self):
"""Label for table/tabular LaTeX environment."""
@@ -493,6 +498,107 @@ def test_to_latex_caption_and_label(self, df_short, caption_table, label_table):
)
assert result == expected
+ def test_to_latex_caption_and_shortcaption(
+ self,
+ df_short,
+ caption_table,
+ short_caption,
+ ):
+ result = df_short.to_latex(caption=(caption_table, short_caption))
+ expected = _dedent(
+ r"""
+ \begin{table}
+ \centering
+ \caption[a table]{a table in a \texttt{table/tabular} environment}
+ \begin{tabular}{lrl}
+ \toprule
+ {} & a & b \\
+ \midrule
+ 0 & 1 & b1 \\
+ 1 & 2 & b2 \\
+ \bottomrule
+ \end{tabular}
+ \end{table}
+ """
+ )
+ assert result == expected
+
+ def test_to_latex_caption_and_shortcaption_list_is_ok(self, df_short):
+ caption = ("Long-long-caption", "Short")
+ result_tuple = df_short.to_latex(caption=caption)
+ result_list = df_short.to_latex(caption=list(caption))
+ assert result_tuple == result_list
+
+ def test_to_latex_caption_shortcaption_and_label(
+ self,
+ df_short,
+ caption_table,
+ short_caption,
+ label_table,
+ ):
+ # test when the short_caption is provided alongside caption and label
+ result = df_short.to_latex(
+ caption=(caption_table, short_caption),
+ label=label_table,
+ )
+ expected = _dedent(
+ r"""
+ \begin{table}
+ \centering
+ \caption[a table]{a table in a \texttt{table/tabular} environment}
+ \label{tab:table_tabular}
+ \begin{tabular}{lrl}
+ \toprule
+ {} & a & b \\
+ \midrule
+ 0 & 1 & b1 \\
+ 1 & 2 & b2 \\
+ \bottomrule
+ \end{tabular}
+ \end{table}
+ """
+ )
+ assert result == expected
+
+ @pytest.mark.parametrize(
+ "bad_caption",
+ [
+ ("full_caption", "short_caption", "extra_string"),
+ ("full_caption", "short_caption", 1),
+ ("full_caption", "short_caption", None),
+ ("full_caption",),
+ (None,),
+ ],
+ )
+ def test_to_latex_bad_caption_raises(self, bad_caption):
+ # test that wrong number of params is raised
+ df = pd.DataFrame({"a": [1]})
+ msg = "caption must be either a string or a tuple of two strings"
+ with pytest.raises(ValueError, match=msg):
+ df.to_latex(caption=bad_caption)
+
+ def test_to_latex_two_chars_caption(self, df_short):
+ # test that two chars caption is handled correctly
+ # it must not be unpacked into long_caption, short_caption.
+ result = df_short.to_latex(caption="xy")
+ expected = _dedent(
+ r"""
+ \begin{table}
+ \centering
+ \caption{xy}
+ \begin{tabular}{lrl}
+ \toprule
+ {} & a & b \\
+ \midrule
+ 0 & 1 & b1 \\
+ 1 & 2 & b2 \\
+ \bottomrule
+ \end{tabular}
+ \end{table}
+ """
+ )
+ assert result == expected
+
def test_to_latex_longtable_caption_only(self, df_short, caption_longtable):
# GH 25436
# test when no caption and no label is provided
@@ -595,6 +701,47 @@ def test_to_latex_longtable_caption_and_label(
)
assert result == expected
+ def test_to_latex_longtable_caption_shortcaption_and_label(
+ self,
+ df_short,
+ caption_longtable,
+ short_caption,
+ label_longtable,
+ ):
+ # test when the caption, the short_caption and the label are provided
+ result = df_short.to_latex(
+ longtable=True,
+ caption=(caption_longtable, short_caption),
+ label=label_longtable,
+ )
+ expected = _dedent(
+ r"""
+ \begin{longtable}{lrl}
+ \caption[a table]{a table in a \texttt{longtable} environment}
+ \label{tab:longtable}\\
+ \toprule
+ {} & a & b \\
+ \midrule
+ \endfirsthead
+ \caption[]{a table in a \texttt{longtable} environment} \\
+ \toprule
+ {} & a & b \\
+ \midrule
+ \endhead
+ \midrule
+ \multicolumn{3}{r}{{Continued on next page}} \\
+ \midrule
+ \endfoot
+
+ \bottomrule
+ \endlastfoot
+ 0 & 1 & b1 \\
+ 1 & 2 & b2 \\
+ \end{longtable}
+ """
+ )
+ assert result == expected
+
class TestToLatexEscape:
@pytest.fixture
| - [x] closes #36267
- [x] tests added / passed
- [x] passes black pandas
- [x] passes git diff upstream/master -u -- "*.py" | flake8 --diff
- [x] whatsnew entry
Enable short_caption for ``DataFrame.to_latex`` by expanding the meaning of kwarg ``caption``.
Optionally, ``caption`` can be a tuple ``(full_caption, short_caption)``.
The final caption macro would look like this:
```
\caption[short_caption]{full_caption}
```
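The caption-splitting logic added here is simple enough to sketch standalone (an illustrative copy of the behavior, not the pandas internals verbatim):

```python
def split_caption(caption):
    """Split ``caption`` into ``(full_caption, short_caption)``.

    A plain string means "no short caption"; a 2-tuple (or 2-list)
    supplies both; anything else of the wrong length raises.
    """
    if not caption:
        return "", ""
    if isinstance(caption, str):
        return caption, ""
    try:
        full, short = caption
    except ValueError as err:
        raise ValueError(
            "caption must be either a string or a tuple of two strings"
        ) from err
    return full, short
```

With the two-element form, the rendered macro becomes `\caption[short_caption]{full_caption}`.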
| https://api.github.com/repos/pandas-dev/pandas/pulls/35668 | 2020-08-11T11:51:27Z | 2020-10-17T16:23:33Z | 2020-10-17T16:23:32Z | 2020-11-06T15:34:10Z |
Backport PR #35633 on branch 1.1.x (BUG: DataFrame.apply with func altering row in-place) | diff --git a/doc/source/whatsnew/v1.1.1.rst b/doc/source/whatsnew/v1.1.1.rst
index e5860644fa371..415f9e508feb8 100644
--- a/doc/source/whatsnew/v1.1.1.rst
+++ b/doc/source/whatsnew/v1.1.1.rst
@@ -21,6 +21,7 @@ Fixed regressions
- Fixed regression in :class:`pandas.core.groupby.RollingGroupby` where column selection was ignored (:issue:`35486`)
- Fixed regression in :meth:`DataFrame.shift` with ``axis=1`` and heterogeneous dtypes (:issue:`35488`)
- Fixed regression in ``.groupby(..).rolling(..)`` where a segfault would occur with ``center=True`` and an odd number of values (:issue:`35552`)
+- Fixed regression in :meth:`DataFrame.apply` where functions that altered the input in-place only operated on a single row (:issue:`35462`)
.. ---------------------------------------------------------------------------
diff --git a/pandas/core/apply.py b/pandas/core/apply.py
index 6b8d7dc35fe95..6d44cf917a07a 100644
--- a/pandas/core/apply.py
+++ b/pandas/core/apply.py
@@ -389,6 +389,8 @@ def series_generator(self):
blk = mgr.blocks[0]
for (arr, name) in zip(values, self.index):
+ # GH#35462 re-pin mgr in case setitem changed it
+ ser._mgr = mgr
blk.values = arr
ser.name = name
yield ser
diff --git a/pandas/tests/frame/apply/test_frame_apply.py b/pandas/tests/frame/apply/test_frame_apply.py
index 3a32278e2a4b1..538978358c8e7 100644
--- a/pandas/tests/frame/apply/test_frame_apply.py
+++ b/pandas/tests/frame/apply/test_frame_apply.py
@@ -1522,3 +1522,22 @@ def test_apply_dtype(self, col):
expected = df.dtypes
tm.assert_series_equal(result, expected)
+
+
+def test_apply_mutating():
+ # GH#35462 case where applied func pins a new BlockManager to a row
+ df = pd.DataFrame({"a": range(100), "b": range(100, 200)})
+
+ def func(row):
+ mgr = row._mgr
+ row.loc["a"] += 1
+ assert row._mgr is not mgr
+ return row
+
+ expected = df.copy()
+ expected["a"] += 1
+
+ result = df.apply(func, axis=1)
+
+ tm.assert_frame_equal(result, expected)
+ tm.assert_frame_equal(df, result)
| Backport PR #35633: BUG: DataFrame.apply with func altering row in-place | https://api.github.com/repos/pandas-dev/pandas/pulls/35666 | 2020-08-11T08:40:57Z | 2020-08-11T09:41:03Z | 2020-08-11T09:41:03Z | 2020-08-11T09:41:03Z |
BUG: Styler cell_ids fails on multiple renders | diff --git a/doc/source/whatsnew/v1.1.1.rst b/doc/source/whatsnew/v1.1.1.rst
index 98d67e930ccc0..3f177b29d52b8 100644
--- a/doc/source/whatsnew/v1.1.1.rst
+++ b/doc/source/whatsnew/v1.1.1.rst
@@ -33,7 +33,7 @@ Fixed regressions
Bug fixes
~~~~~~~~~
-- Bug in ``Styler`` whereby `cell_ids` argument had no effect due to other recent changes (:issue:`35588`).
+- Bug in ``Styler`` whereby `cell_ids` argument had no effect due to other recent changes (:issue:`35588`, :issue:`35663`).
Categorical
^^^^^^^^^^^
diff --git a/pandas/io/formats/style.py b/pandas/io/formats/style.py
index 584f42a6cab12..3bbb5271bce61 100644
--- a/pandas/io/formats/style.py
+++ b/pandas/io/formats/style.py
@@ -390,16 +390,16 @@ def format_attr(pair):
"is_visible": (c not in hidden_columns),
}
# only add an id if the cell has a style
+ props = []
if self.cell_ids or (r, c) in ctx:
row_dict["id"] = "_".join(cs[1:])
+ for x in ctx[r, c]:
+ # have to handle empty styles like ['']
+ if x.count(":"):
+ props.append(tuple(x.split(":")))
+ else:
+ props.append(("", ""))
row_es.append(row_dict)
- props = []
- for x in ctx[r, c]:
- # have to handle empty styles like ['']
- if x.count(":"):
- props.append(tuple(x.split(":")))
- else:
- props.append(("", ""))
cellstyle_map[tuple(props)].append(f"row{r}_col{c}")
body.append(row_es)
diff --git a/pandas/tests/io/formats/test_style.py b/pandas/tests/io/formats/test_style.py
index 3ef5157655e78..6025649e9dbec 100644
--- a/pandas/tests/io/formats/test_style.py
+++ b/pandas/tests/io/formats/test_style.py
@@ -1684,8 +1684,11 @@ def f(a, b, styler):
def test_no_cell_ids(self):
# GH 35588
+ # GH 35663
df = pd.DataFrame(data=[[0]])
- s = Styler(df, uuid="_", cell_ids=False).render()
+ styler = Styler(df, uuid="_", cell_ids=False)
+ styler.render()
+ s = styler.render() # render twice to ensure ctx is not updated
assert s.find('<td class="data row0 col0" >') != -1
| - [x] closes #35663
- [x] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/35664 | 2020-08-11T06:44:45Z | 2020-08-14T12:36:10Z | 2020-08-14T12:36:10Z | 2020-08-15T16:28:19Z |
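The Styler patch above moves the `props` computation inside the `if self.cell_ids or (r, c) in ctx` guard. The underlying problem is that `ctx` is a `defaultdict`, so the old unguarded `ctx[r, c]` lookup inserted an empty entry for every unstyled cell, and on a second render `(r, c) in ctx` was suddenly true everywhere, defeating `cell_ids=False`. A stdlib sketch of the guarded lookup (function name is illustrative; the split logic mirrors the patch):

```python
from collections import defaultdict

# ctx maps (row, col) -> list of "name:value" style strings, as in Styler
ctx = defaultdict(list)
ctx[(0, 0)] = ["color:red", ""]

def render_cell(ctx, r, c, cell_ids=False):
    props = []
    if cell_ids or (r, c) in ctx:
        for x in ctx[r, c]:
            # have to handle empty styles like ['']
            if x.count(":"):
                props.append(tuple(x.split(":")))
            else:
                props.append(("", ""))
    return props

# styled cell: "name:value" strings parse into pairs, '' becomes ('', '')
assert render_cell(ctx, 0, 0) == [("color", "red"), ("", "")]

# unstyled cell: the membership test runs before any ctx[r, c] access,
# so the defaultdict gains no entry and a second render sees the same ctx
before = len(ctx)
assert render_cell(ctx, 5, 5) == []
assert len(ctx) == before
```

This is why the regression only surfaced on multiple renders: the first render silently grew `ctx`, and every render after it behaved as if all cells were styled.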
CI: troubleshoot ResourceWarnings | diff --git a/pandas/tests/io/conftest.py b/pandas/tests/io/conftest.py
index fcee25c258efa..aef2d5bdb5d51 100644
--- a/pandas/tests/io/conftest.py
+++ b/pandas/tests/io/conftest.py
@@ -2,6 +2,8 @@
import pytest
+import pandas.util._test_decorators as td
+
import pandas._testing as tm
from pandas.io.parsers import read_csv
@@ -88,3 +90,25 @@ def add_tips_files(bucket_name):
yield conn
finally:
s3.stop()
+
+
+@pytest.fixture(autouse=True)
+def check_for_file_leaks():
+ """
+ Fixture to run around every test to ensure that we are not leaking files.
+
+ See also
+ --------
+ _test_decorators.check_file_leaks
+ """
+ # GH#30162
+ psutil = td.safe_import("psutil")
+ if not psutil:
+ yield
+
+ else:
+ proc = psutil.Process()
+ flist = proc.open_files()
+ yield
+ flist2 = proc.open_files()
+ assert flist == flist2
diff --git a/pandas/tests/io/excel/conftest.py b/pandas/tests/io/excel/conftest.py
index 0455e0d61ad97..15ff52d5bea48 100644
--- a/pandas/tests/io/excel/conftest.py
+++ b/pandas/tests/io/excel/conftest.py
@@ -1,7 +1,5 @@
import pytest
-import pandas.util._test_decorators as td
-
import pandas._testing as tm
from pandas.io.parsers import read_csv
@@ -41,25 +39,3 @@ def read_ext(request):
Valid extensions for reading Excel files.
"""
return request.param
-
-
-@pytest.fixture(autouse=True)
-def check_for_file_leaks():
- """
- Fixture to run around every test to ensure that we are not leaking files.
-
- See also
- --------
- _test_decorators.check_file_leaks
- """
- # GH#30162
- psutil = td.safe_import("psutil")
- if not psutil:
- yield
-
- else:
- proc = psutil.Process()
- flist = proc.open_files()
- yield
- flist2 = proc.open_files()
- assert flist == flist2
diff --git a/pandas/tests/io/excel/test_odf.py b/pandas/tests/io/excel/test_odf.py
index d6c6399f082c6..e9cd2c9060229 100644
--- a/pandas/tests/io/excel/test_odf.py
+++ b/pandas/tests/io/excel/test_odf.py
@@ -9,6 +9,12 @@
pytest.importorskip("odf")
+def setup_module(module):
+ import pandas.util._test_decorators as td
+
+ yield from td.check_file_leaks()
+
+
@pytest.fixture(autouse=True)
def cd_and_set_engine(monkeypatch, datapath):
func = functools.partial(pd.read_excel, engine="odf")
diff --git a/pandas/tests/io/excel/test_odswriter.py b/pandas/tests/io/excel/test_odswriter.py
index b50c641ebf0c0..c429c9d0f8c05 100644
--- a/pandas/tests/io/excel/test_odswriter.py
+++ b/pandas/tests/io/excel/test_odswriter.py
@@ -9,6 +9,12 @@
pytestmark = pytest.mark.parametrize("ext", [".ods"])
+def setup_module(module):
+ import pandas.util._test_decorators as td
+
+ yield from td.check_file_leaks()
+
+
def test_write_append_mode_raises(ext):
msg = "Append mode is not supported with odf!"
diff --git a/pandas/tests/io/excel/test_openpyxl.py b/pandas/tests/io/excel/test_openpyxl.py
index 5f8d58ea1f105..9b8f46f56ddd7 100644
--- a/pandas/tests/io/excel/test_openpyxl.py
+++ b/pandas/tests/io/excel/test_openpyxl.py
@@ -14,6 +14,12 @@
pytestmark = pytest.mark.parametrize("ext", [".xlsx"])
+def setup_module(module):
+ import pandas.util._test_decorators as td
+
+ yield from td.check_file_leaks()
+
+
def test_to_excel_styleconverter(ext):
from openpyxl import styles
diff --git a/pandas/tests/io/excel/test_readers.py b/pandas/tests/io/excel/test_readers.py
index b610c5ec3a838..9c545c829b951 100644
--- a/pandas/tests/io/excel/test_readers.py
+++ b/pandas/tests/io/excel/test_readers.py
@@ -15,6 +15,12 @@
import pandas._testing as tm
+def setup_module(module):
+ import pandas.util._test_decorators as td
+
+ yield from td.check_file_leaks()
+
+
@contextlib.contextmanager
def ignore_xlrd_time_clock_warning():
"""
@@ -128,6 +134,7 @@ def cd_and_set_engine(self, engine, datapath, monkeypatch):
monkeypatch.chdir(datapath("io", "data", "excel"))
monkeypatch.setattr(pd, "read_excel", func)
+ @td.check_file_leaks
def test_usecols_int(self, read_ext, df_ref):
df_ref = df_ref.reindex(columns=["A", "B", "C"])
@@ -150,6 +157,7 @@ def test_usecols_int(self, read_ext, df_ref):
usecols=3,
)
+ @td.check_file_leaks
def test_usecols_list(self, read_ext, df_ref):
if pd.read_excel.keywords["engine"] == "pyxlsb":
pytest.xfail("Sheets containing datetimes not supported by pyxlsb")
@@ -170,6 +178,7 @@ def test_usecols_list(self, read_ext, df_ref):
tm.assert_frame_equal(df1, df_ref, check_names=False)
tm.assert_frame_equal(df2, df_ref, check_names=False)
+ @td.check_file_leaks
def test_usecols_str(self, read_ext, df_ref):
if pd.read_excel.keywords["engine"] == "pyxlsb":
pytest.xfail("Sheets containing datetimes not supported by pyxlsb")
@@ -219,6 +228,7 @@ def test_usecols_str(self, read_ext, df_ref):
tm.assert_frame_equal(df2, df1, check_names=False)
tm.assert_frame_equal(df3, df1, check_names=False)
+ @td.check_file_leaks
@pytest.mark.parametrize(
"usecols", [[0, 1, 3], [0, 3, 1], [1, 0, 3], [1, 3, 0], [3, 0, 1], [3, 1, 0]]
)
@@ -232,6 +242,7 @@ def test_usecols_diff_positional_int_columns_order(self, read_ext, usecols, df_r
)
tm.assert_frame_equal(result, expected, check_names=False)
+ @td.check_file_leaks
@pytest.mark.parametrize("usecols", [["B", "D"], ["D", "B"]])
def test_usecols_diff_positional_str_columns_order(self, read_ext, usecols, df_ref):
expected = df_ref[["B", "D"]]
@@ -240,6 +251,7 @@ def test_usecols_diff_positional_str_columns_order(self, read_ext, usecols, df_r
result = pd.read_excel("test1" + read_ext, sheet_name="Sheet1", usecols=usecols)
tm.assert_frame_equal(result, expected, check_names=False)
+ @td.check_file_leaks
def test_read_excel_without_slicing(self, read_ext, df_ref):
if pd.read_excel.keywords["engine"] == "pyxlsb":
pytest.xfail("Sheets containing datetimes not supported by pyxlsb")
@@ -248,6 +260,7 @@ def test_read_excel_without_slicing(self, read_ext, df_ref):
result = pd.read_excel("test1" + read_ext, sheet_name="Sheet1", index_col=0)
tm.assert_frame_equal(result, expected, check_names=False)
+ @td.check_file_leaks
def test_usecols_excel_range_str(self, read_ext, df_ref):
if pd.read_excel.keywords["engine"] == "pyxlsb":
pytest.xfail("Sheets containing datetimes not supported by pyxlsb")
@@ -258,12 +271,14 @@ def test_usecols_excel_range_str(self, read_ext, df_ref):
)
tm.assert_frame_equal(result, expected, check_names=False)
+ @td.check_file_leaks
def test_usecols_excel_range_str_invalid(self, read_ext):
msg = "Invalid column name: E1"
with pytest.raises(ValueError, match=msg):
pd.read_excel("test1" + read_ext, sheet_name="Sheet1", usecols="D:E1")
+ @td.check_file_leaks
def test_index_col_label_error(self, read_ext):
msg = "list indices must be integers.*, not str"
@@ -275,6 +290,7 @@ def test_index_col_label_error(self, read_ext):
usecols=["A", "C"],
)
+ @td.check_file_leaks
def test_index_col_empty(self, read_ext):
# see gh-9208
result = pd.read_excel(
@@ -286,6 +302,7 @@ def test_index_col_empty(self, read_ext):
)
tm.assert_frame_equal(result, expected)
+ @td.check_file_leaks
@pytest.mark.parametrize("index_col", [None, 2])
def test_index_col_with_unnamed(self, read_ext, index_col):
# see gh-18792
@@ -300,6 +317,7 @@ def test_index_col_with_unnamed(self, read_ext, index_col):
tm.assert_frame_equal(result, expected)
+ @td.check_file_leaks
def test_usecols_pass_non_existent_column(self, read_ext):
msg = (
"Usecols do not match columns, "
@@ -309,6 +327,7 @@ def test_usecols_pass_non_existent_column(self, read_ext):
with pytest.raises(ValueError, match=msg):
pd.read_excel("test1" + read_ext, usecols=["E"])
+ @td.check_file_leaks
def test_usecols_wrong_type(self, read_ext):
msg = (
"'usecols' must either be list-like of "
@@ -318,12 +337,14 @@ def test_usecols_wrong_type(self, read_ext):
with pytest.raises(ValueError, match=msg):
pd.read_excel("test1" + read_ext, usecols=["E1", 0])
+ @td.check_file_leaks
def test_excel_stop_iterator(self, read_ext):
parsed = pd.read_excel("test2" + read_ext, sheet_name="Sheet1")
expected = DataFrame([["aaaa", "bbbbb"]], columns=["Test", "Test1"])
tm.assert_frame_equal(parsed, expected)
+ @td.check_file_leaks
def test_excel_cell_error_na(self, read_ext):
if pd.read_excel.keywords["engine"] == "pyxlsb":
pytest.xfail("Sheets containing datetimes not supported by pyxlsb")
@@ -332,6 +353,7 @@ def test_excel_cell_error_na(self, read_ext):
expected = DataFrame([[np.nan]], columns=["Test"])
tm.assert_frame_equal(parsed, expected)
+ @td.check_file_leaks
def test_excel_table(self, read_ext, df_ref):
if pd.read_excel.keywords["engine"] == "pyxlsb":
pytest.xfail("Sheets containing datetimes not supported by pyxlsb")
@@ -349,6 +371,7 @@ def test_excel_table(self, read_ext, df_ref):
)
tm.assert_frame_equal(df3, df1.iloc[:-1])
+ @td.check_file_leaks
def test_reader_special_dtypes(self, read_ext):
if pd.read_excel.keywords["engine"] == "pyxlsb":
pytest.xfail("Sheets containing datetimes not supported by pyxlsb")
@@ -411,6 +434,7 @@ def test_reader_special_dtypes(self, read_ext):
tm.assert_frame_equal(actual, no_convert_float)
# GH8212 - support for converters and missing values
+ @td.check_file_leaks
def test_reader_converters(self, read_ext):
basename = "test_converters"
@@ -438,6 +462,7 @@ def test_reader_converters(self, read_ext):
)
tm.assert_frame_equal(actual, expected)
+ @td.check_file_leaks
def test_reader_dtype(self, read_ext):
# GH 8212
basename = "testdtype"
@@ -467,6 +492,7 @@ def test_reader_dtype(self, read_ext):
with pytest.raises(ValueError, match=msg):
pd.read_excel(basename + read_ext, dtype={"d": "int64"})
+ @td.check_file_leaks
@pytest.mark.parametrize(
"dtype,expected",
[
@@ -501,6 +527,7 @@ def test_reader_dtype_str(self, read_ext, dtype, expected):
actual = pd.read_excel(basename + read_ext, dtype=dtype)
tm.assert_frame_equal(actual, expected)
+ @td.check_file_leaks
def test_reader_spaces(self, read_ext):
# see gh-32207
basename = "test_spaces"
@@ -519,6 +546,7 @@ def test_reader_spaces(self, read_ext):
)
tm.assert_frame_equal(actual, expected)
+ @td.check_file_leaks
def test_reading_all_sheets(self, read_ext):
# Test reading all sheet names by setting sheet_name to None,
# Ensure a dict is returned.
@@ -532,6 +560,7 @@ def test_reading_all_sheets(self, read_ext):
# Ensure sheet order is preserved
assert expected_keys == list(dfs.keys())
+ @td.check_file_leaks
def test_reading_multiple_specific_sheets(self, read_ext):
# Test reading specific sheet names by specifying a mixed list
# of integers and strings, and confirm that duplicated sheet
@@ -546,6 +575,7 @@ def test_reading_multiple_specific_sheets(self, read_ext):
tm.assert_contains_all(expected_keys, dfs.keys())
assert len(expected_keys) == len(dfs.keys())
+ @td.check_file_leaks
def test_reading_all_sheets_with_blank(self, read_ext):
# Test reading all sheet names by setting sheet_name to None,
# In the case where some sheets are blank.
@@ -556,15 +586,18 @@ def test_reading_all_sheets_with_blank(self, read_ext):
tm.assert_contains_all(expected_keys, dfs.keys())
# GH6403
+ @td.check_file_leaks
def test_read_excel_blank(self, read_ext):
actual = pd.read_excel("blank" + read_ext, sheet_name="Sheet1")
tm.assert_frame_equal(actual, DataFrame())
+ @td.check_file_leaks
def test_read_excel_blank_with_header(self, read_ext):
expected = DataFrame(columns=["col_1", "col_2"])
actual = pd.read_excel("blank_with_header" + read_ext, sheet_name="Sheet1")
tm.assert_frame_equal(actual, expected)
+ @td.check_file_leaks
def test_date_conversion_overflow(self, read_ext):
# GH 10001 : pandas.ExcelFile ignore parse_dates=False
if pd.read_excel.keywords["engine"] == "pyxlsb":
@@ -585,6 +618,7 @@ def test_date_conversion_overflow(self, read_ext):
result = pd.read_excel("testdateoverflow" + read_ext)
tm.assert_frame_equal(result, expected)
+ @td.check_file_leaks
def test_sheet_name(self, read_ext, df_ref):
if pd.read_excel.keywords["engine"] == "pyxlsb":
pytest.xfail("Sheets containing datetimes not supported by pyxlsb")
@@ -603,6 +637,7 @@ def test_sheet_name(self, read_ext, df_ref):
tm.assert_frame_equal(df1, df_ref, check_names=False)
tm.assert_frame_equal(df2, df_ref, check_names=False)
+ @td.check_file_leaks
def test_excel_read_buffer(self, read_ext):
pth = "test1" + read_ext
@@ -611,12 +646,14 @@ def test_excel_read_buffer(self, read_ext):
actual = pd.read_excel(f, sheet_name="Sheet1", index_col=0)
tm.assert_frame_equal(expected, actual)
+ @td.check_file_leaks
def test_bad_engine_raises(self, read_ext):
bad_engine = "foo"
with pytest.raises(ValueError, match="Unknown engine: foo"):
pd.read_excel("", engine=bad_engine)
@tm.network
+ @td.check_file_leaks
def test_read_from_http_url(self, read_ext):
url = (
"https://raw.githubusercontent.com/pandas-dev/pandas/master/"
@@ -627,6 +664,7 @@ def test_read_from_http_url(self, read_ext):
tm.assert_frame_equal(url_table, local_table)
@td.skip_if_not_us_locale
+ @td.check_file_leaks
def test_read_from_s3_url(self, read_ext, s3_resource):
# Bucket "pandas-test" created in tests/io/conftest.py
with open("test1" + read_ext, "rb") as f:
@@ -640,6 +678,7 @@ def test_read_from_s3_url(self, read_ext, s3_resource):
@pytest.mark.slow
# ignore warning from old xlrd
@pytest.mark.filterwarnings("ignore:This metho:PendingDeprecationWarning")
+ @td.check_file_leaks
def test_read_from_file_url(self, read_ext, datapath):
# FILE
@@ -657,6 +696,7 @@ def test_read_from_file_url(self, read_ext, datapath):
tm.assert_frame_equal(url_table, local_table)
+ @td.check_file_leaks
def test_read_from_pathlib_path(self, read_ext):
# GH12655
@@ -696,6 +736,7 @@ def test_close_from_py_localpath(self, read_ext):
# should not throw an exception because the passed file was closed
f.read()
+ @td.check_file_leaks
def test_reader_seconds(self, read_ext):
if pd.read_excel.keywords["engine"] == "pyxlsb":
pytest.xfail("Sheets containing datetimes not supported by pyxlsb")
@@ -725,6 +766,7 @@ def test_reader_seconds(self, read_ext):
actual = pd.read_excel("times_1904" + read_ext, sheet_name="Sheet1")
tm.assert_frame_equal(actual, expected)
+ @td.check_file_leaks
def test_read_excel_multiindex(self, read_ext):
# see gh-4679
if pd.read_excel.keywords["engine"] == "pyxlsb":
@@ -807,6 +849,7 @@ def test_read_excel_multiindex(self, read_ext):
)
tm.assert_frame_equal(actual, expected)
+ @td.check_file_leaks
def test_read_excel_multiindex_header_only(self, read_ext):
# see gh-11733.
#
@@ -818,6 +861,7 @@ def test_read_excel_multiindex_header_only(self, read_ext):
expected = DataFrame([[1, 2, 3, 4]] * 2, columns=exp_columns)
tm.assert_frame_equal(result, expected)
+ @td.check_file_leaks
def test_excel_old_index_format(self, read_ext):
# see gh-4679
filename = "test_index_name_pre17" + read_ext
@@ -890,6 +934,7 @@ def test_excel_old_index_format(self, read_ext):
actual = pd.read_excel(filename, sheet_name="multi_no_names", index_col=[0, 1])
tm.assert_frame_equal(actual, expected, check_names=False)
+ @td.check_file_leaks
def test_read_excel_bool_header_arg(self, read_ext):
# GH 6114
msg = "Passing a bool to header is invalid"
@@ -897,6 +942,7 @@ def test_read_excel_bool_header_arg(self, read_ext):
with pytest.raises(TypeError, match=msg):
pd.read_excel("test1" + read_ext, header=arg)
+ @td.check_file_leaks
def test_read_excel_skiprows_list(self, read_ext):
# GH 4903
if pd.read_excel.keywords["engine"] == "pyxlsb":
@@ -923,6 +969,7 @@ def test_read_excel_skiprows_list(self, read_ext):
)
tm.assert_frame_equal(actual, expected)
+ @td.check_file_leaks
def test_read_excel_nrows(self, read_ext):
# GH 16645
num_rows_to_pull = 5
@@ -931,6 +978,7 @@ def test_read_excel_nrows(self, read_ext):
expected = expected[:num_rows_to_pull]
tm.assert_frame_equal(actual, expected)
+ @td.check_file_leaks
def test_read_excel_nrows_greater_than_nrows_in_file(self, read_ext):
# GH 16645
expected = pd.read_excel("test1" + read_ext)
@@ -939,12 +987,14 @@ def test_read_excel_nrows_greater_than_nrows_in_file(self, read_ext):
actual = pd.read_excel("test1" + read_ext, nrows=num_rows_to_pull)
tm.assert_frame_equal(actual, expected)
+ @td.check_file_leaks
def test_read_excel_nrows_non_integer_parameter(self, read_ext):
# GH 16645
msg = "'nrows' must be an integer >=0"
with pytest.raises(ValueError, match=msg):
pd.read_excel("test1" + read_ext, nrows="5")
+ @td.check_file_leaks
def test_read_excel_squeeze(self, read_ext):
# GH 12157
f = "test_squeeze" + read_ext
@@ -962,12 +1012,14 @@ def test_read_excel_squeeze(self, read_ext):
expected = pd.Series([1, 2, 3], name="a")
tm.assert_series_equal(actual, expected)
+ @td.check_file_leaks
def test_deprecated_kwargs(self, read_ext):
with tm.assert_produces_warning(FutureWarning, raise_on_extra_warnings=False):
pd.read_excel("test1" + read_ext, "Sheet1", 0)
pd.read_excel("test1" + read_ext)
+ @td.check_file_leaks
def test_no_header_with_list_index_col(self, read_ext):
# GH 31783
file_name = "testmultiindex" + read_ext
@@ -992,6 +1044,7 @@ def cd_and_set_engine(self, engine, datapath, monkeypatch):
monkeypatch.chdir(datapath("io", "data", "excel"))
monkeypatch.setattr(pd, "ExcelFile", func)
+ @td.check_file_leaks
def test_excel_passes_na(self, read_ext):
with pd.ExcelFile("test4" + read_ext) as excel:
parsed = pd.read_excel(
@@ -1030,6 +1083,7 @@ def test_excel_passes_na(self, read_ext):
)
tm.assert_frame_equal(parsed, expected)
+ @td.check_file_leaks
@pytest.mark.parametrize("na_filter", [None, True, False])
def test_excel_passes_na_filter(self, read_ext, na_filter):
# gh-25453
@@ -1055,6 +1109,7 @@ def test_excel_passes_na_filter(self, read_ext, na_filter):
expected = DataFrame(expected, columns=["Test"])
tm.assert_frame_equal(parsed, expected)
+ @td.check_file_leaks
def test_excel_table_sheet_by_index(self, read_ext, df_ref):
# For some reason pd.read_excel has no attribute 'keywords' here.
# Skipping based on read_ext instead.
@@ -1082,6 +1137,7 @@ def test_excel_table_sheet_by_index(self, read_ext, df_ref):
tm.assert_frame_equal(df3, df1.iloc[:-1])
+ @td.check_file_leaks
def test_sheet_name(self, read_ext, df_ref):
# For some reason pd.read_excel has no attribute 'keywords' here.
# Skipping based on read_ext instead.
@@ -1100,6 +1156,7 @@ def test_sheet_name(self, read_ext, df_ref):
tm.assert_frame_equal(df1_parse, df_ref, check_names=False)
tm.assert_frame_equal(df2_parse, df_ref, check_names=False)
+ @td.check_file_leaks
def test_excel_read_buffer(self, engine, read_ext):
pth = "test1" + read_ext
expected = pd.read_excel(pth, sheet_name="Sheet1", index_col=0, engine=engine)
@@ -1110,6 +1167,7 @@ def test_excel_read_buffer(self, engine, read_ext):
tm.assert_frame_equal(expected, actual)
+ @td.check_file_leaks
def test_reader_closes_file(self, engine, read_ext):
with open("test1" + read_ext, "rb") as f:
with pd.ExcelFile(f) as xlsx:
@@ -1118,6 +1176,7 @@ def test_reader_closes_file(self, engine, read_ext):
assert f.closed
+ @td.check_file_leaks
def test_conflicting_excel_engines(self, read_ext):
# GH 26566
msg = "Engine should not be specified when passing an ExcelFile"
@@ -1126,6 +1185,7 @@ def test_conflicting_excel_engines(self, read_ext):
with pytest.raises(ValueError, match=msg):
pd.read_excel(xl, engine="foo")
+ @td.check_file_leaks
def test_excel_read_binary(self, engine, read_ext):
# GH 15914
expected = pd.read_excel("test1" + read_ext, engine=engine)
@@ -1136,6 +1196,7 @@ def test_excel_read_binary(self, engine, read_ext):
actual = pd.read_excel(data, engine=engine)
tm.assert_frame_equal(expected, actual)
+ @td.check_file_leaks
def test_excel_high_surrogate(self, engine):
# GH 23809
expected = pd.DataFrame(["\udc88"], columns=["Column1"])
@@ -1144,6 +1205,7 @@ def test_excel_high_surrogate(self, engine):
actual = pd.read_excel("high_surrogate.xlsx")
tm.assert_frame_equal(expected, actual)
+ @td.check_file_leaks
@pytest.mark.parametrize("filename", ["df_empty.xlsx", "df_equals.xlsx"])
def test_header_with_index_col(self, engine, filename):
# GH 33476
@@ -1157,6 +1219,7 @@ def test_header_with_index_col(self, engine, filename):
)
tm.assert_frame_equal(expected, result)
+ @td.check_file_leaks
def test_read_datetime_multiindex(self, engine, read_ext):
# GH 34748
if engine == "pyxlsb":
diff --git a/pandas/tests/io/excel/test_style.py b/pandas/tests/io/excel/test_style.py
index 936fc175a493b..f03abc2a51629 100644
--- a/pandas/tests/io/excel/test_style.py
+++ b/pandas/tests/io/excel/test_style.py
@@ -8,6 +8,12 @@
from pandas.io.formats.excel import ExcelFormatter
+def setup_module(module):
+ import pandas.util._test_decorators as td
+
+ yield from td.check_file_leaks()
+
+
@pytest.mark.parametrize(
"engine",
[
diff --git a/pandas/tests/io/excel/test_writers.py b/pandas/tests/io/excel/test_writers.py
index e3ee53b63e102..34d04ccfce253 100644
--- a/pandas/tests/io/excel/test_writers.py
+++ b/pandas/tests/io/excel/test_writers.py
@@ -22,6 +22,12 @@
)
+def setup_module(module):
+ import pandas.util._test_decorators as td
+
+ yield from td.check_file_leaks()
+
+
@pytest.fixture
def path(ext):
"""
diff --git a/pandas/tests/io/excel/test_xlrd.py b/pandas/tests/io/excel/test_xlrd.py
index 1c9c514b20f46..dc3fd860e09a9 100644
--- a/pandas/tests/io/excel/test_xlrd.py
+++ b/pandas/tests/io/excel/test_xlrd.py
@@ -9,6 +9,12 @@
xlwt = pytest.importorskip("xlwt")
+def setup_module(module):
+ import pandas.util._test_decorators as td
+
+ yield from td.check_file_leaks()
+
+
@pytest.fixture(autouse=True)
def skip_ods_and_xlsb_files(read_ext):
if read_ext == ".ods":
diff --git a/pandas/tests/io/excel/test_xlsxwriter.py b/pandas/tests/io/excel/test_xlsxwriter.py
index b6f791434a92b..a05eaa5a22dd8 100644
--- a/pandas/tests/io/excel/test_xlsxwriter.py
+++ b/pandas/tests/io/excel/test_xlsxwriter.py
@@ -12,6 +12,12 @@
pytestmark = pytest.mark.parametrize("ext", [".xlsx"])
+def setup_module(module):
+ import pandas.util._test_decorators as td
+
+ yield from td.check_file_leaks()
+
+
def test_column_format(ext):
# Test that column formats are applied to cells. Test for issue #9167.
# Applicable to xlsxwriter only.
diff --git a/pandas/tests/io/excel/test_xlwt.py b/pandas/tests/io/excel/test_xlwt.py
index a2d8b9fce9767..8cbe34b8209b8 100644
--- a/pandas/tests/io/excel/test_xlwt.py
+++ b/pandas/tests/io/excel/test_xlwt.py
@@ -12,6 +12,12 @@
pytestmark = pytest.mark.parametrize("ext,", [".xls"])
+def setup_module(module):
+ import pandas.util._test_decorators as td
+
+ yield from td.check_file_leaks()
+
+
def test_excel_raise_error_on_multiindex_columns_and_no_index(ext):
# MultiIndex as columns is not yet implemented 9794
cols = MultiIndex.from_tuples(
diff --git a/pandas/tests/io/formats/test_console.py b/pandas/tests/io/formats/test_console.py
index b57a2393461a2..476fe9a1777f6 100644
--- a/pandas/tests/io/formats/test_console.py
+++ b/pandas/tests/io/formats/test_console.py
@@ -5,6 +5,12 @@
from pandas._config import detect_console_encoding
+def setup_module(module):
+ import pandas.util._test_decorators as td
+
+ yield from td.check_file_leaks()
+
+
class MockEncoding: # TODO(py27): replace with mock
"""
Used to add a side effect when accessing the 'encoding' property. If the
diff --git a/pandas/tests/io/formats/test_css.py b/pandas/tests/io/formats/test_css.py
index 9383f86e335fa..8b88655e5fe64 100644
--- a/pandas/tests/io/formats/test_css.py
+++ b/pandas/tests/io/formats/test_css.py
@@ -5,6 +5,12 @@
from pandas.io.formats.css import CSSResolver, CSSWarning
+def setup_module(module):
+ import pandas.util._test_decorators as td
+
+ yield from td.check_file_leaks()
+
+
def assert_resolves(css, props, inherited=None):
resolve = CSSResolver()
actual = resolve(css, inherited=inherited)
diff --git a/pandas/tests/io/formats/test_eng_formatting.py b/pandas/tests/io/formats/test_eng_formatting.py
index 6801316ada8a3..f03522fad187b 100644
--- a/pandas/tests/io/formats/test_eng_formatting.py
+++ b/pandas/tests/io/formats/test_eng_formatting.py
@@ -7,6 +7,12 @@
import pandas.io.formats.format as fmt
+def setup_module(module):
+ import pandas.util._test_decorators as td
+
+ yield from td.check_file_leaks()
+
+
class TestEngFormatter:
def test_eng_float_formatter(self):
df = DataFrame({"A": [1.41, 141.0, 14100, 1410000.0]})
diff --git a/pandas/tests/io/formats/test_format.py b/pandas/tests/io/formats/test_format.py
index 84805d06df4a8..719e357b7ad37 100644
--- a/pandas/tests/io/formats/test_format.py
+++ b/pandas/tests/io/formats/test_format.py
@@ -43,6 +43,12 @@
use_32bit_repr = is_platform_windows() or is_platform_32bit()
+def setup_module(module):
+ import pandas.util._test_decorators as td
+
+ yield from td.check_file_leaks()
+
+
@pytest.fixture(params=["string", "pathlike", "buffer"])
def filepath_or_buffer_id(request):
"""
diff --git a/pandas/tests/io/formats/test_info.py b/pandas/tests/io/formats/test_info.py
index 877bd1650ae60..70a8808ba4f45 100644
--- a/pandas/tests/io/formats/test_info.py
+++ b/pandas/tests/io/formats/test_info.py
@@ -22,6 +22,12 @@
import pandas._testing as tm
+def setup_module(module):
+ import pandas.util._test_decorators as td
+
+ yield from td.check_file_leaks()
+
+
@pytest.fixture
def datetime_frame():
"""
diff --git a/pandas/tests/io/formats/test_printing.py b/pandas/tests/io/formats/test_printing.py
index f0d5ef19c4468..6a957e2891ac9 100644
--- a/pandas/tests/io/formats/test_printing.py
+++ b/pandas/tests/io/formats/test_printing.py
@@ -9,6 +9,12 @@
import pandas.io.formats.printing as printing
+def setup_module(module):
+ import pandas.util._test_decorators as td
+
+ yield from td.check_file_leaks()
+
+
def test_adjoin():
data = [["a", "b", "c"], ["dd", "ee", "ff"], ["ggg", "hhh", "iii"]]
expected = "a dd ggg\nb ee hhh\nc ff iii"
diff --git a/pandas/tests/io/formats/test_style.py b/pandas/tests/io/formats/test_style.py
index 3ef5157655e78..61307ef092a65 100644
--- a/pandas/tests/io/formats/test_style.py
+++ b/pandas/tests/io/formats/test_style.py
@@ -15,6 +15,12 @@
from pandas.io.formats.style import Styler, _get_level_lengths # noqa # isort:skip
+def setup_module(module):
+ import pandas.util._test_decorators as td
+
+ yield from td.check_file_leaks()
+
+
class TestStyler:
def setup_method(self, method):
np.random.seed(24)
diff --git a/pandas/tests/io/formats/test_to_csv.py b/pandas/tests/io/formats/test_to_csv.py
index 753b8b6eda9c5..887162724553f 100644
--- a/pandas/tests/io/formats/test_to_csv.py
+++ b/pandas/tests/io/formats/test_to_csv.py
@@ -5,16 +5,27 @@
import numpy as np
import pytest
+import pandas.util._test_decorators as td
+
import pandas as pd
from pandas import DataFrame, compat
import pandas._testing as tm
+def setup_module(module):
+ import pandas.util._test_decorators as td
+
+ yield from td.check_file_leaks()
+
+
class TestToCSV:
+
+ # TODO: we can remove this xfail now that 3.6 is dropped?
@pytest.mark.xfail(
(3, 6, 5) > sys.version_info,
reason=("Python csv library bug (see https://bugs.python.org/issue32255)"),
)
+ @td.check_file_leaks
def test_to_csv_with_single_column(self):
# see gh-18676, https://bugs.python.org/issue32255
#
@@ -43,6 +54,7 @@ def test_to_csv_with_single_column(self):
with open(path, "r") as f:
assert f.read() == expected2
+ @td.check_file_leaks
def test_to_csv_defualt_encoding(self):
# GH17097
df = DataFrame({"col": ["AAAAA", "ÄÄÄÄÄ", "ßßßßß", "聞聞聞聞聞"]})
@@ -52,6 +64,7 @@ def test_to_csv_defualt_encoding(self):
df.to_csv(path)
tm.assert_frame_equal(pd.read_csv(path, index_col=0), df)
+ @td.check_file_leaks
def test_to_csv_quotechar(self):
df = DataFrame({"col": [1, 2]})
expected = """\
@@ -80,6 +93,7 @@ def test_to_csv_quotechar(self):
with pytest.raises(TypeError, match="quotechar"):
df.to_csv(path, quoting=1, quotechar=None)
+ @td.check_file_leaks
def test_to_csv_doublequote(self):
df = DataFrame({"col": ['a"a', '"bb"']})
expected = '''\
@@ -99,6 +113,7 @@ def test_to_csv_doublequote(self):
with pytest.raises(Error, match="escapechar"):
df.to_csv(path, doublequote=False) # no escapechar set
+ @td.check_file_leaks
def test_to_csv_escapechar(self):
df = DataFrame({"col": ['a"a', '"bb"']})
expected = """\
@@ -124,12 +139,14 @@ def test_to_csv_escapechar(self):
with open(path, "r") as f:
assert f.read() == expected
+ @td.check_file_leaks
def test_csv_to_string(self):
df = DataFrame({"col": [1, 2]})
expected_rows = [",col", "0,1", "1,2"]
expected = tm.convert_rows_list_to_csv_str(expected_rows)
assert df.to_csv() == expected
+ @td.check_file_leaks
def test_to_csv_decimal(self):
# see gh-781
df = DataFrame({"col1": [1], "col2": ["a"], "col3": [10.1]})
@@ -166,6 +183,7 @@ def test_to_csv_decimal(self):
# same for a multi-index
assert df.set_index(["a", "b"]).to_csv(decimal="^") == expected
+ @td.check_file_leaks
def test_to_csv_float_format(self):
# testing if float_format is taken into account for the index
# GH 11553
@@ -178,6 +196,7 @@ def test_to_csv_float_format(self):
# same for a multi-index
assert df.set_index(["a", "b"]).to_csv(float_format="%.2f") == expected
+ @td.check_file_leaks
def test_to_csv_na_rep(self):
# see gh-11553
#
@@ -213,6 +232,7 @@ def test_to_csv_na_rep(self):
csv = pd.Series(["a", pd.NA, "c"], dtype="string").to_csv(na_rep="ZZZZZ")
assert expected == csv
+ @td.check_file_leaks
def test_to_csv_date_format(self):
# GH 10209
df_sec = DataFrame({"A": pd.date_range("20130101", periods=5, freq="s")})
@@ -276,6 +296,7 @@ def test_to_csv_date_format(self):
df_sec_grouped = df_sec.groupby([pd.Grouper(key="A", freq="1h"), "B"])
assert df_sec_grouped.mean().to_csv(date_format="%Y-%m-%d") == expected_ymd_sec
+ @td.check_file_leaks
def test_to_csv_multi_index(self):
# see gh-6618
df = DataFrame([1], columns=pd.MultiIndex.from_arrays([[1], [2]]))
@@ -312,6 +333,7 @@ def test_to_csv_multi_index(self):
exp = tm.convert_rows_list_to_csv_str(exp_rows)
assert df.to_csv(index=False) == exp
+ @td.check_file_leaks
@pytest.mark.parametrize(
"ind,expected",
[
@@ -335,6 +357,7 @@ def test_to_csv_single_level_multi_index(self, ind, expected, klass):
)
assert result == expected
+ @td.check_file_leaks
def test_to_csv_string_array_ascii(self):
# GH 10813
str_array = [{"names": ["foo", "bar"]}, {"names": ["baz", "qux"]}]
@@ -349,6 +372,7 @@ def test_to_csv_string_array_ascii(self):
with open(path, "r") as f:
assert f.read() == expected_ascii
+ @td.check_file_leaks
def test_to_csv_string_array_utf8(self):
# GH 10813
str_array = [{"names": ["foo", "bar"]}, {"names": ["baz", "qux"]}]
@@ -363,6 +387,7 @@ def test_to_csv_string_array_utf8(self):
with open(path, "r") as f:
assert f.read() == expected_utf8
+ @td.check_file_leaks
def test_to_csv_string_with_lf(self):
# GH 20353
data = {"int": [1, 2, 3], "str_lf": ["abc", "d\nef", "g\nh\n\ni"]}
@@ -397,6 +422,7 @@ def test_to_csv_string_with_lf(self):
with open(path, "rb") as f:
assert f.read() == expected_crlf
+ @td.check_file_leaks
def test_to_csv_string_with_crlf(self):
# GH 20353
data = {"int": [1, 2, 3], "str_crlf": ["abc", "d\r\nef", "g\r\nh\r\n\r\ni"]}
@@ -436,6 +462,7 @@ def test_to_csv_string_with_crlf(self):
with open(path, "rb") as f:
assert f.read() == expected_crlf
+ @td.check_file_leaks
def test_to_csv_stdout_file(self, capsys):
# GH 21561
df = pd.DataFrame(
@@ -450,6 +477,7 @@ def test_to_csv_stdout_file(self, capsys):
assert captured.out == expected_ascii
assert not sys.stdout.closed
+ @td.check_file_leaks
@pytest.mark.xfail(
compat.is_platform_windows(),
reason=(
@@ -474,6 +502,7 @@ def test_to_csv_write_to_open_file(self):
with open(path, "r") as f:
assert f.read() == expected
+ @td.check_file_leaks
def test_to_csv_write_to_open_file_with_newline_py3(self):
# see gh-21696
# see gh-20353
@@ -488,6 +517,7 @@ def test_to_csv_write_to_open_file_with_newline_py3(self):
with open(path, "rb") as f:
assert f.read() == bytes(expected, "utf-8")
+ @td.check_file_leaks
@pytest.mark.parametrize("to_infer", [True, False])
@pytest.mark.parametrize("read_infer", [True, False])
def test_to_csv_compression(self, compression_only, read_infer, to_infer):
@@ -517,6 +547,7 @@ def test_to_csv_compression(self, compression_only, read_infer, to_infer):
result = pd.read_csv(path, index_col=0, compression=read_compression)
tm.assert_frame_equal(result, df)
+ @td.check_file_leaks
def test_to_csv_compression_dict(self, compression_only):
# GH 26023
method = compression_only
@@ -528,6 +559,7 @@ def test_to_csv_compression_dict(self, compression_only):
read_df = pd.read_csv(path, index_col=0)
tm.assert_frame_equal(read_df, df)
+ @td.check_file_leaks
def test_to_csv_compression_dict_no_method_raises(self):
# GH 26023
df = DataFrame({"ABC": [1]})
@@ -538,6 +570,7 @@ def test_to_csv_compression_dict_no_method_raises(self):
with pytest.raises(ValueError, match=msg):
df.to_csv(path, compression=compression)
+ @td.check_file_leaks
@pytest.mark.parametrize("compression", ["zip", "infer"])
@pytest.mark.parametrize(
"archive_name", [None, "test_to_csv.csv", "test_to_csv.zip"]
@@ -558,6 +591,7 @@ def test_to_csv_zip_arguments(self, compression, archive_name):
archived_file = os.path.basename(zp.filelist[0].filename)
assert archived_file == expected_arcname
+ @td.check_file_leaks
@pytest.mark.parametrize("df_new_type", ["Int64"])
def test_to_csv_na_rep_long_string(self, df_new_type):
# see gh-25099
@@ -570,6 +604,7 @@ def test_to_csv_na_rep_long_string(self, df_new_type):
assert expected == result
+ @td.check_file_leaks
def test_to_csv_timedelta_precision(self):
# GH 6783
s = pd.Series([1, 1]).astype("timedelta64[ns]")
@@ -584,6 +619,7 @@ def test_to_csv_timedelta_precision(self):
expected = tm.convert_rows_list_to_csv_str(expected_rows)
assert result == expected
+ @td.check_file_leaks
def test_na_rep_truncated(self):
# https://github.com/pandas-dev/pandas/issues/31447
result = pd.Series(range(8, 12)).to_csv(na_rep="-")
@@ -598,6 +634,7 @@ def test_na_rep_truncated(self):
expected = tm.convert_rows_list_to_csv_str([",0", "0,1.1", "1,2.2"])
assert result == expected
+ @td.check_file_leaks
@pytest.mark.parametrize("errors", ["surrogatepass", "ignore", "replace"])
def test_to_csv_errors(self, errors):
# GH 22610
@@ -608,6 +645,7 @@ def test_to_csv_errors(self, errors):
# No use in reading back the data as it is not the same anymore
# due to the error handling
+ @td.check_file_leaks
def test_to_csv_binary_handle(self):
"""
Binary file objects should work if 'mode' contains a 'b'.
@@ -620,6 +658,7 @@ def test_to_csv_binary_handle(self):
df.to_csv(handle, mode="w+b")
tm.assert_frame_equal(df, pd.read_csv(path, index_col=0))
+ @td.check_file_leaks
def test_to_csv_encoding_binary_handle(self):
"""
Binary file objects should honor a specified encoding.
diff --git a/pandas/tests/io/formats/test_to_excel.py b/pandas/tests/io/formats/test_to_excel.py
index 8529a0fb33b67..f6ed034e7dfb9 100644
--- a/pandas/tests/io/formats/test_to_excel.py
+++ b/pandas/tests/io/formats/test_to_excel.py
@@ -11,6 +11,12 @@
from pandas.io.formats.excel import CSSToExcelConverter
+def setup_module(module):
+ import pandas.util._test_decorators as td
+
+ yield from td.check_file_leaks()
+
+
@pytest.mark.parametrize(
"css,expected",
[
diff --git a/pandas/tests/io/formats/test_to_html.py b/pandas/tests/io/formats/test_to_html.py
index e85fd398964d0..aa583f79490ff 100644
--- a/pandas/tests/io/formats/test_to_html.py
+++ b/pandas/tests/io/formats/test_to_html.py
@@ -22,6 +22,12 @@
)
+def setup_module(module):
+ import pandas.util._test_decorators as td
+
+ yield from td.check_file_leaks()
+
+
def expected_html(datapath, name):
"""
Read HTML file from formats data directory.
diff --git a/pandas/tests/io/formats/test_to_latex.py b/pandas/tests/io/formats/test_to_latex.py
index 93ad3739e59c7..505d346c6deed 100644
--- a/pandas/tests/io/formats/test_to_latex.py
+++ b/pandas/tests/io/formats/test_to_latex.py
@@ -8,6 +8,12 @@
import pandas._testing as tm
+def setup_module(module):
+ import pandas.util._test_decorators as td
+
+ yield from td.check_file_leaks()
+
+
class TestToLatex:
def test_to_latex_filename(self, float_frame):
with tm.ensure_clean("test.tex") as path:
diff --git a/pandas/tests/io/formats/test_to_markdown.py b/pandas/tests/io/formats/test_to_markdown.py
index 5223b313fef4f..96b0967e34948 100644
--- a/pandas/tests/io/formats/test_to_markdown.py
+++ b/pandas/tests/io/formats/test_to_markdown.py
@@ -8,6 +8,12 @@
pytest.importorskip("tabulate")
+def setup_module(module):
+ import pandas.util._test_decorators as td
+
+ yield from td.check_file_leaks()
+
+
def test_simple():
buf = StringIO()
df = pd.DataFrame([1, 2, 3])
diff --git a/pandas/tests/io/json/test_compression.py b/pandas/tests/io/json/test_compression.py
index 182c21ed1d416..68980c7e584d7 100644
--- a/pandas/tests/io/json/test_compression.py
+++ b/pandas/tests/io/json/test_compression.py
@@ -6,6 +6,12 @@
import pandas._testing as tm
+def setup_module(module):
+ import pandas.util._test_decorators as td
+
+ yield from td.check_file_leaks()
+
+
def test_compression_roundtrip(compression):
df = pd.DataFrame(
[[0.123456, 0.234567, 0.567567], [12.32112, 123123.2, 321321.2]],
diff --git a/pandas/tests/io/json/test_deprecated_kwargs.py b/pandas/tests/io/json/test_deprecated_kwargs.py
index 79245bc9d34a8..9acabdaf26023 100644
--- a/pandas/tests/io/json/test_deprecated_kwargs.py
+++ b/pandas/tests/io/json/test_deprecated_kwargs.py
@@ -8,6 +8,12 @@
from pandas.io.json import read_json
+def setup_module(module):
+ import pandas.util._test_decorators as td
+
+ yield from td.check_file_leaks()
+
+
def test_deprecated_kwargs():
df = pd.DataFrame({"A": [2, 4, 6], "B": [3, 6, 9]}, index=[0, 1, 2])
buf = df.to_json(orient="split")
diff --git a/pandas/tests/io/json/test_json_table_schema.py b/pandas/tests/io/json/test_json_table_schema.py
index 8f1ed193b100f..ce533401de641 100644
--- a/pandas/tests/io/json/test_json_table_schema.py
+++ b/pandas/tests/io/json/test_json_table_schema.py
@@ -21,6 +21,12 @@
)
+def setup_module(module):
+ import pandas.util._test_decorators as td
+
+ yield from td.check_file_leaks()
+
+
class TestBuildSchema:
def setup_method(self, method):
self.df = DataFrame(
diff --git a/pandas/tests/io/json/test_normalize.py b/pandas/tests/io/json/test_normalize.py
index 8d93fbcc063f4..0491cb44eebb2 100644
--- a/pandas/tests/io/json/test_normalize.py
+++ b/pandas/tests/io/json/test_normalize.py
@@ -9,6 +9,12 @@
from pandas.io.json._normalize import nested_to_record
+def setup_module(module):
+ import pandas.util._test_decorators as td
+
+ yield from td.check_file_leaks()
+
+
@pytest.fixture
def deep_nested():
# deeply nested data
diff --git a/pandas/tests/io/json/test_pandas.py b/pandas/tests/io/json/test_pandas.py
index 1280d0fd434d5..fe8f2d20975ec 100644
--- a/pandas/tests/io/json/test_pandas.py
+++ b/pandas/tests/io/json/test_pandas.py
@@ -27,6 +27,12 @@
_cat_frame["sort"] = np.arange(len(_cat_frame), dtype="int64")
+def setup_module(module):
+ import pandas.util._test_decorators as td
+
+ yield from td.check_file_leaks()
+
+
def assert_json_roundtrip_equal(result, expected, orient):
if orient == "records" or orient == "values":
expected = expected.reset_index(drop=True)
diff --git a/pandas/tests/io/json/test_readlines.py b/pandas/tests/io/json/test_readlines.py
index b475fa2c514ff..f94490fc58fd4 100644
--- a/pandas/tests/io/json/test_readlines.py
+++ b/pandas/tests/io/json/test_readlines.py
@@ -10,6 +10,12 @@
from pandas.io.json._json import JsonReader
+def setup_module(module):
+ import pandas.util._test_decorators as td
+
+ yield from td.check_file_leaks()
+
+
@pytest.fixture
def lines_json_df():
df = pd.DataFrame({"A": [1, 2, 3], "B": [4, 5, 6]})
diff --git a/pandas/tests/io/json/test_ujson.py b/pandas/tests/io/json/test_ujson.py
index f969cbca9f427..da5f9d64c2970 100644
--- a/pandas/tests/io/json/test_ujson.py
+++ b/pandas/tests/io/json/test_ujson.py
@@ -21,6 +21,12 @@
import pandas._testing as tm
+def setup_module(module):
+ import pandas.util._test_decorators as td
+
+ yield from td.check_file_leaks()
+
+
def _clean_dict(d):
"""
Sanitize dictionary for JSON by converting all keys to strings.
diff --git a/pandas/tests/io/parser/test_c_parser_only.py b/pandas/tests/io/parser/test_c_parser_only.py
index 50179fc1ec4b8..d702d72f8e71d 100644
--- a/pandas/tests/io/parser/test_c_parser_only.py
+++ b/pandas/tests/io/parser/test_c_parser_only.py
@@ -20,6 +20,12 @@
import pandas._testing as tm
+def setup_module(module):
+ import pandas.util._test_decorators as td
+
+ yield from td.check_file_leaks()
+
+
@pytest.mark.parametrize(
"malformed",
["1\r1\r1\r 1\r 1\r", "1\r1\r1\r 1\r 1\r11\r", "1\r1\r1\r 1\r 1\r11\r1\r"],
diff --git a/pandas/tests/io/parser/test_comment.py b/pandas/tests/io/parser/test_comment.py
index 60e32d7c27200..761e151e7107d 100644
--- a/pandas/tests/io/parser/test_comment.py
+++ b/pandas/tests/io/parser/test_comment.py
@@ -11,6 +11,12 @@
import pandas._testing as tm
+def setup_module(module):
+ import pandas.util._test_decorators as td
+
+ yield from td.check_file_leaks()
+
+
@pytest.mark.parametrize("na_values", [None, ["NaN"]])
def test_comment(all_parsers, na_values):
parser = all_parsers
diff --git a/pandas/tests/io/parser/test_common.py b/pandas/tests/io/parser/test_common.py
index c84c0048cc838..4358bdf178c1b 100644
--- a/pandas/tests/io/parser/test_common.py
+++ b/pandas/tests/io/parser/test_common.py
@@ -24,6 +24,12 @@
from pandas.io.parsers import CParserWrapper, TextFileReader, TextParser
+def setup_module(module):
+ import pandas.util._test_decorators as td
+
+ yield from td.check_file_leaks()
+
+
def test_override_set_noconvert_columns():
# see gh-17351
#
@@ -1138,7 +1144,7 @@ def test_parse_integers_above_fp_precision(all_parsers):
tm.assert_frame_equal(result, expected)
-@pytest.mark.skip("unreliable test #35214")
+@td.check_file_leaks
def test_chunks_have_consistent_numerical_type(all_parsers):
parser = all_parsers
integers = [str(i) for i in range(499999)]
diff --git a/pandas/tests/io/parser/test_compression.py b/pandas/tests/io/parser/test_compression.py
index b773664adda72..92472200cb055 100644
--- a/pandas/tests/io/parser/test_compression.py
+++ b/pandas/tests/io/parser/test_compression.py
@@ -12,6 +12,12 @@
import pandas._testing as tm
+def setup_module(module):
+ import pandas.util._test_decorators as td
+
+ yield from td.check_file_leaks()
+
+
@pytest.fixture(params=[True, False])
def buffer(request):
return request.param
diff --git a/pandas/tests/io/parser/test_converters.py b/pandas/tests/io/parser/test_converters.py
index 88b400d9a11df..e753c54a59863 100644
--- a/pandas/tests/io/parser/test_converters.py
+++ b/pandas/tests/io/parser/test_converters.py
@@ -13,6 +13,12 @@
import pandas._testing as tm
+def setup_module(module):
+ import pandas.util._test_decorators as td
+
+ yield from td.check_file_leaks()
+
+
def test_converters_type_must_be_dict(all_parsers):
parser = all_parsers
data = """index,A,B,C,D
diff --git a/pandas/tests/io/parser/test_dialect.py b/pandas/tests/io/parser/test_dialect.py
index cc65def0fd096..dce5c33200cfc 100644
--- a/pandas/tests/io/parser/test_dialect.py
+++ b/pandas/tests/io/parser/test_dialect.py
@@ -14,6 +14,12 @@
import pandas._testing as tm
+def setup_module(module):
+ import pandas.util._test_decorators as td
+
+ yield from td.check_file_leaks()
+
+
@pytest.fixture
def custom_dialect():
dialect_name = "weird"
diff --git a/pandas/tests/io/parser/test_dtypes.py b/pandas/tests/io/parser/test_dtypes.py
index 6ac310e3b2227..88e73743d34eb 100644
--- a/pandas/tests/io/parser/test_dtypes.py
+++ b/pandas/tests/io/parser/test_dtypes.py
@@ -9,6 +9,7 @@
import pytest
from pandas.errors import ParserWarning
+import pandas.util._test_decorators as td
from pandas.core.dtypes.dtypes import CategoricalDtype
@@ -17,6 +18,13 @@
import pandas._testing as tm
+def setup_module(module):
+ import pandas.util._test_decorators as td
+
+ yield from td.check_file_leaks()
+
+
+@td.check_file_leaks
@pytest.mark.parametrize("dtype", [str, object])
@pytest.mark.parametrize("check_orig", [True, False])
def test_dtype_all_columns(all_parsers, dtype, check_orig):
@@ -43,6 +51,7 @@ def test_dtype_all_columns(all_parsers, dtype, check_orig):
tm.assert_frame_equal(result, expected)
+@td.check_file_leaks
def test_dtype_all_columns_empty(all_parsers):
# see gh-12048
parser = all_parsers
@@ -52,6 +61,7 @@ def test_dtype_all_columns_empty(all_parsers):
tm.assert_frame_equal(result, expected)
+@td.check_file_leaks
def test_dtype_per_column(all_parsers):
parser = all_parsers
data = """\
@@ -70,6 +80,7 @@ def test_dtype_per_column(all_parsers):
tm.assert_frame_equal(result, expected)
+@td.check_file_leaks
def test_invalid_dtype_per_column(all_parsers):
parser = all_parsers
data = """\
@@ -83,6 +94,7 @@ def test_invalid_dtype_per_column(all_parsers):
parser.read_csv(StringIO(data), dtype={"one": "foo", 1: "int"})
+@td.check_file_leaks
@pytest.mark.parametrize(
"dtype",
[
@@ -109,6 +121,7 @@ def test_categorical_dtype(all_parsers, dtype):
tm.assert_frame_equal(actual, expected)
+@td.check_file_leaks
@pytest.mark.parametrize("dtype", [{"b": "category"}, {1: "category"}])
def test_categorical_dtype_single(all_parsers, dtype):
# see gh-10153
@@ -124,6 +137,7 @@ def test_categorical_dtype_single(all_parsers, dtype):
tm.assert_frame_equal(actual, expected)
+@td.check_file_leaks
def test_categorical_dtype_unsorted(all_parsers):
# see gh-10153
parser = all_parsers
@@ -142,6 +156,7 @@ def test_categorical_dtype_unsorted(all_parsers):
tm.assert_frame_equal(actual, expected)
+@td.check_file_leaks
def test_categorical_dtype_missing(all_parsers):
# see gh-10153
parser = all_parsers
@@ -160,6 +175,7 @@ def test_categorical_dtype_missing(all_parsers):
tm.assert_frame_equal(actual, expected)
+@td.check_file_leaks
@pytest.mark.slow
def test_categorical_dtype_high_cardinality_numeric(all_parsers):
# see gh-18186
@@ -174,6 +190,7 @@ def test_categorical_dtype_high_cardinality_numeric(all_parsers):
tm.assert_frame_equal(actual, expected)
+@td.check_file_leaks
def test_categorical_dtype_latin1(all_parsers, csv_dir_path):
# see gh-10153
pth = os.path.join(csv_dir_path, "unicode_series.csv")
@@ -187,6 +204,7 @@ def test_categorical_dtype_latin1(all_parsers, csv_dir_path):
tm.assert_frame_equal(actual, expected)
+@td.check_file_leaks
def test_categorical_dtype_utf16(all_parsers, csv_dir_path):
# see gh-10153
pth = os.path.join(csv_dir_path, "utf16_ex.txt")
@@ -201,6 +219,7 @@ def test_categorical_dtype_utf16(all_parsers, csv_dir_path):
tm.assert_frame_equal(actual, expected)
+@td.check_file_leaks
def test_categorical_dtype_chunksize_infer_categories(all_parsers):
# see gh-10153
parser = all_parsers
@@ -219,6 +238,7 @@ def test_categorical_dtype_chunksize_infer_categories(all_parsers):
tm.assert_frame_equal(actual, expected)
+@td.check_file_leaks
def test_categorical_dtype_chunksize_explicit_categories(all_parsers):
# see gh-10153
parser = all_parsers
@@ -241,6 +261,7 @@ def test_categorical_dtype_chunksize_explicit_categories(all_parsers):
tm.assert_frame_equal(actual, expected)
+@td.check_file_leaks
@pytest.mark.parametrize("ordered", [False, True])
@pytest.mark.parametrize(
"categories",
@@ -267,6 +288,7 @@ def test_categorical_category_dtype(all_parsers, categories, ordered):
tm.assert_frame_equal(result, expected)
+@td.check_file_leaks
def test_categorical_category_dtype_unsorted(all_parsers):
parser = all_parsers
data = """a,b
@@ -286,6 +308,7 @@ def test_categorical_category_dtype_unsorted(all_parsers):
tm.assert_frame_equal(result, expected)
+@td.check_file_leaks
def test_categorical_coerces_numeric(all_parsers):
parser = all_parsers
dtype = {"b": CategoricalDtype([1, 2, 3])}
@@ -297,6 +320,7 @@ def test_categorical_coerces_numeric(all_parsers):
tm.assert_frame_equal(result, expected)
+@td.check_file_leaks
def test_categorical_coerces_datetime(all_parsers):
parser = all_parsers
dti = pd.DatetimeIndex(["2017-01-01", "2018-01-01", "2019-01-01"], freq=None)
@@ -309,6 +333,7 @@ def test_categorical_coerces_datetime(all_parsers):
tm.assert_frame_equal(result, expected)
+@td.check_file_leaks
def test_categorical_coerces_timestamp(all_parsers):
parser = all_parsers
dtype = {"b": CategoricalDtype([Timestamp("2014")])}
@@ -320,6 +345,7 @@ def test_categorical_coerces_timestamp(all_parsers):
tm.assert_frame_equal(result, expected)
+@td.check_file_leaks
def test_categorical_coerces_timedelta(all_parsers):
parser = all_parsers
dtype = {"b": CategoricalDtype(pd.to_timedelta(["1H", "2H", "3H"]))}
@@ -331,6 +357,7 @@ def test_categorical_coerces_timedelta(all_parsers):
tm.assert_frame_equal(result, expected)
+@td.check_file_leaks
@pytest.mark.parametrize(
"data",
[
@@ -350,6 +377,7 @@ def test_categorical_dtype_coerces_boolean(all_parsers, data):
tm.assert_frame_equal(result, expected)
+@td.check_file_leaks
def test_categorical_unexpected_categories(all_parsers):
parser = all_parsers
dtype = {"b": CategoricalDtype(["a", "b", "d", "e"])}
@@ -361,6 +389,7 @@ def test_categorical_unexpected_categories(all_parsers):
tm.assert_frame_equal(result, expected)
+@td.check_file_leaks
def test_empty_pass_dtype(all_parsers):
parser = all_parsers
@@ -374,6 +403,7 @@ def test_empty_pass_dtype(all_parsers):
tm.assert_frame_equal(result, expected)
+@td.check_file_leaks
def test_empty_with_index_pass_dtype(all_parsers):
parser = all_parsers
@@ -388,6 +418,7 @@ def test_empty_with_index_pass_dtype(all_parsers):
tm.assert_frame_equal(result, expected)
+@td.check_file_leaks
def test_empty_with_multi_index_pass_dtype(all_parsers):
parser = all_parsers
@@ -403,6 +434,7 @@ def test_empty_with_multi_index_pass_dtype(all_parsers):
tm.assert_frame_equal(result, expected)
+@td.check_file_leaks
def test_empty_with_mangled_column_pass_dtype_by_names(all_parsers):
parser = all_parsers
@@ -416,6 +448,7 @@ def test_empty_with_mangled_column_pass_dtype_by_names(all_parsers):
tm.assert_frame_equal(result, expected)
+@td.check_file_leaks
def test_empty_with_mangled_column_pass_dtype_by_indexes(all_parsers):
parser = all_parsers
@@ -429,6 +462,7 @@ def test_empty_with_mangled_column_pass_dtype_by_indexes(all_parsers):
tm.assert_frame_equal(result, expected)
+@td.check_file_leaks
def test_empty_with_dup_column_pass_dtype_by_indexes(all_parsers):
# see gh-9424
parser = all_parsers
@@ -443,6 +477,7 @@ def test_empty_with_dup_column_pass_dtype_by_indexes(all_parsers):
tm.assert_frame_equal(result, expected)
+@td.check_file_leaks
def test_empty_with_dup_column_pass_dtype_by_indexes_raises(all_parsers):
# see gh-9424
parser = all_parsers
@@ -457,6 +492,7 @@ def test_empty_with_dup_column_pass_dtype_by_indexes_raises(all_parsers):
parser.read_csv(StringIO(data), names=["one", "one"], dtype={0: "u1", 1: "f"})
+@td.check_file_leaks
def test_raise_on_passed_int_dtype_with_nas(all_parsers):
# see gh-2631
parser = all_parsers
@@ -474,6 +510,7 @@ def test_raise_on_passed_int_dtype_with_nas(all_parsers):
parser.read_csv(StringIO(data), dtype={"DOY": np.int64}, skipinitialspace=True)
+@td.check_file_leaks
def test_dtype_with_converters(all_parsers):
parser = all_parsers
data = """a,b
@@ -489,6 +526,7 @@ def test_dtype_with_converters(all_parsers):
tm.assert_frame_equal(result, expected)
+@td.check_file_leaks
@pytest.mark.parametrize(
"dtype,expected",
[
@@ -541,6 +579,7 @@ def test_empty_dtype(all_parsers, dtype, expected):
tm.assert_frame_equal(result, expected)
+@td.check_file_leaks
@pytest.mark.parametrize(
"dtype", list(np.typecodes["AllInteger"] + np.typecodes["Float"])
)
@@ -553,6 +592,7 @@ def test_numeric_dtype(all_parsers, dtype):
tm.assert_frame_equal(expected, result)
+@td.check_file_leaks
def test_boolean_dtype(all_parsers):
parser = all_parsers
data = "\n".join(
diff --git a/pandas/tests/io/parser/test_encoding.py b/pandas/tests/io/parser/test_encoding.py
index de7b3bed034c7..5bf1d158c51e6 100644
--- a/pandas/tests/io/parser/test_encoding.py
+++ b/pandas/tests/io/parser/test_encoding.py
@@ -10,10 +10,19 @@
import numpy as np
import pytest
+import pandas.util._test_decorators as td
+
from pandas import DataFrame
import pandas._testing as tm
+def setup_module(module):
+ import pandas.util._test_decorators as td
+
+ yield from td.check_file_leaks()
+
+
+@td.check_file_leaks
def test_bytes_io_input(all_parsers):
encoding = "cp1255"
parser = all_parsers
@@ -25,6 +34,7 @@ def test_bytes_io_input(all_parsers):
tm.assert_frame_equal(result, expected)
+@td.check_file_leaks
def test_read_csv_unicode(all_parsers):
parser = all_parsers
data = BytesIO("\u0141aski, Jan;1".encode("utf-8"))
@@ -34,6 +44,7 @@ def test_read_csv_unicode(all_parsers):
tm.assert_frame_equal(result, expected)
+@td.check_file_leaks
@pytest.mark.parametrize("sep", [",", "\t"])
@pytest.mark.parametrize("encoding", ["utf-16", "utf-16le", "utf-16be"])
def test_utf16_bom_skiprows(all_parsers, sep, encoding):
@@ -68,6 +79,7 @@ def test_utf16_bom_skiprows(all_parsers, sep, encoding):
tm.assert_frame_equal(result, expected)
+@td.check_file_leaks
def test_utf16_example(all_parsers, csv_dir_path):
path = os.path.join(csv_dir_path, "utf16_ex.txt")
parser = all_parsers
@@ -75,6 +87,7 @@ def test_utf16_example(all_parsers, csv_dir_path):
assert len(result) == 50
+@td.check_file_leaks
def test_unicode_encoding(all_parsers, csv_dir_path):
path = os.path.join(csv_dir_path, "unicode_series.csv")
parser = all_parsers
@@ -87,6 +100,7 @@ def test_unicode_encoding(all_parsers, csv_dir_path):
assert got == expected
+@td.check_file_leaks
@pytest.mark.parametrize(
"data,kwargs,expected",
[
@@ -120,6 +134,7 @@ def _encode_data_with_bom(_data):
tm.assert_frame_equal(result, expected)
+@td.check_file_leaks
def test_read_csv_utf_aliases(all_parsers, utf_value, encoding_fmt):
# see gh-13549
expected = DataFrame({"mb_num": [4.8], "multibyte": ["test"]})
@@ -132,6 +147,7 @@ def test_read_csv_utf_aliases(all_parsers, utf_value, encoding_fmt):
tm.assert_frame_equal(result, expected)
+@td.check_file_leaks
@pytest.mark.parametrize(
"file_path,encoding",
[
@@ -163,6 +179,7 @@ def test_binary_mode_file_buffers(
tm.assert_frame_equal(expected, result)
+@td.check_file_leaks
@pytest.mark.parametrize("pass_encoding", [True, False])
def test_encoding_temp_file(all_parsers, utf_value, encoding_fmt, pass_encoding):
# see gh-24130
@@ -179,6 +196,7 @@ def test_encoding_temp_file(all_parsers, utf_value, encoding_fmt, pass_encoding)
tm.assert_frame_equal(result, expected)
+@td.check_file_leaks
def test_encoding_named_temp_file(all_parsers):
# see gh-31819
parser = all_parsers
diff --git a/pandas/tests/io/parser/test_header.py b/pandas/tests/io/parser/test_header.py
index 4cd110136d7b0..c57e0078c20dc 100644
--- a/pandas/tests/io/parser/test_header.py
+++ b/pandas/tests/io/parser/test_header.py
@@ -15,6 +15,12 @@
import pandas._testing as tm
+def setup_module(module):
+ import pandas.util._test_decorators as td
+
+ yield from td.check_file_leaks()
+
+
def test_read_with_bad_header(all_parsers):
parser = all_parsers
msg = r"but only \d+ lines in file"
diff --git a/pandas/tests/io/parser/test_index_col.py b/pandas/tests/io/parser/test_index_col.py
index 9f425168540ba..cbe6a20fe785d 100644
--- a/pandas/tests/io/parser/test_index_col.py
+++ b/pandas/tests/io/parser/test_index_col.py
@@ -12,6 +12,12 @@
import pandas._testing as tm
+def setup_module(module):
+ import pandas.util._test_decorators as td
+
+ yield from td.check_file_leaks()
+
+
@pytest.mark.parametrize("with_header", [True, False])
def test_index_col_named(all_parsers, with_header):
parser = all_parsers
diff --git a/pandas/tests/io/parser/test_mangle_dupes.py b/pandas/tests/io/parser/test_mangle_dupes.py
index 5c4e642115798..94ca24cf55e1c 100644
--- a/pandas/tests/io/parser/test_mangle_dupes.py
+++ b/pandas/tests/io/parser/test_mangle_dupes.py
@@ -11,6 +11,12 @@
import pandas._testing as tm
+def setup_module(module):
+ import pandas.util._test_decorators as td
+
+ yield from td.check_file_leaks()
+
+
@pytest.mark.parametrize("kwargs", [dict(), dict(mangle_dupe_cols=True)])
def test_basic(all_parsers, kwargs):
# TODO: add test for condition "mangle_dupe_cols=False"
diff --git a/pandas/tests/io/parser/test_multi_thread.py b/pandas/tests/io/parser/test_multi_thread.py
index d50560c684084..873c7d9de78ee 100644
--- a/pandas/tests/io/parser/test_multi_thread.py
+++ b/pandas/tests/io/parser/test_multi_thread.py
@@ -13,6 +13,12 @@
import pandas._testing as tm
+def setup_module(module):
+ import pandas.util._test_decorators as td
+
+ yield from td.check_file_leaks()
+
+
def _construct_dataframe(num_rows):
"""
Construct a DataFrame for testing.
diff --git a/pandas/tests/io/parser/test_na_values.py b/pandas/tests/io/parser/test_na_values.py
index 9f86bbd65640e..29553496c1587 100644
--- a/pandas/tests/io/parser/test_na_values.py
+++ b/pandas/tests/io/parser/test_na_values.py
@@ -13,6 +13,12 @@
import pandas._testing as tm
+def setup_module(module):
+ import pandas.util._test_decorators as td
+
+ yield from td.check_file_leaks()
+
+
def test_string_nas(all_parsers):
parser = all_parsers
data = """A,B,C
diff --git a/pandas/tests/io/parser/test_network.py b/pandas/tests/io/parser/test_network.py
index 509ae89909699..3a5f67ebe7094 100644
--- a/pandas/tests/io/parser/test_network.py
+++ b/pandas/tests/io/parser/test_network.py
@@ -17,6 +17,12 @@
from pandas.io.parsers import read_csv
+def setup_module(module):
+ import pandas.util._test_decorators as td
+
+ yield from td.check_file_leaks()
+
+
@pytest.mark.network
@pytest.mark.parametrize(
"compress_type, extension",
diff --git a/pandas/tests/io/parser/test_parse_dates.py b/pandas/tests/io/parser/test_parse_dates.py
index ed947755e3419..4de057319ac93 100644
--- a/pandas/tests/io/parser/test_parse_dates.py
+++ b/pandas/tests/io/parser/test_parse_dates.py
@@ -35,6 +35,12 @@
date_strategy = st.datetimes()
+def setup_module(module):
+ import pandas.util._test_decorators as td
+
+ yield from td.check_file_leaks()
+
+
def test_separator_date_conflict(all_parsers):
# Regression test for gh-4678
#
diff --git a/pandas/tests/io/parser/test_python_parser_only.py b/pandas/tests/io/parser/test_python_parser_only.py
index 4d933fa02d36f..9f2348dba25af 100644
--- a/pandas/tests/io/parser/test_python_parser_only.py
+++ b/pandas/tests/io/parser/test_python_parser_only.py
@@ -16,6 +16,12 @@
import pandas._testing as tm
+def setup_module(module):
+ import pandas.util._test_decorators as td
+
+ yield from td.check_file_leaks()
+
+
def test_default_separator(python_parser_only):
# see gh-17333
#
diff --git a/pandas/tests/io/parser/test_quoting.py b/pandas/tests/io/parser/test_quoting.py
index 14773dfbea20e..60822da7af231 100644
--- a/pandas/tests/io/parser/test_quoting.py
+++ b/pandas/tests/io/parser/test_quoting.py
@@ -14,6 +14,12 @@
import pandas._testing as tm
+def setup_module(module):
+ import pandas.util._test_decorators as td
+
+ yield from td.check_file_leaks()
+
+
@pytest.mark.parametrize(
"kwargs,msg",
[
diff --git a/pandas/tests/io/parser/test_read_fwf.py b/pandas/tests/io/parser/test_read_fwf.py
index e982667f06f31..18b6911b550fb 100644
--- a/pandas/tests/io/parser/test_read_fwf.py
+++ b/pandas/tests/io/parser/test_read_fwf.py
@@ -17,6 +17,12 @@
from pandas.io.parsers import EmptyDataError, read_csv, read_fwf
+def setup_module(module):
+ import pandas.util._test_decorators as td
+
+ yield from td.check_file_leaks()
+
+
def test_basic():
data = """\
A B C D
diff --git a/pandas/tests/io/parser/test_skiprows.py b/pandas/tests/io/parser/test_skiprows.py
index fdccef1127c7e..f0e11e6f532ed 100644
--- a/pandas/tests/io/parser/test_skiprows.py
+++ b/pandas/tests/io/parser/test_skiprows.py
@@ -15,6 +15,12 @@
import pandas._testing as tm
+def setup_module(module):
+ import pandas.util._test_decorators as td
+
+ yield from td.check_file_leaks()
+
+
@pytest.mark.parametrize("skiprows", [list(range(6)), 6])
def test_skip_rows_bug(all_parsers, skiprows):
# see gh-505
diff --git a/pandas/tests/io/parser/test_textreader.py b/pandas/tests/io/parser/test_textreader.py
index 1c2518646bb29..631df1a61f016 100644
--- a/pandas/tests/io/parser/test_textreader.py
+++ b/pandas/tests/io/parser/test_textreader.py
@@ -17,6 +17,12 @@
from pandas.io.parsers import TextFileReader, read_csv
+def setup_module(module):
+ import pandas.util._test_decorators as td
+
+ yield from td.check_file_leaks()
+
+
class TestTextReader:
@pytest.fixture(autouse=True)
def setup_method(self, datapath):
diff --git a/pandas/tests/io/parser/test_unsupported.py b/pandas/tests/io/parser/test_unsupported.py
index 267fae760398a..971b33641baa0 100644
--- a/pandas/tests/io/parser/test_unsupported.py
+++ b/pandas/tests/io/parser/test_unsupported.py
@@ -18,6 +18,12 @@
from pandas.io.parsers import read_csv
+def setup_module(module):
+ import pandas.util._test_decorators as td
+
+ yield from td.check_file_leaks()
+
+
@pytest.fixture(params=["python", "python-fwf"], ids=lambda val: val)
def python_engine(request):
return request.param
diff --git a/pandas/tests/io/parser/test_usecols.py b/pandas/tests/io/parser/test_usecols.py
index d4e049cc3fcc2..7bcf3fefa7454 100644
--- a/pandas/tests/io/parser/test_usecols.py
+++ b/pandas/tests/io/parser/test_usecols.py
@@ -22,6 +22,12 @@
)
+def setup_module(module):
+ import pandas.util._test_decorators as td
+
+ yield from td.check_file_leaks()
+
+
def test_raise_on_mixed_dtype_usecols(all_parsers):
# See gh-12678
data = """a,b,c
diff --git a/pandas/tests/io/pytables/test_compat.py b/pandas/tests/io/pytables/test_compat.py
index c7200385aa998..93dd6aac012ff 100644
--- a/pandas/tests/io/pytables/test_compat.py
+++ b/pandas/tests/io/pytables/test_compat.py
@@ -7,6 +7,12 @@
tables = pytest.importorskip("tables")
+def setup_module(module):
+ import pandas.util._test_decorators as td
+
+ yield from td.check_file_leaks()
+
+
@pytest.fixture
def pytables_hdf5_file():
"""
diff --git a/pandas/tests/io/pytables/test_complex.py b/pandas/tests/io/pytables/test_complex.py
index 543940e674dba..90b0c33de071b 100644
--- a/pandas/tests/io/pytables/test_complex.py
+++ b/pandas/tests/io/pytables/test_complex.py
@@ -15,6 +15,12 @@
# GH10447
+def setup_module(module):
+ import pandas.util._test_decorators as td
+
+ yield from td.check_file_leaks()
+
+
def test_complex_fixed(setup_path):
df = DataFrame(
np.random.rand(4, 5).astype(np.complex64),
diff --git a/pandas/tests/io/pytables/test_pytables_missing.py b/pandas/tests/io/pytables/test_pytables_missing.py
index 9adb0a6d227da..7d8411251a092 100644
--- a/pandas/tests/io/pytables/test_pytables_missing.py
+++ b/pandas/tests/io/pytables/test_pytables_missing.py
@@ -6,6 +6,12 @@
import pandas._testing as tm
+def setup_module(module):
+ import pandas.util._test_decorators as td
+
+ yield from td.check_file_leaks()
+
+
@td.skip_if_installed("tables")
def test_pytables_raises():
df = pd.DataFrame({"A": [1, 2]})
diff --git a/pandas/tests/io/pytables/test_store.py b/pandas/tests/io/pytables/test_store.py
index 0942c79837e7c..1b39dfcd621ca 100644
--- a/pandas/tests/io/pytables/test_store.py
+++ b/pandas/tests/io/pytables/test_store.py
@@ -56,6 +56,12 @@
from pandas.io.pytables import TableIterator # noqa: E402 isort:skip
+def setup_module(module):
+ import pandas.util._test_decorators as td
+
+ yield from td.check_file_leaks()
+
+
_default_compressor = "blosc"
ignore_natural_naming_warning = pytest.mark.filterwarnings(
"ignore:object name:tables.exceptions.NaturalNameWarning"
diff --git a/pandas/tests/io/pytables/test_timezones.py b/pandas/tests/io/pytables/test_timezones.py
index 38d32b0bdc8a3..eb4038dc9e81b 100644
--- a/pandas/tests/io/pytables/test_timezones.py
+++ b/pandas/tests/io/pytables/test_timezones.py
@@ -15,6 +15,12 @@
)
+def setup_module(module):
+ import pandas.util._test_decorators as td
+
+ yield from td.check_file_leaks()
+
+
def _compare_with_tz(a, b):
tm.assert_frame_equal(a, b)
diff --git a/pandas/tests/io/sas/test_sas.py b/pandas/tests/io/sas/test_sas.py
index 5d2643c20ceb2..f6373aeb62a0b 100644
--- a/pandas/tests/io/sas/test_sas.py
+++ b/pandas/tests/io/sas/test_sas.py
@@ -6,6 +6,12 @@
import pandas._testing as tm
+def setup_module(module):
+ import pandas.util._test_decorators as td
+
+ yield from td.check_file_leaks()
+
+
class TestSas:
def test_sas_buffer_format(self):
# see gh-14947
diff --git a/pandas/tests/io/sas/test_sas7bdat.py b/pandas/tests/io/sas/test_sas7bdat.py
index 8c14f9de9f61c..4f98a39544cb9 100644
--- a/pandas/tests/io/sas/test_sas7bdat.py
+++ b/pandas/tests/io/sas/test_sas7bdat.py
@@ -14,6 +14,12 @@
import pandas._testing as tm
+def setup_module(module):
+ import pandas.util._test_decorators as td
+
+ yield from td.check_file_leaks()
+
+
# https://github.com/cython/cython/issues/1720
@pytest.mark.filterwarnings("ignore:can't resolve package:ImportWarning")
class TestSAS7BDAT:
diff --git a/pandas/tests/io/sas/test_xport.py b/pandas/tests/io/sas/test_xport.py
index 2682bafedb8f1..195351b56e522 100644
--- a/pandas/tests/io/sas/test_xport.py
+++ b/pandas/tests/io/sas/test_xport.py
@@ -14,6 +14,12 @@
# before making comparisons.
+def setup_module(module):
+ import pandas.util._test_decorators as td
+
+ yield from td.check_file_leaks()
+
+
def numeric_as_float(data):
for v in data.columns:
if data[v].dtype is np.dtype("int64"):
diff --git a/pandas/tests/io/test_common.py b/pandas/tests/io/test_common.py
index 5ce2233bc0cd0..705af5e7b08fb 100644
--- a/pandas/tests/io/test_common.py
+++ b/pandas/tests/io/test_common.py
@@ -17,6 +17,10 @@
import pandas.io.common as icom
+def setup_module(module):
+ yield from td.check_file_leaks()
+
+
class CustomFSPath:
"""For testing fspath on unknown objects"""
diff --git a/pandas/tests/io/test_compression.py b/pandas/tests/io/test_compression.py
index 902a3d5d2a397..bc1ce7f3d35cc 100644
--- a/pandas/tests/io/test_compression.py
+++ b/pandas/tests/io/test_compression.py
@@ -5,12 +5,18 @@
import pytest
+import pandas.util._test_decorators as td
+
import pandas as pd
import pandas._testing as tm
import pandas.io.common as icom
+def setup_module(module):
+ yield from td.check_file_leaks()
+
+
@pytest.mark.parametrize(
"obj",
[
diff --git a/pandas/tests/io/test_feather.py b/pandas/tests/io/test_feather.py
index a8a5c8f00e6bf..4077d9a32ba58 100644
--- a/pandas/tests/io/test_feather.py
+++ b/pandas/tests/io/test_feather.py
@@ -18,6 +18,12 @@
filter_sparse = pytest.mark.filterwarnings("ignore:The Sparse")
+def setup_module(module):
+ import pandas.util._test_decorators as td
+
+ yield from td.check_file_leaks()
+
+
@filter_sparse
@pytest.mark.single
class TestFeather:
diff --git a/pandas/tests/io/test_fsspec.py b/pandas/tests/io/test_fsspec.py
index 3e89f6ca4ae16..0e6a09b945df7 100644
--- a/pandas/tests/io/test_fsspec.py
+++ b/pandas/tests/io/test_fsspec.py
@@ -30,6 +30,12 @@
text = df1.to_csv(index=False).encode() # type: ignore[union-attr]
+def setup_module(module):
+ import pandas.util._test_decorators as td
+
+ yield from td.check_file_leaks()
+
+
@pytest.fixture
def cleared_fs():
fsspec = pytest.importorskip("fsspec")
diff --git a/pandas/tests/io/test_gbq.py b/pandas/tests/io/test_gbq.py
index df107259d38cd..fe3c33258af90 100644
--- a/pandas/tests/io/test_gbq.py
+++ b/pandas/tests/io/test_gbq.py
@@ -24,6 +24,12 @@
VERSION = platform.python_version()
+def setup_module(module):
+ import pandas.util._test_decorators as td
+
+ yield from td.check_file_leaks()
+
+
def _skip_if_no_project_id():
if not _get_project_id():
pytest.skip("Cannot run integration tests without a project id")
diff --git a/pandas/tests/io/test_gcs.py b/pandas/tests/io/test_gcs.py
index eacf4fa08545d..dc7dbc3def142 100644
--- a/pandas/tests/io/test_gcs.py
+++ b/pandas/tests/io/test_gcs.py
@@ -9,6 +9,12 @@
from pandas.util import _test_decorators as td
+def setup_module(module):
+ import pandas.util._test_decorators as td
+
+ yield from td.check_file_leaks()
+
+
@td.skip_if_no("gcsfs")
def test_read_csv_gcs(monkeypatch):
from fsspec import AbstractFileSystem, registry
diff --git a/pandas/tests/io/test_html.py b/pandas/tests/io/test_html.py
index 2c93dbb5b6b83..a53aa729adbea 100644
--- a/pandas/tests/io/test_html.py
+++ b/pandas/tests/io/test_html.py
@@ -24,6 +24,12 @@
HERE = os.path.dirname(__file__)
+def setup_module(module):
+ import pandas.util._test_decorators as td
+
+ yield from td.check_file_leaks()
+
+
@pytest.fixture(
params=[
"chinese_utf-16.html",
diff --git a/pandas/tests/io/test_orc.py b/pandas/tests/io/test_orc.py
index a1f9c6f6af51a..e6a3cf0e9c2e5 100644
--- a/pandas/tests/io/test_orc.py
+++ b/pandas/tests/io/test_orc.py
@@ -17,6 +17,12 @@
)
+def setup_module(module):
+ import pandas.util._test_decorators as td
+
+ yield from td.check_file_leaks()
+
+
@pytest.fixture
def dirpath(datapath):
return datapath("io", "data", "orc")
diff --git a/pandas/tests/io/test_parquet.py b/pandas/tests/io/test_parquet.py
index 82157f3d722a9..e8e5693b9ceb2 100644
--- a/pandas/tests/io/test_parquet.py
+++ b/pandas/tests/io/test_parquet.py
@@ -41,6 +41,12 @@
)
+def setup_module(module):
+ import pandas.util._test_decorators as td
+
+ yield from td.check_file_leaks()
+
+
# setup engines & skips
@pytest.fixture(
params=[
diff --git a/pandas/tests/io/test_pickle.py b/pandas/tests/io/test_pickle.py
index e4d43db7834e3..1c98c8d1d2b4f 100644
--- a/pandas/tests/io/test_pickle.py
+++ b/pandas/tests/io/test_pickle.py
@@ -34,6 +34,12 @@
lzma = _import_lzma()
+def setup_module(module):
+ import pandas.util._test_decorators as td
+
+ yield from td.check_file_leaks()
+
+
@pytest.fixture(scope="module")
def current_pickle_data():
# our current version pickle data
diff --git a/pandas/tests/io/test_s3.py b/pandas/tests/io/test_s3.py
index a137e76b1696b..86d8973f760d0 100644
--- a/pandas/tests/io/test_s3.py
+++ b/pandas/tests/io/test_s3.py
@@ -9,6 +9,12 @@
import pandas._testing as tm
+def setup_module(module):
+ import pandas.util._test_decorators as td
+
+ yield from td.check_file_leaks()
+
+
def test_streaming_s3_objects():
# GH17135
# botocore gained iteration support in 1.10.47, can now be used in read_*
diff --git a/pandas/tests/io/test_spss.py b/pandas/tests/io/test_spss.py
index 013f56f83c5ec..6b756b770b063 100644
--- a/pandas/tests/io/test_spss.py
+++ b/pandas/tests/io/test_spss.py
@@ -7,6 +7,12 @@
pyreadstat = pytest.importorskip("pyreadstat")
+def setup_module(module):
+ import pandas.util._test_decorators as td
+
+ yield from td.check_file_leaks()
+
+
def test_spss_labelled_num(datapath):
# test file from the Haven project (https://haven.tidyverse.org/)
fname = datapath("io", "data", "spss", "labelled-num.sav")
diff --git a/pandas/tests/io/test_sql.py b/pandas/tests/io/test_sql.py
index 29b787d39c09d..09bd1fd84be48 100644
--- a/pandas/tests/io/test_sql.py
+++ b/pandas/tests/io/test_sql.py
@@ -57,6 +57,13 @@
except ImportError:
SQLALCHEMY_INSTALLED = False
+
+def setup_module(module):
+ import pandas.util._test_decorators as td
+
+ yield from td.check_file_leaks()
+
+
SQL_STRINGS = {
"create_iris": {
"sqlite": """CREATE TABLE iris (
diff --git a/pandas/tests/io/test_stata.py b/pandas/tests/io/test_stata.py
index 6d7fec803a8e0..68aa27a28d839 100644
--- a/pandas/tests/io/test_stata.py
+++ b/pandas/tests/io/test_stata.py
@@ -12,6 +12,8 @@
import numpy as np
import pytest
+import pandas.util._test_decorators as td
+
from pandas.core.dtypes.common import is_categorical_dtype
import pandas as pd
@@ -30,6 +32,12 @@
)
+def setup_module(module):
+ import pandas.util._test_decorators as td
+
+ yield from td.check_file_leaks()
+
+
@pytest.fixture()
def mixed_frame():
return pd.DataFrame(
@@ -127,6 +135,7 @@ def read_dta(self, file):
def read_csv(self, file):
return read_csv(file, parse_dates=True)
+ @td.check_file_leaks
@pytest.mark.parametrize("version", [114, 117, 118, 119, None])
def test_read_empty_dta(self, version):
empty_ds = DataFrame(columns=["unit"])
@@ -136,6 +145,7 @@ def test_read_empty_dta(self, version):
empty_ds2 = read_stata(path)
tm.assert_frame_equal(empty_ds, empty_ds2)
+ @td.check_file_leaks
@pytest.mark.parametrize("file", ["dta1_114", "dta1_117"])
def test_read_dta1(self, file):
@@ -155,6 +165,7 @@ def test_read_dta1(self, file):
tm.assert_frame_equal(parsed, expected)
+ @td.check_file_leaks
def test_read_dta2(self):
expected = DataFrame.from_records(
@@ -215,6 +226,7 @@ def test_read_dta2(self):
tm.assert_frame_equal(parsed_115, expected, check_datetimelike_compat=True)
tm.assert_frame_equal(parsed_117, expected, check_datetimelike_compat=True)
+ @td.check_file_leaks
@pytest.mark.parametrize("file", ["dta3_113", "dta3_114", "dta3_115", "dta3_117"])
def test_read_dta3(self, file):
@@ -229,6 +241,7 @@ def test_read_dta3(self, file):
tm.assert_frame_equal(parsed, expected)
+ @td.check_file_leaks
@pytest.mark.parametrize("file", ["dta4_113", "dta4_114", "dta4_115", "dta4_117"])
def test_read_dta4(self, file):
@@ -275,6 +288,7 @@ def test_read_dta4(self, file):
tm.assert_frame_equal(parsed, expected)
# File containing strls
+ @td.check_file_leaks
def test_read_dta12(self):
parsed_117 = self.read_dta(self.dta21_117)
expected = DataFrame.from_records(
@@ -288,6 +302,7 @@ def test_read_dta12(self):
tm.assert_frame_equal(parsed_117, expected, check_dtype=False)
+ @td.check_file_leaks
def test_read_dta18(self):
parsed_118 = self.read_dta(self.dta22_118)
parsed_118["Bytes"] = parsed_118["Bytes"].astype("O")
@@ -328,6 +343,7 @@ def test_read_dta18(self):
assert rdr.data_label == "This is a Ünicode data label"
+ @td.check_file_leaks
def test_read_write_dta5(self):
original = DataFrame(
[(np.nan, np.nan, np.nan, np.nan, np.nan)],
@@ -340,6 +356,7 @@ def test_read_write_dta5(self):
written_and_read_again = self.read_dta(path)
tm.assert_frame_equal(written_and_read_again.set_index("index"), original)
+ @td.check_file_leaks
def test_write_dta6(self):
original = self.read_csv(self.csv3)
original.index.name = "index"
@@ -356,6 +373,7 @@ def test_write_dta6(self):
check_index_type=False,
)
+ @td.check_file_leaks
@pytest.mark.parametrize("version", [114, 117, 118, 119, None])
def test_read_write_dta10(self, version):
original = DataFrame(
@@ -377,11 +395,13 @@ def test_read_write_dta10(self, version):
check_index_type=False,
)
+ @td.check_file_leaks
def test_stata_doc_examples(self):
with tm.ensure_clean() as path:
df = DataFrame(np.random.randn(10, 2), columns=list("AB"))
df.to_stata(path)
+ @td.check_file_leaks
def test_write_preserves_original(self):
# 9795
np.random.seed(423)
@@ -392,6 +412,7 @@ def test_write_preserves_original(self):
df.to_stata(path, write_index=False)
tm.assert_frame_equal(df, df_copy)
+ @td.check_file_leaks
@pytest.mark.parametrize("version", [114, 117, 118, 119, None])
def test_encoding(self, version):
@@ -409,6 +430,7 @@ def test_encoding(self, version):
reread_encoded = read_stata(path)
tm.assert_frame_equal(encoded, reread_encoded)
+ @td.check_file_leaks
def test_read_write_dta11(self):
original = DataFrame(
[(1, 2, 3, 4)],
@@ -433,6 +455,7 @@ def test_read_write_dta11(self):
written_and_read_again = self.read_dta(path)
tm.assert_frame_equal(written_and_read_again.set_index("index"), formatted)
+ @td.check_file_leaks
@pytest.mark.parametrize("version", [114, 117, 118, 119, None])
def test_read_write_dta12(self, version):
original = DataFrame(
@@ -470,6 +493,7 @@ def test_read_write_dta12(self, version):
written_and_read_again = self.read_dta(path)
tm.assert_frame_equal(written_and_read_again.set_index("index"), formatted)
+ @td.check_file_leaks
def test_read_write_dta13(self):
s1 = Series(2 ** 9, dtype=np.int16)
s2 = Series(2 ** 17, dtype=np.int32)
@@ -485,6 +509,7 @@ def test_read_write_dta13(self):
written_and_read_again = self.read_dta(path)
tm.assert_frame_equal(written_and_read_again.set_index("index"), formatted)
+ @td.check_file_leaks
@pytest.mark.parametrize("version", [114, 117, 118, 119, None])
@pytest.mark.parametrize(
"file", ["dta14_113", "dta14_114", "dta14_115", "dta14_117"]
@@ -508,6 +533,7 @@ def test_read_write_reread_dta14(self, file, parsed_114, version):
written_and_read_again = self.read_dta(path)
tm.assert_frame_equal(written_and_read_again.set_index("index"), parsed_114)
+ @td.check_file_leaks
@pytest.mark.parametrize(
"file", ["dta15_113", "dta15_114", "dta15_115", "dta15_117"]
)
@@ -528,6 +554,7 @@ def test_read_write_reread_dta15(self, file):
tm.assert_frame_equal(expected, parsed)
+ @td.check_file_leaks
@pytest.mark.parametrize("version", [114, 117, 118, 119, None])
def test_timestamp_and_label(self, version):
original = DataFrame([(1,)], columns=["variable"])
@@ -542,6 +569,7 @@ def test_timestamp_and_label(self, version):
assert reader.time_stamp == "29 Feb 2000 14:21"
assert reader.data_label == data_label
+ @td.check_file_leaks
@pytest.mark.parametrize("version", [114, 117, 118, 119, None])
def test_invalid_timestamp(self, version):
original = DataFrame([(1,)], columns=["variable"])
@@ -551,6 +579,7 @@ def test_invalid_timestamp(self, version):
with pytest.raises(ValueError, match=msg):
original.to_stata(path, time_stamp=time_stamp, version=version)
+ @td.check_file_leaks
def test_numeric_column_names(self):
original = DataFrame(np.reshape(np.arange(25.0), (5, 5)))
original.index.name = "index"
@@ -566,6 +595,7 @@ def test_numeric_column_names(self):
written_and_read_again.columns = map(convert_col_name, columns)
tm.assert_frame_equal(original, written_and_read_again)
+ @td.check_file_leaks
@pytest.mark.parametrize("version", [114, 117, 118, 119, None])
def test_nan_to_missing_value(self, version):
s1 = Series(np.arange(4.0), dtype=np.float32)
@@ -580,6 +610,7 @@ def test_nan_to_missing_value(self, version):
written_and_read_again = written_and_read_again.set_index("index")
tm.assert_frame_equal(written_and_read_again, original)
+ @td.check_file_leaks
def test_no_index(self):
columns = ["x", "y"]
original = DataFrame(np.reshape(np.arange(10.0), (5, 2)), columns=columns)
@@ -590,6 +621,7 @@ def test_no_index(self):
with pytest.raises(KeyError, match=original.index.name):
written_and_read_again["index_not_written"]
+ @td.check_file_leaks
def test_string_no_dates(self):
s1 = Series(["a", "A longer string"])
s2 = Series([1.0, 2.0], dtype=np.float64)
@@ -600,6 +632,7 @@ def test_string_no_dates(self):
written_and_read_again = self.read_dta(path)
tm.assert_frame_equal(written_and_read_again.set_index("index"), original)
+ @td.check_file_leaks
def test_large_value_conversion(self):
s0 = Series([1, 99], dtype=np.int8)
s1 = Series([1, 127], dtype=np.int8)
@@ -618,6 +651,7 @@ def test_large_value_conversion(self):
modified["s3"] = Series(modified["s3"], dtype=np.float64)
tm.assert_frame_equal(written_and_read_again.set_index("index"), modified)
+ @td.check_file_leaks
def test_dates_invalid_column(self):
original = DataFrame([datetime(2006, 11, 19, 23, 13, 20)])
original.index.name = "index"
@@ -630,6 +664,7 @@ def test_dates_invalid_column(self):
modified.columns = ["_0"]
tm.assert_frame_equal(written_and_read_again.set_index("index"), modified)
+ @td.check_file_leaks
def test_105(self):
# Data obtained from:
# http://go.worldbank.org/ZXY29PVJ21
@@ -644,6 +679,7 @@ def test_105(self):
df0["psch_dis"] = df0["psch_dis"].astype(np.float32)
tm.assert_frame_equal(df.head(3), df0)
+ @td.check_file_leaks
def test_value_labels_old_format(self):
# GH 19417
#
@@ -654,6 +690,7 @@ def test_value_labels_old_format(self):
assert reader.value_labels() == {}
reader.close()
+ @td.check_file_leaks
def test_date_export_formats(self):
columns = ["tc", "td", "tw", "tm", "tq", "th", "ty"]
conversions = {c: c for c in columns}
@@ -677,6 +714,7 @@ def test_date_export_formats(self):
written_and_read_again = self.read_dta(path)
tm.assert_frame_equal(written_and_read_again.set_index("index"), expected)
+ @td.check_file_leaks
def test_write_missing_strings(self):
original = DataFrame([["1"], [None]], columns=["foo"])
expected = DataFrame([["1"], [""]], columns=["foo"])
@@ -686,6 +724,7 @@ def test_write_missing_strings(self):
written_and_read_again = self.read_dta(path)
tm.assert_frame_equal(written_and_read_again.set_index("index"), expected)
+ @td.check_file_leaks
@pytest.mark.parametrize("version", [114, 117, 118, 119, None])
@pytest.mark.parametrize("byteorder", [">", "<"])
def test_bool_uint(self, byteorder, version):
@@ -720,6 +759,7 @@ def test_bool_uint(self, byteorder, version):
written_and_read_again = written_and_read_again.set_index("index")
tm.assert_frame_equal(written_and_read_again, expected)
+ @td.check_file_leaks
def test_variable_labels(self):
with StataReader(self.dta16_115) as rdr:
sr_115 = rdr.variable_labels()
@@ -733,6 +773,7 @@ def test_variable_labels(self):
assert k in keys
assert v in labels
+ @td.check_file_leaks
def test_minimal_size_col(self):
str_lens = (1, 100, 244)
s = {}
@@ -752,6 +793,7 @@ def test_minimal_size_col(self):
assert int(variable[1:]) == int(fmt[1:-1])
assert int(variable[1:]) == typ
+ @td.check_file_leaks
def test_excessively_long_string(self):
str_lens = (1, 244, 500)
s = {}
@@ -770,6 +812,7 @@ def test_excessively_long_string(self):
with tm.ensure_clean() as path:
original.to_stata(path)
+ @td.check_file_leaks
def test_missing_value_generator(self):
types = ("b", "h", "l")
df = DataFrame([[0.0]], columns=["float_"])
@@ -801,6 +844,7 @@ def test_missing_value_generator(self):
)
assert val.string == ".z"
+ @td.check_file_leaks
@pytest.mark.parametrize("file", ["dta17_113", "dta17_115", "dta17_117"])
def test_missing_value_conversion(self, file):
columns = ["int8_", "int16_", "int32_", "float32_", "float64_"]
@@ -815,6 +859,7 @@ def test_missing_value_conversion(self, file):
parsed = read_stata(getattr(self, file), convert_missing=True)
tm.assert_frame_equal(parsed, expected)
+ @td.check_file_leaks
def test_big_dates(self):
yr = [1960, 2000, 9999, 100, 2262, 1677]
mo = [1, 1, 12, 1, 4, 9]
@@ -873,6 +918,7 @@ def test_big_dates(self):
check_datetimelike_compat=True,
)
+ @td.check_file_leaks
def test_dtype_conversion(self):
expected = self.read_csv(self.csv15)
expected["byte_"] = expected["byte_"].astype(np.int8)
@@ -899,6 +945,7 @@ def test_dtype_conversion(self):
tm.assert_frame_equal(expected, conversion)
+ @td.check_file_leaks
def test_drop_column(self):
expected = self.read_csv(self.csv15)
expected["byte_"] = expected["byte_"].astype(np.int8)
@@ -932,6 +979,7 @@ def test_drop_column(self):
columns = ["byte_", "int_", "long_", "not_found"]
read_stata(self.dta15_117, convert_dates=True, columns=columns)
+ @td.check_file_leaks
@pytest.mark.parametrize("version", [114, 117, 118, 119, None])
@pytest.mark.filterwarnings(
"ignore:\\nStata value:pandas.io.stata.ValueLabelTypeMismatch"
@@ -987,6 +1035,7 @@ def test_categorical_writing(self, version):
res = written_and_read_again.set_index("index")
tm.assert_frame_equal(res, expected)
+ @td.check_file_leaks
def test_categorical_warnings_and_errors(self):
# Warning for non-string labels
# Error for labels too long
@@ -1017,6 +1066,7 @@ def test_categorical_warnings_and_errors(self):
original.to_stata(path)
# should get a warning for mixed content
+ @td.check_file_leaks
@pytest.mark.parametrize("version", [114, 117, 118, 119, None])
def test_categorical_with_stata_missing_values(self, version):
values = [["a" + str(i)] for i in range(120)]
@@ -1039,6 +1089,7 @@ def test_categorical_with_stata_missing_values(self, version):
expected[col] = cat
tm.assert_frame_equal(res, expected)
+ @td.check_file_leaks
@pytest.mark.parametrize("file", ["dta19_115", "dta19_117"])
def test_categorical_order(self, file):
# Directly construct using expected codes
@@ -1075,6 +1126,7 @@ def test_categorical_order(self, file):
expected[col].cat.categories, parsed[col].cat.categories
)
+ @td.check_file_leaks
@pytest.mark.parametrize("file", ["dta20_115", "dta20_117"])
def test_categorical_sorting(self, file):
parsed = read_stata(getattr(self, file))
@@ -1092,6 +1144,7 @@ def test_categorical_sorting(self, file):
expected = pd.Series(cat, name="srh")
tm.assert_series_equal(expected, parsed["srh"])
+ @td.check_file_leaks
@pytest.mark.parametrize("file", ["dta19_115", "dta19_117"])
def test_categorical_ordering(self, file):
file = getattr(self, file)
@@ -1104,6 +1157,7 @@ def test_categorical_ordering(self, file):
assert parsed[col].cat.ordered
assert not parsed_unordered[col].cat.ordered
+ @td.check_file_leaks
@pytest.mark.parametrize(
"file",
[
@@ -1174,6 +1228,7 @@ def _convert_categorical(from_frame: DataFrame) -> DataFrame:
from_frame[col] = cat
return from_frame
+ @td.check_file_leaks
def test_iterator(self):
fname = self.dta3_117
@@ -1201,6 +1256,7 @@ def test_iterator(self):
from_chunks = pd.concat(itr)
tm.assert_frame_equal(parsed, from_chunks)
+ @td.check_file_leaks
@pytest.mark.parametrize(
"file",
[
@@ -1257,6 +1313,7 @@ def test_read_chunks_115(
pos += chunksize
itr.close()
+ @td.check_file_leaks
def test_read_chunks_columns(self):
fname = self.dta3_117
columns = ["quarter", "cpi", "m1"]
@@ -1273,6 +1330,7 @@ def test_read_chunks_columns(self):
tm.assert_frame_equal(from_frame, chunk, check_dtype=False)
pos += chunksize
+ @td.check_file_leaks
@pytest.mark.parametrize("version", [114, 117, 118, 119, None])
def test_write_variable_labels(self, version, mixed_frame):
# GH 13631, add support for writing variable labels
@@ -1297,6 +1355,7 @@ def test_write_variable_labels(self, version, mixed_frame):
read_labels = sr.variable_labels()
assert read_labels == variable_labels
+ @td.check_file_leaks
@pytest.mark.parametrize("version", [114, 117, 118, 119, None])
def test_invalid_variable_labels(self, version, mixed_frame):
mixed_frame.index.name = "index"
@@ -1308,6 +1367,7 @@ def test_invalid_variable_labels(self, version, mixed_frame):
path, variable_labels=variable_labels, version=version
)
+ @td.check_file_leaks
@pytest.mark.parametrize("version", [114, 117])
def test_invalid_variable_label_encoding(self, version, mixed_frame):
mixed_frame.index.name = "index"
@@ -1321,6 +1381,7 @@ def test_invalid_variable_label_encoding(self, version, mixed_frame):
path, variable_labels=variable_labels, version=version
)
+ @td.check_file_leaks
def test_write_variable_label_errors(self, mixed_frame):
values = ["\u03A1", "\u0391", "\u039D", "\u0394", "\u0391", "\u03A3"]
@@ -1351,6 +1412,7 @@ def test_write_variable_label_errors(self, mixed_frame):
with tm.ensure_clean() as path:
mixed_frame.to_stata(path, variable_labels=variable_labels_long)
+ @td.check_file_leaks
def test_default_date_conversion(self):
# GH 12259
dates = [
@@ -1380,6 +1442,7 @@ def test_default_date_conversion(self):
direct = read_stata(path, convert_dates=True)
tm.assert_frame_equal(reread, direct)
+ @td.check_file_leaks
def test_unsupported_type(self):
original = pd.DataFrame({"a": [1 + 2j, 2 + 4j]})
@@ -1388,6 +1451,7 @@ def test_unsupported_type(self):
with tm.ensure_clean() as path:
original.to_stata(path)
+ @td.check_file_leaks
def test_unsupported_datetype(self):
dates = [
dt.datetime(1999, 12, 31, 12, 12, 12, 12000),
@@ -1419,6 +1483,7 @@ def test_unsupported_datetype(self):
with tm.ensure_clean() as path:
original.to_stata(path)
+ @td.check_file_leaks
def test_repeated_column_labels(self):
# GH 13923, 25772
msg = """
@@ -1434,6 +1499,7 @@ def test_repeated_column_labels(self):
with pytest.raises(ValueError, match=msg):
read_stata(self.dta23, convert_categoricals=True)
+ @td.check_file_leaks
def test_stata_111(self):
# 111 is an old version but still used by current versions of
# SAS when exporting to Stata format. We do not know of any
@@ -1450,6 +1516,7 @@ def test_stata_111(self):
original = original[["y", "x", "w", "z"]]
tm.assert_frame_equal(original, df)
+ @td.check_file_leaks
def test_out_of_range_double(self):
# GH 14618
df = DataFrame(
@@ -1475,6 +1542,7 @@ def test_out_of_range_double(self):
with tm.ensure_clean() as path:
df.to_stata(path)
+ @td.check_file_leaks
def test_out_of_range_float(self):
original = DataFrame(
{
@@ -1509,6 +1577,7 @@ def test_out_of_range_float(self):
with tm.ensure_clean() as path:
original.to_stata(path)
+ @td.check_file_leaks
def test_path_pathlib(self):
df = tm.makeDataFrame()
df.index.name = "index"
@@ -1516,6 +1585,7 @@ def test_path_pathlib(self):
result = tm.round_trip_pathlib(df.to_stata, reader)
tm.assert_frame_equal(df, result)
+ @td.check_file_leaks
def test_pickle_path_localpath(self):
df = tm.makeDataFrame()
df.index.name = "index"
@@ -1523,6 +1593,7 @@ def test_pickle_path_localpath(self):
result = tm.round_trip_localpath(df.to_stata, reader)
tm.assert_frame_equal(df, result)
+ @td.check_file_leaks
@pytest.mark.parametrize("write_index", [True, False])
def test_value_labels_iterator(self, write_index):
# GH 16923
@@ -1536,6 +1607,7 @@ def test_value_labels_iterator(self, write_index):
value_labels = dta_iter.value_labels()
assert value_labels == {"A": {0: "A", 1: "B", 2: "C", 3: "E"}}
+ @td.check_file_leaks
def test_set_index(self):
# GH 17328
df = tm.makeDataFrame()
@@ -1545,6 +1617,7 @@ def test_set_index(self):
reread = pd.read_stata(path, index_col="index")
tm.assert_frame_equal(df, reread)
+ @td.check_file_leaks
@pytest.mark.parametrize(
"column", ["ms", "day", "week", "month", "qtr", "half", "yr"]
)
@@ -1565,6 +1638,7 @@ def test_date_parsing_ignores_format_details(self, column):
formatted = df.loc[0, column + "_fmt"]
assert unformatted == formatted
+ @td.check_file_leaks
def test_writer_117(self):
original = DataFrame(
data=[
@@ -1636,6 +1710,7 @@ def test_writer_117(self):
)
tm.assert_frame_equal(original, copy)
+ @td.check_file_leaks
def test_convert_strl_name_swap(self):
original = DataFrame(
[["a" * 3000, "A", "apple"], ["b" * 1000, "B", "banana"]],
@@ -1651,6 +1726,7 @@ def test_convert_strl_name_swap(self):
reread.columns = original.columns
tm.assert_frame_equal(reread, original, check_index_type=False)
+ @td.check_file_leaks
def test_invalid_date_conversion(self):
# GH 12259
dates = [
@@ -1671,6 +1747,7 @@ def test_invalid_date_conversion(self):
with pytest.raises(ValueError, match=msg):
original.to_stata(path, convert_dates={"wrong_name": "tc"})
+ @td.check_file_leaks
@pytest.mark.parametrize("version", [114, 117, 118, 119, None])
def test_nonfile_writing(self, version):
# GH 21041
@@ -1685,6 +1762,7 @@ def test_nonfile_writing(self, version):
reread = pd.read_stata(path, index_col="index")
tm.assert_frame_equal(df, reread)
+ @td.check_file_leaks
def test_gzip_writing(self):
# writing version 117 requires seek and cannot be used with gzip
df = tm.makeDataFrame()
@@ -1696,6 +1774,7 @@ def test_gzip_writing(self):
reread = pd.read_stata(gz, index_col="index")
tm.assert_frame_equal(df, reread)
+ @td.check_file_leaks
def test_unicode_dta_118(self):
unicode_df = self.read_dta(self.dta25_118)
@@ -1713,6 +1792,7 @@ def test_unicode_dta_118(self):
tm.assert_frame_equal(unicode_df, expected)
+ @td.check_file_leaks
def test_mixed_string_strl(self):
# GH 23633
output = [{"mixed": "string" * 500, "number": 0}, {"mixed": None, "number": 1}]
@@ -1734,6 +1814,7 @@ def test_mixed_string_strl(self):
expected = output.fillna("")
tm.assert_frame_equal(reread, expected)
+ @td.check_file_leaks
@pytest.mark.parametrize("version", [114, 117, 118, 119, None])
def test_all_none_exception(self, version):
output = [{"none": "none", "number": 0}, {"none": None, "number": 1}]
@@ -1743,6 +1824,7 @@ def test_all_none_exception(self, version):
with pytest.raises(ValueError, match="Column `none` cannot be exported"):
output.to_stata(path, version=version)
+ @td.check_file_leaks
@pytest.mark.parametrize("version", [114, 117, 118, 119, None])
def test_invalid_file_not_written(self, version):
content = "Here is one __�__ Another one __·__ Another one __½__"
@@ -1760,6 +1842,7 @@ def test_invalid_file_not_written(self, version):
with tm.assert_produces_warning(ResourceWarning):
df.to_stata(path)
+ @td.check_file_leaks
def test_strl_latin1(self):
# GH 23573, correct GSO data to reflect correct size
output = DataFrame(
@@ -1779,6 +1862,7 @@ def test_strl_latin1(self):
size = gso[gso.find(b"\x82") + 1]
assert len(val) == size - 1
+ @td.check_file_leaks
def test_encoding_latin1_118(self):
# GH 25960
msg = """
@@ -1794,6 +1878,7 @@ def test_encoding_latin1_118(self):
expected = pd.DataFrame([["Düsseldorf"]] * 151, columns=["kreis1849"])
tm.assert_frame_equal(encoded, expected)
+ @td.check_file_leaks
@pytest.mark.slow
def test_stata_119(self):
# Gzipped since contains 32,999 variables and uncompressed is 20MiB
@@ -1805,6 +1890,7 @@ def test_stata_119(self):
assert df.iloc[0, -1] == 1
assert df.iloc[0, 0] == pd.Timestamp(datetime(2012, 12, 21, 21, 12, 21))
+ @td.check_file_leaks
@pytest.mark.parametrize("version", [118, 119, None])
def test_utf8_writer(self, version):
cat = pd.Categorical(["a", "β", "ĉ"], ordered=True)
@@ -1849,6 +1935,7 @@ def test_utf8_writer(self, version):
reread_to_stata = read_stata(path)
tm.assert_frame_equal(data, reread_to_stata)
+ @td.check_file_leaks
def test_writer_118_exceptions(self):
df = DataFrame(np.zeros((1, 33000), dtype=np.int8))
with tm.ensure_clean() as path:
@@ -1859,6 +1946,7 @@ def test_writer_118_exceptions(self):
StataWriterUTF8(path, df, version=118)
+@td.check_file_leaks
@pytest.mark.parametrize("version", [105, 108, 111, 113, 114])
def test_backward_compat(version, datapath):
data_base = datapath("io", "data", "stata")
@@ -1869,6 +1957,7 @@ def test_backward_compat(version, datapath):
tm.assert_frame_equal(old_dta, expected, check_dtype=False)
+@td.check_file_leaks
@pytest.mark.parametrize("version", [114, 117, 118, 119, None])
@pytest.mark.parametrize("use_dict", [True, False])
@pytest.mark.parametrize("infer", [True, False])
@@ -1905,6 +1994,7 @@ def test_compression(compression, version, use_dict, infer):
tm.assert_frame_equal(reread, df)
+@td.check_file_leaks
@pytest.mark.parametrize("method", ["zip", "infer"])
@pytest.mark.parametrize("file_ext", [None, "dta", "zip"])
def test_compression_dict(method, file_ext):
@@ -1926,6 +2016,7 @@ def test_compression_dict(method, file_ext):
tm.assert_frame_equal(reread, df)
+@td.check_file_leaks
@pytest.mark.parametrize("version", [114, 117, 118, 119, None])
def test_chunked_categorical(version):
df = DataFrame({"cats": Series(["a", "b", "a", "b", "c"], dtype="category")})
@@ -1939,6 +2030,7 @@ def test_chunked_categorical(version):
tm.assert_series_equal(block.cats, df.cats.iloc[2 * i : 2 * (i + 1)])
+@td.check_file_leaks
def test_chunked_categorical_partial(dirpath):
dta_file = os.path.join(dirpath, "stata-dta-partially-labeled.dta")
values = ["a", "b", "a", "b", 3.0]
@@ -1958,6 +2050,7 @@ def test_chunked_categorical_partial(dirpath):
tm.assert_frame_equal(direct, large_chunk)
+@td.check_file_leaks
def test_iterator_errors(dirpath):
dta_file = os.path.join(dirpath, "stata-dta-partially-labeled.dta")
with pytest.raises(ValueError, match="chunksize must be a positive"):
@@ -1971,6 +2064,7 @@ def test_iterator_errors(dirpath):
reader.__next__()
+@td.check_file_leaks
def test_iterator_value_labels():
# GH 31544
values = ["c_label", "b_label"] + ["a_label"] * 500
diff --git a/pandas/tests/series/test_io.py b/pandas/tests/series/test_io.py
index 708118e950686..57cba4fb55e90 100644
--- a/pandas/tests/series/test_io.py
+++ b/pandas/tests/series/test_io.py
@@ -4,6 +4,8 @@
import numpy as np
import pytest
+import pandas.util._test_decorators as td
+
import pandas as pd
from pandas import DataFrame, Series
import pandas._testing as tm
@@ -24,6 +26,7 @@ def read_csv(self, path, **kwargs):
return out
+ @td.check_file_leaks
def test_from_csv(self, datetime_series, string_series):
# freq doesnt round-trip
datetime_series.index = datetime_series.index._with_freq(None)
@@ -65,6 +68,7 @@ def test_from_csv(self, datetime_series, string_series):
check_series = Series({"1998-01-01": 1.0, "1999-01-01": 2.0})
tm.assert_series_equal(check_series, series)
+ @td.check_file_leaks
def test_to_csv(self, datetime_series):
import io
@@ -79,6 +83,7 @@ def test_to_csv(self, datetime_series):
arr = np.loadtxt(path)
tm.assert_almost_equal(arr, datetime_series.values)
+ @td.check_file_leaks
def test_to_csv_unicode_index(self):
buf = StringIO()
s = Series(["\u05d0", "d2"], index=["\u05d0", "\u05d1"])
@@ -89,6 +94,7 @@ def test_to_csv_unicode_index(self):
s2 = self.read_csv(buf, index_col=0, encoding="UTF-8")
tm.assert_series_equal(s, s2)
+ @td.check_file_leaks
def test_to_csv_float_format(self):
with tm.ensure_clean() as filename:
@@ -99,6 +105,7 @@ def test_to_csv_float_format(self):
xp = Series([0.12, 0.23, 0.57])
tm.assert_series_equal(rs, xp)
+ @td.check_file_leaks
def test_to_csv_list_entries(self):
s = Series(["jack and jill", "jesse and frank"])
@@ -107,6 +114,7 @@ def test_to_csv_list_entries(self):
buf = StringIO()
split.to_csv(buf, header=False)
+ @td.check_file_leaks
def test_to_csv_path_is_none(self):
# GH 8215
# Series.to_csv() was returning None, inconsistent with
@@ -115,6 +123,7 @@ def test_to_csv_path_is_none(self):
csv_str = s.to_csv(path_or_buf=None, header=False)
assert isinstance(csv_str, str)
+ @td.check_file_leaks
@pytest.mark.parametrize(
"s,encoding",
[
@@ -168,6 +177,7 @@ def test_to_csv_compression(self, s, encoding, compression):
s, pd.read_csv(fh, index_col=0, squeeze=True, encoding=encoding)
)
+ @td.check_file_leaks
def test_to_csv_interval_index(self):
# GH 28210
s = Series(["foo", "bar", "baz"], index=pd.interval_range(0, 3))
@@ -184,6 +194,7 @@ def test_to_csv_interval_index(self):
class TestSeriesIO:
+ @td.check_file_leaks
def test_to_frame(self, datetime_series):
datetime_series.name = None
rs = datetime_series.to_frame()
@@ -203,6 +214,7 @@ def test_to_frame(self, datetime_series):
)
tm.assert_frame_equal(rs, xp)
+ @td.check_file_leaks
def test_timeseries_periodindex(self):
# GH2891
from pandas import period_range
@@ -212,6 +224,7 @@ def test_timeseries_periodindex(self):
new_ts = tm.round_trip_pickle(ts)
assert new_ts.index.freq == "M"
+ @td.check_file_leaks
def test_pickle_preserve_name(self):
for n in [777, 777.0, "name", datetime(2001, 11, 11), (1, 2)]:
unpickled = self._pickle_roundtrip_name(tm.makeTimeSeries(name=n))
@@ -224,6 +237,7 @@ def _pickle_roundtrip_name(self, obj):
unpickled = pd.read_pickle(path)
return unpickled
+ @td.check_file_leaks
def test_to_frame_expanddim(self):
# GH 9762
| There's a file being left open somewhere, trying to track it down. | https://api.github.com/repos/pandas-dev/pandas/pulls/35662 | 2020-08-11T04:06:37Z | 2020-08-12T18:01:16Z | null | 2020-08-12T18:01:36Z |
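The PR above sprinkles `@td.check_file_leaks` over IO tests to locate a file handle that is left open. Below is a minimal sketch of what such a decorator can do, assuming a Linux `/proc` filesystem; it is illustrative only — the real pandas decorator lives in `pandas.util._test_decorators` and inspects the process's open files via `psutil` rather than counting `/proc/self/fd` entries.

```python
import functools
import os


def check_file_leaks(func):
    """Fail the wrapped test if it leaves file descriptors open.

    Illustrative sketch: compares the number of entries in /proc/self/fd
    before and after the call (Linux-specific). Not the pandas
    implementation, which uses psutil's open-file listing.
    """

    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        before = len(os.listdir("/proc/self/fd"))
        result = func(*args, **kwargs)
        after = len(os.listdir("/proc/self/fd"))
        assert after <= before, (
            f"{after - before} file descriptor(s) leaked by {func.__name__}"
        )
        return result

    return wrapper
```

A test that closes everything it opens passes unchanged; one that keeps a handle alive fails with an `AssertionError` naming the leaking test.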
Moto server | diff --git a/ci/deps/azure-37-locale.yaml b/ci/deps/azure-37-locale.yaml
index a6552aa096a22..cc996f4077cd9 100644
--- a/ci/deps/azure-37-locale.yaml
+++ b/ci/deps/azure-37-locale.yaml
@@ -21,6 +21,7 @@ dependencies:
- lxml
- matplotlib>=3.3.0
- moto
+ - flask
- nomkl
- numexpr
- numpy=1.16.*
diff --git a/ci/deps/azure-37-slow.yaml b/ci/deps/azure-37-slow.yaml
index e8ffd3d74ca5e..d17a8a2b0ed9b 100644
--- a/ci/deps/azure-37-slow.yaml
+++ b/ci/deps/azure-37-slow.yaml
@@ -27,9 +27,11 @@ dependencies:
- python-dateutil
- pytz
- s3fs>=0.4.0
+ - moto>=1.3.14
- scipy
- sqlalchemy
- xlrd
- xlsxwriter
- xlwt
- moto
+ - flask
diff --git a/ci/deps/azure-38-locale.yaml b/ci/deps/azure-38-locale.yaml
index c7090d3a46a77..bb40127b672d3 100644
--- a/ci/deps/azure-38-locale.yaml
+++ b/ci/deps/azure-38-locale.yaml
@@ -14,6 +14,7 @@ dependencies:
# pandas dependencies
- beautifulsoup4
+ - flask
- html5lib
- ipython
- jinja2
@@ -32,6 +33,7 @@ dependencies:
- xlrd
- xlsxwriter
- xlwt
+ - moto
- pyarrow>=0.15
- pip
- pip:
diff --git a/ci/deps/azure-windows-37.yaml b/ci/deps/azure-windows-37.yaml
index f4c238ab8b173..4894129915722 100644
--- a/ci/deps/azure-windows-37.yaml
+++ b/ci/deps/azure-windows-37.yaml
@@ -15,13 +15,14 @@ dependencies:
# pandas dependencies
- beautifulsoup4
- bottleneck
- - fsspec>=0.7.4
+ - fsspec>=0.8.0
- gcsfs>=0.6.0
- html5lib
- jinja2
- lxml
- matplotlib=2.2.*
- - moto
+ - moto>=1.3.14
+ - flask
- numexpr
- numpy=1.16.*
- openpyxl
@@ -29,7 +30,7 @@ dependencies:
- pytables
- python-dateutil
- pytz
- - s3fs>=0.4.0
+ - s3fs>=0.4.2
- scipy
- sqlalchemy
- xlrd
diff --git a/ci/deps/azure-windows-38.yaml b/ci/deps/azure-windows-38.yaml
index 1f383164b5328..2853e12b28e35 100644
--- a/ci/deps/azure-windows-38.yaml
+++ b/ci/deps/azure-windows-38.yaml
@@ -16,7 +16,10 @@ dependencies:
- blosc
- bottleneck
- fastparquet>=0.3.2
+ - flask
+ - fsspec>=0.8.0
- matplotlib=3.1.3
+ - moto>=1.3.14
- numba
- numexpr
- numpy=1.18.*
@@ -26,6 +29,7 @@ dependencies:
- pytables
- python-dateutil
- pytz
+ - s3fs>=0.4.0
- scipy
- xlrd
- xlsxwriter
diff --git a/ci/deps/travis-37-arm64.yaml b/ci/deps/travis-37-arm64.yaml
index 5cb53489be225..ea29cbef1272b 100644
--- a/ci/deps/travis-37-arm64.yaml
+++ b/ci/deps/travis-37-arm64.yaml
@@ -17,5 +17,6 @@ dependencies:
- python-dateutil
- pytz
- pip
+ - flask
- pip:
- moto
diff --git a/ci/deps/travis-37-cov.yaml b/ci/deps/travis-37-cov.yaml
index edc11bdf4ab35..33ee6dfffb1a3 100644
--- a/ci/deps/travis-37-cov.yaml
+++ b/ci/deps/travis-37-cov.yaml
@@ -23,7 +23,8 @@ dependencies:
- geopandas
- html5lib
- matplotlib
- - moto
+ - moto>=1.3.14
+ - flask
- nomkl
- numexpr
- numpy=1.16.*
diff --git a/ci/deps/travis-37-locale.yaml b/ci/deps/travis-37-locale.yaml
index 4427c1d940bf2..2cf0e12027401 100644
--- a/ci/deps/travis-37-locale.yaml
+++ b/ci/deps/travis-37-locale.yaml
@@ -21,12 +21,12 @@ dependencies:
- jinja2
- lxml=4.3.0
- matplotlib=3.0.*
- - moto
- nomkl
- numexpr
- numpy
- openpyxl
- pandas-gbq=0.12.0
+ - pyarrow>=0.17
- psycopg2=2.7
- pymysql=0.7.11
- pytables
diff --git a/ci/deps/travis-37.yaml b/ci/deps/travis-37.yaml
index e896233aac63c..26d6c2910a7cc 100644
--- a/ci/deps/travis-37.yaml
+++ b/ci/deps/travis-37.yaml
@@ -20,8 +20,8 @@ dependencies:
- pyarrow
- pytz
- s3fs>=0.4.0
+ - moto>=1.3.14
+ - flask
- tabulate
- pyreadstat
- pip
- - pip:
- - moto
diff --git a/doc/source/whatsnew/v1.2.0.rst b/doc/source/whatsnew/v1.2.0.rst
index 8ec75b4846ae2..3e6ed1cdf8f7e 100644
--- a/doc/source/whatsnew/v1.2.0.rst
+++ b/doc/source/whatsnew/v1.2.0.rst
@@ -24,6 +24,9 @@ of the individual storage backends (detailed from the fsspec docs for
`builtin implementations`_ and linked to `external ones`_). See
Section :ref:`io.remote`.
+:issue:`35655` added fsspec support (including ``storage_options``)
+for reading excel files.
+
.. _builtin implementations: https://filesystem-spec.readthedocs.io/en/latest/api.html#built-in-implementations
.. _external ones: https://filesystem-spec.readthedocs.io/en/latest/api.html#other-known-implementations
diff --git a/environment.yml b/environment.yml
index 806119631d5ee..6afc19c227512 100644
--- a/environment.yml
+++ b/environment.yml
@@ -51,6 +51,7 @@ dependencies:
- botocore>=1.11
- hypothesis>=3.82
- moto # mock S3
+ - flask
- pytest>=5.0.1
- pytest-cov
- pytest-xdist>=1.21
diff --git a/pandas/io/excel/_base.py b/pandas/io/excel/_base.py
index b1bbda4a4b7e0..aaef71910c9ab 100644
--- a/pandas/io/excel/_base.py
+++ b/pandas/io/excel/_base.py
@@ -3,11 +3,12 @@
from io import BufferedIOBase, BytesIO, RawIOBase
import os
from textwrap import fill
-from typing import Union
+from typing import Any, Mapping, Union
from pandas._config import config
from pandas._libs.parsers import STR_NA_VALUES
+from pandas._typing import StorageOptions
from pandas.errors import EmptyDataError
from pandas.util._decorators import Appender, deprecate_nonkeyword_arguments
@@ -199,6 +200,15 @@
Duplicate columns will be specified as 'X', 'X.1', ...'X.N', rather than
'X'...'X'. Passing in False will cause data to be overwritten if there
are duplicate names in the columns.
+storage_options : StorageOptions
+ Extra options that make sense for a particular storage connection, e.g.
+ host, port, username, password, etc., if using a URL that will
+ be parsed by ``fsspec``, e.g., starting "s3://", "gcs://". An error
+ will be raised if providing this argument with a local path or
+ a file-like buffer. See the fsspec and backend storage implementation
+ docs for the set of allowed keys and values
+
+ .. versionadded:: 1.2.0
Returns
-------
@@ -298,10 +308,11 @@ def read_excel(
skipfooter=0,
convert_float=True,
mangle_dupe_cols=True,
+ storage_options: StorageOptions = None,
):
if not isinstance(io, ExcelFile):
- io = ExcelFile(io, engine=engine)
+ io = ExcelFile(io, storage_options=storage_options, engine=engine)
elif engine and engine != io.engine:
raise ValueError(
"Engine should not be specified when passing "
@@ -336,12 +347,14 @@ def read_excel(
class _BaseExcelReader(metaclass=abc.ABCMeta):
- def __init__(self, filepath_or_buffer):
+ def __init__(self, filepath_or_buffer, storage_options: StorageOptions = None):
# If filepath_or_buffer is a url, load the data into a BytesIO
if is_url(filepath_or_buffer):
filepath_or_buffer = BytesIO(urlopen(filepath_or_buffer).read())
elif not isinstance(filepath_or_buffer, (ExcelFile, self._workbook_class)):
- filepath_or_buffer, _, _, _ = get_filepath_or_buffer(filepath_or_buffer)
+ filepath_or_buffer, _, _, _ = get_filepath_or_buffer(
+ filepath_or_buffer, storage_options=storage_options
+ )
if isinstance(filepath_or_buffer, self._workbook_class):
self.book = filepath_or_buffer
@@ -837,14 +850,16 @@ class ExcelFile:
from pandas.io.excel._pyxlsb import _PyxlsbReader
from pandas.io.excel._xlrd import _XlrdReader
- _engines = {
+ _engines: Mapping[str, Any] = {
"xlrd": _XlrdReader,
"openpyxl": _OpenpyxlReader,
"odf": _ODFReader,
"pyxlsb": _PyxlsbReader,
}
- def __init__(self, path_or_buffer, engine=None):
+ def __init__(
+ self, path_or_buffer, engine=None, storage_options: StorageOptions = None
+ ):
if engine is None:
engine = "xlrd"
if isinstance(path_or_buffer, (BufferedIOBase, RawIOBase)):
@@ -858,13 +873,14 @@ def __init__(self, path_or_buffer, engine=None):
raise ValueError(f"Unknown engine: {engine}")
self.engine = engine
+ self.storage_options = storage_options
# Could be a str, ExcelFile, Book, etc.
self.io = path_or_buffer
# Always a string
self._io = stringify_path(path_or_buffer)
- self._reader = self._engines[engine](self._io)
+ self._reader = self._engines[engine](self._io, storage_options=storage_options)
def __fspath__(self):
return self._io
diff --git a/pandas/io/excel/_odfreader.py b/pandas/io/excel/_odfreader.py
index 44abaf5d3b3c9..a6cd8f524503b 100644
--- a/pandas/io/excel/_odfreader.py
+++ b/pandas/io/excel/_odfreader.py
@@ -2,7 +2,7 @@
import numpy as np
-from pandas._typing import FilePathOrBuffer, Scalar
+from pandas._typing import FilePathOrBuffer, Scalar, StorageOptions
from pandas.compat._optional import import_optional_dependency
import pandas as pd
@@ -16,13 +16,19 @@ class _ODFReader(_BaseExcelReader):
Parameters
----------
- filepath_or_buffer: string, path to be parsed or
+ filepath_or_buffer : string, path to be parsed or
an open readable stream.
+ storage_options : StorageOptions
+ passed to fsspec for appropriate URLs (see ``get_filepath_or_buffer``)
"""
- def __init__(self, filepath_or_buffer: FilePathOrBuffer):
+ def __init__(
+ self,
+ filepath_or_buffer: FilePathOrBuffer,
+ storage_options: StorageOptions = None,
+ ):
import_optional_dependency("odf")
- super().__init__(filepath_or_buffer)
+ super().__init__(filepath_or_buffer, storage_options=storage_options)
@property
def _workbook_class(self):
diff --git a/pandas/io/excel/_openpyxl.py b/pandas/io/excel/_openpyxl.py
index 03a30cbd62f9a..73239190604db 100644
--- a/pandas/io/excel/_openpyxl.py
+++ b/pandas/io/excel/_openpyxl.py
@@ -2,7 +2,7 @@
import numpy as np
-from pandas._typing import FilePathOrBuffer, Scalar
+from pandas._typing import FilePathOrBuffer, Scalar, StorageOptions
from pandas.compat._optional import import_optional_dependency
from pandas.io.excel._base import ExcelWriter, _BaseExcelReader
@@ -467,7 +467,11 @@ def write_cells(
class _OpenpyxlReader(_BaseExcelReader):
- def __init__(self, filepath_or_buffer: FilePathOrBuffer) -> None:
+ def __init__(
+ self,
+ filepath_or_buffer: FilePathOrBuffer,
+ storage_options: StorageOptions = None,
+ ) -> None:
"""
Reader using openpyxl engine.
@@ -475,9 +479,11 @@ def __init__(self, filepath_or_buffer: FilePathOrBuffer) -> None:
----------
filepath_or_buffer : string, path object or Workbook
Object to be parsed.
+ storage_options : StorageOptions
+ passed to fsspec for appropriate URLs (see ``get_filepath_or_buffer``)
"""
import_optional_dependency("openpyxl")
- super().__init__(filepath_or_buffer)
+ super().__init__(filepath_or_buffer, storage_options=storage_options)
@property
def _workbook_class(self):
diff --git a/pandas/io/excel/_pyxlsb.py b/pandas/io/excel/_pyxlsb.py
index 0d96c8c4acdb8..c0e281ff6c2da 100644
--- a/pandas/io/excel/_pyxlsb.py
+++ b/pandas/io/excel/_pyxlsb.py
@@ -1,25 +1,31 @@
from typing import List
-from pandas._typing import FilePathOrBuffer, Scalar
+from pandas._typing import FilePathOrBuffer, Scalar, StorageOptions
from pandas.compat._optional import import_optional_dependency
from pandas.io.excel._base import _BaseExcelReader
class _PyxlsbReader(_BaseExcelReader):
- def __init__(self, filepath_or_buffer: FilePathOrBuffer):
+ def __init__(
+ self,
+ filepath_or_buffer: FilePathOrBuffer,
+ storage_options: StorageOptions = None,
+ ):
"""
Reader using pyxlsb engine.
Parameters
----------
- filepath_or_buffer: str, path object, or Workbook
+ filepath_or_buffer : str, path object, or Workbook
Object to be parsed.
+ storage_options : StorageOptions
+ passed to fsspec for appropriate URLs (see ``get_filepath_or_buffer``)
"""
import_optional_dependency("pyxlsb")
# This will call load_workbook on the filepath or buffer
# And set the result to the book-attribute
- super().__init__(filepath_or_buffer)
+ super().__init__(filepath_or_buffer, storage_options=storage_options)
@property
def _workbook_class(self):
diff --git a/pandas/io/excel/_xlrd.py b/pandas/io/excel/_xlrd.py
index af82c15fd6b66..ff1b3c8bdb964 100644
--- a/pandas/io/excel/_xlrd.py
+++ b/pandas/io/excel/_xlrd.py
@@ -2,13 +2,14 @@
import numpy as np
+from pandas._typing import StorageOptions
from pandas.compat._optional import import_optional_dependency
from pandas.io.excel._base import _BaseExcelReader
class _XlrdReader(_BaseExcelReader):
- def __init__(self, filepath_or_buffer):
+ def __init__(self, filepath_or_buffer, storage_options: StorageOptions = None):
"""
Reader using xlrd engine.
@@ -16,10 +17,12 @@ def __init__(self, filepath_or_buffer):
----------
filepath_or_buffer : string, path object or Workbook
Object to be parsed.
+ storage_options : StorageOptions
+ passed to fsspec for appropriate URLs (see ``get_filepath_or_buffer``)
"""
err_msg = "Install xlrd >= 1.0.0 for Excel support"
import_optional_dependency("xlrd", extra=err_msg)
- super().__init__(filepath_or_buffer)
+ super().__init__(filepath_or_buffer, storage_options=storage_options)
@property
def _workbook_class(self):
diff --git a/pandas/io/feather_format.py b/pandas/io/feather_format.py
index 2c664e73b9463..2d86fa44f22a4 100644
--- a/pandas/io/feather_format.py
+++ b/pandas/io/feather_format.py
@@ -1,5 +1,6 @@
""" feather-format compat """
+from pandas._typing import StorageOptions
from pandas.compat._optional import import_optional_dependency
from pandas import DataFrame, Int64Index, RangeIndex
@@ -7,7 +8,7 @@
from pandas.io.common import get_filepath_or_buffer
-def to_feather(df: DataFrame, path, storage_options=None, **kwargs):
+def to_feather(df: DataFrame, path, storage_options: StorageOptions = None, **kwargs):
"""
Write a DataFrame to the binary Feather format.
@@ -77,7 +78,9 @@ def to_feather(df: DataFrame, path, storage_options=None, **kwargs):
feather.write_feather(df, path, **kwargs)
-def read_feather(path, columns=None, use_threads: bool = True, storage_options=None):
+def read_feather(
+ path, columns=None, use_threads: bool = True, storage_options: StorageOptions = None
+):
"""
Load a feather-format object from the file path.
diff --git a/pandas/io/parsers.py b/pandas/io/parsers.py
index 5d49757ce7d58..983aa56324083 100644
--- a/pandas/io/parsers.py
+++ b/pandas/io/parsers.py
@@ -20,7 +20,7 @@
import pandas._libs.parsers as parsers
from pandas._libs.parsers import STR_NA_VALUES
from pandas._libs.tslibs import parsing
-from pandas._typing import FilePathOrBuffer, Union
+from pandas._typing import FilePathOrBuffer, StorageOptions, Union
from pandas.errors import (
AbstractMethodError,
EmptyDataError,
@@ -596,7 +596,7 @@ def read_csv(
low_memory=_c_parser_defaults["low_memory"],
memory_map=False,
float_precision=None,
- storage_options=None,
+ storage_options: StorageOptions = None,
):
# gh-23761
#
diff --git a/pandas/tests/io/conftest.py b/pandas/tests/io/conftest.py
index fcee25c258efa..518f31d73efa9 100644
--- a/pandas/tests/io/conftest.py
+++ b/pandas/tests/io/conftest.py
@@ -1,4 +1,7 @@
import os
+import shlex
+import subprocess
+import time
import pytest
@@ -31,10 +34,62 @@ def feather_file(datapath):
@pytest.fixture
-def s3_resource(tips_file, jsonl_file, feather_file):
+def s3so():
+ return dict(client_kwargs={"endpoint_url": "http://127.0.0.1:5555/"})
+
+
+@pytest.fixture(scope="module")
+def s3_base():
"""
Fixture for mocking S3 interaction.
+ Sets up moto server in separate process
+ """
+ pytest.importorskip("s3fs")
+ pytest.importorskip("boto3")
+ requests = pytest.importorskip("requests")
+
+ with tm.ensure_safe_environment_variables():
+ # temporary workaround as moto fails for botocore >= 1.11 otherwise,
+ # see https://github.com/spulec/moto/issues/1924 & 1952
+ os.environ.setdefault("AWS_ACCESS_KEY_ID", "foobar_key")
+ os.environ.setdefault("AWS_SECRET_ACCESS_KEY", "foobar_secret")
+
+ pytest.importorskip("moto", minversion="1.3.14")
+ pytest.importorskip("flask") # server mode needs flask too
+
+ # Launching moto in server mode, i.e., as a separate process
+ # with an S3 endpoint on localhost
+
+ endpoint_uri = "http://127.0.0.1:5555/"
+
+ # pipe to null to avoid logging in terminal
+ proc = subprocess.Popen(
+ shlex.split("moto_server s3 -p 5555"), stdout=subprocess.DEVNULL
+ )
+
+ timeout = 5
+ while timeout > 0:
+ try:
+ # OK to go once server is accepting connections
+ r = requests.get(endpoint_uri)
+ if r.ok:
+ break
+ except Exception:
+ pass
+ timeout -= 0.1
+ time.sleep(0.1)
+ yield
+
+ proc.terminate()
+ proc.wait()
+
+
+@pytest.fixture()
+def s3_resource(s3_base, tips_file, jsonl_file, feather_file):
+ """
+ Sets up S3 bucket with contents
+
The primary bucket name is "pandas-test". The following datasets
are loaded.
@@ -46,45 +101,59 @@ def s3_resource(tips_file, jsonl_file, feather_file):
A private bucket "cant_get_it" is also created. The boto3 s3 resource
is yielded by the fixture.
"""
- s3fs = pytest.importorskip("s3fs")
- boto3 = pytest.importorskip("boto3")
-
- with tm.ensure_safe_environment_variables():
- # temporary workaround as moto fails for botocore >= 1.11 otherwise,
- # see https://github.com/spulec/moto/issues/1924 & 1952
- os.environ.setdefault("AWS_ACCESS_KEY_ID", "foobar_key")
- os.environ.setdefault("AWS_SECRET_ACCESS_KEY", "foobar_secret")
-
- moto = pytest.importorskip("moto")
-
- test_s3_files = [
- ("tips#1.csv", tips_file),
- ("tips.csv", tips_file),
- ("tips.csv.gz", tips_file + ".gz"),
- ("tips.csv.bz2", tips_file + ".bz2"),
- ("items.jsonl", jsonl_file),
- ("simple_dataset.feather", feather_file),
- ]
-
- def add_tips_files(bucket_name):
- for s3_key, file_name in test_s3_files:
- with open(file_name, "rb") as f:
- conn.Bucket(bucket_name).put_object(Key=s3_key, Body=f)
-
- try:
- s3 = moto.mock_s3()
- s3.start()
-
- # see gh-16135
- bucket = "pandas-test"
- conn = boto3.resource("s3", region_name="us-east-1")
-
- conn.create_bucket(Bucket=bucket)
- add_tips_files(bucket)
-
- conn.create_bucket(Bucket="cant_get_it", ACL="private")
- add_tips_files("cant_get_it")
- s3fs.S3FileSystem.clear_instance_cache()
- yield conn
- finally:
- s3.stop()
+ import boto3
+ import s3fs
+
+ test_s3_files = [
+ ("tips#1.csv", tips_file),
+ ("tips.csv", tips_file),
+ ("tips.csv.gz", tips_file + ".gz"),
+ ("tips.csv.bz2", tips_file + ".bz2"),
+ ("items.jsonl", jsonl_file),
+ ("simple_dataset.feather", feather_file),
+ ]
+
+ def add_tips_files(bucket_name):
+ for s3_key, file_name in test_s3_files:
+ with open(file_name, "rb") as f:
+ cli.put_object(Bucket=bucket_name, Key=s3_key, Body=f)
+
+ bucket = "pandas-test"
+ endpoint_uri = "http://127.0.0.1:5555/"
+ conn = boto3.resource("s3", endpoint_url=endpoint_uri)
+ cli = boto3.client("s3", endpoint_url=endpoint_uri)
+
+ try:
+ cli.create_bucket(Bucket=bucket)
+ except: # noqa
+        # OK if bucket already exists
+ pass
+ try:
+ cli.create_bucket(Bucket="cant_get_it", ACL="private")
+ except: # noqa
+        # OK if bucket already exists
+ pass
+ timeout = 2
+ while not cli.list_buckets()["Buckets"] and timeout > 0:
+ time.sleep(0.1)
+ timeout -= 0.1
+
+ add_tips_files(bucket)
+ add_tips_files("cant_get_it")
+ s3fs.S3FileSystem.clear_instance_cache()
+ yield conn
+
+ s3 = s3fs.S3FileSystem(client_kwargs={"endpoint_url": "http://127.0.0.1:5555/"})
+
+ try:
+ s3.rm(bucket, recursive=True)
+ except: # noqa
+ pass
+ try:
+ s3.rm("cant_get_it", recursive=True)
+ except: # noqa
+ pass
+ timeout = 2
+ while cli.list_buckets()["Buckets"] and timeout > 0:
+ time.sleep(0.1)
+ timeout -= 0.1
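Both the `s3_base` fixture and the bucket setup above poll with a decrementing timeout and `time.sleep(0.1)` until the moto server (or the bucket listing) responds. That readiness loop can be expressed as one generic helper; this is a sketch, not a pandas or moto utility:

```python
import time


def wait_until(predicate, timeout: float = 5.0, interval: float = 0.1) -> bool:
    """Poll ``predicate`` until it returns truthy or ``timeout`` seconds pass.

    Exceptions from ``predicate`` are swallowed, matching the fixture above,
    where requests.get raises until the moto server is accepting connections.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            if predicate():
                return True
        except Exception:
            pass  # not ready yet; keep polling
        time.sleep(interval)
    return False
```

With such a helper, the fixture's startup loop reduces to something like `assert wait_until(lambda: requests.get(endpoint_uri).ok)` before yielding.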
diff --git a/pandas/tests/io/excel/test_readers.py b/pandas/tests/io/excel/test_readers.py
index 51fbbf836a03f..431a50477fccc 100644
--- a/pandas/tests/io/excel/test_readers.py
+++ b/pandas/tests/io/excel/test_readers.py
@@ -606,13 +606,14 @@ def test_read_from_http_url(self, read_ext):
tm.assert_frame_equal(url_table, local_table)
@td.skip_if_not_us_locale
- def test_read_from_s3_url(self, read_ext, s3_resource):
+ def test_read_from_s3_url(self, read_ext, s3_resource, s3so):
# Bucket "pandas-test" created in tests/io/conftest.py
with open("test1" + read_ext, "rb") as f:
s3_resource.Bucket("pandas-test").put_object(Key="test1" + read_ext, Body=f)
url = "s3://pandas-test/test1" + read_ext
- url_table = pd.read_excel(url)
+
+ url_table = pd.read_excel(url, storage_options=s3so)
local_table = pd.read_excel("test1" + read_ext)
tm.assert_frame_equal(url_table, local_table)
diff --git a/pandas/tests/io/json/test_compression.py b/pandas/tests/io/json/test_compression.py
index 182c21ed1d416..5bb205842269e 100644
--- a/pandas/tests/io/json/test_compression.py
+++ b/pandas/tests/io/json/test_compression.py
@@ -44,7 +44,11 @@ def test_with_s3_url(compression, s3_resource):
with open(path, "rb") as f:
s3_resource.Bucket("pandas-test").put_object(Key="test-1", Body=f)
- roundtripped_df = pd.read_json("s3://pandas-test/test-1", compression=compression)
+ roundtripped_df = pd.read_json(
+ "s3://pandas-test/test-1",
+ compression=compression,
+ storage_options=dict(client_kwargs={"endpoint_url": "http://127.0.0.1:5555/"}),
+ )
tm.assert_frame_equal(df, roundtripped_df)
diff --git a/pandas/tests/io/json/test_pandas.py b/pandas/tests/io/json/test_pandas.py
index 1280d0fd434d5..64a666079876f 100644
--- a/pandas/tests/io/json/test_pandas.py
+++ b/pandas/tests/io/json/test_pandas.py
@@ -1213,10 +1213,12 @@ def test_read_inline_jsonl(self):
tm.assert_frame_equal(result, expected)
@td.skip_if_not_us_locale
- def test_read_s3_jsonl(self, s3_resource):
+ def test_read_s3_jsonl(self, s3_resource, s3so):
# GH17200
- result = read_json("s3n://pandas-test/items.jsonl", lines=True)
+ result = read_json(
+ "s3n://pandas-test/items.jsonl", lines=True, storage_options=s3so
+ )
expected = DataFrame([[1, 2], [1, 2]], columns=["a", "b"])
tm.assert_frame_equal(result, expected)
@@ -1706,7 +1708,12 @@ def test_to_s3(self, s3_resource):
# GH 28375
mock_bucket_name, target_file = "pandas-test", "test.json"
df = DataFrame({"x": [1, 2, 3], "y": [2, 4, 6]})
- df.to_json(f"s3://{mock_bucket_name}/{target_file}")
+ df.to_json(
+ f"s3://{mock_bucket_name}/{target_file}",
+ storage_options=dict(
+ client_kwargs={"endpoint_url": "http://127.0.0.1:5555/"}
+ ),
+ )
timeout = 5
while True:
if target_file in (
diff --git a/pandas/tests/io/parser/test_network.py b/pandas/tests/io/parser/test_network.py
index b30a7b1ef34de..b8b03cbd14a1d 100644
--- a/pandas/tests/io/parser/test_network.py
+++ b/pandas/tests/io/parser/test_network.py
@@ -71,50 +71,62 @@ def tips_df(datapath):
@td.skip_if_not_us_locale()
class TestS3:
@td.skip_if_no("s3fs")
- def test_parse_public_s3_bucket(self, tips_df):
+ def test_parse_public_s3_bucket(self, tips_df, s3so):
# more of an integration test due to the not-public contents portion
# can probably mock this though.
for ext, comp in [("", None), (".gz", "gzip"), (".bz2", "bz2")]:
- df = read_csv("s3://pandas-test/tips.csv" + ext, compression=comp)
+ df = read_csv(
+ "s3://pandas-test/tips.csv" + ext,
+ compression=comp,
+ storage_options=s3so,
+ )
assert isinstance(df, DataFrame)
assert not df.empty
tm.assert_frame_equal(df, tips_df)
# Read public file from bucket with not-public contents
- df = read_csv("s3://cant_get_it/tips.csv")
+ df = read_csv("s3://cant_get_it/tips.csv", storage_options=s3so)
assert isinstance(df, DataFrame)
assert not df.empty
tm.assert_frame_equal(df, tips_df)
- def test_parse_public_s3n_bucket(self, tips_df):
+ def test_parse_public_s3n_bucket(self, tips_df, s3so):
# Read from AWS s3 as "s3n" URL
- df = read_csv("s3n://pandas-test/tips.csv", nrows=10)
+ df = read_csv("s3n://pandas-test/tips.csv", nrows=10, storage_options=s3so)
assert isinstance(df, DataFrame)
assert not df.empty
tm.assert_frame_equal(tips_df.iloc[:10], df)
- def test_parse_public_s3a_bucket(self, tips_df):
+ def test_parse_public_s3a_bucket(self, tips_df, s3so):
# Read from AWS s3 as "s3a" URL
- df = read_csv("s3a://pandas-test/tips.csv", nrows=10)
+ df = read_csv("s3a://pandas-test/tips.csv", nrows=10, storage_options=s3so)
assert isinstance(df, DataFrame)
assert not df.empty
tm.assert_frame_equal(tips_df.iloc[:10], df)
- def test_parse_public_s3_bucket_nrows(self, tips_df):
+ def test_parse_public_s3_bucket_nrows(self, tips_df, s3so):
for ext, comp in [("", None), (".gz", "gzip"), (".bz2", "bz2")]:
- df = read_csv("s3://pandas-test/tips.csv" + ext, nrows=10, compression=comp)
+ df = read_csv(
+ "s3://pandas-test/tips.csv" + ext,
+ nrows=10,
+ compression=comp,
+ storage_options=s3so,
+ )
assert isinstance(df, DataFrame)
assert not df.empty
tm.assert_frame_equal(tips_df.iloc[:10], df)
- def test_parse_public_s3_bucket_chunked(self, tips_df):
+ def test_parse_public_s3_bucket_chunked(self, tips_df, s3so):
# Read with a chunksize
chunksize = 5
for ext, comp in [("", None), (".gz", "gzip"), (".bz2", "bz2")]:
df_reader = read_csv(
- "s3://pandas-test/tips.csv" + ext, chunksize=chunksize, compression=comp
+ "s3://pandas-test/tips.csv" + ext,
+ chunksize=chunksize,
+ compression=comp,
+ storage_options=s3so,
)
assert df_reader.chunksize == chunksize
for i_chunk in [0, 1, 2]:
@@ -126,7 +138,7 @@ def test_parse_public_s3_bucket_chunked(self, tips_df):
true_df = tips_df.iloc[chunksize * i_chunk : chunksize * (i_chunk + 1)]
tm.assert_frame_equal(true_df, df)
- def test_parse_public_s3_bucket_chunked_python(self, tips_df):
+ def test_parse_public_s3_bucket_chunked_python(self, tips_df, s3so):
# Read with a chunksize using the Python parser
chunksize = 5
for ext, comp in [("", None), (".gz", "gzip"), (".bz2", "bz2")]:
@@ -135,6 +147,7 @@ def test_parse_public_s3_bucket_chunked_python(self, tips_df):
chunksize=chunksize,
compression=comp,
engine="python",
+ storage_options=s3so,
)
assert df_reader.chunksize == chunksize
for i_chunk in [0, 1, 2]:
@@ -145,46 +158,53 @@ def test_parse_public_s3_bucket_chunked_python(self, tips_df):
true_df = tips_df.iloc[chunksize * i_chunk : chunksize * (i_chunk + 1)]
tm.assert_frame_equal(true_df, df)
- def test_parse_public_s3_bucket_python(self, tips_df):
+ def test_parse_public_s3_bucket_python(self, tips_df, s3so):
for ext, comp in [("", None), (".gz", "gzip"), (".bz2", "bz2")]:
df = read_csv(
- "s3://pandas-test/tips.csv" + ext, engine="python", compression=comp
+ "s3://pandas-test/tips.csv" + ext,
+ engine="python",
+ compression=comp,
+ storage_options=s3so,
)
assert isinstance(df, DataFrame)
assert not df.empty
tm.assert_frame_equal(df, tips_df)
- def test_infer_s3_compression(self, tips_df):
+ def test_infer_s3_compression(self, tips_df, s3so):
for ext in ["", ".gz", ".bz2"]:
df = read_csv(
- "s3://pandas-test/tips.csv" + ext, engine="python", compression="infer"
+ "s3://pandas-test/tips.csv" + ext,
+ engine="python",
+ compression="infer",
+ storage_options=s3so,
)
assert isinstance(df, DataFrame)
assert not df.empty
tm.assert_frame_equal(df, tips_df)
- def test_parse_public_s3_bucket_nrows_python(self, tips_df):
+ def test_parse_public_s3_bucket_nrows_python(self, tips_df, s3so):
for ext, comp in [("", None), (".gz", "gzip"), (".bz2", "bz2")]:
df = read_csv(
"s3://pandas-test/tips.csv" + ext,
engine="python",
nrows=10,
compression=comp,
+ storage_options=s3so,
)
assert isinstance(df, DataFrame)
assert not df.empty
tm.assert_frame_equal(tips_df.iloc[:10], df)
- def test_read_s3_fails(self):
+ def test_read_s3_fails(self, s3so):
with pytest.raises(IOError):
- read_csv("s3://nyqpug/asdf.csv")
+ read_csv("s3://nyqpug/asdf.csv", storage_options=s3so)
# Receive a permission error when trying to read a private bucket.
# It's irrelevant here that this isn't actually a table.
with pytest.raises(IOError):
read_csv("s3://cant_get_it/file.csv")
- def test_write_s3_csv_fails(self, tips_df):
+ def test_write_s3_csv_fails(self, tips_df, s3so):
# GH 32486
# Attempting to write to an invalid S3 path should raise
import botocore
@@ -195,10 +215,12 @@ def test_write_s3_csv_fails(self, tips_df):
error = (FileNotFoundError, botocore.exceptions.ClientError)
with pytest.raises(error, match="The specified bucket does not exist"):
- tips_df.to_csv("s3://an_s3_bucket_data_doesnt_exit/not_real.csv")
+ tips_df.to_csv(
+ "s3://an_s3_bucket_data_doesnt_exit/not_real.csv", storage_options=s3so
+ )
@td.skip_if_no("pyarrow")
- def test_write_s3_parquet_fails(self, tips_df):
+ def test_write_s3_parquet_fails(self, tips_df, s3so):
# GH 27679
# Attempting to write to an invalid S3 path should raise
import botocore
@@ -209,7 +231,10 @@ def test_write_s3_parquet_fails(self, tips_df):
error = (FileNotFoundError, botocore.exceptions.ClientError)
with pytest.raises(error, match="The specified bucket does not exist"):
- tips_df.to_parquet("s3://an_s3_bucket_data_doesnt_exit/not_real.parquet")
+ tips_df.to_parquet(
+ "s3://an_s3_bucket_data_doesnt_exit/not_real.parquet",
+ storage_options=s3so,
+ )
def test_read_csv_handles_boto_s3_object(self, s3_resource, tips_file):
# see gh-16135
@@ -225,7 +250,7 @@ def test_read_csv_handles_boto_s3_object(self, s3_resource, tips_file):
expected = read_csv(tips_file)
tm.assert_frame_equal(result, expected)
- def test_read_csv_chunked_download(self, s3_resource, caplog):
+ def test_read_csv_chunked_download(self, s3_resource, caplog, s3so):
        # 8 MB, S3FS uses 5MB chunks
import s3fs
@@ -245,18 +270,20 @@ def test_read_csv_chunked_download(self, s3_resource, caplog):
s3fs.S3FileSystem.clear_instance_cache()
with caplog.at_level(logging.DEBUG, logger="s3fs"):
- read_csv("s3://pandas-test/large-file.csv", nrows=5)
+ read_csv("s3://pandas-test/large-file.csv", nrows=5, storage_options=s3so)
# log of fetch_range (start, stop)
assert (0, 5505024) in (x.args[-2:] for x in caplog.records)
- def test_read_s3_with_hash_in_key(self, tips_df):
+ def test_read_s3_with_hash_in_key(self, tips_df, s3so):
# GH 25945
- result = read_csv("s3://pandas-test/tips#1.csv")
+ result = read_csv("s3://pandas-test/tips#1.csv", storage_options=s3so)
tm.assert_frame_equal(tips_df, result)
@td.skip_if_no("pyarrow")
- def test_read_feather_s3_file_path(self, feather_file):
+ def test_read_feather_s3_file_path(self, feather_file, s3so):
# GH 29055
expected = read_feather(feather_file)
- res = read_feather("s3://pandas-test/simple_dataset.feather")
+ res = read_feather(
+ "s3://pandas-test/simple_dataset.feather", storage_options=s3so
+ )
tm.assert_frame_equal(expected, res)
diff --git a/pandas/tests/io/test_fsspec.py b/pandas/tests/io/test_fsspec.py
index 3e89f6ca4ae16..666da677d702e 100644
--- a/pandas/tests/io/test_fsspec.py
+++ b/pandas/tests/io/test_fsspec.py
@@ -131,27 +131,38 @@ def test_fastparquet_options(fsspectest):
@td.skip_if_no("s3fs")
-def test_from_s3_csv(s3_resource, tips_file):
- tm.assert_equal(read_csv("s3://pandas-test/tips.csv"), read_csv(tips_file))
+def test_from_s3_csv(s3_resource, tips_file, s3so):
+ tm.assert_equal(
+ read_csv("s3://pandas-test/tips.csv", storage_options=s3so), read_csv(tips_file)
+ )
# the following are decompressed by pandas, not fsspec
- tm.assert_equal(read_csv("s3://pandas-test/tips.csv.gz"), read_csv(tips_file))
- tm.assert_equal(read_csv("s3://pandas-test/tips.csv.bz2"), read_csv(tips_file))
+ tm.assert_equal(
+ read_csv("s3://pandas-test/tips.csv.gz", storage_options=s3so),
+ read_csv(tips_file),
+ )
+ tm.assert_equal(
+ read_csv("s3://pandas-test/tips.csv.bz2", storage_options=s3so),
+ read_csv(tips_file),
+ )
@pytest.mark.parametrize("protocol", ["s3", "s3a", "s3n"])
@td.skip_if_no("s3fs")
-def test_s3_protocols(s3_resource, tips_file, protocol):
+def test_s3_protocols(s3_resource, tips_file, protocol, s3so):
tm.assert_equal(
- read_csv("%s://pandas-test/tips.csv" % protocol), read_csv(tips_file)
+ read_csv("%s://pandas-test/tips.csv" % protocol, storage_options=s3so),
+ read_csv(tips_file),
)
@td.skip_if_no("s3fs")
@td.skip_if_no("fastparquet")
-def test_s3_parquet(s3_resource):
+def test_s3_parquet(s3_resource, s3so):
fn = "s3://pandas-test/test.parquet"
- df1.to_parquet(fn, index=False, engine="fastparquet", compression=None)
- df2 = read_parquet(fn, engine="fastparquet")
+ df1.to_parquet(
+ fn, index=False, engine="fastparquet", compression=None, storage_options=s3so
+ )
+ df2 = read_parquet(fn, engine="fastparquet", storage_options=s3so)
tm.assert_equal(df1, df2)
diff --git a/pandas/tests/io/test_parquet.py b/pandas/tests/io/test_parquet.py
index 82157f3d722a9..3a3ba99484a3a 100644
--- a/pandas/tests/io/test_parquet.py
+++ b/pandas/tests/io/test_parquet.py
@@ -158,6 +158,10 @@ def check_round_trip(
"""
write_kwargs = write_kwargs or {"compression": None}
read_kwargs = read_kwargs or {}
+ if isinstance(path, str) and "s3://" in path:
+ s3so = dict(client_kwargs={"endpoint_url": "http://127.0.0.1:5555/"})
+ read_kwargs["storage_options"] = s3so
+ write_kwargs["storage_options"] = s3so
if expected is None:
expected = df
@@ -537,9 +541,11 @@ def test_categorical(self, pa):
expected = df.astype(object)
check_round_trip(df, pa, expected=expected)
- def test_s3_roundtrip_explicit_fs(self, df_compat, s3_resource, pa):
+ def test_s3_roundtrip_explicit_fs(self, df_compat, s3_resource, pa, s3so):
s3fs = pytest.importorskip("s3fs")
- s3 = s3fs.S3FileSystem()
+ if LooseVersion(pyarrow.__version__) <= LooseVersion("0.17.0"):
+ pytest.skip()
+ s3 = s3fs.S3FileSystem(**s3so)
kw = dict(filesystem=s3)
check_round_trip(
df_compat,
@@ -550,6 +556,8 @@ def test_s3_roundtrip_explicit_fs(self, df_compat, s3_resource, pa):
)
def test_s3_roundtrip(self, df_compat, s3_resource, pa):
+ if LooseVersion(pyarrow.__version__) <= LooseVersion("0.17.0"):
+ pytest.skip()
# GH #19134
check_round_trip(df_compat, pa, path="s3://pandas-test/pyarrow.parquet")
@@ -560,10 +568,13 @@ def test_s3_roundtrip_for_dir(self, df_compat, s3_resource, pa, partition_col):
# https://github.com/apache/arrow/blob/master/python/pyarrow/tests/test_parquet.py#L2716
# As per pyarrow partitioned columns become 'categorical' dtypes
# and are added to back of dataframe on read
+ if partition_col and pd.compat.is_platform_windows():
+ pytest.skip("pyarrow/win incompatibility #35791")
expected_df = df_compat.copy()
if partition_col:
expected_df[partition_col] = expected_df[partition_col].astype("category")
+
check_round_trip(
df_compat,
pa,
diff --git a/requirements-dev.txt b/requirements-dev.txt
index deaed8ab9d5f1..2fbb20ddfd3bf 100644
--- a/requirements-dev.txt
+++ b/requirements-dev.txt
@@ -32,6 +32,7 @@ boto3
botocore>=1.11
hypothesis>=3.82
moto
+flask
pytest>=5.0.1
pytest-cov
pytest-xdist>=1.21
Changes moto for s3 tests from monkeypatched/mocking to server mode. This allows aiobotocore exceptions to raise correctly, which the tests need in order to pass; the change is required by the upcoming async release of s3fs, but also works with the old synchronous (botocore) version. A release of s3fs without this change would break the pandas tests.
Note: since I now need to pass storage_options to the various s3 calls in the tests to make them pass (giving the endpoint of the moto s3 server), I also needed to plumb storage_options through the Excel IO, which was previously missing, and update the Excel tests.
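The `s3so` fixture threaded through the tests above is essentially just a storage_options dict; its exact definition is outside this excerpt, so the sketch below is an assumption based on the endpoint URL hard-coded in the `check_round_trip` hunk of `test_parquet.py`:

```python
# Sketch of the storage_options payload forwarded to every S3-touching
# call in the tests. The endpoint URL matches the moto server address
# hard-coded in the test_parquet.py hunk above; the real `s3so` fixture
# definition is not shown in this diff.
s3so = dict(client_kwargs={"endpoint_url": "http://127.0.0.1:5555/"})

# Each pandas S3 call then receives it explicitly, e.g.:
#   read_csv("s3://pandas-test/tips.csv", storage_options=s3so)
```

Passing the endpoint this way is what lets the tests talk to a real (local) moto server instead of relying on monkeypatched botocore internals.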
- [ ] closes #xxxx
- [x] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/35655 | 2020-08-10T15:40:01Z | 2020-08-21T20:14:03Z | 2020-08-21T20:14:03Z | 2020-08-22T19:26:28Z |
BUG: GH-35558 merge_asof tolerance error | diff --git a/doc/source/whatsnew/v1.1.1.rst b/doc/source/whatsnew/v1.1.1.rst
index cdc244ca193b4..b37103910afab 100644
--- a/doc/source/whatsnew/v1.1.1.rst
+++ b/doc/source/whatsnew/v1.1.1.rst
@@ -22,6 +22,7 @@ Fixed regressions
- Fixed regression in :meth:`DataFrame.shift` with ``axis=1`` and heterogeneous dtypes (:issue:`35488`)
- Fixed regression in ``.groupby(..).rolling(..)`` where a segfault would occur with ``center=True`` and an odd number of values (:issue:`35552`)
- Fixed regression in :meth:`DataFrame.apply` where functions that altered the input in-place only operated on a single row (:issue:`35462`)
+- Fixed regression where :meth:`DataFrame.merge_asof` would raise a ``UnboundLocalError`` when ``left_index`` , ``right_index`` and ``tolerance`` were set (:issue:`35558`)
- Fixed regression in ``.groupby(..).rolling(..)`` where a custom ``BaseIndexer`` would be ignored (:issue:`35557`)
.. ---------------------------------------------------------------------------
diff --git a/pandas/core/reshape/merge.py b/pandas/core/reshape/merge.py
index 27b331babe692..2349cb1dcc0c7 100644
--- a/pandas/core/reshape/merge.py
+++ b/pandas/core/reshape/merge.py
@@ -1667,7 +1667,7 @@ def _get_merge_keys(self):
msg = (
f"incompatible tolerance {self.tolerance}, must be compat "
- f"with type {repr(lk.dtype)}"
+ f"with type {repr(lt.dtype)}"
)
if needs_i8_conversion(lt):
diff --git a/pandas/tests/reshape/merge/test_merge_asof.py b/pandas/tests/reshape/merge/test_merge_asof.py
index 9b09f0033715d..895de2b748c34 100644
--- a/pandas/tests/reshape/merge/test_merge_asof.py
+++ b/pandas/tests/reshape/merge/test_merge_asof.py
@@ -1339,3 +1339,25 @@ def test_merge_index_column_tz(self):
index=pd.Index([0, 1, 2, 3, 4]),
)
tm.assert_frame_equal(result, expected)
+
+ def test_left_index_right_index_tolerance(self):
+ # https://github.com/pandas-dev/pandas/issues/35558
+ dr1 = pd.date_range(
+ start="1/1/2020", end="1/20/2020", freq="2D"
+ ) + pd.Timedelta(seconds=0.4)
+ dr2 = pd.date_range(start="1/1/2020", end="2/1/2020")
+
+ df1 = pd.DataFrame({"val1": "foo"}, index=pd.DatetimeIndex(dr1))
+ df2 = pd.DataFrame({"val2": "bar"}, index=pd.DatetimeIndex(dr2))
+
+ expected = pd.DataFrame(
+ {"val1": "foo", "val2": "bar"}, index=pd.DatetimeIndex(dr1)
+ )
+ result = pd.merge_asof(
+ df1,
+ df2,
+ left_index=True,
+ right_index=True,
+ tolerance=pd.Timedelta(seconds=0.5),
+ )
+ tm.assert_frame_equal(result, expected)
| - [x] closes #35558
- [x] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] whatsnew entry
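For reference, the regression test added above boils down to this minimal reproduction (same data as the test; on an affected build the `merge_asof` call raised `UnboundLocalError` while formatting the tolerance error message, and after the `lk` → `lt` fix every left row matches):

```python
import pandas as pd

# Index-on-index merge_asof with a sub-second tolerance (GH-35558).
dr1 = pd.date_range(start="1/1/2020", end="1/20/2020", freq="2D") + pd.Timedelta(
    seconds=0.4
)
dr2 = pd.date_range(start="1/1/2020", end="2/1/2020")

df1 = pd.DataFrame({"val1": "foo"}, index=pd.DatetimeIndex(dr1))
df2 = pd.DataFrame({"val2": "bar"}, index=pd.DatetimeIndex(dr2))

# Each left timestamp is 0.4s after a right timestamp, inside the 0.5s
# tolerance, so every row pairs up.
result = pd.merge_asof(
    df1,
    df2,
    left_index=True,
    right_index=True,
    tolerance=pd.Timedelta(seconds=0.5),
)
```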
| https://api.github.com/repos/pandas-dev/pandas/pulls/35654 | 2020-08-10T15:07:53Z | 2020-08-13T10:15:52Z | 2020-08-13T10:15:52Z | 2020-08-13T10:17:17Z |
Backport PR #35522 on branch 1.1.x (BUG: Fix assert_equal when check_exact=True for non-numeric dtypes #3…) | diff --git a/doc/source/whatsnew/v1.1.1.rst b/doc/source/whatsnew/v1.1.1.rst
index f0ad9d1ca3b0f..2fa4c12d24172 100644
--- a/doc/source/whatsnew/v1.1.1.rst
+++ b/doc/source/whatsnew/v1.1.1.rst
@@ -17,6 +17,7 @@ Fixed regressions
- Fixed regression where :meth:`DataFrame.to_numpy` would raise a ``RuntimeError`` for mixed dtypes when converting to ``str`` (:issue:`35455`)
- Fixed regression where :func:`read_csv` would raise a ``ValueError`` when ``pandas.options.mode.use_inf_as_na`` was set to ``True`` (:issue:`35493`).
+- Fixed regression where :func:`pandas.testing.assert_series_equal` would raise an error when non-numeric dtypes were passed with ``check_exact=True`` (:issue:`35446`)
- Fixed regression in :class:`pandas.core.groupby.RollingGroupby` where column selection was ignored (:issue:`35486`)
- Fixed regression in :meth:`DataFrame.shift` with ``axis=1`` and heterogeneous dtypes (:issue:`35488`)
- Fixed regression in ``.groupby(..).rolling(..)`` where a segfault would occur with ``center=True`` and an odd number of values (:issue:`35552`)
diff --git a/pandas/_testing.py b/pandas/_testing.py
index a020fbff3553a..713f29466f097 100644
--- a/pandas/_testing.py
+++ b/pandas/_testing.py
@@ -1339,10 +1339,8 @@ def assert_series_equal(
else:
assert_attr_equal("dtype", left, right, obj=f"Attributes of {obj}")
- if check_exact:
- if not is_numeric_dtype(left.dtype):
- raise AssertionError("check_exact may only be used with numeric Series")
-
+ if check_exact and is_numeric_dtype(left.dtype) and is_numeric_dtype(right.dtype):
+ # Only check exact if dtype is numeric
assert_numpy_array_equal(
left._values,
right._values,
diff --git a/pandas/tests/util/test_assert_series_equal.py b/pandas/tests/util/test_assert_series_equal.py
index 1284cc9d4f49b..a7b5aeac560e4 100644
--- a/pandas/tests/util/test_assert_series_equal.py
+++ b/pandas/tests/util/test_assert_series_equal.py
@@ -281,3 +281,18 @@ class MySeries(Series):
with pytest.raises(AssertionError, match="Series classes are different"):
tm.assert_series_equal(s3, s1, check_series_type=True)
+
+
+def test_series_equal_exact_for_nonnumeric():
+ # https://github.com/pandas-dev/pandas/issues/35446
+ s1 = Series(["a", "b"])
+ s2 = Series(["a", "b"])
+ s3 = Series(["b", "a"])
+
+ tm.assert_series_equal(s1, s2, check_exact=True)
+ tm.assert_series_equal(s2, s1, check_exact=True)
+
+ with pytest.raises(AssertionError):
+ tm.assert_series_equal(s1, s3, check_exact=True)
+ with pytest.raises(AssertionError):
+ tm.assert_series_equal(s3, s1, check_exact=True)
| Backport PR #35522: BUG: Fix assert_equal when check_exact=True for non-numeric dtypes #3… | https://api.github.com/repos/pandas-dev/pandas/pulls/35652 | 2020-08-10T13:32:51Z | 2020-08-10T14:53:32Z | 2020-08-10T14:53:32Z | 2020-08-10T14:54:15Z |
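The behaviour backported here is easy to check directly: after the fix, `check_exact=True` on a non-numeric Series compares values instead of raising "check_exact may only be used with numeric Series" up front. A sketch using the public `pd.testing` alias rather than the internal `pandas._testing` module the test file imports:

```python
import pandas as pd

s1 = pd.Series(["a", "b"])
s2 = pd.Series(["a", "b"])
s3 = pd.Series(["b", "a"])

# After the fix this no longer raises for non-numeric dtypes:
pd.testing.assert_series_equal(s1, s2, check_exact=True)

# ...while genuinely different values still fail:
try:
    pd.testing.assert_series_equal(s1, s3, check_exact=True)
    raised = False
except AssertionError:
    raised = True
```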
Backport PR #35639 on branch 1.1.x (BUG: RollingGroupby with closed and column selection no longer raises ValueError) | diff --git a/doc/source/whatsnew/v1.1.1.rst b/doc/source/whatsnew/v1.1.1.rst
index f0ad9d1ca3b0f..7f5182e3eaa6f 100644
--- a/doc/source/whatsnew/v1.1.1.rst
+++ b/doc/source/whatsnew/v1.1.1.rst
@@ -51,6 +51,10 @@ Categorical
-
-
+**Groupby/resample/rolling**
+
+- Bug in :class:`pandas.core.groupby.RollingGroupby` where passing ``closed`` with column selection would raise a ``ValueError`` (:issue:`35549`)
+
**Plotting**
-
diff --git a/pandas/core/window/common.py b/pandas/core/window/common.py
index 58e7841d4dde5..51a067427e867 100644
--- a/pandas/core/window/common.py
+++ b/pandas/core/window/common.py
@@ -52,7 +52,7 @@ def __init__(self, obj, *args, **kwargs):
kwargs.pop("parent", None)
groupby = kwargs.pop("groupby", None)
if groupby is None:
- groupby, obj = obj, obj.obj
+ groupby, obj = obj, obj._selected_obj
self._groupby = groupby
self._groupby.mutated = True
self._groupby.grouper.mutated = True
diff --git a/pandas/core/window/rolling.py b/pandas/core/window/rolling.py
index 87bcaa7d9512f..ea03a7f2f8162 100644
--- a/pandas/core/window/rolling.py
+++ b/pandas/core/window/rolling.py
@@ -2196,7 +2196,7 @@ def _apply(
# Cannot use _wrap_outputs because we calculate the result all at once
# Compose MultiIndex result from grouping levels then rolling level
# Aggregate the MultiIndex data as tuples then the level names
- grouped_object_index = self._groupby._selected_obj.index
+ grouped_object_index = self.obj.index
grouped_index_name = [grouped_object_index.name]
groupby_keys = [grouping.name for grouping in self._groupby.grouper._groupings]
result_index_names = groupby_keys + grouped_index_name
@@ -2220,10 +2220,6 @@ def _apply(
def _constructor(self):
return Rolling
- @cache_readonly
- def _selected_obj(self):
- return self._groupby._selected_obj
-
def _create_blocks(self, obj: FrameOrSeries):
"""
Split data into blocks & return conformed data.
@@ -2262,7 +2258,7 @@ def _get_window_indexer(self, window: int) -> GroupbyRollingIndexer:
rolling_indexer: Union[Type[FixedWindowIndexer], Type[VariableWindowIndexer]]
if self.is_freq_type:
rolling_indexer = VariableWindowIndexer
- index_array = self._groupby._selected_obj.index.asi8
+ index_array = self.obj.index.asi8
else:
rolling_indexer = FixedWindowIndexer
index_array = None
@@ -2279,7 +2275,7 @@ def _gotitem(self, key, ndim, subset=None):
# here so our index is carried thru to the selected obj
# when we do the splitting for the groupby
if self.on is not None:
- self._groupby.obj = self._groupby.obj.set_index(self._on)
+ self.obj = self.obj.set_index(self._on)
self.on = None
return super()._gotitem(key, ndim, subset=subset)
diff --git a/pandas/tests/window/test_grouper.py b/pandas/tests/window/test_grouper.py
index 5241b9548a442..e1dcac06c39cc 100644
--- a/pandas/tests/window/test_grouper.py
+++ b/pandas/tests/window/test_grouper.py
@@ -304,3 +304,54 @@ def test_groupby_subselect_rolling(self):
name="b",
)
tm.assert_series_equal(result, expected)
+
+ def test_groupby_rolling_subset_with_closed(self):
+ # GH 35549
+ df = pd.DataFrame(
+ {
+ "column1": range(6),
+ "column2": range(6),
+ "group": 3 * ["A", "B"],
+ "date": [pd.Timestamp("2019-01-01")] * 6,
+ }
+ )
+ result = (
+ df.groupby("group").rolling("1D", on="date", closed="left")["column1"].sum()
+ )
+ expected = Series(
+ [np.nan, 0.0, 2.0, np.nan, 1.0, 4.0],
+ index=pd.MultiIndex.from_tuples(
+ [("A", pd.Timestamp("2019-01-01"))] * 3
+ + [("B", pd.Timestamp("2019-01-01"))] * 3,
+ names=["group", "date"],
+ ),
+ name="column1",
+ )
+ tm.assert_series_equal(result, expected)
+
+ def test_groupby_subset_rolling_subset_with_closed(self):
+ # GH 35549
+ df = pd.DataFrame(
+ {
+ "column1": range(6),
+ "column2": range(6),
+ "group": 3 * ["A", "B"],
+ "date": [pd.Timestamp("2019-01-01")] * 6,
+ }
+ )
+
+ result = (
+ df.groupby("group")[["column1", "date"]]
+ .rolling("1D", on="date", closed="left")["column1"]
+ .sum()
+ )
+ expected = Series(
+ [np.nan, 0.0, 2.0, np.nan, 1.0, 4.0],
+ index=pd.MultiIndex.from_tuples(
+ [("A", pd.Timestamp("2019-01-01"))] * 3
+ + [("B", pd.Timestamp("2019-01-01"))] * 3,
+ names=["group", "date"],
+ ),
+ name="column1",
+ )
+ tm.assert_series_equal(result, expected)
| Backport PR #35639: BUG: RollingGroupby with closed and column selection no longer raises ValueError | https://api.github.com/repos/pandas-dev/pandas/pulls/35651 | 2020-08-10T13:32:34Z | 2020-08-10T14:48:34Z | 2020-08-10T14:48:34Z | 2020-08-10T14:48:34Z |
Refactor tables latex | diff --git a/pandas/io/formats/latex.py b/pandas/io/formats/latex.py
index 5d6f0a08ef2b5..715b8bbdf5672 100644
--- a/pandas/io/formats/latex.py
+++ b/pandas/io/formats/latex.py
@@ -121,10 +121,7 @@ def pad_empties(x):
else:
column_format = self.column_format
- if self.longtable:
- self._write_longtable_begin(buf, column_format)
- else:
- self._write_tabular_begin(buf, column_format)
+ self._write_tabular_begin(buf, column_format)
buf.write("\\toprule\n")
@@ -190,10 +187,7 @@ def pad_empties(x):
if self.multirow and i < len(strrows) - 1:
self._print_cline(buf, i, len(strcols))
- if self.longtable:
- self._write_longtable_end(buf)
- else:
- self._write_tabular_end(buf)
+ self._write_tabular_end(buf)
def _format_multicolumn(self, row: List[str], ilevels: int) -> List[str]:
r"""
@@ -288,7 +282,7 @@ def _write_tabular_begin(self, buf, column_format: str):
for 3 columns
"""
if self._table_float:
- # then write output in a nested table/tabular environment
+ # then write output in a nested table/tabular or longtable environment
if self.caption is None:
caption_ = ""
else:
@@ -304,12 +298,27 @@ def _write_tabular_begin(self, buf, column_format: str):
else:
position_ = f"[{self.position}]"
- buf.write(f"\\begin{{table}}{position_}\n\\centering{caption_}{label_}\n")
+ if self.longtable:
+ table_ = f"\\begin{{longtable}}{position_}{{{column_format}}}"
+ tabular_ = "\n"
+ else:
+ table_ = f"\\begin{{table}}{position_}\n\\centering"
+ tabular_ = f"\n\\begin{{tabular}}{{{column_format}}}\n"
+
+ if self.longtable and (self.caption is not None or self.label is not None):
+ # a double-backslash is required at the end of the line
+ # as discussed here:
+ # https://tex.stackexchange.com/questions/219138
+ backlash_ = "\\\\"
+ else:
+ backlash_ = ""
+ buf.write(f"{table_}{caption_}{label_}{backlash_}{tabular_}")
else:
- # then write output only in a tabular environment
- pass
-
- buf.write(f"\\begin{{tabular}}{{{column_format}}}\n")
+ if self.longtable:
+ tabletype_ = "longtable"
+ else:
+ tabletype_ = "tabular"
+ buf.write(f"\\begin{{{tabletype_}}}{{{column_format}}}\n")
def _write_tabular_end(self, buf):
"""
@@ -323,62 +332,12 @@ def _write_tabular_end(self, buf):
a string.
"""
- buf.write("\\bottomrule\n")
- buf.write("\\end{tabular}\n")
- if self._table_float:
- buf.write("\\end{table}\n")
- else:
- pass
-
- def _write_longtable_begin(self, buf, column_format: str):
- """
- Write the beginning of a longtable environment including caption and
- label if provided by user.
-
- Parameters
- ----------
- buf : string or file handle
- File path or object. If not specified, the result is returned as
- a string.
- column_format : str
- The columns format as specified in `LaTeX table format
- <https://en.wikibooks.org/wiki/LaTeX/Tables>`__ e.g 'rcl'
- for 3 columns
- """
- if self.caption is None:
- caption_ = ""
- else:
- caption_ = f"\\caption{{{self.caption}}}"
-
- if self.label is None:
- label_ = ""
- else:
- label_ = f"\\label{{{self.label}}}"
-
- if self.position is None:
- position_ = ""
+ if self.longtable:
+ buf.write("\\end{longtable}\n")
else:
- position_ = f"[{self.position}]"
-
- buf.write(
- f"\\begin{{longtable}}{position_}{{{column_format}}}\n{caption_}{label_}"
- )
- if self.caption is not None or self.label is not None:
- # a double-backslash is required at the end of the line
- # as discussed here:
- # https://tex.stackexchange.com/questions/219138
- buf.write("\\\\\n")
-
- @staticmethod
- def _write_longtable_end(buf):
- """
- Write the end of a longtable environment.
-
- Parameters
- ----------
- buf : string or file handle
- File path or object. If not specified, the result is returned as
- a string.
-
- """
- buf.write("\\end{longtable}\n")
+ buf.write("\\bottomrule\n")
+ buf.write("\\end{tabular}\n")
+ if self._table_float:
+ buf.write("\\end{table}\n")
+ else:
+ pass
diff --git a/pandas/tests/io/formats/test_to_latex.py b/pandas/tests/io/formats/test_to_latex.py
index 93ad3739e59c7..96a9ed2b86cf4 100644
--- a/pandas/tests/io/formats/test_to_latex.py
+++ b/pandas/tests/io/formats/test_to_latex.py
@@ -555,7 +555,8 @@ def test_to_latex_longtable_caption_label(self):
result_cl = df.to_latex(longtable=True, caption=the_caption, label=the_label)
expected_cl = r"""\begin{longtable}{lrl}
-\caption{a table in a \texttt{longtable} environment}\label{tab:longtable}\\
+\caption{a table in a \texttt{longtable} environment}
+\label{tab:longtable}\\
\toprule
{} & a & b \\
\midrule
| - [ ] tests added / passed
- [ ] passes `black pandas`
- [ ] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
As suggested [here](https://github.com/pandas-dev/pandas/pull/35284#issuecomment-665273834), the utility functions that begin or end table/tabular/longtable environments could be merged. | https://api.github.com/repos/pandas-dev/pandas/pulls/35649 | 2020-08-10T07:21:07Z | 2020-08-13T18:17:39Z | 2020-08-13T18:17:39Z | 2020-08-14T06:13:01Z
BUG: Support custom BaseIndexers in groupby.rolling | diff --git a/doc/source/whatsnew/v1.1.1.rst b/doc/source/whatsnew/v1.1.1.rst
index 415f9e508feb8..cdc244ca193b4 100644
--- a/doc/source/whatsnew/v1.1.1.rst
+++ b/doc/source/whatsnew/v1.1.1.rst
@@ -22,6 +22,7 @@ Fixed regressions
- Fixed regression in :meth:`DataFrame.shift` with ``axis=1`` and heterogeneous dtypes (:issue:`35488`)
- Fixed regression in ``.groupby(..).rolling(..)`` where a segfault would occur with ``center=True`` and an odd number of values (:issue:`35552`)
- Fixed regression in :meth:`DataFrame.apply` where functions that altered the input in-place only operated on a single row (:issue:`35462`)
+- Fixed regression in ``.groupby(..).rolling(..)`` where a custom ``BaseIndexer`` would be ignored (:issue:`35557`)
.. ---------------------------------------------------------------------------
diff --git a/pandas/core/window/indexers.py b/pandas/core/window/indexers.py
index bc36bdca982e8..7cbe34cdebf9f 100644
--- a/pandas/core/window/indexers.py
+++ b/pandas/core/window/indexers.py
@@ -1,6 +1,6 @@
"""Indexer objects for computing start/end window bounds for rolling operations"""
from datetime import timedelta
-from typing import Dict, Optional, Tuple, Type, Union
+from typing import Dict, Optional, Tuple, Type
import numpy as np
@@ -265,7 +265,8 @@ def __init__(
index_array: Optional[np.ndarray],
window_size: int,
groupby_indicies: Dict,
- rolling_indexer: Union[Type[FixedWindowIndexer], Type[VariableWindowIndexer]],
+ rolling_indexer: Type[BaseIndexer],
+ indexer_kwargs: Optional[Dict],
**kwargs,
):
"""
@@ -276,7 +277,10 @@ def __init__(
"""
self.groupby_indicies = groupby_indicies
self.rolling_indexer = rolling_indexer
- super().__init__(index_array, window_size, **kwargs)
+ self.indexer_kwargs = indexer_kwargs or {}
+ super().__init__(
+ index_array, self.indexer_kwargs.pop("window_size", window_size), **kwargs
+ )
@Appender(get_window_bounds_doc)
def get_window_bounds(
@@ -298,7 +302,9 @@ def get_window_bounds(
else:
index_array = self.index_array
indexer = self.rolling_indexer(
- index_array=index_array, window_size=self.window_size,
+ index_array=index_array,
+ window_size=self.window_size,
+ **self.indexer_kwargs,
)
start, end = indexer.get_window_bounds(
len(indicies), min_periods, center, closed
diff --git a/pandas/core/window/rolling.py b/pandas/core/window/rolling.py
index 7347d5686aabc..0306d4de2fc73 100644
--- a/pandas/core/window/rolling.py
+++ b/pandas/core/window/rolling.py
@@ -145,7 +145,7 @@ class _Window(PandasObject, ShallowMixin, SelectionMixin):
def __init__(
self,
- obj,
+ obj: FrameOrSeries,
window=None,
min_periods: Optional[int] = None,
center: bool = False,
@@ -2271,10 +2271,16 @@ def _get_window_indexer(self, window: int) -> GroupbyRollingIndexer:
-------
GroupbyRollingIndexer
"""
- rolling_indexer: Union[Type[FixedWindowIndexer], Type[VariableWindowIndexer]]
- if self.is_freq_type:
+ rolling_indexer: Type[BaseIndexer]
+ indexer_kwargs: Optional[Dict] = None
+ index_array = self.obj.index.asi8
+ if isinstance(self.window, BaseIndexer):
+ rolling_indexer = type(self.window)
+ indexer_kwargs = self.window.__dict__
+ # We'll be using the index of each group later
+ indexer_kwargs.pop("index_array", None)
+ elif self.is_freq_type:
rolling_indexer = VariableWindowIndexer
- index_array = self.obj.index.asi8
else:
rolling_indexer = FixedWindowIndexer
index_array = None
@@ -2283,6 +2289,7 @@ def _get_window_indexer(self, window: int) -> GroupbyRollingIndexer:
window_size=window,
groupby_indicies=self._groupby.indices,
rolling_indexer=rolling_indexer,
+ indexer_kwargs=indexer_kwargs,
)
return window_indexer
diff --git a/pandas/tests/window/test_grouper.py b/pandas/tests/window/test_grouper.py
index e1dcac06c39cc..a9590c7e1233a 100644
--- a/pandas/tests/window/test_grouper.py
+++ b/pandas/tests/window/test_grouper.py
@@ -305,6 +305,29 @@ def test_groupby_subselect_rolling(self):
)
tm.assert_series_equal(result, expected)
+ def test_groupby_rolling_custom_indexer(self):
+ # GH 35557
+ class SimpleIndexer(pd.api.indexers.BaseIndexer):
+ def get_window_bounds(
+ self, num_values=0, min_periods=None, center=None, closed=None
+ ):
+ min_periods = self.window_size if min_periods is None else 0
+ end = np.arange(num_values, dtype=np.int64) + 1
+ start = end.copy() - self.window_size
+ start[start < 0] = min_periods
+ return start, end
+
+ df = pd.DataFrame(
+ {"a": [1.0, 2.0, 3.0, 4.0, 5.0] * 3}, index=[0] * 5 + [1] * 5 + [2] * 5
+ )
+ result = (
+ df.groupby(df.index)
+ .rolling(SimpleIndexer(window_size=3), min_periods=1)
+ .sum()
+ )
+ expected = df.groupby(df.index).rolling(window=3, min_periods=1).sum()
+ tm.assert_frame_equal(result, expected)
+
def test_groupby_rolling_subset_with_closed(self):
# GH 35549
df = pd.DataFrame(
| - [x] closes #35557
- [x] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] whatsnew entry
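Adapted from the test added in this PR: a custom `BaseIndexer` subclass passed to `groupby(...).rolling(...)` is now honoured instead of silently ignored, so it produces the same result as the equivalent integer window. The extra `step` parameter in the signature below is an assumption for compatibility with newer pandas versions, which forward it to `get_window_bounds`:

```python
import numpy as np
import pandas as pd

class SimpleIndexer(pd.api.indexers.BaseIndexer):
    def get_window_bounds(
        self, num_values=0, min_periods=None, center=None, closed=None, step=None
    ):
        # Trailing fixed-size window: same bounds as an integer window
        # of self.window_size.
        min_periods = self.window_size if min_periods is None else 0
        end = np.arange(num_values, dtype=np.int64) + 1
        start = end.copy() - self.window_size
        start[start < 0] = min_periods
        return start, end

df = pd.DataFrame(
    {"a": [1.0, 2.0, 3.0, 4.0, 5.0] * 3}, index=[0] * 5 + [1] * 5 + [2] * 5
)
# Custom indexer vs. plain window=3 should now agree group by group.
result = (
    df.groupby(df.index).rolling(SimpleIndexer(window_size=3), min_periods=1).sum()
)
expected = df.groupby(df.index).rolling(window=3, min_periods=1).sum()
```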
| https://api.github.com/repos/pandas-dev/pandas/pulls/35647 | 2020-08-10T07:01:01Z | 2020-08-12T22:14:31Z | 2020-08-12T22:14:31Z | 2020-08-13T06:14:05Z |
BUG/ENH: consistent gzip compression arguments | diff --git a/doc/source/user_guide/io.rst b/doc/source/user_guide/io.rst
index 35403b5c8b66f..43030d76d945a 100644
--- a/doc/source/user_guide/io.rst
+++ b/doc/source/user_guide/io.rst
@@ -287,16 +287,19 @@ Quoting, compression, and file format
compression : {``'infer'``, ``'gzip'``, ``'bz2'``, ``'zip'``, ``'xz'``, ``None``, ``dict``}, default ``'infer'``
For on-the-fly decompression of on-disk data. If 'infer', then use gzip,
- bz2, zip, or xz if filepath_or_buffer is a string ending in '.gz', '.bz2',
+ bz2, zip, or xz if ``filepath_or_buffer`` is path-like ending in '.gz', '.bz2',
'.zip', or '.xz', respectively, and no decompression otherwise. If using 'zip',
the ZIP file must contain only one data file to be read in.
Set to ``None`` for no decompression. Can also be a dict with key ``'method'``
- set to one of {``'zip'``, ``'gzip'``, ``'bz2'``}, and other keys set to
- compression settings. As an example, the following could be passed for
- faster compression: ``compression={'method': 'gzip', 'compresslevel': 1}``.
+ set to one of {``'zip'``, ``'gzip'``, ``'bz2'``} and other key-value pairs are
+ forwarded to ``zipfile.ZipFile``, ``gzip.GzipFile``, or ``bz2.BZ2File``.
+ As an example, the following could be passed for faster compression and to
+ create a reproducible gzip archive:
+ ``compression={'method': 'gzip', 'compresslevel': 1, 'mtime': 1}``.
.. versionchanged:: 0.24.0 'infer' option added and set to default.
.. versionchanged:: 1.1.0 dict option extended to support ``gzip`` and ``bz2``.
+ .. versionchanged:: 1.2.0 Previous versions forwarded dict entries for 'gzip' to `gzip.open`.
thousands : str, default ``None``
Thousands separator.
decimal : str, default ``'.'``
diff --git a/doc/source/whatsnew/v1.2.0.rst b/doc/source/whatsnew/v1.2.0.rst
index deb5697053ea8..6612f741d925d 100644
--- a/doc/source/whatsnew/v1.2.0.rst
+++ b/doc/source/whatsnew/v1.2.0.rst
@@ -235,6 +235,7 @@ I/O
- Bug in :meth:`to_csv` caused a ``ValueError`` when it was called with a filename in combination with ``mode`` containing a ``b`` (:issue:`35058`)
- In :meth:`read_csv` `float_precision='round_trip'` now handles `decimal` and `thousands` parameters (:issue:`35365`)
- :meth:`to_pickle` and :meth:`read_pickle` were closing user-provided file objects (:issue:`35679`)
+- :meth:`to_csv` passes compression arguments for `'gzip'` always to `gzip.GzipFile` (:issue:`28103`)
Plotting
^^^^^^^^
diff --git a/pandas/_typing.py b/pandas/_typing.py
index 47a102ddc70e0..1b972030ef5a5 100644
--- a/pandas/_typing.py
+++ b/pandas/_typing.py
@@ -109,3 +109,8 @@
# for arbitrary kwargs passed during reading/writing files
StorageOptions = Optional[Dict[str, Any]]
+
+
+# compression keywords and compression
+CompressionDict = Mapping[str, Optional[Union[str, int, bool]]]
+CompressionOptions = Optional[Union[str, CompressionDict]]
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index 11147bffa32c3..2219d54477d9e 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -35,6 +35,7 @@
from pandas._libs.tslibs import Tick, Timestamp, to_offset
from pandas._typing import (
Axis,
+ CompressionOptions,
FilePathOrBuffer,
FrameOrSeries,
JSONSerializable,
@@ -2058,7 +2059,7 @@ def to_json(
date_unit: str = "ms",
default_handler: Optional[Callable[[Any], JSONSerializable]] = None,
lines: bool_t = False,
- compression: Optional[str] = "infer",
+ compression: CompressionOptions = "infer",
index: bool_t = True,
indent: Optional[int] = None,
storage_options: StorageOptions = None,
@@ -2646,7 +2647,7 @@ def to_sql(
def to_pickle(
self,
path,
- compression: Optional[str] = "infer",
+ compression: CompressionOptions = "infer",
protocol: int = pickle.HIGHEST_PROTOCOL,
storage_options: StorageOptions = None,
) -> None:
@@ -3053,7 +3054,7 @@ def to_csv(
index_label: Optional[Union[bool_t, str, Sequence[Label]]] = None,
mode: str = "w",
encoding: Optional[str] = None,
- compression: Optional[Union[str, Mapping[str, str]]] = "infer",
+ compression: CompressionOptions = "infer",
quoting: Optional[int] = None,
quotechar: str = '"',
line_terminator: Optional[str] = None,
@@ -3144,6 +3145,12 @@ def to_csv(
Compression is supported for binary file objects.
+ .. versionchanged:: 1.2.0
+
+ Previous versions forwarded dict entries for 'gzip' to
+ `gzip.open` instead of `gzip.GzipFile` which prevented
+ setting `mtime`.
+
quoting : optional constant from csv module
Defaults to csv.QUOTE_MINIMAL. If you have set a `float_format`
then floats are converted to strings and thus csv.QUOTE_NONNUMERIC
diff --git a/pandas/io/common.py b/pandas/io/common.py
index 9ac642e58b544..54f35e689aac8 100644
--- a/pandas/io/common.py
+++ b/pandas/io/common.py
@@ -18,7 +18,6 @@
Optional,
Tuple,
Type,
- Union,
)
from urllib.parse import (
urljoin,
@@ -29,7 +28,12 @@
)
import zipfile
-from pandas._typing import FilePathOrBuffer, StorageOptions
+from pandas._typing import (
+ CompressionDict,
+ CompressionOptions,
+ FilePathOrBuffer,
+ StorageOptions,
+)
from pandas.compat import _get_lzma_file, _import_lzma
from pandas.compat._optional import import_optional_dependency
@@ -160,7 +164,7 @@ def is_fsspec_url(url: FilePathOrBuffer) -> bool:
def get_filepath_or_buffer(
filepath_or_buffer: FilePathOrBuffer,
encoding: Optional[str] = None,
- compression: Optional[str] = None,
+ compression: CompressionOptions = None,
mode: Optional[str] = None,
storage_options: StorageOptions = None,
):
@@ -188,7 +192,7 @@ def get_filepath_or_buffer(
Returns
-------
- Tuple[FilePathOrBuffer, str, str, bool]
+ Tuple[FilePathOrBuffer, str, CompressionOptions, bool]
Tuple containing the filepath or buffer, the encoding, the compression
and should_close.
"""
@@ -291,8 +295,8 @@ def file_path_to_url(path: str) -> str:
def get_compression_method(
- compression: Optional[Union[str, Mapping[str, Any]]]
-) -> Tuple[Optional[str], Dict[str, Any]]:
+ compression: CompressionOptions,
+) -> Tuple[Optional[str], CompressionDict]:
"""
Simplifies a compression argument to a compression method string and
a mapping containing additional arguments.
@@ -316,7 +320,7 @@ def get_compression_method(
if isinstance(compression, Mapping):
compression_args = dict(compression)
try:
- compression_method = compression_args.pop("method")
+ compression_method = compression_args.pop("method") # type: ignore
except KeyError as err:
raise ValueError("If mapping, compression must have key 'method'") from err
else:
@@ -383,7 +387,7 @@ def get_handle(
path_or_buf,
mode: str,
encoding=None,
- compression: Optional[Union[str, Mapping[str, Any]]] = None,
+ compression: CompressionOptions = None,
memory_map: bool = False,
is_text: bool = True,
errors=None,
@@ -464,16 +468,13 @@ def get_handle(
# GZ Compression
if compression == "gzip":
if is_path:
- f = gzip.open(path_or_buf, mode, **compression_args)
+ f = gzip.GzipFile(filename=path_or_buf, mode=mode, **compression_args)
else:
f = gzip.GzipFile(fileobj=path_or_buf, mode=mode, **compression_args)
# BZ Compression
elif compression == "bz2":
- if is_path:
- f = bz2.BZ2File(path_or_buf, mode, **compression_args)
- else:
- f = bz2.BZ2File(path_or_buf, mode=mode, **compression_args)
+ f = bz2.BZ2File(path_or_buf, mode=mode, **compression_args)
# ZIP Compression
elif compression == "zip":
@@ -577,7 +578,9 @@ def __init__(
if mode in ["wb", "rb"]:
mode = mode.replace("b", "")
self.archive_name = archive_name
- super().__init__(file, mode, zipfile.ZIP_DEFLATED, **kwargs)
+ kwargs_zip: Dict[str, Any] = {"compression": zipfile.ZIP_DEFLATED}
+ kwargs_zip.update(kwargs)
+ super().__init__(file, mode, **kwargs_zip)
def write(self, data):
archive_name = self.filename
diff --git a/pandas/io/formats/csvs.py b/pandas/io/formats/csvs.py
index 6eceb94387171..c462a96da7133 100644
--- a/pandas/io/formats/csvs.py
+++ b/pandas/io/formats/csvs.py
@@ -5,13 +5,13 @@
import csv as csvlib
from io import StringIO, TextIOWrapper
import os
-from typing import Hashable, List, Mapping, Optional, Sequence, Union
+from typing import Hashable, List, Optional, Sequence, Union
import warnings
import numpy as np
from pandas._libs import writers as libwriters
-from pandas._typing import FilePathOrBuffer, StorageOptions
+from pandas._typing import CompressionOptions, FilePathOrBuffer, StorageOptions
from pandas.core.dtypes.generic import (
ABCDatetimeIndex,
@@ -44,7 +44,7 @@ def __init__(
mode: str = "w",
encoding: Optional[str] = None,
errors: str = "strict",
- compression: Union[str, Mapping[str, str], None] = "infer",
+ compression: CompressionOptions = "infer",
quoting: Optional[int] = None,
line_terminator="\n",
chunksize: Optional[int] = None,
diff --git a/pandas/io/json/_json.py b/pandas/io/json/_json.py
index 0d2b351926343..c2bd6302940bb 100644
--- a/pandas/io/json/_json.py
+++ b/pandas/io/json/_json.py
@@ -3,13 +3,13 @@
from io import BytesIO, StringIO
from itertools import islice
import os
-from typing import Any, Callable, Optional, Type
+from typing import IO, Any, Callable, List, Optional, Type
import numpy as np
import pandas._libs.json as json
from pandas._libs.tslibs import iNaT
-from pandas._typing import JSONSerializable, StorageOptions
+from pandas._typing import CompressionOptions, JSONSerializable, StorageOptions
from pandas.errors import AbstractMethodError
from pandas.util._decorators import deprecate_kwarg, deprecate_nonkeyword_arguments
@@ -19,7 +19,12 @@
from pandas.core.construction import create_series_with_explicit_dtype
from pandas.core.reshape.concat import concat
-from pandas.io.common import get_filepath_or_buffer, get_handle, infer_compression
+from pandas.io.common import (
+ get_compression_method,
+ get_filepath_or_buffer,
+ get_handle,
+ infer_compression,
+)
from pandas.io.json._normalize import convert_to_line_delimits
from pandas.io.json._table_schema import build_table_schema, parse_table_schema
from pandas.io.parsers import _validate_integer
@@ -41,7 +46,7 @@ def to_json(
date_unit: str = "ms",
default_handler: Optional[Callable[[Any], JSONSerializable]] = None,
lines: bool = False,
- compression: Optional[str] = "infer",
+ compression: CompressionOptions = "infer",
index: bool = True,
indent: int = 0,
storage_options: StorageOptions = None,
@@ -369,7 +374,7 @@ def read_json(
encoding=None,
lines: bool = False,
chunksize: Optional[int] = None,
- compression="infer",
+ compression: CompressionOptions = "infer",
nrows: Optional[int] = None,
storage_options: StorageOptions = None,
):
@@ -607,7 +612,9 @@ def read_json(
if encoding is None:
encoding = "utf-8"
- compression = infer_compression(path_or_buf, compression)
+ compression_method, compression = get_compression_method(compression)
+ compression_method = infer_compression(path_or_buf, compression_method)
+ compression = dict(compression, method=compression_method)
filepath_or_buffer, _, compression, should_close = get_filepath_or_buffer(
path_or_buf,
encoding=encoding,
@@ -667,10 +674,13 @@ def __init__(
encoding,
lines: bool,
chunksize: Optional[int],
- compression,
+ compression: CompressionOptions,
nrows: Optional[int],
):
+ compression_method, compression = get_compression_method(compression)
+ compression = dict(compression, method=compression_method)
+
self.orient = orient
self.typ = typ
self.dtype = dtype
@@ -687,6 +697,7 @@ def __init__(
self.nrows_seen = 0
self.should_close = False
self.nrows = nrows
+ self.file_handles: List[IO] = []
if self.chunksize is not None:
self.chunksize = _validate_integer("chunksize", self.chunksize, 1)
@@ -735,8 +746,8 @@ def _get_data_from_filepath(self, filepath_or_buffer):
except (TypeError, ValueError):
pass
- if exists or self.compression is not None:
- data, _ = get_handle(
+ if exists or self.compression["method"] is not None:
+ data, self.file_handles = get_handle(
filepath_or_buffer,
"r",
encoding=self.encoding,
@@ -816,6 +827,8 @@ def close(self):
self.open_stream.close()
except (IOError, AttributeError):
pass
+ for file_handle in self.file_handles:
+ file_handle.close()
def __next__(self):
if self.nrows:
diff --git a/pandas/io/pickle.py b/pandas/io/pickle.py
index eee6ec7c9feca..fc1d2e385cf72 100644
--- a/pandas/io/pickle.py
+++ b/pandas/io/pickle.py
@@ -1,9 +1,9 @@
""" pickle compat """
import pickle
-from typing import Any, Optional
+from typing import Any
import warnings
-from pandas._typing import FilePathOrBuffer, StorageOptions
+from pandas._typing import CompressionOptions, FilePathOrBuffer, StorageOptions
from pandas.compat import pickle_compat as pc
from pandas.io.common import get_filepath_or_buffer, get_handle
@@ -12,7 +12,7 @@
def to_pickle(
obj: Any,
filepath_or_buffer: FilePathOrBuffer,
- compression: Optional[str] = "infer",
+ compression: CompressionOptions = "infer",
protocol: int = pickle.HIGHEST_PROTOCOL,
storage_options: StorageOptions = None,
):
@@ -114,7 +114,7 @@ def to_pickle(
def read_pickle(
filepath_or_buffer: FilePathOrBuffer,
- compression: Optional[str] = "infer",
+ compression: CompressionOptions = "infer",
storage_options: StorageOptions = None,
):
"""
diff --git a/pandas/io/stata.py b/pandas/io/stata.py
index 7a25617885839..ec3819f1673a8 100644
--- a/pandas/io/stata.py
+++ b/pandas/io/stata.py
@@ -35,7 +35,7 @@
from pandas._libs.lib import infer_dtype
from pandas._libs.writers import max_len_string_array
-from pandas._typing import FilePathOrBuffer, Label, StorageOptions
+from pandas._typing import CompressionOptions, FilePathOrBuffer, Label, StorageOptions
from pandas.util._decorators import Appender
from pandas.core.dtypes.common import (
@@ -1938,9 +1938,9 @@ def read_stata(
def _open_file_binary_write(
fname: FilePathOrBuffer,
- compression: Union[str, Mapping[str, str], None],
+ compression: CompressionOptions,
storage_options: StorageOptions = None,
-) -> Tuple[BinaryIO, bool, Optional[Union[str, Mapping[str, str]]]]:
+) -> Tuple[BinaryIO, bool, CompressionOptions]:
"""
Open a binary file or no-op if file-like.
@@ -1978,17 +1978,10 @@ def _open_file_binary_write(
# Extract compression mode as given, if dict
compression_typ, compression_args = get_compression_method(compression)
compression_typ = infer_compression(fname, compression_typ)
- path_or_buf, _, compression_typ, _ = get_filepath_or_buffer(
- fname,
- mode="wb",
- compression=compression_typ,
- storage_options=storage_options,
+ compression = dict(compression_args, method=compression_typ)
+ path_or_buf, _, compression, _ = get_filepath_or_buffer(
+ fname, mode="wb", compression=compression, storage_options=storage_options,
)
- if compression_typ is not None:
- compression = compression_args
- compression["method"] = compression_typ
- else:
- compression = None
f, _ = get_handle(path_or_buf, "wb", compression=compression, is_text=False)
return f, True, compression
else:
diff --git a/pandas/tests/io/test_compression.py b/pandas/tests/io/test_compression.py
index 902a3d5d2a397..bc14b485f75e5 100644
--- a/pandas/tests/io/test_compression.py
+++ b/pandas/tests/io/test_compression.py
@@ -1,7 +1,10 @@
+import io
import os
+from pathlib import Path
import subprocess
import sys
import textwrap
+import time
import pytest
@@ -130,6 +133,46 @@ def test_compression_binary(compression_only):
)
+def test_gzip_reproducibility_file_name():
+ """
+ Gzip should create reproducible archives with mtime.
+
+ Note: Archives created with different filenames will still be different!
+
+ GH 28103
+ """
+ df = tm.makeDataFrame()
+ compression_options = {"method": "gzip", "mtime": 1}
+
+ # test for filename
+ with tm.ensure_clean() as path:
+ path = Path(path)
+ df.to_csv(path, compression=compression_options)
+ time.sleep(2)
+ output = path.read_bytes()
+ df.to_csv(path, compression=compression_options)
+ assert output == path.read_bytes()
+
+
+def test_gzip_reproducibility_file_object():
+ """
+ Gzip should create reproducible archives with mtime.
+
+ GH 28103
+ """
+ df = tm.makeDataFrame()
+ compression_options = {"method": "gzip", "mtime": 1}
+
+ # test for file object
+ buffer = io.BytesIO()
+ df.to_csv(buffer, compression=compression_options, mode="wb")
+ output = buffer.getvalue()
+ time.sleep(2)
+ buffer = io.BytesIO()
+ df.to_csv(buffer, compression=compression_options, mode="wb")
+ assert output == buffer.getvalue()
+
+
def test_with_missing_lzma():
"""Tests if import pandas works when lzma is not present."""
# https://github.com/pandas-dev/pandas/issues/27575
| - [x] closes #28103
- [x] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] whatsnew entry
`to_csv` lets the user set all keyword arguments for gzip. Depending on whether the user provides a filename or a file object, different keyword arguments can be set (`gzip.open` vs `gzip.GzipFile`).
This PR always uses `gzip.GzipFile`. The additional keyword arguments valid for `gzip.open` but not valid for `gzip.GzipFile` (`encoding`, `errors`, and ~~`newline`~~) are still accessible:
https://github.com/pandas-dev/pandas/blob/aefae55e1960a718561ae0369e83605e3038f292/pandas/io/common.py#L512
Using `gzip.GzipFile`, also allows us to set `mtime` to create reproducible gzip archives.
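The reproducibility that a fixed `mtime` buys can be demonstrated with the stdlib alone; a minimal sketch, independent of pandas:

```python
import gzip
import io


def gzip_bytes(data: bytes, mtime: int) -> bytes:
    """Compress `data`, pinning the mtime field in the gzip header."""
    buf = io.BytesIO()
    with gzip.GzipFile(fileobj=buf, mode="wb", mtime=mtime) as f:
        f.write(data)
    return buf.getvalue()


# Without a fixed mtime the gzip header embeds the current time, so two
# otherwise-identical archives differ byte-for-byte; pinning it makes
# repeated compressions of the same payload identical.
first = gzip_bytes(b"col1,col2\n1,2\n", mtime=1)
second = gzip_bytes(b"col1,col2\n1,2\n", mtime=1)
assert first == second
```

This is the same property the new `test_gzip_reproducibility_*` tests above assert through `to_csv(..., compression={"method": "gzip", "mtime": 1})`.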
| https://api.github.com/repos/pandas-dev/pandas/pulls/35645 | 2020-08-09T17:11:31Z | 2020-08-13T22:04:50Z | 2020-08-13T22:04:49Z | 2020-08-13T22:04:56Z |
ENH: Styler tooltips feature | diff --git a/doc/source/reference/style.rst b/doc/source/reference/style.rst
index e80dc1b57ff80..3a8d912fa6ffe 100644
--- a/doc/source/reference/style.rst
+++ b/doc/source/reference/style.rst
@@ -39,6 +39,8 @@ Style application
Styler.set_td_classes
Styler.set_table_styles
Styler.set_table_attributes
+ Styler.set_tooltips
+ Styler.set_tooltips_class
Styler.set_caption
Styler.set_properties
Styler.set_uuid
diff --git a/doc/source/whatsnew/v1.3.0.rst b/doc/source/whatsnew/v1.3.0.rst
index ab00b749d5725..6a85bfd852e19 100644
--- a/doc/source/whatsnew/v1.3.0.rst
+++ b/doc/source/whatsnew/v1.3.0.rst
@@ -52,6 +52,7 @@ Other enhancements
- :meth:`DataFrame.apply` can now accept NumPy unary operators as strings, e.g. ``df.apply("sqrt")``, which was already the case for :meth:`Series.apply` (:issue:`39116`)
- :meth:`DataFrame.apply` can now accept non-callable DataFrame properties as strings, e.g. ``df.apply("size")``, which was already the case for :meth:`Series.apply` (:issue:`39116`)
- :meth:`Series.apply` can now accept list-like or dictionary-like arguments that aren't lists or dictionaries, e.g. ``ser.apply(np.array(["sum", "mean"]))``, which was already the case for :meth:`DataFrame.apply` (:issue:`39140`)
+- :meth:`.Styler.set_tooltips` allows on hover tooltips to be added to styled HTML dataframes.
.. ---------------------------------------------------------------------------
diff --git a/pandas/io/formats/style.py b/pandas/io/formats/style.py
index b6c1336ede597..49eb579f9bd99 100644
--- a/pandas/io/formats/style.py
+++ b/pandas/io/formats/style.py
@@ -182,6 +182,8 @@ def __init__(
self.cell_ids = cell_ids
self.na_rep = na_rep
+ self.tooltips: Optional[_Tooltips] = None
+
self.cell_context: Dict[str, Any] = {}
# display_funcs maps (row, col) -> formatting function
@@ -205,6 +207,117 @@ def _repr_html_(self) -> str:
"""
return self.render()
+ def _init_tooltips(self):
+ """
+ Checks parameters compatible with tooltips and creates instance if necessary
+ """
+ if not self.cell_ids:
+ # tooltips not optimised for individual cell check. requires reasonable
+ # redesign and more extensive code for a feature that might be rarely used.
+ raise NotImplementedError(
+ "Tooltips can only render when 'cell_ids' is True."
+ )
+ if self.tooltips is None:
+ self.tooltips = _Tooltips()
+
+ def set_tooltips(self, ttips: DataFrame) -> "Styler":
+ """
+ Add string-based tooltips that will appear in the `Styler` HTML result. These
+ tooltips are applicable only to `<td>` elements.
+
+ .. versionadded:: 1.3.0
+
+ Parameters
+ ----------
+ ttips : DataFrame
+ DataFrame containing strings that will be translated to tooltips, mapped
+ by identical column and index values that must exist on the underlying
+ `Styler` data. None, NaN values, and empty strings will be ignored and
+ not affect the rendered HTML.
+
+ Returns
+ -------
+ self : Styler
+
+ Notes
+ -----
+ Tooltips are created by adding `<span class="pd-t"></span>` to each data cell
+ and then manipulating the table level CSS to attach pseudo hover and pseudo
+ after selectors to produce the required results. For styling control
+ see :meth:`Styler.set_tooltips_class`.
+ Tooltips are not designed to be efficient, and can add large amounts of
+ additional HTML for larger tables, since they also require that `cell_ids`
+ is forced to `True`.
+
+ Examples
+ --------
+ >>> df = pd.DataFrame(data=[[0, 1], [2, 3]])
+ >>> ttips = pd.DataFrame(
+ ... data=[["Min", ""], [np.nan, "Max"]], columns=df.columns, index=df.index
+ ... )
+ >>> s = df.style.set_tooltips(ttips).render()
+ """
+ self._init_tooltips()
+ assert self.tooltips is not None # mypy requirement
+ self.tooltips.tt_data = ttips
+ return self
+
+ def set_tooltips_class(
+ self,
+ name: Optional[str] = None,
+ properties: Optional[Sequence[Tuple[str, Union[str, int, float]]]] = None,
+ ) -> "Styler":
+ """
+ Manually configure the name and/or properties of the class for
+ creating tooltips on hover.
+
+ .. versionadded:: 1.3.0
+
+ Parameters
+ ----------
+ name : str, default None
+ Name of the tooltip class used in CSS, should conform to HTML standards.
+ properties : list-like, default None
+ List of (attr, value) tuples; see example.
+
+ Returns
+ -------
+ self : Styler
+
+ Notes
+ -----
+ If arguments are `None`, will not make any changes to the underlying ``Tooltips``
+ existing values.
+
+ The default properties for the tooltip CSS class are:
+
+ - visibility: hidden
+ - position: absolute
+ - z-index: 1
+ - background-color: black
+ - color: white
+ - transform: translate(-20px, -20px)
+
+ The property ('visibility', 'hidden') is a key prerequisite to the hover
+ functionality, and should always be included in any manual properties
+ specification.
+
+ Examples
+ --------
+ >>> df = pd.DataFrame(np.random.randn(10, 4))
+ >>> df.style.set_tooltips_class(name='tt-add', properties=[
+ ... ('visibility', 'hidden'),
+ ... ('position', 'absolute'),
+ ... ('z-index', 1)])
+ """
+ self._init_tooltips()
+ assert self.tooltips is not None # mypy requirement
+ if properties:
+ self.tooltips.class_properties = properties
+ if name:
+ self.tooltips.class_name = name
+ return self
+
@doc(
NDFrame.to_excel,
klass="Styler",
@@ -434,7 +547,7 @@ def format_attr(pair):
else:
table_attr += ' class="tex2jax_ignore"'
- return {
+ d = {
"head": head,
"cellstyle": cellstyle,
"body": body,
@@ -444,6 +557,10 @@ def format_attr(pair):
"caption": caption,
"table_attributes": table_attr,
}
+ if self.tooltips:
+ d = self.tooltips._translate(self.data, self.uuid, d)
+
+ return d
def format(self, formatter, subset=None, na_rep: Optional[str] = None) -> "Styler":
"""
@@ -689,6 +806,7 @@ def clear(self) -> None:
Returns None.
"""
self.ctx.clear()
+ self.tooltips = None
self.cell_context = {}
self._todo = []
@@ -1658,6 +1776,179 @@ def pipe(self, func: Callable, *args, **kwargs):
return com.pipe(self, func, *args, **kwargs)
+class _Tooltips:
+ """
+ An extension to ``Styler`` that allows for and manipulates tooltips on hover
+ of table data-cells in the HTML result.
+
+ Parameters
+ ----------
+ css_name: str, default "pd-t"
+ Name of the CSS class that controls visualisation of tooltips.
+ css_props: list-like, default; see Notes
+ List of (attr, value) tuples defining properties of the CSS class.
+ tooltips: DataFrame, default empty
+ DataFrame of strings aligned with underlying ``Styler`` data for tooltip
+ display.
+
+ Notes
+ -----
+ The default properties for the tooltip CSS class are:
+
+ - visibility: hidden
+ - position: absolute
+ - z-index: 1
+ - background-color: black
+ - color: white
+ - transform: translate(-20px, -20px)
+
+ Hidden visibility is a key prerequisite to the hover functionality, and should
+ always be included in any manual properties specification.
+ """
+
+ def __init__(
+ self,
+ css_props: Sequence[Tuple[str, Union[str, int, float]]] = [
+ ("visibility", "hidden"),
+ ("position", "absolute"),
+ ("z-index", 1),
+ ("background-color", "black"),
+ ("color", "white"),
+ ("transform", "translate(-20px, -20px)"),
+ ],
+ css_name: str = "pd-t",
+ tooltips: DataFrame = DataFrame(),
+ ):
+ self.class_name = css_name
+ self.class_properties = css_props
+ self.tt_data = tooltips
+ self.table_styles: List[Dict[str, Union[str, List[Tuple[str, str]]]]] = []
+
+ @property
+ def _class_styles(self):
+ """
+ Combine the ``_Tooltips`` CSS class name and CSS properties to the format
+ required to extend the underlying ``Styler`` `table_styles` to allow
+ tooltips to render in HTML.
+
+ Returns
+ -------
+ styles : List
+ """
+ return [{"selector": f".{self.class_name}", "props": self.class_properties}]
+
+ def _pseudo_css(self, uuid: str, name: str, row: int, col: int, text: str):
+ """
+ For every table data-cell that has a valid tooltip (not None, NaN or
+ empty string) must create two pseudo CSS entries for the specific
+ <td> element id which are added to overall table styles:
+ an on hover visibility change and a content change
+ dependent upon the user's chosen display string.
+
+ For example:
+ [{"selector": "T__row1_col1:hover .pd-t",
+ "props": [("visibility", "visible")]},
+ {"selector": "T__row1_col1 .pd-t::after",
+ "props": [("content", "Some Valid Text String")]}]
+
+ Parameters
+ ----------
+ uuid: str
+ The uuid of the Styler instance
+ name: str
+ The css-name of the class used for styling tooltips
+ row : int
+ The row index of the specified tooltip string data
+ col : int
+ The col index of the specified tooltip string data
+ text : str
+ The textual content of the tooltip to be displayed in HTML.
+
+ Returns
+ -------
+ pseudo_css : List
+ """
+ return [
+ {
+ "selector": "#T_"
+ + uuid
+ + "row"
+ + str(row)
+ + "_col"
+ + str(col)
+ + f":hover .{name}",
+ "props": [("visibility", "visible")],
+ },
+ {
+ "selector": "#T_"
+ + uuid
+ + "row"
+ + str(row)
+ + "_col"
+ + str(col)
+ + f" .{name}::after",
+ "props": [("content", f'"{text}"')],
+ },
+ ]
+
+ def _translate(self, styler_data: FrameOrSeriesUnion, uuid: str, d: Dict):
+ """
+ Mutate the render dictionary to allow for tooltips:
+
+ - Add `<span>` HTML element to each data cells `display_value`. Ignores
+ headers.
+ - Add table level CSS styles to control pseudo classes.
+
+ Parameters
+ ----------
+ styler_data : DataFrame
+ Underlying ``Styler`` DataFrame used for reindexing.
+ uuid : str
+ The underlying ``Styler`` uuid for CSS id.
+ d : dict
+ The dictionary prior to final render
+
+ Returns
+ -------
+ render_dict : Dict
+ """
+ self.tt_data = (
+ self.tt_data.reindex_like(styler_data)
+ .dropna(how="all", axis=0)
+ .dropna(how="all", axis=1)
+ )
+ if self.tt_data.empty:
+ return d
+
+ name = self.class_name
+
+ mask = (self.tt_data.isna()) | (self.tt_data.eq("")) # empty string = no ttip
+ self.table_styles = [
+ style
+ for sublist in [
+ self._pseudo_css(uuid, name, i, j, str(self.tt_data.iloc[i, j]))
+ for i in range(len(self.tt_data.index))
+ for j in range(len(self.tt_data.columns))
+ if not mask.iloc[i, j]
+ ]
+ for style in sublist
+ ]
+
+ if self.table_styles:
+ # add span class to every cell only if at least 1 non-empty tooltip
+ for row in d["body"]:
+ for item in row:
+ if item["type"] == "td":
+ item["display_value"] = (
+ str(item["display_value"])
+ + f'<span class="{self.class_name}"></span>'
+ )
+ d["table_styles"].extend(self._class_styles)
+ d["table_styles"].extend(self.table_styles)
+
+ return d
+
+
def _is_visible(idx_row, idx_col, lengths) -> bool:
"""
Index -> {(idx_row, idx_col): bool}).
diff --git a/pandas/tests/io/formats/test_style.py b/pandas/tests/io/formats/test_style.py
index 0bb422658df25..c61d81d565459 100644
--- a/pandas/tests/io/formats/test_style.py
+++ b/pandas/tests/io/formats/test_style.py
@@ -1769,6 +1769,74 @@ def test_uuid_len_raises(self, len_):
with pytest.raises(TypeError, match=msg):
Styler(df, uuid_len=len_, cell_ids=False).render()
+ @pytest.mark.parametrize(
+ "ttips",
+ [
+ DataFrame(
+ data=[["Min", "Max"], [np.nan, ""]],
+ columns=["A", "B"],
+ index=["a", "b"],
+ ),
+ DataFrame(data=[["Max", "Min"]], columns=["B", "A"], index=["a"]),
+ DataFrame(
+ data=[["Min", "Max", None]], columns=["A", "B", "C"], index=["a"]
+ ),
+ ],
+ )
+ def test_tooltip_render(self, ttips):
+ # GH 21266
+ df = DataFrame(data=[[0, 3], [1, 2]], columns=["A", "B"], index=["a", "b"])
+ s = Styler(df, uuid_len=0).set_tooltips(ttips).render()
+
+ # test tooltip table level class
+ assert "#T__ .pd-t {\n visibility: hidden;\n" in s
+
+ # test 'Min' tooltip added
+ assert (
+ "#T__ #T__row0_col0:hover .pd-t {\n visibility: visible;\n } "
+ + ' #T__ #T__row0_col0 .pd-t::after {\n content: "Min";\n }'
+ in s
+ )
+ assert (
+ '<td id="T__row0_col0" class="data row0 col0" >0<span class="pd-t">'
+ + "</span></td>"
+ in s
+ )
+
+ # test 'Max' tooltip added
+ assert (
+ "#T__ #T__row0_col1:hover .pd-t {\n visibility: visible;\n } "
+ + ' #T__ #T__row0_col1 .pd-t::after {\n content: "Max";\n }'
+ in s
+ )
+ assert (
+ '<td id="T__row0_col1" class="data row0 col1" >3<span class="pd-t">'
+ + "</span></td>"
+ in s
+ )
+
+ def test_tooltip_ignored(self):
+ # GH 21266
+ df = DataFrame(data=[[0, 1], [2, 3]])
+ s = Styler(df).set_tooltips_class("pd-t").render() # no set_tooltips()
+ assert '<style type="text/css" >\n</style>' in s
+ assert '<span class="pd-t"></span>' not in s
+
+ def test_tooltip_class(self):
+ # GH 21266
+ df = DataFrame(data=[[0, 1], [2, 3]])
+ s = (
+ Styler(df, uuid_len=0)
+ .set_tooltips(DataFrame([["tooltip"]]))
+ .set_tooltips_class(name="other-class", properties=[("color", "green")])
+ .render()
+ )
+ assert "#T__ .other-class {\n color: green;\n" in s
+ assert (
+ '#T__ #T__row0_col0 .other-class::after {\n content: "tooltip";\n'
+ in s
+ )
+
@td.skip_if_no_mpl
class TestStylerMatplotlibDep:
| - [x] closes #21266
- [x] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] whatsnew entry
This is a draft PR to address 21266: Tooltips for DataFrames in Styler.
This PR has the following objectives:
1) Add functionality whilst being completely (almost) decoupled from the rest of the HTML rendering process
2) Be fully backwards compatible and have no impact at all on previous Styler's that have never used tooltips.
3) Use pseudo CSS class rendering via `table_styles` to control the visualisation of the tooltips.
To address 1) the architecture was simple:
- If `tooltips` are detected then an additional HTML element is added to every data cell: `<span class="pd-t"></span>`. If `tooltips` are not detected (by default) nothing is done.
- Add generic table level CSS for the tooltip class, hiding tooltips by default but positioning, sizing, and coloring them.
- Loop through a tooltips DataFrame and based on the row and col index add additional content to the <span> element using the ::after pseudo-element targeting the exact cell id. This is table level also. Rendering is performed as the last step.
To address 2) default values ensure the functions make no changes unless `set_tooltips` has been called.
## Extensions
The very simple architecture requires a DataFrame of tooltips to be supplied. This requires the user to have constructed it privately (possibly using the regular `DataFrame.apply` and `DataFrame.applymap` methods). These could be incorporated.
It would also be better to only add extra HTML <span> elements to data cells that have tooltips. This requires conditional looping of the render dict, which I haven't included here to save on initial complexity. For small dataframes it is also irrelevant.
However, given my time constraints I would rather leave this to a more available and more competent developer!!
Comments welcome..
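For reviewers, the per-cell CSS generation at the heart of `_Tooltips` can be exercised standalone; a minimal re-implementation of `_pseudo_css` as a free function (a sketch mirroring the diff, not the actual method):

```python
def pseudo_css(uuid, name, row, col, text):
    """Build the two rules emitted per tooltip cell: a :hover rule that
    flips visibility, and an ::after rule carrying the tooltip text."""
    cell = f"#T_{uuid}row{row}_col{col}"
    return [
        {"selector": f"{cell}:hover .{name}", "props": [("visibility", "visible")]},
        {"selector": f"{cell} .{name}::after", "props": [("content", f'"{text}"')]},
    ]


rules = pseudo_css("_", "pd-t", 1, 1, "Some Valid Text String")
# rules[0]["selector"] == "#T__row1_col1:hover .pd-t"
# rules[1]["selector"] == "#T__row1_col1 .pd-t::after"
```

These are exactly the selector pairs the new tests look for in the rendered HTML.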
| https://api.github.com/repos/pandas-dev/pandas/pulls/35643 | 2020-08-09T12:41:11Z | 2021-01-17T15:46:56Z | 2021-01-17T15:46:56Z | 2021-01-21T06:46:55Z |
REF/PERF: Move MultiIndex._tuples to MultiIndex._cache | diff --git a/pandas/core/indexes/multi.py b/pandas/core/indexes/multi.py
index 13927dede5542..448c2dfe4a29d 100644
--- a/pandas/core/indexes/multi.py
+++ b/pandas/core/indexes/multi.py
@@ -243,7 +243,6 @@ class MultiIndex(Index):
_comparables = ["names"]
rename = Index.set_names
- _tuples = None
sortorder: Optional[int]
# --------------------------------------------------------------------
@@ -634,16 +633,9 @@ def from_frame(cls, df, sortorder=None, names=None):
# --------------------------------------------------------------------
- @property
+ @cache_readonly
def _values(self):
# We override here, since our parent uses _data, which we don't use.
- return self.values
-
- @property
- def values(self):
- if self._tuples is not None:
- return self._tuples
-
values = []
for i in range(self.nlevels):
@@ -657,8 +649,12 @@ def values(self):
vals = np.array(vals, copy=False)
values.append(vals)
- self._tuples = lib.fast_zip(values)
- return self._tuples
+ arr = lib.fast_zip(values)
+ return arr
+
+ @property
+ def values(self):
+ return self._values
@property
def array(self):
@@ -737,7 +733,6 @@ def _set_levels(
if any(names):
self._set_names(names)
- self._tuples = None
self._reset_cache()
def set_levels(self, levels, level=None, inplace=None, verify_integrity=True):
@@ -906,7 +901,6 @@ def _set_codes(
self._codes = new_codes
- self._tuples = None
self._reset_cache()
def set_codes(self, codes, level=None, inplace=None, verify_integrity=True):
diff --git a/pandas/io/pytables.py b/pandas/io/pytables.py
index aeb7b3e044794..2abc570a04de3 100644
--- a/pandas/io/pytables.py
+++ b/pandas/io/pytables.py
@@ -320,6 +320,10 @@ def read_hdf(
mode : {'r', 'r+', 'a'}, default 'r'
Mode to use when opening the file. Ignored if path_or_buf is a
:class:`pandas.HDFStore`. Default is 'r'.
+ errors : str, default 'strict'
+ Specifies how encoding and decoding errors are to be handled.
+ See the errors argument for :func:`open` for a full list
+ of options.
where : list, optional
A list of Term (or convertible) objects.
start : int, optional
@@ -332,10 +336,6 @@ def read_hdf(
Return an iterator object.
chunksize : int, optional
Number of rows to include in an iteration when using an iterator.
- errors : str, default 'strict'
- Specifies how encoding and decoding errors are to be handled.
- See the errors argument for :func:`open` for a full list
- of options.
**kwargs
Additional keyword arguments passed to HDFStore.
diff --git a/pandas/tests/indexes/multi/test_compat.py b/pandas/tests/indexes/multi/test_compat.py
index b2500efef9e03..72b5ed0edaa78 100644
--- a/pandas/tests/indexes/multi/test_compat.py
+++ b/pandas/tests/indexes/multi/test_compat.py
@@ -68,24 +68,33 @@ def test_inplace_mutation_resets_values():
mi1 = MultiIndex(levels=levels, codes=codes)
mi2 = MultiIndex(levels=levels2, codes=codes)
+
+ # instantiating MultiIndex should not access/cache _.values
+ assert "_values" not in mi1._cache
+ assert "_values" not in mi2._cache
+
vals = mi1.values.copy()
vals2 = mi2.values.copy()
- assert mi1._tuples is not None
+ # accessing .values should cache ._values
+ assert mi1._values is mi1._cache["_values"]
+ assert mi1.values is mi1._cache["_values"]
+ assert isinstance(mi1._cache["_values"], np.ndarray)
# Make sure level setting works
new_vals = mi1.set_levels(levels2).values
tm.assert_almost_equal(vals2, new_vals)
- # Non-inplace doesn't kill _tuples [implementation detail]
- tm.assert_almost_equal(mi1._tuples, vals)
+ # Non-inplace doesn't drop _values from _cache [implementation detail]
+ tm.assert_almost_equal(mi1._cache["_values"], vals)
# ...and values is still same too
tm.assert_almost_equal(mi1.values, vals)
- # Inplace should kill _tuples
+ # Inplace should drop _values from _cache
with tm.assert_produces_warning(FutureWarning):
mi1.set_levels(levels2, inplace=True)
+ assert "_values" not in mi1._cache
tm.assert_almost_equal(mi1.values, vals2)
# Make sure label setting works too
@@ -95,18 +104,24 @@ def test_inplace_mutation_resets_values():
# Must be 1d array of tuples
assert exp_values.shape == (6,)
- new_values = mi2.set_codes(codes2).values
+
+ new_mi = mi2.set_codes(codes2)
+ assert "_values" not in new_mi._cache
+ new_values = new_mi.values
+ assert "_values" in new_mi._cache
# Not inplace shouldn't change
- tm.assert_almost_equal(mi2._tuples, vals2)
+ tm.assert_almost_equal(mi2._cache["_values"], vals2)
# Should have correct values
tm.assert_almost_equal(exp_values, new_values)
- # ...and again setting inplace should kill _tuples, etc
+ # ...and again setting inplace should drop _values from _cache, etc
with tm.assert_produces_warning(FutureWarning):
mi2.set_codes(codes2, inplace=True)
+ assert "_values" not in mi2._cache
tm.assert_almost_equal(mi2.values, new_values)
+ assert "_values" in mi2._cache
def test_ndarray_compat_properties(idx, compat_props):
| Currently, the heavy-to-calculate ``MultiIndex.values`` attribute is cached in ``MultiIndex._tuples``. It would be more idiomatic to store it in ``MultiIndex._cache`` IMO, which is what this PR does.
This has the added benefit of ``._values`` getting copied over to new copies of the MultiIndex, so also gives a performance boost in cases where copying is needed:
```python
>>> n = 100_000;
>>> df = pd.DataFrame({'a': ['a', 'b'] * int(n / 2), 'b': range(n), 'c': range(20, n + 20)})
>>> mi = pd.MultiIndex.from_frame(df)
>>> mi.values # also caches mi._values in mi._cache
array([('a', 0, 20), ('b', 1, 21), ('a', 2, 22), ...,
('b', 99997, 100017), ('a', 99998, 100018), ('b', 99999, 100019)],
dtype=object)
>>> %timeit mi._shallow_copy().values
22.8 ms ± 3.43 ms per loop # master
34.6 µs ± 997 ns per loop # this PR
```
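The mechanism behind the copy speedup is plain memoisation into a shared `_cache` dict; a minimal sketch of a `cache_readonly`-style descriptor (not pandas' actual implementation) shows why a shallow copy that carries `_cache` over skips the recomputation:

```python
import copy


class cache_readonly:
    """Compute once per instance, stashing the result in obj._cache
    under the attribute name (a sketch of pandas' descriptor)."""

    def __init__(self, func):
        self.func = func
        self.name = func.__name__

    def __get__(self, obj, objtype=None):
        if obj is None:
            return self
        cache = obj.__dict__.setdefault("_cache", {})
        if self.name not in cache:
            cache[self.name] = self.func(obj)
        return cache[self.name]


class ToyIndex:
    def __init__(self, data):
        self.data = data
        self.calls = 0

    @cache_readonly
    def _values(self):  # stand-in for the expensive lib.fast_zip
        self.calls += 1
        return tuple(self.data)


idx = ToyIndex([1, 2, 3])
assert idx._values == (1, 2, 3)
assert idx._values == (1, 2, 3)
assert idx.calls == 1              # computed once, then served from _cache

clone = copy.copy(idx)             # shallow copy shares the _cache dict
assert clone._values == (1, 2, 3)  # no recomputation on the copy
```

An in-place mutation then only needs to drop the `_values` key from `_cache` (what `_reset_cache` does) to invalidate the memoised tuples.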
| https://api.github.com/repos/pandas-dev/pandas/pulls/35641 | 2020-08-09T09:33:24Z | 2020-08-10T22:52:08Z | 2020-08-10T22:52:08Z | 2020-08-10T23:00:12Z |
DOC: Add specific Visual Studio Installer instructions | diff --git a/doc/source/development/contributing.rst b/doc/source/development/contributing.rst
index 4ffd1d586a99a..e5c6f77eea3ef 100644
--- a/doc/source/development/contributing.rst
+++ b/doc/source/development/contributing.rst
@@ -204,6 +204,7 @@ You will need `Build Tools for Visual Studio 2017
You DO NOT need to install Visual Studio 2019.
You only need "Build Tools for Visual Studio 2019" found by
scrolling down to "All downloads" -> "Tools for Visual Studio 2019".
+ In the installer, select the "C++ build tools" workload.
**Mac OS**
| - [ ] further improves #28316 - Currently the 'Contributing to pandas', 'Installing a C compiler' section advises to install Build Tools for Visual Studio 2017 but doesn't specify which components to install.
Notes:
- The first time I installed the Build Tools it didn't seem to get everything needed, and I had to search around for a while before I could figure out what I'd missed.
- The advice at [python.org](https://wiki.python.org/moin/WindowsCompilers#Microsoft_Visual_C.2B-.2B-_14.2_standalone:_Build_Tools_for_Visual_Studio_2019_.28x86.2C_x64.2C_ARM.2C_ARM64.29) is more specific, so I have added that in here. I'm fairly sure that's the minimum required, correct?
| https://api.github.com/repos/pandas-dev/pandas/pulls/35640 | 2020-08-09T09:12:42Z | 2020-08-31T18:47:26Z | 2020-08-31T18:47:26Z | 2020-08-31T18:47:35Z |
BUG: RollingGroupby with closed and column selection no longer raises ValueError | diff --git a/doc/source/whatsnew/v1.1.1.rst b/doc/source/whatsnew/v1.1.1.rst
index f0ad9d1ca3b0f..7f5182e3eaa6f 100644
--- a/doc/source/whatsnew/v1.1.1.rst
+++ b/doc/source/whatsnew/v1.1.1.rst
@@ -51,6 +51,10 @@ Categorical
-
-
+**Groupby/resample/rolling**
+
+- Bug in :class:`pandas.core.groupby.RollingGroupby` where passing ``closed`` with column selection would raise a ``ValueError`` (:issue:`35549`)
+
**Plotting**
-
diff --git a/pandas/core/window/common.py b/pandas/core/window/common.py
index 58e7841d4dde5..51a067427e867 100644
--- a/pandas/core/window/common.py
+++ b/pandas/core/window/common.py
@@ -52,7 +52,7 @@ def __init__(self, obj, *args, **kwargs):
kwargs.pop("parent", None)
groupby = kwargs.pop("groupby", None)
if groupby is None:
- groupby, obj = obj, obj.obj
+ groupby, obj = obj, obj._selected_obj
self._groupby = groupby
self._groupby.mutated = True
self._groupby.grouper.mutated = True
diff --git a/pandas/core/window/rolling.py b/pandas/core/window/rolling.py
index a04d68a6d6745..7347d5686aabc 100644
--- a/pandas/core/window/rolling.py
+++ b/pandas/core/window/rolling.py
@@ -2212,7 +2212,7 @@ def _apply(
# Cannot use _wrap_outputs because we calculate the result all at once
# Compose MultiIndex result from grouping levels then rolling level
# Aggregate the MultiIndex data as tuples then the level names
- grouped_object_index = self._groupby._selected_obj.index
+ grouped_object_index = self.obj.index
grouped_index_name = [grouped_object_index.name]
groupby_keys = [grouping.name for grouping in self._groupby.grouper._groupings]
result_index_names = groupby_keys + grouped_index_name
@@ -2236,10 +2236,6 @@ def _apply(
def _constructor(self):
return Rolling
- @cache_readonly
- def _selected_obj(self):
- return self._groupby._selected_obj
-
def _create_blocks(self, obj: FrameOrSeries):
"""
Split data into blocks & return conformed data.
@@ -2278,7 +2274,7 @@ def _get_window_indexer(self, window: int) -> GroupbyRollingIndexer:
rolling_indexer: Union[Type[FixedWindowIndexer], Type[VariableWindowIndexer]]
if self.is_freq_type:
rolling_indexer = VariableWindowIndexer
- index_array = self._groupby._selected_obj.index.asi8
+ index_array = self.obj.index.asi8
else:
rolling_indexer = FixedWindowIndexer
index_array = None
@@ -2295,7 +2291,7 @@ def _gotitem(self, key, ndim, subset=None):
# here so our index is carried thru to the selected obj
# when we do the splitting for the groupby
if self.on is not None:
- self._groupby.obj = self._groupby.obj.set_index(self._on)
+ self.obj = self.obj.set_index(self._on)
self.on = None
return super()._gotitem(key, ndim, subset=subset)
diff --git a/pandas/tests/window/test_grouper.py b/pandas/tests/window/test_grouper.py
index 5241b9548a442..e1dcac06c39cc 100644
--- a/pandas/tests/window/test_grouper.py
+++ b/pandas/tests/window/test_grouper.py
@@ -304,3 +304,54 @@ def test_groupby_subselect_rolling(self):
name="b",
)
tm.assert_series_equal(result, expected)
+
+ def test_groupby_rolling_subset_with_closed(self):
+ # GH 35549
+ df = pd.DataFrame(
+ {
+ "column1": range(6),
+ "column2": range(6),
+ "group": 3 * ["A", "B"],
+ "date": [pd.Timestamp("2019-01-01")] * 6,
+ }
+ )
+ result = (
+ df.groupby("group").rolling("1D", on="date", closed="left")["column1"].sum()
+ )
+ expected = Series(
+ [np.nan, 0.0, 2.0, np.nan, 1.0, 4.0],
+ index=pd.MultiIndex.from_tuples(
+ [("A", pd.Timestamp("2019-01-01"))] * 3
+ + [("B", pd.Timestamp("2019-01-01"))] * 3,
+ names=["group", "date"],
+ ),
+ name="column1",
+ )
+ tm.assert_series_equal(result, expected)
+
+ def test_groupby_subset_rolling_subset_with_closed(self):
+ # GH 35549
+ df = pd.DataFrame(
+ {
+ "column1": range(6),
+ "column2": range(6),
+ "group": 3 * ["A", "B"],
+ "date": [pd.Timestamp("2019-01-01")] * 6,
+ }
+ )
+
+ result = (
+ df.groupby("group")[["column1", "date"]]
+ .rolling("1D", on="date", closed="left")["column1"]
+ .sum()
+ )
+ expected = Series(
+ [np.nan, 0.0, 2.0, np.nan, 1.0, 4.0],
+ index=pd.MultiIndex.from_tuples(
+ [("A", pd.Timestamp("2019-01-01"))] * 3
+ + [("B", pd.Timestamp("2019-01-01"))] * 3,
+ names=["group", "date"],
+ ),
+ name="column1",
+ )
+ tm.assert_series_equal(result, expected)
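The `closed="left"` semantics these tests exercise can be sketched without pandas: the current observation is dropped from its own window, while earlier rows whose timestamps fall at or after `t - window` are kept — which is why the first row of each group above is `NaN`. This is a simplified positional approximation, consistent with the expected values in the tests, not the pandas indexer itself:

```python
# Sketch of closed="left" time-based rolling sums: for row i, include
# only rows at earlier positions whose timestamp tj satisfies
# tj >= t - window (left boundary inclusive, current row excluded).
def rolling_sum_closed_left(times, values, window):
    out = []
    for i, t in enumerate(times):
        vals = [v for tj, v in zip(times[:i], values[:i]) if tj >= t - window]
        out.append(sum(vals) if vals else None)  # None stands in for NaN
    return out
```

With three tied timestamps and values `[0, 2, 4]` (group "A" in the test above), this yields `[None, 0, 2]`, matching the expected `[np.nan, 0.0, 2.0]`.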
| - [x] closes #35549
- [x] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] whatsnew entry | https://api.github.com/repos/pandas-dev/pandas/pulls/35639 | 2020-08-09T06:21:50Z | 2020-08-10T13:13:31Z | 2020-08-10T13:13:30Z | 2020-08-10T15:26:24Z |
DOC: Mention NA for missing data in README | diff --git a/README.md b/README.md
index a72e8402e68a0..a2f2f1c04442a 100644
--- a/README.md
+++ b/README.md
@@ -32,7 +32,7 @@ its way towards this goal.
Here are just a few of the things that pandas does well:
- Easy handling of [**missing data**][missing-data] (represented as
- `NaN`) in floating point as well as non-floating point data
+ `NaN`, `NA`, or `NaT`) in floating point as well as non-floating point data
- Size mutability: columns can be [**inserted and
deleted**][insertion-deletion] from DataFrame and higher dimensional
objects
| https://api.github.com/repos/pandas-dev/pandas/pulls/35638 | 2020-08-08T19:35:27Z | 2020-08-24T06:50:12Z | 2020-08-24T06:50:12Z | 2020-08-24T12:52:01Z | |
ENH: Make explode work for sets | diff --git a/doc/source/whatsnew/v1.2.0.rst b/doc/source/whatsnew/v1.2.0.rst
index d7d2e3cf876ca..ff9e803b4990a 100644
--- a/doc/source/whatsnew/v1.2.0.rst
+++ b/doc/source/whatsnew/v1.2.0.rst
@@ -103,7 +103,7 @@ Other enhancements
- Added :meth:`~DataFrame.set_flags` for setting table-wide flags on a ``Series`` or ``DataFrame`` (:issue:`28394`)
- :class:`Index` with object dtype supports division and multiplication (:issue:`34160`)
--
+- :meth:`DataFrame.explode` and :meth:`Series.explode` now support exploding of sets (:issue:`35614`)
-
.. _whatsnew_120.api_breaking.python:
diff --git a/pandas/_libs/reshape.pyx b/pandas/_libs/reshape.pyx
index 5c6c15fb50fed..75dbb4b74aabd 100644
--- a/pandas/_libs/reshape.pyx
+++ b/pandas/_libs/reshape.pyx
@@ -124,7 +124,8 @@ def explode(ndarray[object] values):
counts = np.zeros(n, dtype='int64')
for i in range(n):
v = values[i]
- if c_is_list_like(v, False):
+
+ if c_is_list_like(v, True):
if len(v):
counts[i] += len(v)
else:
@@ -138,8 +139,9 @@ def explode(ndarray[object] values):
for i in range(n):
v = values[i]
- if c_is_list_like(v, False):
+ if c_is_list_like(v, True):
if len(v):
+ v = list(v)
for j in range(len(v)):
result[count] = v[j]
count += 1
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index 29d6fb9aa7d56..150d6e24dbb86 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -7091,10 +7091,11 @@ def explode(
Notes
-----
- This routine will explode list-likes including lists, tuples,
+ This routine will explode list-likes including lists, tuples, sets,
Series, and np.ndarray. The result dtype of the subset rows will
- be object. Scalars will be returned unchanged. Empty list-likes will
- result in a np.nan for that row.
+ be object. Scalars will be returned unchanged, and empty list-likes will
+ result in a np.nan for that row. In addition, the ordering of rows in the
+ output will be non-deterministic when exploding sets.
Examples
--------
diff --git a/pandas/core/series.py b/pandas/core/series.py
index d8fdaa2a60252..6cbd93135a2ca 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -3829,10 +3829,11 @@ def explode(self, ignore_index: bool = False) -> "Series":
Notes
-----
- This routine will explode list-likes including lists, tuples,
+ This routine will explode list-likes including lists, tuples, sets,
Series, and np.ndarray. The result dtype of the subset rows will
- be object. Scalars will be returned unchanged. Empty list-likes will
- result in a np.nan for that row.
+ be object. Scalars will be returned unchanged, and empty list-likes will
+ result in a np.nan for that row. In addition, the ordering of elements in
+ the output will be non-deterministic when exploding sets.
Examples
--------
diff --git a/pandas/tests/frame/methods/test_explode.py b/pandas/tests/frame/methods/test_explode.py
index 2bbe8ac2d5b81..bd0901387eeed 100644
--- a/pandas/tests/frame/methods/test_explode.py
+++ b/pandas/tests/frame/methods/test_explode.py
@@ -172,3 +172,11 @@ def test_ignore_index():
{"id": [0, 0, 10, 10], "values": list("abcd")}, index=[0, 1, 2, 3]
)
tm.assert_frame_equal(result, expected)
+
+
+def test_explode_sets():
+ # https://github.com/pandas-dev/pandas/issues/35614
+ df = pd.DataFrame({"a": [{"x", "y"}], "b": [1]}, index=[1])
+ result = df.explode(column="a").sort_values(by="a")
+ expected = pd.DataFrame({"a": ["x", "y"], "b": [1, 1]}, index=[1, 1])
+ tm.assert_frame_equal(result, expected)
diff --git a/pandas/tests/series/methods/test_explode.py b/pandas/tests/series/methods/test_explode.py
index 4b65e042f7b02..1f0fbd1cc5ecb 100644
--- a/pandas/tests/series/methods/test_explode.py
+++ b/pandas/tests/series/methods/test_explode.py
@@ -126,3 +126,11 @@ def test_ignore_index():
result = s.explode(ignore_index=True)
expected = pd.Series([1, 2, 3, 4], index=[0, 1, 2, 3], dtype=object)
tm.assert_series_equal(result, expected)
+
+
+def test_explode_sets():
+ # https://github.com/pandas-dev/pandas/issues/35614
+ s = pd.Series([{"a", "b", "c"}], index=[1])
+ result = s.explode().sort_values()
+ expected = pd.Series(["a", "b", "c"], index=[1, 1, 1])
+ tm.assert_series_equal(result, expected)
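The two-pass shape of the Cython `explode` patched above (count output rows, then fill them) can be sketched in plain Python. Sets are materialized with `list(v)` before use, mirroring the `v = list(v)` line in the diff, and that conversion is also why row ordering is non-deterministic when exploding sets. This is a simplified stand-in, not the pandas implementation:

```python
# Plain-Python sketch of the two-pass explode in reshape.pyx:
# pass one counts the output rows per input row, pass two fills the
# flat result.  Empty list-likes contribute one None (standing in for
# NaN); scalars -- including strings -- pass through unchanged.
def explode(values):
    def is_list_like(v):
        return isinstance(v, (list, tuple, set, frozenset))

    # Pass 1: how many output rows does each input row produce?
    counts = [len(v) if is_list_like(v) and len(v) else 1 for v in values]

    # Pass 2: fill the flat result.
    result = []
    for v in values:
        if is_list_like(v):
            result.extend(list(v) if v else [None])  # sets become lists here
        else:
            result.append(v)
    assert len(result) == sum(counts)
    return result, counts
```

For example, `explode([[1, 2], (), "x", {3}])` flattens the list, emits `None` for the empty tuple, keeps the string as a scalar, and explodes the one-element set.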
| - [x] closes #35614
- [x] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/35637 | 2020-08-08T19:33:06Z | 2020-09-05T22:36:43Z | 2020-09-05T22:36:43Z | 2020-09-05T22:36:46Z |