| title | diff | body | url | created_at | closed_at | merged_at | updated_at |
|---|---|---|---|---|---|---|---|
DOC: Minor typo fixes for code style guide | diff --git a/doc/source/development/code_style.rst b/doc/source/development/code_style.rst
index 17f8783f71bfb..1b223cf5f026b 100644
--- a/doc/source/development/code_style.rst
+++ b/doc/source/development/code_style.rst
@@ -21,7 +21,7 @@ Patterns
foo.__class__
-------------
-*pandas* uses 'type(foo)' instead 'foo.__class__' as it is making the code more
+*pandas* uses 'type(foo)' instead 'foo.__class__' as it makes the code more
readable.
For example:
@@ -52,8 +52,8 @@ f-strings
*pandas* uses f-strings formatting instead of '%' and '.format()' string formatters.
-The convention of using f-strings on a string that is concatenated over serveral lines,
-is to prefix only the lines containing the value needs to be interpeted.
+The convention of using f-strings on a string that is concatenated over several lines,
+is to prefix only the lines containing values which need to be interpreted.
For example:
@@ -86,8 +86,8 @@ For example:
White spaces
~~~~~~~~~~~~
-Putting the white space only at the end of the previous line, so
-there is no whitespace at the beggining of the concatenated string.
+Only put white space at the end of the previous line, so
+there is no whitespace at the beginning of the concatenated string.
For example:
@@ -116,7 +116,7 @@ Representation function (aka 'repr()')
*pandas* uses 'repr()' instead of '%r' and '!r'.
-The use of 'repr()' will only happend when the value is not an obvious string.
+The use of 'repr()' will only happen when the value is not an obvious string.
For example:
@@ -138,7 +138,7 @@ For example:
Imports (aim for absolute)
==========================
-In Python 3, absolute imports are recommended. In absolute import doing something
+In Python 3, absolute imports are recommended. Using absolute imports, doing something
like ``import string`` will import the string module rather than ``string.py``
in the same directory. As much as possible, you should try to write out
absolute imports that show the whole import chain from top-level pandas.
| https://api.github.com/repos/pandas-dev/pandas/pulls/32379 | 2020-02-29T21:37:38Z | 2020-03-01T02:24:44Z | 2020-03-01T02:24:44Z | 2020-03-01T02:24:51Z | |
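The style-guide conventions this PR documents can be illustrated with a short sketch (the variable names here are hypothetical, not from the pandas codebase):

```python
# Illustrative sketch of the pandas code-style conventions the PR above
# touches; names are hypothetical.

foo = [1, 2, 3]

# Prefer type(foo) over foo.__class__ for readability; both resolve to list.
assert type(foo) is foo.__class__

# f-strings concatenated over several lines: prefix only the lines
# containing values which need to be interpreted.
value = 42
msg = (
    "a plain first line, "
    f"then a line interpolating {value}"
)

# repr() is used when the value is not an obvious string.
err_msg = f"unexpected value {repr(value)}"
```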
ENH: infer freq in timedelta_range | diff --git a/doc/source/whatsnew/v1.1.0.rst b/doc/source/whatsnew/v1.1.0.rst
index 0f18a1fd81815..999dee83cf7e0 100644
--- a/doc/source/whatsnew/v1.1.0.rst
+++ b/doc/source/whatsnew/v1.1.0.rst
@@ -66,6 +66,7 @@ Other enhancements
- :class:`Styler` may now render CSS more efficiently where multiple cells have the same styling (:issue:`30876`)
- When writing directly to a sqlite connection :func:`to_sql` now supports the ``multi`` method (:issue:`29921`)
- `OptionError` is now exposed in `pandas.errors` (:issue:`27553`)
+- :func:`timedelta_range` will now infer a frequency when passed ``start``, ``stop``, and ``periods`` (:issue:`32377`)
-
.. ---------------------------------------------------------------------------
diff --git a/pandas/_testing.py b/pandas/_testing.py
index fce06e216dfd7..b0f18cb6fdd39 100644
--- a/pandas/_testing.py
+++ b/pandas/_testing.py
@@ -742,9 +742,9 @@ def repr_class(x):
raise_assert_detail(obj, msg, repr_class(left), repr_class(right))
-def assert_attr_equal(attr, left, right, obj="Attributes"):
+def assert_attr_equal(attr: str, left, right, obj: str = "Attributes"):
"""
- checks attributes are equal. Both objects must have attribute.
+ Check attributes are equal. Both objects must have attribute.
Parameters
----------
diff --git a/pandas/core/arrays/timedeltas.py b/pandas/core/arrays/timedeltas.py
index 81fc934748d3e..749489a0a04fb 100644
--- a/pandas/core/arrays/timedeltas.py
+++ b/pandas/core/arrays/timedeltas.py
@@ -39,6 +39,7 @@
from pandas.core.algorithms import checked_add_with_arr
from pandas.core.arrays import datetimelike as dtl
import pandas.core.common as com
+from pandas.core.construction import extract_array
from pandas.tseries.frequencies import to_offset
from pandas.tseries.offsets import Tick
@@ -141,8 +142,7 @@ def dtype(self):
# Constructors
def __init__(self, values, dtype=_TD_DTYPE, freq=None, copy=False):
- if isinstance(values, (ABCSeries, ABCIndexClass)):
- values = values._values
+ values = extract_array(values)
inferred_freq = getattr(values, "_freq", None)
@@ -258,6 +258,10 @@ def _generate_range(cls, start, end, periods, freq, closed=None):
index = _generate_regular_range(start, end, periods, freq)
else:
index = np.linspace(start.value, end.value, periods).astype("i8")
+ if len(index) >= 2:
+ # Infer a frequency
+ td = Timedelta(index[1] - index[0])
+ freq = to_offset(td)
if not left_closed:
index = index[1:]
@@ -614,6 +618,10 @@ def __floordiv__(self, other):
if self.freq is not None:
# Note: freq gets division, not floor-division
freq = self.freq / other
+ if freq.nanos == 0 and self.freq.nanos != 0:
+ # e.g. if self.freq is Nano(1) then dividing by 2
+ # rounds down to zero
+ freq = None
return type(self)(result.view("m8[ns]"), freq=freq)
if not hasattr(other, "dtype"):
diff --git a/pandas/tests/indexes/timedeltas/test_timedelta_range.py b/pandas/tests/indexes/timedeltas/test_timedelta_range.py
index 9f12af9a96104..c07a6471c732f 100644
--- a/pandas/tests/indexes/timedeltas/test_timedelta_range.py
+++ b/pandas/tests/indexes/timedeltas/test_timedelta_range.py
@@ -38,6 +38,7 @@ def test_linspace_behavior(self, periods, freq):
result = timedelta_range(start="0 days", end="4 days", periods=periods)
expected = timedelta_range(start="0 days", end="4 days", freq=freq)
tm.assert_index_equal(result, expected)
+ assert result.freq == freq
def test_errors(self):
# not enough params
| Step in the direction of #31195. | https://api.github.com/repos/pandas-dev/pandas/pulls/32377 | 2020-02-29T21:24:08Z | 2020-03-03T01:44:14Z | 2020-03-03T01:44:14Z | 2020-03-03T12:57:33Z |
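The inference step this PR adds to `_generate_range` can be sketched in integer arithmetic: when `start`, `end`, and `periods` are all given, the points come from `np.linspace`, so the gap between the first two points is the inferred frequency. This is a pure-integer sketch over nanosecond values, not the pandas implementation.

```python
def infer_step(start: int, end: int, periods: int):
    """Return the nanosecond step between consecutive linspace points,
    or None when fewer than two points exist to infer from."""
    if periods < 2:
        return None  # mirrors the len(index) >= 2 guard in the diff
    # np.linspace(start, end, periods) spaces points by this amount:
    return (end - start) / (periods - 1)


# "0 days" to "4 days" in nanoseconds, 5 periods -> one day per step
step = infer_step(0, 4 * 86_400 * 10**9, 5)
```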
TYP: annotations for internals, set_axis | diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index f515b57e24cfa..61715397e8e0b 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -3599,7 +3599,7 @@ def align(
see_also_sub=" or columns",
)
@Appender(NDFrame.set_axis.__doc__)
- def set_axis(self, labels, axis=0, inplace=False):
+ def set_axis(self, labels, axis: Axis = 0, inplace: bool = False):
return super().set_axis(labels, axis=axis, inplace=inplace)
@Substitution(**_shared_doc_kwargs)
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index 25770c2c6470c..890c0540d9851 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -520,7 +520,7 @@ def _obj_with_exclusions(self: FrameOrSeries) -> FrameOrSeries:
""" internal compat with SelectionMixin """
return self
- def set_axis(self, labels, axis=0, inplace=False):
+ def set_axis(self, labels, axis: Axis = 0, inplace: bool = False):
"""
Assign desired index to given axis.
@@ -561,7 +561,8 @@ def set_axis(self, labels, axis=0, inplace=False):
obj.set_axis(labels, axis=axis, inplace=True)
return obj
- def _set_axis(self, axis, labels) -> None:
+ def _set_axis(self, axis: int, labels: Index) -> None:
+ labels = ensure_index(labels)
self._data.set_axis(axis, labels)
self._clear_item_cache()
diff --git a/pandas/core/internals/managers.py b/pandas/core/internals/managers.py
index e397167e4881f..9c90b20fc0f16 100644
--- a/pandas/core/internals/managers.py
+++ b/pandas/core/internals/managers.py
@@ -164,15 +164,15 @@ def __nonzero__(self):
__bool__ = __nonzero__
@property
- def shape(self):
+ def shape(self) -> Tuple[int, ...]:
return tuple(len(ax) for ax in self.axes)
@property
def ndim(self) -> int:
return len(self.axes)
- def set_axis(self, axis, new_labels):
- new_labels = ensure_index(new_labels)
+ def set_axis(self, axis: int, new_labels: Index):
+ # Caller is responsible for ensuring we have an Index object.
old_len = len(self.axes[axis])
new_len = len(new_labels)
@@ -184,7 +184,9 @@ def set_axis(self, axis, new_labels):
self.axes[axis] = new_labels
- def rename_axis(self, mapper, axis, copy: bool = True, level=None):
+ def rename_axis(
+ self, mapper, axis: int, copy: bool = True, level=None
+ ) -> "BlockManager":
"""
Rename one of axes.
@@ -193,7 +195,7 @@ def rename_axis(self, mapper, axis, copy: bool = True, level=None):
mapper : unary callable
axis : int
copy : bool, default True
- level : int, default None
+ level : int or None, default None
"""
obj = self.copy(deep=copy)
obj.set_axis(axis, _transform_index(self.axes[axis], mapper, level))
@@ -233,7 +235,7 @@ def _rebuild_blknos_and_blklocs(self):
self._blklocs = new_blklocs
@property
- def items(self):
+ def items(self) -> Index:
return self.axes[0]
def _get_counts(self, f):
@@ -623,7 +625,7 @@ def comp(s, regex=False):
bm._consolidate_inplace()
return bm
- def is_consolidated(self):
+ def is_consolidated(self) -> bool:
"""
Return True if more than one block with the same dtype
"""
@@ -688,7 +690,7 @@ def get_numeric_data(self, copy: bool = False):
self._consolidate_inplace()
return self.combine([b for b in self.blocks if b.is_numeric], copy)
- def combine(self, blocks, copy=True):
+ def combine(self, blocks: List[Block], copy: bool = True) -> "BlockManager":
""" return a new manager with the blocks """
if len(blocks) == 0:
return self.make_empty()
@@ -992,7 +994,6 @@ def delete(self, item):
self.blocks = tuple(
b for blkno, b in enumerate(self.blocks) if not is_blk_deleted[blkno]
)
- self._shape = None
self._rebuild_blknos_and_blklocs()
def set(self, item, value):
@@ -1160,7 +1161,6 @@ def insert(self, loc: int, item, value, allow_duplicates: bool = False):
self.axes[0] = new_axis
self.blocks += (block,)
- self._shape = None
self._known_consolidated = False
diff --git a/pandas/core/series.py b/pandas/core/series.py
index d984225f8fd89..d565cbbdd5344 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -9,7 +9,6 @@
TYPE_CHECKING,
Any,
Callable,
- Hashable,
Iterable,
List,
Optional,
@@ -23,7 +22,7 @@
from pandas._config import get_option
from pandas._libs import lib, properties, reshape, tslibs
-from pandas._typing import Label
+from pandas._typing import Axis, DtypeObj, Label
from pandas.compat.numpy import function as nv
from pandas.util._decorators import Appender, Substitution, doc
from pandas.util._validators import validate_bool_kwarg, validate_percentile
@@ -177,7 +176,7 @@ class Series(base.IndexOpsMixin, generic.NDFrame):
_typ = "series"
- _name: Optional[Hashable]
+ _name: Label
_metadata: List[str] = ["name"]
_internal_names_set = {"index"} | generic.NDFrame._internal_names_set
_accessors = {"dt", "cat", "str", "sparse"}
@@ -391,9 +390,12 @@ def _can_hold_na(self):
_index = None
- def _set_axis(self, axis, labels, fastpath: bool = False) -> None:
+ def _set_axis(self, axis: int, labels, fastpath: bool = False) -> None:
"""
Override generic, we want to set the _typ here.
+
+ This is called from the cython code when we set the `index` attribute
+ directly, e.g. `series.index = [1, 2, 3]`.
"""
if not fastpath:
labels = ensure_index(labels)
@@ -413,6 +415,7 @@ def _set_axis(self, axis, labels, fastpath: bool = False) -> None:
object.__setattr__(self, "_index", labels)
if not fastpath:
+ # The ensure_index call aabove ensures we have an Index object
self._data.set_axis(axis, labels)
def _update_inplace(self, result, **kwargs):
@@ -421,25 +424,25 @@ def _update_inplace(self, result, **kwargs):
# ndarray compatibility
@property
- def dtype(self):
+ def dtype(self) -> DtypeObj:
"""
Return the dtype object of the underlying data.
"""
return self._data.dtype
@property
- def dtypes(self):
+ def dtypes(self) -> DtypeObj:
"""
Return the dtype object of the underlying data.
"""
return self._data.dtype
@property
- def name(self) -> Optional[Hashable]:
+ def name(self) -> Label:
return self._name
@name.setter
- def name(self, value: Optional[Hashable]) -> None:
+ def name(self, value: Label) -> None:
if not is_hashable(value):
raise TypeError("Series.name must be a hashable type")
object.__setattr__(self, "_name", value)
@@ -689,7 +692,7 @@ def __array_ufunc__(
inputs = tuple(extract_array(x, extract_numpy=True) for x in inputs)
result = getattr(ufunc, method)(*inputs, **kwargs)
- name: Optional[Hashable]
+ name: Label
if len(set(names)) == 1:
name = names[0]
else:
@@ -3983,7 +3986,7 @@ def rename(
see_also_sub="",
)
@Appender(generic.NDFrame.set_axis.__doc__)
- def set_axis(self, labels, axis=0, inplace=False):
+ def set_axis(self, labels, axis: Axis = 0, inplace: bool = False):
return super().set_axis(labels, axis=axis, inplace=inplace)
@Substitution(**_shared_doc_kwargs)
| https://api.github.com/repos/pandas-dev/pandas/pulls/32376 | 2020-02-29T20:40:04Z | 2020-03-02T17:27:46Z | 2020-03-02T17:27:46Z | 2020-03-02T17:32:28Z | |
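The invariant this PR introduces ("Caller is responsible for ensuring we have an Index object") is the validate-once-at-the-boundary pattern: the public entry point normalizes input, so internal methods can annotate a concrete type and skip re-validation. The classes below are hypothetical stand-ins, not pandas internals.

```python
from typing import Iterable, List


def ensure_labels(labels: Iterable) -> List[str]:
    """Boundary normalization: coerce any iterable of labels to List[str]
    (pandas uses ensure_index to produce an Index here)."""
    return [str(x) for x in labels]


class Manager:
    """Stand-in for BlockManager: trusts its caller's types."""

    def __init__(self) -> None:
        self.axes: List[List[str]] = [[]]

    def set_axis(self, axis: int, new_labels: List[str]) -> None:
        # Caller is responsible for normalizing, so no re-validation here.
        self.axes[axis] = new_labels


class Frame:
    """Stand-in for NDFrame: normalizes exactly once at the boundary."""

    def __init__(self) -> None:
        self._data = Manager()

    def _set_axis(self, axis: int, labels: Iterable) -> None:
        labels = ensure_labels(labels)
        self._data.set_axis(axis, labels)


f = Frame()
f._set_axis(0, range(3))
```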
REF/TST: misplaced DataFrame.join tests | diff --git a/pandas/tests/frame/test_combine_concat.py b/pandas/tests/frame/test_combine_concat.py
index 321eb5fe94daf..7eba2b873c4f4 100644
--- a/pandas/tests/frame/test_combine_concat.py
+++ b/pandas/tests/frame/test_combine_concat.py
@@ -8,7 +8,7 @@
import pandas._testing as tm
-class TestDataFrameConcatCommon:
+class TestDataFrameConcat:
def test_concat_multiple_frames_dtypes(self):
# GH 2759
@@ -107,77 +107,6 @@ def test_concat_tuple_keys(self):
)
tm.assert_frame_equal(results, expected)
- def test_join_str_datetime(self):
- str_dates = ["20120209", "20120222"]
- dt_dates = [datetime(2012, 2, 9), datetime(2012, 2, 22)]
-
- A = DataFrame(str_dates, index=range(2), columns=["aa"])
- C = DataFrame([[1, 2], [3, 4]], index=str_dates, columns=dt_dates)
-
- tst = A.join(C, on="aa")
-
- assert len(tst.columns) == 3
-
- def test_join_multiindex_leftright(self):
- # GH 10741
- df1 = pd.DataFrame(
- [
- ["a", "x", 0.471780],
- ["a", "y", 0.774908],
- ["a", "z", 0.563634],
- ["b", "x", -0.353756],
- ["b", "y", 0.368062],
- ["b", "z", -1.721840],
- ["c", "x", 1],
- ["c", "y", 2],
- ["c", "z", 3],
- ],
- columns=["first", "second", "value1"],
- ).set_index(["first", "second"])
-
- df2 = pd.DataFrame(
- [["a", 10], ["b", 20]], columns=["first", "value2"]
- ).set_index(["first"])
-
- exp = pd.DataFrame(
- [
- [0.471780, 10],
- [0.774908, 10],
- [0.563634, 10],
- [-0.353756, 20],
- [0.368062, 20],
- [-1.721840, 20],
- [1.000000, np.nan],
- [2.000000, np.nan],
- [3.000000, np.nan],
- ],
- index=df1.index,
- columns=["value1", "value2"],
- )
-
- # these must be the same results (but columns are flipped)
- tm.assert_frame_equal(df1.join(df2, how="left"), exp)
- tm.assert_frame_equal(df2.join(df1, how="right"), exp[["value2", "value1"]])
-
- exp_idx = pd.MultiIndex.from_product(
- [["a", "b"], ["x", "y", "z"]], names=["first", "second"]
- )
- exp = pd.DataFrame(
- [
- [0.471780, 10],
- [0.774908, 10],
- [0.563634, 10],
- [-0.353756, 20],
- [0.368062, 20],
- [-1.721840, 20],
- ],
- index=exp_idx,
- columns=["value1", "value2"],
- )
-
- tm.assert_frame_equal(df1.join(df2, how="right"), exp)
- tm.assert_frame_equal(df2.join(df1, how="left"), exp[["value2", "value1"]])
-
def test_concat_named_keys(self):
# GH 14252
df = pd.DataFrame({"foo": [1, 2], "bar": [0.1, 0.2]})
diff --git a/pandas/tests/frame/test_join.py b/pandas/tests/frame/test_join.py
index 8c388a887158f..4d6e675c6765f 100644
--- a/pandas/tests/frame/test_join.py
+++ b/pandas/tests/frame/test_join.py
@@ -1,6 +1,9 @@
+from datetime import datetime
+
import numpy as np
import pytest
+import pandas as pd
from pandas import DataFrame, Index, period_range
import pandas._testing as tm
@@ -216,3 +219,76 @@ def test_suppress_future_warning_with_sort_kw(sort_kw):
with tm.assert_produces_warning(None, check_stacklevel=False):
result = a.join([b, c], how="outer", sort=sort_kw)
tm.assert_frame_equal(result, expected)
+
+
+class TestDataFrameJoin:
+ def test_join_str_datetime(self):
+ str_dates = ["20120209", "20120222"]
+ dt_dates = [datetime(2012, 2, 9), datetime(2012, 2, 22)]
+
+ A = DataFrame(str_dates, index=range(2), columns=["aa"])
+ C = DataFrame([[1, 2], [3, 4]], index=str_dates, columns=dt_dates)
+
+ tst = A.join(C, on="aa")
+
+ assert len(tst.columns) == 3
+
+ def test_join_multiindex_leftright(self):
+ # GH 10741
+ df1 = pd.DataFrame(
+ [
+ ["a", "x", 0.471780],
+ ["a", "y", 0.774908],
+ ["a", "z", 0.563634],
+ ["b", "x", -0.353756],
+ ["b", "y", 0.368062],
+ ["b", "z", -1.721840],
+ ["c", "x", 1],
+ ["c", "y", 2],
+ ["c", "z", 3],
+ ],
+ columns=["first", "second", "value1"],
+ ).set_index(["first", "second"])
+
+ df2 = pd.DataFrame(
+ [["a", 10], ["b", 20]], columns=["first", "value2"]
+ ).set_index(["first"])
+
+ exp = pd.DataFrame(
+ [
+ [0.471780, 10],
+ [0.774908, 10],
+ [0.563634, 10],
+ [-0.353756, 20],
+ [0.368062, 20],
+ [-1.721840, 20],
+ [1.000000, np.nan],
+ [2.000000, np.nan],
+ [3.000000, np.nan],
+ ],
+ index=df1.index,
+ columns=["value1", "value2"],
+ )
+
+ # these must be the same results (but columns are flipped)
+ tm.assert_frame_equal(df1.join(df2, how="left"), exp)
+ tm.assert_frame_equal(df2.join(df1, how="right"), exp[["value2", "value1"]])
+
+ exp_idx = pd.MultiIndex.from_product(
+ [["a", "b"], ["x", "y", "z"]], names=["first", "second"]
+ )
+ exp = pd.DataFrame(
+ [
+ [0.471780, 10],
+ [0.774908, 10],
+ [0.563634, 10],
+ [-0.353756, 20],
+ [0.368062, 20],
+ [-1.721840, 20],
+ ],
+ index=exp_idx,
+ columns=["value1", "value2"],
+ )
+
+ tm.assert_frame_equal(df1.join(df2, how="right"), exp)
+ tm.assert_frame_equal(df2.join(df1, how="left"), exp[["value2", "value1"]])
diff --git a/pandas/tests/series/test_combine_concat.py b/pandas/tests/series/test_combine_concat.py
index adb79f69c2d81..0766bfc37d7ca 100644
--- a/pandas/tests/series/test_combine_concat.py
+++ b/pandas/tests/series/test_combine_concat.py
@@ -5,7 +5,7 @@
from pandas import Series
-class TestSeriesCombine:
+class TestSeriesConcat:
@pytest.mark.parametrize(
"dtype", ["float64", "int8", "uint8", "bool", "m8[ns]", "M8[ns]"]
)
| https://api.github.com/repos/pandas-dev/pandas/pulls/32375 | 2020-02-29T20:20:16Z | 2020-03-02T17:47:09Z | 2020-03-02T17:47:09Z | 2020-03-02T17:52:13Z | |
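The relocated `test_join_multiindex_leftright` checks a symmetry worth spelling out: joining A to B with `how="left"` yields the same rows as joining B to A with `how="right"`, with the columns flipped. A pure-dict sketch of that symmetry, not the pandas join machinery:

```python
def left_join(left, right):
    """Keep every left key; attach the matching right value or None."""
    return {k: (v, right.get(k)) for k, v in left.items()}


def right_join(left, right):
    """Keep every right key; attach the matching left value or None."""
    return {k: (left.get(k), v) for k, v in right.items()}


a = {"x": 1, "y": 2, "z": 3}
b = {"x": 10, "y": 20}

lj = left_join(a, b)
# right-joining in the other direction and flipping the value pairs
# reproduces the left join exactly
rj_flipped = {k: (v2, v1) for k, (v1, v2) in right_join(b, a).items()}
```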
REF: collect+parametrize reorder_levels tests | diff --git a/pandas/conftest.py b/pandas/conftest.py
index be44e6c2b36da..d6b9576d7729f 100644
--- a/pandas/conftest.py
+++ b/pandas/conftest.py
@@ -1057,3 +1057,17 @@ def index_or_series_obj(request):
copy to avoid mutation, e.g. setting .name
"""
return _index_or_series_objs[request.param].copy(deep=True)
+
+
+@pytest.fixture
+def multiindex_year_month_day_dataframe_random_data():
+ """
+ DataFrame with 3 level MultiIndex (year, month, day) covering
+ first 100 business days from 2000-01-01 with random data
+ """
+ tdf = tm.makeTimeDataFrame(100)
+ ymd = tdf.groupby([lambda x: x.year, lambda x: x.month, lambda x: x.day]).sum()
+ # use Int64Index, to make sure things work
+ ymd.index.set_levels([lev.astype("i8") for lev in ymd.index.levels], inplace=True)
+ ymd.index.set_names(["year", "month", "day"], inplace=True)
+ return ymd
diff --git a/pandas/tests/frame/test_alter_axes.py b/pandas/tests/frame/test_alter_axes.py
index 34df8bb57dd91..e37170d4155f8 100644
--- a/pandas/tests/frame/test_alter_axes.py
+++ b/pandas/tests/frame/test_alter_axes.py
@@ -689,44 +689,6 @@ def test_rename_axis_mapper(self):
with pytest.raises(TypeError, match="bogus"):
df.rename_axis(bogus=None)
- def test_reorder_levels(self):
- index = MultiIndex(
- levels=[["bar"], ["one", "two", "three"], [0, 1]],
- codes=[[0, 0, 0, 0, 0, 0], [0, 1, 2, 0, 1, 2], [0, 1, 0, 1, 0, 1]],
- names=["L0", "L1", "L2"],
- )
- df = DataFrame({"A": np.arange(6), "B": np.arange(6)}, index=index)
-
- # no change, position
- result = df.reorder_levels([0, 1, 2])
- tm.assert_frame_equal(df, result)
-
- # no change, labels
- result = df.reorder_levels(["L0", "L1", "L2"])
- tm.assert_frame_equal(df, result)
-
- # rotate, position
- result = df.reorder_levels([1, 2, 0])
- e_idx = MultiIndex(
- levels=[["one", "two", "three"], [0, 1], ["bar"]],
- codes=[[0, 1, 2, 0, 1, 2], [0, 1, 0, 1, 0, 1], [0, 0, 0, 0, 0, 0]],
- names=["L1", "L2", "L0"],
- )
- expected = DataFrame({"A": np.arange(6), "B": np.arange(6)}, index=e_idx)
- tm.assert_frame_equal(result, expected)
-
- result = df.reorder_levels([0, 0, 0])
- e_idx = MultiIndex(
- levels=[["bar"], ["bar"], ["bar"]],
- codes=[[0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0]],
- names=["L0", "L0", "L0"],
- )
- expected = DataFrame({"A": np.arange(6), "B": np.arange(6)}, index=e_idx)
- tm.assert_frame_equal(result, expected)
-
- result = df.reorder_levels(["L0", "L0", "L0"])
- tm.assert_frame_equal(result, expected)
-
def test_set_index_names(self):
df = tm.makeDataFrame()
df.index.name = "name"
diff --git a/pandas/tests/generic/methods/test_reorder_levels.py b/pandas/tests/generic/methods/test_reorder_levels.py
new file mode 100644
index 0000000000000..8bb6417e56659
--- /dev/null
+++ b/pandas/tests/generic/methods/test_reorder_levels.py
@@ -0,0 +1,73 @@
+import numpy as np
+import pytest
+
+from pandas import DataFrame, MultiIndex, Series
+import pandas._testing as tm
+
+
+class TestReorderLevels:
+ @pytest.mark.parametrize("klass", [Series, DataFrame])
+ def test_reorder_levels(self, klass):
+ index = MultiIndex(
+ levels=[["bar"], ["one", "two", "three"], [0, 1]],
+ codes=[[0, 0, 0, 0, 0, 0], [0, 1, 2, 0, 1, 2], [0, 1, 0, 1, 0, 1]],
+ names=["L0", "L1", "L2"],
+ )
+ df = DataFrame({"A": np.arange(6), "B": np.arange(6)}, index=index)
+ obj = df if klass is DataFrame else df["A"]
+
+ # no change, position
+ result = obj.reorder_levels([0, 1, 2])
+ tm.assert_equal(obj, result)
+
+ # no change, labels
+ result = obj.reorder_levels(["L0", "L1", "L2"])
+ tm.assert_equal(obj, result)
+
+ # rotate, position
+ result = obj.reorder_levels([1, 2, 0])
+ e_idx = MultiIndex(
+ levels=[["one", "two", "three"], [0, 1], ["bar"]],
+ codes=[[0, 1, 2, 0, 1, 2], [0, 1, 0, 1, 0, 1], [0, 0, 0, 0, 0, 0]],
+ names=["L1", "L2", "L0"],
+ )
+ expected = DataFrame({"A": np.arange(6), "B": np.arange(6)}, index=e_idx)
+ expected = expected if klass is DataFrame else expected["A"]
+ tm.assert_equal(result, expected)
+
+ result = obj.reorder_levels([0, 0, 0])
+ e_idx = MultiIndex(
+ levels=[["bar"], ["bar"], ["bar"]],
+ codes=[[0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0]],
+ names=["L0", "L0", "L0"],
+ )
+ expected = DataFrame({"A": np.arange(6), "B": np.arange(6)}, index=e_idx)
+ expected = expected if klass is DataFrame else expected["A"]
+ tm.assert_equal(result, expected)
+
+ result = obj.reorder_levels(["L0", "L0", "L0"])
+ tm.assert_equal(result, expected)
+
+ def test_reorder_levels_swaplevel_equivalence(
+ self, multiindex_year_month_day_dataframe_random_data
+ ):
+
+ ymd = multiindex_year_month_day_dataframe_random_data
+
+ result = ymd.reorder_levels(["month", "day", "year"])
+ expected = ymd.swaplevel(0, 1).swaplevel(1, 2)
+ tm.assert_frame_equal(result, expected)
+
+ result = ymd["A"].reorder_levels(["month", "day", "year"])
+ expected = ymd["A"].swaplevel(0, 1).swaplevel(1, 2)
+ tm.assert_series_equal(result, expected)
+
+ result = ymd.T.reorder_levels(["month", "day", "year"], axis=1)
+ expected = ymd.T.swaplevel(0, 1, axis=1).swaplevel(1, 2, axis=1)
+ tm.assert_frame_equal(result, expected)
+
+ with pytest.raises(TypeError, match="hierarchical axis"):
+ ymd.reorder_levels([1, 2], axis=1)
+
+ with pytest.raises(IndexError, match="Too many levels"):
+ ymd.index.reorder_levels([1, 2, 3])
diff --git a/pandas/tests/indexing/multiindex/conftest.py b/pandas/tests/indexing/multiindex/conftest.py
index 0256f5e35e1db..c69d6f86a6ce6 100644
--- a/pandas/tests/indexing/multiindex/conftest.py
+++ b/pandas/tests/indexing/multiindex/conftest.py
@@ -2,7 +2,6 @@
import pytest
from pandas import DataFrame, Index, MultiIndex
-import pandas._testing as tm
@pytest.fixture
@@ -16,17 +15,3 @@ def multiindex_dataframe_random_data():
return DataFrame(
np.random.randn(10, 3), index=index, columns=Index(["A", "B", "C"], name="exp")
)
-
-
-@pytest.fixture
-def multiindex_year_month_day_dataframe_random_data():
- """
- DataFrame with 3 level MultiIndex (year, month, day) covering
- first 100 business days from 2000-01-01 with random data
- """
- tdf = tm.makeTimeDataFrame(100)
- ymd = tdf.groupby([lambda x: x.year, lambda x: x.month, lambda x: x.day]).sum()
- # use Int64Index, to make sure things work
- ymd.index.set_levels([lev.astype("i8") for lev in ymd.index.levels], inplace=True)
- ymd.index.set_names(["year", "month", "day"], inplace=True)
- return ymd
diff --git a/pandas/tests/series/test_alter_axes.py b/pandas/tests/series/test_alter_axes.py
index f6ca93b0c2882..769d1ed877a69 100644
--- a/pandas/tests/series/test_alter_axes.py
+++ b/pandas/tests/series/test_alter_axes.py
@@ -54,32 +54,6 @@ def test_set_index_makes_timeseries(self):
s.index = idx
assert s.index.is_all_dates
- def test_reorder_levels(self):
- index = MultiIndex(
- levels=[["bar"], ["one", "two", "three"], [0, 1]],
- codes=[[0, 0, 0, 0, 0, 0], [0, 1, 2, 0, 1, 2], [0, 1, 0, 1, 0, 1]],
- names=["L0", "L1", "L2"],
- )
- s = Series(np.arange(6), index=index)
-
- # no change, position
- result = s.reorder_levels([0, 1, 2])
- tm.assert_series_equal(s, result)
-
- # no change, labels
- result = s.reorder_levels(["L0", "L1", "L2"])
- tm.assert_series_equal(s, result)
-
- # rotate, position
- result = s.reorder_levels([1, 2, 0])
- e_idx = MultiIndex(
- levels=[["one", "two", "three"], [0, 1], ["bar"]],
- codes=[[0, 1, 2, 0, 1, 2], [0, 1, 0, 1, 0, 1], [0, 0, 0, 0, 0, 0]],
- names=["L1", "L2", "L0"],
- )
- expected = Series(np.arange(6), index=e_idx)
- tm.assert_series_equal(result, expected)
-
def test_rename_axis_mapper(self):
# GH 19978
mi = MultiIndex.from_product([["a", "b", "c"], [1, 2]], names=["ll", "nn"])
diff --git a/pandas/tests/test_multilevel.py b/pandas/tests/test_multilevel.py
index e3cf46b466ae4..84279d874bae1 100644
--- a/pandas/tests/test_multilevel.py
+++ b/pandas/tests/test_multilevel.py
@@ -939,25 +939,6 @@ def test_swaplevel(self):
with pytest.raises(TypeError, match=msg):
DataFrame(range(3)).swaplevel()
- def test_reorder_levels(self):
- result = self.ymd.reorder_levels(["month", "day", "year"])
- expected = self.ymd.swaplevel(0, 1).swaplevel(1, 2)
- tm.assert_frame_equal(result, expected)
-
- result = self.ymd["A"].reorder_levels(["month", "day", "year"])
- expected = self.ymd["A"].swaplevel(0, 1).swaplevel(1, 2)
- tm.assert_series_equal(result, expected)
-
- result = self.ymd.T.reorder_levels(["month", "day", "year"], axis=1)
- expected = self.ymd.T.swaplevel(0, 1, axis=1).swaplevel(1, 2, axis=1)
- tm.assert_frame_equal(result, expected)
-
- with pytest.raises(TypeError, match="hierarchical axis"):
- self.ymd.reorder_levels([1, 2], axis=1)
-
- with pytest.raises(IndexError, match="Too many levels"):
- self.ymd.index.reorder_levels([1, 2, 3])
-
def test_insert_index(self):
df = self.ymd[:5].T
df[2000, 1, 10] = df[2000, 1, 7]
| https://api.github.com/repos/pandas-dev/pandas/pulls/32373 | 2020-02-29T18:58:41Z | 2020-03-02T11:18:37Z | 2020-03-02T11:18:36Z | 2020-03-02T15:30:15Z | |
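The cases the parametrized test covers (no change, rotation, repeating a level) reduce to permuting each index tuple by the requested level order. A pure-tuple sketch, not the pandas implementation:

```python
def reorder_levels(index_tuples, order):
    """Rearrange (and possibly repeat) the levels of each index tuple."""
    return [tuple(t[i] for i in order) for t in index_tuples]


idx = [("bar", "one", 0), ("bar", "two", 1)]

same = reorder_levels(idx, [0, 1, 2])          # no change
rotated = reorder_levels(idx, [1, 2, 0])       # rotate levels
repeated = reorder_levels(idx, [0, 0, 0])      # repeat one level
```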
TYP/CLN: Optional[Hashable] -> pandas._typing.Label | diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index 61641bfb24293..60d89b2076e92 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -890,7 +890,7 @@ def style(self) -> "Styler":
"""
@Appender(_shared_docs["items"])
- def items(self) -> Iterable[Tuple[Optional[Hashable], Series]]:
+ def items(self) -> Iterable[Tuple[Label, Series]]:
if self.columns.is_unique and hasattr(self, "_item_cache"):
for k in self.columns:
yield k, self._get_item_cache(k)
@@ -899,10 +899,10 @@ def items(self) -> Iterable[Tuple[Optional[Hashable], Series]]:
yield k, self._ixs(i, axis=1)
@Appender(_shared_docs["items"])
- def iteritems(self) -> Iterable[Tuple[Optional[Hashable], Series]]:
+ def iteritems(self) -> Iterable[Tuple[Label, Series]]:
yield from self.items()
- def iterrows(self) -> Iterable[Tuple[Optional[Hashable], Series]]:
+ def iterrows(self) -> Iterable[Tuple[Label, Series]]:
"""
Iterate over DataFrame rows as (index, Series) pairs.
@@ -4043,7 +4043,7 @@ def set_index(
"one-dimensional arrays."
)
- missing: List[Optional[Hashable]] = []
+ missing: List[Label] = []
for col in keys:
if isinstance(
col, (ABCIndexClass, ABCSeries, np.ndarray, list, abc.Iterator)
@@ -4082,7 +4082,7 @@ def set_index(
else:
arrays.append(self.index)
- to_remove: List[Optional[Hashable]] = []
+ to_remove: List[Label] = []
for col in keys:
if isinstance(col, ABCMultiIndex):
for n in range(col.nlevels):
@@ -4137,7 +4137,7 @@ def reset_index(
drop: bool = False,
inplace: bool = False,
col_level: Hashable = 0,
- col_fill: Optional[Hashable] = "",
+ col_fill: Label = "",
) -> Optional["DataFrame"]:
"""
Reset the index, or a level of it.
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index f0147859cae97..866b58c1ffe3d 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -9729,7 +9729,7 @@ def describe_1d(data):
ldesc = [describe_1d(s) for _, s in data.items()]
# set a convenient order for rows
- names: List[Optional[Hashable]] = []
+ names: List[Label] = []
ldesc_indexes = sorted((x.index for x in ldesc), key=len)
for idxnames in ldesc_indexes:
for name in idxnames:
diff --git a/pandas/core/indexes/base.py b/pandas/core/indexes/base.py
index 6f44b5abf5b04..7e391e7a03fbb 100644
--- a/pandas/core/indexes/base.py
+++ b/pandas/core/indexes/base.py
@@ -1,7 +1,7 @@
from datetime import datetime
import operator
from textwrap import dedent
-from typing import TYPE_CHECKING, Any, FrozenSet, Hashable, Optional, Union
+from typing import TYPE_CHECKING, Any, FrozenSet, Hashable, Union
import warnings
import numpy as np
@@ -5546,7 +5546,7 @@ def default_index(n):
return RangeIndex(0, n, name=None)
-def maybe_extract_name(name, obj, cls) -> Optional[Hashable]:
+def maybe_extract_name(name, obj, cls) -> Label:
"""
If no name is passed, then extract it from data, validating hashability.
"""
diff --git a/pandas/core/reshape/concat.py b/pandas/core/reshape/concat.py
index 06a180d4a096e..091129707228f 100644
--- a/pandas/core/reshape/concat.py
+++ b/pandas/core/reshape/concat.py
@@ -2,11 +2,11 @@
Concat routines.
"""
-from typing import Hashable, Iterable, List, Mapping, Optional, Union, overload
+from typing import Iterable, List, Mapping, Union, overload
import numpy as np
-from pandas._typing import FrameOrSeriesUnion
+from pandas._typing import FrameOrSeriesUnion, Label
from pandas.core.dtypes.generic import ABCDataFrame, ABCSeries
@@ -32,7 +32,7 @@
@overload
def concat(
- objs: Union[Iterable["DataFrame"], Mapping[Optional[Hashable], "DataFrame"]],
+ objs: Union[Iterable["DataFrame"], Mapping[Label, "DataFrame"]],
axis=0,
join: str = "outer",
ignore_index: bool = False,
@@ -48,9 +48,7 @@ def concat(
@overload
def concat(
- objs: Union[
- Iterable[FrameOrSeriesUnion], Mapping[Optional[Hashable], FrameOrSeriesUnion]
- ],
+ objs: Union[Iterable[FrameOrSeriesUnion], Mapping[Label, FrameOrSeriesUnion]],
axis=0,
join: str = "outer",
ignore_index: bool = False,
@@ -65,9 +63,7 @@ def concat(
def concat(
- objs: Union[
- Iterable[FrameOrSeriesUnion], Mapping[Optional[Hashable], FrameOrSeriesUnion]
- ],
+ objs: Union[Iterable[FrameOrSeriesUnion], Mapping[Label, FrameOrSeriesUnion]],
axis=0,
join="outer",
ignore_index: bool = False,
@@ -536,7 +532,7 @@ def _get_concat_axis(self) -> Index:
idx = ibase.default_index(len(self.objs))
return idx
elif self.keys is None:
- names: List[Optional[Hashable]] = [None] * len(self.objs)
+ names: List[Label] = [None] * len(self.objs)
num = 0
has_names = False
for i, x in enumerate(self.objs):
diff --git a/pandas/core/series.py b/pandas/core/series.py
index 12164a4b8ff6b..bb4e222193608 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -692,11 +692,7 @@ def __array_ufunc__(
inputs = tuple(extract_array(x, extract_numpy=True) for x in inputs)
result = getattr(ufunc, method)(*inputs, **kwargs)
- name: Label
- if len(set(names)) == 1:
- name = names[0]
- else:
- name = None
+ name = names[0] if len(set(names)) == 1 else None
def construct_return(result):
if lib.is_scalar(result):
diff --git a/pandas/io/pytables.py b/pandas/io/pytables.py
index 168666ea21f45..7aeed5c316d7f 100644
--- a/pandas/io/pytables.py
+++ b/pandas/io/pytables.py
@@ -8,17 +8,7 @@
import itertools
import os
import re
-from typing import (
- TYPE_CHECKING,
- Any,
- Dict,
- Hashable,
- List,
- Optional,
- Tuple,
- Type,
- Union,
-)
+from typing import TYPE_CHECKING, Any, Dict, List, Optional, Tuple, Type, Union
import warnings
import numpy as np
@@ -27,7 +17,7 @@
from pandas._libs import lib, writers as libwriters
from pandas._libs.tslibs import timezones
-from pandas._typing import ArrayLike, FrameOrSeries
+from pandas._typing import ArrayLike, FrameOrSeries, Label
from pandas.compat._optional import import_optional_dependency
from pandas.errors import PerformanceWarning
from pandas.util._decorators import cache_readonly
@@ -2811,7 +2801,7 @@ def read_multi_index(
levels = []
codes = []
- names: List[Optional[Hashable]] = []
+ names: List[Label] = []
for i in range(nlevels):
level_key = f"{key}_level{i}"
node = getattr(self.group, level_key)
@@ -2976,7 +2966,7 @@ class SeriesFixed(GenericFixed):
pandas_kind = "series"
attributes = ["name"]
- name: Optional[Hashable]
+ name: Label
@property
def shape(self):
| https://api.github.com/repos/pandas-dev/pandas/pulls/32371 | 2020-02-29T17:16:02Z | 2020-03-06T07:12:48Z | 2020-03-06T07:12:48Z | 2020-03-06T07:13:20Z | |
PR #32068 follow up - Print merged df result in doc | diff --git a/doc/source/user_guide/merging.rst b/doc/source/user_guide/merging.rst
index 8302b5c5dea60..49f4bbb6beb19 100644
--- a/doc/source/user_guide/merging.rst
+++ b/doc/source/user_guide/merging.rst
@@ -742,7 +742,7 @@ as shown in the following example.
)
ser
- result = pd.merge(df, ser.reset_index(), on=['Let', 'Num'])
+ pd.merge(df, ser.reset_index(), on=['Let', 'Num'])
Here is another example with duplicate join keys in DataFrames:
| - [x] follow up to PR #32068 which closed #12550 - small fix - see https://github.com/pandas-dev/pandas/pull/32068#discussion_r386029773
| https://api.github.com/repos/pandas-dev/pandas/pulls/32370 | 2020-02-29T14:47:52Z | 2020-02-29T17:17:48Z | 2020-02-29T17:17:48Z | 2020-02-29T19:10:05Z |
CLN: Some code cleanups in pandas/_libs/parsers.pyx | diff --git a/pandas/_libs/parsers.pyx b/pandas/_libs/parsers.pyx
index 3a42a64046abd..4f7d75e0aaad6 100644
--- a/pandas/_libs/parsers.pyx
+++ b/pandas/_libs/parsers.pyx
@@ -1596,8 +1596,6 @@ cdef _categorical_convert(parser_t *parser, int64_t col,
cdef _to_fw_string(parser_t *parser, int64_t col, int64_t line_start,
int64_t line_end, int64_t width):
cdef:
- Py_ssize_t i
- coliter_t it
const char *word = NULL
char *data
ndarray result
@@ -1642,15 +1640,11 @@ cdef _try_double(parser_t *parser, int64_t col,
bint na_filter, kh_str_starts_t *na_hashset, object na_flist):
cdef:
int error, na_count = 0
- Py_ssize_t i, lines
- coliter_t it
- const char *word = NULL
- char *p_end
+ Py_ssize_t lines
float64_t *data
float64_t NA = na_values[np.float64]
kh_float64_t *na_fset
ndarray result
- khiter_t k
bint use_na_flist = len(na_flist) > 0
lines = line_end - line_start
@@ -1685,7 +1679,7 @@ cdef inline int _try_double_nogil(parser_t *parser,
coliter_t it
const char *word = NULL
char *p_end
- khiter_t k, k64
+ khiter_t k64
na_count[0] = 0
coliter_setup(&it, parser, col, line_start)
@@ -1748,11 +1742,10 @@ cdef _try_uint64(parser_t *parser, int64_t col,
bint na_filter, kh_str_starts_t *na_hashset):
cdef:
int error
- Py_ssize_t i, lines
+ Py_ssize_t lines
coliter_t it
uint64_t *data
ndarray result
- khiter_t k
uint_state state
lines = line_end - line_start
@@ -1822,13 +1815,11 @@ cdef _try_int64(parser_t *parser, int64_t col,
bint na_filter, kh_str_starts_t *na_hashset):
cdef:
int error, na_count = 0
- Py_ssize_t i, lines
+ Py_ssize_t lines
coliter_t it
int64_t *data
ndarray result
-
int64_t NA = na_values[np.int64]
- khiter_t k
lines = line_end - line_start
result = np.empty(lines, dtype=np.int64)
@@ -1856,7 +1847,6 @@ cdef inline int _try_int64_nogil(parser_t *parser, int64_t col,
Py_ssize_t i, lines = line_end - line_start
coliter_t it
const char *word = NULL
- khiter_t k
na_count[0] = 0
coliter_setup(&it, parser, col, line_start)
@@ -1892,9 +1882,7 @@ cdef _try_bool_flex(parser_t *parser, int64_t col,
const kh_str_starts_t *false_hashset):
cdef:
int error, na_count = 0
- Py_ssize_t i, lines
- coliter_t it
- const char *word = NULL
+ Py_ssize_t lines
uint8_t *data
ndarray result
@@ -1926,7 +1914,6 @@ cdef inline int _try_bool_flex_nogil(parser_t *parser, int64_t col,
Py_ssize_t i, lines = line_end - line_start
coliter_t it
const char *word = NULL
- khiter_t k
na_count[0] = 0
coliter_setup(&it, parser, col, line_start)
@@ -1981,10 +1968,8 @@ cdef kh_str_starts_t* kset_from_list(list values) except NULL:
# caller takes responsibility for freeing the hash table
cdef:
Py_ssize_t i
- khiter_t k
kh_str_starts_t *table
int ret = 0
-
object val
table = kh_init_str_starts()
@@ -2012,7 +1997,6 @@ cdef kh_str_starts_t* kset_from_list(list values) except NULL:
cdef kh_float64_t* kset_float64_from_list(values) except NULL:
# caller takes responsibility for freeing the hash table
cdef:
- Py_ssize_t i
khiter_t k
kh_float64_t *table
int ret = 0
@@ -2150,7 +2134,6 @@ cdef _apply_converter(object f, parser_t *parser, int64_t col,
int64_t line_start, int64_t line_end,
char* c_encoding):
cdef:
- int error
Py_ssize_t i, lines
coliter_t it
const char *word = NULL
| There are __a lot__ of unused cdef variables in ```pandas/_libs/parsers.pyx```; this PR covers *some* of them.
| https://api.github.com/repos/pandas-dev/pandas/pulls/32369 | 2020-02-29T14:20:09Z | 2020-03-16T23:55:58Z | 2020-03-16T23:55:58Z | 2020-03-20T00:06:43Z |
Silence warnings when compiling pandas/_libs/parsers.pyx | diff --git a/pandas/_libs/parsers.pyx b/pandas/_libs/parsers.pyx
index 3077f73a8d1a4..2fd227694800c 100644
--- a/pandas/_libs/parsers.pyx
+++ b/pandas/_libs/parsers.pyx
@@ -701,7 +701,7 @@ cdef class TextReader:
char *word
object name, old_name
int status
- uint64_t hr, data_line
+ uint64_t hr, data_line = 0
char *errors = "strict"
StringPath path = _string_path(self.c_encoding)
| - [ ] closes #xxxx
- [ ] tests added / passed
- [ ] passes `black pandas`
- [ ] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
This gets rid of the following warning:
```
pandas/_libs/parsers.c: In function ‘__pyx_f_6pandas_5_libs_7parsers_10TextReader__get_header’:
pandas/_libs/parsers.c:9313:27: warning: ‘__pyx_v_data_line’ may be used uninitialized in this function [-Wmaybe-uninitialized]
9313 | __pyx_t_5numpy_uint64_t __pyx_v_data_line;
``` | https://api.github.com/repos/pandas-dev/pandas/pulls/32368 | 2020-02-29T13:59:09Z | 2020-02-29T18:04:07Z | 2020-02-29T18:04:07Z | 2020-03-05T09:42:13Z |
CLN: some code cleanups to pandas/_libs/missing.pyx | diff --git a/pandas/_libs/missing.pyx b/pandas/_libs/missing.pyx
index 4d17a6f883c1c..b0a24a9b2ebfe 100644
--- a/pandas/_libs/missing.pyx
+++ b/pandas/_libs/missing.pyx
@@ -10,10 +10,13 @@ cnp.import_array()
cimport pandas._libs.util as util
-from pandas._libs.tslibs.np_datetime cimport (
- get_timedelta64_value, get_datetime64_value)
+
+from pandas._libs.tslibs.np_datetime cimport get_datetime64_value, get_timedelta64_value
from pandas._libs.tslibs.nattype cimport (
- checknull_with_nat, c_NaT as NaT, is_null_datetimelike)
+ c_NaT as NaT,
+ checknull_with_nat,
+ is_null_datetimelike,
+)
from pandas._libs.ops_dispatch import maybe_dispatch_ufunc_to_dunder_op
from pandas.compat import is_platform_32bit
@@ -44,7 +47,7 @@ cpdef bint checknull(object val):
Returns
-------
- result : bool
+ bool
Notes
-----
@@ -223,7 +226,7 @@ def isnaobj2d_old(arr: ndarray) -> ndarray:
Returns
-------
- result : ndarray (dtype=np.bool_)
+ ndarray (dtype=np.bool_)
Notes
-----
@@ -248,17 +251,11 @@ def isnaobj2d_old(arr: ndarray) -> ndarray:
def isposinf_scalar(val: object) -> bool:
- if util.is_float_object(val) and val == INF:
- return True
- else:
- return False
+ return util.is_float_object(val) and val == INF
def isneginf_scalar(val: object) -> bool:
- if util.is_float_object(val) and val == NEGINF:
- return True
- else:
- return False
+ return util.is_float_object(val) and val == NEGINF
cdef inline bint is_null_datetime64(v):
@@ -423,7 +420,6 @@ class NAType(C_NAType):
return NA
elif isinstance(other, np.ndarray):
return np.where(other == 1, other, NA)
-
return NotImplemented
# Logical ops using Kleene logic
@@ -433,8 +429,7 @@ class NAType(C_NAType):
return False
elif other is True or other is C_NA:
return NA
- else:
- return NotImplemented
+ return NotImplemented
__rand__ = __and__
@@ -443,8 +438,7 @@ class NAType(C_NAType):
return True
elif other is False or other is C_NA:
return NA
- else:
- return NotImplemented
+ return NotImplemented
__ror__ = __or__
| - [ ] closes #xxxx
- [ ] tests added / passed
- [ ] passes `black pandas`
- [ ] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/32367 | 2020-02-29T13:43:43Z | 2020-03-03T03:00:27Z | 2020-03-03T03:00:27Z | 2020-03-05T10:12:19Z |
Backport PR #32284 on branch 1.0.x (CI: nested DataFrames in npdev) | diff --git a/pandas/tests/frame/test_constructors.py b/pandas/tests/frame/test_constructors.py
index ea1e339f44d93..61af02090f7db 100644
--- a/pandas/tests/frame/test_constructors.py
+++ b/pandas/tests/frame/test_constructors.py
@@ -9,6 +9,7 @@
import pytest
from pandas.compat import is_platform_little_endian
+from pandas.compat.numpy import _is_numpy_dev
from pandas.core.dtypes.common import is_integer_dtype
@@ -144,6 +145,7 @@ def test_constructor_dtype_list_data(self):
assert df.loc[1, 0] is None
assert df.loc[0, 1] == "2"
+ @pytest.mark.xfail(_is_numpy_dev, reason="Interprets list of frame as 3D")
def test_constructor_list_frames(self):
# see gh-3243
result = DataFrame([DataFrame()])
@@ -496,6 +498,7 @@ def test_constructor_error_msgs(self):
with pytest.raises(ValueError, match=msg):
DataFrame({"a": False, "b": True})
+ @pytest.mark.xfail(_is_numpy_dev, reason="Interprets embedded frame as 3D")
def test_constructor_with_embedded_frames(self):
# embedded data frames
| Backport PR #32284: CI: nested DataFrames in npdev | https://api.github.com/repos/pandas-dev/pandas/pulls/32366 | 2020-02-29T13:38:41Z | 2020-02-29T14:42:25Z | 2020-02-29T14:42:25Z | 2020-02-29T14:42:26Z |
TST: Removed import of itertools | diff --git a/pandas/tests/groupby/test_function.py b/pandas/tests/groupby/test_function.py
index c402ca194648f..83080aa98648f 100644
--- a/pandas/tests/groupby/test_function.py
+++ b/pandas/tests/groupby/test_function.py
@@ -1,7 +1,6 @@
import builtins
import datetime as dt
from io import StringIO
-from itertools import product
from string import ascii_lowercase
import numpy as np
@@ -1296,36 +1295,32 @@ def __eq__(self, other):
# --------------------------------
-def test_size(df):
- grouped = df.groupby(["A", "B"])
+@pytest.mark.parametrize("by", ["A", "B", ["A", "B"]])
+def test_size(df, by):
+ grouped = df.groupby(by=by)
result = grouped.size()
for key, group in grouped:
assert result[key] == len(group)
- grouped = df.groupby("A")
- result = grouped.size()
- for key, group in grouped:
- assert result[key] == len(group)
- grouped = df.groupby("B")
- result = grouped.size()
- for key, group in grouped:
- assert result[key] == len(group)
+@pytest.mark.parametrize("by", ["A", "B", ["A", "B"]])
+@pytest.mark.parametrize("sort", [True, False])
+def test_size_sort(df, sort, by):
+ df = DataFrame(np.random.choice(20, (1000, 3)), columns=list("ABC"))
+ left = df.groupby(by=by, sort=sort).size()
+ right = df.groupby(by=by, sort=sort)["C"].apply(lambda a: a.shape[0])
+ tm.assert_series_equal(left, right, check_names=False)
- df = DataFrame(np.random.choice(20, (1000, 3)), columns=list("abc"))
- for sort, key in product((False, True), ("a", "b", ["a", "b"])):
- left = df.groupby(key, sort=sort).size()
- right = df.groupby(key, sort=sort)["c"].apply(lambda a: a.shape[0])
- tm.assert_series_equal(left, right, check_names=False)
- # GH11699
+def test_size_series_dataframe():
+ # https://github.com/pandas-dev/pandas/issues/11699
df = DataFrame(columns=["A", "B"])
out = Series(dtype="int64", index=Index([], name="A"))
tm.assert_series_equal(df.groupby("A").size(), out)
def test_size_groupby_all_null():
- # GH23050
+ # https://github.com/pandas-dev/pandas/issues/23050
# Assert no 'Value Error : Length of passed values is 2, index implies 0'
df = DataFrame({"A": [None, None]}) # all-null groups
result = df.groupby("A").size()
@@ -1335,6 +1330,8 @@ def test_size_groupby_all_null():
# quantile
# --------------------------------
+
+
@pytest.mark.parametrize(
"interpolation", ["linear", "lower", "higher", "nearest", "midpoint"]
)
| - [ ] closes #xxxx
- [x] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
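The diff above replaces an `itertools.product` loop inside one test with stacked `pytest.mark.parametrize` decorators, so each combination becomes its own test case. A minimal sketch of the equivalence (`run_case` is a hypothetical stand-in for the real groupby comparison):

```python
from itertools import product

import pytest


def run_case(sort, key):
    # Hypothetical stand-in for comparing groupby(...).size() against
    # groupby(...)["C"].apply(...); always passes in this sketch.
    return key in ("a", "b") and sort in (False, True)


# Old style (removed by the diff): one test body loops over combinations,
# so a failure in one combination hides the remaining ones.
def test_size_sort_loop():
    for sort, key in product((False, True), ("a", "b")):
        assert run_case(sort, key)


# New style: pytest expands the stacked decorators into one independent
# test per (sort, key) combination, each reported separately.
@pytest.mark.parametrize("key", ["a", "b"])
@pytest.mark.parametrize("sort", [False, True])
def test_size_sort(sort, key):
    assert run_case(sort, key)
```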
| https://api.github.com/repos/pandas-dev/pandas/pulls/32364 | 2020-02-29T12:27:05Z | 2020-03-07T12:01:24Z | 2020-03-07T12:01:24Z | 2020-03-07T12:09:19Z |
TYP/cln: generic._make_*_function | diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index f7eb79a4f1c78..e36eace1e42e6 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -9899,37 +9899,37 @@ def _add_numeric_operations(cls):
"""
Add the operations to the cls; evaluate the doc strings again
"""
- axis_descr, name, name2 = _doc_parms(cls)
+ axis_descr, name1, name2 = _doc_parms(cls)
cls.any = _make_logical_function(
cls,
"any",
- name,
- name2,
- axis_descr,
- _any_desc,
- nanops.nanany,
- _any_see_also,
- _any_examples,
+ name1=name1,
+ name2=name2,
+ axis_descr=axis_descr,
+ desc=_any_desc,
+ func=nanops.nanany,
+ see_also=_any_see_also,
+ examples=_any_examples,
empty_value=False,
)
cls.all = _make_logical_function(
cls,
"all",
- name,
- name2,
- axis_descr,
- _all_desc,
- nanops.nanall,
- _all_see_also,
- _all_examples,
+ name1=name1,
+ name2=name2,
+ axis_descr=axis_descr,
+ desc=_all_desc,
+ func=nanops.nanall,
+ see_also=_all_see_also,
+ examples=_all_examples,
empty_value=True,
)
@Substitution(
desc="Return the mean absolute deviation of the values "
"for the requested axis.",
- name1=name,
+ name1=name1,
name2=name2,
axis_descr=axis_descr,
min_count="",
@@ -9957,177 +9957,177 @@ def mad(self, axis=None, skipna=None, level=None):
cls.sem = _make_stat_function_ddof(
cls,
"sem",
- name,
- name2,
- axis_descr,
- "Return unbiased standard error of the mean over requested "
+ name1=name1,
+ name2=name2,
+ axis_descr=axis_descr,
+ desc="Return unbiased standard error of the mean over requested "
"axis.\n\nNormalized by N-1 by default. This can be changed "
"using the ddof argument",
- nanops.nansem,
+ func=nanops.nansem,
)
cls.var = _make_stat_function_ddof(
cls,
"var",
- name,
- name2,
- axis_descr,
- "Return unbiased variance over requested axis.\n\nNormalized by "
+ name1=name1,
+ name2=name2,
+ axis_descr=axis_descr,
+ desc="Return unbiased variance over requested axis.\n\nNormalized by "
"N-1 by default. This can be changed using the ddof argument",
- nanops.nanvar,
+ func=nanops.nanvar,
)
cls.std = _make_stat_function_ddof(
cls,
"std",
- name,
- name2,
- axis_descr,
- "Return sample standard deviation over requested axis."
+ name1=name1,
+ name2=name2,
+ axis_descr=axis_descr,
+ desc="Return sample standard deviation over requested axis."
"\n\nNormalized by N-1 by default. This can be changed using the "
"ddof argument",
- nanops.nanstd,
+ func=nanops.nanstd,
)
cls.cummin = _make_cum_function(
cls,
"cummin",
- name,
- name2,
- axis_descr,
- "minimum",
- np.minimum.accumulate,
- "min",
- np.inf,
- np.nan,
- _cummin_examples,
+ name1=name1,
+ name2=name2,
+ axis_descr=axis_descr,
+ desc="minimum",
+ accum_func=np.minimum.accumulate,
+ accum_func_name="min",
+ mask_a=np.inf,
+ mask_b=np.nan,
+ examples=_cummin_examples,
)
cls.cumsum = _make_cum_function(
cls,
"cumsum",
- name,
- name2,
- axis_descr,
- "sum",
- np.cumsum,
- "sum",
- 0.0,
- np.nan,
- _cumsum_examples,
+ name1=name1,
+ name2=name2,
+ axis_descr=axis_descr,
+ desc="sum",
+ accum_func=np.cumsum,
+ accum_func_name="sum",
+ mask_a=0.0,
+ mask_b=np.nan,
+ examples=_cumsum_examples,
)
cls.cumprod = _make_cum_function(
cls,
"cumprod",
- name,
- name2,
- axis_descr,
- "product",
- np.cumprod,
- "prod",
- 1.0,
- np.nan,
- _cumprod_examples,
+ name1=name1,
+ name2=name2,
+ axis_descr=axis_descr,
+ desc="product",
+ accum_func=np.cumprod,
+ accum_func_name="prod",
+ mask_a=1.0,
+ mask_b=np.nan,
+ examples=_cumprod_examples,
)
cls.cummax = _make_cum_function(
cls,
"cummax",
- name,
- name2,
- axis_descr,
- "maximum",
- np.maximum.accumulate,
- "max",
- -np.inf,
- np.nan,
- _cummax_examples,
+ name1=name1,
+ name2=name2,
+ axis_descr=axis_descr,
+ desc="maximum",
+ accum_func=np.maximum.accumulate,
+ accum_func_name="max",
+ mask_a=-np.inf,
+ mask_b=np.nan,
+ examples=_cummax_examples,
)
cls.sum = _make_min_count_stat_function(
cls,
"sum",
- name,
- name2,
- axis_descr,
- """Return the sum of the values for the requested axis.\n
- This is equivalent to the method ``numpy.sum``.""",
- nanops.nansum,
- _stat_func_see_also,
- _sum_examples,
+ name1=name1,
+ name2=name2,
+ axis_descr=axis_descr,
+ desc="Return the sum of the values for the requested axis.\n\n"
+ "This is equivalent to the method ``numpy.sum``.",
+ func=nanops.nansum,
+ see_also=_stat_func_see_also,
+ examples=_sum_examples,
)
cls.mean = _make_stat_function(
cls,
"mean",
- name,
- name2,
- axis_descr,
- "Return the mean of the values for the requested axis.",
- nanops.nanmean,
+ name1=name1,
+ name2=name2,
+ axis_descr=axis_descr,
+ desc="Return the mean of the values for the requested axis.",
+ func=nanops.nanmean,
)
cls.skew = _make_stat_function(
cls,
"skew",
- name,
- name2,
- axis_descr,
- "Return unbiased skew over requested axis.\n\nNormalized by N-1.",
- nanops.nanskew,
+ name1=name1,
+ name2=name2,
+ axis_descr=axis_descr,
+ desc="Return unbiased skew over requested axis.\n\nNormalized by N-1.",
+ func=nanops.nanskew,
)
cls.kurt = _make_stat_function(
cls,
"kurt",
- name,
- name2,
- axis_descr,
- "Return unbiased kurtosis over requested axis.\n\n"
+ name1=name1,
+ name2=name2,
+ axis_descr=axis_descr,
+ desc="Return unbiased kurtosis over requested axis.\n\n"
"Kurtosis obtained using Fisher's definition of\n"
"kurtosis (kurtosis of normal == 0.0). Normalized "
"by N-1.",
- nanops.nankurt,
+ func=nanops.nankurt,
)
cls.kurtosis = cls.kurt
cls.prod = _make_min_count_stat_function(
cls,
"prod",
- name,
- name2,
- axis_descr,
- "Return the product of the values for the requested axis.",
- nanops.nanprod,
+ name1=name1,
+ name2=name2,
+ axis_descr=axis_descr,
+ desc="Return the product of the values for the requested axis.",
+ func=nanops.nanprod,
examples=_prod_examples,
)
cls.product = cls.prod
cls.median = _make_stat_function(
cls,
"median",
- name,
- name2,
- axis_descr,
- "Return the median of the values for the requested axis.",
- nanops.nanmedian,
+ name1=name1,
+ name2=name2,
+ axis_descr=axis_descr,
+ desc="Return the median of the values for the requested axis.",
+ func=nanops.nanmedian,
)
cls.max = _make_stat_function(
cls,
"max",
- name,
- name2,
- axis_descr,
- """Return the maximum of the values for the requested axis.\n
- If you want the *index* of the maximum, use ``idxmax``. This is
- the equivalent of the ``numpy.ndarray`` method ``argmax``.""",
- nanops.nanmax,
- _stat_func_see_also,
- _max_examples,
+ name1=name1,
+ name2=name2,
+ axis_descr=axis_descr,
+ desc="Return the maximum of the values for the requested axis.\n\n"
+ "If you want the *index* of the maximum, use ``idxmax``. This is"
+ "the equivalent of the ``numpy.ndarray`` method ``argmax``.",
+ func=nanops.nanmax,
+ see_also=_stat_func_see_also,
+ examples=_max_examples,
)
cls.min = _make_stat_function(
cls,
"min",
- name,
- name2,
- axis_descr,
- """Return the minimum of the values for the requested axis.\n
- If you want the *index* of the minimum, use ``idxmin``. This is
- the equivalent of the ``numpy.ndarray`` method ``argmin``.""",
- nanops.nanmin,
- _stat_func_see_also,
- _min_examples,
+ name1=name1,
+ name2=name2,
+ axis_descr=axis_descr,
+ desc="Return the minimum of the values for the requested axis.\n\n"
+ "If you want the *index* of the minimum, use ``idxmin``. This is"
+ "the equivalent of the ``numpy.ndarray`` method ``argmin``.",
+ func=nanops.nanmin,
+ see_also=_stat_func_see_also,
+ examples=_min_examples,
)
@classmethod
@@ -10947,8 +10947,16 @@ def _doc_parms(cls):
def _make_min_count_stat_function(
- cls, name, name1, name2, axis_descr, desc, f, see_also: str = "", examples: str = ""
-):
+ cls,
+ name: str,
+ name1: str,
+ name2: str,
+ axis_descr: str,
+ desc: str,
+ func: Callable,
+ see_also: str = "",
+ examples: str = "",
+) -> Callable:
@Substitution(
desc=desc,
name1=name1,
@@ -10983,8 +10991,8 @@ def stat_func(
name, axis=axis, level=level, skipna=skipna, min_count=min_count
)
return self._reduce(
- f,
- name,
+ func,
+ name=name,
axis=axis,
skipna=skipna,
numeric_only=numeric_only,
@@ -10995,8 +11003,16 @@ def stat_func(
def _make_stat_function(
- cls, name, name1, name2, axis_descr, desc, f, see_also: str = "", examples: str = ""
-):
+ cls,
+ name: str,
+ name1: str,
+ name2: str,
+ axis_descr: str,
+ desc: str,
+ func: Callable,
+ see_also: str = "",
+ examples: str = "",
+) -> Callable:
@Substitution(
desc=desc,
name1=name1,
@@ -11021,13 +11037,15 @@ def stat_func(
if level is not None:
return self._agg_by_level(name, axis=axis, level=level, skipna=skipna)
return self._reduce(
- f, name, axis=axis, skipna=skipna, numeric_only=numeric_only
+ func, name=name, axis=axis, skipna=skipna, numeric_only=numeric_only
)
return set_function_name(stat_func, name, cls)
-def _make_stat_function_ddof(cls, name, name1, name2, axis_descr, desc, f):
+def _make_stat_function_ddof(
+ cls, name: str, name1: str, name2: str, axis_descr: str, desc: str, func: Callable
+) -> Callable:
@Substitution(desc=desc, name1=name1, name2=name2, axis_descr=axis_descr)
@Appender(_num_ddof_doc)
def stat_func(
@@ -11043,7 +11061,7 @@ def stat_func(
name, axis=axis, level=level, skipna=skipna, ddof=ddof
)
return self._reduce(
- f, name, axis=axis, numeric_only=numeric_only, skipna=skipna, ddof=ddof
+ func, name, axis=axis, numeric_only=numeric_only, skipna=skipna, ddof=ddof
)
return set_function_name(stat_func, name, cls)
@@ -11051,17 +11069,17 @@ def stat_func(
def _make_cum_function(
cls,
- name,
- name1,
- name2,
- axis_descr,
- desc,
- accum_func,
- accum_func_name,
- mask_a,
- mask_b,
- examples,
-):
+ name: str,
+ name1: str,
+ name2: str,
+ axis_descr: str,
+ desc: str,
+ accum_func: Callable,
+ accum_func_name: str,
+ mask_a: float,
+ mask_b: float,
+ examples: str,
+) -> Callable:
@Substitution(
desc=desc,
name1=name1,
@@ -11145,8 +11163,17 @@ def na_accum_func(blk_values):
def _make_logical_function(
- cls, name, name1, name2, axis_descr, desc, f, see_also, examples, empty_value
-):
+ cls,
+ name: str,
+ name1: str,
+ name2: str,
+ axis_descr: str,
+ desc: str,
+ func: Callable,
+ see_also: str,
+ examples: str,
+ empty_value: bool,
+) -> Callable:
@Substitution(
desc=desc,
name1=name1,
@@ -11166,8 +11193,8 @@ def logical_func(self, axis=0, bool_only=None, skipna=True, level=None, **kwargs
)
return self._agg_by_level(name, axis=axis, level=level, skipna=skipna)
return self._reduce(
- f,
- name,
+ func,
+ name=name,
axis=axis,
skipna=skipna,
numeric_only=bool_only,
| Give calls to these funcs named parameters for better clarity, and add type annotations to the `_make_*_function` factories.
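A minimal, hypothetical sketch of the pattern (not pandas' actual signatures): a factory whose signature is annotated and whose call sites pass every argument by keyword, mirroring the diff:

```python
from typing import Callable, List


def make_stat_function(
    name: str, desc: str, func: Callable[[List[float]], float]
) -> Callable[[List[float]], float]:
    # Wrap `func` and attach the metadata, like pandas' _make_stat_function
    # attaches docstrings via Substitution/Appender.
    def stat_func(values: List[float]) -> float:
        return func(values)

    stat_func.__name__ = name
    stat_func.__doc__ = desc
    return stat_func


# Call site names every parameter, as the diff does for name1/name2/desc/func:
mean = make_stat_function(
    name="mean",
    desc="Return the mean of the values.",
    func=lambda vs: sum(vs) / len(vs),
)
```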
| https://api.github.com/repos/pandas-dev/pandas/pulls/32363 | 2020-02-29T08:57:49Z | 2020-03-04T08:19:31Z | 2020-03-04T08:19:31Z | 2020-03-04T08:53:31Z |
DOC: Fixed errors in pandas.DataFrame.asfreq PR07, RT02, RT03, SA04 | diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index e6c5ac9dbf733..1ec5391a2d00f 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -7485,6 +7485,7 @@ def asfreq(
Parameters
----------
freq : DateOffset or str
+ Frequency DateOffset or string.
method : {'backfill'/'bfill', 'pad'/'ffill'}, default None
Method to use for filling holes in reindexed Series (note this
does not fill NaNs that already were present):
@@ -7502,11 +7503,12 @@ def asfreq(
Returns
-------
- converted : same type as caller
+ Same type as caller
+ Object converted to the specified frequency.
See Also
--------
- reindex
+ reindex : Conform DataFrame to new index with optional filling logic.
Notes
-----
| - [X] closes https://github.com/pandanistas/pandanistas_sprint_ui2020/issues/19
- [ ] tests added / passed
- [X] passes `black pandas`
- [X] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
Output of `python scripts/validate_docstrings.py pandas.DataFrame.asfreq`:
```
################################################################################
##################### Docstring (pandas.DataFrame.asfreq) #####################
################################################################################
Convert TimeSeries to specified frequency.
Optionally provide filling method to pad/backfill missing values.
Returns the original data conformed to a new index with the specified
frequency. ``resample`` is more appropriate if an operation, such as
summarization, is necessary to represent the data at the new frequency.
Parameters
----------
freq : DateOffset or str
Frequency DateOffset or string.
method : {'backfill'/'bfill', 'pad'/'ffill'}, default None
Method to use for filling holes in reindexed Series (note this
does not fill NaNs that already were present):
* 'pad' / 'ffill': propagate last valid observation forward to next
valid
* 'backfill' / 'bfill': use NEXT valid observation to fill.
how : {'start', 'end'}, default end
For PeriodIndex only (see PeriodIndex.asfreq).
normalize : bool, default False
Whether to reset output index to midnight.
fill_value : scalar, optional
Value to use for missing values, applied during upsampling (note
this does not fill NaNs that already were present).
Returns
-------
converted
Same type as caller.
See Also
--------
reindex : Conform DataFrame to new index with optional filling logic.
Notes
-----
To learn more about the frequency strings, please see `this link
<https://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.html#offset-aliases>`__.
Examples
--------
Start by creating a series with 4 one minute timestamps.
>>> index = pd.date_range('1/1/2000', periods=4, freq='T')
>>> series = pd.Series([0.0, None, 2.0, 3.0], index=index)
>>> df = pd.DataFrame({'s':series})
>>> df
s
2000-01-01 00:00:00 0.0
2000-01-01 00:01:00 NaN
2000-01-01 00:02:00 2.0
2000-01-01 00:03:00 3.0
Upsample the series into 30 second bins.
>>> df.asfreq(freq='30S')
s
2000-01-01 00:00:00 0.0
2000-01-01 00:00:30 NaN
2000-01-01 00:01:00 NaN
2000-01-01 00:01:30 NaN
2000-01-01 00:02:00 2.0
2000-01-01 00:02:30 NaN
2000-01-01 00:03:00 3.0
Upsample again, providing a ``fill value``.
>>> df.asfreq(freq='30S', fill_value=9.0)
s
2000-01-01 00:00:00 0.0
2000-01-01 00:00:30 9.0
2000-01-01 00:01:00 NaN
2000-01-01 00:01:30 9.0
2000-01-01 00:02:00 2.0
2000-01-01 00:02:30 9.0
2000-01-01 00:03:00 3.0
Upsample again, providing a ``method``.
>>> df.asfreq(freq='30S', method='bfill')
s
2000-01-01 00:00:00 0.0
2000-01-01 00:00:30 NaN
2000-01-01 00:01:00 NaN
2000-01-01 00:01:30 2.0
2000-01-01 00:02:00 2.0
2000-01-01 00:02:30 3.0
2000-01-01 00:03:00 3.0
################################################################################
################################## Validation ##################################
################################################################################
``` | https://api.github.com/repos/pandas-dev/pandas/pulls/32362 | 2020-02-29T07:00:18Z | 2020-03-11T02:00:36Z | 2020-03-11T02:00:36Z | 2020-03-11T04:03:23Z |
Changed kind parameter from integer to int, Added example | diff --git a/pandas/core/arrays/sparse/array.py b/pandas/core/arrays/sparse/array.py
index 542cfd334b810..549606795f528 100644
--- a/pandas/core/arrays/sparse/array.py
+++ b/pandas/core/arrays/sparse/array.py
@@ -255,6 +255,16 @@ class SparseArray(PandasObject, ExtensionArray, ExtensionOpsMixin):
Methods
-------
None
+
+ Examples
+ --------
+ >>> from pandas.arrays import SparseArray
+ >>> arr = SparseArray([0, 0, 1, 2])
+ >>> arr
+ [0, 0, 1, 2]
+ Fill: 0
+ IntIndex
+ Indices: array([2, 3], dtype=int32)
"""
_pandas_ftype = "sparse"
| - [x] closes [Fix PR06 error in pandas.arrays.SparseArray](https://github.com/pandanistas/pandanistas_sprint_ui2020/issues/1)
- [ ] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
```
################################################################################
################################## Validation ##################################
################################################################################
3 Errors found:
Parameter "sparse_index" has no description
Parameter "index" has no description
See Also section not found
``` | https://api.github.com/repos/pandas-dev/pandas/pulls/32361 | 2020-02-29T06:52:17Z | 2020-02-29T17:43:46Z | 2020-02-29T17:43:46Z | 2020-02-29T17:43:54Z |
DOC: Update the pandas.DatetimeIndex docstring | diff --git a/pandas/core/indexes/datetimes.py b/pandas/core/indexes/datetimes.py
index c9fefd46e55c7..46824e0be6c28 100644
--- a/pandas/core/indexes/datetimes.py
+++ b/pandas/core/indexes/datetimes.py
@@ -78,21 +78,26 @@ def _new_DatetimeIndex(cls, d):
)
class DatetimeIndex(DatetimeTimedeltaMixin):
"""
- Immutable ndarray of datetime64 data, represented internally as int64, and
- which can be boxed to Timestamp objects that are subclasses of datetime and
- carry metadata such as frequency information.
+ Immutable ndarray-like of datetime64 data.
+
+ Represented internally as int64, and which can be boxed to Timestamp objects
+ that are subclasses of datetime and carry metadata.
Parameters
----------
data : array-like (1-dimensional), optional
Optional datetime-like data to construct index with.
- copy : bool
- Make a copy of input ndarray.
freq : str or pandas offset object, optional
One of pandas date offset strings or corresponding objects. The string
'infer' can be passed in order to set the frequency of the index as the
inferred frequency upon creation.
- tz : pytz.timezone or dateutil.tz.tzfile
+ tz : pytz.timezone or dateutil.tz.tzfile or datetime.tzinfo or str
+ Set the Timezone of the data.
+ normalize : bool, default False
+ Normalize start/end dates to midnight before generating date range.
+ closed : {'left', 'right'}, optional
+ Set whether to include `start` and `end` that are on the
+ boundary. The default includes boundary points on either end.
ambiguous : 'infer', bool-ndarray, 'NaT', default 'raise'
When clocks moved backward due to DST, ambiguous times may arise.
For example in Central European Time (UTC+01), when going from 03:00
@@ -107,12 +112,16 @@ class DatetimeIndex(DatetimeTimedeltaMixin):
times)
- 'NaT' will return NaT where there are ambiguous times
- 'raise' will raise an AmbiguousTimeError if there are ambiguous times.
- name : object
- Name to be stored in the index.
dayfirst : bool, default False
If True, parse dates in `data` with the day first order.
yearfirst : bool, default False
If True parse dates in `data` with the year first order.
+ dtype : numpy.dtype or DatetimeTZDtype or str, default None
+ Note that the only NumPy dtype allowed is ‘datetime64[ns]’.
+ copy : bool, default False
+ Make a copy of input ndarray.
+ name : label, default None
+ Name to be stored in the index.
Attributes
----------
| - [x] closes https://github.com/pandanistas/pandanistas_sprint_ui2020/issues/7
- [ ] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
Output of `python scripts/validate_docstrings.py pandas.DatetimeIndex`:
```
################################################################################
################################## Validation ##################################
################################################################################
1 Errors found:
No examples section found
```
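For reference, a minimal sketch exercising the parameters documented above (input values and names are made up for illustration):

```python
import pandas as pd

# `tz` accepts a str (among other types, per the updated docstring);
# naive inputs are localized to that timezone.
idx = pd.DatetimeIndex(["2020-01-01 10:00", "2020-01-02 10:00"], tz="UTC")

# `name` is the label stored on the index.
named = pd.DatetimeIndex(idx, name="when")
```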
| https://api.github.com/repos/pandas-dev/pandas/pulls/32360 | 2020-02-29T06:43:41Z | 2020-03-07T21:01:14Z | 2020-03-07T21:01:14Z | 2020-03-09T09:22:50Z |
Fixing RT02 pandas.Index.dropna and PR08 pandas.Index.fillna | diff --git a/pandas/core/indexes/base.py b/pandas/core/indexes/base.py
index 7e391e7a03fbb..5555b99a0ef88 100644
--- a/pandas/core/indexes/base.py
+++ b/pandas/core/indexes/base.py
@@ -2163,7 +2163,7 @@ def dropna(self, how="any"):
Returns
-------
- valid : Index
+ Index
"""
if how not in ("any", "all"):
raise ValueError(f"invalid how option: {how}")
| - [x] closes https://github.com/pandanistas/pandanistas_sprint_ui2020/issues/17
- [ ] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
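For context, a small sketch of the return type named in the fix (data made up):

```python
import numpy as np
import pandas as pd

idx = pd.Index([1.0, np.nan, 3.0])

# dropna(how="any") drops missing entries and returns an Index.
result = idx.dropna()
```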
| https://api.github.com/repos/pandas-dev/pandas/pulls/32359 | 2020-02-29T06:19:05Z | 2020-03-06T17:00:10Z | 2020-03-06T17:00:10Z | 2020-03-06T17:00:16Z |
DOC: Fixed PR09 error in pandas.testing.assert_series_equal | diff --git a/pandas/_testing.py b/pandas/_testing.py
index a70f75d6cfaf4..fce06e216dfd7 100644
--- a/pandas/_testing.py
+++ b/pandas/_testing.py
@@ -1106,7 +1106,7 @@ def assert_series_equal(
check_categorical : bool, default True
Whether to compare internal Categorical exactly.
check_category_order : bool, default True
- Whether to compare category order of internal Categoricals
+ Whether to compare category order of internal Categoricals.
.. versionadded:: 1.0.2
obj : str, default 'Series'
| - [X] closes https://github.com/pandanistas/pandanistas_sprint_ui2020/issues/15
- [ ] tests added / passed
- [X] passes `black pandas`
- [X] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
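An illustrative sketch of the `check_category_order` flag whose description is touched here (example data made up):

```python
import pandas as pd
import pandas.testing as tm

left = pd.Series(pd.Categorical(["a", "b"], categories=["a", "b"]))
right = pd.Series(pd.Categorical(["a", "b"], categories=["b", "a"]))

# Same values, categories listed in a different order: the comparison
# only passes when category order is not checked.
tm.assert_series_equal(left, right, check_category_order=False)

try:
    tm.assert_series_equal(left, right)  # order is checked by default
    raised = False
except AssertionError:
    raised = True
```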
Output of `python scripts/validate_docstrings.py pandas.HDFStore.put`:
```
################################################################################
####################### Docstring (pandas.HDFStore.put) #######################
################################################################################
Store object in HDFStore.
Parameters
----------
key : str
value : {Series, DataFrame}
format : 'fixed(f)|table(t)', default is 'fixed'
fixed(f) : Fixed format
Fast writing/reading. Not-appendable, nor searchable.
table(t) : Table format
Write as a PyTables Table structure which may perform
worse but allow more flexible operations like searching
/ selecting subsets of the data.
append : bool, default False
This will force Table format, append the input data to the
existing.
data_columns : list, default None
List of columns to create as data columns, or True to
use all columns. See `here
<https://pandas.pydata.org/pandas-docs/stable/user_guide/io.html#query-via-data-columns>`__.
encoding : str, default None
Provide an encoding for strings.
dropna : bool, default False, do not write an ALL nan row to
The store settable by the option 'io.hdf.dropna_table'.
################################################################################
################################## Validation ##################################
################################################################################
8 Errors found:
No extended summary found
Parameters {'min_itemsize', 'index', 'complevel', 'errors', 'complib', 'append', 'nan_rep'} not documented
Unknown parameters {'dropna ', 'append '}
Parameter "key" has no description
Parameter "value" has no description
Parameter "format" description should start with a capital letter
See Also section not found
No examples section found
``` | https://api.github.com/repos/pandas-dev/pandas/pulls/32358 | 2020-02-29T05:54:48Z | 2020-02-29T17:55:18Z | 2020-02-29T17:55:18Z | 2020-03-04T08:55:38Z |
DOC: Fixed ES01, PR07, SA04 error in pandas.core.groupby.DataFrameGroupBy.shift | diff --git a/pandas/core/groupby/groupby.py b/pandas/core/groupby/groupby.py
index 48c00140461b5..6362f11a3e032 100644
--- a/pandas/core/groupby/groupby.py
+++ b/pandas/core/groupby/groupby.py
@@ -2312,11 +2312,12 @@ def _get_cythonized_result(
return self._wrap_transformed_output(output)
@Substitution(name="groupby")
- @Appender(_common_see_also)
def shift(self, periods=1, freq=None, axis=0, fill_value=None):
"""
Shift each group by periods observations.
+ If freq is passed, the index will be increased using the periods and the freq.
+
Parameters
----------
periods : int, default 1
@@ -2324,7 +2325,9 @@ def shift(self, periods=1, freq=None, axis=0, fill_value=None):
freq : str, optional
Frequency string.
axis : axis to shift, default 0
+ Shift direction.
fill_value : optional
+ The scalar value to use for newly introduced missing values.
.. versionadded:: 0.24.0
@@ -2332,6 +2335,12 @@ def shift(self, periods=1, freq=None, axis=0, fill_value=None):
-------
Series or DataFrame
Object shifted within each group.
+
+ See Also
+ --------
+ Index.shift : Shift values of Index.
+ tshift : Shift the time index, using the index's frequency
+ if available.
"""
if freq is not None or axis != 0 or not isna(fill_value):
return self.apply(lambda x: x.shift(periods, freq, axis, fill_value))
| - [x] closes: https://github.com/pandanistas/pandanistas_sprint_ui2020/issues/14
- [ ] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
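A quick sketch of the behavior the docstring describes (column names made up):

```python
import pandas as pd

df = pd.DataFrame({"g": ["a", "a", "b", "b"], "v": [1, 2, 3, 4]})

# Each group is shifted independently; the first row of every group
# gets `fill_value` (NaN when fill_value is not given).
shifted = df.groupby("g")["v"].shift(1, fill_value=0)
```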
Output of `python scripts/validate_docstrings.py pandas.core.groupby.DataFrameGroupBy.shift`:
```
################################################################################
################################## Validation ##################################
################################################################################
1 Errors found:
No examples section found
``` | https://api.github.com/repos/pandas-dev/pandas/pulls/32356 | 2020-02-29T05:48:55Z | 2020-03-02T22:52:15Z | 2020-03-02T22:52:15Z | 2020-03-03T14:26:56Z |
Fix PR08, RT02, RT03, and SA01 on pandas.Index.fillna | diff --git a/pandas/core/indexes/base.py b/pandas/core/indexes/base.py
index aa22527d8c2d7..d2887c6652635 100644
--- a/pandas/core/indexes/base.py
+++ b/pandas/core/indexes/base.py
@@ -2126,13 +2126,18 @@ def fillna(self, value=None, downcast=None):
Scalar value to use to fill holes (e.g. 0).
This value cannot be a list-likes.
downcast : dict, default is None
- a dict of item->dtype of what to downcast if possible,
+ A dict of item->dtype of what to downcast if possible,
or the string 'infer' which will try to downcast to an appropriate
equal type (e.g. float64 to int64 if possible).
Returns
-------
- filled : Index
+ Index
+
+ See Also
+ --------
+ DataFrame.fillna : Fill NaN values of a DataFrame.
+ Series.fillna : Fill NaN Values of a Series.
"""
self._assert_can_do_op(value)
if self.hasnans:
- [x] closes https://github.com/pandanistas/pandanistas_sprint_ui2020/issues/5
- [ ] tests added / passed
- [x] passes `black pandas`
- [ ] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
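For context, a small sketch of `Index.fillna` as documented (data made up):

```python
import numpy as np
import pandas as pd

idx = pd.Index([1.0, np.nan, 3.0])

# A scalar `value` fills the holes; the result is again an Index.
filled = idx.fillna(0.0)
```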
| https://api.github.com/repos/pandas-dev/pandas/pulls/32355 | 2020-02-29T05:15:36Z | 2020-03-04T15:41:32Z | 2020-03-04T15:41:32Z | 2020-03-04T15:41:38Z |
DOC : fix errors docstrings pandas.to_numeric | diff --git a/pandas/core/tools/numeric.py b/pandas/core/tools/numeric.py
index 4939cbfc9cc96..8075c69a614d5 100644
--- a/pandas/core/tools/numeric.py
+++ b/pandas/core/tools/numeric.py
@@ -35,6 +35,7 @@ def to_numeric(arg, errors="raise", downcast=None):
Parameters
----------
arg : scalar, list, tuple, 1-d array, or Series
+ Argument to be converted.
errors : {'ignore', 'raise', 'coerce'}, default 'raise'
- If 'raise', then invalid parsing will raise an exception.
- If 'coerce', then invalid parsing will be set as NaN.
@@ -61,7 +62,8 @@ def to_numeric(arg, errors="raise", downcast=None):
Returns
-------
- ret : numeric if parsing succeeded.
+ ret
+ Numeric if parsing succeeded.
Return type depends on input. Series if Series, otherwise ndarray.
See Also
| - [ ] closes #xxxx
- [ ] tests added / passed
- [ ] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
Added docstring information about the `arg` description, the `downcast` type `'int'`, and the return type.
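A brief sketch of the documented behavior (inputs made up):

```python
import pandas as pd

# `arg` may be a scalar, list, tuple, 1-d array, or Series.
s = pd.Series(["1", "2", "broken"])

# errors="coerce" turns unparseable entries into NaN.
coerced = pd.to_numeric(s, errors="coerce")

# downcast="integer" returns the smallest integer dtype that fits.
small = pd.to_numeric(["1", "2"], downcast="integer")
```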
| https://api.github.com/repos/pandas-dev/pandas/pulls/32354 | 2020-02-29T05:14:05Z | 2020-03-06T20:48:15Z | 2020-03-06T20:48:15Z | 2020-03-08T10:55:55Z |
DOC: Fix SS06 formatting errors in merge_asof docstrings | diff --git a/pandas/core/reshape/merge.py b/pandas/core/reshape/merge.py
index 49ac1b6cfa52b..faac472b3fc31 100644
--- a/pandas/core/reshape/merge.py
+++ b/pandas/core/reshape/merge.py
@@ -320,10 +320,10 @@ def merge_asof(
direction: str = "backward",
) -> "DataFrame":
"""
- Perform an asof merge. This is similar to a left-join except that we
- match on nearest key rather than equal keys.
+ Perform an asof merge.
- Both DataFrames must be sorted by the key.
+ This is similar to a left-join except that we match on nearest
+ key rather than equal keys. Both DataFrames must be sorted by the key.
For each row in the left DataFrame:
| Errors fixed (from the list in #29254):
pandas.merge_asof
Validated the fixes with `python scripts/validate_docstrings.py`
Referencing issue: #29254
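A minimal sketch of the nearest-key matching described above (frames made up; both sorted on the key):

```python
import pandas as pd

left = pd.DataFrame({"t": [2, 5], "lv": ["x", "y"]})
right = pd.DataFrame({"t": [1, 3, 6], "rv": [10, 30, 60]})

# Backward (default) direction: each left key matches the nearest
# right key less than or equal to it, not an exact equal key.
out = pd.merge_asof(left, right, on="t")
```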
Fix : pandanistas/pandanistas_sprint_ui2020#11 | https://api.github.com/repos/pandas-dev/pandas/pulls/32351 | 2020-02-29T04:58:31Z | 2020-02-29T17:27:50Z | 2020-02-29T17:27:50Z | 2020-03-04T04:54:52Z |
Implement BlockManager.iset | diff --git a/pandas/core/internals/managers.py b/pandas/core/internals/managers.py
index 9c90b20fc0f16..bb3254446bd3b 100644
--- a/pandas/core/internals/managers.py
+++ b/pandas/core/internals/managers.py
@@ -8,7 +8,7 @@
import numpy as np
from pandas._libs import Timedelta, Timestamp, internals as libinternals, lib
-from pandas._typing import DtypeObj
+from pandas._typing import DtypeObj, Label
from pandas.util._validators import validate_bool_kwarg
from pandas.core.dtypes.cast import (
@@ -996,7 +996,25 @@ def delete(self, item):
)
self._rebuild_blknos_and_blklocs()
- def set(self, item, value):
+ def set(self, item: Label, value):
+ """
+ Set new item in-place.
+
+ Notes
+ -----
+ Does not consolidate.
+ Adds new Block if not contained in the current items Index.
+ """
+ try:
+ loc = self.items.get_loc(item)
+ except KeyError:
+ # This item wasn't present, just insert at end
+ self.insert(len(self.items), item, value)
+ return
+
+ self.iset(loc, value)
+
+ def iset(self, loc: Union[int, slice, np.ndarray], value):
"""
Set new item in-place. Does not consolidate. Adds new Block if not
contained in the current set of items
@@ -1029,13 +1047,6 @@ def value_getitem(placement):
"Shape of new values must be compatible with manager shape"
)
- try:
- loc = self.items.get_loc(item)
- except KeyError:
- # This item wasn't present, just insert at end
- self.insert(len(self.items), item, value)
- return
-
if isinstance(loc, int):
loc = [loc]
@@ -1081,7 +1092,7 @@ def value_getitem(placement):
unfit_mgr_locs = np.concatenate(unfit_mgr_locs)
unfit_count = len(unfit_mgr_locs)
- new_blocks = []
+ new_blocks: List[Block] = []
if value_is_extension_type:
# This code (ab-)uses the fact that sparse blocks contain only
# one item.
@@ -1140,6 +1151,9 @@ def insert(self, loc: int, item, value, allow_duplicates: bool = False):
# insert to the axis; this could possibly raise a TypeError
new_axis = self.items.insert(loc, item)
+ if value.ndim == self.ndim - 1 and not is_extension_array_dtype(value):
+ value = _safe_reshape(value, (1,) + value.shape)
+
block = make_block(values=value, ndim=self.ndim, placement=slice(loc, loc + 1))
for blkno, count in _fast_count_smallints(self._blknos[loc:]):
| Preliminary to the PR that fixes `setitem_with_indexer` (#22036, #15686) | https://api.github.com/repos/pandas-dev/pandas/pulls/32350 | 2020-02-29T04:04:58Z | 2020-03-03T15:29:48Z | 2020-03-03T15:29:48Z | 2020-03-03T15:55:54Z |
REF: Remove BlockManager.rename_axis | diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index e6c5ac9dbf733..5a220f7de9895 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -967,7 +967,6 @@ def rename(
continue
ax = self._get_axis(axis_no)
- baxis = self._get_block_manager_axis(axis_no)
f = com.get_rename_function(replacements)
if level is not None:
@@ -984,9 +983,8 @@ def rename(
]
raise KeyError(f"{missing_labels} not found in axis")
- result._data = result._data.rename_axis(
- f, axis=baxis, copy=copy, level=level
- )
+ new_index = ax._transform_index(f, level)
+ result.set_axis(new_index, axis=axis_no, inplace=True)
result._clear_item_cache()
if inplace:
diff --git a/pandas/core/indexes/base.py b/pandas/core/indexes/base.py
index 3eab757311ccb..5c06b1c17f0ab 100644
--- a/pandas/core/indexes/base.py
+++ b/pandas/core/indexes/base.py
@@ -4738,6 +4738,27 @@ def map(self, mapper, na_action=None):
return Index(new_values, **attributes)
+ # TODO: De-duplicate with map, xref GH#32349
+ def _transform_index(self, func, level=None) -> "Index":
+ """
+ Apply function to all values found in index.
+
+ This includes transforming multiindex entries separately.
+ Only apply function to one level of the MultiIndex if level is specified.
+ """
+ if isinstance(self, ABCMultiIndex):
+ if level is not None:
+ items = [
+ tuple(func(y) if i == level else y for i, y in enumerate(x))
+ for x in self
+ ]
+ else:
+ items = [tuple(func(y) for y in x) for x in self]
+ return type(self).from_tuples(items, names=self.names)
+ else:
+ items = [func(x) for x in self]
+ return Index(items, name=self.name, tupleize_cols=False)
+
def isin(self, values, level=None):
"""
Return a boolean array where the index values are in `values`.
diff --git a/pandas/core/internals/__init__.py b/pandas/core/internals/__init__.py
index 37a3405554745..e70652b81c42f 100644
--- a/pandas/core/internals/__init__.py
+++ b/pandas/core/internals/__init__.py
@@ -17,7 +17,6 @@
from pandas.core.internals.managers import (
BlockManager,
SingleBlockManager,
- _transform_index,
concatenate_block_managers,
create_block_manager_from_arrays,
create_block_manager_from_blocks,
@@ -40,7 +39,6 @@
"_block_shape",
"BlockManager",
"SingleBlockManager",
- "_transform_index",
"concatenate_block_managers",
"create_block_manager_from_arrays",
"create_block_manager_from_blocks",
diff --git a/pandas/core/internals/managers.py b/pandas/core/internals/managers.py
index 98afc5ac3a0e3..14841e354af4d 100644
--- a/pandas/core/internals/managers.py
+++ b/pandas/core/internals/managers.py
@@ -34,7 +34,7 @@
from pandas.core.arrays.sparse import SparseDtype
from pandas.core.base import PandasObject
from pandas.core.indexers import maybe_convert_indices
-from pandas.core.indexes.api import Index, MultiIndex, ensure_index
+from pandas.core.indexes.api import Index, ensure_index
from pandas.core.internals.blocks import (
Block,
CategoricalBlock,
@@ -216,23 +216,6 @@ def set_axis(self, axis: int, new_labels: Index) -> None:
self.axes[axis] = new_labels
- def rename_axis(
- self, mapper, axis: int, copy: bool = True, level=None
- ) -> "BlockManager":
- """
- Rename one of axes.
-
- Parameters
- ----------
- mapper : unary callable
- axis : int
- copy : bool, default True
- level : int or None, default None
- """
- obj = self.copy(deep=copy)
- obj.set_axis(axis, _transform_index(self.axes[axis], mapper, level))
- return obj
-
@property
def _is_single_block(self) -> bool:
if self.ndim == 1:
@@ -1966,28 +1949,6 @@ def _compare_or_regex_search(a, b, regex=False):
return result
-def _transform_index(index, func, level=None):
- """
- Apply function to all values found in index.
-
- This includes transforming multiindex entries separately.
- Only apply function to one level of the MultiIndex if level is specified.
-
- """
- if isinstance(index, MultiIndex):
- if level is not None:
- items = [
- tuple(func(y) if i == level else y for i, y in enumerate(x))
- for x in index
- ]
- else:
- items = [tuple(func(y) for y in x) for x in index]
- return MultiIndex.from_tuples(items, names=index.names)
- else:
- items = [func(x) for x in index]
- return Index(items, name=index.name, tupleize_cols=False)
-
-
def _fast_count_smallints(arr):
"""Faster version of set(arr) for sequences of small numbers."""
counts = np.bincount(arr.astype(np.int_))
diff --git a/pandas/core/reshape/merge.py b/pandas/core/reshape/merge.py
index c301d6e7c7155..daaa5138f7654 100644
--- a/pandas/core/reshape/merge.py
+++ b/pandas/core/reshape/merge.py
@@ -46,7 +46,7 @@
from pandas.core.arrays.categorical import _recode_for_categories
import pandas.core.common as com
from pandas.core.frame import _merge_doc
-from pandas.core.internals import _transform_index, concatenate_block_managers
+from pandas.core.internals import concatenate_block_managers
from pandas.core.sorting import is_int64_overflow_possible
if TYPE_CHECKING:
@@ -2022,4 +2022,4 @@ def renamer(x, suffix):
lrenamer = partial(renamer, suffix=lsuffix)
rrenamer = partial(renamer, suffix=rsuffix)
- return (_transform_index(left, lrenamer), _transform_index(right, rrenamer))
+ return (left._transform_index(lrenamer), right._transform_index(rrenamer))
| Better to do it using NDFrame methods
cc @toobaz I know you're on board for getting index/axis stuff out of BlockManager. | https://api.github.com/repos/pandas-dev/pandas/pulls/32349 | 2020-02-29T03:33:03Z | 2020-03-11T02:40:14Z | 2020-03-11T02:40:14Z | 2020-03-11T02:41:50Z |
REF: avoid using internals methods for to_timestamp, to_period | diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index 61715397e8e0b..8fe3a32fe3d39 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -94,10 +94,8 @@
)
from pandas.core.dtypes.generic import (
ABCDataFrame,
- ABCDatetimeIndex,
ABCIndexClass,
ABCMultiIndex,
- ABCPeriodIndex,
ABCSeries,
)
from pandas.core.dtypes.missing import isna, notna
@@ -8245,7 +8243,9 @@ def quantile(self, q=0.5, axis=0, numeric_only=True, interpolation="linear"):
return result
- def to_timestamp(self, freq=None, how="start", axis=0, copy=True) -> "DataFrame":
+ def to_timestamp(
+ self, freq=None, how: str = "start", axis: Axis = 0, copy: bool = True
+ ) -> "DataFrame":
"""
Cast to DatetimeIndex of timestamps, at *beginning* of period.
@@ -8265,23 +8265,16 @@ def to_timestamp(self, freq=None, how="start", axis=0, copy=True) -> "DataFrame"
-------
DataFrame with DatetimeIndex
"""
- new_data = self._data
- if copy:
- new_data = new_data.copy()
+ new_obj = self.copy(deep=copy)
- axis = self._get_axis_number(axis)
- if axis == 0:
- assert isinstance(self.index, (ABCDatetimeIndex, ABCPeriodIndex))
- new_data.set_axis(1, self.index.to_timestamp(freq=freq, how=how))
- elif axis == 1:
- assert isinstance(self.columns, (ABCDatetimeIndex, ABCPeriodIndex))
- new_data.set_axis(0, self.columns.to_timestamp(freq=freq, how=how))
- else: # pragma: no cover
- raise AssertionError(f"Axis must be 0 or 1. Got {axis}")
+ axis_name = self._get_axis_name(axis)
+ old_ax = getattr(self, axis_name)
+ new_ax = old_ax.to_timestamp(freq=freq, how=how)
- return self._constructor(new_data)
+ setattr(new_obj, axis_name, new_ax)
+ return new_obj
- def to_period(self, freq=None, axis=0, copy=True) -> "DataFrame":
+ def to_period(self, freq=None, axis: Axis = 0, copy: bool = True) -> "DataFrame":
"""
Convert DataFrame from DatetimeIndex to PeriodIndex.
@@ -8299,23 +8292,16 @@ def to_period(self, freq=None, axis=0, copy=True) -> "DataFrame":
Returns
-------
- TimeSeries with PeriodIndex
+ DataFrame with PeriodIndex
"""
- new_data = self._data
- if copy:
- new_data = new_data.copy()
+ new_obj = self.copy(deep=copy)
- axis = self._get_axis_number(axis)
- if axis == 0:
- assert isinstance(self.index, ABCDatetimeIndex)
- new_data.set_axis(1, self.index.to_period(freq=freq))
- elif axis == 1:
- assert isinstance(self.columns, ABCDatetimeIndex)
- new_data.set_axis(0, self.columns.to_period(freq=freq))
- else: # pragma: no cover
- raise AssertionError(f"Axis must be 0 or 1. Got {axis}")
+ axis_name = self._get_axis_name(axis)
+ old_ax = getattr(self, axis_name)
+ new_ax = old_ax.to_period(freq=freq)
- return self._constructor(new_data)
+ setattr(new_obj, axis_name, new_ax)
+ return new_obj
def isin(self, values) -> "DataFrame":
"""
| https://api.github.com/repos/pandas-dev/pandas/pulls/32347 | 2020-02-29T03:26:26Z | 2020-03-04T01:02:48Z | 2020-03-04T01:02:48Z | 2020-03-04T01:04:39Z | |
TYP: Update type naming in formatter | diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index f304fadbab871..f515b57e24cfa 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -758,8 +758,8 @@ def to_string(
header: Union[bool, Sequence[str]] = True,
index: bool = True,
na_rep: str = "NaN",
- formatters: Optional[fmt.formatters_type] = None,
- float_format: Optional[fmt.float_format_type] = None,
+ formatters: Optional[fmt.FormattersType] = None,
+ float_format: Optional[fmt.FloatFormatType] = None,
sparsify: Optional[bool] = None,
index_names: bool = True,
justify: Optional[str] = None,
diff --git a/pandas/io/formats/format.py b/pandas/io/formats/format.py
index b5ddd15c1312a..2a528781f8c93 100644
--- a/pandas/io/formats/format.py
+++ b/pandas/io/formats/format.py
@@ -81,10 +81,10 @@
if TYPE_CHECKING:
from pandas import Series, DataFrame, Categorical
-formatters_type = Union[
+FormattersType = Union[
List[Callable], Tuple[Callable, ...], Mapping[Union[str, int], Callable]
]
-float_format_type = Union[str, Callable, "EngFormatter"]
+FloatFormatType = Union[str, Callable, "EngFormatter"]
common_docstring = """
Parameters
@@ -455,7 +455,7 @@ class TableFormatter:
show_dimensions: Union[bool, str]
is_truncated: bool
- formatters: formatters_type
+ formatters: FormattersType
columns: Index
@property
@@ -548,9 +548,9 @@ def __init__(
header: Union[bool, Sequence[str]] = True,
index: bool = True,
na_rep: str = "NaN",
- formatters: Optional[formatters_type] = None,
+ formatters: Optional[FormattersType] = None,
justify: Optional[str] = None,
- float_format: Optional[float_format_type] = None,
+ float_format: Optional[FloatFormatType] = None,
sparsify: Optional[bool] = None,
index_names: bool = True,
line_width: Optional[int] = None,
@@ -1089,7 +1089,7 @@ def _get_column_name_list(self) -> List[str]:
def format_array(
values: Any,
formatter: Optional[Callable],
- float_format: Optional[float_format_type] = None,
+ float_format: Optional[FloatFormatType] = None,
na_rep: str = "NaN",
digits: Optional[int] = None,
space: Optional[Union[str, int]] = None,
@@ -1171,7 +1171,7 @@ def __init__(
formatter: Optional[Callable] = None,
na_rep: str = "NaN",
space: Union[str, int] = 12,
- float_format: Optional[float_format_type] = None,
+ float_format: Optional[FloatFormatType] = None,
justify: str = "right",
decimal: str = ".",
quoting: Optional[int] = None,
@@ -1278,7 +1278,7 @@ def __init__(self, *args, **kwargs):
def _value_formatter(
self,
- float_format: Optional[float_format_type] = None,
+ float_format: Optional[FloatFormatType] = None,
threshold: Optional[Union[float, int]] = None,
) -> Callable:
"""Returns a function to be applied on each value to format it"""
@@ -1372,7 +1372,7 @@ def format_values_with(float_format):
# There is a special default string when we are fixed-width
# The default is otherwise to use str instead of a formatting string
- float_format: Optional[float_format_type]
+ float_format: Optional[FloatFormatType]
if self.float_format is None:
if self.fixed_width:
float_format = partial(
| - [x] part of #26792, #28480
- [x] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/32345 | 2020-02-29T02:33:58Z | 2020-02-29T16:16:32Z | 2020-02-29T16:16:32Z | 2020-02-29T16:16:44Z |
CLN: setitem_with_indexer cleanups | diff --git a/pandas/core/indexing.py b/pandas/core/indexing.py
index 3ab180bafd156..35e61ab6a59c9 100755
--- a/pandas/core/indexing.py
+++ b/pandas/core/indexing.py
@@ -1633,14 +1633,12 @@ def _setitem_with_indexer(self, indexer, value):
info_idx = [info_idx]
labels = item_labels[info_idx]
+ plane_indexer = indexer[:1]
+ lplane_indexer = length_of_indexer(plane_indexer[0], self.obj.index)
+ # lplane_indexer gives the expected length of obj[indexer[0]]
+
if len(labels) == 1:
# We can operate on a single column
- item = labels[0]
- idx = indexer[0]
-
- plane_indexer = tuple([idx])
- lplane_indexer = length_of_indexer(plane_indexer[0], self.obj.index)
- # lplane_indexer gives the expected length of obj[idx]
# require that we are setting the right number of values that
# we are indexing
@@ -1652,11 +1650,6 @@ def _setitem_with_indexer(self, indexer, value):
"length than the value"
)
- # non-mi
- else:
- plane_indexer = indexer[:1]
- lplane_indexer = length_of_indexer(plane_indexer[0], self.obj.index)
-
def setter(item, v):
ser = self.obj[item]
pi = plane_indexer[0] if lplane_indexer == 1 else plane_indexer
@@ -1718,18 +1711,23 @@ def setter(item, v):
for i, item in enumerate(labels):
- # setting with a list, recoerces
+ # setting with a list, re-coerces
setter(item, value[:, i].tolist())
- # we have an equal len list/ndarray
- elif _can_do_equal_len(
- labels, value, plane_indexer, lplane_indexer, self.obj
+ elif (
+ len(labels) == 1
+ and lplane_indexer == len(value)
+ and not is_scalar(plane_indexer[0])
):
+ # we have an equal len list/ndarray
setter(labels[0], value)
- # per label values
- else:
+ elif lplane_indexer == 0 and len(value) == len(self.obj.index):
+ # We get here in one case via .loc with an all-False mask
+ pass
+ else:
+ # per-label values
if len(labels) != len(value):
raise ValueError(
"Must have equal len keys and value "
@@ -1746,7 +1744,6 @@ def setter(item, v):
else:
if isinstance(indexer, tuple):
- indexer = maybe_convert_ix(*indexer)
# if we are setting on the info axis ONLY
# set using those methods to avoid block-splitting
@@ -1764,6 +1761,8 @@ def setter(item, v):
self.obj[item_labels[indexer[info_axis]]] = value
return
+ indexer = maybe_convert_ix(*indexer)
+
if isinstance(value, (ABCSeries, dict)):
# TODO(EA): ExtensionBlock.setitem this causes issues with
# setting for extensionarrays that store dicts. Need to decide
@@ -2277,26 +2276,3 @@ def _maybe_numeric_slice(df, slice_, include_bool=False):
dtypes.append(bool)
slice_ = IndexSlice[:, df.select_dtypes(include=dtypes).columns]
return slice_
-
-
-def _can_do_equal_len(labels, value, plane_indexer, lplane_indexer, obj) -> bool:
- """
- Returns
- -------
- bool
- True if we have an equal len settable.
- """
- if not len(labels) == 1 or not np.iterable(value) or is_scalar(plane_indexer[0]):
- return False
-
- item = labels[0]
- index = obj[item].index
-
- values_len = len(value)
- # equal len list/ndarray
- if len(index) == values_len:
- return True
- elif lplane_indexer == values_len:
- return True
-
- return False
| working on fixing some significant bugs in setitem_with_indexer, this breaks off some easier cleanups | https://api.github.com/repos/pandas-dev/pandas/pulls/32341 | 2020-02-28T23:37:52Z | 2020-03-03T02:02:44Z | 2020-03-03T02:02:44Z | 2020-03-03T02:07:33Z |
BUG: None / Timedelta incorrectly returning NaT | diff --git a/doc/source/whatsnew/v1.1.0.rst b/doc/source/whatsnew/v1.1.0.rst
index 0f18a1fd81815..18123efe76b1d 100644
--- a/doc/source/whatsnew/v1.1.0.rst
+++ b/doc/source/whatsnew/v1.1.0.rst
@@ -213,6 +213,7 @@ Timedelta
^^^^^^^^^
- Bug in constructing a :class:`Timedelta` with a high precision integer that would round the :class:`Timedelta` components (:issue:`31354`)
+- Bug in dividing ``np.nan`` or ``None`` by :class:`Timedelta` incorrectly returning ``NaT`` (:issue:`31869`)
-
Timezones
diff --git a/pandas/_libs/tslibs/timedeltas.pyx b/pandas/_libs/tslibs/timedeltas.pyx
index 66660c5f641fd..298028227e18b 100644
--- a/pandas/_libs/tslibs/timedeltas.pyx
+++ b/pandas/_libs/tslibs/timedeltas.pyx
@@ -1407,7 +1407,14 @@ class Timedelta(_Timedelta):
# convert to Timedelta below
pass
+ elif util.is_nan(other):
+ # i.e. np.nan or np.float64("NaN")
+ raise TypeError("Cannot divide float by Timedelta")
+
elif hasattr(other, 'dtype'):
+ if other.dtype.kind == "O":
+ # GH#31869
+ return np.array([x / self for x in other])
return other / self.to_timedelta64()
elif not _validate_ops_compat(other):
@@ -1415,7 +1422,8 @@ class Timedelta(_Timedelta):
other = Timedelta(other)
if other is NaT:
- return NaT
+ # In this context we treat NaT as timedelta-like
+ return np.nan
return float(other.value) / self.value
def __floordiv__(self, other):
diff --git a/pandas/tests/scalar/timedelta/test_arithmetic.py b/pandas/tests/scalar/timedelta/test_arithmetic.py
index 230a14aeec60a..ea02a76275443 100644
--- a/pandas/tests/scalar/timedelta/test_arithmetic.py
+++ b/pandas/tests/scalar/timedelta/test_arithmetic.py
@@ -412,6 +412,46 @@ def test_td_rdiv_timedeltalike_scalar(self):
assert np.timedelta64(60, "h") / td == 0.25
+ def test_td_rdiv_na_scalar(self):
+ # GH#31869 None gets cast to NaT
+ td = Timedelta(10, unit="d")
+
+ result = NaT / td
+ assert np.isnan(result)
+
+ result = None / td
+ assert np.isnan(result)
+
+ result = np.timedelta64("NaT") / td
+ assert np.isnan(result)
+
+ with pytest.raises(TypeError, match="cannot use operands with types dtype"):
+ np.datetime64("NaT") / td
+
+ with pytest.raises(TypeError, match="Cannot divide float by Timedelta"):
+ np.nan / td
+
+ def test_td_rdiv_ndarray(self):
+ td = Timedelta(10, unit="d")
+
+ arr = np.array([td], dtype=object)
+ result = arr / td
+ expected = np.array([1], dtype=np.float64)
+ tm.assert_numpy_array_equal(result, expected)
+
+ arr = np.array([None])
+ result = arr / td
+ expected = np.array([np.nan])
+ tm.assert_numpy_array_equal(result, expected)
+
+ arr = np.array([np.nan], dtype=object)
+ with pytest.raises(TypeError, match="Cannot divide float by Timedelta"):
+ arr / td
+
+ arr = np.array([np.nan], dtype=np.float64)
+ with pytest.raises(TypeError, match="cannot use operands with types dtype"):
+ arr / td
+
# ---------------------------------------------------------------
# Timedelta.__floordiv__
| - [ ] closes #xxxx
- [x] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] whatsnew entry
xref #31869; arguable whether this closes that. | https://api.github.com/repos/pandas-dev/pandas/pulls/32340 | 2020-02-28T22:22:06Z | 2020-03-03T02:05:29Z | 2020-03-03T02:05:29Z | 2021-11-20T23:21:50Z |
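The behavior the diff above describes can be sketched against the public API like this (a minimal illustration; it assumes a pandas version that includes this change):

```python
import numpy as np
import pandas as pd

td = pd.Timedelta(10, unit="d")

# NaT (and None, which gets cast to NaT) is treated as timedelta-like,
# so dividing it by a Timedelta yields a float NaN.
assert np.isnan(pd.NaT / td)
assert np.isnan(None / td)

# A float NaN numerator is rejected outright (GH#31869).
try:
    np.nan / td
    raised = False
except TypeError:
    raised = True
assert raised
```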
TST/REF: move tools test files | diff --git a/pandas/tests/indexes/datetimes/test_tools.py b/pandas/tests/tools/test_to_datetime.py
similarity index 100%
rename from pandas/tests/indexes/datetimes/test_tools.py
rename to pandas/tests/tools/test_to_datetime.py
diff --git a/pandas/tests/tools/test_numeric.py b/pandas/tests/tools/test_to_numeric.py
similarity index 100%
rename from pandas/tests/tools/test_numeric.py
rename to pandas/tests/tools/test_to_numeric.py
diff --git a/pandas/tests/indexes/timedeltas/test_tools.py b/pandas/tests/tools/test_to_timedelta.py
similarity index 100%
rename from pandas/tests/indexes/timedeltas/test_tools.py
rename to pandas/tests/tools/test_to_timedelta.py
diff --git a/setup.cfg b/setup.cfg
index 61d5b1030a500..bbd8489622005 100644
--- a/setup.cfg
+++ b/setup.cfg
@@ -135,7 +135,7 @@ ignore_errors=True
[mypy-pandas.tests.arithmetic.test_datetime64]
ignore_errors=True
-[mypy-pandas.tests.indexes.datetimes.test_tools]
+[mypy-pandas.tests.tools.test_to_datetime]
ignore_errors=True
[mypy-pandas.tests.scalar.period.test_period]
| These files are in the index tests, but they're not really testing the index objects. | https://api.github.com/repos/pandas-dev/pandas/pulls/32338 | 2020-02-28T17:51:35Z | 2020-02-29T19:36:31Z | 2020-02-29T19:36:31Z | 2020-02-29T20:13:52Z |
CLN: Don't create _join_functions | diff --git a/pandas/core/reshape/merge.py b/pandas/core/reshape/merge.py
index 49ac1b6cfa52b..bb40adb69e42d 100644
--- a/pandas/core/reshape/merge.py
+++ b/pandas/core/reshape/merge.py
@@ -1312,7 +1312,12 @@ def _get_join_indexers(
kwargs = copy.copy(kwargs)
if how == "left":
kwargs["sort"] = sort
- join_func = _join_functions[how]
+ join_func = {
+ "inner": libjoin.inner_join,
+ "left": libjoin.left_outer_join,
+ "right": _right_outer_join,
+ "outer": libjoin.full_outer_join,
+ }[how]
return join_func(lkey, rkey, count, **kwargs)
@@ -1842,14 +1847,6 @@ def _right_outer_join(x, y, max_groups):
return left_indexer, right_indexer
-_join_functions = {
- "inner": libjoin.inner_join,
- "left": libjoin.left_outer_join,
- "right": _right_outer_join,
- "outer": libjoin.full_outer_join,
-}
-
-
def _factorize_keys(lk, rk, sort=True):
# Some pre-processing for non-ndarray lk / rk
if is_datetime64tz_dtype(lk) and is_datetime64tz_dtype(rk):
 | This seems clearer than defining `_join_functions` far away from where it's actually used. | https://api.github.com/repos/pandas-dev/pandas/pulls/32336 | 2020-02-28T16:04:25Z | 2020-03-03T03:09:21Z | 2020-03-03T03:09:20Z | 2020-04-09T02:37:43Z |
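The four `how` values in the inlined dispatch mapping correspond to the public merge API; a quick sketch with made-up frames shows two of them:

```python
import pandas as pd

left = pd.DataFrame({"key": ["a", "b", "c"], "lval": [1, 2, 3]})
right = pd.DataFrame({"key": ["b", "c", "d"], "rval": [4, 5, 6]})

# "inner" keeps only keys present on both sides.
inner = left.merge(right, on="key", how="inner")
assert inner["key"].tolist() == ["b", "c"]

# "outer" keeps the union of keys from both sides.
outer = left.merge(right, on="key", how="outer")
assert len(outer) == 4
```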
DOC: Fixed reference to `convert_dtypes` in `to_numeric` (#32295) | diff --git a/pandas/core/tools/numeric.py b/pandas/core/tools/numeric.py
index 4939cbfc9cc96..40f376724bd39 100644
--- a/pandas/core/tools/numeric.py
+++ b/pandas/core/tools/numeric.py
@@ -70,7 +70,7 @@ def to_numeric(arg, errors="raise", downcast=None):
to_datetime : Convert argument to datetime.
to_timedelta : Convert argument to timedelta.
numpy.ndarray.astype : Cast a numpy array to a specified type.
- convert_dtypes : Convert dtypes.
+ DataFrame.convert_dtypes : Convert dtypes.
Examples
--------
| - closes #32295
- just changed the reference from convert_dtypes to DataFrame.convert_dtypes
| https://api.github.com/repos/pandas-dev/pandas/pulls/32333 | 2020-02-28T14:37:28Z | 2020-02-29T03:07:24Z | 2020-02-29T03:07:24Z | 2020-03-02T10:20:40Z |
CLN: _libs.interval looping with cdef index | diff --git a/pandas/_libs/interval.pyx b/pandas/_libs/interval.pyx
index 1166768472449..50ac055dbffc9 100644
--- a/pandas/_libs/interval.pyx
+++ b/pandas/_libs/interval.pyx
@@ -502,14 +502,14 @@ def intervals_to_interval_bounds(ndarray intervals,
"""
cdef:
object closed = None, interval
- int64_t n = len(intervals)
+ Py_ssize_t i, n = len(intervals)
ndarray left, right
bint seen_closed = False
left = np.empty(n, dtype=intervals.dtype)
right = np.empty(n, dtype=intervals.dtype)
- for i in range(len(intervals)):
+ for i in range(n):
interval = intervals[i]
if interval is None or util.is_nan(interval):
left[i] = np.nan
| - [ ] closes #xxxx
- [ ] tests added / passed
- [ ] passes `black pandas`
- [ ] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/32329 | 2020-02-28T12:28:47Z | 2020-02-28T16:41:26Z | 2020-02-28T16:41:26Z | 2020-02-29T10:25:01Z |
CLN: Removed unused variable definitions | diff --git a/pandas/_libs/index.pyx b/pandas/_libs/index.pyx
index 6141e2b78e9f4..0ba5cb7e9bc40 100644
--- a/pandas/_libs/index.pyx
+++ b/pandas/_libs/index.pyx
@@ -115,8 +115,6 @@ cdef class IndexEngine:
cdef _maybe_get_bool_indexer(self, object val):
cdef:
ndarray[uint8_t, ndim=1, cast=True] indexer
- ndarray[intp_t, ndim=1] found
- int count
indexer = self._get_index_values() == val
return self._unpack_bool_indexer(indexer, val)
 | I don't see `found` and `count` being used anywhere in this function. | https://api.github.com/repos/pandas-dev/pandas/pulls/32328 | 2020-02-28T11:24:56Z | 2020-02-28T15:40:29Z | 2020-02-28T15:40:29Z | 2020-02-29T10:24:23Z |
TST/CLN: Follow-up to #31867 | diff --git a/pandas/tests/indexing/test_floats.py b/pandas/tests/indexing/test_floats.py
index c966962a7c87d..fff5ca03e80f4 100644
--- a/pandas/tests/indexing/test_floats.py
+++ b/pandas/tests/indexing/test_floats.py
@@ -95,43 +95,14 @@ def test_scalar_non_numeric(self, index_func, klass):
s = gen_obj(klass, i)
# getting
- for idxr, getitem in [(lambda x: x.iloc, False), (lambda x: x, True)]:
+ with pytest.raises(KeyError, match="^3.0$"):
+ s[3.0]
- if getitem:
- error = KeyError
- msg = r"^3\.0?$"
- else:
- error = TypeError
- msg = (
- r"cannot do (label|positional) indexing "
- fr"on {type(i).__name__} with these indexers \[3\.0\] of "
- r"type float|"
- "Cannot index by location index with a "
- "non-integer key"
- )
- with pytest.raises(error, match=msg):
- idxr(s)[3.0]
-
- # label based can be a TypeError or KeyError
- if s.index.inferred_type in {
- "categorical",
- "string",
- "unicode",
- "mixed",
- "period",
- "timedelta64",
- "datetime64",
- }:
- error = KeyError
- msg = r"^3\.0$"
- else:
- error = TypeError
- msg = (
- r"cannot do (label|positional) indexing "
- fr"on {type(i).__name__} with these indexers \[3\.0\] of "
- "type float"
- )
- with pytest.raises(error, match=msg):
+ msg = "Cannot index by location index with a non-integer key"
+ with pytest.raises(TypeError, match=msg):
+ s.iloc[3.0]
+
+ with pytest.raises(KeyError, match="^3.0$"):
s.loc[3.0]
# contains
@@ -190,16 +161,12 @@ def test_scalar_with_mixed(self):
s2 = Series([1, 2, 3], index=["a", "b", "c"])
s3 = Series([1, 2, 3], index=["a", "b", 1.5])
- # lookup in a pure stringstr
- # with an invalid indexer
- msg = (
- r"cannot do label indexing "
- r"on Index with these indexers \[1\.0\] of "
- r"type float|"
- "Cannot index by location index with a non-integer key"
- )
+ # lookup in a pure string index with an invalid indexer
+
with pytest.raises(KeyError, match="^1.0$"):
s2[1.0]
+
+ msg = "Cannot index by location index with a non-integer key"
with pytest.raises(TypeError, match=msg):
s2.iloc[1.0]
| xref #31867
| https://api.github.com/repos/pandas-dev/pandas/pulls/32324 | 2020-02-28T09:50:32Z | 2020-02-29T00:56:20Z | 2020-02-29T00:56:20Z | 2020-02-29T13:31:31Z |
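The consolidated expectations in the test cleanup above boil down to: positional indexing rejects non-integer keys with a `TypeError`, while label lookups of a missing float raise a `KeyError`. A brief sketch (assumes a reasonably recent pandas):

```python
import pandas as pd

s = pd.Series([1, 2, 3], index=["a", "b", "c"])

# Label-based lookup of a float that isn't in the index raises KeyError.
try:
    s.loc[3.0]
    label_error = None
except KeyError:
    label_error = "KeyError"

# Positional lookup with a non-integer key raises TypeError.
try:
    s.iloc[3.0]
    pos_error = None
except TypeError:
    pos_error = "TypeError"

assert label_error == "KeyError"
assert pos_error == "TypeError"
```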
STY: spaces in wrong place | diff --git a/pandas/tests/base/test_ops.py b/pandas/tests/base/test_ops.py
index f85d823cb2fac..06ba6cc34ad92 100644
--- a/pandas/tests/base/test_ops.py
+++ b/pandas/tests/base/test_ops.py
@@ -213,9 +213,9 @@ def test_value_counts_unique_nunique(self, index_or_series_obj):
if orig.duplicated().any():
pytest.xfail(
- "The test implementation isn't flexible enough to deal"
- " with duplicated values. This isn't a bug in the"
- " application code, but in the test code."
+ "The test implementation isn't flexible enough to deal "
+ "with duplicated values. This isn't a bug in the "
+ "application code, but in the test code."
)
# create repeated values, 'n'th element is repeated by n+1 times
@@ -279,9 +279,9 @@ def test_value_counts_unique_nunique_null(self, null_obj, index_or_series_obj):
pytest.skip("MultiIndex doesn't support isna")
elif orig.duplicated().any():
pytest.xfail(
- "The test implementation isn't flexible enough to deal"
- " with duplicated values. This isn't a bug in the"
- " application code, but in the test code."
+ "The test implementation isn't flexible enough to deal "
+ "with duplicated values. This isn't a bug in the "
+ "application code, but in the test code."
)
# special assign to the numpy array
| - [x] ref #30755
- [ ] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/32323 | 2020-02-28T09:33:22Z | 2020-02-28T10:15:59Z | 2020-02-28T10:15:59Z | 2020-02-28T10:28:34Z |
DEPR: replace without passing value | diff --git a/doc/source/whatsnew/v3.0.0.rst b/doc/source/whatsnew/v3.0.0.rst
index 15e85d0f90c5e..4debd41de213f 100644
--- a/doc/source/whatsnew/v3.0.0.rst
+++ b/doc/source/whatsnew/v3.0.0.rst
@@ -207,6 +207,7 @@ Removal of prior version deprecations/changes
- :meth:`SeriesGroupBy.agg` no longer pins the name of the group to the input passed to the provided ``func`` (:issue:`51703`)
- All arguments except ``name`` in :meth:`Index.rename` are now keyword only (:issue:`56493`)
- All arguments except the first ``path``-like argument in IO writers are now keyword only (:issue:`54229`)
+- Disallow calling :meth:`Series.replace` or :meth:`DataFrame.replace` without a ``value`` and with non-dict-like ``to_replace`` (:issue:`33302`)
- Disallow non-standard (``np.ndarray``, :class:`Index`, :class:`ExtensionArray`, or :class:`Series`) to :func:`isin`, :func:`unique`, :func:`factorize` (:issue:`52986`)
- Disallow passing a pandas type to :meth:`Index.view` (:issue:`55709`)
- Disallow units other than "s", "ms", "us", "ns" for datetime64 and timedelta64 dtypes in :func:`array` (:issue:`53817`)
diff --git a/pandas/core/arrays/_mixins.py b/pandas/core/arrays/_mixins.py
index 930ee83aea00b..123dc679a83ea 100644
--- a/pandas/core/arrays/_mixins.py
+++ b/pandas/core/arrays/_mixins.py
@@ -296,13 +296,6 @@ def __getitem__(
result = self._from_backing_data(result)
return result
- def _fill_mask_inplace(
- self, method: str, limit: int | None, mask: npt.NDArray[np.bool_]
- ) -> None:
- # (for now) when self.ndim == 2, we assume axis=0
- func = missing.get_fill_func(method, ndim=self.ndim)
- func(self._ndarray.T, limit=limit, mask=mask.T)
-
def _pad_or_backfill(
self,
*,
diff --git a/pandas/core/arrays/base.py b/pandas/core/arrays/base.py
index fdc839225a557..1855bd1368251 100644
--- a/pandas/core/arrays/base.py
+++ b/pandas/core/arrays/base.py
@@ -2111,25 +2111,6 @@ def _where(self, mask: npt.NDArray[np.bool_], value) -> Self:
result[~mask] = val
return result
- # TODO(3.0): this can be removed once GH#33302 deprecation is enforced
- def _fill_mask_inplace(
- self, method: str, limit: int | None, mask: npt.NDArray[np.bool_]
- ) -> None:
- """
- Replace values in locations specified by 'mask' using pad or backfill.
-
- See also
- --------
- ExtensionArray.fillna
- """
- func = missing.get_fill_func(method)
- npvalues = self.astype(object)
- # NB: if we don't copy mask here, it may be altered inplace, which
- # would mess up the `self[mask] = ...` below.
- func(npvalues, limit=limit, mask=mask.copy())
- new_values = self._from_sequence(npvalues, dtype=self.dtype)
- self[mask] = new_values[mask]
-
def _rank(
self,
*,
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index ea618ea088348..0243d7f6fc573 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -7319,17 +7319,8 @@ def replace(
inplace: bool = False,
regex: bool = False,
) -> Self | None:
- if value is lib.no_default and not is_dict_like(to_replace) and regex is False:
- # case that goes through _replace_single and defaults to method="pad"
- warnings.warn(
- # GH#33302
- f"{type(self).__name__}.replace without 'value' and with "
- "non-dict-like 'to_replace' is deprecated "
- "and will raise in a future version. "
- "Explicitly specify the new values instead.",
- FutureWarning,
- stacklevel=find_stack_level(),
- )
+ if not is_bool(regex) and to_replace is not None:
+ raise ValueError("'to_replace' must be 'None' if 'regex' is not a bool")
if not (
is_scalar(to_replace)
@@ -7342,6 +7333,15 @@ def replace(
f"{type(to_replace).__name__!r}"
)
+ if value is lib.no_default and not (
+ is_dict_like(to_replace) or is_dict_like(regex)
+ ):
+ raise ValueError(
+ # GH#33302
+ f"{type(self).__name__}.replace must specify either 'value', "
+ "a dict-like 'to_replace', or dict-like 'regex'."
+ )
+
inplace = validate_bool_kwarg(inplace, "inplace")
if inplace:
if not PYPY:
@@ -7352,41 +7352,10 @@ def replace(
stacklevel=2,
)
- if not is_bool(regex) and to_replace is not None:
- raise ValueError("'to_replace' must be 'None' if 'regex' is not a bool")
-
if value is lib.no_default:
- # GH#36984 if the user explicitly passes value=None we want to
- # respect that. We have the corner case where the user explicitly
- # passes value=None *and* a method, which we interpret as meaning
- # they want the (documented) default behavior.
-
- # passing a single value that is scalar like
- # when value is None (GH5319), for compat
- if not is_dict_like(to_replace) and not is_dict_like(regex):
- to_replace = [to_replace]
-
- if isinstance(to_replace, (tuple, list)):
- # TODO: Consider copy-on-write for non-replaced columns's here
- if isinstance(self, ABCDataFrame):
- from pandas import Series
-
- result = self.apply(
- Series._replace_single,
- args=(to_replace, inplace),
- )
- if inplace:
- return None
- return result
- return self._replace_single(to_replace, inplace)
-
if not is_dict_like(to_replace):
- if not is_dict_like(regex):
- raise TypeError(
- 'If "to_replace" and "value" are both None '
- 'and "to_replace" is not a list, then '
- "regex must be a mapping"
- )
+ # In this case we have checked above that
+ # 1) regex is dict-like and 2) to_replace is None
to_replace = regex
regex = True
diff --git a/pandas/core/series.py b/pandas/core/series.py
index 843788273a6ef..8b65ce679ab6b 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -97,7 +97,6 @@
algorithms,
base,
common as com,
- missing,
nanops,
ops,
roperator,
@@ -5116,40 +5115,6 @@ def info(
show_counts=show_counts,
)
- @overload
- def _replace_single(self, to_replace, inplace: Literal[False]) -> Self: ...
-
- @overload
- def _replace_single(self, to_replace, inplace: Literal[True]) -> None: ...
-
- @overload
- def _replace_single(self, to_replace, inplace: bool) -> Self | None: ...
-
- # TODO(3.0): this can be removed once GH#33302 deprecation is enforced
- def _replace_single(self, to_replace, inplace: bool) -> Self | None:
- """
- Replaces values in a Series using the fill method specified when no
- replacement value is given in the replace method
- """
- limit = None
- method = "pad"
-
- result = self if inplace else self.copy()
-
- values = result._values
- mask = missing.mask_missing(values, to_replace)
-
- if isinstance(values, ExtensionArray):
- # dispatch to the EA's _pad_mask_inplace method
- values._fill_mask_inplace(method, limit, mask)
- else:
- fill_f = missing.get_fill_func(method)
- fill_f(values, limit=limit, mask=mask)
-
- if inplace:
- return None
- return result
-
def memory_usage(self, index: bool = True, deep: bool = False) -> int:
"""
Return the memory usage of the Series.
diff --git a/pandas/core/shared_docs.py b/pandas/core/shared_docs.py
index 2d8517693a2f8..38a443b56ee3d 100644
--- a/pandas/core/shared_docs.py
+++ b/pandas/core/shared_docs.py
@@ -608,24 +608,7 @@
4 None
dtype: object
- When ``value`` is not explicitly passed and `to_replace` is a scalar, list
- or tuple, `replace` uses the method parameter (default 'pad') to do the
- replacement. So this is why the 'a' values are being replaced by 10
- in rows 1 and 2 and 'b' in row 4 in this case.
-
- >>> s.replace('a')
- 0 10
- 1 10
- 2 10
- 3 b
- 4 b
- dtype: object
-
- .. deprecated:: 2.1.0
- The 'method' parameter and padding behavior are deprecated.
-
- On the other hand, if ``None`` is explicitly passed for ``value``, it will
- be respected:
+ If ``None`` is explicitly passed for ``value``, it will be respected:
>>> s.replace('a', None)
0 10
diff --git a/pandas/tests/frame/methods/test_replace.py b/pandas/tests/frame/methods/test_replace.py
index 3b9c342f35a71..fb7ba2b7af38a 100644
--- a/pandas/tests/frame/methods/test_replace.py
+++ b/pandas/tests/frame/methods/test_replace.py
@@ -1264,13 +1264,8 @@ def test_replace_invalid_to_replace(self):
r"Expecting 'to_replace' to be either a scalar, array-like, "
r"dict or None, got invalid type.*"
)
- msg2 = (
- "DataFrame.replace without 'value' and with non-dict-like "
- "'to_replace' is deprecated"
- )
with pytest.raises(TypeError, match=msg):
- with tm.assert_produces_warning(FutureWarning, match=msg2):
- df.replace(lambda x: x.strip())
+ df.replace(lambda x: x.strip())
@pytest.mark.parametrize("dtype", ["float", "float64", "int64", "Int64", "boolean"])
@pytest.mark.parametrize("value", [np.nan, pd.NA])
diff --git a/pandas/tests/series/methods/test_replace.py b/pandas/tests/series/methods/test_replace.py
index 09a3469e73462..0a79bcea679a7 100644
--- a/pandas/tests/series/methods/test_replace.py
+++ b/pandas/tests/series/methods/test_replace.py
@@ -137,20 +137,15 @@ def test_replace_gh5319(self):
# API change from 0.12?
# GH 5319
ser = pd.Series([0, np.nan, 2, 3, 4])
- expected = ser.ffill()
msg = (
- "Series.replace without 'value' and with non-dict-like "
- "'to_replace' is deprecated"
+ "Series.replace must specify either 'value', "
+ "a dict-like 'to_replace', or dict-like 'regex'"
)
- with tm.assert_produces_warning(FutureWarning, match=msg):
- result = ser.replace([np.nan])
- tm.assert_series_equal(result, expected)
+ with pytest.raises(ValueError, match=msg):
+ ser.replace([np.nan])
- ser = pd.Series([0, np.nan, 2, 3, 4])
- expected = ser.ffill()
- with tm.assert_produces_warning(FutureWarning, match=msg):
- result = ser.replace(np.nan)
- tm.assert_series_equal(result, expected)
+ with pytest.raises(ValueError, match=msg):
+ ser.replace(np.nan)
def test_replace_datetime64(self):
# GH 5797
@@ -182,19 +177,16 @@ def test_replace_timedelta_td64(self):
def test_replace_with_single_list(self):
ser = pd.Series([0, 1, 2, 3, 4])
- msg2 = (
- "Series.replace without 'value' and with non-dict-like "
- "'to_replace' is deprecated"
+ msg = (
+ "Series.replace must specify either 'value', "
+ "a dict-like 'to_replace', or dict-like 'regex'"
)
- with tm.assert_produces_warning(FutureWarning, match=msg2):
- result = ser.replace([1, 2, 3])
- tm.assert_series_equal(result, pd.Series([0, 0, 0, 0, 4]))
+ with pytest.raises(ValueError, match=msg):
+ ser.replace([1, 2, 3])
s = ser.copy()
- with tm.assert_produces_warning(FutureWarning, match=msg2):
- return_value = s.replace([1, 2, 3], inplace=True)
- assert return_value is None
- tm.assert_series_equal(s, pd.Series([0, 0, 0, 0, 4]))
+ with pytest.raises(ValueError, match=msg):
+ s.replace([1, 2, 3], inplace=True)
def test_replace_mixed_types(self):
ser = pd.Series(np.arange(5), dtype="int64")
@@ -483,13 +475,8 @@ def test_replace_invalid_to_replace(self):
r"Expecting 'to_replace' to be either a scalar, array-like, "
r"dict or None, got invalid type.*"
)
- msg2 = (
- "Series.replace without 'value' and with non-dict-like "
- "'to_replace' is deprecated"
- )
with pytest.raises(TypeError, match=msg):
- with tm.assert_produces_warning(FutureWarning, match=msg2):
- series.replace(lambda x: x.strip())
+ series.replace(lambda x: x.strip())
@pytest.mark.parametrize("frame", [False, True])
def test_replace_nonbool_regex(self, frame):
| - [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/58040 | 2024-03-28T02:03:54Z | 2024-04-01T18:46:11Z | 2024-04-01T18:46:11Z | 2024-04-01T20:37:05Z |
DEPR: replace method/limit keywords | diff --git a/doc/source/whatsnew/v3.0.0.rst b/doc/source/whatsnew/v3.0.0.rst
index 2286a75f5d0c5..26dd6f83ad44a 100644
--- a/doc/source/whatsnew/v3.0.0.rst
+++ b/doc/source/whatsnew/v3.0.0.rst
@@ -207,6 +207,7 @@ Removal of prior version deprecations/changes
- All arguments except ``name`` in :meth:`Index.rename` are now keyword only (:issue:`56493`)
- All arguments except the first ``path``-like argument in IO writers are now keyword only (:issue:`54229`)
- Removed "freq" keyword from :class:`PeriodArray` constructor, use "dtype" instead (:issue:`52462`)
+- Removed deprecated "method" and "limit" keywords from :meth:`Series.replace` and :meth:`DataFrame.replace` (:issue:`53492`)
- Removed the "closed" and "normalize" keywords in :meth:`DatetimeIndex.__new__` (:issue:`52628`)
- Removed the "closed" and "unit" keywords in :meth:`TimedeltaIndex.__new__` (:issue:`52628`, :issue:`55499`)
- All arguments in :meth:`Index.sort_values` are now keyword only (:issue:`56493`)
diff --git a/pandas/conftest.py b/pandas/conftest.py
index 65410c3c09494..34489bb70575a 100644
--- a/pandas/conftest.py
+++ b/pandas/conftest.py
@@ -150,7 +150,6 @@ def pytest_collection_modifyitems(items, config) -> None:
("is_categorical_dtype", "is_categorical_dtype is deprecated"),
("is_sparse", "is_sparse is deprecated"),
("DataFrameGroupBy.fillna", "DataFrameGroupBy.fillna is deprecated"),
- ("NDFrame.replace", "The 'method' keyword"),
("NDFrame.replace", "Series.replace without 'value'"),
("NDFrame.clip", "Downcasting behavior in Series and DataFrame methods"),
("Series.idxmin", "The behavior of Series.idxmin"),
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index a7545fb8d98de..1b9d7f4c81c9f 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -7285,9 +7285,7 @@ def replace(
value=...,
*,
inplace: Literal[False] = ...,
- limit: int | None = ...,
regex: bool = ...,
- method: Literal["pad", "ffill", "bfill"] | lib.NoDefault = ...,
) -> Self: ...
@overload
@@ -7297,9 +7295,7 @@ def replace(
value=...,
*,
inplace: Literal[True],
- limit: int | None = ...,
regex: bool = ...,
- method: Literal["pad", "ffill", "bfill"] | lib.NoDefault = ...,
) -> None: ...
@overload
@@ -7309,9 +7305,7 @@ def replace(
value=...,
*,
inplace: bool = ...,
- limit: int | None = ...,
regex: bool = ...,
- method: Literal["pad", "ffill", "bfill"] | lib.NoDefault = ...,
) -> Self | None: ...
@final
@@ -7326,32 +7320,9 @@ def replace(
value=lib.no_default,
*,
inplace: bool = False,
- limit: int | None = None,
regex: bool = False,
- method: Literal["pad", "ffill", "bfill"] | lib.NoDefault = lib.no_default,
) -> Self | None:
- if method is not lib.no_default:
- warnings.warn(
- # GH#33302
- f"The 'method' keyword in {type(self).__name__}.replace is "
- "deprecated and will be removed in a future version.",
- FutureWarning,
- stacklevel=find_stack_level(),
- )
- elif limit is not None:
- warnings.warn(
- # GH#33302
- f"The 'limit' keyword in {type(self).__name__}.replace is "
- "deprecated and will be removed in a future version.",
- FutureWarning,
- stacklevel=find_stack_level(),
- )
- if (
- value is lib.no_default
- and method is lib.no_default
- and not is_dict_like(to_replace)
- and regex is False
- ):
+ if value is lib.no_default and not is_dict_like(to_replace) and regex is False:
# case that goes through _replace_single and defaults to method="pad"
warnings.warn(
# GH#33302
@@ -7387,14 +7358,11 @@ def replace(
if not is_bool(regex) and to_replace is not None:
raise ValueError("'to_replace' must be 'None' if 'regex' is not a bool")
- if value is lib.no_default or method is not lib.no_default:
+ if value is lib.no_default:
# GH#36984 if the user explicitly passes value=None we want to
# respect that. We have the corner case where the user explicitly
# passes value=None *and* a method, which we interpret as meaning
# they want the (documented) default behavior.
- if method is lib.no_default:
- # TODO: get this to show up as the default in the docs?
- method = "pad"
# passing a single value that is scalar like
# when value is None (GH5319), for compat
@@ -7408,12 +7376,12 @@ def replace(
result = self.apply(
Series._replace_single,
- args=(to_replace, method, inplace, limit),
+ args=(to_replace, inplace),
)
if inplace:
return None
return result
- return self._replace_single(to_replace, method, inplace, limit)
+ return self._replace_single(to_replace, inplace)
if not is_dict_like(to_replace):
if not is_dict_like(regex):
@@ -7458,9 +7426,7 @@ def replace(
else:
to_replace, value = keys, values
- return self.replace(
- to_replace, value, inplace=inplace, limit=limit, regex=regex
- )
+ return self.replace(to_replace, value, inplace=inplace, regex=regex)
else:
# need a non-zero len on all axes
if not self.size:
@@ -7524,9 +7490,7 @@ def replace(
f"or a list or dict of strings or regular expressions, "
f"you passed a {type(regex).__name__!r}"
)
- return self.replace(
- regex, value, inplace=inplace, limit=limit, regex=True
- )
+ return self.replace(regex, value, inplace=inplace, regex=True)
else:
# dest iterable dict-like
if is_dict_like(value): # NA -> {'A' : 0, 'B' : -1}
diff --git a/pandas/core/series.py b/pandas/core/series.py
index 0761dc17ab147..b0dc05fce7913 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -5113,28 +5113,22 @@ def info(
)
@overload
- def _replace_single(
- self, to_replace, method: str, inplace: Literal[False], limit
- ) -> Self: ...
+ def _replace_single(self, to_replace, inplace: Literal[False]) -> Self: ...
@overload
- def _replace_single(
- self, to_replace, method: str, inplace: Literal[True], limit
- ) -> None: ...
+ def _replace_single(self, to_replace, inplace: Literal[True]) -> None: ...
@overload
- def _replace_single(
- self, to_replace, method: str, inplace: bool, limit
- ) -> Self | None: ...
+ def _replace_single(self, to_replace, inplace: bool) -> Self | None: ...
# TODO(3.0): this can be removed once GH#33302 deprecation is enforced
- def _replace_single(
- self, to_replace, method: str, inplace: bool, limit
- ) -> Self | None:
+ def _replace_single(self, to_replace, inplace: bool) -> Self | None:
"""
Replaces values in a Series using the fill method specified when no
replacement value is given in the replace method
"""
+ limit = None
+ method = "pad"
result = self if inplace else self.copy()
diff --git a/pandas/core/shared_docs.py b/pandas/core/shared_docs.py
index a2b5439f9e12f..2d8517693a2f8 100644
--- a/pandas/core/shared_docs.py
+++ b/pandas/core/shared_docs.py
@@ -429,20 +429,11 @@
filled). Regular expressions, strings and lists or dicts of such
objects are also allowed.
{inplace}
- limit : int, default None
- Maximum size gap to forward or backward fill.
-
- .. deprecated:: 2.1.0
regex : bool or same types as `to_replace`, default False
Whether to interpret `to_replace` and/or `value` as regular
expressions. Alternatively, this could be a regular expression or a
list, dict, or array of regular expressions in which case
`to_replace` must be ``None``.
- method : {{'pad', 'ffill', 'bfill'}}
- The method to use when for replacement, when `to_replace` is a
- scalar, list or tuple and `value` is ``None``.
-
- .. deprecated:: 2.1.0
Returns
-------
@@ -538,14 +529,6 @@
3 1 8 d
4 4 9 e
- >>> s.replace([1, 2], method='bfill')
- 0 3
- 1 3
- 2 3
- 3 4
- 4 5
- dtype: int64
-
**dict-like `to_replace`**
>>> df.replace({{0: 10, 1: 100}})
@@ -615,7 +598,7 @@
When one uses a dict as the `to_replace` value, it is like the
value(s) in the dict are equal to the `value` parameter.
``s.replace({{'a': None}})`` is equivalent to
- ``s.replace(to_replace={{'a': None}}, value=None, method=None)``:
+ ``s.replace(to_replace={{'a': None}}, value=None)``:
>>> s.replace({{'a': None}})
0 10
diff --git a/pandas/tests/frame/methods/test_replace.py b/pandas/tests/frame/methods/test_replace.py
index eb6d649c296fc..3b9c342f35a71 100644
--- a/pandas/tests/frame/methods/test_replace.py
+++ b/pandas/tests/frame/methods/test_replace.py
@@ -1171,48 +1171,6 @@ def test_replace_with_empty_dictlike(self, mix_abc):
tm.assert_frame_equal(df, df.replace({"b": {}}))
tm.assert_frame_equal(df, df.replace(Series({"b": {}})))
- @pytest.mark.parametrize(
- "to_replace, method, expected",
- [
- (0, "bfill", {"A": [1, 1, 2], "B": [5, np.nan, 7], "C": ["a", "b", "c"]}),
- (
- np.nan,
- "bfill",
- {"A": [0, 1, 2], "B": [5.0, 7.0, 7.0], "C": ["a", "b", "c"]},
- ),
- ("d", "ffill", {"A": [0, 1, 2], "B": [5, np.nan, 7], "C": ["a", "b", "c"]}),
- (
- [0, 2],
- "bfill",
- {"A": [1, 1, 2], "B": [5, np.nan, 7], "C": ["a", "b", "c"]},
- ),
- (
- [1, 2],
- "pad",
- {"A": [0, 0, 0], "B": [5, np.nan, 7], "C": ["a", "b", "c"]},
- ),
- (
- (1, 2),
- "bfill",
- {"A": [0, 2, 2], "B": [5, np.nan, 7], "C": ["a", "b", "c"]},
- ),
- (
- ["b", "c"],
- "ffill",
- {"A": [0, 1, 2], "B": [5, np.nan, 7], "C": ["a", "a", "a"]},
- ),
- ],
- )
- def test_replace_method(self, to_replace, method, expected):
- # GH 19632
- df = DataFrame({"A": [0, 1, 2], "B": [5, np.nan, 7], "C": ["a", "b", "c"]})
-
- msg = "The 'method' keyword in DataFrame.replace is deprecated"
- with tm.assert_produces_warning(FutureWarning, match=msg):
- result = df.replace(to_replace=to_replace, value=None, method=method)
- expected = DataFrame(expected)
- tm.assert_frame_equal(result, expected)
-
@pytest.mark.parametrize(
"replace_dict, final_data",
[({"a": 1, "b": 1}, [[3, 3], [2, 2]]), ({"a": 1, "b": 2}, [[3, 1], [2, 3]])],
diff --git a/pandas/tests/frame/test_subclass.py b/pandas/tests/frame/test_subclass.py
index 355953eac9d51..7d18ef28a722d 100644
--- a/pandas/tests/frame/test_subclass.py
+++ b/pandas/tests/frame/test_subclass.py
@@ -742,18 +742,6 @@ def test_equals_subclass(self):
assert df1.equals(df2)
assert df2.equals(df1)
- def test_replace_list_method(self):
- # https://github.com/pandas-dev/pandas/pull/46018
- df = tm.SubclassedDataFrame({"A": [0, 1, 2]})
- msg = "The 'method' keyword in SubclassedDataFrame.replace is deprecated"
- with tm.assert_produces_warning(
- FutureWarning, match=msg, raise_on_extra_warnings=False
- ):
- result = df.replace([1, 2], method="ffill")
- expected = tm.SubclassedDataFrame({"A": [0, 0, 0]})
- assert isinstance(result, tm.SubclassedDataFrame)
- tm.assert_frame_equal(result, expected)
-
class MySubclassWithMetadata(DataFrame):
_metadata = ["my_metadata"]
diff --git a/pandas/tests/series/methods/test_replace.py b/pandas/tests/series/methods/test_replace.py
index c7b894e73d0dd..09a3469e73462 100644
--- a/pandas/tests/series/methods/test_replace.py
+++ b/pandas/tests/series/methods/test_replace.py
@@ -196,19 +196,6 @@ def test_replace_with_single_list(self):
assert return_value is None
tm.assert_series_equal(s, pd.Series([0, 0, 0, 0, 4]))
- # make sure things don't get corrupted when fillna call fails
- s = ser.copy()
- msg = (
- r"Invalid fill method\. Expecting pad \(ffill\) or backfill "
- r"\(bfill\)\. Got crash_cymbal"
- )
- msg3 = "The 'method' keyword in Series.replace is deprecated"
- with pytest.raises(ValueError, match=msg):
- with tm.assert_produces_warning(FutureWarning, match=msg3):
- return_value = s.replace([1, 2, 3], inplace=True, method="crash_cymbal")
- assert return_value is None
- tm.assert_series_equal(s, ser)
-
def test_replace_mixed_types(self):
ser = pd.Series(np.arange(5), dtype="int64")
@@ -550,62 +537,6 @@ def test_replace_extension_other(self, frame_or_series):
# should not have changed dtype
tm.assert_equal(obj, result)
- def _check_replace_with_method(self, ser: pd.Series):
- df = ser.to_frame()
-
- msg1 = "The 'method' keyword in Series.replace is deprecated"
- with tm.assert_produces_warning(FutureWarning, match=msg1):
- res = ser.replace(ser[1], method="pad")
- expected = pd.Series([ser[0], ser[0]] + list(ser[2:]), dtype=ser.dtype)
- tm.assert_series_equal(res, expected)
-
- msg2 = "The 'method' keyword in DataFrame.replace is deprecated"
- with tm.assert_produces_warning(FutureWarning, match=msg2):
- res_df = df.replace(ser[1], method="pad")
- tm.assert_frame_equal(res_df, expected.to_frame())
-
- ser2 = ser.copy()
- with tm.assert_produces_warning(FutureWarning, match=msg1):
- res2 = ser2.replace(ser[1], method="pad", inplace=True)
- assert res2 is None
- tm.assert_series_equal(ser2, expected)
-
- with tm.assert_produces_warning(FutureWarning, match=msg2):
- res_df2 = df.replace(ser[1], method="pad", inplace=True)
- assert res_df2 is None
- tm.assert_frame_equal(df, expected.to_frame())
-
- def test_replace_ea_dtype_with_method(self, any_numeric_ea_dtype):
- arr = pd.array([1, 2, pd.NA, 4], dtype=any_numeric_ea_dtype)
- ser = pd.Series(arr)
-
- self._check_replace_with_method(ser)
-
- @pytest.mark.parametrize("as_categorical", [True, False])
- def test_replace_interval_with_method(self, as_categorical):
- # in particular interval that can't hold NA
-
- idx = pd.IntervalIndex.from_breaks(range(4))
- ser = pd.Series(idx)
- if as_categorical:
- ser = ser.astype("category")
-
- self._check_replace_with_method(ser)
-
- @pytest.mark.parametrize("as_period", [True, False])
- @pytest.mark.parametrize("as_categorical", [True, False])
- def test_replace_datetimelike_with_method(self, as_period, as_categorical):
- idx = pd.date_range("2016-01-01", periods=5, tz="US/Pacific")
- if as_period:
- idx = idx.tz_localize(None).to_period("D")
-
- ser = pd.Series(idx)
- ser.iloc[-2] = pd.NaT
- if as_categorical:
- ser = ser.astype("category")
-
- self._check_replace_with_method(ser)
-
def test_replace_with_compiled_regex(self):
# https://github.com/pandas-dev/pandas/issues/35680
s = pd.Series(["a", "b", "c"])
| - [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/58039 | 2024-03-27T23:03:44Z | 2024-03-28T00:04:36Z | 2024-03-28T00:04:36Z | 2024-03-28T00:14:45Z |
CLN: remove axis keyword from Block.pad_or_backfill | diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index a7545fb8d98de..0a50ec2d4c5ce 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -6727,12 +6727,10 @@ def _pad_or_backfill(
axis = self._get_axis_number(axis)
method = clean_fill_method(method)
- if not self._mgr.is_single_block and axis == 1:
- # e.g. test_align_fill_method
- # TODO(3.0): once downcast is removed, we can do the .T
- # in all axis=1 cases, and remove axis kward from mgr.pad_or_backfill.
- if inplace:
+ if axis == 1:
+ if not self._mgr.is_single_block and inplace:
raise NotImplementedError()
+ # e.g. test_align_fill_method
result = self.T._pad_or_backfill(
method=method, limit=limit, limit_area=limit_area
).T
@@ -6741,7 +6739,6 @@ def _pad_or_backfill(
new_mgr = self._mgr.pad_or_backfill(
method=method,
- axis=self._get_block_manager_axis(axis),
limit=limit,
limit_area=limit_area,
inplace=inplace,
diff --git a/pandas/core/internals/blocks.py b/pandas/core/internals/blocks.py
index a7cdc7c39754d..468ec32ce7760 100644
--- a/pandas/core/internals/blocks.py
+++ b/pandas/core/internals/blocks.py
@@ -1343,7 +1343,6 @@ def pad_or_backfill(
self,
*,
method: FillnaOptions,
- axis: AxisInt = 0,
inplace: bool = False,
limit: int | None = None,
limit_area: Literal["inside", "outside"] | None = None,
@@ -1357,16 +1356,12 @@ def pad_or_backfill(
# Dispatch to the NumpyExtensionArray method.
# We know self.array_values is a NumpyExtensionArray bc EABlock overrides
vals = cast(NumpyExtensionArray, self.array_values)
- if axis == 1:
- vals = vals.T
- new_values = vals._pad_or_backfill(
+ new_values = vals.T._pad_or_backfill(
method=method,
limit=limit,
limit_area=limit_area,
copy=copy,
- )
- if axis == 1:
- new_values = new_values.T
+ ).T
data = extract_array(new_values, extract_numpy=True)
return [self.make_block_same_class(data, refs=refs)]
@@ -1814,7 +1809,6 @@ def pad_or_backfill(
self,
*,
method: FillnaOptions,
- axis: AxisInt = 0,
inplace: bool = False,
limit: int | None = None,
limit_area: Literal["inside", "outside"] | None = None,
@@ -1827,11 +1821,11 @@ def pad_or_backfill(
elif limit_area is not None:
raise NotImplementedError(
f"{type(values).__name__} does not implement limit_area "
- "(added in pandas 2.2). 3rd-party ExtnsionArray authors "
+ "(added in pandas 2.2). 3rd-party ExtensionArray authors "
"need to add this argument to _pad_or_backfill."
)
- if values.ndim == 2 and axis == 1:
+ if values.ndim == 2:
# NDArrayBackedExtensionArray.fillna assumes axis=0
new_values = values.T._pad_or_backfill(**kwargs).T
else:
| - [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/58038 | 2024-03-27T22:10:48Z | 2024-03-28T00:06:19Z | 2024-03-28T00:06:19Z | 2024-03-28T00:14:19Z |
Backport PR #57758 on branch 2.2.x (BUG: DataFrame Interchange Protocol errors on Boolean columns) | diff --git a/doc/source/whatsnew/v2.2.2.rst b/doc/source/whatsnew/v2.2.2.rst
index 54084abab7817..2a48403d9a318 100644
--- a/doc/source/whatsnew/v2.2.2.rst
+++ b/doc/source/whatsnew/v2.2.2.rst
@@ -22,6 +22,7 @@ Fixed regressions
Bug fixes
~~~~~~~~~
+- :meth:`DataFrame.__dataframe__` was producing incorrect data buffers when the column's type was nullable boolean (:issue:`55332`)
- :meth:`DataFrame.__dataframe__` was showing bytemask instead of bitmask for ``'string[pyarrow]'`` validity buffer (:issue:`57762`)
- :meth:`DataFrame.__dataframe__` was showing non-null validity buffer (instead of ``None``) ``'string[pyarrow]'`` without missing values (:issue:`57761`)
diff --git a/pandas/core/interchange/utils.py b/pandas/core/interchange/utils.py
index 2a19dd5046aa3..fd1c7c9639242 100644
--- a/pandas/core/interchange/utils.py
+++ b/pandas/core/interchange/utils.py
@@ -144,6 +144,9 @@ def dtype_to_arrow_c_fmt(dtype: DtypeObj) -> str:
elif isinstance(dtype, DatetimeTZDtype):
return ArrowCTypes.TIMESTAMP.format(resolution=dtype.unit[0], tz=dtype.tz)
+ elif isinstance(dtype, pd.BooleanDtype):
+ return ArrowCTypes.BOOL
+
raise NotImplementedError(
f"Conversion of {dtype} to Arrow C format string is not implemented."
)
diff --git a/pandas/tests/interchange/test_impl.py b/pandas/tests/interchange/test_impl.py
index 1ccada9116d4c..25418b8bb2b37 100644
--- a/pandas/tests/interchange/test_impl.py
+++ b/pandas/tests/interchange/test_impl.py
@@ -470,6 +470,7 @@ def test_non_str_names_w_duplicates():
),
([1.0, 2.25, None], "Float32", "float32"),
([1.0, 2.25, None], "Float32[pyarrow]", "float32"),
+ ([True, False, None], "boolean", "bool"),
([True, False, None], "boolean[pyarrow]", "bool"),
(["much ado", "about", None], "string[pyarrow_numpy]", "large_string"),
(["much ado", "about", None], "string[pyarrow]", "large_string"),
@@ -532,6 +533,7 @@ def test_pandas_nullable_with_missing_values(
),
([1.0, 2.25, 5.0], "Float32", "float32"),
([1.0, 2.25, 5.0], "Float32[pyarrow]", "float32"),
+ ([True, False, False], "boolean", "bool"),
([True, False, False], "boolean[pyarrow]", "bool"),
(["much ado", "about", "nothing"], "string[pyarrow_numpy]", "large_string"),
(["much ado", "about", "nothing"], "string[pyarrow]", "large_string"),
| Backport PR #57758: BUG: DataFrame Interchange Protocol errors on Boolean columns | https://api.github.com/repos/pandas-dev/pandas/pulls/58036 | 2024-03-27T17:48:23Z | 2024-03-27T19:02:51Z | 2024-03-27T19:02:51Z | 2024-03-27T19:02:52Z |
Backport PR #57548 on branch 2.2.x (Fix accidental loss-of-precision for to_datetime(str, unit=...)) | diff --git a/doc/source/whatsnew/v2.2.2.rst b/doc/source/whatsnew/v2.2.2.rst
index 54084abab7817..19539918b8c8f 100644
--- a/doc/source/whatsnew/v2.2.2.rst
+++ b/doc/source/whatsnew/v2.2.2.rst
@@ -15,7 +15,7 @@ Fixed regressions
~~~~~~~~~~~~~~~~~
- :meth:`DataFrame.__dataframe__` was producing incorrect data buffers when the a column's type was a pandas nullable on with missing values (:issue:`56702`)
- :meth:`DataFrame.__dataframe__` was producing incorrect data buffers when the a column's type was a pyarrow nullable on with missing values (:issue:`57664`)
--
+- Fixed regression in precision of :func:`to_datetime` with string and ``unit`` input (:issue:`57051`)
.. ---------------------------------------------------------------------------
.. _whatsnew_222.bug_fixes:
diff --git a/pandas/_libs/tslib.pyx b/pandas/_libs/tslib.pyx
index 017fdc4bc834f..dd23c2f27ca09 100644
--- a/pandas/_libs/tslib.pyx
+++ b/pandas/_libs/tslib.pyx
@@ -277,7 +277,7 @@ def array_with_unit_to_datetime(
bint is_raise = errors == "raise"
ndarray[int64_t] iresult
tzinfo tz = None
- float fval
+ double fval
assert is_ignore or is_coerce or is_raise
diff --git a/pandas/tests/tools/test_to_datetime.py b/pandas/tests/tools/test_to_datetime.py
index 6791ac0340640..a1ed996dade8e 100644
--- a/pandas/tests/tools/test_to_datetime.py
+++ b/pandas/tests/tools/test_to_datetime.py
@@ -1912,6 +1912,14 @@ def test_unit(self, cache):
with pytest.raises(ValueError, match=msg):
to_datetime([1], unit="D", format="%Y%m%d", cache=cache)
+ def test_unit_str(self, cache):
+ # GH 57051
+ # Test that strs aren't dropping precision to 32-bit accidentally.
+ with tm.assert_produces_warning(FutureWarning):
+ res = to_datetime(["1704660000"], unit="s", origin="unix")
+ expected = to_datetime([1704660000], unit="s", origin="unix")
+ tm.assert_index_equal(res, expected)
+
def test_unit_array_mixed_nans(self, cache):
values = [11111111111111111, 1, 1.0, iNaT, NaT, np.nan, "NaT", ""]
result = to_datetime(values, unit="D", errors="ignore", cache=cache)
| Backport PR #57548: Fix accidental loss-of-precision for to_datetime(str, unit=...) | https://api.github.com/repos/pandas-dev/pandas/pulls/58034 | 2024-03-27T16:52:23Z | 2024-03-27T17:51:23Z | 2024-03-27T17:51:23Z | 2024-03-27T17:51:23Z |
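The regression fixed above came from declaring `fval` as a C `float` (32 bits) instead of a `double` when parsing numeric strings, and at epoch-second magnitudes a 32-bit float cannot represent the value exactly. A minimal NumPy sketch of the effect (illustrative only, not the pandas internals):

```python
import numpy as np

s = "1704660000"  # epoch seconds, early January 2024

# float32 has ~24 bits of mantissa, so values near 2**31 are rounded
as_float32 = np.float32(s)
# float64 represents every integer up to 2**53 exactly
as_float64 = np.float64(s)

print(int(as_float32))  # rounded to a nearby multiple of 128
print(int(as_float64))  # exact
```

This is why the backported test compares string input against integer input: both must resolve to the same timestamp once the parser keeps double precision.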
DOC: DataFrame.reset_index names param can't be a tuple as docs state | diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index b218dd899c8f8..50a93994dc76b 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -6011,8 +6011,8 @@ def reset_index(
names : int, str or 1-dimensional list, default None
Using the given string, rename the DataFrame column which contains the
- index data. If the DataFrame has a MultiIndex, this has to be a list or
- tuple with length equal to the number of levels.
+ index data. If the DataFrame has a MultiIndex, this has to be a list
+ with length equal to the number of levels.
.. versionadded:: 1.5.0
| - [ ] closes #57994
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
| https://api.github.com/repos/pandas-dev/pandas/pulls/58032 | 2024-03-27T16:12:00Z | 2024-03-29T18:34:34Z | 2024-03-29T18:34:34Z | 2024-03-29T18:34:42Z |
BUG: Fixed DataFrameGroupBy.transform with numba returning the wrong order with non increasing indexes #57069 | diff --git a/doc/source/whatsnew/v3.0.0.rst b/doc/source/whatsnew/v3.0.0.rst
index a398b93b60018..2f23a240bdcd1 100644
--- a/doc/source/whatsnew/v3.0.0.rst
+++ b/doc/source/whatsnew/v3.0.0.rst
@@ -303,6 +303,7 @@ Bug fixes
- Fixed bug in :class:`SparseDtype` for equal comparison with na fill value. (:issue:`54770`)
- Fixed bug in :meth:`DataFrame.join` inconsistently setting result index name (:issue:`55815`)
- Fixed bug in :meth:`DataFrame.to_string` that raised ``StopIteration`` with nested DataFrames. (:issue:`16098`)
+- Fixed bug in :meth:`DataFrame.transform` that was returning the wrong order unless the index was monotonically increasing. (:issue:`57069`)
- Fixed bug in :meth:`DataFrame.update` bool dtype being converted to object (:issue:`55509`)
- Fixed bug in :meth:`DataFrameGroupBy.apply` that was returning a completely empty DataFrame when all return values of ``func`` were ``None`` instead of returning an empty DataFrame with the original columns and dtypes. (:issue:`57775`)
- Fixed bug in :meth:`Series.diff` allowing non-integer values for the ``periods`` argument. (:issue:`56607`)
diff --git a/pandas/core/groupby/groupby.py b/pandas/core/groupby/groupby.py
index 0b61938d474b9..bd8e222831d0c 100644
--- a/pandas/core/groupby/groupby.py
+++ b/pandas/core/groupby/groupby.py
@@ -1439,6 +1439,7 @@ def _transform_with_numba(self, func, *args, engine_kwargs=None, **kwargs):
data and indices into a Numba jitted function.
"""
data = self._obj_with_exclusions
+ index_sorting = self._grouper.result_ilocs
df = data if data.ndim == 2 else data.to_frame()
starts, ends, sorted_index, sorted_data = self._numba_prep(df)
@@ -1456,7 +1457,7 @@ def _transform_with_numba(self, func, *args, engine_kwargs=None, **kwargs):
)
# result values needs to be resorted to their original positions since we
# evaluated the data sorted by group
- result = result.take(np.argsort(sorted_index), axis=0)
+ result = result.take(np.argsort(index_sorting), axis=0)
index = data.index
if data.ndim == 1:
result_kwargs = {"name": data.name}
diff --git a/pandas/tests/groupby/transform/test_numba.py b/pandas/tests/groupby/transform/test_numba.py
index b75113d3f4e14..a17d25b2e7e2e 100644
--- a/pandas/tests/groupby/transform/test_numba.py
+++ b/pandas/tests/groupby/transform/test_numba.py
@@ -181,10 +181,25 @@ def f(values, index):
df = DataFrame({"group": ["A", "A", "B"], "v": [4, 5, 6]}, index=[-1, -2, -3])
result = df.groupby("group").transform(f, engine="numba")
- expected = DataFrame([-4.0, -3.0, -2.0], columns=["v"], index=[-1, -2, -3])
+ expected = DataFrame([-2.0, -3.0, -4.0], columns=["v"], index=[-1, -2, -3])
tm.assert_frame_equal(result, expected)
+def test_index_order_consistency_preserved():
+ # GH 57069
+ pytest.importorskip("numba")
+
+ def f(values, index):
+ return values
+
+ df = DataFrame(
+ {"vals": [0.0, 1.0, 2.0, 3.0], "group": [0, 1, 0, 1]}, index=range(3, -1, -1)
+ )
+ result = df.groupby("group")["vals"].transform(f, engine="numba")
+ expected = Series([0.0, 1.0, 2.0, 3.0], index=range(3, -1, -1), name="vals")
+ tm.assert_series_equal(result, expected)
+
+
def test_engine_kwargs_not_cached():
# If the user passes a different set of engine_kwargs don't return the same
# jitted function
| - [X] closes #57069
- [X] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [X] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [X] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [X] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
DataFrameGroupBy.transform with numba was returning the wrong order unless the index was monotonically increasing, because the transformed results were not being reordered back to their original row positions.
Fixed the test "pandas/tests/groupby/transform/test_numba.py::test_index_data_correctly_passed" to expect the correct order in the result.
Added a test "pandas/tests/groupby/transform/test_numba.py::test_index_order_consistency_preserved" to test DataFrameGroupBy.transform with engine='numba' with a decreasing index. | https://api.github.com/repos/pandas-dev/pandas/pulls/58030 | 2024-03-27T13:50:55Z | 2024-03-28T22:42:30Z | 2024-03-28T22:42:30Z | 2024-03-28T22:42:39Z |
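The fix above replaces `np.argsort(sorted_index)` with `np.argsort(index_sorting)` (the grouper's result ilocs), so values computed on group-sorted data are mapped back to their original positions. A standalone sketch of that invert-a-sort idea, independent of the pandas internals:

```python
import numpy as np

values = np.array([10.0, 11.0, 12.0, 13.0])
groups = np.array([0, 1, 0, 1])

# sort rows by group, as the numba path does before calling the kernel
order = np.argsort(groups, kind="stable")  # ilocs of rows in group order
sorted_vals = values[order]                # kernel sees each group contiguously

# identity transform on the sorted data, then undo the sort:
# argsort of a permutation is its inverse permutation
restored = sorted_vals.take(np.argsort(order))

assert (restored == values).all()
```

The bug arose because the undo step was keyed off the (possibly non-monotonic) index values rather than the positional permutation actually used for sorting.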
CLN: enforce `any/all` deprecation with `datetime64` | diff --git a/doc/source/whatsnew/v3.0.0.rst b/doc/source/whatsnew/v3.0.0.rst
index 549d49aaa1853..011f72868ab5d 100644
--- a/doc/source/whatsnew/v3.0.0.rst
+++ b/doc/source/whatsnew/v3.0.0.rst
@@ -212,6 +212,7 @@ Removal of prior version deprecations/changes
- All arguments in :meth:`Series.to_dict` are now keyword only (:issue:`56493`)
- Changed the default value of ``observed`` in :meth:`DataFrame.groupby` and :meth:`Series.groupby` to ``True`` (:issue:`51811`)
- Enforce deprecation in :func:`testing.assert_series_equal` and :func:`testing.assert_frame_equal` with object dtype and mismatched null-like values, which are now considered not-equal (:issue:`18463`)
+- Enforced deprecation ``all`` and ``any`` reductions with ``datetime64`` and :class:`DatetimeTZDtype` dtypes (:issue:`58029`)
- Enforced deprecation disallowing parsing datetimes with mixed time zones unless user passes ``utc=True`` to :func:`to_datetime` (:issue:`57275`)
- Enforced deprecation in :meth:`Series.value_counts` and :meth:`Index.value_counts` with object dtype performing dtype inference on the ``.index`` of the result (:issue:`56161`)
- Enforced deprecation of :meth:`.DataFrameGroupBy.get_group` and :meth:`.SeriesGroupBy.get_group` allowing the ``name`` argument to be a non-tuple when grouping by a list of length 1 (:issue:`54155`)
diff --git a/pandas/core/arrays/datetimelike.py b/pandas/core/arrays/datetimelike.py
index 3dc2d77bb5a19..52cb175ca79a2 100644
--- a/pandas/core/arrays/datetimelike.py
+++ b/pandas/core/arrays/datetimelike.py
@@ -1661,16 +1661,8 @@ def _groupby_op(
dtype = self.dtype
if dtype.kind == "M":
# Adding/multiplying datetimes is not valid
- if how in ["sum", "prod", "cumsum", "cumprod", "var", "skew"]:
- raise TypeError(f"datetime64 type does not support {how} operations")
- if how in ["any", "all"]:
- # GH#34479
- warnings.warn(
- f"'{how}' with datetime64 dtypes is deprecated and will raise in a "
- f"future version. Use (obj != pd.Timestamp(0)).{how}() instead.",
- FutureWarning,
- stacklevel=find_stack_level(),
- )
+ if how in ["any", "all", "sum", "prod", "cumsum", "cumprod", "var", "skew"]:
+ raise TypeError(f"datetime64 type does not support operation: '{how}'")
elif isinstance(dtype, PeriodDtype):
# Adding/multiplying Periods is not valid
@@ -2217,11 +2209,11 @@ def ceil(
# Reductions
def any(self, *, axis: AxisInt | None = None, skipna: bool = True) -> bool:
- # GH#34479 the nanops call will issue a FutureWarning for non-td64 dtype
+ # GH#34479 the nanops call will raise a TypeError for non-td64 dtype
return nanops.nanany(self._ndarray, axis=axis, skipna=skipna, mask=self.isna())
def all(self, *, axis: AxisInt | None = None, skipna: bool = True) -> bool:
- # GH#34479 the nanops call will issue a FutureWarning for non-td64 dtype
+ # GH#34479 the nanops call will raise a TypeError for non-td64 dtype
return nanops.nanall(self._ndarray, axis=axis, skipna=skipna, mask=self.isna())
diff --git a/pandas/core/nanops.py b/pandas/core/nanops.py
index b68337d9e0de9..a124e8679ae8e 100644
--- a/pandas/core/nanops.py
+++ b/pandas/core/nanops.py
@@ -31,7 +31,6 @@
npt,
)
from pandas.compat._optional import import_optional_dependency
-from pandas.util._exceptions import find_stack_level
from pandas.core.dtypes.common import (
is_complex,
@@ -521,12 +520,7 @@ def nanany(
if values.dtype.kind == "M":
# GH#34479
- warnings.warn(
- "'any' with datetime64 dtypes is deprecated and will raise in a "
- "future version. Use (obj != pd.Timestamp(0)).any() instead.",
- FutureWarning,
- stacklevel=find_stack_level(),
- )
+ raise TypeError("datetime64 type does not support operation: 'any'")
values, _ = _get_values(values, skipna, fill_value=False, mask=mask)
@@ -582,12 +576,7 @@ def nanall(
if values.dtype.kind == "M":
# GH#34479
- warnings.warn(
- "'all' with datetime64 dtypes is deprecated and will raise in a "
- "future version. Use (obj != pd.Timestamp(0)).all() instead.",
- FutureWarning,
- stacklevel=find_stack_level(),
- )
+ raise TypeError("datetime64 type does not support operation: 'all'")
values, _ = _get_values(values, skipna, fill_value=True, mask=mask)
diff --git a/pandas/tests/extension/base/groupby.py b/pandas/tests/extension/base/groupby.py
index 414683b02dcba..dcbbac44d083a 100644
--- a/pandas/tests/extension/base/groupby.py
+++ b/pandas/tests/extension/base/groupby.py
@@ -162,8 +162,10 @@ def test_in_numeric_groupby(self, data_for_grouping):
msg = "|".join(
[
- # period/datetime
+ # period
"does not support sum operations",
+ # datetime
+ "does not support operation: 'sum'",
# all others
re.escape(f"agg function failed [how->sum,dtype->{dtype}"),
]
diff --git a/pandas/tests/extension/test_datetime.py b/pandas/tests/extension/test_datetime.py
index 06e85f5c92913..5de4865feb6f9 100644
--- a/pandas/tests/extension/test_datetime.py
+++ b/pandas/tests/extension/test_datetime.py
@@ -104,10 +104,8 @@ def _supports_reduction(self, obj, op_name: str) -> bool:
@pytest.mark.parametrize("skipna", [True, False])
def test_reduce_series_boolean(self, data, all_boolean_reductions, skipna):
meth = all_boolean_reductions
- msg = f"'{meth}' with datetime64 dtypes is deprecated and will raise in"
- with tm.assert_produces_warning(
- FutureWarning, match=msg, check_stacklevel=False
- ):
+ msg = f"datetime64 type does not support operation: '{meth}'"
+ with pytest.raises(TypeError, match=msg):
super().test_reduce_series_boolean(data, all_boolean_reductions, skipna)
def test_series_constructor(self, data):
diff --git a/pandas/tests/frame/test_reductions.py b/pandas/tests/frame/test_reductions.py
index 408cb0ab6fc5c..7aa3de7afe579 100644
--- a/pandas/tests/frame/test_reductions.py
+++ b/pandas/tests/frame/test_reductions.py
@@ -1371,10 +1371,6 @@ def test_any_all_object_dtype(
expected = Series([True, True, val, True])
tm.assert_series_equal(result, expected)
- # GH#50947 deprecates this but it is not emitting a warning in some builds.
- @pytest.mark.filterwarnings(
- "ignore:'any' with datetime64 dtypes is deprecated.*:FutureWarning"
- )
def test_any_datetime(self):
# GH 23070
float_data = [1, np.nan, 3, np.nan]
@@ -1386,10 +1382,9 @@ def test_any_datetime(self):
]
df = DataFrame({"A": float_data, "B": datetime_data})
- result = df.any(axis=1)
-
- expected = Series([True, True, True, False])
- tm.assert_series_equal(result, expected)
+ msg = "datetime64 type does not support operation: 'any'"
+ with pytest.raises(TypeError, match=msg):
+ df.any(axis=1)
def test_any_all_bool_only(self):
# GH 25101
@@ -1481,23 +1476,23 @@ def test_any_all_np_func(self, func, data, expected):
TypeError, match="dtype category does not support reduction"
):
getattr(DataFrame(data), func.__name__)(axis=None)
- else:
- msg = "'(any|all)' with datetime64 dtypes is deprecated"
- if data.dtypes.apply(lambda x: x.kind == "M").any():
- warn = FutureWarning
- else:
- warn = None
+ if data.dtypes.apply(lambda x: x.kind == "M").any():
+ # GH#34479
+ msg = "datetime64 type does not support operation: '(any|all)'"
+ with pytest.raises(TypeError, match=msg):
+ func(data)
+
+ # method version
+ with pytest.raises(TypeError, match=msg):
+ getattr(DataFrame(data), func.__name__)(axis=None)
- with tm.assert_produces_warning(warn, match=msg, check_stacklevel=False):
- # GH#34479
- result = func(data)
+ elif data.dtypes.apply(lambda x: x != "category").any():
+ result = func(data)
assert isinstance(result, np.bool_)
assert result.item() is expected
# method version
- with tm.assert_produces_warning(warn, match=msg):
- # GH#34479
- result = getattr(DataFrame(data), func.__name__)(axis=None)
+ result = getattr(DataFrame(data), func.__name__)(axis=None)
assert isinstance(result, np.bool_)
assert result.item() is expected
diff --git a/pandas/tests/groupby/test_groupby.py b/pandas/tests/groupby/test_groupby.py
index 7ec1598abf403..bcad88bdecabb 100644
--- a/pandas/tests/groupby/test_groupby.py
+++ b/pandas/tests/groupby/test_groupby.py
@@ -674,7 +674,7 @@ def test_raises_on_nuisance(df):
df = df.loc[:, ["A", "C", "D"]]
df["E"] = datetime.now()
grouped = df.groupby("A")
- msg = "datetime64 type does not support sum operations"
+ msg = "datetime64 type does not support operation: 'sum'"
with pytest.raises(TypeError, match=msg):
grouped.agg("sum")
with pytest.raises(TypeError, match=msg):
diff --git a/pandas/tests/groupby/test_raises.py b/pandas/tests/groupby/test_raises.py
index f9d5de72eda1d..7af27d7227035 100644
--- a/pandas/tests/groupby/test_raises.py
+++ b/pandas/tests/groupby/test_raises.py
@@ -241,16 +241,16 @@ def test_groupby_raises_datetime(
return
klass, msg = {
- "all": (None, ""),
- "any": (None, ""),
+ "all": (TypeError, "datetime64 type does not support operation: 'all'"),
+ "any": (TypeError, "datetime64 type does not support operation: 'any'"),
"bfill": (None, ""),
"corrwith": (TypeError, "cannot perform __mul__ with this index type"),
"count": (None, ""),
"cumcount": (None, ""),
"cummax": (None, ""),
"cummin": (None, ""),
- "cumprod": (TypeError, "datetime64 type does not support cumprod operations"),
- "cumsum": (TypeError, "datetime64 type does not support cumsum operations"),
+ "cumprod": (TypeError, "datetime64 type does not support operation: 'cumprod'"),
+ "cumsum": (TypeError, "datetime64 type does not support operation: 'cumsum'"),
"diff": (None, ""),
"ffill": (None, ""),
"fillna": (None, ""),
@@ -265,7 +265,7 @@ def test_groupby_raises_datetime(
"ngroup": (None, ""),
"nunique": (None, ""),
"pct_change": (TypeError, "cannot perform __truediv__ with this index type"),
- "prod": (TypeError, "datetime64 type does not support prod"),
+ "prod": (TypeError, "datetime64 type does not support operation: 'prod'"),
"quantile": (None, ""),
"rank": (None, ""),
"sem": (None, ""),
@@ -276,18 +276,16 @@ def test_groupby_raises_datetime(
"|".join(
[
r"dtype datetime64\[ns\] does not support reduction",
- "datetime64 type does not support skew operations",
+ "datetime64 type does not support operation: 'skew'",
]
),
),
"std": (None, ""),
- "sum": (TypeError, "datetime64 type does not support sum operations"),
- "var": (TypeError, "datetime64 type does not support var operations"),
+ "sum": (TypeError, "datetime64 type does not support operation: 'sum"),
+ "var": (TypeError, "datetime64 type does not support operation: 'var'"),
}[groupby_func]
- if groupby_func in ["any", "all"]:
- warn_msg = f"'{groupby_func}' with datetime64 dtypes is deprecated"
- elif groupby_func == "fillna":
+ if groupby_func == "fillna":
kind = "Series" if groupby_series else "DataFrame"
warn_msg = f"{kind}GroupBy.fillna is deprecated"
else:
diff --git a/pandas/tests/groupby/transform/test_transform.py b/pandas/tests/groupby/transform/test_transform.py
index 46f6367fbb3ed..117114c4c2cab 100644
--- a/pandas/tests/groupby/transform/test_transform.py
+++ b/pandas/tests/groupby/transform/test_transform.py
@@ -749,6 +749,7 @@ def test_cython_transform_frame_column(
msg = "|".join(
[
"does not support .* operations",
+ "does not support operation",
".* is not supported for object dtype",
"is not implemented for this dtype",
]
diff --git a/pandas/tests/reductions/test_reductions.py b/pandas/tests/reductions/test_reductions.py
index b10319f5380e7..048553330c1ce 100644
--- a/pandas/tests/reductions/test_reductions.py
+++ b/pandas/tests/reductions/test_reductions.py
@@ -1009,32 +1009,41 @@ def test_any_all_datetimelike(self):
ser = Series(dta)
df = DataFrame(ser)
- msg = "'(any|all)' with datetime64 dtypes is deprecated"
- with tm.assert_produces_warning(FutureWarning, match=msg):
- # GH#34479
- assert dta.all()
- assert dta.any()
+ # GH#34479
+ msg = "datetime64 type does not support operation: '(any|all)'"
+ with pytest.raises(TypeError, match=msg):
+ dta.all()
+ with pytest.raises(TypeError, match=msg):
+ dta.any()
- assert ser.all()
- assert ser.any()
+ with pytest.raises(TypeError, match=msg):
+ ser.all()
+ with pytest.raises(TypeError, match=msg):
+ ser.any()
- assert df.any().all()
- assert df.all().all()
+ with pytest.raises(TypeError, match=msg):
+ df.any().all()
+ with pytest.raises(TypeError, match=msg):
+ df.all().all()
dta = dta.tz_localize("UTC")
ser = Series(dta)
df = DataFrame(ser)
+ # GH#34479
+ with pytest.raises(TypeError, match=msg):
+ dta.all()
+ with pytest.raises(TypeError, match=msg):
+ dta.any()
- with tm.assert_produces_warning(FutureWarning, match=msg):
- # GH#34479
- assert dta.all()
- assert dta.any()
-
- assert ser.all()
- assert ser.any()
+ with pytest.raises(TypeError, match=msg):
+ ser.all()
+ with pytest.raises(TypeError, match=msg):
+ ser.any()
- assert df.any().all()
- assert df.all().all()
+ with pytest.raises(TypeError, match=msg):
+ df.any().all()
+ with pytest.raises(TypeError, match=msg):
+ df.all().all()
tda = dta - dta[0]
ser = Series(tda)
diff --git a/pandas/tests/resample/test_resample_api.py b/pandas/tests/resample/test_resample_api.py
index f3b9c909290a8..9b442fa7dbd07 100644
--- a/pandas/tests/resample/test_resample_api.py
+++ b/pandas/tests/resample/test_resample_api.py
@@ -708,7 +708,9 @@ def test_selection_api_validation():
tm.assert_frame_equal(exp, result)
exp.index.name = "d"
- with pytest.raises(TypeError, match="datetime64 type does not support sum"):
+ with pytest.raises(
+ TypeError, match="datetime64 type does not support operation: 'sum'"
+ ):
df.resample("2D", level="d").sum()
result = df.resample("2D", level="d").sum(numeric_only=True)
tm.assert_frame_equal(exp, result)
| xref #50947, xref #58006
Enforced the deprecation of `any/all` reductions with `datetime64` dtypes. | https://api.github.com/repos/pandas-dev/pandas/pulls/58029 | 2024-03-27T12:41:40Z | 2024-03-28T17:54:36Z | 2024-03-28T17:54:36Z | 2024-03-28T17:54:44Z |
Fix DataFrame.cumsum failing when dtype is timedelta64[ns] | diff --git a/doc/source/whatsnew/v3.0.0.rst b/doc/source/whatsnew/v3.0.0.rst
index 549d49aaa1853..e3fc3a24cfe00 100644
--- a/doc/source/whatsnew/v3.0.0.rst
+++ b/doc/source/whatsnew/v3.0.0.rst
@@ -318,6 +318,7 @@ Bug fixes
~~~~~~~~~
- Fixed bug in :class:`SparseDtype` for equal comparison with na fill value. (:issue:`54770`)
- Fixed bug in :meth:`.DataFrameGroupBy.median` where nat values gave an incorrect result. (:issue:`57926`)
+- Fixed bug in :meth:`DataFrame.cumsum` which was raising ``IndexError`` if dtype is ``timedelta64[ns]`` (:issue:`57956`)
- Fixed bug in :meth:`DataFrame.join` inconsistently setting result index name (:issue:`55815`)
- Fixed bug in :meth:`DataFrame.to_string` that raised ``StopIteration`` with nested DataFrames. (:issue:`16098`)
- Fixed bug in :meth:`DataFrame.update` bool dtype being converted to object (:issue:`55509`)
diff --git a/pandas/core/array_algos/datetimelike_accumulations.py b/pandas/core/array_algos/datetimelike_accumulations.py
index 55942f2c9350d..c3a7c2e4fefb2 100644
--- a/pandas/core/array_algos/datetimelike_accumulations.py
+++ b/pandas/core/array_algos/datetimelike_accumulations.py
@@ -49,7 +49,8 @@ def _cum_func(
if not skipna:
mask = np.maximum.accumulate(mask)
- result = func(y)
+ # GH 57956
+ result = func(y, axis=0)
result[mask] = iNaT
if values.dtype.kind in "mM":
diff --git a/pandas/tests/series/test_cumulative.py b/pandas/tests/series/test_cumulative.py
index 68d7fd8b90df2..9b7b08127a550 100644
--- a/pandas/tests/series/test_cumulative.py
+++ b/pandas/tests/series/test_cumulative.py
@@ -91,6 +91,25 @@ def test_cummin_cummax_datetimelike(self, ts, method, skipna, exp_tdi):
result = getattr(ser, method)(skipna=skipna)
tm.assert_series_equal(expected, result)
+ def test_cumsum_datetimelike(self):
+ # GH#57956
+ df = pd.DataFrame(
+ [
+ [pd.Timedelta(0), pd.Timedelta(days=1)],
+ [pd.Timedelta(days=2), pd.NaT],
+ [pd.Timedelta(hours=-6), pd.Timedelta(hours=12)],
+ ]
+ )
+ result = df.cumsum()
+ expected = pd.DataFrame(
+ [
+ [pd.Timedelta(0), pd.Timedelta(days=1)],
+ [pd.Timedelta(days=2), pd.NaT],
+ [pd.Timedelta(days=1, hours=18), pd.Timedelta(days=1, hours=12)],
+ ]
+ )
+ tm.assert_frame_equal(result, expected)
+
@pytest.mark.parametrize(
"func, exp",
[
| - [x] closes #57956
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/58028 | 2024-03-27T10:31:59Z | 2024-03-28T00:13:58Z | 2024-03-28T00:13:58Z | 2024-03-28T00:14:05Z |
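A minimal NumPy sketch (plain int64 data, not pandas itself) of why the `axis=0` added to `func(y, axis=0)` in `_cum_func` matters: without an explicit axis, `np.cumsum` flattens a 2-D block and accumulates across columns, which is the shape mismatch behind GH#57956.

```python
import numpy as np

# stand-in for the 2-column block _cum_func operates on
vals = np.array([[0, 1], [2, 3]], dtype="int64")

flat = np.cumsum(vals)             # no axis: flattened, columns run together
per_col = np.cumsum(vals, axis=0)  # the fix: accumulate each column separately

print(flat.tolist())     # [0, 1, 3, 6]
print(per_col.tolist())  # [[0, 1], [2, 4]]
```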
REGR: Performance of DataFrame.stack where columns are not a MultiIndex | diff --git a/pandas/core/reshape/reshape.py b/pandas/core/reshape/reshape.py
index afb0c489c9c94..0a2f7fe43b4b3 100644
--- a/pandas/core/reshape/reshape.py
+++ b/pandas/core/reshape/reshape.py
@@ -932,14 +932,18 @@ def stack_v3(frame: DataFrame, level: list[int]) -> Series | DataFrame:
if len(frame.columns) == 1:
data = frame.copy()
else:
- # Take the data from frame corresponding to this idx value
- if len(level) == 1:
- idx = (idx,)
- gen = iter(idx)
- column_indexer = tuple(
- next(gen) if k in set_levels else slice(None)
- for k in range(frame.columns.nlevels)
- )
+ if not isinstance(frame.columns, MultiIndex) and not isinstance(idx, tuple):
+ # GH#57750 - if the frame is an Index with tuples, .loc below will fail
+ column_indexer = idx
+ else:
+ # Take the data from frame corresponding to this idx value
+ if len(level) == 1:
+ idx = (idx,)
+ gen = iter(idx)
+ column_indexer = tuple(
+ next(gen) if k in set_levels else slice(None)
+ for k in range(frame.columns.nlevels)
+ )
data = frame.loc[:, column_indexer]
if len(level) < frame.columns.nlevels:
 | - [x] closes #57302
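The generator expression kept in the slow branch of the patched `stack_v3` can be sketched in isolation; `set_levels`, `idx`, and `nlevels` below are stand-in values, not taken from pandas. For a flat (non-MultiIndex) columns axis the patch skips this tuple entirely and passes the scalar `idx` to `.loc`, which is the source of the speedup.

```python
# Building the per-level column indexer used by frame.loc[:, column_indexer]
set_levels = {0}   # levels being stacked (stand-in)
idx = ("a",)       # current key, wrapped in a tuple when len(level) == 1
nlevels = 2        # frame.columns.nlevels (stand-in)

gen = iter(idx)
column_indexer = tuple(
    next(gen) if k in set_levels else slice(None) for k in range(nlevels)
)
print(column_indexer)  # ('a', slice(None, None, None))
```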
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
Thanks @DeaMariaLeon for identifying this regression and @jorisvandenbossche for the solution used here.
ASVs
```
| Change | Before [e51039af] <enh_fillna_allow_none~1> | After [664c54b8] <regr_perf_stack> | Ratio | Benchmark (Parameter) |
|----------|-----------------------------------------------|--------------------------------------|---------|------------------------------------------------------------------------|
| - | 304±2ms | 44.2±0.4ms | 0.15 | reshape.ReshapeExtensionDtype.time_stack('datetime64[ns, US/Pacific]') |
| - | 304±2ms | 43.7±0.2ms | 0.14 | reshape.ReshapeExtensionDtype.time_stack('Period[s]') |
| - | 294±0.7ms | 40.8±0.3ms | 0.14 | reshape.ReshapeMaskedArrayDtype.time_stack('Float64') |
| - | 291±2ms | 40.7±0.2ms | 0.14 | reshape.ReshapeMaskedArrayDtype.time_stack('Int64') |
``` | https://api.github.com/repos/pandas-dev/pandas/pulls/58027 | 2024-03-27T03:43:05Z | 2024-03-27T17:29:16Z | 2024-03-27T17:29:16Z | 2024-03-27T17:41:30Z |
CLN: remove no-longer-needed warning filters | diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index ebcb700e656f6..a7545fb8d98de 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -9660,13 +9660,7 @@ def _where(
# make sure we are boolean
fill_value = bool(inplace)
- with warnings.catch_warnings():
- warnings.filterwarnings(
- "ignore",
- "Downcasting object dtype arrays",
- category=FutureWarning,
- )
- cond = cond.fillna(fill_value)
+ cond = cond.fillna(fill_value)
cond = cond.infer_objects()
msg = "Boolean array expected for the condition, not {dtype}"
diff --git a/pandas/io/formats/xml.py b/pandas/io/formats/xml.py
index 702430642a597..47f162e93216d 100644
--- a/pandas/io/formats/xml.py
+++ b/pandas/io/formats/xml.py
@@ -11,7 +11,6 @@
Any,
final,
)
-import warnings
from pandas.errors import AbstractMethodError
from pandas.util._decorators import (
@@ -208,13 +207,7 @@ def _process_dataframe(self) -> dict[int | str, dict[str, Any]]:
df = df.reset_index()
if self.na_rep is not None:
- with warnings.catch_warnings():
- warnings.filterwarnings(
- "ignore",
- "Downcasting object dtype arrays",
- category=FutureWarning,
- )
- df = df.fillna(self.na_rep)
+ df = df.fillna(self.na_rep)
return df.to_dict(orient="index")
diff --git a/pandas/io/json/_json.py b/pandas/io/json/_json.py
index 8f4028c1ead3a..13d74e935f786 100644
--- a/pandas/io/json/_json.py
+++ b/pandas/io/json/_json.py
@@ -16,7 +16,6 @@
final,
overload,
)
-import warnings
import numpy as np
@@ -1173,13 +1172,7 @@ def _try_convert_data(
if all(notna(data)):
return data, False
- with warnings.catch_warnings():
- warnings.filterwarnings(
- "ignore",
- "Downcasting object dtype arrays",
- category=FutureWarning,
- )
- filled = data.fillna(np.nan)
+ filled = data.fillna(np.nan)
return filled, True
diff --git a/pandas/io/stata.py b/pandas/io/stata.py
index fe8b4896d097e..3ec077806d6c4 100644
--- a/pandas/io/stata.py
+++ b/pandas/io/stata.py
@@ -2887,13 +2887,7 @@ def _prepare_data(self) -> np.rec.recarray:
for i, col in enumerate(data):
typ = typlist[i]
if typ <= self._max_string_length:
- with warnings.catch_warnings():
- warnings.filterwarnings(
- "ignore",
- "Downcasting object dtype arrays",
- category=FutureWarning,
- )
- dc = data[col].fillna("")
+ dc = data[col].fillna("")
data[col] = dc.apply(_pad_bytes, args=(typ,))
stype = f"S{typ}"
dtypes[col] = stype
diff --git a/pandas/plotting/_matplotlib/core.py b/pandas/plotting/_matplotlib/core.py
index dbd2743345a38..700136bca8da7 100644
--- a/pandas/plotting/_matplotlib/core.py
+++ b/pandas/plotting/_matplotlib/core.py
@@ -1725,13 +1725,7 @@ def _kind(self) -> Literal["area"]:
def __init__(self, data, **kwargs) -> None:
kwargs.setdefault("stacked", True)
- with warnings.catch_warnings():
- warnings.filterwarnings(
- "ignore",
- "Downcasting object dtype arrays",
- category=FutureWarning,
- )
- data = data.fillna(value=0)
+ data = data.fillna(value=0)
LinePlot.__init__(self, data, **kwargs)
if not self.stacked:
diff --git a/pandas/tests/extension/test_masked.py b/pandas/tests/extension/test_masked.py
index 5481e50de10bb..69ce42203d510 100644
--- a/pandas/tests/extension/test_masked.py
+++ b/pandas/tests/extension/test_masked.py
@@ -14,8 +14,6 @@
"""
-import warnings
-
import numpy as np
import pytest
@@ -215,13 +213,7 @@ def _cast_pointwise_result(self, op_name: str, obj, other, pointwise_result):
if sdtype.kind in "iu":
if op_name in ("__rtruediv__", "__truediv__", "__div__"):
- with warnings.catch_warnings():
- warnings.filterwarnings(
- "ignore",
- "Downcasting object dtype arrays",
- category=FutureWarning,
- )
- filled = expected.fillna(np.nan)
+ filled = expected.fillna(np.nan)
expected = filled.astype("Float64")
else:
# combine method result in 'biggest' (int64) dtype
diff --git a/pandas/tests/frame/indexing/test_where.py b/pandas/tests/frame/indexing/test_where.py
index 8b3596debc0b8..aeffc4835a347 100644
--- a/pandas/tests/frame/indexing/test_where.py
+++ b/pandas/tests/frame/indexing/test_where.py
@@ -96,7 +96,6 @@ def test_where_upcasting(self):
tm.assert_series_equal(result, expected)
- @pytest.mark.filterwarnings("ignore:Downcasting object dtype arrays:FutureWarning")
def test_where_alignment(self, where_frame, float_string_frame):
# aligning
def _check_align(df, cond, other, check_dtypes=True):
@@ -171,7 +170,6 @@ def test_where_invalid(self):
with pytest.raises(ValueError, match=msg):
df.mask(0)
- @pytest.mark.filterwarnings("ignore:Downcasting object dtype arrays:FutureWarning")
def test_where_set(self, where_frame, float_string_frame, mixed_int_frame):
# where inplace
diff --git a/pandas/tests/frame/test_reductions.py b/pandas/tests/frame/test_reductions.py
index 408cb0ab6fc5c..5b9dd9e5b8aa6 100644
--- a/pandas/tests/frame/test_reductions.py
+++ b/pandas/tests/frame/test_reductions.py
@@ -1272,7 +1272,6 @@ def test_any_all_bool_with_na(
):
getattr(bool_frame_with_na, all_boolean_reductions)(axis=axis, bool_only=False)
- @pytest.mark.filterwarnings("ignore:Downcasting object dtype arrays:FutureWarning")
def test_any_all_bool_frame(self, all_boolean_reductions, bool_frame_with_na):
# GH#12863: numpy gives back non-boolean data for object type
# so fill NaNs to compare with pandas behavior
diff --git a/pandas/tests/frame/test_stack_unstack.py b/pandas/tests/frame/test_stack_unstack.py
index 0b6b38340de9e..09235f154b188 100644
--- a/pandas/tests/frame/test_stack_unstack.py
+++ b/pandas/tests/frame/test_stack_unstack.py
@@ -1223,7 +1223,6 @@ def test_stack_preserve_categorical_dtype_values(self, future_stack):
@pytest.mark.filterwarnings(
"ignore:The previous implementation of stack is deprecated"
)
- @pytest.mark.filterwarnings("ignore:Downcasting object dtype arrays:FutureWarning")
@pytest.mark.parametrize(
"index",
[
diff --git a/pandas/tests/groupby/test_numeric_only.py b/pandas/tests/groupby/test_numeric_only.py
index 55a79863f206b..33cdd1883e1b9 100644
--- a/pandas/tests/groupby/test_numeric_only.py
+++ b/pandas/tests/groupby/test_numeric_only.py
@@ -310,7 +310,6 @@ def test_numeric_only(kernel, has_arg, numeric_only, keys):
method(*args, **kwargs)
-@pytest.mark.filterwarnings("ignore:Downcasting object dtype arrays:FutureWarning")
@pytest.mark.parametrize("dtype", [bool, int, float, object])
def test_deprecate_numeric_only_series(dtype, groupby_func, request):
# GH#46560
diff --git a/pandas/tests/series/test_api.py b/pandas/tests/series/test_api.py
index 0ace43f608b5a..7b45a267a4572 100644
--- a/pandas/tests/series/test_api.py
+++ b/pandas/tests/series/test_api.py
@@ -195,7 +195,6 @@ def test_series_datetimelike_attribute_access_invalid(self):
with pytest.raises(AttributeError, match=msg):
ser.weekday
- @pytest.mark.filterwarnings("ignore:Downcasting object dtype arrays:FutureWarning")
@pytest.mark.parametrize(
"kernel, has_numeric_only",
[
diff --git a/pandas/tests/series/test_logical_ops.py b/pandas/tests/series/test_logical_ops.py
index 757f63dd86904..b76b69289b72f 100644
--- a/pandas/tests/series/test_logical_ops.py
+++ b/pandas/tests/series/test_logical_ops.py
@@ -15,7 +15,6 @@
class TestSeriesLogicalOps:
- @pytest.mark.filterwarnings("ignore:Downcasting object dtype arrays:FutureWarning")
@pytest.mark.parametrize("bool_op", [operator.and_, operator.or_, operator.xor])
def test_bool_operators_with_nas(self, bool_op):
# boolean &, |, ^ should work with object arrays and propagate NAs
| - [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/58025 | 2024-03-27T03:12:58Z | 2024-03-27T17:34:03Z | 2024-03-27T17:34:03Z | 2024-03-27T17:41:09Z |
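The pattern being deleted throughout this diff is a local `catch_warnings` block that silences one specific `FutureWarning`. In isolation it looks like this; the warning text is the one from the diff, and the `record=True` harness is added here only to observe the suppression:

```python
import warnings

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    # the filter the cleanup removes: message-prefix regex + category
    warnings.filterwarnings(
        "ignore", "Downcasting object dtype arrays", category=FutureWarning
    )
    warnings.warn("Downcasting object dtype arrays is deprecated", FutureWarning)

print(len(caught))  # 0 -- the warning was suppressed inside the block
```

`filterwarnings` prepends its filter, so the "ignore" entry takes precedence over the earlier `simplefilter("always")`; once the fillna deprecation was enforced, the warning stopped firing and the whole block became dead weight.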
DEPR: mismatched null-likes in tm.assert_foo_equal | diff --git a/doc/source/whatsnew/v3.0.0.rst b/doc/source/whatsnew/v3.0.0.rst
index 4b7b075ceafaf..549d49aaa1853 100644
--- a/doc/source/whatsnew/v3.0.0.rst
+++ b/doc/source/whatsnew/v3.0.0.rst
@@ -211,6 +211,7 @@ Removal of prior version deprecations/changes
- All arguments in :meth:`Index.sort_values` are now keyword only (:issue:`56493`)
- All arguments in :meth:`Series.to_dict` are now keyword only (:issue:`56493`)
- Changed the default value of ``observed`` in :meth:`DataFrame.groupby` and :meth:`Series.groupby` to ``True`` (:issue:`51811`)
+- Enforce deprecation in :func:`testing.assert_series_equal` and :func:`testing.assert_frame_equal` with object dtype and mismatched null-like values, which are now considered not-equal (:issue:`18463`)
- Enforced deprecation disallowing parsing datetimes with mixed time zones unless user passes ``utc=True`` to :func:`to_datetime` (:issue:`57275`)
- Enforced deprecation in :meth:`Series.value_counts` and :meth:`Index.value_counts` with object dtype performing dtype inference on the ``.index`` of the result (:issue:`56161`)
- Enforced deprecation of :meth:`.DataFrameGroupBy.get_group` and :meth:`.SeriesGroupBy.get_group` allowing the ``name`` argument to be a non-tuple when grouping by a list of length 1 (:issue:`54155`)
diff --git a/pandas/_libs/testing.pyx b/pandas/_libs/testing.pyx
index aed0f4b082d4e..cfd31fa610e69 100644
--- a/pandas/_libs/testing.pyx
+++ b/pandas/_libs/testing.pyx
@@ -1,6 +1,5 @@
import cmath
import math
-import warnings
import numpy as np
@@ -18,7 +17,6 @@ from pandas._libs.util cimport (
is_real_number_object,
)
-from pandas.util._exceptions import find_stack_level
from pandas.core.dtypes.missing import array_equivalent
@@ -188,15 +186,7 @@ cpdef assert_almost_equal(a, b,
return True
elif checknull(b):
# GH#18463
- warnings.warn(
- f"Mismatched null-like values {a} and {b} found. In a future "
- "version, pandas equality-testing functions "
- "(e.g. assert_frame_equal) will consider these not-matching "
- "and raise.",
- FutureWarning,
- stacklevel=find_stack_level(),
- )
- return True
+ raise AssertionError(f"Mismatched null-like values {a} != {b}")
raise AssertionError(f"{a} != {b}")
elif checknull(b):
raise AssertionError(f"{a} != {b}")
diff --git a/pandas/tests/util/test_assert_almost_equal.py b/pandas/tests/util/test_assert_almost_equal.py
index 1688e77ccd2d7..bcc2e4e03f367 100644
--- a/pandas/tests/util/test_assert_almost_equal.py
+++ b/pandas/tests/util/test_assert_almost_equal.py
@@ -311,7 +311,7 @@ def test_assert_almost_equal_inf(a, b):
@pytest.mark.parametrize("left", objs)
@pytest.mark.parametrize("right", objs)
-def test_mismatched_na_assert_almost_equal_deprecation(left, right):
+def test_mismatched_na_assert_almost_equal(left, right):
left_arr = np.array([left], dtype=object)
right_arr = np.array([right], dtype=object)
@@ -331,7 +331,7 @@ def test_mismatched_na_assert_almost_equal_deprecation(left, right):
)
else:
- with tm.assert_produces_warning(FutureWarning, match=msg):
+ with pytest.raises(AssertionError, match=msg):
_assert_almost_equal_both(left, right, check_dtype=False)
# TODO: to get the same deprecation in assert_numpy_array_equal we need
@@ -339,11 +339,11 @@ def test_mismatched_na_assert_almost_equal_deprecation(left, right):
# TODO: to get the same deprecation in assert_index_equal we need to
# change/deprecate array_equivalent_object to be stricter, as
# assert_index_equal uses Index.equal which uses array_equivalent.
- with tm.assert_produces_warning(FutureWarning, match=msg):
+ with pytest.raises(AssertionError, match="Series are different"):
tm.assert_series_equal(
Series(left_arr, dtype=object), Series(right_arr, dtype=object)
)
- with tm.assert_produces_warning(FutureWarning, match=msg):
+ with pytest.raises(AssertionError, match="DataFrame.iloc.* are different"):
tm.assert_frame_equal(
DataFrame(left_arr, dtype=object), DataFrame(right_arr, dtype=object)
)
| - [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/58023 | 2024-03-26T23:35:57Z | 2024-03-27T00:39:15Z | 2024-03-27T00:39:15Z | 2024-03-27T02:57:11Z |
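A pure-Python sketch of the enforced behavior; the helper below is a hypothetical stand-in mirroring the `checknull` branch in `testing.pyx`, not pandas code. Two null-like values of different kinds now raise `AssertionError` instead of emitting a `FutureWarning`:

```python
import math

def _checknull(x):
    # hypothetical stand-in for pandas' checknull: None and NaN count as null
    return x is None or (isinstance(x, float) and math.isnan(x))

def assert_scalar_equal(a, b):
    if _checknull(a):
        if a is b or type(a) is type(b):
            return True
        elif _checknull(b):
            # GH#18463 enforced: mismatched null-likes are not equal
            raise AssertionError(f"Mismatched null-like values {a} != {b}")
        raise AssertionError(f"{a} != {b}")
    elif _checknull(b):
        raise AssertionError(f"{a} != {b}")
    return a == b

msg = ""
try:
    assert_scalar_equal(None, float("nan"))
except AssertionError as err:
    msg = str(err)
print(msg)  # Mismatched null-like values None != nan
```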
DEPR: freq keyword in PeriodArray | diff --git a/doc/source/whatsnew/v3.0.0.rst b/doc/source/whatsnew/v3.0.0.rst
index 4b7b075ceafaf..c9c5cdc6ec4df 100644
--- a/doc/source/whatsnew/v3.0.0.rst
+++ b/doc/source/whatsnew/v3.0.0.rst
@@ -206,6 +206,7 @@ Removal of prior version deprecations/changes
- :meth:`SeriesGroupBy.agg` no longer pins the name of the group to the input passed to the provided ``func`` (:issue:`51703`)
- All arguments except ``name`` in :meth:`Index.rename` are now keyword only (:issue:`56493`)
- All arguments except the first ``path``-like argument in IO writers are now keyword only (:issue:`54229`)
+- Removed "freq" keyword from :class:`PeriodArray` constructor, use "dtype" instead (:issue:`52462`)
- Removed the "closed" and "normalize" keywords in :meth:`DatetimeIndex.__new__` (:issue:`52628`)
- Removed the "closed" and "unit" keywords in :meth:`TimedeltaIndex.__new__` (:issue:`52628`, :issue:`55499`)
- All arguments in :meth:`Index.sort_values` are now keyword only (:issue:`56493`)
diff --git a/pandas/core/arrays/period.py b/pandas/core/arrays/period.py
index e73eba710ec39..8baf363b909fb 100644
--- a/pandas/core/arrays/period.py
+++ b/pandas/core/arrays/period.py
@@ -54,7 +54,6 @@
cache_readonly,
doc,
)
-from pandas.util._exceptions import find_stack_level
from pandas.core.dtypes.common import (
ensure_object,
@@ -135,11 +134,6 @@ class PeriodArray(dtl.DatelikeOps, libperiod.PeriodMixin): # type: ignore[misc]
dtype : PeriodDtype, optional
A PeriodDtype instance from which to extract a `freq`. If both
`freq` and `dtype` are specified, then the frequencies must match.
- freq : str or DateOffset
- The `freq` to use for the array. Mostly applicable when `values`
- is an ndarray of integers, when `freq` is required. When `values`
- is a PeriodArray (or box around), it's checked that ``values.freq``
- matches `freq`.
copy : bool, default False
Whether to copy the ordinals before storing.
@@ -224,20 +218,7 @@ def _scalar_type(self) -> type[Period]:
# --------------------------------------------------------------------
# Constructors
- def __init__(
- self, values, dtype: Dtype | None = None, freq=None, copy: bool = False
- ) -> None:
- if freq is not None:
- # GH#52462
- warnings.warn(
- "The 'freq' keyword in the PeriodArray constructor is deprecated "
- "and will be removed in a future version. Pass 'dtype' instead",
- FutureWarning,
- stacklevel=find_stack_level(),
- )
- freq = validate_dtype_freq(dtype, freq)
- dtype = PeriodDtype(freq)
-
+ def __init__(self, values, dtype: Dtype | None = None, copy: bool = False) -> None:
if dtype is not None:
dtype = pandas_dtype(dtype)
if not isinstance(dtype, PeriodDtype):
diff --git a/pandas/tests/arrays/period/test_constructors.py b/pandas/tests/arrays/period/test_constructors.py
index d034162f1b46e..63b0e456c4566 100644
--- a/pandas/tests/arrays/period/test_constructors.py
+++ b/pandas/tests/arrays/period/test_constructors.py
@@ -135,17 +135,6 @@ def test_from_td64nat_sequence_raises():
pd.DataFrame(arr, dtype=dtype)
-def test_freq_deprecated():
- # GH#52462
- data = np.arange(5).astype(np.int64)
- msg = "The 'freq' keyword in the PeriodArray constructor is deprecated"
- with tm.assert_produces_warning(FutureWarning, match=msg):
- res = PeriodArray(data, freq="M")
-
- expected = PeriodArray(data, dtype="period[M]")
- tm.assert_equal(res, expected)
-
-
def test_period_array_from_datetime64():
arr = np.array(
["2020-01-01T00:00:00", "2020-02-02T00:00:00"], dtype="datetime64[ns]"
| - [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/58022 | 2024-03-26T23:24:10Z | 2024-03-27T17:35:02Z | 2024-03-27T17:35:02Z | 2024-03-27T17:40:55Z |
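The user-facing effect of dropping the keyword can be sketched with a hypothetical stand-in class (not the real `PeriodArray`): once `freq` is gone from the signature, passing it fails immediately with a `TypeError` rather than warning and redirecting to `dtype`.

```python
class FakePeriodArray:
    # hypothetical stand-in: post-removal signature, freq only via dtype
    def __init__(self, values, dtype=None, copy=False):
        self.values = list(values)
        self.dtype = dtype

ok = FakePeriodArray(range(5), dtype="period[M]")  # supported spelling

err_name = ""
try:
    FakePeriodArray(range(5), freq="M")  # removed keyword
except TypeError:
    err_name = "TypeError"
print(err_name)  # TypeError
```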
CLN/PERF: Simplify argmin/argmax | diff --git a/pandas/core/arrays/_mixins.py b/pandas/core/arrays/_mixins.py
index 7f4e6f6666382..930ee83aea00b 100644
--- a/pandas/core/arrays/_mixins.py
+++ b/pandas/core/arrays/_mixins.py
@@ -210,7 +210,7 @@ def argmin(self, axis: AxisInt = 0, skipna: bool = True): # type: ignore[overri
# override base class by adding axis keyword
validate_bool_kwarg(skipna, "skipna")
if not skipna and self._hasna:
- raise NotImplementedError
+ raise ValueError("Encountered an NA value with skipna=False")
return nargminmax(self, "argmin", axis=axis)
# Signature of "argmax" incompatible with supertype "ExtensionArray"
@@ -218,7 +218,7 @@ def argmax(self, axis: AxisInt = 0, skipna: bool = True): # type: ignore[overri
# override base class by adding axis keyword
validate_bool_kwarg(skipna, "skipna")
if not skipna and self._hasna:
- raise NotImplementedError
+ raise ValueError("Encountered an NA value with skipna=False")
return nargminmax(self, "argmax", axis=axis)
def unique(self) -> Self:
diff --git a/pandas/core/arrays/base.py b/pandas/core/arrays/base.py
index 76615704f2e33..fdc839225a557 100644
--- a/pandas/core/arrays/base.py
+++ b/pandas/core/arrays/base.py
@@ -885,7 +885,7 @@ def argmin(self, skipna: bool = True) -> int:
# 2. argmin itself : total control over sorting.
validate_bool_kwarg(skipna, "skipna")
if not skipna and self._hasna:
- raise NotImplementedError
+ raise ValueError("Encountered an NA value with skipna=False")
return nargminmax(self, "argmin")
def argmax(self, skipna: bool = True) -> int:
@@ -919,7 +919,7 @@ def argmax(self, skipna: bool = True) -> int:
# 2. argmax itself : total control over sorting.
validate_bool_kwarg(skipna, "skipna")
if not skipna and self._hasna:
- raise NotImplementedError
+ raise ValueError("Encountered an NA value with skipna=False")
return nargminmax(self, "argmax")
def interpolate(
diff --git a/pandas/core/arrays/sparse/array.py b/pandas/core/arrays/sparse/array.py
index bdcb3219a9875..2a96423017bb7 100644
--- a/pandas/core/arrays/sparse/array.py
+++ b/pandas/core/arrays/sparse/array.py
@@ -1623,13 +1623,13 @@ def _argmin_argmax(self, kind: Literal["argmin", "argmax"]) -> int:
def argmax(self, skipna: bool = True) -> int:
validate_bool_kwarg(skipna, "skipna")
if not skipna and self._hasna:
- raise NotImplementedError
+ raise ValueError("Encountered an NA value with skipna=False")
return self._argmin_argmax("argmax")
def argmin(self, skipna: bool = True) -> int:
validate_bool_kwarg(skipna, "skipna")
if not skipna and self._hasna:
- raise NotImplementedError
+ raise ValueError("Encountered an NA value with skipna=False")
return self._argmin_argmax("argmin")
# ------------------------------------------------------------------------
diff --git a/pandas/core/base.py b/pandas/core/base.py
index 263265701691b..0dffc0254c550 100644
--- a/pandas/core/base.py
+++ b/pandas/core/base.py
@@ -735,13 +735,8 @@ def argmax(
nv.validate_minmax_axis(axis)
skipna = nv.validate_argmax_with_skipna(skipna, args, kwargs)
- if skipna and len(delegate) > 0 and isna(delegate).all():
- raise ValueError("Encountered all NA values")
- elif not skipna and isna(delegate).any():
- raise ValueError("Encountered an NA value with skipna=False")
-
if isinstance(delegate, ExtensionArray):
- return delegate.argmax()
+ return delegate.argmax(skipna=skipna)
else:
result = nanops.nanargmax(delegate, skipna=skipna)
# error: Incompatible return value type (got "Union[int, ndarray]", expected
@@ -754,15 +749,10 @@ def argmin(
) -> int:
delegate = self._values
nv.validate_minmax_axis(axis)
- skipna = nv.validate_argmin_with_skipna(skipna, args, kwargs)
-
- if skipna and len(delegate) > 0 and isna(delegate).all():
- raise ValueError("Encountered all NA values")
- elif not skipna and isna(delegate).any():
- raise ValueError("Encountered an NA value with skipna=False")
+ skipna = nv.validate_argmax_with_skipna(skipna, args, kwargs)
if isinstance(delegate, ExtensionArray):
- return delegate.argmin()
+ return delegate.argmin(skipna=skipna)
else:
result = nanops.nanargmin(delegate, skipna=skipna)
# error: Incompatible return value type (got "Union[int, ndarray]", expected
diff --git a/pandas/core/indexes/base.py b/pandas/core/indexes/base.py
index 76dd19a9424f5..c57c7d1fe1232 100644
--- a/pandas/core/indexes/base.py
+++ b/pandas/core/indexes/base.py
@@ -6975,11 +6975,11 @@ def argmin(self, axis=None, skipna: bool = True, *args, **kwargs) -> int:
nv.validate_minmax_axis(axis)
if not self._is_multi and self.hasnans:
- # Take advantage of cache
- if self._isnan.all():
- raise ValueError("Encountered all NA values")
- elif not skipna:
+ if not skipna:
raise ValueError("Encountered an NA value with skipna=False")
+ elif self._isnan.all():
+ raise ValueError("Encountered all NA values")
+
return super().argmin(skipna=skipna)
@Appender(IndexOpsMixin.argmax.__doc__)
@@ -6988,11 +6988,10 @@ def argmax(self, axis=None, skipna: bool = True, *args, **kwargs) -> int:
nv.validate_minmax_axis(axis)
if not self._is_multi and self.hasnans:
- # Take advantage of cache
- if self._isnan.all():
- raise ValueError("Encountered all NA values")
- elif not skipna:
+ if not skipna:
raise ValueError("Encountered an NA value with skipna=False")
+ elif self._isnan.all():
+ raise ValueError("Encountered all NA values")
return super().argmax(skipna=skipna)
def min(self, axis=None, skipna: bool = True, *args, **kwargs):
diff --git a/pandas/core/nanops.py b/pandas/core/nanops.py
index b68337d9e0de9..623d61a9b2ea9 100644
--- a/pandas/core/nanops.py
+++ b/pandas/core/nanops.py
@@ -1439,20 +1439,15 @@ def _maybe_arg_null_out(
return result
if axis is None or not getattr(result, "ndim", False):
- if skipna:
- if mask.all():
- raise ValueError("Encountered all NA values")
- else:
- if mask.any():
- raise ValueError("Encountered an NA value with skipna=False")
+ if skipna and mask.all():
+ raise ValueError("Encountered all NA values")
+ elif not skipna and mask.any():
+ raise ValueError("Encountered an NA value with skipna=False")
else:
- na_mask = mask.all(axis)
- if na_mask.any():
+ if skipna and mask.all(axis).any():
raise ValueError("Encountered all NA values")
- elif not skipna:
- na_mask = mask.any(axis)
- if na_mask.any():
- raise ValueError("Encountered an NA value with skipna=False")
+ elif not skipna and mask.any(axis).any():
+ raise ValueError("Encountered an NA value with skipna=False")
return result
diff --git a/pandas/tests/extension/base/methods.py b/pandas/tests/extension/base/methods.py
index 26638c6160b7b..225a3301b8b8c 100644
--- a/pandas/tests/extension/base/methods.py
+++ b/pandas/tests/extension/base/methods.py
@@ -191,10 +191,10 @@ def test_argmax_argmin_no_skipna_notimplemented(self, data_missing_for_sorting):
# GH#38733
data = data_missing_for_sorting
- with pytest.raises(NotImplementedError, match=""):
+ with pytest.raises(ValueError, match="Encountered an NA value"):
data.argmin(skipna=False)
- with pytest.raises(NotImplementedError, match=""):
+ with pytest.raises(ValueError, match="Encountered an NA value"):
data.argmax(skipna=False)
@pytest.mark.parametrize(
diff --git a/pandas/tests/frame/test_reductions.py b/pandas/tests/frame/test_reductions.py
index 408cb0ab6fc5c..c5c7ffab9b4ae 100644
--- a/pandas/tests/frame/test_reductions.py
+++ b/pandas/tests/frame/test_reductions.py
@@ -1066,7 +1066,7 @@ def test_idxmin(self, float_frame, int_frame, skipna, axis):
frame.iloc[15:20, -2:] = np.nan
for df in [frame, int_frame]:
if (not skipna or axis == 1) and df is not int_frame:
- if axis == 1:
+ if skipna:
msg = "Encountered all NA values"
else:
msg = "Encountered an NA value"
@@ -1116,7 +1116,7 @@ def test_idxmax(self, float_frame, int_frame, skipna, axis):
frame.iloc[15:20, -2:] = np.nan
for df in [frame, int_frame]:
if (skipna is False or axis == 1) and df is frame:
- if axis == 1:
+ if skipna:
msg = "Encountered all NA values"
else:
msg = "Encountered an NA value"
diff --git a/pandas/tests/reductions/test_reductions.py b/pandas/tests/reductions/test_reductions.py
index b10319f5380e7..726ed4ad8a399 100644
--- a/pandas/tests/reductions/test_reductions.py
+++ b/pandas/tests/reductions/test_reductions.py
@@ -171,9 +171,9 @@ def test_argminmax(self):
obj.argmin()
with pytest.raises(ValueError, match="Encountered all NA values"):
obj.argmax()
- with pytest.raises(ValueError, match="Encountered all NA values"):
+ with pytest.raises(ValueError, match="Encountered an NA value"):
obj.argmin(skipna=False)
- with pytest.raises(ValueError, match="Encountered all NA values"):
+ with pytest.raises(ValueError, match="Encountered an NA value"):
obj.argmax(skipna=False)
obj = Index([NaT, datetime(2011, 11, 1), datetime(2011, 11, 2), NaT])
@@ -189,9 +189,9 @@ def test_argminmax(self):
obj.argmin()
with pytest.raises(ValueError, match="Encountered all NA values"):
obj.argmax()
- with pytest.raises(ValueError, match="Encountered all NA values"):
+ with pytest.raises(ValueError, match="Encountered an NA value"):
obj.argmin(skipna=False)
- with pytest.raises(ValueError, match="Encountered all NA values"):
+ with pytest.raises(ValueError, match="Encountered an NA value"):
obj.argmax(skipna=False)
@pytest.mark.parametrize("op, expected_col", [["max", "a"], ["min", "b"]])
@@ -856,7 +856,8 @@ def test_idxmin(self):
# all NaNs
allna = string_series * np.nan
- with pytest.raises(ValueError, match="Encountered all NA values"):
+ msg = "Encountered all NA values"
+ with pytest.raises(ValueError, match=msg):
allna.idxmin()
# datetime64[ns]
@@ -888,7 +889,8 @@ def test_idxmax(self):
# all NaNs
allna = string_series * np.nan
- with pytest.raises(ValueError, match="Encountered all NA values"):
+ msg = "Encountered all NA values"
+ with pytest.raises(ValueError, match=msg):
allna.idxmax()
s = Series(date_range("20130102", periods=6))
@@ -1146,12 +1148,12 @@ def test_idxminmax_object_dtype(self, using_infer_string):
msg = "'>' not supported between instances of 'float' and 'str'"
with pytest.raises(TypeError, match=msg):
ser3.idxmax()
- with pytest.raises(ValueError, match="Encountered an NA value"):
+ with pytest.raises(TypeError, match=msg):
ser3.idxmax(skipna=False)
msg = "'<' not supported between instances of 'float' and 'str'"
with pytest.raises(TypeError, match=msg):
ser3.idxmin()
- with pytest.raises(ValueError, match="Encountered an NA value"):
+ with pytest.raises(TypeError, match=msg):
ser3.idxmin(skipna=False)
def test_idxminmax_object_frame(self):
| - [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
Ref: https://github.com/pandas-dev/pandas/pull/57971#discussion_r1539849666
ASVs against main:
```
| Change | Before [b63ae8c7] | After [46bbd5eb] <cln_argmin_argmax> | Ratio | Benchmark (Parameter) |
|----------|----------------------|----------------------------------------|---------|---------------------------------------------------------------|
| - | 184±1μs | 138±3μs | 0.75 | series_methods.NanOps.time_func('argmax', 1000000, 'int32') |
| - | 14.4±0.1μs | 10.5±0.2μs | 0.73 | series_methods.NanOps.time_func('argmax', 1000, 'float64') |
| - | 1.23±0.06ms | 863±20μs | 0.7 | series_methods.NanOps.time_func('argmax', 1000000, 'float64') |
| - | 77.9±0.6μs | 40.8±0.7μs | 0.52 | series_methods.NanOps.time_func('argmax', 1000000, 'int8') |
| - | 7.42±0.2μs | 2.44±0.06μs | 0.33 | series_methods.NanOps.time_func('argmax', 1000, 'int64') |
| - | 7.18±0.2μs | 2.29±0.02μs | 0.32 | series_methods.NanOps.time_func('argmax', 1000, 'int32') |
| - | 7.25±0.1μs | 2.29±0.03μs | 0.32 | series_methods.NanOps.time_func('argmax', 1000, 'int8') |
```
ASVs against 2.2.x show no perf change. | https://api.github.com/repos/pandas-dev/pandas/pulls/58019 | 2024-03-26T21:47:08Z | 2024-04-01T18:13:11Z | 2024-04-01T18:13:11Z | 2024-04-02T02:19:26Z |
PERF: Allow Index.to_frame to return RangeIndex columns | diff --git a/doc/source/whatsnew/v3.0.0.rst b/doc/source/whatsnew/v3.0.0.rst
index 26dd6f83ad44a..be7c3277759f5 100644
--- a/doc/source/whatsnew/v3.0.0.rst
+++ b/doc/source/whatsnew/v3.0.0.rst
@@ -299,6 +299,7 @@ Performance improvements
- Performance improvement in :meth:`DataFrameGroupBy.ffill`, :meth:`DataFrameGroupBy.bfill`, :meth:`SeriesGroupBy.ffill`, and :meth:`SeriesGroupBy.bfill` (:issue:`56902`)
- Performance improvement in :meth:`Index.join` by propagating cached attributes in cases where the result matches one of the inputs (:issue:`57023`)
- Performance improvement in :meth:`Index.take` when ``indices`` is a full range indexer from zero to length of index (:issue:`56806`)
+- Performance improvement in :meth:`Index.to_frame` returning :class:`RangeIndex` columns when possible (:issue:`58018`)
- Performance improvement in :meth:`MultiIndex.equals` for equal length indexes (:issue:`56990`)
- Performance improvement in :meth:`RangeIndex.__getitem__` with a boolean mask or integers returning a :class:`RangeIndex` instead of a :class:`Index` when possible. (:issue:`57588`)
- Performance improvement in :meth:`RangeIndex.append` when appending the same index (:issue:`57252`)
diff --git a/pandas/core/indexes/base.py b/pandas/core/indexes/base.py
index 76dd19a9424f5..e510d487ac954 100644
--- a/pandas/core/indexes/base.py
+++ b/pandas/core/indexes/base.py
@@ -1374,16 +1374,19 @@ def _format_attrs(self) -> list[tuple[str_t, str_t | int | bool | None]]:
return attrs
@final
- def _get_level_names(self) -> Hashable | Sequence[Hashable]:
+ def _get_level_names(self) -> range | Sequence[Hashable]:
"""
Return a name or list of names with None replaced by the level number.
"""
if self._is_multi:
- return [
- level if name is None else name for level, name in enumerate(self.names)
- ]
+ return maybe_sequence_to_range(
+ [
+ level if name is None else name
+ for level, name in enumerate(self.names)
+ ]
+ )
else:
- return 0 if self.name is None else self.name
+ return range(1) if self.name is None else [self.name]
@final
def _mpl_repr(self) -> np.ndarray:
@@ -1630,8 +1633,11 @@ def to_frame(
from pandas import DataFrame
if name is lib.no_default:
- name = self._get_level_names()
- result = DataFrame({name: self}, copy=False)
+ result_name = self._get_level_names()
+ else:
+ result_name = Index([name]) # type: ignore[assignment]
+ result = DataFrame(self, copy=False)
+ result.columns = result_name
if index:
result.index = self
diff --git a/pandas/tests/indexes/multi/test_conversion.py b/pandas/tests/indexes/multi/test_conversion.py
index 3c2ca045d6f99..f6b10c989326f 100644
--- a/pandas/tests/indexes/multi/test_conversion.py
+++ b/pandas/tests/indexes/multi/test_conversion.py
@@ -5,6 +5,7 @@
from pandas import (
DataFrame,
MultiIndex,
+ RangeIndex,
)
import pandas._testing as tm
@@ -148,6 +149,13 @@ def test_to_frame_duplicate_labels():
tm.assert_frame_equal(result, expected)
+def test_to_frame_column_rangeindex():
+ mi = MultiIndex.from_arrays([[1, 2], ["a", "b"]])
+ result = mi.to_frame().columns
+ expected = RangeIndex(2)
+ tm.assert_index_equal(result, expected, exact=True)
+
+
def test_to_flat_index(idx):
expected = pd.Index(
(
diff --git a/pandas/tests/indexes/test_common.py b/pandas/tests/indexes/test_common.py
index eb0010066a7f6..a2dee61295c74 100644
--- a/pandas/tests/indexes/test_common.py
+++ b/pandas/tests/indexes/test_common.py
@@ -508,3 +508,17 @@ def test_compare_read_only_array():
idx = pd.Index(arr)
result = idx > 69
assert result.dtype == bool
+
+
+def test_to_frame_column_rangeindex():
+ idx = pd.Index([1])
+ result = idx.to_frame().columns
+ expected = RangeIndex(1)
+ tm.assert_index_equal(result, expected, exact=True)
+
+
+def test_to_frame_name_tuple_multiindex():
+ idx = pd.Index([1])
+ result = idx.to_frame(name=(1, 2))
+ expected = pd.DataFrame([1], columns=MultiIndex.from_arrays([[1], [2]]), index=idx)
+ tm.assert_frame_equal(result, expected)
| Discovered in https://github.com/pandas-dev/pandas/pull/57441 | https://api.github.com/repos/pandas-dev/pandas/pulls/58018 | 2024-03-26T19:29:43Z | 2024-03-28T03:01:12Z | 2024-03-28T03:01:12Z | 2024-03-28T17:50:12Z |
Docs: Add note about exception for integer slices with float indices | diff --git a/doc/source/user_guide/indexing.rst b/doc/source/user_guide/indexing.rst
index 24cdbad41fe60..fd843ca68a60b 100644
--- a/doc/source/user_guide/indexing.rst
+++ b/doc/source/user_guide/indexing.rst
@@ -262,6 +262,10 @@ The most robust and consistent way of slicing ranges along arbitrary axes is
described in the :ref:`Selection by Position <indexing.integer>` section
detailing the ``.iloc`` method. For now, we explain the semantics of slicing using the ``[]`` operator.
+ .. note::
+
+ When the :class:`Series` has float indices, slicing will select by position.
+
With Series, the syntax works exactly as with an ndarray, returning a slice of
the values and the corresponding labels:
| - [x] closes #57277
- [N/A] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [N/A] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [N/A] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/58017 | 2024-03-26T19:22:07Z | 2024-04-02T00:37:53Z | 2024-04-02T00:37:53Z | 2024-04-02T19:25:43Z |
PERF: Allow np.integer Series/Index to convert to RangeIndex | diff --git a/pandas/core/indexes/base.py b/pandas/core/indexes/base.py
index 76dd19a9424f5..7c5b88258e6bb 100644
--- a/pandas/core/indexes/base.py
+++ b/pandas/core/indexes/base.py
@@ -7157,17 +7157,22 @@ def maybe_sequence_to_range(sequence) -> Any | range:
-------
Any : input or range
"""
- if isinstance(sequence, (ABCSeries, Index, range, ExtensionArray)):
+ if isinstance(sequence, (range, ExtensionArray)):
return sequence
elif len(sequence) == 1 or lib.infer_dtype(sequence, skipna=False) != "integer":
return sequence
- elif len(sequence) == 0:
+ elif isinstance(sequence, (ABCSeries, Index)) and not (
+ isinstance(sequence.dtype, np.dtype) and sequence.dtype.kind == "i"
+ ):
+ return sequence
+ if len(sequence) == 0:
return range(0)
- diff = sequence[1] - sequence[0]
+ np_sequence = np.asarray(sequence, dtype=np.int64)
+ diff = np_sequence[1] - np_sequence[0]
if diff == 0:
return sequence
- elif len(sequence) == 2 or lib.is_sequence_range(np.asarray(sequence), diff):
- return range(sequence[0], sequence[-1] + diff, diff)
+ elif len(sequence) == 2 or lib.is_sequence_range(np_sequence, diff):
+ return range(np_sequence[0], np_sequence[-1] + diff, diff)
else:
return sequence
diff --git a/pandas/tests/frame/methods/test_set_index.py b/pandas/tests/frame/methods/test_set_index.py
index 4fbc84cd1a66c..a1968c6c694d5 100644
--- a/pandas/tests/frame/methods/test_set_index.py
+++ b/pandas/tests/frame/methods/test_set_index.py
@@ -148,7 +148,7 @@ def test_set_index_dst(self):
def test_set_index(self, float_string_frame):
df = float_string_frame
- idx = Index(np.arange(len(df))[::-1])
+ idx = Index(np.arange(len(df) - 1, -1, -1, dtype=np.int64))
df = df.set_index(idx)
tm.assert_index_equal(df.index, idx)
diff --git a/pandas/tests/io/test_stata.py b/pandas/tests/io/test_stata.py
index 9078ca865042d..0cc8018ea6213 100644
--- a/pandas/tests/io/test_stata.py
+++ b/pandas/tests/io/test_stata.py
@@ -513,7 +513,6 @@ def test_read_write_reread_dta14(self, file, parsed_114, version, datapath):
written_and_read_again = self.read_dta(path)
expected = parsed_114.copy()
- expected.index = expected.index.astype(np.int32)
tm.assert_frame_equal(written_and_read_again.set_index("index"), expected)
@pytest.mark.parametrize(
@@ -576,7 +575,6 @@ def test_numeric_column_names(self):
written_and_read_again.columns = map(convert_col_name, columns)
expected = original
- expected.index = expected.index.astype(np.int32)
tm.assert_frame_equal(expected, written_and_read_again)
@pytest.mark.parametrize("version", [114, 117, 118, 119, None])
@@ -594,7 +592,6 @@ def test_nan_to_missing_value(self, version):
written_and_read_again = written_and_read_again.set_index("index")
expected = original
- expected.index = expected.index.astype(np.int32)
tm.assert_frame_equal(written_and_read_again, expected)
def test_no_index(self):
@@ -617,7 +614,6 @@ def test_string_no_dates(self):
written_and_read_again = self.read_dta(path)
expected = original
- expected.index = expected.index.astype(np.int32)
tm.assert_frame_equal(written_and_read_again.set_index("index"), expected)
def test_large_value_conversion(self):
@@ -637,7 +633,6 @@ def test_large_value_conversion(self):
modified["s1"] = Series(modified["s1"], dtype=np.int16)
modified["s2"] = Series(modified["s2"], dtype=np.int32)
modified["s3"] = Series(modified["s3"], dtype=np.float64)
- modified.index = original.index.astype(np.int32)
tm.assert_frame_equal(written_and_read_again.set_index("index"), modified)
def test_dates_invalid_column(self):
@@ -713,7 +708,7 @@ def test_write_missing_strings(self):
expected = DataFrame(
[["1"], [""]],
- index=pd.Index([0, 1], dtype=np.int32, name="index"),
+ index=pd.RangeIndex(2, name="index"),
columns=["foo"],
)
@@ -746,7 +741,6 @@ def test_bool_uint(self, byteorder, version):
written_and_read_again = written_and_read_again.set_index("index")
expected = original
- expected.index = expected.index.astype(np.int32)
expected_types = (
np.int8,
np.int8,
@@ -1030,7 +1024,7 @@ def test_categorical_writing(self, version):
res = written_and_read_again.set_index("index")
expected = original
- expected.index = expected.index.set_names("index").astype(np.int32)
+ expected.index = expected.index.set_names("index")
expected["incompletely_labeled"] = expected["incompletely_labeled"].apply(str)
expected["unlabeled"] = expected["unlabeled"].apply(str)
@@ -1094,7 +1088,6 @@ def test_categorical_with_stata_missing_values(self, version):
new_cats = cat.remove_unused_categories().categories
cat = cat.set_categories(new_cats, ordered=True)
expected[col] = cat
- expected.index = expected.index.astype(np.int32)
tm.assert_frame_equal(res, expected)
@pytest.mark.parametrize("file", ["stata10_115", "stata10_117"])
@@ -1544,7 +1537,6 @@ def test_out_of_range_float(self):
original["ColumnTooBig"] = original["ColumnTooBig"].astype(np.float64)
expected = original
- expected.index = expected.index.astype(np.int32)
tm.assert_frame_equal(reread.set_index("index"), expected)
@pytest.mark.parametrize("infval", [np.inf, -np.inf])
@@ -1669,7 +1661,6 @@ def test_writer_117(self):
original["int32"] = original["int32"].astype(np.int32)
original["float32"] = Series(original["float32"], dtype=np.float32)
original.index.name = "index"
- original.index = original.index.astype(np.int32)
copy = original.copy()
with tm.ensure_clean() as path:
original.to_stata(
@@ -1962,7 +1953,7 @@ def test_read_write_ea_dtypes(self, dtype_backend):
# stata stores with ms unit, so unit does not round-trip exactly
"e": pd.date_range("2020-12-31", periods=3, freq="D", unit="ms"),
},
- index=pd.Index([0, 1, 2], name="index", dtype=np.int32),
+ index=pd.RangeIndex(range(3), name="index"),
)
tm.assert_frame_equal(written_and_read_again.set_index("index"), expected)
@@ -2049,7 +2040,6 @@ def test_compression(compression, version, use_dict, infer, compression_to_exten
reread = read_stata(fp, index_col="index")
expected = df
- expected.index = expected.index.astype(np.int32)
tm.assert_frame_equal(reread, expected)
@@ -2075,7 +2065,6 @@ def test_compression_dict(method, file_ext):
reread = read_stata(fp, index_col="index")
expected = df
- expected.index = expected.index.astype(np.int32)
tm.assert_frame_equal(reread, expected)
@@ -2085,7 +2074,6 @@ def test_chunked_categorical(version):
df.index.name = "index"
expected = df.copy()
- expected.index = expected.index.astype(np.int32)
with tm.ensure_clean() as path:
df.to_stata(path, version=version)
@@ -2094,7 +2082,9 @@ def test_chunked_categorical(version):
block = block.set_index("index")
assert "cats" in block
tm.assert_series_equal(
- block.cats, expected.cats.iloc[2 * i : 2 * (i + 1)]
+ block.cats,
+ expected.cats.iloc[2 * i : 2 * (i + 1)],
+ check_index_type=len(block) > 1,
)
diff --git a/pandas/tests/reshape/merge/test_merge.py b/pandas/tests/reshape/merge/test_merge.py
index 1cd52ab1ae8b4..1a764cb505ead 100644
--- a/pandas/tests/reshape/merge/test_merge.py
+++ b/pandas/tests/reshape/merge/test_merge.py
@@ -2192,23 +2192,28 @@ def test_merge_on_indexes(self, how, sort, expected):
@pytest.mark.parametrize(
"index",
- [Index([1, 2], dtype=dtyp, name="index_col") for dtyp in tm.ALL_REAL_NUMPY_DTYPES]
+ [
+ Index([1, 2, 4], dtype=dtyp, name="index_col")
+ for dtyp in tm.ALL_REAL_NUMPY_DTYPES
+ ]
+ [
- CategoricalIndex(["A", "B"], categories=["A", "B"], name="index_col"),
- RangeIndex(start=0, stop=2, name="index_col"),
- DatetimeIndex(["2018-01-01", "2018-01-02"], name="index_col"),
+ CategoricalIndex(["A", "B", "C"], categories=["A", "B", "C"], name="index_col"),
+ RangeIndex(start=0, stop=3, name="index_col"),
+ DatetimeIndex(["2018-01-01", "2018-01-02", "2018-01-03"], name="index_col"),
],
ids=lambda x: f"{type(x).__name__}[{x.dtype}]",
)
def test_merge_index_types(index):
# gh-20777
# assert key access is consistent across index types
- left = DataFrame({"left_data": [1, 2]}, index=index)
- right = DataFrame({"right_data": [1.0, 2.0]}, index=index)
+ left = DataFrame({"left_data": [1, 2, 3]}, index=index)
+ right = DataFrame({"right_data": [1.0, 2.0, 3.0]}, index=index)
result = left.merge(right, on=["index_col"])
- expected = DataFrame({"left_data": [1, 2], "right_data": [1.0, 2.0]}, index=index)
+ expected = DataFrame(
+ {"left_data": [1, 2, 3], "right_data": [1.0, 2.0, 3.0]}, index=index
+ )
tm.assert_frame_equal(result, expected)
| Discovered in https://github.com/pandas-dev/pandas/pull/57441 | https://api.github.com/repos/pandas-dev/pandas/pulls/58016 | 2024-03-26T18:09:17Z | 2024-04-01T18:21:15Z | 2024-04-01T18:21:15Z | 2024-04-04T01:43:59Z |
Add tests for transform sum with series | diff --git a/pandas/tests/groupby/transform/test_transform.py b/pandas/tests/groupby/transform/test_transform.py
index 46f6367fbb3ed..ed7aa9d27e452 100644
--- a/pandas/tests/groupby/transform/test_transform.py
+++ b/pandas/tests/groupby/transform/test_transform.py
@@ -1490,3 +1490,47 @@ def test_idxmin_idxmax_transform_args(how, skipna, numeric_only):
msg = f"DataFrameGroupBy.{how} with skipna=False encountered an NA value"
with pytest.raises(ValueError, match=msg):
gb.transform(how, skipna, numeric_only)
+
+
+def test_transform_sum_one_column_no_matching_labels():
+ df = DataFrame({"X": [1.0]})
+ series = Series(["Y"])
+ result = df.groupby(series, as_index=False).transform("sum")
+ expected = DataFrame({"X": [1.0]})
+ tm.assert_frame_equal(result, expected)
+
+
+def test_transform_sum_no_matching_labels():
+ df = DataFrame({"X": [1.0, -93204, 4935]})
+ series = Series(["A", "B", "C"])
+
+ result = df.groupby(series, as_index=False).transform("sum")
+ expected = DataFrame({"X": [1.0, -93204, 4935]})
+ tm.assert_frame_equal(result, expected)
+
+
+def test_transform_sum_one_column_with_matching_labels():
+ df = DataFrame({"X": [1.0, -93204, 4935]})
+ series = Series(["A", "B", "A"])
+
+ result = df.groupby(series, as_index=False).transform("sum")
+ expected = DataFrame({"X": [4936.0, -93204, 4936.0]})
+ tm.assert_frame_equal(result, expected)
+
+
+def test_transform_sum_one_column_with_missing_labels():
+ df = DataFrame({"X": [1.0, -93204, 4935]})
+ series = Series(["A", "C"])
+
+ result = df.groupby(series, as_index=False).transform("sum")
+ expected = DataFrame({"X": [1.0, -93204, np.nan]})
+ tm.assert_frame_equal(result, expected)
+
+
+def test_transform_sum_one_column_with_matching_labels_and_missing_labels():
+ df = DataFrame({"X": [1.0, -93204, 4935]})
+ series = Series(["A", "A"])
+
+ result = df.groupby(series, as_index=False).transform("sum")
+ expected = DataFrame({"X": [-93203.0, -93203.0, np.nan]})
+ tm.assert_frame_equal(result, expected)
| - [x] closes #37093
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [x] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [x] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/58012 | 2024-03-26T09:56:19Z | 2024-03-28T17:58:52Z | 2024-03-28T17:58:52Z | 2024-03-28T17:59:04Z |
DEPR: enforce deprecation of non-standard argument to take | diff --git a/doc/source/whatsnew/v3.0.0.rst b/doc/source/whatsnew/v3.0.0.rst
index 549d49aaa1853..547055082ced3 100644
--- a/doc/source/whatsnew/v3.0.0.rst
+++ b/doc/source/whatsnew/v3.0.0.rst
@@ -218,6 +218,7 @@ Removal of prior version deprecations/changes
- Enforced deprecation of :meth:`Series.interpolate` and :meth:`DataFrame.interpolate` for object-dtype (:issue:`57820`)
- Enforced deprecation of :meth:`offsets.Tick.delta`, use ``pd.Timedelta(obj)`` instead (:issue:`55498`)
- Enforced deprecation of ``axis=None`` acting the same as ``axis=0`` in the DataFrame reductions ``sum``, ``prod``, ``std``, ``var``, and ``sem``, passing ``axis=None`` will now reduce over both axes; this is particularly the case when doing e.g. ``numpy.sum(df)`` (:issue:`21597`)
+- Enforced deprecation of non-standard (i.e. not ``np.ndarray``, :class:`ExtensionArray`, :class:`Index`, or :class:`Series`) argument to :func:`api.extensions.take` (:issue:`52981`)
- Enforced deprecation of parsing system timezone strings to ``tzlocal``, which depended on system timezone, pass the 'tz' keyword instead (:issue:`50791`)
- Enforced deprecation of passing a dictionary to :meth:`SeriesGroupBy.agg` (:issue:`52268`)
- Enforced deprecation of string ``AS`` denoting frequency in :class:`YearBegin` and strings ``AS-DEC``, ``AS-JAN``, etc. denoting annual frequencies with various fiscal year starts (:issue:`57793`)
diff --git a/pandas/core/algorithms.py b/pandas/core/algorithms.py
index 8620aafd97528..6a6096567c65d 100644
--- a/pandas/core/algorithms.py
+++ b/pandas/core/algorithms.py
@@ -43,7 +43,6 @@
ensure_float64,
ensure_object,
ensure_platform_int,
- is_array_like,
is_bool_dtype,
is_complex_dtype,
is_dict_like,
@@ -1163,28 +1162,30 @@ def take(
"""
if not isinstance(arr, (np.ndarray, ABCExtensionArray, ABCIndex, ABCSeries)):
# GH#52981
- warnings.warn(
- "pd.api.extensions.take accepting non-standard inputs is deprecated "
- "and will raise in a future version. Pass either a numpy.ndarray, "
- "ExtensionArray, Index, or Series instead.",
- FutureWarning,
- stacklevel=find_stack_level(),
+ raise TypeError(
+ "pd.api.extensions.take requires a numpy.ndarray, "
+ f"ExtensionArray, Index, or Series, got {type(arr).__name__}."
)
- if not is_array_like(arr):
- arr = np.asarray(arr)
-
indices = ensure_platform_int(indices)
if allow_fill:
# Pandas style, -1 means NA
validate_indices(indices, arr.shape[axis])
+ # error: Argument 1 to "take_nd" has incompatible type
+ # "ndarray[Any, Any] | ExtensionArray | Index | Series"; expected
+ # "ndarray[Any, Any]"
result = take_nd(
- arr, indices, axis=axis, allow_fill=True, fill_value=fill_value
+ arr, # type: ignore[arg-type]
+ indices,
+ axis=axis,
+ allow_fill=True,
+ fill_value=fill_value,
)
else:
# NumPy style
- result = arr.take(indices, axis=axis)
+ # error: Unexpected keyword argument "axis" for "take" of "ExtensionArray"
+ result = arr.take(indices, axis=axis) # type: ignore[call-arg,assignment]
return result
diff --git a/pandas/tests/test_take.py b/pandas/tests/test_take.py
index 4f34ab34c35f0..ce2e4e0f6cec5 100644
--- a/pandas/tests/test_take.py
+++ b/pandas/tests/test_take.py
@@ -299,9 +299,11 @@ def test_take_na_empty(self):
tm.assert_numpy_array_equal(result, expected)
def test_take_coerces_list(self):
+ # GH#52981 coercing is deprecated, disabled in 3.0
arr = [1, 2, 3]
- msg = "take accepting non-standard inputs is deprecated"
- with tm.assert_produces_warning(FutureWarning, match=msg):
- result = algos.take(arr, [0, 0])
- expected = np.array([1, 1])
- tm.assert_numpy_array_equal(result, expected)
+ msg = (
+ "pd.api.extensions.take requires a numpy.ndarray, ExtensionArray, "
+ "Index, or Series, got list"
+ )
+ with pytest.raises(TypeError, match=msg):
+ algos.take(arr, [0, 0])
| - [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/58011 | 2024-03-26T02:37:10Z | 2024-03-28T17:57:15Z | 2024-03-28T17:57:15Z | 2024-03-28T20:08:57Z |
DEPS: bump adbc-driver-postgresql min version to 0.10.0 | diff --git a/ci/deps/actions-310.yaml b/ci/deps/actions-310.yaml
index 85ee5230b31be..1b68fa4fc22e6 100644
--- a/ci/deps/actions-310.yaml
+++ b/ci/deps/actions-310.yaml
@@ -57,7 +57,7 @@ dependencies:
- zstandard>=0.19.0
- pip:
- - adbc-driver-postgresql>=0.8.0
+ - adbc-driver-postgresql>=0.10.0
- adbc-driver-sqlite>=0.8.0
- tzdata>=2022.7
- pytest-localserver>=0.7.1
diff --git a/ci/deps/actions-311-downstream_compat.yaml b/ci/deps/actions-311-downstream_compat.yaml
index efd790d77afbb..893e585cb890e 100644
--- a/ci/deps/actions-311-downstream_compat.yaml
+++ b/ci/deps/actions-311-downstream_compat.yaml
@@ -72,6 +72,6 @@ dependencies:
- pyyaml
- py
- pip:
- - adbc-driver-postgresql>=0.8.0
+ - adbc-driver-postgresql>=0.10.0
- adbc-driver-sqlite>=0.8.0
- tzdata>=2022.7
diff --git a/ci/deps/actions-311.yaml b/ci/deps/actions-311.yaml
index 535c260582eec..20124b24a6b9a 100644
--- a/ci/deps/actions-311.yaml
+++ b/ci/deps/actions-311.yaml
@@ -57,6 +57,6 @@ dependencies:
- zstandard>=0.19.0
- pip:
- - adbc-driver-postgresql>=0.8.0
+ - adbc-driver-postgresql>=0.10.0
- adbc-driver-sqlite>=0.8.0
- pytest-localserver>=0.7.1
diff --git a/ci/deps/actions-312.yaml b/ci/deps/actions-312.yaml
index 8b3f19f55e4b6..eb70816c241bb 100644
--- a/ci/deps/actions-312.yaml
+++ b/ci/deps/actions-312.yaml
@@ -57,7 +57,7 @@ dependencies:
- zstandard>=0.19.0
- pip:
- - adbc-driver-postgresql>=0.8.0
+ - adbc-driver-postgresql>=0.10.0
- adbc-driver-sqlite>=0.8.0
- tzdata>=2022.7
- pytest-localserver>=0.7.1
diff --git a/ci/deps/actions-39-minimum_versions.yaml b/ci/deps/actions-39-minimum_versions.yaml
index 94cb21d1621b6..4399aa748af5c 100644
--- a/ci/deps/actions-39-minimum_versions.yaml
+++ b/ci/deps/actions-39-minimum_versions.yaml
@@ -60,6 +60,6 @@ dependencies:
- zstandard=0.19.0
- pip:
- - adbc-driver-postgresql==0.8.0
+ - adbc-driver-postgresql==0.10.0
- adbc-driver-sqlite==0.8.0
- tzdata==2022.7
diff --git a/ci/deps/actions-39.yaml b/ci/deps/actions-39.yaml
index 4cc9b1fbe2491..92df608f17c6c 100644
--- a/ci/deps/actions-39.yaml
+++ b/ci/deps/actions-39.yaml
@@ -57,7 +57,7 @@ dependencies:
- zstandard>=0.19.0
- pip:
- - adbc-driver-postgresql>=0.8.0
+ - adbc-driver-postgresql>=0.10.0
- adbc-driver-sqlite>=0.8.0
- tzdata>=2022.7
- pytest-localserver>=0.7.1
diff --git a/doc/source/getting_started/install.rst b/doc/source/getting_started/install.rst
index 77e273d8c81fe..11c16dd9dabcc 100644
--- a/doc/source/getting_started/install.rst
+++ b/doc/source/getting_started/install.rst
@@ -346,7 +346,7 @@ SQLAlchemy 2.0.0 postgresql, SQL support for dat
sql-other
psycopg2 2.9.6 postgresql PostgreSQL engine for sqlalchemy
pymysql 1.0.2 mysql MySQL engine for sqlalchemy
-adbc-driver-postgresql 0.8.0 postgresql ADBC Driver for PostgreSQL
+adbc-driver-postgresql 0.10.0 postgresql ADBC Driver for PostgreSQL
adbc-driver-sqlite 0.8.0 sql-other ADBC Driver for SQLite
========================= ================== =============== =============================================================
diff --git a/doc/source/whatsnew/v3.0.0.rst b/doc/source/whatsnew/v3.0.0.rst
index a398b93b60018..b538b5bef4eb0 100644
--- a/doc/source/whatsnew/v3.0.0.rst
+++ b/doc/source/whatsnew/v3.0.0.rst
@@ -129,11 +129,13 @@ For `optional libraries <https://pandas.pydata.org/docs/getting_started/install.
The following table lists the lowest version per library that is currently being tested throughout the development of pandas.
Optional libraries below the lowest tested version may still work, but are not considered supported.
-+-----------------+---------------------+
-| Package | New Minimum Version |
-+=================+=====================+
-| fastparquet | 2023.04.0 |
-+-----------------+---------------------+
++------------------------+---------------------+
+| Package | New Minimum Version |
++========================+=====================+
+| fastparquet | 2023.04.0 |
++------------------------+---------------------+
+| adbc-driver-postgresql | 0.10.0 |
++------------------------+---------------------+
See :ref:`install.dependencies` and :ref:`install.optional_dependencies` for more.
diff --git a/environment.yml b/environment.yml
index e7bf2556d27f8..020154e650c5b 100644
--- a/environment.yml
+++ b/environment.yml
@@ -116,7 +116,7 @@ dependencies:
- pygments # Code highlighting
- pip:
- - adbc-driver-postgresql>=0.8.0
+ - adbc-driver-postgresql>=0.10.0
- adbc-driver-sqlite>=0.8.0
- typing_extensions; python_version<"3.11"
- tzdata>=2022.7
diff --git a/pandas/compat/_optional.py b/pandas/compat/_optional.py
index f9273ba4bbc62..d6e01a168fba1 100644
--- a/pandas/compat/_optional.py
+++ b/pandas/compat/_optional.py
@@ -20,7 +20,7 @@
# deps_minimum.toml & pyproject.toml when updating versions!
VERSIONS = {
- "adbc-driver-postgresql": "0.8.0",
+ "adbc-driver-postgresql": "0.10.0",
"adbc-driver-sqlite": "0.8.0",
"bs4": "4.11.2",
"blosc": "1.21.3",
diff --git a/pyproject.toml b/pyproject.toml
index f96fbee4a5818..84d6eca552b54 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -76,16 +76,16 @@ hdf5 = [# blosc only available on conda (https://github.com/Blosc/python-blosc/i
#'blosc>=1.20.1',
'tables>=3.8.0']
spss = ['pyreadstat>=1.2.0']
-postgresql = ['SQLAlchemy>=2.0.0', 'psycopg2>=2.9.6', 'adbc-driver-postgresql>=0.8.0']
+postgresql = ['SQLAlchemy>=2.0.0', 'psycopg2>=2.9.6', 'adbc-driver-postgresql>=0.10.0']
mysql = ['SQLAlchemy>=2.0.0', 'pymysql>=1.0.2']
-sql-other = ['SQLAlchemy>=2.0.0', 'adbc-driver-postgresql>=0.8.0', 'adbc-driver-sqlite>=0.8.0']
+sql-other = ['SQLAlchemy>=2.0.0', 'adbc-driver-postgresql>=0.10.0', 'adbc-driver-sqlite>=0.8.0']
html = ['beautifulsoup4>=4.11.2', 'html5lib>=1.1', 'lxml>=4.9.2']
xml = ['lxml>=4.9.2']
plot = ['matplotlib>=3.6.3']
output-formatting = ['jinja2>=3.1.2', 'tabulate>=0.9.0']
clipboard = ['PyQt5>=5.15.9', 'qtpy>=2.3.0']
compression = ['zstandard>=0.19.0']
-all = ['adbc-driver-postgresql>=0.8.0',
+all = ['adbc-driver-postgresql>=0.10.0',
'adbc-driver-sqlite>=0.8.0',
'beautifulsoup4>=4.11.2',
# blosc only available on conda (https://github.com/Blosc/python-blosc/issues/297)
diff --git a/requirements-dev.txt b/requirements-dev.txt
index 0cc064d2660bb..0ea0eba369158 100644
--- a/requirements-dev.txt
+++ b/requirements-dev.txt
@@ -84,7 +84,7 @@ feedparser
pyyaml
requests
pygments
-adbc-driver-postgresql>=0.8.0
+adbc-driver-postgresql>=0.10.0
adbc-driver-sqlite>=0.8.0
typing_extensions; python_version<"3.11"
tzdata>=2022.7
| Broken off from #55901 | https://api.github.com/repos/pandas-dev/pandas/pulls/58010 | 2024-03-26T02:20:17Z | 2024-03-26T17:04:29Z | 2024-03-26T17:04:29Z | 2024-03-26T17:07:35Z |
DEPR: value_counts doing dtype inference on result.index | diff --git a/doc/source/whatsnew/v3.0.0.rst b/doc/source/whatsnew/v3.0.0.rst
index a398b93b60018..4fd2f46fc71fd 100644
--- a/doc/source/whatsnew/v3.0.0.rst
+++ b/doc/source/whatsnew/v3.0.0.rst
@@ -199,6 +199,7 @@ Removal of prior version deprecations/changes
- All arguments in :meth:`Series.to_dict` are now keyword only (:issue:`56493`)
- Changed the default value of ``observed`` in :meth:`DataFrame.groupby` and :meth:`Series.groupby` to ``True`` (:issue:`51811`)
- Enforced deprecation disallowing parsing datetimes with mixed time zones unless user passes ``utc=True`` to :func:`to_datetime` (:issue:`57275`)
+- Enforced deprecation in :meth:`Series.value_counts` and :meth:`Index.value_counts` with object dtype performing dtype inference on the ``.index`` of the result (:issue:`56161`)
- Enforced deprecation of :meth:`.DataFrameGroupBy.get_group` and :meth:`.SeriesGroupBy.get_group` allowing the ``name`` argument to be a non-tuple when grouping by a list of length 1 (:issue:`54155`)
- Enforced deprecation of :meth:`Series.interpolate` and :meth:`DataFrame.interpolate` for object-dtype (:issue:`57820`)
- Enforced deprecation of ``axis=None`` acting the same as ``axis=0`` in the DataFrame reductions ``sum``, ``prod``, ``std``, ``var``, and ``sem``, passing ``axis=None`` will now reduce over both axes; this is particularly the case when doing e.g. ``numpy.sum(df)`` (:issue:`21597`)
diff --git a/pandas/core/algorithms.py b/pandas/core/algorithms.py
index 344314d829c19..8620aafd97528 100644
--- a/pandas/core/algorithms.py
+++ b/pandas/core/algorithms.py
@@ -892,26 +892,9 @@ def value_counts_internal(
if keys.dtype == np.float16:
keys = keys.astype(np.float32)
- # For backwards compatibility, we let Index do its normal type
- # inference, _except_ for if if infers from object to bool.
- idx = Index(keys)
- if idx.dtype == bool and keys.dtype == object:
- idx = idx.astype(object)
- elif (
- idx.dtype != keys.dtype # noqa: PLR1714 # # pylint: disable=R1714
- and idx.dtype != "string[pyarrow_numpy]"
- ):
- warnings.warn(
- # GH#56161
- "The behavior of value_counts with object-dtype is deprecated. "
- "In a future version, this will *not* perform dtype inference "
- "on the resulting index. To retain the old behavior, use "
- "`result.index = result.index.infer_objects()`",
- FutureWarning,
- stacklevel=find_stack_level(),
- )
- idx.name = index_name
-
+ # Starting in 3.0, we no longer perform dtype inference on the
+ # Index object we construct here, xref GH#56161
+ idx = Index(keys, dtype=keys.dtype, name=index_name)
result = Series(counts, index=idx, name=name, copy=False)
if sort:
@@ -1606,16 +1589,8 @@ def union_with_duplicates(
"""
from pandas import Series
- with warnings.catch_warnings():
- # filter warning from object dtype inference; we will end up discarding
- # the index here, so the deprecation does not affect the end result here.
- warnings.filterwarnings(
- "ignore",
- "The behavior of value_counts with object-dtype is deprecated",
- category=FutureWarning,
- )
- l_count = value_counts_internal(lvals, dropna=False)
- r_count = value_counts_internal(rvals, dropna=False)
+ l_count = value_counts_internal(lvals, dropna=False)
+ r_count = value_counts_internal(rvals, dropna=False)
l_count, r_count = l_count.align(r_count, fill_value=0)
final_count = np.maximum(l_count.values, r_count.values)
final_count = Series(final_count, index=l_count.index, dtype="int", copy=False)
diff --git a/pandas/core/arrays/interval.py b/pandas/core/arrays/interval.py
index 56ea28c0b50f8..af666a591b1bc 100644
--- a/pandas/core/arrays/interval.py
+++ b/pandas/core/arrays/interval.py
@@ -13,7 +13,6 @@
Union,
overload,
)
-import warnings
import numpy as np
@@ -1217,15 +1216,8 @@ def value_counts(self, dropna: bool = True) -> Series:
Series.value_counts
"""
# TODO: implement this is a non-naive way!
- with warnings.catch_warnings():
- warnings.filterwarnings(
- "ignore",
- "The behavior of value_counts with object-dtype is deprecated",
- category=FutureWarning,
- )
- result = value_counts(np.asarray(self), dropna=dropna)
- # Once the deprecation is enforced, we will need to do
- # `result.index = result.index.astype(self.dtype)`
+ result = value_counts(np.asarray(self), dropna=dropna)
+ result.index = result.index.astype(self.dtype)
return result
# ---------------------------------------------------------------------
diff --git a/pandas/tests/base/test_value_counts.py b/pandas/tests/base/test_value_counts.py
index a0b0bdfdb46d8..ac40e48f3d523 100644
--- a/pandas/tests/base/test_value_counts.py
+++ b/pandas/tests/base/test_value_counts.py
@@ -347,9 +347,8 @@ def test_value_counts_object_inference_deprecated():
dti = pd.date_range("2016-01-01", periods=3, tz="UTC")
idx = dti.astype(object)
- msg = "The behavior of value_counts with object-dtype is deprecated"
- with tm.assert_produces_warning(FutureWarning, match=msg):
- res = idx.value_counts()
+ res = idx.value_counts()
exp = dti.value_counts()
+ exp.index = exp.index.astype(object)
tm.assert_series_equal(res, exp)
| - [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/58009 | 2024-03-26T02:04:00Z | 2024-03-26T17:20:19Z | 2024-03-26T17:20:19Z | 2024-03-26T17:46:41Z |
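The enforced deprecation above can be sketched as follows. This is an illustrative example, not part of the PR: the result's index dtype depends on the installed pandas version (object dtype once the deprecation is enforced in 3.0, inferred datetime64 with a `FutureWarning` in earlier releases), so the workaround shown — `Index.infer_objects()` — is the explicit opt-in that works either way:

```python
import pandas as pd

# Object-dtype index whose values could be inferred to datetime64,
# mirroring the updated test in test_value_counts_object_inference_deprecated.
dti = pd.date_range("2016-01-01", periods=3)
idx = dti.astype(object)

res = idx.value_counts()
# Under pandas 3.0 the result's index keeps object dtype; earlier versions
# inferred datetime64 (with a FutureWarning). Callers who want the old
# inferred dtype can opt in explicitly:
inferred_index = res.index.infer_objects()
```

The counts themselves are unaffected by the change; only the dtype of `res.index` differs across versions.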
Backport PR #57553 on branch 2.2.x (API: avoid passing Manager to subclass init) | diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index 5c510d98596df..afcd4d014316e 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -656,26 +656,37 @@ class DataFrame(NDFrame, OpsMixin):
def _constructor(self) -> Callable[..., DataFrame]:
return DataFrame
- def _constructor_from_mgr(self, mgr, axes):
- if self._constructor is DataFrame:
- # we are pandas.DataFrame (or a subclass that doesn't override _constructor)
- return DataFrame._from_mgr(mgr, axes=axes)
- else:
- assert axes is mgr.axes
+ def _constructor_from_mgr(self, mgr, axes) -> DataFrame:
+ df = DataFrame._from_mgr(mgr, axes=axes)
+
+ if type(self) is DataFrame:
+ # This would also work `if self._constructor is DataFrame`, but
+ # this check is slightly faster, benefiting the most-common case.
+ return df
+
+ elif type(self).__name__ == "GeoDataFrame":
+ # Shim until geopandas can override their _constructor_from_mgr
+ # bc they have different behavior for Managers than for DataFrames
return self._constructor(mgr)
+ # We assume that the subclass __init__ knows how to handle a
+ # pd.DataFrame object.
+ return self._constructor(df)
+
_constructor_sliced: Callable[..., Series] = Series
- def _sliced_from_mgr(self, mgr, axes) -> Series:
- return Series._from_mgr(mgr, axes)
+ def _constructor_sliced_from_mgr(self, mgr, axes) -> Series:
+ ser = Series._from_mgr(mgr, axes)
+ ser._name = None # caller is responsible for setting real name
- def _constructor_sliced_from_mgr(self, mgr, axes):
- if self._constructor_sliced is Series:
- ser = self._sliced_from_mgr(mgr, axes)
- ser._name = None # caller is responsible for setting real name
+ if type(self) is DataFrame:
+ # This would also work `if self._constructor_sliced is Series`, but
+ # this check is slightly faster, benefiting the most-common case.
return ser
- assert axes is mgr.axes
- return self._constructor_sliced(mgr)
+
+ # We assume that the subclass __init__ knows how to handle a
+ # pd.Series object.
+ return self._constructor_sliced(ser)
# ----------------------------------------------------------------------
# Constructors
@@ -1403,7 +1414,8 @@ def _get_values_for_csv(
na_rep=na_rep,
quoting=quoting,
)
- return self._constructor_from_mgr(mgr, axes=mgr.axes)
+ # error: Incompatible return value type (got "DataFrame", expected "Self")
+ return self._constructor_from_mgr(mgr, axes=mgr.axes) # type: ignore[return-value]
# ----------------------------------------------------------------------
@@ -5077,7 +5089,8 @@ def predicate(arr: ArrayLike) -> bool:
return True
mgr = self._mgr._get_data_subset(predicate).copy(deep=None)
- return self._constructor_from_mgr(mgr, axes=mgr.axes).__finalize__(self)
+ # error: Incompatible return value type (got "DataFrame", expected "Self")
+ return self._constructor_from_mgr(mgr, axes=mgr.axes).__finalize__(self) # type: ignore[return-value]
def insert(
self,
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index 2a86f75badecd..796357355fef4 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -336,6 +336,7 @@ def _as_manager(self, typ: str, copy: bool_t = True) -> Self:
# fastpath of passing a manager doesn't check the option/manager class
return self._constructor_from_mgr(new_mgr, axes=new_mgr.axes).__finalize__(self)
+ @final
@classmethod
def _from_mgr(cls, mgr: Manager, axes: list[Index]) -> Self:
"""
diff --git a/pandas/core/resample.py b/pandas/core/resample.py
index 2d430ef4dcff6..0dd808a0ab296 100644
--- a/pandas/core/resample.py
+++ b/pandas/core/resample.py
@@ -2548,7 +2548,8 @@ def _take_new_index(
if axis == 1:
raise NotImplementedError("axis 1 is not supported")
new_mgr = obj._mgr.reindex_indexer(new_axis=new_index, indexer=indexer, axis=1)
- return obj._constructor_from_mgr(new_mgr, axes=new_mgr.axes)
+ # error: Incompatible return value type (got "DataFrame", expected "NDFrameT")
+ return obj._constructor_from_mgr(new_mgr, axes=new_mgr.axes) # type: ignore[return-value]
else:
raise ValueError("'obj' should be either a Series or a DataFrame")
diff --git a/pandas/core/series.py b/pandas/core/series.py
index c1782206d4b67..6fd019656d207 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -662,14 +662,17 @@ def _constructor(self) -> Callable[..., Series]:
return Series
def _constructor_from_mgr(self, mgr, axes):
- if self._constructor is Series:
- # we are pandas.Series (or a subclass that doesn't override _constructor)
- ser = Series._from_mgr(mgr, axes=axes)
- ser._name = None # caller is responsible for setting real name
+ ser = Series._from_mgr(mgr, axes=axes)
+ ser._name = None # caller is responsible for setting real name
+
+ if type(self) is Series:
+ # This would also work `if self._constructor is Series`, but
+ # this check is slightly faster, benefiting the most-common case.
return ser
- else:
- assert axes is mgr.axes
- return self._constructor(mgr)
+
+ # We assume that the subclass __init__ knows how to handle a
+ # pd.Series object.
+ return self._constructor(ser)
@property
def _constructor_expanddim(self) -> Callable[..., DataFrame]:
@@ -681,18 +684,19 @@ def _constructor_expanddim(self) -> Callable[..., DataFrame]:
return DataFrame
- def _expanddim_from_mgr(self, mgr, axes) -> DataFrame:
+ def _constructor_expanddim_from_mgr(self, mgr, axes):
from pandas.core.frame import DataFrame
- return DataFrame._from_mgr(mgr, axes=mgr.axes)
+ df = DataFrame._from_mgr(mgr, axes=mgr.axes)
- def _constructor_expanddim_from_mgr(self, mgr, axes):
- from pandas.core.frame import DataFrame
+ if type(self) is Series:
+ # This would also work `if self._constructor_expanddim is DataFrame`,
+ # but this check is slightly faster, benefiting the most-common case.
+ return df
- if self._constructor_expanddim is DataFrame:
- return self._expanddim_from_mgr(mgr, axes)
- assert axes is mgr.axes
- return self._constructor_expanddim(mgr)
+ # We assume that the subclass __init__ knows how to handle a
+ # pd.DataFrame object.
+ return self._constructor_expanddim(df)
# types
@property
diff --git a/pandas/tests/frame/test_subclass.py b/pandas/tests/frame/test_subclass.py
index ef78ae62cb4d6..855b58229cbdb 100644
--- a/pandas/tests/frame/test_subclass.py
+++ b/pandas/tests/frame/test_subclass.py
@@ -26,6 +26,17 @@ def _constructor(self):
class TestDataFrameSubclassing:
+ def test_no_warning_on_mgr(self):
+ # GH#57032
+ df = tm.SubclassedDataFrame(
+ {"X": [1, 2, 3], "Y": [1, 2, 3]}, index=["a", "b", "c"]
+ )
+ with tm.assert_produces_warning(None):
+ # df.isna() goes through _constructor_from_mgr, which we want to
+            # *not* pass a Manager to __init__
+ df.isna()
+ df["X"].isna()
+
def test_frame_subclassing_and_slicing(self):
# Subclass frame and ensure it returns the right class on slicing it
# In reference to PR 9632
| - [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/58008 | 2024-03-26T01:49:10Z | 2024-04-01T18:22:03Z | 2024-04-01T18:22:03Z | 2024-04-01T20:45:12Z |
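A minimal sketch of what the backported change means for downstream subclasses. The `TaggedFrame` class and its `tag` attribute are hypothetical names used only for illustration; the point is that a subclass `__init__` now only needs to handle a plain `DataFrame`, never a low-level Manager:

```python
import pandas as pd

class TaggedFrame(pd.DataFrame):
    # Hypothetical subclass used for illustration only.
    _metadata = ["tag"]

    def __init__(self, data=None, *args, tag=None, **kwargs):
        # With this change, internal constructions route through
        # _constructor_from_mgr, which passes a pd.DataFrame here
        # rather than a Manager object.
        super().__init__(data, *args, **kwargs)
        self.tag = tag

    @property
    def _constructor(self):
        return TaggedFrame

df = TaggedFrame({"x": [1.0, None, 3.0]}, tag="demo")
out = df.isna()  # internally builds the result via _constructor_from_mgr
```

This mirrors the new `test_no_warning_on_mgr` test: `df.isna()` returns an instance of the subclass without any deprecation warning about Manager-based construction.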
CLN: remove unnecessary check `needs_i8_conversion` if Index subclass does not support `any` or `all` | diff --git a/pandas/core/arrays/arrow/array.py b/pandas/core/arrays/arrow/array.py
index 84b62563605ac..34ca81e36cbc5 100644
--- a/pandas/core/arrays/arrow/array.py
+++ b/pandas/core/arrays/arrow/array.py
@@ -1697,7 +1697,7 @@ def pyarrow_meth(data, skip_nulls, **kwargs):
except (AttributeError, NotImplementedError, TypeError) as err:
msg = (
f"'{type(self).__name__}' with dtype {self.dtype} "
- f"does not support reduction '{name}' with pyarrow "
+ f"does not support operation '{name}' with pyarrow "
f"version {pa.__version__}. '{name}' may be supported by "
f"upgrading pyarrow."
)
diff --git a/pandas/core/arrays/base.py b/pandas/core/arrays/base.py
index 76615704f2e33..f37d96bd37614 100644
--- a/pandas/core/arrays/base.py
+++ b/pandas/core/arrays/base.py
@@ -1886,7 +1886,7 @@ def _reduce(
Raises
------
- TypeError : subclass does not define reductions
+ TypeError : subclass does not define operations
Examples
--------
@@ -1897,7 +1897,7 @@ def _reduce(
if meth is None:
raise TypeError(
f"'{type(self).__name__}' with dtype {self.dtype} "
- f"does not support reduction '{name}'"
+ f"does not support operation '{name}'"
)
result = meth(skipna=skipna, **kwargs)
if keepdims:
diff --git a/pandas/core/arrays/datetimelike.py b/pandas/core/arrays/datetimelike.py
index 52cb175ca79a2..d46810e6ebbdd 100644
--- a/pandas/core/arrays/datetimelike.py
+++ b/pandas/core/arrays/datetimelike.py
@@ -1662,7 +1662,7 @@ def _groupby_op(
if dtype.kind == "M":
# Adding/multiplying datetimes is not valid
if how in ["any", "all", "sum", "prod", "cumsum", "cumprod", "var", "skew"]:
- raise TypeError(f"datetime64 type does not support operation: '{how}'")
+ raise TypeError(f"datetime64 type does not support operation '{how}'")
elif isinstance(dtype, PeriodDtype):
# Adding/multiplying Periods is not valid
diff --git a/pandas/core/indexes/base.py b/pandas/core/indexes/base.py
index 30cf6f0b866ee..fd2a65f9b3289 100644
--- a/pandas/core/indexes/base.py
+++ b/pandas/core/indexes/base.py
@@ -176,7 +176,6 @@
)
from pandas.core.missing import clean_reindex_fill_method
from pandas.core.ops import get_op_result_name
-from pandas.core.ops.invalid import make_invalid_op
from pandas.core.sorting import (
ensure_key_mapped,
get_group_index_sorter,
@@ -6938,14 +6937,8 @@ def _maybe_disable_logical_methods(self, opname: str_t) -> None:
"""
raise if this Index subclass does not support any or all.
"""
- if (
- isinstance(self, ABCMultiIndex)
- # TODO(3.0): PeriodArray and DatetimeArray any/all will raise,
- # so checking needs_i8_conversion will be unnecessary
- or (needs_i8_conversion(self.dtype) and self.dtype.kind != "m")
- ):
- # This call will raise
- make_invalid_op(opname)(self)
+ if isinstance(self, ABCMultiIndex):
+ raise TypeError(f"cannot perform {opname} with {type(self).__name__}")
@Appender(IndexOpsMixin.argmin.__doc__)
def argmin(self, axis=None, skipna: bool = True, *args, **kwargs) -> int:
diff --git a/pandas/core/nanops.py b/pandas/core/nanops.py
index a124e8679ae8e..d0c8d17042741 100644
--- a/pandas/core/nanops.py
+++ b/pandas/core/nanops.py
@@ -520,7 +520,7 @@ def nanany(
if values.dtype.kind == "M":
# GH#34479
- raise TypeError("datetime64 type does not support operation: 'any'")
+ raise TypeError("datetime64 type does not support operation 'any'")
values, _ = _get_values(values, skipna, fill_value=False, mask=mask)
@@ -576,7 +576,7 @@ def nanall(
if values.dtype.kind == "M":
# GH#34479
- raise TypeError("datetime64 type does not support operation: 'all'")
+ raise TypeError("datetime64 type does not support operation 'all'")
values, _ = _get_values(values, skipna, fill_value=True, mask=mask)
diff --git a/pandas/tests/apply/test_frame_apply.py b/pandas/tests/apply/test_frame_apply.py
index 9f3fee686a056..de5f5cac1282c 100644
--- a/pandas/tests/apply/test_frame_apply.py
+++ b/pandas/tests/apply/test_frame_apply.py
@@ -1209,7 +1209,7 @@ def test_agg_multiple_mixed_raises():
)
# sorted index
- msg = "does not support reduction"
+ msg = "does not support operation"
with pytest.raises(TypeError, match=msg):
mdf.agg(["min", "sum"])
@@ -1309,7 +1309,7 @@ def test_nuiscance_columns():
)
tm.assert_frame_equal(result, expected)
- msg = "does not support reduction"
+ msg = "does not support operation"
with pytest.raises(TypeError, match=msg):
df.agg("sum")
@@ -1317,7 +1317,7 @@ def test_nuiscance_columns():
expected = Series([6, 6.0, "foobarbaz"], index=["A", "B", "C"])
tm.assert_series_equal(result, expected)
- msg = "does not support reduction"
+ msg = "does not support operation"
with pytest.raises(TypeError, match=msg):
df.agg(["sum"])
diff --git a/pandas/tests/arrays/categorical/test_operators.py b/pandas/tests/arrays/categorical/test_operators.py
index 8778df832d4d7..dbc6cc7715744 100644
--- a/pandas/tests/arrays/categorical/test_operators.py
+++ b/pandas/tests/arrays/categorical/test_operators.py
@@ -374,14 +374,14 @@ def test_numeric_like_ops(self):
# min/max)
s = df["value_group"]
for op in ["kurt", "skew", "var", "std", "mean", "sum", "median"]:
- msg = f"does not support reduction '{op}'"
+ msg = f"does not support operation '{op}'"
with pytest.raises(TypeError, match=msg):
getattr(s, op)(numeric_only=False)
def test_numeric_like_ops_series(self):
# numpy ops
s = Series(Categorical([1, 2, 3, 4]))
- with pytest.raises(TypeError, match="does not support reduction 'sum'"):
+ with pytest.raises(TypeError, match="does not support operation 'sum'"):
np.sum(s)
@pytest.mark.parametrize(
diff --git a/pandas/tests/arrays/test_datetimelike.py b/pandas/tests/arrays/test_datetimelike.py
index 971c5bf487104..cfc04b5c91354 100644
--- a/pandas/tests/arrays/test_datetimelike.py
+++ b/pandas/tests/arrays/test_datetimelike.py
@@ -247,7 +247,7 @@ def test_scalar_from_string(self, arr1d):
assert result == arr1d[0]
def test_reduce_invalid(self, arr1d):
- msg = "does not support reduction 'not a method'"
+ msg = "does not support operation 'not a method'"
with pytest.raises(TypeError, match=msg):
arr1d._reduce("not a method")
diff --git a/pandas/tests/extension/base/groupby.py b/pandas/tests/extension/base/groupby.py
index dcbbac44d083a..bab8566a06dc2 100644
--- a/pandas/tests/extension/base/groupby.py
+++ b/pandas/tests/extension/base/groupby.py
@@ -165,7 +165,7 @@ def test_in_numeric_groupby(self, data_for_grouping):
# period
"does not support sum operations",
# datetime
- "does not support operation: 'sum'",
+ "does not support operation 'sum'",
# all others
re.escape(f"agg function failed [how->sum,dtype->{dtype}"),
]
diff --git a/pandas/tests/extension/base/reduce.py b/pandas/tests/extension/base/reduce.py
index 03952d87f0ac6..c3a6daee2dd54 100644
--- a/pandas/tests/extension/base/reduce.py
+++ b/pandas/tests/extension/base/reduce.py
@@ -86,7 +86,7 @@ def test_reduce_series_boolean(self, data, all_boolean_reductions, skipna):
# TODO: the message being checked here isn't actually checking anything
msg = (
"[Cc]annot perform|Categorical is not ordered for operation|"
- "does not support reduction|"
+ "does not support operation|"
)
with pytest.raises(TypeError, match=msg):
@@ -105,7 +105,7 @@ def test_reduce_series_numeric(self, data, all_numeric_reductions, skipna):
# TODO: the message being checked here isn't actually checking anything
msg = (
"[Cc]annot perform|Categorical is not ordered for operation|"
- "does not support reduction|"
+ "does not support operation|"
)
with pytest.raises(TypeError, match=msg):
diff --git a/pandas/tests/extension/test_datetime.py b/pandas/tests/extension/test_datetime.py
index 5de4865feb6f9..a42fa6088d9c8 100644
--- a/pandas/tests/extension/test_datetime.py
+++ b/pandas/tests/extension/test_datetime.py
@@ -104,7 +104,7 @@ def _supports_reduction(self, obj, op_name: str) -> bool:
@pytest.mark.parametrize("skipna", [True, False])
def test_reduce_series_boolean(self, data, all_boolean_reductions, skipna):
meth = all_boolean_reductions
- msg = f"datetime64 type does not support operation: '{meth}'"
+ msg = f"datetime64 type does not support operation '{meth}'"
with pytest.raises(TypeError, match=msg):
super().test_reduce_series_boolean(data, all_boolean_reductions, skipna)
diff --git a/pandas/tests/frame/test_reductions.py b/pandas/tests/frame/test_reductions.py
index fd3dad37da1f9..c1161a258aaa4 100644
--- a/pandas/tests/frame/test_reductions.py
+++ b/pandas/tests/frame/test_reductions.py
@@ -975,7 +975,7 @@ def test_sum_mixed_datetime(self):
df = DataFrame({"A": date_range("2000", periods=4), "B": [1, 2, 3, 4]}).reindex(
[2, 3, 4]
)
- with pytest.raises(TypeError, match="does not support reduction 'sum'"):
+ with pytest.raises(TypeError, match="does not support operation 'sum'"):
df.sum()
def test_mean_corner(self, float_frame, float_string_frame):
@@ -1381,7 +1381,7 @@ def test_any_datetime(self):
]
df = DataFrame({"A": float_data, "B": datetime_data})
- msg = "datetime64 type does not support operation: 'any'"
+ msg = "datetime64 type does not support operation 'any'"
with pytest.raises(TypeError, match=msg):
df.any(axis=1)
@@ -1466,18 +1466,18 @@ def test_any_all_np_func(self, func, data, expected):
if any(isinstance(x, CategoricalDtype) for x in data.dtypes):
with pytest.raises(
- TypeError, match="dtype category does not support reduction"
+ TypeError, match=".* dtype category does not support operation"
):
func(data)
# method version
with pytest.raises(
- TypeError, match="dtype category does not support reduction"
+ TypeError, match=".* dtype category does not support operation"
):
getattr(DataFrame(data), func.__name__)(axis=None)
if data.dtypes.apply(lambda x: x.kind == "M").any():
# GH#34479
- msg = "datetime64 type does not support operation: '(any|all)'"
+ msg = "datetime64 type does not support operation '(any|all)'"
with pytest.raises(TypeError, match=msg):
func(data)
@@ -1734,19 +1734,19 @@ def test_any_all_categorical_dtype_nuisance_column(self, all_boolean_reductions)
df = ser.to_frame()
# Double-check the Series behavior is to raise
- with pytest.raises(TypeError, match="does not support reduction"):
+ with pytest.raises(TypeError, match="does not support operation"):
getattr(ser, all_boolean_reductions)()
- with pytest.raises(TypeError, match="does not support reduction"):
+ with pytest.raises(TypeError, match="does not support operation"):
getattr(np, all_boolean_reductions)(ser)
- with pytest.raises(TypeError, match="does not support reduction"):
+ with pytest.raises(TypeError, match="does not support operation"):
getattr(df, all_boolean_reductions)(bool_only=False)
- with pytest.raises(TypeError, match="does not support reduction"):
+ with pytest.raises(TypeError, match="does not support operation"):
getattr(df, all_boolean_reductions)(bool_only=None)
- with pytest.raises(TypeError, match="does not support reduction"):
+ with pytest.raises(TypeError, match="does not support operation"):
getattr(np, all_boolean_reductions)(df, axis=0)
def test_median_categorical_dtype_nuisance_column(self):
@@ -1755,22 +1755,22 @@ def test_median_categorical_dtype_nuisance_column(self):
ser = df["A"]
# Double-check the Series behavior is to raise
- with pytest.raises(TypeError, match="does not support reduction"):
+ with pytest.raises(TypeError, match="does not support operation"):
ser.median()
- with pytest.raises(TypeError, match="does not support reduction"):
+ with pytest.raises(TypeError, match="does not support operation"):
df.median(numeric_only=False)
- with pytest.raises(TypeError, match="does not support reduction"):
+ with pytest.raises(TypeError, match="does not support operation"):
df.median()
# same thing, but with an additional non-categorical column
df["B"] = df["A"].astype(int)
- with pytest.raises(TypeError, match="does not support reduction"):
+ with pytest.raises(TypeError, match="does not support operation"):
df.median(numeric_only=False)
- with pytest.raises(TypeError, match="does not support reduction"):
+ with pytest.raises(TypeError, match="does not support operation"):
df.median()
# TODO: np.median(df, axis=0) gives np.array([2.0, 2.0]) instead
@@ -1964,7 +1964,7 @@ def test_minmax_extensionarray(method, numeric_only):
def test_frame_mixed_numeric_object_with_timestamp(ts_value):
# GH 13912
df = DataFrame({"a": [1], "b": [1.1], "c": ["foo"], "d": [ts_value]})
- with pytest.raises(TypeError, match="does not support reduction"):
+ with pytest.raises(TypeError, match="does not support operation"):
df.sum()
diff --git a/pandas/tests/groupby/test_groupby.py b/pandas/tests/groupby/test_groupby.py
index be8f5d73fe7e8..54d7895691f3f 100644
--- a/pandas/tests/groupby/test_groupby.py
+++ b/pandas/tests/groupby/test_groupby.py
@@ -671,7 +671,7 @@ def test_raises_on_nuisance(df):
df = df.loc[:, ["A", "C", "D"]]
df["E"] = datetime.now()
grouped = df.groupby("A")
- msg = "datetime64 type does not support operation: 'sum'"
+ msg = "datetime64 type does not support operation 'sum'"
with pytest.raises(TypeError, match=msg):
grouped.agg("sum")
with pytest.raises(TypeError, match=msg):
@@ -1794,7 +1794,7 @@ def get_categorical_invalid_expected():
else:
msg = "category type does not support"
if op == "skew":
- msg = "|".join([msg, "does not support reduction 'skew'"])
+ msg = "|".join([msg, "does not support operation 'skew'"])
with pytest.raises(TypeError, match=msg):
get_result()
diff --git a/pandas/tests/groupby/test_raises.py b/pandas/tests/groupby/test_raises.py
index 7af27d7227035..70be98af1289f 100644
--- a/pandas/tests/groupby/test_raises.py
+++ b/pandas/tests/groupby/test_raises.py
@@ -241,16 +241,16 @@ def test_groupby_raises_datetime(
return
klass, msg = {
- "all": (TypeError, "datetime64 type does not support operation: 'all'"),
- "any": (TypeError, "datetime64 type does not support operation: 'any'"),
+ "all": (TypeError, "datetime64 type does not support operation 'all'"),
+ "any": (TypeError, "datetime64 type does not support operation 'any'"),
"bfill": (None, ""),
"corrwith": (TypeError, "cannot perform __mul__ with this index type"),
"count": (None, ""),
"cumcount": (None, ""),
"cummax": (None, ""),
"cummin": (None, ""),
- "cumprod": (TypeError, "datetime64 type does not support operation: 'cumprod'"),
- "cumsum": (TypeError, "datetime64 type does not support operation: 'cumsum'"),
+ "cumprod": (TypeError, "datetime64 type does not support operation 'cumprod'"),
+ "cumsum": (TypeError, "datetime64 type does not support operation 'cumsum'"),
"diff": (None, ""),
"ffill": (None, ""),
"fillna": (None, ""),
@@ -265,7 +265,7 @@ def test_groupby_raises_datetime(
"ngroup": (None, ""),
"nunique": (None, ""),
"pct_change": (TypeError, "cannot perform __truediv__ with this index type"),
- "prod": (TypeError, "datetime64 type does not support operation: 'prod'"),
+ "prod": (TypeError, "datetime64 type does not support operation 'prod'"),
"quantile": (None, ""),
"rank": (None, ""),
"sem": (None, ""),
@@ -275,14 +275,14 @@ def test_groupby_raises_datetime(
TypeError,
"|".join(
[
- r"dtype datetime64\[ns\] does not support reduction",
- "datetime64 type does not support operation: 'skew'",
+ r"dtype datetime64\[ns\] does not support operation",
+ "datetime64 type does not support operation 'skew'",
]
),
),
"std": (None, ""),
- "sum": (TypeError, "datetime64 type does not support operation: 'sum"),
- "var": (TypeError, "datetime64 type does not support operation: 'var'"),
+ "sum": (TypeError, "datetime64 type does not support operation 'sum"),
+ "var": (TypeError, "datetime64 type does not support operation 'var'"),
}[groupby_func]
if groupby_func == "fillna":
@@ -323,7 +323,7 @@ def test_groupby_raises_datetime_np(
klass, msg = {
np.sum: (
TypeError,
- re.escape("datetime64[us] does not support reduction 'sum'"),
+ re.escape("datetime64[us] does not support operation 'sum'"),
),
np.mean: (None, ""),
}[groupby_func_np]
@@ -417,7 +417,7 @@ def test_groupby_raises_category(
TypeError,
"|".join(
[
- "'Categorical' .* does not support reduction 'mean'",
+ "'Categorical' .* does not support operation 'mean'",
"category dtype does not support aggregation 'mean'",
]
),
@@ -426,7 +426,7 @@ def test_groupby_raises_category(
TypeError,
"|".join(
[
- "'Categorical' .* does not support reduction 'median'",
+ "'Categorical' .* does not support operation 'median'",
"category dtype does not support aggregation 'median'",
]
),
@@ -445,7 +445,7 @@ def test_groupby_raises_category(
TypeError,
"|".join(
[
- "'Categorical' .* does not support reduction 'sem'",
+ "'Categorical' .* does not support operation 'sem'",
"category dtype does not support aggregation 'sem'",
]
),
@@ -456,7 +456,7 @@ def test_groupby_raises_category(
TypeError,
"|".join(
[
- "dtype category does not support reduction 'skew'",
+ "dtype category does not support operation 'skew'",
"category type does not support skew operations",
]
),
@@ -465,7 +465,7 @@ def test_groupby_raises_category(
TypeError,
"|".join(
[
- "'Categorical' .* does not support reduction 'std'",
+ "'Categorical' .* does not support operation 'std'",
"category dtype does not support aggregation 'std'",
]
),
@@ -475,7 +475,7 @@ def test_groupby_raises_category(
TypeError,
"|".join(
[
- "'Categorical' .* does not support reduction 'var'",
+ "'Categorical' .* does not support operation 'var'",
"category dtype does not support aggregation 'var'",
]
),
@@ -519,10 +519,10 @@ def test_groupby_raises_category_np(
gb = gb["d"]
klass, msg = {
- np.sum: (TypeError, "dtype category does not support reduction 'sum'"),
+ np.sum: (TypeError, "dtype category does not support operation 'sum'"),
np.mean: (
TypeError,
- "dtype category does not support reduction 'mean'",
+ "dtype category does not support operation 'mean'",
),
}[groupby_func_np]
_call_and_check(klass, msg, how, gb, groupby_func_np, ())
@@ -618,7 +618,7 @@ def test_groupby_raises_category_on_category(
TypeError,
"|".join(
[
- "'Categorical' .* does not support reduction 'sem'",
+ "'Categorical' .* does not support operation 'sem'",
"category dtype does not support aggregation 'sem'",
]
),
@@ -630,7 +630,7 @@ def test_groupby_raises_category_on_category(
"|".join(
[
"category type does not support skew operations",
- "dtype category does not support reduction 'skew'",
+ "dtype category does not support operation 'skew'",
]
),
),
@@ -638,7 +638,7 @@ def test_groupby_raises_category_on_category(
TypeError,
"|".join(
[
- "'Categorical' .* does not support reduction 'std'",
+ "'Categorical' .* does not support operation 'std'",
"category dtype does not support aggregation 'std'",
]
),
@@ -648,7 +648,7 @@ def test_groupby_raises_category_on_category(
TypeError,
"|".join(
[
- "'Categorical' .* does not support reduction 'var'",
+ "'Categorical' .* does not support operation 'var'",
"category dtype does not support aggregation 'var'",
]
),
diff --git a/pandas/tests/indexes/test_old_base.py b/pandas/tests/indexes/test_old_base.py
index f41c6870cdb1c..871e7cdda4102 100644
--- a/pandas/tests/indexes/test_old_base.py
+++ b/pandas/tests/indexes/test_old_base.py
@@ -222,12 +222,7 @@ def test_logical_compat(self, simple_index):
assert idx.any() == idx._values.any()
assert idx.any() == idx.to_series().any()
else:
- msg = "cannot perform (any|all)"
- if isinstance(idx, IntervalIndex):
- msg = (
- r"'IntervalArray' with dtype interval\[.*\] does "
- "not support reduction '(any|all)'"
- )
+ msg = "does not support operation '(any|all)'"
with pytest.raises(TypeError, match=msg):
idx.all()
with pytest.raises(TypeError, match=msg):
diff --git a/pandas/tests/reductions/test_reductions.py b/pandas/tests/reductions/test_reductions.py
index 048553330c1ce..5547b716b2670 100644
--- a/pandas/tests/reductions/test_reductions.py
+++ b/pandas/tests/reductions/test_reductions.py
@@ -378,7 +378,7 @@ def test_invalid_td64_reductions(self, opname):
[
f"reduction operation '{opname}' not allowed for this dtype",
rf"cannot perform {opname} with type timedelta64\[ns\]",
- f"does not support reduction '{opname}'",
+ f"does not support operation '{opname}'",
]
)
@@ -714,7 +714,7 @@ def test_ops_consistency_on_empty(self, method):
[
"operation 'var' not allowed",
r"cannot perform var with type timedelta64\[ns\]",
- "does not support reduction 'var'",
+ "does not support operation 'var'",
]
)
with pytest.raises(TypeError, match=msg):
@@ -1010,7 +1010,7 @@ def test_any_all_datetimelike(self):
df = DataFrame(ser)
# GH#34479
- msg = "datetime64 type does not support operation: '(any|all)'"
+ msg = "datetime64 type does not support operation '(any|all)'"
with pytest.raises(TypeError, match=msg):
dta.all()
with pytest.raises(TypeError, match=msg):
diff --git a/pandas/tests/reductions/test_stat_reductions.py b/pandas/tests/reductions/test_stat_reductions.py
index 60fcf8cbc142c..4af1ca1d4800a 100644
--- a/pandas/tests/reductions/test_stat_reductions.py
+++ b/pandas/tests/reductions/test_stat_reductions.py
@@ -99,7 +99,7 @@ def _check_stat_op(
# mean, idxmax, idxmin, min, and max are valid for dates
if name not in ["max", "min", "mean", "median", "std"]:
ds = Series(date_range("1/1/2001", periods=10))
- msg = f"does not support reduction '{name}'"
+ msg = f"does not support operation '{name}'"
with pytest.raises(TypeError, match=msg):
f(ds)
diff --git a/pandas/tests/resample/test_resample_api.py b/pandas/tests/resample/test_resample_api.py
index 9b442fa7dbd07..a77097fd5ce61 100644
--- a/pandas/tests/resample/test_resample_api.py
+++ b/pandas/tests/resample/test_resample_api.py
@@ -709,7 +709,7 @@ def test_selection_api_validation():
exp.index.name = "d"
with pytest.raises(
- TypeError, match="datetime64 type does not support operation: 'sum'"
+ TypeError, match="datetime64 type does not support operation 'sum'"
):
df.resample("2D", level="d").sum()
result = df.resample("2D", level="d").sum(numeric_only=True)
diff --git a/pandas/tests/series/test_ufunc.py b/pandas/tests/series/test_ufunc.py
index 94a6910509e2d..36a2afb2162c2 100644
--- a/pandas/tests/series/test_ufunc.py
+++ b/pandas/tests/series/test_ufunc.py
@@ -289,7 +289,7 @@ def test_multiply(self, values_for_np_reduce, box_with_array, request):
else:
msg = "|".join(
[
- "does not support reduction",
+ "does not support operation",
"unsupported operand type",
"ufunc 'multiply' cannot use operands",
]
@@ -319,7 +319,7 @@ def test_add(self, values_for_np_reduce, box_with_array):
else:
msg = "|".join(
[
- "does not support reduction",
+ "does not support operation",
"unsupported operand type",
"ufunc 'add' cannot use operands",
]
xref #54566
removed the unnecessary `needs_i8_conversion` check when the Index subclass does not support any or all.
DEPR: remove Tick.delta | diff --git a/ci/code_checks.sh b/ci/code_checks.sh
index a9967dcb8efe6..77778e8bbd859 100755
--- a/ci/code_checks.sh
+++ b/ci/code_checks.sh
@@ -1022,7 +1022,6 @@ if [[ -z "$CHECK" || "$CHECK" == "docstrings" ]]; then
-i "pandas.tseries.offsets.DateOffset.rule_code GL08" \
-i "pandas.tseries.offsets.Day PR02" \
-i "pandas.tseries.offsets.Day.copy SA01" \
- -i "pandas.tseries.offsets.Day.delta GL08" \
-i "pandas.tseries.offsets.Day.freqstr SA01" \
-i "pandas.tseries.offsets.Day.is_on_offset GL08" \
-i "pandas.tseries.offsets.Day.kwds SA01" \
@@ -1075,7 +1074,6 @@ if [[ -z "$CHECK" || "$CHECK" == "docstrings" ]]; then
-i "pandas.tseries.offsets.FY5253Quarter.year_has_extra_week GL08" \
-i "pandas.tseries.offsets.Hour PR02" \
-i "pandas.tseries.offsets.Hour.copy SA01" \
- -i "pandas.tseries.offsets.Hour.delta GL08" \
-i "pandas.tseries.offsets.Hour.freqstr SA01" \
-i "pandas.tseries.offsets.Hour.is_on_offset GL08" \
-i "pandas.tseries.offsets.Hour.kwds SA01" \
@@ -1098,7 +1096,6 @@ if [[ -z "$CHECK" || "$CHECK" == "docstrings" ]]; then
-i "pandas.tseries.offsets.LastWeekOfMonth.weekday GL08" \
-i "pandas.tseries.offsets.Micro PR02" \
-i "pandas.tseries.offsets.Micro.copy SA01" \
- -i "pandas.tseries.offsets.Micro.delta GL08" \
-i "pandas.tseries.offsets.Micro.freqstr SA01" \
-i "pandas.tseries.offsets.Micro.is_on_offset GL08" \
-i "pandas.tseries.offsets.Micro.kwds SA01" \
@@ -1109,7 +1106,6 @@ if [[ -z "$CHECK" || "$CHECK" == "docstrings" ]]; then
-i "pandas.tseries.offsets.Micro.rule_code GL08" \
-i "pandas.tseries.offsets.Milli PR02" \
-i "pandas.tseries.offsets.Milli.copy SA01" \
- -i "pandas.tseries.offsets.Milli.delta GL08" \
-i "pandas.tseries.offsets.Milli.freqstr SA01" \
-i "pandas.tseries.offsets.Milli.is_on_offset GL08" \
-i "pandas.tseries.offsets.Milli.kwds SA01" \
@@ -1120,7 +1116,6 @@ if [[ -z "$CHECK" || "$CHECK" == "docstrings" ]]; then
-i "pandas.tseries.offsets.Milli.rule_code GL08" \
-i "pandas.tseries.offsets.Minute PR02" \
-i "pandas.tseries.offsets.Minute.copy SA01" \
- -i "pandas.tseries.offsets.Minute.delta GL08" \
-i "pandas.tseries.offsets.Minute.freqstr SA01" \
-i "pandas.tseries.offsets.Minute.is_on_offset GL08" \
-i "pandas.tseries.offsets.Minute.kwds SA01" \
@@ -1151,7 +1146,6 @@ if [[ -z "$CHECK" || "$CHECK" == "docstrings" ]]; then
-i "pandas.tseries.offsets.MonthEnd.rule_code GL08" \
-i "pandas.tseries.offsets.Nano PR02" \
-i "pandas.tseries.offsets.Nano.copy SA01" \
- -i "pandas.tseries.offsets.Nano.delta GL08" \
-i "pandas.tseries.offsets.Nano.freqstr SA01" \
-i "pandas.tseries.offsets.Nano.is_on_offset GL08" \
-i "pandas.tseries.offsets.Nano.kwds SA01" \
@@ -1184,7 +1178,6 @@ if [[ -z "$CHECK" || "$CHECK" == "docstrings" ]]; then
-i "pandas.tseries.offsets.QuarterEnd.startingMonth GL08" \
-i "pandas.tseries.offsets.Second PR02" \
-i "pandas.tseries.offsets.Second.copy SA01" \
- -i "pandas.tseries.offsets.Second.delta GL08" \
-i "pandas.tseries.offsets.Second.freqstr SA01" \
-i "pandas.tseries.offsets.Second.is_on_offset GL08" \
-i "pandas.tseries.offsets.Second.kwds SA01" \
@@ -1217,7 +1210,6 @@ if [[ -z "$CHECK" || "$CHECK" == "docstrings" ]]; then
-i "pandas.tseries.offsets.SemiMonthEnd.rule_code GL08" \
-i "pandas.tseries.offsets.Tick GL08" \
-i "pandas.tseries.offsets.Tick.copy SA01" \
- -i "pandas.tseries.offsets.Tick.delta GL08" \
-i "pandas.tseries.offsets.Tick.freqstr SA01" \
-i "pandas.tseries.offsets.Tick.is_on_offset GL08" \
-i "pandas.tseries.offsets.Tick.kwds SA01" \
diff --git a/doc/source/reference/offset_frequency.rst b/doc/source/reference/offset_frequency.rst
index 37eff247899be..8bb2c6ffe73be 100644
--- a/doc/source/reference/offset_frequency.rst
+++ b/doc/source/reference/offset_frequency.rst
@@ -1042,7 +1042,6 @@ Properties
.. autosummary::
:toctree: api/
- Tick.delta
Tick.freqstr
Tick.kwds
Tick.name
@@ -1077,7 +1076,6 @@ Properties
.. autosummary::
:toctree: api/
- Day.delta
Day.freqstr
Day.kwds
Day.name
@@ -1112,7 +1110,6 @@ Properties
.. autosummary::
:toctree: api/
- Hour.delta
Hour.freqstr
Hour.kwds
Hour.name
@@ -1147,7 +1144,6 @@ Properties
.. autosummary::
:toctree: api/
- Minute.delta
Minute.freqstr
Minute.kwds
Minute.name
@@ -1182,7 +1178,6 @@ Properties
.. autosummary::
:toctree: api/
- Second.delta
Second.freqstr
Second.kwds
Second.name
@@ -1217,7 +1212,6 @@ Properties
.. autosummary::
:toctree: api/
- Milli.delta
Milli.freqstr
Milli.kwds
Milli.name
@@ -1252,7 +1246,6 @@ Properties
.. autosummary::
:toctree: api/
- Micro.delta
Micro.freqstr
Micro.kwds
Micro.name
@@ -1287,7 +1280,6 @@ Properties
.. autosummary::
:toctree: api/
- Nano.delta
Nano.freqstr
Nano.kwds
Nano.name
diff --git a/doc/source/whatsnew/v3.0.0.rst b/doc/source/whatsnew/v3.0.0.rst
index a398b93b60018..8b3d4fe8ff5e1 100644
--- a/doc/source/whatsnew/v3.0.0.rst
+++ b/doc/source/whatsnew/v3.0.0.rst
@@ -201,6 +201,7 @@ Removal of prior version deprecations/changes
- Enforced deprecation disallowing parsing datetimes with mixed time zones unless user passes ``utc=True`` to :func:`to_datetime` (:issue:`57275`)
- Enforced deprecation of :meth:`.DataFrameGroupBy.get_group` and :meth:`.SeriesGroupBy.get_group` allowing the ``name`` argument to be a non-tuple when grouping by a list of length 1 (:issue:`54155`)
- Enforced deprecation of :meth:`Series.interpolate` and :meth:`DataFrame.interpolate` for object-dtype (:issue:`57820`)
+- Enforced deprecation of :meth:`offsets.Tick.delta`, use ``pd.Timedelta(obj)`` instead (:issue:`55498`)
- Enforced deprecation of ``axis=None`` acting the same as ``axis=0`` in the DataFrame reductions ``sum``, ``prod``, ``std``, ``var``, and ``sem``, passing ``axis=None`` will now reduce over both axes; this is particularly the case when doing e.g. ``numpy.sum(df)`` (:issue:`21597`)
- Enforced deprecation of parsing system timezone strings to ``tzlocal``, which depended on system timezone, pass the 'tz' keyword instead (:issue:`50791`)
- Enforced deprecation of passing a dictionary to :meth:`SeriesGroupBy.agg` (:issue:`52268`)
diff --git a/pandas/_libs/tslibs/offsets.pyi b/pandas/_libs/tslibs/offsets.pyi
index 791ebc0fbb245..3f942d6aa3622 100644
--- a/pandas/_libs/tslibs/offsets.pyi
+++ b/pandas/_libs/tslibs/offsets.pyi
@@ -20,8 +20,6 @@ from pandas._typing import (
npt,
)
-from .timedeltas import Timedelta
-
_BaseOffsetT = TypeVar("_BaseOffsetT", bound=BaseOffset)
_DatetimeT = TypeVar("_DatetimeT", bound=datetime)
_TimedeltaT = TypeVar("_TimedeltaT", bound=timedelta)
@@ -114,8 +112,6 @@ class Tick(SingleConstructorOffset):
_prefix: str
def __init__(self, n: int = ..., normalize: bool = ...) -> None: ...
@property
- def delta(self) -> Timedelta: ...
- @property
def nanos(self) -> int: ...
def delta_to_tick(delta: timedelta) -> Tick: ...
diff --git a/pandas/_libs/tslibs/offsets.pyx b/pandas/_libs/tslibs/offsets.pyx
index fd18ae5908f10..e36abdf0ad971 100644
--- a/pandas/_libs/tslibs/offsets.pyx
+++ b/pandas/_libs/tslibs/offsets.pyx
@@ -957,22 +957,6 @@ cdef class Tick(SingleConstructorOffset):
def _as_pd_timedelta(self):
return Timedelta(self)
- @property
- def delta(self):
- warnings.warn(
- # GH#55498
- f"{type(self).__name__}.delta is deprecated and will be removed in "
- "a future version. Use pd.Timedelta(obj) instead",
- FutureWarning,
- stacklevel=find_stack_level(),
- )
- try:
- return self.n * Timedelta(self._nanos_inc)
- except OverflowError as err:
- # GH#55503 as_unit will raise a more useful OutOfBoundsTimedelta
- Timedelta(self).as_unit("ns")
- raise AssertionError("This should not be reached.")
-
@property
def nanos(self) -> int64_t:
"""
diff --git a/pandas/tests/tseries/offsets/test_ticks.py b/pandas/tests/tseries/offsets/test_ticks.py
index c8fbdfa11991a..f91230e1460c4 100644
--- a/pandas/tests/tseries/offsets/test_ticks.py
+++ b/pandas/tests/tseries/offsets/test_ticks.py
@@ -16,7 +16,6 @@
import pytest
from pandas._libs.tslibs.offsets import delta_to_tick
-from pandas.errors import OutOfBoundsTimedelta
from pandas import (
Timedelta,
@@ -239,16 +238,6 @@ def test_tick_addition(kls, expected):
assert result == expected
-def test_tick_delta_overflow():
- # GH#55503 raise OutOfBoundsTimedelta, not OverflowError
- tick = offsets.Day(10**9)
- msg = "Cannot cast 1000000000 days 00:00:00 to unit='ns' without overflow"
- depr_msg = "Day.delta is deprecated"
- with pytest.raises(OutOfBoundsTimedelta, match=msg):
- with tm.assert_produces_warning(FutureWarning, match=depr_msg):
- tick.delta
-
-
@pytest.mark.parametrize("cls", tick_classes)
def test_tick_division(cls):
off = cls(10)
| - [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/58005 | 2024-03-25T20:56:35Z | 2024-03-26T17:03:05Z | 2024-03-26T17:03:05Z | 2024-03-26T17:12:03Z |
DEPR: remove DTA.__init__, TDA.__init__ | diff --git a/ci/deps/actions-310.yaml b/ci/deps/actions-310.yaml
index 1b68fa4fc22e6..ed7dfe1a3c17e 100644
--- a/ci/deps/actions-310.yaml
+++ b/ci/deps/actions-310.yaml
@@ -26,7 +26,7 @@ dependencies:
- beautifulsoup4>=4.11.2
- blosc>=1.21.3
- bottleneck>=1.3.6
- - fastparquet>=2023.04.0
+ - fastparquet>=2023.10.0
- fsspec>=2022.11.0
- html5lib>=1.1
- hypothesis>=6.46.1
diff --git a/ci/deps/actions-311-downstream_compat.yaml b/ci/deps/actions-311-downstream_compat.yaml
index 893e585cb890e..dd1d341c70a9b 100644
--- a/ci/deps/actions-311-downstream_compat.yaml
+++ b/ci/deps/actions-311-downstream_compat.yaml
@@ -28,7 +28,7 @@ dependencies:
- beautifulsoup4>=4.11.2
- blosc>=1.21.3
- bottleneck>=1.3.6
- - fastparquet>=2023.04.0
+ - fastparquet>=2023.10.0
- fsspec>=2022.11.0
- html5lib>=1.1
- hypothesis>=6.46.1
diff --git a/ci/deps/actions-311.yaml b/ci/deps/actions-311.yaml
index 20124b24a6b9a..388116439f944 100644
--- a/ci/deps/actions-311.yaml
+++ b/ci/deps/actions-311.yaml
@@ -26,7 +26,7 @@ dependencies:
- beautifulsoup4>=4.11.2
- blosc>=1.21.3
- bottleneck>=1.3.6
- - fastparquet>=2023.04.0
+ - fastparquet>=2023.10.0
- fsspec>=2022.11.0
- html5lib>=1.1
- hypothesis>=6.46.1
diff --git a/ci/deps/actions-312.yaml b/ci/deps/actions-312.yaml
index eb70816c241bb..745b2fc5dfd2e 100644
--- a/ci/deps/actions-312.yaml
+++ b/ci/deps/actions-312.yaml
@@ -26,7 +26,7 @@ dependencies:
- beautifulsoup4>=4.11.2
- blosc>=1.21.3
- bottleneck>=1.3.6
- - fastparquet>=2023.04.0
+ - fastparquet>=2023.10.0
- fsspec>=2022.11.0
- html5lib>=1.1
- hypothesis>=6.46.1
diff --git a/ci/deps/actions-39-minimum_versions.yaml b/ci/deps/actions-39-minimum_versions.yaml
index 4399aa748af5c..b760f27a3d4d3 100644
--- a/ci/deps/actions-39-minimum_versions.yaml
+++ b/ci/deps/actions-39-minimum_versions.yaml
@@ -29,7 +29,7 @@ dependencies:
- beautifulsoup4=4.11.2
- blosc=1.21.3
- bottleneck=1.3.6
- - fastparquet=2023.04.0
+ - fastparquet=2023.10.0
- fsspec=2022.11.0
- html5lib=1.1
- hypothesis=6.46.1
diff --git a/ci/deps/actions-39.yaml b/ci/deps/actions-39.yaml
index 92df608f17c6c..8f235a836bb3d 100644
--- a/ci/deps/actions-39.yaml
+++ b/ci/deps/actions-39.yaml
@@ -26,7 +26,7 @@ dependencies:
- beautifulsoup4>=4.11.2
- blosc>=1.21.3
- bottleneck>=1.3.6
- - fastparquet>=2023.04.0
+ - fastparquet>=2023.10.0
- fsspec>=2022.11.0
- html5lib>=1.1
- hypothesis>=6.46.1
diff --git a/ci/deps/circle-310-arm64.yaml b/ci/deps/circle-310-arm64.yaml
index 869aae8596681..ed4d139714e71 100644
--- a/ci/deps/circle-310-arm64.yaml
+++ b/ci/deps/circle-310-arm64.yaml
@@ -27,7 +27,7 @@ dependencies:
- beautifulsoup4>=4.11.2
- blosc>=1.21.3
- bottleneck>=1.3.6
- - fastparquet>=2023.04.0
+ - fastparquet>=2023.10.0
- fsspec>=2022.11.0
- html5lib>=1.1
- hypothesis>=6.46.1
diff --git a/doc/source/getting_started/install.rst b/doc/source/getting_started/install.rst
index 11c16dd9dabcc..3cd9e030d6b3c 100644
--- a/doc/source/getting_started/install.rst
+++ b/doc/source/getting_started/install.rst
@@ -361,7 +361,7 @@ Dependency Minimum Version pip extra Notes
PyTables 3.8.0 hdf5 HDF5-based reading / writing
blosc 1.21.3 hdf5 Compression for HDF5; only available on ``conda``
zlib hdf5 Compression for HDF5
-fastparquet 2023.04.0 - Parquet reading / writing (pyarrow is default)
+fastparquet 2023.10.0 - Parquet reading / writing (pyarrow is default)
pyarrow 10.0.1 parquet, feather Parquet, ORC, and feather reading / writing
pyreadstat 1.2.0 spss SPSS files (.sav) reading
odfpy 1.4.1 excel Open document format (.odf, .ods, .odt) reading / writing
diff --git a/doc/source/whatsnew/v3.0.0.rst b/doc/source/whatsnew/v3.0.0.rst
index fb33601263c5d..295d3d36a9c26 100644
--- a/doc/source/whatsnew/v3.0.0.rst
+++ b/doc/source/whatsnew/v3.0.0.rst
@@ -134,7 +134,7 @@ Optional libraries below the lowest tested version may still work, but are not c
+------------------------+---------------------+
| Package | New Minimum Version |
+========================+=====================+
-| fastparquet | 2023.04.0 |
+| fastparquet | 2023.10.0 |
+------------------------+---------------------+
| adbc-driver-postgresql | 0.10.0 |
+------------------------+---------------------+
diff --git a/environment.yml b/environment.yml
index 020154e650c5b..186d7e1d703df 100644
--- a/environment.yml
+++ b/environment.yml
@@ -30,7 +30,7 @@ dependencies:
- beautifulsoup4>=4.11.2
- blosc
- bottleneck>=1.3.6
- - fastparquet>=2023.04.0
+ - fastparquet>=2023.10.0
- fsspec>=2022.11.0
- html5lib>=1.1
- hypothesis>=6.46.1
diff --git a/pandas/compat/_optional.py b/pandas/compat/_optional.py
index d6e01a168fba1..f4e717c26d6fd 100644
--- a/pandas/compat/_optional.py
+++ b/pandas/compat/_optional.py
@@ -25,7 +25,7 @@
"bs4": "4.11.2",
"blosc": "1.21.3",
"bottleneck": "1.3.6",
- "fastparquet": "2023.04.0",
+ "fastparquet": "2023.10.0",
"fsspec": "2022.11.0",
"html5lib": "1.1",
"hypothesis": "6.46.1",
diff --git a/pandas/core/arrays/datetimelike.py b/pandas/core/arrays/datetimelike.py
index 745774b34a3ad..3dc2d77bb5a19 100644
--- a/pandas/core/arrays/datetimelike.py
+++ b/pandas/core/arrays/datetimelike.py
@@ -26,7 +26,6 @@
algos,
lib,
)
-from pandas._libs.arrays import NDArrayBacked
from pandas._libs.tslibs import (
BaseOffset,
IncompatibleFrequency,
@@ -1936,100 +1935,6 @@ class TimelikeOps(DatetimeLikeArrayMixin):
Common ops for TimedeltaIndex/DatetimeIndex, but not PeriodIndex.
"""
- _default_dtype: np.dtype
-
- def __init__(
- self, values, dtype=None, freq=lib.no_default, copy: bool = False
- ) -> None:
- warnings.warn(
- # GH#55623
- f"{type(self).__name__}.__init__ is deprecated and will be "
- "removed in a future version. Use pd.array instead.",
- FutureWarning,
- stacklevel=find_stack_level(),
- )
- if dtype is not None:
- dtype = pandas_dtype(dtype)
-
- values = extract_array(values, extract_numpy=True)
- if isinstance(values, IntegerArray):
- values = values.to_numpy("int64", na_value=iNaT)
-
- inferred_freq = getattr(values, "_freq", None)
- explicit_none = freq is None
- freq = freq if freq is not lib.no_default else None
-
- if isinstance(values, type(self)):
- if explicit_none:
- # don't inherit from values
- pass
- elif freq is None:
- freq = values.freq
- elif freq and values.freq:
- freq = to_offset(freq)
- freq = _validate_inferred_freq(freq, values.freq)
-
- if dtype is not None and dtype != values.dtype:
- # TODO: we only have tests for this for DTA, not TDA (2022-07-01)
- raise TypeError(
- f"dtype={dtype} does not match data dtype {values.dtype}"
- )
-
- dtype = values.dtype
- values = values._ndarray
-
- elif dtype is None:
- if isinstance(values, np.ndarray) and values.dtype.kind in "Mm":
- dtype = values.dtype
- else:
- dtype = self._default_dtype
- if isinstance(values, np.ndarray) and values.dtype == "i8":
- values = values.view(dtype)
-
- if not isinstance(values, np.ndarray):
- raise ValueError(
- f"Unexpected type '{type(values).__name__}'. 'values' must be a "
- f"{type(self).__name__}, ndarray, or Series or Index "
- "containing one of those."
- )
- if values.ndim not in [1, 2]:
- raise ValueError("Only 1-dimensional input arrays are supported.")
-
- if values.dtype == "i8":
- # for compat with datetime/timedelta/period shared methods,
- # we can sometimes get here with int64 values. These represent
- # nanosecond UTC (or tz-naive) unix timestamps
- if dtype is None:
- dtype = self._default_dtype
- values = values.view(self._default_dtype)
- elif lib.is_np_dtype(dtype, "mM"):
- values = values.view(dtype)
- elif isinstance(dtype, DatetimeTZDtype):
- kind = self._default_dtype.kind
- new_dtype = f"{kind}8[{dtype.unit}]"
- values = values.view(new_dtype)
-
- dtype = self._validate_dtype(values, dtype)
-
- if freq == "infer":
- raise ValueError(
- f"Frequency inference not allowed in {type(self).__name__}.__init__. "
- "Use 'pd.array()' instead."
- )
-
- if copy:
- values = values.copy()
- if freq:
- freq = to_offset(freq)
- if values.dtype.kind == "m" and not isinstance(freq, Tick):
- raise TypeError("TimedeltaArray/Index freq must be a Tick")
-
- NDArrayBacked.__init__(self, values=values, dtype=dtype)
- self._freq = freq
-
- if inferred_freq is None and freq is not None:
- type(self)._validate_frequency(self, freq)
-
@classmethod
def _validate_dtype(cls, values, dtype):
raise AbstractMethodError(cls)
diff --git a/pandas/core/arrays/datetimes.py b/pandas/core/arrays/datetimes.py
index ad4611aac9e35..d446407ec3d01 100644
--- a/pandas/core/arrays/datetimes.py
+++ b/pandas/core/arrays/datetimes.py
@@ -186,7 +186,7 @@ class DatetimeArray(dtl.TimelikeOps, dtl.DatelikeOps): # type: ignore[misc]
Parameters
----------
- values : Series, Index, DatetimeArray, ndarray
+ data : Series, Index, DatetimeArray, ndarray
The datetime data.
For DatetimeArray `values` (or a Series or Index boxing one),
@@ -287,7 +287,6 @@ def _scalar_type(self) -> type[Timestamp]:
_dtype: np.dtype[np.datetime64] | DatetimeTZDtype
_freq: BaseOffset | None = None
- _default_dtype = DT64NS_DTYPE # used in TimeLikeOps.__init__
@classmethod
def _from_scalars(cls, scalars, *, dtype: DtypeObj) -> Self:
diff --git a/pandas/core/arrays/timedeltas.py b/pandas/core/arrays/timedeltas.py
index c41e078095feb..6eb4d234b349d 100644
--- a/pandas/core/arrays/timedeltas.py
+++ b/pandas/core/arrays/timedeltas.py
@@ -113,7 +113,7 @@ class TimedeltaArray(dtl.TimelikeOps):
Parameters
----------
- values : array-like
+ data : array-like
The timedelta data.
dtype : numpy.dtype
@@ -196,7 +196,6 @@ def dtype(self) -> np.dtype[np.timedelta64]: # type: ignore[override]
# Constructors
_freq = None
- _default_dtype = TD64NS_DTYPE # used in TimeLikeOps.__init__
@classmethod
def _validate_dtype(cls, values, dtype):
diff --git a/pandas/tests/arrays/datetimes/test_constructors.py b/pandas/tests/arrays/datetimes/test_constructors.py
index 3d22427d41985..d7264c002c67f 100644
--- a/pandas/tests/arrays/datetimes/test_constructors.py
+++ b/pandas/tests/arrays/datetimes/test_constructors.py
@@ -16,34 +16,6 @@ def test_from_sequence_invalid_type(self):
with pytest.raises(TypeError, match="Cannot create a DatetimeArray"):
DatetimeArray._from_sequence(mi, dtype="M8[ns]")
- def test_only_1dim_accepted(self):
- arr = np.array([0, 1, 2, 3], dtype="M8[h]").astype("M8[ns]")
-
- depr_msg = "DatetimeArray.__init__ is deprecated"
- with tm.assert_produces_warning(FutureWarning, match=depr_msg):
- with pytest.raises(ValueError, match="Only 1-dimensional"):
- # 3-dim, we allow 2D to sneak in for ops purposes GH#29853
- DatetimeArray(arr.reshape(2, 2, 1))
-
- with tm.assert_produces_warning(FutureWarning, match=depr_msg):
- with pytest.raises(ValueError, match="Only 1-dimensional"):
- # 0-dim
- DatetimeArray(arr[[0]].squeeze())
-
- def test_freq_validation(self):
- # GH#24623 check that invalid instances cannot be created with the
- # public constructor
- arr = np.arange(5, dtype=np.int64) * 3600 * 10**9
-
- msg = (
- "Inferred frequency h from passed values does not "
- "conform to passed frequency W-SUN"
- )
- depr_msg = "DatetimeArray.__init__ is deprecated"
- with tm.assert_produces_warning(FutureWarning, match=depr_msg):
- with pytest.raises(ValueError, match=msg):
- DatetimeArray(arr, freq="W")
-
@pytest.mark.parametrize(
"meth",
[
@@ -76,42 +48,9 @@ def test_from_pandas_array(self):
expected = pd.date_range("1970-01-01", periods=5, freq="h")._data
tm.assert_datetime_array_equal(result, expected)
- def test_mismatched_timezone_raises(self):
- depr_msg = "DatetimeArray.__init__ is deprecated"
- with tm.assert_produces_warning(FutureWarning, match=depr_msg):
- arr = DatetimeArray(
- np.array(["2000-01-01T06:00:00"], dtype="M8[ns]"),
- dtype=DatetimeTZDtype(tz="US/Central"),
- )
- dtype = DatetimeTZDtype(tz="US/Eastern")
- msg = r"dtype=datetime64\[ns.*\] does not match data dtype datetime64\[ns.*\]"
- with tm.assert_produces_warning(FutureWarning, match=depr_msg):
- with pytest.raises(TypeError, match=msg):
- DatetimeArray(arr, dtype=dtype)
-
- # also with mismatched tzawareness
- with tm.assert_produces_warning(FutureWarning, match=depr_msg):
- with pytest.raises(TypeError, match=msg):
- DatetimeArray(arr, dtype=np.dtype("M8[ns]"))
- with tm.assert_produces_warning(FutureWarning, match=depr_msg):
- with pytest.raises(TypeError, match=msg):
- DatetimeArray(arr.tz_localize(None), dtype=arr.dtype)
-
- def test_non_array_raises(self):
- depr_msg = "DatetimeArray.__init__ is deprecated"
- with tm.assert_produces_warning(FutureWarning, match=depr_msg):
- with pytest.raises(ValueError, match="list"):
- DatetimeArray([1, 2, 3])
-
def test_bool_dtype_raises(self):
arr = np.array([1, 2, 3], dtype="bool")
- depr_msg = "DatetimeArray.__init__ is deprecated"
- msg = "Unexpected value for 'dtype': 'bool'. Must be"
- with tm.assert_produces_warning(FutureWarning, match=depr_msg):
- with pytest.raises(ValueError, match=msg):
- DatetimeArray(arr)
-
msg = r"dtype bool cannot be converted to datetime64\[ns\]"
with pytest.raises(TypeError, match=msg):
DatetimeArray._from_sequence(arr, dtype="M8[ns]")
@@ -122,41 +61,6 @@ def test_bool_dtype_raises(self):
with pytest.raises(TypeError, match=msg):
pd.to_datetime(arr)
- def test_incorrect_dtype_raises(self):
- depr_msg = "DatetimeArray.__init__ is deprecated"
- with tm.assert_produces_warning(FutureWarning, match=depr_msg):
- with pytest.raises(ValueError, match="Unexpected value for 'dtype'."):
- DatetimeArray(np.array([1, 2, 3], dtype="i8"), dtype="category")
-
- with tm.assert_produces_warning(FutureWarning, match=depr_msg):
- with pytest.raises(ValueError, match="Unexpected value for 'dtype'."):
- DatetimeArray(np.array([1, 2, 3], dtype="i8"), dtype="m8[s]")
-
- with tm.assert_produces_warning(FutureWarning, match=depr_msg):
- with pytest.raises(ValueError, match="Unexpected value for 'dtype'."):
- DatetimeArray(np.array([1, 2, 3], dtype="i8"), dtype="M8[D]")
-
- def test_mismatched_values_dtype_units(self):
- arr = np.array([1, 2, 3], dtype="M8[s]")
- dtype = np.dtype("M8[ns]")
- msg = "Values resolution does not match dtype."
- depr_msg = "DatetimeArray.__init__ is deprecated"
-
- with tm.assert_produces_warning(FutureWarning, match=depr_msg):
- with pytest.raises(ValueError, match=msg):
- DatetimeArray(arr, dtype=dtype)
-
- dtype2 = DatetimeTZDtype(tz="UTC", unit="ns")
- with tm.assert_produces_warning(FutureWarning, match=depr_msg):
- with pytest.raises(ValueError, match=msg):
- DatetimeArray(arr, dtype=dtype2)
-
- def test_freq_infer_raises(self):
- depr_msg = "DatetimeArray.__init__ is deprecated"
- with tm.assert_produces_warning(FutureWarning, match=depr_msg):
- with pytest.raises(ValueError, match="Frequency inference"):
- DatetimeArray(np.array([1, 2, 3], dtype="i8"), freq="infer")
-
def test_copy(self):
data = np.array([1, 2, 3], dtype="M8[ns]")
arr = DatetimeArray._from_sequence(data, copy=False)
diff --git a/pandas/tests/arrays/test_datetimelike.py b/pandas/tests/arrays/test_datetimelike.py
index b6ae1a9df0e65..971c5bf487104 100644
--- a/pandas/tests/arrays/test_datetimelike.py
+++ b/pandas/tests/arrays/test_datetimelike.py
@@ -1320,12 +1320,6 @@ def test_from_pandas_array(dtype):
cls = {"M8[ns]": DatetimeArray, "m8[ns]": TimedeltaArray}[dtype]
- depr_msg = f"{cls.__name__}.__init__ is deprecated"
- with tm.assert_produces_warning(FutureWarning, match=depr_msg):
- result = cls(arr)
- expected = cls(data)
- tm.assert_extension_array_equal(result, expected)
-
result = cls._from_sequence(arr, dtype=dtype)
expected = cls._from_sequence(data, dtype=dtype)
tm.assert_extension_array_equal(result, expected)
diff --git a/pandas/tests/arrays/timedeltas/test_constructors.py b/pandas/tests/arrays/timedeltas/test_constructors.py
index 91b6f7fa222f9..ee29f505fd7b1 100644
--- a/pandas/tests/arrays/timedeltas/test_constructors.py
+++ b/pandas/tests/arrays/timedeltas/test_constructors.py
@@ -1,45 +1,10 @@
import numpy as np
import pytest
-import pandas._testing as tm
from pandas.core.arrays import TimedeltaArray
class TestTimedeltaArrayConstructor:
- def test_only_1dim_accepted(self):
- # GH#25282
- arr = np.array([0, 1, 2, 3], dtype="m8[h]").astype("m8[ns]")
-
- depr_msg = "TimedeltaArray.__init__ is deprecated"
- with tm.assert_produces_warning(FutureWarning, match=depr_msg):
- with pytest.raises(ValueError, match="Only 1-dimensional"):
- # 3-dim, we allow 2D to sneak in for ops purposes GH#29853
- TimedeltaArray(arr.reshape(2, 2, 1))
-
- with tm.assert_produces_warning(FutureWarning, match=depr_msg):
- with pytest.raises(ValueError, match="Only 1-dimensional"):
- # 0-dim
- TimedeltaArray(arr[[0]].squeeze())
-
- def test_freq_validation(self):
- # ensure that the public constructor cannot create an invalid instance
- arr = np.array([0, 0, 1], dtype=np.int64) * 3600 * 10**9
-
- msg = (
- "Inferred frequency None from passed values does not "
- "conform to passed frequency D"
- )
- depr_msg = "TimedeltaArray.__init__ is deprecated"
- with tm.assert_produces_warning(FutureWarning, match=depr_msg):
- with pytest.raises(ValueError, match=msg):
- TimedeltaArray(arr.view("timedelta64[ns]"), freq="D")
-
- def test_non_array_raises(self):
- depr_msg = "TimedeltaArray.__init__ is deprecated"
- with tm.assert_produces_warning(FutureWarning, match=depr_msg):
- with pytest.raises(ValueError, match="list"):
- TimedeltaArray([1, 2, 3])
-
def test_other_type_raises(self):
msg = r"dtype bool cannot be converted to timedelta64\[ns\]"
with pytest.raises(TypeError, match=msg):
@@ -78,16 +43,6 @@ def test_incorrect_dtype_raises(self):
np.array([1, 2, 3], dtype="i8"), dtype=np.dtype("m8[Y]")
)
- def test_mismatched_values_dtype_units(self):
- arr = np.array([1, 2, 3], dtype="m8[s]")
- dtype = np.dtype("m8[ns]")
- msg = r"Values resolution does not match dtype"
- depr_msg = "TimedeltaArray.__init__ is deprecated"
-
- with tm.assert_produces_warning(FutureWarning, match=depr_msg):
- with pytest.raises(ValueError, match=msg):
- TimedeltaArray(arr, dtype=dtype)
-
def test_copy(self):
data = np.array([1, 2, 3], dtype="m8[ns]")
arr = TimedeltaArray._from_sequence(data, copy=False)
diff --git a/pandas/tests/indexes/timedeltas/test_constructors.py b/pandas/tests/indexes/timedeltas/test_constructors.py
index 2f97ab6be8965..895ea110c8ad5 100644
--- a/pandas/tests/indexes/timedeltas/test_constructors.py
+++ b/pandas/tests/indexes/timedeltas/test_constructors.py
@@ -56,7 +56,6 @@ def test_infer_from_tdi_mismatch(self):
# has one and it does not match the `freq` input
tdi = timedelta_range("1 second", periods=100, freq="1s")
- depr_msg = "TimedeltaArray.__init__ is deprecated"
msg = (
"Inferred frequency .* from passed values does "
"not conform to passed frequency"
@@ -64,18 +63,9 @@ def test_infer_from_tdi_mismatch(self):
with pytest.raises(ValueError, match=msg):
TimedeltaIndex(tdi, freq="D")
- with pytest.raises(ValueError, match=msg):
- # GH#23789
- with tm.assert_produces_warning(FutureWarning, match=depr_msg):
- TimedeltaArray(tdi, freq="D")
-
with pytest.raises(ValueError, match=msg):
TimedeltaIndex(tdi._data, freq="D")
- with pytest.raises(ValueError, match=msg):
- with tm.assert_produces_warning(FutureWarning, match=depr_msg):
- TimedeltaArray(tdi._data, freq="D")
-
def test_dt64_data_invalid(self):
# GH#23539
# passing tz-aware DatetimeIndex raises, naive or ndarray[datetime64]
@@ -240,11 +230,6 @@ def test_explicit_none_freq(self):
result = TimedeltaIndex(tdi._data, freq=None)
assert result.freq is None
- msg = "TimedeltaArray.__init__ is deprecated"
- with tm.assert_produces_warning(FutureWarning, match=msg):
- tda = TimedeltaArray(tdi, freq=None)
- assert tda.freq is None
-
def test_from_categorical(self):
tdi = timedelta_range(1, periods=5)
diff --git a/pandas/tests/test_downstream.py b/pandas/tests/test_downstream.py
index a4fd29878a2d1..ee26fdae74960 100644
--- a/pandas/tests/test_downstream.py
+++ b/pandas/tests/test_downstream.py
@@ -20,10 +20,6 @@
TimedeltaIndex,
)
import pandas._testing as tm
-from pandas.core.arrays import (
- DatetimeArray,
- TimedeltaArray,
-)
@pytest.fixture
@@ -284,14 +280,6 @@ def test_from_obscure_array(dtype, box):
else:
data = box(arr)
- cls = {"M8[ns]": DatetimeArray, "m8[ns]": TimedeltaArray}[dtype]
-
- depr_msg = f"{cls.__name__}.__init__ is deprecated"
- with tm.assert_produces_warning(FutureWarning, match=depr_msg):
- expected = cls(arr)
- result = cls._from_sequence(data, dtype=dtype)
- tm.assert_extension_array_equal(result, expected)
-
if not isinstance(data, memoryview):
# FIXME(GH#44431) these raise on memoryview and attempted fix
# fails on py3.10
diff --git a/pyproject.toml b/pyproject.toml
index 84d6eca552b54..5f5b013ca8461 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -91,7 +91,7 @@ all = ['adbc-driver-postgresql>=0.10.0',
# blosc only available on conda (https://github.com/Blosc/python-blosc/issues/297)
#'blosc>=1.21.3',
'bottleneck>=1.3.6',
- 'fastparquet>=2023.04.0',
+ 'fastparquet>=2023.10.0',
'fsspec>=2022.11.0',
'gcsfs>=2022.11.0',
'html5lib>=1.1',
diff --git a/requirements-dev.txt b/requirements-dev.txt
index 0ea0eba369158..a42ee1587961a 100644
--- a/requirements-dev.txt
+++ b/requirements-dev.txt
@@ -19,7 +19,7 @@ pytz
beautifulsoup4>=4.11.2
blosc
bottleneck>=1.3.6
-fastparquet>=2023.04.0
+fastparquet>=2023.10.0
fsspec>=2022.11.0
html5lib>=1.1
hypothesis>=6.46.1
| - [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/58004 | 2024-03-25T20:55:49Z | 2024-03-26T20:32:57Z | 2024-03-26T20:32:57Z | 2024-03-26T22:35:00Z |
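The PR above removes tests for the deprecated direct `TimedeltaArray.__init__` path. As a toy illustration only (this is not the real pandas class, and `from_sequence` here is a made-up stand-in for pandas' private `_from_sequence`), the deprecate-then-remove pattern those tests exercised looks roughly like this:

```python
import warnings


class TimedeltaArraySketch:
    """Toy stand-in illustrating the deprecation pattern: direct
    ``__init__`` warns, and a classmethod is the supported path."""

    def __init__(self, values):
        warnings.warn(
            "TimedeltaArraySketch.__init__ is deprecated; "
            "use TimedeltaArraySketch.from_sequence instead",
            FutureWarning,
            stacklevel=2,
        )
        self._values = list(values)

    @classmethod
    def from_sequence(cls, values):
        # The supported constructor suppresses the deprecation internally,
        # so only external callers of __init__ see the warning.
        with warnings.catch_warnings():
            warnings.simplefilter("ignore", FutureWarning)
            return cls(values)


with warnings.catch_warnings(record=True) as rec:
    warnings.simplefilter("always")
    TimedeltaArraySketch([1, 2])
print(any(issubclass(w.category, FutureWarning) for w in rec))  # True
```

Once such a deprecation is enforced, test blocks wrapping the call in `assert_produces_warning(FutureWarning, ...)` — like the ones deleted above — no longer have anything to assert and are removed.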
DEPR: enforce deprecation of DTI/TDI unused keywords | diff --git a/ci/code_checks.sh b/ci/code_checks.sh
index a9967dcb8efe6..b3a694de20103 100755
--- a/ci/code_checks.sh
+++ b/ci/code_checks.sh
@@ -504,7 +504,6 @@ if [[ -z "$CHECK" || "$CHECK" == "docstrings" ]]; then
-i "pandas.Timedelta.to_timedelta64 SA01" \
-i "pandas.Timedelta.total_seconds SA01" \
-i "pandas.Timedelta.view SA01" \
- -i "pandas.TimedeltaIndex PR01" \
-i "pandas.TimedeltaIndex.as_unit RT03,SA01" \
-i "pandas.TimedeltaIndex.ceil SA01" \
-i "pandas.TimedeltaIndex.components SA01" \
diff --git a/doc/source/whatsnew/v3.0.0.rst b/doc/source/whatsnew/v3.0.0.rst
index 4d2381ae1e5e4..5cb8c3c0f54d1 100644
--- a/doc/source/whatsnew/v3.0.0.rst
+++ b/doc/source/whatsnew/v3.0.0.rst
@@ -195,6 +195,8 @@ Removal of prior version deprecations/changes
- :meth:`SeriesGroupBy.agg` no longer pins the name of the group to the input passed to the provided ``func`` (:issue:`51703`)
- All arguments except ``name`` in :meth:`Index.rename` are now keyword only (:issue:`56493`)
- All arguments except the first ``path``-like argument in IO writers are now keyword only (:issue:`54229`)
+- Removed the "closed" and "normalize" keywords in :meth:`DatetimeIndex.__new__` (:issue:`52628`)
+- Removed the "closed" and "unit" keywords in :meth:`TimedeltaIndex.__new__` (:issue:`52628`, :issue:`55499`)
- All arguments in :meth:`Index.sort_values` are now keyword only (:issue:`56493`)
- All arguments in :meth:`Series.to_dict` are now keyword only (:issue:`56493`)
- Changed the default value of ``observed`` in :meth:`DataFrame.groupby` and :meth:`Series.groupby` to ``True`` (:issue:`51811`)
diff --git a/pandas/core/indexes/datetimes.py b/pandas/core/indexes/datetimes.py
index 2d773c04b8ea9..cefdc14145d1f 100644
--- a/pandas/core/indexes/datetimes.py
+++ b/pandas/core/indexes/datetimes.py
@@ -28,7 +28,6 @@
cache_readonly,
doc,
)
-from pandas.util._exceptions import find_stack_level
from pandas.core.dtypes.common import is_scalar
from pandas.core.dtypes.dtypes import DatetimeTZDtype
@@ -150,17 +149,6 @@ class DatetimeIndex(DatetimeTimedeltaMixin):
inferred frequency upon creation.
tz : pytz.timezone or dateutil.tz.tzfile or datetime.tzinfo or str
Set the Timezone of the data.
- normalize : bool, default False
- Normalize start/end dates to midnight before generating date range.
-
- .. deprecated:: 2.1.0
-
- closed : {'left', 'right'}, optional
- Set whether to include `start` and `end` that are on the
- boundary. The default includes boundary points on either end.
-
- .. deprecated:: 2.1.0
-
ambiguous : 'infer', bool-ndarray, 'NaT', default 'raise'
When clocks moved backward due to DST, ambiguous times may arise.
For example in Central European Time (UTC+01), when going from 03:00
@@ -322,8 +310,6 @@ def __new__(
data=None,
freq: Frequency | lib.NoDefault = lib.no_default,
tz=lib.no_default,
- normalize: bool | lib.NoDefault = lib.no_default,
- closed=lib.no_default,
ambiguous: TimeAmbiguous = "raise",
dayfirst: bool = False,
yearfirst: bool = False,
@@ -331,23 +317,6 @@ def __new__(
copy: bool = False,
name: Hashable | None = None,
) -> Self:
- if closed is not lib.no_default:
- # GH#52628
- warnings.warn(
- f"The 'closed' keyword in {cls.__name__} construction is "
- "deprecated and will be removed in a future version.",
- FutureWarning,
- stacklevel=find_stack_level(),
- )
- if normalize is not lib.no_default:
- # GH#52628
- warnings.warn(
- f"The 'normalize' keyword in {cls.__name__} construction is "
- "deprecated and will be removed in a future version.",
- FutureWarning,
- stacklevel=find_stack_level(),
- )
-
if is_scalar(data):
cls._raise_scalar_data_error(data)
diff --git a/pandas/core/indexes/timedeltas.py b/pandas/core/indexes/timedeltas.py
index 6a2c04b0ddf51..8af5a56f43c57 100644
--- a/pandas/core/indexes/timedeltas.py
+++ b/pandas/core/indexes/timedeltas.py
@@ -3,7 +3,6 @@
from __future__ import annotations
from typing import TYPE_CHECKING
-import warnings
from pandas._libs import (
index as libindex,
@@ -14,8 +13,6 @@
Timedelta,
to_offset,
)
-from pandas._libs.tslibs.timedeltas import disallow_ambiguous_unit
-from pandas.util._exceptions import find_stack_level
from pandas.core.dtypes.common import (
is_scalar,
@@ -63,12 +60,6 @@ class TimedeltaIndex(DatetimeTimedeltaMixin):
----------
data : array-like (1-dimensional), optional
Optional timedelta-like data to construct index with.
- unit : {'D', 'h', 'm', 's', 'ms', 'us', 'ns'}, optional
- The unit of ``data``.
-
- .. deprecated:: 2.2.0
- Use ``pd.to_timedelta`` instead.
-
freq : str or pandas offset object, optional
One of pandas date offset strings or corresponding objects. The string
``'infer'`` can be passed in order to set the frequency of the index as
@@ -151,40 +142,16 @@ def _resolution_obj(self) -> Resolution | None: # type: ignore[override]
def __new__(
cls,
data=None,
- unit=lib.no_default,
freq=lib.no_default,
- closed=lib.no_default,
dtype=None,
copy: bool = False,
name=None,
):
- if closed is not lib.no_default:
- # GH#52628
- warnings.warn(
- f"The 'closed' keyword in {cls.__name__} construction is "
- "deprecated and will be removed in a future version.",
- FutureWarning,
- stacklevel=find_stack_level(),
- )
-
- if unit is not lib.no_default:
- # GH#55499
- warnings.warn(
- f"The 'unit' keyword in {cls.__name__} construction is "
- "deprecated and will be removed in a future version. "
- "Use pd.to_timedelta instead.",
- FutureWarning,
- stacklevel=find_stack_level(),
- )
- else:
- unit = None
-
name = maybe_extract_name(name, data, cls)
if is_scalar(data):
cls._raise_scalar_data_error(data)
- disallow_ambiguous_unit(unit)
if dtype is not None:
dtype = pandas_dtype(dtype)
@@ -211,7 +178,7 @@ def __new__(
# - Cases checked above all return/raise before reaching here - #
tdarr = TimedeltaArray._from_sequence_not_strict(
- data, freq=freq, unit=unit, dtype=dtype, copy=copy
+ data, freq=freq, unit=None, dtype=dtype, copy=copy
)
refs = None
if not copy and isinstance(data, (ABCSeries, Index)):
diff --git a/pandas/tests/indexes/datetimes/test_constructors.py b/pandas/tests/indexes/datetimes/test_constructors.py
index 48bbfc1a9f646..4be45e834ce31 100644
--- a/pandas/tests/indexes/datetimes/test_constructors.py
+++ b/pandas/tests/indexes/datetimes/test_constructors.py
@@ -35,18 +35,6 @@
class TestDatetimeIndex:
- def test_closed_deprecated(self):
- # GH#52628
- msg = "The 'closed' keyword"
- with tm.assert_produces_warning(FutureWarning, match=msg):
- DatetimeIndex([], closed=True)
-
- def test_normalize_deprecated(self):
- # GH#52628
- msg = "The 'normalize' keyword"
- with tm.assert_produces_warning(FutureWarning, match=msg):
- DatetimeIndex([], normalize=True)
-
def test_from_dt64_unsupported_unit(self):
# GH#49292
val = np.datetime64(1, "D")
diff --git a/pandas/tests/indexes/timedeltas/test_constructors.py b/pandas/tests/indexes/timedeltas/test_constructors.py
index 0510700bb64d7..2f97ab6be8965 100644
--- a/pandas/tests/indexes/timedeltas/test_constructors.py
+++ b/pandas/tests/indexes/timedeltas/test_constructors.py
@@ -15,12 +15,6 @@
class TestTimedeltaIndex:
- def test_closed_deprecated(self):
- # GH#52628
- msg = "The 'closed' keyword"
- with tm.assert_produces_warning(FutureWarning, match=msg):
- TimedeltaIndex([], closed=True)
-
def test_array_of_dt64_nat_raises(self):
# GH#39462
nat = np.datetime64("NaT", "ns")
@@ -36,14 +30,6 @@ def test_array_of_dt64_nat_raises(self):
with pytest.raises(TypeError, match=msg):
to_timedelta(arr)
- @pytest.mark.parametrize("unit", ["Y", "y", "M"])
- def test_unit_m_y_raises(self, unit):
- msg = "Units 'M', 'Y', and 'y' are no longer supported"
- depr_msg = "The 'unit' keyword in TimedeltaIndex construction is deprecated"
- with pytest.raises(ValueError, match=msg):
- with tm.assert_produces_warning(FutureWarning, match=depr_msg):
- TimedeltaIndex([1, 3, 7], unit)
-
def test_int64_nocopy(self):
# GH#23539 check that a copy isn't made when we pass int64 data
# and copy=False
@@ -138,9 +124,6 @@ def test_construction_base_constructor(self):
tm.assert_index_equal(pd.Index(arr), TimedeltaIndex(arr))
tm.assert_index_equal(pd.Index(np.array(arr)), TimedeltaIndex(np.array(arr)))
- @pytest.mark.filterwarnings(
- "ignore:The 'unit' keyword in TimedeltaIndex construction:FutureWarning"
- )
def test_constructor(self):
expected = TimedeltaIndex(
[
@@ -162,22 +145,6 @@ def test_constructor(self):
)
tm.assert_index_equal(result, expected)
- expected = TimedeltaIndex(
- ["0 days 00:00:00", "0 days 00:00:01", "0 days 00:00:02"]
- )
- result = TimedeltaIndex(range(3), unit="s")
- tm.assert_index_equal(result, expected)
- expected = TimedeltaIndex(
- ["0 days 00:00:00", "0 days 00:00:05", "0 days 00:00:09"]
- )
- result = TimedeltaIndex([0, 5, 9], unit="s")
- tm.assert_index_equal(result, expected)
- expected = TimedeltaIndex(
- ["0 days 00:00:00.400", "0 days 00:00:00.450", "0 days 00:00:01.200"]
- )
- result = TimedeltaIndex([400, 450, 1200], unit="ms")
- tm.assert_index_equal(result, expected)
-
def test_constructor_iso(self):
# GH #21877
expected = timedelta_range("1s", periods=9, freq="s")
diff --git a/pandas/tests/scalar/timedelta/test_constructors.py b/pandas/tests/scalar/timedelta/test_constructors.py
index c69f572c92bf2..5509216f4daf4 100644
--- a/pandas/tests/scalar/timedelta/test_constructors.py
+++ b/pandas/tests/scalar/timedelta/test_constructors.py
@@ -126,30 +126,26 @@ def test_unit_parser(self, unit, np_unit, wrapper):
)
# TODO(2.0): the desired output dtype may have non-nano resolution
- msg = "The 'unit' keyword in TimedeltaIndex construction is deprecated"
- with tm.assert_produces_warning(FutureWarning, match=msg):
- result = to_timedelta(wrapper(range(5)), unit=unit)
- tm.assert_index_equal(result, expected)
- result = TimedeltaIndex(wrapper(range(5)), unit=unit)
- tm.assert_index_equal(result, expected)
-
- str_repr = [f"{x}{unit}" for x in np.arange(5)]
- result = to_timedelta(wrapper(str_repr))
- tm.assert_index_equal(result, expected)
- result = to_timedelta(wrapper(str_repr))
- tm.assert_index_equal(result, expected)
-
- # scalar
- expected = Timedelta(np.timedelta64(2, np_unit).astype("timedelta64[ns]"))
- result = to_timedelta(2, unit=unit)
- assert result == expected
- result = Timedelta(2, unit=unit)
- assert result == expected
-
- result = to_timedelta(f"2{unit}")
- assert result == expected
- result = Timedelta(f"2{unit}")
- assert result == expected
+ result = to_timedelta(wrapper(range(5)), unit=unit)
+ tm.assert_index_equal(result, expected)
+
+ str_repr = [f"{x}{unit}" for x in np.arange(5)]
+ result = to_timedelta(wrapper(str_repr))
+ tm.assert_index_equal(result, expected)
+ result = to_timedelta(wrapper(str_repr))
+ tm.assert_index_equal(result, expected)
+
+ # scalar
+ expected = Timedelta(np.timedelta64(2, np_unit).astype("timedelta64[ns]"))
+ result = to_timedelta(2, unit=unit)
+ assert result == expected
+ result = Timedelta(2, unit=unit)
+ assert result == expected
+
+ result = to_timedelta(f"2{unit}")
+ assert result == expected
+ result = Timedelta(f"2{unit}")
+ assert result == expected
@pytest.mark.parametrize("unit", ["T", "t", "L", "l", "U", "u", "N", "n"])
def test_unit_T_L_N_U_raises(self, unit):
| - [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/58003 | 2024-03-25T20:46:33Z | 2024-03-26T17:07:02Z | 2024-03-26T17:07:02Z | 2024-03-26T17:11:35Z |
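The PR above drops the deprecated `closed`, `normalize`, and `unit` keywords from the `DatetimeIndex`/`TimedeltaIndex` constructors. A toy sketch (not the pandas classes) of why no explicit raise is needed once a keyword is removed: Python's own argument binding produces the `TypeError` at old call sites.

```python
class TimedeltaIndexSketch:
    """Toy stand-in for the constructor after the removal: the 'unit'
    and 'closed' parameters are simply gone from the signature."""

    def __init__(self, data=None, freq=None, dtype=None, copy=False, name=None):
        self.data = list(data) if data is not None else []
        self.freq = freq


# Old call sites that still pass a removed keyword now get a TypeError
# straight from Python's argument binding — no warning machinery needed.
try:
    TimedeltaIndexSketch([1, 3, 7], unit="s")
except TypeError as exc:
    print(f"TypeError: {exc}")
```

This is why the enforcement PR mostly consists of deletions: the warning code, its imports, and the tests asserting the warning all go away together.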
DEPR: Enforce deprecation of parsing to tzlocal | diff --git a/doc/source/whatsnew/v3.0.0.rst b/doc/source/whatsnew/v3.0.0.rst
index 4d2381ae1e5e4..bb856936cd96d 100644
--- a/doc/source/whatsnew/v3.0.0.rst
+++ b/doc/source/whatsnew/v3.0.0.rst
@@ -202,6 +202,7 @@ Removal of prior version deprecations/changes
- Enforced deprecation of :meth:`.DataFrameGroupBy.get_group` and :meth:`.SeriesGroupBy.get_group` allowing the ``name`` argument to be a non-tuple when grouping by a list of length 1 (:issue:`54155`)
- Enforced deprecation of :meth:`Series.interpolate` and :meth:`DataFrame.interpolate` for object-dtype (:issue:`57820`)
- Enforced deprecation of ``axis=None`` acting the same as ``axis=0`` in the DataFrame reductions ``sum``, ``prod``, ``std``, ``var``, and ``sem``, passing ``axis=None`` will now reduce over both axes; this is particularly the case when doing e.g. ``numpy.sum(df)`` (:issue:`21597`)
+- Enforced deprecation of parsing system timezone strings to ``tzlocal``, which depended on the system timezone; pass the 'tz' keyword instead (:issue:`50791`)
- Enforced deprecation of passing a dictionary to :meth:`SeriesGroupBy.agg` (:issue:`52268`)
- Enforced deprecation of string ``AS`` denoting frequency in :class:`YearBegin` and strings ``AS-DEC``, ``AS-JAN``, etc. denoting annual frequencies with various fiscal year starts (:issue:`57793`)
- Enforced deprecation of string ``A`` denoting frequency in :class:`YearEnd` and strings ``A-DEC``, ``A-JAN``, etc. denoting annual frequencies with various fiscal year ends (:issue:`57699`)
diff --git a/pandas/_libs/tslibs/parsing.pyx b/pandas/_libs/tslibs/parsing.pyx
index 94c549cbd3db0..384df1cac95eb 100644
--- a/pandas/_libs/tslibs/parsing.pyx
+++ b/pandas/_libs/tslibs/parsing.pyx
@@ -45,7 +45,6 @@ from decimal import InvalidOperation
from dateutil.parser import DEFAULTPARSER
from dateutil.tz import (
- tzlocal as _dateutil_tzlocal,
tzoffset,
tzutc as _dateutil_tzutc,
)
@@ -703,17 +702,12 @@ cdef datetime dateutil_parse(
if res.tzname and res.tzname in time.tzname:
# GH#50791
if res.tzname != "UTC":
- # If the system is localized in UTC (as many CI runs are)
- # we get tzlocal, once the deprecation is enforced will get
- # timezone.utc, not raise.
- warnings.warn(
+ raise ValueError(
f"Parsing '{res.tzname}' as tzlocal (dependent on system timezone) "
- "is deprecated and will raise in a future version. Pass the 'tz' "
+ "is no longer supported. Pass the 'tz' "
"keyword or call tz_localize after construction instead",
- FutureWarning,
- stacklevel=find_stack_level()
)
- ret = ret.replace(tzinfo=_dateutil_tzlocal())
+ ret = ret.replace(tzinfo=timezone.utc)
elif res.tzoffset == 0:
ret = ret.replace(tzinfo=_dateutil_tzutc())
elif res.tzoffset:
diff --git a/pandas/tests/tslibs/test_parsing.py b/pandas/tests/tslibs/test_parsing.py
index d1b0595dd50e6..52af5adb686a7 100644
--- a/pandas/tests/tslibs/test_parsing.py
+++ b/pandas/tests/tslibs/test_parsing.py
@@ -6,7 +6,6 @@
import re
from dateutil.parser import parse as du_parse
-from dateutil.tz import tzlocal
from hypothesis import given
import numpy as np
import pytest
@@ -22,6 +21,10 @@
)
import pandas.util._test_decorators as td
+# Usually we wouldn't want this import in this test file (which is targeted at
+# tslibs.parsing), but it is convenient to test the Timestamp constructor at
+# the same time as the other parsing functions.
+from pandas import Timestamp
import pandas._testing as tm
from pandas._testing._hypothesis import DATETIME_NO_TZ
@@ -33,20 +36,21 @@
def test_parsing_tzlocal_deprecated():
# GH#50791
msg = (
- "Parsing 'EST' as tzlocal.*"
+ r"Parsing 'EST' as tzlocal \(dependent on system timezone\) "
+ r"is no longer supported\. "
"Pass the 'tz' keyword or call tz_localize after construction instead"
)
dtstr = "Jan 15 2004 03:00 EST"
with tm.set_timezone("US/Eastern"):
- with tm.assert_produces_warning(FutureWarning, match=msg):
- res, _ = parse_datetime_string_with_reso(dtstr)
+ with pytest.raises(ValueError, match=msg):
+ parse_datetime_string_with_reso(dtstr)
- assert isinstance(res.tzinfo, tzlocal)
+ with pytest.raises(ValueError, match=msg):
+ parsing.py_parse_datetime_string(dtstr)
- with tm.assert_produces_warning(FutureWarning, match=msg):
- res = parsing.py_parse_datetime_string(dtstr)
- assert isinstance(res.tzinfo, tzlocal)
+ with pytest.raises(ValueError, match=msg):
+ Timestamp(dtstr)
def test_parse_datetime_string_with_reso():
| - [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/58002 | 2024-03-25T20:45:34Z | 2024-03-25T22:33:52Z | 2024-03-25T22:33:52Z | 2024-03-26T01:28:50Z |
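The error message introduced above tells users to pass an explicit timezone instead of relying on the system timezone to interpret abbreviations like "EST". Using only the standard library (a sketch of the recommended pattern, not pandas code), an explicit fixed offset makes the result independent of where the code runs:

```python
from datetime import datetime, timedelta, timezone

# Instead of letting the parser resolve "EST" via the system timezone,
# attach an explicit UTC-5 offset after construction — the pattern the
# new error message recommends ("pass the 'tz' keyword or call
# tz_localize after construction instead").
est = timezone(timedelta(hours=-5), name="EST")
ts = datetime(2004, 1, 15, 3, 0, tzinfo=est)
print(ts.isoformat())  # 2004-01-15T03:00:00-05:00
```

The result is the same on every machine, which is exactly what parsing to `tzlocal` could not guarantee.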
DEPR: remove Categorical.to_list | diff --git a/doc/source/whatsnew/v3.0.0.rst b/doc/source/whatsnew/v3.0.0.rst
index 4d2381ae1e5e4..f3729fb697bea 100644
--- a/doc/source/whatsnew/v3.0.0.rst
+++ b/doc/source/whatsnew/v3.0.0.rst
@@ -211,6 +211,7 @@ Removal of prior version deprecations/changes
- Enforced deprecation of strings ``T``, ``L``, ``U``, and ``N`` denoting units in :class:`Timedelta` (:issue:`57627`)
- Enforced deprecation of the behavior of :func:`concat` when ``len(keys) != len(objs)`` would truncate to the shorter of the two. Now this raises a ``ValueError`` (:issue:`43485`)
- Enforced deprecation of values "pad", "ffill", "bfill", and "backfill" for :meth:`Series.interpolate` and :meth:`DataFrame.interpolate` (:issue:`57869`)
+- Enforced deprecation removing :meth:`Categorical.to_list`, use ``obj.tolist()`` instead (:issue:`51254`)
- Enforced silent-downcasting deprecation for :ref:`all relevant methods <whatsnew_220.silent_downcasting>` (:issue:`54710`)
- In :meth:`DataFrame.stack`, the default value of ``future_stack`` is now ``True``; specifying ``False`` will raise a ``FutureWarning`` (:issue:`55448`)
- Iterating over a :class:`.DataFrameGroupBy` or :class:`.SeriesGroupBy` will return tuples of length 1 for the groups when grouping by ``level`` a list of length 1 (:issue:`50064`)
diff --git a/pandas/core/arrays/categorical.py b/pandas/core/arrays/categorical.py
index 429dc9236cf45..416331a260e9f 100644
--- a/pandas/core/arrays/categorical.py
+++ b/pandas/core/arrays/categorical.py
@@ -626,19 +626,6 @@ def astype(self, dtype: AstypeArg, copy: bool = True) -> ArrayLike:
return result
- def to_list(self) -> list:
- """
- Alias for tolist.
- """
- # GH#51254
- warnings.warn(
- "Categorical.to_list is deprecated and will be removed in a future "
- "version. Use obj.tolist() instead",
- FutureWarning,
- stacklevel=find_stack_level(),
- )
- return self.tolist()
-
@classmethod
def _from_inferred_categories(
cls, inferred_categories, inferred_codes, dtype, true_values=None
diff --git a/pandas/tests/arrays/categorical/test_api.py b/pandas/tests/arrays/categorical/test_api.py
index cff8afaa17516..2791fd55f54d7 100644
--- a/pandas/tests/arrays/categorical/test_api.py
+++ b/pandas/tests/arrays/categorical/test_api.py
@@ -18,13 +18,6 @@
class TestCategoricalAPI:
- def test_to_list_deprecated(self):
- # GH#51254
- cat1 = Categorical(list("acb"), ordered=False)
- msg = "Categorical.to_list is deprecated and will be removed"
- with tm.assert_produces_warning(FutureWarning, match=msg):
- cat1.to_list()
-
def test_ordered_api(self):
# GH 9347
cat1 = Categorical(list("acb"), ordered=False)
| - [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/58000 | 2024-03-25T20:37:56Z | 2024-03-25T22:17:41Z | 2024-03-25T22:17:40Z | 2024-03-26T01:29:02Z |
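The PR above removes a thin deprecated alias. A toy sketch (not the pandas class) of what callers are left with once `to_list` is gone — only the `tolist` spelling remains on the object:

```python
class CategoricalSketch:
    """Toy stand-in: after the removal, ``tolist`` is the only spelling;
    the deprecated ``to_list`` alias no longer exists on the class."""

    def __init__(self, values):
        self._values = list(values)

    def tolist(self):
        return list(self._values)


cat = CategoricalSketch("acb")
print(cat.tolist())           # ['a', 'c', 'b']
print(hasattr(cat, "to_list"))  # False
```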
DEPR: Enforce datetimelike deprecations | diff --git a/asv_bench/benchmarks/categoricals.py b/asv_bench/benchmarks/categoricals.py
index 69697906e493e..7d5b250c7b157 100644
--- a/asv_bench/benchmarks/categoricals.py
+++ b/asv_bench/benchmarks/categoricals.py
@@ -24,7 +24,7 @@ def setup(self):
self.codes = np.tile(range(len(self.categories)), N)
self.datetimes = pd.Series(
- pd.date_range("1995-01-01 00:00:00", periods=N / 10, freq="s")
+ pd.date_range("1995-01-01 00:00:00", periods=N // 10, freq="s")
)
self.datetimes_with_nat = self.datetimes.copy()
self.datetimes_with_nat.iloc[-1] = pd.NaT
diff --git a/asv_bench/benchmarks/timeseries.py b/asv_bench/benchmarks/timeseries.py
index 06f488f7baaaf..8deec502898d9 100644
--- a/asv_bench/benchmarks/timeseries.py
+++ b/asv_bench/benchmarks/timeseries.py
@@ -29,7 +29,7 @@ def setup(self, index_type):
"dst": date_range(
start="10/29/2000 1:00:00", end="10/29/2000 1:59:59", freq="s"
),
- "repeated": date_range(start="2000", periods=N / 10, freq="s").repeat(10),
+ "repeated": date_range(start="2000", periods=N // 10, freq="s").repeat(10),
"tz_aware": date_range(start="2000", periods=N, freq="s", tz="US/Eastern"),
"tz_local": date_range(
start="2000", periods=N, freq="s", tz=dateutil.tz.tzlocal()
diff --git a/doc/source/whatsnew/v3.0.0.rst b/doc/source/whatsnew/v3.0.0.rst
index d2d5707f32bf3..003f3ea513c8d 100644
--- a/doc/source/whatsnew/v3.0.0.rst
+++ b/doc/source/whatsnew/v3.0.0.rst
@@ -221,7 +221,8 @@ Removal of prior version deprecations/changes
- All arguments in :meth:`Series.to_dict` are now keyword only (:issue:`56493`)
- Changed the default value of ``observed`` in :meth:`DataFrame.groupby` and :meth:`Series.groupby` to ``True`` (:issue:`51811`)
- Enforce deprecation in :func:`testing.assert_series_equal` and :func:`testing.assert_frame_equal` with object dtype and mismatched null-like values, which are now considered not-equal (:issue:`18463`)
-- Enforced deprecation ``all`` and ``any`` reductions with ``datetime64`` and :class:`DatetimeTZDtype` dtypes (:issue:`58029`)
+- Enforced deprecation ``all`` and ``any`` reductions with ``datetime64``, :class:`DatetimeTZDtype`, and :class:`PeriodDtype` dtypes (:issue:`58029`)
+- Enforced deprecation disallowing ``float`` "periods" in :func:`date_range`, :func:`period_range`, :func:`timedelta_range`, :func:`interval_range` (:issue:`56036`)
- Enforced deprecation disallowing parsing datetimes with mixed time zones unless user passes ``utc=True`` to :func:`to_datetime` (:issue:`57275`)
- Enforced deprecation in :meth:`Series.value_counts` and :meth:`Index.value_counts` with object dtype performing dtype inference on the ``.index`` of the result (:issue:`56161`)
- Enforced deprecation of :meth:`.DataFrameGroupBy.get_group` and :meth:`.SeriesGroupBy.get_group` allowing the ``name`` argument to be a non-tuple when grouping by a list of length 1 (:issue:`54155`)
diff --git a/pandas/core/arrays/datetimelike.py b/pandas/core/arrays/datetimelike.py
index d46810e6ebbdd..c9aeaa1ce21c3 100644
--- a/pandas/core/arrays/datetimelike.py
+++ b/pandas/core/arrays/datetimelike.py
@@ -1661,8 +1661,14 @@ def _groupby_op(
dtype = self.dtype
if dtype.kind == "M":
# Adding/multiplying datetimes is not valid
- if how in ["any", "all", "sum", "prod", "cumsum", "cumprod", "var", "skew"]:
+ if how in ["sum", "prod", "cumsum", "cumprod", "var", "skew"]:
raise TypeError(f"datetime64 type does not support operation '{how}'")
+ if how in ["any", "all"]:
+ # GH#34479
+ raise TypeError(
+ f"'{how}' with datetime64 dtypes is no longer supported. "
+ f"Use (obj != pd.Timestamp(0)).{how}() instead."
+ )
elif isinstance(dtype, PeriodDtype):
# Adding/multiplying Periods is not valid
@@ -1670,11 +1676,9 @@ def _groupby_op(
raise TypeError(f"Period type does not support {how} operations")
if how in ["any", "all"]:
# GH#34479
- warnings.warn(
- f"'{how}' with PeriodDtype is deprecated and will raise in a "
- f"future version. Use (obj != pd.Period(0, freq)).{how}() instead.",
- FutureWarning,
- stacklevel=find_stack_level(),
+ raise TypeError(
+ f"'{how}' with PeriodDtype is no longer supported. "
+ f"Use (obj != pd.Period(0, freq)).{how}() instead."
)
else:
# timedeltas we can add but not multiply
@@ -2424,17 +2428,17 @@ def validate_periods(periods: None) -> None: ...
@overload
-def validate_periods(periods: int | float) -> int: ...
+def validate_periods(periods: int) -> int: ...
-def validate_periods(periods: int | float | None) -> int | None:
+def validate_periods(periods: int | None) -> int | None:
"""
If a `periods` argument is passed to the Datetime/Timedelta Array/Index
constructor, cast it to an integer.
Parameters
----------
- periods : None, float, int
+ periods : None, int
Returns
-------
@@ -2443,22 +2447,13 @@ def validate_periods(periods: int | float | None) -> int | None:
Raises
------
TypeError
- if periods is None, float, or int
+ if periods is not None and not an integer
"""
- if periods is not None:
- if lib.is_float(periods):
- warnings.warn(
- # GH#56036
- "Non-integer 'periods' in pd.date_range, pd.timedelta_range, "
- "pd.period_range, and pd.interval_range are deprecated and "
- "will raise in a future version.",
- FutureWarning,
- stacklevel=find_stack_level(),
- )
- periods = int(periods)
- elif not lib.is_integer(periods):
- raise TypeError(f"periods must be a number, got {periods}")
- return periods
+ if periods is not None and not lib.is_integer(periods):
+ raise TypeError(f"periods must be an integer, got {periods}")
+ # error: Incompatible return value type (got "int | integer[Any] | None",
+ # expected "int | None")
+ return periods # type: ignore[return-value]
def _validate_inferred_freq(
diff --git a/pandas/tests/groupby/test_raises.py b/pandas/tests/groupby/test_raises.py
index 70be98af1289f..9301f8d56d9d2 100644
--- a/pandas/tests/groupby/test_raises.py
+++ b/pandas/tests/groupby/test_raises.py
@@ -241,8 +241,8 @@ def test_groupby_raises_datetime(
return
klass, msg = {
- "all": (TypeError, "datetime64 type does not support operation 'all'"),
- "any": (TypeError, "datetime64 type does not support operation 'any'"),
+ "all": (TypeError, "'all' with datetime64 dtypes is no longer supported"),
+ "any": (TypeError, "'any' with datetime64 dtypes is no longer supported"),
"bfill": (None, ""),
"corrwith": (TypeError, "cannot perform __mul__ with this index type"),
"count": (None, ""),
diff --git a/pandas/tests/indexes/datetimes/test_date_range.py b/pandas/tests/indexes/datetimes/test_date_range.py
index 43fcfd1e59670..99d05dd0f26e4 100644
--- a/pandas/tests/indexes/datetimes/test_date_range.py
+++ b/pandas/tests/indexes/datetimes/test_date_range.py
@@ -135,16 +135,14 @@ def test_date_range_name(self):
assert idx.name == "TEST"
def test_date_range_invalid_periods(self):
- msg = "periods must be a number, got foo"
+ msg = "periods must be an integer, got foo"
with pytest.raises(TypeError, match=msg):
date_range(start="1/1/2000", periods="foo", freq="D")
def test_date_range_fractional_period(self):
- msg = "Non-integer 'periods' in pd.date_range, pd.timedelta_range"
- with tm.assert_produces_warning(FutureWarning, match=msg):
- rng = date_range("1/1/2000", periods=10.5)
- exp = date_range("1/1/2000", periods=10)
- tm.assert_index_equal(rng, exp)
+ msg = "periods must be an integer"
+ with pytest.raises(TypeError, match=msg):
+ date_range("1/1/2000", periods=10.5)
@pytest.mark.parametrize(
"freq,freq_depr",
@@ -1042,7 +1040,7 @@ def test_constructor(self):
bdate_range(START, periods=20, freq=BDay())
bdate_range(end=START, periods=20, freq=BDay())
- msg = "periods must be a number, got B"
+ msg = "periods must be an integer, got B"
with pytest.raises(TypeError, match=msg):
date_range("2011-1-1", "2012-1-1", "B")
@@ -1120,7 +1118,7 @@ def test_constructor(self):
bdate_range(START, periods=20, freq=CDay())
bdate_range(end=START, periods=20, freq=CDay())
- msg = "periods must be a number, got C"
+ msg = "periods must be an integer, got C"
with pytest.raises(TypeError, match=msg):
date_range("2011-1-1", "2012-1-1", "C")
diff --git a/pandas/tests/indexes/interval/test_interval_range.py b/pandas/tests/indexes/interval/test_interval_range.py
index 7aea481b49221..5252b85ad8d0e 100644
--- a/pandas/tests/indexes/interval/test_interval_range.py
+++ b/pandas/tests/indexes/interval/test_interval_range.py
@@ -236,11 +236,10 @@ def test_interval_dtype(self, start, end, expected):
def test_interval_range_fractional_period(self):
# float value for periods
- expected = interval_range(start=0, periods=10)
- msg = "Non-integer 'periods' in pd.date_range, .* pd.interval_range"
- with tm.assert_produces_warning(FutureWarning, match=msg):
- result = interval_range(start=0, periods=10.5)
- tm.assert_index_equal(result, expected)
+ msg = "periods must be an integer, got 10.5"
+ ts = Timestamp("2024-03-25")
+ with pytest.raises(TypeError, match=msg):
+ interval_range(ts, periods=10.5)
def test_constructor_coverage(self):
# equivalent timestamp-like start/end
@@ -340,7 +339,7 @@ def test_errors(self):
interval_range(start=Timedelta("1 day"), end=Timedelta("10 days"), freq=2)
# invalid periods
- msg = "periods must be a number, got foo"
+ msg = "periods must be an integer, got foo"
with pytest.raises(TypeError, match=msg):
interval_range(start=0, periods="foo")
diff --git a/pandas/tests/indexes/period/test_constructors.py b/pandas/tests/indexes/period/test_constructors.py
index ec2216c102c3f..6aba9f17326ba 100644
--- a/pandas/tests/indexes/period/test_constructors.py
+++ b/pandas/tests/indexes/period/test_constructors.py
@@ -196,11 +196,9 @@ def test_constructor_invalid_quarters(self):
)
def test_period_range_fractional_period(self):
- msg = "Non-integer 'periods' in pd.date_range, pd.timedelta_range"
- with tm.assert_produces_warning(FutureWarning, match=msg):
- result = period_range("2007-01", periods=10.5, freq="M")
- exp = period_range("2007-01", periods=10, freq="M")
- tm.assert_index_equal(result, exp)
+ msg = "periods must be an integer, got 10.5"
+ with pytest.raises(TypeError, match=msg):
+ period_range("2007-01", periods=10.5, freq="M")
def test_constructor_with_without_freq(self):
# GH53687
diff --git a/pandas/tests/indexes/period/test_period_range.py b/pandas/tests/indexes/period/test_period_range.py
index fb200d071951e..67f4d7421df23 100644
--- a/pandas/tests/indexes/period/test_period_range.py
+++ b/pandas/tests/indexes/period/test_period_range.py
@@ -70,7 +70,7 @@ def test_start_end_non_nat(self):
def test_periods_requires_integer(self):
# invalid periods param
- msg = "periods must be a number, got foo"
+ msg = "periods must be an integer, got foo"
with pytest.raises(TypeError, match=msg):
period_range(start="2017Q1", periods="foo")
diff --git a/pandas/tests/indexes/timedeltas/test_constructors.py b/pandas/tests/indexes/timedeltas/test_constructors.py
index 895ea110c8ad5..12ac5dd63bd8c 100644
--- a/pandas/tests/indexes/timedeltas/test_constructors.py
+++ b/pandas/tests/indexes/timedeltas/test_constructors.py
@@ -143,14 +143,12 @@ def test_constructor_iso(self):
tm.assert_index_equal(result, expected)
def test_timedelta_range_fractional_period(self):
- msg = "Non-integer 'periods' in pd.date_range, pd.timedelta_range"
- with tm.assert_produces_warning(FutureWarning, match=msg):
- rng = timedelta_range("1 days", periods=10.5)
- exp = timedelta_range("1 days", periods=10)
- tm.assert_index_equal(rng, exp)
+ msg = "periods must be an integer"
+ with pytest.raises(TypeError, match=msg):
+ timedelta_range("1 days", periods=10.5)
def test_constructor_coverage(self):
- msg = "periods must be a number, got foo"
+ msg = "periods must be an integer, got foo"
with pytest.raises(TypeError, match=msg):
timedelta_range(start="1 days", periods="foo", freq="D")
| - [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/57999 | 2024-03-25T20:30:17Z | 2024-04-05T17:06:14Z | 2024-04-05T17:06:14Z | 2024-04-05T17:06:22Z |
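The tests above track a stricter check: `periods` must now be an actual integer, not merely a number, with floats like `10.5` and strings like `"foo"` both raising `TypeError`. A minimal stdlib sketch of such a validation (the helper name is hypothetical, not pandas' internal implementation):

```python
def validate_periods(periods):
    """Raise TypeError unless periods is an integer (bools excluded),
    producing the same message pattern the tests above match against."""
    if periods is None:
        return None
    if isinstance(periods, bool) or not isinstance(periods, int):
        raise TypeError(f"periods must be an integer, got {periods}")
    return periods
```

With this shape, `validate_periods(10.5)` raises `TypeError("periods must be an integer, got 10.5")`, matching the `pytest.raises(TypeError, match=msg)` patterns in the updated tests.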
CI: Enable pytables and numba in 312 build | diff --git a/ci/deps/actions-312.yaml b/ci/deps/actions-312.yaml
index eb70816c241bb..cbff1875783d4 100644
--- a/ci/deps/actions-312.yaml
+++ b/ci/deps/actions-312.yaml
@@ -34,7 +34,7 @@ dependencies:
- jinja2>=3.1.2
- lxml>=4.9.2
- matplotlib>=3.6.3
- # - numba>=0.56.4
+ - numba>=0.56.4
- numexpr>=2.8.4
- odfpy>=1.4.1
- qtpy>=2.3.0
@@ -44,7 +44,7 @@ dependencies:
- pyarrow>=10.0.1
- pymysql>=1.0.2
- pyreadstat>=1.2.0
- # - pytables>=3.8.0
+ - pytables>=3.8.0
- python-calamine>=0.1.7
- pyxlsb>=1.0.10
- s3fs>=2022.11.0
diff --git a/pandas/tests/io/pytables/test_append.py b/pandas/tests/io/pytables/test_append.py
index b722a7f179479..7f7f7eccb2382 100644
--- a/pandas/tests/io/pytables/test_append.py
+++ b/pandas/tests/io/pytables/test_append.py
@@ -6,6 +6,7 @@
import pytest
from pandas._libs.tslibs import Timestamp
+from pandas.compat import PY312
import pandas as pd
from pandas import (
@@ -283,7 +284,7 @@ def test_append_all_nans(setup_path):
tm.assert_frame_equal(store["df2"], df, check_index_type=True)
-def test_append_frame_column_oriented(setup_path):
+def test_append_frame_column_oriented(setup_path, request):
with ensure_clean_store(setup_path) as store:
# column oriented
df = DataFrame(
@@ -303,6 +304,13 @@ def test_append_frame_column_oriented(setup_path):
tm.assert_frame_equal(expected, result)
# selection on the non-indexable
+ request.applymarker(
+ pytest.mark.xfail(
+ PY312,
+ reason="AST change in PY312",
+ raises=ValueError,
+ )
+ )
result = store.select("df1", ("columns=A", "index=df.index[0:4]"))
expected = df.reindex(columns=["A"], index=df.index[0:4])
tm.assert_frame_equal(expected, result)
diff --git a/pandas/tests/io/pytables/test_select.py b/pandas/tests/io/pytables/test_select.py
index 0e303d1c890c5..752e2fc570023 100644
--- a/pandas/tests/io/pytables/test_select.py
+++ b/pandas/tests/io/pytables/test_select.py
@@ -2,6 +2,7 @@
import pytest
from pandas._libs.tslibs import Timestamp
+from pandas.compat import PY312
import pandas as pd
from pandas import (
@@ -168,7 +169,7 @@ def test_select(setup_path):
tm.assert_frame_equal(expected, result)
-def test_select_dtypes(setup_path):
+def test_select_dtypes(setup_path, request):
with ensure_clean_store(setup_path) as store:
# with a Timestamp data column (GH #2637)
df = DataFrame(
@@ -279,6 +280,13 @@ def test_select_dtypes(setup_path):
expected = df[df["A"] > 0]
store.append("df", df, data_columns=True)
+ request.applymarker(
+ pytest.mark.xfail(
+ PY312,
+ reason="AST change in PY312",
+ raises=ValueError,
+ )
+ )
np_zero = np.float64(0) # noqa: F841
result = store.select("df", where=["A>np_zero"])
tm.assert_frame_equal(expected, result)
@@ -607,7 +615,7 @@ def test_select_iterator_many_empty_frames(setup_path):
assert len(results) == 0
-def test_frame_select(setup_path):
+def test_frame_select(setup_path, request):
df = DataFrame(
np.random.default_rng(2).standard_normal((10, 4)),
columns=Index(list("ABCD"), dtype=object),
@@ -624,6 +632,13 @@ def test_frame_select(setup_path):
crit2 = "columns=['A', 'D']"
crit3 = "columns=A"
+ request.applymarker(
+ pytest.mark.xfail(
+ PY312,
+ reason="AST change in PY312",
+ raises=TypeError,
+ )
+ )
result = store.select("frame", [crit1, crit2])
expected = df.loc[date:, ["A", "D"]]
tm.assert_frame_equal(result, expected)
diff --git a/pandas/tests/io/pytables/test_store.py b/pandas/tests/io/pytables/test_store.py
index fda385685da19..e62df0bc1c977 100644
--- a/pandas/tests/io/pytables/test_store.py
+++ b/pandas/tests/io/pytables/test_store.py
@@ -7,6 +7,8 @@
import numpy as np
import pytest
+from pandas.compat import PY312
+
import pandas as pd
from pandas import (
DataFrame,
@@ -866,7 +868,7 @@ def test_start_stop_fixed(setup_path):
df.iloc[8:10, -2] = np.nan
-def test_select_filter_corner(setup_path):
+def test_select_filter_corner(setup_path, request):
df = DataFrame(np.random.default_rng(2).standard_normal((50, 100)))
df.index = [f"{c:3d}" for c in df.index]
df.columns = [f"{c:3d}" for c in df.columns]
@@ -874,6 +876,13 @@ def test_select_filter_corner(setup_path):
with ensure_clean_store(setup_path) as store:
store.put("frame", df, format="table")
+ request.applymarker(
+ pytest.mark.xfail(
+ PY312,
+ reason="AST change in PY312",
+ raises=ValueError,
+ )
+ )
crit = "columns=df.columns[:75]"
result = store.select("frame", [crit])
tm.assert_frame_equal(result, df.loc[:, df.columns[:75]])
| Looks like there are conda packages for this Python version now | https://api.github.com/repos/pandas-dev/pandas/pulls/57998 | 2024-03-25T18:50:39Z | 2024-03-26T21:34:28Z | 2024-03-26T21:34:28Z | 2024-03-26T21:34:31Z |
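The `request.applymarker(pytest.mark.xfail(PY312, ...))` pattern above marks a test as expected-to-fail only on Python 3.12+, and only once execution reaches that point, so earlier assertions in the test still run normally. A sketch of the version gate (pandas defines `PY312` in `pandas.compat` roughly along these lines; the helper function is illustrative):

```python
import sys

# how a PY312 compatibility flag is typically defined
PY312 = sys.version_info >= (3, 12)

def xfail_reason():
    """Return the xfail reason on 3.12+, else None — mirroring the
    conditional applymarker calls in the diff above."""
    return "AST change in PY312" if PY312 else None
```

The marker's `raises=` argument additionally pins the expected exception type, so an unrelated failure on 3.12 would still surface as a real error rather than an xfail.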
REF: Use numpy set methods in interpolate | diff --git a/pandas/core/missing.py b/pandas/core/missing.py
index b3e152e36a304..9fef78d9f8c3d 100644
--- a/pandas/core/missing.py
+++ b/pandas/core/missing.py
@@ -471,20 +471,20 @@ def _interpolate_1d(
if valid.all():
return
- # These are sets of index pointers to invalid values... i.e. {0, 1, etc...
- all_nans = set(np.flatnonzero(invalid))
+ # These index pointers to invalid values... i.e. {0, 1, etc...
+ all_nans = np.flatnonzero(invalid)
first_valid_index = find_valid_index(how="first", is_valid=valid)
if first_valid_index is None: # no nan found in start
first_valid_index = 0
- start_nans = set(range(first_valid_index))
+ start_nans = np.arange(first_valid_index)
last_valid_index = find_valid_index(how="last", is_valid=valid)
if last_valid_index is None: # no nan found in end
last_valid_index = len(yvalues)
- end_nans = set(range(1 + last_valid_index, len(valid)))
+ end_nans = np.arange(1 + last_valid_index, len(valid))
- # Like the sets above, preserve_nans contains indices of invalid values,
+ # preserve_nans contains indices of invalid values,
# but in this case, it is the final set of indices that need to be
# preserved as NaN after the interpolation.
@@ -493,27 +493,25 @@ def _interpolate_1d(
# are more than 'limit' away from the prior non-NaN.
# set preserve_nans based on direction using _interp_limit
- preserve_nans: list | set
if limit_direction == "forward":
- preserve_nans = start_nans | set(_interp_limit(invalid, limit, 0))
+ preserve_nans = np.union1d(start_nans, _interp_limit(invalid, limit, 0))
elif limit_direction == "backward":
- preserve_nans = end_nans | set(_interp_limit(invalid, 0, limit))
+ preserve_nans = np.union1d(end_nans, _interp_limit(invalid, 0, limit))
else:
# both directions... just use _interp_limit
- preserve_nans = set(_interp_limit(invalid, limit, limit))
+ preserve_nans = np.unique(_interp_limit(invalid, limit, limit))
# if limit_area is set, add either mid or outside indices
# to preserve_nans GH #16284
if limit_area == "inside":
# preserve NaNs on the outside
- preserve_nans |= start_nans | end_nans
+ preserve_nans = np.union1d(preserve_nans, start_nans)
+ preserve_nans = np.union1d(preserve_nans, end_nans)
elif limit_area == "outside":
# preserve NaNs on the inside
- mid_nans = all_nans - start_nans - end_nans
- preserve_nans |= mid_nans
-
- # sort preserve_nans and convert to list
- preserve_nans = sorted(preserve_nans)
+ mid_nans = np.setdiff1d(all_nans, start_nans, assume_unique=True)
+ mid_nans = np.setdiff1d(mid_nans, end_nans, assume_unique=True)
+ preserve_nans = np.union1d(preserve_nans, mid_nans)
is_datetimelike = yvalues.dtype.kind in "mM"
@@ -1027,7 +1025,7 @@ def clean_reindex_fill_method(method) -> ReindexMethod | None:
def _interp_limit(
invalid: npt.NDArray[np.bool_], fw_limit: int | None, bw_limit: int | None
-):
+) -> np.ndarray:
"""
Get indexers of values that won't be filled
because they exceed the limits.
@@ -1059,20 +1057,23 @@ def _interp_limit(invalid, fw_limit, bw_limit):
# 1. operate on the reversed array
# 2. subtract the returned indices from N - 1
N = len(invalid)
- f_idx = set()
- b_idx = set()
+ f_idx = np.array([], dtype=np.int64)
+ b_idx = np.array([], dtype=np.int64)
+ assume_unique = True
def inner(invalid, limit: int):
limit = min(limit, N)
- windowed = _rolling_window(invalid, limit + 1).all(1)
- idx = set(np.where(windowed)[0] + limit) | set(
- np.where((~invalid[: limit + 1]).cumsum() == 0)[0]
+ windowed = np.lib.stride_tricks.sliding_window_view(invalid, limit + 1).all(1)
+ idx = np.union1d(
+ np.where(windowed)[0] + limit,
+ np.where((~invalid[: limit + 1]).cumsum() == 0)[0],
)
return idx
if fw_limit is not None:
if fw_limit == 0:
- f_idx = set(np.where(invalid)[0])
+ f_idx = np.where(invalid)[0]
+ assume_unique = False
else:
f_idx = inner(invalid, fw_limit)
@@ -1082,26 +1083,8 @@ def inner(invalid, limit: int):
# just use forwards
return f_idx
else:
- b_idx_inv = list(inner(invalid[::-1], bw_limit))
- b_idx = set(N - 1 - np.asarray(b_idx_inv))
+ b_idx = N - 1 - inner(invalid[::-1], bw_limit)
if fw_limit == 0:
return b_idx
- return f_idx & b_idx
-
-
-def _rolling_window(a: npt.NDArray[np.bool_], window: int) -> npt.NDArray[np.bool_]:
- """
- [True, True, False, True, False], 2 ->
-
- [
- [True, True],
- [True, False],
- [False, True],
- [True, False],
- ]
- """
- # https://stackoverflow.com/a/6811241
- shape = a.shape[:-1] + (a.shape[-1] - window + 1, window)
- strides = a.strides + (a.strides[-1],)
- return np.lib.stride_tricks.as_strided(a, shape=shape, strides=strides)
+ return np.intersect1d(f_idx, b_idx, assume_unique=assume_unique)
| `_interpolate_1d` goes back and forth between Python objects and NumPy objects in order to use Python `set` methods. Refactoring to just use NumPy set routines instead | https://api.github.com/repos/pandas-dev/pandas/pulls/57997 | 2024-03-25T18:34:39Z | 2024-03-26T17:11:04Z | 2024-03-26T17:11:04Z | 2024-03-26T17:11:08Z |
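The NumPy routines the refactor leans on (`np.union1d`, `np.intersect1d`, `np.setdiff1d`) have plain set semantics on arrays of index pointers: each returns the sorted, unique result of the corresponding set operation. Pure-Python stand-ins for what the diff relies on (illustrative, not the NumPy implementations):

```python
def union1d(a, b):
    # np.union1d: sorted union of the unique values of both inputs
    return sorted(set(a) | set(b))

def intersect1d(a, b):
    # np.intersect1d: sorted intersection of unique values
    return sorted(set(a) & set(b))

def setdiff1d(a, b):
    # np.setdiff1d: sorted unique values of `a` not present in `b`
    return sorted(set(a) - set(b))
```

This is why the refactor can drop the `set(...)` round-trips: the NumPy versions produce the same membership results while staying in integer-array form, and `assume_unique=True` skips the internal deduplication when the caller already knows the inputs are unique.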
CLN: `pandas.concat` internal checks | diff --git a/pandas/core/reshape/concat.py b/pandas/core/reshape/concat.py
index 35a08e0167924..b1f662b6f231f 100644
--- a/pandas/core/reshape/concat.py
+++ b/pandas/core/reshape/concat.py
@@ -635,16 +635,13 @@ def _get_concat_axis(self) -> Index:
indexes, self.keys, self.levels, self.names
)
- self._maybe_check_integrity(concat_axis)
-
- return concat_axis
-
- def _maybe_check_integrity(self, concat_index: Index) -> None:
if self.verify_integrity:
- if not concat_index.is_unique:
- overlap = concat_index[concat_index.duplicated()].unique()
+ if not concat_axis.is_unique:
+ overlap = concat_axis[concat_axis.duplicated()].unique()
raise ValueError(f"Indexes have overlapping values: {overlap}")
+ return concat_axis
+
def _clean_keys_and_objs(
objs: Iterable[Series | DataFrame] | Mapping[HashableT, Series | DataFrame],
@@ -742,6 +739,12 @@ def _concat_indexes(indexes) -> Index:
return indexes[0].append(indexes[1:])
+def validate_unique_levels(levels: list[Index]) -> None:
+ for level in levels:
+ if not level.is_unique:
+ raise ValueError(f"Level values not unique: {level.tolist()}")
+
+
def _make_concat_multiindex(indexes, keys, levels=None, names=None) -> MultiIndex:
if (levels is None and isinstance(keys[0], tuple)) or (
levels is not None and len(levels) > 1
@@ -754,6 +757,7 @@ def _make_concat_multiindex(indexes, keys, levels=None, names=None) -> MultiInde
_, levels = factorize_from_iterables(zipped)
else:
levels = [ensure_index(x) for x in levels]
+ validate_unique_levels(levels)
else:
zipped = [keys]
if names is None:
@@ -763,12 +767,9 @@ def _make_concat_multiindex(indexes, keys, levels=None, names=None) -> MultiInde
levels = [ensure_index(keys).unique()]
else:
levels = [ensure_index(x) for x in levels]
+ validate_unique_levels(levels)
- for level in levels:
- if not level.is_unique:
- raise ValueError(f"Level values not unique: {level.tolist()}")
-
- if not all_indexes_same(indexes) or not all(level.is_unique for level in levels):
+ if not all_indexes_same(indexes):
codes_list = []
# things are potentially different sizes, so compute the exact codes
| * Only validate unique `levels` when needed
* Inline the singly-used `_maybe_check_integrity` check | https://api.github.com/repos/pandas-dev/pandas/pulls/57996 | 2024-03-25T18:27:31Z | 2024-03-26T21:34:19Z | 2024-03-26T21:34:19Z | 2024-03-26T21:34:37Z |
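The extracted `validate_unique_levels` helper just checks each level for duplicates and raises early, so the check now runs only on the explicit-`levels` code paths where it matters. A stdlib sketch of the same check, using plain lists in place of pandas `Index` objects:

```python
def validate_unique_levels(levels):
    """Raise ValueError if any level contains duplicate values,
    mirroring the error message in the diff above."""
    for level in levels:
        if len(set(level)) != len(level):
            raise ValueError(f"Level values not unique: {list(level)}")
```

Moving the validation inside the `levels is not None` branches also lets the subsequent `all_indexes_same(indexes)` test drop its redundant per-level uniqueness clause, since non-unique levels can no longer reach that point.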
DOC: Fix reference to rows in `read_csv(index_col)` error message | diff --git a/pandas/io/parsers/base_parser.py b/pandas/io/parsers/base_parser.py
index 3bbb7c83345e5..5a7d117b0543e 100644
--- a/pandas/io/parsers/base_parser.py
+++ b/pandas/io/parsers/base_parser.py
@@ -174,7 +174,7 @@ def __init__(self, kwds) -> None:
and all(map(is_integer, self.index_col))
):
raise ValueError(
- "index_col must only contain row numbers "
+ "index_col must only contain integers of column positions "
"when specifying a multi-index header"
)
else:
diff --git a/pandas/tests/io/parser/test_header.py b/pandas/tests/io/parser/test_header.py
index d185e83bfc027..85ce55b3bcf83 100644
--- a/pandas/tests/io/parser/test_header.py
+++ b/pandas/tests/io/parser/test_header.py
@@ -162,7 +162,7 @@ def test_header_multi_index(all_parsers):
{"index_col": ["foo", "bar"]},
(
"index_col must only contain "
- "row numbers when specifying "
+ "integers of column positions when specifying "
"a multi-index header"
),
),
| **Description**
The user passes column numbers (not row numbers) as the `index_col` argument to specify an index when calling e.g. `read_csv()`.
**Checklist**
- [x] ~closes #xxxx (Replace xxxx with the GitHub issue number)~ (No Specific Issue)
- [x] ~[Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature~ (No change to functionality)
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [x] ~Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.~ (No new args/methods/functions).
- [x] ~Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.~ (No change to functionality)
| https://api.github.com/repos/pandas-dev/pandas/pulls/57991 | 2024-03-25T04:18:00Z | 2024-03-26T17:07:56Z | 2024-03-26T17:07:56Z | 2024-03-26T17:19:46Z |
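The reworded check fires when a multi-row header is combined with label-based `index_col`: with a multi-index header, `index_col` entries must be column positions (integers), because the column labels are tuples that do not yet exist at parse time. A simplified stdlib sketch of the condition (the real check in `base_parser` also handles list-likeness and other edge cases):

```python
def check_index_col(index_col, header):
    """Reject non-integer index_col entries when header spans
    multiple rows, using the corrected message from the diff above."""
    multi_header = isinstance(header, list) and len(header) > 1
    if multi_header and index_col is not None:
        if not all(isinstance(c, int) for c in index_col):
            raise ValueError(
                "index_col must only contain integers of column positions "
                "when specifying a multi-index header"
            )
```

So `read_csv(..., header=[0, 1], index_col=["foo", "bar"])` is rejected, while `index_col=[0, 1]` (positions) is accepted.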
DOC: ecosystem.md: add pygwalker, add seaborn code example | diff --git a/web/pandas/community/ecosystem.md b/web/pandas/community/ecosystem.md
index 715a2fafbe87a..6cd67302b2a0e 100644
--- a/web/pandas/community/ecosystem.md
+++ b/web/pandas/community/ecosystem.md
@@ -82,6 +82,20 @@ pd.set_option("plotting.backend", "pandas_bokeh")
It is very similar to the matplotlib plotting backend, but provides
interactive web-based charts and maps.
+### [pygwalker](https://github.com/Kanaries/pygwalker)
+
+PyGWalker is an interactive data visualization and
+exploratory data analysis tool built upon Graphic Walker
+with support for visualization, cleaning, and annotation workflows.
+
+pygwalker can save interactively created charts
+to Graphic-Walker and Vega-Lite JSON.
+
+```
+import pygwalker as pyg
+pyg.walk(df)
+```
+
### [seaborn](https://seaborn.pydata.org)
Seaborn is a Python visualization library based on
@@ -94,6 +108,11 @@ pandas with the option to perform statistical estimation while plotting,
aggregating across observations and visualizing the fit of statistical
models to emphasize patterns in a dataset.
+```
+import seaborn as sns
+sns.set_theme()
+```
+
### [plotnine](https://github.com/has2k1/plotnine/)
Hadley Wickham's [ggplot2](https://ggplot2.tidyverse.org/) is a
|
| https://api.github.com/repos/pandas-dev/pandas/pulls/57990 | 2024-03-25T02:12:12Z | 2024-03-27T19:00:51Z | 2024-03-27T19:00:51Z | 2024-03-27T21:13:59Z |
BUG: Fix error for `boxplot` when using a pre-grouped `DataFrame` with more than one grouping | diff --git a/doc/source/whatsnew/v3.0.0.rst b/doc/source/whatsnew/v3.0.0.rst
index 74a19472ec835..15e85d0f90c5e 100644
--- a/doc/source/whatsnew/v3.0.0.rst
+++ b/doc/source/whatsnew/v3.0.0.rst
@@ -409,7 +409,7 @@ Period
Plotting
^^^^^^^^
--
+- Bug in :meth:`.DataFrameGroupBy.boxplot` failed when there were multiple groupings (:issue:`14701`)
-
Groupby/resample/rolling
diff --git a/pandas/plotting/_matplotlib/boxplot.py b/pandas/plotting/_matplotlib/boxplot.py
index b41e03d87b275..75b24cd42e062 100644
--- a/pandas/plotting/_matplotlib/boxplot.py
+++ b/pandas/plotting/_matplotlib/boxplot.py
@@ -533,14 +533,14 @@ def boxplot_frame_groupby(
)
axes = flatten_axes(axes)
- ret = pd.Series(dtype=object)
-
+ data = {}
for (key, group), ax in zip(grouped, axes):
d = group.boxplot(
ax=ax, column=column, fontsize=fontsize, rot=rot, grid=grid, **kwds
)
ax.set_title(pprint_thing(key))
- ret.loc[key] = d
+ data[key] = d
+ ret = pd.Series(data)
maybe_adjust_figure(fig, bottom=0.15, top=0.9, left=0.1, right=0.9, wspace=0.2)
else:
keys, frames = zip(*grouped)
diff --git a/pandas/tests/plotting/test_boxplot_method.py b/pandas/tests/plotting/test_boxplot_method.py
index 2dd45a9abc7a5..f8029a1c1ee40 100644
--- a/pandas/tests/plotting/test_boxplot_method.py
+++ b/pandas/tests/plotting/test_boxplot_method.py
@@ -740,3 +740,17 @@ def test_boxplot_multiindex_column(self):
expected_xticklabel = ["(bar, one)", "(bar, two)"]
result_xticklabel = [x.get_text() for x in axes.get_xticklabels()]
assert expected_xticklabel == result_xticklabel
+
+ @pytest.mark.parametrize("group", ["X", ["X", "Y"]])
+ def test_boxplot_multi_groupby_groups(self, group):
+ # GH 14701
+ rows = 20
+ df = DataFrame(
+ np.random.default_rng(12).normal(size=(rows, 2)), columns=["Col1", "Col2"]
+ )
+ df["X"] = Series(np.repeat(["A", "B"], int(rows / 2)))
+ df["Y"] = Series(np.tile(["C", "D"], int(rows / 2)))
+ grouped = df.groupby(group)
+ _check_plot_works(df.boxplot, by=group, default_axes=True)
+ _check_plot_works(df.plot.box, by=group, default_axes=True)
+ _check_plot_works(grouped.boxplot, default_axes=True)
| - [x] closes #14701
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [x] Added an entry in the latest `doc/source/whatsnew/v3.0.0.rst` file if fixing a bug or adding a new feature.
The original bug shows an indexing error when using `boxplot` on a pre-grouped `DataFrame`. For example:
`df.groupby(['X', 'Y']).boxplot()`
This error occurs because each group key is a tuple when grouping by multiple columns, and assigning via `Series.loc[tuple]` is interpreted as multi-axis indexing, which is not allowed on a one-dimensional `Series`. The fix collects the per-group results in a plain `dict` (where tuple keys are ordinary hashable keys) and constructs the `Series` from it in one step.
There are other comments on the bug thread that state "Groupby boxplot is not working even when a single 'by' is used", which is not the case.
I have included in the tests both a single `by` and multiple `by` covering three different methods of plotting a `boxplot` from a `DataFrame`:
1) `df.boxplot(by='X')`
2) `df.boxplot(by=['X','Y'])`
3) `df.plot.box(by='X')`
4) `df.plot.box(by=['X','Y'])`
5) `df.groupby('X').boxplot()`
6) **`df.groupby(['X','Y']).boxplot()`**
Of all of the above, only example 6 required the fix. | https://api.github.com/repos/pandas-dev/pandas/pulls/57985 | 2024-03-24T18:26:01Z | 2024-03-31T15:00:23Z | 2024-03-31T15:00:23Z | 2024-03-31T16:08:06Z |
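The core of the fix in the diff is replacing incremental `ret.loc[key] = d` assignment, which chokes on tuple keys, with a dict accumulation followed by a single `pd.Series(data)` construction. A stdlib sketch of the accumulation step (pandas-free, so the `Series` construction is only described in the comment):

```python
def collect_group_results(grouped):
    """grouped yields (key, value) pairs; keys may be tuples such as
    ('A', 'C') when grouping by two columns. A dict accepts them
    as-is, and pd.Series(data) can then be built in one step."""
    data = {}
    for key, value in grouped:
        data[key] = value
    return data
```

Because dicts preserve insertion order, the resulting `Series` keeps the groups in iteration order, matching the behavior of the previous per-key assignment for the single-grouping case.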
CLN: Enforce deprecations for EA.fillna | diff --git a/doc/source/whatsnew/v3.0.0.rst b/doc/source/whatsnew/v3.0.0.rst
index f225d384888e3..76cb3cb1f2e81 100644
--- a/doc/source/whatsnew/v3.0.0.rst
+++ b/doc/source/whatsnew/v3.0.0.rst
@@ -34,7 +34,6 @@ Other enhancements
- Allow dictionaries to be passed to :meth:`pandas.Series.str.replace` via ``pat`` parameter (:issue:`51748`)
- Support passing a :class:`Series` input to :func:`json_normalize` that retains the :class:`Series` :class:`Index` (:issue:`51452`)
- Users can globally disable any ``PerformanceWarning`` by setting the option ``mode.performance_warnings`` to ``False`` (:issue:`56920`)
--
.. ---------------------------------------------------------------------------
.. _whatsnew_300.notable_bug_fixes:
@@ -258,6 +257,7 @@ Removal of prior version deprecations/changes
- Unrecognized timezones when parsing strings to datetimes now raises a ``ValueError`` (:issue:`51477`)
- Removed the :class:`Grouper` attributes ``ax``, ``groups``, ``indexer``, and ``obj`` (:issue:`51206`, :issue:`51182`)
- Removed deprecated keyword ``verbose`` on :func:`read_csv` and :func:`read_table` (:issue:`56556`)
+- Removed the ``method`` keyword in ``ExtensionArray.fillna``, implement ``ExtensionArray._pad_or_backfill`` instead (:issue:`53621`)
- Removed the attribute ``dtypes`` from :class:`.DataFrameGroupBy` (:issue:`51997`)
.. ---------------------------------------------------------------------------
diff --git a/pandas/core/arrays/_mixins.py b/pandas/core/arrays/_mixins.py
index c1d0ade572e8a..7f4e6f6666382 100644
--- a/pandas/core/arrays/_mixins.py
+++ b/pandas/core/arrays/_mixins.py
@@ -33,7 +33,6 @@
from pandas.util._decorators import doc
from pandas.util._validators import (
validate_bool_kwarg,
- validate_fillna_kwargs,
validate_insert_loc,
)
@@ -336,13 +335,7 @@ def _pad_or_backfill(
return new_values
@doc(ExtensionArray.fillna)
- def fillna(
- self, value=None, method=None, limit: int | None = None, copy: bool = True
- ) -> Self:
- value, method = validate_fillna_kwargs(
- value, method, validate_scalar_dict_value=False
- )
-
+ def fillna(self, value=None, limit: int | None = None, copy: bool = True) -> Self:
mask = self.isna()
# error: Argument 2 to "check_value_size" has incompatible type
# "ExtensionArray"; expected "ndarray"
@@ -353,25 +346,12 @@ def fillna(
)
if mask.any():
- if method is not None:
- # (for now) when self.ndim == 2, we assume axis=0
- func = missing.get_fill_func(method, ndim=self.ndim)
- npvalues = self._ndarray.T
- if copy:
- npvalues = npvalues.copy()
- func(npvalues, limit=limit, mask=mask.T)
- npvalues = npvalues.T
-
- # TODO: NumpyExtensionArray didn't used to copy, need tests
- # for this
- new_values = self._from_backing_data(npvalues)
+ # fill with value
+ if copy:
+ new_values = self.copy()
else:
- # fill with value
- if copy:
- new_values = self.copy()
- else:
- new_values = self[:]
- new_values[mask] = value
+ new_values = self[:]
+ new_values[mask] = value
else:
# We validate the fill_value even if there is nothing to fill
if value is not None:
diff --git a/pandas/core/arrays/arrow/array.py b/pandas/core/arrays/arrow/array.py
index aaf43662ebde2..84b62563605ac 100644
--- a/pandas/core/arrays/arrow/array.py
+++ b/pandas/core/arrays/arrow/array.py
@@ -29,7 +29,6 @@
pa_version_under13p0,
)
from pandas.util._decorators import doc
-from pandas.util._validators import validate_fillna_kwargs
from pandas.core.dtypes.cast import (
can_hold_element,
@@ -1068,6 +1067,7 @@ def _pad_or_backfill(
# a kernel for duration types.
pass
+ # TODO: Why do we no longer need the above cases?
# TODO(3.0): after EA.fillna 'method' deprecation is enforced, we can remove
# this method entirely.
return super()._pad_or_backfill(
@@ -1078,21 +1078,15 @@ def _pad_or_backfill(
def fillna(
self,
value: object | ArrayLike | None = None,
- method: FillnaOptions | None = None,
limit: int | None = None,
copy: bool = True,
) -> Self:
- value, method = validate_fillna_kwargs(value, method)
-
if not self._hasna:
# TODO(CoW): Not necessary anymore when CoW is the default
return self.copy()
if limit is not None:
- return super().fillna(value=value, method=method, limit=limit, copy=copy)
-
- if method is not None:
- return super().fillna(method=method, limit=limit, copy=copy)
+ return super().fillna(value=value, limit=limit, copy=copy)
if isinstance(value, (np.ndarray, ExtensionArray)):
# Similar to check_value_size, but we do not mask here since we may
@@ -1118,7 +1112,7 @@ def fillna(
# a kernel for duration types.
pass
- return super().fillna(value=value, method=method, limit=limit, copy=copy)
+ return super().fillna(value=value, limit=limit, copy=copy)
def isin(self, values: ArrayLike) -> npt.NDArray[np.bool_]:
# short-circuit to return all False array.
diff --git a/pandas/core/arrays/base.py b/pandas/core/arrays/base.py
index 86831f072bb8f..76615704f2e33 100644
--- a/pandas/core/arrays/base.py
+++ b/pandas/core/arrays/base.py
@@ -38,7 +38,6 @@
from pandas.util._exceptions import find_stack_level
from pandas.util._validators import (
validate_bool_kwarg,
- validate_fillna_kwargs,
validate_insert_loc,
)
@@ -1007,31 +1006,6 @@ def _pad_or_backfill(
[<NA>, 2, 2, 3, <NA>, <NA>]
Length: 6, dtype: Int64
"""
-
- # If a 3rd-party EA has implemented this functionality in fillna,
- # we warn that they need to implement _pad_or_backfill instead.
- if (
- type(self).fillna is not ExtensionArray.fillna
- and type(self)._pad_or_backfill is ExtensionArray._pad_or_backfill
- ):
- # Check for _pad_or_backfill here allows us to call
- # super()._pad_or_backfill without getting this warning
- warnings.warn(
- "ExtensionArray.fillna 'method' keyword is deprecated. "
- "In a future version. arr._pad_or_backfill will be called "
- "instead. 3rd-party ExtensionArray authors need to implement "
- "_pad_or_backfill.",
- DeprecationWarning,
- stacklevel=find_stack_level(),
- )
- if limit_area is not None:
- raise NotImplementedError(
- f"{type(self).__name__} does not implement limit_area "
- "(added in pandas 2.2). 3rd-party ExtnsionArray authors "
- "need to add this argument to _pad_or_backfill."
- )
- return self.fillna(method=method, limit=limit)
-
mask = self.isna()
if mask.any():
@@ -1057,8 +1031,7 @@ def _pad_or_backfill(
def fillna(
self,
- value: object | ArrayLike | None = None,
- method: FillnaOptions | None = None,
+ value: object | ArrayLike,
limit: int | None = None,
copy: bool = True,
) -> Self:
@@ -1071,14 +1044,6 @@ def fillna(
If a scalar value is passed it is used to fill all missing values.
Alternatively, an array-like "value" can be given. It's expected
that the array-like have the same length as 'self'.
- method : {'backfill', 'bfill', 'pad', 'ffill', None}, default None
- Method to use for filling holes in reindexed Series:
-
- * pad / ffill: propagate last valid observation forward to next valid.
- * backfill / bfill: use NEXT valid observation to fill gap.
-
- .. deprecated:: 2.1.0
-
limit : int, default None
If method is specified, this is the maximum number of consecutive
NaN values to forward/backward fill. In other words, if there is
@@ -1086,9 +1051,6 @@ def fillna(
be partially filled. If method is not specified, this is the
maximum number of entries along the entire axis where NaNs will be
filled.
-
- .. deprecated:: 2.1.0
-
copy : bool, default True
Whether to make a copy of the data before filling. If False, then
the original should be modified and no new memory should be allocated.
@@ -1110,16 +1072,6 @@ def fillna(
[0, 0, 2, 3, 0, 0]
Length: 6, dtype: Int64
"""
- if method is not None:
- warnings.warn(
- f"The 'method' keyword in {type(self).__name__}.fillna is "
- "deprecated and will be removed in a future version.",
- FutureWarning,
- stacklevel=find_stack_level(),
- )
-
- value, method = validate_fillna_kwargs(value, method)
-
mask = self.isna()
# error: Argument 2 to "check_value_size" has incompatible type
# "ExtensionArray"; expected "ndarray"
@@ -1130,24 +1082,12 @@ def fillna(
)
if mask.any():
- if method is not None:
- meth = missing.clean_fill_method(method)
-
- npmask = np.asarray(mask)
- if meth == "pad":
- indexer = libalgos.get_fill_indexer(npmask, limit=limit)
- return self.take(indexer, allow_fill=True)
- else:
- # i.e. meth == "backfill"
- indexer = libalgos.get_fill_indexer(npmask[::-1], limit=limit)[::-1]
- return self[::-1].take(indexer, allow_fill=True)
+ # fill with value
+ if not copy:
+ new_values = self[:]
else:
- # fill with value
- if not copy:
- new_values = self[:]
- else:
- new_values = self.copy()
- new_values[mask] = value
+ new_values = self.copy()
+ new_values[mask] = value
else:
if not copy:
new_values = self[:]
diff --git a/pandas/core/arrays/interval.py b/pandas/core/arrays/interval.py
index 1ea32584403ba..56ea28c0b50f8 100644
--- a/pandas/core/arrays/interval.py
+++ b/pandas/core/arrays/interval.py
@@ -29,7 +29,6 @@
ArrayLike,
AxisInt,
Dtype,
- FillnaOptions,
IntervalClosedType,
NpDtype,
PositionalIndexer,
@@ -894,23 +893,7 @@ def max(self, *, axis: AxisInt | None = None, skipna: bool = True) -> IntervalOr
indexer = obj.argsort()[-1]
return obj[indexer]
- def _pad_or_backfill( # pylint: disable=useless-parent-delegation
- self,
- *,
- method: FillnaOptions,
- limit: int | None = None,
- limit_area: Literal["inside", "outside"] | None = None,
- copy: bool = True,
- ) -> Self:
- # TODO(3.0): after EA.fillna 'method' deprecation is enforced, we can remove
- # this method entirely.
- return super()._pad_or_backfill(
- method=method, limit=limit, limit_area=limit_area, copy=copy
- )
-
- def fillna(
- self, value=None, method=None, limit: int | None = None, copy: bool = True
- ) -> Self:
+ def fillna(self, value=None, limit: int | None = None, copy: bool = True) -> Self:
"""
Fill NA/NaN values using the specified method.
@@ -921,9 +904,6 @@ def fillna(
Alternatively, a Series or dict can be used to fill in different
values for each index. The value should not be a list. The
value(s) passed should be either Interval objects or NA/NaN.
- method : {'backfill', 'bfill', 'pad', 'ffill', None}, default None
- (Not implemented yet for IntervalArray)
- Method to use for filling holes in reindexed Series
limit : int, default None
(Not implemented yet for IntervalArray)
If method is specified, this is the maximum number of consecutive
@@ -944,8 +924,6 @@ def fillna(
"""
if copy is False:
raise NotImplementedError
- if method is not None:
- return super().fillna(value=value, method=method, limit=limit)
value_left, value_right = self._validate_scalar(value)
diff --git a/pandas/core/arrays/masked.py b/pandas/core/arrays/masked.py
index 108202f5e510b..d20d7f98b8aa8 100644
--- a/pandas/core/arrays/masked.py
+++ b/pandas/core/arrays/masked.py
@@ -38,7 +38,6 @@
)
from pandas.errors import AbstractMethodError
from pandas.util._decorators import doc
-from pandas.util._validators import validate_fillna_kwargs
from pandas.core.dtypes.base import ExtensionDtype
from pandas.core.dtypes.common import (
@@ -237,32 +236,18 @@ def _pad_or_backfill(
return new_values
@doc(ExtensionArray.fillna)
- def fillna(
- self, value=None, method=None, limit: int | None = None, copy: bool = True
- ) -> Self:
- value, method = validate_fillna_kwargs(value, method)
-
+ def fillna(self, value=None, limit: int | None = None, copy: bool = True) -> Self:
mask = self._mask
value = missing.check_value_size(value, mask, len(self))
if mask.any():
- if method is not None:
- func = missing.get_fill_func(method, ndim=self.ndim)
- npvalues = self._data.T
- new_mask = mask.T
- if copy:
- npvalues = npvalues.copy()
- new_mask = new_mask.copy()
- func(npvalues, limit=limit, mask=new_mask)
- return self._simple_new(npvalues.T, new_mask.T)
+ # fill with value
+ if copy:
+ new_values = self.copy()
else:
- # fill with value
- if copy:
- new_values = self.copy()
- else:
- new_values = self[:]
- new_values[mask] = value
+ new_values = self[:]
+ new_values[mask] = value
else:
if copy:
new_values = self.copy()
diff --git a/pandas/core/arrays/period.py b/pandas/core/arrays/period.py
index d05f857f46179..e73eba710ec39 100644
--- a/pandas/core/arrays/period.py
+++ b/pandas/core/arrays/period.py
@@ -847,19 +847,6 @@ def _pad_or_backfill(
else:
return self
- def fillna(
- self, value=None, method=None, limit: int | None = None, copy: bool = True
- ) -> Self:
- if method is not None:
- # view as dt64 so we get treated as timelike in core.missing,
- # similar to dtl._period_dispatch
- dta = self.view("M8[ns]")
- result = dta.fillna(value=value, method=method, limit=limit, copy=copy)
- # error: Incompatible return value type (got "Union[ExtensionArray,
- # ndarray[Any, Any]]", expected "PeriodArray")
- return result.view(self.dtype) # type: ignore[return-value]
- return super().fillna(value=value, method=method, limit=limit, copy=copy)
-
# ------------------------------------------------------------------
# Arithmetic Methods
diff --git a/pandas/core/arrays/sparse/array.py b/pandas/core/arrays/sparse/array.py
index bf44e5e099530..bdcb3219a9875 100644
--- a/pandas/core/arrays/sparse/array.py
+++ b/pandas/core/arrays/sparse/array.py
@@ -98,10 +98,7 @@ class ellipsis(Enum):
from scipy.sparse import spmatrix
- from pandas._typing import (
- FillnaOptions,
- NumpySorter,
- )
+ from pandas._typing import NumpySorter
SparseIndexKind = Literal["integer", "block"]
@@ -717,24 +714,9 @@ def isna(self) -> Self: # type: ignore[override]
mask[self.sp_index.indices] = isna(self.sp_values)
return type(self)(mask, fill_value=False, dtype=dtype)
- def _pad_or_backfill( # pylint: disable=useless-parent-delegation
- self,
- *,
- method: FillnaOptions,
- limit: int | None = None,
- limit_area: Literal["inside", "outside"] | None = None,
- copy: bool = True,
- ) -> Self:
- # TODO(3.0): We can remove this method once deprecation for fillna method
- # keyword is enforced.
- return super()._pad_or_backfill(
- method=method, limit=limit, limit_area=limit_area, copy=copy
- )
-
def fillna(
self,
value=None,
- method: FillnaOptions | None = None,
limit: int | None = None,
copy: bool = True,
) -> Self:
@@ -743,17 +725,8 @@ def fillna(
Parameters
----------
- value : scalar, optional
- method : str, optional
-
- .. warning::
-
- Using 'method' will result in high memory use,
- as all `fill_value` methods will be converted to
- an in-memory ndarray
-
+ value : scalar
limit : int, optional
-
copy: bool, default True
Ignored for SparseArray.
@@ -773,22 +746,15 @@ def fillna(
When ``self.fill_value`` is not NA, the result dtype will be
``self.dtype``. Again, this preserves the amount of memory used.
"""
- if (method is None and value is None) or (
- method is not None and value is not None
- ):
- raise ValueError("Must specify one of 'method' or 'value'.")
-
- if method is not None:
- return super().fillna(method=method, limit=limit)
+ if value is None:
+ raise ValueError("Must specify 'value'.")
+ new_values = np.where(isna(self.sp_values), value, self.sp_values)
+ if self._null_fill_value:
+ # This is essentially just updating the dtype.
+ new_dtype = SparseDtype(self.dtype.subtype, fill_value=value)
else:
- new_values = np.where(isna(self.sp_values), value, self.sp_values)
-
- if self._null_fill_value:
- # This is essentially just updating the dtype.
- new_dtype = SparseDtype(self.dtype.subtype, fill_value=value)
- else:
- new_dtype = self.dtype
+ new_dtype = self.dtype
return self._simple_new(new_values, self._sparse_index, new_dtype)
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index c0eda7f022d8f..f7607820180c3 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -99,7 +99,6 @@
check_dtype_backend,
validate_ascending,
validate_bool_kwarg,
- validate_fillna_kwargs,
validate_inclusive,
)
@@ -9578,7 +9577,6 @@ def _align_series(
# fill
fill_na = notna(fill_value)
if fill_na:
- fill_value, _ = validate_fillna_kwargs(fill_value, None)
left = left.fillna(fill_value)
right = right.fillna(fill_value)
diff --git a/pandas/core/internals/blocks.py b/pandas/core/internals/blocks.py
index f6bf5dffb5f48..a7cdc7c39754d 100644
--- a/pandas/core/internals/blocks.py
+++ b/pandas/core/internals/blocks.py
@@ -1873,13 +1873,11 @@ def fillna(
copy, refs = self._get_refs_and_copy(inplace)
try:
- new_values = self.values.fillna(
- value=value, method=None, limit=limit, copy=copy
- )
+ new_values = self.values.fillna(value=value, limit=limit, copy=copy)
except TypeError:
# 3rd party EA that has not implemented copy keyword yet
refs = None
- new_values = self.values.fillna(value=value, method=None, limit=limit)
+ new_values = self.values.fillna(value=value, limit=limit)
# issue the warning *after* retrying, in case the TypeError
# was caused by an invalid fill_value
warnings.warn(
diff --git a/pandas/tests/arrays/categorical/test_missing.py b/pandas/tests/arrays/categorical/test_missing.py
index 332d31e9e3fc2..9d4b78ce9944e 100644
--- a/pandas/tests/arrays/categorical/test_missing.py
+++ b/pandas/tests/arrays/categorical/test_missing.py
@@ -62,34 +62,6 @@ def test_set_item_nan(self):
exp = Categorical([1, np.nan, 3], categories=[1, 2, 3])
tm.assert_categorical_equal(cat, exp)
- @pytest.mark.parametrize(
- "fillna_kwargs, msg",
- [
- (
- {"value": 1, "method": "ffill"},
- "Cannot specify both 'value' and 'method'.",
- ),
- ({}, "Must specify a fill 'value' or 'method'."),
- ({"method": "bad"}, "Invalid fill method. Expecting .* bad"),
- (
- {"value": Series([1, 2, 3, 4, "a"])},
- "Cannot setitem on a Categorical with a new category",
- ),
- ],
- )
- def test_fillna_raises(self, fillna_kwargs, msg):
- # https://github.com/pandas-dev/pandas/issues/19682
- # https://github.com/pandas-dev/pandas/issues/13628
- cat = Categorical([1, 2, 3, None, None])
-
- if len(fillna_kwargs) == 1 and "value" in fillna_kwargs:
- err = TypeError
- else:
- err = ValueError
-
- with pytest.raises(err, match=msg):
- cat.fillna(**fillna_kwargs)
-
@pytest.mark.parametrize("named", [True, False])
def test_fillna_iterable_category(self, named):
# https://github.com/pandas-dev/pandas/issues/21097
diff --git a/pandas/tests/extension/conftest.py b/pandas/tests/extension/conftest.py
index 5ae0864190f10..97fb5a0bc5066 100644
--- a/pandas/tests/extension/conftest.py
+++ b/pandas/tests/extension/conftest.py
@@ -189,7 +189,7 @@ def use_numpy(request):
def fillna_method(request):
"""
Parametrized fixture giving method parameters 'ffill' and 'bfill' for
- Series.fillna(method=<method>) testing.
+ Series.<method> testing.
"""
return request.param
diff --git a/pandas/tests/extension/decimal/array.py b/pandas/tests/extension/decimal/array.py
index 709cff59cd824..59f313b4c9edb 100644
--- a/pandas/tests/extension/decimal/array.py
+++ b/pandas/tests/extension/decimal/array.py
@@ -287,17 +287,10 @@ def value_counts(self, dropna: bool = True):
return value_counts(self.to_numpy(), dropna=dropna)
# We override fillna here to simulate a 3rd party EA that has done so. This
- # lets us test the deprecation telling authors to implement _pad_or_backfill
- # Simulate a 3rd-party EA that has not yet updated to include a "copy"
+ # lets us test a 3rd-party EA that has not yet updated to include a "copy"
# keyword in its fillna method.
- # error: Signature of "fillna" incompatible with supertype "ExtensionArray"
- def fillna( # type: ignore[override]
- self,
- value=None,
- method=None,
- limit: int | None = None,
- ):
- return super().fillna(value=value, method=method, limit=limit, copy=True)
+ def fillna(self, value=None, limit=None):
+ return super().fillna(value=value, limit=limit, copy=True)
def to_decimal(values, context=None):
diff --git a/pandas/tests/extension/decimal/test_decimal.py b/pandas/tests/extension/decimal/test_decimal.py
index bed3ec62f43da..a2721908e858f 100644
--- a/pandas/tests/extension/decimal/test_decimal.py
+++ b/pandas/tests/extension/decimal/test_decimal.py
@@ -137,86 +137,6 @@ def test_fillna_frame(self, data_missing):
):
super().test_fillna_frame(data_missing)
- def test_fillna_limit_pad(self, data_missing):
- msg = "ExtensionArray.fillna 'method' keyword is deprecated"
- with tm.assert_produces_warning(
- DeprecationWarning,
- match=msg,
- check_stacklevel=False,
- raise_on_extra_warnings=False,
- ):
- super().test_fillna_limit_pad(data_missing)
-
- msg = "The 'method' keyword in DecimalArray.fillna is deprecated"
- with tm.assert_produces_warning(
- FutureWarning,
- match=msg,
- check_stacklevel=False,
- raise_on_extra_warnings=False,
- ):
- super().test_fillna_limit_pad(data_missing)
-
- @pytest.mark.parametrize(
- "limit_area, input_ilocs, expected_ilocs",
- [
- ("outside", [1, 0, 0, 0, 1], [1, 0, 0, 0, 1]),
- ("outside", [1, 0, 1, 0, 1], [1, 0, 1, 0, 1]),
- ("outside", [0, 1, 1, 1, 0], [0, 1, 1, 1, 1]),
- ("outside", [0, 1, 0, 1, 0], [0, 1, 0, 1, 1]),
- ("inside", [1, 0, 0, 0, 1], [1, 1, 1, 1, 1]),
- ("inside", [1, 0, 1, 0, 1], [1, 1, 1, 1, 1]),
- ("inside", [0, 1, 1, 1, 0], [0, 1, 1, 1, 0]),
- ("inside", [0, 1, 0, 1, 0], [0, 1, 1, 1, 0]),
- ],
- )
- def test_ffill_limit_area(
- self, data_missing, limit_area, input_ilocs, expected_ilocs
- ):
- # GH#56616
- msg = "ExtensionArray.fillna 'method' keyword is deprecated"
- with tm.assert_produces_warning(
- DeprecationWarning,
- match=msg,
- check_stacklevel=False,
- raise_on_extra_warnings=False,
- ):
- msg = "DecimalArray does not implement limit_area"
- with pytest.raises(NotImplementedError, match=msg):
- super().test_ffill_limit_area(
- data_missing, limit_area, input_ilocs, expected_ilocs
- )
-
- def test_fillna_limit_backfill(self, data_missing):
- msg = "ExtensionArray.fillna 'method' keyword is deprecated"
- with tm.assert_produces_warning(
- DeprecationWarning,
- match=msg,
- check_stacklevel=False,
- raise_on_extra_warnings=False,
- ):
- super().test_fillna_limit_backfill(data_missing)
-
- msg = "The 'method' keyword in DecimalArray.fillna is deprecated"
- with tm.assert_produces_warning(
- FutureWarning,
- match=msg,
- check_stacklevel=False,
- raise_on_extra_warnings=False,
- ):
- super().test_fillna_limit_backfill(data_missing)
-
- def test_fillna_no_op_returns_copy(self, data):
- msg = "|".join(
- [
- "ExtensionArray.fillna 'method' keyword is deprecated",
- "The 'method' keyword in DecimalArray.fillna is deprecated",
- ]
- )
- with tm.assert_produces_warning(
- (FutureWarning, DeprecationWarning), match=msg, check_stacklevel=False
- ):
- super().test_fillna_no_op_returns_copy(data)
-
def test_fillna_series(self, data_missing):
msg = "ExtensionArray.fillna added a 'copy' keyword"
with tm.assert_produces_warning(
@@ -224,18 +144,6 @@ def test_fillna_series(self, data_missing):
):
super().test_fillna_series(data_missing)
- def test_fillna_series_method(self, data_missing, fillna_method):
- msg = "|".join(
- [
- "ExtensionArray.fillna 'method' keyword is deprecated",
- "The 'method' keyword in DecimalArray.fillna is deprecated",
- ]
- )
- with tm.assert_produces_warning(
- (FutureWarning, DeprecationWarning), match=msg, check_stacklevel=False
- ):
- super().test_fillna_series_method(data_missing, fillna_method)
-
@pytest.mark.parametrize("dropna", [True, False])
def test_value_counts(self, all_data, dropna):
all_data = all_data[:10]
diff --git a/pandas/tests/extension/test_arrow.py b/pandas/tests/extension/test_arrow.py
index 11a9f4f22167f..9b2251d0b7d4a 100644
--- a/pandas/tests/extension/test_arrow.py
+++ b/pandas/tests/extension/test_arrow.py
@@ -706,10 +706,6 @@ def test_fillna_no_op_returns_copy(self, data):
assert result is not data
tm.assert_extension_array_equal(result, data)
- result = data.fillna(method="backfill")
- assert result is not data
- tm.assert_extension_array_equal(result, data)
-
@pytest.mark.xfail(
reason="GH 45419: pyarrow.ChunkedArray does not support views", run=False
)
diff --git a/pandas/tests/extension/test_string.py b/pandas/tests/extension/test_string.py
index c09d4d315451f..49ad3fce92a5c 100644
--- a/pandas/tests/extension/test_string.py
+++ b/pandas/tests/extension/test_string.py
@@ -136,10 +136,6 @@ def test_fillna_no_op_returns_copy(self, data):
assert result is not data
tm.assert_extension_array_equal(result, data)
- result = data.fillna(method="backfill")
- assert result is not data
- tm.assert_extension_array_equal(result, data)
-
def _get_expected_exception(
self, op_name: str, obj, other
) -> type[Exception] | None:
| - [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
Ref: #53621
Ran into a little trouble here - `NDFrame.fillna` has a `limit` argument that wasn't deprecated and works, but in `EA.fillna` it is deprecated and currently ignored. This PR keeps the `limit` argument in `EA.fillna` and removes the deprecation, but the implementation still ignores it. If we want to keep it, I plan on doing a follow-up implementing `limit` across all EAs (I think this should be straightforward except maybe for Sparse - not sure).
As an alternative, we could also deprecate `limit` on `NDFrame.fillna` (not my preference), and then I can keep but not enforce the deprecation on `EA.fillna` here.
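For context, the `limit` semantics that a follow-up would need to implement cap how many NA entries get filled. A plain-Python sketch of value-fill with a limit (illustrative only - the function name is made up and this is not the pandas implementation):

```python
import math

def fillna_with_limit(values, value, limit=None):
    # Fill NaN entries with `value`, touching at most `limit` of them;
    # any NaNs beyond the limit are left in place.
    out, filled = list(values), 0
    for i, v in enumerate(out):
        if isinstance(v, float) and math.isnan(v):
            if limit is not None and filled >= limit:
                break
            out[i] = value
            filled += 1
    return out

print(fillna_with_limit([1.0, float("nan"), float("nan"), 4.0], 0.0, limit=1))
# [1.0, 0.0, nan, 4.0]
```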
cc @jbrockmendel | https://api.github.com/repos/pandas-dev/pandas/pulls/57983 | 2024-03-24T14:03:06Z | 2024-03-25T17:38:07Z | 2024-03-25T17:38:07Z | 2024-03-25T20:35:50Z |
DOC Add documentation for how pandas rounds values in Series.round and Dataframe.round methods | diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index 5d10a5541f556..2222164da90c7 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -10703,6 +10703,12 @@ def round(
numpy.around : Round a numpy array to the given number of decimals.
Series.round : Round a Series to the given number of decimals.
+ Notes
+ -----
+ For values exactly halfway between rounded decimal values, pandas rounds
+ to the nearest even value (e.g. -0.5 and 0.5 round to 0.0, 1.5 and 2.5
+ round to 2.0, etc.).
+
Examples
--------
>>> df = pd.DataFrame(
diff --git a/pandas/core/series.py b/pandas/core/series.py
index 08e56cb4925b3..0be7a0a7aaa82 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -2509,13 +2509,21 @@ def round(self, decimals: int = 0, *args, **kwargs) -> Series:
numpy.around : Round values of an np.array.
DataFrame.round : Round values of a DataFrame.
+ Notes
+ -----
+ For values exactly halfway between rounded decimal values, pandas rounds
+ to the nearest even value (e.g. -0.5 and 0.5 round to 0.0, 1.5 and 2.5
+ round to 2.0, etc.).
+
Examples
--------
- >>> s = pd.Series([0.1, 1.3, 2.7])
+ >>> s = pd.Series([-0.5, 0.1, 2.5, 1.3, 2.7])
>>> s.round()
- 0 0.0
- 1 1.0
- 2 3.0
+ 0 -0.0
+ 1 0.0
+ 2 2.0
+ 3 1.0
+ 4 3.0
dtype: float64
"""
nv.validate_round(args, kwargs)
| - [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
Description:
The pandas documentation did not describe how values exactly halfway between two rounded decimal values are rounded (they are rounded to the nearest even value). I added documentation for this behaviour. | https://api.github.com/repos/pandas-dev/pandas/pulls/57981 | 2024-03-24T11:59:13Z | 2024-03-24T22:13:41Z | 2024-03-24T22:13:41Z | 2024-03-25T08:05:26Z |
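The tie-breaking rule being documented can be checked directly: Python's built-in `round()` follows the same IEEE 754 round-half-to-even ("banker's rounding") convention that numpy's rounding, used by pandas for float data, follows:

```python
# Halfway values round to the nearest even result rather than always up:
values = [-0.5, 0.5, 1.5, 2.5, 3.5]
print([round(v) for v in values])  # [0, 0, 2, 2, 4]
```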
DOC: clarify three documentation strings in base.py | diff --git a/pandas/core/base.py b/pandas/core/base.py
index 987136ffdff7d..d43222f1acd11 100644
--- a/pandas/core/base.py
+++ b/pandas/core/base.py
@@ -1065,7 +1065,7 @@ def nunique(self, dropna: bool = True) -> int:
@property
def is_unique(self) -> bool:
"""
- Return boolean if values in the object are unique.
+ Return True if values in the object are unique.
Returns
-------
@@ -1086,7 +1086,7 @@ def is_unique(self) -> bool:
@property
def is_monotonic_increasing(self) -> bool:
"""
- Return boolean if values in the object are monotonically increasing.
+ Return True if values in the object are monotonically increasing.
Returns
-------
@@ -1109,7 +1109,7 @@ def is_monotonic_increasing(self) -> bool:
@property
def is_monotonic_decreasing(self) -> bool:
"""
- Return boolean if values in the object are monotonically decreasing.
+ Return True if values in the object are monotonically decreasing.
Returns
-------
| This change is consistent with other documentation strings in this file.
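For reference, the non-strict pairwise comparison these docstrings describe can be sketched in plain Python (a toy model that ignores NA handling; not the pandas implementation):

```python
def is_monotonic_increasing(values):
    # True when every element is <= its successor (non-strict ordering,
    # so repeated values are allowed).
    return all(a <= b for a, b in zip(values, values[1:]))

print(is_monotonic_increasing([1, 2, 2, 3]))  # True
print(is_monotonic_increasing([3, 1, 2]))     # False
```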
| https://api.github.com/repos/pandas-dev/pandas/pulls/57978 | 2024-03-23T20:42:18Z | 2024-03-25T17:43:31Z | 2024-03-25T17:43:31Z | 2024-03-25T17:43:44Z |
PDEP: Change status of CoW proposal to implemented | diff --git a/web/pandas/pdeps/0007-copy-on-write.md b/web/pandas/pdeps/0007-copy-on-write.md
index e45fbaf555bc1..f5adb6a571120 100644
--- a/web/pandas/pdeps/0007-copy-on-write.md
+++ b/web/pandas/pdeps/0007-copy-on-write.md
@@ -1,7 +1,7 @@
# PDEP-7: Consistent copy/view semantics in pandas with Copy-on-Write
- Created: July 2021
-- Status: Accepted
+- Status: Implemented
- Discussion: [#36195](https://github.com/pandas-dev/pandas/issues/36195)
- Author: [Joris Van den Bossche](https://github.com/jorisvandenbossche)
- Revision: 1
| We already enabled CoW by default and removed the legacy mode | https://api.github.com/repos/pandas-dev/pandas/pulls/57977 | 2024-03-23T16:58:11Z | 2024-03-25T17:41:01Z | 2024-03-25T17:41:01Z | 2024-03-25T21:32:57Z |
DOC: fix list indentation in pandas.DataFrame.stack | diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index 5d10a5541f556..8fb400872378c 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -9288,10 +9288,9 @@ def stack(
DataFrame. The new inner-most levels are created by pivoting the
columns of the current dataframe:
- - if the columns have a single level, the output is a Series;
- - if the columns have multiple levels, the new index
- level(s) is (are) taken from the prescribed level(s) and
- the output is a DataFrame.
+ - if the columns have a single level, the output is a Series;
+ - if the columns have multiple levels, the new index level(s) is (are)
+ taken from the prescribed level(s) and the output is a DataFrame.
Parameters
----------
| Fixed the grey-out box around the list of options for multiple levels of columns. | https://api.github.com/repos/pandas-dev/pandas/pulls/57975 | 2024-03-23T16:38:04Z | 2024-03-25T11:40:22Z | 2024-03-25T11:40:22Z | 2024-03-25T11:40:22Z |
BUG: Fixed ADBC to_sql creation of table when using public schema | diff --git a/doc/source/whatsnew/v2.2.2.rst b/doc/source/whatsnew/v2.2.2.rst
index d0f8951ac07ad..9e1a883d47cf8 100644
--- a/doc/source/whatsnew/v2.2.2.rst
+++ b/doc/source/whatsnew/v2.2.2.rst
@@ -25,6 +25,7 @@ Bug fixes
- :meth:`DataFrame.__dataframe__` was producing incorrect data buffers when the column's type was nullable boolean (:issue:`55332`)
- :meth:`DataFrame.__dataframe__` was showing bytemask instead of bitmask for ``'string[pyarrow]'`` validity buffer (:issue:`57762`)
- :meth:`DataFrame.__dataframe__` was showing non-null validity buffer (instead of ``None``) ``'string[pyarrow]'`` without missing values (:issue:`57761`)
+- :meth:`DataFrame.to_sql` was failing to find the right table when using the schema argument (:issue:`57539`)
.. ---------------------------------------------------------------------------
.. _whatsnew_222.other:
diff --git a/pandas/io/sql.py b/pandas/io/sql.py
index b80487abbc4ab..aa9d0d88ae69a 100644
--- a/pandas/io/sql.py
+++ b/pandas/io/sql.py
@@ -2380,7 +2380,9 @@ def to_sql(
raise ValueError("datatypes not supported") from exc
with self.con.cursor() as cur:
- total_inserted = cur.adbc_ingest(table_name, tbl, mode=mode)
+ total_inserted = cur.adbc_ingest(
+ table_name=name, data=tbl, mode=mode, db_schema_name=schema
+ )
self.con.commit()
return total_inserted
diff --git a/pandas/tests/io/test_sql.py b/pandas/tests/io/test_sql.py
index c8f4d68230e5b..67b1311a5a798 100644
--- a/pandas/tests/io/test_sql.py
+++ b/pandas/tests/io/test_sql.py
@@ -1373,6 +1373,30 @@ def insert_on_conflict(table, conn, keys, data_iter):
pandasSQL.drop_table("test_insert_conflict")
+@pytest.mark.parametrize("conn", all_connectable)
+def test_to_sql_on_public_schema(conn, request):
+ if "sqlite" in conn or "mysql" in conn:
+ request.applymarker(
+ pytest.mark.xfail(
+ reason="test for public schema only specific to postgresql"
+ )
+ )
+
+ conn = request.getfixturevalue(conn)
+
+ test_data = DataFrame([[1, 2.1, "a"], [2, 3.1, "b"]], columns=list("abc"))
+ test_data.to_sql(
+ name="test_public_schema",
+ con=conn,
+ if_exists="append",
+ index=False,
+ schema="public",
+ )
+
+ df_out = sql.read_sql_table("test_public_schema", conn, schema="public")
+ tm.assert_frame_equal(test_data, df_out)
+
+
@pytest.mark.parametrize("conn", mysql_connectable)
def test_insertion_method_on_conflict_update(conn, request):
# GH 14553: Example in to_sql docstring
| Problem: the table on the public schema was lost when pandas tried to create it.
Solution: passed the `db_schema_name` argument to `adbc_ingest` to specify the schema name.
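As an aside, why qualifying the target namespace matters can be illustrated with stdlib `sqlite3` (an analogy only - the actual fix is the `db_schema_name` argument to ADBC's `adbc_ingest`, shown in the diff above):

```python
import sqlite3

con = sqlite3.connect(":memory:")
# Attach a second database under the name "public", analogous to a schema.
con.execute("ATTACH DATABASE ':memory:' AS public")
# Without qualifying the namespace, the table would land in the default
# database -- the same class of problem db_schema_name avoids.
con.execute("CREATE TABLE public.test_public_schema (a INTEGER)")
con.execute("INSERT INTO public.test_public_schema VALUES (1)")
rows = con.execute("SELECT a FROM public.test_public_schema").fetchall()
print(rows)  # [(1,)]
```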
- [x] closes #57539 (Replace xxxx with the GitHub issue number)
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [x] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/57974 | 2024-03-23T16:19:24Z | 2024-03-28T17:55:54Z | 2024-03-28T17:55:54Z | 2024-03-28T17:56:04Z |
Changed the strings to make code simpler | diff --git a/asv_bench/benchmarks/categoricals.py b/asv_bench/benchmarks/categoricals.py
index 1716110b619d6..69697906e493e 100644
--- a/asv_bench/benchmarks/categoricals.py
+++ b/asv_bench/benchmarks/categoricals.py
@@ -88,7 +88,7 @@ def setup(self):
)
for col in ("int", "float", "timestamp"):
- self.df[col + "_as_str"] = self.df[col].astype(str)
+ self.df[f"{col}_as_str"] = self.df[col].astype(str)
for col in self.df.columns:
self.df[col] = self.df[col].astype("category")
| Changed the strings to make code simpler
- [X] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [X] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
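The two spellings touched by the benchmark change are equivalent; the f-string simply inlines the concatenation:

```python
# Both forms build the same column name for each dtype prefix:
for col in ("int", "float", "timestamp"):
    assert col + "_as_str" == f"{col}_as_str"

print(f"{'int'}_as_str")  # int_as_str
```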
| https://api.github.com/repos/pandas-dev/pandas/pulls/57973 | 2024-03-23T16:10:53Z | 2024-03-25T17:44:31Z | 2024-03-25T17:44:31Z | 2024-03-25T17:44:38Z |
CLN: Enforce deprecation of argmin/max and idxmin/max with NA values | diff --git a/doc/source/whatsnew/v3.0.0.rst b/doc/source/whatsnew/v3.0.0.rst
index f225d384888e3..c5d032f9dace5 100644
--- a/doc/source/whatsnew/v3.0.0.rst
+++ b/doc/source/whatsnew/v3.0.0.rst
@@ -259,6 +259,7 @@ Removal of prior version deprecations/changes
- Removed the :class:`Grouper` attributes ``ax``, ``groups``, ``indexer``, and ``obj`` (:issue:`51206`, :issue:`51182`)
- Removed deprecated keyword ``verbose`` on :func:`read_csv` and :func:`read_table` (:issue:`56556`)
- Removed the attribute ``dtypes`` from :class:`.DataFrameGroupBy` (:issue:`51997`)
+- Enforced deprecation of ``argmin``, ``argmax``, ``idxmin``, and ``idxmax`` returning a result when ``skipna=False`` and an NA value is encountered or all values are NA values; these operations will now raise in such cases (:issue:`33941`, :issue:`51276`)
.. ---------------------------------------------------------------------------
.. _whatsnew_300.performance:
diff --git a/pandas/core/base.py b/pandas/core/base.py
index 987136ffdff7d..80919130cab63 100644
--- a/pandas/core/base.py
+++ b/pandas/core/base.py
@@ -14,7 +14,6 @@
final,
overload,
)
-import warnings
import numpy as np
@@ -35,7 +34,6 @@
cache_readonly,
doc,
)
-from pandas.util._exceptions import find_stack_level
from pandas.core.dtypes.cast import can_hold_element
from pandas.core.dtypes.common import (
@@ -686,7 +684,8 @@ def argmax(
axis : {{None}}
Unused. Parameter needed for compatibility with DataFrame.
skipna : bool, default True
- Exclude NA/null values when showing the result.
+ Exclude NA/null values. If the entire Series is NA, or if ``skipna=False``
+ and there is an NA value, this method will raise a ``ValueError``.
*args, **kwargs
Additional arguments and keywords for compatibility with NumPy.
@@ -736,28 +735,15 @@ def argmax(
nv.validate_minmax_axis(axis)
skipna = nv.validate_argmax_with_skipna(skipna, args, kwargs)
+ if skipna and len(delegate) > 0 and isna(delegate).all():
+ raise ValueError("Encountered all NA values")
+ elif not skipna and isna(delegate).any():
+ raise ValueError("Encountered an NA value with skipna=False")
+
if isinstance(delegate, ExtensionArray):
- if not skipna and delegate.isna().any():
- warnings.warn(
- f"The behavior of {type(self).__name__}.argmax/argmin "
- "with skipna=False and NAs, or with all-NAs is deprecated. "
- "In a future version this will raise ValueError.",
- FutureWarning,
- stacklevel=find_stack_level(),
- )
- return -1
- else:
- return delegate.argmax()
+ return delegate.argmax()
else:
result = nanops.nanargmax(delegate, skipna=skipna)
- if result == -1:
- warnings.warn(
- f"The behavior of {type(self).__name__}.argmax/argmin "
- "with skipna=False and NAs, or with all-NAs is deprecated. "
- "In a future version this will raise ValueError.",
- FutureWarning,
- stacklevel=find_stack_level(),
- )
# error: Incompatible return value type (got "Union[int, ndarray]", expected
# "int")
return result # type: ignore[return-value]
@@ -770,28 +756,15 @@ def argmin(
nv.validate_minmax_axis(axis)
skipna = nv.validate_argmin_with_skipna(skipna, args, kwargs)
+ if skipna and len(delegate) > 0 and isna(delegate).all():
+ raise ValueError("Encountered all NA values")
+ elif not skipna and isna(delegate).any():
+ raise ValueError("Encountered an NA value with skipna=False")
+
if isinstance(delegate, ExtensionArray):
- if not skipna and delegate.isna().any():
- warnings.warn(
- f"The behavior of {type(self).__name__}.argmax/argmin "
- "with skipna=False and NAs, or with all-NAs is deprecated. "
- "In a future version this will raise ValueError.",
- FutureWarning,
- stacklevel=find_stack_level(),
- )
- return -1
- else:
- return delegate.argmin()
+ return delegate.argmin()
else:
result = nanops.nanargmin(delegate, skipna=skipna)
- if result == -1:
- warnings.warn(
- f"The behavior of {type(self).__name__}.argmax/argmin "
- "with skipna=False and NAs, or with all-NAs is deprecated. "
- "In a future version this will raise ValueError.",
- FutureWarning,
- stacklevel=find_stack_level(),
- )
# error: Incompatible return value type (got "Union[int, ndarray]", expected
# "int")
return result # type: ignore[return-value]
diff --git a/pandas/core/indexes/base.py b/pandas/core/indexes/base.py
index 9a537c71f3cd0..3cb37e037ecd3 100644
--- a/pandas/core/indexes/base.py
+++ b/pandas/core/indexes/base.py
@@ -6976,16 +6976,10 @@ def argmin(self, axis=None, skipna: bool = True, *args, **kwargs) -> int:
if not self._is_multi and self.hasnans:
# Take advantage of cache
- mask = self._isnan
- if not skipna or mask.all():
- warnings.warn(
- f"The behavior of {type(self).__name__}.argmax/argmin "
- "with skipna=False and NAs, or with all-NAs is deprecated. "
- "In a future version this will raise ValueError.",
- FutureWarning,
- stacklevel=find_stack_level(),
- )
- return -1
+ if self._isnan.all():
+ raise ValueError("Encountered all NA values")
+ elif not skipna:
+ raise ValueError("Encountered an NA value with skipna=False")
return super().argmin(skipna=skipna)
@Appender(IndexOpsMixin.argmax.__doc__)
@@ -6995,16 +6989,10 @@ def argmax(self, axis=None, skipna: bool = True, *args, **kwargs) -> int:
if not self._is_multi and self.hasnans:
# Take advantage of cache
- mask = self._isnan
- if not skipna or mask.all():
- warnings.warn(
- f"The behavior of {type(self).__name__}.argmax/argmin "
- "with skipna=False and NAs, or with all-NAs is deprecated. "
- "In a future version this will raise ValueError.",
- FutureWarning,
- stacklevel=find_stack_level(),
- )
- return -1
+ if self._isnan.all():
+ raise ValueError("Encountered all NA values")
+ elif not skipna:
+ raise ValueError("Encountered an NA value with skipna=False")
return super().argmax(skipna=skipna)
def min(self, axis=None, skipna: bool = True, *args, **kwargs):
diff --git a/pandas/core/nanops.py b/pandas/core/nanops.py
index 6cb825e9b79a2..b68337d9e0de9 100644
--- a/pandas/core/nanops.py
+++ b/pandas/core/nanops.py
@@ -1441,17 +1441,18 @@ def _maybe_arg_null_out(
if axis is None or not getattr(result, "ndim", False):
if skipna:
if mask.all():
- return -1
+ raise ValueError("Encountered all NA values")
else:
if mask.any():
- return -1
+ raise ValueError("Encountered an NA value with skipna=False")
else:
- if skipna:
- na_mask = mask.all(axis)
- else:
- na_mask = mask.any(axis)
+ na_mask = mask.all(axis)
if na_mask.any():
- result[na_mask] = -1
+ raise ValueError("Encountered all NA values")
+ elif not skipna:
+ na_mask = mask.any(axis)
+ if na_mask.any():
+ raise ValueError("Encountered an NA value with skipna=False")
return result
diff --git a/pandas/core/series.py b/pandas/core/series.py
index 08e56cb4925b3..4b0eb4e1f7358 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -2333,8 +2333,8 @@ def idxmin(self, axis: Axis = 0, skipna: bool = True, *args, **kwargs) -> Hashab
axis : {0 or 'index'}
Unused. Parameter needed for compatibility with DataFrame.
skipna : bool, default True
- Exclude NA/null values. If the entire Series is NA, the result
- will be NA.
+ Exclude NA/null values. If the entire Series is NA, or if ``skipna=False``
+ and there is an NA value, this method will raise a ``ValueError``.
*args, **kwargs
Additional arguments and keywords have no effect but might be
accepted for compatibility with NumPy.
@@ -2376,32 +2376,10 @@ def idxmin(self, axis: Axis = 0, skipna: bool = True, *args, **kwargs) -> Hashab
>>> s.idxmin()
'A'
-
- If `skipna` is False and there is an NA value in the data,
- the function returns ``nan``.
-
- >>> s.idxmin(skipna=False)
- nan
"""
axis = self._get_axis_number(axis)
- with warnings.catch_warnings():
- # TODO(3.0): this catching/filtering can be removed
- # ignore warning produced by argmin since we will issue a different
- # warning for idxmin
- warnings.simplefilter("ignore")
- i = self.argmin(axis, skipna, *args, **kwargs)
-
- if i == -1:
- # GH#43587 give correct NA value for Index.
- warnings.warn(
- f"The behavior of {type(self).__name__}.idxmin with all-NA "
- "values, or any-NA and skipna=False, is deprecated. In a future "
- "version this will raise ValueError",
- FutureWarning,
- stacklevel=find_stack_level(),
- )
- return self.index._na_value
- return self.index[i]
+ iloc = self.argmin(axis, skipna, *args, **kwargs)
+ return self.index[iloc]
def idxmax(self, axis: Axis = 0, skipna: bool = True, *args, **kwargs) -> Hashable:
"""
@@ -2415,8 +2393,8 @@ def idxmax(self, axis: Axis = 0, skipna: bool = True, *args, **kwargs) -> Hashab
axis : {0 or 'index'}
Unused. Parameter needed for compatibility with DataFrame.
skipna : bool, default True
- Exclude NA/null values. If the entire Series is NA, the result
- will be NA.
+ Exclude NA/null values. If the entire Series is NA, or if ``skipna=False``
+ and there is an NA value, this method will raise a ``ValueError``.
*args, **kwargs
Additional arguments and keywords have no effect but might be
accepted for compatibility with NumPy.
@@ -2459,32 +2437,10 @@ def idxmax(self, axis: Axis = 0, skipna: bool = True, *args, **kwargs) -> Hashab
>>> s.idxmax()
'C'
-
- If `skipna` is False and there is an NA value in the data,
- the function returns ``nan``.
-
- >>> s.idxmax(skipna=False)
- nan
"""
axis = self._get_axis_number(axis)
- with warnings.catch_warnings():
- # TODO(3.0): this catching/filtering can be removed
- # ignore warning produced by argmax since we will issue a different
- # warning for argmax
- warnings.simplefilter("ignore")
- i = self.argmax(axis, skipna, *args, **kwargs)
-
- if i == -1:
- # GH#43587 give correct NA value for Index.
- warnings.warn(
- f"The behavior of {type(self).__name__}.idxmax with all-NA "
- "values, or any-NA and skipna=False, is deprecated. In a future "
- "version this will raise ValueError",
- FutureWarning,
- stacklevel=find_stack_level(),
- )
- return self.index._na_value
- return self.index[i]
+ iloc = self.argmax(axis, skipna, *args, **kwargs)
+ return self.index[iloc]
def round(self, decimals: int = 0, *args, **kwargs) -> Series:
"""
diff --git a/pandas/core/shared_docs.py b/pandas/core/shared_docs.py
index 15aa210a09d6d..a2b5439f9e12f 100644
--- a/pandas/core/shared_docs.py
+++ b/pandas/core/shared_docs.py
@@ -692,8 +692,8 @@
axis : {{0 or 'index', 1 or 'columns'}}, default 0
The axis to use. 0 or 'index' for row-wise, 1 or 'columns' for column-wise.
skipna : bool, default True
- Exclude NA/null values. If an entire row/column is NA, the result
- will be NA.
+ Exclude NA/null values. If the entire Series is NA, or if ``skipna=False``
+ and there is an NA value, this method will raise a ``ValueError``.
numeric_only : bool, default {numeric_only_default}
Include only `float`, `int` or `boolean` data.
@@ -757,8 +757,8 @@
axis : {{0 or 'index', 1 or 'columns'}}, default 0
The axis to use. 0 or 'index' for row-wise, 1 or 'columns' for column-wise.
skipna : bool, default True
- Exclude NA/null values. If an entire row/column is NA, the result
- will be NA.
+ Exclude NA/null values. If the entire Series is NA, or if ``skipna=False``
+ and there is an NA value, this method will raise a ``ValueError``.
numeric_only : bool, default {numeric_only_default}
Include only `float`, `int` or `boolean` data.
diff --git a/pandas/tests/extension/base/methods.py b/pandas/tests/extension/base/methods.py
index c803a8113b4a4..26638c6160b7b 100644
--- a/pandas/tests/extension/base/methods.py
+++ b/pandas/tests/extension/base/methods.py
@@ -169,8 +169,8 @@ def test_argmin_argmax_all_na(self, method, data, na_value):
("idxmin", True, 2),
("argmax", True, 0),
("argmin", True, 2),
- ("idxmax", False, np.nan),
- ("idxmin", False, np.nan),
+ ("idxmax", False, -1),
+ ("idxmin", False, -1),
("argmax", False, -1),
("argmin", False, -1),
],
@@ -179,17 +179,13 @@ def test_argreduce_series(
self, data_missing_for_sorting, op_name, skipna, expected
):
# data_missing_for_sorting -> [B, NA, A] with A < B and NA missing.
- warn = None
- msg = "The behavior of Series.argmax/argmin"
- if op_name.startswith("arg") and expected == -1:
- warn = FutureWarning
- if op_name.startswith("idx") and np.isnan(expected):
- warn = FutureWarning
- msg = f"The behavior of Series.{op_name}"
ser = pd.Series(data_missing_for_sorting)
- with tm.assert_produces_warning(warn, match=msg):
+ if expected == -1:
+ with pytest.raises(ValueError, match="Encountered an NA value"):
+ getattr(ser, op_name)(skipna=skipna)
+ else:
result = getattr(ser, op_name)(skipna=skipna)
- tm.assert_almost_equal(result, expected)
+ tm.assert_almost_equal(result, expected)
def test_argmax_argmin_no_skipna_notimplemented(self, data_missing_for_sorting):
# GH#38733
diff --git a/pandas/tests/frame/test_reductions.py b/pandas/tests/frame/test_reductions.py
index 63c15fab76562..408cb0ab6fc5c 100644
--- a/pandas/tests/frame/test_reductions.py
+++ b/pandas/tests/frame/test_reductions.py
@@ -1065,18 +1065,20 @@ def test_idxmin(self, float_frame, int_frame, skipna, axis):
frame.iloc[5:10] = np.nan
frame.iloc[15:20, -2:] = np.nan
for df in [frame, int_frame]:
- warn = None
- if skipna is False or axis == 1:
- warn = None if df is int_frame else FutureWarning
- msg = "The behavior of DataFrame.idxmin with all-NA values"
- with tm.assert_produces_warning(warn, match=msg):
+ if (not skipna or axis == 1) and df is not int_frame:
+ if axis == 1:
+ msg = "Encountered all NA values"
+ else:
+ msg = "Encountered an NA value"
+ with pytest.raises(ValueError, match=msg):
+ df.idxmin(axis=axis, skipna=skipna)
+ with pytest.raises(ValueError, match=msg):
+ df.idxmin(axis=axis, skipna=skipna)
+ else:
result = df.idxmin(axis=axis, skipna=skipna)
-
- msg2 = "The behavior of Series.idxmin"
- with tm.assert_produces_warning(warn, match=msg2):
expected = df.apply(Series.idxmin, axis=axis, skipna=skipna)
- expected = expected.astype(df.index.dtype)
- tm.assert_series_equal(result, expected)
+ expected = expected.astype(df.index.dtype)
+ tm.assert_series_equal(result, expected)
@pytest.mark.parametrize("axis", [0, 1])
@pytest.mark.filterwarnings(r"ignore:PeriodDtype\[B\] is deprecated:FutureWarning")
@@ -1113,16 +1115,17 @@ def test_idxmax(self, float_frame, int_frame, skipna, axis):
frame.iloc[5:10] = np.nan
frame.iloc[15:20, -2:] = np.nan
for df in [frame, int_frame]:
- warn = None
- if skipna is False or axis == 1:
- warn = None if df is int_frame else FutureWarning
- msg = "The behavior of DataFrame.idxmax with all-NA values"
- with tm.assert_produces_warning(warn, match=msg):
- result = df.idxmax(axis=axis, skipna=skipna)
+ if (skipna is False or axis == 1) and df is frame:
+ if axis == 1:
+ msg = "Encountered all NA values"
+ else:
+ msg = "Encountered an NA value"
+ with pytest.raises(ValueError, match=msg):
+ df.idxmax(axis=axis, skipna=skipna)
+ return
- msg2 = "The behavior of Series.idxmax"
- with tm.assert_produces_warning(warn, match=msg2):
- expected = df.apply(Series.idxmax, axis=axis, skipna=skipna)
+ result = df.idxmax(axis=axis, skipna=skipna)
+ expected = df.apply(Series.idxmax, axis=axis, skipna=skipna)
expected = expected.astype(df.index.dtype)
tm.assert_series_equal(result, expected)
@@ -2118,15 +2121,16 @@ def test_numeric_ea_axis_1(method, skipna, min_count, any_numeric_ea_dtype):
if method in ("prod", "product", "sum"):
kwargs["min_count"] = min_count
- warn = None
- msg = None
if not skipna and method in ("idxmax", "idxmin"):
- warn = FutureWarning
+ # GH#57745 - EAs use groupby for axis=1 which still needs a proper deprecation.
msg = f"The behavior of DataFrame.{method} with all-NA values"
- with tm.assert_produces_warning(warn, match=msg):
- result = getattr(df, method)(axis=1, **kwargs)
- with tm.assert_produces_warning(warn, match=msg):
- expected = getattr(expected_df, method)(axis=1, **kwargs)
+ with tm.assert_produces_warning(FutureWarning, match=msg):
+ getattr(df, method)(axis=1, **kwargs)
+ with pytest.raises(ValueError, match="Encountered an NA value"):
+ getattr(expected_df, method)(axis=1, **kwargs)
+ return
+ result = getattr(df, method)(axis=1, **kwargs)
+ expected = getattr(expected_df, method)(axis=1, **kwargs)
if method not in ("idxmax", "idxmin"):
expected = expected.astype(expected_dtype)
tm.assert_series_equal(result, expected)
diff --git a/pandas/tests/reductions/test_reductions.py b/pandas/tests/reductions/test_reductions.py
index 91ee13ecd87dd..b10319f5380e7 100644
--- a/pandas/tests/reductions/test_reductions.py
+++ b/pandas/tests/reductions/test_reductions.py
@@ -128,28 +128,14 @@ def test_nanargminmax(self, opname, index_or_series):
obj = klass([NaT, datetime(2011, 11, 1)])
assert getattr(obj, arg_op)() == 1
- msg = (
- "The behavior of (DatetimeIndex|Series).argmax/argmin with "
- "skipna=False and NAs"
- )
- if klass is Series:
- msg = "The behavior of Series.(idxmax|idxmin) with all-NA"
- with tm.assert_produces_warning(FutureWarning, match=msg):
- result = getattr(obj, arg_op)(skipna=False)
- if klass is Series:
- assert np.isnan(result)
- else:
- assert result == -1
+ with pytest.raises(ValueError, match="Encountered an NA value"):
+ getattr(obj, arg_op)(skipna=False)
obj = klass([NaT, datetime(2011, 11, 1), NaT])
# check DatetimeIndex non-monotonic path
assert getattr(obj, arg_op)() == 1
- with tm.assert_produces_warning(FutureWarning, match=msg):
- result = getattr(obj, arg_op)(skipna=False)
- if klass is Series:
- assert np.isnan(result)
- else:
- assert result == -1
+ with pytest.raises(ValueError, match="Encountered an NA value"):
+ getattr(obj, arg_op)(skipna=False)
@pytest.mark.parametrize("opname", ["max", "min"])
@pytest.mark.parametrize("dtype", ["M8[ns]", "datetime64[ns, UTC]"])
@@ -175,40 +161,38 @@ def test_argminmax(self):
obj = Index([np.nan, 1, np.nan, 2])
assert obj.argmin() == 1
assert obj.argmax() == 3
- msg = "The behavior of Index.argmax/argmin with skipna=False and NAs"
- with tm.assert_produces_warning(FutureWarning, match=msg):
- assert obj.argmin(skipna=False) == -1
- with tm.assert_produces_warning(FutureWarning, match=msg):
- assert obj.argmax(skipna=False) == -1
+ with pytest.raises(ValueError, match="Encountered an NA value"):
+ obj.argmin(skipna=False)
+ with pytest.raises(ValueError, match="Encountered an NA value"):
+ obj.argmax(skipna=False)
obj = Index([np.nan])
- with tm.assert_produces_warning(FutureWarning, match=msg):
- assert obj.argmin() == -1
- with tm.assert_produces_warning(FutureWarning, match=msg):
- assert obj.argmax() == -1
- with tm.assert_produces_warning(FutureWarning, match=msg):
- assert obj.argmin(skipna=False) == -1
- with tm.assert_produces_warning(FutureWarning, match=msg):
- assert obj.argmax(skipna=False) == -1
+ with pytest.raises(ValueError, match="Encountered all NA values"):
+ obj.argmin()
+ with pytest.raises(ValueError, match="Encountered all NA values"):
+ obj.argmax()
+ with pytest.raises(ValueError, match="Encountered all NA values"):
+ obj.argmin(skipna=False)
+ with pytest.raises(ValueError, match="Encountered all NA values"):
+ obj.argmax(skipna=False)
- msg = "The behavior of DatetimeIndex.argmax/argmin with skipna=False and NAs"
obj = Index([NaT, datetime(2011, 11, 1), datetime(2011, 11, 2), NaT])
assert obj.argmin() == 1
assert obj.argmax() == 2
- with tm.assert_produces_warning(FutureWarning, match=msg):
- assert obj.argmin(skipna=False) == -1
- with tm.assert_produces_warning(FutureWarning, match=msg):
- assert obj.argmax(skipna=False) == -1
+ with pytest.raises(ValueError, match="Encountered an NA value"):
+ obj.argmin(skipna=False)
+ with pytest.raises(ValueError, match="Encountered an NA value"):
+ obj.argmax(skipna=False)
obj = Index([NaT])
- with tm.assert_produces_warning(FutureWarning, match=msg):
- assert obj.argmin() == -1
- with tm.assert_produces_warning(FutureWarning, match=msg):
- assert obj.argmax() == -1
- with tm.assert_produces_warning(FutureWarning, match=msg):
- assert obj.argmin(skipna=False) == -1
- with tm.assert_produces_warning(FutureWarning, match=msg):
- assert obj.argmax(skipna=False) == -1
+ with pytest.raises(ValueError, match="Encountered all NA values"):
+ obj.argmin()
+ with pytest.raises(ValueError, match="Encountered all NA values"):
+ obj.argmax()
+ with pytest.raises(ValueError, match="Encountered all NA values"):
+ obj.argmin(skipna=False)
+ with pytest.raises(ValueError, match="Encountered all NA values"):
+ obj.argmax(skipna=False)
@pytest.mark.parametrize("op, expected_col", [["max", "a"], ["min", "b"]])
def test_same_tz_min_max_axis_1(self, op, expected_col):
@@ -841,26 +825,16 @@ def test_idxmin_dt64index(self, unit):
# GH#43587 should have NaT instead of NaN
dti = DatetimeIndex(["NaT", "2015-02-08", "NaT"]).as_unit(unit)
ser = Series([1.0, 2.0, np.nan], index=dti)
- msg = "The behavior of Series.idxmin with all-NA values"
- with tm.assert_produces_warning(FutureWarning, match=msg):
- res = ser.idxmin(skipna=False)
- assert res is NaT
- msg = "The behavior of Series.idxmax with all-NA values"
- with tm.assert_produces_warning(FutureWarning, match=msg):
- res = ser.idxmax(skipna=False)
- assert res is NaT
+ with pytest.raises(ValueError, match="Encountered an NA value"):
+ ser.idxmin(skipna=False)
+ with pytest.raises(ValueError, match="Encountered an NA value"):
+ ser.idxmax(skipna=False)
df = ser.to_frame()
- msg = "The behavior of DataFrame.idxmin with all-NA values"
- with tm.assert_produces_warning(FutureWarning, match=msg):
- res = df.idxmin(skipna=False)
- assert res.dtype == f"M8[{unit}]"
- assert res.isna().all()
- msg = "The behavior of DataFrame.idxmax with all-NA values"
- with tm.assert_produces_warning(FutureWarning, match=msg):
- res = df.idxmax(skipna=False)
- assert res.dtype == f"M8[{unit}]"
- assert res.isna().all()
+ with pytest.raises(ValueError, match="Encountered an NA value"):
+ df.idxmin(skipna=False)
+ with pytest.raises(ValueError, match="Encountered an NA value"):
+ df.idxmax(skipna=False)
def test_idxmin(self):
# test idxmin
@@ -872,9 +846,8 @@ def test_idxmin(self):
# skipna or no
assert string_series[string_series.idxmin()] == string_series.min()
- msg = "The behavior of Series.idxmin"
- with tm.assert_produces_warning(FutureWarning, match=msg):
- assert isna(string_series.idxmin(skipna=False))
+ with pytest.raises(ValueError, match="Encountered an NA value"):
+ string_series.idxmin(skipna=False)
# no NaNs
nona = string_series.dropna()
@@ -883,8 +856,8 @@ def test_idxmin(self):
# all NaNs
allna = string_series * np.nan
- with tm.assert_produces_warning(FutureWarning, match=msg):
- assert isna(allna.idxmin())
+ with pytest.raises(ValueError, match="Encountered all NA values"):
+ allna.idxmin()
# datetime64[ns]
s = Series(date_range("20130102", periods=6))
@@ -905,8 +878,7 @@ def test_idxmax(self):
# skipna or no
assert string_series[string_series.idxmax()] == string_series.max()
- msg = "The behavior of Series.idxmax with all-NA values"
- with tm.assert_produces_warning(FutureWarning, match=msg):
+ with pytest.raises(ValueError, match="Encountered an NA value"):
assert isna(string_series.idxmax(skipna=False))
# no NaNs
@@ -916,9 +888,8 @@ def test_idxmax(self):
# all NaNs
allna = string_series * np.nan
- msg = "The behavior of Series.idxmax with all-NA values"
- with tm.assert_produces_warning(FutureWarning, match=msg):
- assert isna(allna.idxmax())
+ with pytest.raises(ValueError, match="Encountered all NA values"):
+ allna.idxmax()
s = Series(date_range("20130102", periods=6))
result = s.idxmax()
@@ -1175,12 +1146,12 @@ def test_idxminmax_object_dtype(self, using_infer_string):
msg = "'>' not supported between instances of 'float' and 'str'"
with pytest.raises(TypeError, match=msg):
ser3.idxmax()
- with pytest.raises(TypeError, match=msg):
+ with pytest.raises(ValueError, match="Encountered an NA value"):
ser3.idxmax(skipna=False)
msg = "'<' not supported between instances of 'float' and 'str'"
with pytest.raises(TypeError, match=msg):
ser3.idxmin()
- with pytest.raises(TypeError, match=msg):
+ with pytest.raises(ValueError, match="Encountered an NA value"):
ser3.idxmin(skipna=False)
def test_idxminmax_object_frame(self):
@@ -1228,14 +1199,12 @@ def test_idxminmax_with_inf(self):
s = Series([0, -np.inf, np.inf, np.nan])
assert s.idxmin() == 1
- msg = "The behavior of Series.idxmin with all-NA values"
- with tm.assert_produces_warning(FutureWarning, match=msg):
- assert np.isnan(s.idxmin(skipna=False))
+ with pytest.raises(ValueError, match="Encountered an NA value"):
+ s.idxmin(skipna=False)
assert s.idxmax() == 2
- msg = "The behavior of Series.idxmax with all-NA values"
- with tm.assert_produces_warning(FutureWarning, match=msg):
- assert np.isnan(s.idxmax(skipna=False))
+ with pytest.raises(ValueError, match="Encountered an NA value"):
+ s.idxmax(skipna=False)
def test_sum_uint64(self):
# GH 53401
diff --git a/pandas/tests/test_nanops.py b/pandas/tests/test_nanops.py
index ed125ece349a9..ce41f1e76de79 100644
--- a/pandas/tests/test_nanops.py
+++ b/pandas/tests/test_nanops.py
@@ -296,6 +296,7 @@ def check_fun_data(
self,
testfunc,
targfunc,
+ testar,
testarval,
targarval,
skipna,
@@ -319,6 +320,13 @@ def check_fun_data(
else:
targ = bool(targ)
+ if testfunc.__name__ in ["nanargmax", "nanargmin"] and (
+ testar.startswith("arr_nan")
+ or (testar.endswith("nan") and (not skipna or axis == 1))
+ ):
+ with pytest.raises(ValueError, match="Encountered .* NA value"):
+ testfunc(testarval, axis=axis, skipna=skipna, **kwargs)
+ return
res = testfunc(testarval, axis=axis, skipna=skipna, **kwargs)
if (
@@ -350,6 +358,7 @@ def check_fun_data(
self.check_fun_data(
testfunc,
targfunc,
+ testar,
testarval2,
targarval2,
skipna=skipna,
@@ -370,6 +379,7 @@ def check_fun(
self.check_fun_data(
testfunc,
targfunc,
+ testar,
testarval,
targarval,
skipna=skipna,
| - [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
Ref: #33941, #51276
This is complicated by #57745 - we still need a proper deprecation for groupby's idxmin/idxmax. For DataFrame with EAs and axis=1 we use groupby's implementation, so I'm leaving that deprecation in place for now; we can enforce it once groupby's deprecation is ready to be enforced.
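For illustration, here is a minimal pure-Python sketch of the NA semantics this PR enforces (the real implementation lives in `nanops._maybe_arg_null_out` and the Cython hashtable code; the function name here is made up): with `skipna=True`, all-NA input raises, and with `skipna=False`, any NA raises.

```python
import math

def nanargmin_sketch(values, skipna=True):
    # mask of NA positions, mirroring the `mask` used in nanops
    mask = [math.isnan(v) for v in values]
    if skipna:
        if all(mask):
            raise ValueError("Encountered all NA values")
        # index of the smallest non-NA value
        return min((i for i, m in enumerate(mask) if not m),
                   key=lambda i: values[i])
    if any(mask):
        raise ValueError("Encountered an NA value with skipna=False")
    return min(range(len(values)), key=lambda i: values[i])
```

Previously both cases returned `-1` (and `idxmin`/`idxmax` returned NA) with a `FutureWarning`; after this PR they raise `ValueError` as the warning promised.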
Implement hash_join for merges | diff --git a/asv_bench/benchmarks/join_merge.py b/asv_bench/benchmarks/join_merge.py
index ce64304731116..a6c6990892d38 100644
--- a/asv_bench/benchmarks/join_merge.py
+++ b/asv_bench/benchmarks/join_merge.py
@@ -328,6 +328,23 @@ def time_i8merge(self, how):
merge(self.left, self.right, how=how)
+class UniqueMerge:
+ params = [4_000_000, 1_000_000]
+ param_names = ["unique_elements"]
+
+ def setup(self, unique_elements):
+ N = 1_000_000
+ self.left = DataFrame({"a": np.random.randint(1, unique_elements, (N,))})
+ self.right = DataFrame({"a": np.random.randint(1, unique_elements, (N,))})
+ uniques = self.right.a.drop_duplicates()
+ self.right["a"] = concat(
+ [uniques, Series(np.arange(0, -(N - len(uniques)), -1))], ignore_index=True
+ )
+
+ def time_unique_merge(self, unique_elements):
+ merge(self.left, self.right, how="inner")
+
+
class MergeDatetime:
params = [
[
diff --git a/doc/source/whatsnew/v3.0.0.rst b/doc/source/whatsnew/v3.0.0.rst
index f225d384888e3..f748f6e23e003 100644
--- a/doc/source/whatsnew/v3.0.0.rst
+++ b/doc/source/whatsnew/v3.0.0.rst
@@ -286,6 +286,7 @@ Performance improvements
- Performance improvement in :meth:`RangeIndex.join` returning a :class:`RangeIndex` instead of a :class:`Index` when possible. (:issue:`57651`, :issue:`57752`)
- Performance improvement in :meth:`RangeIndex.reindex` returning a :class:`RangeIndex` instead of a :class:`Index` when possible. (:issue:`57647`, :issue:`57752`)
- Performance improvement in :meth:`RangeIndex.take` returning a :class:`RangeIndex` instead of a :class:`Index` when possible. (:issue:`57445`, :issue:`57752`)
+- Performance improvement in :func:`merge` if hash-join can be used (:issue:`57970`)
- Performance improvement in ``DataFrameGroupBy.__len__`` and ``SeriesGroupBy.__len__`` (:issue:`57595`)
- Performance improvement in indexing operations for string dtypes (:issue:`56997`)
- Performance improvement in unary methods on a :class:`RangeIndex` returning a :class:`RangeIndex` instead of a :class:`Index` when possible. (:issue:`57825`)
diff --git a/pandas/_libs/hashtable.pyi b/pandas/_libs/hashtable.pyi
index 3725bfa3362d9..7a810a988e50e 100644
--- a/pandas/_libs/hashtable.pyi
+++ b/pandas/_libs/hashtable.pyi
@@ -16,7 +16,7 @@ def unique_label_indices(
class Factorizer:
count: int
uniques: Any
- def __init__(self, size_hint: int) -> None: ...
+ def __init__(self, size_hint: int, uses_mask: bool = False) -> None: ...
def get_count(self) -> int: ...
def factorize(
self,
@@ -25,6 +25,9 @@ class Factorizer:
na_value=...,
mask=...,
) -> npt.NDArray[np.intp]: ...
+ def hash_inner_join(
+ self, values: np.ndarray, mask=...
+ ) -> tuple[np.ndarray, np.ndarray]: ...
class ObjectFactorizer(Factorizer):
table: PyObjectHashTable
@@ -216,6 +219,9 @@ class HashTable:
mask=...,
ignore_na: bool = True,
) -> tuple[np.ndarray, npt.NDArray[np.intp]]: ... # np.ndarray[subclass-specific]
+ def hash_inner_join(
+ self, values: np.ndarray, mask=...
+ ) -> tuple[np.ndarray, np.ndarray]: ...
class Complex128HashTable(HashTable): ...
class Complex64HashTable(HashTable): ...
diff --git a/pandas/_libs/hashtable.pyx b/pandas/_libs/hashtable.pyx
index 070533ba999c7..97fae1d6480ce 100644
--- a/pandas/_libs/hashtable.pyx
+++ b/pandas/_libs/hashtable.pyx
@@ -70,7 +70,7 @@ cdef class Factorizer:
cdef readonly:
Py_ssize_t count
- def __cinit__(self, size_hint: int):
+ def __cinit__(self, size_hint: int, uses_mask: bool = False):
self.count = 0
def get_count(self) -> int:
@@ -79,13 +79,16 @@ cdef class Factorizer:
def factorize(self, values, na_sentinel=-1, na_value=None, mask=None) -> np.ndarray:
raise NotImplementedError
+ def hash_inner_join(self, values, mask=None):
+ raise NotImplementedError
+
cdef class ObjectFactorizer(Factorizer):
cdef public:
PyObjectHashTable table
ObjectVector uniques
- def __cinit__(self, size_hint: int):
+ def __cinit__(self, size_hint: int, uses_mask: bool = False):
self.table = PyObjectHashTable(size_hint)
self.uniques = ObjectVector()
diff --git a/pandas/_libs/hashtable_class_helper.pxi.in b/pandas/_libs/hashtable_class_helper.pxi.in
index f9abd574dae01..e3a9102fec395 100644
--- a/pandas/_libs/hashtable_class_helper.pxi.in
+++ b/pandas/_libs/hashtable_class_helper.pxi.in
@@ -557,6 +557,49 @@ cdef class {{name}}HashTable(HashTable):
self.table.vals[k] = i
self.na_position = na_position
+ @cython.wraparound(False)
+ @cython.boundscheck(False)
+ def hash_inner_join(self, const {{dtype}}_t[:] values, const uint8_t[:] mask = None) -> tuple[ndarray, ndarray]:
+ cdef:
+ Py_ssize_t i, n = len(values)
+ {{c_type}} val
+ khiter_t k
+ Int64Vector locs = Int64Vector()
+ Int64Vector self_locs = Int64Vector()
+ Int64VectorData *l
+ Int64VectorData *sl
+ int8_t na_position = self.na_position
+
+ l = &locs.data
+ sl = &self_locs.data
+
+ if self.uses_mask and mask is None:
+ raise NotImplementedError # pragma: no cover
+
+ with nogil:
+ for i in range(n):
+ if self.uses_mask and mask[i]:
+ if self.na_position == -1:
+ continue
+ if needs_resize(l.size, l.capacity):
+ with gil:
+ locs.resize(locs.data.capacity * 4)
+ self_locs.resize(locs.data.capacity * 4)
+ append_data_int64(l, i)
+ append_data_int64(sl, na_position)
+ else:
+ val = {{to_c_type}}(values[i])
+ k = kh_get_{{dtype}}(self.table, val)
+ if k != self.table.n_buckets:
+ if needs_resize(l.size, l.capacity):
+ with gil:
+ locs.resize(locs.data.capacity * 4)
+ self_locs.resize(locs.data.capacity * 4)
+ append_data_int64(l, i)
+ append_data_int64(sl, self.table.vals[k])
+
+ return self_locs.to_array(), locs.to_array()
+
@cython.boundscheck(False)
def lookup(self, const {{dtype}}_t[:] values, const uint8_t[:] mask = None) -> ndarray:
# -> np.ndarray[np.intp]
@@ -879,8 +922,8 @@ cdef class {{name}}Factorizer(Factorizer):
{{name}}HashTable table
{{name}}Vector uniques
- def __cinit__(self, size_hint: int):
- self.table = {{name}}HashTable(size_hint)
+ def __cinit__(self, size_hint: int, uses_mask: bool = False):
+ self.table = {{name}}HashTable(size_hint, uses_mask=uses_mask)
self.uniques = {{name}}Vector()
def factorize(self, const {{c_type}}[:] values,
@@ -911,6 +954,9 @@ cdef class {{name}}Factorizer(Factorizer):
self.count = len(self.uniques)
return labels
+ def hash_inner_join(self, const {{c_type}}[:] values, const uint8_t[:] mask = None) -> tuple[np.ndarray, np.ndarray]:
+ return self.table.hash_inner_join(values, mask)
+
{{endfor}}
diff --git a/pandas/core/reshape/merge.py b/pandas/core/reshape/merge.py
index 8ea2ac24e13c8..2cd065d03ff53 100644
--- a/pandas/core/reshape/merge.py
+++ b/pandas/core/reshape/merge.py
@@ -1780,7 +1780,10 @@ def get_join_indexers_non_unique(
np.ndarray[np.intp]
Indexer into right.
"""
- lkey, rkey, count = _factorize_keys(left, right, sort=sort)
+ lkey, rkey, count = _factorize_keys(left, right, sort=sort, how=how)
+ if count == -1:
+ # hash join
+ return lkey, rkey
if how == "left":
lidx, ridx = libjoin.left_outer_join(lkey, rkey, count, sort=sort)
elif how == "right":
@@ -2385,7 +2388,10 @@ def _left_join_on_index(
def _factorize_keys(
- lk: ArrayLike, rk: ArrayLike, sort: bool = True
+ lk: ArrayLike,
+ rk: ArrayLike,
+ sort: bool = True,
+ how: str | None = None,
) -> tuple[npt.NDArray[np.intp], npt.NDArray[np.intp], int]:
"""
Encode left and right keys as enumerated types.
@@ -2401,6 +2407,9 @@ def _factorize_keys(
sort : bool, defaults to True
If True, the encoding is done such that the unique elements in the
keys are sorted.
+ how: str, optional
+ Used to determine if we can use hash-join. If not given, then just factorize
+ keys.
Returns
-------
@@ -2409,7 +2418,8 @@ def _factorize_keys(
np.ndarray[np.intp]
Right (resp. left if called with `key='right'`) labels, as enumerated type.
int
- Number of unique elements in union of left and right labels.
+ Number of unique elements in union of left and right labels. -1 if we used
+ a hash-join.
See Also
--------
@@ -2527,28 +2537,41 @@ def _factorize_keys(
klass, lk, rk = _convert_arrays_and_get_rizer_klass(lk, rk)
- rizer = klass(max(len(lk), len(rk)))
+ rizer = klass(
+ max(len(lk), len(rk)),
+ uses_mask=isinstance(rk, (BaseMaskedArray, ArrowExtensionArray)),
+ )
if isinstance(lk, BaseMaskedArray):
assert isinstance(rk, BaseMaskedArray)
- llab = rizer.factorize(lk._data, mask=lk._mask)
- rlab = rizer.factorize(rk._data, mask=rk._mask)
+ lk_data, lk_mask = lk._data, lk._mask
+ rk_data, rk_mask = rk._data, rk._mask
elif isinstance(lk, ArrowExtensionArray):
assert isinstance(rk, ArrowExtensionArray)
# we can only get here with numeric dtypes
# TODO: Remove when we have a Factorizer for Arrow
- llab = rizer.factorize(
- lk.to_numpy(na_value=1, dtype=lk.dtype.numpy_dtype), mask=lk.isna()
- )
- rlab = rizer.factorize(
- rk.to_numpy(na_value=1, dtype=lk.dtype.numpy_dtype), mask=rk.isna()
- )
+ lk_data = lk.to_numpy(na_value=1, dtype=lk.dtype.numpy_dtype)
+ rk_data = rk.to_numpy(na_value=1, dtype=lk.dtype.numpy_dtype)
+ lk_mask, rk_mask = lk.isna(), rk.isna()
else:
# Argument 1 to "factorize" of "ObjectFactorizer" has incompatible type
# "Union[ndarray[Any, dtype[signedinteger[_64Bit]]],
# ndarray[Any, dtype[object_]]]"; expected "ndarray[Any, dtype[object_]]"
- llab = rizer.factorize(lk) # type: ignore[arg-type]
- rlab = rizer.factorize(rk) # type: ignore[arg-type]
+ lk_data, rk_data = lk, rk # type: ignore[assignment]
+ lk_mask, rk_mask = None, None
+
+ hash_join_available = how == "inner" and not sort and lk.dtype.kind in "iufb"
+ if hash_join_available:
+ rlab = rizer.factorize(rk_data, mask=rk_mask)
+ if rizer.get_count() == len(rlab):
+ ridx, lidx = rizer.hash_inner_join(lk_data, lk_mask)
+ return lidx, ridx, -1
+ else:
+ llab = rizer.factorize(lk_data, mask=lk_mask)
+ else:
+ llab = rizer.factorize(lk_data, mask=lk_mask)
+ rlab = rizer.factorize(rk_data, mask=rk_mask)
+
assert llab.dtype == np.dtype(np.intp), llab.dtype
assert rlab.dtype == np.dtype(np.intp), rlab.dtype
diff --git a/scripts/run_stubtest.py b/scripts/run_stubtest.py
index 6307afa1bc822..df88c61061f12 100644
--- a/scripts/run_stubtest.py
+++ b/scripts/run_stubtest.py
@@ -44,6 +44,7 @@
"pandas._libs.hashtable.HashTable.set_na",
"pandas._libs.hashtable.HashTable.sizeof",
"pandas._libs.hashtable.HashTable.unique",
+ "pandas._libs.hashtable.HashTable.hash_inner_join",
# stubtest might be too sensitive
"pandas._libs.lib.NoDefault",
"pandas._libs.lib._NoDefault.no_default",
| cc @mroeschke
Our abstraction in merges is bad, and this makes it a little worse, unfortunately. But it enables a potentially huge performance improvement for joins that can be executed as hash joins. I am using "right" to make the decision, because the left side determines the result order, which means that we would have to sort after we are finished, giving the performance improvement back. Using "right" makes this problem go away.
We get time complexity O(m+n) here instead of the previous O(m*n) with a non-trivial constant factor.
This also makes adding semi joins pretty easy, which is a nice bonus on top of the performance improvements here.
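The hash-join idea above can be sketched in plain Python (a simplified illustration, not the actual Cython implementation behind `hash_inner_join`): build a hash table over the right-side keys once, then probe it with the left keys, so the total work is O(m + n) plus the output size.

```python
# Simplified sketch of a hash inner join (illustration only; the real
# implementation lives in pandas' Cython hashtable code).
def hash_inner_join(left, right):
    # Build phase: map each right-side key to the positions where it occurs.
    table = {}
    for r_pos, key in enumerate(right):
        table.setdefault(key, []).append(r_pos)

    # Probe phase: walk the left keys once and emit one row per match.
    left_idx, right_idx = [], []
    for l_pos, key in enumerate(left):
        for r_pos in table.get(key, ()):
            left_idx.append(l_pos)
            right_idx.append(r_pos)
    return left_idx, right_idx

print(hash_inner_join([1, 2, 2, 3], [2, 3, 4]))  # ([1, 2, 3], [0, 0, 1])
```

Probing preserves the order of the side that is walked, which is why the choice of which side to build the table over interacts with result ordering.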
```
| Change | Before [38086f11] <backtest> | After [3b6b787e] <to> | Ratio | Benchmark (Parameter) |
|----------|--------------------------------|-------------------------|---------|-----------------------------------------------------------------------------|
| - | 193±4ms | 165±5ms | 0.85 | join_merge.I8Merge.time_i8merge('inner') |
| - | 7.76±0.2ms | 6.53±0.06ms | 0.84 | join_merge.Merge.time_merge_2intkey(False) |
| - | 1.03±0.02ms | 834±4μs | 0.81 | join_merge.Merge.time_merge_dataframe_integer_2key(False) |
| - | 485±2μs | 339±5μs | 0.7 | join_merge.Merge.time_merge_dataframe_integer_key(False) |
| - | 3.29±0.2ms | 2.29±0.07ms | 0.7 | join_merge.MergeDatetime.time_merge(('ms', 'ms'), 'Europe/Brussels', False) |
| - | 3.31±0.07ms | 2.27±0.07ms | 0.69 | join_merge.MergeDatetime.time_merge(('ms', 'ms'), None, False) |
| - | 2.79±0.09ms | 1.77±0.01ms | 0.63 | join_merge.MergeDatetime.time_merge(('ns', 'ms'), 'Europe/Brussels', False) |
| - | 2.89±0.04ms | 1.78±0.05ms | 0.62 | join_merge.MergeDatetime.time_merge(('ns', 'ms'), None, False) |
| - | 2.57±0.09ms | 1.56±0.03ms | 0.61 | join_merge.MergeDatetime.time_merge(('ns', 'ns'), None, False) |
| - | 1.97±0.05ms | 1.18±0.02ms | 0.6 | join_merge.MergeEA.time_merge('Float32', False) |
| - | 1.84±0.02ms | 1.10±0.03ms | 0.6 | join_merge.MergeEA.time_merge('UInt16', False) |
| - | 2.09±0.04ms | 1.24±0.01ms | 0.59 | join_merge.MergeEA.time_merge('UInt64', False) |
| - | 2.10±0.09ms | 1.22±0.01ms | 0.58 | join_merge.MergeEA.time_merge('Float64', False) |
| - | 2.09±0.08ms | 1.22±0.01ms | 0.58 | join_merge.MergeEA.time_merge('UInt32', False) |
| - | 2.70±0.1ms | 1.54±0.02ms | 0.57 | join_merge.MergeDatetime.time_merge(('ns', 'ns'), 'Europe/Brussels', False) |
| - | 1.72±0.02ms | 971±20μs | 0.57 | join_merge.MergeEA.time_merge('Int16', False) |
| - | 1.76±0.03ms | 973±10μs | 0.55 | join_merge.MergeEA.time_merge('Int32', False) |
| - | 1.94±0.09ms | 1.07±0.03ms | 0.55 | join_merge.MergeEA.time_merge('Int64', False) |
| - | 57.0±2ms | 21.7±0.3ms | 0.38 | join_merge.UniqueMerge.time_unique_merge(1000000) |
| - | 106±7ms | 34.1±0.3ms | 0.32 | join_merge.UniqueMerge.time_unique_merge(4000000) |
``` | https://api.github.com/repos/pandas-dev/pandas/pulls/57970 | 2024-03-22T22:27:29Z | 2024-03-24T00:41:11Z | 2024-03-24T00:41:11Z | 2024-03-24T00:41:14Z |
WEB: Updating active/inactive core devs | diff --git a/web/pandas/config.yml b/web/pandas/config.yml
index 05fdea13cab43..74e7fda2e7983 100644
--- a/web/pandas/config.yml
+++ b/web/pandas/config.yml
@@ -72,11 +72,9 @@ blog:
- https://phofl.github.io/feeds/pandas.atom.xml
maintainers:
active:
- - wesm
- jorisvandenbossche
- TomAugspurger
- jreback
- - gfyoung
- WillAyd
- mroeschke
- jbrockmendel
@@ -93,7 +91,6 @@ maintainers:
- fangchenli
- twoertwein
- lithomas1
- - mzeitlin11
- lukemanley
- noatamir
inactive:
@@ -108,6 +105,9 @@ maintainers:
- jschendel
- charlesdong1991
- dsaxton
+ - wesm
+ - gfyoung
+ - mzeitlin11
workgroups:
coc:
name: Code of Conduct
@@ -121,13 +121,12 @@ workgroups:
finance:
name: Finance
contact: finance@pandas.pydata.org
- responsibilities: "Approve the project expenses."
+ responsibilities: "Manage the funding. Coordinate the request of grants. Approve the project expenses."
members:
- - Wes McKinney
+ - Matthew Roeschke
- Jeff Reback
- Joris Van den Bossche
- - Tom Augspurger
- - Matthew Roeschke
+ - Patrick Hoefler
infrastructure:
name: Infrastructure
contact: infrastructure@pandas.pydata.org
I've checked with the devs who weren't active in pandas recently to see if they wished to become inactive, and a few were happy to do so.
Besides managing expectations of other core devs and the community, this is relevant for the decision making proposed in [PDEP-1](https://github.com/pandas-dev/pandas/pull/53576/files). With the current proposal, the number of maintainers needed for a quorum is lowered from 12 to 11 after this PR, and it'll have an impact if more people become maintainers.
@bashtage I couldn't get an answer from you via email (I sent two emails). If you'd like to remain active that's totally fine, just let me know. | https://api.github.com/repos/pandas-dev/pandas/pulls/57969 | 2024-03-22T22:02:25Z | 2024-03-26T17:28:26Z | 2024-03-26T17:28:26Z | 2024-03-26T17:28:33Z |
BUG: #57954 encoding ignored for filelike | diff --git a/pandas/io/parsers/readers.py b/pandas/io/parsers/readers.py
index b234a6b78e051..7ecd8cd6d5012 100644
--- a/pandas/io/parsers/readers.py
+++ b/pandas/io/parsers/readers.py
@@ -1310,6 +1310,16 @@ def _check_file_or_buffer(self, f, engine: CSVEngine) -> None:
raise ValueError(
"The 'python' engine cannot iterate through this file buffer."
)
+ if hasattr(f, "encoding"):
+ file_encoding = f.encoding
+ orig_reader_enc = self.orig_options.get("encoding", None)
+ any_none = file_encoding is None or orig_reader_enc is None
+ if file_encoding != orig_reader_enc and not any_none:
+ file_path = getattr(f, "name", None)
+ raise ValueError(
+ f"The specified reader encoding {orig_reader_enc} is different "
+ f"from the encoding {file_encoding} of file {file_path}."
+ )
def _clean_options(
self, options: dict[str, Any], engine: CSVEngine
@@ -1485,6 +1495,7 @@ def _make_engine(
"pyarrow": ArrowParserWrapper,
"python-fwf": FixedWidthFieldParser,
}
+
if engine not in mapping:
raise ValueError(
f"Unknown engine: {engine} (valid options are {mapping.keys()})"
diff --git a/pandas/tests/io/parser/test_c_parser_only.py b/pandas/tests/io/parser/test_c_parser_only.py
index 090235c862a2a..98a460f221592 100644
--- a/pandas/tests/io/parser/test_c_parser_only.py
+++ b/pandas/tests/io/parser/test_c_parser_only.py
@@ -511,7 +511,7 @@ def __next__(self):
def test_buffer_rd_bytes_bad_unicode(c_parser_only):
# see gh-22748
t = BytesIO(b"\xb0")
- t = TextIOWrapper(t, encoding="ascii", errors="surrogateescape")
+ t = TextIOWrapper(t, encoding="UTF-8", errors="surrogateescape")
msg = "'utf-8' codec can't encode character"
with pytest.raises(UnicodeError, match=msg):
c_parser_only.read_csv(t, encoding="UTF-8")
diff --git a/pandas/tests/io/parser/test_textreader.py b/pandas/tests/io/parser/test_textreader.py
index 6aeed2377a3aa..eeb783f1957b7 100644
--- a/pandas/tests/io/parser/test_textreader.py
+++ b/pandas/tests/io/parser/test_textreader.py
@@ -48,6 +48,13 @@ def test_StringIO(self, csv_path):
reader = TextReader(src, header=None)
reader.read()
+ def test_encoding_mismatch_warning(self, csv_path):
+ # GH-57954
+ with open(csv_path, encoding="UTF-8") as f:
+ msg = "latin1 is different from the encoding"
+ with pytest.raises(ValueError, match=msg):
+ read_csv(f, encoding="latin1")
+
def test_string_factorize(self):
# should this be optional?
data = "a\nb\na\nb\na"
| - [x] closes #57954
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit) | https://api.github.com/repos/pandas-dev/pandas/pulls/57968 | 2024-03-22T20:58:01Z | 2024-03-28T18:10:56Z | 2024-03-28T18:10:56Z | 2024-03-28T23:30:31Z |
CLN: Enforce verbose parameter deprecation in read_csv/read_table | diff --git a/doc/source/whatsnew/v3.0.0.rst b/doc/source/whatsnew/v3.0.0.rst
index ef561d50066d1..741591be25bf9 100644
--- a/doc/source/whatsnew/v3.0.0.rst
+++ b/doc/source/whatsnew/v3.0.0.rst
@@ -256,6 +256,7 @@ Removal of prior version deprecations/changes
- Removed unused arguments ``*args`` and ``**kwargs`` in :class:`Resampler` methods (:issue:`50977`)
- Unrecognized timezones when parsing strings to datetimes now raises a ``ValueError`` (:issue:`51477`)
- Removed the :class:`Grouper` attributes ``ax``, ``groups``, ``indexer``, and ``obj`` (:issue:`51206`, :issue:`51182`)
+- Removed deprecated keyword ``verbose`` on :func:`read_csv` and :func:`read_table` (:issue:`56556`)
- Removed the attribute ``dtypes`` from :class:`.DataFrameGroupBy` (:issue:`51997`)
.. ---------------------------------------------------------------------------
diff --git a/pandas/_libs/parsers.pyx b/pandas/_libs/parsers.pyx
index 01c7de0c6f2b3..c29cdbcf5975e 100644
--- a/pandas/_libs/parsers.pyx
+++ b/pandas/_libs/parsers.pyx
@@ -6,7 +6,6 @@ from csv import (
QUOTE_NONE,
QUOTE_NONNUMERIC,
)
-import time
import warnings
from pandas.util._exceptions import find_stack_level
@@ -344,10 +343,9 @@ cdef class TextReader:
object true_values, false_values
object handle
object orig_header
- bint na_filter, keep_default_na, verbose, has_usecols, has_mi_columns
+ bint na_filter, keep_default_na, has_usecols, has_mi_columns
bint allow_leading_cols
uint64_t parser_start # this is modified after __init__
- list clocks
const char *encoding_errors
kh_str_starts_t *false_set
kh_str_starts_t *true_set
@@ -400,7 +398,6 @@ cdef class TextReader:
bint allow_leading_cols=True,
skiprows=None,
skipfooter=0, # int64_t
- bint verbose=False,
float_precision=None,
bint skip_blank_lines=True,
encoding_errors=b"strict",
@@ -417,9 +414,6 @@ cdef class TextReader:
self.parser = parser_new()
self.parser.chunksize = tokenize_chunksize
- # For timekeeping
- self.clocks = []
-
self.parser.usecols = (usecols is not None)
self._setup_parser_source(source)
@@ -507,8 +501,6 @@ cdef class TextReader:
self.converters = converters
self.na_filter = na_filter
- self.verbose = verbose
-
if float_precision == "round_trip":
# see gh-15140
self.parser.double_converter = round_trip_wrapper
@@ -896,8 +888,6 @@ cdef class TextReader:
int64_t buffered_lines
int64_t irows
- self._start_clock()
-
if rows is not None:
irows = rows
buffered_lines = self.parser.lines - self.parser_start
@@ -915,12 +905,8 @@ cdef class TextReader:
if self.parser_start >= self.parser.lines:
raise StopIteration
- self._end_clock("Tokenization")
- self._start_clock()
columns = self._convert_column_data(rows)
- self._end_clock("Type conversion")
- self._start_clock()
if len(columns) > 0:
rows_read = len(list(columns.values())[0])
# trim
@@ -929,18 +915,8 @@ cdef class TextReader:
parser_trim_buffers(self.parser)
self.parser_start -= rows_read
- self._end_clock("Parser memory cleanup")
-
return columns
- cdef _start_clock(self):
- self.clocks.append(time.time())
-
- cdef _end_clock(self, str what):
- if self.verbose:
- elapsed = time.time() - self.clocks.pop(-1)
- print(f"{what} took: {elapsed * 1000:.2f} ms")
-
def set_noconvert(self, i: int) -> None:
self.noconvert.add(i)
diff --git a/pandas/io/parsers/base_parser.py b/pandas/io/parsers/base_parser.py
index 7b06c6b6b0d39..3bbb7c83345e5 100644
--- a/pandas/io/parsers/base_parser.py
+++ b/pandas/io/parsers/base_parser.py
@@ -519,7 +519,6 @@ def _convert_to_ndarrays(
dct: Mapping,
na_values,
na_fvalues,
- verbose: bool = False,
converters=None,
dtypes=None,
) -> dict[Any, np.ndarray]:
@@ -596,8 +595,6 @@ def _convert_to_ndarrays(
cvals = self._cast_types(cvals, cast_type, c)
result[c] = cvals
- if verbose and na_count:
- print(f"Filled {na_count} NA values in column {c!s}")
return result
@final
@@ -1236,7 +1233,6 @@ def converter(*date_cols, col: Hashable):
"usecols": None,
# 'iterator': False,
"chunksize": None,
- "verbose": False,
"encoding": None,
"compression": None,
"skip_blank_lines": True,
diff --git a/pandas/io/parsers/python_parser.py b/pandas/io/parsers/python_parser.py
index dbda47172f6ac..44210b6979827 100644
--- a/pandas/io/parsers/python_parser.py
+++ b/pandas/io/parsers/python_parser.py
@@ -110,8 +110,6 @@ def __init__(self, f: ReadCsvBuffer[str] | list, **kwds) -> None:
if "has_index_names" in kwds:
self.has_index_names = kwds["has_index_names"]
- self.verbose = kwds["verbose"]
-
self.thousands = kwds["thousands"]
self.decimal = kwds["decimal"]
@@ -372,7 +370,6 @@ def _convert_data(
data,
clean_na_values,
clean_na_fvalues,
- self.verbose,
clean_conv,
clean_dtypes,
)
diff --git a/pandas/io/parsers/readers.py b/pandas/io/parsers/readers.py
index 9f2f208d8c350..b234a6b78e051 100644
--- a/pandas/io/parsers/readers.py
+++ b/pandas/io/parsers/readers.py
@@ -116,7 +116,6 @@ class _read_shared(TypedDict, Generic[HashableT], total=False):
)
keep_default_na: bool
na_filter: bool
- verbose: bool | lib.NoDefault
skip_blank_lines: bool
parse_dates: bool | Sequence[Hashable] | None
infer_datetime_format: bool | lib.NoDefault
@@ -295,10 +294,6 @@ class _read_shared(TypedDict, Generic[HashableT], total=False):
Detect missing value markers (empty strings and the value of ``na_values``). In
data without any ``NA`` values, passing ``na_filter=False`` can improve the
performance of reading a large file.
-verbose : bool, default False
- Indicate number of ``NA`` values placed in non-numeric columns.
-
- .. deprecated:: 2.2.0
skip_blank_lines : bool, default True
If ``True``, skip over blank lines rather than interpreting as ``NaN`` values.
parse_dates : bool, None, list of Hashable, list of lists or dict of {{Hashable : \
@@ -556,7 +551,6 @@ class _Fwf_Defaults(TypedDict):
"converters",
"iterator",
"dayfirst",
- "verbose",
"skipinitialspace",
"low_memory",
}
@@ -755,7 +749,6 @@ def read_csv(
| None = None,
keep_default_na: bool = True,
na_filter: bool = True,
- verbose: bool | lib.NoDefault = lib.no_default,
skip_blank_lines: bool = True,
# Datetime Handling
parse_dates: bool | Sequence[Hashable] | None = None,
@@ -845,17 +838,6 @@ def read_csv(
else:
delim_whitespace = False
- if verbose is not lib.no_default:
- # GH#55569
- warnings.warn(
- "The 'verbose' keyword in pd.read_csv is deprecated and "
- "will be removed in a future version.",
- FutureWarning,
- stacklevel=find_stack_level(),
- )
- else:
- verbose = False
-
# locals() should never be modified
kwds = locals().copy()
del kwds["filepath_or_buffer"]
@@ -958,7 +940,6 @@ def read_table(
| None = None,
keep_default_na: bool = True,
na_filter: bool = True,
- verbose: bool | lib.NoDefault = lib.no_default,
skip_blank_lines: bool = True,
# Datetime Handling
parse_dates: bool | Sequence[Hashable] | None = None,
@@ -1039,17 +1020,6 @@ def read_table(
else:
delim_whitespace = False
- if verbose is not lib.no_default:
- # GH#55569
- warnings.warn(
- "The 'verbose' keyword in pd.read_table is deprecated and "
- "will be removed in a future version.",
- FutureWarning,
- stacklevel=find_stack_level(),
- )
- else:
- verbose = False
-
# locals() should never be modified
kwds = locals().copy()
del kwds["filepath_or_buffer"]
diff --git a/pandas/tests/io/parser/common/test_verbose.py b/pandas/tests/io/parser/common/test_verbose.py
deleted file mode 100644
index c5490afba1e04..0000000000000
--- a/pandas/tests/io/parser/common/test_verbose.py
+++ /dev/null
@@ -1,82 +0,0 @@
-"""
-Tests that work on both the Python and C engines but do not have a
-specific classification into the other test modules.
-"""
-
-from io import StringIO
-
-import pytest
-
-import pandas._testing as tm
-
-depr_msg = "The 'verbose' keyword in pd.read_csv is deprecated"
-
-
-def test_verbose_read(all_parsers, capsys):
- parser = all_parsers
- data = """a,b,c,d
-one,1,2,3
-one,1,2,3
-,1,2,3
-one,1,2,3
-,1,2,3
-,1,2,3
-one,1,2,3
-two,1,2,3"""
-
- if parser.engine == "pyarrow":
- msg = "The 'verbose' option is not supported with the 'pyarrow' engine"
- with pytest.raises(ValueError, match=msg):
- with tm.assert_produces_warning(
- FutureWarning, match=depr_msg, check_stacklevel=False
- ):
- parser.read_csv(StringIO(data), verbose=True)
- return
-
- # Engines are verbose in different ways.
- with tm.assert_produces_warning(
- FutureWarning, match=depr_msg, check_stacklevel=False
- ):
- parser.read_csv(StringIO(data), verbose=True)
- captured = capsys.readouterr()
-
- if parser.engine == "c":
- assert "Tokenization took:" in captured.out
- assert "Parser memory cleanup took:" in captured.out
- else: # Python engine
- assert captured.out == "Filled 3 NA values in column a\n"
-
-
-def test_verbose_read2(all_parsers, capsys):
- parser = all_parsers
- data = """a,b,c,d
-one,1,2,3
-two,1,2,3
-three,1,2,3
-four,1,2,3
-five,1,2,3
-,1,2,3
-seven,1,2,3
-eight,1,2,3"""
-
- if parser.engine == "pyarrow":
- msg = "The 'verbose' option is not supported with the 'pyarrow' engine"
- with pytest.raises(ValueError, match=msg):
- with tm.assert_produces_warning(
- FutureWarning, match=depr_msg, check_stacklevel=False
- ):
- parser.read_csv(StringIO(data), verbose=True, index_col=0)
- return
-
- with tm.assert_produces_warning(
- FutureWarning, match=depr_msg, check_stacklevel=False
- ):
- parser.read_csv(StringIO(data), verbose=True, index_col=0)
- captured = capsys.readouterr()
-
- # Engines are verbose in different ways.
- if parser.engine == "c":
- assert "Tokenization took:" in captured.out
- assert "Parser memory cleanup took:" in captured.out
- else: # Python engine
- assert captured.out == "Filled 1 NA values in column a\n"
| - [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/57966 | 2024-03-22T17:47:06Z | 2024-03-22T19:07:49Z | 2024-03-22T19:07:49Z | 2024-04-07T21:49:27Z |
BUG: Fix na_values dict not working on index column (#57547) | diff --git a/doc/source/whatsnew/v3.0.0.rst b/doc/source/whatsnew/v3.0.0.rst
index ef561d50066d1..bce5c7927c72d 100644
--- a/doc/source/whatsnew/v3.0.0.rst
+++ b/doc/source/whatsnew/v3.0.0.rst
@@ -301,6 +301,7 @@ Bug fixes
- Fixed bug in :meth:`Series.diff` allowing non-integer values for the ``periods`` argument. (:issue:`56607`)
- Fixed bug in :meth:`Series.rank` that doesn't preserve missing values for nullable integers when ``na_option='keep'``. (:issue:`56976`)
- Fixed bug in :meth:`Series.replace` and :meth:`DataFrame.replace` inconsistently replacing matching instances when ``regex=True`` and missing values are present. (:issue:`56599`)
+- Fixed bug in :meth:`read_csv` raising ``TypeError`` when ``index_col`` is specified and ``na_values`` is a dict containing the key ``None``. (:issue:`57547`)
Categorical
^^^^^^^^^^^
diff --git a/pandas/io/parsers/base_parser.py b/pandas/io/parsers/base_parser.py
index 7b06c6b6b0d39..bb9f1db0d05e8 100644
--- a/pandas/io/parsers/base_parser.py
+++ b/pandas/io/parsers/base_parser.py
@@ -487,6 +487,8 @@ def _agg_index(self, index, try_parse_dates: bool = True) -> Index:
col_na_values, col_na_fvalues = _get_na_values(
col_name, self.na_values, self.na_fvalues, self.keep_default_na
)
+ else:
+ col_na_values, col_na_fvalues = set(), set()
clean_dtypes = self._clean_mapping(self.dtype)
diff --git a/pandas/io/parsers/python_parser.py b/pandas/io/parsers/python_parser.py
index dbda47172f6ac..21dcf5f2f9310 100644
--- a/pandas/io/parsers/python_parser.py
+++ b/pandas/io/parsers/python_parser.py
@@ -356,14 +356,15 @@ def _convert_data(
if isinstance(self.na_values, dict):
for col in self.na_values:
- na_value = self.na_values[col]
- na_fvalue = self.na_fvalues[col]
+ if col is not None:
+ na_value = self.na_values[col]
+ na_fvalue = self.na_fvalues[col]
- if isinstance(col, int) and col not in self.orig_names:
- col = self.orig_names[col]
+ if isinstance(col, int) and col not in self.orig_names:
+ col = self.orig_names[col]
- clean_na_values[col] = na_value
- clean_na_fvalues[col] = na_fvalue
+ clean_na_values[col] = na_value
+ clean_na_fvalues[col] = na_fvalue
else:
clean_na_values = self.na_values
clean_na_fvalues = self.na_fvalues
diff --git a/pandas/tests/io/parser/test_na_values.py b/pandas/tests/io/parser/test_na_values.py
index ba0e3033321e4..1e370f649aef8 100644
--- a/pandas/tests/io/parser/test_na_values.py
+++ b/pandas/tests/io/parser/test_na_values.py
@@ -532,6 +532,47 @@ def test_na_values_dict_aliasing(all_parsers):
tm.assert_dict_equal(na_values, na_values_copy)
+def test_na_values_dict_null_column_name(all_parsers):
+ # see gh-57547
+ parser = all_parsers
+ data = ",x,y\n\nMA,1,2\nNA,2,1\nOA,,3"
+ names = [None, "x", "y"]
+ na_values = {name: STR_NA_VALUES for name in names}
+ dtype = {None: "object", "x": "float64", "y": "float64"}
+
+ if parser.engine == "pyarrow":
+ msg = "The pyarrow engine doesn't support passing a dict for na_values"
+ with pytest.raises(ValueError, match=msg):
+ parser.read_csv(
+ StringIO(data),
+ index_col=0,
+ header=0,
+ dtype=dtype,
+ names=names,
+ na_values=na_values,
+ keep_default_na=False,
+ )
+ return
+
+ expected = DataFrame(
+ {None: ["MA", "NA", "OA"], "x": [1.0, 2.0, np.nan], "y": [2.0, 1.0, 3.0]}
+ )
+
+ expected = expected.set_index(None)
+
+ result = parser.read_csv(
+ StringIO(data),
+ index_col=0,
+ header=0,
+ dtype=dtype,
+ names=names,
+ na_values=na_values,
+ keep_default_na=False,
+ )
+
+ tm.assert_frame_equal(result, expected)
+
+
def test_na_values_dict_col_index(all_parsers):
# see gh-14203
data = "a\nfoo\n1"
| - [x] closes #57547
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests)
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit)
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [x] Added an entry in the latest `doc/source/whatsnew/v3.0.0.rst` file if fixing a bug or adding a new feature.
In the `read_csv` method, pandas allows `na_values` to be passed as a dict, which lets you decide which values are treated as null for each column. When one of the columns is `None`, no null values are applied to that column and it remains as it was. This specific case is what is tested in issue #57547.
The problem was that under these particular conditions the variables `col_na_values` and `col_na_fvalues` were not being set, causing a `TypeError`. All I had to do was define these variables as empty sets in an `else` block.
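The fallback can be illustrated with a small standalone sketch (simplified names, not the actual pandas internals): when `na_values` is a dict and a column, such as an unnamed index column, has no usable entry, the per-column NA sets must default to empty sets rather than being left undefined.

```python
# Standalone sketch of the per-column NA-marker lookup (simplified names;
# the real logic lives in pandas' parser internals).
def na_values_for_column(col_name, na_values):
    """Return the set of NA markers to apply to one column."""
    if isinstance(na_values, dict):
        if col_name is not None and col_name in na_values:
            return set(na_values[col_name])
        # The fix: previously this case left the per-column NA sets
        # undefined, raising a TypeError downstream; fall back to
        # empty sets instead.
        return set()
    return set(na_values)

print(na_values_for_column(None, {None: {"NA"}, "x": {"NA"}}))  # set()
print(na_values_for_column("x", {None: {"NA"}, "x": {"NA"}}))   # {'NA'}
```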
On the Python engine this same logic had not yet been implemented. I added an `if` statement ensuring `na_values` are only applied if the column is not `None`. | https://api.github.com/repos/pandas-dev/pandas/pulls/57965 | 2024-03-22T14:51:57Z | 2024-04-09T17:08:34Z | 2024-04-09T17:08:34Z | 2024-04-09T22:44:43Z |
DOC: fix closing sq. bracket in pandas.read_fwf example (#57959) | diff --git a/pandas/io/parsers/readers.py b/pandas/io/parsers/readers.py
index 1ef2e65617c9b..9f2f208d8c350 100644
--- a/pandas/io/parsers/readers.py
+++ b/pandas/io/parsers/readers.py
@@ -1139,7 +1139,7 @@ def read_fwf(
``file://localhost/path/to/table.csv``.
colspecs : list of tuple (int, int) or 'infer'. optional
A list of tuples giving the extents of the fixed-width
- fields of each line as half-open intervals (i.e., [from, to[ ).
+ fields of each line as half-open intervals (i.e., [from, to] ).
String value 'infer' can be used to instruct the parser to try
detecting the column specifications from the first 100 rows of
the data which are not being skipped via skiprows (default='infer').
| - change closing square bracket in colspecs description to correct "]"
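For reference, each `colspecs` entry selects characters the same way a Python slice does, i.e. the field is taken as `line[from:to]`; a quick standalone illustration:

```python
# Each (from, to) pair selects line[from:to], i.e. the character at
# index `from` up to (but not including) index `to`, like a Python slice.
colspecs = [(0, 3), (3, 8)]
line = "abcdefgh"
fields = [line[start:end] for start, end in colspecs]
print(fields)  # ['abc', 'defgh']
```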
- [x] closes #57959
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/57961 | 2024-03-22T05:07:58Z | 2024-03-22T14:33:38Z | 2024-03-22T14:33:38Z | 2024-03-22T14:33:38Z |
BUG: Groupby median on timedelta column with NaT returns odd value (#… | diff --git a/doc/source/whatsnew/v3.0.0.rst b/doc/source/whatsnew/v3.0.0.rst
index f748f6e23e003..3964745e2e657 100644
--- a/doc/source/whatsnew/v3.0.0.rst
+++ b/doc/source/whatsnew/v3.0.0.rst
@@ -297,6 +297,7 @@ Performance improvements
Bug fixes
~~~~~~~~~
- Fixed bug in :class:`SparseDtype` for equal comparison with na fill value. (:issue:`54770`)
+- Fixed bug in :meth:`.DataFrameGroupBy.median` where nat values gave an incorrect result. (:issue:`57926`)
- Fixed bug in :meth:`DataFrame.join` inconsistently setting result index name (:issue:`55815`)
- Fixed bug in :meth:`DataFrame.to_string` that raised ``StopIteration`` with nested DataFrames. (:issue:`16098`)
- Fixed bug in :meth:`DataFrame.update` bool dtype being converted to object (:issue:`55509`)
diff --git a/pandas/_libs/groupby.pyi b/pandas/_libs/groupby.pyi
index 95ac555303221..53f5f73624232 100644
--- a/pandas/_libs/groupby.pyi
+++ b/pandas/_libs/groupby.pyi
@@ -12,6 +12,7 @@ def group_median_float64(
min_count: int = ..., # Py_ssize_t
mask: np.ndarray | None = ...,
result_mask: np.ndarray | None = ...,
+ is_datetimelike: bool = ..., # bint
) -> None: ...
def group_cumprod(
out: np.ndarray, # float64_t[:, ::1]
diff --git a/pandas/_libs/groupby.pyx b/pandas/_libs/groupby.pyx
index 2ff45038d6a3e..c0b9ed42cb535 100644
--- a/pandas/_libs/groupby.pyx
+++ b/pandas/_libs/groupby.pyx
@@ -101,7 +101,11 @@ cdef float64_t median_linear_mask(float64_t* a, int n, uint8_t* mask) noexcept n
return result
-cdef float64_t median_linear(float64_t* a, int n) noexcept nogil:
+cdef float64_t median_linear(
+ float64_t* a,
+ int n,
+ bint is_datetimelike=False
+) noexcept nogil:
cdef:
int i, j, na_count = 0
float64_t* tmp
@@ -111,9 +115,14 @@ cdef float64_t median_linear(float64_t* a, int n) noexcept nogil:
return NaN
# count NAs
- for i in range(n):
- if a[i] != a[i]:
- na_count += 1
+ if is_datetimelike:
+ for i in range(n):
+ if a[i] == NPY_NAT:
+ na_count += 1
+ else:
+ for i in range(n):
+ if a[i] != a[i]:
+ na_count += 1
if na_count:
if na_count == n:
@@ -124,10 +133,16 @@ cdef float64_t median_linear(float64_t* a, int n) noexcept nogil:
raise MemoryError()
j = 0
- for i in range(n):
- if a[i] == a[i]:
- tmp[j] = a[i]
- j += 1
+ if is_datetimelike:
+ for i in range(n):
+ if a[i] != NPY_NAT:
+ tmp[j] = a[i]
+ j += 1
+ else:
+ for i in range(n):
+ if a[i] == a[i]:
+ tmp[j] = a[i]
+ j += 1
a = tmp
n -= na_count
@@ -170,6 +185,7 @@ def group_median_float64(
Py_ssize_t min_count=-1,
const uint8_t[:, :] mask=None,
uint8_t[:, ::1] result_mask=None,
+ bint is_datetimelike=False,
) -> None:
"""
Only aggregates on axis=0
@@ -228,7 +244,7 @@ def group_median_float64(
ptr += _counts[0]
for j in range(ngroups):
size = _counts[j + 1]
- out[j, i] = median_linear(ptr, size)
+ out[j, i] = median_linear(ptr, size, is_datetimelike)
ptr += size
diff --git a/pandas/core/groupby/ops.py b/pandas/core/groupby/ops.py
index acf4c7bebf52d..8585ae3828247 100644
--- a/pandas/core/groupby/ops.py
+++ b/pandas/core/groupby/ops.py
@@ -415,6 +415,7 @@ def _call_cython_op(
"last",
"first",
"sum",
+ "median",
]:
func(
out=result,
@@ -427,7 +428,7 @@ def _call_cython_op(
is_datetimelike=is_datetimelike,
**kwargs,
)
- elif self.how in ["sem", "std", "var", "ohlc", "prod", "median"]:
+ elif self.how in ["sem", "std", "var", "ohlc", "prod"]:
if self.how in ["std", "sem"]:
kwargs["is_datetimelike"] = is_datetimelike
func(
diff --git a/pandas/tests/groupby/test_groupby.py b/pandas/tests/groupby/test_groupby.py
index 00e781e6a7f07..7ec1598abf403 100644
--- a/pandas/tests/groupby/test_groupby.py
+++ b/pandas/tests/groupby/test_groupby.py
@@ -145,6 +145,15 @@ def test_len_nan_group():
assert len(df.groupby(["a", "b"])) == 0
+def test_groupby_timedelta_median():
+ # issue 57926
+ expected = Series(data=Timedelta("1d"), index=["foo"])
+ df = DataFrame({"label": ["foo", "foo"], "timedelta": [pd.NaT, Timedelta("1d")]})
+ gb = df.groupby("label")["timedelta"]
+ actual = gb.median()
+ tm.assert_series_equal(actual, expected, check_names=False)
+
+
@pytest.mark.parametrize("keys", [["a"], ["a", "b"]])
def test_len_categorical(dropna, observed, keys):
# GH#57595
| …57926)
Handle NaT correctly in group_median_float64
- [x] closes #57926
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [x] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [x] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/57957 | 2024-03-21T23:36:07Z | 2024-03-26T20:57:52Z | 2024-03-26T20:57:52Z | 2024-04-10T12:09:36Z |
Backport PR #57764 on branch 2.2.x (BUG: PyArrow dtypes were not supported in the interchange protocol) | diff --git a/doc/source/whatsnew/v2.2.2.rst b/doc/source/whatsnew/v2.2.2.rst
index 96f210ce6b7b9..54084abab7817 100644
--- a/doc/source/whatsnew/v2.2.2.rst
+++ b/doc/source/whatsnew/v2.2.2.rst
@@ -14,6 +14,7 @@ including other versions of pandas.
Fixed regressions
~~~~~~~~~~~~~~~~~
- :meth:`DataFrame.__dataframe__` was producing incorrect data buffers when the a column's type was a pandas nullable on with missing values (:issue:`56702`)
+- :meth:`DataFrame.__dataframe__` was producing incorrect data buffers when the a column's type was a pyarrow nullable on with missing values (:issue:`57664`)
-
.. ---------------------------------------------------------------------------
@@ -21,7 +22,8 @@ Fixed regressions
Bug fixes
~~~~~~~~~
--
+- :meth:`DataFrame.__dataframe__` was showing bytemask instead of bitmask for ``'string[pyarrow]'`` validity buffer (:issue:`57762`)
+- :meth:`DataFrame.__dataframe__` was showing non-null validity buffer (instead of ``None``) ``'string[pyarrow]'`` without missing values (:issue:`57761`)
.. ---------------------------------------------------------------------------
.. _whatsnew_222.other:
diff --git a/pandas/core/interchange/buffer.py b/pandas/core/interchange/buffer.py
index 5c97fc17d7070..5d24325e67f62 100644
--- a/pandas/core/interchange/buffer.py
+++ b/pandas/core/interchange/buffer.py
@@ -12,6 +12,7 @@
if TYPE_CHECKING:
import numpy as np
+ import pyarrow as pa
class PandasBuffer(Buffer):
@@ -76,3 +77,60 @@ def __repr__(self) -> str:
)
+ ")"
)
+
+
+class PandasBufferPyarrow(Buffer):
+ """
+ Data in the buffer is guaranteed to be contiguous in memory.
+ """
+
+ def __init__(
+ self,
+ buffer: pa.Buffer,
+ *,
+ length: int,
+ ) -> None:
+ """
+ Handle pyarrow chunked arrays.
+ """
+ self._buffer = buffer
+ self._length = length
+
+ @property
+ def bufsize(self) -> int:
+ """
+ Buffer size in bytes.
+ """
+ return self._buffer.size
+
+ @property
+ def ptr(self) -> int:
+ """
+ Pointer to start of the buffer as an integer.
+ """
+ return self._buffer.address
+
+ def __dlpack__(self) -> Any:
+ """
+ Represent this structure as DLPack interface.
+ """
+ raise NotImplementedError()
+
+ def __dlpack_device__(self) -> tuple[DlpackDeviceType, int | None]:
+ """
+ Device type and device ID for where the data in the buffer resides.
+ """
+ return (DlpackDeviceType.CPU, None)
+
+ def __repr__(self) -> str:
+ return (
+ "PandasBuffer[pyarrow]("
+ + str(
+ {
+ "bufsize": self.bufsize,
+ "ptr": self.ptr,
+ "device": "CPU",
+ }
+ )
+ + ")"
+ )
diff --git a/pandas/core/interchange/column.py b/pandas/core/interchange/column.py
index 7b39403ca1916..d59a3df694bb3 100644
--- a/pandas/core/interchange/column.py
+++ b/pandas/core/interchange/column.py
@@ -1,6 +1,9 @@
from __future__ import annotations
-from typing import Any
+from typing import (
+ TYPE_CHECKING,
+ Any,
+)
import numpy as np
@@ -9,15 +12,18 @@
from pandas.errors import NoBufferPresent
from pandas.util._decorators import cache_readonly
-from pandas.core.dtypes.dtypes import (
+from pandas.core.dtypes.dtypes import BaseMaskedDtype
+
+import pandas as pd
+from pandas import (
ArrowDtype,
- BaseMaskedDtype,
DatetimeTZDtype,
)
-
-import pandas as pd
from pandas.api.types import is_string_dtype
-from pandas.core.interchange.buffer import PandasBuffer
+from pandas.core.interchange.buffer import (
+ PandasBuffer,
+ PandasBufferPyarrow,
+)
from pandas.core.interchange.dataframe_protocol import (
Column,
ColumnBuffers,
@@ -30,6 +36,9 @@
dtype_to_arrow_c_fmt,
)
+if TYPE_CHECKING:
+ from pandas.core.interchange.dataframe_protocol import Buffer
+
_NP_KINDS = {
"i": DtypeKind.INT,
"u": DtypeKind.UINT,
@@ -157,6 +166,16 @@ def _dtype_from_pandasdtype(self, dtype) -> tuple[DtypeKind, int, str, str]:
else:
byteorder = dtype.byteorder
+ if dtype == "bool[pyarrow]":
+ # return early to avoid the `* 8` below, as this is a bitmask
+ # rather than a bytemask
+ return (
+ kind,
+ dtype.itemsize, # pyright: ignore[reportGeneralTypeIssues]
+ ArrowCTypes.BOOL,
+ byteorder,
+ )
+
return kind, dtype.itemsize * 8, dtype_to_arrow_c_fmt(dtype), byteorder
@property
@@ -194,6 +213,12 @@ def describe_null(self):
column_null_dtype = ColumnNullType.USE_BYTEMASK
null_value = 1
return column_null_dtype, null_value
+ if isinstance(self._col.dtype, ArrowDtype):
+ # We already rechunk (if necessary / allowed) upon initialization, so this
+ # is already single-chunk by the time we get here.
+ if self._col.array._pa_array.chunks[0].buffers()[0] is None: # type: ignore[attr-defined]
+ return ColumnNullType.NON_NULLABLE, None
+ return ColumnNullType.USE_BITMASK, 0
kind = self.dtype[0]
try:
null, value = _NULL_DESCRIPTION[kind]
@@ -278,10 +303,11 @@ def get_buffers(self) -> ColumnBuffers:
def _get_data_buffer(
self,
- ) -> tuple[PandasBuffer, Any]: # Any is for self.dtype tuple
+ ) -> tuple[Buffer, tuple[DtypeKind, int, str, str]]:
"""
Return the buffer containing the data and the buffer's associated dtype.
"""
+ buffer: Buffer
if self.dtype[0] in (
DtypeKind.INT,
DtypeKind.UINT,
@@ -291,6 +317,7 @@ def _get_data_buffer(
):
# self.dtype[2] is an ArrowCTypes.TIMESTAMP where the tz will make
# it longer than 4 characters
+ dtype = self.dtype
if self.dtype[0] == DtypeKind.DATETIME and len(self.dtype[2]) > 4:
np_arr = self._col.dt.tz_convert(None).to_numpy()
else:
@@ -298,11 +325,17 @@ def _get_data_buffer(
if isinstance(self._col.dtype, BaseMaskedDtype):
np_arr = arr._data # type: ignore[attr-defined]
elif isinstance(self._col.dtype, ArrowDtype):
- raise NotImplementedError("ArrowDtype not handled yet")
+ # We already rechunk (if necessary / allowed) upon initialization,
+ # so this is already single-chunk by the time we get here.
+ arr = arr._pa_array.chunks[0] # type: ignore[attr-defined]
+ buffer = PandasBufferPyarrow(
+ arr.buffers()[1], # type: ignore[attr-defined]
+ length=len(arr),
+ )
+ return buffer, dtype
else:
np_arr = arr._ndarray # type: ignore[attr-defined]
buffer = PandasBuffer(np_arr, allow_copy=self._allow_copy)
- dtype = self.dtype
elif self.dtype[0] == DtypeKind.CATEGORICAL:
codes = self._col.values._codes
buffer = PandasBuffer(codes, allow_copy=self._allow_copy)
@@ -330,13 +363,26 @@ def _get_data_buffer(
return buffer, dtype
- def _get_validity_buffer(self) -> tuple[PandasBuffer, Any]:
+ def _get_validity_buffer(self) -> tuple[Buffer, Any] | None:
"""
Return the buffer containing the mask values indicating missing data and
the buffer's associated dtype.
Raises NoBufferPresent if null representation is not a bit or byte mask.
"""
null, invalid = self.describe_null
+ buffer: Buffer
+ if isinstance(self._col.dtype, ArrowDtype):
+ # We already rechunk (if necessary / allowed) upon initialization, so this
+ # is already single-chunk by the time we get here.
+ arr = self._col.array._pa_array.chunks[0] # type: ignore[attr-defined]
+ dtype = (DtypeKind.BOOL, 1, ArrowCTypes.BOOL, Endianness.NATIVE)
+ if arr.buffers()[0] is None:
+ return None
+ buffer = PandasBufferPyarrow(
+ arr.buffers()[0],
+ length=len(arr),
+ )
+ return buffer, dtype
if isinstance(self._col.dtype, BaseMaskedDtype):
mask = self._col.array._mask # type: ignore[attr-defined]
diff --git a/pandas/core/interchange/dataframe.py b/pandas/core/interchange/dataframe.py
index 1ffe0e8e8dbb0..1abacddfc7e3b 100644
--- a/pandas/core/interchange/dataframe.py
+++ b/pandas/core/interchange/dataframe.py
@@ -5,6 +5,7 @@
from pandas.core.interchange.column import PandasColumn
from pandas.core.interchange.dataframe_protocol import DataFrame as DataFrameXchg
+from pandas.core.interchange.utils import maybe_rechunk
if TYPE_CHECKING:
from collections.abc import (
@@ -34,6 +35,10 @@ def __init__(self, df: DataFrame, allow_copy: bool = True) -> None:
"""
self._df = df.rename(columns=str, copy=False)
self._allow_copy = allow_copy
+ for i, _col in enumerate(self._df.columns):
+ rechunked = maybe_rechunk(self._df.iloc[:, i], allow_copy=allow_copy)
+ if rechunked is not None:
+ self._df.isetitem(i, rechunked)
def __dataframe__(
self, nan_as_null: bool = False, allow_copy: bool = True
diff --git a/pandas/core/interchange/from_dataframe.py b/pandas/core/interchange/from_dataframe.py
index d45ae37890ba7..4162ebc33f0d6 100644
--- a/pandas/core/interchange/from_dataframe.py
+++ b/pandas/core/interchange/from_dataframe.py
@@ -295,13 +295,14 @@ def string_column_to_ndarray(col: Column) -> tuple[np.ndarray, Any]:
null_pos = None
if null_kind in (ColumnNullType.USE_BITMASK, ColumnNullType.USE_BYTEMASK):
- assert buffers["validity"], "Validity buffers cannot be empty for masks"
- valid_buff, valid_dtype = buffers["validity"]
- null_pos = buffer_to_ndarray(
- valid_buff, valid_dtype, offset=col.offset, length=col.size()
- )
- if sentinel_val == 0:
- null_pos = ~null_pos
+ validity = buffers["validity"]
+ if validity is not None:
+ valid_buff, valid_dtype = validity
+ null_pos = buffer_to_ndarray(
+ valid_buff, valid_dtype, offset=col.offset, length=col.size()
+ )
+ if sentinel_val == 0:
+ null_pos = ~null_pos
# Assemble the strings from the code units
str_list: list[None | float | str] = [None] * col.size()
@@ -486,6 +487,8 @@ def set_nulls(
np.ndarray or pd.Series
Data with the nulls being set.
"""
+ if validity is None:
+ return data
null_kind, sentinel_val = col.describe_null
null_pos = None
diff --git a/pandas/core/interchange/utils.py b/pandas/core/interchange/utils.py
index 2e73e560e5740..2a19dd5046aa3 100644
--- a/pandas/core/interchange/utils.py
+++ b/pandas/core/interchange/utils.py
@@ -16,6 +16,8 @@
DatetimeTZDtype,
)
+import pandas as pd
+
if typing.TYPE_CHECKING:
from pandas._typing import DtypeObj
@@ -145,3 +147,29 @@ def dtype_to_arrow_c_fmt(dtype: DtypeObj) -> str:
raise NotImplementedError(
f"Conversion of {dtype} to Arrow C format string is not implemented."
)
+
+
+def maybe_rechunk(series: pd.Series, *, allow_copy: bool) -> pd.Series | None:
+ """
+ Rechunk a multi-chunk pyarrow array into a single-chunk array, if necessary.
+
+ - Returns `None` if the input series is not backed by a multi-chunk pyarrow array
+ (and so doesn't need rechunking)
+ - Returns a single-chunk-backed-Series if the input is backed by a multi-chunk
+ pyarrow array and `allow_copy` is `True`.
+ - Raises a `RuntimeError` if `allow_copy` is `False` and input is a
+ based by a multi-chunk pyarrow array.
+ """
+ if not isinstance(series.dtype, pd.ArrowDtype):
+ return None
+ chunked_array = series.array._pa_array # type: ignore[attr-defined]
+ if len(chunked_array.chunks) == 1:
+ return None
+ if not allow_copy:
+ raise RuntimeError(
+ "Found multi-chunk pyarrow array, but `allow_copy` is False. "
+ "Please rechunk the array before calling this function, or set "
+ "`allow_copy=True`."
+ )
+ arr = chunked_array.combine_chunks()
+ return pd.Series(arr, dtype=series.dtype, name=series.name, index=series.index)
diff --git a/pandas/tests/interchange/test_impl.py b/pandas/tests/interchange/test_impl.py
index a1dedb6be456c..1ccada9116d4c 100644
--- a/pandas/tests/interchange/test_impl.py
+++ b/pandas/tests/interchange/test_impl.py
@@ -1,4 +1,7 @@
-from datetime import datetime
+from datetime import (
+ datetime,
+ timezone,
+)
import numpy as np
import pytest
@@ -301,6 +304,51 @@ def test_multi_chunk_pyarrow() -> None:
pd.api.interchange.from_dataframe(table, allow_copy=False)
+def test_multi_chunk_column() -> None:
+ pytest.importorskip("pyarrow", "11.0.0")
+ ser = pd.Series([1, 2, None], dtype="Int64[pyarrow]")
+ df = pd.concat([ser, ser], ignore_index=True).to_frame("a")
+ df_orig = df.copy()
+ with pytest.raises(
+ RuntimeError, match="Found multi-chunk pyarrow array, but `allow_copy` is False"
+ ):
+ pd.api.interchange.from_dataframe(df.__dataframe__(allow_copy=False))
+ result = pd.api.interchange.from_dataframe(df.__dataframe__(allow_copy=True))
+ # Interchange protocol defaults to creating numpy-backed columns, so currently this
+ # is 'float64'.
+ expected = pd.DataFrame({"a": [1.0, 2.0, None, 1.0, 2.0, None]}, dtype="float64")
+ tm.assert_frame_equal(result, expected)
+
+ # Check that the rechunking we did didn't modify the original DataFrame.
+ tm.assert_frame_equal(df, df_orig)
+ assert len(df["a"].array._pa_array.chunks) == 2
+ assert len(df_orig["a"].array._pa_array.chunks) == 2
+
+
+def test_timestamp_ns_pyarrow():
+ # GH 56712
+ pytest.importorskip("pyarrow", "11.0.0")
+ timestamp_args = {
+ "year": 2000,
+ "month": 1,
+ "day": 1,
+ "hour": 1,
+ "minute": 1,
+ "second": 1,
+ }
+ df = pd.Series(
+ [datetime(**timestamp_args)],
+ dtype="timestamp[ns][pyarrow]",
+ name="col0",
+ ).to_frame()
+
+ dfi = df.__dataframe__()
+ result = pd.api.interchange.from_dataframe(dfi)["col0"].item()
+
+ expected = pd.Timestamp(**timestamp_args)
+ assert result == expected
+
+
@pytest.mark.parametrize("tz", ["UTC", "US/Pacific"])
@pytest.mark.parametrize("unit", ["s", "ms", "us", "ns"])
def test_datetimetzdtype(tz, unit):
@@ -403,42 +451,60 @@ def test_non_str_names_w_duplicates():
pd.api.interchange.from_dataframe(dfi, allow_copy=False)
-def test_nullable_integers() -> None:
- # https://github.com/pandas-dev/pandas/issues/55069
- df = pd.DataFrame({"a": [1]}, dtype="Int8")
- expected = pd.DataFrame({"a": [1]}, dtype="int8")
- result = pd.api.interchange.from_dataframe(df.__dataframe__())
- tm.assert_frame_equal(result, expected)
-
-
-@pytest.mark.xfail(reason="https://github.com/pandas-dev/pandas/issues/57664")
-def test_nullable_integers_pyarrow() -> None:
- # https://github.com/pandas-dev/pandas/issues/55069
- df = pd.DataFrame({"a": [1]}, dtype="Int8[pyarrow]")
- expected = pd.DataFrame({"a": [1]}, dtype="int8")
- result = pd.api.interchange.from_dataframe(df.__dataframe__())
- tm.assert_frame_equal(result, expected)
-
-
@pytest.mark.parametrize(
("data", "dtype", "expected_dtype"),
[
([1, 2, None], "Int64", "int64"),
+ ([1, 2, None], "Int64[pyarrow]", "int64"),
+ ([1, 2, None], "Int8", "int8"),
+ ([1, 2, None], "Int8[pyarrow]", "int8"),
(
[1, 2, None],
"UInt64",
"uint64",
),
+ (
+ [1, 2, None],
+ "UInt64[pyarrow]",
+ "uint64",
+ ),
([1.0, 2.25, None], "Float32", "float32"),
+ ([1.0, 2.25, None], "Float32[pyarrow]", "float32"),
+ ([True, False, None], "boolean[pyarrow]", "bool"),
+ (["much ado", "about", None], "string[pyarrow_numpy]", "large_string"),
+ (["much ado", "about", None], "string[pyarrow]", "large_string"),
+ (
+ [datetime(2020, 1, 1), datetime(2020, 1, 2), None],
+ "timestamp[ns][pyarrow]",
+ "timestamp[ns]",
+ ),
+ (
+ [datetime(2020, 1, 1), datetime(2020, 1, 2), None],
+ "timestamp[us][pyarrow]",
+ "timestamp[us]",
+ ),
+ (
+ [
+ datetime(2020, 1, 1, tzinfo=timezone.utc),
+ datetime(2020, 1, 2, tzinfo=timezone.utc),
+ None,
+ ],
+ "timestamp[us, Asia/Kathmandu][pyarrow]",
+ "timestamp[us, tz=Asia/Kathmandu]",
+ ),
],
)
-def test_pandas_nullable_w_missing_values(
+def test_pandas_nullable_with_missing_values(
data: list, dtype: str, expected_dtype: str
) -> None:
# https://github.com/pandas-dev/pandas/issues/57643
- pytest.importorskip("pyarrow", "11.0.0")
+ # https://github.com/pandas-dev/pandas/issues/57664
+ pa = pytest.importorskip("pyarrow", "11.0.0")
import pyarrow.interchange as pai
+ if expected_dtype == "timestamp[us, tz=Asia/Kathmandu]":
+ expected_dtype = pa.timestamp("us", "Asia/Kathmandu")
+
df = pd.DataFrame({"a": data}, dtype=dtype)
result = pai.from_dataframe(df.__dataframe__())["a"]
assert result.type == expected_dtype
@@ -447,6 +513,86 @@ def test_pandas_nullable_w_missing_values(
assert result[2].as_py() is None
+@pytest.mark.parametrize(
+ ("data", "dtype", "expected_dtype"),
+ [
+ ([1, 2, 3], "Int64", "int64"),
+ ([1, 2, 3], "Int64[pyarrow]", "int64"),
+ ([1, 2, 3], "Int8", "int8"),
+ ([1, 2, 3], "Int8[pyarrow]", "int8"),
+ (
+ [1, 2, 3],
+ "UInt64",
+ "uint64",
+ ),
+ (
+ [1, 2, 3],
+ "UInt64[pyarrow]",
+ "uint64",
+ ),
+ ([1.0, 2.25, 5.0], "Float32", "float32"),
+ ([1.0, 2.25, 5.0], "Float32[pyarrow]", "float32"),
+ ([True, False, False], "boolean[pyarrow]", "bool"),
+ (["much ado", "about", "nothing"], "string[pyarrow_numpy]", "large_string"),
+ (["much ado", "about", "nothing"], "string[pyarrow]", "large_string"),
+ (
+ [datetime(2020, 1, 1), datetime(2020, 1, 2), datetime(2020, 1, 3)],
+ "timestamp[ns][pyarrow]",
+ "timestamp[ns]",
+ ),
+ (
+ [datetime(2020, 1, 1), datetime(2020, 1, 2), datetime(2020, 1, 3)],
+ "timestamp[us][pyarrow]",
+ "timestamp[us]",
+ ),
+ (
+ [
+ datetime(2020, 1, 1, tzinfo=timezone.utc),
+ datetime(2020, 1, 2, tzinfo=timezone.utc),
+ datetime(2020, 1, 3, tzinfo=timezone.utc),
+ ],
+ "timestamp[us, Asia/Kathmandu][pyarrow]",
+ "timestamp[us, tz=Asia/Kathmandu]",
+ ),
+ ],
+)
+def test_pandas_nullable_without_missing_values(
+ data: list, dtype: str, expected_dtype: str
+) -> None:
+ # https://github.com/pandas-dev/pandas/issues/57643
+ pa = pytest.importorskip("pyarrow", "11.0.0")
+ import pyarrow.interchange as pai
+
+ if expected_dtype == "timestamp[us, tz=Asia/Kathmandu]":
+ expected_dtype = pa.timestamp("us", "Asia/Kathmandu")
+
+ df = pd.DataFrame({"a": data}, dtype=dtype)
+ result = pai.from_dataframe(df.__dataframe__())["a"]
+ assert result.type == expected_dtype
+ assert result[0].as_py() == data[0]
+ assert result[1].as_py() == data[1]
+ assert result[2].as_py() == data[2]
+
+
+def test_string_validity_buffer() -> None:
+ # https://github.com/pandas-dev/pandas/issues/57761
+ pytest.importorskip("pyarrow", "11.0.0")
+ df = pd.DataFrame({"a": ["x"]}, dtype="large_string[pyarrow]")
+ result = df.__dataframe__().get_column_by_name("a").get_buffers()["validity"]
+ assert result is None
+
+
+def test_string_validity_buffer_no_missing() -> None:
+ # https://github.com/pandas-dev/pandas/issues/57762
+ pytest.importorskip("pyarrow", "11.0.0")
+ df = pd.DataFrame({"a": ["x", None]}, dtype="large_string[pyarrow]")
+ validity = df.__dataframe__().get_column_by_name("a").get_buffers()["validity"]
+ assert validity is not None
+ result = validity[1]
+ expected = (DtypeKind.BOOL, 1, ArrowCTypes.BOOL, "=")
+ assert result == expected
+
+
def test_empty_dataframe():
# https://github.com/pandas-dev/pandas/issues/56700
df = pd.DataFrame({"a": []}, dtype="int8")
| #57764 | https://api.github.com/repos/pandas-dev/pandas/pulls/57947 | 2024-03-21T08:55:32Z | 2024-03-21T16:37:58Z | 2024-03-21T16:37:58Z | 2024-03-21T16:37:58Z |
Clean up more Cython warning | diff --git a/pandas/_libs/parsers.pyx b/pandas/_libs/parsers.pyx
index 82e9812094af2..01c7de0c6f2b3 100644
--- a/pandas/_libs/parsers.pyx
+++ b/pandas/_libs/parsers.pyx
@@ -1603,7 +1603,7 @@ cdef _categorical_convert(parser_t *parser, int64_t col,
# -> ndarray[f'|S{width}']
cdef _to_fw_string(parser_t *parser, int64_t col, int64_t line_start,
- int64_t line_end, int64_t width) noexcept:
+ int64_t line_end, int64_t width):
cdef:
char *data
ndarray result
| Overlooked this in the recent PR. I think this is the last of the Cython warnings.
`(warning: /home/pandas/pandas/_libs/parsers.pyx:1605:26: noexcept clause is ignored for function returning Python object)` | https://api.github.com/repos/pandas-dev/pandas/pulls/57946 | 2024-03-21T02:27:51Z | 2024-03-21T16:16:39Z | 2024-03-21T16:16:39Z | 2024-03-21T18:28:22Z |
PERF: DataFrame(dict) returns RangeIndex columns when possible | diff --git a/doc/source/whatsnew/v3.0.0.rst b/doc/source/whatsnew/v3.0.0.rst
index ef561d50066d1..731195f0b1268 100644
--- a/doc/source/whatsnew/v3.0.0.rst
+++ b/doc/source/whatsnew/v3.0.0.rst
@@ -264,6 +264,7 @@ Removal of prior version deprecations/changes
Performance improvements
~~~~~~~~~~~~~~~~~~~~~~~~
- :attr:`Categorical.categories` returns a :class:`RangeIndex` columns instead of an :class:`Index` if the constructed ``values`` was a ``range``. (:issue:`57787`)
+- :class:`DataFrame` returns a :class:`RangeIndex` columns when possible when ``data`` is a ``dict`` (:issue:`57943`)
- :func:`concat` returns a :class:`RangeIndex` level in the :class:`MultiIndex` result when ``keys`` is a ``range`` or :class:`RangeIndex` (:issue:`57542`)
- :meth:`RangeIndex.append` returns a :class:`RangeIndex` instead of a :class:`Index` when appending values that could continue the :class:`RangeIndex` (:issue:`57467`)
- :meth:`Series.str.extract` returns a :class:`RangeIndex` columns instead of an :class:`Index` column when possible (:issue:`57542`)
diff --git a/pandas/core/indexes/api.py b/pandas/core/indexes/api.py
index a8887a21afa34..9b05eb42c6d6e 100644
--- a/pandas/core/indexes/api.py
+++ b/pandas/core/indexes/api.py
@@ -1,6 +1,5 @@
from __future__ import annotations
-import textwrap
from typing import (
TYPE_CHECKING,
cast,
@@ -23,6 +22,7 @@
ensure_index,
ensure_index_from_sequences,
get_unanimous_names,
+ maybe_sequence_to_range,
)
from pandas.core.indexes.category import CategoricalIndex
from pandas.core.indexes.datetimes import DatetimeIndex
@@ -34,16 +34,6 @@
if TYPE_CHECKING:
from pandas._typing import Axis
-_sort_msg = textwrap.dedent(
- """\
-Sorting because non-concatenation axis is not aligned. A future version
-of pandas will change to not sort by default.
-
-To accept the future behavior, pass 'sort=False'.
-
-To retain the current behavior and silence the warning, pass 'sort=True'.
-"""
-)
__all__ = [
@@ -66,6 +56,7 @@
"all_indexes_same",
"default_index",
"safe_sort_index",
+ "maybe_sequence_to_range",
]
diff --git a/pandas/core/indexes/base.py b/pandas/core/indexes/base.py
index 9a537c71f3cd0..e59c0542ee6da 100644
--- a/pandas/core/indexes/base.py
+++ b/pandas/core/indexes/base.py
@@ -7169,18 +7169,17 @@ def maybe_sequence_to_range(sequence) -> Any | range:
-------
Any : input or range
"""
- if isinstance(sequence, (ABCSeries, Index, range)):
+ if isinstance(sequence, (ABCSeries, Index, range, ExtensionArray)):
return sequence
- np_sequence = np.asarray(sequence)
- if np_sequence.dtype.kind != "i" or len(np_sequence) == 1:
+ elif len(sequence) == 1 or lib.infer_dtype(sequence, skipna=False) != "integer":
return sequence
- elif len(np_sequence) == 0:
+ elif len(sequence) == 0:
return range(0)
- diff = np_sequence[1] - np_sequence[0]
+ diff = sequence[1] - sequence[0]
if diff == 0:
return sequence
- elif len(np_sequence) == 2 or lib.is_sequence_range(np_sequence, diff):
- return range(np_sequence[0], np_sequence[-1] + diff, diff)
+ elif len(sequence) == 2 or lib.is_sequence_range(np.asarray(sequence), diff):
+ return range(sequence[0], sequence[-1] + diff, diff)
else:
return sequence
diff --git a/pandas/core/internals/construction.py b/pandas/core/internals/construction.py
index 93f1674fbd328..73b93110c9018 100644
--- a/pandas/core/internals/construction.py
+++ b/pandas/core/internals/construction.py
@@ -60,6 +60,7 @@
default_index,
ensure_index,
get_objs_combined_axis,
+ maybe_sequence_to_range,
union_indexes,
)
from pandas.core.internals.blocks import (
@@ -403,7 +404,7 @@ def dict_to_mgr(
arrays[i] = arr
else:
- keys = list(data.keys())
+ keys = maybe_sequence_to_range(list(data.keys()))
columns = Index(keys) if keys else default_index(0)
arrays = [com.maybe_iterable_to_list(data[k]) for k in keys]
diff --git a/pandas/tests/frame/test_constructors.py b/pandas/tests/frame/test_constructors.py
index 7d1a5b4492740..12d8269b640fc 100644
--- a/pandas/tests/frame/test_constructors.py
+++ b/pandas/tests/frame/test_constructors.py
@@ -2709,6 +2709,11 @@ def test_inference_on_pandas_objects(self):
result = DataFrame({"a": ser})
assert result.dtypes.iloc[0] == np.object_
+ def test_dict_keys_returns_rangeindex(self):
+ result = DataFrame({0: [1], 1: [2]}).columns
+ expected = RangeIndex(2)
+ tm.assert_index_equal(result, expected, exact=True)
+
class TestDataFrameConstructorIndexInference:
def test_frame_from_dict_of_series_overlapping_monthly_period_indexes(self):
diff --git a/pandas/tests/reshape/test_pivot.py b/pandas/tests/reshape/test_pivot.py
index 99250dc929997..f750d5e7fa919 100644
--- a/pandas/tests/reshape/test_pivot.py
+++ b/pandas/tests/reshape/test_pivot.py
@@ -1738,6 +1738,7 @@ def test_daily(self):
mask = ts.index.year == y
expected[y] = Series(ts.values[mask], index=doy[mask])
expected = DataFrame(expected, dtype=float).T
+ expected.index = expected.index.astype(np.int32)
tm.assert_frame_equal(result, expected)
def test_monthly(self):
@@ -1753,6 +1754,7 @@ def test_monthly(self):
mask = ts.index.year == y
expected[y] = Series(ts.values[mask], index=month[mask])
expected = DataFrame(expected, dtype=float).T
+ expected.index = expected.index.astype(np.int32)
tm.assert_frame_equal(result, expected)
def test_pivot_table_with_iterator_values(self, data):
| Discovered in https://github.com/pandas-dev/pandas/pull/57441
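The detection the patch performs lives in `maybe_sequence_to_range`. A minimal pure-Python sketch of that arithmetic-progression check (the real implementation in the diff uses `lib.infer_dtype` and a C helper `lib.is_sequence_range`; this sketch only mirrors the logic):

```python
def maybe_sequence_to_range(sequence):
    """Return an equivalent ``range`` when ``sequence`` is an arithmetic
    progression of plain ints with a nonzero step; otherwise return it as-is."""
    if len(sequence) == 1 or not all(isinstance(x, int) for x in sequence):
        return sequence
    if len(sequence) == 0:
        return range(0)
    diff = sequence[1] - sequence[0]
    if diff == 0:
        return sequence
    if all(b - a == diff for a, b in zip(sequence, sequence[1:])):
        return range(sequence[0], sequence[-1] + diff, diff)
    return sequence

print(maybe_sequence_to_range([0, 1, 2]))  # range(0, 3)
print(maybe_sequence_to_range([2, 4, 6]))  # range(2, 8, 2)
print(maybe_sequence_to_range([1, 2, 4]))  # [1, 2, 4]
```

In `dict_to_mgr` this is applied to `list(data.keys())`, so that e.g. `{0: [1], 1: [2]}` produces `RangeIndex(2)` columns instead of a materialized integer `Index`.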
Also removed a seemingly unused `_sort_msg` | https://api.github.com/repos/pandas-dev/pandas/pulls/57943 | 2024-03-20T23:52:43Z | 2024-03-25T18:24:40Z | 2024-03-25T18:24:40Z | 2024-03-25T18:24:43Z |
Backport PR #57029 on branch 2.2.x (DOC: Add `DataFrame.to_numpy` method) | diff --git a/doc/source/reference/frame.rst b/doc/source/reference/frame.rst
index fefb02dd916cd..1d9019ff22c23 100644
--- a/doc/source/reference/frame.rst
+++ b/doc/source/reference/frame.rst
@@ -49,6 +49,7 @@ Conversion
DataFrame.infer_objects
DataFrame.copy
DataFrame.bool
+ DataFrame.to_numpy
Indexing, iteration
~~~~~~~~~~~~~~~~~~~
| Backport PR #57029: DOC: Add `DataFrame.to_numpy` method | https://api.github.com/repos/pandas-dev/pandas/pulls/57940 | 2024-03-20T22:22:40Z | 2024-03-21T03:05:31Z | 2024-03-21T03:05:31Z | 2024-03-21T03:05:31Z |
DOC: #38067 add missing holiday observance rules | diff --git a/doc/source/user_guide/timeseries.rst b/doc/source/user_guide/timeseries.rst
index ecdfb3c565d33..37413722de96f 100644
--- a/doc/source/user_guide/timeseries.rst
+++ b/doc/source/user_guide/timeseries.rst
@@ -1468,11 +1468,16 @@ or some other non-observed day. Defined observance rules are:
:header: "Rule", "Description"
:widths: 15, 70
+ "next_workday", "move Saturday and Sunday to Monday"
+ "previous_workday", "move Saturday and Sunday to Friday"
"nearest_workday", "move Saturday to Friday and Sunday to Monday"
+ "before_nearest_workday", "apply ``nearest_workday`` and then move to previous workday before that day"
+ "after_nearest_workday", "apply ``nearest_workday`` and then move to next workday after that day"
"sunday_to_monday", "move Sunday to following Monday"
"next_monday_or_tuesday", "move Saturday to Monday and Sunday/Monday to Tuesday"
"previous_friday", move Saturday and Sunday to previous Friday"
"next_monday", "move Saturday and Sunday to following Monday"
+ "weekend_to_monday", "same as ``next_monday``"
An example of how holidays and holiday calendars are defined:
diff --git a/pandas/tseries/holiday.py b/pandas/tseries/holiday.py
index 50d0d33f0339f..cc9e2e3be8c38 100644
--- a/pandas/tseries/holiday.py
+++ b/pandas/tseries/holiday.py
@@ -108,7 +108,7 @@ def nearest_workday(dt: datetime) -> datetime:
def next_workday(dt: datetime) -> datetime:
"""
- returns next weekday used for observances
+ returns next workday used for observances
"""
dt += timedelta(days=1)
while dt.weekday() > 4:
@@ -119,7 +119,7 @@ def next_workday(dt: datetime) -> datetime:
def previous_workday(dt: datetime) -> datetime:
"""
- returns previous weekday used for observances
+ returns previous workday used for observances
"""
dt -= timedelta(days=1)
while dt.weekday() > 4:
@@ -130,7 +130,7 @@ def previous_workday(dt: datetime) -> datetime:
def before_nearest_workday(dt: datetime) -> datetime:
"""
- returns previous workday after nearest workday
+ returns previous workday before nearest workday
"""
return previous_workday(nearest_workday(dt))
| - [x] closes #38067
Updated docs with missing functions.
`weekend_to_monday` and `next_monday` do indeed do the exact same thing, and I agree that `next_monday` should be deprecated. Any thoughts?
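For anyone comparing them, a stdlib-only sketch of the relevant observance rules (this mirrors the logic in `pandas.tseries.holiday` rather than importing it; in pandas, `weekend_to_monday` is defined with the same body as `next_monday`):

```python
from datetime import datetime, timedelta

def next_monday(dt):
    # Saturday (weekday 5) -> +2 days, Sunday (weekday 6) -> +1 day
    if dt.weekday() == 5:
        return dt + timedelta(2)
    if dt.weekday() == 6:
        return dt + timedelta(1)
    return dt

weekend_to_monday = next_monday  # identical behavior

def nearest_workday(dt):
    # Saturday -> previous Friday, Sunday -> following Monday
    if dt.weekday() == 5:
        return dt - timedelta(1)
    if dt.weekday() == 6:
        return dt + timedelta(1)
    return dt

saturday = datetime(2024, 3, 23)   # 2024-03-23 is a Saturday
print(next_monday(saturday))       # moved to Monday 2024-03-25
print(nearest_workday(saturday))   # moved to Friday 2024-03-22
```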
| https://api.github.com/repos/pandas-dev/pandas/pulls/57939 | 2024-03-20T22:04:10Z | 2024-03-21T16:22:47Z | 2024-03-21T16:22:47Z | 2024-03-22T00:05:40Z |
BUG: #29049 make holiday support offsets of offsets | diff --git a/pandas/tests/tseries/holiday/test_holiday.py b/pandas/tests/tseries/holiday/test_holiday.py
index b2eefd04ef93b..08f4a1250392e 100644
--- a/pandas/tests/tseries/holiday/test_holiday.py
+++ b/pandas/tests/tseries/holiday/test_holiday.py
@@ -271,6 +271,25 @@ def test_both_offset_observance_raises():
)
+def test_list_of_list_of_offsets_raises():
+ # see gh-29049
+ # Test that the offsets of offsets are forbidden
+ holiday1 = Holiday(
+ "Holiday1",
+ month=USThanksgivingDay.month,
+ day=USThanksgivingDay.day,
+ offset=[USThanksgivingDay.offset, DateOffset(1)],
+ )
+ msg = "Only BaseOffsets and flat lists of them are supported for offset."
+ with pytest.raises(ValueError, match=msg):
+ Holiday(
+ "Holiday2",
+ month=holiday1.month,
+ day=holiday1.day,
+ offset=[holiday1.offset, DateOffset(3)],
+ )
+
+
def test_half_open_interval_with_observance():
# Prompted by GH 49075
# Check for holidays that have a half-open date interval where
diff --git a/pandas/tseries/holiday.py b/pandas/tseries/holiday.py
index cc9e2e3be8c38..8e51183138b5c 100644
--- a/pandas/tseries/holiday.py
+++ b/pandas/tseries/holiday.py
@@ -4,6 +4,7 @@
datetime,
timedelta,
)
+from typing import Callable
import warnings
from dateutil.relativedelta import (
@@ -17,6 +18,7 @@
)
import numpy as np
+from pandas._libs.tslibs.offsets import BaseOffset
from pandas.errors import PerformanceWarning
from pandas import (
@@ -159,24 +161,34 @@ def __init__(
year=None,
month=None,
day=None,
- offset=None,
- observance=None,
+ offset: BaseOffset | list[BaseOffset] | None = None,
+ observance: Callable | None = None,
start_date=None,
end_date=None,
- days_of_week=None,
+ days_of_week: tuple | None = None,
) -> None:
"""
Parameters
----------
name : str
Name of the holiday , defaults to class name
- offset : array of pandas.tseries.offsets or
- class from pandas.tseries.offsets
- computes offset from date
- observance: function
- computes when holiday is given a pandas Timestamp
- days_of_week:
- provide a tuple of days e.g (0,1,2,3,) for Monday Through Thursday
+ year : int, default None
+ Year of the holiday
+ month : int, default None
+ Month of the holiday
+ day : int, default None
+ Day of the holiday
+ offset : list of pandas.tseries.offsets or
+ class from pandas.tseries.offsets, default None
+ Computes offset from date
+ observance : function, default None
+ Computes when holiday is given a pandas Timestamp
+ start_date : datetime-like, default None
+ First date the holiday is observed
+ end_date : datetime-like, default None
+ Last date the holiday is observed
+ days_of_week : tuple of int or dateutil.relativedelta weekday strs, default None
+ Provide a tuple of days e.g (0,1,2,3,) for Monday Through Thursday
Monday=0,..,Sunday=6
Examples
@@ -216,8 +228,19 @@ class from pandas.tseries.offsets
>>> July3rd
Holiday: July 3rd (month=7, day=3, )
"""
- if offset is not None and observance is not None:
- raise NotImplementedError("Cannot use both offset and observance.")
+ if offset is not None:
+ if observance is not None:
+ raise NotImplementedError("Cannot use both offset and observance.")
+ if not (
+ isinstance(offset, BaseOffset)
+ or (
+ isinstance(offset, list)
+ and all(isinstance(off, BaseOffset) for off in offset)
+ )
+ ):
+ raise ValueError(
+ "Only BaseOffsets and flat lists of them are supported for offset."
+ )
self.name = name
self.year = year
| - [x] closes #29049 (Replace xxxx with the GitHub issue number)
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [x] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [x] Added an entry in the latest `doc/source/whatsnew/v3.0.0.rst` file if fixing a bug or adding a new feature.
The summing of offsets is generally not possible, so the bug is fixed by simply flattening the asymmetric list of lists of offsets that one might easily end up with when defining holidays in relation to each other.
In theory it is possible to sum consecutive `DateOffset(n, weekday=None)` in the offset list, but I did not want to introduce even more complexity for such little gain.
Allowing asymmetrical lists of lists and their element dtypes as parameters is not very pretty, but it is the only solution to this problem that does not involve a major refactoring of the offset classes to allow representation of lists of offsets in a single composite offset, e.g. `["TUE(1)", 3]`. Summing of composite offsets is not possible because additions between weekday and regular day shift offsets are not associative, not distributive, and not commutative.
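To make the non-commutativity concrete, compare snapping to a weekday before versus after a plain day shift — a stdlib sketch (the `next_weekday` helper below is a stand-in for pandas' weekday-anchored offsets, not pandas API):

```python
from datetime import date, timedelta

def next_weekday(dt, weekday):
    # Roll forward to the given weekday (Mon=0 .. Sun=6); no-op if already there.
    return dt + timedelta((weekday - dt.weekday()) % 7)

start = date(2024, 3, 20)  # a Wednesday

snap_then_shift = next_weekday(start, 1) + timedelta(3)   # Tue 03-26 + 3 days
shift_then_snap = next_weekday(start + timedelta(3), 1)   # Sat 03-23 -> next Tue

print(snap_then_shift)  # 2024-03-29
print(shift_then_snap)  # 2024-03-26
```

Because the order of the two operations changes the result, a list of offsets has to be applied sequentially and cannot be collapsed into one summed offset.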
Another option would be to just enhance the error message given to the end user when trying to chain offsets in the manner shown in the issue and let them do the aggregation to a simple list of offsets themselves. | https://api.github.com/repos/pandas-dev/pandas/pulls/57938 | 2024-03-20T21:39:57Z | 2024-03-28T00:08:54Z | 2024-03-28T00:08:54Z | 2024-03-28T00:51:29Z |
Fix tagging within Dockerfile | diff --git a/Dockerfile b/Dockerfile
index 03f76f39b8cc7..0fcbcee92295c 100644
--- a/Dockerfile
+++ b/Dockerfile
@@ -11,4 +11,5 @@ RUN apt-get install -y libhdf5-dev libgles2-mesa-dev
RUN python -m pip install --upgrade pip
COPY requirements-dev.txt /tmp
RUN python -m pip install -r /tmp/requirements-dev.txt
+RUN git config --global --add safe.directory /home/pandas
CMD ["/bin/bash"]
| When the user within a docker container does not match the user on the host machine (this is the case by default on Linux; other OSes may differ), you cannot use git to inspect the worktree:
```sh
root@6de0debf3870:/home/pandas# git log
fatal: detected dubious ownership in repository at '/home/pandas'
To add an exception for this directory, call:
git config --global --add safe.directory /home/pandas
```
This prevents builds within docker from being tagged with the appropriate git revision | https://api.github.com/repos/pandas-dev/pandas/pulls/57935 | 2024-03-20T21:07:30Z | 2024-03-20T23:21:09Z | 2024-03-20T23:21:09Z | 2024-03-20T23:21:16Z |
REF: Clean up concat statefullness and validation | diff --git a/pandas/core/reshape/concat.py b/pandas/core/reshape/concat.py
index 1f0fe0542a0c0..35a08e0167924 100644
--- a/pandas/core/reshape/concat.py
+++ b/pandas/core/reshape/concat.py
@@ -17,10 +17,7 @@
from pandas.util._decorators import cache_readonly
-from pandas.core.dtypes.common import (
- is_bool,
- is_iterator,
-)
+from pandas.core.dtypes.common import is_bool
from pandas.core.dtypes.concat import concat_compat
from pandas.core.dtypes.generic import (
ABCDataFrame,
@@ -423,11 +420,12 @@ def __init__(
self.ignore_index = ignore_index
self.verify_integrity = verify_integrity
- objs, keys = self._clean_keys_and_objs(objs, keys)
+ objs, keys, ndims = _clean_keys_and_objs(objs, keys)
- # figure out what our result ndim is going to be
- ndims = self._get_ndims(objs)
- sample, objs = self._get_sample_object(objs, ndims, keys, names, levels)
+ # select an object to be our result reference
+ sample, objs = _get_sample_object(
+ objs, ndims, keys, names, levels, self.intersect
+ )
# Standardize axis parameter to int
if sample.ndim == 1:
@@ -458,100 +456,6 @@ def __init__(
self.names = names or getattr(keys, "names", None)
self.levels = levels
- def _get_ndims(self, objs: list[Series | DataFrame]) -> set[int]:
- # figure out what our result ndim is going to be
- ndims = set()
- for obj in objs:
- if not isinstance(obj, (ABCSeries, ABCDataFrame)):
- msg = (
- f"cannot concatenate object of type '{type(obj)}'; "
- "only Series and DataFrame objs are valid"
- )
- raise TypeError(msg)
-
- ndims.add(obj.ndim)
- return ndims
-
- def _clean_keys_and_objs(
- self,
- objs: Iterable[Series | DataFrame] | Mapping[HashableT, Series | DataFrame],
- keys,
- ) -> tuple[list[Series | DataFrame], Index | None]:
- if isinstance(objs, abc.Mapping):
- if keys is None:
- keys = list(objs.keys())
- objs_list = [objs[k] for k in keys]
- else:
- objs_list = list(objs)
-
- if len(objs_list) == 0:
- raise ValueError("No objects to concatenate")
-
- if keys is None:
- objs_list = list(com.not_none(*objs_list))
- else:
- # GH#1649
- key_indices = []
- clean_objs = []
- if is_iterator(keys):
- keys = list(keys)
- if len(keys) != len(objs_list):
- # GH#43485
- raise ValueError(
- f"The length of the keys ({len(keys)}) must match "
- f"the length of the objects to concatenate ({len(objs_list)})"
- )
- for i, obj in enumerate(objs_list):
- if obj is not None:
- key_indices.append(i)
- clean_objs.append(obj)
- objs_list = clean_objs
-
- if not isinstance(keys, Index):
- keys = Index(keys)
-
- if len(key_indices) < len(keys):
- keys = keys.take(key_indices)
-
- if len(objs_list) == 0:
- raise ValueError("All objects passed were None")
-
- return objs_list, keys
-
- def _get_sample_object(
- self,
- objs: list[Series | DataFrame],
- ndims: set[int],
- keys,
- names,
- levels,
- ) -> tuple[Series | DataFrame, list[Series | DataFrame]]:
- # get the sample
- # want the highest ndim that we have, and must be non-empty
- # unless all objs are empty
- sample: Series | DataFrame | None = None
- if len(ndims) > 1:
- max_ndim = max(ndims)
- for obj in objs:
- if obj.ndim == max_ndim and np.sum(obj.shape):
- sample = obj
- break
-
- else:
- # filter out the empties if we have not multi-index possibilities
- # note to keep empty Series as it affect to result columns / name
- non_empties = [obj for obj in objs if sum(obj.shape) > 0 or obj.ndim == 1]
-
- if len(non_empties) and (
- keys is None and names is None and levels is None and not self.intersect
- ):
- objs = non_empties
- sample = objs[0]
-
- if sample is None:
- sample = objs[0]
- return sample, objs
-
def _sanitize_mixed_ndim(
self,
objs: list[Series | DataFrame],
@@ -664,29 +568,24 @@ def get_result(self):
out = sample._constructor_from_mgr(new_data, axes=new_data.axes)
return out.__finalize__(self, method="concat")
- def _get_result_dim(self) -> int:
- if self._is_series and self.bm_axis == 1:
- return 2
- else:
- return self.objs[0].ndim
-
@cache_readonly
def new_axes(self) -> list[Index]:
- ndim = self._get_result_dim()
+ if self._is_series and self.bm_axis == 1:
+ ndim = 2
+ else:
+ ndim = self.objs[0].ndim
return [
- self._get_concat_axis if i == self.bm_axis else self._get_comb_axis(i)
+ self._get_concat_axis
+ if i == self.bm_axis
+ else get_objs_combined_axis(
+ self.objs,
+ axis=self.objs[0]._get_block_manager_axis(i),
+ intersect=self.intersect,
+ sort=self.sort,
+ )
for i in range(ndim)
]
- def _get_comb_axis(self, i: AxisInt) -> Index:
- data_axis = self.objs[0]._get_block_manager_axis(i)
- return get_objs_combined_axis(
- self.objs,
- axis=data_axis,
- intersect=self.intersect,
- sort=self.sort,
- )
-
@cache_readonly
def _get_concat_axis(self) -> Index:
"""
@@ -747,6 +646,98 @@ def _maybe_check_integrity(self, concat_index: Index) -> None:
raise ValueError(f"Indexes have overlapping values: {overlap}")
+def _clean_keys_and_objs(
+ objs: Iterable[Series | DataFrame] | Mapping[HashableT, Series | DataFrame],
+ keys,
+) -> tuple[list[Series | DataFrame], Index | None, set[int]]:
+ """
+ Returns
+ -------
+ clean_objs : list[Series | DataFrame]
+    List of DataFrame and Series with Nones removed.
+ keys : Index | None
+ None if keys was None
+ Index if objs was a Mapping or keys was not None. Filtered where objs was None.
+ ndim : set[int]
+ Unique .ndim attribute of obj encountered.
+ """
+ if isinstance(objs, abc.Mapping):
+ if keys is None:
+ keys = objs.keys()
+ objs_list = [objs[k] for k in keys]
+ else:
+ objs_list = list(objs)
+
+ if len(objs_list) == 0:
+ raise ValueError("No objects to concatenate")
+
+ if keys is not None:
+ if not isinstance(keys, Index):
+ keys = Index(keys)
+ if len(keys) != len(objs_list):
+ # GH#43485
+ raise ValueError(
+ f"The length of the keys ({len(keys)}) must match "
+ f"the length of the objects to concatenate ({len(objs_list)})"
+ )
+
+ # GH#1649
+ key_indices = []
+ clean_objs = []
+ ndims = set()
+ for i, obj in enumerate(objs_list):
+ if obj is None:
+ continue
+ elif isinstance(obj, (ABCSeries, ABCDataFrame)):
+ key_indices.append(i)
+ clean_objs.append(obj)
+ ndims.add(obj.ndim)
+ else:
+ msg = (
+ f"cannot concatenate object of type '{type(obj)}'; "
+ "only Series and DataFrame objs are valid"
+ )
+ raise TypeError(msg)
+
+ if keys is not None and len(key_indices) < len(keys):
+ keys = keys.take(key_indices)
+
+ if len(clean_objs) == 0:
+ raise ValueError("All objects passed were None")
+
+ return clean_objs, keys, ndims
+
+
+def _get_sample_object(
+ objs: list[Series | DataFrame],
+ ndims: set[int],
+ keys,
+ names,
+ levels,
+ intersect: bool,
+) -> tuple[Series | DataFrame, list[Series | DataFrame]]:
+ # get the sample
+ # want the highest ndim that we have, and must be non-empty
+ # unless all objs are empty
+ if len(ndims) > 1:
+ max_ndim = max(ndims)
+ for obj in objs:
+ if obj.ndim == max_ndim and sum(obj.shape): # type: ignore[arg-type]
+ return obj, objs
+ elif keys is None and names is None and levels is None and not intersect:
+ # filter out the empties if we have not multi-index possibilities
+ # note to keep empty Series as it affect to result columns / name
+ if ndims.pop() == 2:
+ non_empties = [obj for obj in objs if sum(obj.shape)]
+ else:
+ non_empties = objs
+
+ if len(non_empties):
+ return non_empties[0], non_empties
+
+ return objs[0], objs
+
+
def _concat_indexes(indexes) -> Index:
return indexes[0].append(indexes[1:])
| * Removed an additional pass over `objs` during validation
* Removed methods off of `_Concatenator` that did not rely on `self` | https://api.github.com/repos/pandas-dev/pandas/pulls/57933 | 2024-03-20T17:59:59Z | 2024-03-20T21:00:44Z | 2024-03-20T21:00:44Z | 2024-03-20T21:00:47Z |
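The None-filtering and key alignment performed by the extracted `_clean_keys_and_objs` helper can be sketched in plain Python. This is a simplified stand-in, not the pandas implementation: plain lists replace pandas objects and `Index.take`, and the ndim/type bookkeeping is omitted.

```python
def clean_keys_and_objs(objs, keys):
    """Simplified sketch of pandas' _clean_keys_and_objs: drop None
    entries and keep the keys aligned with the surviving objects."""
    objs = list(objs)
    if not objs:
        raise ValueError("No objects to concatenate")
    if keys is not None and len(keys) != len(objs):
        # GH#43485: a key must be supplied for every object, None included
        raise ValueError(
            f"The length of the keys ({len(keys)}) must match "
            f"the length of the objects to concatenate ({len(objs)})"
        )
    # GH#1649: drop None entries and remember which positions survived
    kept_indices = [i for i, obj in enumerate(objs) if obj is not None]
    clean_objs = [objs[i] for i in kept_indices]
    if not clean_objs:
        raise ValueError("All objects passed were None")
    if keys is not None:
        # filter the keys down to the surviving objects
        keys = [keys[i] for i in kept_indices]
    return clean_objs, keys
```

For example, `clean_keys_and_objs(["s1", None, "s2"], ["x", "y", "z"])` returns `(["s1", "s2"], ["x", "z"])`: the key for the dropped `None` is filtered out along with it.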
DOC: fix minor typos and grammar missing_data.rst | diff --git a/doc/source/user_guide/missing_data.rst b/doc/source/user_guide/missing_data.rst
index aea7688c062b8..2e104ac06f9f4 100644
--- a/doc/source/user_guide/missing_data.rst
+++ b/doc/source/user_guide/missing_data.rst
@@ -88,7 +88,7 @@ To detect these missing value, use the :func:`isna` or :func:`notna` methods.
.. warning::
- Experimental: the behaviour of :class:`NA`` can still change without warning.
+ Experimental: the behaviour of :class:`NA` can still change without warning.
Starting from pandas 1.0, an experimental :class:`NA` value (singleton) is
available to represent scalar missing values. The goal of :class:`NA` is provide a
@@ -105,7 +105,7 @@ dtype, it will use :class:`NA`:
s[2]
s[2] is pd.NA
-Currently, pandas does not yet use those data types using :class:`NA` by default
+Currently, pandas does not use those data types using :class:`NA` by default in
a :class:`DataFrame` or :class:`Series`, so you need to specify
the dtype explicitly. An easy way to convert to those dtypes is explained in the
:ref:`conversion section <missing_data.NA.conversion>`.
@@ -253,8 +253,8 @@ Conversion
^^^^^^^^^^
If you have a :class:`DataFrame` or :class:`Series` using ``np.nan``,
-:meth:`Series.convert_dtypes` and :meth:`DataFrame.convert_dtypes`
-in :class:`DataFrame` that can convert data to use the data types that use :class:`NA`
+:meth:`DataFrame.convert_dtypes` and :meth:`Series.convert_dtypes`, respectively,
+will convert your data to use the nullable data types supporting :class:`NA`,
such as :class:`Int64Dtype` or :class:`ArrowDtype`. This is especially helpful after reading
in data sets from IO methods where data types were inferred.
| I did not make a corresponding issue for this minor improvement. | https://api.github.com/repos/pandas-dev/pandas/pulls/57929 | 2024-03-20T13:26:20Z | 2024-03-20T16:32:14Z | 2024-03-20T16:32:14Z | 2024-03-21T17:17:04Z |
CLN: Remove unused code | diff --git a/pandas/conftest.py b/pandas/conftest.py
index 9302c581fd497..50a94b35c2edc 100644
--- a/pandas/conftest.py
+++ b/pandas/conftest.py
@@ -1231,10 +1231,6 @@ def tz_aware_fixture(request):
return request.param
-# Generate cartesian product of tz_aware_fixture:
-tz_aware_fixture2 = tz_aware_fixture
-
-
_UTCS = ["utc", "dateutil/UTC", utc, tzutc(), timezone.utc]
if zoneinfo is not None:
_UTCS.append(zoneinfo.ZoneInfo("UTC"))
diff --git a/pandas/core/base.py b/pandas/core/base.py
index 33b37319675ae..987136ffdff7d 100644
--- a/pandas/core/base.py
+++ b/pandas/core/base.py
@@ -87,12 +87,6 @@
_shared_docs: dict[str, str] = {}
-_indexops_doc_kwargs = {
- "klass": "IndexOpsMixin",
- "inplace": "",
- "unique": "IndexOpsMixin",
- "duplicated": "IndexOpsMixin",
-}
class PandasObject(DirNamesMixin):
diff --git a/pandas/core/dtypes/astype.py b/pandas/core/dtypes/astype.py
index d351d13fdfeb6..086f7d2da6640 100644
--- a/pandas/core/dtypes/astype.py
+++ b/pandas/core/dtypes/astype.py
@@ -37,8 +37,6 @@
from pandas.core.arrays import ExtensionArray
-_dtype_obj = np.dtype(object)
-
@overload
def _astype_nansafe(
| Some more unused code. Most of them are not detected by `vulture` but only visible on the editor through Pylance, so I'm not quite sure how to detect them systematically. | https://api.github.com/repos/pandas-dev/pandas/pulls/57920 | 2024-03-19T22:30:25Z | 2024-03-20T00:23:14Z | 2024-03-20T00:23:14Z | 2024-03-20T00:23:20Z |
Remove Cython warnings | diff --git a/pandas/_libs/tslibs/util.pxd b/pandas/_libs/tslibs/util.pxd
index e4ac3a9e167a3..a5822e57d3fa6 100644
--- a/pandas/_libs/tslibs/util.pxd
+++ b/pandas/_libs/tslibs/util.pxd
@@ -185,11 +185,11 @@ cdef inline const char* get_c_string(str py_string) except NULL:
return get_c_string_buf_and_size(py_string, NULL)
-cdef inline bytes string_encode_locale(str py_string) noexcept:
+cdef inline bytes string_encode_locale(str py_string):
"""As opposed to PyUnicode_Encode, use current system locale to encode."""
return PyUnicode_EncodeLocale(py_string, NULL)
-cdef inline object char_to_string_locale(const char* data) noexcept:
+cdef inline object char_to_string_locale(const char* data):
"""As opposed to PyUnicode_FromString, use current system locale to decode."""
return PyUnicode_DecodeLocale(data, NULL)
| warning: /Users/runner/work/pandas/pandas/pandas/_libs/tslibs/util.pxd:188:38: noexcept clause is ignored for function returning Python object
warning: /Users/runner/work/pandas/pandas/pandas/_libs/tslibs/util.pxd:193:40: noexcept clause is ignored for function returning Python object | https://api.github.com/repos/pandas-dev/pandas/pulls/57919 | 2024-03-19T22:19:43Z | 2024-03-20T00:22:29Z | 2024-03-20T00:22:29Z | 2024-03-20T00:22:35Z |
STYLE: Detect unnecessary pylint ignore | diff --git a/.pre-commit-config.yaml b/.pre-commit-config.yaml
index 190ea32203807..41f1c4c6892a3 100644
--- a/.pre-commit-config.yaml
+++ b/.pre-commit-config.yaml
@@ -78,7 +78,7 @@ repos:
hooks:
- id: pylint
stages: [manual]
- args: [--load-plugins=pylint.extensions.redefined_loop_name]
+ args: [--load-plugins=pylint.extensions.redefined_loop_name, --fail-on=I0021]
- id: pylint
alias: redefined-outer-name
name: Redefining name from outer scope
diff --git a/pandas/core/groupby/indexing.py b/pandas/core/groupby/indexing.py
index 75c0a062b57d0..c658f625d5ea9 100644
--- a/pandas/core/groupby/indexing.py
+++ b/pandas/core/groupby/indexing.py
@@ -114,7 +114,6 @@ def _positional_selector(self) -> GroupByPositionalSelector:
4 b 5
"""
if TYPE_CHECKING:
- # pylint: disable-next=used-before-assignment
groupby_self = cast(groupby.GroupBy, self)
else:
groupby_self = self
diff --git a/pandas/core/tools/datetimes.py b/pandas/core/tools/datetimes.py
index b8b1d39d4eb20..2aeb1aff07a54 100644
--- a/pandas/core/tools/datetimes.py
+++ b/pandas/core/tools/datetimes.py
@@ -993,7 +993,6 @@ def to_datetime(
errors=errors,
exact=exact,
)
- # pylint: disable-next=used-before-assignment
result: Timestamp | NaTType | Series | Index
if isinstance(arg, Timestamp):
diff --git a/pandas/tests/io/formats/test_to_latex.py b/pandas/tests/io/formats/test_to_latex.py
index b9d5f04cb203b..1de53993fe646 100644
--- a/pandas/tests/io/formats/test_to_latex.py
+++ b/pandas/tests/io/formats/test_to_latex.py
@@ -1311,7 +1311,6 @@ def test_to_latex_multiindex_names(self, name0, name1, axes):
)
col_names = [n if (bool(n) and 1 in axes) else "" for n in names]
observed = df.to_latex(multirow=False)
- # pylint: disable-next=consider-using-f-string
expected = r"""\begin{tabular}{llrrrr}
\toprule
& %s & \multicolumn{2}{r}{1} & \multicolumn{2}{r}{2} \\
diff --git a/pandas/tests/series/test_iteration.py b/pandas/tests/series/test_iteration.py
index edc82455234bb..1e0fa7fae107e 100644
--- a/pandas/tests/series/test_iteration.py
+++ b/pandas/tests/series/test_iteration.py
@@ -4,12 +4,10 @@ def test_keys(self, datetime_series):
def test_iter_datetimes(self, datetime_series):
for i, val in enumerate(datetime_series):
- # pylint: disable-next=unnecessary-list-index-lookup
assert val == datetime_series.iloc[i]
def test_iter_strings(self, string_series):
for i, val in enumerate(string_series):
- # pylint: disable-next=unnecessary-list-index-lookup
assert val == string_series.iloc[i]
def test_iteritems_datetimes(self, datetime_series):
| Enable `pylint`'s `I0021`/`useless-suppression` ([doc](https://pylint.pycqa.org/en/latest/user_guide/messages/information/useless-suppression.html))
This should make it easier to recognize how many `pylint` rules `ruff` is still missing. | https://api.github.com/repos/pandas-dev/pandas/pulls/57918 | 2024-03-19T21:52:17Z | 2024-03-19T22:20:15Z | 2024-03-19T22:20:15Z | 2024-03-19T22:27:28Z |
DOC: Update docs with the use of meson instead of setup.py | diff --git a/doc/source/development/maintaining.rst b/doc/source/development/maintaining.rst
index 5d833dca50732..f6ff95aa72c6c 100644
--- a/doc/source/development/maintaining.rst
+++ b/doc/source/development/maintaining.rst
@@ -151,7 +151,7 @@ and then run::
git bisect start
git bisect good v1.4.0
git bisect bad v1.5.0
- git bisect run bash -c "python setup.py build_ext -j 4; python t.py"
+ git bisect run bash -c "python -m pip install -ve . --no-build-isolation --config-settings editable-verbose=true; python t.py"
This finds the first commit that changed the behavior. The C extensions have to be
rebuilt at every step, so the search can take a while.
@@ -159,7 +159,7 @@ rebuilt at every step, so the search can take a while.
Exit bisect and rebuild the current version::
git bisect reset
- python setup.py build_ext -j 4
+ python -m pip install -ve . --no-build-isolation --config-settings editable-verbose=true
Report your findings under the corresponding issue and ping the commit author to get
their input.
diff --git a/pandas/__init__.py b/pandas/__init__.py
index f7ae91dd847f7..3ee6f6abf97bf 100644
--- a/pandas/__init__.py
+++ b/pandas/__init__.py
@@ -28,7 +28,8 @@
raise ImportError(
f"C extension: {_module} not built. If you want to import "
"pandas from the source directory, you may need to run "
- "'python setup.py build_ext' to build the C extensions first."
+ "'python -m pip install -ve . --no-build-isolation --config-settings "
+ "editable-verbose=true' to build the C extensions first."
) from _err
from pandas._config import (
| - [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
Not sure if this is still needed: https://github.com/pandas-dev/pandas/blob/main/gitpod/gitpod.Dockerfile#L38C5-L38C45 | https://api.github.com/repos/pandas-dev/pandas/pulls/57917 | 2024-03-19T19:22:14Z | 2024-03-22T14:57:06Z | 2024-03-22T14:57:06Z | 2024-03-22T15:11:13Z |
DOC: Getting started tutorials css adjustments | diff --git a/doc/source/_static/css/getting_started.css b/doc/source/_static/css/getting_started.css
index 0d53bbde94ae3..b02311eb66080 100644
--- a/doc/source/_static/css/getting_started.css
+++ b/doc/source/_static/css/getting_started.css
@@ -248,6 +248,7 @@ ul.task-bullet > li > p:first-child {
}
.tutorial-card .card-header {
+ --bs-card-cap-color: var(--pst-color-text-base);
cursor: pointer;
background-color: var(--pst-color-surface);
border: 1px solid var(--pst-color-border)
@@ -269,7 +270,7 @@ ul.task-bullet > li > p:first-child {
.tutorial-card .gs-badge-link a {
- color: var(--pst-color-text-base);
+ color: var(--pst-color-primary-text);
text-decoration: none;
}
| - [x] closes #57912
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
I implemented what I suggested in the issue.
| https://api.github.com/repos/pandas-dev/pandas/pulls/57916 | 2024-03-19T18:54:56Z | 2024-03-20T16:39:36Z | 2024-03-20T16:39:36Z | 2024-03-21T08:09:40Z |
BUG: pretty print all Mappings, not just dicts | diff --git a/doc/source/whatsnew/v3.0.0.rst b/doc/source/whatsnew/v3.0.0.rst
index cb211b0b72dce..f3fcdcdb79ed6 100644
--- a/doc/source/whatsnew/v3.0.0.rst
+++ b/doc/source/whatsnew/v3.0.0.rst
@@ -357,6 +357,7 @@ MultiIndex
I/O
^^^
- Bug in :meth:`DataFrame.to_excel` when writing empty :class:`DataFrame` with :class:`MultiIndex` on both axes (:issue:`57696`)
+- Now all ``Mapping`` s are pretty printed correctly. Before only literal ``dict`` s were. (:issue:`57915`)
-
-
diff --git a/pandas/io/formats/printing.py b/pandas/io/formats/printing.py
index 214d1d7079fdb..0bd4f2935f4d0 100644
--- a/pandas/io/formats/printing.py
+++ b/pandas/io/formats/printing.py
@@ -187,8 +187,8 @@ def pprint_thing(
_nest_lvl : internal use only. pprint_thing() is mutually-recursive
with pprint_sequence, this argument is used to keep track of the
current nesting level, and limit it.
- escape_chars : list or dict, optional
- Characters to escape. If a dict is passed the values are the
+ escape_chars : list[str] or Mapping[str, str], optional
+ Characters to escape. If a Mapping is passed the values are the
replacements
default_escapes : bool, default False
Whether the input escape characters replaces or adds to the defaults
@@ -204,11 +204,11 @@ def as_escaped_string(
thing: Any, escape_chars: EscapeChars | None = escape_chars
) -> str:
translate = {"\t": r"\t", "\n": r"\n", "\r": r"\r"}
- if isinstance(escape_chars, dict):
+ if isinstance(escape_chars, Mapping):
if default_escapes:
translate.update(escape_chars)
else:
- translate = escape_chars
+ translate = escape_chars # type: ignore[assignment]
escape_chars = list(escape_chars.keys())
else:
escape_chars = escape_chars or ()
@@ -220,7 +220,7 @@ def as_escaped_string(
if hasattr(thing, "__next__"):
return str(thing)
- elif isinstance(thing, dict) and _nest_lvl < get_option(
+ elif isinstance(thing, Mapping) and _nest_lvl < get_option(
"display.pprint_nest_depth"
):
result = _pprint_dict(
diff --git a/pandas/tests/io/formats/test_printing.py b/pandas/tests/io/formats/test_printing.py
index acf2bc72c687d..1009dfec53218 100644
--- a/pandas/tests/io/formats/test_printing.py
+++ b/pandas/tests/io/formats/test_printing.py
@@ -1,5 +1,6 @@
# Note! This file is aimed specifically at pandas.io.formats.printing utility
# functions, not the general printing of pandas objects.
+from collections.abc import Mapping
import string
import pandas._config.config as cf
@@ -16,6 +17,17 @@ def test_adjoin():
assert adjoined == expected
+class MyMapping(Mapping):
+ def __getitem__(self, key):
+ return 4
+
+ def __iter__(self):
+ return iter(["a", "b"])
+
+ def __len__(self):
+ return 2
+
+
class TestPPrintThing:
def test_repr_binary_type(self):
letters = string.ascii_letters
@@ -42,6 +54,12 @@ def test_repr_obeys_max_seq_limit(self):
def test_repr_set(self):
assert printing.pprint_thing({1}) == "{1}"
+ def test_repr_dict(self):
+ assert printing.pprint_thing({"a": 4, "b": 4}) == "{'a': 4, 'b': 4}"
+
+ def test_repr_mapping(self):
+ assert printing.pprint_thing(MyMapping()) == "{'a': 4, 'b': 4}"
+
class TestFormatBase:
def test_adjoin(self):
| This was discovered in https://github.com/ibis-project/ibis/issues/8687
Due to a strict `isinstance(x, dict)` check, custom mapping types would instead get treated as mere iterables, so only the keys would get printed as a tuple:
```python
import pandas as pd
from ibis.common.collections import frozendict
pd.Series([frozendict({"a": 5, "b": 6})])
# 0 (a, b)
# dtype: object
```
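The fix hinges on the difference between the two isinstance checks. A minimal hand-rolled `Mapping` (a hypothetical stand-in for ibis's `frozendict`; ibis itself is not needed to reproduce the distinction) shows why the strict check missed custom mapping types:

```python
from collections.abc import Mapping

class FrozenDict(Mapping):
    """Minimal read-only mapping; a stand-in for ibis's frozendict."""
    def __init__(self, data):
        self._data = dict(data)
    def __getitem__(self, key):
        return self._data[key]
    def __iter__(self):
        return iter(self._data)
    def __len__(self):
        return len(self._data)

fd = FrozenDict({"a": 5, "b": 6})
# The strict check that caused the bug: a custom Mapping is not a dict,
# so pprint_thing fell through to the iterable branch (keys only)...
print(isinstance(fd, dict))     # False
# ...while the relaxed check matches any Mapping, so the value is
# rendered as key/value pairs like a literal dict.
print(isinstance(fd, Mapping))  # True
```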
- [x] NA: closes #xxxx
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [x] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [x] NA Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/57915 | 2024-03-19T18:53:53Z | 2024-03-20T00:23:31Z | 2024-03-20T00:23:31Z | 2024-03-20T00:23:37Z |